[ { "question":"How to make IPython notebook matplotlib plot inline I am trying to use IPython notebook on MacOS X with Python 2.7.2 and IPython 1.1.0. I cannot get matplotlib graphics to show up inline. ``` import matplotlib import numpy as np import matplotlib.pyplot as plt %matplotlib inline ``` I have also tried %pylab inline and the ipython command line arguments --pylab=inline but this makes no difference. ``` x = np.linspace(0, 3*np.pi, 500) plt.plot(x, np.sin(x**2)) plt.title('A simple chirp') plt.show() ``` Instead of inline graphics, I get this: ``` ``` And matplotlib.get_backend() shows that I have the 'module:\/\/IPython.kernel.zmq.pylab.backend_inline' backend.", "response":"I used %matplotlib inline in the first cell of the notebook and it works. I think you should try: ``` %matplotlib inline import matplotlib import numpy as np import matplotlib.pyplot as plt ``` You can also always start all your IPython kernels in inline mode by default by setting the following config options in your config files: ``` c.IPKernelApp.matplotlib= Default: None Choices: ['auto', 'gtk', 'gtk3', 'inline', 'nbagg', 'notebook', 'osx', 'qt', 'qt4', 'qt5', 'tk', 'wx'] Configure matplotlib for interactive use with the default matplotlib backend. ```", "best_answers_score":0.8, "library_name":"matplotlib", "question_url":"https:\/\/stackoverflow.com\/questions\/19410042\/how-to-make-ipython-notebook-matplotlib-plot-inline", "best_answers_votes":1238, "question_length":665, "response_length":561 }, { "question":"How do I set the figure title and axes labels font size? I am creating a figure in Matplotlib like this: ```py from matplotlib import pyplot as plt fig = plt.figure() plt.plot(data) fig.suptitle('test title') plt.xlabel('xlabel') plt.ylabel('ylabel') fig.savefig('test.jpg') ``` I want to specify font sizes for the figure title and the axis labels. I need all three to be different font sizes, so setting a global font size (mpl.rcParams['font.size']=x) is not what I want. 
How do I set font sizes for the figure title and the axis labels individually?", "response":"Functions dealing with text like label, title, etc. accept the same parameters as matplotlib.text.Text. For the font size you can use size\/fontsize: ``` from matplotlib import pyplot as plt fig = plt.figure() plt.plot(data) fig.suptitle('test title', fontsize=20) plt.xlabel('xlabel', fontsize=18) plt.ylabel('ylabel', fontsize=16) fig.savefig('test.jpg') ``` For globally setting title and label sizes, mpl.rcParams contains axes.titlesize and axes.labelsize. (From the page): ``` axes.titlesize : large # fontsize of the axes title axes.labelsize : medium # fontsize of the x and y labels ``` (As far as I can see, there is no way to set x and y label sizes separately.) And I see that axes.titlesize does not affect suptitle; I guess you need to set that manually.", "best_answers_score":0.8, "library_name":"matplotlib", "question_url":"https:\/\/stackoverflow.com\/questions\/12444716\/how-do-i-set-the-figure-title-and-axes-labels-font-size", "best_answers_votes":1262, "question_length":553, "response_length":764 }, { "question":"Changing the tick frequency on the x or y axis I am trying to fix how python plots my data. Say: ```py x = [0, 5, 9, 10, 15] y = [0, 1, 2, 3, 4] matplotlib.pyplot.plot(x, y) matplotlib.pyplot.show() ``` The x axis' ticks are plotted in intervals of 5. Is there a way to make it show intervals of 1?", "response":"You could explicitly set where you want tick marks with plt.xticks: ``` plt.xticks(np.arange(min(x), max(x)+1, 1.0)) ``` For example, ``` import numpy as np import matplotlib.pyplot as plt x = [0,5,9,10,15] y = [0,1,2,3,4] plt.plot(x,y) plt.xticks(np.arange(min(x), max(x)+1, 1.0)) plt.show() ``` (np.arange was used rather than Python's range function just in case min(x) and max(x) are floats instead of ints.) The plt.plot (or ax.plot) function will automatically set default x and y limits.
If you wish to keep those limits, and just change the stepsize of the tick marks, then you could use ax.get_xlim() to discover what limits Matplotlib has already set. ``` start, end = ax.get_xlim() ax.xaxis.set_ticks(np.arange(start, end, stepsize)) ``` The default tick formatter should do a decent job rounding the tick values to a sensible number of significant digits. However, if you wish to have more control over the format, you can define your own formatter. For example, ``` ax.xaxis.set_major_formatter(ticker.FormatStrFormatter('%0.1f')) ``` Here's a runnable example: ``` import numpy as np import matplotlib.pyplot as plt import matplotlib.ticker as ticker x = [0,5,9,10,15] y = [0,1,2,3,4] fig, ax = plt.subplots() ax.plot(x,y) start, end = ax.get_xlim() ax.xaxis.set_ticks(np.arange(start, end, 0.712123)) ax.xaxis.set_major_formatter(ticker.FormatStrFormatter('%0.1f')) plt.show() ```", "best_answers_score":0.8, "library_name":"matplotlib", "question_url":"https:\/\/stackoverflow.com\/questions\/12608788\/changing-the-tick-frequency-on-the-x-or-y-axis", "best_answers_votes":937, "question_length":298, "response_length":1398 }, { "question":"When to use cla(), clf() or close() for clearing a plot Matplotlib offers these functions: ```py cla() # Clear axis clf() # Clear figure close() # Close a figure window ``` When should I use each function and what exactly does it do?", "response":"They all do different things, since matplotlib uses a hierarchical order in which a figure window contains a figure which may consist of many axes. Additionally, there are functions from the pyplot interface and there are methods on the Figure class. I will discuss both cases below. pyplot interface pyplot is a module that collects a couple of functions that allow matplotlib to be used in a functional manner. I here assume that pyplot has been imported as import matplotlib.pyplot as plt. 
In this case, there are three different commands that remove stuff: See matplotlib.pyplot Functions: plt.cla() clears an axis, i.e. the currently active axis in the current figure. It leaves the other axes untouched. plt.clf() clears the entire current figure with all its axes, but leaves the window open, such that it may be reused for other plots. plt.close() closes a window, which will be the current window, if not specified otherwise. Which function suits you best thus depends on your use case. The close() function furthermore allows one to specify which window should be closed. The argument can either be a number or name given to a window when it was created using figure(number_or_name), or it can be a figure instance fig obtained, i.e., using fig = figure(). If no argument is given to close(), the currently active window will be closed. Furthermore, there is the syntax close('all'), which closes all figures. Methods of the Figure class Additionally, the Figure class provides methods for clearing figures. I'll assume in the following that fig is an instance of a Figure: fig.clf() clears the entire figure. This call is equivalent to plt.clf() only if fig is the current figure. fig.clear() is a synonym for fig.clf(). Note that even del fig will not close the associated figure window. As far as I know, the only way to close a figure window is using plt.close(fig) as described above.", "best_answers_score":0.8, "library_name":"matplotlib", "question_url":"https:\/\/stackoverflow.com\/questions\/8213522\/when-to-use-cla-clf-or-close-for-clearing-a-plot", "best_answers_votes":960, "question_length":233, "response_length":1898 }, { "question":"How do I change the figure size with subplots? How do I increase the figure size for this figure? 
This does nothing: ```py f.figsize(15, 15) ``` Example code from the link: ```py import matplotlib.pyplot as plt import numpy as np # Simple data to display in various forms x = np.linspace(0, 2 * np.pi, 400) y = np.sin(x ** 2) plt.close('all') # Just a figure and one subplot f, ax = plt.subplots() ax.plot(x, y) ax.set_title('Simple plot') # Two subplots, the axes array is 1-d f, axarr = plt.subplots(2, sharex=True) axarr[0].plot(x, y) axarr[0].set_title('Sharing X axis') axarr[1].scatter(x, y) # Two subplots, unpack the axes array immediately f, (ax1, ax2) = plt.subplots(1, 2, sharey=True) ax1.plot(x, y) ax1.set_title('Sharing Y axis') ax2.scatter(x, y) # Three subplots sharing both x\/y axes f, (ax1, ax2, ax3) = plt.subplots(3, sharex=True, sharey=True) ax1.plot(x, y) ax1.set_title('Sharing both axes') ax2.scatter(x, y) ax3.scatter(x, 2 * y ** 2 - 1, color='r') # Fine-tune figure; make subplots close to each other and hide x ticks for # all but bottom plot. f.subplots_adjust(hspace=0) plt.setp([a.get_xticklabels() for a in f.axes[:-1]], visible=False) # row and column sharing f, ((ax1, ax2), (ax3, ax4)) = plt.subplots(2, 2, sharex='col', sharey='row') ax1.plot(x, y) ax1.set_title('Sharing x per column, y per row') ax2.scatter(x, y) ax3.scatter(x, 2 * y ** 2 - 1, color='r') ax4.plot(x, 2 * y ** 2 - 1, color='r') # Four axes, returned as a 2-d array f, axarr = plt.subplots(2, 2) axarr[0, 0].plot(x, y) axarr[0, 0].set_title('Axis [0,0]') axarr[0, 1].scatter(x, y) axarr[0, 1].set_title('Axis [0,1]') axarr[1, 0].plot(x, y ** 2) axarr[1, 0].set_title('Axis [1,0]') axarr[1, 1].scatter(x, y ** 2) axarr[1, 1].set_title('Axis [1,1]') # Fine-tune figure; hide x ticks for top plots and y ticks for right plots plt.setp([a.get_xticklabels() for a in axarr[0, :]], visible=False) plt.setp([a.get_yticklabels() for a in axarr[:, 1]], visible=False) # Four polar axes f, axarr = plt.subplots(2, 2, subplot_kw=dict(projection='polar')) axarr[0, 0].plot(x, y) axarr[0, 
0].set_title('Axis [0,0]') axarr[0, 1].scatter(x, y) axarr[0, 1].set_title('Axis [0,1]') axarr[1, 0].plot(x, y ** 2) axarr[1, 0].set_title('Axis [1,0]') axarr[1, 1].scatter(x, y ** 2) axarr[1, 1].set_title('Axis [1,1]') # Fine-tune figure; make subplots farther from each other. f.subplots_adjust(hspace=0.3) plt.show() ```", "response":"Use .set_figwidth and .set_figheight on the matplotlib.figure.Figure object returned by plt.subplots(), or set both with f.set_size_inches(w, h). ``` f.set_figheight(15) f.set_figwidth(15) ``` Note: Unlike set_size_inches(), where the measurement unit is explicitly mentioned in the function's name, this is not the case for set_figwidth() and set_figheight(), which also use inches. This information is provided by the documentation of the function. Alternatively, when using .subplots() to create a new figure, specify figsize=: ``` f, axs = plt.subplots(2, 2, figsize=(15, 15)) ``` .subplots accepts **fig_kw, which are passed to pyplot.figure, and is where figsize can be found. Setting the figure's size may trigger the ValueError exception: ``` Image size of 240000x180000 pixels is too large. It must be less than 2^16 in each direction ``` This is a common problem for using the set_fig*() functions due to the assumptions that they work with pixels and not inches (obviously 240000*180000 inches is too much).", "best_answers_score":0.8, "library_name":"matplotlib", "question_url":"https:\/\/stackoverflow.com\/questions\/14770735\/how-do-i-change-the-figure-size-with-subplots", "best_answers_votes":1222, "question_length":2402, "response_length":1018 }, { "question":"What does the argument mean in fig.add_subplot(111)? Sometimes I come across code such as this: ``` import matplotlib.pyplot as plt x = [1, 2, 3, 4, 5] y = [1, 4, 9, 16, 25] fig = plt.figure() fig.add_subplot(111) plt.scatter(x, y) plt.show() ``` Which produces: I've been reading the documentation like crazy but I can't find an explanation for the 111. sometimes I see a 212. 
What does the argument of fig.add_subplot() mean?", "response":"I think this would be best explained by the following picture: To initialize the above, one would type: ``` import matplotlib.pyplot as plt fig = plt.figure() fig.add_subplot(221) #top left fig.add_subplot(222) #top right fig.add_subplot(223) #bottom left fig.add_subplot(224) #bottom right plt.show() ```", "best_answers_score":0.8, "library_name":"matplotlib", "question_url":"https:\/\/stackoverflow.com\/questions\/3584805\/what-does-the-argument-mean-in-fig-add-subplot111", "best_answers_votes":638, "question_length":427, "response_length":305 }, { "question":"Hiding axis text in matplotlib plots I'm trying to plot a figure without tickmarks or numbers on either of the axes (I use axes in the traditional sense, not the matplotlib nomenclature!). An issue I have come across is where matplotlib adjusts the x(y)ticklabels by subtracting a value N, then adds N at the end of the axis. This may be vague, but the following simplified example highlights the issue, with '6.18' being the offending value of N: ``` import matplotlib.pyplot as plt import random prefix = 6.18 rx = [prefix+(0.001*random.random()) for i in arange(100)] ry = [prefix+(0.001*random.random()) for i in arange(100)] plt.plot(rx,ry,'ko') frame1 = plt.gca() for xlabel_i in frame1.axes.get_xticklabels(): xlabel_i.set_visible(False) xlabel_i.set_fontsize(0.0) for xlabel_i in frame1.axes.get_yticklabels(): xlabel_i.set_fontsize(0.0) xlabel_i.set_visible(False) for tick in frame1.axes.get_xticklines(): tick.set_visible(False) for tick in frame1.axes.get_yticklines(): tick.set_visible(False) plt.show() ``` The three things I would like to know are: How to turn off this behaviour in the first place (although in most cases it is useful, it is not always!) I have looked through matplotlib.axis.XAxis and cannot find anything appropriate How can I make N disappear (i.e. X.set_visible(False)) Is there a better way to do the above anyway? 
My final plot would be 4x4 subplots in a figure, if that is relevant.", "response":"Instead of hiding each element, you can hide the whole axis: ``` frame1.axes.get_xaxis().set_visible(False) frame1.axes.get_yaxis().set_visible(False) ``` Or, you can set the ticks to an empty list: ``` frame1.axes.get_xaxis().set_ticks([]) frame1.axes.get_yaxis().set_ticks([]) ``` In this second option, you can still use plt.xlabel() and plt.ylabel() to add labels to the axes.", "best_answers_score":0.8, "library_name":"matplotlib", "question_url":"https:\/\/stackoverflow.com\/questions\/2176424\/hiding-axis-text-in-matplotlib-plots", "best_answers_votes":687, "question_length":1422, "response_length":380 }, { "question":"How to change the figure size of a seaborn axes or figure level plot How do I change the size of my image so it's suitable for printing? For example, I'd like to use an A4 paper, whose dimensions are 11.7 inches by 8.27 inches in landscape orientation.", "response":"You can also set figure size by passing dictionary to rc parameter with key 'figure.figsize' in seaborn set_theme method (which replaces the set method, deprecated in v0.11.0 (September 2020)) ``` import seaborn as sns sns.set_theme(rc={'figure.figsize':(11.7,8.27)}) ``` Other alternative may be to use figure.figsize of rcParams to set figure size as below: ``` from matplotlib import rcParams # figure size in inches rcParams['figure.figsize'] = 11.7,8.27 ``` More details can be found in matplotlib documentation", "best_answers_score":0.8, "library_name":"matplotlib", "question_url":"https:\/\/stackoverflow.com\/questions\/31594549\/how-to-change-the-figure-size-of-a-seaborn-axes-or-figure-level-plot", "best_answers_votes":568, "question_length":252, "response_length":516 }, { "question":"How to change tick label font size In a matplotlib figure, how can I make the font size for the tick labels using ax1.set_xticklabels() smaller? 
Further, how can one rotate it from horizontal to vertical?", "response":"There is a simpler way actually. I just found: ``` import matplotlib.pyplot as plt # We prepare the plot fig, ax = plt.subplots() # We change the fontsize of minor ticks label ax.tick_params(axis='both', which='major', labelsize=10) ax.tick_params(axis='both', which='minor', labelsize=8) ``` This only answers to the size of label part of your question though.", "best_answers_score":0.8, "library_name":"matplotlib", "question_url":"https:\/\/stackoverflow.com\/questions\/6390393\/how-to-change-tick-label-font-size", "best_answers_votes":844, "question_length":204, "response_length":361 }, { "question":"How to remove xticks from a plot I have a semilogx plot and I would like to remove the xticks. I tried: ``` plt.gca().set_xticks([]) plt.xticks([]) ax.set_xticks([]) ``` The grid disappears (ok), but small ticks (at the place of the main ticks) remain. How to remove them?", "response":"The plt.tick_params method is very useful for stuff like this. This code turns off major and minor ticks and removes the labels from the x-axis. Note that there is also ax.tick_params for matplotlib.axes.Axes objects. ``` from matplotlib import pyplot as plt plt.plot(range(10)) plt.tick_params( axis='x', # changes apply to the x-axis which='both', # both major and minor ticks are affected bottom=False, # ticks along the bottom edge are off top=False, # ticks along the top edge are off labelbottom=False) # labels along the bottom edge are off plt.show() plt.savefig('plot') plt.clf() ```", "best_answers_score":0.8, "library_name":"matplotlib", "question_url":"https:\/\/stackoverflow.com\/questions\/12998430\/how-to-remove-xticks-from-a-plot", "best_answers_votes":751, "question_length":272, "response_length":592 }, { "question":"Installation Issue with matplotlib Python [duplicate] This question already has answers here: python matplotlib framework under macosx? (11 answers) Closed 11 years ago. 
I have an issue after installing the matplotlib package: I am unable to import matplotlib.pyplot as plt. Any suggestion will be greatly appreciated. ``` >>> import matplotlib.pyplot as plt Traceback (most recent call last): File \"\", line 1, in File \"\/\/anaconda\/lib\/python2.7\/site-packages\/matplotlib-1.3.1-py2.7-macosx-10.5-x86_64.egg\/matplotlib\/pyplot.py\", line 98, in _backend_mod, new_figure_manager, draw_if_interactive, _show = pylab_setup() File \"\/\/anaconda\/lib\/python2.7\/site-packages\/matplotlib-1.3.1-py2.7-macosx-10.5-x86_64.egg\/matplotlib\/backends\/__init__.py\", line 28, in pylab_setup globals(),locals(),[backend_name],0) File \"\/\/anaconda\/lib\/python2.7\/site-packages\/matplotlib-1.3.1-py2.7-macosx-10.5-x86_64.egg\/matplotlib\/backends\/backend_macosx.py\", line 21, in from matplotlib.backends import _macosx **RuntimeError**: Python is not installed as a framework. The Mac OS X backend will not be able to function correctly if Python is not installed as a framework. See the Python documentation for more information on installing Python as a framework on Mac OS X. Please either reinstall Python as a framework, or try one of the other backends. ```", "response":"Problem Cause On Mac OS, matplotlib renders images through the Cocoa API by default (see what-is-a-backend); backends such as Qt4Agg and GTKAgg exist but are not the default. The macosx backend setup therefore differs from other operating systems such as Windows or Linux. Solution I assume you have installed matplotlib with pip; there is a directory in your home directory called ~\/.matplotlib.
Create a file ~\/.matplotlib\/matplotlibrc there and add the following code: backend: TkAgg From this link you can try different diagrams.", "best_answers_score":0.8, "library_name":"matplotlib", "question_url":"https:\/\/stackoverflow.com\/questions\/21784641\/installation-issue-with-matplotlib-python", "best_answers_votes":1346, "question_length":1323, "response_length":525 }, { "question":"\"UserWarning: Matplotlib is currently using agg, which is a non-GUI backend, so cannot show the figure.\" when plotting figure with pyplot on Pycharm I am trying to plot a simple graph using pyplot, e.g.: ``` import matplotlib.pyplot as plt plt.plot([1,2,3],[5,7,4]) plt.show() ``` but the figure does not appear and I get the following message: ``` UserWarning: Matplotlib is currently using agg, which is a non-GUI backend, so cannot show the figure. ``` I found and tried some advice to re-configure the \"backend\" mentioned in that warning, like so: ``` import matplotlib matplotlib.use('TkAgg') import matplotlib.pyplot as plt ``` but this gives me an error message: ``` ModuleNotFoundError: No module named 'tkinter' ``` I assumed that I had to install this module separately, but pip install tkinter does not work: ``` Collecting tkinter Could not find a version that satisfies the requirement tkinter (from versions: ) No matching distribution found for tkinter ``` How can I make Matplotlib display the graph? See also: Why does tkinter (or turtle) seem to be missing or broken? Shouldn't it be part of the standard library? . This question is not a duplicate, because the answers discuss other backends besides the Tkinter one. Also see _tkinter.TclError: no display name and no $DISPLAY environment variable for issues with attempts to use Matplotlib remotely.", "response":"Solution 1: is to install the GUI backend tk I found a solution to my problem (thanks to the help of ImportanceOfBeingErnest). 
All I had to do was to install tkinter through the Linux bash terminal using the following command: ``` sudo apt-get install python3-tk ``` instead of installing it with pip or directly in the virtual environment in Pycharm. Solution 2: install any of the matplotlib-supported GUI backends Solution 1 works fine because you get a GUI backend, in this case TkAgg. However, you can also fix the issue by installing any of the matplotlib GUI backends, like Qt5Agg, GTKAgg, Qt4Agg, etc.; for example, pip install pyqt5 will also fix the issue. NOTE: usually this error appears when you pip install matplotlib, you are trying to display a plot in a GUI window, and you do not have a Python module for GUI display. The authors of matplotlib made the PyPI package not depend on any GUI backend because some people need matplotlib without any GUI backend.", "best_answers_score":0.8, "library_name":"matplotlib", "question_url":"https:\/\/stackoverflow.com\/questions\/56656777\/userwarning-matplotlib-is-currently-using-agg-which-is-a-non-gui-backend-so", "best_answers_votes":613, "question_length":1369, "response_length":980 }, { "question":"How to remove axis, legends, and white padding I would like to apply a colormap to an image and write the resulting image, without using axes, labels, titles, or anything automatically added by matplotlib. Here is what I did: ``` def make_image(inputname,outputname): data = mpimg.imread(inputname)[:,:,0] fig = plt.imshow(data) fig.set_cmap('hot') fig.axes.get_xaxis().set_visible(False) fig.axes.get_yaxis().set_visible(False) plt.savefig(outputname) ``` It successfully removes the axis of the figure, but the saved figure presents white padding and a frame around the actual image. How can I remove them (at least the white padding)?", "response":"The axis('off') method resolves one of the problems more succinctly than separately changing each axis and border. It still leaves the white space around the border, however.
Adding bbox_inches='tight' to the savefig command almost gets you there; you can see in the example below that the white space left is much smaller, but still present. Newer versions of matplotlib may require bbox_inches=0 instead of the string 'tight' (via @episodeyang and @kadrach) ``` from numpy import random import matplotlib.pyplot as plt data = random.random((5,5)) img = plt.imshow(data, interpolation='nearest') img.set_cmap('hot') plt.axis('off') plt.savefig(\"test.png\", bbox_inches='tight') ```", "best_answers_score":0.8, "library_name":"matplotlib", "question_url":"https:\/\/stackoverflow.com\/questions\/9295026\/how-to-remove-axis-legends-and-white-padding", "best_answers_votes":660, "question_length":640, "response_length":680 }, { "question":"How to adjust padding with cutoff or overlapping labels Updated MRE with subplots I'm not sure of the usefulness of the original question and MRE. The margin padding seems to be properly adjusted for large x and y labels. The issue is reproducible with subplots. Using matplotlib 3.4.2 ```py fig, axes = plt.subplots(ncols=2, nrows=2, figsize=(8, 6)) axes = axes.flatten() for ax in axes: ax.set_ylabel(r'$\\ln\\left(\\frac{x_a-x_b}{x_a-x_c}\\right)$') ax.set_xlabel(r'$\\ln\\left(\\frac{x_a-x_d}{x_a-x_e}\\right)$') plt.show() ``` Original I am plotting a dataset using matplotlib where I have an xlabel that is quite \"tall\" (it's a formula rendered in TeX that contains a fraction and is therefore has the height equivalent of a couple of lines of text). In any case, the bottom of the formula is always cut off when I draw the figures. Changing figure size doesn't seem to help this, and I haven't been able to figure out how to shift the x-axis \"up\" to make room for the xlabel. Something like that would be a reasonable temporary solution, but what would be nice would be to have a way to make matplotlib recognize automatically that the label is cut off and resize accordingly. 
Here's an example of what I mean: ``` import matplotlib.pyplot as plt plt.figure() plt.ylabel(r'$\\ln\\left(\\frac{x_a-x_b}{x_a-x_c}\\right)$') plt.xlabel(r'$\\ln\\left(\\frac{x_a-x_d}{x_a-x_e}\\right)$', fontsize=50) plt.title('Example with matplotlib 3.4.2\\nMRE no longer an issue') plt.show() ``` The entire ylabel is visible, however, the xlabel is cut off at the bottom. In the case this is a machine-specific problem, I am running this on OSX 10.6.8 with matplotlib 1.0.0", "response":"Use: ``` import matplotlib.pyplot as plt plt.gcf().subplots_adjust(bottom=0.15) # alternate option without .gcf plt.subplots_adjust(bottom=0.15) ``` to make room for the label, where plt.gcf() means get the current figure. plt.gca(), which gets the current Axes, can also be used. Edit: Since I gave the answer, matplotlib has added the plt.tight_layout() function. See matplotlib Tutorials: Tight Layout Guide So I suggest using it: ``` fig, axes = plt.subplots(ncols=2, nrows=2, figsize=(8, 6)) axes = axes.flatten() for ax in axes: ax.set_ylabel(r'$\\ln\\left(\\frac{x_a-x_b}{x_a-x_c}\\right)$') ax.set_xlabel(r'$\\ln\\left(\\frac{x_a-x_d}{x_a-x_e}\\right)$') plt.tight_layout() plt.show() ```", "best_answers_score":0.8, "library_name":"matplotlib", "question_url":"https:\/\/stackoverflow.com\/questions\/6774086\/how-to-adjust-padding-with-cutoff-or-overlapping-labels", "best_answers_votes":686, "question_length":1645, "response_length":688 }, { "question":"Matplotlib different size subplots I need to add two subplots to a figure. One subplot needs to be about three times as wide as the second (same height). I accomplished this using GridSpec and the colspan argument but I would like to do this using figure so I can save to PDF. 
I can adjust the first figure using the figsize argument in the constructor, but how do I change the size of the second plot?", "response":"As of matplotlib 3.6.0, width_ratios and height_ratios can now be passed directly as keyword arguments to plt.subplots and subplot_mosaic, as per What's new in Matplotlib 3.6.0 (Sep 15, 2022). f, (a0, a1) = plt.subplots(1, 2, width_ratios=[3, 1]) f, (a0, a1, a2) = plt.subplots(3, 1, height_ratios=[1, 1, 3]) Another way is to use the subplots function and pass the width ratio with gridspec_kw matplotlib Tutorial: Customizing Figure Layouts Using GridSpec and Other Functions matplotlib.gridspec.GridSpec has available gridspect_kw options ```py import numpy as np import matplotlib.pyplot as plt # generate some data x = np.arange(0, 10, 0.2) y = np.sin(x) # plot it f, (a0, a1) = plt.subplots(1, 2, gridspec_kw={'width_ratios': [3, 1]}) a0.plot(x, y) a1.plot(y, x) f.tight_layout() f.savefig('grid_figure.pdf') ``` Because the question is canonical, here is an example with vertical subplots. ```py # plot it f, (a0, a1, a2) = plt.subplots(3, 1, gridspec_kw={'height_ratios': [1, 1, 3]}) a0.plot(x, y) a1.plot(x, y) a2.plot(x, y) f.tight_layout() ```", "best_answers_score":0.8, "library_name":"matplotlib", "question_url":"https:\/\/stackoverflow.com\/questions\/10388462\/matplotlib-different-size-subplots", "best_answers_votes":668, "question_length":402, "response_length":1054 }, { "question":"How to have one colorbar for all subplots I've spent entirely too long researching how to get two subplots to share the same y-axis with a single colorbar shared between the two in Matplotlib. What was happening was that when I called the colorbar() function in either subplot1 or subplot2, it would autoscale the plot such that the colorbar plus the plot would fit inside the 'subplot' bounding box, causing the two side-by-side plots to be two very different sizes. 
To get around this, I tried to create a third subplot which I then hacked to render no plot with just a colorbar present. The only problem is, now the heights and widths of the two plots are uneven, and I can't figure out how to make it look okay. Here is my code: ``` from __future__ import division import matplotlib.pyplot as plt import numpy as np from matplotlib import patches from matplotlib.ticker import NullFormatter # SIS Functions TE = 1 # Einstein radius g1 = lambda x,y: (TE\/2) * (y**2-x**2)\/((x**2+y**2)**(3\/2)) g2 = lambda x,y: -1*TE*x*y \/ ((x**2+y**2)**(3\/2)) kappa = lambda x,y: TE \/ (2*np.sqrt(x**2+y**2)) coords = np.linspace(-2,2,400) X,Y = np.meshgrid(coords,coords) g1out = g1(X,Y) g2out = g2(X,Y) kappaout = kappa(X,Y) for i in range(len(coords)): for j in range(len(coords)): if np.sqrt(coords[i]**2+coords[j]**2) <= TE: g1out[i][j]=0 g2out[i][j]=0 fig = plt.figure() fig.subplots_adjust(wspace=0,hspace=0) # subplot number 1 ax1 = fig.add_subplot(1,2,1,aspect='equal',xlim=[-2,2],ylim=[-2,2]) plt.title(r\"$\\gamma_{1}$\",fontsize=\"18\") plt.xlabel(r\"x ($\\theta_{E}$)\",fontsize=\"15\") plt.ylabel(r\"y ($\\theta_{E}$)\",rotation='horizontal',fontsize=\"15\") plt.xticks([-2.0,-1.5,-1.0,-0.5,0,0.5,1.0,1.5]) plt.xticks([-2.0,-1.5,-1.0,-0.5,0,0.5,1.0,1.5]) plt.imshow(g1out,extent=(-2,2,-2,2)) plt.axhline(y=0,linewidth=2,color='k',linestyle=\"--\") plt.axvline(x=0,linewidth=2,color='k',linestyle=\"--\") e1 = patches.Ellipse((0,0),2,2,color='white') ax1.add_patch(e1) # subplot number 2 ax2 = fig.add_subplot(1,2,2,sharey=ax1,xlim=[-2,2],ylim=[-2,2]) plt.title(r\"$\\gamma_{2}$\",fontsize=\"18\") plt.xlabel(r\"x ($\\theta_{E}$)\",fontsize=\"15\") ax2.yaxis.set_major_formatter( NullFormatter() ) plt.axhline(y=0,linewidth=2,color='k',linestyle=\"--\") plt.axvline(x=0,linewidth=2,color='k',linestyle=\"--\") plt.imshow(g2out,extent=(-2,2,-2,2)) e2 = patches.Ellipse((0,0),2,2,color='white') ax2.add_patch(e2) # subplot for colorbar ax3 = fig.add_subplot(1,1,1) 
ax3.axis('off') cbar = plt.colorbar(ax=ax2) plt.show() ```", "response":"Just place the colorbar in its own axis and use subplots_adjust to make room for it. As a quick example: ``` import numpy as np import matplotlib.pyplot as plt fig, axes = plt.subplots(nrows=2, ncols=2) for ax in axes.flat: im = ax.imshow(np.random.random((10,10)), vmin=0, vmax=1) fig.subplots_adjust(right=0.8) cbar_ax = fig.add_axes([0.85, 0.15, 0.05, 0.7]) fig.colorbar(im, cax=cbar_ax) plt.show() ``` Note that the color range will be set by the last image plotted (that gave rise to im) even if the range of values is set by vmin and vmax. If another plot has, for example, a higher max value, points with higher values than the max of im will show in uniform color.", "best_answers_score":0.8, "library_name":"matplotlib", "question_url":"https:\/\/stackoverflow.com\/questions\/13784201\/how-to-have-one-colorbar-for-all-subplots", "best_answers_votes":445, "question_length":2485, "response_length":672 }, { "question":"Savefig outputs blank image I am trying to save plots I make using matplotlib; however, the images are saving blank. Here is my code: ``` plt.subplot(121) plt.imshow(dataStack, cmap=mpl.cm.bone) plt.subplot(122) y = copy.deepcopy(tumorStack) y = np.ma.masked_where(y == 0, y) plt.imshow(dataStack, cmap=mpl.cm.bone) plt.imshow(y, cmap=mpl.cm.jet_r, interpolation='nearest') if T0 is not None: plt.subplot(123) plt.imshow(T0, cmap=mpl.cm.bone) #plt.subplot(124) #Autozoom #else: #plt.subplot(124) #Autozoom plt.show() plt.draw() plt.savefig('tessstttyyy.png', dpi=100) ``` And tessstttyyy.png is blank (also tried with .jpg)", "response":"First, what happens when T0 is not None? I would test that, then I would adjust the values I pass to plt.subplot(); maybe try values 131, 132, and 133, or values that depend whether or not T0 exists. Second, after plt.show() is called, a new figure is created. 
To deal with this, you can: (1) call plt.savefig('tessstttyyy.png', dpi=100) before you call plt.show(), or (2) save a reference to the figure before you show() by calling plt.gcf() for \"get current figure\"; you can then call savefig() on this Figure object at any time. For example: ``` fig1 = plt.gcf() plt.show() plt.draw() fig1.savefig('tessstttyyy.png', dpi=100) ``` In your code, 'tessstttyyy.png' is blank because it is saving the new figure, to which nothing has been plotted.", "best_answers_score":0.8, "library_name":"matplotlib", "question_url":"https:\/\/stackoverflow.com\/questions\/9012487\/savefig-outputs-blank-image", "best_answers_votes":518, "question_length":623, "response_length":716 }, { "question":"How do I make a single legend for many subplots? I am plotting the same type of information, but for different countries, with multiple subplots with Matplotlib. That is, I have nine plots on a 3x3 grid, all with the same four lines (of course, different values per line). However, I have not figured out how to put a single legend (since all nine subplots have the same lines) on the figure just once. How do I do that?", "response":"There is also a nice function get_legend_handles_labels() you can call on the last axis (if you iterate over them) that would collect everything you need from label= arguments: ``` handles, labels = ax.get_legend_handles_labels() fig.legend(handles, labels, loc='upper center') ``` If the pyplot interface is being used instead of the Axes interface, use: ``` handles, labels = plt.gca().get_legend_handles_labels() ``` To remove legends from subplots, see Remove the legend on a matplotlib figure.
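A minimal, runnable sketch of the handles\/labels pattern above (the grid size and label names are illustrative, and the non-interactive Agg backend is forced so it runs without a display):

```python
import matplotlib
matplotlib.use("Agg")  # no display needed
import matplotlib.pyplot as plt

# Nine subplots on a 3x3 grid, each drawing the same two labelled lines
fig, axes = plt.subplots(nrows=3, ncols=3)
for ax in axes.flat:
    ax.plot([0, 1], [0, 1], label="rising")
    ax.plot([0, 1], [1, 0], label="falling")

# Collect handles/labels once and attach a single figure-level legend
handles, labels = axes[0, 0].get_legend_handles_labels()
fig.legend(handles, labels, loc="upper center", ncol=2)
print(labels)  # -> ['rising', 'falling']
```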
To merge twinx legends, see Secondary axis with twinx(): how to add to legend.", "best_answers_score":0.8, "library_name":"matplotlib", "question_url":"https:\/\/stackoverflow.com\/questions\/9834452\/how-do-i-make-a-single-legend-for-many-subplots", "best_answers_votes":411, "question_length":419, "response_length":577 }, { "question":"Generating a PNG with matplotlib when DISPLAY is undefined I am trying to use networkx with Python. When I run this program it get this error. Is there anything missing? ``` #!\/usr\/bin\/env python import networkx as nx import matplotlib import matplotlib.pyplot import matplotlib.pyplot as plt G=nx.Graph() G.add_node(1) G.add_nodes_from([2,3,4,5,6,7,8,9,10]) #nx.draw_graphviz(G) #nx_write_dot(G, 'node.png') nx.draw(G) plt.savefig(\"\/var\/www\/node.png\") Traceback (most recent call last): File \"graph.py\", line 13, in nx.draw(G) File \"\/usr\/lib\/pymodules\/python2.5\/networkx\/drawing\/nx_pylab.py\", line 124, in draw cf=pylab.gcf() File \"\/usr\/lib\/pymodules\/python2.5\/matplotlib\/pyplot.py\", line 276, in gcf return figure() File \"\/usr\/lib\/pymodules\/python2.5\/matplotlib\/pyplot.py\", line 254, in figure **kwargs) File \"\/usr\/lib\/pymodules\/python2.5\/matplotlib\/backends\/backend_tkagg.py\", line 90, in new_figure_manager window = Tk.Tk() File \"\/usr\/lib\/python2.5\/lib-tk\/Tkinter.py\", line 1650, in __init__ self.tk = _tkinter.create(screenName, baseName, className, interactive, wantobjects, useTk, sync, use) _tkinter.TclError: no display name and no $DISPLAY environment variable ``` I get a different error now: ``` #!\/usr\/bin\/env python import networkx as nx import matplotlib import matplotlib.pyplot import matplotlib.pyplot as plt matplotlib.use('Agg') G=nx.Graph() G.add_node(1) G.add_nodes_from([2,3,4,5,6,7,8,9,10]) #nx.draw_graphviz(G) #nx_write_dot(G, 'node.png') nx.draw(G) plt.savefig(\"\/var\/www\/node.png\") ``` ``` \/usr\/lib\/pymodules\/python2.5\/matplotlib\/__init__.py:835: UserWarning: This 
call to matplotlib.use() has no effect because the the backend has already been chosen; matplotlib.use() must be called *before* pylab, matplotlib.pyplot, or matplotlib.backends is imported for the first time. if warn: warnings.warn(_use_error_msg) Traceback (most recent call last): File \"graph.py\", line 15, in nx.draw(G) File \"\/usr\/lib\/python2.5\/site-packages\/networkx-1.2.dev-py2.5.egg\/networkx\/drawing\/nx_pylab.py\", line 124, in draw cf=pylab.gcf() File \"\/usr\/lib\/pymodules\/python2.5\/matplotlib\/pyplot.py\", line 276, in gcf return figure() File \"\/usr\/lib\/pymodules\/python2.5\/matplotlib\/pyplot.py\", line 254, in figure **kwargs) File \"\/usr\/lib\/pymodules\/python2.5\/matplotlib\/backends\/backend_tkagg.py\", line 90, in new_figure_manager window = Tk.Tk() File \"\/usr\/lib\/python2.5\/lib-tk\/Tkinter.py\", line 1650, in __init__ self.tk = _tkinter.create(screenName, baseName, className, interactive, wantobjects, useTk, sync, use) _tkinter.TclError: no display name and no $DISPLAY environment variable ```", "response":"The main problem is that (on your system) matplotlib chooses an x-using backend by default. I just had the same problem on one of my servers. The solution for me was to add the following code in a place that gets read before any other pylab\/matplotlib\/pyplot import: ``` import matplotlib # Force matplotlib to not use any Xwindows backend. matplotlib.use('Agg') ``` The alternative is to set it in your .matplotlibrc", "best_answers_score":0.8, "library_name":"matplotlib", "question_url":"https:\/\/stackoverflow.com\/questions\/2801882\/generating-a-png-with-matplotlib-when-display-is-undefined", "best_answers_votes":535, "question_length":4015, "response_length":417 }, { "question":"Display image as grayscale I'm trying to display a grayscale image using matplotlib.pyplot.imshow(). My problem is that the grayscale image is displayed as a colormap. I need it to be grayscale because I want to draw on top of the image with color.
I read in the image and convert to grayscale using PIL's Image.open().convert(\"L\") ``` image = Image.open(file).convert(\"L\") ``` Then I convert the image to a matrix so that I can easily do some image processing using ``` matrix = scipy.misc.fromimage(image, 0) ``` However, when I do ``` figure() matplotlib.pyplot.imshow(matrix) show() ``` it displays the image using a colormap (i.e. it's not grayscale). What am I doing wrong here?", "response":"The following code will load an image from a file image.png and will display it as grayscale. ``` import numpy as np import matplotlib.pyplot as plt from PIL import Image fname = 'image.png' image = Image.open(fname).convert(\"L\") arr = np.asarray(image) plt.imshow(arr, cmap='gray', vmin=0, vmax=255) plt.show() ``` If you want to display the inverse grayscale, switch the cmap to cmap='gray_r'.", "best_answers_score":0.8, "library_name":"matplotlib", "question_url":"https:\/\/stackoverflow.com\/questions\/3823752\/display-image-as-grayscale", "best_answers_votes":520, "question_length":684, "response_length":395 }, { "question":"Modify tick label text I want to make some modifications to a few selected tick labels in a plot. For example, if I do: ``` label = axes.yaxis.get_major_ticks()[2].label label.set_fontsize(size) label.set_rotation('vertical') ``` the font size and the orientation of the tick label is changed. However, if try: ``` label.set_text('Foo') ``` the tick label is not modified. Also if I do: ``` print label.get_text() ``` nothing is printed. Here's some more strangeness. 
When I tried this: ```py import matplotlib.pyplot as plt import numpy as np axes = plt.figure().add_subplot(111) t = np.arange(0.0, 2.0, 0.01) s = np.sin(2*np.pi*t) axes.plot(t, s) for ticklabel in axes.get_xticklabels(): print(ticklabel.get_text()) ``` Only empty strings are printed, but the plot contains ticks labeled as '0.0', '0.5', '1.0', '1.5', and '2.0'.", "response":"Caveat: Unless the ticklabels are already set to a string (as is usually the case in e.g. a boxplot), this will not work with any version of matplotlib newer than 1.1.0. If you're working from the current github master, this won't work. I'm not sure what the problem is yet... It may be an unintended change, or it may not be... Normally, you'd do something along these lines: ``` import matplotlib.pyplot as plt fig, ax = plt.subplots() # We need to draw the canvas, otherwise the labels won't be positioned and # won't have values yet. fig.canvas.draw() labels = [item.get_text() for item in ax.get_xticklabels()] labels[1] = 'Testing' ax.set_xticklabels(labels) plt.show() ``` To understand the reason why you need to jump through so many hoops, you need to understand a bit more about how matplotlib is structured. Matplotlib deliberately avoids doing \"static\" positioning of ticks, etc, unless it's explicitly told to. The assumption is that you'll want to interact with the plot, and so the bounds of the plot, ticks, ticklabels, etc will be dynamically changing. Therefore, you can't just set the text of a given tick label. By default, it's re-set by the axis's Locator and Formatter every time the plot is drawn. However, if the Locators and Formatters are set to be static (FixedLocator and FixedFormatter, respectively), then the tick labels stay the same. This is what set_*ticklabels or ax.*axis.set_ticklabels does. Hopefully that makes it slighly more clear as to why changing an individual tick label is a bit convoluted. Often, what you actually want to do is just annotate a certain position. 
In that case, look into annotate, instead.", "best_answers_score":0.8, "library_name":"matplotlib", "question_url":"https:\/\/stackoverflow.com\/questions\/11244514\/modify-tick-label-text", "best_answers_votes":429, "question_length":831, "response_length":1653 }, { "question":"How can I convert an RGB image into grayscale in Python? I'm trying to use matplotlib to read in an RGB image and convert it to grayscale. In matlab I use this: ``` img = rgb2gray(imread('image.png')); ``` In the matplotlib tutorial they don't cover it. They just read in the image ``` import matplotlib.image as mpimg img = mpimg.imread('image.png') ``` and then they slice the array, but that's not the same thing as converting RGB to grayscale from what I understand. ``` lum_img = img[:,:,0] ``` I find it hard to believe that numpy or matplotlib doesn't have a built-in function to convert from rgb to gray. Isn't this a common operation in image processing? I wrote a very simple function that works with the image imported using imread in 5 minutes. It's horribly inefficient, but that's why I was hoping for a professional implementation built-in. Sebastian has improved my function, but I'm still hoping to find the built-in one. 
matlab's (NTSC\/PAL) implementation: ``` import numpy as np def rgb2gray(rgb): r, g, b = rgb[:,:,0], rgb[:,:,1], rgb[:,:,2] gray = 0.2989 * r + 0.5870 * g + 0.1140 * b return gray ```", "response":"How about doing it with Pillow: ``` from PIL import Image img = Image.open('image.png').convert('L') img.save('greyscale.png') ``` If an alpha (transparency) channel is present in the input image and should be preserved, use mode LA: ``` img = Image.open('image.png').convert('LA') ``` Using matplotlib and the formula ``` Y' = 0.2989 R + 0.5870 G + 0.1140 B ``` you could do: ``` import numpy as np import matplotlib.pyplot as plt import matplotlib.image as mpimg def rgb2gray(rgb): return np.dot(rgb[...,:3], [0.2989, 0.5870, 0.1140]) img = mpimg.imread('image.png') gray = rgb2gray(img) plt.imshow(gray, cmap=plt.get_cmap('gray'), vmin=0, vmax=1) plt.show() ```", "best_answers_score":0.8, "library_name":"matplotlib", "question_url":"https:\/\/stackoverflow.com\/questions\/12201577\/how-can-i-convert-an-rgb-image-into-grayscale-in-python", "best_answers_votes":488, "question_length":1121, "response_length":664 }, { "question":"Is there a way to detach matplotlib plots so that the computation can continue? After these instructions in the Python interpreter one gets a window with a plot: ``` from matplotlib.pyplot import * plot([1,2,3]) show() # other code ``` Unfortunately, I don't know how to continue to interactively explore the figure created by show() while the program does further calculations. Is it possible at all? Sometimes calculations are long and it would help if they would proceed during examination of intermediate results.", "response":"Use matplotlib's calls that won't block: Using draw(): ``` from matplotlib.pyplot import plot, draw, show plot([1,2,3]) draw() print('continue computation') # at the end call show to ensure window won't close. 
show() ``` Using interactive mode: ``` from matplotlib.pyplot import plot, ion, show ion() # enables interactive mode plot([1,2,3]) # result shows immediately (implicit draw()) print('continue computation') # at the end call show to ensure window won't close. show() ```", "best_answers_score":0.8, "library_name":"matplotlib", "question_url":"https:\/\/stackoverflow.com\/questions\/458209\/is-there-a-way-to-detach-matplotlib-plots-so-that-the-computation-can-continue", "best_answers_votes":265, "question_length":517, "response_length":481 }, { "question":"Rotate label text in seaborn I have a simple factorplot ``` import seaborn as sns g = sns.factorplot(\"name\", \"miss_ratio\", \"policy\", dodge=.2, linestyles=[\"none\", \"none\", \"none\", \"none\"], data=df[df[\"level\"] == 2]) ``` The problem is that the x labels all run together, making them unreadable. How do you rotate the text so that the labels are readable?", "response":"I had a problem with the answer by @mwaskorn, namely that ``` g.set_xticklabels(rotation=30) ``` fails, because this also requires the labels. A bit easier than the answer by @Aman is to just add ``` plt.xticks(rotation=30) ```", "best_answers_score":0.8, "library_name":"matplotlib", "question_url":"https:\/\/stackoverflow.com\/questions\/26540035\/rotate-label-text-in-seaborn", "best_answers_votes":449, "question_length":353, "response_length":227 }, { "question":"Set markers for individual points on a line I have used Matplotlib to plot lines on a figure. I would now like to set the style, specifically the marker, for individual points on the line. How do I do this? To clarify my question, I want to be able to set the style for individual markers on a line, not every marker on said line.", "response":"Specify the keyword args linestyle and\/or marker in your call to plot.
For example, using a dashed line and blue circle markers: ``` plt.plot(range(10), linestyle='--', marker='o', color='b', label='line with marker') plt.legend() ``` A shortcut call for the same thing: ``` plt.plot(range(10), '--bo', label='line with marker') plt.legend() ``` Here is a list of the possible line and marker styles: ``` ================ =============================== character description ================ =============================== - solid line style -- dashed line style -. dash-dot line style : dotted line style . point marker , pixel marker o circle marker v triangle_down marker ^ triangle_up marker < triangle_left marker > triangle_right marker 1 tri_down marker 2 tri_up marker 3 tri_left marker 4 tri_right marker s square marker p pentagon marker * star marker h hexagon1 marker H hexagon2 marker + plus marker x x marker D diamond marker d thin_diamond marker | vline marker _ hline marker ================ =============================== ``` edit: with an example of marking an arbitrary subset of points, as requested in the comments: ``` import numpy as np import matplotlib.pyplot as plt xs = np.linspace(-np.pi, np.pi, 30) ys = np.sin(xs) markers_on = [12, 17, 18, 19] plt.plot(xs, ys, '-gD', markevery=markers_on, label='line with select markers') plt.legend() plt.show() ``` This last example using the markevery kwarg is possible since 1.4+, due to the merge of this feature branch. If you are stuck on an older version of matplotlib, you can still achieve the result by overlaying a scatterplot on the line plot.
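A sketch of that scatter-overlay fallback for older matplotlib versions (the marker choice and indices are illustrative; the Agg backend is forced so the snippet runs headless):

```python
import matplotlib
matplotlib.use("Agg")  # headless backend
import numpy as np
import matplotlib.pyplot as plt

xs = np.linspace(-np.pi, np.pi, 30)
ys = np.sin(xs)
markers_on = [12, 17, 18, 19]

fig, ax = plt.subplots()
ax.plot(xs, ys, "-g")  # the line itself, without markers
# Overlay markers only at the selected indices
ax.scatter(xs[markers_on], ys[markers_on], marker="D", color="g", zorder=3)
print(len(ax.lines), len(ax.collections))  # -> 1 1
```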
See the edit history for more details.", "best_answers_score":0.8, "library_name":"matplotlib", "question_url":"https:\/\/stackoverflow.com\/questions\/8409095\/set-markers-for-individual-points-on-a-line", "best_answers_votes":552, "question_length":334, "response_length":1643 }, { "question":"Moving matplotlib legend outside of the axis makes it cutoff by the figure box I'm familiar with the following questions: Matplotlib savefig with a legend outside the plot How to put the legend out of the plot It seems that the answers in these questions have the luxury of being able to fiddle with the exact shrinking of the axis so that the legend fits. Shrinking the axes, however, is not an ideal solution because it makes the data smaller making it actually more difficult to interpret; particularly when its complex and there are lots of things going on ... hence needing a large legend The example of a complex legend in the documentation demonstrates the need for this because the legend in their plot actually completely obscures multiple data points. http:\/\/matplotlib.sourceforge.net\/users\/legend_guide.html#legend-of-complex-plots What I would like to be able to do is dynamically expand the size of the figure box to accommodate the expanding figure legend. ``` import matplotlib.pyplot as plt import numpy as np x = np.arange(-2*np.pi, 2*np.pi, 0.1) fig = plt.figure(1) ax = fig.add_subplot(111) ax.plot(x, np.sin(x), label='Sine') ax.plot(x, np.cos(x), label='Cosine') ax.plot(x, np.arctan(x), label='Inverse tan') lgd = ax.legend(loc=9, bbox_to_anchor=(0.5,0)) ax.grid('on') ``` Notice how the final label 'Inverse tan' is actually outside the figure box (and looks badly cutoff - not publication quality!) Finally, I've been told that this is normal behaviour in R and LaTeX, so I'm a little confused why this is so difficult in python... Is there a historical reason? Is Matlab equally poor on this matter? 
I have the (only slightly) longer version of this code on pastebin http:\/\/pastebin.com\/grVjc007", "response":"[EDIT - 25th Feb 2025] My day job is no longer Python, so I'm not following the recent matplotlib developments. Please read all the newer answers here as there look to be some excellent modern suggestions compared to this solution from the ancient history of 2012. Sorry EMS, but I actually just got another response from the matplotlib mailling list (Thanks goes out to Benjamin Root). The code I am looking for is adjusting the savefig call to: ``` fig.savefig('samplefigure', bbox_extra_artists=(lgd,), bbox_inches='tight') #Note that the bbox_extra_artists must be an iterable ``` This is apparently similar to calling tight_layout, but instead you allow savefig to consider extra artists in the calculation. This did in fact resize the figure box as desired. ``` import matplotlib.pyplot as plt import numpy as np plt.gcf().clear() x = np.arange(-2*np.pi, 2*np.pi, 0.1) fig = plt.figure(1) ax = fig.add_subplot(111) ax.plot(x, np.sin(x), label='Sine') ax.plot(x, np.cos(x), label='Cosine') ax.plot(x, np.arctan(x), label='Inverse tan') handles, labels = ax.get_legend_handles_labels() lgd = ax.legend(handles, labels, loc='upper center', bbox_to_anchor=(0.5,-0.1)) text = ax.text(-0.2,1.05, \"Aribitrary text\", transform=ax.transAxes) ax.set_title(\"Trigonometry\") ax.grid('on') fig.savefig('samplefigure', bbox_extra_artists=(lgd,text), bbox_inches='tight') ``` This produces: [edit] The intent of this question was to completely avoid the use of arbitrary coordinate placements of arbitrary text as was the traditional solution to these problems. Despite this, numerous edits recently have insisted on putting these in, often in ways that led to the code raising an error. I have now fixed the issues and tidied the arbitrary text to show how these are also considered within the bbox_extra_artists algorithm. 
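A stripped-down, runnable sketch of the same bbox_extra_artists call (the output path is a temporary file and the single labelled line is illustrative; the Agg backend is forced so no display is needed):

```python
import os
import tempfile
import matplotlib
matplotlib.use("Agg")
import matplotlib.pyplot as plt

fig, ax = plt.subplots()
ax.plot([0, 1], [0, 1], label="a line")
# Legend anchored below the axes, i.e. outside the default figure box
lgd = ax.legend(loc="upper center", bbox_to_anchor=(0.5, -0.1))

out = os.path.join(tempfile.mkdtemp(), "samplefigure.png")
# bbox_extra_artists makes savefig grow the saved area to include the legend
fig.savefig(out, bbox_extra_artists=(lgd,), bbox_inches="tight")
print(os.path.exists(out))  # -> True
```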
[edit] Some of the comments below note that since 2019, the command has been simplified. plt.savefig('x.png', bbox_inches='tight') was sufficient. Thanks for sharing. \u2013 mateuszb Jun 27, 2019", "best_answers_score":0.8, "library_name":"matplotlib", "question_url":"https:\/\/stackoverflow.com\/questions\/10101700\/moving-matplotlib-legend-outside-of-the-axis-makes-it-cutoff-by-the-figure-box", "best_answers_votes":408, "question_length":1721, "response_length":2005 }, { "question":"How to convert a NumPy array to PIL image applying matplotlib colormap I want to take a NumPy 2D array which represents a grayscale image, and convert it to an RGB PIL image while applying some of the matplotlib colormaps. I can get a reasonable PNG output by using the pyplot.figure.figimage command: ``` dpi = 100.0 w, h = myarray.shape[1]\/dpi, myarray.shape[0]\/dpi fig = plt.figure(figsize=(w,h), dpi=dpi) fig.figimage(sub, cmap=cm.gist_earth) plt.savefig('out.png') ``` Although I could adapt this to get what I want (probably using StringIO do get the PIL image), I wonder if there is not a simpler way to do that, since it seems to be a very natural problem of image visualization. Let's say, something like this: ``` colored_PIL_image = magic_function(array, cmap) ```", "response":"Quite a busy one-liner, but here it is: First ensure your NumPy array, myarray, is normalised with the max value at 1.0. Apply the colormap directly to myarray. Rescale to the 0-255 range. Convert to integers, using np.uint8(). Use Image.fromarray(). 
And you're done: ``` import numpy as np from PIL import Image from matplotlib import cm im = Image.fromarray(np.uint8(cm.gist_earth(myarray)*255)) ``` (The original answer showed the resulting image twice here: once saved with plt.savefig() and once with im.save().)", "best_answers_score":0.8, "library_name":"matplotlib", "question_url":"https:\/\/stackoverflow.com\/questions\/10965417\/how-to-convert-a-numpy-array-to-pil-image-applying-matplotlib-colormap", "best_answers_votes":400, "question_length":775, "response_length":418 }, { "question":"Reduce left and right margins in matplotlib plot I'm struggling to deal with my plot margins in matplotlib. I've used the code below to produce my chart: ``` plt.imshow(g) c = plt.colorbar() c.set_label(\"Number of Slabs\") plt.savefig(\"OutputToUse.png\") ``` However, I get an output figure with lots of white space on either side of the plot. I've searched Google and read the matplotlib documentation, but I can't seem to find how to reduce this.", "response":"One way to automatically do this is the bbox_inches='tight' kwarg to plt.savefig. E.g.
``` import matplotlib.pyplot as plt import numpy as np data = np.arange(3000).reshape((100,30)) plt.imshow(data) plt.savefig('test.png', bbox_inches='tight') ``` Another way is to use fig.tight_layout() ``` import matplotlib.pyplot as plt import numpy as np xs = np.linspace(0, 1, 20); ys = np.sin(xs) fig = plt.figure() axes = fig.add_subplot(1,1,1) axes.plot(xs, ys) # This should be called after all axes have been added fig.tight_layout() fig.savefig('test.png') ```", "best_answers_score":0.8, "library_name":"matplotlib", "question_url":"https:\/\/stackoverflow.com\/questions\/4042192\/reduce-left-and-right-margins-in-matplotlib-plot", "best_answers_votes":392, "question_length":446, "response_length":557 }, { "question":"Getting individual colors from a color map in matplotlib If you have a Colormap cmap, for example: ``` cmap = matplotlib.cm.get_cmap('Spectral') ``` How can you get a particular colour out of it between 0 and 1, where 0 is the first colour in the map and 1 is the last colour in the map? Ideally, I would be able to get the middle colour in the map by doing: ``` >>> do_some_magic(cmap, 0.5) # Return an RGBA tuple (0.1, 0.2, 0.3, 1.0) ```", "response":"You can do this with the code below, and the code in your question was actually very close to what you needed, all you have to do is call the cmap object you have. ``` import matplotlib cmap = matplotlib.cm.get_cmap('Spectral') rgba = cmap(0.5) print(rgba) # (0.99807766255210428, 0.99923106502084169, 0.74602077638401709, 1.0) ``` For values outside of the range [0.0, 1.0] it will return the under and over colour (respectively). This, by default, is the minimum and maximum colour within the range (so 0.0 and 1.0). This default can be changed with cmap.set_under() and cmap.set_over(). For \"special\" numbers such as np.nan and np.inf the default is to use the 0.0 value, this can be changed using cmap.set_bad() similarly to under and over as above. 
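The same colormap lookup also accepts whole arrays, which is handy for picking several evenly spaced colors at once; a small sketch (plt.get_cmap is used here rather than matplotlib.cm.get_cmap, which newer matplotlib releases have removed):

```python
import numpy as np
import matplotlib.pyplot as plt

cmap = plt.get_cmap("Spectral")
# One RGBA row per sample position in [0, 1]
colors = cmap(np.linspace(0.0, 1.0, 5))
print(colors.shape)  # -> (5, 4)
```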
Finally it may be necessary for you to normalize your data such that it conforms to the range [0.0, 1.0]. This can be done using matplotlib.colors.Normalize simply as shown in the small example below where the arguments vmin and vmax describe what numbers should be mapped to 0.0 and 1.0 respectively. ``` import matplotlib norm = matplotlib.colors.Normalize(vmin=10.0, vmax=20.0) print(norm(15.0)) # 0.5 ``` A logarithmic normaliser (matplotlib.colors.LogNorm) is also available for data ranges with a large range of values. (Thanks to both Joe Kington and tcaswell for suggestions on how to improve the answer.)", "best_answers_score":0.8, "library_name":"matplotlib", "question_url":"https:\/\/stackoverflow.com\/questions\/25408393\/getting-individual-colors-from-a-color-map-in-matplotlib", "best_answers_votes":415, "question_length":439, "response_length":1367 }, { "question":"How to do a scatter plot with empty circles in Python? In Python, with Matplotlib, how can a scatter plot with empty circles be plotted? The goal is to draw empty circles around some of the colored disks already plotted by scatter(), so as to highlight them, ideally without having to redraw the colored circles. 
I tried facecolors=None, to no avail.", "response":"From the documentation for scatter: ``` Optional kwargs control the Collection properties; in particular: edgecolors: The string \u2018none\u2019 to plot faces with no outlines facecolors: The string \u2018none\u2019 to plot unfilled outlines ``` Try the following: ``` import matplotlib.pyplot as plt import numpy as np x = np.random.randn(60) y = np.random.randn(60) plt.scatter(x, y, s=80, facecolors='none', edgecolors='r') plt.show() ``` Note: For other types of plots see this post on the use of markeredgecolor and markerfacecolor.", "best_answers_score":0.8, "library_name":"matplotlib", "question_url":"https:\/\/stackoverflow.com\/questions\/4143502\/how-to-do-a-scatter-plot-with-empty-circles-in-python", "best_answers_votes":408, "question_length":350, "response_length":518 }, { "question":"ImportError: No module named matplotlib.pyplot [duplicate] This question already has answers here: ImportError: No module named requests (39 answers) Closed last year. I am currently practicing matplotlib. This is the first example I practice. ``` #!\/usr\/bin\/python import matplotlib.pyplot as plt radius = [1.0, 2.0, 3.0, 4.0] area = [3.14159, 12.56636, 28.27431, 50.26544] plt.plot(radius, area) plt.show() ``` When I run this script with python .\/plot_test.py, it shows plot correctly. However, I run it by itself, .\/plot_test.py, it throws the followings: ```none Traceback (most recent call last): File \".\/plot_test.py\", line 3, in import matplotlib.pyplot as plt ImportError: No module named matplotlib.pyplot ``` Does python look for matplotlib in different locations? The environment is: Mac OS X 10.8.4 64bit built-in python 2.7 numpy, scipy, matplotlib is installed with: ``` sudo port install py27-numpy py27-scipy py27-matplotlib \\ py27-ipython +notebook py27-pandas py27-sympy py27-nose ```", "response":"pip will make your life easy! 
Step 1: Install pip - Check if you have pip already simply by writing pip in the python console. If you don't have pip, get a python script called get-pip.py , via here: https:\/\/pip.pypa.io\/en\/latest\/installing.html or directly here: https:\/\/bootstrap.pypa.io\/get-pip.py (You may have to use Save As ..) Step 2: Take note of where the file got saved and cd the directory from command prompt. Run the get-pip.py script to install pip. You can write in cmd this line within quotes: \"python .\\get-pip.py\" Step 3: Now in cmd type: pip install matplotlib And you should be through.", "best_answers_score":0.8, "library_name":"matplotlib", "question_url":"https:\/\/stackoverflow.com\/questions\/18176591\/importerror-no-module-named-matplotlib-pyplot", "best_answers_votes":262, "question_length":1004, "response_length":606 }, { "question":"How to plot in multiple subplots I am a little confused about how this code works: ``` fig, axes = plt.subplots(nrows=2, ncols=2) plt.show() ``` How does the fig, axes work in this case? What does it do? Also why wouldn't this work to do the same thing: ``` fig = plt.figure() axes = fig.subplots(nrows=2, ncols=2) ```", "response":"There are several ways to do it. The subplots method creates the figure along with the subplots that are then stored in the ax array. 
For example: ``` import matplotlib.pyplot as plt x = range(10) y = range(10) fig, ax = plt.subplots(nrows=2, ncols=2) for row in ax: for col in row: col.plot(x, y) plt.show() ``` However, something like this will also work, it's not so \"clean\" though since you are creating a figure with subplots and then add on top of them: ``` fig = plt.figure() plt.subplot(2, 2, 1) plt.plot(x, y) plt.subplot(2, 2, 2) plt.plot(x, y) plt.subplot(2, 2, 3) plt.plot(x, y) plt.subplot(2, 2, 4) plt.plot(x, y) plt.show() ```", "best_answers_score":0.8, "library_name":"matplotlib", "question_url":"https:\/\/stackoverflow.com\/questions\/31726643\/how-to-plot-in-multiple-subplots", "best_answers_votes":343, "question_length":318, "response_length":641 }, { "question":"How to set xlim and ylim for a subplot [duplicate] This question already has answers here: How to set the subplot axis range (6 answers) Closed 10 years ago. I would like to limit the X and Y axis in matplotlib for a specific subplot. The subplot figure itself doesn't have any axis property. I want for example to change only the limits for the second plot: ``` import matplotlib.pyplot as plt fig=plt.subplot(131) plt.scatter([1,2],[3,4]) fig=plt.subplot(132) plt.scatter([10,20],[30,40]) fig=plt.subplot(133) plt.scatter([15,23],[35,43]) plt.show() ```", "response":"You should use the object-oriented interface to matplotlib, rather than the state machine interface. Almost all of the plt.* function are thin wrappers that basically do gca().*. plt.subplot returns an axes object. Once you have a reference to the axes object you can plot directly to it, change its limits, etc. ``` import matplotlib.pyplot as plt ax1 = plt.subplot(131) ax1.scatter([1, 2], [3, 4]) ax1.set_xlim([0, 5]) ax1.set_ylim([0, 5]) ax2 = plt.subplot(132) ax2.scatter([1, 2],[3, 4]) ax2.set_xlim([0, 5]) ax2.set_ylim([0, 5]) ``` and so on for as many axes as you want. 
Or better, wrap it all up in a loop: ``` import matplotlib.pyplot as plt DATA_x = ([1, 2], [2, 3], [3, 4]) DATA_y = DATA_x[::-1] XLIMS = [[0, 10]] * 3 YLIMS = [[0, 10]] * 3 for j, (x, y, xlim, ylim) in enumerate(zip(DATA_x, DATA_y, XLIMS, YLIMS)): ax = plt.subplot(1, 3, j + 1) ax.scatter(x, y) ax.set_xlim(xlim) ax.set_ylim(ylim) ```", "best_answers_score":0.8, "library_name":"matplotlib", "question_url":"https:\/\/stackoverflow.com\/questions\/15858192\/how-to-set-xlim-and-ylim-for-a-subplot", "best_answers_votes":370, "question_length":555, "response_length":912 }, { "question":"plot a circle with Matplotlib.pyplot surprisingly I didn't find a straight-forward description on how to draw a circle with matplotlib.pyplot (please no pylab) taking as input center (x,y) and radius r. I tried some variants of this: ``` import matplotlib.pyplot as plt circle=plt.Circle((0,0),2) # here must be something like circle.plot() or not? plt.show() ``` ... but still didn't get it working.", "response":"You need to add it to an axes. A Circle is a subclass of an Patch, and an axes has an add_patch method. (You can also use add_artist but it's not recommended.) Here's an example of doing this: ```py import matplotlib.pyplot as plt circle1 = plt.Circle((0, 0), 0.2, color='r') circle2 = plt.Circle((0.5, 0.5), 0.2, color='blue') circle3 = plt.Circle((1, 1), 0.2, color='g', clip_on=False) fig, ax = plt.subplots() # note we must use plt.subplots, not plt.subplot # (or if you have an existing figure) # fig = plt.gcf() # ax = fig.gca() ax.add_patch(circle1) ax.add_patch(circle2) ax.add_patch(circle3) fig.savefig('plotcircles.png') ``` This results in the following figure: The first circle is at the origin, but by default clip_on is True, so the circle is clipped when ever it extends beyond the axes. The third (green) circle shows what happens when you don't clip the Artist. 
It extends beyond the axes (but not beyond the figure, ie the figure size is not automatically adjusted to plot all of your artists). The units for x, y and radius correspond to data units by default. In this case, I didn't plot anything on my axes (fig.gca() returns the current axes), and since the limits have never been set, they defaults to an x and y range from 0 to 1. Here's a continuation of the example, showing how units matter: ```py circle1 = plt.Circle((0, 0), 2, color='r') # now make a circle with no fill, which is good for hi-lighting key results circle2 = plt.Circle((5, 5), 0.5, color='b', fill=False) circle3 = plt.Circle((10, 10), 2, color='g', clip_on=False) ax = plt.gca() ax.cla() # clear things for fresh plot # change default range so that new circles will work ax.set_xlim((0, 10)) ax.set_ylim((0, 10)) # some data ax.plot(range(11), 'o', color='black') # key data point that we are encircling ax.plot((5), (5), 'o', color='y') ax.add_patch(circle1) ax.add_patch(circle2) ax.add_patch(circle3) fig.savefig('plotcircles2.png') ``` which results in: You can see how I set the fill of the 2nd circle to False, which is useful for encircling key results (like my yellow data point).", "best_answers_score":0.8, "library_name":"matplotlib", "question_url":"https:\/\/stackoverflow.com\/questions\/9215658\/plot-a-circle-with-matplotlib-pyplot", "best_answers_votes":348, "question_length":400, "response_length":2086 }, { "question":"Specifying and saving a figure with exact size in pixels Say I have an image of size 3841 x 7195 pixels. I would like to save the contents of the figure to disk, resulting in an image of the exact size I specify in pixels. No axis, no titles. Just the image. I don't personally care about DPIs, as I only want to specify the size the image takes in the screen in disk in pixels. I have read other threads, and they all seem to do conversions to inches and then specify the dimensions of the figure in inches and adjust dpi's in some way. 
I would like to avoid dealing with the potential loss of accuracy that could result from pixel-to-inches conversions. I have tried with: ``` w = 7195 h = 3841 fig = plt.figure(frameon=False) fig.set_size_inches(w,h) ax = plt.Axes(fig, [0., 0., 1., 1.]) ax.set_axis_off() fig.add_axes(ax) ax.imshow(im_np, aspect='normal') fig.savefig(some_path, dpi=1) ``` with no luck (Python complains that width and height must each be below 32768 (?)) From everything I have seen, matplotlib requires the figure size to be specified in inches and dpi, but I am only interested in the pixels the figure takes on disk. How can I do this? To clarify: I am looking for a way to do this with matplotlib, and not with other image-saving libraries.", "response":"Matplotlib doesn't work with pixels directly, but rather physical sizes and DPI. If you want to display a figure with a certain pixel size, you need to know the DPI of your monitor. For example this link will detect that for you. If you have an image of 3841x7195 pixels it is unlikely that your monitor will be that large, so you won't be able to show a figure of that size (matplotlib requires the figure to fit in the screen; if you ask for a size too large it will shrink to the screen size). Let's imagine you want an 800x800 pixel image just for an example. Here's how to show an 800x800 pixel image on my monitor (my_dpi=96): ``` plt.figure(figsize=(800\/my_dpi, 800\/my_dpi), dpi=my_dpi) ``` So you basically just divide the dimensions in pixels by your DPI. If you want to save a figure of a specific size, then it is a different matter. Screen DPIs are not so important anymore (unless you ask for a figure that won't fit in the screen). Using the same example of the 800x800 pixel figure, we can save it in different resolutions using the dpi keyword of savefig.
To save it in the same resolution as the screen just use the same dpi: ``` plt.savefig('my_fig.png', dpi=my_dpi) ``` To save it as an 8000x8000 pixel image, use a dpi 10 times larger: ``` plt.savefig('my_fig.png', dpi=my_dpi * 10) ``` Note that the setting of the DPI is not supported by all backends. Here, the PNG backend is used, but the pdf and ps backends will implement the size differently. Also, changing the DPI and sizes will also affect things like fontsize. A larger DPI will keep the same relative sizes of fonts and elements, but if you want smaller fonts for a larger figure you need to increase the physical size instead of the DPI. Getting back to your example, if you want to save a image with 3841 x 7195 pixels, you could do the following: ``` plt.figure(figsize=(3.841, 7.195), dpi=100) ( your code ...) plt.savefig('myfig.png', dpi=1000) ``` Note that I used the figure dpi of 100 to fit in most screens, but saved with dpi=1000 to achieve the required resolution. In my system this produces a png with 3840x7190 pixels -- it seems that the DPI saved is always 0.02 pixels\/inch smaller than the selected value, which will have a (small) effect on large image sizes. Some more discussion of this here.", "best_answers_score":0.8, "library_name":"matplotlib", "question_url":"https:\/\/stackoverflow.com\/questions\/13714454\/specifying-and-saving-a-figure-with-exact-size-in-pixels", "best_answers_votes":277, "question_length":1266, "response_length":2293 }, { "question":"Date ticks and rotation [duplicate] This question already has answers here: Rotate axis tick labels (13 answers) Closed 2 years ago. I am having an issue trying to get my date ticks rotated in matplotlib. A small sample program is below. If I try to rotate the ticks at the end, the ticks do not get rotated. If I try to rotate the ticks as shown under the comment 'crashes', then matplot lib crashes. This only happens if the x-values are dates. 
If I replace the variable dates with the variable t in the call to avail_plot, the xticks(rotation=70) call works just fine inside avail_plot. Any ideas? ``` import numpy as np import matplotlib.pyplot as plt import datetime as dt def avail_plot(ax, x, y, label, lcolor): ax.plot(x,y,'b') ax.set_ylabel(label, rotation='horizontal', color=lcolor) ax.get_yaxis().set_ticks([]) #crashes #plt.xticks(rotation=70) ax2 = ax.twinx() ax2.plot(x, [1 for a in y], 'b') ax2.get_yaxis().set_ticks([]) ax2.set_ylabel('testing') f, axs = plt.subplots(2, sharex=True, sharey=True) t = np.arange(0.01, 5, 1) s1 = np.exp(t) start = dt.datetime.now() dates=[] for val in t: next_val = start + dt.timedelta(0,val) dates.append(next_val) start = next_val avail_plot(axs[0], dates, s1, 'testing', 'green') avail_plot(axs[1], dates, s1, 'testing2', 'red') plt.subplots_adjust(hspace=0, bottom=0.3) plt.yticks([0.5,],(\"\",\"\")) #doesn't crash, but does not rotate the xticks #plt.xticks(rotation=70) plt.show() ```", "response":"If you prefer a non-object-oriented approach, move plt.xticks(rotation=70) to right before the two avail_plot calls, e.g. ``` plt.xticks(rotation=70) avail_plot(axs[0], dates, s1, 'testing', 'green') avail_plot(axs[1], dates, s1, 'testing2', 'red') ``` This sets the rotation property before setting up the labels. Since you have two axes here, plt.xticks gets confused after you've made the two plots. At the point when plt.xticks doesn't do anything, plt.gca() does not give you the axes you want to modify, and so plt.xticks, which acts on the current axes, is not going to work. For an object-oriented approach not using plt.xticks, you can use ``` plt.setp( axs[1].xaxis.get_majorticklabels(), rotation=70 ) ``` after the two avail_plot calls.
This sets the rotation on the correct axes specifically.", "best_answers_score":0.8, "library_name":"matplotlib", "question_url":"https:\/\/stackoverflow.com\/questions\/11264521\/date-ticks-and-rotation", "best_answers_votes":319, "question_length":1438, "response_length":803 }, { "question":"How to add hovering annotations to a plot I am using matplotlib to make scatter plots. Each point on the scatter plot is associated with a named object. I would like to be able to see the name of an object when I hover my cursor over the point on the scatter plot associated with that object. In particular, it would be nice to be able to quickly see the names of the points that are outliers. The closest thing I have been able to find while searching here is the annotate command, but that appears to create a fixed label on the plot. Unfortunately, with the number of points that I have, the scatter plot would be unreadable if I labeled each point. Does anyone know of a way to create labels that only appear when the cursor hovers in the vicinity of that point?", "response":"Here is a code that uses a scatter and shows an annotation upon hovering over the scatter points. 
``` import matplotlib.pyplot as plt import numpy as np; np.random.seed(1) x = np.random.rand(15) y = np.random.rand(15) names = np.array(list(\"ABCDEFGHIJKLMNO\")) c = np.random.randint(1,5,size=15) norm = plt.Normalize(1,4) cmap = plt.cm.RdYlGn fig,ax = plt.subplots() sc = plt.scatter(x,y,c=c, s=100, cmap=cmap, norm=norm) annot = ax.annotate(\"\", xy=(0,0), xytext=(20,20),textcoords=\"offset points\", bbox=dict(boxstyle=\"round\", fc=\"w\"), arrowprops=dict(arrowstyle=\"->\")) annot.set_visible(False) def update_annot(ind): pos = sc.get_offsets()[ind[\"ind\"][0]] annot.xy = pos text = \"{}, {}\".format(\" \".join(list(map(str,ind[\"ind\"]))), \" \".join([names[n] for n in ind[\"ind\"]])) annot.set_text(text) annot.get_bbox_patch().set_facecolor(cmap(norm(c[ind[\"ind\"][0]]))) annot.get_bbox_patch().set_alpha(0.4) def hover(event): vis = annot.get_visible() if event.inaxes == ax: cont, ind = sc.contains(event) if cont: update_annot(ind) annot.set_visible(True) fig.canvas.draw_idle() else: if vis: annot.set_visible(False) fig.canvas.draw_idle() fig.canvas.mpl_connect(\"motion_notify_event\", hover) plt.show() ``` Because people also want to use this solution for a line plot instead of a scatter, the following would be the same solution for plot (which works slightly differently). 
``` import matplotlib.pyplot as plt import numpy as np; np.random.seed(1) x = np.sort(np.random.rand(15)) y = np.sort(np.random.rand(15)) names = np.array(list(\"ABCDEFGHIJKLMNO\")) norm = plt.Normalize(1,4) cmap = plt.cm.RdYlGn fig,ax = plt.subplots() line, = plt.plot(x,y, marker=\"o\") annot = ax.annotate(\"\", xy=(0,0), xytext=(-20,20),textcoords=\"offset points\", bbox=dict(boxstyle=\"round\", fc=\"w\"), arrowprops=dict(arrowstyle=\"->\")) annot.set_visible(False) def update_annot(ind): x,y = line.get_data() annot.xy = (x[ind[\"ind\"][0]], y[ind[\"ind\"][0]]) text = \"{}, {}\".format(\" \".join(list(map(str,ind[\"ind\"]))), \" \".join([names[n] for n in ind[\"ind\"]])) annot.set_text(text) annot.get_bbox_patch().set_alpha(0.4) def hover(event): vis = annot.get_visible() if event.inaxes == ax: cont, ind = line.contains(event) if cont: update_annot(ind) annot.set_visible(True) fig.canvas.draw_idle() else: if vis: annot.set_visible(False) fig.canvas.draw_idle() fig.canvas.mpl_connect(\"motion_notify_event\", hover) plt.show() ``` In case someone is looking for a solution for lines in twin axes, refer to How to make labels appear when hovering over a point in multiple axis? In case someone is looking for a solution for bar plots, please refer to e.g. this answer.", "best_answers_score":0.8, "library_name":"matplotlib", "question_url":"https:\/\/stackoverflow.com\/questions\/7908636\/how-to-add-hovering-annotations-to-a-plot", "best_answers_votes":259, "question_length":766, "response_length":2626 }, { "question":"Format y axis as percent I have an existing plot that was created with pandas like this: ``` df['myvar'].plot(kind='bar') ``` The y axis is formatted as float and I want to change the y axis to percentages. All of the solutions I found use ax.xyz syntax and I can only place code below the line above that creates the plot (I cannot add ax=ax to the line above.) How can I format the y axis as percentages without changing the line above? 
Here is the solution I found but requires that I redefine the plot: ``` import matplotlib.pyplot as plt import numpy as np import matplotlib.ticker as mtick data = [8,12,15,17,18,18.5] perc = np.linspace(0,100,len(data)) fig = plt.figure(1, (7,4)) ax = fig.add_subplot(1,1,1) ax.plot(perc, data) fmt = '%.0f%%' # Format you want the ticks, e.g. '40%' xticks = mtick.FormatStrFormatter(fmt) ax.xaxis.set_major_formatter(xticks) plt.show() ``` Link to the above solution: Pyplot: using percentage on x axis", "response":"This is a few months late, but I have created PR#6251 with matplotlib to add a new PercentFormatter class. With this class you just need one line to reformat your axis (two if you count the import of matplotlib.ticker): ``` import ... import matplotlib.ticker as mtick ax = df['myvar'].plot(kind='bar') ax.yaxis.set_major_formatter(mtick.PercentFormatter()) ``` PercentFormatter() accepts three arguments, xmax, decimals, symbol. xmax allows you to set the value that corresponds to 100% on the axis. This is nice if you have data from 0.0 to 1.0 and you want to display it from 0% to 100%. Just do PercentFormatter(1.0). The other two parameters allow you to set the number of digits after the decimal point and the symbol. They default to None and '%', respectively. decimals=None will automatically set the number of decimal points based on how much of the axes you are showing. Update PercentFormatter was introduced into Matplotlib proper in version 2.1.0.", "best_answers_score":0.8, "library_name":"matplotlib", "question_url":"https:\/\/stackoverflow.com\/questions\/31357611\/format-y-axis-as-percent", "best_answers_votes":376, "question_length":940, "response_length":961 }, { "question":"reducing number of plot ticks I have too many ticks on my graph and they are running into each other. How can I reduce the number of ticks? For example, I have ticks: ``` 1E-6, 1E-5, 1E-4, ... 1E6, 1E7 ``` And I only want: ``` 1E-5, 1E-3, ... 
1E5, 1E7 ``` I've tried playing with the LogLocator, but I haven't been able to figure this out.", "response":"Alternatively, if you want to simply set the number of ticks while allowing matplotlib to position them (currently only with MaxNLocator), there is pyplot.locator_params, ``` pyplot.locator_params(nbins=4) ``` You can specify specific axis in this method as mentioned below, default is both: ``` # To specify the number of ticks on both or any single axes pyplot.locator_params(axis='y', nbins=6) pyplot.locator_params(axis='x', nbins=10) ```", "best_answers_score":0.8, "library_name":"matplotlib", "question_url":"https:\/\/stackoverflow.com\/questions\/6682784\/reducing-number-of-plot-ticks", "best_answers_votes":368, "question_length":339, "response_length":442 }, { "question":"_tkinter.TclError: no display name and no $DISPLAY environment variable I am running a simple python script in the server: ``` import matplotlib.pyplot as plt import numpy as np x = np.random.randn(60) y = np.random.randn(60) plt.scatter(x, y, s=20) out_png = 'path\/to\/store\/out_file.png' plt.savefig(out_png, dpi=150) ``` I try to use the command python example.py in this server which has matplotlib 1.5.1 installed it fails with the error: ``` Traceback (most recent call last): File \"example.py\", line 7, in plt.scatter(x, y, s=20) File \"\/home\/USER\/.virtualenvs\/nnet\/lib\/python2.7\/site-packages\/matplotlib\/pyplot.py\", line 3241, in scatter ax = gca() File \"\/home\/USER\/.virtualenvs\/nnet\/lib\/python2.7\/site-packages\/matplotlib\/pyplot.py\", line 928, in gca return gcf().gca(**kwargs) File \"\/home\/USER\/.virtualenvs\/nnet\/lib\/python2.7\/site-packages\/matplotlib\/pyplot.py\", line 578, in gcf return figure() File \"\/home\/USER\/.virtualenvs\/nnet\/lib\/python2.7\/site-packages\/matplotlib\/pyplot.py\", line 527, in figure **kwargs) File \"\/home\/USER\/.virtualenvs\/nnet\/lib\/python2.7\/site-packages\/matplotlib\/backends\/backend_tkagg.py\", line 84, in 
new_figure_manager return new_figure_manager_given_figure(num, figure) File \"\/home\/USER\/.virtualenvs\/nnet\/lib\/python2.7\/site-packages\/matplotlib\/backends\/backend_tkagg.py\", line 92, in new_figure_manager_given_figure window = Tk.Tk() File \"\/usr\/local\/lib\/python2.7\/lib-tk\/Tkinter.py\", line 1810, in __init__ self.tk = _tkinter.create(screenName, baseName, className, interactive, wantobjects, useTk, sync, use) _tkinter.TclError: no display name and no $DISPLAY environment variable ``` What is happening here?", "response":"Matplotlib chooses Xwindows backend by default. You need to set matplotlib to not use the Xwindows backend. Add this code to the start of your script (before importing pyplot) and try again: ``` import matplotlib matplotlib.use('Agg') ``` Or add to .config\/matplotlib\/matplotlibrc line backend: Agg to use non-interactive backend. ```sh echo \"backend: Agg\" > ~\/.config\/matplotlib\/matplotlibrc ``` Or when connect to server use ssh -X remoteMachine command to use Xwindows. Also you may try to export display: export DISPLAY=mymachine.com:0.0. For more info: https:\/\/matplotlib.org\/faq\/howto_faq.html#matplotlib-in-a-web-application-server", "best_answers_score":0.8, "library_name":"matplotlib", "question_url":"https:\/\/stackoverflow.com\/questions\/37604289\/tkinter-tclerror-no-display-name-and-no-display-environment-variable", "best_answers_votes":348, "question_length":1645, "response_length":638 }, { "question":"Plotting in a non-blocking way with Matplotlib I am having problems trying to make matplotlib plot a function without blocking execution. I have tried using show(block=False) as some people suggest, but all I get is a frozen window. If I simply call show(), the result is plotted properly but execution is blocked until the window is closed. From other threads I've read, I suspect that whether show(block=False) works or not depends on the backend. Is this correct? My backend is Qt4Agg. 
Could you have a look at my code and tell me if you see something wrong? Here is my code. ``` from math import * from matplotlib import pyplot as plt print(plt.get_backend()) def main(): x = range(-50, 51, 1) for pow in range(1,5): # plot x^1, x^2, ..., x^4 y = [Xi**pow for Xi in x] print(y) plt.plot(x, y) plt.draw() #plt.show() #this plots correctly, but blocks execution. plt.show(block=False) #this creates an empty frozen window. _ = raw_input(\"Press [enter] to continue.\") if __name__ == '__main__': main() ``` PS. I forgot to say that I would like to update the existing window every time I plot something, instead of creating a new one.", "response":"I spent a long time looking for solutions, and found this answer. It looks like, in order to get what you (and I) want, you need the combination of plt.ion(), plt.show() (not with block=False) and, most importantly, plt.pause(.001) (or whatever time you want). The pause is needed because the GUI events happen while the main code is sleeping, including drawing. It's possible that this is implemented by picking up time from a sleeping thread, so maybe IDEs mess with that\u2014I don't know. Here's an implementation that works for me on python 3.5: ``` import numpy as np from matplotlib import pyplot as plt def main(): plt.axis([-50,50,0,10000]) plt.ion() plt.show() x = np.arange(-50, 51) for pow in range(1,5): # plot x^1, x^2, ..., x^4 y = [Xi**pow for Xi in x] plt.plot(x, y) plt.draw() plt.pause(0.001) input(\"Press [enter] to continue.\") if __name__ == '__main__': main() ```", "best_answers_score":0.8, "library_name":"matplotlib", "question_url":"https:\/\/stackoverflow.com\/questions\/28269157\/plotting-in-a-non-blocking-way-with-matplotlib", "best_answers_votes":262, "question_length":1134, "response_length":880 }, { "question":"How to plot multiple dataframes in subplots I have a few Pandas DataFrames sharing the same value scale, but having different columns and indices. 
When invoking df.plot(), I get separate plot images. what I really want is to have them all in the same plot as subplots, but I'm unfortunately failing to come up with a solution to how and would highly appreciate some help.", "response":"You can manually create the subplots with matplotlib, and then plot the dataframes on a specific subplot using the ax keyword. For example for 4 subplots (2x2): ``` import matplotlib.pyplot as plt fig, axes = plt.subplots(nrows=2, ncols=2) df1.plot(ax=axes[0,0]) df2.plot(ax=axes[0,1]) ... ``` Here axes is an array which holds the different subplot axes, and you can access one just by indexing axes. If you want a shared x-axis, then you can provide sharex=True to plt.subplots.", "best_answers_score":0.8, "library_name":"matplotlib", "question_url":"https:\/\/stackoverflow.com\/questions\/22483588\/how-to-plot-multiple-dataframes-in-subplots", "best_answers_votes":414, "question_length":371, "response_length":480 }, { "question":"How do I equalize the scales of the x-axis and y-axis? How do I create a plot where the scales of x-axis and y-axis are the same? This equal ratio should be maintained even if I change the window size. Currently, my graph scales together with the window size. 
I tried: ``` plt.xlim(-3, 3) plt.ylim(-3, 3) plt.axis('equal') ```", "response":"Use Axes.set_aspect in the following manner: ``` from matplotlib import pyplot as plt plt.plot(range(5)) plt.xlim(-3, 3) plt.ylim(-3, 3) ax = plt.gca() ax.set_aspect('equal', adjustable='box') plt.draw() ```", "best_answers_score":0.8, "library_name":"matplotlib", "question_url":"https:\/\/stackoverflow.com\/questions\/17990845\/how-do-i-equalize-the-scales-of-the-x-axis-and-y-axis", "best_answers_votes":323, "question_length":326, "response_length":207 }, { "question":"matplotlib error - no module named tkinter [duplicate] This question already has answers here: \"UserWarning: Matplotlib is currently using agg, which is a non-GUI backend, so cannot show the figure.\" when plotting figure with pyplot on Pycharm (31 answers) Closed 2 years ago. I tried to use the matplotlib package via Pycharm IDE on Windows 10. When I run this code: ``` from matplotlib import pyplot ``` I get the following error: ``` ImportError: No module named 'tkinter' ``` I know that in python 2.x it was called Tkinter, but that is not the problem - I just installed a brand new python 3.5.1. EDIT: in addition, I also tried to import 'tkinter' and 'Tkinter' - neither of these worked (both returned the error message I mentioned).", "response":"For Debian-based Linux distros: ``` sudo apt-get install python3-tk ``` RPM-based distros: ``` sudo yum install python3-tkinter ``` For Windows: I think the problem is that you didn't install the complete Python package, since Tkinter should be shipped with Python out of the box. See: http:\/\/www.tkdocs.com\/tutorial\/install.html . Good Python distributions for Windows can be found by the companies Anaconda or ActiveState. Test the python module ``` python -c \"import tkinter\" ``` p.s. 
I suggest installing ipython, which provides a powerful shell and the necessary packages as well.", "best_answers_score":0.8, "library_name":"matplotlib", "question_url":"https:\/\/stackoverflow.com\/questions\/36327134\/matplotlib-error-no-module-named-tkinter", "best_answers_votes":255, "question_length":740, "response_length":579 }, { "question":"How to update a plot in matplotlib I'm having issues with redrawing the figure here. I allow the user to specify the units in the time scale (x-axis) and then I recalculate and call this function plots(). I want the plot to simply update, not append another plot to the figure. ``` def plots(): global vlgaBuffSorted cntr() result = collections.defaultdict(list) for d in vlgaBuffSorted: result[d['event']].append(d) result_list = result.values() f = Figure() graph1 = f.add_subplot(211) graph2 = f.add_subplot(212,sharex=graph1) for item in result_list: tL = [] vgsL = [] vdsL = [] isubL = [] for dict in item: tL.append(dict['time']) vgsL.append(dict['vgs']) vdsL.append(dict['vds']) isubL.append(dict['isub']) graph1.plot(tL,vdsL,'bo',label='a') graph1.plot(tL,vgsL,'rp',label='b') graph2.plot(tL,isubL,'b-',label='c') plotCanvas = FigureCanvasTkAgg(f, pltFrame) toolbar = NavigationToolbar2TkAgg(plotCanvas, pltFrame) toolbar.pack(side=BOTTOM) plotCanvas.get_tk_widget().pack(side=TOP) ```", "response":"You essentially have two options: Do exactly what you're currently doing, but call graph1.clear() and graph2.clear() before replotting the data. This is the slowest, but simplest and most robust option. Instead of replotting, you can just update the data of the plot objects. You'll need to make some changes in your code, but this should be much, much faster than replotting things every time. However, the shape of the data that you're plotting can't change, and if the range of your data is changing, you'll need to manually reset the x and y axis limits.
To give an example of the second option: ``` import matplotlib.pyplot as plt import numpy as np x = np.linspace(0, 6*np.pi, 100) y = np.sin(x) # You probably won't need this if you're embedding things in a tkinter plot... plt.ion() fig = plt.figure() ax = fig.add_subplot(111) line1, = ax.plot(x, y, 'r-') # Returns a tuple of line objects, thus the comma for phase in np.linspace(0, 10*np.pi, 500): line1.set_ydata(np.sin(x + phase)) fig.canvas.draw() fig.canvas.flush_events() ```", "best_answers_score":0.8, "library_name":"matplotlib", "question_url":"https:\/\/stackoverflow.com\/questions\/4098131\/how-to-update-a-plot-in-matplotlib", "best_answers_votes":258, "question_length":993, "response_length":1046 }, { "question":"Set Colorbar Range I have the following code: ```py import matplotlib.pyplot as plt cdict = { 'red' : ( (0.0, 0.25, .25), (0.02, .59, .59), (1., 1., 1.)), 'green': ( (0.0, 0.0, 0.0), (0.02, .45, .45), (1., .97, .97)), 'blue' : ( (0.0, 1.0, 1.0), (0.02, .75, .75), (1., 0.45, 0.45)) } cm = m.colors.LinearSegmentedColormap('my_colormap', cdict, 1024) plt.clf() plt.pcolor(X, Y, v, cmap=cm) plt.loglog() plt.xlabel('X Axis') plt.ylabel('Y Axis') plt.colorbar() plt.show() ``` This produces a graph of the values v on the axes X vs Y, using the specified colormap. The X and Y axes are perfect, but the colormap spreads between the min and max of v. I would like to force the colormap to range between 0 and 1. I thought of using: ```py plt.axis(...) ``` To set the ranges of the axes, but this only takes arguments for the min and max of X and Y, not the colormap. Edit: For clarity, let's say I have one graph whose values range (0 ... 0.3), and another graph whose values (0.2 ... 0.8). In both graphs, I will want the range of the colorbar to be (0 ... 1). In both graphs, I want this range of colour to be identical using the full range of cdict above (so 0.25 in both graphs will be the same colour). 
In the first graph, all colours between 0.3 and 1.0 won't feature in the graph, but will in the colourbar key at the side. In the other, all colours between 0 and 0.2, and between 0.8 and 1 will not feature in the graph, but will in the colourbar at the side.", "response":"Using vmin and vmax forces the range for the colors. Here's an example: ``` import matplotlib as m import matplotlib.pyplot as plt import numpy as np cdict = { 'red' : ( (0.0, 0.25, .25), (0.02, .59, .59), (1., 1., 1.)), 'green': ( (0.0, 0.0, 0.0), (0.02, .45, .45), (1., .97, .97)), 'blue' : ( (0.0, 1.0, 1.0), (0.02, .75, .75), (1., 0.45, 0.45)) } cm = m.colors.LinearSegmentedColormap('my_colormap', cdict, 1024) x = np.arange(0, 10, .1) y = np.arange(0, 10, .1) X, Y = np.meshgrid(x,y) data = 2*( np.sin(X) + np.sin(3*Y) ) def do_plot(n, f, title): #plt.clf() plt.subplot(1, 3, n) plt.pcolor(X, Y, f(data), cmap=cm, vmin=-4, vmax=4) plt.title(title) plt.colorbar() plt.figure() do_plot(1, lambda x:x, \"all\") do_plot(2, lambda x:np.clip(x, -4, 0), \"0\") plt.show() ```", "best_answers_score":0.8, "library_name":"matplotlib", "question_url":"https:\/\/stackoverflow.com\/questions\/3373256\/set-colorbar-range", "best_answers_votes":242, "question_length":1463, "response_length":770 }, { "question":"How to put individual tags for a matplotlib scatter plot? I am trying to do a scatter plot in matplotlib and I couldn't find a way to add tags to the points. For example: ``` scatter1=plt.scatter(data1[\"x\"], data1[\"y\"], marker=\"o\", c=\"blue\", facecolors=\"white\", edgecolors=\"blue\") ``` I want for the points in \"y\" to have labels as \"point 1\", \"point 2\", etc. 
I couldn't figure it out.", "response":"Perhaps use plt.annotate: ``` import numpy as np import matplotlib.pyplot as plt N = 10 data = np.random.random((N, 4)) labels = ['point{0}'.format(i) for i in range(N)] plt.subplots_adjust(bottom = 0.1) plt.scatter( data[:, 0], data[:, 1], marker='o', c=data[:, 2], s=data[:, 3] * 1500, cmap=plt.get_cmap('Spectral')) for label, x, y in zip(labels, data[:, 0], data[:, 1]): plt.annotate( label, xy=(x, y), xytext=(-20, 20), textcoords='offset points', ha='right', va='bottom', bbox=dict(boxstyle='round,pad=0.5', fc='yellow', alpha=0.5), arrowprops=dict(arrowstyle = '->', connectionstyle='arc3,rad=0')) plt.show() ```", "best_answers_score":0.8, "library_name":"matplotlib", "question_url":"https:\/\/stackoverflow.com\/questions\/5147112\/how-to-put-individual-tags-for-a-matplotlib-scatter-plot", "best_answers_votes":384, "question_length":384, "response_length":619 }, { "question":"How to remove gaps between subplots The code below produces gaps between the subplots. How do I remove the gaps between the subplots and make the image a tight grid? ``` import matplotlib.pyplot as plt for i in range(16): i = i + 1 ax1 = plt.subplot(4, 4, i) plt.axis('on') ax1.set_xticklabels([]) ax1.set_yticklabels([]) ax1.set_aspect('equal') plt.subplots_adjust(wspace=None, hspace=None) plt.show() ```", "response":"The problem is the use of aspect='equal', which prevents the subplots from stretching to an arbitrary aspect ratio and filling up all the empty space. 
Normally, this would work: ``` import matplotlib.pyplot as plt ax = [plt.subplot(2,2,i+1) for i in range(4)] for a in ax: a.set_xticklabels([]) a.set_yticklabels([]) plt.subplots_adjust(wspace=0, hspace=0) ``` The result is this: However, with aspect='equal', as in the following code: ``` import matplotlib.pyplot as plt ax = [plt.subplot(2,2,i+1) for i in range(4)] for a in ax: a.set_xticklabels([]) a.set_yticklabels([]) a.set_aspect('equal') plt.subplots_adjust(wspace=0, hspace=0) ``` This is what we get: The difference in this second case is that you've forced the x- and y-axes to have the same number of units\/pixel. Since the axes go from 0 to 1 by default (i.e., before you plot anything), using aspect='equal' forces each axis to be a square. Since the figure is not a square, pyplot adds in extra spacing between the axes horizontally. To get around this problem, you can set your figure to have the correct aspect ratio. We're going to use the object-oriented pyplot interface here, which I consider to be superior in general: ``` import matplotlib.pyplot as plt fig = plt.figure(figsize=(8,8)) # Notice the equal aspect ratio ax = [fig.add_subplot(2,2,i+1) for i in range(4)] for a in ax: a.set_xticklabels([]) a.set_yticklabels([]) a.set_aspect('equal') fig.subplots_adjust(wspace=0, hspace=0) ``` Here's the result:", "best_answers_score":0.8, "library_name":"matplotlib", "question_url":"https:\/\/stackoverflow.com\/questions\/20057260\/how-to-remove-gaps-between-subplots", "best_answers_votes":268, "question_length":406, "response_length":1484 }, { "question":"Plotting time on the independent axis I have an array of timestamps in the format (HH:MM:SS.mmmmmm) and another array of floating point numbers, each corresponding to a value in the timestamp array. Can I plot time on the x axis and the numbers on the y-axis using Matplotlib? I was trying to, but somehow it was only accepting arrays of floats. How can I get it to plot the time? 
Do I have to modify the format in any way?", "response":"Update: This answer is outdated since matplotlib version 3.5. The plot function now handles datetime data directly. See https:\/\/matplotlib.org\/3.5.1\/api\/_as_gen\/matplotlib.pyplot.plot_date.html The use of plot_date is discouraged. This method exists for historic reasons and may be deprecated in the future. datetime-like data should directly be plotted using plot. If you need to plot plain numeric data as Matplotlib date format or need to set a timezone, call ax.xaxis.axis_date \/ ax.yaxis.axis_date before plot. See Axis.axis_date. Old, outdated answer: You must first convert your timestamps to Python datetime objects (use datetime.strptime). Then use date2num to convert the dates to matplotlib format. Plot the dates and values using plot_date: ``` import matplotlib.pyplot as plt import matplotlib.dates from datetime import datetime x_values = [datetime(2021, 11, 18, 12), datetime(2021, 11, 18, 14), datetime(2021, 11, 18, 16)] y_values = [1.0, 3.0, 2.0] dates = matplotlib.dates.date2num(x_values) plt.plot_date(dates, y_values) ```", "best_answers_score":0.8, "library_name":"matplotlib", "question_url":"https:\/\/stackoverflow.com\/questions\/1574088\/plotting-time-on-the-independent-axis", "best_answers_votes":226, "question_length":423, "response_length":1044 }, { "question":"How do I tell matplotlib that I am done with a plot? The following code plots to two PostScript (.ps) files, but the second one contains both lines. 
``` import matplotlib import matplotlib.pyplot as plt import matplotlib.mlab as mlab plt.subplot(111) x = [1,10] y = [30, 1000] plt.loglog(x, y, basex=10, basey=10, ls=\"-\") plt.savefig(\"first.ps\") plt.subplot(111) x = [10,100] y = [10, 10000] plt.loglog(x, y, basex=10, basey=10, ls=\"-\") plt.savefig(\"second.ps\") ``` How can I tell matplotlib to start afresh for the second plot?", "response":"There is a clear figure command, and it should do it for you: ``` plt.clf() ``` If you have multiple subplots in the same figure, ``` plt.cla() ``` clears the current axes.", "best_answers_score":0.8, "library_name":"matplotlib", "question_url":"https:\/\/stackoverflow.com\/questions\/741877\/how-do-i-tell-matplotlib-that-i-am-done-with-a-plot", "best_answers_votes":227, "question_length":528, "response_length":171 }, { "question":"How do I create a second (new) plot, then later plot on the old one? I want to plot data, then create a new figure and plot data2, and finally come back to the original plot and plot data3, kinda like this: ``` import numpy as np import matplotlib.pyplot as plt x = np.arange(5) y = np.exp(x) plt.figure() plt.plot(x, y) z = np.sin(x) plt.figure() plt.plot(x, z) w = np.cos(x) plt.figure(\"\"\"first figure\"\"\") # Here's the part I need plt.plot(x, w) ``` FYI How do I tell matplotlib that I am done with a plot? does something similar, but not quite! It doesn't let me get access to that original plot.", "response":"If you find yourself doing things like this regularly, it may be worth investigating the object-oriented interface to matplotlib.
In your case: ``` import matplotlib.pyplot as plt import numpy as np x = np.arange(5) y = np.exp(x) fig1, ax1 = plt.subplots() ax1.plot(x, y) ax1.set_title(\"Axis 1 title\") ax1.set_xlabel(\"X-label for axis 1\") z = np.sin(x) fig2, (ax2, ax3) = plt.subplots(nrows=2, ncols=1) # two axes on figure ax2.plot(x, z) ax3.plot(x, -z) w = np.cos(x) ax1.plot(x, w) # can continue plotting on the first axis ``` It is a little more verbose but it's much clearer and easier to keep track of, especially with several figures each with multiple subplots.", "best_answers_score":0.8, "library_name":"matplotlib", "question_url":"https:\/\/stackoverflow.com\/questions\/6916978\/how-do-i-create-a-second-new-plot-then-later-plot-on-the-old-one", "best_answers_votes":197, "question_length":589, "response_length":668 }, { "question":"What is the difference between pylab and pyplot? [duplicate] This question already has answers here: Which is the recommended way to plot: matplotlib or pylab? (2 answers) Closed 4 years ago. What is the difference between matplotlib.pyplot and matplotlib.pylab? Which is preferred for what usage? I am a little confused, because it seems like independent from which I import, I can do the same things. What am I missing?", "response":"This wording is no longer in the documentation. Use of the pylab import is now discouraged and the OO interface is recommended for most non-interactive usage. From the documentation, the emphasis is mine: Matplotlib is the whole package; pylab is a module in matplotlib that gets installed alongside matplotlib; and matplotlib.pyplot is a module in matplotlib. Pyplot provides the state-machine interface to the underlying plotting library in matplotlib. This means that figures and axes are implicitly and automatically created to achieve the desired plot. For example, calling plot from pyplot will automatically create the necessary figure and axes to achieve the desired plot. 
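For instance, a minimal sketch of this implicit creation (the Agg backend is used here only so the snippet runs headless; data and title are made up):

```python
import matplotlib
matplotlib.use('Agg')  # headless backend so this runs without a display
import matplotlib.pyplot as plt

# No figure or axes created explicitly: pyplot makes them on demand.
plt.plot([1, 2, 3], [1, 4, 9])
plt.title('Implicit figure and axes')  # applied to the current axes

fig = plt.gcf()  # handle on the implicitly created figure
ax = plt.gca()   # handle on the implicitly created axes
```

plt.gcf() and plt.gca() return handles to the figure and axes that were created behind the scenes.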
Setting a title will then automatically set that title to the current axes object. Pylab combines the pyplot functionality (for plotting) with the numpy functionality (for mathematics and for working with arrays) in a single namespace, making that namespace (or environment) even more MATLAB-like. For example, one can call the sin and cos functions just like you could in MATLAB, as well as having all the features of pyplot. The pyplot interface is generally preferred for non-interactive plotting (i.e., scripting). The pylab interface is convenient for interactive calculations and plotting, as it minimizes typing. Note that this is what you get if you use the ipython shell with the -pylab option, which imports everything from pylab and makes plotting fully interactive.", "best_answers_score":0.8, "library_name":"matplotlib", "question_url":"https:\/\/stackoverflow.com\/questions\/11469336\/what-is-the-difference-between-pylab-and-pyplot", "best_answers_votes":175, "question_length":421, "response_length":1458 }, { "question":"How to plot a high resolution graph I've used matplotlib for plotting some experimental results (discussed here: Looping over files and plotting). However, saving the picture by right-clicking the image gives very bad quality \/ low resolution images.
``` from glob import glob import numpy as np import matplotlib.pyplot as plt import matplotlib as mpl # loop over all files in the current directory ending with .txt for fname in glob(\".\/*.txt\"): # read file, skip header (1 line) and unpack into 3 variables WL, ABS, T = np.genfromtxt(fname, skip_header=1, unpack=True) # first plot plt.plot(WL, T, label='BN', color='blue') plt.xlabel('Wavelength (nm)') plt.xlim(200,1000) plt.ylim(0,100) plt.ylabel('Transmittance, %') mpl.rcParams.update({'font.size': 14}) #plt.legend(loc='lower center') plt.title('') plt.show() plt.clf() # second plot plt.plot(WL, ABS, label='BN', color='red') plt.xlabel('Wavelength (nm)') plt.xlim(200,1000) plt.ylabel('Absorbance, A') mpl.rcParams.update({'font.size': 14}) #plt.legend() plt.title('') plt.show() plt.clf() ``` Example graph of what I'm looking for: example graph", "response":"You can use savefig() to export to an image file: ``` plt.savefig('filename.png') ``` In addition, you can specify the dpi argument to some scalar value (default is 100). For example: ``` plt.savefig('filename.png', dpi=300) ```", "best_answers_score":0.8, "library_name":"matplotlib", "question_url":"https:\/\/stackoverflow.com\/questions\/39870642\/how-to-plot-a-high-resolution-graph", "best_answers_votes":303, "question_length":1114, "response_length":228 }, { "question":"Matplotlib: draw grid lines behind other graph elements In Matplotlib, I make dashed grid lines as follows: ``` fig = pylab.figure() ax = fig.add_subplot(1,1,1) ax.yaxis.grid(color='gray', linestyle='dashed') ``` however, I can't find out how (or even if it is possible) to make the grid lines be drawn behind other graph elements, such as bars. Changing the order of adding the grid versus adding other elements makes no difference. 
Is it possible to make it so that the grid lines appear behind everything else?", "response":"According to this - https:\/\/web.archive.org\/web\/20200131000410\/http:\/\/matplotlib.1069221.n5.nabble.com\/axis-elements-and-zorder-td5346.html - you can use Axes.set_axisbelow(True) (I am currently installing matplotlib for the first time, so have no idea if that's correct - I just found it by googling \"matplotlib z order grid\" - \"z order\" is typically used to describe this kind of thing (z being the axis \"out of the page\"))", "best_answers_score":0.8, "library_name":"matplotlib", "question_url":"https:\/\/stackoverflow.com\/questions\/1726391\/matplotlib-draw-grid-lines-behind-other-graph-elements", "best_answers_votes":205, "question_length":513, "response_length":425 }, { "question":"Bin size in Matplotlib (Histogram) I'm using matplotlib to make a histogram. Is there any way to manually set the size of the bins as opposed to the number of bins?", "response":"Actually, it's quite easy: instead of the number of bins you can give a list with the bin boundaries. They can be unequally distributed, too: ``` plt.hist(data, bins=[0, 10, 20, 30, 40, 50, 100]) ``` If you just want them equally distributed, you can simply use range: ``` plt.hist(data, bins=range(min(data), max(data) + binwidth, binwidth)) ``` Added to original answer The above line works for data filled with integers only.
As macrocosme points out, for floats you can use: ``` import numpy as np plt.hist(data, bins=np.arange(min(data), max(data) + binwidth, binwidth)) ```", "best_answers_score":0.8, "library_name":"matplotlib", "question_url":"https:\/\/stackoverflow.com\/questions\/6986986\/bin-size-in-matplotlib-histogram", "best_answers_votes":358, "question_length":164, "response_length":579 }, { "question":"How to display an image I tried to use IPython.display with the following code: ``` from IPython.display import display, Image display(Image(filename='MyImage.png')) ``` I also tried to use matplotlib with the following code: ``` import matplotlib.pyplot as plt import matplotlib.image as mpimg plt.imshow(mpimg.imread('MyImage.png')) ``` In both cases, nothing is displayed, not even an error message.", "response":"If you are using matplotlib and want to show the image in your interactive notebook, try the following: ``` %matplotlib inline import matplotlib.pyplot as plt import matplotlib.image as mpimg img = mpimg.imread('your_image.png') imgplot = plt.imshow(img) plt.show() ```", "best_answers_score":0.8, "library_name":"matplotlib", "question_url":"https:\/\/stackoverflow.com\/questions\/35286540\/how-to-display-an-image", "best_answers_votes":388, "question_length":402, "response_length":269 }, { "question":"Generating matplotlib graphs without a running X server [duplicate] This question already has answers here: Generating a PNG with matplotlib when DISPLAY is undefined (13 answers) Closed 11 years ago. Matplotlib seems to require the $DISPLAY environment variable which means a running X server.Some web hosting services do not allow a running X server session.Is there a way to generate graphs using matplotlib without a running X server? ``` [username@hostname ~]$ python2.6 Python 2.6.5 (r265:79063, Nov 23 2010, 02:02:03) [GCC 4.1.2 20080704 (Red Hat 4.1.2-48)] on linux2 Type \"help\", \"copyright\", \"credits\" or \"license\" for more information. 
>>> import matplotlib.pyplot as plt >>> fig = plt.figure() Traceback (most recent call last): File \"\", line 1, in File \"\/home\/username\/lib\/python2.6\/matplotlib-1.0.1-py2.6-linux-i686.egg\/matplotlib\/pyplot.py\", line 270, in figure **kwargs) File \"\/home\/username\/lib\/python2.6\/matplotlib-1.0.1-py2.6-linux-i686.egg\/matplotlib\/backends\/backend_tkagg.py\", line 80, in new_figure_manager window = Tk.Tk() File \"\/usr\/local\/lib\/python2.6\/lib-tk\/Tkinter.py\", line 1643, in __init__ self.tk = _tkinter.create(screenName, baseName, className, interactive, wantobjects, useTk, sync, use) _tkinter.TclError: no display name and no $DISPLAY environment variable >>> ```", "response":"@Neil's answer is one (perfectly valid!) way of doing it, but you can also simply call matplotlib.use('Agg') before importing matplotlib.pyplot, and then continue as normal. E.g. ``` import matplotlib as mpl mpl.use('Agg') import matplotlib.pyplot as plt fig = plt.figure() ax = fig.add_subplot(111) ax.plot(range(10)) fig.savefig('temp.png') ``` You don't have to use the Agg backend, as well. The pdf, ps, svg, agg, cairo, and gdk backends can all be used without an X-server. However, only the Agg backend will be built by default (I think?), so there's a good chance that the other backends may not be enabled on your particular install. Alternately, you can just set the backend parameter in your .matplotlibrc file to automatically have matplotlib.pyplot use the given renderer.", "best_answers_score":0.8, "library_name":"matplotlib", "question_url":"https:\/\/stackoverflow.com\/questions\/4931376\/generating-matplotlib-graphs-without-a-running-x-server", "best_answers_votes":362, "question_length":1303, "response_length":784 }, { "question":"How to export plots from matplotlib with transparent background? I am using matplotlib to make some graphs and unfortunately I cannot export them without the white background. 
In other words, when I export a plot like this and position it on top of another image, the white background hides what is behind it rather than allowing it to show through. How can I export plots with a transparent background instead?", "response":"Use the matplotlib savefig function with the keyword argument transparent=True to save the image as a png file. ``` In [28]: import numpy as np In [29]: from matplotlib.pyplot import plot, savefig In [30]: x = np.linspace(0,6,31) In [31]: y = np.exp(-0.5*x) * np.sin(x) In [32]: plot(x, y, 'bo-') Out[32]: [] In [33]: savefig('demo.png', transparent=True) ``` Result: Of course, that plot doesn't demonstrate the transparency. Here's a screenshot of the PNG file displayed using the ImageMagick display command. The checkerboard pattern is the background that is visible through the transparent parts of the PNG file.", "best_answers_score":0.8, "library_name":"matplotlib", "question_url":"https:\/\/stackoverflow.com\/questions\/15857647\/how-to-export-plots-from-matplotlib-with-transparent-background", "best_answers_votes":293, "question_length":411, "response_length":617 }, { "question":"Saving images in Python at a very high quality How can I save Python plots at very high quality? That is, when I keep zooming in on the object saved in a PDF file, why isn't there any blurring? Also, what would be the best mode to save it in? png, eps? Or some other? I can't do pdf, because there is a hidden number that happens that mess with Latexmk compilation.", "response":"If you are using Matplotlib and are trying to get good figures in a LaTeX document, save as an EPS. Specifically, try something like this after running the commands to plot the image: ``` plt.savefig('destination_path.eps', format='eps') ``` I have found that EPS files work best and the dpi parameter is what really makes them look good in a document. 
To specify the orientation of the figure before saving, simply call the following before the plt.savefig call, but after creating the plot (assuming you have plotted using an axes with the name ax): ``` ax.view_init(elev=elevation_angle, azim=azimuthal_angle) ``` Where elevation_angle is a number (in degrees) specifying the polar angle (down from vertical z axis) and the azimuthal_angle specifies the azimuthal angle (around the z axis). I find that it is easiest to determine these values by first plotting the image and then rotating it and watching the current values of the angles appear towards the bottom of the window just below the actual plot. Keep in mind that the x, y, z, positions appear by default, but they are replaced with the two angles when you start to click+drag+rotate the image.", "best_answers_score":0.8, "library_name":"matplotlib", "question_url":"https:\/\/stackoverflow.com\/questions\/16183462\/saving-images-in-python-at-a-very-high-quality", "best_answers_votes":232, "question_length":365, "response_length":1157 }, { "question":"Plotting a list of (x, y) coordinates I have a list of pairs (a, b) that I would like to plot with matplotlib in python as actual x-y coordinates. Currently, it is making two plots, where the index of the list gives the x-coordinate, and the first plot's y values are the as in the pairs and the second plot's y values are the bs in the pairs. To clarify, my data looks like this: li = [(a,b), (c,d), ... , (t, u)] and I want to do a one-liner that just calls plt.plot(). If I didn't require a one-liner I could trivially do: ```py xs = [x[0] for x in li] ys = [x[1] for x in li] plt.plot(xs, ys) ``` How can I get matplotlib to plot these pairs as x-y coordinates? 
Sample data ```py # sample data li = list(zip(range(1, 14), range(14, 27))) li \u2192 [(1, 14), (2, 15), (3, 16), (4, 17), (5, 18), (6, 19), (7, 20), (8, 21), (9, 22), (10, 23), (11, 24), (12, 25), (13, 26)] ``` Incorrect Plot ```py plt.plot(li) plt.title('Incorrect Plot:\\nEach index of the tuple plotted as separate lines') ``` Desired Plot This produces the correct plot, but too many lines of code are used to unpack li. I need to unpack and plot with a single line of code, not multiple list-comprehensions. ```py xs = [x[0] for x in li] ys = [x[1] for x in li] plt.plot(xs, ys) plt.title('Correct Plot:\\nBut uses too many lines to unpack li') ```", "response":"Given li in the question: ``` li = list(zip(range(1, 14), range(14, 27))) ``` To unpack the data from pairs into lists use zip: ``` x, y = zip(*li) x \u2192 (1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13) y \u2192 (14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26) ``` The one-liner uses the unpacking operator (*) to unpack the list of tuples for zip, and unpacks the zip object into the plot API. ``` plt.scatter(*zip(*li)) ``` ``` plt.plot(*zip(*li)) ```", "best_answers_score":0.8, "library_name":"matplotlib", "question_url":"https:\/\/stackoverflow.com\/questions\/21519203\/plotting-a-list-of-x-y-coordinates", "best_answers_votes":280, "question_length":1311, "response_length":446 }, { "question":"Plt.show shows full graph but savefig is cropping the image My code is successfully saving images to file, but it is cropping important details from the right hand side. Answers exist for fixing this problem when it arises for plt.show, but it is the savefig command that is incorrectly producing the graph in this example. How can this be fixed?
The relevant sample of my code: ``` import glob import os for file in glob.glob(\"*.oax\"): try: spc_file = open(file, 'r').read() newName = file[6:8] + '-' + file[4:6] + '-' + file[0:4] + ' ' + file[8:12] + ' UTC (Observed) - No Sea Breeze Day' plt.title(newName, fontsize=12, loc='left') plt.savefig('X:\/' + newName + '.png') plt.show() except Exception: pass ``` And the images (top is from plt.show and bottom is the file produced by savefig):", "response":"You may try ``` plt.savefig('X:\/' + newName + '.png', bbox_inches='tight') ``` Or you may define the figure size, like ``` fig = plt.figure(figsize=(9, 11)) ... plt.savefig(filename, bbox_inches = 'tight') ```", "best_answers_score":0.8, "library_name":"matplotlib", "question_url":"https:\/\/stackoverflow.com\/questions\/37427362\/plt-show-shows-full-graph-but-savefig-is-cropping-the-image", "best_answers_votes":351, "question_length":783, "response_length":204 }, { "question":"Plot smooth line with PyPlot I've got the following simple script that plots a graph: ``` import matplotlib.pyplot as plt import numpy as np T = np.array([6, 7, 8, 9, 10, 11, 12]) power = np.array([1.53E+03, 5.92E+02, 2.04E+02, 7.24E+01, 2.72E+01, 1.10E+01, 4.70E+00]) plt.plot(T,power) plt.show() ``` As it is now, the line goes straight from point to point which looks ok, but could be better in my opinion. What I want is to smooth the line between the points. In Gnuplot I would have plotted with smooth cplines. Is there an easy way to do this in PyPlot? I've found some tutorials, but they all seem rather complex.", "response":"You could use scipy.interpolate.spline to smooth out your data yourself: ``` from scipy.interpolate import spline # 300 represents number of points to make between T.min and T.max xnew = np.linspace(T.min(), T.max(), 300) power_smooth = spline(T, power, xnew) plt.plot(xnew,power_smooth) plt.show() ``` spline is deprecated in scipy 0.19.0, use BSpline class instead.
Switching from spline to BSpline isn't a straightforward copy\/paste and requires a little tweaking: ``` from scipy.interpolate import make_interp_spline, BSpline # 300 represents number of points to make between T.min and T.max xnew = np.linspace(T.min(), T.max(), 300) spl = make_interp_spline(T, power, k=3) # type: BSpline power_smooth = spl(xnew) plt.plot(xnew, power_smooth) plt.show() ``` Before: After:", "best_answers_score":0.8, "library_name":"matplotlib", "question_url":"https:\/\/stackoverflow.com\/questions\/5283649\/plot-smooth-line-with-pyplot", "best_answers_votes":231, "question_length":620, "response_length":777 }, { "question":"How to rotate x-axis tick labels in a pandas plot With the following code: ``` import matplotlib matplotlib.style.use('ggplot') import matplotlib.pyplot as plt import pandas as pd df = pd.DataFrame({ 'celltype':[\"foo\",\"bar\",\"qux\",\"woz\"], 's1':[5,9,1,7], 's2':[12,90,13,87]}) df = df[[\"celltype\",\"s1\",\"s2\"]] df.set_index([\"celltype\"],inplace=True) df.plot(kind='bar',alpha=0.75) plt.xlabel(\"\") ``` I made this plot: How can I rotate the x-axis tick labels to 0 degrees? 
I tried adding this, but it did not work: ``` plt.set_xticklabels(df.index,rotation=90) ```", "response":"Pass param rot=0 to rotate the xticklabels: ``` import matplotlib matplotlib.style.use('ggplot') import matplotlib.pyplot as plt import pandas as pd df = pd.DataFrame({ 'celltype':[\"foo\",\"bar\",\"qux\",\"woz\"], 's1':[5,9,1,7], 's2':[12,90,13,87]}) df = df[[\"celltype\",\"s1\",\"s2\"]] df.set_index([\"celltype\"],inplace=True) df.plot(kind='bar',alpha=0.75, rot=0) plt.xlabel(\"\") plt.show() ``` yields plot:", "best_answers_score":0.8, "library_name":"matplotlib", "question_url":"https:\/\/stackoverflow.com\/questions\/32244019\/how-to-rotate-x-axis-tick-labels-in-a-pandas-plot", "best_answers_votes":329, "question_length":556, "response_length":396 }, { "question":"What is the difference between drawing plots using plot, axes or figure in matplotlib? I'm kind of confused about what is going on behind the scenes when I draw plots in matplotlib; to be honest, I'm not clear on the hierarchy of plot, axes and figure. I read the documentation and it was helpful but I'm still confused... The below code draws the same plot in three different ways - ``` import numpy as np import matplotlib.pyplot as plt #creating the arrays for testing x = np.arange(1, 100) y = np.sqrt(x) #1st way plt.plot(x, y) #2nd way ax = plt.subplot() ax.plot(x, y) #3rd way figure = plt.figure() new_plot = figure.add_subplot(111) new_plot.plot(x, y) ``` Now my question is - What is the difference between all three? I mean, what is going on under the hood when any of the three methods is called? Which method should be used when, and what are the pros and cons of using any of those?", "response":"The names of objects Matplotlib is strongly object oriented and its principal objects are the figure and the axes(1). You can think of the figure as a canvas, of which you typically specify the dimensions and possibly e.g., the background color, etc.
You use the canvas, the figure, essentially in two ways, placing other objects on it (mostly axes, but also text labels etc) and saving its contents with savefig. You can think of an axes as a sort of Swiss Army knife, a handy object that offers a tool (e.g. .plot, .scatter, .hist etc) for everything, mostly. You can place one, two, ... many axes inside a figure using one of many different methods. The plt interface The plt procedural interface was originally developed to mimic the MATLAB\u2122 interface, but it is not really different from the object-oriented interface: even if you don't make a direct reference to the main objects (i.e., a figure and an axes), these objects are automatically instantiated, and each plt method is essentially translated to a call of one of the methods of the underlying fundamental objects: e.g., a plt.plot() is a hidden_axes.plot and a plt.savefig is a hidden_figure.savefig. At any moment you can get a handle on these hidden objects using plt.gcf and plt.gca, and this is sometimes necessary when one of the object methods has not been ported to a method in the plt namespace. I'd like to add that the plt namespace also contains a number of convenience methods(2) to instantiate, in different ways, figure and axes. Your examples 1st way ``` plt.plot(x, y) ``` Here you use only the plt interface, you can only use a single axes in each figure, but this is what you want when you are doing an exploration of your data, a quick recipe that gets the work done... 2nd way ``` ax = plt.subplot() ax.plot(x, y) ``` Here you use a convenience method in the plt namespace to give a name (and a handle) to your axes object, but btw there is also a hidden figure. You can later use the axes object to plot, to make a histogram etc, all things that you can do with the plt interface, but you can also access all its attributes and modify them with greater freedom.
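A small sketch of that freedom (the title, labels, and log scale are made up for illustration):

```python
import matplotlib
matplotlib.use('Agg')  # headless backend so this runs without a display
import matplotlib.pyplot as plt

ax = plt.subplot()           # convenience method; a figure is created behind the scenes
ax.plot([1, 2, 3], [2, 4, 8])
ax.set_title('Tweaked directly on the axes')
ax.set_xlabel('x')
ax.set_yscale('log')         # any attribute of the axes object is reachable
fig = ax.figure              # and the hidden figure is still accessible
```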
3rd way ``` figure = plt.figure() new_plot = figure.add_subplot(111) new_plot.plot(x, y) ``` Here you start instantiating a figure using a convenience method in the plt namespace and later you use only the object oriented interface. It is possible to bypass the plt convenience method and instantiate matplotlib.figure.Figure directly, but you then have to tweak the figure for a better interactive experience (after all, it's a convenience method). Personal recommendations I suggest bare plt.plot, plt.scatter in the context of an interactive session, possibly using IPython with its %matplotlib magic command, and also in the context of an exploratory Jupyter notebook. On the other hand, the object oriented approach, plus a few plt convenience methods(2), is the way to go if you have a permanent issue to solve once and for all with a customized arrangement of finely tuned subplots, or if you want to embed Matplotlib in the UI of a program you write. There is a large gray area between these extremes and if you ask me what to do I'd just say \"It depends\"... (1) I find the name axes a bit misleading, but probably it's just me. (2) The convenience methods in the plt namespace are REALLY CONVENIENT! In particular, when you instantiate Figures and Axes using them, all the minute details needed to deal with interactive windows are automatically taken into account.", "best_answers_score":0.8, "library_name":"matplotlib", "question_url":"https:\/\/stackoverflow.com\/questions\/37970424\/what-is-the-difference-between-drawing-plots-using-plot-axes-or-figure-in-matpl", "best_answers_votes":115, "question_length":822, "response_length":3496 }, { "question":"Remove or adapt border of frame of legend using matplotlib When plotting a plot using matplotlib: How to remove the box of the legend? How to change the color of the border of the legend box? How to remove only the border of the box of the legend?", "response":"When plotting a plot using matplotlib: How to remove the box of the legend?
``` plt.legend(frameon=False) ``` How to change the color of the border of the legend box? ``` leg = plt.legend() leg.get_frame().set_edgecolor('b') ``` How to remove only the border of the box of the legend? ``` leg = plt.legend() leg.get_frame().set_linewidth(0.0) ``` For the matplotlib object oriented approach: ``` axes.legend(frameon=False) leg = axes.legend() leg.get_frame().set_edgecolor('b') leg.get_frame().set_linewidth(0.0) ```", "best_answers_score":0.8, "library_name":"matplotlib", "question_url":"https:\/\/stackoverflow.com\/questions\/25540259\/remove-or-adapt-border-of-frame-of-legend-using-matplotlib", "best_answers_votes":327, "question_length":247, "response_length":516 }, { "question":"How to pick a new color for each plotted line within a figure I'd like to NOT specify a color for each plotted line, and have each line get a distinct color. But if I run: ``` from matplotlib import pyplot as plt for i in range(20): plt.plot([0, 1], [i, i]) plt.show() ``` then I get this output: If you look at the image above, you can see that matplotlib attempts to pick colors for each line that are different, but eventually it re-uses colors - the top ten lines use the same colors as the bottom ten. 
I just want to stop it from repeating already used colors AND\/OR feed it a list of colors to use.", "response":"I usually use the second one of these: ```py from matplotlib.pyplot import cm import numpy as np #variable n below should be number of curves to plot #version 1: color = cm.rainbow(np.linspace(0, 1, n)) for i, c in enumerate(color): plt.plot(x, y, c=c) #or version 2: color = iter(cm.rainbow(np.linspace(0, 1, n))) for i in range(n): c = next(color) plt.plot(x, y, c=c) ``` Example of 2:", "best_answers_score":0.8, "library_name":"matplotlib", "question_url":"https:\/\/stackoverflow.com\/questions\/4971269\/how-to-pick-a-new-color-for-each-plotted-line-within-a-figure", "best_answers_votes":223, "question_length":604, "response_length":387 }, { "question":"matplotlib: colorbars and its text labels I'd like to create a colorbar legend for a heatmap, such that the labels are in the center of each discrete color. Example borrowed from here: ``` import matplotlib.pyplot as plt import numpy as np from matplotlib.colors import ListedColormap #discrete color scheme cMap = ListedColormap(['white', 'green', 'blue','red']) #data np.random.seed(42) data = np.random.rand(4, 4) fig, ax = plt.subplots() heatmap = ax.pcolor(data, cmap=cMap) #legend cbar = plt.colorbar(heatmap) cbar.ax.set_yticklabels(['0','1','2','>3']) cbar.set_label('# of contacts', rotation=270) # put the major ticks at the middle of each cell ax.set_xticks(np.arange(data.shape[1]) + 0.5, minor=False) ax.set_yticks(np.arange(data.shape[0]) + 0.5, minor=False) ax.invert_yaxis() #labels column_labels = list('ABCD') row_labels = list('WXYZ') ax.set_xticklabels(column_labels, minor=False) ax.set_yticklabels(row_labels, minor=False) plt.show() ``` This generates the following plot: Ideally I'd like to generate a legend bar which has the four colors and for each color, a label in its center: 0,1,2,>3. 
How can this be achieved?", "response":"``` import matplotlib.pyplot as plt import numpy as np from matplotlib.colors import ListedColormap #discrete color scheme cMap = ListedColormap(['white', 'green', 'blue','red']) #data np.random.seed(42) data = np.random.rand(4, 4) fig, ax = plt.subplots() heatmap = ax.pcolor(data, cmap=cMap) #legend cbar = plt.colorbar(heatmap) cbar.ax.get_yaxis().set_ticks([]) for j, lab in enumerate(['$0$','$1$','$2$','$>3$']): cbar.ax.text(.5, (2 * j + 1) \/ 8.0, lab, ha='center', va='center') cbar.ax.get_yaxis().labelpad = 15 cbar.ax.set_ylabel('# of contacts', rotation=270) # put the major ticks at the middle of each cell ax.set_xticks(np.arange(data.shape[1]) + 0.5, minor=False) ax.set_yticks(np.arange(data.shape[0]) + 0.5, minor=False) ax.invert_yaxis() #labels column_labels = list('ABCD') row_labels = list('WXYZ') ax.set_xticklabels(column_labels, minor=False) ax.set_yticklabels(row_labels, minor=False) plt.show() ``` You were very close. Once you have a reference to the color bar axis, you can do what ever you want to it, including putting text labels in the middle. You might want to play with the formatting to make it more visible.", "best_answers_score":0.8, "library_name":"matplotlib", "question_url":"https:\/\/stackoverflow.com\/questions\/15908371\/matplotlib-colorbars-and-its-text-labels", "best_answers_votes":192, "question_length":1141, "response_length":1142 }, { "question":"A tool to convert MATLAB code to Python [closed] Closed. This question is seeking recommendations for software libraries, tutorials, tools, books, or other off-site resources. It does not meet Stack Overflow guidelines. It is not currently accepting answers. We don\u2019t allow questions seeking recommendations for software libraries, tutorials, tools, books, or other off-site resources. You can edit the question so it can be answered with facts and citations. Closed 9 years ago. 
I have a bunch of MATLAB code from my MS thesis which I now want to convert to Python (using numpy\/scipy and matplotlib) and distribute as open-source. I know the similarity between MATLAB and Python scientific libraries, and converting it manually will take no more than a fortnight (provided that I work towards it every day for some time). I was wondering if there was already any tool available which can do the conversion.", "response":"There are several tools for converting Matlab to Python code. The only one that's seen recent activity (last commit from June 2018) is Small Matlab to Python compiler (also developed here: SMOP@chiselapp). Other options include: LiberMate: translate from Matlab to Python and SciPy (Requires Python 2, last update 4 years ago). OMPC: Matlab to Python (a bit outdated). Mat2py: Matlab to Python (Requires Python 2). Also, for those interested in an interface between the two languages and not conversion: pymatlab: communicate from Python by sending data to the MATLAB workspace, operating on them with scripts and pulling back the resulting data. Python-Matlab wormholes: both directions of interaction supported. Python-Matlab bridge: use Matlab from within Python, offers matlab_magic for iPython, to execute normal matlab code from within ipython. PyMat: Control Matlab session from Python. pymat2: continuation of the seemingly abandoned PyMat. mlabwrap, mlabwrap-purepy: make Matlab look like Python library (based on PyMat). oct2py (repository): run GNU Octave commands from within Python. pymex: Embeds the Python Interpreter in Matlab, also on File Exchange. matpy: Access MATLAB in various ways: create variables, access .mat files, direct interface to MATLAB engine (requires MATLAB be installed). MatPy: Python package for numerical linear algebra and plotting with a MatLab-like interface.
By the way, it might be helpful to look here for other migration tips: http:\/\/bci2000.org\/downloads\/BCPy2000\/Migration.html On a different note, for people who might find it useful there is: matlab2fortran", "best_answers_score":0.8, "library_name":"matplotlib", "question_url":"https:\/\/stackoverflow.com\/questions\/9845292\/a-tool-to-convert-matlab-code-to-python", "best_answers_votes":186, "question_length":930, "response_length":1596 }, { "question":"How to share x axes of two subplots after they have been created I'm trying to share two subplots' axes, but I need to share the x axis after the figure was created. E.g. I create this figure: ```py import numpy as np import matplotlib.pyplot as plt t = np.arange(1000)\/100. x = np.sin(2*np.pi*10*t) y = np.cos(2*np.pi*10*t) fig = plt.figure() ax1 = plt.subplot(211) plt.plot(t,x) ax2 = plt.subplot(212) plt.plot(t,y) # some code to share both x axes plt.show() ``` Instead of the comment I want to insert some code to share both x axes. How do I do this? There are some relevant-sounding attributes _shared_x_axes and _shared_y_axes when I check the figure axes (fig.get_axes()) but I don't know how to link them.", "response":"The usual way to share axes is to create the shared properties at creation. Either ``` fig=plt.figure() ax1 = plt.subplot(211) ax2 = plt.subplot(212, sharex = ax1) ``` or ``` fig, (ax1, ax2) = plt.subplots(nrows=2, sharex=True) ``` Sharing the axes after they have been created should therefore not be necessary. However, if for any reason you need to share axes after they have been created (actually, using a different library which creates some subplots, like here, might be a reason), there would still be a solution: Using ``` ax2.sharex(ax1) ``` creates a link between the two axes, ax1 and ax2. In contrast to the sharing at creation time, you will have to set the xticklabels off manually for one of the axes (in case that is wanted).
A complete example: ``` import numpy as np import matplotlib.pyplot as plt t= np.arange(1000)\/100. x = np.sin(2*np.pi*10*t) y = np.cos(2*np.pi*10*t) fig=plt.figure() ax1 = plt.subplot(211) ax2 = plt.subplot(212) ax1.plot(t,x) ax2.plot(t,y) ax2.sharex(ax1) ax1.set_xticklabels([]) # ax2.autoscale() ## call autoscale if needed plt.show() ``` For a list of axes you would do: ``` for ax in axes[1:]: ax.sharex(axes[0]) ```", "best_answers_score":0.8, "library_name":"matplotlib", "question_url":"https:\/\/stackoverflow.com\/questions\/42973223\/how-to-share-x-axes-of-two-subplots-after-they-have-been-created", "best_answers_votes":265, "question_length":712, "response_length":1162 }, { "question":"How to connect scatterplot points with line using matplotlib I have two lists, dates and values. I want to plot them using matplotlib. The following creates a scatter plot of my data. ```py import matplotlib.pyplot as plt plt.scatter(dates,values) plt.show() ``` plt.plot(dates, values) creates a line graph. But what I really want is a scatterplot where the points are connected by a line. Similar to in R: ```r plot(dates, values) lines(dates, value, type=\"l\") ``` which gives me a scatterplot of points overlaid with a line connecting the points. How do I do this in python?", "response":"I think @Evert has the right answer: ``` plt.scatter(dates,values) plt.plot(dates, values) plt.show() ``` Which is pretty much the same as ``` plt.plot(dates, values, '-o') plt.show() ``` You can replace -o with another suitable format string as described in the documentation. 
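For instance (a minimal sketch with made-up placeholder data; run with the non-interactive Agg backend so it works headless), '--s' would give a dashed line with square markers:

```python
import matplotlib
matplotlib.use("Agg")  # headless backend so this sketch runs without a display
import matplotlib.pyplot as plt

dates = [1, 2, 3, 4]       # placeholder x data
values = [10, 30, 20, 40]  # placeholder y data

# '--s' = dashed linestyle + square markers; points are still connected
line, = plt.plot(dates, values, '--s')
```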
You can also split the choices of line and marker styles using the linestyle= and marker= keyword arguments.", "best_answers_score":0.8, "library_name":"matplotlib", "question_url":"https:\/\/stackoverflow.com\/questions\/20130227\/how-to-connect-scatterplot-points-with-line-using-matplotlib", "best_answers_votes":224, "question_length":577, "response_length":386 }, { "question":"How to plot multiple functions on the same figure How can I plot the following 3 functions (i.e. sin, cos and the addition), on the domain t, in the same figure? ```py import numpy as np import matplotlib.pyplot as plt t = np.linspace(0, 2*np.pi, 400) a = np.sin(t) b = np.cos(t) c = a + b ```", "response":"To plot multiple graphs on the same figure you will have to do: ``` from numpy import * import math import matplotlib.pyplot as plt t = linspace(0, 2*math.pi, 400) a = sin(t) b = cos(t) c = a + b plt.plot(t, a, 'r') # plotting t, a separately plt.plot(t, b, 'b') # plotting t, b separately plt.plot(t, c, 'g') # plotting t, c separately plt.show() ```", "best_answers_score":0.8, "library_name":"matplotlib", "question_url":"https:\/\/stackoverflow.com\/questions\/22276066\/how-to-plot-multiple-functions-on-the-same-figure", "best_answers_votes":263, "question_length":293, "response_length":351 }, { "question":"Moving x-axis to the top of a plot in matplotlib Based on this question about heatmaps in matplotlib, I wanted to move the x-axis titles to the top of the plot. ``` import matplotlib.pyplot as plt import numpy as np column_labels = list('ABCD') row_labels = list('WXYZ') data = np.random.rand(4,4) fig, ax = plt.subplots() heatmap = ax.pcolor(data, cmap=plt.cm.Blues) # put the major ticks at the middle of each cell ax.set_xticks(np.arange(data.shape[0])+0.5, minor=False) ax.set_yticks(np.arange(data.shape[1])+0.5, minor=False) # want a more natural, table-like display ax.invert_yaxis() ax.xaxis.set_label_position('top') # <-- This doesn't work! 
ax.set_xticklabels(row_labels, minor=False) ax.set_yticklabels(column_labels, minor=False) plt.show() ``` However, calling matplotlib's set_label_position (as notated above) doesn't seem to have the desired effect. Here's my output: What am I doing wrong?", "response":"Use ``` ax.xaxis.tick_top() ``` to place the tick marks at the top of the image. The command ``` ax.set_xlabel('X LABEL') ax.xaxis.set_label_position('top') ``` affects the label, not the tick marks. ``` import matplotlib.pyplot as plt import numpy as np column_labels = list('ABCD') row_labels = list('WXYZ') data = np.random.rand(4, 4) fig, ax = plt.subplots() heatmap = ax.pcolor(data, cmap=plt.cm.Blues) # put the major ticks at the middle of each cell ax.set_xticks(np.arange(data.shape[1]) + 0.5, minor=False) ax.set_yticks(np.arange(data.shape[0]) + 0.5, minor=False) # want a more natural, table-like display ax.invert_yaxis() ax.xaxis.tick_top() ax.set_xticklabels(column_labels, minor=False) ax.set_yticklabels(row_labels, minor=False) plt.show() ```", "best_answers_score":0.8, "library_name":"matplotlib", "question_url":"https:\/\/stackoverflow.com\/questions\/14406214\/moving-x-axis-to-the-top-of-a-plot-in-matplotlib", "best_answers_votes":242, "question_length":906, "response_length":760 }, { "question":"figure of imshow() is too small I'm trying to visualize a numpy array using imshow() since it's similar to imagesc() in Matlab. ``` imshow(random.rand(8, 90), interpolation='nearest') ``` The resulting figure is very small at the center of the grey window, while most of the space is unoccupied. How can I set the parameters to make the figure larger? I tried figsize=(xx,xx) and it's not what I want. Thanks!", "response":"If you don't give an aspect argument to imshow, it will use the value for image.aspect in your matplotlibrc. The default for this value in a new matplotlibrc is equal. So imshow will plot your array with equal aspect ratio. 
If you don't need an equal aspect you can set aspect to auto ``` imshow(random.rand(8, 90), interpolation='nearest', aspect='auto') ``` which gives the following figure If you want an equal aspect ratio you have to adapt your figsize according to the aspect ``` fig, ax = subplots(figsize=(18, 2)) ax.imshow(random.rand(8, 90), interpolation='nearest') tight_layout() ``` which gives you:", "best_answers_score":0.8, "library_name":"matplotlib", "question_url":"https:\/\/stackoverflow.com\/questions\/10540929\/figure-of-imshow-is-too-small", "best_answers_votes":215, "question_length":409, "response_length":612 }, { "question":"Fill between two vertical lines [duplicate] This question already has answers here: How to highlight specific x-value ranges (2 answers) Closed 3 years ago. I went through the examples in the matplotlib documentation, but it wasn't clear to me how I can make a plot that fills the area between two specific vertical lines. For example, say I want to create a plot between x=0.2 and x=4 (for the full y range of the plot). Should I use fill_between, fill or fill_betweenx? Can I use the where condition for this?", "response":"It sounds like you want axvspan, rather than one of the fill between functions. The differences is that axvspan (and axhspan) will fill up the entire y (or x) extent of the plot regardless of how you zoom. For example, let's use axvspan to highlight the x-region between 8 and 14: ``` import matplotlib.pyplot as plt fig, ax = plt.subplots() ax.plot(range(20)) ax.axvspan(8, 14, alpha=0.5, color='red') plt.show() ``` You could use fill_betweenx to do this, but the extents (both x and y) of the rectangle would be in data coordinates. With axvspan, the y-extents of the rectangle default to 0 and 1 and are in axes coordinates (in other words, percentages of the height of the plot). To illustrate this, let's make the rectangle extend from 10% to 90% of the height (instead of taking up the full extent). 
Try zooming or panning, and notice that the y-extents stay fixed in display space, while the x-extents move with the zoom\/pan: ``` import matplotlib.pyplot as plt fig, ax = plt.subplots() ax.plot(range(20)) ax.axvspan(8, 14, ymin=0.1, ymax=0.9, alpha=0.5, color='red') plt.show() ```", "best_answers_score":0.8, "library_name":"matplotlib", "question_url":"https:\/\/stackoverflow.com\/questions\/23248435\/fill-between-two-vertical-lines", "best_answers_votes":324, "question_length":511, "response_length":1089 }, { "question":"Hide tick label values but keep axis labels I have this image: ``` plt.plot(sim_1['t'],sim_1['V'],'k') plt.ylabel('V') plt.xlabel('t') plt.show() ``` I want to hide the numbers; if I use: ``` plt.axis('off') ``` ...I get this image: It also hides the labels, V and t. How can I keep the labels while hiding the values?", "response":"If you use the matplotlib object-oriented approach, this is a simple task using ax.set_xticklabels() and ax.set_yticklabels(). Here we can just set them to an empty list to remove any labels: ``` import matplotlib.pyplot as plt # Create Figure and Axes instances fig,ax = plt.subplots(1) # Make your plot, set your axes labels ax.plot(sim_1['t'],sim_1['V'],'k') ax.set_ylabel('V') ax.set_xlabel('t') # Turn off tick labels ax.set_yticklabels([]) ax.set_xticklabels([]) plt.show() ``` If you also want to remove the tick marks as well as the labels, you can use ax.set_xticks() and ax.set_yticks() and set those to an empty list as well: ``` ax.set_xticks([]) ax.set_yticks([]) ```", "best_answers_score":0.8, "library_name":"matplotlib", "question_url":"https:\/\/stackoverflow.com\/questions\/37039685\/hide-tick-label-values-but-keep-axis-labels", "best_answers_votes":249, "question_length":317, "response_length":680 }, { "question":"Prevent scientific notation I have the following code: ``` plt.plot(range(2003,2012,1),range(200300,201200,100)) # several solutions from other questions have not worked, including #
plt.ticklabel_format(style='sci', axis='x', scilimits=(-1000000,1000000)) # ax.get_xaxis().get_major_formatter().set_useOffset(False) plt.show() ``` which produces the following plot: How do I prevent scientific notation here? Is ticklabel_format broken? The following also does not resolve the issue of actually removing the offset: ```py plt.plot(np.arange(1e6, 3 * 1e7, 1e6)) plt.ticklabel_format(useOffset=False) ```", "response":"In your case, you're actually wanting to disable the offset. Using scientific notation is a separate setting from showing things in terms of an offset value. However, ax.ticklabel_format(useOffset=False) should have worked (though you've listed it as one of the things that didn't). For example: ``` fig, ax = plt.subplots() ax.plot(range(2003,2012,1),range(200300,201200,100)) ax.ticklabel_format(useOffset=False) plt.show() ``` If you want to disable both the offset and scientific notation, you'd use ax.ticklabel_format(useOffset=False, style='plain'). Difference between \"offset\" and \"scientific notation\" In matplotlib axis formatting, \"scientific notation\" refers to a multiplier for the numbers shown, while the \"offset\" is a separate term that is added. Consider this example: ``` import numpy as np import matplotlib.pyplot as plt x = np.linspace(1000, 1001, 100) y = np.linspace(1e-9, 1e9, 100) fig, ax = plt.subplots() ax.plot(x, y) plt.show() ``` The x-axis will have an offset (note the + sign) and the y-axis will use scientific notation (as a multiplier -- No plus sign). We can disable either one separately. The most convenient way is the ax.ticklabel_format method (or plt.ticklabel_format).
For example, if we call: ``` ax.ticklabel_format(style='plain') ``` We'll disable the scientific notation on the y-axis: And if we call ``` ax.ticklabel_format(useOffset=False) ``` We'll disable the offset on the x-axis, but leave the y-axis scientific notation untouched: Finally, we can disable both through: ``` ax.ticklabel_format(useOffset=False, style='plain') ```", "best_answers_score":0.8, "library_name":"matplotlib", "question_url":"https:\/\/stackoverflow.com\/questions\/28371674\/prevent-scientific-notation", "best_answers_votes":262, "question_length":584, "response_length":1579 }, { "question":"How to maximize a plt.show() window Just for curiosity I would like to know how to do this in the code below. I have been searching for an answer, but without success. ``` import numpy as np import matplotlib.pyplot as plt data=np.random.exponential(scale=180, size=10000) print ('the mean of the exponential distribution is: ') print np.average(data) plt.hist(data,bins=len(data)**0.5,normed=True, cumulative=True, facecolor='red', label='cumulative packet size data', alpha=0.5) plt.legend() plt.xlabel('something') plt.ylabel('something') plt.grid() plt.show() ```", "response":"I am on a Windows (WIN7), running Python 2.7.5 & Matplotlib 1.3.1. I was able to maximize Figure windows for TkAgg, QT4Agg, and wxAgg using the following lines: ```py from matplotlib import pyplot as plt ### for 'TkAgg' backend plt.figure(1) plt.switch_backend('TkAgg') #TkAgg (instead Qt4Agg) print '#1 Backend:',plt.get_backend() plt.plot([1,2,6,4]) mng = plt.get_current_fig_manager() ### works on Ubuntu??? >> did NOT work on windows # mng.resize(*mng.window.maxsize()) mng.window.state('zoomed') #works fine on Windows!
plt.show() #close the figure to run the next section ### for 'wxAgg' backend plt.figure(2) plt.switch_backend('wxAgg') print '#2 Backend:',plt.get_backend() plt.plot([1,2,6,4]) mng = plt.get_current_fig_manager() mng.frame.Maximize(True) plt.show() #close the figure to run the next section ### for 'Qt4Agg' backend plt.figure(3) plt.switch_backend('QT4Agg') #default on my system print '#3 Backend:',plt.get_backend() plt.plot([1,2,6,4]) figManager = plt.get_current_fig_manager() figManager.window.showMaximized() plt.show() ``` if you want to maximize multiple figures you can use ``` for fig in figs: mng = fig.canvas.manager # ... ``` Hope this summary of the previous answers (and some additions), combined into a working example (at least for Windows), helps.", "best_answers_score":0.8, "library_name":"matplotlib", "question_url":"https:\/\/stackoverflow.com\/questions\/12439588\/how-to-maximize-a-plt-show-window", "best_answers_votes":212, "question_length":561, "response_length":1290 }, { "question":"How to create major and minor gridlines with different linestyles I am currently using matplotlib.pyplot to create graphs and would like to have the major gridlines solid and black and the minor ones either greyed or dashed. In the grid properties, which=both\/major\/minor, and then color and linestyle are defined simply by linestyle. Is there a way to specify minor linestyle only? The appropriate code I have so far is ``` plt.plot(current, counts, 'rd', markersize=8) plt.yscale('log') plt.grid(b=True, which='both', color='0.65', linestyle='-') ```", "response":"Actually, it is as simple as setting major and minor separately: ``` In [9]: plot([23, 456, 676, 89, 906, 34, 2345]) Out[9]: [] In [10]: yscale('log') In [11]: grid(visible=True, which='major', color='b', linestyle='-') In [12]: grid(visible=True, which='minor', color='r', linestyle='--') ``` The gotcha with minor grids is that you have to have minor tick marks turned on too.
In the above code this is done by yscale('log'), but it can also be done with plt.minorticks_on(). Note: before matplotlib 3.5, the visible parameter was named b.", "best_answers_score":0.8, "library_name":"matplotlib", "question_url":"https:\/\/stackoverflow.com\/questions\/9127434\/how-to-create-major-and-minor-gridlines-with-different-linestyles", "best_answers_votes":229, "question_length":551, "response_length":536 }, { "question":"Matplotlib discrete colorbar I am trying to make a discrete colorbar for a scatterplot in matplotlib. I have my x, y data and for each point an integer tag value which I want to be represented with a unique colour, e.g. ``` plt.scatter(x, y, c=tag) ``` Typically tag will be an integer ranging from 0-20, but the exact range may change. So far I have just used the default settings, e.g. ``` plt.colorbar() ``` which gives a continuous range of colours. Ideally I would like a set of n discrete colours (n=20 in this example). Even better would be to get a tag value of 0 to produce a gray colour and 1-20 to be colourful. I have found some 'cookbook' scripts but they are very complicated and I don't think they are the right way to solve a seemingly simple problem", "response":"You can create a custom discrete colorbar quite easily by using a BoundaryNorm as normalizer for your scatter. The quirky bit (in my method) is making 0 show up as grey. For images I often use cmap.set_bad() and convert my data to a numpy masked array. That would be much easier to make 0 grey, but I couldn't get this to work with the scatter or the custom cmap. As an alternative you can make your own cmap from scratch, or read out an existing one and override just some specific entries.
``` import numpy as np import matplotlib as mpl import matplotlib.pyplot as plt fig, ax = plt.subplots(1, 1, figsize=(6, 6)) # setup the plot x = np.random.rand(20) # define the data y = np.random.rand(20) # define the data tag = np.random.randint(0, 20, 20) tag[10:12] = 0 # make sure there are some 0 values to show up as grey cmap = plt.cm.jet # define the colormap # extract all colors from the .jet map cmaplist = [cmap(i) for i in range(cmap.N)] # force the first color entry to be grey cmaplist[0] = (.5, .5, .5, 1.0) # create the new map cmap = mpl.colors.LinearSegmentedColormap.from_list( 'Custom cmap', cmaplist, cmap.N) # define the bins and normalize bounds = np.linspace(0, 20, 21) norm = mpl.colors.BoundaryNorm(bounds, cmap.N) # make the scatter scat = ax.scatter(x, y, c=tag, s=np.random.randint(100, 500, 20), cmap=cmap, norm=norm) # create a second axes for the colorbar ax2 = fig.add_axes([0.95, 0.1, 0.03, 0.8]) cb = mpl.colorbar.ColorbarBase(ax2, cmap=cmap, norm=norm, spacing='proportional', ticks=bounds, boundaries=bounds, format='%1i') ax.set_title('Well defined discrete colors') ax2.set_ylabel('Very custom cbar [-]', size=12) ``` I personally think that with 20 different colors it's a bit hard to read the specific value, but that's up to you of course.", "best_answers_score":0.8, "library_name":"matplotlib", "question_url":"https:\/\/stackoverflow.com\/questions\/14777066\/matplotlib-discrete-colorbar", "best_answers_votes":138, "question_length":762, "response_length":1774 }, { "question":"Adding an arbitrary line to a matplotlib plot in ipython notebook I'm rather new to both python\/matplotlib and using it through the ipython notebook. I'm trying to add some annotation lines to an existing graph and I can't figure out how to render the lines on a graph.
So, for example, if I plot the following: ``` import numpy as np np.random.seed(5) x = arange(1, 101) y = 20 + 3 * x + np.random.normal(0, 60, 100) p = plot(x, y, \"o\") ``` I get the following graph: So how would I add a vertical line from (70,100) up to (70,250)? What about a diagonal line from (70,100) to (90,200)? I've tried a few things with Line2D() resulting in nothing but confusion on my part. In R I would simply use the segments() function which would add line segments. Is there an equivalent in matplotlib?", "response":"You can directly plot the lines you want by feeding the plot command with the corresponding data (boundaries of the segments): plot([x1, x2], [y1, y2], color='k', linestyle='-', linewidth=2) (of course you can choose the color, line width, line style, etc.) From your example: ``` import numpy as np import matplotlib.pyplot as plt np.random.seed(5) x = np.arange(1, 101) y = 20 + 3 * x + np.random.normal(0, 60, 100) plt.plot(x, y, \"o\") # draw vertical line from (70,100) to (70, 250) plt.plot([70, 70], [100, 250], 'k-', lw=2) # draw diagonal line from (70, 90) to (90, 200) plt.plot([70, 90], [90, 200], 'k-') plt.show() ```", "best_answers_score":0.8, "library_name":"matplotlib", "question_url":"https:\/\/stackoverflow.com\/questions\/12864294\/adding-an-arbitrary-line-to-a-matplotlib-plot-in-ipython-notebook", "best_answers_votes":231, "question_length":789, "response_length":627 }, { "question":"How to place inline labels in a line plot In Matplotlib, it's not too tough to make a legend (example_legend(), below), but I think it's better style to put labels right on the curves being plotted (as in example_inline(), below). This can be very fiddly, because I have to specify coordinates by hand, and, if I re-format the plot, I probably have to reposition the labels. Is there a way to automatically generate labels on curves in Matplotlib? Bonus points for being able to orient the text at an angle corresponding to the angle of the curve. 
``` import numpy as np import matplotlib.pyplot as plt def example_legend(): plt.clf() x = np.linspace(0, 1, 101) y1 = np.sin(x * np.pi \/ 2) y2 = np.cos(x * np.pi \/ 2) plt.plot(x, y1, label='sin') plt.plot(x, y2, label='cos') plt.legend() ``` ``` def example_inline(): plt.clf() x = np.linspace(0, 1, 101) y1 = np.sin(x * np.pi \/ 2) y2 = np.cos(x * np.pi \/ 2) plt.plot(x, y1, label='sin') plt.plot(x, y2, label='cos') plt.text(0.08, 0.2, 'sin') plt.text(0.9, 0.2, 'cos') ```", "response":"Update: User cphyc has kindly created a Github repository for the code in this answer (see here), and bundled the code into a package which may be installed using pip install matplotlib-label-lines. Pretty Picture: In matplotlib it's pretty easy to label contour plots (either automatically or by manually placing labels with mouse clicks). There does not (yet) appear to be any equivalent capability to label data series in this fashion! There may be some semantic reason for not including this feature which I am missing. Regardless, I have written the following module which allows for semi-automatic plot labelling. It requires only numpy and a couple of functions from the standard math library. Description The default behaviour of the labelLines function is to space the labels evenly along the x axis (automatically placing at the correct y-value of course). If you want, you can just pass an array of the x co-ordinates of each of the labels. You can even tweak the location of one label (as shown in the bottom right plot) and space the rest evenly if you like. In addition, the labelLines function does not account for the lines which have not had a label assigned in the plot command (or more accurately if the label contains '_line'). Keyword arguments passed to labelLines or labelLine are passed on to the text function call (some keyword arguments are set if the calling code chooses not to specify).
Issues Annotation bounding boxes sometimes interfere undesirably with other curves, as shown by the 1 and 10 annotations in the top left plot. I'm not even sure this can be avoided. It would be nice to specify a y position instead sometimes. It's still an iterative process to get annotations in the right location. It only works when the x-axis values are floats. Gotchas By default, the labelLines function assumes that all data series span the range specified by the axis limits. Take a look at the blue curve in the top left plot of the pretty picture. If there were only data available for the x range 0.5-1 then we couldn't possibly place a label at the desired location (which is a little less than 0.2). See this question for a particularly nasty example. Right now, the code does not intelligently identify this scenario and re-arrange the labels, however there is a reasonable workaround. The labelLines function takes the xvals argument: a list of x-values specified by the user instead of the default linear distribution across the width. So the user can decide which x-values to use for the label placement of each data series. Also, I believe this is the first answer to complete the bonus objective of aligning the labels with the curve they're on.
:) label_lines.py: ``` from math import atan2,degrees import numpy as np #Label line with line2D label data def labelLine(line,x,label=None,align=True,**kwargs): ax = line.axes xdata = line.get_xdata() ydata = line.get_ydata() if (x < xdata[0]) or (x > xdata[-1]): print('x label location is outside data range!') return #Find corresponding y co-ordinate and angle of the line ip = 1 for i in range(len(xdata)): if x < xdata[i]: ip = i break y = ydata[ip-1] + (ydata[ip]-ydata[ip-1])*(x-xdata[ip-1])\/(xdata[ip]-xdata[ip-1]) if not label: label = line.get_label() if align: #Compute the slope dx = xdata[ip] - xdata[ip-1] dy = ydata[ip] - ydata[ip-1] ang = degrees(atan2(dy,dx)) #Transform to screen co-ordinates pt = np.array([x,y]).reshape((1,2)) trans_angle = ax.transData.transform_angles(np.array((ang,)),pt)[0] else: trans_angle = 0 #Set a bunch of keyword arguments if 'color' not in kwargs: kwargs['color'] = line.get_color() if ('horizontalalignment' not in kwargs) and ('ha' not in kwargs): kwargs['ha'] = 'center' if ('verticalalignment' not in kwargs) and ('va' not in kwargs): kwargs['va'] = 'center' if 'backgroundcolor' not in kwargs: kwargs['backgroundcolor'] = ax.get_facecolor() if 'clip_on' not in kwargs: kwargs['clip_on'] = True if 'zorder' not in kwargs: kwargs['zorder'] = 2.5 ax.text(x,y,label,rotation=trans_angle,**kwargs) def labelLines(lines,align=True,xvals=None,**kwargs): ax = lines[0].axes labLines = [] labels = [] #Take only the lines which have labels other than the default ones for line in lines: label = line.get_label() if \"_line\" not in label: labLines.append(line) labels.append(label) if xvals is None: xmin,xmax = ax.get_xlim() xvals = np.linspace(xmin,xmax,len(labLines)+2)[1:-1] for line,x,label in zip(labLines,xvals,labels): labelLine(line,x,label,align,**kwargs) ``` Test code to generate the pretty picture above: ``` import numpy as np from matplotlib import pyplot as plt from scipy.stats import loglaplace,chi2 from labellines import * X = np.linspace(0,1,500) A = [1,2,5,10,20] funcs
= [np.arctan,np.sin,loglaplace(4).pdf,chi2(5).pdf] plt.subplot(221) for a in A: plt.plot(X,np.arctan(a*X),label=str(a)) labelLines(plt.gca().get_lines(),zorder=2.5) plt.subplot(222) for a in A: plt.plot(X,np.sin(a*X),label=str(a)) labelLines(plt.gca().get_lines(),align=False,fontsize=14) plt.subplot(223) for a in A: plt.plot(X,loglaplace(4).pdf(a*X),label=str(a)) xvals = [0.8,0.55,0.22,0.104,0.045] labelLines(plt.gca().get_lines(),align=False,xvals=xvals,color='k') plt.subplot(224) for a in A: plt.plot(X,chi2(5).pdf(a*X),label=str(a)) lines = plt.gca().get_lines() l1=lines[-1] labelLine(l1,0.6,label=r'$Re=${}'.format(l1.get_label()),ha='left',va='bottom',align = False) labelLines(lines[:-1],align=False) plt.show() ```", "best_answers_score":0.8, "library_name":"matplotlib", "question_url":"https:\/\/stackoverflow.com\/questions\/16992038\/how-to-place-inline-labels-in-a-line-plot", "best_answers_votes":138, "question_length":1022, "response_length":5419 }, { "question":"Ubuntu running `pip install` gives error 'The following required packages can not be built: * freetype' When performing pip install -r requirements.txt, I get the following error during the stage where it is installing matplotlib: ``` REQUIRED DEPENDENCIES AND EXTENSIONS numpy: yes [not found. pip may install it below.] dateutil: yes [dateutil was not found. It is required for date axis support. pip\/easy_install may attempt to install it after matplotlib.] tornado: yes [tornado was not found. It is required for the WebAgg backend. pip\/easy_install may attempt to install it after matplotlib.] pyparsing: yes [pyparsing was not found. It is required for mathtext support. pip\/easy_install may attempt to install it after matplotlib.] pycxx: yes [Couldn't import. Using local copy.] libagg: yes [pkg-config information for 'libagg' could not be found. Using local copy.] freetype: no [pkg-config information for 'freetype2' could not be found.] ``` ... 
``` The following required packages can not be built: * freetype ``` Shouldn't pip install -r requirements.txt also install freetype? How should freetype be installed in Ubuntu 12.04 so it works with matplotlib?", "response":"No. pip will not install system-level dependencies. This means pip will not install RPM(s) (Redhat based systems) or DEB(s) (Debian based systems). To install system dependencies you will need to use one of the following methods depending on your system. Ubuntu\/Debian: ``` apt-get install libfreetype6-dev ``` To search for packages on Ubuntu\/Debian based systems: ``` apt-cache search ``` e.g: ``` apt-cache search freetype | grep dev ``` Redhat\/CentOS\/Fedora: ``` yum -y install freetype-devel ``` To search for packages on Redhat\/CentOS\/Fedora based systems: ``` yum search ``` e.g: ``` yum search freetype | grep devel ``` Mac OS X: (via Homebrew) ``` brew install freetype ``` To search for packages on Mac OS X based systems: ``` brew search ``` e.g: ``` brew search freetype ```", "best_answers_score":0.8, "library_name":"matplotlib", "question_url":"https:\/\/stackoverflow.com\/questions\/20533426\/ubuntu-running-pip-install-gives-error-the-following-required-packages-can-no", "best_answers_votes":232, "question_length":1168, "response_length":789 }, { "question":"How to display multiple images in one figure [duplicate] This question already has answers here: Multiple figures in a single window (7 answers) Closed 7 years ago. I am trying to display 20 random images on a single Figure. The images are indeed displayed, but they are overlaid. I am using: ``` import numpy as np import matplotlib.pyplot as plt w=10 h=10 fig=plt.figure() for i in range(1,20): img = np.random.randint(10, size=(h,w)) fig.add_subplot(i,2,1) plt.imshow(img) plt.show() ``` I would like them to appear naturally in a grid layout (say 4x5), each with the same size. Part of the problem is that I do not know what the arguments to add_subplot mean. 
The documentation states that the arguments are the number of rows, number of columns, and plot number. There is no positioning argument. Additionally, the plot number can only be 1 or 2. How can I achieve this?", "response":"Here is my approach that you may try: ``` import numpy as np import matplotlib.pyplot as plt w = 10 h = 10 fig = plt.figure(figsize=(8, 8)) columns = 4 rows = 5 for i in range(1, columns*rows +1): img = np.random.randint(10, size=(h,w)) fig.add_subplot(rows, columns, i) plt.imshow(img) plt.show() ``` The resulting image: (Original answer date: Oct 7 '17 at 4:20) Edit 1 Since this answer is popular beyond my expectation, and I see that a small change is needed to enable flexibility in manipulating the individual plots, I offer this new version of the original code. In essence, it provides: access to individual axes of subplots; the possibility to plot more features on selected axes\/subplots. New code: ``` import numpy as np import matplotlib.pyplot as plt w = 10 h = 10 fig = plt.figure(figsize=(9, 13)) columns = 4 rows = 5 # prep (x,y) for extra plotting xs = np.linspace(0, 2*np.pi, 60) # from 0 to 2pi ys = np.abs(np.sin(xs)) # absolute of sine # ax enables access to manipulate each of subplots ax = [] for i in range(columns*rows): img = np.random.randint(10, size=(h,w)) # create subplot and append to ax ax.append( fig.add_subplot(rows, columns, i+1) ) ax[-1].set_title(\"ax:\"+str(i)) # set title plt.imshow(img, alpha=0.25) # do extra plots on selected axes\/subplots # note: index starts with 0 ax[2].plot(xs, 3*ys) ax[19].plot(ys**2, xs) plt.show() # finally, render the plot ``` The resulting plot: Edit 2 In the previous example, the code provides access to the sub-plots with a single index, which is inconvenient when the figure has many rows\/columns of sub-plots. Here is an alternative.
The code below provides access to the sub-plots with [row_index][column_index], which is more suitable for manipulation of array of many sub-plots. ``` import matplotlib.pyplot as plt import numpy as np # settings h, w = 10, 10 # for raster image nrows, ncols = 5, 4 # array of sub-plots figsize = [6, 8] # figure size, inches # prep (x,y) for extra plotting on selected sub-plots xs = np.linspace(0, 2*np.pi, 60) # from 0 to 2pi ys = np.abs(np.sin(xs)) # absolute of sine # create figure (fig), and array of axes (ax) fig, ax = plt.subplots(nrows=nrows, ncols=ncols, figsize=figsize) # plot simple raster image on each sub-plot for i, axi in enumerate(ax.flat): # i runs from 0 to (nrows*ncols-1) # axi is equivalent with ax[rowid][colid] img = np.random.randint(10, size=(h,w)) axi.imshow(img, alpha=0.25) # get indices of row\/column rowid = i \/\/ ncols colid = i % ncols # write row\/col indices as axes' title for identification axi.set_title(\"Row:\"+str(rowid)+\", Col:\"+str(colid)) # one can access the axes by ax[row_id][col_id] # do additional plotting on ax[row_id][col_id] of your choice ax[0][2].plot(xs, 3*ys, color='red', linewidth=3) ax[4][3].plot(ys**2, xs, color='green', linewidth=3) plt.tight_layout(True) plt.show() ``` The resulting plot: Ticks and Tick-labels for Array of Subplots Some of the ticks and tick-labels accompanying the subplots can be hidden to get cleaner plot if all of the subplots share the same value ranges. All of the ticks and tick-labels can be hidden except for the outside edges on the left and bottom like this plot. 
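As a quick self-contained sketch of that behavior (a hypothetical 2x2 grid for illustration, not part of the original answer; sharing the axes is what hides the inner tick labels):

```python
import matplotlib
matplotlib.use('Agg')  # render off-screen; assumed for this sketch
import matplotlib.pyplot as plt
import numpy as np

# hypothetical small 2x2 grid; with sharex/sharey, tick labels remain
# only on the bottom row and the left column
fig, ax = plt.subplots(nrows=2, ncols=2, sharex=True, sharey=True)
for axi in ax.flat:
    axi.imshow(np.random.randint(10, size=(5, 5)))
print(ax.shape)  # -> (2, 2)
```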
To achieve the plot with only shared tick-labels on the left and bottom edges, you can do the following:- Add options sharex=True, sharey=True in fig, ax = plt.subplots() That line of code will become: ``` fig,ax=plt.subplots(nrows=nrows,ncols=ncols,figsize=figsize,sharex=True,sharey=True) ``` To specify the required number of ticks and labels to plot, add this code inside the body of for i, axi in enumerate(ax.flat): ``` axi.xaxis.set_major_locator(plt.MaxNLocator(5)) axi.yaxis.set_major_locator(plt.MaxNLocator(4)) ``` the numbers 5 and 4 are the numbers of ticks\/tick_labels to plot. You may need other values that suit your plots.", "best_answers_score":0.8, "library_name":"matplotlib", "question_url":"https:\/\/stackoverflow.com\/questions\/46615554\/how-to-display-multiple-images-in-one-figure", "best_answers_votes":332, "question_length":875, "response_length":3822 }, { "question":"plot different color for different categorical levels I have this data frame diamonds which is composed of variables like (carat, price, color), and I want to draw a scatter plot of price to carat for each color, which means each diamond color gets its own color in the plot. This is easy in R with ggplot: ``` ggplot(aes(x=carat, y=price, color=color), #by setting color=color, ggplot automatically draw in different colors data=diamonds) + geom_point(stat='summary', fun.y=median) ``` I wonder how this could be done in Python using matplotlib?
PS: I know about auxiliary plotting packages, such as seaborn and ggplot for python, but I'd rather not use them; I just want to find out whether it is possible to do the job using matplotlib alone, ;P", "response":"Imports and Sample DataFrame ```py import matplotlib.pyplot as plt import pandas as pd import seaborn as sns # for sample data from matplotlib.lines import Line2D # for legend handle # DataFrame used for all options df = sns.load_dataset('diamonds') carat cut color clarity depth table price x y z 0 0.23 Ideal E SI2 61.5 55.0 326 3.95 3.98 2.43 1 0.21 Premium E SI1 59.8 61.0 326 3.89 3.84 2.31 2 0.23 Good E VS1 56.9 65.0 327 4.05 4.07 2.31 ``` With matplotlib You can pass plt.scatter a c argument, which allows you to select the colors. The following code defines a colors dictionary to map the diamond colors to the plotting colors. ```py fig, ax = plt.subplots(figsize=(6, 6)) colors = {'D':'tab:blue', 'E':'tab:orange', 'F':'tab:green', 'G':'tab:red', 'H':'tab:purple', 'I':'tab:brown', 'J':'tab:pink'} ax.scatter(df['carat'], df['price'], c=df['color'].map(colors)) # add a legend handles = [Line2D([0], [0], marker='o', color='w', markerfacecolor=v, label=k, markersize=8) for k, v in colors.items()] ax.legend(title='color', handles=handles, bbox_to_anchor=(1.05, 1), loc='upper left') plt.show() ``` df['color'].map(colors) effectively maps the colors from \"diamond\" to \"plotting\". (Forgive me for not putting another example image up, I think 2 is enough :P) With seaborn You can use seaborn which is a wrapper around matplotlib that makes it look prettier by default (rather opinion-based, I know :P) but also adds some plotting functions. For this you could use seaborn.lmplot with fit_reg=False (which prevents it from automatically doing some regression). sns.scatterplot(x='carat', y='price', data=df, hue='color', ec=None) also does the same thing. Selecting hue='color' tells seaborn to split and plot the data based on the unique values in the 'color' column.
```py sns.lmplot(x='carat', y='price', data=df, hue='color', fit_reg=False) ``` With pandas.DataFrame.groupby & pandas.DataFrame.plot If you don't want to use seaborn, use pandas.groupby to get the colors alone, and then plot them using just matplotlib, but you'll have to manually assign colors as you go, I've added an example below: ```py fig, ax = plt.subplots(figsize=(6, 6)) grouped = df.groupby('color') for key, group in grouped: group.plot(ax=ax, kind='scatter', x='carat', y='price', label=key, color=colors[key]) plt.show() ``` This code assumes the same DataFrame as above, and then groups it based on color. It then iterates over these groups, plotting for each one. To select a color, I've created a colors dictionary, which can map the diamond color (for instance D) to a real color (for instance tab:blue).", "best_answers_score":0.8, "library_name":"matplotlib", "question_url":"https:\/\/stackoverflow.com\/questions\/26139423\/plot-different-color-for-different-categorical-levels", "best_answers_votes":227, "question_length":736, "response_length":2602 }, { "question":"Can Pandas plot a histogram of dates? I've taken my Series and coerced it to a datetime column of dtype=datetime64[ns] (though only need day resolution...not sure how to change). 
``` import pandas as pd df = pd.read_csv('somefile.csv') column = df['date'] column = pd.to_datetime(column, coerce=True) ``` but plotting doesn't work: ``` ipdb> column.plot(kind='hist') *** TypeError: ufunc add cannot use operands with types dtype('>> ax.bar(D,range(1,len(D)+1,1),0.5) Traceback (most recent call last): File \"\", line 1, in File \"\/usr\/local\/lib\/python2.7\/site-packages\/matplotlib\/axes.py\", line 4904, in bar self.add_patch(r) File \"\/usr\/local\/lib\/python2.7\/site-packages\/matplotlib\/axes.py\", line 1570, in add_patch self._update_patch_limits(p) File \"\/usr\/local\/lib\/python2.7\/site-packages\/matplotlib\/axes.py\", line 1588, in _update_patch_limits xys = patch.get_patch_transform().transform(vertices) File \"\/usr\/local\/lib\/python2.7\/site-packages\/matplotlib\/patches.py\", line 580, in get_patch_transform self._update_patch_transform() File \"\/usr\/local\/lib\/python2.7\/site-packages\/matplotlib\/patches.py\", line 576, in _update_patch_transform bbox = transforms.Bbox.from_bounds(x, y, width, height) File \"\/usr\/local\/lib\/python2.7\/site-packages\/matplotlib\/transforms.py\", line 786, in from_bounds return Bbox.from_extents(x0, y0, x0 + width, y0 + height) TypeError: coercing to Unicode: need string or buffer, float found ```", "response":"You can do it in two lines by first plotting the bar chart and then setting the appropriate ticks: ``` import matplotlib.pyplot as plt D = {u'Label1':26, u'Label2': 17, u'Label3':30} plt.bar(range(len(D)), list(D.values()), align='center') plt.xticks(range(len(D)), list(D.keys())) # # for python 2.x: # plt.bar(range(len(D)), D.values(), align='center') # python 2.x # plt.xticks(range(len(D)), D.keys()) # in python 2.x plt.show() ``` Note that the penultimate line should read plt.xticks(range(len(D)), list(D.keys())) in python3, because D.keys() returns a generator, which matplotlib cannot use directly.", "best_answers_score":0.8, "library_name":"matplotlib", 
"question_url":"https:\/\/stackoverflow.com\/questions\/16010869\/plot-a-bar-using-matplotlib-using-a-dictionary", "best_answers_votes":208, "question_length":1385, "response_length":609 }, { "question":"OpenCV giving wrong color to colored images on loading I'm loading in a color image in Python OpenCV and plotting the same. However, the image I get has it's colors all mixed up. Here is the code: ``` import cv2 import numpy as np from numpy import array, arange, uint8 from matplotlib import pyplot as plt img = cv2.imread('lena_caption.png', cv2.IMREAD_COLOR) bw_img = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY) images = [] images.append(img) images.append(bw_img) titles = ['Original Image','BW Image'] for i in xrange(len(images)): plt.subplot(1,2,i+1),plt.imshow(images[i],'gray') plt.title(titles[i]) plt.xticks([]),plt.yticks([]) plt.show() ``` Here is the original image: And here is the plotted image:", "response":"OpenCV uses BGR as its default colour order for images, matplotlib uses RGB. When you display an image loaded with OpenCv in matplotlib the channels will be back to front. The easiest way of fixing this is to use OpenCV to explicitly convert it back to RGB, much like you do when creating the greyscale image. ``` RGB_img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB) ``` And then use that in your plot.", "best_answers_score":0.8, "library_name":"matplotlib", "question_url":"https:\/\/stackoverflow.com\/questions\/39316447\/opencv-giving-wrong-color-to-colored-images-on-loading", "best_answers_votes":258, "question_length":705, "response_length":396 }, { "question":"plot with custom text for x axis points I am drawing a plot using matplotlib and python like the sample code below. ``` x = array([0,1,2,3]) y = array([20,21,22,23]) plot(x,y) show() ``` As it is the code above on the x axis I will see drawn values 0.0, 0.5, 1.0, 1.5 i.e. the same values of my reference x values. Is there anyway to map each point of x to a different string? 
So for example I want the x axis to show month names (strings Jun, July, ...) or other strings like people's names (\"John\", \"Arnold\", ...) or clock times (\"12:20\", \"12:21\", \"12:22\", ...). Do you know what I can do, or what function to have a look at? Could matplotlib.ticker be of help for my purpose?", "response":"You can manually set xticks (and yticks) using pyplot.xticks: ``` import matplotlib.pyplot as plt import numpy as np x = np.array([0,1,2,3]) y = np.array([20,21,22,23]) my_xticks = ['John','Arnold','Mavis','Matt'] plt.xticks(x, my_xticks) plt.plot(x, y) plt.show() ```", "best_answers_score":0.8, "library_name":"matplotlib", "question_url":"https:\/\/stackoverflow.com\/questions\/3100985\/plot-with-custom-text-for-x-axis-points", "best_answers_votes":260, "question_length":676, "response_length":268 }, { "question":"Scatter plot and Color mapping in Python I have a range of points x and y stored in numpy arrays. Those represent x(t) and y(t) where t=0...T-1 I am plotting a scatter plot using ``` import matplotlib.pyplot as plt plt.scatter(x,y) plt.show() ``` I would like to have a colormap representing the time (therefore coloring the points depending on the index in the numpy arrays) What is the easiest way to do so?", "response":"Here is an example ``` import numpy as np import matplotlib.pyplot as plt x = np.random.rand(100) y = np.random.rand(100) t = np.arange(100) plt.scatter(x, y, c=t) plt.show() ``` Here you are setting the color based on the index, t, which is just an array of [0, 1, ..., 99]. Perhaps an easier-to-understand example is the slightly simpler ``` import numpy as np import matplotlib.pyplot as plt x = np.arange(100) y = x t = x plt.scatter(x, y, c=t) plt.show() ``` Note that the array you pass as c doesn't need to have any particular order or type, i.e. it doesn't need to be sorted or integers as in these examples.
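For instance, here is a small sketch (with made-up values, not from the original answer) showing that unsorted, non-integer values are fine, because they get normalized into the colormap range:

```python
import matplotlib
matplotlib.use('Agg')  # off-screen backend so the sketch runs headless
import matplotlib.pyplot as plt
import numpy as np

x = np.array([0.1, 0.4, 0.7, 0.9])
y = np.array([1.0, 3.0, 2.0, 4.0])
t = np.array([2.5, -1.0, 7.3, 0.0])  # unsorted floats
sc = plt.scatter(x, y, c=t)
plt.gcf().canvas.draw()  # force a render so the color limits are resolved
print(sc.get_clim())  # the limits span min(t)..max(t)
```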
The plotting routine will scale the colormap such that the minimum\/maximum values in c correspond to the bottom\/top of the colormap. Colormaps You can change the colormap by adding ``` import matplotlib.cm as cm plt.scatter(x, y, c=t, cmap=cm.cmap_name) ``` Importing matplotlib.cm is optional as you can call colormaps as cmap=\"cmap_name\" just as well. There is a reference page of colormaps showing what each looks like. Also know that you can reverse a colormap by simply calling it as cmap_name_r. So either ``` plt.scatter(x, y, c=t, cmap=cm.cmap_name_r) # or plt.scatter(x, y, c=t, cmap=\"cmap_name_r\") ``` will work. Examples are \"jet_r\" or cm.plasma_r. Here's an example with the new 1.5 colormap viridis: ``` import numpy as np import matplotlib.pyplot as plt x = np.arange(100) y = x t = x fig, (ax1, ax2) = plt.subplots(1, 2) ax1.scatter(x, y, c=t, cmap='viridis') ax2.scatter(x, y, c=t, cmap='viridis_r') plt.show() ``` Colorbars You can add a colorbar by using ``` plt.scatter(x, y, c=t, cmap='viridis') plt.colorbar() plt.show() ``` Note that if you are using figures and subplots explicitly (e.g. fig, ax = plt.subplots() or ax = fig.add_subplot(111)), adding a colorbar can be a bit more involved. Good examples can be found here for a single subplot colorbar and here for 2 subplots 1 colorbar.", "best_answers_score":0.8, "library_name":"matplotlib", "question_url":"https:\/\/stackoverflow.com\/questions\/17682216\/scatter-plot-and-color-mapping-in-python", "best_answers_votes":241, "question_length":409, "response_length":1928 }, { "question":"How can I get the output of a matplotlib plot as an SVG? I need to take the output of a matplotlib plot and turn it into an SVG path that I can use on a laser cutter. ``` import matplotlib.pyplot as plt import numpy as np x = np.arange(0,100,0.00001) y = x*np.sin(2*pi*x) plt.plot(y) plt.show() ``` For example, below you see a waveform. 
I would like to be able to output or save this waveform as an SVG path that I can later work with in a program such as Adobe Illustrator. I am aware of an SVG library called \"Cairo\" that matplotlib can use (matplotlib.use('Cairo')), however it's not clear to me that this will give me access to the SVG path that I need, even though matplotlib will now be using Cairo to generate the plot. I do have cairo working on my system, and can successfully draw an example composed of SVG paths that I can indeed edit in Illustrator, but I don't have a way to take my equation above into an SVG path. ``` import cairo from cairo import SVGSurface, Context, Matrix s = SVGSurface('example1.svg', WIDTH, HEIGHT) c = Context(s) # Transform to normal cartesian coordinate system m = Matrix(yy=-1, y0=HEIGHT) c.transform(m) # Set a background color c.save() c.set_source_rgb(0.3, 0.3, 1.0) c.paint() c.restore() # Draw some lines c.move_to(0, 0) c.line_to(2 * 72, 2* 72) c.line_to(3 * 72, 1 * 72) c.line_to(4 * 72, 2 * 72) c.line_to(6 * 72, 0) c.close_path() c.save() c.set_line_width(6.0) c.stroke_preserve() c.set_source_rgb(0.3, 0.3, 0.3) c.fill() c.restore() # Draw a circle c.save() c.set_line_width(6.0) c.arc(1 * 72, 3 * 72, 0.5 * 72, 0, 2 * pi) c.stroke_preserve() c.set_source_rgb(1.0, 1.0, 0) c.fill() c.restore() # Save as a SVG and PNG s.write_to_png('example1.png') s.finish() ``` (note that the image displayed here is a png, as stackoverflow doesn't accept svg graphics for display)", "response":"You will most probably want to fix the image size and get rid of all sorts of backgrounds and axis markers: ``` import matplotlib.pyplot as plt import numpy as np plt.figure(figsize=[6, 6]) x = np.arange(0, 100, 0.00001) y = x*np.sin(2* np.pi * x) plt.plot(y) plt.axis('off') plt.gca().set_position([0, 0, 1, 1]) plt.savefig(\"test.svg\") ``` The resulting SVG file contains only one extra element, as savefig really wants to save the figure background. 
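As a quick sanity check that the curve really is stored as path data despite the background element (this snippet is a sketch, not part of the original answer):

```python
import io
import matplotlib
matplotlib.use('Agg')  # off-screen rendering; assumed for this sketch
import matplotlib.pyplot as plt
import numpy as np

x = np.arange(0, 1, 0.01)
plt.plot(x, np.sin(2 * np.pi * x))
plt.axis('off')

buf = io.BytesIO()
plt.savefig(buf, format='svg')
svg_text = buf.getvalue().decode('utf-8')
print('<path' in svg_text)  # -> True: the curve is saved as SVG path elements
```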
The color of this background is easy to change to 'none', but it does not seem to get rid of it. Anyway, the SVG is very clean otherwise and in the correct scale (1\/72\" per unit).", "best_answers_score":0.8, "library_name":"matplotlib", "question_url":"https:\/\/stackoverflow.com\/questions\/24525111\/how-can-i-get-the-output-of-a-matplotlib-plot-as-an-svg", "best_answers_votes":139, "question_length":1822, "response_length":631 }, { "question":"why is plotting with Matplotlib so slow? I'm currently evaluating different python plotting libraries. Right now I'm trying matplotlib and I'm quite disappointed with the performance. The following example is modified from SciPy examples and gives me only ~ 8 frames per second! Any ways of speeding this up or should I pick a different plotting library? ``` from pylab import * import time ion() fig = figure() ax1 = fig.add_subplot(611) ax2 = fig.add_subplot(612) ax3 = fig.add_subplot(613) ax4 = fig.add_subplot(614) ax5 = fig.add_subplot(615) ax6 = fig.add_subplot(616) x = arange(0,2*pi,0.01) y = sin(x) line1, = ax1.plot(x, y, 'r-') line2, = ax2.plot(x, y, 'g-') line3, = ax3.plot(x, y, 'y-') line4, = ax4.plot(x, y, 'm-') line5, = ax5.plot(x, y, 'k-') line6, = ax6.plot(x, y, 'p-') # turn off interactive plotting - speeds things up by 1 Frame \/ second plt.ioff() tstart = time.time() # for profiling for i in arange(1, 200): line1.set_ydata(sin(x+i\/10.0)) # update the data line2.set_ydata(sin(2*x+i\/10.0)) line3.set_ydata(sin(3*x+i\/10.0)) line4.set_ydata(sin(4*x+i\/10.0)) line5.set_ydata(sin(5*x+i\/10.0)) line6.set_ydata(sin(6*x+i\/10.0)) draw() # redraw the canvas print 'FPS:' , 200\/(time.time()-tstart) ```", "response":"First off, (though this won't change the performance at all) consider cleaning up your code, similar to this: ``` import matplotlib.pyplot as plt import numpy as np import time x = np.arange(0, 2*np.pi, 0.01) y = np.sin(x) fig, axes = plt.subplots(nrows=6) styles = ['r-', 'g-', 'y-', 'm-', 'k-', 'c-'] 
lines = [ax.plot(x, y, style)[0] for ax, style in zip(axes, styles)] fig.show() tstart = time.time() for i in xrange(1, 20): for j, line in enumerate(lines, start=1): line.set_ydata(np.sin(j*x + i\/10.0)) fig.canvas.draw() print 'FPS:' , 20\/(time.time()-tstart) ``` With the above example, I get around 10fps. Just a quick note, depending on your exact use case, matplotlib may not be a great choice. It's oriented towards publication-quality figures, not real-time display. However, there are a lot of things you can do to speed this example up. There are two main reasons why this is as slow as it is. 1) Calling fig.canvas.draw() redraws everything. It's your bottleneck. In your case, you don't need to re-draw things like the axes boundaries, tick labels, etc. 2) In your case, there are a lot of subplots with a lot of tick labels. These take a long time to draw. Both these can be fixed by using blitting. To do blitting efficiently, you'll have to use backend-specific code. In practice, if you're really worried about smooth animations, you're usually embedding matplotlib plots in some sort of gui toolkit, anyway, so this isn't much of an issue. However, without knowing a bit more about what you're doing, I can't help you there. Nonetheless, there is a gui-neutral way of doing it that is still reasonably fast. ``` import matplotlib.pyplot as plt import numpy as np import time x = np.arange(0, 2*np.pi, 0.1) y = np.sin(x) fig, axes = plt.subplots(nrows=6) fig.show() # We need to draw the canvas before we start animating... 
fig.canvas.draw() styles = ['r-', 'g-', 'y-', 'm-', 'k-', 'c-'] def plot(ax, style): return ax.plot(x, y, style, animated=True)[0] lines = [plot(ax, style) for ax, style in zip(axes, styles)] # Let's capture the background of the figure backgrounds = [fig.canvas.copy_from_bbox(ax.bbox) for ax in axes] tstart = time.time() for i in xrange(1, 2000): items = enumerate(zip(lines, axes, backgrounds), start=1) for j, (line, ax, background) in items: fig.canvas.restore_region(background) line.set_ydata(np.sin(j*x + i\/10.0)) ax.draw_artist(line) fig.canvas.blit(ax.bbox) print 'FPS:' , 2000\/(time.time()-tstart) ``` This gives me ~200fps. To make this a bit more convenient, there's an animations module in recent versions of matplotlib. As an example: ``` import matplotlib.pyplot as plt import matplotlib.animation as animation import numpy as np x = np.arange(0, 2*np.pi, 0.1) y = np.sin(x) fig, axes = plt.subplots(nrows=6) styles = ['r-', 'g-', 'y-', 'm-', 'k-', 'c-'] def plot(ax, style): return ax.plot(x, y, style, animated=True)[0] lines = [plot(ax, style) for ax, style in zip(axes, styles)] def animate(i): for j, line in enumerate(lines, start=1): line.set_ydata(np.sin(j*x + i\/10.0)) return lines # We'd normally specify a reasonable \"interval\" here... ani = animation.FuncAnimation(fig, animate, xrange(1, 200), interval=0, blit=True) plt.show() ```", "best_answers_score":0.8, "library_name":"matplotlib", "question_url":"https:\/\/stackoverflow.com\/questions\/8955869\/why-is-plotting-with-matplotlib-so-slow", "best_answers_votes":144, "question_length":1217, "response_length":3203 }, { "question":"Change line width of lines in matplotlib pyplot legend [duplicate] This question already has answers here: increase the linewidth of the legend lines in matplotlib (4 answers) Closed 7 years ago. I would like to change the thickness\/width of the line samples featured in the pyplot legend. 
The line width of line samples within the legend is the same as that of the lines they represent in the plot (so if line y1 has linewidth=7.0, the legend's corresponding y1 label will also have linewidth=7.0). I would like the legend lines to be thicker than the lines featured in the plot. For example, the following code generates the following image: ``` import numpy as np import matplotlib.pyplot as plt # make some data x = np.linspace(0, 2*np.pi) y1 = np.sin(x) y2 = np.cos(x) # plot sin(x) and cos(x) fig = plt.figure() ax = fig.add_subplot(111) ax.plot(x, y1, c='b', label='y1',linewidth=7.0) ax.plot(x, y2, c='r', label='y2') leg = plt.legend() plt.show() ``` I want to set the y1 label in the legend to have linewidth=7.0, while the y1 line featured in the plot has a different width (linewidth=1.0). Related issues had answers for changing the linewidth of the legend bounding box through leg.get_frame().set_linewidth(7.0). This does not change the linewidth of the lines within the legend.", "response":"@ImportanceOfBeingErnest 's answer is good if you only want to change the linewidth inside the legend box. But I think it is a bit more complex, since you have to copy the handles before changing the legend linewidth. Besides, it cannot change the legend label fontsize. The following two methods can change not only the linewidth but also the legend label text font size, in a more concise way.
Method 1 ``` import numpy as np import matplotlib.pyplot as plt # make some data x = np.linspace(0, 2*np.pi) y1 = np.sin(x) y2 = np.cos(x) # plot sin(x) and cos(x) fig = plt.figure() ax = fig.add_subplot(111) ax.plot(x, y1, c='b', label='y1') ax.plot(x, y2, c='r', label='y2') leg = plt.legend() # get the individual lines inside legend and set line width for line in leg.get_lines(): line.set_linewidth(4) # get label texts inside legend and set font size for text in leg.get_texts(): text.set_fontsize('x-large') plt.savefig('leg_example') plt.show() ``` Method 2 ``` import numpy as np import matplotlib.pyplot as plt # make some data x = np.linspace(0, 2*np.pi) y1 = np.sin(x) y2 = np.cos(x) # plot sin(x) and cos(x) fig = plt.figure() ax = fig.add_subplot(111) ax.plot(x, y1, c='b', label='y1') ax.plot(x, y2, c='r', label='y2') leg = plt.legend() # get the lines and texts inside legend box leg_lines = leg.get_lines() leg_texts = leg.get_texts() # bulk-set the properties of all lines and texts plt.setp(leg_lines, linewidth=4) plt.setp(leg_texts, fontsize='x-large') plt.savefig('leg_example') plt.show() ``` The above two methods produce the same output image:", "best_answers_score":0.8, "library_name":"matplotlib", "question_url":"https:\/\/stackoverflow.com\/questions\/42758897\/change-line-width-of-lines-in-matplotlib-pyplot-legend", "best_answers_votes":66, "question_length":1270, "response_length":1560 }, { "question":"Change figure window title in pylab How can I set a figure window's title in pylab\/python? 
``` fig = figure(9) # 9 is now the title of the window fig.set_title(\"Test\") #doesn't work fig.title = \"Test\" #doesn't work ```", "response":"If you want to actually change the window you can do: ``` fig = pylab.gcf() fig.canvas.manager.set_window_title('Test') ```", "best_answers_score":0.8, "library_name":"matplotlib", "question_url":"https:\/\/stackoverflow.com\/questions\/5812960\/change-figure-window-title-in-pylab", "best_answers_votes":185, "question_length":218, "response_length":123 }, { "question":"Plotting with seaborn using the matplotlib object-oriented interface I strongly prefer using matplotlib in OOP style: ``` f, axarr = plt.subplots(2, sharex=True) axarr[0].plot(...) axarr[1].plot(...) ``` This makes it easier to keep track of multiple figures and subplots. Question: How to use seaborn this way? Or, how to change this example to OOP style? How to tell seaborn plotting functions like lmplot which Figure or Axes it plots to?", "response":"It depends a bit on which seaborn function you are using. The plotting functions in seaborn are broadly divided into two types: \"Axes-level\" functions, including regplot, boxplot, kdeplot, and many others \"Figure-level\" functions, including relplot, catplot, displot, pairplot, jointplot and one or two others The first group is identified by taking an explicit ax argument and returning an Axes object. As this suggests, you can use them in an \"object oriented\" style by passing your Axes to them: ``` f, (ax1, ax2) = plt.subplots(2) sns.regplot(x, y, ax=ax1) sns.kdeplot(x, ax=ax2) ``` Axes-level functions will only draw onto an Axes and won't otherwise mess with the figure, so they can coexist perfectly happily in an object-oriented matplotlib script. The second group of functions (Figure-level) are distinguished by the fact that the resulting plot can potentially include several Axes which are always organized in a \"meaningful\" way. 
That means that the functions need to have total control over the figure, so it isn't possible to plot, say, an lmplot onto one that already exists. Calling the function always initializes a figure and sets it up for the specific plot it's drawing. However, once you've called lmplot, it will return an object of the type FacetGrid. This object has some methods for operating on the resulting plot that know a bit about the structure of the plot. It also exposes the underlying figure and array of axes as the FacetGrid.fig and FacetGrid.axes attributes. The jointplot function is very similar, but it uses a JointGrid object. So you can still use these functions in an object-oriented context, but all of your customization has to come after you've called the function.", "best_answers_score":0.8, "library_name":"matplotlib", "question_url":"https:\/\/stackoverflow.com\/questions\/23969619\/plotting-with-seaborn-using-the-matplotlib-object-oriented-interface", "best_answers_votes":298, "question_length":441, "response_length":1713 }, { "question":"Defining the midpoint of a colormap in matplotlib I want to set the middle point of a colormap, i.e., my data goes from -5 to 10 and I want zero to be the middle point. I think the way to do it is by subclassing normalize and using the norm, but I didn't find any example and it is not clear to me what exactly I have to implement.", "response":"I know this is late to the game, but I just went through this process and came up with a solution that is perhaps less robust than subclassing normalize, but much simpler. I thought it'd be good to share it here for posterity. The function ``` import numpy as np import matplotlib import matplotlib.pyplot as plt from mpl_toolkits.axes_grid1 import AxesGrid def shiftedColorMap(cmap, start=0, midpoint=0.5, stop=1.0, name='shiftedcmap'): ''' Function to offset the \"center\" of a colormap.
Useful for data with a negative min and positive max and you want the middle of the colormap's dynamic range to be at zero. Input ----- cmap : The matplotlib colormap to be altered start : Offset from lowest point in the colormap's range. Defaults to 0.0 (no lower offset). Should be between 0.0 and `midpoint`. midpoint : The new center of the colormap. Defaults to 0.5 (no shift). Should be between 0.0 and 1.0. In general, this should be 1 - vmax \/ (vmax + abs(vmin)) For example if your data range from -15.0 to +5.0 and you want the center of the colormap at 0.0, `midpoint` should be set to 1 - 5\/(5 + 15)) or 0.75 stop : Offset from highest point in the colormap's range. Defaults to 1.0 (no upper offset). Should be between `midpoint` and 1.0. ''' cdict = { 'red': [], 'green': [], 'blue': [], 'alpha': [] } # regular index to compute the colors reg_index = np.linspace(start, stop, 257) # shifted index to match the data shift_index = np.hstack([ np.linspace(0.0, midpoint, 128, endpoint=False), np.linspace(midpoint, 1.0, 129, endpoint=True) ]) for ri, si in zip(reg_index, shift_index): r, g, b, a = cmap(ri) cdict['red'].append((si, r, r)) cdict['green'].append((si, g, g)) cdict['blue'].append((si, b, b)) cdict['alpha'].append((si, a, a)) newcmap = matplotlib.colors.LinearSegmentedColormap(name, cdict) plt.register_cmap(cmap=newcmap) return newcmap ``` An example ``` biased_data = np.random.random_integers(low=-15, high=5, size=(37,37)) orig_cmap = matplotlib.cm.coolwarm shifted_cmap = shiftedColorMap(orig_cmap, midpoint=0.75, name='shifted') shrunk_cmap = shiftedColorMap(orig_cmap, start=0.15, midpoint=0.75, stop=0.85, name='shrunk') fig = plt.figure(figsize=(6,6)) grid = AxesGrid(fig, 111, nrows_ncols=(2, 2), axes_pad=0.5, label_mode=\"1\", share_all=True, cbar_location=\"right\", cbar_mode=\"each\", cbar_size=\"7%\", cbar_pad=\"2%\") # normal cmap im0 = grid[0].imshow(biased_data, interpolation=\"none\", cmap=orig_cmap) grid.cbar_axes[0].colorbar(im0) 
grid[0].set_title('Default behavior (hard to see bias)', fontsize=8) im1 = grid[1].imshow(biased_data, interpolation=\"none\", cmap=orig_cmap, vmax=15, vmin=-15) grid.cbar_axes[1].colorbar(im1) grid[1].set_title('Centered zero manually,\\nbut lost upper end of dynamic range', fontsize=8) im2 = grid[2].imshow(biased_data, interpolation=\"none\", cmap=shifted_cmap) grid.cbar_axes[2].colorbar(im2) grid[2].set_title('Recentered cmap with function', fontsize=8) im3 = grid[3].imshow(biased_data, interpolation=\"none\", cmap=shrunk_cmap) grid.cbar_axes[3].colorbar(im3) grid[3].set_title('Recentered cmap with function\\nand shrunk range', fontsize=8) for ax in grid: ax.set_yticks([]) ax.set_xticks([]) ``` Results of the example:", "best_answers_score":0.8, "library_name":"matplotlib", "question_url":"https:\/\/stackoverflow.com\/questions\/7404116\/defining-the-midpoint-of-a-colormap-in-matplotlib", "best_answers_votes":101, "question_length":332, "response_length":3179 }, { "question":"Show matplotlib plots (and other GUI) in Ubuntu (WSL1 & WSL2) So it seems on ubuntu for windows (windows subsystem for linux) people are suggesting we need to use Agg backend and just save images, not show plots. ``` import matplotlib matplotlib.use('Agg') # no UI backend import matplotlib.pyplot as plt import numpy as np t = np.arange(0.0, 2.0, 0.01) s = 1 + np.sin(2*np.pi*t) plt.plot(t, s) plt.title('About as simple as it gets, folks') #plt.show() plt.savefig(\"matplotlib.png\") #savefig, don't show ``` How could we get it to where plt.show() would actually show us an image? My current option is to override plot.show() to instead just savefig a plot-148123456.png under \/mnt\/c\/Users\/james\/plots\/ in windows and just have an explorer window open viewing the images. I suppose I could host that folder and use a browser. My goal is to be able to run simple examples like the code above without changing the code to ftp the images somewhere etc. I just want the plot to show up in a window. 
Has anyone figured out a decent way to do it?", "response":"Ok, so I got it working as follows. I have Ubuntu on windows, with anaconda python 3.6 installed. Download and install VcXsrv or Xming (X11 for Windows) from sourceforge (see edit below) sudo apt-get update sudo apt-get install python3.6-tk (you may have to install a different python*-tk depending on the python version you're using) pip install matplotlib (for matplotlib. but many other things now work too) export DISPLAY=localhost:0.0 (add to ~\/.bashrc to make permanent. see WSL2 below) Anyways, after all that, this code running in ubuntu on wsl worked as is: ``` import matplotlib.pyplot as plt import numpy as np t = np.arange(0.0, 2.0, 0.01) s = 1 + np.sin(2*np.pi*t) plt.plot(t, s) plt.title('About as simple as it gets, folks') plt.show() ``` result: Maybe this is better done through a Jupyter notebook or something, but it's nice to have basic command-line python matplotlib functionality in Ubuntu for Windows on Subsystem for Linux, and this makes many other gui apps work too. For example you can install xeyes, and it will say to install x11-apps and installing that will install GTK which a lot of GUI apps use. But the point is once you have your DISPLAY set correctly, and your x server on windows, then most things that would work on a native ubuntu will work for the WSL. Edit 2019-09-04 : Today I was having issues with 'unable to get screen resources' after upgrading some libraries. So I installed VcXsrv and used that instead of Xming. Just install from https:\/\/sourceforge.net\/projects\/vcxsrv\/ and run xlaunch.exe, select multiple windows, next next next ok. Then everything worked. Edit for WSL 2 users 2020-06-23 WSL2 (currently insider fast ring) has GPU\/docker support so it is worth the upgrade. However it runs in a vm. For WSL 2, follow same steps 1-4 then: the ip is not localhost. it's in resolv.conf so run this instead (and include in ~\/.bashrc): ``` export DISPLAY=`grep -oP \"(?<=nameserver ).+\" \/etc\/resolv.conf`:0 ``` You also need to allow the X server through the Windows firewall: open Windows Security ->
Firewall & network protection -> Allow an app through firewall -> make sure VcXsrv has both public and private checked. (When launching xlaunch the first time, you might get a prompt to allow it through the firewall. This works too. Also, if VcXsrv is not in the list of apps, you can manually add it, e.g. from 'C:\\program files\\vcxsrv\\vcxsrv.exe') Launch VcXsrv with \"Disable access control\" ticked. Note: a few WSL2 users got an error like couldn't connect to display \"172.x.x.x:0\". If that's you, try to check the IP address stored in DISPLAY with this command: echo $DISPLAY. If the IP shown seems to be wrong (i.e. \"8.8.8.8\" or another non-working IP address) you need to change the code in ~\/.bashrc shown in point 5 to something that will get your instance's ip address. One user said this worked: export DISPLAY=$(ifconfig | grep inet | awk '{print $2}' | head -n 1 | awk '{print $0\":0\"}'). However for some others it did not work. YMMV, but just find your IP and use it for DISPLAY. For most WSL2 users, the command in #5 works. Edit for Windows 11 : if MS convinced you to throw out your old computer and buy one with a TPM and so you got Windows 11, you get GUI support for free. I hope they add an upgrade path to do that on Windows 10, because Win10 will be like XP and last a long time, since MS decided you need a recent computer even though Win11 would work fine on old computers.
I have checked for potential duplicates, such as here and here, but have not been able to figure it out. This is my code: ```py import matplotlib.pyplot as plt from matplotlib.ticker import MultipleLocator, FormatStrFormatter for x, y, count in data.values(): fig = plt.figure() ax = fig.add_subplot(111) ax.annotate(count, xy = (x, y), size = 5) # overwrites and I only get the last data point plt.close() # Without this, I get a \"fail to allocate bitmap\" error. plt.suptitle('Number of counts', fontsize = 12) ax.set_xlabel('x') ax.set_ylabel('y') plt.axes().set_aspect('equal') plt.axis([0, 1000, 0, 1000]) # This gives an interval of 200. majorLocator = MultipleLocator(20) majorFormatter = FormatStrFormatter('%d') minorLocator = MultipleLocator(5) # I want the minor grid to be 5 and the major grid to be 20. plt.grid() ``` This is what I get.", "response":"There are several problems in your code. First the big ones: You are creating a new figure and a new axes in every iteration of your loop \u2192 put fig = plt.figure and ax = fig.add_subplot(1,1,1) outside of the loop. Don't use the Locators. Call the functions ax.set_xticks() and ax.grid() with the correct keywords. With plt.axes() you are creating a new axes again. Use ax.set_aspect('equal'). The minor things: You should not mix the MATLAB-like syntax like plt.axis() with the objective syntax. 
Use ax.set_xlim(a,b) and ax.set_ylim(a,b) This should be a working minimal example: ``` import numpy as np import matplotlib.pyplot as plt fig = plt.figure() ax = fig.add_subplot(1, 1, 1) # Major ticks every 20, minor ticks every 5 major_ticks = np.arange(0, 101, 20) minor_ticks = np.arange(0, 101, 5) ax.set_xticks(major_ticks) ax.set_xticks(minor_ticks, minor=True) ax.set_yticks(major_ticks) ax.set_yticks(minor_ticks, minor=True) # And a corresponding grid ax.grid(which='both') # Or if you want different settings for the grids: ax.grid(which='minor', alpha=0.2) ax.grid(which='major', alpha=0.5) plt.show() ``` Output is this:", "best_answers_score":0.8, "library_name":"matplotlib", "question_url":"https:\/\/stackoverflow.com\/questions\/24943991\/change-grid-interval-and-specify-tick-labels", "best_answers_votes":280, "question_length":1167, "response_length":1129 }, { "question":"How to set xticks in subplots If I plot a single imshow plot I can use ```py fig, ax = plt.subplots() ax.imshow(data) plt.xticks( [4, 14, 24], [5, 15, 25] ) ``` to replace my xtick labels. Now, I am plotting 12 imshow plots using ```py f, axarr = plt.subplots(4, 3) axarr[i, j].imshow(data) ``` How can I change xticks just for one of these subplots? I can only access the axes of the subplots with axarr[i, j]. How can I access plt just for one particular subplot?", "response":"There are two ways: Use the axes methods of the subplot object (e.g. ax.set_xticks and ax.set_xticklabels) or Use plt.sca to set the current axes for the pyplot state machine (i.e. the plt interface). As an example (this also illustrates using setp to change the properties of all of the subplots): ``` import matplotlib.pyplot as plt fig, axes = plt.subplots(nrows=3, ncols=4) # Set the ticks and ticklabels for all axes plt.setp(axes, xticks=[0.1, 0.5, 0.9], xticklabels=['a', 'b', 'c'], yticks=[1, 2, 3]) # Use the pyplot interface to change just one subplot... 
plt.sca(axes[1, 1]) plt.xticks(range(3), ['A', 'Big', 'Cat'], color='red') fig.tight_layout() plt.show() ```", "best_answers_score":0.8, "library_name":"matplotlib", "question_url":"https:\/\/stackoverflow.com\/questions\/19626530\/how-to-set-xticks-in-subplots", "best_answers_votes":221, "question_length":465, "response_length":673 }, { "question":"Getting vertical gridlines to appear in line plot in matplotlib I want to get both horizontal and vertical grid lines on my plot but only the horizontal grid lines are appearing by default. I am using a pandas.DataFrame from an sql query in python to generate a line plot with dates on the x-axis. I'm not sure why they do not appear on the dates and I have tried to search for an answer to this but couldn't find one. All I have used to plot the graph is the simple code below. ``` data.plot() grid('on') ``` data is the DataFrame which contains the dates and the data from the sql query. I have also tried adding the code below but I still get the same output with no vertical grid lines. ``` ax = plt.axes() ax.yaxis.grid() # horizontal lines ax.xaxis.grid() # vertical lines ``` Any suggestions?", "response":"You may need to give boolean arg in your calls, e.g. use ax.yaxis.grid(True) instead of ax.yaxis.grid(). Additionally, since you are using both of them you can combine into ax.grid, which works on both, rather than doing it once for each dimension. ``` ax = plt.gca() ax.grid(True) ``` That should sort you out.", "best_answers_score":0.8, "library_name":"matplotlib", "question_url":"https:\/\/stackoverflow.com\/questions\/16074392\/getting-vertical-gridlines-to-appear-in-line-plot-in-matplotlib", "best_answers_votes":127, "question_length":799, "response_length":311 }, { "question":"prevent plot from showing in jupyter notebook How can I prevent a specific plot to be shown in Jupyter notebook? 
I have several plots in a notebook but I want a subset of them to be saved to a file and not shown in the notebook, as this slows things down considerably. A minimal working example for a Jupyter notebook is: ``` %matplotlib inline from numpy.random import randn from matplotlib.pyplot import plot, figure a=randn(3) b=randn(3) for i in range(10): fig=figure() plot(b) fname='s%03d.png'%i fig.savefig(fname) if(i%5==0): figure() plot(a) ``` As you can see I have two types of plots, a and b. I want the a's to be plotted and shown, and I don't want the b plots to be shown; I just want them to be saved to a file. Hopefully this will speed things up a bit and won't pollute my notebook with figures I don't need to see. Thank you for your time
http:\/\/matplotlib.sourceforge.net\/users\/legend_guide.html#using-proxy-artist warnings.warn(\"Legend does not support %s\\nUse proxy artist instead.\\n\\nhttp:\/\/matplotlib.sourceforge.net\/users\/legend_guide.html#using-proxy-artist\\n\" % (str(orig_handle),)) ``` This even occurs with a trivial script like this: ``` import matplotlib.pyplot as plt a = [1,2,3] b = [4,5,6] c = [7,8,9] plot1 = plt.plot(a,b) plot2 = plt.plot(a,c) plt.legend([plot1,plot2],[\"plot 1\", \"plot 2\"]) plt.show() ``` I've found the link that the error points me towards pretty useless in diagnosing the source of the error.", "response":"You should add commas: ``` plot1, = plt.plot(a,b) plot2, = plt.plot(a,c) ``` The reason you need the commas is because plt.plot() returns a tuple of line objects, no matter how many are actually created from the command. Without the comma, \"plot1\" and \"plot2\" are tuples instead of line objects, making the later call to plt.legend() fail. The comma implicitly unpacks the results so that instead of a tuple, \"plot1\" and \"plot2\" automatically become the first objects within the tuple, i.e. the line objects you actually want. http:\/\/matplotlib.sourceforge.net\/users\/legend_guide.html#adjusting-the-order-of-legend-items line, = plot(x,sin(x)) what does comma stand for?", "best_answers_score":0.8, "library_name":"matplotlib", "question_url":"https:\/\/stackoverflow.com\/questions\/11983024\/matplotlib-legends-not-working", "best_answers_votes":207, "question_length":1213, "response_length":670 }, { "question":"How to generate random colors in matplotlib? What's the trivial example of how to generate random colors for passing to plotting functions? I'm calling scatter inside a loop and want each plot a different color. ``` for X,Y in data: scatter(X, Y, c=??) ``` c: a color. 
c can be a single color format string, or a sequence of color specifications of length N, or a sequence of N numbers to be mapped to colors using the cmap and norm specified via kwargs (see below). Note that c should not be a single numeric RGB or RGBA sequence because that is indistinguishable from an array of values to be colormapped. c can be a 2-D array in which the rows are RGB or RGBA, however.", "response":"I'm calling scatter inside a loop and want each plot in a different color. Based on that, and on your answer: It seems to me that you actually want n distinct colors for your datasets; you want to map the integer indices 0, 1, ..., n-1 to distinct RGB colors. Something like: Here is the function to do it: ``` import matplotlib.pyplot as plt def get_cmap(n, name='hsv'): '''Returns a function that maps each index in 0, 1, ..., n-1 to a distinct RGB color; the keyword argument name must be a standard mpl colormap name.''' return plt.cm.get_cmap(name, n) ``` Usage in your pseudo-code snippet in the question: ``` cmap = get_cmap(len(data)) for i, (X, Y) in enumerate(data): scatter(X, Y, c=cmap(i)) ``` I generated the figure in my answer with the following code: ``` import matplotlib.pyplot as plt def get_cmap(n, name='hsv'): '''Returns a function that maps each index in 0, 1, ..., n-1 to a distinct RGB color; the keyword argument name must be a standard mpl colormap name.''' return plt.cm.get_cmap(name, n) def main(): N = 30 fig=plt.figure() ax=fig.add_subplot(111) plt.axis('scaled') ax.set_xlim([ 0, N]) ax.set_ylim([-0.5, 0.5]) cmap = get_cmap(N) for i in range(N): rect = plt.Rectangle((i, -0.5), 1, 1, facecolor=cmap(i)) ax.add_artist(rect) ax.set_yticks([]) plt.show() if __name__=='__main__': main() ``` Tested with both Python 2.7 & matplotlib 1.5, and with Python 3.5 & matplotlib 2.0. 
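A quicker sketch of the same indexed-colors idea (the scatter data here is made up for illustration, and the off-screen Agg backend is an assumption so it runs headless):

```python
import matplotlib
matplotlib.use('Agg')  # assumption: render off-screen; drop this line for interactive use
import matplotlib.pyplot as plt
import numpy as np

n = 6  # number of datasets, hence number of distinct colors
# sample n evenly spaced RGBA colors from the hsv colormap
colors = plt.cm.hsv(np.linspace(0, 1, n, endpoint=False))

fig, ax = plt.subplots()
for i, color in enumerate(colors):
    x, y = np.random.rand(10), np.random.rand(10)  # made-up data
    ax.scatter(x, y, color=color, label='set %d' % i)
ax.legend()
fig.savefig('colored_scatter.png')
```

Sampling the colormap directly with np.linspace gives the same n distinct colors while avoiding the get_cmap(name, n) call, which is deprecated in newer matplotlib releases.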
It works as expected.", "best_answers_score":0.8, "library_name":"matplotlib", "question_url":"https:\/\/stackoverflow.com\/questions\/14720331\/how-to-generate-random-colors-in-matplotlib", "best_answers_votes":201, "question_length":672, "response_length":1427 }, { "question":"Plotting images side by side using matplotlib I was wondering how I am able to plot images side by side using matplotlib for example something like this: The closest I got is this: This was produced by using this code: ``` f, axarr = plt.subplots(2,2) axarr[0,0] = plt.imshow(image_datas[0]) axarr[0,1] = plt.imshow(image_datas[1]) axarr[1,0] = plt.imshow(image_datas[2]) axarr[1,1] = plt.imshow(image_datas[3]) ``` But I can't seem to get the other images to show. I'm thinking that there must be a better way to do this as I would imagine trying to manage the indexes would be a pain. I have looked through the documentation although I have a feeling I may be looking at the wrong one. Would anyone be able to provide me with an example or point me in the right direction? EDIT: See the answer from @duhaime if you want a function to automatically determine the grid size.", "response":"The problem you face is that you try to assign the return of imshow (which is a matplotlib.image.AxesImage) to an existing axes object. The correct way of plotting image data to the different axes in axarr would be ``` f, axarr = plt.subplots(2,2) axarr[0,0].imshow(image_datas[0]) axarr[0,1].imshow(image_datas[1]) axarr[1,0].imshow(image_datas[2]) axarr[1,1].imshow(image_datas[3]) ``` The concept is the same for all subplots, and in most cases the axes instance provides the same methods as the pyplot (plt) interface. E.g. if ax is one of your subplot axes, for plotting a normal line plot you'd use ax.plot(..) instead of plt.plot().
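For instance, a minimal runnable sketch of that pattern (the random arrays stand in for image_datas, which isn't shown in the question, and the Agg backend is assumed so it runs headless):

```python
import matplotlib
matplotlib.use('Agg')  # assumption: render off-screen; drop this line for interactive use
import matplotlib.pyplot as plt
import numpy as np

# stand-in data: four random 10x10 arrays in place of image_datas
image_datas = [np.random.rand(10, 10) for _ in range(4)]

f, axarr = plt.subplots(2, 2)
for ax, img in zip(axarr.flat, image_datas):
    ax.imshow(img)  # call imshow on each Axes object, not on plt
f.savefig('side_by_side.png')
```

Iterating over axarr.flat also sidesteps the index bookkeeping entirely.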
This can actually be found exactly in the source from the page you link to.", "best_answers_score":0.8, "library_name":"matplotlib", "question_url":"https:\/\/stackoverflow.com\/questions\/41793931\/plotting-images-side-by-side-using-matplotlib", "best_answers_votes":182, "question_length":871, "response_length":716 }, { "question":"Changing the color of an axis Is there a way to change the color of an axis (not the ticks) in matplotlib? I have been looking through the docs for Axes, Axis, and Artist, but no luck; the matplotlib gallery also has no hint. Any idea?", "response":"When using figures, you can easily change the spine color with: ``` ax.spines['bottom'].set_color('#dddddd') ax.spines['top'].set_color('#dddddd') ax.spines['right'].set_color('red') ax.spines['left'].set_color('red') ``` Use the following to change only the ticks: which=\"both\" changes both the major and minor tick colors ```py ax.tick_params(axis='x', colors='red') ax.tick_params(axis='y', colors='red') ``` And the following to change only the label: ``` ax.yaxis.label.set_color('red') ax.xaxis.label.set_color('red') ``` And finally the title: ``` ax.title.set_color('red') ```", "best_answers_score":0.8, "library_name":"matplotlib", "question_url":"https:\/\/stackoverflow.com\/questions\/1982770\/changing-the-color-of-an-axis", "best_answers_votes":237, "question_length":235, "response_length":584 }, { "question":"Histogram Matplotlib So I have a little problem. I have a data set in scipy that is already in the histogram format, so I have the center of the bins and the number of events per bin. How can I now plot it as a histogram? I tried just doing ``` bins, n=hist() ``` but it didn't like that.
Any recommendations?", "response":"``` import matplotlib.pyplot as plt import numpy as np mu, sigma = 100, 15 x = mu + sigma * np.random.randn(10000) hist, bins = np.histogram(x, bins=50) width = 0.7 * (bins[1] - bins[0]) center = (bins[:-1] + bins[1:]) \/ 2 plt.bar(center, hist, align='center', width=width) plt.show() ``` The object-oriented interface is also straightforward: ``` fig, ax = plt.subplots() ax.bar(center, hist, align='center', width=width) fig.savefig(\"1.png\") ``` If you are using custom (non-constant) bins, you can compute the widths using np.diff, pass the widths to ax.bar and use ax.set_xticks to label the bin edges: ``` import matplotlib.pyplot as plt import numpy as np mu, sigma = 100, 15 x = mu + sigma * np.random.randn(10000) bins = [0, 40, 60, 75, 90, 110, 125, 140, 160, 200] hist, bins = np.histogram(x, bins=bins) width = np.diff(bins) center = (bins[:-1] + bins[1:]) \/ 2 fig, ax = plt.subplots(figsize=(8,3)) ax.bar(center, hist, align='center', width=width) ax.set_xticks(bins) fig.savefig(\"\/tmp\/out.png\") plt.show() ```", "best_answers_score":0.8, "library_name":"matplotlib", "question_url":"https:\/\/stackoverflow.com\/questions\/5328556\/histogram-matplotlib", "best_answers_votes":268, "question_length":309, "response_length":1027 }, { "question":"Linear regression with matplotlib \/ numpy I'm trying to generate a linear regression on a scatter plot I have generated, however my data is in list format, and all of the examples I can find of using polyfit require using arange. arange doesn't accept lists though. I have searched high and low about how to convert a list to an array and nothing seems clear. Am I missing something? Following on, how best can I use my list of integers as inputs to the polyfit?
Here is the polyfit example I am following: ``` import numpy as np import matplotlib.pyplot as plt x = np.arange(data) y = np.arange(data) m, b = np.polyfit(x, y, 1) plt.plot(x, y, 'yo', x, m*x+b, '--k') plt.show() ```", "response":"arange generates lists (well, numpy arrays); type help(np.arange) for the details. You don't need to call it on existing lists. ```py >>> x = [1,2,3,4] >>> y = [3,5,7,9] >>> >>> m,b = np.polyfit(x, y, 1) >>> m 2.0000000000000009 >>> b 0.99999999999999833 ``` I should add that I tend to use poly1d here rather than write out \"m*x+b\" and the higher-order equivalents, so my version of your code would look something like this: ```py import numpy as np import matplotlib.pyplot as plt x = [1,2,3,4] y = [3,5,7,10] # 10, not 9, so the fit isn't perfect coef = np.polyfit(x,y,1) poly1d_fn = np.poly1d(coef) # poly1d_fn is now a function which takes in x and returns an estimate for y plt.plot(x,y, 'yo', x, poly1d_fn(x), '--k') #'--k'=black dashed line, 'yo' = yellow circle marker plt.xlim(0, 5) plt.ylim(0, 12) ```", "best_answers_score":0.8, "library_name":"matplotlib", "question_url":"https:\/\/stackoverflow.com\/questions\/6148207\/linear-regression-with-matplotlib-numpy", "best_answers_votes":246, "question_length":681, "response_length":812 }, { "question":"Stop matplotlib repeating labels in legend Here is a very simplified example: ``` xvalues = [2,3,4,6] for x in xvalues: plt.axvline(x,color='b',label='xvalues') plt.legend() ``` The legend will now show 'xvalues' as a blue line 4 times in the legend. Is there a more elegant way of fixing this than the following? ``` for i,x in enumerate(xvalues): if not i: plt.axvline(x,color='b',label='xvalues') else: plt.axvline(x,color='b') ```", "response":"plt.legend takes as parameters A list of axis handles which are Artist objects A list of labels which are strings These parameters are both optional defaulting to plt.gca().get_legend_handles_labels(). 
You can remove duplicate labels by putting them in a dictionary before calling legend. This is because dicts can't have duplicate keys. For example: For Python versions 3.7+ As of Python 3.7, dictionaries retain input order by default. Thus, there is no need for OrderedDict from the collections module. ``` import matplotlib.pyplot as plt handles, labels = plt.gca().get_legend_handles_labels() by_label = dict(zip(labels, handles)) plt.legend(by_label.values(), by_label.keys()) ``` Docs for plt.legend", "best_answers_score":0.8, "library_name":"matplotlib", "question_url":"https:\/\/stackoverflow.com\/questions\/13588920\/stop-matplotlib-repeating-labels-in-legend", "best_answers_votes":219, "question_length":434, "response_length":706 }, { "question":"How to get a matplotlib Axes instance I need to make a candlestick chart using some stock data. For this I want to use the function matplotlib.finance.candlestick(). I need to supply quotes to this function and \"an Axes instance to plot to\". I created some sample quotes as follows: ```py quotes = [(1, 5, 6, 7, 4), (2, 6, 9, 9, 6), (3, 9, 8, 10, 8), (4, 8, 8, 9, 8), (5, 8, 11, 13, 7)] ``` I now also need an Axes instance though, at which I am a bit lost. I created plots before using matplotlib.pyplot. I now need to do something with matplotlib.axes though, but I am unsure what exactly.
Could anybody help me?", "response":"Use the gca (\"get current axes\") helper function: ``` ax = plt.gca() ``` Example: ``` import matplotlib.pyplot as plt import matplotlib.finance quotes = [(1, 5, 6, 7, 4), (2, 6, 9, 9, 6), (3, 9, 8, 10, 8), (4, 8, 8, 9, 8), (5, 8, 11, 13, 7)] ax = plt.gca() h = matplotlib.finance.candlestick(ax, quotes) plt.show() ```", "best_answers_score":0.8, "library_name":"matplotlib", "question_url":"https:\/\/stackoverflow.com\/questions\/15067668\/how-to-get-a-matplotlib-axes-instance", "best_answers_votes":245, "question_length":614, "response_length":318 }, { "question":"How can I set the matplotlib 'backend'? I am new user of matplotlib, my platform is Ubuntu 10.04 Python 2.6.5 This is my code ``` import matplotlib matplotlib.use('Agg') import matplotlib.pyplot as plt plt.plot([1,2,3]) ``` The error is: ``` \/usr\/local\/lib\/python2.6\/dist-packages\/matplotlib\/backends\/__init__.py:41: UserWarning: Your currently selected backend, 'agg' does not support show(). Please select a GUI backend in your matplotlibrc file ('\/usr\/local\/lib\/python2.6\/dist-packages\/matplotlib\/mpl-data\/matplotlibrc') or with matplotlib.use() (backend, matplotlib.matplotlib_fname())) ``` I installed the Anti-Grain Geometry library apt-get install libagg but it is doesn't work. I tried to use other argument of backend like 'GTK' and 'TkAgg'. I installed python-gtk2-dev package, but still the error is below. Can anyone tell me an executable backend argument and its dependency library? 
Here is the error: ``` >>> matplotlib.use('GTK') >>> import matplotlib.pyplot as plt Traceback (most recent call last): File \"\", line 1, in File \"\/usr\/local\/lib\/python2.6\/dist-packages\/matplotlib\/pyplot.py\", line 95, in new_figure_manager, draw_if_interactive, show = pylab_setup() File \"\/usr\/local\/lib\/python2.6\/dist-packages\/matplotlib\/backends\/__init__.py\", line 25, in pylab_setup globals(),locals(),[backend_name]) File \"\/usr\/local\/lib\/python2.6\/dist-packages\/matplotlib\/backends\/backend_gtk.py\", line 28, in from matplotlib.backends.backend_gdk import RendererGDK, FigureCanvasGDK File \"\/usr\/local\/lib\/python2.6\/dist-packages\/matplotlib\/backends\/backend_gdk.py\", line 29, in from matplotlib.backends._backend_gdk import pixbuf_get_pixels_array ImportError: No module named _backend_gdk ```", "response":"Your currently selected backend, 'agg' does not support show(). AGG backend is for writing to file, not for rendering in a window. See the backend FAQ at the matplotlib web site. ImportError: No module named _backend_gdk For the second error, maybe your matplotlib distribution is not compiled with GTK support, or you miss the PyGTK package. Try to install it. Do you call the show() method inside a terminal or application that has access to a graphical environment? Try other GUI backends, in this order: TkAgg wxAgg Qt5Agg Qt4Agg", "best_answers_score":0.8, "library_name":"matplotlib", "question_url":"https:\/\/stackoverflow.com\/questions\/4930524\/how-can-i-set-the-matplotlib-backend", "best_answers_votes":121, "question_length":1695, "response_length":533 }, { "question":"savefig without frames, axes, only content In numpy\/scipy I have an image stored in an array. I can display it, I want to save it using savefig without any borders, axes, labels, titles,... Just pure image, nothing else. 
I want to avoid packages like PyPNG or scipy.misc.imsave, they are sometimes problematic (they do not always install well), only basic savefig() for me", "response":"EDIT Changed aspect='normal' to aspect='auto' since that changed in more recent versions of matplotlib (thanks to @Luke19). Assuming : ```py import matplotlib.pyplot as plt ``` To make a figure without the frame : ```py fig = plt.figure(frameon=False) fig.set_size_inches(w,h) ``` To make the content fill the whole figure ```py ax = plt.Axes(fig, [0., 0., 1., 1.]) ax.set_axis_off() fig.add_axes(ax) ``` Then draw your image on it : ```py ax.imshow(your_image, aspect='auto') fig.savefig(fname, dpi=dpi) ``` The aspect parameter changes the pixel size to make sure they fill the figure size specified in fig.set_size_inches(…). To get a feel of how to play with this sort of thing, read through matplotlib's documentation, particularly on the subject of Axes, Axis and Artist.", "best_answers_score":0.8, "library_name":"matplotlib", "question_url":"https:\/\/stackoverflow.com\/questions\/8218608\/savefig-without-frames-axes-only-content", "best_answers_votes":152, "question_length":372, "response_length":773 }, { "question":"Pandas plot() without a legend Using the pandas library in python and using ```py .plot() ``` on a dataframe, how do I display the plot without a legend?", "response":"There is a parameter in the function corresponding to legend; by default it is True ``` df.plot(legend=False) ``` Following is the definition of the .plot() method Definition: df.plot(frame=None, x=None, y=None, subplots=False, sharex=True, sharey=False, use_index=True, figsize=None, grid=None, legend=True, rot=None, ax=None, style=None, title=None, xlim=None, ylim=None, logx=False, logy=False, xticks=None, yticks=None, kind='line', sort_columns=False, fontsize=None, secondary_y=False, **kwds)", "best_answers_score":0.8, "library_name":"matplotlib",
"question_url":"https:\/\/stackoverflow.com\/questions\/20865487\/pandas-plot-without-a-legend", "best_answers_votes":212, "question_length":153, "response_length":498 }, { "question":"What is the currently correct way to dynamically update plots in Jupyter\/iPython? In the answers to how to dynamically update a plot in a loop in ipython notebook (within one cell), an example is given of how to dynamically update a plot inside a Jupyter notebook within a Python loop. However, this works by destroying and re-creating the plot on every iteration, and a comment in one of the threads notes that this situation can be improved by using the new-ish %matplotlib nbagg magic, which provides an interactive figure embedded in the notebook, rather than a static image. However, this wonderful new nbagg feature seems to be completely undocumented as far as I can tell, and I'm unable to find an example of how to use it to dynamically update a plot. Thus my question is, how does one efficiently update an existing plot in a Jupyter\/Python notebook, using the nbagg backend? Since dynamically updating plots in matplotlib is a tricky issue in general, a simple working example would be an enormous help. A pointer to any documentation on the topic would also be extremely helpful. To be clear what I'm asking for: what I want to do is to run some simulation code for a few iterations, then draw a plot of its current state, then run it for a few more iterations, then update the plot to reflect the current state, and so on. So the idea is to draw a plot and then, without any interaction from the user, update the data in the plot without destroying and re-creating the whole thing. Here is some slightly modified code from the answer to the linked question above, which achieves this by re-drawing the whole figure every time. I want to achieve the same result, but more efficiently using nbagg. 
``` %matplotlib inline import time import pylab as pl from IPython import display for i in range(10): pl.clf() pl.plot(pl.randn(100)) display.display(pl.gcf()) display.clear_output(wait=True) time.sleep(1.0) ```", "response":"Here is an example that updates a plot in a loop. It updates the data in the figure and does not redraw the whole figure every time. It does block execution, though if you're interested in running a finite set of simulations and saving the results somewhere, it may not be a problem for you. The %matplotlib widget magic requires the ipympl Matplotlib Jupyter Extension package. You can install a working environment with pip install jupyter ipympl ``` %matplotlib widget import numpy as np import matplotlib.pyplot as plt import time def pltsin(ax, colors=['b']): x = np.linspace(0,1,100) if ax.lines: for line in ax.lines: line.set_xdata(x) y = np.random.random(size=(100,1)) line.set_ydata(y) else: for color in colors: y = np.random.random(size=(100,1)) ax.plot(x, y, color) fig.canvas.draw() fig,ax = plt.subplots(1,1) ax.set_xlabel('X') ax.set_ylabel('Y') ax.set_xlim(0,1) ax.set_ylim(0,1) plt.show() # run this cell to dynamically update plot for f in range(5): pltsin(ax, ['b', 'r']) time.sleep(1) ``` I put this up on nbviewer here, and here's a direct link to the gist", "best_answers_score":0.8, "library_name":"matplotlib", "question_url":"https:\/\/stackoverflow.com\/questions\/34486642\/what-is-the-currently-correct-way-to-dynamically-update-plots-in-jupyter-ipython", "best_answers_votes":77, "question_length":1920, "response_length":1078 }, { "question":"How to set the 'equal' aspect ratio for all axes (x, y, z) When I set up an equal aspect ratio for a 3d graph, the z-axis does not change to 'equal'. 
So this: ```py fig = pylab.figure() mesFig = fig.gca(projection='3d', adjustable='box') mesFig.axis('equal') mesFig.plot(xC, yC, zC, 'r.') mesFig.plot(xO, yO, zO, 'b.') pyplot.show() ``` Gives me the following: Where obviously the unit length of z-axis is not equal to x- and y- units. How can I make the unit length of all three axes equal? All the solutions I found did not work.", "response":"I like some of the previously posted solutions, but they do have the drawback that you need to keep track of the ranges and means over all your data. This could be cumbersome if you have multiple data sets that will be plotted together. To fix this, I made use of the ax.get_[xyz]lim3d() methods and put the whole thing into a standalone function that can be called just once before you call plt.show(). Here is the new version: ``` from mpl_toolkits.mplot3d import Axes3D from matplotlib import cm import matplotlib.pyplot as plt import numpy as np def set_axes_equal(ax): \"\"\" Make axes of 3D plot have equal scale so that spheres appear as spheres, cubes as cubes, etc. Input ax: a matplotlib axis, e.g., as output from plt.gca(). \"\"\" x_limits = ax.get_xlim3d() y_limits = ax.get_ylim3d() z_limits = ax.get_zlim3d() x_range = abs(x_limits[1] - x_limits[0]) x_middle = np.mean(x_limits) y_range = abs(y_limits[1] - y_limits[0]) y_middle = np.mean(y_limits) z_range = abs(z_limits[1] - z_limits[0]) z_middle = np.mean(z_limits) # The plot bounding box is a sphere in the sense of the infinity # norm, hence I call half the max range the plot radius. plot_radius = 0.5*max([x_range, y_range, z_range]) ax.set_xlim3d([x_middle - plot_radius, x_middle + plot_radius]) ax.set_ylim3d([y_middle - plot_radius, y_middle + plot_radius]) ax.set_zlim3d([z_middle - plot_radius, z_middle + plot_radius]) fig = plt.figure() ax = fig.add_subplot(projection=\"3d\") # Use this for matplotlib prior to 3.3.0 only. #ax.set_aspect(\"equal\") # # Use this for matplotlib 3.3.0 and later. 
# https:\/\/github.com\/matplotlib\/matplotlib\/pull\/17515 ax.set_box_aspect([1.0, 1.0, 1.0]) X = np.random.rand(100)*10+5 Y = np.random.rand(100)*5+2.5 Z = np.random.rand(100)*50+25 scat = ax.scatter(X, Y, Z) set_axes_equal(ax) plt.show() ```", "best_answers_score":0.8, "library_name":"matplotlib", "question_url":"https:\/\/stackoverflow.com\/questions\/13685386\/how-to-set-the-equal-aspect-ratio-for-all-axes-x-y-z", "best_answers_votes":92, "question_length":531, "response_length":1804 }, { "question":"How to give a pandas\/matplotlib bar graph custom colors I just started using pandas\/matplotlib as a replacement for Excel to generate stacked bar charts. I am running into an issue (1) there are only 5 colors in the default colormap, so if I have more than 5 categories then the colors repeat. How can I specify more colors? Ideally, a gradient with a start color and an end color, and a way to dynamically generate n colors in between? (2) the colors are not very visually pleasing. How do I specify a custom set of n colors? Or, a gradient would also work. An example which illustrates both of the above points is below: ``` 4 from matplotlib import pyplot 5 from pandas import * 6 import random 7 8 x = [{i:random.randint(1,5)} for i in range(10)] 9 df = DataFrame(x) 10 11 df.plot(kind='bar', stacked=True) ``` And the output is this:", "response":"You can specify the color option as a list directly to the plot function. ``` from matplotlib import pyplot as plt from itertools import cycle, islice import pandas, numpy as np # I find np.random.randint to be better # Make the data x = [{i:np.random.randint(1,5)} for i in range(10)] df = pandas.DataFrame(x) # Make a list by cycling through the colors you care about # to match the length of your data. my_colors = list(islice(cycle(['b', 'r', 'g', 'y', 'k']), None, len(df))) # Specify this list of colors as the `color` option to `plot`. 
df.plot(kind='bar', stacked=True, color=my_colors) ``` To define your own custom list, you can do a few of the following, or just look up the Matplotlib techniques for defining a color item by its RGB values, etc. You can get as complicated as you want with this. ``` my_colors = ['g', 'b']*5 # <-- this concatenates the list to itself 5 times. my_colors = [(0.5,0.4,0.5), (0.75, 0.75, 0.25)]*5 # <-- make two custom RGBs and repeat\/alternate them over all the bar elements. my_colors = [(x\/10.0, x\/20.0, 0.75) for x in range(len(df))] # <-- Quick gradient example along the Red\/Green dimensions. ``` The last example yields the following simple gradient of colors for me: I didn't play with it long enough to figure out how to force the legend to pick up the defined colors, but I'm sure you can do it. In general, though, a big piece of advice is to just use the functions from Matplotlib directly. Calling them from Pandas is OK, but I find you get better options and performance calling them straight from Matplotlib.", "best_answers_score":0.8, "library_name":"matplotlib", "question_url":"https:\/\/stackoverflow.com\/questions\/11927715\/how-to-give-a-pandas-matplotlib-bar-graph-custom-colors", "best_answers_votes":162, "question_length":838, "response_length":1561 }, { "question":"Why doesn't plt.imshow() display the image? I have this code, copied from a tutorial: ``` import numpy as np np.random.seed(123) from keras.models import Sequential from keras.layers import Dense, Dropout, Activation, Flatten from keras.layers import Convolution2D, MaxPooling2D from keras.utils import np_utils from keras.datasets import mnist (X_train,y_train),(X_test,y_test) = mnist.load_data() print X_train.shape from matplotlib import pyplot as plt plt.imshow(X_train[0]) ``` No image was displayed. Why not? There doesn't appear to be anything wrong with the backend of matplotlib on my computer.
I tested that like so: ``` import matplotlib.pyplot as plt data = [[0, 0.25], [0.5, 0.75]] fig, ax = plt.subplots() im = ax.imshow(data, cmap=plt.get_cmap('hot'), interpolation='nearest', vmin=0, vmax=1) fig.colorbar(im) plt.show() ``` and was able to produce an image: I also tried printing X_train[0] and it looks right.", "response":"The solution was as simple as adding plt.show() at the end of the code snippet: ``` import numpy as np np.random.seed(123) from keras.models import Sequential from keras.layers import Dense, Dropout, Activation, Flatten from keras.layers import Convolution2D, MaxPooling2D from keras.utils import np_utils from keras.datasets import mnist (X_train,y_train),(X_test,y_test) = mnist.load_data() print X_train.shape from matplotlib import pyplot as plt plt.imshow(X_train[0]) plt.show() ```", "best_answers_score":0.8, "library_name":"matplotlib", "question_url":"https:\/\/stackoverflow.com\/questions\/42812230\/why-doesnt-plt-imshow-display-the-image", "best_answers_votes":257, "question_length":927, "response_length":487 }, { "question":"How to add a second x-axis I have a very simple question. I need to have a second x-axis on my plot and I want that this axis has a certain number of tics that correspond to certain position of the first axis. Let's try with an example. Here I am plotting the dark matter mass as a function of the expansion factor, defined as 1\/(1+z), that ranges from 0 to 1. ``` semilogy(1\/(1+z),mass_acc_massive,'-',label='DM') xlim(0,1) ylim(1e8,5e12) ``` I would like to have another x-axis, on the top of my plot, showing the corresponding z for some values of the expansion factor. Is that possible? If yes, how can I have xtics ax", "response":"I'm taking a cue from the comments in @Dhara's answer, it sounds like you want to set a list of new_tick_locations by a function from the old x-axis to the new x-axis. 
The tick_function below takes in a numpy array of points, maps them to a new value and formats them: ``` import numpy as np import matplotlib.pyplot as plt fig = plt.figure() ax1 = fig.add_subplot(111) ax2 = ax1.twiny() X = np.linspace(0,1,1000) Y = np.cos(X*20) ax1.plot(X,Y) ax1.set_xlabel(r\"Original x-axis: $X$\") new_tick_locations = np.array([.2, .5, .9]) def tick_function(X): V = 1\/(1+X) return [\"%.3f\" % z for z in V] ax2.set_xlim(ax1.get_xlim()) ax2.set_xticks(new_tick_locations) ax2.set_xticklabels(tick_function(new_tick_locations)) ax2.set_xlabel(r\"Modified x-axis: $1\/(1+X)$\") plt.show() ```", "best_answers_score":0.8, "library_name":"matplotlib", "question_url":"https:\/\/stackoverflow.com\/questions\/10514315\/how-to-add-a-second-x-axis", "best_answers_votes":157, "question_length":622, "response_length":773 }, { "question":"Turn off axes in subplots I have the following code: ```py import matplotlib.pyplot as plt import matplotlib.image as mpimg import matplotlib.cm as cm img = mpimg.imread(\"lena.jpg\") fig, axs = plt.subplots(2, 2) axs[0,0].imshow(img, cmap = cm.Greys_r) axs[0,0].set_title(\"Rank = 512\") rank = 128 new_img = prune_matrix(rank, img) axs[0,1].imshow(new_img, cmap = cm.Greys_r) axs[0,1].set_title(\"Rank = %s\" %rank) rank = 32 new_img = prune_matrix(rank, img) axs[1,0].imshow(new_img, cmap = cm.Greys_r) axs[1,0].set_title(\"Rank = %s\" %rank) rank = 16 new_img = prune_matrix(rank, img) axs[1,1].imshow(new_img, cmap = cm.Greys_r) axs[1,1].set_title(\"Rank = %s\" %rank) plt.show() ``` However, the result is pretty ugly because of the values on the axes: How can I turn off axes values for all subplots simultaneously? How to remove axis, legends, and white padding doesn't work because I don't know how to make it work with subplots.", "response":"You can turn the Axes off by following the advice in Veedrac's comment (linking to here) with one small modification. 
Rather than using plt.axis('off'), use ax.axis('off') where ax is a matplotlib.axes object. To do this, index each Axes, axs[0, 0].axis('off'), and so on for each subplot. See Native Matplotlib interfaces for the difference between pyplot and Axes. The code below shows the result without the prune_matrix, which is not available. ```py import matplotlib.pyplot as plt import matplotlib.image as mpimg import matplotlib.cm as cm import matplotlib.cbook as cbook # used for matplotlib sample image # load readily available sample image with cbook.get_sample_data('grace_hopper.jpg') as image_file: img = plt.imread(image_file) # read a local file # img = mpimg.imread(\"file.jpg\") fig, axs = plt.subplots(nrows=2, ncols=2, figsize=(8, 8), tight_layout=True) axs[0, 0].imshow(img, cmap=cm.Greys_r) axs[0, 0].set_title(\"Rank = 512\") axs[0, 0].axis(\"off\") axs[0, 1].imshow(img, cmap=cm.Greys_r) axs[0, 1].set_title(\"Rank = %s\" % 128) axs[0, 1].axis(\"off\") axs[1, 0].imshow(img, cmap=cm.Greys_r) axs[1, 0].set_title(\"Rank = %s\" % 32) axs[1, 0].axis(\"off\") axs[1, 1].imshow(img, cmap=cm.Greys_r) axs[1, 1].set_title(\"Rank = %s\" % 16) axs[1, 1].axis(\"off\") plt.show() ``` Note: To turn off only the x or y axis you can use set_visible() e.g.: ``` axs[0, 0].xaxis.set_visible(False) # Hide only x axis ``` Iterative approach ```py fig, axs = plt.subplots(nrows=2, ncols=2, figsize=(8, 8), tight_layout=True) # convert the 2d array to 1d, which removes the need to iterate through i and j axs = axs.flat ranks = [512, 128, 32, 16] # iterate through each Axes with the associate rank for ax, rank in zip(axs, ranks): ax.imshow(img, cmap=cm.Greys_r) ax.set_title(f'Rank = {rank}') ax.axis('off') plt.show() ```", "best_answers_score":0.8, "library_name":"matplotlib", "question_url":"https:\/\/stackoverflow.com\/questions\/25862026\/turn-off-axes-in-subplots", "best_answers_votes":202, "question_length":928, "response_length":1816 }, { "question":"How to create a scatter plot by category 
[duplicate] This question already has answers here: Color a scatter plot by Column Values (6 answers) Closed 3 years ago. I am trying to make a simple scatter plot in pyplot using a Pandas DataFrame object, but want an efficient way of plotting two variables with the symbols dictated by a third column (key). I have tried various ways using df.groupby, but not successfully. A sample df script is below. This colours the markers according to 'key1', but I'd like to see a legend with 'key1' categories. Am I close? Thanks. ``` import numpy as np import pandas as pd import matplotlib.pyplot as plt df = pd.DataFrame(np.random.normal(10,1,30).reshape(10,3), index = pd.date_range('2010-01-01', freq = 'M', periods = 10), columns = ('one', 'two', 'three')) df['key1'] = (4,4,4,6,6,6,8,8,8,8) fig1 = plt.figure(1) ax1 = fig1.add_subplot(111) ax1.scatter(df['one'], df['two'], marker = 'o', c = df['key1'], alpha = 0.8) plt.show() ```", "response":"You can use scatter for this, but that requires having numerical values for your key1, and you won't have a legend, as you noticed. It's better to just use plot for discrete categories like this. For example: ``` import matplotlib.pyplot as plt import numpy as np import pandas as pd np.random.seed(1974) # Generate Data num = 20 x, y = np.random.random((2, num)) labels = np.random.choice(['a', 'b', 'c'], num) df = pd.DataFrame(dict(x=x, y=y, label=labels)) groups = df.groupby('label') # Plot fig, ax = plt.subplots() ax.margins(0.05) # Optional, just adds 5% padding to the autoscaling for name, group in groups: ax.plot(group.x, group.y, marker='o', linestyle='', ms=12, label=name) ax.legend() plt.show() ``` If you'd like things to look like the default pandas style, then just update the rcParams with the pandas stylesheet and use its color generator.
(I'm also tweaking the legend slightly): ``` import matplotlib.pyplot as plt import numpy as np import pandas as pd np.random.seed(1974) # Generate Data num = 20 x, y = np.random.random((2, num)) labels = np.random.choice(['a', 'b', 'c'], num) df = pd.DataFrame(dict(x=x, y=y, label=labels)) groups = df.groupby('label') # Plot plt.rcParams.update(pd.tools.plotting.mpl_stylesheet) colors = pd.tools.plotting._get_standard_colors(len(groups), color_type='random') fig, ax = plt.subplots() ax.set_color_cycle(colors) ax.margins(0.05) for name, group in groups: ax.plot(group.x, group.y, marker='o', linestyle='', ms=12, label=name) ax.legend(numpoints=1, loc='upper left') plt.show() ```", "best_answers_score":0.8, "library_name":"matplotlib", "question_url":"https:\/\/stackoverflow.com\/questions\/21654635\/how-to-create-a-scatter-plot-by-category", "best_answers_votes":143, "question_length":975, "response_length":1547 }, { "question":"`sharex` axis but don't show x-axis tick labels for both, just one I'm using python + matplotlib and I'm having two plots share an axis. If you try to set graph1.set_xticklabels([]) while sharing an axis, it has no effect because it is shared. Is there a way to share the axis AND be able to hide the x axis of one plot?", "response":"This is a common gotcha when using shared axes. Fortunately, there's a simple fix: use plt.setp(ax.get_xticklabels(), visible=False) to make the labels invisible on just one axis. This is equivalent to [label.set_visible(False) for label in ax.get_xticklabels()], for whatever it's worth. setp will automatically operate on an iterable of matplotlib objects, as well as individual objects. 
As an example: ``` import matplotlib.pyplot as plt fig = plt.figure() ax1 = fig.add_subplot(2,1,1) ax1.plot(range(10), 'b-') ax2 = fig.add_subplot(2,1,2, sharex=ax1) ax2.plot(range(10), 'r-') plt.setp(ax1.get_xticklabels(), visible=False) plt.show() ```", "best_answers_score":0.8, "library_name":"matplotlib", "question_url":"https:\/\/stackoverflow.com\/questions\/4209467\/sharex-axis-but-dont-show-x-axis-tick-labels-for-both-just-one", "best_answers_votes":146, "question_length":320, "response_length":643 }, { "question":"Specifying the order of layers Suppose I run the following script: ``` import matplotlib.pyplot as plt lineWidth = 20 plt.figure() plt.plot([0,0],[-1,1], lw=lineWidth, c='b') plt.plot([-1,1],[-1,1], lw=lineWidth, c='r') plt.plot([-1,1],[1,-1], lw=lineWidth, c='g') plt.show() ``` This produces the following: How can I specify the top-to-bottom order of the layers instead of having Python pick for me?", "response":"I don't know why zorder has that behavior and it's likely that might be a bug or, at the very least, a badly documented feature. It might be because there are already automatic references to zorder when you build a plot (like grid, axis, and so on...) and when you try to specify the zorder for elements you are somehow overlapping them. This is hypothetical in any case. For you to solve your problem just make the differences in zorder exaggerated. 
For instance instead of 0,1,2, make it 0,5,10: ``` import matplotlib.pyplot as plt lineWidth = 20 plt.figure() plt.plot([0,0],[-1,1], lw=lineWidth, c='b',zorder=10) plt.plot([-1,1],[-1,1], lw=lineWidth, c='r',zorder=5) plt.plot([-1,1],[1,-1], lw=lineWidth, c='g',zorder=0) plt.show() ``` Which results in this: For this plot I specified the opposite order shown in your question.", "best_answers_score":0.8, "library_name":"matplotlib", "question_url":"https:\/\/stackoverflow.com\/questions\/37246941\/specifying-the-order-of-layers", "best_answers_votes":178, "question_length":402, "response_length":830 }, { "question":"A logarithmic colorbar in matplotlib scatter plot I would like to make the colors of the points on the scatter plot correspond to the value of the void fraction, but on a logarithmic scale to amplify differences. I did this, but now when I do plt.colorbar(), it displays the log of the void fraction, when I really want the actual void fraction. How can I make a log scale on the colorbar with the appropriate labels of the void fraction, which belongs to [0.00001,1]? Here is an image of the plot I have now, but the void fraction colorbar is not appropriately labeled to correspond to the true void fraction, instead of the log of it. ``` fig = plt.figure() plt.scatter(x,y,edgecolors='none',s=marker_size,c=np.log(void_fraction)) plt.colorbar() plt.title('Colorbar: void fraction') ``` Thanks for your help.", "response":"There is now a section of the documentation describing how color mapping and normalization works The way that matplotlib does color mapping is in two steps, first a Normalize function (wrapped up by the sub-classes of matplotlib.colors.Normalize) which maps the data you hand in to [0, 1]. The second step maps values in [0,1] -> RGBA space. You just need to use the LogNorm normalization class, passed in with the norm kwarg. 
``` plt.scatter(x,y,edgecolors='none',s=marker_size,c=void_fraction, norm=matplotlib.colors.LogNorm()) ``` When you want to scale\/tweak data for plotting, it is better to let matplotlib do the transformations than to do it yourself. Normalize doc LogNorm doc matplotlib.color doc", "best_answers_score":0.8, "library_name":"matplotlib", "question_url":"https:\/\/stackoverflow.com\/questions\/17201172\/a-logarithmic-colorbar-in-matplotlib-scatter-plot", "best_answers_votes":143, "question_length":810, "response_length":707 }, { "question":"Can I create AxesSubplot objects, then add them to a Figure instance? Looking at the matplotlib documentation, it seems the standard way to add an AxesSubplot to a Figure is to use Figure.add_subplot: ``` from matplotlib import pyplot fig = pyplot.figure() ax = fig.add_subplot(1,1,1) ax.hist( some params .... ) ``` I would like to be able to create AxesSubPlot-like objects independently of the figure, so I can use them in different figures. Something like ``` fig = pyplot.figure() histoA = some_axes_subplot_maker.hist( some params ..... ) histoA = some_axes_subplot_maker.hist( some other params ..... ) # make one figure with both plots fig.add_subaxes(histo1, 211) fig.add_subaxes(histo1, 212) fig2 = pyplot.figure() # make a figure with the first plot only fig2.add_subaxes(histo1, 111) ``` Is this possible in matplotlib and if so, how can I do this? Update: I have not managed to decouple creation of Axes and Figures, but following examples in the answers below, can easily re-use previously created axes in new or old Figure instances. This can be illustrated with a simple function: ``` def plot_axes(ax, fig=None, geometry=(1,1,1)): if fig is None: fig = plt.figure() if ax.get_geometry() != geometry : ax.change_geometry(*geometry) ax = fig.axes.append(ax) return fig ```", "response":"Typically, you just pass the axes instance to a function.
For example: ``` import matplotlib.pyplot as plt import numpy as np def main(): x = np.linspace(0, 6 * np.pi, 100) fig1, (ax1, ax2) = plt.subplots(nrows=2) plot(x, np.sin(x), ax1) plot(x, np.random.random(100), ax2) fig2 = plt.figure() plot(x, np.cos(x)) plt.show() def plot(x, y, ax=None): if ax is None: ax = plt.gca() line, = ax.plot(x, y, 'go') ax.set_ylabel('Yabba dabba do!') return line if __name__ == '__main__': main() ``` To respond to your question, you could always do something like this: ``` def subplot(data, fig=None, index=111): if fig is None: fig = plt.figure() ax = fig.add_subplot(index) ax.plot(data) ``` Also, you can simply add an axes instance to another figure: ``` import matplotlib.pyplot as plt fig1, ax = plt.subplots() ax.plot(range(10)) fig2 = plt.figure() fig2.axes.append(ax) plt.show() ``` Resizing it to match other subplot \"shapes\" is also possible, but it's going to quickly become more trouble than it's worth. The approach of just passing around a figure or axes instance (or list of instances) is much simpler for complex cases, in my experience...", "best_answers_score":0.8, "library_name":"matplotlib", "question_url":"https:\/\/stackoverflow.com\/questions\/6309472\/can-i-create-axessubplot-objects-then-add-them-to-a-figure-instance", "best_answers_votes":58, "question_length":1287, "response_length":1147 }, { "question":"How to set the range of y-axis for a seaborn boxplot [duplicate] This question already has answers here: How to set the axis limits in Matplotlib? (10 answers) Closed 2 years ago. From the official seaborn documentation, I learned that you can create a boxplot as below: ```py import seaborn as sns sns.set_style(\"whitegrid\") tips = sns.load_dataset(\"tips\") ax = sns.boxplot(x=\"day\", y=\"total_bill\", data=tips) ``` My question is: how do I limit the range of y-axis of this plot? For example, I want the y-axis to be within [10, 40]. 
Is there any easy way to do this?", "response":"It is standard matplotlib.pyplot: ``` import matplotlib.pyplot as plt plt.ylim(10, 40) ``` Or simpler, as mwaskom comments below: ``` ax.set(ylim=(10, 40)) ```", "best_answers_score":0.8, "library_name":"matplotlib", "question_url":"https:\/\/stackoverflow.com\/questions\/33227473\/how-to-set-the-range-of-y-axis-for-a-seaborn-boxplot", "best_answers_votes":174, "question_length":567, "response_length":159 }, { "question":"How to specify legend position in graph coordinates I am aware of the bbox_to_anchor keyword and this thread, which very helpfully suggests how to manually place the legend: How to put the legend out of the plot However, I'd like to use the coordinates of my x- and y-axis in the graph to specify the legend position (inside the plot), as I might need to move the figure into a large figure with a different axis environment, and I don't want to manually play around with those coordinates every time I do this. Is this possible? Edit: A small example is here: ``` import numpy as n f, axarr = plt.subplots(2,sharex=True) axarr[1].set_ylim([0.611,0.675]) axarr[0].set_ylim([0.792,0.856]) axarr[0].plot([0, 0.04, 0.08],n.array([ 0.83333333, 0.82250521,0.81109048]), label='test1') axarr[0].errorbar([0, 0.04, 0.08],n.array([ 0.8, 0.83, 0.82]),n.array([0.1,0.1,0.01]), label='test2') axarr[1].plot([0, 0.04, 0.08],n.array([ 0.66666667, 0.64888304, 0.63042428])) axarr[1].errorbar([0, 0.04, 0.08],n.array([ 0.67, 0.64, 0.62]),n.array([ 0.01, 0.05, 0.1])) axarr[0].legend(bbox_to_anchor=(0.04, 0.82, 1., .102),labelspacing=0.1, handlelength=0.1, handletextpad=0.1,frameon=False, ncol=4, columnspacing=0.7) ``` I think what confuses me is that the legend does not actually start at 0.82, and indeed for my larger plot (with 5 subplots of this type), I need to use legend coordinates bbox_to_anchor=(0.04, 1.15, 1., .102) in order to make the legend appear on coordinates (0.02, 0.83). 
But maybe I am getting something else wrong?", "response":"The loc parameter specifies in which corner of the bounding box the legend is placed. The default for loc is loc=\"best\" which gives unpredictable results when the bbox_to_anchor argument is used. Therefore, when specifying bbox_to_anchor, always specify loc as well. The default for bbox_to_anchor is (0,0,1,1), which is a bounding box over the complete axes. If a different bounding box is specified, it is usually sufficient to use the first two values, which give (x0, y0) of the bounding box. Below is an example where the bounding box is set to position (0.6,0.5) (green dot) and different loc parameters are tested. Because the legend extends outside the bounding box, the loc parameter may be interpreted as \"which corner of the legend shall be placed at position given by the 2-tuple bbox_to_anchor argument\". ``` import matplotlib.pyplot as plt plt.rcParams[\"figure.figsize\"] = 6, 3 fig, axes = plt.subplots(ncols=3) locs = [\"upper left\", \"lower left\", \"center right\"] for l, ax in zip(locs, axes.flatten()): ax.set_title(l) ax.plot([1,2,3],[2,3,1], \"b-\", label=\"blue\") ax.plot([1,2,3],[1,2,1], \"r-\", label=\"red\") ax.legend(loc=l, bbox_to_anchor=(0.6,0.5)) ax.scatter((0.6),(0.5), s=81, c=\"limegreen\", transform=ax.transAxes) plt.tight_layout() plt.show() ``` See especially this answer for a detailed explanation and the question What does a 4-element tuple argument for 'bbox_to_anchor' mean in matplotlib? . If you want to specify the legend position in other coordinates than axes coordinates, you can do so by using the bbox_transform argument. 
It may make sense to use figure coordinates ``` ax.legend(bbox_to_anchor=(1,0), loc=\"lower right\", bbox_transform=fig.transFigure) ``` It may not make too much sense to use data coordinates, but since you asked for it this would be done via bbox_transform=ax.transData.", "best_answers_score":0.8, "library_name":"matplotlib", "question_url":"https:\/\/stackoverflow.com\/questions\/44413020\/how-to-specify-legend-position-in-graph-coordinates", "best_answers_votes":166, "question_length":1524, "response_length":1828 }, { "question":"How to forget previous plots - how can I flush\/refresh? How do you get matplotlib.pyplot to \"forget\" previous plots I am trying to plot multiple times using matplotlib.pyplot The code looks like this: ``` def plottest(): import numpy as np import matplotlib.pyplot as plt a=np.random.rand(10,) b=np.random.rand(10,) c=np.random.rand(10,) plt.plot(a,label='a') plt.plot(b,label='b') plt.plot(c,label='c') plt.legend(loc='upper left') plt.ylabel('mag') plt.xlabel('element)') plt.show() e=np.random.rand(10,) f=np.random.rand(10,) g=np.random.rand(10,) plt.plot(e,label='e') plt.plot(f,label='f') plt.plot(g,label='g') plt.legend(loc='upper left') plt.ylabel('mag') plt.xlabel('element)') plt.show() ``` Unfortunately I keep getting the same plot (actually from some other code which I ran and completed a while ago) no matter what I do. Similar code has worked previously for me. I have looked at these questions: How to \"clean the slate\"? Matplotlib pyplot show() doesn't work once closed (python) matplotlib pyplot show() .. blocking or not? and tried using plt.show(), plt.clf() and plt.close to no avail. Any ideas?", "response":"I would rather use plt.clf() after every plt.show() to just clear the current figure instead of closing and reopening it, keeping the window size and giving you better performance and much better memory usage. Similarly, you could do plt.cla() to just clear the current axes.
To clear a specific axes, useful when you have multiple axes within one figure, you could do for example: ``` fig, axes = plt.subplots(nrows=2, ncols=2) axes[0, 1].clear() ```", "best_answers_score":0.8, "library_name":"matplotlib", "question_url":"https:\/\/stackoverflow.com\/questions\/17106288\/how-to-forget-previous-plots-how-can-i-flush-refresh", "best_answers_votes":137, "question_length":1117, "response_length":453 }, { "question":"multiple axis in matplotlib with different scales [duplicate] This question already has answers here: Overlay plots with different scales (5 answers) Secondary axis with twinx(): how to add to legend (11 answers) Closed 6 years ago. How can multiple scales be implemented in Matplotlib? I am not talking about the primary and secondary axis plotted against the same x-axis, but something like many trends which have different scales plotted on the same y-axis and that can be identified by their colors. For example, if I have trend1 ([0,1,2,3,4]) and trend2 ([5000,6000,7000,8000,9000]) to be plotted against time and want the two trends to be of different colors and, in the Y-axis, different scales, how can I accomplish this with Matplotlib? When I looked into Matplotlib, they say that they don't have this for now though it is definitely on their wishlist. Is there a way around to make this happen? Are there any other plotting tools for python that can make this happen?", "response":"Since Steve Tjoa's answer always pops up first and mostly lonely when I search for multiple y-axes at Google, I decided to add a slightly modified version of his answer. This is the approach from this matplotlib example. Reasons: His modules sometimes fail for me in unknown circumstances and cryptic internal errors. I don't like to load exotic modules I don't know (mpl_toolkits.axisartist, mpl_toolkits.axes_grid1). The code below contains more explicit handling of problems people often stumble over (like single legend for multiple axes, using viridis, ...)
rather than implicit behavior. ```py
import matplotlib.pyplot as plt

# Create figure and subplot manually
# fig = plt.figure()
# host = fig.add_subplot(111)

# More versatile wrapper
fig, host = plt.subplots(figsize=(8,5), layout='constrained')  # (width, height) in inches
# (see https:\/\/matplotlib.org\/stable\/api\/_as_gen\/matplotlib.pyplot.subplots.html and
# ..  https:\/\/matplotlib.org\/stable\/tutorials\/intermediate\/constrainedlayout_guide.html)

ax2 = host.twinx()
ax3 = host.twinx()

host.set_xlim(0, 2)
host.set_ylim(0, 2)
ax2.set_ylim(0, 4)
ax3.set_ylim(1, 65)

host.set_xlabel(\"Distance\")
host.set_ylabel(\"Density\")
ax2.set_ylabel(\"Temperature\")
ax3.set_ylabel(\"Velocity\")

color1, color2, color3 = plt.cm.viridis([0, .5, .9])

p1 = host.plot([0, 1, 2], [0, 1, 2], color=color1, label=\"Density\")
p2 = ax2.plot( [0, 1, 2], [0, 3, 2], color=color2, label=\"Temperature\")
p3 = ax3.plot( [0, 1, 2], [50, 30, 15], color=color3, label=\"Velocity\")

host.legend(handles=p1+p2+p3, loc='best')

# right, left, top, bottom
ax3.spines['right'].set_position(('outward', 60))

# no x-ticks
host.xaxis.set_ticks([])

# Alternatively (more verbose):
# host.tick_params(
#     axis='x',          # changes apply to the x-axis
#     which='both',      # both major and minor ticks are affected
#     bottom=False,      # ticks along the bottom edge are off
#     labelbottom=False) # labels along the bottom edge are off
# sometimes handy: direction='in'

# Move \"Velocity\"-axis to the left
# ax3.spines['left'].set_position(('outward', 60))
# ax3.spines['left'].set_visible(True)
# ax3.spines['right'].set_visible(False)
# ax3.yaxis.set_label_position('left')
# ax3.yaxis.set_ticks_position('left')

host.yaxis.label.set_color(p1[0].get_color())
ax2.yaxis.label.set_color(p2[0].get_color())
ax3.yaxis.label.set_color(p3[0].get_color())

# For professional typesetting, e.g. LaTeX, use .pgf or .pdf
# For raster graphics use the dpi argument. E.g. '[...].png\", dpi=300)'
plt.savefig(\"pyplot_multiple_y-axis.pdf\", bbox_inches='tight')
# bbox_inches='tight': Try to strip excess whitespace
# https:\/\/matplotlib.org\/stable\/api\/_as_gen\/matplotlib.pyplot.savefig.html
```", "best_answers_score":0.8, "library_name":"matplotlib", "question_url":"https:\/\/stackoverflow.com\/questions\/9103166\/multiple-axis-in-matplotlib-with-different-scales", "best_answers_votes":139, "question_length":973, "response_length":2666 }, { "question":"Interactive large plot with ~20 million sample points and gigabytes of data I have got a problem (with my RAM) here: it's not able to hold the data I want to plot. I do have sufficient HD space. Is there any solution to avoid that \"shadowing\" of my data-set? Concretely I deal with Digital Signal Processing and I have to use a high sample-rate. My framework (GNU Radio) saves the values (to avoid using too much disk space) in binary. I unpack it. Afterwards I need to plot. I need the plot zoomable, and interactive. And that is an issue. Is there any optimization potential to this, or another software\/programming language (like R or so) which can handle larger data-sets? Actually I want much more data in my plots. But I have no experience with other software. GNUplot fails, with a similar approach to the following. I don't know R (yet). ``` import matplotlib.pyplot as plt
import matplotlib.cbook as cbook
import struct

\"\"\"
plots a cfile

cfile - IEEE single-precision (4-byte) floats, IQ pairs, binary
txt - index,in-phase,quadrature in plaintext

note: directly plotting with numpy results into shadowed functions
\"\"\"

# unpacking the cfile dataset
def unpack_set(input_filename, output_filename):
    index = 0   # index of the samples
    output_filename = open(output_filename, 'wb')

    with open(input_filename, \"rb\") as f:
        byte = f.read(4)    # read 1. column of the vector

        while byte != \"\":
            # stored Bit Values
            floati = struct.unpack('f', byte)   # write value of 1. column to a variable
            byte = f.read(4)                    # read 2. column of the vector
            floatq = struct.unpack('f', byte)   # write value of 2. column to a variable
            byte = f.read(4)                    # next row of the vector and read 1. column

            # delimeter format for matplotlib
            lines = [\"%d,\" % index, format(floati), \",\", format(floatq), \"\\n\"]
            output_filename.writelines(lines)
            index = index + 1

    output_filename.close
    return output_filename.name

# reformats output (precision configuration here)
def format(value):
    return \"%.8f\" % value

# start
def main():
    # specify path
    unpacked_file = unpack_set(\"test01.cfile\", \"test01.txt\")
    # pass file reference to matplotlib
    fname = str(unpacked_file)
    plt.plotfile(fname, cols=(0,1)) # index vs. in-phase

    # optional
    # plt.axes([0, 0.5, 0, 100000]) # for 100k samples
    plt.grid(True)
    plt.title(\"Signal-Diagram\")
    plt.xlabel(\"Sample\")
    plt.ylabel(\"In-Phase\")

    plt.show();

if __name__ == \"__main__\":
    main()
``` Something like plt.swap_on_disk() could cache the stuff on my SSD ;)", "response":"A survey of open source interactive plotting software with a 10 million point scatter plot benchmark on Ubuntu Inspired by the use case described at: https:\/\/stats.stackexchange.com\/questions\/376361\/how-to-find-the-sample-points-that-have-statistically-meaningful-large-outlier-r I have benchmarked a few plotting programs with the exact same input files.
Basically, I wanted to:

- do an XY scatter plot of multidimensional data, hopefully with Z as the point color
- interactively select some interesting looking points from the plot with my mouse
- view all dimensions of the selected points (including at least X, Y and Z) to try and understand why they are outliers in the XY scatter

That problem can be represented by the following simplified test data: ``` python -c 'for i in range(10000000): print(f\"{i},{i*2},{i*4}\")' > 10m1.csv
echo 5000000,20000000,-1 >> 10m1.csv
``` The first few lines of 10m1.csv (~239 MB) look like this: 10m1.csv ```
0,0,0
1,2,4
2,4,8
3,6,12
4,8,16
``` and the very last one, the 10-million-and-first, is the outlier, and looks like: ``` 5000000,20000000,-1 ``` so we basically have:

- a line with inclination 2 and 10 million points on it
- plus a single outlier point outside of the line, on the top center of the plot

something like: ```
Y

^
|
|        +          +
|
|       +
|
|      +
|
|     +
|
|    +
|
|   +
|
|  +
|
| +
|
+-------------------> X
``` And the goal of this benchmark is to find the point (5000000,20000000) on the graphical plot, and then determine the value of the third column from it, which is -1 in our test. When I first wrote this answer, I had used 10m.csv generated with: ``` python -c 'for i in range(10000000): print(f\"{i},{i*2},{i*4}\")' > 10m.csv ``` without the outlier. While this tests performance, it does not test selection capabilities, so the goal is to migrate each test to 10m1.csv when I find motivation to do it. I also made a 10 point + outlier example in case I wanted to evaluate usability for some tool that could not handle the 10m point count: ``` i=0; while [ \"$i\" -lt 10 ]; do echo \"$i,$((2 * i)),$((4 * i))\"; i=$((i + 1)); done > 11.csv
echo 5,20,-1 >> 11.csv
``` To have extra fun, I also prepared an even larger 1 billion point dataset in case any of the programs could handle the 10 million points!
CSV files were getting a bit wonky, so I moved to HDF5: ``` #!\/usr\/bin\/env python3
import h5py
import numpy

size = 1000000000

with h5py.File('1b.hdf5', 'w') as f:
    x = numpy.arange(size + 1)
    x[size] = size \/ 2
    f.create_dataset('x', data=x, dtype='int64')

    y = numpy.arange(size + 1) * 2
    y[size] = 3 * size \/ 2
    f.create_dataset('y', data=y, dtype='int64')

    z = numpy.arange(size + 1) * 4
    z[size] = -1
    f.create_dataset('z', data=z, dtype='int64')
``` This produces a ~23 GiB file analogous to 10m1.csv containing:

- 1 billion points in a straight line much like 10m.csv
- one outlier point at the center top of the graph

I'm also creating a SQLite version of 10m1.csv, because that is perhaps one of the most reasonable formats to work with in practice, given that it will allow for well understood SQL querying, explicit indexing control and binary numeric data: ``` f=10m.sqlite
rm -f \"$f\"
n=10000000
time sqlite3 \"$f\" 'create table t(x integer, y integer, z integer)'
time sqlite3 \"$f\" \"insert into t select value as x, value * 2 as y, value * 4 as z from generate_series(0, $((n - 1)))\"
time sqlite3 \"$f\" \"insert into t values ($((n\/2)), $((3*n\/2)), -1)\"
time sqlite3 \"$f\" 'create index txy on t(x, y)'
``` I also ran that code with n = 1 billion to produce a 1b.sqlite. generate_series is the fastest insertion method I could find so far: Bulk insert huge data into SQLite using Python Generation time: 100 s for the insert, 290 s for index creation. Disk size: 40 GB. I index by (x, y) as that is presumably what would speed up queries made by a viewer tool trying to get all points in a given x-y rectangle. The resulting 10m1.sqlite is about 367 MB, which is larger than the CSV due to the index. The tests were carried out in Ubuntu 18.10 unless mentioned otherwise in its subsection, on a ThinkPad P51 laptop with an Intel Core i7-7820HQ CPU (4 cores \/ 8 threads), 2x Samsung M471A2K43BB1-CRC RAM (2x 16GiB) and an NVIDIA Quadro M1200 4GB GDDR5 GPU.
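For quick experiments, the same line-plus-outlier dataset can also be built directly in memory rather than on disk. A small numpy sketch of my own (not part of the original benchmark; the function name is mine), mirroring the HDF5 generator above:

```python
import numpy as np

def line_with_outlier(n):
    # n points on the line y = 2*x (with z = 4*x), plus one appended
    # outlier at the top center of the plot; assumes n is even
    half = n >> 1
    x = np.arange(n + 1, dtype=np.int64)
    y = 2 * np.arange(n + 1, dtype=np.int64)
    z = 4 * np.arange(n + 1, dtype=np.int64)
    x[n], y[n], z[n] = half, 3 * half, -1  # the outlier row
    return x, y, z

x, y, z = line_with_outlier(1_000_000)
```

This keeps the generation step out of the timing of whichever plotting tool is under test.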
Summary of results This is what I observed, considering my very specific test use case and that I'm a first time user of many of the reviewed software. Does it handle 10 million points?

| Tool | Handles 10m points? | Lots of features? | UI feels good? |
|---|---|---|---|
| Vaex | Yes, even 1 Billion! | Yes. | Yes, Jupyter widget |
| VisIt | Yes, but not 100m | Yes, 2D and 3D, focus on interactive. | No |
| Paraview | No | Same as above, a bit less 2D features maybe. | Very |
| Mayavi | Yes | 3D only, good interactive and scripting support, but more limited features. | OK |
| gnuplot | Barely on non-interactive mode. | Lots of features, but limited in interactive mode. | OK |
| matplotlib | No | Same as above. | OK |
| Bokeh | No, up to 1m | Yes, easy to script. | Very, Jupyter widget |
| PyViz | ? | ? | ? |
| seaborn | ? | ? | ? |
| sqlitebrowser | No | Can visualize SQL query results | Meh |

Vaex 2.0.2 https:\/\/github.com\/vaexio\/vaex Install and get a hello world working as shown at: How to do interactive 2D scatter plot zoom \/ point selection in Vaex? I tested vaex with up to 1 billion points and it worked, it is awesome! It is \"Python-scripted-first\" which is great for reproducibility, and allows me to easily interface with other Python things. The Jupyter setup has a few moving parts, but once I got it running with virtualenv, it was amazing. To load our CSV run in Jupyter: ``` import vaex
df = vaex.from_csv('10m.csv', names=['x', 'y', 'z'])
df.plot_widget(df.x, df.y, backend='bqplot')
``` and we can see instantly: Now, we can zoom, pan and select points with the mouse, and updates are really fast, all in under 10 seconds. Here I have zoomed in to see some individual points and have selected a few of them (faint lighter rectangle on image): After the selection is made with the mouse, this has the exact same effect as using the df.select() method.
So we can extract the selected points by running in Jupyter: ``` df.to_pandas_df(selection=True) ``` which outputs data in the format: ```
         x        y         z    index
0  4525460  9050920  18101840  4525460
1  4525461  9050922  18101844  4525461
2  4525462  9050924  18101848  4525462
3  4525463  9050926  18101852  4525463
4  4525464  9050928  18101856  4525464
5  4525465  9050930  18101860  4525465
6  4525466  9050932  18101864  4525466
``` Since 10M points worked fine, I decided to try 1B points... and it also worked fine! ``` import vaex
df = vaex.open('1b.hdf5')
df.plot_widget(df.x, df.y, backend='bqplot')
``` To observe the outlier, which was invisible on the original plot, we can follow How change the point style in a vaex interactive Jupyter bqplot plot_widget to make individual points larger and visible? and use: ``` df.plot_widget(df.x, df.y, f='log', shape=128, backend='bqplot') ``` which produces: and after selecting the point: we obtain the outlier's full data: ```
           x           y   z
0  500000000  1500000000  -1
``` Here is a demo by the creators with a more interesting dataset and more features: https:\/\/www.youtube.com\/watch?v=2Tt0i823-ec&t=770 There is no built-in sqlite support however, unfortunately: https:\/\/github.com\/vaexio\/vaex\/issues\/864 Tested in Ubuntu 19.04. VisIt 2.13.3 Website: https:\/\/wci.llnl.gov\/simulation\/computer-codes\/visit License: BSD Developed by Lawrence Livermore National Laboratory, which is a National Nuclear Security Administration laboratory, so you can imagine that 10m points would be nothing for it, if only I could get it working. (The book The Supermen: The Story of Seymour Cray by Charles J. Murray (1997) does a good job of showing how computationally power-hungry labs such as these were when building the first H bombs, because you can't just run experiments at will with nukes, and even if you do, you can't really measure what you would like because it blows up too fast and too hot: a computer model is a must.
And they decided that a bunch of physicists' wives with calculators wasn't going to cut it, as it had for the earlier Los Alamos fission bomb. When Israel bought one of their computers, everyone immediately assumed it was to make nukes.) Installation: there is no Debian package, just download the Linux binaries from the website. Runs without installing. See also: https:\/\/askubuntu.com\/questions\/966901\/installing-visit Based on VTK, which is the backend library that many of the high-performance graphing packages use. Written in C. After 3 hours of playing with the UI, I did get it working, and it did solve my use case as detailed at: https:\/\/stats.stackexchange.com\/questions\/376361\/how-to-find-the-sample-points-that-have-statistically-meaningful-large-outlier-r Here is what it looks like on the test data of this post: and a zoom with some picks: and here is the picks window: Performance-wise, VisIt was very good: every graphic operation either took only a small amount of time or was immediate. When I had to wait, it showed a \"processing\" message with the percentage of work left, and the GUI didn't freeze. Since 10m points worked so well, I also tried 100m points (a 2.7G CSV file), but it crashed \/ went into a weird state unfortunately. I watched it in htop as the 4 VisIt threads took up all of my 16GiB RAM and died, likely due to a failed malloc. The initial getting started was a bit painful: many of the defaults feel atrocious if you are not a nuclear bomb engineer. E.g.:

- default point size 1px (gets confused with dust on my monitor)
- axes scale from 0.0 to 1.0: How to show the actual axes number values on the Visit plotting program instead of fractions from 0.0 to 1.0?
- multi-window setup, nasty multi popups when you Pick data points
- shows your username and plot date (remove with \"Controls\" > \"Annotation\" > \"User information\")
- automatic positioning defaults are bad: the legend conflicts with the axes, and I could not find title automation, so I had to add a label and reposition everything by hand
- there are just a lot of features, so it can be hard to find what you want
- the manual was very helpful, but it is a 386 page PDF mammoth ominously dated \"October 2005 Version 1.5\". I wonder if they used this to develop Trinity! (And it is a nice Sphinx HTML, created just after I originally answered this question.)
- no Ubuntu package. But the prebuilt binaries did just work.

I attribute these problems to:

- it has been around for such a long time and uses some outdated GUI ideas
- you can't just click on the plot elements to change them (e.g. axes, title, etc.), and there are a lot of features, so it is a bit hard to find the one you are looking for

I also love how a bit of LLNL infrastructure leaks into that repo. See for example docs\/OfficeHours.txt and other files in that directory! I'm sorry for Brad, who is the \"Monday Morning guy\"! Oh, and the password for the answering machine is \"Kill Ed\", don't forget that. Paraview 5.9.0 Website: https:\/\/www.paraview.org\/ License: BSD Tested on: Ubuntu 20.10. Installation: ``` sudo apt install paraview ``` or get the latest by downloading prebuilt binaries from the website. This is what I did for this review, since the apt one was only at 5.7.0. I downloaded ParaView-5.9.0-MPI-Linux-Python3.8-64bit.tar.gz. Developed by Kitware and Los Alamos National Laboratory, and later Sandia National Laboratories (so the other two NNSA labs), so once again we expect that it will easily handle the data. Also VTK based and written in C++, which was further promising. However, I was disappointed: for some reason, 10m points made the GUI very slow and unresponsive, making it unusable.
Whenever I clicked something, like to hide the lines, it took several dozen seconds. I think that at some point it just glitched out and stopped responding at all. I'm fine with a controlled, well advertised \"I'm working now, wait a bit\" moment, but the GUI freezing while that happens? Not acceptable. htop showed that Paraview was using 8 threads and 3GB RAM, so neither CPU nor memory was maxed out. GUI-wise, Paraview is very nice and modern, way better than VisIt when it is not stuttering. Since 10m1.csv killed it, I tested with 11.csv to see if I would have been able to solve my problem except for performance, and the answer is yes:

- paraview 11.csv
- select CSV reader from the popup
- properties > Apply on the left
- right click on the CSV on Pipeline Browser
- Add filter > Alphabetical > Plot data. (Why is plotting a filter? Not very intuitive for first time users, related: paraview: plot data from csv file. I'm sure it is one of those things that make sense once you understand further generalizations of what filters can do, but still.)
- properties > Apply
- unselect \"Use index for x axis\"
- X Array Name: Field 0
- Series Parameters: remove Field 0 and Field 2
- Select Field 1 and set: Line style: None, Marker style: cross, Marker size: increase or decrease as needed
- \"Rectangle Selection (s)\" icon above the plot
- Select outlier (point is highlighted)
- Add another filter to the plot: \"Extract Selection\"
- Apply

And finally!!! I get a table containing only the selected outlier, showing the value of \"Field 2\" as -1: So yes, not exactly a walk in the park, but I managed eventually. Another downside is that Paraview felt like it was lacking features compared to VisIt, e.g.: I could not find how to set the color of my scatter based on a third column: How to color scatter plot points by the value of a third column in Paraview like gnuplot palette?
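If the goal is only to locate such an outlier, rather than to browse interactively, a plain numpy pass can substitute for all the GUI clicking above. A rough sketch of my own (least-squares residual ranking; the function name is mine, and this is not something any of the reviewed tools do for you):

```python
import numpy as np

def largest_residual(x, y, z):
    # fit y ~ a*x + b by least squares, then rank points by |residual|
    a, b = np.polyfit(x, y, 1)
    residuals = np.abs(y - (a * x + b))
    i = int(np.argmax(residuals))
    return i, (int(x[i]), int(y[i]), int(z[i]))

# the 10m1.csv shape in miniature: a line y = 2*x plus one outlier
n = 1000
x = np.append(np.arange(n), 500)
y = np.append(2 * np.arange(n), 1500)
z = np.append(4 * np.arange(n), -1)
i, point = largest_residual(x, y, z)  # point == (500, 1500, -1)
```

This scales to the 10m case easily, but of course it only answers one pre-formulated question instead of supporting open-ended exploration.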
Mayavi 4.6.2 Website: https:\/\/github.com\/enthought\/mayavi Developed by: Enthought Install: ``` sudo apt-get install libvtk6-dev
python3 -m pip install -U mayavi PyQt5
``` The VTK Python one. Mayavi seems to be very focused on 3D; I could not find how to do 2D plots in it, so it does not cut it for my use case, unfortunately. Just to check performance, however, I adapted the example from: https:\/\/docs.enthought.com\/mayavi\/mayavi\/auto\/example_scatter_plot.html for 10 million points, and it ran just fine without lagging: ``` import numpy as np
from tvtk.api import tvtk
from mayavi.scripts import mayavi2

n = 10000000
pd = tvtk.PolyData()
pd.points = np.linspace((1,1,1),(n,n,n),n)
pd.verts = np.arange(n).reshape((-1, 1))
pd.point_data.scalars = np.arange(n)

@mayavi2.standalone
def main():
    from mayavi.sources.vtk_data_source import VTKDataSource
    from mayavi.modules.outline import Outline
    from mayavi.modules.surface import Surface

    mayavi.new_scene()
    d = VTKDataSource()
    d.data = pd
    mayavi.add_source(d)
    mayavi.add_module(Outline())
    s = Surface()
    mayavi.add_module(s)
    s.actor.property.trait_set(representation='p', point_size=1)

main()
``` Output: I couldn't, however, zoom in enough to see individual points; the near 3D plane was too far. Maybe there is a way? One cool thing about Mayavi is that the devs put a lot of effort into allowing you to fire up and set up the GUI from a Python script nicely, much like Matplotlib and gnuplot. It seems that this is also possible in Paraview, but the docs are not as good, at least. Generally it feels not as featureful as VisIt \/ Paraview. For example, I couldn't directly load a CSV from the GUI: How to load a CSV file from the Mayavi GUI? Gnuplot 5.2.2 Website: http:\/\/www.gnuplot.info\/ gnuplot is really convenient when I need to go quick and dirty, and it is always the first thing that I try.
Installation: ``` sudo apt-get install gnuplot ``` For non-interactive use, it can handle 10m points reasonably well: ``` #!\/usr\/bin\/env gnuplot
set terminal png size 1024,1024
set output \"gnuplot.png\"
set key off
set datafile separator \",\"
plot \"10m1.csv\" using 1:2:3:3 with labels point
``` which finished in 7 seconds: But if I try to go interactive with ``` #!\/usr\/bin\/env gnuplot
set terminal wxt size 1024,1024
set key off
set datafile separator \",\"
plot \"10m.csv\" using 1:2:3 palette
``` and: ``` gnuplot -persist main.gnuplot ``` then the initial render and zooms feel too sluggish. I can't even see the rectangle selection line! Also note that for my use case, I needed to use hypertext labels as in: ``` plot \"10m.csv\" using 1:2:3 with labels hypertext ``` but there was a performance bug with the labels feature, including for non-interactive rendering. But I reported it, and Ethan solved it in a day: https:\/\/groups.google.com\/forum\/#!topic\/comp.graphics.apps.gnuplot\/qpL8aJIi9ZE I must say, however, that there is one reasonable workaround for outlier selection: just add labels with the row ID to all points! If there are many points nearby, you won't be able to read the labels. But for the outliers which you care about, you just might! For example, if I add one outlier to our original data: ``` cp 10m.csv 10m1.csv
printf '2500000,10000000,40000000\\n' >> 10m1.csv
``` and modify the plot command to: ``` #!\/usr\/bin\/env gnuplot
set terminal png size 1024,1024
set output \"gnuplot.png\"
set key off
set datafile separator \",\"
plot \"10m1.csv\" using 1:2:3:3 palette with labels
``` This slowed down the plotting significantly (40 mins after the fix mentioned above!!!), but produces a reasonable output: so with some data filtering, we would get there, eventually.
numpy.loadtxt alone took about 10 seconds, so I knew this wasn't going to go well: ``` #!\/usr\/bin\/env python3
import numpy
import matplotlib.pyplot as plt

x, y, z = numpy.loadtxt('10m.csv', delimiter=',', unpack=True)
plt.figure(figsize=(8, 8), dpi=128)
plt.scatter(x, y, c=z)

# Non-interactive.
#plt.savefig('matplotlib.png')

# Interactive.
plt.show()
``` First the non-interactive attempt gave good output, but took 3 minutes and 55 seconds... Then the interactive one took a long time on initial render and on zooms. Not usable: Notice on this screenshot how the zoom selection, which should immediately zoom and disappear, stayed on screen for a long time while it waited for the zoom to be calculated! I had to comment out plt.figure(figsize=(8, 8), dpi=128) for the interactive version to work for some reason, or else it blew up with: ``` RuntimeError: In set_size: Could not set the fontsize ``` Bokeh 1.3.1 https:\/\/github.com\/bokeh\/bokeh Ubuntu 19.04 install: ``` python3 -m pip install bokeh ``` Then launch Jupyter: ``` jupyter notebook ``` Now if I plot 1m points, everything works perfectly, the interface is awesome and fast, including zoom and on-hover information: ``` from bokeh.io import output_notebook, show
from bokeh.models import HoverTool
from bokeh.transform import linear_cmap
from bokeh.plotting import figure
from bokeh.models import ColumnDataSource
import numpy as np

output_notebook()
N = 1000000
source = ColumnDataSource(data=dict(
    x=np.random.random(size=N) * N,
    y=np.random.random(size=N) * N,
    z=np.random.random(size=N)
))
hover = HoverTool(tooltips=[(\"z\", \"@z\")])
p = figure()
p.add_tools(hover)
p.circle(
    'x',
    'y',
    source=source,
    color=linear_cmap('z', 'Viridis256', 0, 1.0),
    size=5
)
show(p)
``` Initial view: After a zoom: If I go up to 10m though, it chokes; htop shows that chromium has 8 threads taking up all my memory in uninterruptible IO state.
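When an interactive tool chokes on the raw point count, one generic workaround for signal-like data (my own habit, not a feature of any tool reviewed here; the function name is mine) is to decimate to a per-bin min/max envelope before handing anything to the plotter:

```python
import numpy as np

def minmax_decimate(sig, n_bins=2000):
    # collapse a long 1-D signal to one (min, max) pair per bin so a
    # plotter only ever sees ~2*n_bins points instead of millions,
    # while peaks and envelopes stay visible
    m = len(sig) - (len(sig) % n_bins)  # drop the ragged tail
    chunks = sig[:m].reshape(n_bins, -1)
    lo = chunks.min(axis=1)
    hi = chunks.max(axis=1)
    env = np.empty(2 * n_bins, dtype=sig.dtype)
    env[0::2] = lo  # interleave min and max so a simple line plot
    env[1::2] = hi  # sweeps the full envelope of each bin
    return env

sig = np.sin(np.linspace(0, 1000, 10_000_000))
env = minmax_decimate(sig)  # 4000 points instead of 10M
```

This only helps for zoomed-out overviews, so a real viewer would re-decimate from the raw data on every zoom.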
This asks about referencing the points: How to reference selected bokeh data points PyViz https:\/\/pyviz.org\/ TODO evaluate. Integrates Bokeh + datashader + other tools. Video demoing 1B datapoints: https:\/\/www.youtube.com\/watch?v=k27MJJLJNT4 \"PyViz: Dashboards for Visualizing 1 Billion Datapoints in 30 Lines of Python\" by \"Anaconda, Inc.\" published on 2018-04-17. seaborn https:\/\/seaborn.pydata.org\/ TODO evaluate. There's already a QA on how to use seaborn to visualize at least 50 million rows. sqlitebrowser 3.12.2 https:\/\/github.com\/sqlitebrowser\/sqlitebrowser I tried this one to see if it could handle 10m1.sqlite, but unfortunately it couldn't. Shame! It is quite cool that it can directly plot query results, though. Here's what it looks like: In this image, I loaded 10m1.sqlite into the tool, and then started browsing the data. But it only plots the data that was loaded for browsing. You can click the button on the bottom right under the plot to \"Load all data and redraw plot\", but that opens a progress bar that goes up 1% every 3 s, so it is not looking promising and I gave up. Tested on Ubuntu 23.04. SQL histogram queries I wonder why I can't easily find an interactive UI tool that uses this as a backend. SQL histograms on an indexed database feel like the most rational way to go about things. E.g. using 10 steps and ignoring empty bins: ``` div=10
max=10
x=0
y=0
x2=10000000
y2=20000000
dx=$(((x2 - x) \/ div))
dy=$(((y2 - y) \/ div))
cx=$x
while [ \"$cx\" -lt \"$x2\" ]; do
  cy=$y
  while [ \"$cy\" -lt \"$y2\" ]; do
    time sqlite3 10m1.sqlite --cmd '.mode csv' <<EOF
select * from (
  select x, y, z from t
  where x >= $cx and x < $((cx + dx)) and
        y >= $cy and y < $((cy + dy))
  limit $max
)
EOF
    cy=$((cy+dy))
  done
  cx=$((cx+dx))
done
``` finished in just 0.2s, which is amazing. It would likely scale up to 1B if it weren't for the insanely long generation time. PostgreSQL index creation wasn't faster either, unfortunately: How to port simple spatial index using SQLite R-Trees to Postgres? though at least it supports points and not just rectangles.
Tested on Ubuntu 23.04.", "best_answers_score":0.8, "library_name":"matplotlib", "question_url":"https:\/\/stackoverflow.com\/questions\/5854515\/interactive-large-plot-with-20-million-sample-points-and-gigabytes-of-data", "best_answers_votes":109, "question_length":2437, "response_length":20941 }, { "question":"Pip install Matplotlib error with virtualenv I am trying to install matplotlib in a new virtualenv. When I do: ``` pip install matplotlib ``` or ``` pip install http:\/\/sourceforge.net\/projects\/matplotlib\/files\/matplotlib\/matplotlib-1.1.0\/matplotlib-1.1.0.tar.gz ``` I get this error: ``` building 'matplotlib._png' extension gcc -pthread -fno-strict-aliasing -DNDEBUG -g -fwrapv -O2 -Wall -fPIC - DPY_ARRAY_UNIQUE_SYMBOL=MPL_ARRAY_API -DPYCXX_ISO_CPP_LIB=1 -I\/usr\/local\/include -I\/usr\/include -I. -I\/home\/sam\/django-projects\/datazone\/local\/lib\/python2.7\/site-packages\/numpy\/core\/include -I. -I\/usr\/include\/python2.7 -c src\/_png.cpp -o build\/temp.linux-x86_64-2.7\/src\/_png.o src\/_png.cpp:10:20: fatal error: png.h: No such file or directory compilation terminated. error: command 'gcc' failed with exit status 1 ``` Anyone have an idea what is going on? Any help much appreciated.", "response":"Building Matplotlib requires libpng (and freetype, as well) which isn't a python library, so pip doesn't handle installing it (or freetype). You'll need to install something along the lines of libpng-devel and freetype-devel (or whatever the equivalent is for your OS). See the building requirements\/instructions for matplotlib.", "best_answers_score":0.8, "library_name":"matplotlib", "question_url":"https:\/\/stackoverflow.com\/questions\/9829175\/pip-install-matplotlib-error-with-virtualenv", "best_answers_votes":178, "question_length":879, "response_length":328 }, { "question":"How to draw a line with matplotlib? I cannot find a way to draw an arbitrary line with matplotlib Python library. 
It allows drawing horizontal and vertical lines (with matplotlib.pyplot.axhline and matplotlib.pyplot.axvline, for example), but I do not see how to draw a line through two given points (x1, y1) and (x2, y2). Is there a way? Is there a simple way?", "response":"This will draw a line that passes through the points (-1, 1) and (12, 4), and another one that passes through the points (1, 3) and (10, 2). x1 are the x coordinates of the points for the first line, y1 are the y coordinates for the same -- the elements in x1 and y1 must be in sequence. x2 and y2 are the same for the other line. ``` import matplotlib.pyplot as plt

x1, y1 = [-1, 12], [1, 4]
x2, y2 = [1, 10], [3, 2]
plt.plot(x1, y1, x2, y2, marker = 'o')
plt.show()
``` I suggest you spend some time reading \/ studying the basic tutorials found on the very rich matplotlib website to familiarize yourself with the library. What if I don't want line segments? [edit]: As shown by @thomaskeefe, starting with matplotlib 3.3, this is now built in as a convenience: plt.axline((x1, y1), (x2, y2)), rendering the following obsolete. There are no direct ways to have lines extend to infinity... matplotlib will either resize\/rescale the plot so that the furthest point will be on the boundary and the other inside, drawing line segments in effect; or you must choose points outside of the boundary of the surface you want to set visible, and set limits for the x and y axis.
As follows: ``` import matplotlib.pyplot as plt

x1, y1 = [-1, 12], [1, 10]
x2, y2 = [-1, 10], [3, -1]
plt.xlim(0, 8), plt.ylim(-2, 8)
plt.plot(x1, y1, x2, y2, marker = 'o')
plt.show()
```", "best_answers_score":0.8, "library_name":"matplotlib", "question_url":"https:\/\/stackoverflow.com\/questions\/36470343\/how-to-draw-a-line-with-matplotlib", "best_answers_votes":128, "question_length":361, "response_length":1355 }, { "question":"Relationship between dpi and figure size I have created a figure using matplotlib, but I have realized the plot axis and the drawn line get zoomed out. Reading this earlier discussion thread, it explains how to set the figure size. ``` fig, ax = plt.subplots()
fig.set_size_inches(3, 1.5)
plt.savefig(file.jpeg, edgecolor='black', dpi=400, facecolor='black', transparent=True)
``` With the above code (other configurations removed for brevity), I do get a resulting image file with the 1200 x 600 desired dimensions (should we say resolution too?) and desired file size. The projected image is scaled out in an unusual way, annotations for example are enlarged. While I can set the size of the labels on the axis, the figure doesn't look proportional with respect to the scale, since the bottom and right spines are large and so are the plotted lines. The question, therefore, is: what configurations are going wrong?", "response":"Figure size (figsize) determines the size of the figure in inches. This gives the amount of space the axes (and other elements) have inside the figure. The default figure size is (6.4, 4.8) inches in matplotlib 2. A larger figure size will allow for longer texts, more axes or more ticklabels to be shown. Dots per inch (dpi) determines how many pixels the figure comprises. The default dpi in matplotlib is 100. A figure of figsize=(w,h) will have ``` px, py = w*dpi, h*dpi  # pixels
# e.g.
# 6.4 inches * 100 dpi = 640 pixels
``` So in order to obtain a figure with a pixel size of e.g.
(1200,600) you may choose several combinations of figure size and dpi, e.g. ``` figsize=(15,7.5), dpi= 80
figsize=(12,6) , dpi=100
figsize=( 8,4) , dpi=150
figsize=( 6,3) , dpi=200
etc.
``` Now, what is the difference? This is determined by the size of the elements inside the figure. Most elements like lines, markers, and texts have a size given in points. Matplotlib figures use a points-per-inch (ppi) value of 72. A line with thickness 1 point will be 1.\/72. inch wide. A text with fontsize 12 points will be 12.\/72. inch high. Of course, if you change the figure size in inches, points will not change, so a larger figure in inches still has the same size of the elements. Changing the figure size is thus like taking a piece of paper of a different size. Doing so would, of course, not change the width of the line drawn with the same pen. On the other hand, changing the dpi scales those elements. At 72 dpi, a line of 1 point size is one pixel strong. At 144 dpi, this line is 2 pixels strong. A larger dpi will therefore act like a magnifying glass. All elements are scaled by the magnifying power of the lens. A comparison for constant figure size and varying dpi is shown in the image below on the left. On the right you see a constant dpi and varying figure size. Figures in each row have the same pixel size. Code to reproduce: ``` import matplotlib.pyplot as plt
%matplotlib inline

def plot(fs,dpi):
    fig, ax=plt.subplots(figsize=fs, dpi=dpi)
    ax.set_title(\"Figsize: {}, dpi: {}\".format(fs,dpi))
    ax.plot([2,4,1,5], label=\"Label\")
    ax.legend()

figsize=(2,2)
for i in range(1,4):
    plot(figsize, i*72)

dpi=72
for i in [2,4,6]:
    plot((i,i), dpi)
```", "best_answers_score":0.8, "library_name":"matplotlib", "question_url":"https:\/\/stackoverflow.com\/questions\/47633546\/relationship-between-dpi-and-figure-size", "best_answers_votes":243, "question_length":911, "response_length":2232 }, { "question":"Which is the recommended way to plot: matplotlib or pylab?
I can plot in Python using either: ``` import matplotlib
matplotlib.pyplot.plot(...)
``` Or: ``` import pylab
pylab.plot(...)
``` Both of these use matplotlib. Which is recommended as the correct method to plot? Why?", "response":"Official docs: Matplotlib, pyplot and pylab: how are they related? Both of those imports boil down to doing exactly the same thing and will run the exact same code; it is just different ways of importing the modules. Also note that matplotlib has two interface layers: a state-machine layer managed by pyplot, and the OO interface pyplot is built on top of, see How can I attach a pyplot function to a figure instance? pylab is a clean way to bulk import a whole slew of helpful functions (the pyplot state machine functions, most of numpy) into a single name space. The main reason this exists (to my understanding) is to work with ipython to make a very nice interactive shell which more-or-less replicates MATLAB (to make the transition easier and because it is good for playing around). See pylab.py and matplotlib\/pylab.py At some level, this is purely a matter of taste and depends a bit on what you are doing. If you are not embedding in a gui (either using a non-interactive backend for bulk scripts or using one of the provided interactive backends) the typical thing to do is ``` import matplotlib.pyplot as plt
import numpy as np

plt.plot(....)
``` which doesn't pollute the name space. I prefer this so I can keep track of where stuff came from. If you use ``` ipython --pylab ``` this is equivalent to running ``` from pylab import * ``` It is now recommended that for new versions of ipython you use ``` ipython --matplotlib ``` which will set up all the proper background details to make the interactive backends work nicely, but will not bulk import anything. You will need to explicitly import the modules you want. ``` import numpy as np
import matplotlib.pyplot as plt
``` is a good start.
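The two interface layers mentioned above differ mainly in who holds the references; a tiny sketch of my own contrasting them (the titles are arbitrary labels, not from the answer):

```python
import matplotlib
matplotlib.use('Agg')  # non-GUI backend so this runs anywhere
import matplotlib.pyplot as plt

# pyplot state-machine layer: calls operate on the "current" figure and axes
plt.figure()
plt.plot([1, 2, 3])
plt.title('state-machine style')

# OO layer underneath: you hold explicit Figure and Axes references
fig, ax = plt.subplots()
ax.plot([1, 2, 3])
ax.set_title('object-oriented style')
```

Both produce identical figures; the OO form just makes it unambiguous which axes each call affects.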
If you are embedding matplotlib in a gui, you don't want to import pyplot, as that will start extra gui main loops, and exactly what you should import depends on exactly what you are doing.", "best_answers_score":0.8, "library_name":"matplotlib", "question_url":"https:\/\/stackoverflow.com\/questions\/16849483\/which-is-the-recommended-way-to-plot-matplotlib-or-pylab", "best_answers_votes":114, "question_length":273, "response_length":1893 }, { "question":"Get matplotlib color cycle state Is it possible to query the current state of the matplotlib color cycle? In other words, is there a function get_cycle_state that will behave in the following way? ``` >>> plot(x1, y1)
>>> plot(x2, y2)
>>> state = get_cycle_state()
>>> print state
2
``` Where I expect the state to be the index of the next color that will be used in a plot. Alternatively, if it returned the next color (\"r\" for the default cycle in the example above), that would be fine too.", "response":"Accessing the color cycle iterator There's no \"user-facing\" (a.k.a. \"public\") method to access the underlying iterator, but you can access it through \"private\" (by convention) methods. However, you can't get the state of an iterator without changing it. Setting the color cycle Quick aside: You can set the color\/property cycle in a variety of ways (e.g. ax.set_color_cycle in versions <1.5, or ax.set_prop_cycle in versions >=1.5). Have a look at the example here for version 1.5 or greater, or the previous style here. Accessing the underlying iterator However, while there's no public-facing method to access the iterable, you can access it for a given axes object (ax) through the _get_lines helper class instance. ax._get_lines is a touch confusingly named, but it's the behind-the-scenes machinery that allows the plot command to process all of the odd and varied ways that plot can be called. Among other things, it's what keeps track of what colors to automatically assign.
Similarly, there's ax._get_patches_for_fill to control cycling through default fill colors and patch properties. At any rate, the color cycle iterable is ax._get_lines.color_cycle for lines and ax._get_patches_for_fill.color_cycle for patches. On matplotlib >=1.5, this has changed to use the cycler library, and the iterable is called prop_cycler instead of color_cycle and yields a dict of properties instead of only a color. All in all, you'd do something like: ``` import matplotlib.pyplot as plt fig, ax = plt.subplots() color_cycle = ax._get_lines.color_cycle # or ax._get_lines.prop_cycler on version >= 1.5 # Note that prop_cycler cycles over dicts, so you'll want next(cycle)['color'] ``` You can't view the state of an iterator However, this object is a \"bare\" iterator. We can easily get the next item (e.g. next_color = next(color_cycle)), but that means that the next color after that is what will be plotted. By design, there's no way to get the current state of an iterator without changing it. In v1.5 or greater, it would be nice to get the cycler object that's used, as we could infer its current state. However, the cycler object itself isn't accessible (publicly or privately) anywhere. Instead, only the itertools.cycle instance created from the cycler object is accessible. Either way, there's no way to get to the underlying state of the color\/property cycler. Match the color of the previously plotted item instead In your case, it sounds like you're wanting to match the color of something that was just plotted. Instead of trying to determine what the color\/property will be, set the color\/etc of your new item based on the properties of what's plotted.
For example, in the case you described, I'd do something like this: ``` import matplotlib.pyplot as plt import numpy as np def custom_plot(x, y, **kwargs): ax = kwargs.pop('ax', plt.gca()) base_line, = ax.plot(x, y, **kwargs) ax.fill_between(x, 0.9*y, 1.1*y, facecolor=base_line.get_color(), alpha=0.5) x = np.linspace(0, 1, 10) custom_plot(x, x) custom_plot(x, 2*x) custom_plot(x, -x, color='yellow', lw=3) plt.show() ``` It's not the only way, but it's cleaner than trying to get the color of the plotted line beforehand, in this case.", "best_answers_score":0.8, "library_name":"matplotlib", "question_url":"https:\/\/stackoverflow.com\/questions\/13831549\/get-matplotlib-color-cycle-state", "best_answers_votes":130, "question_length":492, "response_length":3160 }, { "question":"Heatmap in matplotlib with pcolor? I'd like to make a heatmap like this (shown on FlowingData): The source data is here, but random data and labels would be fine to use, i.e. ``` import numpy column_labels = list('ABCD') row_labels = list('WXYZ') data = numpy.random.rand(4,4) ``` Making the heatmap is easy enough in matplotlib: ``` from matplotlib import pyplot as plt heatmap = plt.pcolor(data) ``` And I even found colormap arguments that look about right: heatmap = plt.pcolor(data, cmap=matplotlib.cm.Blues) But beyond that, I can't figure out how to display labels for the columns and rows and display the data in the proper orientation (origin at the top left instead of bottom left). Attempts to manipulate heatmap.axes (e.g. heatmap.axes.set_xticklabels = column_labels) have all failed. What am I missing here?", "response":"This is late, but here is my Python implementation of the flowingdata NBA heatmap.
updated:1\/4\/2014: thanks everyone ``` # -*- coding: utf-8 -*- # 3.0 # ------------------------------------------------------------------------ # Filename : heatmap.py # Date : 2013-04-19 # Updated : 2014-01-04 # Author : @LotzJoe >> Joe Lotz # Description: My attempt at reproducing the FlowingData graphic in Python # Source : http:\/\/flowingdata.com\/2010\/01\/21\/how-to-make-a-heatmap-a-quick-and-easy-solution\/ # # Other Links: # http:\/\/stackoverflow.com\/questions\/14391959\/heatmap-in-matplotlib-with-pcolor # # ------------------------------------------------------------------------ import matplotlib.pyplot as plt import pandas as pd from urllib2 import urlopen import numpy as np %pylab inline page = urlopen(\"http:\/\/datasets.flowingdata.com\/ppg2008.csv\") nba = pd.read_csv(page, index_col=0) # Normalize data columns nba_norm = (nba - nba.mean()) \/ (nba.max() - nba.min()) # Sort data according to Points, lowest to highest # This was just a design choice made by Yau # inplace=False (default) ->thanks SO user d1337 nba_sort = nba_norm.sort('PTS', ascending=True) nba_sort['PTS'].head(10) # Plot it out fig, ax = plt.subplots() heatmap = ax.pcolor(nba_sort, cmap=plt.cm.Blues, alpha=0.8) # Format fig = plt.gcf() fig.set_size_inches(8, 11) # turn off the frame ax.set_frame_on(False) # put the major ticks at the middle of each cell ax.set_yticks(np.arange(nba_sort.shape[0]) + 0.5, minor=False) ax.set_xticks(np.arange(nba_sort.shape[1]) + 0.5, minor=False) # want a more natural, table-like display ax.invert_yaxis() ax.xaxis.tick_top() # Set the labels # label source:https:\/\/en.wikipedia.org\/wiki\/Basketball_statistics labels = [ 'Games', 'Minutes', 'Points', 'Field goals made', 'Field goal attempts', 'Field goal percentage', 'Free throws made', 'Free throws attempts', 'Free throws percentage', 'Three-pointers made', 'Three-point attempt', 'Three-point percentage', 'Offensive rebounds', 'Defensive rebounds', 'Total rebounds', 'Assists', 'Steals', 'Blocks', 
'Turnover', 'Personal foul'] # note I could have used nba_sort.columns but made \"labels\" instead ax.set_xticklabels(labels, minor=False) ax.set_yticklabels(nba_sort.index, minor=False) # rotate the x tick labels plt.xticks(rotation=90) ax.grid(False) # Turn off all the ticks ax = plt.gca() for t in ax.xaxis.get_major_ticks(): t.tick1On = False t.tick2On = False for t in ax.yaxis.get_major_ticks(): t.tick1On = False t.tick2On = False ``` The output looks like this: There's an ipython notebook with all this code here. I've learned a lot from Stack Overflow, so hopefully someone will find this useful.
However, this doesn't work when I call my .py script from the terminal\/IPython.", "response":"The easiest way is to assign the label when you plot the data, e.g.: ``` import matplotlib.pyplot as plt ax = plt.gca() # or any other way to get an axis object ax.plot(x, y, label=r'$\\sin (x)$') ax.legend() ```", "best_answers_score":0.8, "library_name":"matplotlib", "question_url":"https:\/\/stackoverflow.com\/questions\/14016217\/how-do-i-write-a-latex-formula-in-the-legend-of-a-plot-using-matplotlib-inside-a", "best_answers_votes":118, "question_length":489, "response_length":211 }, { "question":"Drawing average line in histogram I am drawing a histogram using matplotlib in python, and would like to draw a line representing the average of the dataset, overlaid on the histogram as a dotted line (or maybe some other color would do too). Any ideas on how to draw a line overlaid on the histogram? I am using the plot() command, but not sure how to draw a vertical line (i.e. what value should I give for the y-axis? thanks!", "response":"You can use plot or vlines to draw a vertical line, but to draw a vertical line from the bottom to the top of the y axis, axvline is the probably the simplest function to use. Here's an example: ``` In [80]: import numpy as np In [81]: import matplotlib.pyplot as plt In [82]: np.random.seed(6789) In [83]: x = np.random.gamma(4, 0.5, 1000) In [84]: result = plt.hist(x, bins=20, color='c', edgecolor='k', alpha=0.65) In [85]: plt.axvline(x.mean(), color='k', linestyle='dashed', linewidth=1) Out[85]: ``` Result:", "best_answers_score":0.8, "library_name":"matplotlib", "question_url":"https:\/\/stackoverflow.com\/questions\/16180946\/drawing-average-line-in-histogram", "best_answers_votes":178, "question_length":428, "response_length":514 }, { "question":"How to remove lines in a Matplotlib plot How can I remove a line (or lines) of a matplotlib axes in such a way as it actually gets garbage collected and releases the memory back? 
The below code appears to delete the line, but never releases the memory (even with explicit calls to gc.collect()) ``` from matplotlib import pyplot import numpy a = numpy.arange(int(1e7)) # large so you can easily see the memory footprint on the system monitor. fig = pyplot.figure() ax = fig.add_subplot(1, 1, 1) lines = ax.plot(a) # this uses up an additional 230 Mb of memory. # can I get the memory back? l = lines[0] l.remove() del l del lines # not releasing memory ax.cla() # this does release the memory, but also wipes out all other lines. ``` So is there a way to just delete one line from an axes and get the memory back? This potential solution also does not work.
``` >>> #Create a figure >>> fig = plt.figure() >>> fig.axes [] >>> #Add an axes object >>> ax = fig.add_subplot(1,1,1) >>> #The object in ax is the same as the object in fig.axes[0], which is >>> # a list of axes objects attached to fig >>> print ax Axes(0.125,0.1;0.775x0.8) >>> print fig.axes[0] Axes(0.125,0.1;0.775x0.8) #Same as \"print ax\" >>> id(ax), id(fig.axes[0]) (212603664, 212603664) #Same ids => same objects ``` This also extends to lines in an axes object: ``` >>> #Add a line to ax >>> lines = ax.plot(np.arange(1000)) >>> #Lines and ax.lines contain the same Line2D instances >>> print lines [] >>> print ax.lines [] >>> print lines[0] Line2D(_line0) >>> print ax.lines[0] Line2D(_line0) >>> #Same ID => same object >>> id(lines[0]), id(ax.lines[0]) (216550352, 216550352) ``` If you were to call plt.show() using what was done above, you would see a figure containing a set of axes and a single line: Now, while we have seen that the contents of lines and ax.lines is the same, it is very important to note that the object referenced by the lines variable is not the same as the object referenced by ax.lines as can be seen by the following: ``` >>> id(lines), id(ax.lines) (212754584, 211335288) ``` As a consequence, removing an element from lines does nothing to the current plot, but removing an element from ax.lines removes that line from the current plot. So: ``` >>> #THIS DOES NOTHING: >>> lines.pop(0) >>> #THIS REMOVES THE FIRST LINE: >>> ax.lines.pop(0) ``` So, if you were to run the second line of code, you would remove the Line2D object contained in ax.lines[0] from the current plot and it would be gone.
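The whole lifecycle can be sketched in a few lines (assuming the non-interactive Agg backend; weakref and gc are used only to observe that collection actually happens):

```python
# Remove a line, drop every strong reference to it, then verify with a
# weakref that the Line2D object really was garbage collected.
import gc
import weakref
import matplotlib
matplotlib.use('Agg')  # headless backend, no window needed
import matplotlib.pyplot as plt

fig, ax = plt.subplots()
line, = ax.plot(range(10))
ref = weakref.ref(line)

line.remove()                      # detach the artist from the axes
still_attached = line in ax.lines  # False: the axes no longer holds it
del line                           # drop our own strong reference
gc.collect()
collected = ref() is None          # the weakref dies once nothing holds the artist

print(still_attached, collected)
```

If `collected` came back False, some other name in your code would still be holding the artist, which is exactly the trap described below.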
Note that this can also be done via ax.lines.remove() meaning that you can save a Line2D instance in a variable, then pass it to ax.lines.remove() to delete that line, like so: ``` >>> #Create a new line >>> lines.append(ax.plot(np.arange(1000)\/2.0)) >>> ax.lines [, ] ``` ``` >>> #Remove that new line >>> ax.lines.remove(lines[0]) >>> ax.lines [] ``` All of the above works for fig.axes just as well as it works for ax.lines. Now, the real problem here. If we store the reference contained in ax.lines[0] into a weakref.ref object, then attempt to delete it, we will notice that it doesn't get garbage collected: ``` >>> #Create weak reference to Line2D object >>> from weakref import ref >>> wr = ref(ax.lines[0]) >>> print wr >>> print wr() >>> #Delete the line from the axes >>> ax.lines.remove(wr()) >>> ax.lines [] >>> #Test weakref again >>> print wr >>> print wr() ``` The reference is still live! Why? This is because there is still another reference to the Line2D object that the reference in wr points to. Remember how lines didn't have the same ID as ax.lines but contained the same elements? Well, that's the problem. ``` >>> #Print out lines >>> print lines [, ] ``` To fix this problem, we simply need to delete `lines`, empty it, or let it go out of scope. ``` >>> #Reinitialize lines to empty list >>> lines = [] >>> print lines [] >>> print wr ``` So, the moral of the story is, clean up after yourself.
If you expect something to be garbage collected but it isn't, you are likely leaving a reference hanging out somewhere.", "best_answers_score":0.8, "library_name":"matplotlib", "question_url":"https:\/\/stackoverflow.com\/questions\/4981815\/how-to-remove-lines-in-a-matplotlib-plot", "best_answers_votes":95, "question_length":860, "response_length":4048 }, { "question":"matplotlib taking time when being imported I just upgraded to the latest stable release of matplotlib (1.5.1) and everytime I import matplotlib I get this message: ``` \/usr\/local\/lib\/python2.7\/dist-packages\/matplotlib\/font_manager.py:273: UserWarning: Matplotlib is building the font cache using fc-list. This may take a moment. warnings.warn('Matplotlib is building the font cache using fc-list. This may take a moment.') ``` ... which always stalls for a few seconds. Is this the expected behaviour? Was it the same also before, but just without the printed message?", "response":"As tom suggested in the comment above, deleting the files: ``` fontList.cache fontList.py3k.cache tex.cache ``` solve the problem. In my case the files were under: ``` `~\/.matplotlib` ``` EDITED A couple of days ago the message appeared again, I deleted the files in the locations mention above without any success. I found that as suggested here by T Mudau there's an extra location with text cache files is: ~\/.cache\/fontconfig", "best_answers_score":0.8, "library_name":"matplotlib", "question_url":"https:\/\/stackoverflow.com\/questions\/34771191\/matplotlib-taking-time-when-being-imported", "best_answers_votes":117, "question_length":568, "response_length":429 }, { "question":"Matplotlib\/Pyplot: How to zoom subplots together? I have plots of 3-axis accelerometer time-series data (t,x,y,z) in separate subplots I'd like to zoom together. That is, when I use the \"Zoom to Rectangle\" tool on one plot, when I release the mouse all 3 plots zoom together. 
Previously, I simply plotted all 3 axes on a single plot using different colors. But this is useful only with small amounts of data: I have over 2 million data points, so the last axis plotted obscures the other two. Hence the need for separate subplots. I know I can capture matplotlib\/pyplot mouse events (http:\/\/matplotlib.sourceforge.net\/users\/event_handling.html), and I know I can catch other events (http:\/\/matplotlib.sourceforge.net\/api\/backend_bases_api.html#matplotlib.backend_bases.ResizeEvent), but I don't know how to tell what zoom has been requested on any one subplot, and how to replicate it on the other two subplots. I suspect I have the all the pieces, and need only that one last precious clue... -BobC", "response":"The easiest way to do this is by using the sharex and\/or sharey keywords when creating the axes: ``` from matplotlib import pyplot as plt ax1 = plt.subplot(2,1,1) ax1.plot(...) ax2 = plt.subplot(2,1,2, sharex=ax1) ax2.plot(...) ```", "best_answers_score":0.8, "library_name":"matplotlib", "question_url":"https:\/\/stackoverflow.com\/questions\/4200586\/matplotlib-pyplot-how-to-zoom-subplots-together", "best_answers_votes":161, "question_length":999, "response_length":231 }, { "question":"Is there a parameter in matplotlib\/pandas to have the Y axis of a histogram as percentage? I would like to compare two histograms by having the Y axis show the percentage of each column from the overall dataset size instead of an absolute value. Is that possible? I am using Pandas and matplotlib. Thanks", "response":"The density=True (normed=True for matplotlib < 2.2.0) returns a histogram for which np.sum(pdf * np.diff(bins)) equals 1. If you want the sum of the histogram to be 1 you can use Numpy's histogram() and normalize the results yourself. 
``` import numpy as np import matplotlib.pyplot as plt x = np.random.randn(30) fig, ax = plt.subplots(1,2, figsize=(10,4)) ax[0].hist(x, density=True, color='grey') hist, bins = np.histogram(x) ax[1].bar(bins[:-1], hist.astype(np.float32) \/ hist.sum(), width=(bins[1]-bins[0]), color='grey') ax[0].set_title('density=True') ax[1].set_title('hist = hist \/ hist.sum()') ``` Btw: Strange plotting glitch at the first bin of the left plot.
pl.figure(\"Share values\") # Selects the first figure again ``` In fact, figure \"numbers\" can be strings, which are arguably more explicit than simple numbers. PS: The pyplot equivalent of pylab.figure() is matplotlib.pyplot.figure(). PPS: figure() now accepts a Figure object, so you should be able to activate figure f1 with figure(f1).", "best_answers_score":0.8, "library_name":"matplotlib", "question_url":"https:\/\/stackoverflow.com\/questions\/7986567\/matplotlib-how-to-set-the-current-figure", "best_answers_votes":119, "question_length":864, "response_length":624 }, { "question":"Plotting a 3d cube, a sphere and a vector I'm trying to plot something with as few instructions as possible in Matplotlib, but I can't find any help for this in the documentation. I want to plot the following things: a wireframe cube centered in 0 with a side length of 2 a \"wireframe\" sphere centered in 0 with a radius of 1 a point at coordinates [0, 0, 0] a vector that starts at this point and goes to [1, 1, 1] How to do that?", "response":"It is a little complicated, but you can draw all the objects by the following code: ``` from mpl_toolkits.mplot3d import Axes3D import matplotlib.pyplot as plt import numpy as np from itertools import product, combinations fig = plt.figure() ax = fig.gca(projection='3d') ax.set_aspect(\"equal\") # draw cube r = [-1, 1] for s, e in combinations(np.array(list(product(r, r, r))), 2): if np.sum(np.abs(s-e)) == r[1]-r[0]: ax.plot3D(*zip(s, e), color=\"b\") # draw sphere u, v = np.mgrid[0:2*np.pi:20j, 0:np.pi:10j] x = np.cos(u)*np.sin(v) y = np.sin(u)*np.sin(v) z = np.cos(v) ax.plot_wireframe(x, y, z, color=\"r\") # draw a point ax.scatter([0], [0], [0], color=\"g\", s=100) # draw a vector from matplotlib.patches import FancyArrowPatch from mpl_toolkits.mplot3d import proj3d class Arrow3D(FancyArrowPatch): def __init__(self, xs, ys, zs, *args, **kwargs): FancyArrowPatch.__init__(self, (0, 0), (0, 0), *args, **kwargs) self._verts3d = xs, ys, zs def
draw(self, renderer): xs3d, ys3d, zs3d = self._verts3d xs, ys, zs = proj3d.proj_transform(xs3d, ys3d, zs3d, renderer.M) self.set_positions((xs[0], ys[0]), (xs[1], ys[1])) FancyArrowPatch.draw(self, renderer) a = Arrow3D([0, 1], [0, 1], [0, 1], mutation_scale=20, lw=1, arrowstyle=\"-|>\", color=\"k\") ax.add_artist(a) plt.show() ```", "best_answers_score":0.8, "library_name":"matplotlib", "question_url":"https:\/\/stackoverflow.com\/questions\/11140163\/plotting-a-3d-cube-a-sphere-and-a-vector", "best_answers_votes":216, "question_length":431, "response_length":1277 }, { "question":"How to show two figures using matplotlib? I have some troubles while drawing two figures at the same time, not shown in a single plot. But according to the documentation, I wrote the code and only the figure one shows. I think maybe I lost something important. Could anyone help me to figure out? Thanks. (The *tlist_first* used in the code is a list of data.) ``` plt.figure(1) plt.hist(tlist_first, bins=2000000, normed = True, histtype =\"step\", cumulative = True, color = 'g',label = 'first answer') plt.ylabel('Percentage of answered questions') plt.xlabel('Minutes elapsed after questions are posted') plt.axvline(x = 30, ymin = 0, ymax = 1, color = 'r', linestyle = '--', label = '30 min') plt.axvline(x = 60, ymin = 0, ymax = 1, color = 'c', linestyle = '--', label = '1 hour') plt.legend() plt.xlim(0,120) plt.ylim(0,1) plt.show() plt.close() ### not working either with this line or without it plt.figure(2) plt.hist(tlist_first, bins=2000000, normed = True, histtype =\"step\", cumulative = True, color = 'g',label = 'first answer') plt.ylabel('Percentage of answered questions') plt.xlabel('Minutes elapsed after questions are posted') plt.axvline(x = 240, ymin = 0, ymax = 1, color = 'r', linestyle = '--', label = '30 min') plt.axvline(x = 1440, ymin = 0, ymax = 1, color = 'c', linestyle = '--', label = '1 hour') plt.legend(loc= 4) plt.xlim(0,2640) plt.ylim(0,1) plt.show() ```", 
"response":"Alternatively to calling plt.show() at the end of the script, you can also control each figure separately doing: ``` f = plt.figure(1) plt.hist........ ............ f.show() g = plt.figure(2) plt.hist(........ ................ g.show() raw_input() ``` In this case you must call raw_input to keep the figures alive. This way you can select dynamically which figures you want to show Note: raw_input() was renamed to input() in Python 3", "best_answers_score":0.8, "library_name":"matplotlib", "question_url":"https:\/\/stackoverflow.com\/questions\/7744697\/how-to-show-two-figures-using-matplotlib", "best_answers_votes":110, "question_length":1390, "response_length":435 }, { "question":"Putting newline in matplotlib label with TeX in Python? How can I add a newline to a plot's label (e.g. xlabel or ylabel) in matplotlib? For example, ``` plt.bar([1, 2], [4, 5]) plt.xlabel(\"My x label\") plt.ylabel(r\"My long label with $\\Sigma_{C}$ math \\n continues here\") ``` Ideally I'd like the y-labeled to be centered too. Is there a way to do this? It's important that the label have both TeX (enclosed in '$') and the newline.", "response":"You can have the best of both worlds: automatic \"escaping\" of LaTeX commands and newlines: ``` plt.ylabel(r\"My long label with unescaped {\\LaTeX} $\\Sigma_{C}$ math\" \"\\n\" # Newline: the backslash is interpreted as usual r\"continues here with $\\pi$\") ``` (instead of using three lines, separating the strings by single spaces is another option). 
In fact, Python automatically concatenates string literals that follow each other, and you can mix raw strings (r\"\u2026\") and strings with character interpolation (\"\\n\").", "best_answers_score":0.8, "library_name":"matplotlib", "question_url":"https:\/\/stackoverflow.com\/questions\/2660319\/putting-newline-in-matplotlib-label-with-tex-in-python", "best_answers_votes":143, "question_length":433, "response_length":510 }, { "question":"Color according to class labels I have two vectors, one with values and one with class labels like 1,2,3 etc. I would like to plot all the points that belong to class 1 in red, to class 2 in blue, to class 3 in green etc. How can I do that?", "response":"The accepted answer has it spot on, but if you might want to specify which class label should be assigned to a specific color or label you could do the following. I did a little label gymnastics with the colorbar, but making the plot itself reduces to a nice one-liner. This works great for plotting the results from classifications done with sklearn. Each label matches a (x,y) coordinate. 
``` import matplotlib import matplotlib.pyplot as plt import numpy as np x = [4,8,12,16,1,4,9,16] y = [1,4,9,16,4,8,12,3] label = [0,1,2,3,0,1,2,3] colors = ['red','green','blue','purple'] fig = plt.figure(figsize=(8,8)) plt.scatter(x, y, c=label, cmap=matplotlib.colors.ListedColormap(colors)) cb = plt.colorbar() loc = np.arange(0,max(label),max(label)\/float(len(colors))) cb.set_ticks(loc) cb.set_ticklabels(colors) ``` Using a slightly modified version of this answer, one can generalise the above for N colors as follows: ``` import numpy as np import matplotlib as mpl import matplotlib.pyplot as plt N = 23 # Number of labels # setup the plot fig, ax = plt.subplots(1,1, figsize=(6,6)) # define the data x = np.random.rand(1000) y = np.random.rand(1000) tag = np.random.randint(0,N,1000) # Tag each point with a corresponding label # define the colormap cmap = plt.cm.jet # extract all colors from the .jet map cmaplist = [cmap(i) for i in range(cmap.N)] # create the new map cmap = cmap.from_list('Custom cmap', cmaplist, cmap.N) # define the bins and normalize bounds = np.linspace(0,N,N+1) norm = mpl.colors.BoundaryNorm(bounds, cmap.N) # make the scatter scat = ax.scatter(x,y,c=tag,s=np.random.randint(100,500,N),cmap=cmap, norm=norm) # create the colorbar cb = plt.colorbar(scat, spacing='proportional',ticks=bounds) cb.set_label('Custom cbar') ax.set_title('Discrete color mappings') plt.show() ``` Which gives:", "best_answers_score":0.8, "library_name":"matplotlib", "question_url":"https:\/\/stackoverflow.com\/questions\/12487060\/color-according-to-class-labels", "best_answers_votes":110, "question_length":240, "response_length":1816 }, { "question":"Saving a figure after invoking pyplot.show() results in an empty file The following example code generates a simple plot, then saves it to 'fig1.pdf', then displays it, then saves it again to 'fig2.pdf'. The first image looks as expected, but the second one is blank (contains a white square). What's actually going on here? 
The line plt.show() apparently messes something up, but I can't figure out what\/how! ``` import numpy as np import matplotlib.pyplot as plt x = np.linspace(-1, 1, 100) y = x**2 plt.plot(x,y) plt.savefig('fig1.pdf') plt.show() plt.savefig('fig2.pdf') ```", "response":"If you want to save the figure after displaying it, you'll need to hold on to the figure instance. The reason that plt.savefig doesn't work after calling show is that the current figure has been reset. pyplot keeps track of which figures, axes, etc are \"current\" (i.e. have not yet been displayed with show) behind-the-scenes. gcf and gca get the current figure and current axes instances, respectively. plt.savefig (and essentially any other pyplot method) just does plt.gcf().savefig(...). In other words, get the current figure instance and call its savefig method. Similarly plt.plot basically does plt.gca().plot(...). After show is called, the list of \"current\" figures and axes is empty. In general, you're better off directly using the figure and axes instances to plot\/save\/show\/etc, rather than using plt.plot, etc, to implicitly get the current figure\/axes and plot on it. There's nothing wrong with using pyplot for everything (especially interactively), but it makes it easier to shoot yourself in the foot. Use pyplot for plt.show() and to generate a figure and an axes object(s), but then use the figure or axes methods directly. (e.g. ax.plot(x, y) instead of plt.plot(x, y), etc) The main advantage of this is that it's explicit. You know what objects you're plotting on, and don't have to reason about what the pyplot state-machine does (though it's not that hard to understand the state-machine interface, either). 
As an example of the \"recommended\" way of doing things, do something like: ``` import numpy as np import matplotlib.pyplot as plt x = np.linspace(-1, 1, 100) y = x**2 fig, ax = plt.subplots() ax.plot(x, y) fig.savefig('fig1.pdf') plt.show() fig.savefig('fig2.pdf') ``` If you'd rather use the pyplot interface for everything, then just grab the figure instance before you call show. For example: ``` import numpy as np import matplotlib.pyplot as plt x = np.linspace(-1, 1, 100) y = x**2 plt.plot(x, y) fig = plt.gcf() fig.savefig('fig1.pdf') plt.show() fig.savefig('fig2.pdf') ```", "best_answers_score":0.8, "library_name":"matplotlib", "question_url":"https:\/\/stackoverflow.com\/questions\/21875356\/saving-a-figure-after-invoking-pyplot-show-results-in-an-empty-file", "best_answers_votes":133, "question_length":578, "response_length":2015 }, { "question":"How to fix overlapping annotations \/ text I'm trying to stop annotation text overlapping in my graphs. The method suggested in the accepted answer to Matplotlib overlapping annotations looks extremely promising, however is for bar graphs. I'm having trouble converting the \"axis\" methods over to what I want to do, and I don't understand how the text lines up. 
``` import sys import matplotlib.pyplot as plt # start new plot plt.clf() plt.xlabel(\"Proportional Euclidean Distance\") plt.ylabel(\"Percentage Timewindows Attended\") plt.title(\"Test plot\") together = [(0, 1.0, 0.4), (25, 1.0127692669427917, 0.41), (50, 1.016404709797609, 0.41), (75, 1.1043426359673716, 0.42), (100, 1.1610446924342996, 0.44), (125, 1.1685687930691457, 0.43), (150, 1.3486407784550272, 0.45), (250, 1.4013999168008104, 0.45)] together.sort() for x,y,z in together: plt.annotate(str(x), xy=(y, z), size=8) eucs = [y for (x,y,z) in together] covers = [z for (x,y,z) in together] p1 = plt.plot(eucs,covers,color=\"black\", alpha=0.5) plt.savefig(\"test.png\") ``` Images (if this works) can be found here (this code): and here (more complicated):", "response":"I just wanted to post here another solution, a small library I wrote to implement this kind of things: https:\/\/github.com\/Phlya\/adjustText An example of the process can be seen here: Here is the example image: ``` import matplotlib.pyplot as plt from adjustText import adjust_text import numpy as np together = [(0, 1.0, 0.4), (25, 1.0127692669427917, 0.41), (50, 1.016404709797609, 0.41), (75, 1.1043426359673716, 0.42), (100, 1.1610446924342996, 0.44), (125, 1.1685687930691457, 0.43), (150, 1.3486407784550272, 0.45), (250, 1.4013999168008104, 0.45)] together.sort() text = [x for (x,y,z) in together] eucs = [y for (x,y,z) in together] covers = [z for (x,y,z) in together] p1 = plt.plot(eucs,covers,color=\"black\", alpha=0.5) texts = [] for x, y, s in zip(eucs, covers, text): texts.append(plt.text(x, y, s)) plt.xlabel(\"Proportional Euclidean Distance\") plt.ylabel(\"Percentage Timewindows Attended\") plt.title(\"Test plot\") adjust_text(texts, only_move={'points':'y', 'texts':'y'}, arrowprops=dict(arrowstyle=\"->\", color='r', lw=0.5)) plt.show() ``` If you want a perfect figure, you can fiddle around a little. 
First, let's also make text repel the lines - for that we just create lots of virtual points along them using scipy.interpolate.interp1d. We want to avoid moving the labels along the x-axis, purely for illustrative purposes here. For that we use the parameter only_move={'points':'y', 'text':'y'}. If we want to move them along the x-axis only in the case that they are overlapping with text, use only_move={'points':'y', 'text':'xy'}. Also in the beginning the function chooses optimal alignment of texts relative to their original points, so we only want that to happen along the y axis too, hence autoalign='y'. We also reduce the repelling force from points to avoid text flying too far away due to our artificial avoidance of lines. All together: ``` from scipy import interpolate p1 = plt.plot(eucs,covers,color=\"black\", alpha=0.5) texts = [] for x, y, s in zip(eucs, covers, text): texts.append(plt.text(x, y, s)) f = interpolate.interp1d(eucs, covers) x = np.arange(min(eucs), max(eucs), 0.0005) y = f(x) plt.xlabel(\"Proportional Euclidean Distance\") plt.ylabel(\"Percentage Timewindows Attended\") plt.title(\"Test plot\") adjust_text(texts, x=x, y=y, autoalign='y', only_move={'points':'y', 'text':'y'}, force_points=0.15, arrowprops=dict(arrowstyle=\"->\", color='r', lw=0.5)) plt.show() ```", "best_answers_score":0.8, "library_name":"matplotlib", "question_url":"https:\/\/stackoverflow.com\/questions\/19073683\/how-to-fix-overlapping-annotations-text", "best_answers_votes":214, "question_length":1117, "response_length":2424 }, { "question":"How to display custom values on a bar plot I'm looking to see how to do two things in Seaborn with using a bar chart to display values that are in the dataframe, but not in the graph. I'm looking to display the values of one field in a dataframe while graphing another.
For example, below, I'm graphing 'tip', but I would like to place the value of 'total_bill' centered above each of the bars (i.e.325.88 above Friday, 1778.40 above Saturday, etc.) Is there a way to scale the colors of the bars, with the lowest value of 'total_bill' having the lightest color (in this case Friday) and the highest value of 'total_bill' having the darkest? Obviously, I'd stick with one color (i.e., blue) when I do the scaling. While I see that others think that this is a duplicate of another problem (or two), I am missing the part of how I use a value that is not in the graph as the basis for the label or the shading. How do I say, use total_bill as the basis. I'm sorry, but I just can't figure it out based on those answers. Starting with the following code, ``` import pandas as pd import seaborn as sns %matplotlib inline df = pd.read_csv(\"https:\/\/raw.githubusercontent.com\/wesm\/pydata-book\/1st-edition\/ch08\/tips.csv\", sep=',') groupedvalues = df.groupby('day').sum().reset_index() g = sns.barplot(x='day', y='tip', data=groupedvalues) ``` I get the following result: Interim Solution: ``` for index, row in groupedvalues.iterrows(): g.text(row.name, row.tip, round(row.total_bill, 2), color='black', ha=\"center\") ``` On the shading, using the example below, I tried the following: ``` import pandas as pd import seaborn as sns %matplotlib inline df = pd.read_csv(\"https:\/\/raw.githubusercontent.com\/wesm\/pydata-book\/1st-edition\/ch08\/tips.csv\", sep=',') groupedvalues = df.groupby('day').sum().reset_index() pal = sns.color_palette(\"Greens_d\", len(data)) rank = groupedvalues.argsort().argsort() g = sns.barplot(x='day', y='tip', data=groupedvalues) for index, row in groupedvalues.iterrows(): g.text(row.name, row.tip, round(row.total_bill, 2), color='black', ha=\"center\") ``` But that gave me the following error: AttributeError: 'DataFrame' object has no attribute 'argsort' So I tried a modification: ``` import pandas as pd import seaborn as sns 
%matplotlib inline df = pd.read_csv(\"https:\/\/raw.githubusercontent.com\/wesm\/pydata-book\/1st-edition\/ch08\/tips.csv\", sep=',') groupedvalues = df.groupby('day').sum().reset_index() pal = sns.color_palette(\"Greens_d\", len(data)) rank = groupedvalues['total_bill'].rank(ascending=True) g = sns.barplot(x='day', y='tip', data=groupedvalues, palette=np.array(pal[::-1])[rank]) ``` and that leaves me with IndexError: index 4 is out of bounds for axis 0 with size 4", "response":"New in matplotlib 3.4.0 There is now a built-in Axes.bar_label to automatically label bar containers: ```py ax = sns.barplot(x='day', y='tip', data=groupedvalues) ax.bar_label(ax.containers[0]) # only 1 container needed unless using `hue` ``` For custom labels (e.g., tip bars with total_bill values), use the labels parameter: ```py ax = sns.barplot(x='day', y='tip', data=groupedvalues) ax.bar_label(ax.containers[0], labels=groupedvalues['total_bill']) ``` For multi-group bar plots (i.e., with hue), there will be multiple bar containers that need to be iterated: ```py ax = sns.barplot(x='day', y='tip', hue='sex', data=df) for container in ax.containers: ax.bar_label(container) ``` More details: How to label percentage counts (fmt param) How to rotate labels (rotation param) How to vertically center labels (label_type param) How to add spacing to labels (padding param) Color-ranked version Is there a way to scale the colors of the bars, with the lowest value of total_bill having the lightest color (in this case Friday) and the highest value of total_bill having the darkest?
Find the rank of each total_bill value: Either use Series.sort_values: ```py ranks = groupedvalues.total_bill.sort_values().index # Int64Index([1, 0, 3, 2], dtype='int64') ``` Or condense Ernest's Series.rank version by chaining Series.sub: ```py ranks = groupedvalues.total_bill.rank().sub(1).astype(int).array # [1, 0, 3, 2] ``` Then reindex the color palette using ranks: ```py palette = sns.color_palette('Blues_d', len(ranks)) ax = sns.barplot(x='day', y='tip', palette=np.array(palette)[ranks], data=groupedvalues) ```", "best_answers_score":0.8, "library_name":"matplotlib", "question_url":"https:\/\/stackoverflow.com\/questions\/43214978\/how-to-display-custom-values-on-a-bar-plot", "best_answers_votes":165, "question_length":2703, "response_length":1650 }, { "question":"How to make several plots on a single page using matplotlib? I have written code that opens 16 figures at once. Currently, they all open as separate graphs. I'd like them to open all on the same page. Not the same graph. I want 16 separate graphs on a single page\/window. Furthermore, for some reason, the format of the numbins and defaultreallimits doesn't hold past figure 1. Do I need to use the subplot command? I don't understand why I would have to but can't figure out what else I would do? 
``` import csv import scipy.stats import numpy import matplotlib.pyplot as plt for i in range(16): plt.figure(i) filename= easygui.fileopenbox(msg='Pdf distance 90m contour', title='select file', filetypes=['*.csv'], default='X:\\\\herring_schools\\\\') alt_file=open(filename) a=[] for row in csv.DictReader(alt_file): a.append(row['Dist_90m(nmi)']) y= numpy.array(a, float) relpdf=scipy.stats.relfreq(y, numbins=7, defaultreallimits=(-10,60)) bins = numpy.arange(-10,60,10) print numpy.sum(relpdf[0]) print bins patches=plt.bar(bins,relpdf[0], width=10, facecolor='black') titlename= easygui.enterbox(msg='write graph title', title='', default='', strip=True, image=None, root=None) plt.title(titlename) plt.ylabel('Probability Density Function') plt.xlabel('Distance from 90m Contour Line(nm)') plt.ylim([0,1]) plt.show() ```", "response":"The answer from las3rjock, which somehow is the answer accepted by the OP, is incorrect--the code doesn't run, nor is it valid matplotlib syntax; that answer provides no runnable code and lacks any information or suggestion that the OP might find useful in writing their own code to solve the problem in the OP. Given that it's the accepted answer and has already received several up-votes, I suppose a little deconstruction is in order. First, calling subplot does not give you multiple plots; subplot is called to create a single plot, as well as to create multiple plots. In addition, \"changing plt.figure(i)\" is not correct. plt.figure() (in which plt or PLT is usually matplotlib's pyplot library imported and rebound as a global variable, plt or sometimes PLT, like so: ``` from matplotlib import pyplot as PLT fig = PLT.figure() ``` the line just above creates a matplotlib figure instance; this object's add_subplot method is then called for every plotting window (informally think of an x & y axis comprising a single subplot). 
You create a subplot (whether just one or several on a page) like so ``` fig.add_subplot(111) ``` this syntax is equivalent to ``` fig.add_subplot(1,1,1) ``` choose the one that makes sense to you. Below I've listed the code to plot two plots on a page, one above the other. The formatting is done via the argument passed to add_subplot. Notice the argument is (211) for the first plot and (212) for the second. ``` from matplotlib import pyplot as PLT fig = PLT.figure() ax1 = fig.add_subplot(211) ax1.plot([(1, 2), (3, 4)], [(4, 3), (2, 3)]) ax2 = fig.add_subplot(212) ax2.plot([(7, 2), (5, 3)], [(1, 6), (9, 5)]) PLT.show() ``` Each of these two arguments is a complete specification for correctly placing the respective plot windows on the page. 211 (which, again, could also be written in 3-tuple form as (2,1,1)) means two rows and one column of plot windows; the third digit specifies the ordering of that particular subplot window relative to the other subplot windows--in this case, this is the first plot (which places it on row 1) hence plot number 1, row 1 col 1. The argument passed to the second call to add_subplot differs from the first only by the trailing digit (a 2 instead of a 1), because this plot is the second plot (row 2, col 1). An example with more plots: if instead you wanted four plots on a page, in a 2x2 matrix configuration, you would call the add_subplot method four times, passing in these four arguments (221), (222), (223), and (224), to create four plots on a page at 10, 2, 8, and 4 o'clock, respectively and in this order. Notice that each of the four arguments contains two leading 2's--that encodes the 2 x 2 configuration, ie, two rows and two columns.
The third (right-most) digit in each of the four arguments encodes the ordering of that particular plot window in the 2 x 2 matrix--ie, row 1 col 1 (1), row 1 col 2 (2), row 2 col 1 (3), row 2 col 2 (4).", "best_answers_score":0.8, "library_name":"matplotlib", "question_url":"https:\/\/stackoverflow.com\/questions\/1358977\/how-to-make-several-plots-on-a-single-page-using-matplotlib", "best_answers_votes":205, "question_length":1322, "response_length":2930 }, { "question":"Matplotlib scatter plot legend I created a 4D scatter plot graph to represent different temperatures in a specific area. When I create the legend, the legend shows the correct symbol and color but adds a line through it. The code I'm using is: ``` colors=['b', 'c', 'y', 'm', 'r'] lo = plt.Line2D(range(10), range(10), marker='x', color=colors[0]) ll = plt.Line2D(range(10), range(10), marker='o', color=colors[0]) l = plt.Line2D(range(10), range(10), marker='o',color=colors[1]) a = plt.Line2D(range(10), range(10), marker='o',color=colors[2]) h = plt.Line2D(range(10), range(10), marker='o',color=colors[3]) hh = plt.Line2D(range(10), range(10), marker='o',color=colors[4]) ho = plt.Line2D(range(10), range(10), marker='x', color=colors[4]) plt.legend((lo,ll,l,a, h, hh, ho),('Low Outlier', 'LoLo','Lo', 'Average', 'Hi', 'HiHi', 'High Outlier'),numpoints=1, loc='lower left', ncol=3, fontsize=8) ``` I tried changing Line2D to Scatter and scatter. Scatter returned an error and scatter changed the graph and returned an error. With scatter, I changed the range(10) to the lists containing the data points. Each list contains either the x, y, or z variable. 
``` lo = plt.scatter(xLOutlier, yLOutlier, zLOutlier, marker='x', color=colors[0]) ll = plt.scatter(xLoLo, yLoLo, zLoLo, marker='o', color=colors[0]) l = plt.scatter(xLo, yLo, zLo, marker='o',color=colors[1]) a = plt.scatter(xAverage, yAverage, zAverage, marker='o',color=colors[2]) h = plt.scatter(xHi, yHi, zHi, marker='o',color=colors[3]) hh = plt.scatter(xHiHi, yHiHi, zHiHi, marker='o',color=colors[4]) ho = plt.scatter(xHOutlier, yHOutlier, zHOutlier, marker='x', color=colors[4]) plt.legend((lo,ll,l,a, h, hh, ho),('Low Outlier', 'LoLo','Lo', 'Average', 'Hi', 'HiHi', 'High Outlier'),scatterpoints=1, loc='lower left', ncol=3, fontsize=8) ``` When I run this, the legend no longer exists, it is a small white box in the corner with nothing in it. Any advice?", "response":"2D scatter plot Using the scatter method of the matplotlib.pyplot module should work (at least with matplotlib 1.2.1 with Python 2.7.5), as in the example code below. Also, if you are using scatter plots, use scatterpoints=1 rather than numpoints=1 in the legend call to have only one point for each legend entry. In the code below I've used random values rather than plotting the same range over and over, making all the plots visible (i.e. not overlapping each other). 
``` import matplotlib.pyplot as plt from numpy.random import random colors = ['b', 'c', 'y', 'm', 'r'] lo = plt.scatter(random(10), random(10), marker='x', color=colors[0]) ll = plt.scatter(random(10), random(10), marker='o', color=colors[0]) l = plt.scatter(random(10), random(10), marker='o', color=colors[1]) a = plt.scatter(random(10), random(10), marker='o', color=colors[2]) h = plt.scatter(random(10), random(10), marker='o', color=colors[3]) hh = plt.scatter(random(10), random(10), marker='o', color=colors[4]) ho = plt.scatter(random(10), random(10), marker='x', color=colors[4]) plt.legend((lo, ll, l, a, h, hh, ho), ('Low Outlier', 'LoLo', 'Lo', 'Average', 'Hi', 'HiHi', 'High Outlier'), scatterpoints=1, loc='lower left', ncol=3, fontsize=8) plt.show() ``` 3D scatter plot To plot a scatter in 3D, use the plot method, as the legend does not support Patch3DCollection as is returned by the scatter method of an Axes3D instance. To specify the markerstyle you can include this as a positional argument in the method call, as seen in the example below. Optionally one can include argument to both the linestyle and marker parameters. 
``` import matplotlib.pyplot as plt from numpy.random import random from mpl_toolkits.mplot3d import Axes3D colors=['b', 'c', 'y', 'm', 'r'] ax = plt.subplot(111, projection='3d') ax.plot(random(10), random(10), random(10), 'x', color=colors[0], label='Low Outlier') ax.plot(random(10), random(10), random(10), 'o', color=colors[0], label='LoLo') ax.plot(random(10), random(10), random(10), 'o', color=colors[1], label='Lo') ax.plot(random(10), random(10), random(10), 'o', color=colors[2], label='Average') ax.plot(random(10), random(10), random(10), 'o', color=colors[3], label='Hi') ax.plot(random(10), random(10), random(10), 'o', color=colors[4], label='HiHi') ax.plot(random(10), random(10), random(10), 'x', color=colors[4], label='High Outlier') plt.legend(loc='upper left', numpoints=1, ncol=3, fontsize=8, bbox_to_anchor=(0, 0)) plt.show() ```", "best_answers_score":0.8, "library_name":"matplotlib", "question_url":"https:\/\/stackoverflow.com\/questions\/17411940\/matplotlib-scatter-plot-legend", "best_answers_votes":163, "question_length":1924, "response_length":2469 }, { "question":"How to add line based on slope and intercept In R, there is a function called abline in which a line can be drawn on a plot based on the specification of the intercept (first argument) and the slope (second argument). For instance, ``` plot(1:10, 1:10) abline(0, 1) ``` where the line with an intercept of 0 and the slope of 1 spans the entire range of the plot. Is there such a function in Matplotlib?", "response":"A lot of these solutions are focusing on adding a line to the plot that fits the data. Here's a simple solution for adding an arbitrary line to the plot based on a slope and intercept. 
``` import matplotlib.pyplot as plt import numpy as np def abline(slope, intercept): \"\"\"Plot a line from slope and intercept\"\"\" axes = plt.gca() x_vals = np.array(axes.get_xlim()) y_vals = intercept + slope * x_vals plt.plot(x_vals, y_vals, '--') ```", "best_answers_score":0.8, "library_name":"matplotlib", "question_url":"https:\/\/stackoverflow.com\/questions\/7941226\/how-to-add-line-based-on-slope-and-intercept", "best_answers_votes":128, "question_length":402, "response_length":435 }, { "question":"How to use matplotlib tight layout with Figure? [duplicate] This question already has answers here: Improve subplot size\/spacing with many subplots (12 answers) Closed 2 years ago. I found tight_layout function for pyplot and want to use it. In my application I embed matplotlib plots into Qt GUI and use figure and not pyplot. Is there any way I can apply tight_layout there? Would it also work if I have several axes in one figure?", "response":"Just call fig.tight_layout() as you normally would. (pyplot is just a convenience wrapper. In most cases, you only use it to quickly generate figure and axes objects and then call their methods directly.) There shouldn't be a difference between the QtAgg backend and the default backend (or if there is, it's a bug). E.g. 
``` import matplotlib.pyplot as plt #-- In your case, you'd do something more like: # from matplotlib.figure import Figure # fig = Figure() #-- ...but we want to use it interactively for a quick example, so #-- we'll do it this way fig, axes = plt.subplots(nrows=4, ncols=4) for i, ax in enumerate(axes.flat, start=1): ax.set_title('Test Axes {}'.format(i)) ax.set_xlabel('X axis') ax.set_ylabel('Y axis') plt.show() ``` Before Tight Layout After Tight Layout ``` import matplotlib.pyplot as plt fig, axes = plt.subplots(nrows=4, ncols=4) for i, ax in enumerate(axes.flat, start=1): ax.set_title('Test Axes {}'.format(i)) ax.set_xlabel('X axis') ax.set_ylabel('Y axis') fig.tight_layout() plt.show() ```
Thanks.", "response":"To change the labels for Pandas df.plot() use ax.legend([...]): ``` import pandas as pd import matplotlib.pyplot as plt fig, ax = plt.subplots() df = pd.DataFrame({'A':26, 'B':20}, index=['N']) df.plot(kind='bar', ax=ax) #ax = df.plot(kind='bar') # \"same\" as above ax.legend([\"AAA\", \"BBB\"]); ``` Another approach is to do the same by plt.legend([...]): ``` import matplotlib.pyplot as plt df.plot(kind='bar') plt.legend([\"AAA\", \"BBB\"]); ```", "best_answers_score":0.8, "library_name":"matplotlib", "question_url":"https:\/\/stackoverflow.com\/questions\/33149428\/modify-the-legend-of-pandas-bar-plot", "best_answers_votes":161, "question_length":706, "response_length":440 }, { "question":"Can i cycle through line styles in matplotlib I know how to cycle through a list of colors in matplotlib. But is it possible to do something similar with line styles (plain, dotted, dashed, etc.)? I'd need to do that so my graphs would be easier to read when printed. Any suggestions how to do that?", "response":"Something like this might do the trick: ``` import matplotlib.pyplot as plt from itertools import cycle lines = [\"-\",\"--\",\"-.\",\":\"] linecycler = cycle(lines) plt.figure() for i in range(10): x = range(i,i+10) plt.plot(range(10),x,next(linecycler)) plt.show() ``` Result: Edit for newer version (v2.22) ``` import matplotlib.pyplot as plt from cycler import cycler # plt.figure() for i in range(5): x = range(i,i+5) linestyle_cycler = cycler('linestyle',['-','--',':','-.']) plt.rc('axes', prop_cycle=linestyle_cycler) plt.plot(range(5),x) plt.legend(['first','second','third','fourth','fifth'], loc='upper left', fancybox=True, shadow=True) plt.show() ``` For more detailed information consult the matplotlib tutorial on \"Styling with cycler\" To see the output click \"show figure\"", "best_answers_score":0.8, "library_name":"matplotlib", "question_url":"https:\/\/stackoverflow.com\/questions\/7799156\/can-i-cycle-through-line-styles-in-matplotlib", 
"best_answers_votes":136, "question_length":299, "response_length":780 }, { "question":"python error: no module named pylab I am new to Python and want to use its plot functionality to create graphs. I am using ubuntu 12.04. I followed the Python installation steps from http:\/\/eli.thegreenplace.net\/2011\/10\/10\/installing-python-2-7-on-ubuntu\/ but when I do ``` from pylab import * ``` I am getting this error ``` >>> from pylab import * Traceback (most recent call last): File \"\", line 1, in ImportError: No module named pylab ``` My Python version is python 2.7. Can anybody tell me what I am missing here?", "response":"You'll need to install numpy, scipy and matplotlib to get pylab. In ubuntu you can install them with this command: ``` sudo apt-get install python-numpy python-scipy python-matplotlib ``` If you installed python from source you will need to install these packages through pip. Note that you may have to install other dependencies to do this, as well as install numpy before the other two. That said, I would recommend using the version of python in the repositories as I think it is up to date with the current version of python (2.7.3).", "best_answers_score":0.8, "library_name":"matplotlib", "question_url":"https:\/\/stackoverflow.com\/questions\/10965336\/python-error-no-module-named-pylab", "best_answers_votes":141, "question_length":521, "response_length":537 }, { "question":"Barchart with vertical ytick labels I'm using matplotlib to generate a (vertical) barchart. The problem is my labels are rather long. Is there any way to display them vertically, either in the bar or above it or below it?", "response":"Do you mean something like this: ``` >>> from matplotlib import * >>> plot(xrange(10)) >>> yticks(xrange(10), rotation='vertical') ``` ? In general, to show any text in matplotlib with a vertical orientation, you can add the keyword rotation='vertical'. 
For further options, you can look at help(matplotlib.pyplot.text) The yticks function plots the ticks on the y axis; I am not sure whether you originally meant this or the ylabel function, but the procedure is always the same: you have to add rotation='vertical'. Maybe you can also find useful the options 'verticalalignment' and 'horizontalalignment', which allow you to define how to align the text with respect to the ticks or the other elements.", "best_answers_score":0.8, "library_name":"matplotlib", "question_url":"https:\/\/stackoverflow.com\/questions\/1221108\/barchart-with-vertical-ytick-labels", "best_answers_votes":116, "question_length":221, "response_length":704 }, { "question":"What are the differences between add_axes and add_subplot? In a previous answer it was recommended to me to use add_subplot instead of add_axes to show axes correctly, but searching the documentation I couldn't understand when and why I should use either one of these functions. Can anyone explain the differences?", "response":"Common grounds Both add_axes and add_subplot add an axes to a figure. They both return a (subclass of a) matplotlib.axes.Axes object. However, the mechanism which is used to add the axes differs substantially. add_axes The calling signature of add_axes is add_axes(rect), where rect is a list [x0, y0, width, height] denoting the lower left point of the new axes in figure coordinates (x0,y0) and its width and height. So the axes is positioned in absolute coordinates on the canvas. E.g. ``` fig = plt.figure() ax = fig.add_axes([0,0,1,1]) ``` places a figure in the canvas that is exactly as large as the canvas itself. add_subplot The calling signature of add_subplot does not directly provide the option to place the axes at a predefined position. It rather allows to specify where the axes should be situated according to a subplot grid.
The usual and easiest way to specify this position is the 3 integer notation, ``` fig = plt.figure() ax = fig.add_subplot(231) ``` In this example a new axes is created at the first position (1) on a grid of 2 rows and 3 columns. To produce only a single axes, add_subplot(111) would be used (first plot on a 1 by 1 subplot grid). (In newer matplotlib versions, add_subplot() without any arguments is possible as well.) The advantage of this method is that matplotlib takes care of the exact positioning. By default add_subplot(111) would produce an axes positioned at [0.125,0.11,0.775,0.77] or similar, which already leaves enough space around the axes for the title and the (tick)labels. However, this position may also change depending on other elements in the plot, titles set, etc. It can also be adjusted using pyplot.subplots_adjust(...) or pyplot.tight_layout(). In most cases, add_subplot would be the preferred method to create axes for plots on a canvas. Only in cases where exact positioning matters, add_axes might be useful. Example ``` import matplotlib.pyplot as plt plt.rcParams[\"figure.figsize\"] = (5,3) fig = plt.figure() fig.add_subplot(241) fig.add_subplot(242) ax = fig.add_subplot(223) ax.set_title(\"subplots\") fig.add_axes([0.77,.3,.2,.6]) ax2 = fig.add_axes([0.67,.5,.2,.3]) fig.add_axes([0.6,.1,.35,.3]) ax2.set_title(\"random axes\") plt.tight_layout() plt.show() ``` Alternative The easiest way to obtain one or more subplots together with their handles is plt.subplots(). For one axes, use ``` fig, ax = plt.subplots() ``` or, if more subplots are needed, ``` fig, axes = plt.subplots(nrows=3, ncols=4) ``` The initial question In the initial question an axes was placed using fig.add_axes([0,0,1,1]), such that it sits tight to the figure boundaries. The disadvantage of this is of course that ticks, ticklabels, axes labels and titles are cut off.
Therefore I suggested in one of the comments to the answer to use fig.add_subplot as this will automatically allow for enough space for those elements, and, if this is not enough, can be adjusted using pyplot.subplots_adjust(...) or pyplot.tight_layout().", "best_answers_score":0.8, "library_name":"matplotlib", "question_url":"https:\/\/stackoverflow.com\/questions\/43326680\/what-are-the-differences-between-add-axes-and-add-subplot", "best_answers_votes":172, "question_length":314, "response_length":2975 }, { "question":"How do I use matplotlib autopct? I'd like to create a matplotlib pie chart which has the value of each wedge written on top of the wedge. The documentation suggests I should use autopct to do this. autopct: [ None | format string | format function ] If not None, is a string or function used to label the wedges with their numeric value. The label will be placed inside the wedge. If it is a format string, the label will be fmt%pct. If it is a function, it will be called. Unfortunately, I'm unsure what this format string or format function is supposed to be. Using this basic example below, how can I display each numerical value on top of its wedge? ``` plt.figure() values = [3, 12, 5, 8] labels = ['a', 'b', 'c', 'd'] plt.pie(values, labels=labels) #autopct?? plt.show() ```", "response":"autopct enables you to display the percent value using Python string formatting. For example, if autopct='%.2f', then for each pie wedge, the format string is '%.2f' and the numerical percent value for that wedge is pct, so the wedge label is set to the string '%.2f'%pct. ``` import matplotlib.pyplot as plt plt.figure() values = [3, 12, 5, 8] labels = ['a', 'b', 'c', 'd'] plt.pie(values, labels=labels, autopct='%.2f') plt.show() ``` yields You can do fancier things by supplying a callable to autopct. 
To display both the percent value and the original value, you could do this: ``` import matplotlib.pyplot as plt # make the pie circular by setting the aspect ratio to 1 plt.figure(figsize=plt.figaspect(1)) values = [3, 12, 5, 8] labels = ['a', 'b', 'c', 'd'] def make_autopct(values): def my_autopct(pct): total = sum(values) val = int(round(pct*total\/100.0)) return '{p:.2f}% ({v:d})'.format(p=pct,v=val) return my_autopct plt.pie(values, labels=labels, autopct=make_autopct(values)) plt.show() ``` Again, for each pie wedge, matplotlib supplies the percent value pct as the argument, though this time it is sent as the argument to the function my_autopct. The wedge label is set to my_autopct(pct).", "best_answers_score":0.8, "library_name":"matplotlib", "question_url":"https:\/\/stackoverflow.com\/questions\/6170246\/how-do-i-use-matplotlib-autopct", "best_answers_votes":170, "question_length":780, "response_length":1207 }, { "question":"Is there a way to make a discontinuous axis in Matplotlib? I'm trying to create a plot using pyplot that has a discontinuous x-axis. The usual way this is drawn is that the axis will have something like this: (values)----\/\/----(later values) where the \/\/ indicates that you're skipping everything between (values) and (later values). I haven't been able to find any examples of this, so I'm wondering if it's even possible. I know you can join data over a discontinuity for, eg, financial data, but I'd like to make the jump in the axis more explicit. At the moment I'm just using subplots but I'd really like to have everything end up on the same graph in the end.", "response":"Paul's answer is a perfectly fine method of doing this. However, if you don't want to make a custom transform, you can just use two subplots to create the same effect. 
Rather than put together an example from scratch, there's an excellent example of this written by Paul Ivanov in the matplotlib examples (It's only in the current git tip, as it was only committed a few months ago. It's not on the webpage yet.). This is just a simple modification of this example to have a discontinuous x-axis instead of the y-axis. (Which is why I'm making this post a CW) Basically, you just do something like this: ``` import matplotlib.pylab as plt import numpy as np # If you're not familiar with np.r_, don't worry too much about this. It's just # a series with points from 0 to 1 spaced at 0.1, and 9 to 10 with the same spacing. x = np.r_[0:1:0.1, 9:10:0.1] y = np.sin(x) fig,(ax,ax2) = plt.subplots(1, 2, sharey=True) # plot the same data on both axes ax.plot(x, y, 'bo') ax2.plot(x, y, 'bo') # zoom-in \/ limit the view to different portions of the data ax.set_xlim(0,1) # most of the data ax2.set_xlim(9,10) # outliers only # hide the spines between ax and ax2 ax.spines['right'].set_visible(False) ax2.spines['left'].set_visible(False) ax.yaxis.tick_left() ax.tick_params(labeltop='off') # don't put tick labels at the top ax2.yaxis.tick_right() # Make the spacing between the two axes a bit smaller plt.subplots_adjust(wspace=0.15) plt.show() ``` To add the broken axis lines \/\/ effect, we can do this (again, modified from Paul Ivanov's example): ``` import matplotlib.pylab as plt import numpy as np # If you're not familiar with np.r_, don't worry too much about this. It's just # a series with points from 0 to 1 spaced at 0.1, and 9 to 10 with the same spacing. 
x = np.r_[0:1:0.1, 9:10:0.1] y = np.sin(x) fig,(ax,ax2) = plt.subplots(1, 2, sharey=True) # plot the same data on both axes ax.plot(x, y, 'bo') ax2.plot(x, y, 'bo') # zoom-in \/ limit the view to different portions of the data ax.set_xlim(0,1) # most of the data ax2.set_xlim(9,10) # outliers only # hide the spines between ax and ax2 ax.spines['right'].set_visible(False) ax2.spines['left'].set_visible(False) ax.yaxis.tick_left() ax.tick_params(labeltop='off') # don't put tick labels at the top ax2.yaxis.tick_right() # Make the spacing between the two axes a bit smaller plt.subplots_adjust(wspace=0.15) # This looks pretty good, and was fairly painless, but you can get that # cut-out diagonal lines look with just a bit more work. The important # thing to know here is that in axes coordinates, which are always # between 0-1, spine endpoints are at these locations (0,0), (0,1), # (1,0), and (1,1). Thus, we just need to put the diagonals in the # appropriate corners of each of our axes, and so long as we use the # right transform and disable clipping. d = .015 # how big to make the diagonal lines in axes coordinates # arguments to pass plot, just so we don't keep repeating them kwargs = dict(transform=ax.transAxes, color='k', clip_on=False) ax.plot((1-d,1+d),(-d,+d), **kwargs) # top-left diagonal ax.plot((1-d,1+d),(1-d,1+d), **kwargs) # bottom-left diagonal kwargs.update(transform=ax2.transAxes) # switch to the bottom axes ax2.plot((-d,d),(-d,+d), **kwargs) # top-right diagonal ax2.plot((-d,d),(1-d,1+d), **kwargs) # bottom-right diagonal # What's cool about this is that now if we vary the distance between # ax and ax2 via f.subplots_adjust(hspace=...) 
or plt.subplot_tool(), # the diagonal lines will move accordingly, and stay right at the tips # of the spines they are 'breaking' plt.show() ```", "best_answers_score":0.8, "library_name":"matplotlib", "question_url":"https:\/\/stackoverflow.com\/questions\/5656798\/is-there-a-way-to-make-a-discontinuous-axis-in-matplotlib", "best_answers_votes":89, "question_length":665, "response_length":3582 }, { "question":"Automatically run %matplotlib inline in IPython Notebook Every time I launch IPython Notebook, the first command I run is ``` %matplotlib inline ``` Is there some way to change my config file so that when I launch IPython, it is automatically in this mode?", "response":"The configuration way IPython has profiles for configuration, located at ~\/.ipython\/profile_*. The default profile is called profile_default. Within this folder there are two primary configuration files: ipython_config.py ipython_kernel_config.py Add the inline option for matplotlib to ipython_kernel_config.py: ``` c = get_config() # ... Any other configurables you want to set c.InteractiveShellApp.matplotlib = \"inline\" ``` matplotlib vs. pylab Usage of %pylab to get inline plotting is discouraged. It introduces all sorts of gunk into your namespace that you just don't need. %matplotlib on the other hand enables inline plotting without injecting your namespace. You'll need to do explicit calls to get matplotlib and numpy imported. ``` import matplotlib.pyplot as plt import numpy as np ``` The small price of typing out your imports explicitly should be completely overcome by the fact that you now have reproducible code.", "best_answers_score":0.8, "library_name":"matplotlib", "question_url":"https:\/\/stackoverflow.com\/questions\/21176731\/automatically-run-matplotlib-inline-in-ipython-notebook", "best_answers_votes":87, "question_length":256, "response_length":932 }, { "question":"How to create grouped boxplots Is there a way to group boxplots in matplotlib? 
Assume we have three groups \"A\", \"B\", and \"C\" and for each we want to create a boxplot for both \"apples\" and \"oranges\". If a grouping is not possible directly, we can create all six combinations and place them linearly side by side. What would be the simplest way to visualize the groupings? I'm trying to avoid setting the tick labels to something like \"A + apples\" since my scenario involves much longer names than \"A\".", "response":"How about using colors to differentiate between \"apples\" and \"oranges\" and spacing to separate \"A\", \"B\" and \"C\"? Something like this: ``` from pylab import plot, show, savefig, xlim, figure, \\ ylim, legend, boxplot, setp, axes # function for setting the colors of the box plot pairs def setBoxColors(bp): setp(bp['boxes'][0], color='blue') setp(bp['caps'][0], color='blue') setp(bp['caps'][1], color='blue') setp(bp['whiskers'][0], color='blue') setp(bp['whiskers'][1], color='blue') setp(bp['fliers'][0], color='blue') setp(bp['fliers'][1], color='blue') setp(bp['medians'][0], color='blue') setp(bp['boxes'][1], color='red') setp(bp['caps'][2], color='red') setp(bp['caps'][3], color='red') setp(bp['whiskers'][2], color='red') setp(bp['whiskers'][3], color='red') setp(bp['fliers'][2], color='red') setp(bp['fliers'][3], color='red') setp(bp['medians'][1], color='red') # Some fake data to plot A = [[1, 2, 5], [7, 2]] B = [[5, 7, 2, 2, 5], [7, 2, 5]] C = [[3,2,5,7], [6, 7, 3]] fig = figure() ax = axes() # first boxplot pair bp = boxplot(A, positions = [1, 2], widths = 0.6) setBoxColors(bp) # second boxplot pair bp = boxplot(B, positions = [4, 5], widths = 0.6) setBoxColors(bp) # third boxplot pair bp = boxplot(C, positions = [7, 8], widths = 0.6) setBoxColors(bp) # set axes limits and labels xlim(0,9) ylim(0,9) ax.set_xticklabels(['A', 'B', 'C']) ax.set_xticks([1.5, 4.5, 7.5]) # draw temporary red and blue lines and use them to create a legend hB, = plot([1,1],'b-') hR, = plot([1,1],'r-') legend((hB,
hR),('Apples', 'Oranges')) hB.set_visible(False) hR.set_visible(False) savefig('boxcompare.png') show() ```", "best_answers_score":0.8, "library_name":"matplotlib", "question_url":"https:\/\/stackoverflow.com\/questions\/16592222\/how-to-create-grouped-boxplots", "best_answers_votes":119, "question_length":499, "response_length":1641 }, { "question":"Superscript in Python plots I want to label my x axis as follows: ``` pylab.xlabel('metres 10^1') ``` But I don't want to have the ^ symbol included. ``` pylab.xlabel('metres 10$^{one}$') ``` This method works and will superscript letters but doesn't seem to work for numbers. If I try: ``` pylab.xlabel('metres 10$^1$') ``` It superscripts a letter N for some reason. Does anyone know how to superscript numbers in Python plots? Thanks.
``` import matplotlib.pyplot as plt fig, ax = plt.subplots() ax.set(title=r'This is an expression $\\mathregular{e^{\\sin(\\omega\\phi)}}$', xlabel='meters $\\mathregular{10^1}$', ylabel=r'Hertz $\\mathregular{(\\frac{1}{s})}$') plt.show() ``` For more information (and a general overview of matplotlib's \"mathtext\"), see: http:\/\/matplotlib.org\/users\/mathtext.html", "best_answers_score":0.8, "library_name":"matplotlib", "question_url":"https:\/\/stackoverflow.com\/questions\/21226868\/superscript-in-python-plots", "best_answers_votes":164, "question_length":439, "response_length":1174 }, { "question":"How to draw grid lines behind matplotlib bar graph ``` x = ['01-02', '02-02', '03-02', '04-02', '05-02'] y = [2, 2, 3, 7, 2] fig, ax = plt.subplots(1, 1) ax.bar(range(len(y)), y, width=0.3,align='center',color='skyblue') plt.xticks(range(len(y)), x, size='small') plt.savefig('\/home\/user\/graphimages\/foo2.png') plt.close() ``` I want to draw grid lines (of x & y) behind the bar graph.", "response":"To add a grid you simply need to add ax.grid() If you want the grid to be behind the bars then add ``` ax.grid(zorder=0) ax.bar(range(len(y)), y, width=0.3, align='center', color='skyblue', zorder=3) ``` The important part is that the zorder of the bars is greater than grid. Experimenting it seems zorder=3 is the lowest value that actually gives the desired effect. I have no idea why zorder=1 isn't sufficient. EDIT: I have noticed this question has already been answered here using a different method although it suffers some link rot. 
Both methods yield the same result as far as I can see but andrew cooke's answer is more elegant.", "best_answers_score":0.8, "library_name":"matplotlib", "question_url":"https:\/\/stackoverflow.com\/questions\/23357798\/how-to-draw-grid-lines-behind-matplotlib-bar-graph", "best_answers_votes":147, "question_length":385, "response_length":637 }, { "question":"How to hide axes and gridlines I would like to be able to hide the axes and gridlines on a 3D matplotlib graph. I want to do this because when zooming in and out the image gets pretty nasty. I'm not sure what code to include here but this is what I use to create the graph. ``` fig = plt.figure() ax = fig.gca(projection='3d') ax.view_init(30, -90) ax.set_xlabel(\"X\") ax.set_ylabel(\"Y\") ax.set_zlabel(\"Z\") plt.xlim(0,pL) plt.ylim(0,pW) ax.set_aspect(\"equal\") plt.show() ``` This is an example of the plot that I am looking at:", "response":"``` # Hide grid lines ax.grid(False) # Hide axes ticks ax.set_xticks([]) ax.set_yticks([]) ax.set_zticks([]) ``` Note, you need matplotlib>=1.2 for set_zticks() to work.", "best_answers_score":0.8, "library_name":"matplotlib", "question_url":"https:\/\/stackoverflow.com\/questions\/45148704\/how-to-hide-axes-and-gridlines", "best_answers_votes":181, "question_length":526, "response_length":169 }, { "question":"Pandas dataframe groupby plot I have a dataframe which is structured as: ``` Date ticker adj_close 0 2016-11-21 AAPL 111.730 1 2016-11-22 AAPL 111.800 2 2016-11-23 AAPL 111.230 3 2016-11-25 AAPL 111.790 4 2016-11-28 AAPL 111.570 ... 8 2016-11-21 ACN 119.680 9 2016-11-22 ACN 119.480 10 2016-11-23 ACN 119.820 11 2016-11-25 ACN 120.740 ... 
``` How can I plot based on the ticker the adj_close versus Date?", "response":"For a simple plot, you can use: ``` df.plot(x='Date',y='adj_close') ``` Or you can set the index to be Date beforehand, then it's easy to plot the column you want: ``` df.set_index('Date', inplace=True) df['adj_close'].plot() ``` If you want a chart with one series per ticker on it, you need to group by first: ``` df.set_index('Date', inplace=True) df.groupby('ticker')['adj_close'].plot(legend=True) ``` If you want a chart with individual subplots: ``` grouped = df.groupby('ticker') ncols=2 nrows = int(np.ceil(grouped.ngroups\/ncols)) fig, axes = plt.subplots(nrows=nrows, ncols=ncols, figsize=(12,4), sharey=True) for (key, ax) in zip(grouped.groups.keys(), axes.flatten()): grouped.get_group(key).plot(ax=ax) ax.legend() plt.show() ```", "best_answers_score":0.8, "library_name":"matplotlib", "question_url":"https:\/\/stackoverflow.com\/questions\/41494942\/pandas-dataframe-groupby-plot", "best_answers_votes":158, "question_length":404, "response_length":734 }, { "question":"Pandas bar plot changes date format I have a simple stacked line plot that has exactly the date format I want magically set when using the following code. ``` df_ts = df.resample(\"W\", how='max') df_ts.plot(figsize=(12,8), stacked=True) ``` However, the dates mysteriously transform themselves to an ugly and unreadable format when plotting the same data as a bar plot. ``` df_ts = df.resample(\"W\", how='max') df_ts.plot(kind='bar', figsize=(12,8), stacked=True) ``` The original data was transformed a bit to have the weekly max. Why is this radical change in automatically set dates happening? How can I have the nicely formatted dates as above?
Here is some dummy data ``` start = pd.to_datetime(\"1-1-2012\") idx = pd.date_range(start, periods= 365).tolist() df=pd.DataFrame({'A':np.random.random(365), 'B':np.random.random(365)}) df.index = idx df_ts = df.resample('W', how= 'max') df_ts.plot(kind='bar', stacked=True) ```", "response":"The plotting code assumes that each bar in a bar plot deserves its own label. You could override this assumption by specifying your own formatter: ``` ax.xaxis.set_major_formatter(formatter) ``` The pandas.tseries.converter.TimeSeries_DateFormatter that Pandas uses to format the dates in the \"good\" plot works well with line plots when the x-values are dates. However, with a bar plot the x-values (at least those received by TimeSeries_DateFormatter.__call__) are merely integers starting at zero. If you try to use TimeSeries_DateFormatter with a bar plot, all the labels thus start at the Epoch, 1970-1-1 UTC, since this is the date which corresponds to zero. So the formatter used for line plots is unfortunately useless for bar plots (at least as far as I can see). 
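As a quick sanity check (an illustrative snippet of my own, not part of the original answer, using hypothetical data), you can confirm that the bar patches really do sit at plain integer positions, so the dates never reach the formatter as x-values:

```python
import matplotlib
matplotlib.use('Agg')  # render without a display
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd

# A small Series with a weekly DatetimeIndex (hypothetical data)
s = pd.Series(np.random.rand(4),
              index=pd.date_range('2012-01-01', periods=4, freq='W'))

ax = s.plot(kind='bar')
# The left edges of the bars are small numbers around 0..3,
# not matplotlib date numbers.
print([round(p.get_x(), 2) for p in ax.patches])
```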
The easiest way I see to produce the desired formatting is to generate and set the labels explicitly: ``` import numpy as np import matplotlib.pyplot as plt import pandas as pd import matplotlib.ticker as ticker start = pd.to_datetime(\"5-1-2012\") idx = pd.date_range(start, periods=365) df = pd.DataFrame({'A': np.random.random(365), 'B': np.random.random(365)}) df.index = idx df_ts = df.resample('W').max() ax = df_ts.plot(kind='bar', stacked=True) # Make most of the ticklabels empty so the labels don't get too crowded ticklabels = ['']*len(df_ts.index) # Every 4th ticklabel shows the month and day ticklabels[::4] = [item.strftime('%b %d') for item in df_ts.index[::4]] # Every 12th ticklabel includes the year ticklabels[::12] = [item.strftime('%b %d\\n%Y') for item in df_ts.index[::12]] ax.xaxis.set_major_formatter(ticker.FixedFormatter(ticklabels)) plt.gcf().autofmt_xdate() plt.show() ``` yields the desired formatting. For those looking for a simple example of a bar plot with dates: ``` import numpy as np import pandas as pd import matplotlib.pyplot as plt import matplotlib.ticker as mticker dates = pd.date_range('2012-1-1', '2017-1-1', freq='M') df = pd.DataFrame({'A':np.random.random(len(dates)), 'Date':dates}) fig, ax = plt.subplots() df.plot.bar(x='Date', y='A', ax=ax) ticklabels = ['']*len(df) skip = len(df)\/\/12 ticklabels[::skip] = df['Date'].iloc[::skip].dt.strftime('%Y-%m-%d') ax.xaxis.set_major_formatter(mticker.FixedFormatter(ticklabels)) fig.autofmt_xdate() # fixes the tracker # https:\/\/matplotlib.org\/users\/recipes.html def fmt(x, pos=0, max_i=len(ticklabels)-1): i = int(x) i = 0 if i < 0 else max_i if i > max_i else i return dates[i] ax.fmt_xdata = fmt plt.show() ```", "best_answers_score":0.8, "library_name":"matplotlib", "question_url":"https:\/\/stackoverflow.com\/questions\/30133280\/pandas-bar-plot-changes-date-format", "best_answers_votes":97, "question_length":924, "response_length":2432 }, { "question":"My matplotlib.pyplot legend is being cut off I'm attempting to create a plot with
a legend to the side of it using matplotlib. I can see that the plot is being created, but the image bounds do not allow the entire legend to be displayed. ``` lines = [] ax = plt.subplot(111) for filename in args: lines.append(plt.plot(y_axis, x_axis, colors[colorcycle], linestyle='steps-pre', label=filename)) ax.legend(bbox_to_anchor=(1.05, 1), loc=2, borderaxespad=0.) ``` This produces:", "response":"Even though it is late, I want to refer to a nice, recently introduced alternative: New matplotlib feature: The tight bounding box If you are interested in the output file of plt.savefig: in this case the flag bbox_inches='tight' is your friend! ``` import matplotlib.pyplot as plt fig = plt.figure(1) plt.plot([1, 2, 3], [1, 0, 1], label='A') plt.plot([1, 2, 3], [1, 2, 2], label='B') plt.legend(loc='center left', bbox_to_anchor=(1, 0)) fig.savefig('samplefigure', bbox_inches='tight') ``` I also want to refer to a more detailed answer: Moving matplotlib legend outside of the axis makes it cutoff by the figure box Advantages There is no need to adjust the actual data\/picture. It is compatible with plt.subplots as well, whereas the others are not! It applies at least to the most commonly used output files, e.g.
png, pdf.", "best_answers_score":0.8, "library_name":"matplotlib", "question_url":"https:\/\/stackoverflow.com\/questions\/9651092\/my-matplotlib-pyplot-legend-is-being-cut-off", "best_answers_votes":160, "question_length":474, "response_length":824 }, { "question":"Histogram values of a Pandas Series I have some values in a Python Pandas Series (type: pandas.core.series.Series) ``` In [1]: series = pd.Series([0.0,950.0,-70.0,812.0,0.0,-90.0,0.0,0.0,-90.0,0.0,-64.0,208.0,0.0,-90.0,0.0,-80.0,0.0,0.0,-80.0,-48.0,840.0,-100.0,190.0,130.0,-100.0,-100.0,0.0,-50.0,0.0,-100.0,-100.0,0.0,-90.0,0.0,-90.0,-90.0,63.0,-90.0,0.0,0.0,-90.0,-80.0,0.0,]) In [2]: series.min() Out[2]: -100.0 In [3]: series.max() Out[3]: 950.0 ``` I would like to get the values of the histogram (not necessarily plotting the histogram)... I just need to get the frequency for each interval. Let's say that my intervals are going from [-200; -150] to [950; 1000] so lower bounds are ``` lwb = range(-200,1000,50) ``` and upper bounds are ``` upb = range(-150,1050,50) ``` I don't know how to get the frequency (the number of values that are inside each interval) now... I'm sure that defining lwb and upb is not necessary... but I don't know what function I should use to perform this! (after diving into the Pandas docs, I think the cut function can help me because it's a discretization problem... but I don't understand how to use it) After being able to do this, I will have a look at the way to display the histogram (but that's another problem)", "response":"You just need to use the histogram function of NumPy: ``` import numpy as np count, division = np.histogram(series) ``` where division is the automatically calculated borders of your bins and count is the population inside each bin. If you need to fix a certain number of bins, you can use the argument bins and specify a number of bins, or give it directly the boundaries between each bin.
``` count, division = np.histogram(series, bins = [-201,-149,949,1001]) ``` To plot the results you can use the matplotlib function hist, but if you are working in pandas each Series has its own handle to the hist function, and you can give it the chosen binning: ``` series.hist(bins=division) ``` Edit: As mentioned by another poster, Pandas is built on top of NumPy. Since OP is explicitly using Pandas, we can do away with the additional import by accessing NumPy through Pandas (note that pd.np is deprecated in modern pandas, so a plain import numpy as np is the safer choice): ```py count, division = pd.np.histogram(series) ```", "best_answers_score":0.8, "library_name":"matplotlib", "question_url":"https:\/\/stackoverflow.com\/questions\/13129618\/histogram-values-of-a-pandas-series", "best_answers_votes":122, "question_length":1227, "response_length":926 }, { "question":"matplotlib: drawing lines between points ignoring missing data I have a set of data which I want plotted as a line-graph. For each series, some data is missing (but different for each series). Currently matplotlib does not draw lines which skip missing data: for example ``` import matplotlib.pyplot as plt xs = range(8) series1 = [1, 3, 3, None, None, 5, 8, 9] series2 = [2, None, 5, None, 4, None, 3, 2] plt.plot(xs, series1, linestyle='-', marker='o') plt.plot(xs, series2, linestyle='-', marker='o') plt.show() ``` results in a plot with gaps in the lines. How can I tell matplotlib to draw lines through the gaps?
(I'd rather not have to interpolate the data).", "response":"You can mask the NaN values this way: ``` import numpy as np import matplotlib.pyplot as plt xs = np.arange(8) series1 = np.array([1, 3, 3, None, None, 5, 8, 9]).astype(np.double) s1mask = np.isfinite(series1) series2 = np.array([2, None, 5, None, 4, None, 3, 2]).astype(np.double) s2mask = np.isfinite(series2) plt.plot(xs[s1mask], series1[s1mask], linestyle='-', marker='o') plt.plot(xs[s2mask], series2[s2mask], linestyle='-', marker='o') plt.show() ``` This leads to", "best_answers_score":0.8, "library_name":"matplotlib", "question_url":"https:\/\/stackoverflow.com\/questions\/14399689\/matplotlib-drawing-lines-between-points-ignoring-missing-data", "best_answers_votes":119, "question_length":665, "response_length":470 }, { "question":"GridSpec with shared axes in Python This solution to another thread suggests using gridspec.GridSpec instead of plt.subplots. However, when I share axes between subplots, I usually use a syntax like the following ``` fig, axes = plt.subplots(N, 1, sharex='col', sharey=True, figsize=(3,18)) ``` How can I specify sharex and sharey when I use GridSpec ?", "response":"First off, there's an easier workaround for your original problem, as long as you're okay with being slightly imprecise. Just reset the top extent of the subplots to the default after calling tight_layout: ``` fig, axes = plt.subplots(ncols=2, sharey=True) plt.setp(axes, title='Test') fig.suptitle('An overall title', size=20) fig.tight_layout() fig.subplots_adjust(top=0.9) plt.show() ``` However, to answer your question, you'll need to create the subplots at a slightly lower level to use gridspec. If you want to replicate the hiding of shared axes like subplots does, you'll need to do that manually, by using the sharey argument to Figure.add_subplot and hiding the duplicated ticks with plt.setp(ax.get_yticklabels(), visible=False). 
As an example: ``` import matplotlib.pyplot as plt from matplotlib import gridspec fig = plt.figure() gs = gridspec.GridSpec(1,2) ax1 = fig.add_subplot(gs[0]) ax2 = fig.add_subplot(gs[1], sharey=ax1) plt.setp(ax2.get_yticklabels(), visible=False) plt.setp([ax1, ax2], title='Test') fig.suptitle('An overall title', size=20) gs.tight_layout(fig, rect=[0, 0, 1, 0.97]) plt.show() ```", "best_answers_score":0.8, "library_name":"matplotlib", "question_url":"https:\/\/stackoverflow.com\/questions\/22511550\/gridspec-with-shared-axes-in-python", "best_answers_votes":95, "question_length":352, "response_length":1123 }, { "question":"making matplotlib scatter plots from dataframes in Python's pandas What is the best way to make a series of scatter plots using matplotlib from a pandas dataframe in Python? For example, if I have a dataframe df that has some columns of interest, I find myself typically converting everything to arrays: ``` import matplotlib.pylab as plt # df is a DataFrame: fetch col1 and col2 # and drop na rows if any of the columns are NA mydata = df[[\"col1\", \"col2\"]].dropna(how=\"any\") # Now plot with matplotlib vals = mydata.values plt.scatter(vals[:, 0], vals[:, 1]) ``` The problem with converting everything to array before plotting is that it forces you to break out of dataframes. Consider these two use cases where having the full dataframe is essential to plotting: For example, what if you wanted to now look at all the values of col3 for the corresponding values that you plotted in the call to scatter, and color each point (or size) it by that value? You'd have to go back, pull out the non-na values of col1,col2 and check what their corresponding values. Is there a way to plot while preserving the dataframe? 
For example: ``` mydata = df.dropna(how=\"any\", subset=[\"col1\", \"col2\"]) # plot a scatter of col1 by col2, with sizes according to col3 scatter(mydata([\"col1\", \"col2\"]), s=mydata[\"col3\"]) ``` Similarly, imagine that you wanted to filter or color each point differently depending on the values of some of its columns. E.g. what if you wanted to automatically plot the labels of the points that meet a certain cutoff on col1, col2 alongside them (where the labels are stored in another column of the df), or color these points differently, like people do with dataframes in R. For example: ``` mydata = df.dropna(how=\"any\", subset=[\"col1\", \"col2\"]) myscatter = scatter(mydata[[\"col1\", \"col2\"]], s=1) # Plot in red, with smaller size, all the points that # have a col2 value greater than 0.5 myscatter.replot(mydata[\"col2\"] > 0.5, color=\"red\", s=0.5) ``` How can this be done? EDIT Reply to crewbum: You say that the best way is to plot each condition (like subset_a, subset_b) separately. What if you have many conditions, e.g. you want to split up the scatters into 4 types of points or even more, plotting each in different shape\/color. How can you elegantly apply condition a, b, c, etc. and make sure you then plot \"the rest\" (things not in any of these conditions) as the last step? Similarly in your example where you plot col1,col2 differently based on col3, what if there are NA values that break the association between col1,col2,col3? For example if you want to plot all col2 values based on their col3 values, but some rows have an NA value in either col1 or col3, forcing you to use dropna first. So you would do: ``` mydata = df.dropna(how=\"any\", subset=[\"col1\", \"col2\", \"col3\") ``` then you can plot using mydata like you show -- plotting the scatter between col1,col2 using the values of col3. But mydata will be missing some points that have values for col1,col2 but are NA for col3, and those still have to be plotted... 
so how would you basically plot \"the rest\" of the data, i.e. the points that are not in the filtered set mydata?", "response":"Try passing columns of the DataFrame directly to matplotlib, as in the examples below, instead of extracting them as numpy arrays. ``` df = pd.DataFrame(np.random.randn(10,2), columns=['col1','col2']) df['col3'] = np.arange(len(df))**2 * 100 + 100 In [5]: df Out[5]: col1 col2 col3 0 -1.000075 -0.759910 100 1 0.510382 0.972615 200 2 1.872067 -0.731010 500 3 0.131612 1.075142 1000 4 1.497820 0.237024 1700 ``` Vary scatter point size based on another column ``` plt.scatter(df.col1, df.col2, s=df.col3) # OR (with pandas 0.13 and up) df.plot(kind='scatter', x='col1', y='col2', s=df.col3) ``` Vary scatter point color based on another column ``` colors = np.where(df.col3 > 300, 'r', 'k') plt.scatter(df.col1, df.col2, s=120, c=colors) # OR (with pandas 0.13 and up) df.plot(kind='scatter', x='col1', y='col2', s=120, c=colors) ``` Scatter plot with legend However, the easiest way I've found to create a scatter plot with legend is to call plt.scatter once for each point type. ``` cond = df.col3 > 300 subset_a = df[cond].dropna() subset_b = df[~cond].dropna() plt.scatter(subset_a.col1, subset_a.col2, s=120, c='b', label='col3 > 300') plt.scatter(subset_b.col1, subset_b.col2, s=60, c='r', label='col3 <= 300') plt.legend() ``` Update From what I can tell, matplotlib simply skips points with NA x\/y coordinates or NA style settings (e.g., color\/size). To find points skipped due to NA, try the isnull method: df[df.col3.isnull()] To split a list of points into many types, take a look at numpy select, which is a vectorized if-then-else implementation and accepts an optional default value. 
For example: ``` df['subset'] = np.select([df.col3 < 150, df.col3 < 400, df.col3 < 600], [0, 1, 2], -1) for color, label in zip('bgrm', [0, 1, 2, -1]): subset = df[df.subset == label] plt.scatter(subset.col1, subset.col2, s=120, c=color, label=str(label)) plt.legend() ```", "best_answers_score":0.8, "library_name":"matplotlib", "question_url":"https:\/\/stackoverflow.com\/questions\/14300137\/making-matplotlib-scatter-plots-from-dataframes-in-pythons-pandas", "best_answers_votes":122, "question_length":3162, "response_length":1869 }, { "question":"matplotlib colorbar in each subplot I would like to add a separate colorbar to each subplot in a 2x2 plot. ``` fig , ( (ax1,ax2) , (ax3,ax4)) = plt.subplots(2, 2,sharex = True,sharey=True) z1_plot = ax1.scatter(x,y,c = z1,vmin=0.0,vmax=0.4) plt.colorbar(z1_plot,cax=ax1) z2_plot = ax2.scatter(x,y,c = z2,vmin=0.0,vmax=40) plt.colorbar(z1_plot,cax=ax2) z3_plot = ax3.scatter(x,y,c = z3,vmin=0.0,vmax=894) plt.colorbar(z1_plot,cax=ax3) z4_plot = ax4.scatter(x,y,c = z4,vmin=0.0,vmax=234324) plt.colorbar(z1_plot,cax=ax4) plt.show() ``` I thought that this is how you do it, but the resulting plot is really messed up; it just has an all grey background and ignores the set_xlim, set_ylim commands I have (not shown here for simplicity). Plus, it shows no color bars. Is this the right way to do it? I also tried getting rid of the \"cax = ...\", but then the colorbars all go on the bottom right plot and not on each separate plot!", "response":"This can be easily solved with the utility make_axes_locatable.
I provide a minimal example that shows how this works and should be readily adaptable: ``` import matplotlib.pyplot as plt from mpl_toolkits.axes_grid1 import make_axes_locatable import numpy as np m1 = np.random.rand(3, 3) m2 = np.arange(0, 3*3, 1).reshape((3, 3)) fig = plt.figure(figsize=(16, 12)) ax1 = fig.add_subplot(121) im1 = ax1.imshow(m1, interpolation='None') divider = make_axes_locatable(ax1) cax = divider.append_axes('right', size='5%', pad=0.05) fig.colorbar(im1, cax=cax, orientation='vertical') ax2 = fig.add_subplot(122) im2 = ax2.imshow(m2, interpolation='None') divider = make_axes_locatable(ax2) cax = divider.append_axes('right', size='5%', pad=0.05) fig.colorbar(im2, cax=cax, orientation='vertical'); ```", "best_answers_score":0.8, "library_name":"matplotlib", "question_url":"https:\/\/stackoverflow.com\/questions\/23876588\/matplotlib-colorbar-in-each-subplot", "best_answers_votes":78, "question_length":925, "response_length":797 }, { "question":"Fitting a Normal distribution to 1D data I have a 1 dimensional array. I can compute the \"mean\" and \"standard deviation\" of this sample and plot the \"Normal distribution\" but I have a problem: I want to plot the data and Normal distribution in the same figure. I don't know how to plot both the data and the normal distribution. Any idea about \"Gaussian probability density function in scipy.stats\"? ``` s = np.std(array) m = np.mean(array) plt.plot(norm.pdf(array,m,s)) ```", "response":"You can use matplotlib to plot the histogram and the PDF (as in the link in @MrE's answer). For fitting and for computing the PDF, you can use scipy.stats.norm, as follows. ``` import numpy as np from scipy.stats import norm import matplotlib.pyplot as plt # Generate some data for this demonstration. data = norm.rvs(10.0, 2.5, size=500) # Fit a normal distribution to the data: mu, std = norm.fit(data) # Plot the histogram. plt.hist(data, bins=25, density=True, alpha=0.6, color='g') # Plot the PDF.
xmin, xmax = plt.xlim() x = np.linspace(xmin, xmax, 100) p = norm.pdf(x, mu, std) plt.plot(x, p, 'k', linewidth=2) title = \"Fit results: mu = %.2f, std = %.2f\" % (mu, std) plt.title(title) plt.show() ``` Here's the plot generated by the script:", "best_answers_score":0.8, "library_name":"matplotlib", "question_url":"https:\/\/stackoverflow.com\/questions\/20011122\/fitting-a-normal-distribution-to-1d-data", "best_answers_votes":191, "question_length":473, "response_length":747 }, { "question":"Delete a subplot I'm trying to figure out a way of deleting (dynamically) subplots in matplotlib. I see they have a remove method, but I get the error ``` NotImplementedError: cannot remove artist ``` I'm surprised that I can't find this anywhere. Does anyone know how to do this? ```py from matplotlib import pyplot as plt fig, axs = plt.subplots(1,3) axs[0].plot([1,2],[3,4]) axs[2].plot([0,1],[2,3]) plt.draw() plt.tight_layout() ```", "response":"Use fig.delaxes or plt.delaxes to remove unwanted subplots: ```py fig, axs = plt.subplots(1,3) axs[0].plot([1,2],[3,4]) axs[2].plot([0,1],[2,3]) fig.delaxes(axs[1]) plt.draw() plt.tight_layout() ```", "best_answers_score":0.8, "library_name":"matplotlib", "question_url":"https:\/\/stackoverflow.com\/questions\/14694501\/delete-a-subplot", "best_answers_votes":167, "question_length":436, "response_length":197 }, { "question":"How to have clusters of stacked bars So here is what my data set looks like: ``` In [1]: df1=pd.DataFrame(np.random.rand(4,2),index=[\"A\",\"B\",\"C\",\"D\"],columns=[\"I\",\"J\"]) In [2]: df2=pd.DataFrame(np.random.rand(4,2),index=[\"A\",\"B\",\"C\",\"D\"],columns=[\"I\",\"J\"]) In [3]: df1 Out[3]: I J A 0.675616 0.177597 B 0.675693 0.598682 C 0.631376 0.598966 D 0.229858 0.378817 In [4]: df2 Out[4]: I J A 0.939620 0.984616 B 0.314818 0.456252 C 0.630907 0.656341 D 0.020994 0.538303 ``` I want to have a stacked bar plot for each dataframe, but since they have the same index, I'd like to have 2 stacked bars per index.
I've tried to plot both on the same axes: ``` In [5]: ax = df1.plot(kind=\"bar\", stacked=True) In [5]: ax2 = df2.plot(kind=\"bar\", stacked=True, ax = ax) ``` But it overlaps. Then I tried to concat the two datasets first: ``` pd.concat(dict(df1 = df1, df2 = df2),axis = 1).plot(kind=\"bar\", stacked=True) ``` but here everything is stacked. My best try is: ``` pd.concat(dict(df1 = df1, df2 = df2),axis = 0).plot(kind=\"bar\", stacked=True) ``` Which gives: This is basically what I want, except that I want the bars ordered as (df1,A) (df2,A) (df1,B) (df2,B) etc... I guess there is a trick but I can't find it! After @bgschiller's answer I got this: Which is almost what I want. I would like the bars to be clustered by index, in order to have something visually clear. Bonus: having the x-label not redundant, something like: ``` df1 df2 df1 df2 _______ _______ ... A B ```", "response":"I eventually found a trick (edit: see below for using seaborn and a long-form dataframe): Solution with pandas and matplotlib Here it is with a more complete example: ``` import pandas as pd import matplotlib.cm as cm import numpy as np import matplotlib.pyplot as plt def plot_clustered_stacked(dfall, labels=None, title=\"multiple stacked bar plot\", H=\"\/\", **kwargs): \"\"\"Given a list of dataframes, with identical columns and index, create a clustered stacked bar plot.
labels is a list of the names of the dataframe, used for the legend title is a string for the title of the plot H is the hatch used for identification of the different dataframe\"\"\" n_df = len(dfall) n_col = len(dfall[0].columns) n_ind = len(dfall[0].index) axe = plt.subplot(111) for df in dfall : # for each data frame axe = df.plot(kind=\"bar\", linewidth=0, stacked=True, ax=axe, legend=False, grid=False, **kwargs) # make bar plots h,l = axe.get_legend_handles_labels() # get the handles we want to modify for i in range(0, n_df * n_col, n_col): # len(h) = n_col * n_df for j, pa in enumerate(h[i:i+n_col]): for rect in pa.patches: # for each index rect.set_x(rect.get_x() + 1 \/ float(n_df + 1) * i \/ float(n_col)) rect.set_hatch(H * int(i \/ n_col)) #edited part rect.set_width(1 \/ float(n_df + 1)) axe.set_xticks((np.arange(0, 2 * n_ind, 2) + 1 \/ float(n_df + 1)) \/ 2.) axe.set_xticklabels(df.index, rotation = 0) axe.set_title(title) # Add invisible data to add another legend n=[] for i in range(n_df): n.append(axe.bar(0, 0, color=\"gray\", hatch=H * i)) l1 = axe.legend(h[:n_col], l[:n_col], loc=[1.01, 0.5]) if labels is not None: l2 = plt.legend(n, labels, loc=[1.01, 0.1]) axe.add_artist(l1) return axe # create fake dataframes df1 = pd.DataFrame(np.random.rand(4, 5), index=[\"A\", \"B\", \"C\", \"D\"], columns=[\"I\", \"J\", \"K\", \"L\", \"M\"]) df2 = pd.DataFrame(np.random.rand(4, 5), index=[\"A\", \"B\", \"C\", \"D\"], columns=[\"I\", \"J\", \"K\", \"L\", \"M\"]) df3 = pd.DataFrame(np.random.rand(4, 5), index=[\"A\", \"B\", \"C\", \"D\"], columns=[\"I\", \"J\", \"K\", \"L\", \"M\"]) # Then, just call : plot_clustered_stacked([df1, df2, df3],[\"df1\", \"df2\", \"df3\"]) ``` And it gives that : You can change the colors of the bar by passing a cmap argument: ``` plot_clustered_stacked([df1, df2, df3], [\"df1\", \"df2\", \"df3\"], cmap=plt.cm.viridis) ``` Solution with seaborn: Given the same df1, df2, df3, below, I convert them in a long form: ``` df1[\"Name\"] 
= \"df1\" df2[\"Name\"] = \"df2\" df3[\"Name\"] = \"df3\" dfall = pd.concat([pd.melt(i.reset_index(), id_vars=[\"Name\", \"index\"]) # transform in tidy format each df for i in [df1, df2, df3]], ignore_index=True) ``` The problem with seaborn is that it doesn't stack bars natively, so the trick is to plot the cumulative sum of each bar on top of each other: ``` dfall.set_index([\"Name\", \"index\", \"variable\"], inplace=1) dfall[\"vcs\"] = dfall.groupby(level=[\"Name\", \"index\"]).cumsum() dfall.reset_index(inplace=True) >>> dfall.head(6) Name index variable value vcs 0 df1 A I 0.717286 0.717286 1 df1 B I 0.236867 0.236867 2 df1 C I 0.952557 0.952557 3 df1 D I 0.487995 0.487995 4 df1 A J 0.174489 0.891775 5 df1 B J 0.332001 0.568868 ``` Then loop over each group of variable and plot the cumulative sum: ``` c = [\"blue\", \"purple\", \"red\", \"green\", \"pink\"] for i, g in enumerate(dfall.groupby(\"variable\")): ax = sns.barplot(data=g[1], x=\"index\", y=\"vcs\", hue=\"Name\", color=c[i], zorder=-i, # so first bars stay on top edgecolor=\"k\") ax.legend_.remove() # remove the redundant legends ``` It lacks the legend that can be added easily I think. The problem is that instead of hatches (which can be added easily) to differentiate the dataframes we have a gradient of lightness, and it's a bit too light for the first one, and I don't really know how to change that without changing each rectangle one by one (as in the first solution). Tell me if you don't understand something in the code. Feel free to re-use this code which is under CC0.", "best_answers_score":0.8, "library_name":"matplotlib", "question_url":"https:\/\/stackoverflow.com\/questions\/22787209\/how-to-have-clusters-of-stacked-bars", "best_answers_votes":114, "question_length":1471, "response_length":3907 }, { "question":"Add colorbar to existing axis I'm making some interactive plots and I would like to add a colorbar legend. 
I don't want the colorbar to be in its own axes, so I want to add it to the existing axes. I'm having difficulties doing this, as most of the example code I have found creates a new axes for the colorbar. I have tried the following code using matplotlib.colorbar.ColorbarBase, which adds a colorbar to an existing axes, but it gives me strange results and I can't figure out how to specify attributes of the colorbar (for instance, where on the axes it is placed and what size it is) ``` import matplotlib import matplotlib.pyplot as plt from matplotlib.cm import coolwarm import numpy as np x = np.random.uniform(1, 10, 10) y = np.random.uniform(1, 10, 10) v = np.random.uniform(1, 10, 10) fig, ax = plt.subplots() s = ax.scatter(x, y, c=v, cmap=coolwarm) matplotlib.colorbar.ColorbarBase(ax=ax, cmap=coolwarm, values=sorted(v), orientation=\"horizontal\") ``` Using fig.colorbar instead ofmatplotlib.colorbar.ColorbarBase still doesn't give me quite what I want, and I still don't know how to adjust the attributes of the colorbar. ``` fig.colorbar(s, ax=ax, cax=ax) ``` Let's say I want to have the colorbar in the top left corner, stretching about halfway across the top of the plot. How would I go about doing that? Am I better off writing a custom function for this, maybe using LineCollection?", "response":"This technique is usually used for multiple axis in a figure. In this context it is often required to have a colorbar that corresponds in size with the result from imshow. 
This can be achieved easily with the axes grid tool kit: ``` import numpy as np import matplotlib.pyplot as plt from mpl_toolkits.axes_grid1 import make_axes_locatable data = np.arange(100, 0, -1).reshape(10, 10) fig, ax = plt.subplots() divider = make_axes_locatable(ax) cax = divider.append_axes('right', size='5%', pad=0.05) im = ax.imshow(data, cmap='bone') fig.colorbar(im, cax=cax, orientation='vertical') plt.show() ```", "best_answers_score":0.8, "library_name":"matplotlib", "question_url":"https:\/\/stackoverflow.com\/questions\/32462881\/add-colorbar-to-existing-axis", "best_answers_votes":146, "question_length":1405, "response_length":598 }, { "question":"Keras - Plot training, validation and test set accuracy I want to plot the output of this simple neural network: ``` model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy']) history = model.fit(x_test, y_test, nb_epoch=10, validation_split=0.2, shuffle=True) model.test_on_batch(x_test, y_test) model.metrics_names ``` I have plotted accuracy and loss of training and validation: ``` print(history.history.keys()) # \"Accuracy\" plt.plot(history.history['acc']) plt.plot(history.history['val_acc']) plt.title('model accuracy') plt.ylabel('accuracy') plt.xlabel('epoch') plt.legend(['train', 'validation'], loc='upper left') plt.show() # \"Loss\" plt.plot(history.history['loss']) plt.plot(history.history['val_loss']) plt.title('model loss') plt.ylabel('loss') plt.xlabel('epoch') plt.legend(['train', 'validation'], loc='upper left') plt.show() ``` Now I want to add and plot test set's accuracy from model.test_on_batch(x_test, y_test), but from model.metrics_names I obtain the same value 'acc' utilized for plotting accuracy on training data plt.plot(history.history['acc']). 
How could I plot test set's accuracy?", "response":"``` import keras from matplotlib import pyplot as plt history = model1.fit(train_x, train_y,validation_split = 0.1, epochs=50, batch_size=4) plt.plot(history.history['accuracy']) plt.plot(history.history['val_accuracy']) plt.title('model accuracy') plt.ylabel('accuracy') plt.xlabel('epoch') plt.legend(['train', 'val'], loc='upper left') plt.show() ``` ``` plt.plot(history.history['loss']) plt.plot(history.history['val_loss']) plt.title('model loss') plt.ylabel('loss') plt.xlabel('epoch') plt.legend(['train', 'val'], loc='upper left') plt.show() ```", "best_answers_score":0.8, "library_name":"matplotlib", "question_url":"https:\/\/stackoverflow.com\/questions\/41908379\/keras-plot-training-validation-and-test-set-accuracy", "best_answers_votes":152, "question_length":1138, "response_length":554 }, { "question":"Positioning the colorbar I have a matplotlib plot with a colorbar attached. I want to position the colorbar so that it is horizontal, and underneath my plot. I have almost done this via the following: ``` plt.colorbar(orientation=\"horizontal\",fraction=0.07,anchor=(1.0,0.0)) ``` But the colorbar is still overlapping with the plot slightly (and the labels of the x axis). I want to move the colorbar further down, but I can't figure out how to do it.", "response":"using padding pad In order to move the colorbar relative to the subplot, one may use the pad argument to fig.colorbar. ``` import matplotlib.pyplot as plt import numpy as np; np.random.seed(1) fig, ax = plt.subplots(figsize=(4,4)) im = ax.imshow(np.random.rand(11,16)) ax.set_xlabel(\"x label\") fig.colorbar(im, orientation=\"horizontal\", pad=0.2) plt.show() ``` using an axes divider One can use an instance of make_axes_locatable to divide the axes and create a new axes which is perfectly aligned to the image plot. Again, the pad argument would allow to set the space between the two axes. 
``` import matplotlib.pyplot as plt from mpl_toolkits.axes_grid1 import make_axes_locatable import numpy as np; np.random.seed(1) fig, ax = plt.subplots(figsize=(4,4)) im = ax.imshow(np.random.rand(11,16)) ax.set_xlabel(\"x label\") divider = make_axes_locatable(ax) cax = divider.new_vertical(size=\"5%\", pad=0.7, pack_start=True) fig.add_axes(cax) fig.colorbar(im, cax=cax, orientation=\"horizontal\") plt.show() ``` using subplots One can directly create two rows of subplots, one for the image and one for the colorbar. Then, setting the height_ratios as gridspec_kw={\"height_ratios\":[1, 0.05]} in the figure creation, makes one of the subplots much smaller in height than the other and this small subplot can host the colorbar. ``` import matplotlib.pyplot as plt import numpy as np; np.random.seed(1) fig, (ax, cax) = plt.subplots(nrows=2,figsize=(4,4), gridspec_kw={\"height_ratios\":[1, 0.05]}) im = ax.imshow(np.random.rand(11,16)) ax.set_xlabel(\"x label\") fig.colorbar(im, cax=cax, orientation=\"horizontal\") plt.show() ```", "best_answers_score":0.8, "library_name":"matplotlib", "question_url":"https:\/\/stackoverflow.com\/questions\/13310594\/positioning-the-colorbar", "best_answers_votes":130, "question_length":450, "response_length":1617 }, { "question":"creating over 20 unique legend colors using matplotlib I am plotting 20 different lines on a single plot using matplotlib. I use a for loop for plotting and label every line with its key and then use the legend function ``` for key in dict.keys(): plot(x,dict[key], label = key) graph.legend() ``` But using this way, the graph repeats a lot of colors in the legend. Is there any way to ensure a unique color is assigned to each line using matplotlib and over 20 lines? thanks", "response":"The answer to your question is related to two other SO questions. The answer to How to pick a new color for each plotted line within a figure in matplotlib? 
explains how to define the default list of colors that is cycled through to pick the next color to plot. This is done with the Axes.set_color_cycle method. You want to get the correct list of colors though, and this is most easily done using a color map, as is explained in the answer to this question: Create a color generator from given colormap in matplotlib. There a color map takes a value from 0 to 1 and returns a color. So for your 20 lines, you want to cycle from 0 to 1 in steps of 1\/20. Specifically you want to cycle form 0 to 19\/20, because 1 maps back to 0. This is done in this example: ``` import matplotlib.pyplot as plt import numpy as np NUM_COLORS = 20 cm = plt.get_cmap('gist_rainbow') fig = plt.figure() ax = fig.add_subplot(111) ax.set_prop_cycle(color=[cm(1.*i\/NUM_COLORS) for i in range(NUM_COLORS)]) for i in range(NUM_COLORS): ax.plot(np.arange(10)*(i+1)) fig.savefig('moreColors.png') plt.show() ``` This is the resulting figure: Alternative, better (debatable) solution There is an alternative way that uses a ScalarMappable object to convert a range of values to colors. The advantage of this method is that you can use a non-linear Normalization to convert from line index to actual color. 
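As an aside, here is a sketch of what swapping in a non-linear normalization could look like — the LogNorm choice and the line_colors name below are my own illustration, not part of the original answer: ```python
import matplotlib
matplotlib.use('Agg')  # avoid needing a display for this sketch
import matplotlib.pyplot as plt
import matplotlib.cm as mplcm
import matplotlib.colors as colors

NUM_COLORS = 20
cm = plt.get_cmap('gist_rainbow')
# log-spaced normalization: early indices are spread further apart in color
cNorm = colors.LogNorm(vmin=1, vmax=NUM_COLORS)
scalarMap = mplcm.ScalarMappable(norm=cNorm, cmap=cm)
# use i + 1 so the smallest index stays inside LogNorm's positive domain
line_colors = [scalarMap.to_rgba(i + 1) for i in range(NUM_COLORS)]
``` These colors could then be fed to set_prop_cycle(color=line_colors) just as in the linear case.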
The following code produces the same exact result: ``` import matplotlib.pyplot as plt import matplotlib.cm as mplcm import matplotlib.colors as colors import numpy as np NUM_COLORS = 20 cm = plt.get_cmap('gist_rainbow') cNorm = colors.Normalize(vmin=0, vmax=NUM_COLORS-1) scalarMap = mplcm.ScalarMappable(norm=cNorm, cmap=cm) fig = plt.figure() ax = fig.add_subplot(111) # old way: #ax.set_prop_cycle(color=[cm(1.*i\/NUM_COLORS) for i in range(NUM_COLORS)]) # new way: ax.set_prop_cycle(color=[scalarMap.to_rgba(i) for i in range(NUM_COLORS)]) for i in range(NUM_COLORS): ax.plot(np.arange(10)*(i+1)) fig.savefig('moreColors.png') plt.show() ```", "best_answers_score":0.8, "library_name":"matplotlib", "question_url":"https:\/\/stackoverflow.com\/questions\/8389636\/creating-over-20-unique-legend-colors-using-matplotlib", "best_answers_votes":149, "question_length":476, "response_length":2023 }, { "question":"Reset color cycle in Matplotlib Say I have data about 3 trading strategies, each with and without transaction costs. I want to plot, on the same axes, the time series of each of the 6 variants (3 strategies * 2 trading costs). I would like the \"with transaction cost\" lines to be plotted with alpha=1 and linewidth=1 while I want the \"no transaction costs\" to be plotted with alpha=0.25 and linewidth=5. But I would like the color to be the same for both versions of each strategy. 
I would like something along the lines of: ``` fig, ax = plt.subplots(1, 1, figsize=(10, 10)) for c in with_transaction_frame.columns: ax.plot(with_transaction_frame[c], label=c, alpha=1, linewidth=1) ****SOME MAGIC GOES HERE TO RESET THE COLOR CYCLE for c in no_transaction_frame.columns: ax.plot(no_transaction_frame[c], label=c, alpha=0.25, linewidth=5) ax.legend() ``` What is the appropriate code to put on the indicated line to reset the color cycle so it is \"back to the start\" when the second loop is invoked?", "response":"In Matplotlib >= 1.5: ``` plt.gca().set_prop_cycle(None) for i in range(3): plt.plot(np.arange(10, 1, -1) + i) plt.show() ```", "best_answers_score":0.8, "library_name":"matplotlib", "question_url":"https:\/\/stackoverflow.com\/questions\/24193174\/reset-color-cycle-in-matplotlib", "best_answers_votes":121, "question_length":999, "response_length":119 }, { "question":"What is %pylab? I keep seeing people use %pylab in various code snippets, particularly with IPython. However, I cannot see where %pylab is mentioned anywhere in Learning Python (and the few other Python books I have) and am not really sure what it means. I'm sure the answer is simple, but can anyone enlighten me?", "response":"%pylab is a magic function in ipython. Magic functions in ipython always begin with the percent sign (%) followed without any spaces by a small text string; in essence, ipython magic functions define shortcuts particularly useful for interactive work, e.g., to give you an idea of how magic functions work in python, a few of my favorites: to view cwd directory contents: ``` %ls ``` to run a script in ipython using an empty namespace, type %run followed by a space and the script name: ``` %run ``` to execute a code snippet (particularly for multi-line snippets which would usually cause an _IndentationError_ to be thrown): ``` %paste ``` When the %pylab magic function is entered at the IPython prompt, it triggers the import of various modules within Matplotlib. Which modules?
well, the ones subsumed under the pylab interface. The awesome Matplotlib plotting library has two distinct interfaces: a pythonic one, and the original MATLAB-like one intended for plotting at the interactive prompt. The former is usually imported like so: ``` from matplotlib import pyplot as PLT ``` Indeed, IPython has a dedicated magic function for setting this up: ``` %matplotlib ``` Why two different interfaces? Matplotlib's original interface was pylab; only later was the pythonic interface added. Scripting and app development were not the primary use cases for Matplotlib when the project began, plotting in the python shell was. Apparently John Hunter (Matplotlib's creator) wanted to include interactive plotting in python so he submitted a patch to Fernando Perez's (FP) IPython project. FP was a Ph.D. student at the time and informed JH that he would not be able to review the patch for some time. As a result, JH created Matplotlib. The significance is that Matplotlib began as a shell-based plotting scheme. The pylab interface is indeed more suitable for interactive work: ``` from pylab import * x = arange(10) y = cos(x\/2) plot(x, y) show() ``` and using the pyplot interface: ``` from matplotlib import pyplot as PLT import numpy as NP x = NP.arange(10) y = NP.cos(x\/2) fig = PLT.figure() ax1 = fig.add_subplot(111) ax1.plot(x, y) PLT.show() ```", "best_answers_score":0.8, "library_name":"matplotlib", "question_url":"https:\/\/stackoverflow.com\/questions\/20961287\/what-is-pylab", "best_answers_votes":85, "question_length":314, "response_length":2112 }, { "question":"Set legend symbol opacity I'm working on a plot with translucent 'x' markers (20% alpha). How do I make the marker appear at 100% opacity in the legend? ``` import matplotlib.pyplot as plt plt.plot_date( x = xaxis, y = yaxis, marker = 'x', color=[1, 0, 0, .2], label='Data Series' ) plt.legend(loc=3, mode=\"expand\", numpoints=1, scatterpoints=1 ) ```", "response":"UPDATED: There is an easier way!
First, assign your legend to a variable when you create it: ``` leg = plt.legend() ``` Then: ``` for lh in leg.legendHandles: lh.set_alpha(1) ``` OR if the above doesn't work (you may be using an older version of matplotlib): ``` for lh in leg.legendHandles: lh._legmarker.set_alpha(1) ``` and since Matplotlib 3.9: ``` for lh in leg.legend_handles: lh.set_alpha(1) ``` to make your markers opaque for a plt.plot or a plt.scatter, respectively. Note that using simply lh.set_alpha(1) on a plt.plot will make the lines in your legend opaque rather than the markers. You should be able to adapt these two possibilities for the other plot types. Sources: Synthesized from some good advice by DrV about marker sizes. Update was inspired by useful comment from Owen.", "best_answers_score":0.8, "library_name":"matplotlib", "question_url":"https:\/\/stackoverflow.com\/questions\/12848808\/set-legend-symbol-opacity", "best_answers_votes":128, "question_length":350, "response_length":794 }, { "question":"Change figure size and figure format in matplotlib [duplicate] This question already has answers here: How do I change the size of figures drawn with Matplotlib? (16 answers) Closed 4 years ago. I want fig1 to be exactly 4 by 3 inches in size, and in tiff format, correcting the program below: ```py import matplotlib.pyplot as plt list1 = [3,4,5,6,9,12] list2 = [8,12,14,15,17,20] plt.plot(list1, list2) plt.savefig('fig1.png', dpi = 300) plt.close() ```", "response":"You can set the figure size if you explicitly create the figure with ``` plt.figure(figsize=(4,3)) ``` (figsize takes width and height in inches). You need to set the figure size before calling plt.plot(). To change the format of the saved figure just change the extension in the file name.
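For instance, a minimal sketch putting both together — the Agg backend and the png file name here are my choices for a self-contained example: ```python
import matplotlib
matplotlib.use('Agg')  # headless backend so this runs without a display
import matplotlib.pyplot as plt

list1 = [3, 4, 5, 6, 9, 12]
list2 = [8, 12, 14, 15, 17, 20]

fig = plt.figure(figsize=(4, 3))  # width, height in inches
plt.plot(list1, list2)
fig.savefig('fig1.png', dpi=300)  # 4x3 inches rendered at 300 dpi
plt.close(fig)
``` Changing the extension passed to savefig is how you would request another format.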
However, I don't know if any of matplotlib backends support tiff", "best_answers_score":0.8, "library_name":"matplotlib", "question_url":"https:\/\/stackoverflow.com\/questions\/17109608\/change-figure-size-and-figure-format-in-matplotlib", "best_answers_votes":159, "question_length":457, "response_length":306 }, { "question":"Pandas timeseries plot setting x-axis major and minor ticks and labels I want to be able to set the major and minor xticks and their labels for a time series graph plotted from a Pandas time series object. The Pandas 0.9 \"what's new\" page says: \"you can either use to_pydatetime or register a converter for the Timestamp type\" but I can't work out how to do that so that I can use the matplotlib ax.xaxis.set_major_locator and ax.xaxis.set_major_formatter (and minor) commands. If I use them without converting the pandas times, the x-axis ticks and labels end up wrong. By using the 'xticks' parameter, I can pass the major ticks to pandas' .plot, and then set the major tick labels. I can't work out how to do the minor ticks using this approach (I can set the labels on the default minor ticks set by pandas' .plot). 
Here is my test code: Graph with strange dates on xaxis ```py import pandas as pd import matplotlib.dates as mdates import numpy as np dateIndex = pd.date_range(start='2011-05-01', end='2011-07-01', freq='D') testSeries = pd.Series(data=np.random.randn(len(dateIndex)), index=dateIndex) ax = plt.figure(figsize=(7,4), dpi=300).add_subplot(111) testSeries.plot(ax=ax, style='v-', label='first line') # using MatPlotLib date time locators and formatters doesn't work with new # pandas datetime index ax.xaxis.set_minor_locator(mdates.WeekdayLocator()) ax.xaxis.set_minor_formatter(mdates.DateFormatter('%d\\n%a')) ax.xaxis.grid(True, which=\"minor\") ax.xaxis.grid(False, which=\"major\") ax.xaxis.set_major_formatter(mdates.DateFormatter('\\n\\n\\n%b%Y')) plt.show() ``` Graph with correct dates (without minor ticks) ```py # set the major xticks and labels through pandas ax2 = plt.figure(figsize=(7,4), dpi=300).add_subplot(111) xticks = pd.date_range(start='2011-05-01', end='2011-07-01', freq='W-Tue') testSeries.plot(ax=ax2, style='-v', label='second line', xticks=xticks.to_pydatetime()) ax2.set_xticklabels([x.strftime('%a\\n%d\\n%h\\n%Y') for x in xticks]); # remove the minor xtick labels set by pandas.plot ax2.set_xticklabels([], minor=True) # turn the minor ticks created by pandas.plot off plt.show() ``` Update: I've been able to get closer to the layout I wanted by using a loop to build the major xtick labels: ```py # only show month for first label in month month = dStart.month - 1 xticklabels = [] for x in xticks: if month != x.month : xticklabels.append(x.strftime('%d\\n%a\\n%h')) month = x.month else: xticklabels.append(x.strftime('%d\\n%a')) ``` However, this is a bit like doing the x-axis using ax.annotate: possible but not ideal. How do I set the major and minor ticks when plotting pandas time-series data?", "response":"Both pandas and matplotlib.dates use matplotlib.units for locating the ticks. 
But while matplotlib.dates has convenient ways to set the ticks manually, pandas seems to have the focus on auto formatting so far (you can have a look at the code for date conversion and formatting in pandas). So for the moment it seems more reasonable to use matplotlib.dates (as mentioned by @BrenBarn in his comment). ``` import numpy as np import pandas as pd import matplotlib.pyplot as plt import matplotlib.dates as dates idx = pd.date_range('2011-05-01', '2011-07-01') s = pd.Series(np.random.randn(len(idx)), index=idx) fig, ax = plt.subplots() ax.plot_date(idx.to_pydatetime(), s, 'v-') ax.xaxis.set_minor_locator(dates.WeekdayLocator(byweekday=(1), interval=1)) ax.xaxis.set_minor_formatter(dates.DateFormatter('%d\\n%a')) ax.xaxis.grid(True, which=\"minor\") ax.yaxis.grid() ax.xaxis.set_major_locator(dates.MonthLocator()) ax.xaxis.set_major_formatter(dates.DateFormatter('\\n\\n\\n%b\\n%Y')) plt.tight_layout() plt.show() ``` (my locale is German, so that Tuesday [Tue] becomes Dienstag [Di])", "best_answers_score":0.8, "library_name":"matplotlib", "question_url":"https:\/\/stackoverflow.com\/questions\/12945971\/pandas-timeseries-plot-setting-x-axis-major-and-minor-ticks-and-labels", "best_answers_votes":94, "question_length":2642, "response_length":1078 }, { "question":"How to embed matplotlib in pyqt I want to embed a graph in a pyqt4 user interface. I do not understand how people did the embedding in the examples I found - this one (at the bottom) and that one. Could anyone post a step-by-step explanation or at least a very small, very simple code only creating e.g. a graph and a button in one pyqt4 GUI.", "response":"It is not that complicated actually. Relevant Qt widgets are in matplotlib.backends.backend_qt4agg. FigureCanvasQTAgg and NavigationToolbar2QT are usually what you need. These are regular Qt widgets. You treat them as any other widget. Below is a very simple example with a Figure, Navigation and a single button that draws some random data. 
I've added comments to explain things. ``` import sys from PyQt4 import QtGui from matplotlib.backends.backend_qt4agg import FigureCanvasQTAgg as FigureCanvas from matplotlib.backends.backend_qt4agg import NavigationToolbar2QT as NavigationToolbar from matplotlib.figure import Figure import random class Window(QtGui.QDialog): def __init__(self, parent=None): super(Window, self).__init__(parent) # a figure instance to plot on self.figure = Figure() # this is the Canvas Widget that displays the `figure` # it takes the `figure` instance as a parameter to __init__ self.canvas = FigureCanvas(self.figure) # this is the Navigation widget # it takes the Canvas widget and a parent self.toolbar = NavigationToolbar(self.canvas, self) # Just some button connected to `plot` method self.button = QtGui.QPushButton('Plot') self.button.clicked.connect(self.plot) # set the layout layout = QtGui.QVBoxLayout() layout.addWidget(self.toolbar) layout.addWidget(self.canvas) layout.addWidget(self.button) self.setLayout(layout) def plot(self): ''' plot some random stuff ''' # random data data = [random.random() for i in range(10)] # create an axis ax = self.figure.add_subplot(111) # discards the old graph ax.clear() # plot data ax.plot(data, '*-') # refresh canvas self.canvas.draw() if __name__ == '__main__': app = QtGui.QApplication(sys.argv) main = Window() main.show() sys.exit(app.exec_()) ``` Edit: Updated to reflect comments and API changes. NavigationToolbar2QTAgg changed with NavigationToolbar2QT Directly import Figure instead of pyplot Replace deprecated ax.hold(False) with ax.clear()", "best_answers_score":0.8, "library_name":"matplotlib", "question_url":"https:\/\/stackoverflow.com\/questions\/12459811\/how-to-embed-matplotlib-in-pyqt", "best_answers_votes":124, "question_length":342, "response_length":1935 }, { "question":"How to prevent numbers being changed to exponential form in a plot I'm using Matplotlib in Python to plot simple x-y datasets. 
This produces nice-looking graphs, although when I \"zoom in\" too close on various sections of the plotted graph using the Figure View (which appears when you execute plt.show() ), the x-axis values change from standard number form (1050, 1060, 1070 etc.) to scientific form with exponential notation (e.g. 1, 1.5, 2.0 with the x-axis label given as +1.057e3). I'd prefer my figures to retain the simple numbering of the axis, rather than using exponential form. Is there a way I can force Matplotlib to do this?", "response":"The formatting of tick labels is controlled by a Formatter object, which, assuming you haven't done anything fancy, will be a ScalarFormatter by default. This formatter will use a constant shift if the fractional change of the values visible is very small. To avoid this, simply turn it off: ``` plt.plot(arange(0,100,10) + 1000, arange(0,100,10)) ax = plt.gca() ax.get_xaxis().get_major_formatter().set_useOffset(False) plt.draw() ``` If you want to avoid scientific notation in general: ``` ax.get_xaxis().get_major_formatter().set_scientific(False) ``` You can control this globally via the axes.formatter.useoffset rcparam.", "best_answers_score":0.8, "library_name":"matplotlib", "question_url":"https:\/\/stackoverflow.com\/questions\/14711655\/how-to-prevent-numbers-being-changed-to-exponential-form-in-a-plot", "best_answers_votes":106, "question_length":638, "response_length":625 }, { "question":"Fast Live Plotting in Matplotlib \/ PyPlot For years, I've been struggling to get efficient live plotting in matplotlib, and to this day I remain unsatisfied. I want a redraw_figure function that updates the figure \"live\" (as the code runs), and will display the latest plots if I stop at a breakpoint.
Here is some demo code: ```py import numpy as np import time from matplotlib import pyplot as plt def live_update_demo(): plt.subplot(2, 1, 1) h1 = plt.imshow(np.random.randn(30, 30)) redraw_figure() plt.subplot(2, 1, 2) h2, = plt.plot(np.random.randn(50)) redraw_figure() t_start = time.time() for i in xrange(1000): h1.set_data(np.random.randn(30, 30)) redraw_figure() h2.set_ydata(np.random.randn(50)) redraw_figure() print 'Mean Frame Rate: %.3gFPS' % ((i+1) \/ (time.time() - t_start)) def redraw_figure(): plt.draw() plt.pause(0.00001) live_update_demo() ``` Plots should update live when the code is run, and we should see the latest data when stopping at any breakpoint after redraw_figure(). The question is how to best implement redraw_figure() In the implementation above (plt.draw(); plt.pause(0.00001)), it works, but is very slow (~3.7FPS) I can implement it as: ```py def redraw_figure(): plt.gcf().canvas.flush_events() plt.show(block=False) ``` And it runs faster (~11FPS), but plots are not up-to date when you stop at breakpoints (eg if I put a breakpoint on the t_start = ... line, the second plot does not appear). Strangely enough, what does actually work is calling the show twice: ```py def redraw_figure(): plt.gcf().canvas.flush_events() plt.show(block=False) plt.show(block=False) ``` Which gives ~11FPS and does keep plots up-to-data if your break on any line. Now I've heard it said that the \"block\" keyword is deprecated. And calling the same function twice seems like a weird, probably-non-portable hack anyway. So what can I put in this function that will plot at a reasonable frame rate, isn't a giant kludge, and preferably will work across backends and systems? Some notes: I'm on OSX, and using TkAgg backend, but solutions on any backend\/system are welcome Interactive mode \"On\" will not work, because it does not update live. It just updates when in the Python console when the interpreter waits for user input. 
A blog suggested the implementation: ```py def redraw_figure(): fig = plt.gcf() fig.canvas.draw() fig.canvas.flush_events() ``` But at least on my system, that does not redraw the plots at all. So, if anybody has an answer, you would directly make me and thousands of others very happy. Their happiness would probably trickle through to their friends and relatives, and their friends and relatives, and so on, so that you could potentially improve the lives of billions. Conclusions ImportanceOfBeingErnest shows how you can use blit for faster plotting, but it's not as simple as putting something different in the redraw_figure function (you need to keep track of what things to redraw).", "response":"First of all, the code that is posted in the question runs with 7 fps on my machine, with QT4Agg as backend. Now, as has been suggested in many posts, like here or here, using blit might be an option. Although this article mentions that blit causes strong memory leakage, I could not observe that. I have modified your code a bit and compared the frame rate with and without the use of blit. The code below gives 28 fps when run without blit 175 fps with blit Code: ``` import time from matplotlib import pyplot as plt import numpy as np def live_update_demo(blit = False): x = np.linspace(0,50., num=100) X,Y = np.meshgrid(x,x) fig = plt.figure() ax1 = fig.add_subplot(2, 1, 1) ax2 = fig.add_subplot(2, 1, 2) img = ax1.imshow(X, vmin=-1, vmax=1, interpolation=\"None\", cmap=\"RdBu\") line, = ax2.plot([], lw=3) text = ax2.text(0.8,0.5, \"\") ax2.set_xlim(x.min(), x.max()) ax2.set_ylim([-1.1, 1.1]) fig.canvas.draw() # note that the first draw comes before setting data if blit: # cache the background axbackground = fig.canvas.copy_from_bbox(ax1.bbox) ax2background = fig.canvas.copy_from_bbox(ax2.bbox) plt.show(block=False) t_start = time.time() k=0. 
for i in np.arange(1000): img.set_data(np.sin(X\/3.+k)*np.cos(Y\/3.+k)) line.set_data(x, np.sin(x\/3.+k)) tx = 'Mean Frame Rate:\\n {fps:.3f}FPS'.format(fps= ((i+1) \/ (time.time() - t_start)) ) text.set_text(tx) #print tx k+=0.11 if blit: # restore background fig.canvas.restore_region(axbackground) fig.canvas.restore_region(ax2background) # redraw just the points ax1.draw_artist(img) ax2.draw_artist(line) ax2.draw_artist(text) # fill in the axes rectangle fig.canvas.blit(ax1.bbox) fig.canvas.blit(ax2.bbox) # in this post http:\/\/bastibe.de\/2013-05-30-speeding-up-matplotlib.html # it is mentionned that blit causes strong memory leakage. # however, I did not observe that. else: # redraw everything fig.canvas.draw() fig.canvas.flush_events() #alternatively you could use #plt.pause(0.000000000001) # however plt.pause calls canvas.draw(), as can be read here: #http:\/\/bastibe.de\/2013-05-30-speeding-up-matplotlib.html live_update_demo(True) # 175 fps #live_update_demo(False) # 28 fps ``` Update: For faster plotting, one may consider using pyqtgraph. As the pyqtgraph documentation puts it: \"For plotting, pyqtgraph is not nearly as complete\/mature as matplotlib, but runs much faster.\" I ported the above example to pyqtgraph. And although it looks kind of ugly, it runs with 250 fps on my machine. 
Summing that up, matplotlib (without blitting): 28 fps matplotlib (with blitting): 175 fps pyqtgraph : 250 fps pyqtgraph code: ``` import sys import time from pyqtgraph.Qt import QtCore, QtGui import numpy as np import pyqtgraph as pg class App(QtGui.QMainWindow): def __init__(self, parent=None): super(App, self).__init__(parent) #### Create Gui Elements ########### self.mainbox = QtGui.QWidget() self.setCentralWidget(self.mainbox) self.mainbox.setLayout(QtGui.QVBoxLayout()) self.canvas = pg.GraphicsLayoutWidget() self.mainbox.layout().addWidget(self.canvas) self.label = QtGui.QLabel() self.mainbox.layout().addWidget(self.label) self.view = self.canvas.addViewBox() self.view.setAspectLocked(True) self.view.setRange(QtCore.QRectF(0,0, 100, 100)) # image plot self.img = pg.ImageItem(border='w') self.view.addItem(self.img) self.canvas.nextRow() # line plot self.otherplot = self.canvas.addPlot() self.h2 = self.otherplot.plot(pen='y') #### Set Data ##################### self.x = np.linspace(0,50., num=100) self.X,self.Y = np.meshgrid(self.x,self.x) self.counter = 0 self.fps = 0. self.lastupdate = time.time() #### Start ##################### self._update() def _update(self): self.data = np.sin(self.X\/3.+self.counter\/9.)*np.cos(self.Y\/3.+self.counter\/9.) self.ydata = np.sin(self.x\/3.+ self.counter\/9.) 
self.img.setImage(self.data) self.h2.setData(self.ydata) now = time.time() dt = (now-self.lastupdate) if dt <= 0: dt = 0.000000000001 fps2 = 1.0 \/ dt self.lastupdate = now self.fps = self.fps * 0.9 + fps2 * 0.1 tx = 'Mean Frame Rate: {fps:.3f} FPS'.format(fps=self.fps ) self.label.setText(tx) QtCore.QTimer.singleShot(1, self._update) self.counter += 1 if __name__ == '__main__': app = QtGui.QApplication(sys.argv) thisapp = App() thisapp.show() sys.exit(app.exec_()) ```", "best_answers_score":0.8, "library_name":"matplotlib", "question_url":"https:\/\/stackoverflow.com\/questions\/40126176\/fast-live-plotting-in-matplotlib-pyplot", "best_answers_votes":109, "question_length":2940, "response_length":4241 }, { "question":"How to generate audio from a numpy array? I want to create \"heart rate monitor\" effect from a 2D array in numpy and want the tone to reflect the values in the array.", "response":"You can use the write function from scipy.io.wavfile to create a wav file which you can then play however you wish. 
Note that the array must be integers, so if you have floats, you might want to scale them appropriately: ``` import numpy as np from scipy.io.wavfile import write rate = 44100 data = np.random.uniform(-1, 1, rate) # 1 second worth of random samples between -1 and 1 scaled = np.int16(data \/ np.max(np.abs(data)) * 32767) write('test.wav', rate, scaled) ``` If you want Python to actually play audio, then this page provides an overview of some of the packages\/modules.", "best_answers_score":0.8, "library_name":"matplotlib", "question_url":"https:\/\/stackoverflow.com\/questions\/10357992\/how-to-generate-audio-from-a-numpy-array", "best_answers_votes":112, "question_length":165, "response_length":584 }, { "question":"RuntimeError: The current NumPy installation fails to pass a sanity check due to a bug in the windows runtime [duplicate] This question already has answers here: How do you fix \"runtimeError: package fails to pass a sanity check\" for numpy and pandas? (9 answers) Closed 4 years ago. I am using Python 3.9 on Windows 10 version 2004 x64. PowerShell as Administrator. Python version: ```none Python 3.9.0 (tags\/v3.9.0:9cf6752, Oct 5 2020, 15:34:40) [MSC v.1927 64 bit (AMD64)] on win32 ``` Install matplotlib error. ```none pip install virtualenv virtualenv foo cd .\\foo .\\Scripts\\active pip install numpy pip install matplotlib ``` Error ```none Windows PowerShell Copyright (C) Microsoft Corporation. All rights reserved. Try the new cross-platform PowerShell https:\/\/aka.ms\/pscore6 PS C:\\WINDOWS\\system32> Set-ExecutionPolicy Unrestricted -Force PS C:\\WINDOWS\\system32> cd \/d C:\\Windows\\System32\\cmd.exe Set-Location : A positional parameter cannot be found that accepts argument 'C:\\Windows\\System32\\cmd.exe'. 
At line:1 char:1 + cd \/d C:\\Windows\\System32\\cmd.exe + ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ + CategoryInfo : InvalidArgument: (:) [Set-Location], ParameterBindingException + FullyQualifiedErrorId : PositionalParameterNotFound,Microsoft.PowerShell.Commands.SetLocationCommand PS C:\\WINDOWS\\system32> cd C:\\Windows\\System32\\cmd.exe cd : Cannot find path 'C:\\Windows\\System32\\cmd.exe' because it does not exist. At line:1 char:1 + cd C:\\Windows\\System32\\cmd.exe + ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ + CategoryInfo : ObjectNotFound: (C:\\Windows\\System32\\cmd.exe:String) [Set-Location], ItemNotFoundExcepti on + FullyQualifiedErrorId : PathNotFound,Microsoft.PowerShell.Commands.SetLocationCommand PS C:\\WINDOWS\\system32> cd D:\\ PS D:\\> cd .\\Users\\donhuvy\\ PS D:\\Users\\donhuvy> ls Directory: D:\\Users\\donhuvy Mode LastWriteTime Length Name ---- ------------- ------ ---- d----- 10\/26\/2020 3:35 PM AppData d----- 11\/7\/2020 9:33 AM PycharmProjects PS D:\\Users\\donhuvy> cd .\\PycharmProjects\\pythonProject\\ PS D:\\Users\\donhuvy\\PycharmProjects\\pythonProject> virtualenv foo virtualenv : The term 'virtualenv' is not recognized as the name of a cmdlet, function, script file, or operable program. Check the spelling of the name, or if a path was included, verify that the path is correct and try again. 
At line:1 char:1 + virtualenv foo + ~~~~~~~~~~ + CategoryInfo : ObjectNotFound: (virtualenv:String) [], CommandNotFoundException + FullyQualifiedErrorId : CommandNotFoundException PS D:\\Users\\donhuvy\\PycharmProjects\\pythonProject> pip install virtualenv Collecting virtualenv Downloading virtualenv-20.1.0-py2.py3-none-any.whl (4.9 MB) |\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588| 4.9 MB 1.1 MB\/s Collecting distlib=0.3.1 Downloading distlib-0.3.1-py2.py3-none-any.whl (335 kB) |\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588| 335 kB 6.4 MB\/s Requirement already satisfied: six=1.9.0 in c:\\users\\donhuvy\\appdata\\roaming\\python\\python39\\site-packages (from virtualenv) (1.15.0) Collecting filelock=3.0.0 Downloading filelock-3.0.12-py3-none-any.whl (7.6 kB) Collecting appdirs=1.4.3 Downloading appdirs-1.4.4-py2.py3-none-any.whl (9.6 kB) Installing collected packages: distlib, filelock, appdirs, virtualenv Successfully installed appdirs-1.4.4 distlib-0.3.1 filelock-3.0.12 virtualenv-20.1.0 PS D:\\Users\\donhuvy\\PycharmProjects\\pythonProject> virtualenv foo created virtual environment CPython3.9.0.final.0-64 in 1312ms creator CPython3Windows(dest=D:\\Users\\donhuvy\\PycharmProjects\\pythonProject\\foo, clear=False, global=False) seeder FromAppData(download=False, pip=bundle, setuptools=bundle, wheel=bundle, via=copy, app_data_dir=C:\\Users\\donhuvy\\AppData\\Local\\pypa\\virtualenv) added seed packages: pip==20.2.4, setuptools==50.3.2, wheel==0.35.1 activators BashActivator,BatchActivator,FishActivator,PowerShellActivator,PythonActivator,XonshActivator PS D:\\Users\\donhuvy\\PycharmProjects\\pythonProject> cd .\\foo PS 
D:\\Users\\donhuvy\\PycharmProjects\\pythonProject\\foo> .\\Scripts\\activate (foo) PS D:\\Users\\donhuvy\\PycharmProjects\\pythonProject\\foo> pip install numpy Collecting numpy Using cached numpy-1.19.4-cp39-cp39-win_amd64.whl (13.0 MB) Installing collected packages: numpy Successfully installed numpy-1.19.4 (foo) PS D:\\Users\\donhuvy\\PycharmProjects\\pythonProject\\foo> pip install matplotlib Collecting matplotlib Using cached matplotlib-3.3.2.tar.gz (37.9 MB) ** On entry to DGEBAL parameter number 3 had an illegal value ** On entry to DGEHRD parameter number 2 had an illegal value ** On entry to DORGHR DORGQR parameter number 2 had an illegal value ** On entry to DHSEQR parameter number 4 had an illegal value ERROR: Command errored out with exit status 1: command: 'D:\\Users\\donhuvy\\PycharmProjects\\pythonProject\\foo\\Scripts\\python.exe' -c 'import sys, setuptools, tokenize; sys.argv[0] = '\"'\"'C:\\\\Users\\\\donhuvy\\\\AppData\\\\Local\\\\Temp\\\\pip-install-8bn40qg7\\\\matplotlib\\\\setup.py'\"'\"'; __file__='\"'\"'C:\\\\Users\\\\donhuvy\\\\AppData\\\\Local\\\\Temp\\\\pip-install-8bn40qg7\\\\matplotlib\\\\setup.py'\"'\"';f=getattr(tokenize, '\"'\"'open'\"'\"', open)(__file__);code=f.read().replace('\"'\"'\\r\\n'\"'\"', '\"'\"'\\n'\"'\"');f.close();exec(compile(code, __file__, '\"'\"'exec'\"'\"'))' egg_info --egg-base 'C:\\Users\\donhuvy\\AppData\\Local\\Temp\\pip-pip-egg-info-39nmc0pe' cwd: C:\\Users\\donhuvy\\AppData\\Local\\Temp\\pip-install-8bn40qg7\\matplotlib\\ Complete output (61 lines): Edit setup.cfg to change the build options; suppress output with --quiet. 
BUILDING MATPLOTLIB matplotlib: yes [3.3.2] python: yes [3.9.0 (tags\/v3.9.0:9cf6752, Oct 5 2020, 15:34:40) [MSC v.1927 64 bit (AMD64)]] platform: yes [win32] sample_data: yes [installing] tests: no [skipping due to configuration] macosx: no [Mac OS-X only] running egg_info creating C:\\Users\\donhuvy\\AppData\\Local\\Temp\\pip-pip-egg-info-39nmc0pe\\matplotlib.egg-info writing C:\\Users\\donhuvy\\AppData\\Local\\Temp\\pip-pip-egg-info-39nmc0pe\\matplotlib.egg-info\\PKG-INFO writing dependency_links to C:\\Users\\donhuvy\\AppData\\Local\\Temp\\pip-pip-egg-info-39nmc0pe\\matplotlib.egg-info\\dependency_links.txt writing namespace_packages to C:\\Users\\donhuvy\\AppData\\Local\\Temp\\pip-pip-egg-info-39nmc0pe\\matplotlib.egg-info\\namespace_packages.txt writing requirements to C:\\Users\\donhuvy\\AppData\\Local\\Temp\\pip-pip-egg-info-39nmc0pe\\matplotlib.egg-info\\requires.txt writing top-level names to C:\\Users\\donhuvy\\AppData\\Local\\Temp\\pip-pip-egg-info-39nmc0pe\\matplotlib.egg-info\\top_level.txt writing manifest file 'C:\\Users\\donhuvy\\AppData\\Local\\Temp\\pip-pip-egg-info-39nmc0pe\\matplotlib.egg-info\\SOURCES.txt' Traceback (most recent call last): File \"\", line 1, in File \"C:\\Users\\donhuvy\\AppData\\Local\\Temp\\pip-install-8bn40qg7\\matplotlib\\setup.py\", line 242, in setup( # Finally, pass this all along to distutils to do the heavy lifting. 
File \"D:\\Users\\donhuvy\\PycharmProjects\\pythonProject\\foo\\lib\\site-packages\\setuptools\\__init__.py\", line 153, in setup return distutils.core.setup(**attrs) File \"d:\\users\\donhuvy\\appdata\\local\\programs\\python\\python39\\lib\\distutils\\core.py\", line 148, in setup dist.run_commands() File \"d:\\users\\donhuvy\\appdata\\local\\programs\\python\\python39\\lib\\distutils\\dist.py\", line 966, in run_commands self.run_command(cmd) File \"d:\\users\\donhuvy\\appdata\\local\\programs\\python\\python39\\lib\\distutils\\dist.py\", line 985, in run_command cmd_obj.run() File \"D:\\Users\\donhuvy\\PycharmProjects\\pythonProject\\foo\\lib\\site-packages\\setuptools\\command\\egg_info.py\", line 298, in run self.find_sources() File \"D:\\Users\\donhuvy\\PycharmProjects\\pythonProject\\foo\\lib\\site-packages\\setuptools\\command\\egg_info.py\", line 305, in find_sources mm.run() File \"D:\\Users\\donhuvy\\PycharmProjects\\pythonProject\\foo\\lib\\site-packages\\setuptools\\command\\egg_info.py\", line 536, in run self.add_defaults() File \"D:\\Users\\donhuvy\\PycharmProjects\\pythonProject\\foo\\lib\\site-packages\\setuptools\\command\\egg_info.py\", line 572, in add_defaults sdist.add_defaults(self) File \"d:\\users\\donhuvy\\appdata\\local\\programs\\python\\python39\\lib\\distutils\\command\\sdist.py\", line 228, in add_defaults self._add_defaults_ext() File \"d:\\users\\donhuvy\\appdata\\local\\programs\\python\\python39\\lib\\distutils\\command\\sdist.py\", line 311, in _add_defaults_ext build_ext = self.get_finalized_command('build_ext') File \"d:\\users\\donhuvy\\appdata\\local\\programs\\python\\python39\\lib\\distutils\\cmd.py\", line 299, in get_finalized_command cmd_obj.ensure_finalized() File \"d:\\users\\donhuvy\\appdata\\local\\programs\\python\\python39\\lib\\distutils\\cmd.py\", line 107, in ensure_finalized self.finalize_options() File \"C:\\Users\\donhuvy\\AppData\\Local\\Temp\\pip-install-8bn40qg7\\matplotlib\\setup.py\", line 88, in 
finalize_options self.distribution.ext_modules[:] = [ File \"C:\\Users\\donhuvy\\AppData\\Local\\Temp\\pip-install-8bn40qg7\\matplotlib\\setup.py\", line 91, in for ext in package.get_extensions() File \"C:\\Users\\donhuvy\\AppData\\Local\\Temp\\pip-install-8bn40qg7\\matplotlib\\setupext.py\", line 345, in get_extensions add_numpy_flags(ext) File \"C:\\Users\\donhuvy\\AppData\\Local\\Temp\\pip-install-8bn40qg7\\matplotlib\\setupext.py\", line 469, in add_numpy_flags import numpy as np File \"D:\\Users\\donhuvy\\PycharmProjects\\pythonProject\\foo\\lib\\site-packages\\numpy\\__init__.py\", line 305, in _win_os_check() File \"D:\\Users\\donhuvy\\PycharmProjects\\pythonProject\\foo\\lib\\site-packages\\numpy\\__init__.py\", line 302, in _win_os_check raise RuntimeError(msg.format(__file__)) from None RuntimeError: The current Numpy installation ('D:\\\\Users\\\\donhuvy\\\\PycharmProjects\\\\pythonProject\\\\foo\\\\lib\\\\site-packages\\\\numpy\\\\__init__.py') fails to pass a sanity check due to a bug in the windows runtime. See this issue for more information: https:\/\/tinyurl.com\/ y3dm3h86 ---------------------------------------- ERROR: Command errored out with exit status 1: python setup.py egg_info Check the logs for full command output. (foo) PS D:\\Users\\donhuvy\\PycharmProjects\\pythonProject\\foo> ``` A screenshot of some of the above text Error information link to fmod(), after an update to windows 2004, is causing a strange interaction with other code I use PyCharm 2020.2 Ultimate, and it also catches the error. How can I fix it?", "response":"The temporary solution is to use NumPy 1.19.3. ```none pip install numpy==1.19.3 ``` From a Microsoft thread, a fix was promised to be available around January 2021. 
It was fixed in the KB4598291 update.", "best_answers_score":0.8, "library_name":"matplotlib", "question_url":"https:\/\/stackoverflow.com\/questions\/64729944\/runtimeerror-the-current-numpy-installation-fails-to-pass-a-sanity-check-due-to", "best_answers_votes":241, "question_length":10048, "response_length":203 }, { "question":"How to recover matplotlib defaults after setting stylesheet In an ipython notebook, I used a matplotlib stylesheet to change the look of my plots using ``` from matplotlib.pyplot import * %matplotlib inline style.use('ggplot') ``` My version of matplotlib is 1.4.0. How do I go back to the default matplotlib styling? I tried all the available styles in ``` print style.available ``` but there doesn't seem to be a \"default\" option. I also tried ``` matplotlib.rcdefaults() ``` For some reason, this gives me a gray background. It also changes the text from gray (ggplot style) to black, which may be the default, but also could be another random style.", "response":"You should be able to set it back to default by: ``` import matplotlib as mpl mpl.rcParams.update(mpl.rcParamsDefault) ``` In ipython, things are a little different, especially with inline backend: ``` In [1]: %matplotlib inline In [2]: import matplotlib as mpl import matplotlib.pyplot as plt In [3]: inline_rc = dict(mpl.rcParams) In [4]: plt.plot(range(10)) Out[4]: [] ``` ``` In [5]: mpl.rcParams.update(mpl.rcParamsDefault) plt.plot(range(10)) Out[5]: [] ``` ``` In [6]: mpl.rcParams.update(inline_rc) plt.plot(range(10)) Out[6]: [] ``` Basically, %matplotlib inline uses its own rcParams. 
You can grab that from the source, but the arguably easier way is probably just save the rcParams as inline_rc after %matplotlib inline cell magic in this example, and reuse that later.", "best_answers_score":0.8, "library_name":"matplotlib", "question_url":"https:\/\/stackoverflow.com\/questions\/26413185\/how-to-recover-matplotlib-defaults-after-setting-stylesheet", "best_answers_votes":110, "question_length":653, "response_length":780 }, { "question":"How to overplot a line on a scatter plot in python? I have two vectors of data and I've put them into pyplot.scatter(). Now I'd like to over plot a linear fit to these data. How would I do this? I've tried using scikitlearn and np.polyfit().", "response":"``` import numpy as np from numpy.polynomial.polynomial import polyfit import matplotlib.pyplot as plt # Sample data x = np.arange(10) y = 5 * x + 10 # Fit with polyfit b, m = polyfit(x, y, 1) plt.plot(x, y, '.') plt.plot(x, b + m * x, '-') plt.show() ```", "best_answers_score":0.8, "library_name":"matplotlib", "question_url":"https:\/\/stackoverflow.com\/questions\/19068862\/how-to-overplot-a-line-on-a-scatter-plot-in-python", "best_answers_votes":152, "question_length":241, "response_length":255 }, { "question":"How to set font size of Matplotlib axis Legend? I have a code like this: ``` import matplotlib.pyplot as plt from matplotlib.pyplot import * from matplotlib.font_manager import FontProperties fontP = FontProperties() fontP.set_size('xx-small') fig=plt.figure() ax1=fig.add_subplot(111) plot([1,2,3], label=\"test1\") ax1.legend(loc=0, ncol=1, bbox_to_anchor=(0, 0, 1, 1), prop = fontP,fancybox=True,shadow=False,title='LEGEND') plt.show() ``` It can be seen in the plot that the setting in Fontsize does not affect the Legend Title font size. 
How to set the font size of the legend title to a smaller size?", "response":"This is definitely an old question, but was frustrating me too and none of the other answers changed the legend title fontsize at all, but instead just changed the rest of the text. So after banging my head against the matplotlib documentation for awhile I came up with this. ``` legend = ax1.legend(loc=0, ncol=1, bbox_to_anchor=(0, 0, 1, 1), prop = fontP,fancybox=True,shadow=False,title='LEGEND') plt.setp(legend.get_title(),fontsize='xx-small') ``` As of Matplotlib 3.0.3, you can also set it globally with ``` plt.rcParams['legend.title_fontsize'] = 'xx-small' ```", "best_answers_score":0.8, "library_name":"matplotlib", "question_url":"https:\/\/stackoverflow.com\/questions\/12402561\/how-to-set-font-size-of-matplotlib-axis-legend", "best_answers_votes":114, "question_length":604, "response_length":569 }, { "question":"Difference between \"axes\" and \"axis\" in matplotlib? I'm confused about what the different between axes and axis is in matplotlib. Could someone please explain in an easy-to-understand way?", "response":"This figure from the documentation will answer your question: You can find this image here (in the Matplotlib 1.x docs); it's actually been replaced in the Matplotlib 2.x docs.", "best_answers_score":0.8, "library_name":"matplotlib", "question_url":"https:\/\/stackoverflow.com\/questions\/5575451\/difference-between-axes-and-axis-in-matplotlib", "best_answers_votes":121, "question_length":188, "response_length":176 }, { "question":"Plotting a stacked Bar Chart I am trying to create a stacked bar graph with pandas that replicates the picture, all my data is separate from that excel spreadsheet. I can't figure out how to make a dataframe for it like pictured, nor can I figure out how to make the stacked bar chart. All examples I locate work in different ways to what I'm trying to create. 
My dataframe is a csv of all values narrowed down to the following with a pandas dataframe. ``` Site Name Abuse\/NFF 0 NORTH ACTON ABUSE 1 WASHINGTON - 2 WASHINGTON NFF 3 BELFAST - 4 CROYDON - ``` I have managed to count the data with totals and get individual counts for each site, I just can't seem to combine it in a way I can graph. Would really appreciate some strong guidance. Completed code, many thanks for the assistance: ``` test5 = faultdf.groupby(['Site Name', 'Abuse\/NFF'])['Site Name'].count().unstack('Abuse\/NFF').fillna(0) test5.plot(kind='bar', stacked=True) ```", "response":"Are you getting errors, or just not sure where to start? ``` %pylab inline import pandas as pd import matplotlib.pyplot as plt df2 = df.groupby(['Name', 'Abuse\/NFF'])['Name'].count().unstack('Abuse\/NFF').fillna(0) df2[['abuse','nff']].plot(kind='bar', stacked=True) ```", "best_answers_score":0.8, "library_name":"matplotlib", "question_url":"https:\/\/stackoverflow.com\/questions\/23415500\/plotting-a-stacked-bar-chart", "best_answers_votes":144, "question_length":946, "response_length":269 }, { "question":"Matplotlib xticks not lining up with histogram I'm generating some histograms with matplotlib and I'm having some trouble figuring out how to get the xticks of a histogram to align with the bars. Here's a sample of the code I use to generate the histogram: ``` from matplotlib import pyplot as py py.hist(histogram_data, 49, alpha=0.75) py.title(column_name) py.xticks(range(49)) py.show() ``` I know that all of the values in the histogram_data array are in [0,1,...,48], which, assuming I did the math right, means there are 49 unique values. I'd like to show a histogram of each of those values. Here's a picture of what's generated.
How can I set up the graph such that all of the xticks are aligned to the left, middle or right of each of the bars?", "response":"Short answer: Use plt.hist(data, bins=range(50)) instead to get left-aligned bins, plt.hist(data, bins=np.arange(50)-0.5) to get center-aligned bins, etc. Also, if performance matters, because you want counts of unique integers, there are a couple of slightly more efficient methods (np.bincount) that I'll show at the end. Problem Statement As a stand-alone example of what you're seeing, consider the following: ``` import matplotlib.pyplot as plt import numpy as np # Generate a random array of integers between 0-9 # data.min() will be 0 and data.max() will be 9 (not 10) data = np.random.randint(0, 10, 1000) plt.hist(data, bins=10) plt.xticks(range(10)) plt.show() ``` As you've noticed, the bins aren't aligned with integer intervals. This is basically because you asked for 10 bins between 0 and 9, which isn't quite the same as asking for bins for the 10 unique values. The number of bins you want isn't exactly the same as the number of unique values. What you actually should do in this case is manually specify the bin edges. To explain what's going on, let's skip matplotlib.pyplot.hist and just use the underlying numpy.histogram function. For example, let's say you have the values [0, 1, 2, 3]. Your first instinct would be to do: ``` In [1]: import numpy as np In [2]: np.histogram([0, 1, 2, 3], bins=4) Out[2]: (array([1, 1, 1, 1]), array([ 0. , 0.75, 1.5 , 2.25, 3. ])) ``` The first array returned is the counts and the second is the bin edges (in other words, where bar edges would be in your plot). Notice that we get the counts we'd expect, but because we asked for 4 bins between the min and max of the data, the bin edges aren't on integer values. 
Next, you might try: ``` In [3]: np.histogram([0, 1, 2, 3], bins=3) Out[3]: (array([1, 1, 2]), array([ 0., 1., 2., 3.])) ``` Note that the bin edges (the second array) are what you were expecting, but the counts aren't. That's because the last bin behaves differently than the others, as noted in the documentation for numpy.histogram: ``` Notes ----- All but the last (righthand-most) bin is half-open. In other words, if `bins` is:: [1, 2, 3, 4] then the first bin is ``[1, 2)`` (including 1, but excluding 2) and the second ``[2, 3)``. The last bin, however, is ``[3, 4]``, which *includes* 4. ``` Therefore, what you actually should do is specify exactly what bin edges you want, and either include one beyond your last data point or shift the bin edges to the 0.5 intervals. For example: ``` In [4]: np.histogram([0, 1, 2, 3], bins=range(5)) Out[4]: (array([1, 1, 1, 1]), array([0, 1, 2, 3, 4])) ``` Bin Alignment Now let's apply this to the first example and see what it looks like: ``` import matplotlib.pyplot as plt import numpy as np # Generate a random array of integers between 0-9 # data.min() will be 0 and data.max() will be 9 (not 10) data = np.random.randint(0, 10, 1000) plt.hist(data, bins=range(11)) # <- The only difference plt.xticks(range(10)) plt.show() ``` Okay, great! However, we now effectively have left-aligned bins. What if we wanted center-aligned bins to better reflect the fact that these are unique values? The quick way is to just shift the bin edges: ``` import matplotlib.pyplot as plt import numpy as np # Generate a random array of integers between 0-9 # data.min() will be 0 and data.max() will be 9 (not 10) data = np.random.randint(0, 10, 1000) bins = np.arange(11) - 0.5 plt.hist(data, bins) plt.xticks(range(10)) plt.xlim([-1, 10]) plt.show() ``` Similarly for right-aligned bins, just shift by -1. Another approach For the particular case of unique integer values, there's another, more efficient approach we can take. 
If you're dealing with unique integer counts starting with 0, you're better off using numpy.bincount than using numpy.histogram. For example: ``` import matplotlib.pyplot as plt import numpy as np data = np.random.randint(0, 10, 1000) counts = np.bincount(data) # Switching to the OO-interface. You can do all of this with \"plt\" as well. fig, ax = plt.subplots() ax.bar(range(10), counts, width=1, align='center') ax.set(xticks=range(10), xlim=[-1, 10]) plt.show() ``` There are two big advantages to this approach. One is speed. numpy.histogram (and therefore plt.hist) basically runs the data through numpy.digitize and then numpy.bincount. Because you're dealing with unique integer values, there's no need to take the numpy.digitize step. However, the bigger advantage is more control over display. If you'd prefer thinner rectangles, just use a smaller width: ``` import matplotlib.pyplot as plt import numpy as np data = np.random.randint(0, 10, 1000) counts = np.bincount(data) # Switching to the OO-interface. You can do all of this with \"plt\" as well. fig, ax = plt.subplots() ax.bar(range(10), counts, width=0.8, align='center') ax.set(xticks=range(10), xlim=[-1, 10]) plt.show() ```", "best_answers_score":0.8, "library_name":"matplotlib", "question_url":"https:\/\/stackoverflow.com\/questions\/27083051\/matplotlib-xticks-not-lining-up-with-histogram", "best_answers_votes":190, "question_length":749, "response_length":4826 }, { "question":"Python equivalent to 'hold on' in Matlab Is there an explicit equivalent command in Python's matplotlib for Matlab's hold on? I'm trying to plot all my graphs on the same axes.
Some graphs are generated inside a for loop, and these are plotted separately from su and sl: ``` import numpy as np import matplotlib.pyplot as plt for i in np.arange(1,5): z = 68 + 4 * np.random.randn(50) zm = np.cumsum(z) \/ range(1,len(z)+1) plt.plot(zm) plt.axis([0,50,60,80]) plt.show() n = np.arange(1,51) su = 68 + 4 \/ np.sqrt(n) sl = 68 - 4 \/ np.sqrt(n) plt.plot(n,su,n,sl) plt.axis([0,50,60,80]) plt.show() ```", "response":"Just call plt.show() at the end: ``` import numpy as np import matplotlib.pyplot as plt plt.axis([0,50,60,80]) for i in np.arange(1,5): z = 68 + 4 * np.random.randn(50) zm = np.cumsum(z) \/ range(1,len(z)+1) plt.plot(zm) n = np.arange(1,51) su = 68 + 4 \/ np.sqrt(n) sl = 68 - 4 \/ np.sqrt(n) plt.plot(n,su,n,sl) plt.show() ```", "best_answers_score":0.8, "library_name":"matplotlib", "question_url":"https:\/\/stackoverflow.com\/questions\/21465988\/python-equivalent-to-hold-on-in-matlab", "best_answers_votes":65, "question_length":596, "response_length":324 }, { "question":"How to use log scale with pandas plots I'm making a fairly simple histogram with pandas using ``` results.val1.hist(bins=120) ``` which works fine, but I really want to have a log scale on the y axis, which I normally (probably incorrectly) do like this: ``` fig = plt.figure(figsize=(12,8)) ax = fig.add_subplot(111) plt.plot(np.random.rand(100)) ax.set_yscale('log') plt.show() ``` If I replace the plt command with the pandas command, so I have: ``` fig = plt.figure(figsize=(12,8)) ax = fig.add_subplot(111) results.val1.hist(bins=120) ax.set_yscale('log') plt.show() ``` results in many copies of the same error: ``` Jan 9 15:53:07 BLARG.local python[6917] : CGContextClosePath: no current point. ``` I do get a log scale histogram, but it only has the top lines of the bars, but no vertical bars or colors. Am doing something horribly wrong or is this just not supported by pandas? 
Following Paul H's code, I added bottom=0.1 to the hist call, which fixes the problem; I guess there is some kind of divide-by-zero thing going on.", "response":"I'd recommend using the log=True parameter in the pyplot hist function: Setup step ``` import pandas as pd import numpy as np import matplotlib.pyplot as plt df = pd.DataFrame({'column_name': np.random.normal(size=2000)}) ``` Using pyplot: ``` plt.hist(df['column_name'], log=True) ``` Or equivalently, you could use the plot method of the dataframe column (series) directly: ``` df[\"column_name\"].plot(kind=\"hist\", logy=True) ``` There's also logx for log scaling the x-axis and loglog=True for log scaling both axes.", "best_answers_score":0.8, "library_name":"matplotlib", "question_url":"https:\/\/stackoverflow.com\/questions\/21033720\/how-to-use-log-scale-with-pandas-plots", "best_answers_votes":102, "question_length":1023, "response_length":518 }, { "question":"Python: Unable to Render Tex in Matplotlib I recently upgraded my laptop to Snow Leopard, updated TeX to Version 3.1415926 (TeX Live 2011\/MacPorts 2011_5), and installed Python 2.7.3. After all these installs, I ran macport selfupdate and macport upgrade outdated. However, now when I try to use TeX in matplotlib, I receive the following: ```none LaTeX was not able to process the following string:'lp' Here is the full report generated by LaTeX: This is pdfTeX, Version 3.1415926-2.3-1.40.12 (TeX Live 2011\/MacPorts 2011_5) restricted \\write18 enabled. entering extended mode (.\/64a53cc27244d5ee10969789771e33fa.tex LaTeX2e Babel and hyphenation patterns for english, dumylang, nohyphenation, cz ech, slovak, dutch, ukenglish, usenglishmax, basque, french, german-x-2009-06-1 9, ngerman-x-2009-06-19, german, ngerman, swissgerman, italian, polish, portugu ese, spanish, catalan, galician, ukenglish, loaded.
(\/opt\/local\/share\/texmf-texlive-dist\/tex\/latex\/base\/article.cls Document Class: article 2007\/10\/19 v1.4h Standard LaTeX document class (\/opt\/local\/share\/texmf-texlive-dist\/tex\/latex\/base\/size10.clo)) ! LaTeX Error: File `type1cm.sty' not found. Type X to quit or to proceed, or enter new name. (Default extension: sty) l.3 \\renewcommand {\\rmdefault}{pnc}^^M No pages of output. ``` Similar to this previous question, I tried setting the path in my python code via: ``` os.environ['PATH'] = os.environ['PATH'] + ':\/opt\/local\/bin\/latex' ``` since which latex yielded \/opt\/local\/bin\/latex. However, that didn't work, with the same error message. I also tried the path to tex, as well as the example from the previous question. No change. I then tried to force possibly missing packages via: ``` matplotlib.rcParams['text.latex.preamble']=[r\"\\usepackage{amsmath}\"] ``` however, that also did not work. The only way I can get my plots to work is to say rc('text', usetex=False), which is not ideal. Any help would be much appreciated.", "response":"On an Ubunutu 14.04 machine the combination of answers from above worked. 
I installed the dvipng, texlive-latex-extra, and texlive-fonts-recommended packages with sudo apt-get, and that did the trick: ```none $ sudo apt-get install dvipng texlive-latex-extra texlive-fonts-recommended ``` Edit: As of Matplotlib 3.2.1, you now also need the package cm-super (see https:\/\/github.com\/matplotlib\/matplotlib\/issues\/16911) ```none $ sudo apt-get install dvipng texlive-latex-extra texlive-fonts-recommended cm-super ```", "best_answers_score":0.8, "library_name":"matplotlib", "question_url":"https:\/\/stackoverflow.com\/questions\/11354149\/python-unable-to-render-tex-in-matplotlib", "best_answers_votes":112, "question_length":1942, "response_length":505 }, { "question":"Using Colormaps to set color of line in matplotlib How does one set the color of a line in matplotlib with scalar values provided at run time using a colormap (say jet)? I tried a couple of different approaches here and I think I'm stumped. values[] is a sorted array of scalars. curves are a set of 1-d arrays, and labels are an array of text strings. Each of the arrays has the same length. ``` fig = plt.figure() ax = fig.add_subplot(111) jet = colors.Colormap('jet') cNorm = colors.Normalize(vmin=0, vmax=values[-1]) scalarMap = cmx.ScalarMappable(norm=cNorm, cmap=jet) lines = [] for idx in range(len(curves)): line = curves[idx] colorVal = scalarMap.to_rgba(values[idx]) retLine, = ax.plot(line, color=colorVal) #retLine.set_color() lines.append(retLine) ax.legend(lines, labels, loc='upper right') ax.grid() plt.show() ```
This is an updated version of your example: ``` import matplotlib.pyplot as plt import matplotlib.colors as colors import matplotlib.cm as cmx import numpy as np # define some random data that emulates your intended code: NCURVES = 10 np.random.seed(101) curves = [np.random.random(20) for i in range(NCURVES)] values = range(NCURVES) fig = plt.figure() ax = fig.add_subplot(111) # replace the next line #jet = colors.Colormap('jet') # with jet = cm = plt.get_cmap('jet') cNorm = colors.Normalize(vmin=0, vmax=values[-1]) scalarMap = cmx.ScalarMappable(norm=cNorm, cmap=jet) print(scalarMap.get_clim()) lines = [] for idx in range(len(curves)): line = curves[idx] colorVal = scalarMap.to_rgba(values[idx]) colorText = ( 'color: (%4.2f,%4.2f,%4.2f)'%(colorVal[0],colorVal[1],colorVal[2]) ) retLine, = ax.plot(line, color=colorVal, label=colorText) lines.append(retLine) #added this to get the legend to work handles,labels = ax.get_legend_handles_labels() ax.legend(handles, labels, loc='upper right') ax.grid() plt.show() ``` Resulting in: Using a ScalarMappable is an improvement over the approach presented in my related answer: creating over 20 unique legend colors using matplotlib
Code for the above plot: ``` f, ax = plt.subplots(1, 1, figsize=(10,5)) ax.hist(data, bins = len(list(set(data)))) ``` I've been looking at this post which describes an example using FuncFormatter but I can't figure out how to adapt it to my problem. Some help and guidance would be welcome :) EDIT: The main issue is with the to_percent(y, position) function used by the FuncFormatter. The y corresponds to one given value on the y-axis I guess. I need to divide this value by the total number of elements, which I apparently can't pass to the function... EDIT 2: Current solution, which I dislike because of the use of a global variable: ``` def to_percent(y, position): # Ignore the passed in position. This has the effect of scaling the default # tick locations. global n s = str(round(100 * y \/ n, 3)) print (y) # The percent symbol needs escaping in latex if matplotlib.rcParams['text.usetex'] is True: return s + r'$\\%$' else: return s + '%' def plotting_hist(folder, output): global n data = list() # Do stuff to create data from folder n = len(data) f, ax = plt.subplots(1, 1, figsize=(10,5)) ax.hist(data, bins = len(list(set(data))), rwidth = 1) formatter = FuncFormatter(to_percent) plt.gca().yaxis.set_major_formatter(formatter) plt.savefig(\"{}.png\".format(output), dpi=500) ``` EDIT 3: Method with density = True Actual desired output (method with global variable):", "response":"Other answers seem utterly complicated. A histogram which shows the proportion instead of the absolute amount can easily be produced by weighting the data with 1\/n, where n is the number of datapoints. Then a PercentFormatter can be used to show the proportion (e.g. 0.45) as percentage (45%).
``` import numpy as np import matplotlib.pyplot as plt from matplotlib.ticker import PercentFormatter data = [1000, 1000, 5000, 3000, 4000, 16000, 2000] plt.hist(data, weights=np.ones(len(data)) \/ len(data)) plt.gca().yaxis.set_major_formatter(PercentFormatter(1)) plt.show() ``` Here we see that three of the 7 values are in the first bin, i.e. 3\/7=43%.", "best_answers_score":0.8, "library_name":"matplotlib", "question_url":"https:\/\/stackoverflow.com\/questions\/51473993\/plot-an-histogram-with-y-axis-as-percentage-using-funcformatter", "best_answers_votes":169, "question_length":1780, "response_length":645 }, { "question":"How to position suptitle I'm trying to adjust a suptitle above a multi-panel figure and am having trouble figuring out how to adjust the figsize and subsequently position the suptitle. The problem is that calling plt.suptitle(\"my title\", y=...) to adjust the position of the suptitle also adjusts the figure dimensions. A few questions: where does suptitle(..., y=1.1) actually put the title? As far as I can tell, the documentation for the y parameter of suptitle points to matplotlib.text.Text, but I don't know what figure coordinates mean when you have multiple subplots. what is the effect on figure size when specifying y to suptitle? how do I manually adjust figure size and spacing (subplots_adjust?) to add a figure title per panel and a suptitle for the entire figure, maintaining the size of each ax in the figure? An example: ``` data = np.random.random(size=100) f, a = plt.subplots(2, 2, figsize=(10, 5)) a[0,0].plot(data) a[0,0].set_title(\"this is a really long title\\n\"*2) a[0,1].plot(data) a[1,1].plot(data) plt.suptitle(\"a big long suptitle that runs into the title\\n\"*2, y=1.05); ``` Obviously I can tweak y each time I make a figure, but I need a solution that generally works without manual intervention. I've tried both constrained layout and tight layout; neither works reliably with figures of any complexity.", "response":"1. 
What do figure coordinates mean? Figure coordinates go 0 to 1, where (0,0) is the lower left corner and (1,1) is the upper right corner. A coordinate of y=1.05 is hence slightly outside the figure. 2. what is the effect on figure size when specifying y to suptitle? Specifying y to suptitle has no effect whatsoever on the figure size. 3a. How do I manually adjust figure size and spacing to add a figure title per panel and a suptitle for the entire figure? First, one would not add an additional linebreak. I.e. if you want to have 2 lines, don't use 3 linebreaks (\\n). Then one can adjust the subplot parameters as desired to leave space for the titles. E.g. fig.subplots_adjust(top=0.8) and use a y <= 1 for the title to be inside the figure. ``` import matplotlib.pyplot as plt import numpy as np data = np.random.random(size=100) fig, axes = plt.subplots(2, 2, figsize=(10, 5)) fig.subplots_adjust(top=0.8) axes[0,0].plot(data) axes[0,0].set_title(\"\\n\".join([\"this is a really long title\"]*2)) axes[0,1].plot(data) axes[1,1].plot(data) fig.suptitle(\"\\n\".join([\"a big long suptitle that runs into the title\"]*2), y=0.98) plt.show() ``` 3b. ... while maintaining the size of each ax in the figure? Maintaining the size of the axes and still have enough space for the titles is only possible by changing the overall figure size. This could look as follows, where we define a function make_space_above which takes the array of axes as input, as well as the newly desired top margin in units of inches. 
So for example, you come to the conclusion that you need 1 inch of margin on top to host your titles: ``` import matplotlib.pyplot as plt import numpy as np data = np.random.random(size=100) fig, axes = plt.subplots(2, 2, figsize=(10, 5), squeeze = False) axes[0,0].plot(data) axes[0,0].set_title(\"\\n\".join([\"this is a really long title\"]*2)) axes[0,1].plot(data) axes[1,1].plot(data) fig.suptitle(\"\\n\".join([\"a big long suptitle that runs into the title\"]*2), y=0.98) def make_space_above(axes, topmargin=1): \"\"\" increase figure size to make topmargin (in inches) space for titles, without changing the axes sizes\"\"\" fig = axes.flatten()[0].figure s = fig.subplotpars w, h = fig.get_size_inches() figh = h - (1-s.top)*h + topmargin fig.subplots_adjust(bottom=s.bottom*h\/figh, top=1-topmargin\/figh) fig.set_figheight(figh) make_space_above(axes, topmargin=1) plt.show() ``` (left: without calling make_space_above; right: with call to make_space_above(axes, topmargin=1))", "best_answers_score":0.8, "library_name":"matplotlib", "question_url":"https:\/\/stackoverflow.com\/questions\/55767312\/how-to-position-suptitle", "best_answers_votes":111, "question_length":1333, "response_length":2478 }, { "question":"JupyterLab interactive plot With old Jupyter notebooks, I could create interactive plots via: ``` import matplotlib.pyplot as plt %matplotlib notebook x = [1,2,3] y = [4,5,6] plt.figure() plt.plot(x,y) ``` However, in JupyterLab, this gives an error: ``` JavaScript output is disabled in JupyterLab ``` I have also tried the magic (with jupyter-matplotlib installed): ``` %matplotlib ipympl ``` But that just returns: ``` FigureCanvasNbAgg() ``` Inline plots work, but they are not interactive plots: ``` %matplotlib inline ```", "response":"JupyterLab 3.0+ Install jupyterlab and ipympl. For pip users: ``` pip install --upgrade jupyterlab ipympl ``` For conda users: ``` conda update -c conda-forge jupyterlab ipympl ``` Restart JupyterLab. 
Decorate the cell containing plotting code with the header: ``` %matplotlib widget # plotting code goes here ``` JupyterLab 2.0 Install nodejs, e.g. conda install -c conda-forge nodejs. Install ipympl, e.g. conda install -c conda-forge ipympl. [Optional, but recommended.] Update JupyterLab, e.g. conda update -c conda-forge jupyterlab=2.2.9=py_0. [Optional, but recommended.] For a local user installation, run: export JUPYTERLAB_DIR=\"$HOME\/.local\/share\/jupyter\/lab\". Install extensions: ``` jupyter labextension install @jupyter-widgets\/jupyterlab-manager jupyter labextension install jupyter-matplotlib ``` Enable widgets: jupyter nbextension enable --py widgetsnbextension. Restart JupyterLab. Decorate with %matplotlib widget.", "best_answers_score":0.8, "library_name":"matplotlib", "question_url":"https:\/\/stackoverflow.com\/questions\/50149562\/jupyterlab-interactive-plot", "best_answers_votes":98, "question_length":527, "response_length":934 }, { "question":"How do I change matplotlib's subplot projection of an existing axis? I'm trying to construct a simple function that takes a subplot instance (matplotlib.axes._subplots.AxesSubplot) and transforms its projection to another projection, for example, to one of the cartopy.crs.CRS projections. The idea looks something like this ``` import cartopy.crs as ccrs import matplotlib.pyplot as plt def make_ax_map(ax, projection=ccrs.PlateCarree()): # set ax projection to the specified projection ... # other fancy formatting ax.coastlines() ... 
# Create a grid of plots fig, (ax1, ax2) = plt.subplots(ncols=2) # the first subplot remains unchanged ax1.plot(np.random.rand(10)) # the second one gets another projection make_ax_map(ax2) ``` Of course, I can just use the fig.add_subplot() function: ``` fig = plt.figure(figsize=(10,5)) ax1 = fig.add_subplot(121) ax1.plot(np.random.rand(10)) ax2 = fig.add_subplot(122,projection=ccrs.PlateCarree()) ax2.coastlines() ``` but I was wondering if there is a proper matplotlib method to change a subplot axis projection after it was defined. Reading matplotlib API didn't help unfortunately.", "response":"You can't change the projection of an existing axes; the reason is given below. However the solution to your underlying problem is simply to use the subplot_kw argument to plt.subplots() described in the matplotlib documentation here. For example, if you wanted all your subplots to have the cartopy.crs.PlateCarree projection you could do ```python import matplotlib.pyplot as plt import cartopy.crs as ccrs # Create a grid of plots fig, (ax1, ax2) = plt.subplots(ncols=2, subplot_kw={'projection': ccrs.PlateCarree()}) ``` Regarding the actual question, specifying a projection when you create an axes determines the axes class you get, which is different for each projection type.
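As an aside, when a single existing axes really does need a different projection, a common workaround is to replace the axes rather than mutate it: remove it and add a new one in the same grid slot. A minimal sketch (using the built-in polar projection as a stand-in for a cartopy projection, so it runs without cartopy):

```python
import matplotlib.pyplot as plt

fig, (ax1, ax2) = plt.subplots(ncols=2)
ax1.plot(range(10))

# Replace ax2: remove the old axes, then add a new one in the same slot
ax2.remove()
ax2 = fig.add_subplot(1, 2, 2, projection='polar')
print(ax2.name)  # prints 'polar'
```

The replacement is a different object of a different class, which is exactly the point: the axes class is fixed at creation time.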
For example ```python import matplotlib.pyplot as plt import cartopy.crs as ccrs ax1 = plt.subplot(311) ax2 = plt.subplot(312, projection='polar') ax3 = plt.subplot(313, projection=ccrs.PlateCarree()) print(type(ax1)) print(type(ax2)) print(type(ax3)) ``` This code will print the following ``` <class 'matplotlib.axes._subplots.AxesSubplot'> <class 'matplotlib.axes._subplots.PolarAxesSubplot'> <class 'cartopy.mpl.geoaxes.GeoAxesSubplot'> ``` Notice how each axes is actually an instance of a different class.", "best_answers_score":0.8, "library_name":"matplotlib", "question_url":"https:\/\/stackoverflow.com\/questions\/33942233\/how-do-i-change-matplotlibs-subplot-projection-of-an-existing-axis", "best_answers_votes":90, "question_length":1123, "response_length":1056 }, { "question":"Does matplotlib have a function for drawing diagonal lines in axis coordinates? Matplotlib Axes have the functions axhline and axvline for drawing horizontal or vertical lines at a given y or x coordinate (respectively) independently of the data scale on an Axes. Is there a similar function for plotting a constant diagonal? For example, if I have a scatterplot of variables with a similar domain, it is often useful to know whether they fall above or below the line of y = x: ``` mean, cov = [0, 0], [(1, .6), (.6, 1)] x, y = np.random.multivariate_normal(mean, cov, 100).T y += x + 1 f, ax = plt.subplots(figsize=(6, 6)) ax.scatter(x, y, c=\".3\") ax.plot([-3, 3], [-3, 3], ls=\"--\", c=\".3\") ax.set(xlim=(-3, 3), ylim=(-3, 3)) ``` This can of course be done programmatically by grabbing the axis limits, (ax.get_xlim(), etc.), but that a) takes a few extra steps and b) is brittle in cases where more data might end up on the plot and shift the limits. (Actually in some cases just adding the constant line itself stretches the axes). It would be preferable to just do, e.g., ax.axdline(ls=\"--\", c=\".3\"), but it's not clear if something like this exists in the matplotlib codebase. 
All you would need to do would be to modify the axhline code to plot from [0, 1] in axes coordinates for both x and y, I think.", "response":"Drawing a diagonal from the lower left to the upper right corners of your plot would be accomplished by the following ax.plot([0, 1], [0, 1], transform=ax.transAxes) Using transform=ax.transAxes, the supplied x and y coordinates are interpreted as axes coordinates instead of data coordinates. This, as @fqq pointed out, is only the identity line when your x and y limits are equal. To draw the line y=x such that it always extends to the limits of your plot, an approach similar to the one given by @Ffisegydd would work, and can be written as the following function. ``` def add_identity(axes, *line_args, **line_kwargs): identity, = axes.plot([], [], *line_args, **line_kwargs) def callback(axes): low_x, high_x = axes.get_xlim() low_y, high_y = axes.get_ylim() low = max(low_x, low_y) high = min(high_x, high_y) identity.set_data([low, high], [low, high]) callback(axes) axes.callbacks.connect('xlim_changed', callback) axes.callbacks.connect('ylim_changed', callback) return axes ``` Example usage: ``` import numpy as np import matplotlib.pyplot as plt mean, cov = [0, 0], [(1, .6), (.6, 1)] x, y = np.random.multivariate_normal(mean, cov, 100).T y += x + 1 f, ax = plt.subplots(figsize=(6, 6)) ax.scatter(x, y, c=\".3\") add_identity(ax, color='r', ls='--') plt.show() ```", "best_answers_score":0.8, "library_name":"matplotlib", "question_url":"https:\/\/stackoverflow.com\/questions\/22104256\/does-matplotlib-have-a-function-for-drawing-diagonal-lines-in-axis-coordinates", "best_answers_votes":68, "question_length":1306, "response_length":1277 }, { "question":"how to extract a subset of a colormap as a new colormap in matplotlib? I would like to use a colormap from matplotlib e.g. CMRmap. But I don't want to use the \"black\" color at the beginning and the \"white\" color at the end. I'm interested in plotting my data using the in-between colors. 
I think people use this quite often, but I searched the internet and could not find a simple solution. I'd appreciate it if someone could suggest one.", "response":"The staticmethod colors.LinearSegmentedColormap.from_list can be used to create new LinearSegmentedColormaps. Below, I sample the original colormap at 100 points between 0.2 and 0.8: ``` cmap(np.linspace(0.2, 0.8, 100)) ``` and use these colors to generate a new colormap: ``` import matplotlib.pyplot as plt import matplotlib.colors as colors import numpy as np def truncate_colormap(cmap, minval=0.0, maxval=1.0, n=100): new_cmap = colors.LinearSegmentedColormap.from_list( 'trunc({n},{a:.2f},{b:.2f})'.format(n=cmap.name, a=minval, b=maxval), cmap(np.linspace(minval, maxval, n))) return new_cmap arr = np.linspace(0, 50, 100).reshape((10, 10)) fig, ax = plt.subplots(ncols=2) cmap = plt.get_cmap('jet') new_cmap = truncate_colormap(cmap, 0.2, 0.8) ax[0].imshow(arr, interpolation='nearest', cmap=cmap) ax[1].imshow(arr, interpolation='nearest', cmap=new_cmap) plt.show() ``` The plot on the left shows the image using the original colormap (in this example, jet). The plot on the right shows the same image using new_cmap.", "best_answers_score":0.8, "library_name":"matplotlib", "question_url":"https:\/\/stackoverflow.com\/questions\/18926031\/how-to-extract-a-subset-of-a-colormap-as-a-new-colormap-in-matplotlib", "best_answers_votes":121, "question_length":447, "response_length":1026 }, { "question":"matplotlib iterate subplot axis array through single list Is there a simple\/clean way to iterate an array of axes returned by subplots like ``` nrow = ncol = 2 a = [] fig, axs = plt.subplots(nrows=nrow, ncols=ncol) for i, row in enumerate(axs): for j, ax in enumerate(row): a.append(ax) for i, ax in enumerate(a): ax.set_ylabel(str(i)) ``` which even works for nrow or ncol == 1. 
I tried list comprehension like: ``` [element for tupl in tupleOfTuples for element in tupl] ``` but that fails if nrows or ncols == 1", "response":"The ax return value is a numpy array, which can be reshaped, I believe, without any copying of the data. If you use the following, you'll get a linear array that you can iterate over cleanly. ``` nrow = 1; ncol = 2; fig, axs = plt.subplots(nrows=nrow, ncols=ncol) for i, ax in enumerate(axs.reshape(-1)): ax.set_ylabel(str(i)) ``` This doesn't hold when ncols and nrows are both 1, since the return value is not an array; you could turn the return value into an array with one element for consistency, though it feels a bit like a kludge: ``` nrow = 1; ncol = 1; fig, axs = plt.subplots(nrows=nrow, ncols=ncol) axs = np.array(axs) for i, ax in enumerate(axs.reshape(-1)): ax.set_ylabel(str(i)) ``` reshape docs. The argument -1 causes reshape to infer dimensions of the output.", "best_answers_score":0.8, "library_name":"matplotlib", "question_url":"https:\/\/stackoverflow.com\/questions\/20288842\/matplotlib-iterate-subplot-axis-array-through-single-list", "best_answers_votes":106, "question_length":514, "response_length":749 }, { "question":"How do I plot list of tuples? I have the following data set. I would like to use Python or Gnuplot to plot the data. The tuples are of the form (x, y). The Y-axis should be a log axis, that is, log(y). A scatter plot or line plot would be ideal. How can this be done? ``` [(0, 6.0705199999997801e-08), (1, 2.1015700100300739e-08), (2, 7.6280656623374823e-09), (3, 5.7348209304555086e-09), (4, 3.6812203579604238e-09), (5, 4.1572516753310418e-09)] ```", "response":"If I get your question correctly, you could do something like this. 
``` >>> import matplotlib.pyplot as plt >>> testList = [(0, 6.0705199999997801e-08), (1, 2.1015700100300739e-08), (2, 7.6280656623374823e-09), (3, 5.7348209304555086e-09), (4, 3.6812203579604238e-09), (5, 4.1572516753310418e-09)] >>> from math import log >>> testList2 = [(elem1, log(elem2)) for elem1, elem2 in testList] >>> testList2 [(0, -16.617236475334405), (1, -17.67799605473062), (2, -18.691431541177973), (3, -18.9767093108359), (4, -19.420021520728017), (5, -19.298411635970396)] >>> list(zip(*testList2)) [(0, 1, 2, 3, 4, 5), (-16.617236475334405, -17.67799605473062, -18.691431541177973, -18.9767093108359, -19.420021520728017, -19.298411635970396)] >>> plt.scatter(*zip(*testList2)) >>> plt.show() ``` which would give you something like Or as a line plot, ``` >>> plt.plot(*zip(*testList2)) >>> plt.show() ``` EDIT - If you want to add a title and labels for the axes, you could do something like ``` >>> plt.scatter(*zip(*testList2)) >>> plt.title('Random Figure') >>> plt.xlabel('X-Axis') >>> plt.ylabel('Y-Axis') >>> plt.show() ``` which would give you", "best_answers_score":0.8, "library_name":"matplotlib", "question_url":"https:\/\/stackoverflow.com\/questions\/18458734\/how-do-i-plot-list-of-tuples", "best_answers_votes":119, "question_length":450, "response_length":1130 }, { "question":"How to map number to color using matplotlib's colormap? Consider a variable x containing a floating point number. I want to use matplotlib's colormaps to map this number to a color, but not plot anything. Basically, I want to be able to choose the colormap with mpl.cm.autumn for example, use mpl.colors.Normalize(vmin = -20, vmax = 10) to set the range, and then map x to the corresponding color.
But I really don't get the documentation of mpl.cm, so I'd appreciate it if anyone could give me a hint.", "response":"It's as simple as cm.hot(0.3): ```py import matplotlib.cm as cm print(cm.hot(0.3)) ``` ``` (0.8240081481370484, 0.0, 0.0, 1.0) ``` If you also want to have the normalizer, use ```py import matplotlib as mpl import matplotlib.cm as cm norm = mpl.colors.Normalize(vmin=-20, vmax=10) cmap = cm.hot x = 0.3 m = cm.ScalarMappable(norm=norm, cmap=cmap) print(m.to_rgba(x)) ``` ``` (1.0, 0.8225486412996345, 0.0, 1.0) ```", "best_answers_score":0.8, "library_name":"matplotlib", "question_url":"https:\/\/stackoverflow.com\/questions\/15140072\/how-to-map-number-to-color-using-matplotlibs-colormap", "best_answers_votes":130, "question_length":484, "response_length":414 }, { "question":"How to get matplotlib figure size For a project, I need to know the current size (in pixels) of my matplotlib figure, but I can't find how to do this. Does anyone know how to do this?", "response":"``` import matplotlib.pyplot as plt fig = plt.figure() size = fig.get_size_inches()*fig.dpi # size in pixels ``` To do it for the current figure, ``` fig = plt.gcf() size = fig.get_size_inches()*fig.dpi # size in pixels ``` You can get the same info by doing: ``` bbox = fig.get_window_extent().transformed(fig.dpi_scale_trans.inverted()) width, height = bbox.width*fig.dpi, bbox.height*fig.dpi ```", "best_answers_score":0.8, "library_name":"matplotlib", "question_url":"https:\/\/stackoverflow.com\/questions\/29702424\/how-to-get-matplotlib-figure-size", "best_answers_votes":103, "question_length":184, "response_length":388 }, { "question":"How to set the default color cycle for all subplots with matplotlib? How can I set a default set of colors for plots made with matplotlib? 
I can set a particular color map like this ``` import numpy as np import matplotlib.pyplot as plt fig=plt.figure(i) ax=plt.gca() colormap = plt.get_cmap('jet') ax.set_color_cycle([colormap(k) for k in np.linspace(0, 1, 10)]) ``` but is there some way to set the same set of colors for all plots, including subplots?", "response":"Sure! Either specify axes.prop_cycle in your .matplotlibrc file or set it at runtime using matplotlib.rcParams or matplotlib.rc. As an example of the latter: ```py import matplotlib.pyplot as plt import matplotlib as mpl import numpy as np # Set the default color cycle mpl.rcParams['axes.prop_cycle'] = mpl.cycler(color=[\"r\", \"k\", \"c\"]) x = np.linspace(0, 20, 100) fig, axes = plt.subplots(nrows=2) for i in range(10): axes[0].plot(x, i * (x - 10)**2) for i in range(10): axes[1].plot(x, i * np.cos(x)) plt.show() ```", "best_answers_score":0.8, "library_name":"matplotlib", "question_url":"https:\/\/stackoverflow.com\/questions\/9397944\/how-to-set-the-default-color-cycle-for-all-subplots-with-matplotlib", "best_answers_votes":93, "question_length":454, "response_length":519 }, { "question":"Text box with line wrapping Is it possible to display text in a box through Matplotlib, with automatic line breaks? By using pyplot.text(), I was only able to print multi-line text that flows beyond the boundaries of the window, which is annoying. The size of the lines is not known in advance\u2026 Any idea would be much appreciated!", "response":"The contents of this answer were merged into mpl master in https:\/\/github.com\/matplotlib\/matplotlib\/pull\/4342 and will be in the next feature release. Wow... This is a thorny problem... (And it exposes a lot of limitations in matplotlib's text rendering...) This should (i.m.o.) be something that matplotlib has built-in, but it doesn't. There have been a few threads about it on the mailing list, but no solution that I could find to automatic text wrapping. 
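For reference, since the merge linked above, text artists grew a basic wrap=True option that wraps at the figure edge (not the axes edge) at draw time. A minimal sketch, assuming a Matplotlib new enough to include it:

```python
import matplotlib.pyplot as plt

fig = plt.figure()
t = 'a fairly long string that should be wrapped at the figure boundary ' * 3
# wrap=True makes the artist wrap its own text when it is drawn
text = plt.text(0.1, 0.5, t, wrap=True)
fig.canvas.draw()
print(text.get_wrap())  # prints True
```

The hand-rolled approach that follows instead wraps to the axes boundaries and handles rotation, which the built-in option does not.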
So, first off, there's no way to determine the size (in pixels) of the rendered text string before it's drawn in matplotlib. This isn't too large of a problem, as we can just draw it, get the size, and then redraw the wrapped text. (It's expensive, but not too excessively bad) The next problem is that characters don't have a fixed width in pixels, so wrapping a text string to a given number of characters won't necessarily reflect a given width when rendered. This isn't a huge problem, though. Beyond that, we can't just do this once... Otherwise, it will be wrapped correctly when drawn the first time (on the screen, for example), but not if drawn again (when the figure is resized or saved as an image with a different DPI than the screen). This isn't a huge problem, as we can just connect a callback function to the matplotlib draw event. At any rate this solution is imperfect, but it should work in most situations. I don't try to account for tex-rendered strings, any stretched fonts, or fonts with an unusual aspect ratio. However, it should now properly handle rotated text. However, it will attempt to automatically wrap any text objects in multiple subplots in whichever figures you connect the on_draw callback to... It will be imperfect in many cases, but it does a decent job. ``` import matplotlib.pyplot as plt def main(): fig = plt.figure() plt.axis([0, 10, 0, 10]) t = \"This is a really long string that I'd rather have wrapped so that it\"\\ \" doesn't go outside of the figure, but if it's long enough it will go\"\\ \" off the top or bottom!\" plt.text(4, 1, t, ha='left', rotation=15) plt.text(5, 3.5, t, ha='right', rotation=-15) plt.text(5, 10, t, fontsize=18, ha='center', va='top') plt.text(3, 0, t, family='serif', style='italic', ha='right') plt.title(\"This is a really long title that I want to have wrapped so it\"\\ \" does not go outside the figure boundaries\", ha='center') # Now make the text auto-wrap... 
fig.canvas.mpl_connect('draw_event', on_draw) plt.show() def on_draw(event): \"\"\"Auto-wraps all text objects in a figure at draw-time\"\"\" import matplotlib as mpl fig = event.canvas.figure # Cycle through all artists in all the axes in the figure for ax in fig.axes: for artist in ax.get_children(): # If it's a text artist, wrap it... if isinstance(artist, mpl.text.Text): autowrap_text(artist, event.renderer) # Temporarily disconnect any callbacks to the draw event... # (To avoid recursion) func_handles = fig.canvas.callbacks.callbacks[event.name] fig.canvas.callbacks.callbacks[event.name] = {} # Re-draw the figure.. fig.canvas.draw() # Reset the draw event callbacks fig.canvas.callbacks.callbacks[event.name] = func_handles def autowrap_text(textobj, renderer): \"\"\"Wraps the given matplotlib text object so that it doesn't exceed the boundaries of the axis it is plotted in.\"\"\" import textwrap # Get the starting position of the text in pixels... x0, y0 = textobj.get_transform().transform(textobj.get_position()) # Get the extents of the current axis in pixels... clip = textobj.get_axes().get_window_extent() # Set the text to rotate about the left edge (doesn't make sense otherwise) textobj.set_rotation_mode('anchor') # Get the amount of space in the direction of rotation to the left and # right of x0, y0 (left and right are relative to the rotation, as well) rotation = textobj.get_rotation() right_space = min_dist_inside((x0, y0), rotation, clip) left_space = min_dist_inside((x0, y0), rotation - 180, clip) # Use either the left or right distance depending on the horiz alignment. alignment = textobj.get_horizontalalignment() if alignment == 'left': new_width = right_space elif alignment == 'right': new_width = left_space else: new_width = 2 * min(left_space, right_space) # Estimate the width of the new size in characters... aspect_ratio = 0.5 # This varies with the font!! 
fontsize = textobj.get_size() pixels_per_char = aspect_ratio * renderer.points_to_pixels(fontsize) # If wrap_width is < 1, just make it 1 character wrap_width = max(1, new_width \/\/ pixels_per_char) try: wrapped_text = textwrap.fill(textobj.get_text(), wrap_width) except TypeError: wrapped_text = textobj.get_text() textobj.set_text(wrapped_text) def min_dist_inside(point, rotation, box): \"\"\"Gets the space in a given direction from \"point\" to the boundaries of \"box\" (where box is an object with x0, y0, x1, & y1 attributes, point is a tuple of x,y, and rotation is the angle in degrees)\"\"\" from math import sin, cos, radians x0, y0 = point rotation = radians(rotation) distances = [] threshold = 0.0001 if cos(rotation) > threshold: # Intersects the right axis distances.append((box.x1 - x0) \/ cos(rotation)) if cos(rotation) < -threshold: # Intersects the left axis distances.append((box.x0 - x0) \/ cos(rotation)) if sin(rotation) > threshold: # Intersects the top axis distances.append((box.y1 - y0) \/ sin(rotation)) if sin(rotation) < -threshold: # Intersects the bottom axis distances.append((box.y0 - y0) \/ sin(rotation)) return min(distances) if __name__ == '__main__': main() ```", "best_answers_score":0.8, "library_name":"matplotlib", "question_url":"https:\/\/stackoverflow.com\/questions\/4018860\/text-box-with-line-wrapping", "best_answers_votes":126, "question_length":330, "response_length":4759 }, { "question":"How can I rotate xticklabels in so the spacing between each xticklabel is equal? [duplicate] How can I rotate xticklabels in matplotlib so that the spacing between each xticklabel is equal? For example with this code: ``` import matplotlib.pyplot as plt import numpy as np # Data + parameters fontsize = 20 t = np.arange(0.0, 6.0, 1) xticklabels = ['Full', 'token emb', 'char emb', 'char LSTM', 'token LSTM', 'feed forward','ANN'] # Plotting fig = plt.figure(1) ax = fig.add_subplot(111) plt.plot(t, t) plt.xticks(range(0, len(t) + 1)) ax.tick_params(axis='both', which='major', labelsize=fontsize) ax.set_xticklabels(xticklabels, rotation = 45) fig.savefig('test_rotation.png', dpi=300, format='png', bbox_inches='tight') ``` I obtain: The spacing between each xticklabel is unequal. For example, the spacing between 'Full' and 'token emb' is much larger than the spacing between 'feed forward' and 'ANN'. I use Matplotlib 2.0.0 and Python 3.5 64-bit on Windows 7 SP1 x64 Ultimate.", "response":"The labels are centered at the tickmark position. 
Their bounding boxes are unequal in width and might even overlap, which makes them look unequally spaced. Since you'd always want the ticklabels to link to their tickmarks, changing the spacing is not really an option. However you might want to align them such that the upper right corner is the reference for their positioning below the tick. Use the horizontalalignment or ha argument for that and set it to \"right\": ``` ax.set_xticklabels(xticklabels, rotation = 45, ha=\"right\") ``` This results in the following plot: An alternative can be to keep the ticklabels horizontally centered, but also center them vertically. This leads to an equal spacing but requires further adjusting their vertical position with respect to the axis. ``` ax.set_xticklabels(xticklabels, rotation = 45, va=\"center\", position=(0,-0.28)) ``` The above can be used if the ticks are specified manually like in the question (e.g. via plt.xticks or via ax.set_xticks) or if a categorical plot is used. If instead the labels are shown automatically, one should not use set_xticklabels. This will in general let the labels and tick positions become out of sync, because set_xticklabels sets the formatter of the axes to a FixedFormatter, while the locator stays the automatic AutoLocator, or any other automatic locator. 
In those cases either use plt.setp to set the rotation and alignment of existing labels, ``` plt.setp(ax.get_xticklabels(), ha=\"right\", rotation=45) ``` or loop over them to set the respective properties, ``` for label in ax.get_xticklabels(): label.set_ha(\"right\") label.set_rotation(45) ``` An example would be ``` import numpy as np; np.random.seed(42) import matplotlib.pyplot as plt t = np.arange(\"2018-01-01\", \"2018-03-01\", dtype=\"datetime64[D]\") x = np.cumsum(np.random.randn(len(t))) fig, ax = plt.subplots() ax.plot(t, x) for label in ax.get_xticklabels(): label.set_ha(\"right\") label.set_rotation(45) plt.tight_layout() plt.show() ```", "best_answers_score":0.8, "library_name":"matplotlib", "question_url":"https:\/\/stackoverflow.com\/questions\/43152502\/how-can-i-rotate-xticklabels-in-so-the-spacing-between-each-xticklabel-is-equal", "best_answers_votes":141, "question_length":1112, "response_length":1989 }, { "question":"Color a scatter plot by Column Values One of my favorite aspects of using the ggplot2 library in R is the ability to easily specify aesthetics. I can quickly make a scatterplot and apply color associated with a specific column and I would love to be able to do this with python\/pandas\/matplotlib. I'm wondering if there are any convenience functions that people use to map colors to values using pandas dataframes and Matplotlib? ``` ##ggplot scatterplot example with R dataframe, `df`, colored by col3 ggplot(data = df, aes(x=col1, y=col2, color=col3)) + geom_point() ##ideal situation with pandas dataframe, 'df', where colors are chosen by col3 df.plot(x=col1,y=col2,color=col3) ``` EDIT: Thank you for your responses but I want to include a sample dataframe to clarify what I am asking. Two columns contain numerical data and the third is a categorical variable. The script I am thinking of will assign colors based on this value. 
```py np.random.seed(250) df = pd.DataFrame({'Height': np.append(np.random.normal(6, 0.25, size=5), np.random.normal(5.4, 0.25, size=5)), 'Weight': np.append(np.random.normal(180, 20, size=5), np.random.normal(140, 20, size=5)), 'Gender': [\"Male\",\"Male\",\"Male\",\"Male\",\"Male\", \"Female\",\"Female\",\"Female\",\"Female\",\"Female\"]}) Height Weight Gender 0 5.824970 159.210508 Male 1 5.780403 180.294943 Male 2 6.318295 199.142201 Male 3 5.617211 157.813278 Male 4 6.340892 191.849944 Male 5 5.625131 139.588467 Female 6 4.950479 146.711220 Female 7 5.617245 121.571890 Female 8 5.556821 141.536028 Female 9 5.714171 134.396203 Female ```", "response":"Imports and Data ```py import numpy import pandas import matplotlib.pyplot as plt import seaborn as sns sns.set(style='ticks') numpy.random.seed(0) N = 37 _genders= ['Female', 'Male', 'Non-binary', 'No Response'] df = pandas.DataFrame({ 'Height (cm)': numpy.random.uniform(low=130, high=200, size=N), 'Weight (kg)': numpy.random.uniform(low=30, high=100, size=N), 'Gender': numpy.random.choice(_genders, size=N) }) ``` Update August 2021 With seaborn 0.11.0, it's recommended to use new figure-level functions like seaborn.relplot rather than to use FacetGrid directly. ```py sns.relplot(data=df, x='Weight (kg)', y='Height (cm)', hue='Gender', hue_order=_genders, aspect=1.61) plt.show() ``` Update October 2015 Seaborn handles this use-case splendidly: Map matplotlib.pyplot.scatter onto a seaborn.FacetGrid ```py fg = sns.FacetGrid(data=df, hue='Gender', hue_order=_genders, aspect=1.61) fg.map(plt.scatter, 'Weight (kg)', 'Height (cm)').add_legend() ``` Which immediately outputs: Old Answer In this case, I would use matplotlib directly. 
``` import numpy as np import matplotlib.pyplot as plt import pandas as pd def dfScatter(df, xcol='Height', ycol='Weight', catcol='Gender'): fig, ax = plt.subplots() categories = np.unique(df[catcol]) colors = np.linspace(0, 1, len(categories)) colordict = dict(zip(categories, colors)) df[\"Color\"] = df[catcol].apply(lambda x: colordict[x]) ax.scatter(df[xcol], df[ycol], c=df.Color) return fig if 1: df = pd.DataFrame({'Height':np.random.normal(size=10), 'Weight':np.random.normal(size=10), 'Gender': [\"Male\",\"Male\",\"Unknown\",\"Male\",\"Male\", \"Female\",\"Did not respond\",\"Unknown\",\"Female\",\"Female\"]}) fig = dfScatter(df) fig.savefig('fig1.png') ``` And that gives me: As far as I know, that color column can be any matplotlib compatible color (RGBA tuples, HTML names, hex values, etc). I'm having trouble getting anything but numerical values to work with the colormaps.", "best_answers_score":0.8, "library_name":"matplotlib", "question_url":"https:\/\/stackoverflow.com\/questions\/14885895\/color-a-scatter-plot-by-column-values", "best_answers_votes":92, "question_length":1569, "response_length":1911 }, { "question":"Matplotlib plots aren't shown when running file from bash terminal Plots are normally shown when I run files from the ipython shell or from an ipython notebook, but they don't show up when I run the file from a bash terminal -- everything else works fine when it is run from a bash terminal. Sample python script (trial.py): ``` import matplotlib.pyplot as plt print 'please, show my graph' plt.plot([1,2,3], [1,2,3]) plt.show() ``` This is what I get (plot doesn't show up): ``` [~\/Desktop]$ python trial.py please, show my graph [~\/Desktop]$ ``` If I do ``` import matplotlib matplotlib.use('TkAgg') ``` before importing pyplot, then a window opens and closes immediately when I run it from the terminal. 
I've tried different ways of importing modules without success: ``` import matplotlib.pyplot as plt import matplotlib.pylab as plt from matplotlib import pyplot as plt from matplotlib import pylab as plt ``` I have the plt.show() function in my file. Do you know how I can fix it? Some info about versions and installation: I'm on a mac OSX 10.11.3. ``` In [61]: print matplotlib.__file__ \/usr\/local\/lib\/python2.7\/site-packages\/matplotlib\/__init__.pyc In [62]: print matplotlib.__version__ 1.4.2 In [64]: print sys.version 2.7.9 (default, Apr 7 2015, 07:58:25) [GCC 4.2.1 Compatible Apple LLVM 6.0 (clang-600.0.57)] In [65]: matplotlib.get_backend() Out[65]: u'MacOSX' ```", "response":"You need to add matplotlib.pyplot.show() in your code to show plots in non-interactive mode. See docs at http:\/\/matplotlib.org\/api\/pyplot_api.html#matplotlib.pyplot.show EDIT: After further info from OP, blocking had to be enabled explicitly using plt.show(block=True).", "best_answers_score":0.8, "library_name":"matplotlib", "question_url":"https:\/\/stackoverflow.com\/questions\/36269746\/matplotlib-plots-arent-shown-when-running-file-from-bash-terminal", "best_answers_votes":119, "question_length":1376, "response_length":269 }, { "question":"How to make an axes occupy multiple subplots with pyplot I would like to have three plots in a single figure. The figure should have a subplot layout of two by two, where the first plot should occupy the first two subplot cells (i.e. the whole first row of plot cells) and the other plots should be positioned underneath the first one in cells 3 and 4. I know that MATLAB allows this by using the subplot command like so: ```matlab subplot(2,2,[1,2]) % the plot will span subplots 1 and 2 ``` Is it also possible in pyplot to have a single axes occupy more than one subplot? The docstring of pyplot.subplot doesn't talk about it. 
Anyone got an easy solution?", "response":"You can simply do: ```py import numpy as np import matplotlib.pyplot as plt x = np.arange(0, 7, 0.01) plt.subplot(2, 1, 1) plt.plot(x, np.sin(x)) plt.subplot(2, 2, 3) plt.plot(x, np.cos(x)) plt.subplot(2, 2, 4) plt.plot(x, np.sin(x)*np.cos(x)) ``` i.e., the first plot is really a plot in the upper half (the figure is only divided into 2x1 = 2 cells), and the following two smaller plots are done in a 2x2=4 cell grid. The third argument to subplot() is the position of the plot inside the grid (in the direction of reading in English, with cell 1 being in the top-left corner): for example in the second subplot (subplot(2, 2, 3)), the axes will go to the third section of the 2x2 matrix i.e, to the bottom-left corner.", "best_answers_score":0.8, "library_name":"matplotlib", "question_url":"https:\/\/stackoverflow.com\/questions\/2265319\/how-to-make-an-axes-occupy-multiple-subplots-with-pyplot", "best_answers_votes":82, "question_length":658, "response_length":721 }, { "question":"add colorbar to a sequence of line plots I have a sequence of line plots for two variables (x,y) for a number of different values of a variable z. I would normally add the line plots with legends like this: ``` import matplotlib.pyplot as plt fig = plt.figure() ax = fig.add_subplot(111) # suppose mydata is a list of tuples containing (xs, ys, z) # where xs and ys are lists of x's and y's and z is a number. legns = [] for(xs,ys,z) in mydata: pl = ax.plot(xs,ys,color = (z,0,0)) legns.append(\"z = %f\"%(z)) ax.legends(legns) plt.show() ``` But I have too many graphs and the legends will cover the graph. I'd rather have a colorbar indicating the value of z corresponding to the color. I can't find anything like that in the galery and all my attempts do deal with the colorbar failed. Apparently I must create a collection of plots before trying to add a colorbar. Is there an easy way to do this? Thanks. 
EDIT (clarification): I wanted to do something like this: ``` import matplotlib.pyplot as plt import matplotlib.cm as cm fig = plt.figure() ax = fig.add_subplot(111) mycmap = cm.hot # suppose mydata is a list of tuples containing (xs, ys, z) # where xs and ys are lists of x's and y's and z is a number between 0 and 1 plots = [] for(xs,ys,z) in mydata: pl = ax.plot(xs,ys,color = mycmap(z)) plots.append(pl) fig.colorbar(plots) plt.show() ``` But this won't work according to the Matplotlib reference because a list of plots is not a \"mappable\", whatever this means. I've created an alternative plot function using LineCollection: ``` def myplot(ax,xs,ys,zs, cmap): plot = lc([zip(x,y) for (x,y) in zip(xs,ys)], cmap = cmap) plot.set_array(array(zs)) x0,x1 = amin(xs),amax(xs) y0,y1 = amin(ys),amax(ys) ax.add_collection(plot) ax.set_xlim(x0,x1) ax.set_ylim(y0,y1) return plot ``` xs and ys are lists of lists of x and y coordinates and zs is a list of the different conditions to colorize each line. It feels a bit like a kludge though... I thought that there would be a neater way to do this. I like the flexibility of the plt.plot() function.", "response":"(I know this is an old question but...) Colorbars require a matplotlib.cm.ScalarMappable; plt.plot produces lines, which are not scalar mappable, so in order to make a colorbar we are going to need to make a scalar mappable. Ok. So the constructor of a ScalarMappable takes a cmap and a norm instance. (norms scale data to the range 0-1, cmaps you have already worked with and take a number between 0-1 and return a color). 
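As an illustrative aside (added here, not from the original answer), the two halves can be exercised on their own; the choice of viridis is arbitrary:

```python
import matplotlib.colors as mcolors
import matplotlib.pyplot as plt

# a norm maps data coordinates into the interval [0, 1] ...
norm = mcolors.Normalize(vmin=0.0, vmax=10.0)
print(float(norm(5.0)))  # 0.5

# ... and a colormap maps a [0, 1] value to an RGBA color
cmap = plt.get_cmap('viridis')
rgba = cmap(float(norm(5.0)))
print(len(rgba))  # 4 -> (R, G, B, A)
```

A ScalarMappable just bundles such a norm and cmap pair, which is why its constructor takes exactly those two arguments.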
So in your case: ``` import matplotlib.pyplot as plt sm = plt.cm.ScalarMappable(cmap=my_cmap, norm=plt.normalize(min=0, max=1)) plt.colorbar(sm) ``` Because your data is in the range 0-1 already, you can simplify the sm creation to: ``` sm = plt.cm.ScalarMappable(cmap=my_cmap) ``` EDIT: For matplotlib v1.2 or greater the code becomes: ``` import matplotlib.pyplot as plt sm = plt.cm.ScalarMappable(cmap=my_cmap, norm=plt.normalize(vmin=0, vmax=1)) # fake up the array of the scalar mappable. Urgh... sm._A = [] plt.colorbar(sm) ``` EDIT: For matplotlib v1.3 or greater the code becomes: ``` import matplotlib.pyplot as plt sm = plt.cm.ScalarMappable(cmap=my_cmap, norm=plt.Normalize(vmin=0, vmax=1)) # fake up the array of the scalar mappable. Urgh... sm._A = [] plt.colorbar(sm) ``` EDIT: For matplotlib v3.1 or greater simplifies to: ``` import matplotlib.pyplot as plt sm = plt.cm.ScalarMappable(cmap=my_cmap, norm=plt.Normalize(vmin=0, vmax=1)) plt.colorbar(sm) ```", "best_answers_score":0.8, "library_name":"matplotlib", "question_url":"https:\/\/stackoverflow.com\/questions\/8342549\/add-colorbar-to-a-sequence-of-line-plots", "best_answers_votes":137, "question_length":2057, "response_length":1404 }, { "question":"How do I assign multiple legend labels at once? I have the following dataset: ``` x = [0, 1, 2, 3, 4] y = [ [0, 1, 2, 3, 4], [5, 6, 7, 8, 9], [9, 8, 7, 6, 5] ] ``` Now I plot it with: ``` import matplotlib.pyplot as plt plt.plot(x, y) ``` However, I want to label the 3 y-datasets with this command, which raises an error when .legend() is called: ``` lineObjects = plt.plot(x, y, label=['foo', 'bar', 'baz']) plt.legend() File \".\/plot_nmos.py\", line 33, in plt.legend() ... 
AttributeError: 'list' object has no attribute 'startswith' ``` When I inspect the lineObjects: ``` >>> lineObjects[0].get_label() ['foo', 'bar', 'baz'] >>> lineObjects[1].get_label() ['foo', 'bar', 'baz'] >>> lineObjects[2].get_label() ['foo', 'bar', 'baz'] ``` Question Is there an elegant way to assign multiple labels by just using the .plot() method?", "response":"You can iterate over your line objects list, so labels are individually assigned. An example with the built-in python iter function: ``` lineObjects = plt.plot(x, y) plt.legend(iter(lineObjects), ('foo', 'bar', 'baz')) ``` Edit: after updating to matplotlib 1.1.1, it looks like the plt.plot(x, y), with y as a list of lists (as provided by the author of the question), doesn't work anymore. The one step plotting without iteration over the y arrays is still possible though after passing y as numpy.array (assuming numpy (http:\/\/numpy.scipy.org\/) has been previously imported). In this case, use plt.plot(x, y) (if the data in the 2D y array are arranged as columns [axis 1]) or plt.plot(x, y.transpose()) (if the data in the 2D y array are arranged as rows [axis 0]) Edit 2: as pointed out by @pelson (see commentary below), the iter function is unnecessary and a simple plt.legend(lineObjects, ('foo', 'bar', 'baz')) works perfectly.", "best_answers_score":0.8, "library_name":"matplotlib", "question_url":"https:\/\/stackoverflow.com\/questions\/11481644\/how-do-i-assign-multiple-legend-labels-at-once", "best_answers_votes":70, "question_length":831, "response_length":933 }, { "question":"Matplotlib, Consistent font using latex My problem is I'd like to use Latex titles in some plots, and no latex in others. Right now, matplotlib has two different default fonts for Latex titles and non-Latex titles and I'd like the two to be consistent. Is there an RC setting I have to change that will allow this automatically? 
I generate a plot with the following code: ``` import numpy as np from matplotlib import pyplot as plt tmpData = np.random.random( 300 ) ##Create a plot with a tex title ax = plt.subplot(211) plt.plot(np.arange(300), tmpData) plt.title(r'$W_y(\\tau, j=3)$') plt.setp(ax.get_xticklabels(), visible = False) ##Create another plot without a tex title plt.subplot(212) plt.plot(np.arange(300), tmpData ) plt.title(r'Some random numbers') plt.show() ``` Here is the inconsistency I am talking about. The axis tick labels are thin looking relative to the titles.:", "response":"To make the tex-style\/mathtext text look like the regular text, you need to set the mathtext font to Bitstream Vera Sans, ``` import matplotlib matplotlib.rcParams['mathtext.fontset'] = 'custom' matplotlib.rcParams['mathtext.rm'] = 'Bitstream Vera Sans' matplotlib.rcParams['mathtext.it'] = 'Bitstream Vera Sans:italic' matplotlib.rcParams['mathtext.bf'] = 'Bitstream Vera Sans:bold' matplotlib.pyplot.title(r'ABC123 vs $\\mathrm{ABC123}^{123}$') ``` If you want the regular text to look like the mathtext text, you can change everything to Stix. This will affect labels, titles, ticks, etc. ``` import matplotlib matplotlib.rcParams['mathtext.fontset'] = 'stix' matplotlib.rcParams['font.family'] = 'STIXGeneral' matplotlib.pyplot.title(r'ABC123 vs $\\mathrm{ABC123}^{123}$') ``` Basic idea is that you need to set both the regular and mathtext fonts to be the same, and the method of doing so is a bit obscure. 
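As a quick self-contained check (added for illustration, not part of the original answer) that the two settings agree and that the math renders without any LaTeX installation:

```python
import matplotlib
matplotlib.use('Agg')  # render headlessly for the check
import matplotlib.pyplot as plt

# point regular text and mathtext at the same (STIX) family
matplotlib.rcParams['mathtext.fontset'] = 'stix'
matplotlib.rcParams['font.family'] = 'STIXGeneral'

fig, ax = plt.subplots()
ax.set_title(r'ABC123 vs $\mathrm{ABC123}^{123}$')
fig.canvas.draw()  # uses the STIX fonts bundled with matplotlib; no LaTeX needed
print(matplotlib.rcParams['mathtext.fontset'])  # stix
```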
You can see a list of the custom fonts, ``` sorted([f.name for f in matplotlib.font_manager.fontManager.ttflist]) ``` As others mentioned, you can also have Latex render everything for you with one font by setting text.usetex in the rcParams, but that's slow and not entirely necessary.", "best_answers_score":0.8, "library_name":"matplotlib", "question_url":"https:\/\/stackoverflow.com\/questions\/11367736\/matplotlib-consistent-font-using-latex", "best_answers_votes":87, "question_length":885, "response_length":1197 }, { "question":"How to make savefig() save image for 'maximized' window instead of default size I am using pylab in matplotlib to create a plot and save the plot to an image file. However, when I save the image using pylab.savefig( image_name ), I find that the SIZE image saved is the same as the image that is shown when I use pylab.show(). As it happens, I have a lot of data in the plot and when I am using pylab.show(), I have to maximize the window before I can see all of the plot correctly, and the xlabel tickers don't superimpose on each other. Is there anyway that I can programmatically 'maximize' the window before saving the image to file? - at the moment, I am only getting the 'default' window size image, which results in the x axis labels being superimposed on one another.", "response":"There are two major options in matplotlib (pylab) to control the image size: You can set the size of the resulting image in inches You can define the DPI (dots per inch) for output file (basically, it is a resolution) Normally, you would like to do both, because this way you will have full control over the resulting image size in pixels. For example, if you want to render exactly 800x600 image, you can use DPI=100, and set the size as 8 x 6 in inches: ``` import matplotlib.pyplot as plt # plot whatever you need... 
# now, before saving to file: figure = plt.gcf() # get current figure figure.set_size_inches(8, 6) # when saving, specify the DPI plt.savefig(\"myplot.png\", dpi = 100) ``` One can use any DPI. In fact, you might want to play with various DPI and size values to get the result you like the most. Beware, however, that using very small DPI is not a good idea, because matplotlib may not find a good font to render legend and other text. For example, you cannot set the DPI=1, because there are no fonts with characters rendered with 1 pixel :) From other comments I understood that other issue you have is proper text rendering. For this, you can also change the font size. For example, you may use 6 pixels per character, instead of 12 pixels per character used by default (effectively, making all text twice smaller). ``` import matplotlib #... matplotlib.rc('font', size=6) ``` Finally, some references to the original documentation: http:\/\/matplotlib.sourceforge.net\/api\/pyplot_api.html#matplotlib.pyplot.savefig, http:\/\/matplotlib.sourceforge.net\/api\/pyplot_api.html#matplotlib.pyplot.gcf, http:\/\/matplotlib.sourceforge.net\/api\/figure_api.html#matplotlib.figure.Figure.set_size_inches, http:\/\/matplotlib.sourceforge.net\/users\/customizing.html#dynamic-rc-settings P.S. Sorry, I didn't use pylab, but as far as I'm aware, all the code above will work same way in pylab - just replace plt in my code with the pylab (or whatever name you assigned when importing pylab). Same for matplotlib - use pylab instead.", "best_answers_score":0.8, "library_name":"matplotlib", "question_url":"https:\/\/stackoverflow.com\/questions\/10041627\/how-to-make-savefig-save-image-for-maximized-window-instead-of-default-size", "best_answers_votes":72, "question_length":775, "response_length":2028 }, { "question":"Embedding small plots inside subplots in matplotlib If you want to insert a small plot inside a bigger one you can use Axes, like here. 
The problem is that I don't know how to do the same inside a subplot. I have several subplots and I would like to plot a small plot inside each subplot. The example code would be something like this: ``` import numpy as np import matplotlib.pyplot as plt fig = plt.figure() for i in range(4): ax = fig.add_subplot(2,2,i) ax.plot(np.arange(11),np.arange(11),'b') #b = ax.axes([0.7,0.7,0.2,0.2]) #it gives an error, AxesSubplot is not callable #b = plt.axes([0.7,0.7,0.2,0.2]) #plt.plot(np.arange(3),np.arange(3)+11,'g') #it plots the small plot in the selected position of the whole figure, not inside the subplot ``` Any ideas?", "response":"I wrote a function very similar to plt.axes. You could use it for plotting your sub-subplots. There is an example... ``` import matplotlib.pyplot as plt import numpy as np #def add_subplot_axes(ax,rect,facecolor='w'): # matplotlib 2.0+ def add_subplot_axes(ax,rect,axisbg='w'): fig = plt.gcf() box = ax.get_position() width = box.width height = box.height inax_position = ax.transAxes.transform(rect[0:2]) transFigure = fig.transFigure.inverted() infig_position = transFigure.transform(inax_position) x = infig_position[0] y = infig_position[1] width *= rect[2] height *= rect[3] # <= Typo was here #subax = fig.add_axes([x,y,width,height],facecolor=facecolor) # matplotlib 2.0+ subax = fig.add_axes([x,y,width,height],axisbg=axisbg) x_labelsize = subax.get_xticklabels()[0].get_size() y_labelsize = subax.get_yticklabels()[0].get_size() x_labelsize *= rect[2]**0.5 y_labelsize *= rect[3]**0.5 subax.xaxis.set_tick_params(labelsize=x_labelsize) subax.yaxis.set_tick_params(labelsize=y_labelsize) return subax def example1(): fig = plt.figure(figsize=(10,10)) ax = fig.add_subplot(111) rect = [0.2,0.2,0.7,0.7] ax1 = add_subplot_axes(ax,rect) ax2 = add_subplot_axes(ax1,rect) ax3 = add_subplot_axes(ax2,rect) plt.show() def example2(): fig = plt.figure(figsize=(10,10)) axes = [] subpos = [0.2,0.6,0.3,0.3] x = np.linspace(-np.pi,np.pi) for i in range(4): 
axes.append(fig.add_subplot(2,2,i+1)) for axis in axes: axis.set_xlim(-np.pi,np.pi) axis.set_ylim(-1,3) axis.plot(x,np.sin(x)) subax1 = add_subplot_axes(axis,subpos) subax2 = add_subplot_axes(subax1,subpos) subax1.plot(x,np.sin(x)) subax2.plot(x,np.sin(x)) if __name__ == '__main__': example2() plt.show() ```", "best_answers_score":0.8, "library_name":"matplotlib", "question_url":"https:\/\/stackoverflow.com\/questions\/17458580\/embedding-small-plots-inside-subplots-in-matplotlib", "best_answers_votes":70, "question_length":763, "response_length":1663 }, { "question":"Remove line through legend marker with .plot I have a matplotlib plot generated with the following code: ``` import matplotlib.pyplot as pyplot Fig, ax = pyplot.subplots() for i, (mark, color) in enumerate(zip( ['s', 'o', 'D', 'v'], ['r', 'g', 'b', 'purple'])): ax.plot(i+1, i+1, color=color, marker=mark, markerfacecolor='None', markeredgecolor=color, label=i) ax.set_xlim(0,5) ax.set_ylim(0,5) ax.legend() ``` with this as the generated figure: I don't like the lines through the markers in the legend. How can I get rid of them?", "response":"You can specify linestyle='None' or linestyle='' as a keyword argument in the plot command. Also, ls= can replace linestyle=. 
```py import matplotlib.pyplot as plt fig, ax = plt.subplots() for i, (mark, color) in enumerate(zip( ['s', 'o', 'D', 'v'], ['r', 'g', 'b', 'purple'])): ax.plot(i+1, i+1, color=color, marker=mark, markerfacecolor='None', markeredgecolor=color, linestyle='None', label=i) ax.set_xlim(0, 5) ax.set_ylim(0, 5) ax.legend(numpoints=1) plt.show() ``` Since you're only plotting single points, you can't see the line attribute except for in the legend.", "best_answers_score":0.8, "library_name":"matplotlib", "question_url":"https:\/\/stackoverflow.com\/questions\/21285885\/remove-line-through-legend-marker-with-plot", "best_answers_votes":125, "question_length":531, "response_length":571 }, { "question":"Matplotlib\/Pandas error using histogram I have a problem making histograms from pandas series objects and I can't understand why it does not work. The code has worked fine before but now it does not. Here is a bit of my code (specifically, a pandas series object I'm trying to make a histogram of): ``` type(dfj2_MARKET1['VSPD2_perc']) ``` which outputs the result: pandas.core.series.Series Here's my plotting code: ``` fig, axes = plt.subplots(1, 7, figsize=(30,4)) axes[0].hist(dfj2_MARKET1['VSPD1_perc'],alpha=0.9, color='blue') axes[0].grid(True) axes[0].set_title(MARKET1 + ' 5-40 km \/ h') ``` Error message: ``` AttributeError Traceback (most recent call last) in () 1 fig, axes = plt.subplots(1, 7, figsize=(30,4)) 2 ----> 3 axes[1].hist(dfj2_MARKET1['VSPD2_perc'],alpha=0.9, color='blue') 4 axes[1].grid(True) 5 axes[1].set_xlabel('Time spent [%]') C:\\Python27\\lib\\site-packages\\matplotlib\\axes.pyc in hist(self, x, bins, range, normed, weights, cumulative, bottom, histtype, align, orientation, rwidth, log, color, label, stacked, **kwargs) 8322 # this will automatically overwrite bins, 8323 # so that each histogram uses the same bins -> 8324 m, bins = np.histogram(x[i], bins, weights=w[i], **hist_kwargs) 8325 m = m.astype(float) # causes problems later if it's an int 8326 
if mlast is None: C:\\Python27\\lib\\site-packages\\numpy\\lib\\function_base.pyc in histogram(a, bins, range, normed, weights, density) 158 if (mn > mx): 159 raise AttributeError( --> 160 'max must be larger than min in range parameter.') 161 162 if not iterable(bins): AttributeError: max must be larger than min in range parameter. ```", "response":"This error occurs among other things when you have NaN values in the Series. Could that be the case? These NaN's are not handled well by the hist function of matplotlib. For example: ``` s = pd.Series([1,2,3,2,2,3,5,2,3,2,np.nan]) fig, ax = plt.subplots() ax.hist(s, alpha=0.9, color='blue') ``` produces the same error AttributeError: max must be larger than min in range parameter. One option is eg to remove the NaN's before plotting. This will work: ``` ax.hist(s.dropna(), alpha=0.9, color='blue') ``` Another option is to use pandas hist method on your series and providing the axes[0] to the ax keyword: ``` dfj2_MARKET1['VSPD1_perc'].hist(ax=axes[0], alpha=0.9, color='blue') ```", "best_answers_score":0.8, "library_name":"matplotlib", "question_url":"https:\/\/stackoverflow.com\/questions\/20656663\/matplotlib-pandas-error-using-histogram", "best_answers_votes":131, "question_length":1621, "response_length":687 }, { "question":"Interactive matplotlib plot with two sliders I used matplotlib to create some plot, which depends on 8 variables. I would like to study how the plot changes when I change some of them. I created some script that calls the matplotlib one and generates different snapshots that later I convert into a movie, it is not bad, but a bit clumsy. I wonder if somehow I could interact with the plot regeneration using keyboard keys to increase \/ decrease values of some of the variables and see instantly how the plot changes. What is the best approach for this? 
Also if you can point me to interesting links or a link with a plot example with just two sliders?", "response":"In addition to what @triplepoint mentioned, have a look at the slider widget. There's an example on the matplotlib examples page. It's a graphical slider bar rather than keyboard bindings, but it works quite well for what you want to do. Also note that to guarantee the sliders and buttons remain responsive and not garbage-collected, references to the objects (amp_slider, freq_slider, etc.) should be maintained by yourself. (I'm making this community wiki, as I'm just copy-pasting from the example. This particular example teaches bad habits (e.g. from pylab import *), but it gets the point across. The example has been fixed to avoid the use of pylab.) ``` from numpy import pi, sin import numpy as np import matplotlib.pyplot as plt from matplotlib.widgets import Slider, Button, RadioButtons def signal(amp, freq): return amp * sin(2 * pi * freq * t) axis_color = 'lightgoldenrodyellow' fig = plt.figure() ax = fig.add_subplot(111) # Adjust the subplots region to leave some space for the sliders and buttons fig.subplots_adjust(left=0.25, bottom=0.25) t = np.arange(0.0, 1.0, 0.001) amp_0 = 5 freq_0 = 3 # Draw the initial plot # The 'line' variable is used for modifying the line later [line] = ax.plot(t, signal(amp_0, freq_0), linewidth=2, color='red') ax.set_xlim([0, 1]) ax.set_ylim([-10, 10]) # Add two sliders for tweaking the parameters # Define an axes area and draw a slider in it amp_slider_ax = fig.add_axes([0.25, 0.15, 0.65, 0.03], facecolor=axis_color) amp_slider = Slider(amp_slider_ax, 'Amp', 0.1, 10.0, valinit=amp_0) # Draw another slider freq_slider_ax = fig.add_axes([0.25, 0.1, 0.65, 0.03], facecolor=axis_color) freq_slider = Slider(freq_slider_ax, 'Freq', 0.1, 30.0, valinit=freq_0) # Define an action for modifying the line when any slider's value changes def sliders_on_changed(val): line.set_ydata(signal(amp_slider.val, freq_slider.val)) 
fig.canvas.draw_idle() amp_slider.on_changed(sliders_on_changed) freq_slider.on_changed(sliders_on_changed) # Add a button for resetting the parameters reset_button_ax = fig.add_axes([0.8, 0.025, 0.1, 0.04]) reset_button = Button(reset_button_ax, 'Reset', color=axis_color, hovercolor='0.975') def reset_button_on_clicked(mouse_event): freq_slider.reset() amp_slider.reset() reset_button.on_clicked(reset_button_on_clicked) # Add a set of radio buttons for changing color color_radios_ax = fig.add_axes([0.025, 0.5, 0.15, 0.15], facecolor=axis_color) color_radios = RadioButtons(color_radios_ax, ('red', 'blue', 'green'), active=0) def color_radios_on_clicked(label): line.set_color(label) fig.canvas.draw_idle() color_radios.on_clicked(color_radios_on_clicked) plt.show() ```", "best_answers_score":0.8, "library_name":"matplotlib", "question_url":"https:\/\/stackoverflow.com\/questions\/6697259\/interactive-matplotlib-plot-with-two-sliders", "best_answers_votes":87, "question_length":652, "response_length":2651 }, { "question":"\\text does not work in a matplotlib label I am using matplotlib together with latex labels for the axis, title and colorbar labels While it works really great most of the time, it has some issues when you have a formula using \\text. One really simple example. ``` from matplotlib import pyplot as plt plt.plot([1,2,3]) plt.title(r\"$f_{\\text{cor, r}}$\") plt.show() ``` This will result in an error message like: ``` IPython\/core\/formatters.py:239: FormatterWarning: Exception in image\/png formatter: f_{\\text{1cor, r}} ^ Unknown symbol: \\text (at char 3), (line:1, col:4) FormatterWarning, ``` Is there an easy way to use \\text in there?", "response":"\\text won't work because it requires the amsmath package (not included in mathtext - the math rendering engine of matplotlib). 
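The failure is easy to reproduce in isolation (an added sketch, not part of the original answer; it uses the non-interactive Agg backend so it runs headlessly):

```python
import matplotlib
matplotlib.use('Agg')
import matplotlib.pyplot as plt

fig, ax = plt.subplots()
ax.set_title(r'$f_{\mathrm{cor}}$')  # \mathrm is a mathtext command
fig.canvas.draw()                    # renders fine

ax.set_title(r'$f_{\text{cor}}$')    # \text is not known to mathtext
try:
    fig.canvas.draw()
    rejected = False
except Exception:
    rejected = True
print(rejected)  # True
```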
So you basically have two options: use latex based font rendering ```py from matplotlib import pyplot as plt import matplotlib as mpl mpl.rcParams['text.usetex'] = True mpl.rcParams['text.latex.preamble'] = [r'\\usepackage{amsmath}'] #for \\text command plt.plot([1,2,3]) plt.title(r\"$f_{\\text{cor, r}}$\") plt.show() ``` use mathtext but use \\mathrm instead of \\text ```py from matplotlib import pyplot as plt import matplotlib as mpl mpl.rcParams['text.usetex'] = False # not really needed plt.plot([1,2,3]) plt.title(r\"$f_{\\mathrm{cor, r}}$\") plt.show() ``` The latter approach creates a figure like Be aware that unlike with the \\text command, spaces inside the \\mathrm environment are not respected. If you want more space between the variables you have to use latex style commands (\\, \\;, ...).", "best_answers_score":0.8, "library_name":"matplotlib", "question_url":"https:\/\/stackoverflow.com\/questions\/23824687\/text-does-not-work-in-a-matplotlib-label", "best_answers_votes":101, "question_length":636, "response_length":924 }, { "question":"show origin axis (x,y) in matplotlib plot I have following simple plot, and I would like to display the origin axis (x, y). I already have grid, but I need the x, y axis to be emphasized. this is my code: ``` x = linspace(0.2,10,100) plot(x, 1\/x) plot(x, log(x)) axis('equal') grid() ``` I have seen this question. The accepted answer suggests to use \"Axis spine\" and just links to some example. The example is however too complicated, using subplots. I am unable to figure out, how to use \"Axis spine\" in my simple example.", "response":"Using subplots is not too complicated, the spines might be. 
Dumb, simple way: ``` %matplotlib inline import numpy as np import matplotlib.pyplot as plt x = np.linspace(0.2,10,100) fig, ax = plt.subplots() ax.plot(x, 1\/x) ax.plot(x, np.log(x)) ax.set_aspect('equal') ax.grid(True, which='both') ax.axhline(y=0, color='k') ax.axvline(x=0, color='k') ``` And I get: (you can't see the vertical axis since the lower x-limit is zero.) Alternative using simple spines ``` %matplotlib inline import numpy as np import matplotlib.pyplot as plt x = np.linspace(0.2,10,100) fig, ax = plt.subplots() ax.plot(x, 1\/x) ax.plot(x, np.log(x)) ax.set_aspect('equal') ax.grid(True, which='both') # set the x-spine (see below for more info on `set_position`) ax.spines['left'].set_position('zero') # turn off the right spine\/ticks ax.spines['right'].set_color('none') ax.yaxis.tick_left() # set the y-spine ax.spines['bottom'].set_position('zero') # turn off the top spine\/ticks ax.spines['top'].set_color('none') ax.xaxis.tick_bottom() ``` Alternative using seaborn (my favorite) ``` import numpy as np import matplotlib.pyplot as plt import seaborn seaborn.set(style='ticks') x = np.linspace(0.2,10,100) fig, ax = plt.subplots() ax.plot(x, 1\/x) ax.plot(x, np.log(x)) ax.set_aspect('equal') ax.grid(True, which='both') seaborn.despine(ax=ax, offset=0) # the important part here ``` Using the set_position method of a spine Here are the docs for a the set_position method of spines: Spine position is specified by a 2 tuple of (position type, amount). The position types are: 'outward' : place the spine out from the data area by the specified number of points. (Negative values specify placing the spine inward.) 'axes' : place the spine at the specified Axes coordinate (from 0.0-1.0). 'data' : place the spine at the specified data coordinate. 
Additionally, shorthand notations define a special positions: 'center' -> ('axes',0.5) 'zero' -> ('data', 0.0) So you can place, say the left spine anywhere with: ax.spines['left'].set_position((system, poisition)) where system is 'outward', 'axes', or 'data' and position in the place in that coordinate system.", "best_answers_score":0.8, "library_name":"matplotlib", "question_url":"https:\/\/stackoverflow.com\/questions\/25689238\/show-origin-axis-x-y-in-matplotlib-plot", "best_answers_votes":129, "question_length":524, "response_length":2140 }, { "question":"How to plot empirical CDF (ECDF) How can I plot the empirical CDF of an array of numbers with Matplotlib in Python? I'm looking for the CDF analog of Pylab\u2019s hist function. One thing I can think of is: ``` from scipy.stats import cumfreq a = array([...]) # my array of numbers num_bins = 20 b = cumfreq(a, num_bins) plt.plot(b) ```", "response":"If you like linspace and prefer one-liners, you can do: ``` plt.plot(np.sort(a), np.linspace(0, 1, len(a), endpoint=False)) ``` Given my tastes, I almost always do: ``` # a is the data array x = np.sort(a) y = np.arange(len(x))\/float(len(x)) plt.plot(x, y) ``` Which works for me even if there are >O(1e6) data values. If you really need to downsample I'd set ``` x = np.sort(a)[::down_sampling_step] ``` Edit to respond to comment\/edit on why I use endpoint=False or the y as defined above. The following are some technical details. The empirical CDF is usually formally defined as ``` CDF(x) = \"number of samples <= x\"\/\"number of samples\" ``` in order to exactly match this formal definition you would need to use y = np.arange(1,len(x)+1)\/float(len(x)) so that we get y = [1\/N, 2\/N ... 1]. This estimator is an unbiased estimator that will converge to the true CDF in the limit of infinite samples Wikipedia ref.. I tend to use y = [0, 1\/N, 2\/N ... 
(N-1)\/N] since: (a) it is easier to code\/more idiomatic, (b) it is still formally justified since one can always exchange CDF(x) with 1-CDF(x) in the convergence proof, and (c) it works with the (easy) downsampling method described above. In some particular cases, it is useful to define ``` y = (arange(len(x))+0.5)\/len(x) ``` which is intermediate between these two conventions. This, in effect, says \"there is a 1\/(2N) chance of a value less than the lowest one I've seen in my sample, and a 1\/(2N) chance of a value greater than the largest one I've seen so far.\" Note that the selection of this convention interacts with the where parameter used in plt.step if it seems more useful to display the CDF as a piecewise constant function. In order to exactly match the formal definition mentioned above, one would need to use where=pre with the suggested y=[0,1\/N..., 1-1\/N] convention, or where=post with the y=[1\/N, 2\/N ... 1] convention, but not the other way around. However, for large samples and reasonable distributions, the convention given in the main body of the answer is easy to write, is an unbiased estimator of the true CDF, and works with the downsampling methodology.", "best_answers_score":0.8, "library_name":"matplotlib", "question_url":"https:\/\/stackoverflow.com\/questions\/3209362\/how-to-plot-empirical-cdf-ecdf", "best_answers_votes":134, "question_length":331, "response_length":2138 }, { "question":"Use a loop to plot n charts Python I have a set of data that I load into python using a pandas dataframe. What I would like to do is create a loop that will print a plot for all the elements in their own frame, not all on one. My data is in an excel file structured in this fashion: ``` Index | DATE | AMB CO 1 | AMB CO 2 |...|AMB CO_n | TOTAL 1 | 1\/1\/12| 14 | 33 |...| 236 | 1600 . | ... | ... | ... |...| ... | ... . | ... | ... | ... |...| ... | ... . | ... | ... | ... |...| ... | ...
n ``` This is what I have for code so far: ``` import pandas as pd import matplotlib.pyplot as plt ambdf = pd.read_excel('Ambulance.xlsx', sheetname='Sheet2', index_col=0, na_values=['NA']) print type(ambdf) print ambdf print ambdf['EAS'] amb_plot = plt.plot(ambdf['EAS'], linewidth=2) plt.title('EAS Ambulance Numbers') plt.xlabel('Month') plt.ylabel('Count of Deliveries') print amb_plot for i in ambdf: print plt.plot(ambdf[i], linewidth = 2) ``` I am thinking of doing something like this: ``` for i in ambdf: ambdf_plot = plt.plot(ambdf, linewidth = 2) ``` The above was not remotely what i wanted and it stems from my unfamiliarity with Pandas, MatplotLib etc, looking at some documentation though to me it looks like matplotlib is not even needed (question 2) So A) How can I produce a plot of data for every column in my df and B) do I need to use matplotlib or should I just use pandas to do it all? Thank you,", "response":"Ok, so the easiest method to create several plots is this: ``` import matplotlib.pyplot as plt x=[[1,2,3,4],[1,2,3,4],[1,2,3,4],[1,2,3,4]] y=[[1,2,3,4],[1,2,3,4],[1,2,3,4],[1,2,3,4]] for i in range(len(x)): plt.figure() plt.plot(x[i],y[i]) # Show\/save figure as desired. plt.show() # Can show all four figures at once by calling plt.show() here, outside the loop. #plt.show() ``` Note that you need to create a figure every time or pyplot will plot in the first one created. If you want to create several data series all you need to do is: ``` import matplotlib.pyplot as plt plt.figure() x=[[1,2,3,4],[1,2,3,4],[1,2,3,4],[1,2,3,4]] y=[[1,2,3,4],[2,3,4,5],[3,4,5,6],[7,8,9,10]] plt.plot(x[0],y[0],'r',x[1],y[1],'g',x[2],y[2],'b',x[3],y[3],'k') ``` You could automate it by having a list of colours like ['r','g','b','k'] and then just calling both entries in this list and corresponding data to be plotted in a loop if you wanted to. 
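To make the colour-pairing idea just mentioned concrete, here is a minimal runnable sketch (not from the original answer): zip pairs each data series with its colour so a single loop plots them all. The Agg backend call is only there so the snippet runs headless.

```python
import matplotlib
matplotlib.use("Agg")  # non-interactive backend, only so the sketch runs anywhere
import matplotlib.pyplot as plt

x = [[1, 2, 3, 4], [1, 2, 3, 4], [1, 2, 3, 4], [1, 2, 3, 4]]
y = [[1, 2, 3, 4], [2, 3, 4, 5], [3, 4, 5, 6], [7, 8, 9, 10]]
colours = ['r', 'g', 'b', 'k']

fig, ax = plt.subplots()
# zip pairs each (x, y) series with its colour; one loop draws all four lines
for xi, yi, colour in zip(x, y, colours):
    ax.plot(xi, yi, color=colour)
```

The same pattern scales to any number of series; wrapping the colour list in itertools.cycle would repeat it if there are more series than colours.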
If you just want to programmatically add data series to one plot, something like this will do it (no new figure is created each time, so everything is plotted in the same figure): ``` import matplotlib.pyplot as plt x=[[1,2,3,4],[1,2,3,4],[1,2,3,4],[1,2,3,4]] y=[[1,2,3,4],[2,3,4,5],[3,4,5,6],[7,8,9,10]] colours=['r','g','b','k'] plt.figure() # In this example, all the plots will be in one figure. for i in range(len(x)): plt.plot(x[i],y[i],colours[i]) plt.show() ``` If anything, matplotlib has a very good documentation page with plenty of examples. 17 Dec 2019: added plt.show() and plt.figure() calls to clarify this part of the story.", "best_answers_score":0.8, "library_name":"matplotlib", "question_url":"https:\/\/stackoverflow.com\/questions\/19189488\/use-a-loop-to-plot-n-charts-python", "best_answers_votes":104, "question_length":1408, "response_length":1572 }, { "question":"Matplotlib showing x-tick labels overlapping Have a look at the graph below: It's a subplot of this larger figure: I see two problems with it. First, the x-axis labels overlap with one another (this is my major issue). Second, the location of the x-axis minor gridlines seems a bit wonky. On the left of the graph, they look properly spaced. But on the right, they seem to be crowding the major gridlines...as if the major gridline locations aren't proper multiples of the minor tick locations. My setup is that I have a DataFrame called df which has a DatetimeIndex on the rows and a column called value which contains floats. I can provide an example of the df contents in a gist if necessary. A dozen or so lines of df are at the bottom of this post for reference.
Here's the code that produces the figure: ``` now = dt.datetime.now() fig, axes = plt.subplots(2, 2, figsize=(15, 8), dpi=200) for i, d in enumerate([360, 30, 7, 1]): ax = axes.flatten()[i] earlycut = now - relativedelta(days=d) data = df.loc[df.index>=earlycut, :] ax.plot(data.index, data['value']) ax.xaxis_date() ax.get_xaxis().set_minor_locator(mpl.ticker.AutoMinorLocator()) ax.get_yaxis().set_minor_locator(mpl.ticker.AutoMinorLocator()) ax.grid(b=True, which='major', color='w', linewidth=1.5) ax.grid(b=True, which='minor', color='w', linewidth=0.75) ``` What is my best option here to get the x-axis labels to stop overlapping each other (in each of the four subplots)? Also, separately (but less urgently), what's up with the minor tick issue in the top-left subplot? I am on Pandas 0.13.1, numpy 1.8.0, and matplotlib 1.4.x. Here's a small snippet of df for reference: ``` id scale tempseries_id value timestamp 2014-11-02 14:45:10.302204+00:00 7564 F 1 68.0000 2014-11-02 14:25:13.532391+00:00 7563 F 1 68.5616 2014-11-02 14:15:12.102229+00:00 7562 F 1 68.9000 2014-11-02 14:05:13.252371+00:00 7561 F 1 69.0116 2014-11-02 13:55:11.792191+00:00 7560 F 1 68.7866 2014-11-02 13:45:10.782227+00:00 7559 F 1 68.6750 2014-11-02 13:35:10.972248+00:00 7558 F 1 68.4500 2014-11-02 13:25:10.362213+00:00 7557 F 1 68.1116 2014-11-02 13:15:10.822247+00:00 7556 F 1 68.2250 2014-11-02 13:05:10.102200+00:00 7555 F 1 68.5616 2014-11-02 12:55:10.292217+00:00 7554 F 1 69.0116 2014-11-02 12:45:10.382226+00:00 7553 F 1 69.3500 2014-11-02 12:35:10.642245+00:00 7552 F 1 69.2366 2014-11-02 12:25:12.642255+00:00 7551 F 1 69.1250 2014-11-02 12:15:11.122382+00:00 7550 F 1 68.7866 2014-11-02 12:05:11.332224+00:00 7549 F 1 68.5616 2014-11-02 11:55:11.662311+00:00 7548 F 1 68.2250 2014-11-02 11:45:11.122193+00:00 7547 F 1 68.4500 2014-11-02 11:35:11.162271+00:00 7546 F 1 68.7866 2014-11-02 11:25:12.102211+00:00 7545 F 1 69.2366 2014-11-02 11:15:10.422226+00:00 7544 F 1 69.4616 2014-11-02 
11:05:11.412216+00:00 7543 F 1 69.3500 2014-11-02 10:55:10.772212+00:00 7542 F 1 69.1250 2014-11-02 10:45:11.332220+00:00 7541 F 1 68.7866 2014-11-02 10:35:11.332232+00:00 7540 F 1 68.5616 2014-11-02 10:25:11.202411+00:00 7539 F 1 68.2250 2014-11-02 10:15:11.932326+00:00 7538 F 1 68.5616 2014-11-02 10:05:10.922229+00:00 7537 F 1 68.9000 2014-11-02 09:55:11.602357+00:00 7536 F 1 69.3500 ``` Edit: Trying fig.autofmt_xdate(): I don't think this is going to do the trick. This seems to use the same x-tick labels for both graphs on the left and also for both graphs on the right, which is not correct given my data. Please see the problematic output below:", "response":"Ok, finally got it working. The trick was to use plt.setp to manually rotate the tick labels. Using fig.autofmt_xdate() did not work as it does some unexpected things when you have multiple subplots in your figure. Here's the working code with its output: ``` for i, d in enumerate([360, 30, 7, 1]): ax = axes.flatten()[i] earlycut = now - relativedelta(days=d) data = df.loc[df.index>=earlycut, :] ax.plot(data.index, data['value']) ax.get_xaxis().set_minor_locator(mpl.ticker.AutoMinorLocator()) ax.get_yaxis().set_minor_locator(mpl.ticker.AutoMinorLocator()) ax.grid(b=True, which='major', color='w', linewidth=1.5) ax.grid(b=True, which='minor', color='w', linewidth=0.75) plt.setp(ax.get_xticklabels(), rotation=30, horizontalalignment='right') fig.tight_layout() ``` By the way, the comment earlier about some matplotlib things taking forever is very interesting here. I'm using a raspberry pi to act as a weather station at a remote location. It's collecting the data and serving the results via the web.
And boy oh boy, it's really wheezing trying to put out these graphics.", "best_answers_score":0.8, "library_name":"matplotlib", "question_url":"https:\/\/stackoverflow.com\/questions\/26700598\/matplotlib-showing-x-tick-labels-overlapping", "best_answers_votes":74, "question_length":3406, "response_length":1082 }, { "question":"What does \"rc\" in matplotlib's rcParams stand for? [closed] Closed. This question does not meet Stack Overflow guidelines. It is not currently accepting answers. This question does not appear to be about a specific programming problem, a software algorithm, or software tools primarily used by programmers. If you believe the question would be on-topic on another Stack Exchange site, you can leave a comment to explain where the question may be able to be answered. Closed 9 years ago. The community reviewed whether to reopen this question last year and left it closed: Original close reason(s) were not resolved Improve this question matplotlibrc configuration files are used to customize all kinds of properties in matplotlib. One can change the rc settings to customize the default parameters e.g: ``` matplotlib.rcParams['font.family'] = 'times new roman' ``` ... but what does \"rc\" stand for? I can't find any explanation in the docs", "response":"It's common to end configuration files in 'rc' - e.g. '.xinitrc', '.vimrc' and '.bashrc'. It stems from practice of having your configs executable - they are automatically Run at startup and they Configure your stuff. 
This started long ago, even before Unix: [Unix: from runcom files on the CTSS system 1962-63, via the startup script \/etc\/rc] Script file containing startup instructions for an application program (or an entire operating system), usually a text file containing commands of the sort that might have been invoked manually once the system was running but are to be executed automatically each time the system starts up.", "best_answers_score":0.8, "library_name":"matplotlib", "question_url":"https:\/\/stackoverflow.com\/questions\/37728087\/what-does-rc-in-matplotlibs-rcparams-stand-for", "best_answers_votes":79, "question_length":940, "response_length":634 }, { "question":"How to move a tick label I would like to move some ticks' labels horizontally along the x-axis, without moving the corresponding ticks. More specifically, when rotating labels with plt.setp, the centers of the labels' text stay aligned with the ticks. I would like to shift those labels to the right, so that the near ends of the labels get aligned instead as suggested on the image below. I am aware of this post and this one, however the answers are interesting kludges rather than strict answers to the question. 
my code: ``` import matplotlib.pyplot as plt import numpy as np import datetime # my fake data dates = np.array([datetime.datetime(2000,1,1) + datetime.timedelta(days=i) for i in range(365*5)]) data = np.sin(np.arange(365*5)\/365.0*2*np.pi - 0.25*np.pi) + np.random.rand(365*5) \/3 # creates fig with 2 subplots fig = plt.figure(figsize=(10.0, 6.0)) ax = plt.subplot2grid((2,1), (0, 0)) ax2 = plt.subplot2grid((2,1), (1, 0)) ## plot dates ax2.plot_date( dates, data ) # rotates labels plt.setp( ax2.xaxis.get_majorticklabels(), rotation=-45 ) # try to shift labels to the right ax2.xaxis.get_majorticklabels()[2].set_y(-.1) ax2.xaxis.get_majorticklabels()[2].set_x(10**99) plt.show() ``` Strangely enough, set_y behaves as expected, but even if I set x to a fantasillion, the labels would not move by one iota. (The use of plot_date may introduce additional confusion, but the same actually happens with plot.)", "response":"First of all, let's use a mcve to show the problem. ``` import numpy as np import datetime import matplotlib.pyplot as plt plt.rcParams[\"date.autoformatter.month\"] = \"%b %Y\" # my fake data dates = np.array([datetime.datetime(2000,1,1) + datetime.timedelta(days=i) for i in range(365)]) data = np.sin(np.arange(365)\/365.0*2*np.pi - 0.25*np.pi) + np.random.rand(365) \/3 # creates fig with 2 subplots fig, ax = plt.subplots(figsize=(6,2)) ## plot dates ax.plot_date( dates, data ) # rotates labels plt.setp( ax.xaxis.get_majorticklabels(), rotation=-45 ) plt.tight_layout() plt.show() ``` Now as other anwers pointed out already, you may use horizontal alignment of the text. ``` # rotates labels and aligns them horizontally to left plt.setp( ax.xaxis.get_majorticklabels(), rotation=-45, ha=\"left\" ) ``` You may use the rotation_mode argument to let the rotation happen about the top left point of the text, giving a slightly nicer result in this case. 
``` # rotates labels and aligns them horizontally to left plt.setp( ax.xaxis.get_majorticklabels(), rotation=-45, ha=\"left\", rotation_mode=\"anchor\") ``` In case those options are not fine grained enough, i.e. you want to position the labels more accurately, e.g. shifting it to the side by some points, you may use a transform. The following would offset the label by 5 points in horizontal direction, using a matplotlib.transforms.ScaledTranslation. ``` import matplotlib.transforms plt.setp( ax.xaxis.get_majorticklabels(), rotation=-45) # Create offset transform by 5 points in x direction dx = 5\/72.; dy = 0\/72. offset = matplotlib.transforms.ScaledTranslation(dx, dy, fig.dpi_scale_trans) # apply offset transform to all x ticklabels. for label in ax.xaxis.get_majorticklabels(): label.set_transform(label.get_transform() + offset) ``` The advantage of this, compared to e.g. the solution provided by @explorerDude is that the offset is independent on the data in the graph, such that it is generally applicable to any plot and would look the same for a given fontsize.", "best_answers_score":0.8, "library_name":"matplotlib", "question_url":"https:\/\/stackoverflow.com\/questions\/28615887\/how-to-move-a-tick-label", "best_answers_votes":108, "question_length":1424, "response_length":2026 }, { "question":"Bar-Plot with two bars and two y-axis I have a DataFrame looking like this: ``` amount price age A 40929 4066443 B 93904 9611272 C 188349 19360005 D 248438 24335536 E 205622 18888604 F 140173 12580900 G 76243 6751731 H 36859 3418329 I 29304 2758928 J 39768 3201269 K 30350 2867059 ``` Now I'd like to plot a bar-plot with the age on the x-axis as labels. For each x-tick there should be two bars, one bar for the amount, and one for the price. I can get this working by using simply: ``` df.plot(kind='bar') ``` The problem is the scaling. The prices are so much higher that I can not really identify the amount in that graph, see: Thus I'd like a second y-axis. 
I tried it using: ``` df.loc[:,'amount'].plot(kind='bar') df.loc[:,'price'].plot(kind='bar',secondary_y=True) ``` but this just overwrites the bars and does NOT place them side-by-side. Is there any way to do this without having to access the lower-level matplotlib (which would be possible obviously by placing the bars side by side manually)? For now, I'm using two single plots within subplots: ``` df.plot(kind='bar',grid=True,subplots=True,sharex=True); ``` resulting in:", "response":"Using the new pandas release (0.14.0 or later) the below code will work. To create the two axis I have manually created two matplotlib axes objects (ax and ax2) which will serve for both bar plots. When plotting a Dataframe you can choose the axes object using ax=.... Also in order to prevent the two plots from overlapping I have modified where they align with the position keyword argument, this defaults to 0.5 but that would mean the two bar plots overlapping. ``` import matplotlib.pyplot as plt import numpy as np import pandas as pd from io import StringIO s = StringIO(\"\"\" amount price A 40929 4066443 B 93904 9611272 C 188349 19360005 D 248438 24335536 E 205622 18888604 F 140173 12580900 G 76243 6751731 H 36859 3418329 I 29304 2758928 J 39768 3201269 K 30350 2867059\"\"\") df = pd.read_csv(s, index_col=0, delimiter=' ', skipinitialspace=True) fig = plt.figure() # Create matplotlib figure ax = fig.add_subplot(111) # Create matplotlib axes ax2 = ax.twinx() # Create another axes that shares the same x-axis as ax. 
width = 0.4 df.amount.plot(kind='bar', color='red', ax=ax, width=width, position=1) df.price.plot(kind='bar', color='blue', ax=ax2, width=width, position=0) ax.set_ylabel('Amount') ax2.set_ylabel('Price') plt.show() ```", "best_answers_score":0.8, "library_name":"matplotlib", "question_url":"https:\/\/stackoverflow.com\/questions\/24183101\/bar-plot-with-two-bars-and-two-y-axis", "best_answers_votes":110, "question_length":1139, "response_length":1244 }, { "question":"How to show matplotlib plots? I am sure the configuration of matplotlib for python is correct since I have used it to plot some figures. But today it just stop working for some reason. I tested it with really simple code like: ``` import matplotlib.pyplot as plt import numpy as np x = np.arange(0, 5, 0.1) y = np.sin(x) plt.plot(x, y) ``` There's no error but just no figure shown up. I am using python 2.6, Eclipse in Ubuntu", "response":"In matplotlib you have two main options: Create your plots and draw them at the end: ``` import matplotlib.pyplot as plt plt.plot(x, y) plt.plot(z, t) plt.show() ``` Create your plots and draw them as soon as they are created: ``` import matplotlib.pyplot as plt from matplotlib import interactive interactive(True) plt.plot(x, y) raw_input('press return to continue') plt.plot(z, t) raw_input('press return to end') ```", "best_answers_score":0.8, "library_name":"matplotlib", "question_url":"https:\/\/stackoverflow.com\/questions\/8575062\/how-to-show-matplotlib-plots", "best_answers_votes":96, "question_length":426, "response_length":420 }, { "question":"matplotlib: make plus sign thicker In Matplotlib, I would like to draw a thick plus sign (or a cross), but the one provided in the marker set is too thin. Even as I increase its size, it doesn't get any thicker. For example: The lines of code drawing the red plus sign are: ``` # Draw median marker. 
if plot_opts.get('bean_show_median', True): ax.plot(pos, np.median(pos_data), marker=plot_opts.get('bean_median_marker', '+'), color=plot_opts.get('bean_median_color', 'r')) ``` If I add an extra parameter markersize=20, the marker will only stretch. It will be as thin as before. Can I make it thick?", "response":"You can use markeredgewidth (or mew). You'll want to combine it with markersize, otherwise you get thick but tiny markers. For example: ``` plt.plot([2,4,6,1,3,5], '+', mew=10, ms=20) ```", "best_answers_score":0.8, "library_name":"matplotlib", "question_url":"https:\/\/stackoverflow.com\/questions\/22172565\/matplotlib-make-plus-sign-thicker", "best_answers_votes":129, "question_length":601, "response_length":187 }, { "question":"How can I change the x axis so there is no white space? So currently learning how to import data and work with it in matplotlib and I am having trouble even tho I have the exact code from the book. This is what the plot looks like, but my question is how can I get it where there is no white space between the start and the end of the x-axis. Here is the code: ``` import csv from matplotlib import pyplot as plt from datetime import datetime # Get dates and high temperatures from file. filename = 'sitka_weather_07-2014.csv' with open(filename) as f: reader = csv.reader(f) header_row = next(reader) #for index, column_header in enumerate(header_row): #print(index, column_header) dates, highs = [], [] for row in reader: current_date = datetime.strptime(row[0], \"%Y-%m-%d\") dates.append(current_date) high = int(row[1]) highs.append(high) # Plot data. fig = plt.figure(dpi=128, figsize=(10,6)) plt.plot(dates, highs, c='red') # Format plot. 
plt.title(\"Daily high temperatures, July 2014\", fontsize=24) plt.xlabel('', fontsize=16) fig.autofmt_xdate() plt.ylabel(\"Temperature (F)\", fontsize=16) plt.tick_params(axis='both', which='major', labelsize=16) plt.show() ```", "response":"There is an automatic margin set at the edges, which ensures the data to be nicely fitting within the axis spines. In this case such a margin is probably desired on the y axis. By default it is set to 0.05 in units of axis span. To set the margin to 0 on the x axis, use ``` plt.margins(x=0) ``` or ``` ax.margins(x=0) ``` depending on the context. Also see the documentation. In case you want to get rid of the margin in the whole script, you can use ``` plt.rcParams['axes.xmargin'] = 0 ``` at the beginning of your script (same for y of course). If you want to get rid of the margin entirely and forever, you might want to change the according line in the matplotlib rc file: ``` axes.xmargin : 0 axes.ymargin : 0 ``` Example ```py import seaborn as sns import matplotlib.pyplot as plt tips = sns.load_dataset('tips') fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(10, 4)) tips.plot(ax=ax1, title='Default Margin') tips.plot(ax=ax2, title='Margins: x=0') ax2.margins(x=0) ``` Alternatively, use plt.xlim(..) or ax.set_xlim(..) to manually set the limits of the axes such that there is no white space left.", "best_answers_score":0.8, "library_name":"matplotlib", "question_url":"https:\/\/stackoverflow.com\/questions\/42045767\/how-can-i-change-the-x-axis-so-there-is-no-white-space", "best_answers_votes":137, "question_length":1168, "response_length":1107 }, { "question":"Python - How to show graph in Visual Studio Code itself? When I try to run this example: ``` import matplotlib.pyplot as plt import matplotlib as mpl import numpy as np x = np.linspace(0, 20, 100) plt.plot(x, np.sin(x)) plt.show() ``` I see the result in a new window. Is there any way to see the result graphs in the Visual Studio Code itself directly? 
Thank you.", "response":"Yes, if you use notebook interface. Basically, install Python Extension Pack, it includes Jupyter extension, put your code in the editor, put #%% at the top of your code, you'll get Run cell clickable, click it, and you'll get result in the other window Here is the link to the extension: https:\/\/marketplace.visualstudio.com\/items?itemName=donjayamanne.jupyter UPDATE Ok, apparently Microsoft hired Don Jayamanne and he's working on Python and Jupyter for VS Code. And last month they (MS) improved their python extension to support Jupyter notebooks right in the Visual Code together with .ipynb import and export. Get extension here and check blog post how to use it with Jupyter notebooks. UPDATE II Another one is Neuron, under development, but looks nice - again, notebooks in VS Code, with graphs, markdown etc. Get it here UPDATE III Using interactive matplotlib graphs works as well in VS Code, check https:\/\/matplotlib.org\/ipympl\/ for details", "best_answers_score":0.8, "library_name":"matplotlib", "question_url":"https:\/\/stackoverflow.com\/questions\/49992300\/python-how-to-show-graph-in-visual-studio-code-itself", "best_answers_votes":116, "question_length":364, "response_length":952 }, { "question":"How to hide in IPython notebook [duplicate] This question already has answers here: Disable the output of matplotlib pyplot (4 answers) Closed 1 year ago. I am plotting a NumPy array of values, I, using IPython notebook in %matplotlib inline mode with the plot command plt.plot(I,'o'). The resulting output is: ``` Out[159]: [, , , , , , , .... .... ] ``` Then my plot shows up below these lines of output. Is there a way to just show the plot and hide the from the output?", "response":"You can use a semi-colon ; to end the line. This suppresses the unwanted output when generating plots: ``` plt.plot(I,'o'); ``` In general, using a semi-colon stops IPython from printing any output value from that line of a code block. 
For example, executing the cell containing the code 1+1; would not output 2. An alternative way would be to bind a variable to the plot: ``` _ = plt.plot(a) ``` This way, IPython only shows you the plots and the name _ is bound to the unwanted output.", "best_answers_score":0.8, "library_name":"matplotlib", "question_url":"https:\/\/stackoverflow.com\/questions\/25790062\/how-to-hide-matplotlib-lines-line2d-in-ipython-notebook", "best_answers_votes":118, "question_length":476, "response_length":491 }, { "question":"How to shade region under the curve in matplotlib I want to use matplotlib to illustrate the definite integral between two regions: x_0, and x_1. How can I shade a region under a curve in matplotlib from x=-1, to x=1 given the following plot ``` import numpy as np from matplotlib import pyplot as plt def f(t): return t * t t = np.arange(-4,4,1\/40.) plt.plot(t,f(t)) ```", "response":"The final answer I came up with is to use fill_between. I thought there would have been a simple shade between type method, but this does exactly what I want. ``` section = np.arange(-1, 1, 1\/20.) plt.fill_between(section,f(section)) ```", "best_answers_score":0.8, "library_name":"matplotlib", "question_url":"https:\/\/stackoverflow.com\/questions\/10046262\/how-to-shade-region-under-the-curve-in-matplotlib", "best_answers_votes":84, "question_length":371, "response_length":237 }, { "question":"Colorize Voronoi Diagram I'm trying to colorize a Voronoi Diagram created using scipy.spatial.Voronoi.
Here's my code: ``` import numpy as np import matplotlib.pyplot as plt from scipy.spatial import Voronoi, voronoi_plot_2d # make up data points points = np.random.rand(15,2) # compute Voronoi tesselation vor = Voronoi(points) # plot voronoi_plot_2d(vor) # colorize for region in vor.regions: if not -1 in region: polygon = [vor.vertices[i] for i in region] plt.fill(*zip(*polygon)) plt.show() ``` The resulting image: As you can see some of the Voronoi regions at the border of the image are not colored. That is because some indices to the Voronoi vertices for these regions are set to -1, i.e., for those vertices outside the Voronoi diagram. According to the docs: regions: (list of list of ints, shape (nregions, *)) Indices of the Voronoi vertices forming each Voronoi region. -1 indicates vertex outside the Voronoi diagram. In order to colorize these regions as well, I've tried to just remove these \"outside\" vertices from the polygon, but that didn't work. I think, I need to fill in some points at the border of the image region, but I can't seem to figure out how to achieve this reasonably. Can anyone help?", "response":"The Voronoi data structure contains all the necessary information to construct positions for the \"points at infinity\". Qhull also reports them simply as -1 indices, so Scipy doesn't compute them for you. https:\/\/gist.github.com\/pv\/8036995 http:\/\/nbviewer.ipython.org\/gist\/pv\/8037100 ``` import numpy as np import matplotlib.pyplot as plt from scipy.spatial import Voronoi def voronoi_finite_polygons_2d(vor, radius=None): \"\"\" Reconstruct infinite voronoi regions in a 2D diagram to finite regions. Parameters ---------- vor : Voronoi Input diagram radius : float, optional Distance to 'points at infinity'. Returns ------- regions : list of tuples Indices of vertices in each revised Voronoi regions. vertices : list of tuples Coordinates for revised Voronoi vertices. 
Same as coordinates of input vertices, with 'points at infinity' appended to the end. \"\"\" if vor.points.shape[1] != 2: raise ValueError(\"Requires 2D input\") new_regions = [] new_vertices = vor.vertices.tolist() center = vor.points.mean(axis=0) if radius is None: radius = vor.points.ptp().max() # Construct a map containing all ridges for a given point all_ridges = {} for (p1, p2), (v1, v2) in zip(vor.ridge_points, vor.ridge_vertices): all_ridges.setdefault(p1, []).append((p2, v1, v2)) all_ridges.setdefault(p2, []).append((p1, v1, v2)) # Reconstruct infinite regions for p1, region in enumerate(vor.point_region): vertices = vor.regions[region] if all(v >= 0 for v in vertices): # finite region new_regions.append(vertices) continue # reconstruct a non-finite region ridges = all_ridges[p1] new_region = [v for v in vertices if v >= 0] for p2, v1, v2 in ridges: if v2 < 0: v1, v2 = v2, v1 if v1 >= 0: # finite ridge: already in the region continue # Compute the missing endpoint of an infinite ridge t = vor.points[p2] - vor.points[p1] # tangent t \/= np.linalg.norm(t) n = np.array([-t[1], t[0]]) # normal midpoint = vor.points[[p1, p2]].mean(axis=0) direction = np.sign(np.dot(midpoint - center, n)) * n far_point = vor.vertices[v2] + direction * radius new_region.append(len(new_vertices)) new_vertices.append(far_point.tolist()) # sort region counterclockwise vs = np.asarray([new_vertices[v] for v in new_region]) c = vs.mean(axis=0) angles = np.arctan2(vs[:,1] - c[1], vs[:,0] - c[0]) new_region = np.array(new_region)[np.argsort(angles)] # finish new_regions.append(new_region.tolist()) return new_regions, np.asarray(new_vertices) # make up data points np.random.seed(1234) points = np.random.rand(15, 2) # compute Voronoi tesselation vor = Voronoi(points) # plot regions, vertices = voronoi_finite_polygons_2d(vor) print \"--\" print regions print \"--\" print vertices # colorize for region in regions: polygon = vertices[region] plt.fill(*zip(*polygon), alpha=0.4) plt.plot(points[:,0], points[:,1], 'ko')
plt.xlim(vor.min_bound[0] - 0.1, vor.max_bound[0] + 0.1) plt.ylim(vor.min_bound[1] - 0.1, vor.max_bound[1] + 0.1) plt.show() ```", "best_answers_score":0.8, "library_name":"matplotlib", "question_url":"https:\/\/stackoverflow.com\/questions\/20515554\/colorize-voronoi-diagram", "best_answers_votes":81, "question_length":1222, "response_length":2885 }, { "question":"How can I use seaborn without changing the matplotlib defaults? I am trying to use seaborn, because of its distplot function. But I prefer the default matplotlib settings. When I import seaborn, it changes automatically the appearance of my figure. How can I use seaborn functions without changing the look of the plots?", "response":"Version 0.8 (july 2017) changed this behaviour. From https:\/\/seaborn.pydata.org\/whatsnew.html#v0-8-0-july-2017: The default (seaborn) style is no longer applied when seaborn is imported. It is now necessary to explicitly call set() or one or more of set_style(), set_context(), and set_palette(). Correspondingly, the seaborn.apionly module has been deprecated. For older versions, Import seaborn like this: ``` import seaborn.apionly as sns ``` and then you should be able to use sns.distplot but maintain the default matplotlib styling + your personal rc configuration.", "best_answers_score":0.8, "library_name":"matplotlib", "question_url":"https:\/\/stackoverflow.com\/questions\/25393936\/how-can-i-use-seaborn-without-changing-the-matplotlib-defaults", "best_answers_votes":82, "question_length":320, "response_length":571 }, { "question":"x, = ... - is this trailing comma the comma operator? I don't understand what the comma after variable lines means: http:\/\/matplotlib.org\/examples\/animation\/simple_anim.html ``` line, = ax.plot(x, np.sin(x)) ``` If I remove the comma, then the program is broken. 
Full code from the URL given above: ``` import numpy as np import matplotlib.pyplot as plt import matplotlib.animation as animation fig = plt.figure() ax = fig.add_subplot(111) x = np.arange(0, 2*np.pi, 0.01) # x-array line, = ax.plot(x, np.sin(x)) def animate(i): line.set_ydata(np.sin(x+i\/10.0)) # update the data return line, #Init only required for blitting to give a clean slate. def init(): line.set_ydata(np.ma.array(x, mask=True)) return line, ani = animation.FuncAnimation(fig, animate, np.arange(1, 200), init_func=init, interval=25, blit=True) plt.show() ``` According to http:\/\/docs.python.org\/3\/tutorial\/datastructures.html#tuples-and-sequences a comma after a variable seems to be related to tuples containing only one item.", "response":"ax.plot() returns a tuple with one element. By adding the comma to the assignment target list, you ask Python to unpack the return value and assign it to each variable named to the left in turn. Most often, you see this being applied for functions with more than one return value: ``` base, ext = os.path.splitext(filename) ``` The left-hand side can, however, contain any number of elements, and provided it is a tuple or list of variables the unpacking will take place. In Python, it's the comma that makes something a tuple: ``` >>> 1 1 >>> 1, (1,) ``` The parenthesis are optional in most locations. You could rewrite the original code with parenthesis without changing the meaning: ``` (line,) = ax.plot(x, np.sin(x)) ``` Or you could use list syntax too: ``` [line] = ax.plot(x, np.sin(x)) ``` Or, you could recast it to lines that do not use tuple unpacking: ``` line = ax.plot(x, np.sin(x))[0] ``` or ``` lines = ax.plot(x, np.sin(x)) def animate(i): lines[0].set_ydata(np.sin(x+i\/10.0)) # update the data return lines #Init only required for blitting to give a clean slate. 
def init(): lines[0].set_ydata(np.ma.array(x, mask=True)) return lines ``` For full details on how assignments work with respect to unpacking, see the Assignment Statements documentation.", "best_answers_score":0.8, "library_name":"matplotlib", "question_url":"https:\/\/stackoverflow.com\/questions\/16037494\/x-is-this-trailing-comma-the-comma-operator", "best_answers_votes":85, "question_length":1001, "response_length":1270 }, { "question":"Jupyter Notebook: interactive plot with widgets I am trying to generate an interactive plot that depends on widgets. The problem I have is that when I change parameters using the slider, a new plot is done after the previous one, instead I would expect only one plot changing according to the parameters. Example: ``` from ipywidgets import interact, interactive, fixed, interact_manual import ipywidgets as widgets import matplotlib.pyplot as plt %matplotlib inline import numpy as np def plot_func(freq): x = np.linspace(0, 2*np.pi) y = np.sin(x * freq) plt.plot(x, y) interact(plot_func, freq = widgets.FloatSlider(value=7.5, min=1, max=5.0, step=0.5)) ``` After moving the slider to 4.0, I have: while I just want one figure to change as I move the slider. How can I achieve this? (I am using Python 2.7, matplotlib 2.0 and I have just updated notebook and jupyter to the latest version. let me know if further info is needed.)", "response":"As you want to change the figure, instead of creating a new one, may I suggest the following way: Use an interactive backend; %matplotlib notebook Update the line in the plot, instead of drawing new ones. 
So the code could look something like this: ``` %matplotlib notebook from ipywidgets import * import numpy as np import matplotlib.pyplot as plt x = np.linspace(0, 2 * np.pi) fig = plt.figure() ax = fig.add_subplot(1, 1, 1) line, = ax.plot(x, np.sin(x)) def update(w = 1.0): line.set_ydata(np.sin(w * x)) fig.canvas.draw_idle() interact(update); ``` Alternatively you may use plt.show() as in this answer.", "best_answers_score":0.8, "library_name":"matplotlib", "question_url":"https:\/\/stackoverflow.com\/questions\/44329068\/jupyter-notebook-interactive-plot-with-widgets", "best_answers_votes":76, "question_length":931, "response_length":610 }, { "question":"Remove seaborn lineplot legend title I would like to remove the title from my seaborn lineplot legend. I tried using this answer to no avail: ``` import matplotlib.pyplot as plt import seaborn as sns; sns.set() fmri = sns.load_dataset(\"fmri\") fig, ax = plt.subplots() g = sns.lineplot(x=\"timepoint\", y=\"signal\", hue=\"event\", data=fmri, ax=ax) ax.legend().set_title('') ``` I get the same if I try to set the title to None. Interestingly, setting the title to something else seems to prepend to the existing title: ``` ax.legend().set_title('Something else') ``` It almost looks like seaborn is treating the title as a hidden legend entry. How can I resolve this?", "response":"Important: This answer is about the case when a hue is used that appears as a legend title. In all other cases, the question itself already contains the usual way to get rid of a title. Indeed, seaborn is misusing a legend label as a (subgroup-)title. Hence the idea can be to either remove this label, or replace it with custom text. 
Replacing with custom text: ``` legend = ax.legend() legend.texts[0].set_text(\"Whatever else\") ``` Removing the label: ``` handles, labels = ax.get_legend_handles_labels() ax.legend(handles=handles[1:], labels=labels[1:]) ``` After having removed the label you may of course still set another (real) title: ``` handles, labels = ax.get_legend_handles_labels() ax.legend(handles=handles[1:], labels=labels[1:], title=\"Whatever else\") ```", "best_answers_score":0.8, "library_name":"matplotlib", "question_url":"https:\/\/stackoverflow.com\/questions\/51579215\/remove-seaborn-lineplot-legend-title", "best_answers_votes":111, "question_length":662, "response_length":771 }, { "question":"How to add vertical lines to a distribution plot Using the examples from seaborn.pydata.org and the Python DataScience Handbook, I'm able to produce a combined distribution plot with the following snippet: Code: ``` import pandas as pd import numpy as np import seaborn as sns import matplotlib.pyplot as plt # some settings sns.set_style(\"darkgrid\") # Create some data data = np.random.multivariate_normal([0, 0], [[5, 2], [2, 2]], size=2000) data = pd.DataFrame(data, columns=['x', 'y']) # Combined distributionplot sns.distplot(data['x']) sns.distplot(data['y']) ``` Plot: How can I combine this setup with vertical lines so that I can illustrate thresholds like this: I know I can do it with matplotlib like here Dynamic histogram subplots with line to mark target, but I really like the simplicity of seaborn plots and would like to know if it's possible to do it more elegantly (and yes, I know that seaborn builds on top of matplotlib).", "response":"Just use ``` plt.axvline(2.8, 0,0.17) ``` And the same for the other line Here instead of 0.17 you can put the maxima of your distribution using some variable such as maxx = max(data) or something similar. 2.8 is the position on the x-axis. Oh remember that the y-value has to be in between 0 and 1 where 1 is the top of the plot. 
You can rescale your values accordingly. Another obvious option is simply ``` plt.plot([2.8, 2.8], [0, max(data)]) ```", "best_answers_score":0.8, "library_name":"matplotlib", "question_url":"https:\/\/stackoverflow.com\/questions\/52334938\/how-to-add-vertical-lines-to-a-distribution-plot", "best_answers_votes":103, "question_length":943, "response_length":449 }, { "question":"Axes from plt.subplots() is a \"numpy.ndarray\" object and has no attribute \"plot\" The information below may be superfluous if you are trying to understand the error message. Please start off by reading the answer by @user707650. Using MatPlotLib, I wanted a generalizable script that creates the following from my data. A window containing a subplots arranged so that there are b subplots per column. I want to be able to change the values of a and b. If I have data for 2a subplots, I want 2 windows, each with the previously described \"a subplots arranged according to b subplots per column\". The x and y data I am plotting are floats stored in np.arrays and are structured as follows: The x data is always the same for all plots and is of length 5. ``` 'x_vector': [0.000, 0.005, 0.010, 0.020, 0.030, 0.040] ``` The y data of all plots are stored in y_vector where the data for the first plot is stored at indexes 0 through 5. The data for the second plot is stored at indexes 6 through 11. The third plot gets 12-18, the fourth 19-24, and so on. In total, for this dataset, I have 91 plots (i.e. 91*6 = 546 values stored in y_vector). Attempt: ``` import matplotlib.pyplot as plt # Options: plots_tot = 14 # Total number of plots. In reality there is going to be 7*13 = 91 plots. location_of_ydata = 6 # The values for the n:th plot can be found in the y_vector at index 'n*6' through 'n*6 + 6'. plots_window = 7 # Total number of plots per window. rows = 2 # Number of rows, i.e. number of subplots per column. 
# Calculating number of columns: prim_cols = plots_window \/ rows extra_cols = 0 if plots_window % rows > 0: extra_cols = 1 cols = prim_cols + extra_cols print 'cols:', cols print 'rows:', rows # Plotting: n=0 x=0 fig, ax = plt.subplots(rows, cols) while x <= plots_tot: ax[x].plot(x_vector, y_vector[n:(n+location_of_ydata)], 'ro') ``` Running this fails with: ``` AttributeError: 'numpy.ndarray' object has no attribute 'plot' ```", "response":"If you debug your program by simply printing ax, you'll quickly find out that ax is a two-dimensional array: one dimension for the rows, one for the columns. Thus, you need two indices to index ax to retrieve the actual AxesSubplot instance, like: ``` ax[1,1].plot(...) ``` If you want to iterate through the subplots in the way you do it now, you can do so by flattening ax first: ``` ax = ax.flatten() ``` and now ax is a one dimensional array. I don't know if rows or columns are stepped through first, but if it's the wrong way around, use the transpose: ``` ax = ax.T.flatten() ``` Of course, by now it makes more sense to simply create each subplot on the fly, because that already has an index, and the other two numbers are fixed: ``` for x in range(plots_tot): ax = plt.subplot(nrows, ncols, x+1) ``` Note: you have x <= plots_tot, but with x starting at 0, you'll get an IndexError next with your current code (after flattening your array). Matplotlib is (unfortunately) 1-indexed for subplots. I prefer using a 0-indexed variable (Python style), and just add +1 for the subplot index (like above).
This is what I have and returns is the set of data that I got from pandas. ``` ax=plt.gca() ax.set_axis_bgcolor('#cccccc') returns.plot() ```", "response":"The usual way to set the line color in matplotlib is to specify it in the plot command. This can either be done by a string after the data, e.g. \"r-\" for a red line, or by explicitely stating the color argument. ``` import matplotlib.pyplot as plt plt.plot([1,2,3], [2,3,1], \"r-\") # red line plt.plot([1,2,3], [5,5,3], color=\"blue\") # blue line plt.show() ``` See also the plot command's documentation. In case you already have a line with a certain color, you can change that with the lines2D.set_color() method. ``` line, = plt.plot([1,2,3], [4,5,3], color=\"blue\") line.set_color(\"black\") ``` Setting the color of a line in a pandas plot is also best done at the point of creating the plot: ``` import matplotlib.pyplot as plt import pandas as pd df = pd.DataFrame({ \"x\" : [1,2,3,5], \"y\" : [3,5,2,6]}) df.plot(\"x\", \"y\", color=\"r\") #plot red line plt.show() ``` If you want to change this color later on, you can do so by ``` plt.gca().get_lines()[0].set_color(\"black\") ``` This will get you the first (possibly the only) line of the current active axes. In case you have more axes in the plot, you could loop through them ``` for ax in plt.gcf().axes: ax.get_lines()[0].set_color(\"black\") ``` and if you have more lines you can loop over them as well.", "best_answers_score":0.8, "library_name":"matplotlib", "question_url":"https:\/\/stackoverflow.com\/questions\/41709257\/how-to-change-the-plot-line-color-from-blue-to-black", "best_answers_votes":129, "question_length":360, "response_length":1253 }, { "question":"Resize a figure automatically in matplotlib Is there a way to automatically resize a figure to properly fit contained plots in a matplotlib\/pylab image? I'm creating heatmap (sub)plots that differ in aspect ratio according to the data used. 
I realise I could calculate the aspect ratio and manually set it, but surely there's an easier way?", "response":"Use bbox_inches='tight' ``` import numpy as np import matplotlib.pyplot as plt import matplotlib.cm as cm X = 10*np.random.rand(5,3) fig = plt.figure(figsize=(15,5),facecolor='w') ax = fig.add_subplot(111) ax.imshow(X, cmap=cm.jet) plt.savefig(\"image.png\",bbox_inches='tight',dpi=100) ``` ...only works when saving images though, not showing them.", "best_answers_score":0.8, "library_name":"matplotlib", "question_url":"https:\/\/stackoverflow.com\/questions\/1271023\/resize-a-figure-automatically-in-matplotlib", "best_answers_votes":77, "question_length":340, "response_length":347 }, { "question":"Matplotlib: how to show legend elements horizontally? I'd like to set the legend to be displayed horizontally. I do not mean the text of the legend like described in the post Matplotlib legend vertical rotation. My actual case includes an arbitrary number of series specified with a widget. But the following example represents the gist of the challenge: Snippet: ``` # Imports import pandas as pd import matplotlib.pyplot as plt import numpy as np # data np.random.seed(123) x = pd.Series(np.random.randn(100),index=pd.date_range('1\/1\/2000', periods=100)).cumsum() y = pd.Series(np.random.randn(100),index=pd.date_range('1\/1\/2000', periods=100)).cumsum() z = pd.Series(np.random.randn(100),index=pd.date_range('1\/1\/2000', periods=100)).cumsum() df = pd.concat([x,y,z], axis = 1) # plot ax = df.plot() plt.legend(loc=\"lower left\") plt.show() ``` Plot: The default layout seems to be vertical. Looking at the details of help(ax.legend) and the docs , there does not seem to be a straight forward way to change this to horizontal. Or is there? Edit - Desired Legend: (using MS Paint)", "response":"Specify the ncol parameter in legend. In your case something like: ``` plt.legend(loc=\"lower left\", ncol=len(df.columns)) ``` This is the only line I changed in your script. 
Working full code: ``` import pandas as pd import matplotlib.pyplot as plt import numpy as np # data np.random.seed(123) x = pd.Series(np.random.randn(100),index=pd.date_range('1\/1\/2000', periods=100)).cumsum() y = pd.Series(np.random.randn(100),index=pd.date_range('1\/1\/2000', periods=100)).cumsum() z = pd.Series(np.random.randn(100),index=pd.date_range('1\/1\/2000', periods=100)).cumsum() df = pd.concat([x,y,z], axis = 1) # plot ax = plt.subplot() for col in (df.columns): plt.plot(df[col]) plt.legend(loc=\"lower left\", ncol=len(df.columns)) plt.xticks(rotation=90) plt.show() ```", "best_answers_score":0.8, "library_name":"matplotlib", "question_url":"https:\/\/stackoverflow.com\/questions\/54870585\/matplotlib-how-to-show-legend-elements-horizontally", "best_answers_votes":83, "question_length":1081, "response_length":757 }, { "question":"Reverse the order of a legend I use the following code to plot the bar graph and need to present a legend in reverse order. How can I do it? ``` colorsArr = plt.cm.BuPu(np.linspace(0, 0.5, len(C2))) p = numpy.empty(len(C2), dtype=object) plt.figure(figsize=(11, 11)) prevBar = 0 for index in range(len(C2)): plt.bar(ind, C2[index], width, bottom=prevBar, color=colorsArr[index], label=C0[index]) prevBar = prevBar + C2[index] # Positions of the x-axis ticks (center of the bars as bar labels) tick_pos = [i + (width\/2) for i in ind] plt.ylabel('Home Category') plt.title('Affinity - Retail Details(Home category)') # Set the x ticks with names plt.xticks(tick_pos, C1) plt.yticks(np.arange(0, 70000, 3000)) plt.legend(title=\"Line\", loc='upper left') # Set a buffer around the edge plt.xlim(-width*2, width*2) plt.show() ```", "response":"You could call ``` handles, labels = ax.get_legend_handles_labels() ax.legend(handles[::-1], labels[::-1], title='Line', loc='upper left') ``` ``` import numpy as np import matplotlib.pyplot as plt np.random.seed(2016) C0 = list('ABCDEF') C2 = np.random.randint(20000, size=(len(C0), 3)) width = 1.0 
C1 = ['foo', 'bar', 'baz'] ind = np.linspace(-width, width, len(C1)) colorsArr = plt.cm.BuPu(np.linspace(0, 0.5, len(C2))) fig = plt.figure(figsize=(11,11)) ax = fig.add_subplot(1, 1, 1) prevBar = 0 for height, color, label in zip(C2, colorsArr, C0): h = ax.bar(ind, height, width, bottom=prevBar, color=color, label=label) prevBar = prevBar + height plt.ylabel('Home Category') plt.title('Affinity - Retail Details(Home category)') # positions of the x-axis ticks (center of the bars as bar labels) tick_pos = [i+(width\/2.0) for i in ind] # set the x ticks with names plt.xticks(tick_pos, C1) plt.yticks(np.arange(0,70000,3000)) handles, labels = ax.get_legend_handles_labels() ax.legend(handles[::-1], labels[::-1], title='Line', loc='upper left') plt.show() ```", "best_answers_score":0.8, "library_name":"matplotlib", "question_url":"https:\/\/stackoverflow.com\/questions\/34576059\/reverse-the-order-of-a-legend", "best_answers_votes":102, "question_length":823, "response_length":1064 }, { "question":"Python Matplotlib Venn diagram I want to plot variables that belongs to certain groups. Say that I have 6 variables that I want to sort into these 3 groups and plot like a venn diagram. I would like to annotate the variable names into the three bubbles. In this simple example we could say that 1 variable is in group 1, 3 variables in group 2 and 2 variables in group 3. Could anyone help me with a simple example of how to do it in matplotlib?", "response":"There is a beautiful Venn diagram add-on for matplotlib called matplotlib-venn. It looks like it can be completely customized to do what you are looking for, from the size of the circles (proportional to the set size), to inner and outer labels. 
Using the example code on the website gives a plot like: Edit: Per the comments below the following code gives non-overlapping circles with text using the same library: ``` import pylab as plt from matplotlib_venn import venn3, venn3_circles v = venn3(subsets=(1,1,0,1,0,0,0)) v.get_label_by_id('100').set_text('First') v.get_label_by_id('010').set_text('Second') v.get_label_by_id('001').set_text('Third') plt.title(\"Not a Venn diagram\") plt.show() ``` Gives the diagram:", "best_answers_score":0.8, "library_name":"matplotlib", "question_url":"https:\/\/stackoverflow.com\/questions\/19841535\/python-matplotlib-venn-diagram", "best_answers_votes":85, "question_length":445, "response_length":718 }, { "question":"Get legend as a separate picture in Matplotlib I'm developing a Web application and want to display a figure and its legend in different locations on the page. Which means I need to save the legend as a separate png file. Is this possible in Matplotlib in a more or less straightforward way?", "response":"You may limit the saved region of a figure to the bounding box of the legend using the bbox_inches argument to fig.savefig. Below are two versions of a function which you can simply call with the legend you want to save as argument. You may either use the legend created in the original figure here (and remove it afterwards, legend.remove()) or you may create a new figure for the legend and simply use the function as it is. Export legend boundingbox In case the complete legend shall be saved, the bounding box supplied to the bbox_inches argument would be simply the transformed bounding box of the legend. This works well if the legend has no border around it.
``` import matplotlib.pyplot as plt colors = [\"crimson\", \"purple\", \"gold\"] f = lambda m,c: plt.plot([],[],marker=m, color=c, ls=\"none\")[0] handles = [f(\"s\", colors[i]) for i in range(3)] labels = colors legend = plt.legend(handles, labels, loc=3, framealpha=1, frameon=False) def export_legend(legend, filename=\"legend.png\"): fig = legend.figure fig.canvas.draw() bbox = legend.get_window_extent().transformed(fig.dpi_scale_trans.inverted()) fig.savefig(filename, dpi=\"figure\", bbox_inches=bbox) export_legend(legend) plt.show() ``` Export extended legend bounding box If there is a border around the legend, the above solution may be suboptimal. In this case it makes sense to extend the bounding box by some pixels to include the border to its full. ``` import numpy as np import matplotlib.pyplot as plt colors = [\"crimson\", \"purple\", \"gold\"] f = lambda m,c: plt.plot([],[],marker=m, color=c, ls=\"none\")[0] handles = [f(\"s\", colors[i]) for i in range(3)] labels = colors legend = plt.legend(handles, labels, loc=3, framealpha=1, frameon=True) def export_legend(legend, filename=\"legend.png\", expand=[-5,-5,5,5]): fig = legend.figure fig.canvas.draw() bbox = legend.get_window_extent() bbox = bbox.from_extents(*(bbox.extents + np.array(expand))) bbox = bbox.transformed(fig.dpi_scale_trans.inverted()) fig.savefig(filename, dpi=\"figure\", bbox_inches=bbox) export_legend(legend) plt.show() ```", "best_answers_score":0.8, "library_name":"matplotlib", "question_url":"https:\/\/stackoverflow.com\/questions\/4534480\/get-legend-as-a-separate-picture-in-matplotlib", "best_answers_votes":36, "question_length":291, "response_length":2056 }, { "question":"Defining a discrete colormap for imshow I have a simple image that I'm showing with imshow in matplotlib. I'd like to apply a custom colormap so that values between 0-5 are white, 5-10 are red (very simple colors), etc. 
I've tried following this tutorial: http:\/\/assorted-experience.blogspot.com\/2007\/07\/custom-colormaps.html with the following code: ``` cdict = { 'red' : ((0., 0., 0.), (0.5, 0.25, 0.25), (1., 1., 1.)), 'green': ((0., 1., 1.), (0.7, 0.0, 0.5), (1., 1., 1.)), 'blue' : ((0., 1., 1.), (0.5, 0.0, 0.0), (1., 1., 1.)) } my_cmap = mpl.colors.LinearSegmentedColormap('my_colormap', cdict, 3) plt.imshow(num_stars, extent=(min(x), max(x), min(y), max(y)), cmap=my_cmap) plt.show() ``` But this ends up showing strange colors, and I only need 3-4 colors that I want to define. How do I do this?", "response":"You can use a ListedColormap to specify the white and red as the only colors in the color map, and the bounds determine where the transition is from one color to the next: ``` import matplotlib.pyplot as plt from matplotlib import colors import numpy as np np.random.seed(101) zvals = np.random.rand(100, 100) * 10 # make a color map of fixed colors cmap = colors.ListedColormap(['white', 'red']) bounds=[0,5,10] norm = colors.BoundaryNorm(bounds, cmap.N) # tell imshow about color map so that only set colors are used img = plt.imshow(zvals, interpolation='nearest', origin='lower', cmap=cmap, norm=norm) # make a color bar plt.colorbar(img, cmap=cmap, norm=norm, boundaries=bounds, ticks=[0, 5, 10]) plt.savefig('redwhite.png') plt.show() ``` The resulting figure has only two colors: I proposed essentially the same thing for a somewhat different question: 2D grid data visualization in Python The solution is inspired by a matplotlib example. The example explains that the bounds must be one more than the number of colors used. The BoundaryNorm is a normalization that maps a series of values to integers, which are then used to assign the corresponding colors. 
cmap.N, in the example above, just defines the number of colors.", "best_answers_score":0.8, "library_name":"matplotlib", "question_url":"https:\/\/stackoverflow.com\/questions\/9707676\/defining-a-discrete-colormap-for-imshow", "best_answers_votes":129, "question_length":805, "response_length":1231 }, { "question":"Plotting a decision boundary separating 2 classes using Matplotlib's pyplot I could really use a tip to help me plotting a decision boundary to separate to classes of data. I created some sample data (from a Gaussian distribution) via Python NumPy. In this case, every data point is a 2D coordinate, i.e., a 1 column vector consisting of 2 rows. E.g., ``` [ 1 2 ] ``` Let's assume I have 2 classes, class1 and class2, and I created 100 data points for class1 and 100 data points for class2 via the code below (assigned to the variables x1_samples and x2_samples). ``` mu_vec1 = np.array([0,0]) cov_mat1 = np.array([[2,0],[0,2]]) x1_samples = np.random.multivariate_normal(mu_vec1, cov_mat1, 100) mu_vec1 = mu_vec1.reshape(1,2).T # to 1-col vector mu_vec2 = np.array([1,2]) cov_mat2 = np.array([[1,0],[0,1]]) x2_samples = np.random.multivariate_normal(mu_vec2, cov_mat2, 100) mu_vec2 = mu_vec2.reshape(1,2).T ``` When I plot the data points for each class, it would look like this: Now, I came up with an equation for an decision boundary to separate both classes and would like to add it to the plot. However, I am not really sure how I can plot this function: ``` def decision_boundary(x_vec, mu_vec1, mu_vec2): g1 = (x_vec-mu_vec1).T.dot((x_vec-mu_vec1)) g2 = 2*( (x_vec-mu_vec2).T.dot((x_vec-mu_vec2)) ) return g1 - g2 ``` I would really appreciate any help! EDIT: Intuitively (If I did my math right) I would expect the decision boundary to look somewhat like this red line when I plot the function...", "response":"Your question is more complicated than a simple plot : you need to draw the contour which will maximize the inter-class distance. 
Fortunately it's a well-studied field, particularly for SVM machine learning. The easiest method is to download the scikit-learn module, which provides a lot of cool methods to draw boundaries: scikit-learn: Support Vector Machines Code : ``` # -*- coding: utf-8 -*- import numpy as np import matplotlib from matplotlib import pyplot as plt import scipy from sklearn import svm mu_vec1 = np.array([0,0]) cov_mat1 = np.array([[2,0],[0,2]]) x1_samples = np.random.multivariate_normal(mu_vec1, cov_mat1, 100) mu_vec1 = mu_vec1.reshape(1,2).T # to 1-col vector mu_vec2 = np.array([1,2]) cov_mat2 = np.array([[1,0],[0,1]]) x2_samples = np.random.multivariate_normal(mu_vec2, cov_mat2, 100) mu_vec2 = mu_vec2.reshape(1,2).T fig = plt.figure() plt.scatter(x1_samples[:,0],x1_samples[:,1], marker='+') plt.scatter(x2_samples[:,0],x2_samples[:,1], c= 'green', marker='o') X = np.concatenate((x1_samples,x2_samples), axis = 0) Y = np.array([0]*100 + [1]*100) C = 1.0 # SVM regularization parameter clf = svm.SVC(kernel = 'linear', gamma=0.7, C=C ) clf.fit(X, Y) ``` Linear Plot ``` w = clf.coef_[0] a = -w[0] \/ w[1] xx = np.linspace(-5, 5) yy = a * xx - (clf.intercept_[0]) \/ w[1] plt.plot(xx, yy, 'k-') ``` MultiLinear Plot ``` C = 1.0 # SVM regularization parameter clf = svm.SVC(kernel = 'rbf', gamma=0.7, C=C ) clf.fit(X, Y) h = .02 # step size in the mesh # create a mesh to plot in x_min, x_max = X[:, 0].min() - 1, X[:, 0].max() + 1 y_min, y_max = X[:, 1].min() - 1, X[:, 1].max() + 1 xx, yy = np.meshgrid(np.arange(x_min, x_max, h), np.arange(y_min, y_max, h)) # Plot the decision boundary. For that, we will assign a color to each # point in the mesh [x_min, x_max]x[y_min, y_max].
Z = clf.predict(np.c_[xx.ravel(), yy.ravel()]) # Put the result into a color plot Z = Z.reshape(xx.shape) plt.contour(xx, yy, Z, cmap=plt.cm.Paired) ``` Implementation If you want to implement it yourself, you need to solve the following quadratic equation: The Wikipedia article Unfortunately, for non-linear boundaries like the one you draw, it's a difficult problem relying on a kernel trick but there isn't a clear cut solution.", "best_answers_score":0.8, "library_name":"matplotlib", "question_url":"https:\/\/stackoverflow.com\/questions\/22294241\/plotting-a-decision-boundary-separating-2-classes-using-matplotlibs-pyplot", "best_answers_votes":54, "question_length":1505, "response_length":2243 }, { "question":"Python: How to show matplotlib in flask [duplicate] This question already has answers here: Converting matplotlib png to base64 for viewing in html template (2 answers) Save plot to image file instead of displaying it (25 answers) How to serve static files in Flask (24 answers) Closed 7 years ago. I'm very new to Flask and Matplotlib. I'd like to be able to show a simple chart I generated in some html, but I'm having a very hard time figuring out how. 
Here is my Python code: ``` from flask import Flask, render_template import numpy as np import pandas import matplotlib.pyplot as plt app = Flask(__name__) variables = pandas.read_csv('C:\\\\path\\\\to\\\\variable.csv') price =variables['price'] @app.route('\/test') def chartTest(): lnprice=np.log(price) plt.plot(lnprice) return render_template('untitled1.html', name = plt.show()) if __name__ == '__main__': app.run(debug = True) ``` And here is my HTML: ``` Price Chart {{ name }} ```", "response":"You can generate the image on-the-fly in Flask URL route handler: ``` import io import random from flask import Response from matplotlib.backends.backend_agg import FigureCanvasAgg as FigureCanvas from matplotlib.figure import Figure @app.route('\/plot.png') def plot_png(): fig = create_figure() output = io.BytesIO() FigureCanvas(fig).print_png(output) return Response(output.getvalue(), mimetype='image\/png') def create_figure(): fig = Figure() axis = fig.add_subplot(1, 1, 1) xs = range(100) ys = [random.randint(1, 50) for x in xs] axis.plot(xs, ys) return fig ``` Then you need to include the image in your HTML template: ``` ```", "best_answers_score":0.8, "library_name":"matplotlib", "question_url":"https:\/\/stackoverflow.com\/questions\/50728328\/python-how-to-show-matplotlib-in-flask", "best_answers_votes":103, "question_length":943, "response_length":635 }, { "question":"How do I align gridlines for two y-axis scales? I'm plotting two datasets with different units on the y-axis. Is there a way to make the ticks and gridlines aligned on both y-axes? The first image shows what I get, and the second image shows what I would like to get. 
This is the code I'm using to plot: ``` import matplotlib.pyplot as plt import seaborn as sns import numpy as np import pandas as pd np.random.seed(0) fig = plt.figure() ax1 = fig.add_subplot(111) ax1.plot(pd.Series(np.random.uniform(0, 1, size=10))) ax2 = ax1.twinx() ax2.plot(pd.Series(np.random.uniform(10, 20, size=10)), color='r') ```", "response":"I am not sure if this is the prettiest way to do it, but it does fix it with one line: ``` import matplotlib.pyplot as plt import seaborn as sns import numpy as np import pandas as pd np.random.seed(0) fig = plt.figure() ax1 = fig.add_subplot(111) ax1.plot(pd.Series(np.random.uniform(0, 1, size=10))) ax2 = ax1.twinx() ax2.plot(pd.Series(np.random.uniform(10, 20, size=10)), color='r') # ADD THIS LINE ax2.set_yticks(np.linspace(ax2.get_yticks()[0], ax2.get_yticks()[-1], len(ax1.get_yticks()))) plt.show() ```", "best_answers_score":0.8, "library_name":"matplotlib", "question_url":"https:\/\/stackoverflow.com\/questions\/26752464\/how-do-i-align-gridlines-for-two-y-axis-scales", "best_answers_votes":36, "question_length":575, "response_length":511 }, { "question":"When to use imshow over pcolormesh I often find myself needing to create heatmap-style visualizations in Python with matplotlib. Matplotlib provides several functions which apparently do the same thing. pcolormesh is recommended instead of pcolor but what is the difference (from a practical point of view as a data plotter) between imshow and pcolormesh? What are the pros\/cons of using one over the other? In what scenarios would one or the other be a clear winner?", "response":"Fundamentally, imshow assumes that all data elements in your array are to be rendered at the same size, whereas pcolormesh\/pcolor associates elements of the data array with rectangular elements whose size may vary over the rectangular grid. If your mesh elements are uniform, then imshow with interpolation set to \"nearest\" will look very similar to the default pcolormesh display (without the optional X and Y args).
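For a quick side-by-side check, here is a minimal sketch (the uniform random data and the output filename are just placeholders, and the Agg backend is used so it runs without a display):

```python
import numpy as np
import matplotlib
matplotlib.use('Agg')  # headless backend, no display needed
import matplotlib.pyplot as plt

data = np.random.rand(10, 10)

fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(8, 4))
ax1.imshow(data, interpolation='nearest')
ax1.set_title('imshow')
ax2.pcolormesh(data)
ax2.set_title('pcolormesh')
fig.savefig('imshow_vs_pcolormesh.png')
```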
The obvious differences are that the imshow y-axis will be inverted (w.r.t. pcolormesh) and the aspect ratio is maintained, although those characteristics can be altered to look like the pcolormesh output as well. From a practical point of view, pcolormesh is more convenient if you want to visualize the data array as cells, particularly when the rectangular mesh is non-uniform or when you want to plot the boundaries\/edges of the cells. Otherwise, imshow is more convenient if you have a fixed cell size, want to maintain aspect ratio, want control over pixel interpolation, or want to specify RGB values directly.", "best_answers_score":0.8, "library_name":"matplotlib", "question_url":"https:\/\/stackoverflow.com\/questions\/21166679\/when-to-use-imshow-over-pcolormesh", "best_answers_votes":72, "question_length":467, "response_length":1035 }, { "question":"How to add a grid line at a specific location in matplotlib plot? How do I add grid at a specific location on the y axis in a matplotlib plot?", "response":"Yes. It's very simple. Use the set_[x|y]ticks methods of axes object and toggle the grid as normal: ``` import matplotlib.pyplot as plt fig, ax = plt.subplots() ax.set_yticks([0.2, 0.6, 0.8], minor=False) ax.set_yticks([0.3, 0.55, 0.7], minor=True) ax.yaxis.grid(True, which='major') ax.yaxis.grid(True, which='minor') plt.show() ```", "best_answers_score":0.8, "library_name":"matplotlib", "question_url":"https:\/\/stackoverflow.com\/questions\/14608483\/how-to-add-a-grid-line-at-a-specific-location-in-matplotlib-plot", "best_answers_votes":100, "question_length":142, "response_length":333 }, { "question":"Duplicate items in legend in matplotlib? 
I am trying to add the legend to my plot with this snippet: ``` import matplotlib.pylab as plt fig = plt.figure() axes = fig.add_axes([0.1, 0.1, 0.8, 0.8]) # left, bottom, width, height (range 0 to 1) axes.set_xlabel('x (m)') axes.set_ylabel('y (m)') for i, representative in enumerate(representatives): axes.plot([e[0] for e in representative], [e[1] for e in representative], color='b', label='Representatives') axes.scatter([e[0] for e in intersections], [e[1] for e in intersections], color='r', label='Intersections') axes.legend() ``` I end up with this plot Obviously, the items are duplicated in the plot. How can I correct this error?", "response":"As the docs say, although it's easy to miss: If label attribute is empty string or starts with \u201c_\u201d, those artists will be ignored. So if I'm plotting similar lines in a loop and I only want one example line in the legend, I usually do something like ``` ax.plot(x, y, label=\"Representatives\" if i == 0 else \"\") ``` where i is my loop index. It's not quite as nice to look at as building them separately, but often I want to keep the label logic as close to the line drawing as possible. (Note that the matplotlib developers themselves tend to use \"_nolegend_\" to be explicit.)", "best_answers_score":0.8, "library_name":"matplotlib", "question_url":"https:\/\/stackoverflow.com\/questions\/19385639\/duplicate-items-in-legend-in-matplotlib", "best_answers_votes":97, "question_length":684, "response_length":576 }, { "question":"find length of sequences of identical values in a numpy array (run length encoding) In a pylab program (which could probably be a matlab program as well) I have a numpy array of numbers representing distances: d[t] is the distance at time t (and the timespan of my data is len(d) time units). The events I'm interested in are when the distance is below a certain threshold, and I want to compute the duration of these events. 
It's easy to get an array of booleans with b = d < threshold, and the question comes down to measuring the lengths of the True runs in b. My naive loop approach is clumsy: ``` counter = 0 durations = [] for i in range(len(b)): if i>0 and b[i-1] and b[i]: counter+=1 if (b[i-1] and not b[i]) or i==len(b)-1: durations.append(counter) print '.' ```", "response":"Fully numpy vectorized and generic RLE for any array (works with strings, booleans etc too). Outputs tuple of run lengths, start positions, and values. ``` import numpy as np def rle(inarray): \"\"\" run length encoding. Partial credit to R rle function. Multi datatype arrays catered for including non Numpy returns: tuple (runlengths, startpositions, values) \"\"\" ia = np.asarray(inarray) # force numpy n = len(ia) if n == 0: return (None, None, None) else: y = ia[1:] != ia[:-1] # pairwise unequal (string safe) i = np.append(np.where(y), n - 1) # must include last element posi z = np.diff(np.append(-1, i)) # run lengths p = np.cumsum(np.append(0, z))[:-1] # positions return(z, p, ia[i]) ``` Pretty fast (i7): ``` xx = np.random.randint(0, 5, 1000000) %timeit yy = rle(xx) 100 loops, best of 3: 18.6 ms per loop ``` Multiple data types: ``` rle([True, True, True, False, True, False, False]) Out[8]: (array([3, 1, 1, 2]), array([0, 3, 4, 5]), array([ True, False, True, False], dtype=bool)) rle(np.array([5, 4, 4, 4, 4, 0, 0])) Out[9]: (array([1, 4, 2]), array([0, 1, 5]), array([5, 4, 0])) rle([\"hello\", \"hello\", \"my\", \"friend\", \"okay\", \"okay\", \"bye\"]) Out[10]: (array([2, 1, 1, 2, 1]), array([0, 2, 3, 4, 6]), array(['hello', 'my', 'friend', 'okay', 'bye'], dtype='|S6')) ``` Same results as Alex Martelli above: ``` xx = np.random.randint(0, 2, 20) xx Out[60]: array([1, 1, 1, 1, 0, 0, 1, 1, 1, 1, 1, 0, 1, 1, 0, 1, 1, 1, 1, 1]) am = runs_of_ones_array(xx) tb = rle(xx) am Out[63]: array([4, 5, 2, 5]) tb[0][tb[2] == 1] Out[64]: array([4, 5, 2, 5]) %timeit runs_of_ones_array(xx) 10000 loops, best of 3: 28.5 \u00b5s per loop %timeit rle(xx) 10000 loops, best of 3: 38.2 \u00b5s per loop ``` Slightly slower than Alex (but still very fast), and much more flexible.", "best_answers_score":0.8,
"library_name":"matplotlib", "question_url":"https:\/\/stackoverflow.com\/questions\/1066758\/find-length-of-sequences-of-identical-values-in-a-numpy-array-run-length-encodi", "best_answers_votes":81, "question_length":588, "response_length":1759 }, { "question":"seaborn color_palette as matplotlib colormap Seaborn offers a function called color_palette, which allows you to easily create new color_palettes for plots. ``` colors = [\"#67E568\",\"#257F27\",\"#08420D\",\"#FFF000\",\"#FFB62B\",\"#E56124\",\"#E53E30\",\"#7F2353\",\"#F911FF\",\"#9F8CA6\"] color_palette = sns.color_palette(colors) ``` I want to transform color_palette to a cmap, which I can use in matplotlib, but I don't see how I can do this. Sadly just functions like \"cubehelix_palette\",\"light_palette\",\u2026 have an \"as_cmap\" parameter. \"color_palette\" doesn't, unfortunately.", "response":"You have to convert a list of colors from a seaborn palette to a matplotlib colormap (thx to @RafaelLopes for proposed changes): ``` import seaborn as sns import matplotlib.pylab as plt import numpy as np from matplotlib.colors import ListedColormap # construct cmap flatui = [\"#9b59b6\", \"#3498db\", \"#95a5a6\", \"#e74c3c\", \"#34495e\", \"#2ecc71\"] my_cmap = ListedColormap(sns.color_palette(flatui).as_hex()) N = 500 data1 = np.random.randn(N) data2 = np.random.randn(N) colors = np.linspace(0,1,N) plt.scatter(data1, data2, c=colors, cmap=my_cmap) plt.colorbar() plt.show() ```", "best_answers_score":0.8, "library_name":"matplotlib", "question_url":"https:\/\/stackoverflow.com\/questions\/37902459\/seaborn-color-palette-as-matplotlib-colormap", "best_answers_votes":61, "question_length":561, "response_length":572 }, { "question":"Writing numerical values on the plot with Matplotlib Is it possible, with Matplotlib, to print the values of each point on the graph?
For example, if I have: ``` x = numpy.arange(0,10) y = numpy.array([5,3,4,2,7,5,4,6,3,2]) pyplot.plot(x,y) ``` How can I display y values on the plot (e.g. print a 5 near the (0,5) point, print a 3 near the (1,3) point, etc.)?", "response":"You can use the annotate command to place text annotations at any x and y values you want. To place them exactly at the data points you could do this ``` import numpy from matplotlib import pyplot x = numpy.arange(10) y = numpy.array([5,3,4,2,7,5,4,6,3,2]) fig = pyplot.figure() ax = fig.add_subplot(111) ax.set_ylim(0,10) pyplot.plot(x,y) for i,j in zip(x,y): ax.annotate(str(j),xy=(i,j)) pyplot.show() ``` If you want the annotations offset a little, you could change the annotate line to something like ``` ax.annotate(str(j),xy=(i,j+0.5)) ```", "best_answers_score":0.8, "library_name":"matplotlib", "question_url":"https:\/\/stackoverflow.com\/questions\/6282058\/writing-numerical-values-on-the-plot-with-matplotlib", "best_answers_votes":97, "question_length":359, "response_length":546 }, { "question":"How do I get a list of axes for a figure in pyplot?
Regarding matplotlib.figure, the documentation says there is a class matplotlib.figure.AxesStack and that The AxesStack is a callable, where ax_stack() returns the current axes However, when I call fig.ax_stack(), I get the error: ``` AttributeError: 'Figure' object has no attribute 'ax_stack' ```", "response":"The property .axes returns a list of the Axes objects in the Figure object: ``` ax_list = fig.axes ```", "best_answers_score":0.8, "library_name":"matplotlib", "question_url":"https:\/\/stackoverflow.com\/questions\/24104990\/how-do-i-get-a-list-of-axes-for-a-figure-in-pyplot", "best_answers_votes":104, "question_length":350, "response_length":102 }, { "question":"How to plot a wav file I have just read a wav file with scipy and now I want to make the plot of the file using matplotlib, on the \"y scale\" I want to see the amplitude and over the \"x scale\" I want to see the numbers of frames! Any help how can I do this?? Thank you! ``` from scipy.io.wavfile import read import numpy as np from numpy import* import matplotlib.pyplot as plt a=read(\"C:\/Users\/Martinez\/Desktop\/impulso.wav\") print a ```", "response":"You can call wave lib to read an audio file. 
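For instance, a self-contained sketch of reading with the standard-library wave module (it first writes a short synthetic 440 Hz mono file so the example has something to open; 'demo.wav' is just a placeholder name):

```python
import wave
import struct
import math

# write a short synthetic 440 Hz mono file so there is something to open
with wave.open('demo.wav', 'wb') as w:
    w.setnchannels(1)      # mono
    w.setsampwidth(2)      # 16-bit samples
    w.setframerate(8000)
    frames = b''.join(
        struct.pack('<h', int(32000 * math.sin(2 * math.pi * 440 * i * 0.000125)))
        for i in range(8000))
    w.writeframes(frames)

# read it back and inspect the raw frames
with wave.open('demo.wav', 'rb') as spf:
    print(spf.getnchannels(), spf.getframerate(), spf.getnframes())
    raw = spf.readframes(-1)   # bytes; convert with numpy before plotting
```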
To plot the waveform, use the \"plot\" function from matplotlib ``` import matplotlib.pyplot as plt import numpy as np import wave import sys spf = wave.open(\"wavfile.wav\", \"r\") # Extract Raw Audio from Wav File signal = spf.readframes(-1) signal = np.frombuffer(signal, dtype=np.int16) # If Stereo if spf.getnchannels() == 2: print(\"Just mono files\") sys.exit(0) plt.figure(1) plt.title(\"Signal Wave...\") plt.plot(signal) plt.show() ``` you will have something like: To plot the x-axis in seconds, you need to get the frame rate and divide the sample index by it. You can use the linspace function from numpy to create a time vector spaced linearly over the duration of the audio file, and finally you can use plot again like plt.plot(Time, signal) ``` import matplotlib.pyplot as plt import numpy as np import wave import sys spf = wave.open(\"Animal_cut.wav\", \"r\") # Extract Raw Audio from Wav File signal = spf.readframes(-1) signal = np.frombuffer(signal, dtype=np.int16) fs = spf.getframerate() # If Stereo if spf.getnchannels() == 2: print(\"Just mono files\") sys.exit(0) Time = np.linspace(0, len(signal) \/ fs, num=len(signal)) plt.figure(1) plt.title(\"Signal Wave...\") plt.plot(Time, signal) plt.show() ``` New plot x-axis in seconds:", "best_answers_score":0.8, "library_name":"matplotlib", "question_url":"https:\/\/stackoverflow.com\/questions\/18625085\/how-to-plot-a-wav-file", "best_answers_votes":92, "question_length":436, "response_length":1258 }, { "question":"How to change figuresize using seaborn factorplot ``` %pylab inline import pandas as pd import numpy as np import matplotlib as mpl import seaborn as sns typessns = pd.DataFrame.from_csv('C:\/data\/testesns.csv', index_col=False, sep=';') mpl.rc(\"figure\", figsize=(45, 10)) sns.factorplot(\"MONTH\", \"VALUE\", hue=\"REGION\", data=typessns, kind=\"box\", palette=\"OrRd\"); ``` I always get a small size figure, no matter what size I've specified in figsize...
How to fix it?", "response":"Note added in 2019: In modern seaborn versions the size argument has been renamed to height. To be a little more concrete: ``` %matplotlib inline import seaborn as sns exercise = sns.load_dataset(\"exercise\") # Defaults are size=5, aspect=1 sns.factorplot(\"kind\", \"pulse\", \"diet\", exercise, kind=\"point\", size=2, aspect=1) sns.factorplot(\"kind\", \"pulse\", \"diet\", exercise, kind=\"point\", size=4, aspect=1) sns.factorplot(\"kind\", \"pulse\", \"diet\", exercise, kind=\"point\", size=4, aspect=2) ``` You want to pass in the arguments 'size' or 'aspect' to the sns.factorplot() when constructing your plot. Size will change the height, while maintaining the aspect ratio (so it will also also get wider if only size is changed.) Aspect will change the width while keeping the height constant. The above code should be able to be run locally in an ipython notebook. Plot sizes are reduced in these examples to show the effects, and because the plots from the above code were fairly large when saved as png's. This also shows that size\/aspect includes the legend in the margin. size=2, aspect=1 size=4, aspect=1 size=4, aspect=2 Also, all other useful parameters\/arguments and defaults for this plotting function can be viewed with once the 'sns' module is loaded: ``` help(sns.factorplot) ```", "best_answers_score":0.8, "library_name":"matplotlib", "question_url":"https:\/\/stackoverflow.com\/questions\/26163702\/how-to-change-figuresize-using-seaborn-factorplot", "best_answers_votes":118, "question_length":465, "response_length":1280 }, { "question":"Row titles for matplotlib subplot In matplotlib, Is it possible to set a a separate title for each row of subplots in addition to the title set for the entire figure and the title set for each individual plot? This would correspond to the orange text in the figure below. If not, how would you get around this problem? 
Create a separate column of empty subplots to the left and fill them with the orange text? I am aware that it is possible to manually position each single title using text() or annotate(), but that usually requires a lot of tweaking and I have many subplots. Is there a smoother solution?", "response":"New in matplotlib 3.4.0 Row titles can now be implemented as subfigure suptitles: The new subfigure feature allows creating virtual figures within figures with localized artists (e.g., colorbars and suptitles) that only pertain to each subfigure. See how to plot subfigures for further details. How to reproduce OP's reference figure: Either Figure.subfigures (most straightforward) Create 3x1 fig.subfigures where each subfig gets its own 1x3 subfig.subplots and subfig.suptitle: ```py fig = plt.figure(constrained_layout=True) fig.suptitle('Figure title') # create 3x1 subfigs subfigs = fig.subfigures(nrows=3, ncols=1) for row, subfig in enumerate(subfigs): subfig.suptitle(f'Subfigure title {row}') # create 1x3 subplots per subfig axs = subfig.subplots(nrows=1, ncols=3) for col, ax in enumerate(axs): ax.plot() ax.set_title(f'Plot title {col}') ``` Or Figure.add_subfigure (onto existing subplots) If you already have 3x1 plt.subplots, then add_subfigure into the underlying gridspec. 
Again each subfig will get its own 1x3 subfig.subplots and subfig.suptitle: ```py # create 3x1 subplots fig, axs = plt.subplots(nrows=3, ncols=1, constrained_layout=True) fig.suptitle('Figure title') # clear subplots for ax in axs: ax.remove() # add subfigure per subplot gridspec = axs[0].get_subplotspec().get_gridspec() subfigs = [fig.add_subfigure(gs) for gs in gridspec] for row, subfig in enumerate(subfigs): subfig.suptitle(f'Subfigure title {row}') # create 1x3 subplots per subfig axs = subfig.subplots(nrows=1, ncols=3) for col, ax in enumerate(axs): ax.plot() ax.set_title(f'Plot title {col}') ``` Output of either example (after some styling):", "best_answers_score":0.8, "library_name":"matplotlib", "question_url":"https:\/\/stackoverflow.com\/questions\/27426668\/row-titles-for-matplotlib-subplot", "best_answers_votes":66, "question_length":607, "response_length":1646 }, { "question":"How to plot the lines first and points last I have a simple plot with several sets of points and lines connecting each set. I want the points to be plotted on top of the lines (so that the line doesn't show inside the point). Regardless of order of the plot and scatter calls, this plot comes out the same, and not as I'd like. Is there a simple way to do it? ``` import math import matplotlib.pyplot as plt def poisson(m): def f(k): e = math.e**(-m) f = math.factorial(k) g = m**k return g*e\/f return f R = range(20) L = list() means = (1,4,10) for m in means: f = poisson(m) L.append([f(k) for k in R]) colors = ['r','b','purple'] for c,P in zip(colors,L): plt.plot(R,P,color='0.2',lw=1.5) plt.scatter(R,P,s=150,color=c) ax = plt.axes() ax.set_xlim(-0.5,20) ax.set_ylim(-0.01,0.4) plt.savefig('example.png') ```", "response":"You need to set the Z-order. ``` plt.plot(R,P,color='0.2',lw=1.5, zorder=1) plt.scatter(R,P,s=150,color=c, zorder=2) ``` Check out this example. 
http:\/\/matplotlib.sourceforge.net\/examples\/pylab_examples\/zorder_demo.html", "best_answers_score":0.8, "library_name":"matplotlib", "question_url":"https:\/\/stackoverflow.com\/questions\/2314379\/how-to-plot-the-lines-first-and-points-last", "best_answers_votes":104, "question_length":813, "response_length":219 }, { "question":"scatter plot logarithmic scale In my code, I take the logarithm of two data series and plot them. I would like to change each tick value of the x-axis by raising it to the power of e (anti-log of natural logarithm). In other words. I want to graph the logarithms of both series but have x-axis in levels. Here is the code that I'm using. ``` from pylab import scatter import pylab import matplotlib.pyplot as plt import pandas as pd from pandas import Series, DataFrame import numpy as np file_name = '\/Users\/joedanger\/Desktop\/Python\/scatter_python.csv' data = DataFrame(pd.read_csv(file_name)) y = np.log(data['o_value'], dtype='float64') x = np.log(data['time_diff_day'], dtype='float64') fig = plt.figure() plt.scatter(x, y, c='blue', alpha=0.05, edgecolors='none') fig.suptitle('test title', fontsize=20) plt.xlabel('time_diff_day', fontsize=18) plt.ylabel('o_value', fontsize=16) plt.xticks([-8,-7,-6,-5,-4,-3,-2,-1,0,1,2,3,4]) plt.grid(True) pylab.show() ```", "response":"let matplotlib take the log for you: ``` fig = plt.figure() ax = plt.gca() ax.scatter(data['o_value'] ,data['time_diff_day'] , c='blue', alpha=0.05, edgecolors='none') ax.set_yscale('log') ax.set_xscale('log') ``` If you are using all the same size and color markers, it is faster to use plot ``` fig = plt.figure() ax = plt.gca() ax.plot(data['o_value'] ,data['time_diff_day'], 'o', c='blue', alpha=0.05, markeredgecolor='none') ax.set_yscale('log') ax.set_xscale('log') ```", "best_answers_score":0.8, "library_name":"matplotlib", "question_url":"https:\/\/stackoverflow.com\/questions\/18773662\/scatter-plot-logarithmic-scale", "best_answers_votes":121, 
"question_length":964, "response_length":475 }, { "question":"How to plot bar graphs with same X coordinates side by side ('dodged') ``` import matplotlib.pyplot as plt gridnumber = range(1,4) b1 = plt.bar(gridnumber, [0.2, 0.3, 0.1], width=0.4, label=\"Bar 1\", align=\"center\") b2 = plt.bar(gridnumber, [0.3, 0.2, 0.2], color=\"red\", width=0.4, label=\"Bar 2\", align=\"center\") plt.ylim([0,0.5]) plt.xlim([0,4]) plt.xticks(gridnumber) plt.legend() plt.show() ``` Currently b1 and b2 overlap each other. How do I plot them separately like so:", "response":"The answer below explains each line of code in the simplest manner possible: ``` import numpy as np import matplotlib.pyplot as plt # Number of pairs of bars you want N = 3 # Data on X-axis # Specify the values of blue bars (height) blue_bar = (23, 25, 17) # Specify the values of orange bars (height) orange_bar = (19, 18, 14) # Position of bars on x-axis ind = np.arange(N) # Figure size plt.figure(figsize=(10,5)) # Width of a bar width = 0.3 # Plotting plt.bar(ind, blue_bar , width, label='Blue bar label') plt.bar(ind + width, orange_bar, width, label='Orange bar label') plt.xlabel('Here goes x-axis label') plt.ylabel('Here goes y-axis label') plt.title('Here goes title of the plot') # xticks() # First argument - A list of positions at which ticks should be placed # Second argument - A list of labels to place at the given locations plt.xticks(ind + width \/ 2, ('Xtick1', 'Xtick3', 'Xtick3')) # Finding the best position for legends and putting it plt.legend(loc='best') plt.show() ```", "best_answers_score":0.8, "library_name":"matplotlib", "question_url":"https:\/\/stackoverflow.com\/questions\/10369681\/how-to-plot-bar-graphs-with-same-x-coordinates-side-by-side-dodged", "best_answers_votes":65, "question_length":475, "response_length":956 }, { "question":"Logarithmic y-axis bins in python I'm trying to create a histogram of a data column and plot it logarithmically (y-axis) and I'm not sure why the following code does not work: ``` import numpy as np
import matplotlib.pyplot as plt data = np.loadtxt('foo.bar') fig = plt.figure() ax = fig.add_subplot(111) plt.hist(data, bins=(23.0, 23.5,24.0,24.5,25.0,25.5,26.0,26.5,27.0,27.5,28.0)) ax.set_xlim(23.5, 28) ax.set_ylim(0, 30) ax.grid(True) plt.yscale('log') plt.show() ``` I've also tried instead of plt.yscale('log') adding log=True in the plt.hist line and also I tried ax.set_yscale('log'), but nothing seems to work. I either get an empty plot, or the y-axis is indeed logarithmic (with the code as shown above), but there is no data plotted (no bins).", "response":"try ``` plt.yscale('log', nonposy='clip') ``` http:\/\/matplotlib.org\/api\/pyplot_api.html#matplotlib.pyplot.yscale The issue is with the bottom of bars being at y=0 and the default is to mask out invalid points (log(0) -> undefined) when doing the log transformation (there was discussion of changing this, but I don't remember which way it went) so when it tries to draw the rectangles for your bar plot, the bottom edge is masked out -> no rectangles.", "best_answers_score":0.8, "library_name":"matplotlib", "question_url":"https:\/\/stackoverflow.com\/questions\/17952279\/logarithmic-y-axis-bins-in-python", "best_answers_votes":112, "question_length":759, "response_length":451 }, { "question":"How do you directly overlay a scatter plot on top of a jpg image in matplotlib \/ Python? I need to rapidly plot jpg frames that result as the output of a tracking algorithm. Accompanying the jpg frames are text files containing simple (x,y) data locating the image targets that are being tracked. I would like to use matplotlib to plot the jpg images, then overlay a scatter plot of the (x,y) data which gets read from the text file and stored into a Pythonic list.
Below is code that will plot the jpg image, but in all of the scouring I have done of matplotlib, scipy, and PIL manuals and help pages, I cannot find anything that explains how to maintain this plot window and simply overlay a scatter plot of simple markers at various (x,y) locations in the image. Any help is greatly appreciated. ``` import matplotlib.pyplot as plt; im = plt.imread(image_name); implot = plt.imshow(im); plt.show() ```", "response":"The pyplot.scatter() function was tailor made for this reason: ``` import matplotlib.pyplot as plt im = plt.imread(image_name) implot = plt.imshow(im) # put a blue dot at (10, 20) plt.scatter([10], [20]) # put a red dot, size 40, at 2 locations: plt.scatter(x=[30, 40], y=[50, 60], c='r', s=40) plt.show() ``` See the documentation for more info.", "best_answers_score":0.8, "library_name":"matplotlib", "question_url":"https:\/\/stackoverflow.com\/questions\/5073386\/how-do-you-directly-overlay-a-scatter-plot-on-top-of-a-jpg-image-in-matplotlib", "best_answers_votes":93, "question_length":906, "response_length":346 }, { "question":"plotting unix timestamps in matplotlib I'd like to make a generic value -vs- time plot with python's matplotlib module. My times are in unix time but I'd like them to show up in a readable format on the plot's x-axis. I have read answers about plotting with datetime objects but this method seems to remove hour\/min\/sec information and rails timestamps to the full day. Is there a way to generate these plots and show more granular labels?", "response":"It is possible to call plt.plot(dates,values) with dates being a list of datetime.datetime objects. The plot will include xticks in a format like '%Y-%m-%d' and as you zoom in, automatically change to one that shows hours, minutes, seconds. However, it sounds like you desire more control than this. Perhaps it is not showing the hours, minutes, seconds at the scale you wish. 
In that case, you can set up your own date formatter: ``` ax=plt.gca() xfmt = md.DateFormatter('%Y-%m-%d %H:%M:%S') ax.xaxis.set_major_formatter(xfmt) ``` Unfortunately, if you pass datetime.datetime objects to plt.plot, the xticks automatically chosen by matplotlib seems to always have seconds equal to zero. For example, if you run ``` import matplotlib.pyplot as plt import matplotlib.dates as md import numpy as np import datetime as dt import time n=20 duration=1000 now=time.mktime(time.localtime()) timestamps=np.linspace(now,now+duration,n) dates=[dt.datetime.fromtimestamp(ts) for ts in timestamps] values=np.sin((timestamps-now)\/duration*2*np.pi) plt.subplots_adjust(bottom=0.2) plt.xticks( rotation=25 ) ax=plt.gca() xfmt = md.DateFormatter('%Y-%m-%d %H:%M:%S') ax.xaxis.set_major_formatter(xfmt) plt.plot(dates,values) plt.show() ``` then you get nicely formatted dates, but all the xtick seconds are zero. So what's the solution? If you convert your timestamps --> datetime.datetime objects --> matplotlib datenums yourself, and pass the datenums to plt.plot, then the seconds are preserved. PS. By \"matplotlib datenum\" I mean the kind of number returned by matplotlib.dates.date2num. 
``` import matplotlib.pyplot as plt import matplotlib.dates as md import numpy as np import datetime as dt import time n=20 duration=1000 now=time.mktime(time.localtime()) timestamps=np.linspace(now,now+duration,n) dates=[dt.datetime.fromtimestamp(ts) for ts in timestamps] datenums=md.date2num(dates) values=np.sin((timestamps-now)\/duration*2*np.pi) plt.subplots_adjust(bottom=0.2) plt.xticks( rotation=25 ) ax=plt.gca() xfmt = md.DateFormatter('%Y-%m-%d %H:%M:%S') ax.xaxis.set_major_formatter(xfmt) plt.plot(datenums,values) plt.show() ```", "best_answers_score":0.8, "library_name":"matplotlib", "question_url":"https:\/\/stackoverflow.com\/questions\/4090383\/plotting-unix-timestamps-in-matplotlib", "best_answers_votes":99, "question_length":439, "response_length":2118 }, { "question":"Individual alpha values in scatter plot I'm wondering if it is possible to have individual alpha values for each point to be plotted using the scatter function of Matplotlib. I need to plot a set of points, each one with its alpha value. For example, I have this code to plot some points ``` def plot_singularities(points_x, p, alpha_point, file_path): plt.figure() plt.scatter(points_x, points_y, alpha=alpha_point) plt.savefig(file_path + '.png', dpi=100) plt.close() ``` All my points_x, points_y and alpha_point have n values. However, I can't assign an array to the alpha parameter in scatter(). How can I have a different alpha value for each point? 
I can loop and plot point by point with each specific alpha value, but this doesn't seem like a good approach.", "response":"New solution with matplotlib >= 3.4 Since matplotlib 3.4, alpha supports an iterable of multiple values: https:\/\/matplotlib.org\/stable\/users\/prev_whats_new\/whats_new_3.4.0.html#transparency-alpha-can-be-set-as-an-array-in-collections ```py import numpy as np import matplotlib.pylab as plt x = np.arange(10) y = np.arange(10) alphas = np.linspace(0.1, 1, 10) plt.scatter(x, y, alpha=alphas) plt.show() ``` Old solution for matplotlib < 3.4 tcaswell's suggestion is correct, you can do it like this: ``` import numpy as np import matplotlib.pylab as plt x = np.arange(10) y = np.arange(10) alphas = np.linspace(0.1, 1, 10) rgba_colors = np.zeros((10,4)) # for red the first column needs to be one rgba_colors[:,0] = 1.0 # the fourth column needs to be your alphas rgba_colors[:, 3] = alphas plt.scatter(x, y, color=rgba_colors) plt.show() ```", "best_answers_score":0.8, "library_name":"matplotlib", "question_url":"https:\/\/stackoverflow.com\/questions\/24767355\/individual-alpha-values-in-scatter-plot", "best_answers_votes":90, "question_length":766, "response_length":841 }, { "question":"Is \"from matplotlib import pyplot as plt\" == \"import matplotlib.pyplot as plt\"? ``` from matplotlib import pyplot as plt import matplotlib.pyplot as plt ``` Are the above statements equivalent? Which is more readable\/better form?", "response":"Even though they are equivalent, I think there is a pretty good argument that the second form import matplotlib.pyplot as plt is objectively more readable: It is generally customary to use import matplotlib.pyplot as plt and suggested in the matplotlib documentation (see http:\/\/matplotlib.org\/users\/pyplot_tutorial.html etc...) so this will be more familiar to most readers. import matplotlib.pyplot as plt is shorter but no less clear. 
import matplotlib.pyplot as plt gives an unfamiliar reader a hint that pyplot is a module, rather than a function which could be incorrectly assumed from the first form.", "best_answers_score":0.8, "library_name":"matplotlib", "question_url":"https:\/\/stackoverflow.com\/questions\/30558087\/is-from-matplotlib-import-pyplot-as-plt-import-matplotlib-pyplot-as-plt", "best_answers_votes":64, "question_length":229, "response_length":607 }, { "question":"Interactive matplotlib figures in Google Colab Normally in a jupyter notebook I would use %matplotlib notebook magic to display an interactive window, however this doesn't seem to work with google colab. Is there a solution, or is it not possible to display interactive windows in google colab?", "response":"Below is an example of creating interactive iplot() in Plotly and cufflinks() on Google Colab Notebook. Used functions and suggestions from the answer [1, 2] The key seems to be to include configure_plotly_browser_state() in the cell that does the plotting. Code below should work: Import libraries ``` import datetime from datetime import date import pandas as pd import numpy as np from plotly import __version__ %matplotlib inline import plotly.offline as pyo import plotly.graph_objs as go from plotly.offline import iplot import cufflinks as cf from plotly.offline import download_plotlyjs, init_notebook_mode, plot, iplot cf.go_offline() ``` Set notebook to false ``` init_notebook_mode(connected=False) ``` Create function for Colab copied from: [1, 2] ``` def configure_plotly_browser_state(): import IPython display(IPython.core.display.HTML(''' requirejs.config({ paths: { base: '\/static\/base', plotly: 'https:\/\/cdn.plot.ly\/plotly-latest.min.js?noext', }, }); ''')) ``` Note: using plotly: 'https:\/\/cdn.plot.ly\/plotly-latest.min.js?noext'ensures the latest version of Plotly will be used. It is also possible to specify the version of plotly, by using e.g. plotly: 'https:\/\/cdn.plot.ly\/plotly-1.5.1.min.js?noext'. 
Create sample dataframe Data source: Annual rainfall data for Peachtree City, GA from National Weather Service [3]. ``` df = pd.DataFrame({ 'month': ['January', 'February', 'March', 'April', 'May', 'June', 'July', 'August', 'September', 'October', 'November', 'December'], 'Year_2018': [3.26, 6.11, 4.86, 6.53, 4.45, 3.86, 8.04, 7.59, 1.48, 4.75, 7.27, 11.83], 'Year_1996': [8.26, 3.82, 6.42, 2.91, 2.12, 1.70, 2.14, 4.66, 4.32, 0.89, 3.22, 4.14] } ) df ``` Create an interactive iplot ``` configure_plotly_browser_state() df.iplot(kind='line',x='month',y=['Year_2018', 'Year_1996'], color=['white', 'gold'], theme='solar', mode='markers+lines',title='Annual Rainfall in the city Peachtree City, GA') ``` Output:", "best_answers_score":0.8, "library_name":"matplotlib", "question_url":"https:\/\/stackoverflow.com\/questions\/52859983\/interactive-matplotlib-figures-in-google-colab", "best_answers_votes":14, "question_length":294, "response_length":1944 }, { "question":"How to plot a chart in the terminal I'm researching ML\/Theano, and recently came across this script: https:\/\/gist.github.com\/notmatthancock\/68d52af2e8cde7fbff1c9225b2790a7f which was cool to play with. And like all ML researchers, I recently upgraded to a server, and while it's more powerful, it also presented me with a problem.
The script is very long, but it ends with this code: ``` def plot_stuff(inputs, outputs, losses, net_func, n_hidden): fig,axes = plt.subplots(1,2,figsize=(12,6)) axes[0].plot(np.arange(losses.shape[0])+1, losses) axes[0].set_xlabel('iteration') axes[0].set_ylabel('loss') axes[0].set_xscale('log') axes[0].set_yscale('log') x,y = np.mgrid[inputs[:,0].min():inputs[:,0].max():51j, inputs[:,1].min():inputs[:,1].max():51j] z = net_func( np.c_[x.flatten(), y.flatten()] ).reshape(x.shape) axes[1].contourf(x,y,z, cmap=plt.cm.RdBu, alpha=0.6) axes[1].plot(inputs[outputs==0,0], inputs[outputs==0,1], 'or') axes[1].plot(inputs[outputs==1,0], inputs[outputs==1,1], 'sb') axes[1].set_title('Percent misclassified: %0.2f%%' % (((net_func(inputs)>0.5) != outputs.astype(np.bool)).mean()*100)) fig.suptitle('Shallow net with %d hidden units'%n_hidden) plt.show() if __name__=='__main__': n_hidden = 40 inputs, outputs = gen_data(n_samples_per_class=100) losses, net_func = train_neural_network(inputs=inputs, outputs=outputs, n_hidden=n_hidden, n_iters=int(2000), learning_rate=0.1) plot_stuff(inputs, outputs, losses, net_func, n_hidden) ``` Which generates this chart: And when I tried to run it on the server, which, being a server, has no screen, only a command line, I predictably got this error: ``` fedora@ip-173-33-18-911:~\/scripting\/spiral$ python spiral.py Iteration 2000 \/ 2000, Loss: 0.172083 Traceback (most recent call last): File \"spiral.py\", line 133, in plot_stuff(inputs, outputs, losses, net_func, n_hidden) File \"spiral.py\", line 110, in plot_stuff fig,axes = plt.subplots(1,2,figsize=(12,6)) File \"\/usr\/lib\/pymodules\/python2.7\/matplotlib\/pyplot.py\", line 1046, in subplots fig = figure(**fig_kw) File \"\/usr\/lib\/pymodules\/python2.7\/matplotlib\/pyplot.py\", line 423, in figure **kwargs) File \"\/usr\/lib\/pymodules\/python2.7\/matplotlib\/backends\/backend_tkagg.py\", line 79, in new_figure_manager return new_figure_manager_given_figure(num, figure) File
\"\/usr\/lib\/pymodules\/python2.7\/matplotlib\/backends\/backend_tkagg.py\", line 87, in new_figure_manager_given_figure window = Tk.Tk() File \"\/usr\/lib\/python2.7\/lib-tk\/Tkinter.py\", line 1767, in __init__ self.tk = _tkinter.create(screenName, baseName, className, interactive, wantobjects, useTk, sync, use) _tkinter.TclError: no display name and no $DISPLAY environment variable ``` Is there a way\/method\/function to display charts and graphs in the command line?", "response":"termplotlib (a small project of mine) might come in handy here. Install with ``` pip install termplotlib ``` and produce terminal plots like ```py import termplotlib as tpl import numpy as np x = np.linspace(0, 2*np.pi, 100) y = np.sin(x) + x fig = tpl.figure() fig.plot(x, y, width=60, height=20) fig.show() ``` ``` 7 +---------------------------------------------------+ | | 6 | ** | | ** | | ** | 5 | ** | | *** | 4 | **** | | ***** | 3 | ***************** | | **** | 2 | *** | | *** | | *** | 1 | ** | |** | 0 +---------------------------------------------------+ 0 1 2 3 4 5 6 7 ```", "best_answers_score":0.8, "library_name":"matplotlib", "question_url":"https:\/\/stackoverflow.com\/questions\/37288421\/how-to-plot-a-chart-in-the-terminal", "best_answers_votes":73, "question_length":2743, "response_length":587 }, { "question":"python matplotlib framework under macosx? I am getting this error: \/sw\/lib\/python2.7\/site-packages\/matplotlib\/backends\/backend_macosx.py:235: UserWarning: Python is not installed as a framework. The MacOSX backend may not work correctly if Python is not installed as a framework. Please see the Python documentation for more information on installing Python as a framework on Mac OS X I installed python27 using fink and it's using the default matplotlib is using macosx framework.", "response":"Some users may not want to change the backend for all of their scripts. 
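As an aside (my addition, not part of the original answer): the "all of their scripts" route means editing the matplotlibrc configuration file, e.g. adding a line such as backend: TkAgg. The location of the file currently in effect can be queried from Python:

```python
import matplotlib

# Print the path of the matplotlibrc file matplotlib is actually reading.
# Adding a line such as
#     backend: TkAgg
# to that file changes the default backend for every script.
print(matplotlib.matplotlib_fname())
```

Edits to that file apply to every script run with that Python installation, so the per-script matplotlib.use(...) call may be preferable for one-off changes.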
This page -- http:\/\/matplotlib.org\/faq\/usage_faq.html#what-is-a-backend -- tells another way: ``` import matplotlib matplotlib.use('TkAgg') ``` You have to do this before importing a subpackage of matplotlib or a third-party package that depends on matplotlib.", "best_answers_score":0.8, "library_name":"matplotlib", "question_url":"https:\/\/stackoverflow.com\/questions\/4130355\/python-matplotlib-framework-under-macosx", "best_answers_votes":64, "question_length":481, "response_length":332 }, { "question":"Is it possible to have multiple PyPlot windows? Or am I limited to subplots? I'm not sure how to word my question more clearly. Basically, is PyPlot limited to one instance\/window? Any hack or workaround I try either causes my program to freeze or for the second pyplot window to be queued until the first one is closed.", "response":"Sure, just open a new figure: ``` import matplotlib.pyplot as plt plt.plot(range(10)) plt.figure() plt.plot(range(10), 'ro-') plt.figure(), plt.plot(...) plt.show() # only do this once, at the end ``` If you're running this in the default python interpreter, this won't work, as each figure needs to enter the gui's mainloop. If you want to run things in an interactive shell, look into IPython. If you just run this normally (i.e. put it into a file and call python filename.py) it will work fine, though.", "best_answers_score":0.8, "library_name":"matplotlib", "question_url":"https:\/\/stackoverflow.com\/questions\/5993206\/is-it-possible-to-have-multiple-pyplot-windows-or-am-i-limited-to-subplots", "best_answers_votes":108, "question_length":320, "response_length":506 }, { "question":"Figure to image as a numpy array I'm trying to get a numpy array image from a Matplotlib figure and I'm currently doing it by saving to a file, then reading the file back in, but I feel like there has to be a better way. 
Here's what I'm doing now: ``` from matplotlib.backends.backend_agg import FigureCanvasAgg as FigureCanvas from matplotlib.figure import Figure fig = Figure() canvas = FigureCanvas(fig) ax = fig.gca() ax.text(0.0,0.0,\"Test\", fontsize=45) ax.axis('off') canvas.print_figure(\"output.png\") image = plt.imread(\"output.png\") ``` I tried this: ``` image = np.fromstring( canvas.tostring_rgb(), dtype='uint8' ) ``` from an example I found but it gives me an error saying that 'FigureCanvasAgg' object has no attribute 'renderer'.", "response":"In order to get the figure contents as RGB pixel values, the matplotlib.backend_bases.Renderer needs to first draw the contents of the canvas. You can do this by manually calling canvas.draw(): ``` from matplotlib.figure import Figure fig = Figure() canvas = fig.canvas ax = fig.gca() ax.text(0.0,0.0,\"Test\", fontsize=45) ax.axis('off') canvas.draw() # Draw the canvas, cache the renderer image_flat = np.frombuffer(canvas.tostring_rgb(), dtype='uint8') # (H * W * 3,) # NOTE: reversed converts (W, H) from get_width_height to (H, W) image = image_flat.reshape(*reversed(canvas.get_width_height()), 3) # (H, W, 3) ``` See here for more info on the Matplotlib API.", "best_answers_score":0.8, "library_name":"matplotlib", "question_url":"https:\/\/stackoverflow.com\/questions\/35355930\/figure-to-image-as-a-numpy-array", "best_answers_votes":65, "question_length":743, "response_length":663 }, { "question":"percent label position in pie chart Is there a way to change the default position of the percent label in a matplotlib pie chart? Here is an example pie chart: Which I have created using: ``` plt.pie(sizes, labels=labels, colors=colors, explode=explode, autopct='%1.0f%%') ``` Now I don't like how some percent labels are intruding on other sections' territory (actually the only perpetrator in this example is the 9m section).
Ideally I would like such labels to be outside the pie chart with an arrow of some sort pointing to the section, or alternatively just outside the section.", "response":"You can control the distance of the percents and labels from the center of the pie using pctdistance= and labeldistance=, try this on your code: ``` plt.pie(sizes, labels=labels, autopct='%1.0f%%', pctdistance=1.1, labeldistance=1.2) ``` You can also set a radius of the pie using radius= (the default is 1)", "best_answers_score":0.8, "library_name":"matplotlib", "question_url":"https:\/\/stackoverflow.com\/questions\/21572870\/percent-label-position-in-pie-chart", "best_answers_votes":117, "question_length":433, "response_length":306 }, { "question":"savefig - text chopped off Say I create a plot: ``` import matplotlib.pyplot as plt plt.clf() import numpy as np props = np.random.randint(0,100,(200,)) x=np.arange(1,201,1) plt.plot(x,props,c='k') plt.xlabel('blah blah blah blah blah blah\\n blah blah blah blah blah') plt.ylabel('blah blah blah blah blah blah blah blah blah') fig=plt.gcf() fig.set_size_inches(3.75,3.75)#14, 23) plt.savefig('testfig.png',dpi=300) ``` When using Ipython (via Spyder), the plot presents ok. However when I looked at the saved image, it presents thus: As you can see, the text is cut off. What is recommended practice for dealing with this? I have got round it by increasing the figure size, and re-sizing afterwards. However, my aim is to produce a set of images with a consistent text size (figure size varies); so this approach is not ideal. Note. Whilst a similar question exists, this question is distinct in that it: deals with both xlabel and ylabel. combines with the set_size_inches function. seeks to ensure consistent text size with differing figure sizes. seeks to find out why Ipython output differs from savefig", "response":"The Ipython console in Spyder uses the inline backend, which saves the figure as png and displays the output image.
When saving, it uses the option bbox_inches = \"tight\". So in order to obtain the same figure as shown in the console, you may decide to use this option as well - it basically extends or shrinks the bounding box such that all objects in the canvas are displayed. ``` plt.savefig('testfig.png',dpi=300, bbox_inches = \"tight\") ``` Alternatively, you can make sure that all objects are already inside the figure boundaries before saving or showing the figure. This can either be accomplished using ``` plt.tight_layout() ``` which tries to do that automatically, or you can use ``` plt.subplots_adjust(left=0.3, right=0.9, bottom=0.3, top=0.9) ``` where the parameters denote the margins on each side in units of fractions of figure size (30% space on the left, 10% space on the right, etc.).", "best_answers_score":0.8, "library_name":"matplotlib", "question_url":"https:\/\/stackoverflow.com\/questions\/45239261\/savefig-text-chopped-off", "best_answers_votes":109, "question_length":1102, "response_length":904 }, { "question":"How to change marker border width and hatch width? In this example of a marker from my scatter plot, I have set the color to green, and edge color to black, and hatch to \"|\". For the hatch pattern to show up at all, I must set the edgecolor; however when I do, I get a very thick border around the marker. Two questions: How can I to set the size of this border (preferably to 0)? How can I increase the thickness of the hatch lines?", "response":"You just need to set the linewidth to control the marker border thickness. You can increase the density of hatching, by repeating symbols (in the example below, the '|' is repeated in the R\/H pane; note that to obtain NW->SE diagonal lines the symbol must be escaped so needs twice as many characters to really double it -- '\\\\\\\\' is density 2 while '||||' is density 4). However, I don't think the thickness of individual lines within hatching is controllable. 
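A note worth adding (my addition, and it assumes a newer matplotlib than this answer was written for — 2.0 or later): hatch line thickness has since become controllable globally through the hatch.linewidth rcParam. A minimal sketch:

```python
import matplotlib
matplotlib.use("Agg")  # non-interactive backend, safe without a display
import matplotlib.pyplot as plt

# Hatch line thickness is an rcParam in matplotlib >= 2.0, so it applies
# figure-wide rather than per-artist.
plt.rcParams["hatch.linewidth"] = 3.0

fig, ax = plt.subplots()
ax.scatter([1, 2, 3], [1, 4, 9], s=500, marker="s",
           edgecolor="black", facecolor="green", hatch="||")
fig.savefig("thick_hatch.png")
```

Because it is an rcParam rather than a per-artist property, it affects every hatched artist in the figure.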
See the code example below to produce scatter plots such as these: ``` import matplotlib.pyplot as plt # generate some data x = [1,2,3,4,5,8] y= [i**2 for i in x] y2= [60-i**2+3*i for i in x] # plot markers with thick borders plt.subplot(121) plt.scatter(x,y, s=500, marker='s', edgecolor='black', linewidth=3, facecolor='green', hatch='|') # compare with no borders, and denser hatch. plt.subplot(122) plt.scatter(x,y2, s=500, marker='s', edgecolor='black', linewidth=0, facecolor='green', hatch='||||') plt.show() ``` matplotlib documentation on collections and scatter.", "best_answers_score":0.8, "library_name":"matplotlib", "question_url":"https:\/\/stackoverflow.com\/questions\/14325773\/how-to-change-marker-border-width-and-hatch-width", "best_answers_votes":92, "question_length":433, "response_length":1034 }, { "question":"Creating a Colormap Legend in Matplotlib I am using imshow() in matplotlib like so: ``` import numpy as np import matplotlib.pyplot as plt mat = '''SOME MATRIX''' plt.imshow(mat, origin=\"lower\", cmap='gray', interpolation='nearest') plt.show() ``` How do I add a legend showing the numeric value for the different shades of gray. Sadly, my googling has not uncovered an answer :( Thank you in advance for the help. Vince", "response":"Simple, just plt.colorbar(): ``` import numpy as np import matplotlib.pyplot as plt mat = np.random.random((10,10)) plt.imshow(mat, origin=\"lower\", cmap='gray', interpolation='nearest') plt.colorbar() plt.show() ```", "best_answers_score":0.8, "library_name":"matplotlib", "question_url":"https:\/\/stackoverflow.com\/questions\/2451264\/creating-a-colormap-legend-in-matplotlib", "best_answers_votes":62, "question_length":420, "response_length":215 }, { "question":"How to prevent x-axis labels from overlapping I'm generating a bar-chart with matplotlib. It all works well but I can't figure out how to prevent the labels of the x-axis from overlapping each other. 
Here is an example: Here is some sample SQL for a postgres 9.1 database: ``` drop table if exists mytable; create table mytable(id bigint, version smallint, date_from timestamp without time zone); insert into mytable(id, version, date_from) values ('4084036', '1', '2006-12-22 22:46:35'), ('4084938', '1', '2006-12-23 16:19:13'), ('4084938', '2', '2006-12-23 16:20:23'), ('4084939', '1', '2006-12-23 16:29:14'), ('4084954', '1', '2006-12-23 16:28:28'), ('4250653', '1', '2007-02-12 21:58:53'), ('4250657', '1', '2007-03-12 21:58:53') ; ``` And this is my python-script: ``` # -*- coding: utf-8 -*- #!\/usr\/bin\/python2.7 import psycopg2 import matplotlib.pyplot as plt fig = plt.figure() # for savefig() import pylab ### ### Connect to database with psycopg2 ### try: conn_string=\"dbname='x' user='y' host='z' password='pw'\" print \"Connecting to database\\n->%s\" % (conn_string) conn = psycopg2.connect(conn_string) print \"Connection to database was established successfully\" except: print \"Connection to database failed\" ### ### Execute SQL query ### # New cursor method for sql cur = conn.cursor() # Execute SQL query. For more than one row use three '\"' try: cur.execute(\"\"\" -- In which year\/month have these points been created? -- Need 'yyyymm' because I only need Months with years (values are summed up). Without, query returns every day the db has an entry. SELECT to_char(s.day,'yyyymm') AS month ,count(t.id)::int AS count FROM ( SELECT generate_series(min(date_from)::date ,max(date_from)::date ,interval '1 day' )::date AS day FROM mytable t ) s LEFT JOIN mytable t ON t.date_from::date = s.day GROUP BY month ORDER BY month; \"\"\") # Return the results of the query. Fetchall() = all rows, fetchone() = first row records = cur.fetchall() cur.close() except: print \"Query could not be executed\" # Unzip the data from the db-query.
Order is the same as db-query output year, count = zip(*records) ### ### Plot (Barchart) ### # Count the length of the range of the count-values, y-axis-values, position of axis-labels, legend-label plt.bar(range(len(count)), count, align='center', label='Amount of created\/edited points') # Add database-values to the plot with an offset of 10px\/10px ax = fig.add_subplot(111) for i,j in zip(year,count): ax.annotate(str(j), xy=(i,j), xytext=(10,10), textcoords='offset points') # Rotate x-labels on the x-axis fig.autofmt_xdate() # Label-values for x and y axis plt.xticks(range(len(count)), (year)) # Label x and y axis plt.xlabel('Year') plt.ylabel('Amount of created\/edited points') # Locate legend on the plot (http:\/\/matplotlib.org\/users\/legend_guide.html#legend-location) plt.legend(loc=1) # Plot-title plt.title(\"Amount of created\/edited points over time\") # show plot pylab.show() ``` Is there a way how I can prevent the labels from overlapping each other? Ideally in an automatic way, because I can't predict the amount of bars.", "response":"I think you're confused on a few points about how matplotlib handles dates. You're not actually plotting dates, at the moment. You're plotting things on the x-axis with [0,1,2,...] and then manually labeling every point with a string representation of the date. Matplotlib will automatically position ticks. However, you're over-riding matplotlib's tick positioning functionality (Using xticks is basically saying: \"I want ticks in exactly these positions\".) At the moment, you'll get ticks at [10, 20, 30, ...] if matplotlib automatically positions them. However, these will correspond to the values that you used to plot them, not the dates (which you didn't use when plotting). You probably want to actually plot things using dates. 
Currently, you're doing something like this: ``` import datetime as dt import matplotlib.dates as mdates import numpy as np import matplotlib.pyplot as plt # Generate a series of dates (these are in matplotlib's internal date format) dates = mdates.drange(dt.datetime(2010, 1, 1), dt.datetime(2012, 11, 1), dt.timedelta(weeks=3)) # Create some data for the y-axis counts = np.sin(np.linspace(0, np.pi, dates.size)) # Set up the axes and figure fig, ax = plt.subplots() # Make a bar plot, ignoring the date values ax.bar(np.arange(counts.size), counts, align='center', width=1.0) # Force matplotlib to place a tick at every bar and label them with the date datelabels = mdates.num2date(dates) # Go back to a sequence of datetimes... ax.set(xticks=np.arange(dates.size), xticklabels=datelabels) #Same as plt.xticks # Make space for and rotate the x-axis tick labels fig.autofmt_xdate() plt.show() ``` Instead, try something like this: ``` import datetime as dt import matplotlib.dates as mdates import numpy as np import matplotlib.pyplot as plt # Generate a series of dates (these are in matplotlib's internal date format) dates = mdates.drange(dt.datetime(2010, 1, 1), dt.datetime(2012, 11, 1), dt.timedelta(weeks=3)) # Create some data for the y-axis counts = np.sin(np.linspace(0, np.pi, dates.size)) # Set up the axes and figure fig, ax = plt.subplots() # By default, the bars will have a width of 0.8 (days, in this case). We want # them quite a bit wider, so we'll make them the minimum spacing between # the dates. (To use the exact code below, you'll need to convert your sequence # of datetimes into matplotlib's float-based date format. # Use \"dates = mdates.date2num(dates)\" to convert them.) width = np.diff(dates).min() # Make a bar plot. Note that I'm using \"dates\" directly instead of plotting # \"counts\" against x-values of [0,1,2...]
ax.bar(dates, counts, align='center', width=width) # Tell matplotlib to interpret the x-axis values as dates ax.xaxis_date() # Make space for and rotate the x-axis tick labels fig.autofmt_xdate() plt.show() ```", "best_answers_score":0.8, "library_name":"matplotlib", "question_url":"https:\/\/stackoverflow.com\/questions\/13515471\/how-to-prevent-x-axis-labels-from-overlapping", "best_answers_votes":43, "question_length":3075, "response_length":2802 }, { "question":"Output matplotlib figure to SVG with text as text, not curves When I use matplotlib.pyplot.savefig(\"test.svg\", format=\"svg\") to export the figure as SVG, then the resulting SVG file is huge. This is caused by the fact that there are a lot of text annotations in my figure, and each text ends up as paths in the SVG. I want my text to end up as text strings in SVG, and not paths. It gets too hard to interpret the output too, if the text strings are exported this way. Is there a way to force matplotlib to output text as text, not curves? Currently, I see these code fragments in my SVG file: ```xml ```", "response":"Matplotlibs SVG text rendering can be configured either in the matplotlibrc or in code. From Customizing Matplotlib with style sheets and rcParams: ```py #svg.fonttype : 'path' # How to handle SVG fonts: # 'none': Assume fonts are installed on the machine where the SVG will be viewed. 
# 'path': Embed characters as paths -- supported by most SVG renderers # 'svgfont': Embed characters as SVG fonts -- supported only by Chrome, # Opera and Safari ``` This translates to the following code for neither embedding the font nor rendering the text as path: ```py import matplotlib.pyplot as plt plt.rcParams['svg.fonttype'] = 'none' ```", "best_answers_score":0.8, "library_name":"matplotlib", "question_url":"https:\/\/stackoverflow.com\/questions\/34387893\/output-matplotlib-figure-to-svg-with-text-as-text-not-curves", "best_answers_votes":85, "question_length":605, "response_length":632 }, { "question":"Automatically Rescale ylim and xlim I'm plotting data in Python using matplotlib. I am updating the data of the plot based upon some calculations and want the ylim and xlim to be rescaled automatically. Instead what happens is the scale is set based upon the limits of the initial plot. A MWE is ``` import random import matplotlib.pyplot as pyplot pyplot.ion() x = range(10) y = lambda m: [m*random.random() for i in range(10)] pLine, = pyplot.plot(x, y(1)) for i in range(10): pLine.set_ydata(y(i+1)) pyplot.draw() ``` The first plot command generates a plot from [0,1] and I can see everything just fine. At the end, the y-data array goes from [0,10) with most of it greater than 1, but the y-limits of the figure remain [0,1]. I know I can manually change the limits using pyplot.ylim(...), but I don't know what to change them to. In the for loop, can I tell pyplot to scale the limits as if it was the first time being plotted?", "response":"You will need to update the axes' dataLim, then subsequently update the axes' viewLim based on the dataLim. The appropriate methods are axes.relim() and ax.autoscale_view().
Your example then looks like: ``` import random import matplotlib.pyplot as pyplot pyplot.ion() x = range(10) y = lambda m: [m*random.random() for i in range(10)] pLine, = pyplot.plot(x, y(1)) for i in range(10): pLine.set_ydata(y(i+1)) ax = pyplot.gca() # recompute the ax.dataLim ax.relim() # update ax.viewLim using the new dataLim ax.autoscale_view() pyplot.draw() ```", "best_answers_score":0.8, "library_name":"matplotlib", "question_url":"https:\/\/stackoverflow.com\/questions\/10984085\/automatically-rescale-ylim-and-xlim", "best_answers_votes":80, "question_length":933, "response_length":553 }, { "question":"matplotlib strings as labels on x axis I am building a small tool for data analysis and I have come to the point where I have to plot the prepared data. The code before this produces the following two lists with equal length. ``` t11 = ['00', '01', '02', '03', '04', '05', '10', '11', '12', '13', '14', '15', '20', '21', '22', '23', '24', '25', '30', '31', '32', '33', '34', '35', '40', '41', '42', '43', '44', '45', '50', '51', '52', '53', '54', '55'] t12 = [173, 135, 141, 148, 140, 149, 152, 178, 135, 96, 109, 164, 137, 152, 172, 149, 93, 78, 116, 81, 149, 202, 172, 99, 134, 85, 104, 172, 177, 150, 130, 131, 111, 99, 143, 194] ``` Based on this, I want to build a histogram with matplotlib.plt.hist. However, there are a couple of problems: 1. t11[x] and t12[x] are connected for all x, where t11[x] is actually a string. It represents a certain detector combination. For example: '01' tells that the detection was made in the 0th segment of the first detector, and 1st segment of the second detector. My goal is to have each entry from t11 as a labeled point on the x axis. The t12 entry is going to define the height of the bar above the t11 entry (on a logarithmic y axis). How does one configure such an x axis? 2. This is all very new to me. I could not find anything related in the documentation. Most probably because I did not know what to search for.
SO: Is there an \"official\" name for what I am trying to achieve? This would also help me a lot.", "response":"Use the xticks command. ``` import matplotlib.pyplot as plt t11 = ['00', '01', '02', '03', '04', '05', '10', '11', '12', '13', '14', '15', '20', '21', '22', '23', '24', '25', '30', '31', '32', '33', '34', '35', '40', '41', '42', '43', '44', '45', '50', '51', '52', '53', '54', '55'] t12 = [173, 135, 141, 148, 140, 149, 152, 178, 135, 96, 109, 164, 137, 152, 172, 149, 93, 78, 116, 81, 149, 202, 172, 99, 134, 85, 104, 172, 177, 150, 130, 131, 111, 99, 143, 194] plt.bar(range(len(t12)), t12, align='center') plt.xticks(range(len(t11)), t11, size='small') plt.show() ```", "best_answers_score":0.8, "library_name":"matplotlib", "question_url":"https:\/\/stackoverflow.com\/questions\/7559242\/matplotlib-strings-as-labels-on-x-axis", "best_answers_votes":81, "question_length":1459, "response_length":570 }, { "question":"Matplotlib: How to plot images instead of points? I want to read a list of images into Python\/Matplotlib and then plot these images instead of other markers (like points) in a graph. I have tried with imshow but I didn't succeed, because I cannot shift the image to another position and scale it appropriately. Maybe somebody has a good idea : )", "response":"There are two ways to do this. Plot the image using imshow with the extent kwarg set based on the location you want the image at. Use an OffsetImage inside an AnnotationBbox. The first way is the easiest to understand, but the second has a large advantage. The annotation box approach will allow the image to stay at a constant size as you zoom in. Using imshow will tie the size of the image to the data coordinates of the plot.
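For comparison, here is a rough sketch of the first option (imshow with the extent kwarg). This is my illustration, not from the original answer; the random img array and the [4, 6, 4, 6] target box are made up:

```python
import matplotlib
matplotlib.use("Agg")  # render off-screen
import matplotlib.pyplot as plt
import numpy as np

img = np.random.rand(8, 8, 3)  # stand-in for an image loaded with plt.imread()

fig, ax = plt.subplots()
ax.plot([0, 10], [0, 10], alpha=0.3)
# extent=[left, right, bottom, top] pins the image corners to data
# coordinates, so the image scales with the axes when you zoom.
im = ax.imshow(img, extent=[4, 6, 4, 6], origin="lower")
ax.set_xlim(0, 10)
ax.set_ylim(0, 10)
fig.savefig("imshow_extent_sketch.png")
```

Because the extent is given in data coordinates, the image grows and shrinks with the axes, which is exactly the behavior the answer contrasts with the AnnotationBbox approach.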
Here's an example of the second option: ``` import numpy as np import matplotlib.pyplot as plt from matplotlib.offsetbox import OffsetImage, AnnotationBbox from matplotlib.cbook import get_sample_data def main(): x = np.linspace(0, 10, 20) y = np.cos(x) image_path = get_sample_data('ada.png') fig, ax = plt.subplots() imscatter(x, y, image_path, zoom=0.1, ax=ax) ax.plot(x, y) plt.show() def imscatter(x, y, image, ax=None, zoom=1): if ax is None: ax = plt.gca() try: image = plt.imread(image) except TypeError: # Likely already an array... pass im = OffsetImage(image, zoom=zoom) x, y = np.atleast_1d(x, y) artists = [] for x0, y0 in zip(x, y): ab = AnnotationBbox(im, (x0, y0), xycoords='data', frameon=False) artists.append(ax.add_artist(ab)) ax.update_datalim(np.column_stack([x, y])) ax.autoscale() return artists main() ```", "best_answers_score":0.8, "library_name":"matplotlib", "question_url":"https:\/\/stackoverflow.com\/questions\/22566284\/matplotlib-how-to-plot-images-instead-of-points", "best_answers_votes":65, "question_length":344, "response_length":1260 }, { "question":"Remove the extra plot in the matplotlib subplot I want to plot 5 data frames in a 2 by 3 setting (i.e. 2 rows and 3 columns). This is my code: However there is an extra empty plot in the 6th position (second row and third column) which I want to get rid of it. I am wondering how I could remove it so that I have three plots in the first row and two plots in the second row. 
``` import matplotlib.pyplot as plt fig, axes = plt.subplots(nrows=2, ncols=3) fig.set_figheight(8) fig.set_figwidth(15) df[2].plot(kind='bar',ax=axes[0,0]); axes[0,0].set_title('2') df[4].plot(kind='bar',ax=axes[0,1]); axes[0,1].set_title('4') df[6].plot(kind='bar',ax=axes[0,2]); axes[0,2].set_title('6') df[8].plot(kind='bar',ax=axes[1,0]); axes[1,0].set_title('8') df[10].plot(kind='bar',ax=axes[1,1]); axes[1,1].set_title('10') plt.setp(axes, xticks=np.arange(len(observations)), xticklabels=map(str,observations), yticks=[0,1]) fig.tight_layout() ```", "response":"Try this: ``` fig.delaxes(axes[1][2]) ``` A much more flexible way to create subplots is the fig.add_axes() method. The parameter is a list of rect coordinates: fig.add_axes([x, y, xsize, ysize]). The values are relative to the canvas size, so an xsize of 0.5 means the subplot has half the width of the window.", "best_answers_score":0.8, "library_name":"matplotlib", "question_url":"https:\/\/stackoverflow.com\/questions\/44980658\/remove-the-extra-plot-in-the-matplotlib-subplot", "best_answers_votes":100, "question_length":931, "response_length":312 }, { "question":"Get the list of figures in matplotlib I would like to: ``` pylab.figure() pylab.plot(x) pylab.figure() pylab.plot(y) # ... for i, figure in enumerate(pylab.MagicFunctionReturnsListOfAllFigures()): figure.savefig('figure%d.png' % i) ``` What is the magic function that returns a list of current figures in pylab? Websearch didn't help...", "response":"Pyplot has a get_fignums method that returns a list of figure numbers.
This should do what you want: ``` import matplotlib.pyplot as plt import numpy as np x = np.arange(100) y = -x plt.figure() plt.plot(x) plt.figure() plt.plot(y) for i in plt.get_fignums(): plt.figure(i) plt.savefig('figure%d.png' % i) ```", "best_answers_score":0.8, "library_name":"matplotlib", "question_url":"https:\/\/stackoverflow.com\/questions\/3783217\/get-the-list-of-figures-in-matplotlib", "best_answers_votes":115, "question_length":336, "response_length":307 }, { "question":"Adjusting gridlines and ticks in matplotlib imshow I'm trying to plot a matrix of values and would like to add gridlines to make the boundary between values clearer. Unfortunately, imshow decided to locate the tick marks in the middle of each voxel. Is it possible to a) remove the ticks but leave the label in the same location and b) add gridlines between the pixel boundaries? ``` import matplotlib.pyplot as plt import numpy as np im = plt.imshow(np.reshape(np.random.rand(100), newshape=(10,10)), interpolation='none', vmin=0, vmax=1, aspect='equal'); ax = plt.gca(); ax.set_xticks(np.arange(0, 10, 1)); ax.set_yticks(np.arange(0, 10, 1)); ax.set_xticklabels(np.arange(1, 11, 1)); ax.set_yticklabels(np.arange(1, 11, 1)); ``` Image without the gridline and with tick marks in the wrong location ``` ax.grid(color='w', linestyle='-', linewidth=2) ``` Image with gridlines in the wrong location:", "response":"Code for solution as suggested by Serenity: ``` plt.figure() im = plt.imshow(np.reshape(np.random.rand(100), newshape=(10,10)), interpolation='none', vmin=0, vmax=1, aspect='equal') ax = plt.gca(); # Major ticks ax.set_xticks(np.arange(0, 10, 1)) ax.set_yticks(np.arange(0, 10, 1)) # Labels for major ticks ax.set_xticklabels(np.arange(1, 11, 1)) ax.set_yticklabels(np.arange(1, 11, 1)) # Minor ticks ax.set_xticks(np.arange(-.5, 10, 1), minor=True) ax.set_yticks(np.arange(-.5, 10, 1), minor=True) # Gridlines based on minor ticks ax.grid(which='minor', color='w', linestyle='-', linewidth=2) # 
Remove minor ticks ax.tick_params(which='minor', bottom=False, left=False) ``` Resulting image:", "best_answers_score":0.8, "library_name":"matplotlib", "question_url":"https:\/\/stackoverflow.com\/questions\/38973868\/adjusting-gridlines-and-ticks-in-matplotlib-imshow", "best_answers_votes":85, "question_length":898, "response_length":691 }, { "question":"Break \/\/ in x axis of matplotlib [duplicate] This question already has answers here: Is there a way to make a discontinuous axis in Matplotlib? (7 answers) Closed 7 years ago. Best way to describe what I want to achieve is using my own image: Now I have a lot of dead space in the spectra plot, especially between 5200 and 6300. My question is quite simple, how would I add in a nice little \/\/ break that looks something similar to this (image lifted from the net): I'm using this setup for my plots: ``` nullfmt = pyplot.NullFormatter() fig = pyplot.figure(figsize=(16,6)) gridspec_layout1= gridspec.GridSpec(2,1) gridspec_layout1.update(left=0.05, right=0.97, hspace=0, wspace=0.018) pyplot_top = fig.add_subplot(gridspec_layout1[0]) pyplot_bottom = fig.add_subplot(gridspec_layout1[1]) pyplot_top.xaxis.set_major_formatter(nullfmt) ``` I'm quite certain it is achievable with gridpsec but an advanced tutorial cover exactly how this is achieved would be greatly appreciated. Apologies also if this question has been dealt with previously on stackoverflow but I have looked extensively for the correct procedure for gridSpec but found nothing as yet. I have managed to go as far as this, pretty much there: However, my break lines are not as steep as I would like them...how do I change them? (I have made use of the example answer below)", "response":"You could adapt the matplotlib example for a break in the x-axis directly: ``` \"\"\" Broken axis example, where the x-axis will have a portion cut out. 
\"\"\" import matplotlib.pylab as plt import numpy as np x = np.linspace(0,10,100) x[75:] = np.linspace(40,42.5,25) y = np.sin(x) f, (ax, ax2) = plt.subplots(1, 2, sharey=True, facecolor='w') # plot the same data on both axes ax.plot(x, y) ax2.plot(x, y) ax.set_xlim(0, 7.5) ax2.set_xlim(40, 42.5) # hide the spines between ax and ax2 ax.spines['right'].set_visible(False) ax2.spines['left'].set_visible(False) ax.yaxis.tick_left() ax.tick_params(labelright='off') ax2.yaxis.tick_right() # This looks pretty good, and was fairly painless, but you can get that # cut-out diagonal lines look with just a bit more work. The important # thing to know here is that in axes coordinates, which are always # between 0-1, spine endpoints are at these locations (0, 0), (0, 1), # (1, 0), and (1, 1). Thus, we just need to put the diagonals in the # appropriate corners of each of our axes, and so long as we use the # right transform and disable clipping. d = .015 # how big to make the diagonal lines in axes coordinates # arguments to pass plot, just so we don't keep repeating them kwargs = dict(transform=ax.transAxes, color='k', clip_on=False) ax.plot((1-d, 1+d), (-d, +d), **kwargs) ax.plot((1-d, 1+d), (1-d, 1+d), **kwargs) kwargs.update(transform=ax2.transAxes) # switch to the bottom axes ax2.plot((-d, +d), (1-d, 1+d), **kwargs) ax2.plot((-d, +d), (-d, +d), **kwargs) # What's cool about this is that now if we vary the distance between # ax and ax2 via f.subplots_adjust(hspace=...) or plt.subplot_tool(), # the diagonal lines will move accordingly, and stay right at the tips # of the spines they are 'breaking' plt.show() ``` For your purposes, just plot your data twice (once on each axis, ax and ax2 and set your xlims appropriately. The \"break lines\" should move to match the new break because they are plotted in relative axis coordinates rather than data coordinates. The break lines are just unclipped plot lines drawn between a pair of points. E.g. 
ax.plot((1-d, 1+d), (-d, +d), **kwargs) plots the break line between point (1-d, -d) and (1+d, +d) on the first axis: this is the bottom right-hand one. If you want to change the gradient, change these values appropriately. For example, to make this one steeper, try ax.plot((1-d\/2, 1+d\/2), (-d, +d), **kwargs)", "best_answers_score":0.8, "library_name":"matplotlib", "question_url":"https:\/\/stackoverflow.com\/questions\/32185411\/break-in-x-axis-of-matplotlib", "best_answers_votes":52, "question_length":1340, "response_length":2415 }, { "question":"How to change the legend edgecolor and facecolor Is there, while rcParams['legend.frameon'] = 'False', a simple way to fill the legend area background with a given colour. More specifically I would like the grid not to be seen on the legend area because it disturbs the text reading. The keyword framealpha sounds like what I need but it doesn't change anything. ``` import matplotlib as mpl import matplotlib.pyplot as plt mpl.rcParams['legend.frameon'] = 'False' plt.plot(range(5), label = u\"line\") plt.grid(True) plt.legend(loc = best) plt.show() ``` I've also tried: ``` legend = plt.legend(frameon = 1) frame = legend.get_frame() frame.set_color('white') ``` but then I need to ask how can I change the background colour while keeping the frame on? Sometimes I want it ON with a background colour other than white. And also, is there a way of changing the colour of the frame?
With the above code I was expecting to change the colour of the frame only, not the background.", "response":"You can set the edge color and the face color separately like this: ``` frame.set_facecolor('green') frame.set_edgecolor('red') ``` There's more information under FancyBboxPatch here.", "best_answers_score":0.8, "library_name":"matplotlib", "question_url":"https:\/\/stackoverflow.com\/questions\/19863368\/how-to-change-the-legend-edgecolor-and-facecolor", "best_answers_votes":55, "question_length":975, "response_length":183 }, { "question":"3d axes ticks, labels, and LaTeX I am running this sample script, with the following modifications: ``` import matplotlib as mpl from mpl_toolkits.mplot3d import Axes3D import numpy as np import matplotlib.pyplot as plt mpl.rcParams['legend.fontsize'] = 10 fig = plt.figure() ax = fig.gca(projection='3d') theta = np.linspace(-4 * np.pi, 4 * np.pi, 100) z = np.linspace(-2, 2, 100) r = z**2 + 1 x = r * np.sin(theta) y = r * np.cos(theta) ax.plot(x, y, z, label='parametric curve') ax.legend() ax.set_xlabel('$X$', fontsize=20, rotation=150) ax.set_ylabel('$Y$') ax.set_zlabel(r'$\\gamma$', fontsize=30, rotation=60) ax.yaxis._axinfo['label']['space_factor'] = 3.0 plt.show() ``` How do I adjust the axis ticks to that of my choosing? I.e., how would I get the z-axis to only label 2, 0, and -2, and in the font size that I want? I know how to do this in 2D but not 3D. The script above produces the following: Why is the x-axis label distorted, which I wanted to do with this script, but not the z-axis label (gamma)? This does not make sense. I need this axis labeled in the Greek letter. How do I fix this?", "response":"How do I adjust the axis ticks to that of my choosing? I.e., how would I get the z-axis to only label 2, 0, and -2, and in the font size that I want? I know how to do this in 2D but not 3D. You have to change properties of zticks. 
Why is the x-axis label distorted, which I wanted to do with this script, but not the z-axis label (gamma)? This does not make sense. I need this axis labeled in the Greek letter. How do I fix this? You have to disable autorotation for z axis labels. Look at the code below: ``` import matplotlib as mpl from mpl_toolkits.mplot3d import Axes3D import numpy as np import matplotlib.pyplot as plt mpl.rcParams['legend.fontsize'] = 10 fig = plt.figure() ax = fig.gca(projection='3d') theta = np.linspace(-4 * np.pi, 4 * np.pi, 100) z = np.linspace(-2, 2, 100) r = z**2 + 1 x = r * np.sin(theta) y = r * np.cos(theta) ax.plot(x, y, z, label='parametric curve') ax.legend() ax.set_xlabel('$X$', fontsize=20) ax.set_ylabel('$Y$') ax.yaxis._axinfo['label']['space_factor'] = 3.0 # set z ticks and labels ax.set_zticks([-2, 0, 2]) # change fontsize for t in ax.zaxis.get_major_ticks(): t.label.set_fontsize(10) # disable auto rotation ax.zaxis.set_rotate_label(False) ax.set_zlabel('$\\gamma$', fontsize=30, rotation = 0) plt.show() ```", "best_answers_score":0.8, "library_name":"matplotlib", "question_url":"https:\/\/stackoverflow.com\/questions\/37711538\/3d-axes-ticks-labels-and-latex", "best_answers_votes":50, "question_length":1108, "response_length":1258 }, { "question":"Annotate data points while plotting from Pandas DataFrame I would like to annotate the data points with their values next to the points on the plot. The examples I found only deal with x and y as vectors. However, I would like to do this for a pandas DataFrame that contains multiple columns. 
``` ax = plt.figure().add_subplot(1, 1, 1) df.plot(ax = ax) plt.show() ``` What is the best way to annotate all the points for a multi-column DataFrame?", "response":"Here's a (very) slightly slicker version of Dan Allan's answer: ``` import matplotlib.pyplot as plt import pandas as pd import numpy as np import string df = pd.DataFrame({'x':np.random.rand(10), 'y':np.random.rand(10)}, index=list(string.ascii_lowercase[:10])) ``` Which gives: ``` x y a 0.541974 0.042185 b 0.036188 0.775425 c 0.950099 0.888305 d 0.739367 0.638368 e 0.739910 0.596037 f 0.974529 0.111819 g 0.640637 0.161805 h 0.554600 0.172221 i 0.718941 0.192932 j 0.447242 0.172469 ``` And then: ``` fig, ax = plt.subplots() df.plot('x', 'y', kind='scatter', ax=ax) for k, v in df.iterrows(): ax.annotate(k, v) ``` Finally, if you're in interactive mode you might need to refresh the plot: ``` fig.canvas.draw() ``` Which produces: Or, since that looks incredibly ugly, you can beautify things a bit pretty easily: ``` from matplotlib import cm cmap = cm.get_cmap('Spectral') df.plot('x', 'y', kind='scatter', ax=ax, s=120, linewidth=0, c=range(len(df)), colormap=cmap) for k, v in df.iterrows(): ax.annotate(k, v, xytext=(10,-5), textcoords='offset points', family='sans-serif', fontsize=18, color='darkslategrey') ``` Which looks a lot nicer:", "best_answers_score":0.8, "library_name":"matplotlib", "question_url":"https:\/\/stackoverflow.com\/questions\/15910019\/annotate-data-points-while-plotting-from-pandas-dataframe", "best_answers_votes":64, "question_length":445, "response_length":1149 }, { "question":"Python ASCII plots in terminal With Octave I am able to plot arrays to the terminal, for example, plotting an array with values for the function x^2 gives this output in my terminal: ``` 10000 ++---------+-----------+----------+-----------+---------++ ++ + + + + ++ |+ : : : : +| |++ : : : : ++| | + : : : : + | | ++ : : : : ++ | 8000 ++.+..................................................+.++ | ++ : : : : ++ | 
| ++ : : : : ++ | | + : : : : + | | ++ : : : : ++ | | + : : : : + | 6000 ++....++..........................................++....++ | ++ : : : : ++ | | + : : : : + | | ++ : : : : ++ | | ++: : : :++ | 4000 ++........++..................................++........++ | + : : + | | ++ : : ++ | | :++ : : ++: | | : ++ : : ++ : | | : ++ : : ++ : | 2000 ++.............++........................++.............++ | : ++ : : ++ : | | : +++ : : +++ : | | : ++ : : ++ : | | : +++: :+++ : | + + ++++ ++++ + + 0 ++---------+-----------+----------+-----------+---------++ 0 20000 40000 60000 80000 100000 ``` Is there some way I can do something similar in Python, specifically with matplotlib? bashplotlib seems to offer some of this functionality but appears to be quite basic compared to Octave's offering.", "response":"As few answers already suggested the gnuplot is a great choice. However, there is no need to call a gnuplot subprocess, it might be much easier to use a python gnuplotlib library. Example (from: https:\/\/github.com\/dkogan\/gnuplotlib): ``` >>> import numpy as np >>> import gnuplotlib as gp >>> x = np.linspace(-5,5,100) >>> gp.plot( x, np.sin(x) ) [ graphical plot pops up showing a simple sinusoid ] >>> gp.plot( (x, np.sin(x), {'with': 'boxes'}), ... (x, np.cos(x), {'legend': 'cosine'}), ... _with = 'lines', ... terminal = 'dumb 80,40', ... 
unset = 'grid') [ ascii plot printed on STDOUT] 1 +-+---------+----------+-----------+-----------+----------+---------+-+ + +|||+ + + +++++ +++|||+ + + | |||||+ + + +|||||| cosine +-----+ | 0.8 +-+ |||||| + + ++||||||+ +-+ | ||||||+ + ++||||||||+ | | ||||||| + ++||||||||| | | |||||||+ + ||||||||||| | 0.6 +-+ |||||||| + +||||||||||+ +-+ | ||||||||+ | ++||||||||||| | | ||||||||| + ||||||||||||| | 0.4 +-+ ||||||||| | ++||||||||||||+ +-+ | ||||||||| + +|||||||||||||| | | |||||||||+ + ||||||||||||||| | | ||||||||||+ | ++||||||||||||||+ + | 0.2 +-+ ||||||||||| + ||||||||||||||||| + +-+ | ||||||||||| | +||||||||||||||||+ | | | ||||||||||| + |||||||||||||||||| + | 0 +-+ +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ +-+ | + ||||||||||||||||||+ | ++|||||||||| | | | +||||||||||||||||| + ||||||||||| | | + ++|||||||||||||||| | +|||||||||| | -0.2 +-+ + ||||||||||||||||| + ||||||||||| +-+ | | ++||||||||||||||+ | ++||||||||| | | + ||||||||||||||| + ++|||||||| | | | +|||||||||||||| + ||||||||| | -0.4 +-+ + ++||||||||||||+ | +|||||||| +-+ | + ||||||||||||| + ||||||||| | | | +|||||||||||+ + ++||||||| | -0.6 +-+ + ++|||||||||| | +||||||| +-+ | + ||||||||||| + ++|||||| | | + +|||||||||+ + ||||||| | | + ++|||||||| + +++||||| | -0.8 +-+ + + ++||||||+ + + +||||| +-+ | + + +|||||| + + ++|||| | + + + ++ ++|||++ + + ++ + + ++||| + -1 +-+---------+----------+-----------+-----------+----------+---------+-+ -6 -4 -2 0 2 4 6 ```", "best_answers_score":0.8, "library_name":"matplotlib", "question_url":"https:\/\/stackoverflow.com\/questions\/20295646\/python-ascii-plots-in-terminal", "best_answers_votes":28, "question_length":1208, "response_length":1979 }, { "question":"What do all the distributions available in scipy.stats look like? Visualizing scipy.stats distributions A histogram can be made of the scipy.stats normal random variable to see what the distribution looks like. 
``` % matplotlib inline import pandas as pd import scipy.stats as stats d = stats.norm() rv = d.rvs(100000) pd.Series(rv).hist(bins=32, normed=True) ``` What do the other distributions look like?", "response":"Visualizing all scipy.stats distributions Based on the list of scipy.stats distributions, plotted below are the histograms and PDFs of each continuous random variable. The code used to generate each distribution is at the bottom. Note: The shape constants were taken from the examples on the scipy.stats distribution documentation pages. alpha(a=3.57, loc=0.00, scale=1.00) anglit(loc=0.00, scale=1.00) arcsine(loc=0.00, scale=1.00) beta(a=2.31, loc=0.00, scale=1.00, b=0.63) betaprime(a=5.00, loc=0.00, scale=1.00, b=6.00) bradford(loc=0.00, c=0.30, scale=1.00) burr(loc=0.00, c=10.50, scale=1.00, d=4.30) cauchy(loc=0.00, scale=1.00) chi(df=78.00, loc=0.00, scale=1.00) chi2(df=55.00, loc=0.00, scale=1.00) cosine(loc=0.00, scale=1.00) dgamma(a=1.10, loc=0.00, scale=1.00) dweibull(loc=0.00, c=2.07, scale=1.00) erlang(a=2.00, loc=0.00, scale=1.00) expon(loc=0.00, scale=1.00) exponnorm(loc=0.00, K=1.50, scale=1.00) exponpow(loc=0.00, scale=1.00, b=2.70) exponweib(a=2.89, loc=0.00, c=1.95, scale=1.00) f(loc=0.00, dfn=29.00, scale=1.00, dfd=18.00) fatiguelife(loc=0.00, c=29.00, scale=1.00) fisk(loc=0.00, c=3.09, scale=1.00) foldcauchy(loc=0.00, c=4.72, scale=1.00) foldnorm(loc=0.00, c=1.95, scale=1.00) frechet_l(loc=0.00, c=3.63, scale=1.00) frechet_r(loc=0.00, c=1.89, scale=1.00) gamma(a=1.99, loc=0.00, scale=1.00) gausshyper(a=13.80, loc=0.00, c=2.51, scale=1.00, b=3.12, z=5.18) genexpon(a=9.13, loc=0.00, c=3.28, scale=1.00, b=16.20) genextreme(loc=0.00, c=-0.10, scale=1.00) gengamma(a=4.42, loc=0.00, c=-3.12, scale=1.00) genhalflogistic(loc=0.00, c=0.77, scale=1.00) genlogistic(loc=0.00, c=0.41, scale=1.00) gennorm(loc=0.00, beta=1.30, scale=1.00) genpareto(loc=0.00, c=0.10, scale=1.00) gilbrat(loc=0.00, scale=1.00) gompertz(loc=0.00, c=0.95, scale=1.00) 
gumbel_l(loc=0.00, scale=1.00) gumbel_r(loc=0.00, scale=1.00) halfcauchy(loc=0.00, scale=1.00) halfgennorm(loc=0.00, beta=0.68, scale=1.00) halflogistic(loc=0.00, scale=1.00) halfnorm(loc=0.00, scale=1.00) hypsecant(loc=0.00, scale=1.00) invgamma(a=4.07, loc=0.00, scale=1.00) invgauss(mu=0.14, loc=0.00, scale=1.00) invweibull(loc=0.00, c=10.60, scale=1.00) johnsonsb(a=4.32, loc=0.00, scale=1.00, b=3.18) johnsonsu(a=2.55, loc=0.00, scale=1.00, b=2.25) ksone(loc=0.00, scale=1.00, n=1000.00) kstwobign(loc=0.00, scale=1.00) laplace(loc=0.00, scale=1.00) levy(loc=0.00, scale=1.00) levy_l(loc=0.00, scale=1.00) loggamma(loc=0.00, c=0.41, scale=1.00) logistic(loc=0.00, scale=1.00) loglaplace(loc=0.00, c=3.25, scale=1.00) lognorm(loc=0.00, s=0.95, scale=1.00) lomax(loc=0.00, c=1.88, scale=1.00) maxwell(loc=0.00, scale=1.00) mielke(loc=0.00, s=3.60, scale=1.00, k=10.40) nakagami(loc=0.00, scale=1.00, nu=4.97) ncf(loc=0.00, dfn=27.00, nc=0.42, dfd=27.00, scale=1.00) nct(df=14.00, loc=0.00, scale=1.00, nc=0.24) ncx2(df=21.00, loc=0.00, scale=1.00, nc=1.06) norm(loc=0.00, scale=1.00) pareto(loc=0.00, scale=1.00, b=2.62) pearson3(loc=0.00, skew=0.10, scale=1.00) powerlaw(a=1.66, loc=0.00, scale=1.00) powerlognorm(loc=0.00, s=0.45, scale=1.00, c=2.14) powernorm(loc=0.00, c=4.45, scale=1.00) rayleigh(loc=0.00, scale=1.00) rdist(loc=0.00, c=0.90, scale=1.00) recipinvgauss(mu=0.63, loc=0.00, scale=1.00) reciprocal(a=0.01, loc=0.00, scale=1.00, b=1.01) rice(loc=0.00, scale=1.00, b=0.78) semicircular(loc=0.00, scale=1.00) t(df=2.74, loc=0.00, scale=1.00) triang(loc=0.00, c=0.16, scale=1.00) truncexpon(loc=0.00, scale=1.00, b=4.69) truncnorm(a=0.10, loc=0.00, scale=1.00, b=2.00) tukeylambda(loc=0.00, scale=1.00, lam=3.13) uniform(loc=0.00, scale=1.00) vonmises(loc=0.00, scale=1.00, kappa=3.99) vonmises_line(loc=0.00, scale=1.00, kappa=3.99) wald(loc=0.00, scale=1.00) weibull_max(loc=0.00, c=2.87, scale=1.00) weibull_min(loc=0.00, c=1.79, scale=1.00) wrapcauchy(loc=0.00, c=0.03, 
scale=1.00) Generation Code Here is the Jupyter Notebook used to generate the plots. ``` %matplotlib inline import io import numpy as np import pandas as pd import scipy.stats as stats import matplotlib import matplotlib.pyplot as plt matplotlib.rcParams['figure.figsize'] = (16.0, 14.0) matplotlib.style.use('ggplot') ``` ``` # Distributions to check, shape constants were taken from the examples on the scipy.stats distribution documentation pages. DISTRIBUTIONS = [ stats.alpha(a=3.57, loc=0.0, scale=1.0), stats.anglit(loc=0.0, scale=1.0), stats.arcsine(loc=0.0, scale=1.0), stats.beta(a=2.31, b=0.627, loc=0.0, scale=1.0), stats.betaprime(a=5, b=6, loc=0.0, scale=1.0), stats.bradford(c=0.299, loc=0.0, scale=1.0), stats.burr(c=10.5, d=4.3, loc=0.0, scale=1.0), stats.cauchy(loc=0.0, scale=1.0), stats.chi(df=78, loc=0.0, scale=1.0), stats.chi2(df=55, loc=0.0, scale=1.0), stats.cosine(loc=0.0, scale=1.0), stats.dgamma(a=1.1, loc=0.0, scale=1.0), stats.dweibull(c=2.07, loc=0.0, scale=1.0), stats.erlang(a=2, loc=0.0, scale=1.0), stats.expon(loc=0.0, scale=1.0), stats.exponnorm(K=1.5, loc=0.0, scale=1.0), stats.exponweib(a=2.89, c=1.95, loc=0.0, scale=1.0), stats.exponpow(b=2.7, loc=0.0, scale=1.0), stats.f(dfn=29, dfd=18, loc=0.0, scale=1.0), stats.fatiguelife(c=29, loc=0.0, scale=1.0), stats.fisk(c=3.09, loc=0.0, scale=1.0), stats.foldcauchy(c=4.72, loc=0.0, scale=1.0), stats.foldnorm(c=1.95, loc=0.0, scale=1.0), stats.frechet_r(c=1.89, loc=0.0, scale=1.0), stats.frechet_l(c=3.63, loc=0.0, scale=1.0), stats.genlogistic(c=0.412, loc=0.0, scale=1.0), stats.genpareto(c=0.1, loc=0.0, scale=1.0), stats.gennorm(beta=1.3, loc=0.0, scale=1.0), stats.genexpon(a=9.13, b=16.2, c=3.28, loc=0.0, scale=1.0), stats.genextreme(c=-0.1, loc=0.0, scale=1.0), stats.gausshyper(a=13.8, b=3.12, c=2.51, z=5.18, loc=0.0, scale=1.0), stats.gamma(a=1.99, loc=0.0, scale=1.0), stats.gengamma(a=4.42, c=-3.12, loc=0.0, scale=1.0), stats.genhalflogistic(c=0.773, loc=0.0, scale=1.0), 
stats.gilbrat(loc=0.0, scale=1.0), stats.gompertz(c=0.947, loc=0.0, scale=1.0), stats.gumbel_r(loc=0.0, scale=1.0), stats.gumbel_l(loc=0.0, scale=1.0), stats.halfcauchy(loc=0.0, scale=1.0), stats.halflogistic(loc=0.0, scale=1.0), stats.halfnorm(loc=0.0, scale=1.0), stats.halfgennorm(beta=0.675, loc=0.0, scale=1.0), stats.hypsecant(loc=0.0, scale=1.0), stats.invgamma(a=4.07, loc=0.0, scale=1.0), stats.invgauss(mu=0.145, loc=0.0, scale=1.0), stats.invweibull(c=10.6, loc=0.0, scale=1.0), stats.johnsonsb(a=4.32, b=3.18, loc=0.0, scale=1.0), stats.johnsonsu(a=2.55, b=2.25, loc=0.0, scale=1.0), stats.ksone(n=1e+03, loc=0.0, scale=1.0), stats.kstwobign(loc=0.0, scale=1.0), stats.laplace(loc=0.0, scale=1.0), stats.levy(loc=0.0, scale=1.0), stats.levy_l(loc=0.0, scale=1.0), stats.levy_stable(alpha=0.357, beta=-0.675, loc=0.0, scale=1.0), stats.logistic(loc=0.0, scale=1.0), stats.loggamma(c=0.414, loc=0.0, scale=1.0), stats.loglaplace(c=3.25, loc=0.0, scale=1.0), stats.lognorm(s=0.954, loc=0.0, scale=1.0), stats.lomax(c=1.88, loc=0.0, scale=1.0), stats.maxwell(loc=0.0, scale=1.0), stats.mielke(k=10.4, s=3.6, loc=0.0, scale=1.0), stats.nakagami(nu=4.97, loc=0.0, scale=1.0), stats.ncx2(df=21, nc=1.06, loc=0.0, scale=1.0), stats.ncf(dfn=27, dfd=27, nc=0.416, loc=0.0, scale=1.0), stats.nct(df=14, nc=0.24, loc=0.0, scale=1.0), stats.norm(loc=0.0, scale=1.0), stats.pareto(b=2.62, loc=0.0, scale=1.0), stats.pearson3(skew=0.1, loc=0.0, scale=1.0), stats.powerlaw(a=1.66, loc=0.0, scale=1.0), stats.powerlognorm(c=2.14, s=0.446, loc=0.0, scale=1.0), stats.powernorm(c=4.45, loc=0.0, scale=1.0), stats.rdist(c=0.9, loc=0.0, scale=1.0), stats.reciprocal(a=0.00623, b=1.01, loc=0.0, scale=1.0), stats.rayleigh(loc=0.0, scale=1.0), stats.rice(b=0.775, loc=0.0, scale=1.0), stats.recipinvgauss(mu=0.63, loc=0.0, scale=1.0), stats.semicircular(loc=0.0, scale=1.0), stats.t(df=2.74, loc=0.0, scale=1.0), stats.triang(c=0.158, loc=0.0, scale=1.0), stats.truncexpon(b=4.69, loc=0.0, scale=1.0), 
stats.truncnorm(a=0.1, b=2, loc=0.0, scale=1.0), stats.tukeylambda(lam=3.13, loc=0.0, scale=1.0), stats.uniform(loc=0.0, scale=1.0), stats.vonmises(kappa=3.99, loc=0.0, scale=1.0), stats.vonmises_line(kappa=3.99, loc=0.0, scale=1.0), stats.wald(loc=0.0, scale=1.0), stats.weibull_min(c=1.79, loc=0.0, scale=1.0), stats.weibull_max(c=2.87, loc=0.0, scale=1.0), stats.wrapcauchy(c=0.0311, loc=0.0, scale=1.0) ] ``` ``` bins = 32 size = 16384 plotData = [] for distribution in DISTRIBUTIONS: try: # Create random data rv = pd.Series(distribution.rvs(size=size)) # Get sane start and end points of distribution start = distribution.ppf(0.01) end = distribution.ppf(0.99) # Build PDF and turn into pandas Series x = np.linspace(start, end, size) y = distribution.pdf(x) pdf = pd.Series(y, x) # Get histogram of random data b = np.linspace(start, end, bins+1) y, x = np.histogram(rv, bins=b, normed=True) x = [(a+x[i+1])\/2.0 for i,a in enumerate(x[0:-1])] hist = pd.Series(y, x) # Create distribution name and parameter string title = '{}({})'.format(distribution.dist.name, ', '.join(['{}={:0.2f}'.format(k,v) for k,v in distribution.kwds.items()])) # Store data for later plotData.append({ 'pdf': pdf, 'hist': hist, 'title': title }) except Exception: print 'could not create data', distribution.dist.name ``` ``` plotMax = len(plotData) for i, data in enumerate(plotData): w = abs(abs(data['hist'].index[0]) - abs(data['hist'].index[1])) # Display plt.figure(figsize=(10, 6)) ax = data['pdf'].plot(kind='line', label='Model PDF', legend=True, lw=2) ax.bar(data['hist'].index, data['hist'].values, label='Random Sample', width=w, align='center', alpha=0.5) ax.set_title(data['title']) # Grab figure fig = matplotlib.pyplot.gcf() # Output 'file' fig.savefig('~\/Desktop\/dist\/'+data['title']+'.png', format='png', bbox_inches='tight') matplotlib.pyplot.close() ```", "best_answers_score":0.8, "library_name":"matplotlib", 
"question_url":"https:\/\/stackoverflow.com\/questions\/37559470\/what-do-all-the-distributions-available-in-scipy-stats-look-like", "best_answers_votes":139, "question_length":406, "response_length":9594 }, { "question":"plotting a histogram on a Log scale with Matplotlib I have a Pandas DataFrame that has the following values in a Series ``` x = [2, 1, 76, 140, 286, 267, 60, 271, 5, 13, 9, 76, 77, 6, 2, 27, 22, 1, 12, 7, 19, 81, 11, 173, 13, 7, 16, 19, 23, 197, 167, 1] ``` I was instructed to plot two histograms in a Jupyter notebook with Python 3.6. ``` x.plot.hist(bins=8) plt.show() ``` I chose 8 bins because that looked best to me. I have also been instructed to plot another histogram with the log of x. ``` x.plot.hist(bins=8) plt.xscale('log') plt.show() ``` This histogram looks TERRIBLE. Am I not doing something right? I've tried fiddling around with the plot, but everything I've tried just seems to make the histogram look even worse. Example: ``` x.plot(kind='hist', logx=True) ``` I was not given any instructions other than plot the log of X as a histogram. For the record, I have imported pandas, numpy, and matplotlib and specified that the plot should be inline.", "response":"Specifying bins=8 in the hist call means that the range between the minimum and maximum value is divided equally into 8 bins. What is equal on a linear scale is distorted on a log scale. What you could do is specify the bins of the histogram such that they are unequal in width in a way that would make them look equal on a logarithmic scale. ``` import pandas as pd import numpy as np import matplotlib.pyplot as plt x = [2, 1, 76, 140, 286, 267, 60, 271, 5, 13, 9, 76, 77, 6, 2, 27, 22, 1, 12, 7, 19, 81, 11, 173, 13, 7, 16, 19, 23, 197, 167, 1] x = pd.Series(x) # histogram on linear scale plt.subplot(211) hist, bins, _ = plt.hist(x, bins=8) # histogram on log scale. # Use non-equal bin sizes, such that they look equal on log scale. 
logbins = np.logspace(np.log10(bins[0]),np.log10(bins[-1]),len(bins)) plt.subplot(212) plt.hist(x, bins=logbins) plt.xscale('log') plt.show() ```", "best_answers_score":0.8, "library_name":"matplotlib", "question_url":"https:\/\/stackoverflow.com\/questions\/47850202\/plotting-a-histogram-on-a-log-scale-with-matplotlib", "best_answers_votes":81, "question_length":967, "response_length":884 }, { "question":"Plot multiple columns of pandas DataFrame using Seaborn suppose I have DataFrame with columns ['X_Axis','col_2','col_3',...,'col_n',] I need to plot the first column on X-Axis and rest on Y-Axis. FYI : all the values have been grouped according to X-Axis, the X-Axis values range from 0-25 and all other column values have been normalized to the scale of 0 - 1. I want it on same graph plot, not subplots. Preferred : FactorPlot , normal line graph.", "response":"Some seaborn plots will accept a wide dataframe, sns.pointplot(data=df, x='X_Axis', y='col_2'), but not sns.pointplot(data=df, x='X_Axis', y=['col_2', 'col_3']), so it's better to reshape the DataFrame. Reshape the DataFrame from wide to long with pandas.DataFrame.melt. Converting the dataframe from a wide to long form is standard for all seaborn plots, not just the examples shown. 
Tested in python 3.8.12, pandas 1.3.4, matplotlib 3.4.3, seaborn 0.11.2 Sample DataFrame ```py import pandas as pd import seaborn as sns df = pd.DataFrame({'X_Axis':[1,3,5,7,10,20], 'col_2':[.4,.5,.4,.5,.5,.4], 'col_3':[.7,.8,.9,.4,.2,.3], 'col_4':[.1,.3,.5,.7,.1,.0], 'col_5':[.5,.3,.6,.9,.2,.4]}) # display(df) X_Axis col_2 col_3 col_4 col_5 0 1 0.4 0.7 0.1 0.5 1 3 0.5 0.8 0.3 0.3 2 5 0.4 0.9 0.5 0.6 3 7 0.5 0.4 0.7 0.9 4 10 0.5 0.2 0.1 0.2 5 20 0.4 0.3 0.0 0.4 # convert to long (tidy) form dfm = df.melt('X_Axis', var_name='cols', value_name='vals') # display(dfm.head()) X_Axis cols vals 0 1 col_2 0.4 1 3 col_2 0.5 2 5 col_2 0.4 3 7 col_2 0.5 4 10 col_2 0.5 ``` Current Plot Methods catplot: figure-level Use seaborn.catplot with kind= (e.g. kind='point' to reproduce the FactorPlot default): ```py g = sns.catplot(x=\"X_Axis\", y=\"vals\", hue='cols', data=dfm, kind='point') ``` pointplot: axes-level ```py sns.pointplot(x=\"X_Axis\", y=\"vals\", hue='cols', data=dfm) ``` Original factorplot: was renamed to catplot v0.9.0 (July 2018) New versions of seaborn get warning: The factorplot function has been renamed to catplot. The original name will be removed in a future release. Please update your code. Note that the default kind in factorplot ('point') has changed 'strip' in catplot. ```py g = sns.factorplot(x=\"X_Axis\", y=\"vals\", hue='cols', data=dfm) # using pd.melt instead of pd.DataFrame.melt for pandas < 0.20.0 # dfm = pd.melt(df, 'X_Axis', var_name='cols', value_name='vals') # g = sns.factorplot(x=\"X_Axis\", y=\"vals\", hue='cols', data=dfm) ```", "best_answers_score":0.8, "library_name":"matplotlib", "question_url":"https:\/\/stackoverflow.com\/questions\/44941082\/plot-multiple-columns-of-pandas-dataframe-using-seaborn", "best_answers_votes":123, "question_length":449, "response_length":1944 }, { "question":"How to smooth matplotlib contour plot? I have numpy array with this shape: (33,10). 
When I plot the contour I get an ugly image like this: while contour() doesn't seem to have any argument about smoothing or some sort of interpolation feature. I somehow expected that a tool which offers contour plots should offer smoothing too. Is there a straightforward way to do it in MPL?", "response":"As others have already pointed out, you need to interpolate your data. There are a number of different ways to do this, but for starters, consider scipy.ndimage.zoom. As a quick example: ``` import numpy as np import scipy.ndimage import matplotlib.pyplot as plt data = np.loadtxt('data.txt') # Resample your data grid by a factor of 3 using cubic spline interpolation. data = scipy.ndimage.zoom(data, 3) plt.contour(data) plt.show() ```", "best_answers_score":0.8, "library_name":"matplotlib", "question_url":"https:\/\/stackoverflow.com\/questions\/12274529\/how-to-smooth-matplotlib-contour-plot", "best_answers_votes":68, "question_length":366, "response_length":437 }, { "question":"Matplotlib Plots Lose Transparency When Saving as .ps\/.eps I'm having an issue with attempting to save some plots with transparent ellipsoids on them if I attempt to save them with .ps\/.eps extensions. Here's the plot saved as a .png: If I choose to save it as a .ps\/.eps here is what it looks like: How I got around this was to use ImageMagick to convert the original png to a ps. The only problem is that the image in png format is about 90k, and it becomes just under 4M after conversion. This is not good since I have a lot of these images, and it will take too much time to compile my LaTeX document. Does anyone have a solution to this?", "response":"The problem is that eps does not support transparencies natively.
There are a few options: rasterize the image and embed it in an eps file (like @Molly suggests) or export to pdf and convert with some external tool (like gs) (which usually relies on rasterization as well) 'mimic' transparency, giving a colour that looks like the transparent one on a given background. I discussed this for sure once on the matplotlib mailing list, and I got the suggestion to rasterize, which is not feasible as you get either pixelated or huge figures. And they don't scale very nicely when put into, e.g., a publication. I personally use the second approach, and although not ideal, I found it good enough. I wrote a small python script that implements the algorithm from this SO post to obtain a solid RGB representation of a colour with a given transparency. EDIT In the specific case of your plot, try to use the zorder keyword to order the parts plotted. Try to use zorder=10 for the blue ellipse, zorder=11 for the green and zorder=12 for the hexbins. This way the blue should be below everything, then the green ellipse and finally the hexbins. And the plot should be readable also with solid colors. And if you like the shades of blue and green that you have in png, you can try to play with mimic_alpha.py. EDIT 2 If you are 100% sure that you have to use eps, there are a couple of workarounds that come to my mind (and that are definitely uglier than your plot): Just draw the ellipse borders on top of the hexbins. Get centre and amplitude of each hexagon (possibly discard all zero bins) and make a scatter plot using the same colour map as in hexbin and adjusting the marker size and shape as you like.
You might want to redraw the ellipse borders on top of that", "best_answers_score":0.8, "library_name":"matplotlib", "question_url":"https:\/\/stackoverflow.com\/questions\/19638773\/matplotlib-plots-lose-transparency-when-saving-as-ps-eps", "best_answers_votes":38, "question_length":643, "response_length":1762 }, { "question":"Add Legend to Seaborn point plot I am plotting multiple dataframes as point plots using seaborn. Also, I am plotting all the dataframes on the same axis. How would I add a legend to the plot? My code takes each of the dataframes and plots it one after another on the same figure. Each dataframe has the same columns ``` date count 2017-01-01 35 2017-01-02 43 2017-01-03 12 2017-01-04 27 ``` My code: ``` f, ax = plt.subplots(1, 1, figsize=figsize) x_col='date' y_col = 'count' sns.pointplot(ax=ax,x=x_col,y=y_col,data=df_1,color='blue') sns.pointplot(ax=ax,x=x_col,y=y_col,data=df_2,color='green') sns.pointplot(ax=ax,x=x_col,y=y_col,data=df_3,color='red') ``` This plots 3 lines on the same plot. However the legend is missing. The documentation does not accept a label argument. One workaround that worked was creating a new dataframe and using the hue argument. ``` df_1['region'] = 'A' df_2['region'] = 'B' df_3['region'] = 'C' df = pd.concat([df_1,df_2,df_3]) sns.pointplot(ax=ax,x=x_col,y=y_col,data=df,hue='region') ``` But I would like to know if there is a way to create a legend for the code that first adds point plots sequentially to the figure and then adds a legend. Sample output :", "response":"I would suggest not using seaborn pointplot for plotting. This makes things unnecessarily complicated. Instead use matplotlib plot_date. This allows you to set labels for the plots and have them automatically put into a legend with ax.legend().
``` import matplotlib.pyplot as plt import pandas as pd import seaborn as sns import numpy as np date = pd.date_range(\"2017-03\", freq=\"M\", periods=15) count = np.random.rand(15,4) df1 = pd.DataFrame({\"date\":date, \"count\" : count[:,0]}) df2 = pd.DataFrame({\"date\":date, \"count\" : count[:,1]+0.7}) df3 = pd.DataFrame({\"date\":date, \"count\" : count[:,2]+2}) f, ax = plt.subplots(1, 1) x_col='date' y_col = 'count' ax.plot_date(df1.date, df1[\"count\"], color=\"blue\", label=\"A\", linestyle=\"-\") ax.plot_date(df2.date, df2[\"count\"], color=\"red\", label=\"B\", linestyle=\"-\") ax.plot_date(df3.date, df3[\"count\"], color=\"green\", label=\"C\", linestyle=\"-\") ax.legend() plt.gcf().autofmt_xdate() plt.show() ``` In case one is still interested in obtaining the legend for pointplots, here a way to go: ``` sns.pointplot(ax=ax,x=x_col,y=y_col,data=df1,color='blue') sns.pointplot(ax=ax,x=x_col,y=y_col,data=df2,color='green') sns.pointplot(ax=ax,x=x_col,y=y_col,data=df3,color='red') ax.legend(handles=ax.lines[::len(df1)+1], labels=[\"A\",\"B\",\"C\"]) ax.set_xticklabels([t.get_text().split(\"T\")[0] for t in ax.get_xticklabels()]) plt.gcf().autofmt_xdate() plt.show() ```", "best_answers_score":0.8, "library_name":"matplotlib", "question_url":"https:\/\/stackoverflow.com\/questions\/42767489\/add-legend-to-seaborn-point-plot", "best_answers_votes":50, "question_length":1182, "response_length":1389 }, { "question":"How does one insert statistical annotations (stars or p-values) This seems like a trivial question, but I've been searching for a while and can't seem to find an answer. It also seems like something that should be a standard part of these packages. Does anyone know if there is a standard way to include statistical annotation between distribution plots in seaborn? 
For example, between two box or swarmplots?", "response":"A brace \/ bracket can be plotted direct with matplotlib.pyplot.plot or matplotlib.axes.Axes.plot, and annotations can be added with matplotlib.pyplot.text or matplotlib.axes.Axes.text. seaborn categorical plots are 0 indexed, whereas box plots, by default, with matplotlib and pandas, start at range(1, N+1), which can be adjusted with the positions parameter. seaborn is a high-level API for matplotlib, and pandas.DataFrame.plot uses matplotlib as the default backend. Imports and DataFrame ```py import seaborn as sns import matplotlib.pyplot as plt # dataframe in long form for seaborn tips = sns.load_dataset(\"tips\") # dataframe in wide form for plotting with pandas.DataFrame.plot df = tips.pivot(columns='day', values='total_bill') # data as a list of lists for plotting directly with matplotlib (no nan values allowed) data = [df[c].dropna().tolist() for c in df.columns] ``` seaborn ```py sns.boxplot(x=\"day\", y=\"total_bill\", data=tips, palette=\"PRGn\") # statistical annotation x1, x2 = 2, 3 # columns 'Sat' and 'Sun' (first column: 0, see plt.xticks()) y, h, col = tips['total_bill'].max() + 2, 2, 'k' plt.plot([x1, x1, x2, x2], [y, y+h, y+h, y], lw=1.5, c=col) plt.text((x1+x2)*.5, y+h, \"ns\", ha='center', va='bottom', color=col) plt.show() ``` pandas.DataFrame.plot ```py ax = df.plot(kind='box', positions=range(len(df.columns))) x1, x2 = 2, 3 y, h, col = df.max().max() + 2, 2, 'k' ax.plot([x1, x1, x2, x2], [y, y+h, y+h, y], lw=1.5, c=col) ax.text((x1+x2)*.5, y+h, \"ns\", ha='center', va='bottom', color=col) ``` matplotlib ```py plt.boxplot(data, positions=range(len(data))) x1, x2 = 2, 3 y, h, col = max(map(max, data)) + 2, 2, 'k' plt.plot([x1, x1, x2, x2], [y, y+h, y+h, y], lw=1.5, c=col) plt.text((x1+x2)*.5, y+h, \"ns\", ha='center', va='bottom', color=col) ``` tips.head() ```none total_bill tip sex smoker day time size 0 16.99 1.01 Female No Sun Dinner 2 1 10.34 1.66 Male No Sun Dinner 3 2 21.01 3.50 
Male No Sun Dinner 3 3 23.68 3.31 Male No Sun Dinner 2 4 24.59 3.61 Female No Sun Dinner 4 ``` df.head() ```none day Thur Fri Sat Sun 0 NaN NaN NaN 16.99 1 NaN NaN NaN 10.34 2 NaN NaN NaN 21.01 3 NaN NaN NaN 23.68 4 NaN NaN NaN 24.59 ``` data ```none [[27.2, 22.76, 17.29, ..., 20.53, 16.47, 18.78], [28.97, 22.49, 5.75, ..., 13.42, 16.27, 10.09], [20.65, 17.92, 20.29, ..., 29.03, 27.18, 22.67, 17.82], [16.99, 10.34, 21.01, ..., 18.15, 23.1, 15.69]] ```", "best_answers_score":0.8, "library_name":"matplotlib", "question_url":"https:\/\/stackoverflow.com\/questions\/36578458\/how-does-one-insert-statistical-annotations-stars-or-p-values", "best_answers_votes":81, "question_length":409, "response_length":2379 }, { "question":"Increase resolution with word-cloud and remove empty border I am using word cloud with some txt files. How do I change this example if I wanted to 1) increase resolution and 2) remove empty border. ``` #!\/usr\/bin\/env python2 \"\"\" Minimal Example =============== Generating a square wordcloud from the US constitution using default arguments. \"\"\" from os import path import matplotlib.pyplot as plt from wordcloud import WordCloud d = path.dirname(__file__) # Read the whole text. text = open(path.join(d, 'constitution.txt')).read() wordcloud = WordCloud().generate(text) # Open a plot of the generated image. plt.imshow(wordcloud) plt.axis(\"off\") plt.show() ```", "response":"You can't increase the resolution of the image in plt.show() since that is determined by your screen, but you can increase the size. This allows it to scale, zoom, etc. without blurring. To do this pass dimensions to WordCloud, e.g. ``` wordcloud = WordCloud(width=800, height=400).generate(text) ``` However, this just determines the size of the image created by WordCloud. When you display this using matplotlib it is scaled to the size of the plot canvas, which is (by default) around 800x600 and you again lose quality. 
To fix this you need to specify the size of the figure before you call imshow, e.g. ``` plt.figure( figsize=(20,10) ) plt.imshow(wordcloud) ``` By doing this I can successfully create a 2000x1000 high resolution word cloud. For your second question (removing the border) first we could set the border to black, so it is less apparent, e.g. ``` plt.figure( figsize=(20,10), facecolor='k' ) ``` You can also shrink the size of the border by using tight_layout, e.g. ``` plt.tight_layout(pad=0) ``` The final code: ``` # Read the whole text. text = open(path.join(d, 'constitution.txt')).read() wordcloud = WordCloud(width=1600, height=800).generate(text) # Open a plot of the generated image. plt.figure( figsize=(20,10), facecolor='k') plt.imshow(wordcloud) plt.axis(\"off\") plt.tight_layout(pad=0) plt.show() ``` By replacing the last two lines with the following you can get the final output shown below: ``` plt.savefig('wordcloud.png', facecolor='k', bbox_inches='tight') ```", "best_answers_score":0.8, "library_name":"matplotlib", "question_url":"https:\/\/stackoverflow.com\/questions\/28786534\/increase-resolution-with-word-cloud-and-remove-empty-border", "best_answers_votes":120, "question_length":661, "response_length":1501 }, { "question":"ImportError: matplotlib requires dateutil I have successfully installed matplotlib with python 2.6 on x64 Windows7. When I try to import matplotlib, it shows the following error. I have also installed numpy following this link: Installing Numpy on 64bit Windows 7 with Python 2.7.3 ``` import matplotlib.pyplot as plt Traceback (most recent call last): File \"\", line 1, in import matplotlib.pyplot as plt File \"C:\\Python26\\Lib\\site-packages\\matplotlib\\__init__.py\", line 110, in raise ImportError(\"matplotlib requires dateutil\") ImportError: matplotlib requires dateutil ``` How can I make it work? 
I installed matplotlib-1.3.0.win-amd64-py2.6.exe from http:\/\/matplotlib.org\/downloads.html", "response":"Here's a list of the programs you can install on windows: http:\/\/www.lfd.uci.edu\/~gohlke\/pythonlibs\/ And you'll need the following dependencies: Requires numpy, dateutil, pytz, pyparsing, six", "best_answers_score":0.8, "library_name":"matplotlib", "question_url":"https:\/\/stackoverflow.com\/questions\/18280436\/importerror-matplotlib-requires-dateutil", "best_answers_votes":66, "question_length":691, "response_length":191 }, { "question":"How to remove or hide x-axis labels from a plot I have a boxplot and need to remove the x-axis ('user_type' and 'member_gender') label. How do I do this given the below format? ``` sb.boxplot(x=\"user_type\", y=\"Seconds\", data=df, color = default_color, ax = ax[0,0], sym='').set_title('User-Type (0=Non-Subscriber, 1=Subscriber)') sb.boxplot(x=\"member_gender\", y=\"Seconds\", data=df, color = default_color, ax = ax[1,0], sym='').set_title('Gender (0=Male, 1=Female, 2=Other)') ```", "response":"After creating the boxplot, use .set(). .set(xticklabels=[]) should remove tick labels. This doesn't work if you use .set_title(), but you can use .set(title=''). Do not use sns.boxplot(...).set(xticklabels=[]) because, while this works, the object type is changed from matplotlib.axes._axes.Axes for sns.boxplot(...), to list. .set(xlabel=None) should remove the axis label. .tick_params(bottom=False) will remove the ticks. 
Similarly, for the y-axis: How to remove or hide y-axis ticklabels from a plot Tested in python 3.11, pandas 1.5.2, matplotlib 3.6.2, seaborn 0.12.1 From the OP: No sample data ```py fig, ax = plt.subplots(2, 1) g1 = sb.boxplot(x=\"user_type\", y=\"Seconds\", data=df, color = default_color, ax = ax[0], sym='') g1.set(xticklabels=[]) g1.set(title='User-Type (0=Non-Subscriber, 1=Subscriber)') g1.set(xlabel=None) g2 = sb.boxplot(x=\"member_gender\", y=\"Seconds\", data=df, color = default_color, ax = ax[1], sym='') g2.set(xticklabels=[]) g2.set(title='Gender (0=Male, 1=Female, 2=Other)') g2.set(xlabel=None) ``` Example 1 With xticks and xlabel ```py import seaborn as sns import matplotlib.pyplot as plt # load data exercise = sns.load_dataset('exercise') pen = sns.load_dataset('penguins') # create figures fig, ax = plt.subplots(2, 1, figsize=(8, 8)) # plot data g1 = sns.boxplot(x='time', y='pulse', hue='kind', data=exercise, ax=ax[0]) g2 = sns.boxplot(x='species', y='body_mass_g', hue='sex', data=pen, ax=ax[1]) plt.show() ``` Without xticks and xlabel ```py fig, ax = plt.subplots(2, 1, figsize=(8, 8)) g1 = sns.boxplot(x='time', y='pulse', hue='kind', data=exercise, ax=ax[0]) g1.set(xticklabels=[]) # remove the tick labels g1.set(title='Exercise: Pulse by Time for Exercise Type') # add a title g1.set(xlabel=None) # remove the axis label g2 = sns.boxplot(x='species', y='body_mass_g', hue='sex', data=pen, ax=ax[1]) g2.set(xticklabels=[]) g2.set(title='Penguins: Body Mass by Species for Gender') g2.set(xlabel=None) g2.tick_params(bottom=False) # remove the ticks plt.show() ``` Example 2 ```py import numpy as np import matplotlib.pyplot as plt import pandas as pd # sinusoidal sample data sample_length = range(1, 1+1) # number of columns of frequencies rads = np.arange(0, 2*np.pi, 0.01) data = np.array([(np.cos(t*rads)*10**67) + 3*10**67 for t in sample_length]) df = pd.DataFrame(data.T, index=pd.Series(rads.tolist(), name='radians'), columns=[f'freq: {i}x' for i in 
sample_length]) df.reset_index(inplace=True) # plot fig, ax = plt.subplots(figsize=(8, 8)) ax.plot('radians', 'freq: 1x', data=df) # or skip the previous two lines and plot df directly # ax = df.plot(x='radians', y='freq: 1x', figsize=(8, 8), legend=False) ``` Remove Labels ```py # plot fig, ax = plt.subplots(figsize=(8, 8)) ax.plot('radians', 'freq: 1x', data=df) # or skip the previous two lines and plot df directly # ax = df.plot(x='radians', y='freq: 1x', figsize=(8, 8), legend=False) ax.set(xticklabels=[]) # remove the tick labels ax.tick_params(bottom=False) # remove the ticks ```", "best_answers_score":0.8, "library_name":"matplotlib", "question_url":"https:\/\/stackoverflow.com\/questions\/58476654\/how-to-remove-or-hide-x-axis-labels-from-a-plot", "best_answers_votes":110, "question_length":478, "response_length":3001 }, { "question":"No handles with labels found to put in legend I'm trying to create a parallelogram in PyPlot. I'm not up to drawing the parallelogram--first I'm putting in the vector arrows--using the following code: ``` fig = plt.figure() ax = fig.add_subplot(111) ax.spines['left'].set_position('zero') ax.spines['right'].set_color('none') ax.spines['bottom'].set_position('zero') ax.spines['top'].set_color('none') plt.axis([-5,5,-5,5]) ax.xaxis.set_ticks_position('bottom') ax.yaxis.set_ticks_position('left') plt.grid() plt.arrow(0,0, 3,1, head_width=0.2, color='r', length_includes_head=True, label='u') plt.arrow(0,0, 1,3, head_width=0.2, color='r', length_includes_head=True, label='v') plt.arrow(0,0, 4,4, head_width=0.2, color='r', length_includes_head=True, label='u+v') plt.legend() ``` This returns the following error: ```none No handles with labels found to put in legend. ``` I'm not sure why, because, based on the documentation for plt.arrow(), label is an acceptable kwarg, and plt.legend() should ostensibly be reading that. 
The rest of the figure draws fine; it's just missing the legend.", "response":"It might be late, but for anyone with the same issue: the solution is to call the legend() method on the corresponding ax, not on plt ``` fig = plt.figure() ax = fig.add_subplot(111) ax.spines['left'].set_position('zero') ax.spines['right'].set_color('none') ax.spines['bottom'].set_position('zero') ax.spines['top'].set_color('none') plt.axis([-5,5,-5,5]) ax.xaxis.set_ticks_position('bottom') ax.yaxis.set_ticks_position('left') plt.grid() plt.arrow(0,0, 3,1, head_width=0.2, color='r', length_includes_head=True, label='u') plt.arrow(0,0, 1,3, head_width=0.2, color='r', length_includes_head=True, label='v') plt.arrow(0,0, 4,4, head_width=0.2, color='r', length_includes_head=True, label='u+v') ax.legend() ```", "best_answers_score":0.8, "library_name":"matplotlib", "question_url":"https:\/\/stackoverflow.com\/questions\/59346731\/no-handles-with-labels-found-to-put-in-legend", "best_answers_votes":56, "question_length":1093, "response_length":714 }, { "question":"Matplotlib not using latex font while text.usetex==True I want to create labels to my plots with the latex computer modern font. However, the only way to persuade matplotlib to use the latex font is by inserting something like: ``` title(r'$\\mathrm{test}$') ``` This is of course ridiculous: I tell latex to start math mode, and then exit math mode temporarily to write the actual string. How do I make sure that all labels are rendered in latex, instead of just the formulas? And how do I make sure that this will be the default behaviour?
A minimal working example is as follows: ``` import matplotlib as mpl import matplotlib.pyplot as plt import numpy as np # use latex for font rendering mpl.rcParams['text.usetex'] = True x = np.linspace(-50,50,100) y = np.sin(x)**2\/x plt.plot(x,y) plt.xlabel(r'$\\mathrm{xlabel\\;with\\;\\LaTeX\\;font}$') plt.ylabel(r'Not a latex font') plt.show() ``` This gives the following result: Here the x axis is how I want the labels to appear. How do I make sure that all labels appear like this without having to go to math mode and back again?", "response":"The default Latex font is known as Computer Modern: ``` from matplotlib import rc import matplotlib.pylab as plt rc('font', **{'family': 'serif', 'serif': ['Computer Modern']}) rc('text', usetex=True) x = plt.linspace(0,5) plt.plot(x,plt.sin(x)) plt.ylabel(r\"This is $\\sin(x)$\", size=20) plt.show() ```", "best_answers_score":0.8, "library_name":"matplotlib", "question_url":"https:\/\/stackoverflow.com\/questions\/17958485\/matplotlib-not-using-latex-font-while-text-usetex-true", "best_answers_votes":59, "question_length":1073, "response_length":302 }, { "question":"How to export figures to files from IPython Notebook I use the IPython Notebook with the --pylab inline option, since I don't want plots to show up in a different window. Now I'd like to save the plots I see in the notebook to PDF or PNG files. Some code examples use ``` import matplotlib as plt plt.savefig(\"figure.png\") # save as png ``` but this does not seem to work in inline mode. Of course I could simply save the PNG that is generated out of the browser, but I'd like to do this with a line of Python. I am also interested in PDF export.", "response":"try this (note that the files get saved to the default notebook folder): ``` plot(range(80)) xlabel('foo') ylabel('bar') legend(['myline']) axis([0, 80, 0, 120]) savefig('sample.pdf') ``` if you want png just change it to 'sample.png'. 
Note that the savefig() call should be in the same notebook cell as the plotting commands.", "best_answers_score":0.8, "library_name":"matplotlib", "question_url":"https:\/\/stackoverflow.com\/questions\/13642528\/how-to-export-figures-to-files-from-ipython-notebook", "best_answers_votes":57, "question_length":546, "response_length":326 }, { "question":"How can I set the background color on specific areas of a figure? I've managed to plot a series of points with the following code: ``` plt = pp.figure() for i in range(spt.shape[1]): spktrain = spt[0,i] for trial in spktrain: non_z = np.nonzero(trial) non_z = non_z[0] pp.plot(t[non_z], trial[non_z], 'bo') ``` I would like to place alternating bands of white and gray background on the figure in order to separate the data from each iteration of the outer for loop. In other words, I would like the data from each \"spktrain\" to have it's own background color (the data does not overlap). How can I go about changing the background color of a figure in a specific region?", "response":"You can use axhspan and\/or axvspan like this: ``` import matplotlib.pyplot as plt plt.figure() plt.xlim(0, 5) plt.ylim(0, 5) for i in range(0, 5): plt.axhspan(i, i+.2, facecolor='0.2', alpha=0.5) plt.axvspan(i, i+.5, facecolor='b', alpha=0.5) plt.show() ```", "best_answers_score":0.8, "library_name":"matplotlib", "question_url":"https:\/\/stackoverflow.com\/questions\/9957637\/how-can-i-set-the-background-color-on-specific-areas-of-a-figure", "best_answers_votes":123, "question_length":671, "response_length":257 }, { "question":"fill between multiple lines I would like to fill between 3 lines in matplotlib.pyplot but unfortunately the fill_between gives me opportunity to fill between only two lines. Any ideas how to deal with this? 
Edit: Ok, I did not explain what I really mean, since I cannot add a picture with my current reputation, so maybe this way: I try to fill the polygon bounded by these lines and I have no idea how, because fill_between only gives me the opportunity to fill the area between two of them. Below the fill equations: ``` y = 0 x >= 0 ``` the x and y bigger than 0 is obvious. I start the plot from (0,0) but I still have 3 lines... ``` y <= 4 - 2x y <= 3 - 1\/2x y <= 1 - x ```", "response":"If you start the plot at point (0, 0), and therefore do not need to consider the area of the polygon not in the first quadrant, then this should do the trick in this particular situation: ``` import matplotlib.pyplot as plt import numpy as np x = np.arange(0,10,0.1) # The lines to plot y1 = 4 - 2*x y2 = 3 - 0.5*x y3 = 1 - x # The upper edge of polygon (min of lines y1 & y2) y4 = np.minimum(y1, y2) # Set y-limit, making neg y-values not show in plot plt.ylim(0, 5) # Plotting of lines plt.plot(x, y1, x, y2, x, y3) # Filling between line y3 and line y4 (alpha must be a float, not a string) plt.fill_between(x, y3, y4, color='grey', alpha=0.5) plt.show() ```", "best_answers_score":0.8, "library_name":"matplotlib", "question_url":"https:\/\/stackoverflow.com\/questions\/16417496\/fill-between-multiple-lines", "best_answers_votes":62, "question_length":670, "response_length":624 }, { "question":"plotting results of hierarchical clustering on top of a matrix of data How can I plot a dendrogram right on top of a matrix of values, reordered appropriately to reflect the clustering, in Python? An example is the following figure: This is Figure 6 from: A panel of induced pluripotent stem cells from chimpanzees: a resource for comparative functional genomics I use scipy.cluster.dendrogram to make my dendrogram and perform hierarchical clustering on a matrix of data.
How can I then plot the data as a matrix where the rows have been reordered to reflect a clustering induced by the cutting the dendrogram at a particular threshold, and have the dendrogram plotted alongside the matrix? I know how to plot the dendrogram in scipy, but not how to plot the intensity matrix of data with the right scale bar next to it.", "response":"The question does not define matrix very well: \"matrix of values\", \"matrix of data\". I assume that you mean a distance matrix. In other words, element D_ij in the symmetric nonnegative N-by-N distance matrix D denotes the distance between two feature vectors, x_i and x_j. Is that correct? If so, then try this (edited June 13, 2010, to reflect two different dendrograms). Tested in python 3.10 and matplotlib 3.5.1 ```py import numpy as np import matplotlib.pyplot as plt import scipy.cluster.hierarchy as sch from scipy.spatial.distance import squareform # Generate random features and distance matrix. np.random.seed(200) # for reproducible data x = np.random.rand(40) D = np.zeros([40, 40]) for i in range(40): for j in range(40): D[i,j] = abs(x[i] - x[j]) condensedD = squareform(D) # Compute and plot first dendrogram. fig = plt.figure(figsize=(8, 8)) ax1 = fig.add_axes([0.09, 0.1, 0.2, 0.6]) Y = sch.linkage(condensedD, method='centroid') Z1 = sch.dendrogram(Y, orientation='left') ax1.set_xticks([]) ax1.set_yticks([]) # Compute and plot second dendrogram. ax2 = fig.add_axes([0.3, 0.71, 0.6, 0.2]) Y = sch.linkage(condensedD, method='single') Z2 = sch.dendrogram(Y) ax2.set_xticks([]) ax2.set_yticks([]) # Plot distance matrix. axmatrix = fig.add_axes([0.3, 0.1, 0.6, 0.6]) idx1 = Z1['leaves'] idx2 = Z2['leaves'] D = D[idx1,:] D = D[:,idx2] im = axmatrix.matshow(D, aspect='auto', origin='lower', cmap=plt.cm.YlGnBu) axmatrix.set_xticks([]) # remove axis labels axmatrix.set_yticks([]) # remove axis labels # Plot colorbar. 
axcolor = fig.add_axes([0.91, 0.1, 0.02, 0.6]) plt.colorbar(im, cax=axcolor) plt.show() fig.savefig('dendrogram.png') ``` Edit: For different colors, adjust the cmap attribute in imshow. See the scipy\/matplotlib docs for examples. That page also describes how to create your own colormap. For convenience, I recommend using a preexisting colormap. In my example, I used YlGnBu. Edit: add_axes (see documentation here) accepts a list or tuple: (left, bottom, width, height). For example, (0.5,0,0.5,1) adds an Axes on the right half of the figure. (0,0.5,1,0.5) adds an Axes on the top half of the figure. Most people probably use add_subplot for its convenience. I like add_axes for its control. To remove the border, use add_axes([left,bottom,width,height], frame_on=False). See example here.", "best_answers_score":0.8, "library_name":"matplotlib", "question_url":"https:\/\/stackoverflow.com\/questions\/2982929\/plotting-results-of-hierarchical-clustering-on-top-of-a-matrix-of-data", "best_answers_votes":108, "question_length":821, "response_length":2328 }, { "question":"Matplotlib giving error \"OverflowError: In draw_path: Exceeded cell block limit\" I'm trying to render an image using matplotlib with 100000000 data points and it produces the error OverflowError: In draw_path: Exceeded cell block limit. Is there a limit in the amount of data points it can draw?", "response":"The problem is a hardcoded limit in the number of points in the backend Agg. Try using: ```py import matplotlib as mpl mpl.rcParams['agg.path.chunksize'] = 10000 ``` or other large value. 
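A minimal sketch of the workaround in context (the data, point count, and file name here are synthetic stand-ins; the key point is that the rcParam must be raised before the plot is drawn):

```python
import matplotlib
matplotlib.use('Agg')  # non-interactive backend so this runs as a plain script
import matplotlib.pyplot as plt
import numpy as np

# Raise the Agg path chunk size *before* plotting; 10000 is the commonly
# suggested value, and any sufficiently large number works.
matplotlib.rcParams['agg.path.chunksize'] = 10000

n = 1_000_000  # synthetic example size; very long paths are what trigger the error
x = np.linspace(0, 1, n)
plt.plot(x, np.sin(200 * np.pi * x), linewidth=0.5)
plt.savefig('big_path.png')
```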
You can find the issue and the solution proposed here: https:\/\/github.com\/matplotlib\/matplotlib\/issues\/5907", "best_answers_score":0.8, "library_name":"matplotlib", "question_url":"https:\/\/stackoverflow.com\/questions\/37470734\/matplotlib-giving-error-overflowerror-in-draw-path-exceeded-cell-block-limit", "best_answers_votes":81, "question_length":295, "response_length":295 }, { "question":"What is the process to create pdf reports with charts from a DB? [closed] Closed. This question is seeking recommendations for software libraries, tutorials, tools, books, or other off-site resources. It does not meet Stack Overflow guidelines. It is not currently accepting answers. We don\u2019t allow questions seeking recommendations for software libraries, tutorials, tools, books, or other off-site resources. You can edit the question so it can be answered with facts and citations. Closed 6 months ago. The community reviewed whether to reopen this question 6 months ago and left it closed: Original close reason(s) were not resolved Improve this question I have a database generated by a survey to evaluate university professors. What I want is a python script that takes the information from that database, generates a graphing table for each user, creates graphs for each user, and then renders it in a template to export it to a pdf. What does the database look like? ``` User Professor_evaluated Category Question Answer _________________________________________________________________ Mike Professor Criss respect 1 3 Mike Professor Criss respect 2 4 Mike Professor Criss wisdom 3 5 Mike Professor Criss wisdom 4 3 Charles Professor Criss respect 1 3 Charles Professor Criss respect 2 4 Charles Professor Criss wisdom 3 5 Charles Professor Criss wisdom 4 3 ``` Each teacher has several categories assigned to be evaluated (respect, wisdom, etc.) and in turn each category has associated questions. In other words, a category has several questions. 
Each row of the DB is the answer to a question from a student evaluating a teacher. What do I need? I need to create a script to automatically generate pdf reports that summarize this information through charts, for example a chart with the overall score of each teacher, another chart with the score of each teacher by category, another chart with the average of each student, etc. Finally, every teacher would have a report. I want a report like this What is my question? My question is about which python packages and modules I would need to do this task. And what would be the general process of doing so. I don't need the code, because I know the answer is very general, but the knowledge of how I could do it. For example: you would first need to process the information with pandas, to create a table that summarizes the information you want to graph, then plot it, then create a template of your report with XYZ module and then export it to pdf with XYZ module.", "response":"There are a lot of options for creating a pdf in python. Some of these options are ReportLab, PyPDF2, pdfdocument and FPDF. The FPDF library is fairly straightforward to use and is what I've used in this example. FPDF documentation can be found here. It's perhaps also good to think about what python modules you might want to use to create graphs and tables. In my example, I use matplotlib (link to docs) and I also use pandas to create a dataframe using pandas.DataFrame(). I've posted a rather lengthy but fully reproducible example below, using pandas, matplotlib and fpdf. The data are a subset of what the OP provided in the question. I loop through the dataframe in my example to create the table, but there are alternative and perhaps more efficient ways to do this.
``` import pandas as pd import matplotlib from pylab import title, figure, xlabel, ylabel, xticks, bar, legend, axis, savefig from fpdf import FPDF df = pd.DataFrame() df['Question'] = [\"Q1\", \"Q2\", \"Q3\", \"Q4\"] df['Charles'] = [3, 4, 5, 3] df['Mike'] = [3, 3, 4, 4] title(\"Professor Criss's Ratings by Users\") xlabel('Question Number') ylabel('Score') c = [2.0, 4.0, 6.0, 8.0] m = [x - 0.5 for x in c] xticks(c, df['Question']) bar(m, df['Mike'], width=0.5, color=\"#91eb87\", label=\"Mike\") bar(c, df['Charles'], width=0.5, color=\"#eb879c\", label=\"Charles\") legend() axis([0, 10, 0, 8]) savefig('barchart.png') pdf = FPDF() pdf.add_page() pdf.set_xy(0, 0) pdf.set_font('arial', 'B', 12) pdf.cell(60) pdf.cell(75, 10, \"A Tabular and Graphical Report of Professor Criss's Ratings by Users Charles and Mike\", 0, 2, 'C') pdf.cell(90, 10, \" \", 0, 2, 'C') pdf.cell(-40) pdf.cell(50, 10, 'Question', 1, 0, 'C') pdf.cell(40, 10, 'Charles', 1, 0, 'C') pdf.cell(40, 10, 'Mike', 1, 2, 'C') pdf.cell(-90) pdf.set_font('arial', '', 12) for i in range(0, len(df)): pdf.cell(50, 10, '%s' % (df['Question'].iloc[i]), 1, 0, 'C') pdf.cell(40, 10, '%s' % (str(df.Mike.iloc[i])), 1, 0, 'C') pdf.cell(40, 10, '%s' % (str(df.Charles.iloc[i])), 1, 2, 'C') pdf.cell(-90) pdf.cell(90, 10, \" \", 0, 2, 'C') pdf.cell(-30) pdf.image('barchart.png', x = None, y = None, w = 0, h = 0, type = '', link = '') pdf.output('test.pdf', 'F') ``` Expected test.pdf: Update (April 2020): I made an edit to the original answer in April 2020 to replace use of pandas.DataFrame.ix() since this is deprecated. 
In my example I was able to replace its use with pandas.DataFrame.iloc and the output is the same as before.", "best_answers_score":0.8, "library_name":"matplotlib", "question_url":"https:\/\/stackoverflow.com\/questions\/51864730\/what-is-the-process-to-create-pdf-reports-with-charts-from-a-db", "best_answers_votes":76, "question_length":2527, "response_length":2447 }, { "question":"Common title to many subplots in Matplotlib [duplicate] This question already has answers here: Global legend and title aside subplots (4 answers) Closed 7 years ago. I am making a chart in matplotlib and I have many subplots in it, each of them with a different title, but on the top I also want to put a title for the whole chart. How can this be done?", "response":"You can use the pyplot.suptitle command to add a centered title to the figure in addition to subplot titles.", "best_answers_score":0.8, "library_name":"matplotlib", "question_url":"https:\/\/stackoverflow.com\/questions\/10717104\/common-title-to-many-subplots-in-matplotlib", "best_answers_votes":91, "question_length":354, "response_length":109 }, { "question":"TypeError: Invalid dimensions for image data when plotting array with imshow() For the following code ``` # Numerical operation SN_map_final = (new_SN_map - mean_SN) \/ sigma_SN # Plot figure fig12 = plt.figure(12) fig_SN_final = plt.imshow(SN_map_final, interpolation='nearest') plt.colorbar() fig12 = plt.savefig(outname12) ``` with new_SN_map being a 1D array and mean_SN and sigma_SN being constants, I get the following error.
``` Traceback (most recent call last): File \"c:\\Users\\Valentin\\Desktop\\Stage M2\\density_map_simple.py\", line 546, in fig_SN_final = plt.imshow(SN_map_final, interpolation='nearest') File \"c:\\users\\valentin\\appdata\\local\\enthought\\canopy\\user\\lib\\site-packages\\matplotlib\\pyplot.py\", line 3022, in imshow **kwargs) File \"c:\\users\\valentin\\appdata\\local\\enthought\\canopy\\user\\lib\\site-packages\\matplotlib\\__init__.py\", line 1812, in inner return func(ax, *args, **kwargs) File \"c:\\users\\valentin\\appdata\\local\\enthought\\canopy\\user\\lib\\site-packages\\matplotlib\\axes\\_axes.py\", line 4947, in imshow im.set_data(X) File \"c:\\users\\valentin\\appdata\\local\\enthought\\canopy\\user\\lib\\site-packages\\matplotlib\\image.py\", line 453, in set_data raise TypeError(\"Invalid dimensions for image data\") TypeError: Invalid dimensions for image data ``` What is the source of this error? I thought my numerical operations were allowed.", "response":"There is a (somewhat) related question on StackOverflow: Showing an image with pylab.imshow() Here the problem was that an array of shape (nx,ny,1) is still considered a 3D array, and must be squeezed or sliced into a 2D array. More generally, the reason for the Exception TypeError: Invalid dimensions for image data is shown here: matplotlib.pyplot.imshow() needs a 2D array, or a 3D array with the third dimension being of shape 3 or 4! You can easily check this with (these checks are done by imshow, this function is only meant to give a more specific message in case it's not a valid input): ``` from __future__ import print_function import numpy as np def valid_imshow_data(data): data = np.asarray(data) if data.ndim == 2: return True elif data.ndim == 3: if 3 <= data.shape[2] <= 4: return True else: print('The third dimension must be of shape 3 or 4, not \"{}\".'.format(data.shape[2])) return False else: print('To visualize an image the data must be 2 dimensional or 3 dimensional, not \"{}\".'.format(data.ndim)) return False ``` For example, with your 1D array: ``` >>> new_SN_map = np.array([1,2,3]) >>> valid_imshow_data(new_SN_map) To visualize an image the data must be 2 dimensional or 3 dimensional, not \"1\".
False ``` The np.asarray call is what is done internally by matplotlib.pyplot.imshow, so it's generally best you do it too. If you already have a numpy array it's redundant, but if not (for example a list) it's necessary. In your specific case you got a 1D array, so you need to add a dimension with np.expand_dims() ``` import numpy as np import matplotlib.pyplot as plt a = np.array([1,2,3,4,5]) a = np.expand_dims(a, axis=0) # or axis=1 plt.imshow(a) plt.show() ``` or just use something that accepts 1D arrays like plot: ``` a = np.array([1,2,3,4,5]) plt.plot(a) plt.show() ```", "best_answers_score":0.8, "library_name":"matplotlib", "question_url":"https:\/\/stackoverflow.com\/questions\/36431496\/typeerror-invalid-dimensions-for-image-data-when-plotting-array-with-imshow", "best_answers_votes":77, "question_length":1348, "response_length":1463 }, { "question":"GeoPandas Label Polygons Given the shape file available here: I'd like to label each polygon (county) in the map. Is this possible with GeoPandas? ``` import geopandas as gpd import matplotlib.pyplot as plt %matplotlib inline shpfile= c=gpd.read_file(shpfile) c=c.loc[c['GEOID'].isin(['26161','26093','26049','26091','26075','26125','26163','26099','26115','26065'])] c.plot() ``` Thanks in advance!", "response":"c['geometry'] is a series comprised of shapely.geometry.polygon.Polygon objects. You can verify this by checking ``` In [23]: type(c.ix[23, 'geometry']) Out[23]: shapely.geometry.polygon.Polygon ``` From the Shapely docs there is a method representative_point() that Returns a cheaply computed point that is guaranteed to be within the geometric object. Sounds ideal for a situation in which you need to label the polygon objects!
You can then create a new column for your geopandas dataframe, 'coords', like so ``` c['coords'] = c['geometry'].apply(lambda x: x.representative_point().coords[:]) c['coords'] = [coords[0] for coords in c['coords']] ``` Now that you have a set of coordinates pertaining to each polygon object (each county), you can annotate your plot by iterating through your dataframe ``` c.plot() for idx, row in c.iterrows(): plt.annotate(text=row['NAME'], xy=row['coords'], horizontalalignment='center') ```", "best_answers_score":0.8, "library_name":"matplotlib", "question_url":"https:\/\/stackoverflow.com\/questions\/38899190\/geopandas-label-polygons", "best_answers_votes":89, "question_length":399, "response_length":924 }, { "question":"Get Rid of Tick Labels for all subplots Is there a way to get rid of tick labels altogether when creating an array of subplots in Matplotlib? I am currently needing to specify each plot based on the row and column of a larger data set to which the plot corresponds. I've attempted to use the ax.set_xticks([]) and the similar y-axis command, to no avail. I recognize that it's probably an unusual request to want to make a plot with no axis data whatsoever, but that's what I need. And I need it to automatically apply to all of the subplots in the array.", "response":"You have the right method. Maybe you are not applying the set_xticks to the correct axes. An example: ``` import matplotlib.pyplot as plt import numpy as np ncols = 5 nrows = 3 # create the plots fig = plt.figure() axes = [ fig.add_subplot(nrows, ncols, r * ncols + c + 1) for r in range(0, nrows) for c in range(0, ncols) ] # add some data for ax in axes: ax.plot(np.random.random(10), np.random.random(10), '.') # remove the x and y ticks for ax in axes: ax.set_xticks([]) ax.set_yticks([]) ``` This gives: Note that each axis instance is stored in a list (axes) and then they can be easily manipulated.
As usual, there are several ways of doing this, this is just an example.", "best_answers_score":0.8, "library_name":"matplotlib", "question_url":"https:\/\/stackoverflow.com\/questions\/25124143\/get-rid-of-tick-labels-for-all-subplots", "best_answers_votes":69, "question_length":555, "response_length":674 }, { "question":"bbox_to_anchor and loc in matplotlib I came across matplotlib code which customizes legend location using keywords loc and bbox_to_anchor. For example : ``` fig.legend([line1, line2], ['series1', 'series2'], bbox_to_anchor=[0.5, 0.5], loc='center', ncol=2) ``` I have seen variation of above where bbox_to_anchor is used after loc. I understand the purpose of using bbox_to_anchor and loc separately. However, is there any benefit of using both in the same legend specification? From my understanding and usage, it appears to me that if bbox_to_anchor is specified, then the loc parameter is pretty much don't care. Can anyone confirm this? I don't see any documentation regarding this.", "response":"When bbox_to_anchor and loc are used together, the loc argument will inform matplotlib which part of the bounding box of the legend should be placed at the arguments of bbox_to_anchor. For example (I've simplified the command a bit), the three options below will produce different locations for your legend, ``` fig.legend([line1], ['series1'], bbox_to_anchor=[0.5, 0.5], loc='center') fig.legend([line1], ['series1'], bbox_to_anchor=[0.5, 0.5], loc='center left') fig.legend([line1], ['series1'], bbox_to_anchor=[0.5, 0.5], loc='center right') ``` The first command will put the center of the bounding box at axes coordinates 0.5,0.5. The second will put the center left edge of the bounding box at the same coordinates (i.e. shift the legend to the right). Finally, the third option will put the center right edge of the bounding box at the coordinates (i.e. 
shift the legend to the left).", "best_answers_score":0.8, "library_name":"matplotlib", "question_url":"https:\/\/stackoverflow.com\/questions\/25068384\/bbox-to-anchor-and-loc-in-matplotlib", "best_answers_votes":76, "question_length":686, "response_length":891 }, { "question":"Dynamically add\/create subplots in matplotlib I want to create a plot consisting of several subplots with shared x\/y axes. It should look something like this from the documentation (though my subplots will be scatter plots): (code here) But I want to create the subplots dynamically! So the number of subplots depends on the output of a previous function. (It will probably be around 3 to 15 subplots per diagram, each from a distinct dataset, depending on the input of my script.) Can anyone tell me how to accomplish that?", "response":"Suppose you know total subplots and total columns you want to use: ``` import matplotlib.pyplot as plt # Subplots are organized in a Rows x Cols Grid # Tot and Cols are known Tot = number_of_subplots Cols = number_of_columns # Compute Rows required Rows = Tot \/\/ Cols # EDIT for correct number of rows: # If one additional row is necessary -> add one: if Tot % Cols != 0: Rows += 1 # Create a Position index Position = range(1,Tot + 1) ``` The first assignment to Rows accounts only for rows completely filled by subplots; one more row is then added if between 1 and Cols - 1 subplots still need a place. Then create the figure and add the subplots with a for loop.
``` # Create main figure fig = plt.figure(1) for k in range(Tot): # add every single subplot to the figure with a for loop ax = fig.add_subplot(Rows,Cols,Position[k]) ax.plot(x,y) # Or whatever you want in the subplot plt.show() ``` Please note that you need the range Position to move the subplots into the right place.", "best_answers_score":0.8, "library_name":"matplotlib", "question_url":"https:\/\/stackoverflow.com\/questions\/12319796\/dynamically-add-create-subplots-in-matplotlib", "best_answers_votes":46, "question_length":523, "response_length":973 }, { "question":"How to add percentages on top of grouped bars Given the following count plot how do I place percentages on top of the bars? ``` import seaborn as sns sns.set(style=\"darkgrid\") titanic = sns.load_dataset(\"titanic\") ax = sns.countplot(x=\"class\", hue=\"who\", data=titanic) ``` For example for \"First\" I want total First men\/total First, total First women\/total First, and total First children\/total First on top of their respective bars.", "response":"The seaborn.catplot organizing function returns a FacetGrid, which gives you access to the fig, the ax, and its patches. If you add the labels when nothing else has been plotted you know which bar-patches came from which variables. From @LordZsolt's answer I picked up the order argument to catplot: I like making that explicit because now we aren't relying on the barplot function using the order we think of as default. 
``` import seaborn as sns from itertools import product titanic = sns.load_dataset(\"titanic\") class_order = ['First','Second','Third'] hue_order = ['child', 'man', 'woman'] bar_order = product(class_order, hue_order) catp = sns.catplot(data=titanic, kind='count', x='class', hue='who', order = class_order, hue_order = hue_order ) # As long as we haven't plotted anything else into this axis, # we know the rectangles in it are our barplot bars # and we know the order, so we can match up graphic and calculations: spots = zip(catp.ax.patches, bar_order) for spot in spots: class_total = len(titanic[titanic['class']==spot[1][0]]) class_who_total = len(titanic[(titanic['class']==spot[1][0]) & (titanic['who']==spot[1][1])]) height = spot[0].get_height() catp.ax.text(spot[0].get_x(), height+3, '{:1.2f}'.format(class_who_total\/class_total)) #checking the patch order, not for final: #catp.ax.text(spot[0].get_x(), -3, spot[1][0][0]+spot[1][1][0]) ``` produces An alternate approach is to do the sub-summing explicitly, e.g. with the excellent pandas, and plot with matplotlib, and also do the styling yourself. (Though you can get quite a lot of styling from sns context even when using matplotlib plotting functions. Try it out -- )", "best_answers_score":0.8, "library_name":"matplotlib", "question_url":"https:\/\/stackoverflow.com\/questions\/31749448\/how-to-add-percentages-on-top-of-grouped-bars", "best_answers_votes":70, "question_length":433, "response_length":1656 }, { "question":"determine matplotlib axis size in pixels Given a set of axes in matplotlib, is there a way to determine its size in pixels? I need to scale things according to adjust for larger or smaller figures. (In particular I want to change the linewidth so it is proportionate for the axes size.)", "response":"This gives the width and height in inches. 
``` bbox = ax.get_window_extent().transformed(fig.dpi_scale_trans.inverted()) width, height = bbox.width, bbox.height ``` That probably suffices for your purpose, but to get pixels, you can multiply by fig.dpi: ``` width *= fig.dpi height *= fig.dpi ``` For example, ``` import matplotlib.pyplot as plt def get_ax_size(ax): bbox = ax.get_window_extent().transformed(fig.dpi_scale_trans.inverted()) width, height = bbox.width, bbox.height width *= fig.dpi height *= fig.dpi return width, height fig, ax = plt.subplots() print(get_ax_size(ax)) #(496.0, 384.00000000000006) ax2 = plt.axes([0.3, 0.3, 0.7, 0.7]) print(get_ax_size(ax2)) # (448.0, 336.0) ``` To make an image of exactly that figure size, you have to remove whitespace between the figure and the axis: ``` import numpy as np import matplotlib.pyplot as plt def get_ax_size(ax): bbox = ax.get_window_extent().transformed(fig.dpi_scale_trans.inverted()) width, height = bbox.width, bbox.height width *= fig.dpi height *= fig.dpi return width, height data = np.arange(9).reshape((3, 3)) fig = plt.figure(figsize=(8,6), dpi=80) ax = plt.Axes(fig, [0., 0., 1., 1.]) ax.set_axis_off() fig.add_axes(ax) ax.imshow(data, aspect='equal') print(get_ax_size(ax)) # (640.0, 480.0) plt.savefig('\/tmp\/test.png', dpi=80) ``` ``` % identify \/tmp\/test.png \/tmp\/test.png PNG 640x480 640x480+0+0 8-bit DirectClass 50.5KB 0.020u 0:00.020 ```", "best_answers_score":0.8, "library_name":"matplotlib", "question_url":"https:\/\/stackoverflow.com\/questions\/19306510\/determine-matplotlib-axis-size-in-pixels", "best_answers_votes":78, "question_length":286, "response_length":1423 }, { "question":"matplotlib backends - do I care? 
``` >>> import matplotlib >>> print matplotlib.rcsetup.all_backends [u'GTK', u'GTKAgg', u'GTKCairo', u'MacOSX', u'Qt4Agg', u'Qt5Agg', u'TkAgg', u'WX', u'WXAgg', u'CocoaAgg', u'GTK3Cairo', u'GTK3Agg', u'WebAgg', u'nbAgg', u'agg', u'cairo', u'emf', u'gdk', u'pdf', u'pgf', u'ps', u'svg', u'template'] ``` Look at all those backends! Do I need to care which backend is in use? e.g. if I develop and test my stuff using only the TkAgg backend, and someone else using my code might be using the GTKAgg backend on their system, might my stuff break for them in a way that I won't have noticed - or are all backends required to more or less \"work\" the same way?", "response":"The backend mainly matters if you're embedding matplotlib in an application, in which case you need to use a backend (GTK, Qt, TkInter, WxWindows) which matches the toolkit you're using to build your application. If you're also using matplotlib in a simple interactive way, you'll also want to use a backend which matches what is available on your machine (GTK if you're running Gnome, Qt if you're running KDE, etc), although most libs are already installed on most machines. The drawing layer part of the backend (Cairo, Agg...) also matters in terms of functionality: you can choose it depending on what that layer provides compared to what your application needs (anti-aliasing, alpha channel, export formats...). So if you develop and test using TkAgg and other people run with e.g. TkCairo, some things might not work. OTOH, running with QtAgg would certainly work in a very similar way as long as you stick to the matplotlib API and don't reach into the wrapped toolkit layer.", "best_answers_score":0.8, "library_name":"matplotlib", "question_url":"https:\/\/stackoverflow.com\/questions\/7156058\/matplotlib-backends-do-i-care", "best_answers_votes":52, "question_length":679, "response_length":983 }, { "question":"How to put text outside of plots I am plotting two time series and computing various indices for them.
How to write these indices for these plots outside the plot using annotation or text in python? Below is my code ``` import matplotlib.pyplot as plt obs_graph=plt.plot(obs_df['cms'], '-r', label='Observed') plt.legend(loc='best') plt.hold(True) sim_graph=plt.plot(sim_df['cms'], '-g', label=\"Simulated\") plt.legend(loc='best') plt.ylabel('Daily Discharge (m^3\/s)') plt.xlabel('Year') plt.title('Observed vs Simulated Daily Discharge') textstr = 'NSE=%.2f\\nRMSE=%.2f\\n'%(NSE, RMSE) # print textstr plt.text(2000, 2000, textstr, fontsize=14) plt.grid(True) plt.show() ``` I want to print textstr outside the plots. Here is the current plot:", "response":"It's probably best to define the position in figure coordinates instead of data coordinates as you'd probably not want the text to change its position when changing the data. Using figure coordinates can be done either by specifying the figure transform (fig.transFigure) ``` plt.text(0.02, 0.5, textstr, fontsize=14, transform=plt.gcf().transFigure) ``` or by using the text method of the figure instead of that of the axes. ``` plt.gcf().text(0.02, 0.5, textstr, fontsize=14) ``` In both cases the coordinates to place the text are in figure coordinates, where (0,0) is the bottom left and (1,1) is the top right of the figure. At the end you still may want to provide some extra space for the text to fit next to the axes, using plt.subplots_adjust(left=0.3) or so.", "best_answers_score":0.8, "library_name":"matplotlib", "question_url":"https:\/\/stackoverflow.com\/questions\/42435446\/how-to-put-text-outside-of-plots", "best_answers_votes":117, "question_length":740, "response_length":768 }, { "question":"How to increase\/reduce the fontsize of x and y tick labels [duplicate] This question already has answers here: How to change tick label font size (10 answers) Closed 1 year ago. I seem to have a problem in figuring out how to increase or decrease the fontsize of both the x and y tick labels while using matplotlib.
I am aware that there is the set_xticklabels(labels, fontdict=None, minor=False, **kwargs) function, but I failed to understand how to control the fontsize in it. I expected something somehow explicit, like ``` title_string=('My Title') plt.suptitle(title_string, y=1.0, fontsize=17) ``` but I haven't found anything like that so far. What am I missing?", "response":"You can set the fontsize directly in the call to set_xticklabels and set_yticklabels (as noted in previous answers). This will only affect one Axes at a time. ``` ax.set_xticklabels(x_ticks, rotation=0, fontsize=8) ax.set_yticklabels(y_ticks, rotation=0, fontsize=8) ``` Note this method should only be used if you are fixing the positions of the ticks first (e.g. using ax.set_xticks). If you are not changing the tick positions from the default ones, you can just change the font size of the tick labels without changing the text using ax.tick_params ``` ax.tick_params(axis='x', labelsize=8) ax.tick_params(axis='y', labelsize=8) ``` or ``` ax.tick_params(axis='both', labelsize=8) ``` You can also set the ticklabel font size globally (i.e. 
for all figures\/subplots in a script) using rcParams: ``` import matplotlib.pyplot as plt plt.rc('xtick',labelsize=8) plt.rc('ytick',labelsize=8) ``` Or, equivalently: ``` plt.rcParams['xtick.labelsize']=8 plt.rcParams['ytick.labelsize']=8 ``` Finally, if this is a setting that you would like to be set for all your matplotlib plots, you could also set these two rcParams in your matplotlibrc file: ``` xtick.labelsize : 8 # fontsize of the x tick labels ytick.labelsize : 8 # fontsize of the y tick labels ```", "best_answers_score":0.8, "library_name":"matplotlib", "question_url":"https:\/\/stackoverflow.com\/questions\/34001751\/how-to-increase-reduce-the-fontsize-of-x-and-y-tick-labels", "best_answers_votes":77, "question_length":669, "response_length":1256 }, { "question":"How to plot vectors in python using matplotlib I am taking a course on linear algebra and I want to visualize the vectors in action, such as vector addition, normal vector, so on. For instance: ``` V = np.array([[1,1],[-2,2],[4,-7]]) ``` In this case I want to plot 3 vectors V1 = (1,1), M2 = (-2,2), M3 = (4,-7). Then I should be able to add V1,V2 to plot a new vector V12(all together in one figure). 
when I use the following code, the plot is not as intended ``` import numpy as np import matplotlib.pyplot as plt M = np.array([[1,1],[-2,2],[4,-7]]) print(\"vector:1\") print(M[0,:]) # print(\"vector:2\") # print(M[1,:]) rows,cols = M.T.shape print(cols) for i,l in enumerate(range(0,cols)): print(\"Iteration: {}-{}\".format(i,l)) print(\"vector:{}\".format(i)) print(M[i,:]) v1 = [0,0],[M[i,0],M[i,1]] # v1 = [M[i,0]],[M[i,1]] print(v1) plt.figure(i) plt.plot(v1) plt.show() ```", "response":"How about something like ``` import numpy as np import matplotlib.pyplot as plt V = np.array([[1,1], [-2,2], [4,-7]]) origin = np.array([[0, 0, 0],[0, 0, 0]]) # origin point plt.quiver(*origin, V[:,0], V[:,1], color=['r','b','g'], scale=21) plt.show() ``` Then to add up any two vectors and plot them to the same figure, do so before you call plt.show(). Something like: ``` plt.quiver(*origin, V[:,0], V[:,1], color=['r','b','g'], scale=21) v12 = V[0] + V[1] # adding up the 1st (red) and 2nd (blue) vectors plt.quiver(*origin, v12[0], v12[1], scale=21) plt.show() ``` NOTE: in Python2 use origin[0], origin[1] instead of *origin", "best_answers_score":0.8, "library_name":"matplotlib", "question_url":"https:\/\/stackoverflow.com\/questions\/42281966\/how-to-plot-vectors-in-python-using-matplotlib", "best_answers_votes":77, "question_length":876, "response_length":630 }, { "question":"Putting arrowheads on vectors in a 3d plot I plotted the eigenvectors of some 3D-data and was wondering if there is currently (already) a way to put arrowheads on the lines? Would be awesome if someone has a tip for me. 
``` import numpy as np from matplotlib import pyplot as plt from mpl_toolkits.mplot3d import Axes3D #################################################### # This part is just for reference if # you are interested where the data is # coming from # The plot is at the bottom ##################################################### # Generate some example data mu_vec1 = np.array([0,0,0]) cov_mat1 = np.array([[1,0,0],[0,1,0],[0,0,1]]) class1_sample = np.random.multivariate_normal(mu_vec1, cov_mat1, 20) mu_vec2 = np.array([1,1,1]) cov_mat2 = np.array([[1,0,0],[0,1,0],[0,0,1]]) class2_sample = np.random.multivariate_normal(mu_vec2, cov_mat2, 20) # concatenate data for PCA samples = np.concatenate((class1_sample, class2_sample), axis=0) # mean values mean_x = mean(samples[:,0]) mean_y = mean(samples[:,1]) mean_z = mean(samples[:,2]) #eigenvectors and eigenvalues eig_val, eig_vec = np.linalg.eig(cov_mat) ################################ #plotting eigenvectors ################################ fig = plt.figure(figsize=(15,15)) ax = fig.add_subplot(111, projection='3d') ax.plot(samples[:,0], samples[:,1], samples[:,2], 'o', markersize=10, color='green', alpha=0.2) ax.plot([mean_x], [mean_y], [mean_z], 'o', markersize=10, color='red', alpha=0.5) for v in eig_vec: ax.plot([mean_x, v[0]], [mean_y, v[1]], [mean_z, v[2]], color='red', alpha=0.8, lw=3) ax.set_xlabel('x_values') ax.set_ylabel('y_values') ax.set_zlabel('z_values') plt.title('Eigenvectors') plt.draw() plt.show() ```", "response":"To add arrow patches to a 3D plot, the simple solution is to use FancyArrowPatch class defined in \/matplotlib\/patches.py. However, it only works for 2D plot (at the time of writing), as its posA and posB are supposed to be tuples of length 2. Therefore we create a new arrow patch class, name it Arrow3D, which inherits from FancyArrowPatch. The only thing we need to override its posA and posB. To do that, we initiate Arrow3d with posA and posB of (0,0)s. 
The 3D coordinates xs, ys, zs were then projected from 3D to 2D using proj3d.proj_transform(), and the resulting 2D coordinates get assigned to posA and posB using the .set_positions() method, replacing the (0,0)s. This way we get the 3D arrow to work. The projection steps go into the .draw method, which overrides the .draw method of the FancyArrowPatch object. This might appear like a hack. However, the mplot3d currently only provides (again, only) simple 3D plotting capacity by supplying 3D-2D projections and essentially does all the plotting in 2D, which is not truly 3D. ``` import numpy as np from numpy import * from matplotlib import pyplot as plt from mpl_toolkits.mplot3d import Axes3D from matplotlib.patches import FancyArrowPatch from mpl_toolkits.mplot3d import proj3d class Arrow3D(FancyArrowPatch): def __init__(self, xs, ys, zs, *args, **kwargs): FancyArrowPatch.__init__(self, (0,0), (0,0), *args, **kwargs) self._verts3d = xs, ys, zs def draw(self, renderer): xs3d, ys3d, zs3d = self._verts3d xs, ys, zs = proj3d.proj_transform(xs3d, ys3d, zs3d, renderer.M) self.set_positions((xs[0],ys[0]),(xs[1],ys[1])) FancyArrowPatch.draw(self, renderer) #################################################### # This part is just for reference if # you are interested where the data is # coming from # The plot is at the bottom ##################################################### # Generate some example data mu_vec1 = np.array([0,0,0]) cov_mat1 = np.array([[1,0,0],[0,1,0],[0,0,1]]) class1_sample = np.random.multivariate_normal(mu_vec1, cov_mat1, 20) mu_vec2 = np.array([1,1,1]) cov_mat2 = np.array([[1,0,0],[0,1,0],[0,0,1]]) class2_sample = np.random.multivariate_normal(mu_vec2, cov_mat2, 20) ``` Actual drawing.
Note that we only need to change one line of your code, which adds a new arrow artist: ``` # concatenate data for PCA samples = np.concatenate((class1_sample, class2_sample), axis=0) # mean values mean_x = mean(samples[:,0]) mean_y = mean(samples[:,1]) mean_z = mean(samples[:,2]) #eigenvectors and eigenvalues eig_val, eig_vec = np.linalg.eig(cov_mat1) ################################ #plotting eigenvectors ################################ fig = plt.figure(figsize=(15,15)) ax = fig.add_subplot(111, projection='3d') ax.plot(samples[:,0], samples[:,1], samples[:,2], 'o', markersize=10, color='g', alpha=0.2) ax.plot([mean_x], [mean_y], [mean_z], 'o', markersize=10, color='red', alpha=0.5) for v in eig_vec: #ax.plot([mean_x,v[0]], [mean_y,v[1]], [mean_z,v[2]], color='red', alpha=0.8, lw=3) #I will replace this line with: a = Arrow3D([mean_x, v[0]], [mean_y, v[1]], [mean_z, v[2]], mutation_scale=20, lw=3, arrowstyle=\"-|>\", color=\"r\") ax.add_artist(a) ax.set_xlabel('x_values') ax.set_ylabel('y_values') ax.set_zlabel('z_values') plt.title('Eigenvectors') plt.draw() plt.show() ``` Please check this post, which inspired this question, for further details.", "best_answers_score":0.8, "library_name":"matplotlib", "question_url":"https:\/\/stackoverflow.com\/questions\/22867620\/putting-arrowheads-on-vectors-in-a-3d-plot", "best_answers_votes":80, "question_length":1701, "response_length":3344 }, { "question":"How to add group labels for bar charts I want to plot data of the following form, using matplotlib bar plot: ``` data = {'Room A': {'Shelf 1': {'Milk': 10, 'Water': 20}, 'Shelf 2': {'Sugar': 5, 'Honey': 6} }, 'Room B': {'Shelf 1': {'Wheat': 4, 'Corn': 7}, 'Shelf 2': {'Chicken': 2, 'Cow': 1} } } ``` The bar chart is supposed to look like this: The bar groups should be visible from the labels on the x axis.
Is there any way to do this with matplotlib?", "response":"Since I could not find a built-in solution for this in matplotlib, I coded my own: ``` #!\/usr\/bin\/env python from matplotlib import pyplot as plt def mk_groups(data): try: newdata = data.items() except: return thisgroup = [] groups = [] for key, value in newdata: newgroups = mk_groups(value) if newgroups is None: thisgroup.append((key, value)) else: thisgroup.append((key, len(newgroups[-1]))) if groups: groups = [g + n for n, g in zip(newgroups, groups)] else: groups = newgroups return [thisgroup] + groups def add_line(ax, xpos, ypos): line = plt.Line2D([xpos, xpos], [ypos + .1, ypos], transform=ax.transAxes, color='black') line.set_clip_on(False) ax.add_line(line) def label_group_bar(ax, data): groups = mk_groups(data) xy = groups.pop() x, y = zip(*xy) ly = len(y) xticks = range(1, ly + 1) ax.bar(xticks, y, align='center') ax.set_xticks(xticks) ax.set_xticklabels(x) ax.set_xlim(.5, ly + .5) ax.yaxis.grid(True) scale = 1. \/ ly for pos in xrange(ly + 1): # change xrange to range for python3 add_line(ax, pos * scale, -.1) ypos = -.2 while groups: group = groups.pop() pos = 0 for label, rpos in group: lxpos = (pos + .5 * rpos) * scale ax.text(lxpos, ypos, label, ha='center', transform=ax.transAxes) add_line(ax, pos * scale, ypos) pos += rpos add_line(ax, pos * scale, ypos) ypos -= .1 if __name__ == '__main__': data = {'Room A': {'Shelf 1': {'Milk': 10, 'Water': 20}, 'Shelf 2': {'Sugar': 5, 'Honey': 6} }, 'Room B': {'Shelf 1': {'Wheat': 4, 'Corn': 7}, 'Shelf 2': {'Chicken': 2, 'Cow': 1} } } fig = plt.figure() ax = fig.add_subplot(1,1,1) label_group_bar(ax, data) fig.subplots_adjust(bottom=0.3) fig.savefig('label_group_bar_example.png') ``` The mk_groups function takes a dictionary (or anything with an items() method, like collections.OrderedDict) and converts it to a data format that is then used to create the chart. 
It is basically a list of the form: ``` [ [(label, bars_to_span), ...], ..., [(tick_label, bar_value), ...] ] ``` The add_line function creates a vertical line in the subplot at the specified positions (in axes coordinates). The label_group_bar function takes a dictionary and creates the bar chart in the subplot with the labels beneath. The result from the example then looks like this. Easier or better solutions and suggestions are still very much appreciated.", "best_answers_score":0.8, "library_name":"matplotlib", "question_url":"https:\/\/stackoverflow.com\/questions\/19184484\/how-to-add-group-labels-for-bar-charts", "best_answers_votes":73, "question_length":442, "response_length":2309 }, { "question":"Scatter plot error bars (the error on each point is unique) I am attempting a scatter plot of 2 arrays for which I have a third array containing the absolute error (error in y direction) on each point. I want the error bars to be between (point a - error on a) and (point a + error on a). Is there a way of achieving this with pylab and if not any ideas on how else I could do it?", "response":"This is almost like the other answer, but you don't need a scatter plot at all; you can simply specify a scatter-plot-like format (fmt parameter) for errorbar: ``` import matplotlib.pyplot as plt x = [1, 2, 3, 4] y = [1, 4, 9, 16] e = [0.5, 1., 1.5, 2.] plt.errorbar(x, y, yerr=e, fmt='o') plt.show() ``` Result: A list of the available fmt parameters can be found for example in the plot documentation: ``` character description '-' solid line style '--' dashed line style '-.' dash-dot line style ':' dotted line style '.'
point marker ',' pixel marker 'o' circle marker 'v' triangle_down marker '^' triangle_up marker '<' triangle_left marker '>' triangle_right marker '1' tri_down marker '2' tri_up marker '3' tri_left marker '4' tri_right marker 's' square marker 'p' pentagon marker '*' star marker 'h' hexagon1 marker 'H' hexagon2 marker '+' plus marker 'x' x marker 'D' diamond marker 'd' thin_diamond marker '|' vline marker '_' hline marker ```", "best_answers_score":0.8, "library_name":"matplotlib", "question_url":"https:\/\/stackoverflow.com\/questions\/22364565\/scatter-plot-error-bars-the-error-on-each-point-is-unique", "best_answers_votes":73, "question_length":377, "response_length":926 }, { "question":"Elegantly changing the color of a plot frame in matplotlib This is a kind of follow-up question to this post, where the coloring of axes, ticks and labels was discussed. I hope it is alright to open a new, extended question for this. Changing the color of a complete frame (ticks and axes) around a double-plot (via add_subplot) with axes [ax1, ax2] results in a lot of code. This snippet changes the color of the frame of the upper plot: ``` ax1.spines['bottom'].set_color('green') ax1.spines['top'].set_color('green') ax1.spines['left'].set_color('green') ax1.spines['right'].set_color('green') for t in ax1.xaxis.get_ticklines(): t.set_color('green') for t in ax1.yaxis.get_ticklines(): t.set_color('green') for t in ax2.xaxis.get_ticklines(): t.set_color('green') for t in ax2.yaxis.get_ticklines(): t.set_color('green') ``` So for changing the frame color of two plots with two y-axes each, I would need 16(!) lines of code... This is how it looks: Other methods I dug up so far: matplotlib.rc: discussed here; changes globally, not locally. I want to have some other plots in different colors. Please, no discussions about too many colors in plots... :-) ``` matplotlib.rc('axes',edgecolor='green') ``` dig out the spines of the axis, then change it: also discussed here; not really elegant, I think.
``` for child in ax.get_children(): if isinstance(child, matplotlib.spines.Spine): child.set_color('#dddddd') ``` Is there an elegant way of condensing the above block, something more \"pythonic\"? I'm using python 2.6.5 with matplotlib 0.99.1.1 under ubuntu.", "response":"Assuming you're using a reasonably up-to-date version of matplotlib (>= 1.0), perhaps try something like this: ``` import matplotlib.pyplot as plt # Make the plot... fig, axes = plt.subplots(nrows=2) axes[0].plot(range(10), 'r-') axes[1].plot(range(10), 'bo-') # Set the borders to a given color... for ax in axes: ax.tick_params(color='green', labelcolor='green') for spine in ax.spines.values(): spine.set_edgecolor('green') plt.show() ```", "best_answers_score":0.8, "library_name":"matplotlib", "question_url":"https:\/\/stackoverflow.com\/questions\/7778954\/elegantly-changing-the-color-of-a-plot-frame-in-matplotlib", "best_answers_votes":49, "question_length":1569, "response_length":441 }, { "question":"How to adjust transparency (alpha) in seaborn pairplot? I can create a beautiful scatter plot with seaborn's regplot, obtain the right level of transparency through the scatter_kws as in ``` sns.regplot(x='logAssets', y='logLTIFR', lowess=True, data=df, scatter_kws={'alpha':0.15}, line_kws={'color': 'red'}) ``` and obtain this: Is there an option in a seaborn pairplot to tweak transparency?", "response":"Ok I was very close to the solution. Seaborn pairplots have plot_kws that takes as arguments a dictionary of the kind of modifications you would do in a regplot.
The following line is exactly what I needed: ``` g = sns.pairplot(df, kind='reg', plot_kws={'line_kws':{'color':'red'}, 'scatter_kws': {'alpha': 0.1}}) ``` And this is the outcome: If you don't do the regression but just the scatter plot (kind='scatter'), within plot keywords you don't have to do the division between line and scatter keywords: ``` g = sns.pairplot(df, kind='scatter', plot_kws={'alpha':0.1}) ```", "best_answers_score":0.8, "library_name":"matplotlib", "question_url":"https:\/\/stackoverflow.com\/questions\/47200033\/how-to-adjust-transparency-alpha-in-seaborn-pairplot", "best_answers_votes":85, "question_length":389, "response_length":576 }, { "question":"Seaborn multiple barplots I have a pandas dataframe that looks like this: ``` class men woman children 0 first 0.91468 0.667971 0.660562 1 second 0.30012 0.329380 0.882608 2 third 0.11899 0.189747 0.121259 ``` How would I create a plot using seaborn that looks like this? Do I have to rearrange my data in some way?", "response":"Tested in python 3.12.0, pandas 2.1.1, matplotlib 3.8.0, seaborn 0.13.0 Reshape the DataFrame with pandas.DataFrame.melt or pandas.melt: ``` import pandas as pd import seaborn as sns import matplotlib.pyplot as plt # convert the dataframe to a long format dfm = pd.melt(df, id_vars=\"class\", var_name=\"sex\", value_name=\"survival rate\") dfm Out: class sex survival rate 0 first men 0.914680 1 second men 0.300120 2 third men 0.118990 3 first woman 0.667971 4 second woman 0.329380 5 third woman 0.189747 6 first children 0.660562 7 second children 0.882608 8 third children 0.121259 ``` Consolidate the plot by creating a single facet with grouped bars, instead of multiple facets with single bars.
Plot with the figure-level method sns.catplot ``` g = sns.catplot(x='class', y='survival rate', hue='sex', data=dfm, kind='bar', height=5, aspect=1) ``` Plot with the axes-level method sns.barplot ``` # the following code matches the plot produced by catplot plt.figure(figsize=(5, 5)) ax = sns.barplot(x='class', y='survival rate', hue='sex', data=dfm) ax.spines[['top', 'right']].set_visible(False) sns.move_legend(ax, bbox_to_anchor=(1, 0.5), loc='center left', frameon=False) ``` Deprecated factorplot (v0.8.1 or earlier): ``` sns.factorplot(x='class', y='survival rate', hue='sex', data=df, kind='bar') ```", "best_answers_score":0.8, "library_name":"matplotlib", "question_url":"https:\/\/stackoverflow.com\/questions\/38807895\/seaborn-multiple-barplots", "best_answers_votes":90, "question_length":349, "response_length":1308 }, { "question":"How to draw horizontal grid only (using pandas plot + pyplot) I would like to get only horizontal grid using pandas plot. The integrated parameter of pandas only has grid=True or grid=False, so I tried with matplotlib pyplot, changing the axes parameters, specifically with this code: ```py import pandas as pd import matplotlib.pyplot as plt fig = plt.figure() ax2 = plt.subplot() ax2.grid(axis='x') df.plot(kind='bar',ax=ax2, fontsize=10, sort_columns=True) plt.show(fig) ``` But I get no grid, neither horizontal nor vertical. Is Pandas overwriting the axes? Or am I doing something wrong?", "response":"Try setting the grid after plotting the DataFrame. Also, to get the horizontal grid, you need to use ax2.grid(axis='y'). Below is an answer using a sample DataFrame. I have restructured how you define ax2 by making use of subplots. 
``` import pandas as pd import matplotlib.pyplot as plt df = pd.DataFrame({'lab':['A', 'B', 'C'], 'val':[10, 30, 20]}) fig, ax2 = plt.subplots() df.plot(kind='bar',ax=ax2, fontsize=10, sort_columns=True) ax2.grid(axis='y') plt.show() ``` Alternatively, you can also do the following: Use the axis object returned from the DataFrame plot directly to turn on the horizontal grid ``` fig = plt.figure() ax2 = df.plot(kind='bar', fontsize=10, sort_columns=True) ax2.grid(axis='y') ``` Third option as suggested by @ayorgo in the comments is to chain the two commands as ``` df.plot(kind='bar',ax=ax2, fontsize=10, sort_columns=True).grid(axis='y') ```", "best_answers_score":0.8, "library_name":"matplotlib", "question_url":"https:\/\/stackoverflow.com\/questions\/54714018\/how-to-draw-horizontal-grid-only-using-pandas-plot-pyplot", "best_answers_votes":100, "question_length":592, "response_length":879 }, { "question":"Rotate xtick labels in seaborn boxplot? I have a question that is basically the same as a question back from 2014 (see here). However, my script still throws an error. Here is what I do: I have a pandas dataframe with a few columns. I plot a simple boxplot comparison. ``` g = sns.boxplot(x='categories', y='oxygen', hue='target', data=df) g.set_xticklabels(rotation=30) ``` The graph looks like this: I'd like to rotate the x-labels by 30 degrees. Hence I use g.set_xticklabels(rotation=30). However, I get the following error: set_xticklabels() missing 1 required positional argument: 'labels' I don't know how to pass the matplotlib labels argument to seaborns sns.boxplot. Any ideas?", "response":"The question you link to uses a factorplot. A factorplot returns its own class which has a method called set_xticklabels(rotation). This is different from the set_xticklabels method of the matplotlib Axes. 
In the linked question's answers there are also other options which you may use ``` ax = sns.boxplot(x='categories', y='oxygen', hue='target', data=df) ax.set_xticklabels(ax.get_xticklabels(),rotation=30) ``` or ``` ax = sns.boxplot(x='categories', y='oxygen', hue='target', data=df) plt.setp(ax.get_xticklabels(), rotation=45) ```", "best_answers_score":0.8, "library_name":"matplotlib", "question_url":"https:\/\/stackoverflow.com\/questions\/44954123\/rotate-xtick-labels-in-seaborn-boxplot", "best_answers_votes":90, "question_length":687, "response_length":537 }, { "question":"How to space overlapping annotations I want to annotate the bars in a graph with some text but if the bars are close together and have comparable height, the annotations are above ea. other and thus hard to read (the coordinates for the annotations were taken from the bar position and height). Is there a way to shift one of them if there is a collision? Edit: The bars are very thin and very close sometimes so just aligning vertically doesn't solve the problem... A picture might clarify things:", "response":"I've written a quick solution, which checks each annotation position against default bounding boxes for all the other annotations. If there is a collision it changes its position to the next available collision free place. It also puts in nice arrows. 
For a fairly extreme example, it will produce this (none of the numbers overlap): Instead of this: Here is the code: ``` import numpy as np import matplotlib.pyplot as plt from numpy.random import * def get_text_positions(x_data, y_data, txt_width, txt_height): a = zip(y_data, x_data) text_positions = y_data.copy() for index, (y, x) in enumerate(a): local_text_positions = [i for i in a if i[0] > (y - txt_height) and (abs(i[1] - x) < txt_width * 2) and i != (y, x)] if local_text_positions: sorted_ltp = sorted(local_text_positions) if abs(sorted_ltp[0][0] - y) < txt_height: #True == collision differ = np.diff(sorted_ltp, axis=0) a[index] = (sorted_ltp[-1][0] + txt_height, a[index][1]) text_positions[index] = sorted_ltp[-1][0] + txt_height for k, (j, m) in enumerate(differ): #j is the vertical distance between words if j > txt_height * 2: #if True then room to fit a word in a[index] = (sorted_ltp[k][0] + txt_height, a[index][1]) text_positions[index] = sorted_ltp[k][0] + txt_height break return text_positions def text_plotter(x_data, y_data, text_positions, axis,txt_width,txt_height): for x,y,t in zip(x_data, y_data, text_positions): axis.text(x - txt_width, 1.01*t, '%d'%int(y),rotation=0, color='blue') if y != t: axis.arrow(x, t,0,y-t, color='red',alpha=0.3, width=txt_width*0.1, head_width=txt_width, head_length=txt_height*0.5, zorder=0,length_includes_head=True) ``` Here is the code producing these plots, showing the usage: ``` #random test data: x_data = random_sample(100) y_data = random_integers(10,50,(100)) #GOOD PLOT: fig2 = plt.figure() ax2 = fig2.add_subplot(111) ax2.bar(x_data, y_data,width=0.00001) #set the bbox for the text. Increase txt_width for wider text. txt_height = 0.04*(plt.ylim()[1] - plt.ylim()[0]) txt_width = 0.02*(plt.xlim()[1] - plt.xlim()[0]) #Get the corrected text positions, then write the text.
text_positions = get_text_positions(x_data, y_data, txt_width, txt_height) text_plotter(x_data, y_data, text_positions, ax2, txt_width, txt_height) plt.ylim(0,max(text_positions)+2*txt_height) plt.xlim(-0.1,1.1) #BAD PLOT: fig = plt.figure() ax = fig.add_subplot(111) ax.bar(x_data, y_data, width=0.0001) #write the text: for x,y in zip(x_data, y_data): ax.text(x - txt_width, 1.01*y, '%d'%int(y),rotation=0) plt.ylim(0,max(text_positions)+2*txt_height) plt.xlim(-0.1,1.1) plt.show() ```", "best_answers_score":0.8, "library_name":"matplotlib", "question_url":"https:\/\/stackoverflow.com\/questions\/8850142\/how-to-space-overlapping-annotations", "best_answers_votes":63, "question_length":498, "response_length":2195 }, { "question":"Remove the x-axis ticks while keeping the grids (matplotlib) [duplicate] This question already has answers here: Hiding axis text in matplotlib plots (13 answers) Closed 11 years ago. I want to remove the ticks on the x-axis but keep the vertical grids. When I do the following I lose both x-axis ticks as well as the grid. ``` import matplotlib.pyplot as plt fig = plt.figure() figr = fig.add_subplot(211) ... figr.axes.get_xaxis().set_visible(False) figr.xaxis.grid(True) ``` How can I retain the grid while making x-axis ticks invisible?", "response":"By removing the ticks, do you mean remove the tick labels or the ticks themselves?
This will remove the labels: ``` import matplotlib.pyplot as plt import numpy as np x = np.linspace(0, 2*np.pi, 100) fig = plt.figure() ax = fig.add_subplot(111) ax.plot(x, np.sin(x)) ax.grid(True) ax.set_xticklabels([]) plt.show() ``` If you really want to get rid of the little tick lines, you can add this: ``` for tic in ax.xaxis.get_major_ticks(): tic.tick1On = tic.tick2On = False ``` You could turn the tick labels off here too without resorting to the ax.set_xticklabels([]) \"hack\" by setting tic.label1On = tic.label2On = False: ``` import matplotlib.pyplot as plt import numpy as np x = np.linspace(0, 2*np.pi, 100) fig = plt.figure() ax = fig.add_subplot(111) ax.plot(x, np.sin(x)) ax.grid(True) for tick in ax.xaxis.get_major_ticks(): tick.tick1line.set_visible(False) tick.tick2line.set_visible(False) tick.label1.set_visible(False) tick.label2.set_visible(False) plt.show() ```", "best_answers_score":0.8, "library_name":"matplotlib", "question_url":"https:\/\/stackoverflow.com\/questions\/20416609\/remove-the-x-axis-ticks-while-keeping-the-grids-matplotlib", "best_answers_votes":86, "question_length":542, "response_length":974 }, { "question":"Arrow on a line plot I'd like to add an arrow to a line plot with matplotlib like in the plot below (drawn with pgfplots). How can I do (position and direction of the arrow should be parameters ideally)? Here is some code to experiment. ``` from matplotlib import pyplot import numpy as np t = np.linspace(-2, 2, 100) plt.plot(t, np.sin(t)) plt.show() ``` Thanks.", "response":"In my experience this works best by using annotate. Thereby you avoid the weird warping you get with ax.arrow which is somehow hard to control. EDIT: I've wrapped it into a little function. ``` from matplotlib import pyplot as plt import numpy as np def add_arrow(line, position=None, direction='right', size=15, color=None): \"\"\" add an arrow to a line. line: Line2D object position: x-position of the arrow. 
If None, mean of xdata is taken direction: 'left' or 'right' size: size of the arrow in fontsize points color: if None, line color is taken. \"\"\" if color is None: color = line.get_color() xdata = line.get_xdata() ydata = line.get_ydata() if position is None: position = xdata.mean() # find closest index start_ind = np.argmin(np.absolute(xdata - position)) if direction == 'right': end_ind = start_ind + 1 else: end_ind = start_ind - 1 line.axes.annotate('', xytext=(xdata[start_ind], ydata[start_ind]), xy=(xdata[end_ind], ydata[end_ind]), arrowprops=dict(arrowstyle=\"->\", color=color), size=size ) t = np.linspace(-2, 2, 100) y = np.sin(t) # return the handle of the line line = plt.plot(t, y)[0] add_arrow(line) plt.show() ``` It's not very intuitive but it works. You can then fiddle with the arrowprops dictionary until it looks right.", "best_answers_score":0.8, "library_name":"matplotlib", "question_url":"https:\/\/stackoverflow.com\/questions\/34017866\/arrow-on-a-line-plot", "best_answers_votes":37, "question_length":363, "response_length":1249 }, { "question":"why does my colorbar have lines in it? Edit: Since this seems to be a popular post, here's the solution that seems to be working well for me. Thanks @gazzar and @mfra. ``` cbar.solids.set_rasterized(True) cbar.solids.set_edgecolor(\"face\") ``` Does anyone know why my colorbar has what appear to be lines in it? Or rather why is the color transition not smooth? I'm using basemap, obviously, but that shouldn't matter since it's all matplotlib calls under the hood AFAICT. I create the map doing something like ``` grays = plt.cm.get_cmap(\"Grays\") sc = mymap.scatter(xpoints, ypoints, s=sizes, c=color_values, cmap=grays, alpha=.75, marker=\"o\", zorder=10, vmin=0, vmax=1) cbar = mymap.colorbar(sc, drawedges=True, location=\"bottom\") ``` I tried without and without alpha and the result was the same. Maybe it is because my color_values array is not fine enough? 
Can I set the underlying values that are mapped to the colorbar somewhere? I don't see how, and I don't see this problem elsewhere. Ie., I can replicate the matplotlib show_colorbars example without this problem.", "response":"In case you create vector graphics, have you tried this (taken from http:\/\/matplotlib.org\/api\/pyplot_api.html?highlight=colorbar#matplotlib.pyplot.colorbar): \"It is known that some vector graphics viewer (svg and pdf) renders white gaps between segments of the colorbar. This is due to bugs in the viewers not matplotlib. As a workaround the colorbar can be rendered with overlapping segments: ``` cbar = colorbar() cbar.solids.set_edgecolor(\"face\") draw() ``` However this has negative consequences in other circumstances. Particularly with semi transparent images (alpha < 1) and colorbar extensions and is not enabled by default see (issue #1188).\"", "best_answers_score":0.8, "library_name":"matplotlib", "question_url":"https:\/\/stackoverflow.com\/questions\/15003353\/why-does-my-colorbar-have-lines-in-it", "best_answers_votes":33, "question_length":1073, "response_length":651 }, { "question":"ubuntu 14.04, pip cannot upgrade matplotllib When I try to upgrade my matplotlib using pip, it outputs: ```none Downloading\/unpacking matplotlib from https:\/\/pypi.python.org\/packages\/source\/m\/matplotlib\/matplotlib-1.4.0.tar.gz#md5=1daf7f2123d94745feac1a30b210940c Downloading matplotlib-1.4.0.tar.gz (51.2MB): 51.2MB downloaded Running setup.py (path:\/tmp\/pip_build_root\/matplotlib\/setup.py) egg_info for package matplotlib ============================================================================ Edit setup.cfg to change the build options BUILDING MATPLOTLIB matplotlib: yes [1.4.0] python: yes [2.7.6 (default, Mar 22 2014, 22:59:38) [GCC 4.8.2]] platform: yes [linux2] REQUIRED DEPENDENCIES AND EXTENSIONS numpy: yes [version 1.8.2] six: yes [using six version 1.7.3] dateutil: yes [using dateutil version 2.2] tornado: yes [using tornado 
version 4.0.1] pyparsing: yes [using pyparsing version 2.0.2] pycxx: yes [Couldn't import. Using local copy.] libagg: yes [pkg-config information for 'libagg' could not be found. Using local copy.] Traceback (most recent call last): File \"\", line 17, in File \"\/tmp\/pip_build_root\/matplotlib\/setup.py\", line 154, in result = package.check() File \"setupext.py\", line 940, in check if 'No such file or directory\\ngrep:' in version: TypeError: argument of type 'NoneType' is not iterable Complete output from command python setup.py egg_info: ============================================================================ Edit setup.cfg to change the build options BUILDING MATPLOTLIB matplotlib: yes [1.4.0] python: yes [2.7.6 (default, Mar 22 2014, 22:59:38) [GCC 4.8.2]] platform: yes [linux2] REQUIRED DEPENDENCIES AND EXTENSIONS numpy: yes [version 1.8.2] six: yes [using six version 1.7.3] dateutil: yes [using dateutil version 2.2] tornado: yes [using tornado version 4.0.1] pyparsing: yes [using pyparsing version 2.0.2] pycxx: yes [Couldn't import. Using local copy.] libagg: yes [pkg-config information for 'libagg' could not be found. Using local copy.] Traceback (most recent call last): File \"\", line 17, in File \"\/tmp\/pip_build_root\/matplotlib\/setup.py\", line 154, in result = package.check() File \"setupext.py\", line 940, in check if 'No such file or directory\\ngrep:' in version: TypeError: argument of type 'NoneType' is not iterable ---------------------------------------- Cleaning up... 
Command python setup.py egg_info failed with error code 1 in \/tmp\/pip_build_root\/matplotlib Storing debug log for failure in \/home\/username\/.pip\/pip.log ``` In the tail of the log it says: ``` Exception information: Traceback (most recent call last): File \"\/usr\/local\/lib\/python2.7\/dist-packages\/pip-1.5.6-py2.7.egg\/pip\/basecommand.py\", line 122, in main status = self.run(options, args) File \"\/usr\/local\/lib\/python2.7\/dist-packages\/pip-1.5.6-py2.7.egg\/pip\/commands\/install.py\", line 278, in run requirement_set.prepare_files(finder, force_root_egg_info=self.bundle, bundle=self.bundle) File \"\/usr\/local\/lib\/python2.7\/dist-packages\/pip-1.5.6-py2.7.egg\/pip\/req.py\", line 1229, in prepare_files req_to_install.run_egg_info() File \"\/usr\/local\/lib\/python2.7\/dist-packages\/pip-1.5.6-py2.7.egg\/pip\/req.py\", line 325, in run_egg_info command_desc='python setup.py egg_info') File \"\/usr\/local\/lib\/python2.7\/dist-packages\/pip-1.5.6-py2.7.egg\/pip\/util.py\", line 697, in call_subprocess % (command_desc, proc.returncode, cwd)) InstallationError: Command python setup.py egg_info failed with error code 1 in \/tmp\/pip_build_root\/matplotlib ``` Why did it fail? Many thanks!", "response":"This is a known bug that has been fixed (https:\/\/github.com\/matplotlib\/matplotlib\/pull\/3414) on master. The bug is in the handling of searching for a freetype installation. If you install the Linux package freetype-dev, you will avoid this bug and be able to compile matplotlib. ``` sudo apt-get install libfreetype6-dev ```", "best_answers_score":0.8, "library_name":"matplotlib", "question_url":"https:\/\/stackoverflow.com\/questions\/25674612\/ubuntu-14-04-pip-cannot-upgrade-matplotllib", "best_answers_votes":77, "question_length":3511, "response_length":324 }, { "question":"matplotlib RuntimeError: Python is not installed as a framework This question has been asked before, in here, also here. 
However, the solution didn't fix the problem for my case. The original error is, when I try to import matplotlib.pyplot, I got: Traceback (most recent call last): File \"\", line 1, in File \"\/Users\/XX\/anaconda\/lib\/python2.7\/site-packages\/matplotlib\/pyplot.py\", line 114, in _backend_mod, new_figure_manager, draw_if_interactive, _show = pylab_setup() File \"\/Users\/XX\/anaconda\/lib\/python2.7\/site-packages\/matplotlib\/backends\/init.py\", line 32, in pylab_setup globals(),locals(),[backend_name],0) File \"\/Users\/XX\/anaconda\/lib\/python2.7\/site-packages\/matplotlib\/backends\/backend_macosx.py\", line 24, in from matplotlib.backends import _macosx RuntimeError: Python is not installed as a framework. The Mac OS X backend will not be able to function correctly if Python is not installed as a framework. See the Python documentation for more information on installing Python as a framework on Mac OS X. Please either reinstall Python as a framework, or try one of the other backends. If you are Working with Matplotlib in a virtual enviroment see 'Working with Matplotlib in Virtual environments' in the Matplotlib FAQ I followed the solutions to add a ~\/.matplotlib\/matplotlibrc file with the code: backend: TkAgg. After doing that, my error changed to: \/Users\/XX\/anaconda\/lib\/python2.7\/site-packages\/matplotlib\/font_manager.py:273: UserWarning: Matplotlib is building the font cache using fc-list. This may take a moment. warnings.warn('Matplotlib is building the font cache using fc-list. This may take a moment.') objc[25120]: Class TKApplication is implemented in both \/Users\/XX\/anaconda\/lib\/libtk8.5.dylib and \/System\/Library\/Frameworks\/Tk.framework\/Versions\/8.5\/Tk. One of the two will be used. Which one is undefined. objc[25120]: Class TKMenu is implemented in both \/Users\/XX\/anaconda\/lib\/libtk8.5.dylib and \/System\/Library\/Frameworks\/Tk.framework\/Versions\/8.5\/Tk. One of the two will be used. Which one is undefined. 
objc[25120]: Class TKContentView is implemented in both \/Users\/XX\/anaconda\/lib\/libtk8.5.dylib and \/System\/Library\/Frameworks\/Tk.framework\/Versions\/8.5\/Tk. One of the two will be used. Which one is undefined. objc[25120]: Class TKWindow is implemented in both \/Users\/XX\/anaconda\/lib\/libtk8.5.dylib and \/System\/Library\/Frameworks\/Tk.framework\/Versions\/8.5\/Tk. One of the two will be used. Which one is undefined. I have no idea how to fix that. I'm not using a virtual machine. PS: I found out that by adding: import matplotlib matplotlib.use('TkAgg') before import matplotlib.pyplot, it seems to work. But adding those two lines of code every time is annoying... What's going on, and how I can fix it?", "response":"I run my script in virtualenv. Python version is 3.5. Add a line: ``` backend: TkAgg ``` in file: ``` ~\/.matplotlib\/matplotlibrc ``` This solved the problem. If you want to know more about why adding this solves the problem, you can read about customizing matplotlib's backend. And TkAgg solves this issue because of it's dependency with Tkinter.", "best_answers_score":0.8, "library_name":"matplotlib", "question_url":"https:\/\/stackoverflow.com\/questions\/34977388\/matplotlib-runtimeerror-python-is-not-installed-as-a-framework", "best_answers_votes":95, "question_length":2739, "response_length":346 }, { "question":"How to center an annotation horizontally over a point? I'd like to take something like this code for scatter plot annotations... 
```py import matplotlib; matplotlib.use('TkAgg') import matplotlib.pyplot as plt labels = [\"Abra\", \"Kadabra\", \"Alazkazam\", \"Mew\"] x_values = [0.3, 0.6, 0.2, 0.4] y_values = [0.2, 0.2, 0.4, 0.9] fig = plt.figure(figsize=(5, 5)) plt.axis('off') renderer = fig.canvas.get_renderer() for i, label in enumerate(labels): plt.scatter(x_values[i], y_values[i]) text_object = plt.annotate(label, xy=(x_values[i], y_values[i])) plt.savefig(\"horizontally_centered_text_annotations.png\") ``` ...which produces this plot: ...and make it produce something like this plot: I've tried getting the window extent around the text boxes, grabbing the x coordinate and width, and shifting over for each annotation like so: ```py for i, label in enumerate(labels): plt.scatter(x_values[i], y_values[i]) text_object = plt.annotate(label, xy=(x_values[i], y_values[i])) text_window_extent = text_object.get_window_extent(renderer) new_x_position = x_values[i] - text_window_extent.width \/ 2 text_object.set_position((new_x_position, y_values[i])) print \"x_value: {}, window_extent_width: {}, new_x_position: {}\".format(x_values[i], text_window_extent.width, new_x_position) ``` But as you can see here from print statements the widths are too big: ```none x_value: 0.3, window_extent_width: 31.5, new_x_position: -15.45 x_value: 0.6, window_extent_width: 56.0, new_x_position: -27.4 x_value: 0.2, window_extent_width: 72.875, new_x_position: -36.2375 x_value: 0.4, window_extent_width: 30.75, new_x_position: -14.975 ``` Not sure how if this has to do with coordinate systems...", "response":"Use the horizontalalignment (which can be shortened as ha) option to the annotate call: ```py text_object = plt.annotate(label, xy=(x_values[i], y_values[i]), ha='center') ```", "best_answers_score":0.8, "library_name":"matplotlib", "question_url":"https:\/\/stackoverflow.com\/questions\/38173517\/how-to-center-an-annotation-horizontally-over-a-point", "best_answers_votes":104, "question_length":1683, 
"response_length":175 }, { "question":"What is a \"good\" palette for divergent colors in R? (or: can viridis and magma be combined together?) I am interested in having a \"good\" divergent color palette. One could obviously use just red, white, and blue: ``` img <- function(obj, nam) { image(1:length(obj), 1, as.matrix(1:length(obj)), col=obj, main = nam, ylab = \"\", xaxt = \"n\", yaxt = \"n\", bty = \"n\") } rwb <- colorRampPalette(colors = c(\"red\", \"white\", \"blue\")) img(rwb(100), \"red-white-blue\") ``` Since I recently fell in love with the viridis color palettes, I was hoping to combine viridis and magma to form such divergent colors (of course, color blind people would only see the absolute value of the color, but that is sometimes o.k.). When I tried combining viridis and magma, I found that they don't \"end\" (or \"start\") at the same place, so I get something like this (I'm using R, but this would probably be the same for python users): ``` library(viridis) img(c(rev(viridis(100, begin = 0)), magma(100, begin = 0)), \"magma-viridis\") ``` We can see that when close to zero, viridis is purple, while magma is black. I would like for both of them to start in (more or less) the same spot, so I tried using 0.3 as a starting point: ``` img(c(rev(viridis(100, begin = 0.3)), magma(100, begin = 0.3)), \"-viridis-magma(0.3)\") ``` This is indeed better, but I wonder if there is a better solution. (I am also \"tagging\" python users, since viridis is originally from matplotlib, so someone using it may know of such a solution) Thanks!", "response":"There have been some good and useful suggestions already but let me add a few remarks: The viridis and magma palettes are sequential palettes with multiple hues. Thus, along the scale you increase from very light colors to rather dark colors. Simultaneously the colorfulness is increased and the hue changes from yellow to blue (either via green or via red).
Diverging palettes can be created by combining two sequential palettes. Typically, you join them at the light colors and then let them diverge to different dark colors. Usually, one uses single-hue sequential palettes that diverge from a neutral light gray to two different dark colors. One should pay attention though that the different \"arms\" of the palette are balanced with respect to luminance (light-dark) and chroma (colorfulness). Therefore, combining magma and viridis does not work well. You could let them diverge from a similar yellowish color but you would diverge to similar blueish colors. Also with the changing hues it would just become more difficult to judge in which arm of the palette you are. As mentioned by others, ColorBrewer.org provides good diverging palettes. Moreland's approach is also useful. Yet another general solution is our diverging_hcl() function in the colorspace package. In the accompanying paper at https:\/\/arxiv.org\/abs\/1903.06490 (forthcoming in JSS) the construction principles are described and also how the general HCL-based strategy can approximate numerous palettes from ColorBrewer.org, CARTO, etc. (Earlier references include our initial work in CSDA at http:\/\/dx.doi.org\/10.1016\/j.csda.2008.11.033 and further recommendations geared towards meteorology, but applicable beyond, in a BAMS paper at http:\/\/dx.doi.org\/10.1175\/BAMS-D-13-00155.1.) The advantage of our solution in HCL space (hue-chroma-luminance) is that you can interpret the coordinates relatively easily. It does take some practice but isn't as opaque as other solutions. Also we provide a GUI hclwizard() (see below) that helps understanding the importance of the different coordinates. Most of the palettes in the question and the other answers can be matched rather closely by diverging_hcl() provided that the two hues (argument h), the maximum chroma (c), and minimal\/maximal luminance (l) are chosen appropriately.
Furthermore, one may have to tweak the power arguments, which control how quickly chroma and luminance are increased, respectively. Typically, chroma is added rather quickly (power[1] < 1) whereas luminance is increased more slowly (power[2] > 1). Moreland's \"cool-warm\" palette for example uses a blue (h = 250) and red (h = 10) hue but with a relatively small luminance contrast (l = 37 vs. l = 88): ``` coolwarm_hcl <- colorspace::diverging_hcl(11, h = c(250, 10), c = 100, l = c(37, 88), power = c(0.7, 1.7)) coolwarm <- Rgnuplot:::GpdivergingColormap(seq(0, 1, length.out = 11), rgb1 = colorspace::sRGB(0.230, 0.299, 0.754), rgb2 = colorspace::sRGB(0.706, 0.016, 0.150), outColorspace = \"sRGB\") coolwarm[coolwarm > 1] <- 1 coolwarm <- rgb(coolwarm[, 1], coolwarm[, 2], coolwarm[, 3]) ``` In contrast, ColorBrewer.org's BrBG palette has a much higher luminance contrast (l = 20 vs. l = 95): ``` brbg <- rev(RColorBrewer::brewer.pal(11, \"BrBG\")) brbg_hcl <- colorspace::diverging_hcl(11, h = c(180, 50), c = 80, l = c(20, 95), power = c(0.7, 1.3)) ``` The resulting palettes are compared below with the HCL-based version below the original. You see that these are not identical but rather close. On the right-hand side I've also matched viridis and plasma with HCL-based palettes. Whether you prefer the cool-warm or BrBG palette may depend on your personal taste but also - more importantly - what you want to bring out in your visualization. The low luminance contrast in cool-warm will be more useful if the sign of the deviation matters most. A high luminance contrast will be more useful if you want to bring out the size of the (extreme) deviations. More practical guidance is provided in the papers above.
The rest of the replication code for the figure above is: ``` viridis <- viridis::viridis(11) viridis_hcl <- colorspace::sequential_hcl(11, h = c(300, 75), c = c(35, 95), l = c(15, 90), power = c(0.8, 1.2)) plasma <- viridis::plasma(11) plasma_hcl <- colorspace::sequential_hcl(11, h = c(-100, 100), c = c(60, 100), l = c(15, 95), power = c(2, 0.9)) pal <- function(col, border = \"transparent\") { n <- length(col) plot(0, 0, type=\"n\", xlim = c(0, 1), ylim = c(0, 1), axes = FALSE, xlab = \"\", ylab = \"\") rect(0:(n-1)\/n, 0, 1:n\/n, 1, col = col, border = border) } par(mar = rep(0, 4), mfrow = c(4, 2)) pal(coolwarm) pal(viridis) pal(coolwarm_hcl) pal(viridis_hcl) pal(brbg) pal(plasma) pal(brbg_hcl) pal(plasma_hcl) ``` Update: These HCL-based approximations of colors from other tools (ColorBrewer.org, viridis, scico, CARTO, ...) are now also available as named palettes in both the colorspace package and the hcl.colors() function from the basic grDevices package (starting from 3.6.0). Thus, you can now also say easily: ``` colorspace::sequential_hcl(11, \"viridis\") grDevices::hcl.colors(11, \"viridis\") ``` Finally, you can explore our proposed colors interactively in a shiny app: http:\/\/hclwizard.org:64230\/hclwizard\/. For users of R, you can also start the shiny app locally on your computer (which runs somewhat faster than from our server) or you can run a Tcl\/Tk version of it (which is even faster): ``` colorspace::hclwizard(gui = \"shiny\") colorspace::hclwizard(gui = \"tcltk\") ``` If you want to understand what the paths of the palettes look like in RGB and HCL coordinates, the colorspace::specplot() is useful. 
See for example colorspace::specplot(coolwarm).", "best_answers_score":0.8, "library_name":"matplotlib", "question_url":"https:\/\/stackoverflow.com\/questions\/37482977\/what-is-a-good-palette-for-divergent-colors-in-r-or-can-viridis-and-magma-b", "best_answers_votes":33, "question_length":1497, "response_length":5319 }, { "question":"One colorbar for seaborn heatmaps in subplot Here is an example that shows a colorbar for each subplot: ``` import seaborn as sns import matplotlib.pyplot as plt import pandas as pd import numpy as np df = pd.DataFrame(np.random.random((10,10,))) fig,axn = plt.subplots(2, 2, sharex=True, sharey=True) for ax in axn.flat: sns.heatmap(df, ax=ax) ``` How can I remove the colorbars for each subplot? I'd like to have only one colorbar that is either vertically or horizontally oriented. I know I have access to each colorbar axes via fig.get_axes()[:-4], but how can I remove it from them entirely from the plot? I don't think there is an option to opt out of drawing the colorbar when heatmap is called.", "response":"The cbar parameter controls whether a colorbar should be added, and the cbar_ax parameter can optionally specify the axes where the colorbar should go. So, you could do: ``` import seaborn as sns import matplotlib.pyplot as plt import pandas as pd import numpy as np df = pd.DataFrame(np.random.random((10,10,))) fig, axn = plt.subplots(2, 2, sharex=True, sharey=True) cbar_ax = fig.add_axes([.91, .3, .03, .4]) for i, ax in enumerate(axn.flat): sns.heatmap(df, ax=ax, cbar=i == 0, vmin=0, vmax=1, cbar_ax=None if i else cbar_ax) fig.tight_layout(rect=[0, 0, .9, 1]) ``` (You'll get a warning about tight_layout here, but it actually is correct because we placed cbar_ax explicitly. 
If you don't like seeing the warning, you can also call tight_layout before plotting, but it won't be as tight).", "best_answers_score":0.8, "library_name":"matplotlib", "question_url":"https:\/\/stackoverflow.com\/questions\/28356359\/one-colorbar-for-seaborn-heatmaps-in-subplot", "best_answers_votes":87, "question_length":702, "response_length":795 }, { "question":"Prevent anti-aliasing for imshow in matplotlib When I use matplotlib's imshow() method to represent a small numpy matrix, it ends up doing some smoothing between pixels. Is there any way to disables this? It makes my figure's misleading in presentations. The figure above is a 28x28 image, so I should be seeing large squares of single colors representing each pixel (as matlab would display it when using imagesc()). But Instead, the pixels seem to be blurred with neighboring pixels. Is there a way to disable this behavior?", "response":"There is an interpolation option for imshow which controls how and if interpolation will be applied to the rendering of the matrix. If you try ``` imshow(array, interpolation=\"nearest\") ``` you might get something more like you want. As an example ``` A=10*np.eye(10) + np.random.rand(100).reshape(10,10) imshow(A) ``` compared with ``` A=10*np.eye(10) + np.random.rand(100).reshape(10,10) imshow(A, interpolation=\"nearest\") ```", "best_answers_score":0.8, "library_name":"matplotlib", "question_url":"https:\/\/stackoverflow.com\/questions\/8376609\/prevent-anti-aliasing-for-imshow-in-matplotlib", "best_answers_votes":56, "question_length":526, "response_length":428 }, { "question":"Plotting a histogram from pre-counted data in Matplotlib I'd like to use Matplotlib to plot a histogram over data that's been pre-counted. For example, say I have the raw data ``` data = [1, 2, 2, 3, 4, 5, 5, 5, 5, 6, 10] ``` Given this data, I can use ``` pylab.hist(data, bins=[...]) ``` to plot a histogram. 
In my case, the data has been pre-counted and is represented as a dictionary: ``` counted_data = {1: 1, 2: 2, 3: 1, 4: 1, 5: 4, 6: 1, 10: 1} ``` Ideally, I'd like to pass this pre-counted data to a histogram function that lets me control the bin widths, plot range, etc, as if I had passed it the raw data. As a workaround, I'm expanding my counts into the raw data: ``` data = list(chain.from_iterable(repeat(value, count) for (value, count) in counted_data.iteritems())) ``` This is inefficient when counted_data contains counts for millions of data points. Is there an easier way to use Matplotlib to produce a histogram from my pre-counted data? Alternatively, if it's easiest to just bar-plot data that's been pre-binned, is there a convenience method to \"roll-up\" my per-item counts into binned counts?", "response":"You can use the weights keyword argument to np.histgram (which plt.hist calls underneath) ``` val, weight = zip(*[(k, v) for k,v in counted_data.items()]) plt.hist(val, weights=weight) ``` Assuming you only have integers as the keys, you can also use bar directly: ``` min_bin = np.min(counted_data.keys()) max_bin = np.max(counted_data.keys()) bins = np.arange(min_bin, max_bin + 1) vals = np.zeros(max_bin - min_bin + 1) for k,v in counted_data.items(): vals[k - min_bin] = v plt.bar(bins, vals, ...) ``` where ... is what ever arguments you want to pass to bar (doc) If you want to re-bin your data see Histogram with separate list denoting frequency", "best_answers_score":0.8, "library_name":"matplotlib", "question_url":"https:\/\/stackoverflow.com\/questions\/19212508\/plotting-a-histogram-from-pre-counted-data-in-matplotlib", "best_answers_votes":36, "question_length":1119, "response_length":653 }, { "question":"Difference between plt.draw() and plt.show() in matplotlib I was wondering why some people put a plt.draw() into their code before the plt.show(). For my code, the behavior of the plt.draw() didn't seem to change anything about the output. 
I did a search on the internet but couldn't find anything useful. (assuming we imported pyplot as from matplotlib import pyplot as plt)", "response":"plt.show() will display the current figure that you are working on. plt.draw() will re-draw the figure. This allows you to work in interactive mode and, should you have changed your data or formatting, allow the graph itself to change. The plt.draw docs state: This is used in interactive mode to update a figure that has been altered using one or more plot object method calls; it is not needed if figure modification is done entirely with pyplot functions, if a sequence of modifications ends with a pyplot function, or if matplotlib is in non-interactive mode and the sequence of modifications ends with show() or savefig(). This seems to suggest that using plt.draw() before plt.show() when not in interactive mode will be redundant the vast majority of the time. The only time you may need it is if you are doing some very strange modifications that don't involve using pyplot functions. Refer to the Matplotlib doc, \"Interactive figures\" for more information.", "best_answers_score":0.8, "library_name":"matplotlib", "question_url":"https:\/\/stackoverflow.com\/questions\/23141452\/difference-between-plt-draw-and-plt-show-in-matplotlib", "best_answers_votes":61, "question_length":375, "response_length":965 }, { "question":"Calculate overlapped area between two rectangles I want to calculate the overlapped area \"THE GRAY REGION\" between red and blue rectangles. Each rectangle is defined by its four corner coordinates. The resulted unit of the overlapped area is unit square. I could not imagine how can I do it? Any creative comments would be appreciated.", "response":"This type of intersection is easily done by the \"min of the maxes\" and \"max of the mins\" idea. 
To write it out one needs a specific notion for the rectangle, and, just to make things clear I'll use a namedtuple: ``` from collections import namedtuple Rectangle = namedtuple('Rectangle', 'xmin ymin xmax ymax') ra = Rectangle(3., 3., 5., 5.) rb = Rectangle(1., 1., 4., 3.5) # intersection here is (3, 3, 4, 3.5), or an area of 1*.5=.5 def area(a, b): # returns None if rectangles don't intersect dx = min(a.xmax, b.xmax) - max(a.xmin, b.xmin) dy = min(a.ymax, b.ymax) - max(a.ymin, b.ymin) if (dx>=0) and (dy>=0): return dx*dy print area(ra, rb) # 0.5 ``` If you don't like the namedtuple notation, you could just use: ``` dx = min(a[2], b[2]) - max(a[0], b[0]) ``` etc, or whatever notation you prefer.", "best_answers_score":0.8, "library_name":"matplotlib", "question_url":"https:\/\/stackoverflow.com\/questions\/27152904\/calculate-overlapped-area-between-two-rectangles", "best_answers_votes":89, "question_length":335, "response_length":802 }, { "question":"AttributeError while adding colorbar in matplotlib The following code fails to run on Python 2.5.4: ``` from matplotlib import pylab as pl import numpy as np data = np.random.rand(6,6) fig = pl.figure(1) fig.clf() ax = fig.add_subplot(1,1,1) ax.imshow(data, interpolation='nearest', vmin=0.5, vmax=0.99) pl.colorbar() pl.show() ``` The error message is ``` C:\temp>python z.py Traceback (most recent call last): File \"z.py\", line 10, in pl.colorbar() File \"C:\Python25\lib\site-packages\matplotlib\pyplot.py\", line 1369, in colorbar ret = gcf().colorbar(mappable, cax = cax, ax=ax, **kw) File \"C:\Python25\lib\site-packages\matplotlib\figure.py\", line 1046, in colorbar cb = cbar.Colorbar(cax, mappable, **kw) File \"C:\Python25\lib\site-packages\matplotlib\colorbar.py\", line 622, in __init__ mappable.autoscale_None() # Ensure mappable.norm.vmin, vmax AttributeError: 'NoneType' object has no attribute 'autoscale_None' ``` How can I add colorbar to this code? 
Following is the interpreter information: ``` Python 2.5.4 (r254:67916, Dec 23 2008, 15:10:54) [MSC v.1310 32 bit (Intel)] on win32 Type \"help\", \"copyright\", \"credits\" or \"license\" for more information. >>> ```", "response":"(This is a very old question I know) The reason you are seeing this issue is because you have mixed the use of the state machine (matplotlib.pyplot) with the OO approach of adding images to an axes. The plt.imshow function differs from the ax.imshow method in just one subtly different way. The method ax.imshow: creates and returns an Image which has been added to the axes The function plt.imshow: creates and returns an Image which has been added to the current axes, and sets the image to be the \"current\" image\/mappable (which can then be automatically picked up by the plt.colorbar function). If you want to be able to use the plt.colorbar (which in all but the most extreme cases, you do) with the ax.imshow method, you will need to pass the returned image (which is an instance of a ScalarMappable) to plt.colorbar as the first argument: ``` plt.imshow(image_file) plt.colorbar() ``` is equivalent (without using the state machine) to: ``` img = ax.imshow(image_file) plt.colorbar(img, ax=ax) ``` If ax is the current axes in pyplot, then the kwarg ax=ax is not needed.", "best_answers_score":0.8, "library_name":"matplotlib", "question_url":"https:\/\/stackoverflow.com\/questions\/2643953\/attributeerror-while-adding-colorbar-in-matplotlib", "best_answers_votes":90, "question_length":1173, "response_length":1077 }, { "question":"How to save the Pandas dataframe\/series data as a figure? It sounds somewhat weird\uff0c but I need to save the Pandas console output string to png pics. 
For example: ``` >>> df sales net_pft ROE ROIC STK_ID RPT_Date 600809 20120331 22.1401 4.9253 0.1651 0.6656 20120630 38.1565 7.8684 0.2567 1.0385 20120930 52.5098 12.4338 0.3587 1.2867 20121231 64.7876 13.2731 0.3736 1.2205 20130331 27.9517 7.5182 0.1745 0.3723 20130630 40.6460 9.8572 0.2560 0.4290 20130930 53.0501 11.8605 0.2927 0.4369 ``` Is there any way like df.output_as_png(filename='df_data.png') to generate a pic file which just display above content inside?", "response":"Option-1: use matplotlib table functionality, with some additional styling: ``` import pandas as pd import numpy as np import matplotlib.pyplot as plt df = pd.DataFrame() df['date'] = ['2016-04-01', '2016-04-02', '2016-04-03'] df['calories'] = [2200, 2100, 1500] df['sleep hours'] = [8, 7.5, 8.2] df['gym'] = [True, False, False] def render_mpl_table(data, col_width=3.0, row_height=0.625, font_size=14, header_color='#40466e', row_colors=['#f1f1f2', 'w'], edge_color='w', bbox=[0, 0, 1, 1], header_columns=0, ax=None, **kwargs): if ax is None: size = (np.array(data.shape[::-1]) + np.array([0, 1])) * np.array([col_width, row_height]) fig, ax = plt.subplots(figsize=size) ax.axis('off') mpl_table = ax.table(cellText=data.values, bbox=bbox, colLabels=data.columns, **kwargs) mpl_table.auto_set_font_size(False) mpl_table.set_fontsize(font_size) for k, cell in mpl_table._cells.items(): cell.set_edgecolor(edge_color) if k[0] == 0 or k[1] < header_columns: cell.set_text_props(weight='bold', color='w') cell.set_facecolor(header_color) else: cell.set_facecolor(row_colors[k[0]%len(row_colors) ]) return ax.get_figure(), ax fig,ax = render_mpl_table(df, header_columns=0, col_width=2.0) fig.savefig(\"table_mpl.png\") ``` Options-2 Use Plotly + kaleido ``` import plotly.figure_factory as ff import pandas as pd df = pd.DataFrame() df['date'] = ['2016-04-01', '2016-04-02', '2016-04-03'] df['calories'] = [2200, 2100, 1500] df['sleep hours'] = [8, 7.5, 8.2] df['gym'] = [True, False, False] fig = ff.create_table(df) 
fig.update_layout( autosize=False, width=500, height=200, ) fig.write_image(\"table_plotly.png\", scale=2) fig.show() ``` For the above, the font size can be changed using the font attribute: ``` fig.update_layout( autosize=False, width=500, height=200, font={'size':8} ) ```", "best_answers_score":0.8, "library_name":"matplotlib", "question_url":"https:\/\/stackoverflow.com\/questions\/19726663\/how-to-save-the-pandas-dataframe-series-data-as-a-figure", "best_answers_votes":68, "question_length":618, "response_length":1787 }, { "question":"Make part of a title bold and a different color I would like to change part of a title to be bold. For example: ``` plt.title(\"This is title number: \" + str(number)) ``` Given a title like the above, how would I bold the str(number) part.", "response":"From matplotlib version 2 on, there is no need to use latex (which would require a working latex installation). One can use normal MathText to render part of the title in bold. ``` import matplotlib.pyplot as plt number = 2017 plt.title(\"This is title number: \" + r\"$\\bf{\" + str(number) + \"}$\") plt.show() ```", "best_answers_score":0.8, "library_name":"matplotlib", "question_url":"https:\/\/stackoverflow.com\/questions\/34937048\/make-part-of-a-title-bold-and-a-different-color", "best_answers_votes":98, "question_length":238, "response_length":309 }, { "question":"ValueError: Unknown projection '3d' (once again) When executing this line of code: ``` import matplotlib.pyplot as plt #your code fig = plt.figure() ax = fig.gca(projection='3d') ``` I have an output error: ``` raise ValueError(\"Unknown projection %r\" % projection) ValueError: Unknown projection '3d' ``` The error appears also when I use Spyder as IDE. The version of matplotlib is ``` print('matplotlib: {}'.format(matplotlib.__version__)) matplotlib: 1.5.0rc3 ``` But I had the same problem even with other versions of matplotlib. 
A similar error was reported in this question (Stackoverflow) but the answers do not help. Some suggestions on how to modify the instruction? matplotlib: 3.0.2", "response":"You will have to import Axes3D to enable the 3d plotting in matplotlib. The official tutorials on 3d plotting can be found here. So the correct imports and code would look like ``` import matplotlib.pyplot as plt from mpl_toolkits.mplot3d import Axes3D # <--- This is important for 3d plotting #your code fig = plt.figure() ax = fig.gca(projection='3d') ```", "best_answers_score":0.8, "library_name":"matplotlib", "question_url":"https:\/\/stackoverflow.com\/questions\/56222259\/valueerror-unknown-projection-3d-once-again", "best_answers_votes":109, "question_length":695, "response_length":357 }, { "question":"Matplotlib boxplot without outliers Is there any way of hiding the outliers when plotting a boxplot in matplotlib (python)? I'm using the simplest way of plotting it: ``` from pylab import * boxplot([1,2,3,4,5,10]) show() ``` This gives me the following plot: (I cannot post the image because I have not enough reputation, but basically it is a boxplot with Q1 at y=1, Q3 at y=5, and the outlier at y=10) I would like to remove the outlier at y=10, so that the plot only shows from Q1 to Q3 (in this case from 1 to 5).", "response":"In current versions of matplotlib you can do: ``` boxplot([1,2,3,4,5,10], showfliers=False) ``` or ``` boxplot([1,2,3,4,5,10], sym='') ``` In older versions, only the second approach will work. 
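As a quick sanity check (a minimal sketch of my own, not from the docs; it assumes the non-interactive Agg backend so it runs headless), you can confirm that showfliers=False really leaves no flier points to draw:

```python
import matplotlib
matplotlib.use('Agg')  # render off-screen; no window needed
import matplotlib.pyplot as plt

# toy data with one obvious outlier at 10
data = [1, 2, 3, 4, 5, 10]
fig, ax = plt.subplots()
bp = ax.boxplot(data, showfliers=False)
# with showfliers=False the returned flier artists carry no data points
assert all(len(f.get_xdata()) == 0 for f in bp['fliers'])
plt.close(fig)
```

The same check with the default showfliers=True would find one flier point at y=10.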
The docs for boxplot do mention this, btw as, \"Enter an empty string (\u2018\u2019) if you don\u2019t want to show fliers.\", though, at least for myself, \"outliers\" is the more familiar word.", "best_answers_score":0.8, "library_name":"matplotlib", "question_url":"https:\/\/stackoverflow.com\/questions\/22028064\/matplotlib-boxplot-without-outliers", "best_answers_votes":101, "question_length":518, "response_length":370 }, { "question":"How to make custom legend I currently generate my legend with matplotlib this way: ```py if t==25: l1,l2 = ax2.plot(x320,vTemp320,'or',x320,vAnaTemp320,'-r') elif t==50: l3,l4 = ax2.plot(x320,vTemp320,'ob',x320,vAnaTemp320,'-b') else: l5,l6 = ax2.plot(x320,vTemp320,'og',x320,vAnaTemp320,'-g') plt.legend((l1,l2,l3,l4,l5,l6), ('t=25 Simulation', 't=25 Analytical','t=50 Simulation', 't=50 Analytical','t=500 Simulation', 't=500 Analytical'), bbox_to_anchor=(-.25, 1), loc=2, borderaxespad=0.,prop={'size':12}) ``` Which somehow works see below. But I have duplicated information in my legend. I would prefer to separate the legend so that I have different colored lines corresponding to each time t, and a normal line as my Analytical solution and dots for the results of my simulation. Something like that ```none --(red line) t = 25 --(blue line) t = 50 --(green line) t = 500 o Simulaton -- Analytical Solution ``` Does anyone know how I could achieve this with matplotlib?", "response":"You can chose the artists and labels to display in the legend as follows. You'll need to create custom artists for the elements in the legend that are not actually plotted. 
``` import matplotlib.pyplot as plt import numpy as np x = np.linspace(0,10,31) fig = plt.figure() ax = fig.add_subplot(1,1,1) #Plot analytic solution ax.plot(x,1*x**2, color='r', label=\"t = 25\") ax.plot(x,2*x**2, color='b', label=\"t = 50\") ax.plot(x,3*x**2, color='g', label=\"t = 500\") #Plot simulation ax.plot(x,1*x**2, color='r', linestyle='', marker='o') ax.plot(x,2*x**2, color='b', linestyle='', marker='o') ax.plot(x,3*x**2, color='g', linestyle='', marker='o') #Get artists and labels for legend and chose which ones to display handles, labels = ax.get_legend_handles_labels() display = (0,1,2) #Create custom artists simArtist = plt.Line2D((0,1),(0,0), color='k', marker='o', linestyle='') anyArtist = plt.Line2D((0,1),(0,0), color='k') #Create legend from custom artist\/label lists ax.legend([handle for i,handle in enumerate(handles) if i in display]+[simArtist,anyArtist], [label for i,label in enumerate(labels) if i in display]+['Simulation', 'Analytic']) plt.show() ```", "best_answers_score":0.8, "library_name":"matplotlib", "question_url":"https:\/\/stackoverflow.com\/questions\/13303928\/how-to-make-custom-legend", "best_answers_votes":92, "question_length":976, "response_length":1157 }, { "question":"Changing the background color of the axes planes of a 3D plot On the basis of the scatterplot example of matplotlib, how can I change the gray background color of the 3 axes grid planes? I would like to set it to white, keeping the grid lines with the default gray color. I found this question but I couldn't apply it to the example. 
Thanks.", "response":"For a slightly different approach, see below: ``` # Get rid of colored axes planes # First remove fill ax.xaxis.pane.fill = False ax.yaxis.pane.fill = False ax.zaxis.pane.fill = False # Now set color to white (or whatever is \"invisible\") ax.xaxis.pane.set_edgecolor('w') ax.yaxis.pane.set_edgecolor('w') ax.zaxis.pane.set_edgecolor('w') # Bonus: To get rid of the grid as well: ax.grid(False) ``` See this blog post that I used as my source.", "best_answers_score":0.8, "library_name":"matplotlib", "question_url":"https:\/\/stackoverflow.com\/questions\/11448972\/changing-the-background-color-of-the-axes-planes-of-a-3d-plot", "best_answers_votes":48, "question_length":341, "response_length":441 }, { "question":"How to add different graphs (as an inset) in another python graph [duplicate] This question already has answers here: Embedding small plots inside subplots in matplotlib (5 answers) Closed 6 years ago. I'd like to make a graph like that: the problem is, I've got the data from some external files, and I can make the background graph, but I have no idea how to add another graph inside of the one that I already have and change the data to have different results in both of them: Below I am adding the code I am using to do the background graph. Hope someone can help. ``` from __future__ import division import numpy as np import matplotlib.pyplot as plt plt.rc('text',usetex=True) font = {'family':'serif','size':16} plt.rc('font',**font) plt.rc('legend',**{'fontsize':14}) matplotlib.rcParams['text.latex.preamble']=[r'\\usepackage{amsmath}'] data=np.loadtxt(r'C:\\...\\file.txt') plt.plot(data[:,0],data[:,6],linewidth = 3,label='B$_0$ = 1.5 T d',linestyle= '--', color='black') plt.show() ```", "response":"There's more than one way do to this, depending on the relationship that you want the inset to have. 
If you just want to inset a graph that has no set relationship with the bigger graph, just do something like: ``` import matplotlib.pyplot as plt fig, ax1 = plt.subplots() # These are in unitless percentages of the figure size. (0,0 is bottom left) left, bottom, width, height = [0.25, 0.6, 0.2, 0.2] ax2 = fig.add_axes([left, bottom, width, height]) ax1.plot(range(10), color='red') ax2.plot(range(6)[::-1], color='green') plt.show() ``` If you want to have some sort of relationship between the two, have a look at some of the examples here: http:\/\/matplotlib.org\/1.3.1\/mpl_toolkits\/axes_grid\/users\/overview.html#insetlocator This is useful if you want the inset to be a \"zoomed in\" version, (say, at exactly twice the scale of the original) that will automatically update as you pan\/zoom interactively. For simple insets, though, just create a new axes as I showed in the example above.", "best_answers_score":0.8, "library_name":"matplotlib", "question_url":"https:\/\/stackoverflow.com\/questions\/21001088\/how-to-add-different-graphs-as-an-inset-in-another-python-graph", "best_answers_votes":73, "question_length":994, "response_length":990 }, { "question":"How to set the line width of error bar caps How can the line width of the error bar caps in Matplotlib be changed? I tried the following code: ``` (_, caplines, _) = matplotlib.pyplot.errorbar( data['distance'], data['energy'], yerr=data['energy sigma'], capsize=10, elinewidth=3) for capline in caplines: capline.set_linewidth(10) capline.set_color('red') pp.draw() ``` Unfortunately, this updates the color of the caps, but does not update the line width of the caps! The resulting effect is similar to the \"fat error bar lines \/ thin caps\" in the following image: It would be nice to have \"fat\" bar caps, in the case; how can this be done, in Matplotlib? 
Drawing the bar caps \"manually\", one by one with plot() would work, but a simpler alternative would be best.", "response":"EOL, you were very close: ``` distance = [1,3,7,9] energy = [10,20,30,40] sigma = [1,3,2,5] (_, caps, _) = plt.errorbar(distance, energy, sigma, capsize=20, elinewidth=3) for cap in caps: cap.set_color('red') cap.set_markeredgewidth(10) plt.show() ``` set_markeredgewidth sets the width of the cap lines. Matplotlib objects have so many attributes that often it is difficult to remember the right ones for a given object. IPython is a very useful tool for introspecting matplotlib. I used it to analyze the properties of the 2D lines corresponding to the error cap lines and I found that and other marker properties. Cheers", "best_answers_score":0.8, "library_name":"matplotlib", "question_url":"https:\/\/stackoverflow.com\/questions\/7601334\/how-to-set-the-line-width-of-error-bar-caps", "best_answers_votes":41, "question_length":766, "response_length":621 }, { "question":"Python: subplot within a loop: first panel appears in wrong position I am fairly new to Python and come from a more Matlab point of view. I am trying to make a series of 2 x 5 panel contourf subplots. My approach so far has been to convert (to a certain degree) my Matlab code to Python and plot my subplots within a loop. 
The relevant part of the code looks like this: ``` fig=plt.figure(figsize=(15, 6),facecolor='w', edgecolor='k') for i in range(10): #this part is just arranging the data for contourf ind2 = py.find(zz==i+1) sfr_mass_mat = np.reshape(sfr_mass[ind2],(pixmax_x,pixmax_y)) sfr_mass_sub = sfr_mass[ind2] zi = griddata(massloclist, sfrloclist, sfr_mass_sub,xi,yi,interp='nn') temp = 250+i # this is to index the position of the subplot ax=plt.subplot(temp) ax.contourf(xi,yi,zi,5,cmap=plt.cm.Oranges) plt.subplots_adjust(hspace = .5,wspace=.001) #just annotating where each contour plot is being placed ax.set_title(str(temp)) ``` As a newbie to this forum, I don't seem to be allowed to attach the resulting image. However, going by my indexing in the code as 'temp' the resulting layout of the 2 x 5 panels is: ``` 251 - 252 - 253 - 254 - 255 256 - 257 - 258 - 259 - 250 ``` However, what I want is ``` 250 - 251 - 252 - 253 - 254 255 - 256 - 257 - 258 - 259 ``` That is, the first panel (250) appears in the last position where I would think 259 should be. And 251 seems to be where I want 250 to be placed. They all seem to be in the correct sequence, just circularly shifted by one. I know this will be something very silly, but appreciate any help you can give. 
Thank you in advance.", "response":"Using your code with some random data, this would work: ``` fig, axs = plt.subplots(2,5, figsize=(15, 6), facecolor='w', edgecolor='k') fig.subplots_adjust(hspace = .5, wspace=.001) axs = axs.ravel() for i in range(10): axs[i].contourf(np.random.rand(10,10),5,cmap=plt.cm.Oranges) axs[i].set_title(str(250+i)) ``` The layout is off course a bit messy, but that's because of your current settings (the figsize, wspace etc).", "best_answers_score":0.8, "library_name":"matplotlib", "question_url":"https:\/\/stackoverflow.com\/questions\/17210646\/python-subplot-within-a-loop-first-panel-appears-in-wrong-position", "best_answers_votes":140, "question_length":1609, "response_length":422 }, { "question":"Hiding Axis Labels I'm trying to hide the axis labels on the first subplot at 211. I'd like to label the figure, not just a subplot (reference: \"Isub Event Characteristics\"). How can I control font properties like size, font, color? ``` f = Figure() vdsvgsPlot = f.add_subplot(211) vdsvgsPlot.plot(theLister()[3],theLister()[0]) vdsvgsPlot.plot(theLister()[3],theLister()[1]) isubPlot = f.add_subplot(212) isubPlot.plot(theLister()[3],theLister()[2]) plotCanvas = FigureCanvasTkAgg(f, master) toolbar = NavigationToolbar2TkAgg(plotCanvas, master) plotCanvas.get_tk_widget().pack() ``` Thank you in advance.", "response":"You have several different questions here... Let me break them up a bit... By \"hide the axis labels on the first subplot\" do you mean the actual axis labels (which aren't there unless you specify them), the tick labels (i.e. the numbers along the axis), the axis ticks, or all of the above? If you mean \"all of the above\", just do ax.xaxis.set_visible(False) and the same for the y-axis. (ax here would be vdsvgsPlot in your example code above) If you mean the axis tick labels, just set them to [], i.e.: ax.set_xticklabels([]). 
(and set_yticklabels for the y-axis) If you mean the axis ticks, you can do something similar: ax.set_xticks([]) and ax.set_yticks([]) which will turn off both the ticks and ticklabels. As to the second question, use suptitle to title the entire figure. i.e.: fig.suptitle('whatever') (f.suptitle... in your example code above). As for how to control the font properties, you can either pass various keyword arguments to suptitle (or anything else that creates text on a plot) or set them after you create the text. For example fig.suptitle('This is a title', size=20, horizontalalignment='left', font='Times', color='red') In general, I would suggest you look through the various user's guide, gallery of examples (all of which have the source code included), the pyplot api docs, and the detailed api docs.", "best_answers_score":0.8, "library_name":"matplotlib", "question_url":"https:\/\/stackoverflow.com\/questions\/4079795\/hiding-axis-labels", "best_answers_votes":92, "question_length":606, "response_length":1338 }, { "question":"Plot Normal distribution with Matplotlib [duplicate] This question already has answers here: How to plot normal distribution (10 answers) Closed 3 years ago. 
please help me to plot the normal distribution of the folowing data: DATA: ``` import numpy as np import matplotlib.pyplot as plt from scipy.stats import norm h = [186, 176, 158, 180, 186, 168, 168, 164, 178, 170, 189, 195, 172, 187, 180, 186, 185, 168, 179, 178, 183, 179, 170, 175, 186, 159, 161, 178, 175, 185, 175, 162, 173, 172, 177, 175, 172, 177, 180] std = np.std(h) mean = np.mean(h) plt.plot(norm.pdf(h,mean,std)) ``` output: ``` Standard Deriviation = 8.54065575872 mean = 176.076923077 ``` the plot is incorrect, what is wrong with my code?", "response":"Note: This solution is using pylab, not matplotlib.pyplot You may try using hist to put your data info along with the fitted curve as below: ``` import numpy as np import scipy.stats as stats import pylab as pl h = sorted([186, 176, 158, 180, 186, 168, 168, 164, 178, 170, 189, 195, 172, 187, 180, 186, 185, 168, 179, 178, 183, 179, 170, 175, 186, 159, 161, 178, 175, 185, 175, 162, 173, 172, 177, 175, 172, 177, 180]) #sorted fit = stats.norm.pdf(h, np.mean(h), np.std(h)) #this is a fitting indeed pl.plot(h,fit,'-o') pl.hist(h,normed=True) #use this to draw histogram of your data pl.show() #use may also need add this ```", "best_answers_score":0.8, "library_name":"matplotlib", "question_url":"https:\/\/stackoverflow.com\/questions\/20011494\/plot-normal-distribution-with-matplotlib", "best_answers_votes":100, "question_length":710, "response_length":625 }, { "question":"How do I plot two countplot graphs side by side? [duplicate] This question already has answers here: How to plot in multiple subplots (13 answers) Closed 2 years ago. I am trying to plot two countplots showing the counts of batting and bowling. I tried the following code: ``` l=['batting_team','bowling_team'] for i in l: sns.countplot(high_scores[i]) mlt.show() ``` But by using this , I am getting two plots one below the other. 
How can i make them order side by side?", "response":"Something like this: ``` import seaborn as sns import pandas as pd import matplotlib.pyplot as plt batData = ['a','b','c','a','c'] bowlData = ['b','a','d','d','a'] df=pd.DataFrame() df['batting']=batData df['bowling']=bowlData fig, ax =plt.subplots(1,2) sns.countplot(df['batting'], ax=ax[0]) sns.countplot(df['bowling'], ax=ax[1]) fig.show() ``` The idea is to specify the subplots in the figure - there are numerous ways to do this but the above will work fine.", "best_answers_score":0.8, "library_name":"matplotlib", "question_url":"https:\/\/stackoverflow.com\/questions\/43131274\/how-do-i-plot-two-countplot-graphs-side-by-side", "best_answers_votes":101, "question_length":471, "response_length":463 }, { "question":"Hide axis label only, not entire axis, in Pandas plot I can clear the text of the xlabel in a Pandas plot with: ```py plt.xlabel(\"\") ``` Instead, is it possible to hide the label? May be something like .xaxis.label.set_visible(False).", "response":"From the Pandas docs - The plot method on Series and DataFrame is just a simple wrapper around plt.plot(): This means that anything you can do with matplolib, you can do with a Pandas DataFrame plot. pyplot has an axis() method that lets you set axis properties. Calling plt.axis('off') before calling plt.show() will turn off both axes. ``` df.plot() plt.axis('off') plt.show() plt.close() ``` To control a single axis, you need to set its properties via the plot's Axes. For the x axis - (pyplot.axes().get_xaxis().....) ``` df.plot() ax1 = plt.axes() x_axis = ax1.axes.get_xaxis() x_axis.set_visible(False) plt.show() plt.close() ``` Similarly to control an axis label, get the label and turn it off. 
``` df.plot() ax1 = plt.axes() x_axis = ax1.axes.get_xaxis() x_axis.set_label_text('foo') x_label = x_axis.get_label() ##print isinstance(x_label, matplotlib.artist.Artist) x_label.set_visible(False) plt.show() plt.close() ``` You can also get to the x axis like this ``` ax1 = plt.axes() x_axis = ax1.xaxis x_axis.set_label_text('foo') x_axis.label.set_visible(False) ``` Or this ``` ax1 = plt.axes() ax1.xaxis.set_label_text('foo') ax1.xaxis.label.set_visible(False) ``` DataFrame.plot returns a matplotlib.axes.Axes or numpy.ndarray of them so you can get it\/them when you call it. ``` axs = df.plot() ``` .set_visible() is an Artist method. The axes and their labels are Artists so they have Artist methods\/attributes as well as their own. There are many ways to customize your plots. Sometimes you can find the feature you want browsing the Gallery and Examples", "best_answers_score":0.8, "library_name":"matplotlib", "question_url":"https:\/\/stackoverflow.com\/questions\/40705614\/hide-axis-label-only-not-entire-axis-in-pandas-plot", "best_answers_votes":87, "question_length":234, "response_length":1570 }, { "question":"Bold font weight for LaTeX axes label in matplotlib In matplotlib you can make the text of an axis label bold by ``` plt.xlabel('foo',fontweight='bold') ``` You can also use LaTeX with the right backend ``` plt.xlabel(r'$\\phi$') ``` When you combine them however, the math text is not bold anymore ``` plt.xlabel(r'$\\phi$',fontweight='bold') ``` Nor do the following LaTeX commands seem to have any effect ``` plt.xlabel(r'$\\bf \\phi$') plt.xlabel(r'$\\mathbf{\\phi}$') ``` How can I make a bold $\\phi$ in my axis label?", "response":"Unfortunately you can't bold symbols using the bold font, see this question on tex.stackexchange. 
As the answer suggests, you could use \\boldsymbol to bold phi: ``` r'$\\boldsymbol{\\phi}$' ``` You'll need to load amsmath into the TeX preamble: ``` matplotlib.rc('text', usetex=True) matplotlib.rcParams['text.latex.preamble']=[r\"\\usepackage{amsmath}\"] ```", "best_answers_score":0.8, "library_name":"matplotlib", "question_url":"https:\/\/stackoverflow.com\/questions\/14324477\/bold-font-weight-for-latex-axes-label-in-matplotlib", "best_answers_votes":47, "question_length":517, "response_length":354 }, { "question":"Adjusting Text background transparency I'm trying to put some text with a background on a matplotlib figure, with the text and background both transparent. The following code ``` import numpy as np import matplotlib.pyplot as plt plt.figure() ax = plt.subplot(111) plt.plot(np.linspace(1,0,1000)) t = plt.text(0.03,.95,'text',transform=ax.transAxes,backgroundcolor='0.75',alpha=.5) plt.show() ``` makes the text semi-transparent relative to the text's background, but the background isn't at all transparent relative to the line it obscures in the figure. ``` t.figure.set_alpha(.5) ``` and ``` t.figure.patch.set_alpha(.5) ``` also don't do the trick.", "response":"The alpha passed to plt.text() will change the transparency of the text font. 
To change the background you have to change the alpha using Text.set_bbox(): ``` t = plt.text(0.5, 0.5, 'text', transform=ax.transAxes, fontsize=30) t.set_bbox(dict(facecolor='red', alpha=0.5, edgecolor='red')) #changed first dict arg from \"color='red'\" to \"facecolor='red'\" to work on python 3.6 ``` To remove the border of the text box, as suggested in the comment of @halt9k, you can use .set_bbox(dict(facecolor='white', alpha=0.5, linewidth=0)).", "best_answers_score":0.8, "library_name":"matplotlib", "question_url":"https:\/\/stackoverflow.com\/questions\/23696898\/adjusting-text-background-transparency", "best_answers_votes":91, "question_length":652, "response_length":528 }, { "question":"How to 'turn off' blurry effect of imshow() in matplotlib? I want to make a color plot of probabilities however imshow generates blurry values for points which have zero probability. How can I get rid of this blurry periphery around real grid points? Example: ``` import numpy as np import matplotlib.pyplot as plt a=np.asarray([[ 0.00000000e+00 , 1.05824446e-01 , 2.05086136e-04, 0.00000000e+00], [ 1.05824446e-01 , 3.15012305e-01 , 1.31255127e-01 , 1.05209188e-01], [ 2.05086136e-04 , 1.31255127e-01 , 0.00000000e+00 , 0.00000000e+00], [ 0.00000000e+00 ,1.05209188e-01 , 0.00000000e+00 , 0.00000000e+00]]) im=plt.imshow(a,extent=[0,4,0,4],origin='lower',alpha=1,aspect='auto') plt.show() ```", "response":"By default (which is changed mpl 2.0), imshow interpolates the data (as you would want to do for an image). All you need to do is tell it to not interpolate: ``` im = plt.imshow(..., interpolation='none') ``` 'nearest' will also work for what you want. See smoothing between pixels of imagesc\\imshow in matlab like the matplotlib imshow for examples of all of the kinds of interpolation. 
doc", "best_answers_score":0.8, "library_name":"matplotlib", "question_url":"https:\/\/stackoverflow.com\/questions\/20010882\/how-to-turn-off-blurry-effect-of-imshow-in-matplotlib", "best_answers_votes":82, "question_length":693, "response_length":391 }, { "question":"How to use a (random) *.otf or *.ttf font in matplotlib? How can I use any type of font in my font library on my computer (e.g. *otf or *ttf) in all my matplotlib figures?", "response":"See the example here: http:\/\/matplotlib.sourceforge.net\/examples\/api\/font_file.html In general, you'd do something like this if you're wanting to use a specific .ttf file. (Keep in mind that pointing to a specific font file is usually a bad idea!) ``` import matplotlib.font_manager as fm import matplotlib.pyplot as plt fig, ax = plt.subplots() ax.plot(range(10)) prop = fm.FontProperties(fname='\/usr\/share\/fonts\/truetype\/groovygh.ttf') ax.set_title('This is some random font', fontproperties=prop, size=32) plt.show() ``` Usually, you'd just point to the name of the font, and let matplotlib worry about finding the specific file. E.g. ``` import matplotlib.pyplot as plt plt.plot(range(10)) plt.title('This is some random font', family='GroovyGhosties', size=32) plt.show() ``` If you want to have matplotlib always use a particular font, then customize your .matplotlibrc file. (font.family is what you'd want to set. Note that you should specify the name of the font, not the path to a specific .ttf file.) As an example of doing this dynamically (i.e. 
without setting up a specific .matplotlibrc file): ``` import matplotlib as mpl mpl.rcParams['font.family'] = 'GroovyGhosties' import matplotlib.pyplot as plt plt.plot(range(10)) plt.title('Everything is crazy!!!', size=32) plt.show() ```", "best_answers_score":0.8, "library_name":"matplotlib", "question_url":"https:\/\/stackoverflow.com\/questions\/7726852\/how-to-use-a-random-otf-or-ttf-font-in-matplotlib", "best_answers_votes":75, "question_length":171, "response_length":1296 }, { "question":"Non-ASCII characters in Matplotlib I have a problem displaying non-ASCII characters in Matplotlib, these characters are rendered as small boxes instead of a proper font, it looks like (I filled these boxes with red paint to hightlight them): How do I fix it? A related question is Accented characters in Matplotlib.", "response":"This problem may actually have a couple of different causes: The default font does not include these glyphs You may change the default font using the following (before any plotting is done!) ``` matplotlib.rc('font', family='Arial') ``` In some versions of matplotlib you'll have to set the family: ``` matplotlib.rc('font', **{'sans-serif' : 'Arial', 'family' : 'sans-serif'}) ``` (Note that because sans-serif contains a hyphen inside the **{} syntax, it is actually necessary.) The first command changes the sans-serif font family to contain only one font (in my case it was Arial), the second sets the default font family to sans-serif. Other options are included in the documentation. 
You have improperly created\/passed string objects into Matplotlib Even if the font contains proper glyphs, if you forgot to use u to create Unicode constants, Matplotlib will have this behaviour: ``` plt.xlabel(\"\u015arednia odleg\u0142o\u015b\u0107 mi\u0119dzy stacjami wsparcia a modelowan\u0105 [km]\") ``` So you need to add u: ``` plt.xlabel(u\"\u015arednia odleg\u0142o\u015b\u0107 mi\u0119dzy stacjami wsparcia a modelowan\u0105 [km]\") ``` Another cause is that you forgot to put a UTF-8 magic comment on top of the file (I read that this might be the source of the problem): ``` # -*- coding: utf-8 -*- ```", "best_answers_score":0.8, "library_name":"matplotlib", "question_url":"https:\/\/stackoverflow.com\/questions\/10960463\/non-ascii-characters-in-matplotlib", "best_answers_votes":80, "question_length":315, "response_length":1242 }, { "question":"Specify figure size in centimeter in matplotlib I am wondering whether you can specify the size of a figure in matplotlib in centimeter. At the moment I write: ``` def cm2inch(value): return value\/2.54 fig = plt.figure(figsize=(cm2inch(12.8), cm2inch(9.6))) ``` But is there a native approach?", "response":"This is not an answer to the question ''Is there a native way?'', but I think that there is a more elegant way: ```py def cm2inch(*tupl): inch = 2.54 if isinstance(tupl[0], tuple): return tuple(i\/inch for i in tupl[0]) else: return tuple(i\/inch for i in tupl) ``` Then one can issue plt.figure(figsize=cm2inch(12.8, 9.6)), which I think is a much cleaner way. The implementation also allows us to use cm2inch((12.8, 9.6)), which I personally do not prefer, but some people may do. 
Even though there isn't any way of doing this natively at the moment, I found a discussion here.", "best_answers_score":0.8, "library_name":"matplotlib", "question_url":"https:\/\/stackoverflow.com\/questions\/14708695\/specify-figure-size-in-centimeter-in-matplotlib", "best_answers_votes":34, "question_length":293, "response_length":577 }, { "question":"Legend overlaps with the pie chart Using matplotlib in Python, the legend overlaps with my pie chart. I tried various options for \"loc\" such as \"best\", 1, 2, 3... but to no avail. Any suggestions as to how to either exactly specify the legend position (such as giving padding from the pie chart boundaries) or at least make sure that it does not overlap?
It then acts as a point relative to which the legend will be placed according to the loc argument. It will then expand out of the zero-size bounding box. E.g. if loc is \"upper left\", the upper left corner of the legend is at position (1,1) and the legend will expand to the right and downwards. This concept is used for the above plot, which tells us the shocking truth about the bias in Miss Universe elections. ``` import matplotlib.pyplot as plt import matplotlib.patches total = [100] labels = [\"Earth\", \"Mercury\", \"Venus\", \"Mars\", \"Jupiter\", \"Saturn\", \"Uranus\", \"Neptune\", \"Pluto *\"] plt.title('Origin of Miss Universe since 1952') plt.gca().axis(\"equal\") pie = plt.pie(total, startangle=90, colors=[plt.cm.Set3(0)], wedgeprops = { 'linewidth': 2, \"edgecolor\" :\"k\" }) handles = [] for i, l in enumerate(labels): handles.append(matplotlib.patches.Patch(color=plt.cm.Set3((i)\/8.), label=l)) plt.legend(handles,labels, bbox_to_anchor=(0.85,1.025), loc=\"upper left\") plt.gcf().text(0.93,0.04,\"* out of competition since 2006\", ha=\"right\") plt.subplots_adjust(left=0.1, bottom=0.1, right=0.75) ``` In order for the legend not to exceed the figure, we use plt.subplots_adjust to obtain more space between the figure edge and the axis, which can then be taken up by the legend. There is also the option to use a 4-tuple to bbox_to_anchor. How to use or interprete this is detailed in this question: What does a 4-element tuple argument for 'bbox_to_anchor' mean in matplotlib? and one may then use the mode=\"expand\" argument to make the legend fit into the specified bounding box. There are some useful alternatives to this approach: Using figure coordinates Instead of specifying the legend position in axes coordinates, one may use figure coordinates. The advantage is that this will allow to simply place the legend in one corner of the figure without adjusting much of the rest. 
To this end, one would use the bbox_transform argument and supply the figure transformation to it. The coordinates given to bbox_to_anchor are then interpreted as figure coordinates. ``` plt.legend(pie[0],labels, bbox_to_anchor=(1,0), loc=\"lower right\", bbox_transform=plt.gcf().transFigure) ``` Here (1,0) is the lower right corner of the figure. Because of the default spacings between axes and figure edge, this suffices to place the legend such that it does not overlap with the pie. In other cases, one might still need to adapt those spacings such that no overlap is seen, e.g. ``` title = plt.title('What slows down my computer') title.set_ha(\"left\") plt.gca().axis(\"equal\") pie = plt.pie(total, startangle=0) labels=[\"Trojans\", \"Viruses\", \"Too many open tabs\", \"The anti-virus software\"] plt.legend(pie[0],labels, bbox_to_anchor=(1,0.5), loc=\"center right\", fontsize=10, bbox_transform=plt.gcf().transFigure) plt.subplots_adjust(left=0.0, bottom=0.1, right=0.45) ``` Saving the file with bbox_inches=\"tight\" Now there may be cases where we are more interested in the saved figure than at what is shown on the screen. We may then simply position the legend at the edge of the figure, like so but then save it using the bbox_inches=\"tight\" to savefig, ``` plt.savefig(\"output.png\", bbox_inches=\"tight\") ``` This will create a larger figure, which sits tight around the contents of the canvas: A sophisticated approach, which allows to place the legend tightly inside the figure, without changing the figure size is presented here: Creating figure with exact size and no padding (and legend outside the axes) Using Subplots An alternative is to use subplots to reserve space for the legend. In this case one subplot could take the pie chart, another subplot would contain the legend. This is shown below. 
``` fig = plt.figure(4, figsize=(3,3)) ax = fig.add_subplot(211) total = [4,3,2,81] labels = [\"tough working conditions\", \"high risk of accident\", \"harsh weather\", \"it's not allowed to watch DVDs\"] ax.set_title('What people know about oil rigs') ax.axis(\"equal\") pie = ax.pie(total, startangle=0) ax2 = fig.add_subplot(212) ax2.axis(\"off\") ax2.legend(pie[0],labels, loc=\"center\") ```", "best_answers_score":0.8, "library_name":"matplotlib", "question_url":"https:\/\/stackoverflow.com\/questions\/43272206\/legend-overlaps-with-the-pie-chart", "best_answers_votes":142, "question_length":350, "response_length":5282 }, { "question":"Multiple histograms in Pandas I would like to create the following histogram (see image below) taken from the book \"Think Stats\". However, I cannot get them on the same plot. Each DataFrame takes its own subplot. I have the following code: ``` import nsfg import matplotlib.pyplot as plt df = nsfg.ReadFemPreg() preg = nsfg.ReadFemPreg() live = preg[preg.outcome == 1] first = live[live.birthord == 1] others = live[live.birthord != 1] #fig = plt.figure() #ax1 = fig.add_subplot(111) first.hist(column = 'prglngth', bins = 40, color = 'teal', \\ alpha = 0.5) others.hist(column = 'prglngth', bins = 40, color = 'blue', \\ alpha = 0.5) plt.show() ``` The above code does not work when I use ax = ax1 as suggested in: pandas multiple plots not working as hists nor this example does what I need: Overlaying multiple histograms using pandas. When I use the code as it is, it creates two windows with histograms. Any ideas how to combine them? Here's an example of how I'd like the final figure to look:", "response":"As far as I can tell, pandas can't handle this situation. That's ok since all of their plotting methods are for convenience only. You'll need to use matplotlib directly. 
Here's how I do it: ``` %matplotlib inline import numpy as np import matplotlib.pyplot as plt import pandas #import seaborn #seaborn.set(style='ticks') np.random.seed(0) df = pandas.DataFrame(np.random.normal(size=(37,2)), columns=['A', 'B']) fig, ax = plt.subplots() a_heights, a_bins = np.histogram(df['A']) b_heights, b_bins = np.histogram(df['B'], bins=a_bins) width = (a_bins[1] - a_bins[0])\/3 ax.bar(a_bins[:-1], a_heights, width=width, facecolor='cornflowerblue') ax.bar(b_bins[:-1]+width, b_heights, width=width, facecolor='seagreen') #seaborn.despine(ax=ax, offset=10) ``` And that gives me:", "best_answers_score":0.8, "library_name":"matplotlib", "question_url":"https:\/\/stackoverflow.com\/questions\/25539195\/multiple-histograms-in-pandas", "best_answers_votes":57, "question_length":997, "response_length":770 }, { "question":"Setting axes.linewidth without changing the rcParams global dict So, it seems one cannot do the following (it raises an error, since axes does not have a set_linewidth method): ``` axes_style = {'linewidth':5} axes_rect = [0.1, 0.1, 0.9, 0.9] axes(axes_rect, **axes_style) ``` and has to use the following old trick instead: ``` rcParams['axes.linewidth'] = 5 # set the value globally ... # some code rcdefaults() # restore [global] defaults ``` Is there an easy \/ clean way (may be one can set x- and y- axes parameters individually, etc)? If no, why?", "response":"This answer does not work, as it is explained in the comments. I suggest using spines. As mentioned in a comment by Czechnology, consider changing the ticks too. 
```py import matplotlib.pyplot as plt fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(10, 4)) ax1.set_title('Normal spine and ticks') ax2.set_title('Adjusted spine and ticks') # change each spine separately: # ax.spines['right'].set_linewidth(0.5) # change all spines for axis in ['top','bottom','left','right']: ax2.spines[axis].set_linewidth(4) # increase tick width ax2.tick_params(width=4) plt.show() ``` see more about spines at: http:\/\/matplotlib.org\/api\/spines_api.html http:\/\/matplotlib.org\/examples\/pylab_examples\/multiple_yaxis_with_spines.html", "best_answers_score":0.8, "library_name":"matplotlib", "question_url":"https:\/\/stackoverflow.com\/questions\/2553521\/setting-axes-linewidth-without-changing-the-rcparams-global-dict", "best_answers_votes":103, "question_length":552, "response_length":714 }, { "question":"matplotlib: disregard outliers when plotting I'm plotting some data from various tests. Sometimes in a test I happen to have one outlier (say 0.1), while all other values are three orders of magnitude smaller. With matplotlib, I plot against the range [0, max_data_value] How can I just zoom into my data and not display outliers, which would mess up the x-axis in my plot? Should I simply take the 95 percentile and have the range [0, 95_percentile] on the x-axis?", "response":"There's no single \"best\" test for an outlier. Ideally, you should incorporate a-priori information (e.g. \"This parameter shouldn't be over x because of blah...\"). Most tests for outliers use the median absolute deviation, rather than the 95th percentile or some other variance-based measurement. Otherwise, the variance\/stddev that is calculated will be heavily skewed by the outliers. Here's a function that implements one of the more common outlier tests. ``` def is_outlier(points, thresh=3.5): \"\"\" Returns a boolean array with True if points are outliers and False otherwise. 
Parameters: ----------- points : An numobservations by numdimensions array of observations thresh : The modified z-score to use as a threshold. Observations with a modified z-score (based on the median absolute deviation) greater than this value will be classified as outliers. Returns: -------- mask : A numobservations-length boolean array. References: ---------- Boris Iglewicz and David Hoaglin (1993), \"Volume 16: How to Detect and Handle Outliers\", The ASQC Basic References in Quality Control: Statistical Techniques, Edward F. Mykytka, Ph.D., Editor. \"\"\" if len(points.shape) == 1: points = points[:,None] median = np.median(points, axis=0) diff = np.sum((points - median)**2, axis=-1) diff = np.sqrt(diff) med_abs_deviation = np.median(diff) modified_z_score = 0.6745 * diff \/ med_abs_deviation return modified_z_score > thresh ``` As an example of using it, you'd do something like the following: ``` import numpy as np import matplotlib.pyplot as plt # The function above... In my case it's in a local utilities module from sci_utilities import is_outlier # Generate some data x = np.random.random(100) # Append a few \"bad\" points x = np.r_[x, -3, -10, 100] # Keep only the \"good\" points # \"~\" operates as a logical not operator on boolean numpy arrays filtered = x[~is_outlier(x)] # Plot the results fig, (ax1, ax2) = plt.subplots(nrows=2) ax1.hist(x) ax1.set_title('Original') ax2.hist(filtered) ax2.set_title('Without Outliers') plt.show() ```", "best_answers_score":0.8, "library_name":"matplotlib", "question_url":"https:\/\/stackoverflow.com\/questions\/11882393\/matplotlib-disregard-outliers-when-plotting", "best_answers_votes":81, "question_length":465, "response_length":2037 }, { "question":"Overlay an image segmentation with numpy and matplotlib I am trying to overlay two images. The first one is a 512x512 NumPy array (from a CT image). 
The second one is also a 512x512 NumPy array but I am just interested in the pixels where the value is larger than 0 (a functional image). To do that I am trying to create a masked array. ``` import numpy as np import numpy.ma as ma import matplotlib.pyplot as plt # Both images are loaded from a dicom. Both are numpy arrays of (512,512) Image1 = readimage(path) Image2 = readimage(path) # Create image 2 mask mask = ma.masked_where(Image2>0, Image2) Image2_mask = ma.masked_array(Image2,mask) # Plot images plt.figure(dpi=300) y, x = np.mgrid[1:513,1:513] plt.axes().set_aspect('equal', 'datalim') plt.set_cmap(plt.gray()) plt.pcolormesh(x, y, Image1,cmap='gray') plt.pcolormesh(x, y, Image2_mask,cmap='jet') plt.axis([x.min(), x.max(), y.min(), y.max()]) plt.colorbar() plt.show() ``` This code does not show any overlay. What am I doing wrong? Is there a more straightforward way? I am coming from a Matlab environment and I am quite new to Python.", "response":"Why don't you use imshow instead? You can plot a 2D image by doing: ``` plt.imshow(Image1, cmap='gray') # I would add interpolation='none' ``` Afterwards, you can easily overlay the segmentation by doing: ``` plt.imshow(Image2_mask, cmap='jet', alpha=0.5) # interpolation='none' ``` Changing the alpha will change the opacity of the overlay. Additionally, why do you create 2 masks?
Only one should be enough, you can do: ``` Image2_mask = ma.masked_array(Image2 > 0, Image2) ``` Practical example: ``` import numpy as np mask = np.zeros((10,10)) mask[3:-3, 3:-3] = 1 # white square in black background im = mask + np.random.randn(10,10) * 0.01 # random image masked = np.ma.masked_where(mask == 0, mask) import matplotlib.pyplot as plt plt.figure() plt.subplot(1,2,1) plt.imshow(im, 'gray', interpolation='none') plt.subplot(1,2,2) plt.imshow(im, 'gray', interpolation='none') plt.imshow(masked, 'jet', interpolation='none', alpha=0.7) plt.show() ```", "best_answers_score":0.8, "library_name":"matplotlib", "question_url":"https:\/\/stackoverflow.com\/questions\/31877353\/overlay-an-image-segmentation-with-numpy-and-matplotlib", "best_answers_votes":99, "question_length":1091, "response_length":950 }, { "question":"How do you create a legend for a contour plot? I can't seem to find the answer anywhere! I found a discussion here, but trying this I get a TypeError: 'NoneType' object is not iterable: ``` >>> import numpy as np >>> import matplotlib.pyplot as plt >>> x, y = np.meshgrid(np.arange(10),np.arange(10)) >>> z = x + y >>> cs = plt.contourf(x,y,z,levels=[2,3]) >>> cs.collections[0].set_label('test') >>> plt.legend() Traceback (most recent call last): File \"\", line 1, in File \"\/opt\/local\/Library\/Frameworks\/Python.framework\/Versions\/2.7\/lib\/python2.7\/site-packages\/matplotlib\/pyplot.py\", line 2791, in legend ret = gca().legend(*args, **kwargs) File \"\/opt\/local\/Library\/Frameworks\/Python.framework\/Versions\/2.7\/lib\/python2.7\/site-packages\/matplotlib\/axes.py\", line 4475, in legend self.legend_ = mlegend.Legend(self, handles, labels, **kwargs) File \"\/opt\/local\/Library\/Frameworks\/Python.framework\/Versions\/2.7\/lib\/python2.7\/site-packages\/matplotlib\/legend.py\", line 365, in __init__ self._init_legend_box(handles, labels) File 
\"\/opt\/local\/Library\/Frameworks\/Python.framework\/Versions\/2.7\/lib\/python2.7\/site-packages\/matplotlib\/legend.py\", line 627, in _init_legend_box handlebox) File \"\/opt\/local\/Library\/Frameworks\/Python.framework\/Versions\/2.7\/lib\/python2.7\/site-packages\/matplotlib\/legend_handler.py\", line 110, in __call__ handlebox.get_transform()) File \"\/opt\/local\/Library\/Frameworks\/Python.framework\/Versions\/2.7\/lib\/python2.7\/site-packages\/matplotlib\/legend_handler.py\", line 352, in create_artists width, height, fontsize) File \"\/opt\/local\/Library\/Frameworks\/Python.framework\/Versions\/2.7\/lib\/python2.7\/site-packages\/matplotlib\/legend_handler.py\", line 307, in get_sizes size_max = max(orig_handle.get_sizes())*legend.markerscale**2 TypeError: 'NoneType' object is not iterable ``` EDIT: I'm looking for something like this:", "response":"You could also do it directly with the lines of the contour, without using proxy artists. ``` import matplotlib import numpy as np import matplotlib.cm as cm import matplotlib.mlab as mlab import matplotlib.pyplot as plt matplotlib.rcParams['xtick.direction'] = 'out' matplotlib.rcParams['ytick.direction'] = 'out' delta = 0.025 x = np.arange(-3.0, 3.0, delta) y = np.arange(-2.0, 2.0, delta) X, Y = np.meshgrid(x, y) Z1 = mlab.bivariate_normal(X, Y, 1.0, 1.0, 0.0, 0.0) Z2 = mlab.bivariate_normal(X, Y, 1.5, 0.5, 1, 1) # difference of Gaussians Z = 10.0 * (Z2 - Z1) # Create a simple contour plot with labels using default colors. 
The # inline argument to clabel will control whether the labels are drawn # over the line segments of the contour, removing the lines beneath # the label plt.figure() CS = plt.contour(X, Y, Z) plt.clabel(CS, inline=1, fontsize=10) plt.title('Simplest default with labels') labels = ['line1', 'line2','line3','line4', 'line5', 'line6'] for i in range(len(labels)): CS.collections[i].set_label(labels[i]) plt.legend(loc='upper left') ``` Will produce: However, you might also want to look into annotations for your own needs. In my opinion it will give you more fine-grained control over where and what you write on the image; here is the same example with some annotation: ``` ### better with annotation, more flexible plt.figure(2) CS = plt.contour(X, Y, Z) plt.clabel(CS, inline=1, fontsize=10) plt.title('Simplest default with labels') plt.annotate('some text here',(1.4,1.6)) plt.annotate('some text there',(-2,-1.5)) ```", "best_answers_score":0.8, "library_name":"matplotlib", "question_url":"https:\/\/stackoverflow.com\/questions\/10490302\/how-do-you-create-a-legend-for-a-contour-plot", "best_answers_votes":63, "question_length":1832, "response_length":1554 }, { "question":"How to format axis tick labels from number to thousands or Millions (125,436 to 125.4K) ``` import matplotlib.pyplot as plt import matplotlib.ticker as ticker import seaborn as sns import pandas as pd sns.set(style=\"darkgrid\") fig, ax = plt.subplots(figsize=(8, 5)) palette = sns.color_palette(\"bright\", 6) g = sns.scatterplot(ax=ax, x=\"Area\", y=\"Rent\/Sqft\", hue=\"Region\", marker='o', data=df, s=100, palette= palette) g.legend(bbox_to_anchor=(1, 1), ncol=1) g.set(xlim = (50000,250000)) ``` How can I change the axis format from a number to custom format?
For example, 125000 to 125.00K", "response":"IIUC you can format the xticks and set these: ``` In[60]: #generate some pseudo data df = pd.DataFrame({'num':[50000, 75000, 100000, 125000], 'Rent\/Sqft':np.random.randn(4), 'Region':list('abcd')}) df Out[60]: num Rent\/Sqft Region 0 50000 0.109196 a 1 75000 0.566553 b 2 100000 -0.274064 c 3 125000 -0.636492 d In[61]: import matplotlib.pyplot as plt import matplotlib.ticker as ticker import seaborn as sns import pandas as pd sns.set(style=\"darkgrid\") fig, ax = plt.subplots(figsize=(8, 5)) palette = sns.color_palette(\"bright\", 4) g = sns.scatterplot(ax=ax, x=\"num\", y=\"Rent\/Sqft\", hue=\"Region\", marker='o', data=df, s=100, palette= palette) g.legend(bbox_to_anchor=(1, 1), ncol=1) g.set(xlim = (50000,250000)) xlabels = ['{:,.2f}'.format(x) + 'K' for x in g.get_xticks()\/1000] g.set_xticklabels(xlabels) Out[61]: ``` The key bit here is this line: ``` xlabels = ['{:,.2f}'.format(x) + 'K' for x in g.get_xticks()\/1000] g.set_xticklabels(xlabels) ``` So this divides all the ticks by 1000 and then formats them and sets the xtick labels. UPDATE Thanks to @ScottBoston who has suggested a better method: ``` ax.xaxis.set_major_formatter(ticker.FuncFormatter(lambda x, pos: '{:,.2f}'.format(x\/1000) + 'K')) ``` see the docs
You can edit the question so it can be answered with facts and citations. Closed 10 years ago. What alternatives are there to pylab for plotting in Python? In particular, I'm looking for something that doesn't use the stateful model that pylab does.", "response":"Plotly lets you make graphs using a Python API, matplotlib, and pandas. Their IPython gallery has some example scientific graphs with the Python scripts that generated them. Here's a sample: Some recent exciting open source offerings: ggplot is based on R's ggplot2, with aesthetically pleasing defaults and a really concise API; it wants to be a matplotlib killer. bokeh makes interactive (html canvas) plots, with an emphasis on interactivity and handling big data. vega translates JSON \"plot descriptions\" into SVG or Canvas-based interactive plots, and vincent is a declarative interface for generating the JSON specifications.", "best_answers_score":0.8, "library_name":"matplotlib", "question_url":"https:\/\/stackoverflow.com\/questions\/915940\/python-plotting-libraries", "best_answers_votes":48, "question_length":643, "question":"", "response_length":636 }, { "question":"How to create a stacked bar chart for my DataFrame using seaborn [duplicate] This question already has answers here: How to have clusters of stacked bars (10 answers) Closed 7 years ago.
I have a DataFrame df: ```py df = pd.DataFrame(columns=[\"App\",\"Feature1\", \"Feature2\",\"Feature3\", \"Feature4\",\"Feature5\", \"Feature6\",\"Feature7\",\"Feature8\"], data=[['SHA', 0, 0, 1, 1, 1, 0, 1, 0], ['LHA', 1, 0, 1, 1, 0, 1, 1, 0], ['DRA', 0, 0, 0, 0, 0, 0, 1, 0], ['FRA', 1, 0, 1, 1, 1, 0, 1, 1], ['BRU', 0, 0, 1, 0, 1, 0, 0, 0], ['PAR', 0, 1, 1, 1, 1, 0, 1, 0], ['AER', 0, 0, 1, 1, 0, 1, 1, 0], ['SHE', 0, 0, 0, 1, 0, 0, 1, 0]]) # display(df) App Feature1 Feature2 Feature3 Feature4 Feature5 Feature6 Feature7 Feature8 0 SHA 0 0 1 1 1 0 1 0 1 LHA 1 0 1 1 0 1 1 0 2 DRA 0 0 0 0 0 0 1 0 3 FRA 1 0 1 1 1 0 1 1 4 BRU 0 0 1 0 1 0 0 0 5 PAR 0 1 1 1 1 0 1 0 6 AER 0 0 1 1 0 1 1 0 7 SHE 0 0 0 1 0 0 1 0 ``` I want to create a stacked bar chart so that each stack would correspond to App while the Y axis would contain the count of 1 values and the X axis would be Feature. It should be similar to this bar chart with the only difference that now I want to see stacked bars and a legend with colors: ```py df_c = df.iloc[:, 1:].eq(1).sum().rename_axis('Feature').reset_index(name='Count') df_c = df_c.sort_values('Count') plt.figure(figsize=(12,8)) ax = sns.barplot(x=\"Feature\", y='Count', data=df_c, palette=sns.color_palette(\"GnBu\", 10)) plt.xticks(rotation='vertical') ax.grid(b=True, which='major', color='#d3d3d3', linewidth=1.0) ax.grid(b=True, which='minor', color='#d3d3d3', linewidth=0.5) plt.show() ```", "response":"You could use pandas plot as @Bharath suggests: ``` import seaborn as sns sns.set() df.set_index('App').T.plot(kind='bar', stacked=True) ``` Output: Updated: from matplotlib.colors import ListedColormap df.set_index('App')\\ .reindex_axis(df.set_index('App').sum().sort_values().index, axis=1)\\ .T.plot(kind='bar', stacked=True, colormap=ListedColormap(sns.color_palette(\"GnBu\", 10)), figsize=(12,6)) Updated Pandas 0.21.0+ reindex_axis is deprecated, use reindex ``` from matplotlib.colors import ListedColormap df.set_index('App')\\
.reindex(df.set_index('App').sum().sort_values().index, axis=1)\\ .T.plot(kind='bar', stacked=True, colormap=ListedColormap(sns.color_palette(\"GnBu\", 10)), figsize=(12,6)) ``` Output:", "best_answers_score":0.8, "library_name":"matplotlib", "question_url":"https:\/\/stackoverflow.com\/questions\/47138271\/how-to-create-a-stacked-bar-chart-for-my-dataframe-using-seaborn", "best_answers_votes":87, "question_length":1588, "response_length":714 }, { "question":"Draw graph in NetworkX I'm trying to draw any graph in NetworkX, but get nothing, not even errors: ``` import networkx as nx import matplotlib.pyplot as plt g1=nx.petersen_graph() nx.draw(g1) ```", "response":"Add to the end: ``` plt.show() ``` ``` import networkx as nx import matplotlib.pyplot as plt g1 = nx.petersen_graph() nx.draw(g1) plt.show() ``` When run from an interactive shell where plt.ion() has been called, the plt.show() is not needed. This is probably why it is omitted in a lot of examples. If you run these commands from a script (where plt.ion() has not been called), the plt.show() is needed. plt.ion() is okay for interactive sessions, but is not recommended for scripts.", "best_answers_score":0.8, "library_name":"matplotlib", "question_url":"https:\/\/stackoverflow.com\/questions\/19212979\/draw-graph-in-networkx", "best_answers_votes":75, "question_length":195, "response_length":484 }, { "question":"Plot with fewer markers than data points (or a better way to plot CDFs?) [matplotlib, or general plotting help] I am plotting Cumulative Distribution Functions, with a large number of data points. I am plotting a few lines on the same plot, which are identified with markers as it will be printed in black and white. What I would like are markers evenly spaced in the x-dimension. What I am getting is one marker per data point (and given the number of points, they all overlap) I'm not sure if it's my understanding of how to plot well, or just a lack of understanding matplotlib. 
I can't find a 'marker frequency' setting. An easy solution for one line would be to take every N'th value from the line, and use that as a separate line with linestyle='', but I would like the markers to be vertically aligned, and the different x arrays have different lengths. ``` # in reality, many thousands of values x_example = [ 567, 460, 66, 1034, 275, 26, 628, 99, 287, 157, 705, 421, 1093, \\ 139, 204, 14, 240, 179, 94, 139, 645, 670, 47, 520, 891, 450, 56, 964, \\ 1728, 99, 277, 356, 1628, 745, 364, 88, 112, 810, 816, 523, 401, 89, \\ 278, 917, 370, 53, 39, 90, 853, 356 ] x = sort(x_example) y = linspace(0,1,len(x)) ax = subplot(1,1,1) plots[w] = ax.plot(x,y, marker='o') ```", "response":"You can do plot(x,y,marker='o',markevery=5) to mark every fifth point, but I don't think there is any built-in support for setting marks at even intervals. You could decide on the x locations where you want the marks, use e.g. numpy.searchsorted to find which data points the locations fall between, and then interpolate between the neighboring points to find the y coordinates.", "best_answers_score":0.8, "library_name":"matplotlib", "question_url":"https:\/\/stackoverflow.com\/questions\/2040306\/plot-with-fewer-markers-than-data-points-or-a-better-way-to-plot-cdfs-matplo", "best_answers_votes":76, "question_length":1270, "response_length":378 }, { "question":"Adding y=x to a matplotlib scatter plot if I haven't kept track of all the data points that went in Here's some code that does scatter plot of a number of different series using matplotlib and then adds the line y=x: ``` import numpy as np, matplotlib.pyplot as plt, matplotlib.cm as cm, pylab nseries = 10 colors = cm.rainbow(np.linspace(0, 1, nseries)) all_x = [] all_y = [] for i in range(nseries): x = np.random.random(12)+i\/10.0 y = np.random.random(12)+i\/5.0 plt.scatter(x, y, color=colors[i]) all_x.extend(x) all_y.extend(y) # Could I somehow do the next part (add identity_line) if I haven't been keeping track of 
all the x and y values I've seen? identity_line = np.linspace(max(min(all_x), min(all_y)), min(max(all_x), max(all_y))) plt.plot(identity_line, identity_line, color=\"black\", linestyle=\"dashed\", linewidth=3.0) plt.show() ``` In order to achieve this I've had to keep track of all the x and y values that went into the scatter plot so that I know where identity_line should start and end. Is there a way I can get y=x to show up even if I don't have a list of all the points that I plotted? I would think that something in matplotlib can give me a list of all the points after the fact, but I haven't been able to figure out how to get that list.", "response":"You don't need to know anything about your data per se. You can get away with what your matplotlib Axes object will tell you about the data. See below: ``` import numpy as np import matplotlib.pyplot as plt # random data N = 37 x = np.random.normal(loc=3.5, scale=1.25, size=N) y = np.random.normal(loc=3.4, scale=1.5, size=N) c = x**2 + y**2 # now sort it just to make it look like it's related x.sort() y.sort() fig, ax = plt.subplots() ax.scatter(x, y, s=25, c=c, cmap=plt.cm.coolwarm, zorder=10) ``` Here's the good part: ``` lims = [ np.min([ax.get_xlim(), ax.get_ylim()]), # min of both axes np.max([ax.get_xlim(), ax.get_ylim()]), # max of both axes ] # now plot both limits against eachother ax.plot(lims, lims, 'k-', alpha=0.75, zorder=0) ax.set_aspect('equal') ax.set_xlim(lims) ax.set_ylim(lims) fig.savefig('\/Users\/paul\/Desktop\/so.png', dpi=300) ``` Et voil\u00e0", "best_answers_score":0.8, "library_name":"matplotlib", "question_url":"https:\/\/stackoverflow.com\/questions\/25497402\/adding-y-x-to-a-matplotlib-scatter-plot-if-i-havent-kept-track-of-all-the-data", "best_answers_votes":75, "question_length":1266, "response_length":870 }, { "question":"Plot histogram with colors taken from colormap I want to plot a simple 1D histogram where the bars should follow the color-coding of a given colormap. 
Here's an MWE: ``` import numpy as n import matplotlib.pyplot as plt # Random gaussian data. Ntotal = 1000 data = 0.05 * n.random.randn(Ntotal) + 0.5 # This is the colormap I'd like to use. cm = plt.cm.get_cmap('RdYlBu_r') # Plot histogram. n, bins, patches = plt.hist(data, 25, normed=1, color='green') plt.show() ``` which outputs this: Instead of the color being green for the entire histogram, I'd like the columns to follow a color-coding given by the colormap defined in cm and the values of the bins. This would mean that bins closer to zero (not in height but in position) should look bluer and those closer to one redder, according to the chosen colormap RdYlBu_r. Since plt.hist doesn't take a cmap argument I don't know how to tell it to use the colormap defined in cm.", "response":"The hist command returns a list of patches, so you can iterate over them and set their color like so: ``` import numpy as n import matplotlib.pyplot as plt # Random gaussian data. Ntotal = 1000 data = 0.05 * n.random.randn(Ntotal) + 0.5 # This is the colormap I'd like to use. cm = plt.cm.get_cmap('RdYlBu_r') # Plot histogram. n, bins, patches = plt.hist(data, 25, normed=1, color='green') bin_centers = 0.5 * (bins[:-1] + bins[1:]) # scale values to interval [0,1] col = bin_centers - min(bin_centers) col \/= max(col) for c, p in zip(col, patches): plt.setp(p, 'facecolor', cm(c)) plt.show() ``` To get the colors, you need to call the colormap with a value between 0 and 1. Resulting figure:", "best_answers_score":0.8, "library_name":"matplotlib", "question_url":"https:\/\/stackoverflow.com\/questions\/23061657\/plot-histogram-with-colors-taken-from-colormap", "best_answers_votes":55, "question_length":932, "response_length":694 }, { "question":"Tweaking seaborn.boxplot I would like to compare a set of distributions of scores (score), grouped by some categories (centrality) and colored by some other (model).
I've tried the following with seaborn: ``` plt.figure(figsize=(14,6)) seaborn.boxplot(x=\"centrality\", y=\"score\", hue=\"model\", data=data, palette=seaborn.color_palette(\"husl\", len(models) +1)) seaborn.despine(offset=10, trim=True) plt.savefig(\"\/home\/i11\/staudt\/Eval\/properties-replication-test.pdf\", bbox_inches=\"tight\") ``` There are some problems I have with this plot: There is a large amount of outliers and I don't like how they are drawn here. Can I remove them? Can I change the appearance to show less clutter? Can I color them at least so that their color matches the box color? The model value original is special because all other distributions should be compared to the distribution of original. This should be visually reflected in the plot. Can I make original the first box of every group? Can I offset or mark it differently somehow? Would it be possible to draw a horizontal line through the median of each original distribution and through the group of boxes? some of the values of score are very small, how to do proper scaling of the y-axis to show them? EDIT: Here is an example with a log-scaled y-axis - also not yet ideal. Why do the some boxes seem cut off at the low end?", "response":"Outlier display You should be able to pass any arguments to seaborn.boxplot that you can pass to plt.boxplot (see documentation), so you could adjust the display of the outliers by setting flierprops. Here are some examples of what you can do with your outliers. If you don't want to display them, you could do ``` seaborn.boxplot(x=\"centrality\", y=\"score\", hue=\"model\", data=data, showfliers=False) ``` or you could make them light gray like so: ``` flierprops = dict(markerfacecolor='0.75', markersize=5, linestyle='none') seaborn.boxplot(x=\"centrality\", y=\"score\", hue=\"model\", data=data, flierprops=flierprops) ``` Order of groups You can set the order of the groups manually with hue_order, e.g. 
``` seaborn.boxplot(x=\"centrality\", y=\"score\", hue=\"model\", data=data, hue_order=[\"original\", \"Havel..\",\"etc\"]) ``` Scaling of y-axis You could just get the minimum and maximum values of all y-values and set y_lim accordingly? Something like this: ``` y_values = data[\"scores\"].values seaborn.boxplot(x=\"centrality\", y=\"score\", hue=\"model\", data=data, y_lim=(np.min(y_values),np.max(y_values))) ``` EDIT: This last point doesn't really make sense since the automatic y_lim range will already include all the values, but I'm leaving it just as an example of how to adjust these settings. As mentioned in the comments, log-scaling probably makes more sense.", "best_answers_score":0.8, "library_name":"matplotlib", "question_url":"https:\/\/stackoverflow.com\/questions\/35131798\/tweaking-seaborn-boxplot", "best_answers_votes":87, "question_length":1362, "response_length":1356 }, { "question":"How to remove outline of circle marker when using pyplot.plot in matplotlib I'm producing a scatter plot using pyplot.plot (instead of scatter - I'm having difficulties with the colormap) I am plotting using the 'o' marker to get a circle, but the circle always has a black outline. How do I remove the outline, or adjust its colour?", "response":"To remove the outline of a marker, and adjust its color, use markeredgewidth (aka mew), and markeredgecolor (aka mec) respectively. 
Using this as a guide: ``` import numpy as np import matplotlib.pyplot as plt x = np.arange(0, 5, 0.1) y = np.sin(x) plt.plot(x, y, color='blue', marker='o', fillstyle='full', markeredgecolor='red', markeredgewidth=0.0) ``` This produces: As you notice, even though the marker edge color is set, because its width is set to zero it doesn't show up.", "best_answers_score":0.8, "library_name":"matplotlib", "question_url":"https:\/\/stackoverflow.com\/questions\/28403179\/how-to-remove-outline-of-circle-marker-when-using-pyplot-plot-in-matplotlib", "best_answers_votes":55, "question_length":333, "response_length":486 }, { "question":"matplotlib Axes.plot() vs pyplot.plot() What is the difference between the Axes.plot() and pyplot.plot() methods? Does one use another as a subroutine? It seems that my options for plotting are ``` line = plt.plot(data) ``` or ``` ax = plt.axes() line = ax.plot(data) ``` or even ``` fig = plt.figure() ax = fig.add_axes([0,0,1,1]) line = ax.plot(data) ``` Are there situations where it is preferable to use one over the other?", "response":"For drawing a single plot, the best practice is probably ``` fig = plt.figure() plt.plot(data) fig.show() ``` Now, let's take a look into the 3 examples from the question and explain what they do. Takes the current figure and axes (if none exists it will create a new one) and plots into them. ``` line = plt.plot(data) ``` In your case, the behavior is the same as before, with the axes for plot stated explicitly. ``` ax = plt.axes() line = ax.plot(data) ``` This approach of using ax.plot(...) is a must, if you want to plot into multiple axes (possibly in one figure). For example when using subplots. Explicitly creates a new figure - you will not add anything to the previous one. Explicitly creates a new axes with a given rectangle shape and the rest is the same as with 2.
``` fig = plt.figure() ax = fig.add_axes([0,0,1,1]) line = ax.plot(data) ``` A possible problem using figure.add_axes is that it may add a new axes object to the figure, which will overlay the first one (or others). This happens if the requested size does not match the existing ones.", "best_answers_score":0.8, "library_name":"matplotlib", "question_url":"https:\/\/stackoverflow.com\/questions\/43482191\/matplotlib-axes-plot-vs-pyplot-plot", "best_answers_votes":32, "question_length":427, "response_length":1049 }, { "question":"Since matplotlib.finance has been deprecated, how can I use the new mpl_finance module? I am trying to import the matplotlib.finance module in python so that I can make a Candlestick OCHL graph. My matplotlib.pyplot version is 2.00. I've tried to import it using the following commands: ``` import matplotlib.finance from matplotlib.finance import candlestick_ohlc ``` I get this error: warnings.warn(message, mplDeprecation, stacklevel=1) MatplotlibDeprecationWarning: The finance module has been deprecated in mpl 2.0 and will be removed in mpl 2.2. Please use the module mpl_finance instead. Then instead of using the above lines in python I tried using the following line: ``` import mpl_finance ``` I get this error: ImportError: No module named 'mpl_finance' What should I do to import candlestick from matplotlib.pyplot?", "response":"I've stopped using mpl_finance (and plotly) since they are too slow. Instead I've written an optimized finance plotting library, finplot, which I use to backtest up to 10^7 candles.
Here's a small example: ``` import yfinance as yf import finplot as fplt df = yf.download('SPY',start='2018-01-01', end = '2020-04-29') fplt.candlestick_ochl(df[['Open','Close','High','Low']]) fplt.plot(df.Close.rolling(50).mean()) fplt.plot(df.Close.rolling(200).mean()) fplt.show() ``` Examples included show SMA, EMA, Bollinger bands, Accumulation\/Distribution, Heikin Ashi, on balance volume, RSI, TD sequential, MACD, scatter plot indicators, heat maps, histograms, real-time updating charts and interactive measurements; all with sensible defaults ready for use. I do dogfooding every day, drop me a note or a pull request if there is something you want. Hope you take it for a spin!", "best_answers_score":0.8, "library_name":"matplotlib", "question_url":"https:\/\/stackoverflow.com\/questions\/42373104\/since-matplotlib-finance-has-been-deprecated-how-can-i-use-the-new-mpl-finance", "best_answers_votes":52, "question_length":823, "response_length":870 }, { "question":"Adjust Axes3D label positioning I am having trouble with axes labels overlapping ticks labels in matplotlib. I've tried to reposition the labels \"manually\" by applying transforms or by calling set_y(), but no avail. Here's a snippet that reproduces the problem: ``` import matplotlib matplotlib.use(\"TKAGG\") import matplotlib.pyplot as pyplot import mpl_toolkits.mplot3d figure = pyplot.figure() figure.subplots_adjust(bottom=0.25, top=0.75) axes = figure.gca(projection='3d') xLabel = axes.set_xlabel('XXX xxxxxx xxxx x xx x') yLabel = axes.set_ylabel('YY (y) yyyyyy') zLabel = axes.set_zlabel('Z zzzz zzz (z)') plot = axes.plot([1,2,3],[1,2,3]) pyplot.show() ``` Note how the x and y labels clash with the ticks. Can I solve this elegantly ?", "response":"I share your frustration. I worked on it for a good half hour and got nowhere. The docs say set_xlabel takes an arg labelpad but I get an error (AttributeError: Unknown property labelpad)! 
Setting it after the fact doesn't do anything, on xaxis or w_xaxis. Here's a crude workaround: ``` import matplotlib matplotlib.use(\"TKAGG\") import matplotlib.pyplot as pyplot import mpl_toolkits.mplot3d figure = pyplot.figure(figsize=(8,4), facecolor='w') ax = figure.gca(projection='3d') xLabel = ax.set_xlabel('\\nXXX xxxxxx xxxx x xx x', linespacing=3.2) yLabel = ax.set_ylabel('\\nYY (y) yyyyyy', linespacing=3.1) zLabel = ax.set_zlabel('\\nZ zzzz zzz (z)', linespacing=3.4) plot = ax.plot([1,2,3],[1,2,3]) ax.dist = 10 pyplot.show() ```", "best_answers_score":0.8, "library_name":"matplotlib", "question_url":"https:\/\/stackoverflow.com\/questions\/5525782\/adjust-axes3d-label-positioning", "best_answers_votes":55, "question_length":743, "response_length":728 }, { "question":"Matplotlib, horizontal bar chart (barh) is upside-down TL;DR, the vertical bar charts are shown in a conventional way -- things line up from left to right. However, when it is converted to a horizontal bar chart (from bar to barh), everything is upside-down. I.e., for a grouped bar chart, not only is the order of the grouped bars wrong, the order within each group is wrong as well. For example, the graph from http:\/\/dwheelerau.com\/2014\/05\/28\/pandas-data-analysis-new-zealanders-and-their-sheep\/ If you look closely, you will find that the bars and legend are in reverse order -- Beef shows on top in the legend but on the bottom in the graph. As the simplest demo, I changed kind='bar', to kind='barh', from this graph https:\/\/plot.ly\/pandas\/bar-charts\/#pandas-grouped-bar-chart and the result looks like this: https:\/\/plot.ly\/7\/~xpt\/ I.e., the bars in the horizontal grouped bar chart are ordered upside-down. How to fix it? EDIT: @Ajean, it is actually not only the order of the grouped bars that is wrong, the order within each group is wrong as well. The graph from Simple customization of matplotlib\/pandas bar chart (labels, ticks, etc.)
shows it clearly: We can see that the order is unconventional too, because people would expect the graph to be top-down, with \"AAA\" at the top, not the bottom. If you search for \"Excel upside-down\", you will find people complaining about this in Excel all over the place. Microsoft Excel has a fix for it; do Matplotlib\/pandas\/seaborn\/Plotly\/etc. have a fix for it?", "response":"I believe the joint wrong order of groups and subgroups boils down to a single feature: that the y axis increases upwards, as in a usual plot. Try reversing the y axis of your axes as in this pandas-less example: ```py import numpy as np import matplotlib.pyplot as plt x = range(5) y = np.random.randn(5) # plot 1: bar plt.figure() plt.bar(x, y) # plot 2: barh, wrong order plt.figure() plt.barh(x, y) # plot 3: barh with correct order: top-down y axis plt.figure() plt.barh(x, y) plt.gca().invert_yaxis() plt.show() ``` Specifically for pandas, pandas.DataFrame.plot and its various plotting submethods return a matplotlib axes object, so you can invert its y axis directly: ```py ax = df.plot.barh() # or df.plot(), or similar ax.invert_yaxis() ```", "best_answers_score":0.8, "library_name":"matplotlib", "question_url":"https:\/\/stackoverflow.com\/questions\/34076177\/matplotlib-horizontal-bar-chart-barh-is-upside-down", "best_answers_votes":100, "question_length":1501, "response_length":751 }, { "question":"`Sudo pip install matplotlib` fails to find freetype headers. [OS X Mavericks \/ 10.9] [closed] Closed. This question does not meet Stack Overflow guidelines. It is not currently accepting answers. This question does not appear to be about a specific programming problem, a software algorithm, or software tools primarily used by programmers. If you believe the question would be on-topic on another Stack Exchange site, you can leave a comment to explain where the question may be able to be answered. Closed 11 years ago.
Improve this question I already have matplotlib-1.2.1 installed as well as numpy-1.8.0. Note - I am using system python with homebrew installed - I have $PYTHONPATH set so Python loads from \/Library\/Python\/x.y\/site-packages (where pip installs to). Here is the code for installing matplotlib (the configurations) ``` BUILDING MATPLOTLIB matplotlib: yes [1.3.1] python: yes [2.7.5 (default, Aug 25 2013, 00:04:04) [GCC 4.2.1 Compatible Apple LLVM 5.0 (clang-500.0.68)]] platform: yes [darwin] REQUIRED DEPENDENCIES AND EXTENSIONS numpy: yes [version 1.8.0] dateutil: yes [using dateutil version 1.5] tornado: yes [tornado was not found. It is required for the WebAgg backend. pip\/easy_install may attempt to install it after matplotlib.] pyparsing: yes [pyparsing was not found. It is required for mathtext support. pip\/easy_install may attempt to install it after matplotlib.] pycxx: yes [Couldn't import. Using local copy.] libagg: yes [pkg-config information for 'libagg' could not be found. Using local copy.] freetype: yes [version 17.1.11] png: yes [version 1.5.17] OPTIONAL SUBPACKAGES sample_data: yes [installing] toolkits: yes [installing] tests: yes [nose 0.11.1 or later is required to run the matplotlib test suite] OPTIONAL BACKEND EXTENSIONS macosx: yes [installing, darwin] qt4agg: no [PyQt4 not found] gtk3agg: no [Requires pygobject to be installed.] gtk3cairo: no [Requires cairo to be installed.] gtkagg: no [Requires pygtk] tkagg: yes [installing, version 81008] wxagg: no [requires wxPython] gtk: no [Requires pygtk] agg: yes [installing] cairo: no [cairo not found] windowing: no [Microsoft Windows only] OPTIONAL LATEX DEPENDENCIES dvipng: yes [version 1.14] ghostscript: yes [version 9.02] latex: yes [version 3.1415926] pdftops: no ``` And then later down the line, after copying and building files, it gets to build_ext and fails because of missing header files. 
``` running build_ext building 'matplotlib.ft2font' extension creating build\/temp.macosx-10.9-intel-2.7 creating build\/temp.macosx-10.9-intel-2.7\/src creating build\/temp.macosx-10.9-intel-2.7\/CXX cc -fno-strict-aliasing -fno-common -dynamic -arch x86_64 -arch i386 -g -Os -pipe -fno-common -fno-strict-aliasing -fwrapv -mno-fused-madd -DENABLE_DTRACE -DMACOSX -DNDEBUG -Wall -Wshorten-64-to-32 -DNDEBUG -g -fwrapv -Os -Wall -Wstrict-prototypes -DENABLE_DTRACE -arch x86_64 -arch i386 -pipe -DPY_ARRAY_UNIQUE_SYMBOL=MPL_matplotlib_ft2font_ARRAY_API -DPYCXX_ISO_CPP_LIB=1 -I\/System\/Library\/Frameworks\/Python.framework\/Versions\/2.7\/Extras\/lib\/python\/numpy\/core\/include -I\/usr\/local\/include -I\/usr\/include -I\/usr\/X11\/include -I. -I\/usr\/local\/Cellar\/freetype\/2.5.2\/include\/freetype2 -I\/System\/Library\/Frameworks\/Python.framework\/Versions\/2.7\/include\/python2.7 -c src\/ft2font.cpp -o build\/temp.macosx-10.9-intel-2.7\/src\/ft2font.o clang: warning: argument unused during compilation: '-mno-fused-madd' In file included from src\/ft2font.cpp:3: In file included from src\/ft2font.h:6: In file included from .\/CXX\/Extensions.hxx:40: In file included from .\/CXX\/Python2\/Extensions.hxx:52: .\/CXX\/Python2\/Objects.hxx:1133:23: warning: implicit conversion of NULL constant to 'int' [-Wnull-conversion] , offset( NULL ) ~ ^~~~ 0 In file included from src\/ft2font.cpp:3: In file included from src\/ft2font.h:16: \/usr\/X11\/include\/ft2build.h:56:10: fatal error: 'freetype\/config\/ftheader.h' file not found #include ^ 1 warning and 1 error generated. error: command 'cc' failed with exit status 1 ---------------------------------------- Rolling back uninstall of matplotlib Replacing \/System\/Library\/Frameworks\/Python.framework\/Versions\/2.7\/Extras\/lib\/python\/matplotlib-1.1.1-py2.7.egg-info Cleaning up... Removing temporary dir \/private\/tmp\/pip_build_root... 
Command \/usr\/bin\/python -c \"import setuptools;__file__='\/private\/tmp\/pip_build_root\/matplotlib\/setup.py';exec(compile(open(__file__).read().replace('\\r\\n', '\\n'), __file__, 'exec'))\" install --record \/tmp\/pip-vHUPhA-record\/install-record.txt --single-version-externally-managed failed with error code 1 in \/private\/tmp\/pip_build_root\/matplotlib Exception information: Traceback (most recent call last): File \"\/Library\/Python\/2.7\/site-packages\/pip-1.4.1-py2.7.egg\/pip\/basecommand.py\", line 134, in main status = self.run(options, args) File \"\/Library\/Python\/2.7\/site-packages\/pip-1.4.1-py2.7.egg\/pip\/commands\/install.py\", line 241, in run requirement_set.install(install_options, global_options, root=options.root_path) File \"\/Library\/Python\/2.7\/site-packages\/pip-1.4.1-py2.7.egg\/pip\/req.py\", line 1298, in install requirement.install(install_options, global_options, *args, **kwargs) File \"\/Library\/Python\/2.7\/site-packages\/pip-1.4.1-py2.7.egg\/pip\/req.py\", line 625, in install cwd=self.source_dir, filter_stdout=self._filter_install, show_stdout=False) File \"\/Library\/Python\/2.7\/site-packages\/pip-1.4.1-py2.7.egg\/pip\/util.py\", line 670, in call_subprocess % (command_desc, proc.returncode, cwd)) InstallationError: Command \/usr\/bin\/python -c \"import setuptools;__file__='\/private\/tmp\/pip_build_root\/matplotlib\/setup.py';exec(compile(open(__file__).read().replace('\\r\\n', '\\n'), __file__, 'exec'))\" install --record \/tmp\/pip-vHUPhA-record\/install-record.txt --single-version-externally-managed failed with error code 1 in \/private\/tmp\/pip_build_root\/matplotlib ```", "response":"I had the same problem, and finally found the solution by checking the compilation commands. 
It's really simple: ``` ln -s \/usr\/local\/opt\/freetype\/include\/freetype2 \/usr\/local\/include\/freetype ```", "best_answers_score":0.8, "library_name":"matplotlib", "question_url":"https:\/\/stackoverflow.com\/questions\/20572366\/sudo-pip-install-matplotlib-fails-to-find-freetype-headers-os-x-mavericks", "best_answers_votes":88, "question_length":5816, "response_length":196 }, { "question":"After conda update, python kernel crashes when matplotlib is used I have create this simple env with conda: ``` conda create -n test python=3.8.5 pandas scipy numpy matplotlib seaborn jupyterlab ``` The following code in jupyter lab crashes the kernel : ``` import matplotlib.pyplot as plt plt.subplot() ``` I don't face the problem on Linux. The problem is when I try on Windows 10. There are no errors on the jupyter lab console (where I started the server), and I have no idea where to investigate.", "response":"Update 2021-11-06 The default pkgs\/main channel for conda has reverted to using freetype 2.10.4 for Windows, per main \/ packages \/ freetype. If you are still experiencing the issue, use conda list freetype to check the version: freetype != 2.11.0 If it is 2.11.0, then change the version, per the solution, or conda update --all (providing your default channel isn't changed in the .condarc config file). Solution If this is occurring after installing Anaconda, updating conda or freetype since Oct 27, 2021. Go to the Anaconda prompt and downgrade freetype 2.11.0 in any affected environment. conda install freetype=2.10.4 Relevant to any package using matplotlib and any IDE For example, pandas.DataFrame.plot and seaborn Jupyter, Spyder, VSCode, PyCharm, command line. Discovery An issue occurs after updating with the most current updates from conda, released Friday, Oct 29. After updating with conda update --all, there's an issue with anything related to matplotlib in any IDE (not just Jupyter). I tested this in JupyterLab, PyCharm, and python from the command prompt. 
PyCharm: Process finished with exit code -1073741819 JupyterLab: kernel just restarts and there are no associated errors or Traceback command prompt: a blank interactive matplotlib window will appear briefly, and then a new command line appears. The issue seems to be with conda update --all in (base), then any plot API that uses matplotlib (e.g. seaborn and pandas.DataFrame.plot) kills the kernel in any environment. I had to reinstall Anaconda, but do not do an update of (base), then my other environments worked. I have not figured out what specifically is causing the issue. I tested the issue with python 3.8.12 and python 3.9.7 Current Testing: Following is the conda revision log. Prior to conda update --all this environment was working, but after the updates, plotting with matplotlib crashes the python kernel ```py 2021-10-31 10:47:22 (rev 3) bokeh {2.3.3 (defaults\/win-64) -> 2.4.1 (defaults\/win-64)} click {8.0.1 (defaults\/noarch) -> 8.0.3 (defaults\/noarch)} filelock {3.0.12 (defaults\/noarch) -> 3.3.1 (defaults\/noarch)} freetype {2.10.4 (defaults\/win-64) -> 2.11.0 (defaults\/win-64)} imagecodecs {2021.6.8 (defaults\/win-64) -> 2021.8.26 (defaults\/win-64)} joblib {1.0.1 (defaults\/noarch) -> 1.1.0 (defaults\/noarch)} lerc {2.2.1 (defaults\/win-64) -> 3.0 (defaults\/win-64)} more-itertools {8.8.0 (defaults\/noarch) -> 8.10.0 (defaults\/noarch)} pyopenssl {20.0.1 (defaults\/noarch) -> 21.0.0 (defaults\/noarch)} scikit-learn {0.24.2 (defaults\/win-64) -> 1.0.1 (defaults\/win-64)} statsmodels {0.12.2 (defaults\/win-64) -> 0.13.0 (defaults\/win-64)} sympy {1.8 (defaults\/win-64) -> 1.9 (defaults\/win-64)} tqdm {4.62.2 (defaults\/noarch) -> 4.62.3 (defaults\/noarch)} xlwings {0.24.7 (defaults\/win-64) -> 0.24.9 (defaults\/win-64)} ``` The issue seems to be freetype Downgrading from 2.11.0 to 2.10.4 resolved the issue and made the environment work with matplotlib Went to post a bug report and discovered there is [Bug]: Matplotlib crashes Python #21511", 
"best_answers_score":0.8, "library_name":"matplotlib", "question_url":"https:\/\/stackoverflow.com\/questions\/69786885\/after-conda-update-python-kernel-crashes-when-matplotlib-is-used", "best_answers_votes":75, "question_length":501, "response_length":3037 }, { "question":"Plot a bar plot from a Pandas DataFrame Assuming I have a DataFrame that looks like this: ```none Hour V1 V2 A1 A2 0 15 13 25 37 1 26 52 21 45 2 18 45 45 25 3 65 38 98 14 ``` I'm trying to create a bar plot to compare columns V1 and V2 by the Hour. When I do: ```py import matplotlib.pyplot as plt ax = df.plot(kind='bar', title =\"V comp\",figsize=(15,10),legend=True, fontsize=12) ax.set_xlabel(\"Hour\",fontsize=12) ax.set_ylabel(\"V\",fontsize=12) ``` I get a plot and a legend with all the columns' values and names. How can I modify my code so the plot and legend only displays the columns V1 and V2?", "response":"To plot just a selection of your columns you can select the columns of interest by passing a list to the subscript operator: ``` ax = df[['V1','V2']].plot(kind='bar', title =\"V comp\", figsize=(15, 10), legend=True, fontsize=12) ``` What you tried was df['V1','V2'] this will raise a KeyError as correctly no column exists with that label, although it looks funny at first you have to consider that your are passing a list hence the double square brackets [[]]. ``` import matplotlib.pyplot as plt ax = df[['V1','V2']].plot(kind='bar', title =\"V comp\", figsize=(15, 10), legend=True, fontsize=12) ax.set_xlabel(\"Hour\", fontsize=12) ax.set_ylabel(\"V\", fontsize=12) plt.show() ```", "best_answers_score":0.8, "library_name":"matplotlib", "question_url":"https:\/\/stackoverflow.com\/questions\/29498652\/plot-a-bar-plot-from-a-pandas-dataframe", "best_answers_votes":84, "question_length":600, "response_length":677 }, { "question":"what does axes.flat in matplotlib do? 
I have seen various programs using matplotlib that use the axes.flat function, like this code: ``` for i, ax in enumerate(axes.flat): ``` what does this do?", "response":"Let's look at a minimal example, where we create some axes with plt.subplots, also see this question, ``` import matplotlib.pyplot as plt fig, axes = plt.subplots(ncols=2,nrows=3, sharex=True, sharey=True) for i, ax in enumerate(axes.flat): ax.scatter([i\/\/2+1, i],[i,i\/\/3]) plt.show() ``` Here, axes is a numpy array of axes, ``` print(type(axes)) > <class 'numpy.ndarray'> print(axes.shape) > (3L, 2L) ``` axes.flat is not a function, it's an attribute of the numpy.ndarray: numpy.ndarray.flat ndarray.flat A 1-D iterator over the array. This is a numpy.flatiter instance, which acts similarly to, but is not a subclass of, Python\u2019s built-in iterator object. Example: ``` import numpy as np a = np.array([[2,3], [4,5], [6,7]]) for i in a.flat: print(i) ``` which would print the numbers 2 3 4 5 6 7. Being an iterator over the array, you can use it to loop over all the axes from the 3x2 array of axes, ``` for i, ax in enumerate(axes.flat): ``` For each iteration it would yield the next axes from that array, such that you may easily plot to all axes in a single loop. An alternative would be to use axes.flatten(), where flatten() is a method of the numpy array. Instead of an iterator, it returns a flattened version of the array: ``` for i, ax in enumerate(axes.flatten()): ``` There is no difference seen from the outside between the two. However, an iterator does not actually create a new array and may hence be slightly faster (although this will never be noticeable in the case of matplotlib axes objects). 
``` flat1 = [ax for ax in axes.flat] flat2 = axes.flatten() print(flat1 == flat2) > [ True True True True True True] ``` Iterating a flattened version of the axes array has the advantage that you will save one loop, compared to the naive approach of iterating over rows and columns separately, ``` for row in axes: for ax in row: ax.scatter(...) ```", "best_answers_score":0.8, "library_name":"matplotlib", "question_url":"https:\/\/stackoverflow.com\/questions\/46862861\/what-does-axes-flat-in-matplotlib-do", "best_answers_votes":79, "question_length":195, "response_length":1838 }, { "question":"How to create a scatter plot legend with only one symbol for each label? How can I create a scatter plot legend without two symbols showing up each time? I can understand why you'd want this when you're joining symbols by lines, but for a pure scatter plot, all I want in the legend is one example of the symbol. This plot from a previous stackoverflow post shows the kind of thing I mean:", "response":"In the legend command you can use the scatterpoints option: ``` ax.legend(loc=0, scatterpoints = 1) ``` For a normal plot, it is the option numpoints. Here you can find more information about the keyword arguments for the legend: http:\/\/matplotlib.sourceforge.net\/api\/pyplot_api.html#matplotlib.pyplot.legend", "best_answers_score":0.8, "library_name":"matplotlib", "question_url":"https:\/\/stackoverflow.com\/questions\/6099799\/how-to-create-a-scatter-plot-legend-with-only-one-symbol-for-each-label", "best_answers_votes":72, "question_length":389, "response_length":308 }, { "question":"Labels for clustermap in seaborn I have several questions about labeling for clustermap in seaborn. First, is it possible to extract the distance values for the hierarchical clustering, and plot the values on the tree structure visualization (maybe only the first three levels)? 
Here is my example code for creating a clustermap plot: ``` import pandas as pd import numpy as np import seaborn as sns get_ipython().magic(u'matplotlib inline') m = np.random.rand(50, 50) df = pd.DataFrame(m, columns=range(4123, 4173), index=range(4123, 4173)) sns.clustermap(df, metric=\"correlation\") ``` The other two questions are: - How to rotate the y labels since they overlaps together. - How to move the color bar to the bottom or right. (There was a question for heatmap, but does not work for my case. Also does not address the color bar position)", "response":"I had the exact same issue with the labels on the y-axis being rotated and found a solution. The issue is that if you do plt.yticks(rotation=0) like suggested in the question you referenced, it will rotate the labels on your colobar due to the way ClusterGrid works. To solve it and rotate the right labels, you need to reference the Axes from the underlying Heatmap and rotate these: ``` cg = sns.clustermap(df, metric=\"correlation\") plt.setp(cg.ax_heatmap.yaxis.get_majorticklabels(), rotation=0) ``` For your other question about the colorbar placement, I don't think this is supported at the moment, as indicated by this Github issue unfortunately. And finally for the hierarchical clustering distance values, you can access the linkage matrics for rows or columns with: ``` cg = sns.clustermap(df, metric=\"correlation\") cg.dendrogram_col.linkage # linkage matrix for columns cg.dendrogram_row.linkage # linkage matrix for rows ```", "best_answers_score":0.8, "library_name":"matplotlib", "question_url":"https:\/\/stackoverflow.com\/questions\/34572177\/labels-for-clustermap-in-seaborn", "best_answers_votes":59, "question_length":840, "response_length":935 }, { "question":"How to plot scikit learn classification report? Is it possible to plot with matplotlib scikit-learn classification report?. 
Let's assume I print the classification report like this: ``` print '\\n*Classification Report:\\n', classification_report(y_test, predictions) confusion_matrix_graph = confusion_matrix(y_test, predictions) ``` and I get: ``` Clasification Report: precision recall f1-score support 1 0.62 1.00 0.76 66 2 0.93 0.93 0.93 40 3 0.59 0.97 0.73 67 4 0.47 0.92 0.62 272 5 1.00 0.16 0.28 413 avg \/ total 0.77 0.57 0.49 858 ``` How can I \"plot\" the avobe chart?.", "response":"Expanding on Bin's answer: ``` import matplotlib.pyplot as plt import numpy as np def show_values(pc, fmt=\"%.2f\", **kw): ''' Heatmap with text in each cell with matplotlib's pyplot Source: https:\/\/stackoverflow.com\/a\/25074150\/395857 By HYRY ''' from itertools import izip pc.update_scalarmappable() ax = pc.get_axes() #ax = pc.axes# FOR LATEST MATPLOTLIB #Use zip BELOW IN PYTHON 3 for p, color, value in izip(pc.get_paths(), pc.get_facecolors(), pc.get_array()): x, y = p.vertices[:-2, :].mean(0) if np.all(color[:3] > 0.5): color = (0.0, 0.0, 0.0) else: color = (1.0, 1.0, 1.0) ax.text(x, y, fmt % value, ha=\"center\", va=\"center\", color=color, **kw) def cm2inch(*tupl): ''' Specify figure size in centimeter in matplotlib Source: https:\/\/stackoverflow.com\/a\/22787457\/395857 By gns-ank ''' inch = 2.54 if type(tupl[0]) == tuple: return tuple(i\/inch for i in tupl[0]) else: return tuple(i\/inch for i in tupl) def heatmap(AUC, title, xlabel, ylabel, xticklabels, yticklabels, figure_width=40, figure_height=20, correct_orientation=False, cmap='RdBu'): ''' Inspired by: - https:\/\/stackoverflow.com\/a\/16124677\/395857 - https:\/\/stackoverflow.com\/a\/25074150\/395857 ''' # Plot it out fig, ax = plt.subplots() #c = ax.pcolor(AUC, edgecolors='k', linestyle= 'dashed', linewidths=0.2, cmap='RdBu', vmin=0.0, vmax=1.0) c = ax.pcolor(AUC, edgecolors='k', linestyle= 'dashed', linewidths=0.2, cmap=cmap) # put the major ticks at the middle of each cell ax.set_yticks(np.arange(AUC.shape[0]) + 0.5, minor=False) 
ax.set_xticks(np.arange(AUC.shape[1]) + 0.5, minor=False) # set tick labels #ax.set_xticklabels(np.arange(1,AUC.shape[1]+1), minor=False) ax.set_xticklabels(xticklabels, minor=False) ax.set_yticklabels(yticklabels, minor=False) # set title and x\/y labels plt.title(title) plt.xlabel(xlabel) plt.ylabel(ylabel) # Remove last blank column plt.xlim( (0, AUC.shape[1]) ) # Turn off all the ticks ax = plt.gca() for t in ax.xaxis.get_major_ticks(): t.tick1On = False t.tick2On = False for t in ax.yaxis.get_major_ticks(): t.tick1On = False t.tick2On = False # Add color bar plt.colorbar(c) # Add text in each cell show_values(c) # Proper orientation (origin at the top left instead of bottom left) if correct_orientation: ax.invert_yaxis() ax.xaxis.tick_top() # resize fig = plt.gcf() #fig.set_size_inches(cm2inch(40, 20)) #fig.set_size_inches(cm2inch(40*4, 20*4)) fig.set_size_inches(cm2inch(figure_width, figure_height)) def plot_classification_report(classification_report, title='Classification report ', cmap='RdBu'): ''' Plot scikit-learn classification report. 
Extension based on https:\/\/stackoverflow.com\/a\/31689645\/395857 ''' lines = classification_report.split('\\n') classes = [] plotMat = [] support = [] class_names = [] for line in lines[2 : (len(lines) - 2)]: t = line.strip().split() if len(t) < 2: continue classes.append(t[0]) v = [float(x) for x in t[1: len(t) - 1]] support.append(int(t[-1])) class_names.append(t[0]) print(v) plotMat.append(v) print('plotMat: {0}'.format(plotMat)) print('support: {0}'.format(support)) xlabel = 'Metrics' ylabel = 'Classes' xticklabels = ['Precision', 'Recall', 'F1-score'] yticklabels = ['{0} ({1})'.format(class_names[idx], sup) for idx, sup in enumerate(support)] figure_width = 25 figure_height = len(class_names) + 7 correct_orientation = False heatmap(np.array(plotMat), title, xlabel, ylabel, xticklabels, yticklabels, figure_width, figure_height, correct_orientation, cmap=cmap) def main(): sampleClassificationReport = \"\"\" precision recall f1-score support Acacia 0.62 1.00 0.76 66 Blossom 0.93 0.93 0.93 40 Camellia 0.59 0.97 0.73 67 Daisy 0.47 0.92 0.62 272 Echium 1.00 0.16 0.28 413 avg \/ total 0.77 0.57 0.49 858\"\"\" plot_classification_report(sampleClassificationReport) plt.savefig('test_plot_classif_report.png', dpi=200, format='png', bbox_inches='tight') plt.close() if __name__ == \"__main__\": main() #cProfile.run('main()') # if you want to do some profiling ``` outputs: Example with more classes (~40):", "best_answers_score":0.8, "library_name":"matplotlib", "question_url":"https:\/\/stackoverflow.com\/questions\/28200786\/how-to-plot-scikit-learn-classification-report", "best_answers_votes":44, "question_length":575, "response_length":3970 }, { "question":"How to make a histogram from a list of strings I have a list of strings: ``` a = ['a', 'a', 'a', 'a', 'b', 'b', 'c', 'c', 'c', 'd', 'e', 'e', 'e', 'e', 'e'] ``` I want to make a histogram for displaying the frequency distribution of the letters. 
I can make a list that contains the count of each letter using the following code: ``` from itertools import groupby b = [len(list(group)) for key, group in groupby(a)] ``` How do I make the histogram? I may have a million such elements in list a.", "response":"Very easy with Pandas. ``` import pandas from collections import Counter a = ['a', 'a', 'a', 'a', 'b', 'b', 'c', 'c', 'c', 'd', 'e', 'e', 'e', 'e', 'e'] letter_counts = Counter(a) df = pandas.DataFrame.from_dict(letter_counts, orient='index') df.plot(kind='bar') ``` Notice that Counter is making a frequency count, so our plot type is 'bar' not 'hist'.", "best_answers_score":0.8, "library_name":"matplotlib", "question_url":"https:\/\/stackoverflow.com\/questions\/28418988\/how-to-make-a-histogram-from-a-list-of-strings", "best_answers_votes":53, "question_length":490, "response_length":353 }, { "question":"Changing plot scale by a factor in matplotlib I am creating a plot in python. Is there a way to re-scale the axis by a factor? The yscale and xscale commands only allow me to turn log scale off. Edit: For example, if I have a plot where the x scale goes from 1 nm to 50 nm, the x scale will range from 1x10^(-9) to 50x10^(-9) and I want it to change from 1 to 50. Thus, I want the plot function to divide the x values placed on the plot by 10^(-9).", "response":"As you have noticed, xscale and yscale do not support a simple linear re-scaling (unfortunately). 
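A minimal sketch of the more direct workaround (hypothetical data; it assumes x is stored in metres and you want the axis to read in nanometres) is to rescale the data itself before plotting:

```
import numpy as np
import matplotlib.pyplot as plt

x = np.linspace(1e-9, 50e-9, 100)  # data in metres
y = np.sin(2 * np.pi * x / 1e-9)

scale = 1e-9            # one nanometre
plt.plot(x / scale, y)  # the x axis now runs from 1 to 50
```

The drawback of this sketch is that every quantity you plot has to be divided through by hand.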
As an alternative to Hooked's answer, instead of messing with the data, you can trick the labels like so: ``` ticks = ticker.FuncFormatter(lambda x, pos: '{0:g}'.format(x\/scale)) ax.xaxis.set_major_formatter(ticks) ``` A complete example showing both x and y scaling: ``` import numpy as np import pylab as plt import matplotlib.ticker as ticker # Generate data x = np.linspace(0, 1e-9) y = 1e3*np.sin(2*np.pi*x\/1e-9) # one period, 1k amplitude # setup figures fig = plt.figure() ax1 = fig.add_subplot(121) ax2 = fig.add_subplot(122) # plot two identical plots ax1.plot(x, y) ax2.plot(x, y) # Change only ax2 scale_x = 1e-9 scale_y = 1e3 ticks_x = ticker.FuncFormatter(lambda x, pos: '{0:g}'.format(x\/scale_x)) ax2.xaxis.set_major_formatter(ticks_x) ticks_y = ticker.FuncFormatter(lambda x, pos: '{0:g}'.format(x\/scale_y)) ax2.yaxis.set_major_formatter(ticks_y) ax1.set_xlabel(\"meters\") ax1.set_ylabel('volt') ax2.set_xlabel(\"nanometers\") ax2.set_ylabel('kilovolt') plt.show() ``` And finally I have the credits for a picture: Note that, if you have text.usetex: true as I have, you may want to enclose the labels in $, like so: '${0:g}$'.", "best_answers_score":0.8, "library_name":"matplotlib", "question_url":"https:\/\/stackoverflow.com\/questions\/10171618\/changing-plot-scale-by-a-factor-in-matplotlib", "best_answers_votes":95, "question_length":448, "response_length":1239 }, { "question":"How to set my xlabel at the end of xaxis I want my x axis to have its label in this format: ``` 0 1 2 3 4 5 Xlabel ``` but when I try the code below, it results in 2 lines: ``` self.axes.set_xticks(np.arange(0,6,1)) self.axes.set_xlabel('Xlabel', fontsize=9,x=1,y=1) ``` => my result :( ``` 0 1 2 3 4 5 Xlabel ```", "response":"When setting the xlabel, the x parameter assigns the position in axis units, so 0 is the origin and 1 is the right edge of the plot. y is ignored as it's expected to be a default value, just below the tick marks. 
To override this behavior, you can set the position in axis units using the Axis set_label_coords method. You can use other units by also providing a transform. Here is an example of this: ``` import matplotlib.pyplot as plt import numpy as np ax = plt.gca() ax.set_xticks(np.arange(0,6,1)) label = ax.set_xlabel('Xlabel', fontsize = 9) ax.xaxis.set_label_coords(1.05, -0.025) plt.savefig('labelAtEnd.png') plt.show() ``` Resulting in: The x value (1.05) was chosen to position the label outside the Axes frame. The y value (-0.025) was chosen as a best guess at the position you desired. Using a transform, it might be possible to automatically position the text in line with the Tick labels. EDIT: Here's an extended example using a transform. It is not necessarily more helpful to use the last ticklabel's transform, because it does not take into account the size of the text and how it is aligned. So to get a somewhat desired effect, I had to 1) use the same font size for my x label, 2) position the vertical alignment (va) to 'top', and 3) position the horizontal alignment to 'left'. The transform for each tick is set for data units for x (because it's an xaxis) and axis units for y (0 to 1), but displaced by a fixed padding (in pixels) from the x axis. 
``` import matplotlib.pyplot as plt import numpy as np ax = plt.gca() ax.set_xticks(np.arange(0,6,1)) ax.set_yticks(np.arange(0,6,1)) label = ax.set_xlabel('xlabel', ha='left', va='top') #fontsize = 9 # need to draw the figure first to position the tick labels fig = plt.gcf() fig.draw(fig.canvas.get_renderer()) # get a tick and use its transform to position things next to it ticklab = ax.xaxis.get_ticklabels()[0] trans = ticklab.get_transform() ax.xaxis.set_label_coords(5.1, 0, transform=trans) plt.savefig('labelAtEnd2.png') plt.show() ``` This results in:", "best_answers_score":0.8, "library_name":"matplotlib", "question_url":"https:\/\/stackoverflow.com\/questions\/9290938\/how-to-set-my-xlabel-at-the-end-of-xaxis", "best_answers_votes":73, "question_length":301, "response_length":2037 }, { "question":"How to show an AxesSubplot in Python? I have an object fig2 that is of class matplotlib.axes.AxesSubplot, but when I try to execute fig2.show(), python says the 'AxesSubplot' object has no attribute 'show'. How can I show an AxesSubplot?", "response":"You should call matplotlib.pyplot.show(), which is a method that displays all the figures. If you have imported it as plt, then: ``` import matplotlib.pyplot as plt # create fig1 (of type plt.figure) # create fig2 plt.show() # will display fig1 and fig2 in different windows ```", "best_answers_score":0.8, "library_name":"matplotlib", "question_url":"https:\/\/stackoverflow.com\/questions\/26540567\/how-to-show-an-axessubplot-in-python", "best_answers_votes":69, "question_length":226, "response_length":275 }, { "question":"Horizontal box plots in matplotlib\/Pandas Bar plots: matplotlib offers the functions bar and barh to do vertical and horizontal bar plots. Box plots: matplotlib also offers the function boxplot to do vertical box plots. And Pandas offers its own function for vertical box plots. 
But is there any way in matplotlib or Pandas to get a horizontal box plot?", "response":"matplotlib's boxplot(..., vert=False) makes horizontal box plots. The keyword parameter vert=False can also be passed to DataFrame.boxplot: ``` import matplotlib.pyplot as plt import pandas as pd x = [[1.2, 2.3, 3.0, 4.5], [1.1, 2.2, 2.9, 5.0]] df = pd.DataFrame(x, index=['Age of pregnant women', 'Age of pregnant men']) df.T.boxplot(vert=False) plt.subplots_adjust(left=0.25) plt.show() ``` I see from the comment (below) that the motivation for making a horizontal box plot is that the labels are rather long. Another option in that case might be to rotate the xticklabels: ``` import matplotlib.pyplot as plt import pandas as pd x = [[1.2, 2.3, 3.0, 4.5], [1.1, 2.2, 2.9, 5.0]] df = pd.DataFrame(x, index=['Age of pregnant women', 'Age of pregnant men']) df.T.boxplot() plt.subplots_adjust(bottom=0.25) plt.xticks(rotation=25) plt.show() ```", "best_answers_score":0.8, "library_name":"matplotlib", "question_url":"https:\/\/stackoverflow.com\/questions\/18500011\/horizontal-box-plots-in-matplotlib-pandas", "best_answers_votes":87, "question_length":352, "response_length":845 }, { "question":"Closing pyplot windows Final Edit: What I found on the subject of closing pyplot windows is that it really probably shouldn't be done using pyplot. SRK gives a great example on how to handle plots that will be updated in his answer below. Also I have stumbled across how to put pyplot plots into a Tkinter window, and Tkinter is much more adept at opening and closing windows than pyplot. Here is how to put a pyplot plot into a Tk window also this is a good example. \/Final Edit I would like to be able to display several plots and then be able to close (remove from screen) them individually from some code input, but I don't know the code input to do this. Below is what I have tried so far. 
I have played around with the position of the show and close commands, but the only real result I have gotten from this is to have one or the other plot not come up, but I have not been able to remove a plot from the screen. I have been inserting a raw_input() to create pauses. Edit: These plots are being called from a Tkinter gui and if there is a better way to do this from that direction I would be glad to hear it. Any input would be appreciated, thanks. ``` import matplotlib.pyplot as plt a = range(0,10) b = range(0,20,2) c = range(0,30,3) d = range(0,40,4) plot1 = plt.figure() plt.plot(a,b, 'r-o') plt.show() plt.close() plot2 = plt.figure() plt.plot(c,d, 'b-o') plt.show() plt.close() ``` Edit Code: This didn't work either. ``` plot1 = plt.figure(1) plt.plot(a,b, 'r-o') plot2 = plt.figure(2) plt.plot(c,d, 'b-o') #plt.close(1) #this will prevent plot1 from being displayed plt.show() plt.close(1) # or ('all') or (plot1) ```", "response":"plt.close() will close the current figure. plt.close(2) will close figure 2. plt.close(plot1) will close the figure with handle plot1. plt.close('all') will close all figures. Found here. Remember that plt.show() is a blocking function, so in the example code you used above, plt.close() isn't being executed until the window is closed, which makes it redundant. You can use plt.ion() at the beginning of your code to make it non-blocking, although this has other implications. EXAMPLE After our discussion in the comments, I've put together a bit of an example just to demonstrate how the plot functionality can be used. Below I create a plot: ``` fig = plt.figure(figsize=plt.figaspect(0.75)) ax = fig.add_subplot(1, 1, 1) .... par_plot, = plot(x_data,y_data, lw=2, color='red') ``` In this case, ax above is a handle to a pair of axes. Whenever I want to do something to these axes, I can change my current set of axes to this particular set by calling axes(ax). par_plot is a handle to the Line2D instance. This is called an artist. 
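As a standalone sketch (using made-up data, not the variables from this example), you can confirm that the handle returned by a plot call is a Line2D artist:

```
import matplotlib.pyplot as plt
from matplotlib.lines import Line2D

# plot returns a list of artists; the trailing comma unpacks its single element
line_handle, = plt.plot([0, 1, 2], [0, 1, 4], 'r-o')
print(isinstance(line_handle, Line2D))  # True
```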
If I want to change a property of the line, like change the ydata, I can do so by referring to this handle. I can also create a slider widget by doing the following: ``` axsliderA = axes([0.12, 0.85, 0.16, 0.075]) sA = Slider(axsliderA, 'A', -1, 1.0, valinit=0.5) sA.on_changed(update) ``` The first line creates a new axes for the slider (called axsliderA), the second line creates a slider instance sA which is placed in the axes, and the third line specifies a function to call when the slider value changes (update). My update function could look something like this: ``` def update(val): A = sA.val B = sB.val C = sC.val y_data = A*x_data*x_data + B*x_data + C par_plot.set_ydata(y_data) draw() ``` The par_plot.set_ydata(y_data) changes the ydata property of the Line2D object with the handle par_plot. The draw() function updates the current set of axes. Putting it all together: ``` from pylab import * import matplotlib.pyplot as plt import numpy def update(val): A = sA.val B = sB.val C = sC.val y_data = A*x_data*x_data + B*x_data + C par_plot.set_ydata(y_data) draw() x_data = numpy.arange(-100,100,0.1); fig = plt.figure(figsize=plt.figaspect(0.75)) ax = fig.add_subplot(1, 1, 1) subplots_adjust(top=0.8) ax.set_xlim(-100, 100); ax.set_ylim(-100, 100); ax.set_xlabel('X') ax.set_ylabel('Y') axsliderA = axes([0.12, 0.85, 0.16, 0.075]) sA = Slider(axsliderA, 'A', -1, 1.0, valinit=0.5) sA.on_changed(update) axsliderB = axes([0.43, 0.85, 0.16, 0.075]) sB = Slider(axsliderB, 'B', -30, 30.0, valinit=2) sB.on_changed(update) axsliderC = axes([0.74, 0.85, 0.16, 0.075]) sC = Slider(axsliderC, 'C', -30, 30.0, valinit=1) sC.on_changed(update) axes(ax) A = 1; B = 2; C = 1; y_data = A*x_data*x_data + B*x_data + C; par_plot, = plot(x_data,y_data, lw=2, color='red') show() ``` A note about the above: When I run the application, the code runs sequentially right through (it stores the update function in memory, I think), until it hits show(), which is blocking. 
When you make a change to one of the sliders, it runs the update function from memory (I think?). This is the reason why show() is implemented in the way it is, so that you can change values in the background by using functions to process the data.", "best_answers_score":0.8, "library_name":"matplotlib", "question_url":"https:\/\/stackoverflow.com\/questions\/11140787\/closing-pyplot-windows", "best_answers_votes":86, "question_length":1633, "response_length":3248 }, { "question":"Matplotlib imshow() stretch to \"fit width\" I've got an image, and a measure associated with each column of its pixels. I'm using pyplot to create a figure with the image on top, and a plot of the column measurements below. I'm using something like this: ``` import numpy as np import matplotlib.pyplot as plt A = np.random.rand(34*52).reshape(34,52) means = np.average(A,axis=0) plt.figure() plt.subplot(2,1,1) plt.imshow(A, interpolation='nearest' ) plt.subplot(2,1,2) plt.plot(means) plt.show() ``` How can I stretch the image's width to match that of the plot? That way, when looking at the measurements in the plot, the source pixels will be in a column directly above it.", "response":"Turns out that it's as simple as giving aspect='auto' to the imshow call. ``` plt.imshow(A, interpolation='nearest', aspect='auto') ```", "best_answers_score":0.8, "library_name":"matplotlib", "question_url":"https:\/\/stackoverflow.com\/questions\/12806481\/matplotlib-imshow-stretch-to-fit-width", "best_answers_votes":94, "question_length":680, "response_length":135 }, { "question":"Different colors for points and line in Seaborn regplot All examples listed in Seaborn's regplot documentation show the same color for dots and the regression line. Changing the color argument changes both. How can one set a different color for the points than for the line?", "response":"You are right in that the color argument changes all the plot elements. 
However, if you read the last bit of the relevant sentence in the documentation: color : matplotlib color Color to apply to all plot elements; will be superseded by colors passed in scatter_kws or line_kws. Therefore, using scatter_kws or line_kws we can change the color of them individually. Taking the first example given in the documentation: ``` import seaborn as sns tips = sns.load_dataset(\"tips\") ax = sns.regplot(x=\"total_bill\", y=\"tip\", data=tips, scatter_kws={\"color\": \"black\"}, line_kws={\"color\": \"red\"}) plt.show() ``` Gives:", "best_answers_score":0.8, "library_name":"matplotlib", "question_url":"https:\/\/stackoverflow.com\/questions\/48145924\/different-colors-for-points-and-line-in-seaborn-regplot", "best_answers_votes":79, "question_length":268, "response_length":610 }, { "question":"how to plot and annotate hierarchical clustering dendrograms in scipy\/matplotlib I'm using dendrogram from scipy to plot hierarchical clustering using matplotlib as follows: ``` mat = array([[1, 0.5, 0.9], [0.5, 1, -0.5], [0.9, -0.5, 1]]) plt.subplot(1,2,1) plt.title(\"mat\") dist_mat = mat linkage_matrix = linkage(dist_mat, \"single\") print \"linkage2:\" print linkage(1-dist_mat, \"single\") dendrogram(linkage_matrix, color_threshold=1, labels=[\"a\", \"b\", \"c\"], show_leaf_counts=True) plt.subplot(1,2,2) plt.title(\"1 - mat\") dist_mat = 1 - mat linkage_matrix = linkage(dist_mat, \"single\") dendrogram(linkage_matrix, color_threshold=1, labels=[\"a\", \"b\", \"c\"], show_leaf_counts=True) ``` My questions are: first, why does mat and 1-mat give identical clusterings here? and second, how can I annotate the distance along each branch of the tree using dendrogram so that the distances between pairs of nodes can be compared? finally it seems that show_leaf_counts flag is ignored, is there a way to turn it on so that the number of objects in each class is shown? 
thanks.", "response":"The input to linkage() is either an n x m array, representing n points in m-dimensional space, or a one-dimensional array containing the condensed distance matrix. In your example, mat is 3 x 3, so you are clustering three 3-d points. Clustering is based on the distance between these points. Why does mat and 1-mat give identical clusterings here? The arrays mat and 1-mat produce the same clustering because the clustering is based on distances between the points, and neither a reflection (-mat) nor a translation (mat + offset) of the entire data set change the relative distances between the points. How can I annotate the distance along each branch of the tree using dendrogram so that the distances between pairs of nodes can be compared? In the code below, I show how you can use the data returned by dendrogram to label the horizontal segments of the diagram with the corresponding distance. The values associated with the keys icoord and dcoord give the x and y coordinates of each three-segment inverted-U of the figure. In augmented_dendrogram this data is used to add a label of the distance (i.e. y value) of each horizontal line segment in dendrogram. ``` from scipy.cluster.hierarchy import dendrogram import matplotlib.pyplot as plt def augmented_dendrogram(*args, **kwargs): ddata = dendrogram(*args, **kwargs) if not kwargs.get('no_plot', False): for i, d in zip(ddata['icoord'], ddata['dcoord']): x = 0.5 * sum(i[1:3]) y = d[1] plt.plot(x, y, 'ro') plt.annotate(\"%.3g\" % y, (x, y), xytext=(0, -8), textcoords='offset points', va='top', ha='center') return ddata ``` For your mat array, the augmented dendrogram is So point 'a' and 'c' are 1.01 units apart, and point 'b' is 1.57 units from the cluster ['a', 'c']. It seems that show_leaf_counts flag is ignored, is there a way to turn it on so that the number of objects in each class is shown? The flag show_leaf_counts only applies when not all the original data points are shown as leaves. 
For example, when truncate_mode = \"lastp\", only the last p nodes are shown. Here's an example with 100 points: ``` import numpy as np from scipy.cluster.hierarchy import linkage import matplotlib.pyplot as plt from augmented_dendrogram import augmented_dendrogram # Generate a random sample of `n` points in 2-d. np.random.seed(12312) n = 100 x = np.random.multivariate_normal([0, 0], np.array([[4.0, 2.5], [2.5, 1.4]]), size=(n,)) plt.figure(1, figsize=(6, 5)) plt.clf() plt.scatter(x[:, 0], x[:, 1]) plt.axis('equal') plt.grid(True) linkage_matrix = linkage(x, \"single\") plt.figure(2, figsize=(10, 4)) plt.clf() plt.subplot(1, 2, 1) show_leaf_counts = False ddata = augmented_dendrogram(linkage_matrix, color_threshold=1, p=6, truncate_mode='lastp', show_leaf_counts=show_leaf_counts, ) plt.title(\"show_leaf_counts = %s\" % show_leaf_counts) plt.subplot(1, 2, 2) show_leaf_counts = True ddata = augmented_dendrogram(linkage_matrix, color_threshold=1, p=6, truncate_mode='lastp', show_leaf_counts=show_leaf_counts, ) plt.title(\"show_leaf_counts = %s\" % show_leaf_counts) plt.show() ``` These are the points in the data set: With p=6 and truncate_mode=\"lastp\", dendrogram only shows the \"top\" of the dendrogram. The following shows the effect of show_leaf_counts.", "best_answers_score":0.8, "library_name":"matplotlib", "question_url":"https:\/\/stackoverflow.com\/questions\/11917779\/how-to-plot-and-annotate-hierarchical-clustering-dendrograms-in-scipy-matplotlib", "best_answers_votes":70, "question_length":1063, "response_length":3218 }, { "question":"How to get default blue colour of matplotlib.pyplot.scatter? How do I get the shade of blue that is used as default in matplotlib.pyplot.scatter? When giving the keyword argument c='b', it gives a darker shade of blue. In this documentation of matplotlib.pyplot.scatter, it says the default is supposed to be 'b', yet it looks different. 
See example below: ``` import matplotlib.pyplot as plt fig, ax = plt.subplots() ax.scatter(-1, 0) ax.text(-1, 0, 'Default blue') ax.scatter(1, 0, c='b') ax.text(1, 0, 'Darker blue') ax.set_xlim(-2, 2) ``` I'm using Python 3.5 with Matplotlib 2.0.0. The reason why I'm asking this, is because I would like to use the same blue colour when plotting some of the points one by one with plt.plot().", "response":"The default colour cycle was changed in matplotlib version 2 as shown in the docs. Therefore, to plot the \"new\" default blue you can do 2 things: ``` fig, ax = plt.subplots() ax.scatter(-1, 1) ax.text(-0.9, 1, 'Default blue') ax.scatter(1, 1, c='#1f77b4') ax.text(1.1, 1, 'Using hex value') ax.scatter(0, 0.5, c='C0') ax.text(0.1, 0.5, 'Using \"C0\" notation') ax.set_xlim(-2, 3) ax.set_ylim(-1,2) plt.show() ``` Which gives: Alternatively you can change the colour cycle back to what it was: ``` import matplotlib as mpl from cycler import cycler mpl.rcParams['axes.prop_cycle'] = cycler(color='bgrcmyk') ```", "best_answers_score":0.8, "library_name":"matplotlib", "question_url":"https:\/\/stackoverflow.com\/questions\/47074423\/how-to-get-default-blue-colour-of-matplotlib-pyplot-scatter", "best_answers_votes":69, "question_length":731, "response_length":607 }, { "question":"Matplotlib: TypeError: 'AxesSubplot' object is not subscriptable [duplicate] This question already has an answer here: understanding matplotlib.subplots python [duplicate] (1 answer) Closed 6 years ago. I am trying to make a simple box plot of a variable 'x' contained in two dataframes, df1 and df2. 
To do this I am using the following code: ``` fig, axs = plt.subplots() axs[0, 0].boxplot([df1['x'], df2['x']]) plt.show(); ``` However, I get this: ``` --------------------------------------------------------------------------- TypeError Traceback (most recent call last) in () ----> 2 axs[0, 0].boxplot([df1['x'], df2['x']]) 3 plt.show(); 4 TypeError: 'AxesSubplot' object is not subscriptable ``` Any ideas?", "response":"``` fig, axs = plt.subplots() ``` returns a figure with only one single subplot, so axs already holds it without indexing. ``` fig, axs = plt.subplots(3) ``` returns a 1D array of subplots. ``` fig, axs = plt.subplots(3, 2) ``` returns a 2D array of subplots. Note that this is only due to the default setting of the kwarg squeeze=True. By setting it to False you can force the result to be a 2D-array, independent of the number or arrangement of the subplots. ``` import numpy from PIL import Image import matplotlib.pyplot as plt imarray = numpy.random.rand(10,10,3) * 255 im = Image.fromarray(imarray.astype('uint8')).convert('RGBA') #rows = 1; cols = 1; #rows = 5; cols = 3; rows = 1; cols = 5; fig, ax = plt.subplots(rows, cols, squeeze=False) fig.suptitle('random plots') i = 0 for r in range(rows): for c in range(cols): ax[r][c].imshow(im) i = i + 1 plt.show() plt.close() ```", "best_answers_score":0.8, "library_name":"matplotlib", "question_url":"https:\/\/stackoverflow.com\/questions\/52273546\/matplotlib-typeerror-axessubplot-object-is-not-subscriptable", "best_answers_votes":78, "question_length":712, "response_length":884 }, { "question":"How do I show logarithmically spaced grid lines at all ticks on a log-log plot using Matplotlib? I'm trying to plot a log-log graph that shows logarithmically spaced grid lines at all of the ticks that you see along the bottom and left hand side of the plot. I've been able to show some gridlines by using matplotlib.pyplot.grid(True), but this is only showing grid lines for me at power of 10 intervals. 
So as an example, here is what I'm currently getting: I'd really like something with grid lines looking more like this, where the gridlines aren't all evenly spaced: How would I go about achieving this in Matplotlib?", "response":"Basically, you just need to put in the parameter which=\"both\" in the grid command so that it becomes: ``` matplotlib.pyplot.grid(True, which=\"both\") ``` Other options for which are 'minor' and 'major' which are the major ticks (which are shown in your graph) and the minor ticks which you are missing. If you want solid lines then you can use ls=\"-\" as a parameter to grid() as well. Here is an example for kicks: ``` import numpy as np from matplotlib import pyplot as plt x = np.arange(0, 100, .5) y = 2 * x**3 plt.loglog(x, y) plt.grid(True, which=\"both\", ls=\"-\") plt.show() ``` which generates: More details on the Matplotlib Docs", "best_answers_score":0.8, "library_name":"matplotlib", "question_url":"https:\/\/stackoverflow.com\/questions\/3590905\/how-do-i-show-logarithmically-spaced-grid-lines-at-all-ticks-on-a-log-log-plot-u", "best_answers_votes":65, "question_length":621, "response_length":634 }, { "question":"Getting the same subplot size using matplotlib imshow and scatter I am trying to plot an image (using matplotlib.imshow) and a scatter plot within the same figure. When trying this, the image appears smaller than the scatter plot. Small example code is shown below: ``` import matplotlib.pyplot as plt import numpy as np image = np.random.randint(100,200,(200,200)) x = np.arange(0,10,0.1) y = np.sin(x) fig, (ax1, ax2) = plt.subplots(1,2) ax1.imshow(image) ax2.scatter(x,y) plt.show() ``` Which gives the following figure: How can I get the two sublpots to have the same height? (and width I suppose) I have tried using gridspec as shown in this answer: ``` fig=plt.figure() gs=GridSpec(1,2) ax1=fig.add_subplot(gs[0,0]) ax2=fig.add_subplot(gs[0,1]) ax1.imshow(image) ax2.scatter(x,y) ``` But this gives the same result. 
I have also tried to adjust the subplot sizes manually by using: ``` fig = plt.figure() ax1 = plt.axes([0.05,0.05,0.45,0.9]) ax2 = plt.axes([0.55,0.19,0.45,0.62]) ax1.imshow(image) ax2.scatter(x,y) ``` By trial and error I can get the two subplots to the correct size, though any change in the overall figure size will mean that the subplots will no longer be the same size. Is there a way to make imshow and a scatter plot appear the same size in a figure without manually changing the axes sizes? I am using Python 2.7 and matplotlib 2.0.0", "response":"It's not perfectly clear what your desired outcome is. You may use automatic aspect on the image ``` ax.imshow(z, aspect=\"auto\") ``` Or you may set the aspect of the line plot depending on its axis limits such that it gets the same size as the image (in case the image has equal x and y sizes) ``` asp = np.diff(ax2.get_xlim())[0] \/ np.diff(ax2.get_ylim())[0] ax2.set_aspect(asp) ``` Complete code: ``` import numpy as np import matplotlib.pyplot as plt x = np.linspace(0,10,20) y = np.sin(x) z = np.random.rand(100,100) fig, (ax, ax2) = plt.subplots(ncols=2) ax.imshow(z) ax2.plot(x,y, marker=\".\") asp = np.diff(ax2.get_xlim())[0] \/ np.diff(ax2.get_ylim())[0] ax2.set_aspect(asp) plt.show() ``` If the image does not have equal limits (is not square), one still needs to divide by the aspect of the image: ``` asp = np.diff(ax2.get_xlim())[0] \/ np.diff(ax2.get_ylim())[0] asp \/= np.abs(np.diff(ax1.get_xlim())[0] \/ np.diff(ax1.get_ylim())[0]) ax2.set_aspect(asp) ``` More sophisticated solutions: This answer for using the subplot parameters to achieve a certain aspect. 
If you want to use mpl_toolkits and get your hands dirty, this answer would be a good read.", "best_answers_score":0.8, "library_name":"matplotlib", "question_url":"https:\/\/stackoverflow.com\/questions\/44654421\/getting-the-same-subplot-size-using-matplotlib-imshow-and-scatter", "best_answers_votes":65, "question_length":1363, "response_length":1164 }, { "question":"Axes class - set explicitly size (width\/height) of axes in given units I want to create a figure using matplotlib where I can explicitly specify the size of the axes, i.e. I want to set the width and height of the axes bbox. I have looked around all over and I cannot find a solution for this. What I typically find is how to adjust the size of the complete Figure (including ticks and labels), for example using fig, ax = plt.subplots(figsize=(w, h)) This is very important for me as I want to have a 1:1 scale of the axes, i.e. 1 unit in paper is equal to 1 unit in reality. For example, if xrange is 0 to 10 with major tick = 1 and x axis is 10cm, then 1 major tick = 1cm. I will save this figure as pdf to import it to a latex document. This question brought up a similar topic but the answer does not solve my problem (using plt.gca().set_aspect('equal', adjustable='box') code) From this other question I see that it is possible to get the axes size, but not how to modify them explicitly. Any ideas how I can set the axes box size and not just the figure size? The figure size should adapt to the axes size. Thanks! For those familiar with pgfplots in latex, this would be similar to the scale only axis option (see here for example).", "response":"The axes size is determined by the figure size and the figure spacings, which can be set using figure.subplots_adjust().
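For example, you can read the current spacings back from the figure and compute the axes box size from them. This is a minimal standalone sketch (not part of the answer's original code), using the non-interactive Agg backend so no window opens:

```python
import matplotlib
matplotlib.use('Agg')  # non-interactive backend for this sketch
import matplotlib.pyplot as plt

fig, ax = plt.subplots(figsize=(8, 6))
sp = fig.subplotpars           # the left/right/bottom/top spacing fractions
w, h = fig.get_size_inches()
# the axes box size in inches is the figure size scaled by the spacing fractions
axes_w = w * (sp.right - sp.left)
axes_h = h * (sp.top - sp.bottom)
print(axes_w, axes_h)
```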
In reverse this means that you can set the axes size by setting the figure size taking into account the figure spacings: ``` import matplotlib.pyplot as plt def set_size(w,h, ax=None): \"\"\" w, h: width, height in inches \"\"\" if not ax: ax=plt.gca() l = ax.figure.subplotpars.left r = ax.figure.subplotpars.right t = ax.figure.subplotpars.top b = ax.figure.subplotpars.bottom figw = float(w)\/(r-l) figh = float(h)\/(t-b) ax.figure.set_size_inches(figw, figh) fig, ax=plt.subplots() ax.plot([1,3,2]) set_size(5,5) plt.show() ```", "best_answers_score":0.8, "library_name":"matplotlib", "question_url":"https:\/\/stackoverflow.com\/questions\/44970010\/axes-class-set-explicitly-size-width-height-of-axes-in-given-units", "best_answers_votes":44, "question_length":1261, "response_length":643 }, { "question":"Python web hosting: Numpy, Matplotlib, Scientific Computing I write scientific software in Numpy\/Scipy\/Matplotlib. Having developed applications on my home computer, I am now interested in writing simple web applications. Example: user uploads image or audio file, my program processes it using Numpy\/Scipy, and output is displayed on the browser using Matplotlib, or perhaps the user can download a processed file. I already pay for hosting that does have Python 2.4.3 installed, but no Numpy\/Scipy. I don't have shell access via command line, either. Just drag-and-drop FTP. Pretty limited, but I can get simple Python\/CGI scripts working. Surprisingly, a web search revealed few suitable options for web hosting with these capabilities already built in. (Please guide me if I am wrong.) I am learning about the Google App Engine, but I still don't have a full understanding about its tools and limitations. What the web did tell me is that others have similar concerns. Hoping for solutions, I thought I would ask these simple questions to the awesome SO community: Is there a simple way of installing numpy (or any third-party package\/library) onto my already hosted space?
I know the Python path on my hosted space, and I know the relevant Python\/Numpy directories on my home computer. Can I simply copy files over and have it work? Both local and remote systems run Ubuntu. What hosting sites exist (either free or paid) which have Numpy\/Matplotlib installed or, if not installed, the possibility of installing it? Are there any documented sites that you can reference with working applications, no matter how simple? Can Google App Engine help me in any way? Or is it totally for something else? Have you or others used it to write scientific applications in Python\/Numpy? If so, could you reference them? Thank you for your help. EDIT: After the useful answers below, I bought the $20 plan at Slicehost, and I love it so far! (I first tried Amazon EC2. I must be stupid, but I just couldn't get it to work.) Setting up the Ubuntu server with Apache took mere hours (and I'm an Apache novice). It allows me to do exactly what I wanted with Python plus much more. I now have my own remote repository for version control, too. Thanks again! EDIT 2: Nearly two years later, I tried Linode and EC2 (again). Linode is great. EC2 seemed easier this time around -- maybe it's just added experience, or maybe it's the improvements that Amazon made to the AWS management console. 
For those interested in Numpy\/Scipy\/Matplotlib\/Audiolab, here is my Ubuntu cheat sheet whenever I launch an EC2 instance: ``` ec2:~$ sudo aptitude install build-essential python-scipy ipython python-matplotlib python-dev python-setuptools libsndfile-dev libasound2-dev mysql-server python-mysqldb Upload scikits.audiolab-0.11.0 ec2:~\/scikits.audiolab-0.11.0$ sudo python setup.py install ec2:~$ sudo rm -rf scikits.audiolab-0.11.0 ec2:~$ nano .ipython\/ipy_user_conf.py ip.ex('import matplotlib; matplotlib.use(\"Agg\"); import scipy, pylab, scipy.signal as sig, scipy.linalg as lin, scipy.sparse as spar, os, sys, MySQLdb, boto; from scikits import audiolab') import ipy_greedycompleter import ipy_autoreload ```", "response":"1: Installing third party packages to hosted spaces You can indeed install third party packages to your hosted space. If it's a pure python package, all that's needed is to unpack it to a directory and then add that directory to your PYTHONPATH environment variable or sys.path. This can be tiring to do often, and won't work easily for compiled modules. If you have shell access to your python host, the excellent virtualenv package allows you to do set up a private python environment with its own libraries. To set up your virtualenv, you'll do something like this at the shell: ``` $ virtualenv $HOME\/my_python $ $HOME\/my_python\/bin\/easy_install numpy ``` You can keep running easy_install for anything else you want to install in your personal python environment. 
Now, when you write your python scripts, you will want to use your private python interpreter, if that is possible: ``` #!\/home\/myuser\/my_python\/bin\/python import numpy # script here ``` If your python env cannot be specified (such as if run by mod_wsgi), you will need to add it to the import path: ``` import sys sys.path.insert(0, '\/home\/myuser\/my_python\/lib\/python2.5\/site-packages') import numpy ``` 2: Hosting sites with numpy I can't think of any hosting sites offhand which offer numpy pre-installed. However, Dreamhost\/Bluehost for sharedhosts provide SSH access, and with shell access you can install numpy using the methods I described above. Any Virtual Private Server such as Linode\/Slicehost will allow you to install whatever you desire, as well. 3: AppEngine As mentioned above, AppEngine will not allow you to install C extensions (but pure python ones do work) so it's unlikely numpy will work for you on there, since I suspect some of its features use C speedups.", "best_answers_score":0.8, "library_name":"matplotlib", "question_url":"https:\/\/stackoverflow.com\/questions\/2080110\/python-web-hosting-numpy-matplotlib-scientific-computing", "best_answers_votes":18, "question_length":3188, "response_length":1751 }, { "question":"graphing an equation I'm trying to make a function that will graph whatever formula I tell it to. ``` import numpy as np import matplotlib.pyplot as plt def graph(formula, x_range): x = np.array(x_range) y = formula plt.plot(x, y) plt.show() ``` When I try to call it the following error happens, I believe it's trying to do the multiplication before it gets to y = formula. ``` graph(x**3+2*x-4, range(-10, 11)) Traceback (most recent call last): File \"\", line 1, in graph(x**3+2*x-4, range(-10, 11)) NameError: name 'x' is not defined ```", "response":"Your guess is right: the code is trying to evaluate x**3+2*x-4 immediately. Unfortunately you can't really prevent it from doing so. 
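You can see this eager evaluation directly. In the sketch below, a hypothetical identity function stands in for graph, just to show that the argument expression is computed down to a plain number before the function body ever runs:

```python
def identity(value):
    # a stand-in for graph(): by the time we get here,
    # the caller has already evaluated the whole expression
    return value

x = 2
result = identity(x**3 + 2*x - 4)  # the expression is evaluated first
print(result)
```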
The good news is that in Python, functions are first-class objects, by which I mean that you can treat them like any other variable. So to fix your function, we could do: ``` import numpy as np import matplotlib.pyplot as plt def graph(formula, x_range): x = np.array(x_range) y = formula(x) # <- note now we're calling the function 'formula' with x plt.plot(x, y) plt.show() def my_formula(x): return x**3+2*x-4 graph(my_formula, range(-10, 11)) ``` If you wanted to do it all in one line, you could use what's called a lambda function, which is just a short function without a name where you don't use def or return: ``` graph(lambda x: x**3+2*x-4, range(-10, 11)) ``` And instead of range, you can look at np.arange (which allows for non-integer increments), and np.linspace, which allows you to specify the start, stop, and the number of points to use.", "best_answers_score":0.8, "library_name":"matplotlib", "question_url":"https:\/\/stackoverflow.com\/questions\/14000595\/graphing-an-equation", "best_answers_votes":82, "question_length":541, "response_length":989 }, { "question":"Permission denied error by installing matplotlib I installed opencv with all dependencies. After the installation I tried to import matplotlib for a simple example. Then I got the following error, when I tried to install matplotlib via pip with pip install matplotlib: ``` Could not install packages due to an EnvironmentError: [Errno 13] Permission denied: '\/usr\/local\/lib\/python3.5\/dist-packages\/kiwisolver.cpython-35m-x86_64-linux-gnu.so' Consider using the `--user` option or check the permissions. ``` What can I do to install matplotlib?", "response":"It looks like your user does not have the permission to install packages in your system (for all users). Here's how to fix this problem for Linux, macOS and Windows. 
Linux \/ macOS From your terminal, you can install the package for your user only, like this: ``` pip install --user matplotlib ``` OR You can use su or sudo from your terminal to install the package as root: ``` sudo pip install matplotlib ``` Windows From the Command Prompt, you can install the package for your user only, like this: ``` pip install --user matplotlib ``` OR You can install the package as Administrator, by following these steps: Right click on the Command Prompt icon Select the option Run This Program As An Administrator Run the command pip install matplotlib ", "best_answers_score":0.8, "library_name":"matplotlib", "question_url":"https:\/\/stackoverflow.com\/questions\/50087098\/permission-denied-error-by-installing-matplotlib", "best_answers_votes":99, "question_length":543, "response_length":708 }, { "question":"How to increase the font size of the legend in seaborn I have the following code to create a Seaborn strip plot. I am having a hard time figuring out how to increase the font size of the legend appearing in the plot. ```py g=sns.stripplot(x=\"Market\", y=\"Rate\", hue=\"Group\",data=myBenchmarkData, jitter=True, size=12, alpha=0.5) g.axes.set_title(\"4* Rate Market and by Hotel Groups for Year 2016\",fontsize=25) g.set_xlabel(\"Market\",fontsize=20) g.set_ylabel(\"Rate (in EUR)\",fontsize=20) g.tick_params(labelsize=15) plt.savefig ('benchmark1.png') ``` I am OK with my x-axis and y-axis labels font size but the font size of the legend in my plot is small. 
How to change it?", "response":"Use the matplotlib function setp according to this example: ``` import seaborn as sns import matplotlib.pylab as plt sns.set_style(\"whitegrid\") tips = sns.load_dataset(\"tips\") ax = sns.stripplot(x=\"sex\", y=\"total_bill\", hue=\"day\", data=tips, jitter=True) plt.setp(ax.get_legend().get_texts(), fontsize='22') # for legend text plt.setp(ax.get_legend().get_title(), fontsize='32') # for legend title plt.show() ``` Another way is to change the font_scale of the whole graph with plotting_context: http:\/\/seaborn.pydata.org\/generated\/seaborn.plotting_context.html", "best_answers_score":0.8, "library_name":"matplotlib", "question_url":"https:\/\/stackoverflow.com\/questions\/44880444\/how-to-increase-the-font-size-of-the-legend-in-seaborn", "best_answers_votes":72, "question_length":671, "response_length":546 }, { "question":"How to add title and xlabel and ylabel Is there a way to add a title (and xlabel and ylabel) to plt.scatter(x,y,...) or plt.plot(x,y,...) directly without writing additional lines? It is easy to add it when we use Series_name.plot, in which we simply write Series_name.plot(...,title='name'), but it does not work for me if I write: plt.scatter(...,title='name') or plt.plot(...,title='name') [plt<< import matplotlib.pyplot as plt] I am using Python 3.", "response":"From the documentation of plt.scatter() there are no such arguments to set the title or labels. But neither does the plt.plot() command have such arguments. plt.plot(x,y, title=\"title\") throws an error AttributeError: Unknown property title. So I wonder why this should work in either case. In any case, the usual way to set the title is plt.title. The usual way to set the labels is plt.xlabel and plt.ylabel. 
``` import matplotlib.pyplot as plt x= [8,3,5]; y = [3,4,5] plt.scatter(x,y) plt.title(\"title\") plt.xlabel(\"x-label\") plt.ylabel(\"y-label\") plt.show() ```", "best_answers_score":0.8, "library_name":"matplotlib", "question_url":"https:\/\/stackoverflow.com\/questions\/42223587\/how-to-add-title-and-xlabel-and-ylabel", "best_answers_votes":79, "question_length":449, "response_length":563 }, { "question":"Matplotlib: How to make two histograms have the same bin width? I've spent some time searching the interwebs for an answer for this, and I have tried looking all over SO for an answer too, but I think I do not have the correct terminology down... Please excuse me if this is a duplicate of some known problem, I'd happily delete my post and refer to that post instead! In any case, I am trying to plot two histograms on the same figure in Matplotlib. My two data sources are lists 500 elements long. To provide an illustration of the problem I am facing, please see the following image: As you can see, the histogram has uneven bin sizes under default parameters, even though the number of bins is the same. I would like to guarantee that the bin widths for both histograms are the same. Is there any way I can do this? 
Thanks in advance!", "response":"I think a consistent way that will easily work for most cases, without having to worry about the distribution range of each of your datasets, will be to put the datasets together into a big one, determine the bin edges and then plot: ``` import numpy as np import matplotlib.pyplot as plt a = np.random.random(100) * 0.5 # A uniform distribution b = 1 - np.random.normal(size=100) * 0.1 # A normal distribution bins = np.histogram(np.hstack((a,b)), bins=40)[1] # Get the bin edges plt.hist(a, bins) plt.hist(b, bins) ```", "best_answers_score":0.8, "library_name":"matplotlib", "question_url":"https:\/\/stackoverflow.com\/questions\/23617129\/matplotlib-how-to-make-two-histograms-have-the-same-bin-width", "best_answers_votes":56, "question_length":841, "response_length":479 }, { "question":"Seaborn heatmap not displaying all xticks and yticks I have a pandas dataframe of shape (39, 67). When I plot its seaborn heatmap, I don't get as many labels on the X and Y axes. The .get_xticklabels() method also returns only 23 labels. matplotlib doesn't show any labels (only numbers) as well. Both these heatmaps are for the same dataframe (39, 67).", "response":"To ensure the labels are visible, you have to set the parameters xticklabels, yticklabels to True, like so. ``` import seaborn as sns sns.heatmap(dataframe, xticklabels=True, yticklabels=True) ``` Here's the documentation for the heatmap function.", "best_answers_score":0.8, "library_name":"matplotlib", "question_url":"https:\/\/stackoverflow.com\/questions\/50754471\/seaborn-heatmap-not-displaying-all-xticks-and-yticks", "best_answers_votes":84, "question_length":350, "response_length":247 }, { "question":"Wrapping long y labels in matplotlib tight layout using setp I've been trying to wrap text for long labels in my code. I tried the textwrap method suggested earlier here, but my code defines yticklabels through an array imported from a csv using the pyplot.setp() method. I'm using tight_layout() for the formatting otherwise. 
So the question is - is there a way to wrap the really long y labels to newlines easily? Here is some sample code that I'd like a fix for: ``` import numpy as np import matplotlib.pyplot as plt labels=('Really really really really really really long label 1', 'Really really really really really really long label 2', 'Really really really really really really long label 3') values=(30,50,40) fig = plt.figure() ax=fig.add_subplot(111) plt.ylim((0,40)) for i in np.arange(3): plt.barh(15*i, values[i]) plt.yticks(15*np.arange(3)) plt.setp(ax.set_yticklabels(labels)) plt.tight_layout() plt.show() ``` This plots something like this. I'd like the labels to go to newlines after a fixed width. Any ideas?", "response":"I have tried using textwrap on the labels and it works for me. ``` from textwrap import wrap labels=['Really really really really really really long label 1', 'Really really really really really really long label 2', 'Really really really really really really long label 3'] labels = [ '\\n'.join(wrap(l, 20)) for l in labels ] ``` Inserting this in your code gives us:", "best_answers_score":0.8, "library_name":"matplotlib", "question_url":"https:\/\/stackoverflow.com\/questions\/15740682\/wrapping-long-y-labels-in-matplotlib-tight-layout-using-setp", "best_answers_votes":73, "question_length":1029, "response_length":368 }, { "question":"Remove colorbar from figure This should be easy but I'm having a hard time with it. Basically, I have a subplot in matplotlib that I'm drawing a hexbin plot in every time a function is called, but every time I call the function I get a new colorbar, so what I'd really like to do is update the colorbar. Unfortunately, this doesn't seem to work since the object the colorbar is attached to is being recreated by subplot.hexbin. ``` def foo(self): self.subplot.clear() hb = self.subplot.hexbin(...) 
if self.cb: self.cb.update_bruteforce() # Doesn't work (hb is new) else: self.cb = self.figure.colorbar(hb) ``` I'm now in this annoying place where I'm trying to delete the colorbar axes altogether and simply recreate it. Unfortunately, when I delete the colorbar axes, the subplot axes don't reclaim the space, and calling self.subplot.reset_position() isn't doing what I thought it would. ``` def foo(self): self.subplot.clear() hb = self.subplot.hexbin(...) if self.cb: self.figure.delaxes(self.figure.axes[1]) del self.cb # TODO: resize self.subplot so it fills the # whole figure before adding the new colorbar self.cb = self.figure.colorbar(hb) ```", "response":"I think the problem is that del removes the variable, but not the referenced colorbar object. If you want the colorbar to be removed from the plot and disappear, you have to use the method remove of the colorbar instance, and to do this you need to have the colorbar in a variable, for which you have two options: hold the colorbar in a variable at the moment of creation, as shown in other answers, e.g. cb=plt.colorbar() retrieve an existing colorbar, which you can do by following (and upvoting :)) what I wrote here: How to retrieve colorbar instance from figure in matplotlib then: cb.remove() plt.draw() #update plot Full code and result: ``` from matplotlib import pyplot as plt import numpy as np plt.ion() plt.imshow(np.random.random(15).reshape((5,3))) cb = plt.colorbar() plt.savefig('test01.png') cb.remove() plt.savefig('test02.png') ```", "best_answers_score":0.8, "library_name":"matplotlib", "question_url":"https:\/\/stackoverflow.com\/questions\/5263034\/remove-colorbar-from-figure", "best_answers_votes":39, "question_length":1153, "response_length":848 }, { "question":"countplot() with frequencies I have a Pandas DataFrame with a column called \"AXLES\", which can take an integer value between 3-12. 
I am trying to use Seaborn's countplot() option to achieve the following plot: left y axis shows the frequencies of these values occurring in the data. The axis extent is [0%-100%], tick marks at every 10%. right y axis shows the actual counts, values correspond to tick marks determined by the left y axis (marked at every 10%.) x axis shows the categories for the bar plots [3, 4, 5, 6, 7, 8, 9, 10, 11, 12]. Annotations on top of the bars show the actual percentage of that category. The following code gives me the plot below, with actual counts, but I could not find a way to convert them into frequencies. I can get the frequencies using df.AXLES.value_counts()\/len(df.index) but I am not sure about how to plug this information into Seaborn's countplot(). I also found a workaround for the annotations, but I am not sure if that is the best implementation. Any help would be appreciated! Thanks ``` plt.figure(figsize=(12,8)) ax = sns.countplot(x=\"AXLES\", data=dfWIM, order=[3,4,5,6,7,8,9,10,11,12]) plt.title('Distribution of Truck Configurations') plt.xlabel('Number of Axles') plt.ylabel('Frequency [%]') for p in ax.patches: ax.annotate('%{:.1f}'.format(p.get_height()), (p.get_x()+0.1, p.get_height()+50)) ``` EDIT: I got closer to what I need with the following code, using Pandas' bar plot, ditching Seaborn. Feels like I'm using so many workarounds, and there has to be an easier way to do it. The issues with this approach: There is no order keyword in Pandas' bar plot function as Seaborn's countplot() has, so I cannot plot all categories from 3-12 as I did in the countplot(). I need to have them shown even if there is no data in that category.
``` plt.figure(figsize=(12,8)) plt.title('Distribution of Truck Configurations') plt.xlabel('Number of Axles') plt.ylabel('Frequency [%]') ax = (dfWIM.AXLES.value_counts()\/len(df)*100).sort_index().plot(kind=\"bar\", rot=0) ax.set_yticks(np.arange(0, 110, 10)) ax2 = ax.twinx() ax2.set_yticks(np.arange(0, 110, 10)*len(df)\/100) for p in ax.patches: ax.annotate('{:.2f}%'.format(p.get_height()), (p.get_x()+0.15, p.get_height()+1)) ```", "response":"You can do this by making a twinx axes for the frequencies. You can switch the two y axes around so the frequencies stay on the left and the counts on the right, but without having to recalculate the counts axis (here we use tick_left() and tick_right() to move the ticks, and set_label_position to move the axis labels). You can then set the ticks using the matplotlib.ticker module, specifically ticker.MultipleLocator and ticker.LinearLocator. As for your annotations, you can get the x and y locations for all 4 corners of the bar with patch.get_bbox().get_points(). This, along with setting the horizontal and vertical alignment correctly, means you don't need to add any arbitrary offsets to the annotation location.
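A minimal standalone sketch of reading those corner coordinates from a bar patch (not the answer's original code; it uses the Agg backend so nothing is drawn on screen):

```python
import matplotlib
matplotlib.use('Agg')
import matplotlib.pyplot as plt

fig, ax = plt.subplots()
bars = ax.bar([0, 1, 2], [3, 5, 4])
p = bars[0]
corners = p.get_bbox().get_points()   # [[x0, y0], [x1, y1]] in data coordinates
x_center = corners[:, 0].mean()       # horizontal centre of the bar
y_top = corners[1, 1]                 # top edge of the bar
print(x_center, y_top)
```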
Finally, you need to turn the grid off for the twinned axis, to prevent grid lines showing up on top of the bars (ax2.grid(None)) Here is a working script: ``` import pandas as pd import matplotlib.pyplot as plt import numpy as np import seaborn as sns import matplotlib.ticker as ticker # Some random data dfWIM = pd.DataFrame({'AXLES': np.random.normal(8, 2, 5000).astype(int)}) ncount = len(dfWIM) plt.figure(figsize=(12,8)) ax = sns.countplot(x=\"AXLES\", data=dfWIM, order=[3,4,5,6,7,8,9,10,11,12]) plt.title('Distribution of Truck Configurations') plt.xlabel('Number of Axles') # Make twin axis ax2=ax.twinx() # Switch so count axis is on right, frequency on left ax2.yaxis.tick_left() ax.yaxis.tick_right() # Also switch the labels over ax.yaxis.set_label_position('right') ax2.yaxis.set_label_position('left') ax2.set_ylabel('Frequency [%]') for p in ax.patches: x=p.get_bbox().get_points()[:,0] y=p.get_bbox().get_points()[1,1] ax.annotate('{:.1f}%'.format(100.*y\/ncount), (x.mean(), y), ha='center', va='bottom') # set the alignment of the text # Use a LinearLocator to ensure the correct number of ticks ax.yaxis.set_major_locator(ticker.LinearLocator(11)) # Fix the frequency range to 0-100 ax2.set_ylim(0,100) ax.set_ylim(0,ncount) # And use a MultipleLocator to ensure a tick spacing of 10 ax2.yaxis.set_major_locator(ticker.MultipleLocator(10)) # Need to turn the grid on ax2 off, otherwise the gridlines end up on top of the bars ax2.grid(None) plt.savefig('snscounter.pdf') ```", "best_answers_score":0.8, "library_name":"matplotlib", "question_url":"https:\/\/stackoverflow.com\/questions\/33179122\/countplot-with-frequencies", "best_answers_votes":53, "question_length":2360, "response_length":2212 }, { "question":"Matplotlib: TypeError: can't multiply sequence by non-int of type 'numpy.float64' I am trying to fit a linear line of best fit to my matplotlib graph. I keep getting the error that x and y do not have the same first dimension. But they both have lengths of 15. 
What am I doing wrong? ``` import matplotlib.pyplot as plt from scipy import stats import numpy as np x = [0.46,0.59,0.68,0.99,0.39,0.31,1.09,0.77,0.72,0.49,0.55,0.62,0.58,0.88,0.78] y = [0.315,0.383,0.452,0.650,0.279,0.215,0.727,0.512,0.478,0.335,0.365,0.424,0.390,0.585,0.511] xerr = [0.01]*15 yerr = [0.001]*15 plt.rc('font', family='serif', size=13) m, b = np.polyfit(x, y, 1) plt.plot(x,y,'s',color='#0066FF') plt.plot(x, m*x + b, 'r-') #BREAKS ON THIS LINE plt.errorbar(x,y,xerr=xerr,yerr=0,linestyle=\"None\",color='black') plt.xlabel('$\\Delta t$ $(s)$',fontsize=20) plt.ylabel('$\\Delta p$ $(hPa)$',fontsize=20) plt.autoscale(enable=True, axis=u'both', tight=False) plt.grid(False) plt.xlim(0.2,1.2) plt.ylim(0,0.8) plt.show() ``` Error ```py --------------------------------------------------------------------------- TypeError Traceback (most recent call last) ~\\AppData\\Local\\Temp\/ipykernel_34116\/1820029981.py in 7 m, b = np.polyfit(x, y, 1) 8 plt.plot(x,y,'s',color='#0066FF') ----> 9 plt.plot(x, m*x + b, 'r-') #BREAKS ON THIS LINE 10 plt.errorbar(x,y,xerr=xerr,yerr=0,linestyle=\"None\",color='black') 11 plt.xlabel('$\\Delta t$ $(s)$',fontsize=20) TypeError: can't multiply sequence by non-int of type 'numpy.float64' ```", "response":"You should make x and y numpy arrays, not lists: ``` x = np.array([0.46,0.59,0.68,0.99,0.39,0.31,1.09, 0.77,0.72,0.49,0.55,0.62,0.58,0.88,0.78]) y = np.array([0.315,0.383,0.452,0.650,0.279,0.215,0.727,0.512, 0.478,0.335,0.365,0.424,0.390,0.585,0.511]) ``` With this change, it produces the expected plot. If they are lists, m * x will not produce the result you expect, but an empty list. Note that m is a numpy.float64 scalar, not a standard Python float. I actually consider this a bit dubious behavior of Numpy. 
In normal Python, multiplying a list with an integer just repeats the list: ``` In [42]: 2 * [1, 2, 3] Out[42]: [1, 2, 3, 1, 2, 3] ``` while multiplying a list with a float gives an error (as I think it should): ``` In [43]: 1.5 * [1, 2, 3] --------------------------------------------------------------------------- TypeError Traceback (most recent call last) in () ----> 1 1.5 * [1, 2, 3] TypeError: can't multiply sequence by non-int of type 'float' ``` The weird thing is that multiplying a Python list with a Numpy scalar apparently works: ``` In [45]: np.float64(0.5) * [1, 2, 3] Out[45]: [] In [46]: np.float64(1.5) * [1, 2, 3] Out[46]: [1, 2, 3] In [47]: np.float64(2.5) * [1, 2, 3] Out[47]: [1, 2, 3, 1, 2, 3] ``` So it seems that the float gets truncated to an int, after which you get the standard Python behavior of repeating the list, which is quite unexpected behavior. The best thing would have been to raise an error (so that you would have spotted the problem yourself instead of having to ask your question on Stackoverflow) or to just show the expected element-wise multiplication (in which case your code would have just worked). Interestingly, addition between a list and a Numpy scalar does work: ``` In [69]: np.float64(0.123) + [1, 2, 3] Out[69]: array([ 1.123, 2.123, 3.123]) ```", "best_answers_score":0.8, "library_name":"matplotlib", "question_url":"https:\/\/stackoverflow.com\/questions\/26690480\/matplotlib-typeerror-cant-multiply-sequence-by-non-int-of-type-numpy-float64", "best_answers_votes":48, "question_length":1493, "response_length":1814 }, { "question":"How to obtain the same font(-style, -size etc.) in matplotlib output as in latex output? I have a .tex document in which one graph is made by the python module matplotlib. What I want is for the graph to blend into the document as well as possible, so that the characters used in the graph look exactly like the same characters in the rest of the document.
My first try looks like this (the matplotlibrc-file): ``` text.usetex : True text.latex.preamble: \\usepackage{lmodern} #Used in .tex-document font.size : 11.0 #Same as in .tex-document backend: PDF ``` For compiling the .tex in which the PDF output of matplotlib is included, pdflatex is used. Now, the output does not look bad, but it looks somewhat different: the characters in the graph seem weaker in stroke width. What is the best approach for this? EDIT: Minimum example: LaTeX-Input: ``` \\documentclass[11pt]{scrartcl} \\usepackage[T1]{fontenc} \\usepackage[utf8]{inputenc} \\usepackage{lmodern} \\usepackage{graphicx} \\begin{document} \\begin{figure} \\includegraphics{.\/graph} \\caption{Excitation-Energy} \\label{fig:graph} \\end{figure} \\end{document} ``` Python-Script: ``` import matplotlib.pyplot as plt import numpy as np plt.plot([1,2,3,4]) plt.xlabel(\"Excitation-Energy\") plt.ylabel(\"Intensit\u00e4t\") plt.savefig(\"graph.pdf\") ``` PDF output:", "response":"The difference in the fonts can be caused by incorrect parameter settings when producing the figure with matplotlib, or by incorrect integration of the figure into the final document. I think the problem is text.latex.preamble: \\usepackage{lmodern}. This setting works very badly, and even the developers do not guarantee that it will work, as you can find here. In my case it did not work at all. Minor differences in font are associated with the font family. To fix this you need 'font.family' : 'lmodern' in rc. Other options and more detailed settings can be found here. To get around this problem, I used a slightly different, direct method: plt.rcParams['text.latex.preamble']=[r\"\\usepackage{lmodern}\"]. Oddly enough, it worked. Further information can be found at the link above.
To prevent these effects, I suggest taking a look at this code: ``` import matplotlib.pyplot as plt #Direct input plt.rcParams['text.latex.preamble']=[r\"\\usepackage{lmodern}\"] #Options params = {'text.usetex' : True, 'font.size' : 11, 'font.family' : 'lmodern', 'text.latex.unicode': True, } plt.rcParams.update(params) fig = plt.figure() #You must select the correct size of the plot in advance fig.set_size_inches(3.54,3.54) plt.plot([1,2,3,4]) plt.xlabel(\"Excitation-Energy\") plt.ylabel(\"Intensit\u00e4t\") plt.savefig(\"graph.pdf\", #This is a simple recommendation for publication plots dpi=1000, # Plot will occupy the maximum of available space bbox_inches='tight', ) ``` And finally move on to the latex: ``` \\documentclass[11pt]{scrartcl} \\usepackage[T1]{fontenc} \\usepackage[utf8]{inputenc} \\usepackage{lmodern} \\usepackage{graphicx} \\begin{document} \\begin{figure} \\begin{center} \\includegraphics{.\/graph} \\caption{Excitation-Energy} \\label{fig:graph} \\end{center} \\end{figure} \\end{document} ``` Results: as can be seen from a comparison of the two fonts, there are no differences (1 - MatPlotlib, 2 - pdfLaTeX)", "best_answers_score":0.8, "library_name":"matplotlib", "question_url":"https:\/\/stackoverflow.com\/questions\/17687213\/how-to-obtain-the-same-font-style-size-etc-in-matplotlib-output-as-in-latex", "best_answers_votes":33, "question_length":1315, "response_length":1851 }, { "question":"How to set axis ticks in multiples of pi (Python) (matplotlib) I'd like to make a plot in Python and have x range display ticks in multiples of pi. Is there a good way to do this, not manually? I'm thinking of using matplotlib, but other options are fine.
EDIT 3: EL_DON's solution worked for me like this: ``` import matplotlib.ticker as tck import matplotlib.pyplot as plt import numpy as np f,ax=plt.subplots(figsize=(20,10)) x=np.linspace(-10*np.pi, 10*np.pi,1000) y=np.sin(x) ax.plot(x\/np.pi,y) ax.xaxis.set_major_formatter(tck.FormatStrFormatter('%g $\\pi$')) ax.xaxis.set_major_locator(tck.MultipleLocator(base=1.0)) plt.style.use(\"ggplot\") plt.show() ``` giving: EDIT 2 (solved in EDIT 3!): EL_DON's answer doesn't seem to work right for me: ``` import matplotlib.ticker as tck import matplotlib.pyplot as plt import numpy as np f,ax=plt.subplots(figsize=(20,10)) x=np.linspace(-10*np.pi, 10*np.pi) y=np.sin(x) ax.plot(x\/np.pi,y) ax.xaxis.set_major_formatter(tck.FormatStrFormatter('%g $\\pi$')) ax.xaxis.set_major_locator(tck.MultipleLocator(base=1.0)) plt.style.use(\"ggplot\") plt.show() ``` gives me which really doesn't look right", "response":"This is inspired by Python Data Science Handbook, although Sage attempts to do without explicit parameters. EDIT: I've generalized this to allow you to supply as optional parameters the denominator, the value of the unit, and the LaTeX label for the unit. A class definition is included if you find that helpful. 
``` import numpy as np import matplotlib.pyplot as plt def multiple_formatter(denominator=2, number=np.pi, latex='\\pi'): def gcd(a, b): while b: a, b = b, a%b return a def _multiple_formatter(x, pos): den = denominator num = int(np.rint(den*x\/number)) com = gcd(num,den) (num,den) = (int(num\/com),int(den\/com)) if den==1: if num==0: return r'$0$' if num==1: return r'$%s$'%latex elif num==-1: return r'$-%s$'%latex else: return r'$%s%s$'%(num,latex) else: if num==1: return r'$\\frac{%s}{%s}$'%(latex,den) elif num==-1: return r'$\\frac{-%s}{%s}$'%(latex,den) else: return r'$\\frac{%s%s}{%s}$'%(num,latex,den) return _multiple_formatter class Multiple: def __init__(self, denominator=2, number=np.pi, latex='\\pi'): self.denominator = denominator self.number = number self.latex = latex def locator(self): return plt.MultipleLocator(self.number \/ self.denominator) def formatter(self): return plt.FuncFormatter(multiple_formatter(self.denominator, self.number, self.latex)) ``` This can be used very simply, without any parameters: ``` x = np.linspace(-np.pi, 3*np.pi,500) plt.plot(x, np.cos(x)) plt.title(r'Multiples of $\\pi$') ax = plt.gca() ax.grid(True) ax.set_aspect(1.0) ax.axhline(0, color='black', lw=2) ax.axvline(0, color='black', lw=2) ax.xaxis.set_major_locator(plt.MultipleLocator(np.pi \/ 2)) ax.xaxis.set_minor_locator(plt.MultipleLocator(np.pi \/ 12)) ax.xaxis.set_major_formatter(plt.FuncFormatter(multiple_formatter())) plt.show() ``` Or it can be used in a more sophisticated way: ``` tau = np.pi*2 den = 60 major = Multiple(den, tau, r'\\tau') minor = Multiple(den*4, tau, r'\\tau') x = np.linspace(-tau\/60, tau*8\/60,500) plt.plot(x, np.exp(-x)*np.cos(60*x)) plt.title(r'Multiples of $\\tau$') ax = plt.gca() ax.grid(True) ax.axhline(0, color='black', lw=2) ax.axvline(0, color='black', lw=2) ax.xaxis.set_major_locator(major.locator()) ax.xaxis.set_minor_locator(minor.locator()) ax.xaxis.set_major_formatter(major.formatter()) plt.show() ```",
"best_answers_score":0.8, "library_name":"matplotlib", "question_url":"https:\/\/stackoverflow.com\/questions\/40642061\/how-to-set-axis-ticks-in-multiples-of-pi-python-matplotlib", "best_answers_votes":48, "question_length":1139, "response_length":2277 }, { "question":"Matplotlib - Border around scatter plot points I am following this tutorial. I would like to use Matplotlib to create a scatter plot with points that are colored inside, but have a black border, such as this plot: However, when I copy the code exactly, I get this plot instead. Here is the code: ``` colors = ['black', 'blue', 'purple', 'yellow', 'white', 'red', 'lime', 'cyan', 'orange', 'gray'] for i in range(len(colors)): x = reduced_data_rpca[:, 0][digits.target == i] y = reduced_data_rpca[:, 1][digits.target == i] plt.scatter(x, y, c=colors[i]) plt.legend(digits.target_names, bbox_to_anchor=(1.05, 1), loc=2, borderaxespad=0.) plt.xlabel('First Principal Component') plt.ylabel('Second Principal Component') plt.title(\"PCA Scatter Plot\") plt.show() ``` I tried adjusting the style, but that didn't help.", "response":"When you use scatter plot, you set a color for both face and edge. In the official documentation you can find an additional parameter, edgecolors, which allows setting the edge color. edgecolors : color or sequence of color, optional, default: None If None, defaults to \u2018face\u2019 If \u2018face\u2019, the edge color will always be the same as the face color. If it is \u2018none\u2019, the patch boundary will not be drawn. For non-filled markers, the edgecolors kwarg is ignored and forced to \u2018face\u2019 internally. 
So, after all, you need only plt.scatter(x, y, c=colors[i],edgecolors='black')", "best_answers_score":0.8, "library_name":"matplotlib", "question_url":"https:\/\/stackoverflow.com\/questions\/50706901\/matplotlib-border-around-scatter-plot-points", "best_answers_votes":71, "question_length":812, "response_length":568 }, { "question":"How to adjust space between legend markers and labels I want to adjust space between legend markers and labels. Sometime the space is too much as default. Does anyone know how to do this? Thanks.", "response":"legend() has a kwarg in called handletextpad which will do what you are looking for. By default, this is set to 0.8. From the docs: handletextpad : float or None The pad between the legend handle and text. Measured in font-size units. Default is None which will take the value from the legend.handletextpad rcParam. So when you call legend, add that kwarg, and experiment with the value. Something like: ``` ax.legend(handletextpad=0.1) ``` Consider the following: ``` import matplotlib.pyplot as plt fig, (ax1, ax2) = plt.subplots(ncols=2) ax1.plot(range(5), 'ro', label='handletextpad=0.8') ax2.plot(range(5), 'bo', label='handletextpad=0.1') ax1.legend() ax2.legend(handletextpad=0.1) plt.show() ```", "best_answers_score":0.8, "library_name":"matplotlib", "question_url":"https:\/\/stackoverflow.com\/questions\/41827406\/how-to-adjust-space-between-legend-markers-and-labels", "best_answers_votes":74, "question_length":195, "response_length":702 }, { "question":"How to set x axis values in matplotlib python? [duplicate] This question already has answers here: Modify tick label text (13 answers) Closed 2 years ago. I want to draw this graph using matplotlib. I wrote the code but it's not changing the x axis values. 
``` import matplotlib.pyplot as plt x = [0.00001,0.001,0.01,0.1,0.5,1,5] y = [0.945,0.885,0.893,0.9,0.996,1.25,1.19] plt.xlim(0.00001,5) plt.ylim(0.8,1.4) plt.plot(x, y, marker='o', linestyle='--', color='r', label='Square') plt.xlabel('x') plt.ylabel('y') plt.title('compare') plt.legend() plt.show() ``` How I can draw the blue line of the given graph using matplotlib?", "response":"The scaling on your example figure is a bit strange but you can force it by plotting the index of each x-value and then setting the ticks to the data points: ``` import matplotlib.pyplot as plt x = [0.00001,0.001,0.01,0.1,0.5,1,5] # create an index for each tick position xi = list(range(len(x))) y = [0.945,0.885,0.893,0.9,0.996,1.25,1.19] plt.ylim(0.8,1.4) # plot the index for the x-values plt.plot(xi, y, marker='o', linestyle='--', color='r', label='Square') plt.xlabel('x') plt.ylabel('y') plt.xticks(xi, x) plt.title('compare') plt.legend() plt.show() ```", "best_answers_score":0.8, "library_name":"matplotlib", "question_url":"https:\/\/stackoverflow.com\/questions\/44813601\/how-to-set-x-axis-values-in-matplotlib-python", "best_answers_votes":69, "question_length":628, "response_length":562 }, { "question":"How to create a draggable legend in matplotlib? I'm drawing a legend on an axes object in matplotlib but the default positioning which claims to place it in a smart place doesn't seem to work. Ideally, I'd like to have the legend be draggable by the user. How can this be done?", "response":"Note: This is now built into matplotlib ``` leg = plt.legend() if leg: leg.draggable() ``` will work as expected Well, I found bits and pieces of the solution scattered among mailing lists. I've come up with a nice modular chunk of code that you can drop in and use... 
here it is: ``` class DraggableLegend: def __init__(self, legend): self.legend = legend self.gotLegend = False legend.figure.canvas.mpl_connect('motion_notify_event', self.on_motion) legend.figure.canvas.mpl_connect('pick_event', self.on_pick) legend.figure.canvas.mpl_connect('button_release_event', self.on_release) legend.set_picker(self.my_legend_picker) def on_motion(self, evt): if self.gotLegend: dx = evt.x - self.mouse_x dy = evt.y - self.mouse_y loc_in_canvas = self.legend_x + dx, self.legend_y + dy loc_in_norm_axes = self.legend.parent.transAxes.inverted().transform_point(loc_in_canvas) self.legend._loc = tuple(loc_in_norm_axes) self.legend.figure.canvas.draw() def my_legend_picker(self, legend, evt): return self.legend.legendPatch.contains(evt) def on_pick(self, evt): if evt.artist == self.legend: bbox = self.legend.get_window_extent() self.mouse_x = evt.mouseevent.x self.mouse_y = evt.mouseevent.y self.legend_x = bbox.xmin self.legend_y = bbox.ymin self.gotLegend = 1 def on_release(self, event): if self.gotLegend: self.gotLegend = False ``` ...and in your code... ``` def draw(self): ax = self.figure.add_subplot(111) scatter = ax.scatter(np.random.randn(100), np.random.randn(100)) legend = DraggableLegend(ax.legend()) ``` I emailed the Matplotlib-users group and John Hunter was kind enough to add my solution it to SVN HEAD. On Thu, Jan 28, 2010 at 3:02 PM, Adam Fraser wrote: I thought I'd share a solution to the draggable legend problem since it took me forever to assimilate all the scattered knowledge on the mailing lists... Cool -- nice example. I added the code to legend.py. Now you can do leg = ax.legend() leg.draggable() to enable draggable mode. You can repeatedly call this func to toggle the draggable state. 
I hope this is helpful to people working with matplotlib.", "best_answers_score":0.8, "library_name":"matplotlib", "question_url":"https:\/\/stackoverflow.com\/questions\/2539477\/how-to-create-a-draggable-legend-in-matplotlib", "best_answers_votes":32, "question_length":277, "response_length":2079 }, { "question":"How to convert Matplotlib figure to PIL Image object (without saving image) As the title states, I am trying to convert a fig to a PIL.Image. I am currently able to do so by first saving the fig to disk and then opening that file using Image.open() but the process is taking longer than expected and I am hoping that by skipping the saving locally step it will be a bit faster. Here is what I have so far: ``` # build fig figsize, dpi = self._calc_fig_size_res(img_height) fig = plt.Figure(figsize=figsize) canvas = FigureCanvas(fig) ax = fig.add_subplot(111) ax.imshow(torch.from_numpy(S).flip(0), cmap = cmap) fig.subplots_adjust(left = 0, right = 1, bottom = 0, top = 1) ax.axis('tight'); ax.axis('off') # export fig.savefig(export_path, dpi = dpi) # open image as PIL object img = Image.open(export_path) ``` I have tried doing this after I build the fig (it would be right before the export stage): ``` pil_img = Image.frombytes('RGB', canvas.get_width_height(), canvas.tostring_rgb()) ``` But it's not showing the entire image. It looks like it's a crop of the top left corner, but it could just be a weird representation of the data -- I'm working with spectrograms so the images are fairly abstract.", "response":"EDIT # 2 ``` PIL.Image.frombytes('RGB', fig.canvas.get_width_height(),fig.canvas.tostring_rgb()) ``` takes around 2ms compared to the 35\/40ms of the below. This is the fastest way I can find so far. I've been looking at this also today. In the matplotlib docs the savefig function had this. pil_kwargsdict, optional Additional keyword arguments that are passed to PIL.Image.save when saving the figure. 
Only applicable for formats that are saved using Pillow, i.e. JPEG, TIFF, and (if the keyword is set to a non-None value) PNG. This must mean it's already a pil image before saving but I can't see it. You could follow this Matplotlib: save plot to numpy array To get it into a numpy array and then do PIL.Image.fromarray(array) You might need to reverse the channels from BGR TO RGB with array [:, :, ::-1] EDIT: I've tested each way come up with so far. ``` import io def save_plot_and_get(): fig.savefig(\"test.jpg\") img = cv2.imread(\"test.jpg\") return PIL.Image.fromarray(img) def buffer_plot_and_get(): buf = io.BytesIO() fig.savefig(buf) buf.seek(0) return PIL.Image.open(buf) def from_canvas(): lst = list(fig.canvas.get_width_height()) lst.append(3) return PIL.Image.fromarray(np.fromstring(fig.canvas.tostring_rgb(),dtype=np.uint8).reshape(lst)) ``` Results ``` %timeit save_plot_and_get() ``` 35.5 ms \u00b1 148 \u00b5s per loop (mean \u00b1 std. dev. of 7 runs, 10 loops each) ``` %timeit save_plot_and_get() ``` 35.5 ms \u00b1 142 \u00b5s per loop (mean \u00b1 std. dev. of 7 runs, 10 loops each) ``` %timeit buffer_plot_and_get() ``` 40.4 ms \u00b1 152 \u00b5s per loop (mean \u00b1 std. dev. of 7 runs, 10 loops each)", "best_answers_score":0.8, "library_name":"matplotlib", "question_url":"https:\/\/stackoverflow.com\/questions\/57316491\/how-to-convert-matplotlib-figure-to-pil-image-object-without-saving-image", "best_answers_votes":36, "question_length":1207, "response_length":1587 }, { "question":"How to make Matplotlib scatterplots transparent as a group? I'm making some scatterplots using Matplotlib (python 3.4.0, matplotlib 1.4.3, running on Linux Mint 17). It's easy enough to set alpha transparency for each point individually; is there any way to set them as a group, so that two overlapping points from the same group don't change the color? 
Example code: ``` import matplotlib.pyplot as plt import numpy as np def points(n=100): x = np.random.uniform(size=n) y = np.random.uniform(size=n) return x, y x1, y1 = points() x2, y2 = points() fig = plt.figure(figsize=(4,4)) ax = fig.add_subplot(111, title=\"Test scatter\") ax.scatter(x1, y1, s=100, color=\"blue\", alpha=0.5) ax.scatter(x2, y2, s=100, color=\"red\", alpha=0.5) fig.savefig(\"test_scatter.png\") ``` Results in this output: but I want something more like this one: I can workaround by saving as SVG and manually grouping then in Inkscape, then setting transparency, but I'd really prefer something I can code. Any suggestions?", "response":"Yes, interesting question. You can get this scatterplot with Shapely. Here is the code : ``` import matplotlib.pyplot as plt import matplotlib.patches as ptc import numpy as np from shapely.geometry import Point from shapely.ops import cascaded_union n = 100 size = 0.02 alpha = 0.5 def points(): x = np.random.uniform(size=n) y = np.random.uniform(size=n) return x, y x1, y1 = points() x2, y2 = points() polygons1 = [Point(x1[i], y1[i]).buffer(size) for i in range(n)] polygons2 = [Point(x2[i], y2[i]).buffer(size) for i in range(n)] polygons1 = cascaded_union(polygons1) polygons2 = cascaded_union(polygons2) fig = plt.figure(figsize=(4,4)) ax = fig.add_subplot(111, title=\"Test scatter\") for polygon1 in polygons1: polygon1 = ptc.Polygon(np.array(polygon1.exterior), facecolor=\"red\", lw=0, alpha=alpha) ax.add_patch(polygon1) for polygon2 in polygons2: polygon2 = ptc.Polygon(np.array(polygon2.exterior), facecolor=\"blue\", lw=0, alpha=alpha) ax.add_patch(polygon2) ax.axis([-0.2, 1.2, -0.2, 1.2]) fig.savefig(\"test_scatter.png\") ``` and the result is :", "best_answers_score":0.8, "library_name":"matplotlib", "question_url":"https:\/\/stackoverflow.com\/questions\/30108372\/how-to-make-matplotlib-scatterplots-transparent-as-a-group", "best_answers_votes":21, "question_length":993, "response_length":1055 }, { 
"question":"matplotlib legend location numbers I am beginning to use Python for my scientific computing, and I am really liking it a lot, however I am confused by a feature of the matplotlib.pylab.legend function. In particular, the location feature allows one to specifiy the location of their legend using numbers, following this scheme: best -- 0 upper right -- 1 upper left -- 2 lower left -- 3 lower right -- 4 right -- 5 center left -- 6 center right -- 7 lower center -- 8 upper center -- 9 center -- 10 Does anyone know why you wouldn't use the ordering on the numpad? I.e. center -- 5, upper right -- 9, etc. I am just curious if anyone knows.", "response":"The docs show this example: ``` legend( ('label1', 'label2', 'label3'), loc='upper left') ``` Presumably, you could write loc=2, but why would you? It's much more readable to use the English word. As to why they didn't enumerate the values to align with the numeric keypad, I presume they weren't thinking about the numeric keypad at the time. Edit: It's worth including here the full text of Joe Kington's comment: Actually, they were deliberately mimicking matlab's behavior at the time. 
See the \"obsolete location values\" section in the documentation for MATLAB's legend: mathworks.com\/help\/techdoc\/ref\/legend.html", "best_answers_score":0.8, "library_name":"matplotlib", "question_url":"https:\/\/stackoverflow.com\/questions\/10824156\/matplotlib-legend-location-numbers", "best_answers_votes":38, "question_length":640, "response_length":617 }, { "question":"copy an axes content and show it in a new figure let's say I have this code: ``` num_rows = 10 num_cols = 1 fig, axs = plt.subplots(num_rows, num_cols, sharex=True) for i in xrange(num_rows): ax = axs[i] ax.plot(np.arange(10), np.arange(10)**i) plt.show() ``` the result figure has too much info and now I want to pick 1 of the axes and draw it alone in a new figure I tried doing something like this ``` def on_click(event): axes = event.inaxes.get_axes() fig2 = plt.figure(15) fig2.axes.append(axes) fig2.show() fig.canvas.mpl_connect('button_press_event', on_click) ``` but it didn't quite work. what would be the correct way to do it? searching through the docs and through SE gave hardly any useful results edit: I don't mind redrawing the chosen axes, but I'm not sure how I can tell which of the axes was chosen, so if that information is available somehow then it is a valid solution for me edit #2: so I've managed to do something like this: ``` def on_click(event): fig2 = plt.figure(15) fig2.clf() for line in event.inaxes.axes.get_lines(): xydata = line.get_xydata() plt.plot(xydata[:, 0], xydata[:, 1]) fig2.show() ``` which seems to be \"working\" (all the other information is lost - labels, lines colors, lines style, lines width, xlim, ylim, etc...) but I feel like there must be a nicer way to do it", "response":"Copying the axes The initial answer here does not work; we keep it for future reference and also to see why a more sophisticated approach is needed. ``` #There are some pitfalls on the way with the initial approach. #Adding an `axes` to a figure can be done via `fig.add_axes(axes)`.
However, at this point, #the axes' figure needs to be the figure the axes should be added to. #This may sound a bit like running in circles but we can actually set the axes' #figure as `axes.figure = fig2` and hence break out of this. #One might then also position the axes in the new figure to take the usual dimensions. #For this a dummy axes can be added first, the axes can change its position to the position #of the dummy axes and then the dummy axes is removed again. In total, this would look as follows. import matplotlib.pyplot as plt import numpy as np num_rows = 10 num_cols = 1 fig, axs = plt.subplots(num_rows, num_cols, sharex=True) for i in xrange(num_rows): ax = axs[i] ax.plot(np.arange(10), np.arange(10)**i) def on_click(event): axes = event.inaxes if not axes: return fig2 = plt.figure() axes.figure=fig2 fig2.axes.append(axes) fig2.add_axes(axes) dummy = fig2.add_subplot(111) axes.set_position(dummy.get_position()) dummy.remove() fig2.show() fig.canvas.mpl_connect('button_press_event', on_click) plt.show() #So far so good, however, be aware that now after a click the axes is somehow #residing in both figures, which can cause all sorts of problems, e.g. if you # want to resize or save the initial figure. ``` Instead, the following will work: Pickling the figure The problem is that axes cannot be copied (even deepcopy will fail). Hence to obtain a true copy of an axes, you may need to use pickle. The following will work. It pickles the complete figure and removes all but the one axes to show. 
``` import matplotlib.pyplot as plt import numpy as np import pickle import io num_rows = 10 num_cols = 1 fig, axs = plt.subplots(num_rows, num_cols, sharex=True) for i in range(num_rows): ax = axs[i] ax.plot(np.arange(10), np.arange(10)**i) def on_click(event): if not event.inaxes: return inx = list(fig.axes).index(event.inaxes) buf = io.BytesIO() pickle.dump(fig, buf) buf.seek(0) fig2 = pickle.load(buf) for i, ax in enumerate(fig2.axes): if i != inx: fig2.delaxes(ax) else: axes=ax axes.change_geometry(1,1,1) fig2.show() fig.canvas.mpl_connect('button_press_event', on_click) plt.show() ``` Recreate plots The alternative to the above is of course to recreate the plot in a new figure each time the axes is clicked. To this end one may use a function that creates a plot on a specified axes and with a specified index as input. Using this function during figure creation as well as later for replicating the plot in another figure ensures that the plot is the same in all cases. ``` import matplotlib.pyplot as plt import numpy as np num_rows = 10 num_cols = 1 colors = plt.rcParams[\"axes.prop_cycle\"].by_key()[\"color\"] labels = [\"Label {}\".format(i+1) for i in range(num_rows)] def myplot(i, ax): ax.plot(np.arange(10), np.arange(10)**i, color=colors[i]) ax.set_ylabel(labels[i]) fig, axs = plt.subplots(num_rows, num_cols, sharex=True) for i in range(num_rows): myplot(i, axs[i]) def on_click(event): axes = event.inaxes if not axes: return inx = list(fig.axes).index(axes) fig2 = plt.figure() ax = fig2.add_subplot(111) myplot(inx, ax) fig2.show() fig.canvas.mpl_connect('button_press_event', on_click) plt.show() ```", "best_answers_score":0.8, "library_name":"matplotlib", "question_url":"https:\/\/stackoverflow.com\/questions\/45810557\/copy-an-axes-content-and-show-it-in-a-new-figure", "best_answers_votes":35, "question_length":1309, "response_length":3436 }, { "question":"Scientific notation colorbar I am trying to put a colorbar to my image using matplotlib.
The issue comes when I try to force the ticklabels to be written in scientific notation. How can I force the scientific notation (ie, 1x10^0, 2x10^0, ..., 1x10^2, and so on) in the ticks of the color bar? Example, let's create and plot an image with its color bar: ``` import matplotlib.pyplot as plt import numpy as np img = np.random.randn(300,300) myplot = plt.imshow(img) plt.colorbar(myplot) plt.show() ``` When I do this, I get the following image: However, I would like to see the ticklabels in scientific notation... Is there any one line command to do this? Otherwise, is there any hint out there?", "response":"You could use colorbar's format parameter: ``` import matplotlib.pyplot as plt import numpy as np import matplotlib.ticker as ticker img = np.random.randn(300,300) myplot = plt.imshow(img) def fmt(x, pos): a, b = '{:.2e}'.format(x).split('e') b = int(b) return r'${} \\times 10^{{{}}}$'.format(a, b) plt.colorbar(myplot, format=ticker.FuncFormatter(fmt)) plt.show() ```", "best_answers_score":0.8, "library_name":"matplotlib", "question_url":"https:\/\/stackoverflow.com\/questions\/25983218\/scientific-notation-colorbar", "best_answers_votes":57, "question_length":690, "response_length":368 }, { "question":"Matplotlib runs out of memory when plotting in a loop I have a fairly simple plotting routine that looks like this: ``` from __future__ import division import datetime import matplotlib matplotlib.use('Agg') from matplotlib.pyplot import figure, plot, show, legend, close, savefig, rcParams import numpy from globalconstants import * def plotColumns(columnNumbers, t, out, showFig=False, filenamePrefix=None, saveFig=True, saveThumb=True): lineProps = ['b', 'r', 'g', 'c', 'm', 'y', 'k', 'b--', 'r--', 'g--', 'c--', 'm--', 'y--', 'k--', 'g--', 'b.-', 'r.-', 'g.-', 'c.-', 'm.-', 'y.-', 'k.-'] rcParams['figure.figsize'] = (13,11) for i in columnNumbers: plot(t, out[:,i], lineProps[i]) legendStrings = list(numpy.zeros(NUMCOMPONENTS)) legendStrings[GLUCOSE] = 'GLUCOSE'
legendStrings[CELLULOSE] = 'CELLULOSE' legendStrings[STARCH] = 'STARCH' legendStrings[ACETATE] = 'ACETATE' legendStrings[BUTYRATE] = 'BUTYRATE' legendStrings[SUCCINATE] = 'SUCCINATE' legendStrings[HYDROGEN] = 'HYDROGEN' legendStrings[PROPIONATE] = 'PROPIONATE' legendStrings[METHANE] = \"METHANE\" legendStrings[RUMINOCOCCUS] = 'RUMINOCOCCUS' legendStrings[METHANOBACTERIUM] = \"METHANOBACTERIUM\" legendStrings[BACTEROIDES] = 'BACTEROIDES' legendStrings[SELENOMONAS] = 'SELENOMONAS' legendStrings[CLOSTRIDIUM] = 'CLOSTRIDIUM' legendStrings = [legendStrings[i] for i in columnNumbers] legend(legendStrings, loc='best') dt = datetime.datetime.now() dtAsString = dt.strftime('%d-%m-%Y_%H-%M-%S') if filenamePrefix is None: filenamePrefix = '' if filenamePrefix != '' and filenamePrefix[-1] != '_': filenamePrefix += '_' if saveFig: savefig(filenamePrefix+dtAsString+'.eps') if saveThumb: savefig(filenamePrefix+dtAsString+'.png', dpi=300) if showFig: f.show() close('all') ``` When I plot this in single iterations, it works fine. However, the moment I put it in a loop, matplotlib throws a hissy fit... 
``` Traceback (most recent call last): File \"c4hm_param_variation_h2_conc.py\", line 148, in plotColumns(columnNumbers, timeVector, out, showFig=False, filenamePrefix='c4hm_param_variation_h2_conc_'+str(hydrogen_conc), saveFig=False, saveThumb=True) File \"D:\\phdproject\\alexander paper\\python\\v3\\plotcolumns.py\", line 48, in plotColumns savefig(filenamePrefix+dtAsString+'.png', dpi=300) File \"C:\\Python25\\lib\\site-packages\\matplotlib\\pyplot.py\", line 356, in savefig return fig.savefig(*args, **kwargs) File \"C:\\Python25\\lib\\site-packages\\matplotlib\\figure.py\", line 1032, in savefig self.canvas.print_figure(*args, **kwargs) File \"C:\\Python25\\lib\\site-packages\\matplotlib\\backend_bases.py\", line 1476, in print_figure **kwargs) File \"C:\\Python25\\lib\\site-packages\\matplotlib\\backends\\backend_agg.py\", line 358, in print_png FigureCanvasAgg.draw(self) File \"C:\\Python25\\lib\\site-packages\\matplotlib\\backends\\backend_agg.py\", line 314, in draw self.figure.draw(self.renderer) File \"C:\\Python25\\lib\\site-packages\\matplotlib\\artist.py\", line 46, in draw_wrapper draw(artist, renderer, *kl) File \"C:\\Python25\\lib\\site-packages\\matplotlib\\figure.py\", line 773, in draw for a in self.axes: a.draw(renderer) File \"C:\\Python25\\lib\\site-packages\\matplotlib\\artist.py\", line 46, in draw_wrapper draw(artist, renderer, *kl) File \"C:\\Python25\\lib\\site-packages\\matplotlib\\axes.py\", line 1735, in draw a.draw(renderer) File \"C:\\Python25\\lib\\site-packages\\matplotlib\\artist.py\", line 46, in draw_wrapper draw(artist, renderer, *kl) File \"C:\\Python25\\lib\\site-packages\\matplotlib\\legend.py\", line 374, in draw bbox = self._legend_box.get_window_extent(renderer) File \"C:\\Python25\\lib\\site-packages\\matplotlib\\offsetbox.py\", line 209, in get_window_extent px, py = self.get_offset(w, h, xd, yd) File \"C:\\Python25\\lib\\site-packages\\matplotlib\\offsetbox.py\", line 162, in get_offset return
self._offset(width, height, xdescent, ydescent) File \"C:\\Python25\\lib\\site-packages\\matplotlib\\legend.py\", line 360, in findoffset return _findoffset(width, height, xdescent, ydescent, renderer) File \"C:\\Python25\\lib\\site-packages\\matplotlib\\legend.py\", line 325, in _findoffset_best ox, oy = self._find_best_position(width, height, renderer) File \"C:\\Python25\\lib\\site-packages\\matplotlib\\legend.py\", line 817, in _find_best_position verts, bboxes, lines = self._auto_legend_data() File \"C:\\Python25\\lib\\site-packages\\matplotlib\\legend.py\", line 669, in _auto_legend_data tpath = trans.transform_path(path) File \"C:\\Python25\\lib\\site-packages\\matplotlib\\transforms.py\", line 1911, in transform_path self._a.transform_path(path)) File \"C:\\Python25\\lib\\site-packages\\matplotlib\\transforms.py\", line 1122, in transform_path return Path(self.transform(path.vertices), path.codes, File \"C:\\Python25\\lib\\site-packages\\matplotlib\\transforms.py\", line 1402, in transform return affine_transform(points, mtx) MemoryError: Could not allocate memory for path ``` This happens on iteration 2 (counting from 1), if that makes a difference. The code is running on Windows XP 32-bit with python 2.5 and matplotlib 0.99.1, numpy 1.3.0 and scipy 0.7.1. EDIT: The code has now been updated to reflect the fact that the crash actually occurs at the call to legend(). Commenting that call out solves the problem, though obviously, I would still like to be able to put a legend on my graphs...", "response":"Is each loop supposed to generate a new figure? I don't see you closing it or creating a new figure instance from loop to loop.
This call will clear the current figure after you save it at the end of the loop: ```py pyplot.clf() ``` I'd refactor, though, and make your code more OO and create a new figure instance on each loop: ``` from matplotlib import pyplot while True: fig = pyplot.figure() ax = fig.add_subplot(111) ax.plot(x,y) ax.legend(legendStrings, loc = 'best') fig.savefig('himom.png') # etc.... ```", "best_answers_score":0.8, "library_name":"matplotlib", "question_url":"https:\/\/stackoverflow.com\/questions\/2364945\/matplotlib-runs-out-of-memory-when-plotting-in-a-loop", "best_answers_votes":38, "question_length":5231, "response_length":513 }, { "question":"Histogram in matplotlib, time on x-Axis I am new to matplotlib (1.3.1-2) and I cannot find a decent place to start. I want to plot the distribution of points over time in a histogram with matplotlib. Basically I want to plot the cumulative sum of the occurrence of a date. ``` date 2011-12-13 2011-12-13 2013-11-01 2013-11-01 2013-06-04 2013-06-04 2014-01-01 ... ``` That would make ``` 2011-12-13 -> 2 times 2013-11-01 -> 3 times 2013-06-04 -> 2 times 2014-01-01 -> once ``` Since there will be many points over many years, I want to set the start date on my x-Axis and the end date, and then mark n-time steps(i.e. 1 year steps) and finally decide how many bins there will be. How would I achieve that?", "response":"Matplotlib uses its own format for dates\/times, but also provides simple functions to convert which are provided in the dates module. It also provides various Locators and Formatters that take care of placing the ticks on the axis and formatting the corresponding labels. 
This should get you started: ``` import random import matplotlib.pyplot as plt import matplotlib.dates as mdates # generate some random data (approximately over 5 years) data = [float(random.randint(1271517521, 1429197513)) for _ in range(1000)] # convert the epoch format to matplotlib date format mpl_data = mdates.epoch2num(data) # plot it fig, ax = plt.subplots(1,1) ax.hist(mpl_data, bins=50, color='lightblue') ax.xaxis.set_major_locator(mdates.YearLocator()) ax.xaxis.set_major_formatter(mdates.DateFormatter('%d.%m.%y')) plt.show() ``` Result:", "best_answers_score":0.8, "library_name":"matplotlib", "question_url":"https:\/\/stackoverflow.com\/questions\/29672375\/histogram-in-matplotlib-time-on-x-axis", "best_answers_votes":51, "question_length":704, "response_length":823 }, { "question":"Logscale plots with zero values in matplotlib I am currently using logscale in order to have greater possibilities of plotting my data. Nevertheless, my data consists also of zero values. I know that these zero values will not work on logscale as log(0) is not defined. So e.g., ``` fig = plt.figure() ax = fig.add_subplot(111) ax.plot([0,1,2],[10,10,100],marker='o',linestyle='-') ax.set_yscale('log') ax.set_xscale('log') ``` completely omits the zero value. Is this behavior acceptable? At least there should be some kind of warning. I only recognized it by accident. Is there maybe also a way of plotting zero value data in logscale? Thanks! P.S.: I hope this fits to stackoverflow. I did not find a mailing list of matplotlib.", "response":"It's easiest to use a \"symlog\" plot for this purpose. The interval near 0 will be on a linear scale, so 0 can be displayed. ``` import matplotlib.pyplot as plt fig, ax = plt.subplots() ax.plot([0,1,2],[10,10,100],marker='o',linestyle='-') ax.set_yscale('symlog') ax.set_xscale('symlog') plt.show() ``` Symlog sets a small interval near zero (both above and below) to use a linear scale. 
This allows things to cross 0 without causing log(x) to explode (or go to -inf, rather). There's a nice visual comparison as an SO answer here: https:\/\/stackoverflow.com\/a\/3513150\/325565", "best_answers_score":0.8, "library_name":"matplotlib", "question_url":"https:\/\/stackoverflow.com\/questions\/16904755\/logscale-plots-with-zero-values-in-matplotlib", "best_answers_votes":64, "question_length":731, "response_length":573 }, { "question":"Scatter plot form dataframe with index on x-axis I've got pandas DataFrame, df, with index named date and the columns columnA, columnB and columnC I am trying to scatter plot index on a x-axis and columnA on a y-axis using the DataFrame syntax. When I try: ``` df.plot(kind='scatter', x='date', y='columnA') ``` I ma getting an error KeyError: 'date' probably because the date is not column ``` df.plot(kind='scatter', y='columnA') ``` I am getting an error: ```none ValueError: scatter requires and x and y column ``` so no default index on x-axis. ``` df.plot(kind='scatter', x=df.index, y='columnA') ``` I am getting error ```none KeyError: \"DatetimeIndex(['1818-01-01', '1818-01-02', '1818-01-03', '1818-01-04',\\n '1818-01-05', '1818-01-06', '1818-01-07', '1818-01-08',\\n '1818-01-09', '1818-01-10',\\n ...\\n '2018-03-22', '2018-03-23', '2018-03-24', '2018-03-25',\\n '2018-03-26', '2018-03-27', '2018-03-28', '2018-03-29',\\n '2018-03-30', '2018-03-31'],\\n dtype='datetime64[ns]', name='date', length=73139, freq=None) not in index\" ``` I can plot it if I use matplotlib.pyplot directly ``` plt.scatter(df.index, df['columnA']) ``` Is there a way to plot index as x-axis using the DataFrame kind syntax?", "response":"This is kind of ugly (I think the matplotlib solution you used in your question is better, FWIW), but you can always create a temporary DataFrame with the index as a column usinng ``` df.reset_index() ``` If the index was nameless, the default name will be 'index'. 
Assuming this is the case, you could use ``` df.reset_index().plot(kind='scatter', x='index', y='columnA') ```", "best_answers_score":0.8, "library_name":"matplotlib", "question_url":"https:\/\/stackoverflow.com\/questions\/49834883\/scatter-plot-form-dataframe-with-index-on-x-axis", "best_answers_votes":31, "question_length":1205, "response_length":376 }, { "question":"How to adjust the size of matplotlib legend box I have a graph whose left upper corner is quite blank. So I decide to put my legend box there. However, I find the items in legend are very small and the legend box itself is also quite small. By \"small\", I mean something like this How can I make the items (not texts!) in the legend box bigger? How can i make the box itself bigger?", "response":"To control the padding inside the legend (effectively making the legend box bigger) use the borderpad kwarg. For example, here's the default: ``` import matplotlib.pyplot as plt import numpy as np x = np.linspace(0, 10, 100) fig, ax = plt.subplots() for i in range(1, 6): ax.plot(x, i*x + x, label='$y={i}x + {i}$'.format(i=i)) ax.legend(loc='upper left') plt.show() ``` If we change inside padding with borderpad=2, we'll make the overall legend box larger (the units are multiples of the font size, similar to em): ``` import matplotlib.pyplot as plt import numpy as np x = np.linspace(0, 10, 100) fig, ax = plt.subplots() for i in range(1, 6): ax.plot(x, i*x + x, label='$y={i}x + {i}$'.format(i=i)) ax.legend(loc='upper left', borderpad=2) plt.show() ``` Alternately, you might want to change the spacing between the items. 
Use labelspacing to control this: ``` import matplotlib.pyplot as plt import numpy as np x = np.linspace(0, 10, 100) fig, ax = plt.subplots() for i in range(1, 6): ax.plot(x, i*x + x, label='$y={i}x + {i}$'.format(i=i)) ax.legend(loc='upper left', labelspacing=2) plt.show() ``` In most cases, however, it makes the most sense to adjust both labelspacing and borderpad at the same time: ``` import matplotlib.pyplot as plt import numpy as np x = np.linspace(0, 10, 100) fig, ax = plt.subplots() for i in range(1, 6): ax.plot(x, i*x + x, label='$y={i}x + {i}$'.format(i=i)) ax.legend(loc='upper left', borderpad=1.5, labelspacing=1.5) plt.show() ``` On the other hand, if you have very large markers, you may want to make the length of the line shown in the legend larger. For example, the default might look something like this: ``` import matplotlib.pyplot as plt import numpy as np x = np.linspace(0, 10, 5) fig, ax = plt.subplots() for i in range(1, 6): ax.plot(x, i*x + x, marker='o', markersize=20, label='$y={i}x + {i}$'.format(i=i)) ax.legend(loc='upper left') plt.show() ``` If we change handlelength, we'll get longer lines in the legend, which looks a bit more realistic. (I'm also tweaking borderpad and labelspacing here to give more room.) ``` import matplotlib.pyplot as plt import numpy as np x = np.linspace(0, 10, 5) fig, ax = plt.subplots() for i in range(1, 6): ax.plot(x, i*x + x, marker='o', markersize=20, label='$y={i}x + {i}$'.format(i=i)) ax.legend(loc='upper left', handlelength=5, borderpad=1.2, labelspacing=1.2) plt.show() ``` From the docs, here are some of the other options you might want to explore: ``` Padding and spacing between various elements use following keywords parameters. These values are measure in font-size units. E.g., a fontsize of 10 points and a handlelength=5 implies a handlelength of 50 points. Values from rcParams will be used if None. 
===================================================================== Keyword | Description ===================================================================== borderpad the fractional whitespace inside the legend border labelspacing the vertical space between the legend entries handlelength the length of the legend handles handletextpad the pad between the legend handle and text borderaxespad the pad between the axes and legend border columnspacing the spacing between columns ```", "best_answers_score":0.8, "library_name":"matplotlib", "question_url":"https:\/\/stackoverflow.com\/questions\/20048352\/how-to-adjust-the-size-of-matplotlib-legend-box", "best_answers_votes":121, "question_length":381, "response_length":3208 }, { "question":"Getting the r-squared value using curve_fit I am a beginner with both Python and all its libs. But I have managed to make a small program that works as intended. It takes a string, counts the occurence of the different letters and plots them in a graph and then applies a equation and its curve.\u00a8 Now i would like to get the r-squared value of the fit. The overall idea is to compare different kinds of text from articles on different levels and see how strong the overall pattern is. Is just an excersise and I am new, so a easy to understand answer would be awesome. The code is: ``` import numpy as np import math import matplotlib.pyplot as plt from matplotlib.pylab import figure, show from scipy.optimize import curve_fit s=\"\"\"det, og deres unders\u00f8gelse af hvor meget det bliver brugt viser, at der kun er seks plugins, som benyttes af mere end 5 % af Chrome-brugere. Problemet med teknologien er, at den ivivuilv rduyd iytf ouyf ouy yg oyuf yd iyt erzypu zhrpyh dfgopaehr poargi ah pargoh ertao gehorg aeophgrpaoghraprbpaenbtibaeriber en af hoved\u00e5rsagerne til sikkerhedshuller, ustabilitet og deciderede nedbrud af browseren. 
Der vil ikke bve lukket for API'et ivivuilv rduyd iytf ouyf ouy yg oyuf yd iyt erzypu zhrpyh dfgopaehr poargi ah pargoh ertao gehorg aeophgrpaoghraprbpaenbtibaeriber en af hoved\u00e5rsagerne til sikkerhedshuller, ustabilitet og deciderede nedbrud af browseren. Der vil ikke blive lukket for API'et p\u00e5 \u00e9n gang, men det vil blive udfaset i l\u00f8bet af et \u00e5rs tid. De mest popul\u00e6re plugins f\u00e5r lov at fungere i udfasningsperioden; Det drejer sig om: Silverlight (anvendt af 15 % af Chrome-brugere sidste m\u00e5ned), Unity (9,1 %), Google Earth (9,1 %), Java (8,9%), Google Talk (8,7 %) og Facebook Video (6,0 %). Det er muligt at hvidliste andre plugins, men i slutningen af 2014 forventer udviklerne helt at lukke for brugen af dem.\"\"\" fordel=[] alf=['a','b','c','d','e','f','g','h','i','j','k','l','m','n','o','p','q','r','s','t','u','v','w','x','y','z','\u00e6','\u00f8','\u00e5'] i=1 p=0 fig = figure() ax1 = fig.add_subplot(1,2,0) for i in range(len(alf)): fordel.append(s.count(alf[i])) i=i+1 fordel=sorted(fordel,key=int,reverse=True) yFit=fordel xFit=[0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28] def func(x, a, b): return a * (b ** x) popt, pcov = curve_fit(func, xFit, yFit) t = np.arange(0.0, 30.0, 0.1) a=popt[0] b=popt[1] s = (a*b**t) ax1.plot(t,s) print(popt) yMax=math.ceil(fordel[0]+5) ax1.axis([0,30,0,yMax]) for i in range(0,int(len(alf))*2,2): fordel.insert(i,p) p=p+1 for i in range(0,int(len(fordel)\/2)): ax1.scatter(fordel[0],fordel[1]) fordel.pop(0) fordel.pop(0) plt.show() show() ```", "response":"Computing R^2: The value can be found using the mean of the data (y_mean), the total sum of squares (SS_tot), and the residual sum of squares (SS_res). Each is defined as: SS_tot = sum((y_i - y_mean)^2), SS_res = sum((y_i - f_i)^2), and R^2 = 1 - SS_res \/ SS_tot, where f_i is the fitted function value at point x_i. Taken from Wikipedia.
From scipy.optimize.curve_fit(): You can get the parameters (popt) from curve_fit() with popt, pcov = curve_fit(f, xdata, ydata) You can get the residual sum of squares (SS_res) with residuals = ydata - f(xdata, *popt) ss_res = numpy.sum(residuals**2) You can get the total sum of squares (SS_tot) with ss_tot = numpy.sum((ydata-numpy.mean(ydata))**2) And finally, the R^2 value with r_squared = 1 - (ss_res \/ ss_tot)", "best_answers_score":0.8, "library_name":"matplotlib", "question_url":"https:\/\/stackoverflow.com\/questions\/19189362\/getting-the-r-squared-value-using-curve-fit", "best_answers_votes":104, "question_length":2645, "response_length":602 }, { "question":"Is it possible to plot implicit equations? I would like to plot implicit equations (of the form f(x, y)=g(x, y), e.g. x^y=y^x) in Matplotlib. Is this possible?
But, when I call pylab.savefig(\"test.png\") the current figure get's displayed in addition to being saved in test.png. When automating the creation of a large set of plot files, this is often undesirable. Or in the situation that an intermediate file for external processing by another app is desired. Not sure if this is a matplotlib or IPython notebook question.", "response":"This is a matplotlib question, and you can get around this by using a backend that doesn't display to the user, e.g. 'Agg': ``` import matplotlib matplotlib.use('Agg') import matplotlib.pyplot as plt plt.plot([1,2,3]) plt.savefig('\/tmp\/test.png') ``` EDIT: If you don't want to lose the ability to display plots, turn off Interactive Mode, and only call plt.show() when you are ready to display the plots: ``` import matplotlib.pyplot as plt # Turn interactive plotting off plt.ioff() # Create a new figure, plot into it, then close it so it never gets displayed fig = plt.figure() plt.plot([1,2,3]) plt.savefig('\/tmp\/test0.png') plt.close(fig) # Create a new figure, plot into it, then don't close it so it does get displayed plt.figure() plt.plot([1,3,2]) plt.savefig('\/tmp\/test1.png') # Display all \"open\" (non-closed) figures plt.show() ```", "best_answers_score":0.7988, "library_name":"matplotlib", "question_url":"https:\/\/stackoverflow.com\/questions\/15713279\/calling-pylab-savefig-without-display-in-ipython", "best_answers_votes":267, "question_length":582, "response_length":844 }, { "question":"What is the necessity of plt.figure() in matplotlib? ``` plt.figure(figsize=(10,8)) plt.scatter(df['attacker_size'][df['year'] == 298], # attacker size in year 298 as the y axis df['defender_size'][df['year'] == 298], # the marker as marker='x', # the color color='b', # the alpha alpha=0.7, # with size s = 124, # labelled this label='Year 298') ``` In the above snippet of code collected from Scatterplot in Matplotlib, what is the necessity of plt.figure()? 
The link above is dead; a self-sustaining example: ``` import matplotlib.pyplot as plt import pandas as pd data = { \"attacker_size\": [420, 380, 390], \"defender_size\": [50, 40, 45] } df = pd.DataFrame(data, index = [\"day1\", \"day2\", \"day3\"]) print(df) plt.figure(figsize=(10,8)) plt.scatter(df['attacker_size'], # attacker size in year 298 as the y axis df['defender_size'], # the marker as marker='x', # the color color='b', # the alpha alpha=0.7, # with size s = 150, # labelled this label='Test') ```", "response":"The purpose of using plt.figure() is to create a figure object. The whole figure is regarded as the figure object. It is necessary to explicitly use plt.figure() when we want to tweak the size of the figure and when we want to add multiple Axes objects in a single figure. ``` # in order to modify the size fig = plt.figure(figsize=(12,8)) # adding multiple Axes objects fig, ax_lst = plt.subplots(2, 2) # a figure with a 2x2 grid of Axes ``` Parts of a Figure", "best_answers_score":0.7987, "library_name":"matplotlib", "question_url":"https:\/\/stackoverflow.com\/questions\/38666527\/what-is-the-necessity-of-plt-figure-in-matplotlib", "best_answers_votes":43, "question_length":960, "response_length":460 }, { "question":"Change xticklabels fontsize of seaborn heatmap Here is my question: I plot 7 variable's coefficient using sns.clustermap() x\/y tickslabel seems really small(In my case, s1,s2,... s9) My attempt label='big ==> no effect plt.tick_params(axis='both', which='minor', labelsize=12) ===> cbar label has changed, but the x\/y axes looks the same. Add My code: ``` ds = pd.read_csv(\"xxxx.csv\") corr = ds.corr().mul(100).astype(int) cmap = sns.diverging_palette(h_neg=210, h_pos=350, s=90, l=30, as_cmap=True) sns.clustermap(data=corr_s, annot=True, fmt='d',cmap = \"Blues\",annot_kws={\"size\": 16},) ```", "response":"Consider calling sns.set(font_scale=1.4) before plotting your data. This will scale all fonts in your legend and on the axes.
My plot went from this, To this, Of course, adjust the scaling to whatever you feel is a good setting. Code: ``` sns.set(font_scale=1.4) cmap = sns.diverging_palette(h_neg=210, h_pos=350, s=90, l=30, as_cmap=True) sns.clustermap(data=corr, annot=True, fmt='d', cmap=\"Blues\", annot_kws={\"size\": 16}) ```", "best_answers_score":0.7982, "library_name":"matplotlib", "question_url":"https:\/\/stackoverflow.com\/questions\/34706845\/change-xticklabels-fontsize-of-seaborn-heatmap", "best_answers_votes":88, "question_length":591, "response_length":428 }, { "question":"Invert image displayed by imshow in matplotlib I wanted the imshow() function in matplotlib.pyplot to display images the opposite way, i.e upside down. Is there a simple way to do this?", "response":"Specify the keyword argument origin='lower' or origin='upper' in your call to imshow.", "best_answers_score":0.7977, "library_name":"matplotlib", "question_url":"https:\/\/stackoverflow.com\/questions\/8396101\/invert-image-displayed-by-imshow-in-matplotlib", "best_answers_votes":278, "question_length":185, "response_length":85 }, { "question":"How can I make a blank subplot in matplotlib? I am making a group of subplot (say, 3 x 2) in matplotlib, but I have fewer than 6 datasets. How can I make the remaining subplot blank? The arrangement looks like this: ``` +----+----+ | 0,0| 0,1| +----+----+ | 1,0| 1,1| +----+----+ | 2,0| 2,1| +----+----+ ``` This may go on for several pages, but on the final page, there are, for example, 5 datasets to the 2,1 box will be empty. However, I have declared the figure as: ``` cfig,ax = plt.subplots(3,2) ``` So in the space for subplot 2,1 there is a default set of axes with ticks and labels. How can I programatically render that space blank and devoid of axes?", "response":"You could always hide the axes which you do not need. 
For example, the following code turns off the 6th axes completely: ``` import matplotlib.pyplot as plt hf, ha = plt.subplots(3,2) ha[-1, -1].axis('off') plt.show() ``` and results in the following figure: Alternatively, see the accepted answer to the question Hiding axis text in matplotlib plots for a way of keeping the axes but hiding all the axes decorations (e.g. the tick marks and labels).", "best_answers_score":0.7945, "library_name":"matplotlib", "question_url":"https:\/\/stackoverflow.com\/questions\/10035446\/how-can-i-make-a-blank-subplot-in-matplotlib", "best_answers_votes":234, "question_length":661, "response_length":450 }, { "question":"How can I remove the top and right axis? Instead of the default \"boxed\" axis style I want to have only the left and bottom axis, i.e.: ``` +------+ | | | | | | ---> | | | | +------+ +------- ``` This should be easy, but I can't find the necessary options in the docs.", "response":"This is the suggested Matplotlib 3 solution from the official website HERE: ``` import numpy as np import matplotlib.pyplot as plt x = np.linspace(0, 2*np.pi, 100) y = np.sin(x) ax = plt.subplot(111) ax.plot(x, y) # Hide the right and top spines ax.spines[['right', 'top']].set_visible(False) plt.show() ```", "best_answers_score":0.7939, "library_name":"matplotlib", "question_url":"https:\/\/stackoverflow.com\/questions\/925024\/how-can-i-remove-the-top-and-right-axis", "best_answers_votes":302, "question_length":267, "response_length":307 }, { "question":"Add x and y labels to a pandas plot Suppose I have the following code that plots something very simple using pandas: ``` import pandas as pd values = [[1, 2], [2, 5]] df2 = pd.DataFrame(values, columns=['Type A', 'Type B'], index=['Index 1', 'Index 2']) df2.plot(lw=2, colormap='jet', marker='.', markersize=10, title='Video streaming dropout by category') ``` How do I easily set x and y-labels while preserving my ability to use specific colormaps? 
I noticed that the plot() wrapper for pandas DataFrames doesn't take any parameters specific for that.", "response":"The df.plot() function returns a matplotlib.axes.AxesSubplot object. You can set the labels on that object. ``` ax = df2.plot(lw=2, colormap='jet', marker='.', markersize=10, title='Video streaming dropout by category') ax.set_xlabel(\"x label\") ax.set_ylabel(\"y label\") ``` Or, more succinctly: ax.set(xlabel=\"x label\", ylabel=\"y label\"). Alternatively, the index x-axis label is automatically set to the Index name, if it has one. so df2.index.name = 'x label' would work too.", "best_answers_score":0.7928, "library_name":"matplotlib", "question_url":"https:\/\/stackoverflow.com\/questions\/21487329\/add-x-and-y-labels-to-a-pandas-plot", "best_answers_votes":446, "question_length":553, "response_length":477 }, { "question":"changing default x range in histogram matplotlib I would like to change the default x range for the histogram plot. The range of the data is from 7 to 12. However, by default the histogram starts right at 7 and ends at 13. I want it to start at 6.5 and end at 12.5. However, the ticks should go from 7 to 12.How do I do it? 
``` import asciitable import numpy as np import matplotlib.pyplot as plt import matplotlib.mlab as mlab import pylab from pylab import xticks data = asciitable.read(file) hmag = data['col8'] visits = data['col14'] origin = data['col13'] n, bins, patches = plt.hist(hmag, 30, facecolor='gray', align='mid') xticks(range(7,13)) pylab.rc(\"axes\", linewidth=8.0) pylab.rc(\"lines\", markeredgewidth=2.0) plt.xlabel('H mag', fontsize=14) plt.ylabel('# of targets', fontsize=14) pylab.xticks(fontsize=15) pylab.yticks(fontsize=15) plt.grid(True) plt.savefig('hmag_histogram.eps', facecolor='w', edgecolor='w', format='eps') plt.show() ```", "response":"``` plt.hist(hmag, 30, range=[6.5, 12.5], facecolor='gray', align='mid') ```", "best_answers_score":0.7928, "library_name":"matplotlib", "question_url":"https:\/\/stackoverflow.com\/questions\/12125880\/changing-default-x-range-in-histogram-matplotlib", "best_answers_votes":99, "question_length":953, "response_length":76 }, { "question":"How do I fit a sine curve to my data with pylab and numpy? I am trying to show that economies follow a relatively sinusoidal growth pattern. I am building a python simulation to show that even when we let some degree of randomness take hold, we can still produce something relatively sinusoidal. I am happy with the data I'm producing, but now I'd like to find some way to get a sine graph that pretty closely matches the data. 
I know you can do polynomial fit, but can you do sine fit?", "response":"Here is a parameter-free fitting function fit_sin() that does not require manual guess of frequency: ``` import numpy, scipy.optimize def fit_sin(tt, yy): '''Fit sin to the input time sequence, and return fitting parameters \"amp\", \"omega\", \"phase\", \"offset\", \"freq\", \"period\" and \"fitfunc\"''' tt = numpy.array(tt) yy = numpy.array(yy) ff = numpy.fft.fftfreq(len(tt), (tt[1]-tt[0])) # assume uniform spacing Fyy = abs(numpy.fft.fft(yy)) guess_freq = abs(ff[numpy.argmax(Fyy[1:])+1]) # excluding the zero frequency \"peak\", which is related to offset guess_amp = numpy.std(yy) * 2.**0.5 guess_offset = numpy.mean(yy) guess = numpy.array([guess_amp, 2.*numpy.pi*guess_freq, 0., guess_offset]) def sinfunc(t, A, w, p, c): return A * numpy.sin(w*t + p) + c popt, pcov = scipy.optimize.curve_fit(sinfunc, tt, yy, p0=guess) A, w, p, c = popt f = w\/(2.*numpy.pi) fitfunc = lambda t: A * numpy.sin(w*t + p) + c return {\"amp\": A, \"omega\": w, \"phase\": p, \"offset\": c, \"freq\": f, \"period\": 1.\/f, \"fitfunc\": fitfunc, \"maxcov\": numpy.max(pcov), \"rawres\": (guess,popt,pcov)} ``` The initial frequency guess is given by the peak frequency in the frequency domain using FFT. The fitting result is almost perfect assuming there is only one dominant frequency (other than the zero frequency peak). ``` import pylab as plt N, amp, omega, phase, offset, noise = 500, 1., 2., .5, 4., 3 #N, amp, omega, phase, offset, noise = 50, 1., .4, .5, 4., .2 #N, amp, omega, phase, offset, noise = 200, 1., 20, .5, 4., 1 tt = numpy.linspace(0, 10, N) tt2 = numpy.linspace(0, 10, 10*N) yy = amp*numpy.sin(omega*tt + phase) + offset yynoise = yy + noise*(numpy.random.random(len(tt))-0.5) res = fit_sin(tt, yynoise) print( \"Amplitude=%(amp)s, Angular freq.=%(omega)s, phase=%(phase)s, offset=%(offset)s, Max. 
Cov.=%(maxcov)s\" % res ) plt.plot(tt, yy, \"-k\", label=\"y\", linewidth=2) plt.plot(tt, yynoise, \"ok\", label=\"y with noise\") plt.plot(tt2, res[\"fitfunc\"](tt2), \"r-\", label=\"y fit curve\", linewidth=2) plt.legend(loc=\"best\") plt.show() ``` The result is good even with high noise: Amplitude=1.00660540618, Angular freq.=2.03370472482, phase=0.360276844224, offset=3.95747467506, Max. Cov.=0.0122923578658", "best_answers_score":0.7927, "library_name":"matplotlib", "question_url":"https:\/\/stackoverflow.com\/questions\/16716302\/how-do-i-fit-a-sine-curve-to-my-data-with-pylab-and-numpy", "best_answers_votes":111, "question_length":486, "response_length":2172 }, { "question":"How to display the value on horizontal bars I generated a bar plot, how can I display the value of the bar on each bar? Current plot: What I am trying to get: My code: ``` import os import numpy as np import matplotlib.pyplot as plt x = [u'INFO', u'CUISINE', u'TYPE_OF_PLACE', u'DRINK', u'PLACE', u'MEAL_TIME', u'DISH', u'NEIGHBOURHOOD'] y = [160, 167, 137, 18, 120, 36, 155, 130] fig, ax = plt.subplots() width = 0.75 # the width of the bars ind = np.arange(len(y)) # the x locations for the groups ax.barh(ind, y, width, color=\"blue\") ax.set_yticks(ind+width\/2) ax.set_yticklabels(x, minor=False) plt.title('title') plt.xlabel('x') plt.ylabel('y') #plt.show() plt.savefig(os.path.join('test.png'), dpi=300, format='png', bbox_inches='tight') # use format='svg' or 'pdf' for vectorial pictures ```", "response":"Update: there's a built in method for this now! See the answer beginning \"New in matplotlib 3.4.0\". If you can't upgrade that far, it doesn't take much code. 
Add: ```py for i, v in enumerate(y): ax.text(v + 3, i, str(v), color='blue', fontweight='bold', verticalalignment='center') ``` result: The y-values v are both the x-location and the string values for ax.text, and conveniently the barplot has a metric of 1 for each bar, so the enumeration i is the y-location.", "best_answers_score":0.7912, "library_name":"matplotlib", "question_url":"https:\/\/stackoverflow.com\/questions\/30228069\/how-to-display-the-value-on-horizontal-bars", "best_answers_votes":281, "question_length":798, "response_length":468 }, { "question":"Why set_xticks doesn't set the labels of ticks? ``` import matplotlib.pyplot as plt x = range(1, 7) y = (220, 300, 300, 290, 320, 315) def test(axes): axes.bar(x, y) axes.set_xticks(x, [i+100 for i in x]) fig, (ax1, ax2) = plt.subplots(1, 2) test(ax1) test(ax2) ``` I am expecting the xlabs as 101, 102 ... However, if i switch to use plt.xticks(x, [i+100 for i in x]) and rewrite the function explicitly, it works.", "response":".set_xticks() on the axes will set the locations and set_xticklabels() will set the displayed text. ``` def test(axes): axes.bar(x,y) axes.set_xticks(x) axes.set_xticklabels([i+100 for i in x]) ```", "best_answers_score":0.7911, "library_name":"matplotlib", "question_url":"https:\/\/stackoverflow.com\/questions\/21910986\/why-set-xticks-doesnt-set-the-labels-of-ticks", "best_answers_votes":149, "question_length":415, "response_length":197 }, { "question":"Title for matplotlib legend I know it seems fairly redundant to have a title for a legend, but is it possible using matplotlib? 
Here's a snippet of the code I have: ``` import matplotlib.patches as mpatches import matplotlib.pyplot as plt one = mpatches.Patch(facecolor='#f3f300', label='label1', linewidth = 0.5, edgecolor = 'black') two = mpatches.Patch(facecolor='#ff9700', label = 'label2', linewidth = 0.5, edgecolor = 'black') three = mpatches.Patch(facecolor='#ff0000', label = 'label3', linewidth = 0.5, edgecolor = 'black') legend = plt.legend(handles=[one, two, three], loc = 4, fontsize = 'small', fancybox = True) frame = legend.get_frame() #sets up for color, edge, and transparency frame.set_facecolor('#b4aeae') #color of legend frame.set_edgecolor('black') #edge color of legend frame.set_alpha(1) #deals with transparency plt.show() ``` I would want the title of the legend above label1. For reference, this is the output:", "response":"Add the title parameter to the this line: ``` legend = plt.legend(handles=[one, two, three], title=\"title\", loc=4, fontsize='small', fancybox=True) ``` See also the official docs for the legend constructor.", "best_answers_score":0.7909, "library_name":"matplotlib", "question_url":"https:\/\/stackoverflow.com\/questions\/44620013\/title-for-matplotlib-legend", "best_answers_votes":195, "question_length":939, "response_length":206 }, { "question":"Matplotlib: display plot on a remote machine I have a python code doing some calculation on a remote machine, named A. I connect on A via ssh from a machine named B. Is there a way to display the figure on machine B?", "response":"Sure, you can enable X11 forwarding. Usually this is done by passing the -X or -Y option to ssh when you connect to the remote computer ``` ssh -X computerA ``` Note that the SSH daemon on computer A will also have to be configured to enable X11 forwarding. This is done by putting ``` X11Forwarding yes ``` in computer A's sshd_config configuration file. 
If computer A's SSH daemon does not have X11 forwarding enabled, you can always have Python write the result of the calculation to a text file, download it to computer B, and use Matplotlib locally.", "best_answers_score":0.7889, "library_name":"matplotlib", "question_url":"https:\/\/stackoverflow.com\/questions\/3453188\/matplotlib-display-plot-on-a-remote-machine", "best_answers_votes":43, "question_length":216, "response_length":554 }, { "question":"Matplotlib - Tcl_AsyncDelete: async handler deleted by the wrong thread? I'm asking this question because I can't solve one problem in Python\/Django (actually in pure Python it's ok) which leads to RuntimeError: tcl_asyncdelete async handler deleted by the wrong thread. This is somehow related to the way how I render matplotlib plots in Django. The way I do it is: ``` ... import matplotlib.pyplot as plt ... fig = plt.figure() ... plt.close() ``` I extremely minimized my code. But the catch is - even if I have just one line of code: ``` fig = plt.figure() ``` I see this RuntimeError happening. I hope I could solve the problem, If I knew the correct way of closing\/cleaning\/destroying plots in Python\/Django.", "response":"By default matplotlib uses TK gui toolkit, when you're rendering an image without using the toolkit (i.e. into a file or a string), matplotlib still instantiates a window that doesn't get displayed, causing all kinds of problems. In order to avoid that, you should use an Agg backend. 
It can be activated like so -- ``` import matplotlib matplotlib.use('Agg') from matplotlib import pyplot ``` For more information please refer to matplotlib documentation -- http:\/\/matplotlib.org\/faq\/howto_faq.html#matplotlib-in-a-web-application-server", "best_answers_score":0.7869, "library_name":"matplotlib", "question_url":"https:\/\/stackoverflow.com\/questions\/27147300\/matplotlib-tcl-asyncdelete-async-handler-deleted-by-the-wrong-thread", "best_answers_votes":97, "question_length":714, "response_length":538 }, { "question":"Plotting dates on the x-axis I am trying to plot information against dates. I have a list of dates in the format \"01\/02\/1991\". I converted them by doing the following: ``` x = parser.parse(date).strftime('%Y%m%d')) ``` which gives 19910102 Then I tried to use num2date ``` import matplotlib.dates as dates new_x = dates.num2date(x) ``` Plotting: ``` plt.plot_date(new_x, other_data, fmt=\"bo\", tz=None, xdate=True) ``` But I get an error. It says \"ValueError: year is out of range\". Any solutions?", "response":"You can do this more simply using plot() instead of plot_date(). 
First, convert your strings to instances of Python datetime.date: ``` import datetime as dt dates = ['01\/02\/1991','01\/03\/1991','01\/04\/1991'] x = [dt.datetime.strptime(d,'%m\/%d\/%Y').date() for d in dates] y = range(len(x)) # many thanks to Kyss Tao for setting me straight here ``` Then plot: ``` import matplotlib.pyplot as plt import matplotlib.dates as mdates plt.gca().xaxis.set_major_formatter(mdates.DateFormatter('%m\/%d\/%Y')) plt.gca().xaxis.set_major_locator(mdates.DayLocator()) plt.plot(x,y) plt.gcf().autofmt_xdate() ``` Result:", "best_answers_score":0.7863, "library_name":"matplotlib", "question_url":"https:\/\/stackoverflow.com\/questions\/9627686\/plotting-dates-on-the-x-axis", "best_answers_votes":196, "question_length":496, "response_length":603 }, { "question":"Matplotlib - label each bin I'm currently using Matplotlib to create a histogram: ``` import matplotlib matplotlib.use('Agg') import matplotlib.pyplot as pyplot ... fig = pyplot.figure() ax = fig.add_subplot(1,1,1,) n, bins, patches = ax.hist(measurements, bins=50, range=(graph_minimum, graph_maximum), histtype='bar') #ax.set_xticklabels([n], rotation='vertical') for patch in patches: patch.set_facecolor('r') pyplot.title('Spam and Ham') pyplot.xlabel('Time (in seconds)') pyplot.ylabel('Bits of Ham') pyplot.savefig(output_filename) ``` I'd like to make the x-axis labels a bit more meaningful. Firstly, the x-axis ticks here seem to be limited to five ticks. No matter what I do, I can't seem to change this - even if I add more xticklabels, it only uses the first five. I'm not sure how Matplotlib calculates this, but I assume it's auto-calculated from the range\/data? Is there some way I can increase the resolution of x-tick labels - even to the point of one for each bar\/bin? (Ideally, I'd also like the seconds to be reformatted in micro-seconds\/milli-seconds, but that's a question for another day). 
Secondly, I'd like each individual bar labeled - with the actual number in that bin, as well as the percentage of the total of all bins. The final output might look something like this: Is something like that possible with Matplotlib? Cheers, Victor", "response":"Sure! To set the ticks, just, well... Set the ticks (see matplotlib.pyplot.xticks or ax.set_xticks). (Also, you don't need to manually set the facecolor of the patches. You can just pass in a keyword argument.) For the rest, you'll need to do some slightly more fancy things with the labeling, but matplotlib makes it fairly easy. As an example: ``` import matplotlib.pyplot as plt import numpy as np from matplotlib.ticker import FormatStrFormatter data = np.random.randn(82) fig, ax = plt.subplots() counts, bins, patches = ax.hist(data, facecolor='yellow', edgecolor='gray') # Set the ticks to be at the edges of the bins. ax.set_xticks(bins) # Set the xaxis's tick labels to be formatted with 1 decimal place... ax.xaxis.set_major_formatter(FormatStrFormatter('%0.1f')) # Change the colors of bars at the edges... twentyfifth, seventyfifth = np.percentile(data, [25, 75]) for patch, rightside, leftside in zip(patches, bins[1:], bins[:-1]): if rightside < twentyfifth: patch.set_facecolor('green') elif leftside > seventyfifth: patch.set_facecolor('red') # Label the raw counts and the percentages below the x-axis...
bin_centers = 0.5 * np.diff(bins) + bins[:-1] for count, x in zip(counts, bin_centers): # Label the raw counts ax.annotate(str(count), xy=(x, 0), xycoords=('data', 'axes fraction'), xytext=(0, -18), textcoords='offset points', va='top', ha='center') # Label the percentages percent = '%0.0f%%' % (100 * float(count) \/ counts.sum()) ax.annotate(percent, xy=(x, 0), xycoords=('data', 'axes fraction'), xytext=(0, -32), textcoords='offset points', va='top', ha='center') # Give ourselves some more room at the bottom of the plot plt.subplots_adjust(bottom=0.15) plt.show() ```", "best_answers_score":0.7849, "library_name":"matplotlib", "question_url":"https:\/\/stackoverflow.com\/questions\/6352740\/matplotlib-label-each-bin", "best_answers_votes":143, "question_length":1362, "response_length":1636 }, { "question":"Matplotlib plot zooming with scroll wheel Is it possible to bind the scroll wheel to zoom in\/out when the cursor is hovering over a matplotlib plot?", "response":"Thanks guys, the examples were very helpful. I had to make a few changes to work with a scatter plot and I added panning with a left button drag. Hopefully someone will find this useful. 
``` from matplotlib.pyplot import figure, show import numpy class ZoomPan: def __init__(self): self.press = None self.cur_xlim = None self.cur_ylim = None self.x0 = None self.y0 = None self.x1 = None self.y1 = None self.xpress = None self.ypress = None def zoom_factory(self, ax, base_scale = 2.): def zoom(event): cur_xlim = ax.get_xlim() cur_ylim = ax.get_ylim() xdata = event.xdata # get event x location ydata = event.ydata # get event y location if event.button == 'down': # deal with zoom in scale_factor = 1 \/ base_scale elif event.button == 'up': # deal with zoom out scale_factor = base_scale else: # deal with something that should never happen scale_factor = 1 print event.button new_width = (cur_xlim[1] - cur_xlim[0]) * scale_factor new_height = (cur_ylim[1] - cur_ylim[0]) * scale_factor relx = (cur_xlim[1] - xdata)\/(cur_xlim[1] - cur_xlim[0]) rely = (cur_ylim[1] - ydata)\/(cur_ylim[1] - cur_ylim[0]) ax.set_xlim([xdata - new_width * (1-relx), xdata + new_width * (relx)]) ax.set_ylim([ydata - new_height * (1-rely), ydata + new_height * (rely)]) ax.figure.canvas.draw() fig = ax.get_figure() # get the figure of interest fig.canvas.mpl_connect('scroll_event', zoom) return zoom def pan_factory(self, ax): def onPress(event): if event.inaxes != ax: return self.cur_xlim = ax.get_xlim() self.cur_ylim = ax.get_ylim() self.press = self.x0, self.y0, event.xdata, event.ydata self.x0, self.y0, self.xpress, self.ypress = self.press def onRelease(event): self.press = None ax.figure.canvas.draw() def onMotion(event): if self.press is None: return if event.inaxes != ax: return dx = event.xdata - self.xpress dy = event.ydata - self.ypress self.cur_xlim -= dx self.cur_ylim -= dy ax.set_xlim(self.cur_xlim) ax.set_ylim(self.cur_ylim) ax.figure.canvas.draw() fig = ax.get_figure() # get the figure of interest # attach the call back fig.canvas.mpl_connect('button_press_event',onPress) fig.canvas.mpl_connect('button_release_event',onRelease) 
fig.canvas.mpl_connect('motion_notify_event',onMotion) #return the function return onMotion fig = figure() ax = fig.add_subplot(111, xlim=(0,1), ylim=(0,1), autoscale_on=False) ax.set_title('Click to zoom') x,y,s,c = numpy.random.rand(4,200) s *= 200 ax.scatter(x,y,s,c) scale = 1.1 zp = ZoomPan() figZoom = zp.zoom_factory(ax, base_scale = scale) figPan = zp.pan_factory(ax) show() ```", "best_answers_score":0.7838, "library_name":"matplotlib", "question_url":"https:\/\/stackoverflow.com\/questions\/11551049\/matplotlib-plot-zooming-with-scroll-wheel", "best_answers_votes":36, "question_length":148, "response_length":2525 }, { "question":"Plot correlation matrix using pandas I have a data set with huge number of features, so analysing the correlation matrix has become very difficult. I want to plot a correlation matrix which we get using dataframe.corr() function from pandas library. Is there any built-in function provided by the pandas library to plot this matrix?", "response":"If your main goal is to visualize the correlation matrix, rather than creating a plot per se, the convenient pandas styling options is a viable built-in solution: ``` import pandas as pd import numpy as np rs = np.random.RandomState(0) df = pd.DataFrame(rs.rand(10, 10)) corr = df.corr() corr.style.background_gradient(cmap='coolwarm') # 'RdBu_r', 'BrBG_r', & PuOr_r are other good diverging colormaps ``` Note that this needs to be in a backend that supports rendering HTML, such as the JupyterLab Notebook. 
Styling You can easily limit the digit precision (this is now .format(precision=2) in pandas 2.*): ``` corr.style.background_gradient(cmap='coolwarm').set_precision(2) ``` Or get rid of the digits altogether if you prefer the matrix without annotations: ``` corr.style.background_gradient(cmap='coolwarm').set_properties(**{'font-size': '0pt'}) ``` The styling documentation also includes instructions for more advanced styles, such as how to change the display of the cell the mouse pointer is hovering over. Time comparison In my testing, style.background_gradient() was 4x faster than plt.matshow() and 120x faster than sns.heatmap() with a 10x10 matrix. Unfortunately it doesn't scale as well as plt.matshow(): the two take about the same time for a 100x100 matrix, and plt.matshow() is 10x faster for a 1000x1000 matrix. Saving There are a few possible ways to save the stylized dataframe: Return the HTML by appending the render() method and then write the output to a file. Save as an .xlsx file with conditional formatting by appending the to_excel() method. Combine with imgkit to save a bitmap. Take a screenshot (like I have done here). Normalize colors across the entire matrix (pandas >= 0.24) By setting axis=None, it is now possible to compute the colors based on the entire matrix rather than per column or per row: ``` corr.style.background_gradient(cmap='coolwarm', axis=None) ``` Single corner heatmap Since many people are reading this answer I thought I would add a tip for how to only show one corner of the correlation matrix. I find this easier to read myself, since it removes the redundant information.
``` # Fill diagonal and upper half with NaNs mask = np.zeros_like(corr, dtype=bool) mask[np.triu_indices_from(mask)] = True corr[mask] = np.nan (corr .style .background_gradient(cmap='coolwarm', axis=None, vmin=-1, vmax=1) .highlight_null(color='#f1f1f1') # Color NaNs grey .format(precision=2)) ```", "best_answers_score":0.7836, "library_name":"matplotlib", "question_url":"https:\/\/stackoverflow.com\/questions\/29432629\/plot-correlation-matrix-using-pandas", "best_answers_votes":463, "question_length":332, "response_length":2435 }, { "question":"Multiple datasets on the same scatter plot I want to plot multiple data sets on the same scatter plot: ``` cases = scatter(x[:4], y[:4], s=10, c='b', marker=\"s\") controls = scatter(x[4:], y[4:], s=10, c='r', marker=\"o\") show() ``` The above only shows the most recent scatter() I've also tried: ``` plt = subplot(111) plt.scatter(x[:4], y[:4], s=10, c='b', marker=\"s\") plt.scatter(x[4:], y[4:], s=10, c='r', marker=\"o\") show() ```", "response":"You need a reference to an Axes object to keep drawing on the same subplot. ``` import matplotlib.pyplot as plt x = range(100) y = range(100,200) fig = plt.figure() ax1 = fig.add_subplot(111) ax1.scatter(x[:4], y[:4], s=10, c='b', marker=\"s\", label='first') ax1.scatter(x[40:],y[40:], s=10, c='r', marker=\"o\", label='second') plt.legend(loc='upper left') plt.show() ```", "best_answers_score":0.7834, "library_name":"matplotlib", "question_url":"https:\/\/stackoverflow.com\/questions\/4270301\/multiple-datasets-on-the-same-scatter-plot", "best_answers_votes":174, "question_length":430, "response_length":369 }, { "question":"Jupyter | How to rotate 3D graph [duplicate] This question already has answers here: Displaying rotatable 3D plots in IPython or Jupyter Notebook (6 answers) Closed 7 years ago. 
I am not sure how to rotate a 3D graph in a Jupyter notebook; it is static for me and does not rotate on mouse movement ``` from mpl_toolkits.mplot3d import Axes3D import matplotlib.pyplot as plt fig = plt.figure() ax = fig.add_subplot(111, projection='3d') x =[1,2,3,4,5,6,7,8,9,10] y =[5,6,2,3,13,4,1,2,4,8] z =[2,3,3,3,5,7,9,11,9,10] ax.scatter(x, y, z, c='r', marker='o') ax.set_xlabel('X Label') ax.set_ylabel('Y Label') ax.set_zlabel('Z Label') plt.show() ```", "response":"To enable interactivity you need to use the notebook backend of matplotlib. You can do this by running %matplotlib notebook. This must be done before you plot anything, e.g.: ``` %matplotlib notebook import matplotlib.pyplot as plt from mpl_toolkits.mplot3d import axes3d fig = ... ``` Update 2023-07-13 This answer is getting old and the choice of backends has changed over time (I used the widget backend for many years). The current best choice for interactive plots in Jupyter Notebooks is supposedly the ipympl backend. You need to install it and then use ``` %matplotlib ipympl ```", "best_answers_score":0.7832, "library_name":"matplotlib", "question_url":"https:\/\/stackoverflow.com\/questions\/47311632\/jupyter-how-to-rotate-3d-graph", "best_answers_votes":109, "question_length":642, "response_length":588 }, { "question":"Plotting pandas timedelta I have a pandas dataframe that has two datetime64 columns and one timedelta64 column that is the difference between the two columns. I'm trying to plot a histogram of the timedelta column to visualize the time differences between the two events.
However, just using df['time_delta'] results in: TypeError: ufunc add cannot use operands with types dtype('<m8[ns]') and dtype('<f8') Trying df2 = df1['time_delta'].astype(float) results in: TypeError: cannot astype a timedelta from [timedelta64[ns]] to [float64] How would one create a histogram of pandas timedelta data?", "response":"Here are ways to convert timedeltas, docs are here ``` In [2]: pd.to_timedelta(np.arange(5),unit='d')+pd.to_timedelta(1,unit='s') Out[2]: 0 0 days, 00:00:01 1 1 days, 00:00:01 2 2 days, 00:00:01 3 3 days, 00:00:01 4 4 days, 00:00:01 dtype: timedelta64[ns] ``` Convert to seconds (is an exact conversion) ``` In [3]: (pd.to_timedelta(np.arange(5),unit='d')+pd.to_timedelta(1,unit='s')).astype('timedelta64[s]') Out[3]: 0 1 1 86401 2 172801 3 259201 4 345601 dtype: float64 ``` Convert using astype will round to that unit ``` In [4]: (pd.to_timedelta(np.arange(5),unit='d')+pd.to_timedelta(1,unit='s')).astype('timedelta64[D]') Out[4]: 0 0 1 1 2 2 3 3 4 4 dtype: float64 ``` Division will give an exact repr ``` In [5]: (pd.to_timedelta(np.arange(5),unit='d')+pd.to_timedelta(1,unit='s')) \/ np.timedelta64(1,'D') Out[5]: 0 0.000012 1 1.000012 2 2.000012 3 3.000012 4 4.000012 dtype: float64 ```
Ideally, the legend title would be 'Generation' and the labels would be \"Millennial\" and \"Older Generations\".", "response":"If legend_out is set to True then legend is available through the g._legend property and it is a part of a figure. Seaborn legend is standard matplotlib legend object. Therefore you may change legend texts. Tested in python 3.8.11, matplotlib 3.4.3, seaborn 0.11.2 ```py import seaborn as sns # load the tips dataset tips = sns.load_dataset(\"tips\") # plot g = sns.lmplot(x=\"total_bill\", y=\"tip\", hue=\"smoker\", data=tips, markers=[\"o\", \"x\"], facet_kws={'legend_out': True}) # title new_title = 'My title' g._legend.set_title(new_title) # replace labels new_labels = ['label 1', 'label 2'] for t, l in zip(g._legend.texts, new_labels): t.set_text(l) ``` Another situation if legend_out is set to False. You have to define which axes has a legend (in below example this is axis number 0): ```py g = sns.lmplot(x=\"total_bill\", y=\"tip\", hue=\"smoker\", data=tips, markers=[\"o\", \"x\"], facet_kws={'legend_out': False}) # check axes and find which is have legend leg = g.axes.flat[0].get_legend() new_title = 'My title' leg.set_title(new_title) new_labels = ['label 1', 'label 2'] for t, l in zip(leg.texts, new_labels): t.set_text(l) ``` Moreover you may combine both situations and use this code: ```py g = sns.lmplot(x=\"total_bill\", y=\"tip\", hue=\"smoker\", data=tips, markers=[\"o\", \"x\"], facet_kws={'legend_out': True}) # check axes and find which is have legend for ax in g.axes.flat: leg = g.axes.flat[0].get_legend() if not leg is None: break # or legend may be on a figure if leg is None: leg = g._legend # change legend texts new_title = 'My title' leg.set_title(new_title) new_labels = ['label 1', 'label 2'] for t, l in zip(leg.texts, new_labels): t.set_text(l) ``` This code works for any seaborn plot which is based on Grid class.", "best_answers_score":0.7797, "library_name":"matplotlib", 
"question_url":"https:\/\/stackoverflow.com\/questions\/45201514\/how-to-edit-a-seaborn-legend-title-and-labels-for-figure-level-functions", "best_answers_votes":143, "question_length":560, "response_length":1731 }, { "question":"ipython notebook --pylab inline: zooming of a plot Is it possible to zoom into a plot if inline is activated? Especially regarding to 3d-plots rotating and zooming is a necessary feature.", "response":"Now thanks to mpld3 it's super easy to enable zooming in inline plots! All you have to do is install mpld3 (pip install mpld3), and then add this to your notebook: ``` %matplotlib inline import mpld3 mpld3.enable_notebook() ``` Now your plots will get a toolbar menu at the bottom left, in which you can enable mouse zooming :)", "best_answers_score":0.7782, "library_name":"matplotlib", "question_url":"https:\/\/stackoverflow.com\/questions\/10655217\/ipython-notebook-pylab-inline-zooming-of-a-plot", "best_answers_votes":106, "question_length":187, "response_length":327 }, { "question":"RuntimeError: Invalid DISPLAY variable I am running my python script in another machine by using ssh command in linux. I have also run this command : ``` source ~\/.bashrc ``` after logging in the other machine, in order to define the proper paths in the new machine. I was getting the error message for running the following python code lines even I have tried to follow the instruction in this question by defining the backend. ``` >>> import matplotlib >>> import pylab as plt >>> matplotlib.use('Agg') >>> import numpy as np >>> x=np.arange(0,2,0.001) >>> y=np.sin(x)**2+4*np.cos(x) >>> fig = plt.figure() >>> plt.plot(x,y,'r.') ``` The error message ``` This probably means that Tcl wasn't installed properly. 
Traceback (most recent call last): File \"Systematic_Optimised.py\", line 513, in fig = plt.figure() File \"\/vol\/anaconda\/lib\/python2.7\/site-packages\/matplotlib\/pyplot.py\", line 435, in figure **kwargs) File \"\/vol\/anaconda\/lib\/python2.7\/site-packages\/matplotlib\/backends\/backend_qt4agg.py\", line 47, in new_figure_manager return new_figure_manager_given_figure(num, thisFig) File \"\/vol\/anaconda\/lib\/python2.7\/site-packages\/matplotlib\/backends\/backend_qt4agg.py\", line 54, in new_figure_manager_given_figure canvas = FigureCanvasQTAgg(figure) File \"\/vol\/anaconda\/lib\/python2.7\/site-packages\/matplotlib\/backends\/backend_qt4agg.py\", line 72, in __init__ FigureCanvasQT.__init__(self, figure) File \"\/vol\/aibn84\/data2\/zahra\/anaconda\/lib\/python2.7\/site-packages\/matplotlib\/backends\/backend_qt4.py\", line 68, in __init__ _create_qApp() File \"\/vol\/anaconda\/lib\/python2.7\/site-packages\/matplotlib\/backends\/backend_qt5.py\", line 138, in _create_qApp raise RuntimeError('Invalid DISPLAY variable') RuntimeError: Invalid DISPLAY variable ``` any suggestion how to fix the problem", "response":"You must declare matplotlib.use('agg') before import pylab as plt. Reference", "best_answers_score":0.7775, "library_name":"matplotlib", "question_url":"https:\/\/stackoverflow.com\/questions\/35737116\/runtimeerror-invalid-display-variable", "best_answers_votes":72, "question_length":1779, "response_length":76 }, { "question":"Only the first row of annotations displayed on seaborn heatmap As it's usually advised, I have managed to reduce my problem to a minimal reproducible example: ```py import numpy as np import seaborn as sns import matplotlib.pyplot as plt matrix = np.array([[0.1234, 1.4567, 0.7890, 0.1234], [0.9876, 0, 0.5432, 0.6789], [0.1111, 0.2222, 0, 0.3333], [0.4444, 0.5555, 0.6666, 0]]) sns.heatmap(matrix, annot=True) plt.show() ``` Vaguely based on Seaborn official documentation. 
Unfortunately, unlike what would be expected (all numbers visible), I get only the numbers in the top row visible: As there is not really much room for error in this one, I'm out of ideas and google\/SO doesn't seem to have this question asked before. Is this a bug? I am running: ``` Seaborn 0.12.2 Matplotlib 3.8.0 PyCharm 2023.1.4 Windows 10 ```", "response":"Just ran into the issue myself; I was on Seaborn 0.12.2. Ran pip install seaborn --upgrade and now have 0.13.0. Restarted VS Code and the annotations appeared.", "best_answers_score":0.7771, "library_name":"matplotlib", "question_url":"https:\/\/stackoverflow.com\/questions\/77165100\/only-the-first-row-of-annotations-displayed-on-seaborn-heatmap", "best_answers_votes":45, "question_length":822, "response_length":153 }, { "question":"Overlay plots with different scales So far I have the following code: ``` colors = ('k','r','b') ax = [] for i in range(3): ax.append(plt.axes()) plt.plot(datamatrix[:,0],datamatrix[:,i],colors[i]+'o') ax[i].set(autoscale_on=True) ``` With the autoscale_on=True option for each axis, I thought each plot should have its own y-axis limits, but it appears they all share the same value (even if they share different axes). How do I set them to scale to show the range of each datamatrix[:,i] (just an explicit call to .set_ylim()?) And also, how can I create an offset y-axis for the third variable (datamatrix[:,2]) that might be required above? Thanks all.", "response":"It sounds like what you're wanting is subplots... What you're doing now doesn't make much sense (Or I'm very confused by your code snippet, at any rate...).
Try something more like this: ``` import matplotlib.pyplot as plt import numpy as np fig, axes = plt.subplots(nrows=3) colors = ('k', 'r', 'b') for ax, color in zip(axes, colors): data = np.random.random(1) * np.random.random(10) ax.plot(data, marker='o', linestyle='none', color=color) plt.show() ``` Edit: If you don't want subplots, your code snippet makes a lot more sense. You're trying to add three axes right on top of each other. Matplotlib is recognizing that there's already a subplot at that exact size and location on the figure, and so it's returning the same axes object each time. In other words, if you look at your list ax, you'll see that they're all the same object. If you really want to do that, you'll need to reset fig._seen to an empty dict each time you add an axes. You probably don't really want to do that, however. Instead of putting three independent plots over each other, have a look at using twinx instead. E.g. ``` import matplotlib.pyplot as plt import numpy as np # To make things reproducible... np.random.seed(1977) fig, ax = plt.subplots() # Twin the x-axis twice to make independent y-axes. axes = [ax, ax.twinx(), ax.twinx()] # Make some space on the right side for the extra y-axis. fig.subplots_adjust(right=0.75) # Move the last y-axis spine over to the right by 20% of the width of the axes axes[-1].spines['right'].set_position(('axes', 1.2)) # To make the border of the right-most axis visible, we need to turn the frame # on. This hides the other plots, however, so we need to turn its fill off. axes[-1].set_frame_on(True) axes[-1].patch.set_visible(False) # And finally we get to plot things...
colors = ('Green', 'Red', 'Blue') for ax, color in zip(axes, colors): data = np.random.random(1) * np.random.random(10) ax.plot(data, marker='o', linestyle='none', color=color) ax.set_ylabel('%s Thing' % color, color=color) ax.tick_params(axis='y', colors=color) axes[0].set_xlabel('X-axis') plt.show() ```", "best_answers_score":0.776, "library_name":"matplotlib", "question_url":"https:\/\/stackoverflow.com\/questions\/7733693\/overlay-plots-with-different-scales", "best_answers_votes":130, "question_length":656, "response_length":2110 }, { "question":"How can I write unit tests against code that uses matplotlib? I'm working on a Python (2.7) program that produces a lot of different matplotlib figures (the data are not random). I would like to implement some tests (using unittest) to make sure that the generated figures are correct. For instance, I store the expected figure (data or image) somewhere, run my function, and compare the result with the reference. Is there a way to do this?", "response":"In my experience, image comparison tests end up bringing more trouble than they are worth. This is especially the case if you want to run continuous integration across multiple systems (like TravisCI) that may have slightly different fonts or available drawing backends. It can be a lot of work to keep the tests passing even when the functions work perfectly correctly. Furthermore, testing this way requires keeping images in your git repository, which can quickly lead to repository bloat if you're changing the code often. A better approach in my opinion is to (1) assume matplotlib is going to actually draw the figure correctly, and (2) run numerical tests against the data returned by the plotting functions. (You can also always find this data inside the Axes object if you know where to look.)
For example, say you want to test a simple function like this: ``` import numpy as np import matplotlib.pyplot as plt def plot_square(x, y): y_squared = np.square(y) return plt.plot(x, y_squared) ``` Your unit test might then look like ``` def test_plot_square1(): x, y = [0, 1, 2], [0, 1, 2] line, = plot_square(x, y) x_plot, y_plot = line.get_xydata().T np.testing.assert_array_equal(y_plot, np.square(y)) ``` Or, equivalently, ``` def test_plot_square2(): f, ax = plt.subplots() x, y = [0, 1, 2], [0, 1, 2] plot_square(x, y) x_plot, y_plot = ax.lines[0].get_xydata().T np.testing.assert_array_equal(y_plot, np.square(y)) ```", "best_answers_score":0.775, "library_name":"matplotlib", "question_url":"https:\/\/stackoverflow.com\/questions\/27948126\/how-can-i-write-unit-tests-against-code-that-uses-matplotlib", "best_answers_votes":69, "question_length":441, "response_length":1427 }, { "question":"Multiple plots in one figure in Python I am new to python and am trying to plot multiple lines in the same figure using matplotlib. The values for my Y-axis are stored in a dictionary, and I build the corresponding X-axis values in the following code. My code is like this: ``` for i in range(len(ID)): AxisY= PlotPoints[ID[i]] if len(AxisY)> 5: AxisX= [len(AxisY)] for i in range(1,len(AxisY)): AxisX.append(AxisX[i-1]-1) plt.plot(AxisX,AxisY) plt.xlabel('Lead Time (in days)') plt.ylabel('Proportation of Events Scheduled') ax = plt.gca() ax.invert_xaxis() ax.yaxis.tick_right() ax.yaxis.set_label_position(\"right\") plt.show() ``` But I am getting separate figures with a single plot one by one. Can anybody help me figure out what is wrong with my code? Why can't I produce multiple-line plotting? Thanks a lot!", "response":"This is very simple to do: ``` import matplotlib.pyplot as plt plt.plot(x1, y1, 'line type', label='label here') plt.plot(x2, y2, 'line type', label='label here') plt.legend(loc='best') plt.show() ``` You can keep adding plt.plot as many times as you like.
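A concrete, runnable version of that pattern (the data below is invented purely for illustration):

```python
import matplotlib
matplotlib.use('Agg')  # draw off-screen so no window is needed
import matplotlib.pyplot as plt

x = list(range(5))
y1 = [2 * v for v in x]   # first data set
y2 = [v ** 2 for v in x]  # second data set

# Each plt.plot call adds one more line to the same current axes.
plt.plot(x, y1, 'b-', label='linear')
plt.plot(x, y2, 'r--', label='quadratic')
plt.legend(loc='best')

ax = plt.gca()  # both lines now live on this single axes
```

Running this leaves two Line2D objects on one figure, rather than two separate figures.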
As for line type, you need to first specify the color. So for blue, it's b. And for a normal line it's -. An example would be: ``` plt.plot(total_lengths, sort_times_heap, 'b-', label=\"Heap\") ```", "best_answers_score":0.7745, "library_name":"matplotlib", "question_url":"https:\/\/stackoverflow.com\/questions\/21254472\/multiple-plots-in-one-figure-in-python", "best_answers_votes":116, "question_length":807, "response_length":468 }, { "question":"ImportError: DLL load failed: The specified module could not be found I have installed Python 2.5.4, Numpy 1.5.0 win32, Matplotlib 1.0.0 win32, pywin32 218. Still not able to plot graphs in Python. Here is the error I am getting: ``` import pylab File \"C:\\Python25\\lib\\site-packages\\pylab.py\", line 1, in from matplotlib.pylab import * File \"C:\\Python25\\lib\\site-packages\\matplotlib\\pylab.py\", line 216, in from matplotlib import mpl # pulls in most modules File \"C:\\Python25\\lib\\site-packages\\matplotlib\\mpl.py\", line 1, in from matplotlib import artist File \"C:\\Python25\\lib\\site-packages\\matplotlib\\artist.py\", line 6, in from transforms import Bbox, IdentityTransform, TransformedBbox, TransformedPath File \"C:\\Python25\\lib\\site-packages\\matplotlib\\transforms.py\", line 34, in from matplotlib._path import affine_transform ImportError: DLL load failed: The specified module could not be found. ``` Please kindly help..", "response":"I had the same issue with importing matplotlib.pylab with Python 3.5.1 on Win 64. Installing the Visual C++ Redistributable for Visual Studio 2015 from this link: https:\/\/www.microsoft.com\/en-us\/download\/details.aspx?id=48145 fixed the missing DLLs.
I find it better and easier than downloading and pasting DLLs.", "best_answers_score":0.7741, "library_name":"matplotlib", "question_url":"https:\/\/stackoverflow.com\/questions\/20201868\/importerror-dll-load-failed-the-specified-module-could-not-be-found", "best_answers_votes":34, "question_length":928, "response_length":313 }, { "question":"Partially transparent scatter plot, but with a solid color bar In Python, with Matplotlib, how to simply do a scatter plot with transparency (alpha < 1), but with a color bar that represents their color value, but has alpha = 1? Here is what one gets, with from pylab import *; scatter(range(10), arange(0, 100, 10), c=range(10), alpha=0.2); color_bar = colorbar(): How can the color bar be made non-transparent? PS: I tried color_bar.set_alpha(1); draw(), but this did not do anything\u2026", "response":"Alright, I found one way to do it, that looks relatively clean: (using the ColorBar object from the question) ``` color_bar.set_alpha(1) color_bar.draw_all() # pylab.draw() or pyplot.draw() might be necessary ``` It would be great to get a confirmation that this is the most robust way to proceed, though! :)", "best_answers_score":0.7738, "library_name":"matplotlib", "question_url":"https:\/\/stackoverflow.com\/questions\/4478725\/partially-transparent-scatter-plot-but-with-a-solid-color-bar", "best_answers_votes":48, "question_length":486, "response_length":308 }, { "question":"Secondary axis with twinx(): how to add to legend I have a plot with two y-axes, using twinx(). 
I also give labels to the lines, and want to show them with legend(), but I only succeed to get the labels of one axis in the legend: ``` import numpy as np import matplotlib.pyplot as plt from matplotlib import rc rc('mathtext', default='regular') fig = plt.figure() ax = fig.add_subplot(111) ax.plot(time, Swdown, '-', label = 'Swdown') ax.plot(time, Rn, '-', label = 'Rn') ax2 = ax.twinx() ax2.plot(time, temp, '-r', label = 'temp') ax.legend(loc=0) ax.grid() ax.set_xlabel(\"Time (h)\") ax.set_ylabel(r\"Radiation ($MJ\\,m^{-2}\\,d^{-1}$)\") ax2.set_ylabel(r\"Temperature ($^\\circ$C)\") ax2.set_ylim(0, 35) ax.set_ylim(-20,100) plt.show() ``` So I only get the labels of the first axis in the legend, and not the label 'temp' of the second axis. How could I add this third label to the legend?", "response":"You can easily add a second legend by adding the line: ``` ax2.legend(loc=0) ``` You'll get this: But if you want all labels on one legend then you should do something like this: ``` import numpy as np import matplotlib.pyplot as plt from matplotlib import rc rc('mathtext', default='regular') time = np.arange(10) temp = np.random.random(10)*30 Swdown = np.random.random(10)*100-10 Rn = np.random.random(10)*100-10 fig = plt.figure() ax = fig.add_subplot(111) lns1 = ax.plot(time, Swdown, '-', label = 'Swdown') lns2 = ax.plot(time, Rn, '-', label = 'Rn') ax2 = ax.twinx() lns3 = ax2.plot(time, temp, '-r', label = 'temp') # added these three lines lns = lns1+lns2+lns3 labs = [l.get_label() for l in lns] ax.legend(lns, labs, loc=0) ax.grid() ax.set_xlabel(\"Time (h)\") ax.set_ylabel(r\"Radiation ($MJ\\,m^{-2}\\,d^{-1}$)\") ax2.set_ylabel(r\"Temperature ($^\\circ$C)\") ax2.set_ylim(0, 35) ax.set_ylim(-20,100) plt.show() ``` Which will give you this:", "best_answers_score":0.7733, "library_name":"matplotlib", "question_url":"https:\/\/stackoverflow.com\/questions\/5484922\/secondary-axis-with-twinx-how-to-add-to-legend", "best_answers_votes":593, "question_length":885, 
"response_length":946 }, { "question":"Fine control over the font size in Seaborn plots I'm currently trying to use Seaborn to create plots for my academic papers. The plots look great and are easy to generate, but one problem I'm having some trouble with is getting fine control over the font size in the plots. The font size in my paper is 9pt and I would like to make sure the font sizes in my plots are either 9pt or 10pt. But in seaborn, the font size is mainly controlled through a font scale: sns.set_context(\"paper\", font_scale=0.9). So it's hard for me to find the right font size except through trial and error. Is there a more efficient way to do this? I also want to make sure the font size is consistent between different seaborn plots. But not all my seaborn plots have the same dimensions, so it seems like using the same font_scale on all the plots does not necessarily produce the same font size across these different plots. I've attached my code below. I appreciate any comments on how to format the plot for a two-column academic paper. My goal is to be able to control the size of the figure without distorting the font size or the plot. I use LaTeX to write my paper. ``` # Seaborn setting sns.set(style='whitegrid', rc={\"grid.linewidth\": 0.1}) sns.set_context(\"paper\", font_scale=0.9) plt.figure(figsize=(3.1, 3)) # Two column paper. Each column is about 3.15 inch wide. color = sns.color_palette(\"Set2\", 6) # Create a box plot for my data splot = sns.boxplot(data=df, palette=color, whis=np.inf, width=0.5, linewidth = 0.7) # Labels and clean up on the plot splot.set_ylabel('Normalized WS') plt.xticks(rotation=90) plt.tight_layout() splot.yaxis.grid(True, clip_on=False) sns.despine(left=True, bottom=True) plt.savefig('test.pdf', bbox_inches='tight') ```", "response":"You are right. This is a badly documented issue. But you can change the font size parameter (as opposed to the font scale) directly after building the plot.
Check the following example: ``` import seaborn as sns import matplotlib.pyplot as plt tips = sns.load_dataset(\"tips\") b = sns.boxplot(x=tips[\"total_bill\"]) b.axes.set_title(\"Title\",fontsize=50) b.set_xlabel(\"X Label\",fontsize=30) b.set_ylabel(\"Y Label\",fontsize=20) b.tick_params(labelsize=5) plt.show() ```, which results in this: To make it consistent between plots, I think you just need to make sure the DPI is the same. By the way, it's also possible to customize the rc dictionaries a bit, since a \"font.size\" parameter exists, but I'm not too sure how to do that. NOTE: I also don't really understand why they changed the names of the font size variables for axis labels and ticks. Seems a bit un-intuitive.", "best_answers_score":0.7733, "library_name":"matplotlib", "question_url":"https:\/\/stackoverflow.com\/questions\/36220829\/fine-control-over-the-font-size-in-seaborn-plots", "best_answers_votes":127, "question_length":1737, "response_length":876 }, { "question":"Adding subplots to a subplot I'm trying to create a figure that consists of a 2x2 grid, where in each quadrant there are 2 vertically stacked subplots (i.e. a 2x1 grid). I can't seem to figure out how to achieve this, though. The closest I've gotten is using gridspec and some ugly code (see below), but because gridspec.update(hspace=X) changes the spacing for all of the subplots I'm still not where I'd like to be. Ideally what I want is to, using the picture below as an example, decrease the spacing between the subplots within each quadrant, while increasing the vertical spacing between the top and bottom quadrants (i.e. between 1-3 and 2-4). Is there a way to do this (with or without using gridspec)? What I originally envisioned is generating each of the sub-subplot grids (i.e. each 2x1 grid) and inserting them into the larger 2x2 grid of subplots, but I haven't figured out how to add a subplot to a subplot, if there is even a way.
``` import matplotlib.pyplot as plt import matplotlib.gridspec as gridspec plt.figure(figsize=(10, 8)) gs = gridspec.GridSpec(4,2) gs.update(hspace=0.4) for i in range(2): for j in range(4): ax = plt.subplot(gs[j,i]) ax.spines['top'].set_visible(False) ax.spines['right'].set_visible(False) plt.tick_params(which='both', top='off', right='off') if j % 2 == 0: ax.set_title(str(i+j+1)) ax.plot([1,2,3], [1,2,3]) ax.spines['bottom'].set_visible(False) ax.get_xaxis().set_visible(False) else: ax.plot([1,2,3], [3,2,1]) ```", "response":"You can nest your GridSpec using SubplotSpec. The outer grid will be a 2 x 2 and the inner grids will be 2 x 1. The following code should give you the basic idea. ``` import matplotlib.pyplot as plt import matplotlib.gridspec as gridspec fig = plt.figure(figsize=(10, 8)) outer = gridspec.GridSpec(2, 2, wspace=0.2, hspace=0.2) for i in range(4): inner = gridspec.GridSpecFromSubplotSpec(2, 1, subplot_spec=outer[i], wspace=0.1, hspace=0.1) for j in range(2): ax = plt.Subplot(fig, inner[j]) t = ax.text(0.5,0.5, 'outer=%d, inner=%d' % (i, j)) t.set_ha('center') ax.set_xticks([]) ax.set_yticks([]) fig.add_subplot(ax) fig.show() ```", "best_answers_score":0.7716, "library_name":"matplotlib", "question_url":"https:\/\/stackoverflow.com\/questions\/34933905\/adding-subplots-to-a-subplot", "best_answers_votes":98, "question_length":1466, "response_length":633 }, { "question":"matplotlib hatched fill_between without edges? I have a region I'd like to hatch which borders on an existing plot line (of the same colour) that is dashed. However, when I use fill_between the region to be hatched has a border drawn around it also. This border seems share properties with the lines that create the hatching so I cannot set edgecolour to \"none\" or set linestyle as \"--\" as the hatching is similarly affected. 
``` import matplotlib.pyplot as plt plt.plot([0,1],[0,1],ls=\"--\",c=\"b\") plt.fill_between([0,1],[0,1],color=\"none\",hatch=\"X\",edgecolor=\"b\") plt.show() ``` In this example I'd want the diagonal line from 0,0 to 1,1 to be dashed. Many thanks in advance.", "response":">2.0.1 Update As commented by @CatherineHolloway, you need to use facecolor instead of color now: ``` import matplotlib.pyplot as plt plt.plot([0,1],[0,1],ls=\"--\",c=\"b\") plt.fill_between([0,1],[0,1], facecolor=\"none\", hatch=\"X\", edgecolor=\"b\", linewidth=0.0) plt.show() ``` Former answer This seems to do the trick! ``` import matplotlib.pyplot as plt plt.plot([0,1],[0,1],ls=\"--\",c=\"b\") plt.fill_between([0,1],[0,1], color=\"none\", hatch=\"X\", edgecolor=\"b\", linewidth=0.0) plt.show() ```", "best_answers_score":0.7713, "library_name":"matplotlib", "question_url":"https:\/\/stackoverflow.com\/questions\/18386106\/matplotlib-hatched-fill-between-without-edges", "best_answers_votes":66, "question_length":676, "response_length":486 }, { "question":"Confusion between numpy, scipy, matplotlib and pylab Numpy, scipy, matplotlib, and pylab are common terms among those who use Python for scientific computation. I have just learned a bit about pylab, and I am confused. Whenever I want to import numpy, I can always do: ``` import numpy as np ``` My understanding is that once I do ``` from pylab import * ``` numpy will be imported as well (with the np alias). So basically the second one does more things compared to the first one. There are a few things I want to ask: Is it right that pylab is just a wrapper for numpy, scipy and matplotlib? As np is the numpy alias in pylab, what are the scipy and matplotlib aliases in pylab? (as far as I know, plt is an alias of matplotlib.pyplot, but I don't know the alias for matplotlib itself)", "response":"No, pylab is part of matplotlib (in matplotlib.pylab) and tries to give you a MATLAB-like environment.
matplotlib has a number of dependencies, among them numpy which it imports under the common alias np. scipy is not a dependency of matplotlib. If you run ipython --pylab an automatic import will put all symbols from matplotlib.pylab into global scope. Like you wrote numpy gets imported under the np alias. Symbols from matplotlib are available under the mpl alias.", "best_answers_score":0.7701, "library_name":"matplotlib", "question_url":"https:\/\/stackoverflow.com\/questions\/12987624\/confusion-between-numpy-scipy-matplotlib-and-pylab", "best_answers_votes":136, "question_length":774, "response_length":468 }, { "question":"How to change the color of a single bar in a bar plot Supposely, I have the bar chart as below: Any ideas on how to set different colors for each carrier? As for example, AK would be Red, GA would be Green, etc? I am using Pandas and matplotlib in Python ``` >>> f=plt.figure() >>> ax=f.add_subplot(1,1,1) >>> ax.bar([1,2,3,4], [1,2,3,4]) >>> ax.get_children() [, , , , , , , , , , , ] >>> ax.get_children()[2].set_color('r') #You can also try to locate the first patches.Rectangle object instead of direct calling the index. ``` For the suggestions above, how do exactly we could enumerate ax.get_children() and check if the object type is rectangle? So if the object is rectangle, we would assign different random color?", "response":"Simple, just use .set_color ``` >>> barlist=plt.bar([1,2,3,4], [1,2,3,4]) >>> barlist[0].set_color('r') >>> plt.show() ``` For your new question, not much harder either, just need to find the bar from your axis, an example: ``` >>> f=plt.figure() >>> ax=f.add_subplot(1,1,1) >>> ax.bar([1,2,3,4], [1,2,3,4]) >>> ax.get_children() [, , , , , , , , , , , ] >>> ax.get_children()[2].set_color('r') #You can also try to locate the first patches.Rectangle object #instead of direct calling the index. 
``` If you have a complex plot and want to identify the bars first, add those: ``` >>> import matplotlib >>> childrenLS=ax.get_children() >>> barlist=filter(lambda x: isinstance(x, matplotlib.patches.Rectangle), childrenLS) [, , , , ] ```", "best_answers_score":0.7694, "library_name":"matplotlib", "question_url":"https:\/\/stackoverflow.com\/questions\/18973404\/how-to-change-the-color-of-a-single-bar-in-a-bar-plot", "best_answers_votes":197, "question_length":723, "response_length":735 }, { "question":"Store mouse click event coordinates with matplotlib I am trying to implement a simple mouse click event in matplotlib. I wish to plot a figure then use the mouse to select the lower and upper limits for integration. So far I am able to print the coordinates to screen but not store them for later use in the program. I would also like to exit the connection to the figure after the second mouse click. Below is the code which currently plots and then prints the coordinates. My Question(s): How can I store coordinates from the figure to list? i.e. click = [xpos, ypos] Is it possible to get two sets of x coordinates in order to do a simple integration over that section of line? ```py import numpy as np import matplotlib.pyplot as plt x = np.arange(-10,10) y = x**2 fig = plt.figure() ax = fig.add_subplot(111) ax.plot(x,y) def onclick(event): global ix, iy ix, iy = event.xdata, event.ydata print 'x = %d, y = %d'%( ix, iy) global coords coords = [ix, iy] return coords for i in xrange(0,1): cid = fig.canvas.mpl_connect('button_press_event', onclick) plt.show() ```", "response":"mpl_connect needs to be called just once to connect the event to event handler. It will start listening to click event until you disconnect. And you can use ``` fig.canvas.mpl_disconnect(cid) ``` to disconnect the event hook. 
What you want to do is something like: ``` import numpy as np import matplotlib.pyplot as plt x = np.arange(-10,10) y = x**2 fig = plt.figure() ax = fig.add_subplot(111) ax.plot(x,y) coords = [] def onclick(event): global ix, iy ix, iy = event.xdata, event.ydata print (f'x = {ix}, y = {iy}') global coords coords.append((ix, iy)) if len(coords) == 2: fig.canvas.mpl_disconnect(cid) return coords cid = fig.canvas.mpl_connect('button_press_event', onclick) ```", "best_answers_score":0.7689, "library_name":"matplotlib", "question_url":"https:\/\/stackoverflow.com\/questions\/25521120\/store-mouse-click-event-coordinates-with-matplotlib", "best_answers_votes":57, "question_length":1070, "response_length":686 }, { "question":"Global legend and title aside subplots I've started with matplot and managed some basic plots, but now I find it hard to discover how to do some stuff I need now :( My actual question is how to place a global title and global legend on a figure with subplots. I'm doing 2x3 subplots where I have a lot of different graphs in various colors (about 200). To distinguish (most) of them I wrote something like ``` def style(i, total): return dict(color=jet(i\/total), linestyle=[\"-\", \"--\", \"-.\", \":\"][i%4], marker=[\"+\", \"*\", \"1\", \"2\", \"3\", \"4\", \"s\"][i%7]) fig=plt.figure() p0=fig.add_subplot(321) for i, y in enumerate(data): p0.plot(x, trans0(y), \"-\", label=i, **style(i, total)) # and more subplots with other transN functions ``` (any thoughts on this? :)) Each subplot has the same style function. Now I'm trying to get a global title for all subplots and also a global legend which explains all styles. 
Also I need to make the font tiny to fit all 200 styles on there (I don't need completely unique styles, but at least some attempt.) Can someone help me with this?", "response":"Global title: In newer releases of matplotlib one can use the Figure.suptitle() method: ``` import matplotlib.pyplot as plt fig = plt.gcf() fig.suptitle(\"Title centered above all subplots\", fontsize=14) ``` Alternatively (based on @Steven C. Howell's comment below (thank you!)), use the matplotlib.pyplot.suptitle() function: ``` import matplotlib.pyplot as plt # plot stuff # ... plt.suptitle(\"Title centered above all subplots\", fontsize=14) ```", "best_answers_score":0.7687, "library_name":"matplotlib", "question_url":"https:\/\/stackoverflow.com\/questions\/7526625\/global-legend-and-title-aside-subplots", "best_answers_votes":278, "question_length":1071, "response_length":454 }, { "question":"How to change fonts in matplotlib (python)? It sounds like an easy problem, but I have not found an effective solution to change the font (not the font size) in a plot made with matplotlib in python. I found a couple of tutorials to change the default font of matplotlib by modifying some files in the folders where matplotlib stores its default font - see this blog post - but I am looking for a less radical solution since I would like to use more than one font in my plot (text, label, axis label, etc).", "response":"Say you want Comic Sans for the title and Helvetica for the x label.
``` csfont = {'fontname':'Comic Sans MS'} hfont = {'fontname':'Helvetica'} plt.title('title',**csfont) plt.xlabel('xlabel', **hfont) plt.show() ```", "best_answers_score":0.7687, "library_name":"matplotlib", "question_url":"https:\/\/stackoverflow.com\/questions\/21321670\/how-to-change-fonts-in-matplotlib-python", "best_answers_votes":166, "question_length":501, "response_length":216 }, { "question":"tight_layout() doesn't take into account figure suptitle If I add a subtitle to my matplotlib figure it gets overlaid by the subplot's titles. Does anybody know how to easily take care of that? I tried the tight_layout() function, but it only makes things worse. Example: ```python import numpy as np import matplotlib.pyplot as plt f = np.random.random(100) g = np.random.random(100) fig = plt.figure() fig.suptitle('Long Suptitle', fontsize=24) plt.subplot(121) plt.plot(f) plt.title('Very Long Title 1', fontsize=20) plt.subplot(122) plt.plot(g) plt.title('Very Long Title 2', fontsize=20) plt.tight_layout() plt.show() ```", "response":"You can adjust the subplot geometry in the very tight_layout call as follows: ``` fig.tight_layout(rect=[0, 0.03, 1, 0.95]) ``` As it's stated in the documentation (https:\/\/matplotlib.org\/stable\/users\/explain\/axes\/tight_layout_guide.html): tight_layout() only considers ticklabels, axis labels, and titles. Thus, other artists may be clipped and also may overlap.", "best_answers_score":0.7681, "library_name":"matplotlib", "question_url":"https:\/\/stackoverflow.com\/questions\/8248467\/tight-layout-doesnt-take-into-account-figure-suptitle", "best_answers_votes":308, "question_length":626, "response_length":363 }, { "question":"Remove the legend on a matplotlib figure To add a legend to a matplotlib plot, one simply runs legend(). How to remove a legend from a plot? (The closest I came to this is to run legend([]) in order to empty the legend from data. 
But that leaves an ugly white rectangle in the upper right corner.)", "response":"As of matplotlib v1.4.0rc4, a remove method has been added to the legend object. Usage: ```py ax.get_legend().remove() ``` or ```py legend = ax.legend(...) ... legend.remove() ``` See here for the commit where this was introduced.", "best_answers_score":0.7674, "library_name":"matplotlib", "question_url":"https:\/\/stackoverflow.com\/questions\/5735208\/remove-the-legend-on-a-matplotlib-figure", "best_answers_votes":471, "question_length":297, "response_length":230 }, { "question":"Rotate tick labels in subplot I am attempting to rotate the x labels of a subplot (created using GridSpec) by 45 degrees. I have tried using axa.set_xticks() and axa.set_xticklabels, but it does not seem to work. Google wasn't helping either, since most questions concerning labels are about normal plots, and not subplots. See code below: ``` width = 20 # Width of the figure in centimeters height = 15 # Height of the figure in centimeters w = width * 0.393701 # Conversion to inches h = height * 0.393701 # Conversion to inches f1 = plt.figure(figsize=[w,h]) gs = gridspec.GridSpec(1, 7, width_ratios = [1.5, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0]) axa = plt.subplot(gs[0]) axa.plot(dts, z,'k', alpha=0.75, lw=0.25) #axa.set_title('...') axa.set_ylabel('TVDSS ' + '$[m]$', fontsize = '10' ) axa.set_xlabel('slowness 'r'$[\\mu s\/m]$', fontsize = '10') axa.set_ylim(245, 260) axa.set_xlim(650, 700) axa.tick_params(labelsize=7) axa.invert_yaxis() axa.grid() ``` Any help will be greatly appreciated!", "response":"You can do it in multiple ways: Here is one solution making use of tick_params: ``` ax.tick_params(labelrotation=45) ``` Here is another solution making use of set_xticklabels: ``` ax.set_xticklabels(labels, rotation=45) ``` Here is a third solution making use of set_rotation: ``` for tick in ax.get_xticklabels(): tick.set_rotation(45) ```", "best_answers_score":0.7669, "library_name":"matplotlib", 
"question_url":"https:\/\/stackoverflow.com\/questions\/31186019\/rotate-tick-labels-in-subplot", "best_answers_votes":130, "question_length":990, "response_length":341 }, { "question":"How to format axis number format to thousands with a comma How can I change the format of the numbers in the x-axis to be like 10,000 instead of 10000? Ideally, I would just like to do something like this: ``` x = format((10000.21, 22000.32, 10120.54), \"#,###\") ``` Here is the code: ``` import matplotlib.pyplot as plt # create figure instance fig1 = plt.figure(1) fig1.set_figheight(15) fig1.set_figwidth(20) ax = fig1.add_subplot(2,1,1) x = 10000.21, 22000.32, 10120.54 y = 1, 4, 15 ax.plot(x, y) ax2 = fig1.add_subplot(2,1,2) x2 = 10434, 24444, 31234 y2 = 1, 4, 9 ax2.plot(x2, y2) fig1.show() ```", "response":"Use , as format specifier: ``` >>> format(10000.21, ',') '10,000.21' ``` Alternatively you can also use str.format instead of format: ``` >>> '{:,}'.format(10000.21) '10,000.21' ``` With matplotlib.ticker.FuncFormatter: ``` ... ax.get_xaxis().set_major_formatter( matplotlib.ticker.FuncFormatter(lambda x, p: format(int(x), ','))) ax2.get_xaxis().set_major_formatter( matplotlib.ticker.FuncFormatter(lambda x, p: format(int(x), ','))) fig1.show() ```", "best_answers_score":0.7665, "library_name":"matplotlib", "question_url":"https:\/\/stackoverflow.com\/questions\/25973581\/how-to-format-axis-number-format-to-thousands-with-a-comma", "best_answers_votes":124, "question_length":600, "response_length":450 }, { "question":"Cleanest way to hide every nth tick label in matplotlib colorbar? The labels on my horizontal colorbar are too close together and I don't want to reduce text size further: ``` cbar = plt.colorbar(shrink=0.8, orientation='horizontal', extend='both', pad=0.02) cbar.ax.tick_params(labelsize=8) ``` I'd like to preserve all ticks, but remove every other label. Most examples I've found pass a user-specified list of strings to cbar.set_ticklabels(). 
I'm looking for a general solution. I played around with variations of ``` cbar.set_ticklabels(cbar.get_ticklabels()[::2]) ``` and ``` cbar.ax.xaxis.set_major_locator(matplotlib.ticker.MaxNLocator(nbins=4)) ``` but I haven't found the magic combination. I know there must be a clean way to do this using a locator object.", "response":"Loop over the ticklabels and call set_visible(): ``` for label in cbar.ax.xaxis.get_ticklabels()[::2]: label.set_visible(False) ```", "best_answers_score":0.7655, "library_name":"matplotlib", "question_url":"https:\/\/stackoverflow.com\/questions\/20337664\/cleanest-way-to-hide-every-nth-tick-label-in-matplotlib-colorbar", "best_answers_votes":102, "question_length":768, "response_length":131 }, { "question":"Rotate tick labels for seaborn barplot I am trying to display a chart with rotated x-axis labels, but the chart is not displaying. ``` import seaborn as sns %matplotlib inline yellow='#FFB11E' by_school=sns.barplot(x ='Organization Name',y ='Score',data = combined.sort('Organization Name'),color=yellow,ci=None) ``` At this point I can see the image, but after I set the xticklabels, I don't see the image anymore, only an object reference. (I would post the image, but I don't have enough reputation :() ``` by_school.set_xticklabels('Organization Name',rotation=45) ``` A similar question is posted here: Rotate label text in seaborn factorplot but the solution is not working.", "response":"You need a different method call, namely .set_rotation for each ticklabel. Since you already have the ticklabels, just change their rotations: ``` for item in by_school.get_xticklabels(): item.set_rotation(45) ``` barplot returns a matplotlib.axes object (as of seaborn 0.6.0), therefore you have to rotate the labels this way.
In other cases, when the method returns a FacetGrid object, refer to Rotate label text in seaborn factorplot", "best_answers_score":0.7647, "library_name":"matplotlib", "question_url":"https:\/\/stackoverflow.com\/questions\/31859285\/rotate-tick-labels-for-seaborn-barplot", "best_answers_votes":64, "question_length":674, "response_length":437 }, { "question":"How to draw a rectangle over a specific region in a matplotlib graph I have a graph, computed from some data, drawn in matplotlib. I want to draw a rectangular region around the global maximum of this graph. I tried plt.axhspan, but the rectangle doesn't seem to appear when I call plt.show() So, how can a rectangular region be drawn onto a matplotlib graph? Thanks!", "response":"The most likely reason is that you used data units for the x arguments when calling axhspan. From the function's docs (my emphasis): y coords are in data units and x coords are in axes (relative 0-1) units. So any rectangle stretching left of 0 or right of 1 is simply drawn off-plot. An easy alternative might be to add a Rectangle to your axis (e.g., via plt.gca and add_patch); Rectangle uses data units for both dimensions. 
The following would add a grey rectangle of width 4 and height 6, with its lower-left corner half a unit below and to the left of (someX, someY) = (2, 5): ```py from matplotlib.patches import Rectangle import matplotlib.pyplot as plt fig = plt.figure() plt.xlim(0, 10) plt.ylim(0, 12) someX, someY = 2, 5 currentAxis = plt.gca() currentAxis.add_patch(Rectangle((someX - .5, someY - .5), 4, 6, facecolor=\"grey\")) ``` Without facecolor ```py currentAxis.add_patch(Rectangle((someX - .5, someY - .5), 4, 6, facecolor=\"none\", ec='k', lw=2)) ```", "best_answers_score":0.7645, "library_name":"matplotlib", "question_url":"https:\/\/stackoverflow.com\/questions\/13013781\/how-to-draw-a-rectangle-over-a-specific-region-in-a-matplotlib-graph", "best_answers_votes":54, "question_length":367, "response_length":898 }, { "question":"Colorplot of 2D array matplotlib So, I thought this was going to be really simple, but I've been having a lot of difficulty finding exactly what I'm looking for in a comprehensible example. Basically I want to make phase plots, so assuming I have a 2d array, how can I get matplotlib to convert this to a plot that I can attach titles, axes, and legends (color bars) to? I'm looking for an extremely simple bare-bones solution that only uses what is required and will work with any 2D array. I'm certain this is simple and I'm just being thick somehow, but I'm really having a lot of trouble with this.
I have been tooling with the examples, but they don't seem well suited to what I'm trying to do: I like the general appearance of this graph, I'd just like to be able to pass in a 2dArray and have this same result: ``` import numpy as np import matplotlib as ml import matplotlib.pyplot as plt H = [[1,2,3,4][5,6,7,8][9,10,11,12][13,14,15,16]] fig = plt.figure(figsize=(6, 3.2)) ax = fig.add_subplot(111) ax.set_title('colorMap') X,Y = np.meshgrid(xedges, yedges) plt.pcolormesh(X, Y, H) ax.set_aspect('equal') cax = fig.add_axes([0.12, 0.1, 0.78, 0.8]) cax.get_xaxis().set_visible(False) cax.get_yaxis().set_visible(False) cax.patch.set_alpha(0) cax.set_frame_on(False) plt.colorbar(orientation='vertical') plt.show() ```", "response":"I'm afraid your posted example is not working, since X and Y aren't defined. So instead of pcolormesh let's use imshow: ``` import numpy as np import matplotlib.pyplot as plt H = np.array([[1, 2, 3, 4], [5, 6, 7, 8], [9, 10, 11, 12], [13, 14, 15, 16]]) # added some commas and array creation code fig = plt.figure(figsize=(6, 3.2)) ax = fig.add_subplot(111) ax.set_title('colorMap') plt.imshow(H) ax.set_aspect('equal') cax = fig.add_axes([0.12, 0.1, 0.78, 0.8]) cax.get_xaxis().set_visible(False) cax.get_yaxis().set_visible(False) cax.patch.set_alpha(0) cax.set_frame_on(False) plt.colorbar(orientation='vertical') plt.show() ```", "best_answers_score":0.7636, "library_name":"matplotlib", "question_url":"https:\/\/stackoverflow.com\/questions\/16492830\/colorplot-of-2d-array-matplotlib", "best_answers_votes":62, "question_length":1325, "response_length":631 }, { "question":"Plot a black-and-white binary map in matplotlib I'm using python to simulate some automation models, and with the help of matplotlib I'm producing plots like the one shown below. I'm currently plotting with the following command: ``` ax.imshow(self.g, cmap=map, interpolation='nearest') ``` where self.g is the binary map (0 -> blue, 1 -> red in my current plots). 
However, to include this in my report I would like the plot to have black dots on a white background instead of red on blue. How do I accomplish that?", "response":"You can change the color map you are using via the cmap keyword. The color map 'Greys' provides the effect you want. You can find a list of available maps on the scipy website. ``` import matplotlib.pyplot as plt import numpy as np np.random.seed(101) g = np.floor(np.random.random((100, 100)) + .5) plt.subplot(211) plt.imshow(g) plt.subplot(212) plt.imshow(g, cmap='Greys', interpolation='nearest') plt.savefig('blkwht.png') plt.show() ``` which results in:", "best_answers_score":0.7628, "library_name":"matplotlib", "question_url":"https:\/\/stackoverflow.com\/questions\/9638826\/plot-a-black-and-white-binary-map-in-matplotlib", "best_answers_votes":73, "question_length":516, "response_length":459 }, { "question":"pad_inches=0 and bbox_inches=\"tight\" makes the plot smaller than declared figsize I am producing a publication-quality plot to be embedded in LaTeX and I would like to be very precise in terms of sizes and fonts (so that fonts are of the same size in the article as in the plot). To prevent the plot from scaling in LaTeX I would like to have it at its exact size, but I cannot. Here is my code: ``` import matplotlib.pyplot as plt from matplotlib import rc, rcParams from numpy import sin rc('text', usetex=True) rc('font', family='serif', serif='Computer Modern Roman', size=8) rc('legend', fontsize=10) width_in = 5 fig = plt.figure(1, figsize=(width_in, 2)) ax = fig.add_subplot(111) ax.plot(range(0,100), sin(range(0,100))) fig.tight_layout() fig.savefig('test.eps', bbox_inches='tight', pad_inches=0) plt.close() ``` The problem is with bbox_inches='tight' and pad_inches=0. Adding those options makes my plot 4.76 inches wide instead of the declared 5 inches. But I want them to save space. So how can I solve this?
Edit: Well, the answers suggest removing bbox_inches='tight' and pad_inches=0 and using just tight_layout(). Then the image is the right size, however it still has some white padding around it. I can remove it with fig.tight_layout(pad=0), but then the figure title is moved inside the box, which looks ugly. On the other hand I can use tight_layout(rect=[...]) and obtain the desired result, but it is manual work to get the numbers right - I don't like it. Thus, currently I don't see any easy and general solution to my problem.", "response":"The problem you are having is that bbox_inches='tight' just removes all of the extra white space around your figure; it does not actually re-arrange anything in your figure after it has been rendered. You might need to tweak the parameters you pass to tight_layout (tutorial) to get your desired effect. Hopefully this gets you pointed in the right direction.", "best_answers_score":0.7609, "library_name":"matplotlib", "question_url":"https:\/\/stackoverflow.com\/questions\/16032389\/pad-inches-0-and-bbox-inches-tight-makes-the-plot-smaller-than-declared-figsiz", "best_answers_votes":45, "question_length":1540, "response_length":360 }, { "question":"Increase distance between title and plot in matplotlib? [duplicate] This question already has answers here: Python Matplotlib figure title overlaps axes label when using twiny (9 answers) Closed 6 years ago. I have a simple plot in matplotlib and I would like to increase the distance between the title and the plot (without using suptitle because it does not work on the version I use on a server). How to do that?", "response":"With matplotlib 2.2+, you can use the keyword argument pad: ``` ax.set_title('Title', pad=20) ``` Adjust pad until you're happy with the axis title position.
The advantage of this method over using rcParams is that it only changes this one axis title.", "best_answers_score":0.7589, "library_name":"matplotlib", "question_url":"https:\/\/stackoverflow.com\/questions\/16419670\/increase-distance-between-title-and-plot-in-matplolib", "best_answers_votes":165, "question_length":415, "response_length":251 }, { "question":"How to have logarithmic bins in a Python histogram As far as I know the option Log=True in the histogram function only refers to the y-axis. ``` P.hist(d,bins=50,log=True,alpha=0.5,color='b',histtype='step') ``` I need the bins to be equally spaced in log10. Is there something that can do this?", "response":"use logspace() to create a geometric sequence, and pass it to bins parameter. And set the scale of xaxis to log scale. ``` import pylab as pl import numpy as np data = np.random.normal(size=10000) pl.hist(data, bins=np.logspace(np.log10(0.1),np.log10(1.0), 50)) pl.gca().set_xscale(\"log\") pl.show() ```", "best_answers_score":0.7579, "library_name":"matplotlib", "question_url":"https:\/\/stackoverflow.com\/questions\/6855710\/how-to-have-logarithmic-bins-in-a-python-histogram", "best_answers_votes":160, "question_length":295, "response_length":302 }, { "question":"increase the linewidth of the legend lines in matplotlib I know that if I change the linewidth of a line, that is automatically updated in the legend. 
However, I would like to change just the legend linewidth without affecting the plot.", "response":"Here's a simple example of how to do it: ``` import numpy as np import matplotlib.pyplot as plt # make some data x = np.linspace(0, 2*np.pi) y1 = np.sin(x) y2 = np.cos(x) # plot sin(x) and cos(x), unpacking the returned lists to get the Line2D objects p1, = plt.plot(x, y1, 'b-', linewidth=1.0) p2, = plt.plot(x, y2, 'r-', linewidth=1.0) # make a legend for both plots leg = plt.legend([p1, p2], ['sin(x)', 'cos(x)'], loc=1) # set the linewidth of each legend object for legobj in leg.legendHandles: legobj.set_linewidth(2.0) plt.show() ```", "best_answers_score":0.7579, "library_name":"matplotlib", "question_url":"https:\/\/stackoverflow.com\/questions\/9706845\/increase-the-linewidth-of-the-legend-lines-in-matplotlib", "best_answers_votes":108, "question_length":235, "response_length":482 }, { "question":"How to avoid overlapping of labels & autopct in a pie chart My Python code is: ``` import matplotlib.pyplot as plt values = [234, 64, 54,10, 0, 1, 0, 9, 2, 1, 7, 7] months = ['Jan', 'Feb', 'Mar', 'Apr', 'May', 'Jun', 'Jul','Aug','Sep','Oct', 'Nov','Dec'] colors = ['yellowgreen', 'red', 'gold', 'lightskyblue', 'white','lightcoral','blue','pink', 'darkgreen', 'yellow','grey','violet','magenta','cyan'] plt.pie(values, labels=months, autopct='%1.1f%%', shadow=True, colors=colors, startangle=90, radius=1.2) plt.show() ``` Is it possible to show the labels \"Jan\", \"Feb\", \"Mar\", etc.
and the percentages, either: without overlapping, or using an arrow mark?", "response":"Alternatively you can put the legend beside the pie graph: ``` import matplotlib.pyplot as plt import numpy as np x = np.char.array(['Jan','Feb','Mar','Apr','May','Jun','Jul','Aug','Sep','Oct', 'Nov','Dec']) y = np.array([234, 64, 54,10, 0, 1, 0, 9, 2, 1, 7, 7]) colors = ['yellowgreen','red','gold','lightskyblue','white','lightcoral','blue','pink', 'darkgreen','yellow','grey','violet','magenta','cyan'] porcent = 100.*y\/y.sum() patches, texts = plt.pie(y, colors=colors, startangle=90, radius=1.2) labels = ['{0} - {1:1.2f} %'.format(i,j) for i,j in zip(x, porcent)] sort_legend = True if sort_legend: patches, labels, dummy = zip(*sorted(zip(patches, labels, y), key=lambda x: x[2], reverse=True)) plt.legend(patches, labels, loc='center left', bbox_to_anchor=(-0.1, 1.), fontsize=8) plt.savefig('piechart.png', bbox_inches='tight') ``` EDIT: if you want to keep the legend in the original order, as you mentioned in the comments, you can set sort_legend=False in the code above, giving:", "best_answers_score":0.7579, "library_name":"matplotlib", "question_url":"https:\/\/stackoverflow.com\/questions\/23577505\/how-to-avoid-overlapping-of-labels-autopct-in-a-pie-chart", "best_answers_votes":82, "question_length":624, "response_length":992 }, { "question":"How to hide ticks label in python but keep the ticks in place? I want to hide the tick labels on a plot I created, but keep the ticks themselves (the little marks on the axis). When I try to use what I've found here, for example, the entire tick is removed, and not just the labels.
How can I remove only the labels then?", "response":"Here is a slightly simpler answer, using ax.tick_params ``` import matplotlib.pylab as plt fig, ax = plt.subplots() plt.plot([1,2,3],[4,5,6]) ax.tick_params(labelbottom=False) plt.show() ``` Here is the resulting output in Matplotlib 3 As commented by @chris, one can similarly hide the labels on any of the other axes using labeltop, labelleft, labelright, in the above example, instead of labelbottom.", "best_answers_score":0.7566, "library_name":"matplotlib", "question_url":"https:\/\/stackoverflow.com\/questions\/20936658\/how-to-hide-ticks-label-in-python-but-keep-the-ticks-in-place", "best_answers_votes":137, "question_length":316, "response_length":403 }, { "question":"Annotate Time Series plot I have an index array (x) of dates (datetime objects) and an array of actual values (y: bond prices). Doing the following: ```py plot(x,y) ``` produces a perfectly fine time series graph with the x-axis labeled with the dates. No problem so far. But I want to add text on certain dates. For example, on 2009-10-31, I wish to display the text \"Event 1\" with an arrow pointing to the y value at that date. I have read through the Matplotlib documentation on text() and annotate() to no avail.", "response":"Matplotlib uses an internal floating point format for dates. You just need to convert your date to that format (using matplotlib.dates.date2num or matplotlib.dates.datestr2num) and then use annotate as usual. 
As a somewhat excessively fancy example: ``` import datetime as dt import matplotlib.pyplot as plt import matplotlib.dates as mdates x = [dt.datetime(2009, 5, 1), dt.datetime(2010, 6, 1), dt.datetime(2011, 4, 1), dt.datetime(2012, 6, 1)] y = [1, 3, 2, 5] fig, ax = plt.subplots() ax.plot_date(x, y, linestyle='--') ax.annotate('Test', (mdates.date2num(x[1]), y[1]), xytext=(15, 15), textcoords='offset points', arrowprops=dict(arrowstyle='-|>')) fig.autofmt_xdate() plt.show() ```", "best_answers_score":0.7563, "library_name":"matplotlib", "question_url":"https:\/\/stackoverflow.com\/questions\/11067368\/annotate-time-series-plot", "best_answers_votes":89, "question_length":516, "response_length":697 }, { "question":"Drawing lines between two plots in Matplotlib I am drawing two subplots with Matplotlib, essentially following: ``` subplot(211); imshow(a); scatter(..., ...) subplot(212); imshow(b); scatter(..., ...) ``` Can I draw lines between those two subplots? How would I do that?", "response":"The solutions from the other answers are suboptimal in many cases (as they would only work if no changes are made to the plot after calculating the points).
A better solution would use the specially designed ConnectionPatch: ``` import matplotlib.pyplot as plt from matplotlib.patches import ConnectionPatch import numpy as np fig = plt.figure(figsize=(10,5)) ax1 = fig.add_subplot(121) ax2 = fig.add_subplot(122) x,y = np.random.rand(100),np.random.rand(100) ax1.plot(x,y,'ko') ax2.plot(x,y,'ko') i = 10 xy = (x[i],y[i]) con = ConnectionPatch(xyA=xy, xyB=xy, coordsA=\"data\", coordsB=\"data\", axesA=ax2, axesB=ax1, color=\"red\") ax2.add_artist(con) ax1.plot(x[i],y[i],'ro',markersize=10) ax2.plot(x[i],y[i],'ro',markersize=10) plt.show() ```", "best_answers_score":0.7558, "library_name":"matplotlib", "question_url":"https:\/\/stackoverflow.com\/questions\/17543359\/drawing-lines-between-two-plots-in-matplotlib", "best_answers_votes":66, "question_length":272, "response_length":738 }, { "question":"set matplotlib 3d plot aspect ratio ``` import matplotlib.pyplot as plt from mpl_toolkits.mplot3d import Axes3D ``` Setting the aspect ratio works for 2d plots: ``` ax = plt.axes() ax.plot([0,1], [0,10]) ax.set_aspect('equal', 'box') ``` But it does not work for 3d: ``` ax = plt.axes(projection='3d') ax.plot([0,1], [0,1], [0,10]) ax.set_aspect('equal', 'box') ``` How do I set the aspect ratio for 3d?", "response":"As of matplotlib 3.3.0, Axes3D.set_box_aspect seems to be the recommended approach. ```py import numpy as np import matplotlib.pyplot as plt xs, ys, zs = ... ax = plt.axes(projection='3d') ax.set_box_aspect((np.ptp(xs), np.ptp(ys), np.ptp(zs))) # aspect ratio is 1:1:1 in data space ax.plot(xs, ys, zs) ```", "best_answers_score":0.7551, "library_name":"matplotlib", "question_url":"https:\/\/stackoverflow.com\/questions\/8130823\/set-matplotlib-3d-plot-aspect-ratio", "best_answers_votes":65, "question_length":403, "response_length":306 }, { "question":"How to save a figure remotely with pylab? 
[duplicate] This question already has answers here: Generating a PNG with matplotlib when DISPLAY is undefined (13 answers) Closed 9 years ago. I'm trying to generate a figure at a remote computer with the command pylab.savefig. But I got such error: ``` Unable to access the X Display, is $DISPLAY set properly? ``` How can I save the figure properly?", "response":"By default, matplotlib will use something like the TkAgg backend. This requires an X-server to be running. While you can just use X-forwarding, there will be a noticeable lag as matplotlib tries to connect with the remote X-server. If you don't need to interact with the plot, it's often nicer to speed things up by avoiding an X-connection entirely. If you want to make a plot without needing an X-server at all, use the Agg backend instead. E.g. do something like this: ``` import matplotlib matplotlib.use('Agg') # Must be before importing matplotlib.pyplot or pylab! import matplotlib.pyplot as plt fig = plt.figure() plt.plot(range(10)) fig.savefig('temp.png') ``` If you want this to be the default behavior, you can modify your matplotlibrc file to use the Agg backend by default. See this article for more information.", "best_answers_score":0.7549, "library_name":"matplotlib", "question_url":"https:\/\/stackoverflow.com\/questions\/4706451\/how-to-save-a-figure-remotely-with-pylab", "best_answers_votes":154, "question_length":394, "response_length":826 }, { "question":"Store and reload matplotlib.pyplot object I work in an psudo-operational environment where we make new imagery on receipt of data. Sometimes when new data comes in, we need to re-open an image and update that image in order to create composites, add overlays, etc. In addition to adding to the image, this requires modification of titles, legends, etc. Is there something built into matplotlib that would let me store and reload my matplotlib.pyplot object for later use? 
It would need to maintain access to all associated objects including figures, lines, legends, etc. Maybe pickle is what I'm looking for, but I doubt it.", "response":"As of 1.2, matplotlib ships with experimental pickling support. If you come across any issues with it, please let us know on the mpl mailing list or by opening an issue on github.com\/matplotlib\/matplotlib HTH EDIT: Added a simple example ``` import matplotlib.pyplot as plt import numpy as np import pickle ax = plt.subplot(111) x = np.linspace(0, 10) y = np.exp(x) ax.plot(x, y) pickle.dump(ax, open('myplot.pickle', 'wb')) ``` Then in a separate session: ``` import matplotlib.pyplot as plt import pickle ax = pickle.load(open('myplot.pickle', 'rb')) plt.show() ```", "best_answers_score":0.7516, "library_name":"matplotlib", "question_url":"https:\/\/stackoverflow.com\/questions\/7290370\/store-and-reload-matplotlib-pyplot-object", "best_answers_votes":63, "question_length":624, "response_length":559 }, { "question":"Matplotlib: Writing right-to-left text (Hebrew, Arabic, etc.) I'm trying to add some text to my plot which is RTL (in this case, Hebrew). After some work I managed to get it to display the text, but it's displayed LTR (meaning, in the reverse order). I've dug into the reference and did an extensive search online and nothing came up. An example for what I'm using: ``` import matplotlib.pyplot as plt plt.text(0.5, 0.5, u'\u05e9\u05dc\u05d5\u05dd \u05db\u05d9\u05ea\u05d4 \u05d0', name = 'Arial') plt.show() ``` and it displays '\u05d0 \u05d4\u05ea\u05d9\u05db \u05dd\u05dc\u05d5\u05e9'. In case you can't see the Hebrew, it's as if I'd input 'Hello', and the output would be 'olleH'. I can't simply reverse the input since it's mixed LTR and RTL.
Every help would be appreciated.", "response":"For Arabic you need both bidi.algorithm.get_display and arabic_reshaper modules: ``` from bidi.algorithm import get_display import matplotlib.pyplot as plt import arabic_reshaper reshaped_text = arabic_reshaper.reshape(u'\u0644\u063a\u0629\u064c \u0639\u0631\u0628\u064a\u0651\u0629') artext = get_display(reshaped_text) plt.text(0.25, 0.45, artext , name = 'Times New Roman',fontsize=50) plt.show() ```", "best_answers_score":0.7513, "library_name":"matplotlib", "question_url":"https:\/\/stackoverflow.com\/questions\/15421746\/matplotlib-writing-right-to-left-text-hebrew-arabic-etc", "best_answers_votes":63, "question_length":687, "response_length":353 }, { "question":"Adding an axes using the same arguments as a previous axes I want to plot data, in two different subplots. After plotting, I want to go back to the first subplot and plot an additional dataset in it. However, when I do so I get this warning: MatplotlibDeprecationWarning: Adding an axes using the same arguments as a previous axes currently reuses the earlier instance. In a future version, a new instance will always be created and returned. Meanwhile, this warning can be suppressed, and the future behavior ensured, by passing a unique label to each axes instance. warnings.warn(message, mplDeprecation, stacklevel=1) I can reproduce that with a simple piece of code: ``` import matplotlib.pyplot as plt import numpy as np # Generate random data data = np.random.rand(100) # Plot in different subplots plt.figure() plt.subplot(1, 2, 1) plt.plot(data) plt.subplot(1, 2, 2) plt.plot(data) plt.subplot(1, 2, 1) # Warning occurs here plt.plot(data + 1) ``` Any ideas on how to avoid this warning? I use matplotlib 2.1.0. Looks like the same problem as here", "response":"This is a good example that shows the benefit of using matplotlib's object oriented API. 
``` import numpy as np import matplotlib.pyplot as plt # Generate random data data = np.random.rand(100) # Plot in different subplots fig, (ax1, ax2) = plt.subplots(1, 2) ax1.plot(data) ax2.plot(data) ax1.plot(data+1) plt.show() ``` Note: it is more pythonic to have variable names start with a lower case letter e.g. data = ... rather than Data = ... see PEP8", "best_answers_score":0.751, "library_name":"matplotlib", "question_url":"https:\/\/stackoverflow.com\/questions\/46933824\/adding-an-axes-using-the-same-arguments-as-a-previous-axes", "best_answers_votes":38, "question_length":1055, "response_length":449 }, { "question":"Extract matplotlib colormap in hex-format I am trying to extract discrete colors from a matplotlib colormap by manipulating this example. However, I cannot find the N discrete colors that are extracted from the colormap. In the code below I've used cmap._segmentdata, but I've found that it is the definition of the entire colormap. Given a colormap and an integer N, how do I extract N discrete colors from the colormap and export them in hex-format? ``` from pylab import * delta = 0.01 x = arange(-3.0, 3.0, delta) y = arange(-3.0, 3.0, delta) X,Y = meshgrid(x, y) Z1 = bivariate_normal(X, Y, 1.0, 1.0, 0.0, 0.0) Z2 = bivariate_normal(X, Y, 1.5, 0.5, 1, 1) Z = Z2 - Z1 # difference of Gaussians cmap = cm.get_cmap('seismic', 5) # PiYG cmap_colors = cmap._segmentdata def print_hex(r,b,g): if not(0 <= r <= 255 or 0 <= b <= 255 or 0 <= g <= 255): raise ValueError('rgb not in range(256)') print '#%02x%02x%02x' % (r, b, g) for i in range(len(cmap_colors['blue'])): r = int(cmap_colors['red'][i][2]*255) b = int(cmap_colors['blue'][i][2]*255) g = int(cmap_colors['green'][i][2]*255) print_hex(r, g, b) im = imshow(Z, cmap=cmap, interpolation='bilinear', vmax=abs(Z).max(), vmin=-abs(Z).max()) axis('off') colorbar() show() ```", "response":"You can get a tuple of rgba values for the segment with index i by calling cmap(i). 
There is also already a function that turns rgb values into hex. As Joe Kington wrote in the comments, you can use matplotlib.colors.rgb2hex. Therefore, a possible solution would be: ``` from pylab import * cmap = cm.get_cmap('seismic', 5) # PiYG for i in range(cmap.N): rgba = cmap(i) # rgb2hex accepts rgb or rgba print(matplotlib.colors.rgb2hex(rgba)) ``` The output is: ```none #00004c #0000ff #ffffff #ff0000 #7f0000 ```", "best_answers_score":0.7507, "library_name":"matplotlib", "question_url":"https:\/\/stackoverflow.com\/questions\/33596491\/extract-matplotlib-colormap-in-hex-format", "best_answers_votes":121, "question_length":1227, "response_length":509 }, { "question":"How to add a title to each subplot I have one figure which contains many subplots. ``` fig = plt.figure(num=None, figsize=(26, 12), dpi=80, facecolor='w', edgecolor='k') fig.canvas.set_window_title('Window Title') # Returns the Axes instance ax = fig.add_subplot(311) ax2 = fig.add_subplot(312) ax3 = fig.add_subplot(313) ``` How do I add titles to the subplots? fig.suptitle adds a title to all graphs and although ax.set_title() exists, the latter does not add any title to my subplots. Thank you for your help. Edit: Corrected typo about set_title(). Thanks Rutger Kassies", "response":"ax.title.set_text('My Plot Title') seems to work too. 
``` fig = plt.figure() ax1 = fig.add_subplot(221) ax2 = fig.add_subplot(222) ax3 = fig.add_subplot(223) ax4 = fig.add_subplot(224) ax1.title.set_text('First Plot') ax2.title.set_text('Second Plot') ax3.title.set_text('Third Plot') ax4.title.set_text('Fourth Plot') plt.show() ```", "best_answers_score":0.7506, "library_name":"matplotlib", "question_url":"https:\/\/stackoverflow.com\/questions\/25239933\/how-to-add-a-title-to-each-subplot", "best_answers_votes":536, "question_length":575, "response_length":333 }, { "question":"%matplotlib notebook showing a blank histogram In my Jupyter notebook I am now using %matplotlib notebook instead of %matplotlib inline, it's awesome that I can now interact with my plots on Jupyter. However, when I try to make an histogram I get a blank plot: If I use %matplotlib inline everything works fine: What's going on?", "response":"Seeing that my comment above has indeed helped someone to solve the problem I will post it as an answer. The problem occurs if you switch from %matplotlib inline to %matplotlib notebook without restarting the kernel. Switching from %matplotlib notebook to %matplotlib inline works fine. So the solution is to either restart the kernel or start a new notebook. It seems that in some cases it helps to repeat the setting of the notebook backend, i.e. 
call it twice like ``` %matplotlib notebook %matplotlib notebook ``` An analysis for why that is can be found in this comment", "best_answers_score":0.7505, "library_name":"matplotlib", "question_url":"https:\/\/stackoverflow.com\/questions\/41125690\/matplotlib-notebook-showing-a-blank-histogram", "best_answers_votes":83, "question_length":328, "response_length":574 }, { "question":"How to set the labels size on a pie chart I want to have labels with small size on a piechart in python to improve visibility here is the code ``` import matplotlib.pyplot as plt frac=[1.40 , 10.86 , 19.31 , 4.02 , 1.43 , 2.66 , 4.70 , 0.70 , 0.13 , 1.48, 32.96 , 1.11 , 13.30 , 5.86] labels=['HO0900344', 'HO0900331', 'HO0900332', 'HO0900354', 'HO0900358', 'HO0900374', 'HO0900372', 'HO0900373', 'HO0900371', 'HO0900370', 'HO0900369', 'HO0900356', 'HO0900353', 'HO0900343'] fig = plt.figure(1, figsize=(6,6)) ax = fig.add_subplot(111) ax.axis('equal') colors=('b', 'g', 'r', 'c', 'm', 'y', 'burlywood', 'w') ax.pie(frac,colors=colors ,labels=labels, autopct='%1.1f%%') plt.show() ```", "response":"The simplest way to change the font size on a pie chart is directly via the textprops argument in the pie() function. Using the code above add it like so: ``` ax.pie(frac, colors=colors ,labels=labels, autopct='%1.1f%%', textprops={'fontsize': 14}) ``` That way you can just pass in a dictionary with your desired fontsize (e.g., 14). No messing around with rcParams or return values from the function call.", "best_answers_score":0.7503, "library_name":"matplotlib", "question_url":"https:\/\/stackoverflow.com\/questions\/7082345\/how-to-set-the-labels-size-on-a-pie-chart", "best_answers_votes":166, "question_length":684, "response_length":407 }, { "question":"Change y range to start from 0 with matplotlib I am using matplotlib to plot data. 
Here's a code that does something similar: ``` import matplotlib.pyplot as plt f, ax = plt.subplots(1) xdata = [1, 4, 8] ydata = [10, 20, 30] ax.plot(xdata, ydata) plt.show(f) ``` This shows a line in a graph with the y axis that goes from 10 to 30. While I am satisfied with the x range, I would like to change the y range to start from 0 and adjust on the ymax to show everything. My current solution is to do: ``` ax.set_ylim(0, max(ydata)) ``` However I am wondering if there is a way to just say: autoscale but starts from 0.", "response":"The range must be set after the plot. ``` import matplotlib.pyplot as plt f, ax = plt.subplots(1) xdata = [1, 4, 8] ydata = [10, 20, 30] ax.plot(xdata, ydata) ax.set_ylim(ymin=0) plt.show(f) ``` If ymin is changed before plotting, this will result in a range of [0, 1]. Edit: the ymin argument has been replaced by bottom: ``` ax.set_ylim(bottom=0) ``` Documentation: https:\/\/matplotlib.org\/stable\/api\/_as_gen\/matplotlib.axes.Axes.set_ylim.html You can do the same on the x axis with left and right: ``` ax.set_xlim(left=0) ``` Documentation: https:\/\/matplotlib.org\/stable\/api\/_as_gen\/matplotlib.axes.Axes.set_xlim.html", "best_answers_score":0.7502, "library_name":"matplotlib", "question_url":"https:\/\/stackoverflow.com\/questions\/22642511\/change-y-range-to-start-from-0-with-matplotlib", "best_answers_votes":164, "question_length":613, "response_length":619 }, { "question":"Get default line color cycle I noticed when you plot that the first line is blue, then orange, then green, and so on. Is there some way to access this list of colors? I've seen a million posts on how to change the color cycle or access the iterator, but not on how to just get the list of colors that matplotlib cycles through by default.", "response":"Often, there is no need to get the default color cycle from anywhere, as it is the default one, so just using it is sufficient. 
``` import numpy as np import matplotlib.pyplot as plt fig = plt.figure() ax = fig.add_subplot(111) t = np.arange(5) for i in range(4): line, = ax.plot(t,i*(t+1), linestyle = '-') ax.plot(t,i*(t+1)+.3,color = line.get_color(), linestyle = ':') plt.show() ``` In case you want to use the default color cycle for something different, there are of course several options. \"tab10\" colormap First it should be mentionned that the \"tab10\" colormap comprises the colors from the default color cycle, you can get it via cmap = plt.get_cmap(\"tab10\"). Equivalent to the above would hence be ``` import numpy as np import matplotlib.pyplot as plt fig = plt.figure() ax = fig.add_subplot(111) t = np.arange(5) cmap = plt.get_cmap(\"tab10\") for i in range(4): ax.plot(t,i*(t+1), color=cmap(i), linestyle = '-') ax.plot(t,i*(t+1)+.3,color=cmap(i), linestyle = ':') plt.show() ``` Colors from color cycle You can also use the color cycler directly, cycle = plt.rcParams['axes.prop_cycle'].by_key()['color']. This gives list with the colors from the cycle, which you can use to iterate over. ``` import numpy as np import matplotlib.pyplot as plt fig = plt.figure() ax = fig.add_subplot(111) t = np.arange(5) cycle = plt.rcParams['axes.prop_cycle'].by_key()['color'] for i in range(4): ax.plot(t,i*(t+1), color=cycle[i], linestyle = '-') ax.plot(t,i*(t+1)+.3,color=cycle[i], linestyle = ':') plt.show() ``` The CN notation Finally, the CN notation allows to get the Nth color of the color cycle, color=\"C{}\".format(i). 
This however only works for the first 10 colors (N in [0,1,...9]) ``` import numpy as np import matplotlib.pyplot as plt fig = plt.figure() ax = fig.add_subplot(111) t = np.arange(5) for i in range(4): ax.plot(t,i*(t+1), color=\"C{}\".format(i), linestyle = '-') ax.plot(t,i*(t+1)+.3,color=\"C{}\".format(i), linestyle = ':') plt.show() ``` All codes presented here produce the same plot.", "best_answers_score":0.75, "library_name":"matplotlib", "question_url":"https:\/\/stackoverflow.com\/questions\/42086276\/get-default-line-color-cycle", "best_answers_votes":198, "question_length":338, "response_length":2014 }, { "question":"How to change the text color of font in legend? Is there a way to change the font color of the legend in a matplotlib plot? Specially in occasions where the background of the plot is dark, the default black text in the legend is hard or impossible to read.", "response":"As of matplotlib version 3.3.0, you can now directly use the keyword argument labelcolor in matplotlib.pyplot.legend(). Example using the same color as the corresponding artist by setting labelcolor='linecolor': ```py import matplotlib.pyplot as plt import numpy as np plt.figure(figsize=(4, 3)) plt.plot(np.arange(10), np.random.rand(10) * 0, '-', label='spam') plt.plot(np.arange(10), np.random.rand(10) * 1, ':', label='ham') plt.plot(np.arange(10), np.random.rand(10) * 2, 'o', label='eggs') plt.legend(labelcolor='linecolor') ``` Example changing all text to white by setting labelcolor='w', e.g. 
for dark backgrounds: ```py import matplotlib.pyplot as plt import numpy as np plt.figure(figsize=(4, 3)) plt.plot(np.arange(10), np.random.rand(10) * 0, '-', label='spam') plt.plot(np.arange(10), np.random.rand(10) * 1, ':', label='ham') plt.plot(np.arange(10), np.random.rand(10) * 2, 'o', label='eggs') plt.legend(facecolor='k', labelcolor='w') ```", "best_answers_score":0.75, "library_name":"matplotlib", "question_url":"https:\/\/stackoverflow.com\/questions\/18909696\/how-to-change-the-text-color-of-font-in-legend", "best_answers_votes":44, "question_length":256, "response_length":953 }, { "question":"How To Plot Multiple Histograms On Same Plot With Seaborn With matplotlib, I can make a histogram with two datasets on one plot (one next to the other, not overlaid). ``` import matplotlib.pyplot as plt import random x = [random.randrange(100) for i in range(100)] y = [random.randrange(100) for i in range(100)] plt.hist([x, y]) plt.show() ``` This yields the following plot. However, when I try to do this with seaborn: ``` import seaborn as sns sns.distplot([x, y]) ``` I get the following error: ``` ValueError: color kwarg must have one color per dataset ``` So then I try to add some color values: ``` sns.distplot([x, y], color=['r', 'b']) ``` And I get the same error. I saw this post on how to overlay graphs, but I would like these histograms to be side by side, not overlaid. And looking at the docs it doesn't specify how to include a list of lists as the first argument 'a'. How can I achieve this style of histogram using seaborn?", "response":"If I understand you correctly you may want to try something like this: ``` fig, ax = plt.subplots() for a in [x, y]: sns.distplot(a, bins=range(1, 110, 10), ax=ax, kde=False) ax.set_xlim([0, 100]) ``` Which should yield a plot like this: UPDATE: Looks like you want 'seaborn look' rather than seaborn plotting functionality.
For this you only need to: ``` import seaborn as sns plt.hist([x, y], color=['r','b'], alpha=0.5) ``` Which will produce: UPDATE for seaborn v0.12+: After seaborn v0.12 to get seaborn-styled plots you need to: ``` import seaborn as sns sns.set_theme() # <-- This actually changes the look of plots. plt.hist([x, y], color=['r','b'], alpha=0.5) ``` See seaborn docs for more information.", "best_answers_score":0.7497, "library_name":"matplotlib", "question_url":"https:\/\/stackoverflow.com\/questions\/36362624\/how-to-plot-multiple-histograms-on-same-plot-with-seaborn", "best_answers_votes":60, "question_length":942, "response_length":706 }, { "question":"How to get color of most recent plotted line in Python's plt I plot a line without specifying the color (think: plt.plot(x,y)). Say the color comes out blue. Question: How do I obtain this color from the plt object so that I can put it into a variable? Seems like this is close (and potentially the solution): ``` p = plt.plot(x,y) color = p[0].get_color() ``` Updated question: I am not sure I understand the \"0\" index: Does p[0] always access the most recent plotted line?", "response":"In your example, p is a list of Line2D object. In that example you have only one line object, p[0]. The following is an example plotting three lines. As more line is added, it is appended to the p. So if you want the color of the last plot, it will be p[-1].get_color(). 
``` import numpy as np import matplotlib.pyplot as plt x = np.arange(10) y = np.arange(10) p = plt.plot(x,y, x,y*2, x,y*3) # make three line plots type(p) # list type(p[0]) # p[0].get_color() # 'b' p[1].get_color() # 'g' p[2].get_color() # 'r' ```", "best_answers_score":0.7494, "library_name":"matplotlib", "question_url":"https:\/\/stackoverflow.com\/questions\/36699155\/how-to-get-color-of-most-recent-plotted-line-in-pythons-plt", "best_answers_votes":124, "question_length":474, "response_length":519 }, { "question":"How to plot multiple bars grouped How to plot multiple bars in matplotlib, when I tried to call the bar function multiple times, they overlap and as seen the below figure the highest value red can be seen only. How can I plot the multiple bars with dates on the x-axes? So far, I tried this: ```py import matplotlib.pyplot as plt import datetime x = [ datetime.datetime(2011, 1, 4, 0, 0), datetime.datetime(2011, 1, 5, 0, 0), datetime.datetime(2011, 1, 6, 0, 0) ] y = [4, 9, 2] z = [1, 2, 3] k = [11, 12, 13] ax = plt.subplot(111) ax.bar(x, y, width=0.5, color='b', align='center') ax.bar(x, z, width=0.5, color='g', align='center') ax.bar(x, k, width=0.5, color='r', align='center') ax.xaxis_date() plt.show() ``` I got this: The results should be something like, but with the dates are on the x-axes and bars are next to each other:", "response":"``` import matplotlib.pyplot as plt from matplotlib.dates import date2num import datetime x = [ datetime.datetime(2011, 1, 4, 0, 0), datetime.datetime(2011, 1, 5, 0, 0), datetime.datetime(2011, 1, 6, 0, 0) ] x = date2num(x) y = [4, 9, 2] z = [1, 2, 3] k = [11, 12, 13] ax = plt.subplot(111) ax.bar(x-0.2, y, width=0.2, color='b', align='center') ax.bar(x, z, width=0.2, color='g', align='center') ax.bar(x+0.2, k, width=0.2, color='r', align='center') ax.xaxis_date() plt.show() ``` I don't know what's the \"y values are also overlapping\" means, does the following code solve your problem? 
``` ax = plt.subplot(111) w = 0.3 ax.bar(x-w, y, width=w, color='b', align='center') ax.bar(x, z, width=w, color='g', align='center') ax.bar(x+w, k, width=w, color='r', align='center') ax.xaxis_date() ax.autoscale(tight=True) plt.show() ```", "best_answers_score":0.7492, "library_name":"matplotlib", "question_url":"https:\/\/stackoverflow.com\/questions\/14270391\/how-to-plot-multiple-bars-grouped", "best_answers_votes":159, "question_length":834, "response_length":830 }, { "question":"Command-line Unix ASCII-based charting \/ plotting tool Is there a good command-line UNIX charting \/ graphing \/ plotting tool out there? I'm looking for something that will plot xy points on an ASCII graph. Just to clarify, I'm looking for something that will output a graph in ASCII (like ascii-art style), so I can use it over an interactive shell session without needing X.", "response":"Try gnuplot. It has very powerful graphing possibilities. It can output to your terminal in the following way: ``` gnuplot> set terminal dumb Terminal type set to 'dumb' Options are 'feed 79 24' gnuplot> plot sin(x) 1 ++----------------**---------------+----**-----------+--------**-----++ + *+ * + * * + sin(x) ****** + 0.8 ++ * * * * * * ++ | * * * * * * | 0.6 ++ * * * * * * ++ * * * * * * * | 0.4 +* * * * * * * ++ |* * * * * * * | 0.2 +* * * * * * * ++ | * * * * * * * | 0 ++* * * * * * *++ | * * * * * * *| -0.2 ++ * * * * * * *+ | * * * * * * *| -0.4 ++ * * * * * * *+ | * * * * * * * -0.6 ++ * * * * * * ++ | * * * * * * | -0.8 ++ * * * * * * ++ + * * + * * + * * + -1 ++-----**---------+----------**----+---------------**+---------------++ -10 -5 0 5 10 ```", "best_answers_score":0.7481, "library_name":"matplotlib", "question_url":"https:\/\/stackoverflow.com\/questions\/123378\/command-line-unix-ascii-based-charting-plotting-tool", "best_answers_votes":166, "question_length":375, "response_length":766 }, { "question":"matplotlib set yaxis label size How can I change the size of only the yaxis 
label? Right now, I change the size of all labels using ```py pylab.rc('font', family='serif', size=40) ``` but in my case, I would like to make the y-axis label larger than the x-axis. However, I'd like to leave the tick labels alone. I've tried, for example: ```py pylab.gca().get_ylabel().set_fontsize(60) ``` but I only get: ```none AttributeError: 'str' object has no attribute 'set_fontsize' ``` So, obviously that doesn't work. I've seen lots of stuff for tick sizes, but nothing for the axis labels themselves.", "response":"If you are using the 'pylab' for interactive plotting you can set the labelsize at creation time with pylab.ylabel('Example', fontsize=40). If you use pyplot programmatically you can either set the fontsize on creation with ax.set_ylabel('Example', fontsize=40) or afterwards with ax.yaxis.label.set_size(40).", "best_answers_score":0.7477, "library_name":"matplotlib", "question_url":"https:\/\/stackoverflow.com\/questions\/10404759\/matplotlib-set-yaxis-label-size", "best_answers_votes":159, "question_length":594, "response_length":309 }, { "question":"Adding a matplotlib legend How can one create a legend for a line graph in Matplotlib's PyPlot without creating any extra variables? Please consider the graphing script below: ```py if __name__ == '__main__': PyPlot.plot(length, bubble, 'b-', length, ins, 'r-', length, merge_r, 'g+', length, merge_i, 'p-', ) PyPlot.title(\"Combined Statistics\") PyPlot.xlabel(\"Length of list (number)\") PyPlot.ylabel(\"Time taken (seconds)\") PyPlot.show() ``` As you can see, this is a very basic use of matplotlib's PyPlot. This generates the following graph: However, it is unclear which line is which. 
Thus, I need a legend; however, taking a look at the following example below (from the official site): ``` ax = subplot(1,1,1) p1, = ax.plot([1,2,3], label=\"line 1\") p2, = ax.plot([3,2,1], label=\"line 2\") p3, = ax.plot([2,3,1], label=\"line 3\") handles, labels = ax.get_legend_handles_labels() # reverse the order ax.legend(handles[::-1], labels[::-1]) # or sort them by labels import operator hl = sorted(zip(handles, labels), key=operator.itemgetter(1)) handles2, labels2 = zip(*hl) ax.legend(handles2, labels2) ``` You will see that I need to create an extra variable ax. How can I add a legend to my graph without having to create this extra variable and retaining the simplicity of my current script?", "response":"Add a label= to each of your plot() calls, and then call legend(loc='upper left'). Consider this sample (tested with Python 3.8.0): ``` import numpy as np import matplotlib.pyplot as plt x = np.linspace(0, 20, 1000) y1 = np.sin(x) y2 = np.cos(x) plt.plot(x, y1, \"-b\", label=\"sine\") plt.plot(x, y2, \"-r\", label=\"cosine\") plt.legend(loc=\"upper left\") plt.ylim(-1.5, 2.0) plt.show() ``` Slightly modified from this tutorial: http:\/\/jakevdp.github.io\/mpl_tutorial\/tutorial_pages\/tut1.html", "best_answers_score":0.7468, "library_name":"matplotlib", "question_url":"https:\/\/stackoverflow.com\/questions\/19125722\/adding-a-matplotlib-legend", "best_answers_votes":868, "question_length":1292, "response_length":484 }, { "question":"How do you set the absolute position of figure windows with matplotlib? I'm writing a simple Python application that uses matplotlib to display a few figures on screen. The number of figures generated is based on user input and changes throughout the application's life. The user has the ability to issue a \"plot\" command to generate a new figure window with the selected data series. 
In order to improve the user experience, I would like to provide another command that would programmatically arrange all open figure windows in some convenient arrangement (e.g. tile them across the available screen space). I believe to have found APIs that allow me to adjust the size of the figure window (in pixels), but haven't had any success in finding a way to set their absolute position on screen. Is there a way to do this without delving into the details of whatever backend is in use? I would like to do this in a backend-agnostic way so I can avoid relying upon implementation details that might change in the future.", "response":"Found the solution for QT backend: ``` import matplotlib.pyplot as plt fig, ax = plt.subplots() mngr = plt.get_current_fig_manager() # to put it into the upper left corner for example: mngr.window.setGeometry(50,100,640, 545) ``` If one doesn't know the x- and y-width one can read them out first, like so: ``` # get the QTCore PyRect object geom = mngr.window.geometry() x,y,dx,dy = geom.getRect() ``` and then set the new position with the same size: ``` mngr.window.setGeometry(newX, newY, dx, dy) ```", "best_answers_score":0.7466, "library_name":"matplotlib", "question_url":"https:\/\/stackoverflow.com\/questions\/7449585\/how-do-you-set-the-absolute-position-of-figure-windows-with-matplotlib", "best_answers_votes":57, "question_length":1015, "response_length":504 }, { "question":"Change title and colorbar text and tick colors I wanted to know how to change the color of the ticks in the colorbar and how to change the font color of the title and colorbar in a figure. 
For example, things obviously are visible in temp.png but not in temp2.png: ``` import matplotlib.pyplot as plt import numpy as np from numpy.random import randn fig = plt.figure() data = np.clip(randn(250,250),-1,1) cax = plt.imshow(data, interpolation='nearest') plt.title('my random fig') plt.colorbar() # works fine plt.savefig('temp.png') # title and colorbar ticks and text hidden plt.savefig('temp2.png', facecolor=\"black\", edgecolor=\"none\") ``` Thanks", "response":"The previous answer didn't give what I wanted. This is how I did it: ``` import matplotlib.pyplot as plt import numpy as np from numpy.random import randn data = np.clip(randn(250,250),-1,1) data = np.ma.masked_where(data > 0.5, data) fig, ax1 = plt.subplots(1,1) im = ax1.imshow(data, interpolation='nearest') cb = plt.colorbar(im) fg_color = 'white' bg_color = 'black' # IMSHOW # set title plus title color ax1.set_title('ax1 title', color=fg_color) # set figure facecolor ax1.patch.set_facecolor(bg_color) # set tick and ticklabel color im.axes.tick_params(color=fg_color, labelcolor=fg_color) # set imshow outline for spine in im.axes.spines.values(): spine.set_edgecolor(fg_color) # COLORBAR # set colorbar label plus label color cb.set_label('colorbar label', color=fg_color) # set colorbar tick color cb.ax.yaxis.set_tick_params(color=fg_color) # set colorbar edgecolor cb.outline.set_edgecolor(fg_color) # set colorbar ticklabels plt.setp(plt.getp(cb.ax.axes, 'yticklabels'), color=fg_color) fig.patch.set_facecolor(bg_color) plt.tight_layout() plt.show() #plt.savefig('save\/to\/pic.png', dpi=200, facecolor=bg_color) ```", "best_answers_score":0.7465, "library_name":"matplotlib", "question_url":"https:\/\/stackoverflow.com\/questions\/9662995\/change-title-and-colorbar-text-and-tick-colors", "best_answers_votes":52, "question_length":648, "response_length":1123 }, { "question":"gnuplot vs Matplotlib [closed] Closed. This question is opinion-based. It is not currently accepting answers.
Closed 4 years ago. I've started on a project graphing Tomcat logs using gnuplot-py, specifically correlating particular requests with memory allocation and garbage collection. What is the collective wisdom on gnuplot-py vs Matplotlib for Python graphing? Are there better graphing libraries out there I haven't heard of? My general considerations are: While gnuplot has large amounts of documentation, gnuplot-py doesn't. How good are the documentation and community for Matplotlib? Are there things which gnuplot can do, but gnuplot-py can't? Does Matplotlib have better Python support? Are there any big show-stopping bugs in either? Annoyances? Currently gnuplot is graphing 100,000s of points; I'm planning on scaling this up to millions. Should I expect problems? How well does Matplotlib handle this? Ease of use, turnaround time for gnuplot vs Matplotlib? How easy would it be to port existing gnuplot-py code to Matplotlib? How would you approach this task?", "response":"Matplotlib = ease of use, Gnuplot = (slightly better) performance I know this post is old and answered, but I was passing by and wanted to put in my two cents. Here is my conclusion: if you have a not-so-big data set, you should use Matplotlib. It's easier and looks better. However, if you really need performance, you could use Gnuplot. I've added some code so you can test it on your machine and see for yourself whether it makes a real difference (this is not a real performance benchmark, but it should give a first idea).
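As an aside, the measurement pattern used in the benchmark script below is nothing more exotic than timeit.default_timer wrapped around each plot-and-save call. Here is a stripped-down sketch of that harness with a stand-in work function (a hypothetical placeholder, so it runs without gnuplot or matplotlib installed):

```python
from timeit import default_timer as timer

def fake_plot_and_save(n):
    # Stand-in for mPlotAndSave / gPlotAndSave: any work whose cost grows with n
    return sum(i * i for i in range(n))

timings = []
for n in (1000, 2000, 4000):
    start = timer()
    fake_plot_and_save(n)
    end = timer()
    timings.append(end - start)  # one wall-clock measurement per problem size

print(len(timings))
```

Each (size, elapsed) pair is what gets plotted at the end of the real script.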
The following graph represents the required time (in seconds) to: Plot a random scatter graph Save the graph to a png file Configuration: gnuplot: 5.2.2 gnuplot-py: 1.8 matplotlib: 2.1.2 I remember the performance gap being much wider when running on an older computer with older versions of the libraries (~30 seconds difference for a large scatter plot). Moreover, as mentioned in the comments, you can get equivalent quality of plots. But you will have to put more sweat into that to do it with Gnuplot. Here's the code to generate the graph if you want to give it a try on your machine: ``` # -*- coding: utf-8 -*- from timeit import default_timer as timer import matplotlib.pyplot as plt import Gnuplot, Gnuplot.funcutils import numpy as np import sys import os def mPlotAndSave(x, y): plt.scatter(x, y) plt.savefig('mtmp.png') plt.clf() def gPlotAndSave(data, g): g(\"set output 'gtmp.png'\") g.plot(data) g(\"clear\") def cleanup(): try: os.remove('gtmp.png') except OSError: pass try: os.remove('mtmp.png') except OSError: pass begin = 2 end = 500000 step = 10000 numberOfPoints = range(begin, end, step) n = len(numberOfPoints) gnuplotTime = [] matplotlibTime = [] progressBarWidth = 30 # Init Gnuplot g = Gnuplot.Gnuplot() g(\"set terminal png size 640,480\") # Init matplotlib to avoid a peak in the beginning plt.clf() for idx, val in enumerate(numberOfPoints): # Print a nice progress bar (crucial) sys.stdout.write('\\r') progress = (idx+1)*progressBarWidth\/n bar = \"▕\" + \"▇\"*progress + \"▁\"*(progressBarWidth-progress) + \"▏\" + str(idx) + \"\/\" + str(n-1) sys.stdout.write(bar) sys.stdout.flush() # Generate random data x = np.random.randint(sys.maxint, size=val) y = np.random.randint(sys.maxint, size=val) gdata = zip(x,y) # Generate string call to a matplotlib plot and save, call it and save execution time start = timer() mPlotAndSave(x, y) end = timer() matplotlibTime.append(end - start) # Generate string call to a gnuplot plot and save, call it
and save execution time start = timer() gPlotAndSave(gdata, g) end = timer() gnuplotTime.append(end - start) # Clean up the files cleanup() del g sys.stdout.write('\\n') plt.plot(numberOfPoints, gnuplotTime, label=\"gnuplot\") plt.plot(numberOfPoints, matplotlibTime, label=\"matplotlib\") plt.legend(loc='upper right') plt.xlabel('Number of points in the scatter graph') plt.ylabel('Execution time (s)') plt.savefig('execution.png') plt.show() ```", "best_answers_score":0.7464, "library_name":"matplotlib", "question_url":"https:\/\/stackoverflow.com\/questions\/911655\/gnuplot-vs-matplotlib", "best_answers_votes":51, "question_length":1454, "response_length":2913 }, { "question":"Matplotlib returning a plot object I have a function that wraps pyplot.plt so I can quickly create graphs with oft-used defaults: ``` def plot_signal(time, signal, title='', xlab='', ylab='', line_width=1, alpha=1, color='k', subplots=False, show_grid=True, fig_size=(10, 5)): # Skipping a lot of other complexity here f, axarr = plt.subplots(figsize=fig_size) axarr.plot(time, signal, linewidth=line_width, alpha=alpha, color=color) axarr.set_xlim(min(time), max(time)) axarr.set_xlabel(xlab) axarr.set_ylabel(ylab) axarr.grid(show_grid) plt.suptitle(title, size=16) plt.show() ``` However, there are times where I'd want to be able to return the plot so I can manually add\/edit things for a specific graph. For example, I want to be able to change the axis labels, or add a second line to the plot after calling the function: ``` import numpy as np x = np.random.rand(100) y = np.random.rand(100) plot = plot_signal(np.arange(len(x)), x) plot.plt(y, 'r') plot.show() ``` I've seen a few questions on this (How to return a matplotlib.figure.Figure object from Pandas plot function? 
and AttributeError: 'Figure' object has no attribute 'plot') and as a result I've tried adding the following to the end of the function: return axarr return axarr.get_figure() return plt.axes() However, they all return a similar error: AttributeError: 'AxesSubplot' object has no attribute 'plt' What's the correct way to return a plot object so it can be edited later?", "response":"I think the error is pretty self-explanatory. There is no such thing as pyplot.plt, or similar. plt is the quasi-standard abbreviated form of pyplot when being imported, i.e., import matplotlib.pyplot as plt. Concerning the problem, the first approach, return axarr, is the most versatile one. You get an axis, or an array of axes, and can plot to it. The code may look like: ``` def plot_signal(x,y, ..., **kwargs): # Skipping a lot of other complexity here f, ax = plt.subplots(figsize=fig_size) ax.plot(x,y, ...) # further stuff return ax ax = plot_signal(x,y, ...) ax.plot(x2, y2, ...) plt.show() ```", "best_answers_score":0.7462, "library_name":"matplotlib", "question_url":"https:\/\/stackoverflow.com\/questions\/43925337\/matplotlib-returning-a-plot-object", "best_answers_votes":49, "question_length":1451, "response_length":603 }, { "question":"Plotting numerous disconnected line segments with different colors I have a set of data records like this: ``` (s1, t1), (u1, v1), color1 (s2, t2), (u2, v2), color2 . . . (sN, tN), (uN, vN), colorN ``` In any record, the first two values are the end-points of a line segment, the third value is the color of that line segment. More specifically, (sn, tn) are the x-y coordinates of the first end-point, (un, vn) are the x-y coordinates of the second end-point. Also, color is an RGB value with alpha. In general, any two line segments are disconnected (meaning that their end-points do not necessarily coincide). How can I plot this data using matplotlib with a single plot call (or as few as possible), as there could potentially be thousands of records?
Attempts Preparing the data in one big list and calling plot against it is way too slow. For example the following code couldn't finish in a reasonable amount of time: ``` import numpy as np import matplotlib.pyplot as plt data = [] for _ in xrange(60000): data.append((np.random.rand(), np.random.rand())) data.append((np.random.rand(), np.random.rand())) data.append('r') print 'now plotting...' # from now on, takes too long plt.plot(*data) print 'done' #plt.show() ``` I was able to speed-up the plot rendering by using the None insertion trick as follows: ``` import numpy as np import matplotlib.pyplot as plt from timeit import timeit N = 60000 _s = np.random.rand(N) _t = np.random.rand(N) _u = np.random.rand(N) _v = np.random.rand(N) x = [] y = [] for s, t, u, v in zip(_s, _t, _u, _v): x.append(s) x.append(u) x.append(None) y.append(t) y.append(v) y.append(None) print timeit(lambda:plt.plot(x, y), number=1) ``` This executes in under a second on my machine. I still have to figure out how to embed the color values (RGB with alpha channel).", "response":"use LineCollection: ``` import numpy as np import pylab as pl from matplotlib import collections as mc lines = [[(0, 1), (1, 1)], [(2, 3), (3, 3)], [(1, 2), (1, 3)]] c = np.array([(1, 0, 0, 1), (0, 1, 0, 1), (0, 0, 1, 1)]) lc = mc.LineCollection(lines, colors=c, linewidths=2) fig, ax = pl.subplots() ax.add_collection(lc) ax.autoscale() ax.margins(0.1) ``` here is the output:", "best_answers_score":0.7461, "library_name":"matplotlib", "question_url":"https:\/\/stackoverflow.com\/questions\/21352580\/plotting-numerous-disconnected-line-segments-with-different-colors", "best_answers_votes":124, "question_length":1805, "response_length":377 }, { "question":"Save a subplot in matplotlib Is it possible to save (to a png) an individual subplot in a matplotlib figure? 
Let's say I have ``` import matplotlib.pyplot as plt ax1 = plt.subplot(121) ax2 = plt.subplot(122) ax1.plot([1,2,3],[4,5,6]) ax2.plot([3,4,5],[7,8,9]) ``` Is it possible to save each of the two subplots to different files or at least copy them separately to a new figure to save them? I am using version 1.0.0 of matplotlib on RHEL 5.", "response":"While @Eli is quite correct that there usually isn't much of a need to do it, it is possible. savefig takes a bbox_inches argument that can be used to selectively save only a portion of a figure to an image. Here's a quick example: ``` import matplotlib.pyplot as plt import matplotlib as mpl import numpy as np # Make an example plot with two subplots... fig = plt.figure() ax1 = fig.add_subplot(2,1,1) ax1.plot(range(10), 'b-') ax2 = fig.add_subplot(2,1,2) ax2.plot(range(20), 'r^') # Save the full figure... fig.savefig('full_figure.png') # Save just the portion _inside_ the second axis's boundaries extent = ax2.get_window_extent().transformed(fig.dpi_scale_trans.inverted()) fig.savefig('ax2_figure.png', bbox_inches=extent) # Pad the saved area by 10% in the x-direction and 20% in the y-direction fig.savefig('ax2_figure_expanded.png', bbox_inches=extent.expanded(1.1, 1.2)) ``` The full figure: Area inside the second subplot: Area around the second subplot padded by 10% in the x-direction and 20% in the y-direction:", "best_answers_score":0.7451, "library_name":"matplotlib", "question_url":"https:\/\/stackoverflow.com\/questions\/4325733\/save-a-subplot-in-matplotlib", "best_answers_votes":168, "question_length":443, "response_length":1027 }, { "question":"pyplot scatter plot marker size In the pyplot document for scatter plot: ```py matplotlib.pyplot.scatter(x, y, s=20, c='b', marker='o', cmap=None, norm=None, vmin=None, vmax=None, alpha=None, linewidths=None, faceted=True, verts=None, hold=None, **kwargs) ``` The marker size s: size in points^2. It is a scalar or an array of the same length as x and y.
What kind of unit is points^2? What does it mean? Does s=100 mean 10 pixel x 10 pixel? Basically I'm trying to make scatter plots with different marker sizes, and I want to figure out what does the s number mean.", "response":"This can be a somewhat confusing way of defining the size but you are basically specifying the area of the marker. This means, to double the width (or height) of the marker you need to increase s by a factor of 4. [because A = WH => (2W)(2H)=4A] There is a reason, however, that the size of markers is defined in this way. Because of the scaling of area as the square of width, doubling the width actually appears to increase the size by more than a factor 2 (in fact it increases it by a factor of 4). To see this consider the following two examples and the output they produce. ```py # doubling the width of markers x = [0,2,4,6,8,10] y = [0]*len(x) s = [20*4**n for n in range(len(x))] plt.scatter(x,y,s=s) plt.show() ``` gives Notice how the size increases very quickly. If instead we have ```py # doubling the area of markers x = [0,2,4,6,8,10] y = [0]*len(x) s = [20*2**n for n in range(len(x))] plt.scatter(x,y,s=s) plt.show() ``` gives Now the apparent size of the markers increases roughly linearly in an intuitive fashion. As for the exact meaning of what a 'point' is, it is fairly arbitrary for plotting purposes, you can just scale all of your sizes by a constant until they look reasonable. Edit: (In response to comment from @Emma) It's probably confusing wording on my part. The question asked about doubling the width of a circle so in the first picture for each circle (as we move from left to right) it's width is double the previous one so for the area this is an exponential with base 4. Similarly the second example each circle has area double the last one which gives an exponential with base 2. However it is the second example (where we are scaling area) that doubling area appears to make the circle twice as big to the eye. 
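To make the area-to-width relationship concrete, here is a quick arithmetic check (pure standard library, no plotting needed): since s is an area in points^2, the implied marker width in points grows as the square root of s, so quadrupling s only doubles the width:

```python
import math

s = 20.0                      # marker area in points^2, as passed to scatter(s=...)
width = math.sqrt(s)          # edge length implied by that area
width_quadrupled = math.sqrt(4 * s)

ratio = width_quadrupled / width
print(ratio)  # 2.0: four times the area is only twice the width
```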
Thus if we want a circle to appear a factor of n bigger we would increase the area by a factor n not the radius so the apparent size scales linearly with the area. Edit to visualize the comment by @TomaszGandor: This is what it looks like for different functions of the marker size: ```py x = [0,2,4,6,8,10,12,14,16,18] s_exp = [20*2**n for n in range(len(x))] s_square = [20*n**2 for n in range(len(x))] s_linear = [20*n for n in range(len(x))] plt.scatter(x,[1]*len(x),s=s_exp, label='$s=2^n$', lw=1) plt.scatter(x,[0]*len(x),s=s_square, label='$s=n^2$') plt.scatter(x,[-1]*len(x),s=s_linear, label='$s=n$') plt.ylim(-1.5,1.5) plt.legend(loc='center left', bbox_to_anchor=(1.1, 0.5), labelspacing=3) plt.show() ```", "best_answers_score":0.745, "library_name":"matplotlib", "question_url":"https:\/\/stackoverflow.com\/questions\/14827650\/pyplot-scatter-plot-marker-size", "best_answers_votes":703, "question_length":567, "response_length":2467 }, { "question":"Change the relative size of a subplot I have two plots ``` import matplotlib.pyplot as plt plt.subplot(121) plt.subplot(122) ``` I want plt.subplot(122) to be half as wide as plt.subplot(121). Is there a straightforward way to set the height and width parameters for a subplot?", "response":"See the grid-spec tutorial: http:\/\/matplotlib.sourceforge.net\/users\/gridspec.html Example code: ``` import matplotlib.pyplot as plt import matplotlib.gridspec as gridspec f = plt.figure() gs = gridspec.GridSpec(1, 2,width_ratios=[2,1]) ax1 = plt.subplot(gs[0]) ax2 = plt.subplot(gs[1]) plt.show() ``` You can also adjust the height ratio using a similar option in GridSpec", "best_answers_score":0.7447, "library_name":"matplotlib", "question_url":"https:\/\/stackoverflow.com\/questions\/5083763\/change-the-relative-size-of-a-subplot", "best_answers_votes":55, "question_length":277, "response_length":372 }, { "question":"How to make inline plots in Jupyter Notebook larger? 
[duplicate] This question already has answers here: How do I change the size of figures drawn with Matplotlib? (16 answers) How to change the figure size of a seaborn axes or figure level plot (14 answers) Closed 3 years ago. I have made my plots inline on my Ipython Notebook with \"%matplotlib inline.\" Now, the plot appears. However, it is very small. Is there a way to make it appear larger using either notebook settings or plot settings?", "response":"The default figure size (in inches) is controlled by ``` matplotlib.rcParams['figure.figsize'] = [width, height] ``` For example: ``` import matplotlib.pyplot as plt plt.rcParams['figure.figsize'] = [10, 5] ``` creates a figure with 10 (width) x 5 (height) inches", "best_answers_score":0.7446, "library_name":"matplotlib", "question_url":"https:\/\/stackoverflow.com\/questions\/36367986\/how-to-make-inline-plots-in-jupyter-notebook-larger", "best_answers_votes":587, "question_length":495, "response_length":263 }, { "question":"Split title of a figure in matplotlib into multiple lines I use matplotlib to create a figure with 4 sub-plots in it. I would like to split one of my title of a subplot, such that each line would be in the centered with respect to subplot. I tried ``` import matplotlib.pylab as plt fig = plt.figure(num=0,figsize=(8.27, 11.69), dpi=300) ax = fig.add_subplot(2, 2, 1) ax.set_title(r'Normalized occupied \\\\ Neighbors') ``` and what I get is that Neighbors is indented to the left side. 
How could I correct this?", "response":"I get the correct alignment when I format the string this way: ``` import matplotlib.pylab as plt fig = plt.figure()#num=0,figsize=(8.27, 11.69), dpi=300) ax = fig.add_subplot(2, 2, 1) ax.set_title('Normalized occupied \\n Neighbors') plt.show() ```", "best_answers_score":0.7446, "library_name":"matplotlib", "question_url":"https:\/\/stackoverflow.com\/questions\/8598163\/split-title-of-a-figure-in-matplotlib-into-multiple-lines", "best_answers_votes":127, "question_length":510, "response_length":248 }, { "question":"Increase tick label font size in seaborn I have a huge problem with my seaborn plots. For some reason, the numbers along the axis are printed with a really small font, which makes them unreadable. I've tried to scale them with ``` with plt.rc_context(dict(sns.axes_style(\"whitegrid\"), **sns.plotting_context(font_scale=5))): b = sns.violinplot(y=\"Draughts\", data=dr) ``` To no help, this only makes the axis text larger, but not the number along the axis.", "response":"The answer from here makes fonts larger in seaborn ... ``` import pandas as pd, numpy as np, seaborn as sns from matplotlib import pyplot as plt # Generate data df = pd.DataFrame({\"Draughts\": np.random.randn(100)}) # Plot using seaborn sns.set(font_scale = 2) b = sns.violinplot(y = \"Draughts\", data = df) plt.show() ```", "best_answers_score":0.7439, "library_name":"matplotlib", "question_url":"https:\/\/stackoverflow.com\/questions\/42404154\/increase-tick-label-font-size-in-seaborn", "best_answers_votes":61, "question_length":455, "response_length":320 }, { "question":"stacked bar plot using matplotlib I am generating bar plots using matplotlib and it looks like there is a bug with the stacked bar plot. The sum for each vertical stack should be 100. However, for X-AXIS ticks 65, 70, 75 and 80 we get completely arbitrary results which do not make any sense. I do not understand what the problem is. Please find the MWE below. 
``` import numpy as np import matplotlib.pyplot as plt import matplotlib header = ['a','b','c','d'] dataset= [('60.0', '65.0', '70.0', '75.0', '80.0', '85.0', '90.0', '95.0', '100.0', '105.0', '110.0', '115.0', '120.0', '125.0', '130.0', '135.0', '140.0', '145.0', '150.0', '155.0', '160.0', '165.0', '170.0', '175.0', '180.0', '185.0', '190.0', '195.0', '200.0'), (0.0, 25.0, 48.93617021276596, 83.01886792452831, 66.66666666666666, 66.66666666666666, 70.96774193548387, 84.61538461538461, 93.33333333333333, 85.0, 92.85714285714286, 93.75, 95.0, 100.0, 100.0, 100.0, 100.0, 80.0, 100.0, 100.0, 100.0, 100.0, 100.0, 100.0, 100.0, 100.0, 100.0, 100.0, 100.0), (0.0, 50.0, 36.17021276595745, 11.320754716981133, 26.666666666666668, 33.33333333333333, 29.03225806451613, 15.384615384615385, 6.666666666666667, 15.0, 7.142857142857142, 6.25, 5.0, 0.0, 0.0, 0.0, 0.0, 20.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0), (0.0, 12.5, 10.638297872340425, 3.7735849056603774, 4.444444444444445, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0), (100.0, 12.5, 4.25531914893617, 1.8867924528301887, 2.2222222222222223, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0)] X_AXIS = dataset[0] matplotlib.rc('font', serif='Helvetica Neue') matplotlib.rc('text', usetex='false') matplotlib.rcParams.update({'font.size': 40}) fig = matplotlib.pyplot.gcf() fig.set_size_inches(18.5, 10.5) configs = dataset[0] N = len(configs) ind = np.arange(N) width = 0.4 p1 = plt.bar(ind, dataset[1], width, color='r') p2 = plt.bar(ind, dataset[2], width, bottom=dataset[1], color='b') p3 = plt.bar(ind, dataset[3], width, bottom=dataset[2], color='g') p4 = plt.bar(ind, dataset[4], width, bottom=dataset[3], color='c') plt.ylim([0,120]) plt.yticks(fontsize=12) plt.ylabel(output, fontsize=12) plt.xticks(ind, X_AXIS, fontsize=12, rotation=90) plt.xlabel('test', fontsize=12) 
plt.legend((p1[0], p2[0], p3[0], p4[0]), (header[0], header[1], header[2], header[3]), fontsize=12, ncol=4, framealpha=0, fancybox=True) plt.show() ```", "response":"I found this such a pain that I wrote a function to do it. I'm sharing it in the hope that others find it useful: ``` def plot_stacked_bar(data, series_labels, category_labels=None, show_values=False, value_format=\"{}\", y_label=None, colors=None, grid=True, reverse=False): \"\"\"Plots a stacked bar chart with the data and labels provided. Keyword arguments: data -- 2-dimensional numpy array or nested list containing data for each series in rows series_labels -- list of series labels (these appear in the legend) category_labels -- list of category labels (these appear on the x-axis) show_values -- If True then numeric value labels will be shown on each bar value_format -- Format string for numeric value labels (default is \"{}\") y_label -- Label for y-axis (str) colors -- List of color labels grid -- If True display grid reverse -- If True reverse the order that the series are displayed (left-to-right or right-to-left) \"\"\" ny = len(data[0]) ind = list(range(ny)) axes = [] cum_size = np.zeros(ny) data = np.array(data) if reverse: data = np.flip(data, axis=1) category_labels = reversed(category_labels) for i, row_data in enumerate(data): color = colors[i] if colors is not None else None p = plt.bar(ind, row_data, bottom=cum_size, label=series_labels[i], color=color) cum_size += row_data if show_values: plt.bar_label(p, label_type='center', fmt=value_format) if category_labels: plt.xticks(ind, category_labels) if y_label: plt.ylabel(y_label) plt.legend() if grid: plt.grid() ``` Example: ``` plt.figure(figsize=(6, 4)) series_labels = ['Series 1', 'Series 2'] data = [ [0.2, 0.3, 0.35, 0.3], [0.8, 0.7, 0.6, 0.5] ] category_labels = ['Cat A', 'Cat B', 'Cat C', 'Cat D'] plot_stacked_bar( data, series_labels, category_labels=category_labels, show_values=True, value_format=\"{:.1f}\", colors=['tab:orange', 
'tab:green'], y_label=\"Quantity (units)\" ) plt.tight_layout() plt.savefig('bar.png') plt.show() ```", "best_answers_score":0.7429, "library_name":"matplotlib", "question_url":"https:\/\/stackoverflow.com\/questions\/44309507\/stacked-bar-plot-using-matplotlib", "best_answers_votes":57, "question_length":2509, "response_length":1920 }, { "question":"How do I change the range of the x-axis with datetime? I'm trying to plot a graph of dates on the x-axis and values on the y-axis. It works fine, except that I can't get the range of the x-axis to be appropriate. The x-axis range is always Jan 2012 to Jan 2016, despite my dates being from today. I am even specifying that xlim should be the first and last date. I'm writing this for python-django, if that's relevant. ``` import datetime import matplotlib.pyplot as plt x = [datetime.date(2014, 1, 29), datetime.date(2014, 1, 29), datetime.date(2014, 1, 29)] y = [2, 4, 1] fig, ax = plt.subplots() ax.plot_date(x, y) ax.set_xlim([x[0], x[-1]]) canvas = FigureCanvas(plt.figure(1)) response = HttpResponse(content_type='image\/png') canvas.print_png(response) return response ``` And here is the output:", "response":"Edit: Having seen actual data from the OP, all of the values are at the same date\/time. So matplotlib is automatically zooming the x-axis out. 
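Under the hood, date axis limits are plain floats produced by matplotlib.dates.date2num (a count of days since matplotlib's epoch), which is why a zero-span set of dates can get inflated into a multi-year window. A small sketch of that conversion (assuming only matplotlib is installed; no display is needed):

```python
import datetime
from matplotlib.dates import date2num, num2date

# Dates become float day counts; the axis limits are exactly these numbers
d0 = date2num(datetime.date(2014, 1, 26))
d1 = date2num(datetime.date(2014, 2, 1))

span = d1 - d0
print(span)                 # 6.0 days between the two limits
print(num2date(d0).date())  # round-trips to 2014-01-26
```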
You can still manually set the x-axis limits with datetime objects If I do something like this on matplotlib v1.3.1: ``` import datetime import matplotlib.pyplot as plt x = [datetime.date(2014, 1, 29)] * 3 y = [2, 4, 1] fig, ax = plt.subplots() ax.plot_date(x, y, markerfacecolor='CornflowerBlue', markeredgecolor='white') fig.autofmt_xdate() ax.set_xlim([datetime.date(2014, 1, 26), datetime.date(2014, 2, 1)]) ax.set_ylim([0, 5]) ``` I get: And the axes limits match the dates that I specified.", "best_answers_score":0.7426, "library_name":"matplotlib", "question_url":"https:\/\/stackoverflow.com\/questions\/21423158\/how-do-i-change-the-range-of-the-x-axis-with-datetime", "best_answers_votes":79, "question_length":802, "response_length":639 }, { "question":"Y-Axis ticks on Right Side of Plot I have a simple line plot and need to move the y-axis ticks from the (default) left side of the plot to the right side. Any thoughts on how to do this?", "response":"Use ax.yaxis.tick_right() for example: ``` from matplotlib import pyplot as plt f = plt.figure() ax = f.add_subplot(111) ax.yaxis.tick_right() plt.plot([2,3,4,5]) plt.show() ```", "best_answers_score":0.7424, "library_name":"matplotlib", "question_url":"https:\/\/stackoverflow.com\/questions\/10354397\/y-axis-ticks-on-right-side-of-plot", "best_answers_votes":266, "question_length":186, "response_length":177 }, { "question":"Dynamically updating plot in matplotlib I am making an application in Python which collects data from a serial port and plots a graph of the collected data against arrival time. The time of arrival for the data is uncertain. I want the plot to be updated when data is received. I searched on how to do this and found two methods: Clear the plot and re-draw the plot with all the points again. Animate the plot by changing it after a particular interval. I do not prefer the first one as the program runs and collects data for a long time (a day for example), and redrawing the plot will be pretty slow. 
The second one is also not preferable as time of arrival of data is uncertain and I want the plot to update only when the data is received. Is there a way in which I can update the plot just by adding more points to it only when the data is received?", "response":"Is there a way in which I can update the plot just by adding more point[s] to it... There are a number of ways of animating data in matplotlib, depending on the version you have. Have you seen the animation examples in the matplotlib documentation? The animation API defines a function FuncAnimation which animates a function in time. This function could just be the function you use to acquire your data. Each method basically sets the data property of the object being drawn, so doesn't require clearing the screen or figure. The data property can simply be extended, so you can keep the previous points and just keep adding to your line (or image or whatever you are drawing). Given that you say that your data arrival time is uncertain your best bet is probably just to do something like: ``` import matplotlib.pyplot as plt import numpy hl, = plt.plot([], []) def update_line(hl, new_data): hl.set_xdata(numpy.append(hl.get_xdata(), new_data)) hl.set_ydata(numpy.append(hl.get_ydata(), new_data)) plt.draw() ``` Then when you receive data from the serial port just call update_line.", "best_answers_score":0.7421, "library_name":"matplotlib", "question_url":"https:\/\/stackoverflow.com\/questions\/10944621\/dynamically-updating-plot-in-matplotlib", "best_answers_votes":175, "question_length":853, "response_length":1087 }, { "question":"How to remove the duplicate legend when overlaying boxplot and stripplot One of the coolest things you can easily make in seaborn is boxplot + stripplot combination: ``` import matplotlib.pyplot as plt import seaborn as sns import pandas as pd tips = sns.load_dataset(\"tips\") sns.stripplot(x=\"day\", y=\"total_bill\", hue=\"smoker\", data=tips, jitter=True, palette=\"Set2\", 
dodge=True,linewidth=1,edgecolor='gray') sns.boxplot(x=\"day\", y=\"total_bill\", hue=\"smoker\", data=tips,palette=\"Set2\",fliersize=0) plt.legend(bbox_to_anchor=(1.05, 1), loc=2, borderaxespad=0.); ``` Unfortunately, as you can see above, it produced double legend, one for boxplot, one for stripplot. Obviously, it looks ridiculous and redundant. But I cannot seem to find a way to get rid of stripplot legend and only leave boxplot legend. Probably, I can somehow delete items from plt.legend, but I cannot find it in the documentation.", "response":"You can get what handles\/labels should exist in the legend before you actually draw the legend itself. You then draw the legend only with the specific ones you want. ```py import matplotlib.pyplot as plt import seaborn as sns import pandas as pd tips = sns.load_dataset(\"tips\") sns.stripplot(x=\"day\", y=\"total_bill\", hue=\"smoker\", data=tips, jitter=True, palette=\"Set2\", dodge=True, linewidth=1, edgecolor='gray') # Get the ax object to use later. ax = sns.boxplot(x=\"day\", y=\"total_bill\", hue=\"smoker\", data=tips, palette=\"Set2\", fliersize=0) # Get the handles and labels. For this example it'll be 2 tuples # of length 4 each. handles, labels = ax.get_legend_handles_labels() # When creating the legend, only use the first two elements # to effectively remove the last two. l = plt.legend(handles[0:2], labels[0:2], bbox_to_anchor=(1.05, 1), loc=2, borderaxespad=0.) 
```", "best_answers_score":0.742, "library_name":"matplotlib", "question_url":"https:\/\/stackoverflow.com\/questions\/35538882\/how-to-remove-the-duplicate-legend-when-overlaying-boxplot-and-stripplot", "best_answers_votes":57, "question_length":902, "response_length":872 }, { "question":"How to set opacity of background colour of graph with Matplotlib I've been playing around with Matplotlib and I can't figure out how to change the background colour of the graph, or how to make the background completely transparent.", "response":"If you just want the entire background for both the figure and the axes to be transparent, you can simply specify transparent=True when saving the figure with fig.savefig. e.g.: ``` import matplotlib.pyplot as plt fig = plt.figure() plt.plot(range(10)) fig.savefig('temp.png', transparent=True) ``` If you want more fine-grained control, you can simply set the facecolor and\/or alpha values for the figure and axes background patch. (To make a patch completely transparent, we can either set the alpha to 0, or set the facecolor to 'none' (as a string, not the object None!)) e.g.: ``` import matplotlib.pyplot as plt fig = plt.figure() fig.patch.set_facecolor('blue') fig.patch.set_alpha(0.7) ax = fig.add_subplot(111) ax.plot(range(10)) ax.patch.set_facecolor('red') ax.patch.set_alpha(0.5) # If we don't specify the edgecolor and facecolor for the figure when # saving with savefig, it will override the value we set earlier! fig.savefig('temp.png', facecolor=fig.get_facecolor(), edgecolor='none') plt.show() ```", "best_answers_score":0.7418, "library_name":"matplotlib", "question_url":"https:\/\/stackoverflow.com\/questions\/4581504\/how-to-set-opacity-of-background-colour-of-graph-with-matplotlib", "best_answers_votes":161, "question_length":232, "response_length":1016 }, { "question":"Visualization of scatter plots with overlapping points in matplotlib I have to represent about 30,000 points in a scatter plot in matplotlib. 
These points belong to two different classes, so I want to depict them with different colors. I succeeded in doing so, but there is an issue. The points overlap in many regions, and the class that I plot last is drawn on top of the other one, hiding it. Furthermore, with a scatter plot it is not possible to show how many points lie in each region. I have also tried to make a 2d histogram with histogram2d and imshow, but it's difficult to show the points belonging to both classes in a clear way. Can you suggest a way to make clear both the distribution of the classes and the concentration of the points? EDIT: To be clearer, this is the link to my data file in the format \"x,y,class\"", "response":"You could also colour the points by first computing a kernel density estimate of the distribution of the scatter, and using the density values to specify a colour for each point of the scatter. To modify the code in the earlier example: ``` import numpy as np import matplotlib.pyplot as plt from scipy.stats import gaussian_kde as kde from matplotlib.colors import Normalize from matplotlib import cm N = 10000 mean = [0,0] cov = [[2,2],[0,2]] samples = np.random.multivariate_normal(mean,cov,N).T densObj = kde( samples ) def makeColours( vals ): colours = np.zeros( (len(vals),3) ) norm = Normalize( vmin=vals.min(), vmax=vals.max() ) #Can put any colormap you like here. colours = [cm.ScalarMappable( norm=norm, cmap='jet').to_rgba( val ) for val in vals] return colours colours = makeColours( densObj.evaluate( samples ) ) plt.scatter( samples[0], samples[1], color=colours ) plt.show() ``` I learnt this trick a while ago when I noticed the documentation of the scatter function -- ``` c : color or sequence of color, optional, default : 'b' ``` c can be a single color format string, or a sequence of color specifications of length N, or a sequence of N numbers to be mapped to colors using the cmap and norm specified via kwargs (see below).
Note that c should not be a single numeric RGB or RGBA sequence because that is indistinguishable from an array of values to be colormapped. c can, however, be a 2-D array in which the rows are RGB or RGBA, including the case of a single row to specify the same color for all points.", "best_answers_score":0.7412, "library_name":"matplotlib", "question_url":"https:\/\/stackoverflow.com\/questions\/19064772\/visualization-of-scatter-plots-with-overlapping-points-in-matplotlib", "best_answers_votes":46, "question_length":847, "response_length":1533 }, { "question":"How do I plot a step function? This should be easy, but I have just started toying with matplotlib and Python.
I can do a line or a scatter plot, but I am not sure how to do a simple step function. Any help is much appreciated. ``` x = 1,2,3,4 y = 0.002871972681775004, 0.00514787917410944, 0.00863476098280219, 0.012003316194034325 ```", "response":"It seems like you want step. E.g. ``` import matplotlib.pyplot as plt x = [1,2,3,4] y = [0.002871972681775004, 0.00514787917410944, 0.00863476098280219, 0.012003316194034325] plt.step(x, y) plt.show() ```", "best_answers_score":0.7406, "library_name":"matplotlib", "question_url":"https:\/\/stackoverflow.com\/questions\/8921296\/how-do-i-plot-a-step-function", "best_answers_votes":72, "question_length":334, "response_length":204 }, { "question":"Row and column headers in matplotlib's subplots What's the best practice for adding a row and a column header to a grid of subplots generated in a loop in matplotlib? I can think of a couple of approaches, but they are not particularly neat: For columns, with a counter in your loop you can use set_title() for the first row only. For rows this doesn't work; you would have to draw text outside of the plots. You add an extra row of subplots on top and an extra column of subplots on the left, and draw text in the middle of that subplot. Can you suggest a better alternative?", "response":"There are several ways to do this. The easy way is to exploit the y-labels and titles of the plot and then use fig.tight_layout() to make room for the labels. Alternatively, you can place additional text in the right location with annotate and then make room for it semi-manually. If you don't have y-labels on your axes, it's easy to exploit the title and y-label of the first row and column of axes.
``` import matplotlib.pyplot as plt cols = ['Column {}'.format(col) for col in range(1, 4)] rows = ['Row {}'.format(row) for row in ['A', 'B', 'C', 'D']] fig, axes = plt.subplots(nrows=4, ncols=3, figsize=(12, 8)) for ax, col in zip(axes[0], cols): ax.set_title(col) for ax, row in zip(axes[:,0], rows): ax.set_ylabel(row, rotation=0, size='large') fig.tight_layout() plt.show() ``` If you do have y-labels, or if you prefer a bit more flexibility, you can use annotate to place the labels. This is more complicated, but allows you to have individual plot titles, ylabels, etc. in addition to the row and column labels. ``` import matplotlib.pyplot as plt from matplotlib.transforms import offset_copy cols = ['Column {}'.format(col) for col in range(1, 4)] rows = ['Row {}'.format(row) for row in ['A', 'B', 'C', 'D']] fig, axes = plt.subplots(nrows=4, ncols=3, figsize=(12, 8)) plt.setp(axes.flat, xlabel='X-label', ylabel='Y-label') pad = 5 # in points for ax, col in zip(axes[0], cols): ax.annotate(col, xy=(0.5, 1), xytext=(0, pad), xycoords='axes fraction', textcoords='offset points', size='large', ha='center', va='baseline') for ax, row in zip(axes[:,0], rows): ax.annotate(row, xy=(0, 0.5), xytext=(-ax.yaxis.labelpad - pad, 0), xycoords=ax.yaxis.label, textcoords='offset points', size='large', ha='right', va='center') fig.tight_layout() # tight_layout doesn't take these labels into account. We'll need # to make some room. These numbers are manually tweaked. # You could automatically calculate them, but it's a pain. fig.subplots_adjust(left=0.15, top=0.95) plt.show() ```", "best_answers_score":0.7405, "library_name":"matplotlib", "question_url":"https:\/\/stackoverflow.com\/questions\/25812255\/row-and-column-headers-in-matplotlibs-subplots", "best_answers_votes":204, "question_length":549, "response_length":1992 }, { "question":"unique plot marker for each plot I have a loop where I create some plots, and I need a unique marker for each plot.
I thought about creating a function that returns a random symbol and using it in my program this way: ``` for i in xrange(len(y)): plt.plot(x, y[i], randomMarker()) ``` but I don't think this is a good approach. I only need this to distinguish the plots in the legend, because the plots must not be connected with lines; they must just be sets of dots.", "response":"itertools.cycle will iterate over a list or tuple indefinitely. This is preferable to a function which randomly picks markers for you. Python 2.x ``` import itertools marker = itertools.cycle((',', '+', '.', 'o', '*')) for n in y: plt.plot(x,n, marker = marker.next(), linestyle='') ``` Python 3.x ``` import itertools marker = itertools.cycle((',', '+', '.', 'o', '*')) for n in y: plt.plot(x,n, marker = next(marker), linestyle='') ``` You can use that to produce a plot like this (Python 2.x): ``` import numpy as np import matplotlib.pyplot as plt import itertools x = np.linspace(0,2,10) y = np.sin(x) marker = itertools.cycle((',', '+', '.', 'o', '*')) fig = plt.figure() ax = fig.add_subplot(111) for q,p in zip(x,y): ax.plot(q,p, linestyle = '', marker=marker.next()) plt.show() ```", "best_answers_score":0.7401, "library_name":"matplotlib", "question_url":"https:\/\/stackoverflow.com\/questions\/13091649\/unique-plot-marker-for-each-plot", "best_answers_votes":95, "question_length":447, "response_length":790 }, { "question":"Change main plot legend label text So far I have been able to label the subplots just fine, but I'm having an issue with the main one.
Here's the relevant part of my code: ``` data_BS_P = data[channels[0]] data_BS_R = data[channels[1]] data_BS_Y = data[channels[2]] plot_BS_P = data_BS_P.plot() #data_BS_P is a pandas dataframe axBS = plot_BS_P.gca() axBS.plot(data_BS_R, label='Roll') axBS.plot(data_BS_Y, label='Yaw') axBS.set_ylabel('Amplitude (urad)') axBS.legend(loc='upper center', bbox_to_anchor=(0.5, 1.05), ncol=3, fancybox=True, shadow=True) ml1 = MultipleLocator(10) ml2 = MultipleLocator(3600) axBS.yaxis.set_minor_locator(ml1) axBS.xaxis.set_minor_locator(ml2) plot_BS_P.save('L1-SUS-BS_M1_DAMP_PRY_INMON.jpg') ``` And this is what I have so far: Notice the lengthy label for the blue line. I'd like that to be labeled as \"Pitch\" instead of the file name. In which line can I do that?", "response":"You need to get access to the legend() object and use set_text() to change the text values; a simple example: ``` plt.plot(range(10), label='Some very long label') plt.plot(range(1,11), label='Short label') L=plt.legend() L.get_texts()[0].set_text('make it short') plt.savefig('temp.png') ``` In your case, you are changing the first item in the legend; I am quite sure the 0 index in L.get_texts()[0] applies to your problem too.", "best_answers_score":0.74, "library_name":"matplotlib", "question_url":"https:\/\/stackoverflow.com\/questions\/23037548\/change-main-plot-legend-label-text", "best_answers_votes":108, "question_length":896, "response_length":431 }, { "question":"jupyter notebook inline plots as svg By default jupyter notebook inline plots are displayed as png, e.g.: ``` import matplotlib.pyplot as plt %matplotlib inline plt.plot() ``` How can you configure jupyter notebooks to display matplotlib inline plots as svg?", "response":"%config InlineBackend.figure_formats = ['svg'] does the trick.
A minimal example is: ```py %config InlineBackend.figure_formats = ['svg'] import matplotlib.pyplot as plt %matplotlib inline plt.plot() ```", "best_answers_score":0.7396, "library_name":"matplotlib", "question_url":"https:\/\/stackoverflow.com\/questions\/36622237\/jupyter-notebook-inline-plots-as-svg", "best_answers_votes":74, "question_length":258, "response_length":203 }, { "question":"Display multiple images in subplots How do I use the matplotlib function plt.imshow(image) to display multiple images? For example my code is as follows: ``` for file in images: process(file) def process(filename): image = mpimg.imread(filename) plt.imshow(image) ``` My results show that only the last processed image is shown, effectively overwriting the other images.", "response":"To display multiple images, use subplots(): ``` plt.figure() # subplots(r, c) provides the number of rows and columns f, axarr = plt.subplots(4,1) # use the created array to output your multiple images. In this case I have stacked 4 images vertically axarr[0].imshow(v_slice[0]) axarr[1].imshow(v_slice[1]) axarr[2].imshow(v_slice[2]) axarr[3].imshow(v_slice[3]) ```", "best_answers_score":0.7395, "library_name":"matplotlib", "question_url":"https:\/\/stackoverflow.com\/questions\/41210823\/display-multiple-images-in-subplots", "best_answers_votes":64, "question_length":365, "response_length":360 }, { "question":"RuntimeWarning: invalid value encountered in divide I have to make a program using Euler's method for the \"ball in a spring\" model ``` from pylab import* from math import* m=0.1 Lo=1 tt=30 k=200 t=20 g=9.81 dt=0.01 n=int((ceil(t\/dt))) km=k\/m r0=[-5,5*sqrt(3)] v0=[-5,5*sqrt(3)] a=zeros((n,2)) r=zeros((n,2)) v=zeros((n,2)) t=zeros((n,2)) r[1,:]=r0 v[1,:]=v0 for i in range(n-1): rr=dot(r[i,:],r[i,:])**0.5 a=-g+km*cos(tt)*(rr-L0)*r[i,:]\/rr v[i+1,:]=v[i,:]+a*dt r[i+1,:]=r[i,:]+v[i+1,:]*dt t[i+1]=t[i]+dt #print norm(r[i,:]) plot(r[:,0],r[:,1]) xlim(-100,100) ylim(-100,100) xlabel('x [m]') ylabel('y [m]')
show() ``` I keep getting this error: ``` a=-g+km*cos(tt)*(rr-L0)*r[i,:]\/rr RuntimeWarning: invalid value encountered in divide ``` I can't figure out what is wrong with the code.", "response":"I think your code is trying to \"divide by zero\" or \"divide by NaN\". If you are aware of that and don't want it to bother you, then you can try: ``` import numpy as np np.seterr(divide='ignore', invalid='ignore') ``` For more details see: http:\/\/docs.scipy.org\/doc\/numpy\/reference\/generated\/numpy.seterr.html", "best_answers_score":0.7394, "library_name":"matplotlib", "question_url":"https:\/\/stackoverflow.com\/questions\/14861891\/runtimewarning-invalid-value-encountered-in-divide", "best_answers_votes":255, "question_length":789, "response_length":307 }, { "question":"Imshow subplots with the same colorbar I want to make 4 imshow subplots, but all of them share the same colormap. Matplotlib automatically adjusts the scale on the colormap depending on the entries of the matrices. For example, if one of my matrices has all entries as 10 and the other one has all entries equal to 5 and I use the Greys colormap, then one of my subplots should be completely black and the other one should be completely grey. But both of them end up becoming completely black. How can I make all the subplots share the same scale on the colormap?", "response":"To get this right you need to have all the images with the same intensity scale, otherwise the colorbar() colours are meaningless. To do that, use the vmin and vmax arguments of imshow(), and make sure they are the same for all your images.
E.g., if the range of values you want to show goes from 0 to 10, you can use the following: ``` import pylab as plt import numpy as np my_image1 = np.linspace(0, 10, 10000).reshape(100,100) my_image2 = np.sqrt(my_image1.T) + 3 plt.subplot(1, 2, 1) plt.imshow(my_image1, vmin=0, vmax=10, cmap='jet', aspect='auto') plt.subplot(1, 2, 2) plt.imshow(my_image2, vmin=0, vmax=10, cmap='jet', aspect='auto') plt.colorbar() ```", "best_answers_score":0.7394, "library_name":"matplotlib", "question_url":"https:\/\/stackoverflow.com\/questions\/17989917\/imshow-subplots-with-the-same-colorbar", "best_answers_votes":94, "question_length":558, "response_length":660 }, { "question":"Error #15: Initializing libiomp5.dylib, but found libiomp5.dylib already initialized Getting the error message when using matplotlib: Error #15: Initializing libiomp5.dylib, but found libiomp5.dylib already initialized OMP: Hint: This means that multiple copies of the OpenMP runtime have been linked into the program. That is dangerous, since it can degrade performance or cause incorrect results. The best thing to do is to ensure that only a single OpenMP runtime is linked into the process, e.g. by avoiding static linking of the OpenMP runtime in any library. As an unsafe, unsupported, undocumented workaround you can set the environment variable KMP_DUPLICATE_LIB_OK=TRUE to allow the program to continue to execute, but that may cause crashes or silently produce incorrect results. 
For more information, please see http:\/\/www.intel.com\/software\/products\/support\/.", "response":"Do the following to solve the issue: ``` import os os.environ['KMP_DUPLICATE_LIB_OK']='True' ``` Answer found at: https:\/\/github.com\/dmlc\/xgboost\/issues\/1715 Be aware of potential side-effects: but that may cause crashes or silently produce incorrect results.", "best_answers_score":0.7393, "library_name":"matplotlib", "question_url":"https:\/\/stackoverflow.com\/questions\/53014306\/error-15-initializing-libiomp5-dylib-but-found-libiomp5-dylib-already-initial", "best_answers_votes":107, "question_length":871, "response_length":259 }, { "question":"iPython\/Jupyter Notebook and Pandas, how to plot multiple graphs in a for loop? Consider the following code running in iPython\/Jupyter Notebook: ``` from pandas import * %matplotlib inline ys = [[0,1,2,3,4],[4,3,2,1,0]] x_ax = [0,1,2,3,4] for y_ax in ys: ts = Series(y_ax,index=x_ax) ts.plot(kind='bar', figsize=(15,5)) ``` I would expect to have 2 separate plots as output, instead, I get the two series merged in one single plot. Why is that? How can I get two separate plots keeping the for loop?", "response":"Just add the call to plt.show() after you plot the graph (you might want to import matplotlib.pyplot to do that), like this: ``` from pandas import Series import matplotlib.pyplot as plt %matplotlib inline ys = [[0,1,2,3,4],[4,3,2,1,0]] x_ax = [0,1,2,3,4] for y_ax in ys: ts = Series(y_ax,index=x_ax) ts.plot(kind='bar', figsize=(15,5)) plt.show() ```", "best_answers_score":0.739, "library_name":"matplotlib", "question_url":"https:\/\/stackoverflow.com\/questions\/29532894\/ipython-jupyter-notebook-and-pandas-how-to-plot-multiple-graphs-in-a-for-loop", "best_answers_votes":72, "question_length":499, "response_length":351 }, { "question":"Saving interactive Matplotlib figures Is there a way to save a Matplotlib figure such that it can be re-opened and have typical interaction restored? 
(Like the .fig format in MATLAB?) I find myself running the same scripts many times to generate these interactive figures. Or I'm sending my colleagues multiple static PNG files to show different aspects of a plot. I'd rather send the figure object and have them interact with it themselves.", "response":"I just found out how to do this. The \"experimental pickle support\" mentioned by @pelson works quite well. Try this: ``` # Plot something import matplotlib.pyplot as plt fig,ax = plt.subplots() ax.plot([1,2,3],[10,-10,30]) ``` After your interactive tweaking, save the figure object as a binary file: ``` import pickle pickle.dump(fig, open('FigureObject.fig.pickle', 'wb')) # This is for Python 3 - py2 may need `file` instead of `open` ``` Later, open the figure and the tweaks should be saved and GUI interactivity should be present: ``` import pickle figx = pickle.load(open('FigureObject.fig.pickle', 'rb')) figx.show() # Show the figure, edit it, etc.! ``` You can even extract the data from the plots: ``` data = figx.axes[0].lines[0].get_data() ``` (It works for lines, pcolor & imshow - pcolormesh works with some tricks to reconstruct the flattened data.) I got the excellent tip from Saving Matplotlib Figures Using Pickle.", "best_answers_score":0.7383, "library_name":"matplotlib", "question_url":"https:\/\/stackoverflow.com\/questions\/4348733\/saving-interactive-matplotlib-figures", "best_answers_votes":101, "question_length":441, "response_length":933 }, { "question":"My plot in ipython does not show with pyplot.show() Help required displaying matplotlib plots in ipython. I did not forget to call pyplot.show(). $ ipython --pylab ``` import matplotlib.pyplot as plt plt.plot(range(20), range(20)) ``` It returns matplotlib.lines.Line2D at 0xade2b2c as the output. ``` plt.show() ``` Nothing happens. No error message. No new window. I installed matplotlib with pip, and no error messages occurred. 
Details: I use Ubuntu, IPython v0.11, Python v2.6.6, and matplotlib v1.0.1.", "response":"If I set my backend to template in ~\/.matplotlib\/matplotlibrc, then I can reproduce your symptoms: ~\/.matplotlib\/matplotlibrc: ``` # backend : GtkAgg backend : template ``` Note that the file matplotlibrc may not be in directory ~\/.matplotlib\/. In this case, the following code shows where it is: ``` >>> import matplotlib >>> matplotlib.matplotlib_fname() ``` ``` In [1]: import matplotlib.pyplot as p In [2]: p.plot(range(20),range(20)) Out[2]: [] In [3]: p.show() ``` If you edit ~\/.matplotlib\/matplotlibrc and change the backend to something like GtkAgg, you should see a plot. You can list all the backends available on your machine with ``` import matplotlib.rcsetup as rcsetup print(rcsetup.all_backends) ``` It should return a list like: ``` ['GTK', 'GTKAgg', 'GTKCairo', 'FltkAgg', 'MacOSX', 'QtAgg', 'Qt4Agg', 'TkAgg', 'WX', 'WXAgg', 'CocoaAgg', 'agg', 'cairo', 'emf', 'gdk', 'pdf', 'ps', 'svg', 'template'] ``` Reference: Customizing matplotlib", "best_answers_score":0.7378, "library_name":"matplotlib", "question_url":"https:\/\/stackoverflow.com\/questions\/7534453\/my-plot-in-ipython-does-not-show-with-pyplot-show", "best_answers_votes":192, "question_length":500, "response_length":955 }, { "question":"Matplotlib transparent line plots I am plotting two similar trajectories in matplotlib and I'd like to plot each of the lines with partial transparency so that the red (plotted second) doesn't obscure the blue.
EDIT: Here's the image with transparent lines.", "response":"Plain and simple: ``` plt.plot(x, y, 'r-', alpha=0.7) ``` (I know I add nothing new, but the straightforward answer should be visible).", "best_answers_score":0.7369, "library_name":"matplotlib", "question_url":"https:\/\/stackoverflow.com\/questions\/4320021\/matplotlib-transparent-line-plots", "best_answers_votes":368, "question_length":257, "response_length":135 }, { "question":"Matplotlib imshow - Change default colour normalisation I have consistently had problems with my colour maps when using imshow; some colours seem to just become black. I have finally realised that imshow seems to, by default, normalise the matrix of floating point values I give it. I would have expected an array such as [[0,0.25],[0.5,0.75]] to display the appropriate colours from the map, corresponding to those absolute values but the 0.75 will be interpreted as a 1. In the extreme case, an N x N array of 0.2 (for example), would just produce one big black square, rather than whatever one would expect 0.2 to correspond to in the colour map (perhaps a 20% grey). Is there a way to prevent this behaviour? It is particularly annoying when custom colour maps have many discontinuities; a small change in scale could cause all the colours to completely change.", "response":"Just specify vmin=0, vmax=1. By default, imshow normalizes the data to its min and max. You can control this with either the vmin and vmax arguments or with the norm argument (if you want a non-linear scaling). 
As a quick example: ``` import matplotlib.pyplot as plt data = [[0, 0.25], [0.5, 0.75]] fig, ax = plt.subplots() im = ax.imshow(data, cmap=plt.get_cmap('hot'), interpolation='nearest', vmin=0, vmax=1) fig.colorbar(im) plt.show() ```", "best_answers_score":0.7366, "library_name":"matplotlib", "question_url":"https:\/\/stackoverflow.com\/questions\/22121239\/matplotlib-imshow-change-default-colour-normalisation", "best_answers_votes":96, "question_length":865, "response_length":443 }, { "question":"Matplotlib axis with two scales shared origin I need to overlay two datasets with different Y-axis scales in Matplotlib. The data contains both positive and negative values. I want the two axes to share one origin, but Matplotlib does not align the two scales by default. ``` import numpy as np import matplotlib.pyplot as plt fig = plt.figure() ax1 = fig.add_subplot(111) ax2 = ax1.twinx() ax1.bar(range(6), (2, -2, 1, 0, 0, 0)) ax2.plot(range(6), (0, 2, 8, -2, 0, 0)) plt.show() ``` I suppose it is possible to perform some computation with .get_ylim() and .set_ylim() to align the two scales.
Is there an easier solution?", "response":"use the align_yaxis() function: ``` import numpy as np import matplotlib.pyplot as plt def align_yaxis(ax1, v1, ax2, v2): \"\"\"adjust ax2 ylimit so that v2 in ax2 is aligned to v1 in ax1\"\"\" _, y1 = ax1.transData.transform((0, v1)) _, y2 = ax2.transData.transform((0, v2)) inv = ax2.transData.inverted() _, dy = inv.transform((0, 0)) - inv.transform((0, y1-y2)) miny, maxy = ax2.get_ylim() ax2.set_ylim(miny+dy, maxy+dy) fig = plt.figure() ax1 = fig.add_subplot(111) ax2 = ax1.twinx() ax1.bar(range(6), (2, -2, 1, 0, 0, 0)) ax2.plot(range(6), (0, 2, 8, -2, 0, 0)) align_yaxis(ax1, 0, ax2, 0) plt.show() ```", "best_answers_score":0.7365, "library_name":"matplotlib", "question_url":"https:\/\/stackoverflow.com\/questions\/10481990\/matplotlib-axis-with-two-scales-shared-origin", "best_answers_votes":64, "question_length":626, "response_length":603 }, { "question":"How to share secondary y-axis between subplots in matplotlib If you have multiple subplots containing a secondary y-axis (created using twinx), how can you share these secondary y-axes between the subplots? I want them to scale equally in an automatic way (so not set the y-limits afterwards by hand). For the primary y-axis, this is possible by using the keyword sharey in the call of subplot. Below example shows my attempt, but it fails to share the secondary y-axis of both subplots. 
I'm using Matplotlib: ```py ax = [] #create upper subplot ax.append(subplot(211)) plot(rand(1) * rand(10),'r') #create plot on secondary y-axis of upper subplot ax.append(ax[0].twinx()) plot(10*rand(1) * rand(10),'b') #create lower subplot and share y-axis with primary y-axis of upper subplot ax.append(subplot(212, sharey = ax[0])) plot(3*rand(1) * rand(10),'g') #create plot on secondary y-axis of lower subplot ax.append(ax[2].twinx()) #set twinxed axes as the current axes again, #but now attempt to share the secondary y-axis axes(ax[3], sharey = ax[1]) plot(10*rand(1) * rand(10),'y') ``` This gets me something like: The reason I used the axes() function to set the shared y-axis is that twinx doesn't accept the sharey keyword. I am using Python 3.2 on Win7 x64. Matplotlib version is 1.2.0rc2.", "response":"You can use Axes.get_shared_y_axes() like so: ``` from numpy.random import rand import matplotlib matplotlib.use('gtkagg') import matplotlib.pyplot as plt # create all axes we need ax0 = plt.subplot(211) ax1 = ax0.twinx() ax2 = plt.subplot(212) ax3 = ax2.twinx() # share the secondary axes ax1.get_shared_y_axes().join(ax1, ax3) ax0.plot(rand(1) * rand(10),'r') ax1.plot(10*rand(1) * rand(10),'b') ax2.plot(3*rand(1) * rand(10),'g') ax3.plot(10*rand(1) * rand(10),'y') plt.show() ``` Here we're just joining the secondary axes together.", "best_answers_score":0.7356, "library_name":"matplotlib", "question_url":"https:\/\/stackoverflow.com\/questions\/12919230\/how-to-share-secondary-y-axis-between-subplots-in-matplotlib", "best_answers_votes":58, "question_length":1291, "response_length":536 }, { "question":"Manually set color of points in legend I'm making a scatter plot which looks like this: (MWE at bottom of question) As can be seen in the image above the colors of the points in the legend are set to blue automatically by matplotlib. 
I need to set these points to some other color not present in the colormap (i.e. black) so they won't generate confusion with the colors associated with said colormap. I looked around but the matplotlib.legend module does not seem to accept a color keyword. Is there any way to do this? Here's the MWE: ``` import matplotlib.pyplot as plt import numpy as np def rand_data(): return np.random.uniform(low=0., high=1., size=(100,)) # Generate data. x, y, x2, x3 = [rand_data() for i in range(4)] # This data defines the markers and labels used. x1 = np.random.random_integers(7, 9, size=(100,)) # Order all lists so smaller points are on top. order = np.argsort(-np.array(x2)) # Order x and y. x_o, y_o = np.take(x, order), np.take(y, order) # Order list related to markers and labels. z1 = np.take(x1, order) # Order list related to sizes. z2 = np.take(x2, order) # Order list related to colors. z3 = np.take(x3, order) plt.figure() cm = plt.cm.get_cmap('RdYlBu') # Scatter plot where each value in z1 has a different marker and label # assigned. mrk = {7: ('o', '7'), 8: ('s', '8'), 9: ('D', '9')} for key, value in mrk.items(): s1 = (z1 == key) plt.scatter(x_o[s1], y_o[s1], marker=value[0], label=value[1], s=z2[s1] * 100., c=z3[s1], cmap=cm, lw=0.2) # Plot colorbar plt.colorbar() # Plot legend. plt.legend(loc=\"lower left\", markerscale=0.7, scatterpoints=1, fontsize=10) plt.show() ```", "response":"You can obtain the legend handles and change their colors individually. Thanks to the comments of @OrOrg and @Reinderien, which led me to update this answer. 
``` ax = plt.gca() leg = ax.get_legend() leg.legend_handles[0].set_facecolor('red') leg.legend_handles[0].set_edgecolor('red') leg.legend_handles[1].set_facecolor('yellow') leg.legend_handles[1].set_edgecolor('yellow') ```", "best_answers_score":0.7352, "library_name":"matplotlib", "question_url":"https:\/\/stackoverflow.com\/questions\/23698850\/manually-set-color-of-points-in-legend", "best_answers_votes":82, "question_length":1620, "response_length":379 }, { "question":"Plot a plane and points in 3D simultaneously I'm trying to plot a plane and some points in 3D simultaneously with Matplotlib. I get no errors, but the point will not appear. I can plot some points and planes at different times, but never at the same time. The part of the code looks like: ``` import numpy as np import matplotlib.pyplot as plt from mpl_toolkits.mplot3d import Axes3D point = np.array([1, 2, 3]) normal = np.array([1, 1, 2]) point2 = np.array([10, 50, 50]) # a plane is a*x+b*y+c*z+d=0 # [a,b,c] is the normal. Thus, we have to calculate # d and we're set d = -point.dot(normal) # create x,y xx, yy = np.meshgrid(range(10), range(10)) # calculate corresponding z z = (-normal[0] * xx - normal[1] * yy - d) * 1. \/normal[2] # plot the surface plt3d = plt.figure().gca(projection='3d') plt3d.plot_surface(xx, yy, z, alpha=0.2) # and I would like to plot this point: ax.scatter(point2[0] , point2[1] , point2[2], color='green') plt.show() ```", "response":"Just to add to @suever's answer, there's no reason why you can't create the Axes and then plot both the surface and the scatter points on it. 
Then there's no need to use ax.hold(): ``` # Create the figure fig = plt.figure() # Add an axes ax = fig.add_subplot(111,projection='3d') # plot the surface ax.plot_surface(xx, yy, z, alpha=0.2) # and plot the point ax.scatter(point2[0] , point2[1] , point2[2], color='green') ```", "best_answers_score":0.7351, "library_name":"matplotlib", "question_url":"https:\/\/stackoverflow.com\/questions\/36060933\/plot-a-plane-and-points-in-3d-simultaneously", "best_answers_votes":28, "question_length":951, "response_length":426 }, { "question":"Plot inline or a separate window using Matplotlib in Spyder IDE When I use Matplotlib to plot some graphs, it is usually fine for the default inline drawing. However, when I draw some 3D graphs, I'd like to have them in a separate window so that interactions like rotation can be enabled. Can I configure in Python code which figure to display inline and which one to display in a new window? I know that in Spyder I can click Tools, Preferences, Ipython Console, Graphics and under Graphics Backend select “automatic” instead of “inline”. However, this makes all the figures open in new windows, which can be messy when I have a lot of plots. So I want only those 3D plots in new windows, but all the other 2D plots to remain inline. Is it possible at all? Thanks!", "response":"type ``` %matplotlib qt ``` when you want graphs in a separate window and ``` %matplotlib inline ``` when you want an inline plot", "best_answers_score":0.735, "library_name":"matplotlib", "question_url":"https:\/\/stackoverflow.com\/questions\/29356269\/plot-inline-or-a-separate-window-using-matplotlib-in-spyder-ide", "best_answers_votes":103, "question_length":759, "response_length":129 }, { "question":"How to change data points color based on some variable I have 2 variables (x,y) that change with time (t). I want to plot x vs. t and color the ticks based on the value of y. e.g. 
for highest values of y the tick color is dark green, for lowest value is dark red, and for intermediate values the color will be scaled in between green and red. Can this be done with matplotlib in python?", "response":"This is what matplotlib.pyplot.scatter is for. If no colormap is specified, scatter will use whatever the default colormap is set to. To specify which colormap scatter should use, use the cmap kwarg (e.g. cmap=\"jet\"). As a quick example: ``` import matplotlib.pyplot as plt import matplotlib.colors as mcolors import numpy as np # Generate data... t = np.linspace(0, 2 * np.pi, 20) x = np.sin(t) y = np.cos(t) plt.scatter(t, x, c=y, ec='k') plt.show() ``` One may specify a custom color map and norm ```py cmap, norm = mcolors.from_levels_and_colors([0, 2, 5, 6], ['red', 'green', 'blue']) plt.scatter(x, y, c=t, cmap=cmap, norm=norm) ```", "best_answers_score":0.734, "library_name":"matplotlib", "question_url":"https:\/\/stackoverflow.com\/questions\/7881994\/how-to-change-data-points-color-based-on-some-variable", "best_answers_votes":108, "question_length":386, "response_length":638 }, { "question":"matplotlib Legend Markers Only Once I often plot a point on a matplotlib plot with: ``` x = 10 y = 100 plot(x, y, \"k*\", label=\"Global Optimum\") legend() ``` However, this causes the legend to put a star in the legend twice, such that it looks like: ``` * * Global Optimum ``` when I really want it to look like: ``` * Global Optimum ``` How do I do this?", "response":"This should work: ``` legend(numpoints=1) ``` BTW, if you add the line ``` legend.numpoints : 1 # the number of points in the legend line ``` to your matplotlibrc file, then this will be the new default. [See also scatterpoints, depending on your plot.] 
", "best_answers_score":0.7333, "library_name":"matplotlib", "question_url":"https:\/\/stackoverflow.com\/questions\/6146778\/matplotlib-legend-markers-only-once", "best_answers_votes":254, "question_length":354, "response_length":275 }, { "question":"Save plot to image file instead of displaying it This displays the figure in a GUI: ``` import matplotlib.pyplot as plt plt.plot([1, 2, 3], [1, 4, 9]) plt.show() ``` But how do I instead save the figure to a file (e.g. foo.png)?", "response":"When using matplotlib.pyplot.savefig, the file format can be specified by the extension: ``` from matplotlib import pyplot as plt plt.savefig('foo.png') plt.savefig('foo.pdf') ``` That gives a rasterized or vectorized output respectively. In addition, there is sometimes undesirable whitespace around the image, which can be removed with: ``` plt.savefig('foo.png', bbox_inches='tight') ``` Note that if showing the plot, plt.show() should follow plt.savefig(); otherwise, the file image will be blank.", "best_answers_score":0.7332, "library_name":"matplotlib", "question_url":"https:\/\/stackoverflow.com\/questions\/9622163\/save-plot-to-image-file-instead-of-displaying-it", "best_answers_votes":2277, "question_length":228, "response_length":502 }, { "question":"python 2.7: cannot pip on windows \"bash: pip: command not found\" I am trying to install the SciPy stack located at https:\/\/scipy.org\/stackspec.html [I am only allowed 2 links; trying to use them wisely]. I realize that there are much easier ways to do this, but I think there is a lot to be learned by doing it manually. I am relatively new to a lot of this stuff, so I apologize if I sound ignorant at any point. I am running Windows 7 Enterprise - 64 bit. 
Here is what I have done so far: Installed python-2.7.8.msi (32-bit) from https:\/\/www.python.org\/download\/releases\/2.7.8\/ Installed numpy-1.8.1-win32-superpack-python2.7 from http:\/\/sourceforge.net\/projects\/numpy\/files\/ Test: import numpy as np ---> no errors Installed scipy library, scipy-0.14.0-win32-superpack-python2.7.exe from (SCIPY DOT ORG LINK REMOVED) Test: import scipy as sp ---> no errors Installed matplotlib: matplotlib-1.3.1.win32-py2.7.exe from (MATPLOTLIB DOT ORG LINK REMOVED) Installed PIP by running script here: https:\/\/raw.githubusercontent.com\/pypa\/pip\/master\/contrib\/get-pip.py I just copy-pasted the script to a new file in IDLE, saved it as C:\\Python27\\Scripts\\pip_install.py and clicked Run>module. No errors reported. Does the path on which I saved pip_install.py matter? HERE IS WHERE I FAIL Attempted to install the matplotlib dependency dateutil: Opened a Cygwin Shell, and typed ``` cd C:\\Python27 ! is it necessary to cd to the python directory? pip install python-dateutil ``` This results in the error: ``` bash: pip: command not found ``` I get the same error attempting from cmd. Any help is appreciated; the closest I found was bash: pip: command not found. But the OSX nature of it is just enough to confuse me further. UPDATE: I added the pip-path per Paul H's suggestion below. It made the error go away, but strangely, nothing I pip actually installs. For example, in Cygwin, I type: ``` cbennett2> pip install python-dateutil cbennett2> ``` You can see that there is no output or feedback from the shell (which I think there should be). Then when I go to a new python shell: ``` >>> from dateutil.parser import parse Traceback (most recent call last): File \"\", line 1, in from dateutil.parser import parse ImportError: No module named dateutil.parser >>> ``` This happens with all of the modules that I thought I had pip'd ... pandas, tornado, etc.", "response":"On Windows, pip lives in C:\\[pythondir]\\scripts. 
So you'll need to add that to your system path in order to run it from the command prompt. You could alternatively cd into that directory each time, but that's a hassle. See the top answer here for info on how to do that: Adding Python Path on Windows 7 Also, that is a terrifying way to install pip. Grab it from Christophe Gohlke. Grab everything else from there for that matter. http:\/\/www.lfd.uci.edu\/~gohlke\/pythonlibs\/", "best_answers_score":0.7331, "library_name":"matplotlib", "question_url":"https:\/\/stackoverflow.com\/questions\/25328818\/python-2-7-cannot-pip-on-windows-bash-pip-command-not-found", "best_answers_votes":89, "question_length":2340, "response_length":473 }, { "question":"How to increase the space between bar plot bars How do I increase the space between each bar with matplotlib barcharts, as they keep cramming themselves to the centre. (this is what it currently looks like) ``` import matplotlib.pyplot as plt import matplotlib.dates as mdates def ww(self):#wrongwords text file with open(\"wrongWords.txt\") as file: array1 = [] array2 = [] for element in file: array1.append(element) x=array1[0] s = x.replace(')(', '),(') #removes the quote marks from csv file print(s) my_list = ast.literal_eval(s) print(my_list) my_dict = {} for item in my_list: my_dict[item[2]] = my_dict.get(item[2], 0) + 1 plt.bar(range(len(my_dict)), my_dict.values(), align='center') plt.xticks(range(len(my_dict)), my_dict.keys()) plt.show() ```", "response":"Try replacing ``` plt.bar(range(len(my_dict)), my_dict.values(), align='center') ``` with ``` plt.figure(figsize=(20, 3)) # width:20, height:3 plt.bar(range(len(my_dict)), my_dict.values(), align='edge', width=0.3) ``` The option align='edge' will eliminate white space on the left of the bar chart, and width=0.3 sets the bars' width to a smaller size than the default. The bar spacing will be adjusted accordingly. The labels along the x-axis should be rotated 90 degrees to make them readable. 
``` plt.xticks(range(len(my_dict)), my_dict.keys(), rotation='vertical') ```", "best_answers_score":0.733, "library_name":"matplotlib", "question_url":"https:\/\/stackoverflow.com\/questions\/40575067\/how-to-increase-the-space-between-bar-plot-bars", "best_answers_votes":65, "question_length":749, "response_length":579 }, { "question":"Matplotlib-Animation \"No MovieWriters Available\" Under Linux, I've been checking out matplotlib's animation class, and it seems to work except that I cant initialise the movie writer to write out the movie. Using either of the examples: http:\/\/matplotlib.org\/examples\/animation\/moviewriter.html http:\/\/matplotlib.org\/examples\/animation\/basic_example_writer.html results in the error \"RuntimeError: No MovieWriters available!\" Im using matplotlib version 1.3.x and have installed (hopefully) all the codecs. Can someone please suggest as to why I get this error? If its a codecs issue, which codecs (+versions) should I install? If its something else that's broken, is there an alternative for creating animations in python?", "response":"For fellow googlers using Anaconda, install the ffmpeg package: ``` conda install -c conda-forge ffmpeg ``` This works on Windows too. (Original answer used menpo package owner but as mentioned by @harsh their version is a little behind at time of writing)", "best_answers_score":0.7327, "library_name":"matplotlib", "question_url":"https:\/\/stackoverflow.com\/questions\/13316397\/matplotlib-animation-no-moviewriters-available", "best_answers_votes":119, "question_length":723, "response_length":256 }, { "question":"UserWarning: FigureCanvasAgg is non-interactive, and thus cannot be shown plt.show() I am using Windows 10 PyCharm 2021.3.3 Professional Edition python 3.11.5 matplotlib 3.8.1 How can I permanently resolve this issue in my development environment? 
```py import numpy as np import matplotlib matplotlib.use('Agg') import matplotlib.pyplot as plt # Read data from file, skipping the first row (header) data = np.loadtxt('cm.dat', skiprows=1) # Initialize reference point x0, y0, z0 = data[0] # Compute squared displacement for each time step SD = [(x - x0)**2 + (y - y0)**2 + (z - z0)**2 for x, y, z in data] # Compute the cumulative average of SD to get MSD at each time step MSD = np.cumsum(SD) \/ np.arange(1, len(SD) + 1) # Generate time steps t = np.arange(1, len(SD) + 1) # Create a log-log plot of MSD versus t plt.figure(figsize=(8, 6)) plt.loglog(t, MSD, marker='o') plt.title('Mean Squared Displacement vs Time') plt.xlabel('Time step') plt.ylabel('MSD') plt.grid(True, which=\"both\", ls=\"--\") plt.show() ``` ```none C:\\Users\\pc\\AppData\\Local\\Programs\\Python\\Python311\\python.exe C:\/git\/RouseModel\/tau_plot.py C:\\git\\RouseModel\\tau_plot.py:29: UserWarning: FigureCanvasAgg is non-interactive, and thus cannot be shown plt.show() Process finished with exit code 0 ```", "response":"I have the same issue. In my case, I installed PyQt5==5.15.10, and after that my code ran successfully: pip install PyQt5==5.15.10 or pip install PyQt5 with python==3.11. As of 2024, you should install PyQt6 or the latest version with python==3.12 or later.", "best_answers_score":0.7325, "library_name":"matplotlib", "question_url":"https:\/\/stackoverflow.com\/questions\/77507580\/userwarning-figurecanvasagg-is-non-interactive-and-thus-cannot-be-shown-plt-sh", "best_answers_votes":107, "question_length":1272, "response_length":273 }, { "question":"Adding a y-axis label to secondary y-axis in matplotlib I can add a y label to the left y-axis using plt.ylabel, but how can I add it to the secondary y-axis? 
``` table = sql.read_frame(query,connection) table[0].plot(color=colors[0],ylim=(0,100)) table[1].plot(secondary_y=True,color=colors[1]) plt.ylabel('$') ```", "response":"The best way is to interact with the axes object directly ``` import numpy as np import matplotlib.pyplot as plt x = np.arange(0, 10, 0.1) y1 = 0.05 * x**2 y2 = -1 *y1 fig, ax1 = plt.subplots() ax2 = ax1.twinx() ax1.plot(x, y1, 'g-') ax2.plot(x, y2, 'b-') ax1.set_xlabel('X data') ax1.set_ylabel('Y1 data', color='g') ax2.set_ylabel('Y2 data', color='b') plt.show() ```", "best_answers_score":0.7322, "library_name":"matplotlib", "question_url":"https:\/\/stackoverflow.com\/questions\/14762181\/adding-a-y-axis-label-to-secondary-y-axis-in-matplotlib", "best_answers_votes":504, "question_length":315, "response_length":369 }, { "question":"python numpy\/scipy curve fitting I have some points and I am trying to fit curve for this points. I know that there exist scipy.optimize.curve_fit function, but I do not understand the documentation, i.e. how to use this function. My points: ```py np.array([(1, 1), (2, 4), (3, 1), (9, 3)]) ``` Can anybody explain how to do that?", "response":"I suggest you to start with simple polynomial fit, scipy.optimize.curve_fit tries to fit a function f that you must know to a set of points. 
This is a simple 3 degree polynomial fit using numpy.polyfit and poly1d, the first performs a least squares polynomial fit and the second calculates the new points: ``` import numpy as np import matplotlib.pyplot as plt points = np.array([(1, 1), (2, 4), (3, 1), (9, 3)]) # get x and y vectors x = points[:,0] y = points[:,1] # calculate polynomial z = np.polyfit(x, y, 3) f = np.poly1d(z) # calculate new x's and y's x_new = np.linspace(x[0], x[-1], 50) y_new = f(x_new) plt.plot(x,y,'o', x_new, y_new) plt.xlim([x[0]-1, x[-1] + 1 ]) plt.show() ```", "best_answers_score":0.7315, "library_name":"matplotlib", "question_url":"https:\/\/stackoverflow.com\/questions\/19165259\/python-numpy-scipy-curve-fitting", "best_answers_votes":127, "question_length":330, "response_length":690 }, { "question":"How do you determine which backend is being used by matplotlib? Either interactively, such as from within an Ipython session, or from within a script, how can you determine which backend is being used by matplotlib?", "response":"Use the get_backend() function to obtain a string denoting which backend is in use: ``` >>> import matplotlib >>> matplotlib.get_backend() 'TkAgg' ```", "best_answers_score":0.7311, "library_name":"matplotlib", "question_url":"https:\/\/stackoverflow.com\/questions\/3580027\/how-do-you-determine-which-backend-is-being-used-by-matplotlib", "best_answers_votes":161, "question_length":215, "response_length":150 }, { "question":"How to zoomed a portion of image and insert in the same plot in matplotlib I would like to zoom a portion of data\/image and plot it inside the same figure. It looks something like this figure. Is it possible to insert a portion of zoomed image inside the same plot. I think it is possible to draw another figure with subplot but it draws two different figures. I also read to add patch to insert rectangle\/circle but not sure if it is useful to insert a portion of image into the figure. 
I basically load data from the text file and plot it using the simple plot commands shown below. I found one related example from the matplotlib image gallery here but not sure how it works. Your help is much appreciated. ``` from numpy import * import os import matplotlib.pyplot as plt data = loadtxt(os.getcwd()+txtfl[0], skiprows=1) fig1 = plt.figure() ax1 = fig1.add_subplot(111) ax1.semilogx(data[:,1],data[:,2]) plt.show() ```", "response":"Playing with runnable code is one of the fastest ways to learn Python. So let's start with the code from the matplotlib example gallery. Given the comments in the code, it appears the code is broken up into 4 main stanzas. The first stanza generates some data, the second stanza generates the main plot, the third and fourth stanzas create the inset axes. We know how to generate data and plot the main plot, so let's focus on the third stanza: ``` a = axes([.65, .6, .2, .2], axisbg='y') n, bins, patches = hist(s, 400, normed=1) title('Probability') setp(a, xticks=[], yticks=[]) ``` Copy the example code into a new file, called, say, test.py. What happens if we change the .65 to .35? ``` a = axes([.35, .6, .2, .2], axisbg='y') ``` Run the script: ``` python test.py ``` You'll find the \"Probability\" inset moved to the left. So the axes function controls the placement of the inset. If you play some more with the numbers you'll figure out that (.35, .6) is the location of the lower left corner of the inset, and (.2, .2) is the width and height of the inset. The numbers go from 0 to 1 and (0,0) is located at the lower left corner of the figure. Okay, now we're cooking. 
On to the next line we have: ``` n, bins, patches = hist(s, 400, normed=1) ``` You might recognize this as the matplotlib command for drawing a histogram, but if not, changing the number 400 to, say, 10, will produce an image with a much chunkier histogram, so again by playing with the numbers you'll soon figure out that this line has something to do with the image inside the inset. You'll want to call semilogx(data[3:8,1],data[3:8,2]) here. The line title('Probability') obviously generates the text above the inset. Finally we come to setp(a, xticks=[], yticks=[]). There are no numbers to play with, so what happens if we just comment out the whole line by placing a # at the beginning of the line: ``` # setp(a, xticks=[], yticks=[]) ``` Rerun the script. Oh! now there are lots of tick marks and tick labels on the inset axes. Fine. So now we know that setp(a, xticks=[], yticks=[]) removes the tick marks and labels from the axes a. Now, in theory you have enough information to apply this code to your problem. But there is one more potential stumbling block: The matplotlib example uses from pylab import * whereas you use import matplotlib.pyplot as plt. The matplotlib FAQ says import matplotlib.pyplot as plt is the recommended way to use matplotlib when writing scripts, while from pylab import * is for use in interactive sessions. So you are doing it the right way, (though I would recommend using import numpy as np instead of from numpy import * too). So how do we convert the matplotlib example to run with import matplotlib.pyplot as plt? Doing the conversion takes some experience with matplotlib. Generally, you just add plt. in front of bare names like axes and setp, but sometimes the function come from numpy, and sometimes the call should come from an axes object, not from the module plt. It takes experience to know where all these functions come from. Googling the names of functions along with \"matplotlib\" can help. 
Reading example code can build experience, but there is no easy shortcut. So, the converted code becomes ``` ax2 = plt.axes([.65, .6, .2, .2], axisbg='y') ax2.semilogx(t[3:8],s[3:8]) plt.setp(ax2, xticks=[], yticks=[]) ``` And you could use it in your code like this: ``` from numpy import * import os import matplotlib.pyplot as plt data = loadtxt(os.getcwd()+txtfl[0], skiprows=1) fig1 = plt.figure() ax1 = fig1.add_subplot(111) ax1.semilogx(data[:,1],data[:,2]) ax2 = plt.axes([.65, .6, .2, .2], axisbg='y') ax2.semilogx(data[3:8,1],data[3:8,2]) plt.setp(ax2, xticks=[], yticks=[]) plt.show() ```", "best_answers_score":0.7308, "library_name":"matplotlib", "question_url":"https:\/\/stackoverflow.com\/questions\/13583153\/how-to-zoomed-a-portion-of-image-and-insert-in-the-same-plot-in-matplotlib", "best_answers_votes":46, "question_length":915, "response_length":3732 }, { "question":"How to display a 3D plot of a 3D array isosurface with mplot3D or similar I have a 3-dimensional numpy array. I'd like to display (in matplotlib) a nice 3D plot of an isosurface of this array (or more strictly, display an isosurface of the 3D scalar field defined by interpolating between the sample points). matplotlib's mplot3D part provides nice 3D plot support, but (so far as I can see) its API doesn't have anything which will simply take a 3D array of scalar values and display an isosurface. However, it does support displaying a collection of polygons, so presumably I could implement the marching cubes algorithm to generate such polygons. It does seem quite likely that a scipy-friendly marching cubes has already been implemented somewhere and that I haven't found it, or that I'm missing some easy way of doing this. 
Alternatively I'd welcome any pointers to other tools for visualising 3D array data easily usable from the Python\/numpy\/scipy world.", "response":"Complementing the answer of @DanHickstein, you can also use trisurf to visualize the polygons obtained in the marching cubes phase. ``` import numpy as np from numpy import sin, cos, pi from skimage import measure import matplotlib.pyplot as plt from mpl_toolkits.mplot3d import Axes3D def fun(x, y, z): return cos(x) + cos(y) + cos(z) x, y, z = pi*np.mgrid[-1:1:31j, -1:1:31j, -1:1:31j] vol = fun(x, y, z) iso_val=0.0 verts, faces = measure.marching_cubes(vol, iso_val, spacing=(0.1, 0.1, 0.1)) fig = plt.figure() ax = fig.add_subplot(111, projection='3d') ax.plot_trisurf(verts[:, 0], verts[:,1], faces, verts[:, 2], cmap='Spectral', lw=1) plt.show() ``` Update: May 11, 2018 As mentioned by @DrBwts, marching_cubes now returns 4 values. The following code works. ``` import numpy as np from numpy import sin, cos, pi from skimage import measure import matplotlib.pyplot as plt from mpl_toolkits.mplot3d import Axes3D def fun(x, y, z): return cos(x) + cos(y) + cos(z) x, y, z = pi*np.mgrid[-1:1:31j, -1:1:31j, -1:1:31j] vol = fun(x, y, z) iso_val=0.0 verts, faces, _, _ = measure.marching_cubes(vol, iso_val, spacing=(0.1, 0.1, 0.1)) fig = plt.figure() ax = fig.add_subplot(111, projection='3d') ax.plot_trisurf(verts[:, 0], verts[:,1], faces, verts[:, 2], cmap='Spectral', lw=1) plt.show() ``` Update: February 2, 2020 Adding to my previous answer, I should mention that since then PyVista has been released, and it makes this kind of task somewhat effortless. Following the same example as before. 
```py from numpy import cos, pi, mgrid import pyvista as pv #%% Data x, y, z = pi*mgrid[-1:1:31j, -1:1:31j, -1:1:31j] vol = cos(x) + cos(y) + cos(z) grid = pv.StructuredGrid(x, y, z) grid[\"vol\"] = vol.flatten() contours = grid.contour([0]) #%% Visualization pv.set_plot_theme('document') p = pv.Plotter() p.add_mesh(contours, scalars=contours.points[:, 2], show_scalar_bar=False) p.show() ``` With the following result Update: February 24, 2020 As mentioned by @HenriMenke, marching_cubes has been renamed to marching_cubes_lewiner. The \"new\" snippet is the following. ```py import numpy as np from numpy import cos, pi from skimage.measure import marching_cubes_lewiner import matplotlib.pyplot as plt from mpl_toolkits.mplot3d import Axes3D x, y, z = pi*np.mgrid[-1:1:31j, -1:1:31j, -1:1:31j] vol = cos(x) + cos(y) + cos(z) iso_val=0.0 verts, faces, _, _ = marching_cubes_lewiner(vol, iso_val, spacing=(0.1, 0.1, 0.1)) fig = plt.figure() ax = fig.add_subplot(111, projection='3d') ax.plot_trisurf(verts[:, 0], verts[:,1], faces, verts[:, 2], cmap='Spectral', lw=1) plt.show() ```", "best_answers_score":0.7308, "library_name":"matplotlib", "question_url":"https:\/\/stackoverflow.com\/questions\/6030098\/how-to-display-a-3d-plot-of-a-3d-array-isosurface-with-mplot3d-or-similar", "best_answers_votes":46, "question_length":962, "response_length":2583 }, { "question":"How to draw a rectangle on image How to draw a rectangle on an image, like this: ``` import matplotlib.pyplot as plt from PIL import Image import numpy as np im = np.array(Image.open('dog.png'), dtype=np.uint8) plt.imshow(im) ``` To make it clear, I meant to draw a rectangle on top of the image for visualization, not to change the image data. So using matplotlib.patches.Patch would be the best option.", "response":"You can add a Rectangle patch to the matplotlib Axes. 
For example (using the image from the tutorial here): ``` import matplotlib.pyplot as plt import matplotlib.patches as patches from PIL import Image im = Image.open('stinkbug.png') # Create figure and axes fig, ax = plt.subplots() # Display the image ax.imshow(im) # Create a Rectangle patch rect = patches.Rectangle((50, 100), 40, 30, linewidth=1, edgecolor='r', facecolor='none') # Add the patch to the Axes ax.add_patch(rect) plt.show() ```", "best_answers_score":0.7304, "library_name":"matplotlib", "question_url":"https:\/\/stackoverflow.com\/questions\/37435369\/how-to-draw-a-rectangle-on-image", "best_answers_votes":506, "question_length":404, "response_length":497 }, { "question":"Shading an area between two points in a matplotlib plot How do you add a shaded area between two points in a matplotlib plot? In the example matplotlib plot below, I manually added the shaded, yellow area using Skitch. I'd like to know how to do this sans-Skitch.", "response":"You can just use the function axvspan. The advantage to this is that the vertical region (or horizontal, in the case of axhspan) will remain shaded regardless of how you pan\/zoom the plot. There's a complete example here. See a simple example below: ``` import numpy as np import matplotlib.pyplot as plt x = np.linspace(0, 20, 500) y = np.cos(3*x) - 2*np.cos(5*x) + 0.5*np.cos(6*x) a = 5 b = 15 plt.axvspan(a, b, color='y', alpha=0.5, lw=0) plt.plot(x, y) plt.savefig('shade.png', dpi=300) plt.show() ``` That gives as a result", "best_answers_score":0.7304, "library_name":"matplotlib", "question_url":"https:\/\/stackoverflow.com\/questions\/3681872\/shading-an-area-between-two-points-in-a-matplotlib-plot", "best_answers_votes":62, "question_length":263, "response_length":528 }, { "question":"how to set \"camera position\" for 3d plots using python\/matplotlib? I'm learning how to use mplot3d to produce nice plots of 3d data and I'm pretty happy so far. 
What I am trying to do at the moment is a little animation of a rotating surface. For that purpose, I need to set a camera position for the 3D projection. I guess this must be possible since a surface can be rotated using the mouse when using matplotlib interactively. But how can I do this from a script? I found a lot of transforms in mpl_toolkits.mplot3d.proj3d but I could not find out how to use these for my purpose and I didn't find any example for what I'm trying to do.", "response":"By \"camera position,\" it sounds like you want to adjust the elevation and the azimuth angle that you use to view the 3D plot. You can set this with ax.view_init. I've used the below script to first create the plot, then I determined a good elevation, or elev, from which to view my plot. I then adjusted the azimuth angle, or azim, to vary the full 360deg around my plot, saving the figure at each instance (and noting which azimuth angle as I saved the plot). For a more complicated camera pan, you can adjust both the elevation and angle to achieve the desired effect. 
``` from mpl_toolkits.mplot3d import Axes3D ax = Axes3D(fig) ax.scatter(xx,yy,zz, marker='o', s=20, c=\"goldenrod\", alpha=0.6) for ii in xrange(0,360,1): ax.view_init(elev=10., azim=ii) savefig(\"movie%d.png\" % ii) ```", "best_answers_score":0.7293, "library_name":"matplotlib", "question_url":"https:\/\/stackoverflow.com\/questions\/12904912\/how-to-set-camera-position-for-3d-plots-using-python-matplotlib", "best_answers_votes":234, "question_length":639, "response_length":787 }, { "question":"sklearn plot confusion matrix with labels I want to plot a confusion matrix to visualize the classifer's performance, but it shows only the numbers of the labels, not the labels themselves: ``` from sklearn.metrics import confusion_matrix import pylab as pl y_test=['business', 'business', 'business', 'business', 'business', 'business', 'business', 'business', 'business', 'business', 'business', 'business', 'business', 'business', 'business', 'business', 'business', 'business', 'business', 'business'] pred=array(['health', 'business', 'business', 'business', 'business', 'business', 'health', 'health', 'business', 'business', 'business', 'business', 'business', 'business', 'business', 'business', 'health', 'health', 'business', 'health'], dtype='|S8') cm = confusion_matrix(y_test, pred) pl.matshow(cm) pl.title('Confusion matrix of the classifier') pl.colorbar() pl.show() ``` How can I add the labels (health, business..etc) to the confusion matrix?", "response":"UPDATE: Check the ConfusionMatrixDisplay OLD ANSWER: I think it's worth mentioning the use of seaborn.heatmap here. 
``` import seaborn as sns import matplotlib.pyplot as plt ax = plt.subplot() sns.heatmap(cm, annot=True, fmt='g', ax=ax); #annot=True to annotate cells, fmt='g' to disable scientific notation # labels, title and ticks ax.set_xlabel('Predicted labels'); ax.set_ylabel('True labels'); ax.set_title('Confusion Matrix'); ax.xaxis.set_ticklabels(['business', 'health']); ax.yaxis.set_ticklabels(['business', 'health']); ```", "best_answers_score":0.7292, "library_name":"matplotlib", "question_url":"https:\/\/stackoverflow.com\/questions\/19233771\/sklearn-plot-confusion-matrix-with-labels", "best_answers_votes":108, "question_length":959, "response_length":532 }, { "question":"automatically position text box in plot Is there a way of telling pyplot.text() a location like you can with pyplot.legend()? Something like the legend argument would be excellent: ``` plt.legend(loc=\"upper left\") ``` I am trying to label subplots with different axes using letters (e.g. \"A\",\"B\"). I figure there's got to be a better way than manually estimating the position.", "response":"Just use annotate and specify axis coordinates.
For example, \"upper left\" would be: ``` plt.annotate('Something', xy=(0.05, 0.95), xycoords='axes fraction') ``` You could also get fancier and specify a constant offset in points: ``` plt.annotate('Something', xy=(0, 1), xytext=(12, -12), va='top', xycoords='axes fraction', textcoords='offset points') ``` For more explanation see the examples here and the more detailed examples here.", "best_answers_score":0.7265, "library_name":"matplotlib", "question_url":"https:\/\/stackoverflow.com\/questions\/7045729\/automatically-position-text-box-in-plot", "best_answers_votes":77, "question_length":376, "response_length":434 }, { "question":"Label axes on Seaborn Barplot I'm trying to use my own labels for a Seaborn barplot with the following code: ```py import pandas as pd import seaborn as sns fake = pd.DataFrame({'cat': ['red', 'green', 'blue'], 'val': [1, 2, 3]}) fig = sns.barplot(x = 'val', y = 'cat', data = fake, color = 'black') fig.set_axis_labels('Colors', 'Values') ``` However, I get an error that: ```none AttributeError: 'AxesSubplot' object has no attribute 'set_axis_labels' ``` Why am I getting this error?", "response":"Seaborn's barplot returns an axis-object (not a figure). This means you can do the following: ``` import pandas as pd import seaborn as sns import matplotlib.pyplot as plt fake = pd.DataFrame({'cat': ['red', 'green', 'blue'], 'val': [1, 2, 3]}) ax = sns.barplot(x = 'val', y = 'cat', data = fake, color = 'black') ax.set(xlabel='common xlabel', ylabel='common ylabel') plt.show() ```", "best_answers_score":0.7248, "library_name":"matplotlib", "question_url":"https:\/\/stackoverflow.com\/questions\/31632637\/label-axes-on-seaborn-barplot", "best_answers_votes":422, "question_length":486, "response_length":383 }, { "question":"How do I add space between the ticklabels and the axes I've increased the font of my ticklabels successfully, but now they're too close to the axis.
I'd like to add a little breathing room between the ticklabels and the axis.", "response":"If you don't want to change the spacing globally (by editing your rcParams), and want a cleaner approach, try this: ``` ax.tick_params(axis='both', which='major', pad=15) ``` or for just the x axis: ``` ax.tick_params(axis='x', which='major', pad=15) ``` or the y axis: ``` ax.tick_params(axis='y', which='major', pad=15) ```", "best_answers_score":0.7248, "library_name":"matplotlib", "question_url":"https:\/\/stackoverflow.com\/questions\/2969867\/how-do-i-add-space-between-the-ticklabels-and-the-axes", "best_answers_votes":145, "question_length":225, "response_length":295 }, { "question":"Matplotlib plot is a no-show When I run this code ``` import pandas as pd import numpy as np def add_prop(group): births = group.births.astype(float) group['prop'] = births\/births.sum() return group pieces = [] columns = ['name', 'sex', 'births'] for year in range(1880, 2012): path = 'yob%d.txt' % year frame = pd.read_csv(path, names = columns) frame['year'] = year pieces.append(frame) names = pd.concat(pieces, ignore_index = True) total_births = names.pivot_table('births', rows = 'year', cols = 'sex', aggfunc = sum) total_births.plot(title = 'Total Births by sex and year') ``` I get no plot. This is from Wes McKinney's book on using Python for data analysis. Can anyone point me in the right direction?", "response":"Put ``` import matplotlib.pyplot as plt ``` at the top, and ``` plt.show() ``` at the end.", "best_answers_score":0.7247, "library_name":"matplotlib", "question_url":"https:\/\/stackoverflow.com\/questions\/16522380\/matplotlib-plot-is-a-no-show", "best_answers_votes":163, "question_length":711, "response_length":90 }, { "question":"How to change the font size on a matplotlib plot How does one change the font size for all elements (ticks, labels, title) on a matplotlib plot?
I know how to change the tick label sizes, this is done with: ``` import matplotlib matplotlib.rc('xtick', labelsize=20) matplotlib.rc('ytick', labelsize=20) ``` But how does one change the rest?", "response":"From the matplotlib documentation, ``` font = {'family' : 'normal', 'weight' : 'bold', 'size' : 22} matplotlib.rc('font', **font) ``` This sets the font of all items to the font specified by the kwargs object, font. Alternatively, you could also use the rcParams update method as suggested in this answer: ``` matplotlib.rcParams.update({'font.size': 22}) ``` or ``` import matplotlib.pyplot as plt plt.rcParams.update({'font.size': 22}) ``` You can find a full list of available properties on the Customizing matplotlib page.", "best_answers_score":0.7245, "library_name":"matplotlib", "question_url":"https:\/\/stackoverflow.com\/questions\/3899980\/how-to-change-the-font-size-on-a-matplotlib-plot", "best_answers_votes":1168, "question_length":340, "response_length":526 }, { "question":"How to plot two columns of a pandas data frame using points I have a pandas dataframe and would like to plot values from one column versus the values from another column. Fortunately, there is plot method associated with the dataframes that seems to do what I need: ```py df.plot(x='col_name_1', y='col_name_2') ``` Unfortunately, it looks like among the plot styles (listed here after the kind parameter), there are not points. I can use lines or bars or even density but not points. 
Is there a work around that can help to solve this problem?", "response":"You can specify the style of the plotted line when calling df.plot: ``` df.plot(x='col_name_1', y='col_name_2', style='o') ``` The style argument can also be a dict or list, e.g.: ``` import numpy as np import pandas as pd d = {'one' : np.random.rand(10), 'two' : np.random.rand(10)} df = pd.DataFrame(d) df.plot(style=['o','rx']) ``` All the accepted style formats are listed in the documentation of matplotlib.pyplot.plot.", "best_answers_score":0.7242, "library_name":"matplotlib", "question_url":"https:\/\/stackoverflow.com\/questions\/17812978\/how-to-plot-two-columns-of-a-pandas-data-frame-using-points", "best_answers_votes":153, "question_length":544, "response_length":424 }, { "question":"Line plot with data points in pandas Using pandas I can easily make a line plot: ``` import pandas as pd import numpy as np %matplotlib inline # to use it in jupyter notebooks df = pd.DataFrame(np.random.randn(50, 4), index=pd.date_range('1\/1\/2000', periods=50), columns=list('ABCD')) df = df.cumsum() df.plot(); ``` But I can't figure out how to also plot the data as points over the lines, as in this example: This matplotlib example seems to suggest the direction, but I can't find how to do it using pandas plotting capabilities. And I am specially interested in learning how to do it with pandas because I am always working with dataframes. Any clues?", "response":"You can use the style kwarg to the df.plot command. From the docs: style : list or dict matplotlib line style per column So, you could either just set one linestyle for all the lines, or a different one for each line. e.g. 
this does something similar to what you asked for: ``` df.plot(style='.-') ``` To define a different marker and linestyle for each line, you can use a list: ``` df.plot(style=['+-','o-','.--','s:']) ``` You can also pass the markevery kwarg onto matplotlib's plot command, to only draw markers at a given interval ``` df.plot(style='.-', markevery=5) ```", "best_answers_score":0.7233, "library_name":"matplotlib", "question_url":"https:\/\/stackoverflow.com\/questions\/43941245\/line-plot-with-data-points-in-pandas", "best_answers_votes":136, "question_length":656, "response_length":577 }, { "question":"How to plot 2 seaborn lmplots side-by-side? Plotting 2 distplots or scatterplots in a subplot works great: ``` import matplotlib.pyplot as plt import numpy as np import seaborn as sns import pandas as pd %matplotlib inline # create df x = np.linspace(0, 2 * np.pi, 400) df = pd.DataFrame({'x': x, 'y': np.sin(x ** 2)}) # Two subplots f, (ax1, ax2) = plt.subplots(1, 2, sharey=True) ax1.plot(df.x, df.y) ax1.set_title('Sharing Y axis') ax2.scatter(df.x, df.y) plt.show() ``` But when I do the same with an lmplot instead of either of the other types of charts I get an error: AttributeError: 'AxesSubplot' object has no attribute 'lmplot' Is there any way to plot these chart types side by side?", "response":"You get that error because matplotlib and its objects are completely unaware of seaborn functions. 
Pass your axes objects (i.e., ax1 and ax2) to seaborn.regplot, or you can skip defining those and use the col kwarg of seaborn.lmplot. With your same imports, pre-defining your axes and using regplot looks like this: ``` # create df x = np.linspace(0, 2 * np.pi, 400) df = pd.DataFrame({'x': x, 'y': np.sin(x ** 2)}) df.index.names = ['obs'] df.columns.names = ['vars'] idx = np.array(df.index.tolist(), dtype='float') # make an array of x-values # call regplot on each axes fig, (ax1, ax2) = plt.subplots(ncols=2, sharey=True) sns.regplot(x=idx, y=df['x'], ax=ax1) sns.regplot(x=idx, y=df['y'], ax=ax2) ``` Using lmplot requires your dataframe to be tidy. Continuing from the code above: ``` tidy = ( df.stack() # pull the columns into row variables .to_frame() # convert the resulting Series to a DataFrame .reset_index() # pull the resulting MultiIndex into the columns .rename(columns={0: 'val'}) # rename the unnamed column ) sns.lmplot(x='obs', y='val', col='vars', hue='vars', data=tidy) ```", "best_answers_score":0.7231, "library_name":"matplotlib", "question_url":"https:\/\/stackoverflow.com\/questions\/33049884\/how-to-plot-2-seaborn-lmplots-side-by-side", "best_answers_votes":62, "question_length":694, "response_length":1095 }, { "question":"surface plots in matplotlib I have a list of 3-tuples representing a set of points in 3D space. I want to plot a surface that covers all these points. The plot_surface function in the mplot3d package requires as arguments X,Y and Z to be 2d arrays. Is plot_surface the right function to plot a surface and how do I transform my data into the required format? ``` data = [(x1,y1,z1),(x2,y2,z2),.....,(xn,yn,zn)] ```", "response":"For surfaces it's a bit different than a list of 3-tuples; you should pass in a grid for the domain in 2d arrays. If all you have is a list of 3d points, rather than some function f(x, y) -> z, then you will have a problem because there are multiple ways to triangulate that 3d point cloud into a surface.
Here's a smooth surface example: ``` import numpy as np from mpl_toolkits.mplot3d import Axes3D # Axes3D import has side effects, it enables using projection='3d' in add_subplot import matplotlib.pyplot as plt import random def fun(x, y): return x**2 + y fig = plt.figure() ax = fig.add_subplot(111, projection='3d') x = y = np.arange(-3.0, 3.0, 0.05) X, Y = np.meshgrid(x, y) zs = np.array(fun(np.ravel(X), np.ravel(Y))) Z = zs.reshape(X.shape) ax.plot_surface(X, Y, Z) ax.set_xlabel('X Label') ax.set_ylabel('Y Label') ax.set_zlabel('Z Label') plt.show() ```", "best_answers_score":0.7228, "library_name":"matplotlib", "question_url":"https:\/\/stackoverflow.com\/questions\/9170838\/surface-plots-in-matplotlib", "best_answers_votes":185, "question_length":412, "response_length":866 }, { "question":"Matplotlib plot with variable line width Is it possible to plot a line with variable line width in matplotlib? For example: ``` from pylab import * x = [1, 2, 3, 4, 5] y = [1, 2, 2, 0, 0] width = [.5, 1, 1.5, .75, .75] plot(x, y, linewidth=width) ``` This doesn't work because linewidth expects a scalar. Note: I'm aware of *fill_between()* and *fill_betweenx()*. Because these only fill in x or y direction, these do not do justice to cases where you have a slanted line. It is desirable for the fill to always be normal to the line. That is why a variable width line is sought.", "response":"Use LineCollections. 
A way to do it along the lines of this Matplotlib example is ``` import numpy as np from matplotlib.collections import LineCollection import matplotlib.pyplot as plt x = np.linspace(0,4*np.pi,10000) y = np.cos(x) lwidths=1+x[:-1] points = np.array([x, y]).T.reshape(-1, 1, 2) segments = np.concatenate([points[:-1], points[1:]], axis=1) lc = LineCollection(segments, linewidths=lwidths,color='blue') fig,a = plt.subplots() a.add_collection(lc) a.set_xlim(0,4*np.pi) a.set_ylim(-1.1,1.1) fig.show() ```", "best_answers_score":0.7214, "library_name":"matplotlib", "question_url":"https:\/\/stackoverflow.com\/questions\/19390895\/matplotlib-plot-with-variable-line-width", "best_answers_votes":94, "question_length":579, "response_length":522 }, { "question":"seaborn scatterplot marker size for ALL markers I can't find out anywhere how to change the marker size on seaborn scatterplots. There is a size option listed in the documentation but it is only for when you want variable size across points. I want the same size for all points but larger than the default! I tried making a new column of integers in my dataframe and set that as the size, but it looks like the actual value doesn't matter, it changes the marker size on a relative basis, so in this case all the markers were still the same size as the default. Here's some code: ```py ax = sns.scatterplot(x=\"Data Set Description\", y=\"R Squared\", data=mean_df) plt.show() ``` I just tried something and it worked, not sure if it's the best method though. I added size=[1, 1, 1, 1, 1, 1] and sizes=(500, 500). So essentially I'm setting all sizes to be the same, and the range of sizes to be only at 500.", "response":"You can do so by giving a value to the s argument to change the marker size. 
Example: ``` ax = sns.scatterplot(x=\"Data Set Description\", y=\"R Squared\", data=mean_df, s=10) ```", "best_answers_score":0.7209, "library_name":"matplotlib", "question_url":"https:\/\/stackoverflow.com\/questions\/52785101\/seaborn-scatterplot-marker-size-for-all-markers", "best_answers_votes":202, "question_length":903, "response_length":175 }, { "question":"How to show Y axis label horizontally I'm creating very simple charts with matplotlib \/ pylab Python module. The letter \"y\" that labels the Y axis is on its side. You would expect this if the label was longer, such as a word, so as not to extend the outside of the graph to the left too much. But for a one-letter label, this doesn't make sense; the label should be upright. How can I show the \"y\" horizontally?", "response":"Expanding on the accepted answer, when we work with a particular axes object ax: ``` ax.set_ylabel('abc', rotation=0, fontsize=20, labelpad=20) ``` Note that often the labelpad will need to be adjusted manually too \u2014 otherwise the \"abc\" will intrude onto the plot. From brief experiments I'm guessing that labelpad is the offset between the bounding box of the tick labels and the y-label's centre. (So, not quite the padding the name implies \u2014 it would have been more intuitive if this was the gap to the label's bounding box instead.)", "best_answers_score":0.72, "library_name":"matplotlib", "question_url":"https:\/\/stackoverflow.com\/questions\/27671748\/how-to-show-y-axis-label-horizontally", "best_answers_votes":143, "question_length":411, "response_length":536 }, { "question":"How do I extend the margin at the bottom of a figure in Matplotlib? The following screenshot shows my x-axis. I added some labels and rotated them by 90 degrees in order to better read them. However, pyplot truncates the bottom such that I'm not able to completely read the labels. 
How do I extend the bottom margin in order to see the complete labels?", "response":"Two retroactive ways: ```py fig, ax = plt.subplots() # ... fig.tight_layout() ``` Or ```py fig.subplots_adjust(bottom=0.2) # or whatever ``` Here's a subplots_adjust example: http:\/\/matplotlib.org\/examples\/pylab_examples\/subplots_adjust.html (but I prefer tight_layout)", "best_answers_score":0.7183, "library_name":"matplotlib", "question_url":"https:\/\/stackoverflow.com\/questions\/27878217\/how-do-i-extend-the-margin-at-the-bottom-of-a-figure-in-matplotlib", "best_answers_votes":97, "question_length":352, "response_length":269 }, { "question":"Generating movie from python without saving individual frames to files I would like to create an h264 or divx movie from frames that I generate in a python script in matplotlib. There are about 100k frames in this movie. In examples on the web [eg. 1], I have only seen the method of saving each frame as a png and then running mencoder or ffmpeg on these files. In my case, saving each frame is impractical. Is there a way to take a plot generated from matplotlib and pipe it directly to ffmpeg, generating no intermediate files? Programming with ffmpeg's C-api is too difficult for me [eg. 2]. Also, I need an encoding that has good compression such as x264 as the movie file will otherwise be too large for a subsequent step. So it would be great to stick with mencoder\/ffmpeg\/x264. Is there something that can be done with pipes [3]? [1] http:\/\/matplotlib.sourceforge.net\/examples\/animation\/movie_demo.html [2] How does one encode a series of images into H264 using the x264 C API? [3] http:\/\/www.ffmpeg.org\/ffmpeg-doc.html#SEC41", "response":"This functionality is now (at least as of 1.2.0, maybe 1.1) baked into matplotlib via the MovieWriter class and its sub-classes in the animation module. You also need to install ffmpeg in advance.
``` import matplotlib.animation as animation import numpy as np from pylab import * dpi = 100 def ani_frame(): fig = plt.figure() ax = fig.add_subplot(111) ax.set_aspect('equal') ax.get_xaxis().set_visible(False) ax.get_yaxis().set_visible(False) im = ax.imshow(rand(300,300),cmap='gray',interpolation='nearest') im.set_clim([0,1]) fig.set_size_inches([5,5]) tight_layout() def update_img(n): tmp = rand(300,300) im.set_data(tmp) return im #legend(loc=0) ani = animation.FuncAnimation(fig,update_img,300,interval=30) writer = animation.writers['ffmpeg'](fps=30) ani.save('demo.mp4',writer=writer,dpi=dpi) return ani ``` Documentation for animation", "best_answers_score":0.718, "library_name":"matplotlib", "question_url":"https:\/\/stackoverflow.com\/questions\/4092927\/generating-movie-from-python-without-saving-individual-frames-to-files", "best_answers_votes":58, "question_length":1033, "response_length":845 }, { "question":"Top label for matplotlib colorbars By default, matplotlib would position colorbar labels alongside the vertical colorbars. What is the best way to force the label to be on top of a colorbar? Currently my solution needs adjusting labelpad and y values depending on size of the label: ```py import numpy as np import matplotlib.pylab as plt dat = np.random.randn(10,10) plt.imshow(dat, interpolation='none') clb = plt.colorbar() clb.set_label('label', labelpad=-40, y=1.05, rotation=0) plt.show() ``` Is there a better, more generic way to do this?", "response":"You could set the title of the colorbar axis (which appears above the axis), rather than the label (which appears along the long axis). To access the colorbar's Axes, you can use clb.ax. You can then use set_title, in the same way you can for any other Axes instance. 
For example: ``` import numpy as np import matplotlib.pylab as plt dat = np.random.randn(10,10) plt.imshow(dat, interpolation='none') clb = plt.colorbar() clb.ax.set_title('This is a title') plt.show() ```", "best_answers_score":0.717, "library_name":"matplotlib", "question_url":"https:\/\/stackoverflow.com\/questions\/33737427\/top-label-for-matplotlib-colorbars", "best_answers_votes":82, "question_length":546, "response_length":473 }, { "question":"Scatter plot with different text at each data point I am trying to make a scatter plot and annotate data points with different numbers from a list. So, for example, I want to plot y vs x and annotate with corresponding numbers from n. ``` y = [2.56422, 3.77284, 3.52623, 3.51468, 3.02199] x = [0.15, 0.3, 0.45, 0.6, 0.75] n = [58, 651, 393, 203, 123] ax = fig.add_subplot(111) ax1.scatter(z, y, fmt='o') ``` Any ideas?", "response":"I'm not aware of any plotting method which takes arrays or lists but you could use annotate() while iterating over the values in n. ``` import matplotlib.pyplot as plt x = [0.15, 0.3, 0.45, 0.6, 0.75] y = [2.56422, 3.77284, 3.52623, 3.51468, 3.02199] n = [58, 651, 393, 203, 123] fig, ax = plt.subplots() ax.scatter(x, y) for i, txt in enumerate(n): ax.annotate(txt, (x[i], y[i])) ``` There are a lot of formatting options for annotate(), see the matplotlib website:", "best_answers_score":0.7166, "library_name":"matplotlib", "question_url":"https:\/\/stackoverflow.com\/questions\/14432557\/scatter-plot-with-different-text-at-each-data-point", "best_answers_votes":818, "question_length":418, "response_length":466 }, { "question":"Giving graphs a subtitle I want to give my graph a title in big 18pt font, then a subtitle below it in smaller 10pt font. How can I do this in matplotlib? It appears the title() function only takes one single string with a single fontsize attribute. 
There has to be a way to do this, but how?", "response":"What I do is use the title() function for the subtitle and the suptitle() for the main title (they can take different font size arguments).", "best_answers_score":0.7165, "library_name":"matplotlib", "question_url":"https:\/\/stackoverflow.com\/questions\/1388450\/giving-graphs-a-subtitle", "best_answers_votes":113, "question_length":292, "response_length":139 }, { "question":"Plot two histograms on single chart I created a histogram plot using data from a file and no problem. Now I wanted to superpose data from another file in the same histogram, so I do something like this ``` n,bins,patchs = ax.hist(mydata1,100) n,bins,patchs = ax.hist(mydata2,100) ``` but the problem is that for each interval, only the bar with the highest value appears, and the other is hidden. I wonder how could I plot both histograms at the same time with different colors.", "response":"Here you have a working example: ``` import random import numpy from matplotlib import pyplot x = [random.gauss(3,1) for _ in range(400)] y = [random.gauss(4,2) for _ in range(400)] bins = numpy.linspace(-10, 10, 100) pyplot.hist(x, bins, alpha=0.5, label='x') pyplot.hist(y, bins, alpha=0.5, label='y') pyplot.legend(loc='upper right') pyplot.show() ```", "best_answers_score":0.7144, "library_name":"matplotlib", "question_url":"https:\/\/stackoverflow.com\/questions\/6871201\/plot-two-histograms-on-single-chart", "best_answers_votes":614, "question_length":478, "response_length":354 }, { "question":"Why do many examples use `fig, ax = plt.subplots()` I'm learning to use matplotlib by studying examples, and a lot of examples seem to include a line like the following before creating a single plot... ``` fig, ax = plt.subplots() ``` Here are some examples... Modify tick label text http:\/\/matplotlib.org\/examples\/pylab_examples\/boxplot_demo2.html I see this function used a lot, even though the example is only attempting to create a single chart. 
Is there some other advantage? The official demo for subplots() also uses f, ax = subplots when creating a single chart, and it only ever references ax after that. This is the code they use. ``` # Just a figure and one subplot f, ax = plt.subplots() ax.plot(x, y) ax.set_title('Simple plot') ```", "response":"plt.subplots() is a function that returns a tuple containing a figure and axes object(s). Thus when using fig, ax = plt.subplots() you unpack this tuple into the variables fig and ax. Having fig is useful if you want to change figure-level attributes or save the figure as an image file later (e.g. with fig.savefig('yourfilename.png')). You certainly don't have to use the returned figure object but many people do use it later so it's common to see. Also, all axes objects (the objects that have plotting methods), have a parent figure object anyway, thus: ``` fig, ax = plt.subplots() ``` is more concise than this: ``` fig = plt.figure() ax = fig.add_subplot(111) ```", "best_answers_score":0.7142, "library_name":"matplotlib", "question_url":"https:\/\/stackoverflow.com\/questions\/34162443\/why-do-many-examples-use-fig-ax-plt-subplots", "best_answers_votes":567, "question_length":745, "response_length":671 }, { "question":"How is order of items in matplotlib legend determined? I am having to reorder items in a legend, when I don't think I should have to. I try: ``` from pylab import * clf() ax=gca() ht=ax.add_patch(Rectangle((1,1),1,1,color='r',label='Top',alpha=.1)) h1=ax.bar(1,2,label='Middle') hb=ax.add_patch(Rectangle((1,1),1,1,color='k',label='Bottom',alpha=.11)) legend() show() ``` and end up with Bottom above Middle. How can I get the right order? Is it not determined by creation order? Update: The following can be used to force the order. I think this may be the simplest way to do it, and that seems awkward. The question is what determines the original order? 
hh=[ht,h1,hb] legend([ht,h1.patches[0],hb],[H.get_label() for H in hh]) ```", "response":"A slight variation on some other answers. The list order should have the same length as the number of legend items, and it specifies the new order manually. ``` handles, labels = plt.gca().get_legend_handles_labels() order = [0,2,1] plt.legend([handles[idx] for idx in order],[labels[idx] for idx in order]) ```", "best_answers_score":0.7129, "library_name":"matplotlib", "question_url":"https:\/\/stackoverflow.com\/questions\/22263807\/how-is-order-of-items-in-matplotlib-legend-determined", "best_answers_votes":145, "question_length":736, "response_length":307 }, { "question":"No module named when using PyInstaller I try to compile a Python project under Windows 7 using PyInstaller. The project works fine, there are no issues, however when I try to compile it the result doesn't work. Though I get no warnings during compilation there are many in the warnmain.txt file in the build directory: warnmain.txt I don't really understand those warnings, for example \"no module named numpy.pi\" since numpy.pi is no module but a number. I never tried to import numpy.pi. I did import numpy and matplotlib explicitly. In addition I'm using PyQt4. I thought the error might be related to those libraries. However I was able to compile a simple script which uses numpy successfully: ``` import sys from PyQt4 import QtGui, QtCore import numpy as np class MainWindow(QtGui.QMainWindow): def __init__(self): QtGui.QMainWindow.__init__(self) self.pb = QtGui.QPushButton(str(np.pi), self) app = QtGui.QApplication(sys.argv) main = MainWindow() main.show() sys.exit(app.exec_()) ``` Successfully here means that the created executable file actually showed the desired output. However there is also a warnmain.txt file created which contains exactly the same 'warnings' as the one before.
So I guess the fact that compiling my actual project does not give any success is not (or at least not only) related to those warnings. But what else could be the error then? The only output during compilation are 'INFO's and none of them is a negative statement. I did not specify an additional hook directory but the hooks were done using the default directory as far as I could read from the compile output, e.g. hook-matplotlib was executed. I could not see any hook for numpy, nor for my small example script, but this one worked. I used the following imports in my files (not all in the same but in different ones): ``` import numpy as np import matplotlib.pyplot as ppl from matplotlib.backends.backend_qt4agg import FigureCanvasQTAgg as FigureCanvas from matplotlib.backends.backend_qt4agg import NavigationToolbar2QTAgg as NavigationToolbar from PyQt4 import QtGui, QtCore import json import sys import numpy # added this one later import matplotlib # added this one later ``` Since PyInstaller does not give any errors\/warnings I could not figure out if the problem is related to the libraries or if there is something else to be considered.", "response":"Had a similar problem with no module named FileDialog. Discovered that with version 3.2, I could use pyinstaller --hidden-import FileDialog ... instead of modifying my main script. See Listing Hidden Imports documentation", "best_answers_score":0.7129, "library_name":"matplotlib", "question_url":"https:\/\/stackoverflow.com\/questions\/25733467\/no-module-named-when-using-pyinstaller", "best_answers_votes":40, "question_length":2358, "response_length":221 }, { "question":"How to find the intersection of two graphs Let 0 <= x <= 1. I have two columns f and g of length 5000 respectively. Now I plot: ``` plt.plot(x, f, '-') plt.plot(x, g, '*') ``` I want to find the point 'x' where the curves intersect. I don't want to find the intersection of f and g, since
I can do it simply with: ``` set(f) & set(g) ```", "response":"You can use np.sign in combination with np.diff and np.argwhere to obtain the indices of points where the lines cross (in this case, the points are [ 0, 149, 331, 448, 664, 743]): ``` import numpy as np import matplotlib.pyplot as plt x = np.arange(0, 1000) f = np.arange(0, 1000) g = np.sin(np.arange(0, 10, 0.01) * 2) * 1000 plt.plot(x, f, '-') plt.plot(x, g, '-') idx = np.argwhere(np.diff(np.sign(f - g))).flatten() plt.plot(x[idx], f[idx], 'ro') plt.show() ``` First it calculates f - g and the corresponding signs using np.sign. Applying np.diff reveals all the positions where the sign changes (i.e., where the lines cross). Using np.argwhere gives us the exact indices.", "best_answers_score":0.7128, "library_name":"matplotlib", "question_url":"https:\/\/stackoverflow.com\/questions\/28766692\/how-to-find-the-intersection-of-two-graphs", "best_answers_votes":151, "question_length":331, "response_length":671 }, { "question":"Python basemap module impossible to import I am having trouble importing the basemap module of mpl_toolkits in python. Here is what I get when I run the test.py script from the module directory: ``` \/usr\/lib\/python2.7\/dist-packages\/mpl_toolkits\/basemap$ python test.py Traceback (most recent call last): File \"test.py\", line 1, in from mpl_toolkits.basemap import Basemap, shiftgrid ImportError: No module named basemap ``` I can't get it since sys.path gives a list of paths where I am sure the directory \"basemap\" is, in the \"mpl_toolkits\" directory. There is no problem importing mpl_toolkits.
Here is a thing I tried, to manually add the path, and the result: ``` >>> import sys >>> sys.path.append('\/usr\/lib\/python2.7\/dist-packages\/mpl_toolkits\/basemap') >>> import basemap Traceback (most recent call last): File \"\", line 1, in File \"basemap\/__init__.py\", line 30, in from mpl_toolkits.basemap import pyproj ImportError: No module named basemap ``` I tried to uninstall and reinstall basemap from source (carefully following these instructions), from apt-get, from conda, but it does not change anything: I can't import basemap. Thank you for your help", "response":"I was facing this issue and I was able to solve it using anaconda. After activating my profile: ``` source activate MyProfileName conda install basemap from mpl_toolkits.basemap import Basemap import matplotlib.pyplot as plt # setup Lambert Conformal basemap. # set resolution=None to skip processing of boundary datasets. m = Basemap(width=12000000,height=9000000,projection='lcc', resolution=None,lat_1=45.,lat_2=55,lat_0=50,lon_0=-107.) m.bluemarble() plt.show() ```", "best_answers_score":0.7125, "library_name":"matplotlib", "question_url":"https:\/\/stackoverflow.com\/questions\/40374441\/python-basemap-module-impossible-to-import", "best_answers_votes":41, "question_length":1156, "response_length":467 }, { "question":"Need to add space between SubPlots for X axis label, maybe remove labelling of axis notches Looking to add in vertical space between plotted graphs to allow an X-Axis label to show: Each graph needs to have space to show the day, currently the last 2 graphs are the only ones that show simply because the graphs are overlapping it. Also curious if I could actually remove the notch labels for the X-Axis for the graphs above the ones marked Thursday\/Friday, i.e. the bottom X-axis is the only one that shows. Same for the Y-Axis, but only the graphs on the left having the scale shown. *Unfortunately I can't post an image to show this since I don't have enough rep.
Code snippet: ``` import matplotlib.pyplot as pyplot fig = pyplot.figure() ax1 = fig.add_subplot(4,2,1) ax1.set_yscale('log') ax2 = fig.add_subplot(4,2,2, sharex=ax1, sharey=ax1) ax3 = fig.add_subplot(4,2,3, sharex=ax2, sharey=ax2) ax4 = fig.add_subplot(4,2,4, sharex=ax3, sharey=ax3) ax5 = fig.add_subplot(4,2,5, sharex=ax4, sharey=ax4) ax6 = fig.add_subplot(4,2,6, sharex=ax5, sharey=ax5) ax7 = fig.add_subplot(4,2,7, sharex=ax6, sharey=ax6) ax1.plot(no_dict[\"Saturday\"],'k.-',label='Saturday') ax1.set_xlabel('Saturday') ax1.axis([0,24,0,10000]) pyplot.suptitle('Title') pyplot.xlabel('Hour in 24 Hour Format') ax2.plot(no_dict[\"Sunday\"],'b.-',label='Sunday') ax2.set_xlabel('Sunday') ... ```", "response":"Use subplots_adjust. In your case this looks good: ``` fig.subplots_adjust(hspace=.5) ``` To remove the tick labels, do this: ``` ax1.set_xticklabels([]) ``` Similar for the yticklabels. However, you cannot share the x-axis with the plots that do have tick labels.", "best_answers_score":0.7124, "library_name":"matplotlib", "question_url":"https:\/\/stackoverflow.com\/questions\/5159065\/need-to-add-space-between-subplots-for-x-axis-label-maybe-remove-labelling-of-a", "best_answers_votes":113, "question_length":1360, "response_length":263 }, { "question":"How to place two different legends on the same graph I have a plot where different colors are used for different parameters, and where different line styles are used for different algorithms. The goal is to compare the results of the different algorithms performed with similar parameters. It means in total I use 4 different colors, and 3 different line styles, for a total of 12 plots on the same graph. I actually build the legend based on colors, associating each color with the corresponding parameter. Now I'd like to display a second legend on the same graph, with the meaning of each line style. Is it possible to achieve that? How?
Here is what my code looks like actually: ``` colors = ['b', 'r', 'g', 'c'] cc = cycle(colors) for p in parameters: d1 = algo1(p) d2 = algo2(p) d3 = algo3(p) pyplot.hold(True) c = next(cc) pyplot.plot(d1, '-', color=c, label=\"d1\") pyplot.plot(d2, '--', color=c) pyplot.plot(d3, '.-', color=c) pyplot.legend() ```", "response":"There's a section in the matplotlib documentation on that exact subject. Here's code for your specific example: ``` import itertools from matplotlib import pyplot colors = ['b', 'r', 'g', 'c'] cc = itertools.cycle(colors) plot_lines = [] for p in parameters: d1 = algo1(p) d2 = algo2(p) d3 = algo3(p) pyplot.hold(True) c = next(cc) l1, = pyplot.plot(d1, '-', color=c) l2, = pyplot.plot(d2, '--', color=c) l3, = pyplot.plot(d3, '.-', color=c) plot_lines.append([l1, l2, l3]) legend1 = pyplot.legend(plot_lines[0], [\"algo1\", \"algo2\", \"algo3\"], loc=1) pyplot.legend([l[0] for l in plot_lines], parameters, loc=4) pyplot.gca().add_artist(legend1) ``` Here's an example of its output:", "best_answers_score":0.7113, "library_name":"matplotlib", "question_url":"https:\/\/stackoverflow.com\/questions\/12761806\/how-to-place-two-different-legends-on-the-same-graph", "best_answers_votes":153, "question_length":948, "response_length":679 }, { "question":"Matplotlib scatter plot with unknown error I am attempting to create a scatter plot. I have a list of numbers from 0 - 17 as well as an array with 18 values. I can plot the data as a line plot but when I try to plot as a scatter, I get an error message I do not understand: ```none TypeError: ufunc 'sqrt' not supported for the input types, and the inputs could not be safely coerced to any supported types according to the casting rule ''safe'' ``` What does this error message mean and how can I get the data to plot as a scatter?
```py import numpy as np import matplotlib.pyplot as plt y = [7316.0, 7453.25, 7518.25, 7711.5, 7448.0, 7210.25, 7416.75, 6960.75, 7397.75, 6397.5, 5522.75, 5139.0, 5034.75, 4264.75, 5106.0, 3489.5, 4712.0, 4770.0] x = np.arange(0,18,1) plt.rcParams['legend.loc'] = 'best' plt.figure(1) plt.xlim(0, 20) plt.ylim(0, 10000) plt.scatter(x, y, 'r') plt.show() ```", "response":"Check the scatter documentation. The third argument is for the size of the points and should be a scalar or array_like. I assume 'r' is meant to be the color, so do the following: ``` plt.scatter(x, y, c='r') ```", "best_answers_score":0.7112, "library_name":"matplotlib", "question_url":"https:\/\/stackoverflow.com\/questions\/35733223\/matplotlib-scatter-plot-with-unknown-error", "best_answers_votes":97, "question_length":892, "response_length":185 }, { "question":"Unable to show legend in seaborn distplot I am new to plotting in python and am trying the following code to plot distributions in seaborn, but I am unable to see the legend, i.e., test_label1 and test_label2, on the plot. ``` import matplotlib.pylab as plt import seaborn as sns import numpy as np plt.figure(\"Test Plots\") lst1 = list(np.random.rand(10)) lst2 = list(np.random.rand(10)) sns.distplot(lst1, label='test_label1', color=\"0.25\") sns.distplot(lst2, label='test_label2', color=\"0.25\") plt.show() ```", "response":"As you have already labelled your plots using label= inside your sns.distplot calls, all you have to do is show your legend.
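As a minimal, self-contained sketch of the mechanism (plain matplotlib with made-up random data, so it works regardless of seaborn version): any artist created with a label= keyword is collected by the legend call.

```python
import matplotlib
matplotlib.use('Agg')  # non-interactive backend; drop this line for on-screen use
import matplotlib.pyplot as plt
import numpy as np

# made-up sample data standing in for lst1 / lst2
lst1 = np.random.rand(100)
lst2 = np.random.rand(100)

fig, ax = plt.subplots()
ax.hist(lst1, alpha=0.5, label='test_label1')
ax.hist(lst2, alpha=0.5, label='test_label2')
legend = ax.legend()  # collects every artist that was given label=

print([t.get_text() for t in legend.get_texts()])  # -> ['test_label1', 'test_label2']
```

Note that in recent seaborn versions distplot is deprecated in favor of histplot and displot, which accept the same label= keyword, so the same legend call applies there.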
This is done by adding plt.legend() just before plt.show(). More information on matplotlib legends can be found in the documentation.", "best_answers_score":0.7106, "library_name":"matplotlib", "question_url":"https:\/\/stackoverflow.com\/questions\/44968012\/unable-to-show-legend-in-seaborn-distplot", "best_answers_votes":79, "question_length":495, "response_length":254 }, { "question":"Create own colormap using matplotlib and plot color scale I have the following problem: I want to create my own colormap (red-mix-violet-mix-blue) that maps to values between -2 and +2, and I want to use it to color points in my plot. The plot should then have the colorscale to the right. This is how I create the map so far, but I am not really sure if it mixes the colors. ``` cmap = matplotlib.colors.ListedColormap([\"red\",\"violet\",\"blue\"], name='from_list', N=None) m = cm.ScalarMappable(norm=norm, cmap=cmap) ``` That way I map the colors to the values. ``` colors = itertools.cycle([m.to_rgba(1.22), ..]) ``` Then I plot it: ``` for i in range(0, len(array_dg)): plt.plot(array_dg[i], markers.next(),alpha=alpha[i], c=colors.next()) ``` My problems are: 1. I can't plot the color scale. 2. I am not completely sure if my scale is creating a continuous (smooth) colorscale.", "response":"Since the methods used in other answers seem quite complicated for such an easy task, here is a new answer: Instead of a ListedColormap, which produces a discrete colormap, you may use a LinearSegmentedColormap. This can easily be created from a list using the from_list method. ``` import numpy as np import matplotlib.pyplot as plt import matplotlib.colors x,y,c = zip(*np.random.rand(30,3)*4-2) norm=plt.Normalize(-2,2) cmap = matplotlib.colors.LinearSegmentedColormap.from_list(\"\", [\"red\",\"violet\",\"blue\"]) plt.scatter(x,y,c=c, cmap=cmap, norm=norm) plt.colorbar() plt.show() ``` More generally, if you have a list of values (e.g. [-2., -1, 2]) and corresponding colors (e.g.
[\"red\",\"violet\",\"blue\"]), such that the nth value should correspond to the nth color, you can normalize the values and supply them as tuples to the from_list method. ``` import numpy as np import matplotlib.pyplot as plt import matplotlib.colors x,y,c = zip(*np.random.rand(30,3)*4-2) cvals = [-2., -1, 2] colors = [\"red\",\"violet\",\"blue\"] norm=plt.Normalize(min(cvals),max(cvals)) tuples = list(zip(map(norm,cvals), colors)) cmap = matplotlib.colors.LinearSegmentedColormap.from_list(\"\", tuples) plt.scatter(x,y,c=c, cmap=cmap, norm=norm) plt.colorbar() plt.show() ```", "best_answers_score":0.7088, "library_name":"matplotlib", "question_url":"https:\/\/stackoverflow.com\/questions\/16834861\/create-own-colormap-using-matplotlib-and-plot-color-scale", "best_answers_votes":136, "question_length":875, "response_length":1247 }, { "question":"Simple line plots using seaborn I'm trying to plot a ROC curve using seaborn (python). With matplotlib I simply use the plot function: ``` plt.plot(one_minus_specificity, sensitivity, 'bs--') ``` where one_minus_specificity and sensitivity are two lists of paired values. Is there a simple counterpart of the plot function in seaborn? I had a look at the gallery but I didn't find any straightforward method.", "response":"Since seaborn also uses matplotlib to do its plotting, you can easily combine the two. If you only want to adopt the styling of seaborn, the set_style function should get you started: ``` import matplotlib.pyplot as plt import numpy as np import seaborn as sns sns.set_style(\"darkgrid\") plt.plot(np.cumsum(np.random.randn(1000,1))) plt.show() ``` Result:", "best_answers_score":0.7083, "library_name":"matplotlib", "question_url":"https:\/\/stackoverflow.com\/questions\/31069191\/simple-line-plots-using-seaborn", "best_answers_votes":88, "question_length":409, "response_length":352 }, { "question":"How can one display an image using cv2 in Python I've been working with code to display frames from a movie.
The bare bones of the code are as follows: ``` import cv2 import matplotlib.pyplot as plt # Read single frame avi cap = cv2.VideoCapture('singleFrame.avi') rval, frame = cap.read() # Attempt to display using cv2 (doesn't work) cv2.namedWindow(\"Input\") cv2.imshow(\"Input\", frame) #Display image using matplotlib (Works) b,g,r = cv2.split(frame) frame_rgb = cv2.merge((r,g,b)) plt.imshow(frame_rgb) plt.title('Matplotlib') #Give this plot a title, #so I know it's from matplotlib and not cv2 plt.show() ``` Because I can display the image using matplotlib, I know that I'm successfully reading it in. I don't understand why my creation of a window and attempt to show an image using cv2 doesn't work. No cv2 window ever appears. Oddly though, if I create a second cv2 window, the 'input' window appears, but it is only a blank\/white window. What am I missing here?", "response":"As far as I can see, you are doing it almost right. There is one thing missing: ``` cv2.imshow('image',img) cv2.waitKey(0) ``` So probably your window appears but is closed very, very fast.", "best_answers_score":0.7082, "library_name":"matplotlib", "question_url":"https:\/\/stackoverflow.com\/questions\/34966541\/how-can-one-display-an-image-using-cv2-in-python", "best_answers_votes":134, "question_length":970, "response_length":187 }, { "question":"Annotating a 3D scatter plot I'm trying to generate a 3D scatter plot using Matplotlib. I would like to annotate individual points like the 2D case here: How to put individual tags for a matplotlib scatter plot?. I've tried to use this function and consulted the Matplotlib documentation, but it seems that the library does not support 3D annotation.
Does anyone know how to do this?", "response":"Maybe easier via ax.text(...): ``` from matplotlib import pyplot from mpl_toolkits.mplot3d import Axes3D from numpy.random import rand from pylab import figure m=rand(3,3) # m is an array of (x,y,z) coordinate triplets fig = figure() ax = fig.add_subplot(projection='3d') for i in range(len(m)): #plot each point + its index as text above ax.scatter(m[i,0],m[i,1],m[i,2],color='b') ax.text(m[i,0],m[i,1],m[i,2], '%s' % (str(i)), size=20, zorder=1, color='k') ax.set_xlabel('x') ax.set_ylabel('y') ax.set_zlabel('z') pyplot.show() ```", "best_answers_score":0.7081, "library_name":"matplotlib", "question_url":"https:\/\/stackoverflow.com\/questions\/10374930\/annotating-a-3d-scatter-plot", "best_answers_votes":127, "question_length":383, "response_length":534 }, { "question":"Matplotlib: Save figure as file from iPython notebook I am trying to save a Matplotlib figure as a file from an iPython notebook. ``` import matplotlib.pyplot as plt fig = plt.figure() ax = fig.add_axes([1,1,1,1]) ax.plot([1,2]) fig.savefig('test.png') ``` The inline view in the iPython notebook looks good: The file 'test.png' is almost empty though. It looks like the plot is shifted to the top right, you can see the tick labels '1.0' and '0.0' in the corner. How can I produce a file from the iPython notebook that looks like the inline view?", "response":"Problem solved: add the bbox_inches='tight' argument to savefig. ``` import matplotlib.pyplot as plt fig = plt.figure() ax = fig.add_axes([1,1,1,1]) plt.plot([1,2]) plt.savefig('test.png', bbox_inches='tight') ``` I don't understand what's happening here, but the file looks like the iPython notebook inline file now.
Yay.", "best_answers_score":0.7081, "library_name":"matplotlib", "question_url":"https:\/\/stackoverflow.com\/questions\/19271309\/matplotlib-save-figure-as-file-from-ipython-notebook", "best_answers_votes":44, "question_length":547, "response_length":315 }, { "question":"Aligning rotated xticklabels with their respective xticks Check the x axis of the figure below. How can I move the labels a bit to the left so that they align with their respective ticks? I'm rotating the labels using: ``` ax.set_xticks(xlabels_positions) ax.set_xticklabels(xlabels, rotation=45) ``` But, as you can see, the rotation is centered on the middle of the text labels, which makes it look like they are shifted to the right. I've tried using this instead: ``` ax.set_xticklabels(xlabels, rotation=45, rotation_mode=\"anchor\") ``` ... but it doesn't do what I wished for. And \"anchor\" seems to be the only value allowed for the rotation_mode parameter.", "response":"You can set the horizontal alignment of the ticklabels; see the example below. If you imagine a rectangular box around the rotated label, which side of the rectangle do you want to be aligned with the tickpoint? Given your description, you want: ha='right' ``` import numpy as np import matplotlib.pyplot as plt n=5 x = np.arange(n) y = np.sin(np.linspace(-3,3,n)) xlabels = ['Ticklabel %i' % i for i in range(n)] fig, axs = plt.subplots(1,3, figsize=(12,3)) ha = ['right', 'center', 'left'] for n, ax in enumerate(axs): ax.plot(x,y, 'o-') ax.set_title(ha[n]) ax.set_xticks(x) ax.set_xticklabels(xlabels, rotation=40, ha=ha[n]) ```", "best_answers_score":0.708, "library_name":"matplotlib", "question_url":"https:\/\/stackoverflow.com\/questions\/14852821\/aligning-rotated-xticklabels-with-their-respective-xticks", "best_answers_votes":301, "question_length":662, "response_length":576 }, { "question":"Annotate bars with values on Pandas bar plots I was looking for a way to annotate my bars in a Pandas bar plot with the rounded numerical values from my DataFrame.
``` >>> df=pd.DataFrame({'A':np.random.rand(2),'B':np.random.rand(2)},index=['value1','value2'] ) >>> df A B value1 0.440922 0.911800 value2 0.588242 0.797366 ``` I would like to get something like this: I tried with this code sample, but the annotations are all centered on the x ticks: ``` >>> ax = df.plot(kind='bar') >>> for idx, label in enumerate(list(df.index)): for acc in df.columns: value = np.round(df.ix[idx][acc],decimals=2) ax.annotate(value, (idx, value), xytext=(0, 15), textcoords='offset points') ```", "response":"You get it directly from the axes' patches: ``` for p in ax.patches: ax.annotate(str(p.get_height()), (p.get_x() * 1.005, p.get_height() * 1.005)) ``` You'll want to tweak the string formatting and the offsets to get things centered, maybe use the width from p.get_width(), but that should get you started. It may not work with stacked bar plots unless you track the offsets somewhere.", "best_answers_score":0.708, "library_name":"matplotlib", "question_url":"https:\/\/stackoverflow.com\/questions\/25447700\/annotate-bars-with-values-on-pandas-bar-plots", "best_answers_votes":228, "question_length":682, "response_length":385 }, { "question":"Set Matplotlib colorbar size to match graph I cannot get the colorbar on imshow graphs like this one to be the same height as the graph, short of using Photoshop after the fact. How do I get the heights to match?", "response":"You can do this easily with a matplotlib AxisDivider. The example from the linked page also works without using subplots: ```py import matplotlib.pyplot as plt from mpl_toolkits.axes_grid1 import make_axes_locatable import numpy as np plt.figure() ax = plt.gca() im = ax.imshow(np.arange(100).reshape((10,10))) # create an axes on the right side of ax. The width of cax will be 5% # of ax and the padding between cax and ax will be fixed at 0.05 inch. 
divider = make_axes_locatable(ax) cax = divider.append_axes(\"right\", size=\"5%\", pad=0.05) plt.colorbar(im, cax=cax) ```", "best_answers_score":0.7065, "library_name":"matplotlib", "question_url":"https:\/\/stackoverflow.com\/questions\/18195758\/set-matplotlib-colorbar-size-to-match-graph", "best_answers_votes":298, "question_length":212, "response_length":571 }, { "question":"Improve subplot size\/spacing with many subplots I need to generate a whole bunch of vertically-stacked plots in matplotlib. The result will be saved using savefig and viewed on a webpage, so I don't care how tall the final image is, as long as the subplots are spaced so they don't overlap. No matter how big I allow the figure to be, the subplots always seem to overlap. My code currently looks like ``` import matplotlib.pyplot as plt import my_other_module titles, x_lists, y_lists = my_other_module.get_data() fig = plt.figure(figsize=(10,60)) for i, y_list in enumerate(y_lists): plt.subplot(len(titles), 1, i + 1) plt.xlabel(\"Some X label\") plt.ylabel(\"Some Y label\") plt.title(titles[i]) plt.plot(x_lists[i],y_list) fig.savefig('out.png', dpi=100) ```", "response":"Please review matplotlib: Tight Layout guide and try using matplotlib.pyplot.tight_layout, or matplotlib.figure.Figure.tight_layout. As a quick example: ``` import matplotlib.pyplot as plt fig, axes = plt.subplots(nrows=4, ncols=4, figsize=(8, 8)) fig.tight_layout() # Or equivalently, \"plt.tight_layout()\" plt.show() ``` Without Tight Layout With Tight Layout", "best_answers_score":0.7059, "library_name":"matplotlib", "question_url":"https:\/\/stackoverflow.com\/questions\/6541123\/improve-subplot-size-spacing-with-many-subplots", "best_answers_votes":783, "question_length":754, "response_length":359 }, { "question":"Change values on matplotlib imshow() graph axis Say I have some input data: ```py data = np.random.normal(loc=100, scale=10, size=(500,1,32)) hist = np.ones((32, 20)) # initialise hist for z in range(32): hist[z],
edges = np.histogram(data[:, 0, z], bins=np.arange(80, 122, 2)) ``` I can plot it using imshow(): ```py plt.imshow(hist, cmap='Reds') ``` getting: However, the x-axis values do not match the input data (i.e. mean of 100, range from 80 to 122). Therefore, I'd like to change the x-axis to show the values in edges. I have tried: ```py ax = plt.gca() ax.set_xlabel([80,122]) # range of values in edges ... # this shifts the plot so that nothing is visible ``` and ```py ax.set_xticklabels(edges) ... # this labels the axis but does not centre around the mean: ``` Any ideas on how I can change the axis values to reflect the input data I am using?", "response":"I would try to avoid changing the xticklabels if possible; otherwise it can get very confusing if you, for example, overplot your histogram with additional data. Defining the range of your grid is probably the best option, and with imshow it can be done by adding the extent keyword. This way the axes get adjusted automatically. If you want to change the labels, I would use set_xticks with perhaps some formatter. Altering the labels directly should be the last resort. ``` fig, ax = plt.subplots(figsize=(6,6)) ax.imshow(hist, cmap=plt.cm.Reds, interpolation='none', extent=[80,120,32,0]) ax.set_aspect(2) # you may also use ax.imshow(..., aspect=\"auto\") to restore the aspect ratio ```", "best_answers_score":0.7051, "library_name":"matplotlib", "question_url":"https:\/\/stackoverflow.com\/questions\/18696122\/change-values-on-matplotlib-imshow-graph-axis", "best_answers_votes":213, "question_length":859, "response_length":679 }, { "question":"List of all available matplotlib backends The current backend name is accessible via ``` >>> import matplotlib.pyplot as plt >>> plt.get_backend() 'GTKAgg' ``` Is there a way to get a list of all backends that can be used on a particular machine?", "response":"Here is a modification of the script posted previously. It finds all supported backends, validates them and measures their fps.
On OSX it crashes python when it comes to tkAgg, so use at your own risk ;) ``` from __future__ import print_function, division, absolute_import from pylab import * import time import matplotlib.backends import matplotlib.pyplot as p import os.path def is_backend_module(fname): \"\"\"Identifies if a filename is a matplotlib backend module\"\"\" return fname.startswith('backend_') and fname.endswith('.py') def backend_fname_formatter(fname): \"\"\"Removes the extension of the given filename, then takes away the leading 'backend_'.\"\"\" return os.path.splitext(fname)[0][8:] # get the directory where the backends live backends_dir = os.path.dirname(matplotlib.backends.__file__) # filter all files in that directory to identify all files which provide a backend backend_fnames = filter(is_backend_module, os.listdir(backends_dir)) backends = [backend_fname_formatter(fname) for fname in backend_fnames] print(\"supported backends: \\t\" + str(backends)) # validate backends backends_valid = [] for b in backends: try: p.switch_backend(b) backends_valid += [b] except: continue print(\"valid backends: \\t\" + str(backends_valid)) # try backends performance for b in backends_valid: ion() try: p.switch_backend(b) clf() tstart = time.time() # for profiling x = arange(0,2*pi,0.01) # x-array line, = plot(x,sin(x)) for i in arange(1,200): line.set_ydata(sin(x+i\/10.0)) # update the data draw() # redraw the canvas print(b + ' FPS: \\t' , 200\/(time.time()-tstart)) ioff() except: print(b + \" error :(\") ``` To just see supported interactive backends see: ``` #!\/usr\/bin\/env python from __future__ import print_function import matplotlib.pyplot as plt import matplotlib backends = matplotlib.rcsetup.interactive_bk # validate backends backends_valid = [] for b in backends: try: plt.switch_backend(b) backends_valid += [b] except: continue print(backends_valid) ```", "best_answers_score":0.7048, "library_name":"matplotlib", 
"question_url":"https:\/\/stackoverflow.com\/questions\/5091993\/list-of-all-available-matplotlib-backends", "best_answers_votes":53, "question_length":246, "response_length":1976 }, { "question":"Matplotlib issue on OS X (\"ImportError: cannot import name _thread\") At some point in the last few days, Matplotlib stopped working for me on OS X. Here's the error I get when trying to import matplotlib: ``` Traceback (most recent call last): File \"\/my\/path\/to\/script\/my_script.py\", line 15, in import matplotlib.pyplot as plt File \"\/Library\/Python\/2.7\/site-packages\/matplotlib\/pyplot.py\", line 34, in from matplotlib.figure import Figure, figaspect File \"\/Library\/Python\/2.7\/site-packages\/matplotlib\/figure.py\", line 40, in from matplotlib.axes import Axes, SubplotBase, subplot_class_factory File \"\/Library\/Python\/2.7\/site-packages\/matplotlib\/axes\/__init__.py\", line 4, in from ._subplots import * File \"\/Library\/Python\/2.7\/site-packages\/matplotlib\/axes\/_subplots.py\", line 10, in from matplotlib.axes._axes import Axes File \"\/Library\/Python\/2.7\/site-packages\/matplotlib\/axes\/_axes.py\", line 22, in import matplotlib.dates as _ # from dateutil.rrule import (rrule, MO, TU, WE, TH, FR, SA, SU, YEARLY, File \"\/Library\/Python\/2.7\/site-packages\/dateutil\/rrule.py\", line 14, in from six.moves import _thread ImportError: cannot import name _thread ``` The only system change I can think of was the Apple-forced NTP update and maybe some permission changes I did in \/usr\/local to get Brew working again. I tried reinstalling both Matplotlib and Python-dateutil via Pip, but this did not help. Also tried a reboot. I'm running Python 2.7.6, which is located in \/usr\/bin\/python. I'm running Yosemite (OS X 10.10.1).", "response":"``` sudo pip uninstall python-dateutil sudo pip install python-dateutil==2.2 ``` I had the same error message this afternoon as well, although I did recently upgrade to Yosemite. 
I'm not totally sure I understand why reverting dateutil to a previous version works for me, but since running the above I'm having no trouble (I generally use pyplot inline in an IPython notebook).", "best_answers_score":0.7044, "library_name":"matplotlib", "question_url":"https:\/\/stackoverflow.com\/questions\/27630114\/matplotlib-issue-on-os-x-importerror-cannot-import-name-thread", "best_answers_votes":191, "question_length":1518, "response_length":377 }, { "question":"Python matplotlib decrease size of colorbar labels I need your help! I have a plotting code which is the following: ``` fig = plt.figure() ax1 = fig.add_subplot(111) imax1 = ax1.imshow(data,interpolation = 'nearest', origin = 'lower',cmap=cm.jet)#plot cbar = plt.colorbar(imax1, extend='neither', spacing='proportional', orientation='vertical', shrink=0.7, format=\"%.0f\") cbar.set_label(r\"ET [mm\/month]\", size=10) titlestr = \"Evapotranspiration in mm\/month\" plt.title(titlestr) #plt.xlabel(\"Longitude\") #plt.ylabel(\"Latitude\") imax1.set_clim(0,60) labels = [item.get_text() for item in ax1.get_xticklabels()] for ii in range(np.shape(labels)[0]): labels[ii] = str(grid_lon[75*ii\/np.shape(labels)[0]]) ax1.set_xticklabels(labels, rotation = 45, ha='right', size = 10) labels = [item.get_text() for item in ax1.get_yticklabels()] for ii in range(np.shape(labels)[0]): labels[ii] = str(grid_lat[75*ii\/np.shape(labels)[0]]) ax1.set_yticklabels(labels, size = 10) pngname = \".\/out\/2d_\"+variable+\"_\"+mm+\".png\" print \"save \", pngname plt.savefig(pngname, dpi=None, facecolor='w', edgecolor='w', orientation='portrait', papertype=None, format=None, transparent=False, bbox_inches=None, pad_inches=0.1) print \"plot finished\" ``` I would like to set the label size of the colorbar labels (e.g. 0,10,20,...60) to a size of 10 or smaller. This will probably go into the line \"imax1.set_clim(0,60)\". Any ideas? I'd also be interested in printing information about the imax1 object to the command line.
How could I do that? E.g. available attributes and functions of imax1. I deeply appreciate your help!", "response":"Aha! Found the answer here: ``` cbar.ax.tick_params(labelsize=10) ``` P.S. Upvote that answer and give Paul some love!", "best_answers_score":0.7038, "library_name":"matplotlib", "question_url":"https:\/\/stackoverflow.com\/questions\/15305737\/python-matplotlib-decrease-size-of-colorbar-labels", "best_answers_votes":235, "question_length":1577, "response_length":118 }, { "question":"Seaborn Bar Plot Ordering I have a pandas dataframe that has two columns. I need the plot ordered by the \"Count\" column. ``` dicti=({'37':99943,'25':47228,'36':16933,'40':14996,'35':11791,'34':8030,'24' : 6319 ,'2' :5055 ,'39' :4758 ,'38' :4611 }) pd_df = pd.DataFrame(list(dicti.iteritems())) pd_df.columns =[\"Dim\",\"Count\"] plt.figure(figsize=(12,8)) ax = sns.barplot(x=\"Dim\", y= \"Count\",data=pd_df ) ax.get_yaxis().set_major_formatter(plt.FuncFormatter(lambda x, loc: \" {:,}\".format(int(x)))) ax.set(xlabel=\"Dim\", ylabel='Count') for item in ax.get_xticklabels(): item.set_rotation(90) for i, v in enumerate(pd_df[\"Count\"].iteritems()): ax.text(i ,v[1], \"{:,}\".format(v[1]), color='m', va ='bottom', rotation=45) plt.tight_layout() ``` Right now the plot is ordered by the \"Dim\" column; I need it ordered by the \"Count\" column. How can I do this?", "response":"You can use the order parameter for this. ``` sns.barplot(x='Id', y=\"Speed\", data=df, order=result['Id']) ``` Credits to Wayne. See the rest of his code. This link is still working for me. But, for the sake of convenience, I'm pasting the author's code here.
``` result = df.groupby([\"Id\"])['Speed'].aggregate(np.median).reset_index().sort_values('Speed') sns.barplot(x='Id', y=\"Speed\", data=df, order=result['Id']) plt.show() ``` df ``` Id Speed 0 1 30 1 1 35 2 1 31 3 2 20 4 2 25 ``` result ``` Id Speed 1 2 22.5 0 1 31.0 2 3 80.0 ```", "best_answers_score":0.7029, "library_name":"matplotlib", "question_url":"https:\/\/stackoverflow.com\/questions\/43770507\/seaborn-bar-plot-ordering", "best_answers_votes":59, "question_length":855, "response_length":536 }, { "question":"Removing white space around a saved image I need to take an image and save it after some processing. The figure looks fine when I display it, but after saving the figure, I get some white space around the saved image. I have tried the 'tight' option for the savefig method, but it did not work either. The code: ``` import matplotlib.image as mpimg import matplotlib.pyplot as plt fig = plt.figure(1) img = mpimg.imread(\"image.jpg\") plt.imshow(img) ax = fig.add_subplot(1, 1, 1) extent = ax.get_window_extent().transformed(fig.dpi_scale_trans.inverted()) plt.savefig('1.png', bbox_inches=extent) plt.axis('off') plt.show() ``` I am trying to draw a basic graph using NetworkX on a figure and save it. I realized that without a graph it works, but when a graph is added I get white space around the saved image;
I realized that without a graph it works, but when added a graph I get white space around the saved image; ``` import matplotlib.image as mpimg import matplotlib.pyplot as plt import networkx as nx G = nx.Graph() G.add_node(1) G.add_node(2) G.add_node(3) G.add_edge(1, 3) G.add_edge(1, 2) pos = {1:[100, 120], 2:[200, 300], 3:[50, 75]} fig = plt.figure(1) img = mpimg.imread(\"image.jpg\") plt.imshow(img) ax = fig.add_subplot(1, 1, 1) nx.draw(G, pos=pos) extent = ax.get_window_extent().transformed(fig.dpi_scale_trans.inverted()) plt.savefig('1.png', bbox_inches=extent) plt.axis('off') plt.show() ```", "response":"You can remove the white space padding by setting bbox_inches=\"tight\" in savefig: ``` plt.savefig(\"test.png\",bbox_inches='tight') ``` You'll have to put the argument to bbox_inches as a string, perhaps this is why it didn't work earlier for you. Possible duplicates: Matplotlib plots: removing axis, legends and white spaces How to set the margins for a matplotlib figure? Reduce left and right margins in matplotlib plot", "best_answers_score":0.7025, "library_name":"matplotlib", "question_url":"https:\/\/stackoverflow.com\/questions\/11837979\/removing-white-space-around-a-saved-image", "best_answers_votes":400, "question_length":1291, "response_length":421 }, { "question":"Plot a histogram from a Dictionary I created a dictionary that counts the occurrences in a list of every key and I would now like to plot the histogram of its content. 
This is the content of the dictionary I want to plot: ``` {1: 27, 34: 1, 3: 72, 4: 62, 5: 33, 6: 36, 7: 20, 8: 12, 9: 9, 10: 6, 11: 5, 12: 8, 2: 74, 14: 4, 15: 3, 16: 1, 17: 1, 18: 1, 19: 1, 21: 1, 27: 2} ``` So far I wrote this: ``` import numpy as np import matplotlib.pyplot as plt pos = np.arange(len(myDictionary.keys())) width = 1.0 # gives histogram aspect to the bar diagram ax = plt.axes() ax.set_xticks(pos + (width \/ 2)) ax.set_xticklabels(myDictionary.keys()) plt.bar(myDictionary.keys(), ******, width, color='g') # ^^^^^^ what should I put here? plt.show() ``` I tried by simply doing ``` plt.bar(myDictionary.keys(), myDictionary, width, color='g') ``` but this is the result: and I don't know why the 3 bars are shifted, and I'd also like the histogram to be displayed in an ordered fashion. Can somebody tell me how to do it?", "response":"You can use the function for plotting histograms like this: ``` a = np.random.random_integers(0,10,20) #example list of values plt.hist(a) plt.show() ``` Or you can use myDictionary just like this: ``` plt.bar(myDictionary.keys(), myDictionary.values(), width, color='g') ```", "best_answers_score":0.7022, "library_name":"matplotlib", "question_url":"https:\/\/stackoverflow.com\/questions\/21195179\/plot-a-histogram-from-a-dictionary", "best_answers_votes":90, "question_length":1008, "response_length":275 }, { "question":"How to change legend fontsize with matplotlib.pyplot Simple question here: I'm trying to get the size of my legend using matplotlib.pyplot to be smaller (i.e., the text to be smaller).
The code I'm using goes something like this: ``` plot.figure() plot.scatter(k, sum_cf, color='black', label='Sum of Cause Fractions') plot.scatter(k, data[:, 0], color='b', label='Dis 1: cf = .6, var = .2') plot.scatter(k, data[:, 1], color='r', label='Dis 2: cf = .2, var = .1') plot.scatter(k, data[:, 2], color='g', label='Dis 3: cf = .1, var = .01') plot.legend(loc=2) ```", "response":"You can set an individual font size for the legend by adjusting the prop keyword. ``` plot.legend(loc=2, prop={'size': 6}) ``` This takes a dictionary of keywords corresponding to matplotlib.font_manager.FontProperties properties. See the documentation for legend: Keyword arguments: ``` prop: [ None | FontProperties | dict ] A matplotlib.font_manager.FontProperties instance. If prop is a dictionary, a new instance will be created with prop. If None, use rc settings. ``` It is also possible, as of version 1.2.1, to use the keyword fontsize.", "best_answers_score":0.7013, "library_name":"matplotlib", "question_url":"https:\/\/stackoverflow.com\/questions\/7125009\/how-to-change-legend-fontsize-with-matplotlib-pyplot", "best_answers_votes":837, "question_length":561, "response_length":545 }, { "question":"How can I make a 3D line plot? I want to generate the lines, which I get from an array in 3D. Here is the code: ``` VecStart_x = [0,1,3,5] VecStart_y = [2,2,5,5] VecStart_z = [0,1,1,5] VecEnd_x = [1,2,-1,6] VecEnd_y = [3,1,-2,7] VecEnd_z =[1,0,4,9] import matplotlib.pyplot as plt from mpl_toolkits.mplot3d import Axes3D fig = plt.figure() ax = fig.add_subplot(111, projection='3d') ax.plot([VecStart_x ,VecEnd_x],[VecStart_y,VecEnd_y],[VecStart_z,VecEnd_z]) plt.show() Axes3D.plot() ``` I get that error: ValueError: third arg must be a format string", "response":"I guess, you want to plot 4 lines. 
Then you can try ``` fig = plt.figure() ax = fig.add_subplot(111, projection='3d') for i in range(4): ax.plot([VecStart_x[i], VecEnd_x[i]], [VecStart_y[i],VecEnd_y[i]],zs=[VecStart_z[i],VecEnd_z[i]]) ``` As Nicolas has suggested, do have a look at the matplotlib gallery.", "best_answers_score":0.7013, "library_name":"matplotlib", "question_url":"https:\/\/stackoverflow.com\/questions\/11541123\/how-can-i-make-a-3d-line-plot", "best_answers_votes":51, "question_length":551, "response_length":306 }, { "question":"Named colors in matplotlib What named colors are available in matplotlib for use in plots? I can find a list on the matplotlib documentation that claims that these are the only names: ``` b: blue g: green r: red c: cyan m: magenta y: yellow k: black w: white ``` However, I've found that these colors can also be used, at least in this context: ``` scatter(X,Y, color='red') scatter(X,Y, color='orange') scatter(X,Y, color='darkgreen') ``` but these are not on the above list. Does anyone know an exhaustive list of the named colors that are available?", "response":"I constantly forget the names of the colors I want to use and keep coming back to this question =) The previous answers are great, but I find it a bit difficult to get an overview of the available colors from the posted image. I prefer the colors to be grouped with similar colors, so I slightly tweaked the matplotlib answer that was mentioned in a comment above to get a color list sorted in columns. The order is not identical to how I would sort by eye, but I think it gives a good overview. I updated the image and code to reflect that 'rebeccapurple' has been added and the three sage colors have been moved under the 'xkcd:' prefix since I posted this answer originally. I really didn't change much from the matplotlib example, but here is the code for completeness. 
``` import matplotlib.pyplot as plt from matplotlib import colors as mcolors colors = dict(mcolors.BASE_COLORS, **mcolors.CSS4_COLORS) # Sort colors by hue, saturation, value and name. by_hsv = sorted((tuple(mcolors.rgb_to_hsv(mcolors.to_rgba(color)[:3])), name) for name, color in colors.items()) sorted_names = [name for hsv, name in by_hsv] n = len(sorted_names) ncols = 4 nrows = n \/\/ ncols fig, ax = plt.subplots(figsize=(12, 10)) # Get height and width X, Y = fig.get_dpi() * fig.get_size_inches() h = Y \/ (nrows + 1) w = X \/ ncols for i, name in enumerate(sorted_names): row = i % nrows col = i \/\/ nrows y = Y - (row * h) - h xi_line = w * (col + 0.05) xf_line = w * (col + 0.25) xi_text = w * (col + 0.3) ax.text(xi_text, y, name, fontsize=(h * 0.8), horizontalalignment='left', verticalalignment='center') ax.hlines(y + h * 0.1, xi_line, xf_line, color=colors[name], linewidth=(h * 0.8)) ax.set_xlim(0, X) ax.set_ylim(0, Y) ax.set_axis_off() fig.subplots_adjust(left=0, right=1, top=1, bottom=0, hspace=0, wspace=0) plt.show() ``` Additional named colors Updated 2017-10-25. I merged my previous updates into this section. xkcd If you would like to use additional named colors when plotting with matplotlib, you can use the xkcd crowdsourced color names, via the 'xkcd:' prefix: ``` plt.plot([1,2], lw=4, c='xkcd:baby poop green') ``` Now you have access to a plethora of named colors! Tableau The default Tableau colors are available in matplotlib via the 'tab:' prefix: ``` plt.plot([1,2], lw=4, c='tab:green') ``` There are ten distinct colors: HTML You can also plot colors by their HTML hex code: ``` plt.plot([1,2], lw=4, c='#8f9805') ``` This is more similar to specifying an RGB tuple rather than a named color (apart from the fact that the hex code is passed as a string), and I will not include an image of the 16 million colors you can choose from... 
For more details, please refer to the matplotlib colors documentation and the source file specifying the available colors, _color_data.py.", "best_answers_score":0.7004, "library_name":"matplotlib", "question_url":"https:\/\/stackoverflow.com\/questions\/22408237\/named-colors-in-matplotlib", "best_answers_votes":496, "question_length":552, "response_length":2784 }, { "question":"How to plot a gradient color line To state it in a general form, I'm looking for a way to join several points with a gradient color line using matplotlib, and I'm not finding it anywhere. To be more specific, I'm plotting a 2D random walk with a one color line. But, as the points have a relevant sequence, I would like to look at the plot and see where the data has moved. A gradient colored line would do the trick. Or a line with gradually changing transparency. I'm just trying to improve the vizualization of my data. Check out this beautiful image produced by the ggplot2 package of R. I'm looking for the same in matplotlib. Thanks.", "response":"Note that if you have many points, calling plt.plot for each line segment can be quite slow. It's more efficient to use a LineCollection object. 
Using the colorline recipe you could do the following: ``` import matplotlib.pyplot as plt import numpy as np import matplotlib.collections as mcoll import matplotlib.path as mpath def colorline( x, y, z=None, cmap=plt.get_cmap('copper'), norm=plt.Normalize(0.0, 1.0), linewidth=3, alpha=1.0): \"\"\" http:\/\/nbviewer.ipython.org\/github\/dpsanders\/matplotlib-examples\/blob\/master\/colorline.ipynb http:\/\/matplotlib.org\/examples\/pylab_examples\/multicolored_line.html Plot a colored line with coordinates x and y Optionally specify colors in the array z Optionally specify a colormap, a norm function and a line width \"\"\" # Default colors equally spaced on [0,1]: if z is None: z = np.linspace(0.0, 1.0, len(x)) # Special case if a single number: if not hasattr(z, \"__iter__\"): # to check for numerical input -- this is a hack z = np.array([z]) z = np.asarray(z) segments = make_segments(x, y) lc = mcoll.LineCollection(segments, array=z, cmap=cmap, norm=norm, linewidth=linewidth, alpha=alpha) ax = plt.gca() ax.add_collection(lc) return lc def make_segments(x, y): \"\"\" Create list of line segments from x and y coordinates, in the correct format for LineCollection: an array of the form numlines x (points per line) x 2 (x and y) array \"\"\" points = np.array([x, y]).T.reshape(-1, 1, 2) segments = np.concatenate([points[:-1], points[1:]], axis=1) return segments N = 10 np.random.seed(101) x = np.random.rand(N) y = np.random.rand(N) fig, ax = plt.subplots() path = mpath.Path(np.column_stack([x, y])) verts = path.interpolated(steps=3).vertices x, y = verts[:, 0], verts[:, 1] z = np.linspace(0, 1, len(x)) colorline(x, y, z, cmap=plt.get_cmap('jet'), linewidth=2) plt.show() ```", "best_answers_score":0.7004, "library_name":"matplotlib", "question_url":"https:\/\/stackoverflow.com\/questions\/8500700\/how-to-plot-a-gradient-color-line", "best_answers_votes":46, "question_length":639, "response_length":1819 }, { "question":"Python saving multiple figures into one PDF file 
In python (for one figure created in a GUI) I was able to save the figure under .jpg and also .pdf by either using: ``` plt.savefig(filename1 + '.pdf') ``` or ``` plt.savefig(filename1 + '.jpg') ``` Using one file I would like to save multiple figures in either .pdf or .jpg (just like it's done in MATLAB). Can anybody please help with this?", "response":"Use PdfPages to solve your problem. Pass your figure object to the savefig method. For example, if you have a whole pile of figure objects open and you want to save them into a multi-page PDF, you might do: ``` import matplotlib.pyplot as plt import matplotlib.backends.backend_pdf pdf = matplotlib.backends.backend_pdf.PdfPages(\"output.pdf\") for fig in range(1, plt.figure().number): ## will open an empty extra figure :( pdf.savefig( fig ) pdf.close() ```", "best_answers_score":0.6998, "library_name":"matplotlib", "question_url":"https:\/\/stackoverflow.com\/questions\/17788685\/python-saving-multiple-figures-into-one-pdf-file", "best_answers_votes":110, "question_length":392, "response_length":422 }, { "question":"Matplotlib - How to make the marker face color transparent without making the line transparent I know how to set the transparency of a line in matplotlib. For example, the following code makes the line and the markers transparent. ``` import numpy as np import matplotlib.pyplot as plt vec = np.random.uniform(0, 10, 50) f = plt.figure(1) ax = f.add_subplot(111) ax.plot(vec, color='#999999', marker='s', alpha=0.5) ``` I want line's alpha=1.0, and marker's face color to be semi-transparent (alpha=0.5). Can this be done in matplotlib?", "response":"See @Pelson's answer below for the correct way to do this with one line. You can do this in a hacky way by sticky taping together two independent Line2D objects. 
``` import numpy as np import matplotlib.pyplot as plt th = np.linspace(0, 2 * np.pi, 64) y = np.sin(th) ax = plt.gca() lin, = ax.plot(th, y, lw=5) mark, = ax.plot(th, y, marker='o', alpha=.5, ms=10) ax.legend([(lin, mark)], ['merged']) plt.draw() ``` see here for explanation", "best_answers_score":0.6995, "library_name":"matplotlib", "question_url":"https:\/\/stackoverflow.com\/questions\/15928539\/matplotlib-how-to-make-the-marker-face-color-transparent-without-making-the-li", "best_answers_votes":84, "question_length":536, "response_length":387 }, { "question":"'invalid value encountered in double_scalars' warning, possibly numpy As I run my code I get these warnings, always in groups of four, sporadically. I have tried to locate the source by placing debug messages before and after certain statements to pin-point its origin. ``` Warning: invalid value encountered in double_scalars Warning: invalid value encountered in double_scalars Warning: invalid value encountered in double_scalars Warning: invalid value encountered in double_scalars ``` Is this a Numpy warning, and what is a double scalar? From Numpy I use ``` min(), argmin(), mean() and random.randn() ``` I also use Matplotlib", "response":"It looks like a floating-point calculation error. Check the numpy.seterr function to get more information about where it happens.", "best_answers_score":0.699, "library_name":"matplotlib", "question_url":"https:\/\/stackoverflow.com\/questions\/3767409\/invalid-value-encountered-in-double-scalars-warning-possibly-numpy", "best_answers_votes":84, "question_length":636, "response_length":129 }, { "question":"Cyclic colormap without visual distortions for use in phase angle plots? I'm looking for a good circular\/cyclic colormap to represent phase angle information (where the values are restricted to the range [0, 2π] and where 0 and 2π represent the same phase angle). 
Background: I'd like to visualize normal modes by plotting both the power spectral density and the relative phase information of the oscillations across the system. I'll admit that previously I used the 'rainbow' colormap for the power plot and the 'hsv' colormap for the phase plot (see [1]). However, the use of the rainbow colormap is extremely discouraged because of its lack of perceptual linearity and ordering [2][3]. So I switched to the 'coolwarm' colormap for the power plot which I quite like. Unfortunately, the 'hsv' colormap seems to introduce the same kind of visual distortions as the 'rainbow' map (and it also doesn't go along very well with the 'coolwarm' map since it looks kind of ugly and flashy in comparison). Does anyone have a good recommendation for an alternative circular colormap which I could use for the phase plots? Requirements: It needs to be circular so that the values 0 and 2\u03c0 are represented by the same color. It should not introduce any visual distortions; in particular, it should be perceptually linear (which the 'hsv' colormap doesn't seem to be). I don't believe that perceptual ordering is such a big deal for phase information, but it would of course not do any harm. It should be visually appealing when combined with the 'coolwarm' colormap. However, I'm not dead set on 'coolwarm' and am happy to consider other options if there is another nice pair of colormaps to visualize amplitude and phase information. Bonus points if the colormap is available (or can be easily created) for use in matplotlib. Many thanks for any suggestions! [1] http:\/\/matplotlib.org\/examples\/color\/colormaps_reference.html [2] http:\/\/www.renci.org\/~borland\/pdfs\/RainbowColorMap_VisViewpoints.pdf [3] http:\/\/medvis.org\/2012\/08\/21\/rainbow-colormaps-what-are-they-good-for-absolutely-nothing\/", "response":"EDIT: Matplotlib has now nice cyclic color maps, see the answer of @andras-deak below. 
They use a similar approach to the color maps as in this answer, but smooth the edges in luminosity. The issue with the hue-HUSL colormap is that it's not intuitive to read an angle from it. Therefore, I suggest making your own colormap. Here are a few possibilities: For the linear segmented colormap, we define a few colors. The colormap is then a linear interpolation between the colors. This has visual distortions. For the luminosity-HSLUV map, we use the HUSL (\"HSLUV\") space, however instead of just the hue channel, we use two colors and the luminosity channel. This has distortions in the chroma, but has bright colors. For the luminosity-HPLUV map, we use the HPLUV color space (following @mwaskom's comment). This is the only way to really have no visual distortions, but the colors are not saturated. This is what they look like: We see that in our custom colormaps, white stands for 0, blue stands for 1i, etc. On the upper right, we see the hue-HUSL map for comparison. There, the color-angle assignments are random. Also when plotting a more complex function, it's straightforward to read out the phase of the result when using one of our colormaps. 
And here's the code for the plots: ``` import numpy as np import matplotlib.pyplot as plt import matplotlib.colors as col import seaborn as sns import hsluv # install via pip import scipy.special # just for the example function ##### generate custom colormaps def make_segmented_cmap(): white = '#ffffff' black = '#000000' red = '#ff0000' blue = '#0000ff' anglemap = col.LinearSegmentedColormap.from_list( 'anglemap', [black, red, white, blue, black], N=256, gamma=1) return anglemap def make_anglemap( N = 256, use_hpl = True ): h = np.ones(N) # hue h[:N\/\/2] = 11.6 # red h[N\/\/2:] = 258.6 # blue s = 100 # saturation l = np.linspace(0, 100, N\/\/2) # luminosity l = np.hstack( (l,l[::-1] ) ) colorlist = np.zeros((N,3)) for ii in range(N): if use_hpl: colorlist[ii,:] = hsluv.hpluv_to_rgb( (h[ii], s, l[ii]) ) else: colorlist[ii,:] = hsluv.hsluv_to_rgb( (h[ii], s, l[ii]) ) colorlist[colorlist > 1] = 1 # correct numeric errors colorlist[colorlist < 0] = 0 return col.ListedColormap( colorlist ) N = 256 segmented_cmap = make_segmented_cmap() flat_huslmap = col.ListedColormap(sns.color_palette('husl',N)) hsluv_anglemap = make_anglemap( use_hpl = False ) hpluv_anglemap = make_anglemap( use_hpl = True ) ##### generate data grid x = np.linspace(-2,2,N) y = np.linspace(-2,2,N) z = np.zeros((len(y),len(x))) # make cartesian grid for ii in range(len(y)): z[ii] = np.arctan2(y[ii],x) # simple angular function z[ii] = np.angle(scipy.special.gamma(x+1j*y[ii])) # some complex function ##### plot with different colormaps fig = plt.figure(1) fig.clf() colormapnames = ['segmented map', 'hue-HUSL', 'lum-HSLUV', 'lum-HPLUV'] colormaps = [segmented_cmap, flat_huslmap, hsluv_anglemap, hpluv_anglemap] for ii, cm in enumerate(colormaps): ax = fig.add_subplot(2, 2, ii+1) pmesh = ax.pcolormesh(x, y, z\/np.pi, cmap = cm, vmin=-1, vmax=1) plt.axis([x.min(), x.max(), y.min(), y.max()]) cbar = fig.colorbar(pmesh) cbar.ax.set_ylabel('Phase [pi]') ax.set_title( colormapnames[ii] ) plt.show() ```", 
"best_answers_score":0.6966, "library_name":"matplotlib", "question_url":"https:\/\/stackoverflow.com\/questions\/23712207\/cyclic-colormap-without-visual-distortions-for-use-in-phase-angle-plots", "best_answers_votes":24, "question_length":2081, "response_length":3228 }, { "question":"Edit the width of bars using pd.DataFrame.plot() I am making a stacked bar plot using: ``` DataFrame.plot(kind='bar',stacked=True) ``` I want to control width of bars so that the bars are connected to each other like a histogram. I've looked through the documentation but to no avail - any suggestions? Is it possible to do it this way?", "response":"For anyone coming across this question: Since pandas 0.14, plotting with bars has a 'width' command: https:\/\/github.com\/pydata\/pandas\/pull\/6644 The example above can now be solved simply by using ``` df.plot(kind='bar', stacked=True, width=1) ``` See pandas.DataFrame.plot.bar or pandas.DataFrame.plot with kind='bar'. When changing the width of the bars, it might also be appropriate to change the figure size by specifying the figsize= parameter.", "best_answers_score":0.696, "library_name":"matplotlib", "question_url":"https:\/\/stackoverflow.com\/questions\/14824456\/edit-the-width-of-bars-using-pd-dataframe-plot", "best_answers_votes":109, "question_length":336, "response_length":448 }, { "question":"How to change the color of the axis, ticks and labels I'd like to change the color of the axis, as well as ticks and value-labels for a plot I did using matplotlib and PyQt.", "response":"As a quick example (using a slightly cleaner method than the potentially duplicate question): ``` import matplotlib.pyplot as plt fig = plt.figure() ax = fig.add_subplot(111) ax.plot(range(10)) ax.set_xlabel('X-axis') ax.set_ylabel('Y-axis') ax.spines['bottom'].set_color('red') ax.spines['top'].set_color('red') ax.xaxis.label.set_color('red') ax.tick_params(axis='x', colors='red') plt.show() ``` Alternatively ```py [t.set_color('red') for t in 
ax.xaxis.get_ticklines()] [t.set_color('red') for t in ax.xaxis.get_ticklabels()] ```", "best_answers_score":0.6958, "library_name":"matplotlib", "question_url":"https:\/\/stackoverflow.com\/questions\/4761623\/how-to-change-the-color-of-the-axis-ticks-and-labels", "best_answers_votes":295, "question_length":173, "response_length":533 }, { "question":"Plotting a 2D heatmap Using Matplotlib, I want to plot a 2D heat map. My data is an n-by-n Numpy array, each with a value between 0 and 1. So for the (i, j) element of this array, I want to plot a square at the (i, j) coordinate in my heat map, whose color is proportional to the element's value in the array. How can I do this?", "response":"The imshow() function with parameters interpolation='nearest' and cmap='hot' should do what you want. Please review the interpolation parameter details, and see Interpolations for imshow and Image antialiasing. ``` import matplotlib.pyplot as plt import numpy as np a = np.random.random((16, 16)) plt.imshow(a, cmap='hot', interpolation='nearest') plt.show() ```", "best_answers_score":0.6946, "library_name":"matplotlib", "question_url":"https:\/\/stackoverflow.com\/questions\/33282368\/plotting-a-2d-heatmap", "best_answers_votes":351, "question_length":328, "response_length":362 }, { "question":"No plot window in matplotlib I just installed matplotlib in Ubuntu 9.10 using the synaptic package system. However, when I try the following simple example ``` >>> from pylab import plot; >>> plot([1,2,3],[1,2,3]) [] ``` I get no plot window. Any ideas on how to get the plot window to show?", "response":"You can type ``` import pylab pylab.show() ``` or better, use ipython -pylab. 
Since the use of pylab is not recommended anymore, the solution would nowadays be ``` import matplotlib.pyplot as plt plt.plot([1,2,3]) plt.show() ```", "best_answers_score":0.6946, "library_name":"matplotlib", "question_url":"https:\/\/stackoverflow.com\/questions\/2130913\/no-plot-window-in-matplotlib", "best_answers_votes":169, "question_length":291, "response_length":228 }, { "question":"small scatter plot markers in matplotlib are always black I'm trying to use matplotlib to make a scatter plot with very small gray points. Because of the point density, the points need to be small. The problem is that the scatter() function's markers seem to have both a line and a fill. When the markers are small, only the line is visible, not the fill, and the line isn't the right colour (it's always black). I can get exactly what I want using gnuplot: plot 'nodes' with points pt 0 lc rgb 'gray' How can I make very small gray points using matplotlib scatterplot()?", "response":"``` scatter([1,2,3], [2,4,5], s=1, facecolor='0.5', lw = 0) ``` This sets the markersize to 1 (s=1), the facecolor to gray (facecolor='0.5'), and the linewidth to 0 (lw=0).", "best_answers_score":0.6936, "library_name":"matplotlib", "question_url":"https:\/\/stackoverflow.com\/questions\/6503241\/small-scatter-plot-markers-in-matplotlib-are-always-black", "best_answers_votes":71, "question_length":571, "response_length":172 }, { "question":"more than 9 subplots in matplotlib Is it possible to get more than 9 subplots in matplotlib? I am on the subplots command pylab.subplot(449); how can I get a 4410 to work? 
Thank you very much.", "response":"It was easier than I expected, I just did: pylab.subplot(4,4,10) and it worked.", "best_answers_score":0.6936, "library_name":"matplotlib", "question_url":"https:\/\/stackoverflow.com\/questions\/4158367\/more-than-9-subplots-in-matplotlib", "best_answers_votes":74, "question_length":192, "response_length":79 }, { "question":"How do I plot only a table in Matplotlib? Is it possible to draw only a table with matplotlib? If I uncomment the line ``` plt.bar(index, data[row], bar_width, bottom=y_offset, color=colors[row]) ``` of this example code, the plot is still visible. I want to have a table on top of my (PyQt) window and underneath a plot (with some space in between).", "response":"This is another option to write a pandas dataframe directly into a matplotlib table: ``` import numpy as np import pandas as pd import matplotlib.pyplot as plt fig, ax = plt.subplots() # hide axes fig.patch.set_visible(False) ax.axis('off') ax.axis('tight') df = pd.DataFrame(np.random.randn(10, 4), columns=list('ABCD')) ax.table(cellText=df.values, colLabels=df.columns, loc='center') fig.tight_layout() plt.show() ```", "best_answers_score":0.6935, "library_name":"matplotlib", "question_url":"https:\/\/stackoverflow.com\/questions\/32137396\/how-do-i-plot-only-a-table-in-matplotlib", "best_answers_votes":90, "question_length":350, "response_length":420 }, { "question":"Plot yerr\/xerr as shaded region rather than error bars In matplotlib, how do I plot error as a shaded region rather than error bars? 
For example: rather than", "response":"Ignoring the smooth interpolation between points in your example graph (that would require doing some manual interpolation, or just have a higher resolution of your data), you can use pyplot.fill_between(): ``` from matplotlib import pyplot as plt import numpy as np x = np.linspace(0, 30, 30) y = np.sin(x\/6*np.pi) error = np.random.normal(0.1, 0.02, size=y.shape) y += np.random.normal(0, 0.1, size=y.shape) plt.plot(x, y, 'k-') plt.fill_between(x, y-error, y+error) plt.show() ``` See also the matplotlib examples.", "best_answers_score":0.6931, "library_name":"matplotlib", "question_url":"https:\/\/stackoverflow.com\/questions\/12957582\/plot-yerr-xerr-as-shaded-region-rather-than-error-bars", "best_answers_votes":220, "question_length":157, "response_length":517 }, { "question":"How to change size of figure with subplots I'm having some trouble trying to change the figure size when using plt.subplots. With the following code, I just get the standard size graph with all my subplots bunched in (there's ~100) and obviously just an extra empty figure. I've tried using tight_layout, but to no avail. ```py def plot(reader): channels=[] for i in reader: channels.append(i) plt.figure(figsize=(50,100)) fig, ax = plt.subplots(len(channels), sharex=True) plot=0 for j in reader: ax[plot].plot(reader[\"%s\" % j]) plot=plot+1 plt.tight_layout() plt.show() ```", "response":"You can remove your initial plt.figure(). When calling plt.subplots() a new figure is created, so your first call doesn't do anything. The subplots command in the background will call plt.figure() for you, and any keywords will be passed along. 
So just add the figsize keyword to the subplots() command: ``` def plot(reader): channels=[] for i in reader: channels.append(i) fig, ax = plt.subplots(len(channels), sharex=True, figsize=(50,100)) plot=0 for j in reader: ax[plot].plot(reader[\"%s\" % j]) plot=plot+1 plt.tight_layout() plt.show() ```", "best_answers_score":0.6925, "library_name":"matplotlib", "question_url":"https:\/\/stackoverflow.com\/questions\/19932553\/how-to-change-size-of-figure-with-subplots", "best_answers_votes":70, "question_length":580, "response_length":543 }, { "question":"Setting different color for each series in scatter plot Suppose I have three data sets: ``` X = [1,2,3,4] Y1 = [4,8,12,16] Y2 = [1,4,9,16] ``` I can scatter plot this: ``` from matplotlib import pyplot as plt plt.scatter(X,Y1,color='red') plt.scatter(X,Y2,color='blue') plt.show() ``` How can I do this with 10 sets? I searched for this and could find any reference to what I'm asking. Edit: clarifying (hopefully) my question If I call scatter multiple times, I can only set the same color on each scatter. Also, I know I can set a color array manually but I'm sure there is a better way to do this. My question is then, \"How can I automatically scatter-plot my several data sets, each with a different color. If that helps, I can easily assign a unique number to each data set.", "response":"I don't know what you mean by 'manually'. You can choose a colourmap and make a colour array easily enough: ``` import numpy as np import matplotlib.pyplot as plt import matplotlib.cm as cm x = np.arange(10) ys = [i+x+(i*x)**2 for i in range(10)] colors = cm.rainbow(np.linspace(0, 1, len(ys))) for y, c in zip(ys, colors): plt.scatter(x, y, color=c) ``` Or you can make your own colour cycler using itertools.cycle and specifying the colours you want to loop over, using next to get the one you want. 
For example, with 3 colours: ``` import itertools colors = itertools.cycle([\"r\", \"b\", \"g\"]) for y in ys: plt.scatter(x, y, color=next(colors)) ``` Come to think of it, maybe it's cleaner not to use zip with the first one either: ``` colors = iter(cm.rainbow(np.linspace(0, 1, len(ys)))) for y in ys: plt.scatter(x, y, color=next(colors)) ```", "best_answers_score":0.6914, "library_name":"matplotlib", "question_url":"https:\/\/stackoverflow.com\/questions\/12236566\/setting-different-color-for-each-series-in-scatter-plot", "best_answers_votes":371, "question_length":779, "response_length":844 }, { "question":"Common xlabel\/ylabel for matplotlib subplots I have the following plot: ``` fig,ax = plt.subplots(5,2,sharex=True,sharey=True,figsize=fig_size) ``` and now I would like to give this plot common x-axis labels and y-axis labels. With \"common\", I mean that there should be one big x-axis label below the whole grid of subplots, and one big y-axis label to the right. I can't find anything about this in the documentation for plt.subplots, and my googlings suggest that I need to make a big plt.subplot(111) to start with - but how do I then put my 5*2 subplots into that using plt.subplots?", "response":"This looks like what you actually want. It applies the same approach of this answer to your specific case: ``` import matplotlib.pyplot as plt fig, ax = plt.subplots(nrows=3, ncols=3, sharex=True, sharey=True, figsize=(6, 6)) fig.text(0.5, 0.04, 'common X', ha='center') fig.text(0.04, 0.5, 'common Y', va='center', rotation='vertical') ```", "best_answers_score":0.6907, "library_name":"matplotlib", "question_url":"https:\/\/stackoverflow.com\/questions\/16150819\/common-xlabel-ylabel-for-matplotlib-subplots", "best_answers_votes":351, "question_length":587, "response_length":340 }, { "question":"Setting the position of the `ylabel` I am trying to recreate the look of figure below using matplotlib (source). 
However, I am having issues with the placement of the ylabel. I want it at the top of the y-axis, as it is on the figure. I have tried setting its position with ax.yaxis.set_label_position(), but this only accepts left or right for the y-axis. Is there an option to control the position of the ylabel, or should I just use ax.text and set the text's position manually? EDIT: As it turns out, the ax.set_ylabel(position=(x,y)) sets the position of the label relative to the graph coordinates. However, because of its horizontal rotation, the label is a little too much to the right, and position(x,y) does not seem to accept negative inputs. Is there a way to move the label a little to the left? I include the code used to generate the skeleton of the figure here, even though it's rather messy. ``` import matplotlib as mpl import matplotlib.pyplot as plt import numpy as np mpl.rcParams['text.usetex'] = True mpl.rcParams['text.latex.preamble'] = [r\"\\usepackage[charter]{mathdesign}\"] mpl.rcParams['font.family'] = ['serif'] mpl.rcParams['font.size'] = 10 nb_procs = np.array([1, 2, 4, 12, 24, 48, 96, 192, 384]) def adjust_spines(ax, spines): for loc, spine in ax.spines.items(): if loc in spines: spine.set_position(('outward', 10)) # outward by 10 points spine.set_smart_bounds(True) else: spine.set_color('none') # don't draw spine # turn off ticks where there is no spine if 'left' in spines: ax.yaxis.set_ticks_position('left') else: # no yaxis ticks ax.yaxis.set_ticks([]) if 'bottom' in spines: ax.xaxis.set_ticks_position('bottom') else: # no xaxis ticks ax.xaxis.set_ticks([]) # -- We create the figure. figPres = plt.figure(figsize=(3,1.75)) axPres = figPres.add_subplot(111) # -- We remove any superfluous decoration. # Remove the axis decorations on the right and on the top. axPres.spines['top'].set_visible(False) axPres.spines['right'].set_visible(False) # Make the remaining spines a light gray. 
axPres.spines['bottom'].set_color('gray') axPres.spines['left'].set_color('gray') adjust_spines(axPres, ['left', 'bottom']) # -- Set the x ticks. axPres.set_xscale('log') axPres.set_xlim((0.75,500)) axPres.set_xticks((nb_procs)) axPres.set_xticklabels( (r'1', r'2', r'4', r'12', r'24', r'48', r'96', r'192', r'384'), color='gray' ) axPres.xaxis.set_ticks_position('bottom') for tic in axPres.xaxis.get_major_ticks(): tic.tick1On = tic.tick2On = False # -- Set the y ticks. axPres.set_ylim((0,1)) axPres.set_yticks((0.0,0.5,1.0)) axPres.set_yticklabels((r'0', '', r'1')) axPres.yaxis.set_ticks_position('left') axPres.tick_params(axis='y', colors='gray') #for tac in axPres.yaxis.get_major_ticks(): # tac.tick1On = tac.tick2On = False for toc in axPres.xaxis.get_minor_ticks(): toc.tick1On = toc.tick2On = False # -- Set the titles of the axes. axPres.set_ylabel(r\"Efficacit\\'e\", color='gray', rotation='horizontal') axPres.yaxis.set_label_position('right') axPres.set_xlabel(r\"Nombre de processeurs\", color='gray') plt.show() ```", "response":"You can move the ylabel using ax.yaxis.set_label_coords, which does accept negative numbers. For your example, I removed the line with set_label_position, and added: ``` axPres.yaxis.set_label_coords(-0.1,1.02) ``` From the documentation: Axis.set_label_coords(x, y, transform=None) Set the coordinates of the label. By default, the x coordinate of the y label and the y coordinate of the x label are determined by the tick label bounding boxes, but this can lead to poor alignment of multiple labels if there are multiple axes. You can also specify the coordinate system of the label with the transform. If None, the default coordinate system will be the axes coordinate system: (0, 0) is bottom left, (0.5, 0.5) is center, etc. 
For further examples of using this method, see the second example on the matplotlib Align y-labels example page.", "best_answers_score":0.6906, "library_name":"matplotlib", "question_url":"https:\/\/stackoverflow.com\/questions\/37815976\/setting-the-position-of-the-ylabel", "best_answers_votes":69, "question_length":3057, "response_length":842 }, { "question":"Matplotlib cannot find basic fonts I am using matplotlib version 2.0.0 on Python 3 in a miniconda virtual environment. I am working on a unix scientific computing cluster where I don't have root privileges. I am generally executing python code through an ipython notebook. If I do a basic command such as: ``` import matplotlib.pyplot as plt plt.scatter([1,5], [1,5]) ``` I get an error message: ``` path_to_miniconda\/miniconda3\/envs\/conda34\/lib\/python3.4\/site- packages\/matplotlib\/font_manager.py:1297: UserWarning: findfont: Font family ['sans-serif'] not found. Falling back to DejaVu Sans (prop.get_family(), self.defaultFamily[fontext])) ``` I would like to be able to use a Times New Roman font but even after deleting the Font cache file (fontList.py3k.cache) which I find from here: ``` import matplotlib as mpl fm = mpl.font_manager fm.get_cachedir() ``` The commands: ``` mpl.rcParams['font.family'] = ['serif'] mpl.rcParams['font.serif'] = ['Times New Roman'] ``` have no effect, I get the same error as above. The true type fonts directory: path_to_miniconda\/miniconda3\/envs\/conda34\/lib\/python3.4\/site-packages\/matplotlib\/mpl-data\/fonts\/ttf\/ only has 40 fonts in it of the type: DejaVuSerif,DejaVuSans,STIX,cmb, cmt, cmy Any idea what could be going on and how I can add additional fonts? Thanks!", "response":"To get it to work, I had to combine the two current top answers. 
Here's what worked for me: ``` sudo apt install msttcorefonts -qq && rm -rf ~\/.cache\/matplotlib # install the fonts, then clear matplotlib's font cache ```", "best_answers_score":0.6905, "library_name":"matplotlib", "question_url":"https:\/\/stackoverflow.com\/questions\/42097053\/matplotlib-cannot-find-basic-fonts", "best_answers_votes":149, "question_length":1308, "response_length":176 }, { "question":"Boxplots in matplotlib: Markers and outliers I have some questions about boxplots in matplotlib: Question A. What do the markers that I highlighted below with Q1, Q2, and Q3 represent? I believe Q1 is maximum and Q3 are outliers, but what is Q2? Question B How does matplotlib identify outliers? (i.e. how does it know that they are not the true max and min values?)", "response":"A picture is worth a thousand words. Note that the outliers (the + markers in your plot) are simply points outside of the wide [(Q1-1.5 IQR), (Q3+1.5 IQR)] margin below. However, the picture is only an example for a normally distributed data set. It is important to understand that matplotlib does not first estimate a normal distribution and then calculate the quartiles from the estimated distribution parameters, as shown above. Instead, the median and the quartiles are calculated directly from the data. Thus, your boxplot may look different depending on the distribution of your data and the size of the sample, e.g., asymmetric and with more or fewer outliers.", "best_answers_score":0.6901, "library_name":"matplotlib", "question_url":"https:\/\/stackoverflow.com\/questions\/17725927\/boxplots-in-matplotlib-markers-and-outliers", "best_answers_votes":107, "question_length":366, "response_length":661 }, { "question":"Strange error with matplotlib axes labels I'm very new to Python and programming in general, so apologies in advance if I'm missing something obvious. I'm trying to plot a graph and label the axes, but every time I try to label the y axis an exception is raised.
I wrote the code below in a new script to make sure the problem wasn't coming from somewhere else in the module. I'm using Python 3.4. ``` from numpy import * from matplotlib import * a = [1, 2, 3, 4, 5] b = [2, 3, 2, 3, 2] pyplot.plot(a, b) pylab.xlabel(\"Time\") pylab.ylabel(\"Speed\") ``` Every time, I get the error 'TypeError: 'str' object is not callable' for the final line. If I change the y to an x, everything is fine. If I change the x to a y, I get the same error. However, ylabel comes up on the drop down list for ylabel so the function does exist and the documentation says a string is the only necessary argument, exactly as for xlabel (matplotlib.pyplot.ylabel(s, *args, **kwargs) and matplotlib.pyplot.xlabel(s, *args, **kwargs)). What on earth could be going on here?", "response":"I had this same issue when working in iPython notebook. I think it can be re-created as follows: ``` import matplotlib.pyplot as plt plt.ylabel = 'somestring' # oh wait this isn't the right syntax. ... plt.ylabel('somestring') # now this breaks because the function has been turned into a string ``` Re-starting the kernel or re-importing the libraries restores plt.ylabel to a function.", "best_answers_score":0.6899, "library_name":"matplotlib", "question_url":"https:\/\/stackoverflow.com\/questions\/24120023\/strange-error-with-matplotlib-axes-labels", "best_answers_votes":195, "question_length":1046, "response_length":387 }, { "question":"Plotting transparent histogram with non transparent edge I am plotting a histogram, and I have three datasets which I want to plot together, each one with different colours and linetype (dashed, dotted, etc). I am also giving some transparency, in order to see the overlapping bars. The point is that I would like the edge of each bar not to become transparent as the inner part does. 
Here is an example: ``` import matplotlib.pyplot as plt import numpy as np x = np.random.random(20) y =np.random.random(20) z= np.random.random(20) fig = plt.figure() ax = fig.add_subplot(111) ax.hist(x, bins=np.arange(0, 1, 0.1), ls='dashed', alpha = 0.5, lw=3, color= 'b') ax.hist(y, bins=np.arange(0, 1, 0.1), ls='dotted', alpha = 0.5, lw=3, color= 'r') ax.hist(z, bins=np.arange(0, 1, 0.1), alpha = 0.5, lw=3, color= 'k') ax.set_xlim(-0.5, 1.5) ax.set_ylim(0, 7) plt.show() ```", "response":"plt.hist accepts additional keyword arguments that are passed to the constructor for matplotlib.patches.Patch. In particular you can pass an fc= argument which lets you set the patch facecolor using an (R, G, B, A) tuple when you create the histograms. Changing the alpha value of the facecolor does not affect the transparency of the edges: ``` ax.hist(x, bins=np.arange(0, 1, 0.1), ls='dashed', lw=3, fc=(0, 0, 1, 0.5)) ax.hist(y, bins=np.arange(0, 1, 0.1), ls='dotted', lw=3, fc=(1, 0, 0, 0.5)) ax.hist(z, bins=np.arange(0, 1, 0.1), lw=3, fc=(0, 0, 0, 0.5)) ```", "best_answers_score":0.6897, "library_name":"matplotlib", "question_url":"https:\/\/stackoverflow.com\/questions\/28398200\/plotting-transparent-histogram-with-non-transparent-edge", "best_answers_votes":54, "question_length":866, "response_length":564 }, { "question":"Inserting a degree symbol into python plot This is a really simple problem but its escaping me. I'm just trying to insert a degree symbol into the titles and legends of my python plot. Code is below. Thanks. 
``` from numpy import * import numpy as np import matplotlib.pyplot as plt theta1 = linspace(0,60,610) theta2 = linspace(0,45,460) theta3 = linspace(45,90,460) CTS = 1\/cos(radians(theta1)) CTS0 = 1\/cos(radians(60-theta2)) CTS45 = 1\/cos(radians(105-theta3)) plt.plot(theta1,CTS,label=u'CTS Head at 0',linewidth=2) plt.plot(theta2,CTS0,label='CTS Head at 60',linewidth=2) plt.plot(theta3,CTS45,label='CTS Head at 105',linewidth=2) plt.xlabel('Manufactured Ply Angle (degrees)') plt.ylabel('Thickness') plt.legend( loc='lower right', numpoints = 1 ) plt.ylim([0,2.5]) plt.grid(b=None, which='major', axis='both') plt.grid(color='k', linestyle='--', linewidth=0.5) plt.axhline(y=1.035, xmin=0, xmax=90,color='k', linestyle='-', linewidth=1) plt.show() ```", "response":"Use LaTeX Style. For Example: $^\\circ$ Text would produce \u00b0Text See the matplotlib documentation for more information about printing (especially mathematical expression). In your case the code has to be: plt.xlabel('Manufactured Ply Angle $^\\circ$') The TeX part of the expression must be enclosed by dollar signs \"$\".", "best_answers_score":0.6894, "library_name":"matplotlib", "question_url":"https:\/\/stackoverflow.com\/questions\/19926246\/inserting-a-degree-symbol-into-python-plot", "best_answers_votes":55, "question_length":959, "response_length":318 }, { "question":"How can I plot NaN values as a special color with imshow? I am trying to use imshow in matplotlib to plot data as a heatmap, but some of the values are NaNs. I'd like the NaNs to be rendered as a special color not found in the colormap. example: ``` import numpy as np import matplotlib.pyplot as plt f = plt.figure() ax = f.add_subplot(111) a = np.arange(25).reshape((5,5)).astype(float) a[3,:] = np.nan ax.imshow(a, interpolation='nearest') f.canvas.draw() ``` The resultant image is unexpectedly all blue (the lowest color in the jet colormap). 
However, if I do the plotting like this: ``` ax.imshow(a, interpolation='nearest', vmin=0, vmax=24) ``` --then I get something better, but the NaN values are drawn the same color as vmin... Is there a graceful way that I can set NaNs to be drawn with a special color (eg: gray or transparent)?", "response":"With newer versions of Matplotlib, it is not necessary to use a masked array anymore. For example, let\u2019s generate an array where every 7th value is a NaN: ``` arr = np.arange(100, dtype=float).reshape(10, 10) arr[~(arr % 7).astype(bool)] = np.nan ``` .cm.get_cmap() is replaced by .colormaps.get_cmap('viridis') in matplotlib v3.7.0 Set the color with .set_bad. ```py import matplotlib as mpl import matplotlib.pyplot as plt import numpy as np arr = np.arange(100, dtype=float).reshape(10, 10) arr[~(arr % 7).astype(bool)] = np.nan cmap = mpl.colormaps.get_cmap('viridis') # viridis is the default colormap for imshow cmap.set_bad(color='red') plt.imshow(arr, cmap=cmap) ``` .cm.get_cmap() is deprecated We can modify the current colormap and plot the array with the following lines: ``` current_cmap = mpl.cm.get_cmap() current_cmap.set_bad(color='red') plt.imshow(arr) ```", "best_answers_score":0.6893, "library_name":"matplotlib", "question_url":"https:\/\/stackoverflow.com\/questions\/2578752\/how-can-i-plot-nan-values-as-a-special-color-with-imshow", "best_answers_votes":84, "question_length":841, "response_length":874 }, { "question":"Increase DPI of plt.show() I'm using Matplotlib in a Jupyter Notebook to display an image of a map. 
The code looks like this: ``` %matplotlib inline imgpath = '.\/map.png' import matplotlib.pyplot as plt import matplotlib.image as mpimg import numpy as np from PIL import Image img = Image.open(imgpath) print(img.size) width, height = img.size # img.thumbnail((width * 2,height * 2), Image.ANTIALIAS) # resizes image in-place imgplot = plt.imshow(img) plt.savefig('test.png', dpi = 300) ``` The problem is, although the plt.savefig('test.png', dpi = 300) looks fine (because I changed the dpi to 300), the image displayed in the notebook is so low resolution I can't make anything out on it, and plt.imshow(img, dpi = 300) doesn't work: So what I'm wondering is if there is a way to change the resolution of the image shown in the Jupyter Notebook?", "response":"Add this at the beginning of the notebook: ``` import matplotlib as mpl mpl.rcParams['figure.dpi'] = 300 ``` That's it !", "best_answers_score":0.6886, "library_name":"matplotlib", "question_url":"https:\/\/stackoverflow.com\/questions\/51937381\/increase-dpi-of-plt-show", "best_answers_votes":143, "question_length":848, "response_length":120 }, { "question":"Python Matplotlib figure title overlaps axes label when using twiny I am trying to plot two separate quantities on the same graph using twiny as follows: ``` fig = figure() ax = fig.add_subplot(111) ax.plot(T, r, 'b-', T, R, 'r-', T, r_geo, 'g-') ax.set_yscale('log') ax.annotate('Approx. sea level', xy=(Planet.T_day*1.3,(Planet.R)\/1000), xytext=(Planet.T_day*1.3, Planet.R\/1000)) ax.annotate('Geostat. 
orbit', xy=(Planet.T_day*1.3, r_geo[0]), xytext=(Planet.T_day*1.3, r_geo[0])) ax.set_xlabel('Rotational period (hrs)') ax.set_ylabel('Orbital radius (km), logarithmic') ax.set_title('Orbital charts for ' + Planet.N, horizontalalignment='center', verticalalignment='top') ax2 = ax.twiny() ax2.plot(v,r,'k-') ax2.set_xlabel('Linear speed (ms-1)') show() ``` and the data is presented fine, but I am having the problem that the figure title is overlapping with the axes labels on the secondary x axis so that it's barely legible (I wanted to post a picture example here, but I don't have a high enough rep yet). I'd like to know if there's a straightforward way to just shift the title directly up a few tens of pixels, so that the chart looks prettier.", "response":"I'm not sure whether it is a new feature in later versions of matplotlib, but at least for 1.3.1, this is simply: ``` plt.title(figure_title, y=1.08) ``` This also works for plt.suptitle(), but not (yet) for plt.xlabel(), etc.", "best_answers_score":0.6875, "library_name":"matplotlib", "question_url":"https:\/\/stackoverflow.com\/questions\/12750355\/python-matplotlib-figure-title-overlaps-axes-label-when-using-twiny", "best_answers_votes":301, "question_length":1154, "response_length":226 }, { "question":"What is y axis in seaborn distplot? I have some geometrically distributed data. When I want to take a look at it, I use ``` sns.distplot(data, kde=False, norm_hist=True, bins=100) ``` which results is a picture: However, bins heights don't add up to 1, which means y axis doesn't show probability, it's something different. If instead we use ``` weights = np.ones_like(np.array(data))\/float(len(np.array(data))) plt.hist(data, weights=weights, bins = 100) ``` the y axis shall show probability, as bins heights sum up to 1: It can be seen more clearly here: suppose we have a list ``` l = [1, 3, 2, 1, 3] ``` We have two 1s, two 3s and one 2, so their respective probabilities are 2\/5, 2\/5 and 1\/5. 
When we use seaborn distplot with 3 bins: ``` sns.distplot(l, kde=False, norm_hist=True, bins=3) ``` we get: As you can see, the 1st and the 3rd bin alone sum up to 0.6+0.6=1.2, which is already greater than 1, so the y axis is not a probability. When we use ``` weights = np.ones_like(np.array(l))\/float(len(np.array(l))) plt.hist(l, weights=weights, bins = 3) ``` we get: and the y axis is a probability, as 0.4+0.4+0.2=1 as expected. The number of bins is the same for both methods in each case: 100 bins for the geometrically distributed data, 3 bins for the small array l with 3 possible values. So the bin count is not the issue. My question is: in seaborn distplot called with norm_hist=True, what is the meaning of the y axis?", "response":"The x-axis is the value of the variable, just like in a histogram, but what exactly does the y-axis represent? The y-axis in a density plot is the probability density function for the kernel density estimation. However, we need to be careful to specify that this is a probability density and not a probability. The difference is that the probability density is the probability per unit on the x-axis. To convert to an actual probability, we need to find the area under the curve for a specific interval on the x-axis. Somewhat confusingly, because this is a probability density and not a probability, the y-axis can take values greater than one. The only requirement of the density plot is that the total area under the curve integrates to one. I generally tend to think of the y-axis on a density plot as a value only for relative comparisons between different categories.
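This 'density, not probability' distinction can be checked numerically without plotting at all: NumPy's histogram with density=True applies the same kind of normalization that norm_hist does, and it reproduces the bin heights from the three-bin example above (a small sketch; only numpy is needed):

```python
import numpy as np

# The l = [1, 3, 2, 1, 3] example above, normalized as a density:
# bin heights are probabilities per unit of x, not probabilities.
l = np.array([1, 3, 2, 1, 3])
heights, edges = np.histogram(l, bins=3, density=True)
widths = np.diff(edges)

print(heights.round(2))          # [0.6 0.3 0.6] -- the heights alone sum to 1.5
print((heights * widths).sum())  # ~1.0: the area, not the heights, sums to one
```

Each bin here is two-thirds of a unit wide, so the 0.6, 0.3, 0.6 heights correspond exactly to the 0.4, 0.2, 0.4 probabilities from the weighted plt.hist call.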
(Adapted from https:\/\/towardsdatascience.com\/histograms-and-density-plots-in-python-f6bda88f5ac0.)", "best_answers_score":0.6875, "library_name":"matplotlib", "question_url":"https:\/\/stackoverflow.com\/questions\/51666784\/what-is-y-axis-in-seaborn-distplot", "best_answers_votes":26, "question_length":1432, "response_length":973 }, { "question":"How to set the matplotlib figure default size in ipython notebook? I use \"$ipython notebook --pylab inline\" to start the ipython notebook. The displayed matplotlib figure size is too big for me, and I have to adjust it manually. How do I set the default size for the figure displayed in a cell?", "response":"I believe the following works in version 0.11 and above. To check the version: ``` $ ipython --version ``` It may be worth adding this information to your question. Solution: You need to find the file ipython_notebook_config.py. Depending on your installation process, this should be somewhere like ``` .config\/ipython\/profile_default\/ipython_notebook_config.py ``` where .config is in your home directory. Once you have located this file, find the following lines ``` # Subset of matplotlib rcParams that should be different for the inline backend. # c.InlineBackend.rc = {'font.size': 10, 'figure.figsize': (6.0, 4.0), 'figure.facecolor': 'white', 'savefig.dpi': 72, 'figure.subplot.bottom': 0.125, 'figure.edgecolor': 'white'} ``` Uncomment this line c.InlineBack... and define your default figsize in the second dictionary entry.
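For instance, with only the figure size changed from the commented defaults above, the uncommented line could look like this (a sketch of the config file contents; the size itself is just an example):

```python
# In ipython_notebook_config.py -- 'c' is the config object provided to the file:
c.InlineBackend.rc = {'figure.figsize': (4.0, 3.0)}  # example size, pick your own
```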
Note that this could be done in a python script (and hence interactively in IPython) using ``` pylab.rcParams['figure.figsize'] = (10.0, 8.0) ```", "best_answers_score":0.6873, "library_name":"matplotlib", "question_url":"https:\/\/stackoverflow.com\/questions\/17230797\/how-to-set-the-matplotlib-figure-default-size-in-ipython-notebook", "best_answers_votes":134, "question_length":288, "response_length":979 }, { "question":"Matplotlib log scale tick label number formatting With matplotlib when a log scale is specified for an axis, the default method of labeling that axis is with numbers that are 10 to a power eg. 10^6. Is there an easy way to change all of these labels to be their full numerical representation? eg. 1, 10, 100, etc. Note that I do not know what the range of powers will be and want to support an arbitrary range (negatives included).", "response":"Sure, just change the formatter. For example, if we have this plot: ``` import matplotlib.pyplot as plt fig, ax = plt.subplots() ax.axis([1, 10000, 1, 100000]) ax.loglog() plt.show() ``` You could set the tick labels manually, but then the tick locations and labels would be fixed when you zoom\/pan\/etc. Therefore, it's best to change the formatter. By default, a logarithmic scale uses a LogFormatter, which will format the values in scientific notation. To change the formatter to the default for linear axes (ScalarFormatter) use e.g. ``` from matplotlib.ticker import ScalarFormatter for axis in [ax.xaxis, ax.yaxis]: axis.set_major_formatter(ScalarFormatter()) ```", "best_answers_score":0.6872, "library_name":"matplotlib", "question_url":"https:\/\/stackoverflow.com\/questions\/21920233\/matplotlib-log-scale-tick-label-number-formatting", "best_answers_votes":80, "question_length":431, "response_length":669 }, { "question":"How to invert the x or y axis I have a scatter plot graph with a bunch of random x, y coordinates. Currently the Y-Axis starts at 0 and goes up to the max value. 
I would like the Y-Axis to start at the max value and go up to 0. ``` points = [(10,5), (5,11), (24,13), (7,8)] x_arr = [] y_arr = [] for x,y in points: x_arr.append(x) y_arr.append(y) plt.scatter(x_arr,y_arr) ```", "response":"There is a new API that makes this even simpler. ``` plt.gca().invert_xaxis() ``` and\/or ``` plt.gca().invert_yaxis() ```", "best_answers_score":0.687, "library_name":"matplotlib", "question_url":"https:\/\/stackoverflow.com\/questions\/2051744\/how-to-invert-the-x-or-y-axis", "best_answers_votes":848, "question_length":375, "response_length":121 }, { "question":"warning about too many open figures In a script where I create many figures with fix, ax = plt.subplots(...), I get the warning RuntimeWarning: More than 20 figures have been opened. Figures created through the pyplot interface (matplotlib.pyplot.figure) are retained until explicitly closed and may consume too much memory. However, I don't understand why I get this warning, because after saving the figure with fig.savefig(...), I delete it with fig.clear(); del fig. At no point in my code, I have more than one figure open at a time. Still, I get the warning about too many open figures. What does that mean \/ how can I avoid getting the warning?", "response":"Use .clf or .cla on your figure object instead of creating a new figure. From @DavidZwicker Assuming you have imported pyplot as ``` import matplotlib.pyplot as plt ``` plt.cla() clears an axis, i.e. the currently active axis in the current figure. It leaves the other axes untouched. plt.clf() clears the entire current figure with all its axes, but leaves the window opened, such that it may be reused for other plots. plt.close() closes a window, which will be the current window, if not specified otherwise. plt.close('all') will close all open figures. The reason that del fig does not work is that the pyplot state-machine keeps a reference to the figure around (as it must if it is going to know what the 'current figure' is). 
This means that even if you delete your ref to the figure, there is at least one live ref, hence it will never be garbage collected. Since I'm polling on the collective wisdom here for this answer, @JoeKington mentions in the comments that plt.close(fig) will remove a specific figure instance from the pylab state machine (plt._pylab_helpers.Gcf) and allow it to be garbage collected.", "best_answers_score":0.6863, "library_name":"matplotlib", "question_url":"https:\/\/stackoverflow.com\/questions\/21884271\/warning-about-too-many-open-figures", "best_answers_votes":324, "question_length":651, "response_length":1119 }, { "question":"How to update matplotlib's imshow() window interactively? I'm working on some computer vision algorithm and I'd like to show how a numpy array changes in each step. What works now is that if I have a simple imshow( array ) at the end of my code, the window displays and shows the final image. However what I'd like to do is to update and display the imshow window as the image changes in each iteration. So for example I'd like to do: ``` import numpy as np import matplotlib.pyplot as plt import time array = np.zeros( (100, 100), np.uint8 ) for i in xrange( 0, 100 ): for j in xrange( 0, 50 ): array[j, i] = 1 #_show_updated_window_briefly_ plt.imshow( array ) time.sleep(0.1) ``` The problem is that this way, the Matplotlib window doesn't get activated, only once the whole computation is finished. I've tried both native matplotlib and pyplot, but the results are the same. For plotting commands I found an .ion() switch, but here it doesn't seem to work. Q1. What is the best way to continuously display updates to a numpy array (actually a uint8 greyscale image)? Q2. Is it possible to do this with an animation function, like in the dynamic image example? I'd like to call a function inside a loop, thus I don't know how to achieve this with an animation function.", "response":"You don't need to call imshow all the time. 
It is much faster to use the object's set_data method: ``` myobj = imshow(first_image) for pixel in pixels: addpixel(pixel) myobj.set_data(segmentedimg) draw() ``` The draw() should make sure that the backend updates the image. UPDATE: your question was significantly modified. In such cases it is better to ask another question. Here is a way to deal with your second question: Matplotlib's animation only deals with one increasing dimension (time), so your double loop won't do. You need to convert your indices to a single index. Here is an example: ``` import numpy as np from matplotlib import pyplot as plt from matplotlib import animation nx = 150 ny = 50 fig = plt.figure() data = np.zeros((nx, ny)) im = plt.imshow(data, cmap='gist_gray_r', vmin=0, vmax=1) def init(): im.set_data(np.zeros((nx, ny))) def animate(i): xi = i \/\/ ny yi = i % ny data[xi, yi] = 1 im.set_data(data) return im anim = animation.FuncAnimation(fig, animate, init_func=init, frames=nx * ny, interval=50) ```", "best_answers_score":0.6861, "library_name":"matplotlib", "question_url":"https:\/\/stackoverflow.com\/questions\/17835302\/how-to-update-matplotlibs-imshow-window-interactively", "best_answers_votes":58, "question_length":1272, "response_length":1033 }, { "question":"How can I create stacked line graph? I would like to be able to produce a stacked line graph (similar to the method used here) with Python (preferably using matplotlib, but another library would be fine too). How can I do this? This similar to the stacked bar graph example on their website, except I'd like the top of bar to be connected with a line segment and the area underneath to be filled. 
I might be able to approximate this by decreasing the gaps between bars and using lots of bars (but this seems like a hack, and besides I'm not sure if it is possible).", "response":"Newer versions of matplotlib contain the function plt.stackplot(), which allows for several different \"out-of-the-box\" stacked area plots: ``` import numpy as np import pylab as plt X = np.arange(0, 10, 1) Y = X + 5 * np.random.random((5, X.size)) baseline = [\"zero\", \"sym\", \"wiggle\", \"weighted_wiggle\"] for n, v in enumerate(baseline): plt.subplot(2 ,2, n + 1) plt.stackplot(X, *Y, baseline=v) plt.title(v) plt.axis('tight') plt.show() ```", "best_answers_score":0.6859, "library_name":"matplotlib", "question_url":"https:\/\/stackoverflow.com\/questions\/2225995\/how-can-i-create-stacked-line-graph", "best_answers_votes":74, "question_length":565, "response_length":440 }, { "question":"How to put the legend outside the plot I have a series of 20 plots (not subplots) to be made in a single figure. I want the legend to be outside of the box. At the same time, I do not want to change the axes, as the size of the figure gets reduced. I want to keep the legend box outside the plot area (I want the legend to be outside at the right side of the plot area). Is there a way to reduce the font size of the text inside the legend box, so that the size of the legend box will be small?", "response":"There are a number of ways to do what you want. To add to what Christian Alis and Navi already said, you can use the bbox_to_anchor keyword argument to place the legend partially outside the axes and\/or decrease the font size. 
Before you consider decreasing the font size (which can make things awfully hard to read), try playing around with placing the legend in different places: So, let's start with a generic example: ``` import matplotlib.pyplot as plt import numpy as np x = np.arange(10) fig = plt.figure() ax = plt.subplot(111) for i in range(5): ax.plot(x, i * x, label='$y = %ix$' % i) ax.legend() plt.show() ``` If we do the same thing, but use the bbox_to_anchor keyword argument we can shift the legend slightly outside the axes boundaries: ``` import matplotlib.pyplot as plt import numpy as np x = np.arange(10) fig = plt.figure() ax = plt.subplot(111) for i in range(5): ax.plot(x, i * x, label='$y = %ix$' % i) ax.legend(bbox_to_anchor=(1.1, 1.05)) plt.show() ``` Similarly, make the legend more horizontal and\/or put it at the top of the figure (I'm also turning on rounded corners and a simple drop shadow): ``` import matplotlib.pyplot as plt import numpy as np x = np.arange(10) fig = plt.figure() ax = plt.subplot(111) for i in range(5): line, = ax.plot(x, i * x, label='$y = %ix$'%i) ax.legend(loc='upper center', bbox_to_anchor=(0.5, 1.05), ncol=3, fancybox=True, shadow=True) plt.show() ``` Alternatively, shrink the current plot's width, and put the legend entirely outside the axis of the figure (note: if you use tight_layout(), then leave out ax.set_position(): ``` import matplotlib.pyplot as plt import numpy as np x = np.arange(10) fig = plt.figure() ax = plt.subplot(111) for i in range(5): ax.plot(x, i * x, label='$y = %ix$'%i) # Shrink current axis by 20% box = ax.get_position() ax.set_position([box.x0, box.y0, box.width * 0.8, box.height]) # Put a legend to the right of the current axis ax.legend(loc='center left', bbox_to_anchor=(1, 0.5)) plt.show() ``` And in a similar manner, shrink the plot vertically, and put a horizontal legend at the bottom: ``` import matplotlib.pyplot as plt import numpy as np x = np.arange(10) fig = plt.figure() ax = plt.subplot(111) for i in range(5): 
line, = ax.plot(x, i * x, label='$y = %ix$'%i) # Shrink current axis's height by 10% on the bottom box = ax.get_position() ax.set_position([box.x0, box.y0 + box.height * 0.1, box.width, box.height * 0.9]) # Put a legend below current axis ax.legend(loc='upper center', bbox_to_anchor=(0.5, -0.05), fancybox=True, shadow=True, ncol=5) plt.show() ``` Have a look at the matplotlib legend guide. You might also take a look at plt.figlegend().", "best_answers_score":0.6841, "library_name":"matplotlib", "question_url":"https:\/\/stackoverflow.com\/questions\/4700614\/how-to-put-the-legend-outside-the-plot", "best_answers_votes":2502, "question_length":494, "response_length":2664 }, { "question":"How do I add a title and axis labels to Seaborn Heatmap? I want to add a title to a seaborn heatmap. Using Pandas and iPython Notebook code is below, ``` a1_p = a1.pivot_table( index='Postcode', columns='Property Type', values='Count', aggfunc=np.mean, fill_value=0) sns.heatmap(a1_p, cmap=\"YlGnBu\") ``` the data is pretty straight forward: ``` In [179]: a1_p Out [179]: Property Type Flat Terraced house Unknown Postcode E1 11 0 0 E14 12 0 0 E1W 6 0 0 E2 6 0 0 ```", "response":"heatmap is an axes-level function, so you should be able to use just plt.title or ax.set_title: ``` %matplotlib inline import numpy as np import os import seaborn as sns import matplotlib.pyplot as plt data = np.random.randn(10,12) ax = plt.axes() sns.heatmap(data, ax = ax) ax.set_title('lalala') plt.show() ```", "best_answers_score":0.6839, "library_name":"matplotlib", "question_url":"https:\/\/stackoverflow.com\/questions\/32723798\/how-do-i-add-a-title-and-axis-labels-to-seaborn-heatmap", "best_answers_votes":102, "question_length":465, "response_length":312 }, { "question":"How to remove frame from a figure To remove frame in figure, I write ``` frameon=False ``` works perfect with pyplot.figure, but with matplotlib.Figure it only removes the gray background, the frame stays. 
Also, I only want the lines to show, and all the rest of the figure to be transparent. With pyplot I can do what I want; I want to do it with matplotlib for a longish reason I'd rather not go into here.", "response":"ax.axis('off') will, as Joe Kington pointed out, remove everything except the plotted line. For those wanting only to remove the frame (border) and keep labels, ticks, etc., one can do that by accessing the spines object on the axes. Given an axes object ax, the following should remove the borders on all four sides: ``` ax.spines['top'].set_visible(False) ax.spines['right'].set_visible(False) ax.spines['bottom'].set_visible(False) ax.spines['left'].set_visible(False) ``` And, in case you want to remove the x and y ticks from the plot: ``` ax.get_xaxis().set_ticks([]) ax.get_yaxis().set_ticks([]) ```", "best_answers_score":0.6833, "library_name":"matplotlib", "question_url":"https:\/\/stackoverflow.com\/questions\/14908576\/how-to-remove-frame-from-a-figure", "best_answers_votes":442, "question_length":418, "response_length":592 }, { "question":"Matplotlib: Scatter Plot to Foreground on top of a Contour Plot Does anyone know a way to bring a scatter plot to the foreground in matplotlib? I have to display the scatter plot on top of the contour, but by default it is plotted underneath...", "response":"You can manually choose in which order the different plots are to be displayed with the zorder parameter of e.g. the scatter method. To demonstrate, see the code below, where the scatter plot in the left subplot has zorder=1 and in the right subplot it has zorder=-1. The object with the highest zorder is placed on top. This means that the scatter will be placed on top of the contour in the first subplot, while it is placed underneath in the second subplot.
``` import numpy as np import matplotlib.cm as cm import matplotlib.pyplot as plt # mlab.bivariate_normal was removed from matplotlib, so an equivalent helper is defined here def bivariate_normal(X, Y, sigmax, sigmay, mux, muy): return np.exp(-0.5 * (((X - mux) \/ sigmax) ** 2 + ((Y - muy) \/ sigmay) ** 2)) \/ (2 * np.pi * sigmax * sigmay) delta = 0.025 x = np.arange(-3.0, 3.0, delta) y = np.arange(-2.0, 2.0, delta) X, Y = np.meshgrid(x, y) Z1 = bivariate_normal(X, Y, 1.0, 1.0, 0.0, 0.0) Z2 = bivariate_normal(X, Y, 1.5, 0.5, 1, 1) Z = 10.0 * (Z2 - Z1) norm = cm.colors.Normalize(vmax=abs(Z).max(), vmin=-abs(Z).max()) cmap = cm.PRGn levels = np.arange(-2.0, 1.601, 0.4) fig, axes = plt.subplots(1,2, sharey=True) for ax, zord in zip(axes, [1, -1]): ax.contourf(X, Y, Z, levels, cmap=cmap, norm=norm) ax.autoscale(False) # To avoid that the scatter changes the limits ax.scatter(np.random.uniform(-3,3,10), np.random.uniform(-2,2,10), zorder=zord) ax.set_title('Scatter with zorder={0}'.format(zord)) ```", "best_answers_score":0.6833, "library_name":"matplotlib", "question_url":"https:\/\/stackoverflow.com\/questions\/17431441\/matplotlib-scatter-plot-to-foreground-on-top-of-a-contour-plot", "best_answers_votes":103, "question_length":248, "response_length":1275 }, { "question":"How to read image file from S3 bucket directly into memory? I have the following code ``` import matplotlib.pyplot as plt import matplotlib.image as mpimg import numpy as np import boto3 s3 = boto3.resource('s3', region_name='us-east-2') bucket = s3.Bucket('sentinel-s2-l1c') object = bucket.Object('tiles\/10\/S\/DG\/2015\/12\/7\/0\/B01.jp2') object.download_file('B01.jp2') img=mpimg.imread('B01.jp2') imgplot = plt.imshow(img) plt.show(imgplot) ``` and it works, but it downloads the file to the current directory first. Is it possible to read the file and decode it as an image directly in RAM?", "response":"I would suggest using the io module to read the file directly into memory, without having to use a temporary file at all.
For example: ``` import matplotlib.pyplot as plt import matplotlib.image as mpimg import numpy as np import boto3 import io s3 = boto3.resource('s3', region_name='us-east-2') bucket = s3.Bucket('sentinel-s2-l1c') object = bucket.Object('tiles\/10\/S\/DG\/2015\/12\/7\/0\/B01.jp2') file_stream = io.BytesIO() # image data is binary, so a bytes buffer is needed object.download_fileobj(file_stream) file_stream.seek(0) # rewind to the start before reading img = mpimg.imread(file_stream) # whatever you need to do ``` Note that io.BytesIO (rather than io.StringIO) is required here, since the image data is binary; io.StringIO only handles text.", "best_answers_score":0.6833, "library_name":"matplotlib", "question_url":"https:\/\/stackoverflow.com\/questions\/44043036\/how-to-read-image-file-from-s3-bucket-directly-into-memory", "best_answers_votes":76, "question_length":589, "response_length":572 }, { "question":"Putting text in top left corner of matplotlib plot How can I put text in the top left (or top right) corner of a matplotlib figure, e.g. where a top left legend would be, or on top of the plot but in the top left corner? E.g. if it's a plt.scatter(), then something that would be within the square of the scatter, put in the top left most corner. Ideally I'd like to do this without knowing the scale of the scatterplot being plotted, since it will change from dataset to dataset. I just want the text to be roughly in the upper left, or roughly in the upper right. With legend type positioning it should not overlap with any scatter plot points anyway.", "response":"You can use text. ``` plt.text(x, y, s, fontsize=12) ``` Text coordinates can be given relative to the axes, so the position of your text will be independent of the size of the plot: the default transform specifies that text is in data coords; alternatively, you can specify text in axis coords (0,0 is lower-left and 1,1 is upper-right).
The example below places text in the center of the axes: ``` plt.text(0.5, 0.5, 'matplotlib', horizontalalignment='center', verticalalignment='center', transform = ax.transAxes) ``` Preventing the text from interfering with any point of your scatter is more difficult, afaik. The easier method is to set y_axis (ymax in ylim((ymin,ymax))) to a value a bit higher than the max y-coordinate of your points. In this way you will always have this free space for the text. EDIT: here you have an example: ```py from matplotlib import pyplot as plt f, ax = plt.subplots() plt.scatter([3,5,2,6,8],[5,3,2,1,5]) plt.text(.01, .99, 'matplotlib', ha='left', va='top', transform=ax.transAxes) f.tight_layout() ``` The ha and va parameters set the alignment of your text relative to the insertion point, e.g. ha='left' is a good choice to prevent a long text from going out of the left axis when the frame is reduced (made narrower) manually.", "best_answers_score":0.6831, "library_name":"matplotlib", "question_url":"https:\/\/stackoverflow.com\/questions\/8482588\/putting-text-in-top-left-corner-of-matplotlib-plot", "best_answers_votes":242, "question_length":669, "response_length":1255 }, { "question":"Save multiple plots in a single PDF file plotting module ``` def plotGraph(X,Y): fignum = random.randint(0,sys.maxint) plt.figure(fignum) ### Plotting arrangements ### return fignum ``` main module ``` import matplotlib.pyplot as plt ### tempDLStats, tempDLlabels are the arguments plot1 = plotGraph(tempDLstats, tempDLlabels) plot2 = plotGraph(tempDLstats_1, tempDLlabels_1) plot3 = plotGraph(tempDLstats_2, tempDLlabels_2) plt.show() ``` I want to save all the graphs plot1, plot2, plot3 to a single PDF file. Is there any way to achieve it? I can't include the plotGraph function in the main module. There's a function named pyplot.savefig but that seems to work only with a single figure.
Is there any other way to accomplish it?", "response":"If someone ends up here from google, looking to convert a single figure to a .pdf (that was what I was looking for): ``` import matplotlib.pyplot as plt f = plt.figure() plt.plot(range(10), range(10), \"o\") plt.show() f.savefig(\"foo.pdf\", bbox_inches='tight') ```", "best_answers_score":0.6826, "library_name":"matplotlib", "question_url":"https:\/\/stackoverflow.com\/questions\/11328958\/save-multiple-plots-in-a-single-pdf-file", "best_answers_votes":282, "question_length":732, "response_length":262 }, { "question":"Matplotlib savefig with a legend outside the plot Reading the following article, I managed to put a legend outside plot. How to put the legend out of the plot code: ``` import matplotlib.pyplot as pyplot x = [0, 1, 2, 3, 4] y = [xx*xx for xx in x] fig = pyplot.figure() ax = fig.add_subplot(111) box = ax.get_position() ax.set_position([box.x0, box.y0, box.width*0.8, box.height]) ax.plot(x, y) leg = ax.legend(['abc'], loc = 'center left', bbox_to_anchor = (1.0, 0.5)) #pyplot.show() fig.savefig('aaa.png', bbox_inches='tight') ``` pyplot.show() displays the correct plot with a legend outside it. But when I save it as a file with fig.savefig(), the legend is truncated. Some googling shows me workarounds such as adding bbox_extra_artists=[leg.legendPatch] or bbox_extra_artists=[leg] to savefig(), but neither worked. What is the correct way to do it? Matplotlib version is 0.99.3. Thanks.", "response":"The problem is that when you plot dynamically, matplotlib determines the borders automatically to fit all your objects. When you save a file, things are not being done automatically, so you need to specify the size of your figure, and then the bounding box of your axes object. 
Here is how to correct your code: ``` import matplotlib.pyplot as pyplot x = [0, 1, 2, 3, 4] y = [xx*xx for xx in x] fig = pyplot.figure(figsize=(3,3)) ax = fig.add_subplot(111) #box = ax.get_position() #ax.set_position([0.3, 0.4, box.width*0.3, box.height]) # you can set the position manually, by setting left, bottom, width, height of the axes # object ax.set_position([0.1,0.1,0.5,0.8]) ax.plot(x, y) leg = ax.legend(['abc'], loc = 'center left', bbox_to_anchor = (1.0, 0.5)) fig.savefig('aaa.png') ```", "best_answers_score":0.6826, "library_name":"matplotlib", "question_url":"https:\/\/stackoverflow.com\/questions\/8971834\/matplotlib-savefig-with-a-legend-outside-the-plot", "best_answers_votes":29, "question_length":893, "response_length":784 }, { "question":"Why is my plt.savefig is not working? I have a simple python code as follows: ``` import numpy as np import matplotlib.pyplot as plt \"\"\" Here are the solutions and the plot. \"\"\" # Create the axis and plot. plt.axis([0, 10, 0, 10]) axis_x = range(1, 11) grd = [1.1, 2.1, 3.1, 4.1, 5.1, 6.1, 7.1, 8.1, 9.1, 10.1] grd2 = [1.2, 2.2, 3.2, 4.2, 5.2, 6.2, 7.2, 8.2, 9.2, 10.2] plt.plot(axis_x, grd, '-g', label='BR1') plt.plot(axis_x, grd2, '-b', label='BR2') plt.legend(loc='upper left') plt.grid() plt.show() # Save the results vector to a text file. np.savetxt('test.out', (grd, grd2)) # Save the figure as a '.pdf' file. plt.savefig('expl.pdf', format='pdf', dpi=1200) ``` When I open the output files expl.pdf and\/or test.out, I find them blank with nothing in there. Why? Thanks.", "response":"When you close the image displayed by plt.show(), the image is closed and freed from memory.
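A minimal reordered sketch of the script from the question (hypothetical shortened data; the Agg backend is assumed only so the sketch also runs headless):

```python
import matplotlib
matplotlib.use('Agg')  # assumption: non-interactive backend, so plt.show() is a no-op
import numpy as np
import matplotlib.pyplot as plt

axis_x = range(1, 11)
grd = [x + 0.1 for x in axis_x]   # hypothetical data standing in for the question's lists
grd2 = [x + 0.2 for x in axis_x]

plt.plot(axis_x, grd, '-g', label='BR1')
plt.plot(axis_x, grd2, '-b', label='BR2')
plt.legend(loc='upper left')

np.savetxt('test.out', (grd, grd2))    # save the data first
plt.savefig('expl.pdf', format='pdf')  # then save the figure
plt.show()                             # and show last
```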
You should call savefig and savetxt before calling show.", "best_answers_score":0.6826, "library_name":"matplotlib", "question_url":"https:\/\/stackoverflow.com\/questions\/30765455\/why-is-my-plt-savefig-is-not-working", "best_answers_votes":134, "question_length":774, "response_length":149 }, { "question":"Seaborn plots not showing up I'm sure I'm forgetting something very simple, but I cannot get certain plots to work with Seaborn. If I do: ``` import seaborn as sns ``` Then any plots that I create as usual with matplotlib get the Seaborn styling (with the grey grid in the background). However, if I try to do one of the examples, such as: ``` In [1]: import seaborn as sns In [2]: sns.set() In [3]: df = sns.load_dataset('iris') In [4]: sns.pairplot(df, hue='species', size=2.5) Out[4]: ``` The pairplot function returns a PairGrid object, but the plot doesn't show up. I'm a little confused because matplotlib seems to be functioning properly, and the Seaborn styles are applied to other matplotlib plots, but the Seaborn functions don't seem to do anything. Does anybody have any idea what might be the problem?", "response":"Plots created using seaborn need to be displayed like ordinary matplotlib plots. This can be done using the ``` plt.show() ``` function from matplotlib. Originally I posted the solution to use the already imported matplotlib object from seaborn (sns.plt.show()); however, this is considered bad practice. Therefore, simply import the matplotlib.pyplot module directly and show your plots with ``` import matplotlib.pyplot as plt plt.show() ``` If the IPython notebook is used, the inline backend can be invoked to remove the necessity of calling show after each plot.
The respective magic is ``` %matplotlib inline ```", "best_answers_score":0.6824, "library_name":"matplotlib", "question_url":"https:\/\/stackoverflow.com\/questions\/26597116\/seaborn-plots-not-showing-up", "best_answers_votes":591, "question_length":815, "response_length":625 }, { "question":"How to force integer tick labels My python script uses matplotlib to plot a 2D \"heat map\" of an x, y, z dataset. My x- and y-values represent amino acid residues in a protein and can therefore only be integers. When I zoom into the plot, it looks like this: As I said, float values on the x-y axes do not make sense with my data and I therefore want it to look like this: Any ideas how to achieve this? This is the code that generates the plot: ``` def plotDistanceMap(self): # Read on x,y,z x = self.currentGraph['xData'] y = self.currentGraph['yData'] X, Y = numpy.meshgrid(x, y) Z = self.currentGraph['zData'] # Define colormap cmap = colors.ListedColormap(['blue', 'green', 'orange', 'red']) cmap.set_under('white') cmap.set_over('white') bounds = [1,15,50,80,100] norm = colors.BoundaryNorm(bounds, cmap.N) # Draw surface plot img = self.axes.pcolor(X, Y, Z, cmap=cmap, norm=norm) self.axes.set_xlim(x.min(), x.max()) self.axes.set_ylim(y.min(), y.max()) self.axes.set_xlabel(self.currentGraph['xTitle']) self.axes.set_ylabel(self.currentGraph['yTitle']) # Cosmetics #matplotlib.rcParams.update({'font.size': 12}) xminorLocator = MultipleLocator(10) yminorLocator = MultipleLocator(10) self.axes.xaxis.set_minor_locator(xminorLocator) self.axes.yaxis.set_minor_locator(yminorLocator) self.axes.tick_params(direction='out', length=6, width=1) self.axes.tick_params(which='minor', direction='out', length=3, width=1) self.axes.xaxis.labelpad = 15 self.axes.yaxis.labelpad = 15 # Draw colorbar colorbar = self.figure.colorbar(img, boundaries = [0,1,15,50,80,100], spacing = 'proportional', ticks = [15,50,80,100], extend = 'both') colorbar.ax.set_xlabel('Angstrom') 
colorbar.ax.xaxis.set_label_position('top') colorbar.ax.xaxis.labelpad = 20 self.figure.tight_layout() self.canvas.draw() ```", "response":"This should be simpler: (from https:\/\/scivision.co\/matplotlib-force-integer-labeling-of-axis\/) ```py import matplotlib.pyplot as plt from matplotlib.ticker import MaxNLocator #... ax = plt.figure().gca() #... ax.xaxis.set_major_locator(MaxNLocator(integer=True)) ``` Read the official docs: https:\/\/matplotlib.org\/stable\/api\/ticker_api.html#matplotlib.ticker.MaxNLocator", "best_answers_score":0.6823, "library_name":"matplotlib", "question_url":"https:\/\/stackoverflow.com\/questions\/30914462\/how-to-force-integer-tick-labels", "best_answers_votes":208, "question_length":1793, "response_length":370 }, { "question":"Reverse colormap in matplotlib I would like to know how to simply reverse the color order of a given colormap in order to use it with plot_surface.", "response":"The standard colormaps also all have reversed versions. They have the same names with _r tacked on to the end. (Documentation here.)", "best_answers_score":0.6815, "library_name":"matplotlib", "question_url":"https:\/\/stackoverflow.com\/questions\/3279560\/reverse-colormap-in-matplotlib", "best_answers_votes":775, "question_length":147, "response_length":132 }, { "question":"Using %matplotlib notebook after %matplotlib inline in Jupyter Notebook doesn't work I am using Jupyter Notebook for plotting piechart figures. In first cell with my code I have a magic command %matplotlib inline and after this magic command I run my code, everything works fine and my figure renders. But in second cell when I set %matplotlib notebook for interactive plotting my figure won't render after running this second cell. I need to restart kernel and run cell with %matplotlib notebook again and cannot run %matplotlib inline command before that. 
Here is my code for first cell with %matplotlib inline, which renders fine: ``` import matplotlib.pyplot as plt %matplotlib inline labels = \"No\", \"Yes\" sizes = [100, 50] fig, ax = plt.subplots(figsize=(6, 6)) _, texts, autotexts = ax.pie(sizes, explode=explode, labels=labels, colors=colors, autopct='%1.1f%%', shadow=False, startangle=90) ax.axis('equal') ``` After that I have second cell with same code, just %matplotlib inline is changed to %matplotlib notebook. Figure won't render after I run this cell and I need to restart kernel and run this cell again. Why?", "response":"You just have the wrong order of your commands. A backend should be set before importing pyplot in jupyter. Or in other words, after changing the backend, pyplot needs to be imported again. Therefore call %matplotlib ... prior to importing pyplot. In first cell: ``` %matplotlib inline import matplotlib.pyplot as plt plt.plot([1,1.6,3]) ``` In second cell: ``` %matplotlib notebook #calling it a second time may prevent some graphics errors %matplotlib notebook import matplotlib.pyplot as plt plt.plot([1,1.6,3]) ```", "best_answers_score":0.6811, "library_name":"matplotlib", "question_url":"https:\/\/stackoverflow.com\/questions\/43545050\/using-matplotlib-notebook-after-matplotlib-inline-in-jupyter-notebook-doesnt", "best_answers_votes":54, "question_length":1125, "response_length":518 }, { "question":"How to plot normal distribution Given a mean and a variance is there a simple function call which will plot a normal distribution?", "response":"``` import matplotlib.pyplot as plt import numpy as np import scipy.stats as stats import math mu = 0 variance = 1 sigma = math.sqrt(variance) x = np.linspace(mu - 3*sigma, mu + 3*sigma, 100) plt.plot(x, stats.norm.pdf(x, mu, sigma)) plt.show() ```", "best_answers_score":0.681, "library_name":"matplotlib", "question_url":"https:\/\/stackoverflow.com\/questions\/10138085\/how-to-plot-normal-distribution", "best_answers_votes":330, "question_length":130, 
"response_length":248 }, { "question":"Unable to \"import matplotlib.pyplot as plt\" in virtualenv I am working with flask in a virtual environment. I was able to install matplotlib with pip, and I can import matplotlib in a Python session. However, when I import it as ``` matplotlib.pyplot as plt ``` I get the following error: ``` >>> import matplotlib.pyplot as plt Traceback (most recent call last): File \"\", line 1, in File \"\/\/anaconda\/envs\/myenv\/lib\/python2.7\/site-packages\/matplotlib\/pyplot.py\", line 109, in _backend_mod, new_figure_manager, draw_if_interactive, _show = pylab_setup() File \"\/\/anaconda\/envs\/myenv\/lib\/python2.7\/site-packages\/matplotlib\/backends\/__init__.py\", line 32, in pylab_setup globals(),locals(),[backend_name],0) File \"\/\/anaconda\/envs\/myenv\/lib\/python2.7\/site-packages\/matplotlib\/backends\/backend_macosx.py\", line 24, in from matplotlib.backends import _macosx RuntimeError: Python is not installed as a framework. The Mac OS X backend will not be able to function correctly if Python is not installed as a framework. See the Python documentation for more information on installing Python as a framework on Mac OS X. Please either reinstall Python as a framework, or try one of the other backends. ``` I am confused about why it asks me to install Python as framework. Doesn't it already exists? What does it mean to \"install Python as framework\", and how do I install it?", "response":"This solution worked for me. If you already installed matplotlib using pip on your virtual environment, you can just type the following: ``` $ cd ~\/.matplotlib $ nano matplotlibrc ``` And then, write backend: TkAgg in there. 
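For example, a minimal matplotlibrc might contain just this single line (assuming the Tk bindings are installed, since the TkAgg backend needs them):

```
backend: TkAgg
```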
If you need more information, just go to the solution link.", "best_answers_score":0.6809, "library_name":"matplotlib", "question_url":"https:\/\/stackoverflow.com\/questions\/29433824\/unable-to-import-matplotlib-pyplot-as-plt-in-virtualenv", "best_answers_votes":161, "question_length":1366, "response_length":284 }, { "question":"Move seaborn plot legend to a different position I'm using factorplot(kind=\"bar\") with seaborn. The plot is fine except the legend is misplaced: too much to the right, text goes out of the plot's shaded area. How do I make seaborn place the legend somewhere else, such as in top-left instead of middle-right?", "response":"Building on @user308827's answer: you can use legend=False in factorplot and specify the legend through matplotlib: ``` import seaborn as sns import matplotlib.pyplot as plt sns.set(style=\"whitegrid\") titanic = sns.load_dataset(\"titanic\") g = sns.factorplot(\"class\", \"survived\", \"sex\", data=titanic, kind=\"bar\", size=6, palette=\"muted\", legend=False) g.despine(left=True) plt.legend(loc='upper left') g.set_ylabels(\"survival probability\") ``` plt acts on the current axes. To get axes from a FacetGrid use fig. g.fig.get_axes()[0].legend(loc='lower left')", "best_answers_score":0.6797, "library_name":"matplotlib", "question_url":"https:\/\/stackoverflow.com\/questions\/27019079\/move-seaborn-plot-legend-to-a-different-position", "best_answers_votes":93, "question_length":308, "response_length":555 }, { "question":"How can I release memory after creating matplotlib figures I have several matlpotlib functions rolled into some django-celery tasks. Every time the tasks are called more RAM is dedicated to python. Before too long, python is taking up all of the RAM. QUESTION: How can I release this memory? 
UPDATE 2 - A Second Solution: I asked a similar question specifically about the memory locked up when matplotlib errors, but I got a good answer to this question .clf(), .close(), and gc.collect() aren't needed if you use multiprocess to run the plotting function in a separate process whose memory will automatically be freed once the process ends. Matplotlib errors result in a memory leak. How can I free up that memory? UPDATE - The Solution: These stackoverflow posts suggested that I can release the memory used by matplotlib objects with the following commands: .clf(): Matplotlib runs out of memory when plotting in a loop .close(): Python matplotlib: memory not being released when specifying figure size ``` import gc gc.collect() ``` Here is the example I used to test the solution: ``` import matplotlib matplotlib.use('Agg') import matplotlib.pyplot as plt from pylab import figure, savefig import numpy as np import gc a = np.arange(1000000) b = np.random.randn(1000000) fig = plt.figure(num=1, dpi=100, facecolor='w', edgecolor='w') fig.set_size_inches(10,7) ax = fig.add_subplot(111) ax.plot(a, b) fig.clf() plt.close() del a, b gc.collect() ```", "response":"Did you try running your task function several times (in a for loop) to be sure that it is not your function that is leaking, regardless of celery?
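A minimal sketch of such a check, run outside celery (plot_once here is a hypothetical stand-in for the task's plotting body; the Agg backend is assumed):

```python
import gc
import matplotlib
matplotlib.use('Agg')  # assumption: non-interactive backend
import matplotlib.pyplot as plt

def plot_once():
    # hypothetical stand-in for the plotting done inside the celery task
    fig = plt.figure()
    fig.add_subplot(111).plot(range(100))
    plt.close(fig)

for _ in range(50):  # watch the process RSS while this loops
    plot_once()
    gc.collect()
```

If memory stays flat here, the leak is more likely in the celery\/django layer than in the plotting calls themselves.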
Make sure that django.settings.DEBUG is set to False (the connection object holds all queries in memory when DEBUG=True).", "best_answers_score":0.6792, "library_name":"matplotlib", "question_url":"https:\/\/stackoverflow.com\/questions\/7101404\/how-can-i-release-memory-after-creating-matplotlib-figures", "best_answers_votes":6, "question_length":1460, "response_length":247 }, { "question":"Subplot for seaborn boxplot I have a dataframe like this ``` import seaborn as sns import pandas as pd %pylab inline df = pd.DataFrame({'a' :['one','one','two','two','one','two','one','one','one','two'], 'b': [1,2,1,2,1,2,1,2,1,1], 'c': [1,2,3,4,6,1,2,3,4,6]}) ``` A single boxplot is OK: ``` sns.boxplot(y=\"b\", x=\"a\", data=df, orient='v') ``` But I want to build a subplot for all variables. I tried: ``` names = ['b', 'c'] plt.subplots(1,2) sub = [] for name in names: ax = sns.boxplot( y=name, x= \"a\", data=df, orient='v' ) sub.append(ax) ``` but it outputs:", "response":"We create the figure with the subplots: ``` f, axes = plt.subplots(1, 2) ``` Where axes is an array with each subplot. Then we tell each plot which subplot we want it in, with the argument ax. ``` sns.boxplot( y=\"b\", x= \"a\", data=df, orient='v' , ax=axes[0]) sns.boxplot( y=\"c\", x= \"a\", data=df, orient='v' , ax=axes[1]) ``` And the result is:", "best_answers_score":0.6791, "library_name":"matplotlib", "question_url":"https:\/\/stackoverflow.com\/questions\/41384040\/subplot-for-seaborn-boxplot", "best_answers_votes":143, "question_length":561, "response_length":344 }, { "question":"Plotting lines connecting points I know there is another very similar question, but I could not extract the information I need from it.
plotting lines in pairs I have 4 points in the (x,y) plane: x=[x1,x2,x3,x4] and y=[y1,y2,y3,y4] ``` x=[-1 ,0.5 ,1,-0.5] y=[ 0.5, 1, -0.5, -1] ``` Now, I can plot the four points by doing: ``` import matplotlib.pyplot as plt plt.plot(x,y, 'ro') plt.axis('equal') plt.show() ``` But, apart from the four points, I would like to have 2 lines: 1) one connecting (x1,y1) with (x2,y2) and 2) the second one connecting (x3,y3) with (x4,y4). This is a simple toy example. In the real case I have 2N points in the plane. How can I get the desired output: for points with two connecting lines ? Thank you.", "response":"I think you're going to need separate lines for each segment: ``` import numpy as np import matplotlib.pyplot as plt x, y = np.random.random(size=(2,10)) for i in range(0, len(x), 2): plt.plot(x[i:i+2], y[i:i+2], 'ro-') plt.show() ``` (The numpy import is just to set up some random 2x10 sample data)", "best_answers_score":0.6765, "library_name":"matplotlib", "question_url":"https:\/\/stackoverflow.com\/questions\/35363444\/plotting-lines-connecting-points", "best_answers_votes":64, "question_length":731, "response_length":300 }, { "question":"Matplotlib throws warning message because of findfont - python I want to plot a basic graph, basically consisting of two lists of the form x = [1,2,3,...,d], y = [y1,y2,...,yd]. after using_ pyplot.plot(x,y) I get a huge amount of warning messages referring to findfont errors (see below..) I just installed matplotlib. All the threads I could find refer to changing fonts manually and having warning messages popping up. I did not do anything like that. I just installed the package and that is it. I am using pycharm, python 3.6 and windows for programming. ``` DEBUG findfont: Matching :family=sans-serif:style=normal:variant=normal:weight=normal:stretch=normal:size=10.0. 
DEBUG findfont: score() = 10.05 DEBUG findfont: score() = 0.33499999999999996 DEBUG findfont: score() = 11.335 [... hundreds of near-identical findfont score lines elided ...] DEBUG findfont: Matching :family=sans-serif:style=normal:variant=normal:weight=normal:stretch=normal:size=10.0 to DejaVu Sans ('C:\\\\Users\\\\chris\\\\PycharmProjects\\\\DeepNN-Solver\\\\venv\\\\lib\\\\site-packages\\\\matplotlib\\\\mpl-data\\\\fonts\\\\ttf\\\\DejaVuSans.ttf') with score of 0.050000. ```", "response":"Just as an alternative solution, you can disable the matplotlib font manager logger. ``` logging.getLogger('matplotlib.font_manager').disabled = True ``` Or you can just suppress the DEBUG messages but not the others from that logger.
``` logging.getLogger('matplotlib.font_manager').setLevel(logging.ERROR) ```", "best_answers_score":0.6764, "library_name":"matplotlib", "question_url":"https:\/\/stackoverflow.com\/questions\/56618739\/matplotlib-throws-warning-message-because-of-findfont-python", "best_answers_votes":109, "question_length":3984, "response_length":311 }, { "question":"How to dynamically update a plot in a loop in IPython notebook (within one cell) Environment: Python 2.7, Matplotlib 1.3, IPython notebook 1.1, Linux, and Chrome. The code is in one single input cell, using --pylab=inline. I want to use IPython notebook and Pandas to consume a stream and dynamically update a plot every five seconds. When I just use a print statement to print the data in text format, it works perfectly fine: the output cell just keeps printing data and adding new rows. But when I try to plot the data (and then update it in a loop), the plot never shows up in the output cell. But if I remove the loop, and just plot it once, it works fine. Then I did some simple test: ``` i = pd.date_range('2013-1-1',periods=100,freq='s') while True: plot(pd.Series(data=np.random.randn(100), index=i)) #pd.Series(data=np.random.randn(100), index=i).plot() also tried this one time.sleep(5) ``` The output will not show anything until I manually interrupt the process (Ctrl + M + I). And after I interrupt it, the plot shows correctly as multiple overlapped lines. But what I really want is a plot that shows up and gets updated every five seconds (or whenever the plot() function gets called, just like what print statement outputs I mentioned above, which works well). Only showing the final chart after the cell is completely done is not what I want. I even tried to explicitly add the draw() function after each plot(), etc. None of them works. 
How can I dynamically update a plot by a for\/while loop within one cell in IPython notebook?", "response":"Use the IPython.display module: ``` %matplotlib inline import time import pylab as pl from IPython import display for i in range(10): pl.plot(pl.randn(100)) display.clear_output(wait=True) display.display(pl.gcf()) time.sleep(1.0) ```", "best_answers_score":0.6762, "library_name":"matplotlib", "question_url":"https:\/\/stackoverflow.com\/questions\/21360361\/how-to-dynamically-update-a-plot-in-a-loop-in-ipython-notebook-within-one-cell", "best_answers_votes":147, "question_length":1548, "response_length":234 }, { "question":"How can I make a scatter plot colored by density? I'd like to make a scatter plot where each point is colored by the spatial density of nearby points. I've come across a very similar question, which shows an example of this using R: R Scatter Plot: symbol color represents number of overlapping points What's the best way to accomplish something similar in python using matplotlib?", "response":"In addition to hist2d or hexbin as @askewchan suggested, you can use the same method that the accepted answer in the question you linked to uses. If you want to do that: ``` import numpy as np import matplotlib.pyplot as plt from scipy.stats import gaussian_kde # Generate fake data x = np.random.normal(size=1000) y = x * 3 + np.random.normal(size=1000) # Calculate the point density xy = np.vstack([x,y]) z = gaussian_kde(xy)(xy) fig, ax = plt.subplots() ax.scatter(x, y, c=z, s=100) plt.show() ``` If you'd like the points to be plotted in order of density so that the densest points are always on top (similar to the linked example), just sort them by the z-values. 
I'm also going to use a smaller marker size here as it looks a bit better: ``` import numpy as np import matplotlib.pyplot as plt from scipy.stats import gaussian_kde # Generate fake data x = np.random.normal(size=1000) y = x * 3 + np.random.normal(size=1000) # Calculate the point density xy = np.vstack([x,y]) z = gaussian_kde(xy)(xy) # Sort the points by density, so that the densest points are plotted last idx = z.argsort() x, y, z = x[idx], y[idx], z[idx] fig, ax = plt.subplots() ax.scatter(x, y, c=z, s=50) plt.show() ```", "best_answers_score":0.6755, "library_name":"matplotlib", "question_url":"https:\/\/stackoverflow.com\/questions\/20105364\/how-can-i-make-a-scatter-plot-colored-by-density", "best_answers_votes":221, "question_length":381, "response_length":1199 }, { "question":"Plot mean and standard deviation I have several values of a function at different x points. I want to plot the mean and std in python, like the answer of this SO question. I know this must be easy using matplotlib, but I have no idea of the function's name that can do that. Does anyone know it?", "response":"plt.errorbar can be used to plot x, y, error data (as opposed to the usual plt.plot) ``` import matplotlib.pyplot as plt import numpy as np x = np.array([1, 2, 3, 4, 5]) y = np.power(x, 2) # Effectively y = x**2 e = np.array([1.5, 2.6, 3.7, 4.6, 5.5]) plt.errorbar(x, y, e, linestyle='None', marker='^') plt.show() ``` plt.errorbar accepts the same arguments as plt.plot with additional yerr and xerr which default to None (i.e. if you leave them blank it will act as plt.plot).", "best_answers_score":0.6755, "library_name":"matplotlib", "question_url":"https:\/\/stackoverflow.com\/questions\/22481854\/plot-mean-and-standard-deviation", "best_answers_votes":125, "question_length":295, "response_length":478 }, { "question":"Typing Greek letters etc. in plots I need to type Greek letters and the Angstrom symbol in labels of axes in a plot. 
So for example ``` fig.gca().set_xlabel(\"$wavelength\\, (Angstrom)$\") fig.gca().set_ylabel(\"$lambda$\") ``` except that I actually want \"Angstrom\" and \"lambda\" replaced by actual symbols. How should I do this? Thanks!", "response":"You need to make the strings raw and use latex: ``` fig.gca().set_ylabel(r'$\\lambda$') ``` As of matplotlib 2.0 the default font supports most western alphabets and can simple do ``` ax.set_xlabel('\u03bb') ``` with unicode.", "best_answers_score":0.6712, "library_name":"matplotlib", "question_url":"https:\/\/stackoverflow.com\/questions\/13338550\/typing-greek-letters-etc-in-plots", "best_answers_votes":114, "question_length":332, "response_length":219 }, { "question":"Keep plotting window open in Matplotlib When writing scripts that use matplotlib, I temporally get an interactive graphing window when I run the script, which immediately goes away before I can view the plot. If I execute the same code interactively inside iPython, the graphing window stays open. How can I get matplotlib to keep a plot open once it is produces a graph when I run a script? For example, I can save this plot, but I cannot display it with show(): ``` from matplotlib import pyplot as plt import scipy as sp x = sp.arange(10) y = sp.arange(10) plt.plot(x,y) plt.show() ```", "response":"According to the documentation, there's an experimental block parameter you can pass to plt.show(). Of course, if your version of matplotlib isn't new enough, it won't have this. If you have this feature, you should be able to replace plt.show() with plt.show(block=True) to get your desired behavior.", "best_answers_score":0.6704, "library_name":"matplotlib", "question_url":"https:\/\/stackoverflow.com\/questions\/12358312\/keep-plotting-window-open-in-matplotlib", "best_answers_votes":52, "question_length":588, "response_length":301 }, { "question":"How do I plot Shapely polygons and objects using Matplotlib? I want to use Shapely for my computational geometry project. 
I need to be able to visualize and display polygons, lines, and other geometric objects for this. I've tried to use Matplotlib for this but I am having trouble with it. ```py from shapely.geometry import Polygon import matplotlib.pyplot as plt polygon1 = Polygon([(0,5), (1,1), (3,0), ]) plt.plot(polygon1) plt.show() ``` I would like to be able to display this polygon in a plot. How would I change my code to do this?", "response":"Use: ``` import matplotlib.pyplot as plt x,y = polygon1.exterior.xy plt.plot(x,y) ``` Or, more succinctly: ``` plt.plot(*polygon1.exterior.xy) ```", "best_answers_score":0.6703, "library_name":"matplotlib", "question_url":"https:\/\/stackoverflow.com\/questions\/55522395\/how-do-i-plot-shapely-polygons-and-objects-using-matplotlib", "best_answers_votes":144, "question_length":541, "response_length":146 }, { "question":"matplotlib bar graph black - how do I remove bar borders I'm using pyplot.bar but I'm plotting so many points that the color of the bars is always black. This is because the borders of the bars are black and there are so many of them that they are all squished together so that all you see is the borders (black). 
Is there a way to remove the bar borders so that I can see the intended color?", "response":"Set the edgecolor to \"none\": bar(..., edgecolor = \"none\")", "best_answers_score":0.6698, "library_name":"matplotlib", "question_url":"https:\/\/stackoverflow.com\/questions\/15904042\/matplotlib-bar-graph-black-how-do-i-remove-bar-borders", "best_answers_votes":141, "question_length":392, "response_length":57 }, { "question":"How to create a density plot In R I can create the desired output by doing: ``` data = c(rep(1.5, 7), rep(2.5, 2), rep(3.5, 8), rep(4.5, 3), rep(5.5, 1), rep(6.5, 8)) plot(density(data, bw=0.5)) ``` In python (with matplotlib) the closest I got was with a simple histogram: ``` import matplotlib.pyplot as plt data = [1.5]*7 + [2.5]*2 + [3.5]*8 + [4.5]*3 + [5.5]*1 + [6.5]*8 plt.hist(data, bins=6) plt.show() ``` I also tried the normed=True parameter but couldn't get anything other than trying to fit a gaussian to the histogram. My latest attempts were around scipy.stats and gaussian_kde, following examples on the web, but I've been unsuccessful so far.", "response":"Five years later, when I Google \"how to create a kernel density plot using python\", this thread still shows up at the top! Today, a much easier way to do this is to use seaborn, a package that provides many convenient plotting functions and good style management. ``` import numpy as np import seaborn as sns data = [1.5]*7 + [2.5]*2 + [3.5]*8 + [4.5]*3 + [5.5]*1 + [6.5]*8 sns.set_style('whitegrid') sns.kdeplot(np.array(data), bw=0.5) ```", "best_answers_score":0.6693, "library_name":"matplotlib", "question_url":"https:\/\/stackoverflow.com\/questions\/4150171\/how-to-create-a-density-plot", "best_answers_votes":205, "question_length":658, "response_length":440 }, { "question":"vertical & horizontal lines in matplotlib I do not quite understand why I am unable to create horizontal and vertical lines at specified limits. I would like to bound the data by this box. 
However, the sides do not seem to comply with my instructions. Why is this? ``` # CREATING A BOUNDING BOX # BOTTOM HORIZONTAL plt.axhline(y=.4, xmin=0.25, xmax=0.402, linewidth=2, color = 'k') # RIGHT VERTICAL plt.axvline(x=0.402, ymin=0.4, ymax = 0.615, linewidth=2, color='k') # LEFT VERTICAL plt.axvline(x=0.1, ymin=0.58, ymax = 0.79, linewidth=2, color='k') plt.show() ```", "response":"The pyplot functions you are calling, axhline() and axvline() draw lines that span a portion of the axis range, regardless of coordinates. The parameters xmin or ymin use value 0.0 as the minimum of the axis and 1.0 as the maximum of the axis. Instead, use plt.plot((x1, x2), (y1, y2), 'k-') to draw a line from the point (x1, y1) to the point (x2, y2) in color k. See pyplot.plot.", "best_answers_score":0.6678, "library_name":"matplotlib", "question_url":"https:\/\/stackoverflow.com\/questions\/16930328\/vertical-horizontal-lines-in-matplotlib", "best_answers_votes":182, "question_length":565, "response_length":381 }, { "question":"How to remove gaps between bars in a bar chart I'm making a bar chart in Matplotlib with a call like this: ``` xs.bar(bar_lefts, bar_heights, facecolor='black', edgecolor='black') ``` I get a barchart that looks like this: What I'd like is one with no white gap between consecutive bars, e.g. more like this: Is there a way to achieve this in Matplotlib using the bar() function?", "response":"Add width=1.0 as a keyword argument to bar(). E.g. xs.bar(bar_lefts, bar_heights, width=1.0, facecolor='black', edgecolor='black'). 
This will make the bars touch, removing the horizontal gaps between them.", "best_answers_score":0.6674, "library_name":"matplotlib", "question_url":"https:\/\/stackoverflow.com\/questions\/20454120\/how-to-remove-gaps-between-bars-in-a-bar-chart", "best_answers_votes":63, "question_length":379, "response_length":172 }, { "question":"adding extra axis ticks using matplotlib I have simple plot code: ``` plt.plot(x,y) plt.show() ``` I want to add some extra ticks on the x-axis in addition to the current ones, let's say at ``` extraticks=[2.1, 3, 7.6] ``` As you see, I do not have a pattern for the ticks, so I do not want to increase the tick frequency for the whole axis; I just want to keep the original ones and add those extras. Is it possible at all? Regards", "response":"Yes, you can try something like: ``` plt.xticks(list(plt.xticks()[0]) + extraticks) ``` The function to use is xticks(). When called without arguments, it returns the current ticks. Calling it with arguments, you can set the tick positions and, optionally, labels.", "best_answers_score":0.6671, "library_name":"matplotlib", "question_url":"https:\/\/stackoverflow.com\/questions\/14716660\/adding-extra-axis-ticks-using-matplotlib", "best_answers_votes":108, "question_length":423, "response_length":264 }, { "question":"Is it possible to add a string as a legend item I am producing some plots in matplotlib and would like to add explanatory text for some of the data. I want to have a string inside my legend as a separate legend item above the '0-10' item. Does anyone know if there is a possible way to do this? This is the code for my legend: ax.legend(['0-10','10-100','100-500','500+'],loc='best')", "response":"Alternative solution, kind of dirty but pretty quick.
``` import pylab as plt X = range(50) Y = range(50) plt.plot(X, Y, label=\"Very straight line\") # Create empty plot with blank marker containing the extra label plt.plot([], [], ' ', label=\"Extra label on the legend\") plt.legend() plt.show() ```", "best_answers_score":0.6667, "library_name":"matplotlib", "question_url":"https:\/\/stackoverflow.com\/questions\/16826711\/is-it-possible-to-add-a-string-as-a-legend-item", "best_answers_votes":124, "question_length":383, "response_length":298 }, { "question":"How to force the Y axis to only use integers I'm plotting a histogram using the matplotlib.pyplot module and I am wondering how I can force the y-axis labels to only show integers (e.g. 0, 1, 2, 3 etc.) and not decimals (e.g. 0., 0.5, 1., 1.5, 2. etc.). I'm looking at the guidance notes and suspect the answer lies somewhere around matplotlib.pyplot.ylim but so far I can only find stuff that sets the minimum and maximum y-axis values. ``` def doMakeChart(item, x): if len(x)==1: return filename = \"C:\\Users\\me\\maxbyte3\\charts\\\\\" bins=logspace(0.1, 10, 100) plt.hist(x, bins=bins, facecolor='green', alpha=0.75) plt.gca().set_xscale(\"log\") plt.xlabel('Size (Bytes)') plt.ylabel('Count') plt.suptitle(r'Normal Distribution for Set of Files') plt.title('Reference PUID: %s' % item) plt.grid(True) plt.savefig(filename + item + '.png') plt.clf() ```", "response":"Here is another way: ``` from matplotlib.ticker import MaxNLocator ax = plt.figure().gca() ax.yaxis.set_major_locator(MaxNLocator(integer=True)) ```", "best_answers_score":0.6659, "library_name":"matplotlib", "question_url":"https:\/\/stackoverflow.com\/questions\/12050393\/how-to-force-the-y-axis-to-only-use-integers", "best_answers_votes":282, "question_length":848, "response_length":148 }, { "question":"Second y-axis label getting cut off I'm trying to plot two sets of data in a bar graph with matplotlib, so I'm using two axes with the twinx() method. However, the second y-axis label gets cut off. 
I've tried a few different methods with no success (tight_layout(), setting the major_pads in rcParams, etc...). I feel like the solution is simple, but I haven't come across it yet. Here's a MWE: ``` #!\/usr\/bin\/env python import numpy as np import matplotlib matplotlib.use('Agg') import matplotlib.pyplot as plt matplotlib.rcParams.update({'font.size': 21}) ax = plt.gca() plt.ylabel('Data1') #Left side ax2 = ax.twinx() for i in range(10): if(i%2==0): ax.bar(i,np.random.randint(10)) else: ax2.bar(i,np.random.randint(1000),color='k') plt.ylabel('Data2') #Right ``` side plt.savefig(\"test.png\")", "response":"I just figured it out: the trick is to use bbox_inches='tight' in savefig. E.G. plt.savefig(\"test.png\",bbox_inches='tight')", "best_answers_score":0.6658, "library_name":"matplotlib", "question_url":"https:\/\/stackoverflow.com\/questions\/21288062\/second-y-axis-label-getting-cut-off", "best_answers_votes":146, "question_length":795, "response_length":123 }, { "question":"matplotlib bar chart with dates I know about plot_date() but is there a bar_date() out there? The general method would be to use set_xticks and set_xticklabels, but I'd like something that can handle time scales from a few hours out to a few years (this means involving the major and minor ticks to make things readable). I am plotting values associated with a specific time interval (that the bar spans). Here is the basic solution I used: ```py import matplotlib.pyplot as plt import datetime t = [datetime.datetime(2010, 12, 2, 22, 0), datetime.datetime(2010, 12, 2, 23, 0), datetime.datetime(2010, 12, 10, 0, 0), datetime.datetime(2010, 12, 10, 6, 0)] y = [4, 6, 9, 3] interval = 1.0 \/ 24.0 #1hr intervals, but maplotlib dates have base of 1 day ax = plt.subplot(111) ax.bar(t, y, width=interval) ax.xaxis_date() plt.show() ```", "response":"All plot_date does is plot the function and the call ax.xaxis_date(). 
All you should need to do is this: ``` import numpy as np import matplotlib.pyplot as plt import datetime x = [datetime.datetime(2010, 12, 1, 10, 0), datetime.datetime(2011, 1, 4, 9, 0), datetime.datetime(2011, 5, 5, 9, 0)] y = [4, 9, 2] ax = plt.subplot(111) ax.bar(x, y, width=10) ax.xaxis_date() plt.show() ```", "best_answers_score":0.6648, "library_name":"matplotlib", "question_url":"https:\/\/stackoverflow.com\/questions\/5902371\/matplotlib-bar-chart-with-dates", "best_answers_votes":61, "question_length":831, "response_length":383 }, { "question":"Plot a horizontal line on a given plot How do I add a horizontal line to an existing plot?", "response":"Use axhline (a horizontal axis line). For example, this plots a horizontal line at y = 0.5: ``` import matplotlib.pyplot as plt plt.axhline(y=0.5, color='r', linestyle='-') plt.show() ```", "best_answers_score":0.6646, "library_name":"matplotlib", "question_url":"https:\/\/stackoverflow.com\/questions\/33382619\/plot-a-horizontal-line-on-a-given-plot", "best_answers_votes":919, "question_length":90, "response_length":187 }, { "question":"Specify format of floats for tick labels I am trying to set the format to two decimal numbers in a matplotlib subplot environment. Unfortunately, I do not have any idea how to solve this task. To prevent using scientific notation on the y-axis I used ScalarFormatter(useOffset=False) as you can see in my snippet below. I think my task should be solved by passing further options\/arguments to the used formatter. However, I could not find any hint in matplotlib's documentation. How can I set two decimal digits or none (both cases are needed)? I am not able to provide sample data, unfortunately. 
-- SNIPPET -- ``` f, axarr = plt.subplots(3, sharex=True) data = conv_air x = range(0, len(data)) axarr[0].scatter(x, data) axarr[0].set_ylabel('$T_\\mathrm{air,2,2}$', size=FONT_SIZE) axarr[0].yaxis.set_major_locator(MaxNLocator(5)) axarr[0].yaxis.set_major_formatter(ScalarFormatter(useOffset=False)) axarr[0].tick_params(direction='out', labelsize=FONT_SIZE) axarr[0].grid(which='major', alpha=0.5) axarr[0].grid(which='minor', alpha=0.2) data = conv_dryer x = range(0, len(data)) axarr[1].scatter(x, data) axarr[1].set_ylabel('$T_\\mathrm{dryer,2,2}$', size=FONT_SIZE) axarr[1].yaxis.set_major_locator(MaxNLocator(5)) axarr[1].yaxis.set_major_formatter(ScalarFormatter(useOffset=False)) axarr[1].tick_params(direction='out', labelsize=FONT_SIZE) axarr[1].grid(which='major', alpha=0.5) axarr[1].grid(which='minor', alpha=0.2) data = conv_lambda x = range(0, len(data)) axarr[2].scatter(x, data) axarr[2].set_xlabel('Iterationsschritte', size=FONT_SIZE) axarr[2].xaxis.set_major_locator(MaxNLocator(integer=True)) axarr[2].set_ylabel('$\\lambda$', size=FONT_SIZE) axarr[2].yaxis.set_major_formatter(ScalarFormatter(useOffset=False)) axarr[2].yaxis.set_major_locator(MaxNLocator(5)) axarr[2].tick_params(direction='out', labelsize=FONT_SIZE) axarr[2].grid(which='major', alpha=0.5) axarr[2].grid(which='minor', alpha=0.2) ```", "response":"See the relevant documentation in general and specifically ``` from matplotlib.ticker import FormatStrFormatter fig, ax = plt.subplots() ax.yaxis.set_major_formatter(FormatStrFormatter('%.2f')) ```", "best_answers_score":0.6644, "library_name":"matplotlib", "question_url":"https:\/\/stackoverflow.com\/questions\/29188757\/specify-format-of-floats-for-tick-labels", "best_answers_votes":222, "question_length":1922, "response_length":197 }, { "question":"Labeling boxplot in seaborn with median value How can I label each boxplot in a seaborn plot with the median value? E.g. 
``` import seaborn as sns sns.set_style(\"whitegrid\") tips = sns.load_dataset(\"tips\") ax = sns.boxplot(x=\"day\", y=\"total_bill\", data=tips) ``` How do I label each boxplot with the median or average value?", "response":"I love when people include sample datasets! ``` import seaborn as sns sns.set_style(\"whitegrid\") tips = sns.load_dataset(\"tips\") box_plot = sns.boxplot(x=\"day\",y=\"total_bill\",data=tips) medians = tips.groupby(['day'])['total_bill'].median() vertical_offset = tips['total_bill'].median() * 0.05 # offset from median for display for xtick in box_plot.get_xticks(): box_plot.text(xtick,medians[xtick] + vertical_offset,medians[xtick], horizontalalignment='center',size='x-small',color='w',weight='semibold') ```", "best_answers_score":0.6644, "library_name":"matplotlib", "question_url":"https:\/\/stackoverflow.com\/questions\/38649501\/labeling-boxplot-in-seaborn-with-median-value", "best_answers_votes":84, "question_length":324, "response_length":508 }, { "question":"Editing the date formatting of x-axis tick labels I am looking to edit the formatting of the dates on the x-axis. The picture below shows how they appear on my bar graph by default. I would like to remove the repetition of 'Dec' and '2012' and just have the actual date numbers along the x-axis. Any suggestions as to how I can do this?", "response":"In short: ``` import matplotlib.dates as mdates myFmt = mdates.DateFormatter('%d') ax.xaxis.set_major_formatter(myFmt) ``` Many examples on the matplotlib website. The one I most commonly use is here", "best_answers_score":0.6642, "library_name":"matplotlib", "question_url":"https:\/\/stackoverflow.com\/questions\/14946371\/editing-the-date-formatting-of-x-axis-tick-labels", "best_answers_votes":161, "question_length":336, "response_length":199 }, { "question":"Pandas Plotting with Multi-Index After performing a groupby.sum() on a DataFrame I'm having some trouble trying to create my intended plot. 
```py import pandas as pd import numpy as np np.random.seed(365) rows = 100 data = {'Month': np.random.choice(['2014-01', '2014-02', '2014-03', '2014-04'], size=rows), 'Code': np.random.choice(['A', 'B', 'C'], size=rows), 'ColA': np.random.randint(5, 125, size=rows), 'ColB': np.random.randint(0, 51, size=rows),} df = pd.DataFrame(data) Month Code ColA ColB 0 2014-03 C 59 47 1 2014-01 A 24 9 2 2014-02 C 77 50 dfg = df.groupby(['Code', 'Month']).sum() ColA ColB Code Month A 2014-01 124 102 2014-02 398 282 2014-03 474 198 2014-04 830 237 B 2014-01 477 300 2014-02 591 167 2014-03 522 192 2014-04 367 169 C 2014-01 412 180 2014-02 275 205 2014-03 795 291 2014-04 901 309 ``` How can I create a subplot (kind='bar') for each Code, where the x-axis is the Month and the bars are ColA and ColB?", "response":"I found the unstack(level) method to work perfectly, which has the added benefit of not needing a priori knowledge about how many Codes there are. ```py ax = dfg.unstack(level=0).plot(kind='bar', subplots=True, rot=0, figsize=(9, 7), layout=(2, 3)) plt.tight_layout() ```", "best_answers_score":0.6624, "library_name":"matplotlib", "question_url":"https:\/\/stackoverflow.com\/questions\/25386870\/pandas-plotting-with-multi-index", "best_answers_votes":143, "question_length":933, "response_length":271 }, { "question":"How to draw axis in the middle of the figure? 
I want to draw a figure in matplotlib where the axes are displayed within the plot itself, not on the side. I have tried the following code from here: ``` import math import numpy as np import matplotlib.pyplot as plt def sigmoid(x): a = [] for item in x: a.append(1\/(1+math.exp(-item))) return a x = np.arange(-10., 10., 0.2) sig = sigmoid(x) plt.plot(x,sig) plt.show() ``` The above code displays the figure like this: What I would like to draw is something as follows (image from Wikipedia) This question describes a similar problem, but it only draws a reference line in the middle, not axes.", "response":"One way to do it is using spines: ``` import math import numpy as np import matplotlib.pyplot as plt def sigmoid(x): a = [] for item in x: a.append(1\/(1+math.exp(-item))) return a x = np.arange(-10., 10., 0.2) sig = sigmoid(x) fig = plt.figure() ax = fig.add_subplot(1, 1, 1) # Move left y-axis and bottom x-axis to centre, passing through (0,0) ax.spines['left'].set_position('center') ax.spines['bottom'].set_position('center') # Eliminate upper and right axes ax.spines['right'].set_color('none') ax.spines['top'].set_color('none') # Show ticks in the left and lower axes only ax.xaxis.set_ticks_position('bottom') ax.yaxis.set_ticks_position('left') plt.plot(x,sig) plt.show() ``` shows:", "best_answers_score":0.6604, "library_name":"matplotlib", "question_url":"https:\/\/stackoverflow.com\/questions\/31556446\/how-to-draw-axis-in-the-middle-of-the-figure", "best_answers_votes":74, "question_length":636, "response_length":691 }, { "question":"UserWarning: FixedFormatter should only be used together with FixedLocator For a long time I have used small subroutines to format the axes of the charts I plot.
A couple of examples: ``` def format_y_label_thousands(): # format y-axis tick labels formats ax = plt.gca() label_format = '{:,.0f}' ax.set_yticklabels([label_format.format(x) for x in ax.get_yticks().tolist()]) def format_y_label_percent(): # format y-axis tick labels formats ax = plt.gca() label_format = '{:.1%}' ax.set_yticklabels([label_format.format(x) for x in ax.get_yticks().tolist()]) ``` However, after an update to matplotlib yesterday, I get the following warning when calling any of these two functions: ``` UserWarning: FixedFormatter should only be used together with FixedLocator ax.set_yticklabels([label_format.format(x) for x in ax.get_yticks().tolist()]) ``` What is the reason for such a warning? I couldn't figure it out looking into matplotlib's documentation.", "response":"WORKAROUND: The way to avoid the warning is to use FixedLocator (that is part of matplotlib.ticker). Below I show a code to plot three charts. I format their axes in different ways. Note that the \"set_ticks\" silence the warning, but it changes the actual ticks locations\/labels (it took me some time to figure out that FixedLocator uses the same info but keeps the ticks locations intact). You can play with the x\/y's to see how each solution might affect the output. 
``` import matplotlib as mpl import matplotlib.pyplot as plt import numpy as np import matplotlib.ticker as mticker mpl.rcParams['font.size'] = 6.5 x = np.array(range(1000, 5000, 500)) y = 37*x fig, [ax1, ax2, ax3] = plt.subplots(1,3) ax1.plot(x,y, linewidth=5, color='green') ax2.plot(x,y, linewidth=5, color='red') ax3.plot(x,y, linewidth=5, color='blue') label_format = '{:,.0f}' # nothing done to ax1 as it is a \"control chart.\" # fixing yticks with \"set_yticks\" ticks_loc = ax2.get_yticks().tolist() ax2.set_yticks(ax1.get_yticks().tolist()) ax2.set_yticklabels([label_format.format(x) for x in ticks_loc]) # fixing yticks with matplotlib.ticker \"FixedLocator\" ticks_loc = ax3.get_yticks().tolist() ax3.yaxis.set_major_locator(mticker.FixedLocator(ticks_loc)) ax3.set_yticklabels([label_format.format(x) for x in ticks_loc]) # fixing xticks with FixedLocator but also using MaxNLocator to avoid cramped x-labels ax3.xaxis.set_major_locator(mticker.MaxNLocator(3)) ticks_loc = ax3.get_xticks().tolist() ax3.xaxis.set_major_locator(mticker.FixedLocator(ticks_loc)) ax3.set_xticklabels([label_format.format(x) for x in ticks_loc]) fig.tight_layout() plt.show() ``` OUTPUT CHARTS: Obviously, having a couple of idle lines of code like the one above (I'm basically getting the yticks or xticks and setting them again) only adds noise to my program. I would prefer that the warning was removed. However, look into some of the \"bug reports\" (from links on the comments above\/below; the issue is not actually a bug: it is an update that is generating some issues), and the contributors that manage matplotlib have their reasons to keep the warning. OLDER VERSION OF MATPLOTLIB: If you use your Console to control critical outputs of your code (as I do), the warning messages might be problematic. Therefore, a way to delay having to deal with the issue is to downgrade matplotlib to version 3.2.2. 
I use Anaconda to manage my Python packages, and here is the command used to downgrade matplotlib: ``` conda install matplotlib=3.2.2 ``` Not all listed versions might be available. For instance, I couldn't install matplotlib 3.3.0 although it is listed on matplotlib's releases page: https:\/\/github.com\/matplotlib\/matplotlib\/releases", "best_answers_score":0.6602, "library_name":"matplotlib", "question_url":"https:\/\/stackoverflow.com\/questions\/63723514\/userwarning-fixedformatter-should-only-be-used-together-with-fixedlocator", "best_answers_votes":79, "question_length":946, "response_length":2695 }, { "question":"Matplotlib legends in subplot I would like to put legends inside each one of the subplots below. I've tried with plt.legend but it didn't work. ``` f, (ax1, ax2, ax3) = plt.subplots(3, sharex=True, sharey=True) ax1.plot(xtr, color='r', label='Blue stars') ax2.plot(ytr, color='g') ax3.plot(ztr, color='b') ax1.set_title('2012\/09\/15') plt.legend([ax1, ax2, ax3],[\"HHZ 1\", \"HHN\", \"HHE\"]) plt.show() ``` With the suggestion from atomh33ls: ``` ax1.legend(\"HHZ 1\",loc=\"upper right\") ax2.legend(\"HHN\",loc=\"upper right\") ax3.legend(\"HHE\",loc=\"upper right\") ``` The legend position is fixed; however, it seems to have a problem with the strings, because each letter is placed on a new line.
Does anyone know how to fix it?", "response":"This should work: ``` ax1.plot(xtr, color='r', label='HHZ 1') ax1.legend(loc=\"upper right\") ax2.plot(xtr, color='r', label='HHN') ax2.legend(loc=\"upper right\") ax3.plot(xtr, color='r', label='HHE') ax3.legend(loc=\"upper right\") ```", "best_answers_score":0.6601, "library_name":"matplotlib", "question_url":"https:\/\/stackoverflow.com\/questions\/27016904\/matplotlib-legends-in-subplot", "best_answers_votes":128, "question_length":715, "response_length":231 }, { "question":"Image is not displaying in Google Colab while using imshow() I am working on a project which requires functions from OpenCV to plot images. I am trying to display an image using the below code in Google Colab, but nothing shows up in the output. Can anybody help me with this? ``` %pylab notebook import cv2 testim = imread('butterfly.jpg') figure() imshow(testim) plt.show() ``` Screenshot: Link to my Colab Notebook", "response":"Google Colab crashes if you try to display an image using cv2.imshow(). Instead, import with from google.colab.patches import cv2_imshow and display the image using cv2_imshow()", "best_answers_score":0.6597, "library_name":"matplotlib", "question_url":"https:\/\/stackoverflow.com\/questions\/55288657\/image-is-not-displaying-in-google-colab-while-using-imshow", "best_answers_votes":102, "question_length":414, "response_length":157 }, { "question":"Pycharm does not show plot Pycharm does not show a plot from the following code: ``` import pandas as pd import numpy as np import matplotlib as plt ts = pd.Series(np.random.randn(1000), index=pd.date_range('1\/1\/2000', periods=1000)) ts = ts.cumsum() ts.plot() ``` What happens is that a window appears for less than a second, and then disappears again. Using the Pyzo IEP IDE (using the same interpreter) on the same code, the plot shows as expected. ...So the problem must be with some setting in Pycharm. I've tried using both python.exe and pythonw.exe as the interpreter, both with the same results.
This is my sys_info: ``` C:\\pyzo2014a\\pythonw.exe -u C:\\Program Files (x86)\\JetBrains\\PyCharm Community Edition 3.4.1\\helpers\\pydev\\pydevconsole.py 57315 57316 PyDev console: using IPython 2.1.0import sys; print('Python %s on %s' % (sys.version, sys.platform)) Python 3.4.1 |Continuum Analytics, Inc.| (default, May 19 2014, 13:02:30) [MSC v.1600 64 bit (AMD64)] on win32 sys.path.extend(['C:\\\\Users\\\\Rasmus\\\\PycharmProjects\\\\untitled2']) In[3]: import IPython print(IPython.sys_info()) {'commit_hash': '681fd77', 'commit_source': 'installation', 'default_encoding': 'UTF-8', 'ipython_path': 'C:\\\\pyzo2014a\\\\lib\\\\site-packages\\\\IPython', 'ipython_version': '2.1.0', 'os_name': 'nt', 'platform': 'Windows-8-6.2.9200', 'sys_executable': 'C:\\\\pyzo2014a\\\\pythonw.exe', 'sys_platform': 'win32', 'sys_version': '3.4.1 |Continuum Analytics, Inc.| (default, May 19 2014, ' '13:02:30) [MSC v.1600 64 bit (AMD64)]'} ```", "response":"Just use ``` import matplotlib.pyplot as plt plt.show() ``` This command tells the system to draw the plot in Pycharm. Example: ``` plt.imshow(img.reshape((28, 28))) plt.show() ```", "best_answers_score":0.6591, "library_name":"matplotlib", "question_url":"https:\/\/stackoverflow.com\/questions\/24886625\/pycharm-does-not-show-plot", "best_answers_votes":201, "question_length":1498, "response_length":180 }, { "question":"Hide ticks but show tick labels I can remove the ticks with ``` ax.set_xticks([]) ax.set_yticks([]) ``` but this removes the labels as well. 
Is there any way I can plot the tick labels but not the ticks and the spine?", "response":"You can set the tick length to 0 using tick_params (http:\/\/matplotlib.org\/api\/axes_api.html#matplotlib.axes.Axes.tick_params): ``` fig = plt.figure() ax = fig.add_subplot(111) ax.plot([1],[1]) ax.tick_params(axis=u'both', which=u'both',length=0) plt.show() ```", "best_answers_score":0.659, "library_name":"matplotlib", "question_url":"https:\/\/stackoverflow.com\/questions\/29988241\/hide-ticks-but-show-tick-labels", "best_answers_votes":155, "question_length":207, "response_length":260 }, { "question":"Is there a list of line styles in matplotlib? I'm writing a script that will do some plotting. I want it to plot several data series, each with its unique line style (not color). I can easily iterate through a list, but is there such a list already available in Python?", "response":"According to the docs, you can find them by doing this: ``` from matplotlib import lines lines.lineStyles.keys() >>> ['', ' ', 'None', '--', '-.', '-', ':'] ``` You can do the same with markers. EDIT: In the latest versions, there are still the same styles, but you can vary the space between dots\/lines.", "best_answers_score":0.659, "library_name":"matplotlib", "question_url":"https:\/\/stackoverflow.com\/questions\/13359951\/is-there-a-list-of-line-styles-in-matplotlib", "best_answers_votes":112, "question_length":269, "response_length":304 }, { "question":"Setting Yaxis in Matplotlib using Pandas Using Pandas to plot in IPython Notebook, I have several plots, and because Matplotlib decides the Y axis for each of them differently, I need them to use the same range so the data can be compared. From the Matplotlib doc it seems that I need to set ylim, but I can't figure out the syntax to do so.
I have tried several variants on: ``` df2250.plot(); plt.ylim((100000,500000)) df2260.plot() df5.plot() ``` I assume I'll need to apply the limits to each plot, but since I can't get one working...", "response":"DataFrame.plot() exposes a ylim parameter that sets the y axis limits: ``` df.plot(ylim=(0, 200)) ``` I'm guessing this feature was added after Rutger's answer was accepted in 2013.", "best_answers_score":0.6577, "library_name":"matplotlib", "question_url":"https:\/\/stackoverflow.com\/questions\/17787366\/setting-yaxis-in-matplotlib-using-pandas", "best_answers_votes":105, "question_length":525, "response_length":181 }, { "question":"Label data points on plot To label my plot points using Python matplotlib, I used the following code. ```py from matplotlib import pyplot as plt fig = plt.figure() ax = fig.add_subplot(111) A = anyarray B = anyotherarray plt.plot(A,B) for i,j in zip(A,B): ax.annotate('%s)' %j, xy=(i,j), xytext=(30,0), textcoords='offset points') ax.annotate('(%s,' %i, xy=(i,j)) plt.grid() plt.show() ``` I know that xytext=(30,0) goes along with the textcoords, and you use those 30,0 values to position the data label, so it sits at y=0 and x=30 in its own little area. You need both annotate lines for i and j; otherwise you only plot the x or y data label. You get something like this out (note the labels only): It's not ideal; there is still some overlap.", "response":"How about printing (x, y) at once?
``` from matplotlib import pyplot as plt fig = plt.figure() ax = fig.add_subplot(111) A = -0.75, -0.25, 0, 0.25, 0.5, 0.75, 1.0 B = 0.73, 0.97, 1.0, 0.97, 0.88, 0.73, 0.54 ax.plot(A,B) for xy in zip(A, B): # <-- ax.annotate('(%s, %s)' % xy, xy=xy, textcoords='data') # <-- ax.grid() plt.show() ```", "best_answers_score":0.6575, "library_name":"matplotlib", "question_url":"https:\/\/stackoverflow.com\/questions\/22272081\/label-data-points-on-plot", "best_answers_votes":123, "question_length":761, "response_length":329 }, { "question":"Making a chart bigger in size I'm trying to get a bigger chart. However, the figure method from matplotlib does not seem to be working properly. I get a message, which is not an error: ```none ``` My code: ```py import pandas.io.data as web import pandas as pd import matplotlib.pyplot as plt %matplotlib inline ... plt.figure(figsize=(20,10)) df2['media']= df2['SPY']*.6 + df2['TLT']*.4 df2.plot() plt.show() ``` What's wrong with my code?", "response":"You can skip the first plt.figure() and just use the argument figsize: ``` df2.plot(figsize=(20,10)) ``` See docs.", "best_answers_score":0.6571, "library_name":"matplotlib", "question_url":"https:\/\/stackoverflow.com\/questions\/38294852\/making-a-chart-bigger-in-size", "best_answers_votes":107, "question_length":441, "response_length":114 }, { "question":"Prettier default plot colors in matplotlib The default colors used in matplotlib (example here: http:\/\/matplotlib.org\/examples\/pylab_examples\/pie_demo.html) are kind of plain and ugly. I've also noticed that if you plot more than 5-6 different series in a single plot, matplotlib starts repeating colors. I've seen some gorgeous graphs coming out of other visualization packages (in other languages, by default) that can have 5-6 different series covered by just one color in different shades. Does anyone have a good color set to use in matplotlib? 
And a way to make matplotlib use it by default?", "response":"You can use Matplotlib's style sheets. It has been ported from the mpltools library which has a style module that redefines matplotlib rc parameters. As an example, see the use of the ggplot style and Matplotlib's manual.", "best_answers_score":0.6569, "library_name":"matplotlib", "question_url":"https:\/\/stackoverflow.com\/questions\/15814635\/prettier-default-plot-colors-in-matplotlib", "best_answers_votes":69, "question_length":597, "response_length":220 }, { "question":"Matplotlib control which plot is on top I am wondering if there is a way to control which plot lies on top of other plots if one makes multiple plots on one axis. An example: As you can see, the green series is on top of the blue series, and both series are on top of the black dots (which I made with a scatter plot). I would like the black dots to be on top of both series (lines). I first did the above with the following code: ``` plt.plot(series1_x, series1_y) plt.plot(series2_x, series2_y) plt.scatter(series2_x, series2_y) ``` Then I tried the following: ``` fig = plt.figure() ax1 = fig.add_subplot(111) ax1.plot(series1_x, series1_y) ax2 = fig.add_subplot(111) ax2.plot(series2_x, series2_y) ax3 = fig.add_subplot(111) ax3.scatter(series2_x, series2_y) ``` And some variations on that, but no luck. Swapping around the plot functions has an effect on which plot is on top, but no matter where I put the scatter function, the lines are on top of the dots. NOTE: I am using Python 3.5 on Windows 10 (this example), but mostly Python 3.4 on Ubuntu. NOTE 2: I know this may seem like a trivial issue, but I have a case where the series on top of the dots are so dense that the colour of the dots gets obscured, and in those cases I need my readers to clearly see which dots are what colour, hence I need the dots to be on top.", "response":"Use the zorder kwarg: the lower the zorder, the further back the plot sits, e.g.
``` plt.plot(series1_x, series1_y, zorder=1) plt.plot(series2_x, series2_y, zorder=2) plt.scatter(series2_x, series2_y, zorder=3) ```", "best_answers_score":0.6567, "library_name":"matplotlib", "question_url":"https:\/\/stackoverflow.com\/questions\/35781612\/matplotlib-control-which-plot-is-on-top", "best_answers_votes":67, "question_length":1333, "response_length":213 }, { "question":"Plotting with a transparent marker but non-transparent edge I'm trying to make a plot in matplotlib with transparent markers which have a fixed color edge. However, I can't seem to achieve a marker with transparent fill. I have a minimum working example here: ``` import numpy as np import matplotlib.pyplot as plt x = np.arange(10) y1 = 2*x + 1 y2 = 3*x - 5 plt.plot(x,y1, 'o-', lw=6, ms=14) plt.plot(x,y2, 'o', ms=14, markerfacecolor=None, alpha=0.5, markeredgecolor='red', markeredgewidth=5) plt.show() ``` I tried two techniques I found online to achieve this: 1) Setting the alpha parameter. However, this makes the marker edge transparent too, which is not the desired effect. 2) Setting markerfacecolor=None, although this has no effect on my plot. Is there a solution to this, please?", "response":"This is tricky in Matplotlib... you have to use a string \"None\" instead of the value None, then you can just do: ``` plt.plot(x,y2, 'o', ms=14, markerfacecolor=\"None\", markeredgecolor='red', markeredgewidth=5) ```", "best_answers_score":0.6564, "library_name":"matplotlib", "question_url":"https:\/\/stackoverflow.com\/questions\/23596575\/plotting-with-a-transparent-marker-but-non-transparent-edge", "best_answers_votes":103, "question_length":787, "response_length":213 }, { "question":"How to change the current axis instance (i.e., gca()) in matplotlib I use a trick to draw a colorbar whose height matches the master axes.
The code is like ```py import matplotlib.pyplot as plt from mpl_toolkits.axes_grid1 import make_axes_locatable import numpy as np ax = plt.subplot(111) im = ax.imshow(np.arange(100).reshape((10,10))) # create an axes on the right side of ax. The width of cax will be 5% # of ax and the padding between cax and ax will be fixed at 0.05 inch. divider = make_axes_locatable(ax) cax = divider.append_axes(\"right\", size=\"5%\", pad=0.05) plt.colorbar(im, cax=cax) ``` This trick works well. However, since a new axis is appended, the current instance of the figure becomes cax - the appended axis. As a result, if one performs operations like ```py plt.text(0,0,'whatever') ``` the text will be drawn on cax instead of ax - the axis to which im belongs. Meanwhile, gcf().axes shows both axes. My question is: how do I make the current axis instance (returned by gca()) the original axis to which im belongs?", "response":"Use plt.sca(ax) to set the current axes, where ax is the Axes object you'd like to become active.", "best_answers_score":0.6562, "library_name":"matplotlib", "question_url":"https:\/\/stackoverflow.com\/questions\/19625563\/how-to-change-the-current-axis-instance-i-e-gca-in-matplotlib", "best_answers_votes":116, "question_length":1037, "response_length":97 }, { "question":"Plot a histogram such that bar heights sum to 1 (probability) I'd like to plot a normalized histogram from a vector using matplotlib.
I tried the following: ``` plt.hist(myarray, normed=True) ``` as well as: ``` plt.hist(myarray, normed=1) ``` but neither option produces a y-axis from [0, 1] such that the bar heights of the histogram sum to 1.", "response":"If you want the sum of all bars to equal unity, weight each value by the inverse of the total number of values: ``` weights = np.ones_like(myarray) \/ len(myarray) plt.hist(myarray, weights=weights) ``` Note for Python 2.x: add a cast to float() on one of the operands of the division, as otherwise you would end up with zeros due to integer division", "best_answers_score":0.6556, "library_name":"matplotlib", "question_url":"https:\/\/stackoverflow.com\/questions\/3866520\/plot-a-histogram-such-that-bar-heights-sum-to-1-probability", "best_answers_votes":235, "question_length":345, "response_length":337 }, { "question":"How do I make sans serif superscript or subscript text in matplotlib? I want to use a subscript in an axis label in a matplotlib figure. Using LaTeX I would set it as $N_i$, which gives me the italic serif font. I know I can get non-italic mathfont with \mathrm. But I would like to get the text in the default matplotlib sans-serif font so it matches the rest of the text in the figure. Is there a way to subscript text without using latex?", "response":"You can do it by customizing rcParams.
If you have multiple elements to customize, you can store them as a dict and then update the rcParams: ``` params = {'mathtext.default': 'regular' } plt.rcParams.update(params) ``` If you want to do a single modification, you can simply type: ``` plt.rcParams.update({'mathtext.default': 'regular' }) ``` In this respect, a trivial example would be as follows: ``` import numpy as np from matplotlib import pyplot as plt x = np.linspace(1, 10, 40) y = x**2 fig = plt.figure() ax = fig.add_subplot(111) params = {'mathtext.default': 'regular' } plt.rcParams.update(params) ax.set_xlabel('$x_{my text}$') ax.set_ylabel('$y_i$') ax.plot(x, y) ax.grid() plt.show() ``` You can find more information on rcParams in the matplotlib documentation.", "best_answers_score":0.6556, "library_name":"matplotlib", "question_url":"https:\/\/stackoverflow.com\/questions\/27698377\/how-do-i-make-sans-serif-superscript-or-subscript-text-in-matplotlib", "best_answers_votes":41, "question_length":441, "response_length":779 }, { "question":"Show only certain items in legend I am currently plotting a stacked bar graph of a large amount of taxonomic data, and only wish to show significant species in the legend (out of ~500 I wish to show ~25). Is there a simple way to do this? Below is the code I have: ``` labels=['0','20','40','60','80','100','120'] ax1=subj1df.plot(kind='barh', stacked=True,legend=True,cmap='Paired', grid=False) legend(ncol=2,loc=2, bbox_to_anchor=(1.05, 1), borderaxespad=0.) label1=['Baseline','8h','24h','48h','96h','120h'] ax1.set_yticklabels(label1, fontdict=None, minor=False) plt.title('Subject 1 Phyla',fontweight='bold') plt.savefig('Subject1Phyla.eps', format='eps', dpi=1000) ax1.set_xticklabels(labels) ``` Edit: I tried adding this to show only one legend entry; however, it only returns an empty legend: ``` h, l = ax1.get_legend_handles_labels() legend(l[4],h[4],ncol=2,loc=2, bbox_to_anchor=(1.05, 1), borderaxespad=0.)
```", "response":"This works: ``` plt.plot([0, 4], [3,4]) plt.plot([0, 4], [2,3],label='_nolegend_') # element missing from legend plt.plot([0, 4], [1,2]) plt.legend(['first', 'third']) ```", "best_answers_score":0.6554, "library_name":"matplotlib", "question_url":"https:\/\/stackoverflow.com\/questions\/24680981\/show-only-certain-items-in-legend", "best_answers_votes":309, "question_length":917, "response_length":171 }, { "question":"Hide legend from seaborn pairplot I would like to hide the Seaborn pairplot legend. The official docs don't mention a keyword legend. Everything I tried using plt.legend didn't work. Please suggest the best way forward. Thanks! ``` import matplotlib.pyplot as plt import seaborn as sns %matplotlib inline test = pd.DataFrame({ 'id': ['1','2','1','2','2','6','7','7','6','6'], 'x': [123,22,356,412,54,634,72,812,129,110], 'y':[120,12,35,41,45,63,17,91,112,151]}) sns.pairplot(x_vars='x', y_vars=\"y\", data=test, hue = 'id', height = 3) ```", "response":"Since _legend.remove() method won't work on some other seaborn plots, what about: ``` plt.legend([],[], frameon=False) ```", "best_answers_score":0.6544, "library_name":"matplotlib", "question_url":"https:\/\/stackoverflow.com\/questions\/54781243\/hide-legend-from-seaborn-pairplot", "best_answers_votes":64, "question_length":537, "response_length":122 }, { "question":"How do I fit long title? There's a similar question - but I can't make the solution proposed there work. 
Here's an example plot with a long title: ``` #!\/usr\/bin\/env python import matplotlib import matplotlib.pyplot import textwrap x = [1,2,3] y = [4,5,6] # initialization: fig = matplotlib.pyplot.figure(figsize=(8.0, 5.0)) # lines: fig.add_subplot(111).plot(x, y) # title: myTitle = \"Some really really long long long title I really really need - and just can't - just can't - make it any - simply any - shorter - at all.\" fig.add_subplot(111).set_title(\"\\n\".join(textwrap.wrap(myTitle, 80))) # tight: (matplotlib.pyplot).tight_layout() # saving: fig.savefig(\"fig.png\") ``` it gives a ```rb AttributeError: 'module' object has no attribute 'tight_layout' ``` and if I replace (matplotlib.pyplot).tight_layout() with fig.tight_layout() it gives: ```rb AttributeError: 'Figure' object has no attribute 'tight_layout' ``` So my question is - how do I fit the title to the plot?", "response":"Here's what I've finally used: ``` #!\/usr\/bin\/env python3 import matplotlib from matplotlib import pyplot as plt from textwrap import wrap data = range(5) fig = plt.figure() ax = fig.add_subplot(111) ax.plot(data, data) title = ax.set_title(\"\\n\".join(wrap(\"Some really really long long long title I really really need - and just can't - just can't - make it any - simply any - shorter - at all.\", 60))) fig.tight_layout() title.set_y(1.05) fig.subplots_adjust(top=0.8) fig.savefig(\"1.png\") ```", "best_answers_score":0.6543, "library_name":"matplotlib", "question_url":"https:\/\/stackoverflow.com\/questions\/10351565\/how-do-i-fit-long-title", "best_answers_votes":117, "question_length":976, "response_length":493 }, { "question":"matplotlib - extracting data from contour lines I would like to get data from a single contour of evenly spaced 2D data (an image-like data). Based on the example found in a similar question: How can I get the (x,y) values of the line that is ploted by a contour plot (matplotlib)? 
``` >>> import matplotlib.pyplot as plt >>> x = [1,2,3,4] >>> y = [1,2,3,4] >>> m = [[15,14,13,12],[14,12,10,8],[13,10,7,4],[12,8,4,0]] >>> cs = plt.contour(x,y,m, [9.5]) >>> cs.collections[0].get_paths() ``` The result of this call into cs.collections[0].get_paths() is: ``` [Path([[ 4. 1.625 ] [ 3.25 2. ] [ 3. 2.16666667] [ 2.16666667 3. ] [ 2. 3.25 ] [ 1.625 4. ]], None)] ``` Based on the plots, this result makes sense and appears to be collection of (y,x) pairs for the contour line. Other than manually looping over this return value, extracting the coordinates and assembling arrays for the line, are there better ways to get data back from a matplotlib.path object? Are there pitfalls to be aware of when extracting data from a matplotlib.path? Alternatively, are there alternatives within matplotlib or better yet numpy\/scipy to do a similar thing? Ideal thing would be to get a high resolution vector of (x,y) pairs describing the line, which could be used for further analysis, as in general my datasets are not a small or simple as the example above.", "response":"For a given path, you can get the points like this: ``` p = cs.collections[0].get_paths()[0] v = p.vertices x = v[:,0] y = v[:,1] ```", "best_answers_score":0.6528, "library_name":"matplotlib", "question_url":"https:\/\/stackoverflow.com\/questions\/5666056\/matplotlib-extracting-data-from-contour-lines", "best_answers_votes":65, "question_length":1346, "response_length":133 }, { "question":"matplotlib y-axis label on right side Is there a simple way to put the y-axis label on the right-hand side of the plot? I know that this can be done for the tick labels using ax.yaxis.tick_right(), but I would like to know if it can be done for the axis label as well. 
One idea which came to mind was to use ``` ax.yaxis.tick_right() ax2 = ax.twinx() ax2.set_ylabel('foo') ``` However, this doesn't have the desired effect of placing all labels (tick and axis labels) on the right-hand side, while preserving the extent of the y-axis. In short, I would like a way to move all the y-axis labels from the left to the right.", "response":"It looks like you can do it with: ``` ax.yaxis.set_label_position(\"right\") ax.yaxis.tick_right() ``` See here for an example.", "best_answers_score":0.652, "library_name":"matplotlib", "question_url":"https:\/\/stackoverflow.com\/questions\/13369888\/matplotlib-y-axis-label-on-right-side", "best_answers_votes":259, "question_length":621, "response_length":125 }, { "question":"how to turn on minor ticks only on y axis How can I turn on minor ticks only on the y axis of a linear vs linear plot? When I use the function minorticks_on to turn minor ticks on, they appear on both the x and y axis.", "response":"Nevermind, I figured it out. ``` ax.tick_params(axis='x', which='minor', bottom=False) ```", "best_answers_score":0.6515, "library_name":"matplotlib", "question_url":"https:\/\/stackoverflow.com\/questions\/12711202\/how-to-turn-on-minor-ticks-only-on-y-axis", "best_answers_votes":89, "question_length":212, "response_length":90 }, { "question":"Generate a heatmap using a scatter data set I have a set of X,Y data points (about 10k) that are easy to plot as a scatter plot but that I would like to represent as a heatmap. I looked through the examples in Matplotlib and they all seem to already start with heatmap cell values to generate the image.
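For concreteness, a minimal sketch of what I currently have (the arrays here are randomly generated stand-ins for my real points, and the Agg backend is only chosen so the sketch runs headless):

```python
import numpy as np
import matplotlib
matplotlib.use('Agg')  # non-interactive backend so this runs anywhere
import matplotlib.pyplot as plt

# Stand-in for my ~10k real (X, Y) data points
x = np.random.randn(10000)
y = np.random.randn(10000)

fig, ax = plt.subplots()
ax.scatter(x, y, s=1)  # easy as a scatter plot, but I want a heatmap instead
```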
Is there a method that converts a bunch of x, y, all different, to a heatmap (where zones with a higher frequency of x, y would be \"warmer\")?", "response":"If you don't want hexagons, you can use numpy's histogram2d function: ``` import numpy as np import numpy.random import matplotlib.pyplot as plt # Generate some test data x = np.random.randn(8873) y = np.random.randn(8873) heatmap, xedges, yedges = np.histogram2d(x, y, bins=50) extent = [xedges[0], xedges[-1], yedges[0], yedges[-1]] plt.clf() plt.imshow(heatmap.T, extent=extent, origin='lower') plt.show() ``` This makes a 50x50 heatmap. If you want, say, 512x384, you can put bins=(512, 384) in the call to histogram2d. Example:", "best_answers_score":0.6503, "library_name":"matplotlib", "question_url":"https:\/\/stackoverflow.com\/questions\/2369492\/generate-a-heatmap-using-a-scatter-data-set", "best_answers_votes":234, "question_length":443, "response_length":532 }, { "question":"How to display all label values in matplotlib I have two lists; when I plot with the following code, the x axis only shows values up to 12 (the max is 15). How can I show all of the values in the x list on the x axis? Thanks in advance. ``` x = [4,5,6,7,8,9,10,11,12,13,14,15,0,1,2,3] y = [10,20,30,40,50,60,70,80,90,100,110,120,130,140,150,160] fig = plt.figure() ax1 = fig.add_subplot(111) ax1.plot(np.arange(len(x)), y, 'o') ax1.set_xticklabels(x) plt.show() ``` If I set minor=True in the set_xticklabels function, it shows me all x=2,4,6,8,..,16... but I want ALL values. P.S. My x axis is not sorted; it should display in the order shown.", "response":"The issue here is that the number of ticks, set automatically, isn\u2019t the same as the number of points in your plot.
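You can see the mismatch directly with a minimal sketch of the setup above (the Agg backend is only used so the demonstration runs headless):

```python
import matplotlib
matplotlib.use('Agg')  # headless backend for the demonstration
import matplotlib.pyplot as plt
import numpy as np

x = [4,5,6,7,8,9,10,11,12,13,14,15,0,1,2,3]
y = [10,20,30,40,50,60,70,80,90,100,110,120,130,140,150,160]

fig, ax1 = plt.subplots()
ax1.plot(np.arange(len(x)), y, 'o')
fig.canvas.draw()  # finalize the automatically chosen tick locations
print(len(ax1.get_xticks()), len(x))  # typically fewer ticks than data points
```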
To resolve this, set the number of ticks: ``` ax1.set_xticks(np.arange(len(x))) ``` before the ax1.set_xticklabels(x) call.", "best_answers_score":0.6501, "library_name":"matplotlib", "question_url":"https:\/\/stackoverflow.com\/questions\/26131822\/how-to-display-all-label-values-in-matplotlib", "best_answers_votes":91, "question_length":630, "response_length":240 }, { "question":"Matplotlib Version Having my system prepped with homebrew and using pip install matplotlib after successful installation of numpy and scipy, I'm getting a successful installation. Then, running ``` $ python Python 2.7.6 (default, Jan 30 2014, 20:19:23) [GCC 4.2.1 Compatible Apple LLVM 5.0 (clang-500.2.79)] on darwin Type \"help\", \"copyright\", \"credits\" or \"license\" for more information. >>> import matplotlib >>> matplotlib.__version__ '1.1.1' ``` This is a very outdated version and none of my programs run with it. I used pip uninstall matplotlib and redid it with pip install 'the url for 1.3.1' and it still reads version 1.1.1. Is there a way I can manually delete all python libraries, even python itself, and restart from scratch? Or is there an obvious fix for this? EDIT: I'm running Mac OS X version 10.9. I just reinstalled python 2.7 with scipy, numpy, and matplotlib through macports. Is there a very basic way to see where, when I import matplotlib from the python environment, it is calling it from? Like which in the terminal? I began using homebrew but switched to macports for more control. Can that be a problem? Do I need to completely remove homebrew? I did get this message at first: Warning: Error parsing file \/Applications\/MacPorts\/Python 2.7\/Python Launcher.app\/Contents\/MacOS\/Python Launcher: Error opening or reading file but after running $ sudo port -f deactivate python27 followed by sudo port activate python27 I no longer have that warning, but I wanted to include this detail for completeness.
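For reference, one generic way to check which file a module is actually imported from, using only the standard library (shown here with the stdlib json module; substituting 'matplotlib' would show which install wins on sys.path):

```python
import importlib.util

# Locate the file a module would be loaded from, without importing it
spec = importlib.util.find_spec('json')
print(spec.origin)  # filesystem path of the copy that wins on sys.path
```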
EDIT 2: Could some things be installing to opt\/local\/bin when they need to be installed to usr\/local\/bin? EDIT 3: To shed some light on this, print scipy.__version__ reads 0.11.0, which is several versions out of date, and print numpy.__version__ reads 1.6.2, which is also outdated. However I attempt to install, it says the installation was successful, which I don't doubt. I suspect it's not linked up together in a correct way. Is there a way to delete everything that is connected to python at all and restart? FINAL EDIT: I think the easiest way to handle this is to run which python and see what options you have to run python. Because I used homebrew and macports at this time (not recommended), I had four options - a macports install, a package install from python.org, a homebrew install, and the standard 2.6 from Apple. Iterate through these and find which one your installer (pip or easy_install) is placing your frameworks in, and run that python when you need certain dependencies. The best way is to use only one package manager and run virtual environments if you need different dependencies, but we're all learning as we go.", "response":"Copy-paste the following code in your terminal and press enter; it will show the version of matplotlib installed on your system: ``` python import matplotlib print('matplotlib: {}'.format(matplotlib.__version__)) ```", "best_answers_score":0.6501, "library_name":"matplotlib", "question_url":"https:\/\/stackoverflow.com\/questions\/21473600\/matplotlib-version", "best_answers_votes":61, "question_length":2639, "response_length":219 }, { "question":"How can I open the interactive matplotlib window in IPython notebook? I am using IPython with --pylab=inline and would sometimes like to quickly switch to the interactive, zoomable matplotlib GUI for viewing plots (the one that pops up when you plot something in a terminal Python console). How could I do that? Preferably without leaving or restarting my notebook.
The problem with inline plots in IPy notebook is that they are of a limited resolution and I can't zoom into them to see some smaller parts. With the matplotlib GUI that starts from a terminal, I can select a rectangle of the graph that I want to zoom into and the axes adjust accordingly. I tried experimenting with ``` from matplotlib import interactive interactive(True) ``` and ``` interactive(False) ``` but that didn't do anything. I couldn't find any hint online either.", "response":"According to the documentation, you should be able to switch back and forth like this: ``` In [2]: %matplotlib inline In [3]: plot(...) In [4]: %matplotlib qt # wx, gtk, osx, tk, empty uses default In [5]: plot(...) ``` and that will pop up a regular plot window (a restart on the notebook may be necessary).", "best_answers_score":0.6499, "library_name":"matplotlib", "question_url":"https:\/\/stackoverflow.com\/questions\/14261903\/how-can-i-open-the-interactive-matplotlib-window-in-ipython-notebook", "best_answers_votes":199, "question_length":843, "response_length":308 }, { "question":"Matplotlib plots not showing up in Mac OSX? I am running Mac OSX 10.5.8. I installed matplotlib using macports. I get some examples from the matplotlib gallery like this one, without modification: http:\/\/matplotlib.sourceforge.net\/examples\/api\/unicode_minus.html I run it, get no error, but the picture does not show up. On Linux (Ubuntu) it works. Do you know what could be wrong here?", "response":"I had the same problem; I could even see how a new application window was created and immediately disappeared.
Simple solution - just check if you have ``` # Assumes you have imported \"matplotlib.pyplot\" as \"plt\" plt.show() ``` after the plot", "best_answers_score":0.6496, "library_name":"matplotlib", "question_url":"https:\/\/stackoverflow.com\/questions\/2512225\/matplotlib-plots-not-showing-up-in-mac-osx", "best_answers_votes":166, "question_length":384, "response_length":242 }, { "question":"Styling part of label in legend in matplotlib Is it possible to have part of the text of a legend in a particular style, let's say, bold or italic?", "response":"Write between $$ to force matplotlib to interpret it. ``` import matplotlib.pyplot as plt plt.plot(range(10), range(10), label = \"Normal text $\\it{Italics}$\") plt.legend() plt.show() ```", "best_answers_score":0.6496, "library_name":"matplotlib", "question_url":"https:\/\/stackoverflow.com\/questions\/8376335\/styling-part-of-label-in-legend-in-matplotlib", "best_answers_votes":61, "question_length":147, "response_length":186 }, { "question":"Creating labels where line appears in matplotlib figure I have a figure created in matplotlib (time-series data) over which are a series of ``` matplotlib.pyplot.axvline ``` lines. I would like to create labels on the plot that appear close to (probably on the RHS of the line and towards the top of the figure) these vertical lines.", "response":"You can use something like ``` plt.axvline(10) plt.text(10.1,0,'blah',rotation=90) ``` you might have to play around with the x and y value in text to get it to align properly. You can find the more complete documentation here.", "best_answers_score":0.6482, "library_name":"matplotlib", "question_url":"https:\/\/stackoverflow.com\/questions\/13413112\/creating-labels-where-line-appears-in-matplotlib-figure", "best_answers_votes":130, "question_length":333, "response_length":227 }, { "question":"How to set a single, main title above all the subplots I am using pyplot. I have 4 subplots. 
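For reference, a minimal sketch of the layout I mean (the data is illustrative, and the Agg backend is only used so the sketch runs headless):

```python
import matplotlib
matplotlib.use('Agg')  # non-interactive backend so this runs anywhere
import matplotlib.pyplot as plt
import numpy as np

data = np.arange(100).reshape((10, 10))
fig = plt.figure()
for i in range(1, 5):  # four subplots in a 2x2 grid
    ax = fig.add_subplot(2, 2, i)
    ax.plot(data[i - 1])
    ax.set_title('subplot %d' % i)
```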
How to set a single, main title above all the subplots? title() sets it above the last subplot.", "response":"Use pyplot.suptitle or Figure.suptitle: ``` import matplotlib.pyplot as plt import numpy as np fig=plt.figure() data=np.arange(900).reshape((30,30)) for i in range(1,5): ax=fig.add_subplot(2,2,i) ax.imshow(data) fig.suptitle('Main title') # or plt.suptitle('Main title') plt.show() ```", "best_answers_score":0.648, "library_name":"matplotlib", "question_url":"https:\/\/stackoverflow.com\/questions\/7066121\/how-to-set-a-single-main-title-above-all-the-subplots", "best_answers_votes":487, "question_length":188, "response_length":285 }, { "question":"Create stacked histogram from unequal length arrays I'd like to create a stacked histogram. If I have a single 2-D array, made of three equal length data sets, this is simple. Code and image below: ``` import numpy as np from matplotlib import pyplot as plt # create 3 data sets with 1,000 samples mu, sigma = 200, 25 x = mu + sigma*np.random.randn(1000,3) #Stack the data plt.figure() n, bins, patches = plt.hist(x, 30, stacked=True, density = True) plt.show() ``` However, if I try similar code with three data sets of a different length the results are that one histogram covers up another. Is there any way I can do the stacked histogram with mixed length data sets? ``` ##Continued from above ###Now as three separate arrays x1 = mu + sigma*np.random.randn(990,1) x2 = mu + sigma*np.random.randn(980,1) x3 = mu + sigma*np.random.randn(1000,1) #Stack the data plt.figure() plt.hist(x1, bins, stacked=True, density = True) plt.hist(x2, bins, stacked=True, density = True) plt.hist(x3, bins, stacked=True, density = True) plt.show() ```", "response":"Well, this is simple. I just need to put the three arrays in a list. 
``` ##Continued from above ###Now as three separate arrays x1 = mu + sigma*np.random.randn(990,1) x2 = mu + sigma*np.random.randn(980,1) x3 = mu + sigma*np.random.randn(1000,1) #Stack the data plt.figure() plt.hist([x1,x2,x3], bins, stacked=True, density=True) plt.show() ```", "best_answers_score":0.6475, "library_name":"matplotlib", "question_url":"https:\/\/stackoverflow.com\/questions\/18449602\/create-stacked-histogram-from-unequal-length-arrays", "best_answers_votes":104, "question_length":1038, "response_length":344 }, { "question":"Change x axes scale I created this plot using Matlab. Using matplotlib, the x-axis draws large numbers such as 100000, 200000, 300000. I would like to have something like 1, 2, 3 and a 10^5 to indicate that it's actually 100000, 200000, 300000. Is there a simple way to create such a scale in matplotlib?", "response":"Try using matplotlib.pyplot.ticklabel_format: ``` import matplotlib.pyplot as plt ... plt.ticklabel_format(style='sci', axis='x', scilimits=(0,0)) ``` This applies scientific notation (i.e. a x 10^b) to your x-axis tick marks", "best_answers_score":0.6474, "library_name":"matplotlib", "question_url":"https:\/\/stackoverflow.com\/questions\/11577665\/change-x-axes-scale", "best_answers_votes":174, "question_length":302, "response_length":224 }, { "question":"How do I display a single image in PyTorch? How do I display a PyTorch Tensor of shape (3, 224, 224) representing a 224x224 RGB image?
Using plt.imshow(image) gives the error: TypeError: Invalid dimensions for image data", "response":"Given a Tensor representing the image, use .permute() to put the channels as the last dimension when passing them to matplotlib: ``` plt.imshow(tensor_image.permute(1, 2, 0)) ``` Note: permute does not copy or allocate memory, and from_numpy() doesn't either.", "best_answers_score":0.6465, "library_name":"matplotlib", "question_url":"https:\/\/stackoverflow.com\/questions\/53623472\/how-do-i-display-a-single-image-in-pytorch", "best_answers_votes":171, "question_length":220, "response_length":259 }, { "question":"Imshow: extent and aspect I'm writing a software system that visualizes slices and projections through a 3D dataset. I'm using matplotlib and specifically imshow to visualize the image buffers I get back from my analysis code. Since I'd like to annotate the images with plot axes, I use the extent keyword that imshow supplies to map the image buffer pixel coordinates to a data space coordinate system. Unfortunately, matplotlib doesn't know about units. Say (taking an artificial example) that I want to plot an image with dimensions of 1000 m X 1 km. In that case the extent would be something like [0, 1000, 0, 1]. Even though the image array is square, since the aspect ratio implied by the extent keyword is 1000, the resulting plot axes also have an aspect ratio of 1000. Is it possible to force the aspect ratio of the plot while still keeping the automatically generated major tick marks and labels I get by using the extent keyword?", "response":"You can do it by setting the aspect of the image manually (or by letting it auto-scale to fill up the extent of the figure). By default, imshow sets the aspect of the plot to 1, as this is often what people want for image data.
In your case, you can do something like: ``` import matplotlib.pyplot as plt import numpy as np grid = np.random.random((10,10)) fig, (ax1, ax2, ax3) = plt.subplots(nrows=3, figsize=(6,10)) ax1.imshow(grid, extent=[0,100,0,1]) ax1.set_title('Default') ax2.imshow(grid, extent=[0,100,0,1], aspect='auto') ax2.set_title('Auto-scaled Aspect') ax3.imshow(grid, extent=[0,100,0,1], aspect=100) ax3.set_title('Manually Set Aspect') plt.tight_layout() plt.show() ```", "best_answers_score":0.6457, "library_name":"matplotlib", "question_url":"https:\/\/stackoverflow.com\/questions\/13384653\/imshow-extent-and-aspect", "best_answers_votes":182, "question_length":942, "response_length":687 }, { "question":"Displaying rotatable 3D plots in IPython or Jupyter Notebook (Mac OSX 10.10.5) I can reproduce from the matplotlib website mplot3d the example code for a 3D scatter plot scatter3d_demo.py, however the plot renders as a static image. I can not click on the graph and dynamically rotate to view the 3D plotted data. I have achieved the static 3D plot using the example code - using (a) ipython from within Terminal, (b) ipython notebook from within terminal, and (c) ipython notebook launched from the Anaconda launcher. I think I am missing some very basic step as assumed knowledge. In past learning, plotting has opened a GUI Python App which has a graph viewer. (Solution 2 in code shown below opens this.) Perhaps I need to know the code to export the output graph to that display method? (Yes, use %matplotlib (only) as first line without inline or notebook as shown in comments in code block below.) As an example in ipython notebook: ``` # These lines are comments # Initial setup from an online python notebook tutorial is below. # Note the first line \"%matplotlib inline\" this is how the tutorial has it. # Two solutions 1. use: \"%matplotlib notebook\" graphs appear dynamic in the notebook. # 2. use: \"%matplotlib\" (only) graphs appear dynamic in separate window. # ( 2. 
is the best solution for detailed graphs\/plots. ) %matplotlib inline import pandas as pd import numpy as np import matplotlib as mpl import matplotlib.pyplot as plt from mpl_toolkits.mplot3d import Axes3D pd.set_option('html',False) pd.set_option('max_columns',30) pd.set_option('max_rows',10) # What follows is a copy of the 3D plot example code. # Data is randomly generated so there is no external data import. def randrange(n, vmin, vmax): return (vmax-vmin)*np.random.rand(n) + vmin fig = plt.figure() ax = fig.add_subplot(111, projection='3d') n = 100 for c, m, zl, zh in [('r', 'o', -60, -25), ('b', '^', -30, -5)]: xs = randrange(n, 23, 50) ys = randrange(n, 0, 100) zs = randrange(n, zl, zh) ax.scatter(xs, ys, zs, c=c, marker=m) ax.set_xlabel('X Label') ax.set_ylabel('Y Label') ax.set_zlabel('Z Label') plt.show() ``` Can someone identify what I am missing? Looking at Python 3.3.6 documentation, section 25.1, perhaps the tkinter package ... The tkinter package (\u201cTk interface\u201d) is the standard Python interface to the Tk GUI toolkit. Both Tk and tkinter are available on most Unix platforms, as well as on Windows systems. I think though, this relates to development of GUI programs so I am not sure this is relevant.
(Correct, this was not needed for the solution.)", "response":"Use %matplotlib notebook instead of %matplotlib inline to get embedded interactive figures in the IPython notebook \u2013 this requires recent versions of matplotlib (1.4+) and IPython (3.0+).", "best_answers_score":0.6451, "library_name":"matplotlib", "question_url":"https:\/\/stackoverflow.com\/questions\/33436221\/displaying-rotatable-3d-plots-in-ipython-or-jupyter-notebook", "best_answers_votes":161, "question_length":2556, "response_length":187 }, { "question":"raise LinAlgError(\"SVD did not converge\") LinAlgError: SVD did not converge in matplotlib pca determination Code: ```py import numpy from matplotlib.mlab import PCA file_name = \"store1_pca_matrix.txt\" ori_data = numpy.loadtxt(file_name,dtype='float', comments='#', delimiter=None, converters=None, skiprows=0, usecols=None, unpack=False, ndmin=0) result = PCA(ori_data) ``` Though my input matrix is devoid of nan and inf, I do get the error below: ```py raise LinAlgError(\"SVD did not converge\") LinAlgError: SVD did not converge ``` What's the problem?", "response":"This can happen when there are inf or nan values in the data. Use this to remove nan values if ori_data is a pandas DataFrame: ``` ori_data.dropna(inplace=True) ``` Note that dropna is a pandas method; for the plain NumPy array returned by numpy.loadtxt here, filter the rows instead with ori_data = ori_data[~numpy.isnan(ori_data).any(axis=1)].", "best_answers_score":0.6429, "library_name":"matplotlib", "question_url":"https:\/\/stackoverflow.com\/questions\/21827594\/raise-linalgerrorsvd-did-not-converge-linalgerror-svd-did-not-converge-in-m", "best_answers_votes":63, "question_length":554, "response_length":130 }, { "question":"How can I show figures separately? Say that I have two figures in matplotlib, with one plot per figure: ``` import matplotlib.pyplot as plt f1 = plt.figure() plt.plot(range(0,10)) f2 = plt.figure() plt.plot(range(10,20)) ``` Then I show both in one shot ``` plt.show() ``` Is there a way to show them separately, i.e. to show just f1?
Or better: how can I manage the figures separately like in the following 'wishful' code (that doesn't work): ``` f1 = plt.figure() f1.plot(range(0,10)) f1.show() ```", "response":"Sure. Add an Axes using add_subplot. ``` import matplotlib.pyplot as plt f1 = plt.figure() f2 = plt.figure() ax1 = f1.add_subplot(111) ax1.plot(range(0,10)) ax2 = f2.add_subplot(111) ax2.plot(range(10,20)) plt.show() ``` Alternatively, use add_axes. ``` ax1 = f1.add_axes([0.1,0.1,0.8,0.8]) ax1.plot(range(0,10)) ax2 = f2.add_axes([0.1,0.1,0.8,0.8]) ax2.plot(range(10,20)) ```", "best_answers_score":0.6427, "library_name":"matplotlib", "question_url":"https:\/\/stackoverflow.com\/questions\/2397791\/how-can-i-show-figures-separately", "best_answers_votes":58, "question_length":500, "response_length":408 }, { "question":"matplotlib get ylim values I'm using matplotlib to plot data (using plot and errorbar functions) from Python. I have to plot a set of totally separate and independent plots, and then adjust their ylim values so they can be easily visually compared. How can I retrieve the ylim values from each plot, so that I can take the min and max of the lower and upper ylim values, respectively, and adjust the plots so they can be visually compared? Of course, I could just analyze the data and come up with my own custom ylim values... but I'd like to use matplotlib to do that for me. Any suggestions on how to easily (and efficiently) do this?
Here's my Python function that plots using matplotlib: ``` import matplotlib.pyplot as plt def myplotfunction(title, values, errors, plot_file_name): # plot errorbars indices = range(0, len(values)) fig = plt.figure() plt.errorbar(tuple(indices), tuple(values), tuple(errors), marker='.') # axes axes = plt.gca() axes.set_xlim([-0.5, len(values) - 0.5]) axes.set_xlabel('My x-axis title') axes.set_ylabel('My y-axis title') # title plt.title(title) # save as file plt.savefig(plot_file_name) # close figure plt.close(fig) ```", "response":"Just use axes.get_ylim(), it is very similar to set_ylim. From the docs: get_ylim() Get the y-axis range [bottom, top]", "best_answers_score":0.6423, "library_name":"matplotlib", "question_url":"https:\/\/stackoverflow.com\/questions\/26131607\/matplotlib-get-ylim-values", "best_answers_votes":266, "question_length":1162, "response_length":118 }, { "question":"set ticks with logarithmic scale It seems that the set_xticks is not working in log scale: ```py from matplotlib import pyplot as plt fig1, ax1 = plt.subplots() ax1.plot([10, 100, 1000], [1,2,3]) ax1.set_xscale('log') ax1.set_xticks([20, 200, 500]) plt.show() ``` is it possible?", "response":"```py import matplotlib from matplotlib import pyplot as plt fig1, ax1 = plt.subplots() ax1.plot([10, 100, 1000], [1,2,3]) ax1.set_xscale('log') ax1.set_xticks([20, 200, 500]) ax1.get_xaxis().set_major_formatter(matplotlib.ticker.ScalarFormatter()) ``` or ```py ax1.get_xaxis().get_major_formatter().labelOnlyBase = False plt.show() ```", "best_answers_score":0.642, "library_name":"matplotlib", "question_url":"https:\/\/stackoverflow.com\/questions\/14530113\/set-ticks-with-logarithmic-scale", "best_answers_votes":113, "question_length":279, "response_length":336 }, { "question":"What does a 4-element tuple argument for 'bbox_to_anchor' mean in matplotlib? 
In the \"Legend location\" section of the \"Legend guide\" in the matplotlib website, there's a small script where line 9 is plt.legend(bbox_to_anchor=(0., 1.02, 1., .102), loc=3, ncol=2, mode=\"expand\", borderaxespad=0.). All the tuples I've seen passed to bbox_to_anchor have 2 elements in them, but this one has 4. What does each element mean if the tuple passed has 4 elements? I was looking at it in the pyplot.legend docs, and it said something about bbox_transform coordinates. So I looked around and found matplotlib.transforms.Bbox with a static from_bounds(x0, y0, width, height). I was guessing the setup for the 4-tuple parameter was based on this from_bounds. I copied the script to Spyder, did %matplotlib in an IPython console, and changed some values. It seemed to make sense, but when I tried only changing .102 to something like 0.9, the legend didn't change. I think the tuple is based on from_bounds, I just don't know why changing the last value in the tuple did nothing.", "response":"You're right, the 4-tuple in plt.legend(bbox_to_anchor=(0., 1.02, 1., .102), loc=3) is set as (x0, y0, width, height) where (x0,y0) are the lower left corner coordinates of the bounding box. While those parameters set the bounding box for the legend, the legend's actual vertical size is shrunk to the size that is needed to put the elements in. Further its position is determined only in conjunction with the loc parameter. The loc parameter sets the alignment of the legend inside the bounding box, such that for some cases, no difference will be seen when changing the height, compare e.g.
plot (2) and (4).", "best_answers_score":0.6413, "library_name":"matplotlib", "question_url":"https:\/\/stackoverflow.com\/questions\/39803385\/what-does-a-4-element-tuple-argument-for-bbox-to-anchor-mean-in-matplotlib", "best_answers_votes":72, "question_length":1063, "response_length":610 }, { "question":"How to plot different groups of data from a dataframe into a single figure I have a temperature file with many years of temperature records in the format below: ```none 2012-04-12,16:13:09,20.6 2012-04-12,17:13:09,20.9 2012-04-12,18:13:09,20.6 2007-05-12,19:13:09,5.4 2007-05-12,20:13:09,20.6 2007-05-12,20:13:09,20.6 2005-08-11,11:13:09,20.6 2005-08-11,11:13:09,17.5 2005-08-13,07:13:09,20.6 2006-04-13,01:13:09,20.6 ``` Every year has a different number of records, so the pandas datetimeindices are all different. I want to plot the different years' data in the same figure for comparison: The X-axis is datetimeindices from Jan to Dec The Y-axis is the temperature How should I go about doing this?", "response":"Try: ``` ax = df1.plot() df2.plot(ax=ax) ```", "best_answers_score":0.6402, "library_name":"matplotlib", "question_url":"https:\/\/stackoverflow.com\/questions\/13872533\/how-to-plot-different-groups-of-data-from-a-dataframe-into-a-single-figure", "best_answers_votes":410, "question_length":709, "response_length":44 }, { "question":"Plotting multiple different plots in one figure using Seaborn I am attempting to recreate the following plot from the book Introduction to Statistical Learning using seaborn. I specifically want to recreate this using seaborn's lmplot to create the first two plots and boxplot to create the second. The main problem is that lmplot creates a FacetGrid according to this answer which forces me to hackily add another matplotlib Axes for the boxplot. I was wondering if there was an easier way to achieve this. Below, I have to do quite a bit of manual manipulation to get the desired plot.
```py seaborn_grid = sns.lmplot('value', 'wage', col='variable', hue='education', data=df_melt, sharex=False) seaborn_grid.fig.set_figwidth(8) left, bottom, width, height = seaborn_grid.fig.axes[0]._position.bounds left2, bottom2, width2, height2 = seaborn_grid.fig.axes[1]._position.bounds left_diff = left2 - left seaborn_grid.fig.add_axes((left2 + left_diff, bottom, width, height)) sns.boxplot('education', 'wage', data=df_wage, ax = seaborn_grid.fig.axes[2]) ax2 = seaborn_grid.fig.axes[2] ax2.set_yticklabels([]) ax2.set_xticklabels(ax2.get_xmajorticklabels(), rotation=30) ax2.set_ylabel('') ax2.set_xlabel(''); leg = seaborn_grid.fig.legends[0] leg.set_bbox_to_anchor([0, .1, 1.5,1]) ``` Which yields Sample data for DataFrames: ```py df_melt = { 'education': ['1. < HS Grad', '4. College Grad', '3. Some College', '4. College Grad', '2. HS Grad'], 'value': [18, 24, 45, 43, 50], 'variable': ['age', 'age', 'age', 'age', 'age'], 'wage': [75.0431540173515, 70.47601964694451, 130.982177377461, 154.68529299563, 75.0431540173515]} df_wage = { 'education': ['1. < HS Grad', '4. College Grad', '3. Some College', '4. College Grad', '2. HS Grad'], 'wage': [75.0431540173515, 70.47601964694451, 130.982177377461, 154.68529299563, 75.0431540173515]} ```", "response":"One possibility would be to NOT use lmplot(), but directly use regplot() instead. regplot() plots on the axes you pass as an argument with ax=. You lose the ability to automatically split your dataset according to a certain variable, but if you know beforehand the plots you want to generate, it shouldn't be a problem. 
Something like this: ```py import matplotlib.pyplot as plt import seaborn as sns fig, axs = plt.subplots(ncols=3) sns.regplot(x='value', y='wage', data=df_melt, ax=axs[0]) sns.regplot(x='value', y='wage', data=df_melt, ax=axs[1]) sns.boxplot(x='education',y='wage', data=df_melt, ax=axs[2]) ```", "best_answers_score":0.6401, "library_name":"matplotlib", "question_url":"https:\/\/stackoverflow.com\/questions\/38082602\/plotting-multiple-different-plots-in-one-figure-using-seaborn", "best_answers_votes":174, "question_length":1841, "response_length":614 }, { "question":"How to specify values on y axis of a matplotlib plot I need to generate a graph using matplotlib like the one in the attached picture. So far I tried it like this: ```py import matplotlib.pyplot as plt import numpy as np x = np.array([0,1,2,3]) y = np.array([20,21,22,23]) my_xticks = ['John','Arnold','Mavis','Matt'] plt.xticks(x, my_xticks) plt.plot(x, y) plt.show() ``` How can I specify a different number of values on the y axis different from the number of values on the x axis? And maybe specify them as an interval with 0.005 difference instead of a list?", "response":"``` import matplotlib.pyplot as plt import numpy as np x = np.array([0,1,2,3]) y = np.array([0.650, 0.660, 0.675, 0.685]) my_xticks = ['a', 'b', 'c', 'd'] plt.xticks(x, my_xticks) plt.yticks(np.arange(y.min(), y.max(), 0.005)) plt.plot(x, y) plt.grid(axis='y', linestyle='-') plt.show() ``` Something like this should work.", "best_answers_score":0.6398, "library_name":"matplotlib", "question_url":"https:\/\/stackoverflow.com\/questions\/21393802\/how-to-specify-values-on-y-axis-of-a-matplotlib-plot", "best_answers_votes":63, "question_length":563, "response_length":323 }, { "question":"How do I plot in real-time in a while loop? I am trying to plot some data from a camera in real time using OpenCV. However, the real-time plotting (using matplotlib) doesn't seem to be working. 
I've isolated the problem into this simple example: ``` fig = plt.figure() plt.axis([0, 1000, 0, 1]) i = 0 x = list() y = list() while i < 1000: temp_y = np.random.random() x.append(i) y.append(temp_y) plt.scatter(i, temp_y) i += 1 plt.show() ``` I would expect this example to plot 1000 points individually. What actually happens is that the window pops up with the first point showing (ok with that), then waits for the loop to finish before it populates the rest of the graph. Any thoughts why I am not seeing points populated one at a time?", "response":"Here's the working version of the code in question (requires at least version Matplotlib 1.1.0 from 2011-11-14): ``` import numpy as np import matplotlib.pyplot as plt plt.axis([0, 10, 0, 1]) for i in range(10): y = np.random.random() plt.scatter(i, y) plt.pause(0.05) plt.show() ``` Note the call to plt.pause(0.05), which both draws the new data and runs the GUI's event loop (allowing for mouse interaction).", "best_answers_score":0.6392, "library_name":"matplotlib", "question_url":"https:\/\/stackoverflow.com\/questions\/11874767\/how-do-i-plot-in-real-time-in-a-while-loop", "best_answers_votes":415, "question_length":738, "response_length":411 }, { "question":"Matplotlib subplots_adjust hspace so titles and xlabels don't overlap? [duplicate] This question already has answers here: Improve subplot size\/spacing with many subplots (12 answers) Closed 2 years ago. With, say, 3 rows of subplots in matplotlib, xlabels of one row can overlap the title of the next. One has to fiddle with pl.subplots_adjust(hspace), which is annoying. Is there a recipe for hspace that prevents overlaps and works for any nrow? ``` \"\"\" matplotlib xlabels overlap titles ? \"\"\" import sys import numpy as np import pylab as pl nrow = 3 hspace = .4 # of plot height, titles and xlabels both fall within this ?? exec \"\\n\".join( sys.argv[1:] ) # nrow= ... 
y = np.arange(10) pl.subplots_adjust( hspace=hspace ) for jrow in range( 1, nrow+1 ): pl.subplot( nrow, 1, jrow ) pl.plot( y**jrow ) pl.title( 5 * (\"title %d \" % jrow) ) pl.xlabel( 5 * (\"xlabel %d \" % jrow) ) pl.show() ``` My versions: matplotlib 0.99.1.1, Python 2.6.4, Mac OSX 10.4.11, backend: Qt4Agg (TkAgg => Exception in Tkinter callback) (For many extra points, can anyone outline how matplotlib's packer \/ spacer works, along the lines of chapter 17 \"the packer\" in the Tcl\/Tk book?)", "response":"The link posted by Jose has been updated and pylab now has a tight_layout() function that does this automatically (in matplotlib version 1.1.0). http:\/\/matplotlib.org\/api\/pyplot_api.html#matplotlib.pyplot.tight_layout http:\/\/matplotlib.org\/users\/tight_layout_guide.html#plotting-guide-tight-layout", "best_answers_score":0.6392, "library_name":"matplotlib", "question_url":"https:\/\/stackoverflow.com\/questions\/2418125\/matplotlib-subplots-adjust-hspace-so-titles-and-xlabels-dont-overlap", "best_answers_votes":47, "question_length":1163, "response_length":297 }, { "question":"Efficient method of calculating density of irregularly spaced points I am attempting to generate map overlay images that would assist in identifying hot-spots, that is areas on the map that have high density of data points. None of the approaches that I've tried are fast enough for my needs. Note: I forgot to mention that the algorithm should work well under both low and high zoom scenarios (or low and high data point density). I looked through numpy, pyplot and scipy libraries, and the closest I could find was numpy.histogram2d. As you can see in the image below, the histogram2d output is rather crude. (Each image includes points overlaying the heatmap for better understanding) My second attempt was to iterate over all the data points, and then calculate the hot-spot value as a function of distance. 
This produced a better looking image, however it is too slow to use in my application. Since it's O(n), it works ok with 100 points, but blows out when I use my actual dataset of 30000 points. My final attempt was to store the data in an KDTree, and use the nearest 5 points to calculate the hot-spot value. This algorithm is O(1), so much faster with large dataset. It's still not fast enough, it takes about 20 seconds to generate a 256x256 bitmap, and I would like this to happen in around 1 second time. Edit The boxsum smoothing solution provided by 6502 works well at all zoom levels and is much faster than my original methods. The gaussian filter solution suggested by Luke and Neil G is the fastest. You can see all four approaches below, using 1000 data points in total, at 3x zoom there are around 60 points visible. Complete code that generates my original 3 attempts, the boxsum smoothing solution provided by 6502 and gaussian filter suggested by Luke (improved to handle edges better and allow zooming in) is here: ``` import matplotlib import numpy as np from matplotlib.mlab import griddata import matplotlib.cm as cm import matplotlib.pyplot as plt import math from scipy.spatial import KDTree import time import scipy.ndimage as ndi def grid_density_kdtree(xl, yl, xi, yi, dfactor): zz = np.empty([len(xi),len(yi)], dtype=np.uint8) zipped = zip(xl, yl) kdtree = KDTree(zipped) for xci in range(0, len(xi)): xc = xi[xci] for yci in range(0, len(yi)): yc = yi[yci] density = 0. retvalset = kdtree.query((xc,yc), k=5) for dist in retvalset[0]: density = density + math.exp(-dfactor * pow(dist, 2)) \/ 5 zz[yci][xci] = min(density, 1.0) * 255 return zz def grid_density(xl, yl, xi, yi): ximin, ximax = min(xi), max(xi) yimin, yimax = min(yi), max(yi) xxi,yyi = np.meshgrid(xi,yi) #zz = np.empty_like(xxi) zz = np.empty([len(xi),len(yi)]) for xci in range(0, len(xi)): xc = xi[xci] for yci in range(0, len(yi)): yc = yi[yci] density = 0. 
for i in range(0,len(xl)): xd = math.fabs(xl[i] - xc) yd = math.fabs(yl[i] - yc) if xd < 1 and yd < 1: dist = math.sqrt(math.pow(xd, 2) + math.pow(yd, 2)) density = density + math.exp(-5.0 * pow(dist, 2)) zz[yci][xci] = density return zz def boxsum(img, w, h, r): st = [0] * (w+1) * (h+1) for x in xrange(w): st[x+1] = st[x] + img[x] for y in xrange(h): st[(y+1)*(w+1)] = st[y*(w+1)] + img[y*w] for x in xrange(w): st[(y+1)*(w+1)+(x+1)] = st[(y+1)*(w+1)+x] + st[y*(w+1)+(x+1)] - st[y*(w+1)+x] + img[y*w+x] for y in xrange(h): y0 = max(0, y - r) y1 = min(h, y + r + 1) for x in xrange(w): x0 = max(0, x - r) x1 = min(w, x + r + 1) img[y*w+x] = st[y0*(w+1)+x0] + st[y1*(w+1)+x1] - st[y1*(w+1)+x0] - st[y0*(w+1)+x1] def grid_density_boxsum(x0, y0, x1, y1, w, h, data): kx = (w - 1) \/ (x1 - x0) ky = (h - 1) \/ (y1 - y0) r = 15 border = r * 2 imgw = (w + 2 * border) imgh = (h + 2 * border) img = [0] * (imgw * imgh) for x, y in data: ix = int((x - x0) * kx) + border iy = int((y - y0) * ky) + border if 0 <= ix < imgw and 0 <= iy < imgh: img[iy * imgw + ix] += 1 for p in xrange(4): boxsum(img, imgw, imgh, r) a = np.array(img).reshape(imgh,imgw) b = a[border:(border+h),border:(border+w)] return b def grid_density_gaussian_filter(x0, y0, x1, y1, w, h, data): kx = (w - 1) \/ (x1 - x0) ky = (h - 1) \/ (y1 - y0) r = 20 border = r imgw = (w + 2 * border) imgh = (h + 2 * border) img = np.zeros((imgh,imgw)) for x, y in data: ix = int((x - x0) * kx) + border iy = int((y - y0) * ky) + border if 0 <= ix < imgw and 0 <= iy < imgh: img[iy][ix] += 1 return ndi.gaussian_filter(img, (r,r)) ## gaussian convolution def generate_graph(): n = 1000 # data points range data_ymin = -2. data_ymax = 2. data_xmin = -2. data_xmax = 2. 
# view area range view_ymin = -.5 view_ymax = .5 view_xmin = -.5 view_xmax = .5 # generate data xl = np.random.uniform(data_xmin, data_xmax, n) yl = np.random.uniform(data_ymin, data_ymax, n) zl = np.random.uniform(0, 1, n) # get visible data points xlvis = [] ylvis = [] for i in range(0,len(xl)): if view_xmin < xl[i] < view_xmax and view_ymin < yl[i] < view_ymax: xlvis.append(xl[i]) ylvis.append(yl[i]) fig = plt.figure() # plot histogram plt1 = fig.add_subplot(221) plt1.set_axis_off() t0 = time.clock() zd, xe, ye = np.histogram2d(yl, xl, bins=10, range=[[view_ymin, view_ymax],[view_xmin, view_xmax]], normed=True) plt.title('numpy.histogram2d - '+str(time.clock()-t0)+\"sec\") plt.imshow(zd, origin='lower', extent=[view_xmin, view_xmax, view_ymin, view_ymax]) plt.scatter(xlvis, ylvis) # plot density calculated with kdtree plt2 = fig.add_subplot(222) plt2.set_axis_off() xi = np.linspace(view_xmin, view_xmax, 256) yi = np.linspace(view_ymin, view_ymax, 256) t0 = time.clock() zd = grid_density_kdtree(xl, yl, xi, yi, 70) plt.title('function of 5 nearest using kdtree\\n'+str(time.clock()-t0)+\"sec\") cmap=cm.jet A = (cmap(zd\/256.0)*255).astype(np.uint8) #A[:,:,3] = zd plt.imshow(A , origin='lower', extent=[view_xmin, view_xmax, view_ymin, view_ymax]) plt.scatter(xlvis, ylvis) # gaussian filter plt3 = fig.add_subplot(223) plt3.set_axis_off() t0 = time.clock() zd = grid_density_gaussian_filter(view_xmin, view_ymin, view_xmax, view_ymax, 256, 256, zip(xl, yl)) plt.title('ndi.gaussian_filter - '+str(time.clock()-t0)+\"sec\") plt.imshow(zd , origin='lower', extent=[view_xmin, view_xmax, view_ymin, view_ymax]) plt.scatter(xlvis, ylvis) # boxsum smoothing plt3 = fig.add_subplot(224) plt3.set_axis_off() t0 = time.clock() zd = grid_density_boxsum(view_xmin, view_ymin, view_xmax, view_ymax, 256, 256, zip(xl, yl)) plt.title('boxsum smoothing - '+str(time.clock()-t0)+\"sec\") plt.imshow(zd, origin='lower', extent=[view_xmin, view_xmax, view_ymin, view_ymax]) plt.scatter(xlvis, 
ylvis) if __name__=='__main__': generate_graph() plt.show() ```", "response":"This approach is along the lines of some previous answers: increment a pixel for each spot, then smooth the image with a gaussian filter. A 256x256 image runs in about 350ms on my 6-year-old laptop. ``` import numpy as np import scipy.ndimage as ndi data = np.random.rand(30000,2) ## create random dataset inds = (data * 255).astype('uint') ## convert to indices img = np.zeros((256,256)) ## blank image for i in xrange(data.shape[0]): ## draw pixels img[inds[i,0], inds[i,1]] += 1 img = ndi.gaussian_filter(img, (10,10)) ```", "best_answers_score":0.6385, "library_name":"matplotlib", "question_url":"https:\/\/stackoverflow.com\/questions\/6652671\/efficient-method-of-calculating-density-of-irregularly-spaced-points", "best_answers_votes":31, "question_length":6527, "response_length":525 }, { "question":"Move X-Axis label downwards, but not X-Axis Ticks in matplotlib I'm using Matplotlib to plot a histogram. Using tips from my previous question: Matplotlib - label each bin, I've more or less got the kinks worked out. There's one final issue - previously - the x-axis label (\"Time (in milliseconds)\") was being rendered underneath the x-axis tickmarks (0.00, 0.04, 0.08, 0.12 etc.) Using the advice from Joe Kingston (see question above), I tried using: ```py ax.tick_params(axis='x', pad=30) ``` However, this moves both the x-axis tickmarks (0.00, 0.04, 0.08, 0.12 etc.), as well as the x-axis label (\"Time (in milliseconds)\"): Is there any way to move only the x-axis label to underneath the three rows of figures? NB: You may need to open the PNGs below directly - Right Click on the image, then View Image (in FF), or Open image in new tab (Chrome). 
The image resize done by SO has rendered them nigh unreadable", "response":"Use the labelpad parameter: ``` pl.xlabel(\"...\", labelpad=20) ``` or set it after: ``` ax.xaxis.labelpad = 20 ```", "best_answers_score":0.6372, "library_name":"matplotlib", "question_url":"https:\/\/stackoverflow.com\/questions\/6406368\/move-x-axis-label-downwards-but-not-x-axis-ticks-in-matplotlib", "best_answers_votes":276, "question_length":915, "response_length":109 }, { "question":"How to get a list of all the fonts currently available for Matplotlib? e.g. when I want to set font in ``` matplotlib.rc('font', **font) ```", "response":"``` import matplotlib.font_manager matplotlib.font_manager.findSystemFonts(fontpaths=None, fontext='ttf') ``` Check this for other options.", "best_answers_score":0.6367, "library_name":"matplotlib", "question_url":"https:\/\/stackoverflow.com\/questions\/8753835\/how-to-get-a-list-of-all-the-fonts-currently-available-for-matplotlib", "best_answers_votes":83, "question_length":140, "response_length":139 }, { "question":"How to add trendline to a scatter plot How could I add a trendline to a dot graph drawn using matplotlib.scatter?", "response":"As explained here, with help from numpy one can calculate, for example, a linear fit. ``` import numpy import pylab # plot the data itself (x and y are the scatter data) pylab.plot(x,y,'o') # calc the trendline z = numpy.polyfit(x, y, 1) p = numpy.poly1d(z) pylab.plot(x,p(x),\"r--\") # the line equation: print(\"y=%.6fx+(%.6f)\" % (z[0], z[1])) ```", "best_answers_score":0.6351, "library_name":"matplotlib", "question_url":"https:\/\/stackoverflow.com\/questions\/26447191\/how-to-add-trendline-to-a-scatter-plot", "best_answers_votes":110, "question_length":113, "response_length":286 }, { "question":"Histogram outlined by added edgecolor I have plotted a histogram and was expecting to see the outlines of my bars but this is not the case.
I'm using the following code: ```py import matplotlib.pyplot as plt from numpy.random import normal gaussian_numbers = normal(size=1000) plt.hist(gaussian_numbers) plt.title(\"Gaussian Histogram\") plt.xlabel(\"Value\") plt.ylabel(\"Frequency\") plt.show() ``` How do I show the outline of the bars?", "response":"It looks like either your linewidth was set to zero or your edgecolor was set to 'none'. Matplotlib changed the defaults for these in 2.0. Try using: ``` plt.hist(gaussian_numbers, edgecolor='black', linewidth=1.2) ```", "best_answers_score":0.6343, "library_name":"matplotlib", "question_url":"https:\/\/stackoverflow.com\/questions\/42741687\/histogram-outlined-by-added-edgecolor", "best_answers_votes":184, "question_length":433, "response_length":218 }, { "question":"How to highlight specific x-value ranges I'm making a visualization of historical stock data for a project, and I'd like to highlight regions of drops. For instance, when the stock is experiencing significant drawdown, I would like to highlight it with a red region. Can I do this automatically, or will I have to draw a rectangle or something?", "response":"Have a look at axvspan (and axhspan for highlighting a region of the y-axis). ``` import matplotlib.pyplot as plt plt.plot(range(10)) plt.axvspan(3, 6, color='red', alpha=0.5) plt.show() ``` If you're using dates, then you'll need to convert your min and max x values to matplotlib dates. Use matplotlib.dates.date2num for datetime objects or matplotlib.dates.datestr2num for various string timestamps. 
``` import matplotlib.pyplot as plt import matplotlib.dates as mdates import numpy as np import datetime as dt t = mdates.drange(dt.datetime(2011, 10, 15), dt.datetime(2011, 11, 27), dt.timedelta(hours=2)) y = np.sin(t) fig, ax = plt.subplots() ax.plot_date(t, y, 'b-') ax.axvspan(*mdates.datestr2num(['10\/27\/2011', '11\/2\/2011']), color='red', alpha=0.5) fig.autofmt_xdate() plt.show() ```", "best_answers_score":0.6322, "library_name":"matplotlib", "question_url":"https:\/\/stackoverflow.com\/questions\/8270981\/how-to-highlight-specific-x-value-ranges", "best_answers_votes":180, "question_length":344, "response_length":773 }, { "question":"How to show labels on matplotlib plots When I execute the following code, it doesn't produce a plot with a label. ```py import matplotlib.pyplot as plt import numpy as np x = np.arange(1, 5) plt.plot(x, x*1.5, label='Normal') ``` Numpy version is '1.6.2' Matplotlib version is '1.3.x' Why is this happening?", "response":"You forgot to display the legend: ``` ... plt.legend(loc='best') plt.show() ```", "best_answers_score":0.632, "library_name":"matplotlib", "question_url":"https:\/\/stackoverflow.com\/questions\/14657169\/how-to-show-labels-on-matplotlib-plots", "best_answers_votes":138, "question_length":307, "response_length":79 }, { "question":"Equivalent function for xticks for an AxesSubplot object So I am trying to use Axes objects to control my matplotlib figure. I am not using plt (aka import matplotlib.pyplot as plt) because I am embedding the figure in my tkinter GUI per this. However, I am also using subplots in the figure, so something like: ```py a = f.add_subplot(121) a2 = f.add_subplot(122) a.plot(fn2,mag) a2.bar(range(0,10), magBin, width) ``` This is all well and good, I can use the axes properties to control things (i.e. a.axesMethod()), but I want string labels for my bar plots, per this, see code.
My dilemma is that I cannot use ```py plt.xticks(ind+width, ('G1', 'G2', 'G3', 'G4', 'G5') ) ``` As in the example, because I cannot use plt if I want to embed it into my Tkinter GUI. I am limited to what I can do with Axes objects. I am trying to use a2.set_xticks, but this does not allow for the string as ticks functionality I need for my bar chart.", "response":"you can use instead: ``` axes.set_xticks(ticks, minor=False) ``` and ``` axes.set_xticklabels(labels, fontdict=None, minor=False) ```", "best_answers_score":0.6307, "library_name":"matplotlib", "question_url":"https:\/\/stackoverflow.com\/questions\/8384120\/equivalent-function-for-xticks-for-an-axessubplot-object", "best_answers_votes":124, "question_length":934, "response_length":133 }, { "question":"Matplotlib text dimensions Is it possible to determine the dimensions of a matplotlib text object? How can I find the width and height in pixels? Thanks Edit: I think I figured out a way to do this. I've included an example below. ``` import matplotlib as plt f = plt.figure() r = f.canvas.get_renderer() t = plt.text(0.5, 0.5, 'test') bb = t.get_window_extent(renderer=r) width = bb.width height = bb.height ```", "response":"``` from matplotlib import pyplot as plt f = plt.figure() r = f.canvas.get_renderer() t = plt.text(0.5, 0.5, 'test') bb = t.get_window_extent(renderer=r) width = bb.width height = bb.height ```", "best_answers_score":0.6301, "library_name":"matplotlib", "question_url":"https:\/\/stackoverflow.com\/questions\/5320205\/matplotlib-text-dimensions", "best_answers_votes":32, "question_length":412, "response_length":193 }, { "question":"How to make matplotlib show all x coordinates? For example in the following code: ```py import numpy as np import matplotlib.pyplot as plt N = 10 x = [1,2,3,4,5,6,7,8,9,10] y = np.random.rand(N) plt.scatter(x, y) plt.show() ``` I get the following plot as you can see, in the x axis, only the even values appear. 
How to force matplotlib to show all values, that is 1 2 3 4 5 6 7 8 9 10?", "response":"Use plt.xticks(x). See the documentation. Note that using only the input values for ticks will usually be confusing to a viewer. It makes more sense to use evenly spaced ticks.", "best_answers_score":0.6299, "library_name":"matplotlib", "question_url":"https:\/\/stackoverflow.com\/questions\/28506907\/how-to-make-matplotlib-show-all-x-coordinates", "best_answers_votes":57, "question_length":386, "response_length":176 }, { "question":"Is there a function to make scatterplot matrices in matplotlib? Example of scatterplot matrix Is there such a function in matplotlib.pyplot?", "response":"For those who do not want to define their own functions, there is a great data analysis library in Python, called Pandas, where one can find the scatter_matrix() method: ``` import numpy as np import pandas as pd from pandas.plotting import scatter_matrix df = pd.DataFrame(np.random.randn(1000, 4), columns = ['a', 'b', 'c', 'd']) scatter_matrix(df, alpha = 0.2, figsize = (6, 6), diagonal = 'kde') ```", "best_answers_score":0.6282, "library_name":"matplotlib", "question_url":"https:\/\/stackoverflow.com\/questions\/7941207\/is-there-a-function-to-make-scatterplot-matrices-in-matplotlib", "best_answers_votes":125, "question_length":140, "response_length":365 }, { "question":"How to draw vertical lines on a given plot Given a plot of a signal in time representation, how can I draw lines marking the corresponding time index? Specifically, given a signal plot with a time index ranging from 0 to 2.6 (seconds), I want to draw vertical red lines indicating the corresponding time index for the list [0.22058956, 0.33088437, 2.20589566]. 
How can I do it?", "response":"The standard way to add vertical lines that will cover your entire plot window without you having to specify their actual height is plt.axvline ``` import matplotlib.pyplot as plt plt.axvline(x=0.22058956) plt.axvline(x=0.33088437) plt.axvline(x=2.20589566) ``` OR ``` xcoords = [0.22058956, 0.33088437, 2.20589566] for xc in xcoords: plt.axvline(x=xc) ``` You can use many of the keywords available for other plot commands (e.g. color, linestyle, linewidth ...). You can pass in keyword arguments ymin and ymax if you like in axes coordinates (e.g. ymin=0.25, ymax=0.75 will cover the middle half of the plot). There are corresponding functions for horizontal lines (axhline) and rectangles (axvspan).", "best_answers_score":0.625, "library_name":"matplotlib", "question_url":"https:\/\/stackoverflow.com\/questions\/24988448\/how-to-draw-vertical-lines-on-a-given-plot", "best_answers_votes":749, "question_length":377, "response_length":702 }, { "question":"Circular \/ polar histogram in python I have periodic data and the distribution for it is best visualised around a circle. Now the question is how can I do this visualisation using matplotlib? If not, can it be done easily in Python? Here I generate some sample data which I would like to visualise with a circular histogram: ``` import matplotlib.pyplot as plt import numpy as np # Generating random data a = np.random.uniform(low=0, high=2*np.pi, size=50) ``` There are a few examples in a question on SX for Mathematica. 
I would like to generate a plot which looks something like one of the following:", "response":"Building off of this example from the gallery, you can do ``` import numpy as np import matplotlib.pyplot as plt N = 80 bottom = 8 max_height = 4 theta = np.linspace(0.0, 2 * np.pi, N, endpoint=False) radii = max_height*np.random.rand(N) width = (2*np.pi) \/ N ax = plt.subplot(111, polar=True) bars = ax.bar(theta, radii, width=width, bottom=bottom) # Use custom colors and opacity for r, bar in zip(radii, bars): bar.set_facecolor(plt.cm.jet(r \/ 10.)) bar.set_alpha(0.8) plt.show() ``` Of course, there are many variations and tweaks, but this should get you started. In general, a browse through the matplotlib gallery is usually a good place to start. Here, I used the bottom keyword to leave the center empty, because I think I saw an earlier question by you with a graph more like what I have, so I assume that's what you want. To get the full wedges that you show above, just use bottom=0 (or leave it out since 0 is the default).", "best_answers_score":0.625, "library_name":"matplotlib", "question_url":"https:\/\/stackoverflow.com\/questions\/22562364\/circular-polar-histogram-in-python", "best_answers_votes":56, "question_length":603, "response_length":936 }, { "question":"How can I set the aspect ratio? I'm trying to make a square plot (using imshow), i.e. aspect ratio of 1:1, but I can't. None of these work: ``` import matplotlib.pyplot as plt ax = fig.add_subplot(111,aspect='equal') ax = fig.add_subplot(111,aspect=1.0) ax.set_aspect('equal') plt.axes().set_aspect('equal') ``` It seems like the calls are just being ignored (a problem I often seem to have with matplotlib).", "response":"Third time's the charm. My guess is that this is a bug and Zhenya's answer suggests it's fixed in the latest version. 
I have version 0.99.1.1 and I've created the following solution: ``` import matplotlib.pyplot as plt import numpy as np def forceAspect(ax,aspect=1): im = ax.get_images() extent = im[0].get_extent() ax.set_aspect(abs((extent[1]-extent[0])\/(extent[3]-extent[2]))\/aspect) data = np.random.rand(10,20) fig = plt.figure() ax = fig.add_subplot(111) ax.imshow(data) ax.set_xlabel('xlabel') ax.set_aspect(2) fig.savefig('equal.png') ax.set_aspect('auto') fig.savefig('auto.png') forceAspect(ax,aspect=1) fig.savefig('force.png') ``` This is 'force.png': Below are my unsuccessful, yet hopefully informative attempts. Second Answer: My 'original answer' below is overkill, as it does something similar to axes.set_aspect(). I think you want to use axes.set_aspect('auto'). I don't understand why this is the case, but it produces a square image plot for me, for example this script: ``` import matplotlib.pyplot as plt import numpy as np data = np.random.rand(10,20) fig = plt.figure() ax = fig.add_subplot(111) ax.imshow(data) ax.set_aspect('equal') fig.savefig('equal.png') ax.set_aspect('auto') fig.savefig('auto.png') ``` Produces an image plot with 'equal' aspect ratio: and one with 'auto' aspect ratio: The code provided below in the 'original answer' provides a starting off point for an explicitly controlled aspect ratio, but it seems to be ignored once an imshow is called. Original Answer: Here's an example of a routine that will adjust the subplot parameters so that you get the desired aspect ratio: ``` import matplotlib.pyplot as plt def adjustFigAspect(fig,aspect=1): ''' Adjust the subplot parameters so that the figure has the correct aspect ratio. 
''' xsize,ysize = fig.get_size_inches() minsize = min(xsize,ysize) xlim = .4*minsize\/xsize ylim = .4*minsize\/ysize if aspect < 1: xlim *= aspect else: ylim \/= aspect fig.subplots_adjust(left=.5-xlim, right=.5+xlim, bottom=.5-ylim, top=.5+ylim) fig = plt.figure() adjustFigAspect(fig,aspect=.5) ax = fig.add_subplot(111) ax.plot(range(10),range(10)) fig.savefig('axAspect.png') ``` This produces a figure like so: I can imagine if you're having multiple subplots within the figure, you would want to include the number of y and x subplots as keyword parameters (defaulting to 1 each) to the routine provided. Then using those numbers and the hspace and wspace keywords, you can make all the subplots have the correct aspect ratio.", "best_answers_score":0.6233, "library_name":"matplotlib", "question_url":"https:\/\/stackoverflow.com\/questions\/7965743\/how-can-i-set-the-aspect-ratio", "best_answers_votes":115, "question_length":408, "response_length":2505 }, { "question":"How to plot one single data point? 
I have the following code to plot a line and a point: ``` df = pd.DataFrame({'x': [1, 2, 3], 'y': [3, 4, 6]}) point = pd.DataFrame({'x': [2], 'y': [5]}) ax = df.plot(x='x', y='y', label='line') ax = point.plot(x='x', y='y', ax=ax, style='r-', label='point') ``` How do I get the single data point to show up?", "response":"To plot a single point you can do something like this: ``` plt.plot([x], [y], marker='o', markersize=3, color=\"red\") ```", "best_answers_score":0.6233, "library_name":"matplotlib", "question_url":"https:\/\/stackoverflow.com\/questions\/27779845\/how-to-plot-one-single-data-point", "best_answers_votes":116, "question_length":343, "response_length":120 }, { "question":"matplotlib.pyplot, preserve aspect ratio of the plot Assuming we have a polygon coordinates as polygon = [(x1, y1), (x2, y2), ...], the following code displays the polygon: ``` import matplotlib.pyplot as plt plt.fill(*zip(*polygon)) plt.show() ``` By default it is trying to adjust the aspect ratio so that the polygon (or whatever other diagram) fits inside the window, and automatically changing it so that it fits even after resizing. Which is great in many cases, except when you are trying to estimate visually if the image is distorted. How to fix the aspect ratio to be strictly 1:1? (Not sure if \"aspect ratio\" is the right term here, so in case it is not - I need both X and Y axes to have 1:1 scale, so that (0, 1) on both X and Y takes an exact same amount of screen space. 
And I need to keep it 1:1 no matter how I resize the window.)", "response":"Does it help to use: ``` plt.axis('equal') ```", "best_answers_score":0.6229, "library_name":"matplotlib", "question_url":"https:\/\/stackoverflow.com\/questions\/2934878\/matplotlib-pyplot-preserve-aspect-ratio-of-the-plot", "best_answers_votes":113, "question_length":847, "response_length":46 }, { "question":"Add alpha to an existing colormap I'd like to make an overlay of several hexbin plots, but with builtin colormaps only the last one is visible. I don't want to construct a colormap de novo. How one would add linear alpha to the colormap without knowing the inner structure of the colormap beforehand?", "response":"I'm not quite sure if this qualifies within \"not knowing the inner structure of the colormap\", but perhaps something like this would work to add a linear alpha to an existing colormap? ``` import numpy as np import matplotlib.pylab as pl from matplotlib.colors import ListedColormap # Random data data1 = np.random.random((4,4)) # Choose colormap cmap = pl.cm.RdBu # Get the colormap colors my_cmap = cmap(np.arange(cmap.N)) # Set alpha my_cmap[:,-1] = np.linspace(0, 1, cmap.N) # Create new colormap my_cmap = ListedColormap(my_cmap) pl.figure() pl.subplot(121) pl.pcolormesh(data1, cmap=pl.cm.RdBu) pl.colorbar() pl.subplot(122) pl.pcolormesh(data1, cmap=my_cmap) pl.colorbar() ```", "best_answers_score":0.6219, "library_name":"matplotlib", "question_url":"https:\/\/stackoverflow.com\/questions\/37327308\/add-alpha-to-an-existing-colormap", "best_answers_votes":68, "question_length":300, "response_length":683 }, { "question":"Plotting 3D Polygons I was unsuccessful browsing web for a solution for the following simple question: How to draw 3D polygon (say a filled rectangle or triangle) using vertices values? 
I have tried many ideas but all failed, see: ``` from mpl_toolkits.mplot3d import Axes3D from matplotlib.collections import PolyCollection import matplotlib.pyplot as plt fig = plt.figure() ax = Axes3D(fig) x = [0,1,1,0] y = [0,0,1,1] z = [0,1,0,1] verts = [zip(x, y,z)] ax.add_collection3d(PolyCollection(verts),zs=z) plt.show() ``` I appreciate in advance any idea\/comment. Updates based on the accepted answer: ``` import mpl_toolkits.mplot3d as a3 import matplotlib.colors as colors import pylab as pl import numpy as np ax = a3.Axes3D(pl.figure()) for i in range(10000): vtx = np.random.rand(3,3) tri = a3.art3d.Poly3DCollection([vtx]) tri.set_color(colors.rgb2hex(np.random.rand(3))) tri.set_edgecolor('k') ax.add_collection3d(tri) pl.show() ``` Here is the result:", "response":"I think you've almost got it. Is this what you want? ``` from mpl_toolkits.mplot3d import Axes3D from mpl_toolkits.mplot3d.art3d import Poly3DCollection import matplotlib.pyplot as plt fig = plt.figure() ax = Axes3D(fig, auto_add_to_figure=False) fig.add_axes(ax) x = [0,1,1,0] y = [0,0,1,1] z = [0,1,0,1] verts = [list(zip(x,y,z))] ax.add_collection3d(Poly3DCollection(verts)) plt.show() ``` You might also be interested in art3d.pathpatch_2d_to_3d.", "best_answers_score":0.6159, "library_name":"matplotlib", "question_url":"https:\/\/stackoverflow.com\/questions\/4622057\/plotting-3d-polygons", "best_answers_votes":76, "question_length":957, "response_length":450 }, { "question":"Parallel Coordinates plot in Matplotlib Two and three dimensional data can be viewed relatively straight-forwardly using traditional plot types. Even with four dimensional data, we can often find a way to display the data. Dimensions above four, though, become increasingly difficult to display. Fortunately, parallel coordinates plots provide a mechanism for viewing results with higher dimensions. 
Several plotting packages provide parallel coordinates plots, such as Matlab, R, VTK type 1 and VTK type 2, but I don't see how to create one using Matplotlib. Is there a built-in parallel coordinates plot in Matplotlib? I certainly don't see one in the gallery. If there is no built-in-type, is it possible to build a parallel coordinates plot using standard features of Matplotlib? Edit: Based on the answer provided by Zhenya below, I developed the following generalization that supports an arbitrary number of axes. Following the plot style of the example I posted in the original question above, each axis gets its own scale. I accomplished this by normalizing the data at each axis point and making the axes have a range of 0 to 1. I then go back and apply labels to each tick-mark that give the correct value at that intercept. The function works by accepting an iterable of data sets. Each data set is considered a set of points where each point lies on a different axis. The example in __main__ grabs random numbers for each axis in two sets of 30 lines. The lines are random within ranges that cause clustering of lines; a behavior I wanted to verify. This solution isn't as good as a built-in solution since you have odd mouse behavior and I'm faking the data ranges through labels, but until Matplotlib adds a built-in solution, it's acceptable. ``` #!\/usr\/bin\/python import matplotlib.pyplot as plt import matplotlib.ticker as ticker def parallel_coordinates(data_sets, style=None): dims = len(data_sets[0]) x = range(dims) fig, axes = plt.subplots(1, dims-1, sharey=False) if style is None: style = ['r-']*len(data_sets) # Calculate the limits on the data min_max_range = list() for m in zip(*data_sets): mn = min(m) mx = max(m) if mn == mx: mn -= 0.5 mx = mn + 1. 
r = float(mx - mn) min_max_range.append((mn, mx, r)) # Normalize the data sets norm_data_sets = list() for ds in data_sets: nds = [(value - min_max_range[dimension][0]) \/ min_max_range[dimension][2] for dimension,value in enumerate(ds)] norm_data_sets.append(nds) data_sets = norm_data_sets # Plot the datasets on all the subplots for i, ax in enumerate(axes): for dsi, d in enumerate(data_sets): ax.plot(x, d, style[dsi]) ax.set_xlim([x[i], x[i+1]]) # Set the x axis ticks for dimension, (axx,xx) in enumerate(zip(axes, x[:-1])): axx.xaxis.set_major_locator(ticker.FixedLocator([xx])) ticks = len(axx.get_yticklabels()) labels = list() step = min_max_range[dimension][2] \/ (ticks - 1) mn = min_max_range[dimension][0] for i in xrange(ticks): v = mn + i*step labels.append('%4.2f' % v) axx.set_yticklabels(labels) # Move the final axis' ticks to the right-hand side axx = plt.twinx(axes[-1]) dimension += 1 axx.xaxis.set_major_locator(ticker.FixedLocator([x[-2], x[-1]])) ticks = len(axx.get_yticklabels()) step = min_max_range[dimension][2] \/ (ticks - 1) mn = min_max_range[dimension][0] labels = ['%4.2f' % (mn + i*step) for i in xrange(ticks)] axx.set_yticklabels(labels) # Stack the subplots plt.subplots_adjust(wspace=0) return plt if __name__ == '__main__': import random base = [0, 0, 5, 5, 0] scale = [1.5, 2., 1.0, 2., 2.] data = [[base[x] + random.uniform(0., 1.)*scale[x] for x in xrange(5)] for y in xrange(30)] colors = ['r'] * 30 base = [3, 6, 0, 1, 3] scale = [1.5, 2., 2.5, 2., 2.] data.extend([[base[x] + random.uniform(0., 1.)*scale[x] for x in xrange(5)] for y in xrange(30)]) colors.extend(['b'] * 30) parallel_coordinates(data, style=colors).show() ``` Edit 2: Here is an example of what comes out of the above code when plotting Fisher's Iris data. 
It isn't quite as nice as the reference image from Wikipedia, but it is passable if all you have is Matplotlib and you need multi-dimensional plots.", "response":"pandas has a parallel coordinates wrapper: ``` import pandas import matplotlib.pyplot as plt from pandas.plotting import parallel_coordinates data = pandas.read_csv(r'C:\\Python27\\Lib\\site-packages\\pandas\\tests\\data\\iris.csv', sep=',') parallel_coordinates(data, 'Name') plt.show() ``` Source code, how they made it: plotting.py#L494", "best_answers_score":0.6069, "library_name":"matplotlib", "question_url":"https:\/\/stackoverflow.com\/questions\/8230638\/parallel-coordinates-plot-in-matplotlib", "best_answers_votes":62, "question_length":4098, "response_length":332 }, { "question":"How do I get indices of N maximum values in a NumPy array? NumPy proposes a way to get the index of the maximum value of an array via np.argmax. I would like a similar thing, but returning the indexes of the N maximum values. For instance, if I have an array, [1, 3, 2, 4, 5], then nargmax(array, n=3) would return the indices [4, 3, 1] which correspond to the elements [5, 4, 3].", "response":"Newer NumPy versions (1.8 and up) have a function called argpartition for this. To get the indices of the four largest elements, do ``` >>> a = np.array([9, 4, 4, 3, 3, 9, 0, 4, 6, 0]) >>> a array([9, 4, 4, 3, 3, 9, 0, 4, 6, 0]) >>> ind = np.argpartition(a, -4)[-4:] >>> ind array([1, 5, 8, 0]) >>> top4 = a[ind] >>> top4 array([4, 9, 6, 9]) ``` Unlike argsort, this function runs in linear time in the worst case, but the returned indices are not sorted, as can be seen from the result of evaluating a[ind]. 
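The same trick works from the other end as well — a small sketch using the array from the answer above (a positive kth partitions around the small values; the returned indices again come back in no guaranteed order):

```python
import numpy as np

a = np.array([9, 4, 4, 3, 3, 9, 0, 4, 6, 0])

# A positive kth partitions from the small end: the first four
# entries are the indices of the four smallest values, in
# unspecified order.
ind_small = np.argpartition(a, 4)[:4]
print(a[ind_small])  # the four smallest values of a (0, 0, 3 and 3), order not guaranteed
```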
If you need that too, sort them afterwards: ``` >>> ind[np.argsort(a[ind])] array([1, 8, 5, 0]) ``` To get the top-k elements in sorted order in this way takes O(n + k log k) time.", "best_answers_score":0.8, "library_name":"numpy", "question_url":"https:\/\/stackoverflow.com\/questions\/6910641\/how-do-i-get-indices-of-n-maximum-values-in-a-numpy-array", "best_answers_votes":1025, "question_length":380, "response_length":689 }, { "question":"How can the Euclidean distance be calculated with NumPy? I have two points in 3D space: ```none a = (ax, ay, az) b = (bx, by, bz) ``` I want to calculate the distance between them: ```none dist = sqrt((ax-bx)^2 + (ay-by)^2 + (az-bz)^2) ``` How do I do this with NumPy? I have: ```py import numpy a = numpy.array((ax, ay, az)) b = numpy.array((bx, by, bz)) ```", "response":"Use numpy.linalg.norm: ``` dist = numpy.linalg.norm(a-b) ``` This works because the Euclidean distance is the l2 norm, and the default value of the ord parameter in numpy.linalg.norm is 2. For more theory, see Introduction to Data Mining:", "best_answers_score":0.8, "library_name":"numpy", "question_url":"https:\/\/stackoverflow.com\/questions\/1401712\/how-can-the-euclidean-distance-be-calculated-with-numpy", "best_answers_votes":1352, "question_length":359, "response_length":238 }, { "question":"How do I count the occurrence of a certain item in an ndarray? How do I count the number of 0s and 1s in the following array? 
``` y = np.array([0, 0, 0, 1, 0, 1, 1, 0, 0, 0, 0, 1]) ``` y.count(0) gives: numpy.ndarray object has no attribute count", "response":"Using numpy.unique: ``` import numpy a = numpy.array([0, 3, 0, 1, 0, 1, 2, 1, 0, 0, 0, 0, 1, 3, 4]) unique, counts = numpy.unique(a, return_counts=True) >>> dict(zip(unique, counts)) {0: 7, 1: 4, 2: 1, 3: 2, 4: 1} ``` Non-numpy method using collections.Counter; ``` import collections, numpy a = numpy.array([0, 3, 0, 1, 0, 1, 2, 1, 0, 0, 0, 0, 1, 3, 4]) counter = collections.Counter(a) >>> counter Counter({0: 7, 1: 4, 3: 2, 2: 1, 4: 1}) ```", "best_answers_score":0.8, "library_name":"numpy", "question_url":"https:\/\/stackoverflow.com\/questions\/28663856\/how-do-i-count-the-occurrence-of-a-certain-item-in-an-ndarray", "best_answers_votes":1113, "question_length":246, "response_length":443 }, { "question":"Pandas read_csv: low_memory and dtype options ``` df = pd.read_csv('somefile.csv') ``` ...gives an error: ...\/site-packages\/pandas\/io\/parsers.py:1130: DtypeWarning: Columns (4,5,7,16) have mixed types. Specify dtype option on import or set low_memory=False. Why is the dtype option related to low_memory, and why might low_memory=False help?", "response":"The deprecated low_memory option The low_memory option is not properly deprecated, but it should be, since it does not actually do anything differently[source] The reason you get this low_memory warning is because guessing dtypes for each column is very memory demanding. Pandas tries to determine what dtype to set by analyzing the data in each column. Dtype Guessing (very bad) Pandas can only determine what dtype a column should have once the whole file is read. This means nothing can really be parsed before the whole file is read unless you risk having to change the dtype of that column when you read the last value. Consider the example of one file which has a column called user_id. It contains 10 million rows where the user_id is always numbers. 
Since pandas cannot know it is only numbers, it will probably keep it as the original strings until it has read the whole file. Specifying dtypes (should always be done) adding ``` dtype={'user_id': int} ``` to the pd.read_csv() call will make pandas know, when it starts reading the file, that this column contains only integers. Also worth noting is that if the last line in the file would have \"foobar\" written in the user_id column, the loading would crash if the above dtype was specified. Example of broken data that breaks when dtypes are defined ``` import pandas as pd try: from StringIO import StringIO except ImportError: from io import StringIO csvdata = \"\"\"user_id,username 1,Alice 3,Bob foobar,Caesar\"\"\" sio = StringIO(csvdata) pd.read_csv(sio, dtype={\"user_id\": int, \"username\": \"string\"}) ValueError: invalid literal for long() with base 10: 'foobar' ``` dtypes are typically a numpy thing, read more about them here: http:\/\/docs.scipy.org\/doc\/numpy\/reference\/generated\/numpy.dtype.html What dtypes exist? We have access to numpy dtypes: float, int, bool, timedelta64[ns] and datetime64[ns]. Note that the numpy date\/time dtypes are not time zone aware. Pandas extends this set of dtypes with its own: 'datetime64[ns, <tz>]' Which is a time zone aware timestamp. 'category' which is essentially an enum (strings represented by integer keys to save space) 'period[<freq>]' Not to be confused with a timedelta, these objects are actually anchored to specific time periods 'Sparse', 'Sparse[int]', 'Sparse[float]' is for sparse data or 'Data that has a lot of holes in it' Instead of saving the NaN or None in the dataframe it omits the objects, saving space. 'Interval' is a topic of its own but its main use is for indexing. See more here 'Int8', 'Int16', 'Int32', 'Int64', 'UInt8', 'UInt16', 'UInt32', 'UInt64' are all pandas specific integers that are nullable, unlike the numpy variant.
'string' is a specific dtype for working with string data and gives access to the .str attribute on the series. 'boolean' is like the numpy 'bool' but it also supports missing data. Read the complete reference here: Pandas dtype reference Gotchas, caveats, notes Setting dtype=object will silence the above warning, but will not make it more memory efficient, only process efficient if anything. Setting dtype=unicode will not do anything, since to numpy, a unicode is represented as object. Usage of converters @sparrow correctly points out the usage of converters to avoid pandas blowing up when encountering 'foobar' in a column specified as int. I would like to add that converters are really heavy and inefficient to use in pandas and should be used as a last resort. This is because the read_csv process is a single process. CSV files can be processed line by line and thus can be processed by multiple converters in parallel more efficiently by simply cutting the file into segments and running multiple processes, something that pandas does not support. But this is a different story.", "best_answers_score":0.8, "library_name":"numpy", "question_url":"https:\/\/stackoverflow.com\/questions\/24251219\/pandas-read-csv-low-memory-and-dtype-options", "best_answers_votes":749, "question_length":341, "response_length":3725 }, { "question":"What are the advantages of NumPy over regular Python lists? What are the advantages of NumPy over regular Python lists? I have approximately 100 financial markets series, and I am going to create a cube array of 100x100x100 = 1 million cells. I will be regressing (3-variable) each x with each y and z, to fill the array with standard errors. I have heard that for \"large matrices\" I should use NumPy as opposed to Python lists, for performance and scalability reasons. Thing is, I know Python lists and they seem to work for me. What will the benefits be if I move to NumPy? 
What if I had 1000 series (that is, 1 billion floating point cells in the cube)?", "response":"NumPy's arrays are more compact than Python lists -- a list of lists as you describe, in Python, would take at least 20 MB or so, while a NumPy 3D array with single-precision floats in the cells would fit in 4 MB. Access in reading and writing items is also faster with NumPy. Maybe you don't care that much for just a million cells, but you definitely would for a billion cells -- neither approach would fit in a 32-bit architecture, but with 64-bit builds NumPy would get away with 4 GB or so, Python alone would need at least about 12 GB (lots of pointers which double in size) -- a much costlier piece of hardware! The difference is mostly due to \"indirectness\" -- a Python list is an array of pointers to Python objects, at least 4 bytes per pointer plus 16 bytes for even the smallest Python object (4 for type pointer, 4 for reference count, 4 for value -- and the memory allocator rounds up to 16). A NumPy array is an array of uniform values -- single-precision numbers take 4 bytes each, double-precision ones, 8 bytes. Less flexible, but you pay substantially for the flexibility of standard Python lists!", "best_answers_score":0.8, "library_name":"numpy", "question_url":"https:\/\/stackoverflow.com\/questions\/993984\/what-are-the-advantages-of-numpy-over-regular-python-lists", "best_answers_votes":834, "question_length":656, "response_length":1118 }, { "question":"What does numpy.random.seed(0) do? What does np.random.seed do? ``` np.random.seed(0) ```", "response":"np.random.seed(0) makes the random numbers predictable ``` >>> numpy.random.seed(0) ; numpy.random.rand(4) array([ 0.55, 0.72, 0.6 , 0.54]) >>> numpy.random.seed(0) ; numpy.random.rand(4) array([ 0.55, 0.72, 0.6 , 0.54]) ``` With the seed reset (every time), the same set of numbers will appear every time. 
If the random seed is not reset, different numbers appear with every invocation: ``` >>> numpy.random.rand(4) array([ 0.42, 0.65, 0.44, 0.89]) >>> numpy.random.rand(4) array([ 0.96, 0.38, 0.79, 0.53]) ``` (pseudo-)random numbers work by starting with a number (the seed), multiplying it by a large number, adding an offset, then taking modulo of that sum. The resulting number is then used as the seed to generate the next \"random\" number. When you set the seed (every time), it does the same thing every time, giving you the same numbers. If you want seemingly random numbers, do not set the seed. If you have code that uses random numbers that you want to debug, however, it can be very helpful to set the seed before each run so that the code does the same thing every time you run it. To get the most random numbers for each run, call numpy.random.seed(). This will cause numpy to set the seed to a random number obtained from \/dev\/urandom or its Windows analog or, if neither of those is available, it will use the clock. For more information on using seeds to generate pseudo-random numbers, see wikipedia.", "best_answers_score":0.8, "library_name":"numpy", "question_url":"https:\/\/stackoverflow.com\/questions\/21494489\/what-does-numpy-random-seed0-do", "best_answers_votes":885, "question_length":89, "response_length":1419 }, { "question":"How do I create a new column where the values are selected based on an existing column? How do I add a color column to the following dataframe so that color='green' if Set == 'Z', and color='red' otherwise? 
``` Type Set 1 A Z 2 B Z 3 B X 4 C Y ```", "response":"If you only have two choices to select from then use np.where: ``` df['color'] = np.where(df['Set']=='Z', 'green', 'red') ``` For example, ``` import pandas as pd import numpy as np df = pd.DataFrame({'Type':list('ABBC'), 'Set':list('ZZXY')}) df['color'] = np.where(df['Set']=='Z', 'green', 'red') print(df) ``` yields ``` Set Type color 0 Z A green 1 Z B green 2 X B red 3 Y C red ``` If you have more than two conditions then use np.select. For example, if you want color to be yellow when (df['Set'] == 'Z') & (df['Type'] == 'A') otherwise blue when (df['Set'] == 'Z') & (df['Type'] == 'B') otherwise purple when (df['Type'] == 'B') otherwise black, then use ``` df = pd.DataFrame({'Type':list('ABBC'), 'Set':list('ZZXY')}) conditions = [ (df['Set'] == 'Z') & (df['Type'] == 'A'), (df['Set'] == 'Z') & (df['Type'] == 'B'), (df['Type'] == 'B')] choices = ['yellow', 'blue', 'purple'] df['color'] = np.select(conditions, choices, default='black') print(df) ``` which yields ``` Set Type color 0 Z A yellow 1 Z B blue 2 X B purple 3 Y C black ```", "best_answers_score":0.8, "library_name":"numpy", "question_url":"https:\/\/stackoverflow.com\/questions\/19913659\/how-do-i-create-a-new-column-where-the-values-are-selected-based-on-an-existing", "best_answers_votes":1060, "question_length":247, "response_length":1046 }, { "question":"What is the purpose of meshgrid in NumPy? What is the purpose of np.meshgrid? I know it creates some kind of grid of coordinates for plotting, but I can't see the direct benefit of it. The official documentation gives the following example, but its output doesn't make sense to me: ``` x = np.arange(-5, 5, 1) y = np.arange(-5, 5, 1) xx, yy = np.meshgrid(x, y, sparse=True) z = np.sin(xx**2 + yy**2) \/ (xx**2 + yy**2) h = plt.contourf(x,y,z) ```", "response":"The purpose of meshgrid is to create a rectangular grid out of an array of x values and an array of y values. 
So, for example, if we want to create a grid where we have a point at each integer value between 0 and 4 in both the x and y directions. To create a rectangular grid, we need every combination of the x and y points. This is going to be 25 points, right? So if we wanted to create an x and y array for all of these points, we could do the following. ``` x[0,0] = 0 y[0,0] = 0 x[0,1] = 1 y[0,1] = 0 x[0,2] = 2 y[0,2] = 0 x[0,3] = 3 y[0,3] = 0 x[0,4] = 4 y[0,4] = 0 x[1,0] = 0 y[1,0] = 1 x[1,1] = 1 y[1,1] = 1 ... x[4,3] = 3 y[4,3] = 4 x[4,4] = 4 y[4,4] = 4 ``` This would result in the following x and y matrices, such that the pairing of the corresponding element in each matrix gives the x and y coordinates of a point in the grid. ``` x = 0 1 2 3 4 y = 0 0 0 0 0 0 1 2 3 4 1 1 1 1 1 0 1 2 3 4 2 2 2 2 2 0 1 2 3 4 3 3 3 3 3 0 1 2 3 4 4 4 4 4 4 ``` We can then plot these to verify that they are a grid: ``` plt.plot(x,y, marker='.', color='k', linestyle='none') ``` Obviously, this gets very tedious especially for large ranges of x and y. Instead, meshgrid can actually generate this for us: all we have to specify are the unique x and y values. ``` xvalues = np.array([0, 1, 2, 3, 4]); yvalues = np.array([0, 1, 2, 3, 4]); ``` Now, when we call meshgrid, we get the previous output automatically. ``` xx, yy = np.meshgrid(xvalues, yvalues) plt.plot(xx, yy, marker='.', color='k', linestyle='none') ``` Creation of these rectangular grids is useful for a number of tasks. In the example that you have provided in your post, it is simply a way to sample a function (sin(x**2 + y**2) \/ (x**2 + y**2)) over a range of values for x and y. Because this function has been sampled on a rectangular grid, the function can now be visualized as an \"image\". Additionally, the result can now be passed to functions which expect data on rectangular grid (i.e. 
contourf)", "best_answers_score":0.8, "library_name":"numpy", "question_url":"https:\/\/stackoverflow.com\/questions\/36013063\/what-is-the-purpose-of-meshgrid-in-numpy", "best_answers_votes":639, "question_length":445, "response_length":1967 }, { "question":"How do I convert a PIL Image into a NumPy array? How do I convert a PIL Image back and forth to a NumPy array so that I can do faster pixel-wise transformations than PIL's PixelAccess allows? I can convert it to a NumPy array via: ``` pic = Image.open(\"foo.jpg\") pix = numpy.array(pic.getdata()).reshape(pic.size[0], pic.size[1], 3) ``` But how do I load it back into the PIL Image after I've modified the array? pic.putdata() isn't working well.", "response":"You're not saying how exactly putdata() is not behaving. I'm assuming you're doing ``` >>> pic.putdata(a) Traceback (most recent call last): File \"...blablabla...\/PIL\/Image.py\", line 1185, in putdata self.im.putdata(data, scale, offset) SystemError: new style getargs format but argument is not a tuple ``` This is because putdata expects a sequence of tuples and you're giving it a numpy array. This ``` >>> data = list(tuple(pixel) for pixel in pix) >>> pic.putdata(data) ``` will work but it is very slow. As of PIL 1.1.6, the \"proper\" way to convert between images and numpy arrays is simply ``` >>> pix = numpy.array(pic) ``` although the resulting array is in a different format than yours (3-d array or rows\/columns\/rgb in this case). 
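A minimal round-trip sketch of this conversion (it uses a small synthetic image built with Image.new as a stand-in for the hypothetical foo.jpg, since no real file is assumed here):

```python
import numpy as np
from PIL import Image

# a small synthetic 4x2 red RGB image stands in for foo.jpg
pic = Image.new('RGB', (4, 2), color=(255, 0, 0))

# PIL -> NumPy: note the (rows, columns, channels) layout,
# i.e. height first, even though PIL's size is (width, height)
pix = np.array(pic)
print(pix.shape)   # (2, 4, 3)

# pixel-wise edit on the array, then NumPy -> PIL
pix[:, :, 1] = 128
pic2 = Image.fromarray(pix)
print(pic2.size)   # (4, 2)
```

Note how the axis order flips between pic.size and pix.shape; that is the format difference mentioned above.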
Then, after you make your changes to the array, you should be able to either call pic.putdata(pix) or create a new image with Image.fromarray(pix).", "best_answers_score":0.8, "library_name":"numpy", "question_url":"https:\/\/stackoverflow.com\/questions\/384759\/how-do-i-convert-a-pil-image-into-a-numpy-array", "best_answers_votes":488, "question_length":446, "response_length":887 }, { "question":"Pretty-print a NumPy array without scientific notation and with given precision How do I print formatted NumPy arrays in a way similar to this: ``` x = 1.23456 print('%.3f' % x) ``` If I want to print the numpy.ndarray of floats, it prints several decimals, often in 'scientific' format, which is rather hard to read even for low-dimensional arrays. However, numpy.ndarray apparently has to be printed as a string, i.e., with %s. Is there a solution for this?", "response":"Use numpy.set_printoptions to set the precision of the output: ``` import numpy as np x = np.random.random(10) print(x) # [ 0.07837821 0.48002108 0.41274116 0.82993414 0.77610352 0.1023732 # 0.51303098 0.4617183 0.33487207 0.71162095] np.set_printoptions(precision=3) print(x) # [ 0.078 0.48 0.413 0.83 0.776 0.102 0.513 0.462 0.335 0.712] ``` And suppress=True suppresses the use of scientific notation for small numbers: ``` y = np.array([1.5e-10, 1.5, 1500]) print(y) # [ 1.500e-10 1.500e+00 1.500e+03] np.set_printoptions(suppress=True) print(y) # [ 0. 1.5 1500. ] ``` To apply print options locally, using NumPy 1.15.0 or later, you could use the numpy.printoptions context manager.
For example, inside the with-suite precision=3 and suppress=True are set: ``` x = np.random.random(10) with np.printoptions(precision=3, suppress=True): print(x) # [ 0.073 0.461 0.689 0.754 0.624 0.901 0.049 0.582 0.557 0.348] ``` But outside the with-suite the print options are back to default settings: ``` print(x) # [ 0.07334334 0.46132615 0.68935231 0.75379645 0.62424021 0.90115836 # 0.04879837 0.58207504 0.55694118 0.34768638] ``` If you are using an earlier version of NumPy, you can create the context manager yourself. For example, ``` import numpy as np import contextlib @contextlib.contextmanager def printoptions(*args, **kwargs): original = np.get_printoptions() np.set_printoptions(*args, **kwargs) try: yield finally: np.set_printoptions(**original) x = np.random.random(10) with printoptions(precision=3, suppress=True): print(x) # [ 0.073 0.461 0.689 0.754 0.624 0.901 0.049 0.582 0.557 0.348] ``` To prevent zeros from being stripped from the end of floats: np.set_printoptions now has a formatter parameter which allows you to specify a format function for each type. ``` np.set_printoptions(formatter={'float': '{: 0.3f}'.format}) print(x) ``` which prints ``` [ 0.078 0.480 0.413 0.830 0.776 0.102 0.513 0.462 0.335 0.712] ``` instead of ``` [ 0.078 0.48 0.413 0.83 0.776 0.102 0.513 0.462 0.335 0.712] ```", "best_answers_score":0.8, "library_name":"numpy", "question_url":"https:\/\/stackoverflow.com\/questions\/2891790\/pretty-print-a-numpy-array-without-scientific-notation-and-with-given-precision", "best_answers_votes":796, "question_length":459, "response_length":2014 }, { "question":"How do I add an extra column to a NumPy array? Given the following 2D array: ``` a = np.array([ [1, 2, 3], [2, 3, 4], ]) ``` I want to add a column of zeros along the second axis to get: ``` b = np.array([ [1, 2, 3, 0], [2, 3, 4, 0], ]) ```", "response":"np.r_[...] (docs) and np.c_[...] (docs) are useful alternatives to np.vstack and np.hstack. 
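Applied to the exact arrays in the question, a minimal np.c_ sketch for appending the zero column might look like this:

```python
import numpy as np

a = np.array([
    [1, 2, 3],
    [2, 3, 4],
])

# np.c_ treats a 1-D array as a column, so this appends
# one zero per row along the second axis
b = np.c_[a, np.zeros(len(a), dtype=a.dtype)]
print(b)
# [[1 2 3 0]
#  [2 3 4 0]]
```

Passing dtype=a.dtype keeps the result integer-typed; np.zeros defaults to float, which would otherwise upcast the whole array.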
Note that they use square brackets [] instead of parentheses (). Some examples: ``` >>> import numpy as np >>> N = 3 >>> A = np.eye(N) >>> np.c_[ A, np.ones(N) ] # add a column array([[ 1., 0., 0., 1.], [ 0., 1., 0., 1.], [ 0., 0., 1., 1.]]) >>> np.c_[ np.ones(N), A, np.ones(N) ] # or two array([[ 1., 1., 0., 0., 1.], [ 1., 0., 1., 0., 1.], [ 1., 0., 0., 1., 1.]]) >>> np.r_[ A, [A[1]] ] # add a row array([[ 1., 0., 0.], [ 0., 1., 0.], [ 0., 0., 1.], [ 0., 1., 0.]]) >>> # not np.r_[ A, A[1] ] >>> np.r_[ A[0], 1, 2, 3, A[1] ] # mix vecs and scalars array([ 1., 0., 0., 1., 2., 3., 0., 1., 0.]) >>> np.r_[ A[0], [1, 2, 3], A[1] ] # lists array([ 1., 0., 0., 1., 2., 3., 0., 1., 0.]) >>> np.r_[ A[0], (1, 2, 3), A[1] ] # tuples array([ 1., 0., 0., 1., 2., 3., 0., 1., 0.]) >>> np.r_[ A[0], 1:4, A[1] ] # same, 1:4 == arange(1,4) == 1,2,3 array([ 1., 0., 0., 1., 2., 3., 0., 1., 0.]) ``` The reason for square brackets [] instead of round () is that Python converts 1:4 to slice objects in square brackets.", "best_answers_score":0.8, "library_name":"numpy", "question_url":"https:\/\/stackoverflow.com\/questions\/8486294\/how-do-i-add-an-extra-column-to-a-numpy-array", "best_answers_votes":510, "question_length":240, "response_length":1077 }, { "question":"Comparing two NumPy arrays for equality, element-wise What is the simplest way to compare two NumPy arrays for equality (where equality is defined as: A = B iff for all indices i: A[i] == B[i])? Simply using == gives me a boolean array: ``` >>> numpy.array([1,1,1]) == numpy.array([1,1,1]) array([ True, True, True], dtype=bool) ``` Do I have to and the elements of this array to determine if the arrays are equal, or is there a simpler way to compare?", "response":"``` (A==B).all() ``` tests whether all values of the array (A==B) are True.
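A tiny runnable sketch of the (A==B).all() idiom:

```python
import numpy as np

A = np.array([1, 1, 1])
B = np.array([1, 1, 1])
C = np.array([1, 2, 1])

# == compares element-wise; .all() reduces to a single Boolean
print((A == B).all())   # True
print((A == C).all())   # False
```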
Note: you may also want to test that A and B have the same shape, e.g. A.shape == B.shape Special cases and alternatives (from dbaupp's answer and yoavram's comment) It should be noted that: this solution can have a strange behavior in a particular case: if either A or B is empty and the other one contains a single element, then it returns True. For some reason, the comparison A==B returns an empty array, for which the all operator returns True. Another risk is that if A and B don't have the same shape and aren't broadcastable, this approach will raise an error. In conclusion, if you are in doubt about the shapes of A and B, or simply want to be safe, use one of the specialized functions: ``` np.array_equal(A,B) # test if same shape, same elements values np.array_equiv(A,B) # test if broadcastable shape, same elements values np.allclose(A,B,...) # test if same shape, elements have close enough values ```", "best_answers_score":0.8, "library_name":"numpy", "question_url":"https:\/\/stackoverflow.com\/questions\/10580676\/comparing-two-numpy-arrays-for-equality-element-wise", "best_answers_votes":681, "question_length":452, "response_length":959 }, { "question":"What is the difference between flatten and ravel functions in numpy? ``` import numpy as np y = np.array(((1,2,3),(4,5,6),(7,8,9))) OUTPUT: print(y.flatten()) [1 2 3 4 5 6 7 8 9] print(y.ravel()) [1 2 3 4 5 6 7 8 9] ``` Both functions return the same list, so what is the need for two different functions performing the same job?", "response":"The current API is that: flatten always returns a copy. ravel returns a contiguous view of the original array whenever possible. This isn't visible in the printed output, but if you modify the array returned by ravel, it may modify the entries in the original array. If you modify the entries in an array returned from flatten this will never happen. ravel will often be faster since no memory is copied, but you have to be more careful about modifying the array it returns.
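One way to see the view-versus-copy difference is to mutate each result and then inspect the original array; a minimal sketch:

```python
import numpy as np

y = np.array([[1, 2, 3],
              [4, 5, 6]])

r = y.ravel()     # a view here, since y is C-contiguous
r[0] = 99
print(y[0, 0])    # 99 -- writing through the view changed y

z = np.array([[1, 2, 3],
              [4, 5, 6]])
f = z.flatten()   # always a fresh copy
f[0] = 99
print(z[0, 0])    # 1 -- z is untouched
```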
reshape((-1,)) gets a view whenever the strides of the array allow it even if that means you don't always get a contiguous array.", "best_answers_score":0.8, "library_name":"numpy", "question_url":"https:\/\/stackoverflow.com\/questions\/28930465\/what-is-the-difference-between-flatten-and-ravel-functions-in-numpy", "best_answers_votes":567, "question_length":325, "response_length":604 }, { "question":"What are the differences between numpy arrays and matrices? Which one should I use? [closed] What are the advantages and disadvantages of each? From what I've seen, either one can work as a replacement for the other if need be, so should I bother using both or should I stick to just one of them? Will the style of the program influence my choice? I am doing some machine learning using numpy, so there are indeed lots of matrices, but also lots of vectors (arrays).", "response":"Numpy matrices are strictly 2-dimensional, while numpy arrays (ndarrays) are N-dimensional. Matrix objects are a subclass of ndarray, so they inherit all the attributes and methods of ndarrays. The main advantage of numpy matrices is that they provide a convenient notation for matrix multiplication: if a and b are matrices, then a*b is their matrix product.
``` import numpy as np a = np.mat('4 3; 2 1') b = np.mat('1 2; 3 4') print(a) # [[4 3] # [2 1]] print(b) # [[1 2] # [3 4]] print(a*b) # [[13 20] # [ 5 8]] ``` On the other hand, as of Python 3.5, NumPy supports infix matrix multiplication using the @ operator, so you can achieve the same convenience of matrix multiplication with ndarrays in Python >= 3.5. ``` import numpy as np a = np.array([[4, 3], [2, 1]]) b = np.array([[1, 2], [3, 4]]) print(a@b) # [[13 20] # [ 5 8]] ``` Both matrix objects and ndarrays have .T to return the transpose, but matrix objects also have .H for the conjugate transpose, and .I for the inverse. In contrast, numpy arrays consistently abide by the rule that operations are applied element-wise (except for the new @ operator). Thus, if a and b are numpy arrays, then a*b is the array formed by multiplying the components element-wise: ``` c = np.array([[4, 3], [2, 1]]) d = np.array([[1, 2], [3, 4]]) print(c*d) # [[4 6] # [6 4]] ``` To obtain the result of matrix multiplication, you use np.dot (or @ in Python >= 3.5, as shown above): ``` print(np.dot(c,d)) # [[13 20] # [ 5 8]] ``` The ** operator also behaves differently: ``` print(a**2) # [[22 15] # [10 7]] print(c**2) # [[16 9] # [ 4 1]] ``` Since a is a matrix, a**2 returns the matrix product a*a. Since c is an ndarray, c**2 returns an ndarray with each component squared element-wise. There are other technical differences between matrix objects and ndarrays (having to do with np.ravel, item selection and sequence behavior). The main advantage of numpy arrays is that they are more general than 2-dimensional matrices. What happens when you want a 3-dimensional array? Then you have to use an ndarray, not a matrix object. Thus, learning to use matrix objects is more work -- you have to learn matrix object operations, and ndarray operations. 
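As one illustration of ndarrays going beyond 2-D matrix objects, @ also applies matrix multiplication across a stack of matrices, which np.matrix cannot represent at all; a small numpy-only sketch:

```python
import numpy as np

# a stack of two 2x2 matrices -- a 3-D shape np.matrix cannot hold
stack = np.array([[[1, 0],
                   [0, 1]],
                  [[2, 0],
                   [0, 2]]])
v = np.array([[1],
              [2]])

# @ multiplies each matrix in the stack by v (the stack
# dimension broadcasts)
out = stack @ v
print(out.shape)       # (2, 2, 1)
print(out[1].ravel())  # [2 4]
```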
Writing a program that mixes both matrices and arrays makes your life difficult because you have to keep track of what type of object your variables are, lest multiplication return something you don't expect. In contrast, if you stick solely with ndarrays, then you can do everything matrix objects can do, and more, except with slightly different functions\/notation. If you are willing to give up the visual appeal of NumPy matrix product notation (which can be achieved almost as elegantly with ndarrays in Python >= 3.5), then I think NumPy arrays are definitely the way to go. PS. Of course, you really don't have to choose one at the expense of the other, since np.asmatrix and np.asarray allow you to convert one to the other (as long as the array is 2-dimensional). There is a synopsis of the differences between NumPy arrays vs NumPy matrixes here.", "best_answers_score":0.8, "library_name":"numpy", "question_url":"https:\/\/stackoverflow.com\/questions\/4151128\/what-are-the-differences-between-numpy-arrays-and-matrices-which-one-should-i-u", "best_answers_votes":487, "question_length":1077, "response_length":3058 }, { "question":"Convert NumPy array to Python list How do I convert a NumPy array into a Python List?", "response":"Use tolist(): ``` >>> import numpy as np >>> np.array([[1,2,3],[4,5,6]]).tolist() [[1, 2, 3], [4, 5, 6]] ``` Note that this converts the values from whatever numpy type they may have (e.g. np.int32 or np.float32) to the \"nearest compatible Python type\" (in a list). If you want to preserve the numpy data types, you could call list() on your array instead, and you'll end up with a list of numpy scalars. 
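To see the difference between the two conversions, compare the element types; a minimal sketch:

```python
import numpy as np

a = np.array([1, 2], dtype=np.int32)

# tolist() converts each element to the nearest Python type;
# list() keeps the NumPy scalar types
print(type(a.tolist()[0]))  # <class 'int'>
print(type(list(a)[0]))     # <class 'numpy.int32'>
```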
(Thanks to Mr_and_Mrs_D for pointing that out in a comment.)", "best_answers_score":0.8, "library_name":"numpy", "question_url":"https:\/\/stackoverflow.com\/questions\/1966207\/convert-numpy-array-to-python-list", "best_answers_votes":617, "question_length":85, "response_length":465 }, { "question":"Difference between numpy.array shape (R, 1) and (R,) In numpy, some of the operations return in shape (R, 1) but some return (R,). This will make matrix multiplication more tedious since explicit reshape is required. For example, given a matrix M, if we want to do numpy.dot(M[:,0], numpy.ones((1, R))) where R is the number of rows (of course, the same issue also occurs column-wise). We will get matrices are not aligned error since M[:,0] is in shape (R,) but numpy.ones((1, R)) is in shape (1, R). So my questions are: What's the difference between shape (R, 1) and (R,). I know literally it's list of numbers and list of lists where all list contains only a number. Just wondering why not design numpy so that it favors shape (R, 1) instead of (R,) for easier matrix multiplication. Are there better ways for the above example? Without explicitly reshape like this: numpy.dot(M[:,0].reshape(R, 1), numpy.ones((1, R)))", "response":"1. The meaning of shapes in NumPy You write, \"I know literally it's list of numbers and list of lists where all list contains only a number\" but that's a bit of an unhelpful way to think about it. The best way to think about NumPy arrays is that they consist of two parts, a data buffer which is just a block of raw elements, and a view which describes how to interpret the data buffer. 
For example, if we create an array of 12 integers: ``` >>> a = numpy.arange(12) >>> a array([ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11]) ``` Then a consists of a data buffer, arranged something like this: ``` \u250c\u2500\u2500\u2500\u2500\u252c\u2500\u2500\u2500\u2500\u252c\u2500\u2500\u2500\u2500\u252c\u2500\u2500\u2500\u2500\u252c\u2500\u2500\u2500\u2500\u252c\u2500\u2500\u2500\u2500\u252c\u2500\u2500\u2500\u2500\u252c\u2500\u2500\u2500\u2500\u252c\u2500\u2500\u2500\u2500\u252c\u2500\u2500\u2500\u2500\u252c\u2500\u2500\u2500\u2500\u252c\u2500\u2500\u2500\u2500\u2510 \u2502 0 \u2502 1 \u2502 2 \u2502 3 \u2502 4 \u2502 5 \u2502 6 \u2502 7 \u2502 8 \u2502 9 \u2502 10 \u2502 11 \u2502 \u2514\u2500\u2500\u2500\u2500\u2534\u2500\u2500\u2500\u2500\u2534\u2500\u2500\u2500\u2500\u2534\u2500\u2500\u2500\u2500\u2534\u2500\u2500\u2500\u2500\u2534\u2500\u2500\u2500\u2500\u2534\u2500\u2500\u2500\u2500\u2534\u2500\u2500\u2500\u2500\u2534\u2500\u2500\u2500\u2500\u2534\u2500\u2500\u2500\u2500\u2534\u2500\u2500\u2500\u2500\u2534\u2500\u2500\u2500\u2500\u2518 ``` and a view which describes how to interpret the data: ``` >>> a.flags C_CONTIGUOUS : True F_CONTIGUOUS : True OWNDATA : True WRITEABLE : True ALIGNED : True UPDATEIFCOPY : False >>> a.dtype dtype('int64') >>> a.itemsize 8 >>> a.strides (8,) >>> a.shape (12,) ``` Here the shape (12,) means the array is indexed by a single index which runs from 0 to 11. 
Conceptually, if we label this single index i, the array a looks like this: ``` i= 0 1 2 3 4 5 6 7 8 9 10 11 \u250c\u2500\u2500\u2500\u2500\u252c\u2500\u2500\u2500\u2500\u252c\u2500\u2500\u2500\u2500\u252c\u2500\u2500\u2500\u2500\u252c\u2500\u2500\u2500\u2500\u252c\u2500\u2500\u2500\u2500\u252c\u2500\u2500\u2500\u2500\u252c\u2500\u2500\u2500\u2500\u252c\u2500\u2500\u2500\u2500\u252c\u2500\u2500\u2500\u2500\u252c\u2500\u2500\u2500\u2500\u252c\u2500\u2500\u2500\u2500\u2510 \u2502 0 \u2502 1 \u2502 2 \u2502 3 \u2502 4 \u2502 5 \u2502 6 \u2502 7 \u2502 8 \u2502 9 \u2502 10 \u2502 11 \u2502 \u2514\u2500\u2500\u2500\u2500\u2534\u2500\u2500\u2500\u2500\u2534\u2500\u2500\u2500\u2500\u2534\u2500\u2500\u2500\u2500\u2534\u2500\u2500\u2500\u2500\u2534\u2500\u2500\u2500\u2500\u2534\u2500\u2500\u2500\u2500\u2534\u2500\u2500\u2500\u2500\u2534\u2500\u2500\u2500\u2500\u2534\u2500\u2500\u2500\u2500\u2534\u2500\u2500\u2500\u2500\u2534\u2500\u2500\u2500\u2500\u2518 ``` If we reshape an array, this doesn't change the data buffer. Instead, it creates a new view that describes a different way to interpret the data. So after: ``` >>> b = a.reshape((3, 4)) ``` the array b has the same data buffer as a, but now it is indexed by two indices which run from 0 to 2 and 0 to 3 respectively. 
If we label the two indices i and j, the array b looks like this: ``` i= 0 0 0 0 1 1 1 1 2 2 2 2 j= 0 1 2 3 0 1 2 3 0 1 2 3 \u250c\u2500\u2500\u2500\u2500\u252c\u2500\u2500\u2500\u2500\u252c\u2500\u2500\u2500\u2500\u252c\u2500\u2500\u2500\u2500\u252c\u2500\u2500\u2500\u2500\u252c\u2500\u2500\u2500\u2500\u252c\u2500\u2500\u2500\u2500\u252c\u2500\u2500\u2500\u2500\u252c\u2500\u2500\u2500\u2500\u252c\u2500\u2500\u2500\u2500\u252c\u2500\u2500\u2500\u2500\u252c\u2500\u2500\u2500\u2500\u2510 \u2502 0 \u2502 1 \u2502 2 \u2502 3 \u2502 4 \u2502 5 \u2502 6 \u2502 7 \u2502 8 \u2502 9 \u2502 10 \u2502 11 \u2502 \u2514\u2500\u2500\u2500\u2500\u2534\u2500\u2500\u2500\u2500\u2534\u2500\u2500\u2500\u2500\u2534\u2500\u2500\u2500\u2500\u2534\u2500\u2500\u2500\u2500\u2534\u2500\u2500\u2500\u2500\u2534\u2500\u2500\u2500\u2500\u2534\u2500\u2500\u2500\u2500\u2534\u2500\u2500\u2500\u2500\u2534\u2500\u2500\u2500\u2500\u2534\u2500\u2500\u2500\u2500\u2534\u2500\u2500\u2500\u2500\u2518 ``` which means that: ``` >>> b[2,1] 9 ``` You can see that the second index changes quickly and the first index changes slowly. 
If you prefer this to be the other way round, you can specify the order parameter: ``` >>> c = a.reshape((3, 4), order='F') ``` which results in an array indexed like this: ``` i= 0 1 2 0 1 2 0 1 2 0 1 2 j= 0 0 0 1 1 1 2 2 2 3 3 3 \u250c\u2500\u2500\u2500\u2500\u252c\u2500\u2500\u2500\u2500\u252c\u2500\u2500\u2500\u2500\u252c\u2500\u2500\u2500\u2500\u252c\u2500\u2500\u2500\u2500\u252c\u2500\u2500\u2500\u2500\u252c\u2500\u2500\u2500\u2500\u252c\u2500\u2500\u2500\u2500\u252c\u2500\u2500\u2500\u2500\u252c\u2500\u2500\u2500\u2500\u252c\u2500\u2500\u2500\u2500\u252c\u2500\u2500\u2500\u2500\u2510 \u2502 0 \u2502 1 \u2502 2 \u2502 3 \u2502 4 \u2502 5 \u2502 6 \u2502 7 \u2502 8 \u2502 9 \u2502 10 \u2502 11 \u2502 \u2514\u2500\u2500\u2500\u2500\u2534\u2500\u2500\u2500\u2500\u2534\u2500\u2500\u2500\u2500\u2534\u2500\u2500\u2500\u2500\u2534\u2500\u2500\u2500\u2500\u2534\u2500\u2500\u2500\u2500\u2534\u2500\u2500\u2500\u2500\u2534\u2500\u2500\u2500\u2500\u2534\u2500\u2500\u2500\u2500\u2534\u2500\u2500\u2500\u2500\u2534\u2500\u2500\u2500\u2500\u2534\u2500\u2500\u2500\u2500\u2518 ``` which means that: ``` >>> c[2,1] 5 ``` It should now be clear what it means for an array to have a shape with one or more dimensions of size 1. 
After: ``` >>> d = a.reshape((12, 1)) ``` the array d is indexed by two indices, the first of which runs from 0 to 11, and the second index is always 0: ``` i= 0 1 2 3 4 5 6 7 8 9 10 11 j= 0 0 0 0 0 0 0 0 0 0 0 0 \u250c\u2500\u2500\u2500\u2500\u252c\u2500\u2500\u2500\u2500\u252c\u2500\u2500\u2500\u2500\u252c\u2500\u2500\u2500\u2500\u252c\u2500\u2500\u2500\u2500\u252c\u2500\u2500\u2500\u2500\u252c\u2500\u2500\u2500\u2500\u252c\u2500\u2500\u2500\u2500\u252c\u2500\u2500\u2500\u2500\u252c\u2500\u2500\u2500\u2500\u252c\u2500\u2500\u2500\u2500\u252c\u2500\u2500\u2500\u2500\u2510 \u2502 0 \u2502 1 \u2502 2 \u2502 3 \u2502 4 \u2502 5 \u2502 6 \u2502 7 \u2502 8 \u2502 9 \u2502 10 \u2502 11 \u2502 \u2514\u2500\u2500\u2500\u2500\u2534\u2500\u2500\u2500\u2500\u2534\u2500\u2500\u2500\u2500\u2534\u2500\u2500\u2500\u2500\u2534\u2500\u2500\u2500\u2500\u2534\u2500\u2500\u2500\u2500\u2534\u2500\u2500\u2500\u2500\u2534\u2500\u2500\u2500\u2500\u2534\u2500\u2500\u2500\u2500\u2534\u2500\u2500\u2500\u2500\u2534\u2500\u2500\u2500\u2500\u2534\u2500\u2500\u2500\u2500\u2518 ``` and so: ``` >>> d[10,0] 10 ``` A dimension of length 1 is \"free\" (in some sense), so there's nothing stopping you from going to town: ``` >>> e = a.reshape((1, 2, 1, 6, 1)) ``` giving an array indexed like this: ``` i= 0 0 0 0 0 0 0 0 0 0 0 0 j= 0 0 0 0 0 0 1 1 1 1 1 1 k= 0 0 0 0 0 0 0 0 0 0 0 0 l= 0 1 2 3 4 5 0 1 2 3 4 5 m= 0 0 0 0 0 0 0 0 0 0 0 0 \u250c\u2500\u2500\u2500\u2500\u252c\u2500\u2500\u2500\u2500\u252c\u2500\u2500\u2500\u2500\u252c\u2500\u2500\u2500\u2500\u252c\u2500\u2500\u2500\u2500\u252c\u2500\u2500\u2500\u2500\u252c\u2500\u2500\u2500\u2500\u252c\u2500\u2500\u2500\u2500\u252c\u2500\u2500\u2500\u2500\u252c\u2500\u2500\u2500\u2500\u252c\u2500\u2500\u2500\u2500\u252c\u2500\u2500\u2500\u2500\u2510 \u2502 0 \u2502 1 \u2502 2 \u2502 3 \u2502 4 \u2502 5 \u2502 6 \u2502 7 \u2502 8 \u2502 9 \u2502 10 \u2502 11 \u2502 
\u2514\u2500\u2500\u2500\u2500\u2534\u2500\u2500\u2500\u2500\u2534\u2500\u2500\u2500\u2500\u2534\u2500\u2500\u2500\u2500\u2534\u2500\u2500\u2500\u2500\u2534\u2500\u2500\u2500\u2500\u2534\u2500\u2500\u2500\u2500\u2534\u2500\u2500\u2500\u2500\u2534\u2500\u2500\u2500\u2500\u2534\u2500\u2500\u2500\u2500\u2534\u2500\u2500\u2500\u2500\u2534\u2500\u2500\u2500\u2500\u2518 ``` and so: ``` >>> e[0,1,0,0,0] 6 ``` See the NumPy internals documentation for more details about how arrays are implemented. 2. What to do? Since numpy.reshape just creates a new view, you shouldn't be scared about using it whenever necessary. It's the right tool to use when you want to index an array in a different way. However, in a long computation it's usually possible to arrange to construct arrays with the \"right\" shape in the first place, and so minimize the number of reshapes and transposes. But without seeing the actual context that led to the need for a reshape, it's hard to say what should be changed. The example in your question is: ``` numpy.dot(M[:,0], numpy.ones((1, R))) ``` but this is not realistic. First, this expression: ``` M[:,0].sum() ``` computes the result more simply. Second, is there really something special about column 0? Perhaps what you actually need is: ``` M.sum(axis=0) ```", "best_answers_score":0.8, "library_name":"numpy", "question_url":"https:\/\/stackoverflow.com\/questions\/22053050\/difference-between-numpy-array-shape-r-1-and-r", "best_answers_votes":732, "question_length":922, "response_length":4557 }, { "question":"Simple Digit Recognition OCR in OpenCV-Python I am trying to implement a \"Digit Recognition OCR\" in OpenCV-Python (cv2). It is just for learning purposes. I would like to learn both KNearest and SVM features in OpenCV. I have 100 samples (i.e. images) of each digit. I would like to train with them. There is a sample letter_recog.py that comes with OpenCV sample. But I still couldn't figure out on how to use it. 
I don't understand what the samples, responses, etc. are. Also, it loads a txt file at first, which I didn't understand at first. Later, on searching a little bit, I could find a letter_recognition.data in the cpp samples. I used it and wrote some code for cv2.KNearest modeled on letter_recog.py (just for testing): ``` import numpy as np import cv2 fn = 'letter-recognition.data' a = np.loadtxt(fn, np.float32, delimiter=',', converters={ 0 : lambda ch : ord(ch)-ord('A') }) samples, responses = a[:,1:], a[:,0] model = cv2.KNearest() retval = model.train(samples,responses) retval, results, neigh_resp, dists = model.find_nearest(samples, k = 10) print results.ravel() ``` It gave me an array of size 20000, and I don't understand what it is. Questions: 1) What is the letter_recognition.data file? How do I build that file from my own data set? 2) What does results.ravel() denote? 3) How can we write a simple digit recognition tool using the letter_recognition.data file (either KNearest or SVM)?", "response":"Well, I decided to work out the above problem myself. What I wanted was to implement a simple OCR using KNearest or SVM features in OpenCV. And below is what I did and how. (it is just for learning how to use KNearest for simple OCR purposes). 1) My first question was about the letter_recognition.data file that comes with OpenCV samples. I wanted to know what is inside that file. It contains a letter, along with 16 features of that letter. And this SOF helped me to find it. These 16 features are explained in the paper Letter Recognition Using Holland-Style Adaptive Classifiers. (Although I didn't understand some of the features at the end) 2) Since I knew that, without understanding all those features, it would be difficult to use that method. I tried some other papers, but all were a little difficult for a beginner. So I just decided to take all the pixel values as my features.
(I was not worried about accuracy or performance, I just wanted it to work, at least with the least accuracy) I took the below image for my training data: (I know the amount of training data is less. But, since all letters are of the same font and size, I decided to try on this). To prepare the data for training, I made a small code in OpenCV. It does the following things: It loads the image. Selects the digits (obviously by contour finding and applying constraints on area and height of letters to avoid false detections). Draws the bounding rectangle around one letter and wait for key press manually. This time we press the digit key ourselves corresponding to the letter in the box. Once the corresponding digit key is pressed, it resizes this box to 10x10 and saves all 100 pixel values in an array (here, samples) and corresponding manually entered digit in another array(here, responses). Then save both the arrays in separate .txt files. At the end of the manual classification of digits, all the digits in the training data (train.png) are labeled manually by ourselves, image will look like below: Below is the code I used for the above purpose (of course, not so clean): ``` import sys import numpy as np import cv2 im = cv2.imread('pitrain.png') im3 = im.copy() gray = cv2.cvtColor(im,cv2.COLOR_BGR2GRAY) blur = cv2.GaussianBlur(gray,(5,5),0) thresh = cv2.adaptiveThreshold(blur,255,1,1,11,2) ################# Now finding Contours ################### contours,hierarchy = cv2.findContours(thresh,cv2.RETR_LIST,cv2.CHAIN_APPROX_SIMPLE) samples = np.empty((0,100)) responses = [] keys = [i for i in range(48,58)] for cnt in contours: if cv2.contourArea(cnt)>50: [x,y,w,h] = cv2.boundingRect(cnt) if h>28: cv2.rectangle(im,(x,y),(x+w,y+h),(0,0,255),2) roi = thresh[y:y+h,x:x+w] roismall = cv2.resize(roi,(10,10)) cv2.imshow('norm',im) key = cv2.waitKey(0) if key == 27: # (escape to quit) sys.exit() elif key in keys: responses.append(int(chr(key))) sample = roismall.reshape((1,100)) 
samples = np.append(samples,sample,0) responses = np.array(responses,np.float32) responses = responses.reshape((responses.size,1)) print \"training complete\" np.savetxt('generalsamples.data',samples) np.savetxt('generalresponses.data',responses) ``` Now we enter in to training and testing part. For the testing part, I used the below image, which has the same type of letters I used for the training phase. For training we do as follows: Load the .txt files we already saved earlier create an instance of the classifier we are using (it is KNearest in this case) Then we use KNearest.train function to train the data For testing purposes, we do as follows: We load the image used for testing process the image as earlier and extract each digit using contour methods Draw a bounding box for it, then resize it to 10x10, and store its pixel values in an array as done earlier. Then we use KNearest.find_nearest() function to find the nearest item to the one we gave. ( If lucky, it recognizes the correct digit.) 
I included last two steps (training and testing) in single code below: ``` import cv2 import numpy as np ####### training part ############### samples = np.loadtxt('generalsamples.data',np.float32) responses = np.loadtxt('generalresponses.data',np.float32) responses = responses.reshape((responses.size,1)) model = cv2.KNearest() model.train(samples,responses) ############################# testing part ######################### im = cv2.imread('pi.png') out = np.zeros(im.shape,np.uint8) gray = cv2.cvtColor(im,cv2.COLOR_BGR2GRAY) thresh = cv2.adaptiveThreshold(gray,255,1,1,11,2) contours,hierarchy = cv2.findContours(thresh,cv2.RETR_LIST,cv2.CHAIN_APPROX_SIMPLE) for cnt in contours: if cv2.contourArea(cnt)>50: [x,y,w,h] = cv2.boundingRect(cnt) if h>28: cv2.rectangle(im,(x,y),(x+w,y+h),(0,255,0),2) roi = thresh[y:y+h,x:x+w] roismall = cv2.resize(roi,(10,10)) roismall = roismall.reshape((1,100)) roismall = np.float32(roismall) retval, results, neigh_resp, dists = model.find_nearest(roismall, k = 1) string = str(int((results[0][0]))) cv2.putText(out,string,(x,y+h),0,1,(0,255,0)) cv2.imshow('im',im) cv2.imshow('out',out) cv2.waitKey(0) ``` And it worked, below is the result I got: Here it worked with 100% accuracy. I assume this is because all the digits are of the same kind and the same size. But anyway, this is a good start to go for beginners (I hope so).", "best_answers_score":0.8, "library_name":"numpy", "question_url":"https:\/\/stackoverflow.com\/questions\/9413216\/simple-digit-recognition-ocr-in-opencv-python", "best_answers_votes":607, "question_length":1394, "response_length":5280 }, { "question":"ValueError: The truth value of an array with more than one element is ambiguous. Use a.any() or a.all() Let x be a NumPy array. The following: ``` (x > 1) and (x < 3) ``` Gives the error message: ValueError: The truth value of an array with more than one element is ambiguous. 
Use a.any() or a.all() How do I fix this?", "response":"If a and b are Boolean NumPy arrays, the & operation returns the elementwise-and of them: ``` a & b ``` That returns a Boolean array. To reduce this to a single Boolean value, use either ``` (a & b).any() ``` or ``` (a & b).all() ``` Note: if a and b are non-Boolean arrays, consider (a - b).any() or (a - b).all() instead. Rationale The NumPy developers felt there was no one commonly understood way to evaluate an array in Boolean context: it could mean True if any element is True, or it could mean True if all elements are True, or True if the array has non-zero length, just to name three possibilities. Since different users might have different needs and different assumptions, the NumPy developers refused to guess and instead decided to raise a ValueError whenever one tries to evaluate an array in Boolean context. Applying and to two numpy arrays causes the two arrays to be evaluated in Boolean context (by calling __bool__ in Python3 or __nonzero__ in Python2).", "best_answers_score":0.8, "library_name":"numpy", "question_url":"https:\/\/stackoverflow.com\/questions\/10062954\/valueerror-the-truth-value-of-an-array-with-more-than-one-element-is-ambiguous", "best_answers_votes":323, "question_length":318, "response_length":974 }, { "question":"NumPy array initialization (fill with identical values) [duplicate] I need to create a NumPy array of length n, each element of which is v. Is there anything better than: ``` a = empty(n) for i in range(n): a[i] = v ``` I know zeros and ones would work for v = 0, 1.
I could use v * ones(n), but it won't work when v is None, and also would be much slower.", "response":"NumPy 1.8 introduced np.full(), which is a more direct method than empty() followed by fill() for creating an array filled with a certain value: ``` >>> np.full((3, 5), 7) array([[ 7., 7., 7., 7., 7.], [ 7., 7., 7., 7., 7.], [ 7., 7., 7., 7., 7.]]) >>> np.full((3, 5), 7, dtype=int) array([[7, 7, 7, 7, 7], [7, 7, 7, 7, 7], [7, 7, 7, 7, 7]]) ``` This is arguably the way of creating an array filled with certain values, because it explicitly describes what is being achieved (and it can in principle be very efficient since it performs a very specific task).", "best_answers_score":0.8, "library_name":"numpy", "question_url":"https:\/\/stackoverflow.com\/questions\/5891410\/numpy-array-initialization-fill-with-identical-values", "best_answers_votes":571, "question_length":465, "response_length":558 }, { "question":"Frequency counts for unique values in a NumPy array How do I efficiently obtain the frequency count for each unique value in a NumPy array? 
``` >>> x = np.array([1,1,1,2,2,2,5,25,1,1]) >>> freq_count(x) [(1, 5), (2, 3), (5, 1), (25, 1)] ```", "response":"Use numpy.unique with return_counts=True (for NumPy 1.9+): ``` import numpy as np x = np.array([1,1,1,2,2,2,5,25,1,1]) unique, counts = np.unique(x, return_counts=True) >>> print(np.asarray((unique, counts)).T) [[ 1 5] [ 2 3] [ 5 1] [25 1]] ``` In comparison with scipy.stats.itemfreq: ``` In [4]: x = np.random.random_integers(0,100,1e6) In [5]: %timeit unique, counts = np.unique(x, return_counts=True) 10 loops, best of 3: 31.5 ms per loop In [6]: %timeit scipy.stats.itemfreq(x) 10 loops, best of 3: 170 ms per loop ```", "best_answers_score":0.8, "library_name":"numpy", "question_url":"https:\/\/stackoverflow.com\/questions\/10741346\/frequency-counts-for-unique-values-in-a-numpy-array", "best_answers_votes":788, "question_length":240, "response_length":523 }, { "question":"Dropping infinite values from dataframes in pandas? How do I drop nan, inf, and -inf values from a DataFrame without resetting mode.use_inf_as_null? Can I tell dropna to include inf in its definition of missing values so that the following works? 
``` df.dropna(subset=[\"col1\", \"col2\"], how=\"all\") ```", "response":"First replace() infs with NaN: ``` df.replace([np.inf, -np.inf], np.nan, inplace=True) ``` and then drop NaNs via dropna(): ``` df.dropna(subset=[\"col1\", \"col2\"], how=\"all\", inplace=True) ``` For example: ``` >>> df = pd.DataFrame({\"col1\": [1, np.inf, -np.inf], \"col2\": [2, 3, np.nan]}) >>> df col1 col2 0 1.0 2.0 1 inf 3.0 2 -inf NaN >>> df.replace([np.inf, -np.inf], np.nan, inplace=True) >>> df col1 col2 0 1.0 2.0 1 NaN 3.0 2 NaN NaN >>> df.dropna(subset=[\"col1\", \"col2\"], how=\"all\", inplace=True) >>> df col1 col2 0 1.0 2.0 1 NaN 3.0 ``` The same method also works for Series.", "best_answers_score":0.8, "library_name":"numpy", "question_url":"https:\/\/stackoverflow.com\/questions\/17477979\/dropping-infinite-values-from-dataframes-in-pandas", "best_answers_votes":715, "question_length":300, "response_length":581 }, { "question":"What is the difference between ndarray and array in NumPy? What is the difference between ndarray and array in NumPy? Where is their implementation in the NumPy source code?", "response":"numpy.array is just a convenience function to create an ndarray; it is not a class itself. You can also create an array using numpy.ndarray, but it is not the recommended way. From the docstring of numpy.ndarray: Arrays should be constructed using array, zeros or empty ... The parameters given here refer to a low-level method (ndarray(...)) for instantiating an array. 
Most of the meat of the implementation is in C code, here in multiarray, but you can start looking at the ndarray interfaces here: https:\/\/github.com\/numpy\/numpy\/blob\/master\/numpy\/core\/numeric.py", "best_answers_score":0.8, "library_name":"numpy", "question_url":"https:\/\/stackoverflow.com\/questions\/15879315\/what-is-the-difference-between-ndarray-and-array-in-numpy", "best_answers_votes":361, "question_length":173, "response_length":566 }, { "question":"Converting numpy dtypes to native python types If I have a numpy dtype, how do I automatically convert it to its closest python data type? For example, ``` numpy.float32 -> \"python float\" numpy.float64 -> \"python float\" numpy.uint32 -> \"python int\" numpy.int16 -> \"python int\" ``` I could try to come up with a mapping of all of these cases, but does numpy provide some automatic way of converting its dtypes into the closest possible native python types? This mapping need not be exhaustive, but it should convert the common dtypes that have a close python analog. I think this already happens somewhere in numpy.", "response":"Use val.item() to convert most NumPy values to a native Python type: ``` import numpy as np # for example, numpy.float32 -> python float val = np.float32(0) pyval = val.item() print(type(pyval)) # # and similar... type(np.float64(0).item()) # type(np.uint32(0).item()) # type(np.int16(0).item()) # type(np.cfloat(0).item()) # type(np.datetime64(0, 'D').item()) # type(np.datetime64('2001-01-01 00:00:00').item()) # type(np.timedelta64(0, 'D').item()) # ... ``` (A related method np.asscalar(val) was deprecated with 1.16, and removed with 1.23). 
For the curious, to build a table of conversions of NumPy array scalars for your system: ``` for name in dir(np): obj = getattr(np, name) if hasattr(obj, 'dtype'): try: if 'time' in name: npn = obj(0, 'D') else: npn = obj(0) nat = npn.item() print('{0} ({1!r}) -> {2}'.format(name, npn.dtype.char, type(nat))) except: pass ``` There are a few NumPy types that have no native Python equivalent on some systems, including: clongdouble, clongfloat, complex192, complex256, float128, longcomplex, longdouble and longfloat. These need to be converted to their nearest NumPy equivalent before using .item().", "best_answers_score":0.8, "library_name":"numpy", "question_url":"https:\/\/stackoverflow.com\/questions\/9452775\/converting-numpy-dtypes-to-native-python-types", "best_answers_votes":562, "question_length":614, "response_length":1155 }, { "question":"How do I remove NaN values from a NumPy array? How do I remove NaN values from a NumPy array? ``` [1, 2, NaN, 4, NaN, 8] \u27f6 [1, 2, 4, 8] ```", "response":"To remove NaN values from a NumPy array x: ``` x = x[~numpy.isnan(x)] ``` Explanation The inner function numpy.isnan returns a boolean\/logical array which has the value True everywhere that x is not-a-number. Since we want the opposite, we use the logical-not operator ~ to get an array with Trues everywhere that x is a valid number. Lastly, we use this logical array to index into the original array x, in order to retrieve just the non-NaN values.", "best_answers_score":0.8, "library_name":"numpy", "question_url":"https:\/\/stackoverflow.com\/questions\/11620914\/how-do-i-remove-nan-values-from-a-numpy-array", "best_answers_votes":629, "question_length":139, "response_length":450 }, { "question":"How do I convert a numpy array to (and display) an image? I have created an array thusly: ``` import numpy as np data = np.zeros( (512,512,3), dtype=np.uint8) data[256,256] = [255,0,0] ``` What I want this to do is display a single red dot in the center of a 512x512 image. 
(At least to begin with... I think I can figure out the rest from there)", "response":"Use plt.imshow to create the figure, and plt.show to display it: ``` from matplotlib import pyplot as plt plt.imshow(data, interpolation='nearest') plt.show() ``` For Jupyter notebooks, add this line before importing matplotlib: ``` %matplotlib inline ``` For interactive plots in Jupyter [demo], install ipympl (pip install ipympl), then use: ``` %matplotlib widget ```", "best_answers_score":0.8, "library_name":"numpy", "question_url":"https:\/\/stackoverflow.com\/questions\/2659312\/how-do-i-convert-a-numpy-array-to-and-display-an-image", "best_answers_votes":448, "question_length":346, "response_length":367 }, { "question":"Most efficient way to reverse a numpy array Believe it or not, after profiling my current code, the repetitive operation of numpy array reversal ate a giant chunk of the running time. What I have right now is the common view-based method: ``` reversed_arr = arr[::-1] ``` Is there any other way to do it more efficiently, or is it just an illusion from my obsession with unrealistic numpy performance?", "response":"``` reversed_arr = arr[::-1] ``` gives a reversed view into the original array arr. Any changes made to the original array arr will also be immediately visible in reversed_arr. The underlying data buffers for arr and reversed_arr are shared, so creating this view is always instantaneous, and does not require any additional memory allocation or copying for the array contents. See also, this discussion on NumPy views: How do I create a view onto a NumPy array? Possible solutions to performance problems regarding views Are you re-creating the view more often than you need to? 
You should be able to do something like this: ``` arr = np.array(some_sequence) reversed_arr = arr[::-1] do_something(arr) look_at(reversed_arr) do_something_else(arr) look_at(reversed_arr) ``` I'm not a numpy expert, but this seems like it would be the fastest way to do things in numpy. If this is what you are already doing, I don't think you can improve on it.", "best_answers_score":0.8, "library_name":"numpy", "question_url":"https:\/\/stackoverflow.com\/questions\/6771428\/most-efficient-way-to-reverse-a-numpy-array", "best_answers_votes":342, "question_length":402, "response_length":944 }, { "question":"Split (explode) pandas dataframe string entry to separate rows I have a pandas dataframe in which one column of text strings contains comma-separated values. I want to split each CSV field and create a new row per entry (assume that CSV are clean and need only be split on ','). For example, a should become b: ``` In [7]: a Out[7]: var1 var2 0 a,b,c 1 1 d,e,f 2 In [8]: b Out[8]: var1 var2 0 a 1 1 b 1 2 c 1 3 d 2 4 e 2 5 f 2 ``` So far, I have tried various simple functions, but the .apply method seems to only accept one row as return value when it is used on an axis, and I can't get .transform to work. Any suggestions would be much appreciated! 
Example data: ``` from pandas import DataFrame import numpy as np a = DataFrame([{'var1': 'a,b,c', 'var2': 1}, {'var1': 'd,e,f', 'var2': 2}]) b = DataFrame([{'var1': 'a', 'var2': 1}, {'var1': 'b', 'var2': 1}, {'var1': 'c', 'var2': 1}, {'var1': 'd', 'var2': 2}, {'var1': 'e', 'var2': 2}, {'var1': 'f', 'var2': 2}]) ``` I know this won't work because we lose DataFrame meta-data by going through numpy, but it should give you a sense of what I tried to do: ``` def fun(row): letters = row['var1'] letters = letters.split(',') out = np.array([row] * len(letters)) out['var1'] = letters a['idx'] = range(a.shape[0]) z = a.groupby('idx') z.transform(fun) ```", "response":"UPDATE 3: it makes more sense to use Series.explode() \/ DataFrame.explode() methods (implemented in Pandas 0.25.0 and extended in Pandas 1.3.0 to support multi-column explode) as is shown in the usage example: for a single column: ``` In [1]: df = pd.DataFrame({'A': [[0, 1, 2], 'foo', [], [3, 4]], ...: 'B': 1, ...: 'C': [['a', 'b', 'c'], np.nan, [], ['d', 'e']]}) In [2]: df Out[2]: A B C 0 [0, 1, 2] 1 [a, b, c] 1 foo 1 NaN 2 [] 1 [] 3 [3, 4] 1 [d, e] In [3]: df.explode('A') Out[3]: A B C 0 0 1 [a, b, c] 0 1 1 [a, b, c] 0 2 1 [a, b, c] 1 foo 1 NaN 2 NaN 1 [] 3 3 1 [d, e] 3 4 1 [d, e] ``` for multiple columns (for Pandas 1.3.0+): ``` In [4]: df.explode(['A', 'C']) Out[4]: A B C 0 0 1 a 0 1 1 b 0 2 1 c 1 foo 1 NaN 2 NaN 1 NaN 3 3 1 d 3 4 1 e ``` UPDATE 2: more generic vectorized function, which will work for multiple normal and multiple list columns ``` def explode(df, lst_cols, fill_value='', preserve_index=False): # make sure `lst_cols` is list-alike if (lst_cols is not None and len(lst_cols) > 0 and not isinstance(lst_cols, (list, tuple, np.ndarray, pd.Series))): lst_cols = [lst_cols] # all columns except `lst_cols` idx_cols = df.columns.difference(lst_cols) # calculate lengths of lists lens = df[lst_cols[0]].str.len() # preserve original index values idx = np.repeat(df.index.values, lens) # create 
\"exploded\" DF res = (pd.DataFrame({ col:np.repeat(df[col].values, lens) for col in idx_cols}, index=idx) .assign(**{col:np.concatenate(df.loc[lens>0, col].values) for col in lst_cols})) # append those rows that have empty lists if (lens == 0).any(): # at least one list in cells is empty res = (res.append(df.loc[lens==0, idx_cols], sort=False) .fillna(fill_value)) # revert the original index order res = res.sort_index() # reset index if requested if not preserve_index: res = res.reset_index(drop=True) return res ``` Demo: Multiple list columns - all list columns must have the same # of elements in each row: ``` In [134]: df Out[134]: aaa myid num text 0 10 1 [1, 2, 3] [aa, bb, cc] 1 11 2 [] [] 2 12 3 [1, 2] [cc, dd] 3 13 4 [] [] In [135]: explode(df, ['num','text'], fill_value='') Out[135]: aaa myid num text 0 10 1 1 aa 1 10 1 2 bb 2 10 1 3 cc 3 11 2 4 12 3 1 cc 5 12 3 2 dd 6 13 4 ``` preserving original index values: ``` In [136]: explode(df, ['num','text'], fill_value='', preserve_index=True) Out[136]: aaa myid num text 0 10 1 1 aa 0 10 1 2 bb 0 10 1 3 cc 1 11 2 2 12 3 1 cc 2 12 3 2 dd 3 13 4 ``` Setup: ``` df = pd.DataFrame({ 'aaa': {0: 10, 1: 11, 2: 12, 3: 13}, 'myid': {0: 1, 1: 2, 2: 3, 3: 4}, 'num': {0: [1, 2, 3], 1: [], 2: [1, 2], 3: []}, 'text': {0: ['aa', 'bb', 'cc'], 1: [], 2: ['cc', 'dd'], 3: []} }) ``` CSV column: ``` In [46]: df Out[46]: var1 var2 var3 0 a,b,c 1 XX 1 d,e,f,x,y 2 ZZ In [47]: explode(df.assign(var1=df.var1.str.split(',')), 'var1') Out[47]: var1 var2 var3 0 a 1 XX 1 b 1 XX 2 c 1 XX 3 d 2 ZZ 4 e 2 ZZ 5 f 2 ZZ 6 x 2 ZZ 7 y 2 ZZ ``` using this little trick we can convert CSV-like column to list column: ``` In [48]: df.assign(var1=df.var1.str.split(',')) Out[48]: var1 var2 var3 0 [a, b, c] 1 XX 1 [d, e, f, x, y] 2 ZZ ``` UPDATE: generic vectorized approach (will work also for multiple columns): Original DF: ``` In [177]: df Out[177]: var1 var2 var3 0 a,b,c 1 XX 1 d,e,f,x,y 2 ZZ ``` Solution: first let's convert CSV strings to lists: ``` In 
[178]: lst_col = 'var1' In [179]: x = df.assign(**{lst_col:df[lst_col].str.split(',')}) In [180]: x Out[180]: var1 var2 var3 0 [a, b, c] 1 XX 1 [d, e, f, x, y] 2 ZZ ``` Now we can do this: ``` In [181]: pd.DataFrame({ ...: col:np.repeat(x[col].values, x[lst_col].str.len()) ...: for col in x.columns.difference([lst_col]) ...: }).assign(**{lst_col:np.concatenate(x[lst_col].values)})[x.columns.tolist()] ...: Out[181]: var1 var2 var3 0 a 1 XX 1 b 1 XX 2 c 1 XX 3 d 2 ZZ 4 e 2 ZZ 5 f 2 ZZ 6 x 2 ZZ 7 y 2 ZZ ``` OLD answer: Inspired by @AFinkelstein's solution, I wanted to make it a bit more generalized, so it could be applied to a DF with more than two columns, and as fast (well, almost as fast) as AFinkelstein's solution: ``` In [2]: df = pd.DataFrame( ...: [{'var1': 'a,b,c', 'var2': 1, 'var3': 'XX'}, ...: {'var1': 'd,e,f,x,y', 'var2': 2, 'var3': 'ZZ'}] ...: ) In [3]: df Out[3]: var1 var2 var3 0 a,b,c 1 XX 1 d,e,f,x,y 2 ZZ In [4]: (df.set_index(df.columns.drop('var1',1).tolist()) ...: .var1.str.split(',', expand=True) ...: .stack() ...: .reset_index() ...: .rename(columns={0:'var1'}) ...: .loc[:, df.columns] ...: ) Out[4]: var1 var2 var3 0 a 1 XX 1 b 1 XX 2 c 1 XX 3 d 2 ZZ 4 e 2 ZZ 5 f 2 ZZ 6 x 2 ZZ 7 y 2 ZZ ```", "best_answers_score":0.8, "library_name":"numpy", "question_url":"https:\/\/stackoverflow.com\/questions\/12680754\/split-explode-pandas-dataframe-string-entry-to-separate-rows", "best_answers_votes":259, "question_length":1305, "response_length":4533 }, { "question":"How to remove specific elements in a numpy array How can I remove some specific elements from a numpy array? Say I have ``` import numpy as np a = np.array([1,2,3,4,5,6,7,8,9]) ``` I then want to remove 3,4,7 from a. All I know is the index of the values (index=[2,3,6]).", "response":"Use numpy.delete(), which returns a new array with sub-arrays along an axis deleted. 
``` numpy.delete(a, index) ``` For your specific question: ``` import numpy as np a = np.array([1, 2, 3, 4, 5, 6, 7, 8, 9]) index = [2, 3, 6] new_a = np.delete(a, index) print(new_a) # Output: [1, 2, 5, 6, 8, 9] ``` Note that numpy.delete() returns a new array since array scalars are immutable, similar to strings in Python, so each time a change is made to it, a new object is created. I.e., to quote the delete() docs: \"A copy of arr with the elements specified by obj removed. Note that delete does not occur in-place...\" If the code I post has output, it is the result of running the code.", "best_answers_score":0.8, "library_name":"numpy", "question_url":"https:\/\/stackoverflow.com\/questions\/10996140\/how-to-remove-specific-elements-in-a-numpy-array", "best_answers_votes":464, "question_length":271, "response_length":679 }, { "question":"Convert array of indices to one-hot encoded array in NumPy Given a 1D array of indices: ``` a = array([1, 0, 3]) ``` I want to one-hot encode this as a 2D array: ``` b = array([[0,1,0,0], [1,0,0,0], [0,0,0,1]]) ```", "response":"Create a zeroed array b with enough columns, i.e. a.max() + 1. Then, for each row i, set the a[i]th column to 1. ``` >>> a = np.array([1, 0, 3]) >>> b = np.zeros((a.size, a.max() + 1)) >>> b[np.arange(a.size), a] = 1 >>> b array([[ 0., 1., 0., 0.], [ 1., 0., 0., 0.], [ 0., 0., 0., 1.]]) ```", "best_answers_score":0.8, "library_name":"numpy", "question_url":"https:\/\/stackoverflow.com\/questions\/29831489\/convert-array-of-indices-to-one-hot-encoded-array-in-numpy", "best_answers_votes":547, "question_length":214, "response_length":291 }, { "question":"Convert a tensor to numpy array in Tensorflow? How to convert a tensor into a numpy array when using Tensorflow with Python bindings?", "response":"TensorFlow 2.x Eager Execution is enabled by default, so just call .numpy() on the Tensor object. 
``` import tensorflow as tf a = tf.constant([[1, 2], [3, 4]]) b = tf.add(a, 1) a.numpy() # array([[1, 2], # [3, 4]], dtype=int32) b.numpy() # array([[2, 3], # [4, 5]], dtype=int32) tf.multiply(a, b).numpy() # array([[ 2, 6], # [12, 20]], dtype=int32) ``` See NumPy Compatibility for more. It is worth noting (from the docs), Numpy array may share a memory with the Tensor object. Any changes to one may be reflected in the other. Bold emphasis mine. A copy may or may not be returned, and this is an implementation detail based on whether the data is in CPU or GPU (in the latter case, a copy has to be made from GPU to host memory). But why am I getting the AttributeError: 'Tensor' object has no attribute 'numpy'?. A lot of folks have commented about this issue, there are a couple of possible reasons: TF 2.0 is not correctly installed (in which case, try re-installing), or TF 2.0 is installed, but eager execution is disabled for some reason. In such cases, call tf.compat.v1.enable_eager_execution() to enable it, or see below. If Eager Execution is disabled, you can build a graph and then run it through tf.compat.v1.Session: ``` a = tf.constant([[1, 2], [3, 4]]) b = tf.add(a, 1) out = tf.multiply(a, b) out.eval(session=tf.compat.v1.Session()) # array([[ 2, 6], # [12, 20]], dtype=int32) ``` See also TF 2.0 Symbols Map for a mapping of the old API to the new one.", "best_answers_score":0.8, "library_name":"numpy", "question_url":"https:\/\/stackoverflow.com\/questions\/34097281\/convert-a-tensor-to-numpy-array-in-tensorflow", "best_answers_votes":249, "question_length":133, "response_length":1473 }, { "question":"How to take column-slices of dataframe in pandas I load some machine learning data from a CSV file. The first 2 columns are observations and the remaining columns are features. 
Currently, I do the following: ``` data = pandas.read_csv('mydata.csv') ``` which gives something like: ``` data = pandas.DataFrame(np.random.rand(10,5), columns = list('abcde')) ``` I'd like to slice this dataframe in two dataframes: one containing the columns a and b and one containing the columns c, d and e. It is not possible to write something like ``` observations = data[:'c'] features = data['c':] ``` I'm not sure what the best method is. Do I need a pd.Panel? By the way, I find dataframe indexing pretty inconsistent: data['a'] is permitted, but data[0] is not. On the other side, data['a':] is not permitted but data[0:] is. Is there a practical reason for this? This is really confusing if columns are indexed by Int, given that data[0] != data[0:1]", "response":"2017 Answer - pandas 0.20: .ix is deprecated. Use .loc See the deprecation in the docs .loc uses label based indexing to select both rows and columns. The labels being the values of the index or the columns. Slicing with .loc includes the last element. Let's assume we have a DataFrame with the following columns: foo, bar, quz, ant, cat, sat, dat. ``` # selects all rows and all columns beginning at 'foo' up to and including 'sat' df.loc[:, 'foo':'sat'] # foo bar quz ant cat sat ``` .loc accepts the same slice notation that Python lists do for both row and columns. 
Slice notation being start:stop:step ``` # slice from 'foo' to 'cat' by every 2nd column df.loc[:, 'foo':'cat':2] # foo quz cat # slice from the beginning to 'bar' df.loc[:, :'bar'] # foo bar # slice from 'quz' to the end by 3 df.loc[:, 'quz'::3] # quz sat # attempt from 'sat' to 'bar' df.loc[:, 'sat':'bar'] # no columns returned # slice from 'sat' to 'bar' df.loc[:, 'sat':'bar':-1] sat cat ant quz bar # slice notation is syntatic sugar for the slice function # slice from 'quz' to the end by 2 with slice function df.loc[:, slice('quz',None, 2)] # quz cat dat # select specific columns with a list # select columns foo, bar and dat df.loc[:, ['foo','bar','dat']] # foo bar dat ``` You can slice by rows and columns. For instance, if you have 5 rows with labels v, w, x, y, z ``` # slice from 'w' to 'y' and 'foo' to 'ant' by 3 df.loc['w':'y', 'foo':'ant':3] # foo ant # w # x # y ```", "best_answers_score":0.8, "library_name":"numpy", "question_url":"https:\/\/stackoverflow.com\/questions\/10665889\/how-to-take-column-slices-of-dataframe-in-pandas", "best_answers_votes":324, "question_length":941, "response_length":1458 }, { "question":"Create numpy matrix filled with NaNs I have the following code: ``` r = numpy.zeros(shape = (width, height, 9)) ``` It creates a width x height x 9 matrix filled with zeros. Instead, I'd like to know if there's a function or way to initialize them instead to NaNs in an easy way.", "response":"You rarely need loops for vector operations in numpy. 
You can create an uninitialized array and assign to all entries at once: ``` >>> a = numpy.empty((3,3,)) >>> a[:] = numpy.nan >>> a array([[ NaN, NaN, NaN], [ NaN, NaN, NaN], [ NaN, NaN, NaN]]) ``` I have timed the alternatives a[:] = numpy.nan here and a.fill(numpy.nan) as posted by Blaenk: ``` $ python -mtimeit \"import numpy as np; a = np.empty((100,100));\" \"a.fill(np.nan)\" 10000 loops, best of 3: 54.3 usec per loop $ python -mtimeit \"import numpy as np; a = np.empty((100,100));\" \"a[:] = np.nan\" 10000 loops, best of 3: 88.8 usec per loop ``` The timings show a preference for ndarray.fill(..) as the faster alternative. OTOH, I like numpy's convenience implementation where you can assign values to whole slices at the time, the code's intention is very clear. Note that ndarray.fill performs its operation in-place, so numpy.empty((3,3,)).fill(numpy.nan) will instead return None.", "best_answers_score":0.8, "library_name":"numpy", "question_url":"https:\/\/stackoverflow.com\/questions\/1704823\/create-numpy-matrix-filled-with-nans", "best_answers_votes":415, "question_length":279, "response_length":943 }, { "question":"How do I create a numpy array of all True or all False? In Python, how do I create a numpy array of arbitrary shape filled with all True or all False?", "response":"The answer: ```py numpy.full((2, 2), True) ``` Explanation: numpy creates arrays of all ones or all zeros very easily: e.g. 
numpy.ones((2, 2)) or numpy.zeros((2, 2)) Since True and False are represented in Python as 1 and 0, respectively, we have only to specify this array should be boolean using the optional dtype parameter and we are done: ```py numpy.ones((2, 2), dtype=bool) ``` returns: ``` array([[ True, True], [ True, True]], dtype=bool) ``` UPDATE: 30 October 2013 Since numpy version 1.8, we can use full to achieve the same result with syntax that more clearly shows our intent (as fmonegaglia points out): ``` numpy.full((2, 2), True, dtype=bool) ``` UPDATE: 16 January 2017 Since at least numpy version 1.12, full automatically casts to the dtype of the second parameter, so we can just write: ``` numpy.full((2, 2), True) ```", "best_answers_score":0.8, "library_name":"numpy", "question_url":"https:\/\/stackoverflow.com\/questions\/21174961\/how-do-i-create-a-numpy-array-of-all-true-or-all-false", "best_answers_votes":438, "question_length":150, "response_length":841 }, { "question":"Take multiple lists into dataframe How do I take multiple lists and put them as different columns in a python dataframe? I tried this solution but had some trouble. 
Attempt 1: Have three lists, and zip them together and use that res = zip(lst1,lst2,lst3) Yields just one column Attempt 2: ``` percentile_list = pd.DataFrame({'lst1Tite' : [lst1], 'lst2Tite' : [lst2], 'lst3Tite' : [lst3] }, columns=['lst1Tite','lst1Tite', 'lst1Tite']) ``` yields either one row by 3 columns (the way above) or if I transpose it is 3 rows and 1 column How do I get a 100 row (length of each independent list) by 3 column (three lists) pandas dataframe?", "response":"I think you're almost there, try removing the extra square brackets around the lst's (Also you don't need to specify the column names when you're creating a dataframe from a dict like this): ``` import pandas as pd lst1 = range(100) lst2 = range(100) lst3 = range(100) percentile_list = pd.DataFrame( {'lst1Title': lst1, 'lst2Title': lst2, 'lst3Title': lst3 }) percentile_list lst1Title lst2Title lst3Title 0 0 0 0 1 1 1 1 2 2 2 2 3 3 3 3 4 4 4 4 5 5 5 5 6 6 6 6 ... ``` If you need a more performant solution you can use np.column_stack rather than zip as in your first attempt, this has around a 2x speedup on the example here, however comes at bit of a cost of readability in my opinion: ``` import numpy as np percentile_list = pd.DataFrame(np.column_stack([lst1, lst2, lst3]), columns=['lst1Title', 'lst2Title', 'lst3Title']) ```", "best_answers_score":0.8, "library_name":"numpy", "question_url":"https:\/\/stackoverflow.com\/questions\/30522724\/take-multiple-lists-into-dataframe", "best_answers_votes":513, "question_length":634, "response_length":834 }, { "question":"How to convert a NumPy array to PIL image applying matplotlib colormap I want to take a NumPy 2D array which represents a grayscale image, and convert it to an RGB PIL image while applying some of the matplotlib colormaps. 
I can get a reasonable PNG output by using the pyplot.figure.figimage command: ``` dpi = 100.0 w, h = myarray.shape[1]\/dpi, myarray.shape[0]\/dpi fig = plt.figure(figsize=(w,h), dpi=dpi) fig.figimage(sub, cmap=cm.gist_earth) plt.savefig('out.png') ``` Although I could adapt this to get what I want (probably using StringIO to get the PIL image), I wonder if there is not a simpler way to do that, since it seems to be a very natural problem of image visualization. Let's say, something like this: ``` colored_PIL_image = magic_function(array, cmap) ```", "response":"Quite a busy one-liner, but here it is: First ensure your NumPy array, myarray, is normalised with the max value at 1.0. Apply the colormap directly to myarray. Rescale to the 0-255 range. Convert to integers, using np.uint8(). Use Image.fromarray(). And you're done: ``` from PIL import Image from matplotlib import cm im = Image.fromarray(np.uint8(cm.gist_earth(myarray)*255)) ``` (Comparison images omitted: one saved with plt.savefig(), one with im.save().)", "best_answers_score":0.8, "library_name":"numpy", "question_url":"https:\/\/stackoverflow.com\/questions\/10965417\/how-to-convert-a-numpy-array-to-pil-image-applying-matplotlib-colormap", "best_answers_votes":400, "question_length":775, "response_length":418 }, { "question":"numpy matrix vector multiplication [duplicate] This question already has answers here: how does multiplication differ for NumPy Matrix vs Array classes? (8 answers) Closed 11 years ago. When I multiply two numpy arrays of sizes (n x n)*(n x 1), I get a matrix of size (n x n). Following normal matrix multiplication rules, an (n x 1) vector is expected, but I simply cannot find any information about how this is done in Python's Numpy module. The thing is that I don't want to implement it manually to preserve the speed of the program. 
Example code is shown below: ``` a = np.array([[5, 1, 3], [1, 1, 1], [1, 2, 1]]) b = np.array([1, 2, 3]) print a*b >> [[5 2 9] [1 2 3] [1 4 3]] ``` What I want is: ``` print a*b >> [16 6 8] ```", "response":"Simplest solution Use numpy.dot or a.dot(b). See the documentation here. ``` >>> a = np.array([[ 5, 1 ,3], [ 1, 1 ,1], [ 1, 2 ,1]]) >>> b = np.array([1, 2, 3]) >>> print a.dot(b) array([16, 6, 8]) ``` This occurs because numpy arrays are not matrices, and the standard operations *, +, -, \/ work element-wise on arrays. Note that while you can use numpy.matrix (as of early 2021) where * will be treated like standard matrix multiplication, numpy.matrix is deprecated and may be removed in future releases.. See the note in its documentation (reproduced below): It is no longer recommended to use this class, even for linear algebra. Instead use regular arrays. The class may be removed in the future. Thanks @HopeKing. Other Solutions Also know there are other options: As noted below, if using python3.5+ and numpy v1.10+, the @ operator works as you'd expect: ``` >>> print(a @ b) array([16, 6, 8]) ``` If you want overkill, you can use numpy.einsum. The documentation will give you a flavor for how it works, but honestly, I didn't fully understand how to use it until reading this answer and just playing around with it on my own. ``` >>> np.einsum('ji,i->j', a, b) array([16, 6, 8]) ``` As of mid 2016 (numpy 1.10.1), you can try the experimental numpy.matmul, which works like numpy.dot with two major exceptions: no scalar multiplication but it works with stacks of matrices. ``` >>> np.matmul(a, b) array([16, 6, 8]) ``` numpy.inner functions the same way as numpy.dot for matrix-vector multiplication but behaves differently for matrix-matrix and tensor multiplication (see Wikipedia regarding the differences between the inner product and dot product in general or see this SO answer regarding numpy's implementations). 
``` >>> np.inner(a, b) array([16, 6, 8]) # Beware using for matrix-matrix multiplication though! >>> b = a.T >>> np.dot(a, b) array([[35, 9, 10], [ 9, 3, 4], [10, 4, 6]]) >>> np.inner(a, b) array([[29, 12, 19], [ 7, 4, 5], [ 8, 5, 6]]) ``` If you have multiple 2D arrays to dot together, you may consider the np.linalg.multi_dot function, which simplifies the syntax of many nested np.dots. Note that this only works with 2D arrays (i.e. not for matrix-vector multiplication). ``` >>> np.dot(np.dot(a, a.T), a).dot(a.T) array([[1406, 382, 446], [ 382, 106, 126], [ 446, 126, 152]]) >>> np.linalg.multi_dot((a, a.T, a, a.T)) array([[1406, 382, 446], [ 382, 106, 126], [ 446, 126, 152]]) ``` Rarer options for edge cases If you have tensors (arrays of dimension greater than or equal to one), you can use numpy.tensordot with the optional argument axes=1: ``` >>> np.tensordot(a, b, axes=1) array([16, 6, 8]) ``` Don't use numpy.vdot if you have a matrix of complex numbers, as the matrix will be flattened to a 1D array, then it will try to find the complex conjugate dot product between your flattened matrix and vector (which will fail due to a size mismatch n*m vs n).", "best_answers_score":0.8, "library_name":"numpy", "question_url":"https:\/\/stackoverflow.com\/questions\/21562986\/numpy-matrix-vector-multiplication", "best_answers_votes":471, "question_length":731, "response_length":2901 }, { "question":"Transposing a 1D NumPy array I use Python and NumPy and have some problems with \"transpose\": ``` import numpy as np a = np.array([5,4]) print(a) print(a.T) ``` Invoking a.T is not transposing the array. If a is for example [[],[]] then it transposes correctly, but I need the transpose of [...,...,...].", "response":"It's working exactly as it's supposed to. The transpose of a 1D array is still a 1D array! (If you're used to matlab, it fundamentally doesn't have a concept of a 1D array. Matlab's \"1D\" arrays are 2D.) 
If you want to turn your 1D vector into a 2D array and then transpose it, just slice it with np.newaxis (or None, they're the same, newaxis is just more readable). ``` import numpy as np a = np.array([5,4])[np.newaxis] print(a) print(a.T) ``` Generally speaking though, you don't ever need to worry about this. Adding the extra dimension is usually not what you want, if you're just doing it out of habit. Numpy will automatically broadcast a 1D array when doing various calculations. There's usually no need to distinguish between a row vector and a column vector (neither of which are vectors. They're both 2D!) when you just want a vector.", "best_answers_score":0.8, "library_name":"numpy", "question_url":"https:\/\/stackoverflow.com\/questions\/5954603\/transposing-a-1d-numpy-array", "best_answers_votes":318, "question_length":303, "response_length":845 }, { "question":"Relationship between SciPy and NumPy SciPy appears to provide most (but not all [1]) of NumPy's functions in its own namespace. In other words, if there's a function named numpy.foo, there's almost certainly a scipy.foo. Most of the time, the two appear to be exactly the same, oftentimes even pointing to the same function object. Sometimes, they're different. To give an example that came up recently: numpy.log10 is a ufunc that returns NaNs for negative arguments; scipy.log10 returns complex values for negative arguments and doesn't appear to be a ufunc. The same can be said about log, log2 and logn, but not about log1p [2]. On the other hand, numpy.exp and scipy.exp appear to be different names for the same ufunc. This is also true of scipy.log1p and numpy.log1p. Another example is numpy.linalg.solve vs scipy.linalg.solve. They're similar, but the latter offers some additional features over the former. Why the apparent duplication? If this is meant to be a wholesale import of numpy into the scipy namespace, why the subtle differences in behaviour and the missing functions? 
Is there some overarching logic that would help clear up the confusion? [1] numpy.min, numpy.max, numpy.abs and a few others have no counterparts in the scipy namespace. [2] Tested using NumPy 1.5.1 and SciPy 0.9.0rc2.", "response":"Last time I checked it, the scipy __init__ method executes a ``` from numpy import * ``` so that the whole numpy namespace is included into scipy when the scipy module is imported. The log10 behavior you are describing is interesting, because both versions are coming from numpy. One is a ufunc, the other is a numpy.lib function. Why scipy is preferring the library function over the ufunc, I don't know off the top of my head. EDIT: In fact, I can answer the log10 question. Looking in the scipy __init__ method I see this: ``` # Import numpy symbols to scipy name space import numpy as _num from numpy import oldnumeric from numpy import * from numpy.random import rand, randn from numpy.fft import fft, ifft from numpy.lib.scimath import * ``` The log10 function you get in scipy comes from numpy.lib.scimath. Looking at that code, it says: ``` \"\"\" Wrapper functions to more user-friendly calling of certain math functions whose output data-type is different than the input data-type in certain domains of the input. For example, for functions like log() with branch cuts, the versions in this module provide the mathematically valid answers in the complex plane: >>> import math >>> from numpy.lib import scimath >>> scimath.log(-math.exp(1)) == (1+1j*math.pi) True Similarly, sqrt(), other base logarithms, power() and trig functions are correctly handled. See their respective docstrings for specific examples. \"\"\" ``` It seems that module overlays the base numpy ufuncs for sqrt, log, log2, logn, log10, power, arccos, arcsin, and arctanh. That explains the behavior you are seeing. 
The underlying design reason why it is done like that is probably buried in a mailing list post somewhere.", "best_answers_score":0.8, "library_name":"numpy", "question_url":"https:\/\/stackoverflow.com\/questions\/6200910\/relationship-between-scipy-and-numpy", "best_answers_votes":156, "question_length":1309, "response_length":1697 }, { "question":"Counting unique values in a column in pandas dataframe like in Qlik? If I have a table like this: ``` df = pd.DataFrame({ 'hID': [101, 102, 103, 101, 102, 104, 105, 101], 'dID': [10, 11, 12, 10, 11, 10, 12, 10], 'uID': ['James', 'Henry', 'Abe', 'James', 'Henry', 'Brian', 'Claude', 'James'], 'mID': ['A', 'B', 'A', 'B', 'A', 'A', 'A', 'C'] }) ``` I can do count(distinct hID) in Qlik to come up with count of 5 for unique hID. How do I do that in python using a pandas dataframe? Or maybe a numpy array? Similarly, if were to do count(hID) I will get 8 in Qlik. What is the equivalent way to do it in pandas?", "response":"Count distinct values, use nunique: ``` df['hID'].nunique() 5 ``` Count only non-null values, use count: ``` df['hID'].count() 8 ``` Count total values including null values, use the size attribute: ``` df['hID'].size 8 ``` Edit to add condition Use boolean indexing: ``` df.loc[df['mID']=='A','hID'].agg(['nunique','count','size']) ``` OR using query: ``` df.query('mID == \"A\"')['hID'].agg(['nunique','count','size']) ``` Output: ``` nunique 5 count 5 size 5 Name: hID, dtype: int64 ```", "best_answers_score":0.8, "library_name":"numpy", "question_url":"https:\/\/stackoverflow.com\/questions\/45759966\/counting-unique-values-in-a-column-in-pandas-dataframe-like-in-qlik", "best_answers_votes":419, "question_length":608, "response_length":487 }, { "question":"Is it possible to use argsort in descending order? Consider the following code: ``` avgDists = np.array([1, 8, 6, 9, 4]) ids = avgDists.argsort()[:n] ``` This gives me indices of the n smallest elements. 
Is it possible to use this same argsort in descending order to get the indices of n highest elements?", "response":"If you negate an array, the lowest elements become the highest elements and vice-versa. Therefore, the indices of the n highest elements are: ``` (-avgDists).argsort()[:n] ``` Another way to reason about this, as mentioned in the comments, is to observe that the big elements are coming last in the argsort. So, you can read from the tail of the argsort to find the n highest elements: ``` avgDists.argsort()[::-1][:n] ``` Both methods are O(n log n) in time complexity, because the argsort call is the dominant term here. But the second approach has a nice advantage: it replaces an O(n) negation of the array with an O(1) slice. If you're working with small arrays inside loops then you may get some performance gains from avoiding that negation, and if you're working with huge arrays then you can save on memory usage because the negation creates a copy of the entire array. Note that these methods do not always give equivalent results: if a stable sort implementation is requested to argsort, e.g. by passing the keyword argument kind='mergesort', then the first strategy will preserve the sorting stability, but the second strategy will break stability (i.e. the positions of equal items will get reversed). Example timings: Using a small array of 100 floats and a length 30 tail, the view method was about 15% faster ``` >>> avgDists = np.random.rand(100) >>> n = 30 >>> timeit (-avgDists).argsort()[:n] 1.93 \u00b5s \u00b1 6.68 ns per loop (mean \u00b1 std. dev. of 7 runs, 1000000 loops each) >>> timeit avgDists.argsort()[::-1][:n] 1.64 \u00b5s \u00b1 3.39 ns per loop (mean \u00b1 std. dev. of 7 runs, 1000000 loops each) >>> timeit avgDists.argsort()[-n:][::-1] 1.64 \u00b5s \u00b1 3.66 ns per loop (mean \u00b1 std. dev. 
of 7 runs, 1000000 loops each) ``` For larger arrays, the argsort is dominant and there is no significant timing difference ``` >>> avgDists = np.random.rand(1000) >>> n = 300 >>> timeit (-avgDists).argsort()[:n] 21.9 \u00b5s \u00b1 51.2 ns per loop (mean \u00b1 std. dev. of 7 runs, 10000 loops each) >>> timeit avgDists.argsort()[::-1][:n] 21.7 \u00b5s \u00b1 33.3 ns per loop (mean \u00b1 std. dev. of 7 runs, 10000 loops each) >>> timeit avgDists.argsort()[-n:][::-1] 21.9 \u00b5s \u00b1 37.1 ns per loop (mean \u00b1 std. dev. of 7 runs, 10000 loops each) ``` Please note that the comment from nedim below is incorrect. Whether to truncate before or after reversing makes no difference in efficiency, since both of these operations are only striding a view of the array differently and not actually copying data.", "best_answers_score":0.8, "library_name":"numpy", "question_url":"https:\/\/stackoverflow.com\/questions\/16486252\/is-it-possible-to-use-argsort-in-descending-order", "best_answers_votes":366, "question_length":305, "response_length":2462 }, { "question":"np.mean() vs np.average() in Python NumPy? I notice that ``` In [30]: np.mean([1, 2, 3]) Out[30]: 2.0 In [31]: np.average([1, 2, 3]) Out[31]: 2.0 ``` However, there should be some differences, since after all they are two different functions. What are the differences between them?", "response":"np.average takes an optional weight parameter. If it is not supplied they are equivalent. Take a look at the source code: Mean, Average np.mean: ``` try: mean = a.mean except AttributeError: return _wrapit(a, 'mean', axis, dtype, out) return mean(axis, dtype, out) ``` np.average: ``` ... if weights is None : avg = a.mean(axis) scl = avg.dtype.type(a.size\/avg.size) else: #code that does weighted mean here if returned: #returned is another optional argument scl = np.multiply(avg, 0) + scl return avg, scl else: return avg ... 
```", "best_answers_score":0.8, "library_name":"numpy", "question_url":"https:\/\/stackoverflow.com\/questions\/20054243\/np-mean-vs-np-average-in-python-numpy", "best_answers_votes":240, "question_length":281, "response_length":532 }, { "question":"How to count the number of true elements in a NumPy bool array I have a NumPy array 'boolarr' of boolean type. I want to count the number of elements whose values are True. Is there a NumPy or Python routine dedicated for this task? Or, do I need to iterate over the elements in my script?", "response":"You have multiple options. Two options are the following. ``` boolarr.sum() numpy.count_nonzero(boolarr) ``` Here's an example: ``` >>> import numpy as np >>> boolarr = np.array([[0, 0, 1], [1, 0, 1], [1, 0, 1]], dtype=np.bool) >>> boolarr array([[False, False, True], [ True, False, True], [ True, False, True]], dtype=bool) >>> boolarr.sum() 5 ``` Of course, that is a bool-specific answer. More generally, you can use numpy.count_nonzero. ``` >>> np.count_nonzero(boolarr) 5 ```", "best_answers_score":0.8, "library_name":"numpy", "question_url":"https:\/\/stackoverflow.com\/questions\/8364674\/how-to-count-the-number-of-true-elements-in-a-numpy-bool-array", "best_answers_votes":365, "question_length":289, "response_length":481 }, { "question":"How to split data into 3 sets (train, validation and test)? I have a pandas dataframe and I wish to divide it to 3 separate sets. I know that using train_test_split from sklearn.cross_validation, one can divide the data in two sets (train and test). However, I couldn't find any solution about splitting the data into three sets. Preferably, I'd like to have the indices of the original data. I know that a workaround would be to use train_test_split two times and somehow adjust the indices. But is there a more standard \/ built-in way to split the data into 3 sets instead of 2?", "response":"Numpy solution. 
We will shuffle the whole dataset first (df.sample(frac=1, random_state=42)) and then split our data set into the following parts: 60% - train set, 20% - validation set, 20% - test set ``` In [305]: train, validate, test = \\ np.split(df.sample(frac=1, random_state=42), [int(.6*len(df)), int(.8*len(df))]) In [306]: train Out[306]: A B C D E 0 0.046919 0.792216 0.206294 0.440346 0.038960 2 0.301010 0.625697 0.604724 0.936968 0.870064 1 0.642237 0.690403 0.813658 0.525379 0.396053 9 0.488484 0.389640 0.599637 0.122919 0.106505 8 0.842717 0.793315 0.554084 0.100361 0.367465 7 0.185214 0.603661 0.217677 0.281780 0.938540 In [307]: validate Out[307]: A B C D E 5 0.806176 0.008896 0.362878 0.058903 0.026328 6 0.145777 0.485765 0.589272 0.806329 0.703479 In [308]: test Out[308]: A B C D E 4 0.521640 0.332210 0.370177 0.859169 0.401087 3 0.333348 0.964011 0.083498 0.670386 0.169619 ``` [int(.6*len(df)), int(.8*len(df))] - is an indices_or_sections array for numpy.split(). Here is a small demo for np.split() usage - let's split 20-elements array into the following parts: 80%, 10%, 10%: ``` In [45]: a = np.arange(1, 21) In [46]: a Out[46]: array([ 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20]) In [47]: np.split(a, [int(.8 * len(a)), int(.9 * len(a))]) Out[47]: [array([ 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16]), array([17, 18]), array([19, 20])] ```", "best_answers_score":0.8, "library_name":"numpy", "question_url":"https:\/\/stackoverflow.com\/questions\/38250710\/how-to-split-data-into-3-sets-train-validation-and-test", "best_answers_votes":295, "question_length":580, "response_length":1414 }, { "question":"How to flatten only some dimensions of a numpy array Is there a quick way to \"sub-flatten\" or flatten only some of the first dimensions in a numpy array? For example, given a numpy array of dimensions (50,100,25), the resultant dimensions would be (5000,25)", "response":"Take a look at numpy.reshape . 
``` >>> arr = numpy.zeros((50,100,25)) >>> arr.shape # (50, 100, 25) >>> new_arr = arr.reshape(5000,25) >>> new_arr.shape # (5000, 25) # One shape dimension can be -1. # In this case, the value is inferred from # the length of the array and remaining dimensions. >>> another_arr = arr.reshape(-1, arr.shape[-1]) >>> another_arr.shape # (5000, 25) ```", "best_answers_score":0.8, "library_name":"numpy", "question_url":"https:\/\/stackoverflow.com\/questions\/18757742\/how-to-flatten-only-some-dimensions-of-a-numpy-array", "best_answers_votes":222, "question_length":257, "response_length":381 }, { "question":"How to do exponential and logarithmic curve fitting in Python? I found only polynomial fitting I have a set of data and I want to compare which line describes it best (polynomials of different orders, exponential or logarithmic). I use Python and Numpy and for polynomial fitting there is a function polyfit(). But I found no such functions for exponential and logarithmic fitting. Are there any? Or how to solve it otherwise?", "response":"For fitting y = A + B log x, just fit y against (log x). ``` >>> x = numpy.array([1, 7, 20, 50, 79]) >>> y = numpy.array([10, 19, 30, 35, 51]) >>> numpy.polyfit(numpy.log(x), y, 1) array([ 8.46295607, 6.61867463]) # y \u2248 8.46 log(x) + 6.62 ``` For fitting y = AeBx, take the logarithm of both side gives log y = log A + Bx. So fit (log y) against x. Note that fitting (log y) as if it is linear will emphasize small values of y, causing large deviation for large y. This is because polyfit (linear regression) works by minimizing \u2211i (\u0394Y)2 = \u2211i (Yi \u2212 \u0176i)2. When Yi = log yi, the residues \u0394Yi = \u0394(log yi) \u2248 \u0394yi \/ |yi|. So even if polyfit makes a very bad decision for large y, the \"divide-by-|y|\" factor will compensate for it, causing polyfit favors small values. This could be alleviated by giving each entry a \"weight\" proportional to y. 
polyfit supports weighted-least-squares via the w keyword argument. ``` >>> x = numpy.array([10, 19, 30, 35, 51]) >>> y = numpy.array([1, 7, 20, 50, 79]) >>> numpy.polyfit(x, numpy.log(y), 1) array([ 0.10502711, -0.40116352]) # y \u2248 exp(-0.401) * exp(0.105 * x) = 0.670 * exp(0.105 * x) # (^ biased towards small values) >>> numpy.polyfit(x, numpy.log(y), 1, w=numpy.sqrt(y)) array([ 0.06009446, 1.41648096]) # y \u2248 exp(1.42) * exp(0.0601 * x) = 4.12 * exp(0.0601 * x) # (^ not so biased) ``` Note that Excel, LibreOffice and most scientific calculators typically use the unweighted (biased) formula for the exponential regression \/ trend lines. If you want your results to be compatible with these platforms, do not include the weights even if it provides better results. Now, if you can use scipy, you could use scipy.optimize.curve_fit to fit any model without transformations. For y = A + B log x the result is the same as the transformation method: ``` >>> x = numpy.array([1, 7, 20, 50, 79]) >>> y = numpy.array([10, 19, 30, 35, 51]) >>> scipy.optimize.curve_fit(lambda t,a,b: a+b*numpy.log(t), x, y) (array([ 6.61867467, 8.46295606]), array([[ 28.15948002, -7.89609542], [ -7.89609542, 2.9857172 ]])) # y \u2248 6.62 + 8.46 log(x) ``` For y = AeBx, however, we can get a better fit since it computes \u0394(log y) directly. But we need to provide an initialize guess so curve_fit can reach the desired local minimum. ``` >>> x = numpy.array([10, 19, 30, 35, 51]) >>> y = numpy.array([1, 7, 20, 50, 79]) >>> scipy.optimize.curve_fit(lambda t,a,b: a*numpy.exp(b*t), x, y) (array([ 5.60728326e-21, 9.99993501e-01]), array([[ 4.14809412e-27, -1.45078961e-08], [ -1.45078961e-08, 5.07411462e+10]])) # oops, definitely wrong. >>> scipy.optimize.curve_fit(lambda t,a,b: a*numpy.exp(b*t), x, y, p0=(4, 0.1)) (array([ 4.88003249, 0.05531256]), array([[ 1.01261314e+01, -4.31940132e-02], [ -4.31940132e-02, 1.91188656e-04]])) # y \u2248 4.88 exp(0.0553 x). much better. 
```", "best_answers_score":0.8, "library_name":"numpy", "question_url":"https:\/\/stackoverflow.com\/questions\/3433486\/how-to-do-exponential-and-logarithmic-curve-fitting-in-python-i-found-only-poly", "best_answers_votes":328, "question_length":426, "response_length":2791 }, { "question":"What is the inverse function of zip in python? [duplicate] This question already has answers here: Transpose\/Unzip Function (inverse of zip)? (14 answers) Closed 2 years ago. I've used the zip function from the Numpy library to sort tuples and now I have a list containing all the tuples. I had since modified that list and now I would like to restore the tuples so I can use my data. How can I do this?", "response":"``` lst1, lst2 = zip(*zipped_list) ``` should give you the unzipped list. *zipped_list unpacks the zipped_list object. it then passes all the tuples from the zipped_list object to zip, which just packs them back up as they were when you passed them in. so if: ``` a = [1,2,3] b = [4,5,6] ``` then zipped_list = zip(a,b) gives you: ``` [(1,4), (2,5), (3,6)] ``` and *zipped_list gives you back ``` (1,4), (2,5), (3,6) ``` zipping that with zip(*zipped_list) gives you back the two collections: ``` [(1, 2, 3), (4, 5, 6)] ```", "best_answers_score":0.8, "library_name":"numpy", "question_url":"https:\/\/stackoverflow.com\/questions\/13635032\/what-is-the-inverse-function-of-zip-in-python", "best_answers_votes":496, "question_length":403, "response_length":523 }, { "question":"Concatenate a NumPy array to another NumPy array I have a numpy_array. Something like [ a b c ]. And then I want to concatenate it with another NumPy array (just like we create a list of lists). How do we create a NumPy array containing NumPy arrays? 
I tried to do the following without any luck ``` >>> M = np.array([]) >>> M array([], dtype=float64) >>> M.append(a,axis=0) Traceback (most recent call last): File \"\", line 1, in AttributeError: 'numpy.ndarray' object has no attribute 'append' >>> a array([1, 2, 3]) ```", "response":"``` In [1]: import numpy as np In [2]: a = np.array([[1, 2, 3], [4, 5, 6]]) In [3]: b = np.array([[9, 8, 7], [6, 5, 4]]) In [4]: np.concatenate((a, b)) Out[4]: array([[1, 2, 3], [4, 5, 6], [9, 8, 7], [6, 5, 4]]) ``` or this: ``` In [1]: a = np.array([1, 2, 3]) In [2]: b = np.array([4, 5, 6]) In [3]: np.vstack((a, b)) Out[3]: array([[1, 2, 3], [4, 5, 6]]) ```", "best_answers_score":0.8, "library_name":"numpy", "question_url":"https:\/\/stackoverflow.com\/questions\/9775297\/concatenate-a-numpy-array-to-another-numpy-array", "best_answers_votes":294, "question_length":522, "response_length":360 }, { "question":"How do I catch a numpy warning like it's an exception (not just for testing)? I have to make a Lagrange polynomial in Python for a project I'm doing. I'm doing a barycentric style one to avoid using an explicit for-loop as opposed to a Newton's divided difference style one. The problem I have is that I need to catch a division by zero, but Python (or maybe numpy) just makes it a warning instead of a normal exception. So, what I need to know how to do is to catch this warning as if it were an exception. The related questions to this I found on this site were answered not in the way I needed. 
Here's my code: ``` import numpy as np import matplotlib.pyplot as plt import warnings class Lagrange: def __init__(self, xPts, yPts): self.xPts = np.array(xPts) self.yPts = np.array(yPts) self.degree = len(xPts)-1 self.weights = np.array([np.product([x_j - x_i for x_j in xPts if x_j != x_i]) for x_i in xPts]) def __call__(self, x): warnings.filterwarnings(\"error\") try: bigNumerator = np.product(x - self.xPts) numerators = np.array([bigNumerator\/(x - x_j) for x_j in self.xPts]) return sum(numerators\/self.weights*self.yPts) except Exception, e: # Catch division by 0. Only possible in 'numerators' array return yPts[np.where(xPts == x)[0][0]] L = Lagrange([-1,0,1],[1,0,1]) # Creates quadratic poly L(x) = x^2 L(1) # This should catch an error, then return 1. ``` When this code is executed, the output I get is: ``` Warning: divide by zero encountered in int_scalars ``` That's the warning I want to catch. It should occur inside the list comprehension.", "response":"It seems that your configuration is using the print option for numpy.seterr: ``` >>> import numpy as np >>> np.array([1])\/0 #'warn' mode __main__:1: RuntimeWarning: divide by zero encountered in divide array([0]) >>> np.seterr(all='print') {'over': 'warn', 'divide': 'warn', 'invalid': 'warn', 'under': 'ignore'} >>> np.array([1])\/0 #'print' mode Warning: divide by zero encountered in divide array([0]) ``` This means that the warning you see is not a real warning, but it's just some characters printed to stdout(see the documentation for seterr). If you want to catch it you can: Use numpy.seterr(all='raise') which will directly raise the exception. This however changes the behaviour of all the operations, so it's a pretty big change in behaviour. Use numpy.seterr(all='warn'), which will transform the printed warning in a real warning and you'll be able to use the above solution to localize this change in behaviour. 
Once you actually have a warning, you can use the warnings module to control how the warnings should be treated: ``` >>> import warnings >>> >>> warnings.filterwarnings('error') >>> >>> try: ... warnings.warn(Warning()) ... except Warning: ... print 'Warning was raised as an exception!' ... Warning was raised as an exception! ``` Read carefully the documentation for filterwarnings since it allows you to filter only the warning you want and has other options. I'd also consider looking at catch_warnings which is a context manager which automatically resets the original filterwarnings function: ``` >>> import warnings >>> with warnings.catch_warnings(): ... warnings.filterwarnings('error') ... try: ... warnings.warn(Warning()) ... except Warning: print 'Raised!' ... Raised! >>> try: ... warnings.warn(Warning()) ... except Warning: print 'Not raised!' ... __main__:2: Warning: ```", "best_answers_score":0.8, "library_name":"numpy", "question_url":"https:\/\/stackoverflow.com\/questions\/15933741\/how-do-i-catch-a-numpy-warning-like-its-an-exception-not-just-for-testing", "best_answers_votes":267, "question_length":1557, "response_length":1814 }, { "question":"Extracting specific columns in numpy array This is an easy question but say I have an MxN matrix. All I want to do is extract specific columns and store them in another numpy array but I get invalid syntax errors. Here is the code: ``` extractedData = data[[:,1],[:,9]]. ``` It seems like the above line should suffice but I guess not. I looked around but couldn't find anything syntax wise regarding this specific scenario.", "response":"I assume you wanted columns 1 and 9? 
To select multiple columns at once, use ``` X = data[:, [1, 9]] ``` To select one at a time, use ``` x, y = data[:, 1], data[:, 9] ``` With names: ``` data[:, ['Column Name1','Column Name2']] ``` You can get the names from data.dtype.names\u2026", "best_answers_score":0.8, "library_name":"numpy", "question_url":"https:\/\/stackoverflow.com\/questions\/8386675\/extracting-specific-columns-in-numpy-array", "best_answers_votes":398, "question_length":424, "response_length":277 }, { "question":"NumPy or Pandas: Keeping array type as integer while having a NaN value Is there a preferred way to keep the data type of a numpy array fixed as int (or int64 or whatever), while still having an element inside listed as numpy.NaN? In particular, I am converting an in-house data structure to a Pandas DataFrame. In our structure, we have integer-type columns that still have NaN's (but the dtype of the column is int). It seems to recast everything as a float if we make this a DataFrame, but we'd really like to be int. Thoughts? Things tried: I tried using the from_records() function under pandas.DataFrame, with coerce_float=False and this did not help. I also tried using NumPy masked arrays, with NaN fill_value, which also did not work. All of these caused the column data type to become a float.", "response":"NaN can't be stored in an integer array. 
This is a known limitation of pandas at the moment; I have been waiting for progress to be made with NA values in NumPy (similar to NAs in R), but it will be at least 6 months to a year before NumPy gets these features, it seems: http:\/\/pandas.pydata.org\/pandas-docs\/stable\/gotchas.html#support-for-integer-na (This feature has been added beginning with version 0.24 of pandas, but note it requires the use of extension dtype Int64 (capitalized), rather than the default dtype int64 (lower case): https:\/\/pandas.pydata.org\/pandas-docs\/version\/0.24\/whatsnew\/v0.24.0.html#optional-integer-na-support )", "best_answers_score":0.8, "library_name":"numpy", "question_url":"https:\/\/stackoverflow.com\/questions\/11548005\/numpy-or-pandas-keeping-array-type-as-integer-while-having-a-nan-value", "best_answers_votes":128, "question_length":803, "response_length":640 }, { "question":"Numpy first occurrence of value greater than existing value I have a 1D array in numpy and I want to find the position of the index where a value exceeds the value in numpy array. E.g. ``` aa = range(-10,10) ``` Find position in aa where, the value 5 gets exceeded.", "response":"This is a little faster (and looks nicer) ``` np.argmax(aa>5) ``` Since argmax will stop at the first True (\"In case of multiple occurrences of the maximum values, the indices corresponding to the first occurrence are returned.\") and doesn't save another list. 
``` In [2]: N = 10000 In [3]: aa = np.arange(-N,N) In [4]: timeit np.argmax(aa>N\/2) 100000 loops, best of 3: 52.3 us per loop In [5]: timeit np.where(aa>N\/2)[0][0] 10000 loops, best of 3: 141 us per loop In [6]: timeit np.nonzero(aa>N\/2)[0][0] 10000 loops, best of 3: 142 us per loop ```", "best_answers_score":0.8, "library_name":"numpy", "question_url":"https:\/\/stackoverflow.com\/questions\/16243955\/numpy-first-occurrence-of-value-greater-than-existing-value", "best_answers_votes":315, "question_length":265, "response_length":548 }, { "question":"Numpy: Get random set of rows from 2D array I have a very large 2D array which looks something like this: ``` a= [[a1, b1, c1], [a2, b2, c2], ..., [an, bn, cn]] ``` Using numpy, is there an easy way to get a new 2D array with, e.g., 2 random rows from the initial array a (without replacement)? e.g. ``` b= [[a4, b4, c4], [a99, b99, c99]] ```", "response":"``` >>> A = np.random.randint(5, size=(10,3)) >>> A array([[1, 3, 0], [3, 2, 0], [0, 2, 1], [1, 1, 4], [3, 2, 2], [0, 1, 0], [1, 3, 1], [0, 4, 1], [2, 4, 2], [3, 3, 1]]) >>> idx = np.random.randint(10, size=2) >>> idx array([7, 6]) >>> A[idx,:] array([[0, 4, 1], [1, 3, 1]]) ``` Putting it together for a general case: ``` A[np.random.randint(A.shape[0], size=2), :] ``` For non replacement (numpy 1.7.0+): ``` A[np.random.choice(A.shape[0], 2, replace=False), :] ``` I do not believe there is a good way to generate random list without replacement before 1.7. 
Perhaps you can setup a small definition that ensures the two values are not the same.", "best_answers_score":0.8, "library_name":"numpy", "question_url":"https:\/\/stackoverflow.com\/questions\/14262654\/numpy-get-random-set-of-rows-from-2d-array", "best_answers_votes":298, "question_length":342, "response_length":647 }, { "question":"How to add a new row to an empty numpy array Using standard Python arrays, I can do the following: ``` arr = [] arr.append([1,2,3]) arr.append([4,5,6]) # arr is now [[1,2,3],[4,5,6]] ``` However, I cannot do the same thing in numpy. For example: ``` arr = np.array([]) arr = np.append(arr, np.array([1,2,3])) arr = np.append(arr, np.array([4,5,6])) # arr is now [1,2,3,4,5,6] ``` I also looked into vstack, but when I use vstack on an empty array, I get: ``` ValueError: all the input array dimensions except for the concatenation axis must match exactly ``` So how do I do append a new row to an empty array in numpy?", "response":"The way to \"start\" the array that you want is: ``` arr = np.empty((0,3), int) ``` Which is an empty array but it has the proper dimensionality. ``` >>> arr array([], shape=(0, 3), dtype=int64) ``` Then be sure to append along axis 0: ``` arr = np.append(arr, np.array([[1,2,3]]), axis=0) arr = np.append(arr, np.array([[4,5,6]]), axis=0) ``` But, @jonrsharpe is right. 
In fact, if you're going to be appending in a loop, it would be much faster to append to a list as in your first example, then convert to a numpy array at the end, since you're really not using numpy as intended during the loop: ``` In [210]: %%timeit .....: l = [] .....: for i in xrange(1000): .....: l.append([3*i+1,3*i+2,3*i+3]) .....: l = np.asarray(l) .....: 1000 loops, best of 3: 1.18 ms per loop In [211]: %%timeit .....: a = np.empty((0,3), int) .....: for i in xrange(1000): .....: a = np.append(a, 3*i+np.array([[1,2,3]]), 0) .....: 100 loops, best of 3: 18.5 ms per loop In [214]: np.allclose(a, l) Out[214]: True ``` The numpythonic way to do it depends on your application, but it would be more like: ``` In [220]: timeit n = np.arange(1,3001).reshape(1000,3) 100000 loops, best of 3: 5.93 \u00b5s per loop In [221]: np.allclose(a, n) Out[221]: True ```", "best_answers_score":0.8, "library_name":"numpy", "question_url":"https:\/\/stackoverflow.com\/questions\/22392497\/how-to-add-a-new-row-to-an-empty-numpy-array", "best_answers_votes":343, "question_length":618, "response_length":1232 }, { "question":"How to install python modules without root access? I'm taking some university classes and have been given an 'instructional account', which is a school account I can ssh into to do work. I want to run my computationally intensive Numpy, matplotlib, scipy code on that machine, but I cannot install these modules because I am not a system administrator. How can I do the installation?", "response":"In most situations the best solution is to rely on the so-called \"user site\" location (see the PEP for details) by running: ``` pip install --user package_name ``` Below is a more \"manual\" way from my original answer, you do not need to read it if the above solution works for you. 
With easy_install you can do: ``` easy_install --prefix=$HOME\/local package_name ``` which will install into ``` $HOME\/local\/lib\/pythonX.Y\/site-packages ``` (the 'local' folder is a typical name many people use, but of course you may specify any folder you have permissions to write into). You will need to manually create ``` $HOME\/local\/lib\/pythonX.Y\/site-packages ``` and add it to your PYTHONPATH environment variable (otherwise easy_install will complain -- btw run the command above once to find the correct value for X.Y). If you are not using easy_install, look for a prefix option, most install scripts let you specify one. With pip you can use: ``` pip install --install-option=\"--prefix=$HOME\/local\" package_name ```", "best_answers_score":0.8, "library_name":"numpy", "question_url":"https:\/\/stackoverflow.com\/questions\/7465445\/how-to-install-python-modules-without-root-access", "best_answers_votes":322, "question_length":383, "response_length":1009 }, { "question":"Numpy where function multiple conditions I have an array of distances called dists. I want to select dists which are within a range. ``` dists[(np.where(dists >= r)) and (np.where(dists <= r + dr))] ``` However, this selects only for the condition ``` (np.where(dists <= r + dr)) ``` If I do the commands sequentially by using a temporary variable it works fine. Why does the above code not work, and how do I get it to work?", "response":"The best way in your particular case would just be to change your two criteria to one criterion: ``` dists[abs(dists - r - dr\/2.) <= dr\/2.] ``` It only computes one array and might thus be faster. As for why your original code does not work: np.where returns a tuple of index arrays, and Python's and does not combine arrays elementwise -- it simply evaluates the truthiness of each operand and returns the last one, so only the second condition ends up being applied. Use boolean masks combined with the elementwise & operator instead: ``` In [232]: dists = np.arange(0, 10, 0.5); r = 5; dr = 1 In [233]: np.where(dists >= r) Out[233]: (array([10, 11, 12, 13, 14, 15, 16, 17, 18, 19]),) In [236]: dists >= r Out[236]: array([False, False, False, False, False, False, False, False, False, False, True, True, True, True, True, True, True, True, True, True], dtype=bool) In [241]: dists[(dists >= r) & (dists <= r + dr)] Out[241]: array([ 5. , 5.5, 6.
]) ```", "best_answers_score":0.8, "library_name":"numpy", "question_url":"https:\/\/stackoverflow.com\/questions\/16343752\/numpy-where-function-multiple-conditions", "best_answers_votes":313, "question_length":425, "response_length":558 }, { "question":"AttributeError: module 'pkgutil' has no attribute 'ImpImporter'. Did you mean: 'zipimporter'? Earlier I installed some packages like Matplotlib, NumPy, pip (version 23.3.1), wheel (version 0.41.2), etc., and did some programming with those. I used the command C:\\Users\\UserName>pip list to find the list of packages that I have installed, and I am using Python 3.12.0 (by employing code C:\\Users\\UserName>py -V). I need to use pyspedas to analyse some data. I am following the instruction that that I received from site to install the package, with a variation (I am not sure whether it matters or not: I am using py, instead of python). The commands that I use, in the order, are: ```none py -m venv pyspedas .\\pyspedas\\Scripts\\activate pip install pyspedas ``` After the last step, I am getting the following output: ```none Collecting pyspedas Using cached pyspedas-1.4.47-py3-none-any.whl.metadata (14 kB) Collecting numpy>=1.19.5 (from pyspedas) Using cached numpy-1.26.1-cp312-cp312-win_amd64.whl.metadata (61 kB) Collecting requests (from pyspedas) Using cached requests-2.31.0-py3-none-any.whl.metadata (4.6 kB) Collecting geopack>=1.0.10 (from pyspedas) Using cached geopack-1.0.10-py3-none-any.whl (114 kB) Collecting cdflib=1.7.24 (from pyspedas) Using cached cdasws-1.7.43.tar.gz (21 kB) Installing build dependencies ... done Getting requirements to build wheel ... done Preparing metadata (pyproject.toml) ... done Collecting netCDF4>=1.6.2 (from pyspedas) Using cached netCDF4-1.6.5-cp312-cp312-win_amd64.whl.metadata (1.8 kB) Collecting pywavelets (from pyspedas) Using cached PyWavelets-1.4.1.tar.gz (4.6 MB) Installing build dependencies ... done Getting requirements to build wheel ... 
error error: subprocess-exited-with-error \u00d7 Getting requirements to build wheel did not run successfully. \u2502 exit code: 1 \u2570\u2500> [33 lines of output] Traceback (most recent call last): File \"C:\\Users\\UserName\\pyspedas\\Lib\\site-packages\\pip\\_vendor\\pyproject_hooks\\_in_process\\_in_process.py\", line 353, in main() File \"C:\\Users\\UserName\\pyspedas\\Lib\\site-packages\\pip\\_vendor\\pyproject_hooks\\_in_process\\_in_process.py\", line 335, in main json_out['return_val'] = hook(**hook_input['kwargs']) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File \"C:\\Users\\UserName\\pyspedas\\Lib\\site-packages\\pip\\_vendor\\pyproject_hooks\\_in_process\\_in_process.py\", line 112, in get_requires_for_build_wheel backend = _build_backend() ^^^^^^^^^^^^^^^^ File \"C:\\Users\\UserName\\pyspedas\\Lib\\site-packages\\pip\\_vendor\\pyproject_hooks\\_in_process\\_in_process.py\", line 77, in _build_backend obj = import_module(mod_path) ^^^^^^^^^^^^^^^^^^^^^^^ File \"C:\\Users\\UserName\\AppData\\Local\\Programs\\Python\\Python312\\Lib\\importlib\\__init__.py\", line 90, in import_module return _bootstrap._gcd_import(name[level:], package, level) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File \"\", line 1381, in _gcd_import File \"\", line 1354, in _find_and_load File \"\", line 1304, in _find_and_load_unlocked File \"\", line 488, in _call_with_frames_removed File \"\", line 1381, in _gcd_import File \"\", line 1354, in _find_and_load File \"\", line 1325, in _find_and_load_unlocked File \"\", line 929, in _load_unlocked File \"\", line 994, in exec_module File \"\", line 488, in _call_with_frames_removed File \"C:\\Users\\UserName\\AppData\\Local\\Temp\\pip-build-env-_lgbq70y\\overlay\\Lib\\site-packages\\setuptools\\__init__.py\", line 16, in import setuptools.version File \"C:\\Users\\UserName\\AppData\\Local\\Temp\\pip-build-env-_lgbq70y\\overlay\\Lib\\site-packages\\setuptools\\version.py\", line 1, in import pkg_resources File 
\"C:\\Users\\UserName\\AppData\\Local\\Temp\\pip-build-env-_lgbq70y\\overlay\\Lib\\site-packages\\pkg_resources\\__init__.py\", line 2191, in register_finder(pkgutil.ImpImporter, find_on_path) ^^^^^^^^^^^^^^^^^^^ AttributeError: module 'pkgutil' has no attribute 'ImpImporter'. Did you mean: 'zipimporter'? [end of output] note: This error originates from a subprocess, and is likely not a problem with pip. error: subprocess-exited-with-error \u00d7 Getting requirements to build wheel did not run successfully. \u2502 exit code: 1 \u2570\u2500> See above for output. note: This error originates from a subprocess, and is likely not a problem with pip. ``` After little bit of googling, I came to know that this issues was reported at multiple places, but none for this package. I did install wheel in the new environment as mentioned in the answer here, but the problem still persists. Instead of setting up a virtual environment, I simply executed the command py -m pip install pyspedas. But I am still getting the error. What I could gather is that the program has an issue with ```none Collecting pywavelets (from pyspedas) Using cached PyWavelets-1.4.1.tar.gz (4.6 MB) Installing build dependencies ... done ``` I am using IDLE in Windows 11.", "response":"Due to the removal of the long-deprecated pkgutil.ImpImporter class, the pip command may not work for Python 3.12. You just have to manually install pip for Python 3.12 ``` python -m ensurepip --upgrade python -m pip install --upgrade setuptools python -m pip install ``` In your virtual environment: ```none pip install --upgrade setuptools ``` Python comes with an ensurepip, which can install pip in a Python environment. 
https:\/\/pip.pypa.io\/en\/stable\/installation\/ On Linux\/macOS terminal: ```none python -m ensurepip --upgrade ``` On Windows: ```none py -m ensurepip --upgrade ``` also, make sure to upgrade pip: ``` py -m pip install --upgrade pip ``` To install numpy on Python 3.12, you must use numpy version 1.26.4 ``` pip install numpy==1.26.4 ``` https:\/\/github.com\/numpy\/numpy\/issues\/23808#issuecomment-1722440746 for Ubuntu ``` sudo apt install python3.12-dev ``` or ``` python3.12 -m pip install --upgrade setuptools ```", "best_answers_score":0.8, "library_name":"numpy", "question_url":"https:\/\/stackoverflow.com\/questions\/77364550\/attributeerror-module-pkgutil-has-no-attribute-impimporter-did-you-mean", "best_answers_votes":339, "question_length":4771, "response_length":936 }, { "question":"How to normalize a NumPy array to within a certain range? After doing some processing on an audio or image array, it needs to be normalized within a range before it can be written back to a file. This can be done like so: ``` # Normalize audio channels to between -1.0 and +1.0 audio[:,0] = audio[:,0]\/abs(audio[:,0]).max() audio[:,1] = audio[:,1]\/abs(audio[:,1]).max() # Normalize image to between 0 and 255 image = image\/(image.max()\/255.0) ``` Is there a less verbose, convenience function way to do this? matplotlib.colors.Normalize() doesn't seem to be related.", "response":"``` # Normalize audio channels to between -1.0 and +1.0 audio \/= np.max(np.abs(audio),axis=0) # Normalize image to between 0 and 255 image *= (255.0\/image.max()) ``` Using \/= and *= allows you to eliminate an intermediate temporary array, thus saving some memory. Multiplication is less expensive than division, so ``` image *= 255.0\/image.max() # Uses 1 division and image.size multiplications ``` is marginally faster than ``` image \/= image.max()\/255.0 # Uses 1+image.size divisions ``` Since we are using basic numpy methods here, I think this is about as efficient a solution in numpy as can be. 
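If you need a target range other than [0, 1] or [0, 255], the same idea generalizes to any [lo, hi]; here is a minimal sketch (the helper name normalize_range and the sample values are my own, not from the original answer, and it assumes a non-constant floating-point input):

```python
import numpy as np

def normalize_range(a, lo=0.0, hi=1.0):
    # Scale `a` linearly so its minimum maps to `lo` and its maximum to `hi`.
    # Assumes `a` is a float array that is not constant (max > min).
    a = a - a.min()
    a = a / a.max()
    return lo + a * (hi - lo)

x = np.array([2.0, 4.0, 6.0])
print(normalize_range(x, -1.0, 1.0))  # [-1.  0.  1.]
```

The two divisions-plus-shifts are just the closed form of mapping [min, max] onto [lo, hi].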
In-place operations do not change the dtype of the container array. Since the desired normalized values are floats, the audio and image arrays need to have floating-point dtype before the in-place operations are performed. If they are not already of floating-point dtype, you'll need to convert them using astype. For example, ``` image = image.astype('float64') ```", "best_answers_score":0.8, "library_name":"numpy", "question_url":"https:\/\/stackoverflow.com\/questions\/1735025\/how-to-normalize-a-numpy-array-to-within-a-certain-range", "best_answers_votes":218, "question_length":566, "response_length":973 }, { "question":"From ND to 1D arrays Say I have an array a: ``` a = np.array([[1,2,3], [4,5,6]]) array([[1, 2, 3], [4, 5, 6]]) ``` I would like to convert it to a 1D array (i.e. a column vector): ``` b = np.reshape(a, (1,np.product(a.shape))) ``` but this returns ``` array([[1, 2, 3, 4, 5, 6]]) ``` which is not the same as: ``` array([1, 2, 3, 4, 5, 6]) ``` I can take the first element of this array to manually convert it to a 1D array: ``` b = np.reshape(a, (1,np.product(a.shape)))[0] ``` but this requires me to know how many dimensions the original array has (and concatenate [0]'s when working with higher dimensions) Is there a dimensions-independent way of getting a column\/row vector from an arbitrary ndarray?", "response":"Use np.ravel (for a 1D view) or np.ndarray.flatten (for a 1D copy) or np.ndarray.flat (for a 1D iterator): ``` In [12]: a = np.array([[1,2,3], [4,5,6]]) In [13]: b = a.ravel() In [14]: b Out[14]: array([1, 2, 3, 4, 5, 6]) ``` Note that ravel() returns a view of a when possible. So modifying b also modifies a. ravel() returns a view when the 1D elements are contiguous in memory, but would return a copy if, for example, a were made from slicing another array using a non-unit step size (e.g. a = x[::2]).
If you want a copy rather than a view, use ``` In [15]: c = a.flatten() ``` If you just want an iterator, use np.ndarray.flat: ``` In [20]: d = a.flat In [21]: d Out[21]: In [22]: list(d) Out[22]: [1, 2, 3, 4, 5, 6] ```", "best_answers_score":0.8, "library_name":"numpy", "question_url":"https:\/\/stackoverflow.com\/questions\/13730468\/from-nd-to-1d-arrays", "best_answers_votes":396, "question_length":706, "response_length":728 }, { "question":"numpy max vs amax vs maximum numpy has three different functions which seem like they can be used for the same things --- except that numpy.maximum can only be used element-wise, while numpy.max and numpy.amax can be used on particular axes, or all elements. Why is there more than just numpy.max? Is there some subtlety to this in performance? (Similarly for min vs. amin vs. minimum)", "response":"np.max is just an alias for np.amax. This function only works on a single input array and finds the value of maximum element in that entire array (returning a scalar). Alternatively, it takes an axis argument and will find the maximum value along an axis of the input array (returning a new array). ``` >>> a = np.array([[0, 1, 6], [2, 4, 1]]) >>> np.max(a) 6 >>> np.max(a, axis=0) # max of each column array([2, 4, 6]) ``` The default behaviour of np.maximum is to take two arrays and compute their element-wise maximum. Here, 'compatible' means that one array can be broadcast to the other. For example: ``` >>> b = np.array([3, 6, 1]) >>> c = np.array([4, 2, 9]) >>> np.maximum(b, c) array([4, 6, 9]) ``` But np.maximum is also a universal function which means that it has other features and methods which come in useful when working with multidimensional arrays. For example you can compute the cumulative maximum over an array (or a particular axis of the array): ``` >>> d = np.array([2, 0, 3, -4, -2, 7, 9]) >>> np.maximum.accumulate(d) array([2, 2, 3, 3, 3, 7, 9]) ``` This is not possible with np.max. 
You can make np.maximum imitate np.max to a certain extent when using np.maximum.reduce: ``` >>> np.maximum.reduce(d) 9 >>> np.max(d) 9 ``` Basic testing suggests the two approaches are comparable in performance; and they should be, as np.max() actually calls np.maximum.reduce to do the computation.", "best_answers_score":0.8, "library_name":"numpy", "question_url":"https:\/\/stackoverflow.com\/questions\/33569668\/numpy-max-vs-amax-vs-maximum", "best_answers_votes":261, "question_length":385, "response_length":1411 }, { "question":"Cartesian product of x and y array points into single array of 2D points I have two numpy arrays that define the x and y axes of a grid. For example: ``` x = numpy.array([1,2,3]) y = numpy.array([4,5]) ``` I'd like to generate the Cartesian product of these arrays to generate: ``` array([[1,4],[2,4],[3,4],[1,5],[2,5],[3,5]]) ``` In a way that's not terribly inefficient since I need to do this many times in a loop. I'm assuming that converting them to a Python list and using itertools.product and back to a numpy array is not the most efficient form.", "response":"A canonical cartesian_product (almost) There are many approaches to this problem with different properties. Some are faster than others, and some are more general-purpose. After a lot of testing and tweaking, I've found that the following function, which calculates an n-dimensional cartesian_product, is faster than most others for many inputs. For a pair of approaches that are slightly more complex, but are even a bit faster in many cases, see the answer by Paul Panzer. Given that answer, this is no longer the fastest implementation of the cartesian product in numpy that I'm aware of. 
However, I think its simplicity will continue to make it a useful benchmark for future improvement: ``` def cartesian_product(*arrays): la = len(arrays) dtype = numpy.result_type(*arrays) arr = numpy.empty([len(a) for a in arrays] + [la], dtype=dtype) for i, a in enumerate(numpy.ix_(*arrays)): arr[...,i] = a return arr.reshape(-1, la) ``` It's worth mentioning that this function uses ix_ in an unusual way; whereas the documented use of ix_ is to generate indices into an array, it just so happens that arrays with the same shape can be used for broadcasted assignment. Many thanks to mgilson, who inspired me to try using ix_ this way, and to unutbu, who provided some extremely helpful feedback on this answer, including the suggestion to use numpy.result_type. Notable alternatives It's sometimes faster to write contiguous blocks of memory in Fortran order. That's the basis of this alternative, cartesian_product_transpose, which has proven faster on some hardware than cartesian_product (see below). However, Paul Panzer's answer, which uses the same principle, is even faster. Still, I include this here for interested readers: ``` def cartesian_product_transpose(*arrays): broadcastable = numpy.ix_(*arrays) broadcasted = numpy.broadcast_arrays(*broadcastable) rows, cols = numpy.prod(broadcasted[0].shape), len(broadcasted) dtype = numpy.result_type(*arrays) out = numpy.empty(rows * cols, dtype=dtype) start, end = 0, rows for a in broadcasted: out[start:end] = a.reshape(-1) start, end = end, end + rows return out.reshape(cols, rows).T ``` After coming to understand Panzer's approach, I wrote a new version that's almost as fast as his, and is almost as simple as cartesian_product: ``` def cartesian_product_simple_transpose(arrays): la = len(arrays) dtype = numpy.result_type(*arrays) arr = numpy.empty([la] + [len(a) for a in arrays], dtype=dtype) for i, a in enumerate(numpy.ix_(*arrays)): arr[i, ...] 
= a return arr.reshape(la, -1).T ``` This appears to have some constant-time overhead that makes it run slower than Panzer's for small inputs. But for larger inputs, in all the tests I ran, it performs just as well as his fastest implementation (cartesian_product_transpose_pp). In following sections, I include some tests of other alternatives. These are now somewhat out of date, but rather than duplicate effort, I've decided to leave them here out of historical interest. For up-to-date tests, see Panzer's answer, as well as Nico Schl\u00f6mer's. Tests against alternatives Here is a battery of tests that show the performance boost that some of these functions provide relative to a number of alternatives. All the tests shown here were performed on a quad-core machine, running Mac OS 10.12.5, Python 3.6.1, and numpy 1.12.1. Variations on hardware and software are known to produce different results, so YMMV. Run these tests for yourself to be sure! Definitions: ``` import numpy import itertools from functools import reduce ### Two-dimensional products ### def repeat_product(x, y): return numpy.transpose([numpy.tile(x, len(y)), numpy.repeat(y, len(x))]) def dstack_product(x, y): return numpy.dstack(numpy.meshgrid(x, y)).reshape(-1, 2) ### Generalized N-dimensional products ### def cartesian_product(*arrays): la = len(arrays) dtype = numpy.result_type(*arrays) arr = numpy.empty([len(a) for a in arrays] + [la], dtype=dtype) for i, a in enumerate(numpy.ix_(*arrays)): arr[...,i] = a return arr.reshape(-1, la) def cartesian_product_transpose(*arrays): broadcastable = numpy.ix_(*arrays) broadcasted = numpy.broadcast_arrays(*broadcastable) rows, cols = numpy.prod(broadcasted[0].shape), len(broadcasted) dtype = numpy.result_type(*arrays) out = numpy.empty(rows * cols, dtype=dtype) start, end = 0, rows for a in broadcasted: out[start:end] = a.reshape(-1) start, end = end, end + rows return out.reshape(cols, rows).T # from https:\/\/stackoverflow.com\/a\/1235363\/577088 def 
cartesian_product_recursive(*arrays, out=None): arrays = [numpy.asarray(x) for x in arrays] dtype = arrays[0].dtype n = numpy.prod([x.size for x in arrays]) if out is None: out = numpy.zeros([n, len(arrays)], dtype=dtype) m = n \/\/ arrays[0].size out[:,0] = numpy.repeat(arrays[0], m) if arrays[1:]: cartesian_product_recursive(arrays[1:], out=out[0:m,1:]) for j in range(1, arrays[0].size): out[j*m:(j+1)*m,1:] = out[0:m,1:] return out def cartesian_product_itertools(*arrays): return numpy.array(list(itertools.product(*arrays))) ### Test code ### name_func = [('repeat_product', repeat_product), ('dstack_product', dstack_product), ('cartesian_product', cartesian_product), ('cartesian_product_transpose', cartesian_product_transpose), ('cartesian_product_recursive', cartesian_product_recursive), ('cartesian_product_itertools', cartesian_product_itertools)] def test(in_arrays, test_funcs): global func global arrays arrays = in_arrays for name, func in test_funcs: print('{}:'.format(name)) %timeit func(*arrays) def test_all(*in_arrays): test(in_arrays, name_func) # `cartesian_product_recursive` throws an # unexpected error when used on more than # two input arrays, so for now I've removed # it from these tests. def test_cartesian(*in_arrays): test(in_arrays, name_func[2:4] + name_func[-1:]) x10 = [numpy.arange(10)] x50 = [numpy.arange(50)] x100 = [numpy.arange(100)] x500 = [numpy.arange(500)] x1000 = [numpy.arange(1000)] ``` Test results: ``` In [2]: test_all(*(x100 * 2)) repeat_product: 67.5 \u00b5s \u00b1 633 ns per loop (mean \u00b1 std. dev. of 7 runs, 10000 loops each) dstack_product: 67.7 \u00b5s \u00b1 1.09 \u00b5s per loop (mean \u00b1 std. dev. of 7 runs, 10000 loops each) cartesian_product: 33.4 \u00b5s \u00b1 558 ns per loop (mean \u00b1 std. dev. of 7 runs, 10000 loops each) cartesian_product_transpose: 67.7 \u00b5s \u00b1 932 ns per loop (mean \u00b1 std. dev. 
of 7 runs, 10000 loops each) cartesian_product_recursive: 215 \u00b5s \u00b1 6.01 \u00b5s per loop (mean \u00b1 std. dev. of 7 runs, 1000 loops each) cartesian_product_itertools: 3.65 ms \u00b1 38.7 \u00b5s per loop (mean \u00b1 std. dev. of 7 runs, 100 loops each) In [3]: test_all(*(x500 * 2)) repeat_product: 1.31 ms \u00b1 9.28 \u00b5s per loop (mean \u00b1 std. dev. of 7 runs, 1000 loops each) dstack_product: 1.27 ms \u00b1 7.5 \u00b5s per loop (mean \u00b1 std. dev. of 7 runs, 1000 loops each) cartesian_product: 375 \u00b5s \u00b1 4.5 \u00b5s per loop (mean \u00b1 std. dev. of 7 runs, 1000 loops each) cartesian_product_transpose: 488 \u00b5s \u00b1 8.88 \u00b5s per loop (mean \u00b1 std. dev. of 7 runs, 1000 loops each) cartesian_product_recursive: 2.21 ms \u00b1 38.4 \u00b5s per loop (mean \u00b1 std. dev. of 7 runs, 100 loops each) cartesian_product_itertools: 105 ms \u00b1 1.17 ms per loop (mean \u00b1 std. dev. of 7 runs, 10 loops each) In [4]: test_all(*(x1000 * 2)) repeat_product: 10.2 ms \u00b1 132 \u00b5s per loop (mean \u00b1 std. dev. of 7 runs, 100 loops each) dstack_product: 12 ms \u00b1 120 \u00b5s per loop (mean \u00b1 std. dev. of 7 runs, 100 loops each) cartesian_product: 4.75 ms \u00b1 57.1 \u00b5s per loop (mean \u00b1 std. dev. of 7 runs, 100 loops each) cartesian_product_transpose: 7.76 ms \u00b1 52.7 \u00b5s per loop (mean \u00b1 std. dev. of 7 runs, 100 loops each) cartesian_product_recursive: 13 ms \u00b1 209 \u00b5s per loop (mean \u00b1 std. dev. of 7 runs, 100 loops each) cartesian_product_itertools: 422 ms \u00b1 7.77 ms per loop (mean \u00b1 std. dev. of 7 runs, 1 loop each) ``` In all cases, cartesian_product as defined at the beginning of this answer is fastest. For those functions that accept an arbitrary number of input arrays, it's worth checking performance when len(arrays) > 2 as well. (Until I can determine why cartesian_product_recursive throws an error in this case, I've removed it from these tests.) 
``` In [5]: test_cartesian(*(x100 * 3)) cartesian_product: 8.8 ms \u00b1 138 \u00b5s per loop (mean \u00b1 std. dev. of 7 runs, 100 loops each) cartesian_product_transpose: 7.87 ms \u00b1 91.5 \u00b5s per loop (mean \u00b1 std. dev. of 7 runs, 100 loops each) cartesian_product_itertools: 518 ms \u00b1 5.5 ms per loop (mean \u00b1 std. dev. of 7 runs, 1 loop each) In [6]: test_cartesian(*(x50 * 4)) cartesian_product: 169 ms \u00b1 5.1 ms per loop (mean \u00b1 std. dev. of 7 runs, 10 loops each) cartesian_product_transpose: 184 ms \u00b1 4.32 ms per loop (mean \u00b1 std. dev. of 7 runs, 10 loops each) cartesian_product_itertools: 3.69 s \u00b1 73.5 ms per loop (mean \u00b1 std. dev. of 7 runs, 1 loop each) In [7]: test_cartesian(*(x10 * 6)) cartesian_product: 26.5 ms \u00b1 449 \u00b5s per loop (mean \u00b1 std. dev. of 7 runs, 10 loops each) cartesian_product_transpose: 16 ms \u00b1 133 \u00b5s per loop (mean \u00b1 std. dev. of 7 runs, 100 loops each) cartesian_product_itertools: 728 ms \u00b1 16 ms per loop (mean \u00b1 std. dev. of 7 runs, 1 loop each) In [8]: test_cartesian(*(x10 * 7)) cartesian_product: 650 ms \u00b1 8.14 ms per loop (mean \u00b1 std. dev. of 7 runs, 1 loop each) cartesian_product_transpose: 518 ms \u00b1 7.09 ms per loop (mean \u00b1 std. dev. of 7 runs, 1 loop each) cartesian_product_itertools: 8.13 s \u00b1 122 ms per loop (mean \u00b1 std. dev. of 7 runs, 1 loop each) ``` As these tests show, cartesian_product remains competitive until the number of input arrays rises above (roughly) four. After that, cartesian_product_transpose does have a slight edge. It's worth reiterating that users with other hardware and operating systems may see different results. 
For example, unutbu reports seeing the following results for these tests using Ubuntu 14.04, Python 3.4.3, and numpy 1.14.0.dev0+b7050a9: ``` >>> %timeit cartesian_product_transpose(x500, y500) 1000 loops, best of 3: 682 \u00b5s per loop >>> %timeit cartesian_product(x500, y500) 1000 loops, best of 3: 1.55 ms per loop ``` Below, I go into a few details about earlier tests I've run along these lines. The relative performance of these approaches has changed over time, for different hardware and different versions of Python and numpy. While it's not immediately useful for people using up-to-date versions of numpy, it illustrates how things have changed since the first version of this answer. A simple alternative: meshgrid + dstack The currently accepted answer uses tile and repeat to broadcast two arrays together. But the meshgrid function does practically the same thing. Here's the output of tile and repeat before being passed to transpose: ``` In [1]: import numpy In [2]: x = numpy.array([1,2,3]) ...: y = numpy.array([4,5]) ...: In [3]: [numpy.tile(x, len(y)), numpy.repeat(y, len(x))] Out[3]: [array([1, 2, 3, 1, 2, 3]), array([4, 4, 4, 5, 5, 5])] ``` And here's the output of meshgrid: ``` In [4]: numpy.meshgrid(x, y) Out[4]: [array([[1, 2, 3], [1, 2, 3]]), array([[4, 4, 4], [5, 5, 5]])] ``` As you can see, it's almost identical. We need only reshape the result to get exactly the same result. 
``` In [5]: xt, xr = numpy.meshgrid(x, y) ...: [xt.ravel(), xr.ravel()] Out[5]: [array([1, 2, 3, 1, 2, 3]), array([4, 4, 4, 5, 5, 5])] ``` Rather than reshaping at this point, though, we could pass the output of meshgrid to dstack and reshape afterwards, which saves some work: ``` In [6]: numpy.dstack(numpy.meshgrid(x, y)).reshape(-1, 2) Out[6]: array([[1, 4], [2, 4], [3, 4], [1, 5], [2, 5], [3, 5]]) ``` Contrary to the claim in this comment, I've seen no evidence that different inputs will produce differently shaped outputs, and as the above demonstrates, they do very similar things, so it would be quite strange if they did. Please let me know if you find a counterexample. Testing meshgrid + dstack vs. repeat + transpose The relative performance of these two approaches has changed over time. In an earlier version of Python (2.7), the result using meshgrid + dstack was noticeably faster for small inputs. (Note that these tests are from an old version of this answer.) Definitions: ``` >>> def repeat_product(x, y): ... return numpy.transpose([numpy.tile(x, len(y)), numpy.repeat(y, len(x))]) ... >>> def dstack_product(x, y): ... return numpy.dstack(numpy.meshgrid(x, y)).reshape(-1, 2) ... ``` For moderately-sized input, I saw a significant speedup. But I retried these tests with more recent versions of Python (3.6.1) and numpy (1.12.1), on a newer machine. The two approaches are almost identical now. Old Test ``` >>> x, y = numpy.arange(500), numpy.arange(500) >>> %timeit repeat_product(x, y) 10 loops, best of 3: 62 ms per loop >>> %timeit dstack_product(x, y) 100 loops, best of 3: 12.2 ms per loop ``` New Test ``` In [7]: x, y = numpy.arange(500), numpy.arange(500) In [8]: %timeit repeat_product(x, y) 1.32 ms \u00b1 24.7 \u00b5s per loop (mean \u00b1 std. dev. of 7 runs, 1000 loops each) In [9]: %timeit dstack_product(x, y) 1.26 ms \u00b1 8.47 \u00b5s per loop (mean \u00b1 std. dev. 
of 7 runs, 1000 loops each) ``` As always, YMMV, but this suggests that in recent versions of Python and numpy, these are interchangeable. Generalized product functions In general, we might expect that using built-in functions will be faster for small inputs, while for large inputs, a purpose-built function might be faster. Furthermore for a generalized n-dimensional product, tile and repeat won't help, because they don't have clear higher-dimensional analogues. So it's worth investigating the behavior of purpose-built functions as well. Most of the relevant tests appear at the beginning of this answer, but here are a few of the tests performed on earlier versions of Python and numpy for comparison. The cartesian function defined in another answer used to perform pretty well for larger inputs. (It's the same as the function called cartesian_product_recursive above.) In order to compare cartesian to dstack_prodct, we use just two dimensions. Here again, the old test showed a significant difference, while the new test shows almost none. Old Test ``` >>> x, y = numpy.arange(1000), numpy.arange(1000) >>> %timeit cartesian([x, y]) 10 loops, best of 3: 25.4 ms per loop >>> %timeit dstack_product(x, y) 10 loops, best of 3: 66.6 ms per loop ``` New Test ``` In [10]: x, y = numpy.arange(1000), numpy.arange(1000) In [11]: %timeit cartesian([x, y]) 12.1 ms \u00b1 199 \u00b5s per loop (mean \u00b1 std. dev. of 7 runs, 100 loops each) In [12]: %timeit dstack_product(x, y) 12.7 ms \u00b1 334 \u00b5s per loop (mean \u00b1 std. dev. of 7 runs, 100 loops each) ``` As before, dstack_product still beats cartesian at smaller scales. New Test (redundant old test not shown) ``` In [13]: x, y = numpy.arange(100), numpy.arange(100) In [14]: %timeit cartesian([x, y]) 215 \u00b5s \u00b1 4.75 \u00b5s per loop (mean \u00b1 std. dev. of 7 runs, 1000 loops each) In [15]: %timeit dstack_product(x, y) 65.7 \u00b5s \u00b1 1.15 \u00b5s per loop (mean \u00b1 std. dev. 
of 7 runs, 10000 loops each) ``` These distinctions are, I think, interesting and worth recording; but they are academic in the end. As the tests at the beginning of this answer showed, all of these versions are almost always slower than cartesian_product, defined at the very beginning of this answer -- which is itself a bit slower than the fastest implementations among the answers to this question.", "best_answers_score":0.8, "library_name":"numpy", "question_url":"https:\/\/stackoverflow.com\/questions\/11144513\/cartesian-product-of-x-and-y-array-points-into-single-array-of-2d-points", "best_answers_votes":208, "question_length":554, "response_length":15253 }, { "question":"Filtering a list based on a list of booleans I have a list of values which I need to filter given the values in a list of booleans: ``` list_a = [1, 2, 4, 6] filter = [True, False, True, False] ``` I generate a new filtered list with the following line: ``` filtered_list = [i for indx,i in enumerate(list_a) if filter[indx] == True] ``` which results in: ``` print filtered_list [1,4] ``` The line works but looks (to me) a bit overkill and I was wondering if there was a simpler way to achieve the same. Advices Summary of two good advices given in the answers below: 1- Don't name a list filter like I did because it is a built-in function. 2- Don't compare things to True like I did with if filter[idx]==True.. since it's unnecessary. 
Just using if filter[idx] is enough.", "response":"You're looking for itertools.compress: ``` >>> from itertools import compress >>> list_a = [1, 2, 4, 6] >>> fil = [True, False, True, False] >>> list(compress(list_a, fil)) [1, 4] ``` Timing comparisons(py3.x): ``` >>> list_a = [1, 2, 4, 6] >>> fil = [True, False, True, False] >>> %timeit list(compress(list_a, fil)) 100000 loops, best of 3: 2.58 us per loop >>> %timeit [i for (i, v) in zip(list_a, fil) if v] #winner 100000 loops, best of 3: 1.98 us per loop >>> list_a = [1, 2, 4, 6]*100 >>> fil = [True, False, True, False]*100 >>> %timeit list(compress(list_a, fil)) #winner 10000 loops, best of 3: 24.3 us per loop >>> %timeit [i for (i, v) in zip(list_a, fil) if v] 10000 loops, best of 3: 82 us per loop >>> list_a = [1, 2, 4, 6]*10000 >>> fil = [True, False, True, False]*10000 >>> %timeit list(compress(list_a, fil)) #winner 1000 loops, best of 3: 1.66 ms per loop >>> %timeit [i for (i, v) in zip(list_a, fil) if v] 100 loops, best of 3: 7.65 ms per loop ``` Don't use filter as a variable name, it is a built-in function.", "best_answers_score":0.8, "library_name":"numpy", "question_url":"https:\/\/stackoverflow.com\/questions\/18665873\/filtering-a-list-based-on-a-list-of-booleans", "best_answers_votes":287, "question_length":775, "response_length":1034 }, { "question":"How to copy a 2D array into a 3rd dimension, N times? I'd like to copy a numpy 2D array into a third dimension. For example, given the 2D numpy array: ``` import numpy as np arr = np.array([[1, 2], [1, 2]]) # arr.shape = (2, 2) ``` convert it into a 3D matrix with N such copies in a new dimension. 
Acting on arr with N=3, the output should be: ```py new_arr[:,:,0] # array([[1, 2], [1, 2]]) new_arr[:,:,1] # array([[1, 2], [1, 2]]) new_arr[:,:,2] # array([[1, 2], [1, 2]]) # new_arr.shape = (2, 2, 3) ```", "response":"Probably the cleanest way is to use np.repeat: ``` a = np.array([[1, 2], [1, 2]]) print(a.shape) # (2, 2) # indexing with np.newaxis inserts a new 3rd dimension, which we then repeat the # array along, (you can achieve the same effect by indexing with None, see below) b = np.repeat(a[:, :, np.newaxis], 3, axis=2) print(b.shape) # (2, 2, 3) print(b[:, :, 0]) # [[1 2] # [1 2]] print(b[:, :, 1]) # [[1 2] # [1 2]] print(b[:, :, 2]) # [[1 2] # [1 2]] ``` Having said that, you can often avoid repeating your arrays altogether by using broadcasting. For example, let's say I wanted to add a (3,) vector: ``` c = np.array([1, 2, 3]) ``` to a. I could copy the contents of a 3 times in the third dimension, then copy the contents of c twice in both the first and second dimensions, so that both of my arrays were (2, 2, 3), then compute their sum. However, it's much simpler and quicker to do this: ``` d = a[..., None] + c[None, None, :] ``` Here, a[..., None] has shape (2, 2, 1) and c[None, None, :] has shape (1, 1, 3)*. When I compute the sum, the result gets 'broadcast' out along the dimensions of size 1, giving me a result of shape (2, 2, 3): ``` print(d.shape) # (2, 2, 3) print(d[..., 0]) # a + c[0] # [[2 3] # [2 3]] print(d[..., 1]) # a + c[1] # [[3 4] # [3 4]] print(d[..., 2]) # a + c[2] # [[4 5] # [4 5]] ``` Broadcasting is a very powerful technique because it avoids the additional overhead involved in creating repeated copies of your input arrays in memory. * Although I included them for clarity, the None indices into c aren't actually necessary - you could also do a[..., None] + c, i.e. broadcast a (2, 2, 1) array against a (3,) array. 
This is because if one of the arrays has fewer dimensions than the other then only the trailing dimensions of the two arrays need to be compatible. To give a more complicated example: ``` a = np.ones((6, 1, 4, 3, 1)) # 6 x 1 x 4 x 3 x 1 b = np.ones((5, 1, 3, 2)) # 5 x 1 x 3 x 2 result = a + b # 6 x 5 x 4 x 3 x 2 ```", "best_answers_score":0.8, "library_name":"numpy", "question_url":"https:\/\/stackoverflow.com\/questions\/32171917\/how-to-copy-a-2d-array-into-a-3rd-dimension-n-times", "best_answers_votes":272, "question_length":505, "response_length":1974 }, { "question":"Efficiently sorting a numpy array in descending order? I am surprised this specific question hasn't been asked before, but I really didn't find it on SO nor on the documentation of np.sort. Say I have a random numpy array holding integers, e.g: ``` > temp = np.random.randint(1,10, 10) > temp array([2, 4, 7, 4, 2, 2, 7, 6, 4, 4]) ``` If I sort it, I get ascending order by default: ``` > np.sort(temp) array([2, 2, 2, 4, 4, 4, 4, 6, 7, 7]) ``` but I want the solution to be sorted in descending order. Now, I know I can always do: ``` reverse_order = np.sort(temp)[::-1] ``` but is this last statement efficient? Doesn't it create a copy in ascending order, and then reverses this copy to get the result in reversed order? If this is indeed the case, is there an efficient alternative? It doesn't look like np.sort accepts parameters to change the sign of the comparisons in the sort operation to get things in reverse order.", "response":"temp[::-1].sort() sorts the array in place, whereas np.sort(temp)[::-1] creates a new array. 
``` In [25]: temp = np.random.randint(1,10, 10) In [26]: temp Out[26]: array([5, 2, 7, 4, 4, 2, 8, 6, 4, 4]) In [27]: id(temp) Out[27]: 139962713524944 In [28]: temp[::-1].sort() In [29]: temp Out[29]: array([8, 7, 6, 5, 4, 4, 4, 4, 2, 2]) In [30]: id(temp) Out[30]: 139962713524944 ```", "best_answers_score":0.8, "library_name":"numpy", "question_url":"https:\/\/stackoverflow.com\/questions\/26984414\/efficiently-sorting-a-numpy-array-in-descending-order", "best_answers_votes":225, "question_length":926, "response_length":379 }, { "question":"How to convert list of numpy arrays into single numpy array? Suppose I have ; ``` LIST = [[array([1, 2, 3, 4, 5]), array([1, 2, 3, 4, 5],[1,2,3,4,5])] # inner lists are numpy arrays ``` I try to convert; ``` array([[1, 2, 3, 4, 5], [1, 2, 3, 4, 5], [1, 2, 3, 4, 5]) ``` I am solving it by iteration on vstack right now but it is really slow for especially large LIST What do you suggest for the best efficient way?", "response":"In general you can concatenate a whole sequence of arrays along any axis: ``` numpy.concatenate( LIST, axis=0 ) ``` but you do have to worry about the shape and dimensionality of each array in the list (for a 2-dimensional 3x5 output, you need to ensure that they are all 2-dimensional n-by-5 arrays already). If you want to concatenate 1-dimensional arrays as the rows of a 2-dimensional output, you need to expand their dimensionality. As Jorge's answer points out, there is also the function stack, introduced in numpy 1.10: ``` numpy.stack( LIST, axis=0 ) ``` This takes the complementary approach: it creates a new view of each input array and adds an extra dimension (in this case, on the left, so each n-element 1D array becomes a 1-by-n 2D array) before concatenating. It will only work if all the input arrays have the same shape. 
vstack (or equivalently row_stack) is often an easier-to-use solution because it will take a sequence of 1- and\/or 2-dimensional arrays and expand the dimensionality automatically where necessary and only where necessary, before concatenating the whole list together. Where a new dimension is required, it is added on the left. Again, you can concatenate a whole list at once without needing to iterate: ``` numpy.vstack( LIST ) ``` This flexible behavior is also exhibited by the syntactic shortcut numpy.r_[ array1, ...., arrayN ] (note the square brackets). This is good for concatenating a few explicitly-named arrays but it becomes less readable in your situation because [] subscripting will not accept a list. You would need to convert your sequence to a tuple: numpy.r_[tuple(LIST)]. It's more readable to simply use vstack(). There is also an analogous function column_stack and shortcut c_[...], for horizontal (column-wise) stacking, as well as an almost-analogous function hstack\u2014although for some reason the latter is less flexible (it is stricter about input arrays' dimensionality, and tries to concatenate 1-D arrays end-to-end instead of treating them as columns). Finally, in the specific case of vertical stacking of 1-D arrays, the following also works: ``` numpy.array( LIST ) ``` ...because arrays can be constructed out of a sequence of other arrays, adding a new dimension to the beginning.", "best_answers_score":0.8, "library_name":"numpy", "question_url":"https:\/\/stackoverflow.com\/questions\/27516849\/how-to-convert-list-of-numpy-arrays-into-single-numpy-array", "best_answers_votes":268, "question_length":414, "response_length":2254 }, { "question":"What are the differences between Pandas and NumPy+SciPy in Python? [closed]
They both seem exceedingly similar and I'm curious as to which package would be more beneficial for financial data analysis.", "response":"pandas provides high level data manipulation tools built on top of NumPy. NumPy by itself is a fairly low-level tool, similar to MATLAB. pandas on the other hand provides rich time series functionality, data alignment, NA-friendly statistics, groupby, merge and join methods, and lots of other conveniences. It has become very popular in recent years in financial applications. I will have a chapter dedicated to financial data analysis using pandas in my upcoming book.", "best_answers_score":0.8, "library_name":"numpy", "question_url":"https:\/\/stackoverflow.com\/questions\/11077023\/what-are-the-differences-between-pandas-and-numpyscipy-in-python", "best_answers_votes":329, "question_length":686, "response_length":470 }, { "question":"Index all *except* one item in python Is there a simple way to index all elements of a list (or array, or whatever) except for a particular index? E.g., mylist[3] will return the item in position 3, and mylist[~3] will return the whole list except for 3", "response":"For a list, you could use a list comp. For example, to make b a copy of a without the 3rd element: ``` a = range(10)[::-1] # [9, 8, 7, 6, 5, 4, 3, 2, 1, 0] b = [x for i,x in enumerate(a) if i!=3] # [9, 8, 7, 5, 4, 3, 2, 1, 0] ``` This is very general, and can be used with all iterables, including numpy arrays. If you replace [] with (), b will be an iterator instead of a list.
Or you could do this in-place with pop: ``` a = range(10)[::-1] # a = [9, 8, 7, 6, 5, 4, 3, 2, 1, 0] a.pop(3) # a = [9, 8, 7, 5, 4, 3, 2, 1, 0] ``` In numpy you could do this with a boolean indexing: ``` a = np.arange(9, -1, -1) # a = array([9, 8, 7, 6, 5, 4, 3, 2, 1, 0]) b = a[np.arange(len(a))!=3] # b = array([9, 8, 7, 5, 4, 3, 2, 1, 0]) ``` which will, in general, be much faster than the list comprehension listed above.", "best_answers_score":0.8, "library_name":"numpy", "question_url":"https:\/\/stackoverflow.com\/questions\/19286657\/index-all-except-one-item-in-python", "best_answers_votes":194, "question_length":248, "response_length":806 }, { "question":"Numpy Resize\/Rescale Image I would like to take an image and change the scale of the image, while it is a numpy array. For example I have this image of a coca-cola bottle: bottle-1 Which translates to a numpy array of shape (528, 203, 3) and I want to resize that to say the size of this second image: bottle-2 Which has a shape of (140, 54, 3). How do I change the size of the image to a certain shape while still maintaining the original image? Other answers suggest stripping every other or third row out, but what I want to do is basically shrink the image how you would via an image editor but in python code. Are there any libraries to do this in numpy\/SciPy?", "response":"Yeah, you can install opencv (this is a library used for image processing, and computer vision), and use the cv2.resize function. And for instance use: ``` import cv2 import numpy as np img = cv2.imread('your_image.jpg') res = cv2.resize(img, dsize=(54, 140), interpolation=cv2.INTER_CUBIC) ``` Here img is thus a numpy array containing the original image, whereas res is a numpy array containing the resized image. An important aspect is the interpolation parameter: there are several ways how to resize an image. Especially since you scale down the image, and the size of the original image is not a multiple of the size of the resized image. 
Possible interpolation schemas are: INTER_NEAREST - a nearest-neighbor interpolation INTER_LINEAR - a bilinear interpolation (used by default) INTER_AREA - resampling using pixel area relation. It may be a preferred method for image decimation, as it gives moire\u2019-free results. But when the image is zoomed, it is similar to the INTER_NEAREST method. INTER_CUBIC - a bicubic interpolation over 4x4 pixel neighborhood INTER_LANCZOS4 - a Lanczos interpolation over 8x8 pixel neighborhood Like with most options, there is no \"best\" option in the sense that for every resize schema, there are scenarios where one strategy can be preferred over another.", "best_answers_score":0.8, "library_name":"numpy", "question_url":"https:\/\/stackoverflow.com\/questions\/48121916\/numpy-resize-rescale-image", "best_answers_votes":223, "question_length":665, "response_length":1293 }, { "question":"Numpy matrix to array I am using numpy. I have a matrix with 1 column and N rows and I want to get an array from with N elements. For example, if i have M = matrix([[1], [2], [3], [4]]), I want to get A = array([1,2,3,4]). To achieve it, I use A = np.array(M.T)[0]. Does anyone know a more elegant way to get the same result? Thanks!", "response":"If you'd like something a bit more readable, you can do this: ``` A = np.squeeze(np.asarray(M)) ``` Equivalently, you could also do: A = np.asarray(M).reshape(-1), but that's a bit less easy to read.", "best_answers_score":0.8, "library_name":"numpy", "question_url":"https:\/\/stackoverflow.com\/questions\/3337301\/numpy-matrix-to-array", "best_answers_votes":244, "question_length":333, "response_length":199 }, { "question":"Difference between numpy dot() and Python 3.5+ matrix multiplication @ I recently moved to Python 3.5 and noticed the new matrix multiplication operator (@) sometimes behaves differently from the numpy dot operator. 
For example, for 3d arrays: ``` import numpy as np a = np.random.rand(8,13,13) b = np.random.rand(8,13,13) c = a @ b # Python 3.5+ d = np.dot(a, b) ``` The @ operator returns an array of shape: ``` c.shape (8, 13, 13) ``` while the np.dot() function returns: ``` d.shape (8, 13, 8, 13) ``` How can I reproduce the same result with numpy dot? Are there any other significant differences?", "response":"The @ operator calls the array's __matmul__ method, not dot. This method is also present in the API as the function np.matmul. ``` >>> a = np.random.rand(8,13,13) >>> b = np.random.rand(8,13,13) >>> np.matmul(a, b).shape (8, 13, 13) ``` From the documentation: matmul differs from dot in two important ways. Multiplication by scalars is not allowed. Stacks of matrices are broadcast together as if the matrices were elements. The last point makes it clear that dot and matmul methods behave differently when passed 3D (or higher dimensional) arrays. Quoting from the documentation some more: For matmul: If either argument is N-D, N > 2, it is treated as a stack of matrices residing in the last two indexes and broadcast accordingly. For np.dot: For 2-D arrays it is equivalent to matrix multiplication, and for 1-D arrays to inner product of vectors (without complex conjugation). For N dimensions it is a sum product over the last axis of a and the second-to-last of b", "best_answers_score":0.8, "library_name":"numpy", "question_url":"https:\/\/stackoverflow.com\/questions\/34142485\/difference-between-numpy-dot-and-python-3-5-matrix-multiplication", "best_answers_votes":237, "question_length":601, "response_length":971 }, { "question":"Slicing of a NumPy 2d array, or how do I extract an mxm submatrix from an nxn array (n>m)? I want to slice a NumPy nxn array. I want to extract an arbitrary selection of m rows and columns of that array (i.e. without any pattern in the numbers of rows\/columns), making it a new, mxm array.
For this example let us say the array is 4x4 and I want to extract a 2x2 array from it. Here is our array: ``` from numpy import * x = range(16) x = reshape(x,(4,4)) print x [[ 0 1 2 3] [ 4 5 6 7] [ 8 9 10 11] [12 13 14 15]] ``` The lines and columns to remove are the same. The easiest case is when I want to extract a 2x2 submatrix that is at the beginning or at the end, i.e. : ``` In [33]: x[0:2,0:2] Out[33]: array([[0, 1], [4, 5]]) In [34]: x[2:,2:] Out[34]: array([[10, 11], [14, 15]]) ``` But what if I need to remove another mixture of rows\/columns? What if I need to remove the first and third lines\/rows, thus extracting the submatrix [[5,7],[13,15]]? There can be any composition of rows\/lines. I read somewhere that I just need to index my array using arrays\/lists of indices for both rows and columns, but that doesn't seem to work: ``` In [35]: x[[1,3],[1,3]] Out[35]: array([ 5, 15]) ``` I found one way, which is: ``` In [61]: x[[1,3]][:,[1,3]] Out[61]: array([[ 5, 7], [13, 15]]) ``` The first issue with this is that it is hardly readable, although I can live with that. If someone has a better solution, I'd certainly like to hear it. Another thing is that I read on a forum that indexing arrays with arrays forces NumPy to make a copy of the desired array, so when dealing with large arrays this could become a problem. Why is that so \/ how does this mechanism work?", "response":"To answer this question, we have to look at how indexing a multidimensional array works in Numpy. Let's first say you have the array x from your question. The buffer assigned to x will contain 16 ascending integers from 0 to 15. If you access one element, say x[i,j], NumPy has to figure out the memory location of this element relative to the beginning of the buffer. This is done by calculating in effect i*x.shape[1]+j (and multiplying with the size of an int to get an actual memory offset).
If you extract a subarray by basic slicing like y = x[0:2,0:2], the resulting object will share the underlying buffer with x. But what happens if you access y[i,j]? NumPy can't use i*y.shape[1]+j to calculate the offset into the array, because the data belonging to y is not consecutive in memory. NumPy solves this problem by introducing strides. When calculating the memory offset for accessing x[i,j], what is actually calculated is i*x.strides[0]+j*x.strides[1] (and this already includes the factor for the size of an int): ``` x.strides (16, 4) ``` When y is extracted like above, NumPy does not create a new buffer, but it does create a new array object referencing the same buffer (otherwise y would just be equal to x.) The new array object will have a different shape than x and maybe a different starting offset into the buffer, but will share the strides with x (in this case at least): ``` y.shape (2,2) y.strides (16, 4) ``` This way, computing the memory offset for y[i,j] will yield the correct result. But what should NumPy do for something like z=x[[1,3]]? The strides mechanism won't allow correct indexing if the original buffer is used for z. NumPy theoretically could add some more sophisticated mechanism than the strides, but this would make element access relatively expensive, somehow defying the whole idea of an array. In addition, a view wouldn't be a really lightweight object anymore. This is covered in depth in the NumPy documentation on indexing. Oh, and nearly forgot about your actual question: Here is how to make the indexing with multiple lists work as expected: ``` x[[[1],[3]],[1,3]] ``` This is because the index arrays are broadcasted to a common shape.
Of course, for this particular example, you can also make do with basic slicing: ``` x[1::2, 1::2] ```", "best_answers_score":0.8, "library_name":"numpy", "question_url":"https:\/\/stackoverflow.com\/questions\/4257394\/slicing-of-a-numpy-2d-array-or-how-do-i-extract-an-mxm-submatrix-from-an-nxn-ar", "best_answers_votes":122, "question_length":1668, "response_length":2294 }, { "question":"initialize a numpy array [duplicate] This question already has answers here: Create numpy matrix filled with NaNs (11 answers) Closed 1 year ago. Is there way to initialize a numpy array of a shape and add to it? I will explain what I need with a list example. If I want to create a list of objects generated in a loop, I can do: ``` a = [] for i in range(5): a.append(i) ``` I want to do something similar with a numpy array. I know about vstack, concatenate etc. However, it seems these require two numpy arrays as inputs. What I need is: ``` big_array # Initially empty. This is where I don't know what to specify for i in range(5): array i of shape = (2,4) created. add to big_array ``` The big_array should have a shape (10,4). How to do this? EDIT: I want to add the following clarification. I am aware that I can define big_array = numpy.zeros((10,4)) and then fill it up. However, this requires specifying the size of big_array in advance. I know the size in this case, but what if I do not? When we use the .append function for extending the list in python, we don't need to know its final size in advance. I am wondering if something similar exists for creating a bigger array from smaller arrays, starting with an empty array.", "response":"numpy.zeros Return a new array of given shape and type, filled with zeros. or numpy.ones Return a new array of given shape and type, filled with ones. or numpy.empty Return a new array of given shape and type, without initializing entries. 
However, the mentality in which we construct an array by appending elements to a list is not much used in numpy, because it's less efficient (numpy datatypes are much closer to the underlying C arrays). Instead, you should preallocate the array to the size that you need it to be, and then fill in the rows. You can use numpy.append if you must, though.", "best_answers_score":0.8, "library_name":"numpy", "question_url":"https:\/\/stackoverflow.com\/questions\/4535374\/initialize-a-numpy-array", "best_answers_votes":226, "question_length":1237, "response_length":593 }, { "question":"RuntimeWarning: numpy.dtype size changed, may indicate binary incompatibility I have this error for trying to load a saved SVM model. I have tried uninstalling sklearn, NumPy and SciPy, reinstalling the latest versions all-together again (using pip). I am still getting this error. Why? ``` In [1]: import sklearn; print sklearn.__version__ 0.18.1 In [3]: import numpy; print numpy.__version__ 1.11.2 In [5]: import scipy; print scipy.__version__ 0.18.1 In [7]: import pandas; print pandas.__version__ 0.19.1 In [10]: clf = joblib.load('model\/trained_model.pkl') --------------------------------------------------------------------------- RuntimeWarning Traceback (most recent call last) in () ----> 1 clf = joblib.load('sentiment_classification\/model\/trained_model.pkl') \/usr\/local\/lib\/python2.7\/dist-packages\/sklearn\/externals\/joblib\/numpy_pickle.pyc in load(filename, mmap_mode) 573 return load_compatibility(fobj) 574 --> 575 obj = _unpickle(fobj, filename, mmap_mode) 576 577 return obj \/usr\/local\/lib\/python2.7\/dist-packages\/sklearn\/externals\/joblib\/numpy_pickle.pyc in _unpickle(fobj, filename, mmap_mode) 505 obj = None 506 try: --> 507 obj = unpickler.load() 508 if unpickler.compat_mode: 509 warnings.warn(\"The file '%s' has been generated with a \" \/usr\/lib\/python2.7\/pickle.pyc in load(self) 862 while 1: 863 key = read(1) --> 864 dispatch[key](self) 865 except _Stop, 
stopinst: 866 return stopinst.value \/usr\/lib\/python2.7\/pickle.pyc in load_global(self) 1094 module = self.readline()[:-1] 1095 name = self.readline()[:-1] -> 1096 klass = self.find_class(module, name) 1097 self.append(klass) 1098 dispatch[GLOBAL] = load_global \/usr\/lib\/python2.7\/pickle.pyc in find_class(self, module, name) 1128 def find_class(self, module, name): 1129 # Subclasses may override this -> 1130 __import__(module) 1131 mod = sys.modules[module] 1132 klass = getattr(mod, name) \/usr\/local\/lib\/python2.7\/dist-packages\/sklearn\/svm\/__init__.py in () 11 # License: BSD 3 clause (C) INRIA 2010 12 ---> 13 from .classes import SVC, NuSVC, SVR, NuSVR, OneClassSVM, LinearSVC, \\ 14 LinearSVR 15 from .bounds import l1_min_c \/usr\/local\/lib\/python2.7\/dist-packages\/sklearn\/svm\/classes.py in () 2 import numpy as np 3 ----> 4 from .base import _fit_liblinear, BaseSVC, BaseLibSVM 5 from ..base import BaseEstimator, RegressorMixin 6 from ..linear_model.base import LinearClassifierMixin, SparseCoefMixin, \\ \/usr\/local\/lib\/python2.7\/dist-packages\/sklearn\/svm\/base.py in () 6 from abc import ABCMeta, abstractmethod 7 ----> 8 from . import libsvm, liblinear 9 from . import libsvm_sparse 10 from ..base import BaseEstimator, ClassifierMixin __init__.pxd in init sklearn.svm.libsvm (sklearn\/svm\/libsvm.c:10207)() RuntimeWarning: numpy.dtype size changed, may indicate binary incompatibility. Expected 96, got 80 ``` UPDATE: OK, by following here, and ``` pip uninstall -y scipy scikit-learn pip install --no-binary scipy scikit-learn ``` The error has now gone, though I still have no idea why it occurred in the first place...", "response":"According to MAINT: silence Cython warnings about changes dtype\/ufunc size. - numpy\/numpy: These warnings are visible whenever you import scipy (or another package) that was compiled against an older numpy than is installed. and the checks are inserted by Cython (hence are present in any module compiled with it). 
Long story short, these warnings should be benign in the particular case of numpy, and these messages are filtered out since numpy 1.8 (the branch this commit went onto), while scikit-learn 0.18.1 is compiled against numpy 1.6.1. To filter these warnings yourself, you can do the same as the patch does: ``` import warnings warnings.filterwarnings(\"ignore\", message=\"numpy.dtype size changed\") warnings.filterwarnings(\"ignore\", message=\"numpy.ufunc size changed\") ``` Of course, you can just recompile all affected modules from source against your local numpy with pip install --no-binary :all:\u00b9 instead if you have the tools for that. Longer story: the patch's proponent claims there should be no risk specifically with numpy, and 3rd-party packages are intentionally built against older versions: [Rebuilding everything against current numpy is] not a feasible solution, and certainly shouldn't be necessary. Scipy (as many other packages) is compatible with a number of versions of numpy. So when we distribute scipy binaries, we build them against the lowest supported numpy version (1.5.1 as of now) and they work with 1.6.x, 1.7.x and numpy master as well. The real correct would be for Cython only to issue warnings when the size of dtypes\/ufuncs has changes in a way that breaks the ABI, and be silent otherwise. As a result, Cython's devs agreed to trust the numpy team with maintaining binary compatibility by hand, so we can probably expect that using versions with breaking ABI changes would yield a specially-crafted exception or some other explicit show-stopper.
\u00b9The previously available --no-use-wheel option has been removed since pip 10.0.0.", "best_answers_score":0.8, "library_name":"numpy", "question_url":"https:\/\/stackoverflow.com\/questions\/40845304\/runtimewarning-numpy-dtype-size-changed-may-indicate-binary-incompatibility", "best_answers_votes":172, "question_length":3008, "response_length":1980 }, { "question":"How to convert an array of strings to an array of floats in numpy? How do I make the following conversion in NumPy? ```none [\"1.1\", \"2.2\", \"3.2\"] \u27f6 [1.1, 2.2, 3.2] ```", "response":"Well, if you're reading the data in as a list, just do np.array(map(float, list_of_strings)) (or equivalently, use a list comprehension). (In Python 3, you'll need to call list on the map return value if you use map, since map returns an iterator now.) However, if it's already a numpy array of strings, there's a better way. Use astype(). ``` import numpy as np x = np.array(['1.1', '2.2', '3.3']) y = x.astype(np.float) ```", "best_answers_score":0.8, "library_name":"numpy", "question_url":"https:\/\/stackoverflow.com\/questions\/3877209\/how-to-convert-an-array-of-strings-to-an-array-of-floats-in-numpy", "best_answers_votes":269, "question_length":167, "response_length":425 }, { "question":"How to create a DataFrame of random integers with Pandas? I know that if I use randn, the following code gives me what I am looking for, but with elements from a normal distribution. But what if I just wanted random integers? ```py import pandas as pd import numpy as np df = pd.DataFrame(np.random.randn(100, 4), columns=list('ABCD')) ``` randint works by providing a range, but not an array like randn. So how do I do this with random integers between some range?", "response":"numpy.random.randint accepts a third argument (size) , in which you can specify the size of the output array. 
You can use this to create your DataFrame - ``` df = pd.DataFrame(np.random.randint(0,100,size=(100, 4)), columns=list('ABCD')) ``` Here - np.random.randint(0,100,size=(100, 4)) - creates an output array of size (100,4) with random integer elements between [0,100) . Demo - ```python import numpy as np import pandas as pd df = pd.DataFrame(np.random.randint(0,100,size=(100, 4)), columns=list('ABCD')) ``` which produces: ```none A B C D 0 45 88 44 92 1 62 34 2 86 2 85 65 11 31 3 74 43 42 56 4 90 38 34 93 5 0 94 45 10 6 58 23 23 60 .. .. .. .. .. ```", "best_answers_score":0.8, "library_name":"numpy", "question_url":"https:\/\/stackoverflow.com\/questions\/32752292\/how-to-create-a-dataframe-of-random-integers-with-pandas", "best_answers_votes":295, "question_length":465, "response_length":663 }, { "question":"numpy.dtype size changed, may indicate binary incompatibility. Expected 96 from C header, got 88 from PyObject I want to call my Python module from the Matlab. I received the error: ``` Error using numpy_ops>init thinc.backends.numpy_ops ``` Python Error: ``` ValueError: numpy.dtype size changed, may indicate binary incompatibility. Expected 96 from C header, got 88 from PyObject. 
``` The Python script is as follows ``` import spacy def text_recognizer(model_path, text): try: # Load the trained model nlp = spacy.load(model_path) print(\"Model loaded successfully.\") # Process the given text doc = nlp(text) ent_labels = [(ent.text, ent.label_) for ent in doc.ents] return ent_labels ``` The Matlab script is as follows ``` % Set up the Python environment pe = pyenv; py.importlib.import_module('final_output'); % Add the directory containing the Python script to the Python path path_add = fileparts(which('final_output.py')); if count(py.sys.path, path_add) == 0 insert(py.sys.path, int64(0), path_add); end % Define model path and text to process model_path = 'D:\\trained_model\\\\output\\\\model-best'; text = 'Roses are red'; % Call the Python function pyOut = py.final_output.text_recognizer(model_path, text); % Convert the output to a MATLAB cell array entity_labels = cell(pyOut); disp(entity_labels); ``` I found one suggested solution, updating NumPy, which I did, but nothing changed. I am using Python 3.9 and NumPy version 2.0.0. The error was received when I tried to call the Python module using a Matlab script. How can I fix the issue?", "response":"The reason is that pandas defines its numpy dependency freely as \"anything newer than a certain version of numpy\". The problem occurred when numpy==2.0.0 was released on June 16th 2024, because it is no longer compatible with your pandas version. The solution is to pin down the numpy version to one before 2.0.0. Today it could be (this is the most recent numpy 1 release): ``` numpy==1.26.4 ``` To be added in your requirements or to the pip command you use (but together with installing pandas). Nowadays pip is very flexible and can handle the issue flawlessly.
You just need to ask it to install both pandas and numpy of given versions in the same pip install invocation.", "best_answers_score":0.8, "library_name":"numpy", "question_url":"https:\/\/stackoverflow.com\/questions\/78634235\/numpy-dtype-size-changed-may-indicate-binary-incompatibility-expected-96-from", "best_answers_votes":270, "question_length":1541, "response_length":681 }, { "question":"Pytorch tensor to numpy array I have a pytorch Tensor of shape [4, 3, 966, 1296]. I want to convert it to numpy array using the following code: ``` imgs = imgs.numpy()[:, ::-1, :, :] ``` How does that code work?", "response":"I believe you also have to use .detach(). I had to convert my Tensor to a numpy array on Colab which uses CUDA and GPU. I did it like the following: ``` # this is just my embedding matrix which is a Torch tensor object embedding = learn.model.u_weight embedding_list = list(range(0, 64382)) input = torch.cuda.LongTensor(embedding_list) tensor_array = embedding(input) # the output of the line below is a numpy array tensor_array.cpu().detach().numpy() ```", "best_answers_score":0.8, "library_name":"numpy", "question_url":"https:\/\/stackoverflow.com\/questions\/49768306\/pytorch-tensor-to-numpy-array", "best_answers_votes":163, "question_length":211, "response_length":456 }, { "question":"How to return 0 with divide by zero I'm trying to perform an element wise divide in python, but if a zero is encountered, I need the quotient to just be zero. For example: ``` array1 = np.array([0, 1, 2]) array2 = np.array([0, 1, 1]) array1 \/ array2 # should be np.array([0, 1, 2]) ``` I could always just use a for-loop through my data, but to really utilize numpy's optimizations, I need the divide function to return 0 upon divide by zero errors instead of ignoring the error. Unless I'm missing something, it doesn't seem numpy.seterr() can return values upon errors. 
Does anyone have any other suggestions on how I could get the best out of numpy while setting my own divide by zero error handling?", "response":"In numpy v1.7+, you can take advantage of the \"where\" option for ufuncs. You can do things in one line and you don't have to deal with the errstate context manager. ``` >>> a = np.array([-1, 0, 1, 2, 3], dtype=float) >>> b = np.array([ 0, 0, 0, 2, 2], dtype=float) # If you don't pass `out` the indices where (b == 0) will be uninitialized! >>> c = np.divide(a, b, out=np.zeros_like(a), where=b!=0) >>> print(c) [ 0. 0. 0. 1. 1.5] ``` In this case, it does the divide calculation anywhere 'where' b does not equal zero. When b does equal zero, then it remains unchanged from whatever value you originally gave it in the 'out' argument.", "best_answers_score":0.8, "library_name":"numpy", "question_url":"https:\/\/stackoverflow.com\/questions\/26248654\/how-to-return-0-with-divide-by-zero", "best_answers_votes":363, "question_length":703, "response_length":635 }, { "question":"Find the most frequent number in a NumPy array Suppose I have the following NumPy array: ``` a = np.array([1,2,3,1,2,1,1,1,3,2,2,1]) ``` How can I find the most frequent number in this array?", "response":"If your list contains all non-negative ints, you should take a look at numpy.bincount: http:\/\/docs.scipy.org\/doc\/numpy\/reference\/generated\/numpy.bincount.html and then probably use np.argmax: ``` a = np.array([1,2,3,1,2,1,1,1,3,2,2,1]) counts = np.bincount(a) print(np.argmax(counts)) ``` For a more complicated list (that perhaps contains negative numbers or non-integer values), you can use np.histogram in a similar way. Alternatively, if you just want to work in python without using numpy, collections.Counter is a good way of handling this sort of data.
``` from collections import Counter a = [1,2,3,1,2,1,1,1,3,2,2,1] b = Counter(a) print(b.most_common(1)) ```", "best_answers_score":0.8, "library_name":"numpy", "question_url":"https:\/\/stackoverflow.com\/questions\/6252280\/find-the-most-frequent-number-in-a-numpy-array", "best_answers_votes":253, "question_length":191, "response_length":669 }, { "question":"How do I use numpy.where()? What should I pass, and what does the result mean? [closed] Closed. This question needs to be more focused. It is not currently accepting answers. Want to improve this question? Guide the asker to update the question so it focuses on a single, specific problem. Narrowing the question will help others answer the question concisely. You may edit the question if you feel you can improve it yourself. If edited, the question will be reviewed and might be reopened. Closed 9 years ago. Improve this question I tried reading the documentation for numpy.where(), but I'm still confused. What should I pass for the condition, x and y values? When I pass only condition, what does the result mean and how can I use it? What about when I pass all three? I found How does python numpy.where() work? but it didn't answer my question because it seems to be about the implementation rather than about how to use it. Numpy where() on a 2D matrix also didn't explain things for me; I'm looking for a step-by-step explanation, rather than a how-to guide for a specific case. Please include examples with both 1D and 2D source data.", "response":"After fiddling around for a while, I figured things out, and am posting them here hoping it will help others. Intuitively, np.where is like asking \"tell me where in this array, entries satisfy a given condition\". 
``` >>> a = np.arange(5,10) >>> np.where(a < 8) (array([0, 1, 2]),) >>> a[np.where(a < 8)] # selects from a entries 0, 1, 2 array([5, 6, 7]) >>> a = np.arange(4,10).reshape(2,3) >>> a array([[4, 5, 6], [7, 8, 9]]) >>> np.where(a > 8) (array([1]), array([2])) ``` As in the 1d case, we can use np.where() to get entries in the 2d array that satisfy the condition: ``` >>> a[np.where(a > 8)] array([9]) ``` Note, for a 2d array np.where() returns an array of row idx's and an array of col idx's; for a 1d array the tuple contains just a single array of idx's.", "best_answers_score":0.8, "library_name":"numpy", "question_url":"https:\/\/stackoverflow.com\/questions\/34667282\/how-do-i-use-numpy-where-what-should-i-pass-and-what-does-the-result-mean", "best_answers_votes":304, "question_length":1145, "response_length":706 }, { "question":"How to get element-wise matrix multiplication (Hadamard product) in numpy? I have two matrices ```py a = np.matrix([[1,2], [3,4]]) b = np.matrix([[5,6], [7,8]]) ``` and I want to get the element-wise product, [[1*5,2*6], [3*7,4*8]], which equals ``` matrix([[5, 12], [21, 32]]) ``` I have tried np.dot(a,b) and a*b but both give the result matrix([[19, 22], [43, 50]]) which is the matrix product, not the element-wise product. How can I get the element-wise product (aka Hadamard product) using built-in functions?", "response":"For elementwise multiplication of matrix objects, you can use numpy.multiply: ``` import numpy as np a = np.array([[1,2],[3,4]]) b = np.array([[5,6],[7,8]]) np.multiply(a,b) ``` Result ``` array([[ 5, 12], [21, 32]]) ``` However, you should really use array instead of matrix. matrix objects have all sorts of horrible incompatibilities with regular ndarrays.
With ndarrays, you can just use * for elementwise multiplication: ``` a * b ``` If you're on Python 3.5+, you don't even lose the ability to perform matrix multiplication with an operator, because @ does matrix multiplication now: ``` a @ b # matrix multiplication ```", "best_answers_score":0.8, "library_name":"numpy", "question_url":"https:\/\/stackoverflow.com\/questions\/40034993\/how-to-get-element-wise-matrix-multiplication-hadamard-product-in-numpy", "best_answers_votes":260, "question_length":519, "response_length":628 }, { "question":"best way to preserve numpy arrays on disk I am looking for a fast way to preserve large numpy arrays. I want to save them to the disk in a binary format, then read them back into memory relatively quickly. cPickle is not fast enough, unfortunately. I found numpy.savez and numpy.load. But the weird thing is, numpy.load loads a npy file into \"memory-map\". That means regular manipulation of arrays is really slow. For example, something like this would be really slow: ``` #!\/usr\/bin\/python import numpy as np; import time; from tempfile import TemporaryFile n = 10000000; a = np.arange(n) b = np.arange(n) * 10 c = np.arange(n) * -0.5 file = TemporaryFile() np.savez(file,a = a, b = b, c = c); file.seek(0) t = time.time() z = np.load(file) print \"loading time = \", time.time() - t t = time.time() aa = z['a'] bb = z['b'] cc = z['c'] print \"assigning time = \", time.time() - t; ``` more precisely, the first line will be really fast, but the remaining lines that assign the arrays to objects are ridiculously slow: ``` loading time = 0.000220775604248 assigning time = 2.72940087318 ``` Is there any better way of preserving numpy arrays? Ideally, I want to be able to store multiple arrays in one file.", "response":"I've compared performance (space and time) for a number of ways to store numpy arrays. Few of them support multiple arrays per file, but perhaps it's useful anyway.
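For what it's worth, here is a minimal, self-contained sketch of the .npy and .npz round-trip discussed below (the file names and array contents are illustrative, not from the original answer):

```python
import os
import tempfile

import numpy as np

# One array per .npy file; several named arrays per .npz archive.
a = np.arange(1000, dtype=np.float64)
b = np.linspace(0.0, 1.0, 500)

tmpdir = tempfile.mkdtemp()
npy_path = os.path.join(tmpdir, "a.npy")
npz_path = os.path.join(tmpdir, "ab.npz")

np.save(npy_path, a)
np.savez(npz_path, a=a, b=b)

a_loaded = np.load(npy_path)
with np.load(npz_path) as archive:  # .npz members are read lazily, on access
    b_loaded = archive["b"]

assert np.array_equal(a, a_loaded)
assert np.array_equal(b, b_loaded)
```

np.savez_compressed is a drop-in alternative to np.savez when disk space matters more than load time.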
Npy and binary files are both really fast and small for dense data. If the data is sparse or very structured, you might want to use npz with compression, which'll save a lot of space but cost some load time. If portability is an issue, binary is better than npy. If human readability is important, then you'll have to sacrifice a lot of performance, but it can be achieved fairly well using csv (which is also very portable of course). More details and the code are available at the github repo.", "best_answers_score":0.8, "library_name":"numpy", "question_url":"https:\/\/stackoverflow.com\/questions\/9619199\/best-way-to-preserve-numpy-arrays-on-disk", "best_answers_votes":317, "question_length":1195, "response_length":660 }, { "question":"How to save and load numpy.array() data properly? I wonder, how to save and load numpy.array data properly. Currently I'm using the numpy.savetxt() method. For example, if I got an array markers, which looks like this: I try to save it by the use of: ``` numpy.savetxt('markers.txt', markers) ``` In other script I try to open previously saved file: ``` markers = np.fromfile(\"markers.txt\") ``` And that's what I get... Saved data first looks like this: ``` 0.000000000000000000e+00 0.000000000000000000e+00 0.000000000000000000e+00 0.000000000000000000e+00 0.000000000000000000e+00 0.000000000000000000e+00 0.000000000000000000e+00 0.000000000000000000e+00 0.000000000000000000e+00 0.000000000000000000e+00 ``` But when I save just loaded data by the use of the same method, ie. numpy.savetxt() it looks like this: ``` 1.398043286095131769e-76 1.398043286095288860e-76 1.396426376485745879e-76 1.398043286055061908e-76 1.398043286095288860e-76 1.182950697433698368e-76 1.398043275797188953e-76 1.398043286095288860e-76 1.210894289234927752e-99 1.398040649781712473e-76 ``` What am I doing wrong? PS there are no other \"backstage\" operation which I perform. Just saving and loading, and that's what I get. 
Thank you in advance.", "response":"The most reliable way I have found to do this is to use np.savetxt with np.loadtxt and not np.fromfile which is better suited to binary files written with tofile. The np.fromfile and np.tofile methods write and read binary files whereas np.savetxt writes a text file. So, for example: ``` a = np.array([1, 2, 3, 4]) np.savetxt('test1.txt', a, fmt='%d') b = np.loadtxt('test1.txt', dtype=int) a == b # array([ True, True, True, True], dtype=bool) ``` Or: ``` a.tofile('test2.dat') c = np.fromfile('test2.dat', dtype=int) c == a # array([ True, True, True, True], dtype=bool) ``` I use the former method even if it is slower and creates bigger files (sometimes): the binary format can be platform dependent (for example, the file format depends on the endianness of your system). There is a platform independent format for NumPy arrays, which can be saved and read with np.save and np.load: ``` np.save('test3.npy', a) # .npy extension is added if not given d = np.load('test3.npy') a == d # array([ True, True, True, True], dtype=bool) ```", "best_answers_score":0.8, "library_name":"numpy", "question_url":"https:\/\/stackoverflow.com\/questions\/28439701\/how-to-save-and-load-numpy-array-data-properly", "best_answers_votes":262, "question_length":1227, "response_length":1038 }, { "question":"Windows Scipy Install: No Lapack\/Blas Resources Found I am trying to install python and a series of packages onto a 64bit windows 7 desktop. I have installed Python 3.4, have Microsoft Visual Studio C++ installed, and have successfully installed numpy, pandas and a few others. 
I am getting the following error when trying to install scipy: ``` numpy.distutils.system_info.NotFoundError: no lapack\/blas resources found ``` I am using pip install offline; the install command I am using is: ``` pip install --no-index --find-links=\"S:\\python\\scipy 0.15.0\" scipy ``` I have read the posts on here about requiring a compiler, which if I understand correctly is the VS C++ compiler. I am using the 2010 version as I am using Python 3.4. This has worked for other packages. Do I have to use the Windows binary or is there a way I can get pip install to work? Many thanks for the help", "response":"The following link should solve all problems with Windows and SciPy; just choose the appropriate download. I was able to pip install the package with no problems. Every other solution I have tried gave me big headaches. Source: http:\/\/www.lfd.uci.edu\/~gohlke\/pythonlibs\/#scipy Command: ``` pip install [Local File Location]\\[Your specific file such as scipy-0.16.0-cp27-none-win_amd64.whl] ``` This assumes you have installed the following already: Install Visual Studio 2015\/2013 with Python Tools (Is integrated into the setup options on install of 2015) Install Visual Studio C++ Compiler for Python Source: http:\/\/www.microsoft.com\/en-us\/download\/details.aspx?id=44266 File Name: VCForPython27.msi Install Python Version of choice Source: python.org File Name (e.g.): python-2.7.10.amd64.msi", "best_answers_score":0.8, "library_name":"numpy", "question_url":"https:\/\/stackoverflow.com\/questions\/28190534\/windows-scipy-install-no-lapack-blas-resources-found", "best_answers_votes":124, "question_length":876, "response_length":795 }, { "question":"A tool to convert MATLAB code to Python [closed] Closed. This question is seeking recommendations for software libraries, tutorials, tools, books, or other off-site resources. It does not meet Stack Overflow guidelines. It is not currently accepting answers.
We don\u2019t allow questions seeking recommendations for software libraries, tutorials, tools, books, or other off-site resources. You can edit the question so it can be answered with facts and citations. Closed 9 years ago. Improve this question I have a bunch of MATLAB code from my MS thesis which I now want to convert to Python (using numpy\/scipy and matplotlib) and distribute as open-source. I know the similarity between MATLAB and Python scientific libraries, and converting them manually will be not more than a fortnight (provided that I work towards it every day for some time). I was wondering if there was already any tool available which can do the conversion.", "response":"There are several tools for converting Matlab to Python code. The only one that's seen recent activity (last commit from June 2018) is Small Matlab to Python compiler (also developed here: SMOP@chiselapp). Other options include: LiberMate: translate from Matlab to Python and SciPy (Requires Python 2, last update 4 years ago). OMPC: Matlab to Python (a bit outdated). Mat2py: Matlab to Python (Requires Python 2). Also, for those interested in an interface between the two languages and not conversion: pymatlab: communicate from Python by sending data to the MATLAB workspace, operating on them with scripts and pulling back the resulting data. Python-Matlab wormholes: both directions of interaction supported. Python-Matlab bridge: use Matlab from within Python, offers matlab_magic for iPython, to execute normal matlab code from within ipython. PyMat: Control Matlab session from Python. pymat2: continuation of the seemingly abandoned PyMat. mlabwrap, mlabwrap-purepy: make Matlab look like Python library (based on PyMat). oct2py (repository): run GNU Octave commands from within Python. pymex: Embeds the Python Interpreter in Matlab, also on File Exchange. matpy: Access MATLAB in various ways: create variables, access .mat files, direct interface to MATLAB engine (requires MATLAB be installed). 
MatPy: Python package for numerical linear algebra and plotting with a MatLab-like interface. Btw might be helpful to look here for other migration tips: http:\/\/bci2000.org\/downloads\/BCPy2000\/Migration.html On a different note, for people who might find it useful there is: matlab2fortran", "best_answers_score":0.8, "library_name":"numpy", "question_url":"https:\/\/stackoverflow.com\/questions\/9845292\/a-tool-to-convert-matlab-code-to-python", "best_answers_votes":186, "question_length":930, "response_length":1596 }, { "question":"Finding local maxima\/minima with Numpy in a 1D numpy array Can you suggest a module function from numpy\/scipy that can find local maxima\/minima in a 1D numpy array? Obviously the simplest approach ever is to have a look at the nearest neighbours, but I would like to have an accepted solution that is part of the numpy distro.", "response":"In SciPy >= 0.11 ``` import numpy as np from scipy.signal import argrelextrema x = np.random.random(12) # for local maxima argrelextrema(x, np.greater) # for local minima argrelextrema(x, np.less) ``` Produces ``` >>> x array([ 0.56660112, 0.76309473, 0.69597908, 0.38260156, 0.24346445, 0.56021785, 0.24109326, 0.41884061, 0.35461957, 0.54398472, 0.59572658, 0.92377974]) >>> argrelextrema(x, np.greater) (array([1, 5, 7]),) >>> argrelextrema(x, np.less) (array([4, 6, 8]),) ``` Note, these are the indices of x that are local max\/min. To get the values, try: ``` >>> x[argrelextrema(x, np.greater)[0]] ``` scipy.signal also provides argrelmax and argrelmin for finding maxima and minima respectively.", "best_answers_score":0.8, "library_name":"numpy", "question_url":"https:\/\/stackoverflow.com\/questions\/4624970\/finding-local-maxima-minima-with-numpy-in-a-1d-numpy-array", "best_answers_votes":287, "question_length":326, "response_length":702 }, { "question":"Fast check for NaN in NumPy I'm looking for the fastest way to check for the occurrence of NaN (np.nan) in a NumPy array X. 
np.isnan(X) is out of the question, since it builds a boolean array of shape X.shape, which is potentially gigantic. I tried np.nan in X, but that seems not to work because np.nan != np.nan. Is there a fast and memory-efficient way to do this at all? (To those who would ask \"how gigantic\": I can't tell. This is input validation for library code.)", "response":"Ray's solution is good. However, on my machine it is about 2.5x faster to use numpy.sum in place of numpy.min: ``` In [13]: %timeit np.isnan(np.min(x)) 1000 loops, best of 3: 244 us per loop In [14]: %timeit np.isnan(np.sum(x)) 10000 loops, best of 3: 97.3 us per loop ``` Unlike min, sum doesn't require branching, which on modern hardware tends to be pretty expensive. This is probably the reason why sum is faster. edit The above test was performed with a single NaN right in the middle of the array. It is interesting to note that min is slower in the presence of NaNs than in their absence. It also seems to get slower as NaNs get closer to the start of the array. 
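A short, self-contained script to reproduce this min-versus-sum comparison (array size mirrors the tests quoted in this answer; absolute timings are machine-dependent):

```python
import timeit

import numpy as np

x = np.random.rand(100000)
x[50000] = np.nan  # one NaN in the middle of the array

# timeit returns total seconds for 1000 calls; multiply by 1e3 for us per call.
t_min = timeit.timeit(lambda: np.isnan(np.min(x)), number=1000)
t_sum = timeit.timeit(lambda: np.isnan(np.sum(x)), number=1000)

# Both reductions propagate NaN, so either one works as a detector.
assert np.isnan(np.min(x)) and np.isnan(np.sum(x))
print("min: %.1f us\/loop, sum: %.1f us\/loop" % (t_min * 1e3, t_sum * 1e3))
```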
On the other hand, sum's throughput seems constant regardless of whether there are NaNs and where they're located: ``` In [40]: x = np.random.rand(100000) In [41]: %timeit np.isnan(np.min(x)) 10000 loops, best of 3: 153 us per loop In [42]: %timeit np.isnan(np.sum(x)) 10000 loops, best of 3: 95.9 us per loop In [43]: x[50000] = np.nan In [44]: %timeit np.isnan(np.min(x)) 1000 loops, best of 3: 239 us per loop In [45]: %timeit np.isnan(np.sum(x)) 10000 loops, best of 3: 95.8 us per loop In [46]: x[0] = np.nan In [47]: %timeit np.isnan(np.min(x)) 1000 loops, best of 3: 326 us per loop In [48]: %timeit np.isnan(np.sum(x)) 10000 loops, best of 3: 95.9 us per loop ```", "best_answers_score":0.8, "library_name":"numpy", "question_url":"https:\/\/stackoverflow.com\/questions\/6736590\/fast-check-for-nan-in-numpy", "best_answers_votes":216, "question_length":472, "response_length":1341 }, { "question":"Why does it take ages to install Pandas on Alpine Linux I've noticed that installing Pandas and Numpy (it's dependency) in a Docker container using the base OS Alpine vs. CentOS or Debian takes much longer. I created a little test below to demonstrate the time difference. Aside from the few seconds Alpine takes to update and download the build dependencies to install Pandas and Numpy, why does the setup.py take around 70x more time than on Debian install? Is there any way to speed up the install using Alpine as the base image or is there another base image of comparable size to Alpine that is better to use for packages like Pandas and Numpy? Dockerfile.debian ``` FROM python:3.6.4-slim-jessie RUN pip install pandas ``` Build Debian image with Pandas & Numpy: ``` [PandasDockerTest] time docker build -t debian-pandas -f Dockerfile.debian . 
--no-cache Sending build context to Docker daemon 3.072kB Step 1\/2 : FROM python:3.6.4-slim-jessie ---> 43431c5410f3 Step 2\/2 : RUN pip install pandas ---> Running in 2e4c030f8051 Collecting pandas Downloading pandas-0.22.0-cp36-cp36m-manylinux1_x86_64.whl (26.2MB) Collecting numpy>=1.9.0 (from pandas) Downloading numpy-1.14.1-cp36-cp36m-manylinux1_x86_64.whl (12.2MB) Collecting pytz>=2011k (from pandas) Downloading pytz-2018.3-py2.py3-none-any.whl (509kB) Collecting python-dateutil>=2 (from pandas) Downloading python_dateutil-2.6.1-py2.py3-none-any.whl (194kB) Collecting six>=1.5 (from python-dateutil>=2->pandas) Downloading six-1.11.0-py2.py3-none-any.whl Installing collected packages: numpy, pytz, six, python-dateutil, pandas Successfully installed numpy-1.14.1 pandas-0.22.0 python-dateutil-2.6.1 pytz-2018.3 six-1.11.0 Removing intermediate container 2e4c030f8051 ---> a71e1c314897 Successfully built a71e1c314897 Successfully tagged debian-pandas:latest docker build -t debian-pandas -f Dockerfile.debian . --no-cache 0.07s user 0.06s system 0% cpu 13.605 total ``` Dockerfile.alpine ``` FROM python:3.6.4-alpine3.7 RUN apk --update add --no-cache g++ RUN pip install pandas ``` Build Alpine image with Pandas & Numpy: ``` [PandasDockerTest] time docker build -t alpine-pandas -f Dockerfile.alpine . 
--no-cache Sending build context to Docker daemon 16.9kB Step 1\/3 : FROM python:3.6.4-alpine3.7 ---> 4b00a94b6f26 Step 2\/3 : RUN apk --update add --no-cache g++ ---> Running in 4b0c32551e3f fetch http:\/\/dl-cdn.alpinelinux.org\/alpine\/v3.7\/main\/x86_64\/APKINDEX.tar.gz fetch http:\/\/dl-cdn.alpinelinux.org\/alpine\/v3.7\/main\/x86_64\/APKINDEX.tar.gz fetch http:\/\/dl-cdn.alpinelinux.org\/alpine\/v3.7\/community\/x86_64\/APKINDEX.tar.gz fetch http:\/\/dl-cdn.alpinelinux.org\/alpine\/v3.7\/community\/x86_64\/APKINDEX.tar.gz (1\/17) Upgrading musl (1.1.18-r2 -> 1.1.18-r3) (2\/17) Installing libgcc (6.4.0-r5) (3\/17) Installing libstdc++ (6.4.0-r5) (4\/17) Installing binutils-libs (2.28-r3) (5\/17) Installing binutils (2.28-r3) (6\/17) Installing gmp (6.1.2-r1) (7\/17) Installing isl (0.18-r0) (8\/17) Installing libgomp (6.4.0-r5) (9\/17) Installing libatomic (6.4.0-r5) (10\/17) Installing pkgconf (1.3.10-r0) (11\/17) Installing mpfr3 (3.1.5-r1) (12\/17) Installing mpc1 (1.0.3-r1) (13\/17) Installing gcc (6.4.0-r5) (14\/17) Installing musl-dev (1.1.18-r3) (15\/17) Installing libc-dev (0.7.1-r0) (16\/17) Installing g++ (6.4.0-r5) (17\/17) Upgrading musl-utils (1.1.18-r2 -> 1.1.18-r3) Executing busybox-1.27.2-r7.trigger OK: 184 MiB in 50 packages Removing intermediate container 4b0c32551e3f ---> be26c3bf4e42 Step 3\/3 : RUN pip install pandas ---> Running in 36f6024e5e2d Collecting pandas Downloading pandas-0.22.0.tar.gz (11.3MB) Collecting python-dateutil>=2 (from pandas) Downloading python_dateutil-2.6.1-py2.py3-none-any.whl (194kB) Collecting pytz>=2011k (from pandas) Downloading pytz-2018.3-py2.py3-none-any.whl (509kB) Collecting numpy>=1.9.0 (from pandas) Downloading numpy-1.14.1.zip (4.9MB) Collecting six>=1.5 (from python-dateutil>=2->pandas) Downloading six-1.11.0-py2.py3-none-any.whl Building wheels for collected packages: pandas, numpy Running setup.py bdist_wheel for pandas: started Running setup.py bdist_wheel for pandas: still running... 
Running setup.py bdist_wheel for pandas: still running... Running setup.py bdist_wheel for pandas: still running... Running setup.py bdist_wheel for pandas: still running... Running setup.py bdist_wheel for pandas: still running... Running setup.py bdist_wheel for pandas: still running... Running setup.py bdist_wheel for pandas: finished with status 'done' Stored in directory: \/root\/.cache\/pip\/wheels\/e8\/ed\/46\/0596b51014f3cc49259e52dff9824e1c6fe352048a2656fc92 Running setup.py bdist_wheel for numpy: started Running setup.py bdist_wheel for numpy: still running... Running setup.py bdist_wheel for numpy: still running... Running setup.py bdist_wheel for numpy: still running... Running setup.py bdist_wheel for numpy: finished with status 'done' Stored in directory: \/root\/.cache\/pip\/wheels\/9d\/cd\/e1\/4d418b16ea662e512349ef193ed9d9ff473af715110798c984 Successfully built pandas numpy Installing collected packages: six, python-dateutil, pytz, numpy, pandas Successfully installed numpy-1.14.1 pandas-0.22.0 python-dateutil-2.6.1 pytz-2018.3 six-1.11.0 Removing intermediate container 36f6024e5e2d ---> a93c59e6a106 Successfully built a93c59e6a106 Successfully tagged alpine-pandas:latest docker build -t alpine-pandas -f Dockerfile.alpine . --no-cache 0.54s user 0.33s system 0% cpu 16:08.47 total ```", "response":"Debian based images use only python pip to install packages with .whl format: ``` Downloading pandas-0.22.0-cp36-cp36m-manylinux1_x86_64.whl (26.2MB) Downloading numpy-1.14.1-cp36-cp36m-manylinux1_x86_64.whl (12.2MB) ``` WHL format was developed as a quicker and more reliable method of installing Python software than re-building from source code every time. WHL files only have to be moved to the correct location on the target system to be installed, whereas a source distribution requires a build step before installation. Wheel packages pandas and numpy are not supported in images based on Alpine platform. 
That's why when we install them using python pip during the building process, we always compile them from the source files in alpine: ``` Downloading pandas-0.22.0.tar.gz (11.3MB) Downloading numpy-1.14.1.zip (4.9MB) ``` and we can see the following inside container during the image building: ``` \/ # ps aux PID USER TIME COMMAND 1 root 0:00 \/bin\/sh -c pip install pandas 7 root 0:04 {pip} \/usr\/local\/bin\/python \/usr\/local\/bin\/pip install pandas 21 root 0:07 \/usr\/local\/bin\/python -c import setuptools, tokenize;__file__='\/tmp\/pip-build-en29h0ak\/pandas\/setup.py';f=getattr(tokenize, 'open', open)(__file__);code=f.read().replace('\\r\\n', '\\n 496 root 0:00 sh 660 root 0:00 \/bin\/sh -c gcc -Wno-unused-result -Wsign-compare -DNDEBUG -g -fwrapv -O3 -Wall -Wstrict-prototypes -DTHREAD_STACK_SIZE=0x100000 -fPIC -Ibuild\/src.linux-x86_64-3.6\/numpy\/core\/src\/pri 661 root 0:00 gcc -Wno-unused-result -Wsign-compare -DNDEBUG -g -fwrapv -O3 -Wall -Wstrict-prototypes -DTHREAD_STACK_SIZE=0x100000 -fPIC -Ibuild\/src.linux-x86_64-3.6\/numpy\/core\/src\/private -Inump 662 root 0:00 \/usr\/libexec\/gcc\/x86_64-alpine-linux-musl\/6.4.0\/cc1 -quiet -I build\/src.linux-x86_64-3.6\/numpy\/core\/src\/private -I numpy\/core\/include -I build\/src.linux-x86_64-3.6\/numpy\/core\/includ 663 root 0:00 ps aux ``` If we modify Dockerfile a little: ``` FROM python:3.6.4-alpine3.7 RUN apk add --no-cache g++ wget RUN wget https:\/\/pypi.python.org\/packages\/da\/c6\/0936bc5814b429fddb5d6252566fe73a3e40372e6ceaf87de3dec1326f28\/pandas-0.22.0-cp36-cp36m-manylinux1_x86_64.whl RUN pip install pandas-0.22.0-cp36-cp36m-manylinux1_x86_64.whl ``` we get the following error: ``` Step 4\/4 : RUN pip install pandas-0.22.0-cp36-cp36m-manylinux1_x86_64.whl ---> Running in 0faea63e2bda pandas-0.22.0-cp36-cp36m-manylinux1_x86_64.whl is not a supported wheel on this platform. 
The command '\/bin\/sh -c pip install pandas-0.22.0-cp36-cp36m-manylinux1_x86_64.whl' returned a non-zero code: 1 ``` Unfortunately, the only way to install pandas on an Alpine image is to wait until build finishes. Of course if you want to use the Alpine image with pandas in CI for example, the best way to do so is to compile it once, push it to any registry and use it as a base image for your needs. EDIT: If you want to use the Alpine image with pandas you can pull my nickgryg\/alpine-pandas docker image. It is a python image with pre-compiled pandas on the Alpine platform. It should save your time.", "best_answers_score":0.8, "library_name":"numpy", "question_url":"https:\/\/stackoverflow.com\/questions\/49037742\/why-does-it-take-ages-to-install-pandas-on-alpine-linux", "best_answers_votes":93, "question_length":5408, "response_length":3036 }, { "question":"Detect if a NumPy array contains at least one non-numeric value? I need to write a function which will detect if the input contains at least one value which is non-numeric. If a non-numeric value is found I will raise an error (because the calculation should only return a numeric value). The number of dimensions of the input array is not known in advance - the function should give the correct value regardless of ndim. As an extra complication the input could be a single float or numpy.float64 or even something oddball like a zero-dimensional array. The obvious way to solve this is to write a recursive function which iterates over every iterable object in the array until it finds a non-iterabe. It will apply the numpy.isnan() function over every non-iterable object. If at least one non-numeric value is found then the function will return False immediately. Otherwise if all the values in the iterable are numeric it will eventually return True. That works just fine, but it's pretty slow and I expect that NumPy has a much better way to do it. What is an alternative that is faster and more numpyish? 
Here's my mockup: ``` def contains_nan( myarray ): \"\"\" @param myarray : An n-dimensional array or a single float @type myarray : numpy.ndarray, numpy.array, float @returns: bool Returns true if myarray is numeric or only contains numeric values. Returns false if at least one non-numeric value exists Not-A-Number is given by the numpy.isnan() function. \"\"\" return True ```", "response":"This should be faster than iterating and will work regardless of shape. ``` numpy.isnan(myarray).any() ``` Edit: 30x faster: ``` import timeit s = 'import numpy;a = numpy.arange(10000.).reshape((100,100));a[10,10]=numpy.nan' ms = [ 'numpy.isnan(a).any()', 'any(numpy.isnan(x) for x in a.flatten())'] for m in ms: print \" %.2f s\" % timeit.Timer(m, s).timeit(1000), m ``` Results: ``` 0.11 s numpy.isnan(a).any() 3.75 s any(numpy.isnan(x) for x in a.flatten()) ``` Bonus: it works fine for non-array NumPy types: ``` >>> a = numpy.float64(42.) >>> numpy.isnan(a).any() False >>> a = numpy.float64(numpy.nan) >>> numpy.isnan(a).any() True ```", "best_answers_score":0.8, "library_name":"numpy", "question_url":"https:\/\/stackoverflow.com\/questions\/911871\/detect-if-a-numpy-array-contains-at-least-one-non-numeric-value", "best_answers_votes":297, "question_length":1485, "response_length":639 }, { "question":"Generate random array of floats between a range I haven't been able to find a function to generate an array of random floats of a given length between a certain range. I've looked at Random sampling but no function seems to do what I need. random.uniform comes close but it only returns a single element, not a specific number. This is what I'm after: ``` ran_floats = some_function(low=0.5, high=13.3, size=50) ``` which would return an array of 50 random non-unique floats (ie: repetitions are allowed) uniformly distributed in the range [0.5, 13.3]. 
Is there such a function?", "response":"np.random.uniform fits your use case: ``` sampl = np.random.uniform(low=0.5, high=13.3, size=(50,)) ``` Update Oct 2019: While the syntax is still supported, it looks like the API changed with NumPy 1.17 to support greater control over the random number generator. Going forward you should look at https:\/\/docs.scipy.org\/doc\/numpy\/reference\/random\/generated\/numpy.random.Generator.uniform.html The enhancement proposal is here: https:\/\/numpy.org\/neps\/nep-0019-rng-policy.html", "best_answers_score":0.8, "library_name":"numpy", "question_url":"https:\/\/stackoverflow.com\/questions\/22071987\/generate-random-array-of-floats-between-a-range", "best_answers_votes":262, "question_length":578, "response_length":499 }, { "question":"How do I find the length (or dimensions, size) of a numpy matrix in python? [duplicate] This question already has answers here: Numpy array dimensions (10 answers) Closed 12 years ago. For a numpy matrix in python ``` from numpy import matrix A = matrix([[1,2],[3,4]]) ``` How can I find the length of a row (or column) of this matrix? Equivalently, how can I know the number of rows or columns? So far, the only solution I've found is: ``` len(A) len(A[:,1]) len(A[1,:]) ``` Which returns 2, 2, and 1, respectively. From this I've gathered that len() will return the number of rows, so I can always use the transpose, len(A.T), for the number of columns. However, this feels unsatisfying and arbitrary, as when reading the line len(A), it isn't immediately obvious that this should return the number of rows. It actually works differently than len([1,2]) would for a 2D python array, as this would return 2. So, is there a more intuitive way to find the size of a matrix, or is this the best I have?
In fact, the numpy matrix object is built on top of the ndarray object, one of numpy's two fundamental objects (along with a universal function object), so it inherits from ndarray", "best_answers_score":0.8, "library_name":"numpy", "question_url":"https:\/\/stackoverflow.com\/questions\/14847457\/how-do-i-find-the-length-or-dimensions-size-of-a-numpy-matrix-in-python", "best_answers_votes":286, "question_length":999, "response_length":345 }, { "question":"ValueError when checking if variable is None or numpy.array I'd like to check if variable is None or numpy.array. I've implemented check_a function to do this. ``` def check_a(a): if not a: print \"please initialize a\" a = None check_a(a) a = np.array([1,2]) check_a(a) ``` But, this code raises ValueError. What is the straight forward way? ``` ValueError Traceback (most recent call last) in () 6 check_a(a) 7 a = np.array([1,2]) ----> 8 check_a(a) in check_a(a) 1 def check_a(a): ----> 2 if not a: 3 print \"please initialize a\" 4 5 a = None ValueError: The truth value of an array with more than one element is ambiguous. Use a.any() or a.all() ```", "response":"Using not a to test whether a is None assumes that the other possible values of a have a truth value of True. However, most NumPy arrays don't have a truth value at all, and not cannot be applied to them. If you want to test whether an object is None, the most general, reliable way is to literally use an is check against None: ``` if a is None: ... else: ... ``` This doesn't depend on objects having a truth value, so it works with NumPy arrays. Note that the test has to be is, not ==. is is an object identity test. == is whatever the arguments say it is, and NumPy arrays say it's a broadcasted elementwise equality comparison, producing a boolean array: ``` >>> a = numpy.arange(5) >>> a == None array([False, False, False, False, False]) >>> if a == None: ... pass ... 
Traceback (most recent call last): File \"\", line 1, in ValueError: The truth value of an array with more than one element is ambiguous. Use a.any() or a.all() ``` On the other side of things, if you want to test whether an object is a NumPy array, you can test its type: ``` # Careful - the type is np.ndarray, not np.array. np.array is a factory function. if type(a) is np.ndarray: ... else: ... ``` You can also use isinstance, which will also return True for subclasses of that type (if that is what you want). Considering how terrible and incompatible np.matrix is, you may not actually want this: ``` # Again, ndarray, not array, because array is a factory function. if isinstance(a, np.ndarray): ... else: ... ```", "best_answers_score":0.8, "library_name":"numpy", "question_url":"https:\/\/stackoverflow.com\/questions\/36783921\/valueerror-when-checking-if-variable-is-none-or-numpy-array", "best_answers_votes":287, "question_length":652, "response_length":1497 }, { "question":"figure of imshow() is too small I'm trying to visualize a numpy array using imshow() since it's similar to imagesc() in Matlab. ``` imshow(random.rand(8, 90), interpolation='nearest') ``` The resulting figure is very small at the center of the grey window, while most of the space is unoccupied. How can I set the parameters to make the figure larger? I tried figsize=(xx,xx) and it's not what I want. Thanks!", "response":"If you don't give an aspect argument to imshow, it will use the value for image.aspect in your matplotlibrc. The default for this value in a new matplotlibrc is equal. So imshow will plot your array with equal aspect ratio. 
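As a hedged aside (assuming a standard matplotlib install), you can inspect the configured default yourself: ```python
import matplotlib

# 'equal' is the value a fresh matplotlibrc ships with
print(matplotlib.rcParams['image.aspect'])
```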
If you don't need an equal aspect, you can set aspect to auto: ``` imshow(random.rand(8, 90), interpolation='nearest', aspect='auto') ``` which gives the following figure. If you want an equal aspect ratio, you have to adapt your figsize according to the aspect: ``` fig, ax = subplots(figsize=(18, 2)) ax.imshow(random.rand(8, 90), interpolation='nearest') tight_layout() ``` which gives you:", "best_answers_score":0.8, "library_name":"numpy", "question_url":"https:\/\/stackoverflow.com\/questions\/10540929\/figure-of-imshow-is-too-small", "best_answers_votes":215, "question_length":409, "response_length":612 }, { "question":"How to fix 'Object arrays cannot be loaded when allow_pickle=False' for imdb.load_data() function? I'm trying to implement the binary classification example using the IMDb dataset in Google Colab. I have implemented this model before.
But when I tried to do it again after a few days, it returned a value error: 'Object arrays cannot be loaded when allow_pickle=False' for the load_data() function. I have already tried solving this, referring to an existing answer for a similar problem: How to fix 'Object arrays cannot be loaded when allow_pickle=False' in the sketch_rnn algorithm. But it turns out that just adding an allow_pickle argument isn't sufficient. My code: ``` from keras.datasets import imdb (train_data, train_labels), (test_data, test_labels) = imdb.load_data(num_words=10000) ``` The error: ``` ValueError Traceback (most recent call last) in () 1 from keras.datasets import imdb ----> 2 (train_data, train_labels), (test_data, test_labels) = imdb.load_data(num_words=10000) 2 frames \/usr\/local\/lib\/python3.6\/dist-packages\/keras\/datasets\/imdb.py in load_data(path, num_words, skip_top, maxlen, seed, start_char, oov_char, index_from, **kwargs) 57 file_hash='599dadb1135973df5b59232a0e9a887c') 58 with np.load(path) as f: ---> 59 x_train, labels_train = f['x_train'], f['y_train'] 60 x_test, labels_test = f['x_test'], f['y_test'] 61 \/usr\/local\/lib\/python3.6\/dist-packages\/numpy\/lib\/npyio.py in __getitem__(self, key) 260 return format.read_array(bytes, 261 allow_pickle=self.allow_pickle, --> 262 pickle_kwargs=self.pickle_kwargs) 263 else: 264 return self.zip.read(key) \/usr\/local\/lib\/python3.6\/dist-packages\/numpy\/lib\/format.py in read_array(fp, allow_pickle, pickle_kwargs) 690 # The array contained Python objects. We need to unpickle the data. 
691 if not allow_pickle: --> 692 raise ValueError(\"Object arrays cannot be loaded when \" 693 \"allow_pickle=False\") 694 if pickle_kwargs is None: ValueError: Object arrays cannot be loaded when allow_pickle=False ```", "response":"Here's a trick to force imdb.load_data to allow pickle by, in your notebook, replacing this line: ```py (train_data, train_labels), (test_data, test_labels) = imdb.load_data(num_words=10000) ``` by this: ```py import numpy as np # save np.load np_load_old = np.load # modify the default parameters of np.load np.load = lambda *a,**k: np_load_old(*a, allow_pickle=True, **k) # call load_data with allow_pickle implicitly set to true (train_data, train_labels), (test_data, test_labels) = imdb.load_data(num_words=10000) # restore np.load for future normal usage np.load = np_load_old ```", "best_answers_score":0.8, "library_name":"numpy", "question_url":"https:\/\/stackoverflow.com\/questions\/55890813\/how-to-fix-object-arrays-cannot-be-loaded-when-allow-pickle-false-for-imdb-loa", "best_answers_votes":160, "question_length":1984, "response_length":586 }, { "question":"NumPy selecting specific column index per row by using a list of indexes I'm struggling to select the specific columns per row of a NumPy matrix. Suppose I have the following matrix which I would call X: ``` [1, 2, 3] [4, 5, 6] [7, 8, 9] ``` I also have a list of column indexes per every row which I would call Y: ``` [1, 0, 2] ``` I need to get the values: ``` [2] [4] [9] ``` Instead of a list with indexes Y, I can also produce a matrix with the same shape as X where every column is a bool \/ int in the range 0-1 value, indicating whether this is the required column. ``` [0, 1, 0] [1, 0, 0] [0, 0, 1] ``` I know this can be done with iterating over the array and selecting the column values I need. However, this will be executed frequently on big arrays of data and that's why it has to run as fast as it can. 
I was thus wondering if there is a better solution?", "response":"If you've got a boolean array you can do direct selection based on that like so: ``` >>> a = np.array([True, True, True, False, False]) >>> b = np.array([1,2,3,4,5]) >>> b[a] array([1, 2, 3]) ``` To go along with your initial example you could do the following: ``` >>> a = np.array([[1,2,3], [4,5,6], [7,8,9]]) >>> b = np.array([[False,True,False],[True,False,False],[False,False,True]]) >>> a[b] array([2, 4, 9]) ``` You can also add in an arange and do direct selection on that, though depending on how you're generating your boolean array and what your code looks like YMMV. ``` >>> a = np.array([[1,2,3], [4,5,6], [7,8,9]]) >>> a[np.arange(len(a)), [1,0,2]] array([2, 4, 9]) ```", "best_answers_score":0.8, "library_name":"numpy", "question_url":"https:\/\/stackoverflow.com\/questions\/23435782\/numpy-selecting-specific-column-index-per-row-by-using-a-list-of-indexes", "best_answers_votes":166, "question_length":868, "response_length":683 }, { "question":"Most efficient way to find mode in numpy array I have a 2D array containing integers (both positive or negative). Each row represents the values over time for a particular spatial site, whereas each column represents values for various spatial sites for a given time. So if the array is like: ``` 1 3 4 2 2 7 5 2 2 1 4 1 3 3 2 2 1 1 ``` The result should be ``` 1 3 2 2 2 1 ``` Note that when there are multiple values for mode, any one (selected randomly) may be set as mode. I can iterate over the columns finding mode one at a time but I was hoping numpy might have some in-built function to do that. 
Or if there is a trick to find that efficiently without looping.", "response":"Check scipy.stats.mode() (inspired by @tom10's comment): ``` import numpy as np from scipy import stats a = np.array([[1, 3, 4, 2, 2, 7], [5, 2, 2, 1, 4, 1], [3, 3, 2, 2, 1, 1]]) m = stats.mode(a) print(m) ``` Output: ``` ModeResult(mode=array([[1, 3, 2, 2, 1, 1]]), count=array([[1, 2, 2, 2, 1, 2]])) ``` As you can see, it returns both the mode as well as the counts. You can select the modes directly via m[0]: ``` print(m[0]) ``` Output: ``` [[1 3 2 2 1 1]] ```", "best_answers_score":0.8, "library_name":"numpy", "question_url":"https:\/\/stackoverflow.com\/questions\/16330831\/most-efficient-way-to-find-mode-in-numpy-array", "best_answers_votes":205, "question_length":668, "response_length":465 }, { "question":"Iterating over a numpy array Is there a less verbose alternative to this: ``` for x in xrange(array.shape[0]): for y in xrange(array.shape[1]): do_stuff(x, y) ``` I came up with this: ``` for x, y in itertools.product(map(xrange, array.shape)): do_stuff(x, y) ``` Which saves one indentation, but is still pretty ugly. I'm hoping for something that looks like this pseudocode: ``` for x, y in array.indices: do_stuff(x, y) ``` Does anything like that exist?", "response":"I think you're looking for the ndenumerate. ``` >>> a =numpy.array([[1,2],[3,4],[5,6]]) >>> for (x,y), value in numpy.ndenumerate(a): ... print x,y ... 0 0 0 1 1 0 1 1 2 0 2 1 ``` Regarding the performance. It is a bit slower than a list comprehension. ``` X = np.zeros((100, 100, 100)) %timeit list([((i,j,k), X[i,j,k]) for i in range(X.shape[0]) for j in range(X.shape[1]) for k in range(X.shape[2])]) 1 loop, best of 3: 376 ms per loop %timeit list(np.ndenumerate(X)) 1 loop, best of 3: 570 ms per loop ``` If you are worried about the performance you could optimise a bit further by looking at the implementation of ndenumerate, which does 2 things, converting to an array and looping. 
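A related option, offered only as a sketch, is np.ndindex, which iterates the index tuples for a given shape without touching the values: ```python
import numpy as np

a = np.array([[1, 2], [3, 4], [5, 6]])
# ndindex walks the indices in C order, like the nested-loop version
indices = list(np.ndindex(*a.shape))
print(indices[:3])
```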
If you know you have an array, you can call the .coords attribute of the flat iterator. ``` a = X.flat %timeit list([(a.coords, x) for x in a]) 1 loop, best of 3: 305 ms per loop ```", "best_answers_score":0.8, "library_name":"numpy", "question_url":"https:\/\/stackoverflow.com\/questions\/6967463\/iterating-over-a-numpy-array", "best_answers_votes":220, "question_length":457, "response_length":877 }, { "question":"python numpy machine epsilon I am trying to understand what is machine epsilon. According to Wikipedia, it can be calculated as follows: ``` def machineEpsilon(func=float): machine_epsilon = func(1) while func(1)+func(machine_epsilon) != func(1): machine_epsilon_last = machine_epsilon machine_epsilon = func(machine_epsilon) \/ func(2) return machine_epsilon_last ``` However, it is suitable only for double precision numbers. I am interested in modifying it to support also single precision numbers. I read that numpy can be used, particularly numpy.float32 class. Can anybody help with modifying the function?", "response":"An easier way to get the machine epsilon for a given float type is to use np.finfo(): ``` print(np.finfo(float).eps) # 2.22044604925e-16 print(np.finfo(np.float32).eps) # 1.19209e-07 ```", "best_answers_score":0.8, "library_name":"numpy", "question_url":"https:\/\/stackoverflow.com\/questions\/19141432\/python-numpy-machine-epsilon", "best_answers_votes":290, "question_length":615, "response_length":186 }, { "question":"What is the difference between NaN and None? I am reading two columns of a csv file using pandas read_csv() and then assigning the values to a dictionary. The columns contain strings of numbers and letters. Occasionally there are cases where a cell is empty. In my opinion, the value read to that dictionary entry should be None but instead nan is assigned. Surely None is more descriptive of an empty cell as it has a null value, whereas nan just says that the value read is not a number. 
Is my understanding correct? What IS the difference between None and nan? Why is nan assigned instead of None? Also, my dictionary check for any empty cells has been using numpy.isnan(): ``` for k, v in my_dict.iteritems(): if np.isnan(v): ``` But this gives me an error saying that I cannot use this check for v. I guess it is because an integer or float variable, not a string, is meant to be used. If this is true, how can I check v for an \"empty cell\"\/nan case?", "response":"NaN is used as a placeholder for missing data consistently in pandas; consistency is good. I usually read\/translate NaN as \"missing\". Also see the 'working with missing data' section in the docs. Wes writes in the docs 'choice of NA-representation': After years of production use [NaN] has proven, at least in my opinion, to be the best decision given the state of affairs in NumPy and Python in general. The special value NaN (Not-A-Number) is used everywhere as the NA value, and there are API functions isna and notna which can be used across the dtypes to detect NA values. ... Thus, I have chosen the Pythonic \u201cpracticality beats purity\u201d approach and traded integer NA capability for a much simpler approach of using a special value in float and object arrays to denote NA, and promoting integer arrays to floating when NAs must be introduced. Note: the \"gotcha\" that integer Series containing missing data are upcast to floats. In my opinion the main reason to use NaN (over None) is that it can be stored with numpy's float64 dtype, rather than the less efficient object dtype, see NA type promotions. ``` # without forcing dtype it changes None to NaN!
s_bad = pd.Series([1, None], dtype=object) s_good = pd.Series([1, np.nan]) In [13]: s_bad.dtype Out[13]: dtype('O') In [14]: s_good.dtype Out[14]: dtype('float64') ``` Jeff comments (below) on this: np.nan allows for vectorized operations; it's a float value, while None, by definition, forces object type, which basically disables all efficiency in numpy. So repeat 3 times fast: object==bad, float==good Saying that, many operations may still work just as well with None vs NaN (but perhaps are not supported, i.e. they may sometimes give surprising results): ``` In [15]: s_bad.sum() Out[15]: 1 In [16]: s_good.sum() Out[16]: 1.0 ``` To answer the second question: You should be using isna and notna to test for missing data (NaN).", "best_answers_score":0.8, "library_name":"numpy", "question_url":"https:\/\/stackoverflow.com\/questions\/17534106\/what-is-the-difference-between-nan-and-none", "best_answers_votes":182, "question_length":953, "response_length":1893 }, { "question":"How do I compute derivative using Numpy? How do I calculate the derivative of a function, for example y = x**2 + 1, using numpy? Let's say, I want the value of the derivative at x = 5...", "response":"You have four options: finite differences, automatic derivatives, symbolic differentiation, or computing derivatives by hand. Finite differences require no external tools but are prone to numerical error and, if you're in a multivariate situation, can take a while. Symbolic differentiation is ideal if your problem is simple enough. Symbolic methods are getting quite robust these days. SymPy is an excellent project for this that integrates well with NumPy. Look at the autowrap or lambdify functions or check out Jensen's blogpost about a similar question. Automatic derivatives are very cool, aren't prone to numeric errors, but do require some additional libraries (google for this, there are a few good options). This is the most robust but also the most sophisticated\/difficult to set up choice.
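To make the finite-difference option concrete, here is a minimal sketch; the helper name and the step size h are arbitrary choices, not part of any NumPy API: ```python
def derivative(f, x, h=1e-6):
    # central-difference approximation of f'(x)
    return (f(x + h) - f(x - h)) / (2.0 * h)

# d/dx (x**2 + 1) = 2*x, so the value at x = 5 should be close to 10
d = derivative(lambda x: x**2 + 1, 5.0)
```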
If you're fine restricting yourself to numpy syntax then Theano might be a good choice. Here is an example using SymPy ``` In [1]: from sympy import * In [2]: import numpy as np In [3]: x = Symbol('x') In [4]: y = x**2 + 1 In [5]: yprime = y.diff(x) In [6]: yprime Out[6]: 2\u22c5x In [7]: f = lambdify(x, yprime, 'numpy') In [8]: f(np.ones(5)) Out[8]: [ 2. 2. 2. 2. 2.] ```", "best_answers_score":0.8, "library_name":"numpy", "question_url":"https:\/\/stackoverflow.com\/questions\/9876290\/how-do-i-compute-derivative-using-numpy", "best_answers_votes":208, "question_length":177, "response_length":1163 }, { "question":"Performance of Pandas apply vs np.vectorize to create new column from existing columns I am using Pandas dataframes and want to create a new column as a function of existing columns. I have not seen a good discussion of the speed difference between df.apply() and np.vectorize(), so I thought I would ask here. The Pandas apply() function is slow. From what I measured (shown below in some experiments), using np.vectorize() is 25x faster (or more) than using the DataFrame function apply() , at least on my 2016 MacBook Pro. Is this an expected result, and why? For example, suppose I have the following dataframe with N rows: ``` N = 10 A_list = np.random.randint(1, 100, N) B_list = np.random.randint(1, 100, N) df = pd.DataFrame({'A': A_list, 'B': B_list}) df.head() # A B # 0 78 50 # 1 23 91 # 2 55 62 # 3 82 64 # 4 99 80 ``` Suppose further that I want to create a new column as a function of the two columns A and B. In the example below, I'll use a simple function divide(). 
To apply the function, I can use either df.apply() or np.vectorize(): ``` def divide(a, b): if b == 0: return 0.0 return float(a)\/b df['result'] = df.apply(lambda row: divide(row['A'], row['B']), axis=1) df['result2'] = np.vectorize(divide)(df['A'], df['B']) df.head() # A B result result2 # 0 78 50 1.560000 1.560000 # 1 23 91 0.252747 0.252747 # 2 55 62 0.887097 0.887097 # 3 82 64 1.281250 1.281250 # 4 99 80 1.237500 1.237500 ``` If I increase N to real-world sizes like 1 million or more, then I observe that np.vectorize() is 25x faster or more than df.apply(). Below is some complete benchmarking code: ``` import pandas as pd import numpy as np import time def divide(a, b): if b == 0: return 0.0 return float(a)\/b for N in [1000, 10000, 100000, 1000000, 10000000]: print '' A_list = np.random.randint(1, 100, N) B_list = np.random.randint(1, 100, N) df = pd.DataFrame({'A': A_list, 'B': B_list}) start_epoch_sec = int(time.time()) df['result'] = df.apply(lambda row: divide(row['A'], row['B']), axis=1) end_epoch_sec = int(time.time()) result_apply = end_epoch_sec - start_epoch_sec start_epoch_sec = int(time.time()) df['result2'] = np.vectorize(divide)(df['A'], df['B']) end_epoch_sec = int(time.time()) result_vectorize = end_epoch_sec - start_epoch_sec print 'N=%d, df.apply: %d sec, np.vectorize: %d sec' % \\ (N, result_apply, result_vectorize) # Make sure results from df.apply and np.vectorize match. assert(df['result'].equals(df['result2'])) ``` The results are shown below: ``` N=1000, df.apply: 0 sec, np.vectorize: 0 sec N=10000, df.apply: 1 sec, np.vectorize: 0 sec N=100000, df.apply: 2 sec, np.vectorize: 0 sec N=1000000, df.apply: 24 sec, np.vectorize: 1 sec N=10000000, df.apply: 262 sec, np.vectorize: 4 sec ``` If np.vectorize() is in general always faster than df.apply(), then why is np.vectorize() not mentioned more? 
I only ever see StackOverflow posts related to df.apply(), such as: pandas create new column based on values from other columns How do I use Pandas 'apply' function to multiple columns? How to apply a function to two columns of Pandas dataframe", "response":"I will start by saying that the power of Pandas and NumPy arrays is derived from high-performance vectorised calculations on numeric arrays.1 The entire point of vectorised calculations is to avoid Python-level loops by moving calculations to highly optimised C code and utilising contiguous memory blocks.2 Python-level loops Now we can look at some timings. Below are all Python-level loops which produce either pd.Series, np.ndarray or list objects containing the same values. For the purposes of assignment to a series within a dataframe, the results are comparable. ``` # Python 3.6.5, NumPy 1.14.3, Pandas 0.23.0 np.random.seed(0) N = 10**5 %timeit list(map(divide, df['A'], df['B'])) # 43.9 ms %timeit np.vectorize(divide)(df['A'], df['B']) # 48.1 ms %timeit [divide(a, b) for a, b in zip(df['A'], df['B'])] # 49.4 ms %timeit [divide(a, b) for a, b in df[['A', 'B']].itertuples(index=False)] # 112 ms %timeit df.apply(lambda row: divide(*row), axis=1, raw=True) # 760 ms %timeit df.apply(lambda row: divide(row['A'], row['B']), axis=1) # 4.83 s %timeit [divide(row['A'], row['B']) for _, row in df[['A', 'B']].iterrows()] # 11.6 s ``` Some takeaways: The tuple-based methods (the first 4) are a factor more efficient than pd.Series-based methods (the last 3). np.vectorize, list comprehension + zip and map methods, i.e. the top 3, all have roughly the same performance. This is because they use tuple and bypass some Pandas overhead from pd.DataFrame.itertuples. There is a significant speed improvement from using raw=True with pd.DataFrame.apply versus without. This option feeds NumPy arrays to the custom function instead of pd.Series objects. 
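For reference, the tuple-based pattern that the fast variants share can be sketched without pandas at all (the numbers here are made up for illustration): ```python
def divide(a, b):
    return float(a) / b if b != 0 else 0.0

A = [78, 23, 55]
B = [50, 91, 0]

# zip and map both feed plain scalar tuples to divide,
# bypassing any per-row pd.Series construction
via_zip = [divide(a, b) for a, b in zip(A, B)]
via_map = list(map(divide, A, B))
```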
pd.DataFrame.apply: just another loop To see exactly the objects Pandas passes around, you can amend your function trivially: ``` def foo(row): print(type(row)) assert False # because you only need to see this once df.apply(lambda row: foo(row), axis=1) ``` Output: <class 'pandas.core.series.Series'>. Creating, passing and querying a Pandas series object carries significant overheads relative to NumPy arrays. This shouldn't be a surprise: Pandas series include a decent amount of scaffolding to hold an index, values, attributes, etc. Do the same exercise again with raw=True and you'll see <class 'numpy.ndarray'>. All this is described in the docs, but seeing it is more convincing. np.vectorize: fake vectorisation The docs for np.vectorize have the following note: The vectorized function evaluates pyfunc over successive tuples of the input arrays like the python map function, except it uses the broadcasting rules of numpy. The \"broadcasting rules\" are irrelevant here, since the input arrays have the same dimensions. The parallel to map is instructive, since the map version above has almost identical performance. The source code shows what's happening: np.vectorize converts your input function into a Universal function (\"ufunc\") via np.frompyfunc. There is some optimisation, e.g. caching, which can lead to some performance improvement. In short, np.vectorize does what a Python-level loop should do, but pd.DataFrame.apply adds a chunky overhead. There's no JIT-compilation which you see with numba (see below). It's just a convenience. True vectorisation: what you should use Why aren't the above differences mentioned anywhere? Because the performance of truly vectorised calculations makes them irrelevant: ``` %timeit np.where(df['B'] == 0, 0, df['A'] \/ df['B']) # 1.17 ms %timeit (df['A'] \/ df['B']).replace([np.inf, -np.inf], 0) # 1.96 ms ``` Yes, that's ~40x faster than the fastest of the above loopy solutions. Either of these is acceptable. In my opinion, the first is succinct, readable and efficient.
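The same guarded division can be sketched in plain NumPy (no pandas); masking the denominator first avoids ever evaluating a division by zero. This is an illustration, not taken from the benchmark itself: ```python
import numpy as np

A = np.array([78.0, 23.0, 55.0])
B = np.array([50.0, 91.0, 0.0])

# replace zero denominators before dividing, then zero out those slots
safe_B = np.where(B == 0, 1.0, B)
result = np.where(B == 0, 0.0, A / safe_B)
```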
Only look at other methods, e.g. numba below, if performance is critical and this is part of your bottleneck. numba.njit: greater efficiency When loops are considered viable they are usually optimised via numba with underlying NumPy arrays to move as much as possible to C. Indeed, numba improves performance to microseconds. Without some cumbersome work, it will be difficult to get much more efficient than this. ``` from numba import njit @njit def divide(a, b): res = np.empty(a.shape) for i in range(len(a)): if b[i] != 0: res[i] = a[i] \/ b[i] else: res[i] = 0 return res %timeit divide(df['A'].values, df['B'].values) # 717 \u00b5s ``` Using @njit(parallel=True) may provide a further boost for larger arrays. 1 Numeric types include: int, float, datetime, bool, category. They exclude object dtype and can be held in contiguous memory blocks. 2 There are at least 2 reasons why NumPy operations are efficient versus Python: Everything in Python is an object. This includes, unlike C, numbers. Python types therefore have an overhead which does not exist with native C types. NumPy methods are usually C-based. In addition, optimised algorithms are used where possible.", "best_answers_score":0.8, "library_name":"numpy", "question_url":"https:\/\/stackoverflow.com\/questions\/52673285\/performance-of-pandas-apply-vs-np-vectorize-to-create-new-column-from-existing-c", "best_answers_votes":259, "question_length":3076, "response_length":4797 }, { "question":"Selecting specific rows and columns from NumPy array I've been going crazy trying to figure out what stupid thing I'm doing wrong here. I'm using NumPy, and I have specific row indices and specific column indices that I want to select from. 
Here's the gist of my problem: ``` import numpy as np a = np.arange(20).reshape((5,4)) # array([[ 0, 1, 2, 3], # [ 4, 5, 6, 7], # [ 8, 9, 10, 11], # [12, 13, 14, 15], # [16, 17, 18, 19]]) # If I select certain rows, it works print a[[0, 1, 3], :] # array([[ 0, 1, 2, 3], # [ 4, 5, 6, 7], # [12, 13, 14, 15]]) # If I select certain rows and a single column, it works print a[[0, 1, 3], 2] # array([ 2, 6, 14]) # But if I select certain rows AND certain columns, it fails print a[[0,1,3], [0,2]] # Traceback (most recent call last): # File \"\", line 1, in # ValueError: shape mismatch: objects cannot be broadcast to a single shape ``` Why is this happening? Surely I should be able to select the 1st, 2nd, and 4th rows, and 1st and 3rd columns? The result I'm expecting is: ``` a[[0,1,3], [0,2]] => [[0, 2], [4, 6], [12, 14]] ```", "response":"As Toan suggests, a simple hack would be to just select the rows first, and then select the columns over that. ``` >>> a[[0,1,3], :] # Returns the rows you want array([[ 0, 1, 2, 3], [ 4, 5, 6, 7], [12, 13, 14, 15]]) >>> a[[0,1,3], :][:, [0,2]] # Selects the columns you want as well array([[ 0, 2], [ 4, 6], [12, 14]]) ``` [Edit] The built-in method: np.ix_ I recently discovered that numpy gives you an in-built one-liner to doing exactly what @Jaime suggested, but without having to use broadcasting syntax (which suffers from lack of readability). From the docs: Using ix_ one can quickly construct index arrays that will index the cross product. a[np.ix_([1,3],[2,5])] returns the array [[a[1,2] a[1,5]], [a[3,2] a[3,5]]]. 
So you use it like this: ``` >>> a = np.arange(20).reshape((5,4)) >>> a[np.ix_([0,1,3], [0,2])] array([[ 0, 2], [ 4, 6], [12, 14]]) ``` And the way it works is that it takes care of aligning arrays the way Jaime suggested, so that broadcasting happens properly: ``` >>> np.ix_([0,1,3], [0,2]) (array([[0], [1], [3]]), array([[0, 2]])) ``` Also, as MikeC says in a comment, np.ix_ has the advantage of returning a view, which my first (pre-edit) answer did not. This means you can now assign to the indexed array: ``` >>> a[np.ix_([0,1,3], [0,2])] = -1 >>> a array([[-1, 1, -1, 3], [-1, 5, -1, 7], [ 8, 9, 10, 11], [-1, 13, -1, 15], [16, 17, 18, 19]]) ```", "best_answers_score":0.8, "library_name":"numpy", "question_url":"https:\/\/stackoverflow.com\/questions\/22927181\/selecting-specific-rows-and-columns-from-numpy-array", "best_answers_votes":148, "question_length":1069, "response_length":1382 }, { "question":"python how to pad numpy array with zeros I want to know how I can pad a 2D numpy array with zeros using python 2.6.6 with numpy version 1.5.0. But these are my limitations. Therefore I cannot use np.pad. For example, I want to pad a with zeros such that its shape matches b. The reason why I want to do this is so I can do: ``` b-a ``` such that ``` >>> a array([[ 1., 1., 1., 1., 1.], [ 1., 1., 1., 1., 1.], [ 1., 1., 1., 1., 1.]]) >>> b array([[ 3., 3., 3., 3., 3., 3.], [ 3., 3., 3., 3., 3., 3.], [ 3., 3., 3., 3., 3., 3.], [ 3., 3., 3., 3., 3., 3.]]) >>> c array([[1, 1, 1, 1, 1, 0], [1, 1, 1, 1, 1, 0], [1, 1, 1, 1, 1, 0], [0, 0, 0, 0, 0, 0]]) ``` The only way I can think of doing this is appending; however, this seems pretty ugly. Is there a cleaner solution, possibly using b.shape? Edit: Thank you to MSeifert's answer.
I had to clean it up a bit, and this is what I got: ``` def pad(array, reference_shape, offsets): \"\"\" array: Array to be padded reference_shape: tuple of size of ndarray to create offsets: list of offsets (number of elements must be equal to the dimension of the array) will throw a ValueError if offsets is too big and the reference_shape cannot handle the offsets \"\"\" # Create an array of zeros with the reference shape result = np.zeros(reference_shape) # Create a list of slices from offset to offset + shape in each dimension insertHere = [slice(offsets[dim], offsets[dim] + array.shape[dim]) for dim in range(array.ndim)] # Insert the array in the result at the specified offsets result[insertHere] = array return result ```", "response":"NumPy 1.7.0 (when numpy.pad was added) is pretty old now (it was released in 2013) so even though the question asked for a way without using that function I thought it could be useful to know how that could be achieved using numpy.pad. It's actually pretty simple: ``` >>> import numpy as np >>> a = np.array([[ 1., 1., 1., 1., 1.], ... [ 1., 1., 1., 1., 1.], ... [ 1., 1., 1., 1., 1.]]) >>> np.pad(a, [(0, 1), (0, 1)], mode='constant') array([[ 1., 1., 1., 1., 1., 0.], [ 1., 1., 1., 1., 1., 0.], [ 1., 1., 1., 1., 1., 0.], [ 0., 0., 0., 0., 0., 0.]]) ``` In this case I used that 0 is the default value for mode='constant'. But it could also be specified by passing it in explicitly: ``` >>> np.pad(a, [(0, 1), (0, 1)], mode='constant', constant_values=0) array([[ 1., 1., 1., 1., 1., 0.], [ 1., 1., 1., 1., 1., 0.], [ 1., 1., 1., 1., 1., 0.], [ 0., 0., 0., 0., 0., 0.]]) ``` Just in case the second argument ([(0, 1), (0, 1)]) seems confusing: Each list item (in this case tuple) corresponds to a dimension and item therein represents the padding before (first element) and after (second element). 
So: ```none
[(0, 1), (0, 1)]
         ^^^^^^------ padding for second dimension
 ^^^^^^-------------- padding for first dimension

  ^------------------ no padding at the beginning of the first axis
     ^--------------- pad with one \"value\" at the end of the first axis.
``` In this case the padding for the first and second axis are identical, so one could also just pass in the 2-tuple: ``` >>> np.pad(a, (0, 1), mode='constant') array([[ 1., 1., 1., 1., 1., 0.], [ 1., 1., 1., 1., 1., 0.], [ 1., 1., 1., 1., 1., 0.], [ 0., 0., 0., 0., 0., 0.]]) ``` In case the padding before and after is identical one could even omit the tuple (not applicable in this case though): ``` >>> np.pad(a, 1, mode='constant') array([[ 0., 0., 0., 0., 0., 0., 0.], [ 0., 1., 1., 1., 1., 1., 0.], [ 0., 1., 1., 1., 1., 1., 0.], [ 0., 1., 1., 1., 1., 1., 0.], [ 0., 0., 0., 0., 0., 0., 0.]]) ``` Or if the padding before and after is identical but different for the axis, you could also omit the second argument in the inner tuples: ``` >>> np.pad(a, [(1, ), (2, )], mode='constant') array([[ 0., 0., 0., 0., 0., 0., 0., 0., 0.], [ 0., 0., 1., 1., 1., 1., 1., 0., 0.], [ 0., 0., 1., 1., 1., 1., 1., 0., 0.], [ 0., 0., 1., 1., 1., 1., 1., 0., 0.], [ 0., 0., 0., 0., 0., 0., 0., 0., 0.]]) ``` However I tend to prefer to always use the explicit one, because it's just too easy to make mistakes (when NumPy's expectations differ from your intentions): ``` >>> np.pad(a, [1, 2], mode='constant') array([[ 0., 0., 0., 0., 0., 0., 0., 0.], [ 0., 1., 1., 1., 1., 1., 0., 0.], [ 0., 1., 1., 1., 1., 1., 0., 0.], [ 0., 1., 1., 1., 1., 1., 0., 0.], [ 0., 0., 0., 0., 0., 0., 0., 0.], [ 0., 0., 0., 0., 0., 0., 0., 0.]]) ``` Here NumPy thinks you wanted to pad every axis with 1 element before and 2 elements after! Even if you intended it to pad with 1 element in axis 1 and 2 elements for axis 2.
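Tying this back to the original goal of matching b's shape before computing b - a, a sketch that derives the pad widths from the two shapes (the variable names are mine): ```python
import numpy as np

a = np.ones((3, 5))
b = np.full((4, 6), 3.0)

# pad only at the end of each axis, by however much b is bigger than a
pad_widths = [(0, bdim - adim) for bdim, adim in zip(b.shape, a.shape)]
c = np.pad(a, pad_widths, mode='constant')
diff = b - c
```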
I used lists of tuples for the padding, note that this is just \"my convention\", you could also use lists of lists or tuples of tuples, or even tuples of arrays. NumPy just checks the length of the argument (or if it doesn't have a length) and the length of each item (or if it has a length)!", "best_answers_score":0.8, "library_name":"numpy", "question_url":"https:\/\/stackoverflow.com\/questions\/35751306\/python-how-to-pad-numpy-array-with-zeros", "best_answers_votes":299, "question_length":1557, "response_length":3248 }, { "question":"How to identify numpy types in python? How can one reliably determine if an object has a numpy type? I realize that this question goes against the philosophy of duck typing, but idea is to make sure a function (which uses scipy and numpy) never returns a numpy type unless it is called with a numpy type. This comes up in the solution to another question, but I think the general problem of determining if an object has a numpy type is far enough away from that original question that they should be separated.", "response":"Use the builtin type function to get the type, then you can use the __module__ property to find out where it was defined: ``` >>> import numpy as np a = np.array([1, 2, 3]) >>> type(a) >>> type(a).__module__ 'numpy' >>> type(a).__module__ == np.__name__ True ```", "best_answers_score":0.8, "library_name":"numpy", "question_url":"https:\/\/stackoverflow.com\/questions\/12569452\/how-to-identify-numpy-types-in-python", "best_answers_votes":156, "question_length":510, "response_length":263 }, { "question":"How to write a multidimensional array to a text file? In another question, other users offered some help if I could supply the array I was having trouble with. However, I even fail at a basic I\/O task, such as writing an array to a file. Can anyone explain what kind of loop I would need to write a 4x11x14 numpy array to file? 
This array consists of four 11 x 14 arrays, so I should format it with a nice newline, to make the reading of the file easier on others. Edit: So I've tried the numpy.savetxt function. Strangely, it gives the following error: ``` TypeError: float argument required, not numpy.ndarray ``` I assume that this is because the function doesn't work with multidimensional arrays? Any solutions? I would like to keep it all within one file.", "response":"If you want to write it to disk so that it will be easy to read back in as a numpy array, look into numpy.save. Pickling it will work fine, as well, but it's less efficient for large arrays (which yours isn't, so either is perfectly fine). If you want it to be human readable, look into numpy.savetxt. Edit: So, it seems like savetxt isn't quite as great an option for arrays with >2 dimensions... But just to draw everything out to its full conclusion: I just realized that numpy.savetxt chokes on ndarrays with more than 2 dimensions... This is probably by design, as there's no inherently defined way to indicate additional dimensions in a text file. E.g. this (a 2D array) works fine: ``` import numpy as np x = np.arange(20).reshape((4,5)) np.savetxt('test.txt', x) ``` While the same thing would fail (with a rather uninformative error: TypeError: float argument required, not numpy.ndarray) for a 3D array: ``` import numpy as np x = np.arange(200).reshape((4,5,10)) np.savetxt('test.txt', x) ``` One workaround is just to break the 3D (or greater) array into 2D slices. E.g. ``` x = np.arange(200).reshape((4,5,10)) with open('test.txt', 'w') as outfile: for slice_2d in x: np.savetxt(outfile, slice_2d) ``` However, our goal is to be clearly human readable, while still being easily read back in with numpy.loadtxt. Therefore, we can be a bit more verbose, and differentiate the slices using commented out lines. By default, numpy.loadtxt will ignore any lines that start with # (or whichever character is specified by the comments kwarg).
(This looks more verbose than it actually is...) ``` import numpy as np # Generate some test data data = np.arange(200).reshape((4,5,10)) # Write the array to disk with open('test.txt', 'w') as outfile: # I'm writing a header here just for the sake of readability # Any line starting with \"#\" will be ignored by numpy.loadtxt outfile.write('# Array shape: {0}\\n'.format(data.shape)) # Iterating through a ndimensional array produces slices along # the last axis. This is equivalent to data[i,:,:] in this case for data_slice in data: # The formatting string indicates that I'm writing out # the values in left-justified columns 7 characters in width # with 2 decimal places. np.savetxt(outfile, data_slice, fmt='%-7.2f') # Writing out a break to indicate different slices... outfile.write('# New slice\\n') ``` This yields: ``` # Array shape: (4, 5, 10) 0.00 1.00 2.00 3.00 4.00 5.00 6.00 7.00 8.00 9.00 10.00 11.00 12.00 13.00 14.00 15.00 16.00 17.00 18.00 19.00 20.00 21.00 22.00 23.00 24.00 25.00 26.00 27.00 28.00 29.00 30.00 31.00 32.00 33.00 34.00 35.00 36.00 37.00 38.00 39.00 40.00 41.00 42.00 43.00 44.00 45.00 46.00 47.00 48.00 49.00 # New slice 50.00 51.00 52.00 53.00 54.00 55.00 56.00 57.00 58.00 59.00 60.00 61.00 62.00 63.00 64.00 65.00 66.00 67.00 68.00 69.00 70.00 71.00 72.00 73.00 74.00 75.00 76.00 77.00 78.00 79.00 80.00 81.00 82.00 83.00 84.00 85.00 86.00 87.00 88.00 89.00 90.00 91.00 92.00 93.00 94.00 95.00 96.00 97.00 98.00 99.00 # New slice 100.00 101.00 102.00 103.00 104.00 105.00 106.00 107.00 108.00 109.00 110.00 111.00 112.00 113.00 114.00 115.00 116.00 117.00 118.00 119.00 120.00 121.00 122.00 123.00 124.00 125.00 126.00 127.00 128.00 129.00 130.00 131.00 132.00 133.00 134.00 135.00 136.00 137.00 138.00 139.00 140.00 141.00 142.00 143.00 144.00 145.00 146.00 147.00 148.00 149.00 # New slice 150.00 151.00 152.00 153.00 154.00 155.00 156.00 157.00 158.00 159.00 160.00 161.00 162.00 163.00 164.00 165.00 166.00 167.00 168.00 169.00 170.00 171.00 172.00 
173.00 174.00 175.00 176.00 177.00 178.00 179.00 180.00 181.00 182.00 183.00 184.00 185.00 186.00 187.00 188.00 189.00 190.00 191.00 192.00 193.00 194.00 195.00 196.00 197.00 198.00 199.00 # New slice ``` Reading it back in is very easy, as long as we know the shape of the original array. We can just do numpy.loadtxt('test.txt').reshape((4,5,10)). As an example (you can do this in one line, I'm just being verbose to clarify things): ``` # Read the array from disk new_data = np.loadtxt('test.txt') # Note that this returned a 2D array! print new_data.shape # However, going back to 3D is easy if we know the # original shape of the array new_data = new_data.reshape((4,5,10)) # Just to check that they're the same... assert np.all(new_data == data) ```", "best_answers_score":0.8, "library_name":"numpy", "question_url":"https:\/\/stackoverflow.com\/questions\/3685265\/how-to-write-a-multidimensional-array-to-a-text-file", "best_answers_votes":227, "question_length":752, "response_length":4295 }, { "question":"Automatically import modules when entering the python or ipython interpreter I find myself typing import numpy as np almost every single time I fire up the python interpreter. How do I set up the python or ipython interpreter so that numpy is automatically imported?", "response":"For ipython, there are two ways to achieve this. Both involve ipython's configuration directory, which is located in ~\/.ipython. Create a custom ipython profile. Or you can add a startup file to ~\/.ipython\/profile_default\/startup\/ For simplicity, I'd use option 2. All you have to do is place a .py or .ipy file in the ~\/.ipython\/profile_default\/startup directory and it will automatically be executed. So you could simply place import numpy as np in a simple file and you'll have np in the namespace of your ipython prompt.
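As a concrete sketch of creating such a startup file (the name 00-imports.py is an arbitrary choice of mine, and the path assumes the default profile; run ipython locate profile to confirm yours):

```python
import os

# Sketch only: '00-imports.py' is an arbitrary name; the path assumes the
# default profile (run `ipython locate profile` to confirm yours).
startup_dir = os.path.expanduser('~/.ipython/profile_default/startup')
os.makedirs(startup_dir, exist_ok=True)
with open(os.path.join(startup_dir, '00-imports.py'), 'w') as f:
    f.write('import numpy as np')
```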
Option 2 will actually work with a custom profile, but using a custom profile will allow you to change the startup requirements and other configuration based on a particular case. However, if you'd always like np to be available to you, then by all means put it in the startup directory. For more information on IPython configuration, the docs have a much more complete explanation.", "best_answers_score":0.8, "library_name":"numpy", "question_url":"https:\/\/stackoverflow.com\/questions\/11124578\/automatically-import-modules-when-entering-the-python-or-ipython-interpreter", "best_answers_votes":97, "question_length":266, "response_length":905 }, { "question":"binning data in python with scipy\/numpy Is there a more efficient way to take an average of an array in prespecified bins? For example, I have an array of numbers and an array corresponding to bin start and end positions in that array, and I want to just take the mean in those bins. I have code that does it below but I am wondering how it can be cut down and improved. Thanks. ``` from scipy import * from numpy import * def get_bin_mean(a, b_start, b_end): ind_upper = nonzero(a >= b_start)[0] a_upper = a[ind_upper] a_range = a_upper[nonzero(a_upper < b_end)[0]] mean_val = mean(a_range) return mean_val data = rand(100) bins = linspace(0, 1, 10) binned_data = [] n = 0 for n in range(0, len(bins)-1): b_start = bins[n] b_end = bins[n+1] binned_data.append(get_bin_mean(data, b_start, b_end)) print binned_data ```", "response":"It's probably faster and easier to use numpy.digitize(): ``` import numpy data = numpy.random.random(100) bins = numpy.linspace(0, 1, 10) digitized = numpy.digitize(data, bins) bin_means = [data[digitized == i].mean() for i in range(1, len(bins))] ``` An alternative to this is to use numpy.histogram(): ``` bin_means = (numpy.histogram(data, bins, weights=data)[0] \/ numpy.histogram(data, bins)[0]) ``` Try for yourself which one is faster...
:)", "best_answers_score":0.8, "library_name":"numpy", "question_url":"https:\/\/stackoverflow.com\/questions\/6163334\/binning-data-in-python-with-scipy-numpy", "best_answers_votes":230, "question_length":818, "response_length":446 }, { "question":"Efficiently checking if arbitrary object is NaN in Python \/ numpy \/ pandas? My numpy arrays use np.nan to designate missing values. As I iterate over the data set, I need to detect such missing values and handle them in special ways. Naively I used numpy.isnan(val), which works well unless val isn't among the subset of types supported by numpy.isnan(). For example, missing data can occur in string fields, in which case I get: ``` >>> np.isnan('some_string') Traceback (most recent call last): File \"\", line 1, in TypeError: Not implemented for this type ``` Other than writing an expensive wrapper that catches the exception and returns False, is there a way to handle this elegantly and efficiently?", "response":"pandas.isnull() (also pd.isna(), in newer versions) checks for missing values in both numeric and string\/object arrays. From the documentation, it checks for: NaN in numeric arrays, None\/NaN in object arrays Quick example: ``` import pandas as pd import numpy as np s = pd.Series(['apple', np.nan, 'banana']) pd.isnull(s) Out[9]: 0 False 1 True 2 False dtype: bool ``` The idea of using numpy.nan to represent missing values is something that pandas introduced, which is why pandas has the tools to deal with it. 
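It also works directly on the kind of mixed object array from the question, where np.isnan would raise a TypeError (a quick sketch):

```python
import numpy as np
import pandas as pd

# np.isnan('some_string') raises a TypeError, but pd.isnull handles
# strings, floats and NaN together in one object array.
arr = np.array(['some_string', np.nan, 1.5], dtype=object)
print(pd.isnull(arr))  # [False  True False]
```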
Datetimes too (if you use pd.NaT you won't need to specify the dtype): ``` In [24]: s = Series([Timestamp('20130101'),np.nan,Timestamp('20130102 9:30')],dtype='M8[ns]') In [25]: s Out[25]: 0 2013-01-01 00:00:00 1 NaT 2 2013-01-02 09:30:00 dtype: datetime64[ns] In [26]: pd.isnull(s) Out[26]: 0 False 1 True 2 False dtype: bool ```", "best_answers_score":0.8, "library_name":"numpy", "question_url":"https:\/\/stackoverflow.com\/questions\/18689512\/efficiently-checking-if-arbitrary-object-is-nan-in-python-numpy-pandas", "best_answers_votes":231, "question_length":705, "response_length":844 }, { "question":"Working with TIFFs (import, export) in Python using numpy I need a python method to open and import TIFF images into numpy arrays so I can analyze and modify the pixel data and then save them as TIFFs again. (They are basically light intensity maps in greyscale, representing the respective values per pixel.) I couldn't find any documentation on PIL methods concerning TIFF. I tried to figure it out, but only got \"bad mode\" or \"file type not supported\" errors. What do I need to use here?", "response":"First, I downloaded a test TIFF image from this page called a_image.tif. Then I opened it with PIL like this: ``` >>> from PIL import Image >>> im = Image.open('a_image.tif') >>> im.show() ``` This showed the rainbow image.
To convert to a numpy array, it's as simple as: ``` >>> import numpy >>> imarray = numpy.array(im) ``` We can see that the size of the image and the shape of the array match up: ``` >>> imarray.shape (44, 330) >>> im.size (330, 44) ``` And the array contains uint8 values: ``` >>> imarray array([[ 0, 1, 2, ..., 244, 245, 246], [ 0, 1, 2, ..., 244, 245, 246], [ 0, 1, 2, ..., 244, 245, 246], ..., [ 0, 1, 2, ..., 244, 245, 246], [ 0, 1, 2, ..., 244, 245, 246], [ 0, 1, 2, ..., 244, 245, 246]], dtype=uint8) ``` Once you're done modifying the array, you can turn it back into a PIL image like this: ``` >>> Image.fromarray(imarray) ```", "best_answers_score":0.8, "library_name":"numpy", "question_url":"https:\/\/stackoverflow.com\/questions\/7569553\/working-with-tiffs-import-export-in-python-using-numpy", "best_answers_votes":159, "question_length":489, "response_length":856 }, { "question":"Is there a numpy builtin to reject outliers from a list Is there a numpy builtin to do something like the following? That is, take a list d and return a list filtered_d with any outlying elements removed based on some assumed distribution of the points in d. ``` import numpy as np def reject_outliers(data): m = 2 u = np.mean(data) s = np.std(data) filtered = [e for e in data if (u - 2 * s < e < u + 2 * s)] return filtered ``` so that, for example: ``` >>> d = [2,4,5,1,6,5,40] >>> filtered_d = reject_outliers(d) >>> print filtered_d [2,4,5,1,6,5] ``` I say 'something like' because the function might allow for varying distributions (Poisson, Gaussian, etc.) and varying outlier thresholds within those distributions (like the m I've used here).", "response":"Something important when dealing with outliers is that one should try to use estimators as robust as possible. The mean of a distribution will be biased by outliers, but the median, for example, will be much less so.
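As a quick sketch with the question's own data, you can see how much further the outlier moves the mean than the median:

```python
import numpy as np

# The single outlier (40) drags the mean far away, while the
# median barely moves.
d = np.array([2., 4., 5., 1., 6., 5., 40.])
print(np.mean(d))    # 9.0
print(np.median(d))  # 5.0
```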
Building on eumiro's answer: ``` def reject_outliers(data, m = 2.): d = np.abs(data - np.median(data)) mdev = np.median(d) s = d\/mdev if mdev else np.zeros(len(d)) return data[s < m] ```" }, { "question":"How to check BLAS\/LAPACK linkage in numpy and scipy?", "response":"``` >>> import numpy as np >>> np.show_config() lapack_opt_info: extra_link_args = ['-Wl,-framework', '-Wl,Accelerate'] extra_compile_args = ['-msse3'] define_macros = [('NO_ATLAS_INFO', 3)] blas_opt_info: extra_link_args = ['-Wl,-framework', '-Wl,Accelerate'] extra_compile_args = ['-msse3', '-I\/System\/Library\/Frameworks\/vecLib.framework\/Headers'] define_macros = [('NO_ATLAS_INFO', 3)] ```", "best_answers_score":0.8, "library_name":"numpy", "question_url":"https:\/\/stackoverflow.com\/questions\/9000164\/how-to-check-blas-lapack-linkage-in-numpy-and-scipy", "best_answers_votes":317, "question_length":285, "response_length":605 }, { "question":"How can I use numpy.correlate to do autocorrelation? I need to do auto-correlation of a set of numbers, which as I understand it is just the correlation of the set with itself. I've tried it using numpy's correlate function, but I don't believe the result, as it almost always gives a vector where the first number is not the largest, as it ought to be. So, this question is really two questions: What exactly is numpy.correlate doing? How can I use it (or something else) to do autocorrelation?", "response":"To answer your first question, numpy.correlate(a, v, mode) performs the convolution of a with the reverse of v and gives the results clipped by the specified mode. The definition of convolution, C(t) = \u2211 a_i v_{t+i} over -\u221e < i < \u221e, where -\u221e < t < \u221e, allows for results from -\u221e to \u221e, but you obviously can't store an infinitely long array. So it has to be clipped, and that is where the mode comes in. There are 3 different modes: full, same, & valid: \"full\" mode returns results for every t where both a and v have some overlap. \"same\" mode returns a result with the same length as the shortest vector (a or v).
\"valid\" mode returns results only when a and v completely overlap each other. The documentation for numpy.convolve gives more detail on the modes. For your second question, I think numpy.correlate is giving you the autocorrelation, it is just giving you a little more as well. The autocorrelation is used to find how similar a signal, or function, is to itself at a certain time difference. At a time difference of 0, the auto-correlation should be the highest because the signal is identical to itself, so you expected that the first element in the autocorrelation result array would be the greatest. However, the correlation is not starting at a time difference of 0. It starts at a negative time difference, closes to 0, and then goes positive. That is, you were expecting: autocorrelation(a) = \u2211 a_i v_{t+i} over -\u221e < i < \u221e, where 0 <= t < \u221e But what you got was: autocorrelation(a) = \u2211 a_i v_{t+i} over -\u221e < i < \u221e, where -\u221e < t < \u221e What you need to do is take the last half of your correlation result, and that should be the autocorrelation you are looking for. A simple python function to do that would be: ``` def autocorr(x): result = numpy.correlate(x, x, mode='full') return result[result.size\/\/2:] ``` returning you only the second half of what numpy calculates. You will, of course, need error checking to make sure that x is actually a 1-d array. Also, this explanation probably isn't the most mathematically rigorous. I've been throwing around infinities because the definition of convolution uses them, but that doesn't necessarily apply for autocorrelation. So, the theoretical portion of this explanation may be slightly wonky, but hopefully the practical results are helpful.
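As a quick sanity check of that function (a sketch; the test signal here is arbitrary), the lag-0 term now comes out on top, as the question expected:

```python
import numpy as np

def autocorr(x):
    result = np.correlate(x, x, mode='full')
    return result[result.size // 2:]

# Lag 0 is the sum of squares, which bounds every other lag,
# so the first element of the result is the maximum.
x = np.sin(np.linspace(0, 20, 200))
r = autocorr(x)
print(r.argmax())  # 0
```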
These pages on autocorrelation are pretty helpful, and can give you a much better theoretical background if you don't mind wading through the notation and heavy concepts.", "best_answers_score":0.8, "library_name":"numpy", "question_url":"https:\/\/stackoverflow.com\/questions\/643699\/how-can-i-use-numpy-correlate-to-do-autocorrelation", "best_answers_votes":156, "question_length":496, "response_length":2448 }, { "question":"How do I calculate r-squared using Python and Numpy? I'm using Python and Numpy to calculate a best fit polynomial of arbitrary degree. I pass a list of x values, y values, and the degree of the polynomial I want to fit (linear, quadratic, etc.). This much works, but I also want to calculate r (coefficient of correlation) and r-squared (coefficient of determination). I am comparing my results with Excel's best-fit trendline capability, and the r-squared value it calculates. Using this, I know I am calculating r-squared correctly for linear best-fit (degree equals 1). However, my function does not work for polynomials with degree greater than 1. Excel is able to do this. How do I calculate r-squared for higher-order polynomials using Numpy? Here's my function: ``` import numpy # Polynomial Regression def polyfit(x, y, degree): results = {} coeffs = numpy.polyfit(x, y, degree) # Polynomial Coefficients results['polynomial'] = coeffs.tolist() correlation = numpy.corrcoef(x, y)[0,1] # r results['correlation'] = correlation # r-squared results['determination'] = correlation**2 return results ```", "response":"A very late reply, but just in case someone needs a ready function for this: scipy.stats.linregress i.e.
``` slope, intercept, r_value, p_value, std_err = scipy.stats.linregress(x, y) ``` as in @Adam Marples's answer.", "best_answers_score":0.8, "library_name":"numpy", "question_url":"https:\/\/stackoverflow.com\/questions\/893657\/how-do-i-calculate-r-squared-using-python-and-numpy", "best_answers_votes":209, "question_length":1106, "response_length":217 }, { "question":"What does \"unsqueeze\" do in Pytorch? The PyTorch documentation says: Returns a new tensor with a dimension of size one inserted at the specified position. [...] ``` >>> x = torch.tensor([1, 2, 3, 4]) >>> torch.unsqueeze(x, 0) tensor([[ 1, 2, 3, 4]]) >>> torch.unsqueeze(x, 1) tensor([[ 1], [ 2], [ 3], [ 4]]) ```", "response":"unsqueeze turns an n.d. tensor into an (n+1).d. one by adding an extra dimension of depth 1. However, since it is ambiguous which axis the new dimension should lie across (i.e. in which direction it should be \"unsqueezed\"), this needs to be specified by the dim argument. e.g. unsqueeze can be applied to a 2d tensor three different ways: The resulting unsqueezed tensors have the same information, but the indices used to access them are different.", "best_answers_score":0.8, "library_name":"numpy", "question_url":"https:\/\/stackoverflow.com\/questions\/57237352\/what-does-unsqueeze-do-in-pytorch", "best_answers_votes":201, "question_length":312, "response_length":449 }, { "question":"How to split\/partition a dataset into training and test datasets for, e.g., cross validation? What is a good way to split a NumPy array randomly into training and testing\/validation dataset? 
Something similar to the cvpartition or crossvalind functions in Matlab.", "response":"If you want to split the data set once in two parts, you can use numpy.random.shuffle, or numpy.random.permutation if you need to keep track of the indices (remember to fix the random seed to make everything reproducible): ``` import numpy # x is your dataset x = numpy.random.rand(100, 5) numpy.random.shuffle(x) training, test = x[:80,:], x[80:,:] ``` or ``` import numpy # x is your dataset x = numpy.random.rand(100, 5) indices = numpy.random.permutation(x.shape[0]) training_idx, test_idx = indices[:80], indices[80:] training, test = x[training_idx,:], x[test_idx,:] ``` There are many other ways to repeatedly partition the same data set for cross validation. Many of those are available in the sklearn library (k-fold, leave-n-out, ...). sklearn also includes more advanced \"stratified sampling\" methods that create a partition of the data that is balanced with respect to some features, for example to make sure that there is the same proportion of positive and negative examples in the training and test set.", "best_answers_score":0.8, "library_name":"numpy", "question_url":"https:\/\/stackoverflow.com\/questions\/3674409\/how-to-split-partition-a-dataset-into-training-and-test-datasets-for-e-g-cros", "best_answers_votes":172, "question_length":190, "response_length":1023 }, { "question":"How does numpy.histogram() work? While reading up on numpy, I encountered the function numpy.histogram(). What is it for and how does it work? In the docs they mention bins: What are they? Some googling led me to the definition of Histograms in general. I get that. But unfortunately I can't link this knowledge to the examples given in the docs.", "response":"A bin is a range that represents the width of a single bar of the histogram along the X-axis. You could also call this the interval. (Wikipedia defines them more formally as \"disjoint categories\".)
The Numpy histogram function doesn't draw the histogram, but it computes the occurrences of input data that fall within each bin, which in turn determines the area (not necessarily the height if the bins aren't of equal width) of each bar. In this example: ``` np.histogram([1, 2, 1], bins=[0, 1, 2, 3]) ``` There are 3 bins, for values ranging from 0 to 1 (excl 1.), 1 to 2 (excl. 2) and 2 to 3 (incl. 3), respectively. The way Numpy defines these bins is by giving a list of delimiters ([0, 1, 2, 3] in this example), although it also returns the bins in the results, since it can choose them automatically from the input, if none are specified. If bins=5, for example, it will use 5 bins of equal width spread between the minimum input value and the maximum input value. The input values are 1, 2 and 1. Therefore, bin \"1 to 2\" contains two occurrences (the two 1 values), and bin \"2 to 3\" contains one occurrence (the 2). These results are in the first item in the returned tuple: array([0, 2, 1]). Since the bins here are of equal width, you can use the number of occurrences for the height of each bar. When drawn, you would have: a bar of height 0 for range\/bin [0,1] on the X-axis, a bar of height 2 for range\/bin [1,2], a bar of height 1 for range\/bin [2,3]. You can plot this directly with Matplotlib (its hist function also returns the bins and the values): ``` >>> import matplotlib.pyplot as plt >>> plt.hist([1, 2, 1], bins=[0, 1, 2, 3]) (array([0, 2, 1]), array([0, 1, 2, 3]), ) >>> plt.show() ```", "best_answers_score":0.8, "library_name":"numpy", "question_url":"https:\/\/stackoverflow.com\/questions\/9141732\/how-does-numpy-histogram-work", "best_answers_votes":201, "question_length":346, "response_length":1709 }, { "question":"Why does multiprocessing use only a single core after I import numpy? I am not sure whether this counts more as an OS issue, but I thought I would ask here in case anyone has some insight from the Python end of things.
I've been trying to parallelise a CPU-heavy for loop using joblib, but I find that instead of each worker process being assigned to a different core, I end up with all of them being assigned to the same core and no performance gain. Here's a very trivial example... ``` from joblib import Parallel,delayed import numpy as np def testfunc(data): # some very boneheaded CPU work for nn in xrange(1000): for ii in data[0,:]: for jj in data[1,:]: ii*jj def run(niter=10): data = (np.random.randn(2,100) for ii in xrange(niter)) pool = Parallel(n_jobs=-1,verbose=1,pre_dispatch='all') results = pool(delayed(testfunc)(dd) for dd in data) if __name__ == '__main__': run() ``` ...and here's what I see in htop while this script is running: I'm running Ubuntu 12.10 (3.5.0-26) on a laptop with 4 cores. Clearly joblib.Parallel is spawning separate processes for the different workers, but is there any way that I can make these processes execute on different cores?", "response":"After some more googling I found the answer here. It turns out that certain Python modules (numpy, scipy, tables, pandas, skimage...) mess with core affinity on import. As far as I can tell, this problem seems to be specifically caused by them linking against multithreaded OpenBLAS libraries. A workaround is to reset the task affinity using ``` os.system(\"taskset -p 0xff %d\" % os.getpid()) ``` With this line pasted in after the module imports, my example now runs on all cores: My experience so far has been that this doesn't seem to have any negative effect on numpy's performance, although this is probably machine- and task-specific . Update: There are also two ways to disable the CPU affinity-resetting behaviour of OpenBLAS itself. 
At run-time you can use the environment variable OPENBLAS_MAIN_FREE (or GOTOBLAS_MAIN_FREE), for example ``` OPENBLAS_MAIN_FREE=1 python myscript.py ``` Or alternatively, if you're compiling OpenBLAS from source you can permanently disable it at build-time by editing the Makefile.rule to contain the line ``` NO_AFFINITY=1 ```", "best_answers_score":0.8, "library_name":"numpy", "question_url":"https:\/\/stackoverflow.com\/questions\/15639779\/why-does-multiprocessing-use-only-a-single-core-after-i-import-numpy", "best_answers_votes":170, "question_length":1176, "response_length":1069 }, { "question":"Replacing Numpy elements if condition is met I have a large numpy array that I need to manipulate so that each element is changed to either a 1 or 0 if a condition is met (will be used as a pixel mask later). There are about 8 million elements in the array and my current method takes too long for the reduction pipeline: ``` for (y,x), value in numpy.ndenumerate(mask_data): if mask_data[y,x]3: #Bad Pixel mask_data[y,x]=0 ``` Is there a numpy function that would speed this up?", "response":"``` >>> import numpy as np >>> a = np.random.randint(0, 5, size=(5, 4)) >>> a array([[4, 2, 1, 1], [3, 0, 1, 2], [2, 0, 1, 1], [4, 0, 2, 3], [0, 0, 0, 2]]) >>> b = a >> b array([[False, True, True, True], [False, True, True, True], [ True, True, True, True], [False, True, True, False], [ True, True, True, True]], dtype=bool) >>> >>> c = b.astype(int) >>> c array([[0, 1, 1, 1], [0, 1, 1, 1], [1, 1, 1, 1], [0, 1, 1, 0], [1, 1, 1, 1]]) ``` You can shorten this with: ``` >>> c = (a < 3).astype(int) ```", "best_answers_score":0.8, "library_name":"numpy", "question_url":"https:\/\/stackoverflow.com\/questions\/19766757\/replacing-numpy-elements-if-condition-is-met", "best_answers_votes":183, "question_length":479, "response_length":503 }, { "question":"Counting the number of non-NaN elements in a numpy ndarray in Python I need to calculate the number of non-NaN elements in a numpy ndarray 
matrix. How would one efficiently do this in Python? Here is my simple code for achieving this: ``` import numpy as np def numberOfNonNans(data): count = 0 for i in data: if not np.isnan(i): count += 1 return count ``` Is there a built-in function for this in numpy? Efficiency is important because I'm doing Big Data analysis. Thanks for any help!", "response":"``` np.count_nonzero(~np.isnan(data)) ``` ~ inverts the boolean matrix returned from np.isnan. np.count_nonzero counts values that are not 0\/False. .sum should give the same result, but it may be clearer to use count_nonzero. Testing speed: ``` In [23]: data = np.random.random((10000,10000)) In [24]: data[[np.random.random_integers(0,10000, 100)],:][:, [np.random.random_integers(0,99, 100)]] = np.nan In [25]: %timeit data.size - np.count_nonzero(np.isnan(data)) 1 loops, best of 3: 309 ms per loop In [26]: %timeit np.count_nonzero(~np.isnan(data)) 1 loops, best of 3: 345 ms per loop In [27]: %timeit data.size - np.isnan(data).sum() 1 loops, best of 3: 339 ms per loop ``` data.size - np.count_nonzero(np.isnan(data)) seems to barely be the fastest here, though
other data might give different relative speed results.", "best_answers_score":0.8, "library_name":"numpy", "question_url":"https:\/\/stackoverflow.com\/questions\/21778118\/counting-the-number-of-non-nan-elements-in-a-numpy-ndarray-in-python", "best_answers_votes":257, "question_length":485, "response_length":815 }, { "question":"LogisticRegression: Unknown label type: 'continuous' using sklearn in python I have the following code to test some of most popular ML algorithms of sklearn python library: ``` import numpy as np from sklearn import metrics, svm from sklearn.linear_model import LinearRegression from sklearn.linear_model import LogisticRegression from sklearn.tree import DecisionTreeClassifier from sklearn.neighbors import KNeighborsClassifier from sklearn.discriminant_analysis import LinearDiscriminantAnalysis from sklearn.naive_bayes import GaussianNB from sklearn.svm import SVC trainingData = np.array([ [2.3, 4.3, 2.5], [1.3, 5.2, 5.2], [3.3, 2.9, 0.8], [3.1, 4.3, 4.0] ]) trainingScores = np.array( [3.4, 7.5, 4.5, 1.6] ) predictionData = np.array([ [2.5, 2.4, 2.7], [2.7, 3.2, 1.2] ]) clf = LinearRegression() clf.fit(trainingData, trainingScores) print(\"LinearRegression\") print(clf.predict(predictionData)) clf = svm.SVR() clf.fit(trainingData, trainingScores) print(\"SVR\") print(clf.predict(predictionData)) clf = LogisticRegression() clf.fit(trainingData, trainingScores) print(\"LogisticRegression\") print(clf.predict(predictionData)) clf = DecisionTreeClassifier() clf.fit(trainingData, trainingScores) print(\"DecisionTreeClassifier\") print(clf.predict(predictionData)) clf = KNeighborsClassifier() clf.fit(trainingData, trainingScores) print(\"KNeighborsClassifier\") print(clf.predict(predictionData)) clf = LinearDiscriminantAnalysis() clf.fit(trainingData, trainingScores) print(\"LinearDiscriminantAnalysis\") print(clf.predict(predictionData)) clf = GaussianNB() clf.fit(trainingData, trainingScores) print(\"GaussianNB\") 
print(clf.predict(predictionData)) clf = SVC() clf.fit(trainingData, trainingScores) print(\"SVC\") print(clf.predict(predictionData)) ``` The first two work OK, but I got the following error in the LogisticRegression call: ``` root@ubupc1:\/home\/ouhma# python stack.py LinearRegression [ 15.72023529 6.46666667] SVR [ 3.95570063 4.23426243] Traceback (most recent call last): File \"stack.py\", line 28, in clf.fit(trainingData, trainingScores) File \"\/usr\/local\/lib\/python2.7\/dist-packages\/sklearn\/linear_model\/logistic.py\", line 1174, in fit check_classification_targets(y) File \"\/usr\/local\/lib\/python2.7\/dist-packages\/sklearn\/utils\/multiclass.py\", line 172, in check_classification_targets raise ValueError(\"Unknown label type: %r\" % y_type) ValueError: Unknown label type: 'continuous' ``` The input data is the same as in the previous calls, so what is going on here? And by the way, why is there a huge difference in the first prediction of the LinearRegression() and SVR() algorithms (15.72 vs 3.95)?", "response":"You are passing floats to a classifier which expects categorical values as the target vector. If you convert it to int it will be accepted as input (although it will be questionable if that's the right way to do it). It would be better to convert your training scores by using scikit-learn's LabelEncoder. The same is true for your DecisionTree and KNeighbors classifiers.
``` from sklearn import preprocessing from sklearn import utils lab_enc = preprocessing.LabelEncoder() encoded = lab_enc.fit_transform(trainingScores) >>> array([1, 3, 2, 0], dtype=int64) print(utils.multiclass.type_of_target(trainingScores)) >>> continuous print(utils.multiclass.type_of_target(trainingScores.astype('int'))) >>> multiclass print(utils.multiclass.type_of_target(encoded)) >>> multiclass ```", "best_answers_score":0.8, "library_name":"numpy", "question_url":"https:\/\/stackoverflow.com\/questions\/41925157\/logisticregression-unknown-label-type-continuous-using-sklearn-in-python", "best_answers_votes":134, "question_length":2618, "response_length":782 }, { "question":"How to save a list as numpy array in python? Is it possible to construct a NumPy array from a Python list?", "response":"First of all, I'd recommend going through NumPy's Quickstart tutorial, which will probably help with these basic questions. You can directly create an array from a list as: ``` import numpy as np a = np.array( [2,3,4] ) ``` Or from a nested list in the same way: ``` import numpy as np a = np.array( [[2,3,4], [3,4,5]] ) ```", "best_answers_score":0.8, "library_name":"numpy", "question_url":"https:\/\/stackoverflow.com\/questions\/5951135\/how-to-save-a-list-as-numpy-array-in-python", "best_answers_votes":173, "question_length":103, "response_length":335 }, { "question":"sort eigenvalues and associated eigenvectors after using numpy.linalg.eig in python I'm using numpy.linalg.eig to obtain a list of eigenvalues and eigenvectors: ``` A = someMatrixArray from numpy.linalg import eig as eigenValuesAndVectors solution = eigenValuesAndVectors(A) eigenValues = solution[0] eigenVectors = solution[1] ``` I would like to sort my eigenvalues (e.g. from lowest to highest), in a way I know what is the associated eigenvector after the sorting. I'm not finding any way of doing that with Python functions.
Is there a simple way, or do I have to code my own sorting?", "response":"Use numpy.argsort. It returns the indices one would use to sort the array. ``` import numpy as np import numpy.linalg as linalg A = np.random.random((3,3)) eigenValues, eigenVectors = linalg.eig(A) idx = eigenValues.argsort()[::-1] eigenValues = eigenValues[idx] eigenVectors = eigenVectors[:,idx] ``` If the eigenvalues are complex, the sort order is lexicographic (that is, complex numbers are sorted according to their real part first, with ties broken by their imaginary part).
Here is a pastebin of the data I am attempting to FFT: http:\/\/pastebin.com\/0WhjjMkb and http:\/\/pastebin.com\/ksM4FvZS. When I use fft() on the whole thing it just has a huge spike at zero and nothing else. Here is my code: ``` ## Perform FFT with SciPy signalFFT = fft(yInterp) ## Get power spectral density signalPSD = np.abs(signalFFT) ** 2 ## Get frequencies corresponding to signal PSD fftFreq = fftfreq(len(signalPSD), spacing) ## Get positive half of frequencies i = fftfreq>0 ## plt.figurefigsize = (8, 4) plt.plot(fftFreq[i], 10*np.log10(signalPSD[i])); #plt.xlim(0, 100); plt.xlabel('Frequency [Hz]'); plt.ylabel('PSD [dB]') ``` Spacing is just equal to xInterp[1]-xInterp[0].", "response":"So I ran a functionally equivalent form of your code in an IPython notebook: ``` %matplotlib inline import numpy as np import matplotlib.pyplot as plt import scipy.fftpack # Number of samplepoints N = 600 # sample spacing T = 1.0 \/ 800.0 x = np.linspace(0.0, N*T, N) y = np.sin(50.0 * 2.0*np.pi*x) + 0.5*np.sin(80.0 * 2.0*np.pi*x) yf = scipy.fftpack.fft(y) xf = np.linspace(0.0, 1.0\/(2.0*T), N\/\/2) fig, ax = plt.subplots() ax.plot(xf, 2.0\/N * np.abs(yf[:N\/\/2])) plt.show() ``` I get what I believe to be very reasonable output. It's been longer than I care to admit since I was in engineering school thinking about signal processing, but spikes at 50 and 80 are exactly what I would expect. So what's the issue? In response to the raw data and comments being posted: the problem here is that you don't have periodic data. You should always inspect the data that you feed into any algorithm to make sure that it's appropriate.
``` import pandas import matplotlib.pyplot as plt #import seaborn %matplotlib inline # the OP's data x = pandas.read_csv('http:\/\/pastebin.com\/raw.php?i=ksM4FvZS', skiprows=2, header=None).values y = pandas.read_csv('http:\/\/pastebin.com\/raw.php?i=0WhjjMkb', skiprows=2, header=None).values fig, ax = plt.subplots() ax.plot(x, y) ```", "best_answers_score":0.8, "library_name":"numpy", "question_url":"https:\/\/stackoverflow.com\/questions\/25735153\/plotting-a-fast-fourier-transform-in-python", "best_answers_votes":127, "question_length":1765, "response_length":1256 }, { "question":"Numpy array assignment with copy For example, if we have a numpy array A, and we want a numpy array B with the same elements. What is the difference between the following (see below) methods? When is additional memory allocated, and when is it not? B = A B[:] = A (same as B[:]=A[:]?) numpy.copy(B, A)", "response":"All three versions do different things: B = A This binds a new name B to the existing object already named A. Afterwards they refer to the same object, so if you modify one in place, you'll see the change through the other one too. B[:] = A (same as B[:]=A[:]?) This copies the values from A into an existing array B. The two arrays must have the same shape for this to work. B[:] = A[:] does the same thing (but B = A[:] would do something more like 1). numpy.copy(B, A) This is not legal syntax. You probably meant B = numpy.copy(A). This is almost the same as 2, but it creates a new array, rather than reusing the B array. If there were no other references to the previous B value, the end result would be the same as 2, but it will use more memory temporarily during the copy. 
Or maybe you meant numpy.copyto(B, A), which is legal, and is equivalent to 2?", "best_answers_score":0.8, "library_name":"numpy", "question_url":"https:\/\/stackoverflow.com\/questions\/19676538\/numpy-array-assignment-with-copy", "best_answers_votes":162, "question_length":301, "response_length":860 }, { "question":"Numpy isnan() fails on an array of floats (from pandas dataframe apply) I have an array of floats (some normal numbers, some nans) that is coming out of an apply on a pandas dataframe. For some reason, numpy.isnan is failing on this array, however as shown below, each element is a float, numpy.isnan runs correctly on each element, the type of the variable is definitely a numpy array. What's going on?! ``` set([type(x) for x in tester]) Out[59]: {float} tester Out[60]: array([-0.7000000000000001, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan], dtype=object) set([type(x) for x in tester]) Out[61]: {float} np.isnan(tester) Traceback (most recent call last): File \"\", line 1, in np.isnan(tester) TypeError: ufunc 'isnan' not supported for the input types, and the inputs could not be safely coerced to any supported types according to the casting rule ''safe'' set([np.isnan(x) for x in tester]) Out[65]: {False, True} type(tester) Out[66]: numpy.ndarray ```", "response":"np.isnan can be applied to NumPy arrays of native dtype (such as np.float64): ``` In [99]: np.isnan(np.array([np.nan, 0], dtype=np.float64)) Out[99]: array([ True, False], dtype=bool) ``` but raises TypeError when applied to object arrays: ``` In [96]: np.isnan(np.array([np.nan, 0], dtype=object)) TypeError: ufunc 'isnan' not supported for the input types, and the inputs could not be safely coerced to any supported types according to the casting rule ''safe'' ``` Since you have Pandas, you 
could use pd.isnull instead -- it can accept NumPy arrays of object or native dtypes: ``` In [97]: pd.isnull(np.array([np.nan, 0], dtype=float)) Out[97]: array([ True, False], dtype=bool) In [98]: pd.isnull(np.array([np.nan, 0], dtype=object)) Out[98]: array([ True, False], dtype=bool) ``` Note that None is also considered a null value in object arrays.", "best_answers_score":0.8, "library_name":"numpy", "question_url":"https:\/\/stackoverflow.com\/questions\/36000993\/numpy-isnan-fails-on-an-array-of-floats-from-pandas-dataframe-apply", "best_answers_votes":206, "question_length":1166, "response_length":850 }, { "question":"What is the difference between i = i + 1 and i += 1 in a 'for' loop? [duplicate] This question already has answers here: When is \"i += x\" different from \"i = i + x\" in Python? (3 answers) Closed 8 years ago. I found out a curious thing today and was wondering if somebody could shed some light on what the difference is here? ``` import numpy as np A = np.arange(12).reshape(4,3) for a in A: a = a + 1 B = np.arange(12).reshape(4,3) for b in B: b += 1 ``` After running each for loop, A has not changed, but B has had one added to each element. I actually use the B version to write to an initialized NumPy array within a for loop.", "response":"The difference is that one modifies the data-structure itself (in-place operation) b += 1 while the other just reassigns the variable a = a + 1. Just for completeness: x += y is not always doing an in-place operation, there are (at least) three exceptions: If x doesn't implement an __iadd__ method then the x += y statement is just a shorthand for x = x + y. This would be the case if x was something like an int. If __iadd__ returns NotImplemented, Python falls back to x = x + y. The __iadd__ method could theoretically be implemented to not work in place. It'd be really weird to do that, though.
As it happens, your bs are numpy.ndarrays, which implement __iadd__ and return self, so your second loop modifies the original array in-place. You can read more on this in the Python documentation of \"Emulating Numeric Types\". These [__i*__] methods are called to implement the augmented arithmetic assignments (+=, -=, *=, @=, \/=, \/\/=, %=, **=, <<=, >>=, &=, ^=, |=). These methods should attempt to do the operation in-place (modifying self) and return the result (which could be, but does not have to be, self). If a specific method is not defined, the augmented assignment falls back to the normal methods. For instance, if x is an instance of a class with an __iadd__() method, x += y is equivalent to x = x.__iadd__(y). Otherwise, x.__add__(y) and y.__radd__(x) are considered, as with the evaluation of x + y. In certain situations, augmented assignment can result in unexpected errors (see Why does a_tuple[i] += [\"item\"] raise an exception when the addition works?), but this behavior is in fact part of the data model.", "best_answers_score":0.8, "library_name":"numpy", "question_url":"https:\/\/stackoverflow.com\/questions\/41446833\/what-is-the-difference-between-i-i-1-and-i-1-in-a-for-loop", "best_answers_votes":134, "question_length":632, "response_length":1624 }, { "question":"Numpy: find index of the elements within range I have a numpy array of numbers, for example, ``` a = np.array([1, 3, 5, 6, 9, 10, 14, 15, 56]) ``` I would like to find all the indexes of the elements within a specific range. For instance, if the range is (6, 10), the answer should be (3, 4, 5). 
Is there a built-in function to do this?", "response":"You can use np.where to get indices and np.logical_and to set two conditions: ``` import numpy as np a = np.array([1, 3, 5, 6, 9, 10, 14, 15, 56]) np.where(np.logical_and(a>=6, a<=10)) # returns (array([3, 4, 5]),) ```", "best_answers_score":0.8, "library_name":"numpy", "question_url":"https:\/\/stackoverflow.com\/questions\/13869173\/numpy-find-index-of-the-elements-within-range", "best_answers_votes":209, "question_length":336, "response_length":218 }, { "question":"NumPy: Logarithm with base n From the numpy documentation on logarithms, I have found functions to take the logarithm with base e, 2, and 10: ```py import numpy as np np.log(np.e**3) #3.0 np.log2(2**3) #3.0 np.log10(10**3) #3.0 ``` However, how do I take the logarithm with base n (e.g. 42) in numpy?", "response":"If you have numpy 1.23 or later, you can use np.emath.logn: ``` import numpy as np array = np.array([74088, 3111696]) # = [42^3, 42^4] base = 42 exponent = np.emath.logn(base, array) # = [3, 4] ``` If your version of numpy is older: To get the logarithm with a custom base using math.log: ``` import math number = 74088 # = 42^3 base = 42 exponent = math.log(number, base) # = 3 ``` To get the logarithm with a custom base using numpy.log: ``` import numpy as np array = np.array([74088, 3111696]) # = [42^3, 42^4] base = 42 exponent = np.log(array) \/ np.log(base) # = [3, 4] ``` This uses the logarithm change-of-base rule: log_base(x) = log(x) \/ log(base).
But with a NumPy array, when I try to do: ``` decoding.index(i) ``` I get: AttributeError: 'numpy.ndarray' object has no attribute 'index' How could I do this on a NumPy array?", "response":"Use np.where to get the indices where a given condition is True. Examples: For a 2D np.ndarray called a: ``` i, j = np.where(a == value) # when comparing arrays of integers i, j = np.where(np.isclose(a, value)) # when comparing floating-point arrays ``` For a 1D array: ``` i, = np.where(a == value) # integers i, = np.where(np.isclose(a, value)) # floating-point ``` Note that this also works for conditions like >=, <=, != and so forth... You can also create a subclass of np.ndarray with an index() method: ``` class myarray(np.ndarray): def __new__(cls, *args, **kwargs): return np.array(*args, **kwargs).view(myarray) def index(self, value): return np.where(self == value) ``` Testing: ``` a = myarray([1,2,3,4,4,4,5,6,4,4,4]) a.index(4) #(array([ 3, 4, 5, 8, 9, 10]),) ```", "best_answers_score":0.8, "library_name":"numpy", "question_url":"https:\/\/stackoverflow.com\/questions\/18079029\/index-of-element-in-numpy-array", "best_answers_votes":197, "question_length":444, "response_length":778 }, { "question":"Find the index of the k smallest values of a numpy array In order to find the index of the smallest value, we can use argmin: ``` import numpy as np A = np.array([1, 7, 9, 2, 0.1, 17, 17, 1.5]) print(A.argmin()) # 4 because A[4] = 0.1 ``` But how can I find the indices of the k-smallest values? I'm looking for something like: ``` A.argmin(numberofvalues=3) # [4, 0, 7] because A[4] <= A[0] <= A[7] are the three smallest values.", "response":"Use np.argpartition. It does not sort the entire array. It only guarantees that the kth element is in sorted position and all smaller elements will be moved before it. Thus the first k elements will be the k-smallest elements. 
``` import numpy as np A = np.array([1, 7, 9, 2, 0.1, 17, 17, 1.5]) k = 3 idx = np.argpartition(A, k) print(idx) # [4 0 7 3 1 2 6 5] ``` This returns the k-smallest values. Note that these may not be in sorted order. ``` print(A[idx[:k]]) # [ 0.1 1. 1.5] ``` To obtain the k-largest values use ``` idx = np.argpartition(A, -k) # [4 0 7 3 1 2 6 5] A[idx[-k:]] # [ 9. 17. 17.] ``` WARNING: Do not (re)use idx = np.argpartition(A, k); A[idx[-k:]] to obtain the k-largest. That won't always work. For example, these are NOT the 3 largest values in x: ``` x = np.array([100, 90, 80, 70, 60, 50, 40, 30, 20, 10, 0]) idx = np.argpartition(x, 3) x[idx[-3:]] array([ 70, 80, 100]) ``` Here is a comparison against np.argsort, which also works but just sorts the entire array to get the result. ``` In [2]: x = np.random.randn(100000) In [3]: %timeit idx0 = np.argsort(x)[:100] 100 loops, best of 3: 8.26 ms per loop In [4]: %timeit idx1 = np.argpartition(x, 100)[:100] 1000 loops, best of 3: 721 \u00b5s per loop In [5]: np.alltrue(np.sort(np.argsort(x)[:100]) == np.sort(np.argpartition(x, 100)[:100])) Out[5]: True ```", "best_answers_score":0.8, "library_name":"numpy", "question_url":"https:\/\/stackoverflow.com\/questions\/34226400\/find-the-index-of-the-k-smallest-values-of-a-numpy-array", "best_answers_votes":204, "question_length":388, "response_length":1333 }, { "question":"how is axis indexed in numpy's array? [duplicate] This question already has answers here: In Python NumPy what is a dimension and axis? (7 answers) Closed 1 year ago. From Numpy's tutorial, axis can be indexed with integers, like 0 is for column, 1 is for row, but I don't grasp why they are indexed this way? And How do I figure out each axis' index when coping with multidimensional array?", "response":"By definition, the axis number of the dimension is the index of that dimension within the array's shape. It is also the position used to access that dimension during indexing. 
For example, if a 2D array a has shape (5,6), then you can access a[0,0] up to a[4,5]. Axis 0 is thus the first dimension (the \"rows\"), and axis 1 is the second dimension (the \"columns\"). In higher dimensions, where \"row\" and \"column\" stop really making sense, try to think of the axes in terms of the shapes and indices involved. If you do .sum(axis=n), for example, then dimension n is collapsed and deleted, with each value in the new matrix equal to the sum of the corresponding collapsed values. For example, if b has shape (5,6,7,8), and you do c = b.sum(axis=2), then axis 2 (dimension with size 7) is collapsed, and the result has shape (5,6,8). Furthermore, c[x,y,z] is equal to the sum of all elements b[x,y,:,z].", "best_answers_score":0.8, "library_name":"numpy", "question_url":"https:\/\/stackoverflow.com\/questions\/17079279\/how-is-axis-indexed-in-numpys-array", "best_answers_votes":198, "question_length":391, "response_length":899 }, { "question":"NumPy style arrays for C++? [closed] Closed. This question is seeking recommendations for software libraries, tutorials, tools, books, or other off-site resources. It does not meet Stack Overflow guidelines. It is not currently accepting answers. We don\u2019t allow questions seeking recommendations for software libraries, tutorials, tools, books, or other off-site resources. You can edit the question so it can be answered with facts and citations. Closed 4 years ago. Are there any C++ (or C) libs that have NumPy-like arrays with support for slicing, vectorized operations, adding and subtracting contents element-by-element, etc.?", "response":"Here are several free software libraries that may suit your needs. The GNU Scientific Library is GPL software written in C. Thus, it has a C-like allocation and way of programming (pointers, etc.). With the GSLwrap, you can have a C++ way of programming, while still using the GSL. 
GSL has a BLAS implementation, but you can use ATLAS instead of the default CBLAS, if you want even more performance. The boost\/uBLAS library is a BSL library, written in C++ and distributed as a boost package. It is a C++-way of implementing the BLAS standard. uBLAS comes with a few linear algebra functions, and there is an experimental binding to ATLAS. Eigen is a linear algebra library written in C++, distributed under the MPL2 license (starting from version 3.1.1) or LGPL3\/GPL2 (older versions). It's a C++ way of programming, but more integrated than the two others (more algorithms and data structures are available). Eigen claims to be faster than the BLAS implementations above, while not following the de-facto standard BLAS API. Eigen does not seem to put a lot of effort on parallel implementation. Armadillo is an LGPL3 library for C++. It has bindings for LAPACK (the library used by numpy). It uses recursive templates and template meta-programming, which is a good point (I don't know whether the other libraries do this as well). xtensor is a C++ library that is BSD licensed. It offers a C++ API very similar to that of NumPy. See https:\/\/xtensor.readthedocs.io\/en\/latest\/numpy.html for a cheat sheet. These alternatives are really good if you just want to get data structures and basic linear algebra. Depending on your taste about style, license or sysadmin challenges (installing big libraries like LAPACK may be difficult), you may choose the one that best suits your needs.", "best_answers_score":0.8, "library_name":"numpy", "question_url":"https:\/\/stackoverflow.com\/questions\/11169418\/numpy-style-arrays-for-c", "best_answers_votes":93, "question_length":654, "response_length":1765 }, { "question":"Histogram Matplotlib So I have a little problem. I have a data set in scipy that is already in the histogram format, so I have the center of the bins and the number of events per bin. How can I now plot it as a histogram? 
I tried just doing ``` bins, n=hist() ``` but it didn't like that. Any recommendations?", "response":"``` import matplotlib.pyplot as plt import numpy as np mu, sigma = 100, 15 x = mu + sigma * np.random.randn(10000) hist, bins = np.histogram(x, bins=50) width = 0.7 * (bins[1] - bins[0]) center = (bins[:-1] + bins[1:]) \/ 2 plt.bar(center, hist, align='center', width=width) plt.show() ``` The object-oriented interface is also straightforward: ``` fig, ax = plt.subplots() ax.bar(center, hist, align='center', width=width) fig.savefig(\"1.png\") ``` If you are using custom (non-constant) bins, you can compute the widths using np.diff, pass the widths to ax.bar, and use ax.set_xticks to label the bin edges: ``` import matplotlib.pyplot as plt import numpy as np mu, sigma = 100, 15 x = mu + sigma * np.random.randn(10000) bins = [0, 40, 60, 75, 90, 110, 125, 140, 160, 200] hist, bins = np.histogram(x, bins=bins) width = np.diff(bins) center = (bins[:-1] + bins[1:]) \/ 2 fig, ax = plt.subplots(figsize=(8,3)) ax.bar(center, hist, align='center', width=width) ax.set_xticks(bins) fig.savefig(\"\/tmp\/out.png\") plt.show() ```", "best_answers_score":0.8, "library_name":"numpy", "question_url":"https:\/\/stackoverflow.com\/questions\/5328556\/histogram-matplotlib", "best_answers_votes":268, "question_length":222, "response_length":1027 }, { "question":"Linear regression with matplotlib \/ numpy I'm trying to generate a linear regression on a scatter plot I have generated, however my data is in list format, and all of the examples I can find of using polyfit require using arange. arange doesn't accept lists though. I have searched high and low about how to convert a list to an array and nothing seems clear. Am I missing something? Following on, how best can I use my list of integers as inputs to the polyfit? 
Here is the polyfit example I am following: ``` import numpy as np import matplotlib.pyplot as plt x = np.arange(data) y = np.arange(data) m, b = np.polyfit(x, y, 1) plt.plot(x, y, 'yo', x, m*x+b, '--k') plt.show() ```", "response":"arange generates lists (well, numpy arrays); type help(np.arange) for the details. You don't need to call it on existing lists. ```py >>> x = [1,2,3,4] >>> y = [3,5,7,9] >>> >>> m,b = np.polyfit(x, y, 1) >>> m 2.0000000000000009 >>> b 0.99999999999999833 ``` I should add that I tend to use poly1d here rather than write out \"m*x+b\" and the higher-order equivalents, so my version of your code would look something like this: ```py import numpy as np import matplotlib.pyplot as plt x = [1,2,3,4] y = [3,5,7,10] # 10, not 9, so the fit isn't perfect coef = np.polyfit(x,y,1) poly1d_fn = np.poly1d(coef) # poly1d_fn is now a function which takes in x and returns an estimate for y plt.plot(x,y, 'yo', x, poly1d_fn(x), '--k') #'--k'=black dashed line, 'yo' = yellow circle marker plt.xlim(0, 5) plt.ylim(0, 12) ```", "best_answers_score":0.8, "library_name":"numpy", "question_url":"https:\/\/stackoverflow.com\/questions\/6148207\/linear-regression-with-matplotlib-numpy", "best_answers_votes":246, "question_length":681, "response_length":812 }, { "question":"How does NumPy's transpose() method permute the axes of an array? ``` In [28]: arr = np.arange(16).reshape((2, 2, 4)) In [29]: arr Out[29]: array([[[ 0, 1, 2, 3], [ 4, 5, 6, 7]], [[ 8, 9, 10, 11], [12, 13, 14, 15]]]) In [32]: arr.transpose((1, 0, 2)) Out[32]: array([[[ 0, 1, 2, 3], [ 8, 9, 10, 11]], [[ 4, 5, 6, 7], [12, 13, 14, 15]]]) ``` When we pass a tuple of integers to the transpose() function, what happens? To be specific, this is a 3D array: how does NumPy transform the array when I pass the tuple of axes (1, 0 ,2)? Can you explain which row or column these integers refer to? 
And what are axis numbers in the context of NumPy?", "response":"To transpose an array, NumPy just swaps the shape and stride information for each axis. Here are the strides: ``` >>> arr.strides (64, 32, 8) >>> arr.transpose(1, 0, 2).strides (32, 64, 8) ``` Notice that the transpose operation swapped the strides for axis 0 and axis 1. The lengths of these axes were also swapped (both lengths are 2 in this example). No data needs to be copied for this to happen; NumPy can simply change how it looks at the underlying memory to construct the new array. Visualising strides The stride value represents the number of bytes that must be travelled in memory in order to reach the next value of an axis of an array. Now, our 3D array arr looks this (with labelled axes): This array is stored in a contiguous block of memory; essentially it is one-dimensional. To interpret it as a 3D object, NumPy must jump over a certain constant number of bytes in order to move along one of the three axes: Since each integer takes up 8 bytes of memory (we're using the int64 dtype), the stride value for each dimension is 8 times the number of values that we need to jump. For instance, to move along axis 1, four values (32 bytes) are jumped, and to move along axis 0, eight values (64 bytes) need to be jumped. When we write arr.transpose(1, 0, 2) we are swapping axes 0 and 1. The transposed array looks like this: All that NumPy needs to do is to swap the stride information for axis 0 and axis 1 (axis 2 is unchanged). Now we must jump further to move along axis 1 than axis 0: This basic concept works for any permutation of an array's axes. 
The actual code that handles the transpose is written in C and can be found here.", "best_answers_score":0.8, "library_name":"numpy", "question_url":"https:\/\/stackoverflow.com\/questions\/32034237\/how-does-numpys-transpose-method-permute-the-axes-of-an-array", "best_answers_votes":290, "question_length":640, "response_length":1650 }, { "question":"Maximum allowed value for a numpy data type I am working with numpy arrays of a range of data types (uint8, uint16, int16, etc.). I would like to be able to check whether a number can be represented within the limits of an array for a given datatype. I am imagining something that looks like: ``` >>> im.dtype dtype('uint16') >>> dtype_max(im.dtype) 65535 >>> dtype_min(im.dtype) 0 ``` Does something like this exist? By the way, I feel like this has to have been asked before, but my search came up empty, and all of the \"similar questions\" appear to be unrelated. Edit: Of course, now that I've asked, one of the \"related\" questions does have the answer. Oops.", "response":"```py min_value = np.iinfo(im.dtype).min max_value = np.iinfo(im.dtype).max ``` docs: np.iinfo (machine limits for integer types) np.finfo (machine limits for floating point types)", "best_answers_score":0.8, "library_name":"numpy", "question_url":"https:\/\/stackoverflow.com\/questions\/23189506\/maximum-allowed-value-for-a-numpy-data-type", "best_answers_votes":181, "question_length":662, "response_length":180 }, { "question":"How to create a numpy array of arbitrary length strings? I'm a complete rookie to Python, but it seems like a given string is able to be (effectively) arbitrary length. i.e. you can take a string str and keep adding to it: str += \"some stuff...\". Is there a way to make an array of such strings?
When I try this, each element only stores a single character: ``` strArr = numpy.empty(10, dtype='string') for i in range(0,10): strArr[i] = \"test\" ``` On the other hand, I know I can initialize an array of certain length strings, i.e. ``` strArr = numpy.empty(10, dtype='s256') ``` which can store 10 strings of up to 256 characters.", "response":"You can do so by creating an array of dtype=object. If you try to assign a long string to a normal numpy array, it truncates the string: ``` >>> a = numpy.array(['apples', 'foobar', 'cowboy']) >>> a[2] = 'bananas' >>> a array(['apples', 'foobar', 'banana'], dtype='|S6') ``` But when you use dtype=object, you get an array of python object references. So you can have all the behaviors of python strings: ``` >>> a = numpy.array(['apples', 'foobar', 'cowboy'], dtype=object) >>> a array([apples, foobar, cowboy], dtype=object) >>> a[2] = 'bananas' >>> a array([apples, foobar, bananas], dtype=object) ``` Indeed, because it's an array of objects, you can assign any kind of python object to the array: ``` >>> a[2] = {1:2, 3:4} >>> a array([apples, foobar, {1: 2, 3: 4}], dtype=object) ``` However, this undoes a lot of the benefits of using numpy, which is so fast because it works on large contiguous blocks of raw memory. Working with python objects adds a lot of overhead. 
A simple example: ``` >>> a = numpy.array(['abba' for _ in range(10000)]) >>> b = numpy.array(['abba' for _ in range(10000)], dtype=object) >>> %timeit a.copy() 100000 loops, best of 3: 2.51 us per loop >>> %timeit b.copy() 10000 loops, best of 3: 48.4 us per loop ```", "best_answers_score":0.8, "library_name":"numpy", "question_url":"https:\/\/stackoverflow.com\/questions\/14639496\/how-to-create-a-numpy-array-of-arbitrary-length-strings", "best_answers_votes":177, "question_length":631, "response_length":1245 }, { "question":"Convert 2d numpy array into list of lists [duplicate] This question already has answers here: Convert NumPy array to Python list (7 answers) Closed 10 years ago. I use an external module (libsvm), which does not support numpy arrays, only tuples, lists and dicts. But my data is in a 2d numpy array. How can I convert it the pythonic way, aka without loops. ``` >>> import numpy >>> array = numpy.ones((2,4)) >>> data_list = list(array) >>> data_list [array([ 1., 1., 1., 1.]), array([ 1., 1., 1., 1.])] >>> type(data_list[0]) # >> newdata=list() >>> for line in data_list: ... line = list(line) ... newdata.append(line) >>> type(newdata[0]) # <= what I want ```", "response":"You can simply cast the matrix to list with matrix.tolist(), proof: ``` >>> import numpy >>> a = numpy.ones((2,4)) >>> a array([[ 1., 1., 1., 1.], [ 1., 1., 1., 1.]]) >>> a.tolist() [[1.0, 1.0, 1.0, 1.0], [1.0, 1.0, 1.0, 1.0]] >>> type(a.tolist()) >>> type(a.tolist()[0]) ```", "best_answers_score":0.8, "library_name":"numpy", "question_url":"https:\/\/stackoverflow.com\/questions\/9721884\/convert-2d-numpy-array-into-list-of-lists", "best_answers_votes":182, "question_length":664, "response_length":277 }, { "question":"Pass percentiles to pandas agg function I want to pass the numpy percentile() function through pandas' agg() function as I do below with various other numpy statistics functions. 
Right now I have a dataframe that looks like this: ``` AGGREGATE MY_COLUMN A 10 A 12 B 5 B 9 A 84 B 22 ``` And my code looks like this: ``` grouped = dataframe.groupby('AGGREGATE') column = grouped['MY_COLUMN'] column.agg([np.sum, np.mean, np.std, np.median, np.var, np.min, np.max]) ``` The above code works, but I want to do something like ``` column.agg([np.sum, np.mean, np.percentile(50), np.percentile(95)]) ``` I.e., specify various percentiles to return from agg(). How should this be done?", "response":"Perhaps not super efficient, but one way would be to create a function yourself: ``` def percentile(n): def percentile_(x): return x.quantile(n) percentile_.__name__ = 'percentile_{:02.0f}'.format(n*100) return percentile_ ``` Then include this in your agg: ``` In [11]: column.agg([np.sum, np.mean, np.std, np.median, np.var, np.min, np.max, percentile(50), percentile(95)]) Out[11]: sum mean std median var amin amax percentile_50 percentile_95 AGGREGATE A 106 35.333333 42.158431 12 1777.333333 10 84 12 76.8 B 36 12.000000 8.888194 9 79.000000 5 22 12 76.8 ``` Not sure this is how it should be done, though...", "best_answers_score":0.8, "library_name":"numpy", "question_url":"https:\/\/stackoverflow.com\/questions\/17578115\/pass-percentiles-to-pandas-agg-function", "best_answers_votes":165, "question_length":677, "response_length":614 }, { "question":"savefig without frames, axes, only content In numpy\/scipy I have an image stored in an array. I can display it, I want to save it using savefig without any borders, axes, labels, titles,... Just pure image, nothing else. I want to avoid packages like PyPNG or scipy.misc.imsave, they are sometimes problematic (they do not always install well), only basic savefig() for me", "response":"EDIT Changed aspect='normal' to aspect='auto' since that changed in more recent versions of matplotlib (thanks to @Luke19). 
Assuming: ```py import matplotlib.pyplot as plt ``` To make a figure without the frame: ```py fig = plt.figure(frameon=False) fig.set_size_inches(w,h) ``` To make the content fill the whole figure: ```py ax = plt.Axes(fig, [0., 0., 1., 1.]) ax.set_axis_off() fig.add_axes(ax) ``` Then draw your image on it: ```py ax.imshow(your_image, aspect='auto') fig.savefig(fname, dpi=dpi) ``` The aspect parameter changes the pixel size to make sure they fill the figure size specified in fig.set_size_inches(…). To get a feel for how to play with this sort of thing, read through matplotlib's documentation, particularly on the subject of Axes, Axis and Artist.
"best_answers_votes":107, "question_length":586, "response_length":322 }, { "question":"Ambiguity in Pandas Dataframe \/ Numpy Array \"axis\" definition I've been very confused about how python axes are defined, and whether they refer to a DataFrame's rows or columns. Consider the code below: ``` >>> df = pd.DataFrame([[1, 1, 1, 1], [2, 2, 2, 2], [3, 3, 3, 3]], columns=[\"col1\", \"col2\", \"col3\", \"col4\"]) >>> df col1 col2 col3 col4 0 1 1 1 1 1 2 2 2 2 2 3 3 3 3 ``` So if we call df.mean(axis=1), we'll get a mean across the rows: ``` >>> df.mean(axis=1) 0 1 1 2 2 3 ``` However, if we call df.drop(name, axis=1), we actually drop a column, not a row: ``` >>> df.drop(\"col4\", axis=1) col1 col2 col3 0 1 1 1 1 2 2 2 2 3 3 3 ``` Can someone help me understand what is meant by an \"axis\" in pandas\/numpy\/scipy? A side note, DataFrame.mean just might be defined wrong. It says in the documentation for DataFrame.mean that axis=1 is supposed to mean a mean over the columns, not the rows...", "response":"It's perhaps simplest to remember it as 0=down and 1=across. This means: Use axis=0 to apply a method down each column, or to the row labels (the index). Use axis=1 to apply a method across each row, or to the column labels. Here's a picture to show the parts of a DataFrame that each axis refers to: It's also useful to remember that Pandas follows NumPy's use of the word axis. The usage is explained in NumPy's glossary of terms: Axes are defined for arrays with more than one dimension. A 2-dimensional array has two corresponding axes: the first running vertically downwards across rows (axis 0), and the second running horizontally across columns (axis 1). [my emphasis] So, concerning the method in the question, df.mean(axis=1), seems to be correctly defined. It takes the mean of entries horizontally across columns, that is, along each individual row. On the other hand, df.mean(axis=0) would be an operation acting vertically downwards across rows. 
Similarly, df.drop(name, axis=1) refers to an action on column labels, because they intuitively go across the horizontal axis. Specifying axis=0 would make the method act on rows instead.", "best_answers_score":0.8, "library_name":"numpy", "question_url":"https:\/\/stackoverflow.com\/questions\/25773245\/ambiguity-in-pandas-dataframe-numpy-array-axis-definition", "best_answers_votes":190, "question_length":895, "response_length":1147 }, { "question":"Benchmarking (python vs. c++ using BLAS) and (numpy) I would like to write a program that makes extensive use of BLAS and LAPACK linear algebra functionalities. Since performance is an issue I did some benchmarking and would like to know if the approach I took is legitimate. I have, so to speak, three contestants and want to test their performance with a simple matrix-matrix multiplication. The contestants are: Numpy, making use only of the functionality of dot. Python, calling the BLAS functionalities through a shared object. C++, calling the BLAS functionalities through a shared object. Scenario I implemented a matrix-matrix multiplication for different dimensions i. i runs from 5 to 500 with an increment of 5 and the matrices m1 and m2 are set up like this: ``` m1 = numpy.random.rand(i,i).astype(numpy.float32) m2 = numpy.random.rand(i,i).astype(numpy.float32) ``` 1. Numpy The code used looks like this: ``` tNumpy = timeit.Timer(\"numpy.dot(m1, m2)\", \"import numpy; from __main__ import m1, m2\") rNumpy.append((i, tNumpy.repeat(20, 1))) ``` 2. 
Python, calling BLAS through a shared object With the function ``` _blaslib = ctypes.cdll.LoadLibrary(\"libblas.so\") def Mul(m1, m2, i, r): no_trans = c_char(\"n\") n = c_int(i) one = c_float(1.0) zero = c_float(0.0) _blaslib.sgemm_(byref(no_trans), byref(no_trans), byref(n), byref(n), byref(n), byref(one), m1.ctypes.data_as(ctypes.c_void_p), byref(n), m2.ctypes.data_as(ctypes.c_void_p), byref(n), byref(zero), r.ctypes.data_as(ctypes.c_void_p), byref(n)) ``` the test code looks like this: ``` r = numpy.zeros((i,i), numpy.float32) tBlas = timeit.Timer(\"Mul(m1, m2, i, r)\", \"import numpy; from __main__ import i, m1, m2, r, Mul\") rBlas.append((i, tBlas.repeat(20, 1))) ``` 3. c++, calling BLAS through a shared object Now the c++ code naturally is a little longer so I reduce the information to a minimum. I load the function with ``` void* handle = dlopen(\"libblas.so\", RTLD_LAZY); void* Func = dlsym(handle, \"sgemm_\"); ``` I measure the time with gettimeofday like this: ``` gettimeofday(&start, NULL); f(&no_trans, &no_trans, &dim, &dim, &dim, &one, A, &dim, B, &dim, &zero, Return, &dim); gettimeofday(&end, NULL); dTimes[j] = CalcTime(start, end); ``` where j is a loop running 20 times. I calculate the time passed with ``` double CalcTime(timeval start, timeval end) { double factor = 1000000; return (((double)end.tv_sec) * factor + ((double)end.tv_usec) - (((double)start.tv_sec) * factor + ((double)start.tv_usec))) \/ factor; } ``` Results The result is shown in the plot below: Questions Do you think my approach is fair, or are there some unnecessary overheads I can avoid? Would you expect that the result would show such a huge discrepancy between the c++ and python approach? Both are using shared objects for their calculations. Since I would rather use python for my program, what could I do to increase the performance when calling BLAS or LAPACK routines? Download The complete benchmark can be downloaded here. (J.F. 
Sebastian made that link possible^^)", "response":"UPDATE (30.07.2014): I re-ran the benchmark on our new HPC. Both the hardware as well as the software stack changed from the setup in the original answer. I put the results in a google spreadsheet (it also contains the results from the original answer). Hardware Our HPC has two different nodes, one with Intel Sandy Bridge CPUs and one with the newer Ivy Bridge CPUs: Sandy (MKL, OpenBLAS, ATLAS): CPU: 2 x 16 Intel(R) Xeon(R) E2560 Sandy Bridge @ 2.00GHz (16 Cores) RAM: 64 GB Ivy (MKL, OpenBLAS, ATLAS): CPU: 2 x 20 Intel(R) Xeon(R) E2680 V2 Ivy Bridge @ 2.80GHz (20 Cores, with HT = 40 Cores) RAM: 256 GB Software The software stack is the same for both nodes. Instead of GotoBLAS2, OpenBLAS is used and there is also a multi-threaded ATLAS BLAS that is set to 8 threads (hardcoded). OS: Suse Intel Compiler: ictce-5.3.0 Numpy: 1.8.0 OpenBLAS: 0.2.6 ATLAS: 3.8.4 Dot-Product Benchmark Benchmark-code is the same as below. However for the new machines I also ran the benchmark for matrix sizes 5000 and 8000. The table below includes the benchmark results from the original answer (renamed: MKL --> Nehalem MKL, Netlib Blas --> Nehalem Netlib BLAS, etc) Single threaded performance: Multi threaded performance (8 threads): Threads vs Matrix size (Ivy Bridge MKL): Benchmark Suite Single threaded performance: Multi threaded (8 threads) performance: Conclusion The new benchmark results are similar to the ones in the original answer. OpenBLAS and MKL perform on the same level, with the exception of the eigenvalue test. The eigenvalue test performs only reasonably well on OpenBLAS in single threaded mode. In multi-threaded mode the performance is worse. The \"Matrix size vs threads\" chart also shows that although MKL as well as OpenBLAS generally scale well with the number of cores\/threads, it depends on the size of the matrix. For small matrices adding more cores won't improve performance very much.
There is also an approximately 30% performance increase from Sandy Bridge to Ivy Bridge, which might be due to either the higher clock rate (+0.8 GHz) and\/or the better architecture. Original Answer (04.10.2011): Some time ago I had to optimize some linear algebra calculations\/algorithms which were written in python using numpy and BLAS, so I benchmarked\/tested different numpy\/BLAS configurations. Specifically I tested: Numpy with ATLAS Numpy with GotoBlas2 (1.13) Numpy with MKL (11.1\/073) Numpy with Accelerate Framework (Mac OS X) I ran two different benchmarks: simple dot product of matrices with different sizes Benchmark suite which can be found here. Here are my results: Machines Linux (MKL, ATLAS, No-MKL, GotoBlas2): OS: Ubuntu Lucid 10.4 64 Bit. CPU: 2 x 4 Intel(R) Xeon(R) E5504 @ 2.00GHz (8 Cores) RAM: 24 GB Intel Compiler: 11.1\/073 Scipy: 0.8 Numpy: 1.5 Mac Book Pro (Accelerate Framework): OS: Mac OS X Snow Leopard (10.6) CPU: 1 Intel Core 2 Duo 2.93 Ghz (2 Cores) RAM: 4 GB Scipy: 0.7 Numpy: 1.3 Mac Server (Accelerate Framework): OS: Mac OS X Snow Leopard Server (10.6) CPU: 4 X Intel(R) Xeon(R) E5520 @ 2.26 Ghz (8 Cores) RAM: 4 GB Scipy: 0.8 Numpy: 1.5.1 Dot product benchmark Code: ``` import numpy as np a = np.random.random_sample((size,size)) b = np.random.random_sample((size,size)) %timeit np.dot(a,b) ``` Results: ``` System | size = 1000 | size = 2000 | size = 3000 | netlib BLAS | 1350 ms | 10900 ms | 39200 ms | ATLAS (1 CPU) | 314 ms | 2560 ms | 8700 ms | MKL (1 CPUs) | 268 ms | 2110 ms | 7120 ms | MKL (2 CPUs) | - | - | 3660 ms | MKL (8 CPUs) | 39 ms | 319 ms | 1000 ms | GotoBlas2 (1 CPU) | 266 ms | 2100 ms | 7280 ms | GotoBlas2 (2 CPUs)| 139 ms | 1009 ms | 3690 ms | GotoBlas2 (8 CPUs)| 54 ms | 389 ms | 1250 ms | Mac OS X (1 CPU) | 143 ms | 1060 ms | 3605 ms | Mac Server (1 CPU)| 92 ms | 714 ms | 2130 ms | ``` Benchmark Suite Code: For additional information about the benchmark suite see here. 
Results: ``` System | eigenvalues | svd | det | inv | dot | netlib BLAS | 1688 ms | 13102 ms | 438 ms | 2155 ms | 3522 ms | ATLAS (1 CPU) | 1210 ms | 5897 ms | 170 ms | 560 ms | 893 ms | MKL (1 CPUs) | 691 ms | 4475 ms | 141 ms | 450 ms | 736 ms | MKL (2 CPUs) | 552 ms | 2718 ms | 96 ms | 267 ms | 423 ms | MKL (8 CPUs) | 525 ms | 1679 ms | 60 ms | 137 ms | 197 ms | GotoBlas2 (1 CPU) | 2124 ms | 4636 ms | 147 ms | 456 ms | 743 ms | GotoBlas2 (2 CPUs)| 1560 ms | 3278 ms | 116 ms | 295 ms | 460 ms | GotoBlas2 (8 CPUs)| 741 ms | 2914 ms | 82 ms | 262 ms | 192 ms | Mac OS X (1 CPU) | 948 ms | 4339 ms | 151 ms | 318 ms | 566 ms | Mac Server (1 CPU)| 1033 ms | 3645 ms | 99 ms | 232 ms | 342 ms | ``` Installation Installation of MKL included installing the complete Intel Compiler Suite, which is pretty straightforward. However, because of some bugs\/issues, configuring and compiling numpy with MKL support was a bit of a hassle. GotoBlas2 is a small package which can be easily compiled as a shared library. However, because of a bug, you have to re-create the shared library after building it in order to use it with numpy. In addition to this, building it for multiple target platforms didn't work for some reason. So I had to create an .so file for each platform for which I want to have an optimized libgoto2.so file. If you install numpy from Ubuntu's repository it will automatically install and configure numpy to use ATLAS. Installing ATLAS from source can take some time and requires some additional steps (Fortran, etc.). If you install numpy on a Mac OS X machine with Fink or Mac Ports it will either configure numpy to use ATLAS or Apple's Accelerate Framework. You can check by either running ldd on the numpy.core._dotblas file or calling numpy.show_config(). Conclusions MKL performs best, closely followed by GotoBlas2. In the eigenvalue test GotoBlas2 performs surprisingly worse than expected. Not sure why this is the case.
Apple's Accelerate Framework performs really well, especially in single-threaded mode (compared to the other BLAS implementations). Both GotoBlas2 and MKL scale very well with the number of threads. So if you have to deal with big matrices, running it on multiple threads will help a lot. In any case, don't use the default netlib BLAS implementation because it is way too slow for any serious computational work. On our cluster I also installed AMD's ACML and performance was similar to MKL and GotoBlas2. I don't have any numbers, though. I personally would recommend using GotoBlas2 because it's easier to install and it's free. If you want to code in C++\/C also check out Eigen3, which is supposed to outperform MKL\/GotoBlas2 in some cases and is also pretty easy to use.", "best_answers_score":0.8, "library_name":"numpy", "question_url":"https:\/\/stackoverflow.com\/questions\/7596612\/benchmarking-python-vs-c-using-blas-and-numpy", "best_answers_votes":79, "question_length":3032, "response_length":6541 }, { "question":"Is there an analysis speed or memory usage advantage to using HDF5 for large array storage (instead of flat binary files)? I am processing large 3D arrays, which I often need to slice in various ways to do a variety of data analysis. A typical \"cube\" can be ~100GB (and will likely get larger in the future). It seems that the typical recommended file format for large datasets in python is to use HDF5 (either h5py or pytables). My question is: is there any speed or memory usage benefit to using HDF5 to store and analyze these cubes over storing them in simple flat binary files? Is HDF5 more appropriate for tabular data, as opposed to large arrays like what I am working with? I see that HDF5 can provide nice compression, but I am more interested in processing speed and dealing with memory overflow. I frequently want to analyze only one large subset of the cube. 
One drawback of both pytables and h5py, it seems, is that when I take a slice of the array, I always get a numpy array back, using up memory. However, if I slice a numpy memmap of a flat binary file, I can get a view, which keeps the data on disk. So, it seems that I can more easily analyze specific sectors of my data without overrunning my memory. I have explored both pytables and h5py, and haven't seen the benefit of either so far for my purpose.", "response":"HDF5 Advantages: Organization, flexibility, interoperability Some of the main advantages of HDF5 are its hierarchical structure (similar to folders\/files), optional arbitrary metadata stored with each item, and its flexibility (e.g. compression). This organizational structure and metadata storage may sound trivial, but it's very useful in practice. Another advantage of HDF is that the datasets can be either fixed-size or flexibly sized. Therefore, it's easy to append data to a large dataset without having to create an entirely new copy. Additionally, HDF5 is a standardized format with libraries available for almost any language, so sharing your on-disk data between, say, Matlab, Fortran, R, C, and Python is very easy with HDF. (To be fair, it's not too hard with a big binary array, too, as long as you're aware of the C vs. F ordering and know the shape, dtype, etc. of the stored array.) HDF advantages for a large array: Faster I\/O of an arbitrary slice Just as the TL\/DR: For an ~8GB 3D array, reading a \"full\" slice along any axis took ~20 seconds with a chunked HDF5 dataset, and 0.3 seconds (best-case) to over three hours (worst case) for a memmapped array of the same data. Beyond the things listed above, there's another big advantage to a \"chunked\"* on-disk data format such as HDF5: Reading an arbitrary slice (emphasis on arbitrary) will typically be much faster, as the on-disk data is more contiguous on average. *(HDF5 doesn't have to be a chunked data format. 
It supports chunking, but doesn't require it. In fact, the default for creating a dataset in h5py is not to chunk, if I recall correctly.) Basically, your best-case disk-read speed and your worst-case disk-read speed for a given slice of your dataset will be fairly close with a chunked HDF dataset (assuming you chose a reasonable chunk size or let a library choose one for you). With a simple binary array, the best-case is faster, but the worst-case is much worse. One caveat: if you have an SSD, you likely won't notice a huge difference in read\/write speed. With a regular hard drive, though, sequential reads are much, much faster than random reads. (i.e. A regular hard drive has long seek time.) HDF still has an advantage on an SSD, but it's more due to its other features (e.g. metadata, organization, etc) than due to raw speed. First off, to clear up confusion, accessing an h5py dataset returns an object that behaves fairly similarly to a numpy array, but does not load the data into memory until it's sliced. (Similar to memmap, but not identical.) Have a look at the h5py introduction for more information. Slicing the dataset will load a subset of the data into memory, but presumably you want to do something with it, at which point you'll need it in memory anyway. If you do want to do out-of-core computations, you can do so fairly easily for tabular data with pandas or pytables. It is possible with h5py (nicer for big N-D arrays), but you need to drop down to a touch lower level and handle the iteration yourself. However, the future of numpy-like out-of-core computations is Blaze. Have a look at it if you really want to take that route. 
The \"unchunked\" case First off, consider a 3D C-ordered array written to disk (I'll simulate it by calling arr.ravel() and printing the result, to make things more visible): ``` In [1]: import numpy as np In [2]: arr = np.arange(4*6*6).reshape(4,6,6) In [3]: arr Out[3]: array([[[ 0, 1, 2, 3, 4, 5], [ 6, 7, 8, 9, 10, 11], [ 12, 13, 14, 15, 16, 17], [ 18, 19, 20, 21, 22, 23], [ 24, 25, 26, 27, 28, 29], [ 30, 31, 32, 33, 34, 35]], [[ 36, 37, 38, 39, 40, 41], [ 42, 43, 44, 45, 46, 47], [ 48, 49, 50, 51, 52, 53], [ 54, 55, 56, 57, 58, 59], [ 60, 61, 62, 63, 64, 65], [ 66, 67, 68, 69, 70, 71]], [[ 72, 73, 74, 75, 76, 77], [ 78, 79, 80, 81, 82, 83], [ 84, 85, 86, 87, 88, 89], [ 90, 91, 92, 93, 94, 95], [ 96, 97, 98, 99, 100, 101], [102, 103, 104, 105, 106, 107]], [[108, 109, 110, 111, 112, 113], [114, 115, 116, 117, 118, 119], [120, 121, 122, 123, 124, 125], [126, 127, 128, 129, 130, 131], [132, 133, 134, 135, 136, 137], [138, 139, 140, 141, 142, 143]]]) ``` The values would be stored on-disk sequentially as shown on line 4 below. (Let's ignore filesystem details and fragmentation for the moment.) ``` In [4]: arr.ravel(order='C') Out[4]: array([ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143]) ``` In the best case scenario, let's take a slice along the first axis. Notice that these are just the first 36 values of the array. This will be a very fast read! 
(one seek, one read) ``` In [5]: arr[0,:,:] Out[5]: array([[ 0, 1, 2, 3, 4, 5], [ 6, 7, 8, 9, 10, 11], [12, 13, 14, 15, 16, 17], [18, 19, 20, 21, 22, 23], [24, 25, 26, 27, 28, 29], [30, 31, 32, 33, 34, 35]]) ``` Similarly, the next slice along the first axis will just be the next 36 values. To read a complete slice along this axis, we only need one seek operation. If all we're going to be reading is various slices along this axis, then this is the perfect file structure. However, let's consider the worst-case scenario: A slice along the last axis. ``` In [6]: arr[:,:,0] Out[6]: array([[ 0, 6, 12, 18, 24, 30], [ 36, 42, 48, 54, 60, 66], [ 72, 78, 84, 90, 96, 102], [108, 114, 120, 126, 132, 138]]) ``` To read this slice in, we need 36 seeks and 36 reads, as all of the values are separated on disk. None of them are adjacent! This may seem pretty minor, but as we get to larger and larger arrays, the number and size of the seek operations grows rapidly. For a large-ish (~10Gb) 3D array stored in this way and read in via memmap, reading a full slice along the \"worst\" axis can easily take tens of minutes, even with modern hardware. At the same time, a slice along the best axis can take less than a second. For simplicity, I'm only showing \"full\" slices along a single axis, but the exact same thing happens with arbitrary slices of any subset of the data. Incidentally there are several file formats that take advantage of this and basically store three copies of huge 3D arrays on disk: one in C-order, one in F-order, and one in the intermediate between the two. (An example of this is Geoprobe's D3D format, though I'm not sure it's documented anywhere.) Who cares if the final file size is 4TB, storage is cheap! The crazy thing about that is that because the main use case is extracting a single sub-slice in each direction, the reads you want to make are very, very fast. It works very well! 
The simple \"chunked\" case Let's say we store 2x2x2 \"chunks\" of the 3D array as contiguous blocks on disk. In other words, something like: ``` nx, ny, nz = arr.shape slices = [] for i in range(0, nx, 2): for j in range(0, ny, 2): for k in range(0, nz, 2): slices.append((slice(i, i+2), slice(j, j+2), slice(k, k+2))) chunked = np.hstack([arr[chunk].ravel() for chunk in slices]) ``` So the data on disk would look like chunked: ``` array([ 0, 1, 6, 7, 36, 37, 42, 43, 2, 3, 8, 9, 38, 39, 44, 45, 4, 5, 10, 11, 40, 41, 46, 47, 12, 13, 18, 19, 48, 49, 54, 55, 14, 15, 20, 21, 50, 51, 56, 57, 16, 17, 22, 23, 52, 53, 58, 59, 24, 25, 30, 31, 60, 61, 66, 67, 26, 27, 32, 33, 62, 63, 68, 69, 28, 29, 34, 35, 64, 65, 70, 71, 72, 73, 78, 79, 108, 109, 114, 115, 74, 75, 80, 81, 110, 111, 116, 117, 76, 77, 82, 83, 112, 113, 118, 119, 84, 85, 90, 91, 120, 121, 126, 127, 86, 87, 92, 93, 122, 123, 128, 129, 88, 89, 94, 95, 124, 125, 130, 131, 96, 97, 102, 103, 132, 133, 138, 139, 98, 99, 104, 105, 134, 135, 140, 141, 100, 101, 106, 107, 136, 137, 142, 143]) ``` And just to show that they're 2x2x2 blocks of arr, notice that these are the first 8 values of chunked: ``` In [9]: arr[:2, :2, :2] Out[9]: array([[[ 0, 1], [ 6, 7]], [[36, 37], [42, 43]]]) ``` To read in any slice along an axis, we'd read in either 6 or 9 contiguous chunks (twice as much data as we need) and then only keep the portion we wanted. That's a worst-case maximum of 9 seeks vs a maximum of 36 seeks for the non-chunked version. (But the best case is still 6 seeks vs 1 for the memmapped array.) Because sequential reads are very fast compared to seeks, this significantly reduces the amount of time it takes to read an arbitrary subset into memory. Once again, this effect becomes larger with larger arrays. HDF5 takes this a few steps farther. The chunks don't have to be stored contiguously, and they're indexed by a B-Tree. 
Furthermore, they don't have to be the same size on disk, so compression can be applied to each chunk. Chunked arrays with h5py By default, h5py doesn't create chunked HDF files on disk (I think pytables does, by contrast). If you specify chunks=True when creating the dataset, however, you'll get a chunked array on disk. As a quick, minimal example: ``` import numpy as np import h5py data = np.random.random((100, 100, 100)) with h5py.File('test.hdf', 'w') as outfile: dset = outfile.create_dataset('a_descriptive_name', data=data, chunks=True) dset.attrs['some key'] = 'Did you want some metadata?' ``` Note that chunks=True tells h5py to automatically pick a chunk size for us. If you know more about your most common use-case, you can optimize the chunk size\/shape by specifying a shape tuple (e.g. (2,2,2) in the simple example above). This allows you to make reads along a particular axis more efficient or optimize for reads\/writes of a certain size. I\/O Performance comparison Just to emphasize the point, let's compare reading in slices from a chunked HDF5 dataset and a large (~8GB), Fortran-ordered 3D array containing the exact same data. I've cleared all OS caches between each run, so we're seeing the \"cold\" performance. For each file type, we'll test reading in a \"full\" x-slice along the first axis and a \"full\" z-slice along the last axis. For the Fortran-ordered memmapped array, the \"x\" slice is the worst case, and the \"z\" slice is the best case. The code used is in a gist (including creating the hdf file). I can't easily share the data used here, but you could simulate it with an array of zeros of the same shape (621, 4991, 2600) and type np.uint8. 
The chunked_hdf.py looks like this: ``` import sys import h5py def main(): data = read() if sys.argv[1] == 'x': x_slice(data) elif sys.argv[1] == 'z': z_slice(data) def read(): f = h5py.File('\/tmp\/test.hdf5', 'r') return f['seismic_volume'] def z_slice(data): return data[:,:,0] def x_slice(data): return data[0,:,:] main() ``` memmapped_array.py is similar, but has a touch more complexity to ensure the slices are actually loaded into memory (by default, another memmapped array would be returned, which wouldn't be an apples-to-apples comparison). ``` import numpy as np import sys def main(): data = read() if sys.argv[1] == 'x': x_slice(data) elif sys.argv[1] == 'z': z_slice(data) def read(): big_binary_filename = '\/data\/nankai\/data\/Volumes\/kumdep01_flipY.3dv.vol' shape = 621, 4991, 2600 header_len = 3072 data = np.memmap(filename=big_binary_filename, mode='r', offset=header_len, order='F', shape=shape, dtype=np.uint8) return data def z_slice(data): dat = np.empty(data.shape[:2], dtype=data.dtype) dat[:] = data[:,:,0] return dat def x_slice(data): dat = np.empty(data.shape[1:], dtype=data.dtype) dat[:] = data[0,:,:] return dat main() ``` Let's have a look at the HDF performance first: ``` jofer at cornbread in ~ $ sudo .\/clear_cache.sh jofer at cornbread in ~ $ time python chunked_hdf.py z python chunked_hdf.py z 0.64s user 0.28s system 3% cpu 23.800 total jofer at cornbread in ~ $ sudo .\/clear_cache.sh jofer at cornbread in ~ $ time python chunked_hdf.py x python chunked_hdf.py x 0.12s user 0.30s system 1% cpu 21.856 total ``` A \"full\" x-slice and a \"full\" z-slice take about the same amount of time (~20sec). Considering this is an 8GB array, that's not too bad. 
And if we compare this to the memmapped array times (it's Fortran-ordered: A \"z-slice\" is the best case and an \"x-slice\" is the worst case.): ``` jofer at cornbread in ~ $ sudo .\/clear_cache.sh jofer at cornbread in ~ $ time python memmapped_array.py z python memmapped_array.py z 0.07s user 0.04s system 28% cpu 0.385 total jofer at cornbread in ~ $ sudo .\/clear_cache.sh jofer at cornbread in ~ $ time python memmapped_array.py x python memmapped_array.py x 2.46s user 37.24s system 0% cpu 3:35:26.85 total ``` Yes, you read that right. 0.3 seconds for one slice direction and ~3.5 hours for the other. The time to slice in the \"x\" direction is far longer than the amount of time it would take to load the entire 8GB array into memory and select the slice we wanted! (Again, this is a Fortran-ordered array. The opposite x\/z slice timing would be the case for a C-ordered array.) However, if we're always wanting to take a slice along the best-case direction, the big binary array on disk is very good. (~0.3 sec!) With a memmapped array, you're stuck with this I\/O discrepancy (or perhaps anisotropy is a better term). However, with a chunked HDF dataset, you can choose the chunksize such that access is either equal or is optimized for a particular use-case. It gives you a lot more flexibility. In summary Hopefully that helps clear up one part of your question, at any rate. HDF5 has many other advantages over \"raw\" memmaps, but I don't have room to expand on all of them here. Compression can speed some things up (the data I work with doesn't benefit much from compression, so I rarely use it), and OS-level caching often plays more nicely with HDF5 files than with \"raw\" memmaps. Beyond that, HDF5 is a really fantastic container format. It gives you a lot of flexibility in managing your data, and can be used from more or less any programming language. Overall, try it and see if it works well for your use case. 
I think you might be surprised.", "best_answers_score":0.8, "library_name":"numpy", "question_url":"https:\/\/stackoverflow.com\/questions\/27710245\/is-there-an-analysis-speed-or-memory-usage-advantage-to-using-hdf5-for-large-arr", "best_answers_votes":198, "question_length":1323, "response_length":14213 }, { "question":"ValueError: could not broadcast input array from shape (224,224,3) into shape (224,224) I have a list say, temp_list with following properties : ``` len(temp_list) = 9260 temp_list[0].shape = (224,224,3) ``` Now, when I am converting into numpy array, ``` x = np.array(temp_list) ``` I am getting the error : ``` ValueError: could not broadcast input array from shape (224,224,3) into shape (224,224) ``` Can someone help me here?", "response":"At least one item in your list is either not three dimensional, or its second or third dimension does not match the other elements. If only the first dimension does not match, the arrays are still matched, but as individual objects, no attempt is made to reconcile them into a new (four dimensional) array. Some examples are below: That is, the offending element's shape != (?, 224, 3), or ndim != 3 (with the ? being non-negative integer). That is what is giving you the error. You'll need to fix that, to be able to turn your list into a four (or three) dimensional array. Without context, it is impossible to say if you want to lose a dimension from the 3D items or add one to the 2D items (in the first case), or change the second or third dimension (in the second case). 
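One quick way to track down the offending elements is to compare every element's shape against the first one (a minimal sketch; `temp_list` here is a small stand-in for your data):

```python
import numpy as np

# Hypothetical data: two RGB images and one accidental grayscale image.
temp_list = [np.zeros((224, 224, 3)),
             np.zeros((224, 224, 3)),
             np.zeros((224, 224))]

expected = temp_list[0].shape
mismatched = [(i, a.shape) for i, a in enumerate(temp_list) if a.shape != expected]
print(mismatched)  # -> [(2, (224, 224))]
```

Once the offending indices are known, you can decide whether to drop those elements or convert them (e.g. stacking a grayscale image into three identical channels) before calling np.array.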
Here's an example of the error: ``` >>> a = [np.zeros((224,224,3)), np.zeros((224,224,3)), np.zeros((224,224))] >>> np.array(a) ValueError: could not broadcast input array from shape (224,224,3) into shape (224,224) ``` or, different type of input, but the same error: ``` >>> a = [np.zeros((224,224,3)), np.zeros((224,224,3)), np.zeros((224,224,13))] >>> np.array(a) Traceback (most recent call last): File \"\", line 1, in ValueError: could not broadcast input array from shape (224,224,3) into shape (224,224) ``` Alternatively, similar but with a different error message: ``` >>> a = [np.zeros((224,224,3)), np.zeros((224,224,3)), np.zeros((224,100,3))] >>> np.array(a) Traceback (most recent call last): File \"\", line 1, in ValueError: could not broadcast input array from shape (224,224,3) into shape (224) ``` But the following will work, albeit with different results than (presumably) intended: ``` >>> a = [np.zeros((224,224,3)), np.zeros((224,224,3)), np.zeros((10,224,3))] >>> np.array(a) # long output omitted >>> newa = np.array(a) >>> newa.shape (3,) # oops >>> newa.dtype dtype('O') >>> newa[0].shape (224, 224, 3) >>> newa[1].shape (224, 224, 3) >>> newa[2].shape (10, 224, 3) >>> ```", "best_answers_score":0.8, "library_name":"numpy", "question_url":"https:\/\/stackoverflow.com\/questions\/43977463\/valueerror-could-not-broadcast-input-array-from-shape-224-224-3-into-shape-2", "best_answers_votes":119, "question_length":430, "response_length":1974 }, { "question":"Difference between np.random.seed() and np.random.RandomState() I know that to seed the randomness of numpy.random, and be able to reproduce it, I should use: ``` import numpy as np np.random.seed(1234) ``` but what does np.random.RandomState() do?", "response":"If you want to set the seed that calls to np.random...
will use, use np.random.seed: ``` np.random.seed(1234) np.random.uniform(0, 10, 5) #array([ 1.9151945 , 6.22108771, 4.37727739, 7.85358584, 7.79975808]) np.random.rand(2,3) #array([[ 0.27259261, 0.27646426, 0.80187218], # [ 0.95813935, 0.87593263, 0.35781727]]) ``` Use the class to avoid impacting the global numpy state: ``` r = np.random.RandomState(1234) r.uniform(0, 10, 5) #array([ 1.9151945 , 6.22108771, 4.37727739, 7.85358584, 7.79975808]) ``` And it maintains the state just as before: ``` r.rand(2,3) #array([[ 0.27259261, 0.27646426, 0.80187218], # [ 0.95813935, 0.87593263, 0.35781727]]) ``` You can see the state of the sort of 'global' class with: ``` np.random.get_state() ``` and of your own class instance with: ``` r.get_state() ```", "best_answers_score":0.8, "library_name":"numpy", "question_url":"https:\/\/stackoverflow.com\/questions\/22994423\/difference-between-np-random-seed-and-np-random-randomstate", "best_answers_votes":117, "question_length":247, "response_length":806 }, { "question":"Extract upper or lower triangular part of a numpy matrix I have a matrix A and I want 2 matrices U and L such that U contains the upper triangular elements of A (all elements above and not including diagonal) and similarly for L(all elements below and not including diagonal). Is there a numpy method to do this? e.g ``` A = array([[ 4., 9., -3.], [ 2., 4., -2.], [-2., -3., 7.]]) U = array([[ 0., 9., -3.], [ 0., 0., -2.], [ 0., 0., 0.]]) L = array([[ 0., 0., 0.], [ 2., 0., 0.], [-2., -3., 0.]]) ```", "response":"To extract the upper triangle values to a flat vector, you can do something like the following: ``` import numpy as np a = np.array([[1,2,3],[4,5,6],[7,8,9]]) print(a) #array([[1, 2, 3], # [4, 5, 6], # [7, 8, 9]]) a[np.triu_indices(3)] #or list(a[np.triu_indices(3)]) #array([1, 2, 3, 5, 6, 9]) ``` Similarly, for the lower triangle, use np.tril. IMPORTANT If you want to extract the values that are above the diagonal (or below) then use the k argument. 
This is usually used when the matrix is symmetric. ``` import numpy as np a = np.array([[1,2,3],[4,5,6],[7,8,9]]) #array([[1, 2, 3], # [4, 5, 6], # [7, 8, 9]]) a[np.triu_indices(3, k = 1)] # this returns the following array([2, 3, 6]) ``` EDIT (on 11.11.2019): To put back the extracted vector into a 2D symmetric array, one can follow my answer here: https:\/\/stackoverflow.com\/a\/58806626\/5025009", "best_answers_score":0.8, "library_name":"numpy", "question_url":"https:\/\/stackoverflow.com\/questions\/8905501\/extract-upper-or-lower-triangular-part-of-a-numpy-matrix", "best_answers_votes":134, "question_length":501, "response_length":851 }, { "question":"What is the equivalent of MATLAB's repmat in NumPy I would like to execute the equivalent of the following MATLAB code using NumPy: repmat([1; 1], [1 1 1]). How would I accomplish this?", "response":"Here is a much better (official) NumPy for Matlab Users link - I'm afraid the mathesaurus one is quite out of date. The numpy equivalent of repmat(a, m, n) is tile(a, (m, n)). This works with multiple dimensions and gives a similar result to matlab. (Numpy gives a 3d output array as you would expect - matlab for some reason gives 2d output - but the content is the same). Matlab: ``` >> repmat([1;1],[1,1,1]) ans = 1 1 ``` Python: ``` In [46]: a = np.array([[1],[1]]) In [47]: np.tile(a, [1,1,1]) Out[47]: array([[[1], [1]]]) ```", "best_answers_score":0.8, "library_name":"numpy", "question_url":"https:\/\/stackoverflow.com\/questions\/1721802\/what-is-the-equivalent-of-matlabs-repmat-in-numpy", "best_answers_votes":122, "question_length":185, "response_length":531 }, { "question":"numpy: formal definition of \"array_like\" objects? In numpy, the constructors of many objects accept an \"array_like\" as first argument. Is there a definition of a such object, either as an abstract meta class, or documentation of the methods is should contain??", "response":"It turns out almost anything is technically an array-like. 
\"Array-like\" is more of a statement of how the input will be interpreted than a restriction on what the input can be; if a parameter is documented as array-like, NumPy will try to interpret it as an array. There is no formal definition of array-like beyond the nearly tautological one -- an array-like is any Python object that np.array can convert to an ndarray. To go beyond this, you'd need to study the source code. ```c NPY_NO_EXPORT PyObject * PyArray_FromAny(PyObject *op, PyArray_Descr *newtype, int min_depth, int max_depth, int flags, PyObject *context) { \/* * This is the main code to make a NumPy array from a Python * Object. It is called from many different places. *\/ PyArrayObject *arr = NULL, *ret; PyArray_Descr *dtype = NULL; int ndim = 0; npy_intp dims[NPY_MAXDIMS]; \/* Get either the array or its parameters if it isn't an array *\/ if (PyArray_GetArrayParamsFromObject(op, newtype, 0, &dtype, &ndim, dims, &arr, context) < 0) { Py_XDECREF(newtype); return NULL; } ... ``` Particularly interesting is PyArray_GetArrayParamsFromObject, whose comments enumerate the types of objects np.array expects: ```c NPY_NO_EXPORT int PyArray_GetArrayParamsFromObject(PyObject *op, PyArray_Descr *requested_dtype, npy_bool writeable, PyArray_Descr **out_dtype, int *out_ndim, npy_intp *out_dims, PyArrayObject **out_arr, PyObject *context) { PyObject *tmp; \/* If op is an array *\/ \/* If op is a NumPy scalar *\/ \/* If op is a Python scalar *\/ \/* If op supports the PEP 3118 buffer interface *\/ \/* If op supports the __array_struct__ or __array_interface__ interface *\/ \/* * If op supplies the __array__ function. * The documentation says this should produce a copy, so * we skip this method if writeable is true, because the intent * of writeable is to modify the operand. * XXX: If the implementation is wrong, and\/or if actual * usage requires this behave differently, * this should be changed! 
*\/ \/* Try to treat op as a list of lists *\/ \/* Anything can be viewed as an object, unless it needs to be writeable *\/ } ``` So by studying the source code we can conclude an array-like is a NumPy array, or a NumPy scalar, or a Python scalar, or any object which supports the PEP 3118 buffer interface, or any object that supports the __array_struct__ or __array_interface__ interface, or any object that supplies the __array__ function, or any object that can be treated as a list of lists, or anything! If it doesn't fall under one of the other cases, it'll be treated as a 0-dimensional array of object dtype.", "best_answers_score":0.8, "library_name":"numpy", "question_url":"https:\/\/stackoverflow.com\/questions\/40378427\/numpy-formal-definition-of-array-like-objects", "best_answers_votes":91, "question_length":260, "response_length":2574 }, { "question":"Mean Squared Error in Numpy? Is there a method in numpy for calculating the Mean Squared Error between two matrices? I've tried searching but found none. Is it under a different name? If there isn't, how do you overcome this? Do you write it yourself or use a different lib?", "response":"You can use: ``` mse = ((A - B)**2).mean(axis=ax) ``` Or ``` mse = (np.square(A - B)).mean(axis=ax) ``` with ax=0 the average is performed along the row, for each column, returning an array with ax=1 the average is performed along the column, for each row, returning an array with omitting the ax parameter (or setting it to ax=None) the average is performed element-wise along the array, returning a scalar value", "best_answers_score":0.8, "library_name":"numpy", "question_url":"https:\/\/stackoverflow.com\/questions\/16774849\/mean-squared-error-in-numpy", "best_answers_votes":158, "question_length":274, "response_length":413 }, { "question":"Principal component analysis in Python I'd like to use principal component analysis (PCA) for dimensionality reduction. 
Does numpy or scipy already have it, or do I have to roll my own using numpy.linalg.eigh? I don't just want to use singular value decomposition (SVD) because my input data are quite high-dimensional (~460 dimensions), so I think SVD will be slower than computing the eigenvectors of the covariance matrix. I was hoping to find a premade, debugged implementation that already makes the right decisions for when to use which method, and which maybe does other optimizations that I don't know about.", "response":"Months later, here's a small class PCA, and a picture: ``` #!\/usr\/bin\/env python \"\"\" a small class for Principal Component Analysis Usage: p = PCA( A, fraction=0.90 ) In: A: an array of e.g. 1000 observations x 20 variables, 1000 rows x 20 columns fraction: use principal components that account for e.g. 90 % of the total variance Out: p.U, p.d, p.Vt: from numpy.linalg.svd, A = U . d . Vt p.dinv: 1\/d or 0, see NR p.eigen: the eigenvalues of A*A, in decreasing order (p.d**2). eigen[j] \/ eigen.sum() is variable j's fraction of the total variance; look at the first few eigen[] to see how many PCs get to 90 %, 95 % ... p.npc: number of principal components, e.g. 2 if the top 2 eigenvalues are >= `fraction` of the total. It's ok to change this; methods use the current value. Methods: The methods of class PCA transform vectors or arrays of e.g. 20 variables, 2 principal components and 1000 observations, using partial matrices U' d' Vt', parts of the full U d Vt: A ~ U' . d' . Vt' where e.g. U' is 1000 x 2 d' is diag([ d0, d1 ]), the 2 largest singular values Vt' is 2 x 20. Dropping the primes, d . Vt 2 principal vars = p.vars_pc( 20 vars ) U 1000 obs = p.pc_obs( 2 principal vars ) U . d . Vt 1000 obs, p.obs( 20 vars ) = pc_obs( vars_pc( vars )) fast approximate A . vars, using the `npc` principal components Ut 2 pcs = p.obs_pc( 1000 obs ) V . dinv 20 vars = p.pc_vars( 2 principal vars ) V . dinv . 
Ut 20 vars, p.vars( 1000 obs ) = pc_vars( obs_pc( obs )), fast approximate Ainverse . obs: vars that give ~ those obs. Notes: PCA does not center or scale A; you usually want to first A -= A.mean(A, axis=0) A \/= A.std(A, axis=0) with the little class Center or the like, below. See also: http:\/\/en.wikipedia.org\/wiki\/Principal_component_analysis http:\/\/en.wikipedia.org\/wiki\/Singular_value_decomposition Press et al., Numerical Recipes (2 or 3 ed), SVD PCA micro-tutorial iris-pca .py .png \"\"\" from __future__ import division import numpy as np dot = np.dot # import bz.numpyutil as nu # dot = nu.pdot __version__ = \"2010-04-14 apr\" __author_email__ = \"denis-bz-py at t-online dot de\" #............................................................................... class PCA: def __init__( self, A, fraction=0.90 ): assert 0 <= fraction <= 1 # A = U . diag(d) . Vt, O( m n^2 ), lapack_lite -- self.U, self.d, self.Vt = np.linalg.svd( A, full_matrices=False ) assert np.all( self.d[:-1] >= self.d[1:] ) # sorted self.eigen = self.d**2 self.sumvariance = np.cumsum(self.eigen) self.sumvariance \/= self.sumvariance[-1] self.npc = np.searchsorted( self.sumvariance, fraction ) + 1 self.dinv = np.array([ 1\/d if d > self.d[0] * 1e-6 else 0 for d in self.d ]) def pc( self ): \"\"\" e.g. 1000 x 2 U[:, :npc] * d[:npc], to plot etc.
\"\"\" n = self.npc return self.U[:, :n] * self.d[:n] # These 1-line methods may not be worth the bother; # then use U d Vt directly -- def vars_pc( self, x ): n = self.npc return self.d[:n] * dot( self.Vt[:n], x.T ).T # 20 vars -> 2 principal def pc_vars( self, p ): n = self.npc return dot( self.Vt[:n].T, (self.dinv[:n] * p).T ) .T # 2 PC -> 20 vars def pc_obs( self, p ): n = self.npc return dot( self.U[:, :n], p.T ) # 2 principal -> 1000 obs def obs_pc( self, obs ): n = self.npc return dot( self.U[:, :n].T, obs ) .T # 1000 obs -> 2 principal def obs( self, x ): return self.pc_obs( self.vars_pc(x) ) # 20 vars -> 2 principal -> 1000 obs def vars( self, obs ): return self.pc_vars( self.obs_pc(obs) ) # 1000 obs -> 2 principal -> 20 vars class Center: \"\"\" A -= A.mean() \/= A.std(), inplace -- use A.copy() if need be uncenter(x) == original A . x \"\"\" # mttiw def __init__( self, A, axis=0, scale=True, verbose=1 ): self.mean = A.mean(axis=axis) if verbose: print \"Center -= A.mean:\", self.mean A -= self.mean if scale: std = A.std(axis=axis) self.std = np.where( std, std, 1. ) if verbose: print \"Center \/= A.std:\", self.std A \/= self.std else: self.std = np.ones( A.shape[-1] ) self.A = A def uncenter( self, x ): return np.dot( self.A, x * self.std ) + np.dot( x, self.mean ) #............................................................................... if __name__ == \"__main__\": import sys csv = \"iris4.csv\" # wikipedia Iris_flower_data_set # 5.1,3.5,1.4,0.2 # ,Iris-setosa ... N = 1000 K = 20 fraction = .90 seed = 1 exec \"\\n\".join( sys.argv[1:] ) # N= ... np.random.seed(seed) np.set_printoptions( 1, threshold=100, suppress=True ) # .1f try: A = np.genfromtxt( csv, delimiter=\",\" ) N, K = A.shape except IOError: A = np.random.normal( size=(N, K) ) # gen correlated ? 
print \"csv: %s N: %d K: %d fraction: %.2g\" % (csv, N, K, fraction) Center(A) print \"A:\", A print \"PCA ...\" , p = PCA( A, fraction=fraction ) print \"npc:\", p.npc print \"% variance:\", p.sumvariance * 100 print \"Vt[0], weights that give PC 0:\", p.Vt[0] print \"A . Vt[0]:\", dot( A, p.Vt[0] ) print \"pc:\", p.pc() print \"\\nobs pc x: with fraction=1, diffs should be ~ 0\" x = np.ones(K) # x = np.ones(( 3, K )) print \"x:\", x pc = p.vars_pc(x) # d' Vt' x print \"vars_pc(x):\", pc print \"back to ~ x:\", p.pc_vars(pc) Ax = dot( A, x.T ) pcx = p.obs(x) # U' d' Vt' x print \"Ax:\", Ax print \"A'x:\", pcx print \"max |Ax - A'x|: %.2g\" % np.linalg.norm( Ax - pcx, np.inf ) b = Ax # ~ back to original x, Ainv A x back = p.vars(b) print \"~ back again:\", back print \"max |back - x|: %.2g\" % np.linalg.norm( back - x, np.inf ) # end pca.py ```", "best_answers_score":0.8, "library_name":"numpy", "question_url":"https:\/\/stackoverflow.com\/questions\/1730600\/principal-component-analysis-in-python", "best_answers_votes":67, "question_length":616, "response_length":5182 }, { "question":"inverting image in Python with OpenCV I want to load a color image, convert it to grayscale, and then invert the data in the file. What I need: to iterate over the array in OpenCV and change every single value with this formula (it might be wrong but it seems reasonable for me): ``` img[x,y] = abs(img[x,y] - 255) ``` but I don't understand why doesn't it works: ``` def inverte(imagem, name): imagem = abs(imagem - 255) cv2.imwrite(name, imagem) def inverte2(imagem, name): for x in np.nditer(imagem, op_flags=['readwrite']): x = abs(x - 255) cv2.imwrite(name, imagem) if __name__ == '__main__': nome = str(sys.argv[1]) image = cv2.imread(nome) gs_imagem = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY) inverte(gs_imagem, \"invertida.png\") inverte2(gs_imagem, \"invertida2.png\") ``` I don't want to do an explicit loop (I am trying to be more pythonic). 
I can see that in one image with a white background, the background turned black, but it doesn't look like the other colors changed much (if at all).", "response":"You almost did it. You were tricked by the fact that abs(imagem-255) will give a wrong result since your dtype is an unsigned integer. You have to do (255-imagem) in order to keep the integers unsigned: ``` def inverte(imagem, name): imagem = (255-imagem) cv2.imwrite(name, imagem) ``` As recommended in the answer by Eric Olmon, you can also invert the image using the bitwise_not function of OpenCV: ``` imagem = cv2.bitwise_not(imagem) ```", "best_answers_score":0.8, "library_name":"numpy", "question_url":"https:\/\/stackoverflow.com\/questions\/19580102\/inverting-image-in-python-with-opencv", "best_answers_votes":227, "question_length":1006, "response_length":442 }, { "question":"Consistently create same random numpy array I am waiting for another developer to finish a piece of code that will return an np array of shape (100,2000) with values of either -1,0, or 1. In the meantime, I want to randomly create an array of the same characteristics so I can get a head start on my development and testing. The thing is that I want this randomly created array to be the same each time, so that I'm not testing against an array that keeps changing its value each time I re-run my process. I can create my array like this, but is there a way to create it so that it's the same each time? I can pickle the object and unpickle it, but I'm wondering if there's another way. ``` r = np.random.randint(3, size=(100, 2000)) - 1 ```", "response":"Create your own instance of numpy.random.RandomState() with your chosen seed. Do not use numpy.random.seed() except to work around inflexible libraries that do not let you pass around your own RandomState instance.
``` [~] |1> from numpy.random import RandomState [~] |2> prng = RandomState(1234567890) [~] |3> prng.randint(-1, 2, size=10) array([ 1, 1, -1, 0, 0, -1, 1, 0, -1, -1]) [~] |4> prng2 = RandomState(1234567890) [~] |5> prng2.randint(-1, 2, size=10) array([ 1, 1, -1, 0, 0, -1, 1, 0, -1, -1]) ```", "best_answers_score":0.8, "library_name":"numpy", "question_url":"https:\/\/stackoverflow.com\/questions\/5836335\/consistently-create-same-random-numpy-array", "best_answers_votes":221, "question_length":737, "response_length":507 }, { "question":"How to delete columns in numpy.array I would like to delete selected columns in a numpy.array. This is what I do: ``` In [397]: a = array([[ NaN, 2., 3., NaN], .....: [ 1., 2., 3., 9]]) In [398]: print a [[ NaN 2. 3. NaN] [ 1. 2. 3. 9.]] In [399]: z = any(isnan(a), axis=0) In [400]: print z [ True False False True] In [401]: delete(a, z, axis = 1) Out[401]: array([[ 3., NaN], [ 3., 9.]]) ``` In this example my goal is to delete all the columns that contain NaN's. I expect the last command to result in: ``` array([[2., 3.], [2., 3.]]) ``` How can I do that?", "response":"Given its name, I think the standard way should be delete: ``` import numpy as np A = np.delete(A, 1, 0) # delete second row of A B = np.delete(B, 2, 0) # delete third row of B C = np.delete(C, 1, 1) # delete second column of C ``` According to numpy's documentation page, the parameters for numpy.delete are as follows: numpy.delete(arr, obj, axis=None) arr refers to the input array, obj refers to which sub-arrays (e.g. column\/row no.
or slice of the array) and axis refers to either column wise (axis = 1) or row-wise (axis = 0) delete operation.", "best_answers_score":0.8, "library_name":"numpy", "question_url":"https:\/\/stackoverflow.com\/questions\/1642730\/how-to-delete-columns-in-numpy-array", "best_answers_votes":180, "question_length":562, "response_length":549 }, { "question":"Elegant way to create empty pandas DataFrame with NaN of type float I want to create a Pandas DataFrame filled with NaNs. During my research I found an answer: ``` import pandas as pd df = pd.DataFrame(index=range(0,4),columns=['A']) ``` This code results in a DataFrame filled with NaNs of type \"object\". So they cannot be used later on for example with the interpolate() method. Therefore, I created the DataFrame with this complicated code (inspired by this answer): ``` import pandas as pd import numpy as np dummyarray = np.empty((4,1)) dummyarray[:] = np.nan df = pd.DataFrame(dummyarray) ``` This results in a DataFrame filled with NaN of type \"float\", so it can be used later on with interpolate(). Is there a more elegant way to create the same result?", "response":"Simply pass the desired value as first argument, like 0, math.inf or, here, np.nan. 
The constructor then initializes and fills the value array to the size specified by arguments index and columns: ``` >>> import numpy as np >>> import pandas as pd >>> df = pd.DataFrame(np.nan, index=[0, 1, 2, 3], columns=['A', 'B']) >>> df A B 0 NaN NaN 1 NaN NaN 2 NaN NaN 3 NaN NaN >>> df.dtypes A float64 B float64 dtype: object ```", "best_answers_score":0.8, "library_name":"numpy", "question_url":"https:\/\/stackoverflow.com\/questions\/30053329\/elegant-way-to-create-empty-pandas-dataframe-with-nan-of-type-float", "best_answers_votes":149, "question_length":761, "response_length":420 }, { "question":"mean, nanmean and warning: Mean of empty slice Say I construct three numpy arrays: ``` a = np.array([1, 2, 3]) b = np.array([np.NaN, np.NaN, 3]) c = np.array([np.NaN, np.NaN, np.NaN]) ``` Now I find that np.mean returns nan for both b and c: ``` >>> np.mean(a) 2.0 >>> np.mean(b) nan >>> np.mean(c) nan ``` Since numpy 1.8 (released 20 April 2016), we've been blessed with nanmean, which ignores nan values: ``` >>> np.nanmean(a) 2.0 >>> np.nanmean(b) 3.0 >>> np.nanmean(c) nan C:\\python-3.4.3\\lib\\site-packages\\numpy\\lib\\nanfunctions.py:598: RuntimeWarning: Mean of empty slice warnings.warn(\"Mean of empty slice\", RuntimeWarning) ``` So, nanmean is great, but it has the odd and undesirable behaviour of raising a warning when the array has nothing but nan values. How can I get the behaviour of nanmean without that warning? I don't like warnings, and I don't like suppressing them manually.", "response":"I really can't see any good reason not to just suppress the warning. 
The safest way would be to use the warnings.catch_warnings context manager to suppress the warning only where you anticipate it occurring - that way you won't miss any additional RuntimeWarnings that might be unexpectedly raised in some other part of your code: ``` import numpy as np import warnings x = np.ones((1000, 1000)) * np.nan # I expect to see RuntimeWarnings in this block with warnings.catch_warnings(): warnings.simplefilter(\"ignore\", category=RuntimeWarning) foo = np.nanmean(x, axis=1) ``` @dawg's solution would also work, but ultimately any additional steps that you have to take in order to avoid computing np.nanmean on an array of all NaNs are going to incur some extra overhead that you could avoid by just suppressing the warning. Also your intent will be much more clearly reflected in the code.", "best_answers_score":0.8, "library_name":"numpy", "question_url":"https:\/\/stackoverflow.com\/questions\/29688168\/mean-nanmean-and-warning-mean-of-empty-slice", "best_answers_votes":106, "question_length":894, "response_length":887 }, { "question":"Python: Differentiating between row and column vectors Is there a good way of differentiating between row and column vectors in numpy? If I was to give one a vector, say: ```py from numpy import * v = array([1,2,3]) ``` they wouldn't be able to say weather I mean a row or a column vector. Moreover: ```py >>> array([1,2,3]) == array([1,2,3]).transpose() array([ True, True, True]) ``` Which compares the vectors element-wise. I realize that most of the functions on vectors from the mentioned modules don't need the differentiation. For example outer(a,b) or a.dot(b) but I'd like to differentiate for my own convenience.", "response":"You can make the distinction explicit by adding another dimension to the array. 
``` >>> a = np.array([1, 2, 3]) >>> a array([1, 2, 3]) >>> a.transpose() array([1, 2, 3]) >>> a.dot(a.transpose()) 14 ``` Now force it to be a column vector: ``` >>> a.shape = (3,1) >>> a array([[1], [2], [3]]) >>> a.transpose() array([[1, 2, 3]]) >>> a.dot(a.transpose()) array([[1, 2, 3], [2, 4, 6], [3, 6, 9]]) ``` Another option is to use np.newaxis when you want to make the distinction: ``` >>> a = np.array([1, 2, 3]) >>> a array([1, 2, 3]) >>> a[:, np.newaxis] array([[1], [2], [3]]) >>> a[np.newaxis, :] array([[1, 2, 3]]) ```", "best_answers_score":0.8, "library_name":"numpy", "question_url":"https:\/\/stackoverflow.com\/questions\/17428621\/python-differentiating-between-row-and-column-vectors", "best_answers_votes":104, "question_length":622, "response_length":615 }, { "question":"Two-sample Kolmogorov-Smirnov Test in Python Scipy I can't figure out how to do a Two-sample KS test in Scipy. After reading the documentation of scipy kstest, I can see how to test whether a distribution is identical to standard normal distribution ```py from scipy.stats import kstest import numpy as np x = np.random.normal(0,1,1000) test_stat = kstest(x, 'norm') #>>> test_stat #(0.021080234718821145, 0.76584491300591395) ``` Which means that at p-value of 0.76 we cannot reject the null hypothesis that the two distributions are identical. However, I want to compare two distributions and see if I can reject the null hypothesis that they are identical, something like: ```py from scipy.stats import kstest import numpy as np x = np.random.normal(0,1,1000) z = np.random.normal(1.1,0.9, 1000) ``` and test whether x and z are identical. I tried the naive: ```py test_stat = kstest(x, z) ``` and got the following error: ```none TypeError: 'numpy.ndarray' object is not callable ``` Is there a way to do a two-sample KS test in Python? If so, how should I do it?", "response":"You are using the one-sample KS test. 
You probably want the two-sample test ks_2samp: ``` >>> from scipy.stats import ks_2samp >>> import numpy as np >>> >>> np.random.seed(12345678) >>> x = np.random.normal(0, 1, 1000) >>> y = np.random.normal(0, 1, 1000) >>> z = np.random.normal(1.1, 0.9, 1000) >>> >>> ks_2samp(x, y) Ks_2sampResult(statistic=0.022999999999999909, pvalue=0.95189016804849647) >>> ks_2samp(x, z) Ks_2sampResult(statistic=0.41800000000000004, pvalue=3.7081494119242173e-77) ``` Results can be interpreted as following: You can either compare the statistic value given by python to the KS-test critical value table according to your sample size. When statistic value is higher than the critical value, the two distributions are different. Or you can compare the p-value to a level of significance a, usually a=0.05 or 0.01 (you decide, the lower a is, the more significant). If p-value is lower than a, then it is very probable that the two distributions are different.", "best_answers_score":0.8, "library_name":"numpy", "question_url":"https:\/\/stackoverflow.com\/questions\/10884668\/two-sample-kolmogorov-smirnov-test-in-python-scipy", "best_answers_votes":157, "question_length":1067, "response_length":986 }, { "question":"numpy division with RuntimeWarning: invalid value encountered in double_scalars I wrote the following script: ```py import numpy d = numpy.array([[1089, 1093]]) e = numpy.array([[1000, 4443]]) answer = numpy.exp(-3 * d) answer1 = numpy.exp(-3 * e) res = answer.sum()\/answer1.sum() ``` But I got this result and with the error occurred: ```none nan C:\\Users\\Desktop\\test.py:16: RuntimeWarning: invalid value encountered in double_scalars res = answer.sum()\/answer1.sum() ``` It seems to be that the input element were too small that python turned them to be zeros, but indeed the division has its result. How to solve this kind of problem?", "response":"You can't solve it. Simply answer1.sum()==0, and you can't perform a division by zero. 
This happens because answer1 is the exponential of 2 very large, negative numbers, so that the result is rounded to zero. nan is returned in this case because of the division by zero. Now to solve your problem you could: go for a library for high-precision mathematics, like mpmath. But that's less fun. as an alternative to a bigger weapon, do some math manipulation, as detailed below. go for a tailored scipy\/numpy function that does exactly what you want! Check out @Warren Weckesser's answer. Here I explain how to do some math manipulation that helps on this problem. We have that for the numerator: ``` exp(-x)+exp(-y) = exp(log(exp(-x)+exp(-y))) = exp(log(exp(-x)*[1+exp(-y+x)])) = exp(log(exp(-x)) + log(1+exp(-y+x))) = exp(-x + log(1+exp(-y+x))) ``` where above x = 3*1089 and y = 3*1093. Now, the argument of this exponential is -x + log(1+exp(-y+x)) = -x + 6.1441934777474324e-06. For the denominator you could proceed similarly but obtain that log(1+exp(-z+k)) is already rounded to 0, so that the argument of the exponential function at the denominator is simply rounded to -z=-3000. You then have that your result is ``` exp(-x + log(1+exp(-y+x)))\/exp(-z) = exp(-x+z+log(1+exp(-y+x))) = exp(-266.99999385580668) ``` which is already extremely close to the result that you would get if you were to keep only the 2 leading terms (i.e. the first number 1089 in the numerator and the first number 1000 at the denominator): ``` exp(-3*(1089-1000)) = exp(-267) ``` For the sake of it, let's see how close we are to the solution of Wolfram alpha (link): ``` Log[(exp[-3*1089]+exp[-3*1093])\/(exp[-3*1000]+exp[-3*4443])] -> -266.999993855806522267194565420933791813296828742310997510523 ``` The difference between this number and the exponent above is +1.7053025658242404e-13, so the approximation we made at the denominator was fine. The final result is ``` exp(-266.99999385580668) = 1.1050349147204485e-116 ``` From Wolfram alpha it is (link) ``` 1.105034914720621496..
\u00d7 10^-116 # Wolfram alpha. ``` and again, it is safe to use numpy here too.", "best_answers_score":0.8, "library_name":"numpy", "question_url":"https:\/\/stackoverflow.com\/questions\/27784528\/numpy-division-with-runtimewarning-invalid-value-encountered-in-double-scalars", "best_answers_votes":115, "question_length":638, "response_length":2132 }, { "question":"pandas select from Dataframe using startswith This works (using Pandas 12 dev) ``` table2=table[table['SUBDIVISION'] =='INVERNESS'] ``` Then I realized I needed to select the field using \"starts with\" Since I was missing a bunch. So per the Pandas doc as near as I could follow I tried ``` criteria = table['SUBDIVISION'].map(lambda x: x.startswith('INVERNESS')) table2 = table[criteria] ``` And got AttributeError: 'float' object has no attribute 'startswith' So I tried an alternate syntax with the same result ``` table[[x.startswith('INVERNESS') for x in table['SUBDIVISION']]] ``` Reference http:\/\/pandas.pydata.org\/pandas-docs\/stable\/indexing.html#boolean-indexing Section 4: List comprehensions and map method of Series can also be used to produce more complex criteria: What am I missing?", "response":"You can use the str.startswith DataFrame method to give more consistent results: ``` In [11]: s = pd.Series(['a', 'ab', 'c', 11, np.nan]) In [12]: s Out[12]: 0 a 1 ab 2 c 3 11 4 NaN dtype: object In [13]: s.str.startswith('a', na=False) Out[13]: 0 True 1 True 2 False 3 False 4 False dtype: bool ``` and the boolean indexing will work just fine (I prefer to use loc, but it works just the same without): ``` In [14]: s.loc[s.str.startswith('a', na=False)] Out[14]: 0 a 1 ab dtype: object ``` . 
It looks like at least one of your elements in the Series\/column is a float, which doesn't have a startswith method, hence the AttributeError; the list comprehension should raise the same error...", "best_answers_score":0.8, "library_name":"numpy", "question_url":"https:\/\/stackoverflow.com\/questions\/17957890\/pandas-select-from-dataframe-using-startswith", "best_answers_votes":137, "question_length":796, "response_length":681 }, { "question":"Should I use np.absolute or np.abs? Numpy provides both np.absolute and the alias np.abs defined via ``` from .numeric import absolute as abs ``` which seems to be in obvious violation of the zen of python: There should be one-- and preferably only one --obvious way to do it. So I'm guessing that there is a good reason for this. I have personally been using np.abs in almost all of my code and looking at e.g. the number of search results for np.abs vs np.absolute on Stack Overflow it seems like an overwhelming majority does the same (2130 vs 244 hits). Is there any reason I should preferentially use np.absolute over np.abs in my code, or should I simply go for the more \"standard\" np.abs?", "response":"It's likely because there are built-in functions with the same name, like abs. The same is true for np.amax, np.amin and np.round_. The aliases for the NumPy functions abs, min, max and round are only defined in the top-level package. So np.abs and np.absolute are completely identical. It doesn't matter which one you use. There are several advantages to the short names: They are shorter and they are known to Python programmers because the names are identical to the built-in Python functions. So end-users have it easier (less to type, less to remember). But there are reasons to have different names too: NumPy (or more generally 3rd party packages) sometimes need the Python functions abs, min, etc.
So inside the package they define functions with a different name so you can still access the Python functions - and just in the top-level of the package you expose the \"shortcuts\". Note: Different names are not the only available option in that case: One could work around that with the Python module builtins to access the built-in functions if one shadowed a built-in name. It might also be the case (but that's pure speculation on my part) that they originally only included the long-named functions absolute (and so on) and only added the short aliases later. Being a large and well-used library the NumPy developers don't remove or deprecate stuff lightly. So they may just keep the long names around because it could break old code\/scripts if they would remove them.", "best_answers_score":0.8, "library_name":"numpy", "question_url":"https:\/\/stackoverflow.com\/questions\/45413909\/should-i-use-np-absolute-or-np-abs", "best_answers_votes":106, "question_length":695, "response_length":1472 }, { "question":"AttributeError: 'Tensor' object has no attribute 'numpy' I downloaded this code from GitHub. ``` predicted_id = tf.multinomial(tf.exp(predictions), num_samples=1)[0][0].numpy() ``` But I get an error that says: ``` AttributeError: 'Tensor' object has no attribute 'numpy' ``` What is wrong, and how do I fix it?", "response":"Since the accepted answer did not solve the problem for me so I thought it might be helpful for some people who face the problem and that already have tensorflow version >= 2.2.0 and eager execution enabled. The issue seems to be that for certain functions during the fitting model.fit() the @tf.function decorator prohibits the execution of functions like tensor.numpy() for performance reasons. 
The solution for me was to pass the flag run_eagerly=True to the model.compile() like this: ``` model.compile(..., run_eagerly=True) ```", "best_answers_score":0.8, "library_name":"numpy", "question_url":"https:\/\/stackoverflow.com\/questions\/52357542\/attributeerror-tensor-object-has-no-attribute-numpy", "best_answers_votes":104, "question_length":311, "response_length":533 }, { "question":"shuffle vs permute numpy What is the difference between numpy.random.shuffle(x) and numpy.random.permutation(x)? I have read the doc pages but I could not understand if there was any difference between the two when I just want to randomly shuffle the elements of an array. To be more precise suppose I have an array x=[1,4,2,8]. If I want to generate random permutations of x, then what is the difference between shuffle(x) and permutation(x)?", "response":"np.random.permutation has two differences from np.random.shuffle: if passed an array, it will return a shuffled copy of the array; np.random.shuffle shuffles the array inplace if passed an integer, it will return a shuffled range i.e. np.random.shuffle(np.arange(n)) If x is an integer, randomly permute np.arange(x). If x is an array, make a copy and shuffle the elements randomly. The source code might help to understand this: ``` 3280 def permutation(self, object x): ... 3307 if isinstance(x, (int, np.integer)): 3308 arr = np.arange(x) 3309 else: 3310 arr = np.array(x) 3311 self.shuffle(arr) 3312 return arr ```", "best_answers_score":0.8, "library_name":"numpy", "question_url":"https:\/\/stackoverflow.com\/questions\/15474159\/shuffle-vs-permute-numpy", "best_answers_votes":138, "question_length":443, "response_length":618 }, { "question":"data type not understood I'm trying to use a matrix to compute stuff. 
The code is this ``` import numpy as np # some code mmatrix = np.zeros(nrows, ncols) print mmatrix[0, 0] ``` but I get 'data type not understood', and it works if I do it from terminal.", "response":"Try: ``` mmatrix = np.zeros((nrows, ncols)) ``` Since the shape parameter has to be an int or sequence of ints http:\/\/docs.scipy.org\/doc\/numpy\/reference\/generated\/numpy.zeros.html Otherwise you are passing ncols to np.zeros as the dtype.", "best_answers_score":0.8, "library_name":"numpy", "question_url":"https:\/\/stackoverflow.com\/questions\/5446522\/data-type-not-understood", "best_answers_votes":163, "question_length":255, "response_length":237 }, { "question":"How do I create a view onto a NumPy array? I have a 2D numpy array. Is there a way to create a view onto it that would include the first k rows and all columns? The point is to avoid copying the underlying data (the array is so large that making partial copies is not feasible.)", "response":"Sure, just index it as you normally would. E.g. y = x[:k, :] This will return a view into the original array. No data will be copied, and any updates made to y will be reflected in x and vice versa. Edit: I commonly work with >10GB 3D arrays of uint8's, so I worry about this a lot... Numpy can be very efficient at memory management if you keep a few things in mind. Here are a few tips on avoiding making copies of arrays in memory: Use +=, -=, *=, etc to avoid making a copy of the array. E.g. x += 10 will modify the array in place, while x = x + 10 will make a copy and modify it. (also, have a look at numexpr) If you do want to make a copy with x = x + 10, be aware that x = x + 10.0 will cause x to automatically be up-casted to a floating point array, if it wasn't already. However, x += 10.0, where x is an integer array, will cause the 10.0 to be down-casted to an int of the same precision as the array, instead. 
Additionally, many numpy functions take an out parameter, so you can do things like np.abs(x, x) to take the absolute value of x in-place. As a second edit, here are a few more tips on views vs. copies with numpy arrays: Unlike python lists, y = x[:] does not return a copy, it returns a view. If you do want a copy (which will, of course, double the amount of memory you're using) use y = x.copy() You'll often hear about \"fancy indexing\" of numpy arrays. Using a list (or integer array) as an index is \"fancy indexing\". It can be very useful, but copies the data. As an example of this: y = x[[0, 1, 2], :] returns a copy, while y = x[:3,:] would return a view. Even really crazy indexing like x[4:100:5, :-10:-1, None] is \"normal\" indexing and will return a view, though, so don't be afraid to use all kinds of slicing tricks on large arrays. x.astype() will return a copy of the data as the new type, while x.view() will return a view. Be careful with this, however... It's extremely powerful and useful, but you need to understand how the underlying data is stored in memory. If you have an array of floats, and view them as ints, (or vice versa) numpy will interpret the underlying bits of the array as ints. For example, this means that 1.0 as a 64bit float on a little-endian system will be 4607182418800017408 when viewed as a 64bit int, and an array of [ 0, 0, 0, 0, 0, 0, 240, 63] if viewed as a uint8. This is really nice when you need to do bit-twiddling of some sort on large arrays, though... You have low level control over how the memory buffer is interpreted.
From the documentation, I know the only difference between them is the probabilistic distribution each number is drawn from, but the overall structure (dimension) and data type used (float) is the same. I have a hard time debugging a neural network because of this. Specifically, I am trying to re-implement the Neural Network provided in the Neural Network and Deep Learning book by Michael Nielson. The original code can be found here. My implementation was the same as the original; however, I instead defined and initialized weights and biases with numpy.random.rand in the init function, rather than the numpy.random.randn function as shown in the original. However, my code that uses random.rand to initialize weights and biases does not work. The network won't learn and the weights and biases will not change. What is the difference(s) between the two random functions that cause this weirdness?", "response":"First, as you see from the documentation numpy.random.randn generates samples from the normal distribution, while numpy.random.rand from a uniform distribution (in the range [0,1)). Second, why did the uniform distribution not work? The main reason is the activation function, especially in your case where you use the sigmoid function. The plot of the sigmoid looks like the following: So you can see that if your input is away from 0, the slope of the function decreases quite fast and as a result you get a tiny gradient and tiny weight update. And if you have many layers - those gradients get multiplied many times in the back pass, so even \"proper\" gradients after multiplications become small and stop making any influence. So if you have a lot of weights which bring your input to those regions you network is hardly trainable. That's why it is a usual practice to initialize network variables around zero value. This is done to ensure that you get reasonable gradients (close to 1) to train your net. 
However, a uniform distribution is not completely undesirable; you just need to make the range smaller and closer to zero. One good practice is to use Xavier initialization. In this approach you can initialize your weights with: a normal distribution where the mean is 0 and var = sqrt(2. \/ (in + out)), where in is the number of inputs to the neurons and out is the number of outputs; or a uniform distribution in the range [-sqrt(6. \/ (in + out)), +sqrt(6. \/ (in + out))]", "best_answers_score":0.8, "library_name":"numpy", "question_url":"https:\/\/stackoverflow.com\/questions\/47240308\/differences-between-numpy-random-rand-vs-numpy-random-randn-in-python", "best_answers_votes":145, "question_length":1048, "response_length":1480 }, { "question":"TypeError: ufunc 'isnan' not supported for the input types, and the inputs could not be safely coerced I am trying to convert a CSV into numpy array. In the numpy array, I am replacing few elements with NaN. Then, I wanted to find the indices of the NaN elements in the numpy array. The code is: ``` import pandas as pd import matplotlib.pyplot as plyt import numpy as np filename = 'wether.csv' df = pd.read_csv(filename,header = None ) list = df.values.tolist() labels = list[0] wether_list = list[1:] year = [] month = [] day = [] max_temp = [] for i in wether_list: year.append(i[1]) month.append(i[2]) day.append(i[3]) max_temp.append(i[5]) mid = len(max_temp) \/\/ 2 temps = np.array(max_temp[mid:]) temps[np.where(np.array(temps) == -99.9)] = np.nan plyt.plot(temps,marker = '.',color = 'black',linestyle = 'none') # plyt.show() print(np.where(np.isnan(temps))[0]) # print(len(pd.isnull(np.array(temps)))) ``` When I execute this, I am getting a warning and an error.
The warning is: ``` wether.py:26: FutureWarning: elementwise comparison failed; returning scalar instead, but in the future will perform elementwise comparison temps[np.where(np.array(temps) == -99.9)] = np.nan ``` The error is: ``` Traceback (most recent call last): File \"wether.py\", line 30, in print(np.where(np.isnan(temps))[0]) TypeError: ufunc 'isnan' not supported for the input types, and the inputs could not be safely coerced to any supported types according to the casting rule ''safe'' ``` This is a part of the dataset which I am using: ``` 83168,2014,9,7,0.00000,89.00000,78.00000, 83.50000 83168,2014,9,22,1.62000,90.00000,72.00000, 81.00000 83168,2014,9,23,0.50000,87.00000,74.00000, 80.50000 83168,2014,9,24,0.35000,82.00000,73.00000, 77.50000 83168,2014,9,25,0.60000,85.00000,75.00000, 80.00000 83168,2014,9,26,0.76000,89.00000,77.00000, 83.00000 83168,2014,9,27,0.00000,89.00000,79.00000, 84.00000 83168,2014,9,28,0.00000,90.00000,81.00000, 85.50000 83168,2014,9,29,0.00000,90.00000,79.00000, 84.50000 83168,2014,9,30,0.50000,89.00000,75.00000, 82.00000 83168,2014,10,1,0.02000,91.00000,75.00000, 83.00000 83168,2014,10,2,0.03000,93.00000,77.00000, 85.00000 83168,2014,10,3,1.40000,93.00000,75.00000, 84.00000 83168,2014,10,4,0.06000,89.00000,75.00000, 82.00000 83168,2014,10,5,0.22000,91.00000,68.00000, 79.50000 83168,2014,10,6,0.00000,84.00000,68.00000, 76.00000 83168,2014,10,7,0.17000,85.00000,73.00000, 79.00000 83168,2014,10,8,0.06000,84.00000,73.00000, 78.50000 83168,2014,10,9,0.00000,87.00000,73.00000, 80.00000 83168,2014,10,10,0.00000,88.00000,80.00000, 84.00000 83168,2014,10,11,0.00000,87.00000,80.00000, 83.50000 83168,2014,10,12,0.00000,88.00000,80.00000, 84.00000 83168,2014,10,13,0.00000,88.00000,81.00000, 84.50000 83168,2014,10,14,0.04000,88.00000,77.00000, 82.50000 83168,2014,10,15,0.00000,88.00000,77.00000, 82.50000 83168,2014,10,16,0.09000,89.00000,72.00000, 80.50000 83168,2014,10,17,0.00000,85.00000,67.00000, 76.00000 
83168,2014,10,18,0.00000,84.00000,65.00000, 74.50000 83168,2014,10,19,0.00000,84.00000,65.00000, 74.50000 83168,2014,10,20,0.00000,85.00000,69.00000, 77.00000 83168,2014,10,21,0.77000,87.00000,76.00000, 81.50000 83168,2014,10,22,0.69000,81.00000,71.00000, 76.00000 83168,2014,10,23,0.31000,82.00000,72.00000, 77.00000 83168,2014,10,24,0.71000,79.00000,73.00000, 76.00000 83168,2014,10,25,0.00000,81.00000,68.00000, 74.50000 83168,2014,10,26,0.00000,82.00000,67.00000, 74.50000 83168,2014,10,27,0.00000,83.00000,64.00000, 73.50000 83168,2014,10,28,0.00000,83.00000,66.00000, 74.50000 83168,2014,10,29,0.03000,86.00000,76.00000, 81.00000 83168,2014,10,30,0.00000,85.00000,69.00000, 77.00000 83168,2014,10,31,0.00000,85.00000,69.00000, 77.00000 83168,2014,11,1,0.00000,86.00000,59.00000, 72.50000 83168,2014,11,2,0.00000,77.00000,52.00000, 64.50000 83168,2014,11,3,0.00000,70.00000,52.00000, 61.00000 83168,2014,11,4,0.00000,77.00000,59.00000, 68.00000 83168,2014,11,5,0.02000,79.00000,73.00000, 76.00000 83168,2014,11,6,0.02000,82.00000,75.00000, 78.50000 83168,2014,11,7,0.00000,83.00000,66.00000, 74.50000 83168,2014,11,8,0.00000,84.00000,65.00000, 74.50000 83168,2014,11,9,0.00000,84.00000,65.00000, 74.50000 83168,2014,11,10,1.20000,72.00000,65.00000, 68.50000 83168,2014,11,11,0.08000,77.00000,61.00000, 69.00000 83168,2014,11,12,0.00000,80.00000,61.00000, 70.50000 83168,2014,11,13,0.00000,83.00000,63.00000, 73.00000 83168,2014,11,14,0.00000,83.00000,65.00000, 74.00000 83168,2014,11,15,0.00000,82.00000,64.00000, 73.00000 83168,2014,11,16,0.00000,83.00000,64.00000, 73.50000 83168,2014,11,17,0.07000,84.00000,64.00000, 74.00000 83168,2014,11,18,0.00000,86.00000,71.00000, 78.50000 83168,2014,11,19,0.57000,78.00000,55.00000, 66.50000 83168,2014,11,20,0.05000,72.00000,56.00000, 64.00000 83168,2014,11,21,0.05000,77.00000,63.00000, 70.00000 83168,2014,11,22,0.22000,77.00000,69.00000, 73.00000 83168,2014,11,23,0.06000,79.00000,76.00000, 77.50000 83168,2014,11,24,0.02000,84.00000,78.00000, 
81.00000 83168,2014,11,25,0.00000,86.00000,78.00000, 82.00000 83168,2014,11,26,0.07000,85.00000,77.00000, 81.00000 83168,2014,11,27,0.21000,82.00000,55.00000, 68.50000 83168,2014,11,28,0.00000,73.00000,53.00000, 63.00000 83168,2015,1,8,0.00000,80.00000,57.00000, 83168,2015,1,9,0.05000,72.00000,56.00000, 83168,2015,1,10,0.00000,72.00000,57.00000, 83168,2015,1,11,0.00000,80.00000,57.00000, 83168,2015,1,12,0.05000,80.00000,59.00000, 83168,2015,1,13,0.85000,81.00000,69.00000, 83168,2015,1,14,0.05000,81.00000,68.00000, 83168,2015,1,15,0.00000,81.00000,64.00000, 83168,2015,1,16,0.00000,78.00000,63.00000, 83168,2015,1,17,0.00000,73.00000,55.00000, 83168,2015,1,18,0.00000,76.00000,55.00000, 83168,2015,1,19,0.00000,78.00000,55.00000, 83168,2015,1,20,0.00000,75.00000,56.00000, 83168,2015,1,21,0.02000,73.00000,65.00000, 83168,2015,1,22,0.00000,80.00000,64.00000, 83168,2015,1,23,0.00000,80.00000,71.00000, 83168,2015,1,24,0.00000,79.00000,72.00000, 83168,2015,1,25,0.00000,79.00000,49.00000, 83168,2015,1,26,0.00000,79.00000,49.00000, 83168,2015,1,27,0.10000,75.00000,53.00000, 83168,2015,1,28,0.00000,68.00000,53.00000, 83168,2015,1,29,0.00000,69.00000,53.00000, 83168,2015,1,30,0.00000,72.00000,60.00000, 83168,2015,1,31,0.00000,76.00000,58.00000, 83168,2015,2,1,0.00000,76.00000,58.00000, 83168,2015,2,2,0.05000,77.00000,58.00000, 83168,2015,2,3,0.00000,84.00000,56.00000, 83168,2015,2,4,0.00000,76.00000,56.00000, ``` I am unable to rectify the error. How to overcome the warning in the 26th line? How can one solve this error? Update : when I try the same thing in different way like reading dataset from file instead of converting to dataframes, I am not getting the error. What would be the reason for that? 
The code is: ``` weather_filename = 'wether.csv' weather_file = open(weather_filename) weather_data = weather_file.read() weather_file.close() # Break the weather records into lines lines = weather_data.split('\\n') labels = lines[0] values = lines[1:] n_values = len(values) # Break the list of comma-separated value strings # into lists of values. year = [] month = [] day = [] max_temp = [] j_year = 1 j_month = 2 j_day = 3 j_max_temp = 5 for i_row in range(n_values): split_values = values[i_row].split(',') if len(split_values) >= j_max_temp: year.append(int(split_values[j_year])) month.append(int(split_values[j_month])) day.append(int(split_values[j_day])) max_temp.append(float(split_values[j_max_temp])) # Isolate the recent data. i_mid = len(max_temp) \/\/ 2 temps = np.array(max_temp[i_mid:]) year = year[i_mid:] month = month[i_mid:] day = day[i_mid:] temps[np.where(temps == -99.9)] = np.nan # Remove all the nans. # Trim both ends and fill nans in the middle. # Find the first non-nan. i_start = np.where(np.logical_not(np.isnan(temps)))[0][0] temps = temps[i_start:] year = year[i_start:] month = month[i_start:] day = day[i_start:] i_nans = np.where(np.isnan(temps))[0] print(i_nans) ``` What is wrong in the first code and why the second doesn't even give a warning?", "response":"Posting as it might help future users. As correctly pointed out by others, np.isnan won't work for object or string dtypes. If you're using pandas, as mentioned here you can directly use pd.isnull, which should work in your case. 
``` import pandas as pd import numpy as np var1 = '' var2 = np.nan >>> type(var1) >>> type(var2) >>> pd.isnull(var1) False >>> pd.isnull(var2) True ```", "best_answers_score":0.8, "library_name":"numpy", "question_url":"https:\/\/stackoverflow.com\/questions\/52657223\/typeerror-ufunc-isnan-not-supported-for-the-input-types-and-the-inputs-could", "best_answers_votes":104, "question_length":7933, "response_length":383 }, { "question":"Numpy: Should I use newaxis or None? In numpy one can use the 'newaxis' object in the slicing syntax to create an axis of length one, e.g.: ``` import numpy as np print np.zeros((3,5))[:,np.newaxis,:].shape # shape will be (3,1,5) ``` The documentation states that one can also use None instead of newaxis, the effect is exactly the same. Is there any reason to choose one over the other? Is there any general preference or style guide? My impression is that newaxis is more popular, probably because it is more explicit. So is there any reason why None is allowed?", "response":"None is allowed because numpy.newaxis is merely an alias for None. ``` In [1]: import numpy In [2]: numpy.newaxis is None Out[2]: True ``` The authors probably chose it because they needed a convenient constant, and None was available. As for why you should prefer newaxis over None: mainly it's because it's more explicit, and partly because someday the numpy authors might change it to something other than None. (They're not planning to, and probably won't, but there's no good reason to prefer None.)", "best_answers_score":0.8, "library_name":"numpy", "question_url":"https:\/\/stackoverflow.com\/questions\/944863\/numpy-should-i-use-newaxis-or-none", "best_answers_votes":122, "question_length":565, "response_length":504 }, { "question":"How to implement the ReLU function in Numpy I want to make a simple neural network which uses the ReLU function. Can someone give me a clue of how can I implement the function using numpy.", "response":"There are a couple of ways. 
``` >>> x = np.random.random((3, 2)) - 0.5 >>> x array([[-0.00590765, 0.18932873], [-0.32396051, 0.25586596], [ 0.22358098, 0.02217555]]) >>> np.maximum(x, 0) array([[ 0. , 0.18932873], [ 0. , 0.25586596], [ 0.22358098, 0.02217555]]) >>> x * (x > 0) array([[-0. , 0.18932873], [-0. , 0.25586596], [ 0.22358098, 0.02217555]]) >>> (abs(x) + x) \/ 2 array([[ 0. , 0.18932873], [ 0. , 0.25586596], [ 0.22358098, 0.02217555]]) ``` If timing the results with the following code: ``` import numpy as np x = np.random.random((5000, 5000)) - 0.5 print(\"max method:\") %timeit -n10 np.maximum(x, 0) print(\"multiplication method:\") %timeit -n10 x * (x > 0) print(\"abs method:\") %timeit -n10 (abs(x) + x) \/ 2 ``` We get: ``` max method: 10 loops, best of 3: 239 ms per loop multiplication method: 10 loops, best of 3: 145 ms per loop abs method: 10 loops, best of 3: 288 ms per loop ``` So the multiplication seems to be the fastest.", "best_answers_score":0.8, "library_name":"numpy", "question_url":"https:\/\/stackoverflow.com\/questions\/32109319\/how-to-implement-the-relu-function-in-numpy", "best_answers_votes":178, "question_length":188, "response_length":947 }, { "question":"Fastest way to grow a numpy numeric array Requirements: I need to grow an array arbitrarily large from data. I can guess the size (roughly 100-200) with no guarantees that the array will fit every time Once it is grown to its final size, I need to perform numeric computations on it, so I'd prefer to eventually get to a 2-D numpy array. Speed is critical. As an example, for one of 300 files, the update() method is called 45 million times (takes 150s or so) and the finalize() method is called 500k times (takes total of 106s) ... taking a total of 250s or so. Here is my code: ``` def __init__(self): self.data = [] def update(self, row): self.data.append(row) def finalize(self): dx = np.array(self.data) ``` Other things I tried include the following code ... but this is waaaaay slower. 
``` def class A: def __init__(self): self.data = np.array([]) def update(self, row): np.append(self.data, row) def finalize(self): dx = np.reshape(self.data, size=(self.data.shape[0]\/5, 5)) ``` Here is a schematic of how this is called: ``` for i in range(500000): ax = A() for j in range(200): ax.update([1,2,3,4,5]) ax.finalize() # some processing on ax ```", "response":"I tried a few different things, with timing. ``` import numpy as np ``` The method you mention as slow: (32.094 seconds) ``` class A: def __init__(self): self.data = np.array([]) def update(self, row): self.data = np.append(self.data, row) def finalize(self): return np.reshape(self.data, newshape=(self.data.shape[0]\/5, 5)) ``` Regular ol Python list: (0.308 seconds) ``` class B: def __init__(self): self.data = [] def update(self, row): for r in row: self.data.append(r) def finalize(self): return np.reshape(self.data, newshape=(len(self.data)\/5, 5)) ``` Trying to implement an arraylist in numpy: (0.362 seconds) ``` class C: def __init__(self): self.data = np.zeros((100,)) self.capacity = 100 self.size = 0 def update(self, row): for r in row: self.add(r) def add(self, x): if self.size == self.capacity: self.capacity *= 4 newdata = np.zeros((self.capacity,)) newdata[:self.size] = self.data self.data = newdata self.data[self.size] = x self.size += 1 def finalize(self): data = self.data[:self.size] return np.reshape(data, newshape=(len(data)\/5, 5)) ``` And this is how I timed it: ``` x = C() for i in xrange(100000): x.update([i]) ``` So it looks like regular old Python lists are pretty good ;)", "best_answers_score":0.8, "library_name":"numpy", "question_url":"https:\/\/stackoverflow.com\/questions\/7133885\/fastest-way-to-grow-a-numpy-numeric-array", "best_answers_votes":121, "question_length":1152, "response_length":1207 }, { "question":"Optimize finding index of nearest point in 2d arrays 2d NumPy array x_array contains positional information in x-direction, y_array positions in y-direction. 
I then have a list of x,y points. For each point in the list I find the array index of the location closest to that point, based on this code: ``` import time import numpy def find_index_of_nearest_xy(y_array, x_array, y_point, x_point): distance = (y_array-y_point)**2 + (x_array-x_point)**2 idy,idx = numpy.where(distance==distance.min()) return idy[0],idx[0] def do_all(y_array, x_array, points): store = [] for i in xrange(points.shape[1]): store.append(find_index_of_nearest_xy(y_array,x_array,points[0,i],points[1,i])) return store # Create some dummy data y_array = numpy.random.random(10000).reshape(100,100) x_array = numpy.random.random(10000).reshape(100,100) points = numpy.random.random(10000).reshape(2,5000) # Time how long it takes to run start = time.time() results = do_all(y_array, x_array, points) end = time.time() print 'Completed in: ',end-start ``` I want to speed it up.", "response":"Here is a scipy.spatial.KDTree example ``` In [1]: from scipy import spatial In [2]: import numpy as np In [3]: A = np.random.random((10,2))*100 In [4]: A Out[4]: array([[ 68.83402637, 38.07632221], [ 76.84704074, 24.9395109 ], [ 16.26715795, 98.52763827], [ 70.99411985, 67.31740151], [ 71.72452181, 24.13516764], [ 17.22707611, 20.65425362], [ 43.85122458, 21.50624882], [ 76.71987125, 44.95031274], [ 63.77341073, 78.87417774], [ 8.45828909, 30.18426696]]) In [5]: pt = [6, 30] # <-- the point to find In [6]: A[spatial.KDTree(A).query(pt)[1]] # <-- the nearest point Out[6]: array([ 8.45828909, 30.18426696]) #how it works! 
In [7]: distance,index = spatial.KDTree(A).query(pt) In [8]: distance # <-- The distances to the nearest neighbors Out[8]: 2.4651855048258393 In [9]: index # <-- The locations of the neighbors Out[9]: 9 #then In [10]: A[index] Out[10]: array([ 8.45828909, 30.18426696]) ```", "best_answers_score":0.8, "library_name":"numpy", "question_url":"https:\/\/stackoverflow.com\/questions\/10818546\/optimize-finding-index-of-nearest-point-in-2d-arrays", "best_answers_votes":100, "question_length":1053, "response_length":901 }, { "question":"When importing tensorflow, I get the following error: No module named 'numpy.core._multiarray_umath' I have installed Anaconda3 and Tensorflow. When I try to import Tensorflow in python shell I receive the following error: ModuleNotFoundError: No module named 'numpy.core._multiarray_umath' ImportError: numpy.core.multiarray failed to import The above exception was the direct cause of the following exception: Traceback (most recent call last): File \"\", line 980, in _find_and_load SystemError: returned a result with an error set ImportError: numpy.core._multiarray_umath failed to import ImportError: numpy.core.umath failed to import I am not sure what the problem is as numpy is installed on my system and can be successfully imported in python. I am using Windows10.", "response":"I also had the same issue. It got resolved once I upgraded numpy from 1.15.4 to 1.16.1. If you're using pip: pip install numpy --upgrade Numpy that came with Anaconda3 is of version 1.15.4, so I upgraded and it worked. Side note: if you're also using scikit-image in your script, be aware that numpy 1.16.3 has a conflict with old versions of scikit-image (e.g. you may get ImportError: cannot import name '_validate_lengths').
In that case, pip install --upgrade scikit-image from terminal solved the issue for me.", "best_answers_score":0.8, "library_name":"numpy", "question_url":"https:\/\/stackoverflow.com\/questions\/54665842\/when-importing-tensorflow-i-get-the-following-error-no-module-named-numpy-cor", "best_answers_votes":142, "question_length":775, "response_length":519 }, { "question":"Why does numpy std() give a different result to matlab std()? I try to convert matlab code to numpy and figured out that numpy has a different result with the std function. in matlab ``` std([1,3,4,6]) ans = 2.0817 ``` in numpy ``` np.std([1,3,4,6]) 1.8027756377319946 ``` Is this normal? And how should I handle this?", "response":"The NumPy function np.std takes an optional parameter ddof: \"Delta Degrees of Freedom\". By default, this is 0. Set it to 1 to get the MATLAB result: ``` >>> np.std([1,3,4,6], ddof=1) 2.0816659994661326 ``` To add a little more context, in the calculation of the variance (of which the standard deviation is the square root) we typically divide by the number of values we have. But if we select a random sample of N elements from a larger distribution and calculate the variance, division by N can lead to an underestimate of the actual variance. To fix this, we can lower the number we divide by (the degrees of freedom) to a number less than N (usually N-1). The ddof parameter allows us change the divisor by the amount we specify. Unless told otherwise, NumPy will calculate the biased estimator for the variance (ddof=0, dividing by N). This is what you want if you are working with the entire distribution (and not a subset of values which have been randomly picked from a larger distribution). If the ddof parameter is given, NumPy divides by N - ddof instead. The default behaviour of MATLAB's std is to correct the bias for sample variance by dividing by N-1. This gets rid of some of (but probably not all of) of the bias in the standard deviation. 
This is likely to be what you want if you're using the function on a random sample of a larger distribution. The nice answer by @hbaderts gives further mathematical details.", "best_answers_score":0.8, "library_name":"numpy", "question_url":"https:\/\/stackoverflow.com\/questions\/27600207\/why-does-numpy-std-give-a-different-result-to-matlab-std", "best_answers_votes":175, "question_length":318, "response_length":1431 }, { "question":"How do you find the IQR in Numpy? Is there a baked-in Numpy\/Scipy function to find the interquartile range? I can do it pretty easily myself, but mean() exists which is basically sum\/len... ``` def IQR(dist): return np.percentile(dist, 75) - np.percentile(dist, 25) ```", "response":"np.percentile takes multiple percentile arguments, and you are slightly better off doing: ``` q75, q25 = np.percentile(x, [75 ,25]) iqr = q75 - q25 ``` or ``` iqr = np.subtract(*np.percentile(x, [75, 25])) ``` than making two calls to percentile: ``` In [8]: x = np.random.rand(1e6) In [9]: %timeit q75, q25 = np.percentile(x, [75 ,25]); iqr = q75 - q25 10 loops, best of 3: 24.2 ms per loop In [10]: %timeit iqr = np.subtract(*np.percentile(x, [75, 25])) 10 loops, best of 3: 24.2 ms per loop In [11]: %timeit iqr = np.percentile(x, 75) - np.percentile(x, 25) 10 loops, best of 3: 33.7 ms per loop ```", "best_answers_score":0.8, "library_name":"numpy", "question_url":"https:\/\/stackoverflow.com\/questions\/23228244\/how-do-you-find-the-iqr-in-numpy", "best_answers_votes":160, "question_length":269, "response_length":602 }, { "question":"Immutable numpy array? Is there a simple way to create an immutable NumPy array? 
If one has to derive a class from ndarray to do this, what's the minimum set of methods that one has to override to achieve immutability?", "response":"You can make a numpy array unwriteable: ``` a = np.arange(10) a.flags.writeable = False a[0] = 1 # Gives: ValueError: assignment destination is read-only ``` Also see the discussion in this thread: http:\/\/mail.scipy.org\/pipermail\/numpy-discussion\/2008-December\/039274.html and the documentation: http:\/\/docs.scipy.org\/doc\/numpy\/reference\/generated\/numpy.ndarray.flags.html", "best_answers_score":0.8, "library_name":"numpy", "question_url":"https:\/\/stackoverflow.com\/questions\/5541324\/immutable-numpy-array", "best_answers_votes":155, "question_length":218, "response_length":372 }, { "question":"Find out if a matrix is positive definite with NumPy How can I find out if a matrix is positive definite? My matrix is a NumPy matrix. I was expecting to find a related method in the NumPy library, but I didn't have any success.", "response":"You can also check if all the eigenvalues of the matrix are positive. If so, the matrix is positive definite: ``` import numpy as np def is_pos_def(x): return np.all(np.linalg.eigvals(x) > 0) ```", "best_answers_score":0.8, "library_name":"numpy", "question_url":"https:\/\/stackoverflow.com\/questions\/16266720\/find-out-if-a-matrix-is-positive-definite-with-numpy", "best_answers_votes":114, "question_length":230, "response_length":191 }, { "question":"Using a pre-trained word embedding (word2vec or Glove) in TensorFlow I've recently reviewed an interesting implementation for convolutional text classification.
However, all TensorFlow code I've reviewed uses random (not pre-trained) embedding vectors like the following: ``` with tf.device('\/cpu:0'), tf.name_scope(\"embedding\"): W = tf.Variable( tf.random_uniform([vocab_size, embedding_size], -1.0, 1.0), name=\"W\") self.embedded_chars = tf.nn.embedding_lookup(W, self.input_x) self.embedded_chars_expanded = tf.expand_dims(self.embedded_chars, -1) ``` Does anybody know how to use the results of Word2vec or a GloVe pre-trained word embedding instead of a random one?", "response":"There are a few ways that you can use a pre-trained embedding in TensorFlow. Let's say that you have the embedding in a NumPy array called embedding, with vocab_size rows and embedding_dim columns and you want to create a tensor W that can be used in a call to tf.nn.embedding_lookup(). Simply create W as a tf.constant() that takes embedding as its value: ``` W = tf.constant(embedding, name=\"W\") ``` This is the easiest approach, but it is not memory efficient because the value of a tf.constant() is stored multiple times in memory. Since embedding can be very large, you should only use this approach for toy examples. Create W as a tf.Variable and initialize it from the NumPy array via a tf.placeholder(): ``` W = tf.Variable(tf.constant(0.0, shape=[vocab_size, embedding_dim]), trainable=False, name=\"W\") embedding_placeholder = tf.placeholder(tf.float32, [vocab_size, embedding_dim]) embedding_init = W.assign(embedding_placeholder) # ... sess = tf.Session() sess.run(embedding_init, feed_dict={embedding_placeholder: embedding}) ``` This avoids storing a copy of embedding in the graph, but it does require enough memory to keep two copies of the matrix in memory at once (one for the NumPy array, and one for the tf.Variable). Note that I've assumed that you want to hold the embedding matrix constant during training, so W is created with trainable=False.
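As a side note, whichever way W is created, the lookup itself is just row gathering: tf.nn.embedding_lookup(W, ids) selects the rows of W indexed by ids, which you can picture with plain NumPy fancy indexing (sizes and names here are made up for illustration, not taken from the original code): ```python
import numpy as np

# Hypothetical sizes, for illustration only.
vocab_size, embedding_dim = 5, 3
embedding = np.arange(vocab_size * embedding_dim, dtype=np.float32).reshape(
    vocab_size, embedding_dim)

ids = np.array([0, 2, 2, 4])     # token ids for one input sequence
embedded_chars = embedding[ids]  # one embedding row per token id

print(embedded_chars.shape)      # (4, 3)
``` So loading a pre-trained matrix only changes where the rows of W come from, not how they are looked up.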
If the embedding was trained as part of another TensorFlow model, you can use a tf.train.Saver to load the value from the other model's checkpoint file. This means that the embedding matrix can bypass Python altogether. Create W as in option 2, then do the following: ``` W = tf.Variable(...) embedding_saver = tf.train.Saver({\"name_of_variable_in_other_model\": W}) # ... sess = tf.Session() embedding_saver.restore(sess, \"checkpoint_filename.ckpt\") ```", "best_answers_score":0.8, "library_name":"numpy", "question_url":"https:\/\/stackoverflow.com\/questions\/35687678\/using-a-pre-trained-word-embedding-word2vec-or-glove-in-tensorflow", "best_answers_votes":132, "question_length":670, "response_length":1819 }, { "question":"Is there a NumPy-like package for Node.js and if not why not? [closed] Closed. This question is opinion-based. It is not currently accepting answers. Want to improve this question? Because this question may lead to opinionated discussion, debate, and answers, it has been closed. You may edit the question if you feel you can improve it so that it requires answers that include facts and citations or a detailed explanation of the proposed solution. If edited, the question will be reviewed and might be reopened. Closed last year. The community reviewed whether to reopen this question last year and left it closed: Original close reason(s) were not resolved Improve this question I've always been amazed at how much much much faster things become if you manage to rewrite code that loops though your ndarray and does something, with NumPy functions that work on the whole array at once. I'm looking for something similar in Node.js. So far I have turned up some things, none of which look promising: scikit-node, runs scikit-learn in Python, and interfaces with Node.js. I haven't tried it, it may not give me the speed that I would like. There are some rather old, and newer, JavaScript matrix libraries (sylvester, gl-matrix, ...). 
In addition to not being sure they work well with matrices larger than 4x4 (which is most useful in 3D rendering), they seem to be native JavaScript (and some, not sure these, use WebGL acceleration). Great on the browser, not so on Node.js. As far as I know, npms can be written in C++. I'm wondering why there are no NumPy-like libraries for Node.js? Is there just not enough interest in Node.js yet from the community that needs that kind of power? Is there a hope that ES6 features (list comprehensions) will allow JavaScript compilers to automatically vectorise native JavaScript code to C++ speeds? Am I missing something else? I'm not asking for \"what is the best package to do xyz\". I'm just wondering if there is a technical reason there is no package to do this on Node.js, a social reason, or no reason at all and there is just a package I missed. Maybe to avoid too many opinionated criticism: I have about 10,000 matrices that are 100 x 100 each. What's a reasonable fast way to add them together? After some more googling for \"Node.js scientific computing\" there are links to some very interesting notes: https:\/\/cs.stackexchange.com\/questions\/1693\/a-faster-leaner-javascript-for-scientific-computing-what-features-should-i-kee http:\/\/www.quora.com\/Can-Node-js-handle-numerical-computation-the-same-way-that-languages-like-R-or-Julia-can Javascript and Scientific Processing? Basically as far as I understand now, no-one has bothered so far. Also, since there are some major omissions in the js TypedArrays (such as 64bit ints), it might be hard to add good support by just using NPMs, and not hacking the engine itself\u2014something that would defeat the purpose. Then again, I didn't further research this last statement.", "response":"No, there are no technical reasons why a numpy-like package does not exist for Node.js and, more generally, JavaScript. 
There are two main obstacles preventing Node.js and JavaScript from achieving more mind share in the data science and numeric computing communities. The first obstacle is community. While the JavaScript community is huge, the subset of people within that community doing interesting things in numeric computing is small. Hence, if you want to do numeric computing in JavaScript and Node.js, finding resources to help you along the way can be hard, and it may feel like a lonely endeavor. The second obstacle is the absence of comparable libraries (a chicken-and-egg problem: libraries are needed to attract library authors, and authors are needed to write good libraries). There are no technical reasons why libraries cannot be written in JavaScript or leverage Node.js (e.g., via native add-ons). I know, as I have written many numeric computing libraries in JavaScript. So while numeric computing is possible in JavaScript, the problem stems from an inability to attract developers having sufficient expertise and capable of putting in the time and effort needed to write high quality numeric computing implementations. Regarding the specific language features mentioned in the OP: ES6\/ES2015: none of the recent language additions help or hinder development of numeric computing libraries in JavaScript. Potential additions like list comprehensions will not be game changers either. The one change to the web platform which will make a difference is WebAssembly. With WebAssembly, compiling C\/C++\/Fortran libraries to run in web browsers will be made easier. At the time of this answer, WebAssembly looks to be the means for bringing SIMD to the web, potentially allowing some speed-ups, although the focus seems to be on short SIMD, rather than long. But even with WebAssembly, porting numeric computing libraries to the web will not be as simple as hitting the compile button.
Numeric computing code bases will need to be massaged to become amenable for use on the web, and, even then, higher level APIs will likely need to be written to mask some of the lower level features, such as manually managing the heap. Native add-ons: yes, node modules can be written as native add-ons, allowing C\/C++\/Fortran code to be used within a Node.js application. Individuals have written libraries to this end; for example, see stdlib. If done well, Node.js can perform numeric computations at speeds comparable to directly using native implementations. Typed arrays: as they are now, they are suitable for numeric computation. Similar to C, you can create pooled buffers, which allow for efficient memory reuse and better performance. Furthermore, similar to languages like R, Python, and Julia, you can leverage typed arrays to create ndarray (aka strided array) interfaces. While U\/Int64 integer arrays are not currently available at the time of this answer, (a) their absence is not a show stopper and (b) proposals are advancing at the specification level to add U\/Int64 integer arrays to JavaScript. Ditto for complex numbers with structured types. My personal belief is that some form of numeric computing is inevitable in JavaScript and Node.js. The advantages (ubiquity, distribution, performance) and potential applications (edge computing, integrating machine learning, data visualization) are too strong an evolutionary force not to support data science applications, at least at a basic level.
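To make the typed-array point concrete for the question's example (summing many 100x100 matrices), here is a minimal sketch in plain Node.js with no npm packages; a smaller matrix count is used so the demo runs instantly, and the layout is the same idea NumPy relies on: one flat, homogeneous, contiguous buffer: ```javascript
// `count` matrices of size rows x cols, stored back-to-back in one Float64Array.
const rows = 100, cols = 100, count = 100;
const size = rows * cols;
const matrices = new Float64Array(count * size).fill(1);

// Accumulate the element-wise sum into one flattened 100x100 result.
const sum = new Float64Array(size);
for (let m = 0; m < count; m++) {
  const base = m * size;
  for (let i = 0; i < size; i++) sum[i] += matrices[base + i];
}

console.log(sum[0]); // 100: every input entry is 1 and there are 100 matrices
``` Because the buffer is dense and homogeneous, the JIT can compile this loop to tight machine code, which is the same property that makes NumPy's vectorized operations fast.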
disclosure: I and others are currently working on a project (https:\/\/github.com\/stdlib-js\/stdlib) which aims to provide numeric computing facilities in JavaScript and Node.js.", "best_answers_score":0.8, "library_name":"numpy", "question_url":"https:\/\/stackoverflow.com\/questions\/31412537\/is-there-a-numpy-like-package-for-node-js-and-if-not-why-not", "best_answers_votes":61, "question_length":2969, "response_length":3660 }, { "question":"TypeError: Object of type 'float32' is not JSON serializable [duplicate] This question already has answers here: Convert numpy type to python (7 answers) Closed 6 years ago. I'm working with numpy.float32 numbers and they don't go into JSON. What's the right approach to overcome this issue? ``` import numpy as np import json a = np.float32(1) json.dumps(a) TypeError: Object of type 'float32' is not JSON serializable ```", "response":"It has to be a string, so you can have: ``` json.dumps(str(a)) ``` EDIT: JSON is a format for serialising object data. It doesn't really care or know about Python types, the json package tries to translate whatever object you pass json.dumps() into a string form via a conversion table that only supports some types (see doc below). This is the reason why I think it's a good idea to just pass a string to avoid this issue: numpy.float32 just isn't in the table. Because some have commented that explicitly passing a string to dumps \"sounds wrong\" I'll just add the doc here json.dumps(obj, *, skipkeys=False, ensure_ascii=True, check_circular=True, allow_nan=True, cls=None, indent=None, separators=None, default=None, sort_keys=False, **kw) Serialize obj to a JSON formatted str using this conversion table. The arguments have the same meaning as in dump(). Note Keys in key\/value pairs of JSON are always of the type str. When a dictionary is converted into JSON, all the keys of the dictionary are coerced to strings. 
As a result of this, if a dictionary is converted into JSON and then back into a dictionary, the dictionary may not equal the original one. That is, loads(dumps(x)) != x if x has non-string keys. taken from the official docs here: https:\/\/docs.python.org\/3\/library\/json.html", "best_answers_score":0.8, "library_name":"numpy", "question_url":"https:\/\/stackoverflow.com\/questions\/53082708\/typeerror-object-of-type-float32-is-not-json-serializable", "best_answers_votes":69, "question_length":423, "response_length":1296 }, { "question":"How to apply numpy.linalg.norm to each row of a matrix? I have a 2D matrix and I want to take norm of each row. But when I use numpy.linalg.norm(X) directly, it takes the norm of the whole matrix. I can take norm of each row by using a for loop and then taking norm of each X[i], but it takes a huge time since I have 30k rows. Any suggestions to find a quicker way? Or is it possible to apply np.linalg.norm to each row of a matrix?", "response":"For numpy 1.9+ Note that, as perimosocordiae shows, as of NumPy version 1.9, np.linalg.norm(x, axis=1) is the fastest way to compute the L2-norm. For numpy < 1.9 If you are computing an L2-norm, you could compute it directly (using the axis=-1 argument to sum along rows): ``` np.sum(np.abs(x)**2,axis=-1)**(1.\/2) ``` Lp-norms can be computed similarly of course. 
It is considerably faster than np.apply_along_axis, though perhaps not as convenient: ``` In [48]: %timeit np.apply_along_axis(np.linalg.norm, 1, x) 1000 loops, best of 3: 208 us per loop In [49]: %timeit np.sum(np.abs(x)**2,axis=-1)**(1.\/2) 100000 loops, best of 3: 18.3 us per loop ``` Other ord forms of norm can be computed directly too (with similar speedups): ``` In [55]: %timeit np.apply_along_axis(lambda row:np.linalg.norm(row,ord=1), 1, x) 1000 loops, best of 3: 203 us per loop In [54]: %timeit np.sum(abs(x), axis=-1) 100000 loops, best of 3: 10.9 us per loop ```", "best_answers_score":0.8, "library_name":"numpy", "question_url":"https:\/\/stackoverflow.com\/questions\/7741878\/how-to-apply-numpy-linalg-norm-to-each-row-of-a-matrix", "best_answers_votes":103, "question_length":433, "response_length":940 }, { "question":"How to convert list of model objects to pandas dataframe? I have an array of objects of this class ``` class CancerDataEntity(Model): age = columns.Text(primary_key=True) gender = columns.Text(primary_key=True) cancer = columns.Text(primary_key=True) deaths = columns.Integer() ... ``` When printed, the array looks like this ``` [CancerDataEntity(age=u'80-85+', gender=u'Female', cancer=u'All cancers (C00-97,B21)', deaths=15306), CancerDataEntity(... ``` I want to convert this to a data frame so I can work with it in a more suitable way - to aggregate, count, sum and similar. I would like the data frame to look something like this: ``` age gender cancer deaths 0 80-85+ Female ... 15306 1 ... ``` Is there a way to achieve this using numpy\/pandas easily, without manually processing the input array?", "response":"A much cleaner way to do this is to define a to_dict method on your class and then use pandas.DataFrame.from_records ``` class Signal(object): def __init__(self, x, y): self.x = x self.y = y def to_dict(self): return { 'x': self.x, 'y': self.y, } ``` e.g.
``` In [87]: signals = [Signal(3, 9), Signal(4, 16)] In [88]: pandas.DataFrame.from_records([s.to_dict() for s in signals]) Out[88]: x y 0 3 9 1 4 16 ```", "best_answers_score":0.8, "library_name":"numpy", "question_url":"https:\/\/stackoverflow.com\/questions\/34997174\/how-to-convert-list-of-model-objects-to-pandas-dataframe", "best_answers_votes":123, "question_length":816, "response_length":409 }, { "question":"What is the most efficient way to check if a value exists in a NumPy array? I have a very large NumPy array ``` 1 40 3 4 50 4 5 60 7 5 49 6 6 70 8 8 80 9 8 72 1 9 90 7 .... ``` I want to check to see if a value exists in the 1st column of the array. I've got a bunch of homegrown ways (e.g. iterating through each row and checking), but given the size of the array I'd like to find the most efficient method. Thanks!", "response":"How about ``` if value in my_array[:, col_num]: do_whatever ``` Edit: I think __contains__ is implemented in such a way that this is the same as @detly's version", "best_answers_score":0.8, "library_name":"numpy", "question_url":"https:\/\/stackoverflow.com\/questions\/7088625\/what-is-the-most-efficient-way-to-check-if-a-value-exists-in-a-numpy-array", "best_answers_votes":111, "question_length":416, "response_length":161 }, { "question":"Get the position of the largest value in a multi-dimensional NumPy array How can I get the position (indices) of the largest value in a multi-dimensional NumPy array?", "response":"The argmax() method should help. Update (After reading comment) I believe the argmax() method would work for multi dimensional arrays as well.
The linked documentation gives an example of this: ``` >>> a = array([[10,50,30],[60,20,40]]) >>> maxindex = a.argmax() >>> maxindex 3 ``` Update 2 (Thanks to KennyTM's comment) You can use unravel_index(a.argmax(), a.shape) to get the index as a tuple: ``` >>> from numpy import unravel_index >>> unravel_index(a.argmax(), a.shape) (1, 0) ```", "best_answers_score":0.8, "library_name":"numpy", "question_url":"https:\/\/stackoverflow.com\/questions\/3584243\/get-the-position-of-the-largest-value-in-a-multi-dimensional-numpy-array", "best_answers_votes":203, "question_length":170, "response_length":486 }, { "question":"Importing data from a MySQL database into a Pandas data frame including column names [duplicate] This question already has answers here: How to convert SQL Query result to PANDAS Data Structure? (18 answers) Closed 9 years ago. I am importing data from a MySQL database into a Pandas data frame. The following excerpt is the code that I am using: ``` import mysql.connector as sql import pandas as pd db_connection = sql.connect(host='hostname', database='db_name', user='username', password='password') db_cursor = db_connection.cursor() db_cursor.execute('SELECT * FROM table_name') table_rows = db_cursor.fetchall() df = pd.DataFrame(table_rows) ``` When I print the data frame it does properly represent the data but my question is, is it possible to also keep the column names? Here is an example output: ``` 0 1 2 3 4 5 6 7 8 0 :ID[giA0CqQcx+(9kbuSKV== NaN NaN None None None None None None 1 lXB+jIS)DN!CXmj>0(P8^]== NaN NaN None None None None None None 2 lXB+jIS)DN!CXmj>0(P8^]== NaN NaN None None None None None None 3 lXB+jIS)DN!CXmj>0(P8^]== NaN NaN None None None None None None 4 lXB+jIS)DN!CXmj>0(P8^]== NaN NaN None None None None None None ``` What I would like to do is keep the column name, which would replace the pandas column indexes. For example, instead of having 0, the column name would be: \"First_column\" as in the MySQL table. 
Is there a good way to go about this? Or is there a more efficient approach to importing data from MySQL into a Pandas data frame than mine?", "response":"IMO it would be much more efficient to use pandas for reading data from your MySQL server: ``` from sqlalchemy import create_engine import pandas as pd db_connection_str = 'mysql+pymysql:\/\/mysql_user:mysql_password@mysql_host\/mysql_db' db_connection = create_engine(db_connection_str) df = pd.read_sql('SELECT * FROM table_name', con=db_connection) ``` This should also take care of column names...", "best_answers_score":0.8, "library_name":"numpy", "question_url":"https:\/\/stackoverflow.com\/questions\/37730243\/importing-data-from-a-mysql-database-into-a-pandas-data-frame-including-column-n", "best_answers_votes":203, "question_length":1495, "response_length":398 }, { "question":"Save \/ load scipy sparse csr_matrix in portable data format How do you save\/load a scipy sparse csr_matrix in a portable format? The scipy sparse matrix is created on Python 3 (Windows 64-bit) to run on Python 2 (Linux 64-bit). Initially, I used pickle (with protocol=2 and fix_imports=True) but this didn't work going from Python 3.2.2 (Windows 64-bit) to Python 2.7.2 (Windows 32-bit) and got the error: ``` TypeError: ('data type not understood', , (, (0,), '[98]')). ``` Next, I tried numpy.save and numpy.load as well as scipy.io.mmwrite() and scipy.io.mmread() and none of these methods worked either.", "response":"edit: scipy 0.19 now has scipy.sparse.save_npz and scipy.sparse.load_npz. ``` from scipy import sparse sparse.save_npz(\"yourmatrix.npz\", your_matrix) your_matrix_back = sparse.load_npz(\"yourmatrix.npz\") ``` For both functions, the file argument may also be a file-like object (i.e. the result of open) instead of a filename. Got an answer from the Scipy user group: A csr_matrix has 3 data attributes that matter: .data, .indices, and .indptr. All are simple ndarrays, so numpy.save will work on them.
Save the three arrays with numpy.save or numpy.savez, load them back with numpy.load, and then recreate the sparse matrix object with: ``` new_csr = csr_matrix((data, indices, indptr), shape=(M, N)) ``` So for example: ``` def save_sparse_csr(filename, array): np.savez(filename, data=array.data, indices=array.indices, indptr=array.indptr, shape=array.shape) def load_sparse_csr(filename): loader = np.load(filename) return csr_matrix((loader['data'], loader['indices'], loader['indptr']), shape=loader['shape']) ```", "best_answers_score":0.8, "library_name":"numpy", "question_url":"https:\/\/stackoverflow.com\/questions\/8955448\/save-load-scipy-sparse-csr-matrix-in-portable-data-format", "best_answers_votes":146, "question_length":605, "response_length":1019 }, { "question":"Get year, month or day from numpy datetime64 I have an array of datetime64 type: ``` dates = np.datetime64(['2010-10-17', '2011-05-13', \"2012-01-15\"]) ``` Is there a better way than looping through each element just to get np.array of years: ``` years = f(dates) #output: array([2010, 2011, 2012], dtype=int8) #or dtype = string ``` I'm using stable numpy version 1.6.2.", "response":"I find the following tricks give between 2x and 4x speed increase versus the pandas method described in this answer (i.e. pd.DatetimeIndex(dates).year etc.). The speed of [dt.year for dt in dates.astype(object)] I find to be similar to the pandas method. Also these tricks can be applied directly to ndarrays of any shape (2D, 3D etc.) 
``` dates = np.arange(np.datetime64('2000-01-01'), np.datetime64('2010-01-01')) years = dates.astype('datetime64[Y]').astype(int) + 1970 months = dates.astype('datetime64[M]').astype(int) % 12 + 1 days = dates - dates.astype('datetime64[M]') + 1 ```", "best_answers_score":0.8, "library_name":"numpy", "question_url":"https:\/\/stackoverflow.com\/questions\/13648774\/get-year-month-or-day-from-numpy-datetime64", "best_answers_votes":95, "question_length":370, "response_length":585 }, { "question":"Get index of a row of a pandas dataframe as an integer Assume a simple dataframe, for example ``` A B 0 1 0.810743 1 2 0.595866 2 3 0.154888 3 4 0.472721 4 5 0.894525 5 6 0.978174 6 7 0.859449 7 8 0.541247 8 9 0.232302 9 10 0.276566 ``` How can I retrieve an index value of a row, given a condition? For example: dfb = df[df['A']==5].index.values.astype(int) returns [4], but what I would like to get is just 4. This is causing me trouble later in the code. Based on some conditions, I want to have a record of the indexes where that condition is fulfilled, and then select rows between. I tried ``` dfb = df[df['A']==5].index.values.astype(int) dfbb = df[df['A']==8].index.values.astype(int) df.loc[dfb:dfbb,'B'] ``` for a desired output ``` A B 4 5 0.894525 5 6 0.978174 6 7 0.859449 ``` but I get TypeError: '[4]' is an invalid key", "response":"The easiest fix is to add [0] to select the first value of the one-element array: ``` dfb = df[df['A']==5].index.values.astype(int)[0] dfbb = df[df['A']==8].index.values.astype(int)[0] ``` ``` dfb = int(df[df['A']==5].index[0]) dfbb = int(df[df['A']==8].index[0]) ``` But if no value matches, an error is raised, because the first value does not exist.
The solution is to use next with iter to get a default value if nothing matches: ``` dfb = next(iter(df[df['A']==5].index), 'no match') print (dfb) 4 dfb = next(iter(df[df['A']==50].index), 'no match') print (dfb) no match ``` Then we need to subtract 1: ``` print (df.loc[dfb:dfbb-1,'B']) 4 0.894525 5 0.978174 6 0.859449 Name: B, dtype: float64 ``` Another solution with boolean indexing or query: ``` print (df[(df['A'] >= 5) & (df['A'] < 8)]) print (df.query('A >= 5 and A < 8')) A B 4 5 0.894525 5 6 0.978174 6 7 0.859449 ```", "best_answers_score":0.8, "library_name":"numpy", "question_url":"https:\/\/stackoverflow.com\/questions\/41217310\/get-index-of-a-row-of-a-pandas-dataframe-as-an-integer", "best_answers_votes":109, "question_length":834, "response_length":864 }, { "question":"Numpy: Checking if a value is NaT ``` nat = np.datetime64('NaT') nat == nat >> FutureWarning: In the future, 'NAT == x' and 'x == NAT' will always be False. np.isnan(nat) >> TypeError: ufunc 'isnan' not supported for the input types, and the inputs could not be safely coerced to any supported types according to the casting rule ''safe'' ``` How can I check if a datetime64 is NaT? I can't seem to dig anything out of the docs.
I know Pandas can do it, but I'd rather not add a dependency for something so basic.", "response":"NumPy has an isnat function as of version 1.13.0: ``` import numpy as np np.isnat(np.datetime64(\"NaT\")) ``` pandas can check for NaT with pandas.isnull: ``` >>> import numpy as np >>> import pandas as pd >>> pd.isnull(np.datetime64('NaT')) True ``` If you don't want to use pandas you can also define your own function (parts are taken from the pandas source): ``` nat_as_integer = np.datetime64('NAT').view('i8') def isnat(your_datetime): dtype_string = str(your_datetime.dtype) if 'datetime64' in dtype_string or 'timedelta64' in dtype_string: return your_datetime.view('i8') == nat_as_integer return False # it can't be a NaT if it's not a datetime ``` This correctly identifies NaT values: ``` >>> isnat(np.datetime64('NAT')) True >>> isnat(np.timedelta64('NAT')) True ``` And recognizes when it's not a datetime or timedelta: ``` >>> isnat(np.timedelta64('NAT').view('i8')) False ``` In the future there might be an isnat-function in the numpy code, at least they have a (currently open) pull request about it: Link to the PR (NumPy github)
I've come up with a few methods that appear to work: Masking ``` m = np.zeros_like(a, dtype=bool) m[np.unique(a, return_index=True)[1]] = True a[~m] ``` Set operations ``` a[~np.in1d(np.arange(len(a)), np.unique(a, return_index=True)[1], assume_unique=True)] ``` This one is cute but probably illegal (as a isn't actually unique): ``` np.setxor1d(a, np.unique(a), assume_unique=True) ``` Histograms ``` u, i = np.unique(a, return_inverse=True) u[np.bincount(i) > 1] ``` Sorting ``` s = np.sort(a, axis=None) s[:-1][s[1:] == s[:-1]] ``` Pandas ``` s = pd.Series(a) s[s.duplicated()] ``` Is there anything I've missed? I'm not necessarily looking for a numpy-only solution, but it has to work with numpy data types and be efficient on medium-sized data sets (up to 10 million in size). Conclusions Testing with a 10 million size data set (on a 2.8GHz Xeon): ``` a = np.random.randint(10**7, size=10**7) ``` The fastest is sorting, at 1.1s. The dubious xor1d is second at 2.6s, followed by masking and Pandas Series.duplicated at 3.1s, bincount at 5.6s, and in1d and senderle's setdiff1d both at 7.3s. Steven's Counter is only a little slower, at 10.5s; trailing behind are Burhan's Counter.most_common at 110s and DSM's Counter subtraction at 360s. I'm going to use sorting for performance, but I'm accepting Steven's answer because the performance is acceptable and it feels clearer and more Pythonic. Edit: discovered the Pandas solution. If Pandas is available it's clear and performs well.", "response":"As of numpy version 1.9.0, np.unique has an argument return_counts which greatly simplifies your task: ``` u, c = np.unique(a, return_counts=True) dup = u[c > 1] ``` This is similar to using Counter, except you get a pair of arrays instead of a mapping. I'd be curious to see how they perform relative to each other. It's probably worth mentioning that even though np.unique is quite fast in practice due to its numpyness, it has worse algorithmic complexity than the Counter solution. 
np.unique is sort-based, so runs asymptotically in O(n log n) time. Counter is hash-based, so has O(n) complexity. This will not matter much for anything but the largest datasets.", "best_answers_score":0.8, "library_name":"numpy", "question_url":"https:\/\/stackoverflow.com\/questions\/11528078\/determining-duplicate-values-in-an-array", "best_answers_votes":94, "question_length":1795, "response_length":665 }, { "question":"Why are NumPy arrays so fast? I just changed a program I am writing to hold my data as numpy arrays as I was having performance issues, and the difference was incredible. It originally took 30 minutes to run and now takes 2.5 seconds! I was wondering how it does it. I assume it is that the because it removes the need for for loops but beyond that I am stumped.", "response":"Numpy arrays are densely packed arrays of homogeneous type. Python lists, by contrast, are arrays of pointers to objects, even when all of them are of the same type. So, you get the benefits of locality of reference. Also, many Numpy operations are implemented in C, avoiding the general cost of loops in Python, pointer indirection and per-element dynamic type checking. The speed boost depends on which operations you're performing, but a few orders of magnitude isn't uncommon in number crunching programs.", "best_answers_score":0.8, "library_name":"numpy", "question_url":"https:\/\/stackoverflow.com\/questions\/8385602\/why-are-numpy-arrays-so-fast", "best_answers_votes":135, "question_length":362, "response_length":509 }, { "question":"Python RuntimeWarning: overflow encountered in long scalars I am new to programming. In my latest Python 2.7 project I encountered the following: RuntimeWarning: overflow encountered in long_scalars Could someone please elaborate what this means and what I could do to fix that? The code runs through, but I'm not sure if it is a good idea to just ignore the warning. 
It happens during an append process like: ``` SomeList.append(VeryLongFormula) ```", "response":"Here's an example which issues the same warning: ``` import numpy as np np.seterr(all='warn') A = np.array([10]) a=A[-1] a**a ``` yields ``` RuntimeWarning: overflow encountered in long_scalars ``` In the example above it happens because a is of dtype int32, and the maximim value storable in an int32 is 2**31-1. Since 10**10 > 2**32-1, the exponentiation results in a number that is bigger than that which can be stored in an int32. Note that you can not rely on np.seterr(all='warn') to catch all overflow errors in numpy. For example, on 32-bit NumPy ``` >>> np.multiply.reduce(np.arange(21)+1) -1195114496 ``` while on 64-bit NumPy: ``` >>> np.multiply.reduce(np.arange(21)+1) -4249290049419214848 ``` Both fail without any warning, although it is also due to an overflow error. The correct answer is that 21! equals ``` In [47]: import math In [48]: math.factorial(21) Out[50]: 51090942171709440000L ``` According to numpy developer, Robert Kern, Unlike true floating point errors (where the hardware FPU sets a flag whenever it does an atomic operation that overflows), we need to implement the integer overflow detection ourselves. We do it on the scalars, but not arrays because it would be too slow to implement for every atomic operation on arrays. So the burden is on you to choose appropriate dtypes so that no operation overflows.", "best_answers_score":0.8, "library_name":"numpy", "question_url":"https:\/\/stackoverflow.com\/questions\/7559595\/python-runtimewarning-overflow-encountered-in-long-scalars", "best_answers_votes":95, "question_length":450, "response_length":1344 }, { "question":"What is an intuitive explanation of np.unravel_index? I have read the documentation for np.unravel_index and played around with the function, but I can't figure out what it is doing.", "response":"Computer memory is addressed linearly. Each memory cell corresponds to a number. 
A block of memory can be addressed in terms of a base, which is the memory address of its first element, and the item index. For example, assuming the base address is 10,000: ``` item index 0 1 2 3 memory address 10,000 10,001 10,002 10,003 ``` To store multi-dimensional blocks, their geometry must somehow be made to fit into linear memory. In C and NumPy, this is done row-by-row. A 2D example would be: ``` | 0 1 2 3 --+------------------------ 0 | 0 1 2 3 1 | 4 5 6 7 2 | 8 9 10 11 ``` So, for example, in this 3-by-4 block the 2D index (1, 2) would correspond to the linear index 6 which is 1 x 4 + 2. unravel_index does the inverse. Given a linear index, it computes the corresponding ND index. Since this depends on the block dimensions, these also have to be passed. So, in our example, we can get the original 2D index (1, 2) back from the linear index 6: ``` >>> np.unravel_index(6, (3, 4)) (1, 2) ``` Note: The above glosses over a few details. 1) Translating the item index to memory address also has to account for item size. For example, an integer typically has 4 or 8 bytes. So, in the latter case, the memory address for item i would be base + 8 x i. 2). NumPy is a bit more flexible than suggested. It can organize ND data column-by-column if desired. It can even handle data that are not contiguous in memory but for example leave gaps, etc. Bonus reading: internal memory layout of an ndarray", "best_answers_score":0.8, "library_name":"numpy", "question_url":"https:\/\/stackoverflow.com\/questions\/48135736\/what-is-an-intuitive-explanation-of-np-unravel-index", "best_answers_votes":131, "question_length":182, "response_length":1494 }, { "question":"Why are 0d arrays in Numpy not considered scalar? Surely a 0d array is scalar, but Numpy does not seem to think so... am I missing something or am I just misunderstanding the concept? 
``` >>> foo = numpy.array(1.11111111111, numpy.float64) >>> numpy.ndim(foo) 0 >>> numpy.isscalar(foo) False >>> foo.item() 1.11111111111 ```", "response":"One should not think too hard about it. It's ultimately better for the mental health and longevity of the individual. The curious situation with Numpy scalar-types was born out of the fact that there is no graceful and consistent way to degrade the 1x1 matrix to scalar types. Even though mathematically they are the same thing, they are handled by very different code. If you've been doing any amount of scientific code, ultimately you'd want things like max(a) to work on matrices of all sizes, even scalars. Mathematically, this is a perfectly sensible thing to expect. However for programmers this means that whatever presents scalars in Numpy should have the .shape and .ndim attributes, so at least the ufuncs don't have to do explicit type checking on their input for the 21 possible scalar types in Numpy. On the other hand, they should also work with existing Python libraries that do explicit type-checks on scalar type. This is a dilemma, since a Numpy ndarray has to individually change its type when it has been reduced to a scalar, and there is no way of knowing whether that has occurred without it having to do checks on all access. Actually going that route would probably make it ridiculously slow to work with by scalar type standards. The Numpy developer's solution is to inherit from both ndarray and Python scalars for its own scalar types, so that all scalars also have .shape, .ndim, .T, etc etc. The 1x1 matrix will still be there, but its use will be discouraged if you know you'll be dealing with a scalar. While this should work fine in theory, occasionally you could still see some places where they missed with the paint roller, and the ugly innards are exposed for all to see: ``` >>> from numpy import * >>> a = array(1) >>> b = int_(1) >>> a.ndim 0 >>> b.ndim 0 >>> a[...] array(1) >>> a[()] 1 >>> b[...]
array(1) >>> b[()] 1 ``` There's really no reason why a[...] and a[()] should return different things, but they do. There are proposals in place to change this, but it looks like they forgot to finish the job for 1x1 arrays. A potentially bigger, and possibly non-resolvable issue, is the fact that Numpy scalars are immutable. Therefore \"spraying\" a scalar into an ndarray, mathematically the adjoint operation of collapsing an array into a scalar, is a PITA to implement. You can't actually grow a Numpy scalar, it cannot by definition be cast into an ndarray, even though newaxis mysteriously works on it: ``` >>> b[0,1,2,3] = 1 Traceback (most recent call last): File \"\", line 1, in TypeError: 'numpy.int32' object does not support item assignment >>> b[newaxis] array([1]) ``` In Matlab, growing the size of a scalar is a perfectly acceptable and brainless operation. In Numpy you have to stick a jarring a = array(a) everywhere you think you'd have the possibility of starting with a scalar and ending up with an array. I understand why Numpy has to be this way to play nice with Python, but that doesn't change the fact that many new switchers are deeply confused about this. Some have explicit memory of struggling with this behaviour and eventually persevering, while others who are too far gone are generally left with some deep shapeless mental scar that frequently haunts their most innocent dreams.
It's an ugly situation for all.", "best_answers_score":0.8, "library_name":"numpy", "question_url":"https:\/\/stackoverflow.com\/questions\/773030\/why-are-0d-arrays-in-numpy-not-considered-scalar", "best_answers_votes":187, "question_length":324, "response_length":3278 }, { "question":"TypeError: only length-1 arrays can be converted to Python scalars while plot showing I have such Python code: ``` import numpy as np import matplotlib.pyplot as plt def f(x): return np.int(x) x = np.arange(1, 15.1, 0.1) plt.plot(x, f(x)) plt.show() ``` And such error: ``` TypeError: only length-1 arrays can be converted to Python scalars ``` How can I fix it?", "response":"The error \"only length-1 arrays can be converted to Python scalars\" is raised when the function expects a single value but you pass an array instead. np.int was an alias for the built-in int, which is deprecated in numpy v1.20. The argument for int should be a scalar and it does not accept array-like objects. In general, if you want to apply a function to each element of the array, you can use np.vectorize: ``` import numpy as np import matplotlib.pyplot as plt def f(x): return int(x) f2 = np.vectorize(f) x = np.arange(1, 15.1, 0.1) plt.plot(x, f2(x)) plt.show() ``` You can skip the definition of f(x) and just pass the function int to the vectorize function: f2 = np.vectorize(int). Note that np.vectorize is just a convenience function and basically a for loop. That will be inefficient over large arrays. Whenever you have the possibility, use truly vectorized functions or methods (like astype(int) as @FFT suggests).", "best_answers_score":0.8, "library_name":"numpy", "question_url":"https:\/\/stackoverflow.com\/questions\/36680402\/typeerror-only-length-1-arrays-can-be-converted-to-python-scalars-while-plot-sh", "best_answers_votes":104, "question_length":362, "response_length":928 }, { "question":"How do you Unit Test Python DataFrames How do I unit test Python dataframes? 
I have functions that have an input and output as dataframes. Almost every function I have does this. Now if I want to unit test this what is the best method of doing it? It seems a bit of an effort to create a new dataframe (with values populated) for every function? Are there any materials you can refer me to? Should you write unit tests for these functions?", "response":"While Pandas' test functions are primarily used for internal testing, NumPy includes a very useful set of testing functions that are documented here: NumPy Test Support. These functions compare NumPy arrays, but you can get the array that underlies a Pandas DataFrame using the values property. You can define a simple DataFrame and compare what your function returns to what you expect. One technique you can use is to define one set of test data for a number of functions. That way, you can use Pytest Fixtures to define that DataFrame once, and use it in multiple tests. In terms of resources, I found this article on Testing with NumPy and Pandas to be very useful. I also did a short presentation about data analysis testing at PyCon Canada 2016: Automate Your Data Analysis Testing.", "best_answers_score":0.8, "library_name":"numpy", "question_url":"https:\/\/stackoverflow.com\/questions\/41852686\/how-do-you-unit-test-python-dataframes", "best_answers_votes":58, "question_length":439, "response_length":788 }, { "question":"How to perform element-wise Boolean operations on NumPy arrays [duplicate] This question already has answers here: Logical operators for Boolean indexing in Pandas (4 answers) Closed 6 years ago. The community reviewed whether to reopen this question 3 years ago and left it closed: Original close reason(s) were not resolved For example, I would like to create a mask that masks elements with value between 40 and 60: ``` foo = np.asanyarray(range(100)) mask = (foo < 40).__or__(foo > 60) ``` Which just looks ugly.
I can't write ``` (foo < 40) or (foo > 60) ``` because I end up with: ``` ValueError Traceback (most recent call last) ... ----> 1 (foo < 40) or (foo > 60) ValueError: The truth value of an array with more than one element is ambiguous. Use a.any() or a.all() ``` Is there a canonical way of doing element-wise Boolean operations on NumPy arrays with good looking code?", "response":"Try this: ``` mask = (foo < 40) | (foo > 60) ``` Note: the __or__ method in an object overloads the bitwise or operator (|), not the Boolean or operator.", "best_answers_score":0.8, "library_name":"numpy", "question_url":"https:\/\/stackoverflow.com\/questions\/8632033\/how-to-perform-element-wise-boolean-operations-on-numpy-arrays", "best_answers_votes":127, "question_length":838, "response_length":139 }, { "question":"What does \".T\" mean for a Numpy array? I saw this example in the SciPy documentation: ``` x, y = np.random.multivariate_normal(mean, cov, 5000).T ``` What does the final .T actually do here?", "response":"The .T accesses the attribute T of the object, which happens to be a NumPy array. The T attribute is the transpose of the array, see the documentation. Apparently you are creating random coordinates in the plane.
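As a minimal sketch of that unpacking pattern (the array here is made up for illustration, not taken from the question):

```python
import numpy as np

# three (x, y) points stacked as rows, shape (3, 2)
points = np.array([[1.0, 10.0], [2.0, 20.0], [3.0, 30.0]])

# .T flips the shape to (2, 3): row 0 holds all the x values, row 1 all the y values
x, y = points.T

print(x)  # [1. 2. 3.]
print(y)  # [10. 20. 30.]
```

The same two-row structure is what makes the x, y = ... unpacking in the question work.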
The output of multivariate_normal() might look like this: ``` >>> np.random.multivariate_normal([0, 0], [[1, 0], [0, 1]], 5) array([[ 0.59589335, 0.97741328], [-0.58597307, 0.56733234], [-0.69164572, 0.17840394], [-0.24992978, -2.57494471], [ 0.38896689, 0.82221377]]) ``` The transpose of this matrix is: ``` array([[ 0.59589335, -0.58597307, -0.69164572, -0.24992978, 0.38896689], [ 0.97741328, 0.56733234, 0.17840394, -2.57494471, 0.82221377]]) ``` which can be conveniently separated in x and y parts by sequence unpacking.", "best_answers_score":0.8, "library_name":"numpy", "question_url":"https:\/\/stackoverflow.com\/questions\/5741372\/what-does-t-mean-for-a-numpy-array", "best_answers_votes":98, "question_length":190, "response_length":740 }, { "question":"pandas equivalent of np.where np.where has the semantics of a vectorized if\/else (similar to Apache Spark's when\/otherwise DataFrame method). I know that I can use np.where on pandas.Series, but pandas often defines its own API to use instead of raw numpy functions, which is usually more convenient with pd.Series\/pd.DataFrame. Sure enough, I found pandas.DataFrame.where. However, at first glance, it has completely different semantics. I could not find a way to rewrite the most basic example of np.where using pandas where: ``` # df is pd.DataFrame # how to write this using df.where? df['C'] = np.where((df['A']<0) & (df['B']>0), df['A']+df['B'], df['A']\/df['B']) ``` Am I missing something obvious? Or is pandas' where intended for a completely different use case, despite same name as np.where?", "response":"Try: ``` (df['A'] + df['B']).where((df['A'] < 0) & (df['B'] > 0), df['A'] \/ df['B']) ``` The difference between the numpy where and DataFrame where is that the default values are supplied by the DataFrame that the where method is being called on (docs). I.e.
``` np.where(m, A, B) ``` is roughly equivalent to ``` A.where(m, B) ``` If you wanted a similar call signature using pandas, you could take advantage of the way method calls work in Python: ``` pd.DataFrame.where(cond=(df['A'] < 0) & (df['B'] > 0), self=df['A'] + df['B'], other=df['A'] \/ df['B']) ``` or without kwargs (note that the positional order of arguments is different from the numpy where argument order): ``` pd.DataFrame.where(df['A'] + df['B'], (df['A'] < 0) & (df['B'] > 0), df['A'] \/ df['B']) ```", "best_answers_score":0.8, "library_name":"numpy", "question_url":"https:\/\/stackoverflow.com\/questions\/38579532\/pandas-equivalent-of-np-where", "best_answers_votes":81, "question_length":786, "response_length":721 }, { "question":"SQL-like window functions in PANDAS: Row Numbering in Python Pandas Dataframe I come from a sql background and I use the following data processing step frequently: Partition the table of data by one or more fields For each partition, add a rownumber to each of its rows that ranks the row by one or more other fields, where the analyst specifies ascending or descending EX: ``` df = pd.DataFrame({'key1' : ['a','a','a','b','a'], 'data1' : [1,2,2,3,3], 'data2' : [1,10,2,3,30]}) df data1 data2 key1 0 1 1 a 1 2 10 a 2 2 2 a 3 3 3 b 4 3 30 a ``` I'm looking for how to do the PANDAS equivalent to this sql window function: ``` RN = ROW_NUMBER() OVER (PARTITION BY Key1 ORDER BY Data1 ASC, Data2 DESC) data1 data2 key1 RN 0 1 1 a 1 1 2 10 a 2 2 2 2 a 3 3 3 3 b 1 4 3 30 a 4 ``` I've tried the following which I've gotten to work where there are no 'partitions': ``` def row_number(frame,orderby_columns, orderby_direction,name): frame.sort_index(by = orderby_columns, ascending = orderby_direction, inplace = True) frame[name] = list(xrange(len(frame.index))) ``` I tried to extend this idea to work with partitions (groups in pandas) but the following didn't work: ``` df1 = df.groupby('key1').apply(lambda t: t.sort_index(by=['data1', 'data2'], ascending=[True, False], inplace =
True)).reset_index() def nf(x): x['rn'] = list(xrange(len(x.index))) df1['rn1'] = df1.groupby('key1').apply(nf) ``` But I just got a lot of NaNs when I do this. Ideally, there'd be a succinct way to replicate the window function capability of sql (i've figured out the window based aggregates...that's a one liner in pandas)...can someone share with me the most idiomatic way to number rows like this in PANDAS?", "response":"you can also use sort_values(), groupby() and finally cumcount() + 1: ``` df['RN'] = df.sort_values(['data1','data2'], ascending=[True,False]) \\ .groupby(['key1']) \\ .cumcount() + 1 print(df) ``` yields: ``` data1 data2 key1 RN 0 1 1 a 1 1 2 10 a 2 2 2 2 a 3 3 3 3 b 1 4 3 30 a 4 ``` PS tested with pandas 0.18", "best_answers_score":0.8, "library_name":"numpy", "question_url":"https:\/\/stackoverflow.com\/questions\/17775935\/sql-like-window-functions-in-pandas-row-numbering-in-python-pandas-dataframe", "best_answers_votes":123, "question_length":1690, "response_length":310 }, { "question":"In Python NumPy what is a dimension and axis? I am coding with Pythons NumPy module. If coordinates of a point in 3D space are described as [1, 2, 1], wouldn't that be three dimensions, three axis, a rank of three? Or if that is one dimension then shouldn't it be points (plural), not point? Here is the documentation: In Numpy dimensions are called axes. The number of axes is rank. For example, the coordinates of a point in 3D space [1, 2, 1] is an array of rank 1, because it has one axis. That axis has a length of 3. Source: Link", "response":"In numpy arrays, dimensionality refers to the number of axes needed to index it, not the dimensionality of any geometrical space. For example, you can describe the locations of points in 3D space with a 2D array: ```py array([[0, 0, 0], [1, 2, 3], [2, 2, 2], [9, 9, 9]]) ``` Which has shape of (4, 3) and dimension 2. 
But it can describe 3D space because the length of each row (axis 1) is three, so each row can be the x, y, and z component of a point's location. The length of axis 0 indicates the number of points (here, 4). However, that is more of an application to the math that the code is describing, not an attribute of the array itself. In mathematics, the dimension of a vector would be its length (e.g., x, y, and z components of a 3d vector), but in numpy, any \"vector\" is really just considered a 1d array of varying length. The array doesn't care what the dimension of the space (if any) being described is. You can play around with this, and see the number of dimensions and shape of an array like so: ```py In [262]: a = np.arange(9) In [263]: a Out[263]: array([0, 1, 2, 3, 4, 5, 6, 7, 8]) In [264]: a.ndim # number of dimensions Out[264]: 1 In [265]: a.shape Out[265]: (9,) In [266]: b = np.array([[0,0,0],[1,2,3],[2,2,2],[9,9,9]]) In [267]: b Out[267]: array([[0, 0, 0], [1, 2, 3], [2, 2, 2], [9, 9, 9]]) In [268]: b.ndim Out[268]: 2 In [269]: b.shape Out[269]: (4, 3) ``` Arrays can have many dimensions, but they become hard to visualize above two or three: ```py In [276]: c = np.random.rand(2,2,3,4) In [277]: c Out[277]: array([[[[ 0.33018579, 0.98074944, 0.25744133, 0.62154557], [ 0.70959511, 0.01784769, 0.01955593, 0.30062579], [ 0.83634557, 0.94636324, 0.88823617, 0.8997527 ]], [[ 0.4020885 , 0.94229555, 0.309992 , 0.7237458 ], [ 0.45036185, 0.51943908, 0.23432001, 0.05226692], [ 0.03170345, 0.91317231, 0.11720796, 0.31895275]]], [[[ 0.47801989, 0.02922993, 0.12118226, 0.94488471], [ 0.65439109, 0.77199972, 0.67024853, 0.27761443], [ 0.31602327, 0.42678546, 0.98878701, 0.46164756]], [[ 0.31585844, 0.80167337, 0.17401188, 0.61161196], [ 0.74908902, 0.45300247, 0.68023488, 0.79672751], [ 0.23597218, 0.78416727, 0.56036792, 0.55973686]]]]) In [278]: c.ndim Out[278]: 4 In [279]: c.shape Out[279]: (2, 2, 3, 4) ```", "best_answers_score":0.8, "library_name":"numpy", 
"question_url":"https:\/\/stackoverflow.com\/questions\/19389910\/in-python-numpy-what-is-a-dimension-and-axis", "best_answers_votes":109, "question_length":535, "response_length":2250 }, { "question":"Histogram values of a Pandas Series I have some values in a Python Pandas Series (type: pandas.core.series.Series) ``` In [1]: series = pd.Series([0.0,950.0,-70.0,812.0,0.0,-90.0,0.0,0.0,-90.0,0.0,-64.0,208.0,0.0,-90.0,0.0,-80.0,0.0,0.0,-80.0,-48.0,840.0,-100.0,190.0,130.0,-100.0,-100.0,0.0,-50.0,0.0,-100.0,-100.0,0.0,-90.0,0.0,-90.0,-90.0,63.0,-90.0,0.0,0.0,-90.0,-80.0,0.0,]) In [2]: series.min() Out[2]: -100.0 In [3]: series.max() Out[3]: 950.0 ``` I would like to get values of histogram (not necessary plotting histogram)... I just need to get the frequency for each interval. Let's say that my intervals are going from [-200; -150] to [950; 1000] so lower bounds are ``` lwb = range(-200,1000,50) ``` and upper bounds are ``` upb = range(-150,1050,50) ``` I don't know how to get frequency (the number of values that are inside each interval) now... I'm sure that defining lwb and upb is not necessary... but I don't know what function I should use to perform this! (after diving in Pandas doc, I think cut function can help me because it's a discretization problem... but I'm don't understand how to use it) After being able to do this, I will have a look at the way to display histogram (but that's an other problem)", "response":"You just need to use the histogram function of NumPy: ``` import numpy as np count, division = np.histogram(series) ``` where division is the automatically calculated border for your bins and count is the population inside each bin. If you need to fix a certain number of bins, you can use the argument bins and specify a number of bins, or give it directly the boundaries between each bin. 
``` count, division = np.histogram(series, bins = [-201,-149,949,1001]) ``` to plot the results you can use the matplotlib function hist, but if you are working in pandas each Series has its own handle to the hist function, and you can give it the chosen binning: ``` series.hist(bins=division) ``` Edit: As mentioned by another poster, Pandas is built on top of NumPy. Since OP is explicitly using Pandas, we can do away with the additional import by accessing NumPy through Pandas: ```py count, division = pd.np.histogram(series) ```", "best_answers_score":0.8, "library_name":"numpy", "question_url":"https:\/\/stackoverflow.com\/questions\/13129618\/histogram-values-of-a-pandas-series", "best_answers_votes":122, "question_length":1227, "response_length":926 }, { "question":"Is there special significance to 16331239353195370.0? Using import numpy as np I've noticed that ``` np.tan(np.pi\/2) ``` gives the number in the title and not np.inf ``` 16331239353195370.0 ``` I'm curious about this number. Is it related to some system machine precision parameter? Could I have calculated it from something? (I'm thinking along the lines of something similar to sys.float_info) EDIT: The same result is indeed reproducible in other environments such as Java, octace, matlab... The suggested dupe does not explain why, though.", "response":"pi isn't exactly representable as Python float (same as the platform C's double type). The closest representable approximation is used. Here's the exact approximation in use on my box (probably the same as on your box): ``` >>> import math >>> (math.pi \/ 2).as_integer_ratio() (884279719003555, 562949953421312) ``` To find the tangent of that ratio, I'm going to switch to wxMaxima now: ``` (%i1) fpprec: 32; (%o1) 32 (%i2) tan(bfloat(884279719003555) \/ 562949953421312); (%o2) 1.6331239353195369755967737041529b16 ``` So essentially identical to what you got. 
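The same value falls out of the standard library too, since math works on the same C double; a quick cross-check (this snippet is an addition, not part of the original answer):

```python
import math

# math.pi / 2 is the closest double to the true pi/2 and sits slightly below it,
# so the tangent is a huge finite number rather than infinity
t = math.tan(math.pi / 2)
print(t)  # typically 1.633123935319537e+16 with IEEE-754 doubles
```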
The binary approximation to pi\/2 used is a little bit less than the mathematical (\"infinite precision\") value of pi\/2. So you get a very large tangent instead of infinity. The computed tan() is appropriate for the actual input! For exactly the same kinds of reasons, e.g., ``` >>> math.sin(math.pi) 1.2246467991473532e-16 ``` doesn't return 0. The approximation math.pi is a little bit less than pi, and the displayed result is correct given that truth. OTHER WAYS OF SEEING math.pi There are several ways to see the exact approximation in use: ``` >>> import math >>> math.pi.as_integer_ratio() (884279719003555, 281474976710656) ``` math.pi is exactly equal to the mathematical (\"infinite precision\") value of that ratio. Or as an exact float in hex notation: ``` >>> math.pi.hex() '0x1.921fb54442d18p+1' ``` Or in a way most easily understood by just about everyone: ``` >>> import decimal >>> decimal.Decimal(math.pi) Decimal('3.141592653589793115997963468544185161590576171875') ``` While it may not be immediately obvious, every finite binary float is exactly representable as a finite decimal float (the reverse is not true; e.g. the decimal 0.1 is not exactly representable as a finite binary float), and the Decimal(some_float) constructor produces the exact equivalent. Here's the true value of pi followed by the exact decimal value of math.pi, and a caret on the third line points to the first digit where they differ: ``` true 3.14159265358979323846264338327950288419716939937510... math.pi 3.141592653589793115997963468544185161590576171875 ^ ``` math.pi is the same across \"almost all\" boxes now, because almost all boxes now use the same binary floating-point format (IEEE 754 double precision). 
You can use any of the ways above to confirm that on your box, or to find the precise approximation in use if your box is an exception.", "best_answers_score":0.8, "library_name":"numpy", "question_url":"https:\/\/stackoverflow.com\/questions\/38295501\/is-there-special-significance-to-16331239353195370-0", "best_answers_votes":122, "question_length":543, "response_length":2409 }, { "question":"Resampling a numpy array representing an image I am looking for how to resample a numpy array representing image data at a new size, preferably having a choice of the interpolation method (nearest, bilinear, etc.). I know there is ``` scipy.misc.imresize ``` which does exactly this by wrapping PIL's resize function. The only problem is that since it uses PIL, the numpy array has to conform to image formats, giving me a maximum of 4 \"color\" channels. I want to be able to resize arbitrary images, with any number of \"color\" channels. I was wondering if there is a simple way to do this in scipy\/numpy, or if I need to roll my own. I have two ideas for how to concoct one myself: a function that runs scipy.misc.imresize on every channel separately create my own using scipy.ndimage.interpolation.affine_transform The first one would probably be slow for large data, and the second one does not seem to offer any other interpolation method except splines.", "response":"Based on your description, you want scipy.ndimage.zoom. Bilinear interpolation would be order=1, nearest is order=0, and cubic is the default (order=3). zoom is specifically for regularly-gridded data that you want to resample to a new resolution. 
As a quick example: ``` import numpy as np import scipy.ndimage x = np.arange(9).reshape(3,3) print 'Original array:' print x print 'Resampled by a factor of 2 with nearest interpolation:' print scipy.ndimage.zoom(x, 2, order=0) print 'Resampled by a factor of 2 with bilinear interpolation:' print scipy.ndimage.zoom(x, 2, order=1) print 'Resampled by a factor of 2 with cubic interpolation:' print scipy.ndimage.zoom(x, 2, order=3) ``` And the result: ``` Original array: [[0 1 2] [3 4 5] [6 7 8]] Resampled by a factor of 2 with nearest interpolation: [[0 0 1 1 2 2] [0 0 1 1 2 2] [3 3 4 4 5 5] [3 3 4 4 5 5] [6 6 7 7 8 8] [6 6 7 7 8 8]] Resampled by a factor of 2 with bilinear interpolation: [[0 0 1 1 2 2] [1 2 2 2 3 3] [2 3 3 4 4 4] [4 4 4 5 5 6] [5 5 6 6 6 7] [6 6 7 7 8 8]] Resampled by a factor of 2 with cubic interpolation: [[0 0 1 1 2 2] [1 1 1 2 2 3] [2 2 3 3 4 4] [4 4 5 5 6 6] [5 6 6 7 7 7] [6 6 7 7 8 8]] ``` Edit: As Matt S. pointed out, there are a couple of caveats for zooming multi-band images. I'm copying the portion below almost verbatim from one of my earlier answers: Zooming also works for 3D (and nD) arrays. However, be aware that if you zoom by 2x, for example, you'll zoom along all axes. ``` data = np.arange(27).reshape(3,3,3) print 'Original:\\n', data print 'Zoomed by 2x gives an array of shape:', ndimage.zoom(data, 2).shape ``` This yields: ``` Original: [[[ 0 1 2] [ 3 4 5] [ 6 7 8]] [[ 9 10 11] [12 13 14] [15 16 17]] [[18 19 20] [21 22 23] [24 25 26]]] Zoomed by 2x gives an array of shape: (6, 6, 6) ``` In the case of multi-band images, you usually don't want to interpolate along the \"z\" axis, creating new bands. 
If you have something like a 3-band, RGB image that you'd like to zoom, you can do this by specifying a sequence of tuples as the zoom factor: ``` print 'Zoomed by 2x along the last two axes:' print ndimage.zoom(data, (1, 2, 2)) ``` This yields: ``` Zoomed by 2x along the last two axes: [[[ 0 0 1 1 2 2] [ 1 1 1 2 2 3] [ 2 2 3 3 4 4] [ 4 4 5 5 6 6] [ 5 6 6 7 7 7] [ 6 6 7 7 8 8]] [[ 9 9 10 10 11 11] [10 10 10 11 11 12] [11 11 12 12 13 13] [13 13 14 14 15 15] [14 15 15 16 16 16] [15 15 16 16 17 17]] [[18 18 19 19 20 20] [19 19 19 20 20 21] [20 20 21 21 22 22] [22 22 23 23 24 24] [23 24 24 25 25 25] [24 24 25 25 26 26]]] ```", "best_answers_score":0.8, "library_name":"numpy", "question_url":"https:\/\/stackoverflow.com\/questions\/13242382\/resampling-a-numpy-array-representing-an-image", "best_answers_votes":134, "question_length":957, "response_length":2534 }, { "question":"Difference between a -= b and a = a - b in Python I have recently applied this solution for averaging every N rows of matrix. Although the solution works in general I had problems when applied to a 7x1 array. I have noticed that the problem is when using the -= operator. To make a small example: ``` import numpy as np a = np.array([1,2,3]) b = np.copy(a) a[1:] -= a[:-1] b[1:] = b[1:] - b[:-1] print a print b ``` which outputs: ``` [1 1 2] [1 1 1] ``` So, in the case of an array a -= b produces a different result than a = a - b. I thought until now that these two ways are exactly the same. What is the difference? How come the method I am mentioning for summing every N rows in a matrix is working e.g. for a 7x4 matrix but not for a 7x1 array?", "response":"Note: using in-place operations on NumPy arrays that share memory is no longer a problem in version 1.13.0 onward (see details here). The two operations will produce the same result. This answer only applies to earlier versions of NumPy. Mutating arrays while they're being used in computations can lead to unexpected results!
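One way to spot the risky case up front is to test whether two views share memory; a small sketch (np.shares_memory is available in NumPy 1.11+, and the arrays here are illustrative):

```python
import numpy as np

a = np.arange(5)

# overlapping slices of the same underlying buffer
print(np.shares_memory(a[1:], a[:-1]))  # True

# a copy owns its own buffer, so in-place updates on it are always safe
b = a.copy()
print(np.shares_memory(a, b))  # False
```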
In the example in the question, subtraction with -= modifies the second element of a and then immediately uses that modified second element in the operation on the third element of a. Here is what happens with a[1:] -= a[:-1] step by step: a is the array with the data [1, 2, 3]. We have two views onto this data: a[1:] is [2, 3], and a[:-1] is [1, 2]. The in-place subtraction -= begins. The first element of a[:-1], 1, is subtracted from the first element of a[1:]. This has modified a to be [1, 1, 3]. Now we have that a[1:] is a view of the data [1, 3], and a[:-1] is a view of the data [1, 1] (the second element of array a has been changed). a[:-1] is now [1, 1] and NumPy must now subtract its second element which is 1 (not 2 anymore!) from the second element of a[1:]. This makes a[1:] a view of the values [1, 2]. a is now an array with the values [1, 1, 2]. b[1:] = b[1:] - b[:-1] does not have this problem because b[1:] - b[:-1] creates a new array first and then assigns the values in this array to b[1:]. It does not modify b itself during the subtraction, so the views b[1:] and b[:-1] do not change. The general advice is to avoid modifying one view inplace with another if they overlap. This includes the operators -=, *=, etc. and using the out parameter in universal functions (like np.subtract and np.multiply) to write back to one of the arrays.", "best_answers_score":0.8, "library_name":"numpy", "question_url":"https:\/\/stackoverflow.com\/questions\/35036126\/difference-between-a-b-and-a-a-b-in-python", "best_answers_votes":80, "question_length":750, "response_length":1693 }, { "question":"Does Numpy automatically detect and use GPU? I have a few basic questions about using Numpy with GPU (nvidia GTX 1080 Ti). I'm new to GPU, and would like to make sure I'm properly using the GPU to accelerate Numpy\/Python. I searched on the internet for a while, but didn't find a simple tutorial that addressed my questions. 
I'd appreciate it if someone can give me some pointers: 1) Does Numpy\/Python automatically detect the presence of GPU and utilize it to speed up matrix computation (e.g. numpy.multiply, numpy.linalg.inv, ... etc)? Or do I have code in a specific way to exploit the GPU for fast computation? 2) Can someone recommend a good tutorial\/introductory material on using Numpy\/Python with GPU (nvidia's)? Thanks a lot!", "response":"Does Numpy\/Python automatically detect the presence of GPU and utilize it to speed up matrix computation (e.g. numpy.multiply, numpy.linalg.inv, ... etc)? No. Or do I have code in a specific way to exploit the GPU for fast computation? Yes. Search for Numba, CuPy, Theano, PyTorch or PyCUDA for different paradigms for accelerating Python with GPUs.", "best_answers_score":0.8, "library_name":"numpy", "question_url":"https:\/\/stackoverflow.com\/questions\/49605231\/does-numpy-automatically-detect-and-use-gpu", "best_answers_votes":75, "question_length":735, "response_length":349 }, { "question":"Numpy: Creating a complex array from 2 real ones? I want to combine 2 parts of the same array to make a complex array: ``` Data[:,:,:,0] , Data[:,:,:,1] ``` These don't work: ``` x = np.complex(Data[:,:,:,0], Data[:,:,:,1]) x = complex(Data[:,:,:,0], Data[:,:,:,1]) ``` Am I missing something? Does numpy not like performing array functions on complex numbers? Here's the error: ``` TypeError: only length-1 arrays can be converted to Python scalars ```", "response":"This seems to do what you want: ``` numpy.apply_along_axis(lambda args: [complex(*args)], 3, Data) ``` Here is another solution: ``` # The ellipsis is equivalent here to \":,:,:\"... 
numpy.vectorize(complex)(Data[...,0], Data[...,1]) ``` And yet another simpler solution: ``` Data[...,0] + 1j * Data[...,1] ``` PS: If you want to save memory (no intermediate array): ``` result = 1j*Data[...,1]; result += Data[...,0] ``` devS' solution below is also fast.", "best_answers_score":0.8, "library_name":"numpy", "question_url":"https:\/\/stackoverflow.com\/questions\/2598734\/numpy-creating-a-complex-array-from-2-real-ones", "best_answers_votes":108, "question_length":453, "response_length":454 }, { "question":"Fitting a Normal distribution to 1D data I have a 1 dimensional array. I can compute the \"mean\" and \"standard deviation\" of this sample and plot the \"Normal distribution\" but I have a problem: I want to plot the data and Normal distribution in the same figure. I dont know how to plot both the data and the normal distribution. Any Idea about \"Gaussian probability density function in scipy.stats\"? ``` s = np.std(array) m = np.mean(array) plt.plot(norm.pdf(array,m,s)) ```", "response":"You can use matplotlib to plot the histogram and the PDF (as in the link in @MrE's answer). For fitting and for computing the PDF, you can use scipy.stats.norm, as follows. ``` import numpy as np from scipy.stats import norm import matplotlib.pyplot as plt # Generate some data for this demonstration. data = norm.rvs(10.0, 2.5, size=500) # Fit a normal distribution to the data: mu, std = norm.fit(data) # Plot the histogram. plt.hist(data, bins=25, density=True, alpha=0.6, color='g') # Plot the PDF. 
xmin, xmax = plt.xlim() x = np.linspace(xmin, xmax, 100) p = norm.pdf(x, mu, std) plt.plot(x, p, 'k', linewidth=2) title = \"Fit results: mu = %.2f, std = %.2f\" % (mu, std) plt.title(title) plt.show() ``` Here's the plot generated by the script:", "best_answers_score":0.8, "library_name":"numpy", "question_url":"https:\/\/stackoverflow.com\/questions\/20011122\/fitting-a-normal-distribution-to-1d-data", "best_answers_votes":191, "question_length":473, "response_length":747 }, { "question":"Concatenate Numpy arrays without copying In Numpy, I can concatenate two arrays end-to-end with np.append or np.concatenate: ``` >>> X = np.array([[1,2,3]]) >>> Y = np.array([[-1,-2,-3],[4,5,6]]) >>> Z = np.append(X, Y, axis=0) >>> Z array([[ 1, 2, 3], [-1, -2, -3], [ 4, 5, 6]]) ``` But these make copies of their input arrays: ``` >>> Z[0,:] = 0 >>> Z array([[ 0, 0, 0], [-1, -2, -3], [ 4, 5, 6]]) >>> X array([[1, 2, 3]]) ``` Is there a way to concatenate two arrays into a view, i.e. without copying? Would that require an np.ndarray subclass?", "response":"The memory belonging to a Numpy array must be a single contiguous block. If you allocated the arrays separately, they are scattered arbitrarily in memory, and there is no way to represent them together as one Numpy array that merely views both. If you know beforehand how many arrays you need, you can instead start with one big array that you allocate beforehand, and have each of the small arrays be a view into the big array (e.g. obtained by slicing).", "best_answers_score":0.8, "library_name":"numpy", "question_url":"https:\/\/stackoverflow.com\/questions\/7869095\/concatenate-numpy-arrays-without-copying", "best_answers_votes":104, "question_length":547, "response_length":406 }, { "question":"Importing PNG files into Numpy? I have about 200 grayscale PNG images stored within a directory like this. ``` 1.png 2.png 3.png ... ... 200.png ``` I want to import all the PNG images as NumPy arrays.
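The preallocation idea from the concatenate-without-copying answer above can be sketched concretely (shapes taken from that question; illustrative, not from the original post):

```python
import numpy as np

# Allocate the full block up front, then treat slices of it as the
# "small" arrays; writing through a slice is visible in the big array.
big = np.zeros((3, 3), dtype=int)
X = big[:1]        # view of row 0
Y = big[1:]        # view of rows 1-2

X[0, :] = [1, 2, 3]
Y[:] = [[-1, -2, -3], [4, 5, 6]]

print(big)
# [[ 1  2  3]
#  [-1 -2 -3]
#  [ 4  5  6]]
print(X.base is big and Y.base is big)  # True: both are views, no copies
```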
How can I do this?", "response":"According to the doc, scipy.misc.imread is deprecated starting SciPy 1.0.0, and will be removed in 1.2.0. Consider using imageio.v3.imread instead. Example: ``` import imageio.v3 as iio im = iio.imread('my_image.png') print(im.shape) ``` You can also use imageio to load from fancy sources: ``` im = iio.imread('http:\/\/upload.wikimedia.org\/wikipedia\/commons\/d\/de\/Wikipedia_Logo_1.0.png') ``` Edit: To load all of the *.png files in a specific folder, you could use the glob package: ``` import imageio.v3 as iio import glob for im_path in glob.glob(\"path\/to\/folder\/*.png\"): im = iio.imread(im_path) print(im.shape) # do whatever with the image here ```", "best_answers_score":0.8, "library_name":"numpy", "question_url":"https:\/\/stackoverflow.com\/questions\/31386096\/importing-png-files-into-numpy", "best_answers_votes":125, "question_length":220, "response_length":652 }, { "question":"Neural network always predicts the same class I'm trying to implement a neural network that classifies images into one of the two discrete categories. The problem is, however, that it currently always predicts 0 for any input and I'm not really sure why. 
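One pitfall worth noting for the PNG-loading answer above: glob returns names in lexicographic order, so 10.png sorts before 2.png. A stdlib-only sketch of numeric sorting (the filenames mirror the hypothetical 1.png…200.png layout from that question):

```python
import re

# Filenames as the PNG question describes them: 1.png ... 200.png
names = ["10.png", "2.png", "1.png", "100.png"]

# Lexicographic order is wrong for numeric names...
print(sorted(names))   # ['1.png', '10.png', '100.png', '2.png']

# ...so sort on the integer embedded in each name instead.
numeric = sorted(names, key=lambda n: int(re.match(r"(\d+)", n).group(1)))
print(numeric)         # ['1.png', '2.png', '10.png', '100.png']
```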
Here's my feature extraction method: ``` def extract(file): # Resize and subtract mean pixel img = cv2.resize(cv2.imread(file), (224, 224)).astype(np.float32) img[:, :, 0] -= 103.939 img[:, :, 1] -= 116.779 img[:, :, 2] -= 123.68 # Normalize features img = (img.flatten() - np.mean(img)) \/ np.std(img) return np.array([img]) ``` Here's my gradient descent routine: ``` def fit(x, y, t1, t2): \"\"\"Training routine\"\"\" ils = x.shape[1] if len(x.shape) > 1 else 1 labels = len(set(y)) if t1 is None or t2 is None: t1 = randweights(ils, 10) t2 = randweights(10, labels) params = np.concatenate([t1.reshape(-1), t2.reshape(-1)]) res = grad(params, ils, 10, labels, x, y) params -= 0.1 * res return unpack(params, ils, 10, labels) ``` Here are my forward and back(gradient) propagations: ``` def forward(x, theta1, theta2): \"\"\"Forward propagation\"\"\" m = x.shape[0] # Forward prop a1 = np.vstack((np.ones([1, m]), x.T)) z2 = np.dot(theta1, a1) a2 = np.vstack((np.ones([1, m]), sigmoid(z2))) a3 = sigmoid(np.dot(theta2, a2)) return (a1, a2, a3, z2, m) def grad(params, ils, hls, labels, x, Y, lmbda=0.01): \"\"\"Compute gradient for hypothesis Theta\"\"\" theta1, theta2 = unpack(params, ils, hls, labels) a1, a2, a3, z2, m = forward(x, theta1, theta2) d3 = a3 - Y.T print('Current error: {}'.format(np.mean(np.abs(d3)))) d2 = np.dot(theta2.T, d3) * (np.vstack([np.ones([1, m]), sigmoid_prime(z2)])) d3 = d3.T d2 = d2[1:, :].T t1_grad = np.dot(d2.T, a1.T) t2_grad = np.dot(d3.T, a2.T) theta1[0] = np.zeros([1, theta1.shape[1]]) theta2[0] = np.zeros([1, theta2.shape[1]]) t1_grad = t1_grad + (lmbda \/ m) * theta1 t2_grad = t2_grad + (lmbda \/ m) * theta2 return np.concatenate([t1_grad.reshape(-1), t2_grad.reshape(-1)]) ``` And here's my prediction function: ``` def predict(theta1, theta2, x): \"\"\"Predict output using learned weights\"\"\" m = x.shape[0] h1 = sigmoid(np.hstack((np.ones([m, 1]), x)).dot(theta1.T)) h2 = sigmoid(np.hstack((np.ones([m, 1]), h1)).dot(theta2.T)) return 
h2.argmax(axis=1) ``` I can see that the error rate is gradually decreasing with each iteration, generally converging somewhere around 1.26e-05. What I've tried so far: PCA Different datasets (Iris from sklearn and handwritten numbers from Coursera ML course, achieving about 95% accuracy on both). However, both of those were processed in a batch, so I can assume that my general implementation is correct, but there is something wrong with either how I extract features, or how I train the classifier. Tried sklearn's SGDClassifier and it didn't perform much better, giving me a ~50% accuracy. So something wrong with the features, then? Edit: An average output of h2 looks like the following: ``` [0.5004899 0.45264441] [0.50048522 0.47439413] [0.50049019 0.46557124] [0.50049261 0.45297816] ``` So, very similar sigmoid outputs for all validation examples.", "response":"My network always predicts the same class. What is the problem? I had this a couple of times. Although I'm currently too lazy to go through your code, I think I can give some general hints which might also help others who have the same symptom but probably different underlying problems. Debugging Neural Networks Fitting one item datasets For every class i the network should be able to predict, try the following: Create a dataset of only one data point of class i. Fit the network to this dataset. Does the network learn to predict \"class i\"? If this doesn't work, there are four possible error sources: Buggy training algorithm: Try a smaller model, print a lot of values which are calculated in between and see if those match your expectation. Dividing by 0: Add a small number to the denominator Logarithm of 0 \/ negative number: Like dividing by 0 Data: It is possible that your data has the wrong type. For example, it might be necessary that your data is of type float32 but actually is an integer. Model: It is also possible that you just created a model which cannot possibly predict what you want.
This should be revealed when you try simpler models. Initialization \/ Optimization: Depending on the model, your initialization and your optimization algorithm might play a crucial role. For beginners who use standard stochastic gradient descent, I would say it is mainly important to initialize the weights randomly (each weight a different value). - see also: this question \/ answer Learning Curve See sklearn for details. The idea is to start with a tiny training dataset (probably only one item). Then the model should be able to fit the data perfectly. If this works, you make a slightly larger dataset. Your training error should slightly go up at some point. This reveals your model's capacity to model the data. Data analysis Check how often the other class(es) appear. If one class dominates the others (e.g. one class is 99.9% of the data), this is a problem. Look for \"outlier detection\" techniques. More Learning rate: If your network doesn't improve and gets only slightly better than random chance, try reducing the learning rate. For computer vision, a learning rate of 0.001 is often used \/ known to work. This is also relevant if you use Adam as an optimizer. Preprocessing: Make sure you use the same preprocessing for training and testing. You might see differences in the confusion matrix (see this question) Common Mistakes This is inspired by reddit: You forgot to apply preprocessing Dying ReLU Too small \/ too big learning rate Wrong activation function in final layer: Your targets don't sum to one? -> Don't use softmax Single elements of your targets are negative -> Don't use Softmax, ReLU, Sigmoid. tanh might be an option Too deep network: You fail to train. Try a simpler neural network first.
Vastly unbalanced data: You might want to look into imbalanced-learn", "best_answers_score":0.8, "library_name":"numpy", "question_url":"https:\/\/stackoverflow.com\/questions\/41488279\/neural-network-always-predicts-the-same-class", "best_answers_votes":148, "question_length":3079, "response_length":2900 }, { "question":"What does x[x < 2] = 0 mean in Python? ``` >>> x = [1,2,3,4,5] >>> x[x<2] 1 >>> x[x<3] 1 >>> x[x>2] 2 >>> x[x<2] = 0 >>> x [0, 2, 3, 4, 5] ```", "response":"This only makes sense with NumPy arrays. The behavior with lists is useless, and specific to Python 2 (not Python 3). You may want to double-check if the original object was indeed a NumPy array (see further below) and not a list. But in your code here, x is a simple list. Since ``` x < 2 ``` is False, i.e. 0, x[x < 2] is x[0], which is why x[x < 2] = 0 replaces the first element; likewise ``` x > 2 ``` is True, i.e. 1, so x[x > 2] is x[True] or x[1]. So, x[1] gets changed. Why does this happen? The rules for comparison are: When you order two strings or two numeric types the ordering is done in the expected way (lexicographic ordering for string, numeric ordering for integers). When you order a numeric and a non-numeric type, the numeric type comes first.
When you order two incompatible types where neither is numeric, they are ordered by the alphabetical order of their typenames: So, we have the following order: numeric < list < string < tuple. With a NumPy array instead of a list, the comparison is performed elementwise and yields a boolean mask, so boolean indexing behaves as expected: ``` >>> x = np.array([1., -1., -2., 3]) >>> x < 0 array([False, True, True, False], dtype=bool) >>> x[x < 0] += 20 >>> x array([ 1., 19., 18., 3.]) # Only elements < 0 are affected ```", "best_answers_score":0.8, "library_name":"numpy", "question_url":"https:\/\/stackoverflow.com\/questions\/36603042\/what-does-xx-2-0-mean-in-python", "best_answers_votes":122, "question_length":89, "response_length":902 }, { "question":"Concatenate two NumPy arrays vertically I tried the following: ``` >>> a = np.array([1,2,3]) >>> b = np.array([4,5,6]) >>> np.concatenate((a,b), axis=0) array([1, 2, 3, 4, 5, 6]) >>> np.concatenate((a,b), axis=1) array([1, 2, 3, 4, 5, 6]) ``` However, I'd expect at least that one result looks like this ``` array([[1, 2, 3], [4, 5, 6]]) ``` Why is it not concatenated vertically?", "response":"Because both a and b have only one axis, as their shape is (3,), and the axis parameter specifically refers to the axis of the elements to concatenate. This example should clarify what concatenate is doing with axis.
Take two vectors with two axes, with shape (2,3): ``` a = np.array([[1,5,9], [2,6,10]]) b = np.array([[3,7,11], [4,8,12]]) ``` concatenates along the 1st axis (rows of the 1st, then rows of the 2nd): ``` np.concatenate((a,b), axis=0) array([[ 1, 5, 9], [ 2, 6, 10], [ 3, 7, 11], [ 4, 8, 12]]) ``` concatenates along the 2nd axis (columns of the 1st, then columns of the 2nd): ``` np.concatenate((a, b), axis=1) array([[ 1, 5, 9, 3, 7, 11], [ 2, 6, 10, 4, 8, 12]]) ``` to obtain the output you presented, you can use vstack ``` a = np.array([1,2,3]) b = np.array([4,5,6]) np.vstack((a, b)) array([[1, 2, 3], [4, 5, 6]]) ``` You can still do it with concatenate, but you need to reshape them first: ``` np.concatenate((a.reshape(1,3), b.reshape(1,3))) array([[1, 2, 3], [4, 5, 6]]) ``` Finally, as proposed in the comments, one way to reshape them is to use newaxis: ``` np.concatenate((a[np.newaxis,:], b[np.newaxis,:])) ```", "best_answers_score":0.8, "library_name":"numpy", "question_url":"https:\/\/stackoverflow.com\/questions\/21887754\/concatenate-two-numpy-arrays-vertically", "best_answers_votes":115, "question_length":380, "response_length":1139 }, { "question":"How to make a multidimension numpy array with a varying row size? I would like to create a two dimensional numpy array of arrays that has a different number of elements on each row. Trying ``` cells = numpy.array([[0,1,2,3], [2,3,4]]) ``` gives an error ``` ValueError: setting an array element with a sequence. ```", "response":"We are now almost 7 years after the question was asked, and your code ``` cells = numpy.array([[0,1,2,3], [2,3,4]]) ``` executed in numpy 1.12.0, python 3.5, doesn't produce any error and cells contains: ``` array([[0, 1, 2, 3], [2, 3, 4]], dtype=object) ``` You access your cells elements as cells[0][2] # (=2) . An alternative to tom10's solution if you want to build your list of numpy arrays on the fly as new elements (i.e.
arrays) become available is to use append: ``` d = [] # initialize an empty list a = np.arange(3) # array([0, 1, 2]) d.append(a) # [array([0, 1, 2])] b = np.arange(3,-1,-1) #array([3, 2, 1, 0]) d.append(b) #[array([0, 1, 2]), array([3, 2, 1, 0])] ```", "best_answers_score":0.8, "library_name":"numpy", "question_url":"https:\/\/stackoverflow.com\/questions\/3386259\/how-to-make-a-multidimension-numpy-array-with-a-varying-row-size", "best_answers_votes":65, "question_length":315, "response_length":679 }, { "question":"Formatting floats in a numpy array [duplicate] This question already has answers here: Pretty-print a NumPy array without scientific notation and with given precision (14 answers) Closed 5 years ago. If I have a numpy array like this: ``` [2.15295647e+01, 8.12531501e+00, 3.97113829e+00, 1.00777250e+01] ``` how can I move the decimal point and format the numbers so I end up with a numpy array like this: ``` [21.53, 8.13, 3.97, 10.08] ``` np.around(a, decimals=2) only gives me [2.15300000e+01, 8.13000000e+00, 3.97000000e+00, 1.00800000e+01] Which I don't want and I haven't found another way to do it.", "response":"In order to make numpy display float arrays in an arbitrary format, you can define a custom function that takes a float value as its input and returns a formatted string: ``` In [1]: float_formatter = \"{:.2f}\".format ``` The f here means fixed-point format (not 'scientific'), and the .2 means two decimal places (you can read more about string formatting here). 
Let's test it out with a float value: ``` In [2]: float_formatter(1.234567E3) Out[2]: '1234.57' ``` To make numpy print all float arrays this way, you can pass the formatter= argument to np.set_printoptions: ``` In [3]: np.set_printoptions(formatter={'float_kind':float_formatter}) ``` Now numpy will print all float arrays this way: ``` In [4]: np.random.randn(5) * 10 Out[4]: array([5.25, 3.91, 0.04, -1.53, 6.68] ``` Note that this only affects numpy arrays, not scalars: ``` In [5]: np.pi Out[5]: 3.141592653589793 ``` It also won't affect non-floats, complex floats etc - you will need to define separate formatters for other scalar types. You should also be aware that this only affects how numpy displays float values - the actual values that will be used in computations will retain their original precision. For example: ``` In [6]: a = np.array([1E-9]) In [7]: a Out[7]: array([0.00]) In [8]: a == 0 Out[8]: array([False], dtype=bool) ``` numpy prints a as if it were equal to 0, but it is not - it still equals 1E-9. If you actually want to round the values in your array in a way that affects how they will be used in calculations, you should use np.round, as others have already pointed out.", "best_answers_score":0.8, "library_name":"numpy", "question_url":"https:\/\/stackoverflow.com\/questions\/21008858\/formatting-floats-in-a-numpy-array", "best_answers_votes":112, "question_length":605, "response_length":1567 }, { "question":"Ignore divide by 0 warning in NumPy I have a function for statistic issues: ``` import numpy as np from scipy.special import gamma as Gamma def Foo(xdata): ... return x1 * ( ( #R is a numpy vector ( ((R - x2)\/beta) ** (x3 -1) ) * ( np.exp( - ((R - x2) \/ x4) ) ) \/ ( x4 * Gamma(x3)) ).real ) ``` Sometimes I get from the shell the following warning: ``` RuntimeWarning: divide by zero encountered in... ``` I use the numpy isinf function to correct the results of the function in other files, so I do not need this warning. 
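If the two-decimal display from the float-formatting answer above is only wanted temporarily, np.printoptions (available since NumPy 1.15) scopes the change instead of setting it globally. A sketch:

```python
import numpy as np

a = np.array([21.5295647, 8.12531501, 3.97113829, 10.0777250])

# Scoped formatting: print options are restored when the block exits,
# and the underlying values keep their full precision.
with np.printoptions(formatter={'float_kind': "{:.2f}".format}):
    s = np.array2string(a)

print(s)  # [21.53 8.13 3.97 10.08]
```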
Is there a way to ignore the message? In other words, I do not want the shell to print this message. I do not want to disable all python warnings, just this one.", "response":"You can disable the warning with numpy.seterr. Put this before the possible division by zero: ``` np.seterr(divide='ignore') ``` That'll disable zero division warnings globally. If you just want to disable them for a little bit, you can use numpy.errstate in a with clause: ``` with np.errstate(divide='ignore'): # some code here ``` For a zero by zero division (undetermined, results in a NaN), the error behaviour has changed with numpy version 1.8.0: this is now considered \"invalid\", while previously it was \"divide\". Thus, if there is a chance you your numerator could be zero as well, use ``` np.seterr(divide='ignore', invalid='ignore') ``` or ``` with np.errstate(divide='ignore', invalid='ignore'): # some code here ``` See the \"Compatibility\" section in the release notes, last paragraph before the \"New Features\" section: Comparing NaN floating point numbers now raises the invalid runtime warning. If a NaN is expected the warning can be ignored using np.errstate.", "best_answers_score":0.8, "library_name":"numpy", "question_url":"https:\/\/stackoverflow.com\/questions\/29950557\/ignore-divide-by-0-warning-in-numpy", "best_answers_votes":209, "question_length":684, "response_length":976 }, { "question":"Inverse of a matrix using numpy I'd like to use numpy to calculate the inverse. But I'm getting an error: ``` 'numpy.ndarry' object has no attribute I ``` To calculate inverse of a matrix in numpy, say matrix M, it should be simply: print M.I Here's the code: ``` x = numpy.empty((3,3), dtype=int) for comb in combinations_with_replacement(range(10), 9): x.flat[:] = comb print x.I ``` I'm presuming, this error occurs because x is now flat, thus 'I' command is not compatible. Is there a work around for this? 
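A small sketch of the scoped errstate approach above, combined with the np.isinf correction that the divide-by-zero question mentions (illustrative values):

```python
import numpy as np

x = np.array([0.0, 1.0])

# Inside errstate, the divide-by-zero RuntimeWarning is suppressed;
# the result is still the usual IEEE value (inf here), which can then
# be detected and corrected with np.isinf, as the question intends.
with np.errstate(divide='ignore'):
    r = 1.0 / x

r[np.isinf(r)] = 0.0   # replace the inf produced by 1/0
print(r.tolist())      # [0.0, 1.0]
```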
My goal is to print the INVERSE MATRIX of every possible numerical matrix combination.", "response":"The I attribute only exists on matrix objects, not ndarrays. You can use numpy.linalg.inv to invert arrays: ``` inverse = numpy.linalg.inv(x) ``` Note that the way you're generating matrices, not all of them will be invertible. You will either need to change the way you're generating matrices, or skip the ones that aren't invertible. ``` try: inverse = numpy.linalg.inv(x) except numpy.linalg.LinAlgError: # Not invertible. Skip this one. pass else: # continue with what you were doing ``` Also, if you want to go through all 3x3 matrices with elements drawn from [0, 10), you want the following: ``` for comb in itertools.product(range(10), repeat=9): ``` rather than combinations_with_replacement, or you'll skip matrices like ``` numpy.array([[0, 1, 0], [0, 0, 0], [0, 0, 0]]) ```", "best_answers_score":0.8, "library_name":"numpy", "question_url":"https:\/\/stackoverflow.com\/questions\/21638895\/inverse-of-a-matrix-using-numpy", "best_answers_votes":114, "question_length":597, "response_length":785 }, { "question":"How to turn a boolean array into index array in numpy Is there an efficient Numpy mechanism to retrieve the integer indexes of locations in an array based on a condition is true as opposed to the Boolean mask array? For example: ``` x=np.array([range(100,1,-1)]) #generate a mask to find all values that are a power of 2 mask=x&(x-1)==0 #This will tell me those values print x[mask] ``` In this case, I'd like to know the indexes i of mask where mask[i]==True. 
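A quick sanity check to pair with the numpy.linalg.inv answer above (values are illustrative):

```python
import numpy as np

x = np.array([[4., 7.], [2., 6.]])
inverse = np.linalg.inv(x)

# A numerically sane inverse satisfies x @ inv(x) ~= I.
ok = bool(np.allclose(x @ inverse, np.eye(2)))
print(ok)  # True

# Singular matrices raise instead of returning garbage,
# which is why the skip-on-LinAlgError pattern above is needed.
try:
    np.linalg.inv(np.array([[1., 2.], [2., 4.]]))
except np.linalg.LinAlgError:
    print("singular")  # this branch runs
```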
Is it possible to generate these without looping?", "response":"Another option: ``` In [13]: numpy.where(mask) Out[13]: (array([36, 68, 84, 92, 96, 98]),) ``` which is the same thing as numpy.where(mask==True).", "best_answers_score":0.8, "library_name":"numpy", "question_url":"https:\/\/stackoverflow.com\/questions\/8218032\/how-to-turn-a-boolean-array-into-index-array-in-numpy", "best_answers_votes":111, "question_length":510, "response_length":146 }, { "question":"Mesh grid functions in Python (meshgrid mgrid ogrid ndgrid) I'm looking for a clear comparison of meshgrid-like functions. Unfortunately I don't find it! Numpy http:\/\/docs.scipy.org\/doc\/numpy\/reference\/ provides mgrid ogrid meshgrid Scitools http:\/\/hplgit.github.io\/scitools\/doc\/api\/html\/index.html provides ndgrid boxgrid Ideally a table summarizing all this would be perfect!", "response":"numpy.meshgrid is modelled after Matlab's meshgrid command. It is used to vectorise functions of two variables, so that you can write ``` x = numpy.array([1, 2, 3]) y = numpy.array([10, 20, 30]) XX, YY = numpy.meshgrid(x, y) ZZ = XX + YY ZZ => array([[11, 12, 13], [21, 22, 23], [31, 32, 33]]) ``` So ZZ contains all the combinations of x and y put into the function. When you think about it, meshgrid is a bit superfluous for numpy arrays, as they broadcast. This means you can do ``` XX, YY = numpy.atleast_2d(x, y) YY = YY.T # transpose to allow broadcasting ZZ = XX + YY ``` and get the same result. mgrid and ogrid are helper classes which use index notation so that you can create XX and YY in the previous examples directly, without having to use something like linspace. The order in which the output are generated is reversed. 
``` YY, XX = numpy.mgrid[10:40:10, 1:4] ZZ = XX + YY # These are equivalent to the output of meshgrid YY, XX = numpy.ogrid[10:40:10, 1:4] ZZ = XX + YY # These are equivalent to the atleast_2d example ``` I am not familiar with the scitools stuff, but ndgrid seems equivalent to meshgrid, while BoxGrid is actually a whole class to help with this kind of generation.", "best_answers_score":0.8, "library_name":"numpy", "question_url":"https:\/\/stackoverflow.com\/questions\/12402045\/mesh-grid-functions-in-python-meshgrid-mgrid-ogrid-ndgrid", "best_answers_votes":93, "question_length":377, "response_length":1201 }, { "question":"How to generate audio from a numpy array? I want to create \"heart rate monitor\" effect from a 2D array in numpy and want the tone to reflect the values in the array.", "response":"You can use the write function from scipy.io.wavfile to create a wav file which you can then play however you wish. Note that the array must be integers, so if you have floats, you might want to scale them appropriately: ``` import numpy as np from scipy.io.wavfile import write rate = 44100 data = np.random.uniform(-1, 1, rate) # 1 second worth of random samples between -1 and 1 scaled = np.int16(data \/ np.max(np.abs(data)) * 32767) write('test.wav', rate, scaled) ``` If you want Python to actually play audio, then this page provides an overview of some of the packages\/modules.", "best_answers_score":0.8, "library_name":"numpy", "question_url":"https:\/\/stackoverflow.com\/questions\/10357992\/how-to-generate-audio-from-a-numpy-array", "best_answers_votes":112, "question_length":165, "response_length":584 }, { "question":"How can numpy be so much faster than my Fortran routine? I get a 512^3 array representing a Temperature distribution from a simulation (written in Fortran). The array is stored in a binary file that's about 1\/2G in size. 
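The meshgrid / broadcasting / mgrid equivalences claimed in the mesh-grid answer above can be checked directly. A sketch:

```python
import numpy as np

x = np.array([1, 2, 3])
y = np.array([10, 20, 30])

# meshgrid result...
XX, YY = np.meshgrid(x, y)
ZZ_mesh = XX + YY

# ...matches plain broadcasting with reshaped inputs...
ZZ_bcast = x[np.newaxis, :] + y[:, np.newaxis]

# ...and mgrid with the axes given in reversed order.
YYg, XXg = np.mgrid[10:40:10, 1:4]
ZZ_mgrid = XXg + YYg

print(np.array_equal(ZZ_mesh, ZZ_bcast) and np.array_equal(ZZ_mesh, ZZ_mgrid))  # True
```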
I need to know the minimum, maximum and mean of this array and as I will soon need to understand Fortran code anyway, I decided to give it a go and came up with the following very easy routine. ``` integer gridsize,unit,j real mini,maxi double precision mean gridsize=512 unit=40 open(unit=unit,file='T.out',status='old',access='stream',& form='unformatted',action='read') read(unit=unit) tmp mini=tmp maxi=tmp mean=tmp do j=2,gridsize**3 read(unit=unit) tmp if(tmp>maxi)then maxi=tmp elseif(tmp 1 foo() in foo() 1 def foo(): 2 print('one') ----> 3 x = np.array([[1],[1,2]]) 4 return x 5 VisibleDeprecationWarning: Creating an ndarray from ragged nested sequences (which is a list-or-tuple of lists-or-tuples-or ndarrays with different lengths or shapes) is deprecated. If you meant to do this, you must specify 'dtype=object' when creating the ndarray ``` The error gives a traceback telling me where the warning was raised. There may be ways of refining the warning filter to catch just this one, and not others of the same category. I haven't used this mechanism much. Read np.warnings.filterwarnings docs for more details.", "best_answers_score":0.8, "library_name":"numpy", "question_url":"https:\/\/stackoverflow.com\/questions\/63097829\/debugging-numpy-visibledeprecationwarning-ndarray-from-ragged-nested-sequences", "best_answers_votes":88, "question_length":609, "response_length":1712 }, { "question":"Iterating over arbitrary dimension of numpy.array Is there function to get an iterator over an arbitrary dimension of a numpy array? Iterating over the first dimension is easy... ``` In [63]: c = numpy.arange(24).reshape(2,3,4) In [64]: for r in c : ....: print r ....: [[ 0 1 2 3] [ 4 5 6 7] [ 8 9 10 11]] [[12 13 14 15] [16 17 18 19] [20 21 22 23]] ``` But iterating over other dimensions is harder. 
For example, the last dimension: ``` In [73]: for r in c.swapaxes(2,0).swapaxes(1,2) : ....: print r ....: [[ 0 4 8] [12 16 20]] [[ 1 5 9] [13 17 21]] [[ 2 6 10] [14 18 22]] [[ 3 7 11] [15 19 23]] ``` I'm making a generator to do this myself, but I'm surprised there isn't a function named something like numpy.ndarray.iterdim(axis=0) to do this automatically.", "response":"What you propose is quite fast, but the legibility can be improved with the clearer forms: ``` for i in range(c.shape[-1]): print c[:,:,i] ``` or, better (faster, more general and more explicit): ``` for i in range(c.shape[-1]): print c[...,i] ``` However, the first approach above appears to be about twice as slow as the swapaxes() approach: ``` python -m timeit -s 'import numpy; c = numpy.arange(24).reshape(2,3,4)' \\ 'for r in c.swapaxes(2,0).swapaxes(1,2): u = r' 100000 loops, best of 3: 3.69 usec per loop python -m timeit -s 'import numpy; c = numpy.arange(24).reshape(2,3,4)' \\ 'for i in range(c.shape[-1]): u = c[:,:,i]' 100000 loops, best of 3: 6.08 usec per loop python -m timeit -s 'import numpy; c = numpy.arange(24).reshape(2,3,4)' \\ 'for r in numpy.rollaxis(c, 2): u = r' 100000 loops, best of 3: 6.46 usec per loop ``` I would guess that this is because swapaxes() does not copy any data, and because the handling of c[:,:,i] might be done through general code (that handles the case where : is replaced by a more complicated slice). 
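A further alternative, not in the original answer: np.moveaxis (NumPy ≥ 1.11) returns a view with the chosen axis first, so plain iteration yields the same slices as the c[..., i] loop:

```python
import numpy as np

c = np.arange(24).reshape(2, 3, 4)

# moveaxis gives a view with the last axis moved to the front, so
# plain iteration walks over it; no data is copied.
slices = list(np.moveaxis(c, -1, 0))

# Same slices as indexing the last axis explicitly.
same = all(np.array_equal(s, c[..., i]) for i, s in enumerate(slices))
print(same)  # True
```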
Note however that the more explicit second solution c[...,i] is both quite legible and quite fast: ``` python -m timeit -s 'import numpy; c = numpy.arange(24).reshape(2,3,4)' \\ 'for i in range(c.shape[-1]): u = c[...,i]' 100000 loops, best of 3: 4.74 usec per loop ```", "best_answers_score":0.8, "library_name":"numpy", "question_url":"https:\/\/stackoverflow.com\/questions\/1589706\/iterating-over-arbitrary-dimension-of-numpy-array", "best_answers_votes":75, "question_length":762, "response_length":1320 }, { "question":"RuntimeError: The current NumPy installation fails to pass a sanity check due to a bug in the windows runtime [duplicate] This question already has answers here: How do you fix \"runtimeError: package fails to pass a sanity check\" for numpy and pandas? (9 answers) Closed 4 years ago. I am using Python 3.9 on Windows 10 version 2004 x64. PowerShell as Administrator. Python version: ```none Python 3.9.0 (tags\/v3.9.0:9cf6752, Oct 5 2020, 15:34:40) [MSC v.1927 64 bit (AMD64)] on win32 ``` Install matplotlib error. ```none pip install virtualenv virtualenv foo cd .\\foo .\\Scripts\\active pip install numpy pip install matplotlib ``` Error ```none Windows PowerShell Copyright (C) Microsoft Corporation. All rights reserved. Try the new cross-platform PowerShell https:\/\/aka.ms\/pscore6 PS C:\\WINDOWS\\system32> Set-ExecutionPolicy Unrestricted -Force PS C:\\WINDOWS\\system32> cd \/d C:\\Windows\\System32\\cmd.exe Set-Location : A positional parameter cannot be found that accepts argument 'C:\\Windows\\System32\\cmd.exe'. At line:1 char:1 + cd \/d C:\\Windows\\System32\\cmd.exe + ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ + CategoryInfo : InvalidArgument: (:) [Set-Location], ParameterBindingException + FullyQualifiedErrorId : PositionalParameterNotFound,Microsoft.PowerShell.Commands.SetLocationCommand PS C:\\WINDOWS\\system32> cd C:\\Windows\\System32\\cmd.exe cd : Cannot find path 'C:\\Windows\\System32\\cmd.exe' because it does not exist. 
At line:1 char:1 + cd C:\\Windows\\System32\\cmd.exe + ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ + CategoryInfo : ObjectNotFound: (C:\\Windows\\System32\\cmd.exe:String) [Set-Location], ItemNotFoundExcepti on + FullyQualifiedErrorId : PathNotFound,Microsoft.PowerShell.Commands.SetLocationCommand PS C:\\WINDOWS\\system32> cd D:\\ PS D:\\> cd .\\Users\\donhuvy\\ PS D:\\Users\\donhuvy> ls Directory: D:\\Users\\donhuvy Mode LastWriteTime Length Name ---- ------------- ------ ---- d----- 10\/26\/2020 3:35 PM AppData d----- 11\/7\/2020 9:33 AM PycharmProjects PS D:\\Users\\donhuvy> cd .\\PycharmProjects\\pythonProject\\ PS D:\\Users\\donhuvy\\PycharmProjects\\pythonProject> virtualenv foo virtualenv : The term 'virtualenv' is not recognized as the name of a cmdlet, function, script file, or operable program. Check the spelling of the name, or if a path was included, verify that the path is correct and try again. At line:1 char:1 + virtualenv foo + ~~~~~~~~~~ + CategoryInfo : ObjectNotFound: (virtualenv:String) [], CommandNotFoundException + FullyQualifiedErrorId : CommandNotFoundException PS D:\\Users\\donhuvy\\PycharmProjects\\pythonProject> pip install virtualenv Collecting virtualenv Downloading virtualenv-20.1.0-py2.py3-none-any.whl (4.9 MB) |\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588| 4.9 MB 1.1 MB\/s Collecting distlib=0.3.1 Downloading distlib-0.3.1-py2.py3-none-any.whl (335 kB) |\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588| 335 kB 6.4 MB\/s Requirement already satisfied: six=1.9.0 in c:\\users\\donhuvy\\appdata\\roaming\\python\\python39\\site-packages (from virtualenv) (1.15.0) Collecting filelock=3.0.0 Downloading filelock-3.0.12-py3-none-any.whl (7.6 kB) Collecting appdirs=1.4.3 
Downloading appdirs-1.4.4-py2.py3-none-any.whl (9.6 kB) Installing collected packages: distlib, filelock, appdirs, virtualenv Successfully installed appdirs-1.4.4 distlib-0.3.1 filelock-3.0.12 virtualenv-20.1.0 PS D:\\Users\\donhuvy\\PycharmProjects\\pythonProject> virtualenv foo created virtual environment CPython3.9.0.final.0-64 in 1312ms creator CPython3Windows(dest=D:\\Users\\donhuvy\\PycharmProjects\\pythonProject\\foo, clear=False, global=False) seeder FromAppData(download=False, pip=bundle, setuptools=bundle, wheel=bundle, via=copy, app_data_dir=C:\\Users\\donhuvy\\AppData\\Local\\pypa\\virtualenv) added seed packages: pip==20.2.4, setuptools==50.3.2, wheel==0.35.1 activators BashActivator,BatchActivator,FishActivator,PowerShellActivator,PythonActivator,XonshActivator PS D:\\Users\\donhuvy\\PycharmProjects\\pythonProject> cd .\\foo PS D:\\Users\\donhuvy\\PycharmProjects\\pythonProject\\foo> .\\Scripts\\activate (foo) PS D:\\Users\\donhuvy\\PycharmProjects\\pythonProject\\foo> pip install numpy Collecting numpy Using cached numpy-1.19.4-cp39-cp39-win_amd64.whl (13.0 MB) Installing collected packages: numpy Successfully installed numpy-1.19.4 (foo) PS D:\\Users\\donhuvy\\PycharmProjects\\pythonProject\\foo> pip install matplotlib Collecting matplotlib Using cached matplotlib-3.3.2.tar.gz (37.9 MB) ** On entry to DGEBAL parameter number 3 had an illegal value ** On entry to DGEHRD parameter number 2 had an illegal value ** On entry to DORGHR DORGQR parameter number 2 had an illegal value ** On entry to DHSEQR parameter number 4 had an illegal value ERROR: Command errored out with exit status 1: command: 'D:\\Users\\donhuvy\\PycharmProjects\\pythonProject\\foo\\Scripts\\python.exe' -c 'import sys, setuptools, tokenize; sys.argv[0] = '\"'\"'C:\\\\Users\\\\donhuvy\\\\AppData\\\\Local\\\\Temp\\\\pip-install-8bn40qg7\\\\matplotlib\\\\setup.py'\"'\"'; 
__file__='\"'\"'C:\\\\Users\\\\donhuvy\\\\AppData\\\\Local\\\\Temp\\\\pip-install-8bn40qg7\\\\matplotlib\\\\setup.py'\"'\"';f=getattr(tokenize, '\"'\"'open'\"'\"', open)(__file__);code=f.read().replace('\"'\"'\\r\\n'\"'\"', '\"'\"'\\n'\"'\"');f.close();exec(compile(code, __file__, '\"'\"'exec'\"'\"'))' egg_info --egg-base 'C:\\Users\\donhuvy\\AppData\\Local\\Temp\\pip-pip-egg-info-39nmc0pe' cwd: C:\\Users\\donhuvy\\AppData\\Local\\Temp\\pip-install-8bn40qg7\\matplotlib\\ Complete output (61 lines): Edit setup.cfg to change the build options; suppress output with --quiet. BUILDING MATPLOTLIB matplotlib: yes [3.3.2] python: yes [3.9.0 (tags\/v3.9.0:9cf6752, Oct 5 2020, 15:34:40) [MSC v.1927 64 bit (AMD64)]] platform: yes [win32] sample_data: yes [installing] tests: no [skipping due to configuration] macosx: no [Mac OS-X only] running egg_info creating C:\\Users\\donhuvy\\AppData\\Local\\Temp\\pip-pip-egg-info-39nmc0pe\\matplotlib.egg-info writing C:\\Users\\donhuvy\\AppData\\Local\\Temp\\pip-pip-egg-info-39nmc0pe\\matplotlib.egg-info\\PKG-INFO writing dependency_links to C:\\Users\\donhuvy\\AppData\\Local\\Temp\\pip-pip-egg-info-39nmc0pe\\matplotlib.egg-info\\dependency_links.txt writing namespace_packages to C:\\Users\\donhuvy\\AppData\\Local\\Temp\\pip-pip-egg-info-39nmc0pe\\matplotlib.egg-info\\namespace_packages.txt writing requirements to C:\\Users\\donhuvy\\AppData\\Local\\Temp\\pip-pip-egg-info-39nmc0pe\\matplotlib.egg-info\\requires.txt writing top-level names to C:\\Users\\donhuvy\\AppData\\Local\\Temp\\pip-pip-egg-info-39nmc0pe\\matplotlib.egg-info\\top_level.txt writing manifest file 'C:\\Users\\donhuvy\\AppData\\Local\\Temp\\pip-pip-egg-info-39nmc0pe\\matplotlib.egg-info\\SOURCES.txt' Traceback (most recent call last): File \"\", line 1, in File \"C:\\Users\\donhuvy\\AppData\\Local\\Temp\\pip-install-8bn40qg7\\matplotlib\\setup.py\", line 242, in setup( # Finally, pass this all along to distutils to do the heavy lifting. 
File \"D:\\Users\\donhuvy\\PycharmProjects\\pythonProject\\foo\\lib\\site-packages\\setuptools\\__init__.py\", line 153, in setup return distutils.core.setup(**attrs) File \"d:\\users\\donhuvy\\appdata\\local\\programs\\python\\python39\\lib\\distutils\\core.py\", line 148, in setup dist.run_commands() File \"d:\\users\\donhuvy\\appdata\\local\\programs\\python\\python39\\lib\\distutils\\dist.py\", line 966, in run_commands self.run_command(cmd) File \"d:\\users\\donhuvy\\appdata\\local\\programs\\python\\python39\\lib\\distutils\\dist.py\", line 985, in run_command cmd_obj.run() File \"D:\\Users\\donhuvy\\PycharmProjects\\pythonProject\\foo\\lib\\site-packages\\setuptools\\command\\egg_info.py\", line 298, in run self.find_sources() File \"D:\\Users\\donhuvy\\PycharmProjects\\pythonProject\\foo\\lib\\site-packages\\setuptools\\command\\egg_info.py\", line 305, in find_sources mm.run() File \"D:\\Users\\donhuvy\\PycharmProjects\\pythonProject\\foo\\lib\\site-packages\\setuptools\\command\\egg_info.py\", line 536, in run self.add_defaults() File \"D:\\Users\\donhuvy\\PycharmProjects\\pythonProject\\foo\\lib\\site-packages\\setuptools\\command\\egg_info.py\", line 572, in add_defaults sdist.add_defaults(self) File \"d:\\users\\donhuvy\\appdata\\local\\programs\\python\\python39\\lib\\distutils\\command\\sdist.py\", line 228, in add_defaults self._add_defaults_ext() File \"d:\\users\\donhuvy\\appdata\\local\\programs\\python\\python39\\lib\\distutils\\command\\sdist.py\", line 311, in _add_defaults_ext build_ext = self.get_finalized_command('build_ext') File \"d:\\users\\donhuvy\\appdata\\local\\programs\\python\\python39\\lib\\distutils\\cmd.py\", line 299, in get_finalized_command cmd_obj.ensure_finalized() File \"d:\\users\\donhuvy\\appdata\\local\\programs\\python\\python39\\lib\\distutils\\cmd.py\", line 107, in ensure_finalized self.finalize_options() File \"C:\\Users\\donhuvy\\AppData\\Local\\Temp\\pip-install-8bn40qg7\\matplotlib\\setup.py\", line 88, in 
finalize_options self.distribution.ext_modules[:] = [ File \"C:\\Users\\donhuvy\\AppData\\Local\\Temp\\pip-install-8bn40qg7\\matplotlib\\setup.py\", line 91, in for ext in package.get_extensions() File \"C:\\Users\\donhuvy\\AppData\\Local\\Temp\\pip-install-8bn40qg7\\matplotlib\\setupext.py\", line 345, in get_extensions add_numpy_flags(ext) File \"C:\\Users\\donhuvy\\AppData\\Local\\Temp\\pip-install-8bn40qg7\\matplotlib\\setupext.py\", line 469, in add_numpy_flags import numpy as np File \"D:\\Users\\donhuvy\\PycharmProjects\\pythonProject\\foo\\lib\\site-packages\\numpy\\__init__.py\", line 305, in _win_os_check() File \"D:\\Users\\donhuvy\\PycharmProjects\\pythonProject\\foo\\lib\\site-packages\\numpy\\__init__.py\", line 302, in _win_os_check raise RuntimeError(msg.format(__file__)) from None RuntimeError: The current Numpy installation ('D:\\\\Users\\\\donhuvy\\\\PycharmProjects\\\\pythonProject\\\\foo\\\\lib\\\\site-packages\\\\numpy\\\\__init__.py') fails to pass a sanity check due to a bug in the windows runtime. See this issue for more information: https:\/\/tinyurl.com\/y3dm3h86 ---------------------------------------- ERROR: Command errored out with exit status 1: python setup.py egg_info Check the logs for full command output. (foo) PS D:\\Users\\donhuvy\\PycharmProjects\\pythonProject\\foo> ``` A screenshot of some of the above text. Error information: the link to fmod(), after an update to Windows 2004, is causing a strange interaction with other code. I use PyCharm 2020.2 Ultimate, and it also catches the error. How can I fix it?", "response":"The temporary solution is to use NumPy 1.19.3. ```none pip install numpy==1.19.3 ``` From a Microsoft thread, a fix was promised to be available around January 2021.
It was fixed in the KB4598291 update.", "best_answers_score":0.8, "library_name":"numpy", "question_url":"https:\/\/stackoverflow.com\/questions\/64729944\/runtimeerror-the-current-numpy-installation-fails-to-pass-a-sanity-check-due-to", "best_answers_votes":241, "question_length":10048, "response_length":203 }, { "question":"Numpy \u2018smart\u2019 symmetric matrix Is there a smart and space-efficient symmetric matrix in numpy which automatically (and transparently) fills the position at [j][i] when [i][j] is written to? ``` import numpy a = numpy.symmetric((3, 3)) a[0][1] = 1 a[1][0] == a[0][1] # True print(a) # [[0 1 0], [1 0 0], [0 0 0]] assert numpy.all(a == a.T) # for any symmetric matrix ``` An automatic Hermitian would also be nice, although I won\u2019t need that at the time of writing.", "response":"If you can afford to symmetrize the matrix just before doing calculations, the following should be reasonably fast: ``` def symmetrize(a): \"\"\" Return a symmetrized version of NumPy array a. Values 0 are replaced by the array value at the symmetric position (with respect to the diagonal), i.e. if a_ij = 0, then the returned array a' is such that a'_ij = a_ji. Diagonal values are left untouched. a -- square NumPy array, such that a_ij = 0 or a_ji = 0, for i != j. \"\"\" return a + a.T - numpy.diag(a.diagonal()) ``` This works under reasonable assumptions (such as not doing both a[0, 1] = 42 and the contradictory a[1, 0] = 123 before running symmetrize). If you really need a transparent symmetrization, you might consider subclassing numpy.ndarray and simply redefining __setitem__: ``` class SymNDArray(numpy.ndarray): \"\"\" NumPy array subclass for symmetric matrices. A SymNDArray arr is such that doing arr[i,j] = value automatically does arr[j,i] = value, so that array updates remain symmetrical. 
\"\"\" def __setitem__(self, (i, j), value): super(SymNDArray, self).__setitem__((i, j), value) super(SymNDArray, self).__setitem__((j, i), value) def symarray(input_array): \"\"\" Return a symmetrized version of the array-like input_array. The returned array has class SymNDArray. Further assignments to the array are thus automatically symmetrized. \"\"\" return symmetrize(numpy.asarray(input_array)).view(SymNDArray) # Example: a = symarray(numpy.zeros((3, 3))) a[0, 1] = 42 print a # a[1, 0] == 42 too! ``` (or the equivalent with matrices instead of arrays, depending on your needs). This approach even handles more complicated assignments, like a[:, 1] = -1, which correctly sets a[1, :] elements. Note that Python 3 removed the possibility of writing def \u2026(\u2026, (i, j),\u2026), so the code has to be slightly adapted before running with Python 3: def __setitem__(self, indexes, value): (i, j) = indexes\u2026", "best_answers_score":0.8, "library_name":"numpy", "question_url":"https:\/\/stackoverflow.com\/questions\/2572916\/numpy-smart-symmetric-matrix", "best_answers_votes":97, "question_length":463, "response_length":1899 }, { "question":"NumPy array slice using None This had me scratching my head for a while. I was unintentionally slicing an array with None and getting something other than an error (I expected an error). Instead, it returns an array with an extra dimension. ``` >>> import numpy >>> a = numpy.arange(4).reshape(2,2) >>> a array([[0, 1], [2, 3]]) >>> a[None] array([[[0, 1], [2, 3]]]) ``` Is this behavior intentional or a side-effect? If intentional, is there some rationale for it?", "response":"Using None is equivalent to using numpy.newaxis, so yes, it's intentional. In fact, they're the same thing, but, of course, newaxis spells it out better. The docs: The newaxis object can be used in all slicing operations to create an axis of length one. 
newaxis is an alias for \u2018None\u2019, and \u2018None\u2019 can be used in place of this with the same result. A related SO question.", "best_answers_score":0.8, "library_name":"numpy", "question_url":"https:\/\/stackoverflow.com\/questions\/1408311\/numpy-array-slice-using-none", "best_answers_votes":85, "question_length":465, "response_length":370 }, { "question":"How to overplot a line on a scatter plot in python? I have two vectors of data and I've put them into pyplot.scatter(). Now I'd like to over plot a linear fit to these data. How would I do this? I've tried using scikitlearn and np.polyfit().", "response":"``` import numpy as np from numpy.polynomial.polynomial import polyfit import matplotlib.pyplot as plt # Sample data x = np.arange(10) y = 5 * x + 10 # Fit with polyfit b, m = polyfit(x, y, 1) plt.plot(x, y, '.') plt.plot(x, b + m * x, '-') plt.show() ```", "best_answers_score":0.8, "library_name":"numpy", "question_url":"https:\/\/stackoverflow.com\/questions\/19068862\/how-to-overplot-a-line-on-a-scatter-plot-in-python", "best_answers_votes":152, "question_length":241, "response_length":255 }, { "question":"Average values in two Numpy arrays Given two ndarrays ``` old_set = [[0, 1], [4, 5]] new_set = [[2, 7], [0, 1]] ``` I'm looking to get the mean of the respective values between the two arrays so that the data ends up something like: ``` end_data = [[1, 4], [2, 3]] ``` basically it would apply something like ``` for i in len(old_set): end_data[i] = (old_set[i]+new_set[i])\/2 ``` But I'm unsure what syntax to use.. 
Thanks for the help in advance!", "response":"You can create a 3D array containing your 2D arrays to be averaged, then average along axis=0 using np.mean or np.average (the latter allows for weighted averages): ``` np.mean( np.array([ old_set, new_set ]), axis=0 ) ``` This averaging scheme can be applied to any (n)-dimensional array, because the created (n+1)-dimensional array will always contain the original arrays to be averaged along its axis=0.", "best_answers_score":0.8, "library_name":"numpy", "question_url":"https:\/\/stackoverflow.com\/questions\/18461623\/average-values-in-two-numpy-arrays", "best_answers_votes":183, "question_length":447, "response_length":406 }, { "question":"python pandas flatten a dataframe to a list I have a df like so: ``` import pandas a=[['1\/2\/2014', 'a', '6', 'z1'], ['1\/2\/2014', 'a', '3', 'z1'], ['1\/3\/2014', 'c', '1', 'x3'], ] df = pandas.DataFrame.from_records(a[1:],columns=a[0]) ``` I want to flatten the df so it is one continuous list like so: ['1\/2\/2014', 'a', '6', 'z1', '1\/2\/2014', 'a', '3', 'z1','1\/3\/2014', 'c', '1', 'x3'] I can loop through the rows and extend to a list, but is a much easier way to do it?", "response":"You can use .flatten() on the DataFrame converted to a NumPy array: ``` df.to_numpy().flatten() ``` and you can also add .tolist() if you want the result to be a Python list. Edit In previous versions of Pandas, the values attributed was used instead of the .to_numpy() method, as mentioned in the comments below.", "best_answers_score":0.8, "library_name":"numpy", "question_url":"https:\/\/stackoverflow.com\/questions\/25440008\/python-pandas-flatten-a-dataframe-to-a-list", "best_answers_votes":138, "question_length":468, "response_length":313 }, { "question":"What is the difference between np.linspace and np.arange? I have always used np.arange. I recently came across np.linspace. I am wondering what exactly is the difference between them... 
Looking at their documentation: np.arange: Return evenly spaced values within a given interval. np.linspace: Return evenly spaced numbers over a specified interval. The only difference I can see is linspace having more options... Like choosing to include the last element. Which one of these two would you recommend and why? And in which cases is np.linspace superior?", "response":"np.linspace allows you to define how many values you get including the specified min and max value. It infers the stepsize: ```py >>> np.linspace(0,1,11) array([0. , 0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9, 1. ]) ``` np.arange allows you to define the stepsize and infers the number of steps(the number of values you get). ```py >>> np.arange(0,1,.1) array([0. , 0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9]) ``` contributions from user2357112: np.arange excludes the maximum value unless rounding error makes it do otherwise. For example, the following results occur due to rounding error: ```py >>> numpy.arange(1, 1.3, 0.1) array([1. , 1.1, 1.2, 1.3]) ``` You can exclude the stop value (in our case 1.3) using endpoint=False: ```py >>> numpy.linspace(1, 1.3, 3, endpoint=False) array([1. , 1.1, 1.2]) ```", "best_answers_score":0.8, "library_name":"numpy", "question_url":"https:\/\/stackoverflow.com\/questions\/62106028\/what-is-the-difference-between-np-linspace-and-np-arange", "best_answers_votes":115, "question_length":554, "response_length":812 }, { "question":"TypeError: unhashable type: 'numpy.ndarray' From an array with three columns, I want to be able to just take a slice of data from all three columns where the values in the first column are equal to the values defined in above. 
```py above = {1, 5, 10} data = np.arange(9).reshape(-1, 3) energies = np.hsplit(data, 3)[0] slice = set(energies) & above ``` The above comes back with: ```none Traceback (most recent call last): File \"\", line 1, in slice = set(energies) & above TypeError: unhashable type: 'numpy.ndarray' ``` How do I resolve this error?", "response":"Your variable energies probably has the wrong shape: ``` >>> from numpy import array >>> set([1,2,3]) & set(range(2, 10)) set([2, 3]) >>> set(array([1,2,3])) & set(range(2,10)) set([2, 3]) >>> set(array([[1,2,3],])) & set(range(2,10)) Traceback (most recent call last): File \"\", line 1, in TypeError: unhashable type: 'numpy.ndarray' ``` And that's what happens if you read columnar data using your approach: ``` >>> data array([[ 1., 2., 3.], [ 3., 4., 5.], [ 5., 6., 7.], [ 8., 9., 10.]]) >>> hsplit(data,3)[0] array([[ 1.], [ 3.], [ 5.], [ 8.]]) ``` Probably you can simply use ``` >>> data[:,0] array([ 1., 3., 5., 8.]) ``` instead. (P.S. Your code looks like it's undecided about whether it's data or elementdata. I've assumed it's simply a typo.)", "best_answers_score":0.8, "library_name":"numpy", "question_url":"https:\/\/stackoverflow.com\/questions\/9022656\/typeerror-unhashable-type-numpy-ndarray", "best_answers_votes":58, "question_length":551, "response_length":753 }, { "question":"Rearrange columns of numpy 2D array Is there a way to change the order of the columns in a numpy 2D array to a new and arbitrary order? For example, I have an array ``` array([[10, 20, 30, 40, 50], [ 6, 7, 8, 9, 10]]) ``` and I want to change it into, say ``` array([[10, 30, 50, 40, 20], [ 6, 8, 10, 9, 7]]) ``` by applying the permutation ``` 0 -> 0 1 -> 4 2 -> 1 3 -> 3 4 -> 2 ``` on the columns. In the new matrix, I therefore want the first column of the original to stay in place, the second to move to the last column and so on. Is there a numpy function to do it? 
I have a fairly large matrix and expect to get even larger ones, so I need a solution that does this quickly and in place if possible (permutation matrices are a no-go) Thank you.", "response":"This is possible in O(n) time and O(n) space using fancy indexing: ``` >>> import numpy as np >>> a = np.array([[10, 20, 30, 40, 50], ... [ 6, 7, 8, 9, 10]]) >>> permutation = [0, 4, 1, 3, 2] >>> idx = np.empty_like(permutation) >>> idx[permutation] = np.arange(len(permutation)) >>> a[:, idx] # return a rearranged copy array([[10, 30, 50, 40, 20], [ 6, 8, 10, 9, 7]]) >>> a[:] = a[:, idx] # in-place modification of a ``` Note that a[:, idx] is returning a copy, not a view. An O(1)-space solution is not possible in the general case, due to how numpy arrays are strided in memory.", "best_answers_score":0.8, "library_name":"numpy", "question_url":"https:\/\/stackoverflow.com\/questions\/20265229\/rearrange-columns-of-numpy-2d-array", "best_answers_votes":104, "question_length":751, "response_length":583 }, { "question":"Easy way to test if each element in an numpy array lies between two values? I was wondering if there was a syntactically simple way of checking if each element in a numpy array lies between two numbers. In other words, just as numpy.array([1,2,3,4,5]) < 5 will return array([True, True, True, True, False]), I was wondering if it was possible to do something akin to this: ``` 1 < numpy.array([1,2,3,4,5]) < 5 ``` ... to obtain ... ``` array([False, True, True, True, False]) ``` I understand that I can obtain this through logical chaining of boolean tests, but I'm working through some rather complex code and I was looking for a syntactically clean solution. Any tips?", "response":"One solution would be: ```py import numpy as np a = np.array([1, 2, 3, 4, 5]) (a > 1) & (a < 5) # 1 < element < 5? 
# array([False, True, True, True, False]) ```", "best_answers_score":0.8, "library_name":"numpy", "question_url":"https:\/\/stackoverflow.com\/questions\/10542240\/easy-way-to-test-if-each-element-in-an-numpy-array-lies-between-two-values", "best_answers_votes":115, "question_length":671, "response_length":160 }, { "question":"built-in range or numpy.arange: which is more efficient? When iterating over a large array with a range expression, should I use Python's built-in range function, or numpy's arange to get the best performance? My reasoning so far: range probably resorts to a native implementation and might be faster therefore. On the other hand, arange returns a full array, which occupies memory, so there might be an overhead. Python 3's range expression is a generator, which does not hold all the values in memory.", "response":"For large arrays, a vectorised numpy operation is the fastest. If you must loop, prefer xrange\/range and avoid using np.arange. In numpy you should use combinations of vectorized calculations, ufuncs and indexing to solve your problems as it runs at C speed. Looping over numpy arrays is inefficient compared to this. (Something like the worst thing you could do would be to iterate over the array with an index created with range or np.arange as the first sentence in your question suggests, but I'm not sure if you really mean that.) ``` import numpy as np import sys sys.version # out: '2.7.3rc2 (default, Mar 22 2012, 04:35:15) \\n[GCC 4.6.3]' np.version.version # out: '1.6.2' size = int(1E6) %timeit for x in range(size): x ** 2 # out: 10 loops, best of 3: 136 ms per loop %timeit for x in xrange(size): x ** 2 # out: 10 loops, best of 3: 88.9 ms per loop # avoid this %timeit for x in np.arange(size): x ** 2 #out: 1 loops, best of 3: 1.16 s per loop # use this %timeit np.arange(size) ** 2 #out: 100 loops, best of 3: 19.5 ms per loop ``` So for this case numpy is 4 times faster than using xrange if you do it right. 
Depending on your problem numpy can be much faster than a 4 or 5 times speed up. The answers to this question explain some more advantages of using numpy arrays instead of python lists for large data sets.", "best_answers_score":0.8, "library_name":"numpy", "question_url":"https:\/\/stackoverflow.com\/questions\/10698858\/built-in-range-or-numpy-arange-which-is-more-efficient", "best_answers_votes":95, "question_length":503, "response_length":1330 }, { "question":"Interpolate NaN values in a numpy array Is there a quick way of replacing all NaN values in a numpy array with (say) the linearly interpolated values? For example, ``` [1 1 1 nan nan 2 2 nan 0] ``` would be converted into ``` [1 1 1 1.3 1.6 2 2 1 0] ```", "response":"Lets define first a simple helper function in order to make it more straightforward to handle indices and logical indices of NaNs: ``` import numpy as np def nan_helper(y): \"\"\"Helper to handle indices and logical indices of NaNs. Input: - y, 1d numpy array with possible NaNs Output: - nans, logical indices of NaNs - index, a function, with signature indices= index(logical_indices), to convert logical indices of NaNs to 'equivalent' indices Example: >>> # linear interpolation of NaNs >>> nans, x= nan_helper(y) >>> y[nans]= np.interp(x(nans), x(~nans), y[~nans]) \"\"\" return np.isnan(y), lambda z: z.nonzero()[0] ``` Now the nan_helper(.) can now be utilized like: ``` >>> y= array([1, 1, 1, NaN, NaN, 2, 2, NaN, 0]) >>> >>> nans, x= nan_helper(y) >>> y[nans]= np.interp(x(nans), x(~nans), y[~nans]) >>> >>> print y.round(2) [ 1. 1. 1. 1.33 1.67 2. 2. 1. 0. ] ``` --- Although it may seem first a little bit overkill to specify a separate function to do just things like this: ``` >>> nans, x= np.isnan(y), lambda z: z.nonzero()[0] ``` it will eventually pay dividends. So, whenever you are working with NaNs related data, just encapsulate all the (new NaN related) functionality needed, under some specific helper function(s). 
Your code base will be more coherent and readable, because it follows easily understandable idioms. Interpolation, indeed, is a nice context to see how NaN handling is done, but similar techniques are utilized in various other contexts as well.", "best_answers_score":0.8, "library_name":"numpy", "question_url":"https:\/\/stackoverflow.com\/questions\/6518811\/interpolate-nan-values-in-a-numpy-array", "best_answers_votes":121, "question_length":253, "response_length":1475 }, { "question":"Does matplotlib have a function for drawing diagonal lines in axis coordinates? Matplotlib Axes have the functions axhline and axvline for drawing horizontal or vertical lines at a given y or x coordinate (respectively) independently of the data scale on an Axes. Is there a similar function for plotting a constant diagonal? For example, if I have a scatterplot of variables with a similar domain, it is often useful to know whether they fall above or below the line of y = x: ``` mean, cov = [0, 0], [(1, .6), (.6, 1)] x, y = np.random.multivariate_normal(mean, cov, 100).T y += x + 1 f, ax = plt.subplots(figsize=(6, 6)) ax.scatter(x, y, c=\".3\") ax.plot([-3, 3], [-3, 3], ls=\"--\", c=\".3\") ax.set(xlim=(-3, 3), ylim=(-3, 3)) ``` This can of course be done programmatically by grabbing the axis limits, (ax.get_xlim(), etc.), but that a) takes a few extra steps and b) is brittle in cases where more data might end up on the plot and shift the limits. (Actually in some cases just adding the constant line itself stretches the axes). It would be preferable to just do, e.g., ax.axdline(ls=\"--\", c=\".3\"), but it's not clear if something like this exists in the matplotlib codebase. 
All you would need to do would be to modify the axhline code to plot from [0, 1] in axes coordinates for both x and y, I think.", "response":"Drawing a diagonal from the lower left to the upper right corners of your plot would be accomplished by the following: ``` ax.plot([0, 1], [0, 1], transform=ax.transAxes) ``` Using transform=ax.transAxes, the supplied x and y coordinates are interpreted as axes coordinates instead of data coordinates. This, as @fqq pointed out, is only the identity line when your x and y limits are equal. To draw the line y=x such that it always extends to the limits of your plot, an approach similar to the one given by @Ffisegydd would work, and can be written as the following function. ``` def add_identity(axes, *line_args, **line_kwargs): identity, = axes.plot([], [], *line_args, **line_kwargs) def callback(axes): low_x, high_x = axes.get_xlim() low_y, high_y = axes.get_ylim() low = max(low_x, low_y) high = min(high_x, high_y) identity.set_data([low, high], [low, high]) callback(axes) axes.callbacks.connect('xlim_changed', callback) axes.callbacks.connect('ylim_changed', callback) return axes ``` Example usage: ``` import numpy as np import matplotlib.pyplot as plt mean, cov = [0, 0], [(1, .6), (.6, 1)] x, y = np.random.multivariate_normal(mean, cov, 100).T y += x + 1 f, ax = plt.subplots(figsize=(6, 6)) ax.scatter(x, y, c=\".3\") add_identity(ax, color='r', ls='--') plt.show() ```
Is it also possible to do this highly efficiently on the original array (to save memory)? Example: ``` import numpy as np X = np.random.random((6, 2)) print(X) Y = ???shuffle by row only, not cols??? print(Y) ``` Original matrix: ``` [[ 0.48252164 0.12013048] [ 0.77254355 0.74382174] [ 0.45174186 0.8782033 ] [ 0.75623083 0.71763107] [ 0.26809253 0.75144034] [ 0.23442518 0.39031414]] ``` Expected output (rows shuffled, columns unchanged), e.g.: ``` [[ 0.45174186 0.8782033 ] [ 0.48252164 0.12013048] [ 0.77254355 0.74382174] [ 0.75623083 0.71763107] [ 0.23442518 0.39031414] [ 0.26809253 0.75144034]] ```
For example, it is now possible to permute the rows or columns of a 2-D array.", "best_answers_score":0.8, "library_name":"numpy", "question_url":"https:\/\/stackoverflow.com\/questions\/35646908\/numpy-shuffle-multidimensional-array-by-row-only-keep-column-order-unchanged", "best_answers_votes":88, "question_length":861, "response_length":1193 }, { "question":"How do I plot list of tuples? I have the following data set. I would like to use Python or Gnuplot to plot the data. The tuples are of the form (x, y). The Y-axis should be a log axis, that is, log(y). A scatter plot or line plot would be ideal. How can this be done? ``` [(0, 6.0705199999997801e-08), (1, 2.1015700100300739e-08), (2, 7.6280656623374823e-09), (3, 5.7348209304555086e-09), (4, 3.6812203579604238e-09), (5, 4.1572516753310418e-09)] ```", "response":"If I get your question correctly, you could do something like this. ``` >>> import matplotlib.pyplot as plt >>> testList =[(0, 6.0705199999997801e-08), (1, 2.1015700100300739e-08), (2, 7.6280656623374823e-09), (3, 5.7348209304555086e-09), (4, 3.6812203579604238e-09), (5, 4.1572516753310418e-09)] >>> from math import log >>> testList2 = [(elem1, log(elem2)) for elem1, elem2 in testList] >>> testList2 [(0, -16.617236475334405), (1, -17.67799605473062), (2, -18.691431541177973), (3, -18.9767093108359), (4, -19.420021520728017), (5, -19.298411635970396)] >>> zip(*testList2) [(0, 1, 2, 3, 4, 5), (-16.617236475334405, -17.67799605473062, -18.691431541177973, -18.9767093108359, -19.420021520728017, -19.298411635970396)] >>> plt.scatter(*zip(*testList2)) >>> plt.show() ``` which would give you something like Or as a line plot, ``` >>> plt.plot(*zip(*testList2)) >>> plt.show() ``` EDIT - If you want to add a title and labels for the axis, you could do something like ``` >>> plt.scatter(*zip(*testList2)) >>> plt.title('Random Figure') >>> plt.xlabel('X-Axis') >>> plt.ylabel('Y-Axis') >>> plt.show() ``` which would give you", "best_answers_score":0.8, 
"library_name":"numpy", "question_url":"https:\/\/stackoverflow.com\/questions\/18458734\/how-do-i-plot-list-of-tuples", "best_answers_votes":119, "question_length":450, "response_length":1130 }, { "question":"Performant cartesian product (CROSS JOIN) with pandas The contents of this post were originally meant to be a part of Pandas Merging 101, but due to the nature and size of the content required to fully do justice to this topic, it has been moved to its own QnA. Given two simple DataFrames; ``` left = pd.DataFrame({'col1' : ['A', 'B', 'C'], 'col2' : [1, 2, 3]}) right = pd.DataFrame({'col1' : ['X', 'Y', 'Z'], 'col2' : [20, 30, 50]}) left col1 col2 0 A 1 1 B 2 2 C 3 right col1 col2 0 X 20 1 Y 30 2 Z 50 ``` The cross product of these frames can be computed, and will look something like: ``` A 1 X 20 A 1 Y 30 A 1 Z 50 B 2 X 20 B 2 Y 30 B 2 Z 50 C 3 X 20 C 3 Y 30 C 3 Z 50 ``` What is the most performant method of computing this result?", "response":"Let's start by establishing a benchmark. The easiest method for solving this is using a temporary \"key\" column: pandas = 1.2 ``` left.merge(right, how=\"cross\") # implements the technique above ``` ``` col1_x col2_x col1_y col2_y 0 A 1 X 20 1 A 1 Y 30 2 A 1 Z 50 3 B 2 X 20 4 B 2 Y 30 5 B 2 Z 50 6 C 3 X 20 7 C 3 Y 30 8 C 3 Z 50 ``` How this works is that both DataFrames are assigned a temporary \"key\" column with the same value (say, 1). merge then performs a many-to-many JOIN on \"key\". While the many-to-many JOIN trick works for reasonably sized DataFrames, you will see relatively lower performance on larger data. A faster implementation will require NumPy. Here are some famous NumPy implementations of 1D cartesian product. We can build on some of these performant solutions to get our desired output. My favourite, however, is @senderle's first implementation. 
``` def cartesian_product(*arrays): la = len(arrays) dtype = np.result_type(*arrays) arr = np.empty([len(a) for a in arrays] + [la], dtype=dtype) for i, a in enumerate(np.ix_(*arrays)): arr[...,i] = a return arr.reshape(-1, la) ``` Generalizing: CROSS JOIN on Unique or Non-Unique Indexed DataFrames Disclaimer These solutions are optimised for DataFrames with non-mixed scalar dtypes. If dealing with mixed dtypes, use at your own risk! This trick will work on any kind of DataFrame. We compute the cartesian product of the DataFrames' numeric indices using the aforementioned cartesian_product, use this to reindex the DataFrames, and ``` def cartesian_product_generalized(left, right): la, lb = len(left), len(right) idx = cartesian_product(np.ogrid[:la], np.ogrid[:lb]) return pd.DataFrame( np.column_stack([left.values[idx[:,0]], right.values[idx[:,1]]])) cartesian_product_generalized(left, right) 0 1 2 3 0 A 1 X 20 1 A 1 Y 30 2 A 1 Z 50 3 B 2 X 20 4 B 2 Y 30 5 B 2 Z 50 6 C 3 X 20 7 C 3 Y 30 8 C 3 Z 50 np.array_equal(cartesian_product_generalized(left, right), cartesian_product_basic(left, right)) True ``` And, along similar lines, ``` left2 = left.copy() left2.index = ['s1', 's2', 's1'] right2 = right.copy() right2.index = ['x', 'y', 'y'] left2 col1 col2 s1 A 1 s2 B 2 s1 C 3 right2 col1 col2 x X 20 y Y 30 y Z 50 np.array_equal(cartesian_product_generalized(left, right), cartesian_product_basic(left2, right2)) True ``` This solution can generalise to multiple DataFrames. For example, ``` def cartesian_product_multi(*dfs): idx = cartesian_product(*[np.ogrid[:len(df)] for df in dfs]) return pd.DataFrame( np.column_stack([df.values[idx[:,i]] for i,df in enumerate(dfs)])) cartesian_product_multi(*[left, right, left]).head() 0 1 2 3 4 5 0 A 1 X 20 A 1 1 A 1 X 20 B 2 2 A 1 X 20 C 3 3 A 1 X 20 D 4 4 A 1 Y 30 A 1 ``` Further Simplification A simpler solution not involving @senderle's cartesian_product is possible when dealing with just two DataFrames. 
Using np.broadcast_arrays, we can achieve almost the same level of performance. ``` def cartesian_product_simplified(left, right): la, lb = len(left), len(right) ia2, ib2 = np.broadcast_arrays(*np.ogrid[:la,:lb]) return pd.DataFrame( np.column_stack([left.values[ia2.ravel()], right.values[ib2.ravel()]])) np.array_equal(cartesian_product_simplified(left, right), cartesian_product_basic(left2, right2)) True ``` Performance Comparison Benchmarking these solutions on some contrived DataFrames with unique indices, we have Do note that timings may vary based on your setup, data, and choice of cartesian_product helper function as applicable. Performance Benchmarking Code This is the timing script. All functions called here are defined above. ``` from timeit import timeit import pandas as pd import matplotlib.pyplot as plt res = pd.DataFrame( index=['cartesian_product_basic', 'cartesian_product_generalized', 'cartesian_product_multi', 'cartesian_product_simplified'], columns=[1, 10, 50, 100, 200, 300, 400, 500, 600, 800, 1000, 2000], dtype=float ) for f in res.index: for c in res.columns: # print(f,c) left2 = pd.concat([left] * c, ignore_index=True) right2 = pd.concat([right] * c, ignore_index=True) stmt = '{}(left2, right2)'.format(f) setp = 'from __main__ import left2, right2, {}'.format(f) res.at[f, c] = timeit(stmt, setp, number=5) ax = res.div(res.min()).T.plot(loglog=True) ax.set_xlabel(\"N\"); ax.set_ylabel(\"time (relative)\"); plt.show() ``` Continue Reading Jump to other topics in Pandas Merging 101 to continue learning: Merging basics - basic types of joins Index-based joins Generalizing to multiple DataFrames Cross join * * you are here", "best_answers_score":0.8, "library_name":"numpy", "question_url":"https:\/\/stackoverflow.com\/questions\/53699012\/performant-cartesian-product-cross-join-with-pandas", "best_answers_votes":103, "question_length":739, "response_length":4505 }, { "question":"1D numpy concatenate: TypeError: only integer scalar arrays can be 
converted to a scalar index [duplicate] This question already has answers here: Concatenating two one-dimensional NumPy arrays (7 answers) Closed 7 years ago. I want to store a numpy array in another numpy array. I am using np.concatenate. This is my code ``` x=np.concatenate(x,s_x) ``` These are the type and the shape of x and s_x ``` Type of s_x: , Shape of s_x: (173,) Type of x: (0,), Shape of x: (0,) ``` This is the error being displayed ``` TypeError: only integer scalar arrays can be converted to a scalar index ```", "response":"You need to pass the arrays as an iterable (a tuple or list), thus the correct syntax is ``` x=np.concatenate((x, s_x)) ```", "best_answers_score":0.8, "library_name":"numpy", "question_url":"https:\/\/stackoverflow.com\/questions\/47388013\/1d-numpy-concatenate-typeerror-only-integer-scalar-arrays-can-be-converted-to", "best_answers_votes":173, "question_length":595, "response_length":123 }, { "question":"Create a two-dimensional array with two one-dimensional arrays My function (named ConvexHull(points)) accepts its argument as a two-dimensional array: ``` hull = ConvexHull(points) ``` Session ``` In [1]: points.ndim Out[1]: 2 In [2]: points.shape Out[2]: (10, 2) In [3]: points Out[3]: array([[ 0. , 0. ], [ 1. , 0.8], [ 0.9, 0.8], [ 0.9, 0.7], [ 0.9, 0.6], [ 0.8, 0.5], [ 0.8, 0.5], [ 0.7, 0.5], [ 0.1, 0. ], [ 0. , 0. ]]) ``` points is a NumPy array with ndim 2. I have two different NumPy arrays (tp and fp) like below: ``` In [4]: fp.ndim Out[4]: 1 In [5]: fp.shape Out[5]: (10,) In [6]: fp Out[6]: array([ 0. , 0.1, 0.2, 0.3, 0.4, 0.4, 0.5, 0.6, 0.9, 1. ]) ``` How can I create a two-dimensional NumPy array effectively (like points mentioned above) with tp and fp?", "response":"If you wish to combine two 10-element one-dimensional arrays into a two-dimensional array, np.vstack((tp, fp)).T will do it.
np.vstack((tp, fp)) will return an array of shape (2, 10), and the T attribute returns the transposed array with shape (10, 2) (i.e., with the two one-dimensional arrays forming columns rather than rows). ``` >>> tp = np.array([0, 1, 2, 3, 4, 5, 6, 7, 8, 9]) >>> tp.ndim 1 >>> tp.shape (10,) >>> fp = np.array([10, 11, 12, 13, 14, 15, 16, 17, 18, 19]) >>> fp.ndim 1 >>> fp.shape (10,) >>> combined = np.vstack((tp, fp)).T >>> combined array([[ 0, 10], [ 1, 11], [ 2, 12], [ 3, 13], [ 4, 14], [ 5, 15], [ 6, 16], [ 7, 17], [ 8, 18], [ 9, 19]]) >>> combined.ndim 2 >>> combined.shape (10, 2) ```", "best_answers_score":0.8, "library_name":"numpy", "question_url":"https:\/\/stackoverflow.com\/questions\/17710672\/create-a-two-dimensional-array-with-two-one-dimensional-arrays", "best_answers_votes":105, "question_length":768, "response_length":718 }, { "question":"only integers, slices (`:`), ellipsis (`...`), numpy.newaxis (`None`) and integer or boolean arrays are valid indices I am implementing fft and when I shuffle the data elements using bit reversal, I get the following error: ```none IndexError: only integers, slices (`:`), ellipsis (`...`), numpy.newaxis (`None`) and integer or boolean arrays are valid indices. ``` My code is: ```py def shuffle_bit_reversed_order(data: np.ndarray) -> np.ndarray: x = data.size n = x \/ 2 y = n * np.mod(x, 2) data[x], data[y] = data[y], data[x] return data ``` I think the problem is my data is of type 'float64' and I may have used it as an integer but I don't know how I can solve it.", "response":"I believe your problem is this: in your while loop, n is divided by 2, but never cast as an integer again, so it becomes a float at some point. 
It is then added onto y, which is then a float too, and that gives you the warning.", "best_answers_score":0.8, "library_name":"numpy", "question_url":"https:\/\/stackoverflow.com\/questions\/34952651\/only-integers-slices-ellipsis-numpy-newaxis-none-and-intege", "best_answers_votes":67, "question_length":671, "response_length":227 }, { "question":"Working with big data in python and numpy, not enough ram, how to save partial results on disc? I am trying to implement algorithms for 1000-dimensional data with 200k+ datapoints in python. I want to use numpy, scipy, sklearn, networkx, and other useful libraries. I want to perform operations such as pairwise distance between all of the points and do clustering on all of the points. I have implemented working algorithms that perform what I want with reasonable complexity but when I try to scale them to all of my data I run out of RAM. Of course, I do, creating the matrix for pairwise distances on 200k+ data takes a lot of memory. Here comes the catch: I would really like to do this on crappy computers with low amounts of RAM. Is there a feasible way for me to make this work without the constraints of low RAM? That it will take a much longer time is really not a problem, as long as the time reqs don't go to infinity! I would like to be able to put my algorithms to work and then come back an hour or five later and not have it stuck because it ran out of RAM! I would like to implement this in python, and be able to use the numpy, scipy, sklearn, and networkx libraries. I would like to be able to calculate the pairwise distance to all my points etc Is this feasible? And how would I go about it, what can I start to read up on?", "response":"Using numpy.memmap you create arrays directly mapped into a file: ``` import numpy a = numpy.memmap('test.mymemmap', dtype='float32', mode='w+', shape=(200000,1000)) # here you will see a 762MB file created in your working directory ``` You can treat it as a conventional array: a += 1000. 
It is even possible to assign more arrays to the same file, controlling it from multiple sources if needed. But I've experienced some tricky things here. To open the full array you have to \"close\" the previous one first, using del: ``` del a b = numpy.memmap('test.mymemmap', dtype='float32', mode='r+', shape=(200000,1000)) ``` But opening only some part of the array makes it possible to achieve simultaneous control: ``` b = numpy.memmap('test.mymemmap', dtype='float32', mode='r+', shape=(2,1000)) b[1,5] = 123456. print a[1,5] #123456.0 ``` Great! a was changed together with b. And the changes are already written on disk. The other important thing worth commenting on is the offset. Suppose you want to take not the first 2 lines in b, but lines 150000 and 150001. ``` b = numpy.memmap('test.mymemmap', dtype='float32', mode='r+', shape=(2,1000), offset=150000*1000*32\/8) b[1,2] = 999999. print a[150001,2] #999999.0 ``` Now you can access and update any part of the array in simultaneous operations. Note the byte size that goes into the offset calculation. So for a 'float64' this example would be 150000*1000*64\/8. Other references: Is it possible to map discontinuous data on disk to an array with python? numpy.memmap documentation here.", "best_answers_score":0.8, "library_name":"numpy", "question_url":"https:\/\/stackoverflow.com\/questions\/16149803\/working-with-big-data-in-python-and-numpy-not-enough-ram-how-to-save-partial-r", "best_answers_votes":102, "question_length":1344, "response_length":1536 }, { "question":"Zero pad numpy array What's the most pythonic way to pad an array with zeros at the end? ``` def pad(A, length): ... A = np.array([1,2,3,4,5]) pad(A, 8) # expected : [1,2,3,4,5,0,0,0] ``` In my real use case, I actually want to pad an array to the closest multiple of 1024.
Ex: 1342 => 2048, 3000 => 3072", "response":"numpy.pad with constant mode does what you need, where we can pass a tuple as second argument to tell how many zeros to pad on each size, a (2, 3) for instance will pad 2 zeros on the left side and 3 zeros on the right side: Given A as: ``` A = np.array([1,2,3,4,5]) np.pad(A, (2, 3), 'constant') # array([0, 0, 1, 2, 3, 4, 5, 0, 0, 0]) ``` It's also possible to pad a 2D numpy arrays by passing a tuple of tuples as padding width, which takes the format of ((top, bottom), (left, right)): ``` A = np.array([[1,2],[3,4]]) np.pad(A, ((1,2),(2,1)), 'constant') #array([[0, 0, 0, 0, 0], # 1 zero padded to the top # [0, 0, 1, 2, 0], # 2 zeros padded to the bottom # [0, 0, 3, 4, 0], # 2 zeros padded to the left # [0, 0, 0, 0, 0], # 1 zero padded to the right # [0, 0, 0, 0, 0]]) ``` For your case, you specify the left side to be zero and right side pad calculated from a modular division: ``` B = np.pad(A, (0, 1024 - len(A)%1024), 'constant') B # array([1, 2, 3, ..., 0, 0, 0]) len(B) # 1024 ``` For a larger A: ``` A = np.ones(3000) B = np.pad(A, (0, 1024 - len(A)%1024), 'constant') B # array([ 1., 1., 1., ..., 0., 0., 0.]) len(B) # 3072 ```", "best_answers_score":0.8, "library_name":"numpy", "question_url":"https:\/\/stackoverflow.com\/questions\/38191855\/zero-pad-numpy-array", "best_answers_votes":126, "question_length":303, "response_length":1144 }, { "question":"'list' object has no attribute 'shape' how to create an array to numpy array? 
``` def test(X, N): [n,T] = X.shape print \"n : \", n print \"T : \", T if __name__==\"__main__\": X = [[[-9.035250067710876], [7.453250169754028], [33.34074878692627]], [[-6.63700008392334], [5.132999956607819], [31.66075038909912]], [[-5.1272499561309814], [8.251499891281128], [30.925999641418457]]] N = 200 test(X, N) ``` I am getting this error: ``` AttributeError: 'list' object has no attribute 'shape' ``` So, I think I need to convert my X to a numpy array?", "response":"Use numpy.array to get the shape attribute. ``` >>> import numpy as np >>> X = np.array([ ... [[-9.035250067710876], [7.453250169754028], [33.34074878692627]], ... [[-6.63700008392334], [5.132999956607819], [31.66075038909912]], ... [[-5.1272499561309814], [8.251499891281128], [30.925999641418457]] ... ]) >>> X.shape (3L, 3L, 1L) ``` NOTE: X.shape returns a 3-item tuple for the given array; [n, T] = X.shape raises ValueError.", "best_answers_score":0.8, "library_name":"numpy", "question_url":"https:\/\/stackoverflow.com\/questions\/21015674\/list-object-has-no-attribute-shape", "best_answers_votes":94, "question_length":533, "response_length":423 }, { "question":"Initialise numpy array of unknown length I want to be able to 'build' a numpy array on the fly; I do not know the size of this array in advance. For example, I want to do something like this: ``` a= np.array() for x in y: a.append(x) ``` Which would result in a containing all the elements of x; obviously this is a trivial answer. I am just curious whether this is possible?", "response":"Build a Python list and convert that to a Numpy array. That takes amortized O(1) time per append + O(n) for the conversion to array, for a total of O(n).
``` a = [] for x in y: a.append(x) a = np.array(a) ```", "best_answers_score":0.8, "library_name":"numpy", "question_url":"https:\/\/stackoverflow.com\/questions\/10121926\/initialise-numpy-array-of-unknown-length", "best_answers_votes":129, "question_length":374, "response_length":208 }, { "question":"How to use numpy.genfromtxt when first column is string and the remaining columns are numbers? Basically, I have a bunch of data where the first column is a string (label) and the remaining columns are numeric values. I run the following: ``` data = numpy.genfromtxt('data.txt', delimiter = ',') ``` This reads most of the data well, but the label column just gets 'nan'. How can I deal with this?", "response":"By default, np.genfromtxt uses dtype=float: that's why you string columns are converted to NaNs because, after all, they're Not A Number... You can ask np.genfromtxt to try to guess the actual type of your columns by using dtype=None: ``` >>> from StringIO import StringIO >>> test = \"a,1,2\\nb,3,4\" >>> a = np.genfromtxt(StringIO(test), delimiter=\",\", dtype=None) >>> print a array([('a',1,2),('b',3,4)], dtype=[('f0', '|S1'),('f1', '>> np.genfromtxt(StringIO(test), delimiter=\",\", dtype=(\"|S10\", int, float)) array([('a', 1, 2.0), ('b', 3, 4.0)], dtype=[('f0', '|S10'), ('f1', '>> a = [[1,2],[10,20],[100,200]] >>> [1,2] in a True >>> [1,20] in a False ``` but Numpy arrays give different and rather odd-looking results. (The __contains__ method of ndarray seems to be undocumented.) 
``` >>> a = np.array([[1,2],[10,20],[100,200]]) >>> np.array([1,2]) in a True >>> np.array([1,20]) in a True >>> np.array([1,42]) in a True >>> np.array([42,1]) in a False ```", "response":"You can use .tolist() ``` >>> a = np.array([[1,2],[10,20],[100,200]]) >>> [1,2] in a.tolist() True >>> [1,20] in a.tolist() False >>> [1,20] in a.tolist() False >>> [1,42] in a.tolist() False >>> [42,1] in a.tolist() False ``` Or use a view: ``` >>> any((a[:]==[1,2]).all(1)) True >>> any((a[:]==[1,20]).all(1)) False ``` Or generate over the numpy list (potentially VERY SLOW): ``` any(([1,2] == x).all() for x in a) # stops on first occurrence ``` Or use numpy logic functions: ``` any(np.equal(a,[1,2]).all(1)) ``` If you time these: ``` import numpy as np import time n=300000 a=np.arange(n*3).reshape(n,3) b=a.tolist() t1,t2,t3=a[n\/\/100][0],a[n\/\/2][0],a[-10][0] tests=[ ('early hit',[t1, t1+1, t1+2]), ('middle hit',[t2,t2+1,t2+2]), ('late hit', [t3,t3+1,t3+2]), ('miss',[0,2,0])] fmt='\\t{:20}{:.5f} seconds and is {}' for test, tgt in tests: print('\\n{}: {} in {:,} elements:'.format(test,tgt,n)) name='view' t1=time.time() result=(a[...]==tgt).all(1).any() t2=time.time() print(fmt.format(name,t2-t1,result)) name='python list' t1=time.time() result = True if tgt in b else False t2=time.time() print(fmt.format(name,t2-t1,result)) name='gen over numpy' t1=time.time() result=any((tgt == x).all() for x in a) t2=time.time() print(fmt.format(name,t2-t1,result)) name='logic equal' t1=time.time() np.equal(a,tgt).all(1).any() t2=time.time() print(fmt.format(name,t2-t1,result)) ``` You can see that hit or miss, the numpy routines are the same speed to search the array. The Python in operator is potentially a lot faster for an early hit, and the generator is just bad news if you have to go all the way through the array. 
Here are the results for 300,000 x 3 element array: ``` early hit: [9000, 9001, 9002] in 300,000 elements: view 0.01002 seconds and is True python list 0.00305 seconds and is True gen over numpy 0.06470 seconds and is True logic equal 0.00909 seconds and is True middle hit: [450000, 450001, 450002] in 300,000 elements: view 0.00915 seconds and is True python list 0.15458 seconds and is True gen over numpy 3.24386 seconds and is True logic equal 0.00937 seconds and is True late hit: [899970, 899971, 899972] in 300,000 elements: view 0.00936 seconds and is True python list 0.30604 seconds and is True gen over numpy 6.47660 seconds and is True logic equal 0.00965 seconds and is True miss: [0, 2, 0] in 300,000 elements: view 0.00936 seconds and is False python list 0.01287 seconds and is False gen over numpy 6.49190 seconds and is False logic equal 0.00965 seconds and is False ``` And for 3,000,000 x 3 array: ``` early hit: [90000, 90001, 90002] in 3,000,000 elements: view 0.10128 seconds and is True python list 0.02982 seconds and is True gen over numpy 0.66057 seconds and is True logic equal 0.09128 seconds and is True middle hit: [4500000, 4500001, 4500002] in 3,000,000 elements: view 0.09331 seconds and is True python list 1.48180 seconds and is True gen over numpy 32.69874 seconds and is True logic equal 0.09438 seconds and is True late hit: [8999970, 8999971, 8999972] in 3,000,000 elements: view 0.09868 seconds and is True python list 3.01236 seconds and is True gen over numpy 65.15087 seconds and is True logic equal 0.09591 seconds and is True miss: [0, 2, 0] in 3,000,000 elements: view 0.09588 seconds and is False python list 0.12904 seconds and is False gen over numpy 64.46789 seconds and is False logic equal 0.09671 seconds and is False ``` Which seems to indicate that np.equal is the fastest pure numpy way to do this...", "best_answers_score":0.8, "library_name":"numpy", 
"question_url":"https:\/\/stackoverflow.com\/questions\/14766194\/testing-whether-a-numpy-array-contains-a-given-row", "best_answers_votes":72, "question_length":891, "response_length":3487 }, { "question":"What is the difference between numpy.fft and scipy.fftpack? Is the later just a synonym of the former, or are they two different implementations of FFT? Which one is better?", "response":"SciPy does more: http:\/\/docs.scipy.org\/doc\/numpy\/reference\/routines.fft.html http:\/\/docs.scipy.org\/doc\/scipy\/reference\/fftpack.html# In addition, SciPy exports some of the NumPy features through its own interface, for example if you execute scipy.fftpack.helper.fftfreq and numpy.fft.helper.fftfreq you're actually running the same code. However, SciPy has its own implementations of much functionality. The source has performance benchmarks that compare the original NumPy and new SciPy versions. My archaic laptop shows something like this: ``` Fast Fourier Transform ================================================= | real input | complex input ------------------------------------------------- size | scipy | numpy | scipy | numpy ------------------------------------------------- 100 | 0.07 | 0.06 | 0.06 | 0.07 (secs for 7000 calls) 1000 | 0.06 | 0.09 | 0.09 | 0.09 (secs for 2000 calls) 256 | 0.11 | 0.11 | 0.12 | 0.11 (secs for 10000 calls) 512 | 0.16 | 0.21 | 0.20 | 0.21 (secs for 10000 calls) 1024 | 0.03 | 0.04 | 0.04 | 0.04 (secs for 1000 calls) 2048 | 0.05 | 0.09 | 0.08 | 0.08 (secs for 1000 calls) 4096 | 0.05 | 0.08 | 0.07 | 0.09 (secs for 500 calls) 8192 | 0.10 | 0.20 | 0.19 | 0.21 (secs for 500 calls) ``` It does seem that SciPy runs significantly faster as the array increases in size, though these are just contrived examples and it would be worth experimenting with both for your particular project. It's worth checking out the source code http:\/\/www.scipy.org\/Download#head-312ad78cdf85a9ca6fa17a266752069d23f785d1 . Yes those .f files really are Fortran! 
:-D", "best_answers_score":0.8, "library_name":"numpy", "question_url":"https:\/\/stackoverflow.com\/questions\/6363154\/what-is-the-difference-between-numpy-fft-and-scipy-fftpack", "best_answers_votes":47, "question_length":173, "response_length":1585 }, { "question":"Is there any way to use pythonappend with SWIG's new builtin feature? I have a little project that works beautifully with SWIG. In particular, some of my functions return std::vectors, which get translated to tuples in Python. Now, I do a lot of numerics, so I just have SWIG convert these to numpy arrays after they're returned from the c++ code. To do this, I use something like the following in SWIG. ``` %feature(\"pythonappend\") My::Cool::Namespace::Data() const %{ if isinstance(val, tuple) : val = numpy.array(val) %} ``` (Actually, there are several functions named Data, some of which return floats, which is why I check that val is actually a tuple.) This works just beautifully. But, I'd also like to use the -builtin flag that's now available. Calls to these Data functions are rare and mostly interactive, so their slowness is not a problem, but there are other slow loops that speed up significantly with the builtin option. The problem is that when I use that flag, the pythonappend feature is silently ignored. Now, Data just returns a tuple again. Is there any way I could still return numpy arrays? I tried using typemaps, but it turned into a giant mess. Edit: Borealid has answered the question very nicely. Just for completeness, I include a couple related but subtly different typemaps that I need because I return by const reference and I use vectors of vectors (don't start!). These are different enough that I wouldn't want anyone else stumbling around trying to figure out the minor differences. 
``` %typemap(out) std::vector& { npy_intp result_size = $1->size(); npy_intp dims[1] = { result_size }; PyArrayObject* npy_arr = (PyArrayObject*)PyArray_SimpleNew(1, dims, NPY_INT); int* dat = (int*) PyArray_DATA(npy_arr); for (size_t i = 0; i >& { npy_intp result_size = $1->size(); npy_intp result_size2 = (result_size>0 ? (*$1)[0].size() : 0); npy_intp dims[2] = { result_size, result_size2 }; PyArrayObject* npy_arr = (PyArrayObject*)PyArray_SimpleNew(2, dims, NPY_INT); int* dat = (int*) PyArray_DATA(npy_arr); for (size_t i = 0; i < result_size; ++i) { for (size_t j = 0; j < result_size2; ++j) { dat[i*result_size2+j] = (*$1)[i][j]; } } $result = PyArray_Return(npy_arr); } ``` Edit 2: Though not quite what I was looking for, similar problems may also be solved using @MONK's approach (explained here).", "response":"I agree with you that using typemap gets a little messy, but it is the right way to accomplish this task. You are also right that the SWIG documentation does not directly say that %pythonappend is incompatible with -builtin, but it is strongly implied: %pythonappend adds to the Python proxy class, and the Python proxy class does not exist at all in conjunction with the -builtin flag. Before, what you were doing was having SWIG convert the C++ std::vector objects into Python tuples, and then passing those tuples back down to numpy - where they were converted again. What you really want to do is convert them once, at the C level. Here's some code which will turn all std::vector objects into NumPy integer arrays: ``` %{ #include \"numpy\/arrayobject.h\" %} %init %{ import_array(); %} %typemap(out) std::vector { npy_intp result_size = $1.size(); npy_intp dims[1] = { result_size }; PyArrayObject* npy_arr = (PyArrayObject*)PyArray_SimpleNew(1, dims, NPY_INT); int* dat = (int*) PyArray_DATA(npy_arr); for (size_t i = 0; i into NumPy arrays with a typemap This code should be placed before you %import the headers which contain the functions returning std::vector. 
Other than that restriction, it's entirely self-contained, so it shouldn't add too much subjective \"mess\" to your codebase. If you need other vector types, you can just change the NPY_INT and all the int* and int bits, otherwise duplicating the function above.", "best_answers_score":0.8, "library_name":"numpy", "question_url":"https:\/\/stackoverflow.com\/questions\/9270052\/is-there-any-way-to-use-pythonappend-with-swigs-new-builtin-feature", "best_answers_votes":8, "question_length":2332, "response_length":1430 }, { "question":"Store numpy.array in cells of a Pandas.DataFrame I have a dataframe in which I would like to store 'raw' numpy.array: ``` df['COL_ARRAY'] = df.apply(lambda r: np.array(do_something_with_r), axis=1) ``` but it seems that pandas tries to 'unpack' the numpy.array. Is there a workaround? Other than using a wrapper (see edit below)? I tried reduce=False with no success. EDIT This works, but I have to use the 'dummy' Data class to wrap around the array, which is unsatisfactory and not very elegant. ``` class Data: def __init__(self, v): self.v = v meas = pd.read_excel(DATA_FILE) meas['DATA'] = meas.apply( lambda r: Data(np.array(pd.read_csv(r['filename'])))), axis=1 ) ```", "response":"Use a wrapper around the numpy array i.e. pass the numpy array as list ``` a = np.array([5, 6, 7, 8]) df = pd.DataFrame({\"a\": [a]}) ``` Output: ``` a 0 [5, 6, 7, 8] ``` Or you can use apply(np.array) by creating the tuples i.e. if you have a dataframe ``` df = pd.DataFrame({'id': [1, 2, 3, 4], 'a': ['on', 'on', 'off', 'off'], 'b': ['on', 'off', 'on', 'off']}) df['new'] = df.apply(lambda r: tuple(r), axis=1).apply(np.array) ``` Output : ``` a b id new 0 on on 1 [on, on, 1] 1 on off 2 [on, off, 2] 2 off on 3 [off, on, 3] 3 off off 4 [off, off, 4] ``` ``` df['new'][0] ``` Output : ``` array(['on', 'on', '1'], dtype=' 400, \"medium\" if the \"consumption_energy\" value is between 200 and 400, and \"low\" if the \"consumption_energy\" value is under 200. 
I try to use np.where from numpy, but I see that numpy.where(condition[, x, y]) treats only two conditions, not three as in my case.", "response":"Try this: Using the setup from @Maxu ``` col = 'consumption_energy' conditions = [ df2[col] >= 400, (df2[col] < 400) & (df2[col] > 200), df2[col] <= 200 ] choices = [ \"high\", 'medium', 'low' ] df2[\"energy_class\"] = np.select(conditions, choices, default=np.nan) consumption_energy energy_class 0 459 high 1 416 high 2 186 low 3 250 medium 4 411 high 5 210 medium 6 343 medium 7 328 medium 8 208 medium 9 223 medium ```", "best_answers_score":0.8, "library_name":"numpy", "question_url":"https:\/\/stackoverflow.com\/questions\/39109045\/numpy-where-with-multiple-conditions", "best_answers_votes":144, "question_length":433, "response_length":398 }, { "question":"How to plot empirical CDF (ECDF) How can I plot the empirical CDF of an array of numbers with Matplotlib in Python? I'm looking for the CDF analog of Pylab\u2019s hist function. One thing I can think of is: ``` from scipy.stats import cumfreq a = array([...]) # my array of numbers num_bins = 20 b = cumfreq(a, num_bins) plt.plot(b) ```", "response":"If you like linspace and prefer one-liners, you can do: ``` plt.plot(np.sort(a), np.linspace(0, 1, len(a), endpoint=False)) ``` Given my tastes, I almost always do: ``` # a is the data array x = np.sort(a) y = np.arange(len(x))\/float(len(x)) plt.plot(x, y) ``` Which works for me even if there are >O(1e6) data values. If you really need to downsample I'd set ``` x = np.sort(a)[::down_sampling_step] ``` Edit to respond to comment\/edit on why I use endpoint=False or the y as defined above. The following are some technical details. The empirical CDF is usually formally defined as ``` CDF(x) = \"number of samples <= x\"\/\"number of samples\" ``` in order to exactly match this formal definition you would need to use y = np.arange(1,len(x)+1)\/float(len(x)) so that we get y = [1\/N, 2\/N ... 1].
This estimator is an unbiased estimator that will converge to the true CDF in the limit of infinite samples Wikipedia ref.. I tend to use y = [0, 1\/N, 2\/N ... (N-1)\/N] since: (a) it is easier to code\/more idiomatic, (b) but is still formally justified since one can always exchange CDF(x) with 1-CDF(x) in the convergence proof, and (c) works with the (easy) downsampling method described above. In some particular cases, it is useful to define ``` y = (arange(len(x))+0.5)\/len(x) ``` which is intermediate between these two conventions. Which, in effect, says \"there is a 1\/(2N) chance of a value less than the lowest one I've seen in my sample, and a 1\/(2N) chance of a value greater than the largest one I've seen so far. Note that the selection of this convention interacts with the where parameter used in the plt.step if it seems more useful to display the CDF as a piecewise constant function. In order to exactly match the formal definition mentioned above, one would need to use where=pre the suggested y=[0,1\/N..., 1-1\/N] convention, or where=post with the y=[1\/N, 2\/N ... 1] convention, but not the other way around. However, for large samples, and reasonable distributions, the convention is given in the main body of the answer is easy to write, is an unbiased estimator of the true CDF, and works with the downsampling methodology.", "best_answers_score":0.8, "library_name":"numpy", "question_url":"https:\/\/stackoverflow.com\/questions\/3209362\/how-to-plot-empirical-cdf-ecdf", "best_answers_votes":134, "question_length":331, "response_length":2138 }, { "question":"How to understand numpy strides for layman? I am currently going through numpy and there is a topic in numpy called \"strides\". I understand what it is. But how does it work? I did not find any useful information online. Can anyone let me understand in a layman's terms?", "response":"The actual data of a numpy array is stored in a homogeneous and contiguous block of memory called data buffer. 
For more information see NumPy internals. Using the (default) row-major order, a 2D array looks like this: To map the indices i,j,k,... of a multidimensional array to the positions in the data buffer (the offset, in bytes), NumPy uses the notion of strides. Strides are the number of bytes to jump-over in the memory in order to get from one item to the next item along each direction\/dimension of the array. In other words, it's the byte-separation between consecutive items for each dimension. For example: ``` >>> a = np.arange(1,10).reshape(3,3) >>> a array([[1, 2, 3], [4, 5, 6], [7, 8, 9]]) ``` This 2D array has two directions, axes-0 (running vertically downwards across rows), and axis-1 (running horizontally across columns), with each item having size: ``` >>> a.itemsize # in bytes 4 ``` So to go from a[0, 0] -> a[0, 1] (moving horizontally along the 0th row, from the 0th column to the 1st column) the byte-step in the data buffer is 4. Same for a[0, 1] -> a[0, 2], a[1, 0] -> a[1, 1] etc. This means that the number of strides for the horizontal direction (axis-1) is 4 bytes. However, to go from a[0, 0] -> a[1, 0] (moving vertically along the 0th column, from the 0th row to the 1st row), you need first to traverse all the remaining items on the 0th row to get to the 1st row, and then move through the 1st row to get to the item a[1, 0], i.e. a[0, 0] -> a[0, 1] -> a[0, 2] -> a[1, 0]. Therefore the number of strides for the vertical direction (axis-0) is 3*4 = 12 bytes. Note that going from a[0, 2] -> a[1, 0], and in general from the last item of the i-th row to the first item of the (i+1)-th row, is also 4 bytes because the array a is stored in the row-major order. That's why ``` >>> a.strides # (strides[0], strides[1]) (12, 4) ``` Here's another example showing that the strides in the horizontal direction (axis-1), strides[1], of a 2D array is not necessary equal to the item size (e.g. 
an array with column-major order): ``` >>> b = np.array([[1, 4, 7], [2, 5, 8], [3, 6, 9]]).T >>> b array([[1, 2, 3], [4, 5, 6], [7, 8, 9]]) >>> b.strides (4, 12) ``` Here strides[1] is a multiple of the item-size. Although the array b looks identical to the array a, it's a different array: internally b is stored as |1|4|7|2|5|8|3|6|9| (because transposing doesn't affect the data buffer but only swaps the strides and the shape), whereas a is stored as |1|2|3|4|5|6|7|8|9|. What makes them look alike is the different strides. That is, the byte-step for b[0, 0] -> b[0, 1] is 3*4=12 bytes and for b[0, 0] -> b[1, 0] is 4 bytes, whereas for a[0, 0] -> a[0, 1] is 4 bytes and for a[0, 0] -> a[1, 0] is 12 bytes. Last but not least, NumPy allows you to create views of existing arrays with the option of modifying the strides and the shape, see stride tricks. For example: ``` >>> np.lib.stride_tricks.as_strided(a, shape=a.shape[::-1], strides=a.strides[::-1]) array([[1, 4, 7], [2, 5, 8], [3, 6, 9]]) ``` which is equivalent to transposing the array a. Let me just add, but without going into much detail, that one can even define strides that are not multiples of the item size. Here's an example: ``` >>> a = np.lib.stride_tricks.as_strided(np.array([1, 512, 0, 3], dtype=np.int16), shape=(3,), strides=(3,)) >>> a array([1, 2, 3], dtype=int16) >>> a.strides[0] 3 >>> a.itemsize 2 ```", "best_answers_score":0.8, "library_name":"numpy", "question_url":"https:\/\/stackoverflow.com\/questions\/53097952\/how-to-understand-numpy-strides-for-layman", "best_answers_votes":134, "question_length":269, "response_length":3401 }, { "question":"How to add items into a numpy array I need to accomplish the following task: from: ``` a = array([[1,3,4],[1,2,3]...[1,2,1]]) ``` (add one element to each row) to: ``` a = array([[1,3,4,x],[1,2,3,x]...[1,2,1,x]]) ``` I have tried doing stuff like a[n] = array([1,3,4,x]) but numpy complained of shape mismatch.
I tried iterating through a and appending element x to each item, but the changes are not reflected. Any ideas on how I can accomplish this?", "response":"Appending data to an existing array is a natural thing to want to do for anyone with python experience. However, if you find yourself regularly appending to large arrays, you'll quickly discover that NumPy doesn't easily or efficiently do this the way a python list will. You'll find that every \"append\" action requires re-allocation of the array memory and short-term doubling of memory requirements. So, the more general solution to the problem is to try to allocate arrays to be as large as the final output of your algorithm. Then perform all your operations on sub-sets (slices) of that array. Array creation and destruction should ideally be minimized. That said, it's often unavoidable and the functions that do this are: for 2-D arrays: np.hstack np.vstack np.column_stack np.row_stack for 3-D arrays (the above plus): np.dstack for N-D arrays: np.concatenate", "best_answers_score":0.8, "library_name":"numpy", "question_url":"https:\/\/stackoverflow.com\/questions\/5064822\/how-to-add-items-into-a-numpy-array", "best_answers_votes":134, "question_length":451, "response_length":867 }, { "question":"check how many elements are equal in two numpy arrays python I have two numpy arrays with numbers (same length), and I want to count how many elements are equal between those two arrays (equal = same value and position in array) ``` A = [1, 2, 3, 4] B = [1, 2, 4, 3] ``` then I want the return value to be 2 (just 1&2 are equal in position and value)", "response":"Using numpy.sum: ``` >>> import numpy as np >>> a = np.array([1, 2, 3, 4]) >>> b = np.array([1, 2, 4, 3]) >>> np.sum(a == b) 2 >>> (a == b).sum() 2 ```", "best_answers_score":0.8, "library_name":"numpy", "question_url":"https:\/\/stackoverflow.com\/questions\/25490641\/check-how-many-elements-are-equal-in-two-numpy-arrays-python", "best_answers_votes":124,
"question_length":348, "response_length":151 }, { "question":"Why do we call .detach() before calling .numpy() on a Pytorch Tensor? It has been firmly established that my_tensor.detach().numpy() is the correct way to get a numpy array from a torch tensor. I'm trying to get a better understanding of why. I have studied the internal workings of PyTorch's autodifferentiation library, and I'm still confused by these answers. Why does it break the graph to move to numpy? Is it because any operations on the numpy array will not be tracked in the autodiff graph? I feel that a thorough high-quality Stack-Overflow answer that explains the reason for this to new users of PyTorch who don't yet understand autodifferentiation is called for here. In particular, I think it would be helpful to illustrate the graph through a figure and show how this code could be problematic if it didn't throw an error: ``` import torch tensor1 = torch.tensor([1.0,2.0],requires_grad=True) print(tensor1) print(type(tensor1)) tensor1 = tensor1.numpy() print(tensor1) print(type(tensor1)) ```", "response":"I think the most crucial point to understand here is the difference between a torch.tensor and np.ndarray: While both objects are used to store n-dimensional matrices (aka \"Tensors\"), torch.tensors have an additional \"layer\", which stores the computational graph leading to the associated n-dimensional matrix. So, if you are only interested in an efficient and easy way to perform mathematical operations on matrices, np.ndarray or torch.tensor can be used interchangeably. However, torch.tensors are designed to be used in the context of gradient descent optimization, and therefore they hold not only a tensor with numeric values, but (and more importantly) the computational graph leading to these values. This computational graph is then used (using the chain rule of derivatives) to compute the derivative of the loss function w.r.t each of the independent variables used to compute the loss.
As mentioned before, an np.ndarray object does not have this extra \"computational graph\" layer and therefore, when converting a torch.tensor to np.ndarray you must explicitly remove the computational graph of the tensor using the detach() command. Computational Graph From your comments it seems like this concept is a bit vague. I'll try and illustrate it with a simple example. Consider a simple function of two (vector) variables, x and w: ```py x = torch.rand(4, requires_grad=True) w = torch.rand(4, requires_grad=True) y = x @ w # inner-product of x and w z = y ** 2 # square the inner product ``` If we are only interested in the value of z, we need not worry about any graphs, we simply move forward from the inputs, x and w, to compute y and then z. However, what would happen if we do not care so much about the value of z, but rather want to ask the question \"what is the w that minimizes z for a given x\"? To answer that question, we need to compute the derivative of z w.r.t w. How can we do that? Using the chain rule we know that dz\/dw = dz\/dy * dy\/dw. That is, to compute the gradient of z w.r.t w we need to move backward from z back to w computing the gradient of the operation at each step as we trace back our steps from z to w. This \"path\" we trace back is the computational graph of z and it tells us how to compute the derivative of z w.r.t the inputs leading to z: ```py z.backward() # ask pytorch to trace back the computation of z ``` We can now inspect the gradient of z w.r.t w: ``` w.grad # the resulting gradient of z w.r.t w tensor([0.8010, 1.9746, 1.5904, 1.0408]) ``` Note that this is exactly equal to ``` 2*y*x tensor([0.8010, 1.9746, 1.5904, 1.0408], grad_fn=<MulBackward0>) ``` since dz\/dy = 2*y and dy\/dw = x.
Each tensor along the path stores its \"contribution\" to the computation: ``` z tensor(1.4061, grad_fn=<PowBackward0>) ``` And ``` y tensor(1.1858, grad_fn=<DotBackward0>) ``` As you can see, y and z store not only the \"forward\" values of x @ w and y ** 2 but also the computational graph -- the grad_fn that is needed to compute the derivatives (using the chain rule) when tracing back the gradients from z (output) to w (inputs). These grad_fn are essential components of torch.tensors and without them one cannot compute derivatives of complicated functions. However, np.ndarrays do not have this capability at all and they do not have this information. Please see this answer for more information on tracing back the derivative using the backward() function. Since both np.ndarray and torch.tensor have a common \"layer\" storing an n-d array of numbers, pytorch uses the same storage to save memory: numpy() \u2192 numpy.ndarray Returns self tensor as a NumPy ndarray. This tensor and the returned ndarray share the same underlying storage. Changes to self tensor will be reflected in the ndarray and vice versa. The other direction works in the same way as well: torch.from_numpy(ndarray) \u2192 Tensor Creates a Tensor from a numpy.ndarray. The returned tensor and ndarray share the same memory. Modifications to the tensor will be reflected in the ndarray and vice versa. Thus, when creating an np.array from torch.tensor or vice versa, both objects reference the same underlying storage in memory. Since np.ndarray does not store\/represent the computational graph associated with the array, this graph should be explicitly removed using detach() when both numpy and torch wish to reference the same tensor. Note that if you wish, for some reason, to use pytorch only for mathematical operations without back-propagation, you can use the torch.no_grad() context manager, in which case computational graphs are not created and torch.tensors and np.ndarrays can be used interchangeably.
```py with torch.no_grad(): x_t = torch.rand(3,4) y_np = np.ones((4, 2), dtype=np.float32) x_t @ torch.from_numpy(y_np) # dot product in torch np.dot(x_t.numpy(), y_np) # the same dot product in numpy ```", "best_answers_score":0.8, "library_name":"numpy", "question_url":"https:\/\/stackoverflow.com\/questions\/63582590\/why-do-we-call-detach-before-calling-numpy-on-a-pytorch-tensor", "best_answers_votes":138, "question_length":1012, "response_length":4780 }, { "question":"How can I solve error \"module 'numpy' has no attribute 'float'\" in Python? I am using NumPy 1.24.0. On running this sample code line, ```py import numpy as np num = np.float(3) ``` I am getting this error: ```none Traceback (most recent call last): File \"\", line 1, in File \"\/home\/ubuntu\/.local\/lib\/python3.8\/site-packages\/numpy\/__init__.py\", line 284, in __getattr__ raise AttributeError(\"module {!r} has no attribute \" AttributeError: module 'numpy' has no attribute 'float' ``` How can I fix it?", "response":"The answer is already provided in the comments by @mattdmo and @tdelaney: NumPy 1.20 (release notes) deprecated numpy.float, numpy.int, and similar aliases, causing them to issue a deprecation warning NumPy 1.24 (release notes) removed these aliases altogether, causing an error when they are used In many cases you can simply replace the deprecated NumPy types by the equivalent Python built-in type, e.g., numpy.float becomes a \"plain\" Python float. For detailed guidelines on how to deal with various deprecated types, have a closer look at the table and guideline in the release notes for 1.20: ... To give a clear guideline for the vast majority of cases, for the types bool, object, str (and unicode) using the plain version is shorter and clear, and generally a good replacement. For float and complex you can use float64 and complex128 if you wish to be more explicit about the precision. 
For np.int a direct replacement with np.int_ or int is also good and will not change behavior, but the precision will continue to depend on the computer and operating system. If you want to be more explicit and review the current use, you have the following alternatives: np.int64 or np.int32 to specify the precision exactly. This ensures that results cannot depend on the computer or operating system. np.int_ or int (the default), but be aware that it depends on the computer and operating system. The C types: np.cint (int), np.int_ (long), np.longlong. np.intp which is 32bit on 32bit machines, 64bit on 64bit machines. This can be the best type to use for indexing. ... If you have dependencies that use the deprecated types, a quick workaround would be to roll back your NumPy version to below 1.24 (as suggested in some of the other answers), while waiting for the dependency to catch up. Alternatively, you could create a patch yourself and open a pull request, or monkey patch the dependency in your own code.
Armed with the form of the solution, I just tried things until I found the formula that works. You should be able to break your array into \"blocks\" using some combination of reshape and swapaxes: ```py def blockshaped(arr, nrows, ncols): \"\"\" Return an array of shape (n, nrows, ncols) where n * nrows * ncols = arr.size If arr is a 2D array, the returned array should look like n subblocks with each subblock preserving the \"physical\" layout of arr. \"\"\" h, w = arr.shape assert h % nrows == 0, f\"{h} rows is not evenly divisible by {nrows}\" assert w % ncols == 0, f\"{w} cols is not evenly divisible by {ncols}\" return (arr.reshape(h\/\/nrows, nrows, -1, ncols) .swapaxes(1,2) .reshape(-1, nrows, ncols)) ``` turns c ```py np.random.seed(365) c = np.arange(24).reshape((4, 6)) print(c) [out]: [[ 0 1 2 3 4 5] [ 6 7 8 9 10 11] [12 13 14 15 16 17] [18 19 20 21 22 23]] ``` into ```py print(blockshaped(c, 2, 3)) [out]: [[[ 0 1 2] [ 6 7 8]] [[ 3 4 5] [ 9 10 11]] [[12 13 14] [18 19 20]] [[15 16 17] [21 22 23]]] ``` I've posted an inverse function, unblockshaped, here, and an N-dimensional generalization here. The generalization gives a little more insight into the reasoning behind this algorithm. Note that there is also superbatfish's blockwise_view. It arranges the blocks in a different format (using more axes) but it has the advantage of (1) always returning a view and (2) being capable of handling arrays of any dimension.", "best_answers_score":0.8, "library_name":"numpy", "question_url":"https:\/\/stackoverflow.com\/questions\/16856788\/slice-2d-array-into-smaller-2d-arrays", "best_answers_votes":110, "question_length":289, "response_length":1775 }, { "question":"list memory usage in ipython and jupyter I have a few (almost ten) Gb of memory taken by the ipython kernel. I think this is coming from large objects (matrices, lists, numpy arrays, ...) that I might have produced during some operation and now I do not need anymore. 
I would like to list all of the objects I have defined and sort them by their memory footprint. Is there a simple way to do that? For certain types there is the nbytes attribute, but not for all ... so I am looking for a general way to list all objects I have made and their memory occupation.", "response":"Assuming that you are using ipython or jupyter, you will need to do a little bit of work to get a list of all the objects you have defined. That means taking everything available in globals() and filtering out objects that are modules, builtins, ipython objects, etc. Once you are sure you have those objects, then you can proceed to grabbing their sizes with sys.getsizeof. This can be summed up as follows: ``` import sys # These are the usual ipython objects, including this one you are creating ipython_vars = ['In', 'Out', 'exit', 'quit', 'get_ipython', 'ipython_vars'] # Get a sorted list of the objects and their sizes sorted([(x, sys.getsizeof(globals().get(x))) for x in dir() if not x.startswith('_') and x not in sys.modules and x not in ipython_vars], key=lambda x: x[1], reverse=True) ``` Please keep in mind that for python objects (those created with python's builtin functions), sys.getsizeof will be very accurate. But it can be a bit inaccurate on objects created using third-party libraries. Furthermore, please be mindful that sys.getsizeof adds an additional garbage collector overhead if the object is managed by the garbage collector. So, some things may look a bit heavier than they actually are. As a side note, numpy's .nbytes attribute can be somewhat misleading in that it does not include memory consumed by non-element attributes of the array object.", "best_answers_score":0.8, "library_name":"numpy", "question_url":"https:\/\/stackoverflow.com\/questions\/40993626\/list-memory-usage-in-ipython-and-jupyter", "best_answers_votes":116, "question_length":554, "response_length":1377 }, { "question":"Iterating over Numpy matrix rows to apply a function each?
I want to be able to iterate over the matrix to apply a function to each row. How can I do it for a Numpy matrix?", "response":"You can use numpy.apply_along_axis(). Assuming that your array is 2D, you can use it like: ``` import numpy as np myarray = np.array([[11, 12, 13], [21, 22, 23], [31, 32, 33]]) def myfunction(x): return x[0] + x[1]**2 + x[2]**3 print(np.apply_along_axis(myfunction, axis=1, arr=myarray)) #[ 2352 12672 36992] ```", "best_answers_score":0.8, "library_name":"numpy", "question_url":"https:\/\/stackoverflow.com\/questions\/16468717\/iterating-over-numpy-matrix-rows-to-apply-a-function-each", "best_answers_votes":93, "question_length":173, "response_length":312 }, { "question":"string representation of a numpy array with commas separating its elements I have a numpy array, for example: ``` points = np.array([[-468.927, -11.299, 76.271, -536.723], [-429.379, -694.915, -214.689, 745.763], [ 0., 0., 0., 0. ]]) ``` if I print it or turn it into a string with str() I get: ``` print w_points [[-468.927 -11.299 76.271 -536.723] [-429.379 -694.915 -214.689 745.763] [ 0. 0. 0. 0. ]] ``` I need to turn it into a string that prints with separating commas while keeping the 2D array structure, that is: ``` [[-468.927, -11.299, 76.271, -536.723], [-429.379, -694.915, -214.689, 745.763], [ 0., 0., 0., 0. ]] ``` Does anybody know an easy way of turning a numpy array into that form of string? I know that .tolist() adds the commas but the result loses the 2D structure.", "response":"Try using repr ``` >>> import numpy as np >>> points = np.array([[-468.927, -11.299, 76.271, -536.723], ... [-429.379, -694.915, -214.689, 745.763], ... [ 0., 0., 0., 0. ]]) >>> print(repr(points)) array([[-468.927, -11.299, 76.271, -536.723], [-429.379, -694.915, -214.689, 745.763], [ 0. , 0. , 0. , 0. ]]) ``` If you plan on using large numpy arrays, set np.set_printoptions(threshold=sys.maxsize) first (after import sys; recent NumPy versions no longer accept threshold=np.nan).
Without it, the array representation will be truncated after about 1000 entries (by default). ``` >>> arr = np.arange(1001) >>> print(repr(arr)) array([ 0, 1, 2, ..., 998, 999, 1000]) ``` Of course, if you have arrays that large, this starts to become less useful and you should probably analyze the data some way other than just looking at it, and there are better ways of persisting a numpy array than saving its repr to a file...", "best_answers_score":0.8, "library_name":"numpy", "question_url":"https:\/\/stackoverflow.com\/questions\/16423774\/string-representation-of-a-numpy-array-with-commas-separating-its-elements", "best_answers_votes":104, "question_length":786, "response_length":835 }, { "question":"convert a grayscale image to a 3-channel image [duplicate] This question already has answers here: how to copy numpy array value into higher dimensions (7 answers) Closed 8 years ago. I want to convert a gray-scale image with shape (height,width) to a 3-channel image with shape (height,width,nchannels). The work is done with a for-loop, but there must be a neat way. Here is a piece of code from my program; can someone give a hint? Please advise.
``` if img.shape == (height,width): # if img is grayscale, expand print \"convert 1-channel image to \", nchannels, \" image.\" new_img = np.zeros((height,width,nchannels)) for ch in range(nchannels): for xx in range(height): for yy in range(width): new_img[xx,yy,ch] = img[xx,yy] img = new_img ```", "response":"You can use np.stack to accomplish this much more concisely: ``` img = np.array([[1, 2], [3, 4]]) stacked_img = np.stack((img,)*3, axis=-1) print(stacked_img) # array([[[1, 1, 1], # [2, 2, 2]], # [[3, 3, 3], # [4, 4, 4]]]) ```", "best_answers_score":0.8, "library_name":"numpy", "question_url":"https:\/\/stackoverflow.com\/questions\/40119743\/convert-a-grayscale-image-to-a-3-channel-image", "best_answers_votes":138, "question_length":761, "response_length":226 }, { "question":"Why do \"Not a Number\" values equal True when cast as boolean in Python\/Numpy? When casting a NumPy Not-a-Number value as a boolean, it becomes True, e.g. as follows. ``` >>> import numpy as np >>> bool(np.nan) True ``` This is the exact opposite to what I would intuitively expect. Is there a sound principle underlying this behaviour? (I suspect there might be as the same behaviour seems to occur in Octave.)", "response":"This is in no way NumPy-specific, but is consistent with how Python treats NaNs: ``` In [1]: bool(float('nan')) Out[1]: True ``` The rules are spelled out in the documentation. I think it could be reasonably argued that the truth value of NaN should be False. However, this is not how the language works right now.", "best_answers_score":0.8, "library_name":"numpy", "question_url":"https:\/\/stackoverflow.com\/questions\/15686318\/why-do-not-a-number-values-equal-true-when-cast-as-boolean-in-python-numpy", "best_answers_votes":47, "question_length":410, "response_length":314 }, { "question":"How to extract an arbitrary line of values from a numpy array? I have a numpy array that contains some image data.
I would like to plot the 'profile' of a transect drawn across the image. The simplest case is a profile running parallel to the edge of the image, so if the image array is imdat, then the profile at a selected point (r,c) is simply imdat[r] (horizontal) or imdat[:,c] (vertical). Now, I want to take as input two points (r1,c1) and (r2,c2), both lying inside imdat. I would like to plot the profile of the values along the line connecting these two points. What is the best way to get values from a numpy array, along such a line? More generally, along a path\/polygon? I have used slicing and indexing before, but I can't seem to arrive at an elegant solution for such a case where consecutive slice elements are not in the same row or column. Thanks for your help.", "response":"@Sven's answer is the easy way, but it's rather inefficient for large arrays. If you're dealing with a relatively small array, you won't notice the difference; if you're wanting a profile from a large array (e.g. >50 MB) you may want to try a couple of other approaches. You'll need to work in \"pixel\" coordinates for these, though, so there's an extra layer of complexity. There are two more memory-efficient ways. 1) use scipy.ndimage.map_coordinates if you need bilinear or cubic interpolation. 2) if you just want nearest neighbor sampling, then just use indexing directly. As an example of the first: ``` import numpy as np import scipy.ndimage import matplotlib.pyplot as plt #-- Generate some data... x, y = np.mgrid[-5:5:0.1, -5:5:0.1] z = np.sqrt(x**2 + y**2) + np.sin(x**2 + y**2) #-- Extract the line... # Make a line with \"num\" points... x0, y0 = 5, 4.5 # These are in _pixel_ coordinates!! x1, y1 = 60, 75 num = 1000 x, y = np.linspace(x0, x1, num), np.linspace(y0, y1, num) # Extract the values along the line, using cubic interpolation zi = scipy.ndimage.map_coordinates(z, np.vstack((x,y))) #-- Plot...
fig, axes = plt.subplots(nrows=2) axes[0].imshow(z) axes[0].plot([x0, x1], [y0, y1], 'ro-') axes[0].axis('image') axes[1].plot(zi) plt.show() ``` The equivalent using nearest-neighbor interpolation would look something like this: ``` import numpy as np import matplotlib.pyplot as plt #-- Generate some data... x, y = np.mgrid[-5:5:0.1, -5:5:0.1] z = np.sqrt(x**2 + y**2) + np.sin(x**2 + y**2) #-- Extract the line... # Make a line with \"num\" points... x0, y0 = 5, 4.5 # These are in _pixel_ coordinates!! x1, y1 = 60, 75 num = 1000 x, y = np.linspace(x0, x1, num), np.linspace(y0, y1, num) # Extract the values along the line zi = z[x.astype(int), y.astype(int)] #-- Plot... fig, axes = plt.subplots(nrows=2) axes[0].imshow(z) axes[0].plot([x0, x1], [y0, y1], 'ro-') axes[0].axis('image') axes[1].plot(zi) plt.show() ``` However, if you're using nearest-neighbor, you probably would only want samples at each pixel, so you'd probably do something more like this, instead... ``` import numpy as np import matplotlib.pyplot as plt #-- Generate some data... x, y = np.mgrid[-5:5:0.1, -5:5:0.1] z = np.sqrt(x**2 + y**2) + np.sin(x**2 + y**2) #-- Extract the line... # Make a line with \"num\" points... x0, y0 = 5, 4.5 # These are in _pixel_ coordinates!! x1, y1 = 60, 75 length = int(np.hypot(x1-x0, y1-y0)) x, y = np.linspace(x0, x1, length), np.linspace(y0, y1, length) # Extract the values along the line zi = z[x.astype(int), y.astype(int)] #-- Plot... fig, axes = plt.subplots(nrows=2) axes[0].imshow(z) axes[0].plot([x0, x1], [y0, y1], 'ro-') axes[0].axis('image') axes[1].plot(zi) plt.show() ```", "best_answers_score":0.8, "library_name":"numpy", "question_url":"https:\/\/stackoverflow.com\/questions\/7878398\/how-to-extract-an-arbitrary-line-of-values-from-a-numpy-array", "best_answers_votes":103, "question_length":875, "response_length":2723 }, { "question":"Is shared readonly data copied to different processes for multiprocessing?
The piece of code that I have looks somewhat like this: ``` glbl_array = # a 3 Gb array def my_func( args, def_param = glbl_array): #do stuff on args and def_param if __name__ == '__main__': pool = Pool(processes=4) pool.map(my_func, range(1000)) ``` Is there a way to make sure (or encourage) that the different processes do not get a copy of glbl_array but share it? If there is no way to stop the copy I will go with a memmapped array, but my access patterns are not very regular, so I expect memmapped arrays to be slower. The above seemed like the first thing to try. This is on Linux. I just wanted some advice from Stackoverflow and do not want to annoy the sysadmin. Do you think it will help if the second parameter is a genuine immutable object like glbl_array.tostring()?", "response":"You can use the shared memory stuff from multiprocessing together with Numpy fairly easily: ``` import multiprocessing import ctypes import numpy as np shared_array_base = multiprocessing.Array(ctypes.c_double, 10*10) shared_array = np.ctypeslib.as_array(shared_array_base.get_obj()) shared_array = shared_array.reshape(10, 10) #-- edited 2015-05-01: the assert check below checks the wrong thing # with recent versions of Numpy\/multiprocessing. That no copy is made # is indicated by the fact that the program prints the output shown below. ## No copy was made ##assert shared_array.base.base is shared_array_base.get_obj() # Parallel processing def my_func(i, def_param=shared_array): shared_array[i,:] = i if __name__ == '__main__': pool = multiprocessing.Pool(processes=4) pool.map(my_func, range(10)) print shared_array ``` which prints ``` [[ 0. 0. 0. 0. 0. 0. 0. 0. 0. 0.] [ 1. 1. 1. 1. 1. 1. 1. 1. 1. 1.] [ 2. 2. 2. 2. 2. 2. 2. 2. 2. 2.] [ 3. 3. 3. 3. 3. 3. 3. 3. 3. 3.] [ 4. 4. 4. 4. 4. 4. 4. 4. 4. 4.] [ 5. 5. 5. 5. 5. 5. 5. 5. 5. 5.] [ 6. 6. 6. 6. 6. 6. 6. 6. 6. 6.] [ 7. 7. 7. 7. 7. 7. 7. 7. 7. 7.] [ 8. 8. 8. 8. 8. 8. 8. 8. 8. 8.] [ 9. 9. 9. 9. 9. 9. 9. 9. 9.
9.]] ``` However, Linux has copy-on-write semantics on fork(), so even without using multiprocessing.Array, the data will not be copied unless it is written to.", "best_answers_score":0.8, "library_name":"numpy", "question_url":"https:\/\/stackoverflow.com\/questions\/5549190\/is-shared-readonly-data-copied-to-different-processes-for-multiprocessing", "best_answers_votes":134, "question_length":865, "response_length":1333 }, { "question":"Calculate weighted average using a pandas\/dataframe I have the following table. I want to calculate a weighted average grouped by each date based on the formula below. I can do this using some standard conventional code, but assuming that this data is in a pandas dataframe, is there any easier way to achieve this rather than through iteration? ``` Date ID wt value w_avg 01\/01\/2012 100 0.50 60 0.791666667 01\/01\/2012 101 0.75 80 01\/01\/2012 102 1.00 100 01\/02\/2012 201 0.50 100 0.722222222 01\/02\/2012 202 1.00 80 ``` 01\/01\/2012 w_avg = 0.5 * ( 60\/ sum(60,80,100)) + .75 * (80\/ sum(60,80,100)) + 1.0 * (100\/sum(60,80,100)) 01\/02\/2012 w_avg = 0.5 * ( 100\/ sum(100,80)) + 1.0 * ( 80\/ sum(100,80))", "response":"Let's first create the example pandas dataframe: ``` In [1]: import numpy as np In [2]: import pandas as pd In [3]: index = pd.Index(['01\/01\/2012','01\/01\/2012','01\/01\/2012','01\/02\/2012','01\/02\/2012'], name='Date') In [4]: df = pd.DataFrame({'ID':[100,101,102,201,202],'wt':[.5,.75,1,.5,1],'value':[60,80,100,100,80]},index=index) ``` Then, the average of 'wt' weighted by 'value' and grouped by the index is obtained as: ``` In [5]: df.groupby(df.index).apply(lambda x: np.average(x.wt, weights=x.value)) Out[5]: Date 01\/01\/2012 0.791667 01\/02\/2012 0.722222 dtype: float64 ``` Alternatively, one can also define a function: ``` In [5]: def grouped_weighted_avg(values, weights, by): ...: return (values * weights).groupby(by).sum() \/ weights.groupby(by).sum() In [6]: grouped_weighted_avg(values=df.wt, 
weights=df.value, by=df.index) Out[6]: Date 01\/01\/2012 0.791667 01\/02\/2012 0.722222 dtype: float64 ```", "best_answers_score":0.8, "library_name":"numpy", "question_url":"https:\/\/stackoverflow.com\/questions\/26205922\/calculate-weighted-average-using-a-pandas-dataframe", "best_answers_votes":68, "question_length":694, "response_length":905 }, { "question":"numpy divide row by row sum How can I divide a numpy array row by the sum of all values in this row? This is one example. But I'm pretty sure there is a fancy and much more efficient way of doing this: ``` import numpy as np e = np.array([[0., 1.],[2., 4.],[1., 5.]]) for row in xrange(e.shape[0]): e[row] \/= np.sum(e[row]) ``` Result: ``` array([[ 0. , 1. ], [ 0.33333333, 0.66666667], [ 0.16666667, 0.83333333]]) ```", "response":"Method #1: use None (or np.newaxis) to add an extra dimension so that broadcasting will behave: ``` >>> e array([[ 0., 1.], [ 2., 4.], [ 1., 5.]]) >>> e\/e.sum(axis=1)[:,None] array([[ 0. , 1. ], [ 0.33333333, 0.66666667], [ 0.16666667, 0.83333333]]) ``` Method #2: go transpose-happy: ``` >>> (e.T\/e.sum(axis=1)).T array([[ 0. , 1. ], [ 0.33333333, 0.66666667], [ 0.16666667, 0.83333333]]) ``` (You can drop the axis= part for conciseness, if you want.) Method #3: (promoted from Jaime's comment) Use the keepdims argument on sum to preserve the dimension: ``` >>> e\/e.sum(axis=1, keepdims=True) array([[ 0. , 1. ], [ 0.33333333, 0.66666667], [ 0.16666667, 0.83333333]]) ```", "best_answers_score":0.8, "library_name":"numpy", "question_url":"https:\/\/stackoverflow.com\/questions\/16202348\/numpy-divide-row-by-row-sum", "best_answers_votes":133, "question_length":418, "response_length":674 }, { "question":"Get the mean across multiple Pandas DataFrames I'm generating a number of dataframes with the same shape, and I want to compare them to one another. I want to be able to get the mean and median across the dataframes. 
``` Source.0 Source.1 Source.2 Source.3 cluster 0 0.001182 0.184535 0.814230 0.000054 1 0.000001 0.160490 0.839508 0.000001 2 0.000001 0.173829 0.826114 0.000055 3 0.000432 0.180065 0.819502 0.000001 4 0.000152 0.157041 0.842694 0.000113 5 0.000183 0.174142 0.825674 0.000001 6 0.000001 0.151556 0.848405 0.000038 7 0.000771 0.177583 0.821645 0.000001 8 0.000001 0.202059 0.797939 0.000001 9 0.000025 0.189537 0.810410 0.000028 10 0.006142 0.003041 0.493912 0.496905 11 0.003739 0.002367 0.514216 0.479678 12 0.002334 0.001517 0.529041 0.467108 13 0.003458 0.000001 0.532265 0.464276 14 0.000405 0.005655 0.527576 0.466364 15 0.002557 0.003233 0.507954 0.486256 16 0.004161 0.000001 0.491271 0.504568 17 0.001364 0.001330 0.528311 0.468996 18 0.002886 0.000001 0.506392 0.490721 19 0.001823 0.002498 0.509620 0.486059 Source.0 Source.1 Source.2 Source.3 cluster 0 0.000001 0.197108 0.802495 0.000396 1 0.000001 0.157860 0.842076 0.000063 2 0.094956 0.203057 0.701662 0.000325 3 0.000001 0.181948 0.817841 0.000210 4 0.000003 0.169680 0.830316 0.000001 5 0.000362 0.177194 0.822443 0.000001 6 0.000001 0.146807 0.852924 0.000268 7 0.001087 0.178994 0.819564 0.000354 8 0.000001 0.202182 0.797333 0.000485 9 0.000348 0.181399 0.818252 0.000001 10 0.003050 0.000247 0.506777 0.489926 11 0.004420 0.000001 0.513927 0.481652 12 0.006488 0.001396 0.527197 0.464919 13 0.001510 0.000001 0.525987 0.472502 14 0.000001 0.000001 0.520737 0.479261 15 0.000001 0.001765 0.515658 0.482575 16 0.000001 0.000001 0.492550 0.507448 17 0.002855 0.000199 0.526535 0.470411 18 0.000001 0.001952 0.498303 0.499744 19 0.001232 0.000001 0.506612 0.492155 ``` Then I want to get the mean of these two dataframes. What is the easiest way to do this? Just to clarify I want to get the mean for each particular cell when the indexes and columns of all the dataframes are exactly the same. 
So in the example I gave, the average for [0,Source.0] would be (0.001182 + 0.000001) \/ 2 = 0.0005915.", "response":"Assuming the two dataframes have the same columns, you could just concatenate them and compute your summary stats on the concatenated frames: ``` import numpy as np import pandas as pd # some random data frames df1 = pd.DataFrame(dict(x=np.random.randn(100), y=np.random.randint(0, 5, 100))) df2 = pd.DataFrame(dict(x=np.random.randn(100), y=np.random.randint(0, 5, 100))) # concatenate them df_concat = pd.concat((df1, df2)) print df_concat.mean() # x -0.163044 # y 2.120000 # dtype: float64 print df_concat.median() # x -0.192037 # y 2.000000 # dtype: float64 ``` Update If you want to compute stats across each set of rows with the same index in the two datasets, you can use .groupby() to group the data by row index, then apply the mean, median etc.: ``` by_row_index = df_concat.groupby(df_concat.index) df_means = by_row_index.mean() print df_means.head() # x y # 0 -0.850794 1.5 # 1 0.159038 1.5 # 2 0.083278 1.0 # 3 -0.540336 0.5 # 4 0.390954 3.5 ``` This method will work even when your dataframes have unequal numbers of rows - if a particular row index is missing in one of the two dataframes, the mean\/median will be computed on the single existing row.", "best_answers_score":0.8, "library_name":"numpy", "question_url":"https:\/\/stackoverflow.com\/questions\/25057835\/get-the-mean-across-multiple-pandas-dataframes", "best_answers_votes":66, "question_length":2181, "response_length":1166 }, { "question":"Numpy integer nan [duplicate] This question already has answers here: NumPy or Pandas: Keeping array type as integer while having a NaN value (10 answers) Closed 11 years ago. Is there a way to store NaN in a Numpy array of integers? 
I get: ``` a=np.array([1],dtype=long) a[0]=np.nan Traceback (most recent call last): File \"<stdin>\", line 1, in <module> ValueError: cannot convert float NaN to integer ```", "response":"No, you can't, at least with the current version of NumPy. A nan is a special value for float arrays only. There are talks about introducing a special bit that would allow non-float arrays to store what in practice would correspond to a nan, but so far (2012\/10), it's only talks. In the meantime, you may want to consider the numpy.ma package: instead of picking an invalid integer like -99999, you could use the special numpy.ma.masked value to represent an invalid value. ``` a = np.ma.array([1,2,3,4,5], dtype=int) a[1] = np.ma.masked masked_array(data = [1 -- 3 4 5], mask = [False True False False False], fill_value = 999999) ```", "best_answers_score":0.8, "library_name":"numpy", "question_url":"https:\/\/stackoverflow.com\/questions\/12708807\/numpy-integer-nan", "best_answers_votes":65, "question_length":391, "response_length":632 }, { "question":"Numpy remove a dimension from np array I have some images I want to work with, the problem is that there are two kinds of images both are 106 x 106 pixels, some are in color and some are black and white. one with only two (2) dimensions: (106,106) and one with three (3) (106,106,3) Is there a way I can strip this last dimension? I tried np.delete, but it did not seem to work. ``` np.shape(np.delete(Xtrain[0], [2] , 2)) Out[67]: (106, 106, 2) ```", "response":"You could use numpy's fancy indexing (an extension to Python's built-in slice notation): ``` x = np.zeros( (106, 106, 3) ) result = x[:, :, 0] print(result.shape) ``` prints ``` (106, 106) ``` A shape of (106, 106, 3) means you have 3 sets of things that have shape (106, 106). So in order to \"strip\" the last dimension, you just have to pick one of these (that's what the fancy indexing does). You can keep any slice you want.
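As an aside, if the goal is a single grayscale image rather than one particular colour channel (an assumption about the use case, not something the question states), averaging over the last axis also produces the (106, 106) shape:

```python
import numpy as np

rgb = np.ones((106, 106, 3))   # stand-in for one of the (106, 106, 3) colour images
gray = rgb.mean(axis=2)       # collapse the 3 channels by averaging instead of slicing
print(gray.shape)             # (106, 106)
```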
I arbitrarily choose to keep the 0th, since you didn't specify what you wanted. So, result = x[:, :, 1] and result = x[:, :, 2] would give the desired shape as well: it all just depends on which slice you need to keep.", "best_answers_score":0.8, "library_name":"numpy", "question_url":"https:\/\/stackoverflow.com\/questions\/37152031\/numpy-remove-a-dimension-from-np-array", "best_answers_votes":126, "question_length":449, "response_length":646 }, { "question":"recover dict from 0-d numpy array What happened is that I (by mistake) saved a dictionary with the command numpy.save() (no error messages shown) and now I need to recover the data in the dictionary. When I load it with numpy.load() it has type (numpy.ndarray) and is 0-d, so it is not a dictionary any more and I can't access the data in it, 0-d arrays are not index-able so doing something like ``` mydict = numpy.load('mydict') mydict[0]['some_key'] ``` doesn't work. I also tried ``` recdict = dict(mydict) ``` but that didn't work either. Why numpy didn't warn me when I saved the dictionary with numpy.save()? Is there a way to recover the data? Thanks in advance!", "response":"Use mydict.item() to obtain the array element as a Python scalar. ``` >>> import numpy as np >>> np.save('\/tmp\/data.npy',{'a':'Hi Mom!'}) >>> x=np.load('\/tmp\/data.npy') >>> x.item() {'a': 'Hi Mom!'} ```", "best_answers_score":0.8, "library_name":"numpy", "question_url":"https:\/\/stackoverflow.com\/questions\/8361561\/recover-dict-from-0-d-numpy-array", "best_answers_votes":106, "question_length":670, "response_length":202 }, { "question":"numpy subtract every row of matrix by vector So I have a n x d matrix and an n x 1 vector. I'm trying to write a code to subtract every row in the matrix by the vector. I currently have a for loop that iterates through and subtracts the i-th row in the matrix by the vector. Is there a way to simply subtract an entire matrix by the vector? Thanks! 
Current code: ``` for i in xrange( len( X1 ) ): X[i,:] = X1[i,:] - X2 ``` This is where X1 is the matrix's i-th row and X2 is vector. Can I make it so that I don't need a for loop?", "response":"That works in numpy but only if the trailing axes have the same dimension. Here is an example of successfully subtracting a vector from a matrix: ``` In [27]: print m; m.shape [[ 0 1 2] [ 3 4 5] [ 6 7 8] [ 9 10 11]] Out[27]: (4, 3) In [28]: print v; v.shape [0 1 2] Out[28]: (3,) In [29]: m - v Out[29]: array([[0, 0, 0], [3, 3, 3], [6, 6, 6], [9, 9, 9]]) ``` This worked because the trailing axis of both had the same dimension (3). In your case, the leading axes had the same dimension. Here is an example, using the same v as above, of how that can be fixed: ``` In [35]: print m; m.shape [[ 0 1 2 3] [ 4 5 6 7] [ 8 9 10 11]] Out[35]: (3, 4) In [36]: (m.transpose() - v).transpose() Out[36]: array([[0, 1, 2, 3], [3, 4, 5, 6], [6, 7, 8, 9]]) ``` The rules for broadcasting axes are explained in depth here.", "best_answers_score":0.8, "library_name":"numpy", "question_url":"https:\/\/stackoverflow.com\/questions\/26333005\/numpy-subtract-every-row-of-matrix-by-vector", "best_answers_votes":94, "question_length":529, "response_length":809 }, { "question":"Store different datatypes in one NumPy array? I have two different arrays, one with strings and another with ints. I want to concatenate them, into one array where each column has the original datatype. My current solution for doing this (see below) converts the entire array into dtype = string, which seems very memory inefficient. combined_array = np.concatenate((A, B), axis = 1) Is it possible to mutiple dtypes in combined_array when A.dtype = string and B.dtype = int?", "response":"One approach might be to use a record array. 
The \"columns\" won't be like the columns of standard numpy arrays, but for most use cases, this is sufficient: ``` >>> a = numpy.array(['a', 'b', 'c', 'd', 'e']) >>> b = numpy.arange(5) >>> records = numpy.rec.fromarrays((a, b), names=('keys', 'data')) >>> records rec.array([('a', 0), ('b', 1), ('c', 2), ('d', 3), ('e', 4)], dtype=[('keys', '|S1'), ('data', '<i8')]) >>> records['keys'] rec.array(['a', 'b', 'c', 'd', 'e'], dtype='|S1') >>> records['data'] array([0, 1, 2, 3, 4]) ``` Note that you can also do something similar with a standard array by specifying the datatype of the array. This is known as a \"structured array\": ``` >>> arr = numpy.array([('a', 0), ('b', 1)], dtype=([('keys', '|S1'), ('data', 'i8')])) >>> arr array([('a', 0), ('b', 1)], dtype=[('keys', '|S1'), ('data', '<i8')]) ``` One difference is that the fields of a record array can also be accessed as attributes, which a plain structured array does not support: ``` >>> records.keys chararray(['a', 'b', 'c', 'd', 'e'], dtype='|S1') >>> arr.keys Traceback (most recent call last): File \"<stdin>\", line 1, in <module> AttributeError: 'numpy.ndarray' object has no attribute 'keys' ```", "best_answers_score":0.8, "library_name":"numpy", "question_url":"https:\/\/stackoverflow.com\/questions\/11309739\/store-different-datatypes-in-one-numpy-array", "best_answers_votes":57, "question_length":475, "response_length":1029 }, { "question":"numpy.unique with order preserved ``` ['b','b','b','a','a','c','c'] ``` numpy.unique gives ``` ['a','b','c'] ``` How can I get the original order preserved ``` ['b','a','c'] ``` Great answers. Bonus question. Why do none of these methods work with this dataset?
http:\/\/www.uploadmb.com\/dw.php?id=1364341573 Here's the question numpy sort wierd behavior", "response":"Numpy np.unique() is slow, O(Nlog(N)), but you can do this by following code: ``` import numpy as np a = np.array(['b','b','b','a','a','c','c']) _, idx = np.unique(a, return_index=True) print(a[np.sort(idx)]) ``` Output: ``` ['b' 'a' 'c'] ``` Pandas pd.unique() is much faster for big array O(N): ``` import pandas as pd a = np.random.randint(0, 1000, 10000) %timeit np.unique(a) %timeit pd.unique(a) 1000 loops, best of 3: 644 us per loop 10000 loops, best of 3: 144 us per loop ``` Note: Pandas pd.unique() has the further benefit of preserving order by default: pandas.unique(values) Return unique values based on a hash table. Uniques are returned in order of appearance. This does NOT sort. Significantly faster than numpy.unique for long enough sequences. Includes NA values.", "best_answers_score":0.8, "library_name":"numpy", "question_url":"https:\/\/stackoverflow.com\/questions\/15637336\/numpy-unique-with-order-preserved", "best_answers_votes":120, "question_length":352, "response_length":781 }, { "question":"Are NumPy's math functions faster than Python's? I have a function defined by a combination of basic math functions (abs, cosh, sinh, exp, ...). I was wondering if it makes a difference (in speed) to use, for example, numpy.abs() instead of abs()?", "response":"Here are the timing results: ``` lebigot@weinberg ~ % python -m timeit 'abs(3.15)' 10000000 loops, best of 3: 0.146 usec per loop lebigot@weinberg ~ % python -m timeit -s 'from numpy import abs as nabs' 'nabs(3.15)' 100000 loops, best of 3: 3.92 usec per loop ``` numpy.abs() is slower than abs() because it also handles Numpy arrays: it contains additional code that provides this flexibility. 
However, Numpy is fast on arrays: ``` lebigot@weinberg ~ % python -m timeit -s 'a = [3.15]*1000' '[abs(x) for x in a]' 10000 loops, best of 3: 186 usec per loop lebigot@weinberg ~ % python -m timeit -s 'import numpy; a = numpy.empty(1000); a.fill(3.15)' 'numpy.abs(a)' 100000 loops, best of 3: 6.47 usec per loop ``` (PS: '[abs(x) for x in a]' is slower in Python 2.7 than the better map(abs, a), which is about 30 % faster\u2014which is still much slower than NumPy.) Thus, numpy.abs() does not take much more time for 1000 elements than for 1 single float!", "best_answers_score":0.8, "library_name":"numpy", "question_url":"https:\/\/stackoverflow.com\/questions\/3650194\/are-numpys-math-functions-faster-than-pythons", "best_answers_votes":91, "question_length":247, "response_length":948 }, { "question":"difference between numpy dot() and inner() What is the difference between ``` import numpy as np np.dot(a,b) ``` and ``` import numpy as np np.inner(a,b) ``` all examples I tried returned the same result. Wikipedia has the same article for both?! In the description of inner() it says, that its behavior is different in higher dimensions, but I couldn't produce any different output. Which one should I use?", "response":"numpy.dot: For 2-D arrays it is equivalent to matrix multiplication, and for 1-D arrays to inner product of vectors (without complex conjugation). For N dimensions it is a sum product over the last axis of a and the second-to-last of b: numpy.inner: Ordinary inner product of vectors for 1-D arrays (without complex conjugation), in higher dimensions a sum product over the last axes. (Emphasis mine.) As an example, consider this example with 2D arrays: ``` >>> a=np.array([[1,2],[3,4]]) >>> b=np.array([[11,12],[13,14]]) >>> np.dot(a,b) array([[37, 40], [85, 92]]) >>> np.inner(a,b) array([[35, 41], [81, 95]]) ``` Thus, the one you should use is the one that gives the correct behaviour for your application. 
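The 2-D relationship can also be written out explicitly: for two-dimensional arrays, np.inner(a, b) equals np.dot(a, b.T), because inner sums over the last axis of both arguments (a small sketch using the same arrays as above):

```python
import numpy as np

a = np.array([[1, 2], [3, 4]])
b = np.array([[11, 12], [13, 14]])

# dot(a, b)[i, j] = sum_k a[i, k] * b[k, j]   -> rows of `a` against columns of `b`
print(np.dot(a, b))    # [[37 40] [85 92]]

# inner(a, b)[i, j] = sum_k a[i, k] * b[j, k] -> rows of `a` against rows of `b`
print(np.inner(a, b))  # [[35 41] [81 95]]

# The two agree once one argument is transposed:
assert np.array_equal(np.inner(a, b), np.dot(a, b.T))
assert np.array_equal(np.dot(a, b), np.inner(a, b.T))
```

For 1-D inputs the transpose makes no difference, which is why the two functions return identical results there.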
Performance testing (Note that I am testing only the 1D case, since that is the only situation where .dot and .inner give the same result.) ``` >>> import timeit >>> setup = 'import numpy as np; a=np.random.random(1000); b = np.random.random(1000)' >>> [timeit.timeit('np.dot(a,b)',setup,number=1000000) for _ in range(3)] [2.6920320987701416, 2.676928997039795, 2.633111000061035] >>> [timeit.timeit('np.inner(a,b)',setup,number=1000000) for _ in range(3)] [2.588860034942627, 2.5845699310302734, 2.6556360721588135] ``` So maybe .inner is faster, but my machine is fairly loaded at the moment, so the timings are not consistent nor are they necessarily very accurate.", "best_answers_score":0.8, "library_name":"numpy", "question_url":"https:\/\/stackoverflow.com\/questions\/11033573\/difference-between-numpy-dot-and-inner", "best_answers_votes":74, "question_length":407, "response_length":1381 }, { "question":"Efficiently count zero elements in numpy array? I need to count the number of zero elements in numpy arrays. I'm aware of the numpy.count_nonzero function, but there appears to be no analog for counting zero elements. My arrays are not very large (typically less than 1E5 elements) but the operation is performed several millions of times. Of course I could use len(arr) - np.count_nonzero(arr), but I wonder if there's a more efficient way to do it. Here's a MWE of how I do it currently: ``` import numpy as np import timeit arrs = [] for _ in range(1000): arrs.append(np.random.randint(-5, 5, 10000)) def func1(): for arr in arrs: zero_els = len(arr) - np.count_nonzero(arr) print(timeit.timeit(func1, number=10)) ```", "response":"A 2x faster approach would be to just use np.count_nonzero() but with the condition as needed. 
``` In [3]: arr Out[3]: array([[1, 2, 0, 3], [3, 9, 0, 4]]) In [4]: np.count_nonzero(arr==0) Out[4]: 2 In [5]:def func_cnt(): for arr in arrs: zero_els = np.count_nonzero(arr==0) # here, it counts the frequency of zeroes actually ``` You can also use np.where() but it's slower than np.count_nonzero() ``` In [6]: np.where( arr == 0) Out[6]: (array([0, 1]), array([2, 2])) In [7]: len(np.where( arr == 0)) Out[7]: 2 ``` Efficiency: (in descending order) ``` In [8]: %timeit func_cnt() 10 loops, best of 3: 29.2 ms per loop In [9]: %timeit func1() 10 loops, best of 3: 46.5 ms per loop In [10]: %timeit func_where() 10 loops, best of 3: 61.2 ms per loop ``` more speedups with accelerators It is now possible to achieve more than 3 orders of magnitude speed boost with the help of JAX if you've access to accelerators (GPU\/TPU). Another advantage of using JAX is that the NumPy code needs very little modification to make it JAX compatible. Below is a reproducible example: ``` In [1]: import jax.numpy as jnp In [2]: from jax import jit # set up inputs In [3]: arrs = [] In [4]: for _ in range(1000): ...: arrs.append(np.random.randint(-5, 5, 10000)) # JIT'd function that performs the counting task In [5]: @jit ...: def func_cnt(): ...: for arr in arrs: ...: zero_els = jnp.count_nonzero(arr==0) ``` ``` # efficiency test In [8]: %timeit func_cnt() 15.6 \u00b5s \u00b1 391 ns per loop (mean \u00b1 std. dev. of 7 runs, 100000 loops each) ```", "best_answers_score":0.8, "library_name":"numpy", "question_url":"https:\/\/stackoverflow.com\/questions\/42916330\/efficiently-count-zero-elements-in-numpy-array", "best_answers_votes":95, "question_length":720, "response_length":1523 }, { "question":"check if numpy array is multidimensional or not I want to check if a numpy array is multidimensional or not? 
``` V = [[ -7.94627203e+01 -1.81562235e+02 -3.05418070e+02 -2.38451033e+02][ 9.43740653e+01 1.69312771e+02 1.68545575e+01 -1.44450299e+02][ 5.61599000e+00 8.76135909e+01 1.18959245e+02 -1.44049237e+02]] ``` How can I do that in numpy?", "response":"Use the .ndim property of the ndarray: ``` >>> a = np.array([[ -7.94627203e+01, -1.81562235e+02, -3.05418070e+02, -2.38451033e+02],[ 9.43740653e+01, 1.69312771e+02, 1.68545575e+01, -1.44450299e+02],[ 5.61599000e+00, 8.76135909e+01, 1.18959245e+02, -1.44049237e+02]]) >>> a.ndim 2 ```", "best_answers_score":0.8, "library_name":"numpy", "question_url":"https:\/\/stackoverflow.com\/questions\/21299798\/check-if-numpy-array-is-multidimensional-or-not", "best_answers_votes":126, "question_length":343, "response_length":283 }, { "question":"How to shade region under the curve in matplotlib I want to use matplotlib to illustrate the definite integral between two regions: x_0, and x_1. How can I shade a region under a curve in matplotlib from x=-1, to x=1 given the following plot ``` import numpy as np from matplotlib import pyplot as plt def f(t): return t * t t = np.arange(-4,4,1\/40.) plt.plot(t,f(t)) ```", "response":"The final answer I came up with is to use fill_between. I thought there would have been a simple shade between type method, but this does exactly what I want. ``` section = np.arange(-1, 1, 1\/20.) plt.fill_between(section,f(section)) ```", "best_answers_score":0.8, "library_name":"numpy", "question_url":"https:\/\/stackoverflow.com\/questions\/10046262\/how-to-shade-region-under-the-curve-in-matplotlib", "best_answers_votes":84, "question_length":371, "response_length":237 }, { "question":"What does \"ValueError: object too deep for desired array\" mean and how to fix it? I'm trying to do this: ``` h = [0.2, 0.2, 0.2, 0.2, 0.2] Y = np.convolve(Y, h, \"same\") ``` Y looks like this: While doing this I get this error: ```none ValueError: object too deep for desired array ``` Why is this? 
My guess is because somehow the convolve function does not see Y as a 1D array.", "response":"The Y array in your screenshot is not a 1D array, it's a 2D array with 300 rows and 1 column, as indicated by its shape being (300, 1). To remove the extra dimension, you can slice the array as Y[:, 0]. To generally convert an n-dimensional array to 1D, you can use np.reshape(a, a.size). Another option for converting a 2D array into 1D is flatten() function from numpy.ndarray module, with the difference that it makes a copy of the array.", "best_answers_score":0.8, "library_name":"numpy", "question_url":"https:\/\/stackoverflow.com\/questions\/15923081\/what-does-valueerror-object-too-deep-for-desired-array-mean-and-how-to-fix-it", "best_answers_votes":87, "question_length":377, "response_length":441 }, { "question":"Rolling window for 1D arrays in Numpy? Is there a way to efficiently implement a rolling window for 1D arrays in Numpy? For example, I have this pure Python code snippet to calculate the rolling standard deviations for a 1D list, where observations is the 1D list of values, and n is the window length for the standard deviation: ``` stdev = [] for i, data in enumerate(observations[n-1:]): strip = observations[i:i+n] mean = sum(strip) \/ n stdev.append(sqrt(250*sum([(s-mean)**2 for s in strip])\/(n-1))) ``` Is there a way to do this completely within Numpy, i.e., without any Python loops? The standard deviation is trivial with numpy.std, but the rolling window part completely stumps me. I found this blog post regarding a rolling window in Numpy, but it doesn't seem to be for 1D arrays.", "response":"Just use the blog code, but apply your function to the result. i.e. 
``` numpy.std(rolling_window(observations, n), 1) ``` where you have (from the blog): ``` def rolling_window(a, window): shape = a.shape[:-1] + (a.shape[-1] - window + 1, window) strides = a.strides + (a.strides[-1],) return np.lib.stride_tricks.as_strided(a, shape=shape, strides=strides) ```", "best_answers_score":0.8, "library_name":"numpy", "question_url":"https:\/\/stackoverflow.com\/questions\/6811183\/rolling-window-for-1d-arrays-in-numpy", "best_answers_votes":85, "question_length":792, "response_length":361 }, { "question":"Variance Inflation Factor in Python I'm trying to calculate the variance inflation factor (VIF) for each column in a simple dataset in python: ``` a b c d 1 2 4 4 1 2 6 3 2 3 7 4 3 2 8 5 4 1 9 4 ``` I have already done this in R using the vif function from the usdm library which gives the following results: ``` a <- c(1, 1, 2, 3, 4) b <- c(2, 2, 3, 2, 1) c <- c(4, 6, 7, 8, 9) d <- c(4, 3, 4, 5, 4) df <- data.frame(a, b, c, d) vif_df <- vif(df) print(vif_df) Variables VIF a 22.95 b 3.00 c 12.95 d 3.00 ``` However, when I do the same in python using the statsmodel vif function, my results are: ``` a = [1, 1, 2, 3, 4] b = [2, 2, 3, 2, 1] c = [4, 6, 7, 8, 9] d = [4, 3, 4, 5, 4] ck = np.column_stack([a, b, c, d]) vif = [variance_inflation_factor(ck, i) for i in range(ck.shape[1])] print(vif) Variables VIF a 47.136986301369774 b 28.931506849315081 c 80.31506849315096 d 40.438356164383549 ``` The results are vastly different, even though the inputs are the same. In general, results from the statsmodel VIF function seem to be wrong, but I'm not sure if this is because of the way I am calling it or if it is an issue with the function itself. I was hoping someone could help me figure out whether I was incorrectly calling the statsmodel function or explain the discrepancies in the results. 
If it's an issue with the function then are there any VIF alternatives in python?", "response":"As mentioned by others and in this post by Josef Perktold, the function's author, variance_inflation_factor expects the presence of a constant in the matrix of explanatory variables. One can use add_constant from statsmodels to add the required constant to the dataframe before passing its values to the function. ``` from statsmodels.stats.outliers_influence import variance_inflation_factor from statsmodels.tools.tools import add_constant df = pd.DataFrame( {'a': [1, 1, 2, 3, 4], 'b': [2, 2, 3, 2, 1], 'c': [4, 6, 7, 8, 9], 'd': [4, 3, 4, 5, 4]} ) X = add_constant(df) >>> pd.Series([variance_inflation_factor(X.values, i) for i in range(X.shape[1])], index=X.columns) const 136.875 a 22.950 b 3.000 c 12.950 d 3.000 dtype: float64 ``` I believe you could also add the constant to the right most column of the dataframe using assign: ``` X = df.assign(const=1) >>> pd.Series([variance_inflation_factor(X.values, i) for i in range(X.shape[1])], index=X.columns) a 22.950 b 3.000 c 12.950 d 3.000 const 136.875 dtype: float64 ``` The source code itself is rather concise: ``` def variance_inflation_factor(exog, exog_idx): \"\"\" exog : ndarray, (nobs, k_vars) design matrix with all explanatory variables, as for example used in regression exog_idx : int index of the exogenous variable in the columns of exog \"\"\" k_vars = exog.shape[1] x_i = exog[:, exog_idx] mask = np.arange(k_vars) != exog_idx x_noti = exog[:, mask] r_squared_i = OLS(x_i, x_noti).fit().rsquared vif = 1. \/ (1. - r_squared_i) return vif ``` It is also rather simple to modify the code to return all of the VIFs as a series: ``` from statsmodels.regression.linear_model import OLS from statsmodels.tools.tools import add_constant def variance_inflation_factors(exog_df): ''' Parameters ---------- exog_df : dataframe, (nobs, k_vars) design matrix with all explanatory variables, as for example used in regression. 
Returns ------- vif : Series variance inflation factors ''' exog_df = add_constant(exog_df) vifs = pd.Series( [1 \/ (1. - OLS(exog_df[col].values, exog_df.loc[:, exog_df.columns != col].values).fit().rsquared) for col in exog_df], index=exog_df.columns, name='VIF' ) return vifs >>> variance_inflation_factors(df) const 136.875 a 22.950 b 3.000 c 12.950 Name: VIF, dtype: float64 ``` Per the solution of @T_T, one can also simply do the following: ``` vifs = pd.Series(np.linalg.inv(df.corr().to_numpy()).diagonal(), index=df.columns, name='VIF') ```", "best_answers_score":0.8, "library_name":"numpy", "question_url":"https:\/\/stackoverflow.com\/questions\/42658379\/variance-inflation-factor-in-python", "best_answers_votes":79, "question_length":1381, "response_length":2433 }, { "question":"Fast replacement of values in a numpy array I have a very large numpy array (containing up to a million elements) like the one below: ``` [0,1,6,5,1,2,7,6,2,3,8,7,3,4,9,8,5,6,11,10,6,7,12,11,7, 8,13,12,8,9,14,13,10,11,16,15,11,12,17,16,12,13,18,17,13, 14,19,18,15,16,21,20,16,17,22,21,17,18,23,22,18,19,24,23] ``` and a small dictionary map for replacing some of the elements in the above array ``` {4: 0, 9: 5, 14: 10, 19: 15, 20: 0, 21: 1, 22: 2, 23: 3, 24: 0} ``` I would like to replace some of the elements according to the map above. The numpy array is really large, and only a small subset of the elements (occurring as keys in the dictionary) will be replaced with the corresponding values. 
What is the fastest way to do this?", "response":"I believe there's even more efficient method, but for now, try ``` from numpy import copy newArray = copy(theArray) for k, v in d.iteritems(): newArray[theArray==k] = v ``` Microbenchmark and test for correctness: ``` #!\/usr\/bin\/env python2.7 from numpy import copy, random, arange random.seed(0) data = random.randint(30, size=10**5) d = {4: 0, 9: 5, 14: 10, 19: 15, 20: 0, 21: 1, 22: 2, 23: 3, 24: 0} dk = d.keys() dv = d.values() def f1(a, d): b = copy(a) for k, v in d.iteritems(): b[a==k] = v return b def f2(a, d): for i in xrange(len(a)): a[i] = d.get(a[i], a[i]) return a def f3(a, dk, dv): mp = arange(0, max(a)+1) mp[dk] = dv return mp[a] a = copy(data) res = f2(a, d) assert (f1(data, d) == res).all() assert (f3(data, dk, dv) == res).all() ``` Result: ``` $ python2.7 -m timeit -s 'from w import f1,f3,data,d,dk,dv' 'f1(data,d)' 100 loops, best of 3: 6.15 msec per loop $ python2.7 -m timeit -s 'from w import f1,f3,data,d,dk,dv' 'f3(data,dk,dv)' 100 loops, best of 3: 19.6 msec per loop ```", "best_answers_score":0.8, "library_name":"numpy", "question_url":"https:\/\/stackoverflow.com\/questions\/3403973\/fast-replacement-of-values-in-a-numpy-array", "best_answers_votes":49, "question_length":734, "response_length":1003 }, { "question":"What are the advantages of using numpy.identity over numpy.eye? Having looked over the man pages for numpy's eye and identity, I'd assumed that identity was a special case of eye, since it has fewer options (e.g. eye can fill shifted diagonals, identity cannot), but could plausibly run more quickly. 
However, this isn't the case on either small or large arrays: ``` >>> np.identity(3) array([[ 1., 0., 0.], [ 0., 1., 0.], [ 0., 0., 1.]]) >>> np.eye(3) array([[ 1., 0., 0.], [ 0., 1., 0.], [ 0., 0., 1.]]) >>> timeit.timeit(\"import numpy; numpy.identity(3)\", number = 10000) 0.05699801445007324 >>> timeit.timeit(\"import numpy; numpy.eye(3)\", number = 10000) 0.03787708282470703 >>> timeit.timeit(\"import numpy\", number = 10000) 0.00960087776184082 >>> timeit.timeit(\"import numpy; numpy.identity(1000)\", number = 10000) 11.379066944122314 >>> timeit.timeit(\"import numpy; numpy.eye(1000)\", number = 10000) 11.247124910354614 ``` What, then, is the advantage of using identity over eye?", "response":"identity just calls eye so there is no difference in how the arrays are constructed. Here's the code for identity: ``` def identity(n, dtype=None): from numpy import eye return eye(n, dtype=dtype) ``` As you say, the main difference is that with eye the diagonal can may be offset, whereas identity only fills the main diagonal. Since the identity matrix is such a common construct in mathematics, it seems the main advantage of using identity is for its name alone.", "best_answers_score":0.8, "library_name":"numpy", "question_url":"https:\/\/stackoverflow.com\/questions\/28363447\/what-are-the-advantages-of-using-numpy-identity-over-numpy-eye", "best_answers_votes":95, "question_length":986, "response_length":466 }, { "question":"Why in numpy `nan == nan` is False while nan in [nan] is True? While the first part of the question (which is in the title) has been answered a few times before (i.e., Why is NaN not equal to NaN?), I don't see why the second piece works the way it does (inspired by this question How to Check list containing NaN)? Namely: ``` >> nan == nan False >> nan in [nan] True ``` An explanatory addendum to the question considering the answer from @DSM. So, why float(\"nan\") is behaving differently from nan? 
Shouldn't it evaluate again to simple nan and why interpreter behaves this way? ``` >> x = float(\"nan\") >> y = nan >> x nan >> y nan >> x is nan, x is float(\"nan\"), y is nan (False, False, True) ``` Basically, it refers to same generic nan in the first case, but creates separate object in the second: ``` >> nans = [nan for i in range(2)] >> map(id, nans) [190459300, 190459300] >> nans = [float(\"nan\") for i in range(2)] >> map(id, nans) [190459300, 190459301] ```", "response":"nan not being equal to nan is part of the definition of nan, so that part's easy. As for nan in [nan] being True, that's because identity is tested before equality for containment in lists. You're comparing the same two objects. If you tried the same thing with two different nans, you'd get False: ``` >>> nans = [float(\"nan\") for i in range(2)] >>> map(id, nans) [190459300, 190459284] >>> nans [nan, nan] >>> nans[0] is nans[1] False >>> nans[0] in nans True >>> nans[0] in nans[1:] False ``` Your addendum doesn't really have much to do with nan, that's simply how Python works. Once you understand that float(\"nan\") is under no obligation to return some nan singleton, and that y = x doesn't make a copy of x but instead binds the name y to the object named by x, there's nothing left to get.", "best_answers_score":0.8, "library_name":"numpy", "question_url":"https:\/\/stackoverflow.com\/questions\/20320022\/why-in-numpy-nan-nan-is-false-while-nan-in-nan-is-true", "best_answers_votes":62, "question_length":968, "response_length":797 }, { "question":"How to add column to numpy array I am trying to add one column to the array created from recfromcsv. In this case it's an array: [210,8] (rows, cols). I want to add a ninth column. Empty or with zeroes doesn't matter. 
``` from numpy import genfromtxt from numpy import recfromcsv import numpy as np import time if __name__ == '__main__': print(\"testing\") my_data = recfromcsv('LIAB.ST.csv', delimiter='\\t') array_size = my_data.size #my_data = np.append(my_data[:array_size],my_data[9:],0) new_col = np.sum(x,1).reshape((x.shape[0],1)) np.append(x,new_col,1) ```", "response":"I think that your problem is that you are expecting np.append to add the column in-place, but what it does, because of how numpy data is stored, is create a copy of the joined arrays ``` Returns ------- append : ndarray A copy of `arr` with `values` appended to `axis`. Note that `append` does not occur in-place: a new array is allocated and filled. If `axis` is None, `out` is a flattened array. ``` so you need to save the output all_data = np.append(...): ``` my_data = np.random.random((210,8)) #recfromcsv('LIAB.ST.csv', delimiter='\\t') new_col = my_data.sum(1)[...,None] # None keeps (n, 1) shape new_col.shape #(210,1) all_data = np.append(my_data, new_col, 1) all_data.shape #(210,9) ``` Alternative ways: ``` all_data = np.hstack((my_data, new_col)) #or all_data = np.concatenate((my_data, new_col), 1) ``` I believe that the only difference between these three functions (as well as np.vstack) are their default behaviors for when axis is unspecified: concatenate assumes axis = 0 hstack assumes axis = 1 unless inputs are 1d, then axis = 0 vstack assumes axis = 0 after adding an axis if inputs are 1d append flattens array Based on your comment, and looking more closely at your example code, I now believe that what you are probably looking to do is add a field to a record array. You imported both genfromtxt which returns a structured array and recfromcsv which returns the subtly different record array (recarray). 
You used the recfromcsv so right now my_data is actually a recarray, which means that most likely my_data.shape = (210,) since recarrays are 1d arrays of records, where each record is a tuple with the given dtype. So you could try this: ``` import numpy as np from numpy.lib.recfunctions import append_fields x = np.random.random(10) y = np.random.random(10) z = np.random.random(10) data = np.array( list(zip(x,y,z)), dtype=[('x',float),('y',float),('z',float)]) data = np.recarray(data.shape, data.dtype, buf=data) data.shape #(10,) tot = data['x'] + data['y'] + data['z'] # sum(axis=1) won't work on recarray tot.shape #(10,) all_data = append_fields(data, 'total', tot, usemask=False) all_data #array([(0.4374783740738456 , 0.04307289878861764, 0.021176067323686598, 0.5017273401861498), # (0.07622262416466963, 0.3962146058689695 , 0.27912715826653534 , 0.7515643883001745), # (0.30878532523061153, 0.8553768789387086 , 0.9577415585116588 , 2.121903762680979 ), # (0.5288343561208022 , 0.17048864443625933, 0.07915689716226904 , 0.7784798977193306), # (0.8804269791375121 , 0.45517504750917714, 0.1601389248542675 , 1.4957409515009568), # (0.9556552723429782 , 0.8884504475901043 , 0.6412854758843308 , 2.4853911958174133), # (0.0227638618687922 , 0.9295332854783015 , 0.3234597575660103 , 1.275756904913104 ), # (0.684075052174589 , 0.6654774682866273 , 0.5246593820025259 , 1.8742119024637423), # (0.9841793718333871 , 0.5813955915551511 , 0.39577520705133684 , 1.961350170439875 ), # (0.9889343795296571 , 0.22830104497714432, 0.20011292764078448 , 1.4173483521475858)], # dtype=[('x', '> Here we go! Done! indata: [[ 1. 1. 1. 1. 1. 1.] [ 1. 1. 1. 1. 1. 1.] [ 1. 1. 1. 1. 1. 1.] [ 1. 1. 1. 1. 1. 1.] [ 1. 1. 1. 1. 1. 1.]] outdata: [[ 0. 0. 0. 0. 0. 0.] [ 0. 0. 0. 0. 0. 0.] [ 0. 0. 0. 0. 0. 0.] [ 0. 0. 0. 0. 0. 0.] [ 0. 0. 0. 0. 0. 0.]] ``` The outdata array is not modified. And in fact if I call the function again I get a segfault. 
Which doesn't surprise me -- I really don't know what I'm doing here. Can anyone point me in the right direction?", "response":"While not a direct answer to your original question, here's a much more convenient way to call your function. First, make the prototype of your C function exactly as you would do it in plain C. Since you don't need rowcount and colcount separately, I'll collapse them into a single size parameter: ``` void cfun(const double *indatav, size_t size, double *outdatav) { size_t i; for (i = 0; i < size; ++i) outdatav[i] = indatav[i] * 2.0; } ``` Now define the ctypes prototype in the following way: ``` import ctypes from numpy.ctypeslib import ndpointer lib = ctypes.cdll.LoadLibrary(\".\/ctest.so\") fun = lib.cfun fun.restype = None fun.argtypes = [ndpointer(ctypes.c_double, flags=\"C_CONTIGUOUS\"), ctypes.c_size_t, ndpointer(ctypes.c_double, flags=\"C_CONTIGUOUS\")] ``` Now, calls to your function will be really convenient: ``` indata = numpy.ones((5,6)) outdata = numpy.empty((5,6)) fun(indata, indata.size, outdata) ``` You could also define a wrapper to make this even more convenient: ``` def wrap_fun(indata, outdata): assert indata.size == outdata.size fun(indata, indata.size, outdata) ```", "best_answers_score":0.8, "library_name":"numpy", "question_url":"https:\/\/stackoverflow.com\/questions\/5862915\/passing-numpy-arrays-to-a-c-function-for-input-and-output", "best_answers_votes":88, "question_length":1197, "response_length":1095 }, { "question":"Axes from plt.subplots() is a \"numpy.ndarray\" object and has no attribute \"plot\" The information below may be superfluous if you are trying to understand the error message. Please start off by reading the answer by @user707650. Using MatPlotLib, I wanted a generalizable script that creates the following from my data. A window containing a subplots arranged so that there are b subplots per column. I want to be able to change the values of a and b. 
If I have data for 2a subplots, I want 2 windows, each with the previously described \"a subplots arranged according to b subplots per column\". The x and y data I am plotting are floats stored in np.arrays and are structured as follows: The x data is always the same for all plots and is of length 5. ``` 'x_vector': [0.000, 0.005, 0.010, 0.020, 0.030, 0.040] ``` The y data of all plots are stored in y_vector where the data for the first plot is stored at indexes 0 through 5. The data for the second plot is stored at indexes 6 through 11. The third plot gets 12-18, the fourth 19-24, and so on. In total, for this dataset, I have 91 plots (i.e. 91*6 = 546 values stored in y_vector). Attempt: ``` import matplotlib.pyplot as plt # Options: plots_tot = 14 # Total number of plots. In reality there is going to be 7*13 = 91 plots. location_of_ydata = 6 # The values for the n:th plot can be found in the y_vector at index 'n*6' through 'n*6 + 6'. plots_window = 7 # Total number of plots per window. rows = 2 # Number of rows, i.e. number of subplots per column. # Calculating number of columns: prim_cols = plots_window \/ rows extra_cols = 0 if plots_window % rows > 0: extra_cols = 1 cols = prim_cols + extra_cols print 'cols:', cols print 'rows:', rows # Plotting: n=0 x=0 fig, ax = plt.subplots(rows, cols) while x <= plots_tot: ax[x].plot(x_vector, y_vector[n:(n+location_of_ydata)], 'ro') ``` which raises: ``` AttributeError: 'numpy.ndarray' object has no attribute 'plot' ```", "response":"If you debug your program by simply printing ax, you'll quickly find out that ax is a two-dimensional array: one dimension for the rows, one for the columns. Thus, you need two indices to index ax to retrieve the actual AxesSubplot instance, like: ``` ax[1,1].plot(...) ``` If you want to iterate through the subplots in the way you do it now, by flattening ax first: ``` ax = ax.flatten() ``` and now ax is a one dimensional array.
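For instance, here is a small self-contained sketch of the flatten-then-index pattern (the grid size and data are arbitrary, and the non-interactive Agg backend is an assumption so it runs without a display):

```python
import matplotlib
matplotlib.use('Agg')  # headless backend; an assumption for this sketch only
import matplotlib.pyplot as plt
import numpy as np

fig, ax = plt.subplots(2, 4)   # ax has shape (2, 4)
ax = ax.flatten()              # now a flat array of 8 AxesSubplot objects
x = np.linspace(0, 1, 5)
for i in range(7):             # fill 7 of the 8 subplots with a dummy curve
    ax[i].plot(x, x**2, 'ro')
```

After flattening, a single index ax[i] works no matter what the original grid shape was.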
I don't know if rows or columns are stepped through first, but if it's the wrong way around, use the transpose: ``` ax = ax.T.flatten() ``` Of course, by now it makes more sense to simply create each subplot on the fly, because that already has an index, and the other two numbers are fixed: ``` for x in range(plots_tot): ax = plt.subplot(nrows, ncols, x+1) ``` Note: you have x <= plots_tot, but with x starting at 0, you'll get an IndexError next with your current code (after flattening your array). Matplotlib is (unfortunately) 1-indexed for subplots. I prefer using a 0-indexed variable (Python style), and just add +1 for the subplot index (like above).", "best_answers_score":0.8, "library_name":"numpy", "question_url":"https:\/\/stackoverflow.com\/questions\/37967786\/axes-from-plt-subplots-is-a-numpy-ndarray-object-and-has-no-attribute-plot", "best_answers_votes":134, "question_length":1900, "response_length":1082 }, { "question":"Pandas - add value at specific iloc into new dataframe column I have a large dataframe containing lots of columns. For each row\/index in the dataframe I do some operations, read in some ancillary data, etc and get a new value. Is there a way to add that new value into a new column at the correct row\/index? I can use .assign to add a new column but as I'm looping over the rows and only generating the data to add for one value at a time (generating it is quite involved). When it's generated I'd like to immediately add it to the dataframe rather than waiting until I've generated the entire series. This doesn't work and gives a key error: ``` df['new_column_name'].iloc[this_row]=value ``` Do I need to initialise the column first or something?", "response":"There are two steps to create & populate a new column using only a row number...
(in this approach iloc is not used) First, get the row index value by using the row number ``` rowIndex = df.index[someRowNumber] ``` Then, use row index with the loc function to reference the specific row and add the new column \/ value ``` df.loc[rowIndex, 'New Column Title'] = \"some value\" ``` These two steps can be combined into one line as follows ``` df.loc[df.index[someRowNumber], 'New Column Title'] = \"some value\" ```", "best_answers_score":0.8, "library_name":"numpy", "question_url":"https:\/\/stackoverflow.com\/questions\/46113078\/pandas-add-value-at-specific-iloc-into-new-dataframe-column", "best_answers_votes":95, "question_length":748, "response_length":509 }, { "question":"Concatenating empty array in Numpy in Matlab I do this: ```none >> E = []; >> A = [1 2 3 4 5; 10 20 30 40 50]; >> E = [E ; A] E = 1 2 3 4 5 10 20 30 40 50 ``` Now I want the same thing in Numpy but I have problems, look at this: ```none >>> E = array([],dtype=int) >>> E array([], dtype=int64) >>> A = array([[1,2,3,4,5],[10,20,30,40,50]]) >>> E = vstack((E,A)) Traceback (most recent call last): File \"\", line 1, in File \"\/System\/Library\/Frameworks\/Python.framework\/Versions\/2.7\/Extras\/lib\/python\/numpy\/core\/shape_base.py\", line 226, in vstack return _nx.concatenate(map(atleast_2d,tup),0) ValueError: array dimensions must agree except for d_0 ``` I have a similar situation when I do this with: ```none >>> E = concatenate((E,A),axis=0) Traceback (most recent call last): File \"\", line 1, in ValueError: arrays must have same number of dimensions ``` Or: ```none >>> E = append([E],[A],axis=0) Traceback (most recent call last): File \"\", line 1, in File \"\/System\/Library\/Frameworks\/Python.framework\/Versions\/2.7\/Extras\/lib\/python\/numpy\/lib\/function_base.py\", line 3577, in append return concatenate((arr, values), axis=axis) ValueError: arrays must have same number of dimensions ```", "response":"if you know the number of columns beforehand: ```py >>> xs =
np.array([[1,2,3,4,5],[10,20,30,40,50]]) >>> ys = np.array([], dtype=np.int64).reshape(0,5) >>> ys array([], shape=(0, 5), dtype=int64) >>> np.vstack([ys, xs]) array([[ 1., 2., 3., 4., 5.], [ 10., 20., 30., 40., 50.]]) ``` if not: ```py >>> ys = np.array([]) >>> ys = np.vstack([ys, xs]) if ys.size else xs array([[ 1, 2, 3, 4, 5], [10, 20, 30, 40, 50]]) ```", "best_answers_score":0.8, "library_name":"numpy", "question_url":"https:\/\/stackoverflow.com\/questions\/22732589\/concatenating-empty-array-in-numpy", "best_answers_votes":135, "question_length":1189, "response_length":420 }, { "question":"How do you stop numpy from multithreading? [duplicate] This question already has answers here: Python multiprocessing: restrict number of cores used (5 answers) Limit number of threads in numpy (5 answers) Closed 4 years ago. I have to run jobs on a regular basis on compute servers that I share with others in the department and when I start 10 jobs, I really would like it to just take 10 cores and not more; I don't care if it takes a bit longer with a single core per run: I just don't want it to encroach on the others' territory, which would require me to renice the jobs and so on. I just want to have 10 solid cores and that's all. I am using Enthought 7.3-1 on Redhat, which is based on Python 2.7.3 and numpy 1.6.1, but the question is more general.", "response":"Hopefully this covers all scenarios and systems you may be on. Use numpy.__config__.show() to see if you are using OpenBLAS or MKL. From this point on there are a few ways you can do this. 2.1. The terminal route export OPENBLAS_NUM_THREADS=1 or export MKL_NUM_THREADS=1 2.2 (This is my preferred way) In your python script import os and add the line os.environ['OPENBLAS_NUM_THREADS'] = '1' or os.environ['MKL_NUM_THREADS'] = '1'. NOTE when setting os.environ[VAR] the number of threads must be a string! Also, you may need to set this environment variable before importing numpy\/scipy.
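For instance, a minimal sketch of option 2.2 (the array sizes here are arbitrary, and which variable actually takes effect depends on your BLAS backend, so both are set):

```python
import os

# Cap the BLAS thread pools BEFORE numpy is imported, otherwise the
# worker threads may already have been spawned.
os.environ['OPENBLAS_NUM_THREADS'] = '1'
os.environ['MKL_NUM_THREADS'] = '1'

import numpy as np

a = np.random.rand(200, 200)
b = a.dot(a)  # this matrix product now stays on a single core
```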
There are probably other options besides openBLAS or MKL but step 1 will help you figure that out.", "best_answers_score":0.8, "library_name":"numpy", "question_url":"https:\/\/stackoverflow.com\/questions\/17053671\/how-do-you-stop-numpy-from-multithreading", "best_answers_votes":60, "question_length":759, "response_length":688 }, { "question":"numpy.sin function in degrees? I'm working on a problem that has to do with calculating angles of refraction and what not. However, it seems that I'm unable to use the numpy.sin() function in degrees. I have tried to use numpy.degrees() and numpy.rad2deg(). ``` numpy.sin(90) numpy.degrees(numpy.sin(90)) ``` Both return ~ 0.894 and ~ 51.2 respectively. Thanks for your help.", "response":"You don't want to convert to degrees, because you already have your number (90) in degrees. You need to convert 90 from degrees to radians, and you need to do it before you take the sine: ``` >>> np.sin(np.deg2rad(90)) 1.0 ``` (You can use either deg2rad or radians.)", "best_answers_score":0.8, "library_name":"numpy", "question_url":"https:\/\/stackoverflow.com\/questions\/28077733\/numpy-sin-function-in-degrees", "best_answers_votes":97, "question_length":375, "response_length":267 }, { "question":"Get the column names of a python numpy ndarray Let's say I have a data file called data.txt that looks like: ``` TIME FX FY FZ 0 10 5 6 1 2 4 7 2 5 2 6 ... ``` In Python run: ``` import numpy as np myData = np.genfromtxt(\"data.txt\", names=True) >>> print myData[\"TIME\"] [0, 1, 2] ``` The names at the top of my data file will vary, so what I would like to do is find out what the names of my arrays in the data file are. I would like something like: ``` >>> print myData.names [TIME, F0, F1, F2] ``` I thought about just to read in the data file and get the first line and parse it as a separate operation, but that doesn't seem very efficient or elegant.", "response":"Try: ``` myData.dtype.names ``` This will return a tuple of the field names. 
``` In [10]: myData.dtype.names Out[10]: ('TIME', 'FX', 'FY', 'FZ') ```", "best_answers_score":0.8, "library_name":"numpy", "question_url":"https:\/\/stackoverflow.com\/questions\/7561017\/get-the-column-names-of-a-python-numpy-ndarray", "best_answers_votes":86, "question_length":655, "response_length":148 }, { "question":"np.random.rand vs np.random.random I find Python (and its ecosystem) to be full of strange conventions and inconsistencies and this is another example: np.random.rand Create an array of the given shape and populate it with random samples from a uniform distribution over [0, 1). np.random.random Return random floats in the half-open interval [0.0, 1.0). Results are from the \u201ccontinuous uniform\u201d distribution over the stated interval. ??? What exactly is the difference there?", "response":"First note that numpy.random.random is actually an alias for numpy.random.random_sample. I'll use the latter in the following. (See this question and answer for more aliases.) Both functions generate samples from the uniform distribution on [0, 1). The only difference is in how the arguments are handled. With numpy.random.rand, the length of each dimension of the output array is a separate argument. With numpy.random.random_sample, the shape argument is a single tuple. For example, to create an array of samples with shape (3, 5), you can write ``` sample = np.random.rand(3, 5) ``` or ``` sample = np.random.random_sample((3, 5)) ``` (Really, that's it.) Update As of version 1.17, NumPy has a new random API. The recommended method for generating samples from the uniform distribution on [0, 1) is: ``` >>> rng = np.random.default_rng() # Create a default Generator. >>> rng.random(size=10) # Generate 10 samples. array([0.00416913, 0.31533329, 0.19057857, 0.48732511, 0.40638395, 0.32165646, 0.02597142, 0.19788567, 0.08142055, 0.15755424]) ``` The new Generator class does not have the rand() or random_sample() methods. 
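To mirror the shape examples further up with the new API (a small sketch; the seed value is arbitrary and only makes the output repeatable):

```python
import numpy as np

rng = np.random.default_rng(42)  # seeded Generator, arbitrary seed
sample = rng.random((3, 5))      # shape is a single tuple, like random_sample
print(sample.shape)              # (3, 5)
```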
There is a uniform() method that allows you to specify the lower and upper bounds of the distribution. E.g. ``` >>> rng.uniform(1, 2, size=10) array([1.75573298, 1.79862591, 1.53700962, 1.29183769, 1.16439681, 1.64413869, 1.7675135 , 1.02121057, 1.37345967, 1.73589452]) ``` The old functions in the numpy.random namespace will continue to work, but they are considered \"frozen\", with no ongoing development. If you are writing new code, and you don't have to support pre-1.17 versions of numpy, it is recommended that you use the new random API.", "best_answers_score":0.8, "library_name":"numpy", "question_url":"https:\/\/stackoverflow.com\/questions\/47231852\/np-random-rand-vs-np-random-random", "best_answers_votes":72, "question_length":477, "response_length":1676 }, { "question":"Convert numpy array type and values from Float64 to Float32 I am trying to convert threshold array(pickle file of isolation forest from scikit learn) of type from Float64 to Float32 ``` for i in range(len(tree.tree_.threshold)): tree.tree_.threshold[i] = tree.tree_.threshold[i].astype(np.float32) ``` \u200b Then Printing it ``` for value in tree.tree_.threshold[:5]: print(type(value)) print(value) ``` the output i am getting is : ``` 526226.0 91.9514312744 3.60330319405 -2.0 -2.0 ``` I am not getting a proper conversion to Float32. I want to convert values and their type to Float32, Did anybody have any workaround this ?", "response":"The problem is that you do not do any type conversion of the numpy array. You calculate a float32 variable and put it as an entry into a float64 numpy array. 
numpy then converts it properly back to float64. Try something like this: ``` a = np.zeros(4,dtype=\"float64\") print a.dtype print type(a[0]) a = np.float32(a) print a.dtype print type(a[0]) ``` The output (tested with python 2.7) ``` float64 <type 'numpy.float64'> float32 <type 'numpy.float32'> ``` a is in your case the array tree.tree_.threshold", "best_answers_score":0.8, "library_name":"numpy", "question_url":"https:\/\/stackoverflow.com\/questions\/45955186\/convert-numpy-array-type-and-values-from-float64-to-float32", "best_answers_votes":59, "question_length":628, "response_length":460 }, { "question":"Distributing Cython based extensions using LAPACK I am writing a Python module that includes Cython extensions and uses LAPACK (and BLAS). I am open to using either clapack or lapacke, or some kind of f2c or f2py solution if necessary. What is important is that I am able to call lapack and blas routines from Cython in tight loops without Python call overhead. I've found one example here. However, that example depends on SAGE. I want my module to be installable without installing SAGE, since my users are not likely to want or need SAGE for anything else. My users are likely to have packages like numpy, scipy, pandas, and scikit learn installed, so those would be reasonable dependencies. What is the best combination of interfaces to use, and what would the minimal setup.py file look like that could fetch the necessary information (from numpy, scipy, etc.) for compilation? EDIT: Here is what I ended up doing. It works on my macbook, but I have no idea how portable it is. Surely there's a better way.
``` from distutils.core import setup from distutils.extension import Extension from Cython.Distutils import build_ext import numpy from Cython.Build import cythonize from numpy.distutils.system_info import get_info # TODO: This cannot be the right way blas_include = get_info('blas_opt')['extra_compile_args'][1][2:] includes = [blas_include,numpy.get_include()] setup( cmdclass = {'build_ext': build_ext}, ext_modules = cythonize([Extension(\"cylapack\", [\"cylapack.pyx\"], include_dirs = includes, libraries=['blas','lapack']) ]) ) ``` This works because, on my macbook, the clapack.h header file is in the same directory as cblas.h. I can then do this in my pyx file: ``` ctypedef np.int32_t integer cdef extern from \"cblas.h\": double cblas_dnrm2(int N,double *X, int incX) cdef extern from \"clapack.h\": integer dgelsy_(integer *m, integer *n, integer *nrhs, double *a, integer *lda, double *b, integer *ldb, integer * jpvt, double *rcond, integer *rank, double *work, integer * lwork, integer *info) ```", "response":"If I have understood the question correctly, you could make use of SciPy's Cython wrappers for BLAS and LAPACK routines. These wrappers are documented here: BLAS LAPACK As the documentation states, you are responsible for checking that any arrays that you pass to these functions are aligned correctly for the Fortran routines. You can simply import and use these functions as needed in your .pyx file. For instance: ``` from scipy.linalg.cython_blas cimport dnrm2 from scipy.linalg.cython_lapack cimport dgelsy ``` Given that this is well-tested, widely-used code that runs on different platforms, I'd argue that it is a good candidate for reliably distributing Cython extensions that directly call BLAS and LAPACK routines. If you do not want your code to have a dependency on the entirety of SciPy, you can find many of the relevant files for these wrapper functions in SciPy's linalg directory here. 
A useful reference is these lines of setup.py which list the source and header files. Note that a Fortran compiler is required! In theory it should be possible to isolate only the source files here that are needed to compile the BLAS and LAPACK Cython wrappers and then bundle them as an independent extension with your module. In practice this is very fiddly to do. The build process for the linalg submodule requires some Python functions to aid the compilation on different platforms (e.g. from here). Building also relies upon other C and Fortran source files (here), the paths of which are hard-coded into these Python functions. Clearly a lot of work has gone into making sure that SciPy compiles properly on different operating systems and architectures. I'm sure it is possible to do, but after shuffling files about and tweaking paths, I have not yet found the right way to build this part of the linalg submodule independently from the rest of SciPy. Should I find the correct way, I'll be sure to update this answer.", "best_answers_score":0.8, "library_name":"numpy", "question_url":"https:\/\/stackoverflow.com\/questions\/14864895\/distributing-cython-based-extensions-using-lapack", "best_answers_votes":5, "question_length":2016, "response_length":1931 }, { "question":"numpy: Efficiently avoid 0s when taking log(matrix) ``` from numpy import * m = array([[1,0], [2,3]]) ``` I would like to compute the element-wise log2(m), but only in the places where m is not 0. In those places, I would like to have 0 as a result. I am now fighting against: ``` RuntimeWarning: divide by zero encountered in log2 ``` Try 1: using where ``` res = where(m != 0, log2(m), 0) ``` which computes me the correct result, but I still get logged a RuntimeWarning: divide by zero encountered in log2. It looks like (and syntactically it is quite obvious) numpy still computes log2(m) on the full matrix and only afterwards where picks the values to keep. I would like to avoid this warning. 
Try 2: using masks ``` from numpy import ma res = ma.filled(log2(ma.masked_equal(m, 0)), 0) ``` Sure masking away the zeros will prevent log2 to get applied to them, won't it? Unfortunately not: We still get RuntimeWarning: divide by zero encountered in log2. Even though the matrix is masked, log2 still seems to be applied to every element. How can I efficiently compute the element-wise log of a numpy array without getting division-by-zero warnings? Of course I could temporarily disable the logging of these warnings using seterr, but that doesn't look like a clean solution. And sure a double for loop would help with treating 0s specially, but defeats the efficiency of numpy. Any ideas?", "response":"Another option is to use the where parameter of numpy's ufuncs: ``` m = np.array([[1., 0], [2, 3]]) res = np.log2(m, out=np.zeros_like(m, dtype=np.float64), where=(m!=0)) ``` No RuntimeWarning is raised, and zeros are introduced where the log is not computed. 1 Note, courtesy of @JStrahl's comment, adding the argument dtype=np.float64 to np.zeros_like() avoids universal function (ufunc) (e.g. log2, log10 etc.) casting errors when m is not of the np.float64 data type.", "best_answers_score":0.8, "library_name":"numpy", "question_url":"https:\/\/stackoverflow.com\/questions\/21752989\/numpy-efficiently-avoid-0s-when-taking-logmatrix", "best_answers_votes":46, "question_length":1394, "response_length":471 }, { "question":"Plotting a decision boundary separating 2 classes using Matplotlib's pyplot I could really use a tip to help me plotting a decision boundary to separate to classes of data. I created some sample data (from a Gaussian distribution) via Python NumPy. In this case, every data point is a 2D coordinate, i.e., a 1 column vector consisting of 2 rows. 
E.g., ``` [ 1 2 ] ``` Let's assume I have 2 classes, class1 and class2, and I created 100 data points for class1 and 100 data points for class2 via the code below (assigned to the variables x1_samples and x2_samples). ``` mu_vec1 = np.array([0,0]) cov_mat1 = np.array([[2,0],[0,2]]) x1_samples = np.random.multivariate_normal(mu_vec1, cov_mat1, 100) mu_vec1 = mu_vec1.reshape(1,2).T # to 1-col vector mu_vec2 = np.array([1,2]) cov_mat2 = np.array([[1,0],[0,1]]) x2_samples = np.random.multivariate_normal(mu_vec2, cov_mat2, 100) mu_vec2 = mu_vec2.reshape(1,2).T ``` When I plot the data points for each class, it would look like this: Now, I came up with an equation for an decision boundary to separate both classes and would like to add it to the plot. However, I am not really sure how I can plot this function: ``` def decision_boundary(x_vec, mu_vec1, mu_vec2): g1 = (x_vec-mu_vec1).T.dot((x_vec-mu_vec1)) g2 = 2*( (x_vec-mu_vec2).T.dot((x_vec-mu_vec2)) ) return g1 - g2 ``` I would really appreciate any help! EDIT: Intuitively (If I did my math right) I would expect the decision boundary to look somewhat like this red line when I plot the function...", "response":"Your question is more complicated than a simple plot : you need to draw the contour which will maximize the inter-class distance. Fortunately it's a well-studied field, particularly for SVM machine learning. 
The easiest method is to download the scikit-learn module, which provides a lot of cool methods to draw boundaries: scikit-learn: Support Vector Machines Code : ``` # -*- coding: utf-8 -*- import numpy as np import matplotlib from matplotlib import pyplot as plt import scipy from sklearn import svm mu_vec1 = np.array([0,0]) cov_mat1 = np.array([[2,0],[0,2]]) x1_samples = np.random.multivariate_normal(mu_vec1, cov_mat1, 100) mu_vec1 = mu_vec1.reshape(1,2).T # to 1-col vector mu_vec2 = np.array([1,2]) cov_mat2 = np.array([[1,0],[0,1]]) x2_samples = np.random.multivariate_normal(mu_vec2, cov_mat2, 100) mu_vec2 = mu_vec2.reshape(1,2).T fig = plt.figure() plt.scatter(x1_samples[:,0],x1_samples[:,1], marker='+') plt.scatter(x2_samples[:,0],x2_samples[:,1], c= 'green', marker='o') X = np.concatenate((x1_samples,x2_samples), axis = 0) Y = np.array([0]*100 + [1]*100) C = 1.0 # SVM regularization parameter clf = svm.SVC(kernel = 'linear', gamma=0.7, C=C ) clf.fit(X, Y) ``` Linear Plot ``` w = clf.coef_[0] a = -w[0] \/ w[1] xx = np.linspace(-5, 5) yy = a * xx - (clf.intercept_[0]) \/ w[1] plt.plot(xx, yy, 'k-') ``` MultiLinear Plot ``` C = 1.0 # SVM regularization parameter clf = svm.SVC(kernel = 'rbf', gamma=0.7, C=C ) clf.fit(X, Y) h = .02 # step size in the mesh # create a mesh to plot in x_min, x_max = X[:, 0].min() - 1, X[:, 0].max() + 1 y_min, y_max = X[:, 1].min() - 1, X[:, 1].max() + 1 xx, yy = np.meshgrid(np.arange(x_min, x_max, h), np.arange(y_min, y_max, h)) # Plot the decision boundary. For that, we will assign a color to each # point in the mesh [x_min, m_max]x[y_min, y_max]. 
Z = clf.predict(np.c_[xx.ravel(), yy.ravel()]) # Put the result into a color plot Z = Z.reshape(xx.shape) plt.contour(xx, yy, Z, cmap=plt.cm.Paired) ``` Implementation If you want to implement it yourself, you need to solve the underlying quadratic programming problem (see the Wikipedia article on support vector machines). Unfortunately, for non-linear boundaries like the one you draw, it's a difficult problem relying on a kernel trick but there isn't a clear cut solution.", "best_answers_score":0.8, "library_name":"numpy", "question_url":"https:\/\/stackoverflow.com\/questions\/22294241\/plotting-a-decision-boundary-separating-2-classes-using-matplotlibs-pyplot", "best_answers_votes":54, "question_length":1505, "response_length":2243 }, { "question":"Deprecation status of the NumPy matrix class What is the status of the matrix class in NumPy? I keep being told that I should use the ndarray class instead. Is it worth\/safe using the matrix class in new code I write? I don't understand why I should use ndarrays instead.", "response":"tl; dr: the numpy.matrix class is getting deprecated. There are some high-profile libraries that depend on the class as a dependency (the largest one being scipy.sparse) which hinders proper short-term deprecation of the class, but users are strongly encouraged to use the ndarray class (usually created using the numpy.array convenience function) instead. With the introduction of the @ operator for matrix multiplication a lot of the relative advantages of matrices have been removed. Why (not) the matrix class? numpy.matrix is a subclass of numpy.ndarray. It was originally meant for convenient use in computations involving linear algebra, but there are both limitations and surprising differences in how they behave compared to instances of the more general array class. Examples for fundamental differences in behaviour: Shapes: arrays can have an arbitrary number of dimensions ranging from 0 to infinity (or 32). Matrices are always two-dimensional.
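A quick sketch of that shape rule (the concrete values are arbitrary):

```python
import numpy as np

m = np.matrix([[1, 2], [3, 4]])
print(m.ndim)                      # prints 2: a matrix is always 2-dimensional
a = np.arange(8).reshape(2, 2, 2)  # an ndarray can have any number of dims
print(a.ndim)                      # prints 3
```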
Oddly enough, while a matrix can't be created with more dimensions, it's possible to inject singleton dimensions into a matrix to end up with technically a multidimensional matrix: np.matrix(np.random.rand(2,3))[None,...,None].shape == (1,2,3,1) (not that this is of any practical importance). Indexing: indexing arrays can give you arrays of any size depending on how you are indexing it. Indexing expressions on matrices will always give you a matrix. This means that both arr[:,0] and arr[0,:] for a 2d array gives you a 1d ndarray, while mat[:,0] has shape (N,1) and mat[0,:] has shape (1,M) in case of a matrix. Arithmetic operations: the main reason for using matrices in the old days was that arithmetic operations (in particular, multiplication and power) on matrices performs matrix operations (matrix multiplication and matrix power). The same for arrays results in elementwise multiplication and power. Consequently mat1 * mat2 is valid if mat1.shape[1] == mat2.shape[0], but arr1 * arr2 is valid if arr1.shape == arr2.shape (and of course the result means something completely different). Also, surprisingly, mat1 \/ mat2 performs elementwise division of two matrices. This behaviour is probably inherited from ndarray but makes no sense for matrices, especially in light of the meaning of *. Special attributes: matrices have a few handy attributes in addition to what arrays have: mat.A and mat.A1 are array views with the same value as np.array(mat) and np.array(mat).ravel(), respectively. mat.T and mat.H are the transpose and conjugate transpose (adjoint) of the matrix; arr.T is the only such attribute that exists for the ndarray class. Finally, mat.I is the inverse matrix of mat. It's easy enough writing code that works either for ndarrays or for matrices. But when there's a chance that the two classes have to interact in code, things start to become difficult. 
In particular, a lot of code could work naturally for subclasses of ndarray, but matrix is an ill-behaved subclass that can easily break code that tries to rely on duck typing. Consider the following example using arrays and matrices of shape (3,4): ``` import numpy as np shape = (3, 4) arr = np.arange(np.prod(shape)).reshape(shape) # ndarray mat = np.matrix(arr) # same data in a matrix print((arr + mat).shape) # (3, 4), makes sense print((arr[0,:] + mat[0,:]).shape) # (1, 4), makes sense print((arr[:,0] + mat[:,0]).shape) # (3, 3), surprising ``` Adding slices of the two objects is catastrophically different depending on the dimension along which we slice. Addition on both matrices and arrays happens elementwise when the shapes are the same. The first two cases in the above are intuitive: we add two arrays (matrices), then we add two rows from each. The last case is really surprising: we probably meant to add two columns and ended up with a matrix. The reason of course is that arr[:,0] has shape (3,) which is compatible with shape (1,3), but mat[:,0] has shape (3,1). The two are broadcast together to shape (3,3). Finally, the largest advantage of the matrix class (i.e. the possibility to concisely formulate complicated matrix expressions involving a lot of matrix products) was removed when the @ matmul operator was introduced in python 3.5, first implemented in numpy 1.10. Compare the computation of a simple quadratic form: ``` v = np.random.rand(3); v_row = np.matrix(v) arr = np.random.rand(3,3); mat = np.matrix(arr) print(v.dot(arr.dot(v))) # pre-matmul style # 0.713447037658556, yours will vary print(v_row * mat * v_row.T) # pre-matmul matrix style # [[0.71344704]] print(v @ arr @ v) # matmul style # 0.713447037658556 ``` Looking at the above it's clear why the matrix class was widely preferred for working with linear algebra: the infix * operator made the expressions much less verbose and much easier to read.
However, we get the same readability with the @ operator using modern python and numpy. Furthermore, note that the matrix case gives us a matrix of shape (1,1) which should technically be a scalar. This also implies that we can't multiply a column vector with this \"scalar\": (v_row * mat * v_row.T) * v_row.T in the above example raises an error because matrices with shape (1,1) and (3,1) can't be multiplied in this order. For completeness' sake it should be noted that while the matmul operator fixes the most common scenario in which ndarrays are suboptimal compared to matrices, there are still a few shortcomings in handling linear algebra elegantly using ndarrays (although people still tend to believe that overall it's preferable to stick to the latter). One such example is matrix power: mat ** 3 is the proper third matrix power of a matrix (whereas it's the elementwise cube of an ndarray). Unfortunately numpy.linalg.matrix_power is quite more verbose. Furthermore, in-place matrix multiplication only works fine for the matrix class. In contrast, while both PEP 465 and the python grammar allow @= as an augmented assignment with matmul, this is not implemented for ndarrays as of numpy 1.15. Deprecation history Considering the above complications concerning the matrix class there have been recurring discussions of its possible deprecation for a long time. The introduction of the @ infix operator which was a huge prerequisite for this process happened in September 2015. Unfortunately the advantages of the matrix class in earlier days meant that its use spread wide. There are libraries that depend on the matrix class (one of the most important dependent is scipy.sparse which uses both numpy.matrix semantics and often returns matrices when densifying), so fully deprecating them has always been problematic. Already in a numpy mailing list thread from 2009 I found remarks such as numpy was designed for general purpose computational needs, not any one branch of math. 
nd-arrays are very useful for lots of things. In contrast, Matlab, for instance, was originally designed to be an easy front-end to a linear algebra package. Personally, when I used Matlab, I found that very awkward -- I was usually writing 100s of lines of code that had nothing to do with linear algebra, for every few lines that actually did matrix math. So I much prefer numpy's way -- the linear algebra lines of code are longer and more awkward, but the rest is much better. The Matrix class is the exception to this: it was written to provide a natural way to express linear algebra. However, things get a bit tricky when you mix matrices and arrays, and even when sticking with matrices there are confusions and limitations -- how do you express a row vs a column vector? what do you get when you iterate over a matrix? etc. There has been a bunch of discussion about these issues, a lot of good ideas, a little bit of consensus about how to improve it, but no one with the skill to do it has enough motivation to do it. These reflect the benefits and difficulties arising from the matrix class. The earliest suggestion for deprecation I could find is from 2008, although partly motivated by unintuitive behaviour that has changed since (in particular, slicing and iterating over a matrix will result in (row) matrices as one would most likely expect). The suggestion showed both that this is a highly controversial subject and that infix operators for matrix multiplication are crucial. The next mention I could find is from 2014 which turned out to be a very fruitful thread. The ensuing discussion raises the question of handling numpy subclasses in general, which general theme is still very much on the table.
There is also strong criticism: What sparked this discussion (on Github) is that it is not possible to write duck-typed code that works correctly for: ndarrays matrices scipy.sparse sparse matrices The semantics of all three are different; scipy.sparse is somewhere between matrices and ndarrays with some things working randomly like matrices and others not. With some hyperbole added, one could say that from the developer point of view, np.matrix is doing and has already done evil just by existing, by messing up the unstated rules of ndarray semantics in Python. followed by a lot of valuable discussion of the possible futures for matrices. Even with no @ operator at the time there is a lot of thought given to the deprecation of the matrix class and how it might affect users downstream. As far as I can tell this discussion has directly led to the inception of PEP 465 introducing matmul. In early 2015: In my opinion, a \"fixed\" version of np.matrix should (1) not be a np.ndarray subclass and (2) exist in a third party library not numpy itself. I don't think it's really feasible to fix np.matrix in its current state as an ndarray subclass, but even a fixed matrix class doesn't really belong in numpy itself, which has too long release cycles and compatibility guarantees for experimentation -- not to mention that the mere existence of the matrix class in numpy leads new users astray. Once the @ operator had been available for a while the discussion of deprecation surfaced again, reraising the topic about the relationship of matrix deprecation and scipy.sparse. Eventually, the first action to deprecate numpy.matrix was taken in late November 2017. Regarding dependents of the class: How would the community handle the scipy.sparse matrix subclasses? These are still in common use. They're not going anywhere for quite a while (until the sparse ndarrays materialize at least). Hence np.matrix needs to be moved, not deleted.
(source) and while I want to get rid of np.matrix as much as anyone, doing that anytime soon would be really disruptive. There are tons of little scripts out there written by people who didn't know better; we do want them to learn not to use np.matrix but breaking all their scripts is a painful way to do that. There are major projects like scikit-learn that simply have no alternative to using np.matrix, because of scipy.sparse. So I think the way forward is something like: Now or whenever someone gets together a PR: issue a PendingDeprecationWarning in np.matrix.__init__ (unless it kills performance for scikit-learn and friends), and put a big warning box at the top of the docs. The idea here is to not actually break anyone's code, but start to get out the message that we definitely don't think anyone should use this if they have any alternative. After there's an alternative to scipy.sparse: ramp up the warnings, possibly all the way to FutureWarning so that existing scripts don't break but they do get noisy warnings. Eventually, if we think it will reduce maintenance costs: split it into a subpackage (source). Status quo As of May 2018 (numpy 1.15, relevant pull request and commit) the matrix class docstring contains the following note: It is no longer recommended to use this class, even for linear algebra. Instead use regular arrays. The class may be removed in the future. And the documentation page for standard array subclasses says It is strongly advised not to use the matrix subclass. As described below, it makes writing functions that deal consistently with matrices and regular arrays very difficult. Currently, they are mainly used for interacting with scipy.sparse. We hope to provide an alternative for this use, however, and eventually remove the matrix subclass. At the same time a PendingDeprecationWarning has been added to matrix.__new__.
Unfortunately, deprecation warnings are (almost always) silenced by default, so most end-users of numpy will not see this strong hint. Finally, the numpy roadmap as of November 2018 mentions multiple related topics as one of the \"tasks and features [the numpy community] will be investing resources in\": Some things inside NumPy do not actually match the Scope of NumPy. A backend system for numpy.fft (so that e.g. fft-mkl doesn\u2019t need to monkeypatch numpy) Rewrite masked arrays to not be a ndarray subclass \u2013 maybe in a separate project? MaskedArray as a duck-array type, and\/or dtypes that support missing values Write a strategy on how to deal with overlap between numpy and scipy for linalg and fft (and implement it). Deprecate np.matrix It's likely that this state will stay as long as larger libraries\/many users (and in particular scipy.sparse) rely on the matrix class. However, there's ongoing discussion to move scipy.sparse to depend on something else, such as pydata\/sparse. In SciPy 1.8 (released February 2022) a sparse array API was introduced for early testing and feedback, with the potential to remove the np.matrix legacy eventually. This replicates the SciPy sparse containers with an interface that matches the behaviour of NumPy arrays (rather than matrices). Maintainers of downstream libraries such as NetworkX and scikit-learn are eager to switch to the new API as soon as possible. Irrespective of the developments of the deprecation process users should use the ndarray class in new code and preferably port older code if possible. 
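Since the warning is silenced by default, one practical aid when porting older code is to opt in to it explicitly and see whether np.matrix is still being instantiated anywhere. A minimal sketch (my own illustration; the warning is only emitted on NumPy 1.15 and later):

```python
import warnings
import numpy as np

# PendingDeprecationWarning is ignored by default; record it explicitly
with warnings.catch_warnings(record=True) as caught:
    warnings.simplefilter('always')
    m = np.matrix([[1, 2], [3, 4]])  # code under inspection

pending = [w for w in caught
           if issubclass(w.category, PendingDeprecationWarning)]
print(len(pending))  # 1 on NumPy >= 1.15
```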
Eventually the matrix class will probably end up in a separate package to remove some of the burdens caused by its existence in its current form.", "best_answers_score":0.8, "library_name":"numpy", "question_url":"https:\/\/stackoverflow.com\/questions\/53254738\/deprecation-status-of-the-numpy-matrix-class", "best_answers_votes":95, "question_length":271, "response_length":14028 }, { "question":"gradient descent using python and numpy ``` def gradient(X_norm,y,theta,alpha,m,n,num_it): temp=np.array(np.zeros_like(theta,float)) for i in range(0,num_it): h=np.dot(X_norm,theta) #temp[j]=theta[j]-(alpha\/m)*( np.sum( (h-y)*X_norm[:,j][np.newaxis,:] ) ) temp[0]=theta[0]-(alpha\/m)*(np.sum(h-y)) temp[1]=theta[1]-(alpha\/m)*(np.sum((h-y)*X_norm[:,1])) theta=temp return theta X_norm,mean,std=featureScale(X) #length of X (number of rows) m=len(X) X_norm=np.array([np.ones(m),X_norm]) n,m=np.shape(X_norm) num_it=1500 alpha=0.01 theta=np.zeros(n,float)[:,np.newaxis] X_norm=X_norm.transpose() theta=gradient(X_norm,y,theta,alpha,m,n,num_it) print theta ``` My theta from the above code is 100.2 100.2, but it should be 100.2 61.09 in matlab which is correct.", "response":"I think your code is a bit too complicated and it needs more structure, because otherwise you'll be lost in all equations and operations. In the end this regression boils down to four operations: Calculate the hypothesis h = X * theta Calculate the loss = h - y and maybe the squared cost (loss^2)\/2m Calculate the gradient = X' * loss \/ m Update the parameters theta = theta - alpha * gradient In your case, I guess you have confused m with n. Here m denotes the number of examples in your training set, not the number of features. 
Let's have a look at my variation of your code: ``` import numpy as np import random # m denotes the number of examples here, not the number of features def gradientDescent(x, y, theta, alpha, m, numIterations): xTrans = x.transpose() for i in range(0, numIterations): hypothesis = np.dot(x, theta) loss = hypothesis - y # avg cost per example (the 2 in 2*m doesn't really matter here. # But to be consistent with the gradient, I include it) cost = np.sum(loss ** 2) \/ (2 * m) print(\"Iteration %d | Cost: %f\" % (i, cost)) # avg gradient per example gradient = np.dot(xTrans, loss) \/ m # update theta = theta - alpha * gradient return theta def genData(numPoints, bias, variance): x = np.zeros(shape=(numPoints, 2)) y = np.zeros(shape=numPoints) # basically a straight line for i in range(0, numPoints): # bias feature x[i][0] = 1 x[i][1] = i # our target variable y[i] = (i + bias) + random.uniform(0, 1) * variance return x, y # gen 100 points with a bias of 25 and 10 variance as a bit of noise x, y = genData(100, 25, 10) m, n = np.shape(x) numIterations= 100000 alpha = 0.0005 theta = np.ones(n) theta = gradientDescent(x, y, theta, alpha, m, numIterations) print(theta) ``` At first I create a small random dataset which should look like this: As you can see I also added the generated regression line and formula that was calculated by excel. You need to take care about the intuition of the regression using gradient descent. As you do a complete batch pass over your data X, you need to reduce the m-losses of every example to a single weight update. In this case, this is the average of the sum over the gradients, thus the division by m. The next thing you need to take care about is to track the convergence and adjust the learning rate. For that matter you should always track your cost every iteration, maybe even plot it. 
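One way to automate that tracking is to stop once the cost improvement per iteration falls below a tolerance. A sketch layered on the same update rule (the tol parameter and the function name are my additions, not part of the code above):

```python
import numpy as np

def gradient_descent_converged(x, y, theta, alpha, tol=1e-9, max_iters=100000):
    m = len(y)
    prev_cost = np.inf
    cost = np.inf
    for _ in range(max_iters):
        loss = x.dot(theta) - y
        cost = np.sum(loss ** 2) / (2 * m)
        if prev_cost - cost < tol:  # cost has plateaued
            break
        prev_cost = cost
        theta = theta - alpha * x.T.dot(loss) / m
    return theta, cost

# Tiny example: fit y = 2*x1 with a bias column plus one feature
x = np.column_stack([np.ones(50), np.arange(50, dtype=float)])
y = 2.0 * x[:, 1]
theta, cost = gradient_descent_converged(x, y, np.zeros(2), alpha=0.0005)
print(theta)  # approaches [0, 2]
```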
If you run my example, the theta returned will look like this: ``` Iteration 99997 | Cost: 47883.706462 Iteration 99998 | Cost: 47883.706462 Iteration 99999 | Cost: 47883.706462 [ 29.25567368 1.01108458] ``` Which is actually quite close to the equation that was calculated by excel (y = x + 30). Note that as we passed the bias into the first column, the first theta value denotes the bias weight.", "best_answers_score":0.8, "library_name":"numpy", "question_url":"https:\/\/stackoverflow.com\/questions\/17784587\/gradient-descent-using-python-and-numpy", "best_answers_votes":146, "question_length":757, "response_length":2767 }, { "question":"find length of sequences of identical values in a numpy array (run length encoding) In a pylab program (which could probably be a matlab program as well) I have a numpy array of numbers representing distances: d[t] is the distance at time t (and the timespan of my data is len(d) time units). The events I'm interested in are when the distance is below a certain threshold, and I want to compute the duration of these events. It's easy to get an array of booleans with b = d0 and b[i-1] and b[i]: counter+=1 if (b[i-1] and not b[i]) or i==len(b)-1: durations.append(counter) print '.' ```", "response":"Fully numpy vectorized and generic RLE for any array (works with strings, booleans etc too). Outputs tuple of run lengths, start positions, and values. ``` import numpy as np def rle(inarray): \"\"\" run length encoding. Partial credit to R rle function. 
Multi datatype arrays catered for including non Numpy returns: tuple (runlengths, startpositions, values) \"\"\" ia = np.asarray(inarray) # force numpy n = len(ia) if n == 0: return (None, None, None) else: y = ia[1:] != ia[:-1] # pairwise unequal (string safe) i = np.append(np.where(y), n - 1) # must include last element posi z = np.diff(np.append(-1, i)) # run lengths p = np.cumsum(np.append(0, z))[:-1] # positions return(z, p, ia[i]) ``` Pretty fast (i7): ``` xx = np.random.randint(0, 5, 1000000) %timeit yy = rle(xx) 100 loops, best of 3: 18.6 ms per loop ``` Multiple data types: ``` rle([True, True, True, False, True, False, False]) Out[8]: (array([3, 1, 1, 2]), array([0, 3, 4, 5]), array([ True, False, True, False], dtype=bool)) rle(np.array([5, 4, 4, 4, 4, 0, 0])) Out[9]: (array([1, 4, 2]), array([0, 1, 5]), array([5, 4, 0])) rle([\"hello\", \"hello\", \"my\", \"friend\", \"okay\", \"okay\", \"bye\"]) Out[10]: (array([2, 1, 1, 2, 1]), array([0, 2, 3, 4, 6]), array(['hello', 'my', 'friend', 'okay', 'bye'], dtype='|S6')) ``` Same results as Alex Martelli above: ``` xx = np.random.randint(0, 2, 20) xx Out[60]: array([1, 1, 1, 1, 0, 0, 1, 1, 1, 1, 1, 0, 1, 1, 0, 1, 1, 1, 1, 1]) am = runs_of_ones_array(xx) tb = rle(xx) am Out[63]: array([4, 5, 2, 5]) tb[0][tb[2] == 1] Out[64]: array([4, 5, 2, 5]) %timeit runs_of_ones_array(xx) 10000 loops, best of 3: 28.5 \u00b5s per loop %timeit rle(xx) 10000 loops, best of 3: 38.2 \u00b5s per loop ``` Slightly slower than Alex (but still very fast), and much more flexible.", "best_answers_score":0.8, "library_name":"numpy", "question_url":"https:\/\/stackoverflow.com\/questions\/1066758\/find-length-of-sequences-of-identical-values-in-a-numpy-array-run-length-encodi", "best_answers_votes":81, "question_length":588, "response_length":1759 }, { "question":"Normalize numpy array columns in python I have a numpy array where each cell of a specific row represents a value for a feature. I store all of them in an 100*4 matrix. 
``` A B C 1000 10 0.5 765 5 0.35 800 7 0.09 ``` Any idea how I can normalize rows of this numpy.array where each value is between 0 and 1? My desired output is: ``` A B C 1 1 1 0.765 0.5 0.7 0.8 0.7 0.18(which is 0.09\/0.5) ```", "response":"If I understand correctly, what you want to do is divide by the maximum value in each column. You can do this easily using broadcasting. Starting with your example array: ``` import numpy as np x = np.array([[1000, 10, 0.5], [ 765, 5, 0.35], [ 800, 7, 0.09]]) x_normed = x \/ x.max(axis=0) print(x_normed) # [[ 1. 1. 1. ] # [ 0.765 0.5 0.7 ] # [ 0.8 0.7 0.18 ]] ``` x.max(0) takes the maximum over the 0th dimension (i.e. rows). This gives you a vector of size (ncols,) containing the maximum value in each column. You can then divide x by this vector in order to normalize your values such that the maximum value in each column will be scaled to 1. If x contains negative values you would need to subtract the minimum first: ``` x_normed = (x - x.min(0)) \/ x.ptp(0) ``` Here, x.ptp(0) returns the \"peak-to-peak\" (i.e. the range, max - min) along axis 0. This normalization also guarantees that the minimum value in each column will be 0.", "best_answers_score":0.8, "library_name":"numpy", "question_url":"https:\/\/stackoverflow.com\/questions\/29661574\/normalize-numpy-array-columns-in-python", "best_answers_votes":123, "question_length":395, "response_length":937 }, { "question":"How do I install SciPy on 64 bit Windows? How do I install SciPy on my system? For the NumPy part (that SciPy depends on) there is actually an installer for 64 bit Windows: numpy-1.3.0.win-amd64-py2.6.msi (is direct download URL, 2310144 bytes). Running the SciPy superpack installer results in this message in a dialog box: Cannot install. Python version 2.6 required, which was not found in the registry. I already have Python 2.6.2 installed (and a working Django installation in it), but I don't know about any Registry story. 
The registry entries seem to already exist: ``` REGEDIT4 [HKEY_LOCAL_MACHINE\\SOFTWARE\\Python] [HKEY_LOCAL_MACHINE\\SOFTWARE\\Python\\PythonCore] [HKEY_LOCAL_MACHINE\\SOFTWARE\\Python\\PythonCore\\2.6] [HKEY_LOCAL_MACHINE\\SOFTWARE\\Python\\PythonCore\\2.6\\Help] [HKEY_LOCAL_MACHINE\\SOFTWARE\\Python\\PythonCore\\2.6\\Help\\Main Python Documentation] @=\"D:\\\\Python262\\\\Doc\\\\python262.chm\" [HKEY_LOCAL_MACHINE\\SOFTWARE\\Python\\PythonCore\\2.6\\InstallPath] @=\"D:\\\\Python262\\\\\" [HKEY_LOCAL_MACHINE\\SOFTWARE\\Python\\PythonCore\\2.6\\InstallPath\\InstallGroup] @=\"Python 2.6\" [HKEY_LOCAL_MACHINE\\SOFTWARE\\Python\\PythonCore\\2.6\\Modules] [HKEY_LOCAL_MACHINE\\SOFTWARE\\Python\\PythonCore\\2.6\\PythonPath] @=\"D:\\\\Python262\\\\Lib;D:\\\\Python262\\\\DLLs;D:\\\\Python262\\\\Lib\\\\lib-tk\" ``` What I have done so far: Step 1 Downloaded the NumPy superpack installer numpy-1.3.0rc2-win32-superpack-python2.6.exe (direct download URL, 4782592 bytes). Running this installer resulted in the same message, \"Cannot install. Python version 2.6 required, which was not found in the registry.\". Update: there is actually an installer for NumPy that works - see beginning of the question. Step 2 Tried to install NumPy in another way. Downloaded the zip package numpy-1.3.0rc2.zip (direct download URL, 2404011 bytes), extracted the zip file in a normal way to a temporary directory, D:\\temp7\\numpy-1.3.0rc2 (where setup.py and README.txt is). I then opened a command line window and: ``` d: cd D:\\temp7\\numpy-1.3.0rc2 setup.py install ``` This ran for a long time and also included use of cl.exe (part of Visual Studio). Here is a nearly 5000 lines long transcript (230 KB). This seemed to work. 
I can now do this in Python: ``` import numpy as np np.random.random(10) ``` with this result: ``` array([ 0.35667511, 0.56099423, 0.38423629, 0.09733172, 0.81560421, 0.18813222, 0.10566666, 0.84968066, 0.79472597, 0.30997724]) ``` Step 3 Downloaded the SciPy superpack installer, scipy-0.7.1rc3- win32-superpack-python2.6.exe (direct download URL, 45597175 bytes). Running this installer resulted in the message listed in the beginning Step 4 Tried to install SciPy in another way. Downloaded the zip package scipy-0.7.1rc3.zip (direct download URL, 5506562 bytes), extracted the zip file in a normal way to a temporary directory, D:\\temp7\\scipy-0.7.1 (where setup.py and README.txt is). I then opened a command line window and: ``` d: cd D:\\temp7\\scipy-0.7.1 setup.py install ``` This did not achieve much - here is a transcript (about 95 lines). And it fails: ``` >>> import scipy as sp2 Traceback (most recent call last): File \"\", line 1, in ImportError: No module named scipy ``` Platform: Python 2.6.2 installed in directory D:\\Python262, Windows XP 64 bit SP2, 8 GB RAM, Visual Studio 2008 Professional Edition installed. The startup screen of the installed Python is: ``` Python 2.6.2 (r262:71605, Apr 14 2009, 22:46:50) [MSC v.1500 64 bit (AMD64)] on win32 Type \"help\", \"copyright\", \"credits\" or \"license\" for more information. 
>>> ``` Value of PATH, result from SET in a command line window: ``` Path=D:\\Perl64\\site\\bin;D:\\Perl64\\bin;C:\\Program Files (x86)\\PC Connectivity Solution\\;D:\\Perl\\site\\bin;D:\\Perl\\bin;C:\\WINDOWS\\system32;C:\\WINDOWS;C:\\WINDOWS\\System32\\Wbem;C:\\Program Files (x86)\\ATI Technologies\\ATI.ACE\\Core-Static;d:\\Program Files (x86)\\WinSCP\\;D:\\MassLynx\\;D:\\Program Files (x86)\\Analyst\\bin;d:\\Python262;d:\\Python262\\Scripts;D:\\Program Files (x86)\\TortoiseSVN\\bin;D:\\Program Files\\TortoiseSVN\\bin;C:\\WINDOWS\\system32\\WindowsPowerShell\\v1.0;D:\\Program Files (x86)\\IDM Computer Solutions\\UltraEdit\\ ```", "response":"Unofficial 64-bit installers for NumPy and SciPy are available at http:\/\/www.lfd.uci.edu\/~gohlke\/pythonlibs\/ Make sure that you download & install the packages (aka. wheels) that match your CPython version and bitness (ie. cp35 = Python v3.5; win_amd64 = x86_64). You'll want to install NumPy first; From a CMD prompt with administrator privileges for a system-wide (aka. Program Files) install: ``` C:\\>pip install numpy\u2011+mkl\u2011cp\u2011cpm\u2011.whl ``` Or include the --user flag to install to the current user's application folder (Typically %APPDATA%\\Python on Windows) from a non-admin CMD prompt: ``` C:\\>pip install --user numpy\u2011+mkl\u2011cp\u2011cpm\u2011.whl ``` Then do the same for SciPy: ``` C:\\>pip install [--user] scipy\u2011\u2011cp\u2011cpm\u2011.whl ``` Don't forget to replace , , and appropriately if you copy & paste any of these examples. And also that you must use the numpy & scipy packages from the ifd.uci.edu link above (or else you will get errors if you try to mix & match incompatible packages -- uninstall any conflicting packages first [ie. 
pip list]).", "best_answers_score":0.8, "library_name":"numpy", "question_url":"https:\/\/stackoverflow.com\/questions\/1517129\/how-do-i-install-scipy-on-64-bit-windows", "best_answers_votes":64, "question_length":4098, "response_length":1038 }, { "question":"Find matching rows in 2 dimensional numpy array I would like to get the index of a 2 dimensional Numpy array that matches a row. For example, my array is this: ``` vals = np.array([[0, 0], [1, 0], [2, 0], [0, 1], [1, 1], [2, 1], [0, 2], [1, 2], [2, 2], [0, 3], [1, 3], [2, 3], [0, 0], [1, 0], [2, 0], [0, 1], [1, 1], [2, 1], [0, 2], [1, 2], [2, 2], [0, 3], [1, 3], [2, 3]]) ``` I would like to get the index that matches the row [0, 1] which is index 3 and 15. When I do something like numpy.where(vals == [0 ,1]) I get... ``` (array([ 0, 3, 3, 4, 5, 6, 9, 12, 15, 15, 16, 17, 18, 21]), array([0, 0, 1, 1, 1, 0, 0, 0, 0, 1, 1, 1, 0, 0])) ``` I want index array([3, 15]).", "response":"You need the np.where function to get the indexes: ``` >>> np.where((vals == (0, 1)).all(axis=1)) (array([ 3, 15]),) ``` Or, as the documentation states: If only condition is given, return condition.nonzero() You could directly call .nonzero() on the array returned by .all: ``` >>> (vals == (0, 1)).all(axis=1).nonzero() (array([ 3, 15]),) ``` To dissassemble that: ``` >>> vals == (0, 1) array([[ True, False], [False, False], ... 
[ True, False], [False, False], [False, False]], dtype=bool) ``` and calling the .all method on that array (with axis=1) gives you True where both are True: ``` >>> (vals == (0, 1)).all(axis=1) array([False, False, False, True, False, False, False, False, False, False, False, False, False, False, False, True, False, False, False, False, False, False, False, False], dtype=bool) ``` and to get which indexes are True: ``` >>> np.where((vals == (0, 1)).all(axis=1)) (array([ 3, 15]),) ``` or ``` >>> (vals == (0, 1)).all(axis=1).nonzero() (array([ 3, 15]),) ``` I find my solution a bit more readable, but as unutbu points out, the following may be faster, and returns the same value as (vals == (0, 1)).all(axis=1): ``` >>> (vals[:, 0] == 0) & (vals[:, 1] == 1) ```", "best_answers_score":0.8, "library_name":"numpy", "question_url":"https:\/\/stackoverflow.com\/questions\/25823608\/find-matching-rows-in-2-dimensional-numpy-array", "best_answers_votes":97, "question_length":670, "response_length":1199 }, { "question":"How to check if a variable is either a python list, numpy array or pandas series I have a function that takes in a variable that would work if it is any of the following three types ``` 1. pandas Series 2. numpy array (ndarray) 3. python list ``` Any other type should be rejected. 
What is the most efficient way to check this?", "response":"You can do it using isinstance: ``` import pandas as pd import numpy as np def f(l): if isinstance(l,(list,pd.core.series.Series,np.ndarray)): print(5) else: raise Exception('wrong type') ``` Then f([1,2,3]) prints 5 while f(3.34) raises an error.", "best_answers_score":0.8, "library_name":"numpy", "question_url":"https:\/\/stackoverflow.com\/questions\/43748991\/how-to-check-if-a-variable-is-either-a-python-list-numpy-array-or-pandas-series", "best_answers_votes":65, "question_length":327, "response_length":247 }, { "question":"Getting the indices of several elements in a NumPy array at once Is there any way to get the indices of several elements in a NumPy array at once? E.g. ``` import numpy as np a = np.array([1, 2, 4]) b = np.array([1, 2, 3, 10, 4]) ``` I would like to find the index of each element of a in b, namely: [0,1,4]. I find the solution I am using a bit verbose: ``` import numpy as np a = np.array([1, 2, 4]) b = np.array([1, 2, 3, 10, 4]) c = np.zeros_like(a) for i, aa in np.ndenumerate(a): c[i] = np.where(b == aa)[0] print('c: {0}'.format(c)) ``` Output: ``` c: [0 1 4] ```", "response":"You could use in1d and nonzero (or where for that matter): ``` >>> np.in1d(b, a).nonzero()[0] array([0, 1, 4]) ``` This works fine for your example arrays, but in general the array of returned indices does not honour the order of the values in a. This may be a problem depending on what you want to do next. In that case, a much better answer is the one @Jaime gives here, using searchsorted: ``` >>> sorter = np.argsort(b) >>> sorter[np.searchsorted(b, a, sorter=sorter)] array([0, 1, 4]) ``` This returns the indices for values as they appear in a. 
For instance: ``` a = np.array([1, 2, 4]) b = np.array([4, 2, 3, 1]) >>> sorter = np.argsort(b) >>> sorter[np.searchsorted(b, a, sorter=sorter)] array([3, 1, 0]) # the other method would return [0, 1, 3] ```", "best_answers_score":0.8, "library_name":"numpy", "question_url":"https:\/\/stackoverflow.com\/questions\/32191029\/getting-the-indices-of-several-elements-in-a-numpy-array-at-once", "best_answers_votes":67, "question_length":570, "response_length":758 }, { "question":"What are some possible calculations with numpy or scipy that can return a NaN? [closed] Closed. This question needs to be more focused. It is not currently accepting answers. Want to improve this question? Guide the asker to update the question so it focuses on a single, specific problem. Narrowing the question will help others answer the question concisely. You may edit the question if you feel you can improve it yourself. If edited, the question will be reviewed and might be reopened. Closed 10 years ago. Improve this question What are the most common operations that would cause a NaN, in Python, which originate while working with NumPy or SciPy? 
For example: ``` 1e500 - 1e500 >>> nan ``` What is the reasoning for this behavior and why does it not return 0?", "response":"If you do any of the following without horsing around with the floating-point environment, you should get a NaN where you didn't have one before: 0\/0 (either sign on top and bottom) inf\/inf (either sign on top and bottom) inf - inf or (-inf) + inf or inf + (-inf) or (-inf) - (-inf) 0 * inf and inf * 0 (either sign on both factors) sqrt(x) when x < 0 ... feenableexcept(FE_INVALID); ```", "best_answers_score":0.8, "library_name":"numpy", "question_url":"https:\/\/stackoverflow.com\/questions\/25506281\/what-are-some-possible-calculations-with-numpy-or-scipy-that-can-return-a-nan", "best_answers_votes":74, "question_length":769, "response_length":380 }, { "question":"Python pandas: how to remove nan and -inf values I have the following dataframe ``` time X Y X_t0 X_tp0 X_t1 X_tp1 X_t2 X_tp2 0 0.002876 0 10 0 NaN NaN NaN NaN NaN 1 0.002986 0 10 0 NaN 0 NaN NaN NaN 2 0.037367 1 10 1 1.000000 0 NaN 0 NaN 3 0.037374 2 10 2 0.500000 1 1.000000 0 NaN 4 0.037389 3 10 3 0.333333 2 0.500000 1 1.000000 5 0.037393 4 10 4 0.250000 3 0.333333 2 0.500000 .... 1030308 9.962213 256 268 256 0.000000 256 0.003906 255 0.003922 1030309 10.041799 0 268 0 -inf 256 0.000000 256 0.003906 1030310 10.118960 0 268 0 NaN 0 -inf 256 0.000000 ``` I tried with the following ``` df.dropna(inplace=True) X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.40) X_train = X_train.drop('time', axis=1) X_train = X_train.drop('X_t1', axis=1) X_train = X_train.drop('X_t2', axis=1) X_test = X_test.drop('time', axis=1) X_test = X_test.drop('X_t1', axis=1) X_test = X_test.drop('X_t2', axis=1) X_test.fillna(X_test.mean(), inplace=True) X_train.fillna(X_train.mean(), inplace=True) y_train.fillna(y_train.mean(), inplace=True) ``` However, I am still getting this error ValueError: Input contains NaN, infinity or a value too large for dtype('float32'). 
whenever I try to fit a regression model with fit(X_train, y_train). How can we remove both the NaN and -inf values at the same time?", "response":"Use pd.DataFrame.isin and check for rows that have any with pd.DataFrame.any. Finally, use the boolean array to slice the dataframe. ``` df[~df.isin([np.nan, np.inf, -np.inf]).any(1)] time X Y X_t0 X_tp0 X_t1 X_tp1 X_t2 X_tp2 4 0.037389 3 10 3 0.333333 2.0 0.500000 1.0 1.000000 5 0.037393 4 10 4 0.250000 3.0 0.333333 2.0 0.500000 1030308 9.962213 256 268 256 0.000000 256.0 0.003906 255.0 0.003922 ```", "best_answers_score":0.8, "library_name":"numpy", "question_url":"https:\/\/stackoverflow.com\/questions\/45745085\/python-pandas-how-to-remove-nan-and-inf-values", "best_answers_votes":93, "question_length":1305, "response_length":403 }, { "question":"What is the easiest way to install BLAS and LAPACK for scipy? I would like to run a programme that someone else has prepared and it includes scipy. I have tried to install scipy with ``` pip install scipy ``` but it gives me a long error. I know there are ways with Anaconda and Canopy but I think these are long ways. I would like to have a short way.
I have also tried ``` G:\\determinator_Oskar>pip install scipy Collecting scipy Using cached scipy-0.16.1.tar.gz Building wheels for collected packages: scipy Running setup.py bdist_wheel for scipy Complete output from command g:\\myve\\scripts\\python.exe -c \"import setuptools; __file__='e:\\\\temp_n~1\\\\pip-build-1xigxu\\\\scipy\\\\setup.py';exec(compile(open(__f ile__).read().replace('\\r\\n', '\\n'), __file__, 'exec'))\" bdist_wheel -d e:\\temp_ n~1\\tmp07__zrpip-wheel-: lapack_opt_info: openblas_lapack_info: libraries openblas not found in ['g:\\\\myve\\\\lib', 'C:\\\\'] NOT AVAILABLE lapack_mkl_info: mkl_info: libraries mkl,vml,guide not found in ['g:\\\\myve\\\\lib', 'C:\\\\'] NOT AVAILABLE NOT AVAILABLE atlas_3_10_threads_info: Setting PTATLAS=ATLAS libraries tatlas,tatlas not found in g:\\myve\\lib libraries lapack_atlas not found in g:\\myve\\lib libraries tatlas,tatlas not found in C:\\ libraries lapack_atlas not found in C:\\ NOT AVAILABLE atlas_3_10_info: libraries satlas,satlas not found in g:\\myve\\lib libraries lapack_atlas not found in g:\\myve\\lib libraries satlas,satlas not found in C:\\ libraries lapack_atlas not found in C:\\ NOT AVAILABLE atlas_threads_info: Setting PTATLAS=ATLAS libraries ptf77blas,ptcblas,atlas not found in g:\\myve\\lib libraries lapack_atlas not found in g:\\myve\\lib libraries ptf77blas,ptcblas,atlas not found in C:\\ libraries lapack_atlas not found in C:\\ NOT AVAILABLE atlas_info: libraries f77blas,cblas,atlas not found in g:\\myve\\lib libraries lapack_atlas not found in g:\\myve\\lib libraries f77blas,cblas,atlas not found in C:\\ libraries lapack_atlas not found in C:\\ NOT AVAILABLE lapack_info: libraries lapack not found in ['g:\\\\myve\\\\lib', 'C:\\\\'] NOT AVAILABLE lapack_src_info: NOT AVAILABLE NOT AVAILABLE g:\\myve\\lib\\site-packages\\numpy\\distutils\\system_info.py:1552: UserWarning: Atlas (http:\/\/math-atlas.sourceforge.net\/) libraries not found. 
Directories to search for the libraries can be specified in the numpy\/distutils\/site.cfg file (section [atlas]) or by setting the ATLAS environment variable. warnings.warn(AtlasNotFoundError.__doc__) g:\\myve\\lib\\site-packages\\numpy\\distutils\\system_info.py:1563: UserWarning: Lapack (http:\/\/www.netlib.org\/lapack\/) libraries not found. Directories to search for the libraries can be specified in the numpy\/distutils\/site.cfg file (section [lapack]) or by setting the LAPACK environment variable. warnings.warn(LapackNotFoundError.__doc__) g:\\myve\\lib\\site-packages\\numpy\\distutils\\system_info.py:1566: UserWarning: Lapack (http:\/\/www.netlib.org\/lapack\/) sources not found. Directories to search for the sources can be specified in the numpy\/distutils\/site.cfg file (section [lapack_src]) or by setting the LAPACK_SRC environment variable. warnings.warn(LapackSrcNotFoundError.__doc__) Running from scipy source directory. Traceback (most recent call last): File \"\", line 1, in File \"e:\\temp_n~1\\pip-build-1xigxu\\scipy\\setup.py\", line 253, in setup_package() File \"e:\\temp_n~1\\pip-build-1xigxu\\scipy\\setup.py\", line 250, in setup_packa ge setup(**metadata) File \"g:\\myve\\lib\\site-packages\\numpy\\distutils\\core.py\", line 135, in setup config = configuration() File \"e:\\temp_n~1\\pip-build-1xigxu\\scipy\\setup.py\", line 175, in configurati on config.add_subpackage('scipy') File \"g:\\myve\\lib\\site-packages\\numpy\\distutils\\misc_util.py\", line 1001, in add_subpackage caller_level = 2) File \"g:\\myve\\lib\\site-packages\\numpy\\distutils\\misc_util.py\", line 970, in get_subpackage caller_level = caller_level + 1) File \"g:\\myve\\lib\\site-packages\\numpy\\distutils\\misc_util.py\", line 907, in _get_configuration_from_setup_py config = setup_module.configuration(*args) File \"scipy\\setup.py\", line 15, in configuration config.add_subpackage('linalg') File \"g:\\myve\\lib\\site-packages\\numpy\\distutils\\misc_util.py\", line 1001, 
in add_subpackage caller_level = 2) File \"g:\\myve\\lib\\site-packages\\numpy\\distutils\\misc_util.py\", line 970, in get_subpackage caller_level = caller_level + 1) File \"g:\\myve\\lib\\site-packages\\numpy\\distutils\\misc_util.py\", line 907, in _get_configuration_from_setup_py config = setup_module.configuration(*args) File \"scipy\\linalg\\setup.py\", line 20, in configuration raise NotFoundError('no lapack\/blas resources found') numpy.distutils.system_info.NotFoundError: no lapack\/blas resources found ---------------------------------------- Failed building wheel for scipy Failed to build scipy Installing collected packages: scipy Running setup.py install for scipy Complete output from command g:\\myve\\scripts\\python.exe -c \"import setuptool s, tokenize;__file__='e:\\\\temp_n~1\\\\pip-build-1xigxu\\\\scipy\\\\setup.py';exec(comp ile(getattr(tokenize, 'open', open)(__file__).read().replace('\\r\\n', '\\n'), __fi le__, 'exec'))\" install --record e:\\temp_n~1\\pip-3hncqr-record\\install-record.tx t --single-version-externally-managed --compile --install-headers g:\\myve\\includ e\\site\\python2.7\\scipy: lapack_opt_info: openblas_lapack_info: libraries openblas not found in ['g:\\\\myve\\\\lib', 'C:\\\\'] NOT AVAILABLE lapack_mkl_info: mkl_info: libraries mkl,vml,guide not found in ['g:\\\\myve\\\\lib', 'C:\\\\'] NOT AVAILABLE NOT AVAILABLE atlas_3_10_threads_info: Setting PTATLAS=ATLAS libraries tatlas,tatlas not found in g:\\myve\\lib libraries lapack_atlas not found in g:\\myve\\lib libraries tatlas,tatlas not found in C:\\ libraries lapack_atlas not found in C:\\ NOT AVAILABLE atlas_3_10_info: libraries satlas,satlas not found in g:\\myve\\lib libraries lapack_atlas not found in g:\\myve\\lib libraries satlas,satlas not found in C:\\ libraries lapack_atlas not found in C:\\ NOT AVAILABLE atlas_threads_info: Setting PTATLAS=ATLAS libraries ptf77blas,ptcblas,atlas not found in g:\\myve\\lib libraries lapack_atlas not found in g:\\myve\\lib libraries 
ptf77blas,ptcblas,atlas not found in C:\\ libraries lapack_atlas not found in C:\\ NOT AVAILABLE atlas_info: libraries f77blas,cblas,atlas not found in g:\\myve\\lib libraries lapack_atlas not found in g:\\myve\\lib libraries f77blas,cblas,atlas not found in C:\\ libraries lapack_atlas not found in C:\\ NOT AVAILABLE lapack_info: libraries lapack not found in ['g:\\\\myve\\\\lib', 'C:\\\\'] NOT AVAILABLE lapack_src_info: NOT AVAILABLE NOT AVAILABLE g:\\myve\\lib\\site-packages\\numpy\\distutils\\system_info.py:1552: UserWarning: Atlas (http:\/\/math-atlas.sourceforge.net\/) libraries not found. Directories to search for the libraries can be specified in the numpy\/distutils\/site.cfg file (section [atlas]) or by setting the ATLAS environment variable. warnings.warn(AtlasNotFoundError.__doc__) g:\\myve\\lib\\site-packages\\numpy\\distutils\\system_info.py:1563: UserWarning: Lapack (http:\/\/www.netlib.org\/lapack\/) libraries not found. Directories to search for the libraries can be specified in the numpy\/distutils\/site.cfg file (section [lapack]) or by setting the LAPACK environment variable. warnings.warn(LapackNotFoundError.__doc__) g:\\myve\\lib\\site-packages\\numpy\\distutils\\system_info.py:1566: UserWarning: Lapack (http:\/\/www.netlib.org\/lapack\/) sources not found. Directories to search for the sources can be specified in the numpy\/distutils\/site.cfg file (section [lapack_src]) or by setting the LAPACK_SRC environment variable. warnings.warn(LapackSrcNotFoundError.__doc__) Running from scipy source directory. 
Traceback (most recent call last): File \"\", line 1, in File \"e:\\temp_n~1\\pip-build-1xigxu\\scipy\\setup.py\", line 253, in setup_package() File \"e:\\temp_n~1\\pip-build-1xigxu\\scipy\\setup.py\", line 250, in setup_pac kage setup(**metadata) File \"g:\\myve\\lib\\site-packages\\numpy\\distutils\\core.py\", line 135, in set up config = configuration() File \"e:\\temp_n~1\\pip-build-1xigxu\\scipy\\setup.py\", line 175, in configura tion config.add_subpackage('scipy') File \"g:\\myve\\lib\\site-packages\\numpy\\distutils\\misc_util.py\", line 1001, in add_subpackage caller_level = 2) File \"g:\\myve\\lib\\site-packages\\numpy\\distutils\\misc_util.py\", line 970, i n get_subpackage caller_level = caller_level + 1) File \"g:\\myve\\lib\\site-packages\\numpy\\distutils\\misc_util.py\", line 907, i n _get_configuration_from_setup_py config = setup_module.configuration(*args) File \"scipy\\setup.py\", line 15, in configuration config.add_subpackage('linalg') File \"g:\\myve\\lib\\site-packages\\numpy\\distutils\\misc_util.py\", line 1001, in add_subpackage caller_level = 2) File \"g:\\myve\\lib\\site-packages\\numpy\\distutils\\misc_util.py\", line 970, i n get_subpackage caller_level = caller_level + 1) File \"g:\\myve\\lib\\site-packages\\numpy\\distutils\\misc_util.py\", line 907, i n _get_configuration_from_setup_py config = setup_module.configuration(*args) File \"scipy\\linalg\\setup.py\", line 20, in configuration raise NotFoundError('no lapack\/blas resources found') numpy.distutils.system_info.NotFoundError: no lapack\/blas resources found ---------------------------------------- Command \"g:\\myve\\scripts\\python.exe -c \"import setuptools, tokenize;__file__='e: \\\\temp_n~1\\\\pip-build-1xigxu\\\\scipy\\\\setup.py';exec(compile(getattr(tokenize, 'o pen', open)(__file__).read().replace('\\r\\n', '\\n'), __file__, 'exec'))\" install --record e:\\temp_n~1\\pip-3hncqr-record\\install-record.txt --single-version-exter nally-managed --compile --install-headers 
g:\\myve\\include\\site\\python2.7\\scipy\" failed with error code 1 in e:\\temp_n~1\\pip-build-1xigxu\\scipy ``` I have also tried ``` pip install lapack ``` with this result ``` Collecting lapack Could not find a version that satisfies the requirement lapack (from versions ) No matching distribution found for lapack ``` I have also tried ``` pip install blas ``` with similar results ``` G:\\determinator_Oskar>pip install blas Collecting blas Could not find a version that satisfies the requirement blas (from versions: ) No matching distribution found for blas ``` Why does a scipy get so complicated ?", "response":"For Debian Jessie and Stretch installing the following packages resolves the issue: ``` sudo apt install libblas3 liblapack3 liblapack-dev libblas-dev ``` Your next issue is very likely going to be a missing Fortran compiler, resolve this by installing it like this: ``` sudo apt install gfortran ``` If you want an optimized scipy, you can also install the optional libatlas-base-dev package: ``` sudo apt install libatlas-base-dev ``` Source If you have any issue with a missing Python.h file like this: Python.h: No such file or directory Then have a look at this post: https:\/\/stackoverflow.com\/a\/21530768\/209532", "best_answers_score":0.8, "library_name":"numpy", "question_url":"https:\/\/stackoverflow.com\/questions\/33368261\/what-is-the-easiest-way-to-install-blas-and-lapack-for-scipy", "best_answers_votes":61, "question_length":10071, "response_length":616 }, { "question":"Convert np.array of type float64 to type uint8 scaling values I have a particular np.array data which represents a particular grayscale image. I need to use SimpleBlobDetector() that unfortunately only accepts 8bit images, so I need to convert this image, obviously having a quality-loss. I've already tried: ``` import numpy as np import cv2 [...] 
data = data \/ data.max() # normalize data to 0 - 1; the next line scales it to 0 - 255 data = 255 * data img = data.astype(np.uint8) cv2.imshow(\"Window\", img) ``` But cv2.imshow is not giving the image as expected, but with strange distortion... In the end, I only need to convert a np.float64 to np.uint8 scaling all the values and truncating the rest, e.g. 65535 becomes 255, 65534 becomes 254 and so on.... Any help? Thanks.", "response":"A better way to normalize your image is to take each value and divide by the largest value experienced by the data type. This ensures that images that have a small dynamic range in your image remain small and they're not inadvertently normalized so that they become gray. For example, if your image had a dynamic range of [0-2], the code right now would scale that to have intensities of [0, 128, 255]. You want these to remain small after converting to np.uint8. Therefore, divide every value by the largest value possible for the image type, not the actual image itself. You would then scale this by 255 to produce the normalized result. Use numpy.iinfo and provide it the type (dtype) of the image and you will obtain a structure of information for that type. You would then access the max field from this structure to determine the maximum value. So with the above, do the following modifications to your code: ``` import numpy as np import cv2 [...]
info = np.iinfo(data.dtype) # Get the information of the incoming image type data = data.astype(np.float64) \/ info.max # normalize the data to 0 - 1 data = 255 * data # Now scale by 255 img = data.astype(np.uint8) cv2.imshow(\"Window\", img) ``` Note that I've additionally converted the image into np.float64 in case the incoming data type is not so and to maintain floating-point precision when doing the division.", "best_answers_score":0.8, "library_name":"numpy", "question_url":"https:\/\/stackoverflow.com\/questions\/46689428\/convert-np-array-of-type-float64-to-type-uint8-scaling-values", "best_answers_votes":85, "question_length":747, "response_length":1369 }, { "question":"Finding count of distinct elements in DataFrame in each column I am trying to find the count of distinct values in each column using Pandas. This is what I did. ``` import pandas as pd import numpy as np # Generate data. NROW = 10000 NCOL = 100 df = pd.DataFrame(np.random.randint(1, 100000, (NROW, NCOL)), columns=['col' + x for x in np.arange(NCOL).astype(str)]) ``` I need to count the number of distinct elements for each column, like this: ``` col0 9538 col1 9505 col2 9524 ``` What would be the most efficient way to do this, as this method will be applied to files which have size greater than 1.5GB? Based upon the answers, df.apply(lambda x: len(x.unique())) is the fastest (notebook). 
%timeit df.apply(lambda x: len(x.unique())) 10 loops, best of 3: 49.5 ms per loop %timeit df.nunique() 10 loops, best of 3: 59.7 ms per loop %timeit df.apply(pd.Series.nunique) 10 loops, best of 3: 60.3 ms per loop %timeit df.T.apply(lambda x: x.nunique(), axis=1) 10 loops, best of 3: 60.5 ms per loop", "response":"As of pandas 0.20 we can use nunique directly on DataFrames, i.e.: ``` df.nunique() a 4 b 5 c 1 dtype: int64 ``` Other legacy options: You could do a transpose of the df and then using apply call nunique row-wise: ``` In [205]: df = pd.DataFrame({'a':[0,1,1,2,3],'b':[1,2,3,4,5],'c':[1,1,1,1,1]}) df Out[205]: a b c 0 0 1 1 1 1 2 1 2 1 3 1 3 2 4 1 4 3 5 1 In [206]: df.T.apply(lambda x: x.nunique(), axis=1) Out[206]: a 4 b 5 c 1 dtype: int64 ``` EDIT As pointed out by @ajcr the transpose is unnecessary: ``` In [208]: df.apply(pd.Series.nunique) Out[208]: a 4 b 5 c 1 dtype: int64 ```", "best_answers_score":0.8, "library_name":"numpy", "question_url":"https:\/\/stackoverflow.com\/questions\/30503321\/finding-count-of-distinct-elements-in-dataframe-in-each-column", "best_answers_votes":98, "question_length":997, "response_length":586 }, { "question":"In numpy.sum() there is parameter called \"keepdims\". What does it do? In numpy.sum() there is parameter called keepdims. What does it do? As you can see here in the documentation: http:\/\/docs.scipy.org\/doc\/numpy\/reference\/generated\/numpy.sum.html ``` numpy.sum(a, axis=None, dtype=None, out=None, keepdims=False)[source] Sum of array elements over a given axis. Parameters: ... keepdims : bool, optional If this is set to True, the axes which are reduced are left in the result as dimensions with size one. With this option, the result will broadcast correctly against the input array. ... ```", "response":"@Ney @hpaulj is correct, you need to experiment, but I suspect you don't realize that summation for some arrays can occur along axes. 
Observe the following, which illustrates what the documentation describes: ``` >>> a array([[0, 0, 0], [0, 1, 0], [0, 2, 0], [1, 0, 0], [1, 1, 0]]) >>> np.sum(a, keepdims=True) array([[6]]) >>> np.sum(a, keepdims=False) 6 >>> np.sum(a, axis=1, keepdims=True) array([[0], [1], [2], [1], [2]]) >>> np.sum(a, axis=1, keepdims=False) array([0, 1, 2, 1, 2]) >>> np.sum(a, axis=0, keepdims=True) array([[2, 4, 0]]) >>> np.sum(a, axis=0, keepdims=False) array([2, 4, 0]) ``` You will notice that if you don't specify an axis (1st two examples), the numerical result is the same, but the keepdims = True call returned a 2D array with the number 6, whereas the second incarnation returned a scalar. Similarly, when summing along axis 1 (across rows), a 2D array is returned again when keepdims = True. The last example, along axis 0 (down columns), shows a similar characteristic... dimensions are kept when keepdims = True. Studying axes and their properties is critical to a full understanding of the power of NumPy when dealing with multidimensional data.", "best_answers_score":0.8, "library_name":"numpy", "question_url":"https:\/\/stackoverflow.com\/questions\/39441517\/in-numpy-sum-there-is-parameter-called-keepdims-what-does-it-do", "best_answers_votes":101, "question_length":593, "response_length":1158 }, { "question":"Understanding tensordot After I learned how to use einsum, I am now trying to understand how np.tensordot works. However, I am a little bit lost especially regarding the various possibilities for the parameter axes. To understand it, as I have never practiced tensor calculus, I use the following example: ``` A = np.random.randint(2, size=(2, 3, 5)) B = np.random.randint(2, size=(3, 2, 4)) ``` In this case, what are the different possible np.tensordot and how would you compute it manually?", "response":"The idea with tensordot is pretty simple - We input the arrays and the respective axes along which the sum-reductions are intended.
The axes that take part in sum-reduction are removed in the output and all of the remaining axes from the input arrays are spread-out as different axes in the output keeping the order in which the input arrays are fed. Let's look at few sample cases with one and two axes of sum-reductions and also swap the input places and see how the order is kept in the output. I. One axis of sum-reduction Inputs : ``` In [7]: A = np.random.randint(2, size=(2, 6, 5)) ...: B = np.random.randint(2, size=(3, 2, 4)) ...: ``` Case #1: ``` In [9]: np.tensordot(A, B, axes=((0),(1))).shape Out[9]: (6, 5, 3, 4) A : (2, 6, 5) -> reduction of axis=0 B : (3, 2, 4) -> reduction of axis=1 Output : `(2, 6, 5)`, `(3, 2, 4)` ===(2 gone)==> `(6,5)` + `(3,4)` => `(6,5,3,4)` ``` Case #2 (same as case #1 but the inputs are fed swapped): ``` In [8]: np.tensordot(B, A, axes=((1),(0))).shape Out[8]: (3, 4, 6, 5) B : (3, 2, 4) -> reduction of axis=1 A : (2, 6, 5) -> reduction of axis=0 Output : `(3, 2, 4)`, `(2, 6, 5)` ===(2 gone)==> `(3,4)` + `(6,5)` => `(3,4,6,5)`. ``` II. 
Two axes of sum-reduction Inputs : ``` In [11]: A = np.random.randint(2, size=(2, 3, 5)) ...: B = np.random.randint(2, size=(3, 2, 4)) ...: ``` Case #1: ``` In [12]: np.tensordot(A, B, axes=((0,1),(1,0))).shape Out[12]: (5, 4) A : (2, 3, 5) -> reduction of axis=(0,1) B : (3, 2, 4) -> reduction of axis=(1,0) Output : `(2, 3, 5)`, `(3, 2, 4)` ===(2,3 gone)==> `(5)` + `(4)` => `(5,4)` ``` Case #2: ``` In [14]: np.tensordot(B, A, axes=((1,0),(0,1))).shape Out[14]: (4, 5) B : (3, 2, 4) -> reduction of axis=(1,0) A : (2, 3, 5) -> reduction of axis=(0,1) Output : `(3, 2, 4)`, `(2, 3, 5)` ===(2,3 gone)==> `(4)` + `(5)` => `(4,5)` ``` We can extend this to as many axes as possible.", "best_answers_score":0.8, "library_name":"numpy", "question_url":"https:\/\/stackoverflow.com\/questions\/41870228\/understanding-tensordot", "best_answers_votes":76, "question_length":493, "response_length":1865 }, { "question":"Difference between np.int, np.int_, int, and np.int_t in cython? I am a bit struggled with so many int data types in cython. np.int, np.int_, np.int_t, int I guess int in pure python is equivalent to np.int_, then where does np.int come from? I cannot find the document from numpy? Also, why does np.int_ exist given we do already have int? In cython, I guess int becomes a C type when used as cdef int or ndarray[int], and when used as int() it stays as the python caster? Is np.int_ equivalent to long in C? so cdef long is the identical to cdef np.int_? Under what circumstances should I use np.int_t instead of np.int? e.g. cdef np.int_t, ndarray[np.int_t] ... Can someone briefly explain how the wrong use of those types would affect the performance of compiled cython code?", "response":"It's a bit complicated because the names have different meanings depending on the context. int In Python The int is normally just a Python type, it's of arbitrary precision, meaning that you can store any conceivable integer inside it (as long as you have enough memory). 
``` >>> int(10**50) 100000000000000000000000000000000000000000000000000 ``` However, when you use it as dtype for a NumPy array it will be interpreted as np.int_ 1. Which is not of arbitrary precision, it will have the same size as C's long: ``` >>> np.array(10**50, dtype=int) OverflowError: Python int too large to convert to C long ``` That also means the following two are equivalent: ``` np.array([1,2,3], dtype=int) np.array([1,2,3], dtype=np.int_) ``` As Cython type identifier it has another meaning, here it stands for the C type int. It's of limited precision (typically 32 bits). You can use it as Cython type, for example when defining variables with cdef: ``` cdef int value = 100 # variable cdef int[:] arr = ... # memoryview ``` As return value or argument value for cdef or cpdef functions: ``` cdef int my_function(int argument1, int argument2): # ... ``` As \"generic\" for ndarray: ``` cimport numpy as cnp cdef cnp.ndarray[int, ndim=1] val = ... ``` For type casting: ``` avalue = <int>(another_value) ``` And probably many more. In Cython but as Python type. You can still call int and you'll get a \"Python int\" (of arbitrary precision), or use it for isinstance or as dtype argument for np.array. Here the context is important, so converting to a Python int is different from converting to a C int: ``` cdef object val = int(10) # Python int cdef int val = <int>(10) # C int ``` np.int Actually this is very easy. It's just an alias for int: ``` >>> int is np.int True ``` So everything from above applies to np.int as well. However you can't use it as a type-identifier except when you use it on the cimported package. In that case it represents the Python integer type. ``` cimport numpy as cnp cpdef func(cnp.int obj): return obj ``` This will expect obj to be a Python integer not a NumPy type: ``` >>> func(np.int_(10)) TypeError: Argument 'obj' has incorrect type (expected int, got numpy.int32) >>> func(10) 10 ``` My advice regarding np.int: Avoid it whenever possible.
In Python code it's equivalent to int and in Cython code it's also equivalent to Python's int but if used as type-identifier it will probably confuse you and everyone who reads the code! It certainly confused me... np.int_ Actually it only has one meaning: It's a Python type that represents a scalar NumPy type. You use it like Python's int: ``` >>> np.int_(10) # looks like a normal Python integer 10 >>> type(np.int_(10)) # but isn't (output may vary depending on your system!) numpy.int32 ``` Or you use it to specify the dtype, for example with np.array: ``` >>> np.array([1,2,3], dtype=np.int_) array([1, 2, 3]) ``` But you cannot use it as type-identifier in Cython. cnp.int_t It's the type-identifier version for np.int_. That means you can't use it as dtype argument. But you can use it as type for cdef declarations: ``` cimport numpy as cnp import numpy as np cdef cnp.int_t[:] arr = np.array([1,2,3], dtype=np.int_) |---TYPE---| |---DTYPE---| ``` This example (hopefully) shows that the type-identifier with the trailing _t actually represents the type of an array using the dtype without the trailing t. You can't interchange them in Cython code! Notes There are several more numeric types in NumPy I'll include a list containing the NumPy dtype and Cython type-identifier and the C type identifier that could also be used in Cython here.
But it's basically taken from the NumPy documentation and the Cython NumPy pxd file: ``` NumPy dtype Numpy Cython type C Cython type identifier np.bool_ None None np.int_ cnp.int_t long np.intc None int np.intp cnp.intp_t ssize_t np.int8 cnp.int8_t signed char np.int16 cnp.int16_t signed short np.int32 cnp.int32_t signed int np.int64 cnp.int64_t signed long long np.uint8 cnp.uint8_t unsigned char np.uint16 cnp.uint16_t unsigned short np.uint32 cnp.uint32_t unsigned int np.uint64 cnp.uint64_t unsigned long np.float_ cnp.float64_t double np.float32 cnp.float32_t float np.float64 cnp.float64_t double np.complex_ cnp.complex128_t double complex np.complex64 cnp.complex64_t float complex np.complex128 cnp.complex128_t double complex ``` Actually there are Cython types for np.bool_: cnp.npy_bool and bint but both they can't be used for NumPy arrays currently. For scalars cnp.npy_bool will just be an unsigned integer while bint will be a boolean. Not sure what's going on there... 1 Taken From the NumPy documentation \"Data type objects\" Built-in Python types Several python types are equivalent to a corresponding array scalar when used to generate a dtype object: ``` int np.int_ bool np.bool_ float np.float_ complex np.cfloat bytes np.bytes_ str np.bytes_ (Python2) or np.unicode_ (Python3) unicode np.unicode_ buffer np.void (all others) np.object_ ```", "best_answers_score":0.8, "library_name":"numpy", "question_url":"https:\/\/stackoverflow.com\/questions\/21851985\/difference-between-np-int-np-int-int-and-np-int-t-in-cython", "best_answers_votes":77, "question_length":779, "response_length":4973 }, { "question":"Convert Pandas dataframe to Sparse Numpy Matrix directly I am creating a matrix from a Pandas dataframe as follows: ``` dense_matrix = np.array(df.as_matrix(columns = None), dtype=bool).astype(np.int) ``` And then into a sparse matrix with: ``` sparse_matrix = scipy.sparse.csr_matrix(dense_matrix) ``` Is there any way to go from a df straight to a sparse matrix? 
Thanks in advance.", "response":"df.values is a numpy array, and accessing values that way is always faster than np.array. ``` scipy.sparse.csr_matrix(df.values) ``` You might need to take the transpose first, like df.values.T. In DataFrames, the columns are axis 0.", "best_answers_score":0.8, "library_name":"numpy", "question_url":"https:\/\/stackoverflow.com\/questions\/20459536\/convert-pandas-dataframe-to-sparse-numpy-matrix-directly", "best_answers_votes":75, "question_length":383, "response_length":233 }, { "question":"python tilde unary operator as negation numpy bool array Should be a simple question, but I'm unable to find an answer anywhere. The ~ operator in python is documented as a bitwise inversion operator. Fine. I have noticed seemingly schizophrenic behavior though, to wit: ``` ~True -> -2 ~1 -> -2 ~False -> -1 ~0 -> -1 ~numpy.array([True,False],dtype=int) -> array([-2,-1]) ~numpy.array([True,False],dtype=bool) -> array([False,True]) ``` In the first 4 examples, I can see that python is implementing (as documented) ~x = -(x+1), with the input treated as an int even if it's boolean. Hence, for a scalar boolean, ~ is not treated as a logical negation. Note that the behavior is identical on a numpy array defined with boolean values but with an int type. Why does ~ then work as a logical negation operator on a boolean array (Also notice: ~numpy.isfinite(numpy.inf) -> True?)? It is extremely annoying that I must use not() on a scalar, but not() won't work to negate an array. Then for an array, I must use ~, but ~ won't work to negate a scalar...", "response":"not is implemented through the __nonzero__ special method, which is required to return either True or False, so it can't give the required result. Instead the ~ operator is used, which is implemented through the __invert__ special method. For the same reason, & and | are used in place of and and or.
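For instance, here is a small illustrative sketch (an editorial aside, not part of the original answer) showing ~, & and | acting elementwise on boolean arrays:

```python
import numpy as np

a = np.array([True, False, True])
b = np.array([True, True, False])

# Elementwise boolean operators on arrays: ~, & and | stand in
# for not, and, or (which cannot be overloaded for arrays).
assert (~a).tolist() == [False, True, False]
assert (a & b).tolist() == [True, False, False]
assert (a | b).tolist() == [True, True, True]
```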
PEP 335 aimed to allow overloading of boolean operators but was rejected because of excessive overhead (it would e.g. complicate if statements). PEP 225 suggests a general syntax for \"elementwise\" operators, which would provide a more general solution, but has been deferred. It appears that the current situation, while awkward, is not painful enough to force change. np.isfinite when called on a scalar returns a value of type np.bool_, not bool. np.bool_ is also the type you get when extracting a scalar value from an array of bool dtype. If you use np.True_ and np.False_ in place of True and False you will get consistent behaviour under ~.", "best_answers_score":0.8, "library_name":"numpy", "question_url":"https:\/\/stackoverflow.com\/questions\/13600988\/python-tilde-unary-operator-as-negation-numpy-bool-array", "best_answers_votes":51, "question_length":1052, "response_length":944 }, { "question":"How is numpy so fast? I'm trying to understand how numpy can be so fast, based on my shocking comparison with optimized C\/C++ code which is still far from reproducing numpy's speed. Consider the following example: Given a 2D array with shape=(N, N) and dtype=float32, which represents a list of N vectors of N dimensions, I am computing the pairwise differences between every pair of vectors. Using numpy broadcasting, this simply writes as: ```py def pairwise_sub_numpy( X ): return X - X[:, None, :] ``` Using timeit I can measure the performance for N=512: it takes 88 ms per call on my laptop. 
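(Editorial aside, not part of the original question: a tiny sketch of what this broadcast computes - entry [i, j] of the result ends up holding X[j] - X[i], i.e. every pairwise difference of row vectors:)

```python
import numpy as np

# Tiny stand-in for the (N, N) float32 array in the question.
X = np.arange(6, dtype=np.float32).reshape(3, 2)

res = X - X[:, None, :]   # broadcasts (1, 3, 2) - (3, 1, 2) -> (3, 3, 2)

assert res.shape == (3, 3, 2)
for i in range(3):
    for j in range(3):
        # each entry is a pairwise difference of row vectors
        assert np.array_equal(res[i, j], X[j] - X[i])
```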
Now, in C\/C++ a naive implementation writes as: ```cpp #define X(i, j) _X[(i)*N + (j)] #define res(i, j, k) _res[((i)*N + (j))*N + (k)] float* pairwise_sub_naive( const float* _X, int N ) { float* _res = (float*) aligned_alloc( 32, N*N*N*sizeof(float)); for (int i = 0; i < N; i++) { for (int j = 0; j < N; j++) { for (int k = 0; k < N; k++) res(i,j,k) = X(i,k) - X(j,k); } } return _res; } [...] best of 5 = {1000*min(times):.3f} ms\") ``` Full benchmark for C code: ```c #include #include #include \/\/ compile with -mavx -msse4.1 #include #include #include #define X(i, j) _x[(i)*N + (j)] #define res(i, j, k) _res[((i)*N + (j))*N + (k)] float* pairwise_sub_naive( const float* _x, int N ) { float* _res = (float*) aligned_alloc( 32, N*N*N*sizeof(float)); for (int i = 0; i < N; i++) { for (int j = 0; j < N; j++) { for (int k = 0; k < N; k++) res(i,j,k) = X(i,k) - X(j,k); } } return _res; } float* pairwise_sub_better( const float* _x, int N ) { float* _res = (float*) aligned_alloc( 32, N*N*N*sizeof(float)); for (int i = 0; i < N; i++) { const float* xi = & X(i,0); for (int j = 0; j < N; j++) { const float* xj = & X(j,0); float* r = &res(i,j,0); for (int k = 0; k < N; k++) r[k] = xi[k] - xj[k]; } } return _res; } float* pairwise_sub_simd( const float* _x, int N ) { float* _res = (float*) aligned_alloc( 32, N*N*N*sizeof(float)); \/\/ create caches for row vectors which are memory-aligned float* xi = (float*)aligned_alloc(32, N * sizeof(float)); float* xj = (float*)aligned_alloc(32, N * sizeof(float)); for (int i = 0; i < N; i++) { memcpy(xi, & X(i,0), N*sizeof(float)); for (int j = 0; j < N; j++) { memcpy(xj, & X(j,0), N*sizeof(float)); float* r = &res(i,j,0); for (int k = 0; k < N; k += 256\/sizeof(float)) { const __m256 A = _mm256_load_ps(xi+k); const __m256 B = _mm256_load_ps(xj+k); _mm256_store_ps(r+k, _mm256_sub_ps( A, B )); } } } free(xi); free(xj); return _res; } float* pairwise_sub_blocks( const float* _x, int N ) { float* _res = (float*) aligned_alloc( 32, N*N*N*sizeof(float)); #define B 8 float cache1[B*B], cache2[B*B]; for (int bi = 0; bi < N; bi+=B) for (int bj = 0; bj < N; bj+=B)
for (int bk = 0; bk < N; bk+=B) { \/\/ load first 8x8 block in the cache for (int i = 0; i < B; i++) for (int k = 0; k < B; k++) cache1[B*i + k] = X(bi+i, bk+k); \/\/ load second 8x8 block in the cache for (int j = 0; j < B; j++) for (int k = 0; k < B; k++) cache2[B*j + k] = X(bj+j, bk+k); \/\/ compute local operations on the caches for (int i = 0; i < B; i++) for (int j = 0; j < B; j++) for (int k = 0; k < B; k++) res(bi+i,bj+j,bk+k) = cache1[B*i + k] - cache2[B*j + k]; } return _res; } int main() { const int N = 512; float* _x = (float*) malloc( N * N * sizeof(float) ); for( int i = 0; i < N; i++) for( int j = 0; j < N; j++) X(i,j) = ((i+j*j+17*i+101) % N) \/ float(N); double best = 9e9; for( int i = 0; i < 5; i++) { struct timespec start, stop; clock_gettime(CLOCK_THREAD_CPUTIME_ID, &start); \/\/float* res = pairwise_sub_naive( _x, N ); \/\/float* res = pairwise_sub_better( _x, N ); \/\/float* res = pairwise_sub_simd( _x, N ); float* res = pairwise_sub_blocks( _x, N ); clock_gettime(CLOCK_THREAD_CPUTIME_ID, &stop); double t = (stop.tv_sec - start.tv_sec) * 1e6 + (stop.tv_nsec - start.tv_nsec) \/ 1e3; \/\/ in microseconds if (t < best) best = t; free( res ); } printf(\"Best of 5 = %f ms\\n\", best \/ 1000); free( _x ); return 0; } ``` Compiled using gcc 7.3.0 gcc -Wall -O3 -mavx -msse4.1 -o test_simd test_simd.c Summary of timings on my machine: Implementation Time numpy 88 ms C++ naive 194 ms C++ better 195 ms C++ SIMD 178 ms C++ blocked 258 ms C++ blocked (gcc 8.3.1) 217 ms", "response":"As pointed out by some of the comments numpy uses SIMD in its implementation and it does not allocate memory at the point of computation. If I eliminate the memory allocation from your implementation, pre-allocating all the buffers ahead of the computation then I get a better time compared to numpy even with the scaler version(that is the one without any optimizations). 
Also, regarding SIMD and why your implementation does not perform much better than the scalar version: your memory access patterns are not ideal for SIMD usage - you do memcpy and you load into SIMD registers from locations that are far apart from each other - e.g. you fill vectors from line 0 and line 511, which might not play well with the cache or with the SIMD prefetcher. There is also a mistake in how you load the SIMD registers (if I understood correctly what you're trying to compute): a 256-bit SIMD register can load 8 single-precision floating-point numbers (8 * 32 = 256), but in your loop you jump k by \"256\/sizeof(float)\", which is 256\/4 = 64; _x and _res are float pointers and the SIMD intrinsics also expect float pointers as arguments, so instead of reading all elements from those lines every 8 floats you read them every 64 floats. The computation can be optimized further by changing the access patterns, but also by observing that you repeat some computations: e.g. when iterating with line0 as a base you compute line0 - line1, but at some future time, when iterating with line1 as a base, you need to compute line1 - line0, which is basically -(line0 - line1); that is, for each line after line0 a lot of results could be reused from previous computations. A lot of the time, SIMD usage or parallelization requires one to change how data is accessed or reasoned about in order to provide meaningful improvements. Here is what I have done as a first step based on your initial implementation, and it is faster than numpy (don't mind the OpenMP stuff, as it's not how it's supposed to be done; I just wanted to see how it behaves trying the naive way). 
``` C++ Time scaler version: 55 ms Time SIMD version: 53 ms **Time SIMD 2 version: 33 ms** Time SIMD 3 version: 168 ms Time OpenMP version: 59 ms Python numpy >> best of 5 = 88.794 ms #include #include \/\/ compile with -mavx -msse4.1 #include #include #include #include #include #include #include using namespace std; float* pairwise_sub_naive (const float* input, float* output, int n) { for (int i = 0; i (stop - start); if (duration (stop - start); if (duration (stop - start); if (duration (stop - start); if (duration (stop - start); if (duration < best_par) { best_par = duration; } } cout << \"Time OpenMP version: \" << best_par.count() << \" ms\\n\"; cout << \"Verification\\n\"; if (equal(output, output + output_size, output_simd)) { cout << \"PASSED\\n\"; } else { cout << \"FAILED\\n\"; } return 0; } ``` Edit: Small correction as there was a wrong call related to the second version of SIMD implementation. As you can see now, the second implementation is the fastest as it behaves the best from the point of view of the locality of reference of the cache. Examples 2 and 3 of SIMD implementations are there to illustrate for you how changing memory access patterns to influence the performance of your SIMD optimizations. To summarize(knowing that I'm far from being complete in my advice) be mindful of your memory access patterns and of the loads and stores to\\from the SIMD unit; the SIMD is a different hardware unit inside the processor's core so there is a penalty in shuffling data back and forth, hence when you load a register from memory try to do as many operations as possible with that data and do not be too eager to store it back(of course, in your example that might be all you need to do with the data). Be mindful also that there is a limited number of SIMD registers available and if you load too many then they will \"spill\", that is they will be stored back to temporary locations in main memory behind the scenes killing all your gains. 
SIMD optimization is a true balancing act! There is some effort to put a cross-platform intrinsics wrapper into the standard (I developed a closed-source one myself in my glorious past), and even though it's far from being complete, it's worth taking a look at (read the accompanying papers if you're truly interested in learning how SIMD works). https:\/\/github.com\/VcDevel\/std-simd", "best_answers_score":0.8, "library_name":"numpy", "question_url":"https:\/\/stackoverflow.com\/questions\/65888569\/how-is-numpy-so-fast", "best_answers_votes":33, "question_length":4077, "response_length":4379 }, { "question":"A fast way to find the largest N elements in a numpy array I know I can do it like the following: ``` import numpy as np N=10 a=np.arange(1,100,1) np.argsort()[-N:] ``` However, it is very slow since it did a full sort. I wonder whether numpy provides some method to do it fast.", "response":"numpy 1.8 implements partition and argpartition that perform a partial sort (in O(n) time, as opposed to a full sort, which is O(n log n)). ``` import numpy as np test = np.array([9,1,3,4,8,7,2,5,6,0]) temp = np.argpartition(-test, 4) result_args = temp[:4] temp = np.partition(-test, 4) result = -temp[:4] ``` Result: ``` >>> result_args array([0, 4, 8, 5]) # indices of highest vals >>> result array([9, 8, 6, 7]) # highest vals ``` Timing: ``` In [16]: a = np.arange(10000) In [17]: np.random.shuffle(a) In [18]: %timeit np.argsort(a) 1000 loops, best of 3: 1.02 ms per loop In [19]: %timeit np.argpartition(a, 100) 10000 loops, best of 3: 139 us per loop In [20]: %timeit np.argpartition(a, 1000) 10000 loops, best of 3: 141 us per loop ```", "best_answers_score":0.8, "library_name":"numpy", "question_url":"https:\/\/stackoverflow.com\/questions\/10337533\/a-fast-way-to-find-the-largest-n-elements-in-an-numpy-array", "best_answers_votes":90, "question_length":280, "response_length":741 }, { "question":"How to print numpy array with 3 decimal places? 
[duplicate] This question already has answers here: Pretty-print a NumPy array without scientific notation and with given precision (14 answers) Closed 5 years ago. How can I print numpy array with 3 decimal places? I tried array.round(3) but it keeps printing like this 6.000e-01. Is there an option to make it print like this: 6.000? I got one solution as print (\"%0.3f\" % arr), but I want a global solution i.e. not doing that every time I want to check the array contents.", "response":"``` np.set_printoptions(formatter={'float': lambda x: \"{0:0.3f}\".format(x)}) ``` This will set numpy to use this lambda function for formatting every float it prints out. other types you can define formatting for (from the docstring of the function) ``` - 'bool' - 'int' - 'timedelta' : a `numpy.timedelta64` - 'datetime' : a `numpy.datetime64` - 'float' - 'longfloat' : 128-bit floats - 'complexfloat' - 'longcomplexfloat' : composed of two 128-bit floats - 'numpy_str' : types `numpy.string_` and `numpy.unicode_` - 'str' : all other strings Other keys that can be used to set a group of types at once are:: - 'all' : sets all types - 'int_kind' : sets 'int' - 'float_kind' : sets 'float' and 'longfloat' - 'complex_kind' : sets 'complexfloat' and 'longcomplexfloat' - 'str_kind' : sets 'str' and 'numpystr' ```", "best_answers_score":0.8, "library_name":"numpy", "question_url":"https:\/\/stackoverflow.com\/questions\/22222818\/how-to-print-numpy-array-with-3-decimal-places", "best_answers_votes":81, "question_length":524, "response_length":813 }, { "question":"One Hot Encoding using numpy [duplicate] This question already has answers here: Convert array of indices to one-hot encoded array in NumPy (24 answers) Closed 6 years ago. If the input is zero I want to make an array which looks like this: ``` [1,0,0,0,0,0,0,0,0,0] ``` and if the input is 5: ``` [0,0,0,0,0,1,0,0,0,0] ``` For the above I wrote: ``` np.put(np.zeros(10),5,1) ``` but it did not work. 
Is there any way in which this can be implemented in one line?", "response":"Usually, when you want to get a one-hot encoding for classification in machine learning, you have an array of indices. ``` import numpy as np nb_classes = 6 targets = np.array([[2, 3, 4, 0]]).reshape(-1) one_hot_targets = np.eye(nb_classes)[targets] ``` The one_hot_targets is now ``` array([[[ 0., 0., 1., 0., 0., 0.], [ 0., 0., 0., 1., 0., 0.], [ 0., 0., 0., 0., 1., 0.], [ 1., 0., 0., 0., 0., 0.]]]) ``` The .reshape(-1) is there to make sure you have the right labels format (you might also have [[2], [3], [4], [0]]). The -1 is a special value which means \"put all remaining stuff in this dimension\". As there is only one, it flattens the array. Copy-Paste solution ``` def get_one_hot(targets, nb_classes): res = np.eye(nb_classes)[np.array(targets).reshape(-1)] return res.reshape(list(targets.shape)+[nb_classes]) ``` Package You can use mpu.ml.indices2one_hot. It's tested and simple to use: ``` import mpu.ml one_hot = mpu.ml.indices2one_hot([1, 3, 0], nb_classes=5) ```", "best_answers_score":0.8, "library_name":"numpy", "question_url":"https:\/\/stackoverflow.com\/questions\/38592324\/one-hot-encoding-using-numpy", "best_answers_votes":125, "question_length":464, "response_length":980 }, { "question":"Figure to image as a numpy array I'm trying to get a numpy array image from a Matplotlib figure and I'm currently doing it by saving to a file, then reading the file back in, but I feel like there has to be a better way. 
Here's what I'm doing now: ``` from matplotlib.backends.backend_agg import FigureCanvasAgg as FigureCanvas from matplotlib.figure import Figure fig = Figure() canvas = FigureCanvas(fig) ax = fig.gca() ax.text(0.0,0.0,\"Test\", fontsize=45) ax.axis('off') canvas.print_figure(\"output.png\") image = plt.imread(\"output.png\") ``` I tried this: ``` image = np.fromstring( canvas.tostring_rgb(), dtype='uint8' ) ``` from an example I found but it gives me an error saying that 'FigureCanvasAgg' object has no attribute 'renderer'.", "response":"In order to get the figure contents as RGB pixel values, the matplotlib.backend_bases.Renderer needs to first draw the contents of the canvas. You can do this by manually calling canvas.draw(): ``` from matplotlib.figure import Figure fig = Figure() canvas = fig.canvas ax = fig.gca() ax.text(0.0,0.0,\"Test\", fontsize=45) ax.axis('off') canvas.draw() # Draw the canvas, cache the renderer image_flat = np.frombuffer(canvas.tostring_rgb(), dtype='uint8') # (H * W * 3,) # NOTE: reversed converts (W, H) from get_width_height to (H, W) image = image_flat.reshape(*reversed(canvas.get_width_height()), 3) # (H, W, 3) ``` See here for more info on the Matplotlib API.", "best_answers_score":0.8, "library_name":"numpy", "question_url":"https:\/\/stackoverflow.com\/questions\/35355930\/figure-to-image-as-a-numpy-array", "best_answers_votes":65, "question_length":743, "response_length":663 }, { "question":"How to get the values from a NumPy array using multiple indices I have a NumPy array that looks like this: ``` arr = np.array([100.10, 200.42, 4.14, 89.00, 34.55, 1.12]) ``` How can I get multiple values from this array by index? For example, how can I get the values at the index positions 1, 4, and 5? 
I was trying something like this, which is incorrect: ``` arr[1, 4, 5] ```", "response":"Try like this: ``` >>> arr = np.array([100.10, 200.42, 4.14, 89.00, 34.55, 1.12]) >>> arr[[1,4,5]] array([ 200.42, 34.55, 1.12]) ``` And for multidimensional arrays: ``` >>> arr = np.arange(9).reshape(3,3) >>> arr array([[0, 1, 2], [3, 4, 5], [6, 7, 8]]) >>> arr[[0, 1, 1], [1, 0, 2]] array([1, 3, 5]) ```", "best_answers_score":0.8, "library_name":"numpy", "question_url":"https:\/\/stackoverflow.com\/questions\/14162026\/how-to-get-the-values-from-a-numpy-array-using-multiple-indices", "best_answers_votes":101, "question_length":378, "response_length":305 }, { "question":"Filtering (reducing) a NumPy Array Suppose I have a NumPy array arr that I want to element-wise filter (reduce) depending on the truth value of a (broadcastable) function, e.g. I want to get only values below a certain threshold value k: ```py def cond(x): return x < k ``` There are a couple of methods, e.g.: Using a generator: np.fromiter((x for x in arr if cond(x)), dtype=arr.dtype) (which is a memory efficient version of using a list comprehension: np.array([x for x in arr if cond(x)]) because np.fromiter() will produce a NumPy array directly, without the need to allocate an intermediate Python list) Using boolean masking: arr[cond(arr)] Using integer indexing: arr[np.nonzero(cond(arr))] (or equivalently using np.where() as it defaults to np.nonzero() with only one condition) Using explicit looping with: single pass and final copying\/resizing two passes: one to determine the size of the result and one to actually perform the computation (The last two approaches to be accelerated with Cython or Numba) Which is the fastest? What about memory efficiency? 
(EDITED: To use directly np.nonzero() instead of np.where() as per @ShadowRanger comment)", "response":"Summary Using a loop-based approach with single pass and copying, accelerated with Numba, offers the best overall trade-off in terms of speed, memory efficiency and flexibility. If the execution of the condition function is sufficiently fast, two-passes (filter2_nb()) may be faster, while they are more memory efficient regardless. Also, for sufficiently large inputs, resizing instead of copying (filter_resize_xnb()) leads to faster execution. If the data type (and the condition function) is known ahead of time and can be compiled, the Cython acceleration seems to be faster. It is likely that a similar hard-coding of the condition would lead to comparable speed-up with Numba accerelation as well. When it comes to NumPy-only based approaches, boolean masking or integer indexing are of comparable speed, and which one comes out faster depends largely on the filtering factor, i.e. the portion of values that passes through the filtering condition. The np.fromiter() approach is much slower (it would be off-charts in the plot), but does not produce large temporary objects. Note that the following tests are meant to give some insights into the different approaches and should be taken with a grain of salt. The most relevant assumptions are that the condition is broadcastable and that it would eventually compute very fast. Definitions Using a generator: ```py def filter_fromiter(arr, cond): return np.fromiter((x for x in arr if cond(x)), dtype=arr.dtype) ``` Using boolean masking: ```py def filter_mask(arr, cond): return arr[cond(arr)] ``` Using integer indexing: ```py def filter_idx(arr, cond): return arr[np.nonzero(cond(arr))] ``` 4a. 
Using explicit looping, with single pass and final copying\/resizing Cython-accelerated with copying (pre-compiled condition) ``` %%cython -c-O3 -c-march=native -a #cython: language_level=3, boundscheck=False, wraparound=False, initializedcheck=False, cdivision=True, infer_types=True import numpy as np cdef long NUM = 1048576 cdef long MAX_VAL = 1048576 cdef long K = 1048576 \/\/ 2 cdef int cond_cy(long x, long k=K): return x k]` is a copy: ', arr[arr > k].base is None) # `arr[arr > k]` is a copy: True print('`arr[np.where(arr > k)]` is a copy: ', arr[np.where(arr > k)].base is None) # `arr[np.where(arr > k)]` is a copy: True print('`arr[:k]` is a copy: ', arr[:k].base is None) # `arr[:k]` is a copy: False ``` (EDITED: various improvements based on @ShadowRanger, @PaulPanzer, @max9111 and @DavidW comments.)", "best_answers_score":0.8, "library_name":"numpy", "question_url":"https:\/\/stackoverflow.com\/questions\/58422690\/filtering-reducing-a-numpy-array", "best_answers_votes":117, "question_length":1160, "response_length":2470 }, { "question":"Melt the Upper Triangular Matrix of a Pandas Dataframe Given a square pandas DataFrame of the following form: ``` a b c a 1 .5 .3 b .5 1 .4 c .3 .4 1 ``` How can the upper triangle be melted to get a matrix of the following form ``` Row Column Value a a 1 a b .5 a c .3 b b 1 b c .4 c c 1 #Note the combination a,b is only listed once. There is no b,a listing ``` I'm more interested in an idiomatic pandas solution, a custom indexer would be easy enough to write by hand... 
Thank you in advance for your consideration and response.", "response":"First I convert lower values of df to NaN by where and numpy.triu and then stack, reset_index and set column names: ``` import numpy as np print df a b c a 1.0 0.5 0.3 b 0.5 1.0 0.4 c 0.3 0.4 1.0 print np.triu(np.ones(df.shape)).astype(np.bool) [[ True True True] [False True True] [False False True]] df = df.where(np.triu(np.ones(df.shape)).astype(np.bool)) print df a b c a 1 0.5 0.3 b NaN 1.0 0.4 c NaN NaN 1.0 df = df.stack().reset_index() df.columns = ['Row','Column','Value'] print df Row Column Value 0 a a 1.0 1 a b 0.5 2 a c 0.3 3 b b 1.0 4 b c 0.4 5 c c 1.0 ```", "best_answers_score":0.8, "library_name":"numpy", "question_url":"https:\/\/stackoverflow.com\/questions\/34417685\/melt-the-upper-triangular-matrix-of-a-pandas-dataframe", "best_answers_votes":85, "question_length":532, "response_length":572 }, { "question":"Pandas : compute mean or std (standard deviation) over entire dataframe Here is my problem, I have a dataframe like this : ``` Depr_1 Depr_2 Depr_3 S3 0 5 9 S2 4 11 8 S1 6 11 12 S5 0 4 11 S4 4 8 8 ``` and I just want to calculate the mean over the full dataframe, as the following doesn't work : ``` df.mean() ``` Then I came up with : ``` df.mean().mean() ``` But this trick won't work for computing the standard deviation. My final attempts were : ``` df.get_values().mean() df.get_values().std() ``` Except that in the latter case, it uses mean() and std() function from numpy. 
It's not a problem for the mean, but it is for std, as the pandas function uses by default ddof=1, unlike the numpy one where ddof=0.", "response":"You could convert the dataframe to be a single column with stack (this changes the shape from 5x3 to 15x1) and then take the standard deviation: ``` df.stack().std() # pandas default degrees of freedom is one ``` Alternatively, you can use values to convert from a pandas dataframe to a numpy array before taking the standard deviation: ``` df.values.std(ddof=1) # numpy default degrees of freedom is zero ``` Unlike pandas, numpy will give the standard deviation of the entire array by default, so there is no need to reshape before taking the standard deviation. A couple of additional notes: The numpy approach here is a bit faster than the pandas one, which is generally true when you have the option to accomplish the same thing with either numpy or pandas. The speed difference will depend on the size of your data, but numpy was roughly 10x faster when I tested a few different sized dataframes on my laptop (numpy version 1.15.4 and pandas version 0.23.4). The numpy and pandas approaches here will not give exactly the same answers, but will be extremely close (identical at several digits of precision). The discrepancy is due to slight differences in implementation behind the scenes that affect how the floating point values get rounded.", "best_answers_score":0.8, "library_name":"numpy", "question_url":"https:\/\/stackoverflow.com\/questions\/25140998\/pandas-compute-mean-or-std-standard-deviation-over-entire-dataframe", "best_answers_votes":97, "question_length":714, "response_length":1249 }, { "question":"Simple plot in python of a numpy array? 
I have an array that looks like this ``` cplr = array([ 0.01828922, 0.01972157, 0.02342053, ..., 0.25928021, 0.26352547, 0.26883406]) ``` If I say ``` import matplotlib.pyplot as plt plt(cplr) ``` TypeError: 'module' object is not callable How do I plot the contents of a numpy array?", "response":"matplotlib.pyplot is a module; the function to plot is matplotlib.pyplot.plot. Thus, you should do ``` plt.plot(cplr) plt.show() ``` A good place to learn more about this would be to read a matplotlib tutorial.", "best_answers_score":0.8, "library_name":"numpy", "question_url":"https:\/\/stackoverflow.com\/questions\/18423603\/simple-plot-in-python-of-a-numpy-array", "best_answers_votes":86, "question_length":324, "response_length":210 }, { "question":"Sliding window of M-by-N shape numpy.ndarray I have a Numpy array of shape (6,2): ``` [[ 0, 1], [10,11], [20,21], [30,31], [40,41], [50,51]] ``` I need a sliding window with step size 1 and window size 3 like this: ``` [[ 0, 1,10,11,20,21], [10,11,20,21,30,31], [20,21,30,31,40,41], [30,31,40,41,50,51]] ``` I'm looking for a Numpy solution. If your solution could parametrise the shape of the original array as well as the window size and step size, that'd be great. I found this related answer Using strides for an efficient moving average filter but I don't see how to specify the stepsize there and how to collapse the window from the 3d to a continuous 2d array. Also this Rolling or sliding window iterator? but that's in Python and I'm not sure how efficient that is. Also, it supports elements but does not join them together in the end if each element has multiple features.", "response":"You can do a vectorized sliding window in numpy using fancy indexing. 
``` >>> import numpy as np >>> a = np.array([[00,01], [10,11], [20,21], [30,31], [40,41], [50,51]]) >>> a array([[ 0, 1], [10, 11], [20, 21], #define our 2d numpy array [30, 31], [40, 41], [50, 51]]) >>> a = a.flatten() >>> a array([ 0, 1, 10, 11, 20, 21, 30, 31, 40, 41, 50, 51]) #flattened numpy array >>> indexer = np.arange(6)[None, :] + 2*np.arange(4)[:, None] >>> indexer array([[ 0, 1, 2, 3, 4, 5], [ 2, 3, 4, 5, 6, 7], #sliding window indices [ 4, 5, 6, 7, 8, 9], [ 6, 7, 8, 9, 10, 11]]) >>> a[indexer] array([[ 0, 1, 10, 11, 20, 21], [10, 11, 20, 21, 30, 31], #values of a over sliding window [20, 21, 30, 31, 40, 41], [30, 31, 40, 41, 50, 51]]) >>> np.sum(a[indexer], axis=1) array([ 63, 123, 183, 243]) #sum of values in 'a' under the sliding window. ``` Explanation for what this code is doing. The np.arange(6)[None, :] creates a row vector 0 through 6, and np.arange(4)[:, None] creates a column vector 0 through 4. This results in a 4x6 matrix where each row (six of them) represents a window, and the number of rows (four of them) represents the number of windows. The multiple of 2 makes the sliding window slide 2 units at a time which is necessary for sliding over each tuple. Using numpy array slicing you can pass the sliding window into the flattened numpy array and do aggregates on them like sum.", "best_answers_score":0.8, "library_name":"numpy", "question_url":"https:\/\/stackoverflow.com\/questions\/15722324\/sliding-window-of-m-by-n-shape-numpy-ndarray", "best_answers_votes":78, "question_length":883, "response_length":1390 }, { "question":"How do I get all the values from a NumPy array excluding a certain index? I have a NumPy array, and I want to retrieve all the elements except a certain index. 
For example, consider the following array ``` a = [0,1,2,3,4,5,5,6,7,8,9] ``` If I specify index 3, then the resultant should be ``` a = [0,1,2,4,5,5,6,7,8,9] ```", "response":"Like resizing, removing elements from an NumPy array is a slow operation (especially for large arrays since it requires allocating space and copying all the data from the original array to the new array). It should be avoided if possible. Often you can avoid it by working with a masked array instead. For example, consider the array a: ``` import numpy as np a = np.array([0,1,2,3,4,5,5,6,7,8,9]) print(a) print(a.sum()) # [0 1 2 3 4 5 5 6 7 8 9] # 50 ``` We can mask its value at index 3 and can perform a summation which ignores masked elements: ``` a = np.ma.array(a, mask=False) a.mask[3] = True print(a) print(a.sum()) # [0 1 2 -- 4 5 5 6 7 8 9] # 47 ``` Masked arrays also support many operations besides sum. If you really need to, it is also possible to remove masked elements using the compressed method: ``` print(a.compressed()) # [0 1 2 4 5 5 6 7 8 9] ``` But as mentioned above, avoid it if possible.", "best_answers_score":0.8, "library_name":"numpy", "question_url":"https:\/\/stackoverflow.com\/questions\/7429118\/how-do-i-get-all-the-values-from-a-numpy-array-excluding-a-certain-index", "best_answers_votes":76, "question_length":322, "response_length":914 }, { "question":"Weird behaviour initializing a numpy array of string data I am having some seemingly trivial trouble with numpy when the array contains string data. I have the following code: ``` my_array = numpy.empty([1, 2], dtype = str) my_array[0, 0] = \"Cat\" my_array[0, 1] = \"Apple\" ``` Now, when I print it with print my_array[0, :], the response I get is ['C', 'A'], which is clearly not the expected output of Cat and Apple. Why is that, and how can I get the right output? Thanks!", "response":"Numpy requires string arrays to have a fixed maximum length. 
When you create an empty array with dtype=str, it sets this maximum length to 1 by default. You can see if you do my_array.dtype; it will show \"|S1\", meaning \"one-character string\". Subsequent assignments into the array are truncated to fit this structure. You can pass an explicit datatype with your maximum length by doing, e.g.: ``` my_array = numpy.empty([1, 2], dtype=\"S10\") ``` The \"S10\" will create an array of length-10 strings. You have to decide how big will be big enough to hold all the data you want to hold.", "best_answers_score":0.8, "library_name":"numpy", "question_url":"https:\/\/stackoverflow.com\/questions\/13717554\/weird-behaviour-initializing-a-numpy-array-of-string-data", "best_answers_votes":82, "question_length":473, "response_length":582 }, { "question":"Comparing previous row values in Pandas DataFrame ``` import pandas as pd data={'col1':[1,3,3,1,2,3,2,2]} df=pd.DataFrame(data,columns=['col1']) print df col1 0 1 1 3 2 3 3 1 4 2 5 3 6 2 7 2 ``` I have the following Pandas DataFrame and I want to create another column that compares the previous row of col1 to see if they are equal. What would be the best way to do this? It would be like the following DataFrame. 
Thanks ``` col1 match 0 1 False 1 3 False 2 3 True 3 1 False 4 2 False 5 3 False 6 2 False 7 2 True ```", "response":"You need eq with shift: ``` df['match'] = df.col1.eq(df.col1.shift()) print (df) col1 match 0 1 False 1 3 False 2 3 True 3 1 False 4 2 False 5 3 False 6 2 False 7 2 True ``` Or instead eq use ==, but it is a bit slowier in large DataFrame: ``` df['match'] = df.col1 == df.col1.shift() print (df) col1 match 0 1 False 1 3 False 2 3 True 3 1 False 4 2 False 5 3 False 6 2 False 7 2 True ``` Timings: ``` import pandas as pd data={'col1':[1,3,3,1,2,3,2,2]} df=pd.DataFrame(data,columns=['col1']) print (df) #[80000 rows x 1 columns] df = pd.concat([df]*10000).reset_index(drop=True) df['match'] = df.col1 == df.col1.shift() df['match1'] = df.col1.eq(df.col1.shift()) print (df) In [208]: %timeit df.col1.eq(df.col1.shift()) The slowest run took 4.83 times longer than the fastest. This could mean that an intermediate result is being cached. 1000 loops, best of 3: 933 \u00b5s per loop In [209]: %timeit df.col1 == df.col1.shift() 1000 loops, best of 3: 1 ms per loop ```", "best_answers_score":0.8, "library_name":"numpy", "question_url":"https:\/\/stackoverflow.com\/questions\/41399538\/comparing-previous-row-values-in-pandas-dataframe", "best_answers_votes":94, "question_length":518, "response_length":963 }, { "question":"Different std in pandas vs numpy The standard deviation differs between pandas and numpy. Why and which one is the correct one? (the relative difference is 3.5% which should not come from rounding, this is high in my opinion). 
Example ``` import numpy as np import pandas as pd from StringIO import StringIO a='''0.057411 0.024367 0.021247 -0.001809 -0.010874 -0.035845 0.001663 0.043282 0.004433 -0.007242 0.029294 0.023699 0.049654 0.034422 -0.005380''' df = pd.read_csv(StringIO(a.strip()), delim_whitespace=True, header=None) df.std()==np.std(df) # False df.std() # 0.025801 np.std(df) # 0.024926 (0.024926 - 0.025801) \/ 0.024926 # 3.5% relative difference ``` I use these versions: ``` pandas '0.14.0' numpy '1.8.1' ```", "response":"In a nutshell, neither is \"incorrect\". Pandas uses the unbiased estimator (N-1 in the denominator), whereas Numpy by default does not. To make them behave the same, pass ddof=1 to numpy.std(). For further discussion, see Population variance and sample variance.", "best_answers_score":0.8, "library_name":"numpy", "question_url":"https:\/\/stackoverflow.com\/questions\/24984178\/different-std-in-pandas-vs-numpy", "best_answers_votes":89, "question_length":724, "response_length":261 }, { "question":"Convert structured array to regular NumPy array The answer will be very obvious I think, but I don't see it at the moment. How can I convert a record array back to a regular ndarray? Suppose I have following simple structured array: ``` x = np.array([(1.0, 4.0,), (2.0, -1.0)], dtype=[('f0', '>> x array([ 3.08, 3.1 , 3.12, 3.14, 3.16, 3.18, 3.2 , 3.22, 3.24, 3.26, 3.28, 3.3 , 3.32, 3.34, 3.36, 3.38, 3.4 , 3.42, 3.44, 3.46, 3.48, 3.5 , 3.52, 3.54, 3.56, 3.58, 3.6 , 3.62, 3.64, 3.66, 3.68]) >>> y array([ 0.000857, 0.001182, 0.001619, 0.002113, 0.002702, 0.003351, 0.004062, 0.004754, 0.00546 , 0.006183, 0.006816, 0.007362, 0.007844, 0.008207, 0.008474, 0.008541, 0.008539, 0.008445, 0.008251, 0.007974, 0.007608, 0.007193, 0.006752, 0.006269, 0.005799, 0.005302, 0.004822, 0.004339, 0.00391 , 0.003481, 0.003095]) ``` Now, I want to fit these data with, say, a 4 degree polynomial. 
So I do: ``` >>> coefs = np.polynomial.polynomial.polyfit(x, y, 4) >>> ffit = np.poly1d(coefs) ``` Now I create a new grid for x values to evaluate the fitting function ffit: ``` >>> x_new = np.linspace(x[0], x[-1], num=len(x)*10) ``` When I do all the plotting (data set and fitting curve) with the command: ``` >>> fig1 = plt.figure() >>> ax1 = fig1.add_subplot(111) >>> ax1.scatter(x, y, facecolors='None') >>> ax1.plot(x_new, ffit(x_new)) >>> plt.show() ``` I get the following: fitting_data.png What I expect is the fitting function to fit correctly (at least near the maximum value of the data). What am I doing wrong?", "response":"Unfortunately, np.polynomial.polynomial.polyfit returns the coefficients in the opposite order of that for np.polyfit and np.polyval (or, as you used np.poly1d). To illustrate: ``` In [40]: np.polynomial.polynomial.polyfit(x, y, 4) Out[40]: array([ 84.29340848, -100.53595376, 44.83281408, -8.85931101, 0.65459882]) In [41]: np.polyfit(x, y, 4) Out[41]: array([ 0.65459882, -8.859311 , 44.83281407, -100.53595375, 84.29340846]) ``` In general: np.polynomial.polynomial.polyfit returns coefficients [A, B, C] to A + Bx + Cx^2 + ..., while np.polyfit returns: ... + Ax^2 + Bx + C. So if you want to use this combination of functions, you must reverse the order of coefficients, as in: ``` ffit = np.polyval(coefs[::-1], x_new) ``` However, the documentation states clearly to avoid np.polyfit, np.polyval, and np.poly1d, and instead to use only the new(er) package. 
You're safest to use only the polynomial package: ``` import numpy.polynomial.polynomial as poly coefs = poly.polyfit(x, y, 4) ffit = poly.polyval(x_new, coefs) plt.plot(x_new, ffit) ``` Or, to create the polynomial function: ``` ffit = poly.Polynomial(coefs) # instead of np.poly1d plt.plot(x_new, ffit(x_new)) ```", "best_answers_score":0.8, "library_name":"numpy", "question_url":"https:\/\/stackoverflow.com\/questions\/18767523\/fitting-data-with-numpy", "best_answers_votes":116, "question_length":1273, "response_length":1179 }, { "question":"Multiply several matrices in numpy Suppose you have n square matrices A1,...,An. Is there anyway to multiply these matrices in a neat way? As far as I know dot in numpy accepts only two arguments. One obvious way is to define a function to call itself and get the result. Is there any better way to get it done?", "response":"This might be a relatively recent feature, but I like: ``` A.dot(B).dot(C) ``` or if you had a long chain you could do: ``` reduce(numpy.dot, [A1, A2, ..., An]) ``` Update: There is more info about reduce here. Here is an example that might help. ``` >>> A = [np.random.random((5, 5)) for i in xrange(4)] >>> product1 = A[0].dot(A[1]).dot(A[2]).dot(A[3]) >>> product2 = reduce(numpy.dot, A) >>> numpy.all(product1 == product2) True ``` Update 2016: As of python 3.5, there is a new matrix_multiply symbol, @: ``` R = A @ B @ C ```", "best_answers_score":0.8, "library_name":"numpy", "question_url":"https:\/\/stackoverflow.com\/questions\/11838352\/multiply-several-matrices-in-numpy", "best_answers_votes":90, "question_length":311, "response_length":530 }, { "question":"How to find first non-zero value in every column of a numpy array? Suppose I have a numpy array of the form: ``` arr=numpy.array([[1,1,0],[1,1,0],[0,0,1],[0,0,0]]) ``` I want to find the indices of the first index (for every column) where the value is non-zero. 
So in this instance, I would like the following to be returned: ``` [0,0,2] ``` How do I go about this?", "response":"Indices of first occurrences Use np.argmax along that axis (zeroth axis for columns here) on the mask of non-zeros to get the indices of first matches (True values) - ``` (arr!=0).argmax(axis=0) ``` Extending to cover generic axis specifier and for cases where no non-zeros are found along that axis for an element, we would have an implementation like so - ``` def first_nonzero(arr, axis, invalid_val=-1): mask = arr!=0 return np.where(mask.any(axis=axis), mask.argmax(axis=axis), invalid_val) ``` Note that since argmax() on all False values returns 0, so if the invalid_val needed is 0, we would have the final output directly with mask.argmax(axis=axis). Sample runs - ``` In [296]: arr # Different from given sample for variety Out[296]: array([[1, 0, 0], [1, 1, 0], [0, 1, 0], [0, 0, 0]]) In [297]: first_nonzero(arr, axis=0, invalid_val=-1) Out[297]: array([ 0, 1, -1]) In [298]: first_nonzero(arr, axis=1, invalid_val=-1) Out[298]: array([ 0, 0, 1, -1]) ``` Extending to cover all comparison operations To find the first zeros, simply use arr==0 as mask for use in the function. For first ones equal to a certain value val, use arr == val and so on for all cases of comparisons possible here. 
Indices of last occurrences To find the last ones matching a certain comparison criteria, we need to flip along that axis and use the same idea of using argmax and then compensate for the flipping by offsetting from the axis length, as shown below - ``` def last_nonzero(arr, axis, invalid_val=-1): mask = arr!=0 val = arr.shape[axis] - np.flip(mask, axis=axis).argmax(axis=axis) - 1 return np.where(mask.any(axis=axis), val, invalid_val) ``` Sample runs - ``` In [320]: arr Out[320]: array([[1, 0, 0], [1, 1, 0], [0, 1, 0], [0, 0, 0]]) In [321]: last_nonzero(arr, axis=0, invalid_val=-1) Out[321]: array([ 1, 2, -1]) In [322]: last_nonzero(arr, axis=1, invalid_val=-1) Out[322]: array([ 0, 1, 1, -1]) ``` Again, all cases of comparisons possible here are covered by using the corresponding comparator to get mask and then using within the listed function.", "best_answers_score":0.8, "library_name":"numpy", "question_url":"https:\/\/stackoverflow.com\/questions\/47269390\/how-to-find-first-non-zero-value-in-every-column-of-a-numpy-array", "best_answers_votes":112, "question_length":365, "response_length":2058 }, { "question":"PyTorch: Can't call numpy() on Variable that requires grad. Use var.detach().numpy() instead Calling tensor.numpy() gives the error: RuntimeError: Can't call numpy() on Variable that requires grad. Use var.detach().numpy() instead. tensor.cpu().detach().numpy() gives the same error.", "response":"Error reproduced ``` import torch tensor1 = torch.tensor([1.0,2.0],requires_grad=True) print(tensor1) print(type(tensor1)) tensor1 = tensor1.numpy() print(tensor1) print(type(tensor1)) ``` which leads to the exact same error for the line tensor1 = tensor1.numpy(): ``` tensor([1., 2.], requires_grad=True) Traceback (most recent call last): File \"\/home\/badScript.py\", line 8, in tensor1 = tensor1.numpy() RuntimeError: Can't call numpy() on Variable that requires grad. Use var.detach().numpy() instead. 
Process finished with exit code 1 ``` Generic solution this was suggested to you in your error message, just replace var with your variable name ``` import torch tensor1 = torch.tensor([1.0,2.0],requires_grad=True) print(tensor1) print(type(tensor1)) tensor1 = tensor1.detach().numpy() print(tensor1) print(type(tensor1)) ``` which returns as expected ``` tensor([1., 2.], requires_grad=True) [1. 2.] Process finished with exit code 0 ``` Some explanation You need to convert your tensor to another tensor that isn't requiring a gradient in addition to its actual value definition. This other tensor can be converted to a numpy array. Cf. this discuss.pytorch post. (I think, more precisely, that one needs to do that in order to get the actual tensor out of its pytorch Variable wrapper, cf. this other discuss.pytorch post).", "best_answers_score":0.8, "library_name":"numpy", "question_url":"https:\/\/stackoverflow.com\/questions\/55466298\/pytorch-cant-call-numpy-on-variable-that-requires-grad-use-var-detach-num", "best_answers_votes":57, "question_length":283, "response_length":1334 }, { "question":"what does numpy ndarray shape do? I have a simple question about the .shape function, which confused me a lot. ``` a = np.array([1, 2, 3]) # Create a rank 1 array print(type(a)) # Prints \"\" print(a.shape) # Prints \"(3,)\" b = np.array([[1,2,3],[4,5,6]]) # Create a rank 2 array print(b.shape) # Prints \"(2, 3)\" ``` What did the .shape exactly do? count how many rows, how many columns, then the a.shape suppose to be, (1,3), one row three columns, right?", "response":"yourarray.shape or np.shape() or np.ma.shape() returns the shape of your ndarray as a tuple; And you can get the (number of) dimensions of your array using yourarray.ndim or np.ndim(). (i.e. it gives the n of the ndarray since all arrays in NumPy are just n-dimensional arrays (shortly called as ndarrays)) For a 1D array, the shape would be (n,) where n is the number of elements in your array. 
For a 2D array, the shape would be (n,m) where n is the number of rows and m is the number of columns in your array. Please note that in 1D case, the shape would simply be (n, ) instead of what you said as either (1, n) or (n, 1) for row and column vectors respectively. This is to follow the convention that: For 1D array, return a shape tuple with only 1 element (i.e. (n,)) For 2D array, return a shape tuple with only 2 elements (i.e. (n,m)) For 3D array, return a shape tuple with only 3 elements (i.e. (n,m,k)) For 4D array, return a shape tuple with only 4 elements (i.e. (n,m,k,j)) and so on. Also, please see the example below to see how np.shape() or np.ma.shape() behaves with 1D arrays and scalars: ``` # sample array In [10]: u = np.arange(10) # get its shape In [11]: np.shape(u) # u.shape Out[11]: (10,) # get array dimension using `np.ndim` In [12]: np.ndim(u) Out[12]: 1 In [13]: np.shape(np.mean(u)) Out[13]: () # empty tuple (to indicate that a scalar is a 0D array). # check using `numpy.ndim` In [14]: np.ndim(np.mean(u)) Out[14]: 0 ``` P.S.: So, the shape tuple is consistent with our understanding of dimensions of space, at least mathematically.", "best_answers_score":0.8, "library_name":"numpy", "question_url":"https:\/\/stackoverflow.com\/questions\/47564495\/what-does-numpy-ndarray-shape-do", "best_answers_votes":82, "question_length":453, "response_length":1565 }, { "question":"When to apply(pd.to_numeric) and when to astype(np.float64) in python? I have a pandas DataFrame object named xiv which has a column of int64 Volume measurements. ``` In[]: xiv['Volume'].head(5) Out[]: 0 252000 1 484000 2 62000 3 168000 4 232000 Name: Volume, dtype: int64 ``` I have read other posts (like this and this) that suggest the following solutions. But when I use either approach, it doesn't appear to change the dtype of the underlying data: ``` In[]: xiv['Volume'] = pd.to_numeric(xiv['Volume']) In[]: xiv['Volume'].dtypes Out[]: dtype('int64') ``` Or... 
``` In[]: xiv['Volume'] = pd.to_numeric(xiv['Volume']) Out[]: ###omitted for brevity### In[]: xiv['Volume'].dtypes Out[]: dtype('int64') In[]: xiv['Volume'] = xiv['Volume'].apply(pd.to_numeric) In[]: xiv['Volume'].dtypes Out[]: dtype('int64') ``` I've also tried making a separate pandas Series and using the methods listed above on that Series and reassigning to the x['Volume'] obect, which is a pandas.core.series.Series object. I have, however, found a solution to this problem using the numpy package's float64 type - this works but I don't know why it's different. ``` In[]: xiv['Volume'] = xiv['Volume'].astype(np.float64) In[]: xiv['Volume'].dtypes Out[]: dtype('float64') ``` Can someone explain how to accomplish with the pandas library what the numpy library seems to do easily with its float64 class; that is, convert the column in the xiv DataFrame to a float64 in place.", "response":"If you already have numeric dtypes (int8|16|32|64,float64,boolean) you can convert it to another \"numeric\" dtype using Pandas .astype() method. Demo: ``` In [90]: df = pd.DataFrame(np.random.randint(10**5,10**7,(5,3)),columns=list('abc'), dtype=np.int64) In [91]: df Out[91]: a b c 0 9059440 9590567 2076918 1 5861102 4566089 1947323 2 6636568 162770 2487991 3 6794572 5236903 5628779 4 470121 4044395 4546794 In [92]: df.dtypes Out[92]: a int64 b int64 c int64 dtype: object In [93]: df['a'] = df['a'].astype(float) In [94]: df.dtypes Out[94]: a float64 b int64 c int64 dtype: object ``` It won't work for object (string) dtypes, that can't be converted to numbers: ``` In [95]: df.loc[1, 'b'] = 'XXXXXX' In [96]: df Out[96]: a b c 0 9059440.0 9590567 2076918 1 5861102.0 XXXXXX 1947323 2 6636568.0 162770 2487991 3 6794572.0 5236903 5628779 4 470121.0 4044395 4546794 In [97]: df.dtypes Out[97]: a float64 b object c int64 dtype: object In [98]: df['b'].astype(float) ... skipped ... 
ValueError: could not convert string to float: 'XXXXXX' ``` So here we want to use pd.to_numeric() method: ``` In [99]: df['b'] = pd.to_numeric(df['b'], errors='coerce') In [100]: df Out[100]: a b c 0 9059440.0 9590567.0 2076918 1 5861102.0 NaN 1947323 2 6636568.0 162770.0 2487991 3 6794572.0 5236903.0 5628779 4 470121.0 4044395.0 4546794 In [101]: df.dtypes Out[101]: a float64 b float64 c int64 dtype: object ```", "best_answers_score":0.8, "library_name":"numpy", "question_url":"https:\/\/stackoverflow.com\/questions\/40095712\/when-to-applypd-to-numeric-and-when-to-astypenp-float64-in-python", "best_answers_votes":61, "question_length":1452, "response_length":1402 }, { "question":"Converting two lists into a matrix I'll try to be as clear as possible, and I'll start by explaining why I want to transform two arrays into a matrix. To plot the performance of a portfolio vs an market index I need a data structure like in this format: ``` [[portfolio_value1, index_value1] [portfolio_value2, index_value2]] ``` But I have the the data as two separate 1-D arrays: ``` portfolio = [portfolio_value1, portfolio_value2, ...] index = [index_value1, index_value2, ...] ``` So how do I transform the second scenario into the first. I've tried np.insert to add the second array to a test matrix I had in a python shell, my problem was to transpose the first array into a single column matrix. Any help on how to achieve this without an imperative loop would be great.", "response":"The standard numpy function for what you want is np.column_stack: ``` >>> np.column_stack(([1, 2, 3], [4, 5, 6])) array([[1, 4], [2, 5], [3, 6]]) ``` So with your portfolio and index arrays, doing ``` np.column_stack((portfolio, index)) ``` would yield something like: ``` [[portfolio_value1, index_value1], [portfolio_value2, index_value2], [portfolio_value3, index_value3], ...] 
```", "best_answers_score":0.8, "library_name":"numpy", "question_url":"https:\/\/stackoverflow.com\/questions\/18730044\/converting-two-lists-into-a-matrix", "best_answers_votes":130, "question_length":778, "response_length":384 }, { "question":"Installing numpy on Docker Alpine I'm trying to install numpy in a docker container based on Alpine 3.1. I'm using the following Dockerfile: ``` FROM alpine:3.1 RUN apk add --update make cmake gcc g++ gfortran RUN apk add --update python py-pip python-dev RUN pip install cython RUN pip install numpy ``` This runs fine until pip install numpy when I get the following error: ``` error: Command \"gcc -fno-strict-aliasing -Os -fomit-frame-pointer -DNDEBUG -Os -fomit-frame-pointer -fPIC -Inumpy\/core\/include -Ibuild\/src.linux-x86_64-2.7\/numpy\/core\/include\/numpy -Inumpy\/core\/src\/private -Inumpy\/core\/src -Inumpy\/core -Inumpy\/core\/src\/npymath -Inumpy\/core\/src\/multiarray -Inumpy\/core\/src\/umath -Inumpy\/core\/src\/npysort -I\/usr\/include\/python2.7 -Ibuild\/src.linux-x86_64-2.7\/numpy\/core\/src\/private -Ibuild\/src.linux-x86_64-2.7\/numpy\/core\/src\/private -Ibuild\/src.linux-x86_64-2.7\/numpy\/core\/src\/private -c build\/src.linux-x86_64-2.7\/numpy\/core\/src\/npymath\/ieee754.c -o build\/temp.linux-x86_64-2.7\/build\/src.linux-x86_64-2.7\/numpy\/core\/src\/npymath\/ieee754.o\" failed with exit status 1 ``` easy_install-2.7 numpy gives the same error. Are there any config\/installation steps I'm missing?", "response":"I've been having a bit of trouble with this myself and, long story short, I would encourage you to ask if it's really worth the hassle. Numpy is enormous when you start adding things to the stack like pandas, gpus, and scipy so the benefit of building it on alpine is limited, the savings over using Debian, Arch, or even Ubuntu are relatively modest when 500MB of your space is on this library anyway. That having been said, I threw together an image that does it. 
I needed as build-time dependencies musl-dev, linux-headers, and g++. I also wound up needing to add openblas from edge for something later in the stack so it's possible that some dependencies from that are required too. But I believe just adding the three former libraries with ``` apk --no-cache add musl-dev linux-headers g++ ``` should be sufficient to prevent the gcc error you are getting. You can view the image at https:\/\/hub.docker.com\/r\/o76923\/alpine-numpy-stack\/", "best_answers_score":0.8, "library_name":"numpy", "question_url":"https:\/\/stackoverflow.com\/questions\/33421965\/installing-numpy-on-docker-alpine", "best_answers_votes":52, "question_length":1180, "response_length":939 }, { "question":"Cannot install NumPy from a wheel format I am trying to install NumPy from a wheel (.whl) file. I get the error: numpy-1.9.1%2Bmkl-cp34-none-win_amd64.whl is not a supported wheel on this platform. Details: Windows 8.1 pro x64, elevated command prompt Python 3.4.2 Package NumPy from Gohlke's site File numpy-1.9.1%2Bmkl-cp34-none-win_amd64.whl copied in the pip.exe folder The log file shows: d:\\Program Files\\WinPython-64bit-3.4.2.4\\python-3.4.2.amd64\\Scripts\\pip run on 01\/23\/15 11:55:21 numpy-1.9.1%2Bmkl-cp34-none-win_amd64.whl is not a supported wheel on this platform. Exception information: Traceback (most recent call last): File \"D:\\Python34\\lib\\site-packages\\pip\\basecommand.py\", line 122, in main status = self.run(options, args) File \"D:\\Python34\\lib\\site-packages\\pip\\commands\\install.py\", line 257, in run InstallRequirement.from_line(name, None)) File \"D:\\Python34\\lib\\site-packages\\pip\\req.py\", line 167, in from_line raise UnsupportedWheel(\"%s is not a supported wheel on this platform.\" % wheel.filename) pip.exceptions.UnsupportedWheel: numpy-1.9.1%2Bmkl-cp34-none-win_amd64.whl is not a supported wheel on this platform. 
What is wrong?", "response":"Short answer: rename the file to numpy-1.9.1%2Bmkl-cp34-none-win32.whl to install it. You can check what tags your pip tool accepts for installation by running: ``` import pip; print(pip.pep425tags.get_supported()) ``` In this case pip is incorrectly detecting your operating system to be 32-bits and the file you're trying to install was win_amd64 in its filename. If you rename the file to numpy-1.9.1%2Bmkl-cp34-none-win32.whl (which now contains the tags that are considered supported) then you can install the package. It's a trick because the file is still built for 64-bits but this allows you to install the package as intended.", "best_answers_score":0.8, "library_name":"numpy", "question_url":"https:\/\/stackoverflow.com\/questions\/28107123\/cannot-install-numpy-from-a-wheel-format", "best_answers_votes":82, "question_length":1156, "response_length":636 }, { "question":"Improve Row Append Performance On Pandas DataFrames I am running a basic script that loops over a nested dictionary, grabs data from each record, and appends it to a Pandas DataFrame. The data looks something like this: ```py data = {\"SomeCity\": {\"Date1\": {record1, record2, record3, ...}, \"Date2\": {}, ...}, ...} ``` In total it has a few million records. The script itself looks like this: ```py city = [\"SomeCity\"] df = DataFrame({}, columns=['Date', 'HouseID', 'Price']) for city in cities: for dateRun in data[city]: for record in data[city][dateRun]: recSeries = Series([record['Timestamp'], record['Id'], record['Price']], index = ['Date', 'HouseID', 'Price']) FredDF = FredDF.append(recSeries, ignore_index=True) ``` This runs painfully slow, however. Before I look for a way to parallelize it, I just want to make sure I'm not missing something obvious that would make this perform faster as it is, as I'm still quite new to Pandas.", "response":"I also used the dataframe's append function inside a loop and I was perplexed how slow it ran. 
A useful example for those who are suffering, based on the correct answer on this page. Python version: 3 Pandas version: 0.20.3 ``` # the dictionary to pass to pandas dataframe d = {} # a counter to use to add entries to \"dict\" i = 0 # Example data to loop and append to a dataframe data = [{\"foo\": \"foo_val_1\", \"bar\": \"bar_val_1\"}, {\"foo\": \"foo_val_2\", \"bar\": \"bar_val_2\"}] # the loop for entry in data: # add a dictionary entry to the final dictionary d[i] = {\"col_1_title\": entry['foo'], \"col_2_title\": entry['bar']} # increment the counter i = i + 1 # create the dataframe using 'from_dict' # important to set the 'orient' parameter to \"index\" to make the keys as rows df = DataFrame.from_dict(d, \"index\") ``` The \"from_dict\" function: https:\/\/pandas.pydata.org\/pandas-docs\/stable\/generated\/pandas.DataFrame.from_dict.html", "best_answers_score":0.8, "library_name":"numpy", "question_url":"https:\/\/stackoverflow.com\/questions\/27929472\/improve-row-append-performance-on-pandas-dataframes", "best_answers_votes":81, "question_length":941, "response_length":922 }, { "question":"How to access a column in a list of lists in python I have a 2D array in python modeled by a list of lists and I want to extract the column. I made a quick research and I found a way that uses numpy arrays. The problem is that I do not want to use numpy so I don't want to convert my list of lists into a numpy array and then use [:,1] syntax. I tried using it on a normal list of lists but it shows an error so it's not possible. I am asking for a similar thing for list of lists without having to go through each element (In numpy arrays, it's faster to access a column by using [:,1] syntax than iterating over the elements of the array). 
I found this link but again it suggests iterating over elements without a shortcut.", "response":"List comprehensions are your friend when working with lists of lists: ``` In [111]: alist Out[111]: [[0, 1, 2, 3, 4, 5], [6, 7, 8, 9, 10, 11], [12, 13, 14, 15, 16, 17], [18, 19, 20, 21, 22, 23]] In [112]: [row[1] for row in alist] Out[112]: [1, 7, 13, 19] ``` There's also a handy 'idiom' for transposing a nested list, turning 'columns' into 'rows': ``` In [113]: tlist = list(zip(*alist)) In [114]: tlist Out[114]: [(0, 6, 12, 18), (1, 7, 13, 19), (2, 8, 14, 20), (3, 9, 15, 21), (4, 10, 16, 22), (5, 11, 17, 23)] In [115]: tlist[1] Out[115]: (1, 7, 13, 19) ```", "best_answers_score":0.8, "library_name":"numpy", "question_url":"https:\/\/stackoverflow.com\/questions\/44360162\/how-to-access-a-column-in-a-list-of-lists-in-python", "best_answers_votes":96, "question_length":725, "response_length":563 }, { "question":"Does Python have a function to reduce fractions? For example, when I calculate 98\/42 I want to get 7\/3, not 2.3333333, is there a function for that using Python or Numpy?", "response":"The fractions module can do that ``` >>> from fractions import Fraction >>> Fraction(98, 42) Fraction(7, 3) ``` There's a recipe over here for a numpy gcd. Which you could then use to divide your fraction ``` >>> def numpy_gcd(a, b): ... a, b = np.broadcast_arrays(a, b) ... a = a.copy() ... b = b.copy() ... pos = np.nonzero(b)[0] ... while len(pos) > 0: ... b2 = b[pos] ... a[pos], b[pos] = b2, a[pos] % b2 ... pos = pos[b[pos]!=0] ... return a ... 
>>> numpy_gcd(np.array([98]), np.array([42])) array([14]) >>> 98\/14, 42\/14 (7, 3) ```", "best_answers_score":0.8, "library_name":"numpy", "question_url":"https:\/\/stackoverflow.com\/questions\/17537613\/does-python-have-a-function-to-reduce-fractions", "best_answers_votes":70, "question_length":170, "response_length":536 }, { "question":"numpy boolean array with 1 bit entries Is there a way in numpy to create an array of booleans that uses just 1 bit for each entry? The standard np.bool type is 1 byte, but this way I use 8 times the required memory. On Google I found that C++ has std::vector.", "response":"To do this you can use numpy's packbits and unpackbits: ``` import numpy as np # original boolean array A1 = np.array([ [0, 1, 1, 0, 1], [0, 0, 1, 1, 1], [1, 1, 1, 1, 1], ], dtype=bool) # packed data A2 = np.packbits(A1, axis=None) # checking the size print(len(A1.tostring())) # 15 bytes print(len(A2.tostring())) # 2 bytes (ceil(15\/8)) # reconstructing from packed data. You need to resize and reshape A3 = np.unpackbits(A2, count=A1.size).reshape(A1.shape).view(bool) # and the arrays are equal print(np.array_equal(A1, A3)) # True ``` Prior to numpy 1.17.0, the first function is straight-forward to use, but reconstruction required additional manipulations. Here is an example: ``` import numpy as np # original boolean array A1 = np.array([ [0, 1, 1, 0, 1], [0, 0, 1, 1, 1], [1, 1, 1, 1, 1], ], dtype=np.bool) # packed data A2 = np.packbits(A1, axis=None) # checking the size print(len(A1.tostring())) # 15 bytes print(len(A2.tostring())) # 2 bytes (ceil(15\/8)) # reconstructing from packed data. 
You need to resize and reshape A3 = np.unpackbits(A2, axis=None)[:A1.size].reshape(A1.shape).astype(np.bool) # and the arrays are equal print(np.array_equal(A1, A3)) # True ```", "best_answers_score":0.8, "library_name":"numpy", "question_url":"https:\/\/stackoverflow.com\/questions\/5602155\/numpy-boolean-array-with-1-bit-entries", "best_answers_votes":43, "question_length":259, "response_length":1179 }, { "question":"plotting a histogram on a Log scale with Matplotlib I have a Pandas DataFrame that has the following values in a Series ``` x = [2, 1, 76, 140, 286, 267, 60, 271, 5, 13, 9, 76, 77, 6, 2, 27, 22, 1, 12, 7, 19, 81, 11, 173, 13, 7, 16, 19, 23, 197, 167, 1] ``` I was instructed to plot two histograms in a Jupyter notebook with Python 3.6. ``` x.plot.hist(bins=8) plt.show() ``` I chose 8 bins because that looked best to me. I have also been instructed to plot another histogram with the log of x. ``` x.plot.hist(bins=8) plt.xscale('log') plt.show() ``` This histogram looks TERRIBLE. Am I not doing something right? I've tried fiddling around with the plot, but everything I've tried just seems to make the histogram look even worse. Example: ``` x.plot(kind='hist', logx=True) ``` I was not given any instructions other than plot the log of X as a histogram. For the record, I have imported pandas, numpy, and matplotlib and specified that the plot should be inline.", "response":"Specifying bins=8 in the hist call means that the range between the minimum and maximum value is divided equally into 8 bins. What is equal on a linear scale is distorted on a log scale. What you could do is specify the bins of the histogram such that they are unequal in width in a way that would make them look equal on a logarithmic scale. 
``` import pandas as pd import numpy as np import matplotlib.pyplot as plt x = [2, 1, 76, 140, 286, 267, 60, 271, 5, 13, 9, 76, 77, 6, 2, 27, 22, 1, 12, 7, 19, 81, 11, 173, 13, 7, 16, 19, 23, 197, 167, 1] x = pd.Series(x) # histogram on linear scale plt.subplot(211) hist, bins, _ = plt.hist(x, bins=8) # histogram on log scale. # Use non-equal bin sizes, such that they look equal on log scale. logbins = np.logspace(np.log10(bins[0]),np.log10(bins[-1]),len(bins)) plt.subplot(212) plt.hist(x, bins=logbins) plt.xscale('log') plt.show() ```", "best_answers_score":0.8, "library_name":"numpy", "question_url":"https:\/\/stackoverflow.com\/questions\/47850202\/plotting-a-histogram-on-a-log-scale-with-matplotlib", "best_answers_votes":81, "question_length":967, "response_length":884 }, { "question":"How to multiply individual elements of a list with a number? ``` S = [22, 33, 45.6, 21.6, 51.8] P = 2.45 ``` Here S is an array How will I multiply this and get the value? ``` SP = [53.9, 80.85, 111.72, 52.92, 126.91] ```", "response":"In NumPy it is quite simple ``` import numpy as np P=2.45 S=[22, 33, 45.6, 21.6, 51.8] SP = P*np.array(S) ``` I recommend taking a look at the NumPy tutorial for an explanation of the full capabilities of NumPy's arrays: https:\/\/scipy.github.io\/old-wiki\/pages\/Tentative_NumPy_Tutorial", "best_answers_score":0.8, "library_name":"numpy", "question_url":"https:\/\/stackoverflow.com\/questions\/8194959\/how-to-multiply-individual-elements-of-a-list-with-a-number", "best_answers_votes":77, "question_length":221, "response_length":284 }, { "question":"Find index where elements change value numpy Suppose I have ``` >>> v array([1, 1, 1, 1, 1, 2, 2, 2, 3, 4, 3, 4, 3, 4, 3, 4, 5, 5, 5]) ``` Is there an efficient numpy way to find each index where the value changes? 
For instance, I would want some result like: ``` >>> index_of_changed_values(v) [0, 5, 8, 9, 10, 11, 12, 13, 14, 15, 16] ``` If this is not possible with some numpy routine, what is a fast way to do it in python? It would also be useful to me to be referred to some good numpy tutorials since I am a numpy beginner.", "response":"You can get this functionality in numpy by comparing each element with its neighbor; ``` v[:-1] != v[1:] array([False, False, False, False, True, False, False, True, True, True, True, True, True, True, True, True, False, False], dtype=bool) ``` To get the indices, use the \"where\" function: ``` np.where(v[:-1] != v[1:])[0] array([ 4, 7, 8, 9, 10, 11, 12, 13, 14, 15]) ``` From here you can prepend the first element and add one to get to the same indexing scheme you have in your question.", "best_answers_score":0.8, "library_name":"numpy", "question_url":"https:\/\/stackoverflow.com\/questions\/19125661\/find-index-where-elements-change-value-numpy", "best_answers_votes":94, "question_length":530, "response_length":495 }, { "question":"When to use pandas series, numpy ndarrays or simply python dictionaries? I am new to learning Python, and some of its libraries (numpy, pandas). I have found a lot of documentation on how numpy ndarrays, pandas series and python dictionaries work. But owing to my inexperience with Python, I have had a really hard time determining when to use each one of them. And I haven't found any best-practices that will help me understand and decide when it is better to use each type of data structure. As a general matter, are there any best practices for deciding which, if any, of these three data structures a specific data set should be loaded into?", "response":"The rule of thumb that I usually apply: use the simplest data structure that still satisfies your needs.
If we rank the data structures from most simple to least simple, it usually ends up like this: Dictionaries \/ lists Numpy arrays Pandas series \/ dataframes So first consider dictionaries \/ lists. If these allow you to do all data operations that you need, then all is fine. If not, start considering numpy arrays. Some typical reasons for moving to numpy arrays are: Your data is 2-dimensional (or higher). Although nested dictionaries\/lists can be used to represent multi-dimensional data, in most situations numpy arrays will be more efficient. You have to perform a bunch of numerical calculations. As already pointed out by zhqiat, numpy will give a significant speed-up in this case. Furthermore numpy arrays come bundled with a large amount of mathematical functions. Then there are also some typical reasons for going beyond numpy arrays and to the more-complex but also more-powerful pandas series\/dataframes: You have to merge multiple data sets with each other, or do reshaping\/reordering of your data. This diagram gives a nice overview of all the 'data wrangling' operations that pandas allows you to do. You have to import data from or export data to a specific file format like Excel, HDF5 or SQL. Pandas comes with convenient import\/export functions for this.", "best_answers_score":0.8, "library_name":"numpy", "question_url":"https:\/\/stackoverflow.com\/questions\/45285743\/when-to-use-pandas-series-numpy-ndarrays-or-simply-python-dictionaries", "best_answers_votes":80, "question_length":646, "response_length":1379 }, { "question":"Is there anything faster than dict()? I need a faster way to store and access around 3GB of k:v pairs. Where k is a string or an integer and v is an np.array() that can be of different shapes. Is there any object that is faster than the standard python dict in storing and accessing such a table? For example, a pandas.DataFrame? As far I have understood, python dict is a quite fast implementation of a hashtable. 
Is there anything better than that for my specific case?", "response":"No, there is nothing faster than a dictionary for this task, and that\u2019s because the complexity of its indexing (getting and setting an item) and even membership checking is O(1) on average. (Check the complexity of the rest of its functionality in the Python docs: https:\/\/wiki.python.org\/moin\/TimeComplexity ) Once you have saved your items in a dictionary, you can access them in constant time, which means that it's unlikely for your performance problem to have anything to do with dictionary indexing. That being said, you still might be able to make this process slightly faster by making some changes in your objects and their types that may result in some optimizations in the under-the-hood operations. e.g. If your strings (keys) are not very large, you can intern the lookup key and your dictionary's keys. Interning is caching the objects in memory --or, as in Python, a table of \"interned\" strings-- rather than creating them as separate objects. Python provides an intern() function within the sys module that you can use for this. Enter string in the table of \u201cinterned\u201d strings and return the interned string \u2013 which is string itself or a copy. Interning strings is useful to gain a little performance on dictionary lookup... also ... If the keys in a dictionary are interned and the lookup key is interned, the key comparisons (after hashing) can be done by a pointer comparison instead of comparing the string values themselves, which in turn reduces the access time to the object.
Here is an example: ``` In [48]: import sys In [49]: d = {'mystr{}'.format(i): i for i in range(30)} In [50]: %timeit d['mystr25'] 10000000 loops, best of 3: 46.9 ns per loop In [51]: d = {sys.intern('mystr{}'.format(i)): i for i in range(30)} In [52]: %timeit d['mystr25'] 10000000 loops, best of 3: 38.8 ns per loop ```", "best_answers_score":0.8, "library_name":"numpy", "question_url":"https:\/\/stackoverflow.com\/questions\/40694470\/is-there-anything-faster-than-dict", "best_answers_votes":79, "question_length":471, "response_length":1793 }, { "question":"Numpy, multiply array with scalar [duplicate] This question already has answers here: How to multiply individual elements of a list with a number? (4 answers) Closed 6 years ago. Is it possible to use ufuncs https:\/\/docs.scipy.org\/doc\/numpy\/reference\/ufuncs.html in order to map a function to an array (1D and \/ or 2D) and a scalar? If not, what would be my way to achieve this? For example: ``` a_1 = np.array([1.0, 2.0, 3.0]) a_2 = np.array([[1., 2.], [3., 4.]]) b = 2.0 ``` Expected result: ``` a_1 * b = array([2.0, 4.0, 6.0]); a_2 * b = array([[2., 4.], [6., 8.]]) ``` I'm using Python 2.7, if it is relevant to the issue.
With your example: ``` >>> a_1 = np.array([1.0, 2.0, 3.0]) >>> a_2 = np.array([[1., 2.], [3., 4.]]) >>> b = 2.0 >>> a_1 * b array([2., 4., 6.]) >>> a_2 * b array([[2., 4.], [6., 8.]]) ```", "best_answers_score":0.8, "library_name":"numpy", "question_url":"https:\/\/stackoverflow.com\/questions\/53485221\/numpy-multiply-array-with-scalar", "best_answers_votes":74, "question_length":616, "response_length":448 }, { "question":"How to use numpy.where with logical operators I'm trying to find the indices of all elements in an array that are greater than a but less than b. It's probably just a problem with my syntax but this doesn't work: ``` numpy.where((my_array > a) and (my_array < b)) ``` How should I fix this? Or is there a better way to do it? Thanks!", "response":"Here are two ways: ``` In [1]: my_array = arange(10) In [2]: where((my_array > 3) & (my_array < 7)) Out[2]: (array([4, 5, 6]),) In [3]: where(logical_and(my_array > 3, my_array < 7)) Out[3]: (array([4, 5, 6]),) In [4]: where((my_array > 3) * (my_array < 7)) Out[4]: (array([4, 5, 6]),) ```", "best_answers_score":0.8, "library_name":"numpy", "question_url":"https:\/\/stackoverflow.com\/questions\/13589390\/how-to-use-numpy-where-with-logical-operators", "best_answers_votes":89, "question_length":333, "response_length":160 }, { "question":"Indexing a numpy array with a list of tuples Why can't I index an ndarray using a list of tuple indices like so? ``` idx = [(x1, y1), ... (xn, yn)] X[idx] ``` Instead I have to do something unwieldy like ``` idx2 = numpy.array(idx) X[idx2[:, 0], idx2[:, 1]] # or more generally: X[tuple(numpy.vsplit(idx2.T, 1)[0])] ``` Is there a simpler, more pythonic way?", "response":"You can use a list of tuples, but the convention is different from what you want. numpy expects a list of row indices, followed by a list of column values. You, apparently, want to specify a list of (x,y) pairs. http:\/\/docs.scipy.org\/doc\/numpy\/reference\/arrays.indexing.html#integer-array-indexing The relevant section in the documentation is 'integer array indexing'. Here's an example, seeking 3 points in a 2d array.
(2 points in 2d can be confusing): ``` In [223]: idx Out[223]: [(0, 1, 1), (2, 3, 0)] In [224]: X[idx] Out[224]: array([2, 7, 4]) ``` Using your style of xy pairs of indices: ``` In [230]: idx1 = [(0,2),(1,3),(1,0)] In [231]: [X[i] for i in idx1] Out[231]: [2, 7, 4] In [240]: X[tuple(np.array(idx1).T)] Out[240]: array([2, 7, 4]) ``` X[tuple(zip(*idx1))] is another way of doing the conversion. The tuple() is optional in Python2. zip(*...) is a Python idiom that reverses the nesting of a list of lists. You are on the right track with: ``` In [242]: idx2=np.array(idx1) In [243]: X[idx2[:,0], idx2[:,1]] Out[243]: array([2, 7, 4]) ``` My tuple() is just a bit more compact (and not necessarily more 'pythonic'). Given the numpy convention, some sort of conversion is necessary. (Should we check what works with n-dimensions and m-points?)", "best_answers_score":0.8, "library_name":"numpy", "question_url":"https:\/\/stackoverflow.com\/questions\/28491230\/indexing-a-numpy-array-with-a-list-of-tuples", "best_answers_votes":70, "question_length":358, "response_length":1261 }, { "question":"How to increase Jupyter notebook Memory limit? I am using jupyter notebook with Python3 on windows 10. My computer has 8GB RAM and at least 4GB of my RAM is free. But when I want to make a numpy ndArray with size 6000*6000 with this command: np.zeros((6000, 6000), dtype='float64') I got this : Unable to allocate array with shape (6000, 6000) and data type float64 I don't think this could use more then 100MB RAM. I tried to change the number to see what happens. The biggest array I can make is (5000,5000). Did I make a mistake in estimating how much RAM I need?", "response":"Jupyter notebook has a default memory limit size. 
You can try to increase the memory limit by following these steps: Generate a config file using: ``` jupyter notebook --generate-config ``` Open the jupyter_notebook_config.py file situated inside the jupyter folder and edit the following property: ``` c.NotebookApp.max_buffer_size = your desired value ``` Remember to remove the # before the property value. Save and run the Jupyter notebook. It should now be able to utilize the set memory value. Also, don\u2019t forget to run the notebook from inside the Jupyter folder. Alternatively, you can simply run the notebook using the below command: ``` jupyter notebook --NotebookApp.max_buffer_size=your_value ```", "best_answers_score":0.8, "library_name":"numpy", "question_url":"https:\/\/stackoverflow.com\/questions\/57948003\/how-to-increase-jupyter-notebook-memory-limit", "best_answers_votes":63, "question_length":566, "response_length":697 }, { "question":"Pandas finding local max and min I have a pandas data frame with two columns: one is temperature, the other is time. I would like to make third and fourth columns called min and max. Each of these columns would be filled with nan's except where there is a local min or max, then it would have the value of that extremum. Here is a sample of what the data looks like; essentially, I am trying to identify all the peaks and low points in the figure. Are there any built-in tools with pandas that can accomplish this?", "response":"The solution offered by fuglede is great, but if your data is very noisy (like the one in the picture), you will end up with lots of misleading local extremes. I suggest that you use the scipy.signal.argrelextrema() method. The .argrelextrema() method has its own limitations but it has a useful feature where you can specify the number of points to be compared, kind of like a noise filtering algorithm.
for example: ```py import numpy as np import matplotlib.pyplot as plt import pandas as pd from scipy.signal import argrelextrema # Generate a noisy AR(1) sample np.random.seed(0) rs = np.random.randn(200) xs = [0] for r in rs: xs.append(xs[-1] * 0.9 + r) df = pd.DataFrame(xs, columns=['data']) n = 5 # number of points to be checked before and after # Find local peaks df['min'] = df.iloc[argrelextrema(df.data.values, np.less_equal, order=n)[0]]['data'] df['max'] = df.iloc[argrelextrema(df.data.values, np.greater_equal, order=n)[0]]['data'] # Plot results plt.scatter(df.index, df['min'], c='r') plt.scatter(df.index, df['max'], c='g') plt.plot(df.index, df['data']) plt.show() ``` Some points: you might need to check the points afterward to ensure there are no twin points very close to each other. you can play with n to filter the noisy points argrelextrema returns a tuple and the [0] at the end extracts a numpy array P.S. I developed this method as part of my research here. If you find it useful, feel free to cite me.", "best_answers_score":0.8, "library_name":"numpy", "question_url":"https:\/\/stackoverflow.com\/questions\/48023982\/pandas-finding-local-max-and-min", "best_answers_votes":129, "question_length":510, "response_length":1438 }, { "question":"Weighted percentile using numpy Is there a way to use the numpy.percentile function to compute weighted percentile? Or is anyone aware of an alternative python function to compute weighted percentile?", "response":"Completely vectorized numpy solution Here is the code I use. It's not an optimal one (which I'm unable to write with numpy), but still much faster and more reliable than the accepted solution ```py def weighted_quantile(values, quantiles, sample_weight=None, values_sorted=False, old_style=False): \"\"\" Very close to numpy.percentile, but supports weights. NOTE: quantiles should be in [0, 1]!
:param values: numpy.array with data :param quantiles: array-like with many quantiles needed :param sample_weight: array-like of the same length as `array` :param values_sorted: bool, if True, then will avoid sorting of initial array :param old_style: if True, will correct output to be consistent with numpy.percentile. :return: numpy.array with computed quantiles. \"\"\" values = np.array(values) quantiles = np.array(quantiles) if sample_weight is None: sample_weight = np.ones(len(values)) sample_weight = np.array(sample_weight) assert np.all(quantiles >= 0) and np.all(quantiles <= 1), \\ 'quantiles should be in [0, 1]' if not values_sorted: sorter = np.argsort(values) values = values[sorter] sample_weight = sample_weight[sorter] weighted_quantiles = np.cumsum(sample_weight) - 0.5 * sample_weight if old_style: # To be convenient with numpy.percentile weighted_quantiles -= weighted_quantiles[0] weighted_quantiles \/= weighted_quantiles[-1] else: weighted_quantiles \/= np.sum(sample_weight) return np.interp(quantiles, weighted_quantiles, values) ``` Examples: weighted_quantile([1, 2, 9, 3.2, 4], [0.0, 0.5, 1.]) array([ 1. , 3.2, 9. ]) weighted_quantile([1, 2, 9, 3.2, 4], [0.0, 0.5, 1.], sample_weight=[2, 1, 2, 4, 1]) array([ 1. , 3.2, 9. ])", "best_answers_score":0.8, "library_name":"numpy", "question_url":"https:\/\/stackoverflow.com\/questions\/21844024\/weighted-percentile-using-numpy", "best_answers_votes":81, "question_length":200, "response_length":1641 }, { "question":"In numpy, what does selection by [:,None] do? I'm taking the Udacity course on deep learning and I came across the following code: ``` def reformat(dataset, labels): dataset = dataset.reshape((-1, image_size * image_size)).astype(np.float32) # Map 0 to [1.0, 0.0, 0.0 ...], 1 to [0.0, 1.0, 0.0 ...] 
labels = (np.arange(num_labels) == labels[:,None]).astype(np.float32) return dataset, labels ``` What does labels[:,None] actually do here?", "response":"http:\/\/docs.scipy.org\/doc\/numpy\/reference\/arrays.indexing.html numpy.newaxis The newaxis object can be used in all slicing operations to create an axis of length one. :const: newaxis is an alias for \u2018None\u2019, and \u2018None\u2019 can be used in place of this with the same result. http:\/\/docs.scipy.org\/doc\/numpy-1.10.1\/reference\/generated\/numpy.expand_dims.html Demonstrating with part of your code ``` In [154]: labels=np.array([1,3,5]) In [155]: labels[:,None] Out[155]: array([[1], [3], [5]]) In [157]: np.arange(8)==labels[:,None] Out[157]: array([[False, True, False, False, False, False, False, False], [False, False, False, True, False, False, False, False], [False, False, False, False, False, True, False, False]], dtype=bool) In [158]: (np.arange(8)==labels[:,None]).astype(int) Out[158]: array([[0, 1, 0, 0, 0, 0, 0, 0], [0, 0, 0, 1, 0, 0, 0, 0], [0, 0, 0, 0, 0, 1, 0, 0]]) ```", "best_answers_score":0.8, "library_name":"numpy", "question_url":"https:\/\/stackoverflow.com\/questions\/37867354\/in-numpy-what-does-selection-by-none-do", "best_answers_votes":57, "question_length":438, "response_length":877 }, { "question":"Faster way of polygon intersection with shapely I have a large number of polygons (~100000) and try to find a smart way of calculating their intersecting area with a regular grid cells. Currently, I am creating the polygons and the grid cells using shapely (based on their corner coordinates). Then, using a simple for-loop I go through each polygon and compare it to nearby grid cells. Just a small example to illustrate the polygons\/grid cells. 
``` from shapely.geometry import box, Polygon # Example polygon xy = [[130.21001, 27.200001], [129.52, 27.34], [129.45, 27.1], [130.13, 26.950001]] polygon_shape = Polygon(xy) # Example grid cell gridcell_shape = box(129.5, -27.0, 129.75, 27.25) # The intersection polygon_shape.intersection(gridcell_shape).area ``` (BTW: the grid cells have the dimensions 0.25x0.25 and the polygons 1x1 at max) Actually this is quite fast for an individual polygon\/grid cell combo with around 0.003 seconds. However, running this code on a huge amount of polygons (each one could intersect dozens of grid cells) takes around 15+ minutes (up to 30+ min depending on the number of intersecting grid cells) on my machine which is not acceptable. Unfortunately, I have no idea how it is possible to write a code for polygon intersection to get the area of overlap. Do you have any tips? Is there an alternative to shapely?", "response":"Consider using Rtree to help identify which grid cells that a polygon may intersect. This way, you can remove the for loop used with the array of lat\/lons, which is probably the slow part. 
Structure your code something like this: ``` from shapely.ops import cascaded_union from rtree import index idx = index.Index() # Populate R-tree index with bounds of grid cells for pos, cell in enumerate(grid_cells): # assuming cell is a shapely object idx.insert(pos, cell.bounds) # Loop through each Shapely polygon for poly in polygons: # Merge cells that have overlapping bounding boxes merged_cells = cascaded_union([grid_cells[pos] for pos in idx.intersection(poly.bounds)]) # Now do actual intersection print(poly.intersection(merged_cells).area) ```", "best_answers_score":0.8, "library_name":"numpy", "question_url":"https:\/\/stackoverflow.com\/questions\/14697442\/faster-way-of-polygon-intersection-with-shapely", "best_answers_votes":82, "question_length":1351, "response_length":747 }, { "question":"Get intersecting rows across two 2D numpy arrays I want to get the intersecting (common) rows across two 2D numpy arrays. E.g., if the following arrays are passed as inputs: ``` array([[1, 4], [2, 5], [3, 6]]) array([[1, 4], [3, 6], [7, 8]]) ``` the output should be: ``` array([[1, 4], [3, 6]) ``` I know how to do this with loops. I'm looking at a Pythonic\/Numpy way to do this.", "response":"For short arrays, using sets is probably the clearest and most readable way to do it. Another way is to use numpy.intersect1d. You'll have to trick it into treating the rows as a single value, though... This makes things a bit less readable... ``` import numpy as np A = np.array([[1,4],[2,5],[3,6]]) B = np.array([[1,4],[3,6],[7,8]]) nrows, ncols = A.shape dtype={'names':['f{}'.format(i) for i in range(ncols)], 'formats':ncols * [A.dtype]} C = np.intersect1d(A.view(dtype), B.view(dtype)) # This last bit is optional if you're okay with \"C\" being a structured array... 
C = C.view(A.dtype).reshape(-1, ncols) ``` For large arrays, this should be considerably faster than using sets.", "best_answers_score":0.8, "library_name":"numpy", "question_url":"https:\/\/stackoverflow.com\/questions\/8317022\/get-intersecting-rows-across-two-2d-numpy-arrays", "best_answers_votes":53, "question_length":380, "response_length":684 }, { "question":"numpy float: 10x slower than builtin in arithmetic operations? I am getting really weird timings for the following code: ``` import numpy as np s = 0 for i in range(10000000): s += np.float64(1) # replace with np.float32 and built-in float ``` built-in float: 4.9 s float64: 10.5 s float32: 45.0 s Why is float64 twice slower than float? And why is float32 5 times slower than float64? Is there any way to avoid the penalty of using np.float64, and have numpy functions return built-in float instead of float64? I found that using numpy.float64 is much slower than Python's float, and numpy.float32 is even slower (even though I'm on a 32-bit machine). numpy.float32 on my 32-bit machine. Therefore, every time I use various numpy functions such as numpy.random.uniform, I convert the result to float32 (so that further operations would be performed at 32-bit precision). Is there any way to set a single variable somewhere in the program or in the command line, and make all numpy functions return float32 instead of float64? EDIT #1: numpy.float64 is 10 times slower than float in arithmetic calculations. It's so bad that even converting to float and back before the calculations makes the program run 3 times faster. Why? Is there anything I can do to fix it? I want to emphasize that my timings are not due to any of the following: the function calls the conversion between numpy and python float the creation of objects I updated my code to make it clearer where the problem lies. 
With the new code, it would seem I see a ten-fold performance hit from using numpy data types: ``` from datetime import datetime import numpy as np START_TIME = datetime.now() # one of the following lines is uncommented before execution #s = np.float64(1) #s = np.float32(1) #s = 1.0 for i in range(10000000): s = (s + 8) * s % 2399232 print(s) print('Runtime:', datetime.now() - START_TIME) ``` The timings are: float64: 34.56s float32: 35.11s float: 3.53s Just for the hell of it, I also tried: from datetime import datetime import numpy as np ``` START_TIME = datetime.now() s = np.float64(1) for i in range(10000000): s = float(s) s = (s + 8) * s % 2399232 s = np.float64(s) print(s) print('Runtime:', datetime.now() - START_TIME) ``` The execution time is 13.28 s; it's actually 3 times faster to convert the float64 to float and back than to use it as is. Still, the conversion takes its toll, so overall it's more than 3 times slower compared to the pure-python float. My machine is: Intel Core 2 Duo T9300 (2.5GHz) WinXP Professional (32-bit) ActiveState Python 3.1.3.5 Numpy 1.5.1 EDIT #2: Thank you for the answers, they help me understand how to deal with this problem. But I still would like to know the precise reason (based on the source code perhaps) why the code below runs 10 times slow with float64 than with float. EDIT #3: I rerun the code under the Windows 7 x64 (Intel Core i7 930 @ 3.8GHz). Again, the code is: ``` from datetime import datetime import numpy as np START_TIME = datetime.now() # one of the following lines is uncommented before execution #s = np.float64(1) #s = np.float32(1) #s = 1.0 for i in range(10000000): s = (s + 8) * s % 2399232 print(s) print('Runtime:', datetime.now() - START_TIME) ``` The timings are: float64: 16.1s float32: 16.1s float: 3.2s Now both np floats (either 64 or 32) are 5 times slower than the built-in float. Still, a significant difference. I'm trying to figure out where it comes from. 
END OF EDITS", "response":"CPython floats are allocated in chunks The key problem with comparing numpy scalar allocations to the float type is that CPython always allocates the memory for float and int objects in blocks of size N. Internally, CPython maintains a linked list of blocks each large enough to hold N float objects. When you call float(1) CPython checks if there is space available in the current block; if not it allocates a new block. Once it has space in the current block it simply initializes that space and returns a pointer to it. On my machine each block can hold 41 float objects, so there is some overhead for the first float(1) call but the next 40 run much faster as the memory is allocated and ready. Slow numpy.float32 vs. numpy.float64 It appears that numpy has 2 paths it can take when creating a scalar type: fast and slow. This depends on whether the scalar type has a Python base class to which it can defer for argument conversion. For some reason numpy.float32 is hard-coded to take the slower path (defined by the _WORK0 macro), while numpy.float64 gets a chance to take the faster path (defined by the _WORK1 macro). Note that scalartypes.c.src is a template which generates scalartypes.c at build time. You can visualize this in Cachegrind. I've included screen captures showing how many more calls are made to construct a float32 vs. float64: float64 takes the fast path float32 takes the slow path Updated - Which type takes the slow\/fast path may depend on whether the OS is 32-bit vs 64-bit. 
On my test system, Ubuntu Lucid 64-bit, the float64 type is 10 times faster than float32.", "best_answers_score":0.8, "library_name":"numpy", "question_url":"https:\/\/stackoverflow.com\/questions\/5956783\/numpy-float-10x-slower-than-builtin-in-arithmetic-operations", "best_answers_votes":51, "question_length":3453, "response_length":1594 }, { "question":"Transforming a row vector into a column vector in Numpy Let's say I have a row vector of the shape (1, 256). I want to transform it into a column vector of the shape (256, 1) instead. How would you do it in Numpy?", "response":"you can use the transpose operation to do this: Example: ``` In [2]: a = np.array([[1,2], [3,4], [5,6]]) In [5]: a.shape Out[5]: (3, 2) In [6]: a_trans = a.T #or: np.transpose(a), a.transpose() In [8]: a_trans.shape Out[8]: (2, 3) In [7]: a_trans Out[7]: array([[1, 3, 5], [2, 4, 6]]) ``` Note that the original array a will still remain unmodified. The transpose operation will just make a copy and transpose it. If your input array is rather 1D, then you can promote the array to a column vector by introducing a new (singleton) axis as the second dimension. Below is an example: ``` # 1D array In [13]: arr = np.arange(6) # promotion to a column vector (i.e., a 2D array) In [14]: arr = arr[..., None] #or: arr = arr[:, np.newaxis] In [15]: arr Out[15]: array([[0], [1], [2], [3], [4], [5]]) In [12]: arr.shape Out[12]: (6, 1) ``` For the 1D case, yet another option would be to use numpy.atleast_2d() followed by a transpose operation, as suggested by ankostis in the comments. 
``` In [9]: np.atleast_2d(arr).T Out[9]: array([[0], [1], [2], [3], [4], [5]]) ```", "best_answers_score":0.8, "library_name":"numpy", "question_url":"https:\/\/stackoverflow.com\/questions\/36384760\/transforming-a-row-vector-into-a-column-vector-in-numpy", "best_answers_votes":49, "question_length":213, "response_length":1064 }, { "question":"Numpy argmax - random tie breaking In numpy.argmax function, tie breaking between multiple max elements is so that the first element is returned. Is there a functionality for randomizing tie breaking so that all maximum numbers have equal chance of being selected? Below is an example directly from numpy.argmax documentation. ``` >>> b = np.arange(6) >>> b[1] = 5 >>> b array([0, 5, 2, 3, 4, 5]) >>> np.argmax(b) # Only the first occurrence is returned. 1 ``` I am looking for ways so that 1st and 5th elements in the list are returned with equal probability. Thank you!", "response":"Use np.random.choice - ``` np.random.choice(np.flatnonzero(b == b.max())) ``` Let's verify for an array with three max candidates - ``` In [298]: b Out[298]: array([0, 5, 2, 5, 4, 5]) In [299]: c=[np.random.choice(np.flatnonzero(b == b.max())) for i in range(100000)] In [300]: np.bincount(c) Out[300]: array([ 0, 33180, 0, 33611, 0, 33209]) ```", "best_answers_score":0.8, "library_name":"numpy", "question_url":"https:\/\/stackoverflow.com\/questions\/42071597\/numpy-argmax-random-tie-breaking", "best_answers_votes":61, "question_length":571, "response_length":345 }, { "question":"How to fix IndexError: invalid index to scalar variable This code generates error: ``` IndexError: invalid index to scalar variable. ``` at the line: results.append(RMSPE(np.expm1(y_train[testcv]), [y[1] for y in y_test])) How to fix it? 
``` import pandas as pd import numpy as np from sklearn import ensemble from sklearn import cross_validation def ToWeight(y): w = np.zeros(y.shape, dtype=float) ind = y != 0 w[ind] = 1.\/(y[ind]**2) return w def RMSPE(y, yhat): w = ToWeight(y) rmspe = np.sqrt(np.mean( w * (y - yhat)**2 )) return rmspe forest = ensemble.RandomForestRegressor(n_estimators=10, min_samples_split=2, n_jobs=-1) print (\"Cross validations\") cv = cross_validation.KFold(len(train), n_folds=5) results = [] for traincv, testcv in cv: y_test = np.expm1(forest.fit(X_train[traincv], y_train[traincv]).predict(X_train[testcv])) results.append(RMSPE(np.expm1(y_train[testcv]), [y[1] for y in y_test])) ``` testcv is: ``` [False False False ..., True True True] ```", "response":"You are trying to index into a scalar (non-iterable) value: ``` [y[1] for y in y_test] # ^ this is the problem ``` When you call [y for y in test] you are iterating over the values already, so you get a single value in y. Your code is the same as trying to do the following: ``` y_test = [1, 2, 3] y = y_test[0] # y = 1 print(y[0]) # this line will fail ``` I'm not sure what you're trying to get into your results array, but you need to get rid of [y[1] for y in y_test]. If you want to append each y in y_test to results, you'll need to expand your list comprehension out further to something like this: ``` [results.append(..., y) for y in y_test] ``` Or just use a for loop: ``` for y in y_test: results.append(..., y) ```", "best_answers_score":0.8, "library_name":"numpy", "question_url":"https:\/\/stackoverflow.com\/questions\/32978575\/how-to-fix-indexerror-invalid-index-to-scalar-variable", "best_answers_votes":49, "question_length":974, "response_length":726 }, { "question":"Convolve2d just by using Numpy I am studying image-processing using NumPy and facing a problem with filtering with convolution. I would like to convolve a gray-scale image. 
(convolve a 2d Array with a smaller 2d Array) Does anyone have an idea to refine my method? I know that SciPy supports convolve2d but I want to make a convolve2d only by using NumPy. What I have done First, I made a 2d array the submatrices. ``` a = np.arange(25).reshape(5,5) # original matrix submatrices = np.array([ [a[:-2,:-2], a[:-2,1:-1], a[:-2,2:]], [a[1:-1,:-2], a[1:-1,1:-1], a[1:-1,2:]], [a[2:,:-2], a[2:,1:-1], a[2:,2:]]]) ``` the submatrices seems complicated but what I am doing is shown in the following drawing. Next, I multiplied each submatrices with a filter. ``` conv_filter = np.array([[0,-1,0],[-1,4,-1],[0,-1,0]]) multiplied_subs = np.einsum('ij,ijkl->ijkl',conv_filter,submatrices) ``` and summed them. ``` np.sum(np.sum(multiplied_subs, axis = -3), axis = -3) #array([[ 6, 7, 8], # [11, 12, 13], # [16, 17, 18]]) ``` Thus this procedure can be called my convolve2d. ``` def my_convolve2d(a, conv_filter): submatrices = np.array([ [a[:-2,:-2], a[:-2,1:-1], a[:-2,2:]], [a[1:-1,:-2], a[1:-1,1:-1], a[1:-1,2:]], [a[2:,:-2], a[2:,1:-1], a[2:,2:]]]) multiplied_subs = np.einsum('ij,ijkl->ijkl',conv_filter,submatrices) return np.sum(np.sum(multiplied_subs, axis = -3), axis = -3) ``` However, I find this my_convolve2d troublesome for 3 reasons. Generation of the submatrices is too awkward that is difficult to read and can only be used when the filter is 3*3 The size of the variant submatrices seems to be too big, since it is approximately 9 folds bigger than the original matrix. The summing seems a little non intuitive. Simply said, ugly. Thank you for reading this far. Kind of update. I wrote a conv3d for myself. I will leave this as a public domain. 
``` def convolve3d(img, kernel): # calc the size of the array of submatrices sub_shape = tuple(np.subtract(img.shape, kernel.shape) + 1) # alias for the function strd = np.lib.stride_tricks.as_strided # make an array of submatrices submatrices = strd(img,kernel.shape + sub_shape,img.strides * 2) # sum the submatrices and kernel convolved_matrix = np.einsum('hij,hijklm->klm', kernel, submatrices) return convolved_matrix ```", "response":"You could generate the subarrays using as_strided: ``` import numpy as np a = np.array([[ 0, 1, 2, 3, 4], [ 5, 6, 7, 8, 9], [10, 11, 12, 13, 14], [15, 16, 17, 18, 19], [20, 21, 22, 23, 24]]) sub_shape = (3,3) view_shape = tuple(np.subtract(a.shape, sub_shape) + 1) + sub_shape strides = a.strides + a.strides sub_matrices = np.lib.stride_tricks.as_strided(a,view_shape,strides) ``` To get rid of your second \"ugly\" sum, alter your einsum so that the output array only has j and k. This implies your second summation. ``` conv_filter = np.array([[0,-1,0],[-1,5,-1],[0,-1,0]]) m = np.einsum('ij,ijkl->kl',conv_filter,sub_matrices) # [[ 6 7 8] # [11 12 13] # [16 17 18]] ```", "best_answers_score":0.8, "library_name":"numpy", "question_url":"https:\/\/stackoverflow.com\/questions\/43086557\/convolve2d-just-by-using-numpy", "best_answers_votes":40, "question_length":2280, "response_length":671 }, { "question":"How to pad with zeros a tensor along some axis (Python) I would like to pad a numpy tensor with 0 along the chosen axis. For instance, I have tensor r with shape (4,3,2) but I am only interested in padding only the last two axis (that is, pad only the matrix). Is it possible to do it with the one-line python code?", "response":"You can use np.pad(): ``` a = np.ones((4, 3, 2)) # npad is a tuple of (n_before, n_after) for each dimension npad = ((0, 0), (1, 2), (2, 1)) b = np.pad(a, pad_width=npad, mode='constant', constant_values=0) print(b.shape) # (4, 6, 5) print(b) # [[[ 0. 0. 0. 0. 0.] # [ 0. 0. 1. 1. 0.] # [ 0. 0. 1. 1. 0.] # [ 0. 0. 
1. 1. 0.] # [ 0. 0. 0. 0. 0.] # [ 0. 0. 0. 0. 0.]] # [[ 0. 0. 0. 0. 0.] # [ 0. 0. 1. 1. 0.] # [ 0. 0. 1. 1. 0.] # [ 0. 0. 1. 1. 0.] # [ 0. 0. 0. 0. 0.] # [ 0. 0. 0. 0. 0.]] # [[ 0. 0. 0. 0. 0.] # [ 0. 0. 1. 1. 0.] # [ 0. 0. 1. 1. 0.] # [ 0. 0. 1. 1. 0.] # [ 0. 0. 0. 0. 0.] # [ 0. 0. 0. 0. 0.]] # [[ 0. 0. 0. 0. 0.] # [ 0. 0. 1. 1. 0.] # [ 0. 0. 1. 1. 0.] # [ 0. 0. 1. 1. 0.] # [ 0. 0. 0. 0. 0.] # [ 0. 0. 0. 0. 0.]]] ```", "best_answers_score":0.8, "library_name":"numpy", "question_url":"https:\/\/stackoverflow.com\/questions\/19349410\/how-to-pad-with-zeros-a-tensor-along-some-axis-python", "best_answers_votes":102, "question_length":315, "response_length":736 }, { "question":"Compiling numpy with OpenBLAS integration I am trying to install numpy with OpenBLAS , however I am at loss as to how the site.cfg file needs to be written. When the installation procedure was followed the installation completed without errors, however there is performance degradation on increasing the number of threads used by OpenBLAS from 1 (controlled by the environment variable OMP_NUM_THREADS). I am not sure if the OpenBLAS integration has been perfect. Could any one provide a site.cfg file to achieve the same. P.S.: OpenBLAS integration in other toolkits like Theano, which is based on Python, provides substantial performance boost on increasing the number of threads, on the same machine.", "response":"I just compiled numpy inside a virtualenv with OpenBLAS integration, and it seems to be working OK. This was my process: Compile OpenBLAS: ``` $ git clone https:\/\/github.com\/xianyi\/OpenBLAS $ cd OpenBLAS && make FC=gfortran $ sudo make PREFIX=\/opt\/OpenBLAS install ``` If you don't have admin rights you could set PREFIX= to a directory where you have write privileges (just modify the corresponding steps below accordingly). Make sure that the directory containing libopenblas.so is in your shared library search path. 
To do this locally, you could edit your ~\/.bashrc file to contain the line ``` export LD_LIBRARY_PATH=\/opt\/OpenBLAS\/lib:$LD_LIBRARY_PATH ``` The LD_LIBRARY_PATH environment variable will be updated when you start a new terminal session (use $ source ~\/.bashrc to force an update within the same session). Another option that will work for multiple users is to create a .conf file in \/etc\/ld.so.conf.d\/ containing the line \/opt\/OpenBLAS\/lib, e.g.: ``` $ sudo sh -c \"echo '\/opt\/OpenBLAS\/lib' > \/etc\/ld.so.conf.d\/openblas.conf\" ``` Once you are done with either option, run ``` $ sudo ldconfig ``` Grab the numpy source code: ``` $ git clone https:\/\/github.com\/numpy\/numpy $ cd numpy ``` Copy site.cfg.example to site.cfg and edit the copy: ``` $ cp site.cfg.example site.cfg $ nano site.cfg ``` Uncomment these lines: ``` .... [openblas] libraries = openblas library_dirs = \/opt\/OpenBLAS\/lib include_dirs = \/opt\/OpenBLAS\/include .... ``` Check configuration, build, install (optionally inside a virtualenv) ``` $ python setup.py config ``` The output should look something like this: ``` ... openblas_info: FOUND: libraries = ['openblas', 'openblas'] library_dirs = ['\/opt\/OpenBLAS\/lib'] language = c define_macros = [('HAVE_CBLAS', None)] FOUND: libraries = ['openblas', 'openblas'] library_dirs = ['\/opt\/OpenBLAS\/lib'] language = c define_macros = [('HAVE_CBLAS', None)] ... ``` Installing with pip is preferable to using python setup.py install, since pip will keep track of the package metadata and allow you to easily uninstall or upgrade numpy in the future. ``` $ pip install . ``` Optional: you can use this script to test performance for different thread counts. 
``` $ OMP_NUM_THREADS=1 python build\/test_numpy.py version: 1.10.0.dev0+8e026a2 maxint: 9223372036854775807 BLAS info: * libraries ['openblas', 'openblas'] * library_dirs ['\/opt\/OpenBLAS\/lib'] * define_macros [('HAVE_CBLAS', None)] * language c dot: 0.099796795845 sec $ OMP_NUM_THREADS=8 python build\/test_numpy.py version: 1.10.0.dev0+8e026a2 maxint: 9223372036854775807 BLAS info: * libraries ['openblas', 'openblas'] * library_dirs ['\/opt\/OpenBLAS\/lib'] * define_macros [('HAVE_CBLAS', None)] * language c dot: 0.0439578056335 sec ``` There seems to be a noticeable improvement in performance for higher thread counts. However, I haven't tested this very systematically, and it's likely that for smaller matrices the additional overhead would outweigh the performance benefit from a higher thread count.", "best_answers_score":0.8, "library_name":"numpy", "question_url":"https:\/\/stackoverflow.com\/questions\/11443302\/compiling-numpy-with-openblas-integration", "best_answers_votes":100, "question_length":703, "response_length":2997 }, { "question":"which is faster for load: pickle or hdf5 in python Given a 1.5 Gb list of pandas dataframes, which format is fastest for loading compressed data: pickle (via cPickle), hdf5, or something else in Python? I only care about fastest speed to load the data into memory I don't care about dumping the data, it's slow but I only do this once. I don't care about file size on disk", "response":"UPDATE: nowadays I would choose between Parquet, Feather (Apache Arrow), HDF5 and Pickle. Pro's and Contra's: Parquet pros one of the fastest and widely supported binary storage formats supports very fast compression methods (for example Snappy codec) de-facto standard storage format for Data Lakes \/ BigData contras the whole dataset must be read into memory. You can't read a smaller subset. One way to overcome this problem is to use partitioning and to read only required partitions. no support for indexing. 
you can't read a specific row or a range of rows - you always have to read the whole Parquet file Parquet files are immutable - you can't change them (no way to append, update, delete); one can only write or overwrite a Parquet file. This \"limitation\" comes from the BigData world and would be considered one of the huge \"pros\" there. HDF5 pros supports data slicing - ability to read a portion of the whole dataset (we can work with datasets that wouldn't fit completely into RAM). relatively fast binary storage format supports compression (though the compression is slower compared to the Snappy codec (Parquet)) supports appending rows (mutable) contras risk of data corruption Pickle pros very fast contras requires much space on disk for long-term storage one might experience compatibility problems. You might need to specify the Pickle version for reading old Pickle files.
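A minimal harness for producing numbers like the ones below (pandas only, with a small synthetic frame and invented column names; the real measurements used a much larger DataFrame) might look like this:

```python
import os
import tempfile
import time

import numpy as np
import pandas as pd

# Small synthetic stand-in for the real 4000000 x 6 DataFrame.
df = pd.DataFrame(np.random.rand(10000, 6), columns=list('abcdef'))

results = {}
with tempfile.TemporaryDirectory() as tmp:
    for name, writer, reader in [('CSV', df.to_csv, pd.read_csv),
                                 ('Pickle', df.to_pickle, pd.read_pickle)]:
        path = os.path.join(tmp, name)
        t0 = time.perf_counter()
        writer(path)               # time the dump
        write_s = time.perf_counter() - t0
        t0 = time.perf_counter()
        reader(path)               # time the load
        read_s = time.perf_counter() - t0
        results[name] = (read_s, write_s, os.path.getsize(path))

for name, (read_s, write_s, size) in sorted(results.items()):
    print(name, round(read_s, 4), round(write_s, 4), size)
```

The same loop extends naturally to HDF5 (df.to_hdf \/ pd.read_hdf, which needs PyTables installed) and to compressed variants.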
Comparison for the following storage formats: (CSV, CSV.gzip, Pickle, HDF5 [various compression]): ``` read_s write_s size_ratio_to_CSV storage CSV 17.900 69.00 1.000 CSV.gzip 18.900 186.00 0.047 Pickle 0.173 1.77 0.374 HDF_fixed 0.196 2.03 0.435 HDF_tab 0.230 2.60 0.437 HDF_tab_zlib_c5 0.845 5.44 0.035 HDF_tab_zlib_c9 0.860 5.95 0.035 HDF_tab_bzip2_c5 2.500 36.50 0.011 HDF_tab_bzip2_c9 2.500 36.50 0.011 ``` But it might be different for you, because all my data was of the datetime dtype, so it's always better to make such a comparison with your real data or at least with similar data...", "best_answers_score":0.8, "library_name":"numpy", "question_url":"https:\/\/stackoverflow.com\/questions\/37928794\/which-is-faster-for-load-pickle-or-hdf5-in-python", "best_answers_votes":90, "question_length":372, "response_length":2228 }, { "question":"How to use numpy.void type I loaded a MATLAB .mat file via scipy.io.loadmat and it gave me a list of numpy.void objects. What are they, how can they be used and where can I get some reference documentation on them?", "response":"According to the numpy documentation: http:\/\/docs.scipy.org\/doc\/numpy\/reference\/arrays.dtypes.html, numpy.void types are defined as flexible data types. Basically, these are data types where there is no pre-defined type associated with the variable(s) you're looking at. If you look at numpy, you have data types such as float, uint8, bool, string, etc. void exists to accommodate more generic and flexible types, for data that doesn't necessarily fall into any one of these pre-defined data types. This situation is mostly encountered when you're loading in a struct where each element has multiple data types associated with multiple fields. Each structure element could have a combination of different data types, and the amalgamation of all of these data types to represent an instance of this structure element thus leads us to numpy.void.
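As a quick, self-contained illustration of that last point (the field names here are invented, not from any particular .mat file), each element of a structured array is a numpy.void whose fields can be introspected and accessed by name:

```python
import numpy as np

# A structured dtype, similar in spirit to what scipy.io.loadmat
# returns for a MATLAB struct (field names invented for illustration).
dt = np.dtype([('height', np.float64), ('label', np.str_, 8)])
rec = np.array([(1.5, 'foo'), (2.5, 'bar')], dtype=dt)

elem = rec[0]              # a single record
print(type(elem))          # numpy.void
print(elem.dtype.names)    # ('height', 'label')
print(elem['height'])      # 1.5
```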
With the documentation, you can certainly do the same operations like you would with any other data type. Take a look at the generic data type methods here: http:\/\/docs.scipy.org\/doc\/numpy\/reference\/generated\/numpy.generic.html#numpy.generic . In fact, all numpy data types are derived from this generic class, including numpy.void. In the first link I provided at the beginning of this post, it shows a good example of how to create a custom record type, where a record is a combination of a tuple of numbers and a string. When creating a list of these records, each type in the list is of type numpy.void and it demonstrates that a record is of this data type. However, bear in mind that this record list has a data type that is of this record, but each element of this list will be of type numpy.void. However, as a matter of self-containment, let's re-create the example here: Let's create a custom record type where it has two fields associated for each variable you create: A 16-bit string with a field named name A 2-element tuple of floating point numbers that are 64-bits each, with a field named grades As such, you'd do something like: ``` import numpy as np dt = np.dtype([('name', np.str_, 16), ('grades', np.float64, (2,))]) ``` As such, let's create an example list of two elements and instantiate their fields: ``` x = np.array([('Sarah', (8.0, 7.0)), ('John', (6.0, 7.0))], dtype=dt) ``` Because we made this list into a numpy.array, we expect its data type to be so: ``` type(x) ``` We get: ``` ``` Remember, the list itself is a numpy.array, but not the individual elements. 
To access the second element of this list, which is the second record, we do: ``` x[1] ``` We get: ``` ('John', [6.0, 7.0]) ``` To check the type of the second record, we do: ``` type(x[1]) ``` We get: ``` # As expected ``` Some additional bonuses for you To access the name of the second record, we do: ``` x[1]['name'] ``` We get: ``` 'John' ``` To access the grades of the second record, we do: ``` x[1]['grades'] ``` We get: ``` array([ 6., 7.]) ``` To check the type of the name inside the second record, we do: ``` type(x[1]['name']) ``` We get: ``` ``` To check the type of the grades inside the second record, we do: ``` type(x[1]['grades']) ``` We get: ``` ``` Take note that each element in this list is of type numpy.void. However, the individual fields for each element in our list is either a tuple of numbers, or a string. The collection of these elements together is of type numpy.void.", "best_answers_score":0.8, "library_name":"numpy", "question_url":"https:\/\/stackoverflow.com\/questions\/25247190\/how-to-use-numpy-void-type", "best_answers_votes":86, "question_length":214, "response_length":3357 }, { "question":"TypeError: Invalid dimensions for image data when plotting array with imshow() For the following code ``` # Numerical operation SN_map_final = (new_SN_map - mean_SN) \/ sigma_SN # Plot figure fig12 = plt.figure(12) fig_SN_final = plt.imshow(SN_map_final, interpolation='nearest') plt.colorbar() fig12 = plt.savefig(outname12) ``` with new_SN_map being a 1D array and mean_SN and sigma_SN being constants, I get the following error. 
``` Traceback (most recent call last): File \"c:\\Users\\Valentin\\Desktop\\Stage M2\\density_map_simple.py\", line 546, in <module> fig_SN_final = plt.imshow(SN_map_final, interpolation='nearest') File \"c:\\users\\valentin\\appdata\\local\\enthought\\canopy\\user\\lib\\site-packages\\matplotlib\\pyplot.py\", line 3022, in imshow **kwargs) File \"c:\\users\\valentin\\appdata\\local\\enthought\\canopy\\user\\lib\\site-packages\\matplotlib\\__init__.py\", line 1812, in inner return func(ax, *args, **kwargs) File \"c:\\users\\valentin\\appdata\\local\\enthought\\canopy\\user\\lib\\site-packages\\matplotlib\\axes\\_axes.py\", line 4947, in imshow im.set_data(X) File \"c:\\users\\valentin\\appdata\\local\\enthought\\canopy\\user\\lib\\site-packages\\matplotlib\\image.py\", line 453, in set_data raise TypeError(\"Invalid dimensions for image data\") TypeError: Invalid dimensions for image data ``` What is the source of this error? I thought my numerical operations were allowed.", "response":"There is a (somewhat) related question on StackOverflow: Showing an image with pylab.imshow() Here the problem was that an array of shape (nx,ny,1) is still considered a 3D array, and must be squeezed or sliced into a 2D array. More generally, the reason for the Exception TypeError: Invalid dimensions for image data is shown here: matplotlib.pyplot.imshow() needs a 2D array, or a 3D array with the third dimension being of shape 3 or 4! You can easily check this with (these checks are done by imshow, this function is only meant to give a more specific message in case it's not a valid input): ``` from __future__ import print_function import numpy as np def valid_imshow_data(data): data = np.asarray(data) if data.ndim == 2: return True elif data.ndim == 3: if 3 <= data.shape[2] <= 4: return True else: print('The last dimension must be 3 or 4, not \"{}\".'.format(data.shape[2])) return False else: print('To visualize an image the data must be 2 dimensional or 3 dimensional, not \"{}\".'.format(data.ndim)) return False ``` For example: ``` >>> new_SN_map = np.array([1,2,3]) >>> valid_imshow_data(new_SN_map) To visualize an image the data must be 2 dimensional or 3 dimensional, not \"1\".
False ``` np.asarray is what is done internally by matplotlib.pyplot.imshow so it's generally best you do it too. If you have a numpy array it's redundant, but if not (for example a list) it's necessary. In your specific case you got a 1D array, so you need to add a dimension with np.expand_dims() ``` import numpy as np import matplotlib.pyplot as plt a = np.array([1,2,3,4,5]) a = np.expand_dims(a, axis=0) # or axis=1 plt.imshow(a) plt.show() ``` or just use something that accepts 1D arrays like plot: ``` a = np.array([1,2,3,4,5]) plt.plot(a) plt.show() ```", "best_answers_score":0.8, "library_name":"numpy", "question_url":"https:\/\/stackoverflow.com\/questions\/36431496\/typeerror-invalid-dimensions-for-image-data-when-plotting-array-with-imshow", "best_answers_votes":77, "question_length":1348, "response_length":1463 }, { "question":"Error packaging Kivy with numpy library for Android using buildozer I am trying to create an Android package of my Kivy application using buildozer but I am getting this error when I try to include numpy. A summary of the error: ``` compile options: '-DNO_ATLAS_INFO=1 -Inumpy\/core\/include -Ibuild\/src.linux-x86_64-2.7\/numpy\/core\/include\/numpy -Inumpy\/core\/src\/private -Inumpy\/core\/src -Inumpy\/core -Inumpy\/core\/src\/npymath -Inumpy\/core\/src\/multiarray -Inumpy\/core\/src\/umath -Inumpy\/core\/src\/npysort -Inumpy\/core\/include -I\/home\/joao\/github\/buildozer\/.buildozer\/android\/platform\/python-for-android\/build\/python-install\/include\/python2.7 -Ibuild\/src.linux-x86_64-2.7\/numpy\/core\/src\/multiarray -Ibuild\/src.linux-x86_64-2.7\/numpy\/core\/src\/umath -c' ccache: numpy\/linalg\/lapack_litemodule.c ccache: numpy\/linalg\/python_xerbla.c \/usr\/bin\/gfortran -Wall -lm build\/temp.linux-x86_64-2.7\/numpy\/linalg\/lapack_litemodule.o build\/temp.linux-x86_64-2.7\/numpy\/linalg\/python_xerbla.o -L\/usr\/lib -L\/home\/joao\/github\/buildozer\/.buildozer\/android\/platform\/python-for-android\/build\/python-install\/lib
-Lbuild\/temp.linux-x86_64-2.7 -llapack -lblas -lpython2.7 -lgfortran -o build\/lib.linux-x86_64-2.7\/numpy\/linalg\/lapack_lite.so \/usr\/bin\/ld: build\/temp.linux-x86_64-2.7\/numpy\/linalg\/lapack_litemodule.o: Relocations in generic ELF (EM: 40) \/usr\/bin\/ld: build\/temp.linux-x86_64-2.7\/numpy\/linalg\/lapack_litemodule.o: Relocations in generic ELF (EM: 40) build\/temp.linux-x86_64-2.7\/numpy\/linalg\/lapack_litemodule.o: error adding symbols: File in wrong format collect2: error: ld returned 1 exit status \/usr\/bin\/ld: build\/temp.linux-x86_64-2.7\/numpy\/linalg\/lapack_litemodule.o: Relocations in generic ELF (EM: 40) \/usr\/bin\/ld: build\/temp.linux-x86_64-2.7\/numpy\/linalg\/lapack_litemodule.o: Relocations in generic ELF (EM: 40) build\/temp.linux-x86_64-2.7\/numpy\/linalg\/lapack_litemodule.o: error adding symbols: File in wrong format collect2: error: ld returned 1 exit status unable to execute _configtest: Exec format error error: Command \"\/usr\/bin\/gfortran -Wall -lm build\/temp.linux-x86_64-2.7\/numpy\/linalg\/lapack_litemodule.o build\/temp.linux-x86_64-2.7\/numpy\/linalg\/python_xerbla.o -L\/usr\/lib -L\/home\/joao\/github\/buildozer\/.buildozer\/android\/platform\/python-for-android\/build\/python-install\/lib -Lbuild\/temp.linux-x86_64-2.7 -llapack -lblas -lpython2.7 -lgfortran -o build\/lib.linux-x86_64-2.7\/numpy\/linalg\/lapack_lite.so\" failed with exit status 1 ``` does anyone know how to solve it? P.S.
I am using Ubuntu 14.04 64-bit", "response":"Try sudo apt-get install libatlas-base-dev; it looks like you're missing some libraries", "best_answers_score":0.8, "library_name":"numpy", "question_url":"https:\/\/stackoverflow.com\/questions\/29742289\/error-packaging-kivy-with-numpy-library-for-android-using-buildozer", "best_answers_votes":10, "question_length":2503, "response_length":86 }, { "question":"Scikit-Learn's Pipeline: A sparse matrix was passed, but dense data is required I'm finding it difficult to understand how to fix a Pipeline I created (read: largely pasted from a tutorial). It's Python 3.4.2: ``` df = pd.DataFrame df = DataFrame.from_records(train) test = [blah1, blah2, blah3] pipeline = Pipeline([('vectorizer', CountVectorizer()), ('classifier', RandomForestClassifier())]) pipeline.fit(numpy.asarray(df[0]), numpy.asarray(df[1])) predicted = pipeline.predict(test) ``` When I run it, I get: ``` TypeError: A sparse matrix was passed, but dense data is required. Use X.toarray() to convert to a dense numpy array. ``` This is for the line pipeline.fit(numpy.asarray(df[0]), numpy.asarray(df[1])). I've experimented a lot with solutions through numpy, scipy, and so forth, but I still don't know how to fix it. And yes, similar questions have come up before, but not inside a pipeline. Where is it that I have to apply toarray or todense?", "response":"Unfortunately those two are incompatible. A CountVectorizer produces a sparse matrix and the RandomForestClassifier requires a dense matrix. It is possible to convert using X.todense(). Doing this will substantially increase your memory footprint. Below is sample code to do this based on http:\/\/zacstewart.com\/2014\/08\/05\/pipelines-of-featureunions-of-pipelines.html which allows you to call .todense() in a pipeline stage.
``` from sklearn.base import TransformerMixin class DenseTransformer(TransformerMixin): def fit(self, X, y=None, **fit_params): return self def transform(self, X, y=None, **fit_params): return X.todense() ``` Once you have your DenseTransformer, you are able to add it as a pipeline step. ``` pipeline = Pipeline([ ('vectorizer', CountVectorizer()), ('to_dense', DenseTransformer()), ('classifier', RandomForestClassifier()) ]) ``` Another option would be to use a classifier meant for sparse data like LinearSVC. ``` from sklearn.svm import LinearSVC pipeline = Pipeline([('vectorizer', CountVectorizer()), ('classifier', LinearSVC())]) ```", "best_answers_score":0.8, "library_name":"numpy", "question_url":"https:\/\/stackoverflow.com\/questions\/28384680\/scikit-learns-pipeline-a-sparse-matrix-was-passed-but-dense-data-is-required", "best_answers_votes":87, "question_length":958, "response_length":1023 }, { "question":"Convert list or numpy array of single element to float in python I have a function which can accept either a list or a numpy array. In either case, the list\/array has a single element (always). I just need to return a float. So, e.g., I could receive: ``` list_ = [4] ``` or the numpy array: ``` array_ = array([4]) ``` And I should return ``` 4.0 ``` So, naturally (I would say), I employ float(...) on list_ and get: ``` TypeError: float() argument must be a string or a number ``` I do the same to array_ and this time it works by responding with \"4.0\". From this, I learn that Python's list cannot be converted to float this way. Based on the success with the numpy array conversion to float this led me to the approach: ``` float(np.asarray(list_)) ``` And this works when list_ is both a Python list and when it is a numpy array. Question But it seems like this approach has overhead: first converting the list to a numpy array and then to a float. Basically: Is there a better way of doing this?", "response":"You may want to use the ndarray.item method, as in a.item().
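For the list-or-array case in the question, a small sketch combining np.asarray with .item() (the helper name is made up):

```python
import numpy as np

def to_float(x):
    # Accepts a one-element list or a one-element ndarray;
    # .item() raises ValueError if there is more than one element.
    return float(np.asarray(x).item())

print(to_float([4]))           # 4.0
print(to_float(np.array([4]))) # 4.0
```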
This is also equivalent to (the now deprecated) np.asscalar(a). This has the benefit of working in situations with views and superfluous axes, while the above solutions will currently break. For example, ``` >>> a = np.asarray(1).view() >>> a.item() # correct 1 >>> a[0] # breaks Traceback (most recent call last): File \"<stdin>\", line 1, in <module> IndexError: too many indices for array >>> a = np.asarray([[2]]) >>> a.item() # correct 2 >>> a[0] # bad result array([2]) ``` This also has the benefit of throwing an exception if the array is not actually a scalar, while the a[0] approach will silently proceed (which may lead to bugs sneaking through undetected). ``` >>> a = np.asarray([1, 2]) >>> a[0] # silently proceeds 1 >>> a.item() # detects incorrect size Traceback (most recent call last): File \"<stdin>\", line 1, in <module> ValueError: can only convert an array of size 1 to a Python scalar ```", "best_answers_score":0.8, "library_name":"numpy", "question_url":"https:\/\/stackoverflow.com\/questions\/30311172\/convert-list-or-numpy-array-of-single-element-to-float-in-python", "best_answers_votes":62, "question_length":1003, "response_length":940 }, { "question":"TypeError with ufunc bitwise_xor In my program which traces out the path of a particle, I get the following error: ```none Traceback (most recent call last): File \"C:\\Users\\Felix\\Google Drive\\Research\\particles.py\", line 154, in <module> bfield += b_X(r_p(r,pos[2]))*(r_p(r,pos[2])\/r) *((r-r_p(r,pos[2]))**2+pos[2]**2)^(-1\/2)*np.array ([(1-r_p(r,pos[2])\/r)*pos[0],(1-r_p(r,pos[2])\/r)*pos[1],pos[2]]) TypeError: ufunc 'bitwise_xor' not supported for the input types, and the inputs could not be safely coerced to any supported types according to the casting rule ''safe'' ``` I can't seem to find what's going on. I don't have any instances of xor (although I suppose it might be encoded in an if\/else statement).", "response":"In the offending line you are using a ^ when you want a ** to raise a value to a power.
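A quick illustration of the difference between the two operators:

```python
# ** is exponentiation; ^ is bitwise XOR (integers only).
print(2 ** 3)  # 8
print(2 ^ 3)   # 1 (binary 10 XOR 11 -> 01)

# On floats, ^ raises immediately, which is essentially the numpy error above:
try:
    2.0 ^ 0.5
except TypeError as exc:
    print('TypeError:', exc)
```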
Python interprets this as an xor: ``` bfield += b_X(r_p(r,pos[2]))*(r_p(r,pos[2])\/r)*((r-r_p(r,pos[2]))**2+ pos[2]**2)^(-1\/2)*np.array([(1-r_p(r,pos[2])\/r)*pos[0], (1-r_p(r,pos[2])\/r)*pos[1],pos[2]]) ``` See: http:\/\/docs.python.org\/2\/reference\/expressions.html#binary-bitwise-operations", "best_answers_score":0.8, "library_name":"numpy", "question_url":"https:\/\/stackoverflow.com\/questions\/22725421\/typeerror-with-ufunc-bitwise-xor", "best_answers_votes":103, "question_length":704, "response_length":374 }, { "question":"Why does numpy.power return 0 for small exponents while math.pow returns the correct answer? ``` In [25]: np.power(10,-100) Out[25]: 0 In [26]: math.pow(10,-100) Out[26]: 1e-100 ``` I would expect both commands to return 1e-100. This is not a precision issue either, since the issue persists even after increasing the precision to 500. Is there some setting which I can change to get the correct answer?", "response":"Oh, it's much \"worse\" than that: ``` In [2]: numpy.power(10,-1) Out[2]: 0 ``` But this is a hint to what's going on: 10 is an integer, and numpy.power doesn't coerce the numbers to floats. But this works: ``` In [3]: numpy.power(10.,-1) Out[3]: 0.10000000000000001 In [4]: numpy.power(10.,-100) Out[4]: 1e-100 ``` Note, however, that the power operator, **, does convert to float: ``` In [5]: 10**-1 Out[5]: 0.1 ```", "best_answers_score":0.8, "library_name":"numpy", "question_url":"https:\/\/stackoverflow.com\/questions\/22956139\/why-does-numpy-power-return-0-for-small-exponents-while-math-pow-returns-the-cor", "best_answers_votes":72, "question_length":403, "response_length":415 }, { "question":"Numpy Typing with specific shape and datatype Currently I'm trying to work more with numpy typing to make my code clearer; however, I've somehow reached a limit that I can't currently get past. Is it possible to specify a specific shape and also the corresponding data type?
Example: ``` Shape=(4,) datatype= np.int32 ``` My attempts so far look like the following (but all just threw errors): First attempt: ``` import numpy as np def foo(x: np.ndarray[(4,), np.dtype[np.int32]]): ... result -> 'numpy._DTypeMeta' object is not subscriptable ``` Second attempt: ``` import numpy as np import numpy.typing as npt def foo(x: npt.NDArray[(4,), np.int32]): ... result -> Too many arguments for numpy.ndarray[typing.Any, numpy.dtype[+ScalarType]] ``` Also, unfortunately, I can't find any information about it in the documentation or I only get errors when I implement it the way it is documented.", "response":"Currently, numpy.typing.NDArray only accepts a dtype, like so: numpy.typing.NDArray[numpy.int32]. You have some options though. Use typing.Annotated typing.Annotated allows you to create an alias for a type and to bundle some extra information with it. In some my_types.py you would write all variations of shapes you want to hint: ```py from typing import Annotated, Literal, TypeVar import numpy as np import numpy.typing as npt DType = TypeVar(\"DType\", bound=np.generic) Array4 = Annotated[npt.NDArray[DType], Literal[4]] Array3x3 = Annotated[npt.NDArray[DType], Literal[3, 3]] ArrayNxNx3 = Annotated[npt.NDArray[DType], Literal[\"N\", \"N\", 3]] ``` And then in foo.py, you can supply a numpy dtype and use them as typehint: ```py import numpy as np from my_types import Array4 def foo(arr: Array4[np.int32]): assert arr.shape == (4,) ``` MyPy will recognize arr to be an np.ndarray and will check it as such. The shape checking can be done at runtime only, like in this example with an assert. If you don't like the assertion, you can use your creativity to define a function to do the checking for you. 
```py def assert_match(arr, array_type): hinted_shape = array_type.__metadata__[0].__args__ hinted_dtype_type = array_type.__args__[0].__args__[1] hinted_dtype = hinted_dtype_type.__args__[0] assert np.issubdtype(arr.dtype, hinted_dtype), \"DType does not match\" assert arr.shape == hinted_shape, \"Shape does not match\" assert_match(some_array, Array4[np.int32]) ``` Use nptyping Another option would be to use the 3rd-party library nptyping (yes, I am the author). You would drop my_types.py as it would be of no use anymore. Your foo.py would become something like: ```py from nptyping import NDArray, Shape, Int32 def foo(arr: NDArray[Shape[\"4\"], Int32]): assert isinstance(arr, NDArray[Shape[\"4\"], Int32]) ``` Use beartype + typing.Annotated There is also another 3rd-party library called beartype that you could use. It can take a variant of the typing.Annotated approach and will do the runtime checking for you. You would reinstate your my_types.py with content similar to: ```py from beartype import beartype from beartype.vale import Is from typing import Annotated import numpy as np Int32Array4 = Annotated[np.ndarray, Is[lambda array: array.shape == (4,) and np.issubdtype(array.dtype, np.int32)]] Int32Array3x3 = Annotated[np.ndarray, Is[lambda array: array.shape == (3,3) and np.issubdtype(array.dtype, np.int32)]] ``` And your foo.py would become: ```py import numpy as np from beartype import beartype from my_types import Int32Array4 @beartype def foo(arr: Int32Array4): ... # Runtime type checked by beartype. ``` Use beartype + nptyping You could also stack up both libraries. Your my_types.py can be removed again and your foo.py would become something like: ```py from nptyping import NDArray, Shape, Int32 from beartype import beartype @beartype def foo(arr: NDArray[Shape[\"4\"], Int32]): ... # Runtime type checked by beartype.
```", "best_answers_score":0.8, "library_name":"numpy", "question_url":"https:\/\/stackoverflow.com\/questions\/71109838\/numpy-typing-with-specific-shape-and-datatype", "best_answers_votes":65, "question_length":891, "response_length":2945 }, { "question":"Passing list-likes to .loc or [] with any missing labels is no longer supported I want to create a modified dataframe with the specified columns. I tried the following but it throws the error \"Passing list-likes to .loc or [] with any missing labels is no longer supported\" ``` # columns to keep filtered_columns = ['text', 'agreeCount', 'disagreeCount', 'id', 'user.firstName', 'user.lastName', 'user.gender', 'user.id'] tips_filtered = tips_df.loc[:, filtered_columns] # display tips tips_filtered ``` Thank you", "response":"It looks like Pandas has deprecated this method of indexing. According to their docs: This behavior is deprecated and will show a warning message pointing to this section. The recommended alternative is to use .reindex() Using the new recommended method, you can filter your columns using: tips_filtered = tips_df.reindex(columns = filtered_columns). NB: To reindex rows, you would use reindex(index = ...) (More information here).", "best_answers_score":0.8, "library_name":"numpy", "question_url":"https:\/\/stackoverflow.com\/questions\/61291741\/passing-list-likes-to-loc-or-with-any-missing-labels-is-no-longer-supported", "best_answers_votes":54, "question_length":510, "response_length":431 }, { "question":"output of numpy.where(condition) is not an array, but a tuple of arrays: why? I am experimenting with the numpy.where(condition[, x, y]) function. From the numpy documentation, I learn that if you give just one array as input, it should return the indices where the array is non-zero (i.e. \"True\"): If only condition is given, return the tuple condition.nonzero(), the indices where condition is True.
But if I try it, it returns a tuple of two elements, where the first is the wanted list of indices, and the second is a null element: ``` >>> import numpy as np >>> array = np.array([1,2,3,4,5,6,7,8,9]) >>> np.where(array>4) (array([4, 5, 6, 7, 8]),) # notice the comma before the last parenthesis ``` so the question is: why? what is the purpose of this behaviour? in what situation is this useful? Indeed, to get the wanted list of indices I have to add the indexing, as in np.where(array>4)[0], which seems... \"ugly\". ADDENDUM I understand (from some answers) that it is actually a tuple of just one element. Still I don't understand why the output is given in this way. To illustrate how this is not ideal, consider the following error (which motivated my question in the first place): ``` >>> import numpy as np >>> array = np.array([1,2,3,4,5,6,7,8,9]) >>> pippo = np.where(array>4) >>> pippo + 1 Traceback (most recent call last): File \"<stdin>\", line 1, in <module> TypeError: can only concatenate tuple (not \"int\") to tuple ``` so that you need to do some indexing to access the actual array of indices: ``` >>> pippo[0] + 1 array([5, 6, 7, 8, 9]) ```", "response":"In Python (1) means just 1. () can be freely added to group numbers and expressions for human readability (e.g. (1+3)*3 v (1+3,)*3). Thus to denote a 1 element tuple Python uses (1,) (and requires you to use it as well). Thus ``` (array([4, 5, 6, 7, 8]),) ``` is a one element tuple, that element being an array. If you applied where to a 2d array, the result would be a 2 element tuple. The result of where is such that it can be plugged directly into an indexing slot, e.g.
``` a[where(a>0)] a[a>0] ``` should return the same things as would ``` I,J = where(a>0) # a is 2d a[I,J] a[(I,J)] ``` Or with your example: ``` In [278]: a=np.array([1,2,3,4,5,6,7,8,9]) In [279]: np.where(a>4) Out[279]: (array([4, 5, 6, 7, 8], dtype=int32),) # tuple In [280]: a[np.where(a>4)] Out[280]: array([5, 6, 7, 8, 9]) In [281]: I=np.where(a>4) In [282]: I Out[282]: (array([4, 5, 6, 7, 8], dtype=int32),) In [283]: a[I] Out[283]: array([5, 6, 7, 8, 9]) In [286]: i, = np.where(a>4) # note the , on LHS In [287]: i Out[287]: array([4, 5, 6, 7, 8], dtype=int32) # not tuple In [288]: a[i] Out[288]: array([5, 6, 7, 8, 9]) In [289]: a[(i,)] Out[289]: array([5, 6, 7, 8, 9]) ``` ====================== np.flatnonzero shows the correct way of returning just one array, regardless of the dimensions of the input array. ``` In [299]: np.flatnonzero(a>4) Out[299]: array([4, 5, 6, 7, 8], dtype=int32) In [300]: np.flatnonzero(a>4)+10 Out[300]: array([14, 15, 16, 17, 18], dtype=int32) ``` Its doc says: This is equivalent to a.ravel().nonzero()[0] In fact that is literally what the function does. Flattening a removes the question of what to do with multiple dimensions. And then it takes the result out of the tuple, giving you a plain array. With flattening it doesn't have to make a special case for 1d arrays. =========================== @Divakar suggests np.argwhere: ``` In [303]: np.argwhere(a>4) Out[303]: array([[4], [5], [6], [7], [8]], dtype=int32) ``` which does np.transpose(np.where(a>4)) Or if you don't like the column vector, you could transpose it again ``` In [307]: np.argwhere(a>4).T Out[307]: array([[4, 5, 6, 7, 8]], dtype=int32) ``` except now it is a 1xn array.
We could just as well have wrapped where in array: ``` In [311]: np.array(np.where(a>4)) Out[311]: array([[4, 5, 6, 7, 8]], dtype=int32) ``` Lots of ways of taking an array out of the where tuple ([0], i,=, transpose, array, etc).", "best_answers_score":0.8, "library_name":"numpy", "question_url":"https:\/\/stackoverflow.com\/questions\/33747908\/output-of-numpy-wherecondition-is-not-an-array-but-a-tuple-of-arrays-why", "best_answers_votes":50, "question_length":1547, "response_length":2392 }, { "question":"Converting Numpy Array to OpenCV Array I'm trying to convert a 2D Numpy array, representing a black-and-white image, into a 3-channel OpenCV array (i.e. an RGB image). Based on code samples and the docs I'm attempting to do this via Python like: ``` import numpy as np, cv vis = np.zeros((384, 836), np.uint32) h,w = vis.shape vis2 = cv.CreateMat(h, w, cv.CV_32FC3) cv.CvtColor(vis, vis2, cv.CV_GRAY2BGR) ``` However, the call to CvtColor() is throwing the following cpp-level Exception: ``` OpenCV Error: Image step is wrong () in cvSetData, file \/build\/buildd\/opencv-2.1.0\/src\/cxcore\/cxarray.cpp, line 902 terminate called after throwing an instance of 'cv::Exception' what(): \/build\/buildd\/opencv-2.1.0\/src\/cxcore\/cxarray.cpp:902: error: (-13) in function cvSetData Aborted ``` What am I doing wrong?", "response":"Your code can be fixed as follows: ``` import numpy as np, cv vis = np.zeros((384, 836), np.float32) h,w = vis.shape vis2 = cv.CreateMat(h, w, cv.CV_32FC3) vis0 = cv.fromarray(vis) cv.CvtColor(vis0, vis2, cv.CV_GRAY2BGR) ``` Short explanation: np.uint32 data type is not supported by OpenCV (it supports uint8, int8, uint16, int16, int32, float32, float64) cv.CvtColor can't handle numpy arrays so both arguments have to be converted to an OpenCV type. cv.fromarray does this conversion. Both arguments of cv.CvtColor must have the same depth. So I've changed the source type to 32bit float to match the destination.
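For reference, GRAY2BGR simply replicates the single channel into B, G and R; a numpy-only sketch of the same idea (no OpenCV required to run):

```python
import numpy as np

gray = np.zeros((384, 836), np.float32)  # single-channel image
bgr = np.dstack([gray, gray, gray])      # replicate into 3 channels

print(bgr.shape)  # (384, 836, 3)
print(bgr.dtype)  # float32
```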
Also I recommend you use a newer version of the OpenCV Python API because it uses numpy arrays as its primary data type: ``` import numpy as np, cv2 vis = np.zeros((384, 836), np.float32) vis2 = cv2.cvtColor(vis, cv2.COLOR_GRAY2BGR) ```", "best_answers_score":0.8, "library_name":"numpy", "question_url":"https:\/\/stackoverflow.com\/questions\/7587490\/converting-numpy-array-to-opencv-array", "best_answers_votes":58, "question_length":803, "response_length":834 }, { "question":"Scipy sparse... arrays? So, I'm doing some Kmeans classification using numpy arrays that are quite sparse-- lots and lots of zeroes. I figured that I'd use scipy's 'sparse' package to reduce the storage overhead, but I'm a little confused about how to create arrays, not matrices. I've gone through this tutorial on how to create sparse matrices: http:\/\/www.scipy.org\/SciPy_Tutorial#head-c60163f2fd2bab79edd94be43682414f18b90df7 To mimic an array, I just create a 1xN matrix, but as you may guess, Asp.dot(Bsp) doesn't quite work because you can't multiply two 1xN matrices. I'd have to transpose each array to Nx1, and that's pretty lame, since I'd be doing it for every dot-product calculation. Next up, I tried to create an NxN matrix where column 1 == row 1 (such that you can multiply two matrices and just take the top-left corner as the dot product), but that turned out to be really inefficient. I'd love to use scipy's sparse package as a magic replacement for numpy's array(), but as yet, I'm not really sure what to do. Any advice?", "response":"Use a scipy.sparse format that is row or column based: csc_matrix and csr_matrix. These use efficient C implementations under the hood (including multiplication), and transposition is a no-op (esp. if you call transpose(copy=False)), just like with numpy arrays. 
EDIT: some timings via ipython: ``` import numpy, scipy.sparse n = 100000 x = (numpy.random.rand(n) * 2).astype(int).astype(float) # 50% sparse vector x_csr = scipy.sparse.csr_matrix(x) x_dok = scipy.sparse.dok_matrix(x.reshape(x_csr.shape)) ``` Now x_csr and x_dok are 50% sparse: ``` print repr(x_csr) <1x100000 sparse matrix of type '<type 'numpy.float64'>' with 49757 stored elements in Compressed Sparse Row format> ``` And the timings: ``` timeit numpy.dot(x, x) 10000 loops, best of 3: 123 us per loop timeit x_dok * x_dok.T 1 loops, best of 3: 1.73 s per loop timeit x_csr.multiply(x_csr).sum() 1000 loops, best of 3: 1.64 ms per loop timeit x_csr * x_csr.T 100 loops, best of 3: 3.62 ms per loop ``` So it looks like I told a lie. Transposition is very cheap, but there is no efficient C implementation of csr * csc (in the latest scipy 0.9.0). A new csr object is constructed in each call :-( As a hack (though scipy is relatively stable these days), you can do the dot product directly on the sparse data: ``` timeit numpy.dot(x_csr.data, x_csr.data) 10000 loops, best of 3: 62.9 us per loop ``` Note this last approach does a numpy dense multiplication again. The sparsity is 50%, so it's actually faster than dot(x, x) by a factor of 2.", "best_answers_score":0.8, "library_name":"numpy", "question_url":"https:\/\/stackoverflow.com\/questions\/2540059\/scipy-sparse-arrays", "best_answers_votes":36, "question_length":1042, "response_length":1458 }, { "question":"ERROR: Failed building wheel for numpy , ERROR: Could not build wheels for numpy, which is required to install pyproject.toml-based projects I'm using python poetry(https:\/\/python-poetry.org\/) for dependency management in my project. Though when I'm running poetry install, it's giving me the error below. ``` ERROR: Failed building wheel for numpy Failed to build numpy ERROR: Could not build wheels for numpy, which is required to install pyproject.toml-based projects ``` I'm having python 3.9 installed in my laptop. 
I installed numpy 1.21.5 using pip install numpy, I even tried to down version it to 1.19.5, though I'm getting the same error. I found out many people are getting the ERROR: Failed building wheel for numpy error in python 3.10; they solved it by down versioning python to 3.9, though that didn't work for me.", "response":"I solved it by doing the following steps: I updated the pyproject.toml (this file contains all the library\/dependency\/dev-dependency specifications) with the numpy version that I installed using the pip install numpy command. Run poetry lock to update the poetry.lock file (which contains detailed information about the libraries). Run poetry install again, and it should work fine. In a nutshell, you just have to install the correct version of numpy. Click me to check the compatibility, and then install the required version using pip install numpy==version. Example: To install NumPy version 1.23.5, use the following: pip install numpy==1.23.5 If you are having any problems, you can comment. I'll try to answer it.", "best_answers_score":0.8, "library_name":"numpy", "question_url":"https:\/\/stackoverflow.com\/questions\/70565965\/error-failed-building-wheel-for-numpy-error-could-not-build-wheels-for-numpy", "best_answers_votes":14, "question_length":828, "response_length":681 }, { "question":"Convert numpy type to python I have a list of dicts in the following form that I generate from pandas. I want to convert it to a JSON format. ``` list_val = [{1.0: 685}, {2.0: 8}] output = json.dumps(list_val) ``` However, json.dumps throws an error: TypeError: 685 is not JSON serializable I am guessing it's a type conversion issue from numpy to python(?). However, when I convert the values v of each dict in the array using np.int32(v) it still throws the error. 
Here's the full code: ```py new = df[df[label] == label_new] ks_dict = json.loads(content) ks_list = ks_dict['variables'] freq_counts = [] for ks_var in ks_list: freq_var = dict() freq_var[\"name\"] = ks_var[\"name\"] ks_series = new[ks_var[\"name\"]] temp_df = ks_series.value_counts().to_dict() freq_var[\"new\"] = [{u: np.int32(v)} for (u, v) in temp_df.iteritems()] freq_counts.append(freq_var) out = json.dumps(freq_counts) ```", "response":"It looks like you're correct: ``` >>> import numpy >>> import json >>> json.dumps(numpy.int32(685)) Traceback (most recent call last): File \"<stdin>\", line 1, in <module> File \"\/usr\/lib\/python2.7\/json\/__init__.py\", line 243, in dumps return _default_encoder.encode(obj) File \"\/usr\/lib\/python2.7\/json\/encoder.py\", line 207, in encode chunks = self.iterencode(o, _one_shot=True) File \"\/usr\/lib\/python2.7\/json\/encoder.py\", line 270, in iterencode return _iterencode(o, 0) File \"\/usr\/lib\/python2.7\/json\/encoder.py\", line 184, in default raise TypeError(repr(o) + \" is not JSON serializable\") TypeError: 685 is not JSON serializable ``` The unfortunate thing here is that numpy numbers' __repr__ doesn't give you any hint about what type they are. They're running around masquerading as ints when they aren't (gasp). Ultimately, it looks like json is telling you that an int isn't serializable, but really, it's telling you that this particular np.int32 (or whatever type you actually have) isn't serializable. (No real surprise there -- no np.int32 is serializable.) This is also why the dict that you inevitably printed before passing it to json.dumps looks like it just has integers in it as well. 
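A quick way to see the masquerade for yourself (a small sketch; np.int32 is just one of the scalar types pandas may hand back):

```python
import json
import numpy as np

v = np.int32(685)
print(v)        # prints 685, indistinguishable from a plain int
print(type(v))  # but the type is numpy.int32, not int
try:
    json.dumps(v)
except TypeError as e:
    print('json refuses it:', e)
```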
The easiest workaround here is probably to write your own serializer1: ``` class MyEncoder(json.JSONEncoder): def default(self, obj): if isinstance(obj, numpy.integer): return int(obj) elif isinstance(obj, numpy.floating): return float(obj) elif isinstance(obj, numpy.ndarray): return obj.tolist() else: return super(MyEncoder, self).default(obj) ``` You use it like this: ``` json.dumps(numpy.float32(1.2), cls=MyEncoder) json.dumps(numpy.arange(12), cls=MyEncoder) json.dumps({'a': numpy.int32(42)}, cls=MyEncoder) ``` etc. 1Or you could just write the default function and pass that as the default keyword argument to json.dumps. In this scenario, you'd replace the last line with raise TypeError, but ... meh. The class is more extensible :-)", "best_answers_score":0.8, "library_name":"numpy", "question_url":"https:\/\/stackoverflow.com\/questions\/27050108\/convert-numpy-type-to-python", "best_answers_votes":129, "question_length":891, "response_length":1926 }, { "question":"How to plot vectors in python using matplotlib I am taking a course on linear algebra and I want to visualize the vectors in action, such as vector addition, normal vectors, and so on. For instance: ``` V = np.array([[1,1],[-2,2],[4,-7]]) ``` In this case I want to plot 3 vectors V1 = (1,1), M2 = (-2,2), M3 = (4,-7). Then I should be able to add V1,V2 to plot a new vector V12 (all together in one figure). 
When I use the following code, the plot is not as intended: ``` import numpy as np import matplotlib.pyplot as plt M = np.array([[1,1],[-2,2],[4,-7]]) print(\"vector:1\") print(M[0,:]) # print(\"vector:2\") # print(M[1,:]) rows,cols = M.T.shape print(cols) for i,l in enumerate(range(0,cols)): print(\"Iteration: {}-{}\".format(i,l)) print(\"vector:{}\".format(i)) print(M[i,:]) v1 = [0,0],[M[i,0],M[i,1]] # v1 = [M[i,0]],[M[i,1]] print(v1) plt.figure(i) plt.plot(v1) plt.show() ```", "response":"How about something like ``` import numpy as np import matplotlib.pyplot as plt V = np.array([[1,1], [-2,2], [4,-7]]) origin = np.array([[0, 0, 0],[0, 0, 0]]) # origin point plt.quiver(*origin, V[:,0], V[:,1], color=['r','b','g'], scale=21) plt.show() ``` Then to add up any two vectors and plot them to the same figure, do so before you call plt.show(). Something like: ``` plt.quiver(*origin, V[:,0], V[:,1], color=['r','b','g'], scale=21) v12 = V[0] + V[1] # adding up the 1st (red) and 2nd (blue) vectors plt.quiver(*origin, v12[0], v12[1], scale=21) plt.show() ``` NOTE: in Python2 use origin[0], origin[1] instead of *origin", "best_answers_score":0.8, "library_name":"numpy", "question_url":"https:\/\/stackoverflow.com\/questions\/42281966\/how-to-plot-vectors-in-python-using-matplotlib", "best_answers_votes":77, "question_length":876, "response_length":630 }, { "question":"Randomly select from numpy array I have two related numpy arrays, X and y. I need to select n random rows from X and store them in an array, along with the corresponding y values, and append the indices of the randomly selected points to index. I have another array index which stores a list of indices which I don't want to sample. How can I do this? 
Sample data: ``` index = [2,3] X = np.array([[0.3,0.7],[0.5,0.5] ,[0.2,0.8], [0.1,0.9]]) y = np.array([[0], [1], [0], [1]]) ``` If these X's were randomly selected (where n=2): ``` randomlySelected = np.array([[0.3,0.7],[0.5,0.5]]) ``` the desired output would be: ``` index = [0,1,2,3] randomlySelectedY = [0,1] ``` How can I do this?", "response":"You can create random indices with np.random.choice: ``` n = 2 # for 2 random indices index = np.random.choice(X.shape[0], n, replace=False) ``` Then you just need to index your arrays with the result: ``` x_random = X[index] y_random = y[index] ```", "best_answers_score":0.8, "library_name":"numpy", "question_url":"https:\/\/stackoverflow.com\/questions\/43506766\/randomly-select-from-numpy-array", "best_answers_votes":87, "question_length":671, "response_length":249 }, { "question":"Python finite difference functions? I've been looking around in Numpy\/Scipy for modules containing finite difference functions. However, the closest thing I've found is numpy.gradient(), which is good for 1st-order finite differences of 2nd order accuracy, but not so much if you're wanting higher-order derivatives or more accurate methods. I haven't even found very many specific modules for this sort of thing; most people seem to be doing a \"roll-your-own\" thing as they need them. So my question is if anyone knows of any modules (either part of Numpy\/Scipy or a third-party module) that are specifically dedicated to higher-order (both in accuracy and derivative) finite difference methods. I've got my own code that I'm working on, but it's currently kind of slow, and I'm not going to attempt to optimize it if there's something already available. Note that I am talking about finite differences, not derivatives. I've seen both scipy.misc.derivative() and Numdifftools, which take the derivative of an analytical function, which I don't have. 
The simple case is a convolution of your array with [-1, 1] which gives exactly the simple finite difference formula. Beyond that, (f*g)'= f'*g = f*g' where the * is convolution, so you end up with your derivative convolved with a plain gaussian, so of course this will smooth your data a bit, which can be minimized by choosing the smallest reasonable kernel. ``` import numpy as np from scipy import ndimage import matplotlib.pyplot as plt #Data: x = np.linspace(0,2*np.pi,100) f = np.sin(x) + .02*(np.random.rand(100)-.5) #Normalization: dx = x[1] - x[0] # use np.diff(x) if x is not uniform dxdx = dx**2 #First derivatives: df = np.diff(f) \/ dx cf = np.convolve(f, [1,-1]) \/ dx gf = ndimage.gaussian_filter1d(f, sigma=1, order=1, mode='wrap') \/ dx #Second derivatives: ddf = np.diff(f, 2) \/ dxdx ccf = np.convolve(f, [1, -2, 1]) \/ dxdx ggf = ndimage.gaussian_filter1d(f, sigma=1, order=2, mode='wrap') \/ dxdx #Plotting: plt.figure() plt.plot(x, f, 'k', lw=2, label='original') plt.plot(x[:-1], df, 'r.', label='np.diff, 1') plt.plot(x, cf[:-1], 'r--', label='np.convolve, [1,-1]') plt.plot(x, gf, 'r', label='gaussian, 1') plt.plot(x[:-2], ddf, 'g.', label='np.diff, 2') plt.plot(x, ccf[:-2], 'g--', label='np.convolve, [1,-2,1]') plt.plot(x, ggf, 'g', label='gaussian, 2') ``` Since you mentioned np.gradient I assumed you had at least 2d arrays, so the following applies to that: This is built into the scipy.ndimage package if you want to do it for ndarrays. Be cautious though, because of course this doesn't give you the full gradient but I believe the product of all directions. Someone with better expertise will hopefully speak up. 
Here's an example: ``` from scipy import ndimage x = np.linspace(0,2*np.pi,100) sine = np.sin(x) im = sine * sine[...,None] d1 = ndimage.gaussian_filter(im, sigma=5, order=1, mode='wrap') d2 = ndimage.gaussian_filter(im, sigma=5, order=2, mode='wrap') plt.figure() plt.subplot(131) plt.imshow(im) plt.title('original') plt.subplot(132) plt.imshow(d1) plt.title('first derivative') plt.subplot(133) plt.imshow(d2) plt.title('second derivative') ``` Use of the gaussian_filter1d allows you to take a directional derivative along a certain axis: ``` imx = im * x d2_0 = ndimage.gaussian_filter1d(imx, axis=0, sigma=5, order=2, mode='wrap') d2_1 = ndimage.gaussian_filter1d(imx, axis=1, sigma=5, order=2, mode='wrap') plt.figure() plt.subplot(131) plt.imshow(imx) plt.title('original') plt.subplot(132) plt.imshow(d2_0) plt.title('derivative along axis 0') plt.subplot(133) plt.imshow(d2_1) plt.title('along axis 1') ``` The first set of results above is a little confusing to me (peaks in the original show up as peaks in the second derivative when the curvature should point down). Without looking further into how the 2d version works, I can only really recommend the 1d version. If you want the magnitude, simply do something like: ``` d2_mag = np.sqrt(d2_0**2 + d2_1**2) ```", "best_answers_score":0.8, "library_name":"numpy", "question_url":"https:\/\/stackoverflow.com\/questions\/18991408\/python-finite-difference-functions", "best_answers_votes":59, "question_length":1051, "response_length":3006 }, { "question":"How do I transform a \"SciPy sparse matrix\" to a \"NumPy matrix\"? I am using a python function called \"incidence_matrix(G)\", which returns the incidence matrix of a graph. It is from the NetworkX package. The problem that I am facing is that the return type of this function is \"Scipy Sparse Matrix\". I need to have the incidence matrix in the format of a numpy matrix or array. I was wondering if there is any easy way of doing that or not? 
Or is there any built-in function that can do this transformation for me or not? Thanks", "response":"The scipy.sparse.*_matrix has several useful methods, for example, if a is e.g. scipy.sparse.csr_matrix: a.toarray() - Return a dense ndarray representation of this matrix. (numpy.array, recommended) a.todense() - Return a dense matrix representation of this matrix. (numpy.matrix) Previously, these methods had shorthands (.A for .toarray(), and .M for .todense()), but these have been or will be deprecated as of Scipy v1.14.0.", "best_answers_score":0.8, "library_name":"numpy", "question_url":"https:\/\/stackoverflow.com\/questions\/26576524\/how-do-i-transform-a-scipy-sparse-matrix-to-a-numpy-matrix", "best_answers_votes":80, "question_length":512, "response_length":429 }, { "question":"numpy.savetxt without hash mark at beginning of header line When I try to save a matrix with a header, a hash mark and a space (# ) appear on the first line: input: ``` np.savetxt(filename,data, fmt='%i %i %i %i %s',delimiter='\\t',header=\"a\\tb\\tc\\td\\te\") ``` output: ``` # a b c d e 0 0 0 0 bla 0 0 0 0 bla 1 1 1 1 bla 1 1 1 1 bla ``` Any hint why? How could I remove it?", "response":"It inserts the # because that line is a comment, and the default character for comments is the symbol #, as you can read in the documentation here. If you want to get rid of it, pass comments='' as an option to savetxt.", "best_answers_score":0.8, "library_name":"numpy", "question_url":"https:\/\/stackoverflow.com\/questions\/17352244\/numpy-savetxt-without-hash-mark-at-beginning-of-header-line", "best_answers_votes":84, "question_length":369, "response_length":216 }, { "question":"Why is linear read-shuffled write not faster than shuffled read-linear write? I'm currently trying to get a better understanding of memory\/cache related performance issues. 
I read somewhere that memory locality is more important for reading than for writing, because in the former case the CPU has to actually wait for the data whereas in the latter case it can just ship them out and forget about them. With that in mind, I did the following quick-and-dirty test: I wrote a script that creates an array of N random floats and a permutation, i.e. an array containing the numbers 0 to N-1 in random order. Then it repeatedly either (1) reads the data array linearly and writes it back to a new array in the random access pattern given by the permutation or (2) reads the data array in the permuted order and linearly writes it to a new array. To my surprise (2) seemed consistently faster than (1). There were, however, problems with my script: The script is written in python\/numpy. This being quite a high-level language, it is not clear how precisely the reads\/writes are implemented. I probably did not balance the two cases properly. Also, some of the answers\/comments below suggest that my original expectation isn't correct and that depending on details of the cpu cache either case might be faster. My question is: Which (if any) of the two should be faster? What are the relevant cache concepts here; how do they influence the result? A beginner-friendly explanation would be appreciated. Any supporting code should be in C \/ cython \/ numpy \/ numba or python. Optionally: Explain why the absolute durations are nonlinear in problem size (cf. timings below). Explain the behavior of my clearly inadequate python experiments. For reference, my platform is Linux-4.12.14-lp150.11-default-x86_64-with-glibc2.3.4. Python version is 3.6.5. 
Here is the code I wrote: ``` import numpy as np from timeit import timeit def setup(): global a, b, c a = np.random.permutation(N) b = np.random.random(N) c = np.empty_like(b) def fwd(): c = b[a] def inv(): c[a] = b N = 10_000 setup() timeit(fwd, number=100_000) # 1.4942631321027875 timeit(inv, number=100_000) # 2.531870319042355 N = 100_000 setup() timeit(fwd, number=10_000) # 2.4054739447310567 timeit(inv, number=10_000) # 3.2365565397776663 N = 1_000_000 setup() timeit(fwd, number=1_000) # 11.131387163884938 timeit(inv, number=1_000) # 14.19817715883255 ``` As pointed out by @Trilarion and @Yann Vernier my snippets aren't properly balanced, so I replaced them with ``` def fwd(): c[d] = b[a] b[d] = c[a] def inv(): c[a] = b[d] b[a] = c[d] ``` where d = np.arange(N) (I shuffle everything both ways to hopefully reduce across trial caching effects). I also replaced timeit with repeat and reduced the numbers of repeats by a factor of 10. Then I get ``` [0.6757169323973358, 0.6705542299896479, 0.6702114241197705] #fwd [0.8183442652225494, 0.8382121799513698, 0.8173762648366392] #inv [1.0969422250054777, 1.0725746559910476, 1.0892365919426084] #fwd [1.0284497970715165, 1.025063106790185, 1.0247828317806125] #inv [3.073981977067888, 3.077839042060077, 3.072118630632758] #fwd [3.2967213969677687, 3.2996009718626738, 3.2817375687882304] #inv ``` So there still seems to be a difference, but it is much more subtle and can now go either way depending on the problem size.", "response":"This is a complex problem closely related to architectural features of modern processors and your intuition that random read are slower than random writes because the CPU has to wait for the read data is not verified (most of the time). There are several reasons for that I will detail. 
Modern processors are very efficient at hiding read latency, while memory writes are more expensive than memory reads, especially in a multicore environment. Reason #1: Modern processors are efficient at hiding read latency. Modern superscalar processors can execute several instructions simultaneously, and change instruction execution order (out-of-order execution). While the first reason for these features is to increase instruction throughput, one of the most interesting consequences is the ability of processors to hide the latency of memory writes (or of complex operators, branches, etc). To explain that, let us consider a simple code that copies an array into another one. ``` for i in range(N): c[i] = b[i] ``` Once compiled, the code executed by the processor will look somewhat like this: ``` #1. (iteration 1) c[0] = b[0] 1a. read memory at b[0] and store result in register c0 1b. write register c0 at memory address c[0] #2. (iteration 2) c[1] = b[1] 2a. read memory at b[1] and store result in register c1 2b. write register c1 at memory address c[1] #3. (iteration 3) c[2] = b[2] 3a. read memory at b[2] and store result in register c2 3b. write register c2 at memory address c[2] # etc ``` (this is terribly oversimplified and the actual code is more complex and has to deal with loop management, address computation, etc, but this simplistic model is presently sufficient). As said in the question, for reads, the processor has to wait for the actual data. Indeed, 1b needs the data fetched by 1a and cannot execute as long as 1a is not completed. Such a constraint is called a dependency, and we can say that 1b is dependent on 1a. Dependencies are a major notion in modern processors. Dependencies express the algorithm (e.g. I write b to c) and must absolutely be respected. But, if there is no dependency between instructions, processors will try to execute other pending instructions in order to keep their operative pipeline always active. 
This can lead to out-of-order execution, as long as dependencies are respected (similar to the as-if rule). For the considered code, there is no dependency between high-level instructions 2. and 1. (or between asm instructions 2a and 2b and previous instructions). Actually the final result would even be identical if 2. were executed before 1., and the processor will try to execute 2a and 2b before completion of 1a and 1b. There is still a dependency between 2a and 2b, but both can be issued. And similarly for 3a. and 3b., and so on. This is a powerful means to hide memory latency. If for some reason 2., 3. and 4. can terminate before 1. loads its data, you may not even notice any slowdown at all. This instruction-level parallelism is managed by a set of \"queues\" in the processor: a queue of pending instructions in the reservation stations RS (typ. 128 μinstructions in recent Pentiums). As soon as the resources required by the instruction are available (for instance the value of register c1 for instruction 1b), the instruction can execute. a queue of pending memory accesses in the memory order buffer MOB before the L1 cache. This is required to deal with memory aliases and to ensure sequentiality in memory writes or loads at the same address (typ. 64 loads, 32 stores). a queue to enforce sequentiality when writing back results in registers (reorder buffer or ROB of 168 entries) for similar reasons. and some other queues at instruction fetch, for μops generation, write and miss buffers in the cache, etc. At some point in the execution of the previous program, there will be many pending store instructions in the RS, several loads in the MOB and instructions waiting to retire in the ROB. As soon as data becomes available (for instance a read terminates) dependent instructions can execute and that frees positions in the queues. 
But if no termination occurs, and one of these queues is full, the functional unit associated with this queue stalls (this can also happen at instruction issue if the processor is missing register names). Stalls are what create performance loss, and to avoid them, queue filling must be limited. This explains the difference between linear and random memory accesses. In a linear access, 1\/ the number of misses will be smaller because of the better spatial locality and because caches can prefetch accesses with a regular pattern to reduce it further and 2\/ whenever a read terminates, it will concern a complete cache line and can free several pending load instructions, limiting the filling of instruction queues. This way the processor is permanently busy and memory latency is hidden. For a random access, the number of misses will be higher, and only a single load can be served when data arrives. Hence instruction queues will saturate rapidly, the processor stalls and memory latency can no longer be hidden by executing other instructions. The processor architecture must be balanced in terms of throughput in order to avoid queue saturation and stalls. Indeed there are generally tens of instructions at some stage of execution in a processor, and global throughput (i.e. the ability to serve instruction requests by the memory (or functional units)) is the main factor that will determine performance. The fact that some of these pending instructions are waiting for a memory value has a minor effect... ...except if you have long dependency chains. There is a dependency when an instruction has to wait for the completion of a previous one. Using the result of a read is a dependency. And dependencies can be a problem when involved in a dependency chain. For instance, consider the code for i in range(1,100000): s += a[i]. All the memory reads are independent, but there is a dependency chain for the accumulation in s. No addition can happen until the previous one has terminated. 
These dependencies will rapidly fill the reservation stations and create stalls in the pipeline. But reads are rarely involved in dependency chains. It is still possible to imagine pathological code where all reads are dependent on the previous one (for instance for i in range(1,100000): s = a[s]), but they are uncommon in real code. And the problem comes from the dependency chain, not from the fact that it is a read; the situation would be similar (and even probably worse) with compute-bound dependent code like for i in range(1,100000): x = 1.0\/x+1.0. Hence, except in some situations, computation time is more related to throughput than to read dependency, thanks to the fact that superscalar out-of-order execution hides latency. And as far as throughput is concerned, writes are worse than reads. Reason #2: Memory writes (especially random ones) are more expensive than memory reads. This is related to the way caches behave. Caches are fast memories that store parts of the memory (called lines) close to the processor. Cache lines are presently 64 bytes and make it possible to exploit the spatial locality of memory references: once a line is stored, all data in the line are immediately available. The important aspect here is that all transfers between the cache and the memory are lines. When a processor performs a read on some data, the cache checks if the line to which the data belongs is in the cache. If not, the line is fetched from memory, stored in the cache and the desired data is sent back to the processor. When a processor writes data to memory, the cache also checks for the line's presence. If the line is not present, the cache cannot send its data to memory (because all transfers are line based) and performs the following steps: the cache fetches the line from memory and writes it into the cache line. 
The data is written in the cache and the complete line is marked as modified (dirty). When a line is evicted from the cache, the modified flag is checked, and if the line has been modified, it is written back to memory (write-back cache). Hence, every memory write must be preceded by a memory read to get the line in the cache. This adds an extra operation, but is not very expensive for linear writes. There will be a cache miss and a memory read for the first written word, but successive writes will just concern the cache and be hits. But the situation is very different for random writes. If the number of misses is large, every cache miss implies a read followed by only a small number of writes before the line is ejected from the cache, which significantly increases write cost. If a line is ejected after a single write, we can even consider that a write is twice the temporal cost of a read. It is important to note that increasing the number of memory accesses (either reads or writes) tends to saturate the memory access path and to globally slow down all transfers between the processor and memory. In either case, writes are always more expensive than reads. And multicore processors amplify this aspect. Reason #3: Random writes create cache misses in multicores. Not sure this really applies to the situation of the question. While numpy BLAS routines are multithreaded, I do not think basic array copy is. But it is closely related and is another reason why writes are more expensive. The problem with multicores is to ensure proper cache coherence, in such a way that data shared by several cores is properly updated in the cache of every core. This is done by means of a protocol such as MESI that updates a cache line before writing it, and invalidates other cache copies (read for ownership). While none of the data is actually shared between cores in the question (or a parallel version of it), note that the protocol applies to cache lines. 
Whenever a cache line is to be modified, it is copied from the cache holding the most recent copy, locally updated, and all other copies are invalidated, even if cores are accessing different parts of the cache line. Such a situation is called false sharing and it is an important issue for multicore programming. Concerning the problem of random writes, cache lines are 64 bytes and can hold 8 int64, and if the computer has 8 cores, every core will process on average 2 values. Hence there is significant false sharing that will slow down writes. We did some performance evaluations. They were performed in C in order to include an evaluation of the impact of parallelization. We compared 5 functions that process int64 arrays of size N: 1. just a copy of b to c (c[i] = b[i]) (implemented by the compiler with memcpy()); 2. copy with a linear index c[i] = b[d[i]] where d[i]==i (read_linear); 3. copy with a random index c[i] = b[a[i]] where a is a random permutation of 0..N-1 (read_random is equivalent to fwd in the original question); 4. write linear c[d[i]] = b[i] where d[i]==i (write_linear); 5. write random c[a[i]] = b[i] with a random permutation of 0..N-1 (write_random is equivalent to inv in the question). Code has been compiled with gcc -O3 -funroll-loops -march=native -malign-double on a Skylake processor. Performance is measured with _rdtsc() and given in cycles per iteration. The functions are executed several times (1000-20000 depending on array size), 10 experiments are performed and the smallest time is kept. Array sizes range from 4000 to 1200000. All code has been measured with a sequential and a parallel version with OpenMP. Here is a graph of the results. Functions are shown in different colors, with the sequential version in thick lines and the parallel one in thin ones. Direct copy is (obviously) the fastest and is implemented by gcc with the highly optimized memcpy(). It gives an estimate of the data throughput with memory. 
It ranges from 0.8 cycles per iteration (CPI) for small matrices to 2.0 CPI for large ones. Read linear takes approximately twice as long as memcpy, but there are 2 reads and a write, vs 1 read and a write for the direct copy. Moreover, the index adds some dependency. Min value is 1.56 CPI and max value 3.8 CPI. Write linear is slightly longer (5-10%). Reads and writes with a random index are the purpose of the original question and deserve longer comments. Here are the results. ``` size 4000 6000 9000 13496 20240 30360 45536 68304 102456 153680 230520 345776 518664 777992 1166984 rd-rand 1.86821 2.52813 2.90533 3.50055 4.69627 5.10521 5.07396 5.57629 6.13607 7.02747 7.80836 10.9471 15.2258 18.5524 21.3811 wr-rand 7.07295 7.21101 7.92307 7.40394 8.92114 9.55323 9.14714 8.94196 8.94335 9.37448 9.60265 11.7665 15.8043 19.1617 22.6785 ``` large values (> 100k): the difference between methods is progressively reduced. For these sizes, a large part of information is stored in L3 cache. L3 size is sufficient to hold a full array of 1.5M and lines are less likely to be ejected. Hence, for writes, after the initial read, a larger number of writes can be done without line ejection, and the relative cost of writes vs reads is reduced. For these large sizes, there are also many other factors that need to be considered. For instance, caches can only serve a limited number of misses (typ. 16) and when the number of misses is large, this may be the limiting factor. One word on the parallel OpenMP version of random reads and writes. Except for small sizes, where having the random index array spread over several caches may not be an advantage, they are systematically ~ twice as fast. For large sizes, we clearly see that the gap between random reads and writes increases due to false sharing.
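The read_random / write_random contrast can also be reproduced from Python with numpy fancy indexing. This is only a rough sketch (the array size and repeat count here are arbitrary, and Python-level overhead blurs the picture compared to the C benchmark):

```python
import numpy as np
from timeit import timeit

N = 1_000_000
b = np.random.rand(N)
a = np.random.permutation(N)   # random index array, as in the benchmark

def read_random():
    # c[i] = b[a[i]] : shuffled read, linear write (fwd)
    return b[a]

def write_random():
    # c[a[i]] = b[i] : linear read, shuffled write (inv)
    c = np.empty_like(b)
    c[a] = b
    return c

print('read_random :', timeit(read_random, number=10))
print('write_random:', timeit(write_random, number=10))
```

On most machines write_random comes out slower, consistent with the write-cost arguments above, but the exact ratio depends on cache sizes and the numpy build.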
It is almost impossible to do quantitative predictions with the complexity of present computer architectures, even for simple code, and even qualitative explanations of the behaviour are difficult and must take into account many factors. As mentioned in other answers, software aspects related to python can also have an impact. But, while it may happen in some situations, most of the time, one cannot consider that reads are more expensive because of data dependency.", "best_answers_score":0.8, "library_name":"numpy", "question_url":"https:\/\/stackoverflow.com\/questions\/54782033\/why-is-linear-read-shuffled-write-not-faster-than-shuffled-read-linear-write", "best_answers_votes":22, "question_length":3320, "response_length":14024 }, { "question":"Correct way to obtain confidence interval with scipy I have a 1-dimensional array of data: ``` a = np.array([1,2,3,4,4,4,5,5,5,5,4,4,4,6,7,8]) ``` for which I want to obtain the 68% confidence interval (ie: the 1 sigma). The first comment in this answer states that this can be achieved using scipy.stats.norm.interval from the scipy.stats.norm function, via: ``` from scipy import stats import numpy as np mean, sigma = np.mean(a), np.std(a) conf_int = stats.norm.interval(0.68, loc=mean, scale=sigma) ``` But a comment in this post states that the actual correct way of obtaining the confidence interval is: ``` conf_int = stats.norm.interval(0.68, loc=mean, scale=sigma \/ np.sqrt(len(a))) ``` that is, sigma is divided by the square-root of the sample size: np.sqrt(len(a)). 
The question is: which version is the correct one?", "response":"The 68% confidence interval for a single draw from a normal distribution with mean mu and std deviation sigma is ``` stats.norm.interval(0.68, loc=mu, scale=sigma) ``` The 68% confidence interval for the mean of N draws from a normal distribution with mean mu and std deviation sigma is ``` stats.norm.interval(0.68, loc=mu, scale=sigma\/sqrt(N)) ``` Intuitively, these formulas make sense, since if you hold up a jar of jelly beans and ask a large number of people to guess the number of jelly beans, each individual may be off by a lot -- the same std deviation sigma -- but the average of the guesses will do a remarkably fine job of estimating the actual number and this is reflected by the standard deviation of the mean shrinking by a factor of 1\/sqrt(N). If a single draw has variance sigma**2, then by the Bienaym\u00e9 formula, the sum of N uncorrelated draws has variance N*sigma**2. The mean is equal to the sum divided by N. When you multiply a random variable (like the sum) by a constant, the variance is multiplied by the constant squared. That is ``` Var(cX) = c**2 * Var(X) ``` So the variance of the mean equals ``` (variance of the sum)\/N**2 = N * sigma**2 \/ N**2 = sigma**2 \/ N ``` and so the standard deviation of the mean (which is the square root of the variance) equals ``` sigma\/sqrt(N). ``` This is the origin of the sqrt(N) in the denominator. 
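As a side note (not part of the answer above): when mu and sigma must themselves be estimated from a small sample, the normal interval is often replaced by a Student's t interval built on the standard error of the mean. A sketch using scipy's stats.sem and stats.t.interval:

```python
import numpy as np
from scipy import stats

a = np.array([1, 2, 3, 4, 4, 4, 5, 5, 5, 5, 4, 4, 4, 6, 7, 8])

mean = a.mean()
sem = stats.sem(a)  # sample std (ddof=1) divided by sqrt(len(a))

# 68% interval for the *mean*, with len(a) - 1 degrees of freedom
ci = stats.t.interval(0.68, df=len(a) - 1, loc=mean, scale=sem)
print(ci)
```

For large samples the t interval converges to the normal interval with scale sigma / sqrt(N) discussed below.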
Here is some example code, based on Tom's code, which demonstrates the claims made above: ``` import numpy as np from scipy import stats N = 10000 a = np.random.normal(0, 1, N) mean, sigma = a.mean(), a.std(ddof=1) conf_int_a = stats.norm.interval(0.68, loc=mean, scale=sigma) print('{:0.2%} of the single draws are in conf_int_a' .format(((a >= conf_int_a[0]) & (a < conf_int_a[1])).sum() \/ float(N))) M = 1000 b = np.random.normal(0, 1, (N, M)).mean(axis=1) conf_int_b = stats.norm.interval(0.68, loc=0, scale=1 \/ np.sqrt(M)) print('{:0.2%} of the means are in conf_int_b' .format(((b >= conf_int_b[0]) & (b < conf_int_b[1])).sum() \/ float(N))) ``` prints ``` 68.03% of the single draws are in conf_int_a 67.78% of the means are in conf_int_b ``` Beware that if you define conf_int_b with the estimates for mean and sigma based on the sample a, the mean may not fall in conf_int_b with the desired frequency. If you take a sample from a distribution and compute the sample mean and std deviation, ``` mean, sigma = a.mean(), a.std() ``` be careful to note that there is no guarantee that these will equal the population mean and standard deviation and that we are assuming the population is normally distributed -- those are not automatic givens! If you take a sample and want to estimate the population mean and standard deviation, you should use ``` mean, sigma = a.mean(), a.std(ddof=1) ``` since this value for sigma is the unbiased estimator for the population standard deviation.", "best_answers_score":0.8, "library_name":"numpy", "question_url":"https:\/\/stackoverflow.com\/questions\/28242593\/correct-way-to-obtain-confidence-interval-with-scipy", "best_answers_votes":94, "question_length":828, "response_length":2630 }, { "question":"Understanding PyTorch einsum I'm familiar with how einsum works in NumPy. A similar functionality is also offered by PyTorch: torch.einsum(). What are the similarities and differences, either in terms of functionality or performance? 
The information available at PyTorch documentation is rather scanty and doesn't provide any insights regarding this.", "response":"Since the description of einsum is skimpy in torch documentation, I decided to write this post to document, compare and contrast how torch.einsum() behaves when compared to numpy.einsum(). Differences: NumPy allows both small case and capitalized letters [a-zA-Z] for the \"subscript string\" whereas PyTorch allows only the small case letters [a-z]. NumPy accepts nd-arrays, plain Python lists (or tuples), list of lists (or tuple of tuples, list of tuples, tuple of lists) or even PyTorch tensors as operands (i.e. inputs). This is because the operands have only to be array_like and not strictly NumPy nd-arrays. On the contrary, PyTorch expects the operands (i.e. inputs) strictly to be PyTorch tensors. It will throw a TypeError if you pass either plain Python lists\/tuples (or its combinations) or NumPy nd-arrays. NumPy supports lot of keyword arguments (for e.g. optimize) in addition to nd-arrays while PyTorch doesn't offer such flexibility yet. Here are the implementations of some examples both in PyTorch and NumPy: ``` # input tensors to work with In [16]: vec Out[16]: tensor([0, 1, 2, 3]) In [17]: aten Out[17]: tensor([[11, 12, 13, 14], [21, 22, 23, 24], [31, 32, 33, 34], [41, 42, 43, 44]]) In [18]: bten Out[18]: tensor([[1, 1, 1, 1], [2, 2, 2, 2], [3, 3, 3, 3], [4, 4, 4, 4]]) ``` 1) Matrix multiplication PyTorch: torch.matmul(aten, bten) ; aten.mm(bten) NumPy : np.einsum(\"ij, jk -> ik\", arr1, arr2) ``` In [19]: torch.einsum('ij, jk -> ik', aten, bten) Out[19]: tensor([[130, 130, 130, 130], [230, 230, 230, 230], [330, 330, 330, 330], [430, 430, 430, 430]]) ``` 2) Extract elements along the main-diagonal PyTorch: torch.diag(aten) NumPy : np.einsum(\"ii -> i\", arr) ``` In [28]: torch.einsum('ii -> i', aten) Out[28]: tensor([11, 22, 33, 44]) ``` 3) Hadamard product (i.e. 
element-wise product of two tensors) PyTorch: aten * bten NumPy : np.einsum(\"ij, ij -> ij\", arr1, arr2) ``` In [34]: torch.einsum('ij, ij -> ij', aten, bten) Out[34]: tensor([[ 11, 12, 13, 14], [ 42, 44, 46, 48], [ 93, 96, 99, 102], [164, 168, 172, 176]]) ``` 4) Element-wise squaring PyTorch: aten ** 2 NumPy : np.einsum(\"ij, ij -> ij\", arr, arr) ``` In [37]: torch.einsum('ij, ij -> ij', aten, aten) Out[37]: tensor([[ 121, 144, 169, 196], [ 441, 484, 529, 576], [ 961, 1024, 1089, 1156], [1681, 1764, 1849, 1936]]) ``` General: Element-wise nth power can be implemented by repeating the subscript string and tensor n times. For e.g., computing element-wise 4th power of a tensor can be done using: ``` # NumPy: np.einsum('ij, ij, ij, ij -> ij', arr, arr, arr, arr) In [38]: torch.einsum('ij, ij, ij, ij -> ij', aten, aten, aten, aten) Out[38]: tensor([[ 14641, 20736, 28561, 38416], [ 194481, 234256, 279841, 331776], [ 923521, 1048576, 1185921, 1336336], [2825761, 3111696, 3418801, 3748096]]) ``` 5) Trace (i.e. 
sum of main-diagonal elements) PyTorch: torch.trace(aten) NumPy einsum: np.einsum(\"ii -> \", arr) ``` In [44]: torch.einsum('ii -> ', aten) Out[44]: tensor(110) ``` 6) Matrix transpose PyTorch: torch.transpose(aten, 1, 0) NumPy einsum: np.einsum(\"ij -> ji\", arr) ``` In [58]: torch.einsum('ij -> ji', aten) Out[58]: tensor([[11, 21, 31, 41], [12, 22, 32, 42], [13, 23, 33, 43], [14, 24, 34, 44]]) ``` 7) Outer Product (of vectors) PyTorch: torch.ger(vec, vec) NumPy einsum: np.einsum(\"i, j -> ij\", vec, vec) ``` In [73]: torch.einsum('i, j -> ij', vec, vec) Out[73]: tensor([[0, 0, 0, 0], [0, 1, 2, 3], [0, 2, 4, 6], [0, 3, 6, 9]]) ``` 8) Inner Product (of vectors) PyTorch: torch.dot(vec1, vec2) NumPy einsum: np.einsum(\"i, i -> \", vec1, vec2) ``` In [76]: torch.einsum('i, i -> ', vec, vec) Out[76]: tensor(14) ``` 9) Sum along axis 0 PyTorch: torch.sum(aten, 0) NumPy einsum: np.einsum(\"ij -> j\", arr) ``` In [85]: torch.einsum('ij -> j', aten) Out[85]: tensor([104, 108, 112, 116]) ``` 10) Sum along axis 1 PyTorch: torch.sum(aten, 1) NumPy einsum: np.einsum(\"ij -> i\", arr) ``` In [86]: torch.einsum('ij -> i', aten) Out[86]: tensor([ 50, 90, 130, 170]) ``` 11) Batch Matrix Multiplication PyTorch: torch.bmm(batch_tensor_1, batch_tensor_2) NumPy : np.einsum(\"bij, bjk -> bik\", batch_tensor_1, batch_tensor_2) ``` # input batch tensors to work with In [13]: batch_tensor_1 = torch.arange(2 * 4 * 3).reshape(2, 4, 3) In [14]: batch_tensor_2 = torch.arange(2 * 3 * 4).reshape(2, 3, 4) In [15]: torch.bmm(batch_tensor_1, batch_tensor_2) Out[15]: tensor([[[ 20, 23, 26, 29], [ 56, 68, 80, 92], [ 92, 113, 134, 155], [ 128, 158, 188, 218]], [[ 632, 671, 710, 749], [ 776, 824, 872, 920], [ 920, 977, 1034, 1091], [1064, 1130, 1196, 1262]]]) # sanity check with the shapes In [16]: torch.bmm(batch_tensor_1, batch_tensor_2).shape Out[16]: torch.Size([2, 4, 4]) # batch matrix multiply using einsum In [17]: torch.einsum(\"bij, bjk -> bik\", batch_tensor_1, batch_tensor_2) Out[17]: 
tensor([[[ 20, 23, 26, 29], [ 56, 68, 80, 92], [ 92, 113, 134, 155], [ 128, 158, 188, 218]], [[ 632, 671, 710, 749], [ 776, 824, 872, 920], [ 920, 977, 1034, 1091], [1064, 1130, 1196, 1262]]]) # sanity check with the shapes In [18]: torch.einsum(\"bij, bjk -> bik\", batch_tensor_1, batch_tensor_2).shape ``` 12) Sum along axis 2 PyTorch: torch.sum(batch_ten, 2) NumPy einsum: np.einsum(\"ijk -> ij\", arr3D) ``` In [99]: torch.einsum(\"ijk -> ij\", batch_ten) Out[99]: tensor([[ 50, 90, 130, 170], [ 4, 8, 12, 16]]) ``` 13) Sum all the elements in an nD tensor PyTorch: torch.sum(batch_ten) NumPy einsum: np.einsum(\"ijk -> \", arr3D) ``` In [101]: torch.einsum(\"ijk -> \", batch_ten) Out[101]: tensor(480) ``` 14) Sum over multiple axes (i.e. marginalization) PyTorch: torch.sum(arr, dim=(dim0, dim1, dim2, dim3, dim4, dim6, dim7)) NumPy: np.einsum(\"ijklmnop -> n\", nDarr) ``` # 8D tensor In [103]: nDten = torch.randn((3,5,4,6,8,2,7,9)) In [104]: nDten.shape Out[104]: torch.Size([3, 5, 4, 6, 8, 2, 7, 9]) # marginalize out dimension 5 (i.e. \"n\" here) In [111]: esum = torch.einsum(\"ijklmnop -> n\", nDten) In [112]: esum Out[112]: tensor([ 98.6921, -206.0575]) # marginalize out axis 5 (i.e. sum over rest of the axes) In [113]: tsum = torch.sum(nDten, dim=(0, 1, 2, 3, 4, 6, 7)) In [115]: torch.allclose(tsum, esum) Out[115]: True ``` 15) Double Dot Products \/ Frobenius inner product (same as: torch.sum(hadamard-product) cf. 
3) PyTorch: torch.sum(aten * bten) NumPy : np.einsum(\"ij, ij -> \", arr1, arr2) ``` In [120]: torch.einsum(\"ij, ij -> \", aten, bten) Out[120]: tensor(1300) ```", "best_answers_score":0.8, "library_name":"numpy", "question_url":"https:\/\/stackoverflow.com\/questions\/55894693\/understanding-pytorch-einsum", "best_answers_votes":114, "question_length":350, "response_length":6370 }, { "question":"Standard deviation in numpy [duplicate] This question already has answers here: Python: Numpy standard deviation error (3 answers) Closed 9 years ago. Here is my code: ``` import numpy as np print(np.std(np.array([0,1]))) ``` It produces 0.5. I am confident that this is incorrect. 
What am I doing wrong?", "response":"By default, numpy.std returns the population standard deviation, in which case np.std([0,1]) is correctly reported to be 0.5. If you are looking for the sample standard deviation, you can supply an optional ddof parameter to std(): ``` >>> np.std([0, 1], ddof=1) 0.70710678118654757 ``` ddof modifies the divisor of the sum of the squares of the samples-minus-mean. The divisor is N - ddof, where the default ddof is 0 as you can see from your result.", "best_answers_score":0.8, "library_name":"numpy", "question_url":"https:\/\/stackoverflow.com\/questions\/34050491\/standard-deviation-in-numpy", "best_answers_votes":105, "question_length":303, "response_length":451 }, { "question":"python numpy roll with padding I'd like to roll a 2D numpy array, except that I'd like to fill the ends with zeros rather than roll the data as if it were periodic. The following ``` import numpy as np x = np.array([[1, 2, 3], [4, 5, 6]]) np.roll(x, 1, axis=1) ``` returns ``` array([[3, 1, 2], [6, 4, 5]]) ``` but what I would prefer is ``` array([[0, 1, 2], [0, 4, 5]]) ```", "response":"numpy.pad can be used to create an array with surrounding zeros. Pad seems to be quite powerful and can do much more than a simple \"roll\". The tuple ((0,0),(1,0)) used in this answer indicates the \"side\" of the matrix which to pad. ``` import numpy as np x = np.array([[1, 2, 3],[4, 5, 6]]) print np.pad(x,((0,0),(1,0)), mode='constant')[:, :-1] ``` Giving ``` [[0 1 2] [0 4 5]] ```", "best_answers_score":0.8, "library_name":"numpy", "question_url":"https:\/\/stackoverflow.com\/questions\/2777907\/python-numpy-roll-with-padding", "best_answers_votes":56, "question_length":375, "response_length":382 }, { "question":"Inverse Distance Weighted (IDW) Interpolation with Python The Question: What is the best way to calculate inverse distance weighted (IDW) interpolation in Python, for point locations? 
Some Background: Currently I'm using RPy2 to interface with R and its gstat module. Unfortunately, the gstat module conflicts with arcgisscripting which I got around by running RPy2 based analysis in a separate process. Even if this issue is resolved in a recent\/future release, and efficiency can be improved, I'd still like to remove my dependency on installing R. The gstat website does provide a stand alone executable, which is easier to package with my python script, but I still hope for a Python solution which doesn't require multiple writes to disk and launching external processes. The number of calls to the interpolation function, of separate sets of points and values, can approach 20,000 in the processing I'm performing. I specifically need to interpolate for points, so using the IDW function in ArcGIS to generate rasters sounds even worse than using R, in terms of performance.....unless there is a way to efficiently mask out only the points I need. Even with this modification, I wouldn't expect performance to be all that great. I will look into this option as another alternative. UPDATE: The problem here is you are tied to the cell size you are using. If you reduce the cell-size to get better accuracy, processing takes a long time. You also need to follow up by extracting by points.....over all an ugly method if you want values for specific points. I have looked at the scipy documentation, but it doesn't look like there is a straight forward way to calculate IDW. I'm thinking of rolling my own implementation, possibly using some of the scipy functionality to locate the closest points and calculate distances. Am I missing something obvious? Is there a python module I haven't seen that does exactly what I want? Is creating my own implementation with the aid of scipy a wise choice?", "response":"changed 20 Oct: this class Invdisttree combines inverse-distance weighting and scipy.spatial.KDTree. 
Forget the original brute-force answer; this is imho the method of choice for scattered-data interpolation. ``` \"\"\" invdisttree.py: inverse-distance-weighted interpolation using KDTree fast, solid, local \"\"\" from __future__ import division import numpy as np from scipy.spatial import cKDTree as KDTree # http:\/\/docs.scipy.org\/doc\/scipy\/reference\/spatial.html __date__ = \"2010-11-09 Nov\" # weights, doc #............................................................................... class Invdisttree: \"\"\" inverse-distance-weighted interpolation using KDTree: invdisttree = Invdisttree( X, z ) -- data points, values interpol = invdisttree( q, nnear=3, eps=0, p=1, weights=None, stat=0 ) interpolates z from the 3 points nearest each query point q; For example, interpol[ a query point q ] finds the 3 data points nearest q, at distances d1 d2 d3 and returns the IDW average of the values z1 z2 z3 (z1\/d1 + z2\/d2 + z3\/d3) \/ (1\/d1 + 1\/d2 + 1\/d3) = .55 z1 + .27 z2 + .18 z3 for distances 1 2 3 q may be one point, or a batch of points. eps: approximate nearest, dist = 0 w \/= np.sum(w) wz = np.dot( w, self.z[ix] ) if self.stat: self.wn += 1 self.wsum += w interpol[jinterpol] = wz jinterpol += 1 return interpol if qdim > 1 else interpol[0] #............................................................................... if __name__ == \"__main__\": import sys N = 10000 Ndim = 2 Nask = N # N Nask 1e5: 24 sec 2d, 27 sec 3d on mac g4 ppc Nnear = 8 # 8 2d, 11 3d => 5 % chance one-sided -- Wendel, mathoverflow.com leafsize = 10 eps = .1 # approximate nearest, dist <= (1 + eps) * true nearest p = 1 # weights ~ 1 \/ distance**p cycle = .25 seed = 1 exec \"\\n\".join( sys.argv[1:] ) # python this.py N= ... 
np.random.seed(seed ) np.set_printoptions( 3, threshold=100, suppress=True ) # .3f print \"\\nInvdisttree: N %d Ndim %d Nask %d Nnear %d leafsize %d eps %.2g p %.2g\" % ( N, Ndim, Nask, Nnear, leafsize, eps, p) def terrain(x): \"\"\" ~ rolling hills \"\"\" return np.sin( (2*np.pi \/ cycle) * np.mean( x, axis=-1 )) known = np.random.uniform( size=(N,Ndim) ) ** .5 # 1\/(p+1): density x^p z = terrain( known ) ask = np.random.uniform( size=(Nask,Ndim) ) #............................................................................... invdisttree = Invdisttree( known, z, leafsize=leafsize, stat=1 ) interpol = invdisttree( ask, nnear=Nnear, eps=eps, p=p ) print \"average distances to nearest points: %s\" % \\ np.mean( invdisttree.distances, axis=0 ) print \"average weights: %s\" % (invdisttree.wsum \/ invdisttree.wn) # see Wikipedia Zipf's law err = np.abs( terrain(ask) - interpol ) print \"average |terrain() - interpolated|: %.2g\" % np.mean(err) # print \"interpolate a single point: %.2g\" % \\ # invdisttree( known[0], nnear=Nnear, eps=eps ) ```", "best_answers_score":0.8, "library_name":"numpy", "question_url":"https:\/\/stackoverflow.com\/questions\/3104781\/inverse-distance-weighted-idw-interpolation-with-python", "best_answers_votes":44, "question_length":2000, "response_length":2837 }, { "question":"What might be the cause of 'invalid value encountered in less_equal' in numpy I experienced a RuntimeWarning ``` RuntimeWarning: invalid value encountered in less_equal ``` Generated by this line of code of mine: ``` center_dists[j] <= center_dists[i] ``` Both center_dists[j] and center_dists[i] are numpy arrays What might be the cause of this warning ?", "response":"That's most likely happening because of a np.nan somewhere in the inputs involved. 
An example of it is shown below - ``` In [1]: A = np.array([4, 2, 1]) In [2]: B = np.array([2, 2, np.nan]) In [3]: A<=B RuntimeWarning: invalid value encountered in less_equal Out[3]: array([False, True, False], dtype=bool) ``` For all those comparisons involving np.nan, it would output False. Let's confirm it for a broadcasted comparison. Here's a sample - ``` In [1]: A = np.array([4, 2, 1]) In [2]: B = np.array([2, 2, np.nan]) In [3]: A[:,None] <= B RuntimeWarning: invalid value encountered in less_equal Out[3]: array([[False, False, False], [ True, True, False], [ True, True, False]], dtype=bool) ``` Please notice the third column in the output which corresponds to the comparison involving third element np.nan in B and that results in all False values.", "best_answers_score":0.8, "library_name":"numpy", "question_url":"https:\/\/stackoverflow.com\/questions\/34955158\/what-might-be-the-cause-of-invalid-value-encountered-in-less-equal-in-numpy", "best_answers_votes":57, "question_length":355, "response_length":848 }, { "question":"What is the advantage of saving `.npz` files instead of `.npy` in python, regarding speed, memory and look-up? The python documentation for the numpy.savez which saves an .npz file is: The .npz file format is a zipped archive of files named after the variables they contain. The archive is not compressed and each file in the archive contains one variable in .npy format. [...] When opening the saved .npz file with load a NpzFile object is returned. This is a dictionary-like object which can be queried for its list of arrays (with the .files attribute), and for the arrays themselves. My question is: what is the point of numpy.savez? Is it just a more elegant version (shorter command) to save multiple arrays, or is there a speed-up in the saving\/reading process? Does it occupy less memory?", "response":"There are two parts of explanation for answering your question. I. NPY vs. 
NPZ As we already read from the doc, the .npy format is: the standard binary file format in NumPy for persisting a single arbitrary NumPy array on disk. ... The format is designed to be as simple as possible while achieving its limited goals. (sources) And .npz is only a simple way to combine multiple arrays into a single file, one can use ZipFile to contain multiple \u201c.npy\u201d files. We recommend using the file extension \u201c.npz\u201d for these archives. (sources) So, .npz is just a ZipFile containing multiple \u201c.npy\u201d files. And this ZipFile can be either compressed (by using np.savez_compressed) or uncompressed (by using np.savez). It's similar to a tarball archive file in Unix-like systems, where a tarball can be just an uncompressed archive file containing other files, or a compressed archive file produced with various compression programs (gzip, bzip2, etc.) II. Different APIs for binary serialization And Numpy also provides different APIs to produce these binary file outputs: np.save ---> Save an array to a binary file in NumPy .npy format np.savez --> Save several arrays into a single file in uncompressed .npz format np.savez_compressed --> Save several arrays into a single file in compressed .npz format np.load --> Load arrays or pickled objects from .npy, .npz or pickled files If we skim the source code of Numpy, under the hood: ```py def _savez(file, args, kwds, compress, allow_pickle=True, pickle_kwargs=None): ... if compress: compression = zipfile.ZIP_DEFLATED else: compression = zipfile.ZIP_STORED ... def savez(file, *args, **kwds): _savez(file, args, kwds, False) def savez_compressed(file, *args, **kwds): _savez(file, args, kwds, True) ``` Then back to the question: If you only use np.savez, there is no more compression on top of the .npy format, only just a single archive file for the convenience of managing multiple related files.
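A round-trip sketch of the three writer APIs plus np.load (the file and array names here are made up for the demo):

```python
import os
import tempfile
import zipfile
import numpy as np

a = np.arange(10)
b = np.linspace(0.0, 1.0, 5)
d = tempfile.mkdtemp()

np.save(os.path.join(d, 'a.npy'), a)                        # single array, .npy
np.savez(os.path.join(d, 'ab.npz'), a=a, b=b)               # zip, ZIP_STORED
np.savez_compressed(os.path.join(d, 'ab_c.npz'), a=a, b=b)  # zip, ZIP_DEFLATED

# Both .npz variants really are plain zip archives of .npy members
with zipfile.ZipFile(os.path.join(d, 'ab.npz')) as zf:
    print(sorted(zf.namelist()))   # one .npy member per saved array

with np.load(os.path.join(d, 'ab.npz')) as npz:
    print(sorted(npz.files))       # dictionary-like access by array name
```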
If you use np.savez_compressed, you of course use less memory on disk, at the cost of more CPU time spent on the compression job (i.e. saving is a bit slower).", "best_answers_score":0.8, "library_name":"numpy", "question_url":"https:\/\/stackoverflow.com\/questions\/54238670\/what-is-the-advantage-of-saving-npz-files-instead-of-npy-in-python-regard", "best_answers_votes":71, "question_length":796, "response_length":2081 }, { "question":"Retrieve name of column from its Index in Pandas I have a pandas dataframe and a numpy array of values of that dataframe. I have the index of a specific column and I already have the row index of an important value. Now I need to get the column name of that particular value from my dataframe. After searching through the documentation, I found out that I can do the opposite but not what I want.", "response":"I think you need to index the column names by position (Python counts from 0, so for the fourth column you need 3): ``` colname = df.columns[pos] ``` Sample: ``` df = pd.DataFrame({'A':[1,2,3], 'B':[4,5,6], 'C':[7,8,9], 'D':[1,3,5], 'E':[5,3,6], 'F':[7,4,3]}) print (df) A B C D E F 0 1 4 7 1 5 7 1 2 5 8 3 3 4 2 3 6 9 5 6 3 pos = 3 colname = df.columns[pos] print (colname) D ``` ``` pos = [3,5] colname = df.columns[pos] print (colname) Index(['D', 'F'], dtype='object') ```", "best_answers_score":0.8, "library_name":"numpy", "question_url":"https:\/\/stackoverflow.com\/questions\/43068412\/retrieve-name-of-column-from-its-index-in-pandas", "best_answers_votes":78, "question_length":397, "response_length":462 }, { "question":"Python: ufunc 'add' did not contain a loop with signature matching types dtype('S21') dtype('S21') dtype('S21') I have two dataframes, which both have an Order ID and a date. 
I wanted to add a flag into the first dataframe df1: if a record with the same order id and date is in dataframe df2, then add a Y: ``` [ df1['R'] = np.where(orders['key'].isin(df2['key']), 'Y', 0)] ``` To accomplish that, I was going to create a key, which would be the concatenation of the order_id and date, but when I try the following code: ``` df1['key']=df1['Order_ID']+'_'+df1['Date'] ``` I get this error ``` ufunc 'add' did not contain a loop with signature matching types dtype('S21') dtype('S21') dtype('S21') ``` df1 looks like this: ``` Date | Order_ID | other data points ... 201751 4395674 ... 201762 3487535 ... ``` These are the datatypes: ``` df1.info() RangeIndex: 157443 entries, 0 to 157442 Data columns (total 6 columns): Order_ID 157429 non-null object Date 157443 non-null int64 ... dtypes: float64(2), int64(2), object(2) memory usage: 7.2+ MB df1['Order_ID'].values array(['782833030', '782834969', '782836416', ..., '783678018', '783679806', '783679874'], dtype=object) ```", "response":"The problem is that you can't add an object array (containing strings) to a number array, that's just ambiguous: ``` >>> import pandas as pd >>> pd.Series(['abc', 'def']) + pd.Series([1, 2]) TypeError: ufunc 'add' did not contain a loop with signature matching types dtype(' Nehalem MKL, Netlib Blas --> Nehalem Netlib BLAS, etc) Single threaded performance: Multi threaded performance (8 threads): Threads vs Matrix size (Ivy Bridge MKL): Benchmark Suite Single threaded performance: Multi threaded (8 threads) performance: Conclusion The new benchmark results are similar to the ones in the original answer. OpenBLAS and MKL perform on the same level, with the exception of Eigenvalue test. The Eigenvalue test performs only reasonably well on OpenBLAS in single threaded mode. In multi-threaded mode the performance is worse. 
The \"Matrix size vs threads\" chart also shows that although MKL as well as OpenBLAS generally scale well with the number of cores\/threads, it depends on the size of the matrix. For small matrices adding more cores won't improve performance very much. There is also an approximately 30% performance increase from Sandy Bridge to Ivy Bridge, which might be due to the higher clock rate (+ 0.8 GHz) and\/or the better architecture. Original Answer (04.10.2011): Some time ago I had to optimize some linear algebra calculations\/algorithms which were written in Python using numpy and BLAS, so I benchmarked\/tested different numpy\/BLAS configurations. Specifically I tested: Numpy with ATLAS Numpy with GotoBlas2 (1.13) Numpy with MKL (11.1\/073) Numpy with Accelerate Framework (Mac OS X) I ran two different benchmarks: simple dot product of matrices with different sizes Benchmark suite which can be found here. Here are my results: Machines Linux (MKL, ATLAS, No-MKL, GotoBlas2): OS: Ubuntu Lucid 10.4 64 Bit. 
CPU: 2 x 4 Intel(R) Xeon(R) E5504 @ 2.00GHz (8 Cores) RAM: 24 GB Intel Compiler: 11.1\/073 Scipy: 0.8 Numpy: 1.5 Mac Book Pro (Accelerate Framework): OS: Mac OS X Snow Leopard (10.6) CPU: 1 Intel Core 2 Duo 2.93 Ghz (2 Cores) RAM: 4 GB Scipy: 0.7 Numpy: 1.3 Mac Server (Accelerate Framework): OS: Mac OS X Snow Leopard Server (10.6) CPU: 4 X Intel(R) Xeon(R) E5520 @ 2.26 Ghz (8 Cores) RAM: 4 GB Scipy: 0.8 Numpy: 1.5.1 Dot product benchmark Code: ``` import numpy as np a = np.random.random_sample((size,size)) b = np.random.random_sample((size,size)) %timeit np.dot(a,b) ``` Results: ``` System | size = 1000 | size = 2000 | size = 3000 | netlib BLAS | 1350 ms | 10900 ms | 39200 ms | ATLAS (1 CPU) | 314 ms | 2560 ms | 8700 ms | MKL (1 CPUs) | 268 ms | 2110 ms | 7120 ms | MKL (2 CPUs) | - | - | 3660 ms | MKL (8 CPUs) | 39 ms | 319 ms | 1000 ms | GotoBlas2 (1 CPU) | 266 ms | 2100 ms | 7280 ms | GotoBlas2 (2 CPUs)| 139 ms | 1009 ms | 3690 ms | GotoBlas2 (8 CPUs)| 54 ms | 389 ms | 1250 ms | Mac OS X (1 CPU) | 143 ms | 1060 ms | 3605 ms | Mac Server (1 CPU)| 92 ms | 714 ms | 2130 ms | ``` Benchmark Suite Code: For additional information about the benchmark suite see here. 
Results: ``` System | eigenvalues | svd | det | inv | dot | netlib BLAS | 1688 ms | 13102 ms | 438 ms | 2155 ms | 3522 ms | ATLAS (1 CPU) | 1210 ms | 5897 ms | 170 ms | 560 ms | 893 ms | MKL (1 CPUs) | 691 ms | 4475 ms | 141 ms | 450 ms | 736 ms | MKL (2 CPUs) | 552 ms | 2718 ms | 96 ms | 267 ms | 423 ms | MKL (8 CPUs) | 525 ms | 1679 ms | 60 ms | 137 ms | 197 ms | GotoBlas2 (1 CPU) | 2124 ms | 4636 ms | 147 ms | 456 ms | 743 ms | GotoBlas2 (2 CPUs)| 1560 ms | 3278 ms | 116 ms | 295 ms | 460 ms | GotoBlas2 (8 CPUs)| 741 ms | 2914 ms | 82 ms | 262 ms | 192 ms | Mac OS X (1 CPU) | 948 ms | 4339 ms | 151 ms | 318 ms | 566 ms | Mac Server (1 CPU)| 1033 ms | 3645 ms | 99 ms | 232 ms | 342 ms | ``` Installation Installation of MKL included installing the complete Intel Compiler Suite, which is pretty straightforward. However, because of some bugs\/issues, configuring and compiling numpy with MKL support was a bit of a hassle. GotoBlas2 is a small package which can be easily compiled as a shared library. However, because of a bug you have to re-create the shared library after building it in order to use it with numpy. In addition, building it for multiple target platforms didn't work for some reason. So I had to create an .so file for each platform for which I want to have an optimized libgoto2.so file. If you install numpy from Ubuntu's repository it will automatically install and configure numpy to use ATLAS. Installing ATLAS from source can take some time and requires some additional steps (Fortran, etc). If you install numpy on a Mac OS X machine with Fink or Mac Ports it will either configure numpy to use ATLAS or Apple's Accelerate Framework. You can check by either running ldd on the numpy.core._dotblas file or calling numpy.show_config(). Conclusions MKL performs best, closely followed by GotoBlas2. In the eigenvalue test GotoBlas2 performs surprisingly poorly. Not sure why this is the case. 
Apple's Accelerate Framework performs really well, especially in single-threaded mode (compared to the other BLAS implementations). Both GotoBlas2 and MKL scale very well with the number of threads. So if you have to deal with big matrices, running on multiple threads will help a lot. In any case, don't use the default netlib blas implementation because it is way too slow for any serious computational work. On our cluster I also installed AMD's ACML and performance was similar to MKL and GotoBlas2. I don't have any numbers, though. I personally would recommend using GotoBlas2 because it's easier to install and it's free. If you want to code in C++\/C, also check out Eigen3, which is supposed to outperform MKL\/GotoBlas2 in some cases and is also pretty easy to use.", "best_answers_score":0.8, "library_name":"numpy", "question_url":"https:\/\/stackoverflow.com\/questions\/5260068\/multithreaded-blas-in-python-numpy", "best_answers_votes":116, "question_length":416, "response_length":6621 }, { "question":"How can I make a two-dimensional NumPy array a three-dimensional array? I have a two-dimensional array with shape (x, y) which I want to convert to a three-dimensional array with shape (x, y, 1).
Is there a nice Pythonic way to do this?", "response":"In addition to the other answers, you can also use slicing with numpy.newaxis: ``` >>> from numpy import zeros, newaxis >>> a = zeros((6, 8)) >>> a.shape (6, 8) >>> b = a[:, :, newaxis] >>> b.shape (6, 8, 1) ``` Or even this (which will work with an arbitrary number of dimensions): ``` >>> b = a[..., newaxis] >>> b.shape (6, 8, 1) ```", "best_answers_score":0.8, "library_name":"numpy", "question_url":"https:\/\/stackoverflow.com\/questions\/7372316\/how-can-i-make-a-two-dimensional-numpy-array-a-three-dimensional-array", "best_answers_votes":77, "question_length":236, "response_length":336 }, { "question":"Python vectorizing nested for loops I'd appreciate some help in finding and understanding a pythonic way to optimize the following array manipulations in nested for loops: ``` def _func(a, b, radius): \"Return 0 if a>b, otherwise return 1\" if distance.euclidean(a, b) < radius: return 1 else: return 0 def _make_mask(volume, roi, radius): mask = numpy.zeros(volume.shape) for x in range(volume.shape[0]): for y in range(volume.shape[1]): for z in range(volume.shape[2]): mask[x, y, z] = _func((x, y, z), roi, radius) return mask ``` Where volume.shape (182, 218, 200) and roi.shape (3,) are both ndarray types; and radius is an int", "response":"Approach #1 Here's a vectorized approach - ``` m,n,r = volume.shape x,y,z = np.mgrid[0:m,0:n,0:r] X = x - roi[0] Y = y - roi[1] Z = z - roi[2] mask = X**2 + Y**2 + Z**2 < radius**2 ``` Possible improvement : We can probably speedup the last step with numexpr module - ``` import numexpr as ne mask = ne.evaluate('X**2 + Y**2 + Z**2 < radius**2') ``` Approach #2 We can also gradually build the three ranges corresponding to the shape parameters and perform the subtraction against the three elements of roi on the fly without actually creating the meshes as done earlier with np.mgrid. This would be benefited by the use of broadcasting for efficiency purposes. 
The implementation would look like this - ``` m,n,r = volume.shape vals = ((np.arange(m)-roi[0])**2)[:,None,None] + \\ ((np.arange(n)-roi[1])**2)[:,None] + ((np.arange(r)-roi[2])**2) mask = vals < radius**2 ``` Simplified version : Thanks to @Bi Rico for suggesting an improvement here as we can use np.ogrid to perform those operations in a bit more concise manner, like so - ``` m,n,r = volume.shape x,y,z = np.ogrid[0:m,0:n,0:r]-roi mask = (x**2+y**2+z**2) < radius**2 ``` Runtime test Function definitions - ``` def vectorized_app1(volume, roi, radius): m,n,r = volume.shape x,y,z = np.mgrid[0:m,0:n,0:r] X = x - roi[0] Y = y - roi[1] Z = z - roi[2] return X**2 + Y**2 + Z**2 < radius**2 def vectorized_app1_improved(volume, roi, radius): m,n,r = volume.shape x,y,z = np.mgrid[0:m,0:n,0:r] X = x - roi[0] Y = y - roi[1] Z = z - roi[2] return ne.evaluate('X**2 + Y**2 + Z**2 < radius**2') def vectorized_app2(volume, roi, radius): m,n,r = volume.shape vals = ((np.arange(m)-roi[0])**2)[:,None,None] + \\ ((np.arange(n)-roi[1])**2)[:,None] + ((np.arange(r)-roi[2])**2) return vals < radius**2 def vectorized_app2_simplified(volume, roi, radius): m,n,r = volume.shape x,y,z = np.ogrid[0:m,0:n,0:r]-roi return (x**2+y**2+z**2) < radius**2 ``` Timings - ``` In [106]: # Setup input arrays ...: volume = np.random.rand(90,110,100) # Half of original input sizes ...: roi = np.random.rand(3) ...: radius = 3.4 ...: In [107]: %timeit _make_mask(volume, roi, radius) 1 loops, best of 3: 41.4 s per loop In [108]: %timeit vectorized_app1(volume, roi, radius) 10 loops, best of 3: 62.3 ms per loop In [109]: %timeit vectorized_app1_improved(volume, roi, radius) 10 loops, best of 3: 47 ms per loop In [110]: %timeit vectorized_app2(volume, roi, radius) 100 loops, best of 3: 4.26 ms per loop In [139]: %timeit vectorized_app2_simplified(volume, roi, radius) 100 loops, best of 3: 4.36 ms per loop ``` So, as always broadcasting showing its magic for a crazy almost 10,000x speedup over the original code 
and more than 10x better than creating meshes by using on-the-fly broadcasted operations!", "best_answers_score":0.8, "library_name":"numpy", "question_url":"https:\/\/stackoverflow.com\/questions\/39667089\/python-vectorizing-nested-for-loops", "best_answers_votes":92, "question_length":630, "response_length":2746 }, { "question":"Concatenate sparse matrices in Python using SciPy\/Numpy What would be the most efficient way to concatenate sparse matrices in Python using SciPy\/Numpy? Here I used the following: ``` >>> np.hstack((X, X2)) array([ ' with 1135520 stored elements in Compressed Sparse Row format>, ' with 1135520 stored elements in Compressed Sparse Row format>], dtype=object) ``` I would like to use both predictors in a regression, but the current format is obviously not what I'm looking for. Would it be possible to get the following: ``` ' with 2271040 stored elements in Compressed Sparse Row format> ``` It is too large to be converted to a deep format.", "response":"You can use the scipy.sparse.hstack to concatenate sparse matrices with the same number of rows (horizontal concatenation): ``` from scipy.sparse import hstack hstack((X, X2)) ``` Similarly, you can use scipy.sparse.vstack to concatenate sparse matrices with the same number of columns (vertical concatenation). Using numpy.hstack or numpy.vstack will create an array with two sparse matrix objects.", "best_answers_score":0.8, "library_name":"numpy", "question_url":"https:\/\/stackoverflow.com\/questions\/19710602\/concatenate-sparse-matrices-in-python-using-scipy-numpy", "best_answers_votes":90, "question_length":643, "response_length":399 }, { "question":"Swap two rows in a numpy array in python [duplicate] This question already has an answer here: Row exchange in Numpy [duplicate] (1 answer) Closed 6 years ago. How to swap xth and yth rows of the 2-D NumPy array? x & y are inputs provided by the user. 
Let's say x = 0 & y = 2, and the input array is as below: ``` a = [[4 3 1] [5 7 0] [9 9 3] [8 2 4]] Expected Output : [[9 9 3] [5 7 0] [4 3 1] [8 2 4]] ``` I tried multiple things, but did not get the expected result. This is what I tried: ``` a[x],a[y]= a[y],a[x] output I got is: [[9 9 3] [5 7 0] [9 9 3] [8 2 4]] ``` Please suggest what is wrong in my solution.", "response":"Put the index as a whole: ``` a[[x, y]] = a[[y, x]] ``` With your example: ``` a = np.array([[4,3,1], [5,7,0], [9,9,3], [8,2,4]]) a # array([[4, 3, 1], # [5, 7, 0], # [9, 9, 3], # [8, 2, 4]]) a[[0, 2]] = a[[2, 0]] a # array([[9, 9, 3], # [5, 7, 0], # [4, 3, 1], # [8, 2, 4]]) ```", "best_answers_score":0.8, "library_name":"numpy", "question_url":"https:\/\/stackoverflow.com\/questions\/54069863\/swap-two-rows-in-a-numpy-array-in-python", "best_answers_votes":84, "question_length":615, "response_length":279 }, { "question":"Cython: (Why \/ When) Is it preferable to use Py_ssize_t for indexing? This is a follow-up to this question. (Why \/ When) Is it preferable to use Py_ssize_t for indexing? In the docs I just found ``` # Purists could use \"Py_ssize_t\" which is the proper Python type for # array indices. ``` -> Does that mean always when indexing NumPy\/Cython - array(s)\/-views one should use Py_ssize_t? -> Is Py_ssize_t e.g. an unsigned int so that I can't use @cython.boundscheck(False)", "response":"Py_ssize_t is signed. See PEP 353, where it says \"A new type Py_ssize_t is introduced, which has the same size as the compiler's size_t type, but is signed. It will be a typedef for ssize_t where available.\" You should use Py_ssize_t for indexing. I didn't find a definitive statement of this in the Cython docs, but Stefan Behnel, a Cython developer, said as much in an email (https:\/\/groups.google.com\/forum\/#!topic\/cython-users\/brENF_M9zxM): As a general remark, you are using ints as indices.
You should use the Py_ssize_t type instead (or size_t, if you prefer an unsigned type) to properly accommodate for 64 bit architectures.", "best_answers_score":0.8, "library_name":"numpy", "question_url":"https:\/\/stackoverflow.com\/questions\/20987390\/cython-why-when-is-it-preferable-to-use-py-ssize-t-for-indexing", "best_answers_votes":48, "question_length":472, "response_length":633 }, { "question":"Convert numpy array of arrays to 2d array I have a pandas series features that has the following values (features.values) ``` array([array([0, 0, 0, ..., 0, 0, 0]), array([0, 0, 0, ..., 0, 0, 0]), array([0, 0, 0, ..., 0, 0, 0]), ..., array([0, 0, 0, ..., 0, 0, 0]), array([0, 0, 0, ..., 0, 0, 0]), array([0, 0, 0, ..., 0, 0, 0])], dtype=object) ``` Now I really want this to be recognized as a matrix, but if I do ``` >>> features.values.shape (10000,) ``` rather than (10000, 3000) which is what I would expect. How can I get this to be recognized as 2d rather than a 1d array with arrays as values? Also, why does it not automatically detect it as a 2d array?", "response":"In response to your comment question, let's compare 2 ways of creating an array. First make an array from a list of arrays (all same length): ``` In [302]: arr = np.array([np.arange(3), np.arange(1,4), np.arange(10,13)]) In [303]: arr Out[303]: array([[ 0, 1, 2], [ 1, 2, 3], [10, 11, 12]]) ``` The result is a 2d array of numbers. If instead we make an object dtype array, and fill it with arrays: ``` In [304]: arr = np.empty(3,object) In [305]: arr[:] = [np.arange(3), np.arange(1,4), np.arange(10,13)] In [306]: arr Out[306]: array([array([0, 1, 2]), array([1, 2, 3]), array([10, 11, 12])], dtype=object) ``` Notice that this display is like yours. This is, by design, a 1d array. Like a list, it contains pointers to arrays elsewhere in memory. Notice that it requires an extra construction step. The default behavior of np.array is to create a multidimensional array where it can.
It takes extra effort to get around that. Likewise it takes some extra effort to undo that - to create the 2d numeric array. Simply calling np.array on it does not change the structure. ``` In [307]: np.array(arr) Out[307]: array([array([0, 1, 2]), array([1, 2, 3]), array([10, 11, 12])], dtype=object) ``` stack does change it to 2d. stack treats it as a list of arrays, which it joins on a new axis. ``` In [308]: np.stack(arr) Out[308]: array([[ 0, 1, 2], [ 1, 2, 3], [10, 11, 12]]) ```", "best_answers_score":0.8, "library_name":"numpy", "question_url":"https:\/\/stackoverflow.com\/questions\/50971123\/converty-numpy-array-of-arrays-to-2d-array", "best_answers_votes":60, "question_length":659, "response_length":1370 }, { "question":"Prevent anti-aliasing for imshow in matplotlib When I use matplotlib's imshow() method to represent a small numpy matrix, it ends up doing some smoothing between pixels. Is there any way to disables this? It makes my figure's misleading in presentations. The figure above is a 28x28 image, so I should be seeing large squares of single colors representing each pixel (as matlab would display it when using imagesc()). But Instead, the pixels seem to be blurred with neighboring pixels. Is there a way to disable this behavior?", "response":"There is an interpolation option for imshow which controls how and if interpolation will be applied to the rendering of the matrix. If you try ``` imshow(array, interpolation=\"nearest\") ``` you might get something more like you want. 
As an example ``` A=10*np.eye(10) + np.random.rand(100).reshape(10,10) imshow(A) ``` compared with ``` A=10*np.eye(10) + np.random.rand(100).reshape(10,10) imshow(A, interpolation=\"nearest\") ```", "best_answers_score":0.8, "library_name":"numpy", "question_url":"https:\/\/stackoverflow.com\/questions\/8376609\/prevent-anti-aliasing-for-imshow-in-matplotlib", "best_answers_votes":56, "question_length":526, "response_length":428 }, { "question":"Using a sparse matrix versus numpy array I am creating some numpy arrays with word counts in Python: rows are documents, columns are counts for word X. If I have a lot of zero counts, people suggest using sparse matrices when processing these further, e.g. in a classifier. When feeding a numpy array versus a sparse matrix into the Scikit logistic regression classifier, it did not seem to make much of a difference, however. So I was wondering about three things: Wikipedia says a sparse matrix is a matrix in which most of the elements are zero Is that an appropriate way to determine when to use a sparse matrix format - as soon as > 50 % of the values are zero? Or does it make sense to use just in case? How much does a sparse matrix help performance in a task like mine, especially compared to a numpy array or a standard list? So far, I collect my data into a numpy array, then convert into the csr_matrix in Scipy. Is that the right way to do it? I could not figure out how to build a sparse matrix from the ground up, and that might be impossible. Any help is much appreciated!", "response":"The scipy sparse matrix package, and similar ones in MATLAB, was based on ideas developed from linear algebra problems, such as solving large sparse linear equations (e.g. finite difference and finite element implementations). So things like matrix product (the dot product for numpy arrays) and equation solvers are well developed. 
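As a small illustration of that equation-solving side (a toy tridiagonal system of my own, not anything from the question), assuming only numpy and scipy: ```python
import numpy as np
from scipy import sparse
from scipy.sparse.linalg import spsolve

# build a 5x5 sparse tridiagonal matrix directly in CSR format
A = sparse.diags([1., -2., 1.], [-1, 0, 1], shape=(5, 5), format='csr')
b = np.array([1., 0., 0., 0., 1.])

x = spsolve(A, b)           # solve A x = b without ever densifying A
print(np.allclose(A @ x, b))
```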
My rough experience is that a sparse csr matrix product has to have a 1% sparsity to be faster than the equivalent dense dot operation - in other words, one nonzero value for every 99 zeros. (but see tests below) But people also try to use sparse matrices to save memory. But keep in mind that such a matrix has to store 3 arrays of values (at least in the coo format). So the sparsity has to be less than 1\/3 to start saving memory. Obviously you aren't going to save memory if you first build the dense array, and create the sparse one from that. The scipy package implements many sparse formats. The coo format is easiest to understand and build. Build one according to documentation and look at its .data, .row, and .col attributes (3 1d arrays). csr and csc are typically built from the coo format, and compress the data a bit, making them a bit harder to understand. But they have most of the math functionality. It is also possible to index csr format, though in general this is slower than the equivalent dense matrix\/array case. Other operations like changing values (especially from 0 to nonzero), concatenation, incremental growth, are also slower. lil (lists of lists) is also easy to understand, and best for incremental building. dok is actually a dictionary subclass. A key point is that a sparse matrix is limited to 2d, and in many ways behaves like the np.matrix class (though it isn't a subclass). A search for other questions using scikit-learn and sparse might be the best way of finding the pros\/cons of using these matrices. I've answered a number of questions, but I know the 'sparse' side better than the 'learn' side. I think they are useful, but I get the sense that the fit isn't always the best. Any customization is on the learn side. So far the sparse package has not been optimized for this application. I just tried some matrix product tests, using the sparse.random method to create a sparse matrix with a specified sparsity.
Sparse matrix multiplication performed better than I expected. ``` In [251]: M=sparse.random(1000,1000,.5) In [252]: timeit M1=M*M 1 loops, best of 3: 2.78 s per loop In [253]: timeit Ma=M.toarray(); M2=Ma.dot(Ma) 1 loops, best of 3: 4.28 s per loop ``` It is a size issue; for smaller matrix the dense dot is faster ``` In [255]: M=sparse.random(100,100,.5) In [256]: timeit M1=M*M 100 loops, best of 3: 3.24 ms per loop In [257]: timeit Ma=M.toarray(); M2=Ma.dot(Ma) 1000 loops, best of 3: 1.44 ms per loop ``` But compare indexing ``` In [268]: timeit M.tocsr()[500,500] 10 loops, best of 3: 86.4 ms per loop In [269]: timeit Ma[500,500] 1000000 loops, best of 3: 318 ns per loop In [270]: timeit Ma=M.toarray();Ma[500,500] 10 loops, best of 3: 23.6 ms per loop ```", "best_answers_score":0.8, "library_name":"numpy", "question_url":"https:\/\/stackoverflow.com\/questions\/36969886\/using-a-sparse-matrix-versus-numpy-array", "best_answers_votes":46, "question_length":1087, "response_length":3066 }, { "question":"How to multiply two vector and get a matrix? In numpy operation, I have two vectors, let's say vector A is 4X1, vector B is 1X5, if I do AXB, it should result a matrix of size 4X5. But I tried lot of times, doing many kinds of reshape and transpose, they all either raise error saying not aligned or return a single value. How should I get the output product of matrix I want?", "response":"Normal matrix multiplication works as long as the vectors have the right shape. 
Remember that * in Numpy is elementwise multiplication, and matrix multiplication is available with numpy.dot() (or with the @ operator, in Python 3.5) ``` >>> numpy.dot(numpy.array([[1], [2]]), numpy.array([[3, 4]])) array([[3, 4], [6, 8]]) ``` This is called an \"outer product.\" You can get it using plain vectors using numpy.outer(): ``` >>> numpy.outer(numpy.array([1, 2]), numpy.array([3, 4])) array([[3, 4], [6, 8]]) ```", "best_answers_score":0.8, "library_name":"numpy", "question_url":"https:\/\/stackoverflow.com\/questions\/28578302\/how-to-multiply-two-vector-and-get-a-matrix", "best_answers_votes":61, "question_length":376, "response_length":506 }, { "question":"Python numpy: cannot convert datetime64[ns] to datetime64[D] (to use with Numba) I want to pass a datetime array to a Numba function (which cannot be vectorised and would otherwise be very slow). I understand Numba supports numpy.datetime64. However, it seems it supports datetime64[D] (day precision) but not datetime64[ns] (nanosecond precision) (I learnt this the hard way: is it documented?). I tried to convert from datetime64[ns] to datetime64[D], but can't seem to find a way! I have summarised my problem with the minimal code below. If you run testdf(mydates), which is datetime64[D], it works fine. If you run testdf(dates_input), which is datetime64[ns], it doesn't. Note that this example simply passes the dates to the Numba function, which doesn't (yet) do anything with them. I try to convert dates_input to datetime64[D], but the conversion doesn't work. In my original code I read from a SQL table into a pandas dataframe, and need a column which changes the day of each date to the 15th. 
```py import numba import numpy as np import pandas as pd import datetime mydates =np.array(['2010-01-01','2011-01-02']).astype('datetime64[D]') df=pd.DataFrame() df[\"rawdate\"]=mydates df[\"month_15\"] = df[\"rawdate\"].apply(lambda r: datetime.date( r.year, r.month,15 ) ) dates_input = df[\"month_15\"].astype('datetime64[D]') print dates_input.dtype # Why datetime64[ns] and not datetime64[D] ?? @numba.jit(nopython=True) def testdf(dates): return 1 print testdf(mydates) ``` The error I get if I run testdf(dates_input) is: ```none numba.typeinfer.TypingError: Failed at nopython (nopython frontend) Var 'dates' unified to object: dates := {pyobject} ```", "response":"Note (2023-05-30): This answer only works for pandas version <2. Pandas 2.0.0 was released on 2023-04-03. See relevant changelog entry. Series.astype converts all date-like objects to datetime64[ns]. To convert to datetime64[D], use values to obtain a NumPy array before calling astype: ``` dates_input = df[\"month_15\"].values.astype('datetime64[D]') ``` Note that NDFrames (such as Series and DataFrames) can only hold datetime-like objects as objects of dtype datetime64[ns]. The automatic conversion of all datetime-likes to a common dtype simplifies subsequent date computations. But it makes it impossible to store, say, datetime64[s] objects in a DataFrame column. 
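As a quick sanity check of the NumPy side of this (pandas aside, and assuming nothing beyond NumPy itself), converting a plain array between time units is direct: ```python
import numpy as np

a = np.array(['2010-01-05T12:34:56'], dtype='datetime64[ns]')
d = a.astype('datetime64[D]')   # truncates to day precision

print(d)        # ['2010-01-05']
print(d.dtype)  # datetime64[D]
```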
Pandas core developer, Jeff Reback explains, \"We don't allow direct conversions because its simply too complicated to keep anything other than datetime64[ns] internally (nor necessary at all).\" Also note that even though df['month_15'].astype('datetime64[D]') has dtype datetime64[ns]: ``` In [29]: df['month_15'].astype('datetime64[D]').dtype Out[29]: dtype('<M8[ns]') ``` [...] ``` # test 1 %timeit np.any(first) >>> 100000 loops, best of 3: 6.36 us per loop # test 2 %timeit np.any(last) >>> 100000 loops, best of 3: 6.95 us per loop ``` At least np.any seems to be doing something vaguely sensible here - if the nonzero value is the first one in the array, there should be no need to consider any others before returning True, so I would expect test 1 to be slightly quicker than test 2. However, what happens when we make the arrays much larger? ``` first = np.zeros(1E9,dtype=np.bool) last = np.zeros(1E9,dtype=np.bool) first[0] = True last[-1] = True # test 3 %timeit np.any(first) >>> 10 loops, best of 3: 21.6 ms per loop # test 4 %timeit np.any(last) >>> 1 loops, best of 3: 739 ms per loop ``` As expected, test 4 is quite a lot slower than test 3. However, in test 3 np.any should still only have to check the value of a single element in first in order to know that it contains at least one nonzero value. Why, then, is test 3 so much slower than test 1? Edit 1: I'm using a development version of Numpy (1.8.0.dev-e11cd9b), but I get the exact same timing results using Numpy 1.7.1. I'm running 64 bit Linux, Python 2.7.4. My system is basically idling (I'm running an IPython session, a browser and a text editor), and I'm definitely not hitting the swap. I also replicated the result on another machine running Numpy 1.7.1. Edit 2: Using Numpy 1.6.2 I get times of ~1.85us for both tests 1 and 3, so as jorgeca says there seems to have been some performance regression between Numpy 1.6.2 and 1.7.0 in this regard. Edit 3: Following J.F.
Sebastian and jorgeca's lead I've done some more benchmarking using np.all on an array of zeros, which ought to be equivalent to calling np.any on an array where the first element is one. Test script: ``` import timeit import numpy as np print 'Numpy v%s' %np.version.full_version stmt = \"np.all(x)\" for ii in xrange(10): setup = \"import numpy as np; x = np.zeros(%d,dtype=np.bool)\" %(10**ii) timer = timeit.Timer(stmt,setup) n,r = 1,3 t = np.min(timer.repeat(r,n)) while t < 0.2: n *= 10 t = np.min(timer.repeat(r,n)) t \/= n if t < 1E-3: timestr = \"%1.3f us\" %(t*1E6) elif t < 1: timestr = \"%1.3f ms\" %(t*1E3) else: timestr = \"%1.3f s\" %t print \"Array size: 1E%i, %i loops, best of %i: %s\/loop\" %(ii,n,r,timestr) ``` Results: ``` Numpy v1.6.2 Array size: 1E0, 1000000 loops, best of 3: 1.738 us\/loop Array size: 1E1, 1000000 loops, best of 3: 1.845 us\/loop Array size: 1E2, 1000000 loops, best of 3: 1.862 us\/loop Array size: 1E3, 1000000 loops, best of 3: 1.858 us\/loop Array size: 1E4, 1000000 loops, best of 3: 1.864 us\/loop Array size: 1E5, 1000000 loops, best of 3: 1.882 us\/loop Array size: 1E6, 1000000 loops, best of 3: 1.866 us\/loop Array size: 1E7, 1000000 loops, best of 3: 1.853 us\/loop Array size: 1E8, 1000000 loops, best of 3: 1.860 us\/loop Array size: 1E9, 1000000 loops, best of 3: 1.854 us\/loop Numpy v1.7.0 Array size: 1E0, 100000 loops, best of 3: 5.881 us\/loop Array size: 1E1, 100000 loops, best of 3: 5.831 us\/loop Array size: 1E2, 100000 loops, best of 3: 5.924 us\/loop Array size: 1E3, 100000 loops, best of 3: 5.864 us\/loop Array size: 1E4, 100000 loops, best of 3: 5.997 us\/loop Array size: 1E5, 100000 loops, best of 3: 6.979 us\/loop Array size: 1E6, 100000 loops, best of 3: 17.196 us\/loop Array size: 1E7, 10000 loops, best of 3: 116.162 us\/loop Array size: 1E8, 1000 loops, best of 3: 1.112 ms\/loop Array size: 1E9, 100 loops, best of 3: 11.061 ms\/loop Numpy v1.7.1 Array size: 1E0, 100000 loops, best of 3: 6.216 us\/loop Array 
size: 1E1, 100000 loops, best of 3: 6.257 us\/loop Array size: 1E2, 100000 loops, best of 3: 6.318 us\/loop Array size: 1E3, 100000 loops, best of 3: 6.247 us\/loop Array size: 1E4, 100000 loops, best of 3: 6.492 us\/loop Array size: 1E5, 100000 loops, best of 3: 7.406 us\/loop Array size: 1E6, 100000 loops, best of 3: 17.426 us\/loop Array size: 1E7, 10000 loops, best of 3: 115.946 us\/loop Array size: 1E8, 1000 loops, best of 3: 1.102 ms\/loop Array size: 1E9, 100 loops, best of 3: 10.987 ms\/loop Numpy v1.8.0.dev-e11cd9b Array size: 1E0, 100000 loops, best of 3: 6.357 us\/loop Array size: 1E1, 100000 loops, best of 3: 6.399 us\/loop Array size: 1E2, 100000 loops, best of 3: 6.425 us\/loop Array size: 1E3, 100000 loops, best of 3: 6.397 us\/loop Array size: 1E4, 100000 loops, best of 3: 6.596 us\/loop Array size: 1E5, 100000 loops, best of 3: 7.569 us\/loop Array size: 1E6, 100000 loops, best of 3: 17.445 us\/loop Array size: 1E7, 10000 loops, best of 3: 115.109 us\/loop Array size: 1E8, 1000 loops, best of 3: 1.094 ms\/loop Array size: 1E9, 100 loops, best of 3: 10.840 ms\/loop ``` Edit 4: Following seberg's comment I tried the same test with an np.float32 array instead of np.bool. 
In this case, Numpy 1.6.2 also shows a slowdown as array sizes increase: ``` Numpy v1.6.2 Array size: 1E0, 100000 loops, best of 3: 3.503 us\/loop Array size: 1E1, 100000 loops, best of 3: 3.597 us\/loop Array size: 1E2, 100000 loops, best of 3: 3.742 us\/loop Array size: 1E3, 100000 loops, best of 3: 4.745 us\/loop Array size: 1E4, 100000 loops, best of 3: 14.533 us\/loop Array size: 1E5, 10000 loops, best of 3: 112.463 us\/loop Array size: 1E6, 1000 loops, best of 3: 1.101 ms\/loop Array size: 1E7, 100 loops, best of 3: 11.724 ms\/loop Array size: 1E8, 10 loops, best of 3: 116.924 ms\/loop Array size: 1E9, 1 loops, best of 3: 1.168 s\/loop Numpy v1.7.1 Array size: 1E0, 100000 loops, best of 3: 6.548 us\/loop Array size: 1E1, 100000 loops, best of 3: 6.546 us\/loop Array size: 1E2, 100000 loops, best of 3: 6.804 us\/loop Array size: 1E3, 100000 loops, best of 3: 7.784 us\/loop Array size: 1E4, 100000 loops, best of 3: 17.946 us\/loop Array size: 1E5, 10000 loops, best of 3: 117.235 us\/loop Array size: 1E6, 1000 loops, best of 3: 1.096 ms\/loop Array size: 1E7, 100 loops, best of 3: 12.328 ms\/loop Array size: 1E8, 10 loops, best of 3: 118.431 ms\/loop Array size: 1E9, 1 loops, best of 3: 1.172 s\/loop ``` Why should this happen? As with the boolean case, np.all should still only have to check the first element before returning, so times should still be constant w.r.t. array size.", "response":"As has been guessed in the comments, I can confirm that the processing of the array is being done in chunks. First, I will show you where things are in the code and then I will show you how you can change the chunk size and the effect that doing so has on your benchmark. Where to find the reduction processing in the Numpy source files np.all(x) is the same as x.all(). all() really calls np.core.umath.logical_and.reduce(x). If you want to dig into the numpy source, I will try to guide you through finding that a buffer\/chunk size is used. 
The folder with all of the code we will be looking at is numpy\/core\/src\/umath\/. PyUFunc_Reduce() in ufunc_object.c is the C function that handles the reduce. In PyUFunc_Reduce(), the chunk, or buffer, size is found by looking up the value for reduce in some global dictionary via the PyUFunc_GetPyValues() function (ufunc_object.c). On my machine and compiling from the development branch, the chunk size is 8192. PyUFunc_ReduceWrapper() in reduction.c is called to set up the iterator (with a stride equal to the chunk size) and it calls the passed-in loop function, which is reduce_loop() in ufunc_object.c. reduce_loop() basically just uses the iterator and calls another innerloop() function for each chunk. The innerloop function is found in loops.c.src. For a boolean array and our case of all\/logical_and, the appropriate innerloop function is BOOL_logical_and. You can find the right function by searching for BOOLEAN LOOPS and then it is the second function below that (it is hard to find due to the template-like programming used here). There you will find that short-circuiting is in fact being done for each chunk. How to change the buffer size used in ufunctions (and thus in any\/all) You can get the chunk\/buffer size with np.getbufsize(). For me, that returns 8192 without manually setting it, which matches what I found by printing out the buffer size in the code. You can use np.setbufsize() to change the chunk size.
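To make the chunking behaviour concrete, here is a rough pure-Python sketch (my own illustration, not NumPy's actual C code) of a reduction that short-circuits only at chunk granularity: ```python
import numpy as np

def any_chunked(x, bufsize=8192):
    # examine one buffer-sized chunk at a time; we can only stop
    # early once a whole chunk has been reduced, which is why a True
    # in the very first element still costs one full chunk's work
    for start in range(0, x.size, bufsize):
        if x[start:start + bufsize].any():
            return True
    return False

first = np.zeros(10**6, dtype=bool)
first[0] = True
print(any_chunked(first))   # True, after reducing just the first chunk
```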
Results using a bigger buffer size I changed your benchmark code to the following: ``` import timeit import numpy as np print 'Numpy v%s' %np.version.full_version stmt = \"np.all(x)\" for ii in xrange(9): setup = \"import numpy as np; x = np.zeros(%d,dtype=np.bool); np.setbufsize(%d)\" %(10**ii, max(8192, min(10**ii, 10**7))) timer = timeit.Timer(stmt,setup) n,r = 1,3 t = np.min(timer.repeat(r,n)) while t < 0.2: n *= 10 t = np.min(timer.repeat(r,n)) t \/= n if t < 1E-3: timestr = \"%1.3f us\" %(t*1E6) elif t < 1: timestr = \"%1.3f ms\" %(t*1E3) else: timestr = \"%1.3f s\" %t print \"Array size: 1E%i, %i loops, best of %i: %s\/loop\" %(ii,n,r,timestr) ``` Numpy doesn't like the buffer size being too small or too big so I made sure that it didn't get smaller than 8192 or larger than 1E7 because Numpy didn't like a buffer size of 1E8. Otherwise, I was setting the buffer size to the size of the array being processed. I only went up to 1E8 because my machine only has 4GB of memory at the moment. 
Here are the results: ``` Numpy v1.8.0.dev-2a5c2c8 Array size: 1E0, 100000 loops, best of 3: 5.351 us\/loop Array size: 1E1, 100000 loops, best of 3: 5.390 us\/loop Array size: 1E2, 100000 loops, best of 3: 5.366 us\/loop Array size: 1E3, 100000 loops, best of 3: 5.360 us\/loop Array size: 1E4, 100000 loops, best of 3: 5.433 us\/loop Array size: 1E5, 100000 loops, best of 3: 5.400 us\/loop Array size: 1E6, 100000 loops, best of 3: 5.397 us\/loop Array size: 1E7, 100000 loops, best of 3: 5.381 us\/loop Array size: 1E8, 100000 loops, best of 3: 6.126 us\/loop ``` There is a small uptick in the last timing because there are multiple chunks being processed due to the limitations on how big the buffer size can be.", "best_answers_score":0.8, "library_name":"numpy", "question_url":"https:\/\/stackoverflow.com\/questions\/17128116\/why-is-numpy-any-so-slow-over-large-arrays", "best_answers_votes":33, "question_length":6509, "response_length":3683 }, { "question":"Checking if a matrix is symmetric in Numpy I'm trying to make a function with the arguments (a,tol=1e-8) that returns a boolean value that tells the user whether or not the matrix is symmetric (symmetric matrix is equal to its transpose). So far I have: ``` def check_symmetric(a, tol=1e-8): if np.transpose(a, axes=axes) == np.transpose(a, axes=axes): return True def sqr(s): rows = len(s) for row in sq: if len(row) != rows: return False return True if a != sqr(s): raise ValueError ``` although I keep getting an axes isn't defined message so I'm pretty sure that doesn't work at all...... 
The tests I'd like to pass are: ``` e = np.eye(4) f = np.diag([1], k=3) g = e[1:, :] print(check_symmetric(e)) print(not check_symmetric(e + f)) print(check_symmetric(e + f * 1e-9)) print(not check_symmetric(e + f * 1e-9, 1e-10)) try: check_symmetric(g) print(False) except ValueError: print(True) ``` Any help is appreciated, thanks!", "response":"You can simply compare it to its transpose using allclose: ``` def check_symmetric(a, rtol=1e-05, atol=1e-08): return numpy.allclose(a, a.T, rtol=rtol, atol=atol) ```", "best_answers_score":0.8, "library_name":"numpy", "question_url":"https:\/\/stackoverflow.com\/questions\/42908334\/checking-if-a-matrix-is-symmetric-in-numpy", "best_answers_votes":114, "question_length":927, "response_length":165 }, { "question":"Calculate overlapped area between two rectangles I want to calculate the overlapped area \"THE GRAY REGION\" between red and blue rectangles. Each rectangle is defined by its four corner coordinates. The resulting unit of the overlapped area is the unit square. I could not imagine how I can do it. Any creative comments would be appreciated.", "response":"This type of intersection is easily done by the \"min of the maxes\" and \"max of the mins\" idea. To write it out one needs a specific notation for the rectangle, and, just to make things clear, I'll use a namedtuple: ``` from collections import namedtuple Rectangle = namedtuple('Rectangle', 'xmin ymin xmax ymax') ra = Rectangle(3., 3., 5., 5.)
rb = Rectangle(1., 1., 4., 3.5) # intersection here is (3, 3, 4, 3.5), or an area of 1*.5=.5 def area(a, b): # returns None if rectangles don't intersect dx = min(a.xmax, b.xmax) - max(a.xmin, b.xmin) dy = min(a.ymax, b.ymax) - max(a.ymin, b.ymin) if (dx>=0) and (dy>=0): return dx*dy print area(ra, rb) # 0.5 ``` If you don't like the namedtuple notation, you could just use: ``` dx = min(a[2], b[2]) - max(a[0], b[0]) ``` etc, or whatever notation you prefer.", "best_answers_score":0.8, "library_name":"numpy", "question_url":"https:\/\/stackoverflow.com\/questions\/27152904\/calculate-overlapped-area-between-two-rectangles", "best_answers_votes":89, "question_length":335, "response_length":802 }, { "question":"bounding box of numpy array Suppose you have a 2D numpy array with some random values and surrounding zeros. Example \"tilted rectangle\": ``` import numpy as np from skimage import transform img1 = np.zeros((100,100)) img1[25:75,25:75] = 1. img2 = transform.rotate(img1, 45) ``` Now I want to find the smallest bounding rectangle for all the nonzero data. For example: ``` a = np.where(img2 != 0) bbox = img2[np.min(a[0]):np.max(a[0])+1, np.min(a[1]):np.max(a[1])+1] ``` What would be the fastest way to achieve this result? I am sure there is a better way since the np.where function takes quite a time if I am e.g. using 1000x1000 data sets.
Edit: Should also work in 3D...", "response":"You can roughly halve the execution time by using np.any to reduce the rows and columns that contain non-zero values to 1D vectors, rather than finding the indices of all non-zero values using np.where: ``` def bbox1(img): a = np.where(img != 0) bbox = np.min(a[0]), np.max(a[0]), np.min(a[1]), np.max(a[1]) return bbox def bbox2(img): rows = np.any(img, axis=1) cols = np.any(img, axis=0) rmin, rmax = np.where(rows)[0][[0, -1]] cmin, cmax = np.where(cols)[0][[0, -1]] return rmin, rmax, cmin, cmax ``` Some benchmarks: ``` %timeit bbox1(img2) 10000 loops, best of 3: 63.5 \u00b5s per loop %timeit bbox2(img2) 10000 loops, best of 3: 37.1 \u00b5s per loop ``` Extending this approach to the 3D case just involves performing the reduction along each pair of axes: ``` def bbox2_3D(img): r = np.any(img, axis=(1, 2)) c = np.any(img, axis=(0, 2)) z = np.any(img, axis=(0, 1)) rmin, rmax = np.where(r)[0][[0, -1]] cmin, cmax = np.where(c)[0][[0, -1]] zmin, zmax = np.where(z)[0][[0, -1]] return rmin, rmax, cmin, cmax, zmin, zmax ``` It's easy to generalize this to N dimensions by using itertools.combinations to iterate over each unique combination of axes to perform the reduction over: ``` import itertools def bbox2_ND(img): N = img.ndim out = [] for ax in itertools.combinations(reversed(range(N)), N - 1): nonzero = np.any(img, axis=ax) out.extend(np.where(nonzero)[0][[0, -1]]) return tuple(out) ``` If you know the coordinates of the corners of the original bounding box, the angle of rotation, and the centre of rotation, you could get the coordinates of the transformed bounding box corners directly by computing the corresponding affine transformation matrix and dotting it with the input coordinates: ``` def bbox_rotate(bbox_in, angle, centre): rmin, rmax, cmin, cmax = bbox_in # bounding box corners in homogeneous coordinates xyz_in = np.array(([[cmin, cmin, cmax, cmax], [rmin, rmax, rmin, rmax], [ 1, 1, 1, 1]])) # translate centre to 
origin cr, cc = centre cent2ori = np.eye(3) cent2ori[:2, 2] = -cr, -cc # rotate about the origin theta = np.deg2rad(angle) rmat = np.eye(3) rmat[:2, :2] = np.array([[ np.cos(theta),-np.sin(theta)], [ np.sin(theta), np.cos(theta)]]) # translate from origin back to centre ori2cent = np.eye(3) ori2cent[:2, 2] = cr, cc # combine transformations (rightmost matrix is applied first) xyz_out = ori2cent.dot(rmat).dot(cent2ori).dot(xyz_in) r, c = xyz_out[:2] rmin = int(r.min()) rmax = int(r.max()) cmin = int(c.min()) cmax = int(c.max()) return rmin, rmax, cmin, cmax ``` This works out to be very slightly faster than using np.any for your small example array: ``` %timeit bbox_rotate([25, 75, 25, 75], 45, (50, 50)) 10000 loops, best of 3: 33 \u00b5s per loop ``` However, since the speed of this method is independent of the size of the input array, it can be quite a lot faster for larger arrays. Extending the transformation approach to 3D is slightly more complicated, in that the rotation now has three different components (one about the x-axis, one about the y-axis and one about the z-axis), but the basic method is the same: ``` def bbox_rotate_3d(bbox_in, angle_x, angle_y, angle_z, centre): rmin, rmax, cmin, cmax, zmin, zmax = bbox_in # bounding box corners in homogeneous coordinates xyzu_in = np.array(([[cmin, cmin, cmin, cmin, cmax, cmax, cmax, cmax], [rmin, rmin, rmax, rmax, rmin, rmin, rmax, rmax], [zmin, zmax, zmin, zmax, zmin, zmax, zmin, zmax], [ 1, 1, 1, 1, 1, 1, 1, 1]])) # translate centre to origin cr, cc, cz = centre cent2ori = np.eye(4) cent2ori[:3, 3] = -cr, -cc, -cz # rotation about the x-axis theta = np.deg2rad(angle_x) rmat_x = np.eye(4) rmat_x[1:3, 1:3] = np.array([[ np.cos(theta),-np.sin(theta)], [ np.sin(theta), np.cos(theta)]]) # rotation about the y-axis theta = np.deg2rad(angle_y) rmat_y = np.eye(4) rmat_y[[0, 0, 2, 2], [0, 2, 0, 2]] = ( np.cos(theta), np.sin(theta), -np.sin(theta), np.cos(theta)) # rotation about the z-axis theta = np.deg2rad(angle_z)
rmat_z = np.eye(4) rmat_z[:2, :2] = np.array([[ np.cos(theta),-np.sin(theta)], [ np.sin(theta), np.cos(theta)]]) # translate from origin back to centre ori2cent = np.eye(4) ori2cent[:3, 3] = cr, cc, cz # combine transformations (rightmost matrix is applied first) tform = ori2cent.dot(rmat_z).dot(rmat_y).dot(rmat_x).dot(cent2ori) xyzu_out = tform.dot(xyzu_in) r, c, z = xyzu_out[:3] rmin = int(r.min()) rmax = int(r.max()) cmin = int(c.min()) cmax = int(c.max()) zmin = int(z.min()) zmax = int(z.max()) return rmin, rmax, cmin, cmax, zmin, zmax ``` I've essentially just modified the function above using the rotation matrix expressions from here - I haven't had time to write a test-case yet, so use with caution.", "best_answers_score":0.8, "library_name":"numpy", "question_url":"https:\/\/stackoverflow.com\/questions\/31400769\/bounding-box-of-numpy-array", "best_answers_votes":104, "question_length":674, "response_length":4647 }, { "question":"Are there advantages to use the Python\/C interface instead of Cython? I want to extend python and numpy by writing some modules in C or C++, using BLAS and LAPACK. I also want to be able to distribute the code as standalone C\/C++ libraries. I would like this libraries to use both single and double precision float. Some examples of functions I will write are conjugate gradient for solving linear systems or accelerated first order methods. Some functions will need to call a Python function from the C\/C++ code. After playing a little with the Python\/C API and the Numpy\/C API, I discovered that many people advocate the use of Cython instead (see for example this question or this one). I am not an expert about Cython, but it seems that for some cases, you still need to use the Numpy\/C API and know how it works. 
Given the fact that I already have (some little) knowledge about the Python\/C API and none about Cython, I was wondering if it makes sense to keep on using the Python\/C API, and if using this API has some advantages over Cython. In the future, I will certainly develop some stuff not involving numerical computing, so this question is not only about numpy. One of the thing I like about the Python\/C API is the fact that I learn some stuff about how the Python interpreter is working. Thanks.", "response":"The current \"top answer\" sounds a bit too much like FUD in my ears. For one, it is not immediately obvious that the Average Developer would write faster code in C than what NumPy+Cython gives you anyway. Quite the contrary, the time it takes to even get the necessary C code to work correctly in a Python environment is usually much better invested in writing a quick prototype in Cython, benchmarking it, optimising it, rewriting it in a faster way, benchmarking it again, and then deciding if there is anything in it that truly requires the 5-10% more performance that you may or may not get from rewriting 2% of the code in hand-tuned C and calling it from your Cython code. I'm writing a library in Cython that currently has about 18K lines of Cython code, which translate to almost 200K lines of C code. I once managed to get a speed-up of almost 25% for a couple of very important internal base level functions, by injecting some 20 lines of hand-tuned C code in the right places. It took me a couple of hours to rewrite and optimise this tiny part. That's truly nothing compared to the huge amount of time I saved by not writing (and having to maintain) the library in plain C in the first place. Even if you know C a lot better than Cython, if you know Python and C, you will learn Cython so quickly that it's worth the investment in any case, especially when you are into numerics. 
80-95% of the code you write will benefit so much from being written in a high-level language, that you can safely lay back and invest half of the time you saved into making your code just as fast as if you had written it in a low-level language right away. That being said, your comment that you want \"to be able to distribute the code as standalone C\/C++ libraries\" is a valid reason to stick to plain C\/C++. Cython always depends on CPython, which is quite a dependency. However, using plain C\/C++ (except for the Python interface) will not allow you to take advantage of NumPy either, as that also depends on CPython. So, as usual when writing something in C, you will have to do a lot of ground work before you get to the actual functionality. You should seriously think about this twice before you start this work.", "best_answers_score":0.8, "library_name":"numpy", "question_url":"https:\/\/stackoverflow.com\/questions\/5720272\/are-there-advantages-to-use-the-python-c-interface-instead-of-cython", "best_answers_votes":95, "question_length":1310, "response_length":2211 }, { "question":"Saving numpy array to txt file row wise I have an numpy array of form ``` a = [1,2,3] ``` which I want to save to a .txt file such that the file looks like: ``` 1 2 3 ``` If I use numpy.savetxt then I get a file like: ``` 1 2 3 ``` There should be a easy solution to this I suppose, any suggestions?", "response":"If numpy >= 1.5, you can do: # note that the filename is enclosed with double quotes, # example \"filename.txt\" ``` numpy.savetxt(\"filename\", a, newline=\" \") ``` Edit several 1D arrays with same length ``` a = numpy.array([1,2,3]) b = numpy.array([4,5,6]) numpy.savetxt(filename, (a,b), fmt=\"%d\") # gives: # 1 2 3 # 4 5 6 ``` several 1D arrays with variable length ``` a = numpy.array([1,2,3]) b = numpy.array([4,5]) with open(filename,\"w\") as f: f.write(\"\\n\".join(\" \".join(map(str, x)) for x in (a,b))) # gives: # 1 2 3 # 4 5 ```", "best_answers_score":0.8, 
"library_name":"numpy", "question_url":"https:\/\/stackoverflow.com\/questions\/9565426\/saving-numpy-array-to-txt-file-row-wise", "best_answers_votes":54, "question_length":299, "response_length":529 }, { "question":"python dict to numpy structured array I have a dictionary that I need to convert to a NumPy structured array. I'm using the arcpy function NumPyArraytoTable, so a NumPy structured array is the only data format that will work. Based on this thread: Writing to numpy array from dictionary and this thread: How to convert Python dictionary object to numpy array I've tried this: ``` result = {0: 1.1181753789488595, 1: 0.5566080288678394, 2: 0.4718269778030734, 3: 0.48716683119447185, 4: 1.0, 5: 0.1395076201641266, 6: 0.20941558441558442} names = ['id','data'] formats = ['f8','f8'] dtype = dict(names = names, formats=formats) array=numpy.array([[key,val] for (key,val) in result.iteritems()],dtype) ``` But I keep getting expected a readable buffer object The method below works, but is stupid and obviously won't work for real data. I know there is a more graceful approach, I just can't figure it out. ``` totable = numpy.array([[key,val] for (key,val) in result.iteritems()]) array=numpy.array([(totable[0,0],totable[0,1]),(totable[1,0],totable[1,1])],dtype) ```", "response":"You could use np.array(list(result.items()), dtype=dtype): ``` import numpy as np result = {0: 1.1181753789488595, 1: 0.5566080288678394, 2: 0.4718269778030734, 3: 0.48716683119447185, 4: 1.0, 5: 0.1395076201641266, 6: 0.20941558441558442} names = ['id','data'] formats = ['f8','f8'] dtype = dict(names = names, formats=formats) array = np.array(list(result.items()), dtype=dtype) print(repr(array)) ``` yields ``` array([(0.0, 1.1181753789488595), (1.0, 0.5566080288678394), (2.0, 0.4718269778030734), (3.0, 0.48716683119447185), (4.0, 1.0), (5.0, 0.1395076201641266), (6.0, 0.20941558441558442)], dtype=[('id', ' SomeType: ...do stuff... ...return... 
``` However, this results in an error when running the interpreter: ``` TypeError: Callable[[arg, ...], result]: each arg must be a type. Got . ```", "response":"Confusingly, np.array is a function useful for creating numpy arrays. It isn't the actual type of the arrays created. The type is np.ndarray. So, replace np.array with np.ndarray. That should fix the problem.", "best_answers_score":0.8, "library_name":"numpy", "question_url":"https:\/\/stackoverflow.com\/questions\/35328286\/how-to-use-numpy-in-optional-typing", "best_answers_votes":81, "question_length":634, "response_length":208 }, { "question":"why do we need np.squeeze()? Very often, arrays are squeezed with np.squeeze(). In the documentation, it says Remove single-dimensional entries from the shape of a. However I'm still wondering: Why are zero and nondimensional entries in the shape of a? Or to put it differently: Why do both a.shape = (2,1) and (2,) exist?", "response":"Besides the mathematical differences between the two things, there is the issue of predictability. If your suggestion was followed, you could at no point rely on the dimension of your array. So any expression of the form my_array[x,y] would need to be replaced by something that first checks if my_array is actually two-dimensional and did not have an implicit squeeze at some point. This would probably obfuscate code far more than the occasional squeeze, which does a clearly specified thing. Actually, it might even be very hard to tell, which axis has been removed, leading to a whole host of new problems. 
In the spirit of The Zen of Python, also Explicit is better than implicit, we can also say that we should prefer explicit squeeze to implicit array conversion.", "best_answers_score":0.8, "library_name":"numpy", "question_url":"https:\/\/stackoverflow.com\/questions\/34990652\/why-do-we-need-np-squeeze", "best_answers_votes":33, "question_length":322, "response_length":770 }, { "question":"How can I get the value stored in a 0-dimensional Numpy array? I have a numpy array like this ```py a = np.array(1) ``` How can I get the 1 value back from this array? a[0] does not work: it results in IndexError: 0-d arrays can't be indexed. Similarly, a(0) gives an error that says TypeError: 'numpy.ndarray' object is not callable.", "response":"What you create with ``` a = np.array(1) ``` is a zero-dimensional array, and these cannot be indexed. You also don't need to index it -- you can use a directly as if it were a scalar value. If you really need the value in a different type, say float, you can explicitly convert it with float(a). If you need it in the base type of the array, you can use a.item() or a[()]. Note that the zero-dimensional array is mutable. If you change the value of the single entry in the array, this will be visible via all references to the array you stored. Use a.item() if you want to store an immutable value. If you want a one-dimensional array with a single element instead, use ``` a = np.array([1]) ``` You can access the single element with a[0] now.", "best_answers_score":0.8, "library_name":"numpy", "question_url":"https:\/\/stackoverflow.com\/questions\/9814226\/how-can-i-get-the-value-stored-in-a-0-dimensional-numpy-array", "best_answers_votes":72, "question_length":334, "response_length":745 }, { "question":"Plot Normal distribution with Matplotlib [duplicate] This question already has answers here: How to plot normal distribution (10 answers) Closed 3 years ago. 
please help me to plot the normal distribution of the following data: DATA: ``` import numpy as np import matplotlib.pyplot as plt from scipy.stats import norm h = [186, 176, 158, 180, 186, 168, 168, 164, 178, 170, 189, 195, 172, 187, 180, 186, 185, 168, 179, 178, 183, 179, 170, 175, 186, 159, 161, 178, 175, 185, 175, 162, 173, 172, 177, 175, 172, 177, 180] std = np.std(h) mean = np.mean(h) plt.plot(norm.pdf(h,mean,std)) ``` output: ``` Standard Deriviation = 8.54065575872 mean = 176.076923077 ``` the plot is incorrect, what is wrong with my code?", "response":"Note: This solution is using pylab, not matplotlib.pyplot You may try using hist to put your data info along with the fitted curve as below: ``` import numpy as np import scipy.stats as stats import pylab as pl h = sorted([186, 176, 158, 180, 186, 168, 168, 164, 178, 170, 189, 195, 172, 187, 180, 186, 185, 168, 179, 178, 183, 179, 170, 175, 186, 159, 161, 178, 175, 185, 175, 162, 173, 172, 177, 175, 172, 177, 180]) #sorted fit = stats.norm.pdf(h, np.mean(h), np.std(h)) #this is a fitting indeed pl.plot(h,fit,'-o') pl.hist(h,normed=True) #use this to draw histogram of your data pl.show() # you may also need to add this ```", "best_answers_score":0.8, "library_name":"numpy", "question_url":"https:\/\/stackoverflow.com\/questions\/20011494\/plot-normal-distribution-with-matplotlib", "best_answers_votes":100, "question_length":710, "response_length":625 }, { "question":"FutureWarning: Conversion of the second argument of issubdtype from `float` to `np.floating` is deprecated After updating my Numpy and Tensorflow I am getting these kind of warnings. I had already tried these, but nothing works, every suggestion will be appreciated. ``` FutureWarning: Conversion of the second argument of issubdtype from `float` to `np.floating` is deprecated. In future, it will be treated as `np.float64 == np.dtype(float).type`.
from ._conv import register_converters as _register_converters 2018-01-19 17:11:38.695932: I C:\\tf_jenkins\\home\\workspace\\rel-win\\M\\windows\\PY\\36\\tensorflow\\core\\platform\\cpu_feature_guard.cc:137] Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX AVX2 ```", "response":"This might or might not be your case, but the same warning is also spit out from h5py package: \/home\/user\/bin\/conda3\/lib\/python3.6\/site-packages\/h5py\/__init__.py:34: FutureWarning: Conversion of the second argument of issubdtype from float to np.floating is deprecated. In future, it will be treated as np.float64 == np.dtype(float).type. from ._conv import register_converters as _register_converters For anyone coming here with this problem, it is a known h5py issue, introduced with numpy 1.14. As stated by the devs: You can ignore the warning, it's not going to cause any issues at the moment, but you should upgrade to the next release of h5py when it becomes available. ... so it's harmless. The fix has just been merged to master. But until the update is released, the workaround is to downgrade numpy to a previous version: ``` pip install numpy==1.13.0 ``` Update: h5py has released the RC build with the fix. The following command should do it: ``` pip install h5py==2.8.0rc1 ``` Update (FINAL): there's a full-fledged release now. So you can simply run: ``` pip install --upgrade h5py ```", "best_answers_score":0.8, "library_name":"numpy", "question_url":"https:\/\/stackoverflow.com\/questions\/48340392\/futurewarning-conversion-of-the-second-argument-of-issubdtype-from-float-to", "best_answers_votes":66, "question_length":743, "response_length":1100 }, { "question":"How to 'turn off' blurry effect of imshow() in matplotlib? I want to make a color plot of probabilities however imshow generates blurry values for points which have zero probability. How can I get rid of this blurry periphery around real grid points? 
Example: ``` import numpy as np import matplotlib.pyplot as plt a=np.asarray([[ 0.00000000e+00 , 1.05824446e-01 , 2.05086136e-04, 0.00000000e+00], [ 1.05824446e-01 , 3.15012305e-01 , 1.31255127e-01 , 1.05209188e-01], [ 2.05086136e-04 , 1.31255127e-01 , 0.00000000e+00 , 0.00000000e+00], [ 0.00000000e+00 ,1.05209188e-01 , 0.00000000e+00 , 0.00000000e+00]]) im=plt.imshow(a,extent=[0,4,0,4],origin='lower',alpha=1,aspect='auto') plt.show() ```", "response":"By default (which is changed mpl 2.0), imshow interpolates the data (as you would want to do for an image). All you need to do is tell it to not interpolate: ``` im = plt.imshow(..., interpolation='none') ``` 'nearest' will also work for what you want. See smoothing between pixels of imagesc\\imshow in matlab like the matplotlib imshow for examples of all of the kinds of interpolation. doc", "best_answers_score":0.8, "library_name":"numpy", "question_url":"https:\/\/stackoverflow.com\/questions\/20010882\/how-to-turn-off-blurry-effect-of-imshow-in-matplotlib", "best_answers_votes":82, "question_length":693, "response_length":391 }, { "question":"How to resample a dataframe with different functions applied to each column? I have a times series with temperature and radiation in a pandas dataframe. The time resolution is 1 minute in regular steps. 
``` import datetime import pandas as pd import numpy as np date_times = pd.date_range(datetime.datetime(2012, 4, 5, 8, 0), datetime.datetime(2012, 4, 5, 12, 0), freq='1min') tamb = np.random.sample(date_times.size) * 10.0 radiation = np.random.sample(date_times.size) * 10.0 frame = pd.DataFrame(data={'tamb': tamb, 'radiation': radiation}, index=date_times) frame DatetimeIndex: 241 entries, 2012-04-05 08:00:00 to 2012-04-05 12:00:00 Freq: T Data columns: radiation 241 non-null values tamb 241 non-null values dtypes: float64(2) ``` How can I down-sample this dataframe to a resolution of one hour, computing the hourly mean for the temperature and the hourly sum for radiation?", "response":"With pandas 0.18 the resample API changed (see the docs). So for pandas >= 0.18 the answer is: ``` In [31]: frame.resample('1H').agg({'radiation': np.sum, 'tamb': np.mean}) Out[31]: tamb radiation 2012-04-05 08:00:00 5.161235 279.507182 2012-04-05 09:00:00 4.968145 290.941073 2012-04-05 10:00:00 4.478531 317.678285 2012-04-05 11:00:00 4.706206 335.258633 2012-04-05 12:00:00 2.457873 8.655838 ``` Old Answer: I am answering my question to reflect the time series related changes in pandas >= 0.8 (all other answers are outdated). 
Using pandas >= 0.8 the answer is: ``` In [30]: frame.resample('1H', how={'radiation': np.sum, 'tamb': np.mean}) Out[30]: tamb radiation 2012-04-05 08:00:00 5.161235 279.507182 2012-04-05 09:00:00 4.968145 290.941073 2012-04-05 10:00:00 4.478531 317.678285 2012-04-05 11:00:00 4.706206 335.258633 2012-04-05 12:00:00 2.457873 8.655838 ```", "best_answers_score":0.8, "library_name":"numpy", "question_url":"https:\/\/stackoverflow.com\/questions\/10020591\/how-to-resample-a-dataframe-with-different-functions-applied-to-each-column", "best_answers_votes":76, "question_length":885, "response_length":870 }, { "question":"How to generate a random normal distribution of integers How to generate a random integer as with np.random.randint(), but with a normal distribution around 0. np.random.randint(-10, 10) returns integers with a discrete uniform distribution np.random.normal(0, 0.1, 1) returns floats with a normal distribution What I want is a kind of combination between the two functions.", "response":"One other way to get a discrete distribution that looks like the normal distribution is to draw from a multinomial distribution where the probabilities are calculated from a normal distribution. ``` import scipy.stats as ss import numpy as np import matplotlib.pyplot as plt x = np.arange(-10, 11) xU, xL = x + 0.5, x - 0.5 prob = ss.norm.cdf(xU, scale = 3) - ss.norm.cdf(xL, scale = 3) prob = prob \/ prob.sum() # normalize the probabilities so their sum is 1 nums = np.random.choice(x, size = 10000, p = prob) plt.hist(nums, bins = len(x)) ``` Here, np.random.choice picks an integer from [-10, 10]. The probability for selecting an element, say 0, is calculated by p(-0.5 < x < 0.5) where x is a normal random variable with mean zero and standard deviation 3. I chose a std. dev. of 3 because this way p(-10 < x < 10) is almost 1. 
The result looks like this:", "best_answers_score":0.8, "library_name":"numpy", "question_url":"https:\/\/stackoverflow.com\/questions\/37411633\/how-to-generate-a-random-normal-distribution-of-integers", "best_answers_votes":44, "question_length":374, "response_length":860 }, { "question":"Missing data, insert rows in Pandas and fill with NAN I'm new to Python and Pandas so there might be a simple solution which I don't see. I have a number of discontinuous datasets which look like this: ``` ind A B C 0 0.0 1 3 1 0.5 4 2 2 1.0 6 1 3 3.5 2 0 4 4.0 4 5 5 4.5 3 3 ``` I now look for a solution to get the following: ``` ind A B C 0 0.0 1 3 1 0.5 4 2 2 1.0 6 1 3 1.5 NAN NAN 4 2.0 NAN NAN 5 2.5 NAN NAN 6 3.0 NAN NAN 7 3.5 2 0 8 4.0 4 5 9 4.5 3 3 ``` The problem is,that the gap in A varies from dataset to dataset in position and length...", "response":"set_index and reset_index are your friends. ``` df = DataFrame({\"A\":[0,0.5,1.0,3.5,4.0,4.5], \"B\":[1,4,6,2,4,3], \"C\":[3,2,1,0,5,3]}) ``` First move column A to the index: ``` In [64]: df.set_index(\"A\") Out[64]: B C A 0.0 1 3 0.5 4 2 1.0 6 1 3.5 2 0 4.0 4 5 4.5 3 3 ``` Then reindex with a new index, here the missing data is filled in with nans. We use the Index object since we can name it; this will be used in the next step. ``` In [66]: new_index = Index(arange(0,5,0.5), name=\"A\") In [67]: df.set_index(\"A\").reindex(new_index) Out[67]: B C 0.0 1 3 0.5 4 2 1.0 6 1 1.5 NaN NaN 2.0 NaN NaN 2.5 NaN NaN 3.0 NaN NaN 3.5 2 0 4.0 4 5 4.5 3 3 ``` Finally move the index back to the columns with reset_index. 
Since we named the index, it all works magically: ``` In [69]: df.set_index(\"A\").reindex(new_index).reset_index() Out[69]: A B C 0 0.0 1 3 1 0.5 4 2 2 1.0 6 1 3 1.5 NaN NaN 4 2.0 NaN NaN 5 2.5 NaN NaN 6 3.0 NaN NaN 7 3.5 2 0 8 4.0 4 5 9 4.5 3 3 ```", "best_answers_score":0.8, "library_name":"numpy", "question_url":"https:\/\/stackoverflow.com\/questions\/25909984\/missing-data-insert-rows-in-pandas-and-fill-with-nan", "best_answers_votes":64, "question_length":551, "response_length":953 }, { "question":"Cython: cimport and import numpy as (both) np In the tutorial of the Cython documentation, there are cimport and import statements of numpy module: ``` import numpy as np cimport numpy as np ``` I found this convention is quite popular among numpy\/cython users. This looks strange for me because they are both named as np. In which part of the code, imported\/cimported np are used? Why cython compiler does not confuse them?", "response":"cimport my_module gives access to C functions or attributes or even sub-modules under my_module import my_module gives access to Python functions or attributes or sub-modules under my_module. In your case: ``` cimport numpy as np ``` gives you access to Numpy C API, where you can declare array buffers, variable types and so on... And: ``` import numpy as np ``` gives you access to NumPy-Python functions, such as np.array, np.linspace, etc Cython internally handles this ambiguity so that the user does not need to use different names.", "best_answers_score":0.8, "library_name":"numpy", "question_url":"https:\/\/stackoverflow.com\/questions\/20268228\/cython-cimport-and-import-numpy-as-both-np", "best_answers_votes":54, "question_length":424, "response_length":538 }, { "question":"Counting Cars OpenCV + Python Issue I have been trying to count cars when crossing the line and it works, but the problem is it counts one car many times which is ridiculous because it should only be counted once. 
Here is the code I am using: ``` import cv2 import numpy as np bgsMOG = cv2.BackgroundSubtractorMOG() cap = cv2.VideoCapture(\"traffic.avi\") counter = 0 if cap: while True: ret, frame = cap.read() if ret: fgmask = bgsMOG.apply(frame, None, 0.01) cv2.line(frame, (0,60), (160,60), (255,255,0), 1) # To find the countours of the Cars contours, hierarchy = cv2.findContours(fgmask, cv2.RETR_EXTERNAL,cv2.CHAIN_APPROX_SIMPLE) try: hierarchy = hierarchy[0] except: hierarchy = [] for contour, hier in zip(contours, hierarchy): (x, y, w, h) = cv2.boundingRect(contour) if w > 20 and h > 20: cv2.rectangle(frame, (x,y), (x+w,y+h), (255, 0, 0), 1) # To find the centroid of the car x1 = w\/2 y1 = h\/2 cx = x+x1 cy = y+y1 ## print \"cy=\", cy ## print \"cx=\", cx centroid = (cx,cy) ## print \"centoid=\", centroid # Draw the circle of Centroid cv2.circle(frame,(int(cx),int(cy)),2,(0,0,255),-1) # To make sure the Car crosses the line ## dy = cy-108 ## print \"dy\", dy if centroid > (27, 38) and centroid = 20): counter +=1 ## print \"counter=\", counter ## if cy > 10 and cy < 160: cv2.putText(frame, str(counter), (x,y-5), cv2.FONT_HERSHEY_SIMPLEX, 0.5, (255, 0, 255), 2) ## cv2.namedWindow('Output',cv2.cv.CV_WINDOW_NORMAL) cv2.imshow('Output', frame) ## cv2.imshow('FGMASK', fgmask) key = cv2.waitKey(60) if key == 27: break cap.release() cv2.destroyAllWindows() ``` And the video is on my GitHub page @ https:\/\/github.com\/Tes3awy\/MATLAB-Tutorials\/blob\/f24b680f2215c1b1bb96c76f5ba81df533552983\/traffic.avi (and it's also a built-in video in Matlab library) How can make it so that each car is only counted once? The individual frames of the video look as follows:", "response":"Preparation In order to understand what is happening, and eventually solve our problem, we first need to improve the script a little. 
I've added logging of the important steps of your algorithm, refactored the code a little, and added saving of the mask and processed images, added ability to run the script using the individual frame images, along with some other modifications. This is what the script looks like at this point: ``` import logging import logging.handlers import os import time import sys import cv2 import numpy as np from vehicle_counter import VehicleCounter # ============================================================================ IMAGE_DIR = \"images\" IMAGE_FILENAME_FORMAT = IMAGE_DIR + \"\/frame_%04d.png\" # Support either video file or individual frames CAPTURE_FROM_VIDEO = False if CAPTURE_FROM_VIDEO: IMAGE_SOURCE = \"traffic.avi\" # Video file else: IMAGE_SOURCE = IMAGE_FILENAME_FORMAT # Image sequence # Time to wait between frames, 0=forever WAIT_TIME = 1 # 250 # ms LOG_TO_FILE = True # Colours for drawing on processed frames DIVIDER_COLOUR = (255, 255, 0) BOUNDING_BOX_COLOUR = (255, 0, 0) CENTROID_COLOUR = (0, 0, 255) # ============================================================================ def init_logging(): main_logger = logging.getLogger() formatter = logging.Formatter( fmt='%(asctime)s.%(msecs)03d %(levelname)-8s [%(name)s] %(message)s' , datefmt='%Y-%m-%d %H:%M:%S') handler_stream = logging.StreamHandler(sys.stdout) handler_stream.setFormatter(formatter) main_logger.addHandler(handler_stream) if LOG_TO_FILE: handler_file = logging.handlers.RotatingFileHandler(\"debug.log\" , maxBytes = 2**24 , backupCount = 10) handler_file.setFormatter(formatter) main_logger.addHandler(handler_file) main_logger.setLevel(logging.DEBUG) return main_logger # ============================================================================ def save_frame(file_name_format, frame_number, frame, label_format): file_name = file_name_format % frame_number label = label_format % frame_number log.debug(\"Saving %s as '%s'\", label, file_name) cv2.imwrite(file_name, frame) # 
============================================================================ def get_centroid(x, y, w, h): x1 = int(w \/ 2) y1 = int(h \/ 2) cx = x + x1 cy = y + y1 return (cx, cy) # ============================================================================ def detect_vehicles(fg_mask): log = logging.getLogger(\"detect_vehicles\") MIN_CONTOUR_WIDTH = 21 MIN_CONTOUR_HEIGHT = 21 # Find the contours of any vehicles in the image contours, hierarchy = cv2.findContours(fg_mask , cv2.RETR_EXTERNAL , cv2.CHAIN_APPROX_SIMPLE) log.debug(\"Found %d vehicle contours.\", len(contours)) matches = [] for (i, contour) in enumerate(contours): (x, y, w, h) = cv2.boundingRect(contour) contour_valid = (w >= MIN_CONTOUR_WIDTH) and (h >= MIN_CONTOUR_HEIGHT) log.debug(\"Contour #%d: pos=(x=%d, y=%d) size=(w=%d, h=%d) valid=%s\" , i, x, y, w, h, contour_valid) if not contour_valid: continue centroid = get_centroid(x, y, w, h) matches.append(((x, y, w, h), centroid)) return matches # ============================================================================ def filter_mask(fg_mask): kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (3, 3)) # Fill any small holes closing = cv2.morphologyEx(fg_mask, cv2.MORPH_CLOSE, kernel) # Remove noise opening = cv2.morphologyEx(closing, cv2.MORPH_OPEN, kernel) # Dilate to merge adjacent blobs dilation = cv2.dilate(opening, kernel, iterations = 2) return dilation # ============================================================================ def process_frame(frame_number, frame, bg_subtractor, car_counter): log = logging.getLogger(\"process_frame\") # Create a copy of source frame to draw into processed = frame.copy() # Draw dividing line -- we count cars as they cross this line. 
cv2.line(processed, (0, car_counter.divider), (frame.shape[1], car_counter.divider), DIVIDER_COLOUR, 1) # Remove the background fg_mask = bg_subtractor.apply(frame, None, 0.01) fg_mask = filter_mask(fg_mask) save_frame(IMAGE_DIR + \"\/mask_%04d.png\" , frame_number, fg_mask, \"foreground mask for frame #%d\") matches = detect_vehicles(fg_mask) log.debug(\"Found %d valid vehicle contours.\", len(matches)) for (i, match) in enumerate(matches): contour, centroid = match log.debug(\"Valid vehicle contour #%d: centroid=%s, bounding_box=%s\", i, centroid, contour) x, y, w, h = contour # Mark the bounding box and the centroid on the processed frame # NB: Fixed the off-by one in the bottom right corner cv2.rectangle(processed, (x, y), (x + w - 1, y + h - 1), BOUNDING_BOX_COLOUR, 1) cv2.circle(processed, centroid, 2, CENTROID_COLOUR, -1) log.debug(\"Updating vehicle count...\") car_counter.update_count(matches, processed) return processed # ============================================================================ def main(): log = logging.getLogger(\"main\") log.debug(\"Creating background subtractor...\") bg_subtractor = cv2.BackgroundSubtractorMOG() log.debug(\"Pre-training the background subtractor...\") default_bg = cv2.imread(IMAGE_FILENAME_FORMAT % 119) bg_subtractor.apply(default_bg, None, 1.0) car_counter = None # Will be created after first frame is captured # Set up image source log.debug(\"Initializing video capture device #%s...\", IMAGE_SOURCE) cap = cv2.VideoCapture(IMAGE_SOURCE) frame_width = cap.get(cv2.cv.CV_CAP_PROP_FRAME_WIDTH) frame_height = cap.get(cv2.cv.CV_CAP_PROP_FRAME_HEIGHT) log.debug(\"Video capture frame size=(w=%d, h=%d)\", frame_width, frame_height) log.debug(\"Starting capture loop...\") frame_number = -1 while True: frame_number += 1 log.debug(\"Capturing frame #%d...\", frame_number) ret, frame = cap.read() if not ret: log.error(\"Frame capture failed, stopping...\") break log.debug(\"Got frame #%d: shape=%s\", frame_number, frame.shape) 
if car_counter is None: # We do this here, so that we can initialize with actual frame size log.debug(\"Creating vehicle counter...\") car_counter = VehicleCounter(frame.shape[:2], frame.shape[0] \/ 2) # Archive raw frames from video to disk for later inspection\/testing if CAPTURE_FROM_VIDEO: save_frame(IMAGE_FILENAME_FORMAT , frame_number, frame, \"source frame #%d\") log.debug(\"Processing frame #%d...\", frame_number) processed = process_frame(frame_number, frame, bg_subtractor, car_counter) save_frame(IMAGE_DIR + \"\/processed_%04d.png\" , frame_number, processed, \"processed frame #%d\") cv2.imshow('Source Image', frame) cv2.imshow('Processed Image', processed) log.debug(\"Frame #%d processed.\", frame_number) c = cv2.waitKey(WAIT_TIME) if c == 27: log.debug(\"ESC detected, stopping...\") break log.debug(\"Closing video capture device...\") cap.release() cv2.destroyAllWindows() log.debug(\"Done.\") # ============================================================================ if __name__ == \"__main__\": log = init_logging() if not os.path.exists(IMAGE_DIR): log.debug(\"Creating image directory `%s`...\", IMAGE_DIR) os.makedirs(IMAGE_DIR) main() ``` This script is responsible for processing of the stream of images, and identifying all the vehicles in each frame -- I refer to them as matches in the code. The task of counting the detected vehicles is delegated to class VehicleCounter. The reason why I chose to make this a class will become evident as we progress. I did not implement your vehicle counting algorithm, because it will not work for reasons that will again become evident as we dig into this deeper. 
File vehicle_counter.py contains the following code: ``` import logging # ============================================================================ class VehicleCounter(object): def __init__(self, shape, divider): self.log = logging.getLogger(\"vehicle_counter\") self.height, self.width = shape self.divider = divider self.vehicle_count = 0 def update_count(self, matches, output_image = None): self.log.debug(\"Updating count using %d matches...\", len(matches)) # ============================================================================ ``` Finally, I wrote a script that will stitch all the generated images together, so it's easier to inspect them: ``` import cv2 import numpy as np # ============================================================================ INPUT_WIDTH = 160 INPUT_HEIGHT = 120 OUTPUT_TILE_WIDTH = 10 OUTPUT_TILE_HEIGHT = 12 TILE_COUNT = OUTPUT_TILE_WIDTH * OUTPUT_TILE_HEIGHT # ============================================================================ def stitch_images(input_format, output_filename): output_shape = (INPUT_HEIGHT * OUTPUT_TILE_HEIGHT , INPUT_WIDTH * OUTPUT_TILE_WIDTH , 3) output = np.zeros(output_shape, np.uint8) for i in range(TILE_COUNT): img = cv2.imread(input_format % i) cv2.rectangle(img, (0, 0), (INPUT_WIDTH - 1, INPUT_HEIGHT - 1), (0, 0, 255), 1) # Draw the frame number cv2.putText(img, str(i), (2, 10) , cv2.FONT_HERSHEY_PLAIN, 0.7, (255, 255, 255), 1) x = i % OUTPUT_TILE_WIDTH * INPUT_WIDTH y = i \/ OUTPUT_TILE_WIDTH * INPUT_HEIGHT output[y:y+INPUT_HEIGHT, x:x+INPUT_WIDTH,:] = img cv2.imwrite(output_filename, output) # ============================================================================ stitch_images(\"images\/frame_%04d.png\", \"stitched_frames.png\") stitch_images(\"images\/mask_%04d.png\", \"stitched_masks.png\") stitch_images(\"images\/processed_%04d.png\", \"stitched_processed.png\") ``` Analysis In order to solve this problem, we should have some idea about what results we expect to get. 
We should also label all the distinct cars in the video, so it's easier to talk about them. If we run our script, and stitch the images together, we get a number of useful files to help us analyze the problem: Image containing a mosaic of input frames Image containing a mosaic of foreground masks Image containing a mosaic of processed frames The debug log for the run. Upon inspecting those, a number of issues become evident: The foreground masks tend to be noisy. We should do some filtering (erode\/dilate?) to get rid of the noise and narrow gaps. Sometimes we miss vehicles (grey ones). Some vehicles get detected twice in a single frame. Vehicles are rarely detected in the upper regions of the frame. The same vehicle is often detected in consecutive frames. We need to figure out a way of tracking the same vehicle in consecutive frames, and counting it only once. Solution 1. Pre-Seeding the Background Subtractor Our video is quite short, only 120 frames. With a learning rate of 0.01, it will take a substantial part of the video for the background detector to stabilize. Fortunately, the last frame of the video (frame number 119) is completely devoid of vehicles, and therefore we can use it as our initial background image. (Other options for obtaining a suitable image are mentioned in notes and comments.) To use this initial background image, we simply load it, and apply it on the background subtractor with a learning factor of 1.0: ``` bg_subtractor = cv2.BackgroundSubtractorMOG() default_bg = cv2.imread(IMAGE_FILENAME_FORMAT % 119) bg_subtractor.apply(default_bg, None, 1.0) ``` When we look at the new mosaic of masks, we can see that we get less noise and the vehicle detection works better in the early frames. 2. Cleaning Up the Foreground Mask A simple approach to improve our foreground mask is to apply a few morphological transformations. 
``` def filter_mask(fg_mask): kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (3, 3)) # Fill any small holes closing = cv2.morphologyEx(fg_mask, cv2.MORPH_CLOSE, kernel) # Remove noise opening = cv2.morphologyEx(closing, cv2.MORPH_OPEN, kernel) # Dilate to merge adjacent blobs dilation = cv2.dilate(opening, kernel, iterations = 2) return dilation ``` Inspecting the masks, processed frames and the log file generated with filtering, we can see that we now detect vehicles more reliably, and have mitigated the issue of different parts of one vehicle being detected as separate objects. 3. Tracking Vehicles Between Frames At this point, we need to go through our log file, and collect all the centroid coordinates for each vehicle. This will allow us to plot and inspect the path each vehicle traces across the image, and develop an algorithm to do this automatically. To make this process easier, we can create a reduced log by grepping out the relevant entries. The lists of centroid coordinates: ``` traces = { 'A': [(112, 36), (112, 45), (112, 52), (112, 54), (112, 63), (111, 73), (111, 86), (111, 91), (111, 97), (110, 105)] , 'B': [(119, 37), (120, 42), (121, 54), (121, 55), (123, 64), (124, 74), (125, 87), (127, 94), (125, 100), (126, 108)] , 'C': [(93, 23), (91, 27), (89, 31), (87, 36), (85, 42), (82, 49), (79, 59), (74, 71), (70, 82), (62, 86), (61, 92), (55, 101)] , 'D': [(118, 30), (124, 83), (125, 90), (116, 101), (122, 100)] , 'E': [(77, 27), (75, 30), (73, 33), (70, 37), (67, 42), (63, 47), (59, 53), (55, 59), (49, 67), (43, 75), (36, 85), (27, 92), (24, 97), (20, 102)] , 'F': [(119, 30), (120, 34), (120, 39), (122, 59), (123, 60), (124, 70), (125, 82), (127, 91), (126, 97), (128, 104)] , 'G': [(88, 37), (87, 41), (85, 48), (82, 55), (79, 63), (76, 74), (72, 87), (67, 92), (65, 98), (60, 106)] , 'H': [(124, 35), (123, 40), (125, 45), (127, 59), (126, 59), (128, 67), (130, 78), (132, 88), (134, 93), (135, 99), (135, 107)] , 'I': [(98, 26), (97, 30), (96, 34), 
(94, 40), (92, 47), (90, 55), (87, 64), (84, 77), (79, 87), (74, 93), (73, 102)] , 'J': [(123, 60), (125, 63), (125, 81), (127, 93), (126, 98), (125, 100)] } ``` Individual vehicle traces plotted on the background: Combined enlarged image of all the vehicle traces: Vectors In order to analyze the movement, we need to work with vectors (i.e. the distance and direction moved). The following diagram shows how the angles correspond to movement of vehicles in the image. We can use the following function to calculate the vector between two points: ``` def get_vector(a, b): \"\"\"Calculate vector (distance, angle in degrees) from point a to point b. Angle ranges from -180 to 180 degrees. Vector with angle 0 points straight down on the image. Values increase in clockwise direction. \"\"\" dx = float(b[0] - a[0]) dy = float(b[1] - a[1]) distance = math.sqrt(dx**2 + dy**2) if dy > 0: angle = math.degrees(math.atan(-dx\/dy)) elif dy == 0: if dx < 0: angle = 90.0 elif dx > 0: angle = -90.0 else: angle = 0.0 else: if dx < 0: angle = 180 - math.degrees(math.atan(dx\/dy)) elif dx > 0: angle = -180 - math.degrees(math.atan(dx\/dy)) else: angle = 180.0 return distance, angle ``` Categorization One way we can look for patterns that could be used to categorize the movements as valid\/invalid is to make a scatter plot (angle vs. distance): Green points represent valid movement that we determined using the lists of points for each vehicle. Red points represent invalid movement - vectors between points in adjacent traffic lanes. I plotted two blue curves, which we can use to separate the two types of movements. Any point that lies below either curve can be considered as valid. 
The curves are: distance = -0.008 * angle**2 + 0.4 * angle + 25.0 distance = 10.0 We can use the following function to categorize the movement vectors: ``` def is_valid_vector(a): distance, angle = a threshold_distance = max(10.0, -0.008 * angle**2 + 0.4 * angle + 25.0) return (distance <= threshold_distance) if dy > 0: angle = math.degrees(math.atan(-dx\/dy)) elif dy == 0: if dx < 0: angle = 90.0 elif dx > 0: angle = -90.0 else: angle = 0.0 else: if dx < 0: angle = 180 - math.degrees(math.atan(dx\/dy)) elif dx > 0: angle = -180 - math.degrees(math.atan(dx\/dy)) else: angle = 180.0 return distance, angle @staticmethod def is_valid_vector(a): distance, angle = a threshold_distance = max(10.0, -0.008 * angle**2 + 0.4 * angle + 25.0) return (distance <= threshold_distance) if not vehicle.counted and (vehicle.last_position[1] > self.divider): self.vehicle_count += 1 vehicle.counted = True self.log.debug(\"Counted vehicle #%d (total count=%d).\" , vehicle.id, self.vehicle_count) # Optionally draw the vehicles on an image if output_image is not None: for vehicle in self.vehicles: vehicle.draw(output_image) cv2.putText(output_image, (\"%02d\" % self.vehicle_count), (142, 10) , cv2.FONT_HERSHEY_PLAIN, 0.7, (127, 255, 255), 1) # Remove vehicles that have not been seen long enough removed = [ v.id for v in self.vehicles if v.frames_since_seen >= self.max_unseen_frames ] self.vehicles[:] = [ v for v in self.vehicles if not v.frames_since_seen >= self.max_unseen_frames ] for id in removed: self.log.debug(\"Removed vehicle #%d.\", id) self.log.debug(\"Count updated, tracking %d vehicles.\", len(self.vehicles)) # ============================================================================ ``` The program now draws the historical paths of all currently tracked vehicles into the output image, along with the vehicle count. Each vehicle is assigned 1 of 10 colours. Notice that vehicle D ends up being tracked twice; however, it is counted only once, since we lose track of it before crossing the divider. Ideas on how to resolve this are mentioned in the appendix. Based on the last processed frame generated by the script, the total vehicle count is 10. This is a correct result. 
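As a quick sanity check, the two threshold curves above can be exercised directly. This is a minimal self-contained sketch; only the curve coefficients come from the scatter-plot fit described above, while the sample vectors are made up for illustration: ``` def is_valid_vector(vector): # Accept a (distance, angle) movement vector when it falls below # both threshold curves described above. distance, angle = vector threshold_distance = max(10.0, -0.008 * angle**2 + 0.4 * angle + 25.0) return distance <= threshold_distance # A moderate movement straight down the frame (angle 0) is accepted... print(is_valid_vector((9.0, 0.0))) # True # ...while a long sideways jump (as between adjacent lanes) is rejected. print(is_valid_vector((60.0, 90.0))) # False ``` At angle 0 the parabola gives a threshold of 25.0 pixels, while at 90 degrees it drops below the 10.0-pixel floor, so long lane-to-lane jumps are filtered out. 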
More details can be found in the output the script generated: Full debug log Filtered out vehicle counter log A mosaic of the processed frames: A. Potential Improvements Refactor, add unit tests. Improve filtering\/preprocessing of the foreground mask Multiple iterations of filtering, fill holes using cv2.drawContours with CV_FILLED? Watershed Algorithm? Improve categorization of movement vectors Create a predictor to estimate initial movement angle when vehicles are created (and only one position is known)... in order to be able to Use change in direction rather than direction alone (I think this would cluster the angles of valid motion vectors close to zero). Improve vehicle tracking Predict position for frames where vehicle is not seen. B. Notes It seems it's not possible to directly extract the current background image from BackgroundSubtractorMOG in Python (at least in OpenCV 2.4.x), but there is a way to do it with a little work. As suggested by Henrik, we can obtain a good estimate of the background using median blending.", "best_answers_score":0.8, "library_name":"numpy", "question_url":"https:\/\/stackoverflow.com\/questions\/36254452\/counting-cars-opencv-python-issue", "best_answers_votes":131, "question_length":1862, "response_length":17859 }, { "question":"Overlay an image segmentation with numpy and matplotlib I am trying to overlay two images. The first one is a 512x512 NumPy array (from a CT image). The second one is also a 512x512 NumPy array but I am just interested in the pixels where the value is larger than 0 (a functional image). To do that I am trying to create a masked array. ``` import numpy as np import numpy.ma as ma import matplotlib.pyplot as plt # Both images are loaded from a dicom. 
Both are numpy arrays of (512,512) Image1 = readimage(path) Image2 = readimage(path) # Create image 2 mask mask = ma.masked_where(Image2>0, Image2) Image2_mask = ma.masked_array(Image2,mask) # Plot images plt.figure(dpi=300) y, x = np.mgrid[1:513,1:513] plt.axes().set_aspect('equal', 'datalim') plt.set_cmap(plt.gray()) plt.pcolormesh(x, y, Image1,cmap='gray') plt.pcolormesh(x, y, Image2_mask,cmap='jet') plt.axis([x.min(), x.max(), y.min(), y.max()]) plt.colorbar() plt.show() ``` This code does not show any overlay. What am I doing wrong? Is there a more straightforward way? I am coming from a Matlab environment and I am quite new to Python.", "response":"Why don't you use imshow instead? You can plot a 2D image by doing: ``` plt.imshow(Image1, cmap='gray') # I would add interpolation='none' ``` Afterwards, you can easily overlay the segmentation by doing: ``` plt.imshow(Image2_mask, cmap='jet', alpha=0.5) # interpolation='none' ``` Changing the alpha will change the opacity of the overlay. Additionally, why do you create 2 masks? Only one should be enough, you can do: ``` Image2_mask = ma.masked_array(Image2 > 0, Image2) ``` Practical example: ``` import numpy as np mask = np.zeros((10,10)) mask[3:-3, 3:-3] = 1 # white square in black background im = mask + np.random.randn(10,10) * 0.01 # random image masked = np.ma.masked_where(mask == 0, mask) import matplotlib.pyplot as plt plt.figure() plt.subplot(1,2,1) plt.imshow(im, 'gray', interpolation='none') plt.subplot(1,2,2) plt.imshow(im, 'gray', interpolation='none') plt.imshow(masked, 'jet', interpolation='none', alpha=0.7) plt.show() ```", "best_answers_score":0.8, "library_name":"numpy", "question_url":"https:\/\/stackoverflow.com\/questions\/31877353\/overlay-an-image-segmentation-with-numpy-and-matplotlib", "best_answers_votes":99, "question_length":1091, "response_length":950 }, { "question":"How do I check if a numpy dtype is integral? How do I check if a numpy dtype is integral? 
I tried: ``` issubclass(np.int64, numbers.Integral) ``` but it gives False. Update: it now gives True.", "response":"Numpy has a hierarchy of dtypes similar to a class hierarchy (the scalar types actually have a bona fide class hierarchy that mirrors the dtype hierarchy). In Numpy>2.0, you should use np.isdtype: np.isdtype(np.void, 'integral'). Otherwise, you can use np.issubdtype(some_dtype, np.integer) to test if a dtype is an integer dtype. Note that like most dtype-consuming functions, np.issubdtype() will convert its arguments to dtypes, so anything that can make a dtype via the np.dtype() constructor can be used. http:\/\/docs.scipy.org\/doc\/numpy\/reference\/arrays.dtypes.html#specifying-and-constructing-data-types ``` >>> import numpy as np >>> np.issubdtype(np.int32, np.integer) True >>> np.issubdtype(np.float32, np.integer) False >>> np.issubdtype(np.complex64, np.integer) False >>> np.issubdtype(np.uint8, np.integer) True >>> np.issubdtype(np.bool, np.integer) False >>> np.issubdtype(np.void, np.integer) False ``` Alternatively, the scalar types are now registered with the appropriate numbers ABCs.", "best_answers_score":0.8, "library_name":"numpy", "question_url":"https:\/\/stackoverflow.com\/questions\/22471644\/how-do-i-check-if-a-numpy-dtype-is-integral", "best_answers_votes":78, "question_length":192, "response_length":1004 }, { "question":"ImportError: cannot import name NUMPY_MKL I am trying to run the following simple code ``` import scipy scipy.test() ``` But I am getting the following error ``` Traceback (most recent call last): File \"\", line 1, in File \"C:\\Python27\\lib\\site-packages\\spyderlib\\widgets\\externalshell\\sitecustomize.py\", line 586, in runfile execfile(filename, namespace) File \"C:\/Users\/Mustafa\/Documents\/My Python Code\/SpectralGraphAnalysis\/main.py\", line 8, in import scipy File \"C:\\Python27\\lib\\site-packages\\scipy\\__init__.py\", line 61, in from numpy._distributor_init import NUMPY_MKL # requires numpy+mkl 
ImportError: cannot import name NUMPY_MKL ``` I am using Python 2.7 under Windows 10. I have installed scipy but that does not seem to solve the problem. Any help is appreciated.", "response":"If you look at the line which is causing the error, you'll see this: ``` from numpy._distributor_init import NUMPY_MKL # requires numpy+mkl ``` This line comment states the dependency as numpy+mkl (numpy with Intel Math Kernel Library). This means that you've installed numpy via pip, but scipy was installed from a precompiled archive, which expects numpy+mkl. This problem can be easily solved by installing numpy+mkl from the whl file from here.", "best_answers_score":0.7974, "library_name":"numpy", "question_url":"https:\/\/stackoverflow.com\/questions\/37267399\/importerror-cannot-import-name-numpy-mkl", "best_answers_votes":220, "question_length":774, "response_length":451 }, { "question":"Calculate the Cumulative Distribution Function (CDF) in Python How can I calculate in Python the Cumulative Distribution Function (CDF)? I want to calculate it from an array of points I have (discrete distribution), not with the continuous distributions that, for example, scipy has.", "response":"(It is possible that my interpretation of the question is wrong. If the question is how to get from a discrete PDF into a discrete CDF, then np.cumsum divided by a suitable constant will do if the samples are equispaced. If the array is not equispaced, then np.cumsum of the array multiplied by the distances between the points will do.) If you have a discrete array of samples, and you would like to know the CDF of the sample, then you can just sort the array. If you look at the sorted result, you'll realize that the smallest value represents 0%, and the largest value represents 100%. If you want to know the value at 50% of the distribution, just look at the array element which is in the middle of the sorted array. 
Let us have a closer look at this with a simple example: ``` import matplotlib.pyplot as plt import numpy as np # create some randomly distributed data: data = np.random.randn(10000) # sort the data: data_sorted = np.sort(data) # calculate the proportional values of samples p = 1. * np.arange(len(data)) \/ (len(data) - 1) # plot the sorted data: fig = plt.figure() ax1 = fig.add_subplot(121) ax1.plot(p, data_sorted) ax1.set_xlabel('$p$') ax1.set_ylabel('$x$') ax2 = fig.add_subplot(122) ax2.plot(data_sorted, p) ax2.set_xlabel('$x$') ax2.set_ylabel('$p$') ``` This gives the following plot, where the right-hand-side plot is the traditional cumulative distribution function. It should reflect the CDF of the process behind the points, but naturally, it is not, as long as the number of points is finite. This function is easy to invert, and it depends on your application which form you need.", "best_answers_score":0.7974, "library_name":"numpy", "question_url":"https:\/\/stackoverflow.com\/questions\/24788200\/calculate-the-cumulative-distribution-function-cdf-in-python", "best_answers_votes":60, "question_length":283, "response_length":1615 }, { "question":"Pandas - Replace values based on index If I create a dataframe like so: ``` import pandas as pd, numpy as np df = pd.DataFrame(np.random.randint(0,100,size=(100, 2)), columns=list('AB')) ``` How would I change the entry in column A to be the number 16 from rows 0-15, for example? 
In other words, how do I replace cells based purely on index?", "response":"Use loc: ``` df.loc[0:15,'A'] = 16 print (df) A B 0 16 45 1 16 5 2 16 97 3 16 58 4 16 26 5 16 87 6 16 51 7 16 17 8 16 39 9 16 73 10 16 94 11 16 69 12 16 57 13 16 24 14 16 43 15 16 77 16 41 0 17 3 21 18 0 98 19 45 39 20 66 62 21 8 53 22 69 47 23 48 53 ``` The solution with ix is deprecated.", "best_answers_score":0.7973, "library_name":"numpy", "question_url":"https:\/\/stackoverflow.com\/questions\/37725195\/pandas-replace-values-based-on-index", "best_answers_votes":90, "question_length":342, "response_length":286 }, { "question":"Share Large, Read-Only Numpy Array Between Multiprocessing Processes I have a 60GB SciPy Array (Matrix) I must share between 5+ multiprocessing Process objects. I've seen numpy-sharedmem and read this discussion on the SciPy list. There seem to be two approaches--numpy-sharedmem and using a multiprocessing.RawArray() and mapping NumPy dtypes to ctypes. Now, numpy-sharedmem seems to be the way to go, but I've yet to see a good reference example. I don't need any kind of locks, since the array (actually a matrix) will be read-only. Now, due to its size, I'd like to avoid a copy. It sounds like the correct method is to create the only copy of the array as a sharedmem array, and then pass it to the Process objects? A couple of specific questions: What's the best way to actually pass the sharedmem handles to sub-Process()es? Do I need a queue just to pass one array around? Would a pipe be better? Can I just pass it as an argument to the Process() subclass's init (where I'm assuming it's pickled)? In the discussion I linked above, there's mention of numpy-sharedmem not being 64bit-safe? I'm definitely using some structures that aren't 32-bit addressable. Are there tradeoffs to the RawArray() approach? Slower, buggier? Do I need any ctype-to-dtype mapping for the numpy-sharedmem method? Does anyone have an example of some OpenSource code doing this? 
I'm a very hands-on learner and it's hard to get this working without any kind of good example to look at. If there's any additional info I can provide to help clarify this for others, please comment and I'll add. Thanks! This needs to run on Ubuntu Linux and maybe Mac OS, but portability isn't a huge concern.", "response":"If you are on Linux (or any POSIX-compliant system), you can define this array as a global variable. multiprocessing is using fork() on Linux when it starts a new child process. A newly spawned child process automatically shares the memory with its parent as long as it does not change it (copy-on-write mechanism). Since you are saying \"I don't need any kind of locks, since the array (actually a matrix) will be read-only\", taking advantage of this behavior would be a very simple and yet extremely efficient approach: all child processes will access the same data in physical memory when reading this large numpy array. Don't hand your array to the Process() constructor; this will instruct multiprocessing to pickle the data to the child, which would be extremely inefficient or impossible in your case. On Linux, right after fork() the child is an exact copy of the parent using the same physical memory, so all you need to do is make sure that the Python variable 'containing' the matrix is accessible from within the target function that you hand over to Process(). This you can typically achieve with a 'global' variable. Example code: ``` from multiprocessing import Process from numpy import random global_array = random.random(10**4) def child(): print sum(global_array) def main(): processes = [Process(target=child) for _ in xrange(10)] for p in processes: p.start() for p in processes: p.join() if __name__ == \"__main__\": main() ``` On Windows -- which does not support fork() -- multiprocessing is using the win32 API call CreateProcess. It creates an entirely new process from any given executable. 
That's why on Windows one is required to pickle data to the child if one needs data that has been created during runtime of the parent.", "best_answers_score":0.7964, "library_name":"numpy", "question_url":"https:\/\/stackoverflow.com\/questions\/17785275\/share-large-read-only-numpy-array-between-multiprocessing-processes", "best_answers_votes":47, "question_length":1677, "response_length":1751 }, { "question":"Better way to shuffle two numpy arrays in unison I have two numpy arrays of different shapes, but with the same length (leading dimension). I want to shuffle each of them, such that corresponding elements continue to correspond -- i.e. shuffle them in unison with respect to their leading indices. This code works, and illustrates my goals: ``` def shuffle_in_unison(a, b): assert len(a) == len(b) shuffled_a = numpy.empty(a.shape, dtype=a.dtype) shuffled_b = numpy.empty(b.shape, dtype=b.dtype) permutation = numpy.random.permutation(len(a)) for old_index, new_index in enumerate(permutation): shuffled_a[new_index] = a[old_index] shuffled_b[new_index] = b[old_index] return shuffled_a, shuffled_b ``` For example: ``` >>> a = numpy.asarray([[1, 1], [2, 2], [3, 3]]) >>> b = numpy.asarray([1, 2, 3]) >>> shuffle_in_unison(a, b) (array([[2, 2], [1, 1], [3, 3]]), array([2, 1, 3])) ``` However, this feels clunky, inefficient, and slow, and it requires making a copy of the arrays -- I'd rather shuffle them in-place, since they'll be quite large. Is there a better way to go about this? Faster execution and lower memory usage are my primary goals, but elegant code would be nice, too. 
One other thought I had was this: ``` def shuffle_in_unison_scary(a, b): rng_state = numpy.random.get_state() numpy.random.shuffle(a) numpy.random.set_state(rng_state) numpy.random.shuffle(b) ``` This works...but it's a little scary, as I see little guarantee it'll continue to work -- it doesn't look like the sort of thing that's guaranteed to survive across numpy versions, for example.", "response":"You can use NumPy's array indexing: ``` def unison_shuffled_copies(a, b): assert len(a) == len(b) p = numpy.random.permutation(len(a)) return a[p], b[p] ``` This will result in the creation of separate unison-shuffled arrays.", "best_answers_score":0.7935, "library_name":"numpy", "question_url":"https:\/\/stackoverflow.com\/questions\/4601373\/better-way-to-shuffle-two-numpy-arrays-in-unison", "best_answers_votes":485, "question_length":1574, "response_length":222 }, { "question":"Is there a difference between scipy.pi, numpy.pi, or math.pi? In a project using SciPy and NumPy, when should one use scipy.pi vs numpy.pi vs just math.pi? Is there a difference between these values?", "response":"``` >>> import math >>> import numpy as np >>> import scipy >>> math.pi == np.pi == scipy.pi True ``` So it doesn't matter, they are all the same value. The only reason all three modules provide a pi value is so that if you are using just one of the three modules, you can conveniently have access to pi without having to import another module. They're not providing different values for pi.", "best_answers_score":0.7935, "library_name":"numpy", "question_url":"https:\/\/stackoverflow.com\/questions\/12645547\/is-there-a-difference-between-scipy-pi-numpy-pi-or-math-pi", "best_answers_votes":257, "question_length":199, "response_length":386 }, { "question":"How do I fit a sine curve to my data with pylab and numpy? I am trying to show that economies follow a relatively sinusoidal growth pattern. 
I am building a python simulation to show that even when we let some degree of randomness take hold, we can still produce something relatively sinusoidal. I am happy with the data I'm producing, but now I'd like to find some way to get a sine graph that pretty closely matches the data. I know you can do polynomial fit, but can you do sine fit?", "response":"Here is a parameter-free fitting function fit_sin() that does not require manual guess of frequency: ``` import numpy, scipy.optimize def fit_sin(tt, yy): '''Fit sin to the input time sequence, and return fitting parameters \"amp\", \"omega\", \"phase\", \"offset\", \"freq\", \"period\" and \"fitfunc\"''' tt = numpy.array(tt) yy = numpy.array(yy) ff = numpy.fft.fftfreq(len(tt), (tt[1]-tt[0])) # assume uniform spacing Fyy = abs(numpy.fft.fft(yy)) guess_freq = abs(ff[numpy.argmax(Fyy[1:])+1]) # excluding the zero frequency \"peak\", which is related to offset guess_amp = numpy.std(yy) * 2.**0.5 guess_offset = numpy.mean(yy) guess = numpy.array([guess_amp, 2.*numpy.pi*guess_freq, 0., guess_offset]) def sinfunc(t, A, w, p, c): return A * numpy.sin(w*t + p) + c popt, pcov = scipy.optimize.curve_fit(sinfunc, tt, yy, p0=guess) A, w, p, c = popt f = w\/(2.*numpy.pi) fitfunc = lambda t: A * numpy.sin(w*t + p) + c return {\"amp\": A, \"omega\": w, \"phase\": p, \"offset\": c, \"freq\": f, \"period\": 1.\/f, \"fitfunc\": fitfunc, \"maxcov\": numpy.max(pcov), \"rawres\": (guess,popt,pcov)} ``` The initial frequency guess is given by the peak frequency in the frequency domain using FFT. The fitting result is almost perfect assuming there is only one dominant frequency (other than the zero frequency peak). 
``` import pylab as plt N, amp, omega, phase, offset, noise = 500, 1., 2., .5, 4., 3 #N, amp, omega, phase, offset, noise = 50, 1., .4, .5, 4., .2 #N, amp, omega, phase, offset, noise = 200, 1., 20, .5, 4., 1 tt = numpy.linspace(0, 10, N) tt2 = numpy.linspace(0, 10, 10*N) yy = amp*numpy.sin(omega*tt + phase) + offset yynoise = yy + noise*(numpy.random.random(len(tt))-0.5) res = fit_sin(tt, yynoise) print( \"Amplitude=%(amp)s, Angular freq.=%(omega)s, phase=%(phase)s, offset=%(offset)s, Max. Cov.=%(maxcov)s\" % res ) plt.plot(tt, yy, \"-k\", label=\"y\", linewidth=2) plt.plot(tt, yynoise, \"ok\", label=\"y with noise\") plt.plot(tt2, res[\"fitfunc\"](tt2), \"r-\", label=\"y fit curve\", linewidth=2) plt.legend(loc=\"best\") plt.show() ``` The result is good even with high noise: Amplitude=1.00660540618, Angular freq.=2.03370472482, phase=0.360276844224, offset=3.95747467506, Max. Cov.=0.0122923578658", "best_answers_score":0.7927, "library_name":"numpy", "question_url":"https:\/\/stackoverflow.com\/questions\/16716302\/how-do-i-fit-a-sine-curve-to-my-data-with-pylab-and-numpy", "best_answers_votes":111, "question_length":486, "response_length":2172 }, { "question":"Installing SciPy and NumPy using pip I'm trying to create required libraries in a package I'm distributing. It requires both the SciPy and NumPy libraries. While developing, I installed both using ``` apt-get install scipy ``` which installed SciPy 0.9.0 and NumPy 1.5.1, and it worked fine. I would like to do the same using pip install - in order to be able to specify dependencies in a setup.py of my own package. The problem is, when I try: ``` pip install 'numpy==1.5.1' ``` it works fine. But then ``` pip install 'scipy==0.9.0' ``` fails miserably, with ``` raise self.notfounderror(self.notfounderror.__doc__) numpy.distutils.system_info.BlasNotFoundError: Blas (http:\/\/www.netlib.org\/blas\/) libraries not found. 
Directories to search for the libraries can be specified in the numpy\/distutils\/site.cfg file (section [blas]) or by setting the BLAS environment variable. ``` How do I get it to work?", "response":"This worked for me on Ubuntu 14.04: ``` sudo apt-get install libblas-dev liblapack-dev libatlas-base-dev gfortran pip install scipy ```", "best_answers_score":0.7917, "library_name":"numpy", "question_url":"https:\/\/stackoverflow.com\/questions\/11114225\/installing-scipy-and-numpy-using-pip", "best_answers_votes":333, "question_length":905, "response_length":135 }, { "question":"How to get the indices list of all NaN value in numpy array? Say now I have a numpy array which is defined as, ``` [[1,2,3,4], [2,3,NaN,5], [NaN,5,2,3]] ``` Now I want to have a list that contains all the indices of the missing values, which is [(1,2),(2,0)] at this case. Is there any way I can do that?", "response":"np.isnan combined with np.argwhere ``` x = np.array([[1,2,3,4], [2,3,np.nan,5], [np.nan,5,2,3]]) np.argwhere(np.isnan(x)) ``` output: ``` array([[1, 2], [2, 0]]) ```", "best_answers_score":0.7907, "library_name":"numpy", "question_url":"https:\/\/stackoverflow.com\/questions\/37754948\/how-to-get-the-indices-list-of-all-nan-value-in-numpy-array", "best_answers_votes":234, "question_length":304, "response_length":165 }, { "question":"load csv into 2D matrix with numpy for plotting Given this CSV file: ``` \"A\",\"B\",\"C\",\"D\",\"E\",\"F\",\"timestamp\" 611.88243,9089.5601,5133.0,864.07514,1715.37476,765.22777,1.291111964948E12 611.88243,9089.5601,5133.0,864.07514,1715.37476,765.22777,1.291113113366E12 611.88243,9089.5601,5133.0,864.07514,1715.37476,765.22777,1.291120650486E12 ``` I simply want to load it as a matrix\/ndarray with 3 rows and 7 columns. However, for some reason, all I can get out of numpy is an ndarray with 3 rows (one per line) and no columns. 
``` r = np.genfromtxt(fname,delimiter=',',dtype=None, names=True) print r print r.shape [ (611.88243, 9089.5601000000006, 5133.0, 864.07514000000003, 1715.3747599999999, 765.22776999999996, 1291111964948.0) (611.88243, 9089.5601000000006, 5133.0, 864.07514000000003, 1715.3747599999999, 765.22776999999996, 1291113113366.0) (611.88243, 9089.5601000000006, 5133.0, 864.07514000000003, 1715.3747599999999, 765.22776999999996, 1291120650486.0)] (3,) ``` I can manually iterate and hack it into the shape I want, but this seems silly. I just want to load it as a proper matrix so I can slice it across different dimensions and plot it, just like in matlab.", "response":"Pure numpy ``` numpy.loadtxt(open(\"test.csv\", \"rb\"), delimiter=\",\", skiprows=1) ``` Check out the loadtxt documentation. You can also use python's csv module: ``` import csv import numpy reader = csv.reader(open(\"test.csv\", \"rb\"), delimiter=\",\") x = list(reader) result = numpy.array(x).astype(\"float\") ``` You will have to convert it to your favorite numeric type. 
I guess you can write the whole thing in one line: ``` result = numpy.array(list(csv.reader(open(\"test.csv\", \"rb\"), delimiter=\",\"))).astype(\"float\") ``` Added Hint: You could also use pandas.io.parsers.read_csv and get the associated numpy array which can be faster.", "best_answers_score":0.7891, "library_name":"numpy", "question_url":"https:\/\/stackoverflow.com\/questions\/4315506\/load-csv-into-2d-matrix-with-numpy-for-plotting", "best_answers_votes":169, "question_length":1175, "response_length":632 }, { "question":"Convert an image RGB->Lab with python What is the preferred way of doing the conversion using PIL\/Numpy\/SciPy today?", "response":"Since 2010, when the linked question was asked, the corresponding code has moved from scipy to a separate toolkit: http:\/\/scikit-image.org\/ So here's the code I was actually looking for: ``` from skimage import io, color rgb = io.imread(filename) lab = color.rgb2lab(rgb) ``` It should also be noted that due to the nature of Lab, the sRGB->Lab conversion depends on an additional parameter: the whitepoint, e.g.: \u2022 Photoshop uses a white point called D50 (which is a standard for icc) \u2022 OpenCV and skimage use D65 (which is a standard for srgb). \u2022 the default Matlab implementation uses D50 (it is capable of using others). This nice FAQ explains it this way: You should use D65 unless you have a good reason to use something else. The print industry commonly uses D50 and photography commonly uses D55. These represent compromises between the conditions of indoor (tungsten) and daylight viewing.
You can tell which whitepoint you're dealing with by converting RGB (0,0,255) to Lab: \u2022 D50 would give you (30, 68, -112) \u2022 D55 (30, 73, -110) \u2022 D65 (32, 79, -108) The numbers after 'D' correspond to the (internally) used color temperature of the white point: D50 = 5003 K (yellowish), D65 = 6504 K (blueish) I'm grateful to Alex and Roman for their answers because they pointed me in the right direction.", "best_answers_score":0.789, "library_name":"numpy", "question_url":"https:\/\/stackoverflow.com\/questions\/13405956\/convert-an-image-rgb-lab-with-python", "best_answers_votes":82, "question_length":116, "response_length":1270 }, { "question":"Find indices of elements equal to zero in a NumPy array NumPy has the efficient function\/method nonzero() to identify the indices of non-zero elements in an ndarray object. What is the most efficient way to obtain the indices of the elements that do have a value of zero?", "response":"numpy.where() is my favorite. ``` >>> x = numpy.array([1,0,2,0,3,0,4,5,6,7,8]) >>> numpy.where(x == 0)[0] array([1, 3, 5]) ``` The method where returns a tuple of ndarrays, each corresponding to a different dimension of the input. Since the input is one-dimensional, the [0] unboxes the tuple's only element.", "best_answers_score":0.7885, "library_name":"numpy", "question_url":"https:\/\/stackoverflow.com\/questions\/4588628\/find-indices-of-elements-equal-to-zero-in-a-numpy-array", "best_answers_votes":312, "question_length":271, "response_length":308 }, { "question":"How do I check which version of NumPy I'm using? How can I check which version of NumPy I'm using?", "response":"``` import numpy numpy.version.version ```", "best_answers_score":0.7878, "library_name":"numpy", "question_url":"https:\/\/stackoverflow.com\/questions\/1520234\/how-do-i-check-which-version-of-numpy-im-using", "best_answers_votes":493, "question_length":98, "response_length":42 }, { "question":"Python's sum vs.
NumPy's numpy.sum What are the differences in performance and behavior between using Python's native sum function and NumPy's numpy.sum? sum works on NumPy's arrays and numpy.sum works on Python lists and they both return the same effective result (haven't tested edge cases such as overflow) but different types. ``` >>> import numpy as np >>> np_a = np.array(range(5)) >>> np_a array([0, 1, 2, 3, 4]) >>> type(np_a) >>> py_a = list(range(5)) >>> py_a [0, 1, 2, 3, 4] >>> type(py_a) # The numerical answer (10) is the same for the following sums: >>> type(np.sum(np_a)) >>> type(sum(np_a)) >>> type(np.sum(py_a)) >>> type(sum(py_a)) ``` Edit: I think my practical question here is would using numpy.sum on a list of Python integers be any faster than using Python's own sum? Additionally, what are the implications (including performance) of using a Python integer versus a scalar numpy.int32? For example, for a += 1, is there a behavior or performance difference if the type of a is a Python integer or a numpy.int32? I am curious if it is faster to use a NumPy scalar datatype such as numpy.int32 for a value that is added or subtracted a lot in Python code. For clarification, I am working on a bioinformatics simulation which partly consists of collapsing multidimensional numpy.ndarrays into single scalar sums which are then additionally processed. I am using Python 3.2 and NumPy 1.6.", "response":"[...] my [...] question here is would using numpy.sum on a list of Python integers be any faster than using Python's own sum? The answer to this question is: No. Pythons sum will be faster on lists, while NumPys sum will be faster on arrays.
I actually did a benchmark to show the timings (Python 3.6, NumPy 1.14): ``` import random import numpy as np import matplotlib.pyplot as plt from simple_benchmark import benchmark %matplotlib notebook def numpy_sum(it): return np.sum(it) def python_sum(it): return sum(it) def numpy_sum_method(arr): return arr.sum() b_array = benchmark( [numpy_sum, numpy_sum_method, python_sum], arguments={2**i: np.random.randint(0, 10, 2**i) for i in range(2, 21)}, argument_name='array size', function_aliases={numpy_sum: 'numpy.sum()', numpy_sum_method: '.sum()', python_sum: \"sum()\"} ) b_list = benchmark( [numpy_sum, python_sum], arguments={2**i: [random.randint(0, 10) for _ in range(2**i)] for i in range(2, 21)}, argument_name='list size', function_aliases={numpy_sum: 'numpy.sum()', python_sum: \"sum()\"} ) ``` With these results: ``` f, (ax1, ax2) = plt.subplots(1, 2, sharey=True) b_array.plot(ax=ax1) b_list.plot(ax=ax2) ``` Left: on a NumPy array; Right: on a Python list. Note that this is a log-log plot because the benchmark covers a very wide range of values. However for qualitative results: Lower means better. Which shows that for lists Pythons sum is always faster while np.sum or the sum method on the array will be faster (except for very short arrays where Pythons sum is faster). Just in case you're interested in comparing these against each other I also made a plot including all of them: ``` f, ax = plt.subplots(1) b_array.plot(ax=ax) b_list.plot(ax=ax) ax.grid(which='both') ``` Interestingly the point at which numpy can compete on arrays with Python and lists is roughly at around 200 elements! Note that this number may depend on a lot of factors, such as Python\/NumPy version, ... Don't take it too literally. What hasn't been mentioned is the reason for this difference (I mean the large scale difference not the difference for short lists\/arrays where the functions simply have different constant overhead). 
Assuming CPython a Python list is a wrapper around a C (the language C) array of pointers to Python objects (in this case Python integers). These integers can be seen as wrappers around a C integer (not actually correct because Python integers can be arbitrarily big so it cannot simply use one C integer but it's close enough). For example a list like [1, 2, 3] would be (schematically, I left out a few details) stored like this: A NumPy array however is a wrapper around a C array containing C values (in this case int or long depending on 32 or 64bit and depending on the operating system). So a NumPy array like np.array([1, 2, 3]) would look like this: The next thing to understand is how these functions work: Pythons sum iterates over the iterable (in this case the list or array) and adds all elements. NumPys sum method iterates over the stored C array and adds these C values and finally wraps that value in a Python type (in this case numpy.int32 (or numpy.int64) and returns it. NumPys sum function converts the input to an array (at least if it isn't an array already) and then uses the NumPy sum method. Clearly adding C values from a C array is much faster than adding Python objects, which is why the NumPy functions can be much faster (see the second plot above, the NumPy functions on arrays beat the Python sum by far for large arrays). But converting a Python list to a NumPy array is relatively slow and then you still have to add the C values. Which is why for lists the Python sum will be faster. The only remaining open question is why is Pythons sum on an array so slow (it's the slowest of all compared functions). And that actually has to do with the fact that Pythons sum simply iterates over whatever you pass in. 
In case of a list it gets the stored Python object but in case of a 1D NumPy array there are no stored Python objects, just C values, so Python&NumPy have to create a Python object (a numpy.int32 or numpy.int64) for each element and then these Python objects have to be added. Creating the wrapper for the C value is what makes it really slow. Additionally, what are the implications (including performance) of using a Python integer versus a scalar numpy.int32? For example, for a += 1, is there a behavior or performance difference if the type of a is a Python integer or a numpy.int32? I made some tests and for additions and subtractions of scalars you should definitely stick with Python integers. Even though there could be some caching going on which means that the following tests might not be totally representative: ``` from itertools import repeat python_integer = 1000 numpy_integer_32 = np.int32(1000) numpy_integer_64 = np.int64(1000) def repeatedly_add_one(val): for _ in repeat(None, 100000): _ = val + 1 %timeit repeatedly_add_one(python_integer) 3.7 ms \u00b1 71.2 \u00b5s per loop (mean \u00b1 std. dev. of 7 runs, 100 loops each) %timeit repeatedly_add_one(numpy_integer_32) 14.3 ms \u00b1 162 \u00b5s per loop (mean \u00b1 std. dev. of 7 runs, 100 loops each) %timeit repeatedly_add_one(numpy_integer_64) 18.5 ms \u00b1 494 \u00b5s per loop (mean \u00b1 std. dev. of 7 runs, 100 loops each) def repeatedly_sub_one(val): for _ in repeat(None, 100000): _ = val - 1 %timeit repeatedly_sub_one(python_integer) 3.75 ms \u00b1 236 \u00b5s per loop (mean \u00b1 std. dev. of 7 runs, 100 loops each) %timeit repeatedly_sub_one(numpy_integer_32) 15.7 ms \u00b1 437 \u00b5s per loop (mean \u00b1 std. dev. of 7 runs, 100 loops each) %timeit repeatedly_sub_one(numpy_integer_64) 19 ms \u00b1 834 \u00b5s per loop (mean \u00b1 std. dev. of 7 runs, 100 loops each) ``` It's 3-6 times faster to do scalar operations with Python integers than with NumPy scalars.
I haven't checked why that's the case but my guess is that NumPy scalars are rarely used and probably not optimized for performance. The difference becomes a bit less if you actually perform arithmetic operations where both operands are numpy scalars: ``` def repeatedly_add_one(val): one = type(val)(1) # create a 1 with the same type as the input for _ in repeat(None, 100000): _ = val + one %timeit repeatedly_add_one(python_integer) 3.88 ms \u00b1 273 \u00b5s per loop (mean \u00b1 std. dev. of 7 runs, 100 loops each) %timeit repeatedly_add_one(numpy_integer_32) 6.12 ms \u00b1 324 \u00b5s per loop (mean \u00b1 std. dev. of 7 runs, 100 loops each) %timeit repeatedly_add_one(numpy_integer_64) 6.49 ms \u00b1 265 \u00b5s per loop (mean \u00b1 std. dev. of 7 runs, 100 loops each) ``` Then it's only 2 times slower. In case you wondered why I used itertools.repeat here when I could simply have used for _ in range(...) instead. The reason is that repeat is faster and thus incurs less overhead per loop. Because I'm only interested in the addition\/subtraction time it's actually preferable not to have the looping overhead messing with the timings (at least not that much).", "best_answers_score":0.7864, "library_name":"numpy", "question_url":"https:\/\/stackoverflow.com\/questions\/10922231\/pythons-sum-vs-numpys-numpy-sum", "best_answers_votes":86, "question_length":1414, "response_length":6932 }, { "question":"How to check the size of a float in python? I want to check whether a float is actually 32 or 64bits (and the number of bits of a numpy float array). There should be a built-in, but just didn't find out...", "response":"Properties of a Python float can be requested via sys.float_info. It returns information such as max\/min value, max\/min exp value, etc. These properties can potentially be used to calculate the byte size of a float. I never encountered anything else than 64 bit, though, on many different architectures. 
The items of a NumPy array might have different sizes, but you can check their size in bytes by a.itemsize, where a is a NumPy array.", "best_answers_score":0.786, "library_name":"numpy", "question_url":"https:\/\/stackoverflow.com\/questions\/8216088\/how-to-check-the-size-of-a-float-in-python", "best_answers_votes":37, "question_length":205, "response_length":436 }, { "question":"R summary() equivalent in numpy Is there an equivalent of R's summary() function in numpy? numpy has std, mean, average functions separately, but does it have a function that sums up everything, like summary does in R? I found this question which relates to pandas and this article with R-to-numpy equivalents, but it doesn't have what I'm looking for.", "response":"1. Load Pandas in console and load csv data file ``` import pandas as pd data = pd.read_csv(\"data.csv\", sep = \",\") ``` 2. Examine first few rows of data ``` data.head() ``` 3. Calculate summary statistics ``` summary = data.describe() ``` 4. Transpose statistics to get a similar format as R's summary() function ``` summary = summary.transpose() ``` 5. Visualize summary statistics in console ``` summary.head() ```", "best_answers_score":0.7852, "library_name":"numpy", "question_url":"https:\/\/stackoverflow.com\/questions\/33889310\/r-summary-equivalent-in-numpy", "best_answers_votes":57, "question_length":348, "response_length":412 }, { "question":"Is there a way to check if NumPy arrays share the same data? My impression is that in NumPy, two arrays can share the same memory. Take the following example: ``` import numpy as np a=np.arange(27) b=a.reshape((3,3,3)) a[0]=5000 print (b[0,0,0]) #5000 #Some tests: a.data is b.data #False a.data == b.data #True c=np.arange(27) c[0]=5000 a.data == c.data #True ( Same data, not same memory storage ), False positive ``` So clearly b didn't make a copy of a; it just created some new meta-data and attached it to the same memory buffer that a is using.
Is there a way to check if two arrays reference the same memory buffer? My first impression was to use a.data is b.data, but that returns false. I can do a.data == b.data which returns True, but I don't think that checks to make sure a and b share the same memory buffer, only that the block of memory referenced by a and the one referenced by b have the same bytes.", "response":"You can use the base attribute to check if an array shares the memory with another array: ``` >>> import numpy as np >>> a = np.arange(27) >>> b = a.reshape((3,3,3)) >>> b.base is a True >>> a.base is b False ``` Not sure if that solves your problem. The base attribute will be None if the array owns its own memory. Note that an array's base will be another array, even if it is a subset: ``` >>> c = a[2:] >>> c.base is a True ```", "best_answers_score":0.7848, "library_name":"numpy", "question_url":"https:\/\/stackoverflow.com\/questions\/11286864\/is-there-a-way-to-check-if-numpy-arrays-share-the-same-data", "best_answers_votes":40, "question_length":918, "response_length":432 }, { "question":"How does the axis parameter from NumPy work? Can someone explain exactly what the axis parameter in NumPy does? I am terribly confused. I'm trying to use the function myArray.sum(axis=num) At first I thought if the array is itself 3 dimensions, axis=0 will return three elements, consisting of the sum of all nested items in that same position. If each dimension contained five dimensions, I expected axis=1 to return a result of five items, and so on. 
However this is not the case, and the documentation does not do a good job helping me out (they use a 3x3x3 array so it's hard to tell what's happening) Here's what I did: ``` >>> e array([[[1, 0], [0, 0]], [[1, 1], [1, 0]], [[1, 0], [0, 1]]]) >>> e.sum(axis = 0) array([[3, 1], [1, 1]]) >>> e.sum(axis=1) array([[1, 0], [2, 1], [1, 1]]) >>> e.sum(axis=2) array([[1, 0], [2, 1], [1, 1]]) >>> ``` Clearly the result is not intuitive.", "response":"Clearly, ``` e.shape == (3, 2, 2) ``` Sum over an axis is a reduction operation so the specified axis disappears. Hence, ``` e.sum(axis=0).shape == (2, 2) e.sum(axis=1).shape == (3, 2) e.sum(axis=2).shape == (3, 2) ``` Intuitively, we are \"squashing\" the array along the chosen axis, and summing the numbers that get squashed together.", "best_answers_score":0.7848, "library_name":"numpy", "question_url":"https:\/\/stackoverflow.com\/questions\/22320534\/how-does-the-axis-parameter-from-numpy-work", "best_answers_votes":40, "question_length":885, "response_length":335 }, { "question":"Convert a numpy.ndarray to string(or bytes) and convert it back to numpy.ndarray I'm having a little trouble here, I'm trying to convert a numpy.ndarray to string, I've already done that like this: ``` randomArray.tostring() ``` It works, but I'm wondering if I can transform it back to a numpy.ndarray. What's the best way to do this? I'm using numpy 1.8.1 Context: The objective is to send the numpy.ndarray as a message in rabbitmq (pika library)", "response":"You can use the fromstring() method for this: ``` arr = np.array([1, 2, 3, 4, 5, 6]) ts = arr.tostring() print(np.fromstring(ts, dtype=int)) >>> [1 2 3 4 5 6] ``` Sorry for the short answer, not enough points for commenting. Remember to state the data types or you'll end up in a world of pain. Note on fromstring from numpy 1.14 onwards: sep : str, optional The string separating numbers in the data; extra whitespace between elements is also ignored. 
Deprecated since version 1.14: Passing sep='', the default, is deprecated since it will trigger the deprecated binary mode of this function. This mode interprets string as binary bytes, rather than ASCII text with decimal numbers, an operation which is better spelt frombuffer(string, dtype, count). If string contains unicode text, the binary mode of fromstring will first encode it into bytes using either utf-8 (python 3) or the default encoding (python 2), neither of which produce sane results.", "best_answers_score":0.7838, "library_name":"numpy", "question_url":"https:\/\/stackoverflow.com\/questions\/30167538\/convert-a-numpy-ndarray-to-stringor-bytes-and-convert-it-back-to-numpy-ndarray", "best_answers_votes":55, "question_length":449, "response_length":952 }, { "question":"How to rearrange array based upon index array I'm looking for a one line solution that would help me do the following. Suppose I have ``` array = np.array([10, 20, 30, 40, 50]) ``` I'd like to rearrange it based upon an input ordering. If there were a numpy function called arrange, it would do the following: ``` newarray = np.arrange(array, [1, 0, 3, 4, 2]) print newarray [20, 10, 40, 50, 30] ``` Formally, if the array to be reordered is m x n, and the \"index\" array is 1 x n, the ordering would be determined by the array called \"index\". 
Does numpy have a function like this?", "response":"You can simply use your \"index\" list directly, as, well, an index array: ``` >>> arr = np.array([10, 20, 30, 40, 50]) >>> idx = [1, 0, 3, 4, 2] >>> arr[idx] array([20, 10, 40, 50, 30]) ``` It tends to be much faster if idx is already an ndarray and not a list, even though it'll work either way: ``` >>> %timeit arr[idx] 100000 loops, best of 3: 2.11 \u00b5s per loop >>> ai = np.array(idx) >>> %timeit arr[ai] 1000000 loops, best of 3: 296 ns per loop ```", "best_answers_score":0.7821, "library_name":"numpy", "question_url":"https:\/\/stackoverflow.com\/questions\/26194389\/how-to-rearrange-array-based-upon-index-array", "best_answers_votes":98, "question_length":580, "response_length":451 }, { "question":"numpy.max or max ? Which one is faster? In python, which one is faster ? ``` numpy.max(), numpy.min() ``` or ``` max(), min() ``` My list\/array length varies from 2 to 600. Which one should I use to save some run time ?", "response":"Well, from my timings it follows that if you already have a numpy array a, you should use a.max (the source shows it's the same as np.max if a.max is available). But if you have a built-in list, then most of the time is spent converting it into an np.ndarray => that's why max is better in your timings. In essence: if np.ndarray then a.max; if list and no need for all the machinery of np.ndarray then standard max.", "best_answers_score":0.7817, "library_name":"numpy", "question_url":"https:\/\/stackoverflow.com\/questions\/10943088\/numpy-max-or-max-which-one-is-faster", "best_answers_votes":64, "question_length":219, "response_length":395 }, { "question":"How can I tell if NumPy creates a view or a copy? For a minimal working example, let's digitize a 2D array. numpy.digitize requires a 1D array: ``` import numpy as np N = 200 A = np.random.random((N, N)) X = np.linspace(0, 1, 20) print np.digitize(A.ravel(), X).reshape((N, N)) ``` Now the documentation says: ... A copy is made only if needed.
How do I know if the ravel copy is \"needed\" in this case? In general - is there a way I can determine if a particular operation creates a copy or a view?", "response":"This question is very similar to a question that I asked a while back: You can check the base attribute. ``` a = np.arange(50) b = a.reshape((5, 10)) print (b.base is a) ``` However, that's not perfect. You can also check to see if they share memory using np.may_share_memory. ``` print (np.may_share_memory(a, b)) ``` There's also the flags attribute that you can check: ``` print (b.flags['OWNDATA']) #False -- apparently this is a view e = np.ravel(b[:, 2]) print (e.flags['OWNDATA']) #True -- Apparently this is a new numpy object. ``` But this last one seems a little fishy to me, although I can't quite put my finger on why...", "best_answers_score":0.7809, "library_name":"numpy", "question_url":"https:\/\/stackoverflow.com\/questions\/11524664\/how-can-i-tell-if-numpy-creates-a-view-or-a-copy", "best_answers_votes":95, "question_length":501, "response_length":632 }, { "question":"Output different precision by column with pandas.DataFrame.to_csv()? Question Is it possible to specify a float precision specifically for each column to be printed by the Python pandas package method pandas.DataFrame.to_csv? Background If I have a pandas dataframe that is arranged like this: ``` In [53]: df_data[:5] Out[53]: year month day lats lons vals 0 2012 6 16 81.862745 -29.834254 0.0 1 2012 6 16 81.862745 -29.502762 0.1 2 2012 6 16 81.862745 -29.171271 0.0 3 2012 6 16 81.862745 -28.839779 0.2 4 2012 6 16 81.862745 -28.508287 0.0 ``` There is the float_format option that can be used to specify a precision, but this applies that precision to all columns of the dataframe when printed.
When I use that like so: ``` df_data.to_csv(outfile, index=False, header=False, float_format='%11.6f') ``` I get the following, where vals is given an inaccurate precision: ``` 2012,6,16, 81.862745, -29.834254, 0.000000 2012,6,16, 81.862745, -29.502762, 0.100000 2012,6,16, 81.862745, -29.171270, 0.000000 2012,6,16, 81.862745, -28.839779, 0.200000 2012,6,16, 81.862745, -28.508287, 0.000000 ```", "response":"Change the type of column \"vals\" prior to exporting the data frame to a CSV file ``` df_data['vals'] = df_data['vals'].map(lambda x: '%2.1f' % x) df_data.to_csv(outfile, index=False, header=False, float_format='%11.6f') ```", "best_answers_score":0.7795, "library_name":"numpy", "question_url":"https:\/\/stackoverflow.com\/questions\/20003290\/output-different-precision-by-column-with-pandas-dataframe-to-csv", "best_answers_votes":50, "question_length":1093, "response_length":223 }, { "question":"Calculating Covariance with Python and Numpy I am trying to figure out how to calculate covariance with the Python Numpy function cov. When I pass it two one-dimensional arrays, I get back a 2x2 matrix of results. I don't know what to do with that. I'm not great at statistics, but I believe covariance in such a situation should be a single number. This is what I am looking for. I wrote my own: ``` def cov(a, b): if len(a) != len(b): return a_mean = np.mean(a) b_mean = np.mean(b) sum = 0 for i in range(0, len(a)): sum += ((a[i] - a_mean) * (b[i] - b_mean)) return sum\/(len(a)-1) ``` That works, but I figure the Numpy version is much more efficient, if I could figure out how to use it. Thanks, Dave", "response":"When a and b are 1-dimensional sequences, numpy.cov(a,b)[0][1] is equivalent to your cov(a,b).
The 2x2 array returned by np.cov(a,b) has elements equal to ``` cov(a,a) cov(a,b) cov(a,b) cov(b,b) ``` (where, again, cov is the function you defined above.)", "best_answers_score":0.7787, "library_name":"numpy", "question_url":"https:\/\/stackoverflow.com\/questions\/15317822\/calculating-covariance-with-python-and-numpy", "best_answers_votes":149, "question_length":787, "response_length":253 }, { "question":"Python: Concatenate (or clone) a numpy array N times", "response":"You are close, you want to use np.tile, but like this: ``` a = np.array([0,1,2]) np.tile(a,(3,1)) ``` Result: ``` array([[0, 1, 2], [0, 1, 2], [0, 1, 2]]) ``` If you call np.tile(a,3) you will get concatenate behavior like you were seeing ``` array([0, 1, 2, 0, 1, 2, 0, 1, 2]) ``` http:\/\/docs.scipy.org\/doc\/numpy\/reference\/generated\/numpy.tile.html", "best_answers_score":0.7779, "library_name":"numpy", "question_url":"https:\/\/stackoverflow.com\/questions\/22634265\/python-concatenate-or-clone-a-numpy-array-n-times", "best_answers_votes":163, "question_length":680, "response_length":349 }, { "question":"Fast punctuation removal with pandas This is a self-answered post. Below I outline a common problem in the NLP domain and propose a few performant methods to solve it. Oftentimes the need arises to remove punctuation during text cleaning and pre-processing.
Punctuation is defined as any character in string.punctuation: ``` >>> import string string.punctuation '!\"#$%&\\'()*+,-.\/:;?@[\\\\]^_`{|}~' ``` This is a common enough problem and has been asked before ad nauseam. The most idiomatic solution uses pandas str.replace. However, for situations which involve a lot of text, a more performant solution may need to be considered. What are some good, performant alternatives to str.replace when dealing with hundreds of thousands of records?", "response":"Setup For the purpose of demonstration, let's consider this DataFrame. ``` df = pd.DataFrame({'text':['a..b?!??', '%hgh&12','abc123!!!', '$$$1234']}) df text 0 a..b?!?? 1 %hgh&12 2 abc123!!! 3 $$$1234 ``` Below, I list the alternatives, one by one, in increasing order of performance str.replace This option is included to establish the default method as a benchmark for comparing other, more performant solutions. This uses pandas in-built str.replace function which performs regex-based replacement. ``` df['text'] = df['text'].str.replace(r'[^\\w\\s]+', '') ``` ``` df text 0 ab 1 hgh12 2 abc123 3 1234 ``` This is very easy to code, and is quite readable, but slow. regex.sub This involves using the sub function from the re library. Pre-compile a regex pattern for performance, and call regex.sub inside a list comprehension. Convert df['text'] to a list beforehand if you can spare some memory, you'll get a nice little performance boost out of this. ``` import re p = re.compile(r'[^\\w\\s]+') df['text'] = [p.sub('', x) for x in df['text'].tolist()] ``` ``` df text 0 ab 1 hgh12 2 abc123 3 1234 ``` Note: If your data has NaN values, this (as well as the next method below) will not work as is. See the section on \"Other Considerations\". str.translate python's str.translate function is implemented in C, and is therefore very fast. How this works is: First, join all your strings together to form one huge string using a single (or more) character separator that you choose. 
You must use a character\/substring that you can guarantee will not belong inside your data. Perform str.translate on the large string, removing punctuation (the separator from step 1 excluded). Split the string on the separator that was used to join in step 1. The resultant list must have the same length as your initial column. Here, in this example, we consider the pipe separator |. If your data contains the pipe, then you must choose another separator. ``` import string punct = '!\"#$%&\\'()*+,-.\/:;?@[\\\\]^_`{}~' # `|` is not present here transtab = str.maketrans(dict.fromkeys(punct, '')) df['text'] = '|'.join(df['text'].tolist()).translate(transtab).split('|') ``` ``` df text 0 ab 1 hgh12 2 abc123 3 1234 ``` Performance str.translate performs the best, by far. Note that the graph below includes another variant Series.str.translate from MaxU's answer. (Interestingly, I reran this a second time, and the results are slightly different from before. During the second run, it seems re.sub was winning out over str.translate for really small amounts of data.) There is an inherent risk involved with using translate (particularly, the problem of automating the process of deciding which separator to use is non-trivial), but the trade-offs are worth the risk. Other Considerations Handling NaNs with list comprehension methods; Note that this method (and the next) will only work as long as your data does not have NaNs. When handling NaNs, you will have to determine the indices of non-null values and replace those only. 
Try something like this: ``` df = pd.DataFrame({'text': [ 'a..b?!??', np.nan, '%hgh&12','abc123!!!', '$$$1234', np.nan]}) idx = np.flatnonzero(df['text'].notna()) col_idx = df.columns.get_loc('text') df.iloc[idx,col_idx] = [ p.sub('', x) for x in df.iloc[idx,col_idx].tolist()] df text 0 ab 1 NaN 2 hgh12 3 abc123 4 1234 5 NaN ``` Dealing with DataFrames; If you are dealing with DataFrames, where every column requires replacement, the procedure is simple: ``` v = pd.Series(df.values.ravel()) df[:] = translate(v).values.reshape(df.shape) ``` Or, ``` v = df.stack() v[:] = translate(v) df = v.unstack() ``` Note that the translate function is defined below with the benchmarking code. Every solution has tradeoffs, so deciding what solution best fits your needs will depend on what you're willing to sacrifice. Two very common considerations are performance (which we've already seen) and memory usage. str.translate is a memory-hungry solution, so use with caution. Another consideration is the complexity of your regex. Sometimes, you may want to remove anything that is not alphanumeric or whitespace. Other times, you will need to retain certain characters, such as hyphens, colons, and sentence terminators [.!?]. Specifying these explicitly adds complexity to your regex, which may in turn impact the performance of these solutions. Make sure you test these solutions on your data before deciding what to use. Lastly, unicode characters will be removed with this solution. You may want to tweak your regex (if using a regex-based solution), or just go with str.translate otherwise. For even more performance (for larger N), take a look at this answer by Paul Panzer.
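The NaN fix above is shown for the list-comprehension method; as a rough sketch (not from the original benchmarks), the same masking idea can be combined with the str.translate approach, assuming the pipe character never occurs inside the data:

```python
import string

import numpy as np
import pandas as pd

# Sketch: str.translate-based cleaning on a column that contains NaNs.
# Assumes '|' never occurs inside the data (same caveat as above).
df = pd.DataFrame({'text': ['a..b?!??', np.nan, '%hgh&12']})

punct = string.punctuation.replace('|', '')
transtab = str.maketrans(dict.fromkeys(punct, ''))

mask = df['text'].notna()
# Join only the non-null values, translate once, then split back.
cleaned = '|'.join(df.loc[mask, 'text']).translate(transtab).split('|')
df.loc[mask, 'text'] = cleaned
print(df['text'].tolist())  # ['ab', nan, 'hgh12']
```

As with the original translate solution, choosing a separator that is guaranteed absent from the data remains the caller's responsibility.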
Appendix Functions ``` def pd_replace(df): return df.assign(text=df['text'].str.replace(r'[^\\w\\s]+', '')) def re_sub(df): p = re.compile(r'[^\\w\\s]+') return df.assign(text=[p.sub('', x) for x in df['text'].tolist()]) def translate(df): punct = string.punctuation.replace('|', '') transtab = str.maketrans(dict.fromkeys(punct, '')) return df.assign( text='|'.join(df['text'].tolist()).translate(transtab).split('|') ) # MaxU's version (https:\/\/stackoverflow.com\/a\/50444659\/4909087) def pd_translate(df): punct = string.punctuation.replace('|', '') transtab = str.maketrans(dict.fromkeys(punct, '')) return df.assign(text=df['text'].str.translate(transtab)) ``` Performance Benchmarking Code ``` from timeit import timeit import pandas as pd import matplotlib.pyplot as plt res = pd.DataFrame( index=['pd_replace', 're_sub', 'translate', 'pd_translate'], columns=[10, 50, 100, 500, 1000, 5000, 10000, 50000], dtype=float ) for f in res.index: for c in res.columns: l = ['a..b?!??', '%hgh&12','abc123!!!', '$$$1234'] * c df = pd.DataFrame({'text' : l}) stmt = '{}(df)'.format(f) setp = 'from __main__ import df, {}'.format(f) res.at[f, c] = timeit(stmt, setp, number=30) ax = res.div(res.min()).T.plot(loglog=True) ax.set_xlabel(\"N\"); ax.set_ylabel(\"time (relative)\"); plt.show() ```", "best_answers_score":0.7776, "library_name":"numpy", "question_url":"https:\/\/stackoverflow.com\/questions\/50444346\/fast-punctuation-removal-with-pandas", "best_answers_votes":100, "question_length":740, "response_length":5968 }, { "question":"Difference between nonzero(a), where(a) and argwhere(a). When to use which? In Numpy, nonzero(a), where(a) and argwhere(a), with a being a numpy array, all seem to return the non-zero indices of the array. What are the differences between these three calls? On argwhere the documentation says: np.argwhere(a) is the same as np.transpose(np.nonzero(a)). Why have a whole function that just transposes the output of nonzero ? 
When would that be so useful that it deserves a separate function? What about the difference between where(a) and nonzero(a)? Wouldn't they return the exact same result?", "response":"nonzero and argwhere both give you information about where in the array the elements are True. where works the same as nonzero in the form you have posted, but it has a second form: ``` np.where(mask,a,b) ``` which can be roughly thought of as a numpy \"ufunc\" version of the conditional expression: ``` a[i] if mask[i] else b[i] ``` (with appropriate broadcasting of a and b). As far as having both nonzero and argwhere, they're conceptually different. nonzero is structured to return an object which can be used for indexing. This can be lighter-weight than creating an entire boolean mask if the 0's are sparse: ``` mask = a == 0 # entire array of bools mask = np.nonzero(a) ``` Now you can use that mask to index other arrays, etc. However, as it is, it's not very nice conceptually to figure out which indices correspond to 0 elements. That's where argwhere comes in.", "best_answers_score":0.7775, "library_name":"numpy", "question_url":"https:\/\/stackoverflow.com\/questions\/15976697\/difference-between-nonzeroa-wherea-and-argwherea-when-to-use-which", "best_answers_votes":25, "question_length":593, "response_length":871 }, { "question":"How to calculate the sum of all columns of a 2D numpy array (efficiently) Let's say I have the following 2D numpy array consisting of four rows and three columns: ``` >>> a = numpy.arange(12).reshape(4,3) >>> print(a) [[ 0 1 2] [ 3 4 5] [ 6 7 8] [ 9 10 11]] ``` What would be an efficient way to generate a 1D array that contains the sum of all columns (like [18, 22, 26])? Can this be done without having the need to loop through all columns?", "response":"Check out the documentation for numpy.sum, paying particular attention to the axis parameter. 
To sum over columns: ``` >>> import numpy as np >>> a = np.arange(12).reshape(4,3) >>> a.sum(axis=0) array([18, 22, 26]) ``` Or, to sum over rows: ``` >>> a.sum(axis=1) array([ 3, 12, 21, 30]) ``` Other aggregate functions, like numpy.mean, numpy.cumsum and numpy.std, e.g., also take the axis parameter. From the Tentative Numpy Tutorial: Many unary operations, such as computing the sum of all the elements in the array, are implemented as methods of the ndarray class. By default, these operations apply to the array as though it were a list of numbers, regardless of its shape. However, by specifying the axis parameter you can apply an operation along the specified axis of an array:", "best_answers_score":0.7771, "library_name":"numpy", "question_url":"https:\/\/stackoverflow.com\/questions\/13567345\/how-to-calculate-the-sum-of-all-columns-of-a-2d-numpy-array-efficiently", "best_answers_votes":160, "question_length":443, "response_length":782 }, { "question":"How to make scipy.interpolate give an extrapolated result beyond the input range? I'm trying to port a program which uses a hand-rolled interpolator (developed by a mathematician colleage) over to use the interpolators provided by scipy. I'd like to use or wrap the scipy interpolator so that it has as close as possible behavior to the old interpolator. A key difference between the two functions is that in our original interpolator - if the input value is above or below the input range, our original interpolator will extrapolate the result. If you try this with the scipy interpolator it raises a ValueError. 
Consider this program as an example: ``` import numpy as np from scipy import interpolate x = np.arange(0,10) y = np.exp(-x\/3.0) f = interpolate.interp1d(x, y) print f(9) print f(11) # Causes ValueError, because it's greater than max(x) ``` Is there a sensible way to make it so that instead of crashing, the final line will simply do a linear extrapolation, continuing the gradients defined by the first and last two points to infinity? Note that in the real software I'm not actually using the exp function - that's here for illustration only!", "response":"As of SciPy version 0.17.0, there is a new option for scipy.interpolate.interp1d that allows extrapolation. Simply set fill_value='extrapolate' in the call. Modifying your code in this way gives: ``` import numpy as np from scipy import interpolate x = np.arange(0,10) y = np.exp(-x\/3.0) f = interpolate.interp1d(x, y, fill_value='extrapolate') print f(9) print f(11) ``` and the output is: ``` 0.0497870683679 0.010394302658 ``` Unfortunately, as of 2024, there is a warning in the docs for interp1d: This class is considered legacy and will no longer receive updates.
This could also mean it will be removed in future SciPy versions.", "best_answers_score":0.7767, "library_name":"numpy", "question_url":"https:\/\/stackoverflow.com\/questions\/2745329\/how-to-make-scipy-interpolate-give-an-extrapolated-result-beyond-the-input-range", "best_answers_votes":104, "question_length":1158, "response_length":635 }, { "question":"Apply function on each row (row-wise) of a NumPy array So, I have the function - ``` def function(x): x , y = vector return exp(((-x**2\/200))-0.5*(y+0.05*(x**2) - 100*0.05)**2) ``` and let's say that I would like to evaluate it at the following points (first column are the x-values and second column are the y-values) - ``` array([[-1.56113514, 4.51759732], [-2.80261623, 5.068371 ], [ 0.7792729 , 6.0169462 ], [-1.35672858, 3.52517478], [-1.92074891, 5.79966161], [-2.79340321, 4.73430001], [-2.79655868, 5.05361163], [-2.13637747, 5.39255837], [ 0.17341809, 3.60918261], [-1.22712921, 4.95327158]]) ``` i.e. I would like to pass the function the first row of values and evaluate, then the second row and evaluate etc. and then the final result would be an array of the values evaluated at these points (so, an array consisting of 10 values). So, for example, if the function was, say, a bivariate normal distribution - ``` def function2(x): function2 = (mvnorm.pdf(x,[0,0],[[1,0],[0,1]])) return function2 ``` and I passed the above values into this function, I would get - ``` array([ 1.17738907e-05, 1.08383957e-04, 1.69855078e-04, 5.64757613e-06, 1.37432346e-05, 1.44032800e-04, 1.33426313e-05, 1.97822328e-06, 6.56121709e-08, 4.67076770e-05]) ``` So basically, I am looking for a way to rewrite the function so that it can do this. Moreover, I would like to keep the function as a function of one variable only (i.e. only a function of x). 
Thank you for your help!", "response":"You can use np.apply_along_axis: ``` np.apply_along_axis(function, 1, array) ``` The first argument is the function, the second argument is the axis along which the function is to be applied. In your case, axis=1, so the function is applied to each row. The last argument is the array, of course. You should be warned, however, that apply_along_axis is only a convenience function, not a magic bullet. It has a severe speed limitation, since it just hides a loop. You should always try to vectorize your computation, where possible. Here's how I'd do this: ``` def vectorized(array): v = array[:, 0] ** 2 # computing just once return np.exp((-v \/ 200) - 0.5 * (array[:, 1] + 0.05 * v - 5) ** 2) ```", "best_answers_score":0.7758, "library_name":"numpy", "question_url":"https:\/\/stackoverflow.com\/questions\/45604688\/apply-function-on-each-row-row-wise-of-a-numpy-array", "best_answers_votes":108, "question_length":1471, "response_length":649 }, { "question":"How to specify upper and lower limits when using numpy.random.normal I want to be able to pick values from a normal distribution that only ever fall between 0 and 1. In some cases I want to be able to basically just return a completely random distribution, and in other cases I want to return values that fall in the shape of a Gaussian. At the moment I am using the following function: ``` def blockedgauss(mu,sigma): while True: numb = random.gauss(mu,sigma) if (numb > 0 and numb < 1): break return numb ``` It picks a value from a normal distribution, then discards it if it falls outside of the range 0 to 1, but I feel like there must be a better way of doing this.", "response":"It sounds like you want a truncated normal distribution.
Using scipy, you could use scipy.stats.truncnorm to generate random variates from such a distribution: ``` import matplotlib.pyplot as plt import scipy.stats as stats lower, upper = 3.5, 6 mu, sigma = 5, 0.7 X = stats.truncnorm( (lower - mu) \/ sigma, (upper - mu) \/ sigma, loc=mu, scale=sigma) N = stats.norm(loc=mu, scale=sigma) fig, ax = plt.subplots(2, sharex=True) ax[0].hist(X.rvs(10000), density=True) ax[1].hist(N.rvs(10000), density=True) plt.show() ``` The top figure shows the truncated normal distribution, the lower figure shows the normal distribution with the same mean mu and standard deviation sigma.", "best_answers_score":0.7758, "library_name":"numpy", "question_url":"https:\/\/stackoverflow.com\/questions\/18441779\/how-to-specify-upper-and-lower-limits-when-using-numpy-random-normal", "best_answers_votes":72, "question_length":671, "response_length":671 }, { "question":"NumPy array is not JSON serializable After creating a NumPy array, and saving it as a Django context variable, I receive the following error when loading the webpage: ``` array([ 0, 239, 479, 717, 952, 1192, 1432, 1667], dtype=int64) is not JSON serializable ``` What does this mean?", "response":"I regularly \"jsonify\" np.arrays.
Try using the \".tolist()\" method on the arrays first, like this: ``` import numpy as np import codecs, json a = np.arange(10).reshape(2,5) # a 2 by 5 array b = a.tolist() # nested lists with same data, indices file_path = \"\/path.json\" ## your path variable json.dump(b, codecs.open(file_path, 'w', encoding='utf-8'), separators=(',', ':'), sort_keys=True, indent=4) ### this saves the array in .json format ``` In order to \"unjsonify\" the array use: ``` obj_text = codecs.open(file_path, 'r', encoding='utf-8').read() b_new = json.loads(obj_text) a_new = np.array(b_new) ```", "best_answers_score":0.7747, "library_name":"numpy", "question_url":"https:\/\/stackoverflow.com\/questions\/26646362\/numpy-array-is-not-json-serializable", "best_answers_votes":501, "question_length":283, "response_length":607 }, { "question":"How to calculate rolling \/ moving average using python + NumPy \/ SciPy? There seems to be no function that simply calculates the moving average on numpy\/scipy, leading to convoluted solutions. My question is two-fold: What's the easiest way to (correctly) implement a moving average with numpy? Since this seems non-trivial and error prone, is there a good reason not to have the batteries included in this case?", "response":"A simple way to achieve this is by using np.convolve. The idea behind this is to leverage the way the discrete convolution is computed and use it to return a rolling mean. This can be done by convolving with a sequence of np.ones of a length equal to the sliding window length we want. In order to do so we could define the following function: ``` def moving_average(x, w): return np.convolve(x, np.ones(w), 'valid') \/ w ``` This function will be taking the convolution of the sequence x and a sequence of ones of length w. Note that the chosen mode is valid so that the convolution product is only given for points where the sequences overlap completely. 
Some examples: ``` x = np.array([5,3,8,10,2,1,5,1,0,2]) ``` For a moving average with a window of length 2 we would have: ``` moving_average(x, 2) # array([4. , 5.5, 9. , 6. , 1.5, 3. , 3. , 0.5, 1. ]) ``` And for a window of length 4: ``` moving_average(x, 4) # array([6.5 , 5.75, 5.25, 4.5 , 2.25, 1.75, 2. ]) ``` How does convolve work? Let's take a more in-depth look at the way the discrete convolution is computed. The following function aims to replicate the way np.convolve is computing the output values: ``` def mov_avg(x, w): for m in range(len(x)-(w-1)): yield sum(np.ones(w) * x[m:m+w]) \/ w ``` Which, for the same example above, would also yield: ``` list(mov_avg(x, 2)) # [4.0, 5.5, 9.0, 6.0, 1.5, 3.0, 3.0, 0.5, 1.0] ``` So what is being done at each step is to take the inner product between the array of ones and the current window. In this case the multiplication by np.ones(w) is superfluous given that we are directly taking the sum of the sequence. Below is an example of how the first outputs are computed so that it is a little clearer. Let's suppose we want a window of w=4: ``` [1,1,1,1] [5,3,8,10,2,1,5,1,0,2] = (1*5 + 1*3 + 1*8 + 1*10) \/ w = 6.5 ``` And the following output would be computed as: ``` [1,1,1,1] [5,3,8,10,2,1,5,1,0,2] = (1*3 + 1*8 + 1*10 + 1*2) \/ w = 5.75 ``` And so on, returning a moving average of the sequence once all overlaps have been performed.", "best_answers_score":0.7746, "library_name":"numpy", "question_url":"https:\/\/stackoverflow.com\/questions\/14313510\/how-to-calculate-rolling-moving-average-using-python-numpy-scipy", "best_answers_votes":271, "question_length":412, "response_length":2056 }, { "question":"How can I serialize a numpy array while preserving matrix dimensions? numpy.array.tostring doesn't seem to preserve information about matrix dimensions (see this question), requiring the user to issue a call to numpy.array.reshape.
Is there a way to serialize a numpy array to JSON format while preserving this information? Note: The arrays may contain ints, floats or bools. It's reasonable to expect a transposed array. Note 2: this is being done with the intent of passing the numpy array through a Storm topology using streamparse, in case such information ends up being relevant.", "response":"pickle.dumps or numpy.save encode all the information needed to reconstruct an arbitrary NumPy array, even in the presence of endianness issues, non-contiguous arrays, or weird structured dtypes. Endianness issues are probably the most important; you don't want array([1]) to suddenly become array([16777216]) because you loaded your array on a big-endian machine. pickle is probably the more convenient option, though save has its own benefits, given in the npy format rationale. I'm giving options for serializing to JSON or a bytestring, because the original questioner needed JSON-serializable output, but most people coming here probably don't. 
The pickle way: ``` import pickle a = # some NumPy array # Bytestring option serialized = pickle.dumps(a) deserialized_a = pickle.loads(serialized) # JSON option # latin-1 maps byte n to unicode code point n serialized_as_json = json.dumps(pickle.dumps(a).decode('latin-1')) deserialized_from_json = pickle.loads(json.loads(serialized_as_json).encode('latin-1')) ``` numpy.save uses a binary format, and it needs to write to a file, but you can get around that with io.BytesIO: ``` a = # any NumPy array memfile = io.BytesIO() numpy.save(memfile, a) serialized = memfile.getvalue() serialized_as_json = json.dumps(serialized.decode('latin-1')) # latin-1 maps byte n to unicode code point n ``` And to deserialize: ``` memfile = io.BytesIO() # If you're deserializing from a bytestring: memfile.write(serialized) # Or if you're deserializing from JSON: # memfile.write(json.loads(serialized_as_json).encode('latin-1')) memfile.seek(0) a = numpy.load(memfile) ```", "best_answers_score":0.7746, "library_name":"numpy", "question_url":"https:\/\/stackoverflow.com\/questions\/30698004\/how-can-i-serialize-a-numpy-array-while-preserving-matrix-dimensions", "best_answers_votes":79, "question_length":584, "response_length":1611 }, { "question":"How to determine if a number is any type of int (core or numpy, signed or not)? I need to test whether a variable is of type int, or any of np.int*, np.uint*, preferably using a single condition (i.e. no or). After some tests, I guess that: isinstance(n, int) will only match int and np.int32 (or np.int64 depending on platform), np.issubdtype(type(n), int) seems to match all int and np.int*, but doesn\u2019t match np.uint*. This leads to two questions: will np.issubdtype match any kind of signed ints? Can I determine in a single check whether a number is any kind of signed or unsigned int?
This is about testing for integers, the test should return False for float-likes.", "response":"NumPy provides base classes that you can\/should use for subtype-checking, rather than the Python types. Use np.integer to check for any instance of either signed or unsigned integers. Use np.signedinteger and np.unsignedinteger to check for signed types or unsigned types. ``` >>> np.issubdtype(np.uint32, np.integer) True >>> np.issubdtype(np.uint32, np.signedinteger) False >>> np.issubdtype(int, np.integer) True >>> np.issubdtype(np.array([1, 2, 3]).dtype, np.integer) True ``` All floating or complex number types will return False when tested. np.issubdtype(np.uint*, int) will always be False because the Python int is a signed type. A useful reference showing the relationship between all of these base classes is found in the documentation here.", "best_answers_score":0.7743, "library_name":"numpy", "question_url":"https:\/\/stackoverflow.com\/questions\/37726830\/how-to-determine-if-a-number-is-any-type-of-int-core-or-numpy-signed-or-not", "best_answers_votes":208, "question_length":671, "response_length":754 }, { "question":"ImportError: DLL load failed: The specified module could not be found I have installed Python 2.5.4, Numpy 1.5.0 win32, Matplotlib 1.0.0 win32, pywin32 218. Still not able to plot graphs in Python. 
Here is the error I am getting: ``` import pylab File \"C:\\Python25\\lib\\site-packages\\pylab.py\", line 1, in from matplotlib.pylab import * File \"C:\\Python25\\lib\\site-packages\\matplotlib\\pylab.py\", line 216, in from matplotlib import mpl # pulls in most modules File \"C:\\Python25\\lib\\site-packages\\matplotlib\\mpl.py\", line 1, in from matplotlib import artist File \"C:\\Python25\\lib\\site-packages\\matplotlib\\artist.py\", line 6, in from transforms import Bbox, IdentityTransform, TransformedBbox, TransformedPath File \"C:\\Python25\\lib\\site-packages\\matplotlib\\transforms.py\", line 34, in from matplotlib._path import affine_transform ImportError: DLL load failed: The specified module could not be found. ``` Please kindly help.", "response":"I had the same issue with importing matplotlib.pylab with Python 3.5.1 on Win 64. Installing the Visual C++ Redistributable for Visual Studio 2015 from this link: https:\/\/www.microsoft.com\/en-us\/download\/details.aspx?id=48145 fixed the missing DLLs. I find it better and easier than downloading and pasting DLLs.", "best_answers_score":0.7741, "library_name":"numpy", "question_url":"https:\/\/stackoverflow.com\/questions\/20201868\/importerror-dll-load-failed-the-specified-module-could-not-be-found", "best_answers_votes":34, "question_length":928, "response_length":313 }, { "question":"Confusion between numpy, scipy, matplotlib and pylab Numpy, scipy, matplotlib, and pylab are common terms among those who use python for scientific computation. I have just learned a bit about pylab, and I got confused. Whenever I want to import numpy, I can always do: ``` import numpy as np ``` I understand that once I do ``` from pylab import * ``` the numpy will be imported as well (with the np alias). So basically the second one does more things compared to the first one. There are a few things I want to ask: Is it right that pylab is just a wrapper for numpy, scipy and matplotlib?
As np is the numpy alias in pylab, what are the scipy and matplotlib aliases in pylab? (as far as I know, plt is the alias of matplotlib.pyplot, but I don't know the alias for matplotlib itself)", "response":"No, pylab is part of matplotlib (in matplotlib.pylab) and tries to give you a MATLAB-like environment. matplotlib has a number of dependencies, among them numpy which it imports under the common alias np. scipy is not a dependency of matplotlib. If you run ipython --pylab an automatic import will put all symbols from matplotlib.pylab into global scope. Like you wrote, numpy gets imported under the np alias. Symbols from matplotlib are available under the mpl alias.", "best_answers_score":0.7701, "library_name":"numpy", "question_url":"https:\/\/stackoverflow.com\/questions\/12987624\/confusion-between-numpy-scipy-matplotlib-and-pylab", "best_answers_votes":136, "question_length":774, "response_length":468 }, { "question":"How do you use the ellipsis slicing syntax in Python? This came up in Hidden features of Python, but I can't see good documentation or examples that explain how the feature works.", "response":"The ellipsis is used in numpy to slice higher-dimensional data structures. It's designed to mean at this point, insert as many full slices (:) as needed to extend the multi-dimensional slice to all dimensions. Example: ``` >>> from numpy import arange >>> a = arange(16).reshape(2,2,2,2) ``` Now, you have a 4-dimensional matrix of order 2x2x2x2.
To select all first elements in the 4th dimension, you can use the ellipsis notation ``` >>> a[..., 0].flatten() array([ 0, 2, 4, 6, 8, 10, 12, 14]) ``` which is equivalent to ``` >>> a[:,:,:,0].flatten() array([ 0, 2, 4, 6, 8, 10, 12, 14]) ``` In your own implementations, you're free to ignore the contract mentioned above and use it for whatever you see fit.", "best_answers_score":0.7694, "library_name":"numpy", "question_url":"https:\/\/stackoverflow.com\/questions\/118370\/how-do-you-use-the-ellipsis-slicing-syntax-in-python", "best_answers_votes":314, "question_length":179, "response_length":698 }, { "question":"how to extract frequency associated with fft values in python I used fft function in numpy which resulted in a complex array. How to get the exact frequency values?", "response":"np.fft.fftfreq tells you the frequencies associated with the coefficients: ``` import numpy as np x = np.array([1,2,1,0,1,2,1,0]) w = np.fft.fft(x) freqs = np.fft.fftfreq(len(x)) for coef,freq in zip(w,freqs): if coef: print('{c:>6} * exp(2 pi i t * {f})'.format(c=coef,f=freq)) # (8+0j) * exp(2 pi i t * 0.0) # -4j * exp(2 pi i t * 0.25) # 4j * exp(2 pi i t * -0.25) ``` The OP asks how to find the frequency in Hertz. I believe the formula is frequency (Hz) = abs(fft_freq * frame_rate). Here is some code that demonstrates that. 
First, we make a wave file at 440 Hz: ``` import math import wave import struct if __name__ == '__main__': # http:\/\/stackoverflow.com\/questions\/3637350\/how-to-write-stereo-wav-files-in-python # http:\/\/www.sonicspot.com\/guide\/wavefiles.html freq = 440.0 data_size = 40000 fname = \"test.wav\" frate = 11025.0 amp = 64000.0 nchannels = 1 sampwidth = 2 framerate = int(frate) nframes = data_size comptype = \"NONE\" compname = \"not compressed\" data = [math.sin(2 * math.pi * freq * (x \/ frate)) for x in range(data_size)] wav_file = wave.open(fname, 'w') wav_file.setparams( (nchannels, sampwidth, framerate, nframes, comptype, compname)) for v in data: wav_file.writeframes(struct.pack('h', int(v * amp \/ 2))) wav_file.close() ``` This creates the file test.wav. Now we read in the data, FFT it, find the coefficient with maximum power, and find the corresponding fft frequency, and then convert to Hertz: ``` import wave import struct import numpy as np if __name__ == '__main__': data_size = 40000 fname = \"test.wav\" frate = 11025.0 wav_file = wave.open(fname, 'r') data = wav_file.readframes(data_size) wav_file.close() data = struct.unpack('{n}h'.format(n=data_size), data) data = np.array(data) w = np.fft.fft(data) freqs = np.fft.fftfreq(len(w)) print(freqs.min(), freqs.max()) # (-0.5, 0.499975) # Find the peak in the coefficients idx = np.argmax(np.abs(w)) freq = freqs[idx] freq_in_hertz = abs(freq * frate) print(freq_in_hertz) # 439.8975 ```", "best_answers_score":0.7691, "library_name":"numpy", "question_url":"https:\/\/stackoverflow.com\/questions\/3694918\/how-to-extract-frequency-associated-with-fft-values-in-python", "best_answers_votes":78, "question_length":164, "response_length":1979 }, { "question":"save numpy array in append mode Is it possible to save a numpy array appending it to an already existing npy-file --- something like np.save(filename,arr,mode='a')? I have several functions that have to iterate over the rows of a large array. 
I cannot create the array at once because of memory constraints. To avoid creating the rows over and over again, I wanted to create each row once and save it to file, appending it to the previous row in the file. Later I could load the npy-file in mmap_mode, accessing the slices when needed.", "response":"Edit: this answer is somewhat outdated, see the second answer about NpyAppendArray. I would not recommend going for HDF5 in 2023; rather, use numpy or zarr. The built-in .npy file format is perfectly fine for working with small datasets, without relying on external modules other than numpy. However, when you start having large amounts of data, the use of a file format, such as HDF5, designed to handle such datasets, is to be preferred [1]. For instance, below is a solution to save numpy arrays in HDF5 with PyTables. Step 1: Create an extendable EArray storage ``` import tables import numpy as np filename = 'outarray.h5' ROW_SIZE = 100 NUM_COLUMNS = 200 f = tables.open_file(filename, mode='w') atom = tables.Float64Atom() array_c = f.create_earray(f.root, 'data', atom, (0, ROW_SIZE)) for idx in range(NUM_COLUMNS): x = np.random.rand(1, ROW_SIZE) array_c.append(x) f.close() ``` Step 2: Append rows to an existing dataset (if needed) ``` f = tables.open_file(filename, mode='a') f.root.data.append(x) ``` Step 3: Read back a subset of the data ``` f = tables.open_file(filename, mode='r') print(f.root.data[1:10,2:20]) # e.g. read from disk only this part of the dataset ```", "best_answers_score":0.7688, "library_name":"numpy", "question_url":"https:\/\/stackoverflow.com\/questions\/30376581\/save-numpy-array-in-append-mode", "best_answers_votes":40, "question_length":534, "response_length":1186 }, { "question":"Is there a multi-dimensional version of arange\/linspace in numpy? I would like a list of 2d NumPy arrays (x,y), where each x is in {-5, -4.5, -4, -3.5, ..., 3.5, 4, 4.5, 5} and the same for y.
I could do ``` x = np.arange(-5, 5.1, 0.5) y = np.arange(-5, 5.1, 0.5) ``` and then iterate through all possible pairs, but I'm sure there's a nicer way... I would like something back that looks like: ``` [[-5, -5], [-5, -4.5], [-5, -4], ... [5, 5]] ``` but the order does not matter.", "response":"You can use np.mgrid for this, it's often more convenient than np.meshgrid because it creates the arrays in one step: ``` import numpy as np X,Y = np.mgrid[-5:5.1:0.5, -5:5.1:0.5] ``` For linspace-like functionality, replace the step (i.e. 0.5) with a complex number whose magnitude specifies the number of points you want in the series. Using this syntax, the same arrays as above are specified as: ``` X, Y = np.mgrid[-5:5:21j, -5:5:21j] ``` You can then create your pairs as: ``` xy = np.vstack((X.flatten(), Y.flatten())).T ``` As @ali_m suggested, this can all be done in one line: ``` xy = np.mgrid[-5:5.1:0.5, -5:5.1:0.5].reshape(2,-1).T ``` Best of luck!", "best_answers_score":0.7687, "library_name":"numpy", "question_url":"https:\/\/stackoverflow.com\/questions\/32208359\/is-there-a-multi-dimensional-version-of-arange-linspace-in-numpy", "best_answers_votes":115, "question_length":478, "response_length":662 }, { "question":"Can I specify a numpy dtype when generating random values? I'm creating a numpy array of random values and adding them to an existing array containing 32-bit floats. I'd like to generate the random values using the same dtype as the target array, so that I don't have to convert the dtypes manually. Currently I do this: ``` import numpy as np x = np.zeros((10, 10), dtype='f') x += np.random.randn(*x.shape).astype('f') ``` What I'd like to do instead of the last line is something like: ``` x += np.random.randn(*x.shape, dtype=x.dtype) ``` but randn (and actually none of the numpy.random methods) does not accept a dtype argument. 
My specific question is, is it possible to specify a dtype for random numbers when I create them, without having to call astype? (My guess is that the random number generator is 64 bits long, so it doesn't really make sense to do this, but I thought I'd ask if it's possible.)", "response":"Q: is it possible to specify a dtype for random numbers when I create them. A: No it isn't. randn accepts the shape only as randn(d0, d1, ..., dn) Simply try this: ``` x = np.random.randn(10, 10).astype('f') ``` Or define a new function like ``` np.random.randn2 = lambda *args, dtype=np.float64: np.random.randn(*args).astype(dtype) x = np.random.randn2(10, 10, dtype='f') ``` If you have to use your code on the post, try this code instead ``` x = np.zeros((10, 10), dtype='f') x[:] = np.random.randn(*x.shape) ``` This assigns the results of randn to the memory allocated by np.zeros", "best_answers_score":0.7674, "library_name":"numpy", "question_url":"https:\/\/stackoverflow.com\/questions\/23355433\/can-i-specify-a-numpy-dtype-when-generating-random-values", "best_answers_votes":38, "question_length":911, "response_length":586 }, { "question":"Flattening a list of NumPy arrays? It appears that I have data in the format of a list of NumPy arrays (type() = np.ndarray): ``` [array([[ 0.00353654]]), array([[ 0.00353654]]), array([[ 0.00353654]]), array([[ 0.00353654]]), array([[ 0.00353654]]), array([[ 0.00353654]]), array([[ 0.00353654]]), array([[ 0.00353654]]), array([[ 0.00353654]]), array([[ 0.00353654]]), array([[ 0.00353654]]), array([[ 0.00353654]]), array([[ 0.00353654]])] ``` I am trying to put this into a polyfit function: ``` m1 = np.polyfit(x, y, deg=2) ``` However, it returns the error: TypeError: expected 1D vector for x I assume I need to flatten my data into something like: ``` [0.00353654, 0.00353654, 0.00353654, 0.00353654, 0.00353654, 0.00353654 ...] 
``` I have tried a list comprehension which usually works on lists of lists, but this as expected has not worked: ``` [val for sublist in risks for val in sublist] ``` What would be the best way to do this?", "response":"You could use numpy.concatenate, which as the name suggests, basically concatenates all the elements of such an input list into a single NumPy array, like so - ``` import numpy as np out = np.concatenate(input_list).ravel() ``` If you wish the final output to be a list, you can extend the solution, like so - ``` out = np.concatenate(input_list).ravel().tolist() ``` Sample run - ``` In [24]: input_list Out[24]: [array([[ 0.00353654]]), array([[ 0.00353654]]), array([[ 0.00353654]]), array([[ 0.00353654]]), array([[ 0.00353654]]), array([[ 0.00353654]]), array([[ 0.00353654]]), array([[ 0.00353654]]), array([[ 0.00353654]]), array([[ 0.00353654]]), array([[ 0.00353654]]), array([[ 0.00353654]]), array([[ 0.00353654]])] In [25]: np.concatenate(input_list).ravel() Out[25]: array([ 0.00353654, 0.00353654, 0.00353654, 0.00353654, 0.00353654, 0.00353654, 0.00353654, 0.00353654, 0.00353654, 0.00353654, 0.00353654, 0.00353654, 0.00353654]) ``` Convert to list - ``` In [26]: np.concatenate(input_list).ravel().tolist() Out[26]: [0.00353654, 0.00353654, 0.00353654, 0.00353654, 0.00353654, 0.00353654, 0.00353654, 0.00353654, 0.00353654, 0.00353654, 0.00353654, 0.00353654, 0.00353654] ```", "best_answers_score":0.7662, "library_name":"numpy", "question_url":"https:\/\/stackoverflow.com\/questions\/33711985\/flattening-a-list-of-numpy-arrays", "best_answers_votes":119, "question_length":943, "response_length":1193 }, { "question":"Creating a Pandas DataFrame from a Numpy array: How do I specify the index column and column headers? 
I have a Numpy array consisting of a list of lists, representing a two-dimensional array with row labels and column names, as shown below: ```py data = np.array([['','Col1','Col2'],['Row1',1,2],['Row2',3,4]]) ``` I'd like the resulting DataFrame to have Row1 and Row2 as index values, and Col1, Col2 as header values. I can specify the index as follows: ```py df = pd.DataFrame(data, index=data[:,0]) ``` However, I am unsure how to best assign column headers.", "response":"Specify data, index and columns to the DataFrame constructor, as follows: ``` >>> pd.DataFrame(data=data[1:,1:], # values ... index=data[1:,0], # 1st column as index ... columns=data[0,1:]) # 1st row as the column names ``` As @joris mentions, you may need to change the above to np.int_(data[1:,1:]) to get the correct data type.", "best_answers_score":0.7658, "library_name":"numpy", "question_url":"https:\/\/stackoverflow.com\/questions\/20763012\/creating-a-pandas-dataframe-from-a-numpy-array-how-do-i-specify-the-index-colum", "best_answers_votes":449, "question_length":561, "response_length":327 }, { "question":"What does the c underscore expression `c_` do exactly? It seems to be some kind of horizontal concatenation, but I could not find any documentation online. Here is a minimal working example: ``` In [1]: from numpy import c_ In [2]: a = ones(4) In [3]: b = zeros((4,10)) In [4]: c_[a,b] Out[4]: array([[ 1., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.], [ 1., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.], [ 1., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.], [ 1., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.]]) ```", "response":"It took me a lot of time to understand, but it seems I finally got it. All you have to do is add along the second axis. Let's take: ``` np.c_[np.array([1,2,3]), np.array([4,5,6])] ``` But there isn't a second axis, so we mentally add one; the shape of both arrays becomes (3,1). So the resultant shape would be (3,1+1), which is (3,2).
That is the shape of the result: ``` array([[1, 4], [2, 5], [3, 6]]) ``` Another example: ``` np.c_[np.array([[1,2,3]]), 0, 0, np.array([[4,5,6]])] ``` Shapes: np.array([[1,2,3]]) is (1,3), np.array([[4,5,6]]) is (1,3), and the scalar 0 can be thought of as [[0]], with shape (1,1). So the result is (1, 3+1+1+3) = (1,8), which is the shape of the result: array([[1, 2, 3, 0, 0, 4, 5, 6]])", "best_answers_score":0.7654, "library_name":"numpy", "question_url":"https:\/\/stackoverflow.com\/questions\/10894323\/what-does-the-c-underscore-expression-c-do-exactly", "best_answers_votes":83, "question_length":490, "response_length":658 }, { "question":"Colorplot of 2D array matplotlib So, I thought this was going to be really simple, but I've been having a lot of difficulty finding exactly what I'm looking for in a comprehensible example. Basically I want to make phase plots, so assuming I have a 2d array, how can I get matplotlib to convert this to a plot that I can attach titles, axes, and legends (color bars) to? I'm looking for an extremely simple bare-bones solution that only uses what is required and that will work with any 2D array. I'm certain this is simple and I'm just being thick somehow, but I'm really having a lot of trouble with this.
I have been tooling with the examples, but they don't seem well suited to what I'm trying to do: I like the general appearance of this graph, I'd just like to be able to pass in a 2dArray and have this same result: ``` import numpy as np import matplotlib as ml import matplotlib.pyplot as plt H = [[1,2,3,4][5,6,7,8][9,10,11,12][13,14,15,16]] fig = plt.figure(figsize=(6, 3.2)) ax = fig.add_subplot(111) ax.set_title('colorMap') X,Y = np.meshgrid(xedges, yedges) plt.pcolormesh(X, Y, H) ax.set_aspect('equal') cax = fig.add_axes([0.12, 0.1, 0.78, 0.8]) cax.get_xaxis().set_visible(False) cax.get_yaxis().set_visible(False) cax.patch.set_alpha(0) cax.set_frame_on(False) plt.colorbar(orientation='vertical') plt.show() ```", "response":"I'm afraid your posted example is not working, since X and Y aren't defined. So instead of pcolormesh let's use imshow: ``` import numpy as np import matplotlib.pyplot as plt H = np.array([[1, 2, 3, 4], [5, 6, 7, 8], [9, 10, 11, 12], [13, 14, 15, 16]]) # added some commas and array creation code fig = plt.figure(figsize=(6, 3.2)) ax = fig.add_subplot(111) ax.set_title('colorMap') plt.imshow(H) ax.set_aspect('equal') cax = fig.add_axes([0.12, 0.1, 0.78, 0.8]) cax.get_xaxis().set_visible(False) cax.get_yaxis().set_visible(False) cax.patch.set_alpha(0) cax.set_frame_on(False) plt.colorbar(orientation='vertical') plt.show() ```", "best_answers_score":0.7636, "library_name":"numpy", "question_url":"https:\/\/stackoverflow.com\/questions\/16492830\/colorplot-of-2d-array-matplotlib", "best_answers_votes":62, "question_length":1325, "response_length":631 }, { "question":"How to transform numpy.matrix or array to scipy sparse matrix For SciPy sparse matrix, one can use todense() or toarray() to transform to NumPy matrix or array. What are the functions to do the inverse? I searched, but got no idea what keywords should be the right hit.", "response":"You can pass a numpy array or matrix as an argument when initializing a sparse matrix. 
For a CSR matrix, for example, you can do the following. ``` >>> import numpy as np >>> from scipy import sparse >>> A = np.array([[1,2,0],[0,0,3],[1,0,4]]) >>> B = np.matrix([[1,2,0],[0,0,3],[1,0,4]]) >>> A array([[1, 2, 0], [0, 0, 3], [1, 0, 4]]) >>> sA = sparse.csr_matrix(A) # Here's the initialization of the sparse matrix. >>> sB = sparse.csr_matrix(B) >>> sA <3x3 sparse matrix of type '<class 'numpy.int64'>' with 5 stored elements in Compressed Sparse Row format> >>> print(sA) (0, 0) 1 (0, 1) 2 (1, 2) 3 (2, 0) 1 (2, 2) 4 ```", "best_answers_score":0.763, "library_name":"numpy", "question_url":"https:\/\/stackoverflow.com\/questions\/7922487\/how-to-transform-numpy-matrix-or-array-to-scipy-sparse-matrix", "best_answers_votes":162, "question_length":269, "response_length":572 }, { "question":"Convert byte array back to numpy array [duplicate] This question already has answers here: How can I make a numpy ndarray from bytes? (3 answers) Closed 2 years ago. You can convert a numpy array to bytes using the .tobytes() function. How do I decode it back from this bytes array to a numpy array? I tried this for an array i of shape (28,28): ``` >>> k = i.tobytes() >>> np.frombuffer(k) == i False ``` I also tried with uint8.", "response":"A couple of issues with what you're doing: frombuffer will always interpret the input as a 1-dimensional array. It's the first line of the documentation. So you'd have to reshape to be (28, 28). The default dtype is float. So if you didn't serialize out floats, then you'll have to specify the dtype manually (a priori no one can tell what a stream of bytes means: you have to say what they represent). If you want to make sure the arrays are equal, you have to use np.array_equal. Using == will do an elementwise operation, and return a numpy array of bools (this presumably isn't what you want). How do I decode it back from this bytes array to a numpy array?
Example: ``` In [3]: i = np.arange(28*28).reshape(28, 28) In [4]: k = i.tobytes() In [5]: y = np.frombuffer(k, dtype=i.dtype) In [6]: y.shape Out[6]: (784,) In [7]: np.array_equal(y.reshape(28, 28), i) Out[7]: True ``` HTH.", "best_answers_score":0.763, "library_name":"numpy", "question_url":"https:\/\/stackoverflow.com\/questions\/53376786\/convert-byte-array-back-to-numpy-array", "best_answers_votes":66, "question_length":413, "response_length":881 }, { "question":"Shared-memory objects in multiprocessing Suppose I have a large in memory numpy array, I have a function func that takes in this giant array as input (together with some other parameters). func with different parameters can be run in parallel. For example: ``` def func(arr, param): # do stuff to arr, param # build array arr pool = Pool(processes = 6) results = [pool.apply_async(func, [arr, param]) for param in all_params] output = [res.get() for res in results] ``` If I use multiprocessing library, then that giant array will be copied for multiple times into different processes. Is there a way to let different processes share the same array? This array object is read-only and will never be modified. What's more complicated, if arr is not an array, but an arbitrary python object, is there a way to share it? [EDITED] I read the answer but I am still a bit confused. Since fork() is copy-on-write, we should not invoke any additional cost when spawning new processes in python multiprocessing library. 
But the following code suggests there is a huge overhead: ``` from multiprocessing import Pool, Manager import numpy as np import time def f(arr): return len(arr) t = time.time() arr = np.arange(10000000) print(\"construct array =\", time.time() - t) pool = Pool(processes = 6) t = time.time() res = pool.apply_async(f, [arr,]) res.get() print(\"multiprocessing overhead =\", time.time() - t) ``` output (and by the way, the cost increases as the size of the array increases, so I suspect there is still overhead related to memory copying): ```none construct array = 0.0178790092468 multiprocessing overhead = 0.252444982529 ``` Why is there such a huge overhead, if we didn't copy the array? And what part does the shared memory save me?", "response":"If you use an operating system that uses copy-on-write fork() semantics (like any common unix), then as long as you never alter your data structure it will be available to all child processes without taking up additional memory. You will not have to do anything special (except make absolutely sure you don't alter the object). The most efficient thing you can do for your problem would be to pack your array into an efficient array structure (using numpy or array), place that in shared memory, wrap it with multiprocessing.Array, and pass that to your functions. This answer shows how to do that. If you want a writeable shared object, then you will need to wrap it with some kind of synchronization or locking. multiprocessing provides two methods of doing this: one using shared memory (suitable for simple values, arrays, or ctypes) or a Manager proxy, where one process holds the memory and a manager arbitrates access to it from other processes (even over a network). The Manager approach can be used with arbitrary Python objects, but will be slower than the equivalent using shared memory because the objects need to be serialized\/deserialized and sent between processes.
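To make the shared-memory route concrete, here is a minimal sketch (an illustration added here, not part of the original answer) using the multiprocessing.shared_memory module available since Python 3.8. A real worker process would attach to the block by name; for brevity, the attach side is shown in the same process:

```python
import numpy as np
from multiprocessing import shared_memory

src = np.arange(1_000_000, dtype=np.int64)

# "Parent" side: allocate a named shared block and copy the data in once.
shm = shared_memory.SharedMemory(create=True, size=src.nbytes)
shared = np.ndarray(src.shape, dtype=src.dtype, buffer=shm.buf)
shared[:] = src

# "Worker" side: attach to the same block by name -- no pickling, no copy.
attached = shared_memory.SharedMemory(name=shm.name)
view = np.ndarray(src.shape, dtype=src.dtype, buffer=attached.buf)
checksum = int(view.sum())  # reads straight out of the shared block

# Clean up: drop the array views first (they hold the buffer open),
# then close every handle and unlink the block once.
del view, shared
attached.close()
shm.close()
shm.unlink()
```

Because only the block's short name crosses the process boundary, a pool worker no longer pays the serialization cost for the array itself.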
There are a wealth of parallel processing libraries and approaches available in Python. multiprocessing is an excellent and well rounded library, but if you have special needs perhaps one of the other approaches may be better.", "best_answers_score":0.7628, "library_name":"numpy", "question_url":"https:\/\/stackoverflow.com\/questions\/10721915\/shared-memory-objects-in-multiprocessing", "best_answers_votes":149, "question_length":1746, "response_length":1407 }, { "question":"Binning a column with pandas I have a data frame column with numeric values: ``` df['percentage'].head() 46.5 44.2 100.0 42.12 ``` I want to see the column as bin counts: ``` bins = [0, 1, 5, 10, 25, 50, 100] ``` How can I get the result as bins with their value counts? ``` [0, 1] bin amount [1, 5] etc [5, 10] etc ... ```", "response":"You can use pandas.cut: ``` bins = [0, 1, 5, 10, 25, 50, 100] df['binned'] = pd.cut(df['percentage'], bins) print (df) percentage binned 0 46.50 (25, 50] 1 44.20 (25, 50] 2 100.00 (50, 100] 3 42.12 (25, 50] ``` ``` bins = [0, 1, 5, 10, 25, 50, 100] labels = [1,2,3,4,5,6] df['binned'] = pd.cut(df['percentage'], bins=bins, labels=labels) print (df) percentage binned 0 46.50 5 1 44.20 5 2 100.00 6 3 42.12 5 ``` Or numpy.searchsorted: ``` bins = [0, 1, 5, 10, 25, 50, 100] df['binned'] = np.searchsorted(bins, df['percentage'].values) print (df) percentage binned 0 46.50 5 1 44.20 5 2 100.00 6 3 42.12 5 ``` ...and then value_counts or groupby and aggregate size: ``` s = pd.cut(df['percentage'], bins=bins).value_counts() print (s) (25, 50] 3 (50, 100] 1 (10, 25] 0 (5, 10] 0 (1, 5] 0 (0, 1] 0 Name: percentage, dtype: int64 ``` ``` s = df.groupby(pd.cut(df['percentage'], bins=bins)).size() print (s) percentage (0, 1] 0 (1, 5] 0 (5, 10] 0 (10, 25] 0 (25, 50] 3 (50, 100] 1 dtype: int64 ``` By default cut returns categorical. 
Series methods like Series.value_counts() will use all categories, even if some categories are not present in the data; see the pandas documentation on operations on categorical data.", "best_answers_score":0.761, "library_name":"numpy", "question_url":"https:\/\/stackoverflow.com\/questions\/45273731\/binning-a-column-with-pandas", "best_answers_votes":363, "question_length":323, "response_length":1176 }, { "question":"numpy vstack vs. column_stack What exactly is the difference between numpy vstack and column_stack? Reading through the documentation, it looks as if column_stack is an implementation of vstack for 1D arrays. Is it a more efficient implementation? Otherwise, I cannot find a reason for just having vstack.", "response":"I think the following code illustrates the difference nicely: ``` >>> np.vstack(([1,2,3],[4,5,6])) array([[1, 2, 3], [4, 5, 6]]) >>> np.column_stack(([1,2,3],[4,5,6])) array([[1, 4], [2, 5], [3, 6]]) >>> np.hstack(([1,2,3],[4,5,6])) array([1, 2, 3, 4, 5, 6]) ``` I've included hstack for comparison as well. Notice how column_stack stacks along the second dimension whereas vstack stacks along the first dimension. The equivalent to column_stack is the following hstack command: ``` >>> np.hstack(([[1],[2],[3]],[[4],[5],[6]])) array([[1, 4], [2, 5], [3, 6]]) ``` I hope we can agree that column_stack is more convenient.", "best_answers_score":0.761, "library_name":"numpy", "question_url":"https:\/\/stackoverflow.com\/questions\/16473042\/numpy-vstack-vs-column-stack", "best_answers_votes":122, "question_length":305, "response_length":621 }, { "question":"Cython: \"fatal error: numpy\/arrayobject.h: No such file or directory\" I'm trying to speed up the answer here using Cython. I try to compile the code (after doing the cygwinccompiler.py hack explained here), but get a fatal error: numpy\/arrayobject.h: No such file or directory...compilation terminated error. Can anyone tell me if it's a problem with my code, or some esoteric subtlety with Cython? Below is my code.
``` import numpy as np import scipy as sp cimport numpy as np cimport cython cdef inline np.ndarray[np.int, ndim=1] fbincount(np.ndarray[np.int_t, ndim=1] x): cdef int m = np.amax(x)+1 cdef int n = x.size cdef unsigned int i cdef np.ndarray[np.int_t, ndim=1] c = np.zeros(m, dtype=np.int) for i in xrange(n): c[x[i]] += 1 return c cdef packed struct Point: np.float64_t f0, f1 @cython.boundscheck(False) def sparsemaker(np.ndarray[np.float_t, ndim=2] X not None, np.ndarray[np.float_t, ndim=2] Y not None, np.ndarray[np.float_t, ndim=2] Z not None): cdef np.ndarray[np.float64_t, ndim=1] counts, factor cdef np.ndarray[np.int_t, ndim=1] row, col, repeats cdef np.ndarray[Point] indices cdef int x_, y_ _, row = np.unique(X, return_inverse=True); x_ = _.size _, col = np.unique(Y, return_inverse=True); y_ = _.size indices = np.rec.fromarrays([row,col]) _, repeats = np.unique(indices, return_inverse=True) counts = 1. \/ fbincount(repeats) Z.flat *= counts.take(repeats) return sp.sparse.csr_matrix((Z.flat,(row,col)), shape=(x_, y_)).toarray() ```", "response":"In your setup.py, the Extension should have the argument include_dirs=[numpy.get_include()]. Also, you are missing np.import_array() in your code. 
-- Example setup.py: ``` from distutils.core import setup, Extension from Cython.Build import cythonize import numpy setup( ext_modules=[ Extension(\"my_module\", [\"my_module.c\"], include_dirs=[numpy.get_include()]), ], ) # Or, if you use cythonize() to make the ext_modules list, # include_dirs can be passed to setup() setup( ext_modules=cythonize(\"my_module.pyx\"), include_dirs=[numpy.get_include()] ) ```", "best_answers_score":0.7599, "library_name":"numpy", "question_url":"https:\/\/stackoverflow.com\/questions\/14657375\/cython-fatal-error-numpy-arrayobject-h-no-such-file-or-directory", "best_answers_votes":272, "question_length":1464, "response_length":553 }, { "question":"Converting int arrays to string arrays in numpy without truncation Trying to convert int arrays to string arrays in numpy ``` In [66]: a=array([0,33,4444522]) In [67]: a.astype(str) Out[67]: array(['0', '3', '4'], dtype='|S1') ``` Not what I intended ``` In [68]: a.astype('S10') Out[68]: array(['0', '33', '4444522'], dtype='|S10') ``` This works but I had to know 10 was big enough to hold my longest string. Is there a way of doing this easily without knowing ahead of time what size string you need? It seems a little dangerous that it just quietly truncates your string without throwing an error.", "response":"You can stay in numpy, doing ``` np.char.mod('%d', a) ``` This is twice faster than map or list comprehensions for 10 elements, four times faster for 100. This and other string operations are documented here.", "best_answers_score":0.7594, "library_name":"numpy", "question_url":"https:\/\/stackoverflow.com\/questions\/9958846\/converting-int-arrays-to-string-arrays-in-numpy-without-truncation", "best_answers_votes":58, "question_length":601, "response_length":208 }, { "question":"What is the difference between contiguous and non-contiguous arrays? 
In the numpy manual about the reshape() function, it says ``` >>> a = np.zeros((10, 2)) # A transpose makes the array non-contiguous >>> b = a.T # Taking a view makes it possible to modify the shape without modifying the # initial object. >>> c = b.view() >>> c.shape = (20) AttributeError: incompatible shape for a non-contiguous array ``` My questions are: What are contiguous and non-contiguous arrays? Is it similar to a contiguous memory block in C, as in \"What is a contiguous memory block?\"? Is there any performance difference between these two? When should we use one or the other? Why does transpose make the array non-contiguous? Why does c.shape = (20) throw an error incompatible shape for a non-contiguous array? Thanks for your answer!", "response":"A contiguous array is just an array stored in an unbroken block of memory: to access the next value in the array, we just move to the next memory address. Consider the 2D array arr = np.arange(12).reshape(3,4). It looks like this: In the computer's memory, the values of arr are stored like this: This means arr is a C contiguous array because the rows are stored as contiguous blocks of memory. The next memory address holds the next value on that row. If we want to move down a column, we just need to jump over three blocks (e.g. to jump from 0 to 4 means we skip over 1, 2 and 3). Transposing the array with arr.T means that C contiguity is lost because adjacent row entries are no longer in adjacent memory addresses. However, arr.T is Fortran contiguous since the columns are in contiguous blocks of memory: Performance-wise, accessing memory addresses which are next to each other is very often faster than accessing addresses which are more \"spread out\" (fetching a value from RAM could entail a number of neighbouring addresses being fetched and cached for the CPU). This means that operations over contiguous arrays will often be quicker.
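You can check these layouts directly through an array's flags and strides attributes (a small illustrative addition, not from the original answer):

```python
import numpy as np

# int64 dtype is fixed explicitly so the stride values below hold everywhere.
arr = np.arange(12, dtype=np.int64).reshape(3, 4)

print(arr.flags['C_CONTIGUOUS'])    # True: rows sit in one unbroken block
print(arr.T.flags['C_CONTIGUOUS'])  # False: the transpose is a strided view
print(arr.T.flags['F_CONTIGUOUS'])  # True: its columns are contiguous instead

# The transpose simply swaps the strides (bytes to step along each axis).
print(arr.strides)    # (32, 8): 4 int64 values (32 bytes) per row, 8 per item
print(arr.T.strides)  # (8, 32)
```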
As a consequence of C contiguous memory layout, row-wise operations are usually faster than column-wise operations. For example, you'll typically find that ``` np.sum(arr, axis=1) # sum the rows ``` is slightly faster than: ``` np.sum(arr, axis=0) # sum the columns ``` Similarly, operations on columns will be slightly faster for Fortran contiguous arrays. Finally, why can't we flatten the Fortran contiguous array by assigning a new shape? ``` >>> arr2 = arr.T >>> arr2.shape = 12 AttributeError: incompatible shape for a non-contiguous array ``` In order for this to be possible NumPy would have to put the rows of arr.T together like this: (Setting the shape attribute directly assumes C order - i.e. NumPy tries to perform the operation row-wise.) This is impossible to do. For any axis, NumPy needs to have a constant stride length (the number of bytes to move) to get to the next element of the array. Flattening arr.T in this way would require skipping forwards and backwards in memory to retrieve consecutive values of the array. If we wrote arr2.reshape(12) instead, NumPy would copy the values of arr2 into a new block of memory (since it can't return a view on to the original data for this shape).", "best_answers_score":0.759, "library_name":"numpy", "question_url":"https:\/\/stackoverflow.com\/questions\/26998223\/what-is-the-difference-between-contiguous-and-non-contiguous-arrays", "best_answers_votes":403, "question_length":815, "response_length":2363 }, { "question":"How to have logarithmic bins in a Python histogram As far as I know the option Log=True in the histogram function only refers to the y-axis. ``` P.hist(d,bins=50,log=True,alpha=0.5,color='b',histtype='step') ``` I need the bins to be equally spaced in log10. Is there something that can do this?", "response":"use logspace() to create a geometric sequence, and pass it to bins parameter. And set the scale of xaxis to log scale. 
``` import pylab as pl import numpy as np data = np.random.normal(size=10000) pl.hist(data, bins=np.logspace(np.log10(0.1),np.log10(1.0), 50)) pl.gca().set_xscale(\"log\") pl.show() ```", "best_answers_score":0.7579, "library_name":"numpy", "question_url":"https:\/\/stackoverflow.com\/questions\/6855710\/how-to-have-logarithmic-bins-in-a-python-histogram", "best_answers_votes":160, "question_length":295, "response_length":302 }, { "question":"How to avoid overlapping of labels & autopct in a pie chart My Python code is: ``` import matplotlib.pyplot as plt values = [234, 64, 54,10, 0, 1, 0, 9, 2, 1, 7, 7] months = ['Jan', 'Feb', 'Mar', 'Apr', 'May', 'Jun', 'Jul','Aug','Sep','Oct', 'Nov','Dec'] colors = ['yellowgreen', 'red', 'gold', 'lightskyblue', 'white','lightcoral','blue','pink', 'darkgreen', 'yellow','grey','violet','magenta','cyan'] plt.pie(values, labels=months, autopct='%1.1f%%', shadow=True, colors=colors, startangle=90, radius=1.2) plt.show() ``` Is it possible to show the labels \"Jan\", \"Feb\", \"Mar\", etc.
and the percentages, either: without overlapping, or using an arrow mark?", "response":"Alternatively you can put the legends beside the pie graph: ``` import matplotlib.pyplot as plt import numpy as np x = np.char.array(['Jan','Feb','Mar','Apr','May','Jun','Jul','Aug','Sep','Oct', 'Nov','Dec']) y = np.array([234, 64, 54,10, 0, 1, 0, 9, 2, 1, 7, 7]) colors = ['yellowgreen','red','gold','lightskyblue','white','lightcoral','blue','pink', 'darkgreen','yellow','grey','violet','magenta','cyan'] porcent = 100.*y\/y.sum() patches, texts = plt.pie(y, colors=colors, startangle=90, radius=1.2) labels = ['{0} - {1:1.2f} %'.format(i,j) for i,j in zip(x, porcent)] sort_legend = True if sort_legend: patches, labels, dummy = zip(*sorted(zip(patches, labels, y), key=lambda x: x[2], reverse=True)) plt.legend(patches, labels, loc='center left', bbox_to_anchor=(-0.1, 1.), fontsize=8) plt.savefig('piechart.png', bbox_inches='tight') ``` EDIT: if you want to keep the legend in the original order, as you mentioned in the comments, you can set sort_legend=False in the code above, giving:", "best_answers_score":0.7579, "library_name":"numpy", "question_url":"https:\/\/stackoverflow.com\/questions\/23577505\/how-to-avoid-overlapping-of-labels-autopct-in-a-pie-chart", "best_answers_votes":82, "question_length":624, "response_length":992 }, { "question":"What is the difference between native int type and the numpy.int types? Can you please help me understand the main differences (if any) between the native int type and the numpy.int32 or numpy.int64 types?", "response":"There are several major differences. The first is that python integers are flexible-sized (at least in python 3.x). This means they can grow to accommodate any number of any size (within memory constraints, of course). The numpy integers, on the other hand, are fixed-sized. This means there is a maximum value they can hold. This is defined by the number of bytes in the integer (int32 vs.
int64), with more bytes holding larger numbers, as well as whether the number is signed or unsigned (int32 vs. uint32), with unsigned being able to hold larger numbers but not able to hold negative numbers. So, you might ask, why use fixed-sized integers? The reason is that modern processors have built-in tools for doing math on fixed-size integers, so calculations on those are much, much, much faster. In fact, python uses fixed-sized integers behind the scenes when the number is small enough, only switching to the slower, flexible-sized integers when the number gets too large. Another advantage of fixed-sized values is that they can be placed into consistently-sized adjacent memory blocks of the same type. This is the format that numpy arrays use to store data. The libraries that numpy relies on are able to do extremely fast computations on data in this format; in fact, modern CPUs have built-in features for accelerating this sort of computation. With the variable-sized python integers, this sort of computation is impossible because there is no way to say how big the blocks should be and no consistency in the data format. That being said, numpy is actually able to make arrays of python integers. But rather than containing the values themselves, those arrays contain references to other pieces of memory holding the actual python integers. This cannot be accelerated in the same way, so even if all the python integers fit within the fixed integer size, it still won't be accelerated. None of this is the case with Python 2. In Python 2, Python integers are fixed integers and thus can be directly translated into numpy integers. For variable-length integers, Python 2 had the long type.
But this was confusing and it was decided this confusion wasn't worth the performance gains, especially when people who need performance would be using numpy or something like it anyway.", "best_answers_score":0.7578, "library_name":"numpy", "question_url":"https:\/\/stackoverflow.com\/questions\/38155039\/what-is-the-difference-between-native-int-type-and-the-numpy-int-types", "best_answers_votes":60, "question_length":211, "response_length":2298 }, { "question":"Conditionally fill column values based on another columns value in pandas I have a DataFrame with a few columns. One columns contains a symbol for which currency is being used, for instance a euro or a dollar sign. Another column contains a budget value. So for instance in one row it could mean a budget of 5000 in euro and in the next row it could say a budget of 2000 in dollar. In pandas I would like to add an extra column to my DataFrame, normalizing the budgets in euro. So basically, for each row the value in the new column should be the value from the budget column * 1 if the symbol in the currency column is a euro sign, and the value in the new column should be the value of the budget column * 0.78125 if the symbol in the currency column is a dollar sign. I know how to add a column, fill it with values, copy values from another column etc. but not how to fill the new column conditionally based on the value of another column. Any suggestions?", "response":"You probably want to do ``` df['Normalized'] = np.where(df['Currency'] == '$', df['Budget'] * 0.78125, df['Budget']) ```", "best_answers_score":0.7572, "library_name":"numpy", "question_url":"https:\/\/stackoverflow.com\/questions\/10715519\/conditionally-fill-column-values-based-on-another-columns-value-in-pandas", "best_answers_votes":131, "question_length":960, "response_length":120 }, { "question":"Annotate Time Series plot I have an index array (x) of dates (datetime objects) and an array of actual values (y: bond prices). 
Doing the following: ```py plot(x,y) ``` produces a perfectly fine time series graph with the x-axis labeled with the dates. No problem so far. But I want to add text on certain dates. For example, on 2009-10-31, I wish to display the text \"Event 1\" with an arrow pointing to the y value at that date. I have read through the Matplotlib documentation on text() and annotate() to no avail.", "response":"Matplotlib uses an internal floating point format for dates. You just need to convert your date to that format (using matplotlib.dates.date2num or matplotlib.dates.datestr2num) and then use annotate as usual. As a somewhat excessively fancy example: ``` import datetime as dt import matplotlib.pyplot as plt import matplotlib.dates as mdates x = [dt.datetime(2009, 5, 1), dt.datetime(2010, 6, 1), dt.datetime(2011, 4, 1), dt.datetime(2012, 6, 1)] y = [1, 3, 2, 5] fig, ax = plt.subplots() ax.plot_date(x, y, linestyle='--') ax.annotate('Test', (mdates.date2num(x[1]), y[1]), xytext=(15, 15), textcoords='offset points', arrowprops=dict(arrowstyle='-|>')) fig.autofmt_xdate() plt.show() ```", "best_answers_score":0.7563, "library_name":"numpy", "question_url":"https:\/\/stackoverflow.com\/questions\/11067368\/annotate-time-series-plot", "best_answers_votes":89, "question_length":516, "response_length":697 }, { "question":"sampling random floats on a range in numpy How can I sample random floats on an interval [a, b] in numpy? Not just integers, but any real numbers. For example, random_float(5, 10) would return random numbers between [5, 10]. thanks.", "response":"The uniform distribution would probably do what you are asking.
``` np.random.uniform(5,10) # A single value np.random.uniform(5,10,[2,3]) # A 2x3 array ```", "best_answers_score":0.7538, "library_name":"numpy", "question_url":"https:\/\/stackoverflow.com\/questions\/11873741\/sampling-random-floats-on-a-range-in-numpy", "best_answers_votes":109, "question_length":232, "response_length":156 }, { "question":"Resize rectangular image to square, keeping ratio and fill background with black I'm trying to resize a batch of grayscale images that are 256 x N pixels (N varies, but is always \u2264256). My intention is to downscale the images. The resize would have to output a square (1:1) image, with: resized image centered vertically aspect ratio maintained remaining pixels rendered black Visually this would be the desired result: I have tried creating a numpy zeroes matrix with the target size (e.g. 200 x 200) but have not been able to paste the resized image into its vertical center. Any suggestions using cv2, PIL or numpy are welcome.", "response":"You can use Pillow to accomplish that: Code: ``` from PIL import Image def make_square(im, min_size=256, fill_color=(0, 0, 0, 0)): x, y = im.size size = max(min_size, x, y) new_im = Image.new('RGBA', (size, size), fill_color) new_im.paste(im, (int((size - x) \/ 2), int((size - y) \/ 2))) return new_im ``` Test Code: ``` test_image = Image.open('hLarp.png') new_image = make_square(test_image) new_image.show() ``` For a white background you can do: ``` new_image = make_square(test_image, fill_color=(255, 255, 255, 0)) ``` Result:", "best_answers_score":0.7535, "library_name":"numpy", "question_url":"https:\/\/stackoverflow.com\/questions\/44231209\/resize-rectangular-image-to-square-keeping-ratio-and-fill-background-with-black", "best_answers_votes":64, "question_length":630, "response_length":531 }, { "question":"What's the fastest way in Python to calculate cosine similarity given sparse matrix data? 
Given a sparse matrix listing, what's the best way to calculate the cosine similarity between each of the columns (or rows) in the matrix? I would rather not iterate n-choose-two times. Say the input matrix is: ``` A= [0 1 0 0 1 0 0 1 1 1 1 1 0 1 0] ``` The sparse representation is: ``` A = 0, 1 0, 4 1, 2 1, 3 1, 4 2, 0 2, 1 2, 3 ``` In Python, it's straightforward to work with the matrix-input format: ``` import numpy as np from sklearn.metrics import pairwise_distances from scipy.spatial.distance import cosine A = np.array( [[0, 1, 0, 0, 1], [0, 0, 1, 1, 1], [1, 1, 0, 1, 0]]) dist_out = 1-pairwise_distances(A, metric=\"cosine\") dist_out ``` Gives: ``` array([[ 1. , 0.40824829, 0.40824829], [ 0.40824829, 1. , 0.33333333], [ 0.40824829, 0.33333333, 1. ]]) ``` That's fine for a full-matrix input, but I really want to start with the sparse representation (due to the size and sparsity of my matrix). Any ideas about how this could best be accomplished?", "response":"You can compute pairwise cosine similarity on the rows of a sparse matrix directly using sklearn. As of version 0.17 it also supports sparse output: ``` from sklearn.metrics.pairwise import cosine_similarity from scipy import sparse A = np.array([[0, 1, 0, 0, 1], [0, 0, 1, 1, 1],[1, 1, 0, 1, 0]]) A_sparse = sparse.csr_matrix(A) similarities = cosine_similarity(A_sparse) print('pairwise dense output:\\n {}\\n'.format(similarities)) #also can output sparse matrices similarities_sparse = cosine_similarity(A_sparse,dense_output=False) print('pairwise sparse output:\\n {}\\n'.format(similarities_sparse)) ``` Results: ``` pairwise dense output: [[ 1. 0.40824829 0.40824829] [ 0.40824829 1. 0.33333333] [ 0.40824829 0.33333333 1. 
]] pairwise sparse output: (0, 1) 0.408248290464 (0, 2) 0.408248290464 (0, 0) 1.0 (1, 0) 0.408248290464 (1, 2) 0.333333333333 (1, 1) 1.0 (2, 1) 0.333333333333 (2, 0) 0.408248290464 (2, 2) 1.0 ``` If you want column-wise cosine similarities simply transpose your input matrix beforehand: ``` A_sparse.transpose() ```", "best_answers_score":0.7533, "library_name":"numpy", "question_url":"https:\/\/stackoverflow.com\/questions\/17627219\/whats-the-fastest-way-in-python-to-calculate-cosine-similarity-given-sparse-mat", "best_answers_votes":101, "question_length":1051, "response_length":1042 }, { "question":"Rank items in an array using Python\/NumPy, without sorting array twice I have an array of numbers and I'd like to create another array that represents the rank of each item in the first array. I'm using Python and NumPy. For example: ``` array = [4,2,7,1] ranks = [2,1,3,0] ``` Here's the best method I've come up with: ``` array = numpy.array([4,2,7,1]) temp = array.argsort() ranks = numpy.arange(len(array))[temp.argsort()] ``` Are there any better\/faster methods that avoid sorting the array twice?", "response":"This question is a few years old, and the accepted answer is great, but I think the following is still worth mentioning. If you don't mind the dependence on scipy, you can use scipy.stats.rankdata: ``` In [22]: from scipy.stats import rankdata In [23]: a = [4, 2, 7, 1] In [24]: rankdata(a) Out[24]: array([ 3., 2., 4., 1.]) In [25]: (rankdata(a) - 1).astype(int) Out[25]: array([2, 1, 3, 0]) ``` A nice feature of rankdata is that the method argument provides several options for handling ties. For example, there are three occurrences of 20 and two occurrences of 40 in b: ``` In [26]: b = [40, 20, 70, 10, 20, 50, 30, 40, 20] ``` The default assigns the average rank to the tied values: ``` In [27]: rankdata(b) Out[27]: array([ 6.5, 3. , 9. , 1. , 3. , 8. , 5. , 6.5, 3. 
]) ``` method='ordinal' assigns consecutive ranks: ``` In [28]: rankdata(b, method='ordinal') Out[28]: array([6, 2, 9, 1, 3, 8, 5, 7, 4]) ``` method='min' assigns the minimum rank of the tied values to all the tied values: ``` In [29]: rankdata(b, method='min') Out[29]: array([6, 2, 9, 1, 2, 8, 5, 6, 2]) ``` See the docstring for more options.", "best_answers_score":0.7523, "library_name":"numpy", "question_url":"https:\/\/stackoverflow.com\/questions\/5284646\/rank-items-in-an-array-using-python-numpy-without-sorting-array-twice", "best_answers_votes":139, "question_length":502, "response_length":1120 }, { "question":"Getting the integer index of a Pandas DataFrame row fulfilling a condition? I have the following DataFrame: ``` a b c b 2 1 2 3 5 4 5 6 ``` As you can see, column b is used as an index. I want to get the ordinal number of the row fulfilling ('b' == 5), which in this case would be 1. The column being tested can be either an index column (as with b in this case) or a regular column, e.g. I may want to find the index of the row fulfilling ('c' == 6).", "response":"Use Index.get_loc instead. Reusing @unutbu's set up code, you'll achieve the same results. 
``` >>> import pandas as pd >>> import numpy as np >>> df = pd.DataFrame(np.arange(1,7).reshape(2,3), columns = list('abc'), index=pd.Series([2,5], name='b')) >>> df a b c b 2 1 2 3 5 4 5 6 >>> df.index.get_loc(5) 1 ```", "best_answers_score":0.7519, "library_name":"numpy", "question_url":"https:\/\/stackoverflow.com\/questions\/18199288\/getting-the-integer-index-of-a-pandas-dataframe-row-fulfilling-a-condition", "best_answers_votes":79, "question_length":451, "response_length":310 }, { "question":"Moving average or running mean Is there a SciPy function or NumPy function or module for Python that calculates the running mean of a 1D array given a specific window?", "response":"NOTE: More efficient solutions may include scipy.ndimage.uniform_filter1d (see this answer), or using newer libraries including talib's talib.MA. Use np.convolve: ``` np.convolve(x, np.ones(N)\/N, mode='valid') ``` Explanation The running mean is a case of the mathematical operation of convolution. For the running mean, you slide a window along the input and compute the mean of the window's contents. For discrete 1D signals, convolution is the same thing, except instead of the mean you compute an arbitrary linear combination, i.e., multiply each element by a corresponding coefficient and add up the results. Those coefficients, one for each position in the window, are sometimes called the convolution kernel. The arithmetic mean of N values is (x_1 + x_2 + ... + x_N) \/ N, so the corresponding kernel is (1\/N, 1\/N, ..., 1\/N), and that's exactly what we get by using np.ones(N)\/N. Edges The mode argument of np.convolve specifies how to handle the edges. I chose the valid mode here because I think that's how most people expect the running mean to work, but you may have other priorities. 
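A minimal runnable version of the 'valid'-mode running mean described above:

```python
import numpy as np

def running_mean(x, N):
    # 'valid' keeps only positions where the length-N window fully overlaps x,
    # so the output has len(x) - N + 1 samples
    return np.convolve(x, np.ones(N) / N, mode="valid")

print(running_mean(np.array([1.0, 2.0, 3.0, 4.0, 5.0]), 3))  # [2. 3. 4.]
```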
Here is a plot that illustrates the difference between the modes: ``` import numpy as np import matplotlib.pyplot as plt modes = ['full', 'same', 'valid'] for m in modes: plt.plot(np.convolve(np.ones(200), np.ones(50)\/50, mode=m)); plt.axis([-10, 251, -.1, 1.1]); plt.legend(modes, loc='lower center'); plt.show() ```", "best_answers_score":0.7517, "library_name":"numpy", "question_url":"https:\/\/stackoverflow.com\/questions\/13728392\/moving-average-or-running-mean", "best_answers_votes":382, "question_length":167, "response_length":1413 }, { "question":"Numpy how to iterate over columns of array? Suppose I have and m x n array. I want to pass each column of this array to a function to perform some operation on the entire column. How do I iterate over the columns of the array? For example, I have a 4 x 3 array like ``` 1 99 2 2 14 5 3 12 7 4 43 1 for column in array: some_function(column) ``` where column would be \"1,2,3,4\" in the first iteration, \"99,14,12,43\" in the second, and \"2,5,7,1\" in the third.", "response":"Just iterate over the transposed of your array: ``` for column in array.T: some_function(column) ```", "best_answers_score":0.7507, "library_name":"numpy", "question_url":"https:\/\/stackoverflow.com\/questions\/10148818\/numpy-how-to-iterate-over-columns-of-array", "best_answers_votes":303, "question_length":457, "response_length":100 }, { "question":"Fitting a Weibull distribution using Scipy I am trying to recreate maximum likelihood distribution fitting, I can already do this in Matlab and R, but now I want to use scipy. In particular, I would like to estimate the Weibull distribution parameters for my data set. 
I have tried this: ``` import scipy.stats as s import numpy as np import matplotlib.pyplot as plt def weib(x,n,a): return (a \/ n) * (x \/ n)**(a - 1) * np.exp(-(x \/ n)**a) data = np.loadtxt(\"stack_data.csv\") (loc, scale) = s.exponweib.fit_loc_scale(data, 1, 1) print loc, scale x = np.linspace(data.min(), data.max(), 1000) plt.plot(x, weib(x, loc, scale)) plt.hist(data, data.max(), density=True) plt.show() ``` And get this: ``` (2.5827280639441961, 3.4955032285727947) ``` And a distribution that looks like this: I have been using the exponweib after reading this http:\/\/www.johndcook.com\/distributions_scipy.html. I have also tried the other Weibull functions in scipy (just in case!). In Matlab (using the Distribution Fitting Tool - see screenshot) and in R (using both the MASS library function fitdistr and the GAMLSS package) I get a (loc) and b (scale) parameters more like 1.58463497 5.93030013. I believe all three methods use the maximum likelihood method for distribution fitting. I have posted my data here if you would like to have a go! And for completeness I am using Python 2.7.5, Scipy 0.12.0, R 2.15.2 and Matlab 2012b. Why am I getting a different result!?", "response":"My guess is that you want to estimate the shape parameter and the scale of the Weibull distribution while keeping the location fixed. Fixing loc assumes that the values of your data and of the distribution are positive with lower bound at zero. floc=0 keeps the location fixed at zero, f0=1 keeps the first shape parameter of the exponential weibull fixed at one. ``` >>> stats.exponweib.fit(data, floc=0, f0=1) [1, 1.8553346917584836, 0, 6.8820748596850905] >>> stats.weibull_min.fit(data, floc=0) [1.8553346917584836, 0, 6.8820748596850549] ``` The fit compared to the histogram looks ok, but not very good. The parameter estimates are a bit higher than the ones you mention are from R and matlab. 
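A runnable sketch of the floc=0 fit suggested above; the sample is synthetic, and the shape/scale values 1.79 and 6.89 are chosen here purely for illustration:

```python
import numpy as np
from scipy import stats

# Synthetic Weibull sample; 1.79 / 6.89 are illustrative, not the asker's data
data = stats.weibull_min.rvs(1.79, loc=0, scale=6.89, size=2000, random_state=0)

# Fixing floc=0 leaves only the shape and scale free, as in the answer
shape, loc, scale = stats.weibull_min.fit(data, floc=0)
print(shape, loc, scale)  # estimates close to 1.79, 0, 6.89
```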
Update The closest I can get to the plot that is now available is with unrestricted fit, but using starting values. The plot is still less peaked. Note values in fit that don't have an f in front are used as starting values. ``` >>> import numpy as np >>> from scipy import stats >>> import matplotlib.pyplot as plt >>> plt.plot(data, stats.exponweib.pdf(data, *stats.exponweib.fit(data, 1, 1, scale=2, loc=0))) >>> _ = plt.hist(data, bins=np.linspace(0, 16, 33), density=True, alpha=0.5); >>> plt.show() ```", "best_answers_score":0.7503, "library_name":"numpy", "question_url":"https:\/\/stackoverflow.com\/questions\/17481672\/fitting-a-weibull-distribution-using-scipy", "best_answers_votes":41, "question_length":1447, "response_length":1185 }, { "question":"How to append data to one specific dataset in a hdf5 file with h5py I am looking for a possibility to append data to an existing dataset inside a .h5 file using Python (h5py). A short intro to my project: I try to train a CNN using medical image data. Because of the huge amount of data and heavy memory usage during the transformation of the data to NumPy arrays, I needed to split the \"transformation\" into a few data chunks: load and preprocess the first 100 medical images and save the NumPy arrays to an hdf5 file, then load the next 100 datasets and append to the existing .h5 file, and so on.
Now, I tried to store the first 100 transformed NumPy arrays as follows: ``` import h5py from LoadIPV import LoadIPV X_train_data, Y_train_data, X_test_data, Y_test_data = LoadIPV() with h5py.File('.\\PreprocessedData.h5', 'w') as hf: hf.create_dataset(\"X_train\", data=X_train_data, maxshape=(None, 512, 512, 9)) hf.create_dataset(\"X_test\", data=X_test_data, maxshape=(None, 512, 512, 9)) hf.create_dataset(\"Y_train\", data=Y_train_data, maxshape=(None, 512, 512, 1)) hf.create_dataset(\"Y_test\", data=Y_test_data, maxshape=(None, 512, 512, 1)) ``` As can be seen, the transformed NumPy arrays are splitted into four different \"groups\" that are stored into the four hdf5 datasets[X_train, X_test, Y_train, Y_test]. The LoadIPV() function performs the preprocessing of the medical image data. My problem is that I would like to store the next 100 NumPy arrays into the same .h5 file into the existing datasets: that means that I would like to append to, for example, the existing X_train dataset of shape [100, 512, 512, 9] with the next 100 NumPy arrays, such that X_train becomes of shape [200, 512, 512, 9]. The same should work for the other three datasets X_test, Y_train and Y_test.", "response":"I have found a solution that seems to work! Have a look at this: incremental writes to hdf5 with h5py! In order to append data to a specific dataset it is necessary to first resize the specific dataset in the corresponding axis and subsequently append the new data at the end of the \"old\" nparray. 
Thus, the solution looks like this: ``` with h5py.File('.\\PreprocessedData.h5', 'a') as hf: hf[\"X_train\"].resize((hf[\"X_train\"].shape[0] + X_train_data.shape[0]), axis = 0) hf[\"X_train\"][-X_train_data.shape[0]:] = X_train_data hf[\"X_test\"].resize((hf[\"X_test\"].shape[0] + X_test_data.shape[0]), axis = 0) hf[\"X_test\"][-X_test_data.shape[0]:] = X_test_data hf[\"Y_train\"].resize((hf[\"Y_train\"].shape[0] + Y_train_data.shape[0]), axis = 0) hf[\"Y_train\"][-Y_train_data.shape[0]:] = Y_train_data hf[\"Y_test\"].resize((hf[\"Y_test\"].shape[0] + Y_test_data.shape[0]), axis = 0) hf[\"Y_test\"][-Y_test_data.shape[0]:] = Y_test_data ``` However, note that you should create the dataset with maxshape=(None,), for example ``` h5f.create_dataset('X_train', data=orig_data, compression=\"gzip\", chunks=True, maxshape=(None,)) ``` otherwise the dataset cannot be extended.", "best_answers_score":0.7499, "library_name":"numpy", "question_url":"https:\/\/stackoverflow.com\/questions\/47072859\/how-to-append-data-to-one-specific-dataset-in-a-hdf5-file-with-h5py", "best_answers_votes":75, "question_length":1778, "response_length":1152 }, { "question":"Find out the percentage of missing values in each column in the given dataset ``` import pandas as pd df = pd.read_csv('https:\/\/query.data.world\/s\/Hfu_PsEuD1Z_yJHmGaxWTxvkz7W_b0') percent= 100*(len(df.loc[:,df.isnull().sum(axis=0)>=1 ].index) \/ len(df.index)) print(round(percent,2)) ``` input is https:\/\/query.data.world\/s\/Hfu_PsEuD1Z_yJHmGaxWTxvkz7W_b0 and the output should be ``` Ord_id 0.00 Prod_id 0.00 Ship_id 0.00 Cust_id 0.00 Sales 0.24 Discount 0.65 Order_Quantity 0.65 Profit 0.65 Shipping_Cost 0.65 Product_Base_Margin 1.30 dtype: float64 ```", "response":"How about this? I think I actually found something similar on here once before, but I'm not seeing it now... 
``` percent_missing = df.isnull().sum() * 100 \/ len(df) missing_value_df = pd.DataFrame({'column_name': df.columns, 'percent_missing': percent_missing}) ``` And if you want the missing percentages sorted, follow the above with: ``` missing_value_df.sort_values('percent_missing', inplace=True) ``` As mentioned in the comments, you may also be able to get by with just the first line in my code above, i.e.: ``` percent_missing = df.isnull().sum() * 100 \/ len(df) ```", "best_answers_score":0.748, "library_name":"numpy", "question_url":"https:\/\/stackoverflow.com\/questions\/51070985\/find-out-the-percentage-of-missing-values-in-each-column-in-the-given-dataset", "best_answers_votes":131, "question_length":554, "response_length":576 }, { "question":"Understanding NumPy's einsum How does np.einsum work? Given arrays A and B, their matrix multiplication followed by transpose is computed using (A @ B).T, or equivalently, using: ``` np.einsum(\"ij, jk -> ki\", A, B) ```", "response":"(Note: this answer is based on a short blog post about einsum I wrote a while ago.) What does einsum do? Imagine that we have two multi-dimensional arrays, A and B. Now let's suppose we want to... multiply A with B in a particular way to create new array of products; and then maybe sum this new array along particular axes; and then maybe transpose the axes of the new array in a particular order. There's a good chance that einsum will help us do this faster and more memory-efficiently than combinations of the NumPy functions like multiply, sum and transpose will allow. How does einsum work? Here's a simple (but not completely trivial) example. Take the following two arrays: ``` A = np.array([0, 1, 2]) B = np.array([[ 0, 1, 2, 3], [ 4, 5, 6, 7], [ 8, 9, 10, 11]]) ``` We will multiply A and B element-wise and then sum along the rows of the new array. 
In \"normal\" NumPy we'd write: ``` >>> (A[:, np.newaxis] * B).sum(axis=1) array([ 0, 22, 76]) ``` So here, the indexing operation on A lines up the first axes of the two arrays so that the multiplication can be broadcast. The rows of the array of products are then summed to return the answer. Now if we wanted to use einsum instead, we could write: ``` >>> np.einsum('i,ij->i', A, B) array([ 0, 22, 76]) ``` The signature string 'i,ij->i' is the key here and needs a little bit of explaining. You can think of it in two halves. On the left-hand side (left of the ->) we've labelled the two input arrays. To the right of ->, we've labelled the array we want to end up with. Here is what happens next: A has one axis; we've labelled it i. And B has two axes; we've labelled axis 0 as i and axis 1 as j. By repeating the label i in both input arrays, we are telling einsum that these two axes should be multiplied together. In other words, we're multiplying array A with each column of array B, just like A[:, np.newaxis] * B does. Notice that j does not appear as a label in our desired output; we've just used i (we want to end up with a 1D array). By omitting the label, we're telling einsum to sum along this axis. In other words, we're summing the rows of the products, just like .sum(axis=1) does. That's basically all you need to know to use einsum. It helps to play about a little; if we leave both labels in the output, 'i,ij->ij', we get back a 2D array of products (same as A[:, np.newaxis] * B). If we say no output labels, 'i,ij->, we get back a single number (same as doing (A[:, np.newaxis] * B).sum()). The great thing about einsum however, is that it does not build a temporary array of products first; it just sums the products as it goes. This can lead to big savings in memory use. 
A slightly bigger example To explain the dot product, here are two new arrays: ``` A = array([[1, 1, 1], [2, 2, 2], [5, 5, 5]]) B = array([[0, 1, 0], [1, 1, 0], [1, 1, 1]]) ``` We will compute the dot product using np.einsum('ij,jk->ik', A, B). Here's a picture showing the labelling of the A and B and the output array that we get from the function: You can see that label j is repeated - this means we're multiplying the rows of A with the columns of B. Furthermore, the label j is not included in the output - we're summing these products. Labels i and k are kept for the output, so we get back a 2D array. It might be even clearer to compare this result with the array where the label j is not summed. Below, on the left you can see the 3D array that results from writing np.einsum('ij,jk->ijk', A, B) (i.e. we've kept label j): Summing axis j gives the expected dot product, shown on the right. Some exercises To get more of a feel for einsum, it can be useful to implement familiar NumPy array operations using the subscript notation. Anything that involves combinations of multiplying and summing axes can be written using einsum. Let A and B be two 1D arrays with the same length. For example, A = np.arange(10) and B = np.arange(5, 15). 
The sum of A can be written: ``` np.einsum('i->', A) ``` Element-wise multiplication, A * B, can be written: ``` np.einsum('i,i->i', A, B) ``` The inner product or dot product, np.inner(A, B) or np.dot(A, B), can be written: ``` np.einsum('i,i->', A, B) # or just use 'i,i' ``` The outer product, np.outer(A, B), can be written: ``` np.einsum('i,j->ij', A, B) ``` For 2D arrays, C and D, provided that the axes are compatible lengths (both the same length or one of them has length 1), here are a few examples: The trace of C (sum of main diagonal), np.trace(C), can be written: ``` np.einsum('ii', C) ``` Element-wise multiplication of C and the transpose of D, C * D.T, can be written: ``` np.einsum('ij,ji->ij', C, D) ``` Multiplying each element of C by the array D (to make a 4D array), C[:, :, None, None] * D, can be written: ``` np.einsum('ij,kl->ijkl', C, D) ```", "best_answers_score":0.7478, "library_name":"numpy", "question_url":"https:\/\/stackoverflow.com\/questions\/26089893\/understanding-numpys-einsum", "best_answers_votes":729, "question_length":218, "response_length":4779 }, { "question":"How to calculate 1st and 3rd quartiles? I have a DataFrame: ``` time_diff avg_trips 0 0.450000 1.0 1 0.483333 1.0 2 0.500000 1.0 3 0.516667 1.0 4 0.533333 2.0 ``` I want to get 1st quartile, 3rd quartile and median for the column time_diff. To obtain median, I use np.median(df[\"time_diff\"].values).
How can I calculate quartiles?", "response":"You can use np.percentile to calculate quartiles (including the median): ``` >>> np.percentile(df.time_diff, 25) # Q1 0.48333300000000001 >>> np.percentile(df.time_diff, 50) # median 0.5 >>> np.percentile(df.time_diff, 75) # Q3 0.51666699999999999 ``` Or all at once: ``` >>> np.percentile(df.time_diff, [25, 50, 75]) array([ 0.483333, 0.5 , 0.516667]) ```", "best_answers_score":0.7476, "library_name":"numpy", "question_url":"https:\/\/stackoverflow.com\/questions\/45926230\/how-to-calculate-1st-and-3rd-quartiles", "best_answers_votes":94, "question_length":328, "response_length":356 }, { "question":"Numpy: For every element in one array, find the index in another array I have two 1D arrays, x & y, one smaller than the other. I'm trying to find the index of every element of y in x. I've found two naive ways to do this, the first is slow, and the second memory-intensive. The slow way ``` indices= [] for iy in y: indices += np.where(x==iy)[0][0] ``` The memory hog ``` xe = np.outer([1,]*len(x), y) ye = np.outer(x, [1,]*len(y)) junk, indices = np.where(np.equal(xe, ye)) ``` Is there a faster way or less memory intensive approach? Ideally the search would take advantage of the fact that we are searching for not one thing in a list, but many things, and thus is slightly more amenable to parallelization. Bonus points if you don't assume that every element of y is actually in x.", "response":"As Joe Kington said, searchsorted() can search element very quickly. 
To deal with elements that are not in x, you can check the searched result with original y, and create a masked array: ``` import numpy as np x = np.array([3,5,7,1,9,8,6,6]) y = np.array([2,1,5,10,100,6]) index = np.argsort(x) sorted_x = x[index] sorted_index = np.searchsorted(sorted_x, y) yindex = np.take(index, sorted_index, mode=\"clip\") mask = x[yindex] != y result = np.ma.array(yindex, mask=mask) print result ``` the result is: ``` [-- 3 1 -- -- 6] ```", "best_answers_score":0.7474, "library_name":"numpy", "question_url":"https:\/\/stackoverflow.com\/questions\/8251541\/numpy-for-every-element-in-one-array-find-the-index-in-another-array", "best_answers_votes":52, "question_length":786, "response_length":529 }, { "question":"array.shape() giving error tuple not callable I have a 2D numpy array called results, which contains its own array of data, and I want to go into it and use each list: ```py for r in results: print \"r:\" print r y_pred = np.array(r) print y_pred.shape() ``` This is the output I get: ```none r: [ 25. 25. 25. 25. 25. 25. 26. 26. 26. 26. 26. 22. 27. 27. 42. 23. 23. 23. 28. 28. 28. 44. 29. 29. 30. 30. 30. 18. 18. 18. 19. 30. 17. 17. 17. 17. 2. 19. 2. 17. 17. 17. 17. 17. 17. 4. 17. 17. 41. 7. 17. 19. 19. 19. 10. 32. 4. 19. 34. 19. 34. 34. 34. 34. 34. 34. 20. 20. 20. 36. 36. 36. 4. 36. 36. 22. 22. 22. 22. 22. 22. 23. 23. 23. 27. 27. 27. 24. 39. 39. 10. 10. 10. 6. 10. 10. 11. 11. 11. 11. 11. 11. 12. 12. 12. 12. 12. 12. 13. 13. 13. 14. 14. 14. 15. 15. 15. 1. 17. 1. 2. 2. 2. 2. 2. 2. 2. 2. 2. 2. 2. 2. 2. 2. 2. 19. 19. 19. 2. 2. 4. 3. 3. 3. 4. 4. 4. 4. 4. 4. 4. 19. 4. 4. 4. 17. 5. 5. 5. 6. 6. 6. 6. 6. 6. 7. 7. 7. 7. 7. 7. 8. 8. 8. 8. 8. 8. 9. 9. 9. 23. 38. 38. 34. 34. 10. 17. 17. 26. 0. 42. 0. 18. 32. 32. 0. 0. 21. 38. 38. 38. 27. 27. 27. 0. 0. 0. 34. 2. 2. 0. 26. 26. 36. 0. 36. 36. 36. 23. 0. 27. 38. 25. 25. 25. 26. 26. 26. 0. 15. 15. 32. 38. 38. 0. 32. 32. 32. 41. 32. 7. 34. 32. 42. 34. 34. 36. 36. 25. 32. 32. 32. 36. 17. 8. 32. 17. 38. 3. 3. 3. 
18. 18. 18. 0. 1. 1. 34. 1. 1. 34. 17. 17. 34. 34. 34. 34. 34. 34. 17. 17. 17. 24. 2. 32. 2. 2. 2. 0. 2. 2. 0. 34. 34. 0. 1. 1. 38. 23. 38.] Traceback (most recent call last): File \"C:\\Users\\app\\Documents\\Python Scripts\\gbc_classifier_test.py\", line 93, in print y_pred.shape() TypeError: 'tuple' object is not callable ``` I don't understand why y_pred is not a regular array and why it's being considered a tuple, I've assigned it to be an array using r.", "response":"shape is just an attribute, not a method. Just use y_pred.shape (no parentheses). (The error message isn't telling you that y_pred is a tuple, it's telling you that y_pred.shape is a tuple.)", "best_answers_score":0.7473, "library_name":"numpy", "question_url":"https:\/\/stackoverflow.com\/questions\/25125168\/array-shape-giving-error-tuple-not-callable", "best_answers_votes":98, "question_length":1714, "response_length":190 }, { "question":"What is the difference between np.mean and tf.reduce_mean? In the MNIST beginner tutorial, there is the statement ``` accuracy = tf.reduce_mean(tf.cast(correct_prediction, \"float\")) ``` tf.cast basically changes the type of tensor the object is, but what is the difference between tf.reduce_mean and np.mean? Here is the doc on tf.reduce_mean: reduce_mean(input_tensor, reduction_indices=None, keep_dims=False, name=None) input_tensor: The tensor to reduce. Should have numeric type. reduction_indices: The dimensions to reduce. If None (the defaut), reduces all dimensions. ``` # 'x' is [[1., 1. ]] # [2., 2.]] tf.reduce_mean(x) ==> 1.5 tf.reduce_mean(x, 0) ==> [1.5, 1.5] tf.reduce_mean(x, 1) ==> [1., 2.] ``` For a 1D vector, it looks like np.mean == tf.reduce_mean, but I don't understand what's happening in tf.reduce_mean(x, 1) ==> [1., 2.]. 
tf.reduce_mean(x, 0) ==> [1.5, 1.5] kind of makes sense, since mean of [1, 2] and [1, 2] is [1.5, 1.5], but what's going on with tf.reduce_mean(x, 1)?", "response":"The functionality of numpy.mean and tensorflow.reduce_mean are the same. They do the same thing. From the documentation, for numpy and tensorflow, you can see that. Let's look at an example, ``` c = np.array([[3.,4], [5.,6], [6.,7]]) print(np.mean(c,1)) Mean = tf.reduce_mean(c,1) with tf.Session() as sess: result = sess.run(Mean) print(result) ``` Output ``` [ 3.5 5.5 6.5] [ 3.5 5.5 6.5] ``` Here you can see that when axis (numpy) or reduction_indices (tensorflow) is 1, it computes mean across (3,4) and (5,6) and (6,7), so 1 defines across which axis the mean is computed. When it is 0, the mean is computed across (3,5,6) and (4,6,7), and so on. I hope you get the idea. Now what are the differences between them? You can compute the numpy operation anywhere in Python. But in order to do a tensorflow operation, it must be done inside a tensorflow Session. You can read more about it here. So when you need to perform any computation for your tensorflow graph (or structure if you will), it must be done inside a tensorflow Session. Let's look at another example. ``` npMean = np.mean(c) print(npMean+1) tfMean = tf.reduce_mean(c) Add = tfMean + 1 with tf.Session() as sess: result = sess.run(Add) print(result) ``` We could increase mean by 1 in numpy as you would naturally, but in order to do it in tensorflow, you need to perform that in Session, without using Session you can't do that. In other words, when you are computing tfMean = tf.reduce_mean(c), tensorflow doesn't compute it then. It only computes that in a Session. But numpy computes that instantly, when you write np.mean().
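The NumPy side of the axis convention can be checked on its own, with no session needed:

```python
import numpy as np

c = np.array([[3., 4.], [5., 6.], [6., 7.]])

# axis=1 averages within each row; axis=0 averages down each column --
# the same convention tf.reduce_mean uses for reduction_indices
print(np.mean(c, axis=1))  # [3.5 5.5 6.5]
print(np.mean(c, axis=0))  # [4.66666667 5.66666667]
```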
I hope it makes sense.", "best_answers_score":0.7465, "library_name":"numpy", "question_url":"https:\/\/stackoverflow.com\/questions\/34236252\/what-is-the-difference-between-np-mean-and-tf-reduce-mean", "best_answers_votes":133, "question_length":998, "response_length":1616 }, { "question":"Find nearest value in numpy array How do I find the nearest value in a numpy array? Example: ``` np.find_nearest(array, value) ```", "response":"``` import numpy as np def find_nearest(array, value): array = np.asarray(array) idx = (np.abs(array - value)).argmin() return array[idx] ``` Example usage: ``` array = np.random.random(10) print(array) # [ 0.21069679 0.61290182 0.63425412 0.84635244 0.91599191 0.00213826 # 0.17104965 0.56874386 0.57319379 0.28719469] print(find_nearest(array, value=0.5)) # 0.568743859261 ```", "best_answers_score":0.7454, "library_name":"numpy", "question_url":"https:\/\/stackoverflow.com\/questions\/2566412\/find-nearest-value-in-numpy-array", "best_answers_votes":749, "question_length":130, "response_length":378 }, { "question":"Unable to apply methods on timestamps using Series built-ins On the following series: ``` 0 1411161507178 1 1411138436009 2 1411123732180 3 1411167606146 4 1411124780140 5 1411159331327 6 1411131745474 7 1411151831454 8 1411152487758 9 1411137160544 Name: my_series, dtype: int64 ``` This command (convert to timestamp, localize and convert to EST) works: ``` pd.to_datetime(my_series, unit='ms').apply(lambda x: x.tz_localize('UTC').tz_convert('US\/Eastern')) ``` but this one fails: ``` pd.to_datetime(my_series, unit='ms').tz_localize('UTC').tz_convert('US\/Eastern') ``` with: ``` TypeError Traceback (most recent call last) in () ----> 1 lua = pd.to_datetime(df[column], unit='ms').tz_localize('UTC').tz_convert('US\/Eastern') \/Users\/josh\/anaconda\/envs\/py34\/lib\/python3.4\/site-packages\/pandas\/core\/generic.py in tz_localize(self, tz, axis, copy, infer_dst) 3492 ax_name = self._get_axis_name(axis) 3493 raise TypeError('%s 
is not a valid DatetimeIndex or PeriodIndex' % -> 3494 ax_name) 3495 else: 3496 ax = DatetimeIndex([],tz=tz) TypeError: index is not a valid DatetimeIndex or PeriodIndex ``` and so does this one: ``` my_series.tz_localize('UTC').tz_convert('US\/Eastern') ``` with: ``` --------------------------------------------------------------------------- TypeError Traceback (most recent call last) in () ----> 1 lua = df[column].tz_localize('UTC').tz_convert('US\/Eastern') \/Users\/josh\/anaconda\/envs\/py34\/lib\/python3.4\/site-packages\/pandas\/core\/generic.py in tz_localize(self, tz, axis, copy, infer_dst) 3492 ax_name = self._get_axis_name(axis) 3493 raise TypeError('%s is not a valid DatetimeIndex or PeriodIndex' % -> 3494 ax_name) 3495 else: 3496 ax = DatetimeIndex([],tz=tz) TypeError: index is not a valid DatetimeIndex or PeriodIndex ``` As far as I understand, the second approach above (the first one that fails) should work. Why does it fail?", "response":"As Jeff's answer mentions, tz_localize() and tz_convert() act on the index, not the data. This was a huge surprise to me too. Since Jeff's answer was written, Pandas 0.15 added a new Series.dt accessor that helps your use case. You can now do this: ``` pd.to_datetime(my_series, unit='ms').dt.tz_localize('UTC').dt.tz_convert('US\/Eastern') ```", "best_answers_score":0.7454, "library_name":"numpy", "question_url":"https:\/\/stackoverflow.com\/questions\/26089670\/unable-to-apply-methods-on-timestamps-using-series-built-ins", "best_answers_votes":127, "question_length":1867, "response_length":343 }, { "question":"Numpy quirk: Apply function to all pairs of two 1D arrays, to get one 2D array Let's say I have 2 one-dimensional (1D) numpy arrays, a and b, with lengths n1 and n2 respectively. I also have a function, F(x,y), that takes two values. Now I want to apply that function to each pair of values from my two 1D arrays, so the result would be a 2D numpy array with shape n1, n2. 
The i, j element of the two-dimensional array would be F(a[i], b[j]). I haven't been able to find a way of doing this without a horrible amount of for-loops, and I'm sure there's a much simpler (and faster!) way of doing this in numpy. Thanks in advance!", "response":"You can use numpy broadcasting to do calculations on the two arrays, turning a into a vertical 2D array using newaxis: ``` In [11]: a = np.array([1, 2, 3]) # n1 = 3 ...: b = np.array([4, 5]) # n2 = 2 ...: #if function is c(i, j) = a(i) + b(j)*2: ...: c = a[:, None] + b*2 In [12]: c Out[12]: array([[ 9, 11], [10, 12], [11, 13]]) ``` To benchmark: ``` In [28]: a = arange(100) In [29]: b = arange(222) In [30]: timeit r = np.array([[f(i, j) for j in b] for i in a]) 10 loops, best of 3: 29.9 ms per loop In [31]: timeit c = a[:, None] + b*2 10000 loops, best of 3: 71.6 us per loop ```", "best_answers_score":0.7446, "library_name":"numpy", "question_url":"https:\/\/stackoverflow.com\/questions\/21226610\/numpy-quirk-apply-function-to-all-pairs-of-two-1d-arrays-to-get-one-2d-array", "best_answers_votes":27, "question_length":627, "response_length":584 }, { "question":"How to count values in a certain range in a Numpy array? I have a NumPy array of values. I want to count how many of these values are in a specific range, say x<100 and x>25. I have read about the counter, but it seems to only be valid for specific values, not ranges of values. I have searched, but have not found anything regarding my specific problem. If someone could point me towards the proper documentation I would appreciate it. Thank you. I have tried this ``` X = array(X) for X in range(25, 100): print(X) ``` But it just gives me the numbers in between 25 and 99. EDIT The data I am using was created by another program. I then used a script to read the data and store it as a list. I then took the list and turned it into an array using array(r).
Edit The result of running ``` >>> a[0:10] array(['29.63827346', '40.61488812', '25.48300065', '26.22910525', '42.41172923', '20.15013315', '34.95323355', '13.03604098', '29.71097606', '9.53222141'], dtype=' 39 R = np.array(mean_data[:,0]) 40 P = np.array(mean_data[:,1]) 41 Z = np.array(mean_data[:,2]) ``` When I try to run the program I get the error: TypeError: list indices must be integers, not tuple The mean_data list looks like this sample...: ``` [6.0, 315.0, 4.8123788544375692e-06], [6.5, 0.0, 2.259217450023793e-06], [6.5, 45.0, 9.2823565008402673e-06], [6.5, 90.0, 8.309270169336028e-06], [6.5, 135.0, 6.4709418114245381e-05], [6.5, 180.0, 1.7227922423558414e-05], [6.5, 225.0, 1.2308522579848724e-05], [6.5, 270.0, 2.6905672894824344e-05], [6.5, 315.0, 2.2727114437176048e-05]] ``` I don't know how to prevent this error, I have tried creating mean_data as a np.array and using np.append to add values to it but that doesn't solve the problem either. Here's the traceback (was using ipython before): ``` Traceback (most recent call last): File \"polarplot.py\", line 36, in R = np.array(mean_data[:,0]) TypeError: list indices must be integers, not tuple ``` And the other way I tried to create an array was: ``` mean_data = np.array([]) for ur, ua in it.product(uradius, uangle): samepoints = (data[:,0]==ur) & (data[:,1]==ua) if samepoints.sum() > 1: # check if there is more than one match np.append(mean_data[ur, ua, np.mean(data[samepoints,-1])]) elif samepoints.sum() == 1: np.append(mean_data, [ur, ua, data[samepoints,-1]]) ``` The traceback on that is: ``` IndexError Traceback (most recent call last) in () 31 samepoints = (data[:,0]==ur) & (data[:,1]==ua) 32 if samepoints.sum() > 1: # check if there is more than one match ---> 33 np.append(mean_data[ur, ua, np.mean(data[samepoints,-1])]) 34 elif samepoints.sum() == 1: 35 np.append(mean_data, [ur, ua, data[samepoints,-1]]) IndexError: invalid index ```", "response":"The variable mean_data is a nested list, in Python 
accessing a nested list cannot be done by multi-dimensional slicing, i.e. mean_data[1,2]; instead one would write mean_data[1][2]. This is because mean_data[2] is a list. Further indexing is done recursively: since mean_data[2] is a list, mean_data[2][0] is the first index of that list. Additionally, mean_data[:][0] does not work because mean_data[:] returns mean_data. The solution is to replace the array, or import the original data, as follows: ``` mean_data = np.array(mean_data) ``` numpy arrays (like MATLAB arrays and unlike nested lists) support multi-dimensional slicing with tuples.", "best_answers_score":0.741, "library_name":"numpy", "question_url":"https:\/\/stackoverflow.com\/questions\/15884527\/how-to-prevent-typeerror-list-indices-must-be-integers-not-tuple-when-copying", "best_answers_votes":101, "question_length":1947, "response_length":647 }, { "question":"Python memory usage of numpy arrays I'm using python to analyse some large files and I'm running into memory issues, so I've been using sys.getsizeof() to try and keep track of the usage, but its behaviour with numpy arrays is bizarre. Here's an example involving a map of albedos that I'm having to open: ``` >>> import numpy as np >>> import struct >>> from sys import getsizeof >>> f = open('Albedo_map.assoc', 'rb') >>> getsizeof(f) 144 >>> albedo = struct.unpack('%df' % (7200*3600), f.read(7200*3600*4)) >>> getsizeof(albedo) 207360056 >>> albedo = np.array(albedo).reshape(3600,7200) >>> getsizeof(albedo) 80 ``` Well the data's still there, but the size of the object, a 3600x7200 pixel map, has gone from ~200 Mb to 80 bytes. I'd like to hope that my memory issues are over and just convert everything to numpy arrays, but I feel that this behaviour, if true, would in some way violate some law of information theory or thermodynamics, or something, so I'm inclined to believe that getsizeof() doesn't work with numpy arrays.
Any ideas?", "response":"You can use array.nbytes for numpy arrays, for example: ``` import numpy as np from sys import getsizeof a = [0] * 1024 b = np.array(a) print(getsizeof(a)) print(b.nbytes) ``` Output: ``` 8264 8192 ```", "best_answers_score":0.7408, "library_name":"numpy", "question_url":"https:\/\/stackoverflow.com\/questions\/11784329\/python-memory-usage-of-numpy-arrays", "best_answers_votes":357, "question_length":1046, "response_length":201 }, { "question":"Convert timedelta64[ns] column to seconds in Python Pandas DataFrame A pandas DataFrame column duration contains timedelta64[ns] as shown. How can you convert them to seconds? ``` 0 00:20:32 1 00:23:10 2 00:24:55 3 00:13:17 4 00:18:52 Name: duration, dtype: timedelta64[ns] ``` I tried the following ``` print df[:5]['duration'] \/ np.timedelta64(1, 's') ``` but got the error ``` Traceback (most recent call last): File \"test.py\", line 16, in print df[0:5]['duration'] \/ np.timedelta64(1, 's') File \"C:\\Python27\\lib\\site-packages\\pandas\\core\\series.py\", line 130, in wrapper \"addition and subtraction, but the operator [%s] was passed\" % name) TypeError: can only operate on a timedeltas for addition and subtraction, but the operator [__div__] was passed ``` Also tried ``` print df[:5]['duration'].astype('timedelta64[s]') ``` but received the error ``` Traceback (most recent call last): File \"test.py\", line 17, in print df[:5]['duration'].astype('timedelta64[s]') File \"C:\\Python27\\lib\\site-packages\\pandas\\core\\series.py\", line 934, in astype values = com._astype_nansafe(self.values, dtype) File \"C:\\Python27\\lib\\site-packages\\pandas\\core\\common.py\", line 1653, in _astype_nansafe raise TypeError(\"cannot astype a timedelta from [%s] to [%s]\" % (arr.dtype,dtype)) TypeError: cannot astype a timedelta from [timedelta64[ns]] to [timedelta64[s]] ```", "response":"Use the Series dt accessor to get access to the methods and attributes of a datetime (timedelta) series. 
``` >>> s 0 -1 days +23:45:14.304000 1 -1 days +23:46:57.132000 2 -1 days +23:49:25.913000 3 -1 days +23:59:48.913000 4 00:00:00.820000 dtype: timedelta64[ns] >>> >>> s.dt.total_seconds() 0 -885.696 1 -782.868 2 -634.087 3 -11.087 4 0.820 dtype: float64 ``` There are other Pandas Series Accessors for String, Categorical, and Sparse data types.", "best_answers_score":0.74, "library_name":"numpy", "question_url":"https:\/\/stackoverflow.com\/questions\/26456825\/convert-timedelta64ns-column-to-seconds-in-python-pandas-dataframe", "best_answers_votes":97, "question_length":1356, "response_length":450 }, { "question":"Difference between data type 'datetime64[ns]' and '>> import tensorflow C:\\Users\\PC\\Anaconda3\\envs\\tut\\lib\\site-packages\\tensorflow\\python\\framework\\dtypes.py:516: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) \/ '(1,)type'. _np_qint8 = np.dtype([(\"qint8\", np.int8, 1)]) C:\\Users\\PC\\Anaconda3\\envs\\tut\\lib\\site-packages\\tensorflow\\python\\framework\\dtypes.py:517: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) \/ '(1,)type'. _np_quint8 = np.dtype([(\"quint8\", np.uint8, 1)]) C:\\Users\\PC\\Anaconda3\\envs\\tut\\lib\\site-packages\\tensorflow\\python\\framework\\dtypes.py:518: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) \/ '(1,)type'. _np_qint16 = np.dtype([(\"qint16\", np.int16, 1)]) C:\\Users\\PC\\Anaconda3\\envs\\tut\\lib\\site-packages\\tensorflow\\python\\framework\\dtypes.py:519: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) \/ '(1,)type'. 
_np_quint16 = np.dtype([(\"quint16\", np.uint16, 1)]) C:\\Users\\PC\\Anaconda3\\envs\\tut\\lib\\site-packages\\tensorflow\\python\\framework\\dtypes.py:520: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) \/ '(1,)type'. _np_qint32 = np.dtype([(\"qint32\", np.int32, 1)]) C:\\Users\\PC\\Anaconda3\\envs\\tut\\lib\\site-packages\\tensorflow\\python\\framework\\dtypes.py:525: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) \/ '(1,)type'. np_resource = np.dtype([(\"resource\", np.ubyte, 1)]) C:\\Users\\PC\\Anaconda3\\envs\\tut\\lib\\site-packages\\tensorboard\\compat\\tensorflow_stub\\dtypes.py:541: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) \/ '(1,)type'. _np_qint8 = np.dtype([(\"qint8\", np.int8, 1)]) C:\\Users\\PC\\Anaconda3\\envs\\tut\\lib\\site-packages\\tensorboard\\compat\\tensorflow_stub\\dtypes.py:542: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) \/ '(1,)type'. _np_quint8 = np.dtype([(\"quint8\", np.uint8, 1)]) C:\\Users\\PC\\Anaconda3\\envs\\tut\\lib\\site-packages\\tensorboard\\compat\\tensorflow_stub\\dtypes.py:543: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) \/ '(1,)type'. _np_qint16 = np.dtype([(\"qint16\", np.int16, 1)]) C:\\Users\\PC\\Anaconda3\\envs\\tut\\lib\\site-packages\\tensorboard\\compat\\tensorflow_stub\\dtypes.py:544: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) \/ '(1,)type'. 
_np_quint16 = np.dtype([(\"quint16\", np.uint16, 1)]) C:\\Users\\PC\\Anaconda3\\envs\\tut\\lib\\site-packages\\tensorboard\\compat\\tensorflow_stub\\dtypes.py:545: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) \/ '(1,)type'. _np_qint32 = np.dtype([(\"qint32\", np.int32, 1)]) C:\\Users\\PC\\Anaconda3\\envs\\tut\\lib\\site-packages\\tensorboard\\compat\\tensorflow_stub\\dtypes.py:550: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) \/ '(1,)type'. np_resource = np.dtype([(\"resource\", np.ubyte, 1)]) ```", "response":"If you're using TF 2.0 a quick solution would be to downgrade your numpy to 1.16.4. (I used 1.17 and received the same warning messages). ``` 1. pip uninstall numpy 2. pip install numpy==1.16.4 ``` See here (thanks to ymodak)", "best_answers_score":0.7363, "library_name":"numpy", "question_url":"https:\/\/stackoverflow.com\/questions\/57381430\/synonym-of-type-is-deprecated-in-a-future-version-of-numpy-it-will-be-underst", "best_answers_votes":68, "question_length":4024, "response_length":225 }, { "question":"NumPy version of \"Exponential weighted moving average\", equivalent to pandas.ewm().mean() How do I get the exponential weighted moving average in NumPy just like the following in pandas? 
``` import pandas as pd import pandas_datareader as pdr from datetime import datetime # Declare variables ibm = pdr.get_data_yahoo(symbols='IBM', start=datetime(2000, 1, 1), end=datetime(2012, 1, 1)).reset_index(drop=True)['Adj Close'] windowSize = 20 # Get PANDAS exponential weighted moving average ewm_pd = pd.DataFrame(ibm).ewm(span=windowSize, min_periods=windowSize).mean().as_matrix() print(ewm_pd) ``` I tried the following with NumPy ``` import numpy as np import pandas_datareader as pdr from datetime import datetime # From this post: http:\/\/stackoverflow.com\/a\/40085052\/3293881 by @Divakar def strided_app(a, L, S): # Window len = L, Stride len\/stepsize = S nrows = ((a.size - L) \/\/ S) + 1 n = a.strides[0] return np.lib.stride_tricks.as_strided(a, shape=(nrows, L), strides=(S * n, n)) def numpyEWMA(price, windowSize): weights = np.exp(np.linspace(-1., 0., windowSize)) weights \/= weights.sum() a2D = strided_app(price, windowSize, 1) returnArray = np.empty((price.shape[0])) returnArray.fill(np.nan) for index in (range(a2D.shape[0])): returnArray[index + windowSize-1] = np.convolve(weights, a2D[index])[windowSize - 1:-windowSize + 1] return np.reshape(returnArray, (-1, 1)) # Declare variables ibm = pdr.get_data_yahoo(symbols='IBM', start=datetime(2000, 1, 1), end=datetime(2012, 1, 1)).reset_index(drop=True)['Adj Close'] windowSize = 20 # Get NumPy exponential weighted moving average ewma_np = numpyEWMA(ibm, windowSize) print(ewma_np) ``` But the results are not similar as the ones in pandas. Is there maybe a better approach to calculate the exponential weighted moving average directly in NumPy and get the exact same result as the pandas.ewm().mean()? At 60,000 requests on pandas solution, I get about 230 seconds. 
I am sure that with a pure NumPy, this can be decreased significantly.", "response":"Updated 08\/06\/2019 PURE NUMPY, FAST & VECTORIZED SOLUTION FOR LARGE INPUTS out parameter for in-place computation, dtype parameter, index order parameter This function is equivalent to pandas' ewm(adjust=False).mean(), but much faster. ewm(adjust=True).mean() (the default for pandas) can produce different values at the start of the result. I am working to add the adjust functionality to this solution. @Divakar's answer leads to floating point precision problems when the input is too large. This is because (1-alpha)**(n+1) -> 0 when n -> inf and alpha -> 1, leading to divide-by-zero's and NaN values popping up in the calculation. Here is my fastest solution with no precision problems, nearly fully vectorized. It's gotten a little complicated but the performance is great, especially for really huge inputs. Without using in-place calculations (which is possible using the out parameter, saving memory allocation time): 3.62 seconds for 100M element input vector, 3.2ms for a 100K element input vector, and 293\u00b5s for a 5000 element input vector on a pretty old PC (results will vary with different alpha\/row_size values). ``` # tested with python3 & numpy 1.15.2 import numpy as np def ewma_vectorized_safe(data, alpha, row_size=None, dtype=None, order='C', out=None): \"\"\" Reshapes data before calculating EWMA, then iterates once over the rows to calculate the offset without precision issues :param data: Input data, will be flattened. :param alpha: scalar float in range (0,1) The alpha parameter for the moving average. :param row_size: int, optional The row size to use in the computation. High row sizes need higher precision, low values will impact performance. The optimal value depends on the platform and the alpha being used. Higher alpha values require lower row size. Default depends on dtype. :param dtype: optional Data type used for calculations. 
Defaults to float64 unless data.dtype is float32, then it will use float32. :param order: {'C', 'F', 'A'}, optional Order to use when flattening the data. Defaults to 'C'. :param out: ndarray, or None, optional A location into which the result is stored. If provided, it must have the same shape as the desired output. If not provided or `None`, a freshly-allocated array is returned. :return: The flattened result. \"\"\" data = np.array(data, copy=False) if dtype is None: if data.dtype == np.float32: dtype = np.float32 else: dtype = np.float else: dtype = np.dtype(dtype) row_size = int(row_size) if row_size is not None else get_max_row_size(alpha, dtype) if data.size 1: # flatten input data = np.reshape(data, -1, order=order) if out is None: out = np.empty_like(data, dtype=dtype) else: assert out.shape == data.shape assert out.dtype == dtype row_n = int(data.size \/\/ row_size) # the number of rows to use trailing_n = int(data.size % row_size) # the amount of data leftover first_offset = data[0] if trailing_n > 0: # set temporary results to slice view of out parameter out_main_view = np.reshape(out[:-trailing_n], (row_n, row_size)) data_main_view = np.reshape(data[:-trailing_n], (row_n, row_size)) else: out_main_view = out data_main_view = data # get all the scaled cumulative sums with 0 offset ewma_vectorized_2d(data_main_view, alpha, axis=1, offset=0, dtype=dtype, order='C', out=out_main_view) scaling_factors = (1 - alpha) ** np.arange(1, row_size + 1) last_scaling_factor = scaling_factors[-1] # create offset array offsets = np.empty(out_main_view.shape[0], dtype=dtype) offsets[0] = first_offset # iteratively calculate offset for each row for i in range(1, out_main_view.shape[0]): offsets[i] = offsets[i - 1] * last_scaling_factor + out_main_view[i - 1, -1] # add the offsets to the result out_main_view += offsets[:, np.newaxis] * scaling_factors[np.newaxis, :] if trailing_n > 0: # process trailing data in the 2nd slice of the out parameter 
ewma_vectorized(data[-trailing_n:], alpha, offset=out_main_view[-1, -1], dtype=dtype, order='C', out=out[-trailing_n:]) return out def get_max_row_size(alpha, dtype=float): assert 0. 1: # flatten input data = data.reshape(-1, order) if out is None: out = np.empty_like(data, dtype=dtype) else: assert out.shape == data.shape assert out.dtype == dtype if data.size 0 as len(data) gets large # this leads to divide-by-zeros below scaling_factors = np.power(1. - alpha, np.arange(data.size + 1, dtype=dtype), dtype=dtype) # create cumulative sum array np.multiply(data, (alpha * scaling_factors[-2]) \/ scaling_factors[:-1], dtype=dtype, out=out) np.cumsum(out, dtype=dtype, out=out) # cumsums \/ scaling out \/= scaling_factors[-2::-1] if offset != 0: offset = np.array(offset, copy=False).astype(dtype, copy=False) # add offsets out += offset * scaling_factors[1:] return out ``` The 2D ewma function: ``` def ewma_vectorized_2d(data, alpha, axis=None, offset=None, dtype=None, order='C', out=None): \"\"\" Calculates the exponential moving average over a given axis. :param data: Input data, must be 1D or 2D array. :param alpha: scalar float in range (0,1) The alpha parameter for the moving average. :param axis: The axis to apply the moving average on. If axis==None, the data is flattened. :param offset: optional The offset for the moving average. Must be scalar or a vector with one element for each row of data. If set to None, defaults to the first value of each row. :param dtype: optional Data type used for calculations. Defaults to float64 unless data.dtype is float32, then it will use float32. :param order: {'C', 'F', 'A'}, optional Order to use when flattening the data. Ignored if axis is not None. :param out: ndarray, or None, optional A location into which the result is stored. If provided, it must have the same shape as the desired output. If not provided or `None`, a freshly-allocated array is returned. 
\"\"\" data = np.array(data, copy=False) assert data.ndim = 1 alpha = 2\/(span+1) # for pandas` span parameter # com = 1000 # com >= 0 # alpha = 1\/(1+com) # for pandas` center-of-mass parameter # halflife = 100 # halflife > 0 # alpha = 1 - np.exp(np.log(0.5)\/halflife) # for pandas` half-life parameter result = ewma_vectorized_safe(data, alpha) ``` Just a tip It is easy to calculate a 'window size' (technically exponential averages have infinite 'windows') for a given alpha, dependent on the contribution of the data in that window to the average. This is useful for example to chose how much of the start of the result to treat as unreliable due to border effects. ``` def window_size(alpha, sum_proportion): # Increases with increased sum_proportion and decreased alpha # solve (1-alpha)**window_size = (1-sum_proportion) for window_size return int(np.log(1-sum_proportion) \/ np.log(1-alpha)) alpha = 0.02 sum_proportion = .99 # window covers 99% of contribution to the moving average window = window_size(alpha, sum_proportion) # = 227 sum_proportion = .75 # window covers 75% of contribution to the moving average window = window_size(alpha, sum_proportion) # = 68 ``` The alpha = 2 \/ (window_size + 1.0) relation used in this thread (the 'span' option from pandas) is a very rough approximation of the inverse of the above function (with sum_proportion~=0.87). alpha = 1 - np.exp(np.log(1-sum_proportion)\/window_size) is more accurate (the 'half-life' option from pandas equals this formula with sum_proportion=0.5). In the following example, data represents a continuous noisy signal. cutoff_idx is the first position in result where at least 99% of the value is dependent on separate values in data (i.e. less than 1% depends on data[0]). The data up to cutoff_idx is excluded from the final results because it is too dependent on the first value in data, therefore possibly skewing the average. 
``` result = ewma_vectorized_safe(data, alpha, chunk_size) sum_proportion = .99 cutoff_idx = window_size(alpha, sum_proportion) result = result[cutoff_idx:] ``` To illustrate the problem the above solve you can run this a few times, notice the often-appearing false start of the red line, which is skipped after cutoff_idx: ``` data_n = 100000 data = np.random.rand(data_n) * 100 window = 1000 sum_proportion = .99 alpha = 1 - np.exp(np.log(1-sum_proportion)\/window) result = ewma_vectorized_safe(data, alpha) cutoff_idx = window_size(alpha, sum_proportion) x = np.arange(start=0, stop=result.size) import matplotlib.pyplot as plt plt.plot(x[:cutoff_idx+1], result[:cutoff_idx+1], '-r', x[cutoff_idx:], result[cutoff_idx:], '-b') plt.show() ``` note that cutoff_idx==window because alpha was set with the inverse of the window_size() function, with the same sum_proportion. This is similar to how pandas applies ewm(span=window, min_periods=window).", "best_answers_score":0.7362, "library_name":"numpy", "question_url":"https:\/\/stackoverflow.com\/questions\/42869495\/numpy-version-of-exponential-weighted-moving-average-equivalent-to-pandas-ewm", "best_answers_votes":42, "question_length":2000, "response_length":8617 }, { "question":"Manually set color of points in legend I'm making a scatter plot which looks like this: (MWE at bottom of question) As can be seen in the image above the colors of the points in the legend are set to blue automatically by matplotlib. I need to set this points to some other color not present in the colormap (ie: black) so they won't generate confusion with the colors associated with said colormap. I looked around but the matplotlib.legend module does not seem to accept a color keyword. Is there any way to do this? Here's the MWE: ``` import matplotlib.pyplot as plt import numpy as np def rand_data(): return np.random.uniform(low=0., high=1., size=(100,)) # Generate data. 
x, y, x2, x3 = [rand_data() for i in range(4)] # This data defines the markes and labels used. x1 = np.random.random_integers(7, 9, size=(100,)) # Order all lists so smaller points are on top. order = np.argsort(-np.array(x2)) # Order x and y. x_o, y_o = np.take(x, order), np.take(y, order) # Order list related to markers and labels. z1 = np.take(x1, order) # Order list related to sizes. z2 = np.take(x2, order) # Order list related to colors. z3 = np.take(x3, order) plt.figure() cm = plt.cm.get_cmap('RdYlBu') # Scatter plot where each value in z1 has a different marker and label # assigned. mrk = {7: ('o', '7'), 8: ('s', '8'), 9: ('D', '9')} for key, value in mrk.items(): s1 = (z1 == key) plt.scatter(x_o[s1], y_o[s1], marker=value[0], label=value[1], s=z2[s1] * 100., c=z3[s1], cmap=cm, lw=0.2) # Plot colorbar plt.colorbar() # Plot legend. plt.legend(loc=\"lower left\", markerscale=0.7, scatterpoints=1, fontsize=10) plt.show() ```", "response":"You can obtain the legend handles and change their colors individually. Thanks for the comments of @OrOrg and @Reinderien that led me to update this answer. ``` ax = plt.gca() leg = ax.get_legend() leg.legend_handles[0].set_facecolor('red') leg.legend_handles[0].set_edgecolor('red') leg.legend_handles[1].set_facecolor('yellow') leg.legend_handles[1].set_edgecolor('yellow') ```", "best_answers_score":0.7352, "library_name":"numpy", "question_url":"https:\/\/stackoverflow.com\/questions\/23698850\/manually-set-color-of-points-in-legend", "best_answers_votes":82, "question_length":1620, "response_length":379 }, { "question":"Still can't install scipy due to missing fortran compiler after brew install gcc on Mac OS X I have read and followed this answer to install scipy\/numpy\/theano. However, it still failed on the same error of missing Fortran compiler after brew install gcc. While HomeBrew installed the gcc-4.8, it didn't install any gfortran or g95 commands. 
I figure gfortran may be just a synonym of gcc, so I created a symlink ``` $ cd \/usr\/local\/bin $ ln -s gcc-4.8 gfortran $ pip install scipy ``` Then it detects the gfortran command but it still complains that there is no Fortran compiler ``` customize Gnu95FCompiler Found executable \/usr\/local\/bin\/gfortran customize NAGFCompiler Could not locate executable f95 customize AbsoftFCompiler Could not locate executable f90 Could not locate executable f77 customize IBMFCompiler Could not locate executable xlf90 Could not locate executable xlf customize IntelFCompiler Could not locate executable ifort Could not locate executable ifc customize GnuFCompiler Could not locate executable g77 customize G95FCompiler Could not locate executable g95 customize PGroupFCompiler Could not locate executable pgfortran don't know how to compile Fortran code on platform 'posix' building 'dfftpack' library error: library dfftpack has Fortran sources but no Fortran compiler found ``` What else should I do?", "response":"Fixed by upgrading pip, even though I just installed my pip\/virtualenv the first time anew on the same day. ``` (mypy)MAC0227: $ pip install --upgrade pip ... (mypy)MAC0227: $ pip install theano \/Users\/me\/.virtualenvs\/mypy\/lib\/python2.7\/site-packages\/pip\/_vendor\/requests\/packages\/urllib3\/util\/ssl_.py:79: InsecurePlatformWarning: A true SSLContext object is not available. This prevents urllib3 from configuring SSL appropriately and may cause certain SSL connections to fail. For more information, see https:\/\/urllib3.readthedocs.org\/en\/latest\/security.html#insecureplatformwarning.
InsecurePlatformWarning Requirement already satisfied (use --upgrade to upgrade): theano in \/Users\/me\/.virtualenvs\/mypy\/lib\/python2.7\/site-packages Requirement already satisfied (use --upgrade to upgrade): numpy>=1.6.2 in \/Users\/me\/.virtualenvs\/mypy\/lib\/python2.7\/site-packages (from theano) Collecting scipy>=0.11 (from theano) \/Users\/me\/.virtualenvs\/mypy\/lib\/python2.7\/site-packages\/pip\/_vendor\/requests\/packages\/urllib3\/util\/ssl_.py:79: InsecurePlatformWarning: A true SSLContext object is not available. This prevents urllib3 from configuring SSL appropriately and may cause certain SSL connections to fail. For more information, see https:\/\/urllib3.readthedocs.org\/en\/latest\/security.html#insecureplatformwarning. InsecurePlatformWarning Downloading scipy-0.15.1-cp27-none-macosx_10_6_intel.macosx_10_9_intel.macosx_10_9_x86_64.macosx_10_10_intel.macosx_10_10_x86_64.whl (19.8MB) 100% |\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588| 19.8MB 23kB\/s Installing collected packages: scipy Successfully installed scipy-0.15.1 ```", "best_answers_score":0.7345, "library_name":"numpy", "question_url":"https:\/\/stackoverflow.com\/questions\/29586487\/still-cant-install-scipy-due-to-missing-fortran-compiler-after-brew-install-gcc", "best_answers_votes":49, "question_length":1323, "response_length":1600 }, { "question":"How to set max output width in numpy? I am using a Jupyter notebook. I have a pretty wide screen, but the displayed output (say, when I print a numpy array) is formatted as if the screen was narrow. 
I found a way of increasing the width of the cells, with ``` from IPython.core.display import HTML HTML(\".container { width:95% !important; }\") ``` but this seems to influence the input only, not the output (see screenshots): I've tried setting the linewidth option in numpy.set_printoptions, I've tried setting numpy.core.arrayprint._line_width, nothing... EDIT: Using matplotlib I can set the width of plots (that I plot in the notebook with the magic %matplotlib inline) with the command plt.rcParams['figure.figsize']=[X,Y]. It turns out that I can increase X to have plots fill the output cell horizontally all the way. This means (I think) that the original problem is a numpy thing.", "response":"I found this answer helpful in creating my own: ``` import numpy as np np.set_printoptions(edgeitems=30, linewidth=100000, formatter=dict(float=lambda x: \"%.3g\" % x)) ``` The absurd linewidth means only edgeitems and the window's width will determine when newlines\/wrapping occurs. If I shrink the window a bit, it looks like this, so you may still need to play with the edgeitems or formatting: Here are the docs for set_printoptions, of which the following are relevant: edgeitems : Number of array items in summary at beginning and end of each dimension (default 3). linewidth : The number of characters per line for the purpose of inserting line breaks (default 75).
How do I do that with a numpy ndarray?", "response":"``` >>> a = np.array([[1,2,3], [4,5,np.nan], [7,8,9]]) array([[ 1., 2., 3.], [ 4., 5., nan], [ 7., 8., 9.]]) >>> a[~np.isnan(a).any(axis=1)] array([[ 1., 2., 3.], [ 7., 8., 9.]]) ``` and reassign this to a. Explanation: np.isnan(a) returns a similar array with True where NaN, False elsewhere. .any(axis=1) reduces an m*n array to n with an logical or operation on the whole rows, ~ inverts True\/False and a[ ] chooses just the rows from the original array, which have True within the brackets.", "best_answers_score":0.7324, "library_name":"numpy", "question_url":"https:\/\/stackoverflow.com\/questions\/11453141\/how-to-remove-all-rows-in-a-numpy-ndarray-that-contain-non-numeric-values", "best_answers_votes":199, "question_length":327, "response_length":494 }, { "question":"python numpy\/scipy curve fitting I have some points and I am trying to fit curve for this points. I know that there exist scipy.optimize.curve_fit function, but I do not understand the documentation, i.e. how to use this function. My points: ```py np.array([(1, 1), (2, 4), (3, 1), (9, 3)]) ``` Can anybody explain how to do that?", "response":"I suggest you to start with simple polynomial fit, scipy.optimize.curve_fit tries to fit a function f that you must know to a set of points. 
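Since the question asks specifically about scipy.optimize.curve_fit, here is a minimal sketch of its calling convention, assuming SciPy is installed; the model function and sample data are invented for illustration:

```python
import numpy as np
from scipy.optimize import curve_fit

# curve_fit estimates the free parameters (a, b, c) of a model you supply.
def model(x, a, b, c):
    return a * x**2 + b * x + c

x = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
y = model(x, 2.0, -1.0, 0.5)  # noise-free synthetic data

params, _ = curve_fit(model, x, y)  # params recover (2.0, -1.0, 0.5)
```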
This is a simple 3 degree polynomial fit using numpy.polyfit and poly1d, the first performs a least squares polynomial fit and the second calculates the new points: ``` import numpy as np import matplotlib.pyplot as plt points = np.array([(1, 1), (2, 4), (3, 1), (9, 3)]) # get x and y vectors x = points[:,0] y = points[:,1] # calculate polynomial z = np.polyfit(x, y, 3) f = np.poly1d(z) # calculate new x's and y's x_new = np.linspace(x[0], x[-1], 50) y_new = f(x_new) plt.plot(x,y,'o', x_new, y_new) plt.xlim([x[0]-1, x[-1] + 1 ]) plt.show() ```", "best_answers_score":0.7315, "library_name":"numpy", "question_url":"https:\/\/stackoverflow.com\/questions\/19165259\/python-numpy-scipy-curve-fitting", "best_answers_votes":127, "question_length":330, "response_length":690 }, { "question":"Shift elements in a numpy array This question contains its own answer at the bottom. Use preallocated arrays. Following-up from this question years ago, is there a canonical \"shift\" function in numpy? I don't see anything from the documentation. Here's a simple version of what I'm looking for: ``` def shift(xs, n): if n >= 0: return np.r_[np.full(n, np.nan), xs[:-n]] else: return np.r_[xs[-n:], np.full(-n, np.nan)] ``` Using this is like: ``` In [76]: xs Out[76]: array([ 0., 1., 2., 3., 4., 5., 6., 7., 8., 9.]) In [77]: shift(xs, 3) Out[77]: array([ nan, nan, nan, 0., 1., 2., 3., 4., 5., 6.]) In [78]: shift(xs, -3) Out[78]: array([ 3., 4., 5., 6., 7., 8., 9., nan, nan, nan]) ``` This question came from my attempt to write a fast rolling_product yesterday. I needed a way to \"shift\" a cumulative product and all I could think of was to replicate the logic in np.roll(). So np.concatenate() is much faster than np.r_[]. 
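The two constructions build identical arrays; a small equivalence check (the speed difference quoted above is from the author's timings, not verified here):

```python
import numpy as np

xs = np.arange(7.0)
n = 3

# Same shifted array built two ways: np.r_ and np.concatenate.
via_r = np.r_[np.full(n, np.nan), xs[:-n]]
via_concat = np.concatenate((np.full(n, np.nan), xs[:-n]))
```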
This version of the function performs a lot better: ``` def shift(xs, n): if n >= 0: return np.concatenate((np.full(n, np.nan), xs[:-n])) else: return np.concatenate((xs[-n:], np.full(-n, np.nan))) ``` An even faster version simply pre-allocates the array: ``` def shift(xs, n): e = np.empty_like(xs) if n >= 0: e[:n] = np.nan e[n:] = xs[:-n] else: e[n:] = np.nan e[:n] = xs[-n:] return e ``` The above proposal is the answer. Use preallocated arrays.", "response":"For those who want to just copy and paste the fastest implementation of shift, there is a benchmark and conclusion (see the end). In addition, I introduce a fill_value parameter and fix some bugs. Benchmark ``` import numpy as np import timeit # enhanced from IronManMark20 version def shift1(arr, num, fill_value=np.nan): arr = np.roll(arr,num) if num < 0: arr[num:] = fill_value elif num > 0: arr[:num] = fill_value return arr # use np.roll and np.put by IronManMark20 def shift2(arr,num): arr=np.roll(arr,num) if num < 0: np.put(arr,range(len(arr)+num,len(arr)),np.nan) elif num > 0: np.put(arr,range(num),np.nan) return arr # use np.pad and slice by me.
def shift3(arr, num, fill_value=np.nan): l = len(arr) if num < 0: arr = np.pad(arr, (0, -num), mode='constant', constant_values=(fill_value,))[-l:] elif num > 0: arr = np.pad(arr, (num, 0), mode='constant', constant_values=(fill_value,))[:-num] return arr # use np.concatenate and np.full by chrisaycock def shift4(arr, num, fill_value=np.nan): if num >= 0: return np.concatenate((np.full(num, fill_value), arr[:-num])) else: return np.concatenate((arr[-num:], np.full(-num, fill_value))) # preallocate empty array and assign slice by chrisaycock def shift5(arr, num, fill_value=np.nan): result = np.empty_like(arr) if num > 0: result[:num] = fill_value result[num:] = arr[:-num] elif num < 0: result[num:] = fill_value result[:num] = arr[-num:] else: result[:] = arr return result arr = np.arange(2000).astype(float) def benchmark_shift1(): shift1(arr, 3) def benchmark_shift2(): shift2(arr, 3) def benchmark_shift3(): shift3(arr, 3) def benchmark_shift4(): shift4(arr, 3) def benchmark_shift5(): shift5(arr, 3) benchmark_set = ['benchmark_shift1', 'benchmark_shift2', 'benchmark_shift3', 'benchmark_shift4', 'benchmark_shift5'] for x in benchmark_set: number = 10000 t = timeit.timeit('%s()' % x, 'from __main__ import %s' % x, number=number) print '%s time: %f' % (x, t) ``` benchmark result: ``` benchmark_shift1 time: 0.265238 benchmark_shift2 time: 0.285175 benchmark_shift3 time: 0.473890 benchmark_shift4 time: 0.099049 benchmark_shift5 time: 0.052836 ``` Conclusion shift5 is the winner! It's the OP's third solution.", "best_answers_score":0.7314, "library_name":"numpy", "question_url":"https:\/\/stackoverflow.com\/questions\/30399534\/shift-elements-in-a-numpy-array", "best_answers_votes":124, "question_length":1379, "response_length":1974 }, { "question":"Comparing numpy arrays containing NaN For my unittest, I want to check if two arrays are identical. Reduced example: ``` a = np.array([1, 2, np.NaN]) b = np.array([1, 2, np.NaN]) if np.all(a==b): print 'arrays are equal' ``` This does not work because nan != nan.
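The nan != nan pitfall just mentioned, and two NaN-aware workarounds (the equal_nan flag of np.array_equal needs NumPy 1.19 or later):

```python
import numpy as np

a = np.array([1.0, 2.0, np.nan])
b = np.array([1.0, 2.0, np.nan])

plain = bool(np.all(a == b))  # False: NaN compares unequal to itself

# Mask out positions where both sides are NaN before comparing.
masked = bool(((a == b) | (np.isnan(a) & np.isnan(b))).all())

# NumPy >= 1.19 builds this into array_equal.
modern = bool(np.array_equal(a, b, equal_nan=True))
```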
What is the best way to proceed?", "response":"For versions of numpy prior to 1.19, this is probably the best approach in situations that don't specifically involve unit tests: ``` >>> ((a == b) | (numpy.isnan(a) & numpy.isnan(b))).all() True ``` However, modern versions provide the array_equal function with a new keyword argument, equal_nan, which fits the bill exactly. This was first pointed out by flyingdutchman; see his answer below for details.", "best_answers_score":0.7314, "library_name":"numpy", "question_url":"https:\/\/stackoverflow.com\/questions\/10710328\/comparing-numpy-arrays-containing-nan", "best_answers_votes":69, "question_length":296, "response_length":406 }, { "question":"How do I access the ith column of a NumPy multidimensional array? Given: ```py test = np.array([[1, 2], [3, 4], [5, 6]]) ``` test[i] gives the ith row (e.g. [1, 2]). How do I access the ith column? (e.g. [1, 3, 5]). Also, would this be an expensive operation?", "response":"With: ``` test = np.array([[1, 2], [3, 4], [5, 6]]) ``` To access column 0: ``` >>> test[:, 0] array([1, 3, 5]) ``` To access row 0: ``` >>> test[0, :] array([1, 2]) ``` This is covered in Section 1.4 (Indexing) of the NumPy reference. This is quick, at least in my experience. It's certainly much quicker than accessing each element in a loop.", "best_answers_score":0.7311, "library_name":"numpy", "question_url":"https:\/\/stackoverflow.com\/questions\/4455076\/how-do-i-access-the-ith-column-of-a-numpy-multidimensional-array", "best_answers_votes":1007, "question_length":259, "response_length":344 }, { "question":"Opencv polylines function in python throws exception I'm trying to draw an arbitrary quadrilateral over an image using the polylines function in opencv. 
When I do I get the following error OpenCV Error: Assertion failed (p.checkVector(2, CV_32S) >= 0) in polylines, file \/tmp\/buildd\/ros-fuerte-opencv2-2.4.2-1precise-20130312-1306\/modules\/core\/src\/d rawing.cpp, line 2065 I call the function as like so, ``` cv2.polylines(img, points, 1, (255,255,255)) ``` Where points is as numpy array as shown below (The image size is 1280x960): ``` [[910 641] [206 632] [696 488] [458 485]] ``` and img is just a normal image that I'm able to imshow. Currently I'm just drawing lines between these points myself, but I'm looking for a more elegant solution. How should I correct this error?", "response":"The problem in my case was that numpy.array created int64-bit numbers by default. So I had to explicitly convert it to int32: ``` points = np.array([[910, 641], [206, 632], [696, 488], [458, 485]]) # points.dtype => 'int64' cv2.polylines(img, np.int32([points]), 1, (255,255,255)) ``` (Looks like a bug in cv2 python binding, it should've verified dtype)", "best_answers_score":0.7311, "library_name":"numpy", "question_url":"https:\/\/stackoverflow.com\/questions\/17241830\/opencv-polylines-function-in-python-throws-exception", "best_answers_votes":91, "question_length":778, "response_length":354 }, { "question":"How to get the range of valid Numpy data types? I'm interested in finding for a particular Numpy type (e.g. np.int64, np.uint32, np.float32, etc.) what the range of all possible valid values is (e.g. np.int32 can store numbers up to 2**31-1). Of course, I guess one can theoretically figure this out for each type, but is there a way to do this at run time to ensure more portable code?", "response":"You can use numpy.iinfo(arg).max to find the max value for integer types of arg, and numpy.finfo(arg).max to find the max value for float types of arg. 
``` >>> numpy.iinfo(numpy.uint64).min 0 >>> numpy.iinfo(numpy.uint64).max 18446744073709551615L >>> numpy.finfo(numpy.float64).max 1.7976931348623157e+308 >>> numpy.finfo(numpy.float64).min -1.7976931348623157e+308 ``` iinfo only offers min and max, but finfo also offers useful values such as eps (the smallest number > 0 representable) and resolution (the approximate decimal number resolution of the type of arg).", "best_answers_score":0.7308, "library_name":"numpy", "question_url":"https:\/\/stackoverflow.com\/questions\/1658714\/how-to-get-the-range-of-valid-numpy-data-types", "best_answers_votes":69, "question_length":386, "response_length":568 }, { "question":"replace() method not working on Pandas DataFrame I have looked up this issue and most questions are for more complex replacements. However in my case I have a very simple dataframe as a test dummy. The aim is to replace a string anywhere in the dataframe with an nan, however this does not seem to work (i.e. does not replace; no errors whatsoever). I've tried replacing with another string and it does not work either. E.g. ``` d = {'color' : pd.Series(['white', 'blue', 'orange']), 'second_color': pd.Series(['white', 'black', 'blue']), 'value' : pd.Series([1., 2., 3.])} df = pd.DataFrame(d) df.replace('white', np.nan) ``` The output is still: ``` color second_color value 0 white white 1 1 blue black 2 2 orange blue 3 ``` This problem is often addressed using inplace=True, but there are caveats to that. Please also see Understanding inplace=True in pandas.", "response":"Given that this is the top Google result when searching for \"Pandas replace is not working\" I'd like to also mention that: replace does full replacement searches, unless you turn on the regex switch. Use regex=True, and it should perform partial replacements as well. 
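A minimal sketch of the regex switch described above, assuming pandas is available; the column values are invented for illustration:

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({"color": ["white", "off-white", "blue"]})

# Without regex=True, replace() only swaps cells that match in full.
exact = df.replace("white", np.nan)

# With regex=True, substring matches are replaced as well.
partial = df.replace("white", "grey", regex=True)
```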
This took me 30 minutes to find out, so hopefully I've saved the next person 30 minutes.", "best_answers_score":0.7307, "library_name":"numpy", "question_url":"https:\/\/stackoverflow.com\/questions\/37593550\/replace-method-not-working-on-pandas-dataframe", "best_answers_votes":192, "question_length":864, "response_length":356 }, { "question":"ValueError: Unknown label type: 'unknown' I try to run following code. ``` import pandas as pd import numpy as np from sklearn.linear_model import LogisticRegression # data import and preparation trainData = pd.read_csv('train.csv') train = trainData.values testData = pd.read_csv('test.csv') test = testData.values X = np.c_[train[:, 0], train[:, 2], train[:, 6:7], train[:, 9]] X = np.nan_to_num(X) y = train[:, 1] Xtest = np.c_[test[:, 0:1], test[:, 5:6], test[:, 8]] Xtest = np.nan_to_num(Xtest) # model lr = LogisticRegression() lr.fit(X, y) ``` where y is a np.ndarrayof 0s and 1s. However, I receive the following error: ```none File \"C:\\Anaconda3\\lib\\site-packages\\sklearn\\linear_model\\logistic.py\", line >1174, in fit check_classification_targets(y) File \"C:\\Anaconda3\\lib\\site-packages\\sklearn\\utils\\multiclass.py\", line 172, >in check_classification_targets raise ValueError(\"Unknown label type: %r\" % y_type) ValueError: Unknown label type: 'unknown' ``` From sklearn documentation, I see that ```none y : array-like, shape (n_samples,) Target values (class labels in classification, real numbers in regression) ``` What is my error? FYI, y is np.array([0.0, 1.0, 1.0, ..., 0.0, 1.0, 0.0], dtype=object) whose size is (891,).", "response":"Your y is of type object, so sklearn cannot recognize its type. 
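A sketch of the dtype problem and the astype fix just described; the label values are invented:

```python
import numpy as np

# Slicing a mixed-type array yields dtype=object labels,
# which scikit-learn's check_classification_targets rejects.
y = np.array([0.0, 1.0, 1.0, 0.0], dtype=object)

y_fixed = y.astype('int')  # a proper integer label array
```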
Add the line y=y.astype('int') right after the line y = train[:, 1].", "best_answers_score":0.7304, "library_name":"numpy", "question_url":"https:\/\/stackoverflow.com\/questions\/45346550\/valueerror-unknown-label-type-unknown", "best_answers_votes":197, "question_length":1237, "response_length":132 }, { "question":"How can I use the apply() function for a single column? I have a pandas dataframe with multiple columns. I want to change the values of only the first column without affecting the other columns.
How can I do that using apply() in pandas?", "response":"Given a sample dataframe df as: ``` a b 0 1 2 1 2 3 2 3 4 3 4 5 ``` what you want is: ``` df['a'] = df['a'].apply(lambda x: x + 1) ``` that returns: ``` a b 0 2 2 1 3 3 2 4 4 3 5 5 ```", "best_answers_score":0.7296, "library_name":"numpy", "question_url":"https:\/\/stackoverflow.com\/questions\/34962104\/how-can-i-use-the-apply-function-for-a-single-column", "best_answers_votes":706, "question_length":241, "response_length":184 }, { "question":"Why is numpy's einsum faster than numpy's built in functions? Lets start with three arrays of dtype=np.double. Timings are performed on a intel CPU using numpy 1.7.1 compiled with icc and linked to intel's mkl. A AMD cpu with numpy 1.6.1 compiled with gcc without mkl was also used to verify the timings. Please note the timings scale nearly linearly with system size and are not due to the small overhead incurred in the numpy functions if statements these difference will show up in microseconds not milliseconds: ``` arr_1D=np.arange(500,dtype=np.double) large_arr_1D=np.arange(100000,dtype=np.double) arr_2D=np.arange(500**2,dtype=np.double).reshape(500,500) arr_3D=np.arange(500**3,dtype=np.double).reshape(500,500,500) ``` First lets look at the np.sum function: ``` np.all(np.sum(arr_3D)==np.einsum('ijk->',arr_3D)) True %timeit np.sum(arr_3D) 10 loops, best of 3: 142 ms per loop %timeit np.einsum('ijk->', arr_3D) 10 loops, best of 3: 70.2 ms per loop ``` Powers: ``` np.allclose(arr_3D*arr_3D*arr_3D,np.einsum('ijk,ijk,ijk->ijk',arr_3D,arr_3D,arr_3D)) True %timeit arr_3D*arr_3D*arr_3D 1 loops, best of 3: 1.32 s per loop %timeit np.einsum('ijk,ijk,ijk->ijk', arr_3D, arr_3D, arr_3D) 1 loops, best of 3: 694 ms per loop ``` Outer product: ``` np.all(np.outer(arr_1D,arr_1D)==np.einsum('i,k->ik',arr_1D,arr_1D)) True %timeit np.outer(arr_1D, arr_1D) 1000 loops, best of 3: 411 us per loop %timeit np.einsum('i,k->ik', arr_1D, arr_1D) 1000 loops, best of 3: 245 us per loop ``` All of 
the above are twice as fast with np.einsum. These should be apples to apples comparisons as everything is specifically of dtype=np.double. I would expect the speed up in an operation like this: ``` np.allclose(np.sum(arr_2D*arr_3D),np.einsum('ij,oij->',arr_2D,arr_3D)) True %timeit np.sum(arr_2D*arr_3D) 1 loops, best of 3: 813 ms per loop %timeit np.einsum('ij,oij->', arr_2D, arr_3D) 10 loops, best of 3: 85.1 ms per loop ``` Einsum seems to be at least twice as fast for np.inner, np.outer, np.kron, and np.sum regardless of axes selection. The primary exception being np.dot as it calls DGEMM from a BLAS library. So why is np.einsum faster that other numpy functions that are equivalent? The DGEMM case for completeness: ``` np.allclose(np.dot(arr_2D,arr_2D),np.einsum('ij,jk',arr_2D,arr_2D)) True %timeit np.einsum('ij,jk',arr_2D,arr_2D) 10 loops, best of 3: 56.1 ms per loop %timeit np.dot(arr_2D,arr_2D) 100 loops, best of 3: 5.17 ms per loop ``` The leading theory is from @sebergs comment that np.einsum can make use of SSE2, but numpy's ufuncs will not until numpy 1.8 (see the change log). I believe this is the correct answer, but have not been able to confirm it. Some limited proof can be found by changing the dtype of input array and observing speed difference and the fact that not everyone observes the same trends in timings.", "response":"First off, there's been a lot of past discussion about this on the numpy list. For example, see: https:\/\/mail.python.org\/archives\/list\/[email protected]\/thread\/MYB7HYCIZIQSFYIUEJU33RBESP5GNJPP\/#MYB7HYCIZIQSFYIUEJU33RBESP5GNJPP https:\/\/mail.python.org\/archives\/list\/[email protected]\/thread\/LKHZ4NCDALIIMLAOXPK4GL55JHSLINXK\/#LKHZ4NCDALIIMLAOXPK4GL55JHSLINXK Some of boils down to the fact that einsum is new, and is presumably trying to be better about cache alignment and other memory access issues, while many of the older numpy functions focus on a easily portable implementation over a heavily optimized one. 
I'm just speculating, there, though. However, some of what you're doing isn't quite an \"apples-to-apples\" comparison. In addition to what @Jamie already said, sum uses a more appropriate accumulator for arrays For example, sum is more careful about checking the type of the input and using an appropriate accumulator. For example, consider the following: ``` In [1]: x = 255 * np.ones(100, dtype=np.uint8) In [2]: x Out[2]: array([255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255], dtype=uint8) ``` Note that the sum is correct: ``` In [3]: x.sum() Out[3]: 25500 ``` While einsum will give the wrong result: ``` In [4]: np.einsum('i->', x) Out[4]: 156 ``` But if we use a less limited dtype, we'll still get the result you'd expect: ``` In [5]: y = 255 * np.ones(100) In [6]: np.einsum('i->', y) Out[6]: 25500.0 ```", "best_answers_score":0.7296, "library_name":"numpy", "question_url":"https:\/\/stackoverflow.com\/questions\/18365073\/why-is-numpys-einsum-faster-than-numpys-built-in-functions", "best_answers_votes":37, "question_length":2838, "response_length":1877 }, { "question":"How to implement the Softmax function in Python? From the Udacity's deep learning class, the softmax of y_i is simply the exponential divided by the sum of exponential of the whole Y vector: Where S(y_i) is the softmax function of y_i and e is the exponential and j is the no. of columns in the input vector Y. 
I've tried the following: ``` import numpy as np def softmax(x): \"\"\"Compute softmax values for each sets of scores in x.\"\"\" e_x = np.exp(x - np.max(x)) return e_x \/ e_x.sum() scores = [3.0, 1.0, 0.2] print(softmax(scores)) ``` which returns: ``` [ 0.8360188 0.11314284 0.05083836] ``` But the suggested solution was: ``` def softmax(x): \"\"\"Compute softmax values for each sets of scores in x.\"\"\" return np.exp(x) \/ np.sum(np.exp(x), axis=0) ``` which produces the same output as the first implementation, even though the first implementation explicitly takes the difference of each column and the max and then divides by the sum. Can someone show mathematically why? Is one correct and the other one wrong? Are the implementation similar in terms of code and time complexity? Which is more efficient?", "response":"(Well... much confusion here, both in the question and in the answers...) To start with, the two solutions (i.e. yours and the suggested one) are not equivalent; they happen to be equivalent only for the special case of 1-D score arrays. You would have discovered it if you had tried also the 2-D score array in the Udacity quiz provided example. Results-wise, the only actual difference between the two solutions is the axis=0 argument. 
To see that this is the case, let's try your solution (your_softmax) and one where the only difference is the axis argument: ``` import numpy as np # your solution: def your_softmax(x): \"\"\"Compute softmax values for each sets of scores in x.\"\"\" e_x = np.exp(x - np.max(x)) return e_x \/ e_x.sum() # correct solution: def softmax(x): \"\"\"Compute softmax values for each sets of scores in x.\"\"\" e_x = np.exp(x - np.max(x)) return e_x \/ e_x.sum(axis=0) # only difference ``` As I said, for a 1-D score array, the results are indeed identical: ``` scores = [3.0, 1.0, 0.2] print(your_softmax(scores)) # [ 0.8360188 0.11314284 0.05083836] print(softmax(scores)) # [ 0.8360188 0.11314284 0.05083836] your_softmax(scores) == softmax(scores) # array([ True, True, True], dtype=bool) ``` Nevertheless, here are the results for the 2-D score array given in the Udacity quiz as a test example: ``` scores2D = np.array([[1, 2, 3, 6], [2, 4, 5, 6], [3, 8, 7, 6]]) print(your_softmax(scores2D)) # [[ 4.89907947e-04 1.33170787e-03 3.61995731e-03 7.27087861e-02] # [ 1.33170787e-03 9.84006416e-03 2.67480676e-02 7.27087861e-02] # [ 3.61995731e-03 5.37249300e-01 1.97642972e-01 7.27087861e-02]] print(softmax(scores2D)) # [[ 0.09003057 0.00242826 0.01587624 0.33333333] # [ 0.24472847 0.01794253 0.11731043 0.33333333] # [ 0.66524096 0.97962921 0.86681333 0.33333333]] ``` The results are different - the second one is indeed identical with the one expected in the Udacity quiz, where all columns indeed sum to 1, which is not the case with the first (wrong) result. So, all the fuss was actually for an implementation detail - the axis argument. According to the numpy.sum documentation: The default, axis=None, will sum all of the elements of the input array while here we want to sum row-wise, hence axis=0. For a 1-D array, the sum of the (only) row and the sum of all the elements happen to be identical, hence your identical results in that case... The axis issue aside, your implementation (i.e. 
your choice to subtract the max first) is actually better than the suggested solution! In fact, it is the recommended way of implementing the softmax function - see here for the justification (numeric stability, also pointed out by some other answers here).", "best_answers_score":0.7294, "library_name":"numpy", "question_url":"https:\/\/stackoverflow.com\/questions\/34968722\/how-to-implement-the-softmax-function-in-python", "best_answers_votes":165, "question_length":1111, "response_length":2679 }, { "question":"Efficient thresholding filter of an array with numpy I need to filter an array to remove the elements that are lower than a certain threshold. My current code is like this: ```py threshold = 5 a = numpy.array(range(10)) # testing data b = numpy.array(filter(lambda x: x >= threshold, a)) ``` The problem is that this creates a temporary list, using a filter with a lambda function (slow). As this is quite a simple operation, maybe there is a numpy function that does it in an efficient way, but I've been unable to find it. I thought that another way to achieve this could be sorting the array, finding the index of the threshold and returning a slice from that index onwards, but even if this would be faster for small inputs (and it won't be noticeable anyway), it's definitively asymptotically less efficient as the input size grows. Update: I took some measurements too, and the sorting + slicing was still twice as fast as the pure python filter when the input was 100.000.000 entries. 
```py r = numpy.random.uniform(0, 1, 100000000) %timeit test1(r) # filter # 1 loops, best of 3: 21.3 s per loop %timeit test2(r) # sort and slice # 1 loops, best of 3: 11.1 s per loop %timeit test3(r) # boolean indexing # 1 loops, best of 3: 1.26 s per loop ```", "response":"b = a[a>threshold] this should do I tested as follows: ``` import numpy as np, datetime # array of zeros and ones interleaved lrg = np.arange(2).reshape((2,-1)).repeat(1000000,-1).flatten() t0 = datetime.datetime.now() flt = lrg[lrg==0] print datetime.datetime.now() - t0 t0 = datetime.datetime.now() flt = np.array(filter(lambda x:x==0, lrg)) print datetime.datetime.now() - t0 ``` I got ``` $ python test.py 0:00:00.028000 0:00:02.461000 ``` http:\/\/docs.scipy.org\/doc\/numpy\/user\/basics.indexing.html#boolean-or-mask-index-arrays", "best_answers_score":0.7278, "library_name":"numpy", "question_url":"https:\/\/stackoverflow.com\/questions\/7994394\/efficient-thresholding-filter-of-an-array-with-numpy", "best_answers_votes":114, "question_length":1253, "response_length":530 }, { "question":"Linear index upper triangular matrix If I have the upper triangular portion of a matrix, offset above the diagonal, stored as a linear array, how can the (i,j) indices of a matrix element be extracted from the linear index of the array? For example, the linear array [a0, a1, a2, a3, a4, a5, a6, a7, a8, a9 is storage for the matrix ``` 0 a0 a1 a2 a3 0 0 a4 a5 a6 0 0 0 a7 a8 0 0 0 0 a9 0 0 0 0 0 ``` And we want to know the (i,j) index in the array corresponding to an offset in the linear matrix, without recursion. 
A suitable result, k2ij(int k, int n) -> (int, int) would satisfy, for example ``` k2ij(k=0, n=5) = (0, 1) k2ij(k=1, n=5) = (0, 2) k2ij(k=2, n=5) = (0, 3) k2ij(k=3, n=5) = (0, 4) k2ij(k=4, n=5) = (1, 2) k2ij(k=5, n=5) = (1, 3) [etc] ```", "response":"The equations going from linear index to (i,j) index are ``` i = n - 2 - floor(sqrt(-8*k + 4*n*(n-1)-7)\/2.0 - 0.5) j = k + i + 1 - n*(n-1)\/2 + (n-i)*((n-i)-1)\/2 ``` The inverse operation, from (i,j) index to linear index is ``` k = (n*(n-1)\/2) - (n-i)*((n-i)-1)\/2 + j - i - 1 ``` Verify in Python with: ``` from numpy import triu_indices, sqrt n = 10 for k in range(n*(n-1)\/2): i = n - 2 - int(sqrt(-8*k + 4*n*(n-1)-7)\/2.0 - 0.5) j = k + i + 1 - n*(n-1)\/2 + (n-i)*((n-i)-1)\/2 assert np.triu_indices(n, k=1)[0][k] == i assert np.triu_indices(n, k=1)[1][k] == j for i in range(n): for j in range(i+1, n): k = (n*(n-1)\/2) - (n-i)*((n-i)-1)\/2 + j - i - 1 assert triu_indices(n, k=1)[0][k] == i assert triu_indices(n, k=1)[1][k] == j ```", "best_answers_score":0.7275, "library_name":"numpy", "question_url":"https:\/\/stackoverflow.com\/questions\/27086195\/linear-index-upper-triangular-matrix", "best_answers_votes":63, "question_length":754, "response_length":732 }, { "question":"Limit number of threads in numpy It seems that my numpy library is using 4 threads, and setting OMP_NUM_THREADS=1 does not stop this. 
numpy.show_config() gives me these results: ``` atlas_threads_info: libraries = ['lapack', 'ptf77blas', 'ptcblas', 'atlas'] library_dirs = ['\/usr\/lib64\/atlas'] define_macros = [('ATLAS_INFO', '\"\\\\\"3.8.4\\\\\"\"')] language = f77 include_dirs = ['\/usr\/include'] blas_opt_info: libraries = ['ptf77blas', 'ptcblas', 'atlas'] library_dirs = ['\/usr\/lib64\/atlas'] define_macros = [('ATLAS_INFO', '\"\\\\\"3.8.4\\\\\"\"')] language = c include_dirs = ['\/usr\/include'] atlas_blas_threads_info: libraries = ['ptf77blas', 'ptcblas', 'atlas'] library_dirs = ['\/usr\/lib64\/atlas'] define_macros = [('ATLAS_INFO', '\"\\\\\"3.8.4\\\\\"\"')] language = c include_dirs = ['\/usr\/include'] openblas_info: NOT AVAILABLE lapack_opt_info: libraries = ['lapack', 'ptf77blas', 'ptcblas', 'atlas'] library_dirs = ['\/usr\/lib64\/atlas'] define_macros = [('ATLAS_INFO', '\"\\\\\"3.8.4\\\\\"\"')] language = f77 include_dirs = ['\/usr\/include'] ``` So I know it is using blas, but I can't figure out how to make it use 1 thread for matrix multiplication.", "response":"There are more than the 3 mentioned environmental variables. The followings are the complete list of environmental variables and the package that uses that variable to control the number of threads it spawns. 
Note that you need to set these variables before doing import numpy: ``` OMP_NUM_THREADS: openmp, OPENBLAS_NUM_THREADS: openblas, MKL_NUM_THREADS: mkl, VECLIB_MAXIMUM_THREADS: accelerate, NUMEXPR_NUM_THREADS: numexpr ``` So in practice you can do: ``` import os os.environ[\"OMP_NUM_THREADS\"] = \"4\" # export OMP_NUM_THREADS=4 os.environ[\"OPENBLAS_NUM_THREADS\"] = \"4\" # export OPENBLAS_NUM_THREADS=4 os.environ[\"MKL_NUM_THREADS\"] = \"6\" # export MKL_NUM_THREADS=6 os.environ[\"VECLIB_MAXIMUM_THREADS\"] = \"4\" # export VECLIB_MAXIMUM_THREADS=4 os.environ[\"NUMEXPR_NUM_THREADS\"] = \"6\" # export NUMEXPR_NUM_THREADS=6 ``` Note that as of November 2018 the Numpy developers are working on making this possible to do after you do import numpy as well. I'll update this post once they commit those changes.", "best_answers_score":0.7258, "library_name":"numpy", "question_url":"https:\/\/stackoverflow.com\/questions\/30791550\/limit-number-of-threads-in-numpy", "best_answers_votes":78, "question_length":1129, "response_length":1003 }, { "question":"How do I create an empty array and then append to it in NumPy? I want to create an empty array and append items to it, one at a time. ``` xs = [] for item in data: xs.append(item) ``` Can I use this list-style notation with NumPy arrays?", "response":"That is the wrong mental model for using NumPy efficiently. NumPy arrays are stored in contiguous blocks of memory. To append rows or columns to an existing array, the entire array needs to be copied to a new block of memory, creating gaps for the new elements to be stored. This is very inefficient if done repeatedly.
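A minimal illustration of the copy cost just described, contrasting repeated np.append with filling a preallocated array:

```python
import numpy as np

data = [1.0, 2.0, 3.0, 4.0]

# Anti-pattern: every np.append call copies the whole array (O(n^2) total).
grown = np.empty(0)
for v in data:
    grown = np.append(grown, v)

# Preferred: allocate once, then assign in place.
filled = np.empty(len(data))
for i, v in enumerate(data):
    filled[i] = v
```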
Instead of appending rows, allocate a suitably sized array, and then assign to it row-by-row: ``` >>> import numpy as np >>> a = np.zeros(shape=(3, 2)) >>> a array([[ 0., 0.], [ 0., 0.], [ 0., 0.]]) >>> a[0] = [1, 2] >>> a[1] = [3, 4] >>> a[2] = [5, 6] >>> a array([[ 1., 2.], [ 3., 4.], [ 5., 6.]]) ```", "best_answers_score":0.7248, "library_name":"numpy", "question_url":"https:\/\/stackoverflow.com\/questions\/568962\/how-do-i-create-an-empty-array-and-then-append-to-it-in-numpy", "best_answers_votes":609, "question_length":237, "response_length":623 }, { "question":"How to copy data from a numpy array to another What is the fastest way to copy data from array b to array a, without modifying the address of array a? I need this because an external library (PyFFTW) uses a pointer to my array that cannot change. For example: ``` a = numpy.empty(n, dtype=complex) for i in xrange(a.size): a[i] = b[i] ``` Is it possible to do it without a loop?", "response":"I believe ``` a = numpy.empty_like(b) a[:] = b ``` will copy the values quickly. As Funsi mentions, recent versions of numpy also have the copyto function.", "best_answers_score":0.7247, "library_name":"numpy", "question_url":"https:\/\/stackoverflow.com\/questions\/6431973\/how-to-copy-data-from-a-numpy-array-to-another", "best_answers_votes":95, "question_length":378, "response_length":155 }, { "question":"Mapping a NumPy array in place Is it possible to map a NumPy array in place? If yes, how?
Given a_values - 2D array - this is the bit of code that does the trick for me at the moment: ``` for row in range(len(a_values)): for col in range(len(a_values[0])): a_values[row][col] = dim(a_values[row][col]) ``` But it's so ugly that I suspect that somewhere within NumPy there must be a function that does the same with something looking like: ``` a_values.map_in_place(dim) ``` but if something like the above exists, I've been unable to find it.", "response":"It's only worth trying to do this in-place if you are under significant space constraints. If that's the case, it is possible to speed up your code a little bit by iterating over a flattened view of the array. Since reshape returns a new view when possible, the data itself isn't copied (unless the original has unusual structure). I don't know of a better way to achieve bona fide in-place application of an arbitrary Python function. ``` >>> def flat_for(a, f): ... a = a.reshape(-1) ... for i, v in enumerate(a): ... a[i] = f(v) ... >>> a = numpy.arange(25).reshape(5, 5) >>> flat_for(a, lambda x: x + 5) >>> a array([[ 5, 6, 7, 8, 9], [10, 11, 12, 13, 14], [15, 16, 17, 18, 19], [20, 21, 22, 23, 24], [25, 26, 27, 28, 29]]) ``` Some timings: ``` >>> a = numpy.arange(2500).reshape(50, 50) >>> f = lambda x: x + 5 >>> %timeit flat_for(a, f) 1000 loops, best of 3: 1.86 ms per loop ``` It's about twice as fast as the nested loop version: ``` >>> a = numpy.arange(2500).reshape(50, 50) >>> def nested_for(a, f): ... for i in range(len(a)): ... for j in range(len(a[0])): ... a[i][j] = f(a[i][j]) ... 
>>> %timeit nested_for(a, f) 100 loops, best of 3: 3.79 ms per loop ``` Of course vectorize is still faster, so if you can make a copy, use that: ``` >>> a = numpy.arange(2500).reshape(50, 50) >>> g = numpy.vectorize(lambda x: x + 5) >>> %timeit g(a) 1000 loops, best of 3: 584 us per loop ``` And if you can rewrite dim using built-in ufuncs, then please, please, don't vectorize: ``` >>> a = numpy.arange(2500).reshape(50, 50) >>> %timeit a + 5 100000 loops, best of 3: 4.66 us per loop ``` numpy does operations like += in place, just as you might expect -- so you can get the speed of a ufunc with in-place application at no cost. Sometimes it's even faster! See here for an example. By the way, my original answer to this question, which can be viewed in its edit history, is ridiculous, and involved vectorizing over indices into a. Not only did it have to do some funky stuff to bypass vectorize's type-detection mechanism, it turned out to be just as slow as the nested loop version. So much for cleverness!", "best_answers_score":0.7245, "library_name":"numpy", "question_url":"https:\/\/stackoverflow.com\/questions\/6824122\/mapping-a-numpy-array-in-place", "best_answers_votes":55, "question_length":542, "response_length":2117 }, { "question":"Elegant way to perform tuple arithmetic What is the most elegant and concise way (without creating my own class with operator overloading) to perform tuple arithmetic in Python 2.7? Lets say I have two tuples: ``` a = (10, 10) b = (4, 4) ``` My intended result is ``` c = a - b = (6, 6) ``` I currently use: ``` c = (a[0] - b[0], a[1] - b[1]) ``` I also tried: ``` c = tuple([(i - j) for i in a for j in b]) ``` but the result was (6, 6, 6, 6). 
I believe the above works as nested for loops, resulting in 4 iterations and 4 values in the result.", "response":"If you're looking for fast, you can use numpy: ``` >>> import numpy >>> numpy.subtract((10, 10), (4, 4)) array([6, 6]) ``` and if you want to keep it in a tuple: ``` >>> tuple(numpy.subtract((10, 10), (4, 4))) (6, 6) ```", "best_answers_score":0.7242, "library_name":"numpy", "question_url":"https:\/\/stackoverflow.com\/questions\/17418108\/elegant-way-to-perform-tuple-arithmetic", "best_answers_votes":92, "question_length":546, "response_length":220 }, { "question":"Why is log(inf + inf j) equal to (inf + 0.785398 j) in C++\/Python\/NumPy? I've been noticing strange behaviour of the log functions in C++ and numpy when handling complex infinite numbers. Specifically, log(inf + inf * 1j) equals (inf + 0.785398j) when I expect it to be (inf + nan * 1j). When taking the log of a complex number, the real part is the log of the absolute value of the input and the imaginary part is the phase of the input. Returning 0.785398 as the imaginary part of log(inf + inf * 1j) means it assumes the infs in the real and the imaginary part have the same length. This assumption does not seem to be consistent with other calculations, for example inf - inf == nan and inf \/ inf == nan, which assume two infs do not necessarily have the same values. Why is the assumption for log(inf + inf * 1j) different? Reproducing C++ code: ``` #include <complex> #include <iostream> #include <limits> int main() { double inf = std::numeric_limits<double>::infinity(); std::complex<double> b(inf, inf); std::complex<double> c = std::log(b); std::cout << c << \"\\n\"; } ``` Reproducing Python code (numpy): ``` import numpy as np a = complex(float('inf'), float('inf')) print(np.log(a)) ``` EDIT: Thank you to everyone who's involved in the discussion about the historical reason and the mathematical reason. All of you turned this naive question into a really interesting discussion. 
The provided answers are all of high quality and I wish I could accept more than one answer. However, I've decided to accept @simon's answer as it explains in more detail the mathematical reason and provided a link to the document explaining the logic (although I can't fully understand it).", "response":"The free final draft of the C99 specification says on page 491 clog(+\u221e, +i\u221e) returns +\u221e + i\u03c0\/4. This is still the case currently. The C++ specification explains the same rules with the note The semantics of this function are intended to be consistent with the C function clog. I agree the behaviour is confusing from a math point of view, and arguably inconsistent with other inf semantics, as you pointed out. But pragmatically, it's part of the C standard, which makes it part of the C++ standard, and since NumPy normally relies on C behaviour (even in confusing cases), this is inherited in the Python example. The standard-library cmath.log() function has the same behaviour (if you test it right...): ```py >>> import cmath >>> cmath.log(complex(float('inf'), float('inf'))) (inf+0.7853981633974483j) ``` I have no means to investigate the rationale that went into the C standard. I assume there were pragmatic choices being made here, potentially when considering how these complex functions interact with each other. To preserve historical anecdata from a comment by Mark Dickinson, former Python core developer: @Fareanor: \"it's not a \"C does this ! So do we !\".\" As the person who implemented this, I can tell you that it is exactly that. :-) The CPython cmath module doesn't wrap the C complex math operations; it implements everything directly. Special-case handling was deliberately chosen (by me) to match what's in the C99 standard. 
So keeping C99's behaviour in Python was a conscious choice rather than an accident.", "best_answers_score":0.7236, "library_name":"numpy", "question_url":"https:\/\/stackoverflow.com\/questions\/74798626\/why-is-loginf-inf-j-equal-to-inf-0-785398-j-in-c-python-numpy", "best_answers_votes":52, "question_length":1652, "response_length":1532 }, { "question":"surface plots in matplotlib I have a list of 3-tuples representing a set of points in 3D space. I want to plot a surface that covers all these points. The plot_surface function in the mplot3d package requires the arguments X, Y and Z to be 2d arrays. Is plot_surface the right function to plot a surface, and how do I transform my data into the required format? ``` data = [(x1,y1,z1),(x2,y2,z2),.....,(xn,yn,zn)] ```", "response":"For surfaces it's a bit different: rather than a list of 3-tuples, you should pass in a grid for the domain as 2d arrays. If all you have is a list of 3d points, rather than some function f(x, y) -> z, then you will have a problem because there are multiple ways to triangulate that 3d point cloud into a surface. 
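If you do only have a scattered point cloud, one option (a hedged sketch, not necessarily the right choice for every data set) is Axes3D.plot_trisurf, which Delaunay-triangulates the (x, y) locations for you. The point cloud below is made up, and the Agg backend is used so no window is needed:

```python
import numpy as np
import matplotlib
matplotlib.use('Agg')  # off-screen rendering for this sketch
from mpl_toolkits.mplot3d import Axes3D  # noqa: F401, needed on older matplotlib
import matplotlib.pyplot as plt

# made-up stand-in for the question's data = [(x1, y1, z1), ...]
data = [(x, y, x**2 + y) for x in np.linspace(-3, 3, 15)
        for y in np.linspace(-3, 3, 15)]
xs, ys, zs = zip(*data)

fig = plt.figure()
ax = fig.add_subplot(111, projection='3d')
ax.plot_trisurf(xs, ys, zs, cmap='viridis')  # triangulated surface over the cloud
fig.savefig('trisurf.png')
```

The triangulation is chosen automatically, so for noisy or non-function-like clouds the result may not be the surface you had in mind. 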
Here's a smooth surface example: ``` import numpy as np from mpl_toolkits.mplot3d import Axes3D # Axes3D import has side effects, it enables using projection='3d' in add_subplot import matplotlib.pyplot as plt import random def fun(x, y): return x**2 + y fig = plt.figure() ax = fig.add_subplot(111, projection='3d') x = y = np.arange(-3.0, 3.0, 0.05) X, Y = np.meshgrid(x, y) zs = np.array(fun(np.ravel(X), np.ravel(Y))) Z = zs.reshape(X.shape) ax.plot_surface(X, Y, Z) ax.set_xlabel('X Label') ax.set_ylabel('Y Label') ax.set_zlabel('Z Label') plt.show() ```", "best_answers_score":0.7228, "library_name":"numpy", "question_url":"https:\/\/stackoverflow.com\/questions\/9170838\/surface-plots-in-matplotlib", "best_answers_votes":185, "question_length":412, "response_length":866 }, { "question":"Numpy: Divide each row by a vector element Suppose I have a numpy array: ``` data = np.array([[1,1,1],[2,2,2],[3,3,3]]) ``` and I have a corresponding \"vector:\" ``` vector = np.array([1,2,3]) ``` How do I operate on data along each row to either subtract or divide so the result is: ``` sub_result = [[0,0,0], [0,0,0], [0,0,0]] div_result = [[1,1,1], [1,1,1], [1,1,1]] ``` Long story short: How do I perform an operation on each row of a 2D array with a 1D array of scalars that correspond to each row?", "response":"Here you go. 
You just need to use None (or alternatively np.newaxis) combined with broadcasting: ``` In [6]: data - vector[:,None] Out[6]: array([[0, 0, 0], [0, 0, 0], [0, 0, 0]]) In [7]: data \/ vector[:,None] Out[7]: array([[1, 1, 1], [1, 1, 1], [1, 1, 1]]) ```", "best_answers_score":0.7214, "library_name":"numpy", "question_url":"https:\/\/stackoverflow.com\/questions\/19602187\/numpy-divide-each-row-by-a-vector-element", "best_answers_votes":268, "question_length":502, "response_length":262 }, { "question":"Swapping the dimensions of a numpy array I would like to do the following: ``` for i in dimension1: for j in dimension2: for k in dimension3: for l in dimension4: B[k,l,i,j] = A[i,j,k,l] ``` without the use of loops. In the end both A and B contain the same information but indexed differently. I must point out that the dimension 1,2,3 and 4 can be the same or different. So a numpy.reshape() seems difficult.", "response":"The canonical way of doing this in numpy would be to use np.transpose's optional permutation argument. In your case, to go from ijkl to klij, the permutation is (2, 3, 0, 1), e.g.: ``` In [16]: a = np.empty((2, 3, 4, 5)) In [17]: b = np.transpose(a, (2, 3, 0, 1)) In [18]: b.shape Out[18]: (4, 5, 2, 3) ```", "best_answers_score":0.7205, "library_name":"numpy", "question_url":"https:\/\/stackoverflow.com\/questions\/23943379\/swapping-the-dimensions-of-a-numpy-array", "best_answers_votes":142, "question_length":410, "response_length":306 }, { "question":"Unable to allocate array with shape and data type I'm facing an issue with allocating huge arrays in numpy on Ubuntu 18 while not facing the same issue on MacOS. 
I am trying to allocate memory for a numpy array with shape (156816, 36, 53806) with ``` np.zeros((156816, 36, 53806), dtype='uint8') ``` and while I'm getting an error on Ubuntu OS ``` >>> import numpy as np >>> np.zeros((156816, 36, 53806), dtype='uint8') Traceback (most recent call last): File \"\", line 1, in numpy.core._exceptions.MemoryError: Unable to allocate array with shape (156816, 36, 53806) and data type uint8 ``` I'm not getting it on MacOS: ``` >>> import numpy as np >>> np.zeros((156816, 36, 53806), dtype='uint8') array([[[0, 0, 0, ..., 0, 0, 0], [0, 0, 0, ..., 0, 0, 0], [0, 0, 0, ..., 0, 0, 0], ..., [0, 0, 0, ..., 0, 0, 0], [0, 0, 0, ..., 0, 0, 0], [0, 0, 0, ..., 0, 0, 0]], [[0, 0, 0, ..., 0, 0, 0], [0, 0, 0, ..., 0, 0, 0], [0, 0, 0, ..., 0, 0, 0], ..., [0, 0, 0, ..., 0, 0, 0], [0, 0, 0, ..., 0, 0, 0], [0, 0, 0, ..., 0, 0, 0]], [[0, 0, 0, ..., 0, 0, 0], [0, 0, 0, ..., 0, 0, 0], [0, 0, 0, ..., 0, 0, 0], ..., [0, 0, 0, ..., 0, 0, 0], [0, 0, 0, ..., 0, 0, 0], [0, 0, 0, ..., 0, 0, 0]], ..., [[0, 0, 0, ..., 0, 0, 0], [0, 0, 0, ..., 0, 0, 0], [0, 0, 0, ..., 0, 0, 0], ..., [0, 0, 0, ..., 0, 0, 0], [0, 0, 0, ..., 0, 0, 0], [0, 0, 0, ..., 0, 0, 0]], [[0, 0, 0, ..., 0, 0, 0], [0, 0, 0, ..., 0, 0, 0], [0, 0, 0, ..., 0, 0, 0], ..., [0, 0, 0, ..., 0, 0, 0], [0, 0, 0, ..., 0, 0, 0], [0, 0, 0, ..., 0, 0, 0]], [[0, 0, 0, ..., 0, 0, 0], [0, 0, 0, ..., 0, 0, 0], [0, 0, 0, ..., 0, 0, 0], ..., [0, 0, 0, ..., 0, 0, 0], [0, 0, 0, ..., 0, 0, 0], [0, 0, 0, ..., 0, 0, 0]]], dtype=uint8) ``` I've read somewhere that np.zeros shouldn't be really allocating the whole memory needed for the array, but only for the non-zero elements. Even though the Ubuntu machine has 64gb of memory, while my MacBook Pro has only 16gb. versions: ``` Ubuntu os -> ubuntu mate 18 python -> 3.6.8 numpy -> 1.17.0 mac os -> 10.14.6 python -> 3.6.4 numpy -> 1.17.0 ``` PS: also failed on Google Colab", "response":"This is likely due to your system's overcommit handling mode. 
In the default mode, 0, Heuristic overcommit handling. Obvious overcommits of address space are refused. Used for a typical system. It ensures a seriously wild allocation fails while allowing overcommit to reduce swap usage. The root is allowed to allocate slightly more memory in this mode. This is the default. The exact heuristic used is not well explained here, but this is discussed more on Linux over commit heuristic and on this page. You can check your current overcommit mode by running ``` $ cat \/proc\/sys\/vm\/overcommit_memory 0 ``` In this case, you're allocating ``` >>> 156816 * 36 * 53806 \/ 1024.0**3 282.8939827680588 ``` ~282 GB and the kernel is saying well obviously there's no way I'm going to be able to commit that many physical pages to this, and it refuses the allocation. If (as root) you run: ``` $ echo 1 > \/proc\/sys\/vm\/overcommit_memory ``` This will enable the \"always overcommit\" mode, and you'll find that indeed the system will allow you to make the allocation no matter how large it is (within 64-bit memory addressing at least). I tested this myself on a machine with 32 GB of RAM. With overcommit mode 0 I also got a MemoryError, but after changing it back to 1 it works: ``` >>> import numpy as np >>> a = np.zeros((156816, 36, 53806), dtype='uint8') >>> a.nbytes 303755101056 ``` You can then go ahead and write to any location within the array, and the system will only allocate physical pages when you explicitly write to that page. So you can use this, with care, for sparse arrays.", "best_answers_score":0.7204, "library_name":"numpy", "question_url":"https:\/\/stackoverflow.com\/questions\/57507832\/unable-to-allocate-array-with-shape-and-data-type", "best_answers_votes":206, "question_length":2055, "response_length":1583 }, { "question":"ValueError: setting an array element with a sequence Why do the following code samples: ``` np.array([[1, 2], [2, 3, 4]]) np.array([1.2, \"abc\"], dtype=float) ``` all give the following error? 
```none ValueError: setting an array element with a sequence. ```", "response":"Possible reason 1: trying to create a jagged array You may be creating an array from a list that isn't shaped like a multi-dimensional array: ``` numpy.array([[1, 2], [2, 3, 4]]) # wrong! ``` ``` numpy.array([[1, 2], [2, [3, 4]]]) # wrong! ``` In these examples, the argument to numpy.array contains sequences of different lengths. Those will yield this error message because the input list is not shaped like a \"box\" that can be turned into a multidimensional array. Possible reason 2: providing elements of incompatible types For example, providing a string as an element in an array of type float: ``` numpy.array([1.2, \"abc\"], dtype=float) # wrong! ``` If you really want to have a NumPy array containing both strings and floats, you could use the dtype object, which allows the array to hold arbitrary Python objects: ``` numpy.array([1.2, \"abc\"], dtype=object) ```", "best_answers_score":0.72, "library_name":"numpy", "question_url":"https:\/\/stackoverflow.com\/questions\/4674473\/valueerror-setting-an-array-element-with-a-sequence", "best_answers_votes":396, "question_length":257, "response_length":870 }, { "question":"Using NumPy to build an array of all combinations of two arrays I'm trying to run over the parameters space of a six-parameter function to study its numerical behavior before trying to do anything complex with it, so I'm searching for an efficient way to do this. My function takes float values given in a 6-dim NumPy array as input. 
What I tried to do initially was this: First, I created a function that takes two arrays and generates an array with all combinations of values from the two arrays: ``` from numpy import * def comb(a, b): c = [] for i in a: for j in b: c.append(r_[i,j]) return c ``` Then, I used reduce() to apply that to m copies of the same array: ``` def combs(a, m): return reduce(comb, [a]*m) ``` Finally, I evaluate my function like this: ``` values = combs(np.arange(0, 1, 0.1), 6) for val in values: print F(val) ``` This works, but it's way too slow. I know the space of parameters is huge, but this shouldn't be so slow. I have only sampled 10^6 (a million) points in this example and it took more than 15 seconds just to create the array values. Is there a more efficient way of doing this with NumPy? I can modify the way the function F takes its arguments if it's necessary.", "response":"In newer versions of NumPy (>1.8.x), numpy.meshgrid() provides a much faster implementation: For pv's solution: ```none In [113]: %timeit cartesian(([1, 2, 3], [4, 5], [6, 7])) 10000 loops, best of 3: 135 \u00b5s per loop In [114]: cartesian(([1, 2, 3], [4, 5], [6, 7])) Out[114]: array([[1, 4, 6], [1, 4, 7], [1, 5, 6], [1, 5, 7], [2, 4, 6], [2, 4, 7], [2, 5, 6], [2, 5, 7], [3, 4, 6], [3, 4, 7], [3, 5, 6], [3, 5, 7]]) ``` numpy.meshgrid() used to be two-dimensional only, but now it is capable of being multidimensional. 
In this case, three-dimensional: ```none In [115]: %timeit np.array(np.meshgrid([1, 2, 3], [4, 5], [6, 7])).T.reshape(-1,3) 10000 loops, best of 3: 74.1 \u00b5s per loop In [116]: np.array(np.meshgrid([1, 2, 3], [4, 5], [6, 7])).T.reshape(-1,3) Out[116]: array([[1, 4, 6], [1, 5, 6], [2, 4, 6], [2, 5, 6], [3, 4, 6], [3, 5, 6], [1, 4, 7], [1, 5, 7], [2, 4, 7], [2, 5, 7], [3, 4, 7], [3, 5, 7]]) ``` Note that the order of the final result is slightly different.", "best_answers_score":0.7195, "library_name":"numpy", "question_url":"https:\/\/stackoverflow.com\/questions\/1208118\/using-numpy-to-build-an-array-of-all-combinations-of-two-arrays", "best_answers_votes":220, "question_length":1203, "response_length":975 }, { "question":"What is the equivalent of \"zip()\" in Python's numpy? I am trying to do the following but with numpy arrays: ``` x = [(0.1, 1.), (0.1, 2.), (0.1, 3.), (0.1, 4.), (0.1, 5.)] normal_result = zip(*x) ``` This should give a result of: ``` normal_result = [(0.1, 0.1, 0.1, 0.1, 0.1), (1., 2., 3., 4., 5.)] ``` But if the input vector is a numpy array: ``` y = np.array(x) numpy_result = zip(*y) print type(numpy_result) ``` It (expectedly) returns a: ``` ``` The issue is that I will need to transform the result back into a numpy array after this. What I would like to know is what is if there is an efficient numpy function that will avoid these back-and-forth transformations?", "response":"You can just transpose it... ``` >>> a = np.array([(0.1, 1.), (0.1, 2.), (0.1, 3.), (0.1, 4.), (0.1, 5.)]) >>> a array([[ 0.1, 1. ], [ 0.1, 2. ], [ 0.1, 3. ], [ 0.1, 4. ], [ 0.1, 5. ]]) >>> a.T array([[ 0.1, 0.1, 0.1, 0.1, 0.1], [ 1. , 2. , 3. , 4. , 5. ]]) ```", "best_answers_score":0.7192, "library_name":"numpy", "question_url":"https:\/\/stackoverflow.com\/questions\/12744778\/what-is-the-equivalent-of-zip-in-pythons-numpy", "best_answers_votes":85, "question_length":674, "response_length":261 }, { "question":"What is the purpose of numpy.where returning a tuple? 
When I run this code: ``` import numpy as np a = np.array([1, 2, 3, 4, 5, 6]) print(np.where(a > 2)) ``` it would be natural to get an array of indices where a > 2, i.e. [2, 3, 4, 5], but instead we get: ``` (array([2, 3, 4, 5], dtype=int64),) ``` i.e. a one-element tuple. Then, to get the \"natural\" answer of numpy.where, we have to do: ``` np.where(a > 2)[0] ``` What's the point in this tuple? In which situation is it useful? Note: I'm speaking here only about the use case numpy.where(cond) and not numpy.where(cond, x, y) that also exists (see documentation).", "response":"numpy.where returns a tuple because each element of the tuple refers to a dimension. Consider this example in 2 dimensions: ``` a = np.array([[1, 2, 3, 4, 5, 6], [-2, 1, 2, 3, 4, 5]]) print(np.where(a > 2)) (array([0, 0, 0, 0, 1, 1, 1], dtype=int64), array([2, 3, 4, 5, 3, 4, 5], dtype=int64)) ``` As you can see, the first element of the tuple refers to the first dimension of relevant elements; the second element refers to the second dimension. This is a convention numpy often uses. You will see it also when you ask for the shape of an array, i.e. the shape of a 1-dimensional array will return a tuple with 1 element: ``` a = np.array([[1, 2, 3, 4, 5, 6], [-2, 1, 2, 3, 4, 5]]) print(a.shape, a.ndim) # (2, 6) 2 b = np.array([1, 2, 3, 4, 5, 6]) print(b.shape, b.ndim) # (6,) 1 ```", "best_answers_score":0.7191, "library_name":"numpy", "question_url":"https:\/\/stackoverflow.com\/questions\/50646102\/what-is-the-purpose-of-numpy-where-returning-a-tuple", "best_answers_votes":47, "question_length":637, "response_length":786 }, { "question":"Replacing Pandas or Numpy Nan with a None to use with MysqlDB I am trying to write a Pandas dataframe (or can use a numpy array) to a mysql database using MysqlDB. MysqlDB doesn't seem to understand 'nan' and my database throws out an error saying nan is not in the field list. I need to find a way to convert the 'nan' into a NoneType. 
Any ideas?", "response":"For pandas > 1.3.0 see this answer. @bogatron has it right, you can use where, it's worth noting that you can do this natively in pandas: ``` df1 = df.where(pd.notnull(df), None) ``` Note: this changes the dtype of all columns to object. Example: ``` In [1]: df = pd.DataFrame([1, np.nan]) In [2]: df Out[2]: 0 0 1 1 NaN In [3]: df1 = df.where(pd.notnull(df), None) In [4]: df1 Out[4]: 0 0 1 1 None ``` Note: what you cannot do recast the DataFrames dtype to allow all datatypes types, using astype, and then the DataFrame fillna method: ``` df1 = df.astype(object).replace(np.nan, 'None') ``` Unfortunately neither this, nor using replace, works with None see this (closed) issue. As an aside, it's worth noting that for most use cases you don't need to replace NaN with None, see this question about the difference between NaN and None in pandas. However, in this specific case it seems you do (at least at the time of this answer).", "best_answers_score":0.7183, "library_name":"numpy", "question_url":"https:\/\/stackoverflow.com\/questions\/14162723\/replacing-pandas-or-numpy-nan-with-a-none-to-use-with-mysqldb", "best_answers_votes":336, "question_length":345, "response_length":934 }, { "question":"How can I upgrade NumPy? 
When I installed OpenCV using Homebrew (brew), I got this problem whenever I run this command to test python -c \"import cv2\": ``` RuntimeError: module compiled against API version 9 but this version of numpy is 6 Traceback (most recent call last): File \"\", line 1, in ImportError: numpy.core.multiarray failed to import ``` I tried to upgrade NumPy, but this is confusing: ``` >>> import numpy >>> print numpy.__version__ 1.6.1 ``` When I run brew to upgrade NumPy, I got this problem: ``` brew install -u numpy Warning: numpy-1.9.1 already installed ``` When I uninstalled it: ``` sudo pip install numpy Requirement already satisfied (use --upgrade to upgrade): numpy in .\/anaconda\/lib\/python2.7\/site-packages ``` I have followed this question and deleted Anaconda from my mac. ``` pip install numpy Requirement already satisfied (use --upgrade to upgrade): numpy in \/Library\/Python\/2.7\/site-packages ``` But nothing have changed. How can I link the NumPy version to OpenCV?", "response":"When you already have an older version of NumPy, use this: ``` pip install numpy --upgrade ``` If it still doesn't work, try: ``` pip install numpy --upgrade --ignore-installed ```", "best_answers_score":0.7181, "library_name":"numpy", "question_url":"https:\/\/stackoverflow.com\/questions\/28517937\/how-can-i-upgrade-numpy", "best_answers_votes":84, "question_length":1001, "response_length":180 }, { "question":"Generating movie from python without saving individual frames to files I would like to create an h264 or divx movie from frames that I generate in a python script in matplotlib. There are about 100k frames in this movie. In examples on the web [eg. 1], I have only seen the method of saving each frame as a png and then running mencoder or ffmpeg on these files. In my case, saving each frame is impractical. Is there a way to take a plot generated from matplotlib and pipe it directly to ffmpeg, generating no intermediate files? 
Programming with ffmpeg's C-api is too difficult for me [eg. 2]. Also, I need an encoding that has good compression such as x264 as the movie file will otherwise be too large for a subsequent step. So it would be great to stick with mencoder\/ffmpeg\/x264. Is there something that can be done with pipes [3]? [1] http:\/\/matplotlib.sourceforge.net\/examples\/animation\/movie_demo.html [2] How does one encode a series of images into H264 using the x264 C API? [3] http:\/\/www.ffmpeg.org\/ffmpeg-doc.html#SEC41", "response":"This functionality is now (at least as of 1.2.0, maybe 1.1) baked into matplotlib via the MovieWriter class and it's sub-classes in the animation module. You also need to install ffmpeg in advance. ``` import matplotlib.animation as animation import numpy as np from pylab import * dpi = 100 def ani_frame(): fig = plt.figure() ax = fig.add_subplot(111) ax.set_aspect('equal') ax.get_xaxis().set_visible(False) ax.get_yaxis().set_visible(False) im = ax.imshow(rand(300,300),cmap='gray',interpolation='nearest') im.set_clim([0,1]) fig.set_size_inches([5,5]) tight_layout() def update_img(n): tmp = rand(300,300) im.set_data(tmp) return im #legend(loc=0) ani = animation.FuncAnimation(fig,update_img,300,interval=30) writer = animation.writers['ffmpeg'](fps=30) ani.save('demo.mp4',writer=writer,dpi=dpi) return ani ``` Documentation for animation", "best_answers_score":0.718, "library_name":"numpy", "question_url":"https:\/\/stackoverflow.com\/questions\/4092927\/generating-movie-from-python-without-saving-individual-frames-to-files", "best_answers_votes":58, "question_length":1033, "response_length":845 }, { "question":"How to normalize a 2-dimensional numpy array in python less verbose? 
Given a 3 times 3 numpy array ``` a = numpy.arange(0,27,3).reshape(3,3) # array([[ 0, 3, 6], # [ 9, 12, 15], # [18, 21, 24]]) ``` To normalize the rows of the 2-dimensional array I thought of ``` row_sums = a.sum(axis=1) # array([ 9, 36, 63]) new_matrix = numpy.zeros((3,3)) for i, (row, row_sum) in enumerate(zip(a, row_sums)): new_matrix[i,:] = row \/ row_sum ``` There must be a better way, isn't there? Perhaps to clarify: by normalizing I mean that the sum of the entries per row must be one. But I think that will be clear to most people.", "response":"Broadcasting is really good for this: ``` row_sums = a.sum(axis=1) new_matrix = a \/ row_sums[:, numpy.newaxis] ``` row_sums[:, numpy.newaxis] reshapes row_sums from being (3,) to being (3, 1). When you do a \/ b, a and b are broadcast against each other. You can learn more about broadcasting here or even better here.", "best_answers_score":0.7155, "library_name":"numpy", "question_url":"https:\/\/stackoverflow.com\/questions\/8904694\/how-to-normalize-a-2-dimensional-numpy-array-in-python-less-verbose", "best_answers_votes":181, "question_length":608, "response_length":317 }, { "question":"Stratified Sampling in Pandas I've looked at the Sklearn stratified sampling docs as well as the pandas docs and also Stratified samples from Pandas and sklearn stratified sampling based on a column but they do not address this issue. I'm looking for a fast pandas\/sklearn\/numpy way to generate stratified samples of size n from a dataset. However, for rows with fewer than the specified sampling number, it should take all of the entries. Concrete example: Thank you! :)", "response":"Use min when passing the number to sample. 
Consider the dataframe df ``` df = pd.DataFrame(dict( A=[1, 1, 1, 2, 2, 2, 2, 3, 4, 4], B=range(10) )) df.groupby('A', group_keys=False).apply(lambda x: x.sample(min(len(x), 2))) A B 1 1 1 2 1 2 3 2 3 6 2 6 7 3 7 9 4 9 8 4 8 ```", "best_answers_score":0.7145, "library_name":"numpy", "question_url":"https:\/\/stackoverflow.com\/questions\/44114463\/stratified-sampling-in-pandas", "best_answers_votes":118, "question_length":469, "response_length":271 }, { "question":"How do you fix \"runtimeError: package fails to pass a sanity check\" for numpy and pandas? This is the error I am getting and, as far as I can tell, there is nothing useful on the error link to fix this. RuntimeError: The current Numpy installation ('...\\\\venv\\\\lib\\\\site-packages\\\\numpy\\\\__init__.py') fails to pass a sanity check due to a bug in the windows runtime. See this issue for more information: https:\/\/developercommunity.visualstudio.com\/content\/problem\/1207405\/fmod-after-an-update-to-windows-2004-is-causing-a.html I have tried multiple versions of Python (3.8.6 and 3.9.0) and numpy and pandas. I am currently using PyCharm to do all this.", "response":"This error occurs when using python3.9 and numpy1.19.4 So uninstalling numpy1.19.4 and installing 1.19.3 will work. Edit As of January 5th 2021 numpy version 1.19.5 is out and appears to solve the problem.", "best_answers_score":0.7137, "library_name":"numpy", "question_url":"https:\/\/stackoverflow.com\/questions\/64654805\/how-do-you-fix-runtimeerror-package-fails-to-pass-a-sanity-check-for-numpy-an", "best_answers_votes":193, "question_length":653, "response_length":205 }, { "question":"array.array versus numpy.array If you are creating a 1d array in Python, is there any benefit to using the NumPy package?", "response":"It all depends on what you plan to do with the array. If all you're doing is creating arrays of simple data types and doing I\/O, the array module will do just fine. 
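For example, the kind of simple typed storage and binary I\/O the array module covers (a quick sketch):

```python
from array import array
import io

a = array('d', [1.0, 2.0, 3.0])  # compact buffer of C doubles
a.append(4.0)

# binary round-trip, as you would with a file or a socket
buf = io.BytesIO(a.tobytes())
b = array('d')
b.frombytes(buf.read())

print(b.tolist())  # -> [1.0, 2.0, 3.0, 4.0]
```

tobytes and frombytes hand you the raw machine representation, which is roughly as far as the module goes. 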
If, on the other hand, you want to do any kind of numerical calculations, the array module doesn't provide any help with that. NumPy (and SciPy) give you a wide variety of operations between arrays and special functions that are useful not only for scientific work but for things like advanced image manipulation or in general anything where you need to perform efficient calculations with large amounts of data. Numpy is also much more flexible, e.g. it supports arrays of any type of Python objects, and is also able to interact \"natively\" with your own objects if they conform to the array interface.", "best_answers_score":0.7137, "library_name":"numpy", "question_url":"https:\/\/stackoverflow.com\/questions\/111983\/array-array-versus-numpy-array", "best_answers_votes":83, "question_length":121, "response_length":768 }, { "question":"Suppressing scientific notation in pandas? I have a DataFrame in pandas where some of the numbers are expressed in scientific notation (or exponent notation) like this: ``` id value id 1.00 -4.22e-01 value -0.42 1.00e+00 percent -0.72 1.00e-01 played 0.03 -4.35e-02 money -0.22 3.37e-01 other NaN NaN sy -0.03 2.19e-04 sz -0.33 3.83e-01 ``` And the scientific notation makes what should be an easy comparison, needlessly difficult. I assume it's the 21900 value that's screwing it up for the others. I mean 1.0 is encoded. ONE! 
This doesn't work: ``` np.set_printoptions(supress=True) ``` And pandas.set_printoptions doesn't implement suppress either, and I've looked all at pd.describe_options() in despair, and pd.core.format.set_eng_float_format() only seems to turn it on for all the other float values, with no ability to turn it off.", "response":"quick temporary: df.round(4) global: pd.options.display.float_format = '{:20,.2f}'.format The :20 means the total width should be twenty characters, padded with whitespace on the left if it would otherwise be shorter. You can use simply '{:,.2f}' if you don't want to specify the number. The .2f means that there should be two digits after the decimal point, even if they are zeros. For more custom and advanced styling, see Apply Formatting to Each Column in Dataframe Using a Dict Mapping, pandas table styling and pandas format. For example you can do df.style.format(precision=2), format each column dfg.style.format({'column_pct':'{:.1%}'}) and many other useful things. Note df.style only works in jupyter but this solution lets you store the results in the dataframe to display in the console as well.", "best_answers_score":0.7131, "library_name":"numpy", "question_url":"https:\/\/stackoverflow.com\/questions\/17737300\/suppressing-scientific-notation-in-pandas", "best_answers_votes":115, "question_length":1194, "response_length":808 }, { "question":"No module named when using PyInstaller I try to compile a Python project under Windows 7 using PyInstaller. The project works fine, there are no issues, however when I try to compile it the result doesn't work. Though I get no warnings during compilation there are many in the warnmain.txt file in the build directory: warnmain.txt I don't really understand those warnings, for example \"no module named numpy.pi\" since numpy.pi is no module but a number. I never tried to import numpy.pi. I did import numpy and matplotlib explicitly. In addition I'm using PyQt4. 
I thought the error might be related to those libraries. However I was able to compile a simple script which uses numpy succesfully: ``` import sys from PyQt4 import QtGui, QtCore import numpy as np class MainWindow(QtGui.QMainWindow): def __init__(self): QtGui.QMainWindow.__init__(self) self.pb = QtGui.QPushButton(str(np.pi), self) app = QtGui.QApplication(sys.argv) main = MainWindow() main.show() sys.exit(app.exec_()) ``` Successfully here means that the created executable file actually showed the desired output. However there is also a warnmain.txt file created which contains exactly the same 'warnings' as the one before. So I guess the fact that compiling my actual project does not give any success is not (or at least not only) related to those warnings. But what else could be the error then? The only output during compilation are 'INFO's and none of the is a negative statement. I did not specify an additional hook directory but the hooks where down using the default directory as far as I could read from the compile output, e.g. hook-matplotlib was executed. I could not see any hook for numpy neither could I for my small example script but this one worked. I used the following imports in my files (not all in the same but in different ones): ``` import numpy as np import matplotlib.pyplot as ppl from matplotlib.backends.backend_qt4agg import FigureCanvasQTAgg as FigureCanvas from matplotlib.backends.backend_qt4agg import NavigationToolbar2QTAgg as NavigationToolbar from PyQt4 import QtGui, QtCore import json import sys import numpy # added this one later import matplotlib # added this one later ``` Since PyInstaller does not give any errors\/warnings I could not figure out if the problem is related to the libraries or if there is something else to be considered.", "response":"Had a similar problem with no module named FileDialog. Discovered that with version 3.2, I could use pyinstaller --hidden-import FileDialog ... instead of modifying my main script. 
See Listing Hidden Imports documentation", "best_answers_score":0.7129, "library_name":"numpy", "question_url":"https:\/\/stackoverflow.com\/questions\/25733467\/no-module-named-when-using-pyinstaller", "best_answers_votes":40, "question_length":2358, "response_length":221 }, { "question":"Error: Microsoft Visual C++ 10.0 is required (Unable to find vcvarsall.bat) when running Python script [duplicate] This question already has answers here: Cannot find vcvarsall.bat when running a Python script (18 answers) Closed 9 years ago. Im trying to install numpy with PyCharm but i keep getting this error: error: Microsoft Visual C++ 10.0 is required (Unable to find vcvarsall.bat). Can someone please explain to me exactly what i have to do to fix this error(and as simple and detailed as possible)? im using python 3.4.2 (i know this has been answered before but i did not understand it).", "response":"I was able to fix this on Windows 7 64-bit running Python 3.4.3 by running the set command at a command prompt to determine the existing Visual Studio tools environment variable; in my case it was VS140COMNTOOLS for Visual Studio Community 2015. Then run the following (substituting the variable on the right-hand side if yours has a different name): ``` set VS100COMNTOOLS=%VS140COMNTOOLS% ``` This allowed me to install the PyCrypto module that was previously giving me the same error as the OP. For a more permanent solution, add this environment variable to your Windows environment via Control Panel (\"Edit the system environment variables\"), though you might need to use the actual path instead of the variable substitution.", "best_answers_score":0.7126, "library_name":"numpy", "question_url":"https:\/\/stackoverflow.com\/questions\/28251314\/error-microsoft-visual-c-10-0-is-required-unable-to-find-vcvarsall-bat-when", "best_answers_votes":57, "question_length":598, "response_length":730 }, { "question":"How to load a list of numpy arrays to pytorch dataset loader? 
I have a huge list of numpy arrays, where each array represents an image and I want to load it using torch.utils.data.Dataloader object. But the documentation of torch.utils.data.Dataloader mentions that it loads data directly from a folder. How do I modify it for my case? I am new to pytorch and any help would be greatly appreciated. My numpy array for a single image looks something like this. The image is an RGB image. ``` [[[ 70 82 94] [ 67 81 93] [ 66 82 94] ..., [182 182 188] [183 183 189] [188 186 192]] [[ 66 80 92] [ 62 78 91] [ 64 79 95] ..., [176 176 182] [178 178 184] [180 180 186]] [[ 62 82 93] [ 62 81 96] [ 65 80 99] ..., [169 172 177] [173 173 179] [172 172 178]] ..., ```", "response":"I think what DataLoader actually requires is an input that subclasses Dataset. You can either write your own dataset class that subclasses Dataset or use TensorDataset as I have done below: ``` import torch import numpy as np from torch.utils.data import TensorDataset, DataLoader my_x = [np.array([[1.0,2],[3,4]]),np.array([[5.,6],[7,8]])] # a list of numpy arrays my_y = [np.array([4.]), np.array([2.])] # another list of numpy arrays (targets) tensor_x = torch.Tensor(my_x) # transform to torch tensor tensor_y = torch.Tensor(my_y) my_dataset = TensorDataset(tensor_x,tensor_y) # create your dataset my_dataloader = DataLoader(my_dataset) # create your dataloader ``` Works for me.", "best_answers_score":0.7119, "library_name":"numpy", "question_url":"https:\/\/stackoverflow.com\/questions\/44429199\/how-to-load-a-list-of-numpy-arrays-to-pytorch-dataset-loader", "best_answers_votes":202, "question_length":753, "response_length":682 }, { "question":"Matplotlib scatter plot with unknown error I am attempting to create a scatter plot. I have a list of numbers from 0 - 17 as well as an array with 18 values. 
I can plot the data as a line plot but when I try to plot as a scatter, I get an error message I do not understand: ```none TypeError: ufunc 'sqrt' not supported for the input types, and the inputs could not be safely coerced to any supported types according to the casting rule ''safe'' ``` What does this error message mean and how can I get the data to plot as a scatter? ```py import numpy as np import matplotlib.pyplot as plt y = [7316.0, 7453.25, 7518.25, 7711.5, 7448.0, 7210.25, 7416.75, 6960.75, 7397.75, 6397.5, 5522.75, 5139.0, 5034.75, 4264.75, 5106.0, 3489.5, 4712.0, 4770.0] x = np.arange(0,18,1) plt.rcParams['legend.loc'] = 'best' plt.figure(1) plt.xlim(0, 20) plt.ylim(0, 10000) plt.scatter(x, y, 'r') plt.show() ```", "response":"Check the scatter documentation. Third argument is for size of points and should be scalar or array_like. I assume 'r' is for color so do the following: ``` plt.scatter(x, y, c='r') ```", "best_answers_score":0.7112, "library_name":"numpy", "question_url":"https:\/\/stackoverflow.com\/questions\/35733223\/matplotlib-scatter-plot-with-unknown-error", "best_answers_votes":97, "question_length":892, "response_length":185 }, { "question":"Generating a dense matrix from a sparse matrix in numpy python I have a Sqlite database that contains following type of schema: ``` termcount(doc_num, term , count) ``` This table contains terms with their respective counts in the document. like ``` (doc1 , term1 ,12) (doc1, term 22, 2) . . (docn,term1 , 10) ``` This matrix can be considered as sparse matrix as each documents contains very few terms that will have a non-zero value. How would I create a dense matrix from this sparse matrix using numpy as I have to calculate the similarity among documents using cosine similarity. 
This dense matrix will look like a table that has docid as the first column and all the terms will be listed as the first row, and the remaining cells will contain counts.", "response":"``` from scipy.sparse import csr_matrix A = csr_matrix([[1,0,2],[0,3,0]]) >>> A <2x3 sparse matrix of type '<class 'numpy.int64'>' with 3 stored elements in Compressed Sparse Row format> >>> A.todense() matrix([[1, 0, 2], [0, 3, 0]]) >>> A.toarray() array([[1, 0, 2], [0, 3, 0]]) ``` this is an example of how to convert a sparse matrix to a dense matrix, taken from scipy", "best_answers_score":0.7111, "library_name":"numpy", "question_url":"https:\/\/stackoverflow.com\/questions\/16505670\/generating-a-dense-matrix-from-a-sparse-matrix-in-numpy-python", "best_answers_votes":109, "question_length":752, "response_length":271 }, { "question":"Argmax of numpy array returning non-flat indices I'm trying to get the indices of the maximum element in a Numpy array. This can be done using numpy.argmax. My problem is that I would like to find the biggest element in the whole array and get the indices of that. numpy.argmax can be either applied along one axis, which is not what I want, or on the flattened array, which is kind of what I want. My problem is that using numpy.argmax with axis=None returns the flat index when I want the multi-dimensional index. I could use divmod to get a non-flat index but this feels ugly. Is there any better way of doing this?", "response":"You could use numpy.unravel_index() on the result of numpy.argmax(): ``` >>> a = numpy.random.random((10, 10)) >>> numpy.unravel_index(a.argmax(), a.shape) (6, 7) >>> a[6, 7] == a.max() True ```", "best_answers_score":0.7103, "library_name":"numpy", "question_url":"https:\/\/stackoverflow.com\/questions\/9482550\/argmax-of-numpy-array-returning-non-flat-indices", "best_answers_votes":246, "question_length":619, "response_length":194 }, { "question":"How to normalize a numpy array to a unit vector I would like to convert a NumPy array to a unit vector. 
More specifically, I am looking for an equivalent version of this normalisation function: ``` def normalize(v): norm = np.linalg.norm(v) if norm == 0: return v return v \/ norm ``` This function handles the situation where vector v has the norm value of 0. Are there any similar functions provided in sklearn or numpy?", "response":"If you're using scikit-learn you can use sklearn.preprocessing.normalize: ``` import numpy as np from sklearn.preprocessing import normalize x = np.random.rand(1000)*10 norm1 = x \/ np.linalg.norm(x) norm2 = normalize(x[:,np.newaxis], axis=0).ravel() print np.all(norm1 == norm2) # True ```", "best_answers_score":0.7087, "library_name":"numpy", "question_url":"https:\/\/stackoverflow.com\/questions\/21030391\/how-to-normalize-a-numpy-array-to-a-unit-vector", "best_answers_votes":250, "question_length":420, "response_length":289 }, { "question":"Convert Select Columns in Pandas Dataframe to Numpy Array I would like to convert everything but the first column of a pandas dataframe into a numpy array. For some reason using the columns= parameter of DataFrame.to_matrix() is not working. df: ``` viz a1_count a1_mean a1_std 0 n 3 2 0.816497 1 n 0 NaN NaN 2 n 2 51 50.000000 ``` I tried X=df.as_matrix(columns=[df[1:]]) but this yields an array of all NaNs", "response":"the easy way is the \"values\" property df.iloc[:,1:].values ``` a=df.iloc[:,1:] b=df.iloc[:,1:].values print(type(df)) print(type(a)) print(type(b)) ``` so, you can get the types ``` <class 'pandas.core.frame.DataFrame'> <class 'pandas.core.frame.DataFrame'> <class 'numpy.ndarray'> ```", "best_answers_score":0.7087, "library_name":"numpy", "question_url":"https:\/\/stackoverflow.com\/questions\/31789160\/convert-select-columns-in-pandas-dataframe-to-numpy-array", "best_answers_votes":126, "question_length":409, "response_length":183 }, { "question":"How to apply piecewise linear fit in Python? I am trying to fit piecewise linear fit as shown in fig.1 for a data set This figure was obtained by setting on the lines. 
I attempted to apply a piecewise linear fit using the code: ``` from scipy import optimize import matplotlib.pyplot as plt import numpy as np x = np.array([1, 2, 3, 4, 5, 6, 7, 8, 9, 10 ,11, 12, 13, 14, 15]) y = np.array([5, 7, 9, 11, 13, 15, 28.92, 42.81, 56.7, 70.59, 84.47, 98.36, 112.25, 126.14, 140.03]) def linear_fit(x, a, b): return a * x + b fit_a, fit_b = optimize.curve_fit(linear_fit, x[0:5], y[0:5])[0] y_fit = fit_a * x[0:7] + fit_b fit_a, fit_b = optimize.curve_fit(linear_fit, x[6:14], y[6:14])[0] y_fit = np.append(y_fit, fit_a * x[6:14] + fit_b) figure = plt.figure(figsize=(5.15, 5.15)) figure.clf() plot = plt.subplot(111) ax1 = plt.gca() plot.plot(x, y, linestyle = '', linewidth = 0.25, markeredgecolor='none', marker = 'o', label = r'\\textit{y_a}') plot.plot(x, y_fit, linestyle = ':', linewidth = 0.25, markeredgecolor='none', marker = '', label = r'\\textit{y_b}') plot.set_ylabel('Y', labelpad = 6) plot.set_xlabel('X', labelpad = 6) figure.savefig('test.pdf', box_inches='tight') plt.close() ``` But this gave me fitting of the form in fig. 2, I tried playing with the values but no change I can't get the fit of the upper line proper. The most important requirement for me is how can I get Python to get the gradient change point. I want the code to recognize and fit two linear fits in the appropriate range. 
How can this be done in Python?", "response":"You can use numpy.piecewise() to create the piecewise function and then use curve_fit(), Here is the code ``` from scipy import optimize import matplotlib.pyplot as plt import numpy as np %matplotlib inline x = np.array([1, 2, 3, 4, 5, 6, 7, 8, 9, 10 ,11, 12, 13, 14, 15], dtype=float) y = np.array([5, 7, 9, 11, 13, 15, 28.92, 42.81, 56.7, 70.59, 84.47, 98.36, 112.25, 126.14, 140.03]) def piecewise_linear(x, x0, y0, k1, k2): return np.piecewise(x, [x < x0], [lambda x:k1*x + y0-k1*x0, lambda x:k2*x + y0-k2*x0]) p , e = optimize.curve_fit(piecewise_linear, x, y) xd = np.linspace(0, 15, 100) plt.plot(x, y, \"o\") plt.plot(xd, piecewise_linear(xd, *p)) ``` the output: For an N parts fitting, please reference segments_fit.ipynb", "best_answers_score":0.7086, "library_name":"numpy", "question_url":"https:\/\/stackoverflow.com\/questions\/29382903\/how-to-apply-piecewise-linear-fit-in-python", "best_answers_votes":88, "question_length":1536, "response_length":729 }, { "question":"Size of data type using NumPy In NumPy, I can get the size (in bytes) of a particular data type by: ``` datatype(...).itemsize ``` or: ``` datatype(...).nbytes ``` For example: ``` np.float32(5).itemsize # 4 np.float32(5).nbytes # 4 ``` I have two questions. First, is there a way to get this information without creating an instance of the datatype? Second, what's the difference between itemsize and nbytes?", "response":"You need an instance of the dtype to get the itemsize, but you shouldn't need an instance of the ndarray. (As will become clear in a second, nbytes is a property of the array, not the dtype.) E.g. ``` print np.dtype(float).itemsize print np.dtype(np.float32).itemsize print np.dtype('|S10').itemsize ``` As far as the difference between itemsize and nbytes, nbytes is just x.itemsize * x.size. E.g. 
``` In [16]: print np.arange(100).itemsize 8 In [17]: print np.arange(100).nbytes 800 ```", "best_answers_score":0.708, "library_name":"numpy", "question_url":"https:\/\/stackoverflow.com\/questions\/16972501\/size-of-data-type-using-numpy", "best_answers_votes":92, "question_length":409, "response_length":488 }, { "question":"How to zip two 1d numpy array to 2d numpy array [duplicate] This question already has answers here: Converting two lists into a matrix (5 answers) Closed 8 years ago. I have two numpy 1d arrays, e.g: ``` a = np.array([1,2,3,4,5]) b = np.array([6,7,8,9,10]) ``` Then how can I get one 2d array [[1,6], [2,7], [3,8], [4,9], [5, 10]]?", "response":"If you have numpy arrays you can use dstack(): ``` import numpy as np a = np.array([1,2,3,4,5]) b = np.array([6,7,8,9,10]) c = np.dstack((a,b)) #or d = np.column_stack((a,b)) >>> c array([[[ 1, 6], [ 2, 7], [ 3, 8], [ 4, 9], [ 5, 10]]]) >>> d array([[ 1, 6], [ 2, 7], [ 3, 8], [ 4, 9], [ 5, 10]]) >>> c.shape (1, 5, 2) >>> d.shape (5, 2) ```", "best_answers_score":0.7079, "library_name":"numpy", "question_url":"https:\/\/stackoverflow.com\/questions\/44409084\/how-to-zip-two-1d-numpy-array-to-2d-numpy-array", "best_answers_votes":182, "question_length":331, "response_length":341 }, { "question":"What does numpy.gradient do? So I know what the gradient of a (mathematical) function is, so I feel like I should know what numpy.gradient does. But I don't. The documentation is not really helpful either: Return the gradient of an N-dimensional array. What is the gradient of an array? When is numpy.gradient useful?", "response":"Also in the documentation1: ``` >>> y = np.array([1, 2, 4, 7, 11, 16], dtype=np.float) >>> j = np.gradient(y) >>> j array([ 1. , 1.5, 2.5, 3.5, 4.5, 5. ]) ``` Gradient is defined as (change in y)\/(change in x). x, here, is the list index, so the difference between adjacent values is 1. At the boundaries, the first difference is calculated. 
This means that at each end of the array, the gradient given is simply the difference between the two end values (divided by 1). Away from the boundaries the gradient for a particular index is given by taking the difference between the values on either side and dividing by 2. So, the gradient of y, above, is calculated thus: ``` j[0] = (y[1]-y[0])\/1 = (2-1)\/1 = 1 j[1] = (y[2]-y[0])\/2 = (4-1)\/2 = 1.5 j[2] = (y[3]-y[1])\/2 = (7-2)\/2 = 2.5 j[3] = (y[4]-y[2])\/2 = (11-4)\/2 = 3.5 j[4] = (y[5]-y[3])\/2 = (16-7)\/2 = 4.5 j[5] = (y[5]-y[4])\/1 = (16-11)\/1 = 5 ``` You could find the minima of all the absolute values in the resulting array to find the turning points of a curve, for example. 1The array is actually called x in the example in the docs; I've changed it to y to avoid confusion.", "best_answers_score":0.7072, "library_name":"numpy", "question_url":"https:\/\/stackoverflow.com\/questions\/24633618\/what-does-numpy-gradient-do", "best_answers_votes":195, "question_length":317, "response_length":1128 }, { "question":"How can I map True\/False to 1\/0 in a Pandas DataFrame? I have a column in a python pandas DataFrame that has boolean True\/False values, but for further calculations I need a 1\/0 representation. Is there a quick pandas\/numpy way to do that?", "response":"A succinct way to convert a single column of boolean values to a column of integers 1 or 0: ```py df[\"somecolumn\"] = df[\"somecolumn\"].astype(int) ```", "best_answers_score":0.707, "library_name":"numpy", "question_url":"https:\/\/stackoverflow.com\/questions\/17383094\/how-can-i-map-true-false-to-1-0-in-a-pandas-dataframe", "best_answers_votes":577, "question_length":235, "response_length":149 }, { "question":"How to one-hot-encode from a pandas column containing a list? I would like to break down a pandas column consisting of a list of elements into as many columns as there are unique elements, i.e. 
one-hot-encode them (with value 1 representing a given element existing in a row and 0 in the case of absence). For example, taking dataframe df ``` Col1 Col2 Col3 C 33 [Apple, Orange, Banana] A 2.5 [Apple, Grape] B 42 [Banana] ``` I would like to convert this to: df ``` Col1 Col2 Apple Orange Banana Grape C 33 1 1 1 0 A 2.5 1 0 0 1 B 42 0 0 1 0 ``` How can I use pandas\/sklearn to achieve this?", "response":"We can also use sklearn.preprocessing.MultiLabelBinarizer: Often we want to use sparse DataFrame for the real world data in order to save a lot of RAM. Sparse solution (for Pandas v0.25.0+) ``` from sklearn.preprocessing import MultiLabelBinarizer mlb = MultiLabelBinarizer(sparse_output=True) df = df.join( pd.DataFrame.sparse.from_spmatrix( mlb.fit_transform(df.pop('Col3')), index=df.index, columns=mlb.classes_)) ``` result: ``` In [38]: df Out[38]: Col1 Col2 Apple Banana Grape Orange 0 C 33.0 1 1 0 1 1 A 2.5 1 0 1 0 2 B 42.0 0 1 0 0 In [39]: df.dtypes Out[39]: Col1 object Col2 float64 Apple Sparse[int32, 0] Banana Sparse[int32, 0] Grape Sparse[int32, 0] Orange Sparse[int32, 0] dtype: object In [40]: df.memory_usage() Out[40]: Index 128 Col1 24 Col2 24 Apple 16 # <--- NOTE! Banana 16 # <--- NOTE! Grape 8 # <--- NOTE! Orange 8 # <--- NOTE! 
dtype: int64 ``` Dense solution ``` mlb = MultiLabelBinarizer() df = df.join(pd.DataFrame(mlb.fit_transform(df.pop('Col3')), columns=mlb.classes_, index=df.index)) ``` Result: ``` In [77]: df Out[77]: Col1 Col2 Apple Banana Grape Orange 0 C 33.0 1 1 0 1 1 A 2.5 1 0 1 0 2 B 42.0 0 1 0 0 ```", "best_answers_score":0.7065, "library_name":"numpy", "question_url":"https:\/\/stackoverflow.com\/questions\/45312377\/how-to-one-hot-encode-from-a-pandas-column-containing-a-list", "best_answers_votes":103, "question_length":590, "response_length":1141 }, { "question":"Error \"Import Error: No module named numpy\" on Windows [duplicate] This question already has answers here: ImportError: No module named requests (39 answers) Closed last year. I have a very similar question to this question, but I am still one step behind. I have only one version of Python 3 installed on my Windows 7 (sorry) 64-bit system. I installed NumPy following this link - as suggested in the question. The installation went fine but when I execute ``` import numpy ``` I got the following error: Import error: No module named numpy", "response":"You can simply use ``` pip install numpy ``` Or for python3, use ``` pip3 install numpy ```", "best_answers_score":0.7054, "library_name":"numpy", "question_url":"https:\/\/stackoverflow.com\/questions\/7818811\/error-import-error-no-module-named-numpy-on-windows", "best_answers_votes":390, "question_length":541, "response_length":91 }, { "question":"Change values on matplotlib imshow() graph axis Say I have some input data: ```py data = np.random.normal(loc=100, scale=10, size=(500,1,32)) hist = np.ones((32, 20)) # initialise hist for z in range(32): hist[z], edges = np.histogram(data[:, 0, z], bins=np.arange(80, 122, 2)) ``` I can plot it using imshow(): ```py plt.imshow(hist, cmap='Reds') ``` getting: However, the x-axis values do not match the input data (i.e. mean of 100, range from 80 to 122). 
Therefore, I'd like to change the x-axis to show the values in edges. I have tried: ```py ax = plt.gca() ax.set_xlabel([80,122]) # range of values in edges ... # this shifts the plot so that nothing is visible ``` and ```py ax.set_xticklabels(edges) ... # this labels the axis but does not centre around the mean: ``` Any ideas on how I can change the axis values to reflect the input data I am using?", "response":"I would try to avoid changing the xticklabels if possible, otherwise it can get very confusing if you, for example, overplot your histogram with additional data. Defining the range of your grid is probably the best option, and with imshow it can be done by adding the extent keyword. This way the axes get adjusted automatically. If you want to change the labels, I would use set_xticks with perhaps some formatter. Altering the labels directly should be the last resort. ``` fig, ax = plt.subplots(figsize=(6,6)) ax.imshow(hist, cmap=plt.cm.Reds, interpolation='none', extent=[80,120,32,0]) ax.set_aspect(2) # you may also use ax.imshow(..., aspect=\"auto\") to restore the aspect ratio ```", "best_answers_score":0.7051, "library_name":"numpy", "question_url":"https:\/\/stackoverflow.com\/questions\/18696122\/change-values-on-matplotlib-imshow-graph-axis", "best_answers_votes":213, "question_length":859, "response_length":679 }, { "question":"shuffling\/permutating a DataFrame in pandas What's a simple and efficient way to shuffle a dataframe in pandas, by rows or by columns? I.e. how to write a function shuffle(df, n, axis=0) that takes a dataframe, a number of shuffles n, and an axis (axis=0 is rows, axis=1 is columns) and returns a copy of the dataframe that has been shuffled n times. Edit: key is to do this without destroying the row\/column labels of the dataframe. If you just shuffle df.index that loses all that information. I want the resulting df to be the same as the original except with the order of rows or order of columns different. 
Edit2: My question was unclear. When I say shuffle the rows, I mean shuffle each row independently. So if you have two columns a and b, I want each row shuffled on its own, so that you don't have the same associations between a and b as you do if you just re-order each row as a whole. Something like: ``` for 1...n: for each col in df: shuffle column return new_df ``` But hopefully more efficient than naive looping. This does not work for me: ``` def shuffle(df, n, axis=0): shuffled_df = df.copy() for k in range(n): shuffled_df.apply(np.random.shuffle(shuffled_df.values),axis=axis) return shuffled_df df = pandas.DataFrame({'A':range(10), 'B':range(10)}) shuffle(df, 5) ```", "response":"Use numpy's random.permuation function: ``` In [1]: df = pd.DataFrame({'A':range(10), 'B':range(10)}) In [2]: df Out[2]: A B 0 0 0 1 1 1 2 2 2 3 3 3 4 4 4 5 5 5 6 6 6 7 7 7 8 8 8 9 9 9 In [3]: df.reindex(np.random.permutation(df.index)) Out[3]: A B 0 0 0 5 5 5 6 6 6 3 3 3 8 8 8 7 7 7 9 9 9 1 1 1 2 2 2 4 4 4 ```", "best_answers_score":0.7044, "library_name":"numpy", "question_url":"https:\/\/stackoverflow.com\/questions\/15772009\/shuffling-permutating-a-dataframe-in-pandas", "best_answers_votes":230, "question_length":1291, "response_length":312 }, { "question":"Dump a NumPy array into a csv file How do I dump a 2D NumPy array into a csv file in a human-readable format?", "response":"numpy.savetxt saves an array to a text file. ``` import numpy a = numpy.asarray([ [1,2,3], [4,5,6], [7,8,9] ]) numpy.savetxt(\"foo.csv\", a, delimiter=\",\") ```", "best_answers_score":0.7039, "library_name":"numpy", "question_url":"https:\/\/stackoverflow.com\/questions\/6081008\/dump-a-numpy-array-into-a-csv-file", "best_answers_votes":1267, "question_length":109, "response_length":157 }, { "question":"Count all values in a matrix less than a value I have to count all the values in a matrix (2-d array) that are less than 200. 
The code I wrote down for this is: ``` za=0 p31 = numpy.asarray(o31) for i in range(o31.size[0]): for j in range(o32.size[1]): if p31[i,j]<200: za=za+1 print za ``` o31 is an image and I am converting it into a matrix and then finding the values. Is there a simpler way to do this?", "response":"The numpy.where function is your friend. Because it's implemented to take full advantage of the array datatype, for large images you should notice a speed improvement over the pure python solution you provide. Using numpy.where directly will yield a boolean mask indicating whether certain values match your conditions: ``` >>> data array([[1, 8], [3, 4]]) >>> numpy.where( data > 3 ) (array([0, 1]), array([1, 1])) ``` And the mask can be used to index the array directly to get the actual values: ``` >>> data[ numpy.where( data > 3 ) ] array([8, 4]) ``` Exactly where you take it from there will depend on what form you'd like the results in.", "best_answers_score":0.7038, "library_name":"numpy", "question_url":"https:\/\/stackoverflow.com\/questions\/12995937\/count-all-values-in-a-matrix-less-than-a-value", "best_answers_votes":110, "question_length":407, "response_length":645 }, { "question":"Slicing a numpy array along a dynamically specified axis I would like to dynamically slice a numpy array along a specific axis. 
Given this: ``` axis = 2 start = 5 end = 10 ``` I want to achieve the same result as this: ``` # m is some matrix m[:,:,5:10] ``` Using something like this: ``` slc = tuple(:,) * len(m.shape) slc[axis] = slice(start,end) m[slc] ``` But the : values can't be put in a tuple, so I can't figure out how to build the slice.", "response":"As it was not mentioned clearly enough (and i was looking for it too): an equivalent to: ``` a = my_array[:, :, :, 8] b = my_array[:, :, :, 2:7] ``` is: ``` a = my_array.take(indices=8, axis=3) b = my_array.take(indices=range(2, 7), axis=3) ```", "best_answers_score":0.7035, "library_name":"numpy", "question_url":"https:\/\/stackoverflow.com\/questions\/24398708\/slicing-a-numpy-array-along-a-dynamically-specified-axis", "best_answers_votes":76, "question_length":447, "response_length":244 }, { "question":"Saving arrays as columns with np.savetxt I am trying to do something that is probable very simple. I would like to save three arrays to a file as columns using 'np.savetxt' When I try this ``` x = [1,2,3,4] y = [5,6,7,8] z = [9,10,11,12] np.savetxt('myfile.txt', (x,y,z), fmt='%.18g', delimiter=' ', newline=os.linesep) ``` The arrays are saved like this ``` 1 2 3 4 5 6 7 8 9 10 11 12 ``` But what I wold like is this ``` 1 5 9 2 6 10 3 7 11 4 8 12 ```", "response":"Use numpy.c_[]: ``` np.savetxt('myfile.txt', np.c_[x,y,z]) ```", "best_answers_score":0.7023, "library_name":"numpy", "question_url":"https:\/\/stackoverflow.com\/questions\/15192847\/saving-arrays-as-columns-with-np-savetxt", "best_answers_votes":72, "question_length":453, "response_length":62 }, { "question":"Efficiently detect sign-changes in python I want to do exactly what this guy did: Python - count sign changes However I need to optimize it to run super fast. In brief I want to take a time series and tell every time it crosses crosses zero (changes sign). I want to record the time in between zero crossings. 
Since this is real data (32 bit float) I doubt I'll ever have a number which is exactly zero, so that is not important. I currently have a timing program in place so I'll time your results to see who wins. My solution gives (microseconds): ``` open data 8384 sign data 8123 zcd data 415466 ``` As you can see the zero-crossing detector is the slow part. Here's my code. ``` import numpy, datetime class timer(): def __init__(self): self.t0 = datetime.datetime.now() self.t = datetime.datetime.now() def __call__(self,text='unknown'): print text,'\\t',(datetime.datetime.now()-self.t).microseconds self.t=datetime.datetime.now() def zcd(data,t): sign_array=numpy.sign(data) t('sign data') out=[] current = sign_array[0] count=0 for i in sign_array[1:]: if i!=current: out.append(count) current=i count=0 else: count+=1 t('zcd data') return out def main(): t = timer() data = numpy.fromfile('deci.dat',dtype=numpy.float32) t('open data') zcd(data,t) if __name__=='__main__': main() ```", "response":"What about: ``` import numpy a = [1, 2, 1, 1, -3, -4, 7, 8, 9, 10, -2, 1, -3, 5, 6, 7, -10] zero_crossings = numpy.where(numpy.diff(numpy.sign(a)))[0] ``` Output: ``` > zero_crossings array([ 3, 5, 9, 10, 11, 12, 15]) ``` I.e., zero_crossings will contain the indices of elements before which a zero crossing occurs. If you want the elements after, just add 1 to that array.", "best_answers_score":0.7015, "library_name":"numpy", "question_url":"https:\/\/stackoverflow.com\/questions\/3843017\/efficiently-detect-sign-changes-in-python", "best_answers_votes":103, "question_length":1294, "response_length":374 }, { "question":"Concatenating two one-dimensional NumPy arrays How do I concatenate two one-dimensional arrays in NumPy? 
I tried numpy.concatenate: ``` import numpy as np a = np.array([1, 2, 3]) b = np.array([4, 5]) np.concatenate(a, b) ``` But I get an error: TypeError: only length-1 arrays can be converted to Python scalars", "response":"Use: ``` np.concatenate([a, b]) ``` The arrays you want to concatenate need to be passed in as a sequence, not as separate arguments. From the NumPy documentation: numpy.concatenate((a1, a2, ...), axis=0) Join a sequence of arrays together. It was trying to interpret your b as the axis parameter, which is why it complained it couldn't convert it into a scalar.", "best_answers_score":0.7003, "library_name":"numpy", "question_url":"https:\/\/stackoverflow.com\/questions\/9236926\/concatenating-two-one-dimensional-numpy-arrays", "best_answers_votes":560, "question_length":311, "response_length":362 }, { "question":"Combine 3 separate numpy arrays to an RGB image in Python So I have a set of data which I am able to convert to form separate numpy arrays of R, G, B bands. Now I need to combine them to form an RGB image. I tried 'Image' to do the job but it requires 'mode' to be attributed. I tried to do a trick. I would use Image.fromarray() to take the array to image but it attains 'F' mode by default when Image.merge requires 'L' mode images to merge. If I would declare the attribute of array in fromarray() to 'L' at first place, all the R G B images become distorted. But, if I save the images and then open them and then merge, it works fine. Image reads the image with 'L' mode. Now I have two issues. First, I dont think it is an elegant way of doing the work. So if anyone knows the better way of doing it, please tell Secondly, Image.SAVE is not working properly. 
Following are the errors I face: ``` In [7]: Image.SAVE(imagefile, 'JPEG') ---------------------------------------------------------------------------------- TypeError Traceback (most recent call last) \/media\/New Volume\/Documents\/My own works\/ISAC\/SAMPLES\/ in () TypeError: 'dict' object is not callable ``` Please suggest solutions. And please mind that the image is around 4000x4000 size array.", "response":"``` rgb = np.dstack((r,g,b)) # stacks 3 h x w arrays -> h x w x 3 ``` To also convert floats 0 .. 1 to uint8 s, ``` rgb_uint8 = (np.dstack((r,g,b)) * 255.999) .astype(np.uint8) # right, Janna, not 256 ```", "best_answers_score":0.7002, "library_name":"numpy", "question_url":"https:\/\/stackoverflow.com\/questions\/10443295\/combine-3-separate-numpy-arrays-to-an-rgb-image-in-python", "best_answers_votes":103, "question_length":1260, "response_length":204 }, { "question":"Are numpy arrays passed by reference? I came across the fact that numpy arrays are passed by reference at multiple places, but then when I execute the following code, why is there a difference between the behavior of foo and bar ``` import numpy as np def foo(arr): arr = arr - 3 def bar(arr): arr -= 3 a = np.array([3, 4, 5]) foo(a) print a # prints [3, 4, 5] bar(a) print a # prints [0, 1, 2] ``` I'm using python 2.7 and numpy version 1.6.1", "response":"In Python, all variable names are references to values. When Python evaluates an assignment, the right-hand side is evaluated before the left-hand side. arr - 3 creates a new array; it does not modify arr in-place. arr = arr - 3 makes the local variable arr reference this new array. It does not modify the value originally referenced by arr which was passed to foo. The variable name arr simply gets bound to the new array, arr - 3. Moreover, arr is local variable name in the scope of the foo function. Once the foo function completes, there is no more reference to arr and Python is free to garbage collect the value it references. 
As Reti43 points out, in order for arr's value to affect a, foo must return arr and a must be assigned to that value: ``` def foo(arr): arr = arr - 3 return arr # or simply combine both lines into `return arr - 3` a = foo(a) ``` In contrast, arr -= 3, which Python translates into a call to the __isub__ special method, does modify the array referenced by arr in-place.", "best_answers_score":0.6996, "library_name":"numpy", "question_url":"https:\/\/stackoverflow.com\/questions\/11585793\/are-numpy-arrays-passed-by-reference", "best_answers_votes":98, "question_length":443, "response_length":1004 }, { "question":"Numpy slice of arbitrary dimensions I would like to slice a numpy array to obtain the i-th index in the last dimension. For a 3D array, this would be: ``` slice = myarray[:, :, i] ``` But I am writing a function where I can take an array of arbitrary dimensions, so for a 4D array I'd need myarray[:, :, :, i], and so on. Is there a way I can obtain this slice for any array without explicitly having to write the array dimensions?", "response":"There is ... or Ellipsis, which does exactly this: ``` slice = myarray[..., i] ``` Ellipsis is the python object, if you should want to use it outside the square bracket notation.", "best_answers_score":0.6995, "library_name":"numpy", "question_url":"https:\/\/stackoverflow.com\/questions\/12116830\/numpy-slice-of-arbitrary-dimensions", "best_answers_votes":137, "question_length":431, "response_length":179 }, { "question":"'invalid value encountered in double_scalars' warning, possibly numpy As I run my code I get these warnings, always in groups of four, sporadically. I have tried to locate the source by placing debug messages before and after certain statements to pin-point its origin.
``` Warning: invalid value encountered in double_scalars Warning: invalid value encountered in double_scalars Warning: invalid value encountered in double_scalars Warning: invalid value encountered in double_scalars ``` Is this a Numpy warning, and what is a double scalar? From Numpy I use ``` min(), argmin(), mean() and random.randn() ``` I also use Matplotlib", "response":"It looks like a floating-point calculation error. Check the numpy.seterr function to get more information about where it happens.", "best_answers_score":0.699, "library_name":"numpy", "question_url":"https:\/\/stackoverflow.com\/questions\/3767409\/invalid-value-encountered-in-double-scalars-warning-possibly-numpy", "best_answers_votes":84, "question_length":636, "response_length":129 }, { "question":"How to round a numpy array? I have a numpy array, something like below: ``` data = np.array([ 1.60130719e-01, 9.93827160e-01, 3.63108206e-04]) ``` and I want to round each element to two decimal places. How can I do so?", "response":"Numpy provides two identical methods to do this. Either use ``` np.round(data, 2) ``` or ``` np.around(data, 2) ``` as they are equivalent. See the documentation for more information. Examples: ```py >>> import numpy as np >>> a = np.array([0.015, 0.235, 0.112]) >>> np.round(a, 2) array([0.02, 0.24, 0.11]) >>> np.around(a, 2) array([0.02, 0.24, 0.11]) >>> np.round(a, 1) array([0. , 0.2, 0.1]) ```", "best_answers_score":0.6984, "library_name":"numpy", "question_url":"https:\/\/stackoverflow.com\/questions\/46994426\/how-to-round-a-numpy-array", "best_answers_votes":142, "question_length":219, "response_length":399 }, { "question":"Fastest save and load options for a numpy array I have a script that generates two-dimensional numpy arrays with dtype=float and shape on the order of (1e3, 1e6). Right now I'm using np.save and np.load to perform IO operations with the arrays. However, these functions take several seconds for each array.
Are there faster methods for saving and loading the entire arrays (i.e., without making assumptions about their contents and reducing them)? I'm open to converting the arrays to another type before saving as long as the data are retained exactly.", "response":"I've compared a few methods using perfplot (one of my projects). Here are the results: Writing For large arrays, all methods are about equally fast. The file sizes are also equal which is to be expected since the input array are random doubles and hence hardly compressible. Code to reproduce the plot: ```py from sys import version_info import matplotlib.pyplot as plt import perfplot import pickle import netCDF4 import numpy as np import h5py import tables import zarr def write_numpy(data): np.save(\"out.npy\", data) def write_hdf5(data): with h5py.File(\"out.h5\", \"w\") as f: f.create_dataset(\"data\", data=data) def write_netcdf(data): with netCDF4.Dataset(\"out.nc\", \"w\") as nc: nc.createDimension(\"len_data\", len(data)) ncdata = nc.createVariable( \"mydata\", \"float64\", (\"len_data\",), ) ncdata[:] = data def write_pickle(data): with open(\"out.pkl\", \"wb\") as f: pickle.dump(data, f) def write_pytables(data): with tables.open_file(\"out-pytables.h5\", mode=\"w\") as f: gcolumns = f.create_group(f.root, \"columns\", \"data\") f.create_array(gcolumns, \"data\", data, \"data\") def write_zarr_zarr(data): zarr.save_array(\"out.zarr\", data) def write_zarr_zip(data): zarr.save_array(\"out.zip\", data) def write_zarr_zarr_uncompressed(data): zarr.save_array(\"out-uncompressed.zarr\", data, compressor=None) def write_zarr_zip_uncompressed(data): zarr.save_array(\"out-uncompressed.zip\", data) def setup(n): data = np.random.rand(n) n[...] 
= data.nbytes return data b = perfplot.bench( setup=setup, kernels=[ write_numpy, write_hdf5, write_netcdf, write_pickle, write_pytables, write_zarr_zarr, write_zarr_zip, write_zarr_zarr_uncompressed, write_zarr_zip_uncompressed, ], title=\"write comparison\", n_range=[2**k for k in range(28)], xlabel=\"data.nbytes\", equality_check=None, ) plt.text( 0.0, -0.3, \", \".join( [ f\"Python {version_info.major}.{version_info.minor}.{version_info.micro}\", f\"h5py {h5py.__version__}\", f\"netCDF4 {netCDF4.__version__}\", f\"NumPy {np.__version__}\", f\"PyTables {tables.__version__}\", f\"Zarr {zarr.__version__}\", ] ), transform=plt.gca().transAxes, fontsize=\"x-small\", verticalalignment=\"top\", ) b.save(\"out-write.png\") b.show() ``` Reading pickles, pytables and hdf5 are roughly equally fast; pickles and zarr are slower for large arrays. Code to reproduce the plot: ```py import perfplot import pickle import numpy import h5py import tables import zarr def setup(n): data = numpy.random.rand(n) # write all files # numpy.save(\"out.npy\", data) # f = h5py.File(\"out.h5\", \"w\") f.create_dataset(\"data\", data=data) f.close() # with open(\"test.pkl\", \"wb\") as f: pickle.dump(data, f) # f = tables.open_file(\"pytables.h5\", mode=\"w\") gcolumns = f.create_group(f.root, \"columns\", \"data\") f.create_array(gcolumns, \"data\", data, \"data\") f.close() # zarr.save(\"out.zip\", data) def npy_read(data): return numpy.load(\"out.npy\") def hdf5_read(data): f = h5py.File(\"out.h5\", \"r\") out = f[\"data\"][()] f.close() return out def pickle_read(data): with open(\"test.pkl\", \"rb\") as f: out = pickle.load(f) return out def pytables_read(data): f = tables.open_file(\"pytables.h5\", mode=\"r\") out = f.root.columns.data[()] f.close() return out def zarr_read(data): return zarr.load(\"out.zip\") b = perfplot.bench( setup=setup, kernels=[ npy_read, hdf5_read, pickle_read, pytables_read, zarr_read, ], n_range=[2 ** k for k in range(27)], xlabel=\"len(data)\", ) 
b.save(\"out2.png\") b.show() ```", "best_answers_score":0.6983, "library_name":"numpy", "question_url":"https:\/\/stackoverflow.com\/questions\/30329726\/fastest-save-and-load-options-for-a-numpy-array", "best_answers_votes":49, "question_length":553, "response_length":3369 }, { "question":"how to convert 2d list to 2d numpy array? I have a 2D list something like ``` a = [[1, 2, 3], [4, 5, 6], [7, 8, 9]] ``` and I want to convert it to a 2d numpy array. Can we do it without allocating memory like ``` numpy.zeros((3,3)) ``` and then storing values to it?", "response":"Just pass the list to np.array: ``` a = np.array(a) ``` You can also take this opportunity to set the dtype if the default is not what you desire. ``` a = np.array(a, dtype=...) ```", "best_answers_score":0.6979, "library_name":"numpy", "question_url":"https:\/\/stackoverflow.com\/questions\/7717380\/how-to-convert-2d-list-to-2d-numpy-array", "best_answers_votes":119, "question_length":267, "response_length":181 }, { "question":"Is there a NumPy function to return the first index of something in an array? I know there is a method for a Python list to return the first index of something: ``` >>> xs = [1, 2, 3] >>> xs.index(2) 1 ``` Is there something like that for NumPy arrays?", "response":"Yes, given an array, array, and a value, item to search for, you can use np.where as: ``` itemindex = numpy.where(array == item) ``` The result is a tuple with first all the row indices, then all the column indices. 
For example, if an array is two dimensions and it contained your item at two locations then ``` array[itemindex[0][0]][itemindex[1][0]] ``` would be equal to your item and so would be: ``` array[itemindex[0][1]][itemindex[1][1]] ```", "best_answers_score":0.6975, "library_name":"numpy", "question_url":"https:\/\/stackoverflow.com\/questions\/432112\/is-there-a-numpy-function-to-return-the-first-index-of-something-in-an-array", "best_answers_votes":727, "question_length":252, "response_length":448 }, { "question":"How to change a single value in a NumPy array? I want to change a single element of an array. For example, I have: ``` A = np.array([1,2,3,4], [5,6,7,8], [9,10,11,12], [13,14,15,16]) ``` I want to relace A[2][1] = 10 with A[2][1] = 150. How can I do it?", "response":"Is this what you are after? Just index the element and assign a new value. ``` A[2,1]=150 A Out[345]: array([[ 1, 2, 3, 4], [ 5, 6, 7, 8], [ 9, 150, 11, 12], [13, 14, 15, 16]]) ```", "best_answers_score":0.697, "library_name":"numpy", "question_url":"https:\/\/stackoverflow.com\/questions\/44209368\/how-to-change-a-single-value-in-a-numpy-array", "best_answers_votes":83, "question_length":253, "response_length":180 }, { "question":"TypeError: only integer scalar arrays can be converted to a scalar index with 1D numpy indices array I want to write a function that randomly picks elements from a training set, based on the bin probabilities provided. I divide the set indices to 11 bins, then create custom probabilities for them. 
``` bin_probs = [0.5, 0.3, 0.15, 0.04, 0.0025, 0.0025, 0.001, 0.001, 0.001, 0.001, 0.001] X_train = list(range(2000000)) train_probs = bin_probs * int(len(X_train) \/ len(bin_probs)) # extend probabilities across bin elements train_probs.extend([0.001]*(len(X_train) - len(train_probs))) # a small fix to match number of elements train_probs = train_probs\/np.sum(train_probs) # normalize indices = np.random.choice(range(len(X_train)), replace=False, size=50000, p=train_probs) out_images = X_train[indices.astype(int)] # this is where I get the error ``` I get the following error: ``` TypeError: only integer scalar arrays can be converted to a scalar index with 1D numpy indices array ``` I find this weird, since I already checked the array of indices that I have created. It is 1-D, it is integer, and it is scalar. What am I missing? Note : I tried to pass indices with astype(int). Same error.", "response":"Perhaps the error message is somewhat misleading, but the gist is that X_train is a list, not a numpy array. You cannot use array indexing on it. Make it an array first: ``` out_images = np.array(X_train)[indices.astype(int)] ```", "best_answers_score":0.6957, "library_name":"numpy", "question_url":"https:\/\/stackoverflow.com\/questions\/50997928\/typeerror-only-integer-scalar-arrays-can-be-converted-to-a-scalar-index-with-1d", "best_answers_votes":301, "question_length":1198, "response_length":229 }, { "question":"Good ways to \"expand\" a numpy ndarray? Are there good ways to \"expand\" a numpy ndarray? Say I have an ndarray like this: ``` [[1 2] [3 4]] ``` And I want each row to contains more elements by filling zeros: ``` [[1 2 0 0 0] [3 4 0 0 0]] ``` I know there must be some brute-force ways to do so (say construct a bigger array with zeros then copy elements from old smaller arrays), just wondering are there pythonic ways to do so. 
Tried numpy.reshape but didn't work: ``` import numpy as np a = np.array([[1, 2], [3, 4]]) np.reshape(a, (2, 5)) ``` Numpy complains that: ValueError: total size of new array must be unchanged", "response":"You can use numpy.pad, as follows: ``` >>> import numpy as np >>> a=[[1,2],[3,4]] >>> np.pad(a, ((0,0),(0,3)), mode='constant', constant_values=0) array([[1, 2, 0, 0, 0], [3, 4, 0, 0, 0]]) ``` Here np.pad says, \"Take the array a and add 0 rows above it, 0 rows below it, 0 columns to the left of it, and 3 columns to the right of it. Fill these columns with a constant specified by constant_values\".", "best_answers_score":0.695, "library_name":"numpy", "question_url":"https:\/\/stackoverflow.com\/questions\/12668027\/good-ways-to-expand-a-numpy-ndarray", "best_answers_votes":84, "question_length":620, "response_length":399 }, { "question":"In-place type conversion of a NumPy array Given a NumPy array of int32, how do I convert it to float32 in place? So basically, I would like to do ``` a = a.astype(numpy.float32) ``` without copying the array. It is big. The reason for doing this is that I have two algorithms for the computation of a. One of them returns an array of int32, the other returns an array of float32 (and this is inherent to the two different algorithms). All further computations assume that a is an array of float32. Currently I do the conversion in a C function called via ctypes. Is there a way to do this in Python?", "response":"Update: This function only avoids copy if it can, hence this is not the correct answer for this question. unutbu's answer is the right one. ``` a = a.astype(numpy.float32, copy=False) ``` numpy astype has a copy flag. 
Why shouldn't we use it ?", "best_answers_score":0.6945, "library_name":"numpy", "question_url":"https:\/\/stackoverflow.com\/questions\/4389517\/in-place-type-conversion-of-a-numpy-array", "best_answers_votes":169, "question_length":599, "response_length":243 }, { "question":"How to check if all values in the columns of a numpy matrix are the same? I want to check if all values in the columns of a numpy array\/matrix are the same. I tried to use reduce of the ufunc equal, but it doesn't seem to work in all cases: ``` In [55]: a = np.array([[1,1,0],[1,-1,0],[1,0,0],[1,1,0]]) In [56]: a Out[56]: array([[ 1, 1, 0], [ 1, -1, 0], [ 1, 0, 0], [ 1, 1, 0]]) In [57]: np.equal.reduce(a) Out[57]: array([ True, False, True], dtype=bool) In [58]: a = np.array([[1,1,0],[1,0,0],[1,0,0],[1,1,0]]) In [59]: a Out[59]: array([[1, 1, 0], [1, 0, 0], [1, 0, 0], [1, 1, 0]]) In [60]: np.equal.reduce(a) Out[60]: array([ True, True, True], dtype=bool) ``` Why does the middle column in the second case also evaluate to True, while it should be False? 
Thanks for any help!", "response":"``` In [45]: a Out[45]: array([[1, 1, 0], [1, 0, 0], [1, 0, 0], [1, 1, 0]]) ``` Compare each value to the corresponding value in the first row: ``` In [46]: a == a[0,:] Out[46]: array([[ True, True, True], [ True, False, True], [ True, False, True], [ True, True, True]], dtype=bool) ``` A column shares a common value if all the values in that column are True: ``` In [47]: np.all(a == a[0,:], axis = 0) Out[47]: array([ True, False, True], dtype=bool) ``` The problem with np.equal.reduce can be seen by micro-analyzing what happens when it is applied to [1, 0, 0, 1]: ``` In [49]: np.equal.reduce([1, 0, 0, 1]) Out[50]: True ``` The first two items, 1 and 0 are tested for equality and the result is False: ``` In [51]: np.equal.reduce([False, 0, 1]) Out[51]: True ``` Now False and 0 are tested for equality and the result is True: ``` In [52]: np.equal.reduce([True, 1]) Out[52]: True ``` But True and 1 are equal, so the total result is True, which is not the desired outcome. The problem is that reduce tries to accumulate the result \"locally\", while we want a \"global\" test like np.all.", "best_answers_score":0.6945, "library_name":"numpy", "question_url":"https:\/\/stackoverflow.com\/questions\/14859458\/how-to-check-if-all-values-in-the-columns-of-a-numpy-matrix-are-the-same", "best_answers_votes":80, "question_length":781, "response_length":1094 }, { "question":"NumPy first and last element from array I am trying to dynamically get the first and last element from an array. So, let us suppose the array has 6 elements. ``` test = [1,23,4,6,7,8] ``` If I am trying to get the first and last = 1,8, 23,7 and 4,6. Is there a way to get elements in this order? I looked at a couple of questions Link Link2. I took help of these links and I came up with this prototype.. 
``` #!\/usr\/bin\/env python import numpy test = [1,23,4,6,7,8] test1 = numpy.array([1,23,4,6,7,8]) len_test = len(test) first_list = [0,1,2] len_first = len(first_list) second_list = [-1,-2,-3] len_second = len(second_list) for a in range(len_first): print numpy.array(test)[[first_list[a] , second_list[a]]] print test1[[first_list[a], second_list[a]]] ``` But this prototype won't scale for if you have more than 6 elements. So, I was wondering if there is way to dynamically get the pair of elements. Thanks!", "response":"I ended here, because I googled for \"python first and last element of array\", and found everything else but this. So here's the answer to the title question: ``` a = [1,2,3] a[0] # first element (returns 1) a[-1] # last element (returns 3) ```", "best_answers_score":0.6943, "library_name":"numpy", "question_url":"https:\/\/stackoverflow.com\/questions\/14609720\/numpy-first-and-last-element-from-array", "best_answers_votes":181, "question_length":914, "response_length":243 }, { "question":"Compute a confidence interval from sample data I have sample data which I would like to compute a confidence interval for, assuming a normal distribution. I have found and installed the numpy and scipy packages and have gotten numpy to return a mean and standard deviation (numpy.mean(data) with data being a list). Any advice on getting a sample confidence interval would be much appreciated.", "response":"Here a shortened version of shasan's code, calculating the 95% confidence interval of the mean of array a: ``` import numpy as np, scipy.stats as st st.t.interval(0.95, len(a)-1, loc=np.mean(a), scale=st.sem(a)) ``` But using StatsModels' tconfint_mean is arguably even nicer: ``` import statsmodels.stats.api as sms sms.DescrStatsW(a).tconfint_mean() ``` The underlying assumptions for both are that the sample (array a) was drawn independently from a normal distribution with unknown standard deviation (see MathWorld or Wikipedia). 
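As a minimal hedged sketch of the scipy call on toy data (the sample values are made up, and scipy must be installed):

```python
import numpy as np
import scipy.stats as st

a = [2, 4, 6, 8]  # toy sample, for illustration only
lo, hi = st.t.interval(0.95, len(a) - 1, loc=np.mean(a), scale=st.sem(a))
# the interval is centred on the sample mean, np.mean(a) == 5.0
print(lo, hi)
```

Note how wide the interval is for n = 4; the t-distribution's heavy tails account for the extra uncertainty of a small sample.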
For large sample size n, the sample mean is normally distributed, and one can calculate its confidence interval using st.norm.interval() (as suggested in Jaime's comment). But the above solutions are correct also for small n, where st.norm.interval() gives confidence intervals that are too narrow (i.e., \"fake confidence\"). See my answer to a similar question for more details (and one of Russ's comments here). Here an example where the correct options give (essentially) identical confidence intervals: ``` In [9]: a = range(10,14) In [10]: mean_confidence_interval(a) Out[10]: (11.5, 9.4457397432391215, 13.554260256760879) In [11]: st.t.interval(0.95, len(a)-1, loc=np.mean(a), scale=st.sem(a)) Out[11]: (9.4457397432391215, 13.554260256760879) In [12]: sms.DescrStatsW(a).tconfint_mean() Out[12]: (9.4457397432391197, 13.55426025676088) ``` And finally, the incorrect result using st.norm.interval(): ``` In [13]: st.norm.interval(0.95, loc=np.mean(a), scale=st.sem(a)) Out[13]: (10.23484868811834, 12.76515131188166) ```", "best_answers_score":0.6941, "library_name":"numpy", "question_url":"https:\/\/stackoverflow.com\/questions\/15033511\/compute-a-confidence-interval-from-sample-data", "best_answers_votes":210, "question_length":393, "response_length":1562 }, { "question":"Numpy extract submatrix I'm pretty new in numpy and I am having a hard time understanding how to extract from a np.array a sub matrix with defined columns and rows: ``` Y = np.arange(16).reshape(4,4) ``` If I want to extract columns\/rows 0 and 3, I should have: ``` [[0 3] [12 15]] ``` I tried all the reshape functions...but cannot figure out how to do this. 
Any ideas?", "response":"Give np.ix_ a try: ``` Y[np.ix_([0,3],[0,3])] ``` This returns your desired result: ``` In [25]: Y = np.arange(16).reshape(4,4) In [26]: Y[np.ix_([0,3],[0,3])] Out[26]: array([[ 0, 3], [12, 15]]) ```", "best_answers_score":0.6941, "library_name":"numpy", "question_url":"https:\/\/stackoverflow.com\/questions\/19161512\/numpy-extract-submatrix", "best_answers_votes":117, "question_length":370, "response_length":199 }, { "question":"Why can't I suppress numpy warnings I really want to avoid these annoying numpy warnings since I have to deal with a lot of NaNs. I know this is usually done with seterr, but for some reason here it does not work: ``` import numpy as np data = np.random.random(100000).reshape(10, 100, 100) * np.nan np.seterr(all=\"ignore\") np.nanmedian(data, axis=[1, 2]) ``` It gives me a runtime warning even though I set numpy to ignore all errors...any help? Edit (this is the warning that is recieved): \/opt\/local\/Library\/Frameworks\/Python.framework\/Versions\/3.4\/lib\/python3.4\/site-p\u200c\u200backages\/numpy\/lib\/nanfunctions.py:612: RuntimeWarning: All-NaN slice encountered warnings.warn(\"All-NaN slice encountered\", RuntimeWarning)", "response":"Warnings can often be useful and in most cases I wouldn't advise this, but you can always make use of the Warnings module to ignore all warnings with filterwarnings: ``` warnings.filterwarnings('ignore') ``` Should you want to suppress uniquely your particular error, you could specify it with: ``` with warnings.catch_warnings(): warnings.filterwarnings('ignore', r'All-NaN (slice|axis) encountered') ```", "best_answers_score":0.6939, "library_name":"numpy", "question_url":"https:\/\/stackoverflow.com\/questions\/29347987\/why-cant-i-suppress-numpy-warnings", "best_answers_votes":100, "question_length":713, "response_length":405 }, { "question":"Multiplying across in a numpy array I'm trying to multiply each of the terms in a 2D array by the corresponding terms in a 1D array. 
This is very easy if I want to multiply every column by the 1D array, as shown in the numpy.multiply function. But I want to do the opposite, multiply each term in the row. In other words I want to multiply: ``` [1,2,3] [0] [4,5,6] * [1] [7,8,9] [2] ``` and get ``` [0,0,0] [4,5,6] [14,16,18] ``` but instead I get ``` [0,2,6] [0,5,12] [0,8,18] ``` Does anyone know if there's an elegant way to do that with numpy? Thanks a lot, Alex", "response":"Normal multiplication like you showed: ``` >>> import numpy as np >>> m = np.array([[1,2,3],[4,5,6],[7,8,9]]) >>> c = np.array([0,1,2]) >>> m * c array([[ 0, 2, 6], [ 0, 5, 12], [ 0, 8, 18]]) ``` If you add an axis, it will multiply the way you want: ``` >>> m * c[:, np.newaxis] array([[ 0, 0, 0], [ 4, 5, 6], [14, 16, 18]]) ``` You could also transpose twice: ``` >>> (m.T * c).T array([[ 0, 0, 0], [ 4, 5, 6], [14, 16, 18]]) ```", "best_answers_score":0.6928, "library_name":"numpy", "question_url":"https:\/\/stackoverflow.com\/questions\/18522216\/multiplying-across-in-a-numpy-array", "best_answers_votes":156, "question_length":566, "response_length":431 }, { "question":"\"Cloning\" row or column vectors Sometimes it is useful to \"clone\" a row or column vector to a matrix. 
By cloning I mean converting a row vector such as ``` [1, 2, 3] ``` Into a matrix ``` [[1, 2, 3], [1, 2, 3], [1, 2, 3]] ``` or a column vector such as ``` [[1], [2], [3]] ``` into ``` [[1, 1, 1] [2, 2, 2] [3, 3, 3]] ``` In MATLAB or octave this is done pretty easily: ``` x = [1, 2, 3] a = ones(3, 1) * x a = 1 2 3 1 2 3 1 2 3 b = (x') * ones(1, 3) b = 1 1 1 2 2 2 3 3 3 ``` I want to repeat this in numpy, but unsuccessfully ``` In [14]: x = array([1, 2, 3]) In [14]: ones((3, 1)) * x Out[14]: array([[ 1., 2., 3.], [ 1., 2., 3.], [ 1., 2., 3.]]) # so far so good In [16]: x.transpose() * ones((1, 3)) Out[16]: array([[ 1., 2., 3.]]) # DAMN # I end up with In [17]: (ones((3, 1)) * x).transpose() Out[17]: array([[ 1., 1., 1.], [ 2., 2., 2.], [ 3., 3., 3.]]) ``` Why wasn't the first method (In [16]) working? Is there a way to achieve this task in python in a more elegant way?", "response":"Use numpy.tile: ``` >>> tile(array([1,2,3]), (3, 1)) array([[1, 2, 3], [1, 2, 3], [1, 2, 3]]) ``` or for repeating columns: ``` >>> tile(array([[1,2,3]]).transpose(), (1, 3)) array([[1, 1, 1], [2, 2, 2], [3, 3, 3]]) ```", "best_answers_score":0.692, "library_name":"numpy", "question_url":"https:\/\/stackoverflow.com\/questions\/1550130\/cloning-row-or-column-vectors", "best_answers_votes":434, "question_length":981, "response_length":219 }, { "question":"NumPy: function for simultaneous max() and min() numpy.amax() will find the max value in an array, and numpy.amin() does the same for the min value. If I want to find both max and min, I have to call both functions, which requires passing over the (very big) array twice, which seems slow. Is there a function in the numpy API that finds both max and min with only a single pass through the data?", "response":"Is there a function in the numpy API that finds both max and min with only a single pass through the data? No. At the time of this writing, there is no such function. 
(And yes, if there were such a function, its performance would be significantly better than calling numpy.amin() and numpy.amax() successively on a large array.)", "best_answers_score":0.6914, "library_name":"numpy", "question_url":"https:\/\/stackoverflow.com\/questions\/12200580\/numpy-function-for-simultaneous-max-and-min", "best_answers_votes":77, "question_length":396, "response_length":328 }, { "question":"Saving a Numpy array as an image I have a matrix in the type of a Numpy array. How would I write it to disk it as an image? Any format works (png, jpeg, bmp...). One important constraint is that PIL is not present.", "response":"Using PIL, save a NumPy array arr by doing: ``` from PIL import Image im = Image.fromarray(arr) im.save(\"your_file.jpeg\") ``` See the docs for available data formats, including JPEG, PNG, and so on.", "best_answers_score":0.691, "library_name":"numpy", "question_url":"https:\/\/stackoverflow.com\/questions\/902761\/saving-a-numpy-array-as-an-image", "best_answers_votes":472, "question_length":214, "response_length":198 }, { "question":"How can I check whether a numpy array is empty or not? How can I check whether a numpy array is empty or not? I used the following code, but this fails if the array contains a zero. ``` if not self.Definition.all(): ``` Is this the solution? ``` if self.Definition == array([]): ```", "response":"You can always take a look at the .size attribute. 
It is defined as an integer, and is zero (0) when there are no elements in the array: ``` import numpy as np a = np.array([]) if a.size == 0: # Do something when `a` is empty ```", "best_answers_score":0.6902, "library_name":"numpy", "question_url":"https:\/\/stackoverflow.com\/questions\/11295609\/how-can-i-check-whether-a-numpy-array-is-empty-or-not", "best_answers_votes":484, "question_length":282, "response_length":229 }, { "question":"Pandas Timedelta in Days I have a dataframe in pandas called 'munged_data' with two columns 'entry_date' and 'dob' which i have converted to Timestamps using pd.to_timestamp.I am trying to figure out how to calculate ages of people based on the time difference between 'entry_date' and 'dob' and to do this i need to get the difference in days between the two columns ( so that i can then do somehting like round(days\/365.25). I do not seem to be able to find a way to do this using a vectorized operation. When I do munged_data.entry_date-munged_data.dob i get the following : ``` internal_quote_id 2 15685977 days, 23:54:30.457856 3 11651985 days, 23:49:15.359744 4 9491988 days, 23:39:55.621376 7 11907004 days, 0:10:30.196224 9 15282164 days, 23:30:30.196224 15 15282227 days, 23:50:40.261632 ``` However i do not seem to be able to extract the days as an integer so that i can continue with my calculation. 
Any help appreciated.", "response":"Using the Pandas type Timedelta available since v0.15.0 you can also do: ``` In[1]: import pandas as pd In[2]: df = pd.DataFrame([ pd.Timestamp('20150111'), pd.Timestamp('20150301') ], columns=['date']) In[3]: df['today'] = pd.Timestamp('20150315') In[4]: df Out[4]: date today 0 2015-01-11 2015-03-15 1 2015-03-01 2015-03-15 In[5]: (df['today'] - df['date']).dt.days Out[5]: 0 63 1 14 dtype: int64 ```", "best_answers_score":0.6897, "library_name":"numpy", "question_url":"https:\/\/stackoverflow.com\/questions\/16103238\/pandas-timedelta-in-days", "best_answers_votes":131, "question_length":933, "response_length":402 }, { "question":"How to convert singleton array to a scalar value in Python? Suppose I have a 1x1x1x1x... array and wish to convert it to a scalar? How do I do it? squeeze does not help. ``` import numpy as np matrix = np.array([[1]]) s = np.squeeze(matrix) print type(s) print s matrix = [[1]] print type(s) print s s = 1 print type(s) print s ```", "response":"You can use the item() function: ``` import numpy as np matrix = np.array([[[[7]]]]) print(matrix.item()) ``` Output ``` 7 ```", "best_answers_score":0.6887, "library_name":"numpy", "question_url":"https:\/\/stackoverflow.com\/questions\/35157742\/how-to-convert-singleton-array-to-a-scalar-value-in-python", "best_answers_votes":81, "question_length":327, "response_length":126 }, { "question":"Pandas distinction between str and object types Numpy seems to make a distinction between str and object types. For instance I can do: ``` >>> import pandas as pd >>> import numpy as np >>> np.dtype(str) dtype('S') >>> np.dtype(object) dtype('O') ``` Where dtype('S') and dtype('O') correspond to str and object, respectively. However, pandas seems to lack that distinction and coerces str to object.
``` >>> df = pd.DataFrame({'a': np.arange(5)}) >>> df.a.dtype dtype('int64') >>> df.a.astype(str).dtype dtype('O') >>> df.a.astype(object).dtype dtype('O') ``` Forcing the type to dtype('S') does not help either. ``` >>> df.a.astype(np.dtype(str)).dtype dtype('O') >>> df.a.astype(np.dtype('S')).dtype dtype('O') ``` Is there any explanation for this behavior?", "response":"Numpy's string dtypes aren't python strings. Therefore, pandas deliberately uses native python strings, which require an object dtype. First off, let me demonstrate a bit of what I mean by numpy's strings being different: ``` In [1]: import numpy as np In [2]: x = np.array(['Testing', 'a', 'string'], dtype='|S7') In [3]: y = np.array(['Testing', 'a', 'string'], dtype=object) ``` Now, 'x' is a numpy string dtype (fixed-width, c-like string) and y is an array of native python strings. If we try to go beyond 7 characters, we'll see an immediate difference. The string dtype versions will be truncated: ``` In [4]: x[1] = 'a really really really long' In [5]: x Out[5]: array(['Testing', 'a reall', 'string'], dtype='|S7') ``` While the object dtype versions can be arbitrary length: ``` In [6]: y[1] = 'a really really really long' In [7]: y Out[7]: array(['Testing', 'a really really really long', 'string'], dtype=object) ``` Next, the |S dtype strings can't hold unicode properly, though there is a unicode fixed-length string dtype, as well. I'll skip an example, for the moment. Finally, numpy's strings are actually mutable, while Python strings are not. For example: ``` In [8]: z = x.view(np.uint8) In [9]: z += 1 In [10]: x Out[10]: array(['Uftujoh', 'b!sfbmm', 'tusjoh\\x01'], dtype='|S7') ``` For all of these reasons, pandas chose not to ever allow C-like, fixed-length strings as a datatype. As you noticed, attempting to coerce a python string into a fixed-width numpy string won't work in pandas.
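As a quick illustration (a minimal sketch; the object dtype shown assumes the classic object-backed string storage, before the dedicated pandas string dtype became the default): ```python
import pandas as pd

# astype(str) yields native Python str objects; on classic
# (object-backed) pandas the column dtype then reports as object ('O').
s = pd.Series([1, 2, 3]).astype(str)
print(s.dtype)
print(type(s.iloc[0]))
print(list(s))
```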
Instead, it always uses native python strings, which behave in a more intuitive way for most users.", "best_answers_score":0.6879, "library_name":"numpy", "question_url":"https:\/\/stackoverflow.com\/questions\/34881079\/pandas-distinction-between-str-and-object-types", "best_answers_votes":59, "question_length":758, "response_length":1612 }, { "question":"Most efficient property to hash for numpy array I need to be able to store a numpy array in a dict for caching purposes. Hash speed is important. The array represents indices, so while the actual identity of the object is not important, the value is. Mutability is not a concern, as I'm only interested in the current value. What should I hash in order to store it in a dict? My current approach is to use str(arr.data), which is faster than md5 in my testing. I've incorporated some examples from the answers to get an idea of relative times: ``` In [121]: %timeit hash(str(y)) 10000 loops, best of 3: 68.7 us per loop In [122]: %timeit hash(y.tostring()) 1000000 loops, best of 3: 383 ns per loop In [123]: %timeit hash(str(y.data)) 1000000 loops, best of 3: 543 ns per loop In [124]: %timeit y.flags.writeable = False ; hash(y.data) 1000000 loops, best of 3: 1.15 us per loop In [125]: %timeit hash((b*y).sum()) 100000 loops, best of 3: 8.12 us per loop ``` It would appear that for this particular use case (small arrays of indices), arr.tostring offers the best performance. While hashing the read-only buffer is fast on its own, the overhead of setting the writeable flag actually makes it slower.", "response":"You can simply hash the underlying buffer, if you make it read-only: ``` >>> a = random.randint(10, 100, 100000) >>> a.flags.writeable = False >>> %timeit hash(a.data) 100 loops, best of 3: 2.01 ms per loop >>> %timeit hash(a.tostring()) 100 loops, best of 3: 2.28 ms per loop ``` For very large arrays, hash(str(a)) is a lot faster, but then it only takes a small part of the array into account.
``` >>> %timeit hash(str(a)) 10000 loops, best of 3: 55.5 us per loop >>> str(a) '[63 30 33 ..., 96 25 60]' ```", "best_answers_score":0.6876, "library_name":"numpy", "question_url":"https:\/\/stackoverflow.com\/questions\/16589791\/most-efficient-property-to-hash-for-numpy-array", "best_answers_votes":77, "question_length":1205, "response_length":508 }, { "question":"Matplotlib log scale tick label number formatting With matplotlib, when a log scale is specified for an axis, the default method of labeling that axis is with numbers that are 10 to a power, e.g. 10^6. Is there an easy way to change all of these labels to be their full numerical representation? E.g. 1, 10, 100, etc. Note that I do not know what the range of powers will be and want to support an arbitrary range (negatives included).", "response":"Sure, just change the formatter. For example, if we have this plot: ``` import matplotlib.pyplot as plt fig, ax = plt.subplots() ax.axis([1, 10000, 1, 100000]) ax.loglog() plt.show() ``` You could set the tick labels manually, but then the tick locations and labels would be fixed when you zoom\/pan\/etc. Therefore, it's best to change the formatter. By default, a logarithmic scale uses a LogFormatter, which will format the values in scientific notation. To change the formatter to the default for linear axes (ScalarFormatter) use e.g. ``` from matplotlib.ticker import ScalarFormatter for axis in [ax.xaxis, ax.yaxis]: axis.set_major_formatter(ScalarFormatter()) ```", "best_answers_score":0.6872, "library_name":"numpy", "question_url":"https:\/\/stackoverflow.com\/questions\/21920233\/matplotlib-log-scale-tick-label-number-formatting", "best_answers_votes":80, "question_length":431, "response_length":669 }, { "question":"Pandas: Subtracting two date columns and the result being an integer I have two columns in a Pandas data frame that are dates. I am looking to subtract one column from another, with the result being the difference in number of days as an integer.
A peek at the data: ``` df_test.head(10) Out[20]: First_Date Second Date 0 2016-02-09 2015-11-19 1 2016-01-06 2015-11-30 2 NaT 2015-12-04 3 2016-01-06 2015-12-08 4 NaT 2015-12-09 5 2016-01-07 2015-12-11 6 NaT 2015-12-12 7 NaT 2015-12-14 8 2016-01-06 2015-12-14 9 NaT 2015-12-15 ``` I have created a new column successfully with the difference: ``` df_test['Difference'] = df_test['First_Date'].sub(df_test['Second Date'], axis=0) df_test.head() Out[22]: First_Date Second Date Difference 0 2016-02-09 2015-11-19 82 days 1 2016-01-06 2015-11-30 37 days 2 NaT 2015-12-04 NaT 3 2016-01-06 2015-12-08 29 days 4 NaT 2015-12-09 NaT ``` However I am unable to get a numeric version of the result: ``` df_test['Difference'] = df_test[['Difference']].apply(pd.to_numeric) df_test.head() Out[25]: First_Date Second Date Difference 0 2016-02-09 2015-11-19 7.084800e+15 1 2016-01-06 2015-11-30 3.196800e+15 2 NaT 2015-12-04 NaN 3 2016-01-06 2015-12-08 2.505600e+15 4 NaT 2015-12-09 NaN ```", "response":"How about: ``` df_test['Difference'] = (df_test['First_Date'] - df_test['Second Date']).dt.days ``` This will return difference as int if there are no missing values(NaT) and float if there is. Pandas have a rich documentation on Time series \/ date functionality and Time deltas", "best_answers_score":0.6869, "library_name":"numpy", "question_url":"https:\/\/stackoverflow.com\/questions\/37840812\/pandas-subtracting-two-date-columns-and-the-result-being-an-integer", "best_answers_votes":188, "question_length":1223, "response_length":278 }, { "question":"How to update matplotlib's imshow() window interactively? I'm working on some computer vision algorithm and I'd like to show how a numpy array changes in each step. What works now is that if I have a simple imshow( array ) at the end of my code, the window displays and shows the final image. However what I'd like to do is to update and display the imshow window as the image changes in each iteration. 
So for example I'd like to do: ``` import numpy as np import matplotlib.pyplot as plt import time array = np.zeros( (100, 100), np.uint8 ) for i in xrange( 0, 100 ): for j in xrange( 0, 50 ): array[j, i] = 1 #_show_updated_window_briefly_ plt.imshow( array ) time.sleep(0.1) ``` The problem is that this way, the Matplotlib window doesn't get activated, only once the whole computation is finished. I've tried both native matplotlib and pyplot, but the results are the same. For plotting commands I found an .ion() switch, but here it doesn't seem to work. Q1. What is the best way to continuously display updates to a numpy array (actually a uint8 greyscale image)? Q2. Is it possible to do this with an animation function, like in the dynamic image example? I'd like to call a function inside a loop, thus I don't know how to achieve this with an animation function.", "response":"You don't need to call imshow all the time. It is much faster to use the object's set_data method: ``` myobj = imshow(first_image) for pixel in pixels: addpixel(pixel) myobj.set_data(segmentedimg) draw() ``` The draw() should make sure that the backend updates the image. UPDATE: your question was significantly modified. In such cases it is better to ask another question. Here is a way to deal with your second question: Matplotlib's animation only deals with one increasing dimension (time), so your double loop won't do. You need to convert your indices to a single index. 
Here is an example: ``` import numpy as np from matplotlib import pyplot as plt from matplotlib import animation nx = 150 ny = 50 fig = plt.figure() data = np.zeros((nx, ny)) im = plt.imshow(data, cmap='gist_gray_r', vmin=0, vmax=1) def init(): im.set_data(np.zeros((nx, ny))) def animate(i): xi = i \/\/ ny yi = i % ny data[xi, yi] = 1 im.set_data(data) return im anim = animation.FuncAnimation(fig, animate, init_func=init, frames=nx * ny, interval=50) ```", "best_answers_score":0.6861, "library_name":"numpy", "question_url":"https:\/\/stackoverflow.com\/questions\/17835302\/how-to-update-matplotlibs-imshow-window-interactively", "best_answers_votes":58, "question_length":1272, "response_length":1033 }, { "question":"Python: slicing a multi-dimensional array I know how to slice 1-dimensional sequence: arr[start:end], and access an element in the array: el = arr[row][col]. Now, I'm trying something like slice = arr[0:2][0:2] (where arr is a numpy array) but it doesn't give me the first 2 rows and columns, but repeats the first 2 rows. What did I just do, and how do I slice along another dimension?", "response":"If you use numpy, this is easy: ``` slice = arr[:2,:2] ``` or if you want the 0's, ``` slice = arr[0:2,0:2] ``` You'll get the same result. *note that slice is actually the name of a builtin-type. Generally, I would advise giving your object a different \"name\". Another way, if you're working with lists of lists*: ``` slice = [arr[i][0:2] for i in range(0,2)] ``` (Note that the 0's here are unnecessary: [arr[i][:2] for i in range(2)] would also work.). What I did here is that I take each desired row 1 at a time (arr[i]). I then slice the columns I want out of that row and add it to the list that I'm building. If you naively try: arr[0:2] You get the first 2 rows which if you then slice again arr[0:2][0:2], you're just slicing the first two rows over again. 
*This actually works for numpy arrays too, but it will be slow compared to the \"native\" solution I posted above.", "best_answers_score":0.686, "library_name":"numpy", "question_url":"https:\/\/stackoverflow.com\/questions\/17277100\/python-slicing-a-multi-dimensional-array", "best_answers_votes":102, "question_length":386, "response_length":878 }, { "question":"Type hinting \/ annotation (PEP 484) for numpy.ndarray Has anyone implemented type hinting for the specific numpy.ndarray class? Right now, I'm using typing.Any, but it would be nice to have something more specific. For instance if the NumPy people added a type alias for their array_like object class. Better yet, implement support at the dtype level, so that other objects would be supported, as well as ufunc.", "response":"There is a numpy.typing module with an NDArray generic type. From the Numpy 2.3 docs: ```py numpy.typing.NDArray = numpy.ndarray[tuple[typing.Any, ...], numpy.dtype[~_ScalarT]] ``` A np.ndarray[tuple[Any, ...], np.dtype[ScalarT]] type alias generic w.r.t. its dtype.type. Can be used during runtime for typing arrays with a given dtype and unspecified shape. Examples: ```py >>> import numpy as np >>> import numpy.typing as npt >>> print(npt.NDArray) numpy.ndarray[tuple[typing.Any, ...], numpy.dtype[~_ScalarT]] >>> print(npt.NDArray[np.float64]) numpy.ndarray[tuple[typing.Any, ...], numpy.dtype[numpy.float64]] >>> NDArrayInt = npt.NDArray[np.int_] >>> a: NDArrayInt = np.arange(10) >>> def func(a: npt.ArrayLike) -> npt.NDArray[Any]: ... return np.array(a) ``` As of 2025-07-17, support for shapes is still a work in progress per numpy\/numpy#16544.", "best_answers_score":0.6857, "library_name":"numpy", "question_url":"https:\/\/stackoverflow.com\/questions\/35673895\/type-hinting-annotation-pep-484-for-numpy-ndarray", "best_answers_votes":118, "question_length":411, "response_length":853 }, { "question":"How can I \"zip sort\" parallel numpy arrays? 
If I have two parallel lists and want to sort them by the order of the elements in the first, it's very easy: ``` >>> a = [2, 3, 1] >>> b = [4, 6, 7] >>> a, b = zip(*sorted(zip(a,b))) >>> print a (1, 2, 3) >>> print b (7, 4, 6) ``` How can I do the same using numpy arrays without unpacking them into conventional Python lists?", "response":"b[a.argsort()] should do the trick. Here's how it works. First you need to find a permutation that sorts a. argsort is a method that computes this: ``` >>> a = numpy.array([2, 3, 1]) >>> p = a.argsort() >>> p [2, 0, 1] ``` You can easily check that this is right: ``` >>> a[p] array([1, 2, 3]) ``` Now apply the same permutation to b. ``` >>> b = numpy.array([4, 6, 7]) >>> b[p] array([7, 4, 6]) ```", "best_answers_score":0.6857, "library_name":"numpy", "question_url":"https:\/\/stackoverflow.com\/questions\/1903462\/how-can-i-zip-sort-parallel-numpy-arrays", "best_answers_votes":115, "question_length":371, "response_length":399 }, { "question":"How to convert a python set to a numpy array? I am using a set operation in python to perform a symmetric difference between two numpy arrays. The result, however, is a set and I need to convert it back to a numpy array to move forward. Is there a way to do this? Here's what I tried: ``` a = numpy.array([1,2,3,4,5,6]) b = numpy.array([2,3,5]) c = set(a) ^ set(b) ``` The result is a set: ``` In [27]: c Out[27]: set([1, 4, 6]) ``` If I convert to a numpy array, it places the entire set in the first array element. ``` In [28]: numpy.array(c) Out[28]: array(set([1, 4, 6]), dtype=object) ``` What I need, however, would be this: ``` array([1,4,6],dtype=int) ``` I could loop over the elements to convert one by one, but I will have 100,000 elements and hoped for a built-in function to save the loop.
Thanks!", "response":"Do: ``` >>> numpy.array(list(c)) array([1, 4, 6]) ``` And dtype is int (int64 on my side.)", "best_answers_score":0.6851, "library_name":"numpy", "question_url":"https:\/\/stackoverflow.com\/questions\/8466014\/how-to-convert-a-python-set-to-a-numpy-array", "best_answers_votes":68, "question_length":811, "response_length":90 }, { "question":"Fast calculation of Pareto front in Python I have a set of points in a 3D space, from which I need to find the Pareto frontier. Speed of execution is very important here, and time increases very fast as I add points to test. The set of points looks like this: ``` [[0.3296170319979843, 0.0, 0.44472108843537406], [0.3296170319979843,0.0, 0.44472108843537406], [0.32920760896951373, 0.0, 0.4440408163265306], [0.32920760896951373, 0.0, 0.4440408163265306], [0.33815192743764166, 0.0, 0.44356462585034007]] ``` Right now, I'm using this algorithm: ``` def dominates(row, candidateRow): return sum([row[x] >= candidateRow[x] for x in range(len(row))]) == len(row) def simple_cull(inputPoints, dominates): paretoPoints = set() candidateRowNr = 0 dominatedPoints = set() while True: candidateRow = inputPoints[candidateRowNr] inputPoints.remove(candidateRow) rowNr = 0 nonDominated = True while len(inputPoints) != 0 and rowNr < len(inputPoints): row = inputPoints[rowNr] if dominates(candidateRow, row): # If it is worse on all features remove the row from the array inputPoints.remove(row) dominatedPoints.add(tuple(row)) elif dominates(row, candidateRow): nonDominated = False dominatedPoints.add(tuple(candidateRow)) rowNr += 1 else: rowNr += 1 if nonDominated: # add the non-dominated point to the Pareto frontier paretoPoints.add(tuple(candidateRow)) if len(inputPoints) == 0: break return paretoPoints, dominatedPoints ``` Found here: http:\/\/code.activestate.com\/recipes\/578287-multidimensional-pareto-front\/ What's the fastest way to find the set of non-dominated solutions in an ensemble of solutions? 
Or, in short, can Python do better than this algorithm?", "response":"If you're worried about actual speed, you definitely want to use numpy (as the clever algorithmic tweaks probably have way less effect than the gains to be had from using array operations). Here are three solutions that all compute the same function. The is_pareto_efficient_dumb solution is slower in most situations but becomes faster as the number of costs increases, the is_pareto_efficient_simple solution is much more efficient than the dumb solution for many points, and the final is_pareto_efficient function is less readable but the fastest (so all are Pareto Efficient!). ``` import numpy as np # Very slow for many datapoints. Fastest for many costs, most readable def is_pareto_efficient_dumb(costs): \"\"\" Find the pareto-efficient points :param costs: An (n_points, n_costs) array :return: A (n_points, ) boolean array, indicating whether each point is Pareto efficient \"\"\" is_efficient = np.ones(costs.shape[0], dtype = bool) for i, c in enumerate(costs): is_efficient[i] = np.all(np.any(costs[:i]>c, axis=1)) and np.all(np.any(costs[i+1:]>c, axis=1)) return is_efficient # Fairly fast for many datapoints, less fast for many costs, somewhat readable def is_pareto_efficient_simple(costs): \"\"\" Find the pareto-efficient points :param costs: An (n_points, n_costs) array :return: A (n_points, ) boolean array, indicating whether each point is Pareto efficient \"\"\" is_efficient = np.ones(costs.shape[0], dtype = bool) for i, c in enumerate(costs): if is_efficient[i]: is_efficient[is_efficient] = np.any(costs[is_efficient]<c, axis=1) # Keep any point with a lower cost is_efficient[i] = True # And keep self return is_efficient # Faster than is_pareto_efficient_simple, but less readable. def is_pareto_efficient(costs, return_mask = True): \"\"\" Find the pareto-efficient points :param costs: An (n_points, n_costs) array :param return_mask: True to return a mask :return: An array of indices of pareto-efficient points. If return_mask is True, this will be an (n_points, ) boolean array. Otherwise it will be a (n_efficient_points, ) integer array of indices. \"\"\" is_efficient = np.arange(costs.shape[0]) n_points = costs.shape[0] next_point_index = 0 # Next index in the is_efficient array to search for while next_point_index<len(costs): nondominated_point_mask = np.any(costs<costs[next_point_index], axis=1) nondominated_point_mask[next_point_index] = True is_efficient = is_efficient[nondominated_point_mask] # Remove dominated points costs = costs[nondominated_point_mask] next_point_index = np.sum(nondominated_point_mask[:next_point_index])+1 if return_mask: mask = np.zeros(n_points, dtype = bool) mask[is_efficient] = True return mask else: return is_efficient ```" }, { "question":"What's the best way to sum all values in a pandas dataframe? ``` >>> import pandas as pd >>> df = pd.DataFrame({'A': [5, 6, 7], 'B': [7, 8, 9]}) >>> print df.sum().sum() 42 >>> print df.values.sum() 42 ``` Just want to make sure I'm not missing something more obvious.", "response":"Updated for Pandas 0.24+ ``` df.to_numpy().sum() ``` Prior to Pandas 0.24+ ``` df.values ``` Is the underlying numpy array ``` df.values.sum() ``` Is the numpy sum
method and is faster", "best_answers_score":0.6827, "library_name":"numpy", "question_url":"https:\/\/stackoverflow.com\/questions\/38733477\/whats-the-best-way-to-sum-all-values-in-a-pandas-dataframe", "best_answers_votes":78, "question_length":438, "response_length":184 }, { "question":"Why is my plt.savefig is not working? I have a simple python code as follows: ``` import numpy as np import matplotlib.pyplot as plt \"\"\" Here are the solutions and the plot. \"\"\" # Create the axis and plot. plt.axis([0, 10, 0, 10]) axis_x = range(1, 11) grd = [1.1, 2.1, 3.1, 4.1, 5.1, 6.1, 7.1, 8.1, 9.1, 10.1] grd2 = [1.2, 2.2, 3.2, 4.2, 5.2, 6.2, 7.2, 8.2, 9.2, 10.2] plt.plot(axis_x, grd, '-g', label='BR1') plt.plot(axis_x, grd2, '-b', label='BR2') plt.legend(loc='upper left') plt.grid() plt.show() # Save the results vector to a text file. np.savetxt('test.out', (grd, grd2)) # Save the figure as '.eps' file. plt.savefig('expl.pdf', format='pdf', dpi=1200) ``` When I open the output files expl.pdf and\/or test.out I find them blank and nothing in there. Why? Thanks.", "response":"When you close the image displayed by plt.show(), the image is closed and freed from memory. 
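For instance, a minimal sketch of the corrected ordering (the Agg backend here is an assumption so the script can run without a display; the data is shortened from the original): ```python
import matplotlib
matplotlib.use('Agg')  # non-interactive backend, renders without a display
import numpy as np
import matplotlib.pyplot as plt

axis_x = list(range(1, 11))
grd = [x + 0.1 for x in axis_x]
grd2 = [x + 0.2 for x in axis_x]

plt.plot(axis_x, grd, '-g', label='BR1')
plt.plot(axis_x, grd2, '-b', label='BR2')
plt.legend(loc='upper left')

# Save everything while the figure still exists...
np.savetxt('test.out', (grd, grd2))
plt.savefig('expl.pdf', format='pdf')
# ...and only then hand control to show(), which may close the figure.
plt.show()
```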
You should call savefig and savetxt before calling show.", "best_answers_score":0.6826, "library_name":"numpy", "question_url":"https:\/\/stackoverflow.com\/questions\/30765455\/why-is-my-plt-savefig-is-not-working", "best_answers_votes":134, "question_length":774, "response_length":149 }, { "question":"List to array conversion to use ravel() function I have a list in python and I want to convert it to an array to be able to use ravel() function.", "response":"Use numpy.asarray: ``` import numpy as np myarray = np.asarray(mylist) ```", "best_answers_score":0.6821, "library_name":"numpy", "question_url":"https:\/\/stackoverflow.com\/questions\/15868512\/list-to-array-conversion-to-use-ravel-function", "best_answers_votes":244, "question_length":145, "response_length":74 }, { "question":"How do you get the magnitude of a vector in Numpy? In keeping with the \"There's only one obvious way to do it\", how do you get the magnitude of a vector (1D array) in Numpy? ``` def mag(x): return math.sqrt(sum(i**2 for i in x)) ``` The above works, but I cannot believe that I must specify such a trivial and core function myself.", "response":"The function you're after is numpy.linalg.norm. (I reckon it should be in base numpy as a property of an array -- say x.norm() -- but oh well). ``` import numpy as np x = np.array([1,2,3,4,5]) np.linalg.norm(x) ``` You can also feed in an optional ord for the nth order norm you want. Say you wanted the 1-norm: ``` np.linalg.norm(x,ord=1) ``` And so on.", "best_answers_score":0.6818, "library_name":"numpy", "question_url":"https:\/\/stackoverflow.com\/questions\/9171158\/how-do-you-get-the-magnitude-of-a-vector-in-numpy", "best_answers_votes":339, "question_length":331, "response_length":354 }, { "question":"extracting days from a numpy.timedelta64 value I am using pandas\/python and I have two date time series s1 and s2, that have been generated using the 'to_datetime' function on a field of the df containing dates\/times. 
When I subtract s1 from s2 (s3 = s2 - s1) I get a series, s3, of type timedelta64[ns]: ``` 0 385 days, 04:10:36 1 57 days, 22:54:00 2 642 days, 21:15:23 3 615 days, 00:55:44 4 160 days, 22:13:35 5 196 days, 23:06:49 6 23 days, 22:57:17 7 2 days, 22:17:31 8 622 days, 01:29:25 9 79 days, 20:15:14 10 23 days, 22:46:51 11 268 days, 19:23:04 12 NaT 13 NaT 14 583 days, 03:40:39 ``` When I look at one element of the series (s3[10]) I get something like this: numpy.timedelta64(2069211000000000,'ns') How do I extract days from s3 and maybe keep them as integers (not so interested in hours\/mins etc.)?", "response":"You can convert it to a timedelta with a day precision. To extract the integer value of days you divide it by a timedelta of one day. ``` >>> x = np.timedelta64(2069211000000000, 'ns') >>> days = x.astype('timedelta64[D]') >>> days \/ np.timedelta64(1, 'D') 23 ``` Or, as @PhillipCloud suggested, just days.astype(int), since the timedelta is just a 64bit integer that is interpreted in various ways depending on the second parameter you passed in ('D', 'ns', ...). You can find more about it here.", "best_answers_score":0.6812, "library_name":"numpy", "question_url":"https:\/\/stackoverflow.com\/questions\/18215317\/extracting-days-from-a-numpy-timedelta64-value", "best_answers_votes":197, "question_length":811, "response_length":498 }, { "question":"How to find the groups of consecutive elements in a NumPy array I have to cluster the consecutive elements from a NumPy array. Considering the following example ``` a = [ 0, 47, 48, 49, 50, 97, 98, 99] ``` The output should be a list of tuples as follows ``` [(0), (47, 48, 49, 50), (97, 98, 99)] ``` Here the difference is just one between the elements.
It would be great if the difference could also be specified as a limit or a hardcoded number.", "response":"``` import numpy as np def consecutive(data, stepsize=1): return np.split(data, np.where(np.diff(data) != stepsize)[0]+1) a = np.array([0, 47, 48, 49, 50, 97, 98, 99]) consecutive(a) ``` yields ``` [array([0]), array([47, 48, 49, 50]), array([97, 98, 99])] ```", "best_answers_score":0.6803, "library_name":"numpy", "question_url":"https:\/\/stackoverflow.com\/questions\/7352684\/how-to-find-the-groups-of-consecutive-elements-in-a-numpy-array", "best_answers_votes":242, "question_length":445, "response_length":241 }, { "question":"How can I release memory after creating matplotlib figures I have several matplotlib functions rolled into some django-celery tasks. Every time the tasks are called more RAM is dedicated to python. Before too long, python is taking up all of the RAM. QUESTION: How can I release this memory? UPDATE 2 - A Second Solution: I asked a similar question specifically about the memory locked up when matplotlib errors, but I got a good answer to this question .clf(), .close(), and gc.collect() aren't needed if you use multiprocess to run the plotting function in a separate process whose memory will automatically be freed once the process ends. Matplotlib errors result in a memory leak. How can I free up that memory?
UPDATE - The Solution: These stackoverflow posts suggested that I can release the memory used by matplotlib objects with the following commands: .clf(): Matplotlib runs out of memory when plotting in a loop .close(): Python matplotlib: memory not being released when specifying figure size ``` import gc gc.collect() ``` Here is the example I used to test the solution: ``` import matplotlib matplotlib.use('Agg') import matplotlib.pyplot as plt from pylab import figure, savefig import numpy as np import gc a = np.arange(1000000) b = np.random.randn(1000000) fig = plt.figure(num=1, dpi=100, facecolor='w', edgecolor='w') fig.set_size_inches(10,7) ax = fig.add_subplot(111) ax.plot(a, b) fig.clf() plt.close() del a, b gc.collect() ```", "response":"Did you try to run your task function several times (in a for loop) to be sure that it is not your function that is leaking, regardless of celery? Make sure that django.settings.DEBUG is set to False (the connection object holds all queries in memory when DEBUG=True).", "best_answers_score":0.6792, "library_name":"numpy", "question_url":"https:\/\/stackoverflow.com\/questions\/7101404\/how-can-i-release-memory-after-creating-matplotlib-figures", "best_answers_votes":6, "question_length":1460, "response_length":247 }, { "question":"Factorial in numpy and scipy How can I import the factorial function from numpy and scipy separately in order to see which one is faster? I already imported factorial from python itself by import math.
But it does not work for numpy and scipy.", "response":"You can import them like this: ``` In [7]: import scipy, numpy, math In [8]: scipy.math.factorial, numpy.math.factorial, math.factorial Out[8]: (<built-in function factorial>, <built-in function factorial>, <built-in function factorial>) ``` scipy.math.factorial and numpy.math.factorial seem to simply be aliases\/references for\/to math.factorial; that is, scipy.math.factorial is math.factorial and numpy.math.factorial is math.factorial should both give True.", "best_answers_score":0.6791, "library_name":"numpy", "question_url":"https:\/\/stackoverflow.com\/questions\/21753841\/factorial-in-numpy-and-scipy", "best_answers_votes":106, "question_length":240, "response_length":373 }, { "question":"Extracting first n columns of a numpy matrix I have an array like this: ``` array([[-0.57098887, -0.4274751 , -0.38459931, -0.58593526], [-0.22279713, -0.51723555, 0.82462029, 0.05319973], [ 0.67492385, -0.69294472, -0.2531966 , 0.01403201], [ 0.41086611, 0.26374238, 0.32859738, -0.80848795]]) ``` Now I want to extract the following: ``` [-0.57098887, -0.4274751] [-0.22279713, -0.51723555] [ 0.67492385, -0.69294472] [ 0.41086611, 0.26374238] ``` So basically just the first 2 columns.", "response":"If a is your array: ``` In [11]: a[:,:2] Out[11]: array([[-0.57098887, -0.4274751 ], [-0.22279713, -0.51723555], [ 0.67492385, -0.69294472], [ 0.41086611, 0.26374238]]) ```", "best_answers_score":0.6788, "library_name":"numpy", "question_url":"https:\/\/stackoverflow.com\/questions\/10625096\/extracting-first-n-columns-of-a-numpy-matrix", "best_answers_votes":94, "question_length":485, "response_length":172 }, { "question":"Generating Discrete random variables with specified weights using SciPy or NumPy I am looking for a simple function that can generate an array of specified random values based on their corresponding (also specified) probabilities. I only need it to generate float values, but I don't see why it shouldn't be able to generate any scalar.
I can think of many ways of building this from existing functions, but I think I probably just missed an obvious SciPy or NumPy function. E.g.: ``` >>> values = [1.1, 2.2, 3.3] >>> probabilities = [0.2, 0.5, 0.3] >>> print some_function(values, probabilities, size=10) (2.2, 1.1, 3.3, 3.3, 2.2, 2.2, 1.1, 2.2, 3.3, 2.2) ``` Note: I found scipy.stats.rv_discrete but I don't understand how it works. Specifically, I do not understand what this (below) means nor what it should do: ``` numargs = generic.numargs [ ] = ['Replace with resonable value', ]*numargs ``` If rv_discrete is what I should be using, could you please provide me with a simple example and an explanation of the above \"shape\" statement?", "response":"Drawing from a discrete distribution is directly built into numpy. The function is called random.choice (difficult to find without any reference to discrete distributions in the numpy docs). ``` import numpy as np elements = [1.1, 2.2, 3.3] probabilities = [0.2, 0.5, 0.3] np.random.choice(elements, 10, p=probabilities) ```", "best_answers_score":0.6784, "library_name":"numpy", "question_url":"https:\/\/stackoverflow.com\/questions\/11373192\/generating-discrete-random-variables-with-specified-weights-using-scipy-or-numpy", "best_answers_votes":99, "question_length":1043, "response_length":305 }, { "question":"Python OpenCV2 (cv2) wrapper to get image size? How to get the size of an image in the cv2 wrapper in Python OpenCV (numpy)? Is there a correct way to do that other than numpy.shape()? How can I get it in this format: a (width, height) list?", "response":"cv2 uses numpy for manipulating images, so the proper and best way to get the size of an image is using numpy.shape.
Assuming you are working with BGR images, here is an example: ``` >>> import numpy as np >>> import cv2 >>> img = cv2.imread('foo.jpg') >>> height, width, channels = img.shape >>> print height, width, channels 600 800 3 ``` If you are working with binary (single-channel) images, img will have only two dimensions, and therefore you must change the code to: height, width = img.shape", "best_answers_score":0.6783, "library_name":"numpy", "question_url":"https:\/\/stackoverflow.com\/questions\/19098104\/python-opencv2-cv2-wrapper-to-get-image-size", "best_answers_votes":319, "question_length":247, "response_length":484 }, { "question":"How to get the index of a maximum element in a NumPy array along one axis I have a 2 dimensional NumPy array. I know how to get the maximum values over axes: ``` >>> a = array([[1,2,3],[4,3,1]]) >>> amax(a,axis=0) array([4, 3, 3]) ``` How can I get the indices of the maximum elements? I would like as output array([1,1,0]) instead.", "response":"``` >>> a.argmax(axis=0) array([1, 1, 0]) ```", "best_answers_score":0.6763, "library_name":"numpy", "question_url":"https:\/\/stackoverflow.com\/questions\/5469286\/how-to-get-the-index-of-a-maximum-element-in-a-numpy-array-along-one-axis", "best_answers_votes":190, "question_length":332, "response_length":45 }, { "question":"How does condensed distance matrix work? (pdist) scipy.spatial.distance.pdist returns a condensed distance matrix. From the documentation: Returns a condensed distance matrix Y. For each i and j (where i < j < m), the metric dist(u=X[i], v=X[j]) is computed and stored in entry ij. I thought ij meant i*j. But I think I might be wrong. Consider ```py X = array([[1,2], [1,2], [3,4]]) dist_matrix = pdist(X) ``` then the documentation says that dist(X[0], X[2]) should be dist_matrix[0*2]. However, dist_matrix[0*2] is 0 -- not 2.8 as it should be. What's the formula I should use to access the similarity of two vectors, given i and j?", "response":"You can look at it this way: Suppose x is m by n. 
The possible pairs of m rows, chosen two at a time, are given by itertools.combinations(range(m), 2), e.g., for m=3: ``` >>> from itertools import combinations >>> list(combinations(range(3), 2)) [(0, 1), (0, 2), (1, 2)] ``` So if d = pdist(x), the kth tuple in combinations(range(m), 2) gives the indices of the rows of x associated with d[k]. Example: ``` >>> x = array([[0,10],[10,10],[20,20]]) >>> pdist(x) array([ 10. , 22.36067977, 14.14213562]) ``` The first element is dist(x[0], x[1]), the second is dist(x[0], x[2]) and the third is dist(x[1], x[2]). Or you can view it as the elements in the upper triangular part of the square distance matrix, strung together into a 1D array. E.g. ``` >>> squareform(pdist(x)) array([[ 0. , 10. , 22.361], [ 10. , 0. , 14.142], [ 22.361, 14.142, 0. ]]) >>> y = array([[0,10],[10,10],[20,20],[10,0]]) >>> squareform(pdist(y)) array([[ 0. , 10. , 22.361, 14.142], [ 10. , 0. , 14.142, 10. ], [ 22.361, 14.142, 0. , 22.361], [ 14.142, 10. , 22.361, 0. ]]) >>> pdist(y) array([ 10. , 22.361, 14.142, 14.142, 10. , 22.361]) ```", "best_answers_score":0.6756, "library_name":"numpy", "question_url":"https:\/\/stackoverflow.com\/questions\/13079563\/how-does-condensed-distance-matrix-work-pdist", "best_answers_votes":113, "question_length":624, "response_length":1089 }, { "question":"Creating Pandas Dataframe between two Numpy arrays, then draw scatter plot I'm relatively new with numpy and pandas (I'm an experimental physicist so I've been using ROOT for years...). A common plot in ROOT is a 2D scatter plot where, given a list of x- and y- values, makes a \"heatmap\" type scatter plot of one variable versus the other. How is this best accomplished with numpy and Pandas? I'm trying to use the Dataframe.plot() function, but I'm struggling to even create the Dataframe. ``` import numpy as np import pandas as pd x = np.random.randn(1,5) y = np.sin(x) df = pd.DataFrame(d) ``` First off, this dataframe has shape (1,2), but I would like it to have shape (5,2). 
If I can get the dataframe the right shape, I'm sure I can figure out the DataFrame.plot() function to draw what I want.", "response":"There are a number of ways to create DataFrames. Given 1-dimensional column vectors, you can create a DataFrame by passing it a dict whose keys are column names and whose values are the 1-dimensional column vectors: ``` import numpy as np import pandas as pd x = np.random.randn(5) y = np.sin(x) df = pd.DataFrame({'x':x, 'y':y}) df.plot('x', 'y', kind='scatter') ```", "best_answers_score":0.6752, "library_name":"numpy", "question_url":"https:\/\/stackoverflow.com\/questions\/29949757\/creating-pandas-dataframe-between-two-numpy-arrays-then-draw-scatter-plot", "best_answers_votes":116, "question_length":802, "response_length":367 }, { "question":"How to check if a value is in the list in selection from pandas data frame? [duplicate] This question already has answers here: Use a list of values to select rows from a Pandas dataframe (9 answers) Closed 2 years ago. 
Looks ugly: ``` df_cut = df_new[ ( (df_new['l_ext']==31) | (df_new['l_ext']==22) | (df_new['l_ext']==30) | (df_new['l_ext']==25) | (df_new['l_ext']==64) ) ] ``` Does not work: ``` df_cut = df_new[(df_new['l_ext'] in [31, 22, 30, 25, 64])] ``` Is there an elegant and working solution of the above \"problem\"?", "response":"Use isin ``` df_new[df_new['l_ext'].isin([31, 22, 30, 25, 64])] ```", "best_answers_score":0.6747, "library_name":"numpy", "question_url":"https:\/\/stackoverflow.com\/questions\/18250298\/how-to-check-if-a-value-is-in-the-list-in-selection-from-pandas-data-frame", "best_answers_votes":262, "question_length":527, "response_length":67 }, { "question":"Replace -inf with zero value I have an array: ``` x = numpy.array([-inf, -inf, 37.49668579]) ``` Is there a way to change the -inf values to just 0?", "response":"There is: ``` from numpy import inf x[x == -inf] = 0 ```", "best_answers_score":0.6745, "library_name":"numpy", "question_url":"https:\/\/stackoverflow.com\/questions\/21049920\/replace-inf-with-zero-value", "best_answers_votes":108, "question_length":148, "response_length":56 }, { "question":"Equivalent of Numpy.argsort() in basic python? [duplicate] This question already has answers here: How to get indices of a sorted array in Python (19 answers) Closed 10 years ago. 
is there a builtin function of Python that does on python.array what argsort() does on a numpy.array?", "response":"There is no built-in function, but it's easy to assemble one out of the terrific tools Python makes available: ``` def argsort(seq): # http:\/\/stackoverflow.com\/questions\/3071415\/efficient-method-to-calculate-the-rank-vector-of-a-list-in-python return sorted(range(len(seq)), key=seq.__getitem__) x = [5,2,1,10] print(argsort(x)) # [2, 1, 0, 3] ``` It works on Python array.arrays the same way: ``` import array x = array.array('d', [5, 2, 1, 10]) print(argsort(x)) # [2, 1, 0, 3] ```", "best_answers_score":0.674, "library_name":"numpy", "question_url":"https:\/\/stackoverflow.com\/questions\/3382352\/equivalent-of-numpy-argsort-in-basic-python", "best_answers_votes":112, "question_length":281, "response_length":483 }, { "question":"Is there a numpy\/scipy dot product, calculating only the diagonal entries of the result? Imagine having 2 numpy arrays: ``` > A, A.shape = (n,p) > B, B.shape = (p,p) ``` Typically p is a smaller number (p result = diag_dot(A.dot(B), A.T) ``` Is there a premade functionality like this and can this be done efficiently without the need for allocating the intermediate (n x n) array?", "response":"I think i got it on my own, but nevertheless will share the solution: since getting only the diagonals of a matrix multiplication ``` > Z = N.diag(X.dot(Y)) ``` is equivalent to the individual sum of the scalar product of rows of X and columns of Y, the previous statement is equivalent to: ``` > Z = (X * Y.T).sum(-1) ``` For the original variables this means: ``` > result = (A.dot(B) * A).sum(-1) ``` Please correct me if I am wrong but this should be it ...", "best_answers_score":0.674, "library_name":"numpy", "question_url":"https:\/\/stackoverflow.com\/questions\/14758283\/is-there-a-numpy-scipy-dot-product-calculating-only-the-diagonal-entries-of-the", "best_answers_votes":70, "question_length":382, "response_length":461 }, { 
"question":"Strings in a DataFrame, but dtype is object Why does Pandas tell me that I have objects, although every item in the selected column is a string \u2014 even after explicit conversion. This is my DataFrame: ``` Int64Index: 56992 entries, 0 to 56991 Data columns (total 7 columns): id 56992 non-null values attr1 56992 non-null values attr2 56992 non-null values attr3 56992 non-null values attr4 56992 non-null values attr5 56992 non-null values attr6 56992 non-null values dtypes: int64(2), object(5) ``` Five of them are dtype object. I explicitly convert those objects to strings: ``` for c in df.columns: if df[c].dtype == object: print \"convert \", df[c].name, \" to string\" df[c] = df[c].astype(str) ``` Then, df[\"attr2\"] still has dtype object, although type(df[\"attr2\"].ix[0]) reveals str, which is correct. Pandas distinguishes between int64 and float64 and object. What is the logic behind it when there is no dtype str? Why is a str covered by object?", "response":"The dtype object comes from NumPy; it describes the type of an element in an ndarray. Every element in an ndarray must have the same size in bytes. For int64 and float64, they are 8 bytes. But for strings, the length of the string is not fixed. So instead of saving the bytes of strings in the ndarray directly, Pandas uses an object ndarray, which saves pointers to objects; because of this the dtype of this kind of ndarray is object. Here is an example: the int64 array contains 4 int64 values; the object array contains 4 pointers to 3 string objects.", "best_answers_score":0.6739, "library_name":"numpy", "question_url":"https:\/\/stackoverflow.com\/questions\/21018654\/strings-in-a-dataframe-but-dtype-is-object", "best_answers_votes":206, "question_length":953, "response_length":547 }, { "question":"Convert numpy array to tuple Note: This is asking for the reverse of the usual tuple-to-array conversion. I have to pass an argument to a (wrapped c++) function as a nested tuple. 
For example, the following works ``` X = MyFunction( ((2,2),(2,-2)) ) ``` whereas the following do not ``` X = MyFunction( numpy.array(((2,2),(2,-2))) ) X = MyFunction( [[2,2],[2,-2]] ) ``` Unfortunately, the argument I would like to use comes to me as a numpy array. That array always has dimensions 2xN for some N, which may be quite large. Is there an easy way to convert that to a tuple? I know that I could just loop through, creating a new tuple, but would prefer if there's some nice access the numpy array provides. If it's not possible to do this as nicely as I hope, what's the prettiest way to do it by looping, or whatever?", "response":"``` >>> arr = numpy.array(((2,2),(2,-2))) >>> tuple(map(tuple, arr)) ((2, 2), (2, -2)) ```", "best_answers_score":0.6737, "library_name":"numpy", "question_url":"https:\/\/stackoverflow.com\/questions\/10016352\/convert-numpy-array-to-tuple", "best_answers_votes":245, "question_length":815, "response_length":90 }, { "question":"RuntimeWarning: divide by zero encountered in log I am using numpy.log10 to calculate the log of an array of probability values. There are some zeros in the array, and I am trying to get around it using ``` result = numpy.where(prob > 0.0000000001, numpy.log10(prob), -10) ``` However, RuntimeWarning: divide by zero encountered in log10 still appeared and I am sure it is this line caused the warning. Although my problem is solved, I am confused why this warning appeared again and again?", "response":"You can turn it off with seterr ``` numpy.seterr(divide = 'ignore') ``` and back on with ``` numpy.seterr(divide = 'warn') ```", "best_answers_score":0.6731, "library_name":"numpy", "question_url":"https:\/\/stackoverflow.com\/questions\/21610198\/runtimewarning-divide-by-zero-encountered-in-log", "best_answers_votes":48, "question_length":490, "response_length":126 }, { "question":"How to calculate a Gaussian kernel matrix efficiently in numpy? 
``` def GaussianMatrix(X,sigma): row,col=X.shape GassMatrix=np.zeros(shape=(row,row)) X=np.asarray(X) i=0 for v_i in X: j=0 for v_j in X: GassMatrix[i,j]=Gaussian(v_i.T,v_j.T,sigma) j+=1 i+=1 return GassMatrix def Gaussian(x,z,sigma): return np.exp((-(np.linalg.norm(x-z)**2))\/(2*sigma**2)) ``` This is my current way. Is there any way I can use matrix operation to do this? X is the data points.", "response":"I myself used the accepted answer for my image processing, but I find it (and the other answers) too dependent on other modules. Therefore, here is my compact solution: ```py import numpy as np def gkern(l=5, sig=1.): \"\"\"\\ creates gaussian kernel with side length `l` and a sigma of `sig` \"\"\" ax = np.linspace(-(l - 1) \/ 2., (l - 1) \/ 2., l) gauss = np.exp(-0.5 * np.square(ax) \/ np.square(sig)) kernel = np.outer(gauss, gauss) return kernel \/ np.sum(kernel) ``` Edit: Changed arange to linspace to handle even side lengths Edit: Use separability for faster computation, thank you Yves Daoust.", "best_answers_score":0.6725, "library_name":"numpy", "question_url":"https:\/\/stackoverflow.com\/questions\/29731726\/how-to-calculate-a-gaussian-kernel-matrix-efficiently-in-numpy", "best_answers_votes":58, "question_length":460, "response_length":593 }, { "question":"Sort array's rows by another array in Python I'm trying to sort the rows of one array by the values of another. 
For example: ``` import numpy as np arr1 = np.random.normal(1, 1, 80) arr2 = np.random.normal(1,1, (80,100)) ``` I want to sort arr1 in descending order, and to have the current relationship between arr1 and arr2 to be maintained (ie, after sorting both, the rows of arr1[0] and arr2[0, :] are the same).", "response":"Use argsort as follows: ``` arr1inds = arr1.argsort() sorted_arr1 = arr1[arr1inds[::-1]] sorted_arr2 = arr2[arr1inds[::-1]] ``` This example sorts in descending order.", "best_answers_score":0.6722, "library_name":"numpy", "question_url":"https:\/\/stackoverflow.com\/questions\/9007877\/sort-arrays-rows-by-another-array-in-python", "best_answers_votes":132, "question_length":416, "response_length":167 }, { "question":"Remove duplicate rows of a numpy array [duplicate] This question already has answers here: Find unique rows in numpy.array (20 answers) Closed 10 years ago. How can I remove duplicate rows of a 2 dimensional numpy array? ``` data = np.array([[1,8,3,3,4], [1,8,9,9,4], [1,8,3,3,4]]) ``` The answer should be as follows: ``` ans = array([[1,8,3,3,4], [1,8,9,9,4]]) ``` If there are two rows that are the same, then I would like to remove one \"duplicate\" row.", "response":"You can use numpy unique. Since you want the unique rows, we need to put them into tuples: ``` import numpy as np data = np.array([[1,8,3,3,4], [1,8,9,9,4], [1,8,3,3,4]]) ``` just applying np.unique to the data array will result in this: ``` >>> uniques array([1, 3, 4, 8, 9]) ``` prints out the unique elements in the list. 
So putting them into tuples results in: ``` new_array = [tuple(row) for row in data] uniques = np.unique(new_array) ``` which prints: ``` >>> uniques array([[1, 8, 3, 3, 4], [1, 8, 9, 9, 4]]) ``` UPDATE In newer NumPy versions (1.13+), you can simply use np.unique(data, axis=0)", "best_answers_score":0.6716, "library_name":"numpy", "question_url":"https:\/\/stackoverflow.com\/questions\/31097247\/remove-duplicate-rows-of-a-numpy-array", "best_answers_votes":105, "question_length":456, "response_length":587 }, { "question":"Python : save dictionaries through numpy.save [duplicate] This question already has answers here: How to save dictionaries and arrays in the same archive (with numpy.savez) (4 answers) Closed 8 years ago. I have a large data set (millions of rows) in memory, in the form of numpy arrays and dictionaries. Once this data is constructed I want to store them into files; so, later I can load these files into memory quickly, without reconstructing this data from scratch once again. The np.save and np.load functions do the job smoothly for numpy arrays. But I am facing problems with dict objects. See below sample. d2 is the dictionary which was loaded from the file. See Out[28]: it has been loaded into d2 as a numpy array, not as a dict. So further dict operations such as get are not working. Is there a way to load the data from the file as dict (instead of numpy array)? ``` In [25]: d1={'key1':[5,10], 'key2':[50,100]} In [26]: np.save(\"d1.npy\", d1) In [27]: d2=np.load(\"d1.npy\") In [28]: d2 Out[28]: array({'key2': [50, 100], 'key1': [5, 10]}, dtype=object) In [30]: d1.get('key1') #original dict before saving into file Out[30]: [5, 10] In [31]: d2.get('key2') #dictionary loaded from the file --------------------------------------------------------------------------- AttributeError Traceback (most recent call last) in () ----> 1 d2.get('key2') AttributeError: 'numpy.ndarray' object has no attribute 'get' ```", "response":"NumPy has stored the dict inside a 0-dimensional object array. 
Use d2.item() to retrieve the actual dict object first: ``` import numpy as np d1={'key1':[5,10], 'key2':[50,100]} np.save(\"d1.npy\", d1) d2=np.load(\"d1.npy\") print d1.get('key1') print d2.item().get('key2') ``` result: ``` [5, 10] [50, 100] ```", "best_answers_score":0.6715, "library_name":"numpy", "question_url":"https:\/\/stackoverflow.com\/questions\/40219946\/python-save-dictionaries-through-numpy-save", "best_answers_votes":126, "question_length":1425, "response_length":269 }, { "question":"'and' (boolean) vs '&' (bitwise) - Why difference in behavior with lists vs numpy arrays? What explains the difference in behavior of boolean and bitwise operations on lists vs NumPy arrays? I'm confused about the appropriate use of & vs and in Python, illustrated in the following examples. ``` mylist1 = [True, True, True, False, True] mylist2 = [False, True, False, True, False] >>> len(mylist1) == len(mylist2) True # ---- Example 1 ---- >>> mylist1 and mylist2 [False, True, False, True, False] # I would have expected [False, True, False, False, False] # ---- Example 2 ---- >>> mylist1 & mylist2 TypeError: unsupported operand type(s) for &: 'list' and 'list' # Why not just like example 1? >>> import numpy as np # ---- Example 3 ---- >>> np.array(mylist1) and np.array(mylist2) ValueError: The truth value of an array with more than one element is ambiguous. Use a.any() or a.all() # Why not just like Example 4? # ---- Example 4 ---- >>> np.array(mylist1) & np.array(mylist2) array([False, True, False, False, False], dtype=bool) # This is the output I was expecting! ``` This answer and this answer helped me understand that and is a boolean operation but & is a bitwise operation. I read about bitwise operations to better understand the concept, but I am struggling to use that information to make sense of my above 4 examples. Example 4 led me to my desired output, so that is fine, but I am still confused about when\/how\/why I should use and vs &. 
Why do lists and NumPy arrays behave differently with these operators? Can anyone help me understand the difference between boolean and bitwise operations to explain why they handle lists and NumPy arrays differently?", "response":"and tests whether both expressions are logically True while & (when used with True\/False values) tests if both are True. In Python, empty built-in objects are typically treated as logically False while non-empty built-ins are logically True. This facilitates the common use case where you want to do something if a list is empty and something else if the list is not. Note that this means that the list [False] is logically True: ``` >>> if [False]: ... print('True') ... True ``` So in Example 1, the first list is non-empty and therefore logically True, so the truth value of the and is the same as that of the second list. (In our case, the second list is non-empty and therefore logically True, but identifying that would require an unnecessary step of calculation.) For example 2, lists cannot meaningfully be combined in a bitwise fashion because they can contain arbitrary unlike elements. Things that can be combined bitwise include: Trues and Falses, integers. NumPy objects, by contrast, support vectorized calculations. That is, they let you perform the same operations on multiple pieces of data. Example 3 fails because NumPy arrays (of length > 1) have no truth value as this prevents vector-based logic confusion. Example 4 is simply a vectorized bit and operation. Bottom Line If you are not dealing with arrays and are not performing math manipulations of integers, you probably want and. 
If you have vectors of truth values that you wish to combine, use numpy with &.", "best_answers_score":0.6698, "library_name":"numpy", "question_url":"https:\/\/stackoverflow.com\/questions\/22646463\/and-boolean-vs-bitwise-why-difference-in-behavior-with-lists-vs-nump", "best_answers_votes":159, "question_length":1680, "response_length":1485 }, { "question":"how to convert an RGB image to numpy array? I have an RGB image. I want to convert it to numpy array. I did the following ``` im = cv.LoadImage(\"abc.tiff\") a = numpy.asarray(im) ``` It creates an array with no shape. I assume it is an iplimage object.", "response":"You can use the newer OpenCV Python interface (if I'm not mistaken it is available since OpenCV 2.2). It natively uses numpy arrays: ``` import cv2 im = cv2.imread(\"abc.tiff\") print(type(im)) ``` result: ``` <type 'numpy.ndarray'> ``` Note that cv2.imread loads color images in BGR channel order; if you need RGB, reverse the last axis with im[:, :, ::-1].", "best_answers_score":0.6697, "library_name":"numpy", "question_url":"https:\/\/stackoverflow.com\/questions\/7762948\/how-to-convert-an-rgb-image-to-numpy-array", "best_answers_votes":201, "question_length":250, "response_length":219 }, { "question":"How to use python numpy.savetxt to write strings and float number to an ASCII file? I have a set of lists that contain both strings and float numbers, such as: ``` import numpy as num NAMES = num.array(['NAME_1', 'NAME_2', 'NAME_3']) FLOATS = num.array([ 0.5 , 0.2 , 0.3 ]) DAT = num.column_stack((NAMES, FLOATS)) ``` I want to stack these two lists together and write them to a text file in the form of columns; therefore, I want to use numpy.savetxt (if possible) to do this. 
``` num.savetxt('test.txt', DAT, delimiter=\" \") ``` When I do this, I get the following error: ``` >>> num.savetxt('test.txt', DAT, delimiter=\" \") Traceback (most recent call last): File \"\", line 1, in File \"\/Library\/Python\/2.7\/site-packages\/numpy-1.8.0.dev_9597b1f_20120920-py2.7-macosx-10.8-x86_64.egg\/numpy\/lib\/npyio.py\", line 1047, in savetxt fh.write(asbytes(format % tuple(row) + newline)) TypeError: float argument required, not numpy.string_ ``` The ideal output file would look like: ``` NAME_1 0.5 NAME_2 0.2 NAME_3 0.3 ``` How can I write both strings and float numbers to a text file, possibly avoiding using csv (I want to make it readable for other people)? Is there another way of doing this instead of using numpy.savetxt?", "response":"You have to specify the format (fmt) of your data in savetxt, in this case as a string (%s): ``` num.savetxt('test.txt', DAT, delimiter=\" \", fmt=\"%s\") ``` The default format is a float; that is why it was expecting a float instead of a string, which explains the error message.", "best_answers_score":0.6694, "library_name":"numpy", "question_url":"https:\/\/stackoverflow.com\/questions\/16621351\/how-to-use-python-numpy-savetxt-to-write-strings-and-float-number-to-an-ascii-fi", "best_answers_votes":147, "question_length":1219, "response_length":280 }, { "question":"How to create a density plot In R I can create the desired output by doing: ``` data = c(rep(1.5, 7), rep(2.5, 2), rep(3.5, 8), rep(4.5, 3), rep(5.5, 1), rep(6.5, 8)) plot(density(data, bw=0.5)) ``` In python (with matplotlib) the closest I got was with a simple histogram: ``` import matplotlib.pyplot as plt data = [1.5]*7 + [2.5]*2 + [3.5]*8 + [4.5]*3 + [5.5]*1 + [6.5]*8 plt.hist(data, bins=6) plt.show() ``` I also tried the normed=True parameter but couldn't get anything other than trying to fit a gaussian to the histogram. 
My latest attempts were around scipy.stats and gaussian_kde, following examples on the web, but I've been unsuccessful so far.", "response":"Five years later, when I Google \"how to create a kernel density plot using python\", this thread still shows up at the top! Today, a much easier way to do this is to use seaborn, a package that provides many convenient plotting functions and good style management. ``` import numpy as np import seaborn as sns data = [1.5]*7 + [2.5]*2 + [3.5]*8 + [4.5]*3 + [5.5]*1 + [6.5]*8 sns.set_style('whitegrid') sns.kdeplot(np.array(data), bw=0.5) ```", "best_answers_score":0.6693, "library_name":"numpy", "question_url":"https:\/\/stackoverflow.com\/questions\/4150171\/how-to-create-a-density-plot", "best_answers_votes":205, "question_length":658, "response_length":440 }, { "question":"Numpy index slice without losing dimension information I'm using numpy and want to index a row without losing the dimension information. ``` import numpy as np X = np.zeros((100,10)) X.shape # >> (100, 10) xslice = X[10,:] xslice.shape # >> (10,) ``` In this example xslice is now 1 dimension, but I want it to be (1,10). In R, I would use X[10,:,drop=F]. Is there something similar in numpy. I couldn't find it in the documentation and didn't see a similar question asked. Thanks!", "response":"Another solution is to do ``` X[[10],:] ``` or ``` I = array([10]) X[I,:] ``` The dimensionality of an array is preserved when indexing is performed by a list (or an array) of indexes. This is nice because it leaves you with the choice between keeping the dimension and squeezing.", "best_answers_score":0.6684, "library_name":"numpy", "question_url":"https:\/\/stackoverflow.com\/questions\/3551242\/numpy-index-slice-without-losing-dimension-information", "best_answers_votes":127, "question_length":481, "response_length":280 }, { "question":"How does python numpy.where() work? I am playing with numpy and digging through documentation and I have come across some magic. 
Namely I am talking about numpy.where(): ``` >>> x = np.arange(9.).reshape(3, 3) >>> np.where( x > 5 ) (array([2, 2, 2]), array([0, 1, 2])) ``` How do they achieve internally that you are able to pass something like x > 5 into a method? I guess it has something to do with __gt__ but I am looking for a detailed explanation.", "response":"How do they achieve internally that you are able to pass something like x > 5 into a method? The short answer is that they don't. Any sort of logical operation on a numpy array returns a boolean array. (i.e., __gt__, __lt__, etc. all return boolean arrays where the given condition is true). E.g. ``` x = np.arange(9).reshape(3,3) print x > 5 ``` yields: ``` array([[False, False, False], [False, False, False], [ True, True, True]], dtype=bool) ``` This is the same reason why something like if x > 5: raises a ValueError if x is a numpy array. It's an array of True\/False values, not a single value. Furthermore, numpy arrays can be indexed by boolean arrays. E.g. x[x>5] yields [6 7 8], in this case. Honestly, it's fairly rare that you actually need numpy.where; it just returns the indices where a boolean array is True. Usually you can do what you need with simple boolean indexing.", "best_answers_score":0.6684, "library_name":"numpy", "question_url":"https:\/\/stackoverflow.com\/questions\/5642457\/how-does-python-numpy-where-work", "best_answers_votes":78, "question_length":453, "response_length":890 }, { "question":"How to filter numpy array by list of indices? I have a numpy array, filtered__rows, comprised of LAS data [x, y, z, intensity, classification]. I have created a cKDTree of points and have found nearest neighbors, query_ball_point, which is a list of indices for the point and its neighbors. 
Is there a way to filter filtered__rows to create an array of only points whose index is in the list returned by query_ball_point?", "response":"It looks like you just need basic integer array indexing: ``` filter_indices = [1,3,5] np.array([11,13,155,22,0xff,32,56,88])[filter_indices] ```", "best_answers_score":0.6679, "library_name":"numpy", "question_url":"https:\/\/stackoverflow.com\/questions\/19821425\/how-to-filter-numpy-array-by-list-of-indices", "best_answers_votes":112, "question_length":421, "response_length":147 }, { "question":"Find unique rows in numpy.array I need to find unique rows in a numpy.array. For example: ``` >>> a # I have array([[1, 1, 1, 0, 0, 0], [0, 1, 1, 1, 0, 0], [0, 1, 1, 1, 0, 0], [1, 1, 1, 0, 0, 0], [1, 1, 1, 1, 1, 0]]) >>> new_a # I want to get to array([[1, 1, 1, 0, 0, 0], [0, 1, 1, 1, 0, 0], [1, 1, 1, 1, 1, 0]]) ``` I know that I can create a set and loop over the array, but I am looking for an efficient pure numpy solution. I believe that there is a way to set data type to void and then I could just use numpy.unique, but I couldn't figure out how to make it work.
Is there a way to delete that last value, discarding the alpha channel and get all images to a nice (200, 200, 3) shape?", "response":"Just slice the array to get the first three entries of the last dimension: ``` image_without_alpha = image[:,:,:3] ```", "best_answers_score":0.6676, "library_name":"numpy", "question_url":"https:\/\/stackoverflow.com\/questions\/35902302\/discarding-alpha-channel-from-images-stored-as-numpy-arrays", "best_answers_votes":128, "question_length":401, "response_length":118 }, { "question":"Translate every element in numpy array according to key I am trying to translate every element of a numpy.array according to a given key: For example: ``` a = np.array([[1,2,3], [3,2,4]]) my_dict = {1:23, 2:34, 3:36, 4:45} ``` I want to get: ``` array([[ 23., 34., 36.], [ 36., 34., 45.]]) ``` I can see how to do it with a loop: ``` def loop_translate(a, my_dict): new_a = np.empty(a.shape) for i,row in enumerate(a): new_a[i,:] = map(my_dict.get, row) return new_a ``` Is there a more efficient and\/or pure numpy way? 
Edit: I timed it, and np.vectorize method proposed by DSM is considerably faster for larger arrays: ``` In [13]: def loop_translate(a, my_dict): ....: new_a = np.empty(a.shape) ....: for i,row in enumerate(a): ....: new_a[i,:] = map(my_dict.get, row) ....: return new_a ....: In [14]: def vec_translate(a, my_dict): ....: return np.vectorize(my_dict.__getitem__)(a) ....: In [15]: a = np.random.randint(1,5, (4,5)) In [16]: a Out[16]: array([[2, 4, 3, 1, 1], [2, 4, 3, 2, 4], [4, 2, 1, 3, 1], [2, 4, 3, 4, 1]]) In [17]: %timeit loop_translate(a, my_dict) 10000 loops, best of 3: 77.9 us per loop In [18]: %timeit vec_translate(a, my_dict) 10000 loops, best of 3: 70.5 us per loop In [19]: a = np.random.randint(1, 5, (500,500)) In [20]: %timeit loop_translate(a, my_dict) 1 loops, best of 3: 298 ms per loop In [21]: %timeit vec_translate(a, my_dict) 10 loops, best of 3: 37.6 ms per loop In [22]: %timeit loop_translate(a, my_dict) ```", "response":"I don't know about efficient, but you could use np.vectorize on the .get method of dictionaries: ``` >>> a = np.array([[1,2,3], [3,2,4]]) >>> my_dict = {1:23, 2:34, 3:36, 4:45} >>> np.vectorize(my_dict.get)(a) array([[23, 34, 36], [36, 34, 45]]) ```", "best_answers_score":0.665, "library_name":"numpy", "question_url":"https:\/\/stackoverflow.com\/questions\/16992713\/translate-every-element-in-numpy-array-according-to-key", "best_answers_votes":165, "question_length":1456, "response_length":249 }, { "question":"Coalesce values from 2 columns into a single column in a pandas dataframe I'm looking for a method that behaves similarly to coalesce in T-SQL. I have 2 columns (column A and B) that are sparsely populated in a pandas dataframe. I'd like to create a new column using the following rules: If the value in column A is not null, use that value for the new column C If the value in column A is null, use the value in column B for the new column C Like I mentioned, this can be accomplished in MS SQL Server via the coalesce function. 
I haven't found a good pythonic method for this; does one exist?", "response":"use combine_first(): ``` In [16]: df = pd.DataFrame(np.random.randint(0, 10, size=(10, 2)), columns=list('ab')) In [17]: df.loc[::2, 'a'] = np.nan In [18]: df Out[18]: a b 0 NaN 0 1 5.0 5 2 NaN 8 3 2.0 8 4 NaN 3 5 9.0 4 6 NaN 7 7 2.0 0 8 NaN 6 9 2.0 5 In [19]: df['c'] = df.a.combine_first(df.b) In [20]: df Out[20]: a b c 0 NaN 0 0.0 1 5.0 5 5.0 2 NaN 8 8.0 3 2.0 8 2.0 4 NaN 3 3.0 5 9.0 4 9.0 6 NaN 7 7.0 7 2.0 0 2.0 8 NaN 6 6.0 9 2.0 5 2.0 ```", "best_answers_score":0.6648, "library_name":"numpy", "question_url":"https:\/\/stackoverflow.com\/questions\/38152389\/coalesce-values-from-2-columns-into-a-single-column-in-a-pandas-dataframe", "best_answers_votes":220, "question_length":594, "response_length":446 }, { "question":"Numpy float64 vs Python float I'm battling some floating point problems in Pandas read_csv function. In my investigation, I found this: ``` In [15]: a = 5.9975 In [16]: a Out[16]: 5.9975 In [17]: np.float64(a) Out[17]: 5.9974999999999996 ``` Why is builtin float of Python and the np.float64 type from Python giving different results? I thought they were both C++ doubles?", "response":"``` >>> numpy.float64(5.9975).hex() '0x1.7fd70a3d70a3dp+2' >>> (5.9975).hex() '0x1.7fd70a3d70a3dp+2' ``` They are the same number. What differs is the textual representation obtained via by their __repr__ method; the native Python type outputs the minimal digits needed to uniquely distinguish values, while NumPy code before version 1.14.0, released in 2018 didn't try to minimise the number of digits output.", "best_answers_score":0.6647, "library_name":"numpy", "question_url":"https:\/\/stackoverflow.com\/questions\/27098529\/numpy-float64-vs-python-float", "best_answers_votes":67, "question_length":372, "response_length":410 }, { "question":"How do I print the full NumPy array, without truncation? When I print a numpy array, I get a truncated representation, but I want the full array. 
``` >>> numpy.arange(10000) array([ 0, 1, 2, ..., 9997, 9998, 9999]) >>> numpy.arange(10000).reshape(250,40) array([[ 0, 1, 2, ..., 37, 38, 39], [ 40, 41, 42, ..., 77, 78, 79], [ 80, 81, 82, ..., 117, 118, 119], ..., [9880, 9881, 9882, ..., 9917, 9918, 9919], [9920, 9921, 9922, ..., 9957, 9958, 9959], [9960, 9961, 9962, ..., 9997, 9998, 9999]]) ```", "response":"Use numpy.set_printoptions: ``` import sys import numpy numpy.set_printoptions(threshold=sys.maxsize) ```", "best_answers_score":0.6641, "library_name":"numpy", "question_url":"https:\/\/stackoverflow.com\/questions\/1987694\/how-do-i-print-the-full-numpy-array-without-truncation", "best_answers_votes":1002, "question_length":496, "response_length":105 }, { "question":"What's the difference between nan, NaN and NAN In numpy there are nan, NaN and NAN. What's the sense of having all three, do they differ or any of these can be used interchangeably?", "response":"``` >>> numpy.nan is numpy.NaN is numpy.NAN True ``` It's just convenient. They're exactly the same.", "best_answers_score":0.6634, "library_name":"numpy", "question_url":"https:\/\/stackoverflow.com\/questions\/17825707\/whats-the-difference-between-nan-nan-and-nan", "best_answers_votes":54, "question_length":181, "response_length":100 }, { "question":"How to square or raise to a power (elementwise) a 2D numpy array? I need to square a 2D numpy array (elementwise) and I have tried the following code: ``` import numpy as np a = np.arange(4).reshape(2, 2) print a^2, '\\n' print a*a ``` that yields: ``` [[2 3] [0 1]] [[0 1] [4 9]] ``` Clearly, the notation a*a gives me the result I want and not a^2. I would like to know if another notation exists to raise a numpy array to the power of 2 or N? Instead of a*a*a*..*a.", "response":"The fastest way is to do a*a or a**2 or np.square(a) whereas np.power(a, 2) showed to be considerably slower. 
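As an aside, when the squared array is consumed in place anyway, the temporary allocated by a*a can be avoided with the out parameter — a sketch, not part of the original answer:

```python
import numpy as np

a = np.arange(4, dtype=float).reshape(2, 2)
# Write the elementwise product back into `a` itself; no temporary
# array is allocated, unlike plain `a * a`.
np.multiply(a, a, out=a)
print(a)  # [[0. 1.] [4. 9.]]
```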
np.power() allows you to use different exponents for each element if instead of 2 you pass another array of exponents. From the comments of @GarethRees I just learned that this function will give you different results than a**2 or a*a, which become important in cases where you have small tolerances. I've timed some examples using NumPy 1.9.0 MKL 64 bit, and the results are shown below: ``` In [29]: a = np.random.random((1000, 1000)) In [30]: timeit a*a 100 loops, best of 3: 2.78 ms per loop In [31]: timeit a**2 100 loops, best of 3: 2.77 ms per loop In [32]: timeit np.power(a, 2) 10 loops, best of 3: 71.3 ms per loop ```", "best_answers_score":0.663, "library_name":"numpy", "question_url":"https:\/\/stackoverflow.com\/questions\/25870923\/how-to-square-or-raise-to-a-power-elementwise-a-2d-numpy-array", "best_answers_votes":88, "question_length":467, "response_length":738 }, { "question":"Is there a numpy function that allows you to specify start, step, and number? We're all familiar with np.linspace, which creates an array given a start, stop, and num of elements: ``` In [1]: import numpy as np In [2]: np.linspace(0, 10, 9) Out[2]: array([ 0. , 1.25, 2.5 , 3.75, 5. , 6.25, 7.5 , 8.75, 10. ]) ``` Likewise, who could ever forget np.arange, which creates an array given a start, stop, and step: ``` In [4]: np.arange(0, 10, 1.25) Out[4]: array([ 0. , 1.25, 2.5 , 3.75, 5. , 6.25, 7.5 , 8.75]) ``` But is there a function that allows you to specify a start, step, and num of elements, while omitting the stop? There should be.", "response":"The shortest and most elegant way (from my perspective) is: ``` import numpy as np start = 0 step = 1.25 num = 9 result = start + np.arange(0, num) * step ``` returns ``` [ 0. 1.25 2.5 3.75 5. 6.25 7.5 8.75 10. 
] ```", "best_answers_score":0.6613, "library_name":"numpy", "question_url":"https:\/\/stackoverflow.com\/questions\/31820107\/is-there-a-numpy-function-that-allows-you-to-specify-start-step-and-number", "best_answers_votes":60, "question_length":641, "response_length":216 }, { "question":"What's the simplest way to extend a numpy array in 2 dimensions? I have a 2d array that looks like this: ``` xx xx ``` What's the most efficient way to add an extra row and column: ``` xxy xxy yyy ``` For bonus points, I'd like to also be able to knock out single rows and columns, so for example in the matrix below I'd like to be able to knock out all of the a's leaving only the x's - specifically I'm trying to delete the nth row and the nth column at the same time - and I want to be able to do this as quickly as possible: ``` xxaxx xxaxx aaaaa xxaxx xxaxx ```", "response":"The shortest in terms of lines of code I can think of is, for the first question: ``` >>> import numpy as np >>> p = np.array([[1,2],[3,4]]) >>> p = np.append(p, [[5,6]], 0) >>> p = np.append(p, [[7],[8],[9]],1) >>> p array([[1, 2, 7], [3, 4, 8], [5, 6, 9]]) ``` And for the second question: ``` p = np.array(range(20)) >>> p.shape = (4,5) >>> p array([[ 0, 1, 2, 3, 4], [ 5, 6, 7, 8, 9], [10, 11, 12, 13, 14], [15, 16, 17, 18, 19]]) >>> n = 2 >>> p = np.append(p[:n],p[n+1:],0) >>> p = np.append(p[...,:n],p[...,n+1:],1) >>> p array([[ 0, 1, 3, 4], [ 5, 6, 8, 9], [15, 16, 18, 19]]) ```", "best_answers_score":0.6608, "library_name":"numpy", "question_url":"https:\/\/stackoverflow.com\/questions\/877479\/whats-the-simplest-way-to-extend-a-numpy-array-in-2-dimensions", "best_answers_votes":73, "question_length":566, "response_length":589 }, { "question":"Python: find position of element in array I have a CSV containing weather data like max and min temperatures, precipitation, longitude and latitude of the weather stations etc. Each category of data is stored in a single column.
I want to find the location of the maximum and minimum temperatures. Finding the max or min is easy: numpy.min(my_temperatures_column) How can I find the position of where the min or max is located, so I can find the latitude and longitude? Here is my attempt: ``` def coldest_location(data): coldest_temp= numpy.min(mean_temp) for i in mean_temp: if mean_temp[i] == -24.6: print i ``` Error: List indices must be int I saved each of the columns of my CSV into variables, so they are all individual lists. ``` lat = [row[0] for row in weather_data] # latitude long = [row[1] for row in weather_data] # longitude mean_temp = [row[2] for row in weather_data] # mean temperature ``` I have resolved the problem as per the suggestion list.index(x) ``` mean_temp.index(coldest_temp) coldest_location=[long[5],lat[5]] ``` Unsure if asking a second question within a question is proper, but what if there are two locations with the same minimum temperature? How could I find both and their indices?", "response":"Have you thought about using Python list's .index(value) method? It return the index in the list of where the first instance of the value passed in is found.", "best_answers_score":0.6603, "library_name":"numpy", "question_url":"https:\/\/stackoverflow.com\/questions\/27260811\/python-find-position-of-element-in-array", "best_answers_votes":123, "question_length":1220, "response_length":157 }, { "question":"Python Pandas - Changing some column types to categories I have fed the following CSV file into iPython Notebook: ``` public = pd.read_csv(\"categories.csv\") public ``` I've also imported pandas as pd, numpy as np and matplotlib.pyplot as plt. 
The following data types are present (the below is a summary - there are about 100 columns) ``` In [36]: public.dtypes Out[37]: parks object playgrounds object sports object roading object resident int64 children int64 ``` I want to change 'parks', 'playgrounds', 'sports' and 'roading' to categories (they have likert scale responses in them - each column has different types of likert responses though (e.g. one has \"strongly agree\", \"agree\" etc., another has \"very important\", \"important\" etc.), leaving the remainder as int64. I was able to create a separate dataframe - public1 - and change one of the columns to a category type using the following code: ``` public1 = {'parks': public.parks} public1 = public1['parks'].astype('category') ``` However, when I tried to change a number at once using this code, I was unsuccessful: ``` public1 = {'parks': public.parks, 'playgrounds': public.parks} public1 = public1['parks', 'playgrounds'].astype('category') ``` Notwithstanding this, I don't want to create a separate dataframe with just the categories columns. I would like them changed in the original dataframe. I tried numerous ways to achieve this, then tried the code here: Change column type in pandas. ``` public[['parks', 'playgrounds', 'sports', 'roading']] = public[['parks', 'playgrounds', 'sports', 'roading']].astype('category') ``` and got the following error: ``` NotImplementedError: > 1 ndim Categorical are not supported at this time ``` Is there a way to change 'parks', 'playgrounds', 'sports', 'roading' to categories (so the likert scale responses can then be analysed), leaving 'resident' and 'children' (and the 94 other columns that are string, int + floats) untouched? 
I am using Python 2.7.", "response":"Sometimes, you just have to use a for-loop: ``` for col in ['parks', 'playgrounds', 'sports', 'roading']: public[col] = public[col].astype('category') ```", "best_answers_score":0.6602, "library_name":"numpy", "question_url":"https:\/\/stackoverflow.com\/questions\/28910851\/python-pandas-changing-some-column-types-to-categories", "best_answers_votes":162, "question_length":1965, "response_length":154 }, { "question":"Unpivot Pandas Data I currently have a DataFrame laid out as: ``` Jan Feb Mar Apr ... 2001 1 12 12 19 2002 9 ... 2003 ... ``` and I would like to \"unpivot\" the data to look like: ``` Date Value Jan 2001 1 Feb 2001 1 Mar 2001 12 ... Jan 2002 9 ``` What is the best way to accomplish this using pandas\/NumPy?", "response":"You just have to do df.unstack() and that will create a MultiIndexed Series with month as a first level and the year as the second level index. If you want them to be columns then just call reset_index() after that. ```py >>> df Jan Feb 2001 3 4 2002 2 7 >>> df.unstack() Jan 2001 3 2002 2 Feb 2001 4 2002 7 >>> df = df.unstack().reset_index(name='value') >>> df level_0 level_1 value 0 Jan 2001 3 1 Jan 2002 2 2 Feb 2001 4 3 Feb 2002 7 >>> df.rename(columns={'level_0': 'month', 'level_1': 'year'}, inplace=True) >>> df month year value 0 Jan 2001 3 1 Jan 2002 2 2 Feb 2001 4 3 Feb 2002 7 ```", "best_answers_score":0.6597, "library_name":"numpy", "question_url":"https:\/\/stackoverflow.com\/questions\/18259067\/unpivot-pandas-data", "best_answers_votes":66, "question_length":306, "response_length":593 }, { "question":"Python List of np arrays to array I'm trying to turn a list of 2d numpy arrays into a 2d numpy array. For example, ``` dat_list = [] for i in range(10): dat_list.append(np.zeros([5, 10])) ``` What I would like to get out of this list is an array that is (50, 10). However, when I try the following, I get a (10,5,10) array. 
``` output = np.array(dat_list) ``` Thoughts?", "response":"you want to stack them: ``` np.vstack(dat_list) ```", "best_answers_score":0.6595, "library_name":"numpy", "question_url":"https:\/\/stackoverflow.com\/questions\/7200878\/python-list-of-np-arrays-to-array", "best_answers_votes":95, "question_length":369, "response_length":51 }, { "question":"Numpy minimum in (row, column) format How can I know the (row, column) index of the minimum of a numpy array\/matrix? For example, if A = array([[1, 2], [3, 0]]), I want to get (1, 1) Thanks!", "response":"Use unravel_index: ``` numpy.unravel_index(A.argmin(), A.shape) ```", "best_answers_score":0.6594, "library_name":"numpy", "question_url":"https:\/\/stackoverflow.com\/questions\/3230067\/numpy-minimum-in-row-column-format", "best_answers_votes":157, "question_length":190, "response_length":67 }, { "question":"Replace all elements of NumPy array that are greater than some value I have a 2D NumPy array. How do I replace all values in it greater than a threshold T = 255 with a value x = 255? A slow for-loop based method would be: ``` # arr = arr.copy() # Optionally, do not modify original arr. for i in range(arr.shape[0]): for j in range(arr.shape[1]): if arr[i, j] > 255: arr[i, j] = x ```", "response":"I think both the fastest and most concise way to do this is to use NumPy's built-in Fancy indexing. If you have an ndarray named arr, you can replace all elements >255 with a value x as follows: ``` arr[arr > 255] = x ``` I ran this on my machine with a 500 x 500 random matrix, replacing all values >0.5 with 5, and it took an average of 7.59ms. 
``` In [1]: import numpy as np In [2]: A = np.random.rand(500, 500) In [3]: timeit A[A > 0.5] = 5 100 loops, best of 3: 7.59 ms per loop ```", "best_answers_score":0.659, "library_name":"numpy", "question_url":"https:\/\/stackoverflow.com\/questions\/19666626\/replace-all-elements-of-numpy-array-that-are-greater-than-some-value", "best_answers_votes":477, "question_length":384, "response_length":487 }, { "question":"Parallelizing a Numpy vector operation Let's use, for example, numpy.sin() The following code will return the value of the sine for each value of the array a: ``` import numpy a = numpy.arange( 1000000 ) result = numpy.sin( a ) ``` But my machine has 32 cores, so I'd like to make use of them. (The overhead might not be worthwhile for something like numpy.sin() but the function I actually want to use is quite a bit more complicated, and I will be working with a huge amount of data.) Is this the best (read: smartest or fastest) method: ``` from multiprocessing import Pool if __name__ == '__main__': pool = Pool() result = pool.map( numpy.sin, a ) ``` or is there a better way to do this?", "response":"There is a better way: numexpr. Slightly reworded from their main page: It's a multi-threaded VM written in C that analyzes expressions, rewrites them more efficiently, and compiles them on the fly into code that gets near optimal parallel performance for both memory and CPU-bound operations. For example, on my 4-core machine, evaluating a sine is just slightly less than 4 times faster than numpy. ``` In [1]: import numpy as np In [2]: import numexpr as ne In [3]: a = np.arange(1000000) In [4]: timeit ne.evaluate('sin(a)') 100 loops, best of 3: 15.6 ms per loop In [5]: timeit np.sin(a) 10 loops, best of 3: 54 ms per loop ``` Documentation, including supported functions here.
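If numexpr is not an option, the Pool approach from the question usually works better with chunks than with per-element mapping; here is a sequential sketch of the chunking logic (the chunk count of 32 is illustrative, and the comprehension would become pool.map(np.sin, chunks) to actually parallelize):

```python
import numpy as np

a = np.arange(1_000_000, dtype=float)

# Split into a handful of contiguous chunks; with multiprocessing, the
# list comprehension below would be replaced by pool.map(np.sin, chunks).
chunks = np.array_split(a, 32)
result = np.concatenate([np.sin(c) for c in chunks])
```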
You'll have to check or give us more information to see if your more complicated function can be evaluated by numexpr.", "best_answers_score":0.6587, "library_name":"numpy", "question_url":"https:\/\/stackoverflow.com\/questions\/11442191\/parallelizing-a-numpy-vector-operation", "best_answers_votes":74, "question_length":692, "response_length":803 }, { "question":"Convert pandas series into numpy array [duplicate] This question already has answers here: How do I convert a Pandas series or index to a NumPy array? [duplicate] (8 answers) Closed 6 years ago. I am new to pandas and python. My input data is like ``` category text 1 hello iam fine. how are you 1 iam good. how are you doing. inputData= pd.read_csv(Input', sep='\\t', names=['category','text']) X = inputData[\"text\"] Y = inputData[\"category\"] ``` here Y is the panda series object, which i want to convert into numpy array. so i tried .as_matrix ``` YArray= Y.as_matrix(columns=None) print YArray ``` But i got the output as [1,1] (which is wrong since i have only one column category and two rows). I want the result as 2x1 matrix.", "response":"To get numpy array, you need ``` Y.values ```", "best_answers_score":0.6587, "library_name":"numpy", "question_url":"https:\/\/stackoverflow.com\/questions\/44238796\/convert-pandas-series-into-numpy-array", "best_answers_votes":90, "question_length":732, "response_length":45 }, { "question":"float64 with pandas to_csv I'm reading a CSV with float numbers like this: ``` Bob,0.085 Alice,0.005 ``` And import into a dataframe, and write this dataframe to a new place ``` df = pd.read_csv(orig) df.to_csv(pandasfile) ``` Now this pandasfile has: ``` Bob,0.085000000000000006 Alice,0.0050000000000000001 ``` What happen? maybe I have to cast to a different type like float32 or something? Im using pandas 0.9.0 and numpy 1.6.2.", "response":"As mentioned in the comments, it is a general floating point problem. 
However, you can use the float_format keyword of to_csv to hide it: ``` df.to_csv('pandasfile.csv', float_format='%.3f') ``` or, if you don't want 0.0001 to be rounded to zero: ``` df.to_csv('pandasfile.csv', float_format='%g') ``` will give you: ``` Bob,0.085 Alice,0.005 ``` in your output file. For an explanation of %g, see Format Specification Mini-Language.", "best_answers_score":0.6569, "library_name":"numpy", "question_url":"https:\/\/stackoverflow.com\/questions\/12877189\/float64-with-pandas-to-csv", "best_answers_votes":258, "question_length":432, "response_length":433 }, { "question":"Installing Numpy on 64bit Windows 7 with Python 2.7.3 [closed] It looks like the only 64 bit windows installer for Numpy is for Numpy version 1.3.0, which only works with Python 2.6: http:\/\/sourceforge.net\/projects\/numpy\/files\/NumPy\/ It strikes me as strange that I would have to roll back to Python 2.6 to use Numpy on Windows, which makes me think I'm missing something.
Am I?", "response":"Try the (unofficial) binaries in this site: http:\/\/www.lfd.uci.edu\/~gohlke\/pythonlibs\/#numpy You can get the newest numpy x64 with or without Intel MKL libs for Python 2.7 or Python 3.", "best_answers_score":0.6564, "library_name":"numpy", "question_url":"https:\/\/stackoverflow.com\/questions\/11200137\/installing-numpy-on-64bit-windows-7-with-python-2-7-3", "best_answers_votes":162, "question_length":829, "response_length":184 }, { "question":"Plotting with a transparent marker but non-transparent edge I'm trying to make a plot in matplotlib with transparent markers which have a fixed color edge . However, I can't seem to achieve a marker with transparent fill. I have a minimum working example here: ``` import numpy as np import matplotlib.pyplot as plt x = np.arange(10) y1 = 2*x + 1 y2 = 3*x - 5 plt.plot(x,y1, 'o-', lw=6, ms=14) plt.plot(x,y2, 'o', ms=14, markerfacecolor=None, alpha=0.5, markeredgecolor='red', markeredgewidth=5) plt.show() ``` I tried two techniques I found online to achieve this: 1) Setting alpha parameter. However, this makes the marker edge transparent too, which is not the desired effect. 2) Setting markerfacecolor=None, although this has no effect on my plot Is there a solution to this please?", "response":"This is tricky in Matplotlib... you have to use a string \"None\" instead of the value None, then you can just do: ``` plt.plot(x,y2, 'o', ms=14, markerfacecolor=\"None\", markeredgecolor='red', markeredgewidth=5) ```", "best_answers_score":0.6564, "library_name":"numpy", "question_url":"https:\/\/stackoverflow.com\/questions\/23596575\/plotting-with-a-transparent-marker-but-non-transparent-edge", "best_answers_votes":103, "question_length":787, "response_length":213 }, { "question":"Which kind of interpolation best for resizing image? I have a numpy array that I wish to resize using opencv. Its values range from 0 to 255. If I opt to use cv2.INTER_CUBIC, I may get values outside this range. 
This is undesirable, since the resized array is supposed to still represent an image. One solution is to clip the results to [0, 255]. Another is to use a different interpolation method. It is my understanding that using INTER_AREA is valid for down-sampling an image, but works similarly to nearest neighbor for upsampling it, rendering it less than optimal for my purpose. Should I use INTER_CUBIC (and clip), INTER_AREA, or INTER_LINEAR? An example of values outside the range using INTER_CUBIC: ``` a = np.array( [ 0, 10, 20, 0, 5, 2, 255, 0, 255 ] ).reshape( ( 3, 3 ) ) [[ 0 10 20] [ 0 5 2] [255 0 255]] b = cv2.resize( a.astype('float'), ( 4, 4 ), interpolation = cv2.INTER_CUBIC ) [[ 0. 5.42489886 15.43670964 21.29199219] [ -28.01513672 -2.46422291 1.62949324 -19.30908203] [ 91.88964844 25.07939219 24.75106835 91.19140625] [ 273.30322266 68.20603609 68.13853455 273.15966797]] ``` Edit: As berak pointed out, converting the type to float (from int64) allows for values outside the original range. The cv2.resize() function does not work with the default 'int64' type. However, converting to 'uint8' will automatically saturate the values to [0..255]. Also, as pointed out by SaulloCastro, another related answer demonstrated scipy's interpolation, and that there the default method is the cubic interpolation (with saturation).", "response":"If you are enlarging the image, you should prefer to use INTER_LINEAR or INTER_CUBIC interpolation. If you are shrinking the image, you should prefer to use INTER_AREA interpolation. Cubic interpolation is computationally more complex, and hence slower than linear interpolation.
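For the clipping route the question mentions, a minimal sketch that saturates the cubic overshoot back into [0, 255] (the sample values are taken from the question's INTER_CUBIC output):

```python
import numpy as np

# A few of the out-of-range values produced by cubic interpolation
b = np.array([[-28.01513672, 91.88964844], [273.30322266, 68.20603609]])
# Saturate back into the valid image range
clipped = np.clip(b, 0, 255)
print(clipped)
```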
However, the quality of the resulting image will be higher.", "best_answers_score":0.6557, "library_name":"numpy", "question_url":"https:\/\/stackoverflow.com\/questions\/23853632\/which-kind-of-interpolation-best-for-resizing-image", "best_answers_votes":88, "question_length":1547, "response_length":339 }, { "question":"Is the order of a Python dictionary guaranteed over iterations? I'm currently implementing a complex microbial food-web in Python using SciPy.integrate.ode. I need the ability to easily add species and reactions to the system, so I have to code up something quite general. My scheme looks something like this: ``` class Reaction(object): def __init__(self): #stuff common to all reactions def __getReactionRate(self, **kwargs): raise NotImplementedError ... Reaction subclasses that ... implement specific types of reactions class Species(object): def __init__(self, reactionsDict): self.reactionsDict = reactionsDict #reactionsDict looks like {'ReactionName':reactionObject, ...} #stuff common to all species def sumOverAllReactionsForThisSpecies(self, **kwargs): #loop over all the reactions and return the #cumulative change in the concentrations of all solutes ...Species subclasses where for each species ... are defined and passed to the superclass constructor class FermentationChamber(object): def __init__(self, speciesList, timeToSolve, *args): #do initialization def step(self): #loop over each species, which in turn loops #over each reaction inside it and return a #cumulative dictionary of total change for each #solute in the whole system if __name__==__main__: f = FermentationChamber(...) o = ode(...) 
#initialize ode solver while o.successful() and o.t>> print(p) 3.0 ``` SciPy has scipy.stats.scoreatpercentile(), in addition to many other statistical goodies.", "best_answers_score":0.6543, "library_name":"numpy", "question_url":"https:\/\/stackoverflow.com\/questions\/2374640\/how-do-i-calculate-percentiles-with-python-numpy", "best_answers_votes":402, "question_length":217, "response_length":258 }, { "question":"Python insert numpy array into sqlite3 database I'm trying to store a numpy array of about 1000 floats in a sqlite3 database but I keep getting the error \"InterfaceError: Error binding parameter 1 - probably unsupported type\". I was under the impression a BLOB data type could be anything but it definitely doesn't work with a numpy array. Here's what I tried: ``` import sqlite3 as sql import numpy as np con = sql.connect('test.bd',isolation_level=None) cur = con.cursor() cur.execute(\"CREATE TABLE foobar (id INTEGER PRIMARY KEY, array BLOB)\") cur.execute(\"INSERT INTO foobar VALUES (?,?)\", (None,np.arange(0,500,0.5))) con.commit() ``` Is there another module I can use to get the numpy array into the table? Or can I convert the numpy array into another form in Python (like a list or string I can split) that sqlite will accept? Performance isn't a priority. I just want it to work! 
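For the record, a lighter-weight route for getting an array into a BLOB column that avoids registering adapters is to store the raw bytes yourself — a sketch; note that dtype and shape are not stored in the bytes and must be tracked separately (np.save to a BytesIO keeps both):

```python
import numpy as np

a = np.arange(0, 500, 0.5)
# Raw bytes are a valid BLOB payload for sqlite3.Binary; dtype and
# shape are NOT encoded, so they must be kept alongside the blob.
blob = a.tobytes()
restored = np.frombuffer(blob, dtype=a.dtype)
```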
Thanks!", "response":"You could register a new array data type with sqlite3: ``` import sqlite3 import numpy as np import io def adapt_array(arr): \"\"\" http:\/\/stackoverflow.com\/a\/31312102\/190597 (SoulNibbler) \"\"\" out = io.BytesIO() np.save(out, arr) out.seek(0) return sqlite3.Binary(out.read()) def convert_array(text): out = io.BytesIO(text) out.seek(0) return np.load(out) # Converts np.array to TEXT when inserting sqlite3.register_adapter(np.ndarray, adapt_array) # Converts TEXT to np.array when selecting sqlite3.register_converter(\"array\", convert_array) x = np.arange(12).reshape(2,6) con = sqlite3.connect(\":memory:\", detect_types=sqlite3.PARSE_DECLTYPES) cur = con.cursor() cur.execute(\"create table test (arr array)\") ``` With this setup, you can simply insert the NumPy array with no change in syntax: ``` cur.execute(\"insert into test (arr) values (?)\", (x, )) ``` And retrieve the array directly from sqlite as a NumPy array: ``` cur.execute(\"select arr from test\") data = cur.fetchone()[0] print(data) # [[ 0 1 2 3 4 5] # [ 6 7 8 9 10 11]] print(type(data)) # ```", "best_answers_score":0.6535, "library_name":"numpy", "question_url":"https:\/\/stackoverflow.com\/questions\/18621513\/python-insert-numpy-array-into-sqlite3-database", "best_answers_votes":67, "question_length":896, "response_length":1057 }, { "question":"How can I get descriptive statistics of a NumPy array? I use the following code to create a numpy-ndarray. The file has 9 columns. I explicitly type each column: ``` dataset = np.genfromtxt(\"data.csv\", delimiter=\",\",dtype=('|S1', float, float,float,float,float,float,float,int)) ``` Now I would like to get some descriptive statistics for each column (min, max, stdev, mean, median, etc.). Shouldn't there be an easy way to do this? 
I tried this: ``` from scipy import stats stats.describe(dataset) ``` but this returns an error: TypeError: cannot perform reduce with flexible type How can I get descriptive statistics of the created NumPy array?", "response":"``` import pandas as pd import numpy as np df_describe = pd.DataFrame(dataset) df_describe.describe() ``` please note that dataset is your np.array to describe. ``` import pandas as pd import numpy as np df_describe = pd.DataFrame('your np.array') df_describe.describe() ```", "best_answers_score":0.6533, "library_name":"numpy", "question_url":"https:\/\/stackoverflow.com\/questions\/38583738\/how-can-i-get-descriptive-statistics-of-a-numpy-array", "best_answers_votes":71, "question_length":646, "response_length":274 }, { "question":"matplotlib - extracting data from contour lines I would like to get data from a single contour of evenly spaced 2D data (an image-like data). Based on the example found in a similar question: How can I get the (x,y) values of the line that is ploted by a contour plot (matplotlib)? ``` >>> import matplotlib.pyplot as plt >>> x = [1,2,3,4] >>> y = [1,2,3,4] >>> m = [[15,14,13,12],[14,12,10,8],[13,10,7,4],[12,8,4,0]] >>> cs = plt.contour(x,y,m, [9.5]) >>> cs.collections[0].get_paths() ``` The result of this call into cs.collections[0].get_paths() is: ``` [Path([[ 4. 1.625 ] [ 3.25 2. ] [ 3. 2.16666667] [ 2.16666667 3. ] [ 2. 3.25 ] [ 1.625 4. ]], None)] ``` Based on the plots, this result makes sense and appears to be collection of (y,x) pairs for the contour line. Other than manually looping over this return value, extracting the coordinates and assembling arrays for the line, are there better ways to get data back from a matplotlib.path object? Are there pitfalls to be aware of when extracting data from a matplotlib.path? Alternatively, are there alternatives within matplotlib or better yet numpy\/scipy to do a similar thing? 
Ideal thing would be to get a high resolution vector of (x,y) pairs describing the line, which could be used for further analysis, as in general my datasets are not a small or simple as the example above.", "response":"For a given path, you can get the points like this: ``` p = cs.collections[0].get_paths()[0] v = p.vertices x = v[:,0] y = v[:,1] ```", "best_answers_score":0.6528, "library_name":"numpy", "question_url":"https:\/\/stackoverflow.com\/questions\/5666056\/matplotlib-extracting-data-from-contour-lines", "best_answers_votes":65, "question_length":1346, "response_length":133 }, { "question":"How can I retrieve the current seed of NumPy's random number generator? The following imports NumPy and sets the seed. ``` import numpy as np np.random.seed(42) ``` However, I'm not interested in setting the seed but more in reading it. random.get_state() does not seem to contain the seed. The documentation doesn't show an obvious answer. How do I retrieve the current seed used by numpy.random, assuming I did not set it manually? I want to use the current seed to carry over for the next iteration of a process.", "response":"The short answer is that you simply can't (at least not in general). The Mersenne Twister RNG used by numpy has 219937-1 possible internal states, whereas a single 64 bit integer has only 264 possible values. It's therefore impossible to map every RNG state to a unique integer seed. You can get and set the internal state of the RNG directly using np.random.get_state and np.random.set_state. The output of get_state is a tuple whose second element is a (624,) array of 32 bit integers. This array has more than enough bits to represent every possible internal state of the RNG (2624 * 32 > 219937-1). The tuple returned by get_state can be used much like a seed in order to create reproducible sequences of random numbers. 
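If only forward reproducibility is needed, an alternative to saving the full state is to draw and record an explicit seed up front — a sketch using the newer Generator API (NumPy >= 1.17):

```python
import numpy as np

# Draw a fresh seed from OS entropy and record it; the "current seed"
# is then known by construction.
seed = np.random.SeedSequence().entropy
rng = np.random.default_rng(seed)
x = rng.integers(0, 100, 5)

# Rebuilding the generator from the recorded seed replays the stream.
rng2 = np.random.default_rng(seed)
```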
For example: ``` import numpy as np # randomly initialize the RNG from some platform-dependent source of entropy np.random.seed(None) # get the initial state of the RNG st0 = np.random.get_state() # draw some random numbers print(np.random.randint(0, 100, 10)) # [ 8 76 76 33 77 26 3 1 68 21] # set the state back to what it was originally np.random.set_state(st0) # draw again print(np.random.randint(0, 100, 10)) # [ 8 76 76 33 77 26 3 1 68 21] ```", "best_answers_score":0.6519, "library_name":"numpy", "question_url":"https:\/\/stackoverflow.com\/questions\/32172054\/how-can-i-retrieve-the-current-seed-of-numpys-random-number-generator", "best_answers_votes":116, "question_length":515, "response_length":1175 }, { "question":"numpy get index where value is true ``` >>> ex=np.arange(30) >>> e=np.reshape(ex,[3,10]) >>> e array([[ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9], [10, 11, 12, 13, 14, 15, 16, 17, 18, 19], [20, 21, 22, 23, 24, 25, 26, 27, 28, 29]]) >>> e>15 array([[False, False, False, False, False, False, False, False, False, False], [False, False, False, False, False, False, True, True, True, True], [ True, True, True, True, True, True, True, True, True, True]], dtype=bool) ``` I need to find the rows that contain True, i.e. the rows of e whose values are more than 15. I could iterate using a for loop, however, I would like to know if there is a way numpy could do this more efficiently?", "response":"You can use the nonzero function. It returns the nonzero indices of the given input. Easy Way ``` >>> (e > 15).nonzero() (array([1, 1, 1, 1, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2]), array([6, 7, 8, 9, 0, 1, 2, 3, 4, 5, 6, 7, 8, 9])) ``` To see the indices more cleanly, use the transpose method: ``` >>> numpy.transpose((e>15).nonzero()) [[1 6] [1 7] [1 8] [1 9] [2 0] ...
``` Not Bad Way ``` >>> numpy.nonzero(e > 15) (array([1, 1, 1, 1, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2]), array([6, 7, 8, 9, 0, 1, 2, 3, 4, 5, 6, 7, 8, 9])) ``` or the clean way: ``` >>> numpy.transpose(numpy.nonzero(e > 15)) [[1 6] [1 7] [1 8] [1 9] [2 0] ... ```", "best_answers_score":0.6518, "library_name":"numpy", "question_url":"https:\/\/stackoverflow.com\/questions\/16094563\/numpy-get-index-where-value-is-true", "best_answers_votes":128, "question_length":657, "response_length":616 }, { "question":"Find the min\/max excluding zeros in a numpy array (or a tuple) in python I have an array. The valid values are not zero (either positive or negetive). I want to find the minimum and maximum within the array which should not take zeros into account. For example if the numbers are only negative. Zeros will be problematic.", "response":"How about: ``` import numpy as np minval = np.min(a[np.nonzero(a)]) maxval = np.max(a[np.nonzero(a)]) ``` where a is your array.", "best_answers_score":0.6516, "library_name":"numpy", "question_url":"https:\/\/stackoverflow.com\/questions\/7164397\/find-the-min-max-excluding-zeros-in-a-numpy-array-or-a-tuple-in-python", "best_answers_votes":124, "question_length":321, "response_length":128 }, { "question":"When should I use hstack\/vstack vs append vs concatenate vs column_stack? Simple question: what is the advantage of each of these methods. It seems that given the right parameters (and ndarray shapes) they all work seemingly equivalently. Do some work in place? Have better performance? 
Which functions should I use when?", "response":"If you have two matrices, you're good to go with just hstack and vstack. If you're stacking a matrix and a vector, hstack becomes tricky to use, so column_stack is a better option. If you're stacking two vectors, you've got three options. And concatenate in its raw form is useful for 3D and above; see my article Numpy Illustrated for details.", "best_answers_score":0.6515, "library_name":"numpy", "question_url":"https:\/\/stackoverflow.com\/questions\/33356442\/when-should-i-use-hstack-vstack-vs-append-vs-concatenate-vs-column-stack", "best_answers_votes":116, "question_length":321, "response_length":344 }, { "question":"Indexing numpy array with another numpy array Suppose I have ``` a = array([[1, 2], [3, 4]]) ``` and ``` b = array([1,1]) ``` I'd like to use b to index a, that is to do a[b] and get 4 instead of [[3, 4], [3, 4]] I can probably do ``` a[tuple(b)] ``` Is there a better way of doing it? Thanks", "response":"According to the NumPy tutorial, the correct way to do it is: ```py a[tuple(b)] ```", "best_answers_score":0.6511, "library_name":"numpy", "question_url":"https:\/\/stackoverflow.com\/questions\/5508352\/indexing-numpy-array-with-another-numpy-array", "best_answers_votes":52, "question_length":292, "response_length":80 }, { "question":"Concat DataFrame Reindexing only valid with uniquely valued Index objects I am trying to concat the following dataframes: df1 ``` price side timestamp timestamp 2016-01-04 00:01:15.631331072 0.7286 2 1451865675631331 2016-01-04 00:01:15.631399936 0.7286 2 1451865675631400 2016-01-04 00:01:15.631860992 0.7286 2 1451865675631861 2016-01-04 00:01:15.631866112 0.7286 2 1451865675631866 ``` and: df2 ``` bid bid_size offer offer_size timestamp 2016-01-04 00:00:31.331441920 0.7284 4000000 0.7285 1000000 2016-01-04 00:00:53.631324928 0.7284 4000000 0.7290 4000000 2016-01-04 00:01:03.131234048 0.7284 5000000 0.7286 4000000 2016-01-04 00:01:12.131444992 0.7285 1000000 0.7286
4000000 2016-01-04 00:01:15.631364096 0.7285 4000000 0.7290 4000000 ``` With ``` data = pd.concat([df1,df2], axis=1) ``` But I get the follwing output: ``` InvalidIndexError Traceback (most recent call last) in () ----> 1 data = pd.concat([df1,df2], axis=1) 2 data = data.fillna(method='pad') 3 data = data.fillna(method='bfill') 4 data['timestamp'] = data.index.values#converting to datetime 5 data['timestamp'] = pd.to_datetime(data['timestamp'])#converting to datetime \/usr\/local\/lib\/python2.7\/site-packages\/pandas\/tools\/merge.pyc in concat(objs, axis, join, join_axes, ignore_index, keys, levels, names, verify_integrity, copy) 810 keys=keys, levels=levels, names=names, 811 verify_integrity=verify_integrity, --> 812 copy=copy) 813 return op.get_result() 814 \/usr\/local\/lib\/python2.7\/site-packages\/pandas\/tools\/merge.pyc in __init__(self, objs, axis, join, join_axes, keys, levels, names, ignore_index, verify_integrity, copy) 947 self.copy = copy 948 --> 949 self.new_axes = self._get_new_axes() 950 951 def get_result(self): \/usr\/local\/lib\/python2.7\/site-packages\/pandas\/tools\/merge.pyc in _get_new_axes(self) 1013 if i == self.axis: 1014 continue -> 1015 new_axes[i] = self._get_comb_axis(i) 1016 else: 1017 if len(self.join_axes) != ndim - 1: \/usr\/local\/lib\/python2.7\/site-packages\/pandas\/tools\/merge.pyc in _get_comb_axis(self, i) 1039 raise TypeError(\"Cannot concatenate list of %s\" % types) 1040 -> 1041 return _get_combined_index(all_indexes, intersect=self.intersect) 1042 1043 def _get_concat_axis(self): \/usr\/local\/lib\/python2.7\/site-packages\/pandas\/core\/index.pyc in _get_combined_index(indexes, intersect) 6120 index = index.intersection(other) 6121 return index -> 6122 union = _union_indexes(indexes) 6123 return _ensure_index(union) 6124 \/usr\/local\/lib\/python2.7\/site-packages\/pandas\/core\/index.pyc in _union_indexes(indexes) 6149 6150 if hasattr(result, 'union_many'): -> 6151 return result.union_many(indexes[1:]) 6152 else: 
6153 for other in indexes[1:]: \/usr\/local\/lib\/python2.7\/site-packages\/pandas\/tseries\/index.pyc in union_many(self, others) 959 else: 960 tz = this.tz --> 961 this = Index.union(this, other) 962 if isinstance(this, DatetimeIndex): 963 this.tz = tz \/usr\/local\/lib\/python2.7\/site-packages\/pandas\/core\/index.pyc in union(self, other) 1553 result.extend([x for x in other._values if x not in value_set]) 1554 else: -> 1555 indexer = self.get_indexer(other) 1556 indexer, = (indexer == -1).nonzero() 1557 \/usr\/local\/lib\/python2.7\/site-packages\/pandas\/core\/index.pyc in get_indexer(self, target, method, limit, tolerance) 1890 1891 if not self.is_unique: -> 1892 raise InvalidIndexError('Reindexing only valid with uniquely' 1893 ' valued Index objects') 1894 InvalidIndexError: Reindexing only valid with uniquely valued Index objects ``` I have removed additional columns and removed duplicates and NA where there could be a conflict - but I simply do not know what's wrong.", "response":"Duplicated column names! In my case the problem was because I had duplicated column names.", "best_answers_score":0.6505, "library_name":"numpy", "question_url":"https:\/\/stackoverflow.com\/questions\/35084071\/concat-dataframe-reindexing-only-valid-with-uniquely-valued-index-objects", "best_answers_votes":161, "question_length":3592, "response_length":90 }, { "question":"pandas.read_csv from string or package data I have some csv text data in a package which I want to read using read_csv. I was doing this by ``` from pkgutil import get_data from StringIO import StringIO data = read_csv(StringIO(get_data('package.subpackage', 'path\/to\/data.csv'))) ``` However, StringIO.StringIO disappears in Python 3, and io.StringIO only accepts Unicode. Is there a simple way to do this? 
Edit: the following does not appear to work ``` import pandas as pd import pkgutil from io import StringIO def get_data_file(pkg, path): f = StringIO() contents = unicode(pkgutil.get_data('pymc.examples', 'data\/wells.dat')) f.write(contents) return f wells = get_data_file('pymc.examples', 'data\/wells.dat') data = pd.read_csv(wells, delimiter=' ', index_col='id', dtype={'switch': np.int8}) ``` failing with ``` File \"\/usr\/local\/lib\/python2.7\/dist-packages\/pandas\/io\/parsers.py\", line 401, in parser_f return _read(filepath_or_buffer, kwds) File \"\/usr\/local\/lib\/python2.7\/dist-packages\/pandas\/io\/parsers.py\", line 209, in _read parser = TextFileReader(filepath_or_buffer, **kwds) File \"\/usr\/local\/lib\/python2.7\/dist-packages\/pandas\/io\/parsers.py\", line 509, in __init__ self._make_engine(self.engine) File \"\/usr\/local\/lib\/python2.7\/dist-packages\/pandas\/io\/parsers.py\", line 611, in _make_engine self._engine = CParserWrapper(self.f, **self.options) File \"\/usr\/local\/lib\/python2.7\/dist-packages\/pandas\/io\/parsers.py\", line 893, in __init__ self._reader = _parser.TextReader(src, **kwds) File \"parser.pyx\", line 441, in pandas._parser.TextReader.__cinit__ (pandas\/src\/parser.c:3940) File \"parser.pyx\", line 551, in pandas._parser.TextReader._get_header (pandas\/src\/parser.c:5096) pandas._parser.CParserError: Passed header=0 but only 0 lines in file ```", "response":"The following worked for me in 3.3: ``` >>> import numpy as np, pandas as pd >>> import io, pkgutil >>> wells = pkgutil.get_data('pymc.examples', 'data\/wells.dat') >>> type(wells) >>> df = pd.read_csv(io.BytesIO(wells), encoding='utf8', sep=\" \", index_col=\"id\", dtype={\"switch\": np.int8}) >>> df.head() switch arsenic dist assoc educ id 1 1 2.36 16.826000 0 0 2 1 0.71 47.321999 0 0 3 0 2.07 20.966999 0 10 4 1 1.15 21.486000 0 12 5 1 1.10 40.874001 1 14 [5 rows x 5 columns] ``` N.B. 
I had to manually put wells.dat in that location, so I can't swear I copied it correctly and that there isn't terminal whitespace, because I deleted some. But passing read_csv a BytesIO object and an encoding parameter should work. (Actually, you can probably get away without it, but it's a good habit. io.TextIOWrapper might be another option.)", "best_answers_score":0.65, "library_name":"numpy", "question_url":"https:\/\/stackoverflow.com\/questions\/20696479\/pandas-read-csv-from-string-or-package-data", "best_answers_votes":72, "question_length":1760, "response_length":832 }, { "question":"How do I build a numpy array from a generator? How can I build a numpy array out of a generator object? Let me illustrate the problem: ``` >>> import numpy >>> def gimme(): ... for x in xrange(10): ... yield x ... >>> gimme() >>> list(gimme()) [0, 1, 2, 3, 4, 5, 6, 7, 8, 9] >>> numpy.array(xrange(10)) array([0, 1, 2, 3, 4, 5, 6, 7, 8, 9]) >>> numpy.array(gimme()) array(, dtype=object) >>> numpy.array(list(gimme())) array([0, 1, 2, 3, 4, 5, 6, 7, 8, 9]) ``` In this instance, gimme() is the generator whose output I'd like to turn into an array. However, the array constructor does not iterate over the generator, it simply stores the generator itself. The behaviour I desire is that from numpy.array(list(gimme())), but I don't want to pay the memory overhead of having the intermediate list and the final array in memory at the same time. Is there a more space-efficient way?", "response":"One Google search behind this Stack Overflow result, I found that there is numpy.fromiter(data, dtype, count). The default count=-1 takes all elements from the iterable. It requires a dtype to be set explicitly.
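As an illustration, here is a minimal self-contained sketch (the generator and sizes are made up for demonstration, not taken from the question): ```python import numpy as np # A generator expression; np.fromiter consumes it lazily, # so no intermediate list is built in memory. gen = (x * x for x in range(10)) # dtype is required; passing count lets numpy preallocate the result. arr = np.fromiter(gen, dtype=np.int64, count=10) print(arr) # [ 0 1 4 9 16 25 36 49 64 81] ```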
In my case, this worked: numpy.fromiter(something.generate(from_this_input), float)", "best_answers_score":0.6497, "library_name":"numpy", "question_url":"https:\/\/stackoverflow.com\/questions\/367565\/how-do-i-build-a-numpy-array-from-a-generator", "best_answers_votes":255, "question_length":881, "response_length":289 }, { "question":"numpy unique without sort [duplicate] This question already has answers here: numpy.unique with order preserved (7 answers) Closed 8 years ago. How can I use numpy unique without sorting the result but just in the order they appear in the sequence? Something like this? a = [4,2,1,3,1,2,3,4] np.unique(a) = [4,2,1,3] rather than np.unique(a) = [1,2,3,4] Use naive solution should be fine to write a simple function. But as I need to do this multiple times, are there any fast and neat way to do this?", "response":"You can do this with the return_index parameter: ```py >>> import numpy as np >>> a = [4,2,1,3,1,2,3,4] >>> np.unique(a) array([1, 2, 3, 4]) >>> indexes = np.unique(a, return_index=True)[1] >>> [a[index] for index in sorted(indexes)] [4, 2, 1, 3] ```", "best_answers_score":0.6496, "library_name":"numpy", "question_url":"https:\/\/stackoverflow.com\/questions\/12926898\/numpy-unique-without-sort", "best_answers_votes":76, "question_length":500, "response_length":250 }, { "question":"How can I split a column of tuples in a Pandas dataframe? 
I have a Pandas dataframe (this is only a little piece) ``` >>> d1 y norm test y norm train len(y_train) len(y_test) \\ 0 64.904368 116.151232 1645 549 1 70.852681 112.639876 1645 549 SVR RBF \\ 0 (35.652207342877873, 22.95533537448393) 1 (39.563683797747622, 27.382483096332511) LCV \\ 0 (19.365430594452338, 13.880062435173587) 1 (19.099614489458364, 14.018867136617146) RIDGE CV \\ 0 (4.2907610988480362, 12.416745648065584) 1 (4.18864306788194, 12.980833914392477) RF \\ 0 (9.9484841581029428, 16.46902345373697) 1 (10.139848213735391, 16.282141345406522) GB \\ 0 (0.012816232716538605, 15.950164822266007) 1 (0.012814519804493328, 15.305745202851712) ET DATA 0 (0.00034337162272515505, 16.284800366214057) j2m 1 (0.00024811554516431878, 15.556506191784194) j2m >>> ``` I want to split all the columns that contain tuples. For example, I want to replace the column LCV with the columns LCV-a and LCV-b. How can I do that?", "response":"You can do this by doing pd.DataFrame(col.tolist()) on that column: ``` In [2]: df = pd.DataFrame({'a':[1,2], 'b':[(1,2), (3,4)]}) In [3]: df Out[3]: a b 0 1 (1, 2) 1 2 (3, 4) In [4]: df['b'].tolist() Out[4]: [(1, 2), (3, 4)] In [5]: pd.DataFrame(df['b'].tolist(), index=df.index) Out[5]: 0 1 0 1 2 1 3 4 In [6]: df[['b1', 'b2']] = pd.DataFrame(df['b'].tolist(), index=df.index) In [7]: df Out[7]: a b b1 b2 0 1 (1, 2) 1 2 1 2 (3, 4) 3 4 ``` Note: in an earlier version, this answer recommended to use df['b'].apply(pd.Series) instead of pd.DataFrame(df['b'].tolist(), index=df.index). 
That works as well (because it makes a Series of each tuple, which is then seen as a row of a dataframe), but it is slower \/ uses more memory than the tolist version, as noted by the other answers here (thanks to denfromufa).", "best_answers_score":0.6495, "library_name":"numpy", "question_url":"https:\/\/stackoverflow.com\/questions\/29550414\/how-can-i-split-a-column-of-tuples-in-a-pandas-dataframe", "best_answers_votes":267, "question_length":977, "response_length":811 }, { "question":"Is there a Python equivalent of range(n) for multidimensional ranges? On Python, range(3) will return [0,1,2]. Is there an equivalent for multidimensional ranges? ``` range((3,2)) # [(0,0),(0,1),(1,0),(1,1),(2,0),(2,1)] ``` So, for example, looping though the tiles of a rectangular area on a tile-based game could be written as: ``` for x,y in range((3,2)): ``` Note I'm not asking for an implementation. I would like to know if this is a recognized pattern and if there is a built-in function on Python or it's standard\/common libraries.", "response":"In numpy, it's numpy.ndindex. Also have a look at numpy.ndenumerate. E.g. ``` import numpy as np for x, y in np.ndindex((3,2)): print(x, y) ``` This yields: ``` 0 0 0 1 1 0 1 1 2 0 2 1 ```", "best_answers_score":0.6494, "library_name":"numpy", "question_url":"https:\/\/stackoverflow.com\/questions\/10093293\/is-there-a-python-equivalent-of-rangen-for-multidimensional-ranges", "best_answers_votes":84, "question_length":539, "response_length":188 }, { "question":"how to perform max\/mean pooling on a 2d array using numpy Given a 2D(M x N) matrix, and a 2D Kernel(K x L), how do i return a matrix that is the result of max or mean pooling using the given kernel over the image? I'd like to use numpy if possible. Note: M, N, K, L can be both even or odd and they need not be perfectly divisible by each other, eg: 7x5 matrix and 2x2 kernel. 
eg of max pooling: ``` matrix: array([[ 20, 200, -5, 23], [ -13, 134, 119, 100], [ 120, 32, 49, 25], [-120, 12, 09, 23]]) kernel: 2 x 2 soln: array([[ 200, 119], [ 120, 49]]) ```", "response":"You could use scikit-image block_reduce: ```py import numpy as np import skimage.measure a = np.array([ [ 20, 200, -5, 23], [ -13, 134, 119, 100], [ 120, 32, 49, 25], [-120, 12, 9, 23] ]) skimage.measure.block_reduce(a, (2,2), np.max) ``` Gives: ```none array([[200, 119], [120, 49]]) ```", "best_answers_score":0.6493, "library_name":"numpy", "question_url":"https:\/\/stackoverflow.com\/questions\/42463172\/how-to-perform-max-mean-pooling-on-a-2d-array-using-numpy", "best_answers_votes":110, "question_length":555, "response_length":288 }, { "question":"Numpy meshgrid in 3D Numpy's meshgrid is very useful for converting two vectors to a coordinate grid. What is the easiest way to extend this to three dimensions? So given three vectors x, y, and z, construct 3x3D arrays (instead of 2x2D arrays) which can be used as coordinates.", "response":"Numpy (as of 1.8 I think) now supports higher than 2D generation of position grids with meshgrid. One important addition which really helped me is the ability to choose the indexing order (either xy or ij for Cartesian or matrix indexing respectively), which I verified with the following example: ``` import numpy as np x_ = np.linspace(0., 1., 10) y_ = np.linspace(1., 2., 20) z_ = np.linspace(3., 4., 30) x, y, z = np.meshgrid(x_, y_, z_, indexing='ij') assert np.all(x[:,0,0] == x_) assert np.all(y[0,:,0] == y_) assert np.all(z[0,0,:] == z_) ```", "best_answers_score":0.6492, "library_name":"numpy", "question_url":"https:\/\/stackoverflow.com\/questions\/1827489\/numpy-meshgrid-in-3d", "best_answers_votes":89, "question_length":278, "response_length":549 }, { "question":"numpy array: replace nan values with average of columns I've got a numpy array filled mostly with real numbers, but there are a few NaN values in it as well.
How can I replace the nans with averages of columns where they are?", "response":"No loops required: ``` print(a) [[ 0.93230948 nan 0.47773439 0.76998063] [ 0.94460779 0.87882456 0.79615838 0.56282885] [ 0.94272934 0.48615268 0.06196785 nan] [ 0.64940216 0.74414127 nan nan]] #Obtain mean of columns as you need, nanmean is convenient. col_mean = np.nanmean(a, axis=0) print(col_mean) [ 0.86726219 0.7030395 0.44528687 0.66640474] #Find indices that you need to replace inds = np.where(np.isnan(a)) #Place column means in the indices. Align the arrays using take a[inds] = np.take(col_mean, inds[1]) print(a) [[ 0.93230948 0.7030395 0.47773439 0.76998063] [ 0.94460779 0.87882456 0.79615838 0.56282885] [ 0.94272934 0.48615268 0.06196785 0.66640474] [ 0.64940216 0.74414127 0.44528687 0.66640474]] ```", "best_answers_score":0.6492, "library_name":"numpy", "question_url":"https:\/\/stackoverflow.com\/questions\/18689235\/numpy-array-replace-nan-values-with-average-of-columns", "best_answers_votes":98, "question_length":224, "response_length":719 }, { "question":"Convert 2D float array to 2D int array in NumPy How do I convert a float NumPy array into an int NumPy array?", "response":"Use the astype method. ``` >>> x = np.array([[1.0, 2.3], [1.3, 2.9]]) >>> x array([[ 1. , 2.3], [ 1.3, 2.9]]) >>> x.astype(int) array([[1, 2], [1, 2]]) ```", "best_answers_score":0.6485, "library_name":"numpy", "question_url":"https:\/\/stackoverflow.com\/questions\/10873824\/convert-2d-float-array-to-2d-int-array-in-numpy", "best_answers_votes":477, "question_length":109, "response_length":155 }, { "question":"Problems with pip install numpy - RuntimeError: Broken toolchain: cannot link a simple C program I'm trying to install numpy (and scipy and matplotlib) into a virturalenv. I keep getting these errors though: ``` RuntimeError: Broken toolchain: cannot link a simple C program ---------------------------------------- Cleaning up... 
Command python setup.py egg_info failed with error code 1 ``` I have the command line tools for xcode installed ``` $ which gcc \/usr\/bin\/gcc $ which cc \/usr\/bin\/cc ``` I'm on Mac OSX 10.9 Using a brew installed python Edit Yes, trying to install with pip. The whole traceback is huge (>400 lines) Here is a section of it: ``` C compiler: cc -fno-strict-aliasing -fno-common -dynamic -arch x86_64 -arch i386 -g -Os -pipe -fno-common -fno-strict-aliasing -fwrapv -mno-fused-madd -DENABLE_DTRACE -DMACOSX -DNDEBUG -Wall -Wstrict-prototypes -Wshorten-64-to-32 -DNDEBUG -g -fwrapv -Os -Wall -Wstrict-prototypes -DENABLE_DTRACE -arch x86_64 -arch i386 -pipe compile options: '-Inumpy\/core\/src\/private -Inumpy\/core\/src -Inumpy\/core -Inumpy\/core\/src\/npymath -Inumpy\/core\/src\/multiarray -Inumpy\/core\/src\/umath -Inumpy\/core\/src\/npysort -Inumpy\/core\/include -I\/System\/Library\/Frameworks\/Python.framework\/Versions\/2.7\/include\/python2.7 -c' cc: _configtest.c clang: error: unknown argument: '-mno-fused-madd' [-Wunused-command-line-argument-hard-error-in-future] clang: note: this will be a hard error (cannot be downgraded to a warning) in the future clang: error: unknown argument: '-mno-fused-madd' [-Wunused-command-line-argument-hard-error-in-future] clang: note: this will be a hard error (cannot be downgraded to a warning) in the future failure. 
removing: _configtest.c _configtest.o Traceback (most recent call last): File \"\", line 17, in File \"\/Users\/bdhammel\/Documents\/research_programming\/julia_env\/build\/numpy\/setup.py\", line 192, in setup_package() File \"\/Users\/bdhammel\/Documents\/research_programming\/julia_env\/build\/numpy\/setup.py\", line 185, in setup_package configuration=configuration ) File \"\/Users\/bdhammel\/Documents\/research_programming\/julia_env\/build\/numpy\/numpy\/distutils\/core.py\", line 169, in setup return old_setup(**new_attr) File \"\/System\/Library\/Frameworks\/Python.framework\/Versions\/2.7\/lib\/python2.7\/distutils\/core.py\", line 152, in setup dist.run_commands() File \"\/System\/Library\/Frameworks\/Python.framework\/Versions\/2.7\/lib\/python2.7\/distutils\/dist.py\", line 953, in run_commands self.run_command(cmd) File \"\/System\/Library\/Frameworks\/Python.framework\/Versions\/2.7\/lib\/python2.7\/distutils\/dist.py\", line 972, in run_command cmd_obj.run() File \"\/Users\/bdhammel\/Documents\/research_programming\/julia_env\/build\/numpy\/numpy\/distutils\/command\/egg_info.py\", line 10, in run self.run_command(\"build_src\") File \"\/System\/Library\/Frameworks\/Python.framework\/Versions\/2.7\/lib\/python2.7\/distutils\/cmd.py\", line 326, in run_command self.distribution.run_command(command) File \"\/System\/Library\/Frameworks\/Python.framework\/Versions\/2.7\/lib\/python2.7\/distutils\/dist.py\", line 972, in run_command cmd_obj.run() File \"\/Users\/bdhammel\/Documents\/research_programming\/julia_env\/build\/numpy\/numpy\/distutils\/command\/build_src.py\", line 153, in run self.build_sources() File \"\/Users\/bdhammel\/Documents\/research_programming\/julia_env\/build\/numpy\/numpy\/distutils\/command\/build_src.py\", line 164, in build_sources self.build_library_sources(*libname_info) File \"\/Users\/bdhammel\/Documents\/research_programming\/julia_env\/build\/numpy\/numpy\/distutils\/command\/build_src.py\", line 299, in 
build_library_sources sources = self.generate_sources(sources, (lib_name, build_info)) File \"\/Users\/bdhammel\/Documents\/research_programming\/julia_env\/build\/numpy\/numpy\/distutils\/command\/build_src.py\", line 386, in generate_sources source = func(extension, build_dir) File \"numpy\/core\/setup.py\", line 674, in get_mathlib_info raise RuntimeError(\"Broken toolchain: cannot link a simple C program\") RuntimeError: Broken toolchain: cannot link a simple C program ```", "response":"For Docker (Alpine) and Python 3.x this worked for me: ``` RUN apk add --update make automake gcc g++ subversion python3-dev ```", "best_answers_score":0.6471, "library_name":"numpy", "question_url":"https:\/\/stackoverflow.com\/questions\/22388519\/problems-with-pip-install-numpy-runtimeerror-broken-toolchain-cannot-link-a", "best_answers_votes":184, "question_length":3971, "response_length":128 }, { "question":"Set numpy array elements to zero if they are above a specific threshold Say, I have a numpy array consists of 10 elements, for example: a = np.array([2, 23, 15, 7, 9, 11, 17, 19, 5, 3]) Now I want to efficiently set all a values higher than 10 to 0, so I'll get: [2, 0, 0, 7, 9, 0, 0, 0, 5, 3] Because I currently use a for loop, which is very slow: ``` # Zero values below \"threshold value\". def flat_values(sig, tv): \"\"\" :param sig: signal. :param tv: threshold value. 
:return: \"\"\" for i in np.arange(np.size(sig)): if sig[i] < tv: sig[i] = 0 return sig ``` How can I achieve that in the most efficient way, having in mind big arrays of, say, 10^6 elements?", "response":"``` In [7]: a = np.array([2, 23, 15, 7, 9, 11, 17, 19, 5, 3]) In [8]: a[a > 10] = 0 In [9]: a Out[9]: array([2, 0, 0, 7, 9, 0, 0, 0, 5, 3]) ```", "best_answers_score":0.647, "library_name":"numpy", "question_url":"https:\/\/stackoverflow.com\/questions\/28430904\/set-numpy-array-elements-to-zero-if-they-are-above-a-specific-threshold", "best_answers_votes":175, "question_length":659, "response_length":143 }, { "question":"convert nan value to zero I have a 2D numpy array. Some of the values in this array are NaN. I want to perform certain operations using this array. For example consider the array: ``` [[ 0. 43. 67. 0. 38.] [ 100. 86. 96. 100. 94.] [ 76. 79. 83. 89. 56.] [ 88. NaN 67. 89. 81.] [ 94. 79. 67. 89. 69.] [ 88. 79. 58. 72. 63.] [ 76. 79. 71. 67. 56.] [ 71. 71. NaN 56. 100.]] ``` I am trying to take each row, one at a time, sort it in reversed order to get max 3 values from the row and take their average. The code I tried is: ``` # nparr is a 2D numpy array for entry in nparr: sortedentry = sorted(entry, reverse=True) highest_3_values = sortedentry[:3] avg_highest_3 = float(sum(highest_3_values)) \/ 3 ``` This does not work for rows containing NaN. My question is, is there a quick way to convert all NaN values to zero in the 2D numpy array so that I have no problems with sorting and other things I am trying to do.", "response":"Where A is your 2D array: ``` import numpy as np A[np.isnan(A)] = 0 ``` The function isnan produces a bool array indicating where the NaN values are. A boolean array can be used to index an array of the same shape.
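To make that concrete, a small sketch (the array values here are illustrative, not from the question): ```python import numpy as np # A tiny array with two NaN entries (illustrative values). A = np.array([[1.0, np.nan], [np.nan, 4.0]]) mask = np.isnan(A) # bool array: True exactly where A holds NaN A[mask] = 0 # boolean indexing assigns only at the True positions print(A) # [[1. 0.] # [0. 4.]] ```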
Think of it like a mask.", "best_answers_score":0.6464, "library_name":"numpy", "question_url":"https:\/\/stackoverflow.com\/questions\/5124376\/convert-nan-value-to-zero", "best_answers_votes":205, "question_length":918, "response_length":239 }, { "question":"What is the correct way to change image channel ordering between channels first and channels last? I can not for the life of me figure out how to switch the image ordering. images are read in (x,x,3) format, theano requires it to be in (3,x,x) format. I tried changing the order with numpy.array([img[:,:,i] for i in range(3)]) which i guess gets the job done, but it is both ugly and i can't figure out how to reverse it to get the original image back.", "response":"I agree with @Qualia 's comment, np.moveaxis(a, source, destination) is easier to understand. This does the job: ``` x = np.zeros((12, 12, 3)) x.shape #yields: (12, 12, 3) x = np.moveaxis(x, -1, 0) x.shape #yields: (3, 12, 12) ```", "best_answers_score":0.6458, "library_name":"numpy", "question_url":"https:\/\/stackoverflow.com\/questions\/43829711\/what-is-the-correct-way-to-change-image-channel-ordering-between-channels-first", "best_answers_votes":68, "question_length":453, "response_length":230 }, { "question":"Averaging over every n elements of a numpy array I have a numpy array. I want to create a new array which is the average over every consecutive triplet of elements. So the new array will be a third of the size as the original. As an example: ``` np.array([1,2,3,1,2,3,1,2,3]) ``` should return the array: ``` np.array([2,2,2]) ``` Can anyone suggest an efficient way of doing this? 
I'm drawing blanks.", "response":"If your array arr has a length divisible by 3: ``` np.mean(arr.reshape(-1, 3), axis=1) ``` Reshaping to a higher dimensional array and then performing some form of reduce operation on one of the additional dimensions is a staple of numpy programming.", "best_answers_score":0.644, "library_name":"numpy", "question_url":"https:\/\/stackoverflow.com\/questions\/15956309\/averaging-over-every-n-elements-of-a-numpy-array", "best_answers_votes":160, "question_length":401, "response_length":250 }, { "question":"What does np.r_ do (numpy)? The following code is taken from numpy's function_base on GitHub ``` sa = sort(a[i:i+block]) n += np.r_[sa.searchsorted(bins[:-1], 'left'), sa.searchsorted(bins[-1], 'right')] ``` So I know that searchsorted finds the position in the array sa where the elements of bins would have to be inserted in order to keep sa sorted (left gives the index left of where we would insert the value and right the right index). What I don't understand is the whole construction around it, meaning what is ``` np.r_[array,array] ``` What is np.r_?", "response":"It does row-wise merging. This post has some nice examples: ``` >>> V = array([1,2,3,4,5,6]) >>> Y = array([7,8,9,10,11,12]) >>> np.r_[V[0:2],Y[0],V[3],Y[1:3],V[4:],Y[4:]] array([ 1, 2, 7, 4, 8, 9, 5, 6, 11, 12]) ``` Read more about it in the documentation of numpy.", "best_answers_score":0.6439, "library_name":"numpy", "question_url":"https:\/\/stackoverflow.com\/questions\/30597869\/what-does-np-r-do-numpy", "best_answers_votes":66, "question_length":552, "response_length":279 }, { "question":"Is there a performance difference between Numpy and Pandas? I've written a bunch of code on the assumption that I was going to use Numpy arrays. Turns out the data I am getting is loaded through Pandas. I remember now that I loaded it in Pandas because I was having some problems loading it in Numpy. I believe the data was just too large.
Therefore I was wondering, is there a difference in computational ability when using Numpy vs Pandas? If Pandas is more efficient then I would rather rewrite all my code for Pandas but if there is no more efficiency then I'll just use a numpy array...", "response":"There can be a significant performance difference, of an order of magnitude for multiplications and multiple orders of magnitude for indexing a few random values. I was actually wondering about the same thing and came across this interesting comparison: http:\/\/penandpants.com\/2014\/09\/05\/performance-of-pandas-series-vs-numpy-arrays\/", "best_answers_score":0.6424, "library_name":"numpy", "question_url":"https:\/\/stackoverflow.com\/questions\/21567842\/is-there-a-performance-difference-between-numpy-and-pandas", "best_answers_votes":38, "question_length":591, "response_length":333 }, { "question":"Converting between datetime, Timestamp and datetime64 How do I convert a numpy.datetime64 object to a datetime.datetime (or Timestamp)? In the following code, I create a datetime, timestamp and datetime64 objects. ``` import datetime import numpy as np import pandas as pd dt = datetime.datetime(2012, 5, 1) # A strange way to extract a Timestamp object, there's surely a better way? ts = pd.DatetimeIndex([dt])[0] dt64 = np.datetime64(dt) In [7]: dt Out[7]: datetime.datetime(2012, 5, 1, 0, 0) In [8]: ts Out[8]: In [9]: dt64 Out[9]: numpy.datetime64('2012-05-01T01:00:00.000000+0100') ``` Note: it's easy to get the datetime from the Timestamp: ``` In [10]: ts.to_datetime() Out[10]: datetime.datetime(2012, 5, 1, 0, 0) ``` But how do we extract the datetime or Timestamp from a numpy.datetime64 (dt64)? . Update: a somewhat nasty example in my dataset (perhaps the motivating example) seems to be: ``` dt64 = numpy.datetime64('2002-06-28T01:00:00.000000000+0100') ``` which should be datetime.datetime(2002, 6, 28, 1, 0), and not a long (!) 
(1025222400000000000L)...", "response":"You can just use the pd.Timestamp constructor. The following diagram may be useful for this and related questions.", "best_answers_score":0.642, "library_name":"numpy", "question_url":"https:\/\/stackoverflow.com\/questions\/13703720\/converting-between-datetime-timestamp-and-datetime64", "best_answers_votes":350, "question_length":1070, "response_length":114 }, { "question":"How do you do natural logs (e.g. \"ln()\") with numpy in Python? Using numpy, how can I do the following: ``` ln(x) ``` Is it equivalent to: ``` np.log(x) ``` I apologise for such a seemingly trivial question, but my understanding of the difference between log and ln is that ln is logspace e?", "response":"np.log is ln, whereas np.log10 is your standard base 10 log.", "best_answers_score":0.6413, "library_name":"numpy", "question_url":"https:\/\/stackoverflow.com\/questions\/10593100\/how-do-you-do-natural-logs-e-g-ln-with-numpy-in-python", "best_answers_votes":273, "question_length":291, "response_length":60 }, { "question":"Efficient method of calculating density of irregularly spaced points I am attempting to generate map overlay images that would assist in identifying hot-spots, that is areas on the map that have high density of data points. None of the approaches that I've tried are fast enough for my needs. Note: I forgot to mention that the algorithm should work well under both low and high zoom scenarios (or low and high data point density). I looked through numpy, pyplot and scipy libraries, and the closest I could find was numpy.histogram2d. As you can see in the image below, the histogram2d output is rather crude. (Each image includes points overlaying the heatmap for better understanding) My second attempt was to iterate over all the data points, and then calculate the hot-spot value as a function of distance. This produced a better looking image, however it is too slow to use in my application. 
Since it's O(n), it works ok with 100 points, but blows out when I use my actual dataset of 30000 points. My final attempt was to store the data in an KDTree, and use the nearest 5 points to calculate the hot-spot value. This algorithm is O(1), so much faster with large dataset. It's still not fast enough, it takes about 20 seconds to generate a 256x256 bitmap, and I would like this to happen in around 1 second time. Edit The boxsum smoothing solution provided by 6502 works well at all zoom levels and is much faster than my original methods. The gaussian filter solution suggested by Luke and Neil G is the fastest. You can see all four approaches below, using 1000 data points in total, at 3x zoom there are around 60 points visible. Complete code that generates my original 3 attempts, the boxsum smoothing solution provided by 6502 and gaussian filter suggested by Luke (improved to handle edges better and allow zooming in) is here: ``` import matplotlib import numpy as np from matplotlib.mlab import griddata import matplotlib.cm as cm import matplotlib.pyplot as plt import math from scipy.spatial import KDTree import time import scipy.ndimage as ndi def grid_density_kdtree(xl, yl, xi, yi, dfactor): zz = np.empty([len(xi),len(yi)], dtype=np.uint8) zipped = zip(xl, yl) kdtree = KDTree(zipped) for xci in range(0, len(xi)): xc = xi[xci] for yci in range(0, len(yi)): yc = yi[yci] density = 0. retvalset = kdtree.query((xc,yc), k=5) for dist in retvalset[0]: density = density + math.exp(-dfactor * pow(dist, 2)) \/ 5 zz[yci][xci] = min(density, 1.0) * 255 return zz def grid_density(xl, yl, xi, yi): ximin, ximax = min(xi), max(xi) yimin, yimax = min(yi), max(yi) xxi,yyi = np.meshgrid(xi,yi) #zz = np.empty_like(xxi) zz = np.empty([len(xi),len(yi)]) for xci in range(0, len(xi)): xc = xi[xci] for yci in range(0, len(yi)): yc = yi[yci] density = 0. 
for i in range(0,len(xl)): xd = math.fabs(xl[i] - xc) yd = math.fabs(yl[i] - yc) if xd < 1 and yd < 1: dist = math.sqrt(math.pow(xd, 2) + math.pow(yd, 2)) density = density + math.exp(-5.0 * pow(dist, 2)) zz[yci][xci] = density return zz def boxsum(img, w, h, r): st = [0] * (w+1) * (h+1) for x in xrange(w): st[x+1] = st[x] + img[x] for y in xrange(h): st[(y+1)*(w+1)] = st[y*(w+1)] + img[y*w] for x in xrange(w): st[(y+1)*(w+1)+(x+1)] = st[(y+1)*(w+1)+x] + st[y*(w+1)+(x+1)] - st[y*(w+1)+x] + img[y*w+x] for y in xrange(h): y0 = max(0, y - r) y1 = min(h, y + r + 1) for x in xrange(w): x0 = max(0, x - r) x1 = min(w, x + r + 1) img[y*w+x] = st[y0*(w+1)+x0] + st[y1*(w+1)+x1] - st[y1*(w+1)+x0] - st[y0*(w+1)+x1] def grid_density_boxsum(x0, y0, x1, y1, w, h, data): kx = (w - 1) \/ (x1 - x0) ky = (h - 1) \/ (y1 - y0) r = 15 border = r * 2 imgw = (w + 2 * border) imgh = (h + 2 * border) img = [0] * (imgw * imgh) for x, y in data: ix = int((x - x0) * kx) + border iy = int((y - y0) * ky) + border if 0 <= ix < imgw and 0 <= iy < imgh: img[iy * imgw + ix] += 1 for p in xrange(4): boxsum(img, imgw, imgh, r) a = np.array(img).reshape(imgh,imgw) b = a[border:(border+h),border:(border+w)] return b def grid_density_gaussian_filter(x0, y0, x1, y1, w, h, data): kx = (w - 1) \/ (x1 - x0) ky = (h - 1) \/ (y1 - y0) r = 20 border = r imgw = (w + 2 * border) imgh = (h + 2 * border) img = np.zeros((imgh,imgw)) for x, y in data: ix = int((x - x0) * kx) + border iy = int((y - y0) * ky) + border if 0 <= ix < imgw and 0 <= iy < imgh: img[iy][ix] += 1 return ndi.gaussian_filter(img, (r,r)) ## gaussian convolution def generate_graph(): n = 1000 # data points range data_ymin = -2. data_ymax = 2. data_xmin = -2. data_xmax = 2. 
# view area range view_ymin = -.5 view_ymax = .5 view_xmin = -.5 view_xmax = .5 # generate data xl = np.random.uniform(data_xmin, data_xmax, n) yl = np.random.uniform(data_ymin, data_ymax, n) zl = np.random.uniform(0, 1, n) # get visible data points xlvis = [] ylvis = [] for i in range(0,len(xl)): if view_xmin < xl[i] < view_xmax and view_ymin < yl[i] < view_ymax: xlvis.append(xl[i]) ylvis.append(yl[i]) fig = plt.figure() # plot histogram plt1 = fig.add_subplot(221) plt1.set_axis_off() t0 = time.clock() zd, xe, ye = np.histogram2d(yl, xl, bins=10, range=[[view_ymin, view_ymax],[view_xmin, view_xmax]], normed=True) plt.title('numpy.histogram2d - '+str(time.clock()-t0)+\"sec\") plt.imshow(zd, origin='lower', extent=[view_xmin, view_xmax, view_ymin, view_ymax]) plt.scatter(xlvis, ylvis) # plot density calculated with kdtree plt2 = fig.add_subplot(222) plt2.set_axis_off() xi = np.linspace(view_xmin, view_xmax, 256) yi = np.linspace(view_ymin, view_ymax, 256) t0 = time.clock() zd = grid_density_kdtree(xl, yl, xi, yi, 70) plt.title('function of 5 nearest using kdtree\\n'+str(time.clock()-t0)+\"sec\") cmap=cm.jet A = (cmap(zd\/256.0)*255).astype(np.uint8) #A[:,:,3] = zd plt.imshow(A , origin='lower', extent=[view_xmin, view_xmax, view_ymin, view_ymax]) plt.scatter(xlvis, ylvis) # gaussian filter plt3 = fig.add_subplot(223) plt3.set_axis_off() t0 = time.clock() zd = grid_density_gaussian_filter(view_xmin, view_ymin, view_xmax, view_ymax, 256, 256, zip(xl, yl)) plt.title('ndi.gaussian_filter - '+str(time.clock()-t0)+\"sec\") plt.imshow(zd , origin='lower', extent=[view_xmin, view_xmax, view_ymin, view_ymax]) plt.scatter(xlvis, ylvis) # boxsum smoothing plt3 = fig.add_subplot(224) plt3.set_axis_off() t0 = time.clock() zd = grid_density_boxsum(view_xmin, view_ymin, view_xmax, view_ymax, 256, 256, zip(xl, yl)) plt.title('boxsum smoothing - '+str(time.clock()-t0)+\"sec\") plt.imshow(zd, origin='lower', extent=[view_xmin, view_xmax, view_ymin, view_ymax]) plt.scatter(xlvis, 
ylvis) if __name__=='__main__': generate_graph() plt.show() ```", "response":"This approach is along the lines of some previous answers: increment a pixel for each spot, then smooth the image with a gaussian filter. A 256x256 image runs in about 350ms on my 6-year-old laptop. ``` import numpy as np import scipy.ndimage as ndi data = np.random.rand(30000,2) ## create random dataset inds = (data * 255).astype('uint') ## convert to indices img = np.zeros((256,256)) ## blank image for i in xrange(data.shape[0]): ## draw pixels img[inds[i,0], inds[i,1]] += 1 img = ndi.gaussian_filter(img, (10,10)) ```", "best_answers_score":0.6385, "library_name":"numpy", "question_url":"https:\/\/stackoverflow.com\/questions\/6652671\/efficient-method-of-calculating-density-of-irregularly-spaced-points", "best_answers_votes":31, "question_length":6527, "response_length":525 }, { "question":"Numpy - Replace a number with NaN I am looking to replace a number with NaN in numpy and am looking for a function like numpy.nan_to_num, except in reverse. The number is likely to change as different arrays are processed because each can have a uniquely define NoDataValue. I have seen people using dictionaries, but the arrays are large and filled with both positive and negative floats. I suspect that it is not efficient to try to load all of these into anything to create keys. I tried using the following but numpy requires that I use any() or all(). I realize that I need to iterate element wise, but hope that a built-in function can achieve this. ```py def replaceNoData(scanBlock, NDV): for n, i in enumerate(array): if i == NDV: scanBlock[n] = numpy.nan ``` NDV is GDAL's no data value and array is a numpy array. 
Is a masked array the way to go perhaps?", "response":"``` A[A==NDV]=numpy.nan ``` A==NDV will produce a boolean array that can be used as an index for A", "best_answers_score":0.6383, "library_name":"numpy", "question_url":"https:\/\/stackoverflow.com\/questions\/6701714\/numpy-replace-a-number-with-nan", "best_answers_votes":81, "question_length":865, "response_length":98 }, { "question":"How do I remove all zero elements from a NumPy array? I have a rank-1 numpy.array of which I want to make a boxplot. However, I want to exclude all values equal to zero in the array. Currently, I solved this by looping the array and copy the value to a new array if not equal to zero. However, as the array consists of 86 000 000 values and I have to do this multiple times, this takes a lot of patience. Is there a more intelligent way to do this?", "response":"For a NumPy array a, you can use ``` a[a != 0] ``` to extract the values not equal to zero.", "best_answers_score":0.6381, "library_name":"numpy", "question_url":"https:\/\/stackoverflow.com\/questions\/5927180\/how-do-i-remove-all-zero-elements-from-a-numpy-array", "best_answers_votes":142, "question_length":448, "response_length":91 }, { "question":"scipy.stats seed? I am trying to generate scipy.stats.pareto.rvs(b, loc=0, scale=1, size=1) with different seed. In numpy we can seed using numpy.random.seed(seed=233423). Is there any way to seed the random number generated by scipy stats. Note: I am not using numpy pareto because I want to give different values for scale.", "response":"scipy.stats just uses numpy.random to generate its random numbers, so numpy.random.seed() will work here as well. 
E.g., ``` import numpy as np from scipy.stats import pareto b = 0.9 np.random.seed(seed=233423) print pareto.rvs(b, loc=0, scale=1, size=5) np.random.seed(seed=233423) print pareto.rvs(b, loc=0, scale=1, size=5) ``` will print [ 9.7758784 10.78405752 4.19704602 1.19256849 1.02750628] twice.", "best_answers_score":0.6377, "library_name":"numpy", "question_url":"https:\/\/stackoverflow.com\/questions\/16016959\/scipy-stats-seed", "best_answers_votes":60, "question_length":325, "response_length":405 }, { "question":"Partition array into N chunks with Numpy There is this How do you split a list into evenly sized chunks? for splitting an array into chunks. Is there anyway to do this more efficiently for giant arrays using Numpy?", "response":"Try numpy.array_split. From the documentation: ``` >>> x = np.arange(8.0) >>> np.array_split(x, 3) [array([ 0., 1., 2.]), array([ 3., 4., 5.]), array([ 6., 7.])] ``` Identical to numpy.split, but won't raise an exception if the groups aren't equal length. If number of chunks > len(array) you get blank arrays nested inside, to address that - if your split array is saved in a, then you can remove empty arrays by: ``` [x for x in a if x.size > 0] ``` Just save that back in a if you wish.", "best_answers_score":0.6375, "library_name":"numpy", "question_url":"https:\/\/stackoverflow.com\/questions\/14406567\/partition-array-into-n-chunks-with-numpy", "best_answers_votes":162, "question_length":214, "response_length":489 }, { "question":"Convert ndarray from float64 to integer I've got an ndarray in python with a dtype of float64. I'd like to convert the array to be an array of integers. How should I do this? int() won't work, as it says it can't convert it to a scalar. Changing the dtype field itself obviously doesn't work, as the actual bytes haven't changed. I can't seem to find anything on Google or in the documentation - what's the best way to do this?", "response":"Use .astype. 
``` >>> a = numpy.array([1, 2, 3, 4], dtype=numpy.float64) >>> a array([ 1., 2., 3., 4.]) >>> a.astype(numpy.int64) array([1, 2, 3, 4]) ``` See the documentation for more options.", "best_answers_score":0.6373, "library_name":"numpy", "question_url":"https:\/\/stackoverflow.com\/questions\/8855574\/convert-ndarray-from-float64-to-integer", "best_answers_votes":97, "question_length":427, "response_length":192 }, { "question":"Make distutils look for numpy header files in the correct place In my installation, numpy's arrayobject.h is located at \u2026\/site-packages\/numpy\/core\/include\/numpy\/arrayobject.h. I wrote a trivial Cython script that uses numpy: ``` cimport numpy as np def say_hello_to(name): print(\"Hello %s!\" % name) ``` I also have the following distutils setup.py (copied from the Cython user guide): ``` from distutils.core import setup from distutils.extension import Extension from Cython.Distutils import build_ext ext_modules = [Extension(\"hello\", [\"hello.pyx\"])] setup( name = 'Hello world app', cmdclass = {'build_ext': build_ext}, ext_modules = ext_modules ) ``` When I try to build with python setup.py build_ext --inplace, Cython tries to do the following: ``` gcc -fno-strict-aliasing -Wno-long-double -no-cpp-precomp -mno-fused-madd \\ -fno-common -dynamic -DNDEBUG -g -Os -Wall -Wstrict-prototypes -DMACOSX \\ -I\/usr\/include\/ffi -DENABLE_DTRACE -arch i386 -arch ppc -pipe \\ -I\/System\/Library\/Frameworks\/Python.framework\/Versions\/2.5\/include\/python2.5 \\ -c hello.c -o build\/temp.macosx-10.5-i386-2.5\/hello.o ``` Predictably, this fails to find arrayobject.h. 
How can I make distutils use the correct location of numpy include files (without making the user define $CFLAGS)?", "response":"Use numpy.get_include(): ``` from distutils.core import setup from distutils.extension import Extension from Cython.Distutils import build_ext import numpy as np # <---- New line ext_modules = [Extension(\"hello\", [\"hello.pyx\"], include_dirs=[np.get_include()])] # <---- New argument setup( name = 'Hello world app', cmdclass = {'build_ext': build_ext}, ext_modules = ext_modules ) ```", "best_answers_score":0.6366, "library_name":"numpy", "question_url":"https:\/\/stackoverflow.com\/questions\/2379898\/make-distutils-look-for-numpy-header-files-in-the-correct-place", "best_answers_votes":74, "question_length":1267, "response_length":387 }, { "question":"Time difference in seconds from numpy.timedelta64 How to get time difference in seconds from numpy.timedelta64 variable? ``` time1 = '2012-10-05 04:45:18' time2 = '2012-10-05 04:44:13' dt = np.datetime64(time1) - np.datetime64(time2) print dt 0:01:05 ``` I'd like to convert dt to number (int or float) representing time difference in seconds.", "response":"To get number of seconds from numpy.timedelta64() object using numpy 1.7 experimental datetime API: ``` seconds = dt \/ np.timedelta64(1, 's') ```", "best_answers_score":0.6363, "library_name":"numpy", "question_url":"https:\/\/stackoverflow.com\/questions\/14920903\/time-difference-in-seconds-from-numpy-timedelta64", "best_answers_votes":104, "question_length":343, "response_length":145 }, { "question":"What is the difference between size and count in pandas? That is, what is the difference between groupby(\"x\").count and groupby(\"x\").size in pandas?
Does size just exclude NaN?", "response":"size includes NaN values, count does not: ``` In [46]: df = pd.DataFrame({'a':[0,0,1,2,2,2], 'b':[1,2,3,4,np.NaN,4], 'c':np.random.randn(6)}) df Out[46]: a b c 0 0 1 1.067627 1 0 2 0.554691 2 1 3 0.458084 3 2 4 0.426635 4 2 NaN -2.238091 5 2 4 1.256943 In [48]: print(df.groupby(['a'])['b'].count()) print(df.groupby(['a'])['b'].size()) a 0 2 1 1 2 2 Name: b, dtype: int64 a 0 2 1 1 2 3 dtype: int64 ```", "best_answers_score":0.6359, "library_name":"numpy", "question_url":"https:\/\/stackoverflow.com\/questions\/33346591\/what-is-the-difference-between-size-and-count-in-pandas", "best_answers_votes":179, "question_length":169, "response_length":403 }, { "question":"Numpy - add row to array How does one add rows to a numpy array? I have an array A: ``` A = array([[0, 1, 2], [0, 2, 0]]) ``` I wish to add rows to this array from another array X if the first element of each row in X meets a specific condition. Numpy arrays do not have a method 'append' like that of lists, or so it seems. If A and X were lists I would merely do: ``` for i in X: if i[0] < 3: A.append(i) ``` Is there a numpythonic way to do the equivalent? Thanks, S ;-)", "response":"You can do this: ``` newrow = [1, 2, 3] A = numpy.vstack([A, newrow]) ```", "best_answers_score":0.6356, "library_name":"numpy", "question_url":"https:\/\/stackoverflow.com\/questions\/3881453\/numpy-add-row-to-array", "best_answers_votes":262, "question_length":473, "response_length":73 }, { "question":"What is dtype('O'), in pandas? I have a dataframe in pandas and I'm trying to figure out what the types of its values are. I am unsure what the type is of column 'Test'. However, when I run myFrame['Test'].dtype, I get: ``` dtype('O') ``` What does this mean?", "response":"It means: ``` 'O' (Python) objects ``` Source.
The first character specifies the kind of data and the remaining characters specify the number of bytes per item, except for Unicode, where it is interpreted as the number of characters. The item size must correspond to an existing type, or an error will be raised. The supported kinds are: ``` 'b' boolean 'i' (signed) integer 'u' unsigned integer 'f' floating-point 'c' complex-floating point 'O' (Python) objects 'S', 'a' (byte-)string 'U' Unicode 'V' raw data (void) ``` Another answer helps if you need to check types.", "best_answers_score":0.6345, "library_name":"numpy", "question_url":"https:\/\/stackoverflow.com\/questions\/37561991\/what-is-dtypeo-in-pandas", "best_answers_votes":208, "question_length":259, "response_length":636 }, { "question":"Multivariate normal density in Python? Is there any python package that allows the efficient computation of the PDF (probability density function) of a multivariate normal distribution? It doesn't seem to be included in Numpy\/Scipy, and surprisingly a Google search didn't turn up any useful thing.", "response":"The multivariate normal is now available on SciPy 0.14.0.dev-16fc0af: ``` from scipy.stats import multivariate_normal var = multivariate_normal(mean=[0,0], cov=[[1,0],[0,1]]) var.pdf([1,0]) ```", "best_answers_score":0.6345, "library_name":"numpy", "question_url":"https:\/\/stackoverflow.com\/questions\/11615664\/multivariate-normal-density-in-python", "best_answers_votes":101, "question_length":298, "response_length":193 }, { "question":"Add single element to array in numpy I have a numpy array containing: ``` [1, 2, 3] ``` I want to create an array containing: ``` [1, 2, 3, 1] ``` That is, I want to add the first element on to the end of the array.
I have tried the obvious: ``` np.concatenate((a, a[0])) ``` But I get an error saying ValueError: arrays must have same number of dimensions I don't understand this - the arrays are both just 1d arrays.", "response":"append() returns a new array consisting of the old array with the given element appended; it does not modify the original array in place. I think it's more normal to use the proper method for adding an element: ``` a = numpy.append(a, a[0]) ```", "best_answers_score":0.6336, "library_name":"numpy", "question_url":"https:\/\/stackoverflow.com\/questions\/7332841\/add-single-element-to-array-in-numpy", "best_answers_votes":278, "question_length":418, "response_length":189 }, { "question":"What is the difference between np.array() and np.asarray()? What is the difference between NumPy's np.array and np.asarray? When should I use one rather than the other? They seem to generate identical output.", "response":"Since other questions which ask about asanyarray or other array creation routines are being redirected to this one, it's probably worth having a brief summary of what each of them does. The differences are mainly about when to return the input unchanged, as opposed to making a new array as a copy. array offers a wide variety of options (most of the other functions are thin wrappers around it), including flags to determine when to copy. A full explanation would take just as long as the docs (see Array Creation), but briefly, here are some examples: Assume a is an ndarray, and m is a matrix, and they both have a dtype of float32: np.array(a) and np.array(m) will copy both, because that's the default behavior. np.array(a, copy=False) and np.array(m, copy=False) will copy m but not a, because m is not an ndarray. np.array(a, copy=False, subok=True) and np.array(m, copy=False, subok=True) will copy neither, because m is a matrix, which is a subclass of ndarray. np.array(a, dtype=int, copy=False, subok=True) will copy both, because the dtype is not compatible.
Most of the other functions are thin wrappers around array that control when copying happens: asarray: The input will be returned uncopied iff it's a compatible ndarray (copy=False). asanyarray: The input will be returned uncopied iff it's a compatible ndarray or subclass like matrix (copy=False, subok=True). ascontiguousarray: The input will be returned uncopied iff it's a compatible ndarray in contiguous C order (copy=False, order='C'). asfortranarray: The input will be returned uncopied iff it's a compatible ndarray in contiguous Fortran order (copy=False, order='F'). require: The input will be returned uncopied iff it's compatible with the specified requirements string. copy: The input is always copied. fromiter: The input is treated as an iterable (so, e.g., you can construct an array from an iterator's elements, instead of an object array with the iterator); always copied. There are also convenience functions, like asarray_chkfinite (same copying rules as asarray, but raises ValueError if there are any nan or inf values), and constructors for subclasses like matrix or for special cases like record arrays, and of course the actual ndarray constructor (which lets you create an array directly out of strides over a buffer).", "best_answers_score":0.6332, "library_name":"numpy", "question_url":"https:\/\/stackoverflow.com\/questions\/14415741\/what-is-the-difference-between-np-array-and-np-asarray", "best_answers_votes":239, "question_length":208, "response_length":2315 }, { "question":"How to loop through 2D numpy array using x and y coordinates without getting out of bounds error? I have tried the following: ``` import numpy as np a = np.array([[1,2,3],[4,5,6],[7,8,9],[10,11,12]]) print a rows = a.shape[0] cols = a.shape[1] print rows print cols for x in range(0, cols - 1): for y in range(0, rows -1): print a[x,y] ``` This only prints the numbers 1 through 6. 
I have also tried only subtracting 1 from either rows or cols in the range, but that either leads to an out-of-bounds error or not all numbers being printed.", "response":"You get prettier code with: ``` for iy, ix in np.ndindex(a.shape): print(a[iy, ix]) ``` resulting in: ``` 1 2 3 4 5 6 7 8 9 10 11 12 ```", "best_answers_score":0.6328, "library_name":"numpy", "question_url":"https:\/\/stackoverflow.com\/questions\/30499857\/how-to-loop-through-2d-numpy-array-using-x-and-y-coordinates-without-getting-out", "best_answers_votes":120, "question_length":539, "response_length":136 }, { "question":"How does Lru_cache (from functools) Work? Especially when using recursive code there are massive improvements with lru_cache. I do understand that a cache is a space that stores data that has to be served fast and saves the computer from recomputing. How does the Python lru_cache from functools work internally? I'm looking for a specific answer: does it use dictionaries like the rest of Python? Does it only store the return value? I know that Python is heavily built on top of dictionaries, however, I couldn't find a specific answer to this question.", "response":"The functools source code is available here: https:\/\/github.com\/python\/cpython\/blob\/master\/Lib\/functools.py lru_cache uses the _lru_cache_wrapper decorator (python decorator with arguments pattern), which keeps a cache dictionary in its closure, in which it saves the return value of each call (every decorated function will have its own cache dict). The dictionary key is generated with the _make_key function from the arguments.
Added some bold comments below: ``` # ACCORDING TO PASSED maxsize ARGUMENT _lru_cache_wrapper # DEFINES AND RETURNS ONE OF wrapper DECORATORS def _lru_cache_wrapper(user_function, maxsize, typed, _CacheInfo): # Constants shared by all lru cache instances: sentinel = object() # unique object used to signal cache misses cache = {} # RESULTS SAVED HERE cache_get = cache.get # bound method to lookup a key or return None # ... elif maxsize is None: def wrapper(*args, **kwds): # Simple caching without ordering or size limit nonlocal hits, misses key = make_key(args, kwds, typed) # BUILD A KEY FROM ARGUMENTS result = cache_get(key, sentinel) # TRY TO GET A PREVIOUS CALL'S RESULT if result is not sentinel: # ALREADY CALLED WITH PASSED ARGS hits += 1 return result # RETURN SAVED RESULT # WITHOUT ACTUALLY CALLING FUNCTION misses += 1 result = user_function(*args, **kwds) # FUNCTION CALL - if cache[key] empty cache[key] = result # SAVE RESULT return result # ... return wrapper ```", "best_answers_score":0.6323, "library_name":"numpy", "question_url":"https:\/\/stackoverflow.com\/questions\/49883177\/how-does-lru-cache-from-functools-work", "best_answers_votes":73, "question_length":555, "response_length":1412 }, { "question":"Multiple linear regression in Python I can't seem to find any python libraries that do multiple regression. The only things I find only do simple regression. I need to regress my dependent variable (y) against several independent variables (x1, x2, x3, etc.).
For example, with this data: ``` print 'y x1 x2 x3 x4 x5 x6 x7' for t in texts: print \"{:>7.1f}{:>10.2f}{:>9.2f}{:>9.2f}{:>10.2f}{:>7.2f}{:>7.2f}{:>9.2f}\" \/ .format(t.y,t.x1,t.x2,t.x3,t.x4,t.x5,t.x6,t.x7) ``` (output for above:) ``` y x1 x2 x3 x4 x5 x6 x7 -6.0 -4.95 -5.87 -0.76 14.73 4.02 0.20 0.45 -5.0 -4.55 -4.52 -0.71 13.74 4.47 0.16 0.50 -10.0 -10.96 -11.64 -0.98 15.49 4.18 0.19 0.53 -5.0 -1.08 -3.36 0.75 24.72 4.96 0.16 0.60 -8.0 -6.52 -7.45 -0.86 16.59 4.29 0.10 0.48 -3.0 -0.81 -2.36 -0.50 22.44 4.81 0.15 0.53 -6.0 -7.01 -7.33 -0.33 13.93 4.32 0.21 0.50 -8.0 -4.46 -7.65 -0.94 11.40 4.43 0.16 0.49 -8.0 -11.54 -10.03 -1.03 18.18 4.28 0.21 0.55 ``` How would I regress these in python, to get the linear regression formula: Y = a1x1 + a2x2 + a3x3 + a4x4 + a5x5 + a6x6 + +a7x7 + c", "response":"sklearn.linear_model.LinearRegression will do it: ``` from sklearn import linear_model clf = linear_model.LinearRegression() clf.fit([[getattr(t, 'x%d' % i) for i in range(1, 8)] for t in texts], [t.y for t in texts]) ``` Then clf.coef_ will have the regression coefficients. sklearn.linear_model also has similar interfaces to do various kinds of regularizations on the regression.", "best_answers_score":0.6316, "library_name":"numpy", "question_url":"https:\/\/stackoverflow.com\/questions\/11479064\/multiple-linear-regression-in-python", "best_answers_votes":119, "question_length":1050, "response_length":382 }, { "question":"Python Numpy TypeError: ufunc 'isfinite' not supported for the input types Here's my code: ```py def topK(dataMat,sensitivity): meanVals = np.mean(dataMat, axis=0) meanRemoved = dataMat - meanVals covMat = np.cov(meanRemoved, rowvar=0) eigVals,eigVects = np.linalg.eig(np.mat(covMat)) ``` I get the error in the title on the last line above. 
I suspect it has something to do with the datatype, so, here's an image of the variable and datatype from the Variable Explorer in Spyder: I've tried changing np.linalg.eig(np.mat(covMat)) to np.linalg.eig(np.array(np.mat(covMat))) and to np.linalg.eig(np.array(covMat)), nothing works. Any ideas?", "response":"Your array has a dtype of object, but this should be some floating point dtype. Use e.g. ``` covMat = np.array(covMat, dtype=float) ``` to convert the dtype", "best_answers_score":0.6316, "library_name":"numpy", "question_url":"https:\/\/stackoverflow.com\/questions\/40809503\/python-numpy-typeerror-ufunc-isfinite-not-supported-for-the-input-types", "best_answers_votes":81, "question_length":639, "response_length":156 }, { "question":"Interleaving two NumPy arrays efficiently Assume the following arrays are given: ``` a = array([1, 3, 5]) b = array([2, 4, 6]) ``` How would one interleave them efficiently so that one gets a third array like the following? ``` c = array([1, 2, 3, 4, 5, 6]) ``` It can be assumed that length(a) == length(b).", "response":"I like Josh's answer. I just wanted to add a more mundane, usual, and slightly more verbose solution. I don't know which is more efficient. I expect they will have similar performance. ``` import numpy as np a = np.array([1,3,5]) b = np.array([2,4,6]) c = np.empty((a.size + b.size,), dtype=a.dtype) c[0::2] = a c[1::2] = b ```", "best_answers_score":0.6315, "library_name":"numpy", "question_url":"https:\/\/stackoverflow.com\/questions\/5347065\/interleaving-two-numpy-arrays-efficiently", "best_answers_votes":208, "question_length":308, "response_length":327 }, { "question":"How to transform negative elements to zero without a loop? If I have an array like ``` a = np.array([2, 3, -1, -4, 3]) ``` I want to set all the negative elements to zero: [2, 3, 0, 0, 3]. How to do it with numpy without an explicit for? 
I need to use the modified a in a computation, for example ``` c = a * b ``` where b is another array with the same length of the original a Conclusion ``` import numpy as np from time import time a = np.random.uniform(-1, 1, 20000000) t = time(); b = np.where(a>0, a, 0); print (\"1. \", time() - t) a = np.random.uniform(-1, 1, 20000000) t = time(); b = a.clip(min=0); print (\"2. \", time() - t) a = np.random.uniform(-1, 1, 20000000) t = time(); a[a < 0] = 0; print (\"3. \", time() - t) a = np.random.uniform(-1, 1, 20000000) t = time(); a[np.where(a<0)] = 0; print (\"4. \", time() - t) a = np.random.uniform(-1, 1, 20000000) t = time(); b = [max(x, 0) for x in a]; print (\"5. \", time() - t) ``` 1.38629984856 0.516846179962 <- faster a.clip(min=0); 0.615426063538 0.944557905197 51.7364809513", "response":"``` a = a.clip(min=0) ```", "best_answers_score":0.6296, "library_name":"numpy", "question_url":"https:\/\/stackoverflow.com\/questions\/3391843\/how-to-transform-negative-elements-to-zero-without-a-loop", "best_answers_votes":130, "question_length":1029, "response_length":25 }, { "question":"How to get value counts for multiple columns at once in Pandas DataFrame? Given a Pandas DataFrame that has multiple columns with categorical values (0 or 1), is it possible to conveniently get the value_counts for every column at the same time? For example, suppose I generate a DataFrame as follows: ``` import numpy as np import pandas as pd np.random.seed(0) df = pd.DataFrame(np.random.randint(0, 2, (10, 4)), columns=list('abcd')) ``` I can get a DataFrame like this: ``` a b c d 0 0 1 1 0 1 1 1 1 1 2 1 1 1 0 3 0 1 0 0 4 0 0 0 1 5 0 1 1 0 6 0 1 1 1 7 1 0 1 0 8 1 0 1 1 9 0 1 1 0 ``` How do I conveniently get the value counts for every column and obtain the following conveniently? 
``` a b c d 0 6 3 2 6 1 4 7 8 4 ``` My current solution is: ``` pieces = [] for col in df.columns: tmp_series = df[col].value_counts() tmp_series.name = col pieces.append(tmp_series) df_value_counts = pd.concat(pieces, axis=1) ``` But there must be a simpler way, like stacking, pivoting, or groupby?", "response":"Just call apply and pass pd.Series.value_counts: ``` In [212]: df = pd.DataFrame(np.random.randint(0, 2, (10, 4)), columns=list('abcd')) df.apply(pd.Series.value_counts) Out[212]: a b c d 0 4 6 4 3 1 6 4 6 7 ```", "best_answers_score":0.629, "library_name":"numpy", "question_url":"https:\/\/stackoverflow.com\/questions\/32589829\/how-to-get-value-counts-for-multiple-columns-at-once-in-pandas-dataframe", "best_answers_votes":177, "question_length":989, "response_length":211 }, { "question":"How to get the element-wise mean of an ndarray I'd like to calculate element-wise average of numpy ndarray. ``` In [56]: a = np.array([10, 20, 30]) In [57]: b = np.array([30, 20, 20]) In [58]: c = np.array([50, 20, 40]) ``` What I want: ``` [30, 20, 30] ``` Is there any in-built function for this operation, other than vectorized sum and dividing?", "response":"You can just use np.mean directly: ```py >>> np.mean([a, b, c], axis=0) array([ 30., 20., 30.]) ```", "best_answers_score":0.6289, "library_name":"numpy", "question_url":"https:\/\/stackoverflow.com\/questions\/37443565\/how-to-get-the-element-wise-mean-of-an-ndarray", "best_answers_votes":75, "question_length":348, "response_length":99 }, { "question":"How can I make a python numpy arange of datetime I have some input data, with timestamps in the input file in the form of hours from the date time specified in the filename. This is a bit useless, so I need to convert it to python datetime.datetime objects, and then put it in a numpy array. 
I could write a for loop, but I'd like to do something like: ``` numpy.arange(datetime.datetime(2000, 1,1), datetime.datetime(2000, 1,2), datetime.timedelta(hours=1)) ``` which throws a TypeError. Can this be done? I'm stuck with python 2.6 and numpy 1.6.1.", "response":"``` from datetime import datetime, timedelta t = np.arange(datetime(1985,7,1), datetime(2015,7,1), timedelta(days=1)).astype(datetime) ``` The key point here is to use astype(datetime), otherwise the result will be datetime64.", "best_answers_score":0.6285, "library_name":"numpy", "question_url":"https:\/\/stackoverflow.com\/questions\/12137277\/how-can-i-make-a-python-numpy-arange-of-datetime", "best_answers_votes":57, "question_length":549, "response_length":226 }, { "question":"Convert a 1D array to a 2D array in numpy I want to convert a 1-dimensional array into a 2-dimensional array by specifying the number of columns in the 2D array. Something that would work like this: ``` > import numpy as np > A = np.array([1,2,3,4,5,6]) > B = vec2matrix(A,ncol=2) > B array([[1, 2], [3, 4], [5, 6]]) ``` Does numpy have a function that works like my made-up function \"vec2matrix\"? (I understand that you can index a 1D array like a 2D array, but that isn't an option in the code I have - I need to make this conversion.)", "response":"You want to reshape the array. ``` B = np.reshape(A, (-1, 2)) ``` where -1 infers the size of the new dimension from the size of the input array.", "best_answers_score":0.6276, "library_name":"numpy", "question_url":"https:\/\/stackoverflow.com\/questions\/12575421\/convert-a-1d-array-to-a-2d-array-in-numpy", "best_answers_votes":290, "question_length":537, "response_length":145 }, { "question":"how does multiplication differ for NumPy Matrix vs Array classes? The numpy docs recommend using array instead of matrix for working with matrices. 
However, unlike Octave (which I was using until recently), * doesn't perform matrix multiplication; you need to use the function matrixmultiply(). I feel this makes the code very unreadable. Does anybody share my view, and has anyone found a solution?
Here is an example: ``` >>> import pandas as pd >>> df = pd.read_sql_query('select * from my_table', conn) >>> df id date purchase 1 abc1 2016-05-22 1 2 abc2 2016-05-29 0 3 abc3 2016-05-22 2 4 abc4 2016-05-22 0 >>> df.dtypes id object date object purchase object dtype: object ``` Converting the df['date'] to a datetime works: ``` >>> pd.to_datetime(df['date']) 1 2016-05-22 2 2016-05-29 3 2016-05-22 4 2016-05-22 Name: date, dtype: datetime64[ns] ``` But I get an error when trying to convert the df['purchase'] to an integer: ``` >>> df['purchase'].astype(int) .... pandas\/lib.pyx in pandas.lib.astype_intsafe (pandas\/lib.c:16667)() pandas\/src\/util.pxd in util.set_value_at (pandas\/lib.c:67540)() TypeError: long() argument must be a string or a number, not 'java.lang.Long' ``` NOTE: I get a similar error when I tried .astype('float') And when trying to convert to a string, nothing seems to happen. ``` >>> df['id'].apply(str) 1 abc1 2 abc2 3 abc3 4 abc4 Name: id, dtype: object ```", "response":"Documenting the answer that worked for me based on the comment by @piRSquared. I needed to convert to a string first, then an integer. ``` >>> df['purchase'].astype(str).astype(int) ```", "best_answers_score":0.6265, "library_name":"numpy", "question_url":"https:\/\/stackoverflow.com\/questions\/39173813\/pandas-convert-dtype-object-to-int", "best_answers_votes":147, "question_length":1296, "response_length":185 }, { "question":"'DataFrame' object has no attribute 'sort' I face some problem here, in my python package I have install numpy, but I still have this error: 'DataFrame' object has no attribute 'sort' Anyone can give me some idea.. 
This is my code: ``` final.loc[-1] = ['', 'P', 'Actual'] final.index = final.index + 1 # shifting index final = final.sort() final.columns = [final.columns, final.iloc[0]] final = final.iloc[1:].reset_index(drop=True) final.columns.names = (None, None) ```
Or should I use csv.reader() and then apply numpy.core.records.fromrecords()?", "response":"Use numpy.genfromtxt() by setting the delimiter kwarg to a comma: ``` from numpy import genfromtxt my_data = genfromtxt('my_file.csv', delimiter=',') ```", "best_answers_score":0.626, "library_name":"numpy", "question_url":"https:\/\/stackoverflow.com\/questions\/3518778\/how-do-i-read-csv-data-into-a-record-array-in-numpy", "best_answers_votes":884, "question_length":302, "response_length":153 }, { "question":"Binary random array with a specific proportion of ones? What is the efficient(probably vectorized with Matlab terminology) way to generate random number of zeros and ones with a specific proportion? Specially with Numpy? As my case is special for 1\/3, my code is: ``` import numpy as np a=np.mod(np.multiply(np.random.randomintegers(0,2,size)),3) ``` But is there any built-in function that could handle this more effeciently at least for the situation of K\/N where K and N are natural numbers?", "response":"Yet another approach, using np.random.choice: ``` >>> np.random.choice([0, 1], size=(10,), p=[1.\/3, 2.\/3]) array([0, 1, 1, 1, 1, 0, 0, 0, 0, 0]) ```", "best_answers_score":0.6247, "library_name":"numpy", "question_url":"https:\/\/stackoverflow.com\/questions\/19597473\/binary-random-array-with-a-specific-proportion-of-ones", "best_answers_votes":130, "question_length":494, "response_length":148 }, { "question":"How to solve a pair of nonlinear equations using Python? What's the (best) way to solve a pair of non linear equations using Python. 
(NumPy, SciPy or SymPy), e.g.: x + y^2 = 4, e^x + xy = 3. A code snippet which solves the above pair would be great.
How would I do this?", "response":"Your expression works if you add parentheses: ``` >>> y[(1 < x) & (x < 5)] array(['o', 'o', 'a'], dtype='|S1') ```", "best_answers_score":0.6183, "library_name":"numpy", "question_url":"https:\/\/stackoverflow.com\/questions\/3030480\/how-do-i-select-elements-of-an-array-given-condition", "best_answers_votes":273, "question_length":436, "response_length":114 }, { "question":"Numpy argsort - what is it doing? Why is numpy giving this result: ``` x = numpy.array([1.48,1.41,0.0,0.1]) print x.argsort() >[2 3 1 0] ``` when I'd expect it to do this: [3 2 0 1] Clearly my understanding of the function is lacking.", "response":"According to the documentation Returns the indices that would sort an array. 2 is the index of 0.0. 3 is the index of 0.1. 1 is the index of 1.41. 0 is the index of 1.48.", "best_answers_score":0.6165, "library_name":"numpy", "question_url":"https:\/\/stackoverflow.com\/questions\/17901218\/numpy-argsort-what-is-it-doing", "best_answers_votes":173, "question_length":234, "response_length":170 }, { "question":"How to print a Numpy array without brackets? I want to convert a = [1,2,3,4,5] into a_string = \"1 2 3 4 5\". 
The real numpy array is quite big (50000x200) so I assume using for loops is too slow.", "response":"You can use the join method from string: ``` >>> a = [1,2,3,4,5] >>> ' '.join(map(str, a)) \"1 2 3 4 5\" ```", "best_answers_score":0.6164, "library_name":"numpy", "question_url":"https:\/\/stackoverflow.com\/questions\/9360103\/how-to-print-a-numpy-array-without-brackets", "best_answers_votes":70, "question_length":194, "response_length":106 }, { "question":"Suppress Scientific Notation in Numpy When Creating Array From Nested List I have a nested Python list that looks like the following: ``` my_list = [[3.74, 5162, 13683628846.64, 12783387559.86, 1.81], [9.55, 116, 189688622.37, 260332262.0, 1.97], [2.2, 768, 6004865.13, 5759960.98, 1.21], [3.74, 4062, 3263822121.39, 3066869087.9, 1.93], [1.91, 474, 44555062.72, 44555062.72, 0.41], [5.8, 5006, 8254968918.1, 7446788272.74, 3.25], [4.5, 7887, 30078971595.46, 27814989471.31, 2.18], [7.03, 116, 66252511.46, 81109291.0, 1.56], [6.52, 116, 47674230.76, 57686991.0, 1.43], [1.85, 623, 3002631.96, 2899484.08, 0.64], [13.76, 1227, 1737874137.5, 1446511574.32, 4.32], [13.76, 1227, 1737874137.5, 1446511574.32, 4.32]] ``` I then import Numpy, and set print options to (suppress=True). 
When I create an array: ``` my_array = numpy.array(my_list) ``` I can't for the life of me suppress scientific notation: ``` [[ 3.74000000e+00 5.16200000e+03 1.36836288e+10 1.27833876e+10 1.81000000e+00] [ 9.55000000e+00 1.16000000e+02 1.89688622e+08 2.60332262e+08 1.97000000e+00] [ 2.20000000e+00 7.68000000e+02 6.00486513e+06 5.75996098e+06 1.21000000e+00] [ 3.74000000e+00 4.06200000e+03 3.26382212e+09 3.06686909e+09 1.93000000e+00] [ 1.91000000e+00 4.74000000e+02 4.45550627e+07 4.45550627e+07 4.10000000e-01] [ 5.80000000e+00 5.00600000e+03 8.25496892e+09 7.44678827e+09 3.25000000e+00] [ 4.50000000e+00 7.88700000e+03 3.00789716e+10 2.78149895e+10 2.18000000e+00] [ 7.03000000e+00 1.16000000e+02 6.62525115e+07 8.11092910e+07 1.56000000e+00] [ 6.52000000e+00 1.16000000e+02 4.76742308e+07 5.76869910e+07 1.43000000e+00] [ 1.85000000e+00 6.23000000e+02 3.00263196e+06 2.89948408e+06 6.40000000e-01] [ 1.37600000e+01 1.22700000e+03 1.73787414e+09 1.44651157e+09 4.32000000e+00] [ 1.37600000e+01 1.22700000e+03 1.73787414e+09 1.44651157e+09 4.32000000e+00]] ``` If I create a simple numpy array directly: ``` new_array = numpy.array([1.5, 4.65, 7.845]) ``` I have no problem and it prints as follows: ``` [ 1.5 4.65 7.845] ``` Does anyone know what my problem is?", "response":"This is what you need: ``` np.set_printoptions(suppress=True) ``` Here is the documentation which says suppress: bool, optional If True, always print floating point numbers using fixed point notation, in which case numbers equal to zero in the current precision will print as zero. If False, then scientific notation is used when absolute value of the smallest number is 1e3. The default is False. In the original question, the difference between the array created \"directly\" and the original \"big\" array is that the big array contains very large numbers (e.g. 
1.44651157e+09), so NumPy chooses the scientific notation for it, unless it's suppressed.", "best_answers_score":0.6162, "library_name":"numpy", "question_url":"https:\/\/stackoverflow.com\/questions\/9777783\/suppress-scientific-notation-in-numpy-when-creating-array-from-nested-list", "best_answers_votes":417, "question_length":2048, "response_length":651 }, { "question":"\"isnotnan\" functionality in numpy, can this be more pythonic? I need a function that returns non-NaN values from an array. Currently I am doing it this way: ``` >>> a = np.array([np.nan, 1, 2]) >>> a array([ NaN, 1., 2.]) >>> np.invert(np.isnan(a)) array([False, True, True], dtype=bool) >>> a[np.invert(np.isnan(a))] array([ 1., 2.]) ``` Python: 2.6.4 numpy: 1.3.0 Please share if you know a better way, Thank you", "response":"``` a = a[~np.isnan(a)] ```", "best_answers_score":0.6149, "library_name":"numpy", "question_url":"https:\/\/stackoverflow.com\/questions\/2831516\/isnotnan-functionality-in-numpy-can-this-be-more-pythonic", "best_answers_votes":204, "question_length":414, "response_length":27 }, { "question":"Numpy array dimensions How do I get the dimensions of an array? For instance, this is 2x2: ```py a = np.array([[1, 2], [3, 4]]) ```", "response":"Use .shape to obtain a tuple of array dimensions: ``` >>> a.shape (2, 2) ```", "best_answers_score":0.6142, "library_name":"numpy", "question_url":"https:\/\/stackoverflow.com\/questions\/3061761\/numpy-array-dimensions", "best_answers_votes":589, "question_length":131, "response_length":76 }, { "question":"ImportError in importing from sklearn: cannot import name check_build I am getting the following error while trying to import from sklearn: ``` >>> from sklearn import svm Traceback (most recent call last): File \"\", line 1, in from sklearn import svm File \"C:\\Python27\\lib\\site-packages\\sklearn\\__init__.py\", line 16, in from . 
import check_build ImportError: cannot import name check_build ``` I am using python 2.7, scipy-0.12.0b1 superpack, numpy-1.6.0 superpack, scikit-learn-0.11 I have a windows 7 machine I have checked several answers for this issue but none of them gives a way out of this error.", "response":"Worked for me after installing scipy.", "best_answers_score":0.6139, "library_name":"numpy", "question_url":"https:\/\/stackoverflow.com\/questions\/15274696\/importerror-in-importing-from-sklearn-cannot-import-name-check-build", "best_answers_votes":164, "question_length":607, "response_length":37 }, { "question":"Sorting arrays in NumPy by column How do I sort a NumPy array by its nth column? For example, given: ``` a = array([[9, 2, 3], [4, 5, 6], [7, 0, 5]]) ``` I want to sort the rows of a by the second column to obtain: ``` array([[7, 0, 5], [9, 2, 3], [4, 5, 6]]) ```", "response":"To sort by the second column of a: ``` a[a[:, 1].argsort()] ```", "best_answers_score":0.6125, "library_name":"numpy", "question_url":"https:\/\/stackoverflow.com\/questions\/2828059\/sorting-arrays-in-numpy-by-column", "best_answers_votes":1017, "question_length":263, "response_length":63 }, { "question":"Pandas Split Dataframe into two Dataframes at a specific column I have pandas DataFrame which I have composed from concat. One row consists of 96 values, I would like to split the DataFrame from the value 72. So that the first 72 values of a row are stored in Dataframe1, and the next 24 values of a row in Dataframe2. 
I create my DF as follows: ``` temps = DataFrame(myData) datasX = concat( [temps.shift(72), temps.shift(71), temps.shift(70), temps.shift(69), temps.shift(68), temps.shift(67), temps.shift(66), temps.shift(65), temps.shift(64), temps.shift(63), temps.shift(62), temps.shift(61), temps.shift(60), temps.shift(59), temps.shift(58), temps.shift(57), temps.shift(56), temps.shift(55), temps.shift(54), temps.shift(53), temps.shift(52), temps.shift(51), temps.shift(50), temps.shift(49), temps.shift(48), temps.shift(47), temps.shift(46), temps.shift(45), temps.shift(44), temps.shift(43), temps.shift(42), temps.shift(41), temps.shift(40), temps.shift(39), temps.shift(38), temps.shift(37), temps.shift(36), temps.shift(35), temps.shift(34), temps.shift(33), temps.shift(32), temps.shift(31), temps.shift(30), temps.shift(29), temps.shift(28), temps.shift(27), temps.shift(26), temps.shift(25), temps.shift(24), temps.shift(23), temps.shift(22), temps.shift(21), temps.shift(20), temps.shift(19), temps.shift(18), temps.shift(17), temps.shift(16), temps.shift(15), temps.shift(14), temps.shift(13), temps.shift(12), temps.shift(11), temps.shift(10), temps.shift(9), temps.shift(8), temps.shift(7), temps.shift(6), temps.shift(5), temps.shift(4), temps.shift(3), temps.shift(2), temps.shift(1), temps, temps.shift(-1), temps.shift(-2), temps.shift(-3), temps.shift(-4), temps.shift(-5), temps.shift(-6), temps.shift(-7), temps.shift(-8), temps.shift(-9), temps.shift(-10), temps.shift(-11), temps.shift(-12), temps.shift(-13), temps.shift(-14), temps.shift(-15), temps.shift(-16), temps.shift(-17), temps.shift(-18), temps.shift(-19), temps.shift(-20), temps.shift(-21), temps.shift(-22), temps.shift(-23)], axis=1) ``` Question is: How can split them? 
:)", "response":"iloc ``` df1 = datasX.iloc[:, :72] df2 = datasX.iloc[:, 72:] ``` (iloc docs)", "best_answers_score":0.6098, "library_name":"numpy", "question_url":"https:\/\/stackoverflow.com\/questions\/41624241\/pandas-split-dataframe-into-two-dataframes-at-a-specific-column", "best_answers_votes":134, "question_length":2069, "response_length":76 }, { "question":"What does axis = 0 do in Numpy's sum function? I am learning Python, and have encountered numpy.sum. It has an optional parameter axis. This parameter is used to get either column-wise summation or row-wise summation. When axis = 0 we imply to sum it over columns only. For example, ``` a = np.array([[1, 2, 3], [4, 5, 6]]) np.sum(a, axis = 0) ``` This snippet of code produces output: array([5, 7, 9]), fine. But if I do: ``` a = np.array([1, 2, 3]) np.sum(a, axis = 0) ``` I get result: 6, why is that? Shouldn't I get array([1, 2, 3])?", "response":"In case someone needs this visual description:", "best_answers_score":0.6073, "library_name":"numpy", "question_url":"https:\/\/stackoverflow.com\/questions\/40200070\/what-does-axis-0-do-in-numpys-sum-function", "best_answers_votes":172, "question_length":538, "response_length":46 }, { "question":"Renaming column names in Pandas I want to change the column labels of a Pandas DataFrame from ``` ['$a', '$b', '$c', '$d', '$e'] ``` to ``` ['a', 'b', 'c', 'd', 'e'] ```", "response":"Rename Specific Columns Use the df.rename() function and refer the columns to be renamed. 
Not all the columns have to be renamed: ``` df = df.rename(columns={'oldName1': 'newName1', 'oldName2': 'newName2'}) # Or rename the existing DataFrame (rather than creating a copy) df.rename(columns={'oldName1': 'newName1', 'oldName2': 'newName2'}, inplace=True) ``` Minimal Code Example ``` df = pd.DataFrame('x', index=range(3), columns=list('abcde')) df a b c d e 0 x x x x x 1 x x x x x 2 x x x x x ``` The following methods all work and produce the same output: ``` df2 = df.rename({'a': 'X', 'b': 'Y'}, axis=1) df2 = df.rename({'a': 'X', 'b': 'Y'}, axis='columns') df2 = df.rename(columns={'a': 'X', 'b': 'Y'}) df2 X Y c d e 0 x x x x x 1 x x x x x 2 x x x x x ``` Remember to assign the result back, as the modification is not-inplace. Alternatively, specify inplace=True: ``` df.rename({'a': 'X', 'b': 'Y'}, axis=1, inplace=True) df X Y c d e 0 x x x x x 1 x x x x x 2 x x x x x ``` You can specify errors='raise' to raise errors if an invalid column-to-rename is specified. Reassign Column Headers Use df.set_axis() with axis=1. ``` df2 = df.set_axis(['V', 'W', 'X', 'Y', 'Z'], axis=1) df2 V W X Y Z 0 x x x x x 1 x x x x x 2 x x x x x ``` Headers can be assigned directly: ``` df.columns = ['V', 'W', 'X', 'Y', 'Z'] df V W X Y Z 0 x x x x x 1 x x x x x 2 x x x x x ```", "best_answers_score":0.8, "library_name":"pandas", "question_url":"https:\/\/stackoverflow.com\/questions\/11346283\/renaming-column-names-in-pandas", "best_answers_votes":4698, "question_length":169, "response_length":1369 }, { "question":"Delete a column from a Pandas DataFrame To delete a column in a DataFrame, I can successfully use: ```py del df['column_name'] ``` But why can't I use the following? ```py del df.column_name ``` Since it is possible to access the Series via df.column_name, I expected this to work.", "response":"The best way to do this in Pandas is to use drop: ``` df = df.drop('column_name', axis=1) ``` where 1 is the axis number (0 for rows and 1 for columns.) 
Or, the drop() method accepts index\/columns keywords as an alternative to specifying the axis. So we can now just do: ``` df = df.drop(columns=['column_nameA', 'column_nameB']) ``` This was introduced in v0.21.0 (October 27, 2017) To delete the column without having to reassign df you can do: ``` df.drop('column_name', axis=1, inplace=True) ``` Finally, to drop by column number instead of by column label, try this to delete, e.g. the 1st, 2nd and 4th columns: ``` df = df.drop(df.columns[[0, 1, 3]], axis=1) # df.columns is zero-based pd.Index ``` Also working with \"text\" syntax for the columns: ``` df.drop(['column_nameA', 'column_nameB'], axis=1, inplace=True) ```", "best_answers_score":0.8, "library_name":"pandas", "question_url":"https:\/\/stackoverflow.com\/questions\/13411544\/delete-a-column-from-a-pandas-dataframe", "best_answers_votes":3727, "question_length":281, "response_length":825 }, { "question":"How do I get the row count of a Pandas DataFrame? How do I get the number of rows of a pandas dataframe df?", "response":"For a dataframe df, one can use any of the following: len(df.index) df.shape[0] df[df.columns[0]].count() (== number of non-NaN values in first column) Code to reproduce the plot: ```py import numpy as np import pandas as pd import perfplot perfplot.save( \"out.png\", setup=lambda n: pd.DataFrame(np.arange(n * 3).reshape(n, 3)), n_range=[2**k for k in range(25)], kernels=[ lambda df: len(df.index), lambda df: df.shape[0], lambda df: df[df.columns[0]].count(), ], labels=[\"len(df.index)\", \"df.shape[0]\", \"df[df.columns[0]].count()\"], xlabel=\"Number of rows\", ) ```", "best_answers_score":0.8, "library_name":"pandas", "question_url":"https:\/\/stackoverflow.com\/questions\/15943769\/how-do-i-get-the-row-count-of-a-pandas-dataframe", "best_answers_votes":2987, "question_length":107, "response_length":565 }, { "question":"Selecting multiple columns in a Pandas dataframe How do I select columns a and b from df, and save them into a new 
dataframe df1? ```none index a b c 1 2 3 4 2 3 4 5 ``` Unsuccessful attempt: ```py df1 = df['a':'b'] df1 = df.ix[:, 'a':'b'] ```", "response":"The column names (which are strings) cannot be sliced in the manner you tried. Here you have a couple of options. If you know from context which variables you want to slice out, you can just return a view of only those columns by passing a list into the __getitem__ syntax (the []'s). ``` df1 = df[['a', 'b']] ``` Alternatively, if it matters to index them numerically and not by their name (say your code should automatically do this without knowing the names of the first two columns) then you can do this instead: ``` df1 = df.iloc[:, 0:2] # Remember that Python does not slice inclusive of the ending index. ``` Additionally, you should familiarize yourself with the idea of a view into a Pandas object vs. a copy of that object. The first of the above methods will return a new copy in memory of the desired sub-object (the desired slices). Sometimes, however, there are indexing conventions in Pandas that don't do this and instead give you a new variable that just refers to the same chunk of memory as the sub-object or slice in the original object. This will happen with the second way of indexing, so you can modify it with the .copy() method to get a regular copy. When this happens, changing what you think is the sliced object can sometimes alter the original object. Always good to be on the look out for this. ``` df1 = df.iloc[0, 0:2].copy() # To avoid the case where changing df1 also changes df ``` To use iloc, you need to know the column positions (or indices). As the column positions may change, instead of hard-coding indices, you can use iloc along with get_loc function of columns method of dataframe object to obtain column indices. 
``` {df.columns.get_loc(c): c for c in df.columns} ``` Now you can use this dictionary to map column positions to names when accessing columns with iloc.
This function will try to change non-numeric objects (such as strings) into integers or floating-point numbers as appropriate. Basic usage The input to to_numeric() is a Series or a single column of a DataFrame. ``` >>> s = pd.Series([\"8\", 6, \"7.5\", 3, \"0.9\"]) # mixed string and numeric values >>> s 0 8 1 6 2 7.5 3 3 4 0.9 dtype: object >>> pd.to_numeric(s) # convert everything to float values 0 8.0 1 6.0 2 7.5 3 3.0 4 0.9 dtype: float64 ``` As you can see, a new Series is returned. Remember to assign this output to a variable or column name to continue using it: ``` # convert Series my_series = pd.to_numeric(my_series) # convert column \"a\" of a DataFrame df[\"a\"] = pd.to_numeric(df[\"a\"]) ``` You can also use it to convert multiple columns of a DataFrame via the apply() method: ``` # convert all columns of DataFrame df = df.apply(pd.to_numeric) # convert all columns of DataFrame # convert just columns \"a\" and \"b\" df[[\"a\", \"b\"]] = df[[\"a\", \"b\"]].apply(pd.to_numeric) ``` As long as your values can all be converted, that's probably all you need. Error handling But what if some values can't be converted to a numeric type? to_numeric() also takes an errors keyword argument that allows you to force non-numeric values to be NaN, or simply ignore columns containing these values. Here's an example using a Series of strings s which has the object dtype: ``` >>> s = pd.Series(['1', '2', '4.7', 'pandas', '10']) >>> s 0 1 1 2 2 4.7 3 pandas 4 10 dtype: object ``` The default behaviour is to raise if it can't convert a value. In this case, it can't cope with the string 'pandas': ``` >>> pd.to_numeric(s) # or pd.to_numeric(s, errors='raise') ValueError: Unable to parse string ``` Rather than fail, we might want 'pandas' to be considered a missing\/bad numeric value. 
We can coerce invalid values to NaN as follows using the errors keyword argument: ``` >>> pd.to_numeric(s, errors='coerce') 0 1.0 1 2.0 2 4.7 3 NaN 4 10.0 dtype: float64 ``` The third option for errors is just to ignore the operation if an invalid value is encountered: ``` >>> pd.to_numeric(s, errors='ignore') # the original Series is returned untouched ``` This last option is particularly useful for converting your entire DataFrame when you don't know which of your columns can be converted reliably to a numeric type. In that case, just write: ``` df.apply(pd.to_numeric, errors='ignore') ``` The function will be applied to each column of the DataFrame. Columns that can be converted to a numeric type will be converted, while columns that cannot (e.g. they contain non-digit strings or dates) will be left alone. Downcasting By default, conversion with to_numeric() will give you either an int64 or float64 dtype (or whatever integer width is native to your platform). That's usually what you want, but what if you wanted to save some memory and use a more compact dtype, like float32, or int8? to_numeric() gives you the option to downcast to either 'integer', 'signed', 'unsigned', 'float'. Here's an example for a simple series s of integer type: ``` >>> s = pd.Series([1, 2, -7]) >>> s 0 1 1 2 2 -7 dtype: int64 ```
Call the method on the object you want to convert and astype() will try and convert it for you: ``` # convert all DataFrame columns to the int64 dtype df = df.astype(int) # convert column \"a\" to int64 dtype and \"b\" to complex type df = df.astype({\"a\": int, \"b\": complex}) # convert Series to float16 type s = s.astype(np.float16) # convert Series to Python strings s = s.astype(str) # convert Series to categorical type - see docs for more details s = s.astype('category') ``` Notice I said \"try\" - if astype() does not know how to convert a value in the Series or DataFrame, it will raise an error. For example, if you have a NaN or inf value you'll get an error trying to convert it to an integer. As of pandas 0.20.0, this error can be suppressed by passing errors='ignore'. Your original object will be returned untouched. Be careful astype() is powerful, but it will sometimes convert values \"incorrectly\". For example: ``` >>> s = pd.Series([1, 2, -7]) >>> s 0 1 1 2 2 -7 dtype: int64 ``` These are small integers, so how about converting to an unsigned 8-bit type to save memory? ``` >>> s.astype(np.uint8) 0 1 1 2 2 249 dtype: uint8 ``` The conversion worked, but the -7 was wrapped round to become 249 (i.e. 2**8 - 7)! Trying to downcast using pd.to_numeric(s, downcast='unsigned') instead could help prevent this error. 3. infer_objects() Version 0.21.0 of pandas introduced the method infer_objects() for converting columns of a DataFrame that have an object datatype to a more specific type (soft conversions). For example, here's a DataFrame with two columns of object type.
One holds actual integers and the other holds strings representing integers: ``` >>> df = pd.DataFrame({'a': [7, 1, 5], 'b': ['3','2','1']}, dtype='object') >>> df.dtypes a object b object dtype: object ``` Using infer_objects(), you can change the type of column 'a' to int64: ``` >>> df = df.infer_objects() >>> df.dtypes a int64 b object dtype: object ``` Column 'b' has been left alone since its values were strings, not integers. If you wanted to force both columns to an integer type, you could use df.astype(int) instead. 4. convert_dtypes() Version 1.0 and above includes a method convert_dtypes() to convert Series and DataFrame columns to the best possible dtype that supports the pd.NA missing value. Here \"best possible\" means the type most suited to hold the values. For example, if all of the values are integers (or missing values), a pandas integer type is used: an object column of Python integer objects is converted to Int64, and a column of NumPy int32 values will become the pandas dtype Int32. With our object DataFrame df, we get the following result: ``` >>> df.convert_dtypes().dtypes a Int64 b string dtype: object ``` Since column 'a' held integer values, it was converted to the Int64 type (which is capable of holding missing values, unlike int64). Column 'b' contained string objects, so was changed to pandas' string dtype. By default, this method will infer the type from object values in each column. We can change this by passing infer_objects=False: ``` >>> df.convert_dtypes(infer_objects=False).dtypes a object b string dtype: object ``` Now column 'a' remains an object column: pandas knows it can be described as an 'integer' column (internally it ran infer_dtype) but didn't infer exactly what dtype of integer it should have, so did not convert it.
Column 'b' was again converted to 'string' dtype as it was recognised as holding 'string' values.", "best_answers_score":0.8, "library_name":"pandas", "question_url":"https:\/\/stackoverflow.com\/questions\/15891038\/change-column-type-in-pandas", "best_answers_votes":2621, "question_length":716, "response_length":8090 }, { "question":"Use a list of values to select rows from a Pandas dataframe Let\u2019s say I have the following Pandas dataframe: ``` df = DataFrame({'A': [5,6,3,4], 'B': [1,2,3,5]}) df A B 0 5 1 1 6 2 2 3 3 3 4 5 ``` I can subset based on a specific value: ``` x = df[df['A'] == 3] x A B 2 3 3 ``` But how can I subset based on a list of values? - something like this: ``` list_of_values = [3, 6] y = df[df['A'] in list_of_values] ``` To get: ``` A B 1 6 2 2 3 3 ```", "response":"You can use the isin method: ``` In [1]: df = pd.DataFrame({'A': [5,6,3,4], 'B': [1,2,3,5]}) In [2]: df Out[2]: A B 0 5 1 1 6 2 2 3 3 3 4 5 In [3]: df[df['A'].isin([3, 6])] Out[3]: A B 1 6 2 2 3 3 ``` And to get the opposite use ~: ``` In [4]: df[~df['A'].isin([3, 6])] Out[4]: A B 0 5 1 3 4 5 ```", "best_answers_score":0.8, "library_name":"pandas", "question_url":"https:\/\/stackoverflow.com\/questions\/12096252\/use-a-list-of-values-to-select-rows-from-a-pandas-dataframe", "best_answers_votes":2283, "question_length":446, "response_length":297 }, { "question":"How to add a new column to an existing DataFrame I have the following indexed DataFrame with named columns and rows not- continuous numbers: ```none a b c d 2 0.671399 0.101208 -0.181532 0.241273 3 0.446172 -0.243316 0.051767 1.577318 5 0.614758 0.075793 -0.451460 -0.012493 ``` I would like to add a new column, 'e', to the existing data frame and do not want to change anything in the data frame (i.e., the new column always has the same length as the DataFrame). 
```none 0 -0.335485 1 -1.166658 2 -0.385571 dtype: float64 ``` I tried different versions of join, append, merge, but I did not get the result I wanted, only errors at most. How can I add column e to the above example?", "response":"Edit 2017 As indicated in the comments and by @Alexander, currently the best method to add the values of a Series as a new column of a DataFrame could be using assign: ``` df1 = df1.assign(e=pd.Series(np.random.randn(sLength)).values) ``` Edit 2015 Some reported getting the SettingWithCopyWarning with this code. However, the code still runs perfectly with the current pandas version 0.16.1. ``` >>> sLength = len(df1['a']) >>> df1 a b c d 6 -0.269221 -0.026476 0.997517 1.294385 8 0.917438 0.847941 0.034235 -0.448948 >>> df1['e'] = pd.Series(np.random.randn(sLength), index=df1.index) >>> df1 a b c d e 6 -0.269221 -0.026476 0.997517 1.294385 1.757167 8 0.917438 0.847941 0.034235 -0.448948 2.228131 >>> pd.version.short_version '0.16.1' ``` The SettingWithCopyWarning aims to inform of a possibly invalid assignment on a copy of the DataFrame. It doesn't necessarily say you did it wrong (it can trigger false positives) but from 0.13.0 it lets you know there are more adequate methods for the same purpose.
Then, if you get the warning, just follow its advice: Try using .loc[row_index,col_indexer] = value instead ``` >>> df1.loc[:,'f'] = pd.Series(np.random.randn(sLength), index=df1.index) >>> df1 a b c d e f 6 -0.269221 -0.026476 0.997517 1.294385 1.757167 -0.050927 8 0.917438 0.847941 0.034235 -0.448948 2.228131 0.006109 >>> ``` In fact, this is currently the more efficient method, as described in the pandas docs. Original answer: Use the original df1 indexes to create the series: ``` df1['e'] = pd.Series(np.random.randn(sLength), index=df1.index) ```", "best_answers_score":0.8, "library_name":"pandas", "question_url":"https:\/\/stackoverflow.com\/questions\/12555323\/how-to-add-a-new-column-to-an-existing-dataframe", "best_answers_votes":1322, "question_length":684, "response_length":1561 }, { "question":"Pretty-print an entire Pandas Series \/ DataFrame I work with Series and DataFrames on the terminal a lot. The default __repr__ for a Series returns a reduced sample, with some head and tail values, but the rest missing. Is there a builtin way to pretty-print the entire Series \/ DataFrame? Ideally, it would support proper alignment, perhaps borders between columns, and maybe even color-coding for the different columns.", "response":"You can also use the option_context, with one or more options: ``` with pd.option_context('display.max_rows', None, 'display.max_columns', None): # more options can be specified also print(df) ``` This will automatically return the options to their previous values. If you are working on jupyter-notebook, using display(df) instead of print(df) will use jupyter rich display logic (like so).", "best_answers_score":0.8, "library_name":"pandas", "question_url":"https:\/\/stackoverflow.com\/questions\/19124601\/pretty-print-an-entire-pandas-series-dataframe", "best_answers_votes":1588, "question_length":421, "response_length":391 }, { "question":"\"Large data\" workflows using pandas [closed]
I have tried to puzzle out an answer to this question for many months while learning pandas. I use SAS for my day-to-day work and it is great for its out-of-core support. However, SAS is horrible as a piece of software for numerous other reasons. One day I hope to replace my use of SAS with python and pandas, but I currently lack an out-of-core workflow for large datasets. I'm not talking about \"big data\" that requires a distributed network, but rather files too large to fit in memory but small enough to fit on a hard-drive. My first thought is to use HDFStore to hold large datasets on disk and pull only the pieces I need into dataframes for analysis. Others have mentioned MongoDB as an easier-to-use alternative. My question is this: What are some best-practice workflows for accomplishing the following: Loading flat files into a permanent, on-disk database structure Querying that database to retrieve data to feed into a pandas data structure Updating the database after manipulating pieces in pandas Real-world examples would be much appreciated, especially from anyone who uses pandas on \"large data\". Edit -- an example of how I would like this to work: Iteratively import a large flat-file and store it in a permanent, on-disk database structure. These files are typically too large to fit in memory.
In order to use Pandas, I would like to read subsets of this data (usually just a few columns at a time) that can fit in memory. I would create new columns by performing various operations on the selected columns. I would then have to append these new columns into the database structure. I am trying to find a best-practice way of performing these steps. Reading links about pandas and pytables it seems that appending a new column could be a problem. Edit -- Responding to Jeff's questions specifically: I am building consumer credit risk models. The kinds of data include phone, SSN and address characteristics; property values; derogatory information like criminal records, bankruptcies, etc... The datasets I use every day have nearly 1,000 to 2,000 fields on average of mixed data types: continuous, nominal and ordinal variables of both numeric and character data. I rarely append rows, but I do perform many operations that create new columns. Typical operations involve combining several columns using conditional logic into a new, compound column. For example, if var1 > 2 then newvar = 'A' elif var2 = 4 then newvar = 'B'. The result of these operations is a new column for every record in my dataset. Finally, I would like to append these new columns into the on-disk data structure. I would repeat step 2, exploring the data with crosstabs and descriptive statistics trying to find interesting, intuitive relationships to model. A typical project file is usually about 1GB. Files are organized into such a manner where a row consists of a record of consumer data. Each row has the same number of columns for every record. This will always be the case. It's pretty rare that I would subset by rows when creating a new column. However, it's pretty common for me to subset on rows when creating reports or generating descriptive statistics. For example, I might want to create a simple frequency for a specific line of business, say Retail credit cards. 
To do this, I would select only those records where the line of business = retail in addition to whichever columns I want to report on. When creating new columns, however, I would pull all rows of data and only the columns I need for the operations. The modeling process requires that I analyze every column, look for interesting relationships with some outcome variable, and create new compound columns that describe those relationships. The columns that I explore are usually done in small sets. For example, I will focus on a set of say 20 columns just dealing with property values and observe how they relate to defaulting on a loan. Once those are explored and new columns are created, I then move on to another group of columns, say college education, and repeat the process. What I'm doing is creating candidate variables that explain the relationship between my data and some outcome. At the very end of this process, I apply some learning techniques that create an equation out of those compound columns. It is rare that I would ever add rows to the dataset. I will nearly always be creating new columns (variables or features in statistics\/machine learning parlance).", "response":"I routinely use tens of gigabytes of data in just this fashion e.g. I have tables on disk that I read via queries, create data and append back. It's worth reading the docs and late in this thread for several suggestions for how to store your data. Details which will affect how you store your data, like: Give as much detail as you can; and I can help you develop a structure. Size of data, # of rows, columns, types of columns; are you appending rows, or just columns? What will typical operations look like. E.g. do a query on columns to select a bunch of rows and specific columns, then do an operation (in-memory), create new columns, save these. (Giving a toy example could enable us to offer more specific recommendations.) After that processing, then what do you do? Is step 2 ad hoc, or repeatable? 
Input flat files: how many, rough total size in Gb. How are these organized e.g. by records? Does each one contains different fields, or do they have some records per file with all of the fields in each file? Do you ever select subsets of rows (records) based on criteria (e.g. select the rows with field A > 5)? and then do something, or do you just select fields A, B, C with all of the records (and then do something)? Do you 'work on' all of your columns (in groups), or are there a good proportion that you may only use for reports (e.g. you want to keep the data around, but don't need to pull in that column explicity until final results time)? Solution Ensure you have pandas at least 0.10.1 installed. Read iterating files chunk-by-chunk and multiple table queries. Since pytables is optimized to operate on row-wise (which is what you query on), we will create a table for each group of fields. This way it's easy to select a small group of fields (which will work with a big table, but it's more efficient to do it this way... I think I may be able to fix this limitation in the future... this is more intuitive anyhow): (The following is pseudocode.) ``` import numpy as np import pandas as pd # create a store store = pd.HDFStore('mystore.h5') # this is the key to your storage: # this maps your fields to a specific group, and defines # what you want to have as data_columns. # you might want to create a nice class wrapping this # (as you will want to have this map and its inversion) group_map = dict( A = dict(fields = ['field_1','field_2',.....], dc = ['field_1',....,'field_5']), B = dict(fields = ['field_10',...... ], dc = ['field_10']), ..... 
REPORTING_ONLY = dict(fields = ['field_1000','field_1001',...], dc = []), ) group_map_inverted = dict() for g, v in group_map.items(): group_map_inverted.update(dict([ (f,g) for f in v['fields'] ])) ``` Reading in the files and creating the storage (essentially doing what append_to_multiple does): ``` for f in files: # read in the file, additional options may be necessary here # the chunksize is not strictly necessary, you may be able to slurp each # file into memory in which case just eliminate this part of the loop # (you can also change chunksize if necessary) for chunk in pd.read_table(f, chunksize=50000): # we are going to append to each table by group # we are not going to create indexes at this time # but we *ARE* going to create (some) data_columns # figure out the field groupings for g, v in group_map.items(): # create the frame for this group frame = chunk.reindex(columns = v['fields'], copy = False) # append it store.append(g, frame, index=False, data_columns = v['dc']) ``` Now you have all of the tables in the file (actually you could store them in separate files if you wish, you would prob have to add the filename to the group_map, but probably this isn't necessary). This is how you get columns and create new ones: ``` frame = store.select(group_that_I_want) # you can optionally specify: # columns = a list of the columns IN THAT GROUP (if you wanted to # select only say 3 out of the 20 columns in this sub-table) # and a where clause if you want a subset of the rows # do calculations on this frame new_frame = cool_function_on_frame(frame) # to 'add columns', create a new group (you probably want to # limit the columns in this new_group to be only NEW ones # (e.g. 
so you don't overlap from the other tables) # add this info to the group_map store.append(new_group, new_frame.reindex(columns = new_columns_created, copy = False), data_columns = new_columns_created) ``` When you are ready for post_processing: ``` # This may be a bit tricky; and depends what you are actually doing. # I may need to modify this function to be a bit more general: report_data = store.select_as_multiple([groups_1,groups_2,.....], where =['field_1>0', 'field_1000=foo'], selector = group_1) ``` About data_columns, you don't actually need to define ANY data_columns; they allow you to sub-select rows based on the column. E.g. something like: ``` store.select(group, where = ['field_1000=foo', 'field_1001>0']) ``` They may be most interesting to you in the final report generation stage (essentially a data column is segregated from other columns, which might impact efficiency somewhat if you define a lot). You also might want to: create a function which takes a list of fields, looks up the groups in the groups_map, then selects these and concatenates the results so you get the resulting frame (this is essentially what select_as_multiple does). This way the structure would be pretty transparent to you. indexes on certain data columns (makes row-subsetting much faster). enable compression. Let me know when you have questions!", "best_answers_score":0.8, "library_name":"pandas", "question_url":"https:\/\/stackoverflow.com\/questions\/14262433\/large-data-workflows-using-pandas", "best_answers_votes":719, "question_length":5088, "response_length":5527 }, { "question":"How do I expand the output display to see more columns of a Pandas DataFrame? Is there a way to widen the display of output in either interactive or script-execution mode? Specifically, I am using the describe() function on a Pandas DataFrame. When the DataFrame is five columns (labels) wide, I get the descriptive statistics that I want. 
However, if the DataFrame has any more columns, the statistics are suppressed and something like this is returned: ```none >> Index: 8 entries, count to max >> Data columns: >> x1 8 non-null values >> x2 8 non-null values >> x3 8 non-null values >> x4 8 non-null values >> x5 8 non-null values >> x6 8 non-null values >> x7 8 non-null values ``` The \"8\" value is given whether there are 6 or 7 columns. What does the \"8\" refer to? I have already tried dragging the IDLE window larger, as well as increasing the \"Configure IDLE\" width options, to no avail.", "response":"(For Pandas versions before 0.23.4, see at bottom.) Use pandas.set_option(optname, val), or equivalently pd.options.<optname> = val. Like: ``` import pandas as pd pd.set_option('display.max_rows', 500) pd.set_option('display.max_columns', 500) pd.set_option('display.width', 1000) ``` Pandas will try to autodetect the size of your terminal window if you set pd.options.display.width = 0. Here is the help for set_option: ``` set_option(pat,value) - Sets the value of the specified option Available options: display.[chop_threshold, colheader_justify, column_space, date_dayfirst, date_yearfirst, encoding, expand_frame_repr, float_format, height, line_width, max_columns, max_colwidth, max_info_columns, max_info_rows, max_rows, max_seq_items, mpl_style, multi_sparse, notebook_repr_html, pprint_nest_depth, precision, width] mode.[sim_interactive, use_inf_as_null] Parameters ---------- pat - str\/regexp which should match a single option. Note: partial matches are supported for convenience, but unless you use the full option name (e.g., *x.y.z.option_name*), your code may break in future versions if new options with similar names are introduced. value - new value of option.
Returns ------- None Raises ------ KeyError if no such option exists display.chop_threshold: [default: None] [currently: None] : float or None if set to a float value, all float values smaller then the given threshold will be displayed as exactly 0 by repr and friends. display.colheader_justify: [default: right] [currently: right] : 'left'\/'right' Controls the justification of column headers. used by DataFrameFormatter. display.column_space: [default: 12] [currently: 12]No description available. display.date_dayfirst: [default: False] [currently: False] : boolean When True, prints and parses dates with the day first, eg 20\/01\/2005 display.date_yearfirst: [default: False] [currently: False] : boolean When True, prints and parses dates with the year first, e.g., 2005\/01\/20 display.encoding: [default: UTF-8] [currently: UTF-8] : str\/unicode Defaults to the detected encoding of the console. Specifies the encoding to be used for strings returned by to_string, these are generally strings meant to be displayed on the console. display.expand_frame_repr: [default: True] [currently: True] : boolean Whether to print out the full DataFrame repr for wide DataFrames across multiple lines, `max_columns` is still respected, but the output will wrap-around across multiple \"pages\" if it's width exceeds `display.width`. display.float_format: [default: None] [currently: None] : callable The callable should accept a floating point number and return a string with the desired format of the number. This is used in some places like SeriesFormatter. See core.format.EngFormatter for an example. display.height: [default: 60] [currently: 1000] : int Deprecated. (Deprecated, use `display.height` instead.) display.line_width: [default: 80] [currently: 1000] : int Deprecated. (Deprecated, use `display.width` instead.) 
display.max_columns: [default: 20] [currently: 500] : int max_rows and max_columns are used in __repr__() methods to decide if to_string() or info() is used to render an object to a string. In case python\/IPython is running in a terminal this can be set to 0 and Pandas will correctly auto-detect the width the terminal and swap to a smaller format in case all columns would not fit vertically. The IPython notebook, IPython qtconsole, or IDLE do not run in a terminal and hence it is not possible to do correct auto-detection. 'None' value means unlimited. display.max_colwidth: [default: 50] [currently: 50] : int The maximum width in characters of a column in the repr of a Pandas data structure. When the column overflows, a \"...\" placeholder is embedded in the output. display.max_info_columns: [default: 100] [currently: 100] : int max_info_columns is used in DataFrame.info method to decide if per column information will be printed. display.max_info_rows: [default: 1690785] [currently: 1690785] : int or None max_info_rows is the maximum number of rows for which a frame will perform a null check on its columns when repr'ing To a console. The default is 1,000,000 rows. So, if a DataFrame has more 1,000,000 rows there will be no null check performed on the columns and thus the representation will take much less time to display in an interactive session. A value of None means always perform a null check when repr'ing. display.max_rows: [default: 60] [currently: 500] : int This sets the maximum number of rows Pandas should output when printing out various output. For example, this value determines whether the repr() for a dataframe prints out fully or just a summary repr. 'None' value means unlimited. display.max_seq_items: [default: None] [currently: None] : int or None when pretty-printing a long sequence, no more then `max_seq_items` will be printed. If items are ommitted, they will be denoted by the addition of \"...\" to the resulting string. 
If set to None, the number of items to be printed is unlimited. display.mpl_style: [default: None] [currently: None] : bool Setting this to 'default' will modify the rcParams used by matplotlib to give plots a more pleasing visual style by default. Setting this to None\/False restores the values to their initial value. display.multi_sparse: [default: True] [currently: True] : boolean \"sparsify\" MultiIndex display (don't display repeated elements in outer levels within groups) display.notebook_repr_html: [default: True] [currently: True] : boolean When True, IPython notebook will use html representation for Pandas objects (if it is available). display.pprint_nest_depth: [default: 3] [currently: 3] : int Controls the number of nested levels to process when pretty-printing display.precision: [default: 7] [currently: 7] : int Floating point output precision (number of significant digits). This is only a suggestion display.width: [default: 80] [currently: 1000] : int Width of the display in characters. In case python\/IPython is running in a terminal this can be set to None and Pandas will correctly auto-detect the width. Note that the IPython notebook, IPython qtconsole, or IDLE do not run in a terminal and hence it is not possible to correctly detect the width. mode.sim_interactive: [default: False] [currently: False] : boolean Whether to simulate interactive mode for purposes of testing mode.use_inf_as_null: [default: False] [currently: False] : boolean True means treat None, NaN, INF, -INF as null (old way), False means None and NaN are null, but INF, -INF are not null (new way). Call def: pd.set_option(self, *args, **kwds) ``` Older version information Much of this has been deprecated. As @bmu mentioned, Pandas auto detects (by default) the size of the display area, a summary view will be used when an object repr does not fit on the display. You mentioned resizing the IDLE window, to no effect. 
If you do print df.describe().to_string() does it fit on the IDLE window? The terminal size is determined by pandas.util.terminal.get_terminal_size() (deprecated and removed), this returns a tuple containing the (width, height) of the display. Does the output match the size of your IDLE window? There might be an issue (there was one before when running a terminal in Emacs). Note that it is possible to bypass the autodetect, pandas.set_printoptions(max_rows=200, max_columns=10) will never switch to summary view if number of rows, columns does not exceed the given limits. The max_colwidth option helps in seeing untruncated form of each column.", "best_answers_score":0.8, "library_name":"pandas", "question_url":"https:\/\/stackoverflow.com\/questions\/11707586\/how-do-i-expand-the-output-display-to-see-more-columns-of-a-pandas-dataframe", "best_answers_votes":1510, "question_length":895, "response_length":7535 }, { "question":"How are iloc and loc different? Can someone explain how these two methods of slicing are different? I've seen the docs and I've seen previous similar questions (1, 2), but I still find myself unable to understand how they are different. To me, they seem interchangeable in large part, because they are at the lower levels of slicing. For example, say we want to get the first five rows of a DataFrame. How is it that these two work? ```py df.loc[:5] df.iloc[:5] ``` Can someone present cases where the distinction in uses are clearer? Once upon a time, I also wanted to know how these two functions differed from df.ix[:5] but ix has been removed from pandas 1.0, so I don't care anymore.", "response":"Label vs. Location The main distinction between the two methods is: loc gets rows (and\/or columns) with particular labels. iloc gets rows (and\/or columns) at integer locations. 
To demonstrate, consider a series s of characters with a non-monotonic integer index: ``` >>> s = pd.Series(list(\"abcdef\"), index=[49, 48, 47, 0, 1, 2]) 49 a 48 b 47 c 0 d 1 e 2 f >>> s.loc[0] # value at index label 0 'd' >>> s.iloc[0] # value at index location 0 'a' >>> s.loc[0:1] # rows at index labels between 0 and 1 (inclusive) 0 d 1 e >>> s.iloc[0:1] # rows at index location between 0 and 1 (exclusive) 49 a ``` Here are some of the differences\/similarities between s.loc and s.iloc when passed various objects:

| indexer | description | s.loc[...] | s.iloc[...] |
|---|---|---|---|
| 0 | single item | Value at index label 0 (the string 'd') | Value at index location 0 (the string 'a') |
| 0:1 | slice | Two rows (labels 0 and 1) | One row (first row at location 0) |
| 1:47 | slice with out-of-bounds end | Zero rows (empty Series) | Five rows (location 1 onwards) |
| 1:47:-1 | slice with negative step | Three rows (labels 1 back to 47) | Zero rows (empty Series) |
| [2, 0] | integer list | Two rows with given labels | Two rows with given locations |
| s > 'e' | Bool series (indicating which values have the property) | One row (containing 'f') | NotImplementedError |
| (s>'e').values | Bool array | One row (containing 'f') | Same as loc |
| 999 | int object not in index | KeyError | IndexError (out of bounds) |
| -1 | int object not in index | KeyError | Returns last value in s |
| lambda x: x.index[3] | callable applied to series (here returning 3rd item in index) | s.loc[s.index[3]] | s.iloc[s.index[3]] |

loc's label-querying capabilities extend well beyond integer indexes and it's worth highlighting a couple of additional examples. Here's a Series where the index contains string objects: ``` >>> s2 = pd.Series(s.index, index=s.values) >>> s2 a 49 b 48 c 47 d 0 e 1 f 2 ``` Since loc is label-based, it can fetch the first value in the Series using s2.loc['a']. It can also slice with non-integer objects: ``` >>> s2.loc['c':'e'] # all rows lying between 'c' and 'e' (inclusive) c 47 d 0 e 1 ``` For DateTime indexes, we don't need to pass the exact date\/time to fetch by label.
For example: ``` >>> s3 = pd.Series(list('abcde'), pd.date_range('now', periods=5, freq='M')) >>> s3 2021-01-31 16:41:31.879768 a 2021-02-28 16:41:31.879768 b 2021-03-31 16:41:31.879768 c 2021-04-30 16:41:31.879768 d 2021-05-31 16:41:31.879768 e ``` Then to fetch the row(s) for March\/April 2021 we only need: ``` >>> s3.loc['2021-03':'2021-04'] 2021-03-31 17:04:30.742316 c 2021-04-30 17:04:30.742316 d ``` Rows and Columns loc and iloc work the same way with DataFrames as they do with Series. It's useful to note that both methods can address columns and rows together. When given a tuple, the first element is used to index the rows and, if it exists, the second element is used to index the columns. Consider the DataFrame defined below: ``` >>> import numpy as np >>> df = pd.DataFrame(np.arange(25).reshape(5, 5), index=list('abcde'), columns=['x','y','z', 8, 9]) >>> df x y z 8 9 a 0 1 2 3 4 b 5 6 7 8 9 c 10 11 12 13 14 d 15 16 17 18 19 e 20 21 22 23 24 ``` Then for example: ``` >>> df.loc['c': , :'z'] # rows 'c' and onwards AND columns up to 'z' x y z c 10 11 12 d 15 16 17 e 20 21 22 >>> df.iloc[:, 3] # all rows, but only the column at index location 3 a 3 b 8 c 13 d 18 e 23 ``` Sometimes we want to mix label and positional indexing methods for the rows and columns, somehow combining the capabilities of loc and iloc. For example, consider the following DataFrame. How best to slice the rows up to and including 'c' and take the first four columns? ``` >>> import numpy as np >>> df = pd.DataFrame(np.arange(25).reshape(5, 5), index=list('abcde'), columns=['x','y','z', 8, 9]) >>> df x y z 8 9 a 0 1 2 3 4 b 5 6 7 8 9 c 10 11 12 13 14 d 15 16 17 18 19 e 20 21 22 23 24 ``` We can achieve this result using iloc and the help of another method: ``` >>> df.iloc[:df.index.get_loc('c') + 1, :4] x y z 8 a 0 1 2 3 b 5 6 7 8 c 10 11 12 13 ``` get_loc() is an index method meaning \"get the position of the label in this index\". 
Note that since slicing with iloc is exclusive of its endpoint, we must add 1 to this value if we want row 'c' as well.", "best_answers_score":0.8, "library_name":"pandas", "question_url":"https:\/\/stackoverflow.com\/questions\/31593201\/how-are-iloc-and-loc-different", "best_answers_votes":1602, "question_length":688, "response_length":4203 }, { "question":"Pandas Merging 101 How can I perform a (INNER| (LEFT|RIGHT|FULL) OUTER) JOIN with pandas? How do I add NaNs for missing rows after a merge? How do I get rid of NaNs after merging? Can I merge on the index? How do I merge multiple DataFrames? Cross join with pandas merge? join? concat? update? Who? What? Why?! ... and more. I've seen these recurring questions asking about various facets of the pandas merge functionality. Most of the information regarding merge and its various use cases today is fragmented across dozens of badly worded, unsearchable posts. The aim here is to collate some of the more important points for posterity. This Q&A is meant to be the next installment in a series of helpful user guides on common pandas idioms (see this post on pivoting, and this post on concatenation, which I will be touching on, later). Please note that this post is not meant to be a replacement for the documentation, so please read that as well! Some of the examples are taken from there. Table of Contents For ease of access. Merging basics - basic types of joins (read this first) Index-based joins Generalizing to multiple DataFrames Cross join", "response":"This post aims to give readers a primer on SQL-flavored merging with Pandas, how to use it, and when not to use it. In particular, here's what this post will go through: The basics - types of joins (LEFT, RIGHT, OUTER, INNER) merging with different column names merging with multiple columns avoiding duplicate merge key column in output What this post (and other posts by me on this thread) will not go through: Performance-related discussions and timings (for now). 
Mostly notable mentions of better alternatives, wherever appropriate. Handling suffixes, removing extra columns, renaming outputs, and other specific use cases. There are other (read: better) posts that deal with that, so figure it out! Note Most examples default to INNER JOIN operations while demonstrating various features, unless otherwise specified. Furthermore, all the DataFrames here can be copied and replicated so you can play with them. Also, see this post on how to read DataFrames from your clipboard. Lastly, all visual representations of JOIN operations have been hand-drawn using Google Drawings. Inspiration from here. Enough talk - just show me how to use merge! Setup & Basics ``` np.random.seed(0) left = pd.DataFrame({'key': ['A', 'B', 'C', 'D'], 'value': np.random.randn(4)}) right = pd.DataFrame({'key': ['B', 'D', 'E', 'F'], 'value': np.random.randn(4)}) left key value 0 A 1.764052 1 B 0.400157 2 C 0.978738 3 D 2.240893 right key value 0 B 1.867558 1 D -0.977278 2 E 0.950088 3 F -0.151357 ``` For the sake of simplicity, the key column has the same name (for now). An INNER JOIN is represented by the intersection of the two key sets. Note This, along with the forthcoming figures, all follow this convention: blue indicates rows that are present in the merge result red indicates rows that are excluded from the result (i.e., removed) green indicates missing values that are replaced with NaNs in the result To perform an INNER JOIN, call merge on the left DataFrame, specifying the right DataFrame and the join key (at the very least) as arguments. ``` left.merge(right, on='key') # Or, if you want to be explicit # left.merge(right, on='key', how='inner') key value_x value_y 0 B 0.400157 1.867558 1 D 2.240893 -0.977278 ``` This returns only rows from left and right which share a common key (in this example, \"B\" and \"D\"). A LEFT OUTER JOIN, or LEFT JOIN, keeps every key from left and can be performed by specifying how='left'.
``` left.merge(right, on='key', how='left') key value_x value_y 0 A 1.764052 NaN 1 B 0.400157 1.867558 2 C 0.978738 NaN 3 D 2.240893 -0.977278 ``` Carefully note the placement of NaNs here. If you specify how='left', then only keys from left are used, and missing data from right is replaced by NaN. And similarly, for a RIGHT OUTER JOIN, or RIGHT JOIN, specify how='right': ``` left.merge(right, on='key', how='right') key value_x value_y 0 B 0.400157 1.867558 1 D 2.240893 -0.977278 2 E NaN 0.950088 3 F NaN -0.151357 ``` Here, keys from right are used, and missing data from left is replaced by NaN. Finally, for the FULL OUTER JOIN, specify how='outer'. ``` left.merge(right, on='key', how='outer') key value_x value_y 0 A 1.764052 NaN 1 B 0.400157 1.867558 2 C 0.978738 NaN 3 D 2.240893 -0.977278 4 E NaN 0.950088 5 F NaN -0.151357 ``` This uses the keys from both frames, and NaNs are inserted for missing rows in both. The documentation summarizes these various merges nicely: Other JOINs - LEFT-Excluding, RIGHT-Excluding, and FULL-Excluding\/ANTI JOINs If you need LEFT-Excluding or RIGHT-Excluding JOINs, you can perform them in two steps.
For a LEFT-Excluding JOIN, start by performing a LEFT OUTER JOIN and then filtering to rows coming from left only (excluding everything from the right), ``` (left.merge(right, on='key', how='left', indicator=True) .query('_merge == \"left_only\"') .drop('_merge', axis=1)) key value_x value_y 0 A 1.764052 NaN 2 C 0.978738 NaN ``` Where, ``` left.merge(right, on='key', how='left', indicator=True) key value_x value_y _merge 0 A 1.764052 NaN left_only 1 B 0.400157 1.867558 both 2 C 0.978738 NaN left_only 3 D 2.240893 -0.977278 both ``` And similarly, for a RIGHT-Excluding JOIN, ``` (left.merge(right, on='key', how='right', indicator=True) .query('_merge == \"right_only\"') .drop('_merge', axis=1)) key value_x value_y 2 E NaN 0.950088 3 F NaN -0.151357 ``` Lastly, if you are required to do a merge that only retains keys from the left or right, but not both (IOW, performing an ANTI-JOIN), you can do this in similar fashion\u2014 ``` (left.merge(right, on='key', how='outer', indicator=True) .query('_merge != \"both\"') .drop('_merge', axis=1)) key value_x value_y 0 A 1.764052 NaN 2 C 0.978738 NaN 4 E NaN 0.950088 5 F NaN -0.151357 ``` Different names for key columns If the key columns are named differently\u2014for example, left has keyLeft, and right has keyRight instead of key\u2014then you will have to specify left_on and right_on as arguments instead of on: ``` left2 = left.rename({'key':'keyLeft'}, axis=1) right2 = right.rename({'key':'keyRight'}, axis=1) left2 keyLeft value 0 A 1.764052 1 B 0.400157 2 C 0.978738 3 D 2.240893 right2 keyRight value 0 B 1.867558 1 D -0.977278 2 E 0.950088 3 F -0.151357 ``` ``` left2.merge(right2, left_on='keyLeft', right_on='keyRight', how='inner') keyLeft value_x keyRight value_y 0 B 0.400157 B 1.867558 1 D 2.240893 D -0.977278 ``` Avoiding duplicate key column in output When merging on keyLeft from left and keyRight from right, if you only want either of the keyLeft or keyRight (but not both) in the output, you can start 
by setting the index as a preliminary step. ``` left3 = left2.set_index('keyLeft') left3.merge(right2, left_index=True, right_on='keyRight') value_x keyRight value_y 0 0.400157 B 1.867558 1 2.240893 D -0.977278 ``` Contrast this with the output of the command just before (that is, the output of left2.merge(right2, left_on='keyLeft', right_on='keyRight', how='inner')), and you'll notice keyLeft is missing. You can figure out what column to keep based on which frame's index is set as the key. This may matter when, say, performing some OUTER JOIN operation. Merging only a single column from one of the DataFrames For example, consider ``` right3 = right.assign(newcol=np.arange(len(right))) right3 key value newcol 0 B 1.867558 0 1 D -0.977278 1 2 E 0.950088 2 3 F -0.151357 3 ``` If you are required to merge only \"newcol\" (without any of the other columns), you can usually just subset columns before merging: ``` left.merge(right3[['key', 'newcol']], on='key') key value newcol 0 B 0.400157 0 1 D 2.240893 1 ``` If you're doing a LEFT OUTER JOIN, a more performant solution would involve map: ``` # left['newcol'] = left['key'].map(right3.set_index('key')['newcol']) left.assign(newcol=left['key'].map(right3.set_index('key')['newcol'])) key value newcol 0 A 1.764052 NaN 1 B 0.400157 0.0 2 C 0.978738 NaN 3 D 2.240893 1.0 ``` As mentioned, this is similar to, but faster than ``` left.merge(right3[['key', 'newcol']], on='key', how='left') key value newcol 0 A 1.764052 NaN 1 B 0.400157 0.0 2 C 0.978738 NaN 3 D 2.240893 1.0 ``` Merging on multiple columns To join on more than one column, specify a list for on (or left_on and right_on, as appropriate). ``` left.merge(right, on=['key1', 'key2'] ...) ``` Or, in the event the names are different, ``` left.merge(right, left_on=['lkey1', 'lkey2'], right_on=['rkey1', 'rkey2']) ``` Other useful merge* operations and functions Merging a DataFrame with Series on index: See this answer. 
Besides merge, DataFrame.update and DataFrame.combine_first are also used in certain cases to update one DataFrame with another. pd.merge_ordered is a useful function for ordered JOINs. pd.merge_asof (read: merge_asOf) is useful for approximate joins. This section only covers the very basics, and is designed to only whet your appetite. For more examples and cases, see the documentation on merge, join, and concat as well as the links to the function specifications. Continue Reading Jump to other topics in Pandas Merging 101 to continue learning: Merging basics - basic types of joins * Index-based joins Generalizing to multiple DataFrames Cross join *You are here.", "best_answers_score":0.8, "library_name":"pandas", "question_url":"https:\/\/stackoverflow.com\/questions\/53645882\/pandas-merging-101", "best_answers_votes":1299, "question_length":1151, "response_length":8126 }, { "question":"Creating an empty Pandas DataFrame, and then filling it I'm starting from the pandas DataFrame documentation here: Introduction to data structures I'd like to iteratively fill the DataFrame with values in a time series kind of calculation. I'd like to initialize the DataFrame with columns A, B, and timestamp rows, all 0 or all NaN. I'd then add initial values and go over this data calculating the new row from the row before, say row[A][t] = row[A][t-1]+1 or so. I'm currently using the code as below, but I feel it's kind of ugly and there must be a way to do this with a DataFrame directly or just a better way in general. 
```py import pandas as pd import datetime as dt import scipy as s base = dt.datetime.today().date() dates = [ base - dt.timedelta(days=x) for x in range(9, -1, -1) ] valdict = {} symbols = ['A','B', 'C'] for symb in symbols: valdict[symb] = pd.Series( s.zeros(len(dates)), dates ) for thedate in dates: if thedate > dates[0]: for symb in valdict: valdict[symb][thedate] = 1 + valdict[symb][thedate - dt.timedelta(days=1)] ```", "response":"NEVER grow a DataFrame row-wise! TLDR: (just read the bold text) Most answers here will tell you how to create an empty DataFrame and fill it out, but no one will tell you that it is a bad thing to do. Here is my advice: Accumulate data in a list, not a DataFrame. Use a list to collect your data, then initialise a DataFrame when you are ready. Either a list-of-lists or list-of-dicts format will work, pd.DataFrame accepts both. ``` data = [] for row in some_function_that_yields_data(): data.append(row) df = pd.DataFrame(data) ``` pd.DataFrame converts the list of rows (where each row is a scalar value) into a DataFrame. If your function yields DataFrames instead, call pd.concat. Pros of this approach: It is always cheaper to append to a list and create a DataFrame in one go than it is to create an empty DataFrame (or one of NaNs) and append to it over and over again. Lists also take up less memory and are a much lighter data structure to work with, append, and remove (if needed). dtypes are automatically inferred (rather than assigning object to all of them). A RangeIndex is automatically created for your data, instead of you having to take care to assign the correct index to the row you are appending at each iteration. If you aren't convinced yet, this is also mentioned in the documentation: Iteratively appending rows to a DataFrame can be more computationally intensive than a single concatenate. A better solution is to append those rows to a list and then concatenate the list with the original DataFrame all at once. 
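To make the quoted advice concrete, here is a minimal runnable sketch of the list-accumulation pattern; the generator is a made-up stand-in for your real data source:

```python
import pandas as pd

# Hypothetical stand-in for whatever actually produces your rows.
def some_function_that_yields_data():
    for i in range(3):
        yield {'A': i, 'B': i * 0.5, 'C': f'item{i}'}

data = list(some_function_that_yields_data())  # accumulate in a plain list
df = pd.DataFrame(data)                        # build the DataFrame once

# dtypes are inferred per column for free (int64 / float64 / object)
print(df.dtypes)
```

Note that no column ends up with dtype object unless it genuinely holds strings.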
pandas >= 2.0 update: append has been removed! DataFrame.append was deprecated in version 1.4 and removed from the pandas API entirely in version 2.0. See also this GitHub issue that originally proposed its deprecation. These options are horrible append or concat inside a loop Here is the biggest mistake I've seen from beginners: ``` df = pd.DataFrame(columns=['A', 'B', 'C']) for a, b, c in some_function_that_yields_data(): df = df.append({'A': a, 'B': b, 'C': c}, ignore_index=True) # yuck # or similarly, # df = pd.concat([df, pd.Series({'A': a, 'B': b, 'C': c})], ignore_index=True) ``` Memory is re-allocated for every append or concat operation you have. Couple this with a loop and you have a quadratic complexity operation. The other mistake associated with df.append is that users tend to forget append is not an in-place function, so the result must be assigned back. You also have to worry about the dtypes: ``` df = pd.DataFrame(columns=['A', 'B', 'C']) df = df.append({'A': 1, 'B': 12.3, 'C': 'xyz'}, ignore_index=True) df.dtypes A object # yuck! B float64 C object dtype: object ``` Dealing with object columns is never a good thing, because pandas cannot vectorize operations on those columns. You will need to call the infer_objects() method to fix it: ``` df.infer_objects().dtypes A int64 B float64 C object dtype: object ``` loc inside a loop I have also seen loc used to append to a DataFrame that was created empty: ``` df = pd.DataFrame(columns=['A', 'B', 'C']) for a, b, c in some_function_that_yields_data(): df.loc[len(df)] = [a, b, c] ``` As before, you have not pre-allocated the amount of memory you need each time, so the memory is re-grown each time you create a new row. It's just as bad as append, and even more ugly. Empty DataFrame of NaNs And then, there's creating a DataFrame of NaNs, and all the caveats associated therewith. 
``` df = pd.DataFrame(columns=['A', 'B', 'C'], index=range(5)) df A B C 0 NaN NaN NaN 1 NaN NaN NaN 2 NaN NaN NaN 3 NaN NaN NaN 4 NaN NaN NaN ``` It creates a DataFrame of object columns, like the others. ``` df.dtypes A object # you DON'T want this B object C object dtype: object ``` Appending still has all the issues as the methods above. ``` for i, (a, b, c) in enumerate(some_function_that_yields_data()): df.iloc[i] = [a, b, c] ``` The Proof is in the Pudding Timing these methods is the fastest way to see just how much they differ in terms of their memory and utility. Benchmarking code for reference.", "best_answers_score":0.8, "library_name":"pandas", "question_url":"https:\/\/stackoverflow.com\/questions\/13784192\/creating-an-empty-pandas-dataframe-and-then-filling-it", "best_answers_votes":1082, "question_length":1053, "response_length":4020 }, { "question":"Shuffle DataFrame rows I have the following DataFrame: ``` Col1 Col2 Col3 Type 0 1 2 3 1 1 4 5 6 1 ... 20 7 8 9 2 21 10 11 12 2 ... 45 13 14 15 3 46 16 17 18 3 ... ``` The DataFrame is read from a CSV file. All rows which have Type 1 are on top, followed by the rows with Type 2, followed by the rows with Type 3, etc. I would like to shuffle the order of the DataFrame's rows so that all Type's are mixed. A possible result could be: ``` Col1 Col2 Col3 Type 0 7 8 9 2 1 13 14 15 3 ... 20 1 2 3 1 21 10 11 12 2 ... 45 4 5 6 1 46 16 17 18 3 ... ``` How can I achieve this?", "response":"The idiomatic way to do this with Pandas is to use the .sample method of your data frame to sample all rows without replacement: ```py df.sample(frac=1) ``` The frac keyword argument specifies the fraction of rows to return in the random sample, so frac=1 means to return all rows (in random order). Note: If you wish to shuffle your dataframe in-place and reset the index, you could do e.g. 
```py df = df.sample(frac=1).reset_index(drop=True) ``` Here, specifying drop=True prevents .reset_index from creating a column containing the old index entries. Follow-up note: Although it may not look like the above operation is in-place, python\/pandas is smart enough not to do another malloc for the shuffled object. That is, even though the reference object has changed (by which I mean id(df_old) is not the same as id(df_new)), the underlying C object is still the same. To show that this is indeed the case, you could run a simple memory profiler: ``` $ python3 -m memory_profiler .\\test.py Filename: .\\test.py Line # Mem usage Increment Line Contents ================================================ 5 68.5 MiB 68.5 MiB @profile 6 def shuffle(): 7 847.8 MiB 779.3 MiB df = pd.DataFrame(np.random.randn(100, 1000000)) 8 847.9 MiB 0.1 MiB df = df.sample(frac=1).reset_index(drop=True) ```", "best_answers_score":0.8, "library_name":"pandas", "question_url":"https:\/\/stackoverflow.com\/questions\/29576430\/shuffle-dataframe-rows", "best_answers_votes":1613, "question_length":571, "response_length":1287 }, { "question":"Truth value of a Series is ambiguous. Use a.empty, a.bool(), a.item(), a.any() or a.all() I want to filter my dataframe with an or condition to keep rows with a particular column's values that are outside the range [-0.25, 0.25]. I tried: ``` df = df[(df['col'] < -0.25) | (df['col'] > 0.25)] ``` But I get the error: ValueError: The truth value of a Series is ambiguous. Use a.empty, a.bool(), a.item(), a.any() or a.all().", "response":"The or and and Python statements require truth-values. For pandas, these are considered ambiguous, so you should use \"bitwise\" | (or) or & (and) operations: ``` df = df[(df['col'] < -0.25) | (df['col'] > 0.25)] ``` These are overloaded for these kinds of data structures to yield the element-wise or or and. 
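As a self-contained sketch of that fix applied to the question's filter (the data values are made up):

```python
import pandas as pd

df = pd.DataFrame({'col': [-0.5, 0.1, 0.3]})

# Parenthesise each comparison: & and | bind more tightly than < and >.
outside = df[(df['col'] < -0.25) | (df['col'] > 0.25)]
print(outside)
```

Only the rows with values outside [-0.25, 0.25] survive the filter.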
Just to add some more explanation to this statement: The exception is thrown when you want to get the bool of a pandas.Series: ``` >>> import pandas as pd >>> x = pd.Series([1]) >>> bool(x) ValueError: The truth value of a Series is ambiguous. Use a.empty, a.bool(), a.item(), a.any() or a.all(). ``` You hit a place where the operator implicitly converted the operands to bool (you used or but it also happens for and, if and while): ``` >>> x or x ValueError: The truth value of a Series is ambiguous. Use a.empty, a.bool(), a.item(), a.any() or a.all(). >>> x and x ValueError: The truth value of a Series is ambiguous. Use a.empty, a.bool(), a.item(), a.any() or a.all(). >>> if x: ... print('fun') ValueError: The truth value of a Series is ambiguous. Use a.empty, a.bool(), a.item(), a.any() or a.all(). >>> while x: ... print('fun') ValueError: The truth value of a Series is ambiguous. Use a.empty, a.bool(), a.item(), a.any() or a.all(). ``` Besides these four statements, there are several Python functions that hide some bool calls (like any, all, filter, ...). These are normally not problematic with pandas.Series, but for completeness I wanted to mention these. In your case, the exception isn't really helpful, because it doesn't mention the right alternatives. For and and or, if you want element-wise comparisons, you can use: numpy.logical_or: ``` >>> import numpy as np >>> np.logical_or(x, y) ``` or simply the | operator: ``` >>> x | y ``` numpy.logical_and: ``` >>> np.logical_and(x, y) ``` or simply the & operator: ``` >>> x & y ``` If you're using the operators, then be sure to set your parentheses correctly because of operator precedence. There are several logical NumPy functions which should work on pandas.Series. The alternatives mentioned in the Exception are more suited if you encountered it when doing if or while. 
I'll shortly explain each of these: If you want to check if your Series is empty: ``` >>> x = pd.Series([]) >>> x.empty True >>> x = pd.Series([1]) >>> x.empty False ``` Python normally interprets the length of containers (like list, tuple, ...) as truth-value if it has no explicit Boolean interpretation. So if you want the Python-like check, you could do: if x.size or if not x.empty instead of if x. If your Series contains one and only one Boolean value: ``` >>> x = pd.Series([100]) >>> (x > 50).bool() True >>> (x < 50).bool() False ``` If your Series contains one and only one item and you want to see its value: ``` >>> x = pd.Series([100]) >>> x.item() 100 ``` If you want to check if all or any item is not-zero, not-empty or not-False: ``` >>> x = pd.Series([0, 1, 2]) >>> x.all() # Because one element is zero False >>> x.any() # because one (or more) elements are non-zero True ```", "best_answers_score":0.8, "library_name":"pandas", "question_url":"https:\/\/stackoverflow.com\/questions\/36921951\/truth-value-of-a-series-is-ambiguous-use-a-empty-a-bool-a-item-a-any-o", "best_answers_votes":1157, "question_length":401, "response_length":2926 }, { "question":"How to filter Pandas dataframe using 'in' and 'not in' like in SQL How can I achieve the equivalents of SQL's IN and NOT IN? I have a list with the required values. Here's the scenario: ```py df = pd.DataFrame({'country': ['US', 'UK', 'Germany', 'China']}) countries_to_keep = ['UK', 'China'] # pseudo-code: df[df['country'] not in countries_to_keep] ``` My current way of doing this is as follows: ```py df = pd.DataFrame({'country': ['US', 'UK', 'Germany', 'China']}) df2 = pd.DataFrame({'country': ['UK', 'China'], 'matched': True}) # IN df.merge(df2, how='inner', on='country') # NOT IN not_in = df.merge(df2, how='left', on='country') not_in = not_in[pd.isnull(not_in['matched'])] ``` But this seems like a horrible kludge. Can anyone improve on it?", "response":"You can use pd.Series.isin. 
For \"IN\" use: something.isin(somewhere) Or for \"NOT IN\": ~something.isin(somewhere) As a worked example: ``` >>> df country 0 US 1 UK 2 Germany 3 China >>> countries_to_keep ['UK', 'China'] >>> df.country.isin(countries_to_keep) 0 False 1 True 2 False 3 True Name: country, dtype: bool >>> df[df.country.isin(countries_to_keep)] country 1 UK 3 China >>> df[~df.country.isin(countries_to_keep)] country 0 US 2 Germany ```", "best_answers_score":0.8, "library_name":"pandas", "question_url":"https:\/\/stackoverflow.com\/questions\/19960077\/how-to-filter-pandas-dataframe-using-in-and-not-in-like-in-sql", "best_answers_votes":1547, "question_length":754, "response_length":448 }, { "question":"Constructing DataFrame from values in variables yields \"ValueError: If using all scalar values, you must pass an index\" I have two variables as follows. ```py a = 2 b = 3 ``` I want to construct a DataFrame from this: ```py df2 = pd.DataFrame({'A':a, 'B':b}) ``` This generates an error: ```none ValueError: If using all scalar values, you must pass an index ``` I tried this also: ```py df2 = (pd.DataFrame({'a':a, 'b':b})).reset_index() ``` This gives the same error message. How do I do what I want?", "response":"The error message says that if you're passing scalar values, you have to pass an index. So you can either not use scalar values for the columns -- e.g. use a list: ``` >>> df = pd.DataFrame({'A': [a], 'B': [b]}) >>> df A B 0 2 3 ``` or use scalar values and pass an index: ``` >>> df = pd.DataFrame({'A': a, 'B': b}, index=[0, 3]) >>> df A B 0 2 3 3 2 3 ```", "best_answers_score":0.8, "library_name":"pandas", "question_url":"https:\/\/stackoverflow.com\/questions\/17839973\/constructing-dataframe-from-values-in-variables-yields-valueerror-if-using-all", "best_answers_votes":1253, "question_length":502, "response_length":357 }, { "question":"Get statistics for each group (such as count, mean, etc) using pandas GroupBy? 
I have a dataframe df and I use several columns from it to groupby: ```py df['col1','col2','col3','col4'].groupby(['col1','col2']).mean() ``` In the above way, I almost get the table (dataframe) that I need. What is missing is an additional column that contains number of rows in each group. In other words, I have mean but I also would like to know how many were used to get these means. For example in the first group there are 8 values and in the second one 10 and so on. In short: How do I get group-wise statistics for a dataframe?", "response":"Quick Answer: The simplest way to get row counts per group is by calling .size(), which returns a Series: ``` df.groupby(['col1','col2']).size() ``` Usually you want this result as a DataFrame (instead of a Series) so you can do: ``` df.groupby(['col1', 'col2']).size().reset_index(name='counts') ``` If you want to find out how to calculate the row counts and other statistics for each group continue reading below. Detailed example: Consider the following example dataframe: ``` In [2]: df Out[2]: col1 col2 col3 col4 col5 col6 0 A B 0.20 -0.61 -0.49 1.49 1 A B -1.53 -1.01 -0.39 1.82 2 A B -0.44 0.27 0.72 0.11 3 A B 0.28 -1.32 0.38 0.18 4 C D 0.12 0.59 0.81 0.66 5 C D -0.13 -1.65 -1.64 0.50 6 C D -1.42 -0.11 -0.18 -0.44 7 E F -0.00 1.42 -0.26 1.17 8 E F 0.91 -0.47 1.35 -0.34 9 G H 1.48 -0.63 -1.14 0.17 ``` First let's use .size() to get the row counts: ``` In [3]: df.groupby(['col1', 'col2']).size() Out[3]: col1 col2 A B 4 C D 3 E F 2 G H 1 dtype: int64 ``` Then let's use .size().reset_index(name='counts') to get the row counts: ``` In [4]: df.groupby(['col1', 'col2']).size().reset_index(name='counts') Out[4]: col1 col2 counts 0 A B 4 1 C D 3 2 E F 2 3 G H 1 ``` Including results for more statistics When you want to calculate statistics on grouped data, it usually looks like this: ``` In [5]: (df ...: .groupby(['col1', 'col2']) ...: .agg({ ...: 'col3': ['mean', 'count'], ...: 'col4': ['median', 'min', 'count'] ...: })) Out[5]: 
col4 col3 median min count mean count col1 col2 A B -0.810 -1.32 4 -0.372500 4 C D -0.110 -1.65 3 -0.476667 3 E F 0.475 -0.47 2 0.455000 2 G H -0.630 -0.63 1 1.480000 1 ``` The result above is a little annoying to deal with because of the nested column labels, and also because row counts are on a per column basis. To gain more control over the output I usually split the statistics into individual aggregations that I then combine using join. It looks like this: ``` In [6]: gb = df.groupby(['col1', 'col2']) ...: counts = gb.size().to_frame(name='counts') ...: (counts ...: .join(gb.agg({'col3': 'mean'}).rename(columns={'col3': 'col3_mean'})) ...: .join(gb.agg({'col4': 'median'}).rename(columns={'col4': 'col4_median'})) ...: .join(gb.agg({'col4': 'min'}).rename(columns={'col4': 'col4_min'})) ...: .reset_index() ...: ) ...: Out[6]: col1 col2 counts col3_mean col4_median col4_min 0 A B 4 -0.372500 -0.810 -1.32 1 C D 3 -0.476667 -0.110 -1.65 2 E F 2 0.455000 0.475 -0.47 3 G H 1 1.480000 -0.630 -0.63 ``` Footnotes The code used to generate the test data is shown below: ``` In [1]: import numpy as np ...: import pandas as pd ...: ...: keys = np.array([ ...: ['A', 'B'], ...: ['A', 'B'], ...: ['A', 'B'], ...: ['A', 'B'], ...: ['C', 'D'], ...: ['C', 'D'], ...: ['C', 'D'], ...: ['E', 'F'], ...: ['E', 'F'], ...: ['G', 'H'] ...: ]) ...: ...: df = pd.DataFrame( ...: np.hstack([keys,np.random.randn(10,4).round(2)]), ...: columns = ['col1', 'col2', 'col3', 'col4', 'col5', 'col6'] ...: ) ...: ...: df[['col3', 'col4', 'col5', 'col6']] = \\ ...: df[['col3', 'col4', 'col5', 'col6']].astype(float) ...: ``` Disclaimer: If some of the columns that you are aggregating have null values, then you really want to be looking at the group row counts as an independent aggregation for each column. 
Otherwise you may be misled as to how many records are actually being used to calculate things like the mean because pandas will drop NaN entries in the mean calculation without telling you about it.", "best_answers_score":0.8, "library_name":"pandas", "question_url":"https:\/\/stackoverflow.com\/questions\/19384532\/get-statistics-for-each-group-such-as-count-mean-etc-using-pandas-groupby", "best_answers_votes":1510, "question_length":615, "response_length":3440 }, { "question":"Set value for particular cell in pandas DataFrame using index I have created a Pandas DataFrame ```py df = DataFrame(index=['A','B','C'], columns=['x','y']) ``` Now, I would like to assign a value to particular cell, for example to row C and column x. In other words, I would like to perform the following transformation: ```none x y x y A NaN NaN A NaN NaN B NaN NaN \u27f6 B NaN NaN C NaN NaN C 10 NaN ``` with this code: ```py df.xs('C')['x'] = 10 ``` However, the contents of df has not changed. The dataframe contains yet again only NaNs. How do I what I want?", "response":"RukTech's answer, df.set_value('C', 'x', 10), is far and away faster than the options I've suggested below. However, it has been slated for deprecation. Going forward, the recommended method is .iat\/.at. Why df.xs('C')['x']=10 does not work: df.xs('C') by default, returns a new dataframe with a copy of the data, so ``` df.xs('C')['x']=10 ``` modifies this new dataframe only. df['x'] returns a view of the df dataframe, so ``` df['x']['C'] = 10 ``` modifies df itself. Warning: It is sometimes difficult to predict if an operation returns a copy or a view. For this reason the docs recommend avoiding assignments with \"chained indexing\". So the recommended alternative is ``` df.at['C', 'x'] = 10 ``` which does modify df. 
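A minimal self-contained sketch contrasting label-based .at with its positional counterpart .iat (the frame below is illustrative):

```python
import pandas as pd

# Empty float frame so the NaN placeholders behave as expected.
df = pd.DataFrame(index=['A', 'B', 'C'], columns=['x', 'y'], dtype=float)

df.at['C', 'x'] = 10   # label-based scalar assignment
df.iat[0, 1] = 5       # positional: row 0 ('A'), column 1 ('y')

print(df)
```

Both setters modify df directly, avoiding the chained-indexing copy problem described above.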
``` In [18]: %timeit df.set_value('C', 'x', 10) 100000 loops, best of 3: 2.9 \u00b5s per loop In [20]: %timeit df['x']['C'] = 10 100000 loops, best of 3: 6.31 \u00b5s per loop In [81]: %timeit df.at['C', 'x'] = 10 100000 loops, best of 3: 9.2 \u00b5s per loop ```", "best_answers_score":0.8, "library_name":"pandas", "question_url":"https:\/\/stackoverflow.com\/questions\/13842088\/set-value-for-particular-cell-in-pandas-dataframe-using-index", "best_answers_votes":954, "question_length":560, "response_length":973 }, { "question":"Import multiple CSV files into pandas and concatenate into one DataFrame I would like to read several CSV files from a directory into pandas and concatenate them into one big DataFrame. I have not been able to figure it out though. Here is what I have so far: ``` import glob import pandas as pd # Get data file names path = r'C:\\DRO\\DCL_rawdata_files' filenames = glob.glob(path + \"\/*.csv\") dfs = [] for filename in filenames: dfs.append(pd.read_csv(filename)) # Concatenate all data into one DataFrame big_frame = pd.concat(dfs, ignore_index=True) ``` I guess I need some help within the for loop?", "response":"See pandas: IO tools for all of the available .read_ methods. Try the following code if all of the CSV files have the same columns. I have added header=0, so that after reading the CSV file's first row, it can be assigned as the column names. ``` import pandas as pd import glob import os path = r'C:\\DRO\\DCL_rawdata_files' # use your path all_files = glob.glob(os.path.join(path , \"\/*.csv\")) li = [] for filename in all_files: df = pd.read_csv(filename, index_col=None, header=0) li.append(df) frame = pd.concat(li, axis=0, ignore_index=True) ``` Or, with attribution to a comment from Sid. 
```py all_files = glob.glob(os.path.join(path, \"*.csv\")) df = pd.concat((pd.read_csv(f) for f in all_files), ignore_index=True) ``` It's often necessary to identify each sample of data, which can be accomplished by adding a new column to the dataframe. pathlib from the standard library will be used for this example. It treats paths as objects with methods, instead of strings to be sliced. Imports and Setup ```py from pathlib import Path import pandas as pd import numpy as np path = r'C:\\DRO\\DCL_rawdata_files' # or unix \/ linux \/ mac path # Get the files from the path provided in the OP files = Path(path).glob('*.csv') # .rglob to get subdirectories ``` Option 1: Add a new column with the file name ```py dfs = list() for f in files: data = pd.read_csv(f) # .stem is method for pathlib objects to get the filename w\/o the extension data['file'] = f.stem dfs.append(data) df = pd.concat(dfs, ignore_index=True) ``` Option 2: Add a new column with a generic name using enumerate ```py dfs = list() for i, f in enumerate(files): data = pd.read_csv(f) data['file'] = f'File {i}' dfs.append(data) df = pd.concat(dfs, ignore_index=True) ``` Option 3: Create the dataframes with a list comprehension, and then use np.repeat to add a new column. [f'S{i}' for i in range(len(dfs))] creates a list of strings to name each dataframe. [len(df) for df in dfs] creates a list of lengths Attribution for this option goes to this plotting answer. 
```py # Read the files into dataframes dfs = [pd.read_csv(f) for f in files] # Combine the list of dataframes df = pd.concat(dfs, ignore_index=True) # Add a new column df['Source'] = np.repeat([f'S{i}' for i in range(len(dfs))], [len(df) for df in dfs]) ``` Option 4: One liners using .assign to create the new column, with attribution to a comment from C8H10N4O2 ```py df = pd.concat((pd.read_csv(f).assign(filename=f.stem) for f in files), ignore_index=True) ``` or ```py df = pd.concat((pd.read_csv(f).assign(Source=f'S{i}') for i, f in enumerate(files)), ignore_index=True) ```", "best_answers_score":0.8, "library_name":"pandas", "question_url":"https:\/\/stackoverflow.com\/questions\/20906474\/import-multiple-csv-files-into-pandas-and-concatenate-into-one-dataframe", "best_answers_votes":903, "question_length":599, "response_length":2612 }, { "question":"UnicodeDecodeError when reading CSV file in Pandas I'm running a program which is processing 30,000 similar files. A random number of them are stopping and producing this error... 
```none File \"C:\\Importer\\src\\dfman\\importer.py\", line 26, in import_chr data = pd.read_csv(filepath, names=fields) File \"C:\\Python33\\lib\\site-packages\\pandas\\io\\parsers.py\", line 400, in parser_f return _read(filepath_or_buffer, kwds) File \"C:\\Python33\\lib\\site-packages\\pandas\\io\\parsers.py\", line 205, in _read return parser.read() File \"C:\\Python33\\lib\\site-packages\\pandas\\io\\parsers.py\", line 608, in read ret = self._engine.read(nrows) File \"C:\\Python33\\lib\\site-packages\\pandas\\io\\parsers.py\", line 1028, in read data = self._reader.read(nrows) File \"parser.pyx\", line 706, in pandas.parser.TextReader.read (pandas\\parser.c:6745) File \"parser.pyx\", line 728, in pandas.parser.TextReader._read_low_memory (pandas\\parser.c:6964) File \"parser.pyx\", line 804, in pandas.parser.TextReader._read_rows (pandas\\parser.c:7780) File \"parser.pyx\", line 890, in pandas.parser.TextReader._convert_column_data (pandas\\parser.c:8793) File \"parser.pyx\", line 950, in pandas.parser.TextReader._convert_tokens (pandas\\parser.c:9484) File \"parser.pyx\", line 1026, in pandas.parser.TextReader._convert_with_dtype (pandas\\parser.c:10642) File \"parser.pyx\", line 1046, in pandas.parser.TextReader._string_convert (pandas\\parser.c:10853) File \"parser.pyx\", line 1278, in pandas.parser._string_box_utf8 (pandas\\parser.c:15657) UnicodeDecodeError: 'utf-8' codec can't decode byte 0xda in position 6: invalid continuation byte ``` The source\/creation of these files all come from the same place. What's the best way to correct this to proceed with the import?", "response":"read_csv takes an encoding option to deal with files in different formats. I mostly use read_csv('file', encoding = \"ISO-8859-1\"), or alternatively encoding = \"utf-8\" for reading, and generally utf-8 for to_csv. 
You can also use one of several alias options like 'latin' or 'cp1252' (Windows) instead of 'ISO-8859-1' (see python docs, also for numerous other encodings you may encounter). See relevant Pandas documentation, python docs examples on csv files, and plenty of related questions here on SO. A good background resource is What every developer should know about unicode and character sets. To detect the encoding (assuming the file contains non-ascii characters), you can use enca (see man page) or file -i (linux) or file -I (osx) (see man page).", "best_answers_score":0.8, "library_name":"pandas", "question_url":"https:\/\/stackoverflow.com\/questions\/18171739\/unicodedecodeerror-when-reading-csv-file-in-pandas", "best_answers_votes":1237, "question_length":1721, "response_length":757 }, { "question":"How to sort pandas dataframe by one column I have a dataframe like this: ```none 0 1 2 0 354.7 April 4.0 1 55.4 August 8.0 2 176.5 December 12.0 3 95.5 February 2.0 4 85.6 January 1.0 5 152 July 7.0 6 238.7 June 6.0 7 104.8 March 3.0 8 283.5 May 5.0 9 278.8 November 11.0 10 249.6 October 10.0 11 212.7 September 9.0 ``` As you can see, months are not in calendar order. So I created a second column to get the month number corresponding to each month (1-12). From there, how can I sort this dataframe according to calendar months' order?", "response":"Use sort_values to sort the df by a specific column's values: ``` In [18]: df.sort_values('2') Out[18]: 0 1 2 4 85.6 January 1.0 3 95.5 February 2.0 7 104.8 March 3.0 0 354.7 April 4.0 8 283.5 May 5.0 6 238.7 June 6.0 5 152.0 July 7.0 1 55.4 August 8.0 11 212.7 September 9.0 10 249.6 October 10.0 9 278.8 November 11.0 2 176.5 December 12.0 ``` If you want to sort by two columns, pass a list of column labels to sort_values with the column labels ordered according to sort priority. If you use df.sort_values(['2', '0']), the result would be sorted by column 2 then column 0. 
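For instance, a tiny sketch with made-up column names and values (not the OP's data):

```py
import pandas as pd

df = pd.DataFrame({'amount': [55.4, 354.7, 85.6, 100.0],
                   'month_no': [8, 4, 1, 4]})

# Sort by 'month_no' first; the tie between the two 4s is broken by 'amount'.
out = df.sort_values(['month_no', 'amount'])
print(out['month_no'].tolist())  # [1, 4, 4, 8]
print(out['amount'].tolist())    # [85.6, 100.0, 354.7, 55.4]
```

Passing ascending=[True, False] (one flag per column) keeps the primary sort ascending while breaking ties in descending order.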
Granted, this does not really make sense for this example because each value in df['2'] is unique.", "best_answers_score":0.8, "library_name":"pandas", "question_url":"https:\/\/stackoverflow.com\/questions\/37787698\/how-to-sort-pandas-dataframe-by-one-column", "best_answers_votes":860, "question_length":538, "response_length":676 }, { "question":"How to delete rows from a pandas DataFrame based on a conditional expression [duplicate] This question already has answers here: Deleting DataFrame row in Pandas based on column value (20 answers) Closed 5 years ago. I have a pandas DataFrame and I want to delete rows from it where the length of the string in a particular column is greater than 2. I expect to be able to do this (per this answer): ``` df[(len(df['column name']) < 2)] ``` but I just get the error: ``` KeyError: u'no item named False' ``` What am I doing wrong? (Note: I know I can use df.dropna() to get rid of rows that contain any NaN, but I didn't see how to remove rows based on a conditional expression.)", "response":"To directly answer this question's original title \"How to delete rows from a pandas DataFrame based on a conditional expression\" (which I understand is not necessarily the OP's problem but could help other users coming across this question) one way to do this is to use the drop method: ``` df = df.drop(some labels) df = df.drop(df[<some boolean condition>].index) ``` Example To remove all rows where column 'score' is < 50 and > 20 ``` df = df.drop(df[(df.score < 50) & (df.score > 20)].index) ```", "best_answers_score":0.8, "library_name":"pandas", "question_url":"https:\/\/stackoverflow.com\/questions\/13851535\/how-to-delete-rows-from-a-pandas-dataframe-based-on-a-conditional-expression", "best_answers_votes":1582, "question_length":679, "response_length":447 }, { "question":"How to replace NaN values in a dataframe column I have a Pandas Dataframe as below: ```none itm Date Amount 67 420 2012-09-30 00:00:00 65211 68 421 2012-09-09 00:00:00 29424 69 421 2012-09-16 00:00:00 29877 70 421
2012-09-23 00:00:00 30990 71 421 2012-09-30 00:00:00 61303 72 485 2012-09-09 00:00:00 71781 73 485 2012-09-16 00:00:00 NaN 74 485 2012-09-23 00:00:00 11072 75 485 2012-09-30 00:00:00 113702 76 489 2012-09-09 00:00:00 64731 77 489 2012-09-16 00:00:00 NaN ``` When I try to apply a function to the Amount column, I get the following error: ```none ValueError: cannot convert float NaN to integer ``` I have tried applying a function using math.isnan, pandas' .replace method, .sparse data attribute from pandas 0.9, if NaN == NaN statement in a function; I have also looked at this Q\/A; none of them works. How do I do it?", "response":"DataFrame.fillna() or Series.fillna() will do this for you. Example: ``` In [7]: df Out[7]: 0 1 0 NaN NaN 1 -0.494375 0.570994 2 NaN NaN 3 1.876360 -0.229738 4 NaN NaN In [8]: df.fillna(0) Out[8]: 0 1 0 0.000000 0.000000 1 -0.494375 0.570994 2 0.000000 0.000000 3 1.876360 -0.229738 4 0.000000 0.000000 ``` To fill the NaNs in only one column, select just that column. ``` In [12]: df[1] = df[1].fillna(0) In [13]: df Out[13]: 0 1 0 NaN 0.000000 1 -0.494375 0.570994 2 NaN 0.000000 3 1.876360 -0.229738 4 NaN 0.000000 ``` Or you can use the built in column-specific functionality: ``` df = df.fillna({1: 0}) ```", "best_answers_score":0.8, "library_name":"pandas", "question_url":"https:\/\/stackoverflow.com\/questions\/13295735\/how-to-replace-nan-values-in-a-dataframe-column", "best_answers_votes":1019, "question_length":834, "response_length":611 }, { "question":"Create new column based on values from other columns \/ apply a function of multiple columns, row-wise in Pandas I want to apply my custom function (it uses an if-else ladder) to these six columns (ERI_Hispanic, ERI_AmerInd_AKNatv, ERI_Asian, ERI_Black_Afr.Amer, ERI_HI_PacIsl, ERI_White) in each row of my dataframe. I've tried different methods from other questions but still can't seem to find the right answer for my problem. 
The critical piece of this is that if the person is counted as Hispanic they can't be counted as anything else. Even if they have a \"1\" in another ethnicity column they still are counted as Hispanic not two or more races. Similarly, if the sum of all the ERI columns is greater than 1 they are counted as two or more races and can't be counted as a unique ethnicity(except for Hispanic). It's almost like doing a for loop through each row and if each record meets a criterion they are added to one list and eliminated from the original. From the dataframe below I need to calculate a new column based on the following spec in SQL: CRITERIA ``` IF [ERI_Hispanic] = 1 THEN RETURN \u201cHispanic\u201d ELSE IF SUM([ERI_AmerInd_AKNatv] + [ERI_Asian] + [ERI_Black_Afr.Amer] + [ERI_HI_PacIsl] + [ERI_White]) > 1 THEN RETURN \u201cTwo or More\u201d ELSE IF [ERI_AmerInd_AKNatv] = 1 THEN RETURN \u201cA\/I AK Native\u201d ELSE IF [ERI_Asian] = 1 THEN RETURN \u201cAsian\u201d ELSE IF [ERI_Black_Afr.Amer] = 1 THEN RETURN \u201cBlack\/AA\u201d ELSE IF [ERI_HI_PacIsl] = 1 THEN RETURN \u201cHaw\/Pac Isl.\u201d ELSE IF [ERI_White] = 1 THEN RETURN \u201cWhite\u201d ``` Comment: If the ERI Flag for Hispanic is True (1), the employee is classified as \u201cHispanic\u201d Comment: If more than 1 non-Hispanic ERI Flag is true, return \u201cTwo or More\u201d DATAFRAME ``` lname fname rno_cd eri_afr_amer eri_asian eri_hawaiian eri_hispanic eri_nat_amer eri_white rno_defined 0 MOST JEFF E 0 0 0 0 0 1 White 1 CRUISE TOM E 0 0 0 1 0 0 White 2 DEPP JOHNNY 0 0 0 0 0 1 Unknown 3 DICAP LEO 0 0 0 0 0 1 Unknown 4 BRANDO MARLON E 0 0 0 0 0 0 White 5 HANKS TOM 0 0 0 0 0 1 Unknown 6 DENIRO ROBERT E 0 1 0 0 0 1 White 7 PACINO AL E 0 0 0 0 0 1 White 8 WILLIAMS ROBIN E 0 0 1 0 0 0 White 9 EASTWOOD CLINT E 0 0 0 0 0 1 White ```", "response":"OK, two steps to this - first is to write a function that does the translation you want - I've put an example together based on your pseudo-code: ``` def label_race(row): if 
row['eri_hispanic'] == 1: return 'Hispanic' if row['eri_afr_amer'] + row['eri_asian'] + row['eri_hawaiian'] + row['eri_nat_amer'] + row['eri_white'] > 1: return 'Two Or More' if row['eri_nat_amer'] == 1: return 'A\/I AK Native' if row['eri_asian'] == 1: return 'Asian' if row['eri_afr_amer'] == 1: return 'Black\/AA' if row['eri_hawaiian'] == 1: return 'Haw\/Pac Isl.' if row['eri_white'] == 1: return 'White' return 'Other' ``` You may want to go over this, but it seems to do the trick - notice that the parameter going into the function is considered to be a Series object labelled \"row\". Next, use the apply function in pandas to apply the function - e.g. ``` df.apply(label_race, axis=1) ``` Note the axis=1 specifier, that means that the application is done at a row, rather than a column level. The results are here: ``` 0 White 1 Hispanic 2 White 3 White 4 Other 5 White 6 Two Or More 7 White 8 Haw\/Pac Isl. 9 White ``` If you're happy with those results, then run it again, saving the results into a new column in your original dataframe. ``` df['race_label'] = df.apply(label_race, axis=1) ``` The resultant dataframe looks like this (scroll to the right to see the new column): ``` lname fname rno_cd eri_afr_amer eri_asian eri_hawaiian eri_hispanic eri_nat_amer eri_white rno_defined race_label 0 MOST JEFF E 0 0 0 0 0 1 White White 1 CRUISE TOM E 0 0 0 1 0 0 White Hispanic 2 DEPP JOHNNY NaN 0 0 0 0 0 1 Unknown White 3 DICAP LEO NaN 0 0 0 0 0 1 Unknown White 4 BRANDO MARLON E 0 0 0 0 0 0 White Other 5 HANKS TOM NaN 0 0 0 0 0 1 Unknown White 6 DENIRO ROBERT E 0 1 0 0 0 1 White Two Or More 7 PACINO AL E 0 0 0 0 0 1 White White 8 WILLIAMS ROBIN E 0 0 1 0 0 0 White Haw\/Pac Isl. 
9 EASTWOOD CLINT E 0 0 0 0 0 1 White White ```", "best_answers_score":0.8, "library_name":"pandas", "question_url":"https:\/\/stackoverflow.com\/questions\/26886653\/create-new-column-based-on-values-from-other-columns-apply-a-function-of-multi", "best_answers_votes":793, "question_length":2144, "response_length":1910 }, { "question":"How can I pivot a dataframe? [closed] Closed. This question needs to be more focused. It is not currently accepting answers. Want to improve this question? Guide the asker to update the question so it focuses on a single, specific problem. Narrowing the question will help others answer the question concisely. You may edit the question if you feel you can improve it yourself. If edited, the question will be reviewed and might be reopened. Closed 6 months ago. The community reviewed whether to reopen this question last month and left it closed: Needs more focus Guide the asker to update the question so it focuses on a single, specific problem. Narrowing the question will help others answer the question concisely. You may edit the question if you feel you can improve it yourself. If edited, the question will be reviewed and might be reopened. Improve this question How do I pivot the pandas dataframe df defined at bottom such that the col values become columns, row values become the index, and mean of val0 becomes the values? (In some cases this is called transforming from long-format to wide-format.) (See note at bottom: Why is this question not a duplicate? and why this is thematically one question and not too broad.) Subquestions How to avoid getting ValueError: Index contains duplicate entries, cannot reshape? How do I pivot df defined at bottom, such that the col values become columns, row values become the index, and mean of val0 are the values? ``` col col0 col1 col2 col3 col4 row row0 0.77 0.605 NaN 0.860 0.65 row2 0.13 NaN 0.395 0.500 0.25 row3 NaN 0.310 NaN 0.545 NaN row4 NaN 0.100 0.395 0.760 0.24 ``` How do I pivot... ... 
so that missing values are 0? ``` col col0 col1 col2 col3 col4 row row0 0.77 0.605 0.000 0.860 0.65 row2 0.13 0.000 0.395 0.500 0.25 row3 0.00 0.310 0.000 0.545 0.00 row4 0.00 0.100 0.395 0.760 0.24 ``` ... to do an aggregate function other than mean, like sum? ``` col col0 col1 col2 col3 col4 row row0 0.77 1.21 0.00 0.86 0.65 row2 0.13 0.00 0.79 0.50 0.50 row3 0.00 0.31 0.00 1.09 0.00 row4 0.00 0.10 0.79 1.52 0.24 ``` ... to do more than one aggregation at a time? ``` sum mean col col0 col1 col2 col3 col4 col0 col1 col2 col3 col4 row row0 0.77 1.21 0.00 0.86 0.65 0.77 0.605 0.000 0.860 0.65 row2 0.13 0.00 0.79 0.50 0.50 0.13 0.000 0.395 0.500 0.25 row3 0.00 0.31 0.00 1.09 0.00 0.00 0.310 0.000 0.545 0.00 row4 0.00 0.10 0.79 1.52 0.24 0.00 0.100 0.395 0.760 0.24 ``` ... to aggregate over multiple 'value' columns? ``` val0 val1 col col0 col1 col2 col3 col4 col0 col1 col2 col3 col4 row row0 0.77 0.605 0.000 0.860 0.65 0.01 0.745 0.00 0.010 0.02 row2 0.13 0.000 0.395 0.500 0.25 0.45 0.000 0.34 0.440 0.79 row3 0.00 0.310 0.000 0.545 0.00 0.00 0.230 0.00 0.075 0.00 row4 0.00 0.100 0.395 0.760 0.24 0.00 0.070 0.42 0.300 0.46 ``` ... to subdivide by multiple columns? (item0,item1,item2..., col0,col1,col2...) ``` item item0 item1 item2 col col2 col3 col4 col0 col1 col2 col3 col4 col0 col1 col3 col4 row row0 0.00 0.00 0.00 0.77 0.00 0.00 0.00 0.00 0.00 0.605 0.86 0.65 row2 0.35 0.00 0.37 0.00 0.00 0.44 0.00 0.00 0.13 0.000 0.50 0.13 row3 0.00 0.00 0.00 0.00 0.31 0.00 0.81 0.00 0.00 0.000 0.28 0.00 row4 0.15 0.64 0.00 0.00 0.10 0.64 0.88 0.24 0.00 0.000 0.00 0.00 ``` ... to subdivide by multiple rows: (key0,key1... row0,row1,row2...)
``` item item0 item1 item2 col col2 col3 col4 col0 col1 col2 col3 col4 col0 col1 col3 col4 key row key0 row0 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.86 0.00 row2 0.00 0.00 0.37 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.50 0.00 row3 0.00 0.00 0.00 0.00 0.31 0.00 0.81 0.00 0.00 0.00 0.00 0.00 row4 0.15 0.64 0.00 0.00 0.00 0.00 0.00 0.24 0.00 0.00 0.00 0.00 key1 row0 0.00 0.00 0.00 0.77 0.00 0.00 0.00 0.00 0.00 0.81 0.00 0.65 row2 0.35 0.00 0.00 0.00 0.00 0.44 0.00 0.00 0.00 0.00 0.00 0.13 row3 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.28 0.00 row4 0.00 0.00 0.00 0.00 0.10 0.00 0.00 0.00 0.00 0.00 0.00 0.00 key2 row0 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.40 0.00 0.00 row2 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.13 0.00 0.00 0.00 row4 0.00 0.00 0.00 0.00 0.00 0.64 0.88 0.00 0.00 0.00 0.00 0.00 ``` ... to aggregate the frequency in which the column and rows occur together, aka \"cross tabulation\"? ``` col col0 col1 col2 col3 col4 row row0 1 2 0 1 1 row2 1 0 2 1 2 row3 0 1 0 2 0 row4 0 1 2 2 1 ``` ... to convert a DataFrame from long-to-wide by pivoting on ONLY two columns? Given: ``` np.random.seed([3, 1415]) df2 = pd.DataFrame({'A': list('aaaabbbc'), 'B': np.random.choice(15, 8)}) df2 A B 0 a 0 1 a 11 2 a 2 3 a 11 4 b 10 5 b 10 6 b 14 7 c 7 ``` The expected should look something like ``` a b c 0 0.0 10.0 7.0 1 11.0 10.0 NaN 2 2.0 14.0 NaN 3 11.0 NaN NaN ``` ... to flatten the multiple index to a single multi-index after pivot? From: ``` 1 2 1 1 2 a 2 1 1 b 2 1 0 c 1 0 0 ``` To: ``` 1|1 2|1 2|2 a 2 1 1 b 2 1 0 c 1 0 0 ``` Setup Consider a dataframe df with columns 'key', 'row', 'item', 'col', and random float values 'val0', 'val1'. I conspicuously named the columns and relevant column values to correspond with how I want to pivot them. 
``` import numpy as np import pandas as pd from numpy.core.defchararray import add np.random.seed([3,1415]) n = 20 cols = np.array(['key', 'row', 'item', 'col']) arr1 = (np.random.randint(5, size=(n, 4)) \/\/ [2, 1, 2, 1]).astype(str) df = pd.DataFrame( add(cols, arr1), columns=cols ).join( pd.DataFrame(np.random.rand(n, 2).round(2)).add_prefix('val') ) print(df) ``` ``` key row item col val0 val1 0 key0 row3 item1 col3 0.81 0.04 1 key1 row2 item1 col2 0.44 0.07 2 key1 row0 item1 col0 0.77 0.01 3 key0 row4 item0 col2 0.15 0.59 4 key1 row0 item2 col1 0.81 0.64 5 key1 row2 item2 col4 0.13 0.88 6 key2 row4 item1 col3 0.88 0.39 7 key1 row4 item1 col1 0.10 0.07 8 key1 row0 item2 col4 0.65 0.02 9 key1 row2 item0 col2 0.35 0.61 10 key2 row0 item2 col1 0.40 0.85 11 key2 row4 item1 col2 0.64 0.25 12 key0 row2 item2 col3 0.50 0.44 13 key0 row4 item1 col4 0.24 0.46 14 key1 row3 item2 col3 0.28 0.11 15 key0 row3 item1 col1 0.31 0.23 16 key0 row0 item2 col3 0.86 0.01 17 key0 row4 item0 col3 0.64 0.21 18 key2 row2 item2 col0 0.13 0.45 19 key0 row2 item0 col4 0.37 0.70 ``` Why is this question not a duplicate? and more useful than the following autosuggestions: How to pivot a dataframe in Pandas? only covers the specific case of 'Country' to row-index, values of 'Indicator' for 'Year' to multiple columns and no aggregation of values. pandas pivot table to data frame asks how to pivot in pandas like in R, i.e. autogenerate an individual column for each value of strength... pandas pivoting a dataframe, duplicate rows asks about the syntax for pivoting multiple columns, without needing to list them all. None of the existing questions and answers are comprehensive, so this is an attempt at a canonical question and answer that encompasses all aspects of pivoting.", "response":"Here is a list of idioms we can use to pivot pd.DataFrame.pivot_table A glorified version of groupby with more intuitive API. For many people, this is the preferred approach. 
And it is the intended approach by the developers. Specify row level, column levels, values to be aggregated, and function(s) to perform aggregations. pd.DataFrame.groupby + pd.DataFrame.unstack Good general approach for doing just about any type of pivot You specify all columns that will constitute the pivoted row levels and column levels in one group by. You follow that by selecting the remaining columns you want to aggregate and the function(s) you want to perform the aggregation. Finally, you unstack the levels that you want to be in the column index. pd.DataFrame.set_index + pd.DataFrame.unstack Convenient and intuitive for some (myself included). Cannot handle duplicate grouped keys. Similar to the groupby paradigm, we specify all columns that will eventually be either row or column levels and set those to be the index. We then unstack the levels we want in the columns. If either the remaining index levels or column levels are not unique, this method will fail. pd.DataFrame.pivot Very similar to set_index in that it shares the duplicate key limitation. The API is very limited as well. It only takes scalar values for index, columns, values. Similar to the pivot_table method in that we select rows, columns, and values on which to pivot. However, we cannot aggregate and if either rows or columns are not unique, this method will fail. pd.crosstab This a specialized version of pivot_table and in its purest form is the most intuitive way to perform several tasks. pd.factorize + np.bincount This is a highly advanced technique that is very obscure but is very fast. It cannot be used in all circumstances, but when it can be used and you are comfortable using it, you will reap the performance rewards. pd.get_dummies + pd.DataFrame.dot I use this for cleverly performing cross tabulation. 
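As a quick taste of the last two idioms, here is a tiny made-up frame on which the one-hot dot-product trick reproduces pd.crosstab (the .astype(int) casts are added here because newer pandas versions return boolean dummies, and we want the dot product to count occurrences):

```py
import pandas as pd

toy = pd.DataFrame({'row': ['r0', 'r0', 'r1', 'r1', 'r1'],
                    'col': ['c0', 'c1', 'c0', 'c0', 'c1']})

# Cross tabulation as a dot product of one-hot encodings ...
ct1 = pd.get_dummies(toy['row']).astype(int).T.dot(
    pd.get_dummies(toy['col']).astype(int))

# ... agrees with the dedicated helper.
ct2 = pd.crosstab(toy['row'], toy['col'])

print(ct1.values.tolist())  # [[1, 1], [2, 1]]
```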
See also: Reshaping and pivot tables \u2014 pandas User Guide Question 1 Why do I get ValueError: Index contains duplicate entries, cannot reshape This occurs because pandas is attempting to reindex either a columns or index object with duplicate entries. There are varying methods to use that can perform a pivot. Some of them are not well suited to when there are duplicates of the keys on which it is being asked to pivot. For example: Consider pd.DataFrame.pivot. I know there are duplicate entries that share the row and col values: ``` df.duplicated(['row', 'col']).any() True ``` So when I pivot using ``` df.pivot(index='row', columns='col', values='val0') ``` I get the error mentioned above. In fact, I get the same error when I try to perform the same task with: ``` df.set_index(['row', 'col'])['val0'].unstack() ``` Examples What I'm going to do for each subsequent question is to answer it using pd.DataFrame.pivot_table. Then I'll provide alternatives to perform the same task. Questions 2 and 3 How do I pivot df such that the col values are columns, row values are the index, and mean of val0 are the values? pd.DataFrame.pivot_table ``` df.pivot_table( values='val0', index='row', columns='col', aggfunc='mean') col col0 col1 col2 col3 col4 row row0 0.77 0.605 NaN 0.860 0.65 row2 0.13 NaN 0.395 0.500 0.25 row3 NaN 0.310 NaN 0.545 NaN row4 NaN 0.100 0.395 0.760 0.24 ``` aggfunc='mean' is the default and I didn't have to set it. I included it to be explicit. How do I make it so that missing values are 0? pd.DataFrame.pivot_table fill_value is not set by default. I tend to set it appropriately. In this case I set it to 0. 
``` df.pivot_table( values='val0', index='row', columns='col', fill_value=0, aggfunc='mean') col col0 col1 col2 col3 col4 row row0 0.77 0.605 0.000 0.860 0.65 row2 0.13 0.000 0.395 0.500 0.25 row3 0.00 0.310 0.000 0.545 0.00 row4 0.00 0.100 0.395 0.760 0.24 ``` pd.DataFrame.groupby ``` df.groupby(['row', 'col'])['val0'].mean().unstack(fill_value=0) ``` pd.crosstab ``` pd.crosstab( index=df['row'], columns=df['col'], values=df['val0'], aggfunc='mean').fillna(0) ``` Question 4 Can I get something other than mean, like maybe sum? pd.DataFrame.pivot_table ``` df.pivot_table( values='val0', index='row', columns='col', fill_value=0, aggfunc='sum') col col0 col1 col2 col3 col4 row row0 0.77 1.21 0.00 0.86 0.65 row2 0.13 0.00 0.79 0.50 0.50 row3 0.00 0.31 0.00 1.09 0.00 row4 0.00 0.10 0.79 1.52 0.24 ``` pd.DataFrame.groupby ``` df.groupby(['row', 'col'])['val0'].sum().unstack(fill_value=0) ``` pd.crosstab ``` pd.crosstab( index=df['row'], columns=df['col'], values=df['val0'], aggfunc='sum').fillna(0) ``` Question 5 Can I do more than one aggregation at a time? Notice that for pivot_table and crosstab I needed to pass a list of callables. On the other hand, groupby.agg is able to take strings for a limited number of special functions. groupby.agg would also have taken the same callables we passed to the others, but it is often more efficient to leverage the string function names as there are efficiencies to be gained.
pd.DataFrame.pivot_table ``` df.pivot_table( values='val0', index='row', columns='col', fill_value=0, aggfunc=[np.size, np.mean]) size mean col col0 col1 col2 col3 col4 col0 col1 col2 col3 col4 row row0 1 2 0 1 1 0.77 0.605 0.000 0.860 0.65 row2 1 0 2 1 2 0.13 0.000 0.395 0.500 0.25 row3 0 1 0 2 0 0.00 0.310 0.000 0.545 0.00 row4 0 1 2 2 1 0.00 0.100 0.395 0.760 0.24 ``` pd.DataFrame.groupby ``` df.groupby(['row', 'col'])['val0'].agg(['size', 'mean']).unstack(fill_value=0) ``` pd.crosstab ``` pd.crosstab( index=df['row'], columns=df['col'], values=df['val0'], aggfunc=[np.size, np.mean]).fillna(0, downcast='infer') ``` Question 6 Can I aggregate over multiple value columns? pd.DataFrame.pivot_table we pass values=['val0', 'val1'] but we could've left that off completely ``` df.pivot_table( values=['val0', 'val1'], index='row', columns='col', fill_value=0, aggfunc='mean') val0 val1 col col0 col1 col2 col3 col4 col0 col1 col2 col3 col4 row row0 0.77 0.605 0.000 0.860 0.65 0.01 0.745 0.00 0.010 0.02 row2 0.13 0.000 0.395 0.500 0.25 0.45 0.000 0.34 0.440 0.79 row3 0.00 0.310 0.000 0.545 0.00 0.00 0.230 0.00 0.075 0.00 row4 0.00 0.100 0.395 0.760 0.24 0.00 0.070 0.42 0.300 0.46 ``` pd.DataFrame.groupby ``` df.groupby(['row', 'col'])[['val0', 'val1']].mean().unstack(fill_value=0) ``` Question 7 Can I subdivide by multiple columns?
pd.DataFrame.pivot_table ``` df.pivot_table( values='val0', index='row', columns=['item', 'col'], fill_value=0, aggfunc='mean') item item0 item1 item2 col col2 col3 col4 col0 col1 col2 col3 col4 col0 col1 col3 col4 row row0 0.00 0.00 0.00 0.77 0.00 0.00 0.00 0.00 0.00 0.605 0.86 0.65 row2 0.35 0.00 0.37 0.00 0.00 0.44 0.00 0.00 0.13 0.000 0.50 0.13 row3 0.00 0.00 0.00 0.00 0.31 0.00 0.81 0.00 0.00 0.000 0.28 0.00 row4 0.15 0.64 0.00 0.00 0.10 0.64 0.88 0.24 0.00 0.000 0.00 0.00 ``` pd.DataFrame.groupby ``` df.groupby( ['row', 'item', 'col'] )['val0'].mean().unstack(['item', 'col']).fillna(0).sort_index(axis=1) ``` Question 8 Can I subdivide by multiple rows? pd.DataFrame.pivot_table ``` df.pivot_table( values='val0', index=['key', 'row'], columns=['item', 'col'], fill_value=0, aggfunc='mean') item item0 item1 item2 col col2 col3 col4 col0 col1 col2 col3 col4 col0 col1 col3 col4 key row key0 row0 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.86 0.00 row2 0.00 0.00 0.37 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.50 0.00 row3 0.00 0.00 0.00 0.00 0.31 0.00 0.81 0.00 0.00 0.00 0.00 0.00 row4 0.15 0.64 0.00 0.00 0.00 0.00 0.00 0.24 0.00 0.00 0.00 0.00 key1 row0 0.00 0.00 0.00 0.77 0.00 0.00 0.00 0.00 0.00 0.81 0.00 0.65 row2 0.35 0.00 0.00 0.00 0.00 0.44 0.00 0.00 0.00 0.00 0.00 0.13 row3 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.28 0.00 row4 0.00 0.00 0.00 0.00 0.10 0.00 0.00 0.00 0.00 0.00 0.00 0.00 key2 row0 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.40 0.00 0.00 row2 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.13 0.00 0.00 0.00 row4 0.00 0.00 0.00 0.00 0.00 0.64 0.88 0.00 0.00 0.00 0.00 0.00 ``` pd.DataFrame.groupby ``` df.groupby( ['key', 'row', 'item', 'col'] )['val0'].mean().unstack(['item', 'col']).fillna(0).sort_index(axis=1) ``` pd.DataFrame.set_index because the set of keys are unique for both rows and columns ``` df.set_index( ['key', 'row', 'item', 'col'] )['val0'].unstack(['item', 'col']).fillna(0).sort_index(axis=1) ``` Question 9 Can I aggregate the
frequency in which the column and rows occur together, aka \"cross tabulation\"? pd.DataFrame.pivot_table ``` df.pivot_table(index='row', columns='col', fill_value=0, aggfunc='size') col col0 col1 col2 col3 col4 row row0 1 2 0 1 1 row2 1 0 2 1 2 row3 0 1 0 2 0 row4 0 1 2 2 1 ``` pd.DataFrame.groupby ``` df.groupby(['row', 'col'])['val0'].size().unstack(fill_value=0) ``` pd.crosstab ``` pd.crosstab(df['row'], df['col']) ``` pd.factorize + np.bincount ``` # get integer factorization `i` and unique values `r` # for column `'row'` i, r = pd.factorize(df['row'].values) # get integer factorization `j` and unique values `c` # for column `'col'` j, c = pd.factorize(df['col'].values) # `n` will be the number of rows # `m` will be the number of columns n, m = r.size, c.size # `i * m + j` is a clever way of counting the # factorization bins assuming a flat array of length # `n * m`. Which is why we subsequently reshape as `(n, m)` b = np.bincount(i * m + j, minlength=n * m).reshape(n, m) # BTW, whenever I read this, I think 'Bean, Rice, and Cheese' pd.DataFrame(b, r, c) col3 col2 col0 col1 col4 row3 2 0 0 1 0 row2 1 2 1 0 2 row0 1 0 1 2 1 row4 2 2 0 1 1 ``` pd.get_dummies ``` pd.get_dummies(df['row']).T.dot(pd.get_dummies(df['col'])) col0 col1 col2 col3 col4 row0 1 2 0 1 1 row2 1 0 2 1 2 row3 0 1 0 2 0 row4 0 1 2 2 1 ``` Question 10 How do I convert a DataFrame from long to wide by pivoting on ONLY two columns? DataFrame.pivot The first step is to assign a number to each row - this number will be the row index of that value in the pivoted result. This is done using GroupBy.cumcount: ``` df2.insert(0, 'count', df2.groupby('A').cumcount()) df2 count A B 0 0 a 0 1 1 a 11 2 2 a 2 3 3 a 11 4 0 b 10 5 1 b 10 6 2 b 14 7 0 c 7 ``` The second step is to use the newly created column as the index to call DataFrame.pivot. 
``` df2.pivot(*df2) # df2.pivot(index='count', columns='A', values='B') A a b c count 0 0.0 10.0 7.0 1 11.0 10.0 NaN 2 2.0 14.0 NaN 3 11.0 NaN NaN ``` DataFrame.pivot_table Whereas DataFrame.pivot only accepts columns, DataFrame.pivot_table also accepts arrays, so the GroupBy.cumcount can be passed directly as the index without creating an explicit column. ``` df2.pivot_table(index=df2.groupby('A').cumcount(), columns='A', values='B') A a b c 0 0.0 10.0 7.0 1 11.0 10.0 NaN 2 2.0 14.0 NaN 3 11.0 NaN NaN ``` Question 11 How do I flatten the multiple index to a single index after a pivot? If the columns are of type object, join with a string ``` df.columns = df.columns.map('|'.join) ``` else use format ``` df.columns = df.columns.map('{0[0]}|{0[1]}'.format) ```", "best_answers_score":0.8, "library_name":"pandas", "question_url":"https:\/\/stackoverflow.com\/questions\/47152691\/how-can-i-pivot-a-dataframe", "best_answers_votes":481, "question_length":6869, "response_length":10972 }, { "question":"Convert Python dict into a dataframe I have a Python dictionary: ```py {u'2012-07-01': 391, u'2012-07-02': 392, u'2012-07-03': 392, u'2012-07-04': 392, u'2012-07-05': 392, u'2012-07-06': 392} ``` I would like to convert this into a pandas dataframe by having the dates and their corresponding values as two separate columns; the expected result looks like: ```none Date DateValue 0 2012-07-01 391 1 2012-07-02 392 2 2012-07-03 392 . 2012-07-04 392 . ... ... ``` Is there a direct way to do this?", "response":"The error here arises because the DataFrame constructor is being called with scalar values (where it expects values to be a list\/dict\/... i.e. to have multiple columns): ``` pd.DataFrame(d) ValueError: If using all scalar values, you must pass an index ``` You could take the items from the dictionary (i.e.
the key-value pairs): ``` In [11]: pd.DataFrame(d.items()) # or list(d.items()) in python 3 Out[11]: 0 1 0 2012-07-01 391 1 2012-07-02 392 2 2012-07-03 392 3 2012-07-04 392 4 2012-07-05 392 5 2012-07-06 392 In [12]: pd.DataFrame(d.items(), columns=['Date', 'DateValue']) Out[12]: Date DateValue 0 2012-07-01 391 1 2012-07-02 392 2 2012-07-03 392 3 2012-07-04 392 4 2012-07-05 392 5 2012-07-06 392 ``` But I think it makes more sense to pass the Series constructor: ``` In [20]: s = pd.Series(d, name='DateValue') In [21]: s Out[21]: 2012-07-01 391 2012-07-02 392 2012-07-03 392 2012-07-04 392 2012-07-05 392 2012-07-06 392 Name: DateValue, dtype: int64 In [22]: s.index.name = 'Date' In [23]: s.reset_index() Out[23]: Date DateValue 0 2012-07-01 391 1 2012-07-02 392 2 2012-07-03 392 3 2012-07-04 392 4 2012-07-05 392 5 2012-07-06 392 ```", "best_answers_score":0.8, "library_name":"pandas", "question_url":"https:\/\/stackoverflow.com\/questions\/18837262\/convert-python-dict-into-a-dataframe", "best_answers_votes":895, "question_length":495, "response_length":1135 }, { "question":"Pandas read_csv: low_memory and dtype options ``` df = pd.read_csv('somefile.csv') ``` ...gives an error: ...\/site-packages\/pandas\/io\/parsers.py:1130: DtypeWarning: Columns (4,5,7,16) have mixed types. Specify dtype option on import or set low_memory=False. Why is the dtype option related to low_memory, and why might low_memory=False help?", "response":"The deprecated low_memory option The low_memory option is not properly deprecated, but it should be, since it does not actually do anything differently[source] The reason you get this low_memory warning is because guessing dtypes for each column is very memory demanding. Pandas tries to determine what dtype to set by analyzing the data in each column. Dtype Guessing (very bad) Pandas can only determine what dtype a column should have once the whole file is read. 
This means nothing can really be parsed before the whole file is read, unless you risk having to change the dtype of that column when you read the last value. Consider the example of one file which has a column called user_id. It contains 10 million rows where the user_id is always numbers. Since pandas cannot know it is only numbers, it will probably keep it as the original strings until it has read the whole file. Specifying dtypes (should always be done): adding ``` dtype={'user_id': int} ``` to the pd.read_csv() call will let pandas know, as soon as it starts reading the file, that this column contains only integers. Also worth noting is that if the last line in the file had \"foobar\" written in the user_id column, the loading would crash if the above dtype was specified. Example of broken data that breaks when dtypes are defined ``` import pandas as pd try: from StringIO import StringIO except ImportError: from io import StringIO csvdata = \"\"\"user_id,username 1,Alice 3,Bob foobar,Caesar\"\"\" sio = StringIO(csvdata) pd.read_csv(sio, dtype={\"user_id\": int, \"username\": \"string\"}) ValueError: invalid literal for long() with base 10: 'foobar' ``` dtypes are typically a numpy thing, read more about them here: http:\/\/docs.scipy.org\/doc\/numpy\/reference\/generated\/numpy.dtype.html What dtypes exist? We have access to numpy dtypes: float, int, bool, timedelta64[ns] and datetime64[ns]. Note that the numpy date\/time dtypes are not time zone aware. Pandas extends this set of dtypes with its own: 'datetime64[ns, tz]', which is a time zone aware timestamp. 'category', which is essentially an enum (strings represented by integer keys to save space). 'period[freq]': not to be confused with a timedelta, these objects are anchored to specific time periods. 'Sparse', 'Sparse[int]', 'Sparse[float]' are for sparse data, or 'data that has a lot of holes in it'; instead of saving the NaN or None in the dataframe it omits the objects, saving space.
'Interval' is a topic of its own but its main use is for indexing. See more here 'Int8', 'Int16', 'Int32', 'Int64', 'UInt8', 'UInt16', 'UInt32', 'UInt64' are all pandas specific integers that are nullable, unlike the numpy variant. 'string' is a specific dtype for working with string data and gives access to the .str attribute on the series. 'boolean' is like the numpy 'bool' but it also supports missing data. Read the complete reference here: Pandas dtype reference Gotchas, caveats, notes Setting dtype=object will silence the above warning, but will not make it more memory efficient, only process efficient if anything. Setting dtype=unicode will not do anything, since to numpy, a unicode is represented as object. Usage of converters @sparrow correctly points out the usage of converters to avoid pandas blowing up when encountering 'foobar' in a column specified as int. I would like to add that converters are really heavy and inefficient to use in pandas and should be used as a last resort. This is because the read_csv process is a single process. CSV files can be processed line by line and thus can be processed by multiple converters in parallel more efficiently by simply cutting the file into segments and running multiple processes, something that pandas does not support. 
But this is a different story.", "best_answers_score":0.8, "library_name":"pandas", "question_url":"https:\/\/stackoverflow.com\/questions\/24251219\/pandas-read-csv-low-memory-and-dtype-options", "best_answers_votes":749, "question_length":341, "response_length":3725 }, { "question":"How to group dataframe rows into list in pandas groupby Given a dataframe, I want to groupby the first column and get second column as lists in rows, so that a dataframe like: ``` a b A 1 A 2 B 5 B 5 B 4 C 6 ``` becomes ``` A [1,2] B [5,5,4] C [6] ``` How do I do this?", "response":"You can do this using groupby to group on the column of interest and then apply list to every group: ``` In [1]: df = pd.DataFrame( {'a':['A','A','B','B','B','C'], 'b':[1,2,5,5,4,6]}) df Out[1]: a b 0 A 1 1 A 2 2 B 5 3 B 5 4 B 4 5 C 6 In [2]: df.groupby('a')['b'].apply(list) Out[2]: a A [1, 2] B [5, 5, 4] C [6] Name: b, dtype: object In [3]: df1 = df.groupby('a')['b'].apply(list).reset_index(name='new') df1 Out[3]: a new 0 A [1, 2] 1 B [5, 5, 4] 2 C [6] ```", "best_answers_score":0.8, "library_name":"pandas", "question_url":"https:\/\/stackoverflow.com\/questions\/22219004\/how-to-group-dataframe-rows-into-list-in-pandas-groupby", "best_answers_votes":791, "question_length":269, "response_length":461 }, { "question":"Get list from pandas dataframe column or row? I have a dataframe df imported from an Excel document like this: ```none cluster load_date budget actual fixed_price A 1\/1\/2014 1000 4000 Y A 2\/1\/2014 12000 10000 Y A 3\/1\/2014 36000 2000 Y B 4\/1\/2014 15000 10000 N B 4\/1\/2014 12000 11500 N B 4\/1\/2014 90000 11000 N C 7\/1\/2014 22000 18000 N C 8\/1\/2014 30000 28960 N C 9\/1\/2014 53000 51200 N ``` I want to be able to return the contents of column 1 df['cluster'] as a list, so I can run a for-loop over it, and create an Excel worksheet for every cluster. Is it also possible to return the contents of a whole column or row to a list? e.g. 
```py list = [], list[column1] or list[df.ix(row1)] ```", "response":"Pandas DataFrame columns are Pandas Series when you pull them out, which you can then call x.tolist() on to turn them into a Python list. Alternatively you cast it with list(x). ```py import pandas as pd data_dict = {'one': pd.Series([1, 2, 3], index=['a', 'b', 'c']), 'two': pd.Series([1, 2, 3, 4], index=['a', 'b', 'c', 'd'])} df = pd.DataFrame(data_dict) print(f\"DataFrame:\\n{df}\\n\") print(f\"column types:\\n{df.dtypes}\") col_one_list = df['one'].tolist() col_one_arr = df['one'].to_numpy() print(f\"\\ncol_one_list:\\n{col_one_list}\\ntype:{type(col_one_list)}\") print(f\"\\ncol_one_arr:\\n{col_one_arr}\\ntype:{type(col_one_arr)}\") ``` Output: ```none DataFrame: one two a 1.0 1 b 2.0 2 c 3.0 3 d NaN 4 column types: one float64 two int64 dtype: object col_one_list: [1.0, 2.0, 3.0, nan] type: <class 'list'> col_one_arr: [ 1. 2. 3. nan] type: <class 'numpy.ndarray'> ```", "best_answers_score":0.8, "library_name":"pandas", "question_url":"https:\/\/stackoverflow.com\/questions\/22341271\/get-list-from-pandas-dataframe-column-or-row", "best_answers_votes":940, "question_length":688, "response_length":828 }, { "question":"Python Pandas: Get index of rows where column matches certain value Given a DataFrame with a column \"BoolCol\", we want to find the indexes of the DataFrame in which the values for \"BoolCol\" == True I currently have the iterating way to do it, which works perfectly: ```py for i in range(100,3000): if df.iloc[i]['BoolCol']== True: print i,df.iloc[i]['BoolCol'] ``` But this is not the correct pandas way to do it. After some research, I am currently using this code: ```py df[df['BoolCol'] == True].index.tolist() ``` This one gives me a list of indexes, but they don't match, when I check them by doing: ```py df.iloc[i]['BoolCol'] ``` The result is actually False!! Which would be the correct pandas way to do this?", "response":"df.iloc[i] returns the ith row of df.
i does not refer to the index label, i is a 0-based index. In contrast, the attribute index returns actual index labels, not numeric row-indices: ``` df.index[df['BoolCol'] == True].tolist() ``` or equivalently, ``` df.index[df['BoolCol']].tolist() ``` You can see the difference quite clearly by playing with a DataFrame with a non-default index that does not equal to the row's numerical position: ``` df = pd.DataFrame({'BoolCol': [True, False, False, True, True]}, index=[10,20,30,40,50]) In [53]: df Out[53]: BoolCol 10 True 20 False 30 False 40 True 50 True [5 rows x 1 columns] In [54]: df.index[df['BoolCol']].tolist() Out[54]: [10, 40, 50] ``` If you want to use the index, ``` In [56]: idx = df.index[df['BoolCol']] In [57]: idx Out[57]: Int64Index([10, 40, 50], dtype='int64') ``` then you can select the rows using loc instead of iloc: ``` In [58]: df.loc[idx] Out[58]: BoolCol 10 True 40 True 50 True [3 rows x 1 columns] ``` Note that loc can also accept boolean arrays: ``` In [55]: df.loc[df['BoolCol']] Out[55]: BoolCol 10 True 40 True 50 True [3 rows x 1 columns] ``` If you have a boolean array, mask, and need ordinal index values, you can compute them using np.flatnonzero: ``` In [110]: np.flatnonzero(df['BoolCol']) Out[112]: array([0, 3, 4]) ``` Use df.iloc to select rows by ordinal index: ``` In [113]: df.iloc[np.flatnonzero(df['BoolCol'])] Out[113]: BoolCol 10 True 40 True 50 True ```", "best_answers_score":0.8, "library_name":"pandas", "question_url":"https:\/\/stackoverflow.com\/questions\/21800169\/python-pandas-get-index-of-rows-where-column-matches-certain-value", "best_answers_votes":813, "question_length":717, "response_length":1451 }, { "question":"Pandas: Get first row value of a given column This seems like a ridiculously easy question... but I'm not seeing the easy answer I was expecting. So, how do I get the value at an nth row of a given column in Pandas? 
(I am particularly interested in the first row, but would be interested in a more general practice as well). For example, let's say I want to pull the 1.2 value in Btime as a variable. Whats the right way to do this? ```py >>> df_test ATime X Y Z Btime C D E 0 1.2 2 15 2 1.2 12 25 12 1 1.4 3 12 1 1.3 13 22 11 2 1.5 1 10 6 1.4 11 20 16 3 1.6 2 9 10 1.7 12 29 12 4 1.9 1 1 9 1.9 11 21 19 5 2.0 0 0 0 2.0 8 10 11 6 2.4 0 0 0 2.4 10 12 15 ```", "response":"To select the ith row, use iloc: ``` In [31]: df_test.iloc[0] Out[31]: ATime 1.2 X 2.0 Y 15.0 Z 2.0 Btime 1.2 C 12.0 D 25.0 E 12.0 Name: 0, dtype: float64 ``` To select the ith value in the Btime column you could use: ``` In [30]: df_test['Btime'].iloc[0] Out[30]: 1.2 ``` There is a difference between df_test['Btime'].iloc[0] (recommended) and df_test.iloc[0]['Btime']: DataFrames store data in column-based blocks (where each block has a single dtype). If you select by column first, a view can be returned (which is quicker than returning a copy) and the original dtype is preserved. In contrast, if you select by row first, and if the DataFrame has columns of different dtypes, then Pandas copies the data into a new Series of object dtype. So selecting columns is a bit faster than selecting rows. Thus, although df_test.iloc[0]['Btime'] works, df_test['Btime'].iloc[0] is a little bit more efficient. There is a big difference between the two when it comes to assignment. df_test['Btime'].iloc[0] = x affects df_test, but df_test.iloc[0]['Btime'] may not. See below for an explanation of why. 
Because a subtle difference in the order of indexing makes a big difference in behavior, it is better to use single indexing assignment: ``` df.iloc[0, df.columns.get_loc('Btime')] = x ``` df.iloc[0, df.columns.get_loc('Btime')] = x (recommended): The recommended way to assign new values to a DataFrame is to avoid chained indexing, and instead use the method shown by andrew, ``` df.loc[df.index[n], 'Btime'] = x ``` or ``` df.iloc[n, df.columns.get_loc('Btime')] = x ``` The latter method is a bit faster, because df.loc has to convert the row and column labels to positional indices, so there is a little less conversion necessary if you use df.iloc instead. df['Btime'].iloc[0] = x works, but is not recommended: Although this works, it is taking advantage of the way DataFrames are currently implemented. There is no guarantee that Pandas has to work this way in the future. In particular, it is taking advantage of the fact that (currently) df['Btime'] always returns a view (not a copy) so df['Btime'].iloc[n] = x can be used to assign a new value at the nth location of the Btime column of df. 
Since Pandas makes no explicit guarantees about when indexers return a view versus a copy, assignments that use chained indexing generally always raise a SettingWithCopyWarning even though in this case the assignment succeeds in modifying df: ``` In [22]: df = pd.DataFrame({'foo':list('ABC')}, index=[0,2,1]) In [24]: df['bar'] = 100 In [25]: df['bar'].iloc[0] = 99 \/home\/unutbu\/data\/binky\/bin\/ipython:1: SettingWithCopyWarning: A value is trying to be set on a copy of a slice from a DataFrame See the caveats in the documentation: http:\/\/pandas.pydata.org\/pandas-docs\/stable\/indexing.html#indexing-view-versus-copy self._setitem_with_indexer(indexer, value) In [26]: df Out[26]: foo bar 0 A 99 <-- assignment succeeded 2 B 100 1 C 100 ``` df.iloc[0]['Btime'] = x does not work: In contrast, assignment with df.iloc[0]['bar'] = 123 does not work because df.iloc[0] is returning a copy: ``` In [66]: df.iloc[0]['bar'] = 123 \/home\/unutbu\/data\/binky\/bin\/ipython:1: SettingWithCopyWarning: A value is trying to be set on a copy of a slice from a DataFrame See the caveats in the documentation: http:\/\/pandas.pydata.org\/pandas-docs\/stable\/indexing.html#indexing-view-versus-copy In [67]: df Out[67]: foo bar 0 A 99 <-- assignment failed 2 B 100 1 C 100 ``` Warning: I had previously suggested df_test.ix[i, 'Btime']. But this is not guaranteed to give you the ith value since ix tries to index by label before trying to index by position. So if the DataFrame has an integer index which is not in sorted order starting at 0, then using ix[i] will return the row labeled i rather than the ith row. 
For example, ``` In [1]: df = pd.DataFrame({'foo':list('ABC')}, index=[0,2,1]) In [2]: df Out[2]: foo 0 A 2 B 1 C In [4]: df.ix[1, 'foo'] Out[4]: 'C' ```", "best_answers_score":0.8, "library_name":"pandas", "question_url":"https:\/\/stackoverflow.com\/questions\/25254016\/pandas-get-first-row-value-of-a-given-column", "best_answers_votes":850, "question_length":656, "response_length":3949 }, { "question":"How to sort a pandas dataFrame by two or more columns? Suppose I have a dataframe with columns a, b and c. I want to sort the dataframe by column b in ascending order, and by column c in descending order. How do I do this?", "response":"As of the 0.17.0 release, the sort method was deprecated in favor of sort_values. sort was completely removed in the 0.20.0 release. The arguments (and results) remain the same: ``` df.sort_values(['a', 'b'], ascending=[True, False]) ``` You can use the ascending argument of sort: ``` df.sort(['a', 'b'], ascending=[True, False]) ``` For example: ``` In [11]: df1 = pd.DataFrame(np.random.randint(1, 5, (10,2)), columns=['a','b']) In [12]: df1.sort(['a', 'b'], ascending=[True, False]) Out[12]: a b 2 1 4 7 1 3 1 1 2 3 1 2 4 3 2 6 4 4 0 4 3 9 4 3 5 4 1 8 4 1 ``` As commented by @renadeen Sort isn't in place by default! So you should assign result of the sort method to a variable or add inplace=True to method call. that is, if you want to reuse df1 as a sorted DataFrame: ``` df1 = df1.sort(['a', 'b'], ascending=[True, False]) ``` or ``` df1.sort(['a', 'b'], ascending=[True, False], inplace=True) ```", "best_answers_score":0.8, "library_name":"pandas", "question_url":"https:\/\/stackoverflow.com\/questions\/17141558\/how-to-sort-a-pandas-dataframe-by-two-or-more-columns", "best_answers_votes":921, "question_length":222, "response_length":906 }, { "question":"Sorting columns in pandas dataframe based on column name I have a dataframe with over 200 columns. 
The issue is that, as they were generated, the order is ``` ['Q1.3','Q6.1','Q1.2','Q1.1',......] ``` I need to sort the columns as follows: ``` ['Q1.1','Q1.2','Q1.3',.....'Q6.1',......] ``` Is there some way for me to do this within Python?", "response":"``` df = df.reindex(sorted(df.columns), axis=1) ``` This assumes that sorting the column names will give the order you want. If your column names won't sort lexicographically (e.g., if you want column Q10.3 to appear after Q9.1), you'll need to sort differently, but that has nothing to do with pandas.", "best_answers_score":0.8, "library_name":"pandas", "question_url":"https:\/\/stackoverflow.com\/questions\/11067027\/sorting-columns-in-pandas-dataframe-based-on-column-name", "best_answers_votes":668, "question_length":332, "response_length":302 }, { "question":"Count the frequency that a value occurs in a dataframe column I have a dataset ```none category cat a cat b cat a ``` I'd like to return something like the following which shows the unique values and their frequencies ```none category freq cat a 2 cat b 1 ```", "response":"Use value_counts() as @DSM commented. ``` In [37]: df = pd.DataFrame({'a':list('abssbab')}) df['a'].value_counts() Out[37]: b 3 a 2 s 2 dtype: int64 ``` Also groupby and count. Many ways to skin a cat here. ``` In [38]: df.groupby('a').count() Out[38]: a a a 2 b 3 s 2 [3 rows x 1 columns] ``` See the online docs. If you wanted to add frequency back to the original dataframe use transform to return an aligned index: ``` In [41]: df['freq'] = df.groupby('a')['a'].transform('count') df Out[41]: a freq 0 a 2 1 b 3 2 s 2 3 s 2 4 b 3 5 a 2 6 b 3 [7 rows x 2 columns] ```", "best_answers_score":0.8, "library_name":"pandas", "question_url":"https:\/\/stackoverflow.com\/questions\/22391433\/count-the-frequency-that-a-value-occurs-in-a-dataframe-column", "best_answers_votes":697, "question_length":259, "response_length":570 }, { "question":"What does `ValueError: cannot reindex from a duplicate axis` mean?
I am getting a ValueError: cannot reindex from a duplicate axis when I am trying to set an index to a certain value. I tried to reproduce this with a simple example, but I could not do it. Here is my session inside of an ipdb trace. I have a DataFrame with a string index, integer columns, and float values. However, when I try to create a sum index for the sum of all columns, I am getting the ValueError: cannot reindex from a duplicate axis error. I created a small DataFrame with the same characteristics, but was not able to reproduce the problem; what could I be missing? I don't really understand what ValueError: cannot reindex from a duplicate axis means; what does this error message mean? Maybe this will help me diagnose the problem, and this is the most answerable part of my question. ``` ipdb> type(affinity_matrix) <class 'pandas.core.frame.DataFrame'> ipdb> affinity_matrix.shape (333, 10) ipdb> affinity_matrix.columns Int64Index([9315684, 9315597, 9316591, 9320520, 9321163, 9320615, 9321187, 9319487, 9319467, 9320484], dtype='int64') ipdb> affinity_matrix.index Index([u'001', u'002', u'003', u'004', u'005', u'008', u'009', u'010', u'011', u'014', u'015', u'016', u'018', u'020', u'021', u'022', u'024', u'025', u'026', u'027', u'028', u'029', u'030', u'032', u'033', u'034', u'035', u'036', u'039', u'040', u'041', u'042', u'043', u'044', u'045', u'047', u'047', u'048', u'050', u'053', u'054', u'055', u'056', u'057', u'058', u'059', u'060', u'061', u'062', u'063', u'065', u'067', u'068', u'069', u'070', u'071', u'072', u'073', u'074', u'075', u'076', u'077', u'078', u'080', u'082', u'083', u'084', u'085', u'086', u'089', u'090', u'091', u'092', u'093', u'094', u'095', u'096', u'097', u'098', u'100', u'101', u'103', u'104', u'105', u'106', u'107', u'108', u'109', u'110', u'111', u'112', u'113', u'114', u'115', u'116', u'117', u'118', u'119', u'121', u'122', ...], dtype='object') ipdb> affinity_matrix.values.dtype dtype('float64') ipdb> 'sums' in affinity_matrix.index False ``` Here is the error: ``` ipdb>
affinity_matrix.loc['sums'] = affinity_matrix.sum(axis=0) *** ValueError: cannot reindex from a duplicate axis ``` I tried to reproduce this with a simple example, but I failed ``` In [32]: import pandas as pd In [33]: import numpy as np In [34]: a = np.arange(35).reshape(5,7) In [35]: df = pd.DataFrame(a, ['x', 'y', 'u', 'z', 'w'], range(10, 17)) In [36]: df.values.dtype Out[36]: dtype('int64') In [37]: df.loc['sums'] = df.sum(axis=0) In [38]: df Out[38]: 10 11 12 13 14 15 16 x 0 1 2 3 4 5 6 y 7 8 9 10 11 12 13 u 14 15 16 17 18 19 20 z 21 22 23 24 25 26 27 w 28 29 30 31 32 33 34 sums 70 75 80 85 90 95 100 ```", "response":"This error usually arises when you join or assign to a column while the index has duplicate values. Since you are assigning to a row, I suspect that there is a duplicate value in affinity_matrix.columns, perhaps not shown in your question.", "best_answers_score":0.8, "library_name":"pandas", "question_url":"https:\/\/stackoverflow.com\/questions\/27236275\/what-does-valueerror-cannot-reindex-from-a-duplicate-axis-mean", "best_answers_votes":334, "question_length":2666, "response_length":236 }, { "question":"How can I display full (non-truncated) dataframe information in HTML when converting from Pandas dataframe to HTML? I converted a Pandas dataframe to an HTML output using the DataFrame.to_html function. When I save this to a separate HTML file, the file shows truncated output. For example, in my TEXT column, df.head(1) will show The film was an excellent effort... instead of The film was an excellent effort in deconstructing the complex social sentiments that prevailed during this period. This rendition is fine in the case of a screen-friendly format of a massive Pandas dataframe, but I need an HTML file that will show complete tabular data contained in the dataframe, that is, something that will show the latter text element rather than the former text snippet.
How would I be able to show the complete, non-truncated text data for each element in my TEXT column in the HTML version of the information? I would imagine that the HTML table would have to display long cells to show the complete data, but as far as I understand, only column-width parameters can be passed into the DataFrame.to_html function.", "response":"Set the display.max_colwidth option to None (or -1 before version 1.0): ``` pd.set_option('display.max_colwidth', None) ``` set_option documentation For example, in IPython the information is truncated to 50 characters by default, and anything in excess is ellipsized; once display.max_colwidth is set, the information is displayed fully (the original answer illustrated both cases with screenshots).", "best_answers_score":0.8, "library_name":"pandas", "question_url":"https:\/\/stackoverflow.com\/questions\/25351968\/how-can-i-display-full-non-truncated-dataframe-information-in-html-when-conver", "best_answers_votes":836, "question_length":1116, "response_length":351 }, { "question":"How do I create a new column where the values are selected based on an existing column? How do I add a color column to the following dataframe so that color='green' if Set == 'Z', and color='red' otherwise? ``` Type Set 1 A Z 2 B Z 3 B X 4 C Y ```", "response":"If you only have two choices to select from then use np.where: ``` df['color'] = np.where(df['Set']=='Z', 'green', 'red') ``` For example, ``` import pandas as pd import numpy as np df = pd.DataFrame({'Type':list('ABBC'), 'Set':list('ZZXY')}) df['color'] = np.where(df['Set']=='Z', 'green', 'red') print(df) ``` yields ``` Set Type color 0 Z A green 1 Z B green 2 X B red 3 Y C red ``` If you have more than two conditions then use np.select.
For example, if you want color to be yellow when (df['Set'] == 'Z') & (df['Type'] == 'A') otherwise blue when (df['Set'] == 'Z') & (df['Type'] == 'B') otherwise purple when (df['Type'] == 'B') otherwise black, then use ``` df = pd.DataFrame({'Type':list('ABBC'), 'Set':list('ZZXY')}) conditions = [ (df['Set'] == 'Z') & (df['Type'] == 'A'), (df['Set'] == 'Z') & (df['Type'] == 'B'), (df['Type'] == 'B')] choices = ['yellow', 'blue', 'purple'] df['color'] = np.select(conditions, choices, default='black') print(df) ``` which yields ``` Set Type color 0 Z A yellow 1 Z B blue 2 X B purple 3 Y C black ```", "best_answers_score":0.8, "library_name":"pandas", "question_url":"https:\/\/stackoverflow.com\/questions\/19913659\/how-do-i-create-a-new-column-where-the-values-are-selected-based-on-an-existing", "best_answers_votes":1060, "question_length":247, "response_length":1046 }, { "question":"Get the row(s) which have the max value in groups using groupby How do I find all rows in a pandas DataFrame which have the max value for count column, after grouping by ['Sp','Mt'] columns? 
Example 1: the following DataFrame: ``` Sp Mt Value count 0 MM1 S1 a **3** 1 MM1 S1 n 2 2 MM1 S3 cb **5** 3 MM2 S3 mk **8** 4 MM2 S4 bg **10** 5 MM2 S4 dgd 1 6 MM4 S2 rd 2 7 MM4 S2 cb 2 8 MM4 S2 uyi **7** ``` Expected output is to get the result rows whose count is max in each group, like this: ``` Sp Mt Value count 0 MM1 S1 a **3** 2 MM1 S3 cb **5** 3 MM2 S3 mk **8** 4 MM2 S4 bg **10** 8 MM4 S2 uyi **7** ``` Example 2: ``` Sp Mt Value count 4 MM2 S4 bg 10 5 MM2 S4 dgd 1 6 MM4 S2 rd 2 7 MM4 S2 cb 8 8 MM4 S2 uyi 8 ``` Expected output: ``` Sp Mt Value count 4 MM2 S4 bg 10 7 MM4 S2 cb 8 8 MM4 S2 uyi 8 ```", "response":"Firstly, we can get the max count for each group like this: ``` In [1]: df Out[1]: Sp Mt Value count 0 MM1 S1 a 3 1 MM1 S1 n 2 2 MM1 S3 cb 5 3 MM2 S3 mk 8 4 MM2 S4 bg 10 5 MM2 S4 dgd 1 6 MM4 S2 rd 2 7 MM4 S2 cb 2 8 MM4 S2 uyi 7 In [2]: df.groupby(['Sp', 'Mt'])['count'].max() Out[2]: Sp Mt MM1 S1 3 S3 5 MM2 S3 8 S4 10 MM4 S2 7 Name: count, dtype: int64 ``` To get the indices of the original DF you can do: ``` In [3]: idx = df.groupby(['Sp', 'Mt'])['count'].transform(max) == df['count'] In [4]: df[idx] Out[4]: Sp Mt Value count 0 MM1 S1 a 3 2 MM1 S3 cb 5 3 MM2 S3 mk 8 4 MM2 S4 bg 10 8 MM4 S2 uyi 7 ``` Note that if you have multiple max values per group, all will be returned. 
Update On a Hail Mary chance that this is what the OP is requesting: ``` In [5]: df['count_max'] = df.groupby(['Sp', 'Mt'])['count'].transform(max) In [6]: df Out[6]: Sp Mt Value count count_max 0 MM1 S1 a 3 3 1 MM1 S1 n 2 3 2 MM1 S3 cb 5 5 3 MM2 S3 mk 8 8 4 MM2 S4 bg 10 10 5 MM2 S4 dgd 1 10 6 MM4 S2 rd 2 7 7 MM4 S2 cb 2 7 8 MM4 S2 uyi 7 7 ```", "best_answers_score":0.8, "library_name":"pandas", "question_url":"https:\/\/stackoverflow.com\/questions\/15705630\/get-the-rows-which-have-the-max-value-in-groups-using-groupby", "best_answers_votes":608, "question_length":800, "response_length":1027 }, { "question":"Create Pandas DataFrame from a string In order to test some functionality I would like to create a DataFrame from a string. Let's say my test data looks like: ``` TESTDATA=\"\"\"col1;col2;col3 1;4.4;99 2;4.5;200 3;4.7;65 4;3.2;140 \"\"\" ``` What is the simplest way to read that data into a Pandas DataFrame?", "response":"A simple way to do this is to use StringIO.StringIO (python2) or io.StringIO (python3) and pass that to the pandas.read_csv function. E.g: ``` import sys if sys.version_info[0] < 3: from StringIO import StringIO else: from io import StringIO import pandas as pd TESTDATA = StringIO(\"\"\"col1;col2;col3 1;4.4;99 2;4.5;200 3;4.7;65 4;3.2;140 \"\"\") df = pd.read_csv(TESTDATA, sep=\";\") ```", "best_answers_score":0.8, "library_name":"pandas", "question_url":"https:\/\/stackoverflow.com\/questions\/22604564\/create-pandas-dataframe-from-a-string", "best_answers_votes":848, "question_length":303, "response_length":382 }, { "question":"Convert DataFrame column type from string to datetime How can I convert a DataFrame column of strings (in dd\/mm\/yyyy format) to datetime dtype?", "response":"The easiest way is to use to_datetime: ``` df['col'] = pd.to_datetime(df['col']) ``` It also offers a dayfirst argument for European times (but beware this isn't strict). 
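A short sketch of that non-strictness (the dates here are illustrative, not from the question): dayfirst is a hint rather than a hard constraint, so an impossible day-first combination silently falls back to month-first parsing:

```python
import pandas as pd

# '13/02/2005' can only be day-first; dayfirst=True parses it as Feb 13.
print(pd.to_datetime('13/02/2005', dayfirst=True))

# '01/13/2005' has no valid day-first reading (there is no month 13),
# so pandas falls back to month-first and returns Jan 13 anyway,
# typically emitting a UserWarning rather than raising an error.
print(pd.to_datetime('01/13/2005', dayfirst=True))
```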
Here it is in action: ``` In [11]: pd.to_datetime(pd.Series(['05\/23\/2005'])) Out[11]: 0 2005-05-23 00:00:00 dtype: datetime64[ns] ``` You can pass a specific format: ``` In [12]: pd.to_datetime(pd.Series(['05\/23\/2005']), format=\"%m\/%d\/%Y\") Out[12]: 0 2005-05-23 dtype: datetime64[ns] ```", "best_answers_score":0.8, "library_name":"pandas", "question_url":"https:\/\/stackoverflow.com\/questions\/17134716\/convert-dataframe-column-type-from-string-to-datetime", "best_answers_votes":738, "question_length":143, "response_length":458 }, { "question":"Remove pandas rows with duplicate indices How to remove rows with duplicate index values? In the weather DataFrame below, sometimes a scientist goes back and corrects observations -- not by editing the erroneous rows, but by appending a duplicate row to the end of a file. I'm reading some automated weather data from the web (observations occur every 5 minutes, and compiled into monthly files for each weather station.) After parsing a file, the DataFrame looks like: ``` Sta Precip1hr Precip5min Temp DewPnt WindSpd WindDir AtmPress Date 2001-01-01 00:00:00 KPDX 0 0 4 3 0 0 30.31 2001-01-01 00:05:00 KPDX 0 0 4 3 0 0 30.30 2001-01-01 00:10:00 KPDX 0 0 4 3 4 80 30.30 2001-01-01 00:15:00 KPDX 0 0 3 2 5 90 30.30 2001-01-01 00:20:00 KPDX 0 0 3 2 10 110 30.28 ``` Example of a duplicate case: ``` import pandas as pd import datetime startdate = datetime.datetime(2001, 1, 1, 0, 0) enddate = datetime.datetime(2001, 1, 1, 5, 0) index = pd.date_range(start=startdate, end=enddate, freq='H') data1 = {'A' : range(6), 'B' : range(6)} data2 = {'A' : [20, -30, 40], 'B' : [-50, 60, -70]} df1 = pd.DataFrame(data=data1, index=index) df2 = pd.DataFrame(data=data2, index=index[:3]) df3 = df2.append(df1) df3 A B 2001-01-01 00:00:00 20 -50 2001-01-01 01:00:00 -30 60 2001-01-01 02:00:00 40 -70 2001-01-01 03:00:00 3 3 2001-01-01 04:00:00 4 4 2001-01-01 05:00:00 5 5 2001-01-01 00:00:00 0 0 2001-01-01 01:00:00 1 1 2001-01-01 02:00:00 2 2 ``` And so 
I need df3 to eventually become: ``` A B 2001-01-01 00:00:00 0 0 2001-01-01 01:00:00 1 1 2001-01-01 02:00:00 2 2 2001-01-01 03:00:00 3 3 2001-01-01 04:00:00 4 4 2001-01-01 05:00:00 5 5 ``` I thought that adding a column of row numbers (df3['rownum'] = range(df3.shape[0])) would help me select the bottom-most row for any value of the DatetimeIndex, but I am stuck on figuring out the group_by or pivot (or ???) statements to make that work.", "response":"I would suggest using the duplicated method on the Pandas Index itself: ```py df3 = df3.loc[~df3.index.duplicated(keep='first'), :] ``` While all the other methods work, .drop_duplicates is by far the least performant for the provided example. Furthermore, while the groupby method is only slightly less performant, I find the duplicated method to be more readable. Using the sample data provided: ```py >>> %timeit df3.reset_index().drop_duplicates(subset='index', keep='first').set_index('index') 1000 loops, best of 3: 1.54 ms per loop >>> %timeit df3.groupby(df3.index).first() 1000 loops, best of 3: 580 \u00b5s per loop >>> %timeit df3.loc[~df3.index.duplicated(keep='first'), :] 1000 loops, best of 3: 307 \u00b5s per loop ``` Note that you can keep the last element by changing the keep argument to 'last'. 
It should also be noted that this method works with MultiIndex as well (using df1 as specified in Paul's example): ```py >>> %timeit df1.groupby(level=df1.index.names).last() 1000 loops, best of 3: 771 \u00b5s per loop >>> %timeit df1.loc[~df1.index.duplicated(keep='last'), :] 1000 loops, best of 3: 365 \u00b5s per loop ``` Edit: While the .loc is not necessary (per @lingjiankong's comment) I agree with @shadowtalker that being explicit rather than implicit about row selection can be helpful (especially in large codebases).", "best_answers_score":0.8, "library_name":"pandas", "question_url":"https:\/\/stackoverflow.com\/questions\/13035764\/remove-pandas-rows-with-duplicate-indices", "best_answers_votes":878, "question_length":1882, "response_length":1324 }, { "question":"Convert a Pandas DataFrame to a dictionary I have a DataFrame with four columns. I want to convert this DataFrame to a python dictionary. I want the elements of first column be keys and the elements of other columns in the same row be values. DataFrame: ```py ID A B C 0 p 1 3 2 1 q 4 3 2 2 r 4 0 9 ``` Output should be like this: ```py {'p': [1,3,2], 'q': [4,3,2], 'r': [4,0,9]} ```", "response":"The to_dict() method sets the column names as dictionary keys so you'll need to reshape your DataFrame slightly. Setting the 'ID' column as the index and then transposing the DataFrame is one way to achieve this. to_dict() also accepts an 'orient' argument which you'll need in order to output a list of values for each column. Otherwise, a dictionary of the form {index: value} will be returned for each column. These steps can be done with the following line: ``` >>> df.set_index('ID').T.to_dict('list') {'p': [1, 3, 2], 'q': [4, 3, 2], 'r': [4, 0, 9]} ``` In case a different dictionary format is needed, here are examples of the possible orient arguments. 
Consider the following simple DataFrame: ``` >>> df = pd.DataFrame({'a': ['red', 'yellow', 'blue'], 'b': [0.5, 0.25, 0.125]}) >>> df a b 0 red 0.500 1 yellow 0.250 2 blue 0.125 ``` Then the options are as follows. dict - the default: column names are keys, values are dictionaries of index:data pairs ``` >>> df.to_dict('dict') {'a': {0: 'red', 1: 'yellow', 2: 'blue'}, 'b': {0: 0.5, 1: 0.25, 2: 0.125}} ``` list - keys are column names, values are lists of column data ``` >>> df.to_dict('list') {'a': ['red', 'yellow', 'blue'], 'b': [0.5, 0.25, 0.125]} ``` series - like 'list', but values are Series ``` >>> df.to_dict('series') {'a': 0 red 1 yellow 2 blue Name: a, dtype: object, 'b': 0 0.500 1 0.250 2 0.125 Name: b, dtype: float64} ``` split - splits columns\/data\/index as keys with values being column names, data values by row and index labels respectively ``` >>> df.to_dict('split') {'columns': ['a', 'b'], 'data': [['red', 0.5], ['yellow', 0.25], ['blue', 0.125]], 'index': [0, 1, 2]} ``` records - each row becomes a dictionary where key is column name and value is the data in the cell ``` >>> df.to_dict('records') [{'a': 'red', 'b': 0.5}, {'a': 'yellow', 'b': 0.25}, {'a': 'blue', 'b': 0.125}] ``` index - like 'records', but a dictionary of dictionaries with keys as index labels (rather than a list) ``` >>> df.to_dict('index') {0: {'a': 'red', 'b': 0.5}, 1: {'a': 'yellow', 'b': 0.25}, 2: {'a': 'blue', 'b': 0.125}} ```", "best_answers_score":0.8, "library_name":"pandas", "question_url":"https:\/\/stackoverflow.com\/questions\/26716616\/convert-a-pandas-dataframe-to-a-dictionary", "best_answers_votes":752, "question_length":383, "response_length":2098 }, { "question":"pandas get rows which are NOT in other dataframe I've two pandas data frames that have some rows in common. Suppose dataframe2 is a subset of dataframe1. How can I get the rows of dataframe1 which are not in dataframe2? 
``` df1 = pandas.DataFrame(data = {'col1' : [1, 2, 3, 4, 5], 'col2' : [10, 11, 12, 13, 14]}) df2 = pandas.DataFrame(data = {'col1' : [1, 2, 3], 'col2' : [10, 11, 12]}) ``` df1 ``` col1 col2 0 1 10 1 2 11 2 3 12 3 4 13 4 5 14 ``` df2 ``` col1 col2 0 1 10 1 2 11 2 3 12 ``` Expected result: ``` col1 col2 3 4 13 4 5 14 ```", "response":"The currently selected solution produces incorrect results. To correctly solve this problem, we can perform a left-join from df1 to df2, making sure to first get just the unique rows for df2. First, we need to modify the original DataFrame to add the row with data [3, 10]. ``` df1 = pd.DataFrame(data = {'col1' : [1, 2, 3, 4, 5, 3], 'col2' : [10, 11, 12, 13, 14, 10]}) df2 = pd.DataFrame(data = {'col1' : [1, 2, 3], 'col2' : [10, 11, 12]}) df1 col1 col2 0 1 10 1 2 11 2 3 12 3 4 13 4 5 14 5 3 10 df2 col1 col2 0 1 10 1 2 11 2 3 12 ``` Perform a left-join, eliminating duplicates in df2 so that each row of df1 joins with exactly 1 row of df2. Use the parameter indicator to return an extra column indicating which table the row was from. ``` df_all = df1.merge(df2.drop_duplicates(), on=['col1','col2'], how='left', indicator=True) df_all col1 col2 _merge 0 1 10 both 1 2 11 both 2 3 12 both 3 4 13 left_only 4 5 14 left_only 5 3 10 left_only ``` Create a boolean condition: ``` df_all['_merge'] == 'left_only' 0 False 1 False 2 False 3 True 4 True 5 True Name: _merge, dtype: bool ``` Why other solutions are wrong A few solutions make the same mistake - they only check that each value is independently in each column, not together in the same row. 
Adding the last row, which is unique but has the values from both columns from df2 exposes the mistake: ``` common = df1.merge(df2,on=['col1','col2']) (~df1.col1.isin(common.col1))&(~df1.col2.isin(common.col2)) 0 False 1 False 2 False 3 True 4 True 5 False dtype: bool ``` This solution gets the same wrong result: ``` df1.isin(df2.to_dict('l')).all(1) ```", "best_answers_score":0.8, "library_name":"pandas", "question_url":"https:\/\/stackoverflow.com\/questions\/28901683\/pandas-get-rows-which-are-not-in-other-dataframe", "best_answers_votes":425, "question_length":540, "response_length":1608 }, { "question":"pandas: filter rows of DataFrame with operator chaining Most operations in pandas can be accomplished with operator chaining (groupby, aggregate, apply, etc), but the only way I've found to filter rows is via normal bracket indexing ``` df_filtered = df[df['column'] == value] ``` This is unappealing as it requires I assign df to a variable before being able to filter on its values. Is there something more like the following? ``` df_filtered = df.mask(lambda x: x['column'] == value) ```", "response":"I'm not entirely sure what you want, and your last line of code does not help either, but anyway: \"Chained\" filtering is done by \"chaining\" the criteria in the boolean index. ``` In [96]: df Out[96]: A B C D a 1 4 9 1 b 4 5 0 2 c 5 5 1 0 d 1 3 9 6 In [99]: df[(df.A == 1) & (df.D == 6)] Out[99]: A B C D d 1 3 9 6 ``` If you want to chain methods, you can add your own mask method and use that one. 
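As a hedged aside (not part of the original answer): later pandas versions also support this kind of chaining without adding a method yourself, via query, shown here on a small made-up frame:

```python
import pandas as pd

df = pd.DataFrame({'A': [1, 4, 5, 1],
                   'D': [1, 2, 0, 6]})

# query() filters rows by a boolean expression and returns a DataFrame,
# so calls chain naturally without intermediate variables
df_filtered = df.query('A == 1').query('D == 6')
print(df_filtered)
```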
``` In [90]: def mask(df, key, value): ....: return df[df[key] == value] ....: In [92]: pandas.DataFrame.mask = mask In [93]: df = pandas.DataFrame(np.random.randint(0, 10, (4,4)), index=list('abcd'), columns=list('ABCD')) In [95]: df.ix['d','A'] = df.ix['a', 'A'] In [96]: df Out[96]: A B C D a 1 4 9 1 b 4 5 0 2 c 5 5 1 0 d 1 3 9 6 In [97]: df.mask('A', 1) Out[97]: A B C D a 1 4 9 1 d 1 3 9 6 In [98]: df.mask('A', 1).mask('D', 6) Out[98]: A B C D d 1 3 9 6 ```", "best_answers_score":0.8, "library_name":"pandas", "question_url":"https:\/\/stackoverflow.com\/questions\/11869910\/pandas-filter-rows-of-dataframe-with-operator-chaining", "best_answers_votes":479, "question_length":490, "response_length":863 }, { "question":"Apply multiple functions to multiple groupby columns The docs show how to apply multiple functions on a groupby object at a time using a dict with the output column names as the keys: ``` In [563]: grouped['D'].agg({'result1' : np.sum, .....: 'result2' : np.mean}) .....: Out[563]: result2 result1 A bar -0.579846 -1.739537 foo -0.280588 -1.402938 ``` However, this only works on a Series groupby object. And when a dict is similarly passed to a groupby DataFrame, it expects the keys to be the column names that the function will be applied to. What I want to do is apply multiple functions to several columns (but certain columns will be operated on multiple times). Also, some functions will depend on other columns in the groupby object (like sumif functions). My current solution is to go column by column, and doing something like the code above, using lambdas for functions that depend on other rows. But this is taking a long time, (I think it takes a long time to iterate through a groupby object). I'll have to change it so that I iterate through the whole groupby object in a single run, but I'm wondering if there's a built in way in pandas to do this somewhat cleanly. 
For example, I've tried something like ``` grouped.agg({'C_sum' : lambda x: x['C'].sum(), 'C_std': lambda x: x['C'].std(), 'D_sum' : lambda x: x['D'].sum()}, 'D_sumifC3': lambda x: x['D'][x['C'] == 3].sum(), ...) ``` but as expected I get a KeyError (since the keys have to be a column if agg is called from a DataFrame). Is there any built in way to do what I'd like to do, or a possibility that this functionality may be added, or will I just need to iterate through the groupby manually?", "response":"The second half of the currently accepted answer is outdated and has two deprecations. First and most important, you can no longer pass a dictionary of dictionaries to the agg groupby method. Second, never use .ix. If you desire to work with two separate columns at the same time I would suggest using the apply method which implicitly passes a DataFrame to the applied function. Let's use a similar dataframe as the one from above ``` df = pd.DataFrame(np.random.rand(4,4), columns=list('abcd')) df['group'] = [0, 0, 1, 1] df a b c d group 0 0.418500 0.030955 0.874869 0.145641 0 1 0.446069 0.901153 0.095052 0.487040 0 2 0.843026 0.936169 0.926090 0.041722 1 3 0.635846 0.439175 0.828787 0.714123 1 ``` A dictionary mapped from column names to aggregation functions is still a perfectly good way to perform an aggregation. 
``` df.groupby('group').agg({'a':['sum', 'max'], 'b':'mean', 'c':'sum', 'd': lambda x: x.max() - x.min()}) a b c d sum max mean sum group 0 0.864569 0.446069 0.466054 0.969921 0.341399 1 1.478872 0.843026 0.687672 1.754877 0.672401 ``` If you don't like that ugly lambda column name, you can use a normal function and supply a custom name to the special __name__ attribute like this: ``` def max_min(x): return x.max() - x.min() max_min.__name__ = 'Max minus Min' df.groupby('group').agg({'a':['sum', 'max'], 'b':'mean', 'c':'sum', 'd': max_min}) a b c d sum max mean sum Max minus Min group 0 0.864569 0.446069 0.466054 0.969921 0.341399 1 1.478872 0.843026 0.687672 1.754877 0.672401 ``` Using apply and returning a Series Now, if you had multiple columns that needed to interact together then you cannot use agg, which implicitly passes a Series to the aggregating function. When using apply the entire group as a DataFrame gets passed into the function. I recommend making a single custom function that returns a Series of all the aggregations. 
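(Side note on the agg examples above: pandas 0.25 added named aggregation, which yields flat, custom-named output columns without `__name__` tricks — a hedged alternative, not part of the original answer. A sketch with simple stand-in data:)

```python
import pandas as pd

df = pd.DataFrame({'a': [1.0, 2.0, 3.0, 4.0],
                   'b': [10.0, 20.0, 30.0, 40.0],
                   'group': [0, 0, 1, 1]})

# Each keyword becomes an output column: (input column, aggregation)
out = df.groupby('group').agg(
    a_sum=('a', 'sum'),
    a_max=('a', 'max'),
    b_mean=('b', 'mean'),
)
print(out)
```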
Use the Series index as labels for the new columns: ``` def f(x): d = {} d['a_sum'] = x['a'].sum() d['a_max'] = x['a'].max() d['b_mean'] = x['b'].mean() d['c_d_prodsum'] = (x['c'] * x['d']).sum() return pd.Series(d, index=['a_sum', 'a_max', 'b_mean', 'c_d_prodsum']) df.groupby('group').apply(f) a_sum a_max b_mean c_d_prodsum group 0 0.864569 0.446069 0.466054 0.173711 1 1.478872 0.843026 0.687672 0.630494 ``` If you are in love with MultiIndexes, you can still return a Series with one like this: ``` def f_mi(x): d = [] d.append(x['a'].sum()) d.append(x['a'].max()) d.append(x['b'].mean()) d.append((x['c'] * x['d']).sum()) return pd.Series(d, index=[['a', 'a', 'b', 'c_d'], ['sum', 'max', 'mean', 'prodsum']]) df.groupby('group').apply(f_mi) a b c_d sum max mean prodsum group 0 0.864569 0.446069 0.466054 0.173711 1 1.478872 0.843026 0.687672 0.630494 ```", "best_answers_score":0.8, "library_name":"pandas", "question_url":"https:\/\/stackoverflow.com\/questions\/14529838\/apply-multiple-functions-to-multiple-groupby-columns", "best_answers_votes":622, "question_length":1672, "response_length":2737 }, { "question":"how do I insert a column at a specific column index in pandas? Can I insert a column at a specific column index in pandas? 
``` import pandas as pd df = pd.DataFrame({'l':['a','b','c','d'], 'v':[1,2,1,2]}) df['n'] = 0 ``` This will put column n as the last column of df, but isn't there a way to tell df to put n at the beginning?", "response":"see docs: http:\/\/pandas.pydata.org\/pandas-docs\/stable\/reference\/api\/pandas.DataFrame.insert.html using loc = 0 will insert at the beginning ``` df.insert(loc, column, value) ``` ``` df = pd.DataFrame({'B': [1, 2, 3], 'C': [4, 5, 6]}) df Out: B C 0 1 4 1 2 5 2 3 6 idx = 0 new_col = [7, 8, 9] # can be a list, a Series, an array or a scalar df.insert(loc=idx, column='A', value=new_col) df Out: A B C 0 7 1 4 1 8 2 5 2 9 3 6 ```", "best_answers_score":0.8, "library_name":"pandas", "question_url":"https:\/\/stackoverflow.com\/questions\/18674064\/how-do-i-insert-a-column-at-a-specific-column-index-in-pandas", "best_answers_votes":739, "question_length":329, "response_length":427 }, { "question":"Combining two Series into a DataFrame in pandas I have two Series s1 and s2 with the same (non-consecutive) indices. How do I combine s1 and s2 to being two columns in a DataFrame and keep one of the indices as a third column?", "response":"I think concat is a nice way to do this. 
If they are present it uses the name attributes of the Series as the columns (otherwise it simply numbers them): ``` In [1]: s1 = pd.Series([1, 2], index=['A', 'B'], name='s1') In [2]: s2 = pd.Series([3, 4], index=['A', 'B'], name='s2') In [3]: pd.concat([s1, s2], axis=1) Out[3]: s1 s2 A 1 3 B 2 4 In [4]: pd.concat([s1, s2], axis=1).reset_index() Out[4]: index s1 s2 0 A 1 3 1 B 2 4 ``` Note: This extends to more than 2 Series.", "best_answers_score":0.8, "library_name":"pandas", "question_url":"https:\/\/stackoverflow.com\/questions\/18062135\/combining-two-series-into-a-dataframe-in-pandas", "best_answers_votes":607, "question_length":226, "response_length":471 }, { "question":"Pandas Replace NaN with blank\/empty string I have a Pandas Dataframe as shown below: ``` 1 2 3 0 a NaN read 1 b l unread 2 c NaN read ``` I want to remove the NaN values with an empty string so that it looks like so: ``` 1 2 3 0 a \"\" read 1 b l unread 2 c \"\" read ```", "response":"``` df = df.fillna('') ``` This will fill na's (e.g. NaN's) with ''. inplace is possible but should be avoided as it makes a copy internally anyway, and it will be deprecated: ``` df.fillna('', inplace=True) ``` To fill only a single column: ``` df.column1 = df.column1.fillna('') ``` One can use df['column1'] instead of df.column1.", "best_answers_score":0.8, "library_name":"pandas", "question_url":"https:\/\/stackoverflow.com\/questions\/26837998\/pandas-replace-nan-with-blank-empty-string", "best_answers_votes":678, "question_length":267, "response_length":333 }, { "question":"Pandas 'count(distinct)' equivalent I am using Pandas as a database substitute as I have multiple databases (Oracle, SQL Server, etc.), and I am unable to make a sequence of commands to a SQL equivalent. I have a table loaded in a DataFrame with some columns: ```none YEARMONTH, CLIENTCODE, SIZE, etc., etc. 
``` In SQL, to count the amount of different clients per year would be: ```sql SELECT count(distinct CLIENTCODE) FROM table GROUP BY YEARMONTH; ``` And the result would be ```none 201301 5000 201302 13245 ``` How can I do that in Pandas?", "response":"I believe this is what you want: ``` table.groupby('YEARMONTH').CLIENTCODE.nunique() ``` Example: ``` In [2]: table Out[2]: CLIENTCODE YEARMONTH 0 1 201301 1 1 201301 2 2 201301 3 1 201302 4 2 201302 5 2 201302 6 3 201302 In [3]: table.groupby('YEARMONTH').CLIENTCODE.nunique() Out[3]: YEARMONTH 201301 2 201302 3 ```", "best_answers_score":0.8, "library_name":"pandas", "question_url":"https:\/\/stackoverflow.com\/questions\/15411158\/pandas-countdistinct-equivalent", "best_answers_votes":600, "question_length":545, "response_length":317 }, { "question":"Get column index from column name in python pandas In R when you need to retrieve a column index based on the name of the column you could do ``` idx <- which(names(my_data)==my_colum_name) ``` Is there a way to do the same with pandas dataframes?", "response":"Sure, you can use .get_loc(): ``` In [45]: df = DataFrame({\"pear\": [1,2,3], \"apple\": [2,3,4], \"orange\": [3,4,5]}) In [46]: df.columns Out[46]: Index([apple, orange, pear], dtype=object) In [47]: df.columns.get_loc(\"pear\") Out[47]: 2 ``` although to be honest I don't often need this myself. 
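For looking up several column positions at once, Index.get_indexer is the vectorized counterpart of get_loc — an addition to the answer above, not from the original. A sketch using the same frame (columns ordered explicitly to match the output shown above):

```python
import pandas as pd

df = pd.DataFrame({"pear": [1, 2, 3], "apple": [2, 3, 4], "orange": [3, 4, 5]},
                  columns=["apple", "orange", "pear"])

# Single column position
pos = df.columns.get_loc("pear")

# Positions of several columns in one call; a missing name would map to -1
positions = df.columns.get_indexer(["orange", "pear"])
print(pos, positions)
```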
Usually access by name does what I want it to (df[\"pear\"], df[[\"apple\", \"orange\"]], or maybe df.columns.isin([\"orange\", \"pear\"])), although I can definitely see cases where you'd want the index number.", "best_answers_score":0.8, "library_name":"pandas", "question_url":"https:\/\/stackoverflow.com\/questions\/13021654\/get-column-index-from-column-name-in-python-pandas", "best_answers_votes":670, "question_length":247, "response_length":492 }, { "question":"Add column to dataframe with constant value I have an existing dataframe which I need to add an additional column to which will contain the same value for every row. Existing df: ``` Date, Open, High, Low, Close 01-01-2015, 565, 600, 400, 450 ``` New df: ``` Name, Date, Open, High, Low, Close abc, 01-01-2015, 565, 600, 400, 450 ``` I know how to append an existing series \/ dataframe column. But this is a different situation, because all I need is to add the 'Name' column and set every row to the same value, in this case 'abc'.", "response":"df['Name']='abc' will add the new column and set all rows to that value: ``` In [79]: df Out[79]: Date, Open, High, Low, Close 0 01-01-2015, 565, 600, 400, 450 In [80]: df['Name'] = 'abc' df Out[80]: Date, Open, High, Low, Close Name 0 01-01-2015, 565, 600, 400, 450 abc ```", "best_answers_score":0.8, "library_name":"pandas", "question_url":"https:\/\/stackoverflow.com\/questions\/29517072\/add-column-to-dataframe-with-constant-value", "best_answers_votes":569, "question_length":532, "response_length":274 }, { "question":"Select DataFrame rows between two dates I am creating a DataFrame from a csv as follows: ```py stock = pd.read_csv('data_in\/' + filename + '.csv', skipinitialspace=True) ``` The DataFrame has a date column. 
Is there a way to create a new DataFrame (or just overwrite the existing one) which only contains rows with date values that fall within a specified date range or between two specified date values?", "response":"There are two possible solutions: Use a boolean mask, then use df.loc[mask] Set the date column as a DatetimeIndex, then use df[start_date : end_date] Using a boolean mask: Ensure df['date'] is a Series with dtype datetime64[ns]: ``` df['date'] = pd.to_datetime(df['date']) ``` Make a boolean mask. start_date and end_date can be datetime.datetimes, np.datetime64s, pd.Timestamps, or even datetime strings: ``` # greater than the start date and smaller than the end date mask = (df['date'] > start_date) & (df['date'] <= end_date) ``` Select the sub-DataFrame: ``` df.loc[mask] ``` For example, using datetime strings: ``` import numpy as np import pandas as pd df = pd.DataFrame(np.random.random((200,3))) df['date'] = pd.date_range('2000-1-1', periods=200, freq='D') mask = (df['date'] > '2000-6-1') & (df['date'] <= '2000-6-10') print(df.loc[mask]) ``` yields ``` 0 1 2 date 153 0.208875 0.727656 0.037787 2000-06-02 154 0.750800 0.776498 0.237716 2000-06-03 155 0.812008 0.127338 0.397240 2000-06-04 156 0.639937 0.207359 0.533527 2000-06-05 157 0.416998 0.845658 0.872826 2000-06-06 158 0.440069 0.338690 0.847545 2000-06-07 159 0.202354 0.624833 0.740254 2000-06-08 160 0.465746 0.080888 0.155452 2000-06-09 161 0.858232 0.190321 0.432574 2000-06-10 ``` Using a DatetimeIndex: If you are going to do a lot of selections by date, it may be quicker to set the date column as the index first. Then you can select rows by date using df.loc[start_date:end_date].
```py import numpy as np import pandas as pd df = pd.DataFrame(np.random.random((200,3))) df['date'] = pd.date_range('2000-1-1', periods=200, freq='D') df = df.set_index(['date']) print(df.loc['2000-6-1':'2000-6-10']) ``` yields ``` 0 1 2 date 2000-06-01 0.040457 0.326594 0.492136 # <- includes start_date 2000-06-02 0.279323 0.877446 0.464523 2000-06-03 0.328068 0.837669 0.608559 2000-06-04 0.107959 0.678297 0.517435 2000-06-05 0.131555 0.418380 0.025725 2000-06-06 0.999961 0.619517 0.206108 2000-06-07 0.129270 0.024533 0.154769 2000-06-08 0.441010 0.741781 0.470402 2000-06-09 0.682101 0.375660 0.009916 2000-06-10 0.754488 0.352293 0.339337 ``` Python list indexing (e.g. seq[start:end]) includes start but not end; in contrast, Pandas df.loc[start_date : end_date] includes both end-points in the result if they are in the index. Neither start_date nor end_date has to be in the index, however. Also note that pd.read_csv has a parse_dates parameter which you could use to parse the date column as datetime64s. Thus, if you use parse_dates, you would not need to use df['date'] = pd.to_datetime(df['date']).", "best_answers_score":0.8, "library_name":"pandas", "question_url":"https:\/\/stackoverflow.com\/questions\/29370057\/select-dataframe-rows-between-two-dates", "best_answers_votes":719, "question_length":404, "response_length":2312 }, { "question":"How to split a dataframe string column into two columns? I have a data frame with one (string) column and I'd like to split it into two (string) columns, with one column header as 'fips' and the other 'row' My dataframe df looks like this: ``` row 0 00000 UNITED STATES 1 01000 ALABAMA 2 01001 Autauga County, AL 3 01003 Baldwin County, AL 4 01005 Barbour County, AL ``` I do not know how to use df.row.str[:] to achieve my goal of splitting the row cell. I can use df['fips'] = hello to add a new column and populate it with hello. Any ideas?
``` fips row 0 00000 UNITED STATES 1 01000 ALABAMA 2 01001 Autauga County, AL 3 01003 Baldwin County, AL 4 01005 Barbour County, AL ```", "response":"TL;DR version: For the simple case of: I have a text column with a delimiter and I want two columns The simplest solution is: ``` df[['A', 'B']] = df['AB'].str.split(' ', n=1, expand=True) ``` You must use expand=True if your strings have a non-uniform number of splits and you want None to replace the missing values. Notice how, in either case, the .tolist() method is not necessary. Neither is zip(). In detail: Andy Hayden's solution is most excellent in demonstrating the power of the str.extract() method. But for a simple split over a known separator (like, splitting by dashes, or splitting by whitespace), the .str.split() method is enough1. It operates on a column (Series) of strings, and returns a column (Series) of lists: ``` >>> import pandas as pd >>> df = pd.DataFrame({'AB': ['A1-B1', 'A2-B2']}) >>> df AB 0 A1-B1 1 A2-B2 >>> df['AB_split'] = df['AB'].str.split('-') >>> df AB AB_split 0 A1-B1 [A1, B1] 1 A2-B2 [A2, B2] ``` 1: If you're unsure what the first two parameters of .str.split() do, I recommend the docs for the plain Python version of the method. But how do you go from: a column containing two-element lists to: two columns, each containing the respective element of the lists? Well, we need to take a closer look at the .str attribute of a column. 
It's a magical object that is used to collect methods that treat each element in a column as a string, and then apply the respective method in each element as efficient as possible: ``` >>> upper_lower_df = pd.DataFrame({\"U\": [\"A\", \"B\", \"C\"]}) >>> upper_lower_df U 0 A 1 B 2 C >>> upper_lower_df[\"L\"] = upper_lower_df[\"U\"].str.lower() >>> upper_lower_df U L 0 A a 1 B b 2 C c ``` But it also has an \"indexing\" interface for getting each element of a string by its index: ``` >>> df['AB'].str[0] 0 A 1 A Name: AB, dtype: object >>> df['AB'].str[1] 0 1 1 2 Name: AB, dtype: object ``` Of course, this indexing interface of .str doesn't really care if each element it's indexing is actually a string, as long as it can be indexed, so: ``` >>> df['AB'].str.split('-', 1).str[0] 0 A1 1 A2 Name: AB, dtype: object >>> df['AB'].str.split('-', 1).str[1] 0 B1 1 B2 Name: AB, dtype: object ``` Then, it's a simple matter of taking advantage of the Python tuple unpacking of iterables to do ``` >>> df['A'], df['B'] = df['AB'].str.split('-', n=1).str >>> df AB AB_split A B 0 A1-B1 [A1, B1] A1 B1 1 A2-B2 [A2, B2] A2 B2 ``` Of course, getting a DataFrame out of splitting a column of strings is so useful that the .str.split() method can do it for you with the expand=True parameter: ``` >>> df['AB'].str.split('-', n=1, expand=True) 0 1 0 A1 B1 1 A2 B2 ``` So, another way of accomplishing what we wanted is to do: ``` >>> df = df[['AB']] >>> df AB 0 A1-B1 1 A2-B2 >>> df.join(df['AB'].str.split('-', n=1, expand=True).rename(columns={0:'A', 1:'B'})) AB A B 0 A1-B1 A1 B1 1 A2-B2 A2 B2 ``` The expand=True version, although longer, has a distinct advantage over the tuple unpacking method. Tuple unpacking doesn't deal well with splits of different lengths: ``` >>> df = pd.DataFrame({'AB': ['A1-B1', 'A2-B2', 'A3-B3-C3']}) >>> df AB 0 A1-B1 1 A2-B2 2 A3-B3-C3 >>> df['A'], df['B'], df['C'] = df['AB'].str.split('-') Traceback (most recent call last): [...] 
ValueError: Length of values does not match length of index >>> ``` But expand=True handles it nicely by placing None in the columns for which there aren't enough \"splits\": ``` >>> df.join( ... df['AB'].str.split('-', expand=True).rename( ... columns={0:'A', 1:'B', 2:'C'} ... ) ... ) AB A B C 0 A1-B1 A1 B1 None 1 A2-B2 A2 B2 None 2 A3-B3-C3 A3 B3 C3 ```", "best_answers_score":0.8, "library_name":"pandas", "question_url":"https:\/\/stackoverflow.com\/questions\/14745022\/how-to-split-a-dataframe-string-column-into-two-columns", "best_answers_votes":826, "question_length":679, "response_length":3583 }, { "question":"Dropping infinite values from dataframes in pandas? How do I drop nan, inf, and -inf values from a DataFrame without resetting mode.use_inf_as_null? Can I tell dropna to include inf in its definition of missing values so that the following works? ``` df.dropna(subset=[\"col1\", \"col2\"], how=\"all\") ```", "response":"First replace() infs with NaN: ``` df.replace([np.inf, -np.inf], np.nan, inplace=True) ``` and then drop NaNs via dropna(): ``` df.dropna(subset=[\"col1\", \"col2\"], how=\"all\", inplace=True) ``` For example: ``` >>> df = pd.DataFrame({\"col1\": [1, np.inf, -np.inf], \"col2\": [2, 3, np.nan]}) >>> df col1 col2 0 1.0 2.0 1 inf 3.0 2 -inf NaN >>> df.replace([np.inf, -np.inf], np.nan, inplace=True) >>> df col1 col2 0 1.0 2.0 1 NaN 3.0 2 NaN NaN >>> df.dropna(subset=[\"col1\", \"col2\"], how=\"all\", inplace=True) >>> df col1 col2 0 1.0 2.0 1 NaN 3.0 ``` The same method also works for Series.", "best_answers_score":0.8, "library_name":"pandas", "question_url":"https:\/\/stackoverflow.com\/questions\/17477979\/dropping-infinite-values-from-dataframes-in-pandas", "best_answers_votes":715, "question_length":300, "response_length":581 }, { "question":"How do I use Pandas group-by to get the sum? 
I am using this dataframe: ``` Fruit Date Name Number Apples 10\/6\/2016 Bob 7 Apples 10\/6\/2016 Bob 8 Apples 10\/6\/2016 Mike 9 Apples 10\/7\/2016 Steve 10 Apples 10\/7\/2016 Bob 1 Oranges 10\/7\/2016 Bob 2 Oranges 10\/6\/2016 Tom 15 Oranges 10\/6\/2016 Mike 57 Oranges 10\/6\/2016 Bob 65 Oranges 10\/7\/2016 Tony 1 Grapes 10\/7\/2016 Bob 1 Grapes 10\/7\/2016 Tom 87 Grapes 10\/7\/2016 Bob 22 Grapes 10\/7\/2016 Bob 12 Grapes 10\/7\/2016 Tony 15 ``` I would like to aggregate this by Name and then by Fruit to get a total number of Fruit per Name. For example: ``` Bob,Apples,16 ``` I tried grouping by Name and Fruit but how do I get the total number of Fruit?", "response":"Use GroupBy.sum: ``` df.groupby(['Fruit','Name']).sum() Out[31]: Number Fruit Name Apples Bob 16 Mike 9 Steve 10 Grapes Bob 35 Tom 87 Tony 15 Oranges Bob 67 Mike 57 Tom 15 Tony 1 ``` To specify the column to sum, use this: df.groupby(['Name', 'Fruit'])['Number'].sum()", "best_answers_score":0.8, "library_name":"pandas", "question_url":"https:\/\/stackoverflow.com\/questions\/39922986\/how-do-i-use-pandas-group-by-to-get-the-sum", "best_answers_votes":457, "question_length":678, "response_length":268 }, { "question":"How to show all columns' names on a large pandas dataframe? I have a dataframe that consist of hundreds of columns, and I need to see all column names. What I did: ``` In[37]: data_all2.columns ``` The output is: ``` Out[37]: Index(['customer_id', 'incoming', 'outgoing', 'awan', 'bank', 'family', 'food', 'government', 'internet', 'isipulsa', ... 'overdue_3months_feature78', 'overdue_3months_feature79', 'overdue_3months_feature80', 'overdue_3months_feature81', 'overdue_3months_feature82', 'overdue_3months_feature83', 'overdue_3months_feature84', 'overdue_3months_feature85', 'overdue_3months_feature86', 'loan_overdue_3months_total_y'], dtype='object', length=102) ``` How do I show all columns, instead of a truncated list?", "response":"You can globally set printing options. 
I think this should work: Method 1: ``` pd.set_option('display.max_columns', None) pd.set_option('display.max_rows', None) ``` Method 2: ``` pd.options.display.max_columns = None pd.options.display.max_rows = None ``` This will allow you to see all column names & rows when you are doing .head(). None of the column name will be truncated. If you just want to see the column names you can do: ``` print(df.columns.tolist()) ```", "best_answers_score":0.8, "library_name":"pandas", "question_url":"https:\/\/stackoverflow.com\/questions\/49188960\/how-to-show-all-columns-names-on-a-large-pandas-dataframe", "best_answers_votes":697, "question_length":729, "response_length":466 }, { "question":"Convert Pandas column containing NaNs to dtype `int` I read data from a .csv file to a Pandas dataframe as below. For one of the columns, namely id, I want to specify the column type as int. The problem is the id series has missing\/empty values. When I try to cast the id column to integer while reading the .csv, I get: ``` df= pd.read_csv(\"data.csv\", dtype={'id': int}) error: Integer column has NA values ``` Alternatively, I tried to convert the column type after reading as below, but this time I get: ``` df= pd.read_csv(\"data.csv\") df[['id']] = df[['id']].astype(int) error: Cannot convert NA to integer ``` How can I tackle this?", "response":"In version 0.24.+ pandas has gained the ability to hold integer dtypes with missing values. Nullable Integer Data Type. Pandas can represent integer data with possibly missing values using arrays.IntegerArray. This is an extension types implemented within pandas. 
It is not the default dtype for integers, and will not be inferred; you must explicitly pass the dtype into array() or Series: ``` arr = pd.array([1, 2, np.nan], dtype=pd.Int64Dtype()) pd.Series(arr) 0 1 1 2 2 NaN dtype: Int64 ``` For convert column to nullable integers use: ``` df['myCol'] = df['myCol'].astype('Int64') ```", "best_answers_score":0.8, "library_name":"pandas", "question_url":"https:\/\/stackoverflow.com\/questions\/21287624\/convert-pandas-column-containing-nans-to-dtype-int", "best_answers_votes":381, "question_length":637, "response_length":589 }, { "question":"Split a Pandas column of lists into multiple columns I have a Pandas DataFrame with one column: ``` import pandas as pd df = pd.DataFrame({\"teams\": [[\"SF\", \"NYG\"] for _ in range(7)]}) teams 0 [SF, NYG] 1 [SF, NYG] 2 [SF, NYG] 3 [SF, NYG] 4 [SF, NYG] 5 [SF, NYG] 6 [SF, NYG] ``` How can split this column of lists into two columns? Desired result: ``` team1 team2 0 SF NYG 1 SF NYG 2 SF NYG 3 SF NYG 4 SF NYG 5 SF NYG 6 SF NYG ```", "response":"You can use the DataFrame constructor with lists created by to_list: ``` import pandas as pd d1 = {'teams': [['SF', 'NYG'],['SF', 'NYG'],['SF', 'NYG'], ['SF', 'NYG'],['SF', 'NYG'],['SF', 'NYG'],['SF', 'NYG']]} df2 = pd.DataFrame(d1) print (df2) teams 0 [SF, NYG] 1 [SF, NYG] 2 [SF, NYG] 3 [SF, NYG] 4 [SF, NYG] 5 [SF, NYG] 6 [SF, NYG] ``` ``` df2[['team1','team2']] = pd.DataFrame(df2.teams.tolist(), index= df2.index) print (df2) teams team1 team2 0 [SF, NYG] SF NYG 1 [SF, NYG] SF NYG 2 [SF, NYG] SF NYG 3 [SF, NYG] SF NYG 4 [SF, NYG] SF NYG 5 [SF, NYG] SF NYG 6 [SF, NYG] SF NYG ``` And for a new DataFrame: ``` df3 = pd.DataFrame(df2['teams'].to_list(), columns=['team1','team2']) print (df3) team1 team2 0 SF NYG 1 SF NYG 2 SF NYG 3 SF NYG 4 SF NYG 5 SF NYG 6 SF NYG ``` A solution with apply(pd.Series) is very slow: ``` #7k rows df2 = pd.concat([df2]*1000).reset_index(drop=True) In [121]: %timeit df2['teams'].apply(pd.Series) 1.79 s \u00b1 52.5 ms per 
loop (mean \u00b1 std. dev. of 7 runs, 1 loop each) In [122]: %timeit pd.DataFrame(df2['teams'].to_list(), columns=['team1','team2']) 1.63 ms \u00b1 54.3 \u00b5s per loop (mean \u00b1 std. dev. of 7 runs, 1000 loops each) ```", "best_answers_score":0.8, "library_name":"pandas", "question_url":"https:\/\/stackoverflow.com\/questions\/35491274\/split-a-pandas-column-of-lists-into-multiple-columns", "best_answers_votes":556, "question_length":429, "response_length":1161 }, { "question":"Convert columns to string in Pandas I have the following DataFrame from a SQL query: ```none (Pdb) pp total_rows ColumnID RespondentCount 0 -1 2 1 3030096843 1 2 3030096845 1 ``` and I pivot it like this: ```py total_data = total_rows.pivot_table(cols=['ColumnID']) ``` which produces ```none (Pdb) pp total_data ColumnID -1 3030096843 3030096845 RespondentCount 2 1 1 [1 rows x 3 columns] ``` When I convert this dataframe into a dictionary (using total_data.to_dict('records')[0]), I get ```py {3030096843: 1, 3030096845: 1, -1: 2} ``` but I want to make sure the 303 columns are cast as strings instead of integers so that I get this: ```py {'3030096843': 1, '3030096845': 1, -1: 2} ```", "response":"One way to convert to string is to use astype: ``` total_rows['ColumnID'] = total_rows['ColumnID'].astype(str) ``` However, perhaps you are looking for the to_json function, which will convert keys to valid json (and therefore your keys to strings): ``` In [11]: df = pd.DataFrame([['A', 2], ['A', 4], ['B', 6]]) In [12]: df.to_json() Out[12]: '{\"0\":{\"0\":\"A\",\"1\":\"A\",\"2\":\"B\"},\"1\":{\"0\":2,\"1\":4,\"2\":6}}' In [13]: df[0].to_json() Out[13]: '{\"0\":\"A\",\"1\":\"A\",\"2\":\"B\"}' ``` Note: you can pass in a buffer\/file to save this to, along with some other options...", "best_answers_score":0.8, "library_name":"pandas", "question_url":"https:\/\/stackoverflow.com\/questions\/22005911\/convert-columns-to-string-in-pandas", "best_answers_votes":628, "question_length":689, 
"response_length":553 }, { "question":"Apply pandas function to column to create multiple new columns? How to do this in pandas: I have a function extract_text_features on a single text column, returning multiple output columns. Specifically, the function returns 6 values. The function works, however there doesn't seem to be any proper return type (pandas DataFrame\/ numpy array\/ Python list) such that the output can get correctly assigned df.ix[: ,10:16] = df.textcol.map(extract_text_features) So I think I need to drop back to iterating with df.iterrows(), as per this? UPDATE: Iterating with df.iterrows() is at least 20x slower, so I surrendered and split out the function into six distinct .map(lambda ...) calls. UPDATE 2: this question was asked back around v0.11.0, before the useability of df.apply was improved or df.assign() was added in v0.16. Hence much of the question and answers are not too relevant since then.", "response":"I usually do this using zip: ``` >>> df = pd.DataFrame([[i] for i in range(10)], columns=['num']) >>> df num 0 0 1 1 2 2 3 3 4 4 5 5 6 6 7 7 8 8 9 9 >>> def powers(x): >>> return x, x**2, x**3, x**4, x**5, x**6 >>> df['p1'], df['p2'], df['p3'], df['p4'], df['p5'], df['p6'] = \\ >>> zip(*df['num'].map(powers)) >>> df num p1 p2 p3 p4 p5 p6 0 0 0 0 0 0 0 0 1 1 1 1 1 1 1 1 2 2 2 4 8 16 32 64 3 3 3 9 27 81 243 729 4 4 4 16 64 256 1024 4096 5 5 5 25 125 625 3125 15625 6 6 6 36 216 1296 7776 46656 7 7 7 49 343 2401 16807 117649 8 8 8 64 512 4096 32768 262144 9 9 9 81 729 6561 59049 531441 ```", "best_answers_score":0.8, "library_name":"pandas", "question_url":"https:\/\/stackoverflow.com\/questions\/16236684\/apply-pandas-function-to-column-to-create-multiple-new-columns", "best_answers_votes":300, "question_length":892, "response_length":591 }, { "question":"Progress indicator during pandas operations I regularly perform pandas operations on data frames in excess of 15 million or so rows and I'd love to have access to a progress indicator 
for particular operations. Does a text based progress indicator for pandas split-apply-combine operations exist? For example, in something like: ``` df_users.groupby(['userID', 'requestDate']).apply(feature_rollup) ``` where feature_rollup is a somewhat involved function that take many DF columns and creates new user columns through various methods. These operations can take a while for large data frames so I'd like to know if it is possible to have text based output in an iPython notebook that updates me on the progress. So far, I've tried canonical loop progress indicators for Python but they don't interact with pandas in any meaningful way. I'm hoping there's something I've overlooked in the pandas library\/documentation that allows one to know the progress of a split-apply-combine. A simple implementation would maybe look at the total number of data frame subsets upon which the apply function is working and report progress as the completed fraction of those subsets. Is this perhaps something that needs to be added to the library?", "response":"Due to popular demand, I've added pandas support in tqdm (pip install \"tqdm>=4.9.0\"). Unlike the other answers, this will not noticeably slow pandas down -- here's an example for DataFrameGroupBy.progress_apply: ``` import pandas as pd import numpy as np from tqdm import tqdm # from tqdm.auto import tqdm # for notebooks # Create new `pandas` methods which use `tqdm` progress # (can use tqdm_gui, optional kwargs, etc.) tqdm.pandas() df = pd.DataFrame(np.random.randint(0, int(1e8), (10000, 1000))) # Now you can use `progress_apply` instead of `apply` df.groupby(0).progress_apply(lambda x: x**2) ``` In case you're interested in how this works (and how to modify it for your own callbacks), see the examples on GitHub, the full documentation on PyPI, or import the module and run help(tqdm). Other supported functions include map, applymap, aggregate, and transform. 
EDIT To directly answer the original question, replace: ``` df_users.groupby(['userID', 'requestDate']).apply(feature_rollup) ``` with: ``` from tqdm import tqdm tqdm.pandas() df_users.groupby(['userID', 'requestDate']).progress_apply(feature_rollup) ``` Note: tqdm <= v4.8: For versions of tqdm below 4.8, instead of tqdm.pandas() you had to do: ``` from tqdm import tqdm, tqdm_pandas tqdm_pandas(tqdm()) ```", "best_answers_score":0.8, "library_name":"pandas", "question_url":"https:\/\/stackoverflow.com\/questions\/18603270\/progress-indicator-during-pandas-operations", "best_answers_votes":782, "question_length":1232, "response_length":1280 }, { "question":"Pandas: Setting no. of max rows I have a problem viewing the following DataFrame: ``` n = 100 foo = DataFrame(index=range(n)) foo['floats'] = np.random.randn(n) foo ``` The problem is that it does not print all rows per default in ipython notebook, but I have to slice to view the resulting rows. Even the following option does not change the output: ``` pd.set_option('display.max_rows', 500) ``` Does anyone know how to display the whole array?", "response":"Set display.max_rows: ``` pd.set_option('display.max_rows', 500) ``` For older versions of pandas (<=0.11.0) you need to change both display.height and display.max_rows. ``` pd.set_option('display.height', 500) pd.set_option('display.max_rows', 500) ``` See also pd.describe_option('display'). 
You can set an option only temporarily for this one time like this: ``` from IPython.display import display with pd.option_context('display.max_rows', 100, 'display.max_columns', 10): display(df) #need display to show the dataframe when using with in jupyter #some pandas stuff ``` You can also reset an option back to its default value like this: pd.reset_option('display.max_rows') And reset all of them back: pd.reset_option('all')", "best_answers_score":0.8, "library_name":"pandas", "question_url":"https:\/\/stackoverflow.com\/questions\/16424493\/pandas-setting-no-of-max-rows", "best_answers_votes":600, "question_length":446, "response_length":728 }, { "question":"Extracting specific selected columns to new DataFrame as a copy I have a pandas DataFrame with 4 columns and I want to create a new DataFrame that only has three of the columns. This question is similar to: Extracting specific columns from a data frame but for pandas not R. The following code does not work, raises an error, and is certainly not the pandas way to do it. ```py import pandas as pd old = pd.DataFrame({'A' : [4,5], 'B' : [10,20], 'C' : [100,50], 'D' : [-30,-50]}) new = pd.DataFrame(zip(old.A, old.C, old.D)) # raises TypeError: data argument can't be an iterator ``` What is the pandas way to do it?", "response":"There is a way of doing this and it actually looks similar to R ``` new = old[['A', 'C', 'D']].copy() ``` Here you are just selecting the columns you want from the original data frame and creating a variable for those. If you want to modify the new dataframe at all you'll probably want to use .copy() to avoid a SettingWithCopyWarning. 
An alternative method is to use filter which will create a copy by default: ``` new = old.filter(['A','B','D'], axis=1) ``` Finally, depending on the number of columns in your original dataframe, it might be more succinct to express this using a drop (this will also create a copy by default): ``` new = old.drop('B', axis=1) ```", "best_answers_score":0.8, "library_name":"pandas", "question_url":"https:\/\/stackoverflow.com\/questions\/34682828\/extracting-specific-selected-columns-to-new-dataframe-as-a-copy", "best_answers_votes":704, "question_length":616, "response_length":666 }, { "question":"Split (explode) pandas dataframe string entry to separate rows I have a pandas dataframe in which one column of text strings contains comma-separated values. I want to split each CSV field and create a new row per entry (assume that CSV are clean and need only be split on ','). For example, a should become b: ``` In [7]: a Out[7]: var1 var2 0 a,b,c 1 1 d,e,f 2 In [8]: b Out[8]: var1 var2 0 a 1 1 b 1 2 c 1 3 d 2 4 e 2 5 f 2 ``` So far, I have tried various simple functions, but the .apply method seems to only accept one row as return value when it is used on an axis, and I can't get .transform to work. Any suggestions would be much appreciated! 
Example data: ``` from pandas import DataFrame import numpy as np a = DataFrame([{'var1': 'a,b,c', 'var2': 1}, {'var1': 'd,e,f', 'var2': 2}]) b = DataFrame([{'var1': 'a', 'var2': 1}, {'var1': 'b', 'var2': 1}, {'var1': 'c', 'var2': 1}, {'var1': 'd', 'var2': 2}, {'var1': 'e', 'var2': 2}, {'var1': 'f', 'var2': 2}]) ``` I know this won't work because we lose DataFrame meta-data by going through numpy, but it should give you a sense of what I tried to do: ``` def fun(row): letters = row['var1'] letters = letters.split(',') out = np.array([row] * len(letters)) out['var1'] = letters a['idx'] = range(a.shape[0]) z = a.groupby('idx') z.transform(fun) ```", "response":"UPDATE 3: it makes more sense to use Series.explode() \/ DataFrame.explode() methods (implemented in Pandas 0.25.0 and extended in Pandas 1.3.0 to support multi-column explode) as is shown in the usage example: for a single column: ``` In [1]: df = pd.DataFrame({'A': [[0, 1, 2], 'foo', [], [3, 4]], ...: 'B': 1, ...: 'C': [['a', 'b', 'c'], np.nan, [], ['d', 'e']]}) In [2]: df Out[2]: A B C 0 [0, 1, 2] 1 [a, b, c] 1 foo 1 NaN 2 [] 1 [] 3 [3, 4] 1 [d, e] In [3]: df.explode('A') Out[3]: A B C 0 0 1 [a, b, c] 0 1 1 [a, b, c] 0 2 1 [a, b, c] 1 foo 1 NaN 2 NaN 1 [] 3 3 1 [d, e] 3 4 1 [d, e] ``` for multiple columns (for Pandas 1.3.0+): ``` In [4]: df.explode(['A', 'C']) Out[4]: A B C 0 0 1 a 0 1 1 b 0 2 1 c 1 foo 1 NaN 2 NaN 1 NaN 3 3 1 d 3 4 1 e ``` UPDATE 2: more generic vectorized function, which will work for multiple normal and multiple list columns ``` def explode(df, lst_cols, fill_value='', preserve_index=False): # make sure `lst_cols` is list-alike if (lst_cols is not None and len(lst_cols) > 0 and not isinstance(lst_cols, (list, tuple, np.ndarray, pd.Series))): lst_cols = [lst_cols] # all columns except `lst_cols` idx_cols = df.columns.difference(lst_cols) # calculate lengths of lists lens = df[lst_cols[0]].str.len() # preserve original index values idx = np.repeat(df.index.values, lens) # create 
\"exploded\" DF res = (pd.DataFrame({ col:np.repeat(df[col].values, lens) for col in idx_cols}, index=idx) .assign(**{col:np.concatenate(df.loc[lens>0, col].values) for col in lst_cols})) # append those rows that have empty lists if (lens == 0).any(): # at least one list in cells is empty res = (res.append(df.loc[lens==0, idx_cols], sort=False) .fillna(fill_value)) # revert the original index order res = res.sort_index() # reset index if requested if not preserve_index: res = res.reset_index(drop=True) return res ``` Demo: Multiple list columns - all list columns must have the same # of elements in each row: ``` In [134]: df Out[134]: aaa myid num text 0 10 1 [1, 2, 3] [aa, bb, cc] 1 11 2 [] [] 2 12 3 [1, 2] [cc, dd] 3 13 4 [] [] In [135]: explode(df, ['num','text'], fill_value='') Out[135]: aaa myid num text 0 10 1 1 aa 1 10 1 2 bb 2 10 1 3 cc 3 11 2 4 12 3 1 cc 5 12 3 2 dd 6 13 4 ``` preserving original index values: ``` In [136]: explode(df, ['num','text'], fill_value='', preserve_index=True) Out[136]: aaa myid num text 0 10 1 1 aa 0 10 1 2 bb 0 10 1 3 cc 1 11 2 2 12 3 1 cc 2 12 3 2 dd 3 13 4 ``` Setup: ``` df = pd.DataFrame({ 'aaa': {0: 10, 1: 11, 2: 12, 3: 13}, 'myid': {0: 1, 1: 2, 2: 3, 3: 4}, 'num': {0: [1, 2, 3], 1: [], 2: [1, 2], 3: []}, 'text': {0: ['aa', 'bb', 'cc'], 1: [], 2: ['cc', 'dd'], 3: []} }) ``` CSV column: ``` In [46]: df Out[46]: var1 var2 var3 0 a,b,c 1 XX 1 d,e,f,x,y 2 ZZ In [47]: explode(df.assign(var1=df.var1.str.split(',')), 'var1') Out[47]: var1 var2 var3 0 a 1 XX 1 b 1 XX 2 c 1 XX 3 d 2 ZZ 4 e 2 ZZ 5 f 2 ZZ 6 x 2 ZZ 7 y 2 ZZ ``` using this little trick we can convert CSV-like column to list column: ``` In [48]: df.assign(var1=df.var1.str.split(',')) Out[48]: var1 var2 var3 0 [a, b, c] 1 XX 1 [d, e, f, x, y] 2 ZZ ``` UPDATE: generic vectorized approach (will work also for multiple columns): Original DF: ``` In [177]: df Out[177]: var1 var2 var3 0 a,b,c 1 XX 1 d,e,f,x,y 2 ZZ ``` Solution: first let's convert CSV strings to lists: ``` In 
[178]: lst_col = 'var1' In [179]: x = df.assign(**{lst_col:df[lst_col].str.split(',')}) In [180]: x Out[180]: var1 var2 var3 0 [a, b, c] 1 XX 1 [d, e, f, x, y] 2 ZZ ``` Now we can do this: ``` In [181]: pd.DataFrame({ ...: col:np.repeat(x[col].values, x[lst_col].str.len()) ...: for col in x.columns.difference([lst_col]) ...: }).assign(**{lst_col:np.concatenate(x[lst_col].values)})[x.columns.tolist()] ...: Out[181]: var1 var2 var3 0 a 1 XX 1 b 1 XX 2 c 1 XX 3 d 2 ZZ 4 e 2 ZZ 5 f 2 ZZ 6 x 2 ZZ 7 y 2 ZZ ``` OLD answer: Inspired by @AFinkelstein solution, i wanted to make it bit more generalized which could be applied to DF with more than two columns and as fast, well almost, as fast as AFinkelstein's solution): ``` In [2]: df = pd.DataFrame( ...: [{'var1': 'a,b,c', 'var2': 1, 'var3': 'XX'}, ...: {'var1': 'd,e,f,x,y', 'var2': 2, 'var3': 'ZZ'}] ...: ) In [3]: df Out[3]: var1 var2 var3 0 a,b,c 1 XX 1 d,e,f,x,y 2 ZZ In [4]: (df.set_index(df.columns.drop('var1',1).tolist()) ...: .var1.str.split(',', expand=True) ...: .stack() ...: .reset_index() ...: .rename(columns={0:'var1'}) ...: .loc[:, df.columns] ...: ) Out[4]: var1 var2 var3 0 a 1 XX 1 b 1 XX 2 c 1 XX 3 d 2 ZZ 4 e 2 ZZ 5 f 2 ZZ 6 x 2 ZZ 7 y 2 ZZ ```", "best_answers_score":0.8, "library_name":"pandas", "question_url":"https:\/\/stackoverflow.com\/questions\/12680754\/split-explode-pandas-dataframe-string-entry-to-separate-rows", "best_answers_votes":259, "question_length":1305, "response_length":4533 }, { "question":"Show DataFrame as table in iPython Notebook I am using iPython notebook. When I do this: ``` df ``` I get a beautiful table with cells. However, if i do this: ``` df1 df2 ``` it doesn't print the first beautiful table. If I try this: ``` print df1 print df2 ``` It prints out the table in a different format that spills columns over and makes the output very tall. 
Is there a way to force it to print out the beautiful tables for both datasets?", "response":"You'll need to use the HTML() or display() functions from IPython's display module: ```python from IPython.display import display, HTML # Assuming that dataframes df1 and df2 are already defined: print \"Dataframe 1:\" display(df1) print \"Dataframe 2:\" display(HTML(df2.to_html())) ``` Note that if you just print df1.to_html() you'll get the raw, unrendered HTML. You can also import from IPython.core.display with the same effect", "best_answers_score":0.8, "library_name":"pandas", "question_url":"https:\/\/stackoverflow.com\/questions\/26873127\/show-dataframe-as-table-in-ipython-notebook", "best_answers_votes":573, "question_length":444, "response_length":429 }, { "question":"Selecting with complex criteria from pandas.DataFrame For example I have simple DF: ``` import pandas as pd from random import randint df = pd.DataFrame({'A': [randint(1, 9) for x in range(10)], 'B': [randint(1, 9)*10 for x in range(10)], 'C': [randint(1, 9)*100 for x in range(10)]}) ``` Can I select values from 'A' for which corresponding values for 'B' will be greater than 50, and for 'C' - not equal to 900, using methods and idioms of Pandas?", "response":"Sure! 
Setup: ``` >>> import pandas as pd >>> from random import randint >>> df = pd.DataFrame({'A': [randint(1, 9) for x in range(10)], 'B': [randint(1, 9)*10 for x in range(10)], 'C': [randint(1, 9)*100 for x in range(10)]}) >>> df A B C 0 9 40 300 1 9 70 700 2 5 70 900 3 8 80 900 4 7 50 200 5 9 30 900 6 2 80 700 7 2 80 400 8 5 80 300 9 7 70 800 ``` We can apply column operations and get boolean Series objects: ``` >>> df[\"B\"] > 50 0 False 1 True 2 True 3 True 4 False 5 False 6 True 7 True 8 True 9 True Name: B >>> (df[\"B\"] > 50) & (df[\"C\"] != 900) ``` or ``` >>> (df[\"B\"] > 50) & ~(df[\"C\"] == 900) 0 False 1 False 2 True 3 True 4 False 5 False 6 False 7 False 8 False 9 False ``` [Update, to switch to new-style .loc]: And then we can use these to index into the object. For read access, you can chain indices: ``` >>> df[\"A\"][(df[\"B\"] > 50) & (df[\"C\"] != 900)] 2 5 3 8 Name: A, dtype: int64 ``` but you can get yourself into trouble because of the difference between a view and a copy doing this for write access. You can use .loc instead: ``` >>> df.loc[(df[\"B\"] > 50) & (df[\"C\"] != 900), \"A\"] 2 5 3 8 Name: A, dtype: int64 >>> df.loc[(df[\"B\"] > 50) & (df[\"C\"] != 900), \"A\"].values array([5, 8], dtype=int64) >>> df.loc[(df[\"B\"] > 50) & (df[\"C\"] != 900), \"A\"] *= 1000 >>> df A B C 0 9 40 300 1 9 70 700 2 5000 70 900 3 8000 80 900 4 7 50 200 5 9 30 900 6 2 80 700 7 2 80 400 8 5 80 300 9 7 70 800 ```", "best_answers_score":0.8, "library_name":"pandas", "question_url":"https:\/\/stackoverflow.com\/questions\/15315452\/selecting-with-complex-criteria-from-pandas-dataframe", "best_answers_votes":542, "question_length":449, "response_length":1410 }, { "question":"why should I make a copy of a data frame in pandas When selecting a sub dataframe from a parent dataframe, I noticed that some programmers make a copy of the data frame using the .copy() method. 
For example, ```py X = my_dataframe[features_list].copy() ``` ...instead of just ```py X = my_dataframe[features_list] ``` Why are they making a copy of the data frame? What will happen if I don't make a copy?", "response":"This answer has been deprecated in newer versions of pandas. See docs This expands on Paul's answer. In Pandas, indexing a DataFrame returns a reference to the initial DataFrame. Thus, changing the subset will change the initial DataFrame. Thus, you'd want to use the copy if you want to make sure the initial DataFrame shouldn't change. Consider the following code: ``` df = DataFrame({'x': [1,2]}) df_sub = df[0:1] df_sub.x = -1 print(df) ``` You'll get: ``` x 0 -1 1 2 ``` In contrast, the following leaves df unchanged: ``` df_sub_copy = df[0:1].copy() df_sub_copy.x = -1 ```", "best_answers_score":0.8, "library_name":"pandas", "question_url":"https:\/\/stackoverflow.com\/questions\/27673231\/why-should-i-make-a-copy-of-a-data-frame-in-pandas", "best_answers_votes":375, "question_length":404, "response_length":579 }, { "question":"Find row where values for column is maximal in a pandas DataFrame How can I find the row for which the value of a specific column is maximal? df.max() will give me the maximal value for each column, I don't know how to get the corresponding row.", "response":"Use the pandas idxmax function. It's straightforward: ``` >>> import pandas >>> import numpy as np >>> df = pandas.DataFrame(np.random.randn(5,3),columns=['A','B','C']) >>> df A B C 0 1.232853 -1.979459 -0.573626 1 0.140767 0.394940 1.068890 2 0.742023 1.343977 -0.579745 3 2.125299 -0.649328 -0.211692 4 -0.187253 1.908618 -1.862934 >>> df['A'].idxmax() 3 >>> df['B'].idxmax() 4 >>> df['C'].idxmax() 1 ``` Alternatively you could also use numpy.argmax, such as numpy.argmax(df['A']) -- it provides the same thing, and appears at least as fast as idxmax in cursory observations. idxmax() returns indices labels, not integers. 
Example: if you have string values as your index labels, like rows 'a' through 'e', you might want to know that the max occurs in row 4 (not row 'd'). If you want the integer position of that label within the Index you have to get it manually (which can be tricky now that duplicate row labels are allowed). HISTORICAL NOTES: idxmax() used to be called argmax() prior to 0.11 argmax was deprecated prior to 1.0.0 and removed entirely in 1.0.0 back as of Pandas 0.16, argmax used to exist and perform the same function (though appeared to run more slowly than idxmax). argmax function returned the integer position within the index of the row location of the maximum element. pandas moved to using row labels instead of integer indices. Positional integer indices used to be very common, more common than labels, especially in applications where duplicate row labels are common. For example, consider this toy DataFrame with a duplicate row label: ``` In [19]: dfrm Out[19]: A B C a 0.143693 0.653810 0.586007 b 0.623582 0.312903 0.919076 c 0.165438 0.889809 0.000967 d 0.308245 0.787776 0.571195 e 0.870068 0.935626 0.606911 f 0.037602 0.855193 0.728495 g 0.605366 0.338105 0.696460 h 0.000000 0.090814 0.963927 i 0.688343 0.188468 0.352213 i 0.879000 0.105039 0.900260 In [20]: dfrm['A'].idxmax() Out[20]: 'i' In [21]: dfrm.iloc[dfrm['A'].idxmax()] # .ix instead of .iloc in older versions of pandas Out[21]: A B C i 0.688343 0.188468 0.352213 i 0.879000 0.105039 0.900260 ``` So here a naive use of idxmax is not sufficient, whereas the old form of argmax would correctly provide the positional location of the max row (in this case, position 9). This is exactly one of those nasty kinds of bug-prone behaviors in dynamically typed languages that makes this sort of thing so unfortunate, and worth beating a dead horse over.
If you are writing systems code and your system suddenly gets used on some data sets that are not cleaned properly before being joined, it's very easy to end up with duplicate row labels, especially string labels like a CUSIP or SEDOL identifier for financial assets. You can't easily use the type system to help you out, and you may not be able to enforce uniqueness on the index without running into unexpectedly missing data. So you're left with hoping that your unit tests covered everything (they didn't, or more likely no one wrote any tests) -- otherwise (most likely) you're just left waiting to see if you happen to smack into this error at runtime, in which case you probably have to go drop many hours worth of work from the database you were outputting results to, bang your head against the wall in IPython trying to manually reproduce the problem, finally figuring out that it's because idxmax can only report the label of the max row, and then being disappointed that no standard function automatically gets the positions of the max row for you, writing a buggy implementation yourself, editing the code, and praying you don't run into the problem again.", "best_answers_score":0.8, "library_name":"pandas", "question_url":"https:\/\/stackoverflow.com\/questions\/10202570\/find-row-where-values-for-column-is-maximal-in-a-pandas-dataframe", "best_answers_votes":386, "question_length":245, "response_length":3623 }, { "question":"Pandas create empty DataFrame with only column names I have a dynamic DataFrame which works fine, but when there are no data to be added into the DataFrame I get an error. And therefore I need a solution to create an empty DataFrame with only the column names. For now I have something like this: ```py df = pd.DataFrame(columns=COLUMN_NAMES) # Note that there is no row data inserted. ``` PS: It is important that the column names would still appear in a DataFrame. 
But when I use it like this I get something like that as a result: ```none Index([], dtype='object') Empty DataFrame ``` The \"Empty DataFrame\" part is good! But instead of the Index thing I need to still display the columns. An important thing that I found out: I am converting this DataFrame to a PDF using Jinja2, so therefore I'm calling out a method to first output it to HTML like that: ```py df.to_html() ``` This is where the columns get lost I think. In general, I followed this example: http:\/\/pbpython.com\/pdf-reports.html. The css is also from the link. That's what I do to send the dataframe to the PDF: ```py env = Environment(loader=FileSystemLoader('.')) template = env.get_template(\"pdf_report_template.html\") template_vars = {\"my_dataframe\": df.to_html()} html_out = template.render(template_vars) HTML(string=html_out).write_pdf(\"my_pdf.pdf\", stylesheets=[\"pdf_report_style.css\"]) ```", "response":"You can create an empty DataFrame with either column names or an Index: ``` In [4]: import pandas as pd In [5]: df = pd.DataFrame(columns=['A','B','C','D','E','F','G']) In [6]: df Out[6]: Empty DataFrame Columns: [A, B, C, D, E, F, G] Index: [] ``` Or ``` In [7]: df = pd.DataFrame(index=range(1,10)) In [8]: df Out[8]: Empty DataFrame Columns: [] Index: [1, 2, 3, 4, 5, 6, 7, 8, 9] ``` Edit: Even after your amendment with the .to_html, I can't reproduce. This: ``` df = pd.DataFrame(columns=['A','B','C','D','E','F','G']) df.to_html('test.html') ``` Produces: ``` A B C D E F G ```", "best_answers_score":0.8, "library_name":"pandas", "question_url":"https:\/\/stackoverflow.com\/questions\/44513738\/pandas-create-empty-dataframe-with-only-column-names", "best_answers_votes":451, "question_length":1369, "response_length":592 }, { "question":"Merge two dataframes by index [duplicate] This question already has answers here: Pandas Merging 101 (8 answers) What are the 'levels', 'keys', and names arguments for in Pandas' concat function? 
(1 answer) Closed 7 months ago. I have the following dataframes: ``` > df1 id begin conditional confidence discoveryTechnique 0 278 56 false 0.0 1 1 421 18 false 0.0 1 > df2 concept 0 A 1 B ``` How do I merge on the indices to get: ``` id begin conditional confidence discoveryTechnique concept 0 278 56 false 0.0 1 A 1 421 18 false 0.0 1 B ``` I ask because it is my understanding that merge() i.e. df1.merge(df2) uses columns to do the matching. In fact, doing this I get: ``` Traceback (most recent call last): File \"\", line 1, in File \"\/usr\/local\/lib\/python2.7\/dist-packages\/pandas\/core\/frame.py\", line 4618, in merge copy=copy, indicator=indicator) File \"\/usr\/local\/lib\/python2.7\/dist-packages\/pandas\/tools\/merge.py\", line 58, in merge copy=copy, indicator=indicator) File \"\/usr\/local\/lib\/python2.7\/dist-packages\/pandas\/tools\/merge.py\", line 491, in __init__ self._validate_specification() File \"\/usr\/local\/lib\/python2.7\/dist-packages\/pandas\/tools\/merge.py\", line 812, in _validate_specification raise MergeError('No common columns to perform merge on') pandas.tools.merge.MergeError: No common columns to perform merge on ``` Is it bad practice to merge on index? Is it impossible? 
If so, how can I shift the index into a new column called \"index\"?", "response":"Use merge, which is an inner join by default: ``` pd.merge(df1, df2, left_index=True, right_index=True) ``` Or join, which is a left join by default: ``` df1.join(df2) ``` Or concat, which is an outer join by default: ``` pd.concat([df1, df2], axis=1) ``` Samples: ``` df1 = pd.DataFrame({'a':range(6), 'b':[5,3,6,9,2,4]}, index=list('abcdef')) print (df1) a b a 0 5 b 1 3 c 2 6 d 3 9 e 4 2 f 5 4 df2 = pd.DataFrame({'c':range(4), 'd':[10,20,30, 40]}, index=list('abhi')) print (df2) c d a 0 10 b 1 20 h 2 30 i 3 40 ``` ``` # Default inner join df3 = pd.merge(df1, df2, left_index=True, right_index=True) print (df3) a b c d a 0 5 0 10 b 1 3 1 20 # Default left join df4 = df1.join(df2) print (df4) a b c d a 0 5 0.0 10.0 b 1 3 1.0 20.0 c 2 6 NaN NaN d 3 9 NaN NaN e 4 2 NaN NaN f 5 4 NaN NaN # Default outer join df5 = pd.concat([df1, df2], axis=1) print (df5) a b c d a 0.0 5.0 0.0 10.0 b 1.0 3.0 1.0 20.0 c 2.0 6.0 NaN NaN d 3.0 9.0 NaN NaN e 4.0 2.0 NaN NaN f 5.0 4.0 NaN NaN h NaN NaN 2.0 30.0 i NaN NaN 3.0 40.0 ```", "best_answers_score":0.8, "library_name":"pandas", "question_url":"https:\/\/stackoverflow.com\/questions\/40468069\/merge-two-dataframes-by-index", "best_answers_votes":672, "question_length":1451, "response_length":1021 }, { "question":"Select rows in pandas MultiIndex DataFrame What are the most common pandas ways to select\/filter rows of a dataframe whose index is a MultiIndex? Slicing based on a single value\/label Slicing based on multiple labels from one or more levels Filtering on boolean conditions and expressions Which methods are applicable in what circumstances Assumptions for simplicity: input dataframe does not have duplicate index keys input dataframe below only has two levels. 
(Most solutions shown here generalize to N levels) Example input: ``` mux = pd.MultiIndex.from_arrays([ list('aaaabbbbbccddddd'), list('tuvwtuvwtuvwtuvw') ], names=['one', 'two']) df = pd.DataFrame({'col': np.arange(len(mux))}, mux) col one two a t 0 u 1 v 2 w 3 b t 4 u 5 v 6 w 7 t 8 c u 9 v 10 d w 11 t 12 u 13 v 14 w 15 ``` Question 1: Selecting a Single Item How do I select rows having \"a\" in level \"one\"? ``` col one two a t 0 u 1 v 2 w 3 ``` Additionally, how would I be able to drop level \"one\" in the output? ``` col two t 0 u 1 v 2 w 3 ``` Question 1b How do I slice all rows with value \"t\" on level \"two\"? ``` col one two a t 0 b t 4 t 8 d t 12 ``` Question 2: Selecting Multiple Values in a Level How can I select rows corresponding to items \"b\" and \"d\" in level \"one\"? ``` col one two b t 4 u 5 v 6 w 7 t 8 d w 11 t 12 u 13 v 14 w 15 ``` Question 2b How would I get all values corresponding to \"t\" and \"w\" in level \"two\"? ``` col one two a t 0 w 3 b t 4 w 7 t 8 d w 11 t 12 w 15 ``` Question 3: Slicing a Single Cross Section (x, y) How do I retrieve a cross section, i.e., a single row having a specific values for the index from df? Specifically, how do I retrieve the cross section of ('c', 'u'), given by ``` col one two c u 9 ``` Question 4: Slicing Multiple Cross Sections [(a, b), (c, d), ...] How do I select the two rows corresponding to ('c', 'u'), and ('a', 'w')? ``` col one two c u 9 a w 3 ``` Question 5: One Item Sliced per Level How can I retrieve all rows corresponding to \"a\" in level \"one\" or \"t\" in level \"two\"? ``` col one two a t 0 u 1 v 2 w 3 b t 4 t 8 d t 12 ``` Question 6: Arbitrary Slicing How can I slice specific cross sections? For \"a\" and \"b\", I would like to select all rows with sub-levels \"u\" and \"v\", and for \"d\", I would like to select rows with sub-level \"w\". 
``` col one two a u 1 v 2 b u 5 v 6 d w 11 w 15 ``` Question 7 will use a unique setup consisting of a numeric level: ``` np.random.seed(0) mux2 = pd.MultiIndex.from_arrays([ list('aaaabbbbbccddddd'), np.random.choice(10, size=16) ], names=['one', 'two']) df2 = pd.DataFrame({'col': np.arange(len(mux2))}, mux2) col one two a 5 0 0 1 3 2 3 3 b 7 4 9 5 3 6 5 7 2 8 c 4 9 7 10 d 6 11 8 12 8 13 1 14 6 15 ``` Question 7: Filtering by numeric inequality on individual levels of the multiindex How do I get all rows where values in level \"two\" are greater than 5? ``` col one two b 7 4 9 5 c 7 10 d 6 11 8 12 8 13 6 15 ``` Note: This post will not go through how to create MultiIndexes, how to perform assignment operations on them, or any performance related discussions (these are separate topics for another time).", "response":"MultiIndex \/ Advanced Indexing Note This post will be structured in the following manner: The questions put forth in the OP will be addressed, one by one For each question, one or more methods applicable to solving this problem and getting the expected result will be demonstrated. Notes (much like this one) will be included for readers interested in learning about additional functionality, implementation details, and other info cursory to the topic at hand. These notes have been compiled through scouring the docs and uncovering various obscure features, and from my own (admittedly limited) experience. All code samples have created and tested on pandas v0.23.4, python3.7. If something is not clear, or factually incorrect, or if you did not find a solution applicable to your use case, please feel free to suggest an edit, request clarification in the comments, or open a new question, ....as applicable. 
Here is an introduction to some common idioms (henceforth referred to as the Four Idioms) we will be frequently re-visiting DataFrame.loc - A general solution for selection by label (+ pd.IndexSlice for more complex applications involving slices) DataFrame.xs - Extract a particular cross section from a Series\/DataFrame. DataFrame.query - Specify slicing and\/or filtering operations dynamically (i.e., as an expression that is evaluated dynamically. Is more applicable to some scenarios than others. Also see this section of the docs for querying on MultiIndexes. Boolean indexing with a mask generated using MultiIndex.get_level_values (often in conjunction with Index.isin, especially when filtering with multiple values). This is also quite useful in some circumstances. It will be beneficial to look at the various slicing and filtering problems in terms of the Four Idioms to gain a better understanding what can be applied to a given situation. It is very important to understand that not all of the idioms will work equally well (if at all) in every circumstance. If an idiom has not been listed as a potential solution to a problem below, that means that idiom cannot be applied to that problem effectively. Question 1 How do I select rows having \"a\" in level \"one\"? ``` col one two a t 0 u 1 v 2 w 3 ``` You can use loc, as a general purpose solution applicable to most situations: ``` df.loc[['a']] ``` At this point, if you get ``` TypeError: Expected tuple, got str ``` That means you're using an older version of pandas. Consider upgrading! Otherwise, use df.loc[('a', slice(None)), :]. Alternatively, you can use xs here, since we are extracting a single cross section. Note the levels and axis arguments (reasonable defaults can be assumed here). ``` df.xs('a', level=0, axis=0, drop_level=False) # df.xs('a', drop_level=False) ``` Here, the drop_level=False argument is needed to prevent xs from dropping level \"one\" in the result (the level we sliced on). 
Yet another option here is using query: ``` df.query("one == 'a'") ``` If the index did not have a name, you would need to change your query string to be \"ilevel_0 == 'a'\". Finally, using get_level_values: ``` df[df.index.get_level_values('one') == 'a'] # If your levels are unnamed, or if you need to select by position (not label), # df[df.index.get_level_values(0) == 'a'] ``` Additionally, how would I be able to drop level \"one\" in the output? ``` col two t 0 u 1 v 2 w 3 ``` This can be easily done using either ``` df.loc['a'] # Notice the single string argument instead of the list. ``` Or, ``` df.xs('a', level=0, axis=0, drop_level=True) # df.xs('a') ``` Notice that we can omit the drop_level argument (it is assumed to be True by default). Note You may notice that a filtered DataFrame may still have all the levels, even if they do not show when printing the DataFrame out. For example, ``` v = df.loc[['a']] print(v) col one two a t 0 u 1 v 2 w 3 print(v.index) MultiIndex(levels=[['a', 'b', 'c', 'd'], ['t', 'u', 'v', 'w']], labels=[[0, 0, 0, 0], [0, 1, 2, 3]], names=['one', 'two']) ``` You can get rid of these levels using MultiIndex.remove_unused_levels: ``` v.index = v.index.remove_unused_levels() ``` ``` print(v.index) MultiIndex(levels=[['a'], ['t', 'u', 'v', 'w']], labels=[[0, 0, 0, 0], [0, 1, 2, 3]], names=['one', 'two']) ``` Question 1b How do I slice all rows with value \"t\" on level \"two\"? ``` col one two a t 0 b t 4 t 8 d t 12 ``` Intuitively, you would want something involving slice(): ``` df.loc[(slice(None), 't'), :] ``` It Just Works!\u2122 But it is clunky. We can facilitate a more natural slicing syntax using the pd.IndexSlice API here. ``` idx = pd.IndexSlice df.loc[idx[:, 't'], :] ``` This is much, much cleaner. Note Why is the trailing slice : across the columns required? This is because loc can be used to select and slice along both axes (axis=0 or axis=1).
Without explicitly making it clear which axis the slicing is to be done on, the operation becomes ambiguous. See the big red box in the documentation on slicing. If you want to remove any shade of ambiguity, loc accepts an axis parameter: ``` df.loc(axis=0)[pd.IndexSlice[:, 't']] ``` Without the axis parameter (i.e., just by doing df.loc[pd.IndexSlice[:, 't']]), slicing is assumed to be on the columns, and a KeyError will be raised in this circumstance. This is documented in slicers. For the purpose of this post, however, we will explicitly specify all axes. With xs, it is ``` df.xs('t', axis=0, level=1, drop_level=False) ``` With query, it is ``` df.query(\"two == 't'\") # Or, if the second level has no name, # df.query(\"ilevel_1 == 't'\") ``` And finally, with get_level_values, you may do ``` df[df.index.get_level_values('two') == 't'] # Or, to perform selection by position\/integer, # df[df.index.get_level_values(1) == 't'] ``` All to the same effect. Question 2 How can I select rows corresponding to items \"b\" and \"d\" in level \"one\"? ``` col one two b t 4 u 5 v 6 w 7 t 8 d w 11 t 12 u 13 v 14 w 15 ``` Using loc, this is done in a similar fashion by specifying a list. ``` df.loc[['b', 'd']] ``` To solve the above problem of selecting \"b\" and \"d\", you can also use query: ``` items = ['b', 'd'] df.query(\"one in @items\") # df.query(\"one == @items\", parser='pandas') # df.query(\"one in ['b', 'd']\") # df.query(\"one == ['b', 'd']\", parser='pandas') ``` Note Yes, the default parser is 'pandas', but it is important to highlight that this syntax isn't conventional Python. The pandas parser generates a slightly different parse tree from the expression. This is done to make some operations more intuitive to specify. For more information, please read my post on Dynamic Expression Evaluation in pandas using pd.eval().
And, with get_level_values + Index.isin: ``` df[df.index.get_level_values(\"one\").isin(['b', 'd'])] ``` Question 2b How would I get all values corresponding to \"t\" and \"w\" in level \"two\"? ``` col one two a t 0 w 3 b t 4 w 7 t 8 d w 11 t 12 w 15 ``` With loc, this is possible only in conjunction with pd.IndexSlice. ``` df.loc[pd.IndexSlice[:, ['t', 'w']], :] ``` The first colon : in pd.IndexSlice[:, ['t', 'w']] means to slice across the first level. As the depth of the level being queried increases, you will need to specify more slices, one per level being sliced across. You will not need to specify more levels beyond the one being sliced, however. With query, this is ``` items = ['t', 'w'] df.query(\"two in @items\") # df.query(\"two == @items\", parser='pandas') # df.query(\"two in ['t', 'w']\") # df.query(\"two == ['t', 'w']\", parser='pandas') ``` With get_level_values and Index.isin (similar to above): ``` df[df.index.get_level_values('two').isin(['t', 'w'])] ``` Question 3 How do I retrieve a cross section, i.e., a single row having specific values for the index from df? Specifically, how do I retrieve the cross section of ('c', 'u'), given by ``` col one two c u 9 ``` Use loc by specifying a tuple of keys: ``` df.loc[('c', 'u'), :] ``` Or, ``` df.loc[pd.IndexSlice[('c', 'u')]] ``` Note At this point, you may run into a PerformanceWarning that looks like this: ``` PerformanceWarning: indexing past lexsort depth may impact performance. ``` This just means that your index is not sorted. pandas depends on the index being sorted (in this case, lexicographically, since we are dealing with string values) for optimal search and retrieval. A quick fix would be to sort your DataFrame in advance using DataFrame.sort_index.
This is especially desirable from a performance standpoint if you plan on doing multiple such queries in tandem: ``` df_sort = df.sort_index() df_sort.loc[('c', 'u')] ``` You can also use MultiIndex.is_lexsorted() to check whether the index is sorted or not. This function returns True or False accordingly. You can call this function to determine whether an additional sorting step is required or not. With xs, this is again simply passing a single tuple as the first argument, with all other arguments set to their appropriate defaults: ``` df.xs(('c', 'u')) ``` With query, things become a bit clunky: ``` df.query(\"one == 'c' and two == 'u'\") ``` You can see now that this is going to be relatively difficult to generalize. But it is still OK for this particular problem. With accesses spanning multiple levels, get_level_values can still be used, but is not recommended: ``` m1 = (df.index.get_level_values('one') == 'c') m2 = (df.index.get_level_values('two') == 'u') df[m1 & m2] ``` Question 4 How do I select the two rows corresponding to ('c', 'u'), and ('a', 'w')? ``` col one two c u 9 a w 3 ``` With loc, this is still as simple as: ``` df.loc[[('c', 'u'), ('a', 'w')]] # df.loc[pd.IndexSlice[[('c', 'u'), ('a', 'w')]]] ``` With query, you will need to dynamically generate a query string by iterating over your cross sections and levels: ``` cses = [('c', 'u'), ('a', 'w')] levels = ['one', 'two'] # This is a useful check to make in advance. assert all(len(levels) == len(cs) for cs in cses) query = '(' + ') or ('.join([ ' and '.join([f\"({l} == {repr(c)})\" for l, c in zip(levels, cs)]) for cs in cses ]) + ')' print(query) # ((one == 'c') and (two == 'u')) or ((one == 'a') and (two == 'w')) df.query(query) ``` 100% DO NOT RECOMMEND! But it is possible. What if I have multiple levels? One option in this scenario would be to use droplevel to drop the levels you're not checking, then use isin to test membership, and then boolean index on the final result.
``` df[df.index.droplevel(unused_level).isin([('c', 'u'), ('a', 'w')])] ``` Question 5 How can I retrieve all rows corresponding to \"a\" in level \"one\" or \"t\" in level \"two\"? ``` col one two a t 0 u 1 v 2 w 3 b t 4 t 8 d t 12 ``` This is actually very difficult to do with loc while ensuring correctness and still maintaining code clarity. df.loc[pd.IndexSlice['a', 't']] is incorrect; it is interpreted as df.loc[pd.IndexSlice[('a', 't')]] (i.e., selecting a cross section). You may think of a solution with pd.concat to handle each label separately: ``` pd.concat([ df.loc[['a'],:], df.loc[pd.IndexSlice[:, 't'],:] ]) col one two a t 0 u 1 v 2 w 3 t 0 # Does this look right to you? No, it isn't! b t 4 t 8 d t 12 ``` But you'll notice one of the rows is duplicated. This is because that row satisfied both slicing conditions, and so appeared twice. You will instead need to do ``` v = pd.concat([ df.loc[['a'],:], df.loc[pd.IndexSlice[:, 't'],:] ]) v[~v.index.duplicated()] ``` But if your DataFrame inherently contains duplicate indices (that you want), then this will not retain them. Use with extreme caution. With query, this is stupidly simple: ``` df.query(\"one == 'a' or two == 't'\") ``` With get_level_values, this is still simple, but not as elegant: ``` m1 = (df.index.get_level_values('one') == 'a') m2 = (df.index.get_level_values('two') == 't') df[m1 | m2] ``` Question 6 How can I slice specific cross sections? For \"a\" and \"b\", I would like to select all rows with sub-levels \"u\" and \"v\", and for \"d\", I would like to select rows with sub-level \"w\". ``` col one two a u 1 v 2 b u 5 v 6 d w 11 w 15 ``` This is a special case that I've added to help understand the applicability of the Four Idioms: this is one case where none of them will work effectively, since the slicing is very specific, and does not follow any real pattern. Usually, slicing problems like this will require explicitly passing a list of keys to loc.
One way of doing this is with: ``` keys = [('a', 'u'), ('a', 'v'), ('b', 'u'), ('b', 'v'), ('d', 'w')] df.loc[keys, :] ``` If you want to save some typing, you will recognise that there is a pattern to slicing \"a\", \"b\" and its sublevels, so we can separate the slicing task into two portions and concat the result: ``` pd.concat([ df.loc[(('a', 'b'), ('u', 'v')), :], df.loc[('d', 'w'), :] ], axis=0) ``` The slicing specification for \"a\" and \"b\", (('a', 'b'), ('u', 'v')), is slightly cleaner because the sub-levels being indexed are the same for each level. Question 7 How do I get all rows where values in level \"two\" are greater than 5? ``` col one two b 7 4 9 5 c 7 10 d 6 11 8 12 8 13 6 15 ``` This can be done using query, ``` df2.query(\"two > 5\") ``` And get_level_values. ``` df2[df2.index.get_level_values('two') > 5] ``` Note Similar to this example, we can filter based on any arbitrary condition using these constructs. In general, it is useful to remember that loc and xs are specifically for label-based indexing, while query and get_level_values are helpful for building general conditional masks for filtering. Bonus Question What if I need to slice a MultiIndex column? Actually, most solutions here are applicable to columns as well, with minor changes. Consider: ``` np.random.seed(0) mux3 = pd.MultiIndex.from_product([ list('ABCD'), list('efgh') ], names=['one','two']) df3 = pd.DataFrame(np.random.choice(10, (3, len(mux3))), columns=mux3) print(df3) one A B C D two e f g h e f g h e f g h e f g h 0 5 0 3 3 7 9 3 5 2 4 7 6 8 8 1 6 1 7 7 8 1 5 9 8 9 4 3 0 3 5 0 2 3 2 8 1 3 3 3 7 0 1 9 9 0 4 7 3 2 7 ``` These are the changes you will need to make to the Four Idioms to have them working with columns. To slice with loc, use ``` df3.loc[:, ....] # Notice how we slice across the index with `:`. ``` or, ``` df3.loc[:, pd.IndexSlice[...]] ``` To use xs as appropriate, just pass an argument axis=1.
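A short runnable sketch of column slicing with loc and xs, mirroring the df3 setup above (the frame is rebuilt here so the block is self-contained):

```python
import numpy as np
import pandas as pd

# Rebuild df3 from the Bonus Question setup.
np.random.seed(0)
mux3 = pd.MultiIndex.from_product([
    list('ABCD'), list('efgh')
], names=['one', 'two'])
df3 = pd.DataFrame(np.random.choice(10, (3, len(mux3))), columns=mux3)

# loc with IndexSlice: all rows, columns where level 'two' is 'e' or 'f'.
sub = df3.loc[:, pd.IndexSlice[:, ['e', 'f']]]
print(sub.shape)    # 8 of the 16 columns survive

# xs across columns: pass axis=1 and name the level to slice on.
a_cols = df3.xs('A', axis=1, level='one')
print(a_cols.shape)
```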
You can access the column level values directly using df.columns.get_level_values. You will then need to do something like ``` df.loc[:, {condition}] ``` Where {condition} represents some condition built using columns.get_level_values. To use query, your only option is to transpose, query on the index, and transpose again: ``` df3.T.query(...).T ``` Not recommended, use one of the other 3 options.", "best_answers_score":0.8, "library_name":"pandas", "question_url":"https:\/\/stackoverflow.com\/questions\/53927460\/select-rows-in-pandas-multiindex-dataframe", "best_answers_votes":391, "question_length":3084, "response_length":14599 }, { "question":"What is the difference between join and merge in Pandas? Suppose I have two DataFrames like so: ``` left = pd.DataFrame({'key1': ['foo', 'bar'], 'lval': [1, 2]}) right = pd.DataFrame({'key2': ['foo', 'bar'], 'rval': [4, 5]}) ``` I want to merge them, so I try something like this: ``` pd.merge(left, right, left_on='key1', right_on='key2') ``` And I'm happy ``` key1 lval key2 rval 0 foo 1 foo 4 1 bar 2 bar 5 ``` But I'm trying to use the join method, which I've been lead to believe is pretty similar. ``` left.join(right, on=['key1', 'key2']) ``` And I get this: ``` \/\/anaconda\/lib\/python2.7\/site-packages\/pandas\/tools\/merge.pyc in _validate_specification(self) 406 if self.right_index: 407 if not ((len(self.left_on) == self.right.index.nlevels)): --> 408 raise AssertionError() 409 self.right_on = [None] * n 410 elif self.right_on is not None: AssertionError: ``` What am I missing?", "response":"pandas.merge() is the underlying function used for all merge\/join behavior. DataFrames provide the pandas.DataFrame.merge() and pandas.DataFrame.join() methods as a convenient way to access the capabilities of pandas.merge(). For example, df1.merge(right=df2, ...) is equivalent to pandas.merge(left=df1, right=df2, ...). 
These are the main differences between df.join() and df.merge(): lookup on right table: df1.join(df2) always joins via the index of df2, but df1.merge(df2) can join to one or more columns of df2 (default) or to the index of df2 (with right_index=True). lookup on left table: by default, df1.join(df2) uses the index of df1 and df1.merge(df2) uses column(s) of df1. That can be overridden by specifying df1.join(df2, on=key_or_keys) or df1.merge(df2, left_index=True). left vs inner join: df1.join(df2) does a left join by default (keeps all rows of df1), but df.merge does an inner join by default (returns only matching rows of df1 and df2). So, the generic approach is to use pandas.merge(df1, df2) or df1.merge(df2). But for a number of common situations (keeping all rows of df1 and joining to an index in df2), you can save some typing by using df1.join(df2) instead. Some notes on these issues from the documentation at http:\/\/pandas.pydata.org\/pandas-docs\/stable\/merging.html#database-style-dataframe-joining-merging: merge is a function in the pandas namespace, and it is also available as a DataFrame instance method, with the calling DataFrame being implicitly considered the left object in the join. The related DataFrame.join method, uses merge internally for the index-on-index and index-on-column(s) joins, but joins on indexes by default rather than trying to join on common columns (the default behavior for merge). If you are joining on index, you may wish to use DataFrame.join to save yourself some typing. ... 
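To make the default-behavior differences above concrete, here is a minimal sketch (toy frames, not from the question):

```python
import pandas as pd

# Toy frames: 'right' is indexed by the join key on purpose.
left = pd.DataFrame({'key': ['foo', 'bar', 'baz'], 'lval': [1, 2, 3]})
right = pd.DataFrame({'rval': [4, 5]}, index=['foo', 'bar'])

# join: left join by default, right side always matched on its index.
joined = left.join(right, on='key')
print(len(joined))            # all 3 left rows kept; 'baz' gets NaN

# merge: inner join by default, so the unmatched 'baz' row drops out.
merged = left.merge(right, left_on='key', right_index=True)
print(len(merged))
```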
These two function calls are completely equivalent: ``` left.join(right, on=key_or_keys) pd.merge(left, right, left_on=key_or_keys, right_index=True, how='left', sort=False) ```", "best_answers_score":0.8, "library_name":"pandas", "question_url":"https:\/\/stackoverflow.com\/questions\/22676081\/what-is-the-difference-between-join-and-merge-in-pandas", "best_answers_votes":532, "question_length":888, "response_length":2029 }, { "question":"How to determine whether a Pandas Column contains a particular value I am trying to determine whether there is an entry in a Pandas column that has a particular value. I tried to do this with if x in df['id']. I thought this was working, except when I fed it a value that I knew was not in the column 43 in df['id'] it still returned True. When I subset to a data frame only containing entries matching the missing id df[df['id'] == 43] there are, obviously, no entries in it. How do I determine if a column in a Pandas data frame contains a particular value and why doesn't my current method work?
(FYI, I have the same problem when I use the implementation in this answer to a similar question).", "response":"in of a Series checks whether the value is in the index: ``` In [11]: s = pd.Series(list('abc')) In [12]: s Out[12]: 0 a 1 b 2 c dtype: object In [13]: 1 in s Out[13]: True In [14]: 'a' in s Out[14]: False ``` One option is to see if it's in unique values: ``` In [21]: s.unique() Out[21]: array(['a', 'b', 'c'], dtype=object) In [22]: 'a' in s.unique() Out[22]: True ``` or a python set: ``` In [23]: set(s) Out[23]: {'a', 'b', 'c'} In [24]: 'a' in set(s) Out[24]: True ``` As pointed out by @DSM, it may be more efficient (especially if you're just doing this for one value) to just use in directly on the values: ``` In [31]: s.values Out[31]: array(['a', 'b', 'c'], dtype=object) In [32]: 'a' in s.values Out[32]: True ```", "best_answers_score":0.8, "library_name":"pandas", "question_url":"https:\/\/stackoverflow.com\/questions\/21319929\/how-to-determine-whether-a-pandas-column-contains-a-particular-value", "best_answers_votes":371, "question_length":697, "response_length":726 }, { "question":"How to find the installed pandas version I am having trouble with some of pandas functionalities. 
How do I check which version is installed?", "response":"Check pandas.__version__: ``` In [76]: import pandas as pd In [77]: pd.__version__ Out[77]: '0.12.0-933-g281dc4e' ``` Pandas also provides a utility function, pd.show_versions(), which reports the version of its dependencies as well: ``` In [53]: pd.show_versions(as_json=False) INSTALLED VERSIONS ------------------ commit: None python: 2.7.6.final.0 python-bits: 64 OS: Linux OS-release: 3.13.0-45-generic machine: x86_64 processor: x86_64 byteorder: little LC_ALL: None LANG: en_US.UTF-8 pandas: 0.15.2-113-g5531341 nose: 1.3.1 Cython: 0.21.1 numpy: 1.8.2 scipy: 0.14.0.dev-371b4ff statsmodels: 0.6.0.dev-a738b4f IPython: 2.0.0-dev sphinx: 1.2.2 patsy: 0.3.0 dateutil: 1.5 pytz: 2012c bottleneck: None tables: 3.1.1 numexpr: 2.2.2 matplotlib: 1.4.2 openpyxl: None xlrd: 0.9.3 xlwt: 0.7.5 xlsxwriter: None lxml: 3.3.3 bs4: 4.3.2 html5lib: 0.999 httplib2: 0.8 apiclient: None rpy2: 2.5.5 sqlalchemy: 0.9.8 pymysql: None psycopg2: 2.4.5 (dt dec mx pq3 ext) ```", "best_answers_score":0.8, "library_name":"pandas", "question_url":"https:\/\/stackoverflow.com\/questions\/20612645\/how-to-find-the-installed-pandas-version", "best_answers_votes":477, "question_length":145, "response_length":960 }, { "question":"Appending to an empty DataFrame in Pandas? Is it possible to append to a data frame that doesn't contain any indices or columns? I have tried to do this, but keep getting an empty dataframe at the end. e.g.
``` import pandas as pd df = pd.DataFrame() data = ['some kind of data here' --> I have checked the type already, and it is a dataframe] df.append(data) ``` The result looks like this: ``` Empty DataFrame Columns: [] Index: [] ```", "response":"This should work: ``` >>> df = pd.DataFrame() >>> data = pd.DataFrame({\"A\": range(3)}) >>> df = df.append(data) >>> df A 0 0 1 1 2 2 ``` Since the append doesn't happen in-place, you'll have to store the output if you want it: ``` >>> df = pd.DataFrame() >>> data = pd.DataFrame({\"A\": range(3)}) >>> df.append(data) # without storing >>> df Empty DataFrame Columns: [] Index: [] >>> df = df.append(data) >>> df A 0 0 1 1 2 2 ```", "best_answers_score":0.8, "library_name":"pandas", "question_url":"https:\/\/stackoverflow.com\/questions\/16597265\/appending-to-an-empty-dataframe-in-pandas", "best_answers_votes":546, "question_length":444, "response_length":431 }, { "question":"How to take column-slices of dataframe in pandas I load some machine learning data from a CSV file. The first 2 columns are observations and the remaining columns are features. Currently, I do the following: ``` data = pandas.read_csv('mydata.csv') ``` which gives something like: ``` data = pandas.DataFrame(np.random.rand(10,5), columns = list('abcde')) ``` I'd like to slice this dataframe into two dataframes: one containing the columns a and b and one containing the columns c, d and e. It is not possible to write something like ``` observations = data[:'c'] features = data['c':] ``` I'm not sure what the best method is. Do I need a pd.Panel? By the way, I find dataframe indexing pretty inconsistent: data['a'] is permitted, but data[0] is not. On the other hand, data['a':] is not permitted but data[0:] is. Is there a practical reason for this? This is really confusing if columns are indexed by Int, given that data[0] != data[0:1]", "response":"2017 Answer - pandas 0.20: .ix is deprecated.
Use .loc See the deprecation in the docs .loc uses label-based indexing to select both rows and columns, the labels being the values of the index or the columns. Slicing with .loc includes the last element. Let's assume we have a DataFrame with the following columns: foo, bar, quz, ant, cat, sat, dat. ``` # selects all rows and all columns beginning at 'foo' up to and including 'sat' df.loc[:, 'foo':'sat'] # foo bar quz ant cat sat ``` .loc accepts the same slice notation that Python lists do for both rows and columns. The slice notation is start:stop:step ``` # slice from 'foo' to 'cat' by every 2nd column df.loc[:, 'foo':'cat':2] # foo quz cat # slice from the beginning to 'bar' df.loc[:, :'bar'] # foo bar # slice from 'quz' to the end by 3 df.loc[:, 'quz'::3] # quz sat # attempt from 'sat' to 'bar' df.loc[:, 'sat':'bar'] # no columns returned # slice from 'sat' to 'bar' df.loc[:, 'sat':'bar':-1] sat cat ant quz bar # slice notation is syntactic sugar for the slice function # slice from 'quz' to the end by 2 with slice function df.loc[:, slice('quz',None, 2)] # quz cat dat # select specific columns with a list # select columns foo, bar and dat df.loc[:, ['foo','bar','dat']] # foo bar dat ``` You can slice by rows and columns. For instance, if you have 5 rows with labels v, w, x, y, z ``` # slice from 'w' to 'y' and 'foo' to 'ant' by 3 df.loc['w':'y', 'foo':'ant':3] # foo ant # w # x # y ```", "best_answers_score":0.8, "library_name":"pandas", "question_url":"https:\/\/stackoverflow.com\/questions\/10665889\/how-to-take-column-slices-of-dataframe-in-pandas", "best_answers_votes":324, "question_length":941, "response_length":1458 }, { "question":"How do I get a list of all the duplicate items using pandas in python? I have a list of items that likely has some export issues. I would like to get a list of the duplicate items so I can manually compare them. When I try to use pandas duplicated method, it only returns the first duplicate.
Is there a way to get all of the duplicates and not just the first one? A small subsection of my dataset looks like this: ``` ID,ENROLLMENT_DATE,TRAINER_MANAGING,TRAINER_OPERATOR,FIRST_VISIT_DATE 1536D,12-Feb-12,\"06DA1B3-Lebanon NH\",,15-Feb-12 F15D,18-May-12,\"06405B2-Lebanon NH\",,25-Jul-12 8096,8-Aug-12,\"0643D38-Hanover NH\",\"0643D38-Hanover NH\",25-Jun-12 A036,1-Apr-12,\"06CB8CF-Hanover NH\",\"06CB8CF-Hanover NH\",9-Aug-12 8944,19-Feb-12,\"06D26AD-Hanover NH\",,4-Feb-12 1004E,8-Jun-12,\"06388B2-Lebanon NH\",,24-Dec-11 11795,3-Jul-12,\"0649597-White River VT\",\"0649597-White River VT\",30-Mar-12 30D7,11-Nov-12,\"06D95A3-Hanover NH\",\"06D95A3-Hanover NH\",30-Nov-11 3AE2,21-Feb-12,\"06405B2-Lebanon NH\",,26-Oct-12 B0FE,17-Feb-12,\"06D1B9D-Hartland VT\",,16-Feb-12 127A1,11-Dec-11,\"064456E-Hanover NH\",\"064456E-Hanover NH\",11-Nov-12 161FF,20-Feb-12,\"0643D38-Hanover NH\",\"0643D38-Hanover NH\",3-Jul-12 A036,30-Nov-11,\"063B208-Randolph VT\",\"063B208-Randolph VT\", 475B,25-Sep-12,\"06D26AD-Hanover NH\",,5-Nov-12 151A3,7-Mar-12,\"06388B2-Lebanon NH\",,16-Nov-12 CA62,3-Jan-12,,, D31B,18-Dec-11,\"06405B2-Lebanon NH\",,9-Jan-12 20F5,8-Jul-12,\"0669C50-Randolph VT\",,3-Feb-12 8096,19-Dec-11,\"0649597-White River VT\",\"0649597-White River VT\",9-Apr-12 14E48,1-Aug-12,\"06D3206-Hanover NH\",, 177F8,20-Aug-12,\"063B208-Randolph VT\",\"063B208-Randolph VT\",5-May-12 553E,11-Oct-12,\"06D95A3-Hanover NH\",\"06D95A3-Hanover NH\",8-Mar-12 12D5F,18-Jul-12,\"0649597-White River VT\",\"0649597-White River VT\",2-Nov-12 C6DC,13-Apr-12,\"06388B2-Lebanon NH\",, 11795,27-Feb-12,\"0643D38-Hanover NH\",\"0643D38-Hanover NH\",19-Jun-12 17B43,11-Aug-12,,,22-Oct-12 A036,11-Aug-12,\"06D3206-Hanover NH\",,19-Jun-12 ``` My code looks like this currently: ``` df_bigdata_duplicates = df_bigdata[df_bigdata.duplicated(cols='ID')] ``` There are a couple of duplicate items. But, when I use the above code, I only get the first item.
In the API reference, I see how I can get the last item, but I would like to have all of them so I can visually inspect them to see why I am getting the discrepancy. So, in this example I would like to get all three A036 entries and both 11795 entries and any other duplicated entries, instead of the just first one. Any help is most appreciated.", "response":"Method #1: print all rows where the ID is one of the IDs in duplicated: ``` >>> import pandas as pd >>> df = pd.read_csv(\"dup.csv\") >>> ids = df[\"ID\"] >>> df[ids.isin(ids[ids.duplicated()])].sort_values(\"ID\") ID ENROLLMENT_DATE TRAINER_MANAGING TRAINER_OPERATOR FIRST_VISIT_DATE 24 11795 27-Feb-12 0643D38-Hanover NH 0643D38-Hanover NH 19-Jun-12 6 11795 3-Jul-12 0649597-White River VT 0649597-White River VT 30-Mar-12 18 8096 19-Dec-11 0649597-White River VT 0649597-White River VT 9-Apr-12 2 8096 8-Aug-12 0643D38-Hanover NH 0643D38-Hanover NH 25-Jun-12 12 A036 30-Nov-11 063B208-Randolph VT 063B208-Randolph VT NaN 3 A036 1-Apr-12 06CB8CF-Hanover NH 06CB8CF-Hanover NH 9-Aug-12 26 A036 11-Aug-12 06D3206-Hanover NH NaN 19-Jun-12 ``` but I couldn't think of a nice way to prevent repeating ids so many times. I prefer method #2: groupby on the ID. 
``` >>> pd.concat(g for _, g in df.groupby(\"ID\") if len(g) > 1) ID ENROLLMENT_DATE TRAINER_MANAGING TRAINER_OPERATOR FIRST_VISIT_DATE 6 11795 3-Jul-12 0649597-White River VT 0649597-White River VT 30-Mar-12 24 11795 27-Feb-12 0643D38-Hanover NH 0643D38-Hanover NH 19-Jun-12 2 8096 8-Aug-12 0643D38-Hanover NH 0643D38-Hanover NH 25-Jun-12 18 8096 19-Dec-11 0649597-White River VT 0649597-White River VT 9-Apr-12 3 A036 1-Apr-12 06CB8CF-Hanover NH 06CB8CF-Hanover NH 9-Aug-12 12 A036 30-Nov-11 063B208-Randolph VT 063B208-Randolph VT NaN 26 A036 11-Aug-12 06D3206-Hanover NH NaN 19-Jun-12 ```", "best_answers_score":0.8, "library_name":"pandas", "question_url":"https:\/\/stackoverflow.com\/questions\/14657241\/how-do-i-get-a-list-of-all-the-duplicate-items-using-pandas-in-python", "best_answers_votes":306, "question_length":2505, "response_length":1440 }, { "question":"How to iterate over columns of a pandas dataframe I have this code using Pandas in Python: ``` all_data = {} for ticker in ['FIUIX', 'FSAIX', 'FSAVX', 'FSTMX']: all_data[ticker] = web.get_data_yahoo(ticker, '1\/1\/2010', '1\/1\/2015') prices = DataFrame({tic: data['Adj Close'] for tic, data in all_data.iteritems()}) returns = prices.pct_change() ``` I know I can run a regression like this: ``` regs = sm.OLS(returns.FIUIX,returns.FSTMX).fit() ``` but how can I do this for each column in the dataframe? Specifically, how can I iterate over columns, in order to run the regression on each? Specifically, I want to regress each other ticker symbol (FIUIX, FSAIX and FSAVX) on FSTMX, and store the residuals for each regression. I've tried various versions of the following, but nothing I've tried gives the desired result: ``` resids = {} for k in returns.keys(): reg = sm.OLS(returns[k],returns.FSTMX).fit() resids[k] = reg.resid ``` Is there something wrong with the returns[k] part of the code? How can I use the k value to access a column? 
Or else is there a simpler approach?", "response":"Old answer: ``` for column in df: print(df[column]) ``` The previous answer still works, but was added around the time of pandas 0.16.0. Better versions are available. Now you can do: ``` for series_name, series in df.items(): print(series_name) print(series) ```", "best_answers_score":0.8, "library_name":"pandas", "question_url":"https:\/\/stackoverflow.com\/questions\/28218698\/how-to-iterate-over-columns-of-a-pandas-dataframe", "best_answers_votes":587, "question_length":1077, "response_length":263 }, { "question":"Pandas DataFrame Groupby two columns and get counts I have a pandas dataframe in the following format: ```py df = pd.DataFrame([ [1.1, 1.1, 1.1, 2.6, 2.5, 3.4,2.6,2.6,3.4,3.4,2.6,1.1,1.1,3.3], list('AAABBBBABCBDDD'), [1.1, 1.7, 2.5, 2.6, 3.3, 3.8,4.0,4.2,4.3,4.5,4.6,4.7,4.7,4.8], ['x\/y\/z','x\/y','x\/y\/z\/n','x\/u','x','x\/u\/v','x\/y\/z','x','x\/u\/v\/b','-','x\/y','x\/y\/z','x','x\/u\/v\/w'], ['1','3','3','2','4','2','5','3','6','3','5','1','1','1'] ]).T df.columns = ['col1','col2','col3','col4','col5'] ``` df: ```none col1 col2 col3 col4 col5 0 1.1 A 1.1 x\/y\/z 1 1 1.1 A 1.7 x\/y 3 2 1.1 A 2.5 x\/y\/z\/n 3 3 2.6 B 2.6 x\/u 2 4 2.5 B 3.3 x 4 5 3.4 B 3.8 x\/u\/v 2 6 2.6 B 4 x\/y\/z 5 7 2.6 A 4.2 x 3 8 3.4 B 4.3 x\/u\/v\/b 6 9 3.4 C 4.5 - 3 10 2.6 B 4.6 x\/y 5 11 1.1 D 4.7 x\/y\/z 1 12 1.1 D 4.7 x 1 13 3.3 D 4.8 x\/u\/v\/w 1 ``` I want to get the count by each row like following. Expected Output: ```none col5 col2 count 1 A 1 D 3 2 B 2 etc... ``` How to get my expected output? 
And how do I find the largest count for each 'col2' value?", "response":"You are looking for size: ``` In [11]: df.groupby(['col5', 'col2']).size() Out[11]: col5 col2 1 A 1 D 3 2 B 2 3 A 3 C 1 4 B 1 5 B 2 6 B 1 dtype: int64 ``` To get the same answer as waitingkuo (the \"second question\"), but slightly cleaner, is to groupby the level: ``` In [12]: df.groupby(['col5', 'col2']).size().groupby(level=1).max() Out[12]: col2 A 3 B 2 C 1 D 3 dtype: int64 ```", "best_answers_score":0.8, "library_name":"pandas", "question_url":"https:\/\/stackoverflow.com\/questions\/17679089\/pandas-dataframe-groupby-two-columns-and-get-counts", "best_answers_votes":280, "question_length":1010, "response_length":382 }, { "question":"Pandas percentage of total with groupby This is obviously simple, but as a numpy newbie I'm getting stuck. I have a CSV file that contains 3 columns, the State, the Office ID, and the Sales for that office. I want to calculate the percentage of sales per office in a given state (total of all percentages in each state is 100%). ``` df = pd.DataFrame({'state': ['CA', 'WA', 'CO', 'AZ'] * 3, 'office_id': list(range(1, 7)) * 2, 'sales': [np.random.randint(100000, 999999) for _ in range(12)]}) df.groupby(['state', 'office_id']).agg({'sales': 'sum'}) ``` This returns: ``` sales state office_id AZ 2 839507 4 373917 6 347225 CA 1 798585 3 890850 5 454423 CO 1 819975 3 202969 5 614011 WA 2 163942 4 369858 6 959285 ``` I can't seem to figure out how to \"reach up\" to the state level of the groupby to total up the sales for the entire state to calculate the fraction.", "response":"Update 2022-03 This answer by caner using transform looks much better than my original answer! ``` df['sales'] \/ df.groupby('state')['sales'].transform('sum') ``` Thanks to this comment by Paul Rougieux for surfacing it.
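A runnable sketch of the transform approach, re-using the setup from the question (seeded so the numbers are reproducible):

```python
import numpy as np
import pandas as pd

# Same construction as in the question, seeded for stable results.
np.random.seed(0)
df = pd.DataFrame({'state': ['CA', 'WA', 'CO', 'AZ'] * 3,
                   'office_id': list(range(1, 7)) * 2,
                   'sales': [np.random.randint(100000, 999999)
                             for _ in range(12)]})

# transform('sum') broadcasts each state's total back onto its own rows,
# so the division is aligned row-by-row without any merge.
df['pct'] = 100 * df['sales'] / df.groupby('state')['sales'].transform('sum')

# Sanity check: each state's percentages sum to 100.
print(df.groupby('state')['pct'].sum())
```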
Original Answer (2014) Paul H's answer is right that you will have to make a second groupby object, but you can calculate the percentage in a simpler way -- just groupby the state_office and divide the sales column by its sum. Copying the beginning of Paul H's answer: ``` # From Paul H import numpy as np import pandas as pd np.random.seed(0) df = pd.DataFrame({'state': ['CA', 'WA', 'CO', 'AZ'] * 3, 'office_id': list(range(1, 7)) * 2, 'sales': [np.random.randint(100000, 999999) for _ in range(12)]}) state_office = df.groupby(['state', 'office_id']).agg({'sales': 'sum'}) # Change: groupby state_office and divide by sum state_pcts = state_office.groupby(level=0).apply(lambda x: 100 * x \/ float(x.sum())) ``` Returns: ``` sales state office_id AZ 2 16.981365 4 19.250033 6 63.768601 CA 1 19.331879 3 33.858747 5 46.809373 CO 1 36.851857 3 19.874290 5 43.273852 WA 2 34.707233 4 35.511259 6 29.781508 ```", "best_answers_score":0.8, "library_name":"pandas", "question_url":"https:\/\/stackoverflow.com\/questions\/23377108\/pandas-percentage-of-total-with-groupby", "best_answers_votes":395, "question_length":865, "response_length":1129 }, { "question":"pandas DataFrame: replace nan values with average of columns I've got a pandas DataFrame filled mostly with real numbers, but there is a few nan values in it as well. How can I replace the nans with averages of columns where they are? 
This question is very similar to this one: numpy array: replace nan values with average of columns but, unfortunately, the solution given there doesn't work for a pandas DataFrame.", "response":"You can simply use DataFrame.fillna to fill the nan's directly: ``` In [27]: df Out[27]: A B C 0 -0.166919 0.979728 -0.632955 1 -0.297953 -0.912674 -1.365463 2 -0.120211 -0.540679 -0.680481 3 NaN -2.027325 1.533582 4 NaN NaN 0.461821 5 -0.788073 NaN NaN 6 -0.916080 -0.612343 NaN 7 -0.887858 1.033826 NaN 8 1.948430 1.025011 -2.982224 9 0.019698 -0.795876 -0.046431 In [28]: df.mean() Out[28]: A -0.151121 B -0.231291 C -0.530307 dtype: float64 In [29]: df.fillna(df.mean()) Out[29]: A B C 0 -0.166919 0.979728 -0.632955 1 -0.297953 -0.912674 -1.365463 2 -0.120211 -0.540679 -0.680481 3 -0.151121 -2.027325 1.533582 4 -0.151121 -0.231291 0.461821 5 -0.788073 -0.231291 -0.530307 6 -0.916080 -0.612343 -0.530307 7 -0.887858 1.033826 -0.530307 8 1.948430 1.025011 -2.982224 9 0.019698 -0.795876 -0.046431 ``` The docstring of fillna says that value should be a scalar or a dict; however, it seems to work with a Series as well. If you want to pass a dict, you could use df.mean().to_dict().", "best_answers_score":0.8, "library_name":"pandas", "question_url":"https:\/\/stackoverflow.com\/questions\/18689823\/pandas-dataframe-replace-nan-values-with-average-of-columns", "best_answers_votes":414, "question_length":415, "response_length":988 }, { "question":"How to add multiple columns to pandas dataframe in one assignment I'm trying to figure out how to add multiple columns to a pandas DataFrame simultaneously. I would like to do this in one step rather than multiple repeated steps. ``` import pandas as pd data = {'col_1': [0, 1, 2, 3], 'col_2': [4, 5, 6, 7]} df = pd.DataFrame(data) ``` I thought this would work here... ``` df[['column_new_1', 'column_new_2', 'column_new_3']] = [np.nan, 'dogs', 3] ```", "response":"I would have expected your syntax to work too.
The problem arises because when you create new columns with the column-list syntax (df[[new1, new2]] = ...), pandas requires that the right hand side be a DataFrame (note that it doesn't actually matter if the columns of the DataFrame have the same names as the columns you are creating). Your syntax works fine for assigning scalar values to existing columns, and pandas is also happy to assign scalar values to a new column using the single-column syntax (df[new1] = ...). So the solution is either to convert this into several single-column assignments, or create a suitable DataFrame for the right-hand side. Here are several approaches that will work: ``` import pandas as pd import numpy as np df = pd.DataFrame({ 'col_1': [0, 1, 2, 3], 'col_2': [4, 5, 6, 7] }) ``` Then one of the following: 1) Three assignments in one, using iterable unpacking ``` df['column_new_1'], df['column_new_2'], df['column_new_3'] = np.nan, 'dogs', 3 ``` 2) Use DataFrame() to expand a single row to match the index ``` df[['column_new_1', 'column_new_2', 'column_new_3']] = pd.DataFrame([[np.nan, 'dogs', 3]], index=df.index) ``` 3) Combine with a temporary DataFrame using pd.concat ``` df = pd.concat( [ df, pd.DataFrame( [[np.nan, 'dogs', 3]], index=df.index, columns=['column_new_1', 'column_new_2', 'column_new_3'] ) ], axis=1 ) ``` 4) Combine with a temporary DataFrame using .join This is similar to 3, but may be less efficient. ``` df = df.join(pd.DataFrame( [[np.nan, 'dogs', 3]], index=df.index, columns=['column_new_1', 'column_new_2', 'column_new_3'] )) ``` 5) Use a dictionary instead of the lists used in 3 and 4 This is a more \"natural\" way to create the temporary DataFrame than the previous two. Note that in Python 3.5 or earlier, the new columns will be sorted alphabetically. ``` df = df.join(pd.DataFrame( { 'column_new_1': np.nan, 'column_new_2': 'dogs', 'column_new_3': 3 }, index=df.index )) ``` 6) Use .assign() with multiple column arguments This may be the winner in Python 3.6+. 
But like the previous one, the new columns will be sorted alphabetically in earlier versions of Python. ``` df = df.assign(column_new_1=np.nan, column_new_2='dogs', column_new_3=3) ``` 7) Create new columns, then assign all values at once Based on this answer. This is interesting, but I don't know when it would be worth the trouble. ``` new_cols = ['column_new_1', 'column_new_2', 'column_new_3'] new_vals = [np.nan, 'dogs', 3] df = df.reindex(columns=df.columns.tolist() + new_cols) # add empty cols df[new_cols] = new_vals # multi-column assignment works for existing cols ``` 8) Three separate assignments In the end, it's hard to beat this. ``` df['column_new_1'] = np.nan df['column_new_2'] = 'dogs' df['column_new_3'] = 3 ``` Note: many of these options have already been covered in other questions: Add multiple columns to DataFrame and set them equal to an existing column Is it possible to add several columns at once to a pandas DataFrame? Add multiple empty columns to pandas DataFrame", "best_answers_score":0.8, "library_name":"pandas", "question_url":"https:\/\/stackoverflow.com\/questions\/39050539\/how-to-add-multiple-columns-to-pandas-dataframe-in-one-assignment", "best_answers_votes":412, "question_length":452, "response_length":3039 }, { "question":"How do I convert a Pandas series or index to a NumPy array? [duplicate] This question already has answers here: Convert Pandas dataframe to NumPy array (17 answers) Closed 5 years ago. 
How can I get the index or column of a DataFrame as a NumPy array or Python list?", "response":"To get a NumPy array, you should use the values attribute: ``` In [1]: df = pd.DataFrame({'A': [1, 2, 3], 'B': [4, 5, 6]}, index=['a', 'b', 'c']); df A B a 1 4 b 2 5 c 3 6 In [2]: df.index.values Out[2]: array(['a', 'b', 'c'], dtype=object) ``` This accesses how the data is already stored, so there isn't any need for a conversion. Note: This attribute is also available for many other pandas objects. ``` In [3]: df['A'].values Out[3]: array([1, 2, 3]) ``` To get the index as a list, call tolist: ``` In [4]: df.index.tolist() Out[4]: ['a', 'b', 'c'] ``` And similarly for columns.", "best_answers_score":0.8, "library_name":"pandas", "question_url":"https:\/\/stackoverflow.com\/questions\/17241004\/how-do-i-convert-a-pandas-series-or-index-to-a-numpy-array", "best_answers_votes":383, "question_length":396, "response_length":594 }, { "question":"Take multiple lists into dataframe How do I take multiple lists and put them as different columns in a python dataframe? I tried this solution but had some trouble.
Attempt 1: Have three lists, and zip them together and use that res = zip(lst1,lst2,lst3) Yields just one column Attempt 2: ``` percentile_list = pd.DataFrame({'lst1Tite' : [lst1], 'lst2Tite' : [lst2], 'lst3Tite' : [lst3] }, columns=['lst1Tite','lst1Tite', 'lst1Tite']) ``` yields either one row by 3 columns (the way above) or if I transpose it is 3 rows and 1 column How do I get a 100 row (length of each independent list) by 3 column (three lists) pandas dataframe?", "response":"I think you're almost there, try removing the extra square brackets around the lst's (also, you don't need to specify the column names when you're creating a dataframe from a dict like this): ``` import pandas as pd lst1 = range(100) lst2 = range(100) lst3 = range(100) percentile_list = pd.DataFrame( {'lst1Title': lst1, 'lst2Title': lst2, 'lst3Title': lst3 }) percentile_list lst1Title lst2Title lst3Title 0 0 0 0 1 1 1 1 2 2 2 2 3 3 3 3 4 4 4 4 5 5 5 5 6 6 6 6 ... ``` If you need a more performant solution you can use np.column_stack rather than zip as in your first attempt; this has around a 2x speedup on the example here, however it comes at a bit of a cost to readability in my opinion: ``` import numpy as np percentile_list = pd.DataFrame(np.column_stack([lst1, lst2, lst3]), columns=['lst1Title', 'lst2Title', 'lst3Title']) ```", "best_answers_score":0.8, "library_name":"pandas", "question_url":"https:\/\/stackoverflow.com\/questions\/30522724\/take-multiple-lists-into-dataframe", "best_answers_votes":513, "question_length":634, "response_length":834 }, { "question":"Load data from txt with pandas I am loading a txt file containing a mix of float and string data. I want to store them in an array where I can access each element.
Now I am just doing ```py import pandas as pd data = pd.read_csv('output_list.txt', header = None) print data ``` Each line in the input file looks like the following: ```none 1 0 2000.0 70.2836942112 1347.28369421 \/file_address.txt ``` Now the data are imported as a single column. How can I divide it, so as to store different elements separately (so I can call data[i,j])? And how can I define a header?", "response":"You can use: ``` data = pd.read_csv('output_list.txt', sep=\" \", header=None) data.columns = [\"a\", \"b\", \"c\", \"etc.\"] ``` Add sep=\" \" to your code, with a blank space between the quotes, so pandas can detect the spaces between values and split them into columns. Assigning to data.columns then names those columns.", "best_answers_score":0.8, "library_name":"pandas", "question_url":"https:\/\/stackoverflow.com\/questions\/21546739\/load-data-from-txt-with-pandas", "best_answers_votes":361, "question_length":566, "response_length":292 }, { "question":"Random row selection in Pandas dataframe Is there a way to select random rows from a DataFrame in Pandas? In R, using the car package, there is a useful function some(x, n) which is similar to head but selects, in this example, 10 rows at random from x. I have also looked at the slicing documentation and there seems to be nothing equivalent. Update: now using version 0.20, there is a sample method. ``` df.sample(n) ```", "response":"With pandas version 0.16.1 and up, there is now a DataFrame.sample method built-in: ``` import numpy import pandas df = pandas.DataFrame(numpy.random.random(100)) # Randomly sample 70% of your dataframe df_percent = df.sample(frac=0.7) # Randomly sample 7 elements from your dataframe df_elements = df.sample(n=7) ``` For either approach above, you can get the rest of the rows by doing: ``` df_rest = df.loc[~df.index.isin(df_percent.index)] ``` Per Pedram's comment, if you would like to get reproducible samples, pass the random_state parameter.
``` df_percent = df.sample(frac=0.7, random_state=42) ```", "best_answers_score":0.8, "library_name":"pandas", "question_url":"https:\/\/stackoverflow.com\/questions\/15923826\/random-row-selection-in-pandas-dataframe", "best_answers_votes":421, "question_length":419, "response_length":597 }, { "question":"How to add header row to a pandas DataFrame I am reading a csv file into pandas. This csv file consists of four columns and some rows, but does not have a header row, which I want to add. I have been trying the following: ```py Cov = pd.read_csv(\"path\/to\/file.txt\", sep='\\t') Frame = pd.DataFrame([Cov], columns = [\"Sequence\", \"Start\", \"End\", \"Coverage\"]) Frame.to_csv(\"path\/to\/file.txt\", sep='\\t') ``` But when I apply the code, I get the following Error: ```none ValueError: Shape of passed values is (1, 1), indices imply (4, 1) ``` What exactly does the error mean? And what would be a clean way in python to add a header row to my csv file\/pandas df?", "response":"You can use names directly in the read_csv names : array-like, default None List of column names to use. If file contains no header row, then you should explicitly pass header=None ``` Cov = pd.read_csv(\"path\/to\/file.txt\", sep='\\t', names=[\"Sequence\", \"Start\", \"End\", \"Coverage\"]) ```", "best_answers_score":0.8, "library_name":"pandas", "question_url":"https:\/\/stackoverflow.com\/questions\/34091877\/how-to-add-header-row-to-a-pandas-dataframe", "best_answers_votes":457, "question_length":655, "response_length":284 }, { "question":"Pandas DataFrame: replace all values in a column, based on condition I have a simple DataFrame like the following: Team First Season Total Games 0 Dallas Cowboys 1960 894 1 Chicago Bears 1920 1357 2 Green Bay Packers 1921 1339 3 Miami Dolphins 1966 792 4 Baltimore Ravens 1996 326 5 San Francisco 49ers 1950 1003 I want to select all values from the First Season column and replace those that are over 1990 by 1. 
In this example, only Baltimore Ravens would have the 1996 replaced by 1 (keeping the rest of the data intact). I have used the following: ``` df.loc[(df['First Season'] > 1990)] = 1 ``` But, it replaces all the values in that row by 1, not just the values in the 'First Season' column. How can I replace just the values from that column?", "response":"You need to select that column: ``` In [41]: df.loc[df['First Season'] > 1990, 'First Season'] = 1 df Out[41]: Team First Season Total Games 0 Dallas Cowboys 1960 894 1 Chicago Bears 1920 1357 2 Green Bay Packers 1921 1339 3 Miami Dolphins 1966 792 4 Baltimore Ravens 1 326 5 San Francisco 49ers 1950 1003 ``` So the syntax here is: ``` df.loc[(here mask is generating the labels to index) , ] ``` You can check the docs and also the 10 minutes to pandas guide, which shows the semantics. EDIT: If you want to generate a boolean indicator then you can just use the boolean condition to generate a boolean Series and cast the dtype to int; this will convert True and False to 1 and 0 respectively: ``` In [43]: df['First Season'] = (df['First Season'] > 1990).astype(int) df Out[43]: Team First Season Total Games 0 Dallas Cowboys 0 894 1 Chicago Bears 0 1357 2 Green Bay Packers 0 1339 3 Miami Dolphins 0 792 4 Baltimore Ravens 1 326 5 San Francisco 49ers 0 1003 ```", "best_answers_score":0.8, "library_name":"pandas", "question_url":"https:\/\/stackoverflow.com\/questions\/31511997\/pandas-dataframe-replace-all-values-in-a-column-based-on-condition", "best_answers_votes":459, "question_length":751, "response_length":955 }, { "question":"pandas get column average\/mean I can't get the average or mean of a column in pandas. I have a dataframe.
Neither of the things I tried below gives me the average of the column weight ``` >>> allDF ID birthyear weight 0 619040 1962 0.1231231 1 600161 1963 0.981742 2 25602033 1963 1.3123124 3 624870 1987 0.94212 ``` The following returns several values, not one: ``` allDF[['weight']].mean(axis=1) ``` So does this: ``` allDF.groupby('weight').mean() ```", "response":"If you only want the mean of the weight column, select the column (which is a Series) and call .mean(): ``` In [479]: df Out[479]: ID birthyear weight 0 619040 1962 0.123123 1 600161 1963 0.981742 2 25602033 1963 1.312312 3 624870 1987 0.942120 In [480]: df.loc[:, 'weight'].mean() Out[480]: 0.83982437500000007 ```", "best_answers_score":0.8, "library_name":"pandas", "question_url":"https:\/\/stackoverflow.com\/questions\/31037298\/pandas-get-column-average-mean", "best_answers_votes":441, "question_length":451, "response_length":315 }, { "question":"How to display pandas DataFrame of floats using a format string for columns? I would like to display a pandas dataframe with a given format using print() and the IPython display(). For example: ``` df = pd.DataFrame([123.4567, 234.5678, 345.6789, 456.7890], index=['foo','bar','baz','quux'], columns=['cost']) print df cost foo 123.4567 bar 234.5678 baz 345.6789 quux 456.7890 ``` I would like to somehow coerce this into printing ``` cost foo $123.46 bar $234.57 baz $345.68 quux $456.79 ``` without having to modify the data itself or create a copy, just change the way it is displayed. How can I do this?", "response":"``` import pandas as pd pd.options.display.float_format = '${:,.2f}'.format df = pd.DataFrame([123.4567, 234.5678, 345.6789, 456.7890], index=['foo','bar','baz','quux'], columns=['cost']) print(df) ``` yields ``` cost foo $123.46 bar $234.57 baz $345.68 quux $456.79 ``` but this only works if you want every float to be formatted with a dollar sign.
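One practical caveat worth noting (a minimal sketch; the format string here is just illustrative): the option is process-wide, so you may want to restore the default rendering afterwards with pd.reset_option:

```python
import pandas as pd

pd.options.display.float_format = '${:,.2f}'.format
df = pd.DataFrame([123.4567, 234.5678], columns=['cost'])
rendered = repr(df)  # every float now renders like $123.46
print(rendered)

# the setting is global, so restore the default rendering when done
pd.reset_option('display.float_format')
```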
Otherwise, if you want dollar formatting for some floats only, then I think you'll have to pre-modify the dataframe (converting those floats to strings): ``` import pandas as pd df = pd.DataFrame([123.4567, 234.5678, 345.6789, 456.7890], index=['foo','bar','baz','quux'], columns=['cost']) df['foo'] = df['cost'] df['cost'] = df['cost'].map('${:,.2f}'.format) print(df) ``` yields ``` cost foo foo $123.46 123.4567 bar $234.57 234.5678 baz $345.68 345.6789 quux $456.79 456.7890 ```", "best_answers_score":0.8, "library_name":"pandas", "question_url":"https:\/\/stackoverflow.com\/questions\/20937538\/how-to-display-pandas-dataframe-of-floats-using-a-format-string-for-columns", "best_answers_votes":471, "question_length":607, "response_length":833 }, { "question":"Search for \"does-not-contain\" on a DataFrame in pandas I've done some searching and can't figure out how to filter a dataframe by ``` df[\"col\"].str.contains(word) ``` however I'm wondering if there is a way to do the reverse: filter a dataframe by that set's compliment. eg: to the effect of ``` !(df[\"col\"].str.contains(word)) ``` Can this be done through a DataFrame method?", "response":"You can use the invert (~) operator (which acts like a not for boolean data): ``` new_df = df[~df[\"col\"].str.contains(word)] ``` where new_df is the copy returned by RHS. contains also accepts a regular expression... 
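For instance, a small self-contained sketch with made-up data (the column values are hypothetical), showing the inverted match with both a plain substring and a regex pattern:

```python
import pandas as pd

df = pd.DataFrame({'col': ['apple pie', 'banana', 'cherry tart', 'plum']})

# keep only the rows whose 'col' does NOT contain the substring
new_df = df[~df['col'].str.contains('pie')]
print(new_df['col'].tolist())  # ['banana', 'cherry tart', 'plum']

# the pattern may also be a regex, e.g. drop anything ending in 'pie' or 'tart'
no_pastry = df[~df['col'].str.contains(r'(?:pie|tart)$')]
print(no_pastry['col'].tolist())  # ['banana', 'plum']
```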
If the above throws a ValueError or TypeError, the reason is likely because you have mixed datatypes, so use na=False: ``` new_df = df[~df[\"col\"].str.contains(word, na=False)] ``` Or, ``` new_df = df[df[\"col\"].str.contains(word) == False] ```", "best_answers_score":0.8, "library_name":"pandas", "question_url":"https:\/\/stackoverflow.com\/questions\/17097643\/search-for-does-not-contain-on-a-dataframe-in-pandas", "best_answers_votes":557, "question_length":376, "response_length":459 }, { "question":"Format \/ Suppress Scientific Notation from Pandas Aggregation Results How can one modify the format for the output from a groupby operation in pandas that produces scientific notation for very large numbers? I know how to do string formatting in python but I'm at a loss when it comes to applying it here. ``` df1.groupby('dept')['data1'].sum() dept value1 1.192433e+08 value2 1.293066e+08 value3 1.077142e+08 ``` This suppresses the scientific notation if I convert to string but now I'm just wondering how to string format and add decimals. ``` sum_sales_dept.astype(str) ```", "response":"Granted, the answer I linked in the comments is not very helpful. You can specify your own string converter like so. ``` In [25]: pd.set_option('display.float_format', lambda x: '%.3f' % x) In [28]: Series(np.random.randn(3))*1000000000 Out[28]: 0 -757322420.605 1 -1436160588.997 2 -1235116117.064 dtype: float64 ``` I'm not sure if that's the preferred way to do this, but it works. 
Converting numbers to strings purely for aesthetic purposes seems like a bad idea, but if you have a good reason, this is one way: ``` In [6]: Series(np.random.randn(3)).apply(lambda x: '%.3f' % x) Out[6]: 0 0.026 1 -0.482 2 -0.694 dtype: object ```", "best_answers_score":0.8, "library_name":"pandas", "question_url":"https:\/\/stackoverflow.com\/questions\/21137150\/format-suppress-scientific-notation-from-pandas-aggregation-results", "best_answers_votes":420, "question_length":577, "response_length":634 }, { "question":"Find column whose name contains a specific string I have a dataframe with column names, and I want to find the one that contains a certain string, but does not exactly match it. I'm searching for 'spike' in column names like 'spike-2', 'hey spike', 'spiked-in' (the 'spike' part is always continuous). I want the column name to be returned as a string or a variable, so I access the column later with df['name'] or df[name] as normal. I've tried to find ways to do this, to no avail. Any tips?", "response":"Just iterate over DataFrame.columns; this is an example in which you will end up with a list of column names that match: ``` import pandas as pd data = {'spike-2': [1,2,3], 'hey spke': [4,5,6], 'spiked-in': [7,8,9], 'no': [10,11,12]} df = pd.DataFrame(data) spike_cols = [col for col in df.columns if 'spike' in col] print(list(df.columns)) print(spike_cols) ``` Output: ``` ['hey spke', 'no', 'spike-2', 'spiked-in'] ['spike-2', 'spiked-in'] ``` Explanation: df.columns returns a list of column names [col for col in df.columns if 'spike' in col] iterates over the list df.columns with the variable col and adds it to the resulting list if col contains 'spike'. This syntax is list comprehension.
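As a brief follow-up sketch (same hypothetical data as above), the resulting list of names can be passed straight back into the indexer, matching the df[name] access the question asks about:

```python
import pandas as pd

data = {'spike-2': [1, 2, 3], 'hey spke': [4, 5, 6],
        'spiked-in': [7, 8, 9], 'no': [10, 11, 12]}
df = pd.DataFrame(data)

# the list comprehension yields the matching names...
spike_cols = [col for col in df.columns if 'spike' in col]

# ...which select the corresponding columns directly
subset = df[spike_cols]
print(list(subset.columns))  # ['spike-2', 'spiked-in']
```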
If you only want the resulting data set with the columns that match you can do this: ``` df2 = df.filter(regex='spike') print(df2) ``` Output: ``` spike-2 spiked-in 0 1 7 1 2 8 2 3 9 ```", "best_answers_score":0.8, "library_name":"pandas", "question_url":"https:\/\/stackoverflow.com\/questions\/21285380\/find-column-whose-name-contains-a-specific-string", "best_answers_votes":427, "question_length":493, "response_length":888 }, { "question":"Replacing blank values (white space) with NaN in pandas I want to find all values in a Pandas dataframe that contain whitespace (any arbitrary amount) and replace those values with NaNs. Any ideas how this can be improved? Basically I want to turn this: ```none A B C 2000-01-01 -0.532681 foo 0 2000-01-02 1.490752 bar 1 2000-01-03 -1.387326 foo 2 2000-01-04 0.814772 baz 2000-01-05 -0.222552 4 2000-01-06 -1.176781 qux ``` Into this: ```none A B C 2000-01-01 -0.532681 foo 0 2000-01-02 1.490752 bar 1 2000-01-03 -1.387326 foo 2 2000-01-04 0.814772 baz NaN 2000-01-05 -0.222552 NaN 4 2000-01-06 -1.176781 qux NaN ``` I've managed to do it with the code below, but man is it ugly. It's not Pythonic and I'm sure it's not the most efficient use of pandas either. I loop through each column and do boolean replacement against a column mask generated by applying a function that does a regex search of each value, matching on whitespace. 
``` for i in df.columns: df[i][df[i].apply(lambda i: True if re.search('^\\s*$', str(i)) else False)]=None ``` It could be optimized a bit by only iterating through fields that could contain empty strings: ``` if df[i].dtype == np.dtype('object') ``` But that's not much of an improvement. And finally, this code sets the target strings to None, which works with Pandas' functions like fillna(), but it would be nice for completeness if I could actually insert a NaN directly instead of None.", "response":"I think df.replace() does the job, since pandas 0.13: ``` import numpy as np import pandas as pd df = pd.DataFrame([ [-0.532681, 'foo', 0], [1.490752, 'bar', 1], [-1.387326, 'foo', 2], [0.814772, 'baz', ' '], [-0.222552, ' ', 4], [-1.176781, 'qux', ' '], ], columns='A B C'.split(), index=pd.date_range('2000-01-01','2000-01-06')) # replace field that's entirely space (or empty) with NaN print(df.replace(r'^\\s*$', np.nan, regex=True)) ``` Produces: ```none A B C 2000-01-01 -0.532681 foo 0 2000-01-02 1.490752 bar 1 2000-01-03 -1.387326 foo 2 2000-01-04 0.814772 baz NaN 2000-01-05 -0.222552 NaN 4 2000-01-06 -1.176781 qux NaN ``` As Temak pointed out, use df.replace(r'^\\s+$', np.nan, regex=True) in case your valid data contains white spaces.", "best_answers_score":0.8, "library_name":"pandas", "question_url":"https:\/\/stackoverflow.com\/questions\/13445241\/replacing-blank-values-white-space-with-nan-in-pandas", "best_answers_votes":357, "question_length":1424, "response_length":710 }, { "question":"Find difference between two data frames I have two data frames df1 and df2, where df2 is a subset of df1. How do I get a new data frame (df3) which is the difference between the two data frames? In other words, a data frame that has all the rows\/columns in df1 that are not in df2?", "response":"By using drop_duplicates ``` pd.concat([df1,df2]).drop_duplicates(keep=False) ``` Update: The above method only works for those data frames that don't already have duplicates themselves.
For example: ``` df1=pd.DataFrame({'A':[1,2,3,3],'B':[2,3,4,4]}) df2=pd.DataFrame({'A':[1],'B':[2]}) ``` It will output as below, which is wrong. Wrong output: ``` pd.concat([df1, df2]).drop_duplicates(keep=False) Out[655]: A B 1 2 3 ``` Correct output: ``` Out[656]: A B 1 2 3 2 3 4 3 3 4 ``` How to achieve that? Method 1: Using isin with tuple ``` df1[~df1.apply(tuple,1).isin(df2.apply(tuple,1))] Out[657]: A B 1 2 3 2 3 4 3 3 4 ``` Method 2: merge with indicator ``` df1.merge(df2,indicator = True, how='left').loc[lambda x : x['_merge']!='both'] Out[421]: A B _merge 1 2 3 left_only 2 3 4 left_only 3 3 4 left_only ```", "best_answers_score":0.8, "library_name":"pandas", "question_url":"https:\/\/stackoverflow.com\/questions\/48647534\/find-difference-between-two-data-frames", "best_answers_votes":368, "question_length":280, "response_length":814 }, { "question":"How to find which columns contain any NaN value in Pandas dataframe Given a pandas dataframe containing possible NaN values scattered here and there: Question: How do I determine which columns contain NaN values?
In particular, can I get a list of the column names containing NaNs?", "response":"UPDATE: using Pandas 0.22.0 Newer Pandas versions have new methods 'DataFrame.isna()' and 'DataFrame.notna()' ``` In [71]: df Out[71]: a b c 0 NaN 7.0 0 1 0.0 NaN 4 2 2.0 NaN 4 3 1.0 7.0 0 4 1.0 3.0 9 5 7.0 4.0 9 6 2.0 6.0 9 7 9.0 6.0 4 8 3.0 0.0 9 9 9.0 0.0 1 In [72]: df.isna().any() Out[72]: a True b True c False dtype: bool ``` as list of columns: ``` In [74]: df.columns[df.isna().any()].tolist() Out[74]: ['a', 'b'] ``` to select those columns (containing at least one NaN value): ``` In [73]: df.loc[:, df.isna().any()] Out[73]: a b 0 NaN 7.0 1 0.0 NaN 2 2.0 NaN 3 1.0 7.0 4 1.0 3.0 5 7.0 4.0 6 2.0 6.0 7 9.0 6.0 8 3.0 0.0 9 9.0 0.0 ``` OLD answer: Try to use isnull(): ``` In [97]: df Out[97]: a b c 0 NaN 7.0 0 1 0.0 NaN 4 2 2.0 NaN 4 3 1.0 7.0 0 4 1.0 3.0 9 5 7.0 4.0 9 6 2.0 6.0 9 7 9.0 6.0 4 8 3.0 0.0 9 9 9.0 0.0 1 In [98]: pd.isnull(df).sum() > 0 Out[98]: a True b True c False dtype: bool ``` or as @root proposed clearer version: ``` In [5]: df.isnull().any() Out[5]: a True b True c False dtype: bool In [7]: df.columns[df.isnull().any()].tolist() Out[7]: ['a', 'b'] ``` to select a subset - all columns containing at least one NaN value: ``` In [31]: df.loc[:, df.isnull().any()] Out[31]: a b 0 NaN 7.0 1 0.0 NaN 2 2.0 NaN 3 1.0 7.0 4 1.0 3.0 5 7.0 4.0 6 2.0 6.0 7 9.0 6.0 8 3.0 0.0 9 9.0 0.0 ```", "best_answers_score":0.8, "library_name":"pandas", "question_url":"https:\/\/stackoverflow.com\/questions\/36226083\/how-to-find-which-columns-contain-any-nan-value-in-pandas-dataframe", "best_answers_votes":428, "question_length":281, "response_length":1315 }, { "question":"Convert columns into rows with Pandas So my dataset has some information by location for n dates. The problem is each date is actually a different column header. 
For example the CSV looks like ``` location name Jan-2010 Feb-2010 March-2010 A \"test\" 12 20 30 B \"foo\" 18 20 25 ``` What I would like is for it to look like ``` location name Date Value A \"test\" Jan-2010 12 A \"test\" Feb-2010 20 A \"test\" March-2010 30 B \"foo\" Jan-2010 18 B \"foo\" Feb-2010 20 B \"foo\" March-2010 25 ``` My problem is I don't know how many dates are in the column (though I know they will always start after name)", "response":"Use .melt: ``` df.melt(id_vars=[\"location\", \"name\"], var_name=\"Date\", value_name=\"Value\") location name Date Value 0 A \"test\" Jan-2010 12 1 B \"foo\" Jan-2010 18 2 A \"test\" Feb-2010 20 3 B \"foo\" Feb-2010 20 4 A \"test\" March-2010 30 5 B \"foo\" March-2010 25 ``` Old(er) versions: ``` >>> df location name Jan-2010 Feb-2010 March-2010 0 A test 12 20 30 1 B foo 18 20 25 >>> df2 = pd.melt(df, id_vars=[\"location\", \"name\"], var_name=\"Date\", value_name=\"Value\") >>> df2 location name Date Value 0 A test Jan-2010 12 1 B foo Jan-2010 18 2 A test Feb-2010 20 3 B foo Feb-2010 20 4 A test March-2010 30 5 B foo March-2010 25 >>> df2 = df2.sort([\"location\", \"name\"]) >>> df2 location name Date Value 0 A test Jan-2010 12 2 A test Feb-2010 20 4 A test March-2010 30 1 B foo Jan-2010 18 3 B foo Feb-2010 20 5 B foo March-2010 25 ``` (Might want to throw in a .reset_index(drop=True), just to keep the output clean.) Note: pd.DataFrame.sort has been deprecated in favour of pd.DataFrame.sort_values.", "best_answers_score":0.8, "library_name":"pandas", "question_url":"https:\/\/stackoverflow.com\/questions\/28654047\/convert-columns-into-rows-with-pandas", "best_answers_votes":424, "question_length":589, "response_length":979 }, { "question":"Counting unique values in a column in pandas dataframe like in Qlik?
If I have a table like this: ``` df = pd.DataFrame({ 'hID': [101, 102, 103, 101, 102, 104, 105, 101], 'dID': [10, 11, 12, 10, 11, 10, 12, 10], 'uID': ['James', 'Henry', 'Abe', 'James', 'Henry', 'Brian', 'Claude', 'James'], 'mID': ['A', 'B', 'A', 'B', 'A', 'A', 'A', 'C'] }) ``` I can do count(distinct hID) in Qlik to come up with count of 5 for unique hID. How do I do that in python using a pandas dataframe? Or maybe a numpy array? Similarly, if were to do count(hID) I will get 8 in Qlik. What is the equivalent way to do it in pandas?", "response":"Count distinct values, use nunique: ``` df['hID'].nunique() 5 ``` Count only non-null values, use count: ``` df['hID'].count() 8 ``` Count total values including null values, use the size attribute: ``` df['hID'].size 8 ``` Edit to add condition Use boolean indexing: ``` df.loc[df['mID']=='A','hID'].agg(['nunique','count','size']) ``` OR using query: ``` df.query('mID == \"A\"')['hID'].agg(['nunique','count','size']) ``` Output: ``` nunique 5 count 5 size 5 Name: hID, dtype: int64 ```", "best_answers_score":0.8, "library_name":"pandas", "question_url":"https:\/\/stackoverflow.com\/questions\/45759966\/counting-unique-values-in-a-column-in-pandas-dataframe-like-in-qlik", "best_answers_votes":419, "question_length":608, "response_length":487 }, { "question":"How to test if a string contains one of the substrings in a list, in pandas? Is there any function that would be the equivalent of a combination of df.isin() and df[col].str.contains()? For example, say I have the series s = pd.Series(['cat','hat','dog','fog','pet']), and I want to find all places where s contains any of ['og', 'at'], I would want to get everything but 'pet'. 
I have a solution, but it's rather inelegant: ``` searchfor = ['og', 'at'] found = [s.str.contains(x) for x in searchfor] result = pd.DataFrame(found) result.any() ``` Is there a better way to do this?", "response":"One option is just to use the regex | character to try to match each of the substrings in the words in your Series s (still using str.contains). You can construct the regex by joining the words in searchfor with |: ``` >>> searchfor = ['og', 'at'] >>> s[s.str.contains('|'.join(searchfor))] 0 cat 1 hat 2 dog 3 fog dtype: object ``` As @AndyHayden noted in the comments below, take care if your substrings have special characters such as $ and ^ which you want to match literally. These characters have specific meanings in the context of regular expressions and will affect the matching. You can make your list of substrings safer by escaping non-alphanumeric characters with re.escape: ``` >>> import re >>> matches = ['$money', 'x^y'] >>> safe_matches = [re.escape(m) for m in matches] >>> safe_matches ['\\\\$money', 'x\\\\^y'] ``` The strings in this new list will match each character literally when used with str.contains.", "best_answers_score":0.8, "library_name":"pandas", "question_url":"https:\/\/stackoverflow.com\/questions\/26577516\/how-to-test-if-a-string-contains-one-of-the-substrings-in-a-list-in-pandas", "best_answers_votes":465, "question_length":580, "response_length":930 }, { "question":"Pandas convert dataframe to array of tuples I have manipulated some data using pandas and now I want to carry out a batch save back to the database. This requires me to convert the dataframe into an array of tuples, with each tuple corresponding to a \"row\" of the dataframe.
My DataFrame looks something like: ``` In [182]: data_set Out[182]: index data_date data_1 data_2 0 14303 2012-02-17 24.75 25.03 1 12009 2012-02-16 25.00 25.07 2 11830 2012-02-15 24.99 25.15 3 6274 2012-02-14 24.68 25.05 4 2302 2012-02-13 24.62 24.77 5 14085 2012-02-10 24.38 24.61 ``` I want to convert it to an array of tuples like: ``` [(datetime.date(2012,2,17),24.75,25.03), (datetime.date(2012,2,16),25.00,25.07), ...etc. ] ``` Any suggestion on how I can efficiently do this?", "response":"``` list(data_set.itertuples(index=False)) ``` As of pandas 0.17.1, the above will return a list of namedtuples. If you want a list of ordinary tuples, pass name=None as an argument: ``` list(data_set.itertuples(index=False, name=None)) ```", "best_answers_score":0.8, "library_name":"pandas", "question_url":"https:\/\/stackoverflow.com\/questions\/9758450\/pandas-convert-dataframe-to-array-of-tuples", "best_answers_votes":378, "question_length":757, "response_length":231 }, { "question":"python pandas: apply a function with arguments to a series I want to apply a function with arguments to a series in python pandas: ``` x = my_series.apply(my_function, more_arguments_1) y = my_series.apply(my_function, more_arguments_2) ... ``` The documentation describes support for an apply method, but it doesn't accept any arguments. Is there a different method that accepts arguments? Alternatively, am I missing a simple workaround? Update (October 2017): Note that since this question was originally asked, pandas apply() has been updated to handle positional and keyword arguments, and the documentation link above now reflects that and shows how to include either type of argument.", "response":"Newer versions of pandas do allow you to pass extra arguments (see the new documentation). So now you can do: ``` my_series.apply(your_function, args=(2,3,4), extra_kw=1) ``` The positional arguments are added after the element of the series.
For older versions of pandas: The documentation explains this clearly. The apply method accepts a Python function which should have a single parameter. If you want to pass more parameters you should use functools.partial as suggested by Joel Cornett in his comment. An example: ``` >>> import functools >>> import operator >>> add_3 = functools.partial(operator.add,3) >>> add_3(2) 5 >>> add_3(7) 10 ``` You can also pass keyword arguments using partial. Another way would be to create a lambda: ``` my_series.apply((lambda x: your_func(a,b,c,d,...,x))) ``` But I think using partial is better.", "best_answers_score":0.8, "library_name":"pandas", "question_url":"https:\/\/stackoverflow.com\/questions\/12182744\/python-pandas-apply-a-function-with-arguments-to-a-series", "best_answers_votes":292, "question_length":694, "response_length":836 }, { "question":"How to plot in multiple subplots I am a little confused about how this code works: ``` fig, axes = plt.subplots(nrows=2, ncols=2) plt.show() ``` How does the fig, axes work in this case? What does it do? Also why wouldn't this work to do the same thing: ``` fig = plt.figure() axes = fig.subplots(nrows=2, ncols=2) ```", "response":"There are several ways to do it. The subplots method creates the figure along with the subplots that are then stored in the ax array.
For example: ``` import matplotlib.pyplot as plt x = range(10) y = range(10) fig, ax = plt.subplots(nrows=2, ncols=2) for row in ax: for col in row: col.plot(x, y) plt.show() ``` However, something like this will also work, it's not so \"clean\" though since you are creating a figure with subplots and then add on top of them: ``` fig = plt.figure() plt.subplot(2, 2, 1) plt.plot(x, y) plt.subplot(2, 2, 2) plt.plot(x, y) plt.subplot(2, 2, 3) plt.plot(x, y) plt.subplot(2, 2, 4) plt.plot(x, y) plt.show() ```", "best_answers_score":0.8, "library_name":"pandas", "question_url":"https:\/\/stackoverflow.com\/questions\/31726643\/how-to-plot-in-multiple-subplots", "best_answers_votes":343, "question_length":318, "response_length":641 }, { "question":"Right way to reverse a pandas DataFrame? Here is my code: ``` import pandas as pd data = pd.DataFrame({'Odd':[1,3,5,6,7,9], 'Even':[0,2,4,6,8,10]}) for i in reversed(data): print(data['Odd'], data['Even']) ``` When I run this code, i get the following error: ``` Traceback (most recent call last): File \"C:\\Python33\\lib\\site-packages\\pandas\\core\\generic.py\", line 665, in _get_item_cache return cache[item] KeyError: 5 During handling of the above exception, another exception occurred: Traceback (most recent call last): File \"C:\\Users\\*****\\Documents\\******\\********\\****.py\", line 5, in for i in reversed(data): File \"C:\\Python33\\lib\\site-packages\\pandas\\core\\frame.py\", line 2003, in __getitem__ return self._get_item_cache(key) File \"C:\\Python33\\lib\\site-packages\\pandas\\core\\generic.py\", line 667, in _get_item_cache values = self._data.get(item) File \"C:\\Python33\\lib\\site-packages\\pandas\\core\\internals.py\", line 1656, in get _, block = self._find_block(item) File \"C:\\Python33\\lib\\site-packages\\pandas\\core\\internals.py\", line 1936, in _find_block self._check_have(item) File \"C:\\Python33\\lib\\site-packages\\pandas\\core\\internals.py\", line 1943, in _check_have raise KeyError('no 
item named %s' % com.pprint_thing(item)) KeyError: 'no item named 5' ``` Why am I getting this error? How can I fix that? What is the right way to reverse pandas.DataFrame?", "response":"``` data.reindex(index=data.index[::-1]) ``` or simply: ``` data.iloc[::-1] ``` will reverse your data frame, if you want to have a for loop which goes from down to up you may do: ``` for idx in reversed(data.index): print(idx, data.loc[idx, 'Even'], data.loc[idx, 'Odd']) ``` or ``` for idx in reversed(data.index): print(idx, data.Even[idx], data.Odd[idx]) ``` You are getting an error because reversed first calls data.__len__() which returns 6. Then it tries to call data[j - 1] for j in range(6, 0, -1), and the first call would be data[5]; but in pandas dataframe data[5] means column 5, and there is no column 5 so it will throw an exception. ( see docs )", "best_answers_score":0.8, "library_name":"pandas", "question_url":"https:\/\/stackoverflow.com\/questions\/20444087\/right-way-to-reverse-a-pandas-dataframe", "best_answers_votes":474, "question_length":1361, "response_length":662 }, { "question":"How to replace text in a string column of a Pandas dataframe? I have a column in my dataframe like this: ``` range \"(2,30)\" \"(50,290)\" \"(400,1000)\" ... ``` and I want to replace the , comma with - dash. I'm currently using this method but nothing is changed. 
``` org_info_exc['range'].replace(',', '-', inplace=True) ``` Can anybody help?", "response":"Use the vectorised str method replace: ``` df['range'] = df['range'].str.replace(',','-') df range 0 (2-30) 1 (50-290) ``` EDIT: so if we look at what you tried and why it didn't work: ``` df['range'].replace(',','-',inplace=True) ``` from the docs we see this description: str or regex: str: string exactly matching to_replace will be replaced with value So because the str values do not match, no replacement occurs, compare with the following: ``` df = pd.DataFrame({'range':['(2,30)',',']}) df['range'].replace(',','-', inplace=True) df['range'] 0 (2,30) 1 - Name: range, dtype: object ``` here we get an exact match on the second row and the replacement occurs.", "best_answers_score":0.8, "library_name":"pandas", "question_url":"https:\/\/stackoverflow.com\/questions\/28986489\/how-to-replace-text-in-a-string-column-of-a-pandas-dataframe", "best_answers_votes":509, "question_length":338, "response_length":666 }, { "question":"How do I retrieve the number of columns in a Pandas data frame? How do you programmatically retrieve the number of columns in a pandas dataframe? 
I was hoping for something like: ``` df.num_columns ```", "response":"Like so: ``` import pandas as pd df = pd.DataFrame({\"pear\": [1,2,3], \"apple\": [2,3,4], \"orange\": [3,4,5]}) len(df.columns) 3 ```", "best_answers_score":0.8, "library_name":"pandas", "question_url":"https:\/\/stackoverflow.com\/questions\/20297332\/how-do-i-retrieve-the-number-of-columns-in-a-pandas-data-frame", "best_answers_votes":423, "question_length":201, "response_length":128 }, { "question":"JSON to pandas DataFrame What I am trying to do is extract elevation data from a google maps API along a path specified by latitude and longitude coordinates as follows: ``` from urllib2 import Request, urlopen import json path1 = '42.974049,-81.205203|42.974298,-81.195755' request=Request('http:\/\/maps.googleapis.com\/maps\/api\/elevation\/json?locations='+path1+'&sensor=false') response = urlopen(request) elevations = response.read() ``` This gives me a data that looks like this: ``` elevations.splitlines() ['{', ' \"results\" : [', ' {', ' \"elevation\" : 243.3462677001953,', ' \"location\" : {', ' \"lat\" : 42.974049,', ' \"lng\" : -81.205203', ' },', ' \"resolution\" : 19.08790397644043', ' },', ' {', ' \"elevation\" : 244.1318664550781,', ' \"location\" : {', ' \"lat\" : 42.974298,', ' \"lng\" : -81.19575500000001', ' },', ' \"resolution\" : 19.08790397644043', ' }', ' ],', ' \"status\" : \"OK\"', '}'] ``` when putting into as DataFrame here is what I get: ``` pd.read_json(elevations) ``` and here is what I want: I'm not sure if this is possible, but mainly what I am looking for is a way to be able to put the elevation, latitude and longitude data together in a pandas dataframe (doesn't have to have fancy mutiline headers). If any one can help or give some advice on working with this data that would be great! If you can't tell I haven't worked much with json data before... 
EDIT: This method isn't all that attractive but seems to work: ``` data = json.loads(elevations) lat,lng,el = [],[],[] for result in data['results']: lat.append(result[u'location'][u'lat']) lng.append(result[u'location'][u'lng']) el.append(result[u'elevation']) df = pd.DataFrame([lat,lng,el]).T ``` This ends up as a dataframe with columns for latitude, longitude, and elevation", "response":"I found a quick and easy solution to what I wanted using json_normalize() included in pandas 1.01. ``` from urllib2 import Request, urlopen import json import pandas as pd path1 = '42.974049,-81.205203|42.974298,-81.195755' request=Request('http:\/\/maps.googleapis.com\/maps\/api\/elevation\/json?locations='+path1+'&sensor=false') response = urlopen(request) elevations = response.read() data = json.loads(elevations) df = pd.json_normalize(data['results']) ``` This gives a nice flattened dataframe with the json data that I got from the Google Maps API.", "best_answers_score":0.8, "library_name":"pandas", "question_url":"https:\/\/stackoverflow.com\/questions\/21104592\/json-to-pandas-dataframe", "best_answers_votes":321, "question_length":1734, "response_length":551 }, { "question":"getting the index of a row in a pandas apply function I am trying to access the index of a row in a function applied across an entire DataFrame in Pandas. I have something like this: ``` df = pandas.DataFrame([[1,2,3],[4,5,6]], columns=['a','b','c']) >>> df a b c 0 1 2 3 1 4 5 6 ``` and I'll define a function that accesses elements with a given row ``` def rowFunc(row): return row['a'] + row['b'] * row['c'] ``` I can apply it like so: ``` df['d'] = df.apply(rowFunc, axis=1) >>> df a b c d 0 1 2 3 7 1 4 5 6 34 ``` Awesome! Now what if I want to incorporate the index into my function? The index of any given row in this DataFrame before adding d would be Index([u'a', u'b', u'c', u'd'], dtype='object'), but I want the 0 and 1. So I can't just access row.index.
I know I could create a temporary column in the table where I store the index, but I'm wondering if it is stored in the row object somewhere.", "response":"To access the index in this case you access the name attribute: ``` In [182]: df = pd.DataFrame([[1,2,3],[4,5,6]], columns=['a','b','c']) def rowFunc(row): return row['a'] + row['b'] * row['c'] def rowIndex(row): return row.name df['d'] = df.apply(rowFunc, axis=1) df['rowIndex'] = df.apply(rowIndex, axis=1) df Out[182]: a b c d rowIndex 0 1 2 3 7 0 1 4 5 6 34 1 ``` Note that if this is really what you are trying to do that the following works and is much faster: ``` In [198]: df['d'] = df['a'] + df['b'] * df['c'] df Out[198]: a b c d 0 1 2 3 7 1 4 5 6 34 In [199]: %timeit df['a'] + df['b'] * df['c'] %timeit df.apply(rowIndex, axis=1) 10000 loops, best of 3: 163 \u00b5s per loop 1000 loops, best of 3: 286 \u00b5s per loop ``` EDIT Looking at this question 3+ years later, you could just do: ``` In[15]: df['d'],df['rowIndex'] = df['a'] + df['b'] * df['c'], df.index df Out[15]: a b c d rowIndex 0 1 2 3 7 0 1 4 5 6 34 1 ``` but assuming it isn't as trivial as this, whatever your rowFunc is really doing, you should look to use the vectorised functions, and then use them against the df index: ``` In[16]: df['newCol'] = df['a'] + df['b'] + df['c'] + df.index df Out[16]: a b c d rowIndex newCol 0 1 2 3 7 0 6 1 4 5 6 34 1 16 ```", "best_answers_score":0.8, "library_name":"pandas", "question_url":"https:\/\/stackoverflow.com\/questions\/26658240\/getting-the-index-of-a-row-in-a-pandas-apply-function", "best_answers_votes":307, "question_length":906, "response_length":1228 }, { "question":"What is the difference between a pandas Series and a single-column DataFrame? Why does pandas make a distinction between a Series and a single-column DataFrame? In other words: what is the reason of existence of the Series class? 
I'm mainly using time series with datetime index, maybe that helps to set the context.", "response":"Quoting the Pandas docs pandas.DataFrame(data=None, index=None, columns=None, dtype=None, copy=False) Two-dimensional size-mutable, potentially heterogeneous tabular data structure with labeled axes (rows and columns). Arithmetic operations align on both row and column labels. Can be thought of as a dict-like container for Series objects. The primary pandas data structure. So, the Series is the data structure for a single column of a DataFrame, not only conceptually, but literally, i.e. the data in a DataFrame is actually stored in memory as a collection of Series. Analogously: We need both lists and matrices, because matrices are built with lists. Single row matricies, while equivalent to lists in functionality still cannot exist without the list(s) they're composed of. They both have extremely similar APIs, but you'll find that DataFrame methods always cater to the possibility that you have more than one column. And, of course, you can always add another Series (or equivalent object) to a DataFrame, while adding a Series to another Series involves creating a DataFrame.", "best_answers_score":0.8, "library_name":"pandas", "question_url":"https:\/\/stackoverflow.com\/questions\/26047209\/what-is-the-difference-between-a-pandas-series-and-a-single-column-dataframe", "best_answers_votes":279, "question_length":316, "response_length":1087 }, { "question":"How to add hovering annotations to a plot I am using matplotlib to make scatter plots. Each point on the scatter plot is associated with a named object. I would like to be able to see the name of an object when I hover my cursor over the point on the scatter plot associated with that object. In particular, it would be nice to be able to quickly see the names of the points that are outliers. 
The closest thing I have been able to find while searching here is the annotate command, but that appears to create a fixed label on the plot. Unfortunately, with the number of points that I have, the scatter plot would be unreadable if I labeled each point. Does anyone know of a way to create labels that only appear when the cursor hovers in the vicinity of that point?", "response":"Here is a code that uses a scatter and shows an annotation upon hovering over the scatter points. ``` import matplotlib.pyplot as plt import numpy as np; np.random.seed(1) x = np.random.rand(15) y = np.random.rand(15) names = np.array(list(\"ABCDEFGHIJKLMNO\")) c = np.random.randint(1,5,size=15) norm = plt.Normalize(1,4) cmap = plt.cm.RdYlGn fig,ax = plt.subplots() sc = plt.scatter(x,y,c=c, s=100, cmap=cmap, norm=norm) annot = ax.annotate(\"\", xy=(0,0), xytext=(20,20),textcoords=\"offset points\", bbox=dict(boxstyle=\"round\", fc=\"w\"), arrowprops=dict(arrowstyle=\"->\")) annot.set_visible(False) def update_annot(ind): pos = sc.get_offsets()[ind[\"ind\"][0]] annot.xy = pos text = \"{}, {}\".format(\" \".join(list(map(str,ind[\"ind\"]))), \" \".join([names[n] for n in ind[\"ind\"]])) annot.set_text(text) annot.get_bbox_patch().set_facecolor(cmap(norm(c[ind[\"ind\"][0]]))) annot.get_bbox_patch().set_alpha(0.4) def hover(event): vis = annot.get_visible() if event.inaxes == ax: cont, ind = sc.contains(event) if cont: update_annot(ind) annot.set_visible(True) fig.canvas.draw_idle() else: if vis: annot.set_visible(False) fig.canvas.draw_idle() fig.canvas.mpl_connect(\"motion_notify_event\", hover) plt.show() ``` Because people also want to use this solution for a line plot instead of a scatter, the following would be the same solution for plot (which works slightly differently). 
``` import matplotlib.pyplot as plt import numpy as np; np.random.seed(1) x = np.sort(np.random.rand(15)) y = np.sort(np.random.rand(15)) names = np.array(list(\"ABCDEFGHIJKLMNO\")) norm = plt.Normalize(1,4) cmap = plt.cm.RdYlGn fig,ax = plt.subplots() line, = plt.plot(x,y, marker=\"o\") annot = ax.annotate(\"\", xy=(0,0), xytext=(-20,20),textcoords=\"offset points\", bbox=dict(boxstyle=\"round\", fc=\"w\"), arrowprops=dict(arrowstyle=\"->\")) annot.set_visible(False) def update_annot(ind): x,y = line.get_data() annot.xy = (x[ind[\"ind\"][0]], y[ind[\"ind\"][0]]) text = \"{}, {}\".format(\" \".join(list(map(str,ind[\"ind\"]))), \" \".join([names[n] for n in ind[\"ind\"]])) annot.set_text(text) annot.get_bbox_patch().set_alpha(0.4) def hover(event): vis = annot.get_visible() if event.inaxes == ax: cont, ind = line.contains(event) if cont: update_annot(ind) annot.set_visible(True) fig.canvas.draw_idle() else: if vis: annot.set_visible(False) fig.canvas.draw_idle() fig.canvas.mpl_connect(\"motion_notify_event\", hover) plt.show() ``` In case someone is looking for a solution for lines in twin axes, refer to How to make labels appear when hovering over a point in multiple axis? In case someone is looking for a solution for bar plots, please refer to e.g.
this answer.", "best_answers_score":0.8, "library_name":"pandas", "question_url":"https:\/\/stackoverflow.com\/questions\/7908636\/how-to-add-hovering-annotations-to-a-plot", "best_answers_votes":259, "question_length":766, "response_length":2626 }, { "question":"Whether to use apply vs transform on a group object, to subtract two columns and get mean Consider the following dataframe: ``` columns = ['A', 'B', 'C', 'D'] records = [ ['foo', 'one', 0.162003, 0.087469], ['bar', 'one', -1.156319, -1.5262719999999999], ['foo', 'two', 0.833892, -1.666304], ['bar', 'three', -2.026673, -0.32205700000000004], ['foo', 'two', 0.41145200000000004, -0.9543709999999999], ['bar', 'two', 0.765878, -0.095968], ['foo', 'one', -0.65489, 0.678091], ['foo', 'three', -1.789842, -1.130922] ] df = pd.DataFrame.from_records(records, columns=columns) \"\"\" A B C D 0 foo one 0.162003 0.087469 1 bar one -1.156319 -1.526272 2 foo two 0.833892 -1.666304 3 bar three -2.026673 -0.322057 4 foo two 0.411452 -0.954371 5 bar two 0.765878 -0.095968 6 foo one -0.654890 0.678091 7 foo three -1.789842 -1.130922 \"\"\" ``` The following commands work: ``` df.groupby('A').apply(lambda x: (x['C'] - x['D'])) df.groupby('A').apply(lambda x: (x['C'] - x['D']).mean()) ``` but none of the following work: ``` df.groupby('A').transform(lambda x: (x['C'] - x['D'])) # KeyError or ValueError: could not broadcast input array from shape (5) into shape (5,3) df.groupby('A').transform(lambda x: (x['C'] - x['D']).mean()) # KeyError or TypeError: cannot concatenate a non-NDFrame object ``` Why? The example on the documentation seems to suggest that calling transform on a group allows one to do row-wise operation processing: ``` # Note that the following suggests row-wise operation (x.mean is the column mean) zscore = lambda x: (x - x.mean()) \/ x.std() transformed = ts.groupby(key).transform(zscore) ``` In other words, I thought that transform is essentially a specific type of apply (the one that does not aggregate). 
Where am I wrong? For reference, below is the construction of the original dataframe above: ``` df = pd.DataFrame({'A' : ['foo', 'bar', 'foo', 'bar', 'foo', 'bar', 'foo', 'foo'], 'B' : ['one', 'one', 'two', 'three', 'two', 'two', 'one', 'three'], 'C' : randn(8), 'D' : randn(8)}) ```", "response":"Two major differences between apply and transform There are two major differences between the transform and apply groupby methods. Input: apply implicitly passes all the columns for each group as a DataFrame to the custom function. while transform passes each column for each group individually as a Series to the custom function. Output: The custom function passed to apply can return a scalar, or a Series or DataFrame (or numpy array or even list). The custom function passed to transform must return a sequence (a one dimensional Series, array or list) the same length as the group. So, transform works on just one Series at a time and apply works on the entire DataFrame at once. Inspecting the custom function It can help quite a bit to inspect the input to your custom function passed to apply or transform. Examples Let's create some sample data and inspect the groups so that you can see what I am talking about: ``` import pandas as pd import numpy as np df = pd.DataFrame({'State':['Texas', 'Texas', 'Florida', 'Florida'], 'a':[4,5,1,3], 'b':[6,10,3,11]}) State a b 0 Texas 4 6 1 Texas 5 10 2 Florida 1 3 3 Florida 3 11 ``` Let's create a simple custom function that prints out the type of the implicitly passed object and then raises an exception so that execution can be stopped. ``` def inspect(x): print(type(x)) raise ``` Now let's pass this function to both the groupby apply and transform methods to see what object is passed to it: ``` df.groupby('State').apply(inspect) RuntimeError ``` As you can see, a DataFrame is passed into the inspect function. You might be wondering why the type, DataFrame, got printed out twice. Pandas runs the first group twice. 
It does this to determine if there is a fast way to complete the computation or not. This is a minor detail that you shouldn't worry about. Now, let's do the same thing with transform ``` df.groupby('State').transform(inspect) RuntimeError ``` It is passed a Series - a totally different Pandas object. So, transform is only allowed to work with a single Series at a time. It is impossible for it to act on two columns at the same time. So, if we try and subtract column a from b inside of our custom function we would get an error with transform. See below: ``` def subtract_two(x): return x['a'] - x['b'] df.groupby('State').transform(subtract_two) KeyError: ('a', 'occurred at index a') ``` We get a KeyError as pandas is attempting to find the Series index a which does not exist. You can complete this operation with apply as it has the entire DataFrame: ``` df.groupby('State').apply(subtract_two) State Florida 2 -2 3 -8 Texas 0 -2 1 -5 dtype: int64 ``` The output is a Series and a little confusing as the original index is kept, but we have access to all columns. Displaying the passed pandas object It can help even more to display the entire pandas object within the custom function, so you can see exactly what you are operating with. You can use print statements by I like to use the display function from the IPython.display module so that the DataFrames get nicely outputted in HTML in a jupyter notebook: ``` from IPython.display import display def subtract_two(x): display(x) return x['a'] - x['b'] ``` Screenshot: Transform must return a single dimensional sequence the same size as the group The other difference is that transform must return a single dimensional sequence the same size as the group. In this particular instance, each group has two rows, so transform must return a sequence of two rows. 
If it does not then an error is raised: ``` def return_three(x): return np.array([1, 2, 3]) df.groupby('State').transform(return_three) ValueError: transform must return a scalar value for each group ``` The error message is not really descriptive of the problem. You must return a sequence the same length as the group. So, a function like this would work: ``` def rand_group_len(x): return np.random.rand(len(x)) df.groupby('State').transform(rand_group_len) a b 0 0.962070 0.151440 1 0.440956 0.782176 2 0.642218 0.483257 3 0.056047 0.238208 ``` Returning a single scalar object also works for transform If you return just a single scalar from your custom function, then transform will use it for each of the rows in the group: ``` def group_sum(x): return x.sum() df.groupby('State').transform(group_sum) a b 0 9 16 1 9 16 2 4 14 3 4 14 ```", "best_answers_score":0.8, "library_name":"pandas", "question_url":"https:\/\/stackoverflow.com\/questions\/27517425\/whether-to-use-apply-vs-transform-on-a-group-object-to-subtract-two-columns-and", "best_answers_votes":356, "question_length":2006, "response_length":4349 }, { "question":"pandas: multiple conditions while indexing data frame - unexpected behavior I am filtering rows in a dataframe by values in two columns. For some reason the OR operator behaves like I would expect AND operator to behave and vice versa. My test code: ```py df = pd.DataFrame({'a': range(5), 'b': range(5) }) # let's insert some -1 values df['a'][1] = -1 df['b'][1] = -1 df['a'][3] = -1 df['b'][4] = -1 df1 = df[(df.a != -1) & (df.b != -1)] df2 = df[(df.a != -1) | (df.b != -1)] print(pd.concat([df, df1, df2], axis=1, keys = [ 'original df', 'using AND (&)', 'using OR (|)',])) ``` And the result: ```none original df using AND (&) using OR (|) a b a b a b 0 0 0 0 0 0 0 1 -1 -1 NaN NaN NaN NaN 2 2 2 2 2 2 2 3 -1 3 NaN NaN -1 3 4 4 -1 NaN NaN 4 -1 [5 rows x 6 columns] ``` As you can see, the AND operator drops every row in which at least one value equals -1. 
On the other hand, the OR operator requires both values to be equal to -1 to drop them. I would expect exactly the opposite result. Could anyone explain this behavior? I am using pandas 0.13.1.", "response":"As you can see, the AND operator drops every row in which at least one value equals -1. On the other hand, the OR operator requires both values to be equal to -1 to drop them. That's right. Remember that you're writing the condition in terms of what you want to keep, not in terms of what you want to drop. For df1: ``` df1 = df[(df.a != -1) & (df.b != -1)] ``` You're saying \"keep the rows in which df.a isn't -1 and df.b isn't -1\", which is the same as dropping every row in which at least one value is -1. For df2: ``` df2 = df[(df.a != -1) | (df.b != -1)] ``` You're saying \"keep the rows in which either df.a or df.b is not -1\", which is the same as dropping rows where both values are -1. PS: chained access like df['a'][1] = -1 can get you into trouble. It's better to get into the habit of using .loc and .iloc.", "best_answers_score":0.8, "library_name":"pandas", "question_url":"https:\/\/stackoverflow.com\/questions\/22591174\/pandas-multiple-conditions-while-indexing-data-frame-unexpected-behavior", "best_answers_votes":399, "question_length":1054, "response_length":819 }, { "question":"How to keep index when using pandas merge I would like to merge two DataFrames, and keep the index from the first frame as the index on the merged dataset. However, when I do the merge, the resulting DataFrame has integer index. How can I specify that I want to keep the index from the left data frame? 
``` In [4]: a = pd.DataFrame({'col1': {'a': 1, 'b': 2, 'c': 3}, 'to_merge_on': {'a': 1, 'b': 3, 'c': 4}}) In [5]: b = pd.DataFrame({'col2': {0: 1, 1: 2, 2: 3}, 'to_merge_on': {0: 1, 1: 3, 2: 5}}) In [6]: a Out[6]: col1 to_merge_on a 1 1 b 2 3 c 3 4 In [7]: b Out[7]: col2 to_merge_on 0 1 1 1 2 3 2 3 5 In [8]: a.merge(b, how='left') Out[8]: col1 to_merge_on col2 0 1 1 1.0 1 2 3 2.0 2 3 4 NaN In [9]: _.index Out[9]: Int64Index([0, 1, 2], dtype='int64') ``` EDIT: Switched to example code that can be easily reproduced", "response":"``` In [5]: a.reset_index().merge(b, how=\"left\").set_index('index') Out[5]: col1 to_merge_on col2 index a 1 1 1 b 2 3 2 c 3 4 NaN ``` Note that for some left merge operations, you may end up with more rows than in a when there are multiple matches between a and b. In this case, you may need to drop duplicates.", "best_answers_score":0.8, "library_name":"pandas", "question_url":"https:\/\/stackoverflow.com\/questions\/11976503\/how-to-keep-index-when-using-pandas-merge", "best_answers_votes":300, "question_length":821, "response_length":311 }, { "question":"Changing a specific column name in pandas DataFrame [duplicate] This question already has answers here: Renaming column names in Pandas (33 answers) Closed 4 years ago. I was looking for an elegant way to change a specified column name in a DataFrame. play data ... ``` import pandas as pd d = { 'one': [1, 2, 3, 4, 5], 'two': [9, 8, 7, 6, 5], 'three': ['a', 'b', 'c', 'd', 'e'] } df = pd.DataFrame(d) ``` The most elegant solution I have found so far ... ``` names = df.columns.tolist() names[names.index('two')] = 'new_name' df.columns = names ``` I was hoping for a simple one-liner ... this attempt failed ... 
``` df.columns[df.columns.tolist().index('one')] = 'another_name' ``` Any hints gratefully received.", "response":"A one liner does exist: ``` In [27]: df=df.rename(columns = {'two':'new_name'}) In [28]: df Out[28]: one three new_name 0 1 a 9 1 2 b 8 2 3 c 7 3 4 d 6 4 5 e 5 ``` Following is the docstring for the rename method. ``` Definition: df.rename(self, index=None, columns=None, copy=True, inplace=False) Docstring: Alter index and \/ or columns using input function or functions. Function \/ dict values must be unique (1-to-1). Labels not contained in a dict \/ Series will be left as-is. Parameters ---------- index : dict-like or function, optional Transformation to apply to index values columns : dict-like or function, optional Transformation to apply to column values copy : boolean, default True Also copy underlying data inplace : boolean, default False Whether to return a new DataFrame. If True then value of copy is ignored. See also -------- Series.rename Returns ------- renamed : DataFrame (new object) ```", "best_answers_score":0.8, "library_name":"pandas", "question_url":"https:\/\/stackoverflow.com\/questions\/20868394\/changing-a-specific-column-name-in-pandas-dataframe", "best_answers_votes":454, "question_length":714, "response_length":912 }, { "question":"Concatenate strings from several rows using Pandas groupby I want to apply some sort of concatenation of the strings in a column using groupby. 
This is my code so far: ``` import pandas as pd from io import StringIO data = StringIO(\"\"\" \"name1\",\"hej\",\"2014-11-01\" \"name1\",\"du\",\"2014-11-02\" \"name1\",\"aj\",\"2014-12-01\" \"name1\",\"oj\",\"2014-12-02\" \"name2\",\"fin\",\"2014-11-01\" \"name2\",\"katt\",\"2014-11-02\" \"name2\",\"mycket\",\"2014-12-01\" \"name2\",\"lite\",\"2014-12-01\" \"\"\") # load string as stream into dataframe df = pd.read_csv(data,header=0, names=[\"name\",\"text\",\"date\"],parse_dates=[2]) # add column with month df[\"month\"] = df[\"date\"].apply(lambda x: x.month) ``` I want the end result to look like this:", "response":"You can groupby the 'name' and 'month' columns, then call transform which will return data aligned to the original df and apply a lambda where we join the text entries: ``` In [119]: df['text'] = df[['name','text','month']].groupby(['name','month'])['text'].transform(lambda x: ','.join(x)) df[['name','text','month']].drop_duplicates() Out[119]: name text month 0 name1 hej,du 11 2 name1 aj,oj 12 4 name2 fin,katt 11 6 name2 mycket,lite 12 ``` I sub the original df by passing a list of the columns of interest df[['name','text','month']] here and then call drop_duplicates EDIT actually I can just call apply and then reset_index: ``` In [124]: df.groupby(['name','month'])['text'].apply(lambda x: ','.join(x)).reset_index() Out[124]: name month text 0 name1 11 hej,du 1 name1 12 aj,oj 2 name2 11 fin,katt 3 name2 12 mycket,lite ``` update the lambda is unnecessary here: ``` In[38]: df.groupby(['name','month'])['text'].apply(','.join).reset_index() Out[38]: name month text 0 name1 11 du 1 name1 12 aj,oj 2 name2 11 fin,katt 3 name2 12 mycket,lite ```", "best_answers_score":0.8, "library_name":"pandas", "question_url":"https:\/\/stackoverflow.com\/questions\/27298178\/concatenate-strings-from-several-rows-using-pandas-groupby", "best_answers_votes":358, "question_length":694, "response_length":1055 }, { "question":"check if variable is dataframe when my 
function f is called with a variable I want to check if var is a pandas dataframe: ``` def f(var): if var == pd.DataFrame(): print \"do stuff\" ``` I guess the solution might be quite simple but even with ``` def f(var): if var.values != None: print \"do stuff\" ``` I can't get it to work like expected.", "response":"Use isinstance, nothing else: ``` if isinstance(x, pd.DataFrame): ... # do something ``` PEP8 says explicitly that isinstance is the preferred way to check types ``` No: type(x) is pd.DataFrame No: type(x) == pd.DataFrame Yes: isinstance(x, pd.DataFrame) ``` And don't even think about ``` if obj.__class__.__name__ = 'DataFrame': expect_problems_some_day() ``` isinstance handles inheritance (see What are the differences between type() and isinstance()?). For example, it will tell you if a variable is a string (either str or unicode), because they derive from basestring) ``` if isinstance(obj, basestring): i_am_string(obj) ``` Specifically for pandas DataFrame objects: ``` import pandas as pd isinstance(var, pd.DataFrame) ```", "best_answers_score":0.8, "library_name":"pandas", "question_url":"https:\/\/stackoverflow.com\/questions\/14808945\/check-if-variable-is-dataframe", "best_answers_votes":379, "question_length":339, "response_length":733 }, { "question":"How do I combine two dataframes? I have a initial dataframe D. I extract two data frames from it like this: ``` A = D[D.label == k] B = D[D.label != k] ``` I want to combine A and B into one DataFrame. The order of the data is not important. However, when we sample A and B from D, they retain their indexes from D.", "response":"DEPRECATED: DataFrame.append and Series.append were deprecated in v1.4.0. 
Use append: ``` df_merged = df1.append(df2, ignore_index=True) ``` And to keep their indexes, set ignore_index=False.", "best_answers_score":0.8, "library_name":"pandas", "question_url":"https:\/\/stackoverflow.com\/questions\/12850345\/how-do-i-combine-two-dataframes", "best_answers_votes":262, "question_length":315, "response_length":191 }, { "question":"How to split data into 3 sets (train, validation and test)? I have a pandas dataframe and I wish to divide it to 3 separate sets. I know that using train_test_split from sklearn.cross_validation, one can divide the data in two sets (train and test). However, I couldn't find any solution about splitting the data into three sets. Preferably, I'd like to have the indices of the original data. I know that a workaround would be to use train_test_split two times and somehow adjust the indices. But is there a more standard \/ built-in way to split the data into 3 sets instead of 2?", "response":"Numpy solution. We will shuffle the whole dataset first (df.sample(frac=1, random_state=42)) and then split our data set into the following parts: 60% - train set, 20% - validation set, 20% - test set ``` In [305]: train, validate, test = \\ np.split(df.sample(frac=1, random_state=42), [int(.6*len(df)), int(.8*len(df))]) In [306]: train Out[306]: A B C D E 0 0.046919 0.792216 0.206294 0.440346 0.038960 2 0.301010 0.625697 0.604724 0.936968 0.870064 1 0.642237 0.690403 0.813658 0.525379 0.396053 9 0.488484 0.389640 0.599637 0.122919 0.106505 8 0.842717 0.793315 0.554084 0.100361 0.367465 7 0.185214 0.603661 0.217677 0.281780 0.938540 In [307]: validate Out[307]: A B C D E 5 0.806176 0.008896 0.362878 0.058903 0.026328 6 0.145777 0.485765 0.589272 0.806329 0.703479 In [308]: test Out[308]: A B C D E 4 0.521640 0.332210 0.370177 0.859169 0.401087 3 0.333348 0.964011 0.083498 0.670386 0.169619 ``` [int(.6*len(df)), int(.8*len(df))] - is an indices_or_sections array for numpy.split(). 
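As an aside, the two-call train_test_split workaround mentioned in the question can be sketched like this (assuming a recent scikit-learn, where it lives in sklearn.model_selection rather than sklearn.cross_validation):

```python
import numpy as np
import pandas as pd
from sklearn.model_selection import train_test_split

df = pd.DataFrame(np.random.randn(10, 2), columns=['A', 'B'])

# Split off 20% for test, then take 25% of the remaining 80%
# for validation: 0.8 * 0.25 = 0.2, giving a 60/20/20 split.
train_val, test = train_test_split(df, test_size=0.2, random_state=42)
train, validate = train_test_split(train_val, test_size=0.25, random_state=42)
```

Both calls slice the dataframe itself, so the three pieces keep their original indices, as the question asks.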
Here is a small demo for np.split() usage - let's split 20-elements array into the following parts: 80%, 10%, 10%: ``` In [45]: a = np.arange(1, 21) In [46]: a Out[46]: array([ 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20]) In [47]: np.split(a, [int(.8 * len(a)), int(.9 * len(a))]) Out[47]: [array([ 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16]), array([17, 18]), array([19, 20])] ```", "best_answers_score":0.8, "library_name":"pandas", "question_url":"https:\/\/stackoverflow.com\/questions\/38250710\/how-to-split-data-into-3-sets-train-validation-and-test", "best_answers_votes":295, "question_length":580, "response_length":1414 }, { "question":"Format y axis as percent I have an existing plot that was created with pandas like this: ``` df['myvar'].plot(kind='bar') ``` The y axis is format as float and I want to change the y axis to percentages. All of the solutions I found use ax.xyz syntax and I can only place code below the line above that creates the plot (I cannot add ax=ax to the line above.) How can I format the y axis as percentages without changing the line above? Here is the solution I found but requires that I redefine the plot: ``` import matplotlib.pyplot as plt import numpy as np import matplotlib.ticker as mtick data = [8,12,15,17,18,18.5] perc = np.linspace(0,100,len(data)) fig = plt.figure(1, (7,4)) ax = fig.add_subplot(1,1,1) ax.plot(perc, data) fmt = '%.0f%%' # Format you want the ticks, e.g. '40%' xticks = mtick.FormatStrFormatter(fmt) ax.xaxis.set_major_formatter(xticks) plt.show() ``` Link to the above solution: Pyplot: using percentage on x axis", "response":"This is a few months late, but I have created PR#6251 with matplotlib to add a new PercentFormatter class. With this class you just need one line to reformat your axis (two if you count the import of matplotlib.ticker): ``` import ... 
import matplotlib.ticker as mtick ax = df['myvar'].plot(kind='bar') ax.yaxis.set_major_formatter(mtick.PercentFormatter()) ``` PercentFormatter() accepts three arguments, xmax, decimals, symbol. xmax allows you to set the value that corresponds to 100% on the axis. This is nice if you have data from 0.0 to 1.0 and you want to display it from 0% to 100%. Just do PercentFormatter(1.0). The other two parameters allow you to set the number of digits after the decimal point and the symbol. They default to None and '%', respectively. decimals=None will automatically set the number of decimal points based on how much of the axes you are showing. Update PercentFormatter was introduced into Matplotlib proper in version 2.1.0.", "best_answers_score":0.8, "library_name":"pandas", "question_url":"https:\/\/stackoverflow.com\/questions\/31357611\/format-y-axis-as-percent", "best_answers_votes":376, "question_length":940, "response_length":961 }, { "question":"pandas dataframe columns scaling with sklearn I have a pandas dataframe with mixed type columns, and I'd like to apply sklearn's min_max_scaler to some of the columns. Ideally, I'd like to do these transformations in place, but haven't figured out a way to do that yet. 
I've written the following code that works: ``` import pandas as pd import numpy as np from sklearn import preprocessing scaler = preprocessing.MinMaxScaler() dfTest = pd.DataFrame({'A':[14.00,90.20,90.95,96.27,91.21],'B':[103.02,107.26,110.35,114.23,114.68], 'C':['big','small','big','small','small']}) min_max_scaler = preprocessing.MinMaxScaler() def scaleColumns(df, cols_to_scale): for col in cols_to_scale: df[col] = pd.DataFrame(min_max_scaler.fit_transform(pd.DataFrame(dfTest[col])),columns=[col]) return df dfTest A B C 0 14.00 103.02 big 1 90.20 107.26 small 2 90.95 110.35 big 3 96.27 114.23 small 4 91.21 114.68 small scaled_df = scaleColumns(dfTest,['A','B']) scaled_df A B C 0 0.000000 0.000000 big 1 0.926219 0.363636 small 2 0.935335 0.628645 big 3 1.000000 0.961407 small 4 0.938495 1.000000 small ``` I'm curious if this is the preferred\/most efficient way to do this transformation. Is there a way I could use df.apply that would be better? I'm also surprised I can't get the following code to work: ``` bad_output = min_max_scaler.fit_transform(dfTest['A']) ``` If I pass an entire dataframe to the scaler it works: ``` dfTest2 = dfTest.drop('C', axis = 1) good_output = min_max_scaler.fit_transform(dfTest2) good_output ``` I'm confused why passing a series to the scaler fails. 
In my full working code above I had hoped to just pass a series to the scaler then set the dataframe column = to the scaled series.", "response":"I am not sure if previous versions of pandas prevented this but now the following snippet works perfectly for me and produces exactly what you want without having to use apply ``` >>> import pandas as pd >>> from sklearn.preprocessing import MinMaxScaler >>> scaler = MinMaxScaler() >>> dfTest = pd.DataFrame({'A':[14.00,90.20,90.95,96.27,91.21], 'B':[103.02,107.26,110.35,114.23,114.68], 'C':['big','small','big','small','small']}) >>> dfTest[['A', 'B']] = scaler.fit_transform(dfTest[['A', 'B']]) >>> dfTest A B C 0 0.000000 0.000000 big 1 0.926219 0.363636 small 2 0.935335 0.628645 big 3 1.000000 0.961407 small 4 0.938495 1.000000 small ```", "best_answers_score":0.8, "library_name":"pandas", "question_url":"https:\/\/stackoverflow.com\/questions\/24645153\/pandas-dataframe-columns-scaling-with-sklearn", "best_answers_votes":363, "question_length":1702, "response_length":645 }, { "question":"How to make good reproducible pandas examples Having spent a decent amount of time watching both the r and pandas tags on SO, the impression that I get is that pandas questions are less likely to contain reproducible data. This is something that the R community has been pretty good about encouraging, and thanks to guides like this, newcomers are able to get some help on putting together these examples. People who are able to read these guides and come back with reproducible data will often have much better luck getting answers to their questions. How can we create good reproducible examples for pandas questions? 
Simple dataframes can be put together, e.g.: ``` import pandas as pd df = pd.DataFrame({'user': ['Bob', 'Jane', 'Alice'], 'income': [40000, 50000, 42000]}) ``` But many example datasets need more complicated structure, e.g.: datetime indices or data Multiple categorical variables (is there an equivalent to R's expand.grid() function, which produces all possible combinations of some given variables?) MultiIndex data For datasets that are hard to mock up using a few lines of code, is there an equivalent to R's dput() that allows you to generate copy-pasteable code to regenerate your datastructure?", "response":"Note: Most of the ideas here are pretty generic for Stack Overflow, indeed questions in general. See Minimal, Reproducible Example or Short, Self Contained, Correct Example. Disclaimer: Writing a good question is hard. The Good: Do include a small example DataFrame, either as runnable code: ``` In [1]: df = pd.DataFrame([[1, 2], [1, 3], [4, 6]], columns=['A', 'B']) ``` or make it \"copy and pasteable\" using pd.read_clipboard(sep=r'\\s\\s+'). ``` In [2]: df Out[2]: A B 0 1 2 1 1 3 2 4 6 ``` Test it yourself to make sure it works and reproduces the issue. You can format the text for Stack Overflow by highlighting and using Ctrl+K (or prepend four spaces to each line), or place three backticks (```) above and below your code with your code unindented. I really do mean small. The vast majority of example DataFrames could be fewer than 6 rows,[citation needed] and I bet I can do it in 5. Can you reproduce the error with df = df.head()? If not, fiddle around to see if you can make up a small DataFrame which exhibits the issue you are facing. But every rule has an exception, the obvious one being for performance issues (in which case definitely use %timeit and possibly %prun to profile your code), where you should generate: ``` df = pd.DataFrame(np.random.randn(100000000, 10)) ``` Consider using np.random.seed so we have the exact same frame. 
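For example, a seeded random frame can be sketched like this (an illustration, not from the answer), so anyone who pastes it gets identical numbers:

```python
import numpy as np
import pandas as pd

np.random.seed(0)  # fixed seed: every run reproduces the same frame
df = pd.DataFrame(np.random.randn(5, 3), columns=['A', 'B', 'C'])
```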
Having said that, \"make this code fast for me\" is not strictly on topic for the site. For getting runnable code, df.to_dict is often useful, with the different orient options for different cases. In the example above, I could have grabbed the data and columns from df.to_dict('split'). Write out the outcome you desire (similarly to above) ``` In [3]: iwantthis Out[3]: A B 0 1 5 1 4 6 ``` Explain where the numbers come from: The 5 is the sum of the B column for the rows where A is 1. Do show the code you've tried: ``` In [4]: df.groupby('A').sum() Out[4]: B A 1 5 4 6 ``` But say what's incorrect: The A column is in the index rather than a column. Do show you've done some research (search the documentation, search Stack Overflow), and give a summary: The docstring for sum simply states \"Compute sum of group values\" The groupby documentation doesn't give any examples for this. Aside: the answer here is to use df.groupby('A', as_index=False).sum(). If it's relevant that you have Timestamp columns, e.g. you're resampling or something, then be explicit and apply pd.to_datetime to them for good measure. ``` df['date'] = pd.to_datetime(df['date']) # this column ought to be date. ``` Sometimes this is the issue itself: they were strings. The Bad: Don't include a MultiIndex, which we can't copy and paste (see above). This is kind of a grievance with Pandas' default display, but nonetheless annoying: ``` In [11]: df Out[11]: C A B 1 2 3 2 6 ``` The correct way is to include an ordinary DataFrame with a set_index call: ``` In [12]: df = pd.DataFrame([[1, 2, 3], [1, 2, 6]], columns=['A', 'B', 'C']) In [13]: df = df.set_index(['A', 'B']) In [14]: df Out[14]: C A B 1 2 3 2 6 ``` Do provide insight to what it is when giving the outcome you want: ``` B A 1 1 5 0 ``` Be specific about how you got the numbers (what are they)... double check they're correct. If your code throws an error, do include the entire stack trace. This can be edited out later if it's too noisy. 
Show the line number and the corresponding line of your code which it's raising against. Pandas 2.0 introduced a number of changes, and Pandas 1.0 before that, so if you're getting unexpected output, include the version: ``` pd.__version__ ``` On that note, you might also want to include the version of Python, your OS, and any other libraries. You could use pd.show_versions() or the session_info package (which shows loaded libraries and Jupyter\/IPython environment). The Ugly: Don't link to a CSV file we don't have access to (and ideally don't link to an external source at all). ``` df = pd.read_csv('my_secret_file.csv') # ideally with lots of parsing options ``` Most data is proprietary, we get that. Make up similar data and see if you can reproduce the problem (something small). Don't explain the situation vaguely in words, like you have a DataFrame which is \"large\", mention some of the column names in passing (be sure not to mention their dtypes). Try and go into lots of detail about something which is completely meaningless without seeing the actual context. Presumably no one is even going to read to the end of this paragraph. Essays are bad; it's easier with small examples. Don't include 10+ (100+??) lines of data munging before getting to your actual question. Please, we see enough of this in our day jobs. We want to help, but not like this.... Cut the intro, and just show the relevant DataFrames (or small versions of them) in the step which is causing you trouble.", "best_answers_score":0.8, "library_name":"pandas", "question_url":"https:\/\/stackoverflow.com\/questions\/20109391\/how-to-make-good-reproducible-pandas-examples", "best_answers_votes":498, "question_length":1222, "response_length":4832 }, { "question":"Can pandas automatically read dates from a CSV file? 
Today I was positively surprised by the fact that while reading data from a data file (for example) pandas is able to recognize types of values: ``` df = pandas.read_csv('test.dat', delimiter=r\"\\s+\", names=['col1','col2','col3']) ``` For example it can be checked in this way: ``` for i, r in df.iterrows(): print type(r['col1']), type(r['col2']), type(r['col3']) ``` In particular integer, floats and strings were recognized correctly. However, I have a column that has dates in the following format: 2013-6-4. These dates were recognized as strings (not as python date-objects). Is there a way to \"learn\" pandas to recognized dates?", "response":"You should add parse_dates=True, or parse_dates=['column name'] when reading, thats usually enough to magically parse it. But there are always weird formats which need to be defined manually. In such a case you can also add a date parser function, which is the most flexible way possible. Suppose you have a column 'datetime' with your string, then: ``` from datetime import datetime dateparse = lambda x: datetime.strptime(x, '%Y-%m-%d %H:%M:%S') df = pd.read_csv(infile, parse_dates=['datetime'], date_parser=dateparse) ``` This way you can even combine multiple columns into a single datetime column, this merges a 'date' and a 'time' column into a single 'datetime' column: ``` dateparse = lambda x: datetime.strptime(x, '%Y-%m-%d %H:%M:%S') df = pd.read_csv(infile, parse_dates={'datetime': ['date', 'time']}, date_parser=dateparse) ``` You can find directives (i.e. 
the letters to be used for different formats) for strptime and strftime in this page.", "best_answers_score":0.8, "library_name":"pandas", "question_url":"https:\/\/stackoverflow.com\/questions\/17465045\/can-pandas-automatically-read-dates-from-a-csv-file", "best_answers_votes":503, "question_length":687, "response_length":957 }, { "question":"Ignoring NaNs with str.contains I want to find rows that contain a string, like so: ``` DF[DF.col.str.contains(\"foo\")] ``` However, this fails because some elements are NaN: ValueError: cannot index with vector containing NA \/ NaN values So I resort to the obfuscated ``` DF[DF.col.notnull()][DF.col.dropna().str.contains(\"foo\")] ``` Is there a better way?", "response":"There's a flag for that: ``` In [11]: df = pd.DataFrame([[\"foo1\"], [\"foo2\"], [\"bar\"], [np.nan]], columns=['a']) In [12]: df.a.str.contains(\"foo\") Out[12]: 0 True 1 True 2 False 3 NaN Name: a, dtype: object In [13]: df.a.str.contains(\"foo\", na=False) Out[13]: 0 True 1 True 2 False 3 False Name: a, dtype: bool ``` See the str.replace docs: na : default NaN, fill value for missing values. So you can do the following: ``` In [21]: df.loc[df.a.str.contains(\"foo\", na=False)] Out[21]: a 0 foo1 1 foo2 ```", "best_answers_score":0.8, "library_name":"pandas", "question_url":"https:\/\/stackoverflow.com\/questions\/28311655\/ignoring-nans-with-str-contains", "best_answers_votes":444, "question_length":356, "response_length":502 }, { "question":"Return multiple columns from pandas apply() I have a pandas DataFrame, df_test. It contains a column 'size' which represents size in bytes. 
I've calculated KB, MB, and GB using the following code: ``` df_test = pd.DataFrame([ {'dir': '\/Users\/uname1', 'size': 994933}, {'dir': '\/Users\/uname2', 'size': 109338711}, ]) df_test['size_kb'] = df_test['size'].astype(int).apply(lambda x: locale.format(\"%.1f\", x \/ 1024.0, grouping=True) + ' KB') df_test['size_mb'] = df_test['size'].astype(int).apply(lambda x: locale.format(\"%.1f\", x \/ 1024.0 ** 2, grouping=True) + ' MB') df_test['size_gb'] = df_test['size'].astype(int).apply(lambda x: locale.format(\"%.1f\", x \/ 1024.0 ** 3, grouping=True) + ' GB') df_test dir size size_kb size_mb size_gb 0 \/Users\/uname1 994933 971.6 KB 0.9 MB 0.0 GB 1 \/Users\/uname2 109338711 106,776.1 KB 104.3 MB 0.1 GB [2 rows x 5 columns] ``` I've run this over 120,000 rows and time it takes about 2.97 seconds per column * 3 = ~9 seconds according to %timeit. Is there anyway I can make this faster? For example, can I instead of returning one column at a time from apply and running it 3 times, can I return all three columns in one pass to insert back into the original dataframe? The other questions I've found all want to take multiple values and return a single value. I want to take a single value and return multiple columns.", "response":"You can return a Series from the applied function that contains the new data, preventing the need to iterate three times. Passing axis=1 to the apply function applies the function sizes to each row of the dataframe, returning a series to add to a new dataframe. This series, s, contains the new values, as well as the original data. 
``` def sizes(s): s['size_kb'] = locale.format(\"%.1f\", s['size'] \/ 1024.0, grouping=True) + ' KB' s['size_mb'] = locale.format(\"%.1f\", s['size'] \/ 1024.0 ** 2, grouping=True) + ' MB' s['size_gb'] = locale.format(\"%.1f\", s['size'] \/ 1024.0 ** 3, grouping=True) + ' GB' return s df_test = df_test.append(rows_list) df_test = df_test.apply(sizes, axis=1) ```", "best_answers_score":0.8, "library_name":"pandas", "question_url":"https:\/\/stackoverflow.com\/questions\/23586510\/return-multiple-columns-from-pandas-apply", "best_answers_votes":256, "question_length":1353, "response_length":688 }, { "question":"When should I (not) want to use pandas apply() in my code? I have seen many answers posted to questions on Stack Overflow involving the use of the Pandas method apply. I have also seen users commenting under them saying that \"apply is slow, and should be avoided\". I have read many articles on the topic of performance that explain apply is slow. I have also seen a disclaimer in the docs about how apply is simply a convenience function for passing UDFs (can't seem to find that now). So, the general consensus is that apply should be avoided if possible. However, this raises the following questions: If apply is so bad, then why is it in the API? How and when should I make my code apply-free? Are there ever any situations where apply is good (better than other possible solutions)?", "response":"apply, the Convenience Function you Never Needed We start by addressing the questions in the OP, one by one. \"If apply is so bad, then why is it in the API?\" DataFrame.apply and Series.apply are convenience functions defined on DataFrame and Series object respectively. apply accepts any user defined function that applies a transformation\/aggregation on a DataFrame. apply is effectively a silver bullet that does whatever any existing pandas function cannot do. 
Some of the things apply can do: Run any user-defined function on a DataFrame or Series Apply a function either row-wise (axis=1) or column-wise (axis=0) on a DataFrame Perform index alignment while applying the function Perform aggregation with user-defined functions (however, we usually prefer agg or transform in these cases) Perform element-wise transformations Broadcast aggregated results to original rows (see the result_type argument). Accept positional\/keyword arguments to pass to the user-defined functions. ...Among others. For more information, see Row or Column-wise Function Application in the documentation. So, with all these features, why is apply bad? It is because apply is slow. Pandas makes no assumptions about the nature of your function, and so iteratively applies your function to each row\/column as necessary. Additionally, handling all of the situations above means apply incurs some major overhead at each iteration. Further, apply consumes a lot more memory, which is a challenge for memory bounded applications. There are very few situations where apply is appropriate to use (more on that below). If you're not sure whether you should be using apply, you probably shouldn't. pandas 2.2 update: apply now supports engine='numba' More info in the release notes as well as GH54666 Choose between the python (default) engine or the numba engine in apply. The numba engine will attempt to JIT compile the passed function, which may result in speedups for large DataFrames. It also supports the following engine_kwargs : nopython (compile the function in nopython mode) nogil (release the GIL inside the JIT compiled function) parallel (try to apply the function in parallel over the DataFrame) Note: Due to limitations within numba\/how pandas interfaces with numba, you should only use this if raw=True Let's address the next question. 
\"How and when should I make my code apply-free?\" To rephrase, here are some common situations where you will want to get rid of any calls to apply. Numeric Data If you're working with numeric data, there is likely already a vectorized cython function that does exactly what you're trying to do (if not, please either ask a question on Stack Overflow or open a feature request on GitHub). Contrast the performance of apply for a simple addition operation. ``` df = pd.DataFrame({\"A\": [9, 4, 2, 1], \"B\": [12, 7, 5, 4]}) df A B 0 9 12 1 4 7 2 2 5 3 1 4 ``` ``` df.apply(np.sum) A 16 B 28 dtype: int64 df.sum() A 16 B 28 dtype: int64 ``` Performance wise, there's no comparison, the cythonized equivalent is much faster. There's no need for a graph, because the difference is obvious even for toy data. ``` %timeit df.apply(np.sum) %timeit df.sum() 2.22 ms \u00b1 41.2 \u00b5s per loop (mean \u00b1 std. dev. of 7 runs, 100 loops each) 471 \u00b5s \u00b1 8.16 \u00b5s per loop (mean \u00b1 std. dev. of 7 runs, 1000 loops each) ``` Even if you enable passing raw arrays with the raw argument, it's still twice as slow. ``` %timeit df.apply(np.sum, raw=True) 840 \u00b5s \u00b1 691 \u00b5s per loop (mean \u00b1 std. dev. of 7 runs, 100 loops each) ``` Another example: ``` df.apply(lambda x: x.max() - x.min()) A 8 B 8 dtype: int64 df.max() - df.min() A 8 B 8 dtype: int64 %timeit df.apply(lambda x: x.max() - x.min()) %timeit df.max() - df.min() 2.43 ms \u00b1 450 \u00b5s per loop (mean \u00b1 std. dev. of 7 runs, 100 loops each) 1.23 ms \u00b1 14.7 \u00b5s per loop (mean \u00b1 std. dev. of 7 runs, 1000 loops each) ``` In general, seek out vectorized alternatives if possible. String\/Regex Pandas provides \"vectorized\" string functions in most situations, but there are rare cases where those functions do not... \"apply\", so to speak. A common problem is to check whether a value in a column is present in another column of the same row. 
``` df = pd.DataFrame({ 'Name': ['mickey', 'donald', 'minnie'], 'Title': ['wonderland', \"welcome to donald's castle\", 'Minnie mouse clubhouse'], 'Value': [20, 10, 86]}) df Name Value Title 0 mickey 20 wonderland 1 donald 10 welcome to donald's castle 2 minnie 86 Minnie mouse clubhouse ``` This should return the row second and third row, since \"donald\" and \"minnie\" are present in their respective \"Title\" columns. Using apply, this would be done using ``` df.apply(lambda x: x['Name'].lower() in x['Title'].lower(), axis=1) 0 False 1 True 2 True dtype: bool df[df.apply(lambda x: x['Name'].lower() in x['Title'].lower(), axis=1)] Name Title Value 1 donald welcome to donald's castle 10 2 minnie Minnie mouse clubhouse 86 ``` However, a better solution exists using list comprehensions. ``` df[[y.lower() in x.lower() for x, y in zip(df['Title'], df['Name'])]] Name Title Value 1 donald welcome to donald's castle 10 2 minnie Minnie mouse clubhouse 86 ``` ``` %timeit df[df.apply(lambda x: x['Name'].lower() in x['Title'].lower(), axis=1)] %timeit df[[y.lower() in x.lower() for x, y in zip(df['Title'], df['Name'])]] 2.85 ms \u00b1 38.4 \u00b5s per loop (mean \u00b1 std. dev. of 7 runs, 100 loops each) 788 \u00b5s \u00b1 16.4 \u00b5s per loop (mean \u00b1 std. dev. of 7 runs, 1000 loops each) ``` The thing to note here is that iterative routines happen to be faster than apply, because of the lower overhead. If you need to handle NaNs and invalid dtypes, you can build on this using a custom function you can then call with arguments inside the list comprehension. For more information on when list comprehensions should be considered a good option, see my writeup: Are for-loops in pandas really bad? When should I care?. Note Date and datetime operations also have vectorized versions. So, for example, you should prefer pd.to_datetime(df['date']), over, say, df['date'].apply(pd.to_datetime). Read more at the docs. 
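A quick sketch of that datetime note (illustrative only): both lines below produce the same datetime64 column, but the first parses the whole column in one vectorized call while the second invokes the parser once per element:

```python
import pandas as pd

df = pd.DataFrame({'date': ['2018-12-31', '2019-01-31']})

vectorized = pd.to_datetime(df['date'])          # one call, whole column
elementwise = df['date'].apply(pd.to_datetime)   # one call per row
```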
A Common Pitfall: Exploding Columns of Lists ``` s = pd.Series([[1, 2]] * 3) s 0 [1, 2] 1 [1, 2] 2 [1, 2] dtype: object ``` People are tempted to use apply(pd.Series). This is horrible in terms of performance. ``` s.apply(pd.Series) 0 1 0 1 2 1 1 2 2 1 2 ``` A better option is to listify the column and pass it to pd.DataFrame. ``` pd.DataFrame(s.tolist()) 0 1 0 1 2 1 1 2 2 1 2 ``` ``` %timeit s.apply(pd.Series) %timeit pd.DataFrame(s.tolist()) 2.65 ms \u00b1 294 \u00b5s per loop (mean \u00b1 std. dev. of 7 runs, 100 loops each) 816 \u00b5s \u00b1 40.5 \u00b5s per loop (mean \u00b1 std. dev. of 7 runs, 1000 loops each) ``` Lastly, \"Are there any situations where apply is good?\" Apply is a convenience function, so there are situations where the overhead is negligible enough to forgive. It really depends on how many times the function is called. Functions that are Vectorized for Series, but not DataFrames What if you want to apply a string operation on multiple columns? What if you want to convert multiple columns to datetime? These functions are vectorized for Series only, so they must be applied over each column that you want to convert\/operate on. ``` df = pd.DataFrame( pd.date_range('2018-12-31','2019-01-31', freq='2D').date.astype(str).reshape(-1, 2), columns=['date1', 'date2']) df date1 date2 0 2018-12-31 2019-01-02 1 2019-01-04 2019-01-06 2 2019-01-08 2019-01-10 3 2019-01-12 2019-01-14 4 2019-01-16 2019-01-18 5 2019-01-20 2019-01-22 6 2019-01-24 2019-01-26 7 2019-01-28 2019-01-30 df.dtypes date1 object date2 object dtype: object ``` This is an admissible case for apply: ``` df.apply(pd.to_datetime, errors='coerce').dtypes date1 datetime64[ns] date2 datetime64[ns] dtype: object ``` Note that it would also make sense to stack, or just use an explicit loop. All these options are slightly faster than using apply, but the difference is small enough to forgive. 
``` %timeit df.apply(pd.to_datetime, errors='coerce') %timeit pd.to_datetime(df.stack(), errors='coerce').unstack() %timeit pd.concat([pd.to_datetime(df[c], errors='coerce') for c in df], axis=1) %timeit for c in df.columns: df[c] = pd.to_datetime(df[c], errors='coerce') 5.49 ms \u00b1 247 \u00b5s per loop (mean \u00b1 std. dev. of 7 runs, 100 loops each) 3.94 ms \u00b1 48.1 \u00b5s per loop (mean \u00b1 std. dev. of 7 runs, 100 loops each) 3.16 ms \u00b1 216 \u00b5s per loop (mean \u00b1 std. dev. of 7 runs, 100 loops each) 2.41 ms \u00b1 1.71 ms per loop (mean \u00b1 std. dev. of 7 runs, 1 loop each) ``` You can make a similar case for other operations such as string operations, or conversion to category. ``` u = df.apply(lambda x: x.str.contains(...)) v = df.apply(lambda x: x.astype(category)) ``` v\/s ``` u = pd.concat([df[c].str.contains(...) for c in df], axis=1) v = df.copy() for c in df: v[c] = df[c].astype(category) ``` And so on... Converting Series to str: astype versus apply This seems like an idiosyncrasy of the API. Using apply to convert integers in a Series to string is comparable (and sometimes faster) than using astype. The graph was plotted using the perfplot library. ``` import perfplot perfplot.show( setup=lambda n: pd.Series(np.random.randint(0, n, n)), kernels=[ lambda s: s.astype(str), lambda s: s.apply(str) ], labels=['astype', 'apply'], n_range=[2**k for k in range(1, 20)], xlabel='N', logx=True, logy=True, equality_check=lambda x, y: (x == y).all()) ``` With floats, I see the astype is consistently as fast as, or slightly faster than apply. So this has to do with the fact that the data in the test is integer type. GroupBy operations with chained transformations GroupBy.apply has not been discussed until now, but GroupBy.apply is also an iterative convenience function to handle anything that the existing GroupBy functions do not. 
One common requirement is to perform a GroupBy and then two prime operations such as a \"lagged cumsum\": ``` df = pd.DataFrame({\"A\": list('aabcccddee'), \"B\": [12, 7, 5, 4, 5, 4, 3, 2, 1, 10]}) df A B 0 a 12 1 a 7 2 b 5 3 c 4 4 c 5 5 c 4 6 d 3 7 d 2 8 e 1 9 e 10 ``` You'd need two successive groupby calls here: ``` df.groupby('A').B.cumsum().groupby(df.A).shift() 0 NaN 1 12.0 2 NaN 3 NaN 4 4.0 5 9.0 6 NaN 7 3.0 8 NaN 9 1.0 Name: B, dtype: float64 ``` Using apply, you can shorten this to a a single call. ``` df.groupby('A').B.apply(lambda x: x.cumsum().shift()) 0 NaN 1 12.0 2 NaN 3 NaN 4 4.0 5 9.0 6 NaN 7 3.0 8 NaN 9 1.0 Name: B, dtype: float64 ``` It is very hard to quantify the performance because it depends on the data. But in general, apply is an acceptable solution if the goal is to reduce a groupby call (because groupby is also quite expensive). Other Caveats Aside from the caveats mentioned above, it is also worth mentioning that apply operates on the first row (or column) twice. This is done to determine whether the function has any side effects. If not, apply may be able to use a fast-path for evaluating the result, else it falls back to a slow implementation. ``` df = pd.DataFrame({ 'A': [1, 2], 'B': ['x', 'y'] }) def func(x): print(x['A']) return x df.apply(func, axis=1) # 1 # 1 # 2 A B 0 1 x 1 2 y ``` This behaviour is also seen in GroupBy.apply on pandas versions <0.25 (it was fixed for 0.25, see here for more information.)", "best_answers_score":0.8, "library_name":"pandas", "question_url":"https:\/\/stackoverflow.com\/questions\/54432583\/when-should-i-not-want-to-use-pandas-apply-in-my-code", "best_answers_votes":269, "question_length":786, "response_length":11224 }, { "question":"NumPy or Pandas: Keeping array type as integer while having a NaN value Is there a preferred way to keep the data type of a numpy array fixed as int (or int64 or whatever), while still having an element inside listed as numpy.NaN? 
In particular, I am converting an in-house data structure to a Pandas DataFrame. In our structure, we have integer-type columns that still have NaN's (but the dtype of the column is int). It seems to recast everything as a float if we make this a DataFrame, but we'd really like to be int. Thoughts? Things tried: I tried using the from_records() function under pandas.DataFrame, with coerce_float=False and this did not help. I also tried using NumPy masked arrays, with NaN fill_value, which also did not work. All of these caused the column data type to become a float.", "response":"NaN can't be stored in an integer array. This is a known limitation of pandas at the moment; I have been waiting for progress to be made with NA values in NumPy (similar to NAs in R), but it will be at least 6 months to a year before NumPy gets these features, it seems: http:\/\/pandas.pydata.org\/pandas-docs\/stable\/gotchas.html#support-for-integer-na (This feature has been added beginning with version 0.24 of pandas, but note it requires the use of extension dtype Int64 (capitalized), rather than the default dtype int64 (lower case): https:\/\/pandas.pydata.org\/pandas-docs\/version\/0.24\/whatsnew\/v0.24.0.html#optional-integer-na-support )", "best_answers_score":0.8, "library_name":"pandas", "question_url":"https:\/\/stackoverflow.com\/questions\/11548005\/numpy-or-pandas-keeping-array-type-as-integer-while-having-a-nan-value", "best_answers_votes":128, "question_length":803, "response_length":640 }, { "question":"Extract column value based on another column in Pandas I am kind of getting stuck on extracting value of one variable conditioning on another variable. For example, the following dataframe: ``` A B p1 1 p1 2 p3 3 p2 4 ``` How can I get the value of A when B=3? 
Every time I extracted the value of A, I got an object, not a string.", "response":"You could use loc to get a Series satisfying your condition and then iloc to get the first element: ``` In [2]: df Out[2]: A B 0 p1 1 1 p1 2 2 p3 3 3 p2 4 In [3]: df.loc[df['B'] == 3, 'A'] Out[3]: 2 p3 Name: A, dtype: object In [4]: df.loc[df['B'] == 3, 'A'].iloc[0] Out[4]: 'p3' ```", "best_answers_score":0.8, "library_name":"pandas", "question_url":"https:\/\/stackoverflow.com\/questions\/36684013\/extract-column-value-based-on-another-column-in-pandas", "best_answers_votes":395, "question_length":335, "response_length":283 }, { "question":"How to plot multiple dataframes in subplots I have a few Pandas DataFrames sharing the same value scale, but having different columns and indices. When invoking df.plot(), I get separate plot images. What I really want is to have them all in the same plot as subplots, but I'm unfortunately failing to come up with a solution and would highly appreciate some help.", "response":"You can manually create the subplots with matplotlib, and then plot the dataframes on a specific subplot using the ax keyword. For example for 4 subplots (2x2): ``` import matplotlib.pyplot as plt fig, axes = plt.subplots(nrows=2, ncols=2) df1.plot(ax=axes[0,0]) df2.plot(ax=axes[0,1]) ... ``` Here axes is an array which holds the different subplot axes, and you can access one just by indexing axes. If you want a shared x-axis, then you can provide sharex=True to plt.subplots.", "best_answers_score":0.8, "library_name":"pandas", "question_url":"https:\/\/stackoverflow.com\/questions\/22483588\/how-to-plot-multiple-dataframes-in-subplots", "best_answers_votes":414, "question_length":371, "response_length":480 }, { "question":"How to load a tsv file into a Pandas DataFrame? I'm trying to get a TAB-delimited (tsv) file loaded into a pandas DataFrame.
This is what I'm trying and the error I'm getting: ``` >>> df1 = DataFrame(csv.reader(open('c:\/~\/trainSetRel3.txt'), delimiter='\\t')) Traceback (most recent call last): File \"\", line 1, in df1 = DataFrame(csv.reader(open('c:\/~\/trainSetRel3.txt'), delimiter='\\t')) File \"C:\\Python27\\lib\\site-packages\\pandas\\core\\frame.py\", line 318, in __init__ raise PandasError('DataFrame constructor not properly called!') PandasError: DataFrame constructor not properly called! ```", "response":"The .read_csv function does what you want: ``` pd.read_csv('c:\/~\/trainSetRel3.txt', sep='\\t') ``` If you have a header, you can pass header=0. ``` pd.read_csv('c:\/~\/trainSetRel3.txt', sep='\\t', header=0) ``` Note: Prior 17.0, pd.DataFrame.from_csv was used (it is now deprecated and the .from_csv documentation link redirects to the page for pd.read_csv).", "best_answers_score":0.8, "library_name":"pandas", "question_url":"https:\/\/stackoverflow.com\/questions\/9652832\/how-to-load-a-tsv-file-into-a-pandas-dataframe", "best_answers_votes":294, "question_length":594, "response_length":355 }, { "question":"pandas resample documentation So I completely understand how to use resample, but the documentation does not do a good job explaining the options. So most options in the resample function are pretty straight forward except for these two: rule : the offset string or object representing target conversion how : string, method for down- or re-sampling, default to \u2018mean\u2019 So from looking at as many examples as I found online I can see for rule you can do 'D' for day, 'xMin' for minutes, 'xL' for milliseconds, but that is all I could find. for how I have seen the following: 'first', np.max, 'last', 'mean', and 'n1n2n3n4...nx' where nx is the first letter of each column index. So is there somewhere in the documentation that I am missing that displays every option for pandas.resample's rule and how inputs? If yes, where because I could not find it. 
If no, what are all the options for them?", "response":"``` B business day frequency C custom business day frequency (experimental) D calendar day frequency W weekly frequency M month end frequency SM semi-month end frequency (15th and end of month) BM business month end frequency CBM custom business month end frequency MS month start frequency SMS semi-month start frequency (1st and 15th) BMS business month start frequency CBMS custom business month start frequency Q quarter end frequency BQ business quarter endfrequency QS quarter start frequency BQS business quarter start frequency A year end frequency BA, BY business year end frequency AS, YS year start frequency BAS, BYS business year start frequency BH business hour frequency H hourly frequency T, min minutely frequency S secondly frequency L, ms milliseconds U, us microseconds N nanoseconds ``` See the timeseries documentation. It includes a list of offsets (and 'anchored' offsets), and a section about resampling. Note that there isn't a list of all the different how options, because it can be any NumPy array function and any function that is available via groupby dispatching can be passed to how by name.", "best_answers_score":0.8, "library_name":"pandas", "question_url":"https:\/\/stackoverflow.com\/questions\/17001389\/pandas-resample-documentation", "best_answers_votes":412, "question_length":893, "response_length":1124 }, { "question":"pandas how to check dtype for all columns in a dataframe? It seems that dtype only work for pandas.DataFrame.Series, right? Is there a function to display data types of all columns at once?", "response":"The singular form dtype is used to check the data type for a single column. And the plural form dtypes is for data frame which returns data types for all columns. 
Essentially: For a single column: ``` dataframe.column.dtype ``` For all columns: ``` dataframe.dtypes ``` Example: ``` import pandas as pd df = pd.DataFrame({'A': [1,2,3], 'B': [True, False, False], 'C': ['a', 'b', 'c']}) df.A.dtype # dtype('int64') df.B.dtype # dtype('bool') df.C.dtype # dtype('O') df.dtypes #A int64 #B bool #C object #dtype: object ```", "best_answers_score":0.8, "library_name":"pandas", "question_url":"https:\/\/stackoverflow.com\/questions\/40353079\/pandas-how-to-check-dtype-for-all-columns-in-a-dataframe", "best_answers_votes":373, "question_length":189, "response_length":520 }, { "question":"Pandas join issue: columns overlap but no suffix specified I have the following data frames: ``` print(df_a) mukey DI PI 0 100000 35 14 1 1000005 44 14 2 1000006 44 14 3 1000007 43 13 4 1000008 43 13 print(df_b) mukey niccdcd 0 190236 4 1 190237 6 2 190238 7 3 190239 4 4 190240 7 ``` When I try to join these data frames: ``` join_df = df_a.join(df_b, on='mukey', how='left') ``` I get the error: ``` *** ValueError: columns overlap but no suffix specified: Index([u'mukey'], dtype='object') ``` Why is this so? 
The data frames do have common 'mukey' values.", "response":"Your error on the snippet of data you posted is a little cryptic: because the 'mukey' column exists in both frames, join requires you to supply a suffix for the left and right hand sides: ``` In [173]: df_a.join(df_b, on='mukey', how='left', lsuffix='_left', rsuffix='_right') Out[173]: mukey_left DI PI mukey_right niccdcd index 0 100000 35 14 NaN NaN 1 1000005 44 14 NaN NaN 2 1000006 44 14 NaN NaN 3 1000007 43 13 NaN NaN 4 1000008 43 13 NaN NaN ``` merge works because it doesn't have this restriction: ``` In [176]: df_a.merge(df_b, on='mukey', how='left') Out[176]: mukey DI PI niccdcd 0 100000 35 14 NaN 1 1000005 44 14 NaN 2 1000006 44 14 NaN 3 1000007 43 13 NaN 4 1000008 43 13 NaN ```", "best_answers_score":0.8, "library_name":"pandas", "question_url":"https:\/\/stackoverflow.com\/questions\/26645515\/pandas-join-issue-columns-overlap-but-no-suffix-specified", "best_answers_votes":252, "question_length":559, "response_length":743 }, { "question":"Get total of Pandas column I have a Pandas data frame, as shown below, with multiple columns and would like to get the total of column, MyColumn. ```none X MyColumn Y Z 0 A 84 13.0 69.0 1 B 76 77.0 127.0 2 C 28 69.0 16.0 3 D 28 28.0 31.0 4 E 19 20.0 85.0 5 F 84 193.0 70.0 ``` Expected Output I'd have expected the output to be the total of this column: 319.
Or alternatively, I would like df to be edited with a new row entitled TOTAL containing the total: ```none X MyColumn Y Z 0 A 84 13.0 69.0 1 B 76 77.0 127.0 2 C 28 69.0 16.0 3 D 28 28.0 31.0 4 E 19 20.0 85.0 5 F 84 193.0 70.0 TOTAL 319 ``` I have attempted to get the sum of the column using groupby and .sum(): ```py Total = df.groupby['MyColumn'].sum() ``` This causes the following error: ```none TypeError: 'instancemethod' object has no attribute '__getitem__' ```", "response":"You should use sum: ``` Total = df['MyColumn'].sum() print(Total) 319 ``` Then you use loc with Series, in that case the index should be set as the same as the specific column you need to sum: ``` df.loc['Total'] = pd.Series(df['MyColumn'].sum(), index=['MyColumn']) print(df) X MyColumn Y Z 0 A 84.0 13.0 69.0 1 B 76.0 77.0 127.0 2 C 28.0 69.0 16.0 3 D 28.0 28.0 31.0 4 E 19.0 20.0 85.0 5 F 84.0 193.0 70.0 Total NaN 319.0 NaN NaN ``` because if you pass scalar, the values of all rows will be filled: ``` df.loc['Total'] = df['MyColumn'].sum() print(df) X MyColumn Y Z 0 A 84 13.0 69.0 1 B 76 77.0 127.0 2 C 28 69.0 16.0 3 D 28 28.0 31.0 4 E 19 20.0 85.0 5 F 84 193.0 70.0 Total 319 319 319.0 319.0 ``` Two other solutions are with at, and ix see the applications below: ``` df.at['Total', 'MyColumn'] = df['MyColumn'].sum() print(df) X MyColumn Y Z 0 A 84.0 13.0 69.0 1 B 76.0 77.0 127.0 2 C 28.0 69.0 16.0 3 D 28.0 28.0 31.0 4 E 19.0 20.0 85.0 5 F 84.0 193.0 70.0 Total NaN 319.0 NaN NaN ``` ``` df.ix['Total', 'MyColumn'] = df['MyColumn'].sum() print(df) X MyColumn Y Z 0 A 84.0 13.0 69.0 1 B 76.0 77.0 127.0 2 C 28.0 69.0 16.0 3 D 28.0 28.0 31.0 4 E 19.0 20.0 85.0 5 F 84.0 193.0 70.0 Total NaN 319.0 NaN NaN ``` Note: Since Pandas v0.20, ix has been deprecated. 
Use loc or iloc instead.", "best_answers_score":0.8, "library_name":"pandas", "question_url":"https:\/\/stackoverflow.com\/questions\/41286569\/get-total-of-pandas-column", "best_answers_votes":412, "question_length":828, "response_length":1293 }, { "question":"Python Pandas merge only certain columns Is it possible to only merge some columns? I have a DataFrame df1 with columns x, y, z, and df2 with columns x, a ,b, c, d, e, f, etc. I want to merge the two DataFrames on x, but I only want to merge columns df2.a, df2.b - not the entire DataFrame. The result would be a DataFrame with x, y, z, a, b. I could merge then delete the unwanted columns, but it seems like there is a better method.", "response":"You want to use TWO brackets, so if you are doing a VLOOKUP sort of action: ``` df = pd.merge(df,df2[['Key_Column','Target_Column']],on='Key_Column', how='left') ``` This will give you everything in the original df + add that one corresponding column in df2 that you want to join.", "best_answers_score":0.8, "library_name":"pandas", "question_url":"https:\/\/stackoverflow.com\/questions\/17978133\/python-pandas-merge-only-certain-columns", "best_answers_votes":301, "question_length":434, "response_length":280 }, { "question":"How to read a .xlsx file using the pandas Library in iPython? I want to read a .xlsx file using the Pandas Library of python and port the data to a postgreSQL table. All I could do up until now is: ``` import pandas as pd data = pd.ExcelFile(\"*File Name*\") ``` Now I know that the step got executed successfully, but I want to know how i can parse the excel file that has been read so that I can understand how the data in the excel maps to the data in the variable data. I learnt that data is a Dataframe object if I'm not wrong. 
So how do I parse this DataFrame object to extract each row?", "response":"I usually create a dictionary containing a DataFrame for every sheet: ``` xl_file = pd.ExcelFile(file_name) dfs = {sheet_name: xl_file.parse(sheet_name) for sheet_name in xl_file.sheet_names} ``` Update: In pandas version 0.21.0+ you will get this behavior more cleanly by passing sheet_name=None to read_excel: ``` dfs = pd.read_excel(file_name, sheet_name=None) ``` In 0.20 and prior, this was sheetname rather than sheet_name (this is now deprecated in favor of the above): ``` dfs = pd.read_excel(file_name, sheetname=None) ```", "best_answers_score":0.8, "library_name":"pandas", "question_url":"https:\/\/stackoverflow.com\/questions\/16888888\/how-to-read-a-xlsx-file-using-the-pandas-library-in-ipython", "best_answers_votes":321, "question_length":603, "response_length":531 }, { "question":"What is the most efficient way of counting occurrences in pandas? I have a large (about 12M rows) DataFrame df: ``` df.columns = ['word','documents','frequency'] ``` The following ran in a timely fashion: ``` word_grouping = df[['word','frequency']].groupby('word') MaxFrequency_perWord = word_grouping[['frequency']].max().reset_index() MaxFrequency_perWord.columns = ['word','MaxFrequency'] ``` However, this is taking an unexpectedly long time to run: ``` Occurrences_of_Words = word_grouping[['word']].count().reset_index() ``` What am I doing wrong here? Is there a better way to count occurrences in a large DataFrame? ``` df.word.describe() ``` ran pretty well, so I really did not expect this Occurrences_of_Words DataFrame to take very long to build.", "response":"I think df['word'].value_counts() should serve. By skipping the groupby machinery, you'll save some time. I'm not sure why count should be much slower than max. Both take some time to avoid missing values. (Compare with size.)
In any case, value_counts has been specifically optimized to handle object type, like your words, so I doubt you'll do much better than that.", "best_answers_score":0.8, "library_name":"pandas", "question_url":"https:\/\/stackoverflow.com\/questions\/20076195\/what-is-the-most-efficient-way-of-counting-occurrences-in-pandas", "best_answers_votes":351, "question_length":759, "response_length":368 }, { "question":"Get list of pandas dataframe columns based on data type If I have a dataframe with the following columns: ``` 1. NAME object 2. On_Time object 3. On_Budget object 4. %actual_hr float64 5. Baseline Start Date datetime64[ns] 6. Forecast Start Date datetime64[ns] ``` I would like to be able to say: for this dataframe, give me a list of the columns which are of type 'object' or of type 'datetime'? I have a function which converts numbers ('float64') to two decimal places, and I would like to use this list of dataframe columns, of a particular type, and run it through this function to convert them all to 2dp. Maybe something like: ``` For c in col_list: if c.dtype = \"Something\" list[] List.append(c)? 
```", "response":"If you want a list of columns of a certain type, you can use groupby: ``` >>> df = pd.DataFrame([[1, 2.3456, 'c', 'd', 78]], columns=list(\"ABCDE\")) >>> df A B C D E 0 1 2.3456 c d 78 [1 rows x 5 columns] >>> df.dtypes A int64 B float64 C object D object E int64 dtype: object >>> g = df.columns.to_series().groupby(df.dtypes).groups >>> g {dtype('int64'): ['A', 'E'], dtype('float64'): ['B'], dtype('O'): ['C', 'D']} >>> {k.name: v for k, v in g.items()} {'object': ['C', 'D'], 'int64': ['A', 'E'], 'float64': ['B']} ```", "best_answers_score":0.8, "library_name":"pandas", "question_url":"https:\/\/stackoverflow.com\/questions\/22470690\/get-list-of-pandas-dataframe-columns-based-on-data-type", "best_answers_votes":354, "question_length":708, "response_length":520 }, { "question":"Replacing column values in a pandas DataFrame I'm trying to replace the values in one column of a dataframe. The column ('female') only contains the values 'female' and 'male'. I have tried the following: ``` w['female']['female']='1' w['female']['male']='0' ``` But receive the exact same copy of the previous results. I would ideally like to get some output which resembles the following loop element-wise. ``` if w['female'] =='female': w['female'] = '1'; else: w['female'] = '0'; ``` I've looked through the gotchas documentation (http:\/\/pandas.pydata.org\/pandas-docs\/stable\/gotchas.html) but cannot figure out why nothing happens. Any help will be appreciated.", "response":"If I understand right, you want something like this: ``` w['female'] = w['female'].map({'female': 1, 'male': 0}) ``` (Here I convert the values to numbers instead of strings containing numbers. You can convert them to \"1\" and \"0\", if you really want, but I'm not sure why you'd want that.) The reason your code doesn't work is because using ['female'] on a column (the second 'female' in your w['female']['female']) doesn't mean \"select rows where the value is 'female'\". 
It means to select rows where the index is 'female', of which there may not be any in your DataFrame.", "best_answers_score":0.8, "library_name":"pandas", "question_url":"https:\/\/stackoverflow.com\/questions\/23307301\/replacing-column-values-in-a-pandas-dataframe", "best_answers_votes":373, "question_length":665, "response_length":573 }, { "question":"Print very long string in Pandas dataframe I have a Pandas data frame containing a very long string: ```py df = pd.DataFrame({'one' : ['one', 'two', 'This is very long string very long string very long string veryvery long string']}) ``` When I print it, I see only part of the string. I tried: using print(df.iloc[2]). using to_html. using to_string. to increase column width by using Pandas display option. I also do not get how set_printoptions would solve this.", "response":"You can use options.display.max_colwidth to specify you want to see more in the default representation: ``` In [2]: df Out[2]: one 0 one 1 two 2 This is very long string very long string very... In [3]: pd.options.display.max_colwidth Out[3]: 50 In [4]: pd.options.display.max_colwidth = 100 In [5]: df Out[5]: one 0 one 1 two 2 This is very long string very long string very long string veryvery long string ``` And indeed, if you just want to inspect the one value, by accessing it (as a scalar, not as a row as df.iloc[2] does) you also see the full string: ``` In [7]: df.iloc[2,0] # or df.loc[2,'one'] Out[7]: 'This is very long string very long string very long string veryvery long string' ```", "best_answers_score":0.8, "library_name":"pandas", "question_url":"https:\/\/stackoverflow.com\/questions\/29902714\/print-very-long-string-in-pandas-dataframe", "best_answers_votes":293, "question_length":465, "response_length":700 }, { "question":"Naming returned columns in Pandas aggregate function? [duplicate] This question already has answers here: Multiple aggregations of the same column using pandas GroupBy.agg() (5 answers) Closed 6 years ago. 
I'm having trouble with Pandas' groupby functionality. I've read the documentation, but I can't seem to figure out how to apply aggregate functions to multiple columns and have custom names for those columns. This comes very close, but the data structure returned has nested column headings: ``` data.groupby(\"Country\").agg({ \"column1\": {\"foo\": sum()}, \"column2\": {\"mean\": np.mean, \"std\": np.std} }) ``` (i.e. I want to take the mean and std of column2, but return those columns as \"mean\" and \"std\") What am I missing?", "response":"For pandas >= 0.25 The functionality to name returned aggregate columns has been reintroduced in the master branch and is targeted for pandas 0.25. The new syntax is .agg(new_col_name=('col_name', 'agg_func')). Detailed example from the PR linked above: ``` In [2]: df = pd.DataFrame({'kind': ['cat', 'dog', 'cat', 'dog'], ...: 'height': [9.1, 6.0, 9.5, 34.0], ...: 'weight': [7.9, 7.5, 9.9, 198.0]}) ...: In [3]: df Out[3]: kind height weight 0 cat 9.1 7.9 1 dog 6.0 7.5 2 cat 9.5 9.9 3 dog 34.0 198.0 In [4]: df.groupby('kind').agg(min_height=('height', 'min'), max_weight=('weight', 'max')) Out[4]: min_height max_weight kind cat 9.1 9.9 dog 6.0 198.0 ``` It will also be possible to use multiple lambda expressions with this syntax and the two-step rename syntax I suggested earlier (below) as per this PR.
Again, copying from the example in the PR: ``` In [2]: df = pd.DataFrame({\"A\": ['a', 'a'], 'B': [1, 2], 'C': [3, 4]}) In [3]: df.groupby(\"A\").agg({'B': [lambda x: 0, lambda x: 1]}) Out[3]: B A a 0 1 ``` and then .rename(), or in one go: ``` In [4]: df.groupby(\"A\").agg(b=('B', lambda x: 0), c=('B', lambda x: 1)) Out[4]: b c A a 0 0 ``` For pandas < 0.25, a nested dict could be passed to name the resulting columns (this has since been deprecated): ``` >>> df.groupby('A').agg({'B': {'min': lambda x: x.min(), 'max': lambda x: x.max()}}) B max min A 1 2 0 2 4 3 ``` Multiple functions can also be passed to a single column as a list: ``` >>> df.groupby('A').agg({'B': [np.min, np.max]}) B amin amax A 1 0 2 2 3 4 ``` However, this does not work with lambda functions, since they are anonymous and all return <lambda>, which causes a name collision: ``` >>> df.groupby('A').agg({'B': [lambda x: x.min(), lambda x: x.max]}) SpecificationError: Function names must be unique, found multiple named <lambda> ``` To avoid the SpecificationError, named functions can be defined a priori instead of using lambda. Suitable function names also avoid calling .rename on the data frame afterwards. These functions can be passed with the same list syntax as above: ``` >>> def my_min(x): >>> return x.min() >>> def my_max(x): >>> return x.max() >>> df.groupby('A').agg({'B': [my_min, my_max]}) B my_min my_max A 1 0 2 2 3 4 ```", "best_answers_score":0.8, "library_name":"pandas", "question_url":"https:\/\/stackoverflow.com\/questions\/19078325\/naming-returned-columns-in-pandas-aggregate-function", "best_answers_votes":382, "question_length":722, "response_length":2104 }, { "question":"Convert unix time to readable date in pandas dataframe I have a dataframe with unix times and prices in it. I want to convert the index column so that it shows in human readable dates. So for instance I have date as 1349633705 in the index column but I'd want it to show as 10\/07\/2012 (or at least 10\/07\/2012 18:15).
For some context, here is the code I'm working with and what I've tried already: ``` import json import urllib2 from datetime import datetime response = urllib2.urlopen('http:\/\/blockchain.info\/charts\/market-price?&format=json') data = json.load(response) df = DataFrame(data['values']) df.columns = [\"date\",\"price\"] #convert dates df.date = df.date.apply(lambda d: datetime.strptime(d, \"%Y-%m-%d\")) df.index = df.date ``` As you can see I'm using df.date = df.date.apply(lambda d: datetime.strptime(d, \"%Y-%m-%d\")) here which doesn't work since I'm working with integers, not strings. I think I need to use datetime.date.fromtimestamp but I'm not quite sure how to apply this to the whole of df.date. Thanks.", "response":"These appear to be seconds since epoch. ``` In [20]: df = DataFrame(data['values']) In [21]: df.columns = [\"date\",\"price\"] In [22]: df Out[22]: Int64Index: 358 entries, 0 to 357 Data columns (total 2 columns): date 358 non-null values price 358 non-null values dtypes: float64(1), int64(1) In [23]: df.head() Out[23]: date price 0 1349720105 12.08 1 1349806505 12.35 2 1349892905 12.15 3 1349979305 12.19 4 1350065705 12.15 In [25]: df['date'] = pd.to_datetime(df['date'],unit='s') In [26]: df.head() Out[26]: date price 0 2012-10-08 18:15:05 12.08 1 2012-10-09 18:15:05 12.35 2 2012-10-10 18:15:05 12.15 3 2012-10-11 18:15:05 12.19 4 2012-10-12 18:15:05 12.15 In [27]: df.dtypes Out[27]: date datetime64[ns] price float64 dtype: object ```", "best_answers_score":0.8, "library_name":"pandas", "question_url":"https:\/\/stackoverflow.com\/questions\/19231871\/convert-unix-time-to-readable-date-in-pandas-dataframe", "best_answers_votes":394, "question_length":1025, "response_length":741 }, { "question":"Pandas: sum DataFrame rows for given columns I have the following DataFrame: ``` In [1]: df = pd.DataFrame({'a': [1, 2, 3], 'b': [2, 3, 4], 'c': ['dd', 'ee', 'ff'], 'd': [5, 9, 1]}) df Out [1]: a b c d 0 1 2 dd 5 1 2 3 ee 9 2 3 4 ff 1 ``` I would like to 
add a column 'e' which is the sum of columns 'a', 'b' and 'd'. Going across forums, I thought something like this would work: ``` df['e'] = df[['a', 'b', 'd']].map(sum) ``` But it didn't. I would like to know the appropriate operation with the list of columns ['a', 'b', 'd'] and df as inputs.", "response":"You can just sum and set axis=1 to sum the rows, which will ignore non-numeric columns; from pandas 2.0+ you also need to specify numeric_only=True. ``` In [91]: df = pd.DataFrame({'a': [1,2,3], 'b': [2,3,4], 'c':['dd','ee','ff'], 'd':[5,9,1]}) df['e'] = df.sum(axis=1, numeric_only=True) df Out[91]: a b c d e 0 1 2 dd 5 8 1 2 3 ee 9 14 2 3 4 ff 1 8 ``` If you want to just sum specific columns then you can create a list of the columns and remove the ones you are not interested in: ``` In [98]: col_list= list(df) col_list.remove('d') col_list Out[98]: ['a', 'b', 'c'] In [99]: df['e'] = df[col_list].sum(axis=1) df Out[99]: a b c d e 0 1 2 dd 5 3 1 2 3 ee 9 5 2 3 4 ff 1 7 ``` sum docs", "best_answers_score":0.8, "library_name":"pandas", "question_url":"https:\/\/stackoverflow.com\/questions\/25748683\/pandas-sum-dataframe-rows-for-given-columns", "best_answers_votes":402, "question_length":548, "response_length":689 }, { "question":"How to unnest (explode) a column in a pandas DataFrame, into multiple rows I have the following DataFrame where one of the columns is an object (list type cell): ``` df = pd.DataFrame({'A': [1, 2], 'B': [[1, 2], [1, 2]]}) ``` Output: ``` A B 0 1 [1, 2] 1 2 [1, 2] ``` My expected output is: ``` A B 0 1 1 1 1 2 3 2 1 4 2 2 ``` What should I do to achieve this? 
Related question: Pandas column of lists, create a row for each list element. Good question and answer, but it only handles one column with a list (in my answer the self-defined function will work for multiple columns; also, the accepted answer uses the most time-consuming apply, which is not recommended; for more info check When should I (not) want to use pandas apply() in my code?)", "response":"I know object dtype columns make the data hard to convert with pandas functions. When I receive data like this, the first thing that came to mind was to \"flatten\" or unnest the columns. I am using pandas and Python functions for this type of question. If you are worried about the speed of the above solutions, check out user3483203's answer, since it's using numpy and most of the time numpy is faster. I recommend Cython or numba if speed matters. Method 0 [pandas >= 0.25] Starting from pandas 0.25, if you only need to explode one column, you can use the pandas.DataFrame.explode function: ``` df.explode('B') A B 0 1 1 1 1 2 0 2 1 1 2 2 ``` Given a dataframe with an empty list or a NaN in the column: an empty list will not cause an issue, but a NaN will need to be filled with a list ``` df = pd.DataFrame({'A': [1, 2, 3, 4],'B': [[1, 2], [1, 2], [], np.nan]}) df.B = df.B.fillna({i: [] for i in df.index}) # replace NaN with [] df.explode('B') A B 0 1 1 0 1 2 1 2 1 1 2 2 2 3 NaN 3 4 NaN ``` Method 1 apply + pd.Series (easy to understand, but not recommended in terms of performance) ``` df.set_index('A').B.apply(pd.Series).stack().reset_index(level=0).rename(columns={0:'B'}) Out[463]: A B 0 1 1 1 1 2 0 2 1 1 2 2 ``` Method 2 Using repeat with the DataFrame constructor to re-create your dataframe (good performance, but not good at multiple columns) ``` df=pd.DataFrame({'A':df.A.repeat(df.B.str.len()),'B':np.concatenate(df.B.values)}) df Out[465]: A B 0 1 1 0 1 2 1 2 1 1 2 2 ``` Method 2.1 For example, besides A we have A.1 ... A.n.
If we still use the method(Method 2) above it is hard for us to re-create the columns one by one . Solution : join or merge with the index after 'unnest' the single columns ``` s=pd.DataFrame({'B':np.concatenate(df.B.values)},index=df.index.repeat(df.B.str.len())) s.join(df.drop('B',1),how='left') Out[477]: B A 0 1 1 0 2 1 1 1 2 1 2 2 ``` If you need the column order exactly the same as before, add reindex at the end. ``` s.join(df.drop('B',1),how='left').reindex(columns=df.columns) ``` Method 3 recreate the list ``` pd.DataFrame([[x] + [z] for x, y in df.values for z in y],columns=df.columns) Out[488]: A B 0 1 1 1 1 2 2 2 1 3 2 2 ``` If more than two columns, use ``` s=pd.DataFrame([[x] + [z] for x, y in zip(df.index,df.B) for z in y]) s.merge(df,left_on=0,right_index=True) Out[491]: 0 1 A B 0 0 1 1 [1, 2] 1 0 2 1 [1, 2] 2 1 1 2 [1, 2] 3 1 2 2 [1, 2] ``` Method 4 using reindex or loc ``` df.reindex(df.index.repeat(df.B.str.len())).assign(B=np.concatenate(df.B.values)) Out[554]: A B 0 1 1 0 1 2 1 2 1 1 2 2 #df.loc[df.index.repeat(df.B.str.len())].assign(B=np.concatenate(df.B.values)) ``` Method 5 when the list only contains unique values: ``` df=pd.DataFrame({'A':[1,2],'B':[[1,2],[3,4]]}) from collections import ChainMap d = dict(ChainMap(*map(dict.fromkeys, df['B'], df['A']))) pd.DataFrame(list(d.items()),columns=df.columns[::-1]) Out[574]: B A 0 1 1 1 2 1 2 3 2 3 4 2 ``` Method 6 using numpy for high performance: ``` newvalues=np.dstack((np.repeat(df.A.values,list(map(len,df.B.values))),np.concatenate(df.B.values))) pd.DataFrame(data=newvalues[0],columns=df.columns) A B 0 1 1 1 1 2 2 2 1 3 2 2 ``` Method 7 using base function itertools cycle and chain: Pure python solution just for fun ``` from itertools import cycle,chain l=df.values.tolist() l1=[list(zip([x[0]], cycle(x[1])) if len([x[0]]) > len(x[1]) else list(zip(cycle([x[0]]), x[1]))) for x in l] pd.DataFrame(list(chain.from_iterable(l1)),columns=df.columns) A B 0 1 1 1 1 2 2 2 1 3 2 2 ``` Generalizing to 
multiple columns ``` df=pd.DataFrame({'A':[1,2],'B':[[1,2],[3,4]],'C':[[1,2],[3,4]]}) df Out[592]: A B C 0 1 [1, 2] [1, 2] 1 2 [3, 4] [3, 4] ``` Self-def function: ``` def unnesting(df, explode): idx = df.index.repeat(df[explode[0]].str.len()) df1 = pd.concat([ pd.DataFrame({x: np.concatenate(df[x].values)}) for x in explode], axis=1) df1.index = idx return df1.join(df.drop(explode, 1), how='left') unnesting(df,['B','C']) Out[609]: B C A 0 1 1 1 0 2 2 1 1 3 3 2 1 4 4 2 ``` Column-wise Unnesting All above method is talking about the vertical unnesting and explode , If you do need expend the list horizontal, Check with pd.DataFrame constructor ``` df.join(pd.DataFrame(df.B.tolist(),index=df.index).add_prefix('B_')) Out[33]: A B C B_0 B_1 0 1 [1, 2] [1, 2] 1 2 1 2 [3, 4] [3, 4] 3 4 ``` Updated function ``` def unnesting(df, explode, axis): if axis==1: idx = df.index.repeat(df[explode[0]].str.len()) df1 = pd.concat([ pd.DataFrame({x: np.concatenate(df[x].values)}) for x in explode], axis=1) df1.index = idx return df1.join(df.drop(explode, 1), how='left') else : df1 = pd.concat([ pd.DataFrame(df[x].tolist(), index=df.index).add_prefix(x) for x in explode], axis=1) return df1.join(df.drop(explode, 1), how='left') ``` Test Output ``` unnesting(df, ['B','C'], axis=0) Out[36]: B0 B1 C0 C1 A 0 1 2 1 2 1 1 3 4 3 4 2 ``` Update 2021-02-17 with original explode function ``` def unnesting(df, explode, axis): if axis==1: df1 = pd.concat([df[x].explode() for x in explode], axis=1) return df1.join(df.drop(explode, 1), how='left') else : df1 = pd.concat([ pd.DataFrame(df[x].tolist(), index=df.index).add_prefix(x) for x in explode], axis=1) return df1.join(df.drop(explode, 1), how='left') ```", "best_answers_score":0.8, "library_name":"pandas", "question_url":"https:\/\/stackoverflow.com\/questions\/53218931\/how-to-unnest-explode-a-column-in-a-pandas-dataframe-into-multiple-rows", "best_answers_votes":340, "question_length":731, "response_length":5247 }, { "question":"How to access 
subdataframes of pandas groupby by key How do I access the corresponding groupby dataframe in a groupby object by the key? With the following groupby: ``` rand = np.random.RandomState(1) df = pd.DataFrame({'A': ['foo', 'bar'] * 3, 'B': rand.randn(6), 'C': rand.randint(0, 20, 6)}) gb = df.groupby(['A']) ``` I can iterate through it to get the keys and groups: ``` In [11]: for k, gp in gb: print 'key=' + str(k) print gp key=bar A B C 1 bar -0.611756 18 3 bar -1.072969 10 5 bar -2.301539 18 key=foo A B C 0 foo 1.624345 5 2 foo -0.528172 11 4 foo 0.865408 14 ``` I would like to be able to access a group by its key: ``` In [12]: gb['foo'] Out[12]: A B C 0 foo 1.624345 5 2 foo -0.528172 11 4 foo 0.865408 14 ``` But when I try doing that with gb[('foo',)] I get this weird pandas.core.groupby.DataFrameGroupBy object thing which doesn't seem to have any methods that correspond to the DataFrame I want. The best I could think of is: ``` In [13]: def gb_df_key(gb, key, orig_df): ix = gb.indices[key] return orig_df.ix[ix] gb_df_key(gb, 'foo', df) Out[13]: A B C 0 foo 1.624345 5 2 foo -0.528172 11 4 foo 0.865408 14 ``` but this is kind of nasty, considering how nice pandas usually is at these things. What's the built-in way of doing this?", "response":"You can use the get_group method: ``` In [21]: gb.get_group('foo') Out[21]: A B C 0 foo 1.624345 5 2 foo -0.528172 11 4 foo 0.865408 14 ``` Note: This doesn't require creating an intermediary dictionary \/ copy of every subdataframe for every group, so will be much more memory-efficient than creating the naive dictionary with dict(iter(gb)). This is because it uses data-structures already available in the groupby object. 
You can select different columns using the groupby slicing: ``` In [22]: gb[[\"A\", \"B\"]].get_group(\"foo\") Out[22]: A B 0 foo 1.624345 2 foo -0.528172 4 foo 0.865408 In [23]: gb[\"C\"].get_group(\"foo\") Out[23]: 0 5 2 11 4 14 Name: C, dtype: int64 ```", "best_answers_score":0.8, "library_name":"pandas", "question_url":"https:\/\/stackoverflow.com\/questions\/14734533\/how-to-access-subdataframes-of-pandas-groupby-by-key", "best_answers_votes":286, "question_length":1258, "response_length":670 }, { "question":"pandas unique values multiple columns ``` df = pd.DataFrame({'Col1': ['Bob', 'Joe', 'Bill', 'Mary', 'Joe'], 'Col2': ['Joe', 'Steve', 'Bob', 'Bob', 'Steve'], 'Col3': np.random.random(5)}) ``` What is the best way to return the unique values of 'Col1' and 'Col2'? The desired output is ``` 'Bob', 'Joe', 'Bill', 'Mary', 'Steve' ```", "response":"pd.unique returns the unique values from an input array, or DataFrame column or index. The input to this function needs to be one-dimensional, so multiple columns will need to be combined. The simplest way is to select the columns you want and then view the values in a flattened NumPy array. The whole operation looks like this: ``` >>> pd.unique(df[['Col1', 'Col2']].values.ravel('K')) array(['Bob', 'Joe', 'Bill', 'Mary', 'Steve'], dtype=object) ``` Note that ravel() is an array method that returns a view (if possible) of a multidimensional array. The argument 'K' tells the method to flatten the array in the order the elements are stored in the memory (pandas typically stores underlying arrays in Fortran-contiguous order; columns before rows). This can be significantly faster than using the method's default 'C' order. An alternative way is to select the columns and pass them to np.unique: ``` >>> np.unique(df[['Col1', 'Col2']].values) array(['Bill', 'Bob', 'Joe', 'Mary', 'Steve'], dtype=object) ``` There is no need to use ravel() here as the method handles multidimensional arrays. 
Even so, this is likely to be slower than pd.unique as it uses a sort-based algorithm rather than a hashtable to identify unique values. The difference in speed is significant for larger DataFrames (especially if there are only a handful of unique values): ``` >>> df1 = pd.concat([df]*100000, ignore_index=True) # DataFrame with 500000 rows >>> %timeit np.unique(df1[['Col1', 'Col2']].values) 1 loop, best of 3: 1.12 s per loop >>> %timeit pd.unique(df1[['Col1', 'Col2']].values.ravel('K')) 10 loops, best of 3: 38.9 ms per loop >>> %timeit pd.unique(df1[['Col1', 'Col2']].values.ravel()) # ravel using C order 10 loops, best of 3: 49.9 ms per loop ```", "best_answers_score":0.8, "library_name":"pandas", "question_url":"https:\/\/stackoverflow.com\/questions\/26977076\/pandas-unique-values-multiple-columns", "best_answers_votes":303, "question_length":329, "response_length":1750 }, { "question":"unique combinations of values in selected columns in pandas data frame and count I have my data in pandas data frame as follows: ``` df1 = pd.DataFrame({'A':['yes','yes','yes','yes','no','no','yes','yes','yes','no'], 'B':['yes','no','no','no','yes','yes','no','yes','yes','no']}) ``` So, my data looks like this ``` ---------------------------- index A B 0 yes yes 1 yes no 2 yes no 3 yes no 4 no yes 5 no yes 6 yes no 7 yes yes 8 yes yes 9 no no ----------------------------- ``` I would like to transform it to another data frame. 
The expected output can be shown in the following python script: ``` output = pd.DataFrame({'A':['no','no','yes','yes'],'B':['no','yes','no','yes'],'count':[1,2,4,3]}) ``` So, my expected output looks like this ``` -------------------------------------------- index A B count -------------------------------------------- 0 no no 1 1 no yes 2 2 yes no 4 3 yes yes 3 -------------------------------------------- ``` Actually, I can achieve to find all combinations and count them by using the following command: mytable = df1.groupby(['A','B']).size() However, it turns out that such combinations are in a single column. I would like to separate each value in a combination into different column and also add one more column for the result of counting. Is it possible to do that? May I have your suggestions? Thank you in advance.", "response":"You can groupby on cols 'A' and 'B' and call size and then reset_index and rename the generated column: ``` In [26]: df1.groupby(['A','B']).size().reset_index().rename(columns={0:'count'}) Out[26]: A B count 0 no no 1 1 no yes 2 2 yes no 4 3 yes yes 3 ``` update A little explanation, by grouping on the 2 columns, this groups rows where A and B values are the same, we call size which returns the number of unique groups: ``` In[202]: df1.groupby(['A','B']).size() Out[202]: A B no no 1 yes 2 yes no 4 yes 3 dtype: int64 ``` So now to restore the grouped columns, we call reset_index: ``` In[203]: df1.groupby(['A','B']).size().reset_index() Out[203]: A B 0 0 no no 1 1 no yes 2 2 yes no 4 3 yes yes 3 ``` This restores the indices but the size aggregation is turned into a generated column 0, so we have to rename this: ``` In[204]: df1.groupby(['A','B']).size().reset_index().rename(columns={0:'count'}) Out[204]: A B count 0 no no 1 1 no yes 2 2 yes no 4 3 yes yes 3 ``` groupby does accept the arg as_index which we could have set to False so it doesn't make the grouped columns the index, but this generates a series and you'd still have to restore 
the indices and so on....: ``` In[205]: df1.groupby(['A','B'], as_index=False).size() Out[205]: A B no no 1 yes 2 yes no 4 yes 3 dtype: int64 ```", "best_answers_score":0.8, "library_name":"pandas", "question_url":"https:\/\/stackoverflow.com\/questions\/35268817\/unique-combinations-of-values-in-selected-columns-in-pandas-data-frame-and-count", "best_answers_votes":332, "question_length":1361, "response_length":1300 }, { "question":"Python Pandas: How to read only first n rows of CSV files in? I have a very large data set and I can't afford to read the entire data set in. So, I'm thinking of reading only one chunk of it to train but I have no idea how to do it.", "response":"If you only want to read the first 999,999 (non-header) rows: ``` read_csv(..., nrows=999999) ``` If you only want to read rows 1,000,000 ... 1,999,999 ``` read_csv(..., skiprows=1000000, nrows=999999) ``` nrows : int, default None Number of rows of file to read. Useful for reading pieces of large files* skiprows : list-like or integer Row numbers to skip (0-indexed) or number of rows to skip (int) at the start of the file and for large files, you'll probably also want to use chunksize: chunksize : int, default None Return TextFileReader object for iteration pandas.io.parsers.read_csv documentation", "best_answers_score":0.8, "library_name":"pandas", "question_url":"https:\/\/stackoverflow.com\/questions\/23853553\/python-pandas-how-to-read-only-first-n-rows-of-csv-files-in", "best_answers_votes":359, "question_length":232, "response_length":605 }, { "question":"Replacing Header with Top Row I currently have a dataframe that looks like this: ``` Unnamed: 1 Unnamed: 2 Unnamed: 3 Unnamed: 4 0 Sample Number Group Number Sample Name Group Name 1 1.0 1.0 s_1 g_1 2 2.0 1.0 s_2 g_1 3 3.0 1.0 s_3 g_1 4 4.0 2.0 s_4 g_2 ``` I'm looking for a way to delete the header row and make the first row the new header row, so the new dataframe would look like this: ``` Sample Number Group Number Sample Name Group 
Name 0 1.0 1.0 s_1 g_1 1 2.0 1.0 s_2 g_1 2 3.0 1.0 s_3 g_1 3 4.0 2.0 s_4 g_2 ``` I've tried stuff along the lines of if 'Unnamed' in df.columns: then make the dataframe without the header ``` df.to_csv(newformat, header=False, index=False) ``` but I don't seem to be getting anywhere.", "response":"``` new_header = df.iloc[0] #grab the first row for the header df = df[1:] #take the data less the header row df.columns = new_header #set the header row as the df header ```", "best_answers_score":0.8, "library_name":"pandas", "question_url":"https:\/\/stackoverflow.com\/questions\/31328861\/replacing-header-with-top-row", "best_answers_votes":363, "question_length":723, "response_length":174 }, { "question":"Find the column name which has the maximum value for each row I have a DataFrame like this one: ```none Communications and Search Business General Lifestyle 0 0.745763 0.050847 0.118644 0.084746 0 0.333333 0.000000 0.583333 0.083333 0 0.617021 0.042553 0.297872 0.042553 0 0.435897 0.000000 0.410256 0.153846 0 0.358974 0.076923 0.410256 0.153846 ``` I want to create a new column comprised of the column labels of each row\u2019s maximum value. The desired output is like this: ```none Communications and Search Business General Lifestyle Max 0 0.745763 0.050847 0.118644 0.084746 Communications 0 0.333333 0.000000 0.583333 0.083333 Business 0 0.617021 0.042553 0.297872 0.042553 Communications 0 0.435897 0.000000 0.410256 0.153846 Communications 0 0.358974 0.076923 0.410256 0.153846 Business ```", "response":"You can use idxmax with axis=1 to find the column with the greatest value on each row: ``` >>> df.idxmax(axis=1) 0 Communications 1 Business 2 Communications 3 Communications 4 Business dtype: object ``` To create the new column 'Max', use df['Max'] = df.idxmax(axis=1). 
To find the row index at which the maximum value occurs in each column, use df.idxmax() (or equivalently df.idxmax(axis=0)).", "best_answers_score":0.8, "library_name":"pandas", "question_url":"https:\/\/stackoverflow.com\/questions\/29919306\/find-the-column-name-which-has-the-maximum-value-for-each-row", "best_answers_votes":310, "question_length":795, "response_length":395 }, { "question":"Removing index column in pandas when reading a csv I have the following code which imports a CSV file. There are 3 columns and I want to set the first two of them to variables. When I set the second column to the variable \"efficiency\" the index column is also tacked on. How can I get rid of the index column? ``` df = pd.DataFrame.from_csv('Efficiency_Data.csv', header=0, parse_dates=False) energy = df.index efficiency = df.Efficiency print efficiency ``` I tried using ``` del df['index'] ``` after I set ``` energy = df.index ``` which I found in another post but that results in \"KeyError: 'index' \"", "response":"When writing to and reading from a CSV file include the argument index=False and index_col=False, respectively. Follows an example: To write: ``` df.to_csv(filename, index=False) ``` and to read from the csv ``` df.read_csv(filename, index_col=False) ``` This should prevent the issue so you don't need to fix it later.", "best_answers_score":0.8, "library_name":"pandas", "question_url":"https:\/\/stackoverflow.com\/questions\/20107570\/removing-index-column-in-pandas-when-reading-a-csv", "best_answers_votes":400, "question_length":605, "response_length":319 }, { "question":"Conditional Replace Pandas [duplicate] This question already has answers here: Pandas - Case when & default in pandas (4 answers) Closed last year. I have a DataFrame, and I want to replace the values in a particular column that exceed a value with zero. 
I had thought this was a way of achieving this: ``` df[df.my_channel > 20000].my_channel = 0 ``` If I copy the channel into a new data frame it's simple: ``` df2 = df.my_channel df2[df2 > 20000] = 0 ``` This does exactly what I want, but seems not to work with the channel as part of the original DataFrame.", "response":".ix indexer works okay for pandas version prior to 0.20.0, but since pandas 0.20.0, the .ix indexer is deprecated, so you should avoid using it. Instead, you can use .loc or iloc indexers. You can solve this problem by: ``` mask = df.my_channel > 20000 column_name = 'my_channel' df.loc[mask, column_name] = 0 ``` Or, in one line, ``` df.loc[df.my_channel > 20000, 'my_channel'] = 0 ``` mask helps you to select the rows in which df.my_channel > 20000 is True, while df.loc[mask, column_name] = 0 sets the value 0 to the selected rows where maskholds in the column which name is column_name. Update: In this case, you should use loc because if you use iloc, you will get a NotImplementedError telling you that iLocation based boolean indexing on an integer type is not available.", "best_answers_score":0.8, "library_name":"pandas", "question_url":"https:\/\/stackoverflow.com\/questions\/21608228\/conditional-replace-pandas", "best_answers_votes":281, "question_length":562, "response_length":779 }, { "question":"Combine Date and Time columns using pandas I have a pandas dataframe with the following columns: ```py data = {'Date': ['01-06-2013', '02-06-2013', '02-06-2013', '02-06-2013', '02-06-2013', '03-06-2013', '03-06-2013', '03-06-2013', '03-06-2013', '04-06-2013'], 'Time': ['23:00:00', '01:00:00', '21:00:00', '22:00:00', '23:00:00', '01:00:00', '21:00:00', '22:00:00', '23:00:00', '01:00:00']} df = pd.DataFrame(data) Date Time 0 01-06-2013 23:00:00 1 02-06-2013 01:00:00 2 02-06-2013 21:00:00 3 02-06-2013 22:00:00 4 02-06-2013 23:00:00 5 03-06-2013 01:00:00 6 03-06-2013 21:00:00 7 03-06-2013 22:00:00 8 03-06-2013 23:00:00 9 04-06-2013 01:00:00 ``` How do I 
combine data['Date'] & data['Time'] to get the following? Is there a way of doing it using pd.to_datetime? ``` Date 01-06-2013 23:00:00 02-06-2013 01:00:00 02-06-2013 21:00:00 02-06-2013 22:00:00 02-06-2013 23:00:00 03-06-2013 01:00:00 03-06-2013 21:00:00 03-06-2013 22:00:00 03-06-2013 23:00:00 04-06-2013 01:00:00 ```", "response":"It's worth mentioning that you may have been able to read this in directly e.g. if you were using read_csv using parse_dates=[['Date', 'Time']]. Assuming these are just strings you could simply add them together (with a space), allowing you to use to_datetime, which works without specifying the format= parameter ``` In [11]: df['Date'] + ' ' + df['Time'] Out[11]: 0 01-06-2013 23:00:00 1 02-06-2013 01:00:00 2 02-06-2013 21:00:00 3 02-06-2013 22:00:00 4 02-06-2013 23:00:00 5 03-06-2013 01:00:00 6 03-06-2013 21:00:00 7 03-06-2013 22:00:00 8 03-06-2013 23:00:00 9 04-06-2013 01:00:00 dtype: object In [12]: pd.to_datetime(df['Date'] + ' ' + df['Time']) Out[12]: 0 2013-01-06 23:00:00 1 2013-02-06 01:00:00 2 2013-02-06 21:00:00 3 2013-02-06 22:00:00 4 2013-02-06 23:00:00 5 2013-03-06 01:00:00 6 2013-03-06 21:00:00 7 2013-03-06 22:00:00 8 2013-03-06 23:00:00 9 2013-04-06 01:00:00 dtype: datetime64[ns] ``` Alternatively, without the + ' ', but the format= parameter must be used. Additionally, pandas is good at inferring the format to be converted to a datetime, however, specifying the exact format is faster. ```py pd.to_datetime(df['Date'] + df['Time'], format='%m-%d-%Y%H:%M:%S') ``` Note: surprisingly (for me), this works fine with NaNs being converted to NaT, but it is worth worrying that the conversion (perhaps using the raise argument). %%timeit ```py # sample dataframe with 10000000 rows using df from the OP df = pd.concat([df for _ in range(1000000)]).reset_index(drop=True) %%timeit pd.to_datetime(df['Date'] + ' ' + df['Time']) [result]: 1.73 s \u00b1 10.4 ms per loop (mean \u00b1 std. dev. 
of 7 runs, 1 loop each) %%timeit pd.to_datetime(df['Date'] + df['Time'], format='%m-%d-%Y%H:%M:%S') [result]: 1.33 s \u00b1 9.88 ms per loop (mean \u00b1 std. dev. of 7 runs, 1 loop each) ```", "best_answers_score":0.8, "library_name":"pandas", "question_url":"https:\/\/stackoverflow.com\/questions\/17978092\/combine-date-and-time-columns-using-pandas", "best_answers_votes":301, "question_length":977, "response_length":1785 }, { "question":"Modifying a subset of rows in a pandas dataframe Assume I have a pandas DataFrame with two columns, A and B. I'd like to modify this DataFrame (or create a copy) so that B is always NaN whenever A is 0. How would I achieve that? I tried the following ``` df['A'==0]['B'] = np.nan ``` and ``` df['A'==0]['B'].values.fill(np.nan) ``` without success.", "response":"Use .loc for label based indexing: ``` df.loc[df.A==0, 'B'] = np.nan ``` The df.A==0 expression creates a boolean series that indexes the rows, 'B' selects the column. You can also use this to transform a subset of a column, e.g.: ``` df.loc[df.A==0, 'B'] = df.loc[df.A==0, 'B'] \/ 2 ``` I don't know enough about pandas internals to know exactly why that works, but the basic issue is that sometimes indexing into a DataFrame returns a copy of the result, and sometimes it returns a view on the original object. According to documentation here, this behavior depends on the underlying numpy behavior. 
I've found that accessing everything in one operation (rather than [one][two]) is more likely to work for setting.", "best_answers_score":0.8, "library_name":"pandas", "question_url":"https:\/\/stackoverflow.com\/questions\/12307099\/modifying-a-subset-of-rows-in-a-pandas-dataframe", "best_answers_votes":327, "question_length":348, "response_length":715 }, { "question":"ImportError: No module named dateutil.parser I am receiving the following error when importing pandas in a Python program ``` monas-mbp:book mona$ sudo pip install python-dateutil Requirement already satisfied (use --upgrade to upgrade): python-dateutil in \/System\/Library\/Frameworks\/Python.framework\/Versions\/2.7\/Extras\/lib\/python Cleaning up... monas-mbp:book mona$ python t1.py No module named dateutil.parser Traceback (most recent call last): File \"t1.py\", line 4, in import pandas as pd File \"\/Library\/Python\/2.7\/site-packages\/pandas\/__init__.py\", line 6, in from . import hashtable, tslib, lib File \"tslib.pyx\", line 31, in init pandas.tslib (pandas\/tslib.c:48782) ImportError: No module named dateutil.parser ``` Also here's the program: ``` import codecs from math import sqrt import numpy as np import pandas as pd users = {\"Angelica\": {\"Blues Traveler\": 3.5, \"Broken Bells\": 2.0, \"Norah Jones\": 4.5, \"Phoenix\": 5.0, \"Slightly Stoopid\": 1.5, \"The Strokes\": 2.5, \"Vampire Weekend\": 2.0}, \"Bill\":{\"Blues Traveler\": 2.0, \"Broken Bells\": 3.5, \"Deadmau5\": 4.0, \"Phoenix\": 2.0, \"Slightly Stoopid\": 3.5, \"Vampire Weekend\": 3.0}, \"Chan\": {\"Blues Traveler\": 5.0, \"Broken Bells\": 1.0, \"Deadmau5\": 1.0, \"Norah Jones\": 3.0, \"Phoenix\": 5, \"Slightly Stoopid\": 1.0}, \"Dan\": {\"Blues Traveler\": 3.0, \"Broken Bells\": 4.0, \"Deadmau5\": 4.5, \"Phoenix\": 3.0, \"Slightly Stoopid\": 4.5, \"The Strokes\": 4.0, \"Vampire Weekend\": 2.0}, \"Hailey\": {\"Broken Bells\": 4.0, \"Deadmau5\": 1.0, \"Norah Jones\": 4.0, \"The Strokes\": 4.0, \"Vampire Weekend\": 1.0}, 
\"Jordyn\": {\"Broken Bells\": 4.5, \"Deadmau5\": 4.0, \"Norah Jones\": 5.0, \"Phoenix\": 5.0, \"Slightly Stoopid\": 4.5, \"The Strokes\": 4.0, \"Vampire Weekend\": 4.0}, \"Sam\": {\"Blues Traveler\": 5.0, \"Broken Bells\": 2.0, \"Norah Jones\": 3.0, \"Phoenix\": 5.0, \"Slightly Stoopid\": 4.0, \"The Strokes\": 5.0}, \"Veronica\": {\"Blues Traveler\": 3.0, \"Norah Jones\": 5.0, \"Phoenix\": 4.0, \"Slightly Stoopid\": 2.5, \"The Strokes\": 3.0} } class recommender: def __init__(self, data, k=1, metric='pearson', n=5): \"\"\" initialize recommender currently, if data is dictionary the recommender is initialized to it. For all other data types of data, no initialization occurs k is the k value for k nearest neighbor metric is which distance formula to use n is the maximum number of recommendations to make\"\"\" self.k = k self.n = n self.username2id = {} self.userid2name = {} self.productid2name = {} # for some reason I want to save the name of the metric self.metric = metric if self.metric == 'pearson': self.fn = self.pearson # # if data is dictionary set recommender data to it # if type(data).__name__ == 'dict': self.data = data def convertProductID2name(self, id): \"\"\"Given product id number return product name\"\"\" if id in self.productid2name: return self.productid2name[id] else: return id def userRatings(self, id, n): \"\"\"Return n top ratings for user with id\"\"\" print (\"Ratings for \" + self.userid2name[id]) ratings = self.data[id] print(len(ratings)) ratings = list(ratings.items()) ratings = [(self.convertProductID2name(k), v) for (k, v) in ratings] # finally sort and return ratings.sort(key=lambda artistTuple: artistTuple[1], reverse = True) ratings = ratings[:n] for rating in ratings: print(\"%s\\t%i\" % (rating[0], rating[1])) def loadBookDB(self, path=''): \"\"\"loads the BX book dataset. 
Path is where the BX files are located\"\"\" self.data = {} i = 0 # # First load book ratings into self.data # f = codecs.open(path + \"BX-Book-Ratings.csv\", 'r', 'utf8') for line in f: i += 1 #separate line into fields fields = line.split(';') user = fields[0].strip('\"') book = fields[1].strip('\"') rating = int(fields[2].strip().strip('\"')) if user in self.data: currentRatings = self.data[user] else: currentRatings = {} currentRatings[book] = rating self.data[user] = currentRatings f.close() # # Now load books into self.productid2name # Books contains isbn, title, and author among other fields # f = codecs.open(path + \"BX-Books.csv\", 'r', 'utf8') for line in f: i += 1 #separate line into fields fields = line.split(';') isbn = fields[0].strip('\"') title = fields[1].strip('\"') author = fields[2].strip().strip('\"') title = title + ' by ' + author self.productid2name[isbn] = title f.close() # # Now load user info into both self.userid2name and # self.username2id # f = codecs.open(path + \"BX-Users.csv\", 'r', 'utf8') for line in f: i += 1 #print(line) #separate line into fields fields = line.split(';') userid = fields[0].strip('\"') location = fields[1].strip('\"') if len(fields) > 3: age = fields[2].strip().strip('\"') else: age = 'NULL' if age != 'NULL': value = location + ' (age: ' + age + ')' else: value = location self.userid2name[userid] = value self.username2id[location] = userid f.close() print(i) def pearson(self, rating1, rating2): sum_xy = 0 sum_x = 0 sum_y = 0 sum_x2 = 0 sum_y2 = 0 n = 0 for key in rating1: if key in rating2: n += 1 x = rating1[key] y = rating2[key] sum_xy += x * y sum_x += x sum_y += y sum_x2 += pow(x, 2) sum_y2 += pow(y, 2) if n == 0: return 0 # now compute denominator denominator = (sqrt(sum_x2 - pow(sum_x, 2) \/ n) * sqrt(sum_y2 - pow(sum_y, 2) \/ n)) if denominator == 0: return 0 else: return (sum_xy - (sum_x * sum_y) \/ n) \/ denominator def computeNearestNeighbor(self, username): \"\"\"creates a sorted list of users based 
on their distance to username\"\"\" distances = [] for instance in self.data: if instance != username: distance = self.fn(self.data[username], self.data[instance]) distances.append((instance, distance)) # sort based on distance -- closest first distances.sort(key=lambda artistTuple: artistTuple[1], reverse=True) return distances def recommend(self, user): \"\"\"Give list of recommendations\"\"\" recommendations = {} # first get list of users ordered by nearness nearest = self.computeNearestNeighbor(user) # # now get the ratings for the user # userRatings = self.data[user] # # determine the total distance totalDistance = 0.0 for i in range(self.k): totalDistance += nearest[i][1] # now iterate through the k nearest neighbors # accumulating their ratings for i in range(self.k): # compute slice of pie weight = nearest[i][1] \/ totalDistance # get the name of the person name = nearest[i][0] # get the ratings for this person neighborRatings = self.data[name] # get the name of the person # now find bands neighbor rated that user didn't for artist in neighborRatings: if not artist in userRatings: if artist not in recommendations: recommendations[artist] = (neighborRatings[artist] * weight) else: recommendations[artist] = (recommendations[artist] + neighborRatings[artist] * weight) # now make list from dictionary recommendations = list(recommendations.items()) recommendations = [(self.convertProductID2name(k), v) for (k, v) in recommendations] # finally sort and return recommendations.sort(key=lambda artistTuple: artistTuple[1], reverse = True) # Return the first n items return recommendations[:self.n] r = recommender(users) # The author implementation r.loadBookDB('\/Users\/mona\/Downloads\/BX-Dump\/') ratings = pd.read_csv('\/Users\/danialt\/BX-CSV-Dump\/BX-Book-Ratings.csv', sep=\";\", quotechar=\"\\\"\", escapechar=\"\\\\\") books = pd.read_csv('\/Users\/danialt\/BX-CSV-Dump\/BX-Books.csv', sep=\";\", quotechar=\"\\\"\", escapechar=\"\\\\\") users = 
pd.read_csv('\/Users\/danialt\/BX-CSV-Dump\/BX-Users.csv', sep=\";\", quotechar=\"\\\"\", escapechar=\"\\\\\") pivot_rating = ratings.pivot(index='User-ID', columns='ISBN', values='Book-Rating') ```", "response":"On Ubuntu you may need to install the package manager pip first: ``` sudo apt-get install python-pip ``` Then install the python-dateutil package with: ``` sudo pip install python-dateutil ```", "best_answers_score":0.8, "library_name":"pandas", "question_url":"https:\/\/stackoverflow.com\/questions\/20853474\/importerror-no-module-named-dateutil-parser", "best_answers_votes":354, "question_length":7395, "response_length":192 }, { "question":"What are the differences between Pandas and NumPy+SciPy in Python? [closed] Closed. This question is opinion-based. It is not currently accepting answers. Want to improve this question? Because this question may lead to opinionated discussion, debate, and answers, it has been closed. You may edit the question if you feel you can improve it so that it requires answers that include facts and citations or a detailed explanation of the proposed solution. If edited, the question will be reviewed and might be reopened. Closed 10 years ago. Improve this question They both seem exceedingly similar and I'm curious as to which package would be more beneficial for financial data analysis.", "response":"pandas provides high level data manipulation tools built on top of NumPy. NumPy by itself is a fairly low-level tool, similar to MATLAB. pandas on the other hand provides rich time series functionality, data alignment, NA-friendly statistics, groupby, merge and join methods, and lots of other conveniences. It has become very popular in recent years in financial applications. 
I will have a chapter dedicated to financial data analysis using pandas in my upcoming book.", "best_answers_score":0.8, "library_name":"pandas", "question_url":"https:\/\/stackoverflow.com\/questions\/11077023\/what-are-the-differences-between-pandas-and-numpyscipy-in-python", "best_answers_votes":329, "question_length":686, "response_length":470 }, { "question":"Update row values where certain condition is met in pandas Say I have the following dataframe: What is the most efficient way to update the values of the columns feat and another_feat where the stream is number 2? Is this it? ```py for index, row in df.iterrows(): if df1.loc[index,'stream'] == 2: # do something ``` How do I do it if there are more than 100 columns? I don't want to explicitly name the columns that I want to update. I want to divide the value of each column by 2 (except for the stream column). So to be clear, my goal is: Dividing all values by 2 of all rows that have stream 2, but not changing the stream column.", "response":"I think you can use loc if you need update two columns to same value: ``` df1.loc[df1['stream'] == 2, ['feat','another_feat']] = 'aaaa' print df1 stream feat another_feat a 1 some_value some_value b 2 aaaa aaaa c 2 aaaa aaaa d 3 some_value some_value ``` If you need update separate, one option is use: ``` df1.loc[df1['stream'] == 2, 'feat'] = 10 print df1 stream feat another_feat a 1 some_value some_value b 2 10 some_value c 2 10 some_value d 3 some_value some_value ``` Another common option is use numpy.where: ``` df1['feat'] = np.where(df1['stream'] == 2, 10,20) print df1 stream feat another_feat a 1 20 some_value b 2 10 some_value c 2 10 some_value d 3 20 some_value ``` EDIT: If you need divide all columns without stream where condition is True, use: ``` print df1 stream feat another_feat a 1 4 5 b 2 4 5 c 2 2 9 d 3 1 7 #filter columns all without stream cols = [col for col in df1.columns if col != 'stream'] print cols ['feat', 'another_feat'] df1.loc[df1['stream'] 
== 2, cols ] = df1 \/ 2 print df1 stream feat another_feat a 1 4.0 5.0 b 2 2.0 2.5 c 2 1.0 4.5 d 3 1.0 7.0 ``` If working with multiple conditions, it is possible to use multiple numpy.where calls or numpy.select: ``` df0 = pd.DataFrame({'Col':[5,0,-6]}) df0['New Col1'] = np.where((df0['Col'] > 0), 'Increasing', np.where((df0['Col'] < 0), 'Decreasing', 'No Change')) df0['New Col2'] = np.select([df0['Col'] > 0, df0['Col'] < 0], ['Increasing', 'Decreasing'], default='No Change') print (df0) Col New Col1 New Col2 0 5 Increasing Increasing 1 0 No Change No Change 2 -6 Decreasing Decreasing ```", "best_answers_score":0.8, "library_name":"pandas", "question_url":"https:\/\/stackoverflow.com\/questions\/36909977\/update-row-values-where-certain-condition-is-met-in-pandas", "best_answers_votes":357, "question_length":634, "response_length":1490 }, { "question":"GroupBy pandas DataFrame and select most common value I have a data frame with three string columns. I know that the only one value in the 3rd column is valid for every combination of the first two. To clean the data I have to group by data frame by first two columns and select most common value of the third column for each combination. My code: ``` import pandas as pd from scipy import stats source = pd.DataFrame({ 'Country': ['USA', 'USA', 'Russia', 'USA'], 'City': ['New-York', 'New-York', 'Sankt-Petersburg', 'New-York'], 'Short name': ['NY', 'New', 'Spb', 'NY']}) source.groupby(['Country','City']).agg(lambda x: stats.mode(x['Short name'])[0]) ``` Last line of code doesn't work, it says KeyError: 'Short name' and if I try to group only by City, then I got an AssertionError. What can I do fix it?", "response":"
Use groupby, GroupBy.agg, and apply the pd.Series.mode function to each group: ``` source.groupby(['Country','City'])['Short name'].agg(pd.Series.mode) Country City Russia Sankt-Petersburg Spb USA New-York NY Name: Short name, dtype: object ``` If this is needed as a DataFrame, use ``` source.groupby(['Country','City'])['Short name'].agg(pd.Series.mode).to_frame() Short name Country City Russia Sankt-Petersburg Spb USA New-York NY ``` The useful thing about Series.mode is that it always returns a Series, making it very compatible with agg and apply, especially when reconstructing the groupby output. It is also faster. ``` # Accepted answer. %timeit source.groupby(['Country','City']).agg(lambda x:x.value_counts().index[0]) # Proposed in this post. %timeit source.groupby(['Country','City'])['Short name'].agg(pd.Series.mode) 5.56 ms \u00b1 343 \u00b5s per loop (mean \u00b1 std. dev. of 7 runs, 100 loops each) 2.76 ms \u00b1 387 \u00b5s per loop (mean \u00b1 std. dev. of 7 runs, 100 loops each) ``` Dealing with Multiple Modes Series.mode also does a good job when there are multiple modes: ``` source2 = source.append( pd.Series({'Country': 'USA', 'City': 'New-York', 'Short name': 'New'}), ignore_index=True) # Now `source2` has two modes for the # (\"USA\", \"New-York\") group, they are \"NY\" and \"New\". 
source2 Country City Short name 0 USA New-York NY 1 USA New-York New 2 Russia Sankt-Petersburg Spb 3 USA New-York NY 4 USA New-York New ``` ``` source2.groupby(['Country','City'])['Short name'].agg(pd.Series.mode) Country City Russia Sankt-Petersburg Spb USA New-York [NY, New] Name: Short name, dtype: object ``` Or, if you want a separate row for each mode, you can use GroupBy.apply: ``` source2.groupby(['Country','City'])['Short name'].apply(pd.Series.mode) Country City Russia Sankt-Petersburg 0 Spb USA New-York 0 NY 1 New Name: Short name, dtype: object ``` If you don't care which mode is returned as long as it's either one of them, then you will need a lambda that calls mode and extracts the first result. ``` source2.groupby(['Country','City'])['Short name'].agg( lambda x: pd.Series.mode(x)[0]) Country City Russia Sankt-Petersburg Spb USA New-York NY Name: Short name, dtype: object ``` Alternatives to (not) consider You can also use statistics.mode from python, but... ``` source.groupby(['Country','City'])['Short name'].apply(statistics.mode) Country City Russia Sankt-Petersburg Spb USA New-York NY Name: Short name, dtype: object ``` ...it does not work well when having to deal with multiple modes; a StatisticsError is raised. This is mentioned in the docs: If data is empty, or if there is not exactly one most common value, StatisticsError is raised. But you can see for yourself... ``` statistics.mode([1, 2]) # --------------------------------------------------------------------------- # StatisticsError Traceback (most recent call last) # ... 
# StatisticsError: no unique mode; found 2 equally common values ```", "best_answers_score":0.8, "library_name":"pandas", "question_url":"https:\/\/stackoverflow.com\/questions\/15222754\/groupby-pandas-dataframe-and-select-most-common-value", "best_answers_votes":274, "question_length":808, "response_length":2968 }, { "question":"How to merge multiple dataframes I have different dataframes and need to merge them together based on the date column. If I only had two dataframes, I could use df1.merge(df2, on='date'), to do it with three dataframes, I use df1.merge(df2.merge(df3, on='date'), on='date'), however it becomes really complex and unreadable to do it with multiple dataframes. All dataframes have one column in common -date, but they don't have the same number of rows nor columns and I only need those rows in which each date is common to every dataframe. So, I'm trying to write a recursion function that returns a dataframe with all data but it didn't work. How should I merge multiple dataframes then? I tried different ways and got errors like out of range, keyerror 0\/1\/2\/3 and can not merge DataFrame with instance of type . 
This is the script I wrote: ``` dfs = [df1, df2, df3] # list of dataframes def mergefiles(dfs, countfiles, i=0): if i == (countfiles - 2): # it gets to the second to last and merges it with the last return dfm = dfs[i].merge(mergefiles(dfs[i+1], countfiles, i=i+1), on='date') return dfm print(mergefiles(dfs, len(dfs))) ``` An example: df_1: ``` May 19, 2017;1,200.00;0.1% May 18, 2017;1,100.00;0.1% May 17, 2017;1,000.00;0.1% May 15, 2017;1,901.00;0.1% ``` df_2: ``` May 20, 2017;2,200.00;1000000;0.2% May 18, 2017;2,100.00;1590000;0.2% May 16, 2017;2,000.00;1230000;0.2% May 15, 2017;2,902.00;1000000;0.2% ``` df_3: ``` May 21, 2017;3,200.00;2000000;0.3% May 17, 2017;3,100.00;2590000;0.3% May 16, 2017;3,000.00;2230000;0.3% May 15, 2017;3,903.00;2000000;0.3% ``` Expected merge result: ``` May 15, 2017; 1,901.00;0.1%; 2,902.00;1000000;0.2%; 3,903.00;2000000;0.3% ```", "response":"Short answer ```py df_merged = reduce(lambda left,right: pd.merge(left,right,on=['DATE'], how='outer'), data_frames) ``` Long answer Below, is the most clean, comprehensible way of merging multiple dataframe if complex queries aren't involved. Just simply merge with DATE as the index and merge using OUTER method (to get all the data). ``` import pandas as pd from functools import reduce df1 = pd.read_table('file1.csv', sep=',') df2 = pd.read_table('file2.csv', sep=',') df3 = pd.read_table('file3.csv', sep=',') ``` Now, basically load all the files you have as data frame into a list. And, then merge the files using merge or reduce function. ``` # compile the list of dataframes you want to merge data_frames = [df1, df2, df3] ``` Note: you can add as many data-frames inside the above list. This is the good part about this method. No complex queries involved. 
To keep the values that belong to the same date you need to merge it on the DATE ``` df_merged = reduce(lambda left,right: pd.merge(left,right,on=['DATE'], how='outer'), data_frames) # if you want to fill the values that don't exist in the lines of merged dataframe simply fill with required strings as df_merged = reduce(lambda left,right: pd.merge(left,right,on=['DATE'], how='outer'), data_frames).fillna('void') ``` Now, the output will have the values from the same date on the same lines. You can fill the non-existing data from different frames for different columns using fillna(). Then write the merged data to the csv file if desired. ``` pd.DataFrame.to_csv(df_merged, 'merged.txt', sep=',', na_rep='.', index=False) ``` This should give you DATE VALUE1 VALUE2 VALUE3 ....", "best_answers_score":0.8, "library_name":"pandas", "question_url":"https:\/\/stackoverflow.com\/questions\/44327999\/how-to-merge-multiple-dataframes", "best_answers_votes":341, "question_length":1685, "response_length":1646 }, { "question":"Convert row to column header for Pandas DataFrame The data I have to work with is a bit messy. It has header names inside of its data. How can I choose a row from an existing pandas dataframe and make it (rename it to) a column header?
I want to do something like: ``` header = df[df['old_header_name1'] == 'new_header_name1'] df.columns = header ```", "response":"``` In [21]: df = pd.DataFrame([(1,2,3), ('foo','bar','baz'), (4,5,6)]) In [22]: df Out[22]: 0 1 2 0 1 2 3 1 foo bar baz 2 4 5 6 ``` Set the column labels to equal the values in the 2nd row (index location 1): ``` In [23]: df.columns = df.iloc[1] ``` If the index has unique labels, you can drop the 2nd row using: ``` In [24]: df.drop(df.index[1]) Out[24]: 1 foo bar baz 0 1 2 3 2 4 5 6 ``` If the index is not unique, you could use: ``` In [133]: df.iloc[pd.RangeIndex(len(df)).drop(1)] Out[133]: 1 foo bar baz 0 1 2 3 2 4 5 6 ``` Using df.drop(df.index[1]) removes all rows with the same label as the second row. Because non-unique indexes can lead to stumbling blocks (or potential bugs) like this, it's often better to take care that the index is unique (even though Pandas does not require it).", "best_answers_score":0.8, "library_name":"pandas", "question_url":"https:\/\/stackoverflow.com\/questions\/26147180\/convert-row-to-column-header-for-pandas-dataframe", "best_answers_votes":311, "question_length":351, "response_length":800 }, { "question":"Add column with constant value to pandas dataframe [duplicate] This question already has answers here: Add column to dataframe with constant value (10 answers) Closed 5 years ago. Given a DataFrame: ``` np.random.seed(0) df = pd.DataFrame(np.random.randn(3, 3), columns=list('ABC'), index=[1, 2, 3]) df A B C 1 1.764052 0.400157 0.978738 2 2.240893 1.867558 -0.977278 3 0.950088 -0.151357 -0.103219 ``` What is the simplest way to add a new column containing a constant value eg 0? ``` A B C new 1 1.764052 0.400157 0.978738 0 2 2.240893 1.867558 -0.977278 0 3 0.950088 -0.151357 -0.103219 0 ``` This is my solution, but I don't know why this puts NaN into 'new' column? 
``` df['new'] = pd.Series([0 for x in range(len(df.index))]) A B C new 1 1.764052 0.400157 0.978738 0.0 2 2.240893 1.867558 -0.977278 0.0 3 0.950088 -0.151357 -0.103219 NaN ```", "response":"Super simple in-place assignment: df['new'] = 0 For in-place modification, perform direct assignment. This assignment is broadcasted by pandas for each row. ``` df = pd.DataFrame('x', index=range(4), columns=list('ABC')) df A B C 0 x x x 1 x x x 2 x x x 3 x x x ``` ``` df['new'] = 'y' # Same as, # df.loc[:, 'new'] = 'y' df A B C new 0 x x x y 1 x x x y 2 x x x y 3 x x x y ``` Note for object columns If you want to add an column of empty lists, here is my advice: Consider not doing this. object columns are bad news in terms of performance. Rethink how your data is structured. Consider storing your data in a sparse data structure. More information: sparse data structures If you must store a column of lists, ensure not to copy the same reference multiple times. ``` # Wrong df['new'] = [[]] * len(df) # Right df['new'] = [[] for _ in range(len(df))] ``` Generating a copy: df.assign(new=0) If you need a copy instead, use DataFrame.assign: ``` df.assign(new='y') A B C new 0 x x x y 1 x x x y 2 x x x y 3 x x x y ``` And, if you need to assign multiple such columns with the same value, this is as simple as, ``` c = ['new1', 'new2', ...] df.assign(**dict.fromkeys(c, 'y')) A B C new1 new2 0 x x x y y 1 x x x y y 2 x x x y y 3 x x x y y ``` Multiple column assignment Finally, if you need to assign multiple columns with different values, you can use assign with a dictionary. 
``` c = {'new1': 'w', 'new2': 'y', 'new3': 'z'} df.assign(**c) A B C new1 new2 new3 0 x x x w y z 1 x x x w y z 2 x x x w y z 3 x x x w y z ```", "best_answers_score":0.8, "library_name":"pandas", "question_url":"https:\/\/stackoverflow.com\/questions\/24039023\/add-column-with-constant-value-to-pandas-dataframe", "best_answers_votes":219, "question_length":847, "response_length":1528 }, { "question":"Split a large pandas dataframe I have a large dataframe with 423244 lines. I want to split this in to 4. I tried the following code which gave an error? ValueError: array split does not result in an equal division ``` for item in np.split(df, 4): print item ``` How to split this dataframe in to 4 groups?", "response":"Use np.array_split: ``` Docstring: Split an array into multiple sub-arrays. Please refer to the ``split`` documentation. The only difference between these functions is that ``array_split`` allows `indices_or_sections` to be an integer that does *not* equally divide the axis. ``` ``` In [1]: import pandas as pd In [2]: df = pd.DataFrame({'A' : ['foo', 'bar', 'foo', 'bar', ...: 'foo', 'bar', 'foo', 'foo'], ...: 'B' : ['one', 'one', 'two', 'three', ...: 'two', 'two', 'one', 'three'], ...: 'C' : randn(8), 'D' : randn(8)}) In [3]: print df A B C D 0 foo one -0.174067 -0.608579 1 bar one -0.860386 -1.210518 2 foo two 0.614102 1.689837 3 bar three -0.284792 -1.071160 4 foo two 0.843610 0.803712 5 bar two -1.514722 0.870861 6 foo one 0.131529 -0.968151 7 foo three -1.002946 -0.257468 In [4]: import numpy as np In [5]: np.array_split(df, 3) Out[5]: [ A B C D 0 foo one -0.174067 -0.608579 1 bar one -0.860386 -1.210518 2 foo two 0.614102 1.689837, A B C D 3 bar three -0.284792 -1.071160 4 foo two 0.843610 0.803712 5 bar two -1.514722 0.870861, A B C D 6 foo one 0.131529 -0.968151 7 foo three -1.002946 -0.257468] ```", "best_answers_score":0.8, "library_name":"pandas", "question_url":"https:\/\/stackoverflow.com\/questions\/17315737\/split-a-large-pandas-dataframe", 
"best_answers_votes":356, "question_length":305, "response_length":1122 }, { "question":"Find maximum value of a column and return the corresponding row values using Pandas ``` Country Place Value US NewYork 562 US Michigan 854 US Illinois 356 UK London 778 UK Manchester 512 Spain Madrid 509 India Mumbai 196 US Kansas 894 UK Liverpool 796 Spain Barcelona 792 ``` Using Pandas I am trying to find the Country and Place with the maximum value. This returns the maximum value: ``` data.groupby(['Country','Place'])['Value'].max() ``` But how do I get the corresponding Country and Place name?", "response":"Assuming df has a unique index, this gives the row with the maximum value: ``` In [34]: df.loc[df['Value'].idxmax()] Out[34]: Country US Place Kansas Value 894 Name: 7 ``` Note that idxmax returns index labels. So if the DataFrame has duplicates in the index, the label may not uniquely identify the row, so df.loc may return more than one row. Therefore, if df does not have a unique index, you must make the index unique before proceeding as above. Depending on the DataFrame, sometimes you can use stack or set_index to make the index unique. Or, you can simply reset the index (so the rows become renumbered, starting at 0): ``` df = df.reset_index() ```", "best_answers_score":0.8, "library_name":"pandas", "question_url":"https:\/\/stackoverflow.com\/questions\/15741759\/find-maximum-value-of-a-column-and-return-the-corresponding-row-values-using-pan", "best_answers_votes":246, "question_length":502, "response_length":658 }, { "question":"Replace NaN in one column with value from corresponding row of second column I am working with this Pandas DataFrame in Python. 
``` File heat Farheit Temp_Rating 1 YesQ 75 N\/A 1 NoR 115 N\/A 1 YesA 63 N\/A 1 NoT 83 41 1 NoY 100 80 1 YesZ 56 12 2 YesQ 111 N\/A 2 NoR 60 N\/A 2 YesA 19 N\/A 2 NoT 106 77 2 NoY 45 21 2 YesZ 40 54 3 YesQ 84 N\/A 3 NoR 67 N\/A 3 YesA 94 N\/A 3 NoT 68 39 3 NoY 63 46 3 YesZ 34 81 ``` I need to replace all NaNs in the Temp_Rating column with the value from the Farheit column. This is what I need: ``` File heat Temp_Rating 1 YesQ 75 1 NoR 115 1 YesA 63 1 YesQ 41 1 NoR 80 1 YesA 12 2 YesQ 111 2 NoR 60 2 YesA 19 2 NoT 77 2 NoY 21 2 YesZ 54 3 YesQ 84 3 NoR 67 3 YesA 94 3 NoT 39 3 NoY 46 3 YesZ 81 ``` If I do a Boolean selection, I can pick out only one of these columns at a time. The problem is if I then try to join them, I am not able to do this while preserving the correct order. How can I only find Temp_Rating rows with the NaNs and replace them with the value in the same row of the Farheit column?", "response":"Assuming your DataFrame is in df: ``` df.Temp_Rating.fillna(df.Farheit, inplace=True) del df['Farheit'] df.columns = 'File heat Observations'.split() ``` First replace any NaN values with the corresponding value of df.Farheit. Delete the 'Farheit' column. Then rename the columns. Here's the resulting DataFrame: ``` File heat Observations 0 1 YesQ 75 1 1 NoR 115 2 1 YesA 63 3 1 NoT 41 4 1 NoY 80 5 1 YesZ 12 6 2 YesQ 111 7 2 NoR 60 8 2 YesA 19 9 2 NoT 77 10 2 NoY 21 11 2 YesZ 54 12 3 YesQ 84 13 3 NoR 67 14 3 YesA 94 15 3 NoT 39 16 3 NoY 46 17 3 YesZ 81 ```", "best_answers_score":0.8, "library_name":"pandas", "question_url":"https:\/\/stackoverflow.com\/questions\/29177498\/replace-nan-in-one-column-with-value-from-corresponding-row-of-second-column", "best_answers_votes":278, "question_length":1028, "response_length":560 }, { "question":"Change values in one column on the basis of the values in another column I'm trying to reproduce my Stata code in Python, and I was pointed in the direction of Pandas. 
I am, however, having a hard time wrapping my head around how to process the data. Let's say I want to iterate over all values in the column head 'ID.' If that ID matches a specific number, then I want to change two corresponding values FirstName and LastName. In Stata it looks like this: ``` replace FirstName = \"Matt\" if ID==103 replace LastName = \"Jones\" if ID==103 ``` So this replaces all values in FirstName that correspond with values of ID == 103 to Matt. In Pandas, I'm trying something like this ``` df = read_csv(\"test.csv\") for i in df['ID']: if i ==103: ... ``` Not sure where to go from here. Any ideas?", "response":"One option is to use Python's slicing and indexing features to logically evaluate the places where your condition holds and overwrite the data there. Assuming you can load your data directly into pandas with pandas.read_csv then the following code might be helpful for you. ``` import pandas df = pandas.read_csv(\"test.csv\") df.loc[df.ID == 103, 'FirstName'] = \"Matt\" df.loc[df.ID == 103, 'LastName'] = \"Jones\" ``` As mentioned in the comments, you can also do the assignment to both columns in one shot: ``` df.loc[df.ID == 103, ['FirstName', 'LastName']] = 'Matt', 'Jones' ``` Note that you'll need pandas version 0.11 or newer to make use of loc for overwrite assignment operations. Indeed, for older versions like 0.8 (despite what critics of chained assignment may say), chained assignment is the correct way to do it, hence why it's useful to know about even if it should be avoided in more modern versions of pandas. Another way to do it is to use what is called chained assignment. 
The behavior of this is less stable and so it is not considered the best solution (it is explicitly discouraged in the docs), but it is useful to know about: ``` import pandas df = pandas.read_csv(\"test.csv\") df['FirstName'][df.ID == 103] = \"Matt\" df['LastName'][df.ID == 103] = \"Jones\" ```", "best_answers_score":0.8, "library_name":"pandas", "question_url":"https:\/\/stackoverflow.com\/questions\/19226488\/change-values-in-one-column-on-the-basis-of-the-values-in-another-column", "best_answers_votes":326, "question_length":786, "response_length":1280 }, { "question":"How do you filter pandas dataframes by multiple columns? To filter a DataFrame (df) by a single column, if we consider data with male and females we might: ``` males = df[df[Gender]=='Male'] ``` Question 1: But what if the data spanned multiple years and I wanted to only see males for 2014? In other languages I might do something like: ``` if A = \"Male\" and if B = \"2014\" then ``` (except I want to do this and get a subset of the original DataFrame in a new dataframe object) Question 2: How do I do this in a loop, and create a dataframe object for each unique sets of year and gender (i.e. a df for: 2013-Male, 2013-Female, 2014-Male, and 2014-Female? ``` for y in year: for g in gender: df = ..... 
```", "response":"Using & operator, don't forget to wrap the sub-statements with (): ``` males = df[(df[Gender]=='Male') & (df[Year]==2014)] ``` To store your DataFrames in a dict using a for loop: ``` from collections import defaultdict dic={} for g in ['male', 'female']: dic[g]=defaultdict(dict) for y in [2013, 2014]: dic[g][y]=df[(df[Gender]==g) & (df[Year]==y)] #store the DataFrames to a dict of dict ``` A demo for your getDF: ``` def getDF(dic, gender, year): return dic[gender][year] print(getDF(dic, 'male', 2014)) ``` ", "best_answers_score":0.8, "library_name":"pandas", "question_url":"https:\/\/stackoverflow.com\/questions\/22086116\/how-do-you-filter-pandas-dataframes-by-multiple-columns", "best_answers_votes":305, "question_length":707, "response_length":510 }, { "question":"dropping rows from dataframe based on a \"not in\" condition [duplicate] This question already has answers here: How to filter Pandas dataframe using 'in' and 'not in' like in SQL (12 answers) Closed 5 years ago. I want to drop rows from a pandas dataframe when the value of the date column is in a list of dates. The following code doesn't work: ``` a=['2015-01-01' , '2015-02-01'] df=df[df.datecolumn not in a] ``` I get the following error: ValueError: The truth value of a Series is ambiguous. Use a.empty, a.bool(), a.item(), a.any() or a.all().", "response":"You can use pandas.DataFrame.isin. pandas.DataFrame.isin will return boolean values depending on whether each element is inside the list a or not. You then invert this with the ~ to convert True to False and vice versa.
``` import pandas as pd a = ['2015-01-01' , '2015-02-01'] df = pd.DataFrame(data={'date':['2015-01-01' , '2015-02-01', '2015-03-01' , '2015-04-01', '2015-05-01' , '2015-06-01']}) print(df) # date #0 2015-01-01 #1 2015-02-01 #2 2015-03-01 #3 2015-04-01 #4 2015-05-01 #5 2015-06-01 df = df[~df['date'].isin(a)] print(df) # date #2 2015-03-01 #3 2015-04-01 #4 2015-05-01 #5 2015-06-01 ```", "best_answers_score":0.8, "library_name":"pandas", "question_url":"https:\/\/stackoverflow.com\/questions\/27965295\/dropping-rows-from-dataframe-based-on-a-not-in-condition", "best_answers_votes":333, "question_length":548, "response_length":605 }, { "question":"Pandas Merge - How to avoid duplicating columns I am attempting a merge between two data frames. Each data frame has two index levels (date, cusip). In the columns, some columns match between the two (currency, adj date) for example. What is the best way to merge these by index, but to not take two copies of currency and adj date. Each data frame is 90 columns, so I am trying to avoid writing everything out by hand. ``` df: currency adj_date data_col1 ... date cusip 2012-01-01 XSDP USD 2012-01-03 0.45 ... df2: currency adj_date data_col2 ... date cusip 2012-01-01 XSDP USD 2012-01-03 0.45 ... ``` If I do: ``` dfNew = merge(df, df2, left_index=True, right_index=True, how='outer') ``` I get ``` dfNew: currency_x adj_date_x data_col2 ... currency_y adj_date_y date cusip 2012-01-01 XSDP USD 2012-01-03 0.45 USD 2012-01-03 ``` Thank you! ...", "response":"You can work out the columns that are only in one DataFrame and use this to select a subset of columns in the merge. ``` cols_to_use = df2.columns.difference(df.columns) ``` Then perform the merge (note this is an index object but it has a handy tolist() method). 
``` dfNew = merge(df, df2[cols_to_use], left_index=True, right_index=True, how='outer') ``` This will avoid any columns clashing in the merge.", "best_answers_score":0.8, "library_name":"pandas", "question_url":"https:\/\/stackoverflow.com\/questions\/19125091\/pandas-merge-how-to-avoid-duplicating-columns", "best_answers_votes":238, "question_length":846, "response_length":406 }, { "question":"Replace None with NaN in pandas dataframe I have table x: ``` website 0 http:\/\/www.google.com\/ 1 http:\/\/www.yahoo.com 2 None ``` I want to replace python None with pandas NaN. I tried: ``` x.replace(to_replace=None, value=np.nan) ``` But I got: ``` TypeError: 'regex' must be a string or a compiled regular expression or a list or dict of strings or regular expressions, you passed a 'bool' ``` How should I go about it?", "response":"You can use DataFrame.fillna or Series.fillna which will replace the Python object None, not the string 'None'. ``` import pandas as pd import numpy as np ``` For dataframe: ``` df = df.fillna(value=np.nan) ``` For column or series: ``` df.mycol.fillna(value=np.nan, inplace=True) ```", "best_answers_score":0.8, "library_name":"pandas", "question_url":"https:\/\/stackoverflow.com\/questions\/23743460\/replace-none-with-nan-in-pandas-dataframe", "best_answers_votes":276, "question_length":420, "response_length":284 }, { "question":"Python pandas: fill a dataframe row by row The simple task of adding a row to a pandas.DataFrame object seems to be hard to accomplish. There are 3 stackoverflow questions relating to this, none of which give a working answer. Here is what I'm trying to do. I have a DataFrame of which I already know the shape as well as the names of the rows and columns. ``` >>> df = pandas.DataFrame(columns=['a','b','c','d'], index=['x','y','z']) >>> df a b c d x NaN NaN NaN NaN y NaN NaN NaN NaN z NaN NaN NaN NaN ``` Now, I have a function to compute the values of the rows iteratively. 
How can I fill in one of the rows with either a dictionary or a pandas.Series ? Here are various attempts that have failed: ``` >>> y = {'a':1, 'b':5, 'c':2, 'd':3} >>> df['y'] = y AssertionError: Length of values does not match length of index ``` Apparently it tried to add a column instead of a row. ``` >>> y = {'a':1, 'b':5, 'c':2, 'd':3} >>> df.join(y) AttributeError: 'builtin_function_or_method' object has no attribute 'is_unique' ``` Very uninformative error message. ``` >>> y = {'a':1, 'b':5, 'c':2, 'd':3} >>> df.set_value(index='y', value=y) TypeError: set_value() takes exactly 4 arguments (3 given) ``` Apparently that is only for setting individual values in the dataframe. ``` >>> y = {'a':1, 'b':5, 'c':2, 'd':3} >>> df.append(y) Exception: Can only append a Series if ignore_index=True ``` Well, I don't want to ignore the index, otherwise here is the result: ``` >>> df.append(y, ignore_index=True) a b c d 0 NaN NaN NaN NaN 1 NaN NaN NaN NaN 2 NaN NaN NaN NaN 3 1 5 2 3 ``` It did align the column names with the values, but lost the row labels. ``` >>> y = {'a':1, 'b':5, 'c':2, 'd':3} >>> df.ix['y'] = y >>> df a b \\ x NaN NaN y {'a': 1, 'c': 2, 'b': 5, 'd': 3} {'a': 1, 'c': 2, 'b': 5, 'd': 3} z NaN NaN c d x NaN NaN y {'a': 1, 'c': 2, 'b': 5, 'd': 3} {'a': 1, 'c': 2, 'b': 5, 'd': 3} z NaN NaN ``` That also failed miserably. 
So how do you do it ?", "response":"df['y'] will set a column since you want to set a row, use .loc Note that .ix is equivalent here, yours failed because you tried to assign a dictionary to each element of the row y probably not what you want; converting to a Series tells pandas that you want to align the input (for example you then don't have to to specify all of the elements) ``` In [6]: import pandas as pd In [7]: df = pd.DataFrame(columns=['a','b','c','d'], index=['x','y','z']) In [8]: df.loc['y'] = pd.Series({'a':1, 'b':5, 'c':2, 'd':3}) In [9]: df Out[9]: a b c d x NaN NaN NaN NaN y 1 5 2 3 z NaN NaN NaN NaN ```", "best_answers_score":0.8, "library_name":"pandas", "question_url":"https:\/\/stackoverflow.com\/questions\/17091769\/python-pandas-fill-a-dataframe-row-by-row", "best_answers_votes":139, "question_length":1952, "response_length":590 }, { "question":"how to check the dtype of a column in python pandas I need to use different functions to treat numeric columns and string columns. What I am doing now is really dumb: ``` allc = list((agg.loc[:, (agg.dtypes==np.float64)|(agg.dtypes==np.int)]).columns) for y in allc: treat_numeric(agg[y]) allc = list((agg.loc[:, (agg.dtypes!=np.float64)&(agg.dtypes!=np.int)]).columns) for y in allc: treat_str(agg[y]) ``` Is there a more elegant way to do this? E.g. 
``` for y in agg.columns: if(dtype(agg[y]) == 'string'): treat_str(agg[y]) elif(dtype(agg[y]) != 'string'): treat_numeric(agg[y]) ```", "response":"In pandas 0.20.2 you can do: ``` from pandas.api.types import is_string_dtype from pandas.api.types import is_numeric_dtype is_string_dtype(df['A']) >>>> True is_numeric_dtype(df['B']) >>>> True ``` So your code becomes: ``` for y in agg.columns: if (is_string_dtype(agg[y])): treat_str(agg[y]) elif (is_numeric_dtype(agg[y])): treat_numeric(agg[y]) ```", "best_answers_score":0.8, "library_name":"pandas", "question_url":"https:\/\/stackoverflow.com\/questions\/22697773\/how-to-check-the-dtype-of-a-column-in-python-pandas", "best_answers_votes":172, "question_length":585, "response_length":353 }, { "question":"Could pandas use column as index? I have a spreadsheet like this: ``` Locality 2005 2006 2007 2008 2009 ABBOTSFORD 427000 448000 602500 600000 638500 ABERFELDIE 534000 600000 735000 710000 775000 AIREYS INLET459000 440000 430000 517500 512500 ``` I don't want to manually swap the column with the row. Could it be possible to use pandas reading data to a list as this: ``` data['ABBOTSFORD']=[427000,448000,602500,600000,638500] data['ABERFELDIE']=[534000,600000,735000,710000,775000] data['AIREYS INLET']=[459000,440000,430000,517500,512500] ```", "response":"Yes, with pandas.DataFrame.set_index you can make 'Locality' your row index. ``` data.set_index('Locality', inplace=True) ``` If inplace=True is not provided, set_index returns the modified dataframe as a result. 
Example: ``` > import pandas as pd > df = pd.DataFrame([['ABBOTSFORD', 427000, 448000], ['ABERFELDIE', 534000, 600000]], columns=['Locality', 2005, 2006]) > df Locality 2005 2006 0 ABBOTSFORD 427000 448000 1 ABERFELDIE 534000 600000 > df.set_index('Locality', inplace=True) > df 2005 2006 Locality ABBOTSFORD 427000 448000 ABERFELDIE 534000 600000 > df.loc['ABBOTSFORD'] 2005 427000 2006 448000 Name: ABBOTSFORD, dtype: int64 > df.loc['ABBOTSFORD'][2005] 427000 > df.loc['ABBOTSFORD'].values array([427000, 448000]) > df.loc['ABBOTSFORD'].tolist() [427000, 448000] ```", "best_answers_score":0.8, "library_name":"pandas", "question_url":"https:\/\/stackoverflow.com\/questions\/38542419\/could-pandas-use-column-as-index", "best_answers_votes":351, "question_length":546, "response_length":781 }, { "question":"Move column by name to front of table in pandas Here is my df: ``` Net Upper Lower Mid Zsore Answer option More than once a day 0% 0.22% -0.12% 2 65 Once a day 0% 0.32% -0.19% 3 45 Several times a week 2% 2.45% 1.10% 4 78 Once a week 1% 1.63% -0.40% 6 65 ``` How can I move a column by name (\"Mid\") to the front of the table, index 0. 
This is what the result should look like: ``` Mid Upper Lower Net Zsore Answer option More than once a day 2 0.22% -0.12% 0% 65 Once a day 3 0.32% -0.19% 0% 45 Several times a week 4 2.45% 1.10% 2% 78 Once a week 6 1.63% -0.40% 1% 65 ``` My current code moves the column by index using df.columns.tolist() but I'd like to shift it by name.", "response":"We can use loc to reorder by passing a list: ``` In [27]: # get a list of columns cols = list(df) # move the column to head of list using index, pop and insert cols.insert(0, cols.pop(cols.index('Mid'))) cols Out[27]: ['Mid', 'Net', 'Upper', 'Lower', 'Zsore'] In [28]: # use ix to reorder df = df.loc[:, cols] df Out[28]: Mid Net Upper Lower Zsore Answer_option More_than_once_a_day 2 0% 0.22% -0.12% 65 Once_a_day 3 0% 0.32% -0.19% 45 Several_times_a_week 4 2% 2.45% 1.10% 78 Once_a_week 6 1% 1.63% -0.40% 65 ``` Another method is to take a reference to the column and reinsert it at the front: ``` In [39]: mid = df['Mid'] df.drop(labels=['Mid'], axis=1,inplace = True) df.insert(0, 'Mid', mid) df Out[39]: Mid Net Upper Lower Zsore Answer_option More_than_once_a_day 2 0% 0.22% -0.12% 65 Once_a_day 3 0% 0.32% -0.19% 45 Several_times_a_week 4 2% 2.45% 1.10% 78 Once_a_week 6 1% 1.63% -0.40% 65 ``` You can, with very early versions of Pandas, also use ix to achieve the same results: ``` df = df.ix[:, cols] ``` But ix was deprecated from pandas 0.20.0 onwards and was discontinued as of Pandas 1.0.", "best_answers_score":0.8, "library_name":"pandas", "question_url":"https:\/\/stackoverflow.com\/questions\/25122099\/move-column-by-name-to-front-of-table-in-pandas", "best_answers_votes":163, "question_length":674, "response_length":1102 }, { "question":"Get pandas.read_csv to read empty values as empty string instead of nan I'm using the pandas library to read in some CSV data. In my data, certain columns contain strings. The string \"nan\" is a possible value, as is an empty string. 
I managed to get pandas to read \"nan\" as a string, but I can't figure out how to get it not to read an empty value as NaN. Here's sample data and output ``` One,Two,Three a,1,one b,2,two ,3,three d,4,nan e,5,five nan,6, g,7,seven >>> pandas.read_csv('test.csv', na_values={'One': [], \"Three\": []}) One Two Three 0 a 1 one 1 b 2 two 2 NaN 3 three 3 d 4 nan 4 e 5 five 5 nan 6 NaN 6 g 7 seven ``` It correctly reads \"nan\" as the string \"nan', but still reads the empty cells as NaN. I tried passing in str in the converters argument to read_csv (with converters={'One': str})), but it still reads the empty cells as NaN. I realize I can fill the values after reading, with fillna, but is there really no way to tell pandas that an empty cell in a particular CSV column should be read as an empty string instead of NaN?", "response":"I was still confused after reading the other answers and comments. But the answer now seems simpler, so here you go. Since Pandas version 0.9 (from 2012), you can read your csv with empty cells interpreted as empty strings by simply setting keep_default_na=False: ``` pd.read_csv('test.csv', keep_default_na=False) ``` This issue is more clearly explained in More consistent na_values handling in read_csv \u00b7 Issue #1657 \u00b7 pandas-dev\/pandas That was fixed on on Aug 19, 2012 for Pandas version 0.9 in BUG: more consistent na_values #1657 \u00b7 pandas-dev\/pandas@d9abf68", "best_answers_score":0.8, "library_name":"pandas", "question_url":"https:\/\/stackoverflow.com\/questions\/10867028\/get-pandas-read-csv-to-read-empty-values-as-empty-string-instead-of-nan", "best_answers_votes":246, "question_length":1049, "response_length":564 }, { "question":"How to read a Parquet file into Pandas DataFrame? How to read a modestly sized Parquet data-set into an in-memory Pandas DataFrame without setting up a cluster computing infrastructure such as Hadoop or Spark? 
This is only a moderate amount of data that I would like to read in-memory with a simple Python script on a laptop. The data does not reside on HDFS. It is either on the local file system or possibly in S3. I do not want to spin up and configure other services like Hadoop, Hive or Spark. I thought Blaze\/Odo would have made this possible: the Odo documentation mentions Parquet, but the examples seem all to be going through an external Hive runtime.", "response":"pandas 0.21 introduces new functions for Parquet: ``` import pandas as pd pd.read_parquet('example_pa.parquet', engine='pyarrow') ``` or ``` import pandas as pd pd.read_parquet('example_fp.parquet', engine='fastparquet') ``` The above link explains: These engines are very similar and should read\/write nearly identical parquet format files. These libraries differ by having different underlying dependencies (fastparquet by using numba, while pyarrow uses a c-library).", "best_answers_score":0.8, "library_name":"pandas", "question_url":"https:\/\/stackoverflow.com\/questions\/33813815\/how-to-read-a-parquet-file-into-pandas-dataframe", "best_answers_votes":249, "question_length":661, "response_length":470 }, { "question":"Use .corr to get the correlation between two columns I have the following pandas dataframe Top15: I create a column that estimates the number of citable documents per person: ``` Top15['PopEst'] = Top15['Energy Supply'] \/ Top15['Energy Supply per Capita'] Top15['Citable docs per Capita'] = Top15['Citable documents'] \/ Top15['PopEst'] ``` I want to know the correlation between the number of citable documents per capita and the energy supply per capita. 
So I use the .corr() method (Pearson's correlation): ``` data = Top15[['Citable docs per Capita','Energy Supply per Capita']] correlation = data.corr(method='pearson') ``` I want to return a single number, but the result is:", "response":"Without actual data it is hard to answer the question but I guess you are looking for something like this: ``` Top15['Citable docs per Capita'].corr(Top15['Energy Supply per Capita']) ``` That calculates the correlation between your two columns 'Citable docs per Capita' and 'Energy Supply per Capita'. To give an example: ``` import pandas as pd df = pd.DataFrame({'A': range(4), 'B': [2*i for i in range(4)]}) A B 0 0 0 1 1 2 2 2 4 3 3 6 ``` Then ``` df['A'].corr(df['B']) ``` gives 1 as expected. Now, if you change a value, e.g. ``` df.loc[2, 'B'] = 4.5 A B 0 0 0.0 1 1 2.0 2 2 4.5 3 3 6.0 ``` the command ``` df['A'].corr(df['B']) ``` returns ``` 0.99586 ``` which is still close to 1, as expected. If you apply .corr() directly to your dataframe, it will return all pairwise correlations between your columns; that's why you then observe 1s at the diagonal of your matrix (each column is perfectly correlated with itself). ``` df.corr() ``` will therefore return ``` A B A 1.000000 0.995862 B 0.995862 1.000000 ``` In the graphic you show, only the upper left corner of the correlation matrix is represented (I assume). There can be cases, where you get NaNs in your solution - check this post for an example. If you want to filter entries above\/below a certain threshold, you can check this question. 
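For instance, a minimal sketch of such thresholding via boolean masking (the frame below is a made-up stand-in, not the question's data):

```python
import pandas as pd

# toy frame standing in for the question's data (made-up values)
df = pd.DataFrame({'A': range(4), 'B': [0, 2, 4.5, 6], 'C': [3, 1, 4, 1]})

corr = df.corr()
# mask out weak correlations: entries with |r| <= 0.9 become NaN
strong = corr[corr.abs() > 0.9]
print(strong)
```

Pairs below the threshold show up as NaN, so only the strong correlations remain visible.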
If you want to plot a heatmap of the correlation coefficients, you can check this answer and if you then run into the issue with overlapping axis-labels check the following post.", "best_answers_score":0.8, "library_name":"pandas", "question_url":"https:\/\/stackoverflow.com\/questions\/42579908\/use-corr-to-get-the-correlation-between-two-columns", "best_answers_votes":311, "question_length":680, "response_length":1486 }, { "question":"How to create a DataFrame of random integers with Pandas? I know that if I use randn, the following code gives me what I am looking for, but with elements from a normal distribution. But what if I just wanted random integers? ```py import pandas as pd import numpy as np df = pd.DataFrame(np.random.randn(100, 4), columns=list('ABCD')) ``` randint works by providing a range, but not an array like randn. So how do I do this with random integers between some range?", "response":"numpy.random.randint accepts a third argument (size) , in which you can specify the size of the output array. You can use this to create your DataFrame - ``` df = pd.DataFrame(np.random.randint(0,100,size=(100, 4)), columns=list('ABCD')) ``` Here - np.random.randint(0,100,size=(100, 4)) - creates an output array of size (100,4) with random integer elements between [0,100) . Demo - ```python import numpy as np import pandas as pd df = pd.DataFrame(np.random.randint(0,100,size=(100, 4)), columns=list('ABCD')) ``` which produces: ```none A B C D 0 45 88 44 92 1 62 34 2 86 2 85 65 11 31 3 74 43 42 56 4 90 38 34 93 5 0 94 45 10 6 58 23 23 60 .. .. .. .. .. 
```", "best_answers_score":0.8, "library_name":"pandas", "question_url":"https:\/\/stackoverflow.com\/questions\/32752292\/how-to-create-a-dataframe-of-random-integers-with-pandas", "best_answers_votes":295, "question_length":465, "response_length":663 }, { "question":"How to select all columns whose names start with X in a pandas DataFrame I have a DataFrame: ``` import pandas as pd import numpy as np df = pd.DataFrame({'foo.aa': [1, 2.1, np.nan, 4.7, 5.6, 6.8], 'foo.fighters': [0, 1, np.nan, 0, 0, 0], 'foo.bars': [0, 0, 0, 0, 0, 1], 'bar.baz': [5, 5, 6, 5, 5.6, 6.8], 'foo.fox': [2, 4, 1, 0, 0, 5], 'nas.foo': ['NA', 0, 1, 0, 0, 0], 'foo.manchu': ['NA', 0, 0, 0, 0, 0],}) ``` I want to select values of 1 in columns starting with foo.. Is there a better way to do it other than: ``` df2 = df[(df['foo.aa'] == 1)| (df['foo.fighters'] == 1)| (df['foo.bars'] == 1)| (df['foo.fox'] == 1)| (df['foo.manchu'] == 1) ] ``` Something similar to writing something like: ``` df2= df[df.STARTS_WITH_FOO == 1] ``` The answer should print out a DataFrame like this: ``` bar.baz foo.aa foo.bars foo.fighters foo.fox foo.manchu nas.foo 0 5.0 1.0 0 0 2 NA NA 1 5.0 2.1 0 1 4 0 0 2 6.0 NaN 0 NaN 1 0 1 5 6.8 6.8 1 0 5 0 0 [4 rows x 7 columns] ```", "response":"Just perform a list comprehension to create your columns: ``` In [28]: filter_col = [col for col in df if col.startswith('foo')] filter_col Out[28]: ['foo.aa', 'foo.bars', 'foo.fighters', 'foo.fox', 'foo.manchu'] In [29]: df[filter_col] Out[29]: foo.aa foo.bars foo.fighters foo.fox foo.manchu 0 1.0 0 0 2 NA 1 2.1 0 1 4 0 2 NaN 0 NaN 1 0 3 4.7 0 0 0 0 4 5.6 0 0 0 0 5 6.8 1 0 5 0 ``` Another method is to create a series from the columns and use the vectorised str method startswith: ``` In [33]: df[df.columns[pd.Series(df.columns).str.startswith('foo')]] Out[33]: foo.aa foo.bars foo.fighters foo.fox foo.manchu 0 1.0 0 0 2 NA 1 2.1 0 1 4 0 2 NaN 0 NaN 1 0 3 4.7 0 0 0 0 4 5.6 0 0 0 0 5 6.8 1 0 5 0 ``` In order to achieve what you want you need 
to add the following to filter the values that don't meet your ==1 criteria: ``` In [36]: df[df[df.columns[pd.Series(df.columns).str.startswith('foo')]]==1] Out[36]: bar.baz foo.aa foo.bars foo.fighters foo.fox foo.manchu nas.foo 0 NaN 1 NaN NaN NaN NaN NaN 1 NaN NaN NaN 1 NaN NaN NaN 2 NaN NaN NaN NaN 1 NaN NaN 3 NaN NaN NaN NaN NaN NaN NaN 4 NaN NaN NaN NaN NaN NaN NaN 5 NaN NaN 1 NaN NaN NaN NaN ``` EDIT OK after seeing what you want the convoluted answer is this: ``` In [72]: df.loc[df[df[df.columns[pd.Series(df.columns).str.startswith('foo')]] == 1].dropna(how='all', axis=0).index] Out[72]: bar.baz foo.aa foo.bars foo.fighters foo.fox foo.manchu nas.foo 0 5.0 1.0 0 0 2 NA NA 1 5.0 2.1 0 1 4 0 0 2 6.0 NaN 0 NaN 1 0 1 5 6.8 6.8 1 0 5 0 0 ```", "best_answers_score":0.8, "library_name":"pandas", "question_url":"https:\/\/stackoverflow.com\/questions\/27275236\/how-to-select-all-columns-whose-names-start-with-x-in-a-pandas-dataframe", "best_answers_votes":261, "question_length":966, "response_length":1503 }, { "question":"What is the difference between using loc and using just square brackets to filter for columns in Pandas\/Python? I've noticed three methods of selecting a column in a Pandas DataFrame: First method of selecting a column using loc: ``` df_new = df.loc[:, 'col1'] ``` Second method - seems simpler and faster: ``` df_new = df['col1'] ``` Third method - most convenient: ``` df_new = df.col1 ``` Is there a difference between these three methods? I don't think so, in which case I'd rather use the third method. I'm mostly curious as to why there appear to be three methods for doing the same thing.", "response":"In the following situations, they behave the same: Selecting a single column (df['A'] is the same as df.loc[:, 'A'] -> selects column A) Selecting a list of columns (df[['A', 'B', 'C']] is the same as df.loc[:, ['A', 'B', 'C']] -> selects columns A, B and C) Slicing by rows (df[1:3] is the same as df.iloc[1:3] -> selects rows 1 and 2. 
Note, however, if you slice rows with loc, instead of iloc, you'll get rows 1, 2 and 3 assuming you have a RangeIndex. See details here.) However, [] does not work in the following situations: You can select a single row with df.loc[row_label] You can select a list of rows with df.loc[[row_label1, row_label2]] You can slice columns with df.loc[:, 'A':'C'] These three cannot be done with []. More importantly, if your selection involves both rows and columns, then assignment becomes problematic. ``` df[1:3]['A'] = 5 ``` This selects rows 1 and 2 then selects column 'A' of the returning object and assigns value 5 to it. The problem is, the returning object might be a copy so this may not change the actual DataFrame. This raises SettingWithCopyWarning. The correct way of making this assignment is: ``` df.loc[1:3, 'A'] = 5 ``` With .loc, you are guaranteed to modify the original DataFrame. It also allows you to slice columns (df.loc[:, 'C':'F']), select a single row (df.loc[5]), and select a list of rows (df.loc[[1, 2, 5]]). Also note that these two were not included in the API at the same time. .loc was added much later as a more powerful and explicit indexer. See unutbu's answer for more detail. Note: Getting columns with [] vs . is a completely different topic. . is only there for convenience. It only allows accessing columns whose names are valid Python identifiers (i.e. they cannot contain spaces, they cannot be composed of numbers...). It cannot be used when the names conflict with Series\/DataFrame methods. It also cannot be used for non-existing columns (i.e. the assignment df.a = 1 won't work if there is no column a). Other than that, . 
and [] are the same.", "best_answers_score":0.8, "library_name":"pandas", "question_url":"https:\/\/stackoverflow.com\/questions\/48409128\/what-is-the-difference-between-using-loc-and-using-just-square-brackets-to-filte", "best_answers_votes":190, "question_length":595, "response_length":2025 }, { "question":"How to drop rows from pandas data frame that contains a particular string in a particular column? [duplicate] This question already has answers here: Search for \"does-not-contain\" on a DataFrame in pandas (11 answers) Closed 6 years ago. I have a very large data frame in python and I want to drop all rows that have a particular string inside a particular column. For example, I want to drop all rows which have the string \"XYZ\" as a substring in the column C of the data frame. Can this be implemented in an efficient way using .drop() method?", "response":"pandas has vectorized string operations, so you can just filter out the rows that contain the string you don't want: ``` In [91]: df = pd.DataFrame(dict(A=[5,3,5,6], C=[\"foo\",\"bar\",\"fooXYZbar\", \"bat\"])) In [92]: df Out[92]: A C 0 5 foo 1 3 bar 2 5 fooXYZbar 3 6 bat In [93]: df[~df.C.str.contains(\"XYZ\")] Out[93]: A C 0 5 foo 1 3 bar 3 6 bat ```", "best_answers_score":0.8, "library_name":"pandas", "question_url":"https:\/\/stackoverflow.com\/questions\/28679930\/how-to-drop-rows-from-pandas-data-frame-that-contains-a-particular-string-in-a-p", "best_answers_votes":315, "question_length":545, "response_length":345 }, { "question":"cartesian product in pandas I have two pandas dataframes: ``` from pandas import DataFrame df1 = DataFrame({'col1':[1,2],'col2':[3,4]}) df2 = DataFrame({'col3':[5,6]}) ``` What is the best practice to get their cartesian product (of course without writing it explicitly like me)? 
``` #df1, df2 cartesian product df_cartesian = DataFrame({'col1':[1,2,1,2],'col2':[3,4,3,4],'col3':[5,5,6,6]}) ```", "response":"In recent versions of Pandas (>= 1.2) this is built into merge so you can do: ``` from pandas import DataFrame df1 = DataFrame({'col1':[1,2],'col2':[3,4]}) df2 = DataFrame({'col3':[5,6]}) df1.merge(df2, how='cross') ``` This is equivalent to the previous pandas < 1.2 answer but is easier to read. For pandas < 1.2: If you have a key that is repeated for each row, then you can produce a cartesian product using merge (like you would in SQL). ``` from pandas import DataFrame, merge df1 = DataFrame({'key':[1,1], 'col1':[1,2],'col2':[3,4]}) df2 = DataFrame({'key':[1,1], 'col3':[5,6]}) merge(df1, df2,on='key')[['col1', 'col2', 'col3']] ``` Output: ``` col1 col2 col3 0 1 3 5 1 1 3 6 2 2 4 5 3 2 4 6 ``` See here for the documentation: http:\/\/pandas.pydata.org\/pandas-docs\/stable\/merging.html", "best_answers_score":0.8, "library_name":"pandas", "question_url":"https:\/\/stackoverflow.com\/questions\/13269890\/cartesian-product-in-pandas", "best_answers_votes":200, "question_length":394, "response_length":792 }, { "question":"How to rotate x-axis tick labels in a pandas plot With the following code: ``` import matplotlib matplotlib.style.use('ggplot') import matplotlib.pyplot as plt import pandas as pd df = pd.DataFrame({ 'celltype':[\"foo\",\"bar\",\"qux\",\"woz\"], 's1':[5,9,1,7], 's2':[12,90,13,87]}) df = df[[\"celltype\",\"s1\",\"s2\"]] df.set_index([\"celltype\"],inplace=True) df.plot(kind='bar',alpha=0.75) plt.xlabel(\"\") ``` I made this plot: How can I rotate the x-axis tick labels to 0 degrees? 
I tried adding this, but it did not work: ``` plt.set_xticklabels(df.index,rotation=90) ```", "response":"Pass param rot=0 to rotate the xticklabels: ``` import matplotlib matplotlib.style.use('ggplot') import matplotlib.pyplot as plt import pandas as pd df = pd.DataFrame({ 'celltype':[\"foo\",\"bar\",\"qux\",\"woz\"], 's1':[5,9,1,7], 's2':[12,90,13,87]}) df = df[[\"celltype\",\"s1\",\"s2\"]] df.set_index([\"celltype\"],inplace=True) df.plot(kind='bar',alpha=0.75, rot=0) plt.xlabel(\"\") plt.show() ``` yields plot:", "best_answers_score":0.8, "library_name":"pandas", "question_url":"https:\/\/stackoverflow.com\/questions\/32244019\/how-to-rotate-x-axis-tick-labels-in-a-pandas-plot", "best_answers_votes":329, "question_length":556, "response_length":396 }, { "question":"Convert pandas timezone-aware DateTimeIndex to naive timestamp, but in certain timezone You can use the function tz_localize to make a Timestamp or DateTimeIndex timezone aware, but how can you do the opposite: how can you convert a timezone-aware Timestamp to a naive one, while preserving its timezone? An example: ``` In [82]: t = pd.date_range(start=\"2013-05-18 12:00:00\", periods=10, freq='s', tz=\"Europe\/Brussels\") In [83]: t Out[83]: [2013-05-18 12:00:00, ..., 2013-05-18 12:00:09] Length: 10, Freq: S, Timezone: Europe\/Brussels ``` I could remove the timezone by setting it to None, but then the result is converted to UTC (12 o'clock became 10): ``` In [86]: t.tz = None In [87]: t Out[87]: [2013-05-18 10:00:00, ..., 2013-05-18 10:00:09] Length: 10, Freq: S, Timezone: None ``` Is there another way I can convert a DateTimeIndex to timezone naive, but while preserving the timezone it was set in? Some context on the reason I am asking this: I want to work with timezone-naive timeseries (to avoid the extra hassle with timezones, and I do not need them for the case I am working on).
As all my other data are timezone naive (but represented in my local timezone), I want to convert this timeseries to naive to further work with it, but it also has to be represented in my local timezone (so just remove the timezone info, without converting the user-visible time to UTC). I know the time is actually internally stored as UTC and only converted to another timezone when you represent it, so there has to be some kind of conversion when I want to \"delocalize\" it. For example, with the Python datetime module you can \"remove\" the timezone like this: ``` In [119]: d = pd.Timestamp(\"2013-05-18 12:00:00\", tz=\"Europe\/Brussels\") In [120]: d Out[120]: In [121]: d.replace(tzinfo=None) Out[121]: ``` So, based on this, I could do the following, but I suppose this will not be very efficient when working with a larger timeseries: ``` In [124]: t Out[124]: [2013-05-18 12:00:00, ..., 2013-05-18 12:00:09] Length: 10, Freq: S, Timezone: Europe\/Brussels In [125]: pd.DatetimeIndex([i.replace(tzinfo=None) for i in t]) Out[125]: [2013-05-18 12:00:00, ..., 2013-05-18 12:00:09] Length: 10, Freq: None, Timezone: None ```
See the whatsnew entry: http:\/\/pandas.pydata.org\/pandas-docs\/stable\/whatsnew.html#timezone-handling-improvements So with my example from above: ``` In [4]: t = pd.date_range(start=\"2013-05-18 12:00:00\", periods=2, freq='H', tz= \"Europe\/Brussels\") In [5]: t Out[5]: DatetimeIndex(['2013-05-18 12:00:00+02:00', '2013-05-18 13:00:00+02:00'], dtype='datetime64[ns, Europe\/Brussels]', freq='H') ``` using tz_localize(None) removes the timezone information resulting in naive local time: ``` In [6]: t.tz_localize(None) Out[6]: DatetimeIndex(['2013-05-18 12:00:00', '2013-05-18 13:00:00'], dtype='datetime64[ns]', freq='H') ``` Further, you can also use tz_convert(None) to remove the timezone information but converting to UTC, so yielding naive UTC time: ``` In [7]: t.tz_convert(None) Out[7]: DatetimeIndex(['2013-05-18 10:00:00', '2013-05-18 11:00:00'], dtype='datetime64[ns]', freq='H') ``` This is much more performant than the datetime.replace solution: ``` In [31]: t = pd.date_range(start=\"2013-05-18 12:00:00\", periods=10000, freq='H', tz=\"Europe\/Brussels\") In [32]: %timeit t.tz_localize(None) 1000 loops, best of 3: 233 \u00b5s per loop In [33]: %timeit pd.DatetimeIndex([i.replace(tzinfo=None) for i in t]) 10 loops, best of 3: 99.7 ms per loop ```", "best_answers_score":0.8, "library_name":"pandas", "question_url":"https:\/\/stackoverflow.com\/questions\/16628819\/convert-pandas-timezone-aware-datetimeindex-to-naive-timestamp-but-in-certain-t", "best_answers_votes":214, "question_length":2332, "response_length":1445 }, { "question":"How to save a new sheet in an existing excel file, using Pandas? I want to use excel files to store data elaborated with python. My problem is that I can't add sheets to an existing excel file. 
Here I suggest a sample code to work with in order to reach this issue ``` import pandas as pd import numpy as np path = r\"C:\\Users\\fedel\\Desktop\\excelData\\PhD_data.xlsx\" x1 = np.random.randn(100, 2) df1 = pd.DataFrame(x1) x2 = np.random.randn(100, 2) df2 = pd.DataFrame(x2) writer = pd.ExcelWriter(path, engine = 'xlsxwriter') df1.to_excel(writer, sheet_name = 'x1') df2.to_excel(writer, sheet_name = 'x2') writer.save() writer.close() ``` This code saves two DataFrames to two sheets, named \"x1\" and \"x2\" respectively. If I create two new DataFrames and try to use the same code to add two new sheets, 'x3' and 'x4', the original data is lost. ``` import pandas as pd import numpy as np path = r\"C:\\Users\\fedel\\Desktop\\excelData\\PhD_data.xlsx\" x3 = np.random.randn(100, 2) df3 = pd.DataFrame(x3) x4 = np.random.randn(100, 2) df4 = pd.DataFrame(x4) writer = pd.ExcelWriter(path, engine = 'xlsxwriter') df3.to_excel(writer, sheet_name = 'x3') df4.to_excel(writer, sheet_name = 'x4') writer.save() writer.close() ``` I want an excel file with four sheets: 'x1', 'x2', 'x3', 'x4'. I know that 'xlsxwriter' is not the only \"engine\", there is 'openpyxl'. I also saw there are already other people that have written about this issue, but still I can't understand how to do that. Here a code taken from this link ``` import pandas from openpyxl import load_workbook book = load_workbook('Masterfile.xlsx') writer = pandas.ExcelWriter('Masterfile.xlsx', engine='openpyxl') writer.book = book writer.sheets = dict((ws.title, ws) for ws in book.worksheets) data_filtered.to_excel(writer, \"Main\", cols=['Diff1', 'Diff2']) writer.save() ``` They say that it works, but it is hard to figure out how. I don't understand what \"ws.title\", \"ws\", and \"dict\" are in this context. Which is the best way to save \"x1\" and \"x2\", then close the file, open it again and add \"x3\" and \"x4\"?", "response":"Thank you. 
I believe that a complete example could be good for anyone else who has the same issue: ``` import pandas as pd import numpy as np path = r\"C:\\Users\\fedel\\Desktop\\excelData\\PhD_data.xlsx\" x1 = np.random.randn(100, 2) df1 = pd.DataFrame(x1) x2 = np.random.randn(100, 2) df2 = pd.DataFrame(x2) writer = pd.ExcelWriter(path, engine = 'xlsxwriter') df1.to_excel(writer, sheet_name = 'x1') df2.to_excel(writer, sheet_name = 'x2') writer.close() ``` Here I generate an Excel file; from my understanding, it does not really matter whether it is generated via the \"xlsxwriter\" or the \"openpyxl\" engine. When I want to write without losing the original data: ``` import pandas as pd import numpy as np from openpyxl import load_workbook path = r\"C:\\Users\\fedel\\Desktop\\excelData\\PhD_data.xlsx\" book = load_workbook(path) writer = pd.ExcelWriter(path, engine = 'openpyxl') writer.book = book x3 = np.random.randn(100, 2) df3 = pd.DataFrame(x3) x4 = np.random.randn(100, 2) df4 = pd.DataFrame(x4) df3.to_excel(writer, sheet_name = 'x3') df4.to_excel(writer, sheet_name = 'x4') writer.close() ``` This code does the job!", "best_answers_score":0.8, "library_name":"pandas", "question_url":"https:\/\/stackoverflow.com\/questions\/42370977\/how-to-save-a-new-sheet-in-an-existing-excel-file-using-pandas", "best_answers_votes":223, "question_length":2058, "response_length":1122 }, { "question":"Give column name when read csv file pandas This is the example of my dataset. ``` >>> user1 = pd.read_csv('dataset\/1.csv') >>> print(user1) 0 0.69464 3.1735 7.5048 0 0.030639 0.14982 3.48680 9.2755 1 0.069763 -0.29965 1.94770 9.1120 2 0.099823 -1.68890 1.41650 10.1200 3 0.129820 -2.17930 0.95342 10.9240 4 0.159790 -2.30180 0.23155 10.6510 5 0.189820 -1.41650 1.18500 11.0730 ``` How can I add the column names [TIME, X, Y, Z] so that the first row of data is pushed down instead of being used as the header?
The desired output is like this: ``` TIME X Y Z 0 0 0.69464 3.1735 7.5048 1 0.030639 0.14982 3.48680 9.2755 2 0.069763 -0.29965 1.94770 9.1120 3 0.099823 -1.68890 1.41650 10.1200 4 0.129820 -2.17930 0.95342 10.9240 5 0.159790 -2.30180 0.23155 10.6510 6 0.189820 -1.41650 1.18500 11.0730 ```", "response":"I'd do it like this: ``` colnames=['TIME', 'X', 'Y', 'Z'] user1 = pd.read_csv('dataset\/1.csv', names=colnames, header=None) ```", "best_answers_score":0.8, "library_name":"pandas", "question_url":"https:\/\/stackoverflow.com\/questions\/31645466\/give-column-name-when-read-csv-file-pandas", "best_answers_votes":272, "question_length":771, "response_length":127 }, { "question":"Drop rows containing empty cells from a pandas DataFrame I have a pd.DataFrame that was created by parsing some excel spreadsheets. One of its columns has empty cells. For example, below is the output for the frequency of that column; 32320 records have missing values for Tenant. ``` >>> value_counts(Tenant, normalize=False) 32320 Thunderhead 8170 Big Data Others 5700 Cloud Cruiser 5700 Partnerpedia 5700 Comcast 5700 SDP 5700 Agora 5700 dtype: int64 ``` I am trying to drop rows where Tenant is missing; however, the .isnull() option does not recognize the missing values. ``` >>> df['Tenant'].isnull().sum() 0 ``` The column has data type \"Object\". What is happening in this case? How can I drop records where Tenant is missing?", "response":"Pandas will recognise a value as null if it is an np.nan object, which will print as NaN in the DataFrame. Your missing values are probably empty strings, which Pandas doesn't recognise as null. To fix this, you can convert the empty strings (or whatever is in your empty cells) to np.nan objects using replace(), and then call dropna() on your DataFrame to delete rows with null tenants.
To demonstrate, we create a DataFrame with some random values and some empty strings in a Tenants column: ``` >>> import pandas as pd >>> import numpy as np >>> >>> df = pd.DataFrame(np.random.randn(10, 2), columns=list('AB')) >>> df['Tenant'] = np.random.choice(['Babar', 'Rataxes', ''], 10) >>> print df A B Tenant 0 -0.588412 -1.179306 Babar 1 -0.008562 0.725239 2 0.282146 0.421721 Rataxes 3 0.627611 -0.661126 Babar 4 0.805304 -0.834214 5 -0.514568 1.890647 Babar 6 -1.188436 0.294792 Rataxes 7 1.471766 -0.267807 Babar 8 -1.730745 1.358165 Rataxes 9 0.066946 0.375640 ``` Now we replace any empty strings in the Tenants column with np.nan objects, like so: ``` >>> df['Tenant'].replace('', np.nan, inplace=True) >>> print df A B Tenant 0 -0.588412 -1.179306 Babar 1 -0.008562 0.725239 NaN 2 0.282146 0.421721 Rataxes 3 0.627611 -0.661126 Babar 4 0.805304 -0.834214 NaN 5 -0.514568 1.890647 Babar 6 -1.188436 0.294792 Rataxes 7 1.471766 -0.267807 Babar 8 -1.730745 1.358165 Rataxes 9 0.066946 0.375640 NaN ``` Now we can drop the null values: ``` >>> df.dropna(subset=['Tenant'], inplace=True) >>> print df A B Tenant 0 -0.588412 -1.179306 Babar 2 0.282146 0.421721 Rataxes 3 0.627611 -0.661126 Babar 5 -0.514568 1.890647 Babar 6 -1.188436 0.294792 Rataxes 7 1.471766 -0.267807 Babar 8 -1.730745 1.358165 Rataxes ```", "best_answers_score":0.8, "library_name":"pandas", "question_url":"https:\/\/stackoverflow.com\/questions\/29314033\/drop-rows-containing-empty-cells-from-a-pandas-dataframe", "best_answers_votes":313, "question_length":727, "response_length":1707 }, { "question":"How to add a suffix (or prefix) to each column name? I want to add _x suffix to each column name like so: ``` featuresA = myPandasDataFrame.columns.values + '_x' ``` How do I do this? Additionally, if I wanted to add x_ as a prefix, how would the solution change?
``` df = df.add_suffix('_some_suffix') ``` As it is a function that is called on a DataFrame and returns a DataFrame, you can use it in a chain of calls.", "best_answers_score":0.8, "library_name":"pandas", "question_url":"https:\/\/stackoverflow.com\/questions\/34049618\/how-to-add-a-suffix-or-prefix-to-each-column-name", "best_answers_votes":290, "question_length":263, "response_length":213 }, { "question":"Pandas dataframe groupby datetime month Consider a CSV file: ``` string,date,number a string,2\/5\/11 9:16am,1.0 a string,3\/5\/11 10:44pm,2.0 a string,4\/22\/11 12:07pm,3.0 a string,4\/22\/11 12:10pm,4.0 a string,4\/29\/11 11:59am,1.0 a string,5\/2\/11 1:41pm,2.0 a string,5\/2\/11 2:02pm,3.0 a string,5\/2\/11 2:56pm,4.0 a string,5\/2\/11 3:00pm,5.0 a string,5\/2\/14 3:02pm,6.0 a string,5\/2\/14 3:18pm,7.0 ``` I can read this in, and reformat the date column into datetime format: ``` b = pd.read_csv('b.dat') b['date'] = pd.to_datetime(b['date'],format='%m\/%d\/%y %I:%M%p') ``` I have been trying to group the data by month. It seems like there should be an obvious way of accessing the month and grouping by that. But I can't seem to do it. Does anyone know how?
What I am currently trying is re-indexing by the date: ``` b.index = b['date'] ``` I can access the month like so: ``` b.index.month ``` However I can't seem to find a function to lump together by month.", "response":"Managed to do it: ``` b = pd.read_csv('b.dat') b.index = pd.to_datetime(b['date'],format='%m\/%d\/%y %I:%M%p') b.groupby(by=[b.index.month, b.index.year]) ``` Or ``` b.groupby(pd.Grouper(freq='M')) # update for v0.21+ ```", "best_answers_score":0.8, "library_name":"pandas", "question_url":"https:\/\/stackoverflow.com\/questions\/24082784\/pandas-dataframe-groupby-datetime-month", "best_answers_votes":273, "question_length":949, "response_length":219 }, { "question":"Assign pandas dataframe column dtypes I want to set the dtypes of multiple columns in pd.Dataframe (I have a file that I've had to manually parse into a list of lists, as the file was not amenable for pd.read_csv) ``` import pandas as pd print pd.DataFrame([['a','1'],['b','2']], dtype={'x':'object','y':'int'}, columns=['x','y']) ``` I get ``` ValueError: entry not a 2- or 3- tuple ``` The only way I can set them is by looping through each column variable and recasting with astype. 
``` dtypes = {'x':'object','y':'int'} mydata = pd.DataFrame([['a','1'],['b','2']], columns=['x','y']) for c in mydata.columns: mydata[c] = mydata[c].astype(dtypes[c]) print mydata['y'].dtype #=> int64 ``` Is there a better way?", "response":"Since 0.17, you have to use the explicit conversions: ``` pd.to_datetime, pd.to_timedelta and pd.to_numeric ``` (As mentioned below, no more \"magic\", convert_objects has been deprecated in 0.17) ``` df = pd.DataFrame({'x': {0: 'a', 1: 'b'}, 'y': {0: '1', 1: '2'}, 'z': {0: '2018-05-01', 1: '2018-05-02'}}) df.dtypes x object y object z object dtype: object df x y z 0 a 1 2018-05-01 1 b 2 2018-05-02 ``` You can apply these to each column you want to convert: ``` df[\"y\"] = pd.to_numeric(df[\"y\"]) df[\"z\"] = pd.to_datetime(df[\"z\"]) df x y z 0 a 1 2018-05-01 1 b 2 2018-05-02 df.dtypes x object y int64 z datetime64[ns] dtype: object ``` and confirm the dtype is updated. OLD\/DEPRECATED ANSWER for pandas 0.12 - 0.16: You can use convert_objects to infer better dtypes: ``` In [21]: df Out[21]: x y 0 a 1 1 b 2 In [22]: df.dtypes Out[22]: x object y object dtype: object In [23]: df.convert_objects(convert_numeric=True) Out[23]: x y 0 a 1 1 b 2 In [24]: df.convert_objects(convert_numeric=True).dtypes Out[24]: x object y int64 dtype: object ``` Magic! 
(Sad to see it deprecated.)", "best_answers_score":0.8, "library_name":"pandas", "question_url":"https:\/\/stackoverflow.com\/questions\/21197774\/assign-pandas-dataframe-column-dtypes", "best_answers_votes":100, "question_length":713, "response_length":1079 }, { "question":"Pandas: change data type of Series to String I use Pandas 'ver 0.12.0' with Python 2.7 and have a dataframe as below: ``` df = pd.DataFrame({'id' : [123,512,'zhub1', 12354.3, 129, 753, 295, 610], 'colour': ['black', 'white','white','white', 'black', 'black', 'white', 'white'], 'shape': ['round', 'triangular', 'triangular','triangular','square', 'triangular','round','triangular'] }, columns= ['id','colour', 'shape']) ``` The id Series consists of some integers and strings. Its dtype by default is object. I want to convert all contents of id to strings. I tried astype(str), which produces the output below. ``` df['id'].astype(str) 0 1 1 5 2 z 3 1 4 1 5 7 6 2 7 6 ``` 1) How can I convert all elements of id to String? 2) I will eventually use id for indexing for dataframes. Would having String indices in a dataframe slow things down, compared to having an integer index?", "response":"A new answer to reflect the most current practices: as of now (v1.2.4), neither astype('str') nor astype(str) work. 
As per the documentation, a Series can be converted to the string datatype in the following ways: ``` df['id'] = df['id'].astype(\"string\") df['id'] = pandas.Series(df['id'], dtype=\"string\") df['id'] = pandas.Series(df['id'], dtype=pandas.StringDtype) ``` End to end example: ```py import pandas as pd # Create a sample DataFrame data = { 'Name': ['John', 'Alice', 'Bob', 'John', 'Alice'], 'Age': [25, 30, 35, 25, 30], 'City': ['New York', 'London', 'Paris', 'New York', 'London'], 'Salary': [50000, 60000, 70000, 50000, 60000], 'Category': ['A', 'B', 'C', 'A', 'B'] } df = pd.DataFrame(data) # Print the DataFrame print(\"Original DataFrame:\") print(df) print(\"\\nData types:\") print(df.dtypes) cat_cols_ = None # Apply the code to change data types if not cat_cols_: # Get the columns with object data type object_columns = df.select_dtypes(include=['object']).columns.tolist() if len(object_columns) > 0: print(f\"\\nObject columns found, converting to string: {object_columns}\") # Convert object columns to string type df[object_columns] = df[object_columns].astype('string') # Get the categorical columns (including string and category data types) cat_cols_ = df.select_dtypes(include=['category', 'string']).columns.tolist() # Print the updated DataFrame and data types print(\"\\nUpdated DataFrame:\") print(df) print(\"\\nUpdated data types:\") print(df.dtypes) print(f\"\\nCategorical columns (cat_cols_): {cat_cols_}\") ``` ```bash Original DataFrame: Name Age City Salary Category 0 John 25 New York 50000 A 1 Alice 30 London 60000 B 2 Bob 35 Paris 70000 C 3 John 25 New York 50000 A 4 Alice 30 London 60000 B Data types: Name object Age int64 City object Salary int64 Category object dtype: object Object columns found, converting to string: ['Name', 'City', 'Category'] Updated DataFrame: Name Age City Salary Category 0 John 25 New York 50000 A 1 Alice 30 London 60000 B 2 Bob 35 Paris 70000 C 3 John 25 New York 50000 A 4 Alice 30 London 60000 B Updated data types: Name 
string[python] Age int64 City string[python] Salary int64 Category string[python] dtype: object Categorical columns (cat_cols_): ['Name', 'City', 'Category'] ```", "best_answers_score":0.8, "library_name":"pandas", "question_url":"https:\/\/stackoverflow.com\/questions\/22231592\/pandas-change-data-type-of-series-to-string", "best_answers_votes":249, "question_length":878, "response_length":2249 }, { "question":"Creating a new column based on if-elif-else condition [duplicate] This question already has answers here: Create Column with ELIF in Pandas (5 answers) Closed 1 year ago. I have a DataFrame df: ``` A B a 2 2 b 3 1 c 1 3 ``` I want to create a new column based on the following criteria: if row A == B: 0 if rowA > B: 1 if row A df.B, 1, -1), does pandas provide a special syntax for solving my problem with one step (without the necessity of creating 3 new columns and then combining the result)?", "response":"To formalize some of the approaches laid out above: Create a function that operates on the rows of your dataframe like so: ``` def f(row): if row['A'] == row['B']: val = 0 elif row['A'] > row['B']: val = 1 else: val = -1 return val ``` Then apply it to your dataframe passing in the axis=1 option: ``` In [1]: df['C'] = df.apply(f, axis=1) In [2]: df Out[2]: A B C a 2 2 0 b 3 1 1 c 1 3 -1 ``` Of course, this is not vectorized so performance may not be as good when scaled to a large number of records. Still, I think it is much more readable. Especially coming from a SAS background. 
Edit Here is the vectorized version ``` df['C'] = np.where( df['A'] == df['B'], 0, np.where( df['A'] > df['B'], 1, -1)) ```", "best_answers_score":0.8, "library_name":"pandas", "question_url":"https:\/\/stackoverflow.com\/questions\/21702342\/creating-a-new-column-based-on-if-elif-else-condition", "best_answers_votes":268, "question_length":497, "response_length":709 }, { "question":"Update index after sorting data-frame Take the following data-frame: ``` x = np.tile(np.arange(3),3) y = np.repeat(np.arange(3),3) df = pd.DataFrame({\"x\": x, \"y\": y}) ``` ``` x y 0 0 0 1 1 0 2 2 0 3 0 1 4 1 1 5 2 1 6 0 2 7 1 2 8 2 2 ``` I need to sort it by x first, and only second by y: ``` df2 = df.sort([\"x\", \"y\"]) ``` ``` x y 0 0 0 3 0 1 6 0 2 1 1 0 4 1 1 7 1 2 2 2 0 5 2 1 8 2 2 ``` How can I change the index such that it is ascending again. I.e. how do I get this: ``` x y 0 0 0 1 0 1 2 0 2 3 1 0 4 1 1 5 1 2 6 2 0 7 2 1 8 2 2 ``` I have tried the following. Unfortunately, it doesn't change the index at all: ``` df2.reindex(np.arange(len(df2.index))) ```", "response":"You can reset the index using reset_index to get back a default index of 0, 1, 2, ..., n-1 (and use drop=True to indicate you want to drop the existing index instead of adding it as an additional column to your dataframe): ``` In [19]: df2 = df2.reset_index(drop=True) In [20]: df2 Out[20]: x y 0 0 0 1 0 1 2 0 2 3 1 0 4 1 1 5 1 2 6 2 0 7 2 1 8 2 2 ```", "best_answers_score":0.8, "library_name":"pandas", "question_url":"https:\/\/stackoverflow.com\/questions\/33165734\/update-index-after-sorting-data-frame", "best_answers_votes":283, "question_length":664, "response_length":352 }, { "question":"How to set a cell to NaN in a pandas dataframe I'd like to replace bad values in a column of a dataframe by NaN's. 
``` mydata = {'x' : [10, 50, 18, 32, 47, 20], 'y' : ['12', '11', 'N\/A', '13', '15', 'N\/A']} df = pd.DataFrame(mydata) df[df.y == 'N\/A']['y'] = np.nan ``` Though, the last line fails and throws a warning because it's working on a copy of df. So, what's the correct way to handle this? I've seen many solutions with iloc or ix but here I need to use a boolean condition.", "response":"just use replace: ``` In [106]: df.replace('N\/A',np.NaN) Out[106]: x y 0 10 12 1 50 11 2 18 NaN 3 32 13 4 47 15 5 20 NaN ``` What you're trying is called chain indexing: http:\/\/pandas.pydata.org\/pandas-docs\/stable\/indexing.html#indexing-view-versus-copy You can use loc to ensure you operate on the original dF: ``` In [108]: df.loc[df['y'] == 'N\/A','y'] = np.nan df Out[108]: x y 0 10 12 1 50 11 2 18 NaN 3 32 13 4 47 15 5 20 NaN ```", "best_answers_score":0.8, "library_name":"pandas", "question_url":"https:\/\/stackoverflow.com\/questions\/34794067\/how-to-set-a-cell-to-nan-in-a-pandas-dataframe", "best_answers_votes":198, "question_length":483, "response_length":434 }, { "question":"Compare two columns using pandas Using this as a starting point: ```py a = [['10', '1.2', '4.2'], ['15', '70', '0.03'], ['8', '5', '0']] df = pd.DataFrame(a, columns=['one', 'two', 'three']) ``` which looks like ```none one two three 0 10 1.2 4.2 1 15 70 0.03 2 8 5 0 ``` I want to use something like an if statement within pandas. ```py if df['one'] >= df['two'] and df['one'] <= df['three']: df['que'] = df['one'] ``` Basically, create a new column by checking each row via the if statement. The docs say to use .all but there is no example...", "response":"You could use np.where. If cond is a boolean array, and A and B are arrays, then ``` C = np.where(cond, A, B) ``` defines C to be equal to A where cond is True, and B where cond is False. 
``` import numpy as np import pandas as pd a = [['10', '1.2', '4.2'], ['15', '70', '0.03'], ['8', '5', '0']] df = pd.DataFrame(a, columns=['one', 'two', 'three']) df['que'] = np.where((df['one'] >= df['two']) & (df['one'] <= df['three']), df['one'], '') ``` If you have more than one condition, then you could use np.select instead. For example, if you are willing to assume that df['one'] >= df['two'] when df['one'] < df['two'] is False, then the conditions and choices could be simplified to ``` conditions = [ df['one'] < df['two'], df['one'] <= df['three']] choices = [df['two'], df['one']] df['que'] = np.select(conditions, choices, default='') ``` (The assumption may not be true if df['one'] or df['two'] contain NaNs.) Note that ``` a = [['10', '1.2', '4.2'], ['15', '70', '0.03'], ['8', '5', '0']] df = pd.DataFrame(a, columns=['one', 'two', 'three']) ``` defines a DataFrame with string values. Since they look numeric, you might be better off converting those strings to floats: ``` df2 = df.astype(float) ``` This changes the results, however, since strings compare character-by-character, while floats are compared numerically. ``` In [61]: '10' <= '4.2' Out[61]: True In [62]: 10 <= 4.2 Out[62]: False ```", "best_answers_score":0.8, "library_name":"pandas", "question_url":"https:\/\/stackoverflow.com\/questions\/27474921\/compare-two-columns-using-pandas", "best_answers_votes":222, "question_length":545, "response_length":1210 }, { "question":"Reading an Excel file in python using pandas I am trying to read an Excel file this way: ``` newFile = pd.ExcelFile(PATH\\FileName.xlsx) ParsedData = pd.io.parsers.ExcelFile.parse(newFile) ``` which throws an error that says two arguments expected. I don't know what the second argument is, and what I am trying to achieve here is to convert an Excel file to a DataFrame. Am I doing it the right way, or is there any other way to do this using pandas?", "response":"Close: first you call ExcelFile, but then you call the .parse method and pass it the sheet name. 
``` >>> xl = pd.ExcelFile(\"dummydata.xlsx\") >>> xl.sheet_names [u'Sheet1', u'Sheet2', u'Sheet3'] >>> df = xl.parse(\"Sheet1\") >>> df.head() Tid dummy1 dummy2 dummy3 dummy4 dummy5 \\ 0 2006-09-01 00:00:00 0 5.894611 0.605211 3.842871 8.265307 1 2006-09-01 01:00:00 0 5.712107 0.605211 3.416617 8.301360 2 2006-09-01 02:00:00 0 5.105300 0.605211 3.090865 8.335395 3 2006-09-01 03:00:00 0 4.098209 0.605211 3.198452 8.170187 4 2006-09-01 04:00:00 0 3.338196 0.605211 2.970015 7.765058 dummy6 dummy7 dummy8 dummy9 0 0.623354 0 2.579108 2.681728 1 0.554211 0 7.210000 3.028614 2 0.567841 0 6.940000 3.644147 3 0.581470 0 6.630000 4.016155 4 0.595100 0 6.350000 3.974442 ``` What you're doing is calling the method which lives on the class itself, rather than the instance, which is okay (although not very idiomatic), but if you're doing that you would also need to pass the sheet name: ``` >>> parsed = pd.io.parsers.ExcelFile.parse(xl, \"Sheet1\") >>> parsed.columns Index([u'Tid', u'dummy1', u'dummy2', u'dummy3', u'dummy4', u'dummy5', u'dummy6', u'dummy7', u'dummy8', u'dummy9'], dtype=object) ```", "best_answers_score":0.8, "library_name":"pandas", "question_url":"https:\/\/stackoverflow.com\/questions\/17063458\/reading-an-excel-file-in-python-using-pandas", "best_answers_votes":261, "question_length":455, "response_length":1189 }, { "question":"Set order of columns in pandas dataframe Is there a way to reorder columns in pandas dataframe based on my personal preference (i.e. not alphabetically or numerically sorted, but more like following certain conventions)? Simple example: ``` frame = pd.DataFrame({ 'one thing':[1,2,3,4], 'second thing':[0.1,0.2,1,2], 'other thing':['a','e','i','o']}) ``` produces this: ``` one thing other thing second thing 0 1 a 0.1 1 2 e 0.2 2 3 i 1.0 3 4 o 2.0 ``` But instead, I would like this: ``` one thing second thing other thing 0 1 0.1 a 1 2 0.2 e 2 3 1.0 i 3 4 2.0 o ``` (Please, provide a generic solution rather than specific to this case. 
Many thanks.)", "response":"Just select the order yourself by typing in the column names. Note the double brackets: ``` frame = frame[['column I want first', 'column I want second'...etc.]] ```", "best_answers_score":0.8, "library_name":"pandas", "question_url":"https:\/\/stackoverflow.com\/questions\/41968732\/set-order-of-columns-in-pandas-dataframe", "best_answers_votes":239, "question_length":652, "response_length":165 }, { "question":"How to convert a Scikit-learn dataset to a Pandas dataset How do I convert data from a Scikit-learn Bunch object to a Pandas DataFrame? ``` from sklearn.datasets import load_iris import pandas as pd data = load_iris() print(type(data)) data1 = pd. # Is there a Pandas method to accomplish this? ```", "response":"Manually, you can use pd.DataFrame constructor, giving a numpy array (data) and a list of the names of the columns (columns). To have everything in one DataFrame, you can concatenate the features and the target into one numpy array with np.c_[...] (note the []): ``` import numpy as np import pandas as pd from sklearn.datasets import load_iris # save load_iris() sklearn dataset to iris # if you'd like to check dataset type use: type(load_iris()) # if you'd like to view list of attributes use: dir(load_iris()) iris = load_iris() # np.c_ is the numpy concatenate function # which is used to concat iris['data'] and iris['target'] arrays # for pandas column argument: concat iris['feature_names'] list # and string list (in this case one string); you can make this anything you'd like.. 
# the original dataset would probably call this ['Species'] data1 = pd.DataFrame(data= np.c_[iris['data'], iris['target']], columns= iris['feature_names'] + ['target']) ```", "best_answers_score":0.8, "library_name":"pandas", "question_url":"https:\/\/stackoverflow.com\/questions\/38105539\/how-to-convert-a-scikit-learn-dataset-to-a-pandas-dataset", "best_answers_votes":210, "question_length":298, "response_length":961 }, { "question":"Why does it take ages to install Pandas on Alpine Linux I've noticed that installing Pandas and Numpy (its dependency) in a Docker container using the base OS Alpine vs. CentOS or Debian takes much longer. I created a little test below to demonstrate the time difference. Aside from the few seconds Alpine takes to update and download the build dependencies to install Pandas and Numpy, why does the setup.py take around 70x more time than on the Debian install? 
Is there any way to speed up the install using Alpine as the base image or is there another base image of comparable size to Alpine that is better to use for packages like Pandas and Numpy? Dockerfile.debian ``` FROM python:3.6.4-slim-jessie RUN pip install pandas ``` Build Debian image with Pandas & Numpy: ``` [PandasDockerTest] time docker build -t debian-pandas -f Dockerfile.debian . --no-cache Sending build context to Docker daemon 3.072kB Step 1\/2 : FROM python:3.6.4-slim-jessie ---> 43431c5410f3 Step 2\/2 : RUN pip install pandas ---> Running in 2e4c030f8051 Collecting pandas Downloading pandas-0.22.0-cp36-cp36m-manylinux1_x86_64.whl (26.2MB) Collecting numpy>=1.9.0 (from pandas) Downloading numpy-1.14.1-cp36-cp36m-manylinux1_x86_64.whl (12.2MB) Collecting pytz>=2011k (from pandas) Downloading pytz-2018.3-py2.py3-none-any.whl (509kB) Collecting python-dateutil>=2 (from pandas) Downloading python_dateutil-2.6.1-py2.py3-none-any.whl (194kB) Collecting six>=1.5 (from python-dateutil>=2->pandas) Downloading six-1.11.0-py2.py3-none-any.whl Installing collected packages: numpy, pytz, six, python-dateutil, pandas Successfully installed numpy-1.14.1 pandas-0.22.0 python-dateutil-2.6.1 pytz-2018.3 six-1.11.0 Removing intermediate container 2e4c030f8051 ---> a71e1c314897 Successfully built a71e1c314897 Successfully tagged debian-pandas:latest docker build -t debian-pandas -f Dockerfile.debian . --no-cache 0.07s user 0.06s system 0% cpu 13.605 total ``` Dockerfile.alpine ``` FROM python:3.6.4-alpine3.7 RUN apk --update add --no-cache g++ RUN pip install pandas ``` Build Alpine image with Pandas & Numpy: ``` [PandasDockerTest] time docker build -t alpine-pandas -f Dockerfile.alpine . 
--no-cache Sending build context to Docker daemon 16.9kB Step 1\/3 : FROM python:3.6.4-alpine3.7 ---> 4b00a94b6f26 Step 2\/3 : RUN apk --update add --no-cache g++ ---> Running in 4b0c32551e3f fetch http:\/\/dl-cdn.alpinelinux.org\/alpine\/v3.7\/main\/x86_64\/APKINDEX.tar.gz fetch http:\/\/dl-cdn.alpinelinux.org\/alpine\/v3.7\/main\/x86_64\/APKINDEX.tar.gz fetch http:\/\/dl-cdn.alpinelinux.org\/alpine\/v3.7\/community\/x86_64\/APKINDEX.tar.gz fetch http:\/\/dl-cdn.alpinelinux.org\/alpine\/v3.7\/community\/x86_64\/APKINDEX.tar.gz (1\/17) Upgrading musl (1.1.18-r2 -> 1.1.18-r3) (2\/17) Installing libgcc (6.4.0-r5) (3\/17) Installing libstdc++ (6.4.0-r5) (4\/17) Installing binutils-libs (2.28-r3) (5\/17) Installing binutils (2.28-r3) (6\/17) Installing gmp (6.1.2-r1) (7\/17) Installing isl (0.18-r0) (8\/17) Installing libgomp (6.4.0-r5) (9\/17) Installing libatomic (6.4.0-r5) (10\/17) Installing pkgconf (1.3.10-r0) (11\/17) Installing mpfr3 (3.1.5-r1) (12\/17) Installing mpc1 (1.0.3-r1) (13\/17) Installing gcc (6.4.0-r5) (14\/17) Installing musl-dev (1.1.18-r3) (15\/17) Installing libc-dev (0.7.1-r0) (16\/17) Installing g++ (6.4.0-r5) (17\/17) Upgrading musl-utils (1.1.18-r2 -> 1.1.18-r3) Executing busybox-1.27.2-r7.trigger OK: 184 MiB in 50 packages Removing intermediate container 4b0c32551e3f ---> be26c3bf4e42 Step 3\/3 : RUN pip install pandas ---> Running in 36f6024e5e2d Collecting pandas Downloading pandas-0.22.0.tar.gz (11.3MB) Collecting python-dateutil>=2 (from pandas) Downloading python_dateutil-2.6.1-py2.py3-none-any.whl (194kB) Collecting pytz>=2011k (from pandas) Downloading pytz-2018.3-py2.py3-none-any.whl (509kB) Collecting numpy>=1.9.0 (from pandas) Downloading numpy-1.14.1.zip (4.9MB) Collecting six>=1.5 (from python-dateutil>=2->pandas) Downloading six-1.11.0-py2.py3-none-any.whl Building wheels for collected packages: pandas, numpy Running setup.py bdist_wheel for pandas: started Running setup.py bdist_wheel for pandas: still running... 
Running setup.py bdist_wheel for pandas: still running... Running setup.py bdist_wheel for pandas: still running... Running setup.py bdist_wheel for pandas: still running... Running setup.py bdist_wheel for pandas: still running... Running setup.py bdist_wheel for pandas: still running... Running setup.py bdist_wheel for pandas: finished with status 'done' Stored in directory: \/root\/.cache\/pip\/wheels\/e8\/ed\/46\/0596b51014f3cc49259e52dff9824e1c6fe352048a2656fc92 Running setup.py bdist_wheel for numpy: started Running setup.py bdist_wheel for numpy: still running... Running setup.py bdist_wheel for numpy: still running... Running setup.py bdist_wheel for numpy: still running... Running setup.py bdist_wheel for numpy: finished with status 'done' Stored in directory: \/root\/.cache\/pip\/wheels\/9d\/cd\/e1\/4d418b16ea662e512349ef193ed9d9ff473af715110798c984 Successfully built pandas numpy Installing collected packages: six, python-dateutil, pytz, numpy, pandas Successfully installed numpy-1.14.1 pandas-0.22.0 python-dateutil-2.6.1 pytz-2018.3 six-1.11.0 Removing intermediate container 36f6024e5e2d ---> a93c59e6a106 Successfully built a93c59e6a106 Successfully tagged alpine-pandas:latest docker build -t alpine-pandas -f Dockerfile.alpine . --no-cache 0.54s user 0.33s system 0% cpu 16:08.47 total ```", "response":"Debian based images use only python pip to install packages with .whl format: ``` Downloading pandas-0.22.0-cp36-cp36m-manylinux1_x86_64.whl (26.2MB) Downloading numpy-1.14.1-cp36-cp36m-manylinux1_x86_64.whl (12.2MB) ``` WHL format was developed as a quicker and more reliable method of installing Python software than re-building from source code every time. WHL files only have to be moved to the correct location on the target system to be installed, whereas a source distribution requires a build step before installation. Wheel packages pandas and numpy are not supported in images based on Alpine platform. 
That's why when we install them using python pip during the building process, we always compile them from the source files in alpine: ``` Downloading pandas-0.22.0.tar.gz (11.3MB) Downloading numpy-1.14.1.zip (4.9MB) ``` and we can see the following inside container during the image building: ``` \/ # ps aux PID USER TIME COMMAND 1 root 0:00 \/bin\/sh -c pip install pandas 7 root 0:04 {pip} \/usr\/local\/bin\/python \/usr\/local\/bin\/pip install pandas 21 root 0:07 \/usr\/local\/bin\/python -c import setuptools, tokenize;__file__='\/tmp\/pip-build-en29h0ak\/pandas\/setup.py';f=getattr(tokenize, 'open', open)(__file__);code=f.read().replace('\\r\\n', '\\n 496 root 0:00 sh 660 root 0:00 \/bin\/sh -c gcc -Wno-unused-result -Wsign-compare -DNDEBUG -g -fwrapv -O3 -Wall -Wstrict-prototypes -DTHREAD_STACK_SIZE=0x100000 -fPIC -Ibuild\/src.linux-x86_64-3.6\/numpy\/core\/src\/pri 661 root 0:00 gcc -Wno-unused-result -Wsign-compare -DNDEBUG -g -fwrapv -O3 -Wall -Wstrict-prototypes -DTHREAD_STACK_SIZE=0x100000 -fPIC -Ibuild\/src.linux-x86_64-3.6\/numpy\/core\/src\/private -Inump 662 root 0:00 \/usr\/libexec\/gcc\/x86_64-alpine-linux-musl\/6.4.0\/cc1 -quiet -I build\/src.linux-x86_64-3.6\/numpy\/core\/src\/private -I numpy\/core\/include -I build\/src.linux-x86_64-3.6\/numpy\/core\/includ 663 root 0:00 ps aux ``` If we modify Dockerfile a little: ``` FROM python:3.6.4-alpine3.7 RUN apk add --no-cache g++ wget RUN wget https:\/\/pypi.python.org\/packages\/da\/c6\/0936bc5814b429fddb5d6252566fe73a3e40372e6ceaf87de3dec1326f28\/pandas-0.22.0-cp36-cp36m-manylinux1_x86_64.whl RUN pip install pandas-0.22.0-cp36-cp36m-manylinux1_x86_64.whl ``` we get the following error: ``` Step 4\/4 : RUN pip install pandas-0.22.0-cp36-cp36m-manylinux1_x86_64.whl ---> Running in 0faea63e2bda pandas-0.22.0-cp36-cp36m-manylinux1_x86_64.whl is not a supported wheel on this platform. 
The command '\/bin\/sh -c pip install pandas-0.22.0-cp36-cp36m-manylinux1_x86_64.whl' returned a non-zero code: 1 ``` Unfortunately, the only way to install pandas on an Alpine image is to wait until build finishes. Of course if you want to use the Alpine image with pandas in CI for example, the best way to do so is to compile it once, push it to any registry and use it as a base image for your needs. EDIT: If you want to use the Alpine image with pandas you can pull my nickgryg\/alpine-pandas docker image. It is a python image with pre-compiled pandas on the Alpine platform. It should save your time.", "best_answers_score":0.8, "library_name":"pandas", "question_url":"https:\/\/stackoverflow.com\/questions\/49037742\/why-does-it-take-ages-to-install-pandas-on-alpine-linux", "best_answers_votes":93, "question_length":5408, "response_length":3036 }, { "question":"Calculate Time Difference Between Two Pandas Columns in Hours and Minutes I have two columns, fromdate and todate, in a dataframe. ```py import pandas as pd data = {'todate': [pd.Timestamp('2014-01-24 13:03:12.050000'), pd.Timestamp('2014-01-27 11:57:18.240000'), pd.Timestamp('2014-01-23 10:07:47.660000')], 'fromdate': [pd.Timestamp('2014-01-26 23:41:21.870000'), pd.Timestamp('2014-01-27 15:38:22.540000'), pd.Timestamp('2014-01-23 18:50:41.420000')]} df = pd.DataFrame(data) ``` I add a new column, diff, to find the difference between the two dates using ```py df['diff'] = df['fromdate'] - df['todate'] ``` I get the diff column, but it contains days, when there's more than 24 hours. ```none todate fromdate diff 0 2014-01-24 13:03:12.050 2014-01-26 23:41:21.870 2 days 10:38:09.820000 1 2014-01-27 11:57:18.240 2014-01-27 15:38:22.540 0 days 03:41:04.300000 2 2014-01-23 10:07:47.660 2014-01-23 18:50:41.420 0 days 08:42:53.760000 ``` How do I convert my results to only hours and minutes (i.e. days are converted to hours)?", "response":"Pandas timestamp differences returns a datetime.timedelta object. 
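The same holds for a single pair of timestamps — a minimal sketch with illustrative values (not the question's data):

```python
import pandas as pd

# Two illustrative timestamps exactly 58.5 hours apart.
start = pd.Timestamp('2014-01-24 13:00:00')
end = pd.Timestamp('2014-01-26 23:30:00')
delta = end - start  # a pandas Timedelta, which subclasses datetime.timedelta
print(delta.total_seconds() / 3600)  # -> 58.5
```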
This can easily be converted into hours by using the astype method, like so ``` import pandas df = pandas.DataFrame(columns=['to','fr','ans']) df.to = [pandas.Timestamp('2014-01-24 13:03:12.050000'), pandas.Timestamp('2014-01-27 11:57:18.240000'), pandas.Timestamp('2014-01-23 10:07:47.660000')] df.fr = [pandas.Timestamp('2014-01-26 23:41:21.870000'), pandas.Timestamp('2014-01-27 15:38:22.540000'), pandas.Timestamp('2014-01-23 18:50:41.420000')] (df.fr-df.to).astype('timedelta64[h]') ``` to yield: ``` 0 58 1 3 2 8 dtype: float64 ```", "best_answers_score":0.8, "library_name":"pandas", "question_url":"https:\/\/stackoverflow.com\/questions\/22923775\/calculate-time-difference-between-two-pandas-columns-in-hours-and-minutes", "best_answers_votes":200, "question_length":1032, "response_length":606 }, { "question":"Construct pandas DataFrame from items in nested dictionary Suppose I have a nested dictionary 'user_dict' with structure: Level 1: UserId (Long Integer) Level 2: Category (String) Level 3: Assorted Attributes (floats, ints, etc..) For example, an entry of this dictionary would be: ``` user_dict[12] = { \"Category 1\": {\"att_1\": 1, \"att_2\": \"whatever\"}, \"Category 2\": {\"att_1\": 23, \"att_2\": \"another\"}} ``` each item in user_dict has the same structure and user_dict contains a large number of items which I want to feed to a pandas DataFrame, constructing the series from the attributes. In this case a hierarchical index would be useful for the purpose. Specifically, my question is whether there exists a way to help the DataFrame constructor understand that the series should be built from the values of the \"level 3\" in the dictionary? If I try something like: ``` df = pandas.DataFrame(users_summary) ``` The items in \"level 1\" (the UserId's) are taken as columns, which is the opposite of what I want to achieve (have UserId's as index). 
I know I could construct the series after iterating over the dictionary entries, but if there is a more direct way this would be very useful. A similar question would be asking whether it is possible to construct a pandas DataFrame from json objects listed in a file.", "response":"A pandas MultiIndex consists of a list of tuples. So the most natural approach would be to reshape your input dict so that its keys are tuples corresponding to the multi-index values you require. Then you can just construct your dataframe using pd.DataFrame.from_dict, using the option orient='index': ``` user_dict = {12: {'Category 1': {'att_1': 1, 'att_2': 'whatever'}, 'Category 2': {'att_1': 23, 'att_2': 'another'}}, 15: {'Category 1': {'att_1': 10, 'att_2': 'foo'}, 'Category 2': {'att_1': 30, 'att_2': 'bar'}}} pd.DataFrame.from_dict({(i,j): user_dict[i][j] for i in user_dict.keys() for j in user_dict[i].keys()}, orient='index') att_1 att_2 12 Category 1 1 whatever Category 2 23 another 15 Category 1 10 foo Category 2 30 bar ``` An alternative approach would be to build your dataframe up by concatenating the component dataframes: ``` user_ids = [] frames = [] for user_id, d in user_dict.iteritems(): user_ids.append(user_id) frames.append(pd.DataFrame.from_dict(d, orient='index')) pd.concat(frames, keys=user_ids) att_1 att_2 12 Category 1 1 whatever Category 2 23 another 15 Category 1 10 foo Category 2 30 bar ```", "best_answers_score":0.8, "library_name":"pandas", "question_url":"https:\/\/stackoverflow.com\/questions\/13575090\/construct-pandas-dataframe-from-items-in-nested-dictionary", "best_answers_votes":219, "question_length":1314, "response_length":1131 }, { "question":"Convert categorical data in pandas dataframe I have a dataframe with this type of data (too many columns): ``` col1 int64 col2 int64 col3 category col4 category col5 category ``` Columns look like this: ``` Name: col3, dtype: category Categories (8, object): [B, C, E, G, H, N, S, W] ``` I want to convert all the 
values in each column to integer like this: ``` [1, 2, 3, 4, 5, 6, 7, 8] ``` I solved this for one column by this: ``` dataframe['c'] = pandas.Categorical.from_array(dataframe.col3).codes ``` Now I have two columns in my dataframe - old col3 and new c and need to drop old columns. That's bad practice. It works but in my dataframe there are too many columns and I don't want do it manually. How can I do this more cleverly?", "response":"First, to convert a Categorical column to its numerical codes, you can do this easier with: dataframe['c'].cat.codes. Further, it is possible to select automatically all columns with a certain dtype in a dataframe using select_dtypes. This way, you can apply above operation on multiple and automatically selected columns. First making an example dataframe: ``` In [75]: df = pd.DataFrame({'col1':[1,2,3,4,5], 'col2':list('abcab'), 'col3':list('ababb')}) In [76]: df['col2'] = df['col2'].astype('category') In [77]: df['col3'] = df['col3'].astype('category') In [78]: df.dtypes Out[78]: col1 int64 col2 category col3 category dtype: object ``` Then by using select_dtypes to select the columns, and then applying .cat.codes on each of these columns, you can get the following result: ``` In [80]: cat_columns = df.select_dtypes(['category']).columns In [81]: cat_columns Out[81]: Index([u'col2', u'col3'], dtype='object') In [83]: df[cat_columns] = df[cat_columns].apply(lambda x: x.cat.codes) In [84]: df Out[84]: col1 col2 col3 0 1 0 0 1 2 1 1 2 3 2 0 3 4 0 1 4 5 1 1 ``` Note: NaN becomes -1 This method is fast because the relationship between code and category is readily available and do not need to be computed.", "best_answers_score":0.8, "library_name":"pandas", "question_url":"https:\/\/stackoverflow.com\/questions\/32011359\/convert-categorical-data-in-pandas-dataframe", "best_answers_votes":245, "question_length":738, "response_length":1218 }, { "question":"Creating a zero-filled pandas data frame What is the best way to create a zero-filled 
pandas data frame of a given size? I have used: ``` zero_data = np.zeros(shape=(len(data),len(feature_list))) d = pd.DataFrame(zero_data, columns=feature_list) ``` Is there a better way to do it?", "response":"Create and fill a pandas dataframe with zeros ``` feature_list = [\"foo\", \"bar\", 37] df = pd.DataFrame(0, index=np.arange(7), columns=feature_list) print(df) ``` which prints: ``` foo bar 37 0 0 0 0 1 0 0 0 2 0 0 0 3 0 0 0 4 0 0 0 5 0 0 0 6 0 0 0 ```", "best_answers_score":0.8, "library_name":"pandas", "question_url":"https:\/\/stackoverflow.com\/questions\/22963263\/creating-a-zero-filled-pandas-data-frame", "best_answers_votes":220, "question_length":281, "response_length":249 }, { "question":"how to test if a variable is pd.NaT? I'm trying to test if one of my variables is pd.NaT. I know it is NaT, and still it won't pass the test. As an example, the following code prints nothing : ``` a=pd.NaT if a == pd.NaT: print(\"a not NaT\") ``` Does anyone have a clue ? Is there a way to effectively test if a is NaT?", "response":"Pandas NaT behaves like a floating-point NaN, in that it's not equal to itself. Instead, you can use pandas.isnull: ``` In [21]: pandas.isnull(pandas.NaT) Out[21]: True ``` This also returns True for None and NaN. Technically, you could also check for Pandas NaT with x != x, following a common pattern used for floating-point NaN. However, this is likely to cause issues with NumPy NaTs, which look very similar and represent the same concept, but are actually a different type with different behavior: ``` In [29]: x = pandas.NaT In [30]: y = numpy.datetime64('NaT') In [31]: x != x Out[31]: True In [32]: y != y \/home\/i850228\/.local\/lib\/python3.6\/site-packages\/IPython\/__main__.py:1: FutureWarning: In the future, NAT != NAT will be True rather than False. 
# encoding: utf-8 Out[32]: False ``` numpy.isnat, the function to check for NumPy NaT, also fails with a Pandas NaT: ``` In [33]: numpy.isnat(pandas.NaT) --------------------------------------------------------------------------- TypeError Traceback (most recent call last) in () ----> 1 numpy.isnat(pandas.NaT) TypeError: ufunc 'isnat' is only defined for datetime and timedelta. ``` pandas.isnull works for both Pandas and NumPy NaTs, so it's probably the way to go: ``` In [34]: pandas.isnull(pandas.NaT) Out[34]: True In [35]: pandas.isnull(numpy.datetime64('NaT')) Out[35]: True ```", "best_answers_score":0.8, "library_name":"pandas", "question_url":"https:\/\/stackoverflow.com\/questions\/49435317\/how-to-test-if-a-variable-is-pd-nat", "best_answers_votes":240, "question_length":318, "response_length":1348 }, { "question":"How to convert single-row pandas data frame to series? I'm somewhat new to pandas. I have a pandas data frame that is 1 row by 23 columns. I want to convert this into a series. I'm wondering what the most pythonic way to do this is? I've tried pd.Series(myResults) but it complains ValueError: cannot copy sequence with size 23 to array axis with dimension 1. It's not smart enough to realize it's still a \"vector\" in math terms.", "response":"You can transpose the single-row dataframe (which still results in a dataframe) and then squeeze the results into a series (the inverse of to_frame). ``` df = pd.DataFrame([list(range(5))], columns=[\"a{}\".format(i) for i in range(5)]) >>> df.squeeze(axis=0) a0 0 a1 1 a2 2 a3 3 a4 4 Name: 0, dtype: int64 ``` Note: To accommodate the point raised by @IanS (even though it is not in the OP's question), test for the dataframe's size. I am assuming that df is a dataframe, but the edge cases are an empty dataframe, a dataframe of shape (1, 1), and a dataframe with more than one row in which case the use should implement their desired functionality. ``` if df.empty: # Empty dataframe, so convert to empty Series. 
result = pd.Series() elif df.shape == (1, 1): # DataFrame with one value, so convert to series with appropriate index. result = pd.Series(df.iat[0, 0], index=df.columns) elif len(df) == 1: # Convert to series per OP's question. result = df.T.squeeze() else: # Dataframe with multiple rows. Implement desired behavior. pass ``` This can also be simplified along the lines of the answer provided by @themachinist. ``` if len(df) > 1: # Dataframe with multiple rows. Implement desired behavior. pass else: result = pd.Series() if df.empty else df.iloc[0, :] ```", "best_answers_score":0.8, "library_name":"pandas", "question_url":"https:\/\/stackoverflow.com\/questions\/33246771\/how-to-convert-single-row-pandas-data-frame-to-series", "best_answers_votes":124, "question_length":429, "response_length":1271 }, { "question":"pandas - add new column to dataframe from dictionary [duplicate] This question already has answers here: Remap values in pandas column with a dict, preserve NaNs (12 answers) Closed 6 years ago. 
I would like to add a column 'D' to a dataframe like this: ``` U,L 111,en 112,en 112,es 113,es 113,ja 113,zh 114,es ``` based on the following Dictionary: ``` d = {112: 'en', 113: 'es', 114: 'es', 111: 'en'} ``` so that the resulting dataframe appears as: ``` U,L,D 111,en,en 112,en,en 112,es,en 113,es,es 113,ja,es 113,zh,es 114,es,es ``` So far I tried the pd.join() method but I can't figured out how it works with Dictionaries.", "response":"Call map and pass the dict, this will perform a lookup and return the associated value for that key: ``` In [248]: d = {112: 'en', 113: 'es', 114: 'es', 111: 'en'} df['D'] = df['U'].map(d) df Out[248]: U L D 0 111 en en 1 112 en en 2 112 es en 3 113 es es 4 113 ja es 5 113 zh es 6 114 es es ```", "best_answers_score":0.8, "library_name":"pandas", "question_url":"https:\/\/stackoverflow.com\/questions\/29794959\/pandas-add-new-column-to-dataframe-from-dictionary", "best_answers_votes":295, "question_length":626, "response_length":295 }, { "question":"Append column to pandas dataframe This is probably easy, but I have the following data: In data frame 1: ``` index dat1 0 9 1 5 ``` In data frame 2: ``` index dat2 0 7 1 6 ``` I want a data frame with the following form: ``` index dat1 dat2 0 9 7 1 5 6 ``` I've tried using the append method, but I get a cross join (i.e. cartesian product). 
What's the right way to do this?", "response":"It seems in general you're just looking for a join: ``` > dat1 = pd.DataFrame({'dat1': [9,5]}) > dat2 = pd.DataFrame({'dat2': [7,6]}) > dat1.join(dat2) dat1 dat2 0 9 7 1 5 6 ```", "best_answers_score":0.8, "library_name":"pandas", "question_url":"https:\/\/stackoverflow.com\/questions\/20602947\/append-column-to-pandas-dataframe", "best_answers_votes":198, "question_length":374, "response_length":177 }, { "question":"Joining pandas DataFrames by Column names I have two DataFrames with the following column names: ```none frame_1: event_id, date, time, county_ID frame_2: countyid, state ``` I would like to get a DataFrame with the following columns by left-joining on county_ID = countyid: ```none joined_dataframe: event_id, date, time, county, state ``` I cannot figure out how to do it if the columns on which I want to join are not the index.", "response":"You can use the left_on and right_on options of pd.merge as follows: ``` pd.merge(frame_1, frame_2, left_on='county_ID', right_on='countyid') ``` Or equivalently with DataFrame.merge: ``` frame_1.merge(frame_2, left_on='county_ID', right_on='countyid') ``` I was not sure from the question if you only wanted to merge if the key was in the left hand DataFrame. If that is the case then the following will do that (the above will in effect do a many to many merge) ``` pd.merge(frame_1, frame_2, how='left', left_on='county_ID', right_on='countyid') ``` Or ``` frame_1.merge(frame_2, how='left', left_on='county_ID', right_on='countyid') ```", "best_answers_score":0.8, "library_name":"pandas", "question_url":"https:\/\/stackoverflow.com\/questions\/20375561\/joining-pandas-dataframes-by-column-names", "best_answers_votes":294, "question_length":431, "response_length":640 }, { "question":"Adding meta-information\/metadata to pandas DataFrame Is it possible to add some meta-information\/metadata to a pandas DataFrame? 
For example, the instrument's name used to measure the data, the instrument responsible, etc. One workaround would be to create a column with that information, but it seems wasteful to store a single piece of information in every row!", "response":"Sure, like most Python objects, you can attach new attributes to a pandas.DataFrame: ``` import pandas as pd df = pd.DataFrame([]) df.instrument_name = 'Binky' ``` Note, however, that while you can attach attributes to a DataFrame, operations performed on the DataFrame (such as groupby, pivot, join, assign or loc to name just a few) may return a new DataFrame without the metadata attached. Pandas does not yet have a robust method of propagating metadata attached to DataFrames. Preserving the metadata in a file is possible. You can find an example of how to store metadata in an HDF5 file here.", "best_answers_score":0.8, "library_name":"pandas", "question_url":"https:\/\/stackoverflow.com\/questions\/14688306\/adding-meta-information-metadata-to-pandas-dataframe", "best_answers_votes":113, "question_length":363, "response_length":599 }, { "question":"Are for-loops in pandas really bad? When should I care? Are for loops really \"bad\"? If not, in what situation(s) would they be better than using a more conventional \"vectorized\" approach?1 I am familiar with the concept of \"vectorization\", and how pandas employs vectorized techniques to speed up computation. Vectorized functions broadcast operations over the entire series or DataFrame to achieve speedups much greater than conventionally iterating over the data. However, I am quite surprised to see a lot of code (including from answers on Stack Overflow) offering solutions to problems that involve looping through data using for loops and list comprehensions. The documentation and API say that loops are \"bad\", and that one should \"never\" iterate over arrays, series, or DataFrames. So, how come I sometimes see users suggesting loop-based solutions? 
1 - While it is true that the question sounds somewhat broad, the truth is that there are very specific situations in which for loops are usually better than the conventional \"vectorized\" approach. This post aims to capture this for posterity.
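For instance, a quick self-contained sketch of the two-column zip pattern (the toy frame here is made up for illustration):

```python
import pandas as pd

# Hypothetical two-column frame.
df = pd.DataFrame({'A': [1, 2, 3], 'B': [3, 2, 1]})

# List comprehension over two columns via zip.
col = [a + b for a, b in zip(df.A, df.B)]
print(col)  # [4, 4, 4]
```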
Numeric Comparison Consider a simple boolean indexing operation. The list comprehension method has been timed against Series.ne (!=) and query. Here are the functions: ``` # Boolean indexing with Numeric value comparison. df[df.A != df.B] # vectorized != df.query('A != B') # query (numexpr) df[[x != y for x, y in zip(df.A, df.B)]] # list comp ``` For simplicity, I have used the perfplot package to run all the timeit tests in this post. The timings for the operations above are below: The list comprehension outperforms query for moderately sized N, and even outperforms the vectorized not equals comparison for tiny N. Unfortunately, the list comprehension scales linearly, so it does not offer much performance gain for larger N. Note It is worth mentioning that much of the benefit of list comprehension come from not having to worry about the index alignment, but this means that if your code is dependent on indexing alignment, this will break. In some cases, vectorised operations over the underlying NumPy arrays can be considered as bringing in the \"best of both worlds\", allowing for vectorisation without all the unneeded overhead of the pandas functions. This means that you can rewrite the operation above as ``` df[df.A.values != df.B.values] ``` Which outperforms both the pandas and list comprehension equivalents: NumPy vectorization is out of the scope of this post, but it is definitely worth considering, if performance matters. Value Counts Taking another example - this time, with another vanilla python construct that is faster than a for loop - collections.Counter. A common requirement is to compute the value counts and return the result as a dictionary. This is done with value_counts, np.unique, and Counter: ``` # Value Counts comparison. 
ser.value_counts(sort=False).to_dict() # value_counts dict(zip(*np.unique(ser, return_counts=True))) # np.unique Counter(ser) # Counter ``` The results are more pronounced, Counter wins out over both vectorized methods for a larger range of small N (~3500). Note More trivia (courtesy @user2357112). The Counter is implemented with a C accelerator, so while it still has to work with python objects instead of the underlying C datatypes, it is still faster than a for loop. Python power! Of course, the take away from here is that the performance depends on your data and use case. The point of these examples is to convince you not to rule out these solutions as legitimate options. If these still don't give you the performance you need, there is always cython and numba. Let's add this test into the mix. ``` from numba import njit, prange @njit(parallel=True) def get_mask(x, y): result = [False] * len(x) for i in prange(len(x)): result[i] = x[i] != y[i] return np.array(result) df[get_mask(df.A.values, df.B.values)] # numba ``` Numba offers JIT compilation of loopy python code to very powerful vectorized code. Understanding how to make numba work involves a learning curve. Operations with Mixed\/object dtypes String-based Comparison Revisiting the filtering example from the first section, what if the columns being compared are strings? Consider the same 3 functions above, but with the input DataFrame cast to string. ``` # Boolean indexing with string value comparison. df[df.A != df.B] # vectorized != df.query('A != B') # query (numexpr) df[[x != y for x, y in zip(df.A, df.B)]] # list comp ``` So, what changed? The thing to note here is that string operations are inherently difficult to vectorize. Pandas treats strings as objects, and all operations on objects fall back to a slow, loopy implementation. 
Now, because this loopy implementation is surrounded by all the overhead mentioned above, there is a constant magnitude difference between these solutions, even though they scale the same. When it comes to operations on mutable\/complex objects, there is no comparison. List comprehension outperforms all operations involving dicts and lists. Accessing Dictionary Value(s) by Key Here are timings for two operations that extract a value from a column of dictionaries: map and the list comprehension. The setup is in the Appendix, under the heading \"Code Snippets\". ``` # Dictionary value extraction. ser.map(operator.itemgetter('value')) # map pd.Series([x.get('value') for x in ser]) # list comprehension ``` Positional List Indexing Timings for 3 operations that extract the 0th element from a list of columns (handling exceptions), map, str.get accessor method, and the list comprehension: ```py # List positional indexing. def get_0th(lst): try: return lst[0] # Handle empty lists and NaNs gracefully. except (IndexError, TypeError): return np.nan ``` ``` ser.map(get_0th) # map ser.str[0] # str accessor pd.Series([x[0] if len(x) > 0 else np.nan for x in ser]) # list comp pd.Series([get_0th(x) for x in ser]) # list comp safe ``` Note If the index matters, you would want to do: ``` pd.Series([...], index=ser.index) ``` When reconstructing the series. List Flattening A final example is flattening lists. This is another common problem, and demonstrates just how powerful pure python is here. ``` # Nested list flattening. pd.DataFrame(ser.tolist()).stack().reset_index(drop=True) # stack pd.Series(list(chain.from_iterable(ser.tolist()))) # itertools.chain pd.Series([y for x in ser for y in x]) # nested list comp ``` Both itertools.chain.from_iterable and the nested list comprehension are pure python constructs, and scale much better than the stack solution. 
These timings are a strong indication of the fact that pandas is not equipped to work with mixed dtypes, and that you should probably refrain from using it to do so. Wherever possible, data should be present as scalar values (ints\/floats\/strings) in separate columns. Lastly, the applicability of these solutions depends widely on your data. So, the best thing to do would be to test these operations on your data before deciding what to go with. Notice how I have not timed apply on these solutions, because it would skew the graph (yes, it's that slow). Regex Operations, and .str Accessor Methods Pandas can apply regex operations such as str.contains, str.extract, and str.extractall, as well as other \"vectorized\" string operations (such as str.split, str.find, str.translate, and so on) on string columns. These functions are slower than list comprehensions, and are meant to be more convenience functions than anything else. It is usually much faster to pre-compile a regex pattern and iterate over your data with re.compile (also see Is it worth using Python's re.compile?). The list comp equivalent to str.contains looks something like this: ``` p = re.compile(...) ser2 = pd.Series([x for x in ser if p.search(x)]) ``` Or, ``` ser2 = ser[[bool(p.search(x)) for x in ser]] ``` If you need to handle NaNs, you can do something like ``` ser[[bool(p.search(x)) if pd.notnull(x) else False for x in ser]] ``` The list comp equivalent to str.extract (without groups) will look something like: ``` df['col2'] = [p.search(x).group(0) for x in df['col']] ``` If you need to handle no-matches and NaNs, you can use a custom function (still faster!): ``` def matcher(x): m = p.search(str(x)) if m: return m.group(0) return np.nan df['col2'] = [matcher(x) for x in df['col']] ``` The matcher function is very extensible. It can be fitted to return a list for each capture group, as needed. Just query the group or groups attribute of the matcher object.
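As a quick self-contained check that the pre-compiled list comprehension agrees with the accessor (series values borrowed from the answer's own extraction example):

```python
import re
import pandas as pd

ser = pd.Series(['foo xyz', 'test A1234', 'D3345 xtz'])
p = re.compile(r'(?<=[A-Z])\d{4}')

mask_accessor = ser.str.contains(r'(?<=[A-Z])\d{4}')          # pandas accessor
mask_listcomp = pd.Series([bool(p.search(x)) for x in ser])  # pre-compiled loop

print(mask_accessor.equals(mask_listcomp))  # True
```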
For str.extractall, change p.search to p.findall. String Extraction Consider a simple filtering operation. The idea is to extract 4 digits if it is preceded by an upper case letter. ``` # Extracting strings. p = re.compile(r'(? 0 else np.nan for x in ser]), lambda ser: pd.Series([get_0th(x) for x in ser]), ], labels=['map', 'str accessor', 'list comprehension', 'list comp safe'], n_range=[2**k for k in range(0, 15)], xlabel='N', equality_check=None ) ``` ``` # Nested list flattening. ser3 = pd.Series([['a', 'b', 'c'], ['d', 'e'], ['f', 'g']]) perfplot.show( setup=lambda n: pd.concat([ser2] * n, ignore_index=True), kernels=[ lambda ser: pd.DataFrame(ser.tolist()).stack().reset_index(drop=True), lambda ser: pd.Series(list(chain.from_iterable(ser.tolist()))), lambda ser: pd.Series([y for x in ser for y in x]), ], labels=['stack', 'itertools.chain', 'nested list comp'], n_range=[2**k for k in range(0, 15)], xlabel='N', equality_check=None ) ``` ``` # Extracting strings. ser4 = pd.Series(['foo xyz', 'test A1234', 'D3345 xtz']) perfplot.show( setup=lambda n: pd.concat([ser4] * n, ignore_index=True), kernels=[ lambda ser: ser.str.extract(r'(?<=[A-Z])(\\d{4})', expand=False), lambda ser: pd.Series([matcher(x) for x in ser]) ], labels=['str.extract', 'list comprehension'], n_range=[2**k for k in range(0, 15)], xlabel='N', equality_check=None ) ```", "best_answers_score":0.8, "library_name":"pandas", "question_url":"https:\/\/stackoverflow.com\/questions\/54028199\/are-for-loops-in-pandas-really-bad-when-should-i-care", "best_answers_votes":246, "question_length":1093, "response_length":10519 }, { "question":"Apply function to each cell in DataFrame I have a dataframe that may look like this: ```none A B C foo bar foo bar bar foo foo bar ``` I want to look through every element of each row (or every element of each column) and apply the following function to get the subsequent dataframe: ```py def foo_bar(x): return x.replace('foo', 'wow') ``` After applying the function, my 
dataframe will look like this: ```none A B C wow bar wow bar bar wow wow bar ``` Is there a simple one-liner that can apply a function to each cell? This is a simplistic example so there may be an easier way to execute this specific example other than applying a function, but what I am really asking about is how to apply a function in every cell within a dataframe.", "response":"You can use map() which is concise for your case. ``` df.map(foo_bar) # A B C #0 wow bar wow bar #1 bar wow wow bar ``` Another option is to vectorize your function and then use apply method: ``` import numpy as np df.apply(np.vectorize(foo_bar)) # A B C #0 wow bar wow bar #1 bar wow wow bar ```", "best_answers_score":0.8, "library_name":"pandas", "question_url":"https:\/\/stackoverflow.com\/questions\/39475978\/apply-function-to-each-cell-in-dataframe", "best_answers_votes":244, "question_length":740, "response_length":296 }, { "question":"Pandas \"Can only compare identically-labeled DataFrame objects\" error I'm using Pandas to compare the outputs of two files loaded into two data frames (uat, prod): ... ``` uat = uat[['Customer Number','Product']] prod = prod[['Customer Number','Product']] print uat['Customer Number'] == prod['Customer Number'] print uat['Product'] == prod['Product'] print uat == prod The first two match exactly: 74357 True 74356 True Name: Customer Number, dtype: bool 74357 True 74356 True Name: Product, dtype: bool ``` For the third print, I get an error: Can only compare identically-labeled DataFrame objects. If the first two compared fine, what's wrong with the 3rd? 
Thanks", "response":"Here's a small example to demonstrate this (which only applied to DataFrames, not Series, until Pandas 0.19 where it applies to both): ``` In [1]: df1 = pd.DataFrame([[1, 2], [3, 4]]) In [2]: df2 = pd.DataFrame([[3, 4], [1, 2]], index=[1, 0]) In [3]: df1 == df2 Exception: Can only compare identically-labeled DataFrame objects ``` One solution is to sort the index first (Note: some functions require sorted indexes): ``` In [4]: df2.sort_index(inplace=True) In [5]: df1 == df2 Out[5]: 0 1 0 True True 1 True True ``` Note: == is also sensitive to the order of columns, so you may have to use sort_index(axis=1): ``` In [11]: df1.sort_index().sort_index(axis=1) == df2.sort_index().sort_index(axis=1) Out[11]: 0 1 0 True True 1 True True ``` Note: This can still raise (if the index\/columns aren't identically labelled after sorting).", "best_answers_score":0.8, "library_name":"pandas", "question_url":"https:\/\/stackoverflow.com\/questions\/18548370\/pandas-can-only-compare-identically-labeled-dataframe-objects-error", "best_answers_votes":121, "question_length":667, "response_length":835 }, { "question":"Remove index name in pandas I have a dataframe like this one: ``` In [10]: df Out[10]: Column 1 foo Apples 1 Oranges 2 Puppies 3 Ducks 4 ``` How to remove index name foo from that dataframe? The desired output is like this: ``` In [10]: df Out[10]: Column 1 Apples 1 Oranges 2 Puppies 3 Ducks 4 ```", "response":"Took me way too long to find an answer that actually worked for me. See below. 
``` df = df.rename_axis(None, axis=1) ``` I'm sure some of these other answers are working for other people, but they definitely didn't work for me :(", "best_answers_score":0.8, "library_name":"pandas", "question_url":"https:\/\/stackoverflow.com\/questions\/29765548\/remove-index-name-in-pandas", "best_answers_votes":163, "question_length":298, "response_length":229 }, { "question":"Extract first and last row of a dataframe in pandas How can I extract the first and last rows of a given dataframe as a new dataframe in pandas? I've tried to use iloc to select the desired rows and then concat as in: ``` df=pd.DataFrame({'a':range(1,5), 'b':['a','b','c','d']}) pd.concat([df.iloc[0,:], df.iloc[-1,:]]) ``` but this does not produce a pandas dataframe: ``` a 1 b a a 4 b d dtype: object ```", "response":"I think the most simple way is .iloc[[0, -1]]. ``` df = pd.DataFrame({'a':range(1,5), 'b':['a','b','c','d']}) df2 = df.iloc[[0, -1]] print(df2) a b 0 1 a 3 4 d ```", "best_answers_score":0.8, "library_name":"pandas", "question_url":"https:\/\/stackoverflow.com\/questions\/36542169\/extract-first-and-last-row-of-a-dataframe-in-pandas", "best_answers_votes":223, "question_length":407, "response_length":163 }, { "question":"Does pandas iterrows have performance issues? I have noticed very poor performance when using iterrows from pandas. Is it specific to iterrows and should this function be avoided for data of a certain size (I'm working with 2-3 million rows)? This discussion on GitHub led me to believe it is caused when mixing dtypes in the dataframe, however the simple example below shows it is there even when using one dtype (float64). 
This takes 36 seconds on my machine: ``` import pandas as pd import numpy as np import time s1 = np.random.randn(2000000) s2 = np.random.randn(2000000) dfa = pd.DataFrame({'s1': s1, 's2': s2}) start = time.time() i=0 for rowindex, row in dfa.iterrows(): i+=1 end = time.time() print end - start ``` Why are vectorized operations like apply so much quicker? I imagine there must be some row by row iteration going on there too. I cannot figure out how to not use iterrows in my case (this I'll save for a future question). Therefore I would appreciate hearing if you have consistently been able to avoid this iteration. I'm making calculations based on data in separate dataframes. A simplified version of what I want to run: ``` import pandas as pd import numpy as np #%% Create the original tables t1 = {'letter':['a','b'], 'number1':[50,-10]} t2 = {'letter':['a','a','b','b'], 'number2':[0.2,0.5,0.1,0.4]} table1 = pd.DataFrame(t1) table2 = pd.DataFrame(t2) #%% Create the body of the new table table3 = pd.DataFrame(np.nan, columns=['letter','number2'], index=[0]) #%% Iterate through filtering relevant data, optimizing, returning info for row_index, row in table1.iterrows(): t2info = table2[table2.letter == row['letter']].reset_index() table3.ix[row_index,] = optimize(t2info,row['number1']) #%% Define optimization def optimize(t2info, t1info): calculation = [] for index, r in t2info.iterrows(): calculation.append(r['number2']*t1info) maxrow = calculation.index(max(calculation)) return t2info.ix[maxrow] ```", "response":"Generally, iterrows should only be used in very, very specific cases. This is the general order of precedence for performance of various operations: vectorization using a custom Cython routine apply reductions that can be performed in Cython iteration in Python space itertuples iterrows updating an empty frame (e.g., using loc one-row-at-a-time) Using a custom Cython routine is usually too complicated, so let's skip that for now. 
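The rest of the ordering can be sanity-checked with a tiny self-contained sketch (toy data, assumed for illustration, not the OP's 2M-row frame):

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({'s1': np.arange(5.0), 's2': 2 * np.arange(5.0)})

# Vectorized: a single call executed in compiled code.
vec = (df['s1'] + df['s2']).sum()

# itertuples: yields plain named tuples, avoiding the per-row
# Series construction that makes iterrows slow.
loop = sum(row.s1 + row.s2 for row in df.itertuples(index=False))

print(vec == loop)  # True
```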
Vectorization is always, always the first and best choice. However, there is a small set of cases (usually involving a recurrence) which cannot be vectorized in obvious ways. Furthermore, on a smallish DataFrame, it may be faster to use other methods. apply usually can be handled by an iterator in Cython space. This is handled internally by pandas, though it depends on what is going on inside the apply expression. For example, df.apply(lambda x: np.sum(x)) will be executed pretty swiftly, though of course, df.sum(1) is even better. However something like df.apply(lambda x: x['b'] + 1) will be executed in Python space, and consequently is much slower. itertuples does not box the data into a Series. It just returns the data in the form of tuples. iterrows does box the data into a Series. Unless you really need this, use another method. Updating an empty frame a-single-row-at-a-time. I have seen this method used WAY too much. It is by far the slowest. It is probably common place (and reasonably fast for some Python structures), but a DataFrame does a fair number of checks on indexing, so this will always be very slow to update a row at a time. Much better to create new structures and concat.", "best_answers_score":0.8, "library_name":"pandas", "question_url":"https:\/\/stackoverflow.com\/questions\/24870953\/does-pandas-iterrows-have-performance-issues", "best_answers_votes":281, "question_length":1943, "response_length":1641 }, { "question":"Selecting pandas column by location I'm simply trying to access named pandas columns by an integer. You can select a row by location using df.ix[3]. But how to select a column by integer? 
My dataframe: ``` df=pandas.DataFrame({'a':np.random.rand(5), 'b':np.random.rand(5)}) ```", "response":"Two approaches that come to mind: ``` >>> df A B C D 0 0.424634 1.716633 0.282734 2.086944 1 -1.325816 2.056277 2.583704 -0.776403 2 1.457809 -0.407279 -1.560583 -1.316246 3 -0.757134 -1.321025 1.325853 -2.513373 4 1.366180 -1.265185 -2.184617 0.881514 >>> df.iloc[:, 2] 0 0.282734 1 2.583704 2 -1.560583 3 1.325853 4 -2.184617 Name: C >>> df[df.columns[2]] 0 0.282734 1 2.583704 2 -1.560583 3 1.325853 4 -2.184617 Name: C ``` Edit: The original answer suggested the use of df.ix[:,2] but this function is now deprecated. Users should switch to df.iloc[:,2].", "best_answers_score":0.8, "library_name":"pandas", "question_url":"https:\/\/stackoverflow.com\/questions\/14941097\/selecting-pandas-column-by-location", "best_answers_votes":239, "question_length":277, "response_length":558 }, { "question":"What is the difference between NaN and None? I am reading two columns of a csv file using pandas readcsv() and then assigning the values to a dictionary. The columns contain strings of numbers and letters. Occasionally there are cases where a cell is empty. In my opinion, the value read to that dictionary entry should be None but instead nan is assigned. Surely None is more descriptive of an empty cell as it has a null value, whereas nan just says that the value read is not a number. Is my understanding correct, what IS the difference between None and nan? Why is nan assigned instead of None? Also, my dictionary check for any empty cells has been using numpy.isnan(): ``` for k, v in my_dict.iteritems(): if np.isnan(v): ``` But this gives me an error saying that I cannot use this check for v. I guess it is because an integer or float variable, not a string is meant to be used. If this is true, how can I check v for an \"empty cell\"\/nan case?", "response":"NaN is used as a placeholder for missing data consistently in pandas, consistency is good. 
I usually read\/translate NaN as \"missing\". Also see the 'working with missing data' section in the docs. Wes writes in the docs 'choice of NA-representation': After years of production use [NaN] has proven, at least in my opinion, to be the best decision given the state of affairs in NumPy and Python in general. The special value NaN (Not-A-Number) is used everywhere as the NA value, and there are API functions isna and notna which can be used across the dtypes to detect NA values. ... Thus, I have chosen the Pythonic \u201cpracticality beats purity\u201d approach and traded integer NA capability for a much simpler approach of using a special value in float and object arrays to denote NA, and promoting integer arrays to floating when NAs must be introduced. Note: the \"gotcha\" that integer Series containing missing data are upcast to floats. In my opinion the main reason to use NaN (over None) is that it can be stored with numpy's float64 dtype, rather than the less efficient object dtype, see NA type promotions. ``` # without forcing dtype it changes None to NaN! s_bad = pd.Series([1, None], dtype=object) s_good = pd.Series([1, np.nan]) In [13]: s_bad.dtype Out[13]: dtype('O') In [14]: s_good.dtype Out[14]: dtype('float64') ``` Jeff comments (below) on this: np.nan allows for vectorized operations; its a float value, while None, by definition, forces object type, which basically disables all efficiency in numpy. So repeat 3 times fast: object==bad, float==good Saying that, many operations may still work just as well with None vs NaN (but perhaps are not supported i.e. 
they may sometimes give surprising results): ``` In [15]: s_bad.sum() Out[15]: 1 In [16]: s_good.sum() Out[16]: 1.0 ``` To answer the second question: You should be using isna and notna to test for missing data (NaN).", "best_answers_score":0.8, "library_name":"pandas", "question_url":"https:\/\/stackoverflow.com\/questions\/17534106\/what-is-the-difference-between-nan-and-none", "best_answers_votes":182, "question_length":953, "response_length":1893 }, { "question":"Convert number strings with commas in pandas DataFrame to float I have a DataFrame that contains numbers as strings with commas for the thousands marker. I need to convert them to floats. ``` a = [['1,200', '4,200'], ['7,000', '-0.03'], [ '5', '0']] df=pandas.DataFrame(a) ``` I am guessing I need to use locale.atof. Indeed ``` df[0].apply(locale.atof) ``` works as expected. I get a Series of floats. But when I apply it to the DataFrame, I get an error. ``` df.apply(locale.atof) ``` TypeError: (\"cannot convert the series to \", u'occurred at index 0') and ``` df[0:1].apply(locale.atof) ``` gives another error: ValueError: ('invalid literal for float(): 1,200', u'occurred at index 0') So, how do I convert this DataFrame of strings to a DataFrame of floats?", "response":"If you're reading in from csv then you can use the thousands arg: ``` pd.read_csv('foo.tsv', sep='\\t', thousands=',') ``` This method is likely to be more efficient than performing the operation as a separate step.
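If the frame is already loaded, a locale-free alternative (a sketch, not part of the original answer) is to strip the separators with the str accessor and cast:

```python
import pandas as pd

# The OP's data, rebuilt here so the snippet runs on its own.
a = [['1,200', '4,200'], ['7,000', '-0.03'], ['5', '0']]
df = pd.DataFrame(a)

# Strip the thousands separators column by column, then cast to float.
df = df.apply(lambda s: s.str.replace(',', '', regex=False).astype(float))

print(df[0].tolist())  # [1200.0, 7000.0, 5.0]
```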
You need to set the locale first: ``` In [ 9]: import locale In [10]: from locale import atof In [11]: locale.setlocale(locale.LC_NUMERIC, '') Out[11]: 'en_GB.UTF-8' In [12]: df.applymap(atof) Out[12]: 0 1 0 1200 4200.00 1 7000 -0.03 2 5 0.00 ```", "best_answers_score":0.8, "library_name":"pandas", "question_url":"https:\/\/stackoverflow.com\/questions\/22137723\/convert-number-strings-with-commas-in-pandas-dataframe-to-float", "best_answers_votes":243, "question_length":763, "response_length":461 }, { "question":"Pandas DataFrame stored list as string: How to convert back to list I have an n-by-m Pandas DataFrame df defined as follows. (I know this is not the best way to do it. It makes sense for what I'm trying to do in my actual code, but that would be TMI for this post so just take my word that this approach works in my particular scenario.) ``` >>> df = DataFrame(columns=['col1']) >>> df.append(Series([None]), ignore_index=True) >>> df Empty DataFrame Columns: [col1] Index: [] ``` I stored lists in the cells of this DataFrame as follows. ``` >>> df['column1'][0] = [1.23, 2.34] >>> df col1 0 [1, 2] ``` For some reason, the DataFrame stored this list as a string instead of a list. ``` >>> df['column1'][0] '[1.23, 2.34]' ``` I have 2 questions for you. Why does the DataFrame store a list as a string and is there a way around this behavior? If not, then is there a Pythonic way to convert this string into a list? Update The DataFrame I was using had been saved and loaded from a CSV format. This format, rather than the DataFrame itself, converted the list from a string to a literal.", "response":"As you pointed out, this can commonly happen when saving and loading pandas DataFrames as .csv files, which is a text format. In your case this happened because list objects have a string representation, allowing them to be stored as .csv files. Loading the .csv will then yield that string representation. 
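A minimal sketch of the round-trip problem, and one way to avoid re-parsing by hand by plugging ast.literal_eval in as a converter (in-memory buffer used in place of a real file):

```python
import io
from ast import literal_eval

import pandas as pd

df = pd.DataFrame({'col1': [[1.23, 2.34]]})
buf = io.StringIO()
df.to_csv(buf, index=False)
buf.seek(0)

# Without a converter the list comes back as its string representation;
# literal_eval turns it back into a real Python list on load.
df2 = pd.read_csv(buf, converters={'col1': literal_eval})
print(df2['col1'][0])  # [1.23, 2.34]
```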
If you want to store the actual objects, you should use DataFrame.to_pickle() (note: objects must be picklable!). To answer your second question, you can convert it back with ast.literal_eval: ``` >>> from ast import literal_eval >>> literal_eval('[1.23, 2.34]') [1.23, 2.34] ```", "best_answers_score":0.8, "library_name":"pandas", "question_url":"https:\/\/stackoverflow.com\/questions\/23111990\/pandas-dataframe-stored-list-as-string-how-to-convert-back-to-list", "best_answers_votes":155, "question_length":1088, "response_length":586 }, { "question":"Reading tab-delimited file with Pandas - works on Windows, but not on Mac I've been reading a tab-delimited data file in Windows with Pandas\/Python without any problems. The data file contains notes in first three lines and then follows with a header. ``` df = pd.read_csv(myfile,sep='\\t',skiprows=(0,1,2),header=(0)) ``` I'm now trying to read this file with my Mac. (My first time using Python on Mac.) I get the following error. ``` pandas.parser.CParserError: Error tokenizing data. C error: Expected 1 fields in line 8, saw 39 ``` If set the error_bad_lines argument for read_csv to False, I get the following information, which continues until the end of the last row. ``` Skipping line 8: expected 1 fields, saw 39 Skipping line 9: expected 1 fields, saw 125 Skipping line 10: expected 1 fields, saw 125 Skipping line 11: expected 1 fields, saw 125 Skipping line 12: expected 1 fields, saw 125 Skipping line 13: expected 1 fields, saw 125 Skipping line 14: expected 1 fields, saw 125 Skipping line 15: expected 1 fields, saw 125 Skipping line 16: expected 1 fields, saw 125 Skipping line 17: expected 1 fields, saw 125 ... ``` Do I need to specify a value for the encoding argument? It seems as though I shouldn't have to because reading the file works fine on Windows.", "response":"The biggest clue is the rows are all being returned on one line. This indicates line terminators are being ignored or are not present. 
You can specify the line terminator for read_csv. If you are on a Mac, the lines created will end with \\r rather than the Linux standard \\n, or, with the belt-and-suspenders Windows convention, \\r\\n. ``` pandas.read_csv(filename, sep='\\t', lineterminator='\\r') ``` You could also open all your data using the codecs package. This may increase robustness at the expense of document loading speed. ``` import codecs doc = codecs.open('document','rU','UTF-16') # open for reading with \"universal\" newline handling df = pandas.read_csv(doc, sep='\\t') ```", "best_answers_score":0.8, "library_name":"pandas", "question_url":"https:\/\/stackoverflow.com\/questions\/27896214\/reading-tab-delimited-file-with-pandas-works-on-windows-but-not-on-mac", "best_answers_votes":236, "question_length":1276, "response_length":687 }, { "question":"move column in pandas dataframe I have the following dataframe: ``` a b x y 0 1 2 3 -1 1 2 4 6 -2 2 3 6 9 -3 3 4 8 12 -4 ``` How can I move columns b and x such that they are the last 2 columns in the dataframe? I would like to specify b and x by name, but not the other columns.", "response":"You can rearrange columns directly by specifying their order: ``` df = df[['a', 'y', 'b', 'x']] ``` In the case of larger dataframes where the column titles are dynamic, you can use a list comprehension to select every column not in your target set and then append the target set to the end. 
``` >>> df[[c for c in df if c not in ['b', 'x']] + ['b', 'x']] a y b x 0 1 -1 2 3 1 2 -2 4 6 2 3 -3 6 9 3 4 -4 8 12 ``` To make it more bulletproof, you can ensure that your target columns are indeed in the dataframe: ``` cols_at_end = ['b', 'x'] df = df[[c for c in df if c not in cols_at_end] + [c for c in cols_at_end if c in df]] ```", "best_answers_score":0.8, "library_name":"pandas", "question_url":"https:\/\/stackoverflow.com\/questions\/35321812\/move-column-in-pandas-dataframe", "best_answers_votes":200, "question_length":279, "response_length":631 }, { "question":"Pandas split DataFrame by column value I have a DataFrame with a column Sales. How can I split it into 2 based on the Sales value? The first DataFrame will have data with 'Sales' < s and the second with 'Sales' >= s.", "response":"You can use boolean indexing: ``` df = pd.DataFrame({'Sales':[10,20,30,40,50], 'A':[3,4,7,6,1]}) print (df) A Sales 0 3 10 1 4 20 2 7 30 3 6 40 4 1 50 s = 30 df1 = df[df['Sales'] >= s] print (df1) A Sales 2 7 30 3 6 40 4 1 50 df2 = df[df['Sales'] < s] print (df2) A Sales 0 3 10 1 4 20 ``` Or build a boolean mask once and reuse it: ``` mask = df['Sales'] >= s df1 = df[mask] df2 = df[~mask] print (df1) A Sales 2 7 30 3 6 40 4 1 50 print (df2) A Sales 0 3 10 1 4 20 ``` ``` print (mask) 0 False 1 False 2 True 3 True 4 True Name: Sales, dtype: bool print (~mask) 0 True 1 True 2 False 3 False 4 False Name: Sales, dtype: bool ```", "best_answers_score":0.8, "library_name":"pandas", "question_url":"https:\/\/stackoverflow.com\/questions\/33742588\/pandas-split-dataframe-by-column-value", "best_answers_votes":167, "question_length":170, "response_length":520 }, { "question":"How to read file with space separated values in pandas I am trying to read a file into pandas. The file has values separated by spaces, but with a varying number of spaces. I tried: ``` pd.read_csv('file.csv', delimiter=' ') ``` but it doesn't work", "response":"Add the delim_whitespace=True argument; it's faster than a regex separator. 
Note, however, that as of pandas 2.2.0 delim_whitespace is deprecated; use sep=\"\\s+\" instead.", "best_answers_score":0.8, "library_name":"pandas", "question_url":"https:\/\/stackoverflow.com\/questions\/19632075\/how-to-read-file-with-space-separated-values-in-pandas", "best_answers_votes":250, "question_length":242, "response_length":148 }, { "question":"python pandas dataframe, is it pass-by-value or pass-by-reference [duplicate] If I pass a dataframe to a function and modify it inside the function, is it pass-by-value or pass-by-reference? I run the following code ``` a = pd.DataFrame({'a':[1,2], 'b':[3,4]}) def letgo(df): df = df.drop('b',axis=1) letgo(a) ``` and the value of a does not change after the function call. Does it mean it is pass-by-value? I also tried the following ``` xx = np.array([[1,2], [3,4]]) def letgo2(x): x[1,1] = 100 def letgo3(x): x = np.array([[3,3],[3,3]]) ``` It turns out letgo2() does change xx and letgo3() does not. Why is it like this?", "response":"The short answer is, Python always does pass-by-value, but every Python variable is actually a pointer to some object, so sometimes it looks like pass-by-reference. In Python every object is either mutable or immutable. For example, lists, dicts, modules and Pandas data frames are mutable, while ints, strings and tuples are immutable. Mutable objects can be changed internally (e.g., add an element to a list), but immutable objects cannot. As I said at the start, you can think of every Python variable as a pointer to an object. When you pass a variable to a function, the variable (pointer) within the function is always a copy of the variable (pointer) that was passed in. So if you assign something new to the internal variable, all you are doing is changing the local variable to point to a different object. 
This doesn't alter (mutate) the original object that the variable pointed to, nor does it make the external variable point to the new object. At this point, the external variable still points to the original object, but the internal variable points to a new object. If you want to alter the original object (only possible with mutable data types), you have to do something that alters the object without assigning a completely new value to the local variable. This is why letgo() and letgo3() leave the external item unaltered, but letgo2() alters it. As @ursan pointed out, if letgo() used something like this instead, then it would alter (mutate) the original object that df points to, which would change the value seen via the global a variable: ``` def letgo(df): df.drop('b', axis=1, inplace=True) a = pd.DataFrame({'a':[1,2], 'b':[3,4]}) letgo(a) # will alter a ``` In some cases, you can completely hollow out the original variable and refill it with new data, without actually doing a direct assignment, e.g. this will alter the original object that v points to, which will change the data seen when you use v later: ``` def letgo3(x): x[:] = np.array([[3,3],[3,3]]) v = np.empty((2, 2)) letgo3(v) # will alter v ``` Notice that I'm not assigning something directly to x; I'm assigning something to the entire internal range of x. If you absolutely must create a completely new object and make it visible externally (which is sometimes the case with pandas), you have two options. The 'clean' option would be just to return the new object, e.g., ``` def letgo(df): df = df.drop('b',axis=1) return df a = pd.DataFrame({'a':[1,2], 'b':[3,4]}) a = letgo(a) ``` Another option would be to reach outside your function and directly alter a global variable. This changes a to point to a new object, and any function that refers to a afterward will see that new object: ``` def letgo(): global a a = a.drop('b',axis=1) a = pd.DataFrame({'a':[1,2], 'b':[3,4]}) letgo() # will alter a! 
``` Directly altering global variables is usually a bad idea, because anyone who reads your code will have a hard time figuring out how a got changed. (I generally use global variables for shared parameters used by many functions in a script, but I don't let them alter those global variables.)", "best_answers_score":0.8, "library_name":"pandas", "question_url":"https:\/\/stackoverflow.com\/questions\/38895768\/python-pandas-dataframe-is-it-pass-by-value-or-pass-by-reference", "best_answers_votes":182, "question_length":731, "response_length":3092 }, { "question":"Performance of Pandas apply vs np.vectorize to create new column from existing columns I am using Pandas dataframes and want to create a new column as a function of existing columns. I have not seen a good discussion of the speed difference between df.apply() and np.vectorize(), so I thought I would ask here. The Pandas apply() function is slow. From what I measured (shown below in some experiments), using np.vectorize() is 25x faster (or more) than using the DataFrame function apply() , at least on my 2016 MacBook Pro. Is this an expected result, and why? For example, suppose I have the following dataframe with N rows: ``` N = 10 A_list = np.random.randint(1, 100, N) B_list = np.random.randint(1, 100, N) df = pd.DataFrame({'A': A_list, 'B': B_list}) df.head() # A B # 0 78 50 # 1 23 91 # 2 55 62 # 3 82 64 # 4 99 80 ``` Suppose further that I want to create a new column as a function of the two columns A and B. In the example below, I'll use a simple function divide(). 
To apply the function, I can use either df.apply() or np.vectorize(): ``` def divide(a, b): if b == 0: return 0.0 return float(a)\/b df['result'] = df.apply(lambda row: divide(row['A'], row['B']), axis=1) df['result2'] = np.vectorize(divide)(df['A'], df['B']) df.head() # A B result result2 # 0 78 50 1.560000 1.560000 # 1 23 91 0.252747 0.252747 # 2 55 62 0.887097 0.887097 # 3 82 64 1.281250 1.281250 # 4 99 80 1.237500 1.237500 ``` If I increase N to real-world sizes like 1 million or more, then I observe that np.vectorize() is 25x faster or more than df.apply(). Below is some complete benchmarking code: ``` import pandas as pd import numpy as np import time def divide(a, b): if b == 0: return 0.0 return float(a)\/b for N in [1000, 10000, 100000, 1000000, 10000000]: print '' A_list = np.random.randint(1, 100, N) B_list = np.random.randint(1, 100, N) df = pd.DataFrame({'A': A_list, 'B': B_list}) start_epoch_sec = int(time.time()) df['result'] = df.apply(lambda row: divide(row['A'], row['B']), axis=1) end_epoch_sec = int(time.time()) result_apply = end_epoch_sec - start_epoch_sec start_epoch_sec = int(time.time()) df['result2'] = np.vectorize(divide)(df['A'], df['B']) end_epoch_sec = int(time.time()) result_vectorize = end_epoch_sec - start_epoch_sec print 'N=%d, df.apply: %d sec, np.vectorize: %d sec' % \\ (N, result_apply, result_vectorize) # Make sure results from df.apply and np.vectorize match. assert(df['result'].equals(df['result2'])) ``` The results are shown below: ``` N=1000, df.apply: 0 sec, np.vectorize: 0 sec N=10000, df.apply: 1 sec, np.vectorize: 0 sec N=100000, df.apply: 2 sec, np.vectorize: 0 sec N=1000000, df.apply: 24 sec, np.vectorize: 1 sec N=10000000, df.apply: 262 sec, np.vectorize: 4 sec ``` If np.vectorize() is in general always faster than df.apply(), then why is np.vectorize() not mentioned more? 
I only ever see StackOverflow posts related to df.apply(), such as: pandas create new column based on values from other columns How do I use Pandas 'apply' function to multiple columns? How to apply a function to two columns of Pandas dataframe", "response":"I will start by saying that the power of Pandas and NumPy arrays is derived from high-performance vectorised calculations on numeric arrays.1 The entire point of vectorised calculations is to avoid Python-level loops by moving calculations to highly optimised C code and utilising contiguous memory blocks.2 Python-level loops Now we can look at some timings. Below are all Python-level loops which produce either pd.Series, np.ndarray or list objects containing the same values. For the purposes of assignment to a series within a dataframe, the results are comparable. ``` # Python 3.6.5, NumPy 1.14.3, Pandas 0.23.0 np.random.seed(0) N = 10**5 %timeit list(map(divide, df['A'], df['B'])) # 43.9 ms %timeit np.vectorize(divide)(df['A'], df['B']) # 48.1 ms %timeit [divide(a, b) for a, b in zip(df['A'], df['B'])] # 49.4 ms %timeit [divide(a, b) for a, b in df[['A', 'B']].itertuples(index=False)] # 112 ms %timeit df.apply(lambda row: divide(*row), axis=1, raw=True) # 760 ms %timeit df.apply(lambda row: divide(row['A'], row['B']), axis=1) # 4.83 s %timeit [divide(row['A'], row['B']) for _, row in df[['A', 'B']].iterrows()] # 11.6 s ``` Some takeaways: The tuple-based methods (the first 4) are a factor more efficient than pd.Series-based methods (the last 3). np.vectorize, list comprehension + zip and map methods, i.e. the top 3, all have roughly the same performance. This is because they use tuple and bypass some Pandas overhead from pd.DataFrame.itertuples. There is a significant speed improvement from using raw=True with pd.DataFrame.apply versus without. This option feeds NumPy arrays to the custom function instead of pd.Series objects. 
pd.DataFrame.apply: just another loop To see exactly the objects Pandas passes around, you can amend your function trivially: ``` def foo(row): print(type(row)) assert False # because you only need to see this once df.apply(lambda row: foo(row), axis=1) ``` Output: <class 'pandas.core.series.Series'>. Creating, passing and querying a Pandas series object carries significant overheads relative to NumPy arrays. This shouldn't be a surprise: Pandas series include a decent amount of scaffolding to hold an index, values, attributes, etc. Do the same exercise again with raw=True and you'll see <class 'numpy.ndarray'>. All this is described in the docs, but seeing it is more convincing. np.vectorize: fake vectorisation The docs for np.vectorize have the following note: The vectorized function evaluates pyfunc over successive tuples of the input arrays like the python map function, except it uses the broadcasting rules of numpy. The \"broadcasting rules\" are irrelevant here, since the input arrays have the same dimensions. The parallel to map is instructive, since the map version above has almost identical performance. The source code shows what's happening: np.vectorize converts your input function into a Universal function (\"ufunc\") via np.frompyfunc. There is some optimisation, e.g. caching, which can lead to some performance improvement. In short, np.vectorize does what a Python-level loop should do, but pd.DataFrame.apply adds a chunky overhead. There's no JIT-compilation which you see with numba (see below). It's just a convenience. True vectorisation: what you should use Why aren't the above differences mentioned anywhere? Because the performance of truly vectorised calculations makes them irrelevant: ``` %timeit np.where(df['B'] == 0, 0, df['A'] \/ df['B']) # 1.17 ms %timeit (df['A'] \/ df['B']).replace([np.inf, -np.inf], 0) # 1.96 ms ``` Yes, that's ~40x faster than the fastest of the above loopy solutions. Either of these is acceptable. In my opinion, the first is succinct, readable and efficient. 
Only look at other methods, e.g. numba below, if performance is critical and this is part of your bottleneck. numba.njit: greater efficiency When loops are considered viable they are usually optimised via numba with underlying NumPy arrays to move as much as possible to C. Indeed, numba improves performance to microseconds. Without some cumbersome work, it will be difficult to get much more efficient than this. ``` from numba import njit @njit def divide(a, b): res = np.empty(a.shape) for i in range(len(a)): if b[i] != 0: res[i] = a[i] \/ b[i] else: res[i] = 0 return res %timeit divide(df['A'].values, df['B'].values) # 717 \u00b5s ``` Using @njit(parallel=True) may provide a further boost for larger arrays. 1 Numeric types include: int, float, datetime, bool, category. They exclude object dtype and can be held in contiguous memory blocks. 2 There are at least 2 reasons why NumPy operations are efficient versus Python: Everything in Python is an object. This includes, unlike C, numbers. Python types therefore have an overhead which does not exist with native C types. NumPy methods are usually C-based. In addition, optimised algorithms are used where possible.", "best_answers_score":0.8, "library_name":"pandas", "question_url":"https:\/\/stackoverflow.com\/questions\/52673285\/performance-of-pandas-apply-vs-np-vectorize-to-create-new-column-from-existing-c", "best_answers_votes":259, "question_length":3076, "response_length":4797 }, { "question":"Prevent pandas from interpreting 'NA' as NaN in a string The pandas read_csv() method interprets 'NA' as nan (not a number) instead of a valid string. In the simple case below note that the output in row 1, column 2 (zero based count) is 'nan' instead of 'NA'. 
sample.tsv (tab delimited) ``` PDB CHAIN SP_PRIMARY RES_BEG RES_END PDB_BEG PDB_END SP_BEG SP_END 5d8b N P60490 1 146 1 146 1 146 5d8b NA P80377 1 126 1 126 1 126 5d8b O P60491 1 118 1 118 1 118 ``` read_sample.py ```py import pandas as pd df = pd.read_csv( 'sample.tsv', sep='\\t', encoding='utf-8', ) for df_tuples in df.itertuples(index=True): print(df_tuples) ``` output ``` (0, u'5d8b', u'N', u'P60490', 1, 146, 1, 146, 1, 146) (1, u'5d8b', nan, u'P80377', 1, 126, 1, 126, 1, 126) (2, u'5d8b', u'O', u'P60491', 1, 118, 1, 118, 1, 118) ``` Additional Information Re-writing the file with quotes for data in the 'CHAIN' column and then using the quotechar parameter quotechar='\\'' has the same result. And passing a dictionary of types via the dtype parameter dtype=dict(valid_cols) does not change the result. An old answer to Prevent pandas from automatically inferring type in read_csv suggests first using a numpy record array to parse the file, but given the ability to now specify column dtypes, this shouldn't be necessary. Note that itertuples() is used to preserve dtypes as described in the iterrows documentation: \"To preserve dtypes while iterating over the rows, it is better to use itertuples() which returns tuples of the values and which is generally faster as iterrows.\" Example was tested on Python 2 and 3 with pandas version 0.16.2, 0.17.0, and 0.17.1. 
Is there a way to capture a valid string 'NA' instead of it being converted to nan?", "response":"You could use parameters keep_default_na and na_values to set all NA values by hand docs: ``` import pandas as pd from io import StringIO data = \"\"\" PDB CHAIN SP_PRIMARY RES_BEG RES_END PDB_BEG PDB_END SP_BEG SP_END 5d8b N P60490 1 146 1 146 1 146 5d8b NA P80377 _ 126 1 126 1 126 5d8b O P60491 1 118 1 118 1 118 \"\"\" df = pd.read_csv(StringIO(data), sep=' ', keep_default_na=False, na_values=['_']) In [130]: df Out[130]: PDB CHAIN SP_PRIMARY RES_BEG RES_END PDB_BEG PDB_END SP_BEG SP_END 0 5d8b N P60490 1 146 1 146 1 146 1 5d8b NA P80377 NaN 126 1 126 1 126 2 5d8b O P60491 1 118 1 118 1 118 In [144]: df.CHAIN.apply(type) Out[144]: 0 1 2 Name: CHAIN, dtype: object ``` EDIT All default NA values from na-values (as of pandas 1.0.0): The default NaN recognized values are ['-1.#IND', '1.#QNAN', '1.#IND', '-1.#QNAN', '#N\/A N\/A', '#N\/A', 'N\/A', 'n\/a', 'NA', '', '#NA', 'NULL', 'null', 'NaN', '-NaN', 'nan', '-nan', ''].", "best_answers_score":0.8, "library_name":"pandas", "question_url":"https:\/\/stackoverflow.com\/questions\/33952142\/prevent-pandas-from-interpreting-na-as-nan-in-a-string", "best_answers_votes":145, "question_length":1703, "response_length":923 }, { "question":"Jupyter notebook display two pandas tables side by side I have two pandas dataframes and I would like to display them in Jupyter notebook. Doing something like: ``` display(df1) display(df2) ``` Shows them one below another: I would like to have a second dataframe on the right of the first one. There is a similar question, but it looks like there a person is satisfied either with merging them in one dataframe of showing the difference between them. This will not work for me. In my case dataframes can represent completely different (non-comparable elements) and the size of them can be different. 
Thus my main goal is to save space.", "response":"I have ended up writing a function that can do this: [update: added titles based on suggestions (thnx @Antony_Hatchkins et al.)] ``` from IPython.display import display_html from itertools import chain,cycle def display_side_by_side(*args,titles=cycle([''])): html_str='' for df,title in zip(args, chain(titles,cycle([''])) ): html_str+='<th style=\"text-align:center\"><td style=\"vertical-align:top\">' html_str+=f'<h2 style=\"text-align:center;\">{title}</h2>' html_str+=df.to_html().replace('table','table style=\"display:inline\"') html_str+='</td></th>' display_html(html_str,raw=True) ``` Example usage: ``` df1 = pd.DataFrame(np.arange(12).reshape((3,4)),columns=['A','B','C','D',]) df2 = pd.DataFrame(np.arange(16).reshape((4,4)),columns=['A','B','C','D',]) display_side_by_side(df1,df2,df1, titles=['Foo','Foo Bar']) #we left 3rd empty... ```", "best_answers_score":0.8, "library_name":"pandas", "question_url":"https:\/\/stackoverflow.com\/questions\/38783027\/jupyter-notebook-display-two-pandas-tables-side-by-side", "best_answers_votes":200, "question_length":637, "response_length":737 }, { "question":"Pandas aggregate count distinct Let's say I have a log of user activity and I want to generate a report of the total duration and the number of unique users per day. ``` import numpy as np import pandas as pd df = pd.DataFrame({'date': ['2013-04-01','2013-04-01','2013-04-01','2013-04-02', '2013-04-02'], 'user_id': ['0001', '0001', '0002', '0002', '0002'], 'duration': [30, 15, 20, 15, 30]}) ``` Aggregating duration is pretty straightforward: ``` group = df.groupby('date') agg = group.aggregate({'duration': np.sum}) agg duration date 2013-04-01 65 2013-04-02 45 ``` What I'd like to do is sum the duration and count distincts at the same time, but I can't seem to find an equivalent for count_distinct: ``` agg = group.aggregate({ 'duration': np.sum, 'user_id': count_distinct}) ``` This works, but surely there's a better way, no? 
``` group = df.groupby('date') agg = group.aggregate({'duration': np.sum}) agg['uv'] = df.groupby('date').user_id.nunique() agg duration uv date 2013-04-01 65 2 2013-04-02 45 1 ``` I'm thinking I just need to provide a function that returns the count of distinct items of a Series object to the aggregate function, but I don't have a lot of exposure to the various libraries at my disposal. Also, it seems that the groupby object already knows this information, so wouldn't I just be duplicating the effort?", "response":"How about either of: ``` >>> df date duration user_id 0 2013-04-01 30 0001 1 2013-04-01 15 0001 2 2013-04-01 20 0002 3 2013-04-02 15 0002 4 2013-04-02 30 0002 >>> df.groupby(\"date\").agg({\"duration\": np.sum, \"user_id\": pd.Series.nunique}) duration user_id date 2013-04-01 65 2 2013-04-02 45 1 >>> df.groupby(\"date\").agg({\"duration\": np.sum, \"user_id\": lambda x: x.nunique()}) duration user_id date 2013-04-01 65 2 2013-04-02 45 1 ```", "best_answers_score":0.8, "library_name":"pandas", "question_url":"https:\/\/stackoverflow.com\/questions\/18554920\/pandas-aggregate-count-distinct", "best_answers_votes":229, "question_length":1343, "response_length":432 }, { "question":"How to convert Pandas Series of dates string into date objects? I have a Pandas data frame and one of the columns, have dates in string format of YYYY-MM-DD. For e.g. : '2013-10-28' At the moment the dtype of the column is object. How do I convert the column values to Pandas date format?", "response":"Essentially equivalent to @waitingkuo, but I would use pd.to_datetime here (it seems a little cleaner, and offers some additional functionality e.g. 
dayfirst): ``` In [11]: df Out[11]: a time 0 1 2013-01-01 1 2 2013-01-02 2 3 2013-01-03 In [12]: pd.to_datetime(df['time']) Out[12]: 0 2013-01-01 00:00:00 1 2013-01-02 00:00:00 2 2013-01-03 00:00:00 Name: time, dtype: datetime64[ns] In [13]: df['time'] = pd.to_datetime(df['time']) In [14]: df Out[14]: a time 0 1 2013-01-01 00:00:00 1 2 2013-01-02 00:00:00 2 3 2013-01-03 00:00:00 ``` Handling ValueErrors If you run into a situation where doing ``` df['time'] = pd.to_datetime(df['time']) ``` Throws a ``` ValueError: Unknown string format ``` That means you have invalid (non-coercible) values. If you are okay with having them converted to pd.NaT, you can add an errors='coerce' argument to to_datetime: ``` df['time'] = pd.to_datetime(df['time'], errors='coerce') ```", "best_answers_score":0.8, "library_name":"pandas", "question_url":"https:\/\/stackoverflow.com\/questions\/16852911\/how-to-convert-pandas-series-of-dates-string-into-date-objects", "best_answers_votes":160, "question_length":288, "response_length":921 }, { "question":"Get a frequency count based on multiple dataframe columns I have the following dataframe. Group Size Short Small Short Small Moderate Medium Moderate Small Tall Large I want to count the frequency of how many times the same row appears in the dataframe. 
``` Group Size Time Short Small 2 Moderate Medium 1 Moderate Small 1 Tall Large 1 ```", "response":"You can use groupby's size ```py import pandas as pd # load the sample data data = {'Group': ['Short', 'Short', 'Moderate', 'Moderate', 'Tall'], 'Size': ['Small', 'Small', 'Medium', 'Small', 'Large']} df = pd.DataFrame(data) ``` Option 1: ```py dfg = df.groupby(by=[\"Group\", \"Size\"]).size() # which results in a pandas.core.series.Series Group Size Moderate Medium 1 Small 1 Short Small 2 Tall Large 1 dtype: int64 ``` Option 2: ```py dfg = df.groupby(by=[\"Group\", \"Size\"]).size().reset_index(name=\"Time\") # which results in a pandas.core.frame.DataFrame Group Size Time 0 Moderate Medium 1 1 Moderate Small 1 2 Short Small 2 3 Tall Large 1 ``` Option 3: ```py dfg = df.groupby(by=[\"Group\", \"Size\"], as_index=False).size() # which results in a pandas.core.frame.DataFrame Group Size Time 0 Moderate Medium 1 1 Moderate Small 1 2 Short Small 2 3 Tall Large 1 ```", "best_answers_score":0.8, "library_name":"pandas", "question_url":"https:\/\/stackoverflow.com\/questions\/33271098\/get-a-frequency-count-based-on-multiple-dataframe-columns", "best_answers_votes":212, "question_length":339, "response_length":861 }, { "question":"Sample datasets in Pandas When using R it's handy to load \"practice\" datasets using ``` data(iris) ``` or ``` data(mtcars) ``` Is there something similar for Pandas? I know I can load using any other method, just curious if there's anything builtin.", "response":"Since I originally wrote this answer, I have updated it with the many ways that are now available for accessing sample data sets in Python. Personally, I tend to stick with whatever package I am already using (usually seaborn or pandas). If you need offline access, installing the data set with Quilt seems to be the only option. Seaborn The brilliant plotting package seaborn has several built-in sample data sets. 
``` import seaborn as sns iris = sns.load_dataset('iris') iris.head() ``` ```none sepal_length sepal_width petal_length petal_width species 0 5.1 3.5 1.4 0.2 setosa 1 4.9 3.0 1.4 0.2 setosa 2 4.7 3.2 1.3 0.2 setosa 3 4.6 3.1 1.5 0.2 setosa 4 5.0 3.6 1.4 0.2 setosa ``` Pandas If you do not want to import seaborn, but still want to access its sample data sets, you can use @andrewwowens's approach for the seaborn sample data: ``` iris = pd.read_csv('https:\/\/raw.githubusercontent.com\/mwaskom\/seaborn-data\/master\/iris.csv') ``` Note that the sample data sets containing categorical columns have their column type modified by sns.load_dataset(), so the result might not be the same as when reading the file from the URL directly. The iris and tips sample data sets are also available in the pandas github repo here. R sample datasets Since any dataset can be read via pd.read_csv(), it is possible to access all R's sample data sets by copying the URLs from this R data set repository. Additional ways of loading the R sample data sets include statsmodels ``` import statsmodels.api as sm iris = sm.datasets.get_rdataset('iris').data ``` and PyDataset ``` from pydataset import data iris = data('iris') ``` scikit-learn scikit-learn returns sample data as numpy arrays rather than a pandas data frame. ``` from sklearn.datasets import load_iris iris = load_iris() # `iris.data` holds the numerical values # `iris.feature_names` holds the numerical column names # `iris.target` holds the categorical (species) values (as ints) # `iris.target_names` holds the unique categorical names ``` Quilt Quilt is a dataset manager created to simplify the handling of data sets. It includes many common sample datasets, such as several from the uciml sample repository. 
The quick start page shows how to install and import the iris data set: ``` # In your terminal $ pip install quilt $ quilt install uciml\/iris ``` After installing a dataset, it is accessible locally, so this is the best option if you want to work with the data offline. ``` import quilt.data.uciml.iris as ir iris = ir.tables.iris() ``` ```none sepal_length sepal_width petal_length petal_width class 0 5.1 3.5 1.4 0.2 Iris-setosa 1 4.9 3.0 1.4 0.2 Iris-setosa 2 4.7 3.2 1.3 0.2 Iris-setosa 3 4.6 3.1 1.5 0.2 Iris-setosa 4 5.0 3.6 1.4 0.2 Iris-setosa ``` Quilt also supports dataset versioning and includes a short description of each dataset.", "best_answers_score":0.8, "library_name":"pandas", "question_url":"https:\/\/stackoverflow.com\/questions\/28417293\/sample-datasets-in-pandas", "best_answers_votes":186, "question_length":249, "response_length":2797 }, { "question":"What are the pros and cons between get_dummies (Pandas) and OneHotEncoder (Scikit-learn)? I'm learning different methods to convert categorical variables to numeric for machine-learning classifiers. I came across the pd.get_dummies method and sklearn.preprocessing.OneHotEncoder() and I wanted to see how they differed in terms of performance and usage. I found a tutorial on how to use OneHotEncoder() on https:\/\/xgdgsc.wordpress.com\/2015\/03\/20\/note-on-using-onehotencoder-in-scikit-learn-to-work-on-categorical-features\/ since the sklearn documentation wasn't too helpful on this feature. I have a feeling I'm not doing it correctly... but can someone explain the pros and cons of using pd.get_dummies over sklearn.preprocessing.OneHotEncoder() and vice versa? I know that OneHotEncoder() gives you a sparse matrix but other than that I'm not sure how it is used and what the benefits are over the pandas method. Am I using it inefficiently? 
``` import pandas as pd import numpy as np from sklearn.datasets import load_iris sns.set() %matplotlib inline #Iris Plot iris = load_iris() n_samples, m_features = iris.data.shape #Load Data X, y = iris.data, iris.target D_target_dummy = dict(zip(np.arange(iris.target_names.shape[0]), iris.target_names)) DF_data = pd.DataFrame(X,columns=iris.feature_names) DF_data[\"target\"] = pd.Series(y).map(D_target_dummy) #sepal length (cm) sepal width (cm) petal length (cm) petal width (cm) \\ #0 5.1 3.5 1.4 0.2 #1 4.9 3.0 1.4 0.2 #2 4.7 3.2 1.3 0.2 #3 4.6 3.1 1.5 0.2 #4 5.0 3.6 1.4 0.2 #5 5.4 3.9 1.7 0.4 DF_dummies = pd.get_dummies(DF_data[\"target\"]) #setosa versicolor virginica #0 1 0 0 #1 1 0 0 #2 1 0 0 #3 1 0 0 #4 1 0 0 #5 1 0 0 from sklearn.preprocessing import OneHotEncoder, LabelEncoder def f1(DF_data): Enc_ohe, Enc_label = OneHotEncoder(), LabelEncoder() DF_data[\"Dummies\"] = Enc_label.fit_transform(DF_data[\"target\"]) DF_dummies2 = pd.DataFrame(Enc_ohe.fit_transform(DF_data[[\"Dummies\"]]).todense(), columns = Enc_label.classes_) return(DF_dummies2) %timeit pd.get_dummies(DF_data[\"target\"]) #1000 loops, best of 3: 777 \u00b5s per loop %timeit f1(DF_data) #100 loops, best of 3: 2.91 ms per loop ```", "response":"For machine learning, you almost definitely want to use sklearn.OneHotEncoder. For other tasks like simple analyses, you might be able to use pd.get_dummies, which is a bit more convenient. Note that sklearn.OneHotEncoder has been updated in the latest version so that it does accept strings for categorical variables, as well as integers. The crux of it is that the sklearn encoder creates a function which persists and can then be applied to new data sets which use the same categorical variables, with consistent results. ``` from sklearn.preprocessing import OneHotEncoder # Create the encoder. encoder = OneHotEncoder(handle_unknown=\"ignore\") encoder.fit(X_train) # Assume for simplicity all features are categorical. # Apply the encoder. 
X_train = encoder.transform(X_train) X_test = encoder.transform(X_test) ``` Note how we apply the same encoder we created via X_train to the new data set X_test. Consider what happens if X_test contains different levels than X_train for one of its variables. For example, let's say X_train[\"color\"] contains only \"red\" and \"green\", but in addition to those, X_test[\"color\"] sometimes contains \"blue\". If we use pd.get_dummies, X_test will end up with an additional \"color_blue\" column which X_train doesn't have, and the inconsistency will probably break our code later on, especially if we are feeding X_test to an sklearn model which we trained on X_train. And if we want to process the data like this in production, where we're receiving a single example at a time, pd.get_dummies won't be of use. With sklearn.OneHotEncoder on the other hand, once we've created the encoder, we can reuse it to produce the same output every time, with columns only for \"red\" and \"green\". And we can explicitly control what happens when it encounters the new level \"blue\": if we think that's impossible, then we can tell it to throw an error with handle_unknown=\"error\"; otherwise we can tell it to continue and simply set the red and green columns to 0, with handle_unknown=\"ignore\".", "best_answers_score":0.8, "library_name":"pandas", "question_url":"https:\/\/stackoverflow.com\/questions\/36631163\/what-are-the-pros-and-cons-between-get-dummies-pandas-and-onehotencoder-sciki", "best_answers_votes":256, "question_length":2139, "response_length":2014 }, { "question":"Set value of one Pandas column based on value in another column I need to set the value of one column based on the value of another in a Pandas dataframe. This is the logic: ``` if df['c1'] == 'Value': df['c2'] = 10 else: df['c2'] = df['c3'] ``` I am unable to get this to do what I want, which is to simply create a column with new values (or change the value of an existing column: either one works for me). 
If I try to run the code above or if I write it as a function and use the apply method, I get the following: ValueError: The truth value of a Series is ambiguous. Use a.empty, a.bool(), a.item(), a.any() or a.all().", "response":"one way to do this would be to use indexing with .loc. Example In the absence of an example dataframe, I'll make one up here: ``` import numpy as np import pandas as pd df = pd.DataFrame({'c1': list('abcdefg')}) df.loc[5, 'c1'] = 'Value' >>> df c1 0 a 1 b 2 c 3 d 4 e 5 Value 6 g ``` Assuming you wanted to create a new column c2, equivalent to c1 except where c1 is Value, in which case, you would like to assign it to 10: First, you could create a new column c2, and set it to equivalent as c1, using one of the following two lines (they essentially do the same thing): ``` df = df.assign(c2 = df['c1']) # OR: df['c2'] = df['c1'] ``` Then, find all the indices where c1 is equal to 'Value' using .loc, and assign your desired value in c2 at those indices: ``` df.loc[df['c1'] == 'Value', 'c2'] = 10 ``` And you end up with this: ``` >>> df c1 c2 0 a a 1 b b 2 c c 3 d d 4 e e 5 Value 10 6 g g ``` If, as you suggested in your question, you would perhaps sometimes just want to replace the values in the column you already have, rather than create a new column, then just skip the column creation, and do the following: ``` df['c1'].loc[df['c1'] == 'Value'] = 10 # or: df.loc[df['c1'] == 'Value', 'c1'] = 10 ``` Giving you: ``` >>> df c1 0 a 1 b 2 c 3 d 4 e 5 10 6 g ```", "best_answers_score":0.8, "library_name":"pandas", "question_url":"https:\/\/stackoverflow.com\/questions\/49161120\/set-value-of-one-pandas-column-based-on-value-in-another-column", "best_answers_votes":220, "question_length":625, "response_length":1271 }, { "question":"Appending pandas dataframes generated in a for loop I am accessing a series of Excel files in a for loop. I then read the data in the excel file to a pandas dataframe. 
I can't figure out how to append these dataframes together to then save the dataframe (now containing the data from all the files) as a new Excel file. Here's what I tried: ``` for infile in glob.glob(\"*.xlsx\"): data = pandas.read_excel(infile) appended_data = pandas.DataFrame.append(data) # requires at least two arguments appended_data.to_excel(\"appended.xlsx\") ``` Thanks!", "response":"Use pd.concat to merge a list of DataFrame into a single big DataFrame. ``` appended_data = [] for infile in glob.glob(\"*.xlsx\"): data = pandas.read_excel(infile) # store DataFrame in list appended_data.append(data) # see pd.concat documentation for more info appended_data = pd.concat(appended_data) # write DataFrame to an excel sheet appended_data.to_excel('appended.xlsx') ```", "best_answers_score":0.8, "library_name":"pandas", "question_url":"https:\/\/stackoverflow.com\/questions\/28669482\/appending-pandas-dataframes-generated-in-a-for-loop", "best_answers_votes":297, "question_length":543, "response_length":380 }, { "question":"Import pandas dataframe column as string not int I would like to import the following csv as strings not as int64. Pandas read_csv automatically converts it to int64, but I need this column as string. ```none ID 00013007854817840016671868 00013007854817840016749251 00013007854817840016754630 00013007854817840016781876 00013007854817840017028824 00013007854817840017963235 00013007854817840018860166 ``` ``` df = read_csv('sample.csv') df.ID >> 0 -9223372036854775808 1 -9223372036854775808 2 -9223372036854775808 3 -9223372036854775808 4 -9223372036854775808 5 -9223372036854775808 6 -9223372036854775808 Name: ID ``` Unfortunately using converters gives the same result.
``` df = read_csv('sample.csv', converters={'ID': str}) df.ID >> 0 -9223372036854775808 1 -9223372036854775808 2 -9223372036854775808 3 -9223372036854775808 4 -9223372036854775808 5 -9223372036854775808 6 -9223372036854775808 Name: ID ```", "response":"Just want to reiterate this will work in pandas >= 0.9.1: ``` In [2]: read_csv('sample.csv', dtype={'ID': object}) Out[2]: ID 0 00013007854817840016671868 1 00013007854817840016749251 2 00013007854817840016754630 3 00013007854817840016781876 4 00013007854817840017028824 5 00013007854817840017963235 6 00013007854817840018860166 ``` I'm creating an issue about detecting integer overflows also. EDIT: See resolution here: https:\/\/github.com\/pydata\/pandas\/issues\/2247 Update as it helps others: To have all columns as str, one can do this (from the comment): ``` pd.read_csv('sample.csv', dtype = str) ``` To have most or selective columns as str, one can do this: ``` # lst of column names which needs to be string lst_str_cols = ['prefix', 'serial'] # use dictionary comprehension to make dict of dtypes dict_dtypes = {x : 'str' for x in lst_str_cols} # use dict on dtypes pd.read_csv('sample.csv', dtype=dict_dtypes) ```", "best_answers_score":0.8, "library_name":"pandas", "question_url":"https:\/\/stackoverflow.com\/questions\/13293810\/import-pandas-dataframe-column-as-string-not-int", "best_answers_votes":234, "question_length":912, "response_length":922 }, { "question":"Pandas Left Outer Join results in table larger than left table From what I understand about a left outer join, the resulting table should never have more rows than the left table...Please let me know if this is wrong... My left table is 192572 rows and 8 columns. My right table is 42160 rows and 5 columns. My Left table has a field called 'id' which matches with a column in my right table called 'key'. Therefore I merge them as such: ``` combined = pd.merge(a,b,how='left',left_on='id',right_on='key') ``` But then the combined shape is 236569. 
What am I misunderstanding?", "response":"You can expect this to increase if keys match more than one row in the other DataFrame: ``` In [11]: df = pd.DataFrame([[1, 3], [2, 4]], columns=['A', 'B']) In [12]: df2 = pd.DataFrame([[1, 5], [1, 6]], columns=['A', 'C']) In [13]: df.merge(df2, how='left') # merges on columns A Out[13]: A B C 0 1 3 5 1 1 3 6 2 2 4 NaN ``` To avoid this behaviour drop the duplicates in df2: ``` In [21]: df2.drop_duplicates(subset=['A']) # you can use take_last=True Out[21]: A C 0 1 5 In [22]: df.merge(df2.drop_duplicates(subset=['A']), how='left') Out[22]: A B C 0 1 3 5 1 2 4 NaN ```", "best_answers_score":0.8, "library_name":"pandas", "question_url":"https:\/\/stackoverflow.com\/questions\/22720739\/pandas-left-outer-join-results-in-table-larger-than-left-table", "best_answers_votes":211, "question_length":576, "response_length":573 }, { "question":"Efficiently checking if arbitrary object is NaN in Python \/ numpy \/ pandas? My numpy arrays use np.nan to designate missing values. As I iterate over the data set, I need to detect such missing values and handle them in special ways. Naively I used numpy.isnan(val), which works well unless val isn't among the subset of types supported by numpy.isnan(). For example, missing data can occur in string fields, in which case I get: ``` >>> np.isnan('some_string') Traceback (most recent call last): File \"\", line 1, in TypeError: Not implemented for this type ``` Other than writing an expensive wrapper that catches the exception and returns False, is there a way to handle this elegantly and efficiently?", "response":"pandas.isnull() (also pd.isna(), in newer versions) checks for missing values in both numeric and string\/object arrays. 
From the documentation, it checks for: NaN in numeric arrays, None\/NaN in object arrays Quick example: ``` import pandas as pd import numpy as np s = pd.Series(['apple', np.nan, 'banana']) pd.isnull(s) Out[9]: 0 False 1 True 2 False dtype: bool ``` The idea of using numpy.nan to represent missing values is something that pandas introduced, which is why pandas has the tools to deal with it. Datetimes too (if you use pd.NaT you won't need to specify the dtype) ``` In [24]: s = Series([Timestamp('20130101'),np.nan,Timestamp('20130102 9:30')],dtype='M8[ns]') In [25]: s Out[25]: 0 2013-01-01 00:00:00 1 NaT 2 2013-01-02 09:30:00 dtype: datetime64[ns] In [26]: pd.isnull(s) Out[26]: 0 False 1 True 2 False dtype: bool ```", "best_answers_score":0.8, "library_name":"pandas", "question_url":"https:\/\/stackoverflow.com\/questions\/18689512\/efficiently-checking-if-arbitrary-object-is-nan-in-python-numpy-pandas", "best_answers_votes":231, "question_length":705, "response_length":844 }, { "question":"Count unique values using pandas groupby [duplicate] This question already has answers here: Pandas 'count(distinct)' equivalent (12 answers) Closed 2 years ago. I have data of the following form: ``` df = pd.DataFrame({ 'group': [1, 1, 2, 3, 3, 3, 4], 'param': ['a', 'a', 'b', np.nan, 'a', 'a', np.nan] }) print(df) # group param # 0 1 a # 1 1 a # 2 2 b # 3 3 NaN # 4 3 a # 5 3 a # 6 4 NaN ``` Non-null values within groups are always the same. I want to count the non-null value for each group (where it exists) once, and then find the total counts for each value. I'm currently doing this in the following (clunky and inefficient) way: ``` param = [] for _, group in df[df.param.notnull()].groupby('group'): param.append(group.param.unique()[0]) print(pd.DataFrame({'param': param}).param.value_counts()) # a 2 # b 1 ``` I'm sure there's a way to do this more cleanly and without using a loop, but I just can't seem to work it out.
Any help would be much appreciated.", "response":"I think you can use SeriesGroupBy.nunique: ``` print (df.groupby('param')['group'].nunique()) param a 2 b 1 Name: group, dtype: int64 ``` Another solution with unique, then create new df by DataFrame.from_records, reshape to Series by stack and last value_counts: ``` a = df[df.param.notnull()].groupby('group')['param'].unique() print (pd.DataFrame.from_records(a.values.tolist()).stack().value_counts()) a 2 b 1 dtype: int64 ```", "best_answers_score":0.8, "library_name":"pandas", "question_url":"https:\/\/stackoverflow.com\/questions\/41415017\/count-unique-values-using-pandas-groupby", "best_answers_votes":242, "question_length":970, "response_length":430 }, { "question":"Skip rows during csv import pandas I'm trying to import a .csv file using pandas.read_csv(), however, I don't want to import the 2nd row of the data file (the row with index = 1 for 0-indexing). I can't see how not to import it because the arguments used with the command seem ambiguous: From the pandas website: skiprows : list-like or integer Row numbers to skip (0-indexed) or number of rows to skip (int) at the start of the file.\" If I put skiprows=1 in the arguments, how does it know whether to skip the first row or skip the row with index 1?", "response":"You can try yourself: ``` >>> import pandas as pd >>> from io import StringIO >>> s = \"\"\"1, 2 ... 3, 4 ... 5, 6\"\"\" >>> pd.read_csv(StringIO(s), skiprows=[1], header=None) 0 1 0 1 2 1 5 6 >>> pd.read_csv(StringIO(s), skiprows=1, header=None) 0 1 0 3 4 1 5 6 ```", "best_answers_score":0.8, "library_name":"pandas", "question_url":"https:\/\/stackoverflow.com\/questions\/20637439\/skip-rows-during-csv-import-pandas", "best_answers_votes":207, "question_length":550, "response_length":260 }, { "question":"How to filter in NaN (pandas)? 
I have a pandas dataframe (df), and I want to do something like: ``` newdf = df[(df.var1 == 'a') & (df.var2 == NaN)] ``` I've tried replacing NaN with np.NaN, or 'NaN' or 'nan' etc, but nothing evaluates to True. There's no pd.NaN. I can use df.fillna(np.nan) before evaluating the above expression but that feels hackish and I wonder if it will interfere with other pandas operations that rely on being able to identify pandas-format NaN's later. I get the feeling there should be an easy answer to this question, but somehow it has eluded me.", "response":"``` filtered_df = df[df['var2'].isna()] ``` This filters and gives you rows which have only NaN values in the 'var2' column. Note: \"Series.isnull is an alias for Series.isna.\"", "best_answers_score":0.8, "library_name":"pandas", "question_url":"https:\/\/stackoverflow.com\/questions\/25050141\/how-to-filter-in-nan-pandas", "best_answers_votes":173, "question_length":575, "response_length":170 }, { "question":"plot different color for different categorical levels I have this data frame diamonds which is composed of variables like (carat, price, color), and I want to draw a scatter plot of price to carat for each color, which means each color class gets a different color in the plot. This is easy in R with ggplot: ``` ggplot(aes(x=carat, y=price, color=color), #by setting color=color, ggplot automatically draws in different colors data=diamonds) + geom_point(stat='summary', fun.y=median) ``` I wonder how this could be done in Python using matplotlib?
PS: I know about auxiliary plotting packages, such as seaborn and ggplot for python, and I don't prefer them, just want to find out if it is possible to do the job using matplotlib alone, ;P", "response":"Imports and Sample DataFrame ```py import matplotlib.pyplot as plt import pandas as pd import seaborn as sns # for sample data from matplotlib.lines import Line2D # for legend handle # DataFrame used for all options df = sns.load_dataset('diamonds') carat cut color clarity depth table price x y z 0 0.23 Ideal E SI2 61.5 55.0 326 3.95 3.98 2.43 1 0.21 Premium E SI1 59.8 61.0 326 3.89 3.84 2.31 2 0.23 Good E VS1 56.9 65.0 327 4.05 4.07 2.31 ``` With matplotlib You can pass plt.scatter a c argument, which allows you to select the colors. The following code defines a colors dictionary to map the diamond colors to the plotting colors. ```py fig, ax = plt.subplots(figsize=(6, 6)) colors = {'D':'tab:blue', 'E':'tab:orange', 'F':'tab:green', 'G':'tab:red', 'H':'tab:purple', 'I':'tab:brown', 'J':'tab:pink'} ax.scatter(df['carat'], df['price'], c=df['color'].map(colors)) # add a legend handles = [Line2D([0], [0], marker='o', color='w', markerfacecolor=v, label=k, markersize=8) for k, v in colors.items()] ax.legend(title='color', handles=handles, bbox_to_anchor=(1.05, 1), loc='upper left') plt.show() ``` df['color'].map(colors) effectively maps the colors from \"diamond\" to \"plotting\". (Forgive me for not putting another example image up, I think 2 is enough :P) With seaborn You can use seaborn which is a wrapper around matplotlib that makes it look prettier by default (rather opinion-based, I know :P) but also adds some plotting functions. For this you could use seaborn.lmplot with fit_reg=False (which prevents it from automatically doing some regression). sns.scatterplot(x='carat', y='price', data=df, hue='color', ec=None) also does the same thing. Selecting hue='color' tells seaborn to split and plot the data based on the unique values in the 'color' column. 
```py sns.lmplot(x='carat', y='price', data=df, hue='color', fit_reg=False) ``` With pandas.DataFrame.groupby & pandas.DataFrame.plot If you don't want to use seaborn, use pandas.groupby to get the colors alone, and then plot them using just matplotlib, but you'll have to manually assign colors as you go, I've added an example below: ```py fig, ax = plt.subplots(figsize=(6, 6)) grouped = df.groupby('color') for key, group in grouped: group.plot(ax=ax, kind='scatter', x='carat', y='price', label=key, color=colors[key]) plt.show() ``` This code assumes the same DataFrame as above, and then groups it based on color. It then iterates over these groups, plotting for each one. To select a color, I've created a colors dictionary, which can map the diamond color (for instance D) to a real color (for instance tab:blue).", "best_answers_score":0.8, "library_name":"pandas", "question_url":"https:\/\/stackoverflow.com\/questions\/26139423\/plot-different-color-for-different-categorical-levels", "best_answers_votes":227, "question_length":736, "response_length":2602 }, { "question":"Display rows with one or more NaN values in pandas dataframe I have a dataframe in which some rows contain missing values. ``` In [31]: df.head() Out[31]: alpha1 alpha2 gamma1 gamma2 chi2min filename M66_MI_NSRh35d32kpoints.dat 0.8016 0.9283 1.000000 0.074804 3.985599e+01 F71_sMI_DMRI51d.dat 0.0000 0.0000 NaN 0.000000 1.000000e+25 F62_sMI_St22d7.dat 1.7210 3.8330 0.237480 0.150000 1.091832e+01 F41_Car_HOC498d.dat 1.1670 2.8090 0.364190 0.300000 7.966335e+00 F78_MI_547d.dat 1.8970 5.4590 0.095319 0.100000 2.593468e+01 ``` I want to display those rows on the screen. If I try df.isnull(), it gives a long dataframe with True and False. 
Is there any way by which I can select these rows and print them on the screen?", "response":"You can use DataFrame.any with parameter axis=1 for check at least one True in row by DataFrame.isna with boolean indexing: ``` df1 = df[df.isna().any(axis=1)] ``` ``` d = {'filename': ['M66_MI_NSRh35d32kpoints.dat', 'F71_sMI_DMRI51d.dat', 'F62_sMI_St22d7.dat', 'F41_Car_HOC498d.dat', 'F78_MI_547d.dat'], 'alpha1': [0.8016, 0.0, 1.721, 1.167, 1.897], 'alpha2': [0.9283, 0.0, 3.833, 2.809, 5.459], 'gamma1': [1.0, np.nan, 0.23748000000000002, 0.36419, 0.095319], 'gamma2': [0.074804, 0.0, 0.15, 0.3, np.nan], 'chi2min': [39.855990000000006, 1e+25, 10.91832, 7.966335000000001, 25.93468]} df = pd.DataFrame(d).set_index('filename') ``` ``` print (df) alpha1 alpha2 gamma1 gamma2 chi2min filename M66_MI_NSRh35d32kpoints.dat 0.8016 0.9283 1.000000 0.074804 3.985599e+01 F71_sMI_DMRI51d.dat 0.0000 0.0000 NaN 0.000000 1.000000e+25 F62_sMI_St22d7.dat 1.7210 3.8330 0.237480 0.150000 1.091832e+01 F41_Car_HOC498d.dat 1.1670 2.8090 0.364190 0.300000 7.966335e+00 F78_MI_547d.dat 1.8970 5.4590 0.095319 NaN 2.593468e+01 ``` Explanation: ``` print (df.isna()) alpha1 alpha2 gamma1 gamma2 chi2min filename M66_MI_NSRh35d32kpoints.dat False False False False False F71_sMI_DMRI51d.dat False False True False False F62_sMI_St22d7.dat False False False False False F41_Car_HOC498d.dat False False False False False F78_MI_547d.dat False False False True False print (df.isna().any(axis=1)) filename M66_MI_NSRh35d32kpoints.dat False F71_sMI_DMRI51d.dat True F62_sMI_St22d7.dat False F41_Car_HOC498d.dat False F78_MI_547d.dat True dtype: bool df1 = df[df.isna().any(axis=1)] print (df1) alpha1 alpha2 gamma1 gamma2 chi2min filename F71_sMI_DMRI51d.dat 0.000 0.000 NaN 0.0 1.000000e+25 F78_MI_547d.dat 1.897 5.459 0.095319 NaN 2.593468e+01 ```", "best_answers_score":0.8, "library_name":"pandas", 
"question_url":"https:\/\/stackoverflow.com\/questions\/43424199\/display-rows-with-one-or-more-nan-values-in-pandas-dataframe", "best_answers_votes":278, "question_length":719, "response_length":1728 }, { "question":"Normalize data in pandas Suppose I have a pandas data frame df: I want to calculate the column wise mean of a data frame. This is easy: ``` df.apply(average) ``` then the column wise range max(col) - min(col). This is easy again: ``` df.apply(max) - df.apply(min) ``` Now for each element I want to subtract its column's mean and divide by its column's range. I am not sure how to do that Any help\/pointers are much appreciated.", "response":"``` In [92]: df Out[92]: a b c d A -0.488816 0.863769 4.325608 -4.721202 B -11.937097 2.993993 -12.916784 -1.086236 C -5.569493 4.672679 -2.168464 -9.315900 D 8.892368 0.932785 4.535396 0.598124 In [93]: df_norm = (df - df.mean()) \/ (df.max() - df.min()) In [94]: df_norm Out[94]: a b c d A 0.085789 -0.394348 0.337016 -0.109935 B -0.463830 0.164926 -0.650963 0.256714 C -0.158129 0.605652 -0.035090 -0.573389 D 0.536170 -0.376229 0.349037 0.426611 In [95]: df_norm.mean() Out[95]: a -2.081668e-17 b 4.857226e-17 c 1.734723e-17 d -1.040834e-17 In [96]: df_norm.max() - df_norm.min() Out[96]: a 1 b 1 c 1 d 1 ```", "best_answers_score":0.8, "library_name":"pandas", "question_url":"https:\/\/stackoverflow.com\/questions\/12525722\/normalize-data-in-pandas", "best_answers_votes":234, "question_length":428, "response_length":611 }, { "question":"Can Pandas plot a histogram of dates? I've taken my Series and coerced it to a datetime column of dtype=datetime64[ns] (though only need day resolution...not sure how to change). 
``` import pandas as pd df = pd.read_csv('somefile.csv') column = df['date'] column = pd.to_datetime(column, coerce=True) ``` but plotting doesn't work: ``` ipdb> column.plot(kind='hist') *** TypeError: ufunc add cannot use operands with types dtype('>> 145 146 >>> Name: ID, dtype: int64 ``` How can I get the value 146 here without using index 145?", "response":"Use iloc to access by position (rather than label): ``` In [11]: df = pd.DataFrame([[1, 2], [3, 4]], ['a', 'b'], ['A', 'B']) In [12]: df Out[12]: A B a 1 2 b 3 4 In [13]: df.iloc[0] # first row in a DataFrame Out[13]: A 1 B 2 Name: a, dtype: int64 In [14]: df['A'].iloc[0] # first item in a Series (Column) Out[14]: 1 ```", "best_answers_score":0.8, "library_name":"pandas", "question_url":"https:\/\/stackoverflow.com\/questions\/24273130\/get-first-element-of-series-without-knowing-the-index", "best_answers_votes":221, "question_length":720, "response_length":321 }, { "question":"Select latest in each group in a pandas dataframe How to group values of pandas dataframe and select the latest (by date) from each group? 
For example, given a dataframe sorted by date: ``` id product date 0 220 6647 2014-09-01 1 220 6647 2014-09-03 2 220 6647 2014-10-16 3 826 3380 2014-11-11 4 826 3380 2014-12-09 5 826 3380 2015-05-19 6 901 4555 2014-09-01 7 901 4555 2014-10-05 8 901 4555 2014-11-01 ``` grouping by id or product, and selecting the latest gives: ``` id product date 2 220 6647 2014-10-16 5 826 3380 2015-05-19 8 901 4555 2014-11-01 ```", "response":"You can also use tail with groupby to get the last n values of the group: ``` df.sort_values('date').groupby('id').tail(1) id product date 2 220 6647 2014-10-16 8 901 4555 2014-11-01 5 826 3380 2015-05-19 ```", "best_answers_score":0.8, "library_name":"pandas", "question_url":"https:\/\/stackoverflow.com\/questions\/41525911\/select-latest-in-each-group-in-a-pandas-dataframe", "best_answers_votes":171, "question_length":556, "response_length":208 }, { "question":"How can I filter lines on load in Pandas read_csv function? How can I filter which lines of a CSV to be loaded into memory using pandas? This seems like an option that one should find in read_csv. Am I missing something? Example: we've a CSV with a timestamp column and we'd like to load just the lines that with a timestamp greater than a given constant.", "response":"There isn't an option to filter the rows before the CSV file is loaded into a pandas object. You can either load the file and then filter using df[df['field'] > constant], or if you have a very large file and you are worried about memory running out, then use an iterator and apply the filter as you concatenate chunks of your file e.g.: ``` import pandas as pd iter_csv = pd.read_csv('file.csv', iterator=True, chunksize=1000) df = pd.concat([chunk[chunk['field'] > constant] for chunk in iter_csv]) ``` You can vary the chunksize to suit your available memory. 
See here for more details.", "best_answers_score":0.8, "library_name":"pandas", "question_url":"https:\/\/stackoverflow.com\/questions\/13651117\/how-can-i-filter-lines-on-load-in-pandas-read-csv-function", "best_answers_votes":242, "question_length":355, "response_length":589 }, { "question":"Output data from all columns in a dataframe in pandas [duplicate] This question already has answers here: How do I expand the output display to see more columns of a Pandas DataFrame? (23 answers) Closed 6 years ago. I have a csv file with the name params.csv. I opened up ipython qtconsole and created a pandas dataframe using: ``` import pandas paramdata = pandas.read_csv('params.csv', names=paramnames) ``` where, paramnames is a python list of string objects. Example of paramnames (the length of actual list is 22): ``` paramnames = [\"id\", \"fc\", \"mc\", \"markup\", \"asplevel\", \"aspreview\", \"reviewpd\"] ``` At the ipython prompt if I type paramdata and press enter then I do not get the dataframe with columns and values as shown in examples on Pandas website. Instead, I get information about the dataframe. I get: ``` In[35]: paramdata Out[35]: Int64Index: 59 entries, 0 to 58 Data columns: id 59 non-null values fc 59 non-null values mc 59 non-null values markup 59 non-null values asplevel 59 non-null values aspreview 59 non-null values reviewpd 59 non-null values ``` If I type paramdata['mc'] then I do get the values as expected for the mc column. I have two questions: (1) In the examples on the pandas website (see, for example, the output of df here: http:\/\/pandas.sourceforge.net\/indexing.html#additional-column-access) typing the name of the dataframe gives the actual data. Why am I getting information about the dataframe as shown above instead of the actual data? Do I need to set some output options somewhere? 
(2) How do I output all columns in the dataframe to the screen without having to type their names, i.e., without having to type something like paramdata[['id','fc','mc']]. I am using pandas version 0.8. Thank you.", "response":"Use: ``` pandas.set_option('display.max_columns', 7) ``` This will force Pandas to display the 7 columns you have. Or more generally: ``` pandas.set_option('display.max_columns', None) ``` which will force it to display any number of columns. Explanation: the default for max_columns is 0, which tells Pandas to display the table only if all the columns can be squeezed into the width of your console. Alternatively, you can change the console width (in chars) from the default of 80 using e.g: ``` pandas.set_option('display.width', 200) ```", "best_answers_score":0.8, "library_name":"pandas", "question_url":"https:\/\/stackoverflow.com\/questions\/11361985\/output-data-from-all-columns-in-a-dataframe-in-pandas", "best_answers_votes":339, "question_length":1744, "response_length":542 }, { "question":"Read specific columns with pandas or other python module I have a csv file from this webpage. I want to read some of the columns in the downloaded file (the csv version can be downloaded in the upper right corner). Let's say I want 2 columns: 59 which in the header is star_name 60 which in the header is ra. However, for some reason the authors of the webpage sometimes decide to move the columns around. In the end I want something like this, keeping in mind that values can be missing. ``` data = #read data in a clever way names = data['star_name'] ras = data['ra'] ``` This will prevent my program to malfunction when the columns are changed again in the future, if they keep the name correct. Until now I have tried various ways using the csv module and resently the pandas module. Both without any luck. EDIT (added two lines + the header of my datafile. Sorry, but it's extremely long.) 
``` # name, mass, mass_error_min, mass_error_max, radius, radius_error_min, radius_error_max, orbital_period, orbital_period_err_min, orbital_period_err_max, semi_major_axis, semi_major_axis_error_min, semi_major_axis_error_max, eccentricity, eccentricity_error_min, eccentricity_error_max, angular_distance, inclination, inclination_error_min, inclination_error_max, tzero_tr, tzero_tr_error_min, tzero_tr_error_max, tzero_tr_sec, tzero_tr_sec_error_min, tzero_tr_sec_error_max, lambda_angle, lambda_angle_error_min, lambda_angle_error_max, impact_parameter, impact_parameter_error_min, impact_parameter_error_max, tzero_vr, tzero_vr_error_min, tzero_vr_error_max, K, K_error_min, K_error_max, temp_calculated, temp_measured, hot_point_lon, albedo, albedo_error_min, albedo_error_max, log_g, publication_status, discovered, updated, omega, omega_error_min, omega_error_max, tperi, tperi_error_min, tperi_error_max, detection_type, mass_detection_type, radius_detection_type, alternate_names, molecules, star_name, ra, dec, mag_v, mag_i, mag_j, mag_h, mag_k, star_distance, star_metallicity, star_mass, star_radius, star_sp_type, star_age, star_teff, star_detected_disc, star_magnetic_field 11 Com b,19.4,1.5,1.5,,,,326.03,0.32,0.32,1.29,0.05,0.05,0.231,0.005,0.005,0.011664,,,,,,,,,,,,,,,,,,,,,,,,,,,,,1,2008,2011-12-23,94.8,1.5,1.5,2452899.6,1.6,1.6,Radial Velocity,,,,,11 Com,185.1791667,17.7927778,4.74,,,,,110.6,-0.35,2.7,19.0,G8 III,,4742.0,, 11 UMi b,10.5,2.47,2.47,,,,516.22,3.25,3.25,1.54,0.07,0.07,0.08,0.03,0.03,0.012887,,,,,,,,,,,,,,,,,,,,,,,,,,,,,1,2009,2009-08-13,117.63,21.06,21.06,2452861.05,2.06,2.06,Radial Velocity,,,,,11 UMi,229.275,71.8238889,5.02,,,,,119.5,0.04,1.8,24.08,K4III,1.56,4340.0,, ```", "response":"An easy way to do this is using the pandas library like this. 
``` import pandas as pd fields = ['star_name', 'ra'] df = pd.read_csv('data.csv', skipinitialspace=True, usecols=fields) # See the keys print df.keys() # See content in 'star_name' print df.star_name ``` The problem here was the skipinitialspace which removes the spaces in the header. So ' star_name' becomes 'star_name'", "best_answers_score":0.8, "library_name":"pandas", "question_url":"https:\/\/stackoverflow.com\/questions\/26063231\/read-specific-columns-with-pandas-or-other-python-module", "best_answers_votes":242, "question_length":2612, "response_length":382 }, { "question":"Pandas Groupby Range of Values Is there an easy method in pandas to invoke groupby on a range of values increments? For instance given the example below can I bin and group column B with a 0.155 increment so that for example, the first couple of groups in column B are divided into ranges between '0 - 0.155, 0.155 - 0.31 ...` ``` import numpy as np import pandas as pd df=pd.DataFrame({'A':np.random.random(20),'B':np.random.random(20)}) A B 0 0.383493 0.250785 1 0.572949 0.139555 2 0.652391 0.401983 3 0.214145 0.696935 4 0.848551 0.516692 ``` Alternatively I could first categorize the data by those increments into a new column and subsequently use groupby to determine any relevant statistics that may be applicable in column A?", "response":"You might be interested in pd.cut: ``` >>> df.groupby(pd.cut(df[\"B\"], np.arange(0, 1.0+0.155, 0.155))).sum() A B B (0, 0.155] 2.775458 0.246394 (0.155, 0.31] 1.123989 0.471618 (0.31, 0.465] 2.051814 1.882763 (0.465, 0.62] 2.277960 1.528492 (0.62, 0.775] 1.577419 2.810723 (0.775, 0.93] 0.535100 1.694955 (0.93, 1.085] NaN NaN [7 rows x 2 columns] ```", "best_answers_score":0.8, "library_name":"pandas", "question_url":"https:\/\/stackoverflow.com\/questions\/21441259\/pandas-groupby-range-of-values", "best_answers_votes":189, "question_length":734, "response_length":350 }, { "question":"Find percentile stats of a given column I have a pandas data frame
my_df, where I can find the mean(), median(), mode() of a given column: ``` my_df['field_A'].mean() my_df['field_A'].median() my_df['field_A'].mode() ``` I am wondering if it is possible to find more detailed statistics such as the 90th percentile?", "response":"You can use the pandas.DataFrame.quantile() function. If you look at the API for quantile(), you will see it takes an argument for how to do interpolation. If you want a quantile that falls between two positions in your data: 'linear', 'lower', 'higher', 'midpoint', or 'nearest'. By default, it performs linear interpolation. These interpolation methods are discussed in the Wikipedia article for percentile ```py import pandas as pd import numpy as np # sample data np.random.seed(2023) # for reproducibility data = {'Category': np.random.choice(['hot', 'cold'], size=(10,)), 'field_A': np.random.randint(0, 100, size=(10,)), 'field_B': np.random.randint(0, 100, size=(10,))} df = pd.DataFrame(data) df.field_A.mean() # Same as df['field_A'].mean() # 51.1 df.field_A.median() # 50.0 # You can call `quantile(i)` to get the i'th quantile, # where `i` should be a fractional number. df.field_A.quantile(0.1) # 10th percentile # 15.6 df.field_A.quantile(0.5) # same as median # 50.0 df.field_A.quantile(0.9) # 90th percentile # 88.8 df.groupby('Category').field_A.quantile(0.1) #Category #cold 28.8 #hot 8.6 #Name: field_A, dtype: float64 ``` df ```none Category field_A field_B 0 cold 96 58 1 cold 22 28 2 hot 17 81 3 cold 53 71 4 cold 47 63 5 hot 77 48 6 cold 39 32 7 hot 69 29 8 hot 88 49 9 hot 3 49 ```", "best_answers_score":0.8, "library_name":"pandas", "question_url":"https:\/\/stackoverflow.com\/questions\/39581893\/find-percentile-stats-of-a-given-column", "best_answers_votes":183, "question_length":312, "response_length":1305 }, { "question":"Remove Unnamed columns in pandas dataframe [duplicate] This question already has answers here: How to get rid of \"Unnamed: 0\" column in a pandas DataFrame read in from CSV file?
(13 answers) Closed 6 years ago. I have a data file from columns A-G like below but when I am reading it with pd.read_csv('data.csv') it prints an extra unnamed column at the end for no reason. ``` colA ColB colC colD colE colF colG Unnamed: 7 44 45 26 26 40 26 46 NaN 47 16 38 47 48 22 37 NaN 19 28 36 18 40 18 46 NaN 50 14 12 33 12 44 23 NaN 39 47 16 42 33 48 38 NaN ``` I have seen my data file various times but I have no extra data in any other column. How I should remove this extra column while reading ? Thanks", "response":"``` df = df.loc[:, ~df.columns.str.contains('^Unnamed')] In [162]: df Out[162]: colA ColB colC colD colE colF colG 0 44 45 26 26 40 26 46 1 47 16 38 47 48 22 37 2 19 28 36 18 40 18 46 3 50 14 12 33 12 44 23 4 39 47 16 42 33 48 38 ``` NOTE: very often there is only one unnamed column Unnamed: 0, which is the first column in the CSV file. This is the result of the following steps: a DataFrame is saved into a CSV file using parameter index=True, which is the default behaviour we read this CSV file into a DataFrame using pd.read_csv() without explicitly specifying index_col=0 (default: index_col=None) The easiest way to get rid of this column is to specify the parameter pd.read_csv(..., index_col=0): ``` df = pd.read_csv('data.csv', index_col=0) ```", "best_answers_score":0.8, "library_name":"pandas", "question_url":"https:\/\/stackoverflow.com\/questions\/43983622\/remove-unnamed-columns-in-pandas-dataframe", "best_answers_votes":343, "question_length":696, "response_length":755 }, { "question":"Shift column in pandas dataframe up by one? I've got a pandas dataframe. I want to 'lag' one of my columns. Meaning, for example, shifting the entire column 'gdp' up by one, and then removing all the excess data at the bottom of the remaining rows so that all columns are of equal length again. 
```none df = y gdp cap 0 1 2 5 1 2 3 9 2 8 7 2 3 3 4 7 4 6 7 7 df_lag = y gdp cap 0 1 3 5 1 2 7 9 2 8 4 2 3 3 7 7 ``` Anyway to do this?", "response":"``` In [44]: df['gdp'] = df['gdp'].shift(-1) In [45]: df Out[45]: y gdp cap 0 1 3 5 1 2 7 9 2 8 4 2 3 3 7 7 4 6 NaN 7 In [46]: df[:-1] Out[46]: y gdp cap 0 1 3 5 1 2 7 9 2 8 4 2 3 3 7 7 ```", "best_answers_score":0.8, "library_name":"pandas", "question_url":"https:\/\/stackoverflow.com\/questions\/20095673\/shift-column-in-pandas-dataframe-up-by-one", "best_answers_votes":233, "question_length":431, "response_length":189 }, { "question":"What is the difference between pandas.qcut and pandas.cut? The documentation says: http:\/\/pandas.pydata.org\/pandas-docs\/dev\/basics.html \"Continuous values can be discretized using the cut (bins based on values) and qcut (bins based on sample quantiles) functions\" Sounds very abstract to me... I can see the differences in the example below but what does qcut (sample quantile) actually do\/mean? When would you use qcut versus cut? Thanks. ``` factors = np.random.randn(30) In [11]: pd.cut(factors, 5) Out[11]: [(-0.411, 0.575], (-0.411, 0.575], (-0.411, 0.575], (-0.411, 0.575], (0.575, 1.561], ..., (-0.411, 0.575], (-1.397, -0.411], (0.575, 1.561], (-2.388, -1.397], (-0.411, 0.575]] Length: 30 Categories (5, object): [(-2.388, -1.397] < (-1.397, -0.411] < (-0.411, 0.575] < (0.575, 1.561] < (1.561, 2.547]] In [14]: pd.qcut(factors, 5) Out[14]: [(-0.348, 0.0899], (-0.348, 0.0899], (0.0899, 1.19], (0.0899, 1.19], (0.0899, 1.19], ..., (0.0899, 1.19], (-1.137, -0.348], (1.19, 2.547], [-2.383, -1.137], (-0.348, 0.0899]] Length: 30 Categories (5, object): [[-2.383, -1.137] < (-1.137, -0.348] < (-0.348, 0.0899] < (0.0899, 1.19] < (1.19, 2.547]]` ```", "response":"To begin, note that quantiles is just the most general term for things like percentiles, quartiles, and medians. You specified five bins in your example, so you are asking qcut for quintiles. 
So, when you ask for quintiles with qcut, the bins will be chosen so that you have the same number of records in each bin. You have 30 records, so should have 6 in each bin (your output should look like this, although the breakpoints will differ due to the random draw): ``` pd.qcut(factors, 5).value_counts() [-2.578, -0.829] 6 (-0.829, -0.36] 6 (-0.36, 0.366] 6 (0.366, 0.868] 6 (0.868, 2.617] 6 ``` Conversely, for cut you will see something more uneven: ``` pd.cut(factors, 5).value_counts() (-2.583, -1.539] 5 (-1.539, -0.5] 5 (-0.5, 0.539] 9 (0.539, 1.578] 9 (1.578, 2.617] 2 ``` That's because cut will choose the bins to be evenly spaced according to the values themselves and not the frequency of those values. Hence, because you drew from a random normal, you'll see higher frequencies in the inner bins and fewer in the outer. This is essentially going to be a tabular form of a histogram (which you would expect to be fairly bell shaped with 30 records).", "best_answers_score":0.8, "library_name":"pandas", "question_url":"https:\/\/stackoverflow.com\/questions\/30211923\/what-is-the-difference-between-pandas-qcut-and-pandas-cut", "best_answers_votes":287, "question_length":1154, "response_length":1158 }, { "question":"Can I assign a reset index a name? Normally when a dataframe undergoes a reset_index() the new column is assigned the name index or level_i depending on the level. 
Is it possible to assign the new column a name?", "response":"You can call rename on the returned df from reset_index: ``` In [145]: # create a df df = pd.DataFrame(np.random.randn(5,3)) df Out[145]: 0 1 2 0 -2.845811 -0.182439 -0.526785 1 -0.112547 0.661461 0.558452 2 0.587060 -1.232262 -0.997973 3 -1.009378 -0.062442 0.125875 4 -1.129376 3.282447 -0.403731 ``` Set the index name ``` In [146]: df.index = df.index.set_names(['foo']) df Out[146]: 0 1 2 foo 0 -2.845811 -0.182439 -0.526785 1 -0.112547 0.661461 0.558452 2 0.587060 -1.232262 -0.997973 3 -1.009378 -0.062442 0.125875 4 -1.129376 3.282447 -0.403731 ``` call reset_index and chain with rename: ``` In [147]: df.reset_index().rename(columns={df.index.name:'bar'}) Out[147]: bar 0 1 2 0 0 -2.845811 -0.182439 -0.526785 1 1 -0.112547 0.661461 0.558452 2 2 0.587060 -1.232262 -0.997973 3 3 -1.009378 -0.062442 0.125875 4 4 -1.129376 3.282447 -0.403731 ``` Thanks to @ayhan alternatively you can use rename_axis to rename the index prior to reset_index: ``` In [149]: df.rename_axis('bar').reset_index() Out[149]: bar 0 1 2 0 0 -2.845811 -0.182439 -0.526785 1 1 -0.112547 0.661461 0.558452 2 2 0.587060 -1.232262 -0.997973 3 3 -1.009378 -0.062442 0.125875 4 4 -1.129376 3.282447 -0.403731 ``` or just overwrite the index name directly first: ``` df.index.name = 'bar' ``` and then call reset_index", "best_answers_score":0.8, "library_name":"pandas", "question_url":"https:\/\/stackoverflow.com\/questions\/40914200\/can-i-assign-a-reset-index-a-name", "best_answers_votes":169, "question_length":211, "response_length":1295 }, { "question":"Way to read first few lines for pandas dataframe Is there a built-in way to use read_csv to read only the first n lines of a file without knowing the length of the lines ahead of time? I have a large file that takes a long time to read, and occasionally only want to use the first, say, 20 lines to get a sample of it (and prefer not to load the full thing and take the head of it). 
If I knew the total number of lines I could do something like footer_lines = total_lines - n and pass this to the skipfooter keyword arg. My current solution is to manually grab the first n lines with python and StringIO it to pandas: ``` import pandas as pd from StringIO import StringIO n = 20 with open('big_file.csv', 'r') as f: head = ''.join(f.readlines(n)) df = pd.read_csv(StringIO(head)) ``` It's not that bad, but is there a more concise, 'pandasic' (?) way to do it with keywords or something?", "response":"I think you can use the nrows parameter. From the docs: ``` nrows : int, default None Number of rows of file to read. Useful for reading pieces of large files ``` which seems to work. Using one of the standard large test files (988504479 bytes, 5344499 lines): ``` In [1]: import pandas as pd In [2]: time z = pd.read_csv(\"P00000001-ALL.csv\", nrows=20) CPU times: user 0.00 s, sys: 0.00 s, total: 0.00 s Wall time: 0.00 s In [3]: len(z) Out[3]: 20 In [4]: time z = pd.read_csv(\"P00000001-ALL.csv\") CPU times: user 27.63 s, sys: 1.92 s, total: 29.55 s Wall time: 30.23 s ```", "best_answers_score":0.8, "library_name":"pandas", "question_url":"https:\/\/stackoverflow.com\/questions\/15008970\/way-to-read-first-few-lines-for-pandas-dataframe", "best_answers_votes":237, "question_length":887, "response_length":573 }, { "question":"Add Leading Zeros to Strings in Pandas Dataframe I have a pandas data frame where the first 3 columns are strings: ``` ID text1 text 2 0 2345656 blah blah 1 3456 blah blah 2 541304 blah blah 3 201306 hi blah 4 12313201308 hello blah ``` I want to add leading zeros to the ID: ``` ID text1 text 2 0 000000002345656 blah blah 1 000000000003456 blah blah 2 000000000541304 blah blah 3 000000000201306 hi blah 4 000012313201308 hello blah ``` I have tried: ``` df['ID'] = df.ID.zfill(15) df['ID'] = '{0:0>15}'.format(df['ID']) ```", "response":"str attribute contains most of the methods in string. 
``` df['ID'] = df['ID'].str.zfill(15) ``` See more: http:\/\/pandas.pydata.org\/pandas-docs\/stable\/text.html", "best_answers_score":0.8, "library_name":"pandas", "question_url":"https:\/\/stackoverflow.com\/questions\/23836277\/add-leading-zeros-to-strings-in-pandas-dataframe", "best_answers_votes":154, "question_length":526, "response_length":159 }, { "question":"start index at 1 for Pandas DataFrame [duplicate] This question already has answers here: Start row index from 1 instead of zero without creating additional column in pandas (6 answers) Closed 1 year ago. I need the index to start at 1 rather than 0 when writing a Pandas DataFrame to CSV. Here's an example: ``` In [1]: import pandas as pd In [2]: result = pd.DataFrame({'Count': [83, 19, 20]}) In [3]: result.to_csv('result.csv', index_label='Event_id') ``` Which produces the following output: ``` In [4]: !cat result.csv Event_id,Count 0,83 1,19 2,20 ``` But my desired output is this: ``` In [5]: !cat result2.csv Event_id,Count 1,83 2,19 3,20 ``` I realize that this could be done by adding a sequence of integers shifted by 1 as a column to my data frame, but I'm new to Pandas and I'm wondering if a cleaner way exists.", "response":"Index is an object, and default index starts from 0: ``` >>> result.index Int64Index([0, 1, 2], dtype=int64) ``` You can shift this index by 1 with ``` >>> result.index += 1 >>> result.index Int64Index([1, 2, 3], dtype=int64) ```", "best_answers_score":0.8, "library_name":"pandas", "question_url":"https:\/\/stackoverflow.com\/questions\/20167930\/start-index-at-1-for-pandas-dataframe", "best_answers_votes":189, "question_length":827, "response_length":229 }, { "question":"Pandas read_csv dtype read all columns but few as string I'm using Pandas to read a bunch of CSVs. Passing an options json to dtype parameter to tell pandas which columns to read as string instead of the default: ``` dtype_dic= { 'service_id':str, 'end_date':str, ... 
} feedArray = pd.read_csv(feedfile , dtype = dtype_dic) ``` In my scenario, all the columns except a few specific ones are to be read as strings. So instead of defining several columns as str in dtype_dic, I'd like to set just my chosen few as int or float. Is there a way to do that? It's a loop cycling through various CSVs with differing columns, so a direct column conversion after having read the whole csv as string (dtype=str), would not be easy as I would not immediately know which columns that csv is having. (I'd rather spend that effort in defining all the columns in the dtype json!) Edit: But if there's a way to process the list of column names to be converted to number without erroring out if that column isn't present in that csv, then yes that'll be a valid solution, if there's no other way to do this at csv reading stage itself. Note: this sounds like a previously asked question but the answers there went down a very different path (bool related) which doesn't apply to this question. Pls don't mark as duplicate!", "response":"For Pandas 1.5.0+, there's an easy way to do this. If you use a defaultdict instead of a normal dict for the dtype argument, any columns which aren't explicitly listed in the dictionary will use the default as their type. E.g. 
```py from collections import defaultdict types = defaultdict(lambda: str, A=\"int\", B=\"float\") df = pd.read_csv(\"\/path\/to\/file.csv\", dtype=types, keep_default_na=False) ``` (I haven't tested this, but I assume you still need keep_default_na=False) For older versions of Pandas: You can read the entire csv as strings then convert your desired columns to other types afterwards like this: ``` df = pd.read_csv('\/path\/to\/file.csv', dtype=str, keep_default_na=False) # example df; yours will be from pd.read_csv() above df = pd.DataFrame({'A': ['1', '3', '5'], 'B': ['2', '4', '6'], 'C': ['x', 'y', 'z']}) types_dict = {'A': int, 'B': float} for col, col_type in types_dict.items(): df[col] = df[col].astype(col_type) ``` keep_default_na=False is necessary if some of the columns are empty strings or something like NA which pandas convert to NA of type float by default, which would make you end up with a mixed datatype of str\/float Another approach, if you really want to specify the proper types for all columns when reading the file in and not change them after: read in just the column names (no rows), then use those to fill in which columns should be strings ``` col_names = pd.read_csv('file.csv', nrows=0).columns types_dict = {'A': int, 'B': float} types_dict.update({col: str for col in col_names if col not in types_dict}) pd.read_csv('file.csv', dtype=types_dict) ```", "best_answers_score":0.8, "library_name":"pandas", "question_url":"https:\/\/stackoverflow.com\/questions\/49684951\/pandas-read-csv-dtype-read-all-columns-but-few-as-string", "best_answers_votes":182, "question_length":1305, "response_length":1605 }, { "question":"Combine two pandas Data Frames (join on a common column) I have 2 dataframes: restaurant_ids_dataframe ``` Data columns (total 13 columns): business_id 4503 non-null values categories 4503 non-null values city 4503 non-null values full_address 4503 non-null values latitude 4503 non-null values longitude 4503 non-null values name 4503 
non-null values neighborhoods 4503 non-null values open 4503 non-null values review_count 4503 non-null values stars 4503 non-null values state 4503 non-null values type 4503 non-null values dtypes: bool(1), float64(3), int64(1), object(8)` ``` and restaurant_review_frame ``` Int64Index: 158430 entries, 0 to 229905 Data columns (total 8 columns): business_id 158430 non-null values date 158430 non-null values review_id 158430 non-null values stars 158430 non-null values text 158430 non-null values type 158430 non-null values user_id 158430 non-null values votes 158430 non-null values dtypes: int64(1), object(7) ``` I would like to join these two DataFrames to make them into a single dataframe using the DataFrame.join() command in pandas. I have tried the following line of code: ``` #the following line of code creates a left join of restaurant_ids_frame and restaurant_review_frame on the column 'business_id' restaurant_review_frame.join(other=restaurant_ids_dataframe,on='business_id',how='left') ``` But when I try this I get the following error: ``` Exception: columns overlap: Index([business_id, stars, type], dtype=object) ``` I am very new to pandas and have no clue what I am doing wrong as far as executing the join statement is concerned. any help would be much appreciated.", "response":"You can use merge to combine two dataframes into one: ``` import pandas as pd pd.merge(restaurant_ids_dataframe, restaurant_review_frame, on='business_id', how='outer') ``` where on specifies field name that exists in both dataframes to join on, and how defines whether its inner\/outer\/left\/right join, with outer using 'union of keys from both frames (SQL: full outer join).' Since you have 'star' column in both dataframes, this by default will create two columns star_x and star_y in the combined dataframe. As @DanAllan mentioned for the join method, you can modify the suffixes for merge by passing it as a kwarg. Default is suffixes=('_x', '_y'). 
If you wanted to do something like star_restaurant_id and star_restaurant_review, you can do: ``` pd.merge(restaurant_ids_dataframe, restaurant_review_frame, on='business_id', how='outer', suffixes=('_restaurant_id', '_restaurant_review')) ``` The parameters are explained in detail in this link.", "best_answers_score":0.8, "library_name":"pandas", "question_url":"https:\/\/stackoverflow.com\/questions\/18792918\/combine-two-pandas-data-frames-join-on-a-common-column", "best_answers_votes":199, "question_length":1631, "response_length":949 }, { "question":"group by in group by and average I have a dataframe like this: ``` cluster org time 1 a 8 1 a 6 2 h 34 1 c 23 2 d 74 3 w 6 ``` I would like to calculate the average of time per org per cluster. Expected result: ``` cluster mean(time) 1 15 #=((8 + 6) \/ 2 + 23) \/ 2 2 54 #=(74 + 34) \/ 2 3 6 ``` I do not know how to do it in Pandas, can anybody help?", "response":"If you want to first take mean on the combination of ['cluster', 'org'] and then take mean on cluster groups, you can use: ``` In [59]: (df.groupby(['cluster', 'org'], as_index=False).mean() .groupby('cluster')['time'].mean()) Out[59]: cluster 1 15 2 54 3 6 Name: time, dtype: int64 ``` If you want the mean of cluster groups only, then you can use: ``` In [58]: df.groupby(['cluster']).mean() Out[58]: time cluster 1 12.333333 2 54.000000 3 6.000000 ``` You can also use groupby on ['cluster', 'org'] and then use mean(): ``` In [57]: df.groupby(['cluster', 'org']).mean() Out[57]: time cluster org 1 a 7 c 23 2 d 74 h 34 3 w 6 ```", "best_answers_score":0.8, "library_name":"pandas", "question_url":"https:\/\/stackoverflow.com\/questions\/30328646\/group-by-in-group-by-and-average", "best_answers_votes":191, "question_length":348, "response_length":639 }, { "question":"Format certain floating dataframe columns into percentage in pandas I am trying to write a paper in IPython notebook, but encountered some issues with display format. 
Say I have following dataframe df, is there any way to format var1 and var2 into 2 digit decimals and var3 into percentages. ``` var1 var2 var3 id 0 1.458315 1.500092 -0.005709 1 1.576704 1.608445 -0.005122 2 1.629253 1.652577 -0.004754 3 1.669331 1.685456 -0.003525 4 1.705139 1.712096 -0.003134 5 1.740447 1.741961 -0.001223 6 1.775980 1.770801 -0.001723 7 1.812037 1.799327 -0.002013 8 1.853130 1.822982 -0.001396 9 1.943985 1.868401 0.005732 ``` The numbers inside are not multiplied by 100, e.g. -0.0057=-0.57%.", "response":"The accepted answer suggests to modify the raw data for presentation purposes, something you generally do not want. Imagine you need to make further analyses with these columns and you need the precision you lost with rounding. You can modify the formatting of individual columns in data frames, in your case: ``` output = df.to_string(formatters={ 'var1': '{:,.2f}'.format, 'var2': '{:,.2f}'.format, 'var3': '{:,.2%}'.format }) print(output) ``` For your information '{:,.2%}'.format(0.214) yields 21.40%, so no need for multiplying by 100. You don't have a nice HTML table anymore but a text representation. If you need to stay with HTML use the to_html function instead. 
``` from IPython.core.display import display, HTML output = df.to_html(formatters={ 'var1': '{:,.2f}'.format, 'var2': '{:,.2f}'.format, 'var3': '{:,.2%}'.format }) display(HTML(output)) ``` Update As of pandas 0.17.1, life got easier and we can get a beautiful html table right away: ``` df.style.format({ 'var1': '{:,.2f}'.format, 'var2': '{:,.2f}'.format, 'var3': '{:,.2%}'.format, }) ```", "best_answers_score":0.8, "library_name":"pandas", "question_url":"https:\/\/stackoverflow.com\/questions\/23981601\/format-certain-floating-dataframe-columns-into-percentage-in-pandas", "best_answers_votes":211, "question_length":683, "response_length":1064 }, { "question":"Pandas: convert categories to numbers Suppose I have a dataframe with countries that goes as: ``` cc | temp US | 37.0 CA | 12.0 US | 35.0 AU | 20.0 ``` I know that there is a pd.get_dummies function to convert the countries to 'one-hot encodings'. However, I wish to convert them to indices instead such that I will get cc_index = [1,2,1,3] instead. I'm assuming that there is a faster way than using the get_dummies along with a numpy where clause as shown below: [np.where(x) for x in df.cc.get_dummies().values] This is somewhat easier to do in R using 'factors' so I'm hoping pandas has something similar.", "response":"First, change the type of the column: ``` df.cc = pd.Categorical(df.cc) ``` Now the data look similar but are stored categorically. 
To capture the category codes: ``` df['code'] = df.cc.cat.codes ``` Now you have: ``` cc temp code 0 US 37.0 2 1 CA 12.0 1 2 US 35.0 2 3 AU 20.0 0 ``` If you don't want to modify your DataFrame but simply get the codes: ``` df.cc.astype('category').cat.codes ``` Or use the categorical column as an index: ``` df2 = pd.DataFrame(df.temp) df2.index = pd.CategoricalIndex(df.cc) ```", "best_answers_score":0.8, "library_name":"pandas", "question_url":"https:\/\/stackoverflow.com\/questions\/38088652\/pandas-convert-categories-to-numbers", "best_answers_votes":223, "question_length":609, "response_length":504 }, { "question":"Group dataframe and get sum and count I have a dataframe that looks like this: ``` Company Name Organisation Name Amount 10118 Vifor Pharma UK Ltd Welsh Assoc for Gastro & Endo 2700.00 10119 Vifor Pharma UK Ltd Welsh IBD Specialist Group, 169.00 10120 Vifor Pharma UK Ltd West Midlands AHSN 1200.00 10121 Vifor Pharma UK Ltd Whittington Hospital 63.00 10122 Vifor Pharma UK Ltd Ysbyty Gwynedd 75.93 ``` How do I sum the Amount and count the Organisation Name, to get a new dataframe that looks like this? 
``` Company Name Organisation Count Amount 10118 Vifor Pharma UK Ltd 5 11000.00 ``` I know how to sum or count: ``` df.groupby('Company Name').sum() df.groupby('Company Name').count() ``` But not how to do both!", "response":"Try this: ``` In [110]: (df.groupby('Company Name') .....: .agg({'Organisation Name':'count', 'Amount': 'sum'}) .....: .reset_index() .....: .rename(columns={'Organisation Name':'Organisation Count'}) .....: ) Out[110]: Company Name Amount Organisation Count 0 Vifor Pharma UK Ltd 4207.93 5 ``` or if you don't want to reset index: ``` df.groupby('Company Name')['Amount'].agg(['sum','count']) ``` or ``` df.groupby('Company Name').agg({'Amount': ['sum','count']}) ``` Demo: ``` In [98]: df.groupby('Company Name')['Amount'].agg(['sum','count']) Out[98]: sum count Company Name Vifor Pharma UK Ltd 4207.93 5 In [99]: df.groupby('Company Name').agg({'Amount': ['sum','count']}) Out[99]: Amount sum count Company Name Vifor Pharma UK Ltd 4207.93 5 ```", "best_answers_score":0.8, "library_name":"pandas", "question_url":"https:\/\/stackoverflow.com\/questions\/38174155\/group-dataframe-and-get-sum-and-count", "best_answers_votes":225, "question_length":716, "response_length":749 }, { "question":"pandas dataframe select columns in multiindex [duplicate] This question already has answers here: Selecting columns from pandas MultiIndex (13 answers) Closed 6 years ago. I have the following pd.DataFrame: ``` Name 0 1 ... Col A B A B ... 0 0.409511 -0.537108 -0.355529 0.212134 ... 1 -0.332276 -1.087013 0.083684 0.529002 ... 2 1.138159 -0.327212 0.570834 2.337718 ... ``` It has MultiIndex columns with names=['Name', 'Col'] and hierarchical levels. The Name label goes from 0 to n, and for each label, there are two A and B columns. I would like to subselect all the A (or B) columns of this DataFrame.", "response":"There is a get_level_values method that you can use in conjunction with boolean indexing to get the intended result. 
``` In [13]: df = pd.DataFrame(np.random.random((4,4))) df.columns = pd.MultiIndex.from_product([[1,2],['A','B']]) print df 1 2 A B A B 0 0.543980 0.628078 0.756941 0.698824 1 0.633005 0.089604 0.198510 0.783556 2 0.662391 0.541182 0.544060 0.059381 3 0.841242 0.634603 0.815334 0.848120 In [14]: print df.iloc[:, df.columns.get_level_values(1)=='A'] 1 2 A A 0 0.543980 0.756941 1 0.633005 0.198510 2 0.662391 0.544060 3 0.841242 0.815334 ```", "best_answers_score":0.8, "library_name":"pandas", "question_url":"https:\/\/stackoverflow.com\/questions\/25189575\/pandas-dataframe-select-columns-in-multiindex", "best_answers_votes":157, "question_length":606, "response_length":563 }, { "question":"Convert dataframe index to datetime How do I convert a pandas index of strings to datetime format? My dataframe df is like this: ```none value 2015-09-25 00:46 71.925000 2015-09-25 00:47 71.625000 2015-09-25 00:48 71.333333 2015-09-25 00:49 64.571429 2015-09-25 00:50 72.285714 ``` but the index is of type string, but I need it a datetime format because I get the error: ```none 'Index' object has no attribute 'hour' ``` when using ```py df[\"A\"] = df.index.hour ```", "response":"It should work as expected. Try to run the following example. 
``` import pandas as pd import io data = \"\"\"value \"2015-09-25 00:46\" 71.925000 \"2015-09-25 00:47\" 71.625000 \"2015-09-25 00:48\" 71.333333 \"2015-09-25 00:49\" 64.571429 \"2015-09-25 00:50\" 72.285714\"\"\" df = pd.read_table(io.StringIO(data), delim_whitespace=True) # Converting the index as date df.index = pd.to_datetime(df.index) # Extracting hour & minute df['A'] = df.index.hour df['B'] = df.index.minute df # value A B # 2015-09-25 00:46:00 71.925000 0 46 # 2015-09-25 00:47:00 71.625000 0 47 # 2015-09-25 00:48:00 71.333333 0 48 # 2015-09-25 00:49:00 64.571429 0 49 # 2015-09-25 00:50:00 72.285714 0 50 ```", "best_answers_score":0.8, "library_name":"pandas", "question_url":"https:\/\/stackoverflow.com\/questions\/40815238\/convert-dataframe-index-to-datetime", "best_answers_votes":185, "question_length":467, "response_length":668 }, { "question":"Numpy isnan() fails on an array of floats (from pandas dataframe apply) I have an array of floats (some normal numbers, some nans) that is coming out of an apply on a pandas dataframe. For some reason, numpy.isnan is failing on this array, however as shown below, each element is a float, numpy.isnan runs correctly on each element, the type of the variable is definitely a numpy array. What's going on?! 
``` set([type(x) for x in tester]) Out[59]: {float} tester Out[60]: array([-0.7000000000000001, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan], dtype=object) set([type(x) for x in tester]) Out[61]: {float} np.isnan(tester) Traceback (most recent call last): File \"\", line 1, in np.isnan(tester) TypeError: ufunc 'isnan' not supported for the input types, and the inputs could not be safely coerced to any supported types according to the casting rule ''safe'' set([np.isnan(x) for x in tester]) Out[65]: {False, True} type(tester) Out[66]: numpy.ndarray ```", "response":"np.isnan can be applied to NumPy arrays of native dtype (such as np.float64): ``` In [99]: np.isnan(np.array([np.nan, 0], dtype=np.float64)) Out[99]: array([ True, False], dtype=bool) ``` but raises TypeError when applied to object arrays: ``` In [96]: np.isnan(np.array([np.nan, 0], dtype=object)) TypeError: ufunc 'isnan' not supported for the input types, and the inputs could not be safely coerced to any supported types according to the casting rule ''safe'' ``` Since you have Pandas, you could use pd.isnull instead -- it can accept NumPy arrays of object or native dtypes: ``` In [97]: pd.isnull(np.array([np.nan, 0], dtype=float)) Out[97]: array([ True, False], dtype=bool) In [98]: pd.isnull(np.array([np.nan, 0], dtype=object)) Out[98]: array([ True, False], dtype=bool) ``` Note that None is also considered a null value in object arrays.", "best_answers_score":0.8, "library_name":"pandas", "question_url":"https:\/\/stackoverflow.com\/questions\/36000993\/numpy-isnan-fails-on-an-array-of-floats-from-pandas-dataframe-apply", "best_answers_votes":206, "question_length":1166, "response_length":850 }, { "question":"Converting between datetime and Pandas Timestamp objects I have the following: ``` > date1 
Timestamp('2014-01-23 00:00:00', tz=None) > date2 datetime.date(2014, 3, 26) ``` and I read on this answer that I could use pandas.to_datetime() to convert from Timestamps to datetime objects, but it doesn't seem to work: ``` > pd.to_datetime(date1) Timestamp('2014-01-23 00:00:00', tz=None) ``` Why? How can I convert between these two formats?", "response":"You can use the to_pydatetime method to be more explicit: ``` In [11]: ts = pd.Timestamp('2014-01-23 00:00:00', tz=None) In [12]: ts.to_pydatetime() Out[12]: datetime.datetime(2014, 1, 23, 0, 0) ``` It's also available on a DatetimeIndex: ``` In [13]: rng = pd.date_range('1\/10\/2011', periods=3, freq='D') In [14]: rng.to_pydatetime() Out[14]: array([datetime.datetime(2011, 1, 10, 0, 0), datetime.datetime(2011, 1, 11, 0, 0), datetime.datetime(2011, 1, 12, 0, 0)], dtype=object) ```", "best_answers_score":0.8, "library_name":"pandas", "question_url":"https:\/\/stackoverflow.com\/questions\/22825349\/converting-between-datetime-and-pandas-timestamp-objects", "best_answers_votes":169, "question_length":436, "response_length":483 }, { "question":"re.sub erroring with \"Expected string or bytes-like object\" I have read multiple posts regarding this error, but I still can't figure it out. 
When I try to loop through my function: ```py def fix_Plan(location): letters_only = re.sub(\"[^a-zA-Z]\", # Search for all non-letters \" \", # Replace all non-letters with spaces location) # Column and row to search words = letters_only.lower().split() stops = set(stopwords.words(\"english\")) meaningful_words = [w for w in words if not w in stops] return (\" \".join(meaningful_words)) col_Plan = fix_Plan(train[\"Plan\"][0]) num_responses = train[\"Plan\"].size clean_Plan_responses = [] for i in range(0,num_responses): clean_Plan_responses.append(fix_Plan(train[\"Plan\"][i])) ``` Here is the error: ```none Traceback (most recent call last): File \"C:\/Users\/xxxxx\/PycharmProjects\/tronc\/tronc2.py\", line 48, in clean_Plan_responses.append(fix_Plan(train[\"Plan\"][i])) File \"C:\/Users\/xxxxx\/PycharmProjects\/tronc\/tronc2.py\", line 22, in fix_Plan location) # Column and row to search File \"C:\\Users\\xxxxx\\AppData\\Local\\Programs\\Python\\Python36\\lib\\re.py\", line 191, in sub return _compile(pattern, flags).sub(repl, string, count) TypeError: expected string or bytes-like object ```", "response":"As you stated in the comments, some of the values appeared to be floats, not strings. You will need to change it to strings before passing it to re.sub. The simplest way is to change location to str(location) when using re.sub. It wouldn't hurt to do it anyways even if it's already a str. 
``` letters_only = re.sub(\"[^a-zA-Z]\", # Search for all non-letters \" \", # Replace all non-letters with spaces str(location)) ```", "best_answers_score":0.8, "library_name":"pandas", "question_url":"https:\/\/stackoverflow.com\/questions\/43727583\/re-sub-erroring-with-expected-string-or-bytes-like-object", "best_answers_votes":177, "question_length":1213, "response_length":419 }, { "question":"Getting vertical gridlines to appear in line plot in matplotlib I want to get both horizontal and vertical grid lines on my plot but only the horizontal grid lines are appearing by default. I am using a pandas.DataFrame from an sql query in python to generate a line plot with dates on the x-axis. I'm not sure why they do not appear on the dates and I have tried to search for an answer to this but couldn't find one. All I have used to plot the graph is the simple code below. ``` data.plot() grid('on') ``` data is the DataFrame which contains the dates and the data from the sql query. I have also tried adding the code below but I still get the same output with no vertical grid lines. ``` ax = plt.axes() ax.yaxis.grid() # horizontal lines ax.xaxis.grid() # vertical lines ``` Any suggestions?", "response":"You may need to give boolean arg in your calls, e.g. use ax.yaxis.grid(True) instead of ax.yaxis.grid(). Additionally, since you are using both of them you can combine into ax.grid, which works on both, rather than doing it once for each dimension. ``` ax = plt.gca() ax.grid(True) ``` That should sort you out.", "best_answers_score":0.8, "library_name":"pandas", "question_url":"https:\/\/stackoverflow.com\/questions\/16074392\/getting-vertical-gridlines-to-appear-in-line-plot-in-matplotlib", "best_answers_votes":127, "question_length":799, "response_length":311 }, { "question":"How to import data from mongodb to pandas? I have a large amount of data in a collection in mongodb which I need to analyze. How do i import that data to pandas? I am new to pandas and numpy. 
EDIT: The mongodb collection contains sensor values tagged with date and time. The sensor values are of float datatype. Sample Data: ``` { \"_cls\" : \"SensorReport\", \"_id\" : ObjectId(\"515a963b78f6a035d9fa531b\"), \"_types\" : [ \"SensorReport\" ], \"Readings\" : [ { \"a\" : 0.958069536790466, \"_types\" : [ \"Reading\" ], \"ReadingUpdatedDate\" : ISODate(\"2013-04-02T08:26:35.297Z\"), \"b\" : 6.296118156595, \"_cls\" : \"Reading\" }, { \"a\" : 0.95574014778624, \"_types\" : [ \"Reading\" ], \"ReadingUpdatedDate\" : ISODate(\"2013-04-02T08:27:09.963Z\"), \"b\" : 6.29651468650064, \"_cls\" : \"Reading\" }, { \"a\" : 0.953648289182713, \"_types\" : [ \"Reading\" ], \"ReadingUpdatedDate\" : ISODate(\"2013-04-02T08:27:37.545Z\"), \"b\" : 7.29679823731148, \"_cls\" : \"Reading\" }, { \"a\" : 0.955931884300997, \"_types\" : [ \"Reading\" ], \"ReadingUpdatedDate\" : ISODate(\"2013-04-02T08:28:21.369Z\"), \"b\" : 6.29642922525632, \"_cls\" : \"Reading\" }, { \"a\" : 0.95821381, \"_types\" : [ \"Reading\" ], \"ReadingUpdatedDate\" : ISODate(\"2013-04-02T08:41:20.801Z\"), \"b\" : 7.28956613, \"_cls\" : \"Reading\" }, { \"a\" : 4.95821335, \"_types\" : [ \"Reading\" ], \"ReadingUpdatedDate\" : ISODate(\"2013-04-02T08:41:36.931Z\"), \"b\" : 6.28956574, \"_cls\" : \"Reading\" }, { \"a\" : 9.95821341, \"_types\" : [ \"Reading\" ], \"ReadingUpdatedDate\" : ISODate(\"2013-04-02T08:42:09.971Z\"), \"b\" : 0.28956488, \"_cls\" : \"Reading\" }, { \"a\" : 1.95667927, \"_types\" : [ \"Reading\" ], \"ReadingUpdatedDate\" : ISODate(\"2013-04-02T08:43:55.463Z\"), \"b\" : 0.29115237, \"_cls\" : \"Reading\" } ], \"latestReportTime\" : ISODate(\"2013-04-02T08:43:55.463Z\"), \"sensorName\" : \"56847890-0\", \"reportCount\" : 8 } ```", "response":"pymongo might give you a hand, followings is some code I'm using: ``` import pandas as pd from pymongo import MongoClient def _connect_mongo(host, port, username, password, db): \"\"\" A util for making a connection to mongo \"\"\" if username and 
password: mongo_uri = 'mongodb:\/\/%s:%s@%s:%s\/%s' % (username, password, host, port, db) conn = MongoClient(mongo_uri) else: conn = MongoClient(host, port) return conn[db] def read_mongo(db, collection, query={}, host='localhost', port=27017, username=None, password=None, no_id=True): \"\"\" Read from Mongo and Store into DataFrame \"\"\" # Connect to MongoDB db = _connect_mongo(host=host, port=port, username=username, password=password, db=db) # Make a query to the specific DB and Collection cursor = db[collection].find(query) # Expand the cursor and construct the DataFrame df = pd.DataFrame(list(cursor)) # Delete the _id if no_id: del df['_id'] return df ```", "best_answers_score":0.8, "library_name":"pandas", "question_url":"https:\/\/stackoverflow.com\/questions\/16249736\/how-to-import-data-from-mongodb-to-pandas", "best_answers_votes":178, "question_length":1777, "response_length":903 }, { "question":"What are the 'levels', 'keys', and names arguments for in Pandas' concat function? Questions How do I use pd.concat? What is the levels argument for? What is the keys argument for? Are there a bunch of examples to help explain how to use all the arguments? Pandas' concat function is the Swiss Army knife of the merging utilities. The variety of situations in which it is useful are numerous. The existing documentation leaves out a few details on some of the optional arguments. Among them are the levels and keys arguments. I set out to figure out what those arguments do. I'll pose a question that will act as a gateway into many aspects of pd.concat. 
Consider the data frames d1, d2, and d3: ``` import pandas as pd d1 = pd.DataFrame(dict(A=.1, B=.2, C=.3), [2, 3]) d2 = pd.DataFrame(dict(B=.4, C=.5, D=.6), [1, 2]) d3 = pd.DataFrame(dict(A=.7, B=.8, D=.9), [1, 3]) ``` If I were to concatenate these together with ``` pd.concat([d1, d2, d3], keys=['d1', 'd2', 'd3']) ``` I get the expected result with a pandas.MultiIndex for my columns object: ``` A B C D d1 2 0.1 0.2 0.3 NaN 3 0.1 0.2 0.3 NaN d2 1 NaN 0.4 0.5 0.6 2 NaN 0.4 0.5 0.6 d3 1 0.7 0.8 NaN 0.9 3 0.7 0.8 NaN 0.9 ``` However, I wanted to use the levels argument, whose documentation reads: levels: list of sequences, default None. Specific levels (unique values) to use for constructing a MultiIndex. Otherwise, they will be inferred from the keys. So I passed ``` pd.concat([d1, d2, d3], keys=['d1', 'd2', 'd3'], levels=[['d1', 'd2']]) ``` And got an error: ValueError: Key d3 not in level Index(['d1', 'd2'], dtype='object') This made sense. The levels I passed were inadequate to describe the necessary levels indicated by the keys. Had I not passed anything, as I did above, the levels are inferred (as stated in the documentation). But how else can I use this argument to better effect? If I tried this instead: ``` pd.concat([d1, d2, d3], keys=['d1', 'd2', 'd3'], levels=[['d1', 'd2', 'd3']]) ``` I got the same results as above. But when I add one more value to the levels, ``` df = pd.concat([d1, d2, d3], keys=['d1', 'd2', 'd3'], levels=[['d1', 'd2', 'd3', 'd4']]) ``` I end up with the same-looking data frame, but the resulting MultiIndex has an unused level. ``` df.index.levels[0] Index(['d1', 'd2', 'd3', 'd4'], dtype='object') ``` So what is the point of the levels argument and should I be using keys differently? I'm using Python 3.6 and Pandas 0.22.
The specific answer to the point of the levels argument will come towards the end. pandas.concat: The Missing Manual Link To Current Documentation Imports and defining objects ``` import pandas as pd d1 = pd.DataFrame(dict(A=.1, B=.2, C=.3), index=[2, 3]) d2 = pd.DataFrame(dict(B=.4, C=.5, D=.6), index=[1, 2]) d3 = pd.DataFrame(dict(A=.7, B=.8, D=.9), index=[1, 3]) s1 = pd.Series([1, 2], index=[2, 3]) s2 = pd.Series([3, 4], index=[1, 2]) s3 = pd.Series([5, 6], index=[1, 3]) ``` Arguments objs The first argument we come across is objs: objs: a sequence or mapping of Series, DataFrame, or Panel objects If a dict is passed, the sorted keys will be used as the keys argument, unless it is passed, in which case the values will be selected (see below). Any None objects will be dropped silently unless they are all None in which case a ValueError will be raised We typically see this used with a list of Series or DataFrame objects. I'll show that dict can be very useful as well. Generators may also be used and can be useful when using map as in map(f, list_of_df) For now, we'll stick with a list of some of the DataFrame and Series objects defined above. I'll show how dictionaries can be leveraged to give very useful MultiIndex results later. ``` pd.concat([d1, d2]) A B C D 2 0.1 0.2 0.3 NaN 3 0.1 0.2 0.3 NaN 1 NaN 0.4 0.5 0.6 2 NaN 0.4 0.5 0.6 ``` axis The second argument we encounter is axis whose default value is 0: axis: {0\/\u2019index\u2019, 1\/\u2019columns\u2019}, default 0 The axis to concatenate along. Two DataFrames with axis=0 (stacked) For values of 0 or index we mean to say: \"Align along the columns and add to the index\". 
This is what we saw above, since 0 is the default value: the index of d2 extends the index of d1 despite the overlap of the value 2: ``` pd.concat([d1, d2], axis=0) A B C D 2 0.1 0.2 0.3 NaN 3 0.1 0.2 0.3 NaN 1 NaN 0.4 0.5 0.6 2 NaN 0.4 0.5 0.6 ``` Two DataFrames with axis=1 (side by side) For values 1 or columns we mean to say: \"Align along the index and add to the columns\": ``` pd.concat([d1, d2], axis=1) A B C B C D 1 NaN NaN NaN 0.4 0.5 0.6 2 0.1 0.2 0.3 0.4 0.5 0.6 3 0.1 0.2 0.3 NaN NaN NaN ``` We can see that the resulting index is the union of indices and the resulting columns are the extension of columns from d1 by the columns of d2. Two (or Three) Series with axis=0 (stacked) When combining pandas.Series along axis=0, we get back a pandas.Series. The name of the resulting Series will be None unless all Series being combined have the same name. Pay attention to the 'Name: A' when we print out the resulting Series. When it isn't present, we can assume the Series name is None. ``` | | | pd.concat( | pd.concat( | pd.concat( | [s1.rename('A'), pd.concat( | [s1.rename('A'), | [s1.rename('A'), | s2.rename('B'), [s1, s2]) | s2]) | s2.rename('A')]) | s3.rename('A')]) -------------- | --------------------- | ---------------------- | ---------------------- 2 1 | 2 1 | 2 1 | 2 1 3 2 | 3 2 | 3 2 | 3 2 1 3 | 1 3 | 1 3 | 1 3 2 4 | 2 4 | 2 4 | 2 4 dtype: int64 | dtype: int64 | Name: A, dtype: int64 | 1 5 | | | 3 6 | | | dtype: int64 ``` Two (or Three) Series with axis=1 (side by side) When combining pandas.Series along axis=1, it is the name attribute that we refer to in order to infer a column name in the resulting pandas.DataFrame.
``` | | pd.concat( | pd.concat( | [s1.rename('X'), pd.concat( | [s1.rename('X'), | s2.rename('Y'), [s1, s2], axis=1) | s2], axis=1) | s3.rename('Z')], axis=1) ---------------------- | --------------------- | ------------------------------ 0 1 | X 0 | X Y Z 1 NaN 3.0 | 1 NaN 3.0 | 1 NaN 3.0 5.0 2 1.0 4.0 | 2 1.0 4.0 | 2 1.0 4.0 NaN 3 2.0 NaN | 3 2.0 NaN | 3 2.0 NaN 6.0 ``` Mixed Series and DataFrame with axis=0 (stacked) When performing a concatenation of a Series and DataFrame along axis=0, we convert all Series to single column DataFrames. Take special note that this is a concatenation along axis=0; that means extending the index (rows) while aligning the columns. In the examples below, we see the index becomes [2, 3, 2, 3] which is an indiscriminate appending of indices. The columns do not overlap unless I force the naming of the Series column with the argument to to_frame: ``` pd.concat( | [s1.to_frame(), d1]) | pd.concat([s1, d1]) ------------------------- | --------------------- 0 A B C | 0 A B C 2 1.0 NaN NaN NaN | 2 1.0 NaN NaN NaN 3 2.0 NaN NaN NaN | 3 2.0 NaN NaN NaN 2 NaN 0.1 0.2 0.3 | 2 NaN 0.1 0.2 0.3 3 NaN 0.1 0.2 0.3 | 3 NaN 0.1 0.2 0.3 ``` You can see the results of pd.concat([s1, d1]) are the same as if I had performed the to_frame myself. However, I can control the name of the resulting column with a parameter to to_frame. Renaming the Series with the rename method does not control the column name in the resulting DataFrame. ``` # Effectively renames | | # `s1` but does not align | # Does not rename.
So | # Renames to something # with columns in `d1` | # Pandas defaults to `0` | # that does align with `d1` pd.concat( | pd.concat( | pd.concat( [s1.to_frame('X'), d1]) | [s1.rename('X'), d1]) | [s1.to_frame('B'), d1]) ---------------------------- | -------------------------- | ---------------------------- A B C X | 0 A B C | A B C 2 NaN NaN NaN 1.0 | 2 1.0 NaN NaN NaN | 2 NaN 1.0 NaN 3 NaN NaN NaN 2.0 | 3 2.0 NaN NaN NaN | 3 NaN 2.0 NaN 2 0.1 0.2 0.3 NaN | 2 NaN 0.1 0.2 0.3 | 2 0.1 0.2 0.3 3 0.1 0.2 0.3 NaN | 3 NaN 0.1 0.2 0.3 | 3 0.1 0.2 0.3 ``` Mixed Series and DataFrame with axis=1 (side by side) This is fairly intuitive. Series column name defaults to an enumeration of such Series objects when a name attribute is not available. ``` | pd.concat( pd.concat( | [s1.rename('X'), [s1, d1], | s2, s3, d1], axis=1) | axis=1) ------------------- | ------------------------------- 0 A B C | X 0 1 A B C 2 1 0.1 0.2 0.3 | 1 NaN 3.0 5.0 NaN NaN NaN 3 2 0.1 0.2 0.3 | 2 1.0 4.0 NaN 0.1 0.2 0.3 | 3 2.0 NaN 6.0 0.1 0.2 0.3 ``` join The third argument is join that describes whether the resulting merge should be an outer merge (default) or an inner merge. join: {\u2018inner\u2019, \u2018outer\u2019}, default \u2018outer\u2019 How to handle indexes on other axis(es). It turns out, there is no left or right option as pd.concat can handle more than just two objects to merge. In the case of d1 and d2, the options look like: outer ``` pd.concat([d1, d2], axis=1, join='outer') A B C B C D 1 NaN NaN NaN 0.4 0.5 0.6 2 0.1 0.2 0.3 0.4 0.5 0.6 3 0.1 0.2 0.3 NaN NaN NaN ``` inner ``` pd.concat([d1, d2], axis=1, join='inner') A B C B C D 2 0.1 0.2 0.3 0.4 0.5 0.6 ``` join_axes Fourth argument is the thing that allows us to do our left merge and more. join_axes: list of Index objects Specific indexes to use for the other n - 1 axes instead of performing inner\/outer set logic. 
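Note for readers on newer pandas: join_axes was deprecated in pandas 0.25 and removed in 1.0, so the examples below will not run as written on current versions; the equivalent result comes from concatenating and then calling reindex. A minimal sketch (an editor-style addition, not part of the original answer) using the d1, d2, d3 frames defined above:

```python
import pandas as pd

# The same d1, d2, d3 frames defined at the top of this answer
d1 = pd.DataFrame(dict(A=.1, B=.2, C=.3), index=[2, 3])
d2 = pd.DataFrame(dict(B=.4, C=.5, D=.6), index=[1, 2])
d3 = pd.DataFrame(dict(A=.7, B=.8, D=.9), index=[1, 3])

# Outer concat, then align rows to d1's index: equivalent of join_axes=[d1.index]
left = pd.concat([d1, d2, d3], axis=1).reindex(d1.index)

# Align rows to d3's index instead: equivalent of join_axes=[d3.index]
right = pd.concat([d1, d2, d3], axis=1).reindex(d3.index)
```

The reindex call keeps only the rows of the chosen index, filling NaN wherever a frame has no data for those rows, which matches the join_axes output shown in the left- and right-merge examples that follow.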
Left Merge ``` pd.concat([d1, d2, d3], axis=1, join_axes=[d1.index]) A B C B C D A B D 2 0.1 0.2 0.3 0.4 0.5 0.6 NaN NaN NaN 3 0.1 0.2 0.3 NaN NaN NaN 0.7 0.8 0.9 ``` Right Merge ``` pd.concat([d1, d2, d3], axis=1, join_axes=[d3.index]) A B C B C D A B D 1 NaN NaN NaN 0.4 0.5 0.6 0.7 0.8 0.9 3 0.1 0.2 0.3 NaN NaN NaN 0.7 0.8 0.9 ``` ignore_index ignore_index: boolean, default False If True, do not use the index values along the concatenation axis. The resulting axis will be labeled 0, ..., n - 1. This is useful if you are concatenating objects where the concatenation axis does not have meaningful indexing information. Note the index values on the other axes are still respected in the join. Like when I stack d1 on top of d2, if I don't care about the index values, I could reset them or ignore them. ``` | pd.concat( | pd.concat( | [d1, d2], | [d1, d2] pd.concat([d1, d2]) | ignore_index=True) | ).reset_index(drop=True) --------------------- | ----------------------- | ------------------------- A B C D | A B C D | A B C D 2 0.1 0.2 0.3 NaN | 0 0.1 0.2 0.3 NaN | 0 0.1 0.2 0.3 NaN 3 0.1 0.2 0.3 NaN | 1 0.1 0.2 0.3 NaN | 1 0.1 0.2 0.3 NaN 1 NaN 0.4 0.5 0.6 | 2 NaN 0.4 0.5 0.6 | 2 NaN 0.4 0.5 0.6 2 NaN 0.4 0.5 0.6 | 3 NaN 0.4 0.5 0.6 | 3 NaN 0.4 0.5 0.6 ``` And when using axis=1: ``` | pd.concat( | [d1, d2], axis=1, pd.concat([d1, d2], axis=1) | ignore_index=True) ------------------------------- | ------------------------------- A B C B C D | 0 1 2 3 4 5 1 NaN NaN NaN 0.4 0.5 0.6 | 1 NaN NaN NaN 0.4 0.5 0.6 2 0.1 0.2 0.3 0.4 0.5 0.6 | 2 0.1 0.2 0.3 0.4 0.5 0.6 3 0.1 0.2 0.3 NaN NaN NaN | 3 0.1 0.2 0.3 NaN NaN NaN ``` keys We can pass a list of scalar values or tuples in order to assign tuple or scalar values to corresponding MultiIndex. The length of the passed list must be the same length as the number of items being concatenated. keys: sequence, default None If multiple levels passed, should contain tuples. 
Construct hierarchical index using the passed keys as the outermost level axis=0 When concatenating Series objects along axis=0 (extending the index), those keys become a new initial level of a MultiIndex object in the index attribute. ``` # length 3 length 3 # length 2 length 2 # \/--------\\ \/-----------\\ # \/----\\ \/------\\ pd.concat([s1, s2, s3], keys=['A', 'B', 'C']) pd.concat([s1, s2], keys=['A', 'B']) ---------------------------------------------- ------------------------------------- A 2 1 A 2 1 3 2 3 2 B 1 3 B 1 3 2 4 2 4 C 1 5 dtype: int64 3 6 dtype: int64 ``` However, we can use more than scalar values in the keys argument to create an even deeper MultiIndex. Here we pass tuples of length 2 to prepend two new levels to the MultiIndex: ``` pd.concat( [s1, s2, s3], keys=[('A', 'X'), ('A', 'Y'), ('B', 'X')]) ----------------------------------------------- A X 2 1 3 2 Y 1 3 2 4 B X 1 5 3 6 dtype: int64 ``` axis=1 It's a bit different when extending along columns. When we used axis=0 (see above) our keys acted as MultiIndex levels in addition to the existing index. For axis=1, we are referring to an axis that Series objects don't have, namely the columns attribute. Variations of Two Series with axis=1 Notice that naming the s1 and s2 matters so long as no keys are passed, but it gets overridden if keys are passed.
``` | | | pd.concat( | pd.concat( | pd.concat( | [s1.rename('U'), pd.concat( | [s1, s2], | [s1.rename('U'), | s2.rename('V')], [s1, s2], | axis=1, | s2.rename('V')], | axis=1, axis=1) | keys=['X', 'Y']) | axis=1) | keys=['X', 'Y']) -------------- | --------------------- | ---------------------- | ---------------------- 0 1 | X Y | U V | X Y 1 NaN 3.0 | 1 NaN 3.0 | 1 NaN 3.0 | 1 NaN 3.0 2 1.0 4.0 | 2 1.0 4.0 | 2 1.0 4.0 | 2 1.0 4.0 3 2.0 NaN | 3 2.0 NaN | 3 2.0 NaN | 3 2.0 NaN ``` MultiIndex with Series and axis=1 ``` pd.concat( [s1, s2], axis=1, keys=[('W', 'X'), ('W', 'Y')]) ----------------------------------- W X Y 1 NaN 3.0 2 1.0 4.0 3 2.0 NaN ``` Two DataFrames with axis=1 As with the axis=0 examples, keys add levels to a MultiIndex, but this time to the object stored in the columns attribute. ``` pd.concat( | pd.concat( [d1, d2], | [d1, d2], axis=1, | axis=1, keys=['X', 'Y']) | keys=[('First', 'X'), ('Second', 'X')]) ------------------------------- | -------------------------------------------- X Y | First Second A B C B C D | X X 1 NaN NaN NaN 0.4 0.5 0.6 | A B C B C D 2 0.1 0.2 0.3 0.4 0.5 0.6 | 1 NaN NaN NaN 0.4 0.5 0.6 3 0.1 0.2 0.3 NaN NaN NaN | 2 0.1 0.2 0.3 0.4 0.5 0.6 | 3 0.1 0.2 0.3 NaN NaN NaN ``` Series and DataFrame with axis=1 This is tricky. In this case, a scalar key value cannot act as the only level of index for the Series object when it becomes a column while also acting as the first level of a MultiIndex for the DataFrame. So Pandas will again use the name attribute of the Series object as the source of the column name. ``` pd.concat( | pd.concat( [s1, d1], | [s1.rename('Z'), d1], axis=1, | axis=1, keys=['X', 'Y']) | keys=['X', 'Y']) --------------------- | -------------------------- X Y | X Y 0 A B C | Z A B C 2 1 0.1 0.2 0.3 | 2 1 0.1 0.2 0.3 3 2 0.1 0.2 0.3 | 3 2 0.1 0.2 0.3 ``` Limitations of keys and MultiIndex inference.
Pandas only seems to infer column names from Series name, but it will not fill in the blanks when doing an analogous concatenation among data frames with a different number of column levels. ``` d1_ = pd.concat( [d1], axis=1, keys=['One']) d1_ One A B C 2 0.1 0.2 0.3 3 0.1 0.2 0.3 ``` Then concatenate this with another data frame with only one level in the columns object, and Pandas will refuse to try to make tuples of the MultiIndex object; it will instead combine all data frames as if they had a single level of objects, scalars, and tuples. ``` pd.concat([d1_, d2], axis=1) (One, A) (One, B) (One, C) B C D 1 NaN NaN NaN 0.4 0.5 0.6 2 0.1 0.2 0.3 0.4 0.5 0.6 3 0.1 0.2 0.3 NaN NaN NaN ``` Passing a dict instead of a list When passing a dictionary, pandas.concat will use the keys from the dictionary as the keys parameter. ``` # axis=0 | # axis=1 pd.concat( | pd.concat( {0: d1, 1: d2}) | {0: d1, 1: d2}, axis=1) ----------------------- | ------------------------------- A B C D | 0 1 0 2 0.1 0.2 0.3 NaN | A B C B C D 3 0.1 0.2 0.3 NaN | 1 NaN NaN NaN 0.4 0.5 0.6 1 1 NaN 0.4 0.5 0.6 | 2 0.1 0.2 0.3 0.4 0.5 0.6 2 NaN 0.4 0.5 0.6 | 3 0.1 0.2 0.3 NaN NaN NaN ``` levels This is used in conjunction with the keys argument. When levels is left as its default value of None, Pandas will take the unique values of each level of the resulting MultiIndex and use that as the object used in the resulting index.levels attribute. levels: list of sequences, default None Specific levels (unique values) to use for constructing a MultiIndex. Otherwise they will be inferred from the keys. If Pandas already infers what these levels should be, what advantage is there to specify it ourselves? I'll show one example and leave it up to you to think up other reasons why this might be useful. Example Per the documentation, the levels argument is a list of sequences. This means that we can use another pandas.Index as one of those sequences.
Consider the data frame df that is the concatenation of d1, d2 and d3: ``` df = pd.concat( [d1, d2, d3], axis=1, keys=['First', 'Second', 'Fourth']) df First Second Fourth A B C B C D A B D 1 NaN NaN NaN 0.4 0.5 0.6 0.7 0.8 0.9 2 0.1 0.2 0.3 0.4 0.5 0.6 NaN NaN NaN 3 0.1 0.2 0.3 NaN NaN NaN 0.7 0.8 0.9 ``` The levels of the columns object are: ``` print(df, *df.columns.levels, sep='\\n') Index(['First', 'Second', 'Fourth'], dtype='object') Index(['A', 'B', 'C', 'D'], dtype='object') ``` If we use sum within a groupby we get: ``` df.groupby(axis=1, level=0).sum() First Fourth Second 1 0.0 2.4 1.5 2 0.6 0.0 1.5 3 0.6 2.4 0.0 ``` But what if, besides ['First', 'Second', 'Fourth'], there were missing categories named Third and Fifth? And I wanted them included in the results of a groupby aggregation? We can do this if we use a pandas.CategoricalIndex. And we can specify that ahead of time with the levels argument. So instead, let's define df as: ``` cats = ['First', 'Second', 'Third', 'Fourth', 'Fifth'] lvl = pd.CategoricalIndex(cats, categories=cats, ordered=True) df = pd.concat( [d1, d2, d3], axis=1, keys=['First', 'Second', 'Fourth'], levels=[lvl] ) df First Second Fourth A B C B C D A B D 1 NaN NaN NaN 0.4 0.5 0.6 0.7 0.8 0.9 2 0.1 0.2 0.3 0.4 0.5 0.6 NaN NaN NaN 3 0.1 0.2 0.3 NaN NaN NaN 0.7 0.8 0.9 ``` But the first level of the columns object is: ``` df.columns.levels[0] CategoricalIndex( ['First', 'Second', 'Third', 'Fourth', 'Fifth'], categories=['First', 'Second', 'Third', 'Fourth', 'Fifth'], ordered=True, dtype='category') ``` And our groupby summation looks like: ``` df.groupby(axis=1, level=0).sum() First Second Third Fourth Fifth 1 0.0 1.5 0.0 2.4 0.0 2 0.6 1.5 0.0 0.0 0.0 3 0.6 0.0 0.0 2.4 0.0 ``` names This is used to name the levels of a resulting MultiIndex. The length of the names list should match the number of levels in the resulting MultiIndex.
names: list, default None Names for the levels in the resulting hierarchical index ``` # axis=0 | # axis=1 pd.concat( | pd.concat( [d1, d2], | [d1, d2], keys=[0, 1], | axis=1, keys=[0, 1], names=['lvl0', 'lvl1']) | names=['lvl0', 'lvl1']) ----------------------------- | ---------------------------------- A B C D | lvl0 0 1 lvl0 lvl1 | lvl1 A B C B C D 0 2 0.1 0.2 0.3 NaN | 1 NaN NaN NaN 0.4 0.5 0.6 3 0.1 0.2 0.3 NaN | 2 0.1 0.2 0.3 0.4 0.5 0.6 1 1 NaN 0.4 0.5 0.6 | 3 0.1 0.2 0.3 NaN NaN NaN 2 NaN 0.4 0.5 0.6 | ``` verify_integrity Self explanatory documentation verify_integrity: boolean, default False Check whether the new concatenated axis contains duplicates. This can be very expensive relative to the actual data concatenation. Because the resulting index from concatenating d1 and d2 is not unique, it would fail the integrity check. ``` pd.concat([d1, d2]) A B C D 2 0.1 0.2 0.3 NaN 3 0.1 0.2 0.3 NaN 1 NaN 0.4 0.5 0.6 2 NaN 0.4 0.5 0.6 ``` And ``` pd.concat([d1, d2], verify_integrity=True) ``` > ValueError: Indexes have overlapping values: [2]", "best_answers_score":0.8, "library_name":"pandas", "question_url":"https:\/\/stackoverflow.com\/questions\/49620538\/what-are-the-levels-keys-and-names-arguments-for-in-pandas-concat-functio", "best_answers_votes":189, "question_length":2424, "response_length":16808 }, { "question":"How to GroupBy a Dataframe in Pandas and keep Columns [duplicate] This question already has answers here: Converting a Pandas GroupBy multiindex output from Series back to DataFrame (13 answers) Closed 1 year ago. 
Given a dataframe that logs uses of some books like this: ``` Name Type ID Book1 ebook 1 Book2 paper 2 Book3 paper 3 Book1 ebook 1 Book2 paper 2 ``` I need to get the count of all the books, keeping the other columns and get this: ``` Name Type ID Count Book1 ebook 1 2 Book2 paper 2 2 Book3 paper 3 1 ``` How can this be done?", "response":"You want the following: ``` In [20]: df.groupby(['Name','Type','ID']).count().reset_index() Out[20]: Name Type ID Count 0 Book1 ebook 1 2 1 Book2 paper 2 2 2 Book3 paper 3 1 ``` In your case the 'Name', 'Type' and 'ID' cols match in values so we can groupby on these, call count and then reset_index. An alternative approach would be to add the 'Count' column using transform and then call drop_duplicates: ``` In [25]: df['Count'] = df.groupby(['Name'])['ID'].transform('count') df.drop_duplicates() Out[25]: Name Type ID Count 0 Book1 ebook 1 2 1 Book2 paper 2 2 2 Book3 paper 3 1 ```", "best_answers_score":0.8, "library_name":"pandas", "question_url":"https:\/\/stackoverflow.com\/questions\/31569549\/how-to-groupby-a-dataframe-in-pandas-and-keep-columns", "best_answers_votes":139, "question_length":541, "response_length":586 }, { "question":"Find integer index of rows with NaN in pandas dataframe I have a pandas DataFrame like this: ``` a b 2011-01-01 00:00:00 1.883381 -0.416629 2011-01-01 01:00:00 0.149948 -1.782170 2011-01-01 02:00:00 -0.407604 0.314168 2011-01-01 03:00:00 1.452354 NaN 2011-01-01 04:00:00 -1.224869 -0.947457 2011-01-01 05:00:00 0.498326 0.070416 2011-01-01 06:00:00 0.401665 NaN 2011-01-01 07:00:00 -0.019766 0.533641 2011-01-01 08:00:00 -1.101303 -1.408561 2011-01-01 09:00:00 1.671795 -0.764629 ``` Is there an efficient way to find the \"integer\" index of rows with NaNs? 
In this case the desired output should be [3, 6].", "response":"Here is a simpler solution: inds = pd.isnull(df).any(1).nonzero()[0] ``` In [9]: df Out[9]: 0 1 0 0.450319 0.062595 1 -0.673058 0.156073 2 -0.871179 -0.118575 3 0.594188 NaN 4 -1.017903 -0.484744 5 0.860375 0.239265 6 -0.640070 NaN 7 -0.535802 1.632932 8 0.876523 -0.153634 9 -0.686914 0.131185 In [10]: pd.isnull(df).any(1).nonzero()[0] Out[10]: array([3, 6]) ```", "best_answers_score":0.8, "library_name":"pandas", "question_url":"https:\/\/stackoverflow.com\/questions\/14016247\/find-integer-index-of-rows-with-nan-in-pandas-dataframe", "best_answers_votes":160, "question_length":606, "response_length":364 }, { "question":"Updating a value in a pandas dataframe in an iterrows loop [duplicate] This question already has answers here: Update a dataframe in pandas while iterating row by row (9 answers) Closed 2 years ago. I am doing some geocoding work that I used selenium to screen scrape the x-y coordinate I need for address of a location, I imported an xls file to pandas dataframe and want to use an explicit loop to update the rows which do not have the x-y coordinate, like below: ```py for index, row in rche_df.iterrows(): if isinstance(row.wgs1984_latitude, float): row = row.copy() target = row.address_chi dict_temp = geocoding(target) row.wgs1984_latitude = dict_temp['lat'] row.wgs1984_longitude = dict_temp['long'] ``` I have read Why doesn't this function \"take\" after I iterrows over a pandas DataFrame? and am fully aware that iterrows only gives us a view rather than a copy for editing, but what if I really to update the value row by row? Is lambda feasible?", "response":"The rows you get back from iterrows are copies that are no longer connected to the original data frame, so edits don't change your dataframe. 
Thankfully, because each item you get back from iterrows contains the current index, you can use that to access and edit the relevant row of the dataframe: ``` for index, row in rche_df.iterrows(): if isinstance(row.wgs1984_latitude, float): row = row.copy() target = row.address_chi dict_temp = geocoding(target) rche_df.loc[index, 'wgs1984_latitude'] = dict_temp['lat'] rche_df.loc[index, 'wgs1984_longitude'] = dict_temp['long'] ``` In my experience, this approach seems slower than using an approach like apply or map, but as always, it's up to you to decide how to make the performance\/ease of coding tradeoff.", "best_answers_score":0.8, "library_name":"pandas", "question_url":"https:\/\/stackoverflow.com\/questions\/25478528\/updating-a-value-in-a-pandas-dataframe-in-an-iterrows-loop", "best_answers_votes":230, "question_length":957, "response_length":757 }, { "question":"pandas read_csv and filter columns with usecols I have a csv file which isn't coming in correctly with pandas.read_csv when I filter the columns with usecols and use multiple indexes. ``` import pandas as pd csv = r\"\"\"dummy,date,loc,x bar,20090101,a,1 bar,20090102,a,3 bar,20090103,a,5 bar,20090101,b,1 bar,20090102,b,3 bar,20090103,b,5\"\"\" f = open('foo.csv', 'w') f.write(csv) f.close() df1 = pd.read_csv('foo.csv', header=0, names=[\"dummy\", \"date\", \"loc\", \"x\"], index_col=[\"date\", \"loc\"], usecols=[\"dummy\", \"date\", \"loc\", \"x\"], parse_dates=[\"date\"]) print df1 # Ignore the dummy columns df2 = pd.read_csv('foo.csv', index_col=[\"date\", \"loc\"], usecols=[\"date\", \"loc\", \"x\"], # <----------- Changed parse_dates=[\"date\"], header=0, names=[\"dummy\", \"date\", \"loc\", \"x\"]) print df2 ``` I expect that df1 and df2 should be the same except for the missing dummy column, but the columns come in mislabeled. Also the date is getting parsed as a date. 
``` In [118]: %run test.py dummy x date loc 2009-01-01 a bar 1 2009-01-02 a bar 3 2009-01-03 a bar 5 2009-01-01 b bar 1 2009-01-02 b bar 3 2009-01-03 b bar 5 date date loc a 1 20090101 3 20090102 5 20090103 b 1 20090101 3 20090102 5 20090103 ``` Using column numbers instead of names give me the same problem. I can workaround the issue by dropping the dummy column after the read_csv step, but I'm trying to understand what is going wrong. I'm using pandas 0.10.1. edit: fixed bad header usage.", "response":"The solution lies in understanding these two keyword arguments: names is only necessary when there is no header row in your file and you want to specify other arguments (such as usecols) using column names rather than integer indices. usecols is supposed to provide a filter before reading the whole DataFrame into memory; if used properly, there should never be a need to delete columns after reading. So because you have a header row, passing header=0 is sufficient and additionally passing names appears to be confusing pd.read_csv. 
Removing names from the second call gives the desired output: ``` import pandas as pd from StringIO import StringIO csv = r\"\"\"dummy,date,loc,x bar,20090101,a,1 bar,20090102,a,3 bar,20090103,a,5 bar,20090101,b,1 bar,20090102,b,3 bar,20090103,b,5\"\"\" df = pd.read_csv(StringIO(csv), header=0, index_col=[\"date\", \"loc\"], usecols=[\"date\", \"loc\", \"x\"], parse_dates=[\"date\"]) ``` Which gives us: ``` x date loc 2009-01-01 a 1 2009-01-02 a 3 2009-01-03 a 5 2009-01-01 b 1 2009-01-02 b 3 2009-01-03 b 5 ```", "best_answers_score":0.8, "library_name":"pandas", "question_url":"https:\/\/stackoverflow.com\/questions\/15017072\/pandas-read-csv-and-filter-columns-with-usecols", "best_answers_votes":153, "question_length":1437, "response_length":1033 }, { "question":"Pandas unstack problems: ValueError: Index contains duplicate entries, cannot reshape I am trying to unstack a multi-index with pandas and I keep getting: ``` ValueError: Index contains duplicate entries, cannot reshape ``` Given a dataset with four columns: id (string) date (string) location (string) value (float) I first set a three-level multi-index: ``` In [37]: e.set_index(['id', 'date', 'location'], inplace=True) In [38]: e Out[38]: value id date location id1 2014-12-12 loc1 16.86 2014-12-11 loc1 17.18 2014-12-10 loc1 17.03 2014-12-09 loc1 17.28 ``` Then I try to unstack the location: ``` In [39]: e.unstack('location') --------------------------------------------------------------------------- ValueError Traceback (most recent call last) in () ----> 1 e.unstack('location') ... C:\\Anaconda\\envs\\sandbox\\lib\\site-packages\\pandas\\core\\reshape.pyc in _make_selectors(self) 143 144 if mask.sum() 145 raise ValueError('Index contains duplicate entries, ' 146 'cannot reshape') 147 ValueError: Index contains duplicate entries, cannot reshape ``` What is going on here?", "response":"Here's an example DataFrame which shows this; it has duplicate values with the same index.
The question is, do you want to aggregate these or keep them as multiple rows? ``` In [11]: df Out[11]: 0 1 2 3 0 1 2 a 16.86 1 1 2 a 17.18 2 1 4 a 17.03 3 2 5 b 17.28 In [12]: df.pivot_table(values=3, index=[0, 1], columns=2, aggfunc='mean') # desired? Out[12]: 2 a b 0 1 1 2 17.02 NaN 4 17.03 NaN 2 5 NaN 17.28 In [13]: df1 = df.set_index([0, 1, 2]) In [14]: df1 Out[14]: 3 0 1 2 1 2 a 16.86 a 17.18 4 a 17.03 2 5 b 17.28 In [15]: df1.unstack(2) ValueError: Index contains duplicate entries, cannot reshape ``` One solution is to reset_index (and get back to df) and use pivot_table. ``` In [16]: df1.reset_index().pivot_table(values=3, index=[0, 1], columns=2, aggfunc='mean') Out[16]: 2 a b 0 1 1 2 17.02 NaN 4 17.03 NaN 2 5 NaN 17.28 ``` Another option (if you don't want to aggregate) is to append a dummy level, unstack it, then drop the dummy level...", "best_answers_score":0.8, "library_name":"pandas", "question_url":"https:\/\/stackoverflow.com\/questions\/28651079\/pandas-unstack-problems-valueerror-index-contains-duplicate-entries-cannot-re", "best_answers_votes":83, "question_length":1084, "response_length":949 }, { "question":"How to make separator in pandas read_csv more flexible wrt whitespace, for irregular separators? I need to create a data frame by reading in data from a file, using read_csv method. However, the separators are not very regular: some columns are separated by tabs (\\t), other are separated by spaces. Moreover, some columns can be separated by 2 or 3 or more spaces or even by a combination of spaces and tabs (for example 3 spaces, two tabs and then 1 space). Is there a way to tell pandas to treat these files properly? By the way, I do not have this problem if I use Python. I use: ``` for line in file(file_name): fld = line.split() ``` And it works perfect. It does not care if there are 2 or 3 spaces between the fields. Even combinations of spaces and tabs do not cause any problem. 
Can pandas do the same?", "response":"From the documentation, you can use either a regex or delim_whitespace: ``` >>> import pandas as pd >>> for line in open(\"whitespace.csv\"): ... print repr(line) ... 'a\\t b\\tc 1 2\\n' 'd\\t e\\tf 3 4\\n' >>> pd.read_csv(\"whitespace.csv\", header=None, delimiter=r\"\\s+\") 0 1 2 3 4 0 a b c 1 2 1 d e f 3 4 >>> pd.read_csv(\"whitespace.csv\", header=None, delim_whitespace=True) 0 1 2 3 4 0 a b c 1 2 1 d e f 3 4 ```", "best_answers_score":0.8, "library_name":"pandas", "question_url":"https:\/\/stackoverflow.com\/questions\/15026698\/how-to-make-separator-in-pandas-read-csv-more-flexible-wrt-whitespace-for-irreg", "best_answers_votes":189, "question_length":812, "response_length":405 }, { "question":"substring of an entire column in pandas dataframe I have a pandas dataframe \"df\". In this dataframe I have multiple columns, one of which I have to substring. Lets say the column name is \"col\". I can run a \"for\" loop like below and substring the column: ``` for i in range(0,len(df)): df.iloc[i].col = df.iloc[i].col[:9] ``` But I wanted to know, if there is an option where I don't have to use a \"for\" loop, and do it directly using an attribute.I have huge amount of data, and if I do this, the data will take a very long time process.", "response":"Use the str accessor with square brackets: ``` df['col'] = df['col'].str[:9] ``` Or str.slice: ``` df['col'] = df['col'].str.slice(0, 9) ```", "best_answers_score":0.8, "library_name":"pandas", "question_url":"https:\/\/stackoverflow.com\/questions\/36505847\/substring-of-an-entire-column-in-pandas-dataframe", "best_answers_votes":246, "question_length":537, "response_length":140 }, { "question":"Export from pandas to_excel without row names (index)? I'm trying to print out a dataframe from pandas into Excel. Here I am using to_excel() functions. 
However, I found that the 1st column in Excel is the \"index\", ``` 0 6\/6\/2021 0:00 8\/6\/2021 0:00 1 4\/10\/2024 0:00 6\/10\/2024 0:00 2 4\/14\/2024 0:00 6\/14\/2024 0:00 ``` Is there any ways to get rid of the first column?", "response":"You need to set index=False in to_excel in order for it to not write the index column out, this semantic is followed in other Pandas IO tools, see http:\/\/pandas.pydata.org\/pandas-docs\/stable\/generated\/pandas.DataFrame.to_excel.html and http:\/\/pandas.pydata.org\/pandas-docs\/stable\/io.html", "best_answers_score":0.8, "library_name":"pandas", "question_url":"https:\/\/stackoverflow.com\/questions\/22089317\/export-from-pandas-to-excel-without-row-names-index", "best_answers_votes":212, "question_length":366, "response_length":287 }, { "question":"Python Pandas Counting the Occurrences of a Specific value I am trying to find the number of times a certain value appears in one column. I have made the dataframe with data = pd.DataFrame.from_csv('data\/DataSet2.csv') and now I want to find the number of times something appears in a column. How is this done? I thought it was the below, where I am looking in the education column and counting the number of time ? occurs. 
The code below shows that I am trying to find the number of times 9th appears and the error is what I am getting when I run the code Code ``` missing2 = df.education.value_counts()['9th'] print(missing2) ``` Error ``` KeyError: '9th' ```", "response":"You can create subset of data with your condition and then use shape or len: ``` print df col1 education 0 a 9th 1 b 9th 2 c 8th print df.education == '9th' 0 True 1 True 2 False Name: education, dtype: bool print df[df.education == '9th'] col1 education 0 a 9th 1 b 9th print df[df.education == '9th'].shape[0] 2 print len(df[df['education'] == '9th']) 2 ``` Performance is interesting, the fastest solution is compare numpy array and sum: Code: ``` import perfplot, string np.random.seed(123) def shape(df): return df[df.education == 'a'].shape[0] def len_df(df): return len(df[df['education'] == 'a']) def query_count(df): return df.query('education == \"a\"').education.count() def sum_mask(df): return (df.education == 'a').sum() def sum_mask_numpy(df): return (df.education.values == 'a').sum() def make_df(n): L = list(string.ascii_letters) df = pd.DataFrame(np.random.choice(L, size=n), columns=['education']) return df perfplot.show( setup=make_df, kernels=[shape, len_df, query_count, sum_mask, sum_mask_numpy], n_range=[2**k for k in range(2, 25)], logx=True, logy=True, equality_check=False, xlabel='len(df)') ```", "best_answers_score":0.8, "library_name":"pandas", "question_url":"https:\/\/stackoverflow.com\/questions\/35277075\/python-pandas-counting-the-occurrences-of-a-specific-value", "best_answers_votes":165, "question_length":661, "response_length":1123 }, { "question":"How to convert string to datetime format in pandas? I have a column I_DATE of type string (object) in a dataframe called train as show below. ```none I_DATE 28-03-2012 2:15:00 PM 28-03-2012 2:17:28 PM 28-03-2012 2:50:50 PM ``` How to convert I_DATE from string to datetime format & specify the format of input string. 
Also, how to filter rows based on a range of dates in pandas?", "response":"Use to_datetime. There is no need to specify the format in this case since the parser is able to figure it out. ``` In [51]: pd.to_datetime(df['I_DATE']) Out[51]: 0 2012-03-28 14:15:00 1 2012-03-28 14:17:28 2 2012-03-28 14:50:50 Name: I_DATE, dtype: datetime64[ns] ``` To access the date\/day\/time component use the dt accessor: ``` In [54]: df['I_DATE'].dt.date Out[54]: 0 2012-03-28 1 2012-03-28 2 2012-03-28 dtype: object In [56]: df['I_DATE'].dt.time Out[56]: 0 14:15:00 1 14:17:28 2 14:50:50 dtype: object ``` You can use strings to filter as an example: ``` In [59]: df = pd.DataFrame({'date':pd.date_range(start = dt.datetime(2015,1,1), end = dt.datetime.now())}) df[(df['date'] > '2015-02-04') & (df['date'] < '2015-02-10')] Out[59]: date 35 2015-02-05 36 2015-02-06 37 2015-02-07 38 2015-02-08 39 2015-02-09 ```", "best_answers_score":0.8, "library_name":"pandas", "question_url":"https:\/\/stackoverflow.com\/questions\/32204631\/how-to-convert-string-to-datetime-format-in-pandas", "best_answers_votes":194, "question_length":379, "response_length":819 }, { "question":"Python: Pandas Dataframe how to multiply entire column with a scalar How do I multiply each element of a given column of my dataframe with a scalar? (I have tried looking on SO, but cannot seem to find the right solution) Doing something like: ``` df['quantity'] *= -1 # trying to multiply each row's quantity column with -1 ``` gives me a warning: ``` A value is trying to be set on a copy of a slice from a DataFrame. 
Try using .loc[row_indexer,col_indexer] = value instead ``` Note: If possible, I do not want to be iterating over the dataframe and do something like this...as I think any standard math operation on an entire column should be possible w\/o having to write a loop: ``` for idx, row in df.iterrows(): df.loc[idx, 'quantity'] *= -1 ``` EDIT: I am running 0.16.2 of Pandas full trace: ``` SettingWithCopyWarning: A value is trying to be set on a copy of a slice from a DataFrame. Try using .loc[row_indexer,col_indexer] = value instead See the the caveats in the documentation: http:\/\/pandas.pydata.org\/pandas-docs\/stable\/indexing.html#indexing-view-versus-copy self.obj[item] = s ```", "response":"Note: for those using pandas 0.20.3 and above, and are looking for an answer, all these options will work: ``` df = pd.DataFrame(np.ones((5,6)),columns=['one','two','three', 'four','five','six']) df.one *=5 df.two = df.two*5 df.three = df.three.multiply(5) df['four'] = df['four']*5 df.loc[:, 'five'] *=5 df.iloc[:, 5] = df.iloc[:, 5]*5 ``` which results in ``` one two three four five six 0 5.0 5.0 5.0 5.0 5.0 5.0 1 5.0 5.0 5.0 5.0 5.0 5.0 2 5.0 5.0 5.0 5.0 5.0 5.0 3 5.0 5.0 5.0 5.0 5.0 5.0 4 5.0 5.0 5.0 5.0 5.0 5.0 ```", "best_answers_score":0.8, "library_name":"pandas", "question_url":"https:\/\/stackoverflow.com\/questions\/33768122\/python-pandas-dataframe-how-to-multiply-entire-column-with-a-scalar", "best_answers_votes":93, "question_length":1099, "response_length":523 }, { "question":"Add multiple empty columns to pandas DataFrame How do I add multiple empty columns to a DataFrame from a list? 
I can do: ``` df[\"B\"] = None df[\"C\"] = None df[\"D\"] = None ``` But I can't do: ``` df[[\"B\", \"C\", \"D\"]] = None ``` KeyError: \"['B' 'C' 'D'] not in index\"", "response":"You could use df.reindex to add new columns: ``` In [18]: df = pd.DataFrame(np.random.randint(10, size=(5,1)), columns=['A']) In [19]: df Out[19]: A 0 4 1 7 2 0 3 7 4 6 In [20]: df.reindex(columns=list('ABCD')) Out[20]: A B C D 0 4 NaN NaN NaN 1 7 NaN NaN NaN 2 0 NaN NaN NaN 3 7 NaN NaN NaN 4 6 NaN NaN NaN ``` reindex will return a new DataFrame, with columns appearing in the order they are listed: ``` In [31]: df.reindex(columns=list('DCBA')) Out[31]: D C B A 0 NaN NaN NaN 4 1 NaN NaN NaN 7 2 NaN NaN NaN 0 3 NaN NaN NaN 7 4 NaN NaN NaN 6 ``` The reindex method as a fill_value parameter as well: ``` In [22]: df.reindex(columns=list('ABCD'), fill_value=0) Out[22]: A B C D 0 4 0 0 0 1 7 0 0 0 2 0 0 0 0 3 7 0 0 0 4 6 0 0 0 ```", "best_answers_score":0.8, "library_name":"pandas", "question_url":"https:\/\/stackoverflow.com\/questions\/30926670\/add-multiple-empty-columns-to-pandas-dataframe", "best_answers_votes":128, "question_length":263, "response_length":733 }, { "question":"ValueError: Mime type rendering requires nbformat>=4.2.0 but it is not installed I was trying to print a plotly plot in Visual Studio Code and caught this error: ``` --------------------------------------------------------------------------- ValueError Traceback (most recent call last) in 30 31 fig.update_layout(height=nrows*500) ---> 32 fig.show() C:\\Python38\\lib\\site-packages\\plotly\\basedatatypes.py in show(self, *args, **kwargs) 3147 import plotly.io as pio 3148 -> 3149 return pio.show(self, *args, **kwargs) 3150 3151 def to_json(self, *args, **kwargs): C:\\Python38\\lib\\site-packages\\plotly\\io\\_renderers.py in show(fig, renderer, validate, **kwargs) 383 384 if not nbformat or LooseVersion(nbformat.__version__) 385 raise ValueError( 386 \"Mime type rendering requires nbformat>=4.2.0 but it is not 
installed\" 387 ) ValueError: Mime type rendering requires nbformat>=4.2.0 but it is not installed ``` The code I used: ``` import plotly.graph_objects as go from plotly.subplots import make_subplots import plotly.express as px df = df[df['Data']>0] df['Timestamp'] = pd.to_datetime(df['Timestamp']) df = df[(df['Id'] ==1)|(df['Id'] ==6)] dfp = pd.pivot_table(df, values='Data', index=['Timestamp'], columns=['Id'], ) nrows = len(dfp.columns) fig = make_subplots(rows=nrows, cols=1, subplot_titles=['Id '+str(c) for c in dfp.columns]) # add traces x = 1 for i, col in enumerate(dfp.columns): fig.add_trace(go.Scatter(x=dfp.index, y=dfp[col].values, name = 'Id '+str(col), mode = 'lines', ), row=i+1, col=1) fig.update_layout(height=nrows*500) fig.show() ``` I tried pip install nbformat in the console following this feed on GitHub and this question on stackoverflow but it did not work. However, it seems the code could run with the last 2 rows removed: ``` fig.update_layout(height=nrows*500) fig.show() ```", "response":"Method 1 reinstall ipykernel via ``` pip install ipykernel ``` Method 2 ``` pip install --upgrade nbformat ``` And restart your kernel, extremely important.", "best_answers_score":0.8, "library_name":"pandas", "question_url":"https:\/\/stackoverflow.com\/questions\/66557543\/valueerror-mime-type-rendering-requires-nbformat-4-2-0-but-it-is-not-installed", "best_answers_votes":152, "question_length":1819, "response_length":156 }, { "question":"Convert Column to Date Format (Pandas Dataframe) I have a pandas dataframe as follows: ``` Symbol Date A 02\/20\/2015 A 01\/15\/2016 A 08\/21\/2015 ``` I want to sort it by Date, but the column is just an object. I tried to make the column a date object, but I ran into an issue where that format is not the format needed. The format needed is 2015-02-20, etc. So now I'm trying to figure out how to have numpy convert the 'American' dates into the ISO standard, so that I can make them date objects, so that I can sort by them. 
How would I convert these american dates into ISO standard, or is there a more straight forward method I'm missing within pandas?", "response":"You can use pd.to_datetime() to convert to a datetime object. It takes a format parameter, but in your case I don't think you need it. ``` >>> import pandas as pd >>> df = pd.DataFrame( {'Symbol':['A','A','A'] , 'Date':['02\/20\/2015','01\/15\/2016','08\/21\/2015']}) >>> df Date Symbol 0 02\/20\/2015 A 1 01\/15\/2016 A 2 08\/21\/2015 A >>> df['Date'] =pd.to_datetime(df.Date) >>> df.sort('Date') # This now sorts in date order Date Symbol 0 2015-02-20 A 2 2015-08-21 A 1 2016-01-15 A ``` For future search, you can change the sort statement: ``` >>> df.sort_values(by='Date') # This now sorts in date order Date Symbol 0 2015-02-20 A 2 2015-08-21 A 1 2016-01-15 A ```", "best_answers_score":0.8, "library_name":"pandas", "question_url":"https:\/\/stackoverflow.com\/questions\/28161356\/convert-column-to-date-format-pandas-dataframe", "best_answers_votes":191, "question_length":652, "response_length":657 }, { "question":"Pandas dataframe total row I have a dataframe, something like: ``` foo bar qux 0 a 1 3.14 1 b 3 2.72 2 c 2 1.62 3 d 9 1.41 4 e 3 0.58 ``` and I would like to add a 'total' row to the end of dataframe: ``` foo bar qux 0 a 1 3.14 1 b 3 2.72 2 c 2 1.62 3 d 9 1.41 4 e 3 0.58 5 total 18 9.47 ``` I've tried to use the sum command but I end up with a Series, which although I can convert back to a Dataframe, doesn't maintain the data types: ``` tot_row = pd.DataFrame(df.sum()).T tot_row['foo'] = 'tot' tot_row.dtypes: foo object bar object qux object ``` I would like to maintain the data types from the original data frame as I need to apply other operations to the total row, something like: ``` baz = 2*tot_row['qux'] + 3*tot_row['bar'] ```", "response":"Update June 2022 pd.append is now deprecated. 
You could use pd.concat instead but it's probably easier to use df.loc['Total'] = df.sum(numeric_only=True), as Kevin Zhu commented. Or, better still, don't modify the data frame in place and keep your data separate from your summary statistics! Append a totals row with ``` df.append(df.sum(numeric_only=True), ignore_index=True) ``` The conversion is necessary only if you have a column of strings or objects. It's a bit of a fragile solution so I'd recommend sticking to operations on the dataframe, though. eg. ``` baz = 2*df['qux'].sum() + 3*df['bar'].sum() ```", "best_answers_score":0.8, "library_name":"pandas", "question_url":"https:\/\/stackoverflow.com\/questions\/21752399\/pandas-dataframe-total-row", "best_answers_votes":107, "question_length":740, "response_length":612 }, { "question":"How to convert rows in DataFrame in Python to dictionaries For example, I have DataFrame now as ``` id score1 score2 score3 score4 score5 1 0.000000 0.108659 0.000000 0.078597 1 2 0.053238 0.308253 0.286353 0.446433 1 3 0.000000 0.083979 0.808983 0.233052 1 ``` I want to convert it as ``` id scoreDict 1 {'1': 0, '2': 0.1086, ...} 2 {...} 3 {...} ``` Anyway to do that?", "response":"``` import pandas as pd # your df # ========================= print(df) id score1 score2 score3 score4 score5 0 1 0.0000 0.1087 0.0000 0.0786 1 1 2 0.0532 0.3083 0.2864 0.4464 1 2 3 0.0000 0.0840 0.8090 0.2331 1 # to_dict # ========================= df.to_dict(orient='records') Out[318]: [{'id': 1.0, 'score1': 0.0, 'score2': 0.10865899999999999, 'score3': 0.0, 'score4': 0.078597, 'score5': 1.0}, {'id': 2.0, 'score1': 0.053238000000000001, 'score2': 0.308253, 'score3': 0.28635300000000002, 'score4': 0.44643299999999997, 'score5': 1.0}, {'id': 3.0, 'score1': 0.0, 'score2': 0.083978999999999998, 'score3': 0.80898300000000001, 'score4': 0.23305200000000001, 'score5': 1.0}] ```", "best_answers_score":0.8, "library_name":"pandas", 
"question_url":"https:\/\/stackoverflow.com\/questions\/31324310\/how-to-convert-rows-in-dataframe-in-python-to-dictionaries", "best_answers_votes":242, "question_length":370, "response_length":681 }, { "question":"How to open and convert sqlite database to pandas dataframe I have downloaded some datas as a sqlite database (data.db) and I want to open this database in python and then convert it into pandas dataframe. This is so far I have done ``` import sqlite3 import pandas dat = sqlite3.connect('data.db') #connected to database with out error pandas.DataFrame.from_records(dat, index=None, exclude=None, columns=None, coerce_float=False, nrows=None) ``` But its throwing this error ``` Traceback (most recent call last): File \"\", line 1, in File \"\/usr\/local\/lib\/python2.7\/dist-packages\/pandas\/core\/frame.py\", line 980, in from_records coerce_float=coerce_float) File \"\/usr\/local\/lib\/python2.7\/dist-packages\/pandas\/core\/frame.py\", line 5353, in _to_arrays if not len(data): TypeError: object of type 'sqlite3.Connection' has no len() ``` How to convert sqlite database to pandas dataframe", "response":"Despite sqlite being part of the Python Standard Library and is a nice and easy interface to SQLite databases, the Pandas tutorial states: Note In order to use read_sql_table(), you must have the SQLAlchemy optional dependency installed. But Pandas still supports sqlite3 access if you want to avoid installing SQLAlchemy: ``` import sqlite3 import pandas as pd # Create your connection. 
cnx = sqlite3.connect('file.db') df = pd.read_sql_query(\"SELECT * FROM table_name\", cnx) ``` As stated here, but you need to know the name of the used table in advance.", "best_answers_score":0.8, "library_name":"pandas", "question_url":"https:\/\/stackoverflow.com\/questions\/36028759\/how-to-open-and-convert-sqlite-database-to-pandas-dataframe", "best_answers_votes":216, "question_length":882, "response_length":556 }, { "question":"round a single column in pandas Is there a way to round a single column in pandas without affecting the rest of the dataframe? ``` >>> print(df) item value1 value2 0 a 1.12 1.3 1 a 1.50 2.5 2 a 0.10 0.0 3 b 3.30 -1.0 4 b 4.80 -1.0 ``` I have tried the following: ``` >>> df.value1.apply(np.round) 0 1 1 2 2 0 3 3 4 5 5 5 ``` What is the correct way to make data look like this: ``` item value1 value2 0 a 1 1.3 1 a 2 2.5 2 a 0 0.0 3 b 3 -1.0 4 b 5 -1.0 5 c 5 5.0 ```", "response":"You are very close. You applied the round to the series of values given by df.value1. The return type is thus a Series. You need to assign that series back to the dataframe (or another dataframe with the same Index). Also, there is a pandas.Series.round method which is basically a short hand for pandas.Series.apply(np.round). 
``` >>> df.value1 = df.value1.round() >>> print df item value1 value2 0 a 1 1.3 1 a 2 2.5 2 a 0 0.0 3 b 3 -1.0 4 b 5 -1.0 ```", "best_answers_score":0.8, "library_name":"pandas", "question_url":"https:\/\/stackoverflow.com\/questions\/26133538\/round-a-single-column-in-pandas", "best_answers_votes":134, "question_length":466, "response_length":453 }, { "question":"pandas: to_numeric for multiple columns I'm working with the following df: ``` c.sort_values('2005', ascending=False).head(3) GeoName ComponentName IndustryId IndustryClassification Description 2004 2005 2006 2007 2008 2009 2010 2011 2012 2013 2014 37926 Alabama Real GDP by state 9 213 Support activities for mining 99 98 117 117 115 87 96 95 103 102 (NA) 37951 Alabama Real GDP by state 34 42 Wholesale trade 9898 10613 10952 11034 11075 9722 9765 9703 9600 9884 10199 37932 Alabama Real GDP by state 15 327 Nonmetallic mineral products manufacturing 980 968 940 1084 861 724 714 701 589 641 (NA) ``` I want to force numeric on all of the years: ``` c['2014'] = pd.to_numeric(c['2014'], errors='coerce') ``` is there an easy way to do this or do I have to type them all out?", "response":"UPDATE: you don't need to convert your values afterwards, you can do it on-the-fly when reading your CSV: ``` In [165]: df=pd.read_csv(url, index_col=0, na_values=['(NA)']).fillna(0) In [166]: df.dtypes Out[166]: GeoName object ComponentName object IndustryId int64 IndustryClassification object Description object 2004 int64 2005 int64 2006 int64 2007 int64 2008 int64 2009 int64 2010 int64 2011 int64 2012 int64 2013 int64 2014 float64 dtype: object ``` If you need to convert multiple columns to numeric dtypes - use the following technique: Sample source DF: ``` In [271]: df Out[271]: id a b c d e f 0 id_3 AAA 6 3 5 8 1 1 id_9 3 7 5 7 3 BBB 2 id_7 4 2 3 5 4 2 3 id_0 7 3 5 7 9 4 4 id_0 2 4 6 4 0 2 In [272]: df.dtypes Out[272]: id object a object b int64 c int64 d int64 e int64 f object dtype: object ``` Converting selected columns 
to numeric dtypes: ``` In [273]: cols = df.columns.drop('id') In [274]: df[cols] = df[cols].apply(pd.to_numeric, errors='coerce') In [275]: df Out[275]: id a b c d e f 0 id_3 NaN 6 3 5 8 1.0 1 id_9 3.0 7 5 7 3 NaN 2 id_7 4.0 2 3 5 4 2.0 3 id_0 7.0 3 5 7 9 4.0 4 id_0 2.0 4 6 4 0 2.0 In [276]: df.dtypes Out[276]: id object a float64 b int64 c int64 d int64 e int64 f float64 dtype: object ``` PS if you want to select all string (object) columns use the following simple trick: ``` cols = df.columns[df.dtypes.eq('object')] ```", "best_answers_score":0.8, "library_name":"pandas", "question_url":"https:\/\/stackoverflow.com\/questions\/36814100\/pandas-to-numeric-for-multiple-columns", "best_answers_votes":159, "question_length":776, "response_length":1367 }, { "question":"How do you remove the column name row when exporting a pandas DataFrame? Say I import the following Excel spreadsheet into a dataframe: ``` Val1 Val2 Val3 1 2 3 5 6 7 9 1 2 ``` How do I delete the column name row (in this case Val1, Val2, Val3) so that I can export a csv with no column names, just the data? I have tried df.drop() and df.ix[1:] and have not been successful with either.", "response":"You can write to csv without the header using header=False and without the index using index=False. If desired, you also can modify the separator using sep. 
CSV example with no header row, omitting the header row: ``` df.to_csv('filename.csv', header=False) ``` TSV (tab-separated) example, omitting the index column: ``` df.to_csv('filename.tsv', sep='\\t', index=False) ```", "best_answers_score":0.8, "library_name":"pandas", "question_url":"https:\/\/stackoverflow.com\/questions\/19781609\/how-do-you-remove-the-column-name-row-when-exporting-a-pandas-dataframe", "best_answers_votes":235, "question_length":387, "response_length":374 }, { "question":"pandas concat columns ignore_index doesn't work I am trying to column-bind dataframes (like R's cbind() does) and having issue with pandas concat, as ignore_index=True doesn't seem to work: ``` df1 = pd.DataFrame({'A': ['A0', 'A1', 'A2', 'A3'], 'B': ['B0', 'B1', 'B2', 'B3'], 'D': ['D0', 'D1', 'D2', 'D3']}, index=[0, 2, 3, 4]) df2 = pd.DataFrame({'A1': ['A4', 'A5', 'A6', 'A7'], 'C': ['C4', 'C5', 'C6', 'C7'], 'D2': ['D4', 'D5', 'D6', 'D7']}, index=[5, 6, 7, 3]) df1 # A B D # 0 A0 B0 D0 # 2 A1 B1 D1 # 3 A2 B2 D2 # 4 A3 B3 D3 df2 # A1 C D2 # 5 A4 C4 D4 # 6 A5 C5 D5 # 7 A6 C6 D6 # 3 A7 C7 D7 dfs = [df1, df2] df = pd.concat(dfs, axis=1, ignore_index=True) print df ``` and the result is ``` 0 1 2 3 4 5 0 A0 B0 D0 NaN NaN NaN 2 A1 B1 D1 NaN NaN NaN 3 A2 B2 D2 A7 C7 D7 4 A3 B3 D3 NaN NaN NaN 5 NaN NaN NaN A4 C4 D4 6 NaN NaN NaN A5 C5 D5 7 NaN NaN NaN A6 C6 D6 ``` Even if I reset index using ``` df1.reset_index() df2.reset_index() ``` and then try ``` pd.concat([df1, df2], axis=1) ``` it still produces the same result! The expected result is a 6x4 dataframe where the contents of columns A,B,D, A1,C,D2 are horizontally concatenated.", "response":"If I understood you correctly, this is what you would like to do. 
``` import pandas as pd df1 = pd.DataFrame({'A': ['A0', 'A1', 'A2', 'A3'], 'B': ['B0', 'B1', 'B2', 'B3'], 'D': ['D0', 'D1', 'D2', 'D3']}, index=[0, 2, 3, 4]) df2 = pd.DataFrame({'A1': ['A4', 'A5', 'A6', 'A7'], 'C': ['C4', 'C5', 'C6', 'C7'], 'D2': ['D4', 'D5', 'D6', 'D7']}, index=[4, 5, 6 , 7]) df1.reset_index(drop=True, inplace=True) df2.reset_index(drop=True, inplace=True) df = pd.concat([df1, df2], axis=1) ``` Which gives: ``` A B D A1 C D2 0 A0 B0 D0 A4 C4 D4 1 A1 B1 D1 A5 C5 D5 2 A2 B2 D2 A6 C6 D6 3 A3 B3 D3 A7 C7 D7 ``` Actually, I would have expected that df = pd.concat(dfs, axis=1, ignore_index=True) gives the same result. This is the excellent explanation from jreback: ignore_index=True \u2018ignores\u2019, meaning doesn\u2019t align on the joining axis. it simply pastes them together in the order that they are passed, then reassigns a range for the actual index (e.g. range(len(index))) so the difference between joining on non-overlapping indexes (assume axis=1 in the example), is that with ignore_index=False (the default), you get the concat of the indexes, and with ignore_index=True you get a range.", "best_answers_score":0.8, "library_name":"pandas", "question_url":"https:\/\/stackoverflow.com\/questions\/32801806\/pandas-concat-columns-ignore-index-doesnt-work", "best_answers_votes":150, "question_length":1139, "response_length":1177 }, { "question":"pandas out of bounds nanosecond timestamp after offset rollforward plus adding a month offset I am confused how pandas blew out of bounds for datetime objects with these lines: ``` import pandas as pd BOMoffset = pd.tseries.offsets.MonthBegin() # here some code sets the all_treatments dataframe and the newrowix, micolix, mocolix counters all_treatments.iloc[newrowix,micolix] = BOMoffset.rollforward(all_treatments.iloc[i,micolix] + pd.tseries.offsets.DateOffset(months = x)) all_treatments.iloc[newrowix,mocolix] = BOMoffset.rollforward(all_treatments.iloc[newrowix,micolix]+ 
pd.tseries.offsets.DateOffset(months = 1)) ``` Here all_treatments.iloc[i,micolix] is a datetime set by pd.to_datetime(all_treatments['INDATUMA'], errors='coerce',format='%Y%m%d'), and INDATUMA is date information in the format 20070125. This logic seems to work on mock data (no errors, dates make sense), so at the moment I cannot reproduce while it fails in my entire data with the following error: ``` pandas.tslib.OutOfBoundsDatetime: Out of bounds nanosecond timestamp: 2262-05-01 00:00:00 ```", "response":"Since pandas represents timestamps in nanosecond resolution, the timespan that can be represented using a 64-bit integer is limited to approximately 584 years ``` In [54]: pd.Timestamp.min Out[54]: Timestamp('1677-09-22 00:12:43.145225') In [55]: pd.Timestamp.max Out[55]: Timestamp('2262-04-11 23:47:16.854775807') ``` And your value is out of this range 2262-05-01 00:00:00 and hence the outofbounds error Straight out of: https:\/\/pandas.pydata.org\/pandas-docs\/stable\/user_guide\/timeseries.html#timestamp-limitations Workaround: This will force the dates which are outside the bounds to NaT pd.to_datetime(date_col_to_force, errors = 'coerce')", "best_answers_score":0.8, "library_name":"pandas", "question_url":"https:\/\/stackoverflow.com\/questions\/32888124\/pandas-out-of-bounds-nanosecond-timestamp-after-offset-rollforward-plus-adding-a", "best_answers_votes":195, "question_length":1078, "response_length":645 }, { "question":"Slice Pandas dataframe by index values that are (not) in a list I have a pandas dataframe, df. I want to select all indices in df that are not in a list, blacklist. Now, I use list comprehension to create the desired labels to slice. ``` ix=[i for i in df.index if i not in blacklist] df_select=df.loc[ix] ``` Works fine, but may be clumsy if I need to do this often. 
Is there a better way to do this?", "response":"Use isin on the index and invert the boolean index to perform label selection: ``` In [239]: df = pd.DataFrame({'a':np.random.randn(5)}) df Out[239]: a 0 -0.548275 1 -0.411741 2 -1.187369 3 1.028967 4 -2.755030 In [240]: t = [2,4] df.loc[~df.index.isin(t)] Out[240]: a 0 -0.548275 1 -0.411741 3 1.028967 ```", "best_answers_score":0.8, "library_name":"pandas", "question_url":"https:\/\/stackoverflow.com\/questions\/29134635\/slice-pandas-dataframe-by-index-values-that-are-not-in-a-list", "best_answers_votes":189, "question_length":401, "response_length":307 }, { "question":"Pass percentiles to pandas agg function I want to pass the numpy percentile() function through pandas' agg() function as I do below with various other numpy statistics functions. Right now I have a dataframe that looks like this: ``` AGGREGATE MY_COLUMN A 10 A 12 B 5 B 9 A 84 B 22 ``` And my code looks like this: ``` grouped = dataframe.groupby('AGGREGATE') column = grouped['MY_COLUMN'] column.agg([np.sum, np.mean, np.std, np.median, np.var, np.min, np.max]) ``` The above code works, but I want to do something like ``` column.agg([np.sum, np.mean, np.percentile(50), np.percentile(95)]) ``` I.e., specify various percentiles to return from agg(). 
How should this be done?", "response":"Perhaps not super efficient, but one way would be to create a function yourself: ``` def percentile(n): def percentile_(x): return x.quantile(n) percentile_.__name__ = 'percentile_{:02.0f}'.format(n*100) return percentile_ ``` Then include this in your agg: ``` In [11]: column.agg([np.sum, np.mean, np.std, np.median, np.var, np.min, np.max, percentile(50), percentile(95)]) Out[11]: sum mean std median var amin amax percentile_50 percentile_95 AGGREGATE A 106 35.333333 42.158431 12 1777.333333 10 84 12 76.8 B 36 12.000000 8.888194 9 79.000000 5 22 12 76.8 ``` Not sure this is how it should be done though...", "best_answers_score":0.8, "library_name":"pandas", "question_url":"https:\/\/stackoverflow.com\/questions\/17578115\/pass-percentiles-to-pandas-agg-function", "best_answers_votes":165, "question_length":677, "response_length":614 }, { "question":"How to save a pandas DataFrame table as a png I constructed a pandas dataframe of results. This data frame acts as a table. There are MultiIndexed columns and each row represents a name, ie index=['name1','name2',...] when creating the DataFrame. I would like to display this table and save it as a png (or any graphic format really). At the moment, the closest I can get is converting it to html, but I would like a png. It looks like similar questions have been asked such as How to save the Pandas dataframe\/series data as a figure? However, the marked solution converts the dataframe into a line plot (not a table) and the other solution relies on PySide, which I would like to stay away from simply because I cannot pip install it on linux. I would like this code to be easily portable. I really was expecting table creation to png to be easy with python. All help is appreciated.
However, these can be removed first: ``` import matplotlib.pyplot as plt import pandas as pd from pandas.tools.plotting import table # EDIT: see deprecation warnings below ax = plt.subplot(111, frame_on=False) # no visible frame ax.xaxis.set_visible(False) # hide the x axis ax.yaxis.set_visible(False) # hide the y axis table(ax, df) # where df is your data frame plt.savefig('mytable.png') ``` The output might not be the prettiest but you can find additional arguments for the table() function here. EDIT: Here is a (admittedly quite hacky) way of simulating multi-indexes when plotting using the method above. If you have a multi-index data frame called df that looks like: ``` first second bar one 1.991802 two 0.403415 baz one -1.024986 two -0.522366 foo one 0.350297 two -0.444106 qux one -0.472536 two 0.999393 dtype: float64 ``` First reset the indexes so they become normal columns ``` df = df.reset_index() df first second 0 0 bar one 1.991802 1 bar two 0.403415 2 baz one -1.024986 3 baz two -0.522366 4 foo one 0.350297 5 foo two -0.444106 6 qux one -0.472536 7 qux two 0.999393 ``` Remove all duplicates from the higher order multi-index columns by setting them to an empty string (in my example I only have duplicate indexes in \"first\"): ``` df.ix[df.duplicated('first') , 'first'] = '' # see deprecation warnings below df first second 0 0 bar one 1.991802 1 two 0.403415 2 baz one -1.024986 3 two -0.522366 4 foo one 0.350297 5 two -0.444106 6 qux one -0.472536 7 two 0.999393 ``` Change the column names over your \"indexes\" to the empty string ``` new_cols = df.columns.values new_cols[:2] = '','' # since my index columns are the two left-most on the table df.columns = new_cols ``` Now call the table function but set all the row labels in the table to the empty string (this makes sure the actual indexes of your plot are not displayed): ``` table(ax, df, rowLabels=['']*df.shape[0], loc='center') ``` et voila: Your not-so-pretty but totally functional multi-indexed table.
EDIT: DEPRECATION WARNINGS As pointed out in the comments, the import statement for table: ``` from pandas.tools.plotting import table ``` is now deprecated in newer versions of pandas in favour of: ``` from pandas.plotting import table ``` EDIT: DEPRECATION WARNINGS 2 The ix indexer has now been fully deprecated so we should use the loc indexer instead. Replace: ``` df.ix[df.duplicated('first') , 'first'] = '' ``` with ``` df.loc[df.duplicated('first') , 'first'] = '' ```", "best_answers_score":0.8, "library_name":"pandas", "question_url":"https:\/\/stackoverflow.com\/questions\/35634238\/how-to-save-a-pandas-dataframe-table-as-a-png", "best_answers_votes":114, "question_length":879, "response_length":2643 }, { "question":"save a pandas.Series histogram plot to file In ipython Notebook, first create a pandas Series object, then by calling the instance method .hist(), the browser displays the figure. I am wondering how to save this figure to a file (I mean not by right click and save as, but the commands needed in the script).", "response":"Use the Figure.savefig() method, like so: ``` ax = s.hist() # s is an instance of Series fig = ax.get_figure() fig.savefig('\/path\/to\/figure.pdf') ``` It doesn't have to end in pdf, there are many options. Check out the documentation. Alternatively, you can use the pyplot interface and just call the savefig as a function to save the most recently created figure: ``` import matplotlib.pyplot as plt s.hist() plt.savefig('path\/to\/figure.pdf') # saves the current figure ``` Plots from multiple columns Added from a comment toto_tico made on 2018-05-11 If you are getting this error AttributeError: 'numpy.ndarray' object has no attribute 'get_figure', then it is likely that you are plotting multiple columns. In this case, ax will be an array of all the axes. 
```py ax = s.hist(columns=['colA', 'colB']) # try one of the following fig = ax[0].get_figure() fig = ax[0][0].get_figure() fig.savefig('figure.pdf') ```", "best_answers_score":0.8, "library_name":"pandas", "question_url":"https:\/\/stackoverflow.com\/questions\/18992086\/save-a-pandas-series-histogram-plot-to-file", "best_answers_votes":250, "question_length":308, "response_length":914 }, { "question":"Pandas plot() without a legend Using the pandas library in Python and using ```py .plot() ``` on a dataframe, how do I display the plot without a legend?", "response":"There is a parameter in the function corresponding to legend; by default it is True: ``` df.plot(legend=False) ``` Following is the definition of the .plot() method: df.plot(frame=None, x=None, y=None, subplots=False, sharex=True, sharey=False, use_index=True, figsize=None, grid=None, legend=True, rot=None, ax=None, style=None, title=None, xlim=None, ylim=None, logx=False, logy=False, xticks=None, yticks=None, kind='line', sort_columns=False, fontsize=None, secondary_y=False, **kwds)", "best_answers_score":0.8, "library_name":"pandas", "question_url":"https:\/\/stackoverflow.com\/questions\/20865487\/pandas-plot-without-a-legend", "best_answers_votes":212, "question_length":153, "response_length":498 }, { "question":"Replace invalid values with None in Pandas DataFrame Is there any method to replace values with None in Pandas in Python? You can use df.replace('pre', 'post') and can replace a value with another, but this can't be done if you want to replace with a None value, which if you try, you get a strange result. So here's an example: ``` df = DataFrame(['-',3,2,5,1,-5,-1,'-',9]) df.replace('-', 0) ``` which returns a successful result. But, ``` df.replace('-', None) ``` which returns the following result: ``` 0 0 - \/\/ this isn't replaced 1 3 2 2 3 5 4 1 5 -5 6 -1 7 -1 \/\/ this is changed to `-1`... 8 9 ``` Why is such a strange result returned? 
Since I want to pour this data frame into a MySQL database, I can't put NaN values into any element in my data frame and instead want to put None. Surely, you can first change '-' to NaN and then convert NaN to None, but I want to know why the dataframe acts in such a terrible way. Tested on pandas 0.12.0 dev on Python 2.7 and OS X 10.8. Python is a pre-installed version on OS X and I installed pandas by using the SciPy Superpack script, for your information.", "response":"Actually in later versions of pandas this will give a TypeError: ``` df.replace('-', None) TypeError: If \"to_replace\" and \"value\" are both None then regex must be a mapping ``` You can do it by passing either a list or a dictionary: ``` In [11]: df.replace(['-'], [None]) # or .replace('-', {0: None}) Out[11]: 0 0 None 1 3 2 2 3 5 4 1 5 -5 6 -1 7 None 8 9 ``` But I recommend using NaNs rather than None: ``` In [12]: df.replace('-', np.nan) Out[12]: 0 0 NaN 1 3 2 2 3 5 4 1 5 -5 6 -1 7 NaN 8 9 ```", "best_answers_score":0.8, "library_name":"pandas", "question_url":"https:\/\/stackoverflow.com\/questions\/17097236\/replace-invalid-values-with-none-in-pandas-dataframe", "best_answers_votes":148, "question_length":1103, "response_length":515 }, { "question":"Add numpy array as column to Pandas data frame I have a Pandas data frame object of shape (X,Y) that looks like this: ``` [[1, 2, 3], [4, 5, 6], [7, 8, 9]] ``` and a numpy sparse matrix (CSC) of shape (X,Z) that looks something like this: ``` [[0, 1, 0], [0, 0, 1], [1, 0, 0]] ``` How can I add the content from the matrix to the data frame in a new named column such that the data frame will end up like this: ``` [[1, 2, 3, [0, 1, 0]], [4, 5, 6, [0, 0, 1]], [7, 8, 9, [1, 0, 0]]] ``` Notice the data frame now has shape (X, Y+1) and rows from the matrix are elements in the data frame.", "response":"``` import numpy as np import pandas as pd import scipy.sparse as sparse df = pd.DataFrame(np.arange(1,10).reshape(3,3)) arr = 
sparse.coo_matrix(([1,1,1], ([0,1,2], [1,2,0])), shape=(3,3)) df['newcol'] = arr.toarray().tolist() print(df) ``` yields ``` 0 1 2 newcol 0 1 2 3 [0, 1, 0] 1 4 5 6 [0, 0, 1] 2 7 8 9 [1, 0, 0] ```", "best_answers_score":0.8, "library_name":"pandas", "question_url":"https:\/\/stackoverflow.com\/questions\/18646076\/add-numpy-array-as-column-to-pandas-data-frame", "best_answers_votes":107, "question_length":586, "response_length":322 }, { "question":"How to give a pandas\/matplotlib bar graph custom colors I just started using pandas\/matplotlib as a replacement for Excel to generate stacked bar charts. I am running into two issues: (1) there are only 5 colors in the default colormap, so if I have more than 5 categories then the colors repeat. How can I specify more colors? Ideally, a gradient with a start color and an end color, and a way to dynamically generate n colors in between? (2) the colors are not very visually pleasing. How do I specify a custom set of n colors? Or, a gradient would also work. An example which illustrates both of the above points is below: ``` from matplotlib import pyplot from pandas import * import random x = [{i:random.randint(1,5)} for i in range(10)] df = DataFrame(x) df.plot(kind='bar', stacked=True) ``` And the output is this:", "response":"You can specify the color option as a list directly to the plot function. 
df.plot(kind='bar', stacked=True, color=my_colors) ``` To define your own custom list, you can do a few of the following, or just look up the Matplotlib techniques for defining a color item by its RGB values, etc. You can get as complicated as you want with this. ``` my_colors = ['g', 'b']*5 # <-- this concatenates the list to itself 5 times. my_colors = [(0.5,0.4,0.5), (0.75, 0.75, 0.25)]*5 # <-- make two custom RGBs and repeat\/alternate them over all the bar elements. my_colors = [(x\/10.0, x\/20.0, 0.75) for x in range(len(df))] # <-- Quick gradient example along the Red\/Green dimensions. ``` The last example yields the following simple gradient of colors for me: I didn't play with it long enough to figure out how to force the legend to pick up the defined colors, but I'm sure you can do it. In general, though, a big piece of advice is to just use the functions from Matplotlib directly. Calling them from Pandas is OK, but I find you get better options and performance calling them straight from Matplotlib.", "best_answers_score":0.8, "library_name":"pandas", "question_url":"https:\/\/stackoverflow.com\/questions\/11927715\/how-to-give-a-pandas-matplotlib-bar-graph-custom-colors", "best_answers_votes":162, "question_length":838, "response_length":1561 }, { "question":"Floor or ceiling of a pandas series in python? I have a pandas series series. If I want to get the element-wise floor or ceiling, is there a built-in method or do I have to write the function and use apply? I ask because the data is big so I appreciate efficiency. Also this question has not been asked with respect to the Pandas package.", "response":"You can use NumPy's built-in methods to do this: np.ceil(series) or np.floor(series). 
Both return a Series object (not an array) so the index information is preserved.", "best_answers_score":0.8, "library_name":"pandas", "question_url":"https:\/\/stackoverflow.com\/questions\/27592456\/floor-or-ceiling-of-a-pandas-series-in-python", "best_answers_votes":156, "question_length":338, "response_length":167 }, { "question":"Ambiguity in Pandas Dataframe \/ Numpy Array \"axis\" definition I've been very confused about how python axes are defined, and whether they refer to a DataFrame's rows or columns. Consider the code below: ``` >>> df = pd.DataFrame([[1, 1, 1, 1], [2, 2, 2, 2], [3, 3, 3, 3]], columns=[\"col1\", \"col2\", \"col3\", \"col4\"]) >>> df col1 col2 col3 col4 0 1 1 1 1 1 2 2 2 2 2 3 3 3 3 ``` So if we call df.mean(axis=1), we'll get a mean across the rows: ``` >>> df.mean(axis=1) 0 1 1 2 2 3 ``` However, if we call df.drop(name, axis=1), we actually drop a column, not a row: ``` >>> df.drop(\"col4\", axis=1) col1 col2 col3 0 1 1 1 1 2 2 2 2 3 3 3 ``` Can someone help me understand what is meant by an \"axis\" in pandas\/numpy\/scipy? A side note, DataFrame.mean just might be defined wrong. It says in the documentation for DataFrame.mean that axis=1 is supposed to mean a mean over the columns, not the rows...", "response":"It's perhaps simplest to remember it as 0=down and 1=across. This means: Use axis=0 to apply a method down each column, or to the row labels (the index). Use axis=1 to apply a method across each row, or to the column labels. Here's a picture to show the parts of a DataFrame that each axis refers to: It's also useful to remember that Pandas follows NumPy's use of the word axis. The usage is explained in NumPy's glossary of terms: Axes are defined for arrays with more than one dimension. A 2-dimensional array has two corresponding axes: the first running vertically downwards across rows (axis 0), and the second running horizontally across columns (axis 1). 
[my emphasis] So the method in the question, df.mean(axis=1), seems to be correctly defined. It takes the mean of entries horizontally across columns, that is, along each individual row. On the other hand, df.mean(axis=0) would be an operation acting vertically downwards across rows. Similarly, df.drop(name, axis=1) refers to an action on column labels, because they intuitively go across the horizontal axis. Specifying axis=0 would make the method act on rows instead.", "best_answers_score":0.8, "library_name":"pandas", "question_url":"https:\/\/stackoverflow.com\/questions\/25773245\/ambiguity-in-pandas-dataframe-numpy-array-axis-definition", "best_answers_votes":190, "question_length":895, "response_length":1147 }, { "question":"Pandas version of rbind In R, you can combine two dataframes by sticking the columns of one onto the bottom of the columns of the other using rbind. In pandas, how do you accomplish the same thing? It seems bizarrely difficult. Using append results in a horrible mess including NaNs and things for reasons I don't understand. I'm just trying to \"rbind\" two identical frames that look like this: EDIT: I was creating the DataFrames in a stupid way, which was causing issues. Append=rbind for all intents and purposes. See answer below. 
``` 0 1 2 3 4 5 6 7 0 ADN.L 20130220 437.4 442.37 436.5000 441.9000 2775364 2013-02-20 18:47:42 1 ADM.L 20130220 1279.0 1300.00 1272.0000 1285.0000 967730 2013-02-20 18:47:42 2 AGK.L 20130220 1717.0 1749.00 1709.0000 1739.0000 834534 2013-02-20 18:47:43 3 AMEC.L 20130220 1030.0 1040.00 1024.0000 1035.0000 1972517 2013-02-20 18:47:43 4 AAL.L 20130220 1998.0 2014.50 1942.4999 1951.0000 3666033 2013-02-20 18:47:44 5 ANTO.L 20130220 1093.0 1097.00 1064.7899 1068.0000 2183931 2013-02-20 18:47:44 6 ARM.L 20130220 941.5 965.10 939.4250 951.5001 2994652 2013-02-20 18:47:45 ``` But I'm getting something horrible a la this: ``` 0 1 2 3 4 5 6 7 0 1 2 3 4 5 6 7 0 NaN NaN NaN NaN NaN NaN NaN NaN ADN.L 20130220 437.4 442.37 436.5000 441.9000 2775364 2013-02-20 18:47:42 1 NaN NaN NaN NaN NaN NaN NaN NaN ADM.L 20130220 1279.0 1300.00 1272.0000 1285.0000 967730 2013-02-20 18:47:42 2 NaN NaN NaN NaN NaN NaN NaN NaN AGK.L 20130220 1717.0 1749.00 1709.0000 1739.0000 834534 2013-02-20 18:47:43 3 NaN NaN NaN NaN NaN NaN NaN NaN AMEC.L 20130220 1030.0 1040.00 1024.0000 1035.0000 1972517 2013-02-20 18:47:43 4 NaN NaN NaN NaN NaN NaN NaN NaN AAL.L 20130220 1998.0 2014.50 1942.4999 1951.0000 3666033 2013-02-20 18:47:44 5 NaN NaN NaN NaN NaN NaN NaN NaN ANTO.L 20130220 1093.0 1097.00 1064.7899 1068.0000 2183931 2013-02-20 18:47:44 6 NaN NaN NaN NaN NaN NaN NaN NaN ARM.L 20130220 941.5 965.10 939.4250 951.5001 2994652 2013-02-20 18:47:45 0 NaN NaN NaN NaN NaN NaN NaN NaN ADN.L 20130220 437.4 442.37 436.5000 441.9000 2775364 2013-02-20 18:47:42 1 NaN NaN NaN NaN NaN NaN NaN NaN ADM.L 20130220 1279.0 1300.00 1272.0000 1285.0000 967730 2013-02-20 18:47:42 2 NaN NaN NaN NaN NaN NaN NaN NaN AGK.L 20130220 1717.0 1749.00 1709.0000 1739.0000 834534 2013-02-20 18:47:43 3 NaN NaN NaN NaN NaN NaN NaN NaN ``` And I don't understand why. I'm starting to miss R :(", "response":"pd.concat will serve the purpose of rbind in R. 
``` import pandas as pd df1 = pd.DataFrame({'col1': [1,2], 'col2':[3,4]}) df2 = pd.DataFrame({'col1': [5,6], 'col2':[7,8]}) print(df1) print(df2) print(pd.concat([df1, df2])) ``` The outcome will look like: ``` col1 col2 0 1 3 1 2 4 col1 col2 0 5 7 1 6 8 col1 col2 0 1 3 1 2 4 0 5 7 1 6 8 ``` If you read the documentation carefully enough, it will also explain other operations like cbind, etc.", "best_answers_score":0.8, "library_name":"pandas", "question_url":"https:\/\/stackoverflow.com\/questions\/14988480\/pandas-version-of-rbind", "best_answers_votes":78, "question_length":2389, "response_length":444 }, { "question":"ValueError: Length of values does not match length of index | Pandas DataFrame.unique() I am trying to get a new dataset, or change the value of the current dataset columns to their unique values. Here is an example of what I am trying to get: ```none A B ----- 0| 1 1 1| 2 5 2| 1 5 3| 7 9 4| 7 9 5| 8 9 Wanted Result Not Wanted Result A B A B ----- ----- 0| 1 1 0| 1 1 1| 2 5 1| 2 5 2| 7 9 2| 3| 8 3| 7 9 4| 5| 8 ``` I don't really care about the index but it seems to be the problem. My code so far is pretty simple, I tried two approaches, one with a new DataFrame and one without. 
```py #With New DataFrame def UniqueResults(dataframe): df = pd.DataFrame() for col in dataframe: S=pd.Series(dataframe[col].unique()) df[col]=S.values return df #Without new DataFrame def UniqueResults(dataframe): for col in dataframe: dataframe[col]=dataframe[col].unique() return dataframe ``` Both times, I get the error: ```none Length of Values does not match length of index ```", "response":"The error comes up when you are trying to assign a list or numpy array of a different length to a data frame, and it can be reproduced as follows: A data frame of four rows: ``` df = pd.DataFrame({'A': [1,2,3,4]}) ``` Now trying to assign a list\/array of two elements to it: ``` df['B'] = [3,4] # or df['B'] = np.array([3,4]) ``` Both error out: ValueError: Length of values does not match length of index Because the data frame has four rows but the list and array have only two elements. Workaround solution (use with caution): convert the list\/array to a pandas Series, and then when you do assignment, missing indices in the Series will be filled with NaN: ``` df['B'] = pd.Series([3,4]) df # A B #0 1 3.0 #1 2 4.0 #2 3 NaN # NaN because the value at index 2 and 3 doesn't exist in the Series #3 4 NaN ``` For your specific problem, if you don't care about the index or the correspondence of values between columns, you can reset the index for each column after dropping the duplicates: ``` df.apply(lambda col: col.drop_duplicates().reset_index(drop=True)) # A B #0 1 1.0 #1 2 5.0 #2 7 9.0 #3 8 NaN ```", "best_answers_score":0.8, "library_name":"pandas", "question_url":"https:\/\/stackoverflow.com\/questions\/42382263\/valueerror-length-of-values-does-not-match-length-of-index-pandas-dataframe-u", "best_answers_votes":149, "question_length":967, "response_length":1100 }, { "question":"pandas - filter dataframe by another dataframe by row elements I have a dataframe df1 which looks like: ``` c k l 0 A 1 a 1 A 2 b 2 B 2 a 3 C 2 a 4 C 2 d ``` and another called df2 like: ``` c l 0 A b 1 C a 
``` I would like to filter df1 keeping only the values that ARE NOT in df2. The values to filter are expected to be the (A,b) and (C,a) tuples. So far I tried to apply the isin method: ``` d = df[~(df['l'].isin(dfc['l']) & df['c'].isin(dfc['c']))] ``` That seems too complicated to me, and it returns: ``` c k l 2 B 2 a 4 C 2 d ``` but I'm expecting: ``` c k l 0 A 1 a 2 B 2 a 4 C 2 d ```", "response":"You can do this efficiently using isin on a multiindex constructed from the desired columns: ``` df1 = pd.DataFrame({'c': ['A', 'A', 'B', 'C', 'C'], 'k': [1, 2, 2, 2, 2], 'l': ['a', 'b', 'a', 'a', 'd']}) df2 = pd.DataFrame({'c': ['A', 'C'], 'l': ['b', 'a']}) keys = list(df2.columns.values) i1 = df1.set_index(keys).index i2 = df2.set_index(keys).index df1[~i1.isin(i2)] ``` I think this improves on @IanS's similar solution because it doesn't assume any column type (i.e. it will work with numbers as well as strings). (Above answer is an edit. Following was my initial answer) Interesting! This is something I haven't come across before... I would probably solve it by merging the two arrays, then dropping rows where df2 is defined. Here is an example, which makes use of a temporary array: ``` df1 = pd.DataFrame({'c': ['A', 'A', 'B', 'C', 'C'], 'k': [1, 2, 2, 2, 2], 'l': ['a', 'b', 'a', 'a', 'd']}) df2 = pd.DataFrame({'c': ['A', 'C'], 'l': ['b', 'a']}) # create a column marking df2 values df2['marker'] = 1 # join the two, keeping all of df1's indices joined = pd.merge(df1, df2, on=['c', 'l'], how='left') joined ``` ``` # extract desired columns where marker is NaN joined[pd.isnull(joined['marker'])][df1.columns] ``` There may be a way to do this without using the temporary array, but I can't think of one. 
As long as your data isn't huge, the above method should be a fast and sufficient answer.", "best_answers_score":0.8, "library_name":"pandas", "question_url":"https:\/\/stackoverflow.com\/questions\/33282119\/pandas-filter-dataframe-by-another-dataframe-by-row-elements", "best_answers_votes":129, "question_length":586, "response_length":1408 }, { "question":"Filtering multiple items in a multi-index Pandas dataframe I have the following table: ```none Area NSRCODE PBL_AWI CM BONS 44705.492941 BTNN 253854.591990 FONG 41625.590370 FONS 16814.159680 Lake 57124.819333 River 1603.906642 SONS 583958.444751 STNN 45603.837177 clearcut 106139.013930 disturbed 127719.865675 lowland 118795.578059 upland 2701289.270193 LBH BFNN 289207.169650 BONS 9140084.716743 BTNI 33713.160390 BTNN 19748004.789040 FONG 1687122.469691 FONS 5169959.591270 FTNI 317251.976160 FTNN 6536472.869395 Lake 258046.508310 River 44262.807900 SONS 4379097.677405 burn regen 744773.210860 clearcut 54066.756790 disturbed 597561.471686 lowland 12591619.141842 upland 23843453.638117 ``` Note: Both NSRCODE and PBL_AWI are indices. How do I search for values in column PBL_AWI? For example, I want to keep the values ['Lake', 'River', 'Upland'].", "response":"You can use get_level_values in conjunction with Boolean slicing. 
``` In [50]: print df[np.in1d(df.index.get_level_values(1), ['Lake', 'River', 'Upland'])] Area NSRCODE PBL_AWI CM Lake 57124.819333 River 1603.906642 LBH Lake 258046.508310 River 44262.807900 ``` The same idea can be expressed in many different ways, such as df[df.index.get_level_values('PBL_AWI').isin(['Lake', 'River', 'Upland'])] Note that you have 'upland' in your data instead of 'Upland'.", "best_answers_score":0.8, "library_name":"pandas", "question_url":"https:\/\/stackoverflow.com\/questions\/25224545\/filtering-multiple-items-in-a-multi-index-pandas-dataframe", "best_answers_votes":151, "question_length":853, "response_length":456 }, { "question":"How to create a scatter plot by category I am trying to make a simple scatter plot in pyplot using a Pandas DataFrame object, but want an efficient way of plotting two variables with the symbols dictated by a third column (key). I have tried various ways using df.groupby, but not successfully. A sample df script is below. This colours the markers according to 'key1', but I'd like to see a legend with 'key1' categories. Am I close? Thanks. ``` import numpy as np import pandas as pd import matplotlib.pyplot as plt df = pd.DataFrame(np.random.normal(10,1,30).reshape(10,3), index = pd.date_range('2010-01-01', freq = 'M', periods = 10), columns = ('one', 'two', 'three')) df['key1'] = (4,4,4,6,6,6,8,8,8,8) fig1 = plt.figure(1) ax1 = fig1.add_subplot(111) ax1.scatter(df['one'], df['two'], marker = 'o', c = df['key1'], alpha = 0.8) plt.show() ```", "response":"You can use scatter for this, but that requires having numerical values for your key1, and you won't have a legend, as you noticed. It's better to just use plot for discrete categories like this. 
For example: ``` import matplotlib.pyplot as plt import numpy as np import pandas as pd np.random.seed(1974) # Generate Data num = 20 x, y = np.random.random((2, num)) labels = np.random.choice(['a', 'b', 'c'], num) df = pd.DataFrame(dict(x=x, y=y, label=labels)) groups = df.groupby('label') # Plot fig, ax = plt.subplots() ax.margins(0.05) # Optional, just adds 5% padding to the autoscaling for name, group in groups: ax.plot(group.x, group.y, marker='o', linestyle='', ms=12, label=name) ax.legend() plt.show() ``` If you'd like things to look like the default pandas style, then just update the rcParams with the pandas stylesheet and use its color generator. (I'm also tweaking the legend slightly): ``` import matplotlib.pyplot as plt import numpy as np import pandas as pd np.random.seed(1974) # Generate Data num = 20 x, y = np.random.random((2, num)) labels = np.random.choice(['a', 'b', 'c'], num) df = pd.DataFrame(dict(x=x, y=y, label=labels)) groups = df.groupby('label') # Plot plt.rcParams.update(pd.tools.plotting.mpl_stylesheet) colors = pd.tools.plotting._get_standard_colors(len(groups), color_type='random') fig, ax = plt.subplots() ax.set_color_cycle(colors) ax.margins(0.05) for name, group in groups: ax.plot(group.x, group.y, marker='o', linestyle='', ms=12, label=name) ax.legend(numpoints=1, loc='upper left') plt.show() ```", "best_answers_score":0.8, "library_name":"pandas", "question_url":"https:\/\/stackoverflow.com\/questions\/21654635\/how-to-create-a-scatter-plot-by-category", "best_answers_votes":143, "question_length":975, "response_length":1547 }, { "question":"Pandas: Drop consecutive duplicates What's the most efficient way to drop only consecutive duplicates in pandas? 
drop_duplicates gives this: ``` In [3]: a = pandas.Series([1,2,2,3,2], index=[1,2,3,4,5]) In [4]: a.drop_duplicates() Out[4]: 1 1 2 2 4 3 dtype: int64 ``` But I want this: ``` In [4]: a.something() Out[4]: 1 1 2 2 4 3 5 2 dtype: int64 ```", "response":"Use shift: ``` a.loc[a.shift(-1) != a] Out[3]: 1 1 3 2 4 3 5 2 dtype: int64 ``` The above uses boolean criteria: we compare the dataframe against the dataframe shifted by -1 rows to create the mask. Another method is to use diff: ``` In [82]: a.loc[a.diff() != 0] Out[82]: 1 1 2 2 4 3 5 2 dtype: int64 ``` But this is slower than the original method if you have a large number of rows. Update Thanks to Bjarke Ebert for pointing out a subtle error, I should actually use shift(1) or just shift() as the default is a period of 1; this returns the first consecutive value: ``` In [87]: a.loc[a.shift() != a] Out[87]: 1 1 2 2 4 3 5 2 dtype: int64 ``` Note the difference in index values, thanks @BjarkeEbert!", "best_answers_score":0.8, "library_name":"pandas", "question_url":"https:\/\/stackoverflow.com\/questions\/19463985\/pandas-drop-consecutive-duplicates", "best_answers_votes":153, "question_length":338, "response_length":708 }, { "question":"Pandas counting and summing specific conditions Are there single functions in pandas to perform the equivalents of SUMIF, which sums over a specific condition and COUNTIF, which counts values of specific conditions from Excel? I know that there are many multiple-step functions that can be used. For example, for sumif I can use (df.map(lambda x: condition) or df.size()) then use .sum(), and for countif, I can use (groupby functions and look for my answer or use a filter and the .count()). Is there a simple one-step process to do these functions where you enter the condition and the dataframe and you get the sum or counted results?", "response":"You can first make a conditional selection, and sum up the results of the selection using the sum function. 
``` >> df = pd.DataFrame({'a': [1, 2, 3]}) >> df[df.a > 1].sum() a 5 dtype: int64 ``` Having more than one condition: ``` >> df[(df.a > 1) & (df.a < 3)].sum() a 2 dtype: int64 ``` If you want to do COUNTIF, just replace sum() with count()", "best_answers_score":0.8, "library_name":"pandas", "question_url":"https:\/\/stackoverflow.com\/questions\/20995196\/pandas-counting-and-summing-specific-conditions", "best_answers_votes":139, "question_length":637, "response_length":346 }, { "question":"Groupby and sum only one column So I have a dataframe, df1, that looks like the following: ``` A B C 1 foo 12 California 2 foo 22 California 3 bar 8 Rhode Island 4 bar 32 Rhode Island 5 baz 15 Ohio 6 baz 26 Ohio ``` I want to group by column A and then sum column B while keeping the value in column C. Something like this: ``` A B C 1 foo 34 California 2 bar 40 Rhode Island 3 baz 41 Ohio ``` The issue is, when I say ``` df.groupby('A').sum() ``` column C gets removed, returning ``` B A bar 40 baz 41 foo 34 ``` How can I get around this and keep column C when I group and sum?", "response":"The only way to do this would be to include C in your groupby (the groupby function can accept a list). Give this a try: ``` df.groupby(['A','C'])['B'].sum() ``` One other thing to note, if you need to work with df after the aggregation you can also use the as_index=False option to return a dataframe object. This one gave me problems when I was first working with Pandas. Example: ``` df.groupby(['A','C'], as_index=False)['B'].sum() ```", "best_answers_score":0.8, "library_name":"pandas", "question_url":"https:\/\/stackoverflow.com\/questions\/38985053\/groupby-and-sum-only-one-column", "best_answers_votes":149, "question_length":580, "response_length":439 }, { "question":"Concatenate rows of two dataframes in pandas I need to concatenate two dataframes df_a and df_b that have equal number of rows (nRow) horizontally without any consideration of keys. 
This function is similar to cbind in the R programming language. The number of columns in each dataframe may be different. The resultant dataframe will have the same number of rows nRow and number of columns equal to the sum of number of columns in both the dataframes. In other words, this is a blind columnar concatenation of two dataframes. ```py import pandas as pd dict_data = {'Treatment': ['C', 'C', 'C'], 'Biorep': ['A', 'A', 'A'], 'Techrep': [1, 1, 1], 'AAseq': ['ELVISLIVES', 'ELVISLIVES', 'ELVISLIVES'], 'mz':[500.0, 500.5, 501.0]} df_a = pd.DataFrame(dict_data) dict_data = {'Treatment1': ['C', 'C', 'C'], 'Biorep1': ['A', 'A', 'A'], 'Techrep1': [1, 1, 1], 'AAseq1': ['ELVISLIVES', 'ELVISLIVES', 'ELVISLIVES'], 'inte1':[1100.0, 1050.0, 1010.0]} df_b = pd.DataFrame(dict_data) ```", "response":"call concat and pass param axis=1 to concatenate column-wise: ``` In [5]: pd.concat([df_a,df_b], axis=1) Out[5]: AAseq Biorep Techrep Treatment mz AAseq1 Biorep1 Techrep1 \\ 0 ELVISLIVES A 1 C 500.0 ELVISLIVES A 1 1 ELVISLIVES A 1 C 500.5 ELVISLIVES A 1 2 ELVISLIVES A 1 C 501.0 ELVISLIVES A 1 Treatment1 inte1 0 C 1100 1 C 1050 2 C 1010 ``` There is a useful guide to the various methods of merging, joining and concatenating online. 
For example, as you have no clashing columns you can merge and use the indices as they have the same number of rows: ``` In [6]: df_a.merge(df_b, left_index=True, right_index=True) Out[6]: AAseq Biorep Techrep Treatment mz AAseq1 Biorep1 Techrep1 \\ 0 ELVISLIVES A 1 C 500.0 ELVISLIVES A 1 1 ELVISLIVES A 1 C 500.5 ELVISLIVES A 1 2 ELVISLIVES A 1 C 501.0 ELVISLIVES A 1 Treatment1 inte1 0 C 1100 1 C 1050 2 C 1010 ``` And for the same reasons as above a simple join works too: ``` In [7]: df_a.join(df_b) Out[7]: AAseq Biorep Techrep Treatment mz AAseq1 Biorep1 Techrep1 \\ 0 ELVISLIVES A 1 C 500.0 ELVISLIVES A 1 1 ELVISLIVES A 1 C 500.5 ELVISLIVES A 1 2 ELVISLIVES A 1 C 501.0 ELVISLIVES A 1 Treatment1 inte1 0 C 1100 1 C 1050 2 C 1010 ```", "best_answers_score":0.8, "library_name":"pandas", "question_url":"https:\/\/stackoverflow.com\/questions\/28135436\/concatenate-rows-of-two-dataframes-in-pandas", "best_answers_votes":155, "question_length":973, "response_length":1173 }, { "question":"Pandas groupby month and year I have the following dataframe: ```none Date abc xyz 01-Jun-13 100 200 03-Jun-13 -20 50 15-Aug-13 40 -5 20-Jan-14 25 15 21-Feb-14 60 80 ``` I need to group the data by year and month. I.e., Group by Jan 2013, Feb 2013, Mar 2013, etc... I will be using the newly grouped data to create a plot showing abc vs xyz per year\/month. I've tried various combinations of groupby and sum, but I just can't seem to get anything to work. How can I do it?", "response":"You can use either resample or Grouper (which resamples under the hood). First make sure that the datetime column is actually of datetimes (hit it with pd.to_datetime). 
It's easier if it's a DatetimeIndex: ```none In [11]: df1 Out[11]: abc xyz Date 2013-06-01 100 200 2013-06-03 -20 50 2013-08-15 40 -5 2014-01-20 25 15 2014-02-21 60 80 In [12]: g = df1.groupby(pd.Grouper(freq=\"M\")) # DataFrameGroupBy (grouped by Month) In [13]: g.sum() Out[13]: abc xyz Date 2013-06-30 80 250 2013-07-31 NaN NaN 2013-08-31 40 -5 2013-09-30 NaN NaN 2013-10-31 NaN NaN 2013-11-30 NaN NaN 2013-12-31 NaN NaN 2014-01-31 25 15 2014-02-28 60 80 In [14]: df1.resample(\"M\", how='sum') # the same Out[14]: abc xyz Date 2013-06-30 40 125 2013-07-31 NaN NaN 2013-08-31 40 -5 2013-09-30 NaN NaN 2013-10-31 NaN NaN 2013-11-30 NaN NaN 2013-12-31 NaN NaN 2014-01-31 25 15 2014-02-28 60 80 ``` Note: Previously pd.Grouper(freq=\"M\") was written as pd.TimeGrouper(\"M\"). The latter is now deprecated since 0.21. I had thought the following would work, but it doesn't (due to as_index not being respected? I'm not sure.). I'm including this for interest's sake. If it's a column (it has to be a datetime64 column! as I say, hit it with to_datetime), you can use the PeriodIndex: ```none In [21]: df Out[21]: Date abc xyz 0 2013-06-01 100 200 1 2013-06-03 -20 50 2 2013-08-15 40 -5 3 2014-01-20 25 15 4 2014-02-21 60 80 In [22]: pd.DatetimeIndex(df.Date).to_period(\"M\") # old way Out[22]: [2013-06, ..., 2014-02] Length: 5, Freq: M In [23]: per = df.Date.dt.to_period(\"M\") # new way to get the same In [24]: g = df.groupby(per) In [25]: g.sum() # dang not quite what we want (doesn't fill in the gaps) Out[25]: abc xyz 2013-06 80 250 2013-08 40 -5 2014-01 25 15 2014-02 60 80 ``` To get the desired result we have to reindex...", "best_answers_score":0.8, "library_name":"pandas", "question_url":"https:\/\/stackoverflow.com\/questions\/26646191\/pandas-groupby-month-and-year", "best_answers_votes":168, "question_length":472, "response_length":1793 }, { "question":"Pandas rename column by position? 
[duplicate] This question already has answers here: Changing multiple column names but not all of them - Pandas Python (6 answers) Closed 8 years ago. I was just wondering if I can rename column names by their positions. I know how to rename them by their actual names using: df.rename(columns = {}) How do I do it if I do not know the column names and know only their positions?", "response":"try this ``` df.rename(columns={ df.columns[1]: \"your value\" }, inplace = True) ```", "best_answers_score":0.8, "library_name":"pandas", "question_url":"https:\/\/stackoverflow.com\/questions\/43759921\/pandas-rename-column-by-position", "best_answers_votes":246, "question_length":413, "response_length":83 }, { "question":"Python Pandas replicate rows in dataframe If the dataframe looks like: ``` Store,Dept,Date,Weekly_Sales,IsHoliday 1,1,2010-02-05,24924.5,FALSE 1,1,2010-02-12,46039.49,TRUE 1,1,2010-02-19,41595.55,FALSE 1,1,2010-02-26,19403.54,FALSE 1,1,2010-03-05,21827.9,FALSE 1,1,2010-03-12,21043.39,FALSE 1,1,2010-03-19,22136.64,FALSE 1,1,2010-03-26,26229.21,FALSE 1,1,2010-04-02,57258.43,FALSE ``` And I wanna duplicate rows with IsHoliday equal to TRUE, I can do: ``` is_hol = df['IsHoliday'] == True df_try = df[is_hol] df=df.append(df_try*10) ``` But is there a better way to do this as I need to duplicate holiday rows 5 times, and I have to append 5 times if using the above way.", "response":"You can put df_try inside a list and then do what you have in mind: ``` >>> df.append([df_try]*5,ignore_index=True) Store Dept Date Weekly_Sales IsHoliday 0 1 1 2010-02-05 24924.50 False 1 1 1 2010-02-12 46039.49 True 2 1 1 2010-02-19 41595.55 False 3 1 1 2010-02-26 19403.54 False 4 1 1 2010-03-05 21827.90 False 5 1 1 2010-03-12 21043.39 False 6 1 1 2010-03-19 22136.64 False 7 1 1 2010-03-26 26229.21 False 8 1 1 2010-04-02 57258.43 False 9 1 1 2010-02-12 46039.49 True 10 1 1 2010-02-12 46039.49 True 11 1 1 2010-02-12 46039.49 True 12 1 1 2010-02-12 46039.49 True 13 1 1 2010-02-12 
46039.49 True ```", "best_answers_score":0.8, "library_name":"pandas", "question_url":"https:\/\/stackoverflow.com\/questions\/24029659\/python-pandas-replicate-rows-in-dataframe", "best_answers_votes":126, "question_length":671, "response_length":604 }, { "question":"Python Pandas read_csv skip rows but keep header I'm having trouble figuring out how to skip n rows in a csv file but keep the header, which is row 1. What I want to do is iterate but keep the header from the first row. skiprows makes the header the first row after the skipped rows. What is the best way of doing this? ``` data = pd.read_csv('test.csv', sep='|', header=0, skiprows=10, nrows=10) ```", "response":"You can pass a list of row numbers to skiprows instead of an integer. By giving the function the integer 10, you're just skipping the first 10 lines. To keep the first row 0 (as the header) and then skip everything else up to row 10, you can write: ``` pd.read_csv('test.csv', sep='|', skiprows=range(1, 10)) ``` Other ways to skip rows using read_csv The two main ways to control which rows read_csv uses are the header or skiprows parameters. Suppose we have the following CSV file with one column: ``` a b c d e f ``` In each of the examples below, this file is f = io.StringIO(\"\\n\".join(\"abcdef\")). 
Read all lines as values (no header, defaults to integers) ``` >>> pd.read_csv(f, header=None) 0 0 a 1 b 2 c 3 d 4 e 5 f ``` Use a particular row as the header (skip all lines before that): ``` >>> pd.read_csv(f, header=3) d 0 e 1 f ``` Use multiple rows as the header, creating a MultiIndex (skip all lines before the last specified header line): ``` >>> pd.read_csv(f, header=[2, 4]) c e 0 f ``` Skip N rows from the start of the file (the first row that's not skipped is the header): ``` >>> pd.read_csv(f, skiprows=3) d 0 e 1 f ``` Skip one or more rows by giving the row indices (the first row that's not skipped is the header): ``` >>> pd.read_csv(f, skiprows=[2, 4]) a 0 b 1 d 2 f ```", "best_answers_score":0.8, "library_name":"pandas", "question_url":"https:\/\/stackoverflow.com\/questions\/27325652\/python-pandas-read-csv-skip-rows-but-keep-header", "best_answers_votes":176, "question_length":403, "response_length":1294 }, { "question":"Elegant way to create empty pandas DataFrame with NaN of type float I want to create a Pandas DataFrame filled with NaNs. During my research I found an answer: ``` import pandas as pd df = pd.DataFrame(index=range(0,4),columns=['A']) ``` This code results in a DataFrame filled with NaNs of type \"object\". So they cannot be used later on for example with the interpolate() method. Therefore, I created the DataFrame with this complicated code (inspired by this answer): ``` import pandas as pd import numpy as np dummyarray = np.empty((4,1)) dummyarray[:] = np.nan df = pd.DataFrame(dummyarray) ``` This results in a DataFrame filled with NaN of type \"float\", so it can be used later on with interpolate(). Is there a more elegant way to create the same result?", "response":"Simply pass the desired value as first argument, like 0, math.inf or, here, np.nan. 
The constructor then initializes and fills the value array to the size specified by arguments index and columns: ``` >>> import numpy as np >>> import pandas as pd >>> df = pd.DataFrame(np.nan, index=[0, 1, 2, 3], columns=['A', 'B']) >>> df A B 0 NaN NaN 1 NaN NaN 2 NaN NaN 3 NaN NaN >>> df.dtypes A float64 B float64 dtype: object ```", "best_answers_score":0.8, "library_name":"pandas", "question_url":"https:\/\/stackoverflow.com\/questions\/30053329\/elegant-way-to-create-empty-pandas-dataframe-with-nan-of-type-float", "best_answers_votes":149, "question_length":761, "response_length":420 }, { "question":"How do I convert a Pandas dataframe to a PyTorch tensor? How do I train a simple neural network with PyTorch on a pandas dataframe df? The column df[\"Target\"] is the target (e.g. labels) of the network. This doesn't work: ``` import pandas as pd import torch.utils.data as data_utils target = pd.DataFrame(df['Target']) train = data_utils.TensorDataset(df, target) train_loader = data_utils.DataLoader(train, batch_size=10, shuffle=True) ```", "response":"I'm referring to the question in the title as you haven't really specified anything else in the text, so just converting the DataFrame into a PyTorch tensor. Without information about your data, I'm just taking float values as example targets here. Convert Pandas dataframe to PyTorch tensor? ```py import pandas as pd import torch import random # creating dummy targets (float values) targets_data = [random.random() for i in range(10)] # creating DataFrame from targets_data targets_df = pd.DataFrame(data=targets_data) targets_df.columns = ['targets'] # creating tensor from targets_df torch_tensor = torch.tensor(targets_df['targets'].values) # printing out result print(torch_tensor) ``` Output: ``` tensor([ 0.5827, 0.5881, 0.1543, 0.6815, 0.9400, 0.8683, 0.4289, 0.5940, 0.6438, 0.7514], dtype=torch.float64) ``` Tested with Pytorch 0.4.0. I hope this helps, if you have any further questions - just ask. 
:)", "best_answers_score":0.8, "library_name":"pandas", "question_url":"https:\/\/stackoverflow.com\/questions\/50307707\/how-do-i-convert-a-pandas-dataframe-to-a-pytorch-tensor", "best_answers_votes":102, "question_length":441, "response_length":914 }, { "question":"Python Pandas: Convert Rows as Column headers [duplicate] This question already has answers here: How can I pivot a dataframe? [closed] (5 answers) Closed 6 years ago. I have the following dataframe: ``` Year Country medal no of medals 1896 Afghanistan Gold 5 1896 Afghanistan Silver 4 1896 Afghanistan Bronze 3 1896 Algeria Gold 1 1896 Algeria Silver 2 1896 Algeria Bronze 3 ``` I want it this way: ``` Year Country Gold Silver Bronze 1896 Afghanistan 5 4 3 1896 Algeria 1 2 3 ``` Stack\/Unstack don't seem to work.", "response":"You're looking for pivot_table: ``` In [11]: medals = df.pivot_table('no of medals', ['Year', 'Country'], 'medal') In [12]: medals Out[12]: medal Bronze Gold Silver Year Country 1896 Afghanistan 3 5 4 Algeria 3 1 2 ``` and if you want to reorder the columns: ``` In [12]: medals.reindex_axis(['Gold', 'Silver', 'Bronze'], axis=1) Out[12]: medal Gold Silver Bronze Year Country 1896 Afghanistan 5 4 3 Algeria 1 2 3 ```", "best_answers_score":0.8, "library_name":"pandas", "question_url":"https:\/\/stackoverflow.com\/questions\/17298313\/python-pandas-convert-rows-as-column-headers", "best_answers_votes":149, "question_length":514, "response_length":417 }, { "question":"Set value for particular cell in pandas DataFrame with iloc I have a question similar to this and this. The difference is that I have to select row by position, as I do not know the index. I want to do something like df.iloc[0, 'COL_NAME'] = x, but iloc does not allow this kind of access. If I do df.iloc[0]['COL_NAME'] = x the warning about chained indexing appears.", "response":"For mixed position and index, use .ix, BUT you need to make sure that your index is not an integer index, otherwise it will cause confusion. 
``` df.ix[0, 'COL_NAME'] = x ``` Update: Alternatively, try ``` df.iloc[0, df.columns.get_loc('COL_NAME')] = x ``` Example: ``` import pandas as pd import numpy as np # your data # ======================== np.random.seed(0) df = pd.DataFrame(np.random.randn(10, 2), columns=['col1', 'col2'], index=np.random.randint(1,100,10)).sort_index() print(df) col1 col2 10 1.7641 0.4002 24 0.1440 1.4543 29 0.3131 -0.8541 32 0.9501 -0.1514 33 1.8676 -0.9773 36 0.7610 0.1217 56 1.4941 -0.2052 58 0.9787 2.2409 75 -0.1032 0.4106 76 0.4439 0.3337 # .iloc with get_loc # =================================== df.iloc[0, df.columns.get_loc('col2')] = 100 df col1 col2 10 1.7641 100.0000 24 0.1440 1.4543 29 0.3131 -0.8541 32 0.9501 -0.1514 33 1.8676 -0.9773 36 0.7610 0.1217 56 1.4941 -0.2052 58 0.9787 2.2409 75 -0.1032 0.4106 76 0.4439 0.3337 ```", "best_answers_score":0.8, "library_name":"pandas", "question_url":"https:\/\/stackoverflow.com\/questions\/31569384\/set-value-for-particular-cell-in-pandas-dataframe-with-iloc", "best_answers_votes":180, "question_length":368, "response_length":968 }, { "question":"How to group pandas DataFrame entries by date in a non-unique column A Pandas DataFrame contains column named \"date\" that contains non-unique datetime values. I can group the lines in this frame using: ``` data.groupby(data['date']) ``` However, this splits the data by the datetime values. I would like to group these data by the year stored in the \"date\" column. This page shows how to group by year in cases where the time stamp is used as an index, which is not true in my case. How do I achieve this grouping?", "response":"I'm using pandas 0.16.2. This has better performance on my large dataset: ``` data.groupby(data.date.dt.year) ``` Using the dt option and playing around with weekofyear, dayofweek etc. 
becomes far easier.", "best_answers_score":0.8, "library_name":"pandas", "question_url":"https:\/\/stackoverflow.com\/questions\/11391969\/how-to-group-pandas-dataframe-entries-by-date-in-a-non-unique-column", "best_answers_votes":141, "question_length":514, "response_length":204 }, { "question":"Is it possible to insert a row at an arbitrary position in a dataframe using pandas? I have a DataFrame object similar to this one: ``` onset length 1 2.215 1.3 2 23.107 1.3 3 41.815 1.3 4 61.606 1.3 ... ``` What I would like to do is insert a row at a position specified by some index value and update the following indices accordingly. E.g.: ``` onset length 1 2.215 1.3 2 23.107 1.3 3 30.000 1.3 # new row 4 41.815 1.3 5 61.606 1.3 ... ``` What would be the best way to do this?", "response":"You could slice and use concat to get what you want. ``` from pandas import DataFrame, concat line = DataFrame({\"onset\": 30.0, \"length\": 1.3}, index=[3]) df2 = concat([df.iloc[:2], line, df.iloc[2:]]).reset_index(drop=True) ``` This will produce the dataframe in your example output. As far as I'm aware, concat is the best method to achieve an insert type operation in pandas, but admittedly I'm by no means a pandas expert.", "best_answers_score":0.8, "library_name":"pandas", "question_url":"https:\/\/stackoverflow.com\/questions\/15888648\/is-it-possible-to-insert-a-row-at-an-arbitrary-position-in-a-dataframe-using-pan", "best_answers_votes":106, "question_length":481, "response_length":425 }, { "question":"Remove duplicates from dataframe, based on two columns A,B, keeping row with max value in another column C I have a pandas dataframe which contains duplicates values according to two columns (A and B): ``` A B C 1 2 1 1 2 4 2 7 1 3 4 0 3 4 8 ``` I want to remove duplicates keeping the row with max value in column C. This would lead to: ``` A B C 1 2 4 2 7 1 3 4 8 ``` I cannot figure out how to do that. 
Should I use drop_duplicates(), or something else?", "response":"You can do it using group by: ``` c_maxes = df.groupby(['A', 'B']).C.transform(max) df = df.loc[df.C == c_maxes] ``` c_maxes is a Series of the maximum values of C in each group but which is of the same length and with the same index as df. If you haven't used .transform then printing c_maxes might be a good idea to see how it works. Another approach using drop_duplicates would be ``` df.sort('C').drop_duplicates(subset=['A', 'B'], take_last=True) ``` Not sure which is more efficient but I guess the first approach as it doesn't involve sorting. EDIT: From pandas 0.18 up the second solution would be ``` df.sort_values('C').drop_duplicates(subset=['A', 'B'], keep='last') ``` or, alternatively, ``` df.sort_values('C', ascending=False).drop_duplicates(subset=['A', 'B']) ``` In any case, the groupby solution seems to be significantly better performing: ``` %timeit -n 10 df.loc[df.groupby(['A', 'B']).C.transform(max) == df.C] 10 loops, best of 3: 25.7 ms per loop %timeit -n 10 df.sort_values('C').drop_duplicates(subset=['A', 'B'], keep='last') 10 loops, best of 3: 101 ms per loop ```", "best_answers_score":0.8, "library_name":"pandas", "question_url":"https:\/\/stackoverflow.com\/questions\/32093829\/remove-duplicates-from-dataframe-based-on-two-columns-a-b-keeping-row-with-max", "best_answers_votes":137, "question_length":453, "response_length":1081 }, { "question":"in Ipython notebook \/ Jupyter, Pandas is not displaying the graph I try to plot I am trying to plot some data using pandas in Ipython Notebook, and while it gives me the object, it doesn't actually plot the graph itself. So it looks like this: ``` In [7]: pledge.Amount.plot() Out[7]: ``` The graph should follow after that, but it simply doesn't appear. I have imported matplotlib, so that's not the problem. 
Is there any other module I need to import?", "response":"Note that --pylab is deprecated and has been removed from newer builds of IPython, The recommended way to enable inline plotting in the IPython Notebook is now to run: ``` %matplotlib inline import matplotlib.pyplot as plt ``` See this post from the ipython-dev mailing list for more details.", "best_answers_score":0.8, "library_name":"pandas", "question_url":"https:\/\/stackoverflow.com\/questions\/10511024\/in-ipython-notebook-jupyter-pandas-is-not-displaying-the-graph-i-try-to-plot", "best_answers_votes":185, "question_length":454, "response_length":292 }, { "question":"How to append rows in a pandas dataframe in a for loop? [duplicate] This question already has answers here: Create a Pandas Dataframe by appending one row at a time [duplicate] (32 answers) Closed 2 years ago. I have the following for loop: ``` for i in links: data = urllib2.urlopen(str(i)).read() data = json.loads(data) data = pd.DataFrame(data.items()) data = data.transpose() data.columns = data.iloc[0] data = data.drop(data.index[[0]]) ``` Each dataframe so created has most columns in common with the others but not all of them. Moreover, they all have just one row. What I need to to is to add to the dataframe all the distinct columns and each row from each dataframe produced by the for loop I tried pandas concatenate or similar but nothing seemed to work. Any idea? 
Thanks.", "response":"Suppose your data looks like this: ``` import pandas as pd import numpy as np np.random.seed(2015) df = pd.DataFrame([]) for i in range(5): data = dict(zip(np.random.choice(10, replace=False, size=5), np.random.randint(10, size=5))) data = pd.DataFrame(data.items()) data = data.transpose() data.columns = data.iloc[0] data = data.drop(data.index[[0]]) df = df.append(data) print('{}\\n'.format(df)) # 0 0 1 2 3 4 5 6 7 8 9 # 1 6 NaN NaN 8 5 NaN NaN 7 0 NaN # 1 NaN 9 6 NaN 2 NaN 1 NaN NaN 2 # 1 NaN 2 2 1 2 NaN 1 NaN NaN NaN # 1 6 NaN 6 NaN 4 4 0 NaN NaN NaN # 1 NaN 9 NaN 9 NaN 7 1 9 NaN NaN ``` Then it could be replaced with ``` np.random.seed(2015) data = [] for i in range(5): data.append(dict(zip(np.random.choice(10, replace=False, size=5), np.random.randint(10, size=5)))) df = pd.DataFrame(data) print(df) ``` In other words, do not form a new DataFrame for each row. Instead, collect all the data in a list of dicts, and then call df = pd.DataFrame(data) once at the end, outside the loop. Each call to df.append requires allocating space for a new DataFrame with one extra row, copying all the data from the original DataFrame into the new DataFrame, and then copying data into the new row. All that allocation and copying makes calling df.append in a loop very inefficient. The time cost of copying grows quadratically with the number of rows. 
Not only is the call-DataFrame-once code easier to write, its performance will be much better -- the time cost of copying grows linearly with the number of rows.", "best_answers_score":0.8, "library_name":"pandas", "question_url":"https:\/\/stackoverflow.com\/questions\/31674557\/how-to-append-rows-in-a-pandas-dataframe-in-a-for-loop", "best_answers_votes":115, "question_length":786, "response_length":1517 }, { "question":"Create a day-of-week column Create a day-of-week column in a Pandas dataframe using Python I\u2019d like to read a csv file into a pandas dataframe, parse a column of dates from string format to a date object, and then generate a new column that indicates the day of the week. This is what I\u2019m trying: What I\u2019d like to do is something like: ``` import pandas as pd import csv df = pd.read_csv('data.csv', parse_dates=['date'])) df['day-of-week'] = df['date'].weekday() AttributeError: 'Series' object has no attribute 'weekday' ```", "response":"Pandas 0.23+ Use pandas.Series.dt.day_name(), since pandas.Timestamp.weekday_name has been deprecated: ``` import pandas as pd df = pd.DataFrame({'my_dates':['2015-01-01','2015-01-02','2015-01-03'],'myvals':[1,2,3]}) df['my_dates'] = pd.to_datetime(df['my_dates']) df['day_of_week'] = df['my_dates'].dt.day_name() ``` Output: ``` my_dates myvals day_of_week 0 2015-01-01 1 Thursday 1 2015-01-02 2 Friday 2 2015-01-03 3 Saturday ``` Pandas 0.18.1+ As user jezrael points out below, dt.weekday_name was added in version 0.18.1 Pandas Docs ``` import pandas as pd df = pd.DataFrame({'my_dates':['2015-01-01','2015-01-02','2015-01-03'],'myvals':[1,2,3]}) df['my_dates'] = pd.to_datetime(df['my_dates']) df['day_of_week'] = df['my_dates'].dt.weekday_name ``` Output: ``` my_dates myvals day_of_week 0 2015-01-01 1 Thursday 1 2015-01-02 2 Friday 2 2015-01-03 3 Saturday ``` Original Answer: Use this: http:\/\/pandas.pydata.org\/pandas-docs\/stable\/generated\/pandas.Series.dt.dayofweek.html See this: Get 
weekday\/day-of-week for Datetime column of DataFrame If you want a string instead of an integer do something like this: ``` import pandas as pd df = pd.DataFrame({'my_dates':['2015-01-01','2015-01-02','2015-01-03'],'myvals':[1,2,3]}) df['my_dates'] = pd.to_datetime(df['my_dates']) df['day_of_week'] = df['my_dates'].dt.dayofweek days = {0:'Mon',1:'Tues',2:'Weds',3:'Thurs',4:'Fri',5:'Sat',6:'Sun'} df['day_of_week'] = df['day_of_week'].apply(lambda x: days[x]) ``` Output: ``` my_dates myvals day_of_week 0 2015-01-01 1 Thurs 1 2015-01-02 2 Fri 2 2015-01-03 3 Sat ```", "best_answers_score":0.8, "library_name":"pandas", "question_url":"https:\/\/stackoverflow.com\/questions\/30222533\/create-a-day-of-week-column", "best_answers_votes":195, "question_length":526, "response_length":1565 }, { "question":"How do I fill a column with one value in Pandas? I have a column with consecutive digits in a Pandas DataFrame. ``` A 1 2 3 4 ``` I would like to change all those values to a simple string, say \"foo\", resulting in ``` A foo foo foo foo ```", "response":"Just select the column and assign like normal: ``` In [194]: df['A'] = 'foo' df Out[194]: A 0 foo 1 foo 2 foo 3 foo ``` Assigning a scalar value will set all the rows to the same scalar value.", "best_answers_score":0.8, "library_name":"pandas", "question_url":"https:\/\/stackoverflow.com\/questions\/34811971\/how-do-i-fill-a-column-with-one-value-in-pandas", "best_answers_votes":157, "question_length":239, "response_length":191 }, { "question":"get dataframe row count based on conditions I want to get the count of dataframe rows based on conditional selection. I tried the following code. ``` print df[(df.IP == head.idxmax()) & (df.Method == 'HEAD') & (df.Referrer == '\"-\"')].count() ``` output: ``` IP 57 Time 57 Method 57 Resource 57 Status 57 Bytes 57 Referrer 57 Agent 57 dtype: int64 ``` The output shows the count for each and every column in the dataframe. 
Instead I need to get a single count where all of the above conditions are satisfied. How do I do this? If you need more explanation about my dataframe please let me know.", "response":"You are asking for the condition where all the conditions are true, so len of the frame is the answer, unless I misunderstand what you are asking: ``` In [17]: df = DataFrame(randn(20,4),columns=list('ABCD')) In [18]: df[(df['A']>0) & (df['B']>0) & (df['C']>0)] Out[18]: A B C D 12 0.491683 0.137766 0.859753 -1.041487 13 0.376200 0.575667 1.534179 1.247358 14 0.428739 1.539973 1.057848 -1.254489 In [19]: df[(df['A']>0) & (df['B']>0) & (df['C']>0)].count() Out[19]: A 3 B 3 C 3 D 3 dtype: int64 In [20]: len(df[(df['A']>0) & (df['B']>0) & (df['C']>0)]) Out[20]: 3 ```", "best_answers_score":0.8, "library_name":"pandas", "question_url":"https:\/\/stackoverflow.com\/questions\/17322109\/get-dataframe-row-count-based-on-conditions", "best_answers_votes":143, "question_length":586, "response_length":568 }, { "question":"how to merge two data frames based on particular column in pandas python? [duplicate] This question already has answers here: Pandas Merging 101 (8 answers) join two pandas dataframe using a specific column (2 answers) Closed 7 months ago. I have to merge two dataframes: df1 ``` company,standard tata,A1 cts,A2 dell,A3 ``` df2 ``` company,return tata,71 dell,78 cts,27 hcl,23 ``` I have to unify both dataframes to one dataframe. 
I need output like: ``` company,standard,return tata,A1,71 cts,A2,27 dell,A3,78 ```", "response":"Use merge: ``` print (pd.merge(df1, df2, on='company')) ``` Sample: ``` print (df1) company standard 0 tata A1 1 cts A2 2 dell A3 print (df2) company return 0 tata 71 1 dell 78 2 cts 27 3 hcl 23 print (pd.merge(df1, df2, on='company')) company standard return 0 tata A1 71 1 cts A2 27 2 dell A3 78 ```", "best_answers_score":0.8, "library_name":"pandas", "question_url":"https:\/\/stackoverflow.com\/questions\/37697195\/how-to-merge-two-data-frames-based-on-particular-column-in-pandas-python", "best_answers_votes":184, "question_length":514, "response_length":301 }, { "question":"pandas select from Dataframe using startswith This works (using Pandas 12 dev) ``` table2=table[table['SUBDIVISION'] =='INVERNESS'] ``` Then I realized I needed to select the field using \"starts with\" Since I was missing a bunch. So per the Pandas doc as near as I could follow I tried ``` criteria = table['SUBDIVISION'].map(lambda x: x.startswith('INVERNESS')) table2 = table[criteria] ``` And got AttributeError: 'float' object has no attribute 'startswith' So I tried an alternate syntax with the same result ``` table[[x.startswith('INVERNESS') for x in table['SUBDIVISION']]] ``` Reference http:\/\/pandas.pydata.org\/pandas-docs\/stable\/indexing.html#boolean-indexing Section 4: List comprehensions and map method of Series can also be used to produce more complex criteria: What am I missing?", "response":"You can use the str.startswith DataFrame method to give more consistent results: ``` In [11]: s = pd.Series(['a', 'ab', 'c', 11, np.nan]) In [12]: s Out[12]: 0 a 1 ab 2 c 3 11 4 NaN dtype: object In [13]: s.str.startswith('a', na=False) Out[13]: 0 True 1 True 2 False 3 False 4 False dtype: bool ``` and the boolean indexing will work just fine (I prefer to use loc, but it works just the same without): ``` In [14]: s.loc[s.str.startswith('a', na=False)] Out[14]: 0 a 1 ab dtype: object ``` . 
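Applied to the frame from the question, the whole fix is then one line (the SUBDIVISION column name comes from the question; the sample rows below are invented):

```python
import pandas as pd

# made-up stand-in for the question's table; the None row forces the float NaN case
table = pd.DataFrame({'SUBDIVISION': ['INVERNESS A', 'INVERNESS B', 'OTHER', None]})
# na=False treats missing (float NaN) cells as non-matches instead of raising
table2 = table[table['SUBDIVISION'].str.startswith('INVERNESS', na=False)]
```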
It looks like at least one of your elements in the Series\/column is a float, which doesn't have a startswith method, hence the AttributeError; the list comprehension should raise the same error...", "best_answers_score":0.8, "library_name":"pandas", "question_url":"https:\/\/stackoverflow.com\/questions\/17957890\/pandas-select-from-dataframe-using-startswith", "best_answers_votes":137, "question_length":796, "response_length":681 }, { "question":"Python Pandas User Warning: Sorting because non-concatenation axis is not aligned I'm doing some code practice and applying merging of data frames; while doing this I'm getting this user warning: \/usr\/lib64\/python2.7\/site-packages\/pandas\/core\/frame.py:6201: FutureWarning: Sorting because non-concatenation axis is not aligned. A future version of pandas will change to not sort by default. To accept the future behavior, pass 'sort=True'. To retain the current behavior and silence the warning, pass sort=False. It comes from the lines of code below. Can you please help me resolve this warning? 
``` placement_video = [self.read_sql_vdx_summary, self.read_sql_video_km] placement_video_summary = reduce(lambda left, right: pd.merge(left, right, on='PLACEMENT', sort=False), placement_video) placement_by_video = placement_video_summary.loc[:, [\"PLACEMENT\", \"PLACEMENT_NAME\", \"COST_TYPE\", \"PRODUCT\", \"VIDEONAME\", \"VIEW0\", \"VIEW25\", \"VIEW50\", \"VIEW75\", \"VIEW100\", \"ENG0\", \"ENG25\", \"ENG50\", \"ENG75\", \"ENG100\", \"DPE0\", \"DPE25\", \"DPE50\", \"DPE75\", \"DPE100\"]] # print (placement_by_video) placement_by_video[\"Placement# Name\"] = placement_by_video[[\"PLACEMENT\", \"PLACEMENT_NAME\"]].apply(lambda x: \".\".join(x), axis=1) placement_by_video_new = placement_by_video.loc[:, [\"PLACEMENT\", \"Placement# Name\", \"COST_TYPE\", \"PRODUCT\", \"VIDEONAME\", \"VIEW0\", \"VIEW25\", \"VIEW50\", \"VIEW75\", \"VIEW100\", \"ENG0\", \"ENG25\", \"ENG50\", \"ENG75\", \"ENG100\", \"DPE0\", \"DPE25\", \"DPE50\", \"DPE75\", \"DPE100\"]] placement_by_km_video = [placement_by_video_new, self.read_sql_km_for_video] placement_by_km_video_summary = reduce(lambda left, right: pd.merge(left, right, on=['PLACEMENT', 'PRODUCT'], sort=False), placement_by_km_video) #print (list(placement_by_km_video_summary)) #print(placement_by_km_video_summary) #exit() # print(placement_by_video_new) \"\"\"Conditions for 25%view\"\"\" mask17 = placement_by_km_video_summary[\"PRODUCT\"].isin(['Display', 'Mobile']) mask18 = placement_by_km_video_summary[\"COST_TYPE\"].isin([\"CPE\", \"CPM\", \"CPCV\"]) mask19 = placement_by_km_video_summary[\"PRODUCT\"].isin([\"InStream\"]) mask20 = placement_by_km_video_summary[\"COST_TYPE\"].isin([\"CPE\", \"CPM\", \"CPE+\", \"CPCV\"]) mask_video_video_completions = placement_by_km_video_summary[\"COST_TYPE\"].isin([\"CPCV\"]) mask21 = placement_by_km_video_summary[\"COST_TYPE\"].isin([\"CPE+\"]) mask22 = placement_by_km_video_summary[\"COST_TYPE\"].isin([\"CPE\", \"CPM\"]) mask23 = placement_by_km_video_summary[\"PRODUCT\"].isin(['Display', 
'Mobile', 'InStream']) mask24 = placement_by_km_video_summary[\"COST_TYPE\"].isin([\"CPE\", \"CPM\", \"CPE+\"]) choice25video_eng = placement_by_km_video_summary[\"ENG25\"] choice25video_vwr = placement_by_km_video_summary[\"VIEW25\"] choice25video_deep = placement_by_km_video_summary[\"DPE25\"] placement_by_km_video_summary[\"25_pc_video\"] = np.select([mask17 & mask18, mask19 & mask20, mask17 & mask21], [choice25video_eng, choice25video_vwr, choice25video_deep]) \"\"\"Conditions for 50%view\"\"\" choice50video_eng = placement_by_km_video_summary[\"ENG50\"] choice50video_vwr = placement_by_km_video_summary[\"VIEW50\"] choice50video_deep = placement_by_km_video_summary[\"DPE50\"] placement_by_km_video_summary[\"50_pc_video\"] = np.select([mask17 & mask18, mask19 & mask20, mask17 & mask21], [choice50video_eng, choice50video_vwr, choice50video_deep]) \"\"\"Conditions for 75%view\"\"\" choice75video_eng = placement_by_km_video_summary[\"ENG75\"] choice75video_vwr = placement_by_km_video_summary[\"VIEW75\"] choice75video_deep = placement_by_km_video_summary[\"DPE75\"] placement_by_km_video_summary[\"75_pc_video\"] = np.select([mask17 & mask18, mask19 & mask20, mask17 & mask21], [choice75video_eng, choice75video_vwr, choice75video_deep]) \"\"\"Conditions for 100%view\"\"\" choice100video_eng = placement_by_km_video_summary[\"ENG100\"] choice100video_vwr = placement_by_km_video_summary[\"VIEW100\"] choice100video_deep = placement_by_km_video_summary[\"DPE100\"] choicecompletions = placement_by_km_video_summary['COMPLETIONS'] placement_by_km_video_summary[\"100_pc_video\"] = np.select([mask17 & mask22, mask19 & mask24, mask17 & mask21, mask23 & mask_video_video_completions], [choice100video_eng, choice100video_vwr, choice100video_deep, choicecompletions]) \"\"\"conditions for 0%view\"\"\" choice0video_eng = placement_by_km_video_summary[\"ENG0\"] choice0video_vwr = placement_by_km_video_summary[\"VIEW0\"] choice0video_deep = placement_by_km_video_summary[\"DPE0\"] 
placement_by_km_video_summary[\"Views\"] = np.select([mask17 & mask18, mask19 & mask20, mask17 & mask21], [choice0video_eng, choice0video_vwr, choice0video_deep]) #print (placement_by_km_video_summary) #exit() #final Table placement_by_video_summary = placement_by_km_video_summary.loc[:, [\"PLACEMENT\", \"Placement# Name\", \"PRODUCT\", \"VIDEONAME\", \"COST_TYPE\", \"Views\", \"25_pc_video\", \"50_pc_video\", \"75_pc_video\",\"100_pc_video\", \"ENGAGEMENTS\",\"IMPRESSIONS\", \"DPEENGAMENTS\"]] #placement_by_km_video = [placement_by_video_summary, self.read_sql_km_for_video] #placement_by_km_video_summary = reduce(lambda left, right: pd.merge(left, right, on=['PLACEMENT', 'PRODUCT']), #placement_by_km_video) #print(placement_by_video_summary) #exit() # dup_col =[\"IMPRESSIONS\",\"ENGAGEMENTS\",\"DPEENGAMENTS\"] # placement_by_video_summary.loc[placement_by_video_summary.duplicated(dup_col),dup_col] = np.nan # print (\"Dhar\",placement_by_video_summary) '''adding views based on conditions''' #filter maximum value from videos placement_by_video_summary_new = placement_by_km_video_summary.loc[ placement_by_km_video_summary.reset_index().groupby(['PLACEMENT', 'PRODUCT'])['Views'].idxmax()] #print (placement_by_video_summary_new) #exit() # print (placement_by_video_summary_new) # mask22 = (placement_by_video_summary_new.PRODUCT.str.upper ()=='DISPLAY') & (placement_by_video_summary_new.COST_TYPE=='CPE') placement_by_video_summary_new.loc[mask17 & mask18, 'Views'] = placement_by_video_summary_new['ENGAGEMENTS'] placement_by_video_summary_new.loc[mask19 & mask20, 'Views'] = placement_by_video_summary_new['IMPRESSIONS'] placement_by_video_summary_new.loc[mask17 & mask21, 'Views'] = placement_by_video_summary_new['DPEENGAMENTS'] #print (placement_by_video_summary_new) #exit() placement_by_video_summary = placement_by_video_summary.drop(placement_by_video_summary_new.index).append( placement_by_video_summary_new).sort_index() placement_by_video_summary[\"Video Completion 
Rate\"] = placement_by_video_summary[\"100_pc_video\"] \/ \\ placement_by_video_summary[\"Views\"] placement_by_video_final = placement_by_video_summary.loc[:, [\"Placement# Name\", \"PRODUCT\", \"VIDEONAME\", \"Views\", \"25_pc_video\", \"50_pc_video\", \"75_pc_video\", \"100_pc_video\", \"Video Completion Rate\"]] ```", "response":"tl;dr: concat and append currently sort the non-concatenation index (e.g. columns if you're adding rows) if the columns don't match. In pandas 0.23 this started generating a warning; pass the parameter sort=True to silence it. In the future the default will change to not sort, so it's best to specify either sort=True or False now, or better yet ensure that your non-concatenation indices match. The warning is new in pandas 0.23.0: In a future version of pandas pandas.concat() and DataFrame.append() will no longer sort the non-concatenation axis when it is not already aligned. The current behavior is the same as the previous (sorting), but now a warning is issued when sort is not specified and the non-concatenation axis is not aligned, link. More information from linked very old github issue, comment by smcinerney : When concat'ing DataFrames, the column names get alphanumerically sorted if there are any differences between them. If they're identical across DataFrames, they don't get sorted. This sort is undocumented and unwanted. Certainly the default behavior should be no-sort. After some time the parameter sort was implemented in pandas.concat and DataFrame.append: sort : boolean, default None Sort non-concatenation axis if it is not already aligned when join is 'outer'. The current default of sorting is deprecated and will change to not-sorting in a future version of pandas. Explicitly pass sort=True to silence the warning and sort. Explicitly pass sort=False to silence the warning and not sort. This has no effect when join='inner', which already preserves the order of the non-concatenation axis. 
So if both DataFrames have the same columns in the same order, there is no warning and no sorting: ``` df1 = pd.DataFrame({\"a\": [1, 2], \"b\": [0, 8]}, columns=['a', 'b']) df2 = pd.DataFrame({\"a\": [4, 5], \"b\": [7, 3]}, columns=['a', 'b']) print (pd.concat([df1, df2])) a b 0 1 0 1 2 8 0 4 7 1 5 3 df1 = pd.DataFrame({\"a\": [1, 2], \"b\": [0, 8]}, columns=['b', 'a']) df2 = pd.DataFrame({\"a\": [4, 5], \"b\": [7, 3]}, columns=['b', 'a']) print (pd.concat([df1, df2])) b a 0 0 1 1 8 2 0 7 4 1 3 5 ``` But if the DataFrames have different columns, or the same columns in a different order, pandas returns a warning if no parameter sort is explicitly set (sort=None is the default value): ``` df1 = pd.DataFrame({\"a\": [1, 2], \"b\": [0, 8]}, columns=['b', 'a']) df2 = pd.DataFrame({\"a\": [4, 5], \"b\": [7, 3]}, columns=['a', 'b']) print (pd.concat([df1, df2])) ``` FutureWarning: Sorting because non-concatenation axis is not aligned. ``` a b 0 1 0 1 2 8 0 4 7 1 5 3 print (pd.concat([df1, df2], sort=True)) a b 0 1 0 1 2 8 0 4 7 1 5 3 print (pd.concat([df1, df2], sort=False)) b a 0 0 1 1 8 2 0 7 4 1 3 5 ``` If the DataFrames have different columns, but the first columns are aligned - they will be correctly assigned to each other (columns a and b from df1 with a and b from df2 in the example below) because they exist in both. For other columns that exist in one but not both DataFrames, missing values are created. Lastly, if you pass sort=True, columns are sorted alphanumerically. If sort=False and the second DataFrame has columns that are not in the first, they are appended to the end with no sorting: ``` df1 = pd.DataFrame({\"a\": [1, 2], \"b\": [0, 8], 'e':[5, 0]}, columns=['b', 'a','e']) df2 = pd.DataFrame({\"a\": [4, 5], \"b\": [7, 3], 'c':[2, 8], 'd':[7, 0]}, columns=['c','b','a','d']) print (pd.concat([df1, df2])) ``` FutureWarning: Sorting because non-concatenation axis is not aligned. 
``` a b c d e 0 1 0 NaN NaN 5.0 1 2 8 NaN NaN 0.0 0 4 7 2.0 7.0 NaN 1 5 3 8.0 0.0 NaN print (pd.concat([df1, df2], sort=True)) a b c d e 0 1 0 NaN NaN 5.0 1 2 8 NaN NaN 0.0 0 4 7 2.0 7.0 NaN 1 5 3 8.0 0.0 NaN print (pd.concat([df1, df2], sort=False)) b a e c d 0 0 1 5.0 NaN NaN 1 8 2 0.0 NaN NaN 0 7 4 NaN 2.0 7.0 1 3 5 NaN 8.0 0.0 ``` In your code: ``` placement_by_video_summary = placement_by_video_summary.drop(placement_by_video_summary_new.index) .append(placement_by_video_summary_new, sort=True) .sort_index() ```", "best_answers_score":0.8, "library_name":"pandas", "question_url":"https:\/\/stackoverflow.com\/questions\/50501787\/python-pandas-user-warning-sorting-because-non-concatenation-axis-is-not-aligne", "best_answers_votes":153, "question_length":6624, "response_length":4034 }, { "question":"How to build and fill pandas dataframe from for loop? [duplicate] This question already has answers here: Creating an empty Pandas DataFrame, and then filling it (8 answers) Closed 5 years ago. Here is a simple example of the code I am running, and I would like the results put into a pandas dataframe (unless there is a better option): ``` for p in game.players.passing(): print p, p.team, p.passing_att, p.passer_rating() R.Wilson SEA 29 55.7 J.Ryan SEA 1 158.3 A.Rodgers GB 34 55.8 ``` Using this code: ``` d = [] for p in game.players.passing(): d = [{'Player': p, 'Team': p.team, 'Passer Rating': p.passer_rating()}] pd.DataFrame(d) ``` I can get: ``` Passer Rating Player Team 0 55.8 A.Rodgers GB ``` Which is a 1x3 dataframe, and I understand why it is only one row but I can't figure out how to make it multi-row with the columns in the correct order. Ideally the solution would be able to deal with n number of rows (based on p) and it would be wonderful (although not essential) if the number of columns would be set by the number of stats requested. Any suggestions? 
Thanks in advance!", "response":"The simplest answer is what Paul H said: ``` d = [] for p in game.players.passing(): d.append( { 'Player': p, 'Team': p.team, 'Passer Rating': p.passer_rating() } ) pd.DataFrame(d) ``` But if you really want to \"build and fill a dataframe from a loop\", (which, btw, I wouldn't recommend), here's how you'd do it. ``` d = pd.DataFrame() for p in game.players.passing(): temp = pd.DataFrame( { 'Player': p, 'Team': p.team, 'Passer Rating': p.passer_rating() } ) d = pd.concat([d, temp]) ```", "best_answers_score":0.8, "library_name":"pandas", "question_url":"https:\/\/stackoverflow.com\/questions\/28056171\/how-to-build-and-fill-pandas-dataframe-from-for-loop", "best_answers_votes":149, "question_length":1096, "response_length":488 }, { "question":"group by week in pandas I'm having this data frame: ``` Name Date Quantity Apple 07\/11\/17 20 orange 07\/14\/17 20 Apple 07\/14\/17 70 Orange 07\/25\/17 40 Apple 07\/20\/17 30 ``` I want to aggregate this by Name and Date to get sum of quantities Details: Date: Group, the result should be at the beginning of the week (or just on Monday) Quantity: Sum, if two or more records have same Name and Date (if falls on same interval) The desired output is given below: ``` Name Date Quantity Apple 07\/10\/17 90 orange 07\/10\/17 20 Apple 07\/17\/17 30 orange 07\/24\/17 40 ```", "response":"First, convert column date to_datetime and subtract one week as we want the sum for the week ahead of the date and not the week before that date. 
Then use groupby with Grouper by W-MON and aggregate sum: ```py df['Date'] = pd.to_datetime(df['Date']) - pd.to_timedelta(7, unit='d') df = df.groupby(['Name', pd.Grouper(key='Date', freq='W-MON')])['Quantity'] .sum() .reset_index() .sort_values('Date') print (df) ``` ``` Name Date Quantity 0 Apple 2017-07-10 90 3 orange 2017-07-10 20 1 Apple 2017-07-17 30 2 Orange 2017-07-24 40 ```", "best_answers_score":0.8, "library_name":"pandas", "question_url":"https:\/\/stackoverflow.com\/questions\/45281297\/group-by-week-in-pandas", "best_answers_votes":163, "question_length":555, "response_length":531 }, { "question":"Nested dictionary to multiindex dataframe where dictionary keys are column labels Say I have a dictionary that looks like this: ``` dictionary = {'A' : {'a': [1,2,3,4,5], 'b': [6,7,8,9,1]}, 'B' : {'a': [2,3,4,5,6], 'b': [7,8,9,1,2]}} ``` and I want a dataframe that looks something like this: ``` A B a b a b 0 1 6 2 7 1 2 7 3 8 2 3 8 4 9 3 4 9 5 1 4 5 1 6 2 ``` Is there a convenient way to do this? If I try: ``` In [99]: DataFrame(dictionary) Out[99]: A B a [1, 2, 3, 4, 5] [2, 3, 4, 5, 6] b [6, 7, 8, 9, 1] [7, 8, 9, 1, 2] ``` I get a dataframe where each element is a list. What I need is a multiindex where each level corresponds to the keys in the nested dict and the rows corresponding to each element in the list as shown above. I think I can work a very crude solution but I'm hoping there might be something a bit simpler.", "response":"Pandas wants the MultiIndex values as tuples, not nested dicts. 
The simplest thing is to convert your dictionary to the right format before trying to pass it to DataFrame: ``` >>> reform = {(outerKey, innerKey): values for outerKey, innerDict in dictionary.items() for innerKey, values in innerDict.items()} >>> reform {('A', 'a'): [1, 2, 3, 4, 5], ('A', 'b'): [6, 7, 8, 9, 1], ('B', 'a'): [2, 3, 4, 5, 6], ('B', 'b'): [7, 8, 9, 1, 2]} >>> pandas.DataFrame(reform) A B a b a b 0 1 6 2 7 1 2 7 3 8 2 3 8 4 9 3 4 9 5 1 4 5 1 6 2 [5 rows x 4 columns] ```", "best_answers_score":0.8, "library_name":"pandas", "question_url":"https:\/\/stackoverflow.com\/questions\/24988131\/nested-dictionary-to-multiindex-dataframe-where-dictionary-keys-are-column-label", "best_answers_votes":115, "question_length":833, "response_length":551 }, { "question":"Plotting multiple lines, in different colors, with pandas dataframe I have a dataframe that looks like the following ``` color x y 0 red 0 0 1 red 1 1 2 red 2 2 3 red 3 3 4 red 4 4 5 red 5 5 6 red 6 6 7 red 7 7 8 red 8 8 9 red 9 9 10 blue 0 0 11 blue 1 1 12 blue 2 4 13 blue 3 9 14 blue 4 16 15 blue 5 25 16 blue 6 36 17 blue 7 49 18 blue 8 64 19 blue 9 81 ``` I ultimately want two lines, one blue, one red. The red line should essentially be y=x and the blue line should be y=x^2 When I do the following: ``` df.plot(x='x', y='y') ``` The output is this: Is there a way to make pandas know that there are two sets? And group them accordingly. I'd like to be able to specify the column color as the set differentiator", "response":"Another simple way is to use the pandas.DataFrame.pivot function to format the data. Use pandas.DataFrame.plot to plot. Providing the colors in the 'color' column exist in matplotlib: List of named colors, they can be passed to the color parameter. 
``` # sample data df = pd.DataFrame([['red', 0, 0], ['red', 1, 1], ['red', 2, 2], ['red', 3, 3], ['red', 4, 4], ['red', 5, 5], ['red', 6, 6], ['red', 7, 7], ['red', 8, 8], ['red', 9, 9], ['blue', 0, 0], ['blue', 1, 1], ['blue', 2, 4], ['blue', 3, 9], ['blue', 4, 16], ['blue', 5, 25], ['blue', 6, 36], ['blue', 7, 49], ['blue', 8, 64], ['blue', 9, 81]], columns=['color', 'x', 'y']) # pivot the data into the correct shape df = df.pivot(index='x', columns='color', values='y') # display(df) color blue red x 0 0 0 1 1 1 2 4 2 3 9 3 4 16 4 5 25 5 6 36 6 7 49 7 8 64 8 9 81 9 # plot the pivoted dataframe; if the column names aren't colors, remove color=df.columns df.plot(color=df.columns, figsize=(5, 3)) ```", "best_answers_score":0.8, "library_name":"pandas", "question_url":"https:\/\/stackoverflow.com\/questions\/29233283\/plotting-multiple-lines-in-different-colors-with-pandas-dataframe", "best_answers_votes":114, "question_length":718, "response_length":957 }, { "question":"How do I properly set the Datetimeindex for a Pandas datetime object in a dataframe? I have a pandas dataframe: ``` lat lng alt days date time 0 40.003834 116.321462 211 39745.175405 2008-10-24 04:12:35 1 40.003783 116.321431 201 39745.175463 2008-10-24 04:12:40 2 40.003690 116.321429 203 39745.175521 2008-10-24 04:12:45 3 40.003589 116.321427 194 39745.175579 2008-10-24 04:12:50 4 40.003522 116.321412 190 39745.175637 2008-10-24 04:12:55 5 40.003509 116.321484 188 39745.175694 2008-10-24 04:13:00 ``` For which I am trying to convert the df['date'] and df['time'] columns into a datetime. 
I can do: ``` df['Datetime'] = pd.to_datetime(df['date']+df['time']) df = df.set_index(['Datetime']) del df['date'] del df['time'] ``` And I get: ``` lat lng alt days Datetime 2008-10-2404:12:35 40.003834 116.321462 211 39745.175405 2008-10-2404:12:40 40.003783 116.321431 201 39745.175463 2008-10-2404:12:45 40.003690 116.321429 203 39745.175521 2008-10-2404:12:50 40.003589 116.321427 194 39745.175579 2008-10-2404:12:55 40.003522 116.321412 190 39745.175637 ``` But then if I try: ``` df.between_time(time(1),time(22,59,59))['lng'].std() ``` I get an error - 'TypeError: Index must be DatetimeIndex' So, I've also tried setting the DatetimeIndex: ``` df['Datetime'] = pd.to_datetime(df['date']+df['time']) #df = df.set_index(['Datetime']) df = df.set_index(pd.DatetimeIndex(df['Datetime'])) del df['date'] del df['time'] ``` And this throws an error also - 'DateParseError: unknown string format' How do I create the datetime column and DatetimeIndex correctly so that df.between_time() works right?", "response":"To simplify Kirubaharan's answer a bit: ``` df['Datetime'] = pd.to_datetime(df['date'] + ' ' + df['time']) df = df.set_index('Datetime') ``` And to get rid of unwanted columns (as OP did but did not specify per se in the question): ``` df = df.drop(['date','time'], axis=1) ```", "best_answers_score":0.8, "library_name":"pandas", "question_url":"https:\/\/stackoverflow.com\/questions\/27032052\/how-do-i-properly-set-the-datetimeindex-for-a-pandas-datetime-object-in-a-datafr", "best_answers_votes":138, "question_length":1597, "response_length":277 }, { "question":"what are all the dtypes that pandas recognizes? For pandas, would anyone know, if any datatype apart from (i) float64, int64 (and other variants of np.number like float32, int8 etc.) (ii) bool (iii) datetime64, timedelta64 such as string columns, always have a dtype of object ? 
Alternatively, I want to know, if there are any datatypes apart from (i), (ii) and (iii) in the list above that pandas does not make its dtype an object?", "response":"pandas borrows its dtypes from numpy. For demonstration of this see the following: ``` import pandas as pd df = pd.DataFrame({'A': [1,'C',2.]}) df['A'].dtype >>> dtype('O') type(df['A'].dtype) >>> numpy.dtype ``` You can find the list of valid numpy.dtypes in the documentation: '?' boolean 'b' (signed) byte 'B' unsigned byte 'i' (signed) integer 'u' unsigned integer 'f' floating-point 'c' complex-floating point 'm' timedelta 'M' datetime 'O' (Python) objects 'S', 'a' zero-terminated bytes (not recommended) 'U' Unicode string 'V' raw data (void) pandas should support these types. Using the astype method of a pandas.Series object with any of the above options as the input argument will result in pandas trying to convert the Series to that type (or at the very least falling back to object type); 'u' is the only one that I see pandas not understanding at all: ``` df['A'].astype('u') >>> TypeError: data type \"u\" not understood ``` This is a numpy error that results because the 'u' needs to be followed by a number specifying the number of bytes per item (which needs to be valid): ``` import numpy as np np.dtype('u') >>> TypeError: data type \"u\" not understood np.dtype('u1') >>> dtype('uint8') np.dtype('u2') >>> dtype('uint16') np.dtype('u4') >>> dtype('uint32') np.dtype('u8') >>> dtype('uint64') # testing another invalid argument np.dtype('u3') >>> TypeError: data type \"u3\" not understood ``` To summarise, the astype methods of pandas objects will try and do something sensible with any argument that is valid for numpy.dtype. Note that numpy.dtype('f') is the same as numpy.dtype('float32') and numpy.dtype('f8') is the same as numpy.dtype('float64') etc. Same goes for passing the arguments to pandas astype methods. 
To locate the respective data type classes in NumPy, the Pandas docs recommends this: ``` def subdtypes(dtype): subs = dtype.__subclasses__() if not subs: return dtype return [dtype, [subdtypes(dt) for dt in subs]] subdtypes(np.generic) ``` Output: ``` [numpy.generic, [[numpy.number, [[numpy.integer, [[numpy.signedinteger, [numpy.int8, numpy.int16, numpy.int32, numpy.int64, numpy.int64, numpy.timedelta64]], [numpy.unsignedinteger, [numpy.uint8, numpy.uint16, numpy.uint32, numpy.uint64, numpy.uint64]]]], [numpy.inexact, [[numpy.floating, [numpy.float16, numpy.float32, numpy.float64, numpy.float128]], [numpy.complexfloating, [numpy.complex64, numpy.complex128, numpy.complex256]]]]]], [numpy.flexible, [[numpy.character, [numpy.bytes_, numpy.str_]], [numpy.void, [numpy.record]]]], numpy.bool_, numpy.datetime64, numpy.object_]] ``` Pandas accepts these classes as valid types. For example, dtype={'A': np.float}. NumPy docs contain more details and a chart:", "best_answers_score":0.8, "library_name":"pandas", "question_url":"https:\/\/stackoverflow.com\/questions\/29245848\/what-are-all-the-dtypes-that-pandas-recognizes", "best_answers_votes":70, "question_length":432, "response_length":2704 }, { "question":"pandas get the row-wise minimum value of two or more columns How can I reference the minimum value of two dataframes as part of a pandas dataframe equation? I tried using the python min() function which did not work. I am looking for something along the lines of this: ```py data['eff'] = pd.DataFrame([data['flow_h'], data['flow_c']]).min() *Cp* (data[' Thi'] - data[' Tci']) ``` I also tried to use pandas min() function, which is also not working. ```py min_flow = pd.DataFrame([data['flow_h'], data['flow_c']]).min() ``` ```none InvalidIndexError: Reindexing only valid with uniquely valued Index objects ``` I was confused by this error. The data columns are just numbers and a name, I wasn't sure where the index comes into play. 
```py import pandas as pd import numpy as np np.random.seed(365) rows = 10 flow = {'flow_c': [np.random.randint(100) for _ in range(rows)], 'flow_d': [np.random.randint(100) for _ in range(rows)], 'flow_h': [np.random.randint(100) for _ in range(rows)]} data = pd.DataFrame(flow) # display(data) flow_c flow_d flow_h 0 82 36 43 1 52 48 12 2 33 28 77 3 91 99 11 4 44 95 27 5 5 94 64 6 98 3 88 7 73 39 92 8 26 39 62 9 56 74 50 ```", "response":"If you are trying to get the row-wise mininum of two or more columns, use pandas.DataFrame.min. Note that by default axis=0; specifying axis=1 is necessary. ```py data['min_c_h'] = data[['flow_h','flow_c']].min(axis=1) # display(data) flow_c flow_d flow_h min_c_h 0 82 36 43 43 1 52 48 12 12 2 33 28 77 33 3 91 99 11 11 4 44 95 27 27 5 5 94 64 5 6 98 3 88 88 7 73 39 92 73 8 26 39 62 26 9 56 74 50 50 ```", "best_answers_score":0.8, "library_name":"pandas", "question_url":"https:\/\/stackoverflow.com\/questions\/33975128\/pandas-get-the-row-wise-minimum-value-of-two-or-more-columns", "best_answers_votes":220, "question_length":1164, "response_length":404 }, { "question":"Find empty or NaN entry in Pandas Dataframe I am trying to search through a Pandas Dataframe to find where it has a missing entry or a NaN entry. Here is a dataframe that I am working with: ``` cl_id a c d e A1 A2 A3 0 1 -0.419279 0.843832 -0.530827 text76 1.537177 -0.271042 1 2 0.581566 2.257544 0.440485 dafN_6 0.144228 2.362259 2 3 -1.259333 1.074986 1.834653 system 1.100353 3 4 -1.279785 0.272977 0.197011 Fifty -0.031721 1.434273 4 5 0.578348 0.595515 0.553483 channel 0.640708 0.649132 5 6 -1.549588 -0.198588 0.373476 audio -0.508501 6 7 0.172863 1.874987 1.405923 Twenty NaN NaN 7 8 -0.149630 -0.502117 0.315323 file_max NaN NaN ``` NOTE: The blank entries are empty strings - this is because there was no alphanumeric content in the file that the dataframe came from. 
If I have this dataframe, how can I find a list with the indexes where the NaN or blank entry occurs?", "response":"np.where(pd.isnull(df)) returns the row and column indices where the value is NaN: ``` In [152]: import numpy as np In [153]: import pandas as pd In [154]: np.where(pd.isnull(df)) Out[154]: (array([2, 5, 6, 6, 7, 7]), array([7, 7, 6, 7, 6, 7])) In [155]: df.iloc[2,7] Out[155]: nan In [160]: [df.iloc[i,j] for i,j in zip(*np.where(pd.isnull(df)))] Out[160]: [nan, nan, nan, nan, nan, nan] ``` Finding values which are empty strings could be done with applymap: ``` In [182]: np.where(df.applymap(lambda x: x == '')) Out[182]: (array([5]), array([7])) ``` Note that using applymap requires calling a Python function once for each cell of the DataFrame. That could be slow for a large DataFrame, so it would be better if you could arrange for all the blank cells to contain NaN instead so you could use pd.isnull.", "best_answers_score":0.8, "library_name":"pandas", "question_url":"https:\/\/stackoverflow.com\/questions\/27159189\/find-empty-or-nan-entry-in-pandas-dataframe", "best_answers_votes":89, "question_length":880, "response_length":811 }, { "question":"Pandas: Convert Timestamp to datetime.date I have a pandas column of Timestamp data ``` In [27]: train[\"Original_Quote_Date\"][6] Out[27]: Timestamp('2013-12-25 00:00:00') ``` How can check equivalence of these objects to datetime.date objects of the type ``` datetime.date(2013, 12, 25) ```", "response":"Use the .date method: ``` In [11]: t = pd.Timestamp('2013-12-25 00:00:00') In [12]: t.date() Out[12]: datetime.date(2013, 12, 25) In [13]: t.date() == datetime.date(2013, 12, 25) Out[13]: True ``` To compare against a DatetimeIndex (i.e. 
an array of Timestamps), you'll want to do it the other way around: ``` In [21]: pd.Timestamp(datetime.date(2013, 12, 25)) Out[21]: Timestamp('2013-12-25 00:00:00') In [22]: ts = pd.DatetimeIndex([t]) In [23]: ts == pd.Timestamp(datetime.date(2013, 12, 25)) Out[23]: array([ True], dtype=bool) ```", "best_answers_score":0.8, "library_name":"pandas", "question_url":"https:\/\/stackoverflow.com\/questions\/34386751\/pandas-convert-timestamp-to-datetime-date", "best_answers_votes":131, "question_length":290, "response_length":535 }, { "question":"Element-wise logical OR in Pandas I am aware that AND corresponds to & and NOT, ~. What is the element-wise logical OR operator? I know \"or\" itself is not what I am looking for.", "response":"The corresponding operator is |: ``` df[(df < 3) | (df == 5)] ``` would elementwise check if value is less than 3 or equal to 5. If you need a function to do this, we have np.logical_or. For two conditions, you can use ``` df[np.logical_or(df<3, df==5)] ``` Or, for multiple conditions use the logical_or.reduce, ``` df[np.logical_or.reduce([df<3, df==5])] ``` Since the conditions are specified as individual arguments, parentheses grouping is not needed. More information on logical operations with pandas can be found here.", "best_answers_score":0.8, "library_name":"pandas", "question_url":"https:\/\/stackoverflow.com\/questions\/24775648\/element-wise-logical-or-in-pandas", "best_answers_votes":162, "question_length":177, "response_length":526 }, { "question":"How do I change a single index value in pandas dataframe? ``` energy.loc['Republic of Korea'] ``` I want to change the value of index from 'Republic of Korea' to 'South Korea'. But the dataframe is too large and it is not possible to change every index value. How do I change only this single value?", "response":"@EdChum's solution looks good. Here's one using rename, which would replace all these values in the index. 
``` energy.rename(index={'Republic of Korea':'South Korea'},inplace=True) ``` Here's an example: ``` >>> import numpy as np >>> example = pd.DataFrame({'key1' : ['a','a','a','b','a','b'], 'data1' : [1,2,2,3,np.nan,4], 'data2' : list('abcdef')}) >>> example.set_index('key1',inplace=True) >>> example data1 data2 key1 a 1.0 a a 2.0 b a 2.0 c b 3.0 d a NaN e b 4.0 f >>> example.rename(index={'a':'c'}) # can also use inplace=True data1 data2 key1 c 1.0 a c 2.0 b c 2.0 c b 3.0 d c NaN e b 4.0 f ```", "best_answers_score":0.8, "library_name":"pandas", "question_url":"https:\/\/stackoverflow.com\/questions\/40427943\/how-do-i-change-a-single-index-value-in-pandas-dataframe", "best_answers_votes":131, "question_length":299, "response_length":577 }, { "question":"How to convert columns into one datetime column in pandas? I have a dataframe where the first 3 columns are 'MONTH', 'DAY', 'YEAR' In each column there is an integer. Is there a Pythonic way to convert all three columns into datetimes while they are in the dataframe? 
From: ``` M D Y Apples Oranges 5 6 1990 12 3 5 7 1990 14 4 5 8 1990 15 34 5 9 1990 23 21 ``` into: ``` Datetimes Apples Oranges 1990-6-5 12 3 1990-7-5 14 4 1990-8-5 15 34 1990-9-5 23 21 ```", "response":"In version 0.18.1 you can use to_datetime, but: The names of the columns have to be year, month, day, hour, minute and second: Minimal columns are year, month and day Sample: ``` import pandas as pd df = pd.DataFrame({'year': [2015, 2016], 'month': [2, 3], 'day': [4, 5], 'hour': [2, 3], 'minute': [10, 30], 'second': [21,25]}) print df day hour minute month second year 0 4 2 10 2 21 2015 1 5 3 30 3 25 2016 print pd.to_datetime(df[['year', 'month', 'day']]) 0 2015-02-04 1 2016-03-05 dtype: datetime64[ns] print pd.to_datetime(df[['year', 'month', 'day', 'hour']]) 0 2015-02-04 02:00:00 1 2016-03-05 03:00:00 dtype: datetime64[ns] print pd.to_datetime(df[['year', 'month', 'day', 'hour', 'minute']]) 0 2015-02-04 02:10:00 1 2016-03-05 03:30:00 dtype: datetime64[ns] print pd.to_datetime(df) 0 2015-02-04 02:10:21 1 2016-03-05 03:30:25 dtype: datetime64[ns] ``` Another solution is convert to dictionary: ``` print df M D Y Apples Oranges 0 5 6 1990 12 3 1 5 7 1990 14 4 2 5 8 1990 15 34 3 5 9 1990 23 21 print pd.to_datetime(dict(year=df.Y, month=df.M, day=df.D)) 0 1990-05-06 1 1990-05-07 2 1990-05-08 3 1990-05-09 dtype: datetime64[ns] ```", "best_answers_score":0.8, "library_name":"pandas", "question_url":"https:\/\/stackoverflow.com\/questions\/19350806\/how-to-convert-columns-into-one-datetime-column-in-pandas", "best_answers_votes":150, "question_length":458, "response_length":1143 }, { "question":"How do I sum values in a column that match a given condition using pandas? Suppose I have a dataframe like so: ```none a b 1 5 1 7 2 3 1 3 2 5 ``` I want to sum up the values for b where a = 1, for example. This would give me 5 + 7 + 3 = 15. How do I do this in pandas?", "response":"The essential idea here is to select the data you want to sum, and then sum them. 
This selection of data can be done in several different ways, a few of which are shown below. Boolean indexing Arguably the most common way to select the values is to use Boolean indexing. With this method, you find out where column 'a' is equal to 1 and then sum the corresponding rows of column 'b'. You can use loc to handle the indexing of rows and columns: ``` >>> df.loc[df['a'] == 1, 'b'].sum() 15 ``` The Boolean indexing can be extended to other columns. For example if df also contained a column 'c' and we wanted to sum the rows in 'b' where 'a' was 1 and 'c' was 2, we'd write: ``` df.loc[(df['a'] == 1) & (df['c'] == 2), 'b'].sum() ``` Query Another way to select the data is to use query to filter the rows you're interested in, select column 'b' and then sum: ``` >>> df.query(\"a == 1\")['b'].sum() 15 ``` Again, the method can be extended to make more complicated selections of the data: ``` df.query(\"a == 1 and c == 2\")['b'].sum() ``` Note this is a little more concise than the Boolean indexing approach. Groupby The alternative approach is to use groupby to split the DataFrame into parts according to the value in column 'a'. 
You can then sum each part and pull out the value that the 1s added up to: ``` >>> df.groupby('a')['b'].sum()[1] 15 ``` This approach is likely to be slower than using Boolean indexing, but it is useful if you want check the sums for other values in column a: ``` >>> df.groupby('a')['b'].sum() a 1 15 2 8 ```", "best_answers_score":0.8, "library_name":"pandas", "question_url":"https:\/\/stackoverflow.com\/questions\/28236305\/how-do-i-sum-values-in-a-column-that-match-a-given-condition-using-pandas", "best_answers_votes":182, "question_length":269, "response_length":1537 }, { "question":"Pandas: Get unique MultiIndex level values by label Say you have this MultiIndex-ed DataFrame: ``` df = pd.DataFrame({'country':['DE','DE','FR','FR'], 'biome':['Lake','Forest','Lake','Forest'], 'area':[10,20,30,40], 'count':[7,5,2,3]}) df = df.set_index(['country','biome']) ``` Which looks like this: ``` area count country biome DE Lake 10 7 Forest 20 5 FR Lake 30 2 Forest 40 3 ``` I would like to retrieve the unique values per index level. This can be accomplished using ``` >>> df.index.levels[0] ['DE', 'FR'] >>> df.index.levels[1] ['Lake', 'Forest'] ``` What I would really like to do, is to retrieve these lists by addressing the levels by their name, i.e. 'country' and 'biome'. The shortest two ways I could find looks like this: ``` >>> list(set(df.index.get_level_values('country'))) ['DE', 'FR'] >>> df.index.levels[df.index.names.index('country')] ['DE', 'FR'] ``` But non of them are very elegant. Is there a shorter and\/or more performant way?", "response":"Pandas 0.23.0 finally introduced a much cleaner solution to this problem: the level argument to Index.unique(): ``` In [3]: df.index.unique(level='country') Out[3]: Index(['DE', 'FR'], dtype='object', name='country') ``` This is now the recommended solution. 
It is far more efficient because it avoids creating a complete representation of the level values in memory, and re-scanning it.", "best_answers_score":0.8, "library_name":"pandas", "question_url":"https:\/\/stackoverflow.com\/questions\/24495695\/pandas-get-unique-multiindex-level-values-by-label", "best_answers_votes":137, "question_length":960, "response_length":387 }, { "question":"What causes \"indexing past lexsort depth\" warning in Pandas? I'm indexing a large multi-index Pandas df using df.loc[(key1, key2)]. Sometimes I get a series back (as expected), but other times I get a dataframe. I'm trying to isolate the cases which cause the latter, but so far all I can see is that it's correlated with getting a PerformanceWarning: indexing past lexsort depth may impact performance warning. I'd like to reproduce it to post here, but I can't generate another case that gives me the same warning. Here's my attempt: ``` def random_dates(start, end, n=10): start_u = start.value\/\/10**9 end_u = end.value\/\/10**9 return pd.to_datetime(np.random.randint(start_u, end_u, n), unit='s') np.random.seed(0) df = pd.DataFrame(np.random.random(3255000).reshape(465000,7)) # same shape as my data df['date'] = random_dates(pd.to_datetime('1990-01-01'), pd.to_datetime('2018-01-01'), 465000) df = df.set_index([0, 'date']) df = df.sort_values(by=[3]) # unsort indices, just in case df.index.lexsort_depth > 0 df.index.is_monotonic > False df.loc[(0.9987185534991936, pd.to_datetime('2012-04-16 07:04:34'))] # no warning ``` So my question is: what causes this warning? How do I artificially induce it?", "response":"TL;DR: your index is unsorted and this severely impacts performance. Sort your DataFrame's index using df.sort_index() to address the warning and improve performance. I've actually written about this in detail in my writeup: Select rows in pandas MultiIndex DataFrame (under \"Question 3\"). 
To reproduce, ``` mux = pd.MultiIndex.from_arrays([ list('aaaabbbbbccddddd'), list('tuvwtuvwtuvwtuvw') ], names=['one', 'two']) df = pd.DataFrame({'col': np.arange(len(mux))}, mux) col one two a t 0 u 1 v 2 w 3 b t 4 u 5 v 6 w 7 t 8 c u 9 v 10 d w 11 t 12 u 13 v 14 w 15 ``` You'll notice that the second level is not properly sorted. Now, try to index a specific cross section: ``` df.loc[pd.IndexSlice[('c', 'u')]] PerformanceWarning: indexing past lexsort depth may impact performance. # encoding: utf-8 col one two c u 9 ``` You'll see the same behaviour with xs: ``` df.xs(('c', 'u'), axis=0) PerformanceWarning: indexing past lexsort depth may impact performance. self.interact() col one two c u 9 ``` The docs, backed by this timing test I once did seem to suggest that handling un-sorted indexes imposes a slowdown\u2014Indexing is O(N) time when it could\/should be O(1). If you sort the index before slicing, you'll notice the difference: ``` df2 = df.sort_index() df2.loc[pd.IndexSlice[('c', 'u')]] col one two c u 9 %timeit df.loc[pd.IndexSlice[('c', 'u')]] %timeit df2.loc[pd.IndexSlice[('c', 'u')]] 802 \u00b5s \u00b1 12.1 \u00b5s per loop (mean \u00b1 std. dev. of 7 runs, 1000 loops each) 648 \u00b5s \u00b1 20.3 \u00b5s per loop (mean \u00b1 std. dev. of 7 runs, 1000 loops each) ``` Finally, if you want to know whether the index is sorted or not, check with MultiIndex.is_lexsorted. ``` df.index.is_lexsorted() # False df2.index.is_lexsorted() # True ``` As for your question on how to induce this behaviour, simply permuting the indices should suffice. 
This works if your index is unique: ``` df2 = df.loc[pd.MultiIndex.from_tuples(np.random.permutation(df2.index))] ``` If your index is not unique, add a cumcounted level first, ``` df.set_index( df.groupby(level=list(range(len(df.index.levels)))).cumcount(), append=True) df2 = df.loc[pd.MultiIndex.from_tuples(np.random.permutation(df2.index))] df2 = df2.reset_index(level=-1, drop=True) ```", "best_answers_score":0.8, "library_name":"pandas", "question_url":"https:\/\/stackoverflow.com\/questions\/54307300\/what-causes-indexing-past-lexsort-depth-warning-in-pandas", "best_answers_votes":134, "question_length":1208, "response_length":2209 }, { "question":"Rename MultiIndex columns in Pandas ``` df = pd.DataFrame([[1,2,3], [10,20,30], [100,200,300]]) df.columns = pd.MultiIndex.from_tuples(((\"a\", \"b\"), (\"a\", \"c\"), (\"d\", \"f\"))) df ``` returns ``` a d b c f 0 1 2 3 1 10 20 30 2 100 200 300 ``` and ``` df.columns.levels[1] ``` returns ``` Index([u'b', u'c', u'f'], dtype='object') ``` I want to rename \"f\" to \"e\". 
According to pandas.MultiIndex.rename I run: ``` df.columns.rename([\"b1\", \"c1\", \"f1\"], level=1) ``` But it raises ``` --------------------------------------------------------------------------- TypeError Traceback (most recent call last) in () ----> 1 df.columns.rename([\"b1\", \"c1\", \"f1\"], level=1) C:\\Users\\USERNAME\\AppData\\Local\\Continuum\\Miniconda2\\lib\\site-packages\\pandas\\indexes\\base.pyc in set_names(self, names, level, inplace) 994 if level is not None and not is_list_like(level) and is_list_like( 995 names): --> 996 raise TypeError(\"Names must be a string\") 997 998 if not is_list_like(names) and level is None and self.nlevels > 1: TypeError: Names must be a string ``` I use Python 2.7.12 |Continuum Analytics, Inc.| (default, Jun 29 2016, 11:07:13) [MSC v.1500 64 bit (AMD64)]' and pandas 0.19.1", "response":"Use set_levels: ``` In [22]: df.columns.set_levels(['b1','c1','f1'],level=1,inplace=True) df Out[22]: a d b1 c1 f1 0 1 2 3 1 10 20 30 2 100 200 300 ``` rename sets the name for the index, it doesn't rename the column names: ``` In [26]: df.columns = df.columns.rename(\"b1\", level=1) df Out[26]: a d b1 b c f 0 1 2 3 1 10 20 30 2 100 200 300 ``` This is why you get the error", "best_answers_score":0.8, "library_name":"pandas", "question_url":"https:\/\/stackoverflow.com\/questions\/41221079\/rename-multiindex-columns-in-pandas", "best_answers_votes":91, "question_length":1169, "response_length":374 }, { "question":"Getting min and max Dates from a pandas dataframe How do I get the min and max Dates from a dataframe's major axis? ``` value Date 2014-03-13 10000.000 2014-03-21 2000.000 2014-03-27 2000.000 2014-03-17 200.000 2014-03-17 5.000 2014-03-17 70.000 2014-03-21 200.000 2014-03-27 5.000 2014-03-27 25.000 2014-03-31 0.020 2014-03-31 12.000 2014-03-31 0.022 ``` Essentially I want a way to get the min and max dates, i.e. 2014-03-13 and 2014-03-31. 
I tried using numpy.min or df.min(axis=0); I'm able to get the min or max value, but that's not what I want", "response":"'Date' is your index, so you want to do: ``` print (df.index.min()) print (df.index.max()) 2014-03-13 00:00:00 2014-03-31 00:00:00 ```", "best_answers_score":0.8, "library_name":"pandas", "question_url":"https:\/\/stackoverflow.com\/questions\/23178129\/getting-min-and-max-dates-from-a-pandas-dataframe", "best_answers_votes":139, "question_length":549, "response_length":133 }, { "question":"Is there a parameter in matplotlib\/pandas to have the Y axis of a histogram as percentage? I would like to compare two histograms by having the Y axis show the percentage of each column from the overall dataset size instead of an absolute value. Is that possible? I am using Pandas and matplotlib. Thanks", "response":"Passing density=True (normed=True for matplotlib < 2.2.0) returns a histogram for which np.sum(pdf * np.diff(bins)) equals 1. If you want the sum of the histogram to be 1 you can use Numpy's histogram() and normalize the results yourself. ``` x = np.random.randn(30) fig, ax = plt.subplots(1,2, figsize=(10,4)) ax[0].hist(x, density=True, color='grey') hist, bins = np.histogram(x) ax[1].bar(bins[:-1], hist.astype(np.float32) \/ hist.sum(), width=(bins[1]-bins[0]), color='grey') ax[0].set_title('normed=True') ax[1].set_title('hist = hist \/ hist.sum()') ``` Btw: Strange plotting glitch at the first bin of the left plot.", "best_answers_score":0.8, "library_name":"pandas", "question_url":"https:\/\/stackoverflow.com\/questions\/17874063\/is-there-a-parameter-in-matplotlib-pandas-to-have-the-y-axis-of-a-histogram-as-p", "best_answers_votes":108, "question_length":304, "response_length":618 }, { "question":"Plotting histograms from grouped data in a pandas DataFrame How do I plot a block of histograms from a group of data in a dataframe? 
For example, given: ``` from pandas import DataFrame import numpy as np x = ['A']*300 + ['B']*400 + ['C']*300 y = np.random.randn(1000) df = DataFrame({'Letter': x, 'N': y}) ``` I tried: ``` df.groupby('Letter').hist() ``` ...which failed with the error message: TypeError: cannot concatenate 'str' and 'float' objects", "response":"I'm on a roll, just found an even simpler way to do it using the by keyword in the hist method: ``` df.hist('N', by='Letter') ``` That's a very handy little shortcut for quickly scanning your grouped data! For future visitors, the product of this call is the following chart: In answer to questions below, here's an example of specific tailoring of the histogram plots: ``` # import libraries import pandas as pd import numpy as np # Create test dataframe x = ['A']*300 + ['B']*400 + ['C']*300 y = np.random.randn(1000) z = np.random.randn(1000) df = pd.DataFrame({'Letter':x, 'N1':y, 'N2':z}) # Plot histograms axes = df.hist(['N1','N2'], by='Letter',bins=10, layout=(2,2), legend=True, yrot=90,sharex=True,sharey=True, log=True, figsize=(6,6)) for ax in axes.flatten(): ax.set_xlabel('N') ax.set_ylabel('Count') ax.set_ylim(bottom=1,top=100) ```", "best_answers_score":0.8, "library_name":"pandas", "question_url":"https:\/\/stackoverflow.com\/questions\/19584029\/plotting-histograms-from-grouped-data-in-a-pandas-dataframe", "best_answers_votes":266, "question_length":451, "response_length":847 }, { "question":"datetime to string with series in pandas How should I transform from datetime to string? My attempt: ``` dates = p.to_datetime(p.Series(['20010101', '20010331']), format = '%Y%m%d') dates.str ```", "response":"There is no .str accessor for datetimes and you can't do .astype(str) either. 
Instead, use .dt.strftime: ``` >>> series = pd.Series(['20010101', '20010331']) >>> dates = pd.to_datetime(series, format='%Y%m%d') >>> dates.dt.strftime('%Y-%m-%d') 0 2001-01-01 1 2001-03-31 dtype: object ``` See the docs on customizing date string formats here: strftime() and strptime() Behavior. For old pandas versions, use apply instead: ``` >>> dates.apply(lambda x: x.strftime('%Y-%m-%d')) 0 2001-01-01 1 2001-03-31 dtype: object ```", "best_answers_score":0.8, "library_name":"pandas", "question_url":"https:\/\/stackoverflow.com\/questions\/30132282\/datetime-to-string-with-series-in-pandas", "best_answers_votes":180, "question_length":195, "response_length":494 }, { "question":"Finding the intersection between two series in Pandas I have two series s1 and s2 in pandas and want to compute the intersection i.e. where all of the values of the series are common. How would I use the concat function to do this? I have been trying to work it out but have been unable to (I don't want to compute the intersection on the indices of s1 and s2, but on the values).", "response":"Place both series in Python's set container then use the set intersection method: ``` s1.intersection(s2) ``` and then transform back to list if needed. Just noticed pandas in the tag. Can translate back to that: ``` pd.Series(list(set(s1).intersection(set(s2)))) ``` From comments I have changed this to a more Pythonic expression, which is shorter and easier to read: ``` Series(list(set(s1) & set(s2))) ``` should do the trick, except if the index data is also important to you. Have added the list(...) 
to translate the set before going to pd.Series as pandas does not accept a set as direct input for a Series.", "best_answers_score":0.8, "library_name":"pandas", "question_url":"https:\/\/stackoverflow.com\/questions\/18079563\/finding-the-intersection-between-two-series-in-pandas", "best_answers_votes":115, "question_length":380, "response_length":615 }, { "question":"Merge pandas dataframes where one value is between two others [duplicate] This question already has answers here: How to join two dataframes for which column values are within a certain range? (10 answers) Closed 7 years ago. I need to merge two pandas dataframes on an identifier and a condition where a date in one dataframe is between two dates in the other dataframe. Dataframe A has a date (\"fdate\") and an ID (\"cusip\"): I need to merge this with this dataframe B: on A.cusip==B.ncusip and A.fdate is between B.namedt and B.nameenddt. In SQL this would be trivial, but the only way I can see how to do this in pandas is to first merge unconditionally on the identifier, and then filter on the date condition: ``` df = pd.merge(A, B, how='inner', left_on='cusip', right_on='ncusip') df = df[(df['fdate']>=df['namedt']) & (df['fdate']<=df['nameenddt'])] ``` Is this really the best way to do this? It seems that it would be much better if one could filter within the merge so as to avoid having a potentially very large dataframe after the merge but before the filter has completed.", "response":"As you say, this is pretty easy in SQL, so why not do it in SQL? 
``` import pandas as pd import sqlite3 from datetime import datetime #We'll use firelynx's tables: presidents = pd.DataFrame({\"name\": [\"Bush\", \"Obama\", \"Trump\"], \"president_id\":[43, 44, 45]}) terms = pd.DataFrame({'start_date': pd.date_range('2001-01-20', periods=5, freq='48M'), 'end_date': pd.date_range('2005-01-21', periods=5, freq='48M'), 'president_id': [43, 43, 44, 44, 45]}) war_declarations = pd.DataFrame({\"date\": [datetime(2001, 9, 14), datetime(2003, 3, 3)], \"name\": [\"War in Afghanistan\", \"Iraq War\"]}) #Make the db in memory conn = sqlite3.connect(':memory:') #write the tables terms.to_sql('terms', conn, index=False) presidents.to_sql('presidents', conn, index=False) war_declarations.to_sql('wars', conn, index=False) qry = ''' select start_date PresTermStart, end_date PresTermEnd, wars.date WarStart, presidents.name Pres from terms join wars on date between start_date and end_date join presidents on terms.president_id = presidents.president_id ''' df = pd.read_sql_query(qry, conn) ``` df: ``` PresTermStart PresTermEnd WarStart Pres 0 2001-01-31 00:00:00 2005-01-31 00:00:00 2001-09-14 00:00:00 Bush 1 2001-01-31 00:00:00 2005-01-31 00:00:00 2003-03-03 00:00:00 Bush ```", "best_answers_score":0.8, "library_name":"pandas", "question_url":"https:\/\/stackoverflow.com\/questions\/30627968\/merge-pandas-dataframes-where-one-value-is-between-two-others", "best_answers_votes":97, "question_length":1085, "response_length":1228 }, { "question":"How to slice a pandas DataFrame by position? I have a Pandas Data Frame object that has 1000 rows and 10 columns. I would simply like to slice the Data Frame and take the first 10 rows. How can I do this? I've been trying to use this: ``` >>> df.shape (1000,10) >>> my_slice = df.ix[10,:] >>> my_slice.shape (10,) ``` Shouldn't my_slice be the first ten rows, ie. a 10 x 10 Data Frame? How can I get the first ten rows, such that my_slice is a 10x10 Data Frame object? 
Thanks.", "response":"http:\/\/pandas.pydata.org\/pandas-docs\/stable\/generated\/pandas.DataFrame.head.html?highlight=head#pandas.DataFrame.head ``` df2 = df.head(10) ``` should do the trick", "best_answers_score":0.8, "library_name":"pandas", "question_url":"https:\/\/stackoverflow.com\/questions\/12021754\/how-to-slice-a-pandas-dataframe-by-position", "best_answers_votes":150, "question_length":476, "response_length":163 }, { "question":"Python generating a list of dates between two dates I want to generate a list of dates between two dates and store them in a list in string format. This list is useful to compare with other dates I have. My code is given below: ``` from datetime import date, timedelta sdate = date(2019,3,22) # start date edate = date(2019,4,9) # end date def dates_bwn_twodates(start_date, end_date): for n in range(int ((end_date - start_date).days)): yield start_date + timedelta(n) print(dates_bwn_twodates(sdate,edate)) ``` My present output: ``` ``` My expected output: ``` ['2019-03-22',.....,'2019-04-08'] ``` Something wrong in my code.", "response":"You can use pandas.date_range() for this: ```py import pandas pandas.date_range(sdate,edate-timedelta(days=1),freq='d') ``` ``` DatetimeIndex(['2019-03-22', '2019-03-23', '2019-03-24', '2019-03-25', '2019-03-26', '2019-03-27', '2019-03-28', '2019-03-29', '2019-03-30', '2019-03-31', '2019-04-01', '2019-04-02', '2019-04-03', '2019-04-04', '2019-04-05', '2019-04-06', '2019-04-07', '2019-04-08'], dtype='datetime64[ns]', freq='D') ```", "best_answers_score":0.8, "library_name":"pandas", "question_url":"https:\/\/stackoverflow.com\/questions\/59882714\/python-generating-a-list-of-dates-between-two-dates", "best_answers_votes":150, "question_length":630, "response_length":433 }, { "question":"Correlation heatmap I want to represent correlation matrix using a heatmap. There is something called correlogram in R, but I don't think there's such a thing in Python. How can I do this? 
The values go from -1 to 1, for example: ``` [[ 1. 0.00279981 0.95173379 0.02486161 -0.00324926 -0.00432099] [ 0.00279981 1. 0.17728303 0.64425774 0.30735071 0.37379443] [ 0.95173379 0.17728303 1. 0.27072266 0.02549031 0.03324756] [ 0.02486161 0.64425774 0.27072266 1. 0.18336236 0.18913512] [-0.00324926 0.30735071 0.02549031 0.18336236 1. 0.77678274] [-0.00432099 0.37379443 0.03324756 0.18913512 0.77678274 1. ]] ``` I was able to produce the following heatmap based on another question, but the problem is that my values get 'cut' at 0, so I would like to have a map which goes from blue(-1) to red(1), or something like that, but here values below 0 are not presented in an adequate way. Here's the code for that: ``` plt.imshow(correlation_matrix,cmap='hot',interpolation='nearest') ```", "response":"Another alternative is to use the heatmap function in seaborn to plot the covariance. This example uses the 'mpg' data set from seaborn. ``` import seaborn as sns %matplotlib inline # load the Auto dataset auto_df = sns.load_dataset('mpg') # calculate the correlation matrix on the numeric columns corr = auto_df.select_dtypes('number').corr() # plot the heatmap sns.heatmap(corr) ``` If you wanted to be even more fancy, you can use Pandas Style, for example: ``` cmap = sns.diverging_palette(5, 250, as_cmap=True) def magnify(): return [dict(selector=\"th\", props=[(\"font-size\", \"7pt\")]), dict(selector=\"td\", props=[('padding', \"0em 0em\")]), dict(selector=\"th:hover\", props=[(\"font-size\", \"12pt\")]), dict(selector=\"tr:hover td:hover\", props=[('max-width', '200px'), ('font-size', '12pt')]) ] corr.style.background_gradient(cmap, axis=1)\\ .format(precision=3)\\ .set_properties(**{'max-width': '80px', 'font-size': '10pt'})\\ .set_caption(\"Hover to magify\")\\ .set_table_styles(magnify()) ```", "best_answers_score":0.8, "library_name":"pandas", "question_url":"https:\/\/stackoverflow.com\/questions\/39409866\/correlation-heatmap", "best_answers_votes":126, 
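For the plain-matplotlib imshow route from the question, a minimal sketch (with a made-up 2x2 matrix) that keeps negative values visible by pinning the color limits symmetrically on a diverging colormap:

```python
import numpy as np
import matplotlib
matplotlib.use('Agg')  # headless backend so the sketch runs anywhere
import matplotlib.pyplot as plt

corr = np.array([[1.0, -0.5], [-0.5, 1.0]])  # made-up correlation matrix
# A diverging map centered on 0, with limits fixed at -1..1 so values
# below zero are not "cut" the way the default scaling made them appear.
im = plt.imshow(corr, cmap='coolwarm', vmin=-1, vmax=1, interpolation='nearest')
plt.colorbar(im)
print(im.get_clim())  # (-1.0, 1.0)
```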
"question_length":981, "response_length":989 }, { "question":"Pandas - check if ALL values are NaN in Series I have a data series which looks like this: ``` print mydf id_L1 2 NaN 3 NaN 4 NaN 5 NaN 6 NaN 7 NaN 8 NaN ``` I would like to check if all the values are NaN. My attempt: ``` pd.isnull(mydf).all() ``` Output: ``` True ``` Is this the correct way to do it?", "response":"Yes, that's correct, but I think a more idiomatic way would be: ``` mydf.isnull().all() ```", "best_answers_score":0.8, "library_name":"pandas", "question_url":"https:\/\/stackoverflow.com\/questions\/33147158\/pandas-check-if-all-values-are-nan-in-series", "best_answers_votes":163, "question_length":303, "response_length":90 }, { "question":"Pandas column values to columns? [duplicate] This question already has answers here: How can I pivot a dataframe? [closed] (5 answers) Closed 3 years ago. I've seen a few variations on the theme of exploding a column\/series into multiple columns of a Pandas dataframe, but I've been trying to do something and not really succeeding with the existing approaches. 
Given a DataFrame like so: ``` key val id 2 foo oranges 2 bar bananas 2 baz apples 3 foo grapes 3 bar kiwis ``` I want to convert the items in the key series into columns, with the val values serving as the values, like so: ``` foo bar baz id 2 oranges bananas apples 3 grapes kiwis NaN ``` I feel like this is something that should be relatively straightforward, but I've been bashing my head against this for a few hours now with increasing levels of convolution, and no success.", "response":"There are a few ways: using .pivot_table: ``` >>> df.pivot_table(values='val', index=df.index, columns='key', aggfunc='first') key bar baz foo id 2 bananas apples oranges 3 kiwis NaN grapes ``` using .pivot: ``` >>> df.pivot(index=df.index, columns='key')['val'] key bar baz foo id 2 bananas apples oranges 3 kiwis NaN grapes ``` using .groupby followed by .unstack: ``` >>> df.reset_index().groupby(['id', 'key'])['val'].aggregate('first').unstack() key bar baz foo id 2 bananas apples oranges 3 kiwis NaN grapes ```", "best_answers_score":0.8, "library_name":"pandas", "question_url":"https:\/\/stackoverflow.com\/questions\/26255671\/pandas-column-values-to-columns", "best_answers_votes":153, "question_length":843, "response_length":517 }, { "question":"Getting a warning when using a pyodbc Connection object with pandas I am trying to make sense of the following error that I started getting when I setup my python code to run on a VM server, which has 3.9.5 installed instead of 3.8.5 on my desktop. Not sure that matters, but it could be part of the reason. The error ``` C:\\ProgramData\\Miniconda3\\lib\\site-packages\\pandas\\io\\sql.py:758: UserWarning: pandas only support SQLAlchemy connectable(engine\/connection) or database string URI or sqlite3 DBAPI2 connection other DBAPI2 objects are not tested, please consider using SQLAlchemy warnings.warn( ``` This is within a fairly simple .py file that imports pyodbc & sqlalchemy fwiw. 
A fairly generic\/simple version of sql calls that yields the warning is: ``` myserver_string = \"xxxxxxxxx,nnnn\" db_string = \"xxxxxx\" cnxn = \"Driver={ODBC Driver 17 for SQL Server};Server=tcp:\"+myserver_string+\";Database=\"+db_string +\";TrustServerCertificate=no;Connection Timeout=600;Authentication=ActiveDirectoryIntegrated;\" def readAnyTable(tablename, date): conn = pyodbc.connect(cnxn) query_result = pd.read_sql_query( ''' SELECT * FROM [{0}].[dbo].[{1}] where Asof >= '{2}' '''.format(db_string,tablename,date,), conn) conn.close() return query_result ``` All the examples I have seen using pyodbc in python look fairly similar. Is pyodbc becoming deprecated? Is there a better way to achieve similar results without warning?", "response":"Is pyodbc becoming deprecated? No. For at least the last couple of years pandas' documentation has clearly stated that it wants either a SQLAlchemy Connectable (i.e., an Engine or Connection object), a string containing a SQLAlchemy connection URL, or a SQLite DBAPI connection. (The switch-over to SQLAlchemy was almost universal, but they continued supporting SQLite connections for backwards compatibility.) People have been passing other DBAPI connections (like pyodbc Connection objects) for read operations and pandas hasn't complained \u2026 until now. Is there a better way to achieve similar results without warning? Yes. You can take your existing ODBC connection string and use it to create a SQLAlchemy Engine object as described in the SQLAlchemy 1.4 documentation: ```py from sqlalchemy.engine import URL connection_string = \"DRIVER={ODBC Driver 17 for SQL Server};SERVER=dagger;DATABASE=test;UID=user;PWD=password\" connection_url = URL.create(\"mssql+pyodbc\", query={\"odbc_connect\": connection_string}) from sqlalchemy import create_engine engine = create_engine(connection_url) ``` Then use the SQLAlchemy engine to work with the pandas methods you require. 
For example, with SQLAlchemy 2.0 and pandas 1.5.3: ```py import pandas as pd import sqlalchemy as sa # \u2026 with engine.begin() as conn: df = pd.read_sql_query(sa.text(\"SELECT 'thing' as txt\"), conn) ```", "best_answers_score":0.8, "library_name":"pandas", "question_url":"https:\/\/stackoverflow.com\/questions\/71082494\/getting-a-warning-when-using-a-pyodbc-connection-object-with-pandas", "best_answers_votes":101, "question_length":1414, "response_length":1368 }, { "question":"Merge multiple column values into one column in python pandas I have a pandas data frame like this: ``` Column1 Column2 Column3 Column4 Column5 0 a 1 2 3 4 1 a 3 4 5 2 b 6 7 8 3 c 7 7 ``` What I want to do now is getting a new dataframe containing Column1 and a new columnA. This columnA should contain all values from columns 2 -(to) n (where n is the number of columns from Column2 to the end of the row) like this: ``` Column1 ColumnA 0 a 1,2,3,4 1 a 3,4,5 2 b 6,7,8 3 c 7,7 ``` How could I best approach this issue?", "response":"You can call apply pass axis=1 to apply row-wise, then convert the dtype to str and join: ``` In [153]: df['ColumnA'] = df[df.columns[1:]].apply( lambda x: ','.join(x.dropna().astype(str)), axis=1 ) df Out[153]: Column1 Column2 Column3 Column4 Column5 ColumnA 0 a 1 2 3 4 1,2,3,4 1 a 3 4 5 NaN 3,4,5 2 b 6 7 8 NaN 6,7,8 3 c 7 7 NaN NaN 7,7 ``` Here I call dropna to get rid of the NaN, however we need to cast again to int so we don't end up with floats as str.", "best_answers_score":0.8, "library_name":"pandas", "question_url":"https:\/\/stackoverflow.com\/questions\/33098383\/merge-multiple-column-values-into-one-column-in-python-pandas", "best_answers_votes":161, "question_length":519, "response_length":461 }, { "question":"Return Pandas dataframe from PostgreSQL query with sqlalchemy I want to query a PostgreSQL database and return the output as a Pandas dataframe. 
I created a connection to the database with 'SqlAlchemy': ``` from sqlalchemy import create_engine engine = create_engine('postgresql:\/\/user@localhost:5432\/mydb') ``` I write a Pandas dataframe to a database table: ``` i=pd.read_csv(path) i.to_sql('Stat_Table',engine,if_exists='replace') ``` Based on the docs, looks like pd.read_sql_query() should accept a SQLAlchemy engine: ``` a=pd.read_sql_query('select * from Stat_Table',con=engine) ``` But it throws an error: ``` ProgrammingError: (ProgrammingError) relation \"stat_table\" does not exist ``` I'm using Pandas version 0.14.1. What's the right way to do this?", "response":"You are bitten by the case (in)sensitivity issues with PostgreSQL. If you quote the table name in the query, it will work: ``` df = pd.read_sql_query('select * from \"Stat_Table\"',con=engine) ``` But personally, I would advise to just always use lower case table names (and column names), also when writing the table to the database to prevent such issues. From the PostgreSQL docs (http:\/\/www.postgresql.org\/docs\/8.0\/static\/sql-syntax.html#SQL-SYNTAX-IDENTIFIERS): Quoting an identifier also makes it case-sensitive, whereas unquoted names are always folded to lower case To explain a bit more: you have written a table with the name Stat_Table to the database (and sqlalchemy will quote this name, so it will be written as \"Stat_Table\" in the postgres database). When doing the query 'select * from Stat_Table' the unquoted table name will be converted to lower case stat_table, and so you get the message that this table is not found. 
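The lower-case-names advice in a self-contained round trip (SQLite stands in here purely so the sketch runs without a server; the case folding itself is PostgreSQL-specific):

```python
import sqlite3
import pandas as pd

conn = sqlite3.connect(':memory:')
df = pd.DataFrame({'value': [10000.0, 2000.0]})

# Writing with an all-lowercase table name means the unquoted query below
# needs no identifier quoting on any backend.
df.to_sql('stat_table', conn, index=False)
out = pd.read_sql_query('select * from stat_table', con=conn)
print(out['value'].tolist())  # [10000.0, 2000.0]
```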
See eg also Are PostgreSQL column names case-sensitive?", "best_answers_score":0.8, "library_name":"pandas", "question_url":"https:\/\/stackoverflow.com\/questions\/27884268\/return-pandas-dataframe-from-postgresql-query-with-sqlalchemy", "best_answers_votes":115, "question_length":761, "response_length":992 }, { "question":"Find the end of the month of a Pandas DataFrame Series I have a series within a DataFrame that I read in initially as an object, and then need to convert it to a date in the form of yyyy-mm-dd where dd is the end of the month. As an example, I have DataFrame df with a column Date as an object: ``` ... Date ... ... 200104 ... ... 200508 ... ``` What I want when this is all said and done is a date object: ``` ... Date ... ... 2001-04-30 ... ... 2005-08-31 ... ``` such that df['Date'].item() returns ``` datetime.date(2001, 04, 30) ``` I've used the following code to get almost there, but all my dates are at the beginning of the month, not the end. Please advise. ``` df['Date'] = pd.to_datetime(df['Date'], format=\"%Y%m\").dt.date ``` Note: I've already imported Pandas (pd), and datetime (dt)", "response":"You can use pandas.tseries.offsets.MonthEnd: ``` from pandas.tseries.offsets import MonthEnd df['Date'] = pd.to_datetime(df['Date'], format=\"%Y%m\") + MonthEnd(0) ``` The 0 in MonthEnd just specifies to roll forward to the end of the given month. Note that if we'd used MonthEnd(1), then we'd have got the next date which is at the end of the month. If you wanted the last day of the next month, you'd then add an extra MonthEnd(1), etc. This should work for any month, so you don't need to know the number days in the month, or anything like that. More offset information can be found in the documentation. 
Example usage and output: ``` df = pd.DataFrame({'Date': [200104, 200508, 201002, 201602, 199912, 200611]}) df['EndOfMonth'] = pd.to_datetime(df['Date'], format=\"%Y%m\") + MonthEnd(1) Date EndOfMonth 0 200104 2001-04-30 1 200508 2005-08-31 2 201002 2010-02-28 3 201602 2016-02-29 4 199912 1999-12-31 5 200611 2006-11-30 ```", "best_answers_score":0.8, "library_name":"pandas", "question_url":"https:\/\/stackoverflow.com\/questions\/37354105\/find-the-end-of-the-month-of-a-pandas-dataframe-series", "best_answers_votes":188, "question_length":797, "response_length":929 }, { "question":"Correct way to set new column in pandas DataFrame to avoid SettingWithCopyWarning Trying to create a new column in the netc df but i get the warning ``` netc[\"DeltaAMPP\"] = netc.LOAD_AM - netc.VPP12_AM C:\\Anaconda\\lib\\site-packages\\ipykernel\\__main__.py:1: SettingWithCopyWarning: A value is trying to be set on a copy of a slice from a DataFrame. Try using .loc[row_indexer,col_indexer] = value instead ``` whats the proper way to create a field in the newer version of Pandas to avoid getting the warning? ``` pd.__version__ Out[45]: u'0.19.2+0.g825876c.dirty' ```", "response":"Your example is incomplete, as it doesn't show where netc comes from. It is likely that netc itself is the product of slicing, and as such Pandas cannot make guarantees that it isn't a view or a copy. For example, if you're doing this: ```py netc = netb[netb[\"DeltaAMPP\"] == 0] netc[\"DeltaAMPP\"] = netc.LOAD_AM - netc.VPP12_AM ``` then Pandas wouldn't know if netc is a view or a copy. If it were a one-liner, it would effectively be like this: ``` netb[netb[\"DeltaAMPP\"] == 0][\"DeltaAMPP\"] = netc.LOAD_AM - netc.VPP12_AM ``` where you can see the double indexing more clearly. 
If you want to make netc separate from netb, one possible remedy might be to force a copy in the first line (the loc is to make sure we're not copying two times), like: ``` netc = netb.loc[netb[\"DeltaAMPP\"] == 0].copy() ``` If, on the other hand, you want to have netb modified with the new column, you may do: ``` netb.loc[netb[\"DeltaAMPP\"] == 0, \"DeltaAMPP\"] = netc.LOAD_AM - netc.VPP12_AM ```", "best_answers_score":0.8, "library_name":"pandas", "question_url":"https:\/\/stackoverflow.com\/questions\/42379818\/correct-way-to-set-new-column-in-pandas-dataframe-to-avoid-settingwithcopywarnin", "best_answers_votes":66, "question_length":566, "response_length":973 }, { "question":"Correct way to set value on a slice in pandas [duplicate] This question already has answers here: How to deal with SettingWithCopyWarning in Pandas (27 answers) Closed 9 years ago. I have a pandas dataframe: data. it has columns [\"name\", 'A', 'B'] What I want to do (and works) is: ``` d2 = data[data['name'] == 'fred'] #This gives me multiple rows d2['A'] = 0 ``` This will set the column A on the fred rows to 0. I've also done: ``` indexes = d2.index data['A'][indexes] = 0 ``` However, both give me the same warning: ``` \/Users\/brianp\/work\/cyan\/venv\/lib\/python2.7\/site-packages\/pandas\/core\/indexing.py:128: SettingWithCopyWarning: A value is trying to be set on a copy of a slice from a DataFrame See the caveats in the documentation: http:\/\/pandas.pydata.org\/pandas-docs\/stable\/indexing.html#indexing-view-versus-copy ``` How does pandas WANT me to do this?", "response":"This is a very common warning from pandas. It means you are writing in a copy slice, not the original data so it might not apply to the original columns due to confusing chained assignment. Please read this post. It has detailed discussion on this SettingWithCopyWarning. 
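To see concretely why the two-step slice-then-assign form is unreliable, a minimal sketch with made-up data:

```python
import pandas as pd

data = pd.DataFrame({'name': ['fred', 'jane'], 'A': [1, 2]})

d2 = data[data['name'] == 'fred']  # boolean indexing returns a copy
d2['A'] = 0                        # may warn; only the copy is modified
print(data.loc[0, 'A'])  # still 1; the original frame is unchanged
```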
In your case I think you can try ``` data.loc[data['name'] == 'fred', 'A'] = 0 ```", "best_answers_score":0.8, "library_name":"pandas", "question_url":"https:\/\/stackoverflow.com\/questions\/37841525\/correct-way-to-set-value-on-a-slice-in-pandas", "best_answers_votes":153, "question_length":862, "response_length":354 }, { "question":"How to display custom values on a bar plot I'm looking to see how to do two things in Seaborn with using a bar chart to display values that are in the dataframe, but not in the graph. I'm looking to display the values of one field in a dataframe while graphing another. For example, below, I'm graphing 'tip', but I would like to place the value of 'total_bill' centered above each of the bars (i.e.325.88 above Friday, 1778.40 above Saturday, etc.) Is there a way to scale the colors of the bars, with the lowest value of 'total_bill' having the lightest color (in this case Friday) and the highest value of 'total_bill' having the darkest? Obviously, I'd stick with one color (i.e., blue) when I do the scaling. While I see that others think that this is a duplicate of another problem (or two), I am missing the part of how I use a value that is not in the graph as the basis for the label or the shading. How do I say, use total_bill as the basis. I'm sorry, but I just can't figure it out based on those answers. 
Starting with the following code, ``` import pandas as pd import seaborn as sns %matplotlib inline df = pd.read_csv(\"https:\/\/raw.githubusercontent.com\/wesm\/pydata-book\/1st-edition\/ch08\/tips.csv\", sep=',') groupedvalues = df.groupby('day').sum().reset_index() g = sns.barplot(x='day', y='tip', data=groupedvalues) ``` I get the following result: Interim Solution: ``` for index, row in groupedvalues.iterrows(): g.text(row.name, row.tip, round(row.total_bill, 2), color='black', ha=\"center\") ``` On the shading, using the example below, I tried the following: ``` import pandas as pd import seaborn as sns %matplotlib inline df = pd.read_csv(\"https:\/\/raw.githubusercontent.com\/wesm\/pydata-book\/1st-edition\/ch08\/tips.csv\", sep=',') groupedvalues = df.groupby('day').sum().reset_index() pal = sns.color_palette(\"Greens_d\", len(data)) rank = groupedvalues.argsort().argsort() g = sns.barplot(x='day', y='tip', data=groupedvalues) for index, row in groupedvalues.iterrows(): g.text(row.name, row.tip, round(row.total_bill, 2), color='black', ha=\"center\") ``` But that gave me the following error: AttributeError: 'DataFrame' object has no attribute 'argsort' So I tried a modification: ``` import pandas as pd import seaborn as sns %matplotlib inline df = pd.read_csv(\"https:\/\/raw.githubusercontent.com\/wesm\/pydata-book\/1st-edition\/ch08\/tips.csv\", sep=',') groupedvalues = df.groupby('day').sum().reset_index() pal = sns.color_palette(\"Greens_d\", len(data)) rank = groupedvalues['total_bill'].rank(ascending=True) g = sns.barplot(x='day', y='tip', data=groupedvalues, palette=np.array(pal[::-1])[rank]) ``` and that leaves me with IndexError: index 4 is out of bounds for axis 0 with size 4", "response":"New in matplotlib 3.4.0 There is now a built-in Axes.bar_label to automatically label bar containers: ```py ax = sns.barplot(x='day', y='tip', data=groupedvalues) ax.bar_label(ax.containers[0]) # only 1 container needed unless using `hue` ``` For custom labels 
(e.g., tip bars with total_bill values), use the labels parameter: ```py ax = sns.barplot(x='day', y='tip', data=groupedvalues) ax.bar_label(ax.containers[0], labels=groupedvalues['total_bill']) # ---------------------------------- ``` For multi-group bar plots (i.e., with hue), there will be multiple bar containers that need to be iterated: ```py ax = sns.barplot(x='day', y='tip', hue='sex', data=df) for container in ax.containers: ax.bar_label(container) ``` More details: How to label percentage counts (fmt param) How to rotate labels (rotation param) How to vertically center labels (label_type param) How to add spacing to labels (padding param) Color-ranked version Is there a way to scale the colors of the bars, with the lowest value of total_bill having the lightest color (in this case Friday) and the highest value of total_bill having the darkest? Find the rank of each total_bill value: Either use Series.sort_values: ```py ranks = groupedvalues.total_bill.sort_values().index # Int64Index([1, 0, 3, 2], dtype='int64') ``` Or condense Ernest's Series.rank version by chaining Series.sub: ```py ranks = groupedvalues.total_bill.rank().sub(1).astype(int).array # [1, 0, 3, 2] ``` Then reindex the color palette using ranks: ```py palette = sns.color_palette('Blues_d', len(ranks)) ax = sns.barplot(x='day', y='tip', palette=np.array(palette)[ranks], data=groupedvalues) ```", "best_answers_score":0.8, "library_name":"pandas", "question_url":"https:\/\/stackoverflow.com\/questions\/43214978\/how-to-display-custom-values-on-a-bar-plot", "best_answers_votes":165, "question_length":2703, "response_length":1650 }, { "question":"How to convert list of model objects to pandas dataframe? I have an array of objects of this class ``` class CancerDataEntity(Model): age = columns.Text(primary_key=True) gender = columns.Text(primary_key=True) cancer = columns.Text(primary_key=True) deaths = columns.Integer() ... 
``` When printed, array looks like this ``` [CancerDataEntity(age=u'80-85+', gender=u'Female', cancer=u'All cancers (C00-97,B21)', deaths=15306), CancerDataEntity(... ``` I want to convert this to a data frame so I can play with it in a more suitable way to me - to aggregate, count, sum and similar. How I wish this data frame to look, would be something like this: ``` age gender cancer deaths 0 80-85+ Female ... 15306 1 ... ``` Is there a way to achieve this using numpy\/pandas easily, without manually processing the input array?", "response":"A much cleaner way to to this is to define a to_dict method on your class and then use pandas.DataFrame.from_records ``` class Signal(object): def __init__(self, x, y): self.x = x self.y = y def to_dict(self): return { 'x': self.x, 'y': self.y, } ``` e.g. ``` In [87]: signals = [Signal(3, 9), Signal(4, 16)] In [88]: pandas.DataFrame.from_records([s.to_dict() for s in signals]) Out[88]: x y 0 3 9 1 4 16 ```", "best_answers_score":0.8, "library_name":"pandas", "question_url":"https:\/\/stackoverflow.com\/questions\/34997174\/how-to-convert-list-of-model-objects-to-pandas-dataframe", "best_answers_votes":123, "question_length":816, "response_length":409 }, { "question":"Importing data from a MySQL database into a Pandas data frame including column names [duplicate] This question already has answers here: How to convert SQL Query result to PANDAS Data Structure? (18 answers) Closed 9 years ago. I am importing data from a MySQL database into a Pandas data frame. 
The following excerpt is the code that I am using: ``` import mysql.connector as sql import pandas as pd db_connection = sql.connect(host='hostname', database='db_name', user='username', password='password') db_cursor = db_connection.cursor() db_cursor.execute('SELECT * FROM table_name') table_rows = db_cursor.fetchall() df = pd.DataFrame(table_rows) ``` When I print the data frame it does properly represent the data but my question is, is it possible to also keep the column names? Here is an example output: ``` 0 1 2 3 4 5 6 7 8 0 :ID[giA0CqQcx+(9kbuSKV== NaN NaN None None None None None None 1 lXB+jIS)DN!CXmj>0(P8^]== NaN NaN None None None None None None 2 lXB+jIS)DN!CXmj>0(P8^]== NaN NaN None None None None None None 3 lXB+jIS)DN!CXmj>0(P8^]== NaN NaN None None None None None None 4 lXB+jIS)DN!CXmj>0(P8^]== NaN NaN None None None None None None ``` What I would like to do is keep the column name, which would replace the pandas column indexes. For example, instead of having 0, the column name would be: \"First_column\" as in the MySQL table. Is there a good way to go about this? or is there a more efficient approach of importing data from MySQL into a Pandas data frame than mine?", "response":"IMO it would be much more efficient to use pandas for reading data from your MySQL server: ``` from sqlalchemy import create_engine import pandas as pd db_connection_str = 'mysql+pymysql:\/\/mysql_user:mysql_password@mysql_host\/mysql_db' db_connection = create_engine(db_connection_str) df = pd.read_sql('SELECT * FROM table_name', con=db_connection) ``` this should also take care of column names...", "best_answers_score":0.8, "library_name":"pandas", "question_url":"https:\/\/stackoverflow.com\/questions\/37730243\/importing-data-from-a-mysql-database-into-a-pandas-data-frame-including-column-n", "best_answers_votes":203, "question_length":1495, "response_length":398 }, { "question":"How to get number of groups in a groupby object in pandas? 
I want to know how many unique groups I have to perform calculations. Given a groupby object called dfgroup, how do we find the number of groups?", "response":"Simple, Fast, and Pandaic: ngroups Newer versions of the groupby API (pandas >= 0.23) provide this (undocumented) attribute which stores the number of groups in a GroupBy object. ``` # setup df = pd.DataFrame({'A': list('aabbcccd')}) dfg = df.groupby('A') ``` ``` # call `.ngroups` on the GroupBy object dfg.ngroups # 4 ``` Note that this is different from GroupBy.groups which returns the actual groups themselves. Why should I prefer this over len? As noted in BrenBarn's answer, you could use len(dfg) to get the number of groups. But you shouldn't. Looking at the implementation of GroupBy.__len__ (which is what len() calls internally), we see that __len__ makes a call to GroupBy.groups, which returns a dictionary of grouped indices: ``` dfg.groups {'a': Int64Index([0, 1], dtype='int64'), 'b': Int64Index([2, 3], dtype='int64'), 'c': Int64Index([4, 5, 6], dtype='int64'), 'd': Int64Index([7], dtype='int64')} ``` Depending on the number of groups in your operation, generating the dictionary only to find its length is a wasteful step. ngroups on the other hand is a stored property that can be accessed in constant time. This has been documented in GroupBy object attributes. The issue with len, however, is that for a GroupBy object with a lot of groups, this can take a lot longer. But what if I actually want the size of each group? You're in luck. We have a function for that, it's called GroupBy.size. But please note that size counts NaNs as well. 
If you don't want NaNs counted, use GroupBy.count instead.", "best_answers_score":0.8, "library_name":"pandas", "question_url":"https:\/\/stackoverflow.com\/questions\/27787930\/how-to-get-number-of-groups-in-a-groupby-object-in-pandas", "best_answers_votes":132, "question_length":204, "response_length":1519 }, { "question":"How to iterate over pandas multiindex dataframe using index I have a data frame df which looks like this. Date and Time are 2 multilevel index ``` observation1 observation2 date Time 2012-11-02 9:15:00 79.373668 224 9:16:00 130.841316 477 2012-11-03 9:15:00 45.312814 835 9:16:00 123.776946 623 9:17:00 153.76646 624 9:18:00 463.276946 626 9:19:00 663.176934 622 9:20:00 763.77333 621 2012-11-04 9:15:00 115.449437 122 9:16:00 123.776946 555 9:17:00 153.76646 344 9:18:00 463.276946 212 ``` I want to run some complex process over daily data block. Pseudo code would look like ``` for count in df(level 0 index) : new_df = get only chunk for count complex_process(new_df) ``` So, first of all, I could not find a way to access only blocks for a date ``` 2012-11-03 9:15:00 45.312814 835 9:16:00 123.776946 623 9:17:00 153.76646 624 9:18:00 463.276946 626 9:19:00 663.176934 622 9:20:00 763.77333 621 ``` and then send it for processing. I am doing this in for loop as I am not sure if there is any way to do it without mentioning exact value of level 0 column. I did some basic search and found df.index.get_level_values(0), but it returns all the values and that causes loop to run multiple times for a given day. I want to create a Dataframe per day and send it for processing.", "response":"One easy way would be to groupby the first level of the index - iterating over the groupby object will return the group keys and a subframe containing each group. 
``` In [136]: for date, new_df in df.groupby(level=0): ...: print(new_df) ...: observation1 observation2 date Time 2012-11-02 9:15:00 79.373668 224 9:16:00 130.841316 477 observation1 observation2 date Time 2012-11-03 9:15:00 45.312814 835 9:16:00 123.776946 623 9:17:00 153.766460 624 9:18:00 463.276946 626 9:19:00 663.176934 622 9:20:00 763.773330 621 observation1 observation2 date Time 2012-11-04 9:15:00 115.449437 122 9:16:00 123.776946 555 9:17:00 153.766460 344 9:18:00 463.276946 212 ``` You can also use droplevel to remove the first index (the useless date index): ``` In [136]: for date, new_df in df.groupby(level=0): ...: print(new_df.droplevel(0)) ...: observation1 observation2 Time 9:15:00 79.373668 224 9:16:00 130.841316 477 ... ```", "best_answers_score":0.8, "library_name":"pandas", "question_url":"https:\/\/stackoverflow.com\/questions\/25929319\/how-to-iterate-over-pandas-multiindex-dataframe-using-index", "best_answers_votes":158, "question_length":1279, "response_length":915 }, { "question":"groupby weighted average and sum in pandas dataframe I have a dataframe: ``` Out[78]: contract month year buys adjusted_lots price 0 W Z 5 Sell -5 554.85 1 C Z 5 Sell -3 424.50 2 C Z 5 Sell -2 424.00 3 C Z 5 Sell -2 423.75 4 C Z 5 Sell -3 423.50 5 C Z 5 Sell -2 425.50 6 C Z 5 Sell -3 425.25 7 C Z 5 Sell -2 426.00 8 C Z 5 Sell -2 426.75 9 CC U 5 Buy 5 3328.00 10 SB V 5 Buy 5 11.65 11 SB V 5 Buy 5 11.64 12 SB V 5 Buy 2 11.60 ``` I need a sum of adjusted_lots , price which is weighted average , of price and adjusted_lots , grouped by all the other columns , ie. grouped by (contract, month , year and buys) Similar solution on R was achieved by following code, using dplyr, however unable to do the same in pandas. 
``` > newdf = df %>% select ( contract , month , year , buys , adjusted_lots , price ) %>% group_by( contract , month , year , buys) %>% summarise(qty = sum( adjusted_lots) , avgpx = weighted.mean(x = price , w = adjusted_lots) , comdty = \"Comdty\" ) > newdf Source: local data frame [4 x 6] contract month year comdty qty avgpx 1 C Z 5 Comdty -19 424.8289 2 CC U 5 Comdty 5 3328.0000 3 SB V 5 Comdty 12 11.6375 4 W Z 5 Comdty -5 554.8500 ``` is the same possible by groupby or any other solution ?", "response":"EDIT: update aggregation so it works with recent version of pandas To pass multiple functions to a groupby object, you need to pass a tuples with the aggregation functions and the column to which the function applies: ``` # Define a lambda function to compute the weighted mean: wm = lambda x: np.average(x, weights=df.loc[x.index, \"adjusted_lots\"]) # Define a dictionary with the functions to apply for a given column: # the following is deprecated since pandas 0.20: # f = {'adjusted_lots': ['sum'], 'price': {'weighted_mean' : wm} } # df.groupby([\"contract\", \"month\", \"year\", \"buys\"]).agg(f) # Groupby and aggregate with namedAgg [1]: df.groupby([\"contract\", \"month\", \"year\", \"buys\"]).agg(adjusted_lots=(\"adjusted_lots\", \"sum\"), price_weighted_mean=(\"price\", wm)) adjusted_lots price_weighted_mean contract month year buys C Z 5 Sell -19 424.828947 CC U 5 Buy 5 3328.000000 SB V 5 Buy 12 11.637500 W Z 5 Sell -5 554.850000 ``` You can see more here: http:\/\/pandas.pydata.org\/pandas-docs\/stable\/groupby.html#applying-multiple-functions-at-once and in a similar question here: Apply multiple functions to multiple groupby columns [1] : https:\/\/pandas.pydata.org\/pandas-docs\/stable\/whatsnew\/v0.25.0.html#groupby-aggregation-with-relabeling", "best_answers_score":0.8, "library_name":"pandas", "question_url":"https:\/\/stackoverflow.com\/questions\/31521027\/groupby-weighted-average-and-sum-in-pandas-dataframe", "best_answers_votes":183, 
"question_length":1215, "response_length":1239 }, { "question":"Modify the legend of pandas bar plot I am always bothered when I make a bar plot with pandas and I want to change the names of the labels in the legend. Consider for instance the output of this code: ``` import pandas as pd from matplotlib.pyplot import * df = pd.DataFrame({'A':26, 'B':20}, index=['N']) df.plot(kind='bar') ``` Now, if I want to change the name in the legend, I would usually try to do: ``` legend(['AAA', 'BBB']) ``` But I end up with this: In fact, the first dashed line seems to correspond to an additional patch. So I wonder if there is a simple trick here to change the labels, or do I need to plot each of the columns independently with matplotlib and set the labels myself. Thanks.", "response":"To change the labels for Pandas df.plot() use ax.legend([...]): ``` import pandas as pd import matplotlib.pyplot as plt fig, ax = plt.subplots() df = pd.DataFrame({'A':26, 'B':20}, index=['N']) df.plot(kind='bar', ax=ax) #ax = df.plot(kind='bar') # \"same\" as above ax.legend([\"AAA\", \"BBB\"]); ``` Another approach is to do the same by plt.legend([...]): ``` import matplotlib.pyplot as plt df.plot(kind='bar') plt.legend([\"AAA\", \"BBB\"]); ```", "best_answers_score":0.8, "library_name":"pandas", "question_url":"https:\/\/stackoverflow.com\/questions\/33149428\/modify-the-legend-of-pandas-bar-plot", "best_answers_votes":161, "question_length":706, "response_length":440 }, { "question":"Get index of a row of a pandas dataframe as an integer Assume an easy dataframe, for example ``` A B 0 1 0.810743 1 2 0.595866 2 3 0.154888 3 4 0.472721 4 5 0.894525 5 6 0.978174 6 7 0.859449 7 8 0.541247 8 9 0.232302 9 10 0.276566 ``` How can I retrieve an index value of a row, given a condition? For example: dfb = df[df['A']==5].index.values.astype(int) returns [4], but what I would like to get is just 4. This is causing me troubles later in the code. 
Based on some conditions, I want to have a record of the indexes where that condition is fulfilled, and then select rows between. I tried ``` dfb = df[df['A']==5].index.values.astype(int) dfbb = df[df['A']==8].index.values.astype(int) df.loc[dfb:dfbb,'B'] ``` for a desired output ``` A B 4 5 0.894525 5 6 0.978174 6 7 0.859449 ``` but I get TypeError: '[4]' is an invalid key", "response":"The easier is add [0] - select first value of list with one element: ``` dfb = df[df['A']==5].index.values.astype(int)[0] dfbb = df[df['A']==8].index.values.astype(int)[0] ``` ``` dfb = int(df[df['A']==5].index[0]) dfbb = int(df[df['A']==8].index[0]) ``` But if possible some values not match, error is raised, because first value not exist. Solution is use next with iter for get default parameetr if values not matched: ``` dfb = next(iter(df[df['A']==5].index), 'no match') print (dfb) 4 dfb = next(iter(df[df['A']==50].index), 'no match') print (dfb) no match ``` Then it seems need substract 1: ``` print (df.loc[dfb:dfbb-1,'B']) 4 0.894525 5 0.978174 6 0.859449 Name: B, dtype: float64 ``` Another solution with boolean indexing or query: ``` print (df[(df['A'] >= 5) & (df['A'] = 5) & (df['A'] = 5 and A < 8')) A B 4 5 0.894525 5 6 0.978174 6 7 0.859449 ```", "best_answers_score":0.8, "library_name":"pandas", "question_url":"https:\/\/stackoverflow.com\/questions\/41217310\/get-index-of-a-row-of-a-pandas-dataframe-as-an-integer", "best_answers_votes":109, "question_length":834, "response_length":864 }, { "question":"I want to multiply two columns in a pandas DataFrame and add the result into a new column I'm trying to multiply two existing columns in a pandas Dataframe (orders_df): Prices (stock close price) and Amount (stock quantities) and add the calculation to a new column called Value. For some reason when I run this code, all the rows under the Value column are positive numbers, while some of the rows should be negative. 
Under the Action column in the DataFrame there are seven rows with the 'Sell' string and seven with the 'Buy' string. ``` for i in orders_df.Action: if i == 'Sell': orders_df['Value'] = orders_df.Prices*orders_df.Amount elif i == 'Buy': orders_df['Value'] = -orders_df.Prices*orders_df.Amount ``` Please let me know what I'm doing wrong!", "response":"I think an elegant solution is to use the where method (also see the API docs): ``` In [37]: values = df.Prices * df.Amount In [38]: df['Values'] = values.where(df.Action == 'Sell', other=-values) In [39]: df Out[39]: Prices Amount Action Values 0 3 57 Sell 171 1 89 42 Sell 3738 2 45 70 Buy -3150 3 6 43 Sell 258 4 60 47 Sell 2820 5 19 16 Buy -304 6 56 89 Sell 4984 7 3 28 Buy -84 8 56 69 Sell 3864 9 90 49 Buy -4410 ``` Furthermore, this should be the fastest solution.", "best_answers_score":0.8, "library_name":"pandas", "question_url":"https:\/\/stackoverflow.com\/questions\/14059094\/i-want-to-multiply-two-columns-in-a-pandas-dataframe-and-add-the-result-into-a-n", "best_answers_votes":99, "question_length":758, "response_length":471 }, { "question":"LabelEncoder: TypeError: '>' not supported between instances of 'float' and 'str' I'm facing this error for multiple variables even after treating missing values. 
For example: ``` le = preprocessing.LabelEncoder() categorical = list(df.select_dtypes(include=['object']).columns.values) for cat in categorical: print(cat) df[cat].fillna('UNK', inplace=True) df[cat] = le.fit_transform(df[cat]) # print(le.classes_) # print(le.transform(le.classes_)) --------------------------------------------------------------------------- TypeError Traceback (most recent call last) in () 4 print(cat) 5 df[cat].fillna('UNK', inplace=True) ----> 6 df[cat] = le.fit_transform(df[cat].fillna('UNK')) 7 # print(le.classes_) 8 # print(le.transform(le.classes_)) C:\\Users\\paula.ceccon.ribeiro\\AppData\\Local\\Continuum\\Anaconda3\\lib\\site-packages\\sklearn\\preprocessing\\label.py in fit_transform(self, y) 129 y = column_or_1d(y, warn=True) 130 _check_numpy_unicode_bug(y) --> 131 self.classes_, y = np.unique(y, return_inverse=True) 132 return y 133 C:\\Users\\paula.ceccon.ribeiro\\AppData\\Local\\Continuum\\Anaconda3\\lib\\site-packages\\numpy\\lib\\arraysetops.py in unique(ar, return_index, return_inverse, return_counts) 209 210 if optional_indices: --> 211 perm = ar.argsort(kind='mergesort' if return_index else 'quicksort') 212 aux = ar[perm] 213 else: TypeError: '>' not supported between instances of 'float' and 'str' ``` Checking the variable that lead to the error results ins: ``` df['CRM do M\u00e9dico'].isnull().sum() 0 ``` Besides nan values, what could be causing this error?", "response":"This is due to the series df[cat] containing elements that have varying data types e.g.(strings and\/or floats). This could be due to the way the data is read, i.e. numbers are read as float and text as strings or the datatype was float and changed after the fillna operation. 
In other words, the pandas data type 'Object' indicates mixed types rather than str type, so using the following line: ``` df[cat] = le.fit_transform(df[cat].astype(str)) ``` should help", "best_answers_score":0.8, "library_name":"pandas", "question_url":"https:\/\/stackoverflow.com\/questions\/46406720\/labelencoder-typeerror-not-supported-between-instances-of-float-and-str", "best_answers_votes":161, "question_length":1552, "response_length":456 }, { "question":"Check for duplicate values in Pandas dataframe column Is there a way in pandas to check if a dataframe column has duplicate values, without actually dropping rows? I have a function that will remove duplicate rows, however, I only want it to run if there are actually duplicates in a specific column. Currently I compare the number of unique values in the column to the number of rows: if there are fewer unique values than rows then there are duplicates and the code runs. ``` if len(df['Student'].unique()) < len(df.index): # Code to remove duplicates based on Date column runs ``` Is there an easier or more efficient way to check if duplicate values exist in a specific column, using pandas? Some of the sample data I am working with (only two columns shown). If duplicates are found then another function identifies which row to keep (row with oldest date): ``` Student Date 0 Joe December 2017 1 James January 2018 2 Bob April 2018 3 Joe December 2017 4 Jack February 2018 5 Jack March 2018 ```
``` \u2554\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2566\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2557 \u2551 Student \u2551 Date \u2551 \u2560\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u256c\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2563 \u2551 Joe \u2551 December 2017 \u2551 \u2560\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u256c\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2563 \u2551 Bob \u2551 April 2018 \u2551 \u2560\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u256c\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2563 \u2551 Joe \u2551 December 2018 \u2551 \u255a\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2569\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u255d ``` Assuming above dataframe (df), we could do a quick check if duplicated in the Student col by: ``` boolean = not df[\"Student\"].is_unique # True (credit to @Carsten) boolean = df['Student'].duplicated().any() # True ``` Further reading and references Above we are using one of the Pandas Series methods. The pandas DataFrame has several useful methods, two of which are: drop_duplicates(self[, subset, keep, inplace]) - Return DataFrame with duplicate rows removed, optionally only considering certain columns. duplicated(self[, subset, keep]) - Return boolean Series denoting duplicate rows, optionally only considering certain columns. These methods can be applied on the DataFrame as a whole, and not just a Serie (column) as above. The equivalent would be: ``` boolean = df.duplicated(subset=['Student']).any() # True # We were expecting True, as Joe can be seen twice. 
``` However, if we are interested in the whole frame we could go ahead and do: ``` boolean = df.duplicated().any() # False boolean = df.duplicated(subset=['Student','Date']).any() # False # We were expecting False here - no duplicates row-wise # ie. Joe Dec 2017, Joe Dec 2018 ``` And a final useful tip. By using the keep parameter we can normally skip a few rows, directly accessing what we need: keep : {\u2018first\u2019, \u2018last\u2019, False}, default \u2018first\u2019 first : Drop duplicates except for the first occurrence. last : Drop duplicates except for the last occurrence. False : Drop all duplicates. Example to play around with ``` import pandas as pd import io data = '''\\ Student,Date Joe,December 2017 Bob,April 2018 Joe,December 2018''' df = pd.read_csv(io.StringIO(data), sep=',') # Approach 1: Simple True\/False boolean = df.duplicated(subset=['Student']).any() print(boolean, end='\\n\\n') # True # Approach 2: First store boolean array, check then remove duplicate_in_student = df.duplicated(subset=['Student']) if duplicate_in_student.any(): print(df.loc[~duplicate_in_student], end='\\n\\n') # Approach 3: Use drop_duplicates method df.drop_duplicates(subset=['Student'], inplace=True) print(df) ``` Returns ``` True Student Date 0 Joe December 2017 1 Bob April 2018 Student Date 0 Joe December 2017 1 Bob April 2018 ```", "best_answers_score":0.8, "library_name":"pandas", "question_url":"https:\/\/stackoverflow.com\/questions\/50242968\/check-for-duplicate-values-in-pandas-dataframe-column", "best_answers_votes":130, "question_length":999, "response_length":2488 }, { "question":"Pandas: Appending a row to a dataframe and specify its index label Is there any way to specify the index that I want for a new row, when appending the row to a dataframe? 
The original documentation provides the following example: ``` In [1301]: df = DataFrame(np.random.randn(8, 4), columns=['A','B','C','D']) In [1302]: df Out[1302]: A B C D 0 -1.137707 -0.891060 -0.693921 1.613616 1 0.464000 0.227371 -0.496922 0.306389 2 -2.290613 -1.134623 -1.561819 -0.260838 3 0.281957 1.523962 -0.902937 0.068159 4 -0.057873 -0.368204 -1.144073 0.861209 5 0.800193 0.782098 -1.069094 -1.099248 6 0.255269 0.009750 0.661084 0.379319 7 -0.008434 1.952541 -1.056652 0.533946 In [1303]: s = df.xs(3) In [1304]: df.append(s, ignore_index=True) Out[1304]: A B C D 0 -1.137707 -0.891060 -0.693921 1.613616 1 0.464000 0.227371 -0.496922 0.306389 2 -2.290613 -1.134623 -1.561819 -0.260838 3 0.281957 1.523962 -0.902937 0.068159 4 -0.057873 -0.368204 -1.144073 0.861209 5 0.800193 0.782098 -1.069094 -1.099248 6 0.255269 0.009750 0.661084 0.379319 7 -0.008434 1.952541 -1.056652 0.533946 8 0.281957 1.523962 -0.902937 0.068159 ``` where the new row gets the index label automatically. Is there any way to control the new label?", "response":"The name of the Series becomes the index of the row in the DataFrame: ``` In [99]: df = pd.DataFrame(np.random.randn(8, 4), columns=['A','B','C','D']) In [100]: s = df.xs(3) In [101]: s.name = 10 In [102]: df.append(s) Out[102]: A B C D 0 -2.083321 -0.153749 0.174436 1.081056 1 -1.026692 1.495850 -0.025245 -0.171046 2 0.072272 1.218376 1.433281 0.747815 3 -0.940552 0.853073 -0.134842 -0.277135 4 0.478302 -0.599752 -0.080577 0.468618 5 2.609004 -1.679299 -1.593016 1.172298 6 -0.201605 0.406925 1.983177 0.012030 7 1.158530 -2.240124 0.851323 -0.240378 10 -0.940552 0.853073 -0.134842 -0.277135 ```", "best_answers_score":0.8, "library_name":"pandas", "question_url":"https:\/\/stackoverflow.com\/questions\/16824607\/pandas-appending-a-row-to-a-dataframe-and-specify-its-index-label", "best_answers_votes":77, "question_length":1208, "response_length":601 }, { "question":"Error in Reading a csv file in pandas[CParserError: Error tokenizing 
data. C error: Buffer overflow caught - possible malformed input file.] So i tried reading all the csv files from a folder and then concatenate them to create a big csv(structure of all the files was same), save it and read it again. All this was done using Pandas. The Error occurs while reading. I am Attaching the code and the Error below. ``` import pandas as pd import numpy as np import glob path =r'somePath' # use your path allFiles = glob.glob(path + \"\/*.csv\") frame = pd.DataFrame() list_ = [] for file_ in allFiles: df = pd.read_csv(file_,index_col=None, header=0) list_.append(df) store = pd.concat(list_) store.to_csv(\"C:\\work\\DATA\\Raw_data\\\\store.csv\", sep=',', index= False) store1 = pd.read_csv(\"C:\\work\\DATA\\Raw_data\\\\store.csv\", sep=',') ``` Error:- ``` CParserError Traceback (most recent call last) in () ----> 1 store1 = pd.read_csv(\"C:\\work\\DATA\\Raw_data\\\\store.csv\", sep=',') C:\\Users\\armsharm\\AppData\\Local\\Continuum\\Anaconda\\lib\\site-packages\\pandas\\io\\parsers.pyc in parser_f(filepath_or_buffer, sep, dialect, compression, doublequote, escapechar, quotechar, quoting, skipinitialspace, lineterminator, header, index_col, names, prefix, skiprows, skipfooter, skip_footer, na_values, na_fvalues, true_values, false_values, delimiter, converters, dtype, usecols, engine, delim_whitespace, as_recarray, na_filter, compact_ints, use_unsigned, low_memory, buffer_lines, warn_bad_lines, error_bad_lines, keep_default_na, thousands, comment, decimal, parse_dates, keep_date_col, dayfirst, date_parser, memory_map, float_precision, nrows, iterator, chunksize, verbose, encoding, squeeze, mangle_dupe_cols, tupleize_cols, infer_datetime_format, skip_blank_lines) 472 skip_blank_lines=skip_blank_lines) 473 --> 474 return _read(filepath_or_buffer, kwds) 475 476 parser_f.__name__ = name C:\\Users\\armsharm\\AppData\\Local\\Continuum\\Anaconda\\lib\\site-packages\\pandas\\io\\parsers.pyc in _read(filepath_or_buffer, kwds) 258 return parser 259 --> 
260 return parser.read() 261 262 _parser_defaults = { C:\\Users\\armsharm\\AppData\\Local\\Continuum\\Anaconda\\lib\\site-packages\\pandas\\io\\parsers.pyc in read(self, nrows) 719 raise ValueError('skip_footer not supported for iteration') 720 --> 721 ret = self._engine.read(nrows) 722 723 if self.options.get('as_recarray'): C:\\Users\\armsharm\\AppData\\Local\\Continuum\\Anaconda\\lib\\site-packages\\pandas\\io\\parsers.pyc in read(self, nrows) 1168 1169 try: -> 1170 data = self._reader.read(nrows) 1171 except StopIteration: 1172 if nrows is None: pandas\\parser.pyx in pandas.parser.TextReader.read (pandas\\parser.c:7544)() pandas\\parser.pyx in pandas.parser.TextReader._read_low_memory (pandas\\parser.c:7784)() pandas\\parser.pyx in pandas.parser.TextReader._read_rows (pandas\\parser.c:8401)() pandas\\parser.pyx in pandas.parser.TextReader._tokenize_rows (pandas\\parser.c:8275)() pandas\\parser.pyx in pandas.parser.raise_parser_error (pandas\\parser.c:20691)() CParserError: Error tokenizing data. C error: Buffer overflow caught - possible malformed input file. ``` I tried using csv reader as well:- ``` import csv with open(\"C:\\work\\DATA\\Raw_data\\\\store.csv\", 'rb') as f: reader = csv.reader(f) l = list(reader) ``` Error:- ``` Error Traceback (most recent call last) in () 1 with open('C:\\work\\DATA\\Raw_data\\\\store.csv', 'rb') as f: 2 reader = csv.reader(f) ----> 3 l = list(reader) Error: new-line character seen in unquoted field - do you need to open the file in universal-newline mode? ```", "response":"I found this error, the cause was that there were some carriage returns \"\\r\" in the data that pandas was using as a line terminator as if it was \"\\n\". I thought I'd post here as that might be a common reason this error might come up. 
The solution I found was to add lineterminator='\\n' into the read_csv function like this: ``` df_clean = pd.read_csv('test_error.csv', lineterminator='\\n') ```", "best_answers_score":0.8, "library_name":"pandas", "question_url":"https:\/\/stackoverflow.com\/questions\/33998740\/error-in-reading-a-csv-file-in-pandascparsererror-error-tokenizing-data-c-err", "best_answers_votes":280, "question_length":3507, "response_length":393 }, { "question":"Pandas dataframe with multiindex column - merge levels I have a dataframe, grouped, with multiindex columns as below: ``` import pandas as pd import numpy as np import random codes = [\"one\",\"two\",\"three\"]; colours = [\"black\", \"white\"]; textures = [\"soft\", \"hard\"]; N= 100 # length of the dataframe df = pd.DataFrame({ 'id' : range(1,N+1), 'weeks_elapsed' : [random.choice(range(1,25)) for i in range(1,N+1)], 'code' : [random.choice(codes) for i in range(1,N+1)], 'colour': [random.choice(colours) for i in range(1,N+1)], 'texture': [random.choice(textures) for i in range(1,N+1)], 'size': [random.randint(1,100) for i in range(1,N+1)], 'scaled_size': [random.randint(100,1000) for i in range(1,N+1)] }, columns= ['id', 'weeks_elapsed', 'code','colour', 'texture', 'size', 'scaled_size']) grouped = df.groupby(['code', 'colour']).agg( {'size': [np.sum, np.average, np.size, pd.Series.idxmax],'scaled_size': [np.sum, np.average, np.size, pd.Series.idxmax]}).reset_index() >> grouped code colour size scaled_size sum average size idxmax sum average size idxmax 0 one black 1031 60.647059 17 81 185.153944 10.891408 17 47 1 one white 481 37.000000 13 53 204.139249 15.703019 13 53 2 three black 822 48.352941 17 6 123.269405 7.251141 17 31 3 three white 1614 57.642857 28 50 285.638337 10.201369 28 37 4 two black 523 58.111111 9 85 80.908912 8.989879 9 88 5 two white 669 41.812500 16 78 82.098870 5.131179 16 78 [6 rows x 10 columns] ``` How can I flatten\/merge the column index levels as: \"Level1|Level2\", e.g. 
size|sum, scaled_size|sum. etc? If this is not possible, is there a way to groupby() as I did above without creating multi-index columns?", "response":"There are varied (i.e., more pythonic) way to flatten a MultiIndex columns into single-level columns. Use map and join with string column headers: ``` grouped.columns = grouped.columns.map('|'.join).str.strip('|') print(grouped) ``` Output: ``` code colour size|sum size|average size|size size|idxmax \\ 0 one black 862 53.875000 16 14 1 one white 554 46.166667 12 18 2 three black 842 49.529412 17 90 3 three white 740 56.923077 13 97 4 two black 1541 61.640000 25 50 scaled_size|sum scaled_size|average scaled_size|size scaled_size|idxmax 0 6980 436.250000 16 77 1 6101 508.416667 12 13 2 7889 464.058824 17 64 3 6329 486.846154 13 73 4 12809 512.360000 25 23 ``` Use map with format for column headers that have numeric data types. ``` grouped.columns = grouped.columns.map('{0[0]}|{0[1]}'.format) ``` Output: ``` code| colour| size|sum size|average size|size size|idxmax \\ 0 one black 734 52.428571 14 30 1 one white 1110 65.294118 17 88 2 three black 930 51.666667 18 3 3 three white 1140 51.818182 22 20 4 two black 656 38.588235 17 77 5 two white 704 58.666667 12 17 scaled_size|sum scaled_size|average scaled_size|size scaled_size|idxmax 0 8229 587.785714 14 57 1 8781 516.529412 17 73 2 10743 596.833333 18 21 3 10240 465.454545 22 26 4 9982 587.176471 17 16 5 6537 544.750000 12 49 ``` Use list comprehension with f-string for Python 3.6+: ``` grouped.columns = [f'{i}|{j}' if j != '' else f'{i}' for i,j in grouped.columns] ``` Output: ``` code colour size|sum size|average size|size size|idxmax \\ 0 one black 1003 43.608696 23 76 1 one white 1255 59.761905 21 66 2 three black 777 45.705882 17 39 3 three white 630 52.500000 12 23 4 two black 823 54.866667 15 33 5 two white 491 40.916667 12 64 scaled_size|sum scaled_size|average scaled_size|size scaled_size|idxmax 0 12532 544.869565 23 27 1 13223 629.666667 21 13 2 8615 
506.764706 17 92 3 6101 508.416667 12 43 4 7661 510.733333 15 42 5 6143 511.916667 12 49 ```", "best_answers_score":0.8, "library_name":"pandas", "question_url":"https:\/\/stackoverflow.com\/questions\/24290297\/pandas-dataframe-with-multiindex-column-merge-levels", "best_answers_votes":187, "question_length":1648, "response_length":1927 }, { "question":"Read data from pyodbc to pandas I am querying a SQL database and I want to use pandas to process the data. However, I am not sure how to move the data. Below is my input and output. ``` import pyodbc import pandas from pandas import DataFrame cnxn = pyodbc.connect(r'DRIVER={Microsoft Access Driver (*.mdb, *.accdb)};DBQ=C:\\users\\bartogre\\desktop\\CorpRentalPivot1.accdb;UID=\"\";PWD=\"\";') crsr = cnxn.cursor() for table_name in crsr.tables(tableType='TABLE'): print(table_name) cursor = cnxn.cursor() sql = \"Select sum(CYTM), sum(PYTM), BRAND From data Group By BRAND\" cursor.execute(sql) for data in cursor.fetchall(): print (data) ``` ``` ('C:\\\\users\\\\bartogre\\\\desktop\\\\CorpRentalPivot1.accdb', None, 'Data', 'TABLE', None) ('C:\\\\users\\\\bartogre\\\\desktop\\\\CorpRentalPivot1.accdb', None, 'SFDB', 'TABLE', None) (Decimal('78071898.71'), Decimal('82192672.29'), 'A') (Decimal('12120663.79'), Decimal('13278814.52'), 'B') ```", "response":"A shorter and more concise answer ``` import pyodbc import pandas as pd cnxn = pyodbc.connect(r'DRIVER={Microsoft Access Driver (*.mdb, *.accdb)};' r'DBQ=C:\\users\\bartogre\\desktop\\data.mdb;') sql = \"Select sum(CYTM), sum(PYTM), BRAND From data Group By BRAND\" data = pd.read_sql(sql,cnxn) # without parameters [non-prepared statement] # with a prepared statement, use list\/tuple\/dictionary of parameters depending on DB #data = pd.read_sql(sql=sql, con=cnxn, params=query_params) ```", "best_answers_score":0.8, "library_name":"pandas", "question_url":"https:\/\/stackoverflow.com\/questions\/39835770\/read-data-from-pyodbc-to-pandas", "best_answers_votes":202, 
"question_length":922, "response_length":483 }, { "question":"How to fill dataframe Nan values with empty list [] in pandas? This is my dataframe: ``` date ids 0 2011-04-23 [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13,... 1 2011-04-24 [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13,... 2 2011-04-25 [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13,... 3 2011-04-26 Nan 4 2011-04-27 [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13,... 5 2011-04-28 [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13,... ``` I want to replace Nan with []. How to do that? .fillna([]) did not work. I even tried replace(np.nan, []) but it gives error: ``` TypeError('Invalid \"to_replace\" type: \\'float\\'',) ```", "response":"My approach is similar to @hellpanderrr's, but instead tests for list-ness rather than using isnan: ``` df['ids'] = df['ids'].apply(lambda d: d if isinstance(d, list) else []) ``` I originally tried using pd.isnull (or pd.notnull) but, when given a list, that returns the null-ness of each element.", "best_answers_score":0.8, "library_name":"pandas", "question_url":"https:\/\/stackoverflow.com\/questions\/33199193\/how-to-fill-dataframe-nan-values-with-empty-list-in-pandas", "best_answers_votes":77, "question_length":618, "response_length":298 }, { "question":"Counting the number of missing\/NaN in each row I've got a dataset with a big number of rows. 
Some of the values are NaN, like this: ``` In [91]: df Out[91]: 1 3 1 1 1 1 3 1 1 1 2 3 1 1 1 1 1 NaN NaN NaN 1 3 1 1 1 1 1 1 1 1 ``` And I want to count the number of NaN values in each row, it would be like this: ``` In [91]: list = In [92]: list Out[91]: [0, 0, 0, 3, 0, 0] ``` What is the best and fastest way to do it?", "response":"You could first find if element is NaN or not by isnull() and then take row-wise sum(axis=1) ``` In [195]: df.isnull().sum(axis=1) Out[195]: 0 0 1 0 2 0 3 3 4 0 5 0 dtype: int64 ``` And, if you want the output as list, you can ``` In [196]: df.isnull().sum(axis=1).tolist() Out[196]: [0, 0, 0, 3, 0, 0] ``` Or use count like ``` In [130]: df.shape[1] - df.count(axis=1) Out[130]: 0 0 1 0 2 0 3 3 4 0 5 0 dtype: int64 ```", "best_answers_score":0.8, "library_name":"pandas", "question_url":"https:\/\/stackoverflow.com\/questions\/30059260\/counting-the-number-of-missing-nan-in-each-row", "best_answers_votes":137, "question_length":417, "response_length":420 }, { "question":"How to convert an XML file to nice pandas dataframe? Let's assume that I have an XML like this: ```xml ``` I would like to read this XML file and convert it to a pandas DataFrame: ```shell key type language feature web data e95324a9a6c790ecb95e46cf15bE232ee517651 XXX EN xx www.foo_bar_exmaple.com A large text with lots of strings and punctuations symbols [...] bc360cfbafc39970587547215162f0db XXX EN xx www.foo_bar_exmaple.com A large text with lots of strings and punctuations symbols [...] 19e71144c50a8b9160b3cvdf2324f0955e906fce XXX EN xx www.foo_bar_exmaple.com A large text with lots of strings and punctuations symbols [...] 21d4af9021a174f61b8erf284606c74d9e42 XXX EN xx www.foo_bar_exmaple.com A large text with lots of strings and punctuations symbols [...] 
``` This is what I already tried, but I am getting some errors and probably there is a more efficient way of doing this task: ```python from lxml import objectify import pandas as pd path = 'file_path' xml = objectify.parse(open(path)) root = xml.getroot() root.getchildren()[0].getchildren() df = pd.DataFrame(columns=('key','type', 'language', 'feature', 'web', 'data')) for i in range(0,len(xml)): obj = root.getchildren()[i].getchildren() row = dict(zip(['key','type', 'language', 'feature', 'web', 'data'], [obj[0].text, obj[1].text])) row_s = pd.Series(row) row_s.name = i df = df.append(row_s) ``` Could anybody provide me a better aproach for this problem?", "response":"You can easily use xml (from the Python standard library) to convert to a pandas.DataFrame. Here's what I would do (when reading from a file replace xml_data with the name of your file or file object): ``` import pandas as pd import xml.etree.ElementTree as ET import io def iter_docs(author): author_attr = author.attrib for doc in author.iter('document'): doc_dict = author_attr.copy() doc_dict.update(doc.attrib) doc_dict['data'] = doc.text yield doc_dict xml_data = io.StringIO(u'''YOUR XML STRING HERE''') etree = ET.parse(xml_data) #create an ElementTree object doc_df = pd.DataFrame(list(iter_docs(etree.getroot()))) ``` If there are multiple authors in your original document or the root of your XML is not an author, then I would add the following generator: ``` def iter_author(etree): for author in etree.iter('author'): for row in iter_docs(author): yield row ``` and change doc_df = pd.DataFrame(list(iter_docs(etree.getroot()))) to doc_df = pd.DataFrame(list(iter_author(etree))) Have a look at the ElementTree tutorial provided in the xml library documentation.", "best_answers_score":0.8, "library_name":"pandas", "question_url":"https:\/\/stackoverflow.com\/questions\/28259301\/how-to-convert-an-xml-file-to-nice-pandas-dataframe", "best_answers_votes":61, "question_length":1447, 
"response_length":1076 }, { "question":"How to iterate over Pandas Series generated from groupby().size() How do you iterate over a Pandas Series generated from a .groupby('...').size() command and get both the group name and count. As an example if I have: ``` foo -1 7 0 85 1 14 2 5 ``` how can I loop over them so that in each iteration I would have -1 & 7, 0 & 85, 1 & 14 and 2 & 5 in variables? I tried the enumerate option but it doesn't quite work. Example: ``` for i, row in enumerate(df.groupby(['foo']).size()): print(i, row) ``` it doesn't return -1, 0, 1, and 2 for i but rather 0, 1, 2, 3.", "response":"Update: Given a pandas Series: ``` s = pd.Series([1,2,3,4], index=['a', 'b', 'c', 'd']) s #a 1 #b 2 #c 3 #d 4 #dtype: int64 ``` You can directly loop through it, which yield one value from the series in each iteration: ``` for i in s: print(i) 1 2 3 4 ``` If you want to access the index at the same time, you can use either items or iteritems method, which produces a generator that contains both the index and value: ``` for i, v in s.items(): print('index: ', i, 'value: ', v) #index: a value: 1 #index: b value: 2 #index: c value: 3 #index: d value: 4 for i, v in s.iteritems(): print('index: ', i, 'value: ', v) #index: a value: 1 #index: b value: 2 #index: c value: 3 #index: d value: 4 ``` Old Answer: You can call iteritems() method on the Series: ``` for i, row in df.groupby('a').size().iteritems(): print(i, row) # 12 4 # 14 2 ``` According to doc: Series.iteritems() Lazily iterate over (index, value) tuples Note: This is not the same data as in the question, just a demo.", "best_answers_score":0.8, "library_name":"pandas", "question_url":"https:\/\/stackoverflow.com\/questions\/38387529\/how-to-iterate-over-pandas-series-generated-from-groupby-size", "best_answers_votes":142, "question_length":562, "response_length":985 }, { "question":"How do you Unit Test Python DataFrames How do I unit test Python dataframes? 
I have functions that have an input and output as dataframes. Almost every function I have does this. Now if I want to unit test this what is the best method of doing it? It seems a bit of an effort to create a new dataframe (with values populated) for every function? Are there any materials you can refer me to? Should you write unit tests for these functions?", "response":"While Pandas' test functions are primarily used for internal testing, NumPy includes a very useful set of testing functions that are documented here: NumPy Test Support. These functions compare NumPy arrays, but you can get the array that underlies a Pandas DataFrame using the values property. You can define a simple DataFrame and compare what your function returns to what you expect. One technique you can use is to define one set of test data for a number of functions. That way, you can use Pytest Fixtures to define that DataFrame once, and use it in multiple tests. In terms of resources, I found this article on Testing with NumPy and Pandas to be very useful. I also did a short presentation about data analysis testing at PyCon Canada 2016: Automate Your Data Analysis Testing.", "best_answers_score":0.8, "library_name":"pandas", "question_url":"https:\/\/stackoverflow.com\/questions\/41852686\/how-do-you-unit-test-python-dataframes", "best_answers_votes":58, "question_length":439, "response_length":788 }, { "question":"How to aggregate unique count with pandas pivot_table This code: ``` df2 = ( pd.DataFrame({ 'X' : ['X1', 'X1', 'X1', 'X1'], 'Y' : ['Y2', 'Y1', 'Y1', 'Y1'], 'Z' : ['Z3', 'Z1', 'Z1', 'Z2'] }) ) g = df2.groupby('X') pd.pivot_table(g, values='X', rows='Y', cols='Z', margins=False, aggfunc='count') ``` returns the following error: ``` Traceback (most recent call last): ... AttributeError: 'Index' object has no attribute 'index' ``` How do I get a Pivot Table with counts of unique values of one DataFrame column for two other columns? Is there aggfunc for count unique? 
Should I be using np.bincount()? NB. I am aware of pandas.Series.value_counts() however I need a pivot table. EDIT: The output should be: ``` Z Z1 Z2 Z3 Y Y1 1 1 NaN Y2 NaN NaN 1 ```", "response":"Do you mean something like this? ``` >>> df2.pivot_table(values='X', index='Y', columns='Z', aggfunc=lambda x: len(x.unique())) Z Z1 Z2 Z3 Y Y1 1 1 NaN Y2 NaN NaN 1 ``` Note that using len assumes you don't have NAs in your DataFrame. You can do x.value_counts().count() or len(x.dropna().unique()) otherwise.", "best_answers_score":0.8, "library_name":"pandas", "question_url":"https:\/\/stackoverflow.com\/questions\/12860421\/how-to-aggregate-unique-count-with-pandas-pivot-table", "best_answers_votes":140, "question_length":752, "response_length":309 }, { "question":"AttributeError: Can only use .dt accessor with datetimelike values Hi I am using pandas to convert a column to month. When I read my data they are objects: ``` Date object dtype: object ``` So I am first making them to date time and then try to make them as months: ``` import pandas as pd file = '\/pathtocsv.csv' df = pd.read_csv(file, sep = ',', encoding='utf-8-sig', usecols= ['Date', 'ids']) df['Date'] = pd.to_datetime(df['Date']) df['Month'] = df['Date'].dt.month ``` Also if that helps: ``` In [10]: df['Date'].dtype Out[10]: dtype('O') ``` So, the error I get is like this: ``` \/Library\/Frameworks\/Python.framework\/Versions\/2.7\/bin\/User\/lib\/python2.7\/site-packages\/pandas\/core\/series.pyc in _make_dt_accessor(self) 2526 return maybe_to_datetimelike(self) 2527 except Exception: -> 2528 raise AttributeError(\"Can only use .dt accessor with datetimelike \" 2529 \"values\") 2530 AttributeError: Can only use .dt accessor with datetimelike values ``` EDITED: Date columns are like this: ``` 0 2014-01-01 1 2014-01-01 2 2014-01-01 3 2014-01-01 4 2014-01-03 5 2014-01-03 6 2014-01-03 7 2014-01-07 8 2014-01-08 9 2014-01-09 ``` Do you have any ideas?
Thank you very much!", "response":"Your problem here is that to_datetime silently failed so the dtype remained as str\/object, if you set param errors='coerce' then if the conversion fails for any particular string then those rows are set to NaT. ``` df['Date'] = pd.to_datetime(df['Date'], errors='coerce') ``` So you need to find out what is wrong with those specific row values. See the docs", "best_answers_score":0.8, "library_name":"pandas", "question_url":"https:\/\/stackoverflow.com\/questions\/33365055\/attributeerror-can-only-use-dt-accessor-with-datetimelike-values", "best_answers_votes":181, "question_length":1170, "response_length":358 }, { "question":"Testing if a pandas DataFrame exists In my code, I have several variables which can either contain a pandas DataFrame or nothing at all. Let's say I want to test and see if a certain DataFrame has been created yet or not. My first thought would be to test for it like this: ``` if df1: # do something ``` However, that code fails in this way: ``` ValueError: The truth value of a DataFrame is ambiguous. Use a.empty, a.bool(), a.item(), a.any() or a.all(). ``` Fair enough. Ideally, I would like to have a presence test that works for either a DataFrame or Python None. Here is one way this can work: ``` if not isinstance(df1, type(None)): # do something ``` However, testing for type is really slow. ``` t = timeit.Timer('if None: pass') t.timeit() # approximately 0.04 t = timeit.Timer('if isinstance(x, type(None)): pass', setup='x=None') t.timeit() # approximately 0.4 ``` Ouch. Along with being slow, testing for NoneType isn't very flexible, either. A different solution would be to initialize df1 as an empty DataFrame, so that the type would be the same in both the null and non-null cases. I could then just test using len(), or any(), or something like that. Making an empty DataFrame seems kind of silly and wasteful, though. 
Another solution would be to have an indicator variable: df1_exists, which is set to False until df1 is created. Then, instead of testing df1, I would be testing df1_exists. But this doesn't seem all that elegant, either. Is there a better, more Pythonic way of handling this issue? Am I missing something, or is this just an awkward side effect all the awesome things about pandas?", "response":"Option 1 (my preferred option) This is @Ami Tavory's Please select his answer if you like this approach It is very idiomatic python to initialize a variable with None then check for None prior to doing something with that variable. ``` df1 = None if df1 is not None: print df1.head() ``` Option 2 However, setting up an empty dataframe isn't at all a bad idea. ``` df1 = pd.DataFrame() if not df1.empty: print df1.head() ``` Option 3 Just try it. ``` try: print df1.head() # catch when df1 is None except AttributeError: pass # catch when it hasn't even been defined except NameError: pass ``` Timing When df1 is in initialized state or doesn't exist at all When df1 is a dataframe with something in it ``` df1 = pd.DataFrame(np.arange(25).reshape(5, 5), list('ABCDE'), list('abcde')) df1 ```", "best_answers_score":0.8, "library_name":"pandas", "question_url":"https:\/\/stackoverflow.com\/questions\/39337115\/testing-if-a-pandas-dataframe-exists", "best_answers_votes":173, "question_length":1620, "response_length":792 }, { "question":"select pandas rows by excluding index number Not quite sure why I can't figure this out. I'm looking to slice a Pandas dataframe by using index numbers. 
I have a list\/core index with the index numbers that i do NOT need, shown below ``` pandas.core.index.Int64Index Int64Index([2340, 4840, 3163, 1597, 491 , 5010, 911 , 3085, 5486, 5475, 1417, 2663, 4204, 156 , 5058, 1990, 3200, 1218, 3280, 793 , 824 , 3625, 1726, 1971, 2845, 4668, 2973, 3039, 376 , 4394, 3749, 1610, 3892, 2527, 324 , 5245, 696 , 1239, 4601, 3219, 5138, 4832, 4762, 1256, 4437, 2475, 3732, 4063, 1193], dtype=int64) ``` How can I create a new dataframe excluding these index numbers. I tried ``` df.iloc[combined_index] ``` and obviously this just shows the rows with those index number (the opposite of what I want). any help will be greatly appreciated", "response":"Not sure if that's what you are looking for, posting this as an answer, because it's too long for a comment: ``` In [31]: d = {'a':[1,2,3,4,5,6], 'b':[1,2,3,4,5,6]} In [32]: df = pd.DataFrame(d) In [33]: bad_df = df.index.isin([3,5]) In [34]: df[~bad_df] Out[34]: a b 0 1 1 1 2 2 2 3 3 4 5 5 ```", "best_answers_score":0.8, "library_name":"pandas", "question_url":"https:\/\/stackoverflow.com\/questions\/28256761\/select-pandas-rows-by-excluding-index-number", "best_answers_votes":121, "question_length":824, "response_length":295 }, { "question":"Drop all data in a pandas dataframe I would like to drop all data in a pandas dataframe, but am getting TypeError: drop() takes at least 2 arguments (3 given). I essentially want a blank dataframe with just my columns headers. ``` import pandas as pd web_stats = {'Day': [1, 2, 3, 4, 2, 6], 'Visitors': [43, 43, 34, 23, 43, 23], 'Bounce_Rate': [3, 2, 4, 3, 5, 5]} df = pd.DataFrame(web_stats) df.drop(axis=0, inplace=True) print df ```", "response":"You need to pass the labels to be dropped. ``` df.drop(df.index, inplace=True) ``` By default, it operates on axis=0. 
You can achieve the same with ``` df.iloc[:0] ``` which is much more efficient.", "best_answers_score":0.8, "library_name":"pandas", "question_url":"https:\/\/stackoverflow.com\/questions\/39173992\/drop-all-data-in-a-pandas-dataframe", "best_answers_votes":168, "question_length":435, "response_length":197 }, { "question":"Pandas dataframe groupby plot I have a dataframe which is structured as: ``` Date ticker adj_close 0 2016-11-21 AAPL 111.730 1 2016-11-22 AAPL 111.800 2 2016-11-23 AAPL 111.230 3 2016-11-25 AAPL 111.790 4 2016-11-28 AAPL 111.570 ... 8 2016-11-21 ACN 119.680 9 2016-11-22 ACN 119.480 10 2016-11-23 ACN 119.820 11 2016-11-25 ACN 120.740 ... ``` How can I plot based on the ticker the adj_close versus Date?", "response":"Simple plot, you can use: ``` df.plot(x='Date',y='adj_close') ``` Or you can set the index to be Date beforehand, then it's easy to plot the column you want: ``` df.set_index('Date', inplace=True) df['adj_close'].plot() ``` If you want a chart with one series by ticker on it You need to groupby before: ``` df.set_index('Date', inplace=True) df.groupby('ticker')['adj_close'].plot(legend=True) ``` If you want a chart with individual subplots: ``` grouped = df.groupby('ticker') ncols=2 nrows = int(np.ceil(grouped.ngroups\/ncols)) fig, axes = plt.subplots(nrows=nrows, ncols=ncols, figsize=(12,4), sharey=True) for (key, ax) in zip(grouped.groups.keys(), axes.flatten()): grouped.get_group(key).plot(ax=ax) ax.legend() plt.show() ```", "best_answers_score":0.8, "library_name":"pandas", "question_url":"https:\/\/stackoverflow.com\/questions\/41494942\/pandas-dataframe-groupby-plot", "best_answers_votes":158, "question_length":404, "response_length":734 }, { "question":"pandas - change df.index from float64 to unicode or string I want to change a dataframes' index (rows) from float64 to string or unicode. 
I thought this would work but apparently not: ``` #check type type(df.index) 'pandas.core.index.Float64Index' #change type to unicode if not isinstance(df.index, unicode): df.index = df.index.astype(unicode) ``` error message: ``` TypeError: Setting dtype to anything other than float64 or object is not supported ```", "response":"You can do it that way: ``` # for Python 2 df.index = df.index.map(unicode) # for Python 3 (the unicode type does not exist and is replaced by str) df.index = df.index.map(str) ``` As for why you would proceed differently from when you'd convert from int to float, that's a peculiarity of numpy (the library on which pandas is based). Every numpy array has a dtype, which is basically the machine type of its elements : in that manner, numpy deals directly with native types, not with Python objects, which explains how it is so fast. So when you are changing the dtype from int64 to float64, numpy will cast each element in the C code. There's also a special dtype : object, that will basically provide a pointer toward a Python object. If you want strings, you thus have to use the object dtype. But using .astype(object) would not give you the answer you were looking for : it would instead create an index with object dtype, but put Python float objects inside. Here, by using map, we convert the index to strings with the appropriate function: numpy gets the string objects and understand that the index has to have an object dtype, because that's the only dtype that can accomodate strings.", "best_answers_score":0.8, "library_name":"pandas", "question_url":"https:\/\/stackoverflow.com\/questions\/35368645\/pandas-change-df-index-from-float64-to-unicode-or-string", "best_answers_votes":147, "question_length":456, "response_length":1196 }, { "question":"Pandas bar plot changes date format I have a simple stacked line plot that has exactly the date format I want magically set when using the following code. 
``` df_ts = df.resample(\"W\", how='max') df_ts.plot(figsize=(12,8), stacked=True) ``` However, the dates mysteriously transform themselves to an ugly and unreadable format when plotting the same data as a bar plot. ``` df_ts = df.resample(\"W\", how='max') df_ts.plot(kind='bar', figsize=(12,8), stacked=True) ``` The original data was transformed a bit to have the weekly max. Why is this radical change in automatically set dates happening? How can I have the nicely formatted dates as above? Here is some dummy data ``` start = pd.to_datetime(\"1-1-2012\") idx = pd.date_range(start, periods= 365).tolist() df=pd.DataFrame({'A':np.random.random(365), 'B':np.random.random(365)}) df.index = idx df_ts = df.resample('W', how= 'max') df_ts.plot(kind='bar', stacked=True) ```", "response":"The plotting code assumes that each bar in a bar plot deserves its own label. You could override this assumption by specifying your own formatter: ``` ax.xaxis.set_major_formatter(formatter) ``` The pandas.tseries.converter.TimeSeries_DateFormatter that Pandas uses to format the dates in the \"good\" plot works well with line plots when the x-values are dates. However, with a bar plot the x-values (at least those received by TimeSeries_DateFormatter.__call__) are merely integers starting at zero. If you try to use TimeSeries_DateFormatter with a bar plot, all the labels thus start at the Epoch, 1970-1-1 UTC, since this is the date which corresponds to zero. So the formatter used for line plots is unfortunately useless for bar plots (at least as far as I can see). 
The easiest way I see to produce the desired formatting is to generate and set the labels explicitly: ``` import numpy as np import matplotlib.pyplot as plt import pandas as pd import matplotlib.ticker as ticker start = pd.to_datetime(\"5-1-2012\") idx = pd.date_range(start, periods=365) df = pd.DataFrame({'A': np.random.random(365), 'B': np.random.random(365)}) df.index = idx df_ts = df.resample('W').max() ax = df_ts.plot(kind='bar', stacked=True) # Make most of the ticklabels empty so the labels don't get too crowded ticklabels = ['']*len(df_ts.index) # Every 4th ticklabel shows the month and day ticklabels[::4] = [item.strftime('%b %d') for item in df_ts.index[::4]] # Every 12th ticklabel includes the year ticklabels[::12] = [item.strftime('%b %d\\n%Y') for item in df_ts.index[::12]] ax.xaxis.set_major_formatter(ticker.FixedFormatter(ticklabels)) plt.gcf().autofmt_xdate() plt.show() ``` yields For those looking for a simple example of a bar plot with dates: ``` import numpy as np import pandas as pd import matplotlib.pyplot as plt import matplotlib.ticker as mticker dates = pd.date_range('2012-1-1', '2017-1-1', freq='M') df = pd.DataFrame({'A':np.random.random(len(dates)), 'Date':dates}) fig, ax = plt.subplots() df.plot.bar(x='Date', y='A', ax=ax) ticklabels = ['']*len(df) skip = len(df)\/\/12 ticklabels[::skip] = df['Date'].iloc[::skip].dt.strftime('%Y-%m-%d') ax.xaxis.set_major_formatter(mticker.FixedFormatter(ticklabels)) fig.autofmt_xdate() # fixes the tracker # https:\/\/matplotlib.org\/users\/recipes.html def fmt(x, pos=0, max_i=len(ticklabels)-1): i = int(x) i = 0 if i < 0 else max_i if i > max_i else i return dates[i] ax.fmt_xdata = fmt plt.show() ```", "best_answers_score":0.8, "library_name":"pandas", "question_url":"https:\/\/stackoverflow.com\/questions\/30133280\/pandas-bar-plot-changes-date-format", "best_answers_votes":97, "question_length":924, "response_length":2432 }, { "question":"Concatenate Pandas columns under new multi-index level Given a dictionary of data
frames like: ``` dict = {'ABC': df1, 'XYZ' : df2} # of any length... ``` where each data frame has the same columns and similar index, for example: ``` data Open High Low Close Volume Date 2002-01-17 0.18077 0.18800 0.16993 0.18439 1720833 2002-01-18 0.18439 0.21331 0.18077 0.19523 2027866 2002-01-21 0.19523 0.20970 0.19162 0.20608 771149 ``` What is the simplest way to combine all the data frames into one, with a multi-index like: ``` symbol ABC XYZ data Open High Low Close Volume Open ... Date 2002-01-17 0.18077 0.18800 0.16993 0.18439 1720833 ... 2002-01-18 0.18439 0.21331 0.18077 0.19523 2027866 ... 2002-01-21 0.19523 0.20970 0.19162 0.20608 771149 ... ``` I've tried a few methods - eg for each data frame replace the columns with a multi-index like .from_product(['ABC', columns]) and then concatenate along axis=1, without success.", "response":"You can do it with concat (the keys argument will create the hierarchical columns index): ``` d = {'ABC' : df1, 'XYZ' : df2} print pd.concat(d.values(), axis=1, keys=d.keys()) XYZ ABC \\ Open High Low Close Volume Open High Date 2002-01-17 0.18077 0.18800 0.16993 0.18439 1720833 0.18077 0.18800 2002-01-18 0.18439 0.21331 0.18077 0.19523 2027866 0.18439 0.21331 2002-01-21 0.19523 0.20970 0.19162 0.20608 771149 0.19523 0.20970 Low Close Volume Date 2002-01-17 0.16993 0.18439 1720833 2002-01-18 0.18077 0.19523 2027866 2002-01-21 0.19162 0.20608 771149 ``` Really concat wants lists so the following is equivalent: ``` print(pd.concat([df1, df2], axis=1, keys=['ABC', 'XYZ'])) ```", "best_answers_score":0.8, "library_name":"pandas", "question_url":"https:\/\/stackoverflow.com\/questions\/23600582\/concatenate-pandas-columns-under-new-multi-index-level", "best_answers_votes":117, "question_length":928, "response_length":681 }, { "question":"pandas equivalent of np.where np.where has the semantics of a vectorized if\/else (similar to Apache Spark's when\/otherwise DataFrame method). 
I know that I can use np.where on pandas.Series, but pandas often defines its own API to use instead of raw numpy functions, which is usually more convenient with pd.Series\/pd.DataFrame. Sure enough, I found pandas.DataFrame.where. However, at first glance, it has completely different semantics. I could not find a way to rewrite the most basic example of np.where using pandas where: ``` # df is pd.DataFrame # how to write this using df.where? df['C'] = np.where((df['A'] < 0) | (df['B'] > 0), df['A']+df['B'], df['A']\/df['B']) ``` Am I missing something obvious? Or is pandas' where intended for a completely different use case, despite same name as np.where?", "response":"Try: ``` (df['A'] + df['B']).where((df['A'] < 0) | (df['B'] > 0), df['A'] \/ df['B']) ``` The difference between the numpy where and DataFrame where is that the default values are supplied by the DataFrame that the where method is being called on (docs). I.e. ``` np.where(m, A, B) ``` is roughly equivalent to ``` A.where(m, B) ``` If you wanted a similar call signature using pandas, you could take advantage of the way method calls work in Python: ``` pd.DataFrame.where(cond=(df['A'] < 0) | (df['B'] > 0), self=df['A'] + df['B'], other=df['A'] \/ df['B']) ``` or without kwargs (Note that the positional order of arguments is different from the numpy where argument order): ``` pd.DataFrame.where(df['A'] + df['B'], (df['A'] < 0) | (df['B'] > 0), df['A'] \/ df['B']) ```", "best_answers_score":0.8, "library_name":"pandas", "question_url":"https:\/\/stackoverflow.com\/questions\/38579532\/pandas-equivalent-of-np-where", "best_answers_votes":81, "question_length":786, "response_length":721 }, { "question":"Encoding Error in Panda read_csv [duplicate] This question already has answers here: UnicodeDecodeError when reading CSV file in Pandas (27 answers) Closed 7 years ago. I'm attempting to read a CSV file into a Dataframe in Pandas.
When I try to do that, I get the following error: UnicodeDecodeError: 'utf-8' codec can't decode byte 0x96 in position 55: invalid start byte This is from code: ``` import pandas as pd location = r\"C:\\Users\\khtad\\Documents\\test.csv\" df = pd.read_csv(location, header=0, quotechar='\"') ``` This is on a Windows 7 Enterprise Service Pack 1 machine and it seems to apply to every CSV file I create. In this particular case the binary from location 55 is 00101001 and location 54 is 01110011, if that matters. Saving the file as UTF-8 with a text editor doesn't seem to help, either. Similarly, adding the param \"encoding='utf-8' doesn't work, either--it returns the same error. What is the most likely cause of this error and are there any workarounds other than abandoning the DataFrame construct for the moment and using the csv module to read in the CSV line-by-line?", "response":"Try calling read_csv with encoding='latin1', encoding='iso-8859-1' or encoding='cp1252' (these are some of the various encodings found on Windows).", "best_answers_score":0.8, "library_name":"pandas", "question_url":"https:\/\/stackoverflow.com\/questions\/30462807\/encoding-error-in-panda-read-csv", "best_answers_votes":224, "question_length":1098, "response_length":147 }, { "question":"UnicodeDecodeError: 'utf-8' codec can't decode byte 0x96 in position 35: invalid start byte [duplicate] This question already has answers here: How to determine the encoding table of a text file (7 answers) Closed 1 year ago. I am trying to read a CSV file using the below script. ``` Past = pd.read_csv(\"C:\/Users\/Admin\/Desktop\/Python\/Past.csv\", encoding='utf-8') ``` But, I get the error \"UnicodeDecodeError: 'utf-8' codec can't decode byte 0x96 in position 35: invalid start byte\" Where is the issue? I used encoding in the script and thought it would resolve the error.", "response":"This happens because you chose the wrong encoding. 
Since you are working on a Windows machine, just replacing ``` Past = pd.read_csv(\"C:\/Users\/...\/Past.csv\", encoding='utf-8') ``` with ``` Past = pd.read_csv(\"C:\/Users\/...\/Past.csv\", encoding='cp1252') ``` should solve the problem.", "best_answers_score":0.8, "library_name":"pandas", "question_url":"https:\/\/stackoverflow.com\/questions\/45529507\/unicodedecodeerror-utf-8-codec-cant-decode-byte-0x96-in-position-35-invalid", "best_answers_votes":151, "question_length":572, "response_length":281 }, { "question":"SQL-like window functions in PANDAS: Row Numbering in Python Pandas Dataframe I come from a sql background and I use the following data processing step frequently: Partition the table of data by one or more fields For each partition, add a rownumber to each of its rows that ranks the row by one or more other fields, where the analyst specifies ascending or descending EX: ``` df = pd.DataFrame({'key1' : ['a','a','a','b','a'], 'data1' : [1,2,2,3,3], 'data2' : [1,10,2,3,30]}) df data1 data2 key1 0 1 1 a 1 2 10 a 2 2 2 a 3 3 3 b 4 3 30 a ``` I'm looking for how to do the PANDAS equivalent to this sql window function: ``` RN = ROW_NUMBER() OVER (PARTITION BY Key1 ORDER BY Data1 ASC, Data2 DESC) data1 data2 key1 RN 0 1 1 a 1 1 2 10 a 2 2 2 2 a 3 3 3 3 b 1 4 3 30 a 4 ``` I've tried the following which I've gotten to work where there are no 'partitions': ``` def row_number(frame,orderby_columns, orderby_direction,name): frame.sort_index(by = orderby_columns, ascending = orderby_direction, inplace = True) frame[name] = list(xrange(len(frame.index))) ``` I tried to extend this idea to work with partitions (groups in pandas) but the following didn't work: ``` df1 = df.groupby('key1').apply(lambda t: t.sort_index(by=['data1', 'data2'], ascending=[True, False], inplace = True)).reset_index() def nf(x): x['rn'] = list(xrange(len(x.index))) df1['rn1'] = df1.groupby('key1').apply(nf) ``` But I just got a lot of NaNs when I do this. 
Ideally, there'd be a succinct way to replicate the window function capability of sql (i've figured out the window based aggregates...that's a one liner in pandas)...can someone share with me the most idiomatic way to number rows like this in PANDAS?", "response":"you can also use sort_values(), groupby() and finally cumcount() + 1: ``` df['RN'] = df.sort_values(['data1','data2'], ascending=[True,False]) \\ .groupby(['key1']) \\ .cumcount() + 1 print(df) ``` yields: ``` data1 data2 key1 RN 0 1 1 a 1 1 2 10 a 2 2 2 2 a 3 3 3 3 b 1 4 3 30 a 4 ``` PS tested with pandas 0.18", "best_answers_score":0.8, "library_name":"pandas", "question_url":"https:\/\/stackoverflow.com\/questions\/17775935\/sql-like-window-functions-in-pandas-row-numbering-in-python-pandas-dataframe", "best_answers_votes":123, "question_length":1690, "response_length":310 }, { "question":"Compute row average in pandas ``` Y1961 Y1962 Y1963 Y1964 Y1965 Region 0 82.567307 83.104757 83.183700 83.030338 82.831958 US 1 2.699372 2.610110 2.587919 2.696451 2.846247 US 2 14.131355 13.690028 13.599516 13.649176 13.649046 US 3 0.048589 0.046982 0.046583 0.046225 0.051750 US 4 0.553377 0.548123 0.582282 0.577811 0.620999 US ``` In the above dataframe, I would like to get average of each row. currently, I am doing this: ``` df.mean(axis=0) ``` However, this does away with the Region column as well. how can I compute mean and also retain Region column", "response":"You can specify a new column. You also need to compute the mean along the rows, so use axis=1. 
``` df['mean'] = df.mean(axis=1) >>> df Y1961 Y1962 Y1963 Y1964 Y1965 Region mean 0 82.567307 83.104757 83.183700 83.030338 82.831958 US 82.943612 1 2.699372 2.610110 2.587919 2.696451 2.846247 US 2.688020 2 14.131355 13.690028 13.599516 13.649176 13.649046 US 13.743824 3 0.048589 0.046982 0.046583 0.046225 0.051750 US 0.048026 4 0.553377 0.548123 0.582282 0.577811 0.620999 US 0.576518 ```", "best_answers_score":0.8, "library_name":"pandas", "question_url":"https:\/\/stackoverflow.com\/questions\/33750326\/compute-row-average-in-pandas", "best_answers_votes":167, "question_length":560, "response_length":487 }, { "question":"Why does Pandas inner join give ValueError: len(left_on) must equal the number of levels in the index of \"right\"? I'm trying to inner join DataFrame A to DataFrame B and am running into an error. Here's my join statement: ``` merged = DataFrameA.join(DataFrameB, on=['Code','Date']) ``` And here's the error: ``` ValueError: len(left_on) must equal the number of levels in the index of \"right\" ``` I'm not sure the column order matters (they aren't truly \"ordered\" are they?), but just in case, the DataFrames are organized like this: ``` DataFrameA: Code, Date, ColA, ColB, ColC, ..., ColG, ColH (shape: 80514, 8 - no index) DataFrameB: Date, Code, Col1, Col2, Col3, ..., Col15, Col16 (shape: 859, 16 - no index) ``` Do I need to correct my join statement? 
Or is there another, better way to get the intersection (or inner join) of these two DataFrames?", "response":"use merge if you are not joining on the index: ``` merged = pd.merge(DataFrameA,DataFrameB, on=['Code','Date']) ``` Follow up to question below: Here is a reproducible example: ``` import pandas as pd # create some timestamps for date column i = pd.to_datetime(pd.date_range('20140601',periods=2)) #create two dataframes to merge df = pd.DataFrame({'code': ['ABC','EFG'], 'date':i,'col1': [10,100]}) df2 = pd.DataFrame({'code': ['ABC','EFG'], 'date':i,'col2': [10,200]}) #merge on columns (default join is inner) pd.merge(df, df2, on =['code','date']) ``` This results is: ``` code col1 date col2 0 ABC 10 2014-06-01 10 1 EFG 100 2014-06-02 200 ``` What happens when you run this code?", "best_answers_score":0.8, "library_name":"pandas", "question_url":"https:\/\/stackoverflow.com\/questions\/28228781\/why-does-pandas-inner-join-give-valueerror-lenleft-on-must-equal-the-number-o", "best_answers_votes":151, "question_length":854, "response_length":685 }, { "question":"Resampling Within a Pandas MultiIndex I have some hierarchical data which bottoms out into time series data which looks something like this: ``` df = pandas.DataFrame( {'value_a': values_a, 'value_b': values_b}, index=[states, cities, dates]) df.index.names = ['State', 'City', 'Date'] df value_a value_b State City Date Georgia Atlanta 2012-01-01 0 10 2012-01-02 1 11 2012-01-03 2 12 2012-01-04 3 13 Savanna 2012-01-01 4 14 2012-01-02 5 15 2012-01-03 6 16 2012-01-04 7 17 Alabama Mobile 2012-01-01 8 18 2012-01-02 9 19 2012-01-03 10 20 2012-01-04 11 21 Montgomery 2012-01-01 12 22 2012-01-02 13 23 2012-01-03 14 24 2012-01-04 15 25 ``` I'd like to perform time resampling per city, so something like ``` df.resample(\"2D\", how=\"sum\") ``` would output ``` value_a value_b State City Date Georgia Atlanta 2012-01-01 1 21 2012-01-03 5 25 Savanna 2012-01-01 9 29 2012-01-03 13 33 Alabama Mobile 2012-01-01 17 37 
2012-01-03 21 41 Montgomery 2012-01-01 25 45 2012-01-03 29 49 ``` as is, df.resample('2D', how='sum') gets me ``` TypeError: Only valid with DatetimeIndex or PeriodIndex ``` Fair enough, but I'd sort of expect this to work: ``` >>> df.swaplevel('Date', 'State').resample('2D', how='sum') TypeError: Only valid with DatetimeIndex or PeriodIndex ``` at which point I'm really running out of ideas... is there some way stack and unstack might be able to help me?", "response":"pd.Grouper allows you to specify a \"groupby instruction for a target object\". In particular, you can use it to group by dates even if df.index is not a DatetimeIndex: ``` df.groupby(pd.Grouper(freq='2D', level=-1)) ``` The level=-1 tells pd.Grouper to look for the dates in the last level of the MultiIndex. Moreover, you can use this in conjunction with other level values from the index: ``` level_values = df.index.get_level_values result = (df.groupby([level_values(i) for i in [0,1]] +[pd.Grouper(freq='2D', level=-1)]).sum()) ``` It looks a bit awkward, but using_Grouper turns out to be much faster than my original suggestion, using_reset_index: ``` import numpy as np import pandas as pd import datetime as DT def using_Grouper(df): level_values = df.index.get_level_values return (df.groupby([level_values(i) for i in [0,1]] +[pd.Grouper(freq='2D', level=-1)]).sum()) def using_reset_index(df): df = df.reset_index(level=[0, 1]) return df.groupby(['State','City']).resample('2D').sum() def using_stack(df): # http:\/\/stackoverflow.com\/a\/15813787\/190597 return (df.unstack(level=[0,1]) .resample('2D').sum() .stack(level=[2,1]) .swaplevel(2,0)) def make_orig(): values_a = range(16) values_b = range(10, 26) states = ['Georgia']*8 + ['Alabama']*8 cities = ['Atlanta']*4 + ['Savanna']*4 + ['Mobile']*4 + ['Montgomery']*4 dates = pd.DatetimeIndex([DT.date(2012,1,1)+DT.timedelta(days = i) for i in range(4)]*4) df = pd.DataFrame( {'value_a': values_a, 'value_b': values_b}, index = [states, cities, dates]) 
df.index.names = ['State', 'City', 'Date'] return df def make_df(N): dates = pd.date_range('2000-1-1', periods=N) states = np.arange(50) cities = np.arange(10) index = pd.MultiIndex.from_product([states, cities, dates], names=['State', 'City', 'Date']) df = pd.DataFrame(np.random.randint(10, size=(len(index),2)), index=index, columns=['value_a', 'value_b']) return df df = make_orig() print(using_Grouper(df)) ``` yields ``` value_a value_b State City Date Alabama Mobile 2012-01-01 17 37 2012-01-03 21 41 Montgomery 2012-01-01 25 45 2012-01-03 29 49 Georgia Atlanta 2012-01-01 1 21 2012-01-03 5 25 Savanna 2012-01-01 9 29 2012-01-03 13 33 ``` Here is a benchmark comparing using_Grouper, using_reset_index, using_stack on a 5000-row DataFrame: ``` In [30]: df = make_df(10) In [34]: len(df) Out[34]: 5000 In [32]: %timeit using_Grouper(df) 100 loops, best of 3: 6.03 ms per loop In [33]: %timeit using_stack(df) 10 loops, best of 3: 22.3 ms per loop In [31]: %timeit using_reset_index(df) 1 loop, best of 3: 659 ms per loop ```", "best_answers_score":0.8, "library_name":"pandas", "question_url":"https:\/\/stackoverflow.com\/questions\/15799162\/resampling-within-a-pandas-multiindex", "best_answers_votes":68, "question_length":1367, "response_length":2543 }, { "question":"Aggregation in Pandas How can I perform aggregation with Pandas? No DataFrame after aggregation! What happened? How can I aggregate mainly strings columns (to lists, tuples, strings with separator)? How can I aggregate counts? How can I create a new column filled by aggregated values? I've seen these recurring questions asking about various faces of the pandas aggregate functionality. Most of the information regarding aggregation and its various use cases today is fragmented across dozens of badly worded, unsearchable posts. The aim here is to collate some of the more important points for posterity. This Q&A is meant to be the next instalment in a series of helpful user-guides: How can I pivot a dataframe? 
What are the 'levels', 'keys', and names arguments for in Pandas' concat function? How do I operate on a DataFrame with a Series for every column? Pandas Merging 101 Please note that this post is not meant to be a replacement for the documentation about aggregation and about groupby, so please read that as well!", "response":"Question 1 How can I perform aggregation with Pandas? Expanded aggregation documentation. Aggregating functions are the ones that reduce the dimension of the returned objects. This means the output Series\/DataFrame has the same number of rows as the original, or fewer. Some common aggregating functions are tabulated below: ``` Function Description mean() Compute mean of groups sum() Compute sum of group values size() Compute group sizes count() Compute count of group std() Standard deviation of groups var() Compute variance of groups sem() Standard error of the mean of groups describe() Generates descriptive statistics first() Compute first of group values last() Compute last of group values nth() Take nth value, or a subset if n is a list min() Compute min of group values max() Compute max of group values ``` ``` np.random.seed(123) df = pd.DataFrame({'A' : ['foo', 'foo', 'bar', 'foo', 'bar', 'foo'], 'B' : ['one', 'two', 'three','two', 'two', 'one'], 'C' : np.random.randint(5, size=6), 'D' : np.random.randint(5, size=6), 'E' : np.random.randint(5, size=6)}) print (df) A B C D E 0 foo one 2 3 0 1 foo two 4 1 0 2 bar three 2 1 1 3 foo two 1 0 3 4 bar two 3 1 4 5 foo one 2 1 0 ``` Aggregation by filtered columns and Cython-implemented functions: ``` df1 = df.groupby(['A', 'B'], as_index=False)['C'].sum() print (df1) A B C 0 bar three 2 1 bar two 3 2 foo one 4 3 foo two 5 ``` An aggregate function is used for all columns without being specified in the groupby function, here the A, B columns: ``` df2 = df.groupby(['A', 'B'], as_index=False).sum() print (df2) A B C D E 0 bar three 2 1 1 1 bar two 3 1 4 2 foo one 4 4 0 3 foo two 5 1 3 ``` You can also specify only some
columns used for aggregation in a list after the groupby function: ``` df3 = df.groupby(['A', 'B'], as_index=False)['C','D'].sum() print (df3) A B C D 0 bar three 2 1 1 bar two 3 1 2 foo one 4 4 3 foo two 5 1 ``` The same results can be obtained with DataFrameGroupBy.agg: ``` df1 = df.groupby(['A', 'B'], as_index=False)['C'].agg('sum') print (df1) A B C 0 bar three 2 1 bar two 3 2 foo one 4 3 foo two 5 df2 = df.groupby(['A', 'B'], as_index=False).agg('sum') print (df2) A B C D E 0 bar three 2 1 1 1 bar two 3 1 4 2 foo one 4 4 0 3 foo two 5 1 3 ``` To apply multiple functions to one column, use a list of tuples of new column names and aggregating functions: ``` df4 = (df.groupby(['A', 'B'])['C'] .agg([('average','mean'),('total','sum')]) .reset_index()) print (df4) A B average total 0 bar three 2.0 2 1 bar two 3.0 3 2 foo one 2.0 4 3 foo two 2.5 5 ``` If you want to pass multiple functions for all columns, you can pass a list of tuples: ``` df5 = (df.groupby(['A', 'B']) .agg([('average','mean'),('total','sum')])) print (df5) C D E average total average total average total A B bar three 2.0 2 1.0 1 1.0 1 two 3.0 3 1.0 1 4.0 4 foo one 2.0 4 2.0 4 0.0 0 two 2.5 5 0.5 1 1.5 3 ``` This produces a MultiIndex in the columns: ``` print (df5.columns) MultiIndex(levels=[['C', 'D', 'E'], ['average', 'total']], labels=[[0, 0, 1, 1, 2, 2], [0, 1, 0, 1, 0, 1]]) ``` To flatten the MultiIndex into ordinary columns, use map with join: ``` df5.columns = df5.columns.map('_'.join) df5 = df5.reset_index() print (df5) A B C_average C_total D_average D_total E_average E_total 0 bar three 2.0 2 1.0 1 1.0 1 1 bar two 3.0 3 1.0 1 4.0 4 2 foo one 2.0 4 2.0 4 0.0 0 3 foo two 2.5 5 0.5 1 1.5 3 ``` Another solution is to pass a list of aggregate functions, then flatten the MultiIndex and build the new column names with str.replace: ``` df5 = df.groupby(['A', 'B']).agg(['mean','sum']) df5.columns = (df5.columns.map('_'.join) .str.replace('sum','total') .str.replace('mean','average')) df5 = df5.reset_index() print (df5) A B C_average
C_total D_average D_total E_average E_total 0 bar three 2.0 2 1.0 1 1.0 1 1 bar two 3.0 3 1.0 1 4.0 4 2 foo one 2.0 4 2.0 4 0.0 0 3 foo two 2.5 5 0.5 1 1.5 3 ``` If you want to specify a separate aggregating function for each column, pass a dictionary: ``` df6 = (df.groupby(['A', 'B'], as_index=False) .agg({'C':'sum','D':'mean'}) .rename(columns={'C':'C_total', 'D':'D_average'})) print (df6) A B C_total D_average 0 bar three 2 1.0 1 bar two 3 1.0 2 foo one 4 2.0 3 foo two 5 0.5 ``` You can pass a custom function too: ``` def func(x): return x.iat[0] + x.iat[-1] df7 = (df.groupby(['A', 'B'], as_index=False) .agg({'C':'sum','D': func}) .rename(columns={'C':'C_total', 'D':'D_sum_first_and_last'})) print (df7) A B C_total D_sum_first_and_last 0 bar three 2 2 1 bar two 3 2 2 foo one 4 4 3 foo two 5 1 ``` Question 2 No DataFrame after aggregation! What happened? Aggregation by two or more columns: ``` df1 = df.groupby(['A', 'B'])['C'].sum() print (df1) A B bar three 2 two 3 foo one 4 two 5 Name: C, dtype: int32 ``` First check the Index and type of a Pandas object: ``` print (df1.index) MultiIndex(levels=[['bar', 'foo'], ['one', 'three', 'two']], labels=[[0, 0, 1, 1], [1, 2, 0, 2]], names=['A', 'B']) print (type(df1)) ``` There are two ways to turn the MultiIndex Series back into columns: add parameter as_index=False ``` df1 = df.groupby(['A', 'B'], as_index=False)['C'].sum() print (df1) A B C 0 bar three 2 1 bar two 3 2 foo one 4 3 foo two 5 ``` use Series.reset_index: ``` df1 = df.groupby(['A', 'B'])['C'].sum().reset_index() print (df1) A B C 0 bar three 2 1 bar two 3 2 foo one 4 3 foo two 5 ``` If grouping by one column: ``` df2 = df.groupby('A')['C'].sum() print (df2) A bar 5 foo 9 Name: C, dtype: int32 ``` ...
get Series with Index: ``` print (df2.index) Index(['bar', 'foo'], dtype='object', name='A') print (type(df2)) ``` And the solution is the same as for the MultiIndex Series: ``` df2 = df.groupby('A', as_index=False)['C'].sum() print (df2) A C 0 bar 5 1 foo 9 df2 = df.groupby('A')['C'].sum().reset_index() print (df2) A C 0 bar 5 1 foo 9 ``` Question 3 How can I aggregate mainly strings columns (to lists, tuples, strings with separator)? ``` df = pd.DataFrame({'A' : ['a', 'c', 'b', 'b', 'a', 'c', 'b'], 'B' : ['one', 'two', 'three','two', 'two', 'one', 'three'], 'C' : ['three', 'one', 'two', 'two', 'three','two', 'one'], 'D' : [1,2,3,2,3,1,2]}) print (df) A B C D 0 a one three 1 1 c two one 2 2 b three two 3 3 b two two 2 4 a two three 3 5 c one two 1 6 b three one 2 ``` Instead of an aggregation function, it is possible to pass a list, tuple, or set to convert the column: ``` df1 = df.groupby('A')['B'].agg(list).reset_index() print (df1) A B 0 a [one, two] 1 b [three, two, three] 2 c [two, one] ``` An alternative is to use GroupBy.apply: ``` df1 = df.groupby('A')['B'].apply(list).reset_index() print (df1) A B 0 a [one, two] 1 b [three, two, three] 2 c [two, one] ``` For converting to strings with a separator, use .join only if it is a string column: ``` df2 = df.groupby('A')['B'].agg(','.join).reset_index() print (df2) A B 0 a one,two 1 b three,two,three 2 c two,one ``` If it is a numeric column, use a lambda function with astype to convert to strings: ``` df3 = (df.groupby('A')['D'] .agg(lambda x: ','.join(x.astype(str))) .reset_index()) print (df3) A D 0 a 1,3 1 b 3,2,2 2 c 2,1 ``` Another solution is to convert to strings before the groupby: ``` df3 = (df.assign(D = df['D'].astype(str)) .groupby('A')['D'] .agg(','.join).reset_index()) print (df3) A D 0 a 1,3 1 b 3,2,2 2 c 2,1 ``` To convert all columns, don't pass a list of column(s) after groupby. There is no column D in the result, because of the automatic exclusion of 'nuisance' columns.
This means all numeric columns are excluded. ``` df4 = df.groupby('A').agg(','.join).reset_index() print (df4) A B C 0 a one,two three,three 1 b three,two,three two,two,one 2 c two,one one,two ``` So it's necessary to convert all columns into strings, and then get all columns: ``` df5 = (df.groupby('A') .agg(lambda x: ','.join(x.astype(str))) .reset_index()) print (df5) A B C D 0 a one,two three,three 1,3 1 b three,two,three two,two,one 3,2,2 2 c two,one one,two 2,1 ``` Question 4 How can I aggregate counts? ``` df = pd.DataFrame({'A' : ['a', 'c', 'b', 'b', 'a', 'c', 'b'], 'B' : ['one', 'two', 'three','two', 'two', 'one', 'three'], 'C' : ['three', np.nan, np.nan, 'two', 'three','two', 'one'], 'D' : [np.nan,2,3,2,3,np.nan,2]}) print (df) A B C D 0 a one three NaN 1 c two NaN 2.0 2 b three NaN 3.0 3 b two two 2.0 4 a two three 3.0 5 c one two NaN 6 b three one 2.0 ``` Use GroupBy.size for the size of each group: ``` df1 = df.groupby('A').size().reset_index(name='COUNT') print (df1) A COUNT 0 a 2 1 b 3 2 c 2 ``` GroupBy.count excludes missing values: ``` df2 = df.groupby('A')['C'].count().reset_index(name='COUNT') print (df2) A COUNT 0 a 2 1 b 2 2 c 1 ``` This function should be used on multiple columns for counting non-missing values: ``` df3 = df.groupby('A').count().add_suffix('_COUNT').reset_index() print (df3) A B_COUNT C_COUNT D_COUNT 0 a 2 2 1 1 b 3 2 3 2 c 2 1 1 ``` A related function is Series.value_counts. It returns the size of the object containing counts of unique values in descending order, so that the first element is the most frequently-occurring element. It excludes NaN values by default.
``` df4 = (df['A'].value_counts() .rename_axis('A') .reset_index(name='COUNT')) print (df4) A COUNT 0 b 3 1 a 2 2 c 2 ``` If you want same output like using function groupby + size, add Series.sort_index: ``` df5 = (df['A'].value_counts() .sort_index() .rename_axis('A') .reset_index(name='COUNT')) print (df5) A COUNT 0 a 2 1 b 3 2 c 2 ``` Question 5 How can I create a new column filled by aggregated values? Method GroupBy.transform returns an object that is indexed the same (same size) as the one being grouped. See the Pandas documentation for more information. ``` np.random.seed(123) df = pd.DataFrame({'A' : ['foo', 'foo', 'bar', 'foo', 'bar', 'foo'], 'B' : ['one', 'two', 'three','two', 'two', 'one'], 'C' : np.random.randint(5, size=6), 'D' : np.random.randint(5, size=6)}) print (df) A B C D 0 foo one 2 3 1 foo two 4 1 2 bar three 2 1 3 foo two 1 0 4 bar two 3 1 5 foo one 2 1 df['C1'] = df.groupby('A')['C'].transform('sum') df['C2'] = df.groupby(['A','B'])['C'].transform('sum') df[['C3','D3']] = df.groupby('A')['C','D'].transform('sum') df[['C4','D4']] = df.groupby(['A','B'])['C','D'].transform('sum') print (df) A B C D C1 C2 C3 D3 C4 D4 0 foo one 2 3 9 4 9 5 4 4 1 foo two 4 1 9 5 9 5 5 1 2 bar three 2 1 5 2 5 2 2 1 3 foo two 1 0 9 5 9 5 5 1 4 bar two 3 1 5 3 5 2 3 1 5 foo one 2 1 9 4 9 5 4 4 ```", "best_answers_score":0.8, "library_name":"pandas", "question_url":"https:\/\/stackoverflow.com\/questions\/53781634\/aggregation-in-pandas", "best_answers_votes":140, "question_length":1029, "response_length":10314 }, { "question":"Histogram values of a Pandas Series I have some values in a Python Pandas Series (type: pandas.core.series.Series) ``` In [1]: series = pd.Series([0.0,950.0,-70.0,812.0,0.0,-90.0,0.0,0.0,-90.0,0.0,-64.0,208.0,0.0,-90.0,0.0,-80.0,0.0,0.0,-80.0,-48.0,840.0,-100.0,190.0,130.0,-100.0,-100.0,0.0,-50.0,0.0,-100.0,-100.0,0.0,-90.0,0.0,-90.0,-90.0,63.0,-90.0,0.0,0.0,-90.0,-80.0,0.0,]) In [2]: series.min() Out[2]: -100.0 In [3]: series.max() Out[3]: 
950.0 ``` I would like to get values of histogram (not necessary plotting histogram)... I just need to get the frequency for each interval. Let's say that my intervals are going from [-200; -150] to [950; 1000] so lower bounds are ``` lwb = range(-200,1000,50) ``` and upper bounds are ``` upb = range(-150,1050,50) ``` I don't know how to get frequency (the number of values that are inside each interval) now... I'm sure that defining lwb and upb is not necessary... but I don't know what function I should use to perform this! (after diving in Pandas doc, I think cut function can help me because it's a discretization problem... but I'm don't understand how to use it) After being able to do this, I will have a look at the way to display histogram (but that's an other problem)", "response":"You just need to use the histogram function of NumPy: ``` import numpy as np count, division = np.histogram(series) ``` where division is the automatically calculated border for your bins and count is the population inside each bin. If you need to fix a certain number of bins, you can use the argument bins and specify a number of bins, or give it directly the boundaries between each bin. ``` count, division = np.histogram(series, bins = [-201,-149,949,1001]) ``` to plot the results you can use the matplotlib function hist, but if you are working in pandas each Series has its own handle to the hist function, and you can give it the chosen binning: ``` series.hist(bins=division) ``` Edit: As mentioned by another poster, Pandas is built on top of NumPy. 
Since OP is explicitly using Pandas, we can do away with the additional import by accessing NumPy through Pandas: ```py count, division = pd.np.histogram(series) ```", "best_answers_score":0.8, "library_name":"pandas", "question_url":"https:\/\/stackoverflow.com\/questions\/13129618\/histogram-values-of-a-pandas-series", "best_answers_votes":122, "question_length":1227, "response_length":926 }, { "question":"pandas read_csv index_col=None not working with delimiters at the end of each line I am going through the 'Python for Data Analysis' book and having trouble in the 'Example: 2012 Federal Election Commision Database' section reading the data to a DataFrame. The trouble is that one of the columns of data is always being set as the index column, even when the index_col argument is set to None. Here is the link to the data : http:\/\/www.fec.gov\/disclosurep\/PDownload.do. Here is the loading code (to save time in the checking, I set the nrows=10): ``` import pandas as pd fec = pd.read_csv('P00000001-ALL.csv',nrows=10,index_col=None) ``` To keep it short I am excluding the data column outputs, but here is my output (please not the Index values): ``` In [20]: fec Out[20]: Index: 10 entries, C00410118 to C00410118 Data columns: ... dtypes: float64(4), int64(3), object(11) ``` And here is the book's output (again with data columns excluded): ``` In [13]: fec = read_csv('P00000001-ALL.csv') In [14]: fec Out[14]: Int64Index: 1001731 entries, 0 to 1001730 ... dtypes: float64(1), int64(1), object(14) ``` The Index values in my output are actually the first column of data in the file, which is then moving all the rest of the data to the left by one. Would anyone know how to prevent this column of data to be listed as an index? I would like to have the index just +1 increasing integers. I am fairly new to python and pandas, so I apologize for any inconvenience. 
Thanks.", "response":"Quick Answer Use index_col=False instead of index_col=None when you have delimiters at the end of each line to turn off index column inference and discard the last column. More Detail After looking at the data, there is a comma at the end of each line. And this quote (the documentation has been edited since the time this post was created): index_col: column number, column name, or list of column numbers\/names, to use as the index (row labels) of the resulting DataFrame. By default, it will number the rows without using any column, unless there is one more data column than there are headers, in which case the first column is taken as the index. from the documentation shows that pandas believes you have n headers and n+1 data columns and is treating the first column as the index. EDIT 10\/20\/2014 - More information I found another valuable entry that is specifically about trailing limiters and how to simply ignore them: If a file has one more column of data than the number of column names, the first column will be used as the DataFrame\u2019s row names: ... Ordinarily, you can achieve this behavior using the index_col option. There are some exception cases when a file has been prepared with delimiters at the end of each data line, confusing the parser. To explicitly disable the index column inference and discard the last column, pass index_col=False: ...", "best_answers_score":0.8, "library_name":"pandas", "question_url":"https:\/\/stackoverflow.com\/questions\/12960574\/pandas-read-csv-index-col-none-not-working-with-delimiters-at-the-end-of-each-li", "best_answers_votes":149, "question_length":1478, "response_length":1368 }, { "question":"making matplotlib scatter plots from dataframes in Python's pandas What is the best way to make a series of scatter plots using matplotlib from a pandas dataframe in Python? 
For example, if I have a dataframe df that has some columns of interest, I find myself typically converting everything to arrays: ``` import matplotlib.pylab as plt # df is a DataFrame: fetch col1 and col2 # and drop na rows if any of the columns are NA mydata = df[[\"col1\", \"col2\"]].dropna(how=\"any\") # Now plot with matplotlib vals = mydata.values plt.scatter(vals[:, 0], vals[:, 1]) ``` The problem with converting everything to array before plotting is that it forces you to break out of dataframes. Consider these two use cases where having the full dataframe is essential to plotting: For example, what if you wanted to now look at all the values of col3 for the corresponding values that you plotted in the call to scatter, and color each point (or size) it by that value? You'd have to go back, pull out the non-na values of col1,col2 and check what their corresponding values. Is there a way to plot while preserving the dataframe? For example: ``` mydata = df.dropna(how=\"any\", subset=[\"col1\", \"col2\"]) # plot a scatter of col1 by col2, with sizes according to col3 scatter(mydata([\"col1\", \"col2\"]), s=mydata[\"col3\"]) ``` Similarly, imagine that you wanted to filter or color each point differently depending on the values of some of its columns. E.g. what if you wanted to automatically plot the labels of the points that meet a certain cutoff on col1, col2 alongside them (where the labels are stored in another column of the df), or color these points differently, like people do with dataframes in R. For example: ``` mydata = df.dropna(how=\"any\", subset=[\"col1\", \"col2\"]) myscatter = scatter(mydata[[\"col1\", \"col2\"]], s=1) # Plot in red, with smaller size, all the points that # have a col2 value greater than 0.5 myscatter.replot(mydata[\"col2\"] > 0.5, color=\"red\", s=0.5) ``` How can this be done? EDIT Reply to crewbum: You say that the best way is to plot each condition (like subset_a, subset_b) separately. What if you have many conditions, e.g. 
you want to split up the scatters into 4 types of points or even more, plotting each in different shape\/color. How can you elegantly apply condition a, b, c, etc. and make sure you then plot \"the rest\" (things not in any of these conditions) as the last step? Similarly in your example where you plot col1,col2 differently based on col3, what if there are NA values that break the association between col1,col2,col3? For example if you want to plot all col2 values based on their col3 values, but some rows have an NA value in either col1 or col3, forcing you to use dropna first. So you would do: ``` mydata = df.dropna(how=\"any\", subset=[\"col1\", \"col2\", \"col3\") ``` then you can plot using mydata like you show -- plotting the scatter between col1,col2 using the values of col3. But mydata will be missing some points that have values for col1,col2 but are NA for col3, and those still have to be plotted... so how would you basically plot \"the rest\" of the data, i.e. the points that are not in the filtered set mydata?", "response":"Try passing columns of the DataFrame directly to matplotlib, as in the examples below, instead of extracting them as numpy arrays. 
``` df = pd.DataFrame(np.random.randn(10,2), columns=['col1','col2']) df['col3'] = np.arange(len(df))**2 * 100 + 100 In [5]: df Out[5]: col1 col2 col3 0 -1.000075 -0.759910 100 1 0.510382 0.972615 200 2 1.872067 -0.731010 500 3 0.131612 1.075142 1000 4 1.497820 0.237024 1700 ``` Vary scatter point size based on another column ``` plt.scatter(df.col1, df.col2, s=df.col3) # OR (with pandas 0.13 and up) df.plot(kind='scatter', x='col1', y='col2', s=df.col3) ``` Vary scatter point color based on another column ``` colors = np.where(df.col3 > 300, 'r', 'k') plt.scatter(df.col1, df.col2, s=120, c=colors) # OR (with pandas 0.13 and up) df.plot(kind='scatter', x='col1', y='col2', s=120, c=colors) ``` Scatter plot with legend However, the easiest way I've found to create a scatter plot with legend is to call plt.scatter once for each point type. ``` cond = df.col3 > 300 subset_a = df[cond].dropna() subset_b = df[~cond].dropna() plt.scatter(subset_a.col1, subset_a.col2, s=120, c='b', label='col3 > 300') plt.scatter(subset_b.col1, subset_b.col2, s=60, c='r', label='col3 <= 300') plt.legend() ``` Update From what I can tell, matplotlib simply skips points with NA x\/y coordinates or NA style settings (e.g., color\/size). To find points skipped due to NA, try the isnull method: df[df.col3.isnull()] To split a list of points into many types, take a look at numpy select, which is a vectorized if-then-else implementation and accepts an optional default value. 
For example: ``` df['subset'] = np.select([df.col3 < 150, df.col3 < 400, df.col3 < 600], [0, 1, 2], -1) for color, label in zip('bgrm', [0, 1, 2, -1]): subset = df[df.subset == label] plt.scatter(subset.col1, subset.col2, s=120, c=color, label=str(label)) plt.legend() ```", "best_answers_score":0.8, "library_name":"pandas", "question_url":"https:\/\/stackoverflow.com\/questions\/14300137\/making-matplotlib-scatter-plots-from-dataframes-in-pythons-pandas", "best_answers_votes":122, "question_length":3162, "response_length":1869 }, { "question":"pandas applying regex to replace values I have read some pricing data into a pandas dataframe the values appear as: ```none $40,000* $40000 conditions attached ``` I want to strip it down to just the numeric values. I know I can loop through and apply regex ```py [0-9]+ ``` to each field then join the resulting list back together but is there a not loopy way?", "response":"You could use Series.str.replace: ``` import pandas as pd df = pd.DataFrame(['$40,000*','$40000 conditions attached'], columns=['P']) print(df) # P # 0 $40,000* # 1 $40000 conditions attached df['P'] = df['P'].str.replace(r'\\D+', '', regex=True).astype('int') print(df) ``` yields ``` P 0 40000 1 40000 ``` since \\D matches any character that is not a decimal digit.", "best_answers_score":0.8, "library_name":"pandas", "question_url":"https:\/\/stackoverflow.com\/questions\/22588316\/pandas-applying-regex-to-replace-values", "best_answers_votes":167, "question_length":361, "response_length":366 }, { "question":"Comparing two pandas dataframes for differences I've got a script updating 5-10 columns worth of data , but sometimes the start csv will be identical to the end csv so instead of writing an identical csvfile I want it to do nothing... How can I compare two dataframes to check if they're the same or not? ``` csvdata = pandas.read_csv('csvfile.csv') csvdata_old = csvdata # ... 
do stuff with csvdata dataframe if csvdata_old != csvdata: csvdata.to_csv('csvfile.csv', index=False) ``` Any ideas?", "response":"You also need to be careful to create a copy of the DataFrame, otherwise the csvdata_old will be updated with csvdata (since it points to the same object): ``` csvdata_old = csvdata.copy() ``` To check whether they are equal, you can use assert_frame_equal as in this answer: ``` from pandas.util.testing import assert_frame_equal assert_frame_equal(csvdata, csvdata_old) ``` You can wrap this in a function with something like: ``` try: assert_frame_equal(csvdata, csvdata_old) return True except: # apparently AssertionError doesn't catch all return False ``` There was discussion of a better way...", "best_answers_score":0.8, "library_name":"pandas", "question_url":"https:\/\/stackoverflow.com\/questions\/19917545\/comparing-two-pandas-dataframes-for-differences", "best_answers_votes":90, "question_length":494, "response_length":600 }, { "question":"Efficient way to unnest (explode) multiple list columns in a pandas DataFrame I am reading multiple JSON objects into one DataFrame. The problem is that some of the columns are lists. Also, the data is very big and because of that I cannot use the available solutions on the internet. They are very slow and memory-inefficient. Here is what my data looks like: ``` df = pd.DataFrame({'A': ['x1','x2','x3', 'x4'], 'B':[['v1','v2'],['v3','v4'],['v5','v6'],['v7','v8']], 'C':[['c1','c2'],['c3','c4'],['c5','c6'],['c7','c8']],'D':[['d1','d2'],['d3','d4'],['d5','d6'],['d7','d8']], 'E':[['e1','e2'],['e3','e4'],['e5','e6'],['e7','e8']]}) A B C D E 0 x1 [v1, v2] [c1, c2] [d1, d2] [e1, e2] 1 x2 [v3, v4] [c3, c4] [d3, d4] [e3, e4] 2 x3 [v5, v6] [c5, c6] [d5, d6] [e5, e6] 3 x4 [v7, v8] [c7, c8] [d7, d8] [e7, e8] ``` And this is the shape of my data: (441079, 12) My desired output is: ``` A B C D E 0 x1 v1 c1 d1 e1 0 x1 v2 c2 d2 e2 1 x2 v3 c3 d3 e3 1 x2 v4 c4 d4 e4 .....
``` EDIT: After being marked as duplicate, I would like to stress on the fact that in this question I was looking for an efficient method of exploding multiple columns. Therefore the approved answer is able to explode an arbitrary number of columns on very large datasets efficiently. Something that the answers to the other question failed to do (and that was the reason I asked this question after testing those solutions).", "response":"pandas >= 1.3 In more recent versions, pandas allows you to explode multiple columns at once using DataFrame.explode, provided all values have lists of equal size. Thus, you are able to use this: ``` df.explode(['B', 'C', 'D', 'E']).reset_index(drop=True) A B C D E 0 x1 v1 c1 d1 e1 1 x1 v2 c2 d2 e2 2 x2 v3 c3 d3 e3 3 x2 v4 c4 d4 e4 4 x3 v5 c5 d5 e5 5 x3 v6 c6 d6 e6 6 x4 v7 c7 d7 e7 7 x4 v8 c8 d8 e8 ``` pandas >= 0.25 For slightly older versions, you can apply Series.explode on each column. ``` df.set_index(['A']).apply(pd.Series.explode).reset_index() A B C D E 0 x1 v1 c1 d1 e1 1 x1 v2 c2 d2 e2 2 x2 v3 c3 d3 e3 3 x2 v4 c4 d4 e4 4 x3 v5 c5 d5 e5 5 x3 v6 c6 d6 e6 6 x4 v7 c7 d7 e7 7 x4 v8 c8 d8 e8 ``` The idea is to set as the index all columns that must NOT be exploded first, then reset the index after. Funnily enough, this happens to be faster than calling df.explode, according to my tests. YMMV. explode methods are quite performant in general: ``` df2 = pd.concat([df] * 100, ignore_index=True) %timeit df2.explode(['B', 'C', 'D', 'E']).reset_index(drop=True) %timeit df2.set_index(['A']).apply(pd.Series.explode).reset_index() # fastest %%timeit (df2.set_index('A') .apply(lambda x: x.apply(pd.Series).stack()) .reset_index() .drop('level_1', axis=1)) 2.59 ms \u00b1 112 \u00b5s per loop (mean \u00b1 std. dev. of 7 runs, 100 loops each) 1.27 ms \u00b1 239 \u00b5s per loop (mean \u00b1 std. dev. of 7 runs, 1000 loops each) 120 ms \u00b1 9.48 ms per loop (mean \u00b1 std. dev. 
of 7 runs, 10 loops each) ```", "best_answers_score":0.8, "library_name":"pandas", "question_url":"https:\/\/stackoverflow.com\/questions\/45846765\/efficient-way-to-unnest-explode-multiple-list-columns-in-a-pandas-dataframe", "best_answers_votes":122, "question_length":1391, "response_length":1483 }, { "question":"Check if a string in a Pandas DataFrame column is in a list of strings If I have a frame like this ```py frame = pd.DataFrame({ \"a\": [\"the cat is blue\", \"the sky is green\", \"the dog is black\"] }) ``` and I want to check if any of those rows contain a certain word I just have to do this. ```py frame[\"b\"] = ( frame.a.str.contains(\"dog\") | frame.a.str.contains(\"cat\") | frame.a.str.contains(\"fish\") ) ``` frame[\"b\"] outputs: ```none 0 True 1 False 2 True Name: b, dtype: bool ``` If I decide to make a list: ```py mylist = [\"dog\", \"cat\", \"fish\"] ``` How would I check that the rows contain a certain word in the list?", "response":"``` frame = pd.DataFrame({'a' : ['the cat is blue', 'the sky is green', 'the dog is black']}) frame a 0 the cat is blue 1 the sky is green 2 the dog is black ``` The str.contains method accepts a regular expression pattern: ``` mylist = ['dog', 'cat', 'fish'] pattern = '|'.join(mylist) pattern 'dog|cat|fish' frame.a.str.contains(pattern) 0 True 1 False 2 True Name: a, dtype: bool ``` Because regex patterns are supported, you can also embed flags: ``` frame = pd.DataFrame({'a' : ['Cat Mr. Nibbles is blue', 'the sky is green', 'the dog is black']}) frame a 0 Cat Mr. 
Nibbles is blue 1 the sky is green 2 the dog is black pattern = '|'.join([f'(?i){animal}' for animal in mylist]) # python 3.6+ pattern '(?i)dog|(?i)cat|(?i)fish' frame.a.str.contains(pattern) 0 True # Because of the (?i) flag, 'Cat' is also matched to 'cat' 1 False 2 True ```", "best_answers_score":0.8, "library_name":"pandas", "question_url":"https:\/\/stackoverflow.com\/questions\/17972938\/check-if-a-string-in-a-pandas-dataframe-column-is-in-a-list-of-strings", "best_answers_votes":140, "question_length":616, "response_length":847 }, { "question":"pandas concat generates nan values I am curious why a simple concatenation of two dataframes in pandas: ```py initId.shape # (66441, 1) initId.isnull().sum() # 0 ypred.shape # (66441, 1) ypred.isnull().sum() # 0 ``` of the same shape and both without NaN values ```py foo = pd.concat([initId, ypred], join='outer', axis=1) foo.shape # (83384, 2) foo.isnull().sum() # 16943 ``` can result in a lot of NaN values if joined. How can I fix this problem and prevent NaN values being introduced? Trying to reproduce it like ```py aaa = pd.DataFrame([0,1,0,1,0,0], columns=['prediction']) bbb = pd.DataFrame([0,0,1,0,1,1], columns=['groundTruth']) pd.concat([aaa, bbb], axis=1) ``` failed e.g. 
worked just fine as no NaN values were introduced.", "response":"I think there is a problem with different index values, so where concat cannot align you get NaN: ``` aaa = pd.DataFrame([0,1,0,1,0,0], columns=['prediction'], index=[4,5,8,7,10,12]) print(aaa) prediction 4 0 5 1 8 0 7 1 10 0 12 0 bbb = pd.DataFrame([0,0,1,0,1,1], columns=['groundTruth']) print(bbb) groundTruth 0 0 1 0 2 1 3 0 4 1 5 1 print (pd.concat([aaa, bbb], axis=1)) prediction groundTruth 0 NaN 0.0 1 NaN 0.0 2 NaN 1.0 3 NaN 0.0 4 0.0 1.0 5 1.0 1.0 7 1.0 NaN 8 0.0 NaN 10 0.0 NaN 12 0.0 NaN ``` The solution is reset_index if the index values are not necessary: ``` aaa.reset_index(drop=True, inplace=True) bbb.reset_index(drop=True, inplace=True) print(aaa) prediction 0 0 1 1 2 0 3 1 4 0 5 0 print(bbb) groundTruth 0 0 1 0 2 1 3 0 4 1 5 1 print (pd.concat([aaa, bbb], axis=1)) prediction groundTruth 0 0 0 1 1 0 2 0 1 3 1 0 4 0 1 5 0 1 ``` EDIT: If you need the same index as aaa and the length of the DataFrames is the same, use: ``` bbb.index = aaa.index print (pd.concat([aaa, bbb], axis=1)) prediction groundTruth 4 0 0 5 1 0 8 0 1 7 1 0 10 0 1 12 0 1 ```", "best_answers_score":0.8, "library_name":"pandas", "question_url":"https:\/\/stackoverflow.com\/questions\/40339886\/pandas-concat-generates-nan-values", "best_answers_votes":126, "question_length":737, "response_length":1039 }, { "question":"classifiers in scikit-learn that handle nan\/null I was wondering if there are classifiers that handle nan\/null values in scikit-learn. I thought random forest regressor handles this but I got an error when I call predict. ``` X_train = np.array([[1, np.nan, 3],[np.nan, 5, 6]]) y_train = np.array([1, 2]) clf = RandomForestRegressor(X_train, y_train) X_test = np.array([7, 8, np.nan]) y_pred = clf.predict(X_test) # Fails! ``` Can I not call predict with any scikit-learn algorithm with missing values? Edit. Now that I think about this, it makes sense.
It's not an issue during training but when you predict how do you branch when the variable is null? Maybe you could just split both ways and average the result? It seems like k-NN should work fine as long as the distance function ignores nulls though. Edit 2 (older and wiser me) Some gbm libraries (such as xgboost) use a ternary tree instead of a binary tree precisely for this purpose: 2 children for the yes\/no decision and 1 child for the missing decision. sklearn is using a binary tree", "response":"Short answer Sometimes missing values are simply not applicable. Imputing them is meaningless. In these cases you should use a model that can handle missing values. Scikit-learn's models cannot handle missing values. XGBoost can. More on scikit-learn and XGBoost As mentioned in this article, scikit-learn's decision trees and KNN algorithms are not (yet) robust enough to work with missing values. If imputation doesn't make sense, don't do it. Consider situations when imputation doesn't make sense. Keep in mind this is a made-up example. Consider a dataset with rows of cars (\"Danho Diesel\", \"Estal Electric\", \"Hesproc Hybrid\") and columns with their properties (Weight, Top speed, Acceleration, Power output, Sulfur Dioxide Emission, Range). Electric cars do not produce exhaust fumes - so the Sulfur dioxide emission of the Estal Electric should be a NaN-value (missing). You could argue that it should be set to 0 - but electric cars cannot produce sulfur dioxide. Imputing the value will ruin your predictions. As mentioned in this article, scikit-learn's decision trees and KNN algorithms are not (yet) robust enough to work with missing values.
If imputation doesn't make sense, don't do it.", "best_answers_score":0.8, "library_name":"pandas", "question_url":"https:\/\/stackoverflow.com\/questions\/30317119\/classifiers-in-scikit-learn-that-handle-nan-null", "best_answers_votes":47, "question_length":1046, "response_length":1202 }, { "question":"Strip whitespace from strings in a column I am using python csvkit to compare 2 files like this: ``` df1 = pd.read_csv('input1.csv', sep=',\\s+', delimiter=',', encoding=\"utf-8\") df2 = pd.read_csv('input2.csv', sep=',\\s,', delimiter=',', encoding=\"utf-8\") df3 = pd.merge(df1,df2, on='employee_id', how='right') df3.to_csv('output.csv', encoding='utf-8', index=False) ``` Currently I am running the file through a script before hand that strips spaces from the employee_id column. An example of employee_ids: ``` 37 78973 3 23787 2 22 3 123 ``` Is there a way to get csvkit to do it and save me a step?", "response":"You can strip() an entire Series in Pandas using .str.strip(): ``` df1['employee_id'] = df1['employee_id'].str.strip() df2['employee_id'] = df2['employee_id'].str.strip() ``` This will remove leading\/trailing whitespaces on the employee_id column in both df1 and df2 Alternatively, modify the read_csv lines to use skipinitialspace=True ``` df1 = pd.read_csv('input1.csv', sep=',\\s+', delimiter=',', encoding=\"utf-8\", skipinitialspace=True) df2 = pd.read_csv('input2.csv', sep=',\\s,', delimiter=',', encoding=\"utf-8\", skipinitialspace=True) ``` It looks like you are attempting to remove spaces in a string containing numbers, which can be accomplished with pandas.Series.str.replace: ``` df1['employee_id'] = df1['employee_id'].str.replace(\" \", \"\") df2['employee_id'] = df2['employee_id'].str.replace(\" \", \"\") ```", "best_answers_score":0.8, "library_name":"pandas", "question_url":"https:\/\/stackoverflow.com\/questions\/43332057\/strip-whitespace-from-strings-in-a-column", "best_answers_votes":149, "question_length":600, "response_length":814 }, { 
"question":"Reset color cycle in Matplotlib Say I have data about 3 trading strategies, each with and without transaction costs. I want to plot, on the same axes, the time series of each of the 6 variants (3 strategies * 2 trading costs). I would like the \"with transaction cost\" lines to be plotted with alpha=1 and linewidth=1 while I want the \"no transaction costs\" to be plotted with alpha=0.25 and linewidth=5. But I would like the color to be the same for both versions of each strategy. I would like something along the lines of: ``` fig, ax = plt.subplots(1, 1, figsize=(10, 10)) for c in with_transaction_frame.columns: ax.plot(with_transaction_frame[c], label=c, alpha=1, linewidth=1) ****SOME MAGIC GOES HERE TO RESET THE COLOR CYCLE for c in no_transaction_frame.columns: ax.plot(no_transaction_frame[c], label=c, alpha=0.25, linewidth=5) ax.legend() ``` What is the appropriate code to put on the indicated line to reset the color cycle so it is \"back to the start\" when the second loop is invoked?", "response":"In Matplotlib >= 1.5: ``` plt.gca().set_prop_cycle(None) for i in range(3): plt.plot(np.arange(10, 1, -1) + i) plt.show() ```", "best_answers_score":0.8, "library_name":"pandas", "question_url":"https:\/\/stackoverflow.com\/questions\/24193174\/reset-color-cycle-in-matplotlib", "best_answers_votes":121, "question_length":999, "response_length":119 }, { "question":"pandas extract year from datetime: df['year'] = df['date'].year is not working I import a dataframe via read_csv, but for some reason can't extract the year or month from the series df['date'], trying that gives AttributeError: 'Series' object has no attribute 'year': ``` date Count 6\/30\/2010 525 7\/30\/2010 136 8\/31\/2010 125 9\/30\/2010 84 10\/29\/2010 4469 df = pd.read_csv('sample_data.csv', parse_dates=True) df['date'] = pd.to_datetime(df['date']) df['year'] = df['date'].year df['month'] = df['date'].month ``` UPDATE: and when I try solutions with df['date'].dt on my pandas version 0.14.1, I
get \"AttributeError: 'Series' object has no attribute 'dt' \": ``` df = pd.read_csv('sample_data.csv',parse_dates=True) df['date'] = pd.to_datetime(df['date']) df['year'] = df['date'].dt.year df['month'] = df['date'].dt.month ``` Sorry for this question that seems repetitive - I expect the answer will make me feel like a bonehead... but I have not had any luck using answers to the similar questions on SO. FOLLOWUP: I can't seem to update my pandas 0.14.1 to a newer release in my Anaconda environment, each of the attempts below generates an invalid syntax error. I'm using Python 3.4.1 64bit. ``` conda update pandas conda install pandas==0.15.2 conda install -f pandas ``` Any ideas?", "response":"If you're running a recent-ish version of pandas then you can use the datetime accessor dt to access the datetime components: ``` In [6]: df['date'] = pd.to_datetime(df['date']) df['year'], df['month'] = df['date'].dt.year, df['date'].dt.month df Out[6]: date Count year month 0 2010-06-30 525 2010 6 1 2010-07-30 136 2010 7 2 2010-08-31 125 2010 8 3 2010-09-30 84 2010 9 4 2010-10-29 4469 2010 10 ``` EDIT It looks like you're running an older version of pandas in which case the following would work: ``` In [18]: df['date'] = pd.to_datetime(df['date']) df['year'], df['month'] = df['date'].apply(lambda x: x.year), df['date'].apply(lambda x: x.month) df Out[18]: date Count year month 0 2010-06-30 525 2010 6 1 2010-07-30 136 2010 7 2 2010-08-31 125 2010 8 3 2010-09-30 84 2010 9 4 2010-10-29 4469 2010 10 ``` Regarding why it didn't parse this into a datetime in read_csv you need to pass the ordinal position of your column ([0]) because when True it tries to parse columns [1,2,3] see the docs ``` In [20]: t=\"\"\"date Count 6\/30\/2010 525 7\/30\/2010 136 8\/31\/2010 125 9\/30\/2010 84 10\/29\/2010 4469\"\"\" df = pd.read_csv(io.StringIO(t), sep='\\s+', parse_dates=[0]) df.info() Int64Index: 5 entries, 0 to 4 Data columns (total 2 columns): date 5 non-null datetime64[ns] Count 5 
non-null int64 dtypes: datetime64[ns](1), int64(1) memory usage: 120.0 bytes ``` So if you pass param parse_dates=[0] to read_csv there shouldn't be any need to call to_datetime on the 'date' column after loading.", "best_answers_score":0.8, "library_name":"pandas", "question_url":"https:\/\/stackoverflow.com\/questions\/30405413\/pandas-extract-year-from-datetime-dfyear-dfdate-year-is-not-working", "best_answers_votes":132, "question_length":1284, "response_length":1489 }, { "question":"What is the point of indexing in pandas? Can someone point me to a link or provide an explanation of the benefits of indexing in pandas? I routinely deal with tables and join them based on columns, and this joining\/merging process seems to re-index things anyway, so it's a bit cumbersome to apply index criteria considering I don't think I need to. Any thoughts on best-practices around indexing?", "response":"Like a dict, a DataFrame's index is backed by a hash table. Looking up rows based on index values is like looking up dict values based on a key. In contrast, the values in a column are like values in a list. Looking up rows based on index values is faster than looking up rows based on column values. For example, consider ``` df = pd.DataFrame({'foo':np.random.random(), 'index':range(10000)}) df_with_index = df.set_index(['index']) ``` Here is how you could look up any row where the df['index'] column equals 999. Pandas has to loop through every value in the column to find the ones equal to 999. ``` df[df['index'] == 999] # foo index # 999 0.375489 999 ``` Here is how you could lookup any row where the index equals 999. 
With an index, Pandas uses the hash value to find the rows: ``` df_with_index.loc[999] # foo 0.375489 # index 999.000000 # Name: 999, dtype: float64 ``` Looking up rows by index is much faster than looking up rows by column value: ``` In [254]: %timeit df[df['index'] == 999] 1000 loops, best of 3: 368 \u00b5s per loop In [255]: %timeit df_with_index.loc[999] 10000 loops, best of 3: 57.7 \u00b5s per loop ``` Note however, it takes time to build the index: ``` In [220]: %timeit df.set_index(['index']) 1000 loops, best of 3: 330 \u00b5s per loop ``` So having the index is only advantageous when you have many lookups of this type to perform. Sometimes the index plays a role in reshaping the DataFrame. Many functions, such as set_index, stack, unstack, pivot, pivot_table, melt, lreshape, and crosstab, all use or manipulate the index. Sometimes we want the DataFrame in a different shape for presentation purposes, or for join, merge or groupby operations. (As you note joining can also be done based on column values, but joining based on the index is faster.) Behind the scenes, join, merge and groupby take advantage of fast index lookups when possible. Time series have resample, asfreq and interpolate methods whose underlying implementations take advantage of fast index lookups too. So in the end, I think the origin of the index's usefulness, why it shows up in so many functions, is due to its ability to perform fast hash lookups.", "best_answers_score":0.8, "library_name":"pandas", "question_url":"https:\/\/stackoverflow.com\/questions\/27238066\/what-is-the-point-of-indexing-in-pandas", "best_answers_votes":128, "question_length":397, "response_length":2160 }, { "question":"Copy all values in a column to a new column in a pandas dataframe This is a very basic question, I just can not seem to find an answer. I have a dataframe like this, called df: ``` A B C a.1 b.1 c.1 a.2 b.2 c.2 a.3 b.3 c.3 ``` Then I extract all the rows from df, where column B has a value of 'b.2'. 
I assign these results to df_2. ``` df_2 = df[df['B'] == 'b.2'] ``` df_2 becomes: ``` A B C a.2 b.2 c.2 ``` Then, I copy all the values in column B to a new column named D, causing df_2 to become: ``` A B C D a.2 b.2 c.2 b.2 ``` When I perform an assignment like this: ``` df_2['D'] = df_2['B'] ``` I get the following warning: A value is trying to be set on a copy of a slice from a DataFrame. Try using .loc[row_indexer,col_indexer] = value instead See the caveats in the documentation: http:\/\/pandas.pydata.org\/pandas-docs\/stable\/indexing.html#indexing-view-versus-copy I have also tried using loc when creating df_2 like this: ``` df_2 = df.loc[df['B'] == 'b.2'] ``` However, I still get the warning. Any help is greatly appreciated.", "response":"You can simply assign column B to the new column, like: ``` df['D'] = df['B'] ``` Example\/Demo - ``` In [1]: import pandas as pd In [2]: df = pd.DataFrame([['a.1','b.1','c.1'],['a.2','b.2','c.2'],['a.3','b.3','c.3']],columns=['A','B','C']) In [3]: df Out[3]: A B C 0 a.1 b.1 c.1 1 a.2 b.2 c.2 2 a.3 b.3 c.3 In [4]: df['D'] = df['B'] #<---What you want. In [5]: df Out[5]: A B C D 0 a.1 b.1 c.1 b.1 1 a.2 b.2 c.2 b.2 2 a.3 b.3 c.3 b.3 In [6]: df.loc[0,'D'] = 'd.1' In [7]: df Out[7]: A B C D 0 a.1 b.1 c.1 d.1 1 a.2 b.2 c.2 b.2 2 a.3 b.3 c.3 b.3 ```", "best_answers_score":0.8, "library_name":"pandas", "question_url":"https:\/\/stackoverflow.com\/questions\/32675861\/copy-all-values-in-a-column-to-a-new-column-in-a-pandas-dataframe", "best_answers_votes":125, "question_length":1042, "response_length":547 }, { "question":"join or merge with overwrite in pandas I want to perform a join\/merge\/append operation on a dataframe with datetime index. Let's say I have df1 and I want to add df2 to it. df2 can have fewer or more columns, and overlapping indexes. For all rows where the indexes match, if df2 has the same column as df1, I want the values of df1 to be overwritten with those from df2.
How can I obtain the desired result?", "response":"How about: df2.combine_first(df1)? ``` In [33]: df2 Out[33]: A B C D 2000-01-03 0.638998 1.277361 0.193649 0.345063 2000-01-04 -0.816756 -1.711666 -1.155077 -0.678726 2000-01-05 0.435507 -0.025162 -1.112890 0.324111 2000-01-06 -0.210756 -1.027164 0.036664 0.884715 2000-01-07 -0.821631 -0.700394 -0.706505 1.193341 2000-01-10 1.015447 -0.909930 0.027548 0.258471 2000-01-11 -0.497239 -0.979071 -0.461560 0.447598 In [34]: df1 Out[34]: A B C 2000-01-03 2.288863 0.188175 -0.040928 2000-01-04 0.159107 -0.666861 -0.551628 2000-01-05 -0.356838 -0.231036 -1.211446 2000-01-06 -0.866475 1.113018 -0.001483 2000-01-07 0.303269 0.021034 0.471715 2000-01-10 1.149815 0.686696 -1.230991 2000-01-11 -1.296118 -0.172950 -0.603887 2000-01-12 -1.034574 -0.523238 0.626968 2000-01-13 -0.193280 1.857499 -0.046383 2000-01-14 -1.043492 -0.820525 0.868685 In [35]: df2.comb df2.combine df2.combineAdd df2.combine_first df2.combineMult In [35]: df2.combine_first(df1) Out[35]: A B C D 2000-01-03 0.638998 1.277361 0.193649 0.345063 2000-01-04 -0.816756 -1.711666 -1.155077 -0.678726 2000-01-05 0.435507 -0.025162 -1.112890 0.324111 2000-01-06 -0.210756 -1.027164 0.036664 0.884715 2000-01-07 -0.821631 -0.700394 -0.706505 1.193341 2000-01-10 1.015447 -0.909930 0.027548 0.258471 2000-01-11 -0.497239 -0.979071 -0.461560 0.447598 2000-01-12 -1.034574 -0.523238 0.626968 NaN 2000-01-13 -0.193280 1.857499 -0.046383 NaN 2000-01-14 -1.043492 -0.820525 0.868685 NaN ``` Note that it takes the values from df1 for indices that do not overlap with df2. 
If this doesn't do exactly what you want I would be willing to improve this function \/ add options to it.", "best_answers_score":0.8, "library_name":"pandas", "question_url":"https:\/\/stackoverflow.com\/questions\/9787853\/join-or-merge-with-overwrite-in-pandas", "best_answers_votes":78, "question_length":404, "response_length":1633 }, { "question":"Pandas timeseries plot setting x-axis major and minor ticks and labels I want to be able to set the major and minor xticks and their labels for a time series graph plotted from a Pandas time series object. The Pandas 0.9 \"what's new\" page says: \"you can either use to_pydatetime or register a converter for the Timestamp type\" but I can't work out how to do that so that I can use the matplotlib ax.xaxis.set_major_locator and ax.xaxis.set_major_formatter (and minor) commands. If I use them without converting the pandas times, the x-axis ticks and labels end up wrong. By using the 'xticks' parameter, I can pass the major ticks to pandas' .plot, and then set the major tick labels. I can't work out how to do the minor ticks using this approach (I can set the labels on the default minor ticks set by pandas' .plot). 
Here is my test code: Graph with strange dates on xaxis ```py import pandas as pd import matplotlib.dates as mdates import numpy as np dateIndex = pd.date_range(start='2011-05-01', end='2011-07-01', freq='D') testSeries = pd.Series(data=np.random.randn(len(dateIndex)), index=dateIndex) ax = plt.figure(figsize=(7,4), dpi=300).add_subplot(111) testSeries.plot(ax=ax, style='v-', label='first line') # using MatPlotLib date time locators and formatters doesn't work with new # pandas datetime index ax.xaxis.set_minor_locator(mdates.WeekdayLocator()) ax.xaxis.set_minor_formatter(mdates.DateFormatter('%d\\n%a')) ax.xaxis.grid(True, which=\"minor\") ax.xaxis.grid(False, which=\"major\") ax.xaxis.set_major_formatter(mdates.DateFormatter('\\n\\n\\n%b%Y')) plt.show() ``` Graph with correct dates (without minor ticks) ```py # set the major xticks and labels through pandas ax2 = plt.figure(figsize=(7,4), dpi=300).add_subplot(111) xticks = pd.date_range(start='2011-05-01', end='2011-07-01', freq='W-Tue') testSeries.plot(ax=ax2, style='-v', label='second line', xticks=xticks.to_pydatetime()) ax2.set_xticklabels([x.strftime('%a\\n%d\\n%h\\n%Y') for x in xticks]); # remove the minor xtick labels set by pandas.plot ax2.set_xticklabels([], minor=True) # turn the minor ticks created by pandas.plot off plt.show() ``` Update: I've been able to get closer to the layout I wanted by using a loop to build the major xtick labels: ```py # only show month for first label in month month = dStart.month - 1 xticklabels = [] for x in xticks: if month != x.month : xticklabels.append(x.strftime('%d\\n%a\\n%h')) month = x.month else: xticklabels.append(x.strftime('%d\\n%a')) ``` However, this is a bit like doing the x-axis using ax.annotate: possible but not ideal. How do I set the major and minor ticks when plotting pandas time-series data?", "response":"Both pandas and matplotlib.dates use matplotlib.units for locating the ticks. 
But while matplotlib.dates has convenient ways to set the ticks manually, pandas seems to have the focus on auto formatting so far (you can have a look at the code for date conversion and formatting in pandas). So for the moment it seems more reasonable to use matplotlib.dates (as mentioned by @BrenBarn in his comment). ``` import numpy as np import pandas as pd import matplotlib.pyplot as plt import matplotlib.dates as dates idx = pd.date_range('2011-05-01', '2011-07-01') s = pd.Series(np.random.randn(len(idx)), index=idx) fig, ax = plt.subplots() ax.plot_date(idx.to_pydatetime(), s, 'v-') ax.xaxis.set_minor_locator(dates.WeekdayLocator(byweekday=(1), interval=1)) ax.xaxis.set_minor_formatter(dates.DateFormatter('%d\\n%a')) ax.xaxis.grid(True, which=\"minor\") ax.yaxis.grid() ax.xaxis.set_major_locator(dates.MonthLocator()) ax.xaxis.set_major_formatter(dates.DateFormatter('\\n\\n\\n%b\\n%Y')) plt.tight_layout() plt.show() ``` (my locale is German, so that Tuesday [Tue] becomes Dienstag [Di])", "best_answers_score":0.8, "library_name":"pandas", "question_url":"https:\/\/stackoverflow.com\/questions\/12945971\/pandas-timeseries-plot-setting-x-axis-major-and-minor-ticks-and-labels", "best_answers_votes":94, "question_length":2642, "response_length":1078 }, { "question":"Row-wise average for a subset of columns with missing values I've got a 'DataFrame` which has occasional missing values, and looks something like this: ``` Monday Tuesday Wednesday ================================================ Mike 42 NaN 12 Jenna NaN NaN 15 Jon 21 4 1 ``` I'd like to add a new column to my data frame where I'd calculate the average across all columns for every row. 
Meaning, for Mike, I'd need (df['Monday'] + df['Wednesday'])\/2, but for Jenna, I'd simply use df['Wednesday amt.']\/1 Does anyone know the best way to account for this variation that results from missing values and calculate the average?", "response":"You can simply: ``` df['avg'] = df.mean(axis=1) Monday Tuesday Wednesday avg Mike 42 NaN 12 27.000000 Jenna NaN NaN 15 15.000000 Jon 21 4 1 8.666667 ``` because .mean() ignores missing values by default: see docs. To select a subset, you can: ``` df['avg'] = df[['Monday', 'Tuesday']].mean(axis=1) Monday Tuesday Wednesday avg Mike 42 NaN 12 42.0 Jenna NaN NaN 15 NaN Jon 21 4 1 12.5 ```", "best_answers_score":0.8, "library_name":"pandas", "question_url":"https:\/\/stackoverflow.com\/questions\/34734940\/row-wise-average-for-a-subset-of-columns-with-missing-values", "best_answers_votes":190, "question_length":625, "response_length":387 }, { "question":"Python pandas convert datetime to timestamp effectively through dt accessor I have a DataFrame with some (hundreds of) million of rows. And I want to convert datetime to timestamp effectively. How can I do it? My sample df: ``` df = pd.DataFrame(index=pd.DatetimeIndex(start=dt.datetime(2016,1,1,0,0,1), end=dt.datetime(2016,1,2,0,0,1), freq='H'))\\ .reset_index().rename(columns={'index':'datetime'}) ``` which looks like: ``` datetime 0 2016-01-01 00:00:01 1 2016-01-01 01:00:01 2 2016-01-01 02:00:01 3 2016-01-01 03:00:01 4 2016-01-01 04:00:01 ``` Now I convert datetime to timestamp value-by-value with .apply() but it takes a very long time (some hours) if I have some (hundreds of) million rows: ``` df['ts'] = df[['datetime']].apply(lambda x: x[0].timestamp(), axis=1).astype(int) ``` Output: ``` datetime ts 0 2016-01-01 00:00:01 1451602801 1 2016-01-01 01:00:01 1451606401 2 2016-01-01 02:00:01 1451610001 3 2016-01-01 03:00:01 1451613601 4 2016-01-01 04:00:01 1451617201 ``` The above result is what I want. 
If I try to use the .dt accessor of pandas.Series then I get error message: ``` df['ts'] = df['datetime'].dt.timestamp ``` AttributeError: 'DatetimeProperties' object has no attribute 'timestamp' If I try to create eg. the date parts of datetimes with the .dt accessor then it is much faster then using .apply(): ``` df['date'] = df['datetime'].dt.date ``` Output: ``` datetime ts date 0 2016-01-01 00:00:01 1451602801 2016-01-01 1 2016-01-01 01:00:01 1451606401 2016-01-01 2 2016-01-01 02:00:01 1451610001 2016-01-01 3 2016-01-01 03:00:01 1451613601 2016-01-01 4 2016-01-01 04:00:01 1451617201 2016-01-01 ``` I want something similar with timestamps... But I don't really understand the official documentation: it talks about \"Converting to Timestamps\" but I don't see any timestamps there; it just talks about converting to datetime with pd.to_datetime() but not to timestamp... pandas.Timestamp constructor also doesn't work (returns with the below error): ``` df['ts2'] = pd.Timestamp(df['datetime']) ``` TypeError: Cannot convert input to Timestamp pandas.Series.to_timestamp also makes something totally different from what I want: ``` df['ts3'] = df['datetime'].to_timestamp ``` Output: ``` datetime ts ts3 0 2016-01-01 00:00:01 1451602801 > df A B C 0 1 2 1 1 1 3 2 2 4 6 3 3 4 3 4 4 5 4 5 ``` The original DataFrame is more complicated with more columns and rows. I want to get the first row that fulfills some criteria. Examples: Get first row where A > 3 (returns row 2) Get first row where A > 4 AND B > 3 (returns row 4) Get first row where A > 3 AND (B > 3 OR C > 2) (returns row 2) But, if there isn't any row that fulfills the specific criteria, then I want to get the first one after I just sort it descending by A (or other cases by B, C etc). Get first row where A > 6 (returns row 4 by ordering it by A desc and get the first one) I was able to do it by iterating on the DataFrame (I know that craps :P). 
So, what would be a more pythonic way to solve it?", "response":"This tutorial is a very good one for pandas slicing. Make sure you check it out. Onto some snippets... To slice a dataframe with a condition, you use this format: ``` >>> df[condition] ``` This will return a slice of your dataframe which you can index using iloc. Here are your examples: Get first row where A > 3 (returns row 2) ``` >>> df[df.A > 3].iloc[0] A 4 B 6 C 3 Name: 2, dtype: int64 ``` If what you actually want is the row number, rather than using iloc, it would be df[df.A > 3].index[0]. Get first row where A > 4 AND B > 3: ``` >>> df[(df.A > 4) & (df.B > 3)].iloc[0] A 5 B 4 C 5 Name: 4, dtype: int64 ``` Get first row where A > 3 AND (B > 3 OR C > 2) (returns row 2) ``` >>> df[(df.A > 3) & ((df.B > 3) | (df.C > 2))].iloc[0] A 4 B 6 C 3 Name: 2, dtype: int64 ``` Now, with your last case we can write a function that handles the default case of returning the descending-sorted frame: ``` >>> def series_or_default(X, condition, default_col, ascending=False): ... sliced = X[condition] ... if sliced.shape[0] == 0: ... return X.sort_values(default_col, ascending=ascending).iloc[0] ... return sliced.iloc[0] >>> >>> series_or_default(df, df.A > 6, 'A') A 5 B 4 C 5 Name: 4, dtype: int64 ``` As expected, it returns row 4.", "best_answers_score":0.8, "library_name":"pandas", "question_url":"https:\/\/stackoverflow.com\/questions\/40660088\/get-first-row-of-dataframe-in-python-pandas-based-on-criteria", "best_answers_votes":118, "question_length":965, "response_length":1237 }, { "question":"Pandas replace a character in all column names I have data frames with column names (coming from .csv files) containing ( and ) and I'd like to replace them with _. 
How can I do that in place for all columns?", "response":"Use str.replace: ``` df.columns = df.columns.str.replace(\"[()]\", \"_\", regex=True) ``` Sample: ``` df = pd.DataFrame({'(A)':[1,2,3], '(B)':[4,5,6], 'C)':[7,8,9]}) print (df) (A) (B) C) 0 1 4 7 1 2 5 8 2 3 6 9 df.columns = df.columns.str.replace(r\"[()]\", \"_\", regex=True) print (df) _A_ _B_ C_ 0 1 4 7 1 2 5 8 2 3 6 9 ```", "best_answers_score":0.8, "library_name":"pandas", "question_url":"https:\/\/stackoverflow.com\/questions\/39741429\/pandas-replace-a-character-in-all-column-names", "best_answers_votes":144, "question_length":208, "response_length":319 }, { "question":"How to get row number in dataframe in Pandas? How can I get the number of the row in a dataframe that contains a certain value in a certain column using Pandas? For example, I have the following dataframe: ``` ClientID LastName 0 34 Johnson 1 67 Smith 2 53 Brows ``` How can I find the number of the row that has 'Smith' in 'LastName' column?", "response":"Note that a dataframe's index could be out of order, or not even numerical at all. 
If you don't want to use the current index and instead renumber the rows sequentially, then you can use df.reset_index() together with the suggestions below. To get all indices that match 'Smith': ``` >>> df[df['LastName'] == 'Smith'].index Int64Index([1], dtype='int64') ``` or as a numpy array ``` >>> df[df['LastName'] == 'Smith'].index.to_numpy() # .values on older versions array([1]) ``` or if there is only one and you want the integer, you can subset ``` >>> df[df['LastName'] == 'Smith'].index[0] 1 ``` You could use the same boolean expressions with .loc, but it is not needed unless you also want to select a certain column, which is redundant when you only want the row number\/index.", "best_answers_score":0.8, "library_name":"pandas", "question_url":"https:\/\/stackoverflow.com\/questions\/43193880\/how-to-get-row-number-in-dataframe-in-pandas", "best_answers_votes":80, "question_length":342, "response_length":778 }, { "question":"Data type conversion error: ValueError: Cannot convert non-finite values (NA or inf) to integer [duplicate] This question already has answers here: NumPy or Pandas: Keeping array type as integer while having a NaN value (10 answers) Closed 7 years ago. I have the following dataframe ``` df1 = df[['tripduration','starttime','stoptime','start station name','end station name','bikeid','usertype','birth year','gender']] print(df1.head(2)) ``` which prints the following ``` tripduration starttime stoptime start station name \\ 0 364 2017-09-01 00:02:01 2017-09-01 00:08:05 Exchange Place 1 357 2017-09-01 00:08:12 2017-09-01 00:14:09 Warren St end station name bikeid usertype birth year gender 0 Marin Light Rail 29670 Subscriber 1989.0 1 1 Newport Pkwy 26163 Subscriber 1980.0 1 ``` I am using the following code to convert \"birth year\" column type from float to int. ``` df1[['birth year']] = df1[['birth year']].astype(int) print df1.head(2) ``` But I get the following error. How to fix this?
``` ValueErrorTraceback (most recent call last) in () ----> 1 df1[['birth year']] = df1[['birth year']].astype(int) 2 print df1.head(2) 3 __zeppelin__._displayhook() \/usr\/miniconda2\/lib\/python2.7\/site-packages\/pandas\/util\/_decorators.pyc in wrapper(*args, **kwargs) 116 else: 117 kwargs[new_arg_name] = new_arg_value --> 118 return func(*args, **kwargs) 119 return wrapper 120 return _deprecate_kwarg \/usr\/miniconda2\/lib\/python2.7\/site-packages\/pandas\/core\/generic.pyc in astype(self, dtype, copy, errors, **kwargs) 4002 # else, only a single dtype is given 4003 new_data = self._data.astype(dtype=dtype, copy=copy, errors=errors, -> 4004 **kwargs) 4005 return self._constructor(new_data).__finalize__(self) 4006 \/usr\/miniconda2\/lib\/python2.7\/site-packages\/pandas\/core\/internals.pyc in astype(self, dtype, **kwargs) 3460 3461 def astype(self, dtype, **kwargs): -> 3462 return self.apply('astype', dtype=dtype, **kwargs) 3463 3464 def convert(self, **kwargs): \/usr\/miniconda2\/lib\/python2.7\/site-packages\/pandas\/core\/internals.pyc in apply(self, f, axes, filter, do_integrity_check, consolidate, **kwargs) 3327 3328 kwargs['mgr'] = self -> 3329 applied = getattr(b, f)(**kwargs) 3330 result_blocks = _extend_blocks(applied, result_blocks) 3331 \/usr\/miniconda2\/lib\/python2.7\/site-packages\/pandas\/core\/internals.pyc in astype(self, dtype, copy, errors, values, **kwargs) 542 def astype(self, dtype, copy=False, errors='raise', values=None, **kwargs): 543 return self._astype(dtype, copy=copy, errors=errors, values=values, --> 544 **kwargs) 545 546 def _astype(self, dtype, copy=False, errors='raise', values=None, \/usr\/miniconda2\/lib\/python2.7\/site-packages\/pandas\/core\/internals.pyc in _astype(self, dtype, copy, errors, values, klass, mgr, **kwargs) 623 624 # _astype_nansafe works fine with 1-d only --> 625 values = astype_nansafe(values.ravel(), dtype, copy=True) 626 values = values.reshape(self.shape) 627 
\/usr\/miniconda2\/lib\/python2.7\/site-packages\/pandas\/core\/dtypes\/cast.pyc in astype_nansafe(arr, dtype, copy) 685 686 if not np.isfinite(arr).all(): --> 687 raise ValueError('Cannot convert non-finite values (NA or inf) to ' 688 'integer') 689 ValueError: Cannot convert non-finite values (NA or inf) to integer ```", "response":"If your DF is big, you're probably not seeing the missing numbers. But you can use the fillna function to help ``` >>> df = pd.DataFrame(data=data, columns=['id', 'birth_year']) >>> df id birth_year 0 1 1989.0 1 2 1990.0 2 3 NaN >>> df.birth_year 0 1989.0 1 1990.0 2 NaN Name: birth_year, dtype: float64 >>> df.birth_year.astype(int) ERROR |2018.01.29T18:14:04|default:183: Unhandled Terminal Exception Traceback (most recent call last): File \"\", line 1, in File \"\/usr\/local\/devtools\/uat\/anaconda4321\/lib\/python3.6\/site- packages\/pandas\/util\/_decorators.py\", line 91, in wrapper return func(*args, **kwargs) File \"\/usr\/local\/devtools\/uat\/anaconda4321\/lib\/python3.6\/site- packages\/pandas\/core\/generic.py\", line 3410, in astype **kwargs) File \"\/usr\/local\/devtools\/uat\/anaconda4321\/lib\/python3.6\/site- packages\/pandas\/core\/internals.py\", line 3224, in astype return self.apply('astype', dtype=dtype, **kwargs) File \"\/usr\/local\/devtools\/uat\/anaconda4321\/lib\/python3.6\/site- packages\/pandas\/core\/internals.py\", line 3091, in apply applied = getattr(b, f)(**kwargs) File \"\/usr\/local\/devtools\/uat\/anaconda4321\/lib\/python3.6\/site- packages\/pandas\/core\/internals.py\", line 471, in astype **kwargs) File \"\/usr\/local\/devtools\/uat\/anaconda4321\/lib\/python3.6\/site- packages\/pandas\/core\/internals.py\", line 521, in _astype values = astype_nansafe(values.ravel(), dtype, copy=True) File \"\/usr\/local\/devtools\/uat\/anaconda4321\/lib\/python3.6\/site- packages\/pandas\/core\/dtypes\/cast.py\", line 620, in astype_nansafe raise ValueError('Cannot convert non-finite values (NA or inf) to ' 
ValueError: Cannot convert non-finite values (NA or inf) to integer >>> df = df.fillna(0) >>> df.birth_year.astype(int) 0 1989 1 1990 2 0 Name: birth_year, dtype: int64 ```", "best_answers_score":0.8, "library_name":"pandas", "question_url":"https:\/\/stackoverflow.com\/questions\/48511484\/data-type-conversion-error-valueerror-cannot-convert-non-finite-values-na-or", "best_answers_votes":84, "question_length":3220, "response_length":1737 }, { "question":"What are Python pandas equivalents for R functions like str(), summary(), and head()? I'm only aware of the describe() function. Are there any other functions similar to str(), summary(), and head()?", "response":"In pandas the info() method creates a very similar output like R's str(): ``` > str(train) 'data.frame': 891 obs. of 13 variables: $ PassengerId: int 1 2 3 4 5 6 7 8 9 10 ... $ Survived : int 0 1 1 1 0 0 0 0 1 1 ... $ Pclass : int 3 1 3 1 3 3 1 3 3 2 ... $ Name : Factor w\/ 891 levels \"Abbing, Mr. Anthony\",..: 109 191 358 277 16 559 520 629 417 581 ... $ Sex : Factor w\/ 2 levels \"female\",\"male\": 2 1 1 1 2 2 2 2 1 1 ... $ Age : num 22 38 26 35 35 NA 54 2 27 14 ... $ SibSp : int 1 1 0 1 0 0 0 3 0 1 ... $ Parch : int 0 0 0 0 0 0 0 1 2 0 ... $ Ticket : Factor w\/ 681 levels \"110152\",\"110413\",..: 524 597 670 50 473 276 86 396 345 133 ... $ Fare : num 7.25 71.28 7.92 53.1 8.05 ... $ Cabin : Factor w\/ 148 levels \"\",\"A10\",\"A14\",..: 1 83 1 57 1 1 131 1 1 1 ... $ Embarked : Factor w\/ 4 levels \"\",\"C\",\"Q\",\"S\": 4 2 4 4 4 3 4 4 4 2 ... $ Child : num 0 0 0 0 0 NA 0 1 0 1 ... 
train.info() RangeIndex: 891 entries, 0 to 890 Data columns (total 12 columns): PassengerId 891 non-null int64 Survived 891 non-null int64 Pclass 891 non-null int64 Name 891 non-null object Sex 891 non-null object Age 714 non-null float64 SibSp 891 non-null int64 Parch 891 non-null int64 Ticket 891 non-null object Fare 891 non-null float64 Cabin 204 non-null object Embarked 889 non-null object dtypes: float64(2), int64(5), object(5) memory usage: 83.6+ KB ```", "best_answers_score":0.8, "library_name":"pandas", "question_url":"https:\/\/stackoverflow.com\/questions\/27637281\/what-are-python-pandas-equivalents-for-r-functions-like-str-summary-and-he", "best_answers_votes":77, "question_length":199, "response_length":1335 }, { "question":"The easiest way for getting feature names after running SelectKBest in Scikit Learn I'm trying to conduct a supervised machine-learning experiment using the SelectKBest feature of scikit-learn, but I'm not sure how to create a new dataframe after finding the best features: Let's assume I would like to conduct the experiment selecting 5 best features: ``` from sklearn.feature_selection import SelectKBest, f_classif select_k_best_classifier = SelectKBest(score_func=f_classif, k=5).fit_transform(features_dataframe, targeted_class) ``` Now, if I add the line: ``` import pandas as pd dataframe = pd.DataFrame(select_k_best_classifier) ``` I receive a new dataframe without feature names (only index starting from 0 to 4), but I want to create a dataframe with the new selected features, in a way like this: ``` dataframe = pd.DataFrame(fit_transofrmed_features, columns=features_names) ``` My question is how to create the features_names list? I know that I should use: ``` select_k_best_classifier.get_support() ``` Which returns an array of boolean values, where true values indices represent the column that should be selected in the original dataframe. 
How should I use this boolean array with the array of all features names I can get via the method feature_names = list(features_dataframe.columns.values) ?", "response":"This doesn't require loops. ``` # Create and fit selector selector = SelectKBest(f_classif, k=5) selector.fit(features_df, target) # Get columns to keep and create new dataframe with those only cols_idxs = selector.get_support(indices=True) features_df_new = features_df.iloc[:,cols_idxs] ```", "best_answers_score":0.8, "library_name":"pandas", "question_url":"https:\/\/stackoverflow.com\/questions\/39839112\/the-easiest-way-for-getting-feature-names-after-running-selectkbest-in-scikit-le", "best_answers_votes":109, "question_length":1314, "response_length":292 }, { "question":"Convert a Spark DataFrame to Pandas DF Is there a way to convert a Spark DF (not RDD) to a Pandas DF? I tried the following: ``` var some_df = Seq( (\"A\", \"no\"), (\"B\", \"yes\"), (\"B\", \"yes\"), (\"B\", \"no\") ).toDF( \"user_id\", \"phone_number\") ``` Code: ``` %pyspark pandas_df = some_df.toPandas() ``` Error: ``` NameError: name 'some_df' is not defined ``` Any suggestions.", "response":"following should work Sample DataFrame ```py some_df = sc.parallelize([ (\"A\", \"no\"), (\"B\", \"yes\"), (\"B\", \"yes\"), (\"B\", \"no\")] ).toDF([\"user_id\", \"phone_number\"]) ``` Converting DataFrame to Pandas DataFrame ``` pandas_df = some_df.toPandas() ```", "best_answers_score":0.8, "library_name":"pandas", "question_url":"https:\/\/stackoverflow.com\/questions\/50958721\/convert-a-spark-dataframe-to-pandas-df", "best_answers_votes":114, "question_length":366, "response_length":245 }, { "question":"Transform a Counter object into a Pandas DataFrame I used Counter on a list to compute this variable: ``` final = Counter(event_container) ``` print final gives: ``` Counter({'fb_view_listing': 76, 'fb_homescreen': 63, 'rt_view_listing': 50, 'rt_home_start_app': 46, 'fb_view_wishlist': 39, 'fb_view_product': 37, 'fb_search': 29, 
'rt_view_product': 23, 'fb_view_cart': 22, 'rt_search': 12, 'rt_view_cart': 12, 'add_to_cart': 2, 'create_campaign': 1, 'fb_connect': 1, 'sale': 1, 'guest_sale': 1, 'remove_from_cart': 1, 'rt_transaction_confirmation': 1, 'login': 1}) ``` Now I want to convert final into a Pandas DataFrame, but when I do: ``` final_df = pd.DataFrame(final) ``` I get an error. I guess final is not a proper dictionary, so how can I convert final to a dictionary? Or is there another way to convert final to a DataFrame?", "response":"You can construct using from_dict and pass param orient='index', then call reset_index so you get a 2 column df: ``` In [40]: from collections import Counter d = Counter({'fb_view_listing': 76, 'fb_homescreen': 63, 'rt_view_listing': 50, 'rt_home_start_app': 46, 'fb_view_wishlist': 39, 'fb_view_product': 37, 'fb_search': 29, 'rt_view_product': 23, 'fb_view_cart': 22, 'rt_search': 12, 'rt_view_cart': 12, 'add_to_cart': 2, 'create_campaign': 1, 'fb_connect': 1, 'sale': 1, 'guest_sale': 1, 'remove_from_cart': 1, 'rt_transaction_confirmation': 1, 'login': 1}) df = pd.DataFrame.from_dict(d, orient='index').reset_index() df Out[40]: index 0 0 login 1 1 rt_transaction_confirmation 1 2 fb_view_cart 22 3 fb_connect 1 4 rt_view_product 23 5 fb_search 29 6 sale 1 7 fb_view_listing 76 8 add_to_cart 2 9 rt_view_cart 12 10 fb_homescreen 63 11 fb_view_product 37 12 rt_home_start_app 46 13 fb_view_wishlist 39 14 create_campaign 1 15 rt_search 12 16 guest_sale 1 17 remove_from_cart 1 18 rt_view_listing 50 ``` You can rename the columns to something more meaningful: ``` In [43]: df = df.rename(columns={'index':'event', 0:'count'}) df Out[43]: event count 0 login 1 1 rt_transaction_confirmation 1 2 fb_view_cart 22 3 fb_connect 1 4 rt_view_product 23 5 fb_search 29 6 sale 1 7 fb_view_listing 76 8 add_to_cart 2 9 rt_view_cart 12 10 fb_homescreen 63 11 fb_view_product 37 12 rt_home_start_app 46 13 fb_view_wishlist 39 14 create_campaign 1 15 rt_search 12 16 guest_sale 1
17 remove_from_cart 1 18 rt_view_listing 50 ```", "best_answers_score":0.8, "library_name":"pandas", "question_url":"https:\/\/stackoverflow.com\/questions\/31111032\/transform-a-counter-object-into-a-pandas-dataframe", "best_answers_votes":131, "question_length":842, "response_length":1519 }, { "question":"How to calculate mean values grouped on another column For the following dataframe: ``` StationID HoursAhead BiasTemp SS0279 0 10 SS0279 1 20 KEOPS 0 0 KEOPS 1 5 BB 0 5 BB 1 5 ``` I'd like to get something like: ``` StationID BiasTemp SS0279 15 KEOPS 2.5 BB 5 ``` I know I can script something like this to get the desired result: ``` def transform_DF(old_df,col): list_stations = list(set(old_df['StationID'].values.tolist())) header = list(old_df.columns.values) header.remove(col) header_new = header new_df = pandas.DataFrame(columns = header_new) for i,station in enumerate(list_stations): general_results = old_df[(old_df['StationID'] == station)].describe() new_row = [] for column in header_new: if column in ['StationID']: new_row.append(station) continue new_row.append(general_results[column]['mean']) new_df.loc[i] = new_row return new_df ``` But I wonder if there is something more straightforward in pandas.", "response":"You could groupby on StationID and then take mean() on BiasTemp. 
To output Dataframe, use as_index=False ``` In [4]: df.groupby('StationID', as_index=False)['BiasTemp'].mean() Out[4]: StationID BiasTemp 0 BB 5.0 1 KEOPS 2.5 2 SS0279 15.0 ``` Without as_index=False, it returns a Series instead ``` In [5]: df.groupby('StationID')['BiasTemp'].mean() Out[5]: StationID BB 5.0 KEOPS 2.5 SS0279 15.0 Name: BiasTemp, dtype: float64 ``` Read more about groupby in this pydata tutorial.", "best_answers_score":0.8, "library_name":"pandas", "question_url":"https:\/\/stackoverflow.com\/questions\/30482071\/how-to-calculate-mean-values-grouped-on-another-column", "best_answers_votes":123, "question_length":921, "response_length":479 }, { "question":"Pandas to_html() truncates string contents I have a Python Pandas DataFrame object containing textual data. My problem is, that when I use to_html() function, it truncates the strings in the output. For example: ```py import pandas df = pandas.DataFrame({'text': ['Lorem ipsum dolor sit amet, consectetur adipiscing elit.']}) print (df.to_html()) ``` The output is truncated at adapis... ``` text 0 Lorem ipsum dolor sit amet, consectetur adipis... ``` There is a related question on SO, but it uses placeholders and search\/replace functionality to postprocess the HTML, which I would like to avoid: Writing full contents of Pandas dataframe to HTML table Is there a simpler solution to this problem? I could not find anything related from the documentation.", "response":"What you are seeing is pandas truncating the output for display purposes only. The default max_colwidth value is 50 which is what you are seeing. You can set this value to whatever you desire or you can set it to -1 which effectively turns this off: ``` pd.set_option('display.max_colwidth', -1) ``` Although I would advise against this, it would be better to set it to something that can be displayed easily in your console or ipython. 
A list of the options can be found here: http:\/\/pandas.pydata.org\/pandas-docs\/stable\/options.html", "best_answers_score":0.8, "library_name":"pandas", "question_url":"https:\/\/stackoverflow.com\/questions\/26277757\/pandas-to-html-truncates-string-contents", "best_answers_votes":117, "question_length":770, "response_length":534 }, { "question":"Pandas add one day to column I need to add 1 day to each date I want to get the begining date of the following month eg 2014-01-2014 for the 1st item in the dataframe. I tried: ``` montdist['date'] + pd.DateOffset(1) ``` Which gives me: ``` TypeError: cannot use a non-absolute DateOffset in datetime\/timedelta operations [] ``` Have a Dataframe: ``` Units mondist date 1 6491 0.057785 2013-12-31 00:00:00 2 7377 0.065672 2014-01-31 00:00:00 3 9990 0.088934 2014-02-28 00:00:00 4 10362 0.092245 2014-03-31 00:00:00 5 11271 0.100337 2014-04-30 00:00:00 6 11637 0.103596 2014-05-31 00:00:00 7 10199 0.090794 2014-06-30 00:00:00 8 10486 0.093349 2014-07-31 00:00:00 9 9282 0.082631 2014-08-31 00:00:00 10 8632 0.076844 2014-09-30 00:00:00 11 8204 0.073034 2013-10-31 00:00:00 12 8400 0.074779 2013-11-30 00:00:00 ```", "response":"Make it a DatetimeIndex first: ``` pd.DatetimeIndex(montdist['date']) + pd.DateOffset(1) ``` Note: I think there is a feature request that this could work with date columns... 
In action: ``` In [11]: df = pd.DataFrame([[1, 2], [3, 4]], columns=['A', 'B']) In [12]: df['date'] = pd.to_datetime(['21-11-2013', '22-11-2013']) In [13]: pd.DatetimeIndex(df.date) + pd.DateOffset(1) Out[13]: <class 'pandas.tseries.index.DatetimeIndex'> [2013-11-22 00:00:00, 2013-11-23 00:00:00] Length: 2, Freq: None, Timezone: None In [14]: pd.DatetimeIndex(df.date) + pd.offsets.Hour(1) Out[14]: <class 'pandas.tseries.index.DatetimeIndex'> [2013-11-21 01:00:00, 2013-11-22 01:00:00] Length: 2, Freq: None, Timezone: None ```", "best_answers_score":0.8, "library_name":"pandas", "question_url":"https:\/\/stackoverflow.com\/questions\/20480897\/pandas-add-one-day-to-column", "best_answers_votes":101, "question_length":813, "response_length":617 }, { "question":"Pandas selecting by label sometimes return Series, sometimes returns DataFrame In Pandas, when I select a label that only has one entry in the index I get back a Series, but when I select an entry that has more than one entry I get back a data frame. Why is that? Is there a way to ensure I always get back a data frame? ``` In [1]: import pandas as pd In [2]: df = pd.DataFrame(data=range(5), index=[1, 2, 3, 3, 3]) In [3]: type(df.loc[3]) Out[3]: pandas.core.frame.DataFrame In [4]: type(df.loc[1]) Out[4]: pandas.core.series.Series ```", "response":"Granted that the behavior is inconsistent, but I think it's easy to imagine cases where this is convenient. Anyway, to get a DataFrame every time, just pass a list to loc. There are other ways, but in my opinion this is the cleanest.
``` In [2]: type(df.loc[[3]]) Out[2]: pandas.core.frame.DataFrame In [3]: type(df.loc[[1]]) Out[3]: pandas.core.frame.DataFrame ```", "best_answers_score":0.7984, "library_name":"pandas", "question_url":"https:\/\/stackoverflow.com\/questions\/20383647\/pandas-selecting-by-label-sometimes-return-series-sometimes-returns-dataframe", "best_answers_votes":176, "question_length":538, "response_length":365 }, { "question":"Extracting just Month and Year separately from Pandas Datetime column I have a Dataframe, df, with the following column: ```none ArrivalDate 936 2012-12-31 938 2012-12-29 965 2012-12-31 966 2012-12-31 967 2012-12-31 968 2012-12-31 969 2012-12-31 970 2012-12-29 971 2012-12-31 972 2012-12-29 973 2012-12-29 ``` The elements of the column are pandas.tslib.Timestamp type. I want to extract the year and month. Here's what I've tried: ```py df['ArrivalDate'].resample('M', how = 'mean') ``` which throws the following error: ```none Only valid with DatetimeIndex or PeriodIndex ``` Then I tried: ```py df['ArrivalDate'].apply(lambda(x):x[:-2]) ``` which throws the following error: ```none 'Timestamp' object has no attribute '__getitem__' ``` My current solution is ```py df.index = df['ArrivalDate'] ``` Then, I can resample another column using the index. But I'd still like a method for reconfiguring the entire column. Any ideas?", "response":"If you want new columns showing year and month separately you can do this: ``` df['year'] = pd.DatetimeIndex(df['ArrivalDate']).year df['month'] = pd.DatetimeIndex(df['ArrivalDate']).month ``` or... 
``` df['year'] = df['ArrivalDate'].dt.year df['month'] = df['ArrivalDate'].dt.month ``` Then you can combine them or work with them just as they are.", "best_answers_score":0.7983, "library_name":"pandas", "question_url":"https:\/\/stackoverflow.com\/questions\/25146121\/extracting-just-month-and-year-separately-from-pandas-datetime-column", "best_answers_votes":629, "question_length":931, "response_length":348 }, { "question":"Add missing dates to pandas dataframe My data can have multiple events on a given date or NO events on a date. I take these events, get a count by date and plot them. However, when I plot them, my two series don't always match. ``` idx = pd.date_range(df['simpleDate'].min(), df['simpleDate'].max()) s = df.groupby(['simpleDate']).size() ``` In the above code idx becomes a range of, say, 30 dates: 09-01-2013 to 09-30-2013. However, S may only have 25 or 26 days because no events happened for a given date. I then get an AssertionError as the sizes don't match when I try to plot: ``` fig, ax = plt.subplots() ax.bar(idx.to_pydatetime(), s, color='green') ``` What's the proper way to tackle this? Do I want to remove dates with no values from IDX, or (which I'd rather do) add the missing dates to the series with a count of 0? I'd rather have a full graph of 30 days with 0 values. If this approach is right, any suggestions on how to get started? Do I need some sort of dynamic reindex function? Here's a snippet of S ( df.groupby(['simpleDate']).size() ), notice no entries for 04 and 05. ``` 09-02-2013 2 09-03-2013 10 09-06-2013 5 09-07-2013 1 ```", "response":"You could use Series.reindex: ``` import pandas as pd idx = pd.date_range('09-01-2013', '09-30-2013') s = pd.Series({'09-02-2013': 2, '09-03-2013': 10, '09-06-2013': 5, '09-07-2013': 1}) s.index = pd.DatetimeIndex(s.index) s = s.reindex(idx, fill_value=0) print(s) ``` yields ``` 2013-09-01 0 2013-09-02 2 2013-09-03 10 2013-09-04 0 2013-09-05 0 2013-09-06 5 2013-09-07 1 2013-09-08 0 ...
```", "best_answers_score":0.7979, "library_name":"pandas", "question_url":"https:\/\/stackoverflow.com\/questions\/19324453\/add-missing-dates-to-pandas-dataframe", "best_answers_votes":429, "question_length":1151, "response_length":392 }, { "question":"sorting by a custom list in pandas After reading through: http:\/\/pandas.pydata.org\/pandas-docs\/version\/0.13.1\/generated\/pandas.DataFrame.sort.html I still can't seem to figure out how to sort a column by a custom list. Obviously, the default sort is alphabetical. I'll give an example. Here is my (very abridged) dataframe: ``` Player Year Age Tm G 2967 Cedric Hunter 1991 27 CHH 6 5335 Maurice Baker 2004 25 VAN 7 13950 Ratko Varda 2001 22 TOT 60 6141 Ryan Bowen 2009 34 OKC 52 6169 Adrian Caldwell 1997 31 DAL 81 ``` I want to be able to sort by Player, Year and then Tm. The default sort by Player and Year is fine for me, in normal order. However, I do not want Team sorted alphabetically b\/c I want TOT always at the top. Here is the list I created: ``` sorter = ['TOT', 'ATL', 'BOS', 'BRK', 'CHA', 'CHH', 'CHI', 'CLE', 'DAL', 'DEN', 'DET', 'GSW', 'HOU', 'IND', 'LAC', 'LAL', 'MEM', 'MIA', 'MIL', 'MIN', 'NJN', 'NOH', 'NOK', 'NOP', 'NYK', 'OKC', 'ORL', 'PHI', 'PHO', 'POR', 'SAC', 'SAS', 'SEA', 'TOR', 'UTA', 'VAN', 'WAS', 'WSB'] ``` After reading through the link above, I thought this would work but it didn't: ``` df.sort(['Player', 'Year', 'Tm'], ascending = [True, True, sorter]) ``` It still has ATL at the top, meaning that it sorted alphabetically and not according to my custom list. Any help would really be greatly appreciated, I just can't figure this out.", "response":"The below answer is an old answer. It still works. Anyhow, another very elegant solution has been posted below , using the key argument. 
I just discovered that with pandas 0.15.1 it is possible to use categorical series (https:\/\/pandas.pydata.org\/docs\/user_guide\/categorical.html) As for your example, let's define the same data-frame and sorter: ``` import pandas as pd data = { 'id': [2967, 5335, 13950, 6141, 6169], 'Player': ['Cedric Hunter', 'Maurice Baker', 'Ratko Varda', 'Ryan Bowen', 'Adrian Caldwell'], 'Year': [1991, 2004, 2001, 2009, 1997], 'Age': [27, 25, 22, 34, 31], 'Tm': ['CHH', 'VAN', 'TOT', 'OKC', 'DAL'], 'G': [6, 7, 60, 52, 81] } # Create DataFrame df = pd.DataFrame(data) # Define the sorter sorter = ['TOT', 'ATL', 'BOS', 'BRK', 'CHA', 'CHH', 'CHI', 'CLE', 'DAL', 'DEN', 'DET', 'GSW', 'HOU', 'IND', 'LAC', 'LAL', 'MEM', 'MIA', 'MIL', 'MIN', 'NJN', 'NOH', 'NOK', 'NOP', 'NYK', 'OKC', 'ORL', 'PHI', 'PHO', 'POR', 'SAC', 'SAS', 'SEA', 'TOR', 'UTA', 'VAN', 'WAS', 'WSB'] ``` With the data-frame and sorter, which is a category order, we can do the following in pandas 0.15.1: ``` # Convert Tm-column to category and set the sorter as the categories hierarchy # You could also do both lines in one by appending the cat.set_categories() df.Tm = df.Tm.astype(\"category\") df.Tm = df.Tm.cat.set_categories(sorter) print(df.Tm) Out[48]: 0 CHH 1 VAN 2 TOT 3 OKC 4 DAL Name: Tm, dtype: category Categories (38, object): [TOT < ATL < BOS < BRK ...
UTA < VAN < WAS < WSB] df.sort_values([\"Tm\"]) ## 'sort' changed to 'sort_values' Out[49]: Age G Player Tm Year id 2 22 60 Ratko Varda TOT 2001 13950 0 27 6 Cedric Hunter CHH 1991 2967 4 31 81 Adrian Caldwell DAL 1997 6169 3 34 52 Ryan Bowen OKC 2009 6141 1 25 7 Maurice Baker VAN 2004 5335 ```", "best_answers_score":0.7973, "library_name":"pandas", "question_url":"https:\/\/stackoverflow.com\/questions\/23482668\/sorting-by-a-custom-list-in-pandas", "best_answers_votes":135, "question_length":1373, "response_length":1747 }, { "question":"Python Pandas add column for row-wise max value of selected columns [duplicate] This question already has answers here: Find the max of two or more columns with pandas (4 answers) Closed 6 years ago. ``` data = {'name' : ['bill', 'joe', 'steve'], 'test1' : [85, 75, 85], 'test2' : [35, 45, 83], 'test3' : [51, 61, 45]} frame = pd.DataFrame(data) ``` I would like to add a new column that shows the max value for each row. desired output: ``` name test1 test2 test3 HighScore bill 75 75 85 85 joe 35 45 83 83 steve 51 61 45 61 ``` Sometimes ``` frame['HighScore'] = max(data['test1'], data['test2'], data['test3']) ``` works but most of the time gives this error: ValueError: The truth value of an array with more than one element is ambiguous. Use a.any() or a.all() Why does it only work sometimes? 
Is there another way of doing it?", "response":"``` >>> frame['HighScore'] = frame[['test1','test2','test3']].max(axis=1) >>> frame name test1 test2 test3 HighScore 0 bill 85 35 51 85 1 joe 75 45 61 75 2 steve 85 83 45 85 ```", "best_answers_score":0.7969, "library_name":"pandas", "question_url":"https:\/\/stackoverflow.com\/questions\/20033111\/python-pandas-add-column-for-row-wise-max-value-of-selected-columns", "best_answers_votes":136, "question_length":833, "response_length":177 }, { "question":"How to add a column with values 1 to len(df) to a dataframe The index that I have in the dataframe (with 30 rows) is of the form: ``` Int64Index([171, 174, 173, 172, 199, \u2026, 175, 200]) ``` The index is not strictly increasing because the data frame is the output of a sort(). I want to add a column which is the series: ``` [1, 2, 3, 4, 5, \u2026, 30] ``` How should I go about doing that?", "response":"How about: ``` df['new_col'] = range(1, len(df) + 1) ``` Alternatively if you want the index to be the ranks and store the original index as a column: ``` df = df.reset_index() ```", "best_answers_score":0.7968, "library_name":"pandas", "question_url":"https:\/\/stackoverflow.com\/questions\/12168648\/how-to-add-a-column-with-values-1-to-lendf-to-a-dataframe", "best_answers_votes":178, "question_length":384, "response_length":180 }, { "question":"Check if string is in a pandas dataframe I would like to see if a particular string exists in a particular column within my dataframe. I'm getting the error ValueError: The truth value of a Series is ambiguous. Use a.empty, a.bool(), a.item(), a.any() or a.all(). 
``` import pandas as pd BabyDataSet = [('Bob', 968), ('Jessica', 155), ('Mary', 77), ('John', 578), ('Mel', 973)] a = pd.DataFrame(data=BabyDataSet, columns=['Names', 'Births']) if a['Names'].str.contains('Mel'): print (\"Mel is there\") ```", "response":"a['Names'].str.contains('Mel') will return an indicator vector of boolean values of size len(BabyDataSet) Therefore, you can use ``` mel_count=a['Names'].str.contains('Mel').sum() if mel_count>0: print (\"There are {m} Mels\".format(m=mel_count)) ``` Or any(), if you don't care how many records match your query ``` if a['Names'].str.contains('Mel').any(): print (\"Mel is there\") ```", "best_answers_score":0.7951, "library_name":"pandas", "question_url":"https:\/\/stackoverflow.com\/questions\/30944577\/check-if-string-is-in-a-pandas-dataframe", "best_answers_votes":188, "question_length":503, "response_length":382 }, { "question":"Calculating difference between two rows in Python \/ Pandas In python, how can I reference previous row and calculate something against it? Specifically, I am working with dataframes in pandas - I have a data frame full of stock price information that looks like this: ``` Date Close Adj Close 251 2011-01-03 147.48 143.25 250 2011-01-04 147.64 143.41 249 2011-01-05 147.05 142.83 248 2011-01-06 148.66 144.40 247 2011-01-07 147.93 143.69 ``` Here is how I created this dataframe: ``` import pandas url = 'http:\/\/ichart.finance.yahoo.com\/table.csv?s=IBM&a=00&b=1&c=2011&d=11&e=31&f=2011&g=d&ignore=.csv' data = data = pandas.read_csv(url) ## now I sorted the data frame ascending by date data = data.sort(columns='Date') ``` Starting with row number 2, or in this case, I guess it's 250 (PS - is that the index?), I want to calculate the difference between 2011-01-03 and 2011-01-04, for every entry in this dataframe. 
I believe the appropriate way is to write a function that takes the current row, then figures out the previous row, and calculates the difference between them, then use the pandas apply function to update the dataframe with the value. Is that the right approach? If so, should I be using the index to determine the difference? (note - I'm still in python beginner mode, so index may not be the right term, nor even the correct way to implement this)", "response":"I think you want to do something like this: ``` In [26]: data Out[26]: Date Close Adj Close 251 2011-01-03 147.48 143.25 250 2011-01-04 147.64 143.41 249 2011-01-05 147.05 142.83 248 2011-01-06 148.66 144.40 247 2011-01-07 147.93 143.69 In [27]: data.set_index('Date').diff() Out[27]: Close Adj Close Date 2011-01-03 NaN NaN 2011-01-04 0.16 0.16 2011-01-05 -0.59 -0.58 2011-01-06 1.61 1.57 2011-01-07 -0.73 -0.71 ```", "best_answers_score":0.795, "library_name":"pandas", "question_url":"https:\/\/stackoverflow.com\/questions\/13114512\/calculating-difference-between-two-rows-in-python-pandas", "best_answers_votes":133, "question_length":1366, "response_length":416 }, { "question":"Convert Pandas dataframe to csv string Here is an example of what I am trying to get: I have: ``` import pandas as pd df = pd.DataFrame({'A' : [0, 1], 'B' : [1, 6]}) ``` My goal is: ``` ',A,B\\n0,0,1\\n1,1,6\\n' ``` I can achieve this with lazy and horrible: ``` df.to_csv('temp.csv') # create unnecessary file body = open('temp.csv').read() ``` Also the to_string() method looks very promising; however, the best I can come up with is this: ``` body = df.to_string()[1:].replace(' ', ',') + '\\n' ``` This does not create an unnecessary file, but seems sloppy and perhaps not very reliable.
Am I missing a simpler solution?", "response":"The simplest way is just to not input any filename, in this case a string is returned: ``` >>> df = pd.DataFrame({'A' : [0, 1], 'B' : [1, 6]}) >>> df.to_csv() ',A,B\\n0,0,1\\n1,1,6\\n' ```", "best_answers_score":0.7948, "library_name":"pandas", "question_url":"https:\/\/stackoverflow.com\/questions\/23231605\/convert-pandas-dataframe-to-csv-string", "best_answers_votes":198, "question_length":617, "response_length":185 }, { "question":"Add x and y labels to a pandas plot Suppose I have the following code that plots something very simple using pandas: ``` import pandas as pd values = [[1, 2], [2, 5]] df2 = pd.DataFrame(values, columns=['Type A', 'Type B'], index=['Index 1', 'Index 2']) df2.plot(lw=2, colormap='jet', marker='.', markersize=10, title='Video streaming dropout by category') ``` How do I easily set x and y-labels while preserving my ability to use specific colormaps? I noticed that the plot() wrapper for pandas DataFrames doesn't take any parameters specific for that.", "response":"The df.plot() function returns a matplotlib.axes.AxesSubplot object. You can set the labels on that object. ``` ax = df2.plot(lw=2, colormap='jet', marker='.', markersize=10, title='Video streaming dropout by category') ax.set_xlabel(\"x label\") ax.set_ylabel(\"y label\") ``` Or, more succinctly: ax.set(xlabel=\"x label\", ylabel=\"y label\"). Alternatively, the index x-axis label is automatically set to the Index name, if it has one. so df2.index.name = 'x label' would work too.", "best_answers_score":0.7928, "library_name":"pandas", "question_url":"https:\/\/stackoverflow.com\/questions\/21487329\/add-x-and-y-labels-to-a-pandas-plot", "best_answers_votes":446, "question_length":553, "response_length":477 }, { "question":"How to pass another entire column as argument to pandas fillna() I would like to fill missing values in one column with values from another column, using fillna method. 
(I read that looping through each row would be very bad practice and that it would be better to do everything in one go but I could not find out how to do it with fillna.) Data before: ``` Day Cat1 Cat2 1 cat mouse 2 dog elephant 3 cat giraf 4 NaN ant ``` Data after: ``` Day Cat1 Cat2 1 cat mouse 2 dog elephant 3 cat giraf 4 ant ant ```", "response":"You can provide this column to fillna (see docs), it will use those values on matching indexes to fill: ``` In [17]: df['Cat1'].fillna(df['Cat2']) Out[17]: 0 cat 1 dog 2 cat 3 ant Name: Cat1, dtype: object ```", "best_answers_score":0.7923, "library_name":"pandas", "question_url":"https:\/\/stackoverflow.com\/questions\/30357276\/how-to-pass-another-entire-column-as-argument-to-pandas-fillna", "best_answers_votes":278, "question_length":507, "response_length":209 }, { "question":"Pandas make new column from string slice of another column I want to create a new column in Pandas using a string sliced for another column in the dataframe. For example. ``` Sample Value New_sample AAB 23 A BAB 25 B ``` Where New_sample is a new column formed from a simple [:1] slice of Sample I've tried a number of things to no avail - I feel I'm missing something simple. 
What's the most efficient way of doing this?", "response":"You can call the str method and apply a slice, this will be much quicker than the other method as this is vectorised (thanks @unutbu): ``` df['New_Sample'] = df.Sample.str[:1] ``` You can also call a lambda function on the df but this will be slower on larger dataframes: ``` In [187]: df['New_Sample'] = df.Sample.apply(lambda x: x[:1]) df Out[187]: Sample Value New_Sample 0 AAB 23 A 1 BAB 25 B ```", "best_answers_score":0.7918, "library_name":"pandas", "question_url":"https:\/\/stackoverflow.com\/questions\/25789445\/pandas-make-new-column-from-string-slice-of-another-column", "best_answers_votes":149, "question_length":421, "response_length":400 }, { "question":"Trying to merge 2 dataframes but get ValueError These are my two dataframes saved in two variables: ``` > print(df.head()) > club_name tr_jan tr_dec year 0 ADO Den Haag 1368 1422 2010 1 ADO Den Haag 1455 1477 2011 2 ADO Den Haag 1461 1443 2012 3 ADO Den Haag 1437 1383 2013 4 ADO Den Haag 1386 1422 2014 > print(rankingdf.head()) > club_name ranking year 0 ADO Den Haag 12 2010 1 ADO Den Haag 13 2011 2 ADO Den Haag 11 2012 3 ADO Den Haag 14 2013 4 ADO Den Haag 17 2014 ``` I'm trying to merge these two using this code: ``` new_df = df.merge(ranking_df, on=['club_name', 'year'], how='left') ``` The how='left' is added because I have less datapoints in my ranking_df than in my standard df. The expected behaviour is as such: ``` > print(new_df.head()) > club_name tr_jan tr_dec year ranking 0 ADO Den Haag 1368 1422 2010 12 1 ADO Den Haag 1455 1477 2011 13 2 ADO Den Haag 1461 1443 2012 11 3 ADO Den Haag 1437 1383 2013 14 4 ADO Den Haag 1386 1422 2014 17 ``` But I get this error: ValueError: You are trying to merge on object and int64 columns. If you wish to proceed you should use pd.concat But I do not wish to use concat since I want to merge the trees not just add them on. 
Another behaviour that's weird in my mind is that my code works if I save the first df to .csv and then load that .csv into a dataframe. The code for that: ``` df = pd.DataFrame(data_points, columns=['club_name', 'tr_jan', 'tr_dec', 'year']) df.to_csv('preliminary.csv') df = pd.read_csv('preliminary.csv', index_col=0) ranking_df = pd.DataFrame(rankings, columns=['club_name', 'ranking', 'year']) new_df = df.merge(ranking_df, on=['club_name', 'year'], how='left') ``` I think that it has to do with the index_col=0 parameter. But I have no idea to fix it without having to save it, it doesn't matter much but is kind of an annoyance that I have to do that.", "response":"In one of your dataframes the year is a string and the other it is an int64 you can convert it first and then join (e.g. df['year']=df['year'].astype(int) or as RafaelC suggested df.year.astype(int)) Edit: Also note the comment by Anderson Zhu: Just in case you have None or missing values in one of your dataframes, you need to use Int64 instead of int. See the reference here.", "best_answers_score":0.7909, "library_name":"pandas", "question_url":"https:\/\/stackoverflow.com\/questions\/50649853\/trying-to-merge-2-dataframes-but-get-valueerror", "best_answers_votes":195, "question_length":1842, "response_length":378 }, { "question":"Remove unwanted parts from strings in a column I am looking for an efficient way to remove unwanted parts from strings in a DataFrame column. Data looks like: ``` time result 1 09:00 +52A 2 10:00 +62B 3 11:00 +44a 4 12:00 +30b 5 13:00 -110a ``` I need to trim these data to: ``` time result 1 09:00 52 2 10:00 62 3 11:00 44 4 12:00 30 5 13:00 110 ``` I tried .str.lstrip('+-') and .str.rstrip('aAbBcC'), but got an error: ``` TypeError: wrapper() takes exactly 1 argument (2 given) ``` Any pointers would be greatly appreciated!", "response":"How do I remove unwanted parts from strings in a column? 
6 years after the original question was posted, pandas now has a good number of \"vectorised\" string functions that can succinctly perform these string manipulation operations. This answer will explore some of these string functions, suggest faster alternatives, and go into a timings comparison at the end. .str.replace Specify the substring\/pattern to match, and the substring to replace it with. ``` pd.__version__ # '0.24.1' df time result 1 09:00 +52A 2 10:00 +62B 3 11:00 +44a 4 12:00 +30b 5 13:00 -110a ``` ``` df['result'] = df['result'].str.replace(r'\\D', '', regex=True) df time result 1 09:00 52 2 10:00 62 3 11:00 44 4 12:00 30 5 13:00 110 ``` If you need the result converted to an integer, you can use Series.astype, ``` df['result'] = df['result'].str.replace(r'\\D', '', regex=True).astype(int) df.dtypes time object result int64 dtype: object ``` If you don't want to modify df in-place, use DataFrame.assign: ``` df2 = df.assign(result=df['result'].str.replace(r'\\D', '', regex=True)) df # Unchanged ``` .str.extract Useful for extracting the substring(s) you want to keep. ``` df['result'] = df['result'].str.extract(r'(\\d+)', expand=False) df time result 1 09:00 52 2 10:00 62 3 11:00 44 4 12:00 30 5 13:00 110 ``` With extract, it is necessary to specify at least one capture group. expand=False will return a Series with the captured items from the first capture group. .str.split and .str.get Splitting works assuming all your strings follow this consistent structure. ``` # df['result'] = df['result'].str.split(r'\\D').str[1] df['result'] = df['result'].str.split(r'\\D').str.get(1) df time result 1 09:00 52 2 10:00 62 3 11:00 44 4 12:00 30 5 13:00 110 ``` Not recommended if you are looking for a general solution. If you are satisfied with the succinct and readable str accessor-based solutions above, you can stop here. However, if you are interested in faster, more performant alternatives, keep reading.
Optimizing: List Comprehensions In some circumstances, list comprehensions should be favoured over pandas string functions. The reason is because string functions are inherently hard to vectorize (in the true sense of the word), so most string and regex functions are only wrappers around loops with more overhead. My write-up, Are for-loops in pandas really bad? When should I care?, goes into greater detail. The str.replace option can be re-written using re.sub ``` import re # Pre-compile your regex pattern for more performance. p = re.compile(r'\\D') df['result'] = [p.sub('', x) for x in df['result']] df time result 1 09:00 52 2 10:00 62 3 11:00 44 4 12:00 30 5 13:00 110 ``` The str.extract example can be re-written using a list comprehension with re.search, ``` p = re.compile(r'\\d+') df['result'] = [p.search(x)[0] for x in df['result']] df time result 1 09:00 52 2 10:00 62 3 11:00 44 4 12:00 30 5 13:00 110 ``` If NaNs or no-matches are a possibility, you will need to re-write the above to include some error checking. I do this using a function. ``` def try_extract(pattern, string): try: m = pattern.search(string) return m.group(0) except (TypeError, ValueError, AttributeError): return np.nan p = re.compile(r'\\d+') df['result'] = [try_extract(p, x) for x in df['result']] df time result 1 09:00 52 2 10:00 62 3 11:00 44 4 12:00 30 5 13:00 110 ``` We can also re-write @eumiro's and @MonkeyButter's answers using list comprehensions: ``` df['result'] = [x.lstrip('+-').rstrip('aAbBcC') for x in df['result']] ``` And, ``` df['result'] = [x[1:-1] for x in df['result']] ``` Same rules for handling NaNs, etc, apply. Performance Comparison Graphs generated using perfplot. Full code listing, for your reference. The relevant functions are listed below. Some of these comparisons are unfair because they take advantage of the structure of OP's data, but take from it what you will. 
One thing to note is that every list comprehension function is either faster or comparable than its equivalent pandas variant. Functions ``` def eumiro(df): return df.assign( result=df['result'].map(lambda x: x.lstrip('+-').rstrip('aAbBcC'))) def coder375(df): return df.assign( result=df['result'].replace(r'\\D', r'', regex=True)) def monkeybutter(df): return df.assign(result=df['result'].map(lambda x: x[1:-1])) def wes(df): return df.assign(result=df['result'].str.lstrip('+-').str.rstrip('aAbBcC')) def cs1(df): return df.assign(result=df['result'].str.replace(r'\\D', '')) def cs2_ted(df): # `str.extract` based solution, similar to @Ted Petrou's. so timing together. return df.assign(result=df['result'].str.extract(r'(\\d+)', expand=False)) def cs1_listcomp(df): return df.assign(result=[p1.sub('', x) for x in df['result']]) def cs2_listcomp(df): return df.assign(result=[p2.search(x)[0] for x in df['result']]) def cs_eumiro_listcomp(df): return df.assign( result=[x.lstrip('+-').rstrip('aAbBcC') for x in df['result']]) def cs_mb_listcomp(df): return df.assign(result=[x[1:-1] for x in df['result']]) ```", "best_answers_score":0.7888, "library_name":"pandas", "question_url":"https:\/\/stackoverflow.com\/questions\/13682044\/remove-unwanted-parts-from-strings-in-a-column", "best_answers_votes":262, "question_length":528, "response_length":5002 }, { "question":"Creating dataframe from a dictionary where entries have different lengths Say I have a dictionary with 10 key-value pairs. Each entry holds a numpy array. However, the length of the array is not the same for all of them. How can I create a dataframe where each column holds a different entry? 
When I try: ```py import pandas as pd import numpy as np from string import ascii_uppercase # from the standard library # repeatable sample data np.random.seed(2023) data = {k: np.random.randn(v) for k, v in zip(ascii_uppercase[:10], range(10, 20))} df = pd.DataFrame(data) ``` I get: ```py ValueError: arrays must all be the same length ``` Any way to overcome this? I am happy to have Pandas use NaN to pad those columns for the shorter entries. Desired Result ```none A B C D E F G H I J 0 0.711674 -1.076522 -1.502178 -1.519748 0.340619 0.051132 0.036537 0.367296 1.056500 -1.186943 1 -0.324485 -0.325682 -1.379593 2.097329 -1.253501 -0.238061 2.431822 -0.576828 -0.733918 -0.540638 2 -1.001871 -1.035498 -0.204455 0.892562 0.370788 -0.208009 0.422599 -0.416005 -0.083968 -0.638495 3 0.236251 -0.426320 0.642125 1.596488 0.455254 0.401304 1.843922 -0.137542 0.127288 0.150411 4 -0.102160 -1.029361 -0.181176 -0.638762 -2.283720 0.183169 -0.221562 1.294987 0.344423 0.919450 5 -1.141293 -0.521774 0.771749 -1.133047 -0.000822 1.235830 0.337117 0.520589 0.685970 0.910146 6 2.654407 -0.422758 0.741523 0.656597 2.398876 -0.291800 -0.557180 -0.194273 0.399908 1.605234 7 1.440605 -0.099244 1.324763 0.595787 -2.583105 0.029992 0.053141 -0.385593 0.893458 0.667165 8 0.098902 -1.380258 0.439287 -0.811120 1.311009 -0.868404 1.053804 -3.065784 0.384793 0.950338 9 -3.121532 0.301903 -0.557873 -0.300535 -1.579478 0.604346 -0.658515 -0.668181 0.641113 0.734329 10 NaN -1.033599 0.927080 1.008391 -0.840683 0.728554 1.844449 0.056965 -0.577314 1.015465 11 NaN NaN -0.600727 -1.087762 -0.165509 1.364820 -0.075514 -0.909368 -0.819947 0.627386 12 NaN NaN NaN -1.787079 -2.068410 1.342694 0.264263 -1.487910 0.746819 1.062655 13 NaN NaN NaN NaN 0.452739 -1.456708 -1.395359 1.169611 1.836805 0.262885 14 NaN NaN NaN NaN NaN 0.969357 0.708416 0.393677 -1.455490 -2.086486 15 NaN NaN NaN NaN NaN NaN 0.762756 0.530569 -0.828721 -1.076369 16 NaN NaN NaN NaN NaN NaN NaN -0.586429 -0.609144 -0.507519 17 NaN 
NaN NaN NaN NaN NaN NaN NaN -1.071297 -0.274501 18 NaN NaN NaN NaN NaN NaN NaN NaN NaN 1.848811 ```", "response":"In Python 3.x: ```py import pandas as pd import numpy as np d = dict( A = np.array([1,2]), B = np.array([1,2,3,4]) ) pd.DataFrame(dict([ (k,pd.Series(v)) for k,v in d.items() ])) Out[7]: A B 0 1 1 1 2 2 2 NaN 3 3 NaN 4 ``` In Python 2.x: replace d.items() with d.iteritems().", "best_answers_score":0.7863, "library_name":"pandas", "question_url":"https:\/\/stackoverflow.com\/questions\/19736080\/creating-dataframe-from-a-dictionary-where-entries-have-different-lengths", "best_answers_votes":231, "question_length":2390, "response_length":275 }, { "question":"Good alternative to Pandas .append() method, now that it has been deprecated? I use the following method a lot to append a single row to a dataframe. However, it has been deprecated. One thing I really like about it is that it allows you to append a simple dict object. For example: ``` # Creating an empty dataframe df = pd.DataFrame(columns=['a', 'b']) # Appending a row df = df.append({ 'a': 1, 'b': 2 }, ignore_index=True) ``` Again, what I like most about this is that the code is very clean and requires very few lines. Now I suppose the recommended alternative is: ``` # Create the new row as its own dataframe df_new_row = pd.DataFrame({ 'a': [1], 'b': [2] }) df = pd.concat([df, df_new_row]) ``` So what was one line of code before is now two lines with a throwaway variable and extra cruft where I create the new dataframe. :( Is there a good way to do this that just uses a dict like I have in the past (that is not deprecated)?", "response":"I also like the append method. 
But you can do it in one line with a list of dicts ``` df = pd.concat([df, pd.DataFrame.from_records([{ 'a': 1, 'b': 2 }])]) ``` or using loc and tuples for values on DataFrames with incremental ascending indexes ``` df.loc[len(df), ['a','b']] = 1, 2 ``` or maybe ``` df.loc[len(df), df.columns] = 3, 4 ```", "best_answers_score":0.7854, "library_name":"pandas", "question_url":"https:\/\/stackoverflow.com\/questions\/70837397\/good-alternative-to-pandas-append-method-now-that-it-has-been-deprecated", "best_answers_votes":79, "question_length":939, "response_length":336 }, { "question":"pandas: complex filter on rows of DataFrame I would like to filter rows by a function of each row, e.g. ``` def f(row): return sin(row['velocity'])\/np.prod(['masses']) > 5 df = pandas.DataFrame(...) filtered = df[apply_to_all_rows(df, f)] ``` Or for another more complex, contrived example, ``` def g(row): if row['col1'].method1() == 1: val = row['col1'].method2() \/ row['col1'].method3(row['col3'], row['col4']) else: val = row['col2'].method5(row['col6']) return np.sin(val) df = pandas.DataFrame(...)
filtered = df[apply_to_all_rows(df, g)] ``` How can I do so?", "response":"You can do this using DataFrame.apply, which applies a function along a given axis, ``` In [3]: df = pandas.DataFrame(np.random.randn(5, 3), columns=['a', 'b', 'c']) In [4]: df Out[4]: a b c 0 -0.001968 -1.877945 -1.515674 1 -0.540628 0.793913 -0.983315 2 -1.313574 1.946410 0.826350 3 0.015763 -0.267860 -2.228350 4 0.563111 1.195459 0.343168 In [6]: df[df.apply(lambda x: x['b'] > x['c'], axis=1)] Out[6]: a b c 1 -0.540628 0.793913 -0.983315 2 -1.313574 1.946410 0.826350 3 0.015763 -0.267860 -2.228350 4 0.563111 1.195459 0.343168 ```", "best_answers_score":0.7841, "library_name":"pandas", "question_url":"https:\/\/stackoverflow.com\/questions\/11418192\/pandas-complex-filter-on-rows-of-dataframe", "best_answers_votes":159, "question_length":565, "response_length":538 }, { "question":"Plot correlation matrix using pandas I have a data set with huge number of features, so analysing the correlation matrix has become very difficult. I want to plot a correlation matrix which we get using dataframe.corr() function from pandas library. Is there any built-in function provided by the pandas library to plot this matrix?", "response":"If your main goal is to visualize the correlation matrix, rather than creating a plot per se, the convenient pandas styling options is a viable built-in solution: ``` import pandas as pd import numpy as np rs = np.random.RandomState(0) df = pd.DataFrame(rs.rand(10, 10)) corr = df.corr() corr.style.background_gradient(cmap='coolwarm') # 'RdBu_r', 'BrBG_r', & PuOr_r are other good diverging colormaps ``` Note that this needs to be in a backend that supports rendering HTML, such as the JupyterLab Notebook. 
Styling You can easily limit the digit precision (this is now .format(precision=2) in pandas 2.*): ``` corr.style.background_gradient(cmap='coolwarm').set_precision(2) ``` Or get rid of the digits altogether if you prefer the matrix without annotations: ``` corr.style.background_gradient(cmap='coolwarm').set_properties(**{'font-size': '0pt'}) ``` The styling documentation also includes instructions of more advanced styles, such as how to change the display of the cell the mouse pointer is hovering over. Time comparison In my testing, style.background_gradient() was 4x faster than plt.matshow() and 120x faster than sns.heatmap() with a 10x10 matrix. Unfortunately it doesn't scale as well as plt.matshow(): the two take about the same time for a 100x100 matrix, and plt.matshow() is 10x faster for a 1000x1000 matrix. Saving There are a few possible ways to save the stylized dataframe: Return the HTML by appending the render() method and then write the output to a file. Save as an .xslx file with conditional formatting by appending the to_excel() method. Combine with imgkit to save a bitmap Take a screenshot (like I have done here). Normalize colors across the entire matrix (pandas >= 0.24) By setting axis=None, it is now possible to compute the colors based on the entire matrix rather than per column or per row: ``` corr.style.background_gradient(cmap='coolwarm', axis=None) ``` Single corner heatmap Since many people are reading this answer I thought I would add a tip for how to only show one corner of the correlation matrix. I find this easier to read myself, since it removes the redundant information. 
``` # Fill diagonal and upper half with NaNs mask = np.zeros_like(corr, dtype=bool) mask[np.triu_indices_from(mask)] = True corr[mask] = np.nan (corr .style .background_gradient(cmap='coolwarm', axis=None, vmin=-1, vmax=1) .highlight_null(color='#f1f1f1') # Color NaNs grey .format(precision=2)) ```", "best_answers_score":0.7836, "library_name":"pandas", "question_url":"https:\/\/stackoverflow.com\/questions\/29432629\/plot-correlation-matrix-using-pandas", "best_answers_votes":463, "question_length":332, "response_length":2435 }, { "question":"Pandas dataframe get first row of each group I have a pandas DataFrame like following: ``` df = pd.DataFrame({'id' : [1,1,1,2,2,3,3,3,3,4,4,5,6,6,6,7,7], 'value' : [\"first\",\"second\",\"second\",\"first\", \"second\",\"first\",\"third\",\"fourth\", \"fifth\",\"second\",\"fifth\",\"first\", \"first\",\"second\",\"third\",\"fourth\",\"fifth\"]}) ``` I want to group this by [\"id\",\"value\"] and get the first row of each group: ``` id value 0 1 first 1 1 second 2 1 second 3 2 first 4 2 second 5 3 first 6 3 third 7 3 fourth 8 3 fifth 9 4 second 10 4 fifth 11 5 first 12 6 first 13 6 second 14 6 third 15 7 fourth 16 7 fifth ``` Expected outcome: ``` id value 1 first 2 first 3 first 4 second 5 first 6 first 7 fourth ``` I tried following, which only gives the first row of the DataFrame. ``` In [25]: for index, row in df.iterrows(): ....: df2 = pd.DataFrame(df.groupby(['id','value']).reset_index().ix[0]) ```", "response":"Use .first() to get the first (non-null) element. 
``` >>> df.groupby('id').first() value id 1 first 2 first 3 first 4 second 5 first 6 first 7 fourth ``` If you need id as column: ``` >>> df.groupby('id').first().reset_index() id value 0 1 first 1 2 first 2 3 first 3 4 second 4 5 first 5 6 first 6 7 fourth ``` To get first n records, you can use .head(): ``` >>> df.groupby('id').head(2).reset_index(drop=True) id value 0 1 first 1 1 second 2 2 first 3 2 second 4 3 first 5 3 third 6 4 second 7 4 fifth 8 5 first 9 6 first 10 6 second 11 7 fourth 12 7 fifth ```", "best_answers_score":0.7827, "library_name":"pandas", "question_url":"https:\/\/stackoverflow.com\/questions\/20067636\/pandas-dataframe-get-first-row-of-each-group", "best_answers_votes":442, "question_length":878, "response_length":563 }, { "question":"Pandas DataFrame performance Pandas is really great, but I am really surprised by how inefficient it is to retrieve values from a Pandas.DataFrame. In the following toy example, even the DataFrame.iloc method is more than 100 times slower than a dictionary. The question: Is the lesson here just that dictionaries are the better way to look up values? Yes, I get that that is precisely what they were made for. But I just wonder if there is something I am missing about DataFrame lookup performance. I realize this question is more \"musing\" than \"asking\" but I will accept an answer that provides insight or perspective on this. Thanks. ``` import timeit setup = ''' import numpy, pandas df = pandas.DataFrame(numpy.zeros(shape=[10, 10])) dictionary = df.to_dict() ''' f = ['value = dictionary[5][5]', 'value = df.loc[5, 5]', 'value = df.iloc[5, 5]'] for func in f: print func print min(timeit.Timer(func, setup).repeat(3, 100000)) ``` value = dictionary[5][5] 0.130625009537 value = df.loc[5, 5] 19.4681699276 value = df.iloc[5, 5] 17.2575249672", "response":"A dict is to a DataFrame as a bicycle is to a car. You can pedal 10 feet on a bicycle faster than you can start a car, get it in gear, etc, etc. 
But if you need to go a mile, the car wins. For certain small, targeted purposes, a dict may be faster. And if that is all you need, then use a dict, for sure! But if you need\/want the power and luxury of a DataFrame, then a dict is no substitute. It is meaningless to compare speed if the data structure does not first satisfy your needs. Now for example -- to be more concrete -- a dict is good for accessing columns, but it is not so convenient for accessing rows. ``` import timeit setup = ''' import numpy, pandas df = pandas.DataFrame(numpy.zeros(shape=[10, 1000])) dictionary = df.to_dict() ''' # f = ['value = dictionary[5][5]', 'value = df.loc[5, 5]', 'value = df.iloc[5, 5]'] f = ['value = [val[5] for col,val in dictionary.items()]', 'value = df.loc[5]', 'value = df.iloc[5]'] for func in f: print(func) print(min(timeit.Timer(func, setup).repeat(3, 100000))) ``` yields ``` value = [val[5] for col,val in dictionary.items()] 25.5416321754 value = df.loc[5] 5.68071913719 value = df.iloc[5] 4.56006002426 ``` So the dict of lists is 5 times slower at retrieving rows than df.iloc. The speed deficit becomes greater as the number of columns grows. (The number of columns is like the number of feet in the bicycle analogy. The longer the distance, the more convenient the car becomes...) This is just one example of when a dict of lists would be less convenient\/slower than a DataFrame. Another example would be when you have a DatetimeIndex for the rows and wish to select all rows between certain dates. With a DataFrame you can use ``` df.loc['2000-1-1':'2000-3-31'] ``` There is no easy analogue for that if you were to use a dict of lists.
And the Python loops you would need to use to select the right rows would again be terribly slow compared to the DataFrame.", "best_answers_score":0.7827, "library_name":"pandas", "question_url":"https:\/\/stackoverflow.com\/questions\/22084338\/pandas-dataframe-performance", "best_answers_votes":159, "question_length":1046, "response_length":1926 }, { "question":"How to add an empty column to a dataframe? What's the easiest way to add an empty column to a pandas DataFrame object? The best I've stumbled upon is something like ```py df['foo'] = df.apply(lambda _: '', axis=1) ``` Is there a less perverse method?", "response":"If I understand correctly, assignment should fill: ``` >>> import numpy as np >>> import pandas as pd >>> df = pd.DataFrame({\"A\": [1,2,3], \"B\": [2,3,4]}) >>> df A B 0 1 2 1 2 3 2 3 4 >>> df[\"C\"] = \"\" >>> df[\"D\"] = np.nan >>> df A B C D 0 1 2 NaN 1 2 3 NaN 2 3 4 NaN ```", "best_answers_score":0.7819, "library_name":"pandas", "question_url":"https:\/\/stackoverflow.com\/questions\/16327055\/how-to-add-an-empty-column-to-a-dataframe", "best_answers_votes":773, "question_length":250, "response_length":269 }, { "question":"Given a pandas Series that represents frequencies of a value, how can I turn those frequencies into percentages? I was experimenting with the kaggle.com Titanic data set (data on every person on the Titanic) and came up with a gender breakdown like this: ``` df = pd.DataFrame({'sex': ['male'] * 577 + ['female'] * 314}) gender = df.sex.value_counts() gender male 577 female 314 ``` I would like to find out the percentage of each gender on the Titanic. My approach is slightly less than ideal: ``` from __future__ import division pcts = gender \/ gender.sum() pcts male 0.647587 female 0.352413 ``` Is there a better (more idiomatic) way?", "response":"This function is implemented in pandas, actually even in value_counts(). 
No need to calculate :) just type: ``` df.sex.value_counts(normalize=True) ``` which gives exactly the desired output. Please note that value_counts() excludes NA values, so numbers might not add up to 1. See here: http:\/\/pandas-docs.github.io\/pandas-docs-travis\/generated\/pandas.Series.value_counts.html (A column of a DataFrame is a Series)", "best_answers_score":0.7814, "library_name":"pandas", "question_url":"https:\/\/stackoverflow.com\/questions\/14281871\/given-a-pandas-series-that-represents-frequencies-of-a-value-how-can-i-turn-tho", "best_answers_votes":267, "question_length":638, "response_length":415 }, { "question":"Convert Pandas Column to DateTime I have one field in a pandas DataFrame that was imported as string format. It should be a datetime variable. How do I convert it to a datetime column, and then filter based on date? Example: ```py raw_data = pd.DataFrame({'Mycol': ['05SEP2014:00:00:00.000']}) ```", "response":"Use the to_datetime function, specifying a format to match your data. ``` df['Mycol'] = pd.to_datetime(df['Mycol'], format='%d%b%Y:%H:%M:%S.%f') ```", "best_answers_score":0.781, "library_name":"pandas", "question_url":"https:\/\/stackoverflow.com\/questions\/26763344\/convert-pandas-column-to-datetime", "best_answers_votes":840, "question_length":297, "response_length":148 }, { "question":"Python: create a pandas data frame from a list I am using the following code to create a data frame from a list: ``` test_list = ['a','b','c','d'] df_test = pd.DataFrame.from_records(test_list, columns=['my_letters']) df_test ``` The above code works fine. 
Then I tried the same approach for another list: ``` import pandas as pd q_list = ['112354401', '116115526', '114909312', '122425491', '131957025', '111373473'] df1 = pd.DataFrame.from_records(q_list, columns=['q_data']) df1 ``` But it gave me the following errors this time: ``` --------------------------------------------------------------------------- AssertionError Traceback (most recent call last) in () 1 import pandas as pd 2 q_list = ['112354401', '116115526', '114909312', '122425491', '131957025', '111373473'] ----> 3 df1 = pd.DataFrame.from_records(q_list, columns=['q_data']) 4 df1 \/usr\/local\/lib\/python3.4\/dist-packages\/pandas\/core\/frame.py in from_records(cls, data, index, exclude, columns, coerce_float, nrows) 1021 else: 1022 arrays, arr_columns = _to_arrays(data, columns, -> 1023 coerce_float=coerce_float) 1024 1025 arr_columns = _ensure_index(arr_columns) \/usr\/local\/lib\/python3.4\/dist-packages\/pandas\/core\/frame.py in _to_arrays(data, columns, coerce_float, dtype) 5550 data = lmap(tuple, data) 5551 return _list_to_arrays(data, columns, coerce_float=coerce_float, -> 5552 dtype=dtype) 5553 5554 \/usr\/local\/lib\/python3.4\/dist-packages\/pandas\/core\/frame.py in _list_to_arrays(data, columns, coerce_float, dtype) 5607 content = list(lib.to_object_array(data).T) 5608 return _convert_object_array(content, columns, dtype=dtype, -> 5609 coerce_float=coerce_float) 5610 5611 \/usr\/local\/lib\/python3.4\/dist-packages\/pandas\/core\/frame.py in _convert_object_array(content, columns, coerce_float, dtype) 5666 # caller's responsibility to check for this... 5667 raise AssertionError('%d columns passed, passed data had %s ' -> 5668 'columns' % (len(columns), len(content))) 5669 5670 # provide soft conversion of object dtypes AssertionError: 1 columns passed, passed data had 9 columns ``` Why would the same approach work for one list but not another? Any idea what might be wrong here? 
Thanks a lot!", "response":"DataFrame.from_records treats a string as a list of characters, so it needs as many columns as the string has characters. You could simply use the DataFrame constructor. ``` In [3]: pd.DataFrame(q_list, columns=['q_data']) Out[3]: q_data 0 112354401 1 116115526 2 114909312 3 122425491 4 131957025 5 111373473 ```", "best_answers_score":0.781, "library_name":"pandas", "question_url":"https:\/\/stackoverflow.com\/questions\/43175382\/python-create-a-pandas-data-frame-from-a-list", "best_answers_votes":155, "question_length":2174, "response_length":298 }, { "question":"Pandas read_sql with parameters Are there any examples of how to pass parameters with an SQL query in Pandas? In particular I'm using an SQLAlchemy engine to connect to a PostgreSQL database. So far I've found that the following works: ```py df = psql.read_sql(('select \"Timestamp\",\"Value\" from \"MyTable\" ' 'where \"Timestamp\" BETWEEN %s AND %s'), db,params=[datetime(2014,6,24,16,0),datetime(2014,6,24,17,0)], index_col=['Timestamp']) ``` The Pandas documentation says that params can also be passed as a dict, but I can't seem to get this to work having tried for instance: ```py df = psql.read_sql(('select \"Timestamp\",\"Value\" from \"MyTable\" ' 'where \"Timestamp\" BETWEEN :dstart AND :dfinish'), db,params={\"dstart\":datetime(2014,6,24,16,0),\"dfinish\":datetime(2014,6,24,17,0)}, index_col=['Timestamp']) ``` What is the recommended way of running these types of queries from Pandas?", "response":"The read_sql docs say this params argument can be a list, tuple or dict (see docs). To pass the values in the sql query, there are different syntaxes possible: ?, :1, :name, %s, %(name)s (see PEP249). But not all of these possibilities are supported by all database drivers; which syntax is supported depends on the driver you are using (psycopg2 in your case, I suppose).
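As a quick illustration of that driver-dependence, here is a small self-contained sketch (the table and timestamps are invented for the example) using the built-in sqlite3 driver, which takes the ? positional style:

```python
import sqlite3
import pandas as pd

# illustrative in-memory database with a toy table
con = sqlite3.connect(':memory:')
con.execute('create table mytable (ts text, value integer)')
con.executemany('insert into mytable values (?, ?)',
                [('2014-06-24 15:00:00', 1),
                 ('2014-06-24 16:30:00', 2),
                 ('2014-06-24 17:30:00', 3)])

# sqlite3 uses the '?' style, so params is a list/tuple of positional values
df = pd.read_sql('select ts, value from mytable where ts between ? and ?',
                 con, params=('2014-06-24 16:00:00', '2014-06-24 17:00:00'))
print(df)
```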
In your second case, when using a dict, you are using 'named arguments', and according to the psycopg2 documentation, they support the %(name)s style (and so not the :name I suppose), see http:\/\/initd.org\/psycopg\/docs\/usage.html#query-parameters. So using that style should work: ``` df = psql.read_sql(('select \"Timestamp\",\"Value\" from \"MyTable\" ' 'where \"Timestamp\" BETWEEN %(dstart)s AND %(dfinish)s'), db,params={\"dstart\":datetime(2014,6,24,16,0),\"dfinish\":datetime(2014,6,24,17,0)}, index_col=['Timestamp']) ```", "best_answers_score":0.7808, "library_name":"pandas", "question_url":"https:\/\/stackoverflow.com\/questions\/24408557\/pandas-read-sql-with-parameters", "best_answers_votes":134, "question_length":882, "response_length":888 }, { "question":"Fill a new pandas column with row numbers I have the following DataFrame data with random index values: ``` A B 100 0 7 203 5 4 5992 0 10 2003 9 8 20 10 5 12 6 2 ``` I would like to add a new column 'C' with row numbers. For example: ``` A B C 100 0 7 0 203 5 4 1 5992 0 10 2 2003 9 8 3 20 10 5 4 12 6 2 5 ```", "response":"Use numpy.arange or range by length of DataFrame: ``` df['C'] = np.arange(len(df)) df['C'] = range(len(df)) ``` Or you can use DataFrame.shape, thank you @Mehmet Burak Say\u0131c\u0131: ``` df['C'] = np.arange(df.shape[0]) ``` ``` print (df) A B C 100 0 7 0 203 5 4 1 5992 0 10 2 2003 9 8 3 20 10 5 4 12 6 2 5 ```", "best_answers_score":0.7794, "library_name":"pandas", "question_url":"https:\/\/stackoverflow.com\/questions\/49574817\/fill-a-new-pandas-column-with-row-numbers", "best_answers_votes":144, "question_length":309, "response_length":303 }, { "question":"How to write DataFrame to postgres table There is DataFrame.to_sql method, but it works only for mysql, sqlite and oracle databases. I cant pass to this method postgres connection or sqlalchemy engine.", "response":"Starting from pandas 0.14 (released end of May 2014), postgresql is supported. 
The sql module now uses sqlalchemy to support different database flavors. You can pass a sqlalchemy engine for a postgresql database (see docs). E.g.: ``` from sqlalchemy import create_engine engine = create_engine('postgresql:\/\/username:password@localhost:5432\/mydatabase') df.to_sql('table_name', engine) ``` You are correct that in pandas up to version 0.13.1 postgresql was not supported. If you need to use an older version of pandas, here is a patched version of pandas.io.sql: https:\/\/gist.github.com\/jorisvandenbossche\/10841234. I wrote this some time ago, so I cannot fully guarantee that it always works, but the basis should be there. If you put that file in your working directory and import it, then you should be able to do (where con is a postgresql connection): ``` import sql # the patched version (file is named sql.py) sql.write_frame(df, 'table_name', con, flavor='postgresql') ```", "best_answers_score":0.7786, "library_name":"pandas", "question_url":"https:\/\/stackoverflow.com\/questions\/23103962\/how-to-write-dataframe-to-postgres-table", "best_answers_votes":247, "question_length":201, "response_length":977 }, { "question":"How to add a single item to a Pandas Series [duplicate] This question already has answers here: Creating an empty Pandas DataFrame, and then filling it (8 answers) Closed 3 years ago. How do I add a single item to a Pandas Series instance? I'm looking for code along the lines of ``` >>> x = Series() >>> N = 4 >>> for i in xrange(N): >>> x.some_appending_function(i**2) >>> print(x) 0 | 0 1 | 1 2 | 4 3 | 9 ``` Similarly, how can I add a single row to a Pandas DataFrame?", "response":"TLDR: do not append items to a Series one by one; it is better to extend with an ordered collection. I think the question in its current form is a bit tricky. And the accepted answer does answer the question. But the more I use pandas, the more I understand that it's a bad idea to append items to a Series one by one.
I'll try to explain why for pandas beginners. You might think that appending data to a given Series might allow you to reuse some resources, but in reality a Series is just a container that stores a relation between an index and a values array. Each is a numpy.array under the hood, and the index is immutable. When you add an item with a label that is missing in the index, a new index of size n+1 is created, and a new values array of the same size. That means that when you append items one by one, you create two more arrays of size n+1 on each step. By the way, you can not append a new item by position (you will get an IndexError), and the labels in an index do not have to be unique; that is, when you assign a value with a label, you assign the value to all existing items with that label, and a new row is not appended in this case. This might lead to subtle bugs. The moral of the story is that you should not append data one by one; you should extend with an ordered collection. The problem is that you can not extend a Series in place. That is why it is better to organize your code so that you don't need to update a specific instance of a Series by reference. If you create labels yourself and they are increasing, the easiest way is to add new items to a dictionary, then create a new Series from the dictionary (it sorts the keys) and append that Series to the old one. If the keys are not increasing, then you will need to create two separate lists for the new labels and the new values.
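A short sketch of that last case (the labels and values here are invented; note that newer pandas versions removed Series.append, so pd.concat is shown instead):

```python
import pandas as pd

s = pd.Series([0, 1, 4], index=[0, 1, 2])

# two separate lists for the new labels and the new values
new_labels = [10, 7]    # not increasing, so the dictionary approach does not fit
new_values = [100, 49]

# build a Series from the two lists and extend the old one in a single step
s = pd.concat([s, pd.Series(new_values, index=new_labels)])
print(s)
```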
Below are some code samples: ``` In [1]: import pandas as pd In [2]: import numpy as np In [3]: s = pd.Series(np.arange(4)**2, index=np.arange(4)) In [4]: s Out[4]: 0 0 1 1 2 4 3 9 dtype: int64 In [6]: id(s.index), id(s.values) Out[6]: (4470549648, 4470593296) ``` When we update an existing item, the index and the values array stay the same (if you do not change the type of the value) ``` In [7]: s[2] = 14 In [8]: id(s.index), id(s.values) Out[8]: (4470549648, 4470593296) ``` But when you add a new item, a new index and a new values array is generated: ``` In [9]: s[4] = 16 In [10]: s Out[10]: 0 0 1 1 2 14 3 9 4 16 dtype: int64 In [11]: id(s.index), id(s.values) Out[11]: (4470548560, 4470595056) ``` That is, if you are going to append several items, collect them in a dictionary, create a Series, append it to the old one and save the result: ``` In [13]: new_items = {item: item**2 for item in range(5, 7)} In [14]: s2 = pd.Series(new_items) In [15]: s2 # keys are guaranteed to be sorted! Out[15]: 5 25 6 36 dtype: int64 In [16]: s = s.append(s2); s Out[16]: 0 0 1 1 2 14 3 9 4 16 5 25 6 36 dtype: int64 ```", "best_answers_score":0.7782, "library_name":"pandas", "question_url":"https:\/\/stackoverflow.com\/questions\/13331518\/how-to-add-a-single-item-to-a-pandas-series", "best_answers_votes":53, "question_length":472, "response_length":2962 }, { "question":"Fast punctuation removal with pandas This is a self-answered post. Below I outline a common problem in the NLP domain and propose a few performant methods to solve it. Oftentimes the need arises to remove punctuation during text cleaning and pre-processing. Punctuation is defined as any character in string.punctuation: ``` >>> import string string.punctuation '!\"#$%&\\'()*+,-.\/:;<=>?@[\\\\]^_`{|}~' ``` This is a common enough problem and has been asked before ad nauseam. The most idiomatic solution uses pandas str.replace.
However, for situations which involve a lot of text, a more performant solution may need to be considered. What are some good, performant alternatives to str.replace when dealing with hundreds of thousands of records?", "response":"Setup For the purpose of demonstration, let's consider this DataFrame. ``` df = pd.DataFrame({'text':['a..b?!??', '%hgh&12','abc123!!!', '$$$1234']}) df text 0 a..b?!?? 1 %hgh&12 2 abc123!!! 3 $$$1234 ``` Below, I list the alternatives, one by one, in increasing order of performance str.replace This option is included to establish the default method as a benchmark for comparing other, more performant solutions. This uses pandas in-built str.replace function which performs regex-based replacement. ``` df['text'] = df['text'].str.replace(r'[^\\w\\s]+', '') ``` ``` df text 0 ab 1 hgh12 2 abc123 3 1234 ``` This is very easy to code, and is quite readable, but slow. regex.sub This involves using the sub function from the re library. Pre-compile a regex pattern for performance, and call regex.sub inside a list comprehension. Convert df['text'] to a list beforehand if you can spare some memory, you'll get a nice little performance boost out of this. ``` import re p = re.compile(r'[^\\w\\s]+') df['text'] = [p.sub('', x) for x in df['text'].tolist()] ``` ``` df text 0 ab 1 hgh12 2 abc123 3 1234 ``` Note: If your data has NaN values, this (as well as the next method below) will not work as is. See the section on \"Other Considerations\". str.translate python's str.translate function is implemented in C, and is therefore very fast. How this works is: First, join all your strings together to form one huge string using a single (or more) character separator that you choose. You must use a character\/substring that you can guarantee will not belong inside your data. Perform str.translate on the large string, removing punctuation (the separator from step 1 excluded). Split the string on the separator that was used to join in step 1. 
The resultant list must have the same length as your initial column. Here, in this example, we consider the pipe separator |. If your data contains the pipe, then you must choose another separator. ``` import string punct = '!\"#$%&\\'()*+,-.\/:;<=>?@[\\\\]^_`{}~' # `|` is not present here transtab = str.maketrans(dict.fromkeys(punct, '')) df['text'] = '|'.join(df['text'].tolist()).translate(transtab).split('|') ``` ``` df text 0 ab 1 hgh12 2 abc123 3 1234 ``` Performance str.translate performs the best, by far. Note that the graph below includes another variant Series.str.translate from MaxU's answer. (Interestingly, I reran this a second time, and the results are slightly different from before. During the second run, it seems re.sub was winning out over str.translate for really small amounts of data.) There is an inherent risk involved with using translate (particularly, the problem of automating the process of deciding which separator to use is non-trivial), but the trade-offs are worth the risk. Other Considerations Handling NaNs with list comprehension methods; Note that this method (and the next) will only work as long as your data does not have NaNs. When handling NaNs, you will have to determine the indices of non-null values and replace those only. Try something like this: ``` df = pd.DataFrame({'text': [ 'a..b?!??', np.nan, '%hgh&12','abc123!!!', '$$$1234', np.nan]}) idx = np.flatnonzero(df['text'].notna()) col_idx = df.columns.get_loc('text') df.iloc[idx,col_idx] = [ p.sub('', x) for x in df.iloc[idx,col_idx].tolist()] df text 0 ab 1 NaN 2 hgh12 3 abc123 4 1234 5 NaN ``` Dealing with DataFrames; If you are dealing with DataFrames, where every column requires replacement, the procedure is simple: ``` v = pd.Series(df.values.ravel()) df[:] = translate(v).values.reshape(df.shape) ``` Or, ``` v = df.stack() v[:] = translate(v) df = v.unstack() ``` Note that the translate function is defined below with the benchmarking code.
Every solution has tradeoffs, so deciding what solution best fits your needs will depend on what you're willing to sacrifice. Two very common considerations are performance (which we've already seen), and memory usage. str.translate is a memory-hungry solution, so use with caution. Another consideration is the complexity of your regex. Sometimes, you may want to remove anything that is not alphanumeric or whitespace. Other times, you will need to retain certain characters, such as hyphens, colons, and sentence terminators [.!?]. Specifying these explicitly adds complexity to your regex, which may in turn impact the performance of these solutions. Make sure you test these solutions on your data before deciding what to use. Lastly, Unicode characters will be removed with this solution. You may want to tweak your regex (if using a regex-based solution), or just go with str.translate otherwise. For even more performance (for larger N), take a look at this answer by Paul Panzer. Appendix Functions ``` def pd_replace(df): return df.assign(text=df['text'].str.replace(r'[^\\w\\s]+', '')) def re_sub(df): p = re.compile(r'[^\\w\\s]+') return df.assign(text=[p.sub('', x) for x in df['text'].tolist()]) def translate(df): punct = string.punctuation.replace('|', '') transtab = str.maketrans(dict.fromkeys(punct, '')) return df.assign( text='|'.join(df['text'].tolist()).translate(transtab).split('|') ) # MaxU's version (https:\/\/stackoverflow.com\/a\/50444659\/4909087) def pd_translate(df): punct = string.punctuation.replace('|', '') transtab = str.maketrans(dict.fromkeys(punct, '')) return df.assign(text=df['text'].str.translate(transtab)) ``` Performance Benchmarking Code ``` from timeit import timeit import pandas as pd import matplotlib.pyplot as plt res = pd.DataFrame( index=['pd_replace', 're_sub', 'translate', 'pd_translate'], columns=[10, 50, 100, 500, 1000, 5000, 10000, 50000], dtype=float ) for f in res.index: for c in res.columns: l = ['a..b?!??', '%hgh&12','abc123!!!',
'$$$1234'] * c df = pd.DataFrame({'text' : l}) stmt = '{}(df)'.format(f) setp = 'from __main__ import df, {}'.format(f) res.at[f, c] = timeit(stmt, setp, number=30) ax = res.div(res.min()).T.plot(loglog=True) ax.set_xlabel(\"N\"); ax.set_ylabel(\"time (relative)\"); plt.show() ```", "best_answers_score":0.7776, "library_name":"pandas", "question_url":"https:\/\/stackoverflow.com\/questions\/50444346\/fast-punctuation-removal-with-pandas", "best_answers_votes":100, "question_length":740, "response_length":5968 }, { "question":"Create an empty data frame with index from another data frame I've got a data frame df1 with multiple columns and rows. Simple example: ``` TIME T1 T2 1 10 100 2 20 200 3 30 300 ``` I'd like to create an empty data frame df2 and later on, add new columns with the calculation results. For this moment my code looks like this: ``` df1=pd.read_csv(\"1.txt\",index_col=\"TIME\") df2=df1.copy()[[]] #copy df1 and erase all columns ``` ...adding two new columns: ``` df2[\"results1\"],df2[\"results2\"]=df1[\"T1\"]*df[\"T2\"]*3,df1[\"T2\"]+100 ``` Is there any better\/safer\/faster way to do this ? 
Is it possible to create an empty data frame df2 and copy only the index from df1?", "response":"``` df2 = pd.DataFrame(index=df1.index) ``` This will create a DataFrame with no columns but just an index, and it will be the same index as in df1.", "best_answers_score":0.7773, "library_name":"pandas", "question_url":"https:\/\/stackoverflow.com\/questions\/18176933\/create-an-empty-data-frame-with-index-from-another-data-frame", "best_answers_votes":193, "question_length":658, "response_length":152 }, { "question":"Python, Pandas : write content of DataFrame into text File I have a pandas DataFrame like this ``` X Y Z Value 0 18 55 1 70 1 18 55 2 67 2 18 57 2 75 3 18 58 1 35 4 19 54 2 70 ``` I want to write this data to a text file that looks like this: ``` 18 55 1 70 18 55 2 67 18 57 2 75 18 58 1 35 19 54 2 70 ``` I have tried something like ``` f = open(writePath, 'a') f.writelines(['\\n', str(data['X']), ' ', str(data['Y']), ' ', str(data['Z']), ' ', str(data['Value'])]) f.close() ``` It's not correct. How to do this?", "response":"You can just use np.savetxt and access the np attribute .values: ``` np.savetxt(r'c:\\data\\np.txt', df.values, fmt='%d') ``` yields: ``` 18 55 1 70 18 55 2 67 18 57 2 75 18 58 1 35 19 54 2 70 ``` or to_csv: ``` df.to_csv(r'c:\\data\\pandas.txt', header=None, index=None, sep=' ', mode='a') ``` Note for np.savetxt you'd have to pass a filehandle that has been created with append mode.", "best_answers_score":0.7753, "library_name":"pandas", "question_url":"https:\/\/stackoverflow.com\/questions\/31247198\/python-pandas-write-content-of-dataframe-into-text-file", "best_answers_votes":227, "question_length":512, "response_length":382 }, { "question":"Why use loc in Pandas? Why do we use loc for pandas dataframes?
it seems the following code with or without using loc both compiles and runs at a similar speed: ``` %timeit df_user1 = df.loc[df.user_id=='5561'] 100 loops, best of 3: 11.9 ms per loop ``` or ``` %timeit df_user1_noloc = df[df.user_id=='5561'] 100 loops, best of 3: 12 ms per loop ``` So why use loc? Edit: This has been flagged as a duplicate question. But although pandas iloc vs ix vs loc explanation? does mention that you can do column retrieval just by using the data frame's __getitem__: ``` df['time'] # equivalent to df.loc[:, 'time'] ``` it does not say why we use loc, although it does explain lots of features of loc. But my specific question is: why not just omit loc altogether? For this question, I have accepted a very detailed answer below. Also in the above post, the answer (which I do not think is an answer) is really well hidden in the discussion, and any person searching for what I was, would find it hard to locate the information and would be much better served by the answer provided to my question here.", "response":"Explicit is better than implicit. df[boolean_mask] selects rows where boolean_mask is True, but there is a corner case when you might not want it to: when df has boolean-valued column labels: ``` In [229]: df = pd.DataFrame({True:[1,2,3],False:[3,4,5]}); df Out[229]: False True 0 3 1 1 4 2 2 5 3 ``` You might want to use df[[True]] to select the True column. Instead it raises a ValueError: ``` In [230]: df[[True]] ValueError: Item wrong length 1 instead of 3. ``` Versus using loc: ``` In [231]: df.loc[[True]] Out[231]: False True 0 3 1 ``` In contrast, the following does not raise ValueError even though the structure of df2 is almost the same as df1 above: ``` In [258]: df2 = pd.DataFrame({'A':[1,2,3],'B':[3,4,5]}); df2 Out[258]: A B 0 1 3 1 2 4 2 3 5 In [259]: df2[['B']] Out[259]: B 0 3 1 4 2 5 ``` Thus, df[boolean_mask] does not always behave the same as df.loc[boolean_mask]. 
Even though this is arguably an unlikely use case, I would recommend always using df.loc[boolean_mask] instead of df[boolean_mask] because the meaning of df.loc's syntax is explicit. With df.loc[indexer] you know automatically that df.loc is selecting rows. In contrast, it is not clear if df[indexer] will select rows or columns (or raise ValueError) without knowing details about indexer and df. df.loc[row_indexer, column_index] can select rows and columns. df[indexer] can only select rows or columns depending on the type of values in indexer and the type of column values df has (again, are they boolean?). ``` In [237]: df2.loc[[True,False,True], 'B'] Out[237]: 0 3 2 5 Name: B, dtype: int64 ``` When a slice is passed to df.loc the end-points are included in the range. When a slice is passed to df[...], the slice is interpreted as a half-open interval: ``` In [239]: df2.loc[1:2] Out[239]: A B 1 2 4 2 3 5 In [271]: df2[1:2] Out[271]: A B 1 2 4 ```", "best_answers_score":0.775, "library_name":"pandas", "question_url":"https:\/\/stackoverflow.com\/questions\/38886080\/why-use-loc-in-pandas", "best_answers_votes":113, "question_length":1096, "response_length":1849 }, { "question":"How to get the first column of a pandas DataFrame as a Series? I tried: ``` x=pandas.DataFrame(...) s = x.take([0], axis=1) ``` And s gets a DataFrame, not a Series.", "response":"``` >>> import pandas as pd >>> df = pd.DataFrame({'x' : [1, 2, 3, 4], 'y' : [4, 5, 6, 7]}) >>> df x y 0 1 4 1 2 5 2 3 6 3 4 7 >>> s = df.ix[:,0] >>> type(s) <class 'pandas.core.series.Series'> >>> ``` =========================================================================== UPDATE If you're reading this after June 2017, ix has been deprecated in pandas 0.20.2, so don't use it. Use loc or iloc instead.
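A small sketch of the modern equivalents (using the same toy frame as above):

```python
import pandas as pd

df = pd.DataFrame({'x': [1, 2, 3, 4], 'y': [4, 5, 6, 7]})

s = df.iloc[:, 0]     # first column by position -> a Series
s2 = df.loc[:, 'x']   # the same column by label -> also a Series
print(type(s).__name__)
```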
See comments and other answers to this question.", "best_answers_score":0.7741, "library_name":"pandas", "question_url":"https:\/\/stackoverflow.com\/questions\/15360925\/how-to-get-the-first-column-of-a-pandas-dataframe-as-a-series", "best_answers_votes":155, "question_length":165, "response_length":421 }, { "question":"How to convert SQL Query result to PANDAS Data Structure? Any help on this problem will be greatly appreciated. So basically I want to run a query to my SQL database and store the returned data as Pandas data structure. I have attached code for query. I am reading the documentation on Pandas, but I have problem to identify the return type of my query. I tried to print the query result, but it doesn't give any useful information. Thanks!!!! ```py from sqlalchemy import create_engine engine2 = create_engine('mysql:\/\/THE DATABASE I AM ACCESSING') connection2 = engine2.connect() dataid = 1022 resoverall = connection2.execute(\" SELECT sum(BLABLA) AS BLA, sum(BLABLABLA2) AS BLABLABLA2, sum(SOME_INT) AS SOME_INT, sum(SOME_INT2) AS SOME_INT2, 100*sum(SOME_INT2)\/sum(SOME_INT) AS ctr, sum(SOME_INT2)\/sum(SOME_INT) AS cpc FROM daily_report_cooked WHERE campaign_id = '%s'\", %dataid ) ``` So I sort of want to understand what's the format\/datatype of my variable \"resoverall\" and how to put it with PANDAS data structure.", "response":"Edit: Mar. 2015 As noted below, pandas now uses SQLAlchemy to both read from (read_sql) and insert into (to_sql) a database. 
The following should work ```py import pandas as pd df = pd.read_sql(sql, cnxn) ``` Previous answer: Via mikebmassey from a similar question ```py import pyodbc import pandas.io.sql as psql cnxn = pyodbc.connect(connection_info) cursor = cnxn.cursor() sql = \"SELECT * FROM TABLE\" df = psql.frame_query(sql, cnxn) cnxn.close() ```", "best_answers_score":0.7741, "library_name":"pandas", "question_url":"https:\/\/stackoverflow.com\/questions\/12047193\/how-to-convert-sql-query-result-to-pandas-data-structure", "best_answers_votes":177, "question_length":1020, "response_length":454 }, { "question":"Remap values in pandas column with a dict, preserve NaNs I have a dictionary which looks like this: di = {1: \"A\", 2: \"B\"} I would like to apply it to the col1 column of a dataframe similar to: ```none col1 col2 0 w a 1 1 2 2 2 NaN ``` to get: ```none col1 col2 0 w a 1 A 2 2 B NaN ``` How can I best do this?", "response":"map can be much faster than replace If your dictionary has more than a couple of keys, using map can be much faster than replace. 
There are two versions of this approach, depending on whether your dictionary exhaustively maps all possible values (and also whether you want non-matches to keep their values or be converted to NaNs): Exhaustive Mapping In this case, the form is very simple: ``` df['col1'].map(di) # note: if the dictionary does not exhaustively map all # entries then non-matched entries are changed to NaNs ``` Although map most commonly takes a function as its argument, it can alternatively take a dictionary or series: Documentation for Pandas.series.map Non-Exhaustive Mapping If you have a non-exhaustive mapping and wish to retain the existing variables for non-matches, you can add fillna: ``` df['col1'].map(di).fillna(df['col1']) ``` as in @jpp's answer here: Replace values in a pandas series via dictionary efficiently Benchmarks Using the following data with pandas version 0.23.1: ``` di = {1: \"A\", 2: \"B\", 3: \"C\", 4: \"D\", 5: \"E\", 6: \"F\", 7: \"G\", 8: \"H\" } df = pd.DataFrame({ 'col1': np.random.choice( range(1,9), 100000 ) }) ``` and testing with %timeit, it appears that map is approximately 10x faster than replace. Note that your speedup with map will vary with your data. The largest speedup appears to be with large dictionaries and exhaustive replaces. See @jpp answer (linked above) for more extensive benchmarks and discussion.", "best_answers_score":0.7731, "library_name":"pandas", "question_url":"https:\/\/stackoverflow.com\/questions\/20250771\/remap-values-in-pandas-column-with-a-dict-preserve-nans", "best_answers_votes":596, "question_length":308, "response_length":1465 }, { "question":"Compare two DataFrames and output their differences side-by-side I am trying to highlight exactly what changed between two dataframes. 
Suppose I have two Python Pandas dataframes: ``` \"StudentRoster Jan-1\": id Name score isEnrolled Comment 111 Jack 2.17 True He was late to class 112 Nick 1.11 False Graduated 113 Zoe 4.12 True \"StudentRoster Jan-2\": id Name score isEnrolled Comment 111 Jack 2.17 True He was late to class 112 Nick 1.21 False Graduated 113 Zoe 4.12 False On vacation ``` My goal is to output an HTML table that: Identifies rows that have changed (could be int, float, boolean, string) Outputs rows with same, OLD and NEW values (ideally into an HTML table) so the consumer can clearly see what changed between two dataframes: ``` \"StudentRoster Difference Jan-1 - Jan-2\": id Name score isEnrolled Comment 112 Nick was 1.11| now 1.21 False Graduated 113 Zoe 4.12 was True | now False was \"\" | now \"On vacation\" ``` I suppose I could do a row by row and column by column comparison, but is there an easier way?", "response":"The first part is similar to Constantine, you can get the boolean of which rows are empty*: ``` In [21]: ne = (df1 != df2).any(1) In [22]: ne Out[22]: 0 False 1 True 2 True dtype: bool ``` Then we can see which entries have changed: ``` In [23]: ne_stacked = (df1 != df2).stack() In [24]: changed = ne_stacked[ne_stacked] In [25]: changed.index.names = ['id', 'col'] In [26]: changed Out[26]: id col 1 score True 2 isEnrolled True Comment True dtype: bool ``` Here the first entry is the index and the second the columns which has been changed. ``` In [27]: difference_locations = np.where(df1 != df2) In [28]: changed_from = df1.values[difference_locations] In [29]: changed_to = df2.values[difference_locations] In [30]: pd.DataFrame({'from': changed_from, 'to': changed_to}, index=changed.index) Out[30]: from to id col 1 score 1.11 1.21 2 isEnrolled True False Comment None On vacation ``` * Note: it's important that df1 and df2 share the same index here. 
To overcome this ambiguity, you can ensure you only look at the shared labels using df1.index & df2.index, but I think I'll leave that as an exercise.", "best_answers_score":0.773, "library_name":"pandas", "question_url":"https:\/\/stackoverflow.com\/questions\/17095101\/compare-two-dataframes-and-output-their-differences-side-by-side", "best_answers_votes":177, "question_length":1026, "response_length":1111 }, { "question":"Access index in pandas.Series.apply Lets say I have a MultiIndex Series s: ``` >>> s values a b 1 2 0.1 3 6 0.3 4 4 0.7 ``` and I want to apply a function which uses the index of the row: ``` def f(x): # conditions or computations using the indexes if x.index[0] and ...: other = sum(x.index) + ... return something ``` How can I do s.apply(f) for such a function? What is the recommended way to make this kind of operations? I expect to obtain a new Series with the values resulting from this function applied on each row and the same MultiIndex.", "response":"I don't believe apply has access to the index; it treats each row as a numpy object, not a Series, as you can see: ``` In [27]: s.apply(lambda x: type(x)) Out[27]: a b 1 2 3 6 4 4 ``` To get around this limitation, promote the indexes to columns, apply your function, and recreate a Series with the original index. 
``` Series(s.reset_index().apply(f, axis=1).values, index=s.index) ``` Other approaches might use s.get_level_values, which often gets a little ugly in my opinion, or s.iterrows(), which is likely to be slower -- perhaps depending on exactly what f does.", "best_answers_score":0.7727, "library_name":"pandas", "question_url":"https:\/\/stackoverflow.com\/questions\/18316211\/access-index-in-pandas-series-apply", "best_answers_votes":61, "question_length":547, "response_length":572 }, { "question":"Pandas get topmost n records within each group Suppose I have pandas DataFrame like this: ```py df = pd.DataFrame({'id':[1,1,1,2,2,2,2,3,4], 'value':[1,2,3,1,2,3,4,1,1]}) ``` which looks like: ```none id value 0 1 1 1 1 2 2 1 3 3 2 1 4 2 2 5 2 3 6 2 4 7 3 1 8 4 1 ``` I want to get a new DataFrame with top 2 records for each id, like this: ```none id value 0 1 1 1 1 2 3 2 1 4 2 2 7 3 1 8 4 1 ``` I can do it with numbering records within group after groupby: ```py dfN = df.groupby('id').apply(lambda x:x['value'].reset_index()).reset_index() ``` which looks like: ```none id level_1 index value 0 1 0 0 1 1 1 1 1 2 2 1 2 2 3 3 2 0 3 1 4 2 1 4 2 5 2 2 5 3 6 2 3 6 4 7 3 0 7 1 8 4 0 8 1 ``` then for the desired output: ```py dfN[dfN['level_1'] <= 1][['id', 'value']] ``` Output: ```none id value 0 1 1 1 1 2 3 2 1 4 2 2 7 3 1 8 4 1 ``` But is there more effective\/elegant approach to do this? 
And also is there more elegant approach to number records within each group (like SQL window function row_number()).", "response":"Did you try ``` df.groupby('id').head(2) ``` Output generated: ``` id value id 1 0 1 1 1 1 2 2 3 2 1 4 2 2 3 7 3 1 4 8 4 1 ``` (Keep in mind that you might need to order\/sort before, depending on your data) EDIT: As mentioned by the questioner, use ``` df.groupby('id').head(2).reset_index(drop=True) ``` to remove the MultiIndex and flatten the results: ``` id value 0 1 1 1 1 2 2 2 1 3 2 2 4 3 1 5 4 1 ```", "best_answers_score":0.7722, "library_name":"pandas", "question_url":"https:\/\/stackoverflow.com\/questions\/20069009\/pandas-get-topmost-n-records-within-each-group", "best_answers_votes":320, "question_length":1011, "response_length":407 }, { "question":"Large, persistent DataFrame in pandas I am exploring switching to python and pandas as a long-time SAS user. However, when running some tests today, I was surprised that python ran out of memory when trying to pandas.read_csv() a 128mb csv file. It had about 200,000 rows and 200 columns of mostly numeric data. With SAS, I can import a csv file into a SAS dataset and it can be as large as my hard drive. Is there something analogous in pandas? I regularly work with large files and do not have access to a distributed computing network.", "response":"Wes is of course right! I'm just chiming in to provide a little more complete example code. I had the same issue with a 129 Mb file, which was solved by: ``` import pandas as pd # Returns a TextFileReader, which is iterable with chunks of 1000 rows. csv_iterator = pd.read_csv('large_dataset.csv', iterator=True, chunksize=1000) # Iterate through the dataframe chunks and print one row\/record at a time for chunk in csv_iterator: for index, row in chunk.iterrows(): print(row) # df is DataFrame. 
If pd.concat errors on the iterator, pass list(csv_iterator) instead. df = pd.concat(csv_iterator, ignore_index=True) ```", "best_answers_score":0.772, "library_name":"pandas", "question_url":"https:\/\/stackoverflow.com\/questions\/11622652\/large-persistent-dataframe-in-pandas", "best_answers_votes":89, "question_length":538, "response_length":581 }, { "question":"Add days to dates in dataframe I am stymied at the moment. I am sure that I am missing something simple, but how do you move a series of dates forward by x units? In my more specific case I want to add 180 days to a date series within a dataframe. Here is what I have so far: ``` import pandas, numpy, StringIO, datetime txt = '''ID,DATE 002691c9cec109e64558848f1358ac16,2003-08-13 00:00:00 002691c9cec109e64558848f1358ac16,2003-08-13 00:00:00 0088f218a1f00e0fe1b94919dc68ec33,2006-05-07 00:00:00 0088f218a1f00e0fe1b94919dc68ec33,2006-06-03 00:00:00 00d34668025906d55ae2e529615f530a,2006-03-09 00:00:00 00d34668025906d55ae2e529615f530a,2006-03-09 00:00:00 0101d3286dfbd58642a7527ecbddb92e,2007-10-13 00:00:00 0101d3286dfbd58642a7527ecbddb92e,2007-10-27 00:00:00 0103bd73af66e5a44f7867c0bb2203cc,2001-02-01 00:00:00 0103bd73af66e5a44f7867c0bb2203cc,2008-01-20 00:00:00 ''' df = pandas.read_csv(StringIO.StringIO(txt)) df = df.sort('DATE') df.DATE = pandas.to_datetime(df.DATE) df['X_DATE'] = df['DATE'].shift(180, freq=pandas.datetools.Day) ``` This code generates a type error. For reference I am using: ``` Python 2.7.4 Pandas '0.12.0.dev-6e7c4d6' Numpy '1.7.1' ```", "response":"If I understand you, you don't actually want shift, you simply want to make a new column next to the existing DATE which is 180 days after.
In that case, you can use timedelta: ``` >>> from datetime import timedelta >>> df.head() ID DATE 8 0103bd73af66e5a44f7867c0bb2203cc 2001-02-01 00:00:00 0 002691c9cec109e64558848f1358ac16 2003-08-13 00:00:00 1 002691c9cec109e64558848f1358ac16 2003-08-13 00:00:00 5 00d34668025906d55ae2e529615f530a 2006-03-09 00:00:00 4 00d34668025906d55ae2e529615f530a 2006-03-09 00:00:00 >>> df[\"X_DATE\"] = df[\"DATE\"] + timedelta(days=180) >>> df.head() ID DATE X_DATE 8 0103bd73af66e5a44f7867c0bb2203cc 2001-02-01 00:00:00 2001-07-31 00:00:00 0 002691c9cec109e64558848f1358ac16 2003-08-13 00:00:00 2004-02-09 00:00:00 1 002691c9cec109e64558848f1358ac16 2003-08-13 00:00:00 2004-02-09 00:00:00 5 00d34668025906d55ae2e529615f530a 2006-03-09 00:00:00 2006-09-05 00:00:00 4 00d34668025906d55ae2e529615f530a 2006-03-09 00:00:00 2006-09-05 00:00:00 ``` Does that help any?", "best_answers_score":0.772, "library_name":"pandas", "question_url":"https:\/\/stackoverflow.com\/questions\/16385785\/add-days-to-dates-in-dataframe", "best_answers_votes":102, "question_length":1166, "response_length":992 }, { "question":"How to change the color of a single bar in a bar plot Supposely, I have the bar chart as below: Any ideas on how to set different colors for each carrier? As for example, AK would be Red, GA would be Green, etc? I am using Pandas and matplotlib in Python ``` >>> f=plt.figure() >>> ax=f.add_subplot(1,1,1) >>> ax.bar([1,2,3,4], [1,2,3,4]) >>> ax.get_children() [, , , , , , , , , , , ] >>> ax.get_children()[2].set_color('r') #You can also try to locate the first patches.Rectangle object instead of direct calling the index. ``` For the suggestions above, how do exactly we could enumerate ax.get_children() and check if the object type is rectangle? 
So if the object is rectangle, we would assign different random color?", "response":"Simple, just use .set_color ``` >>> barlist=plt.bar([1,2,3,4], [1,2,3,4]) >>> barlist[0].set_color('r') >>> plt.show() ``` For your new question, not much harder either, just need to find the bar from your axis, an example: ``` >>> f=plt.figure() >>> ax=f.add_subplot(1,1,1) >>> ax.bar([1,2,3,4], [1,2,3,4]) >>> ax.get_children() [, , , , , , , , , , , ] >>> ax.get_children()[2].set_color('r') #You can also try to locate the first patches.Rectangle object #instead of direct calling the index. ``` If you have a complex plot and want to identify the bars first, add those: ``` >>> import matplotlib >>> childrenLS=ax.get_children() >>> barlist=filter(lambda x: isinstance(x, matplotlib.patches.Rectangle), childrenLS) [, , , , ] ```", "best_answers_score":0.7694, "library_name":"pandas", "question_url":"https:\/\/stackoverflow.com\/questions\/18973404\/how-to-change-the-color-of-a-single-bar-in-a-bar-plot", "best_answers_votes":197, "question_length":723, "response_length":735 }, { "question":"How to get a value from a Pandas DataFrame and not the index and object type Say I have the following DataFrame ``` Letter Number A 1 B 2 C 3 D 4 ``` Which can be obtained through the following code ``` import pandas as pd letters = pd.Series(('A', 'B', 'C', 'D')) numbers = pd.Series((1, 2, 3, 4)) keys = ('Letters', 'Numbers') df = pd.concat((letters, numbers), axis=1, keys=keys) ``` Now I want to get the value C from the column Letters. The command line ``` df[df.Letters=='C'].Letters ``` will return ``` 2 C Name: Letters, dtype: object ``` How can I get only the value C and not the whole two line output?", "response":"``` df[df.Letters=='C'].Letters.item() ``` This returns the first element in the Index\/Series returned from that selection. In this case, the value is always the first element. EDIT: Or you can run a loc() and access the first element that way. 
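As a minimal sketch of that .loc route (mine, not part of the original answer), rebuilding the asker's frame and pulling out the bare scalar:

```python
import pandas as pd

# Rebuild the frame from the question
letters = pd.Series(('A', 'B', 'C', 'D'))
numbers = pd.Series((1, 2, 3, 4))
df = pd.concat((letters, numbers), axis=1, keys=('Letters', 'Numbers'))

# Boolean-mask the rows, select the column, then take the first scalar
value = df.loc[df.Letters == 'C', 'Letters'].iloc[0]
print(value)  # C
```

The .iloc[0] at the end is what turns the one-element Series into the plain string.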
This was shorter and is the way I have implemented it in the past. Pandas Index doc Pandas Series doc", "best_answers_score":0.7692, "library_name":"pandas", "question_url":"https:\/\/stackoverflow.com\/questions\/30787901\/how-to-get-a-value-from-a-pandas-dataframe-and-not-the-index-and-object-type", "best_answers_votes":255, "question_length":613, "response_length":346 }, { "question":"Understanding inplace=True in pandas In the pandas library many times there is an option to change the object inplace such as with the following statement... ``` df.dropna(axis='index', how='all', inplace=True) ``` I am curious what is being returned as well as how the object is handled when inplace=True is passed vs. when inplace=False. Are all operations modifying self when inplace=True? And when inplace=False is a new object created immediately such as new_df = self and then new_df is returned? If you are trying to close a question where someone should use inplace=True and hasn't, consider replace() method not working on Pandas DataFrame instead.", "response":"In pandas, is inplace = True considered harmful, or not? TLDR; Yes, yes it is. inplace, contrary to what the name implies, often does not prevent copies from being created, and (almost) never offers any performance benefits inplace does not work with method chaining inplace can lead to SettingWithCopyWarning if used on a DataFrame column, and may prevent the operation from going though, leading to hard-to-debug errors in code The pain points above are common pitfalls for beginners, so removing this option will simplify the API. I don't advise setting this parameter as it serves little purpose. See this GitHub issue which proposes the inplace argument be deprecated api-wide. It is a common misconception that using inplace=True will lead to more efficient or optimized code. In reality, there are absolutely no performance benefits to using inplace=True. 
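As a quick sketch of the equivalence being claimed (an illustration of mine, not from the original answer): the out-of-place call plus reassignment ends in exactly the same state as the in-place call, so nothing is saved:

```python
import pandas as pd

df1 = pd.DataFrame({'a': [1.0, None, 3.0]})
df2 = df1.copy()

df1 = df1.dropna()        # out-of-place, then reassign
df2.dropna(inplace=True)  # in-place; note this call returns None

print(df1.equals(df2))  # True
```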
Both the in-place and out-of-place versions create a copy of the data anyway, with the in-place version automatically assigning the copy back. inplace=True is a common pitfall for beginners. For example, it can trigger the SettingWithCopyWarning: ``` df = pd.DataFrame({'a': [3, 2, 1], 'b': ['x', 'y', 'z']}) df2 = df[df['a'] > 1] df2['b'].replace({'x': 'abc'}, inplace=True) # SettingWithCopyWarning: # A value is trying to be set on a copy of a slice from a DataFrame ``` Calling a function on a DataFrame column with inplace=True may or may not work. This is especially true when chained indexing is involved. As if the problems described above aren't enough, inplace=True also hinders method chaining. Contrast the working of ``` result = df.some_function1().reset_index().some_function2() ``` As opposed to ``` temp = df.some_function1() temp.reset_index(inplace=True) result = temp.some_function2() ``` The former lends itself to better code organization and readability. Another supporting claim is that the API for set_axis was recently changed such that inplace default value was switched from True to False. See GH27600. Great job devs!", "best_answers_score":0.7691, "library_name":"pandas", "question_url":"https:\/\/stackoverflow.com\/questions\/43893457\/understanding-inplace-true-in-pandas", "best_answers_votes":129, "question_length":657, "response_length":2009 }, { "question":"Apache Airflow or Apache Beam for data processing and job scheduling [closed] Closed. This question is seeking recommendations for software libraries, tutorials, tools, books, or other off-site resources. It does not meet Stack Overflow guidelines. It is not currently accepting answers. We don\u2019t allow questions seeking recommendations for software libraries, tutorials, tools, books, or other off-site resources. You can edit the question so it can be answered with facts and citations. Closed last year. 
I'm trying to give useful information but I am far from being a data engineer. I am currently using the python library pandas to execute a long series of transformations on my data, which has a lot of inputs (currently CSV and excel files). The outputs are several excel files. I would like to be able to execute scheduled, monitored batch jobs with parallel computation (I mean not as sequential as what I'm doing with pandas), once a month. I don't really know Beam or Airflow; I quickly read through the docs and it seems that both can achieve that. Which one should I use?", "response":"The other answers are quite technical and hard to understand. I was in your position before, so I'll explain in simple terms. Airflow can do anything. It has BashOperator and PythonOperator, which means it can run any bash script or any Python script. It is a way to organize (set up complicated data pipeline DAGs), schedule, monitor, and trigger re-runs of data pipelines, in an easy-to-view and easy-to-use UI. Also, it is easy to set up and everything is in familiar Python code. Doing pipelines in an organized manner (i.e. using Airflow) means you don't waste time debugging a mess of data processing (cron) scripts all over the place. Nowadays (roughly year 2020 onwards), we call it an orchestration tool. Apache Beam is a wrapper for the many data processing frameworks (Spark, Flink, etc.) out there. The intent is that you just learn Beam and can run on multiple backends (Beam runners). If you are familiar with Keras and TensorFlow\/Theano\/Torch, the relationship between Keras and its backends is similar to the relationship between Beam and its data processing backends. Google Cloud Platform's Cloud Dataflow is one backend for running Beam on. They call it the Dataflow runner.
GCP's offering, Cloud Composer, is a managed Airflow implementation as a service, running in a Kubernetes cluster in Google Kubernetes Engine (GKE). So you can either: manual Airflow implementation, doing data processing on the instance itself (if your data is small (or your instance is powerful enough), you can process data on the machine running Airflow. This is why many are confused if Airflow can process data or not) manual Airflow implementation calling Beam jobs Cloud Composer (managed Airflow as a service) calling jobs in Cloud Dataflow Cloud Composer running data processing containers in Composer's Kubernetes cluster environment itself, using Airflow's KubernetesPodOperator (KPO) Cloud Composer running data processing containers in Composer's Kubernetes cluster environment with Airflow's KPO, but this time in a better isolated fashion by creating a new node-pool and specifying that the KPO pods are to be run in the new node-pool My personal experience: Airflow is lightweight and not difficult to learn (easy to implement), you should use it for your data pipelines whenever possible. Also, since many companies are looking for experience using Airflow, if you're looking to be a data engineer you should probably learn it Also, managed Airflow (I've only used GCP's Composer so far) is much more convenient than running Airflow yourself, and managing the airflow webserver and scheduler processes.", "best_answers_score":0.7684, "library_name":"pandas", "question_url":"https:\/\/stackoverflow.com\/questions\/50249759\/apache-airflow-or-apache-beam-for-data-processing-and-job-scheduling", "best_answers_votes":156, "question_length":1231, "response_length":2593 }, { "question":"How to drop a list of rows from Pandas dataframe? 
I have a dataframe df : ``` >>> df sales discount net_sales cogs STK_ID RPT_Date 600141 20060331 2.709 NaN 2.709 2.245 20060630 6.590 NaN 6.590 5.291 20060930 10.103 NaN 10.103 7.981 20061231 15.915 NaN 15.915 12.686 20070331 3.196 NaN 3.196 2.710 20070630 7.907 NaN 7.907 6.459 ``` Then I want to drop rows with certain sequence numbers which indicated in a list, suppose here is [1,2,4], then left: ``` sales discount net_sales cogs STK_ID RPT_Date 600141 20060331 2.709 NaN 2.709 2.245 20061231 15.915 NaN 15.915 12.686 20070630 7.907 NaN 7.907 6.459 ``` How or what function can do that ?", "response":"Use DataFrame.drop and pass it a Series of index labels: ``` In [65]: df Out[65]: one two one 1 4 two 2 3 three 3 2 four 4 1 In [66]: df.drop(df.index[[1,3]]) Out[66]: one two one 1 4 three 3 2 ```", "best_answers_score":0.7679, "library_name":"pandas", "question_url":"https:\/\/stackoverflow.com\/questions\/14661701\/how-to-drop-a-list-of-rows-from-pandas-dataframe", "best_answers_votes":530, "question_length":642, "response_length":197 }, { "question":"Efficient way to apply multiple filters to pandas DataFrame or Series I have a scenario where a user wants to apply several filters to a Pandas DataFrame or Series object. Essentially, I want to efficiently chain a bunch of filtering (comparison operations) together that are specified at run-time by the user. The filters should be additive (aka each one applied should narrow results). I'm currently using reindex() (as below) but this creates a new object each time and copies the underlying data (if I understand the documentation correctly). I want to avoid this unnecessary copying as it will be really inefficient when filtering a big Series or DataFrame. I'm thinking that using apply(), map(), or something similar might be better. I'm pretty new to Pandas though so still trying to wrap my head around everything. 
Also, I would like to expand this so that the dictionary passed in can include the columns to operate on and filter an entire DataFrame based on the input dictionary. However, I'm assuming whatever works for a Series can be easily expanded to a DataFrame. TL;DR I want to take a dictionary of the following form and apply each operation to a given Series object and return a 'filtered' Series object. ``` relops = {'>=': [1], '<=': [1]} ``` For example: ``` >>> df = pandas.DataFrame({'col1': [0, 1, 2], 'col2': [10, 11, 12]}) >>> print df col1 col2 0 0 10 1 1 11 2 2 12 >>> from operator import le, ge >>> ops = {'>=': ge, '<=': le} >>> apply_relops(df['col1'], {'>=': [1]}) col1 1 1 2 2 Name: col1 >>> apply_relops(df['col1'], relops = {'>=': [1], '<=': [1]}) col1 1 1 Name: col1 ``` Again, the 'problem' with my above approach is that I think there is a lot of possibly unnecessary copying of the data for the in-between steps.", "response":"Pandas (and numpy) allow for boolean indexing, which will be much more efficient: ``` In [11]: df.loc[df['col1'] >= 1, 'col1'] Out[11]: 1 1 2 2 Name: col1 In [12]: df[df['col1'] >= 1] Out[12]: col1 col2 1 1 11 2 2 12 In [13]: df[(df['col1'] >= 1) & (df['col1'] <= 1)] Out[13]: col1 col2 1 1 11 ``` If you want to write helper functions for this, consider something along these lines: ``` In [14]: def b(x, col, op, n): return op(x[col], n) In [15]: def f(x, *b): return x[(np.logical_and(*b))] In [16]: b1 = b(df, 'col1', ge, 1) In [17]: b2 = b(df, 'col1', le, 1) In [18]: f(df, b1, b2) Out[18]: col1 col2 1 1 11 ``` Update: pandas 0.13 has a query method for these kind of use cases, assuming column names are valid identifiers the following works (and can be more efficient for large frames as it uses numexpr behind the scenes): ``` In [21]: df.query('col1 <= 1 & 1 <= col1') Out[21]: col1 col2 1 1 11 ```", "best_answers_score":0.7665, "library_name":"pandas",
"question_url":"https:\/\/stackoverflow.com\/questions\/13611065\/efficient-way-to-apply-multiple-filters-to-pandas-dataframe-or-series", "best_answers_votes":390, "question_length":1724, "response_length":907 }, { "question":"Drop rows with all zeros in pandas data frame I can use pandas dropna() functionality to remove rows with some or all columns set as NA's. Is there an equivalent function for dropping rows with all columns having value 0? ``` P kt b tt mky depth 1 0 0 0 0 0 2 0 0 0 0 0 3 0 0 0 0 0 4 0 0 0 0 0 5 1.1 3 4.5 2.3 9.0 ``` In this example, we would like to drop the first 4 rows from the data frame. thanks!", "response":"One-liner. No transpose needed: ``` df.loc[~(df==0).all(axis=1)] ``` And for those who like symmetry, this also works... ``` df.loc[(df!=0).any(axis=1)] ```", "best_answers_score":0.7663, "library_name":"pandas", "question_url":"https:\/\/stackoverflow.com\/questions\/22649693\/drop-rows-with-all-zeros-in-pandas-data-frame", "best_answers_votes":234, "question_length":402, "response_length":156 }, { "question":"Appending a list or series to a pandas DataFrame as a row? So I have initialized an empty pandas DataFrame and I would like to iteratively append lists (or Series) as rows in this DataFrame. What is the best way of doing this?", "response":"Sometimes it's easier to do all the appending outside of pandas, then, just create the DataFrame in one shot. ``` >>> import pandas as pd >>> simple_list=[['a','b']] >>> simple_list.append(['e','f']) >>> df=pd.DataFrame(simple_list,columns=['col1','col2']) col1 col2 0 a b 1 e f ```", "best_answers_score":0.7663, "library_name":"pandas", "question_url":"https:\/\/stackoverflow.com\/questions\/26309962\/appending-a-list-or-series-to-a-pandas-dataframe-as-a-row", "best_answers_votes":185, "question_length":226, "response_length":282 }, { "question":"Merging dataframes on index with pandas [duplicate] This question already has answers here: Pandas Merging 101 (8 answers) Closed 7 months ago. 
I have two dataframes and each one has two index columns. I would like to merge them. For example, the first dataframe is the following: ```none V1 A 1\/1\/2012 12 2\/1\/2012 14 B 1\/1\/2012 15 2\/1\/2012 8 C 1\/1\/2012 17 2\/1\/2012 9 ``` The second dataframe is the following: ```none V2 A 1\/1\/2012 15 3\/1\/2012 21 B 1\/1\/2012 24 2\/1\/2012 9 D 1\/1\/2012 7 2\/1\/2012 16 ``` and as result I would like to get the following: ```none V1 V2 A 1\/1\/2012 12 15 2\/1\/2012 14 N\/A 3\/1\/2012 N\/A 21 B 1\/1\/2012 15 24 2\/1\/2012 8 9 C 1\/1\/2012 17 N\/A 2\/1\/2012 9 N\/A D 1\/1\/2012 N\/A 7 2\/1\/2012 N\/A 16 ``` I have tried a few versions using the pd.merge and .join methods, but nothing seems to work. Do you have any suggestions?", "response":"You should be able to use join, which joins on the index as default. Given your desired result, you must use outer as the join type. ``` >>> df1.join(df2, how='outer') V1 V2 A 1\/1\/2012 12 15 2\/1\/2012 14 NaN 3\/1\/2012 NaN 21 B 1\/1\/2012 15 24 2\/1\/2012 8 9 C 1\/1\/2012 17 NaN 2\/1\/2012 9 NaN D 1\/1\/2012 NaN 7 2\/1\/2012 NaN 16 ``` Signature: _.join(other, on=None, how='left', lsuffix='', rsuffix='', sort=False) Docstring: Join columns with other DataFrame either on index or on a key column. Efficiently Join multiple DataFrame objects by index at once by passing a list.", "best_answers_score":0.766, "library_name":"pandas", "question_url":"https:\/\/stackoverflow.com\/questions\/36538780\/merging-dataframes-on-index-with-pandas", "best_answers_votes":116, "question_length":835, "response_length":565 }, { "question":"Find length of longest string in Pandas dataframe column Is there a faster way to find the length of the longest string in a Pandas DataFrame than what's shown in the example below? 
``` import numpy as np import pandas as pd x = ['ab', 'bcd', 'dfe', 'efghik'] x = np.repeat(x, 1e7) df = pd.DataFrame(x, columns=['col1']) print df.col1.map(lambda x: len(x)).max() # result --> 6 ``` It takes about 10 seconds to run df.col1.map(lambda x: len(x)).max() when timing it with IPython's %timeit.", "response":"DSM's suggestion seems to be about the best you're going to get without doing some manual microoptimization: ``` %timeit -n 100 df.col1.str.len().max() 100 loops, best of 3: 11.7 ms per loop %timeit -n 100 df.col1.map(lambda x: len(x)).max() 100 loops, best of 3: 16.4 ms per loop %timeit -n 100 df.col1.map(len).max() 100 loops, best of 3: 10.1 ms per loop ``` Note that explicitly using the str.len() method doesn't seem to be much of an improvement. If you're not familiar with IPython, which is where that very convenient %timeit syntax comes from, I'd definitely suggest giving it a shot for quick testing of things like this. Update Added screenshot:", "best_answers_score":0.766, "library_name":"pandas", "question_url":"https:\/\/stackoverflow.com\/questions\/21295334\/find-length-of-longest-string-in-pandas-dataframe-column", "best_answers_votes":123, "question_length":489, "response_length":656 }, { "question":"Creating a Pandas DataFrame from a Numpy array: How do I specify the index column and column headers? I have a Numpy array consisting of a list of lists, representing a two-dimensional array with row labels and column names as shown below: ```py data = np.array([['','Col1','Col2'],['Row1',1,2],['Row2',3,4]]) ``` I'd like the resulting DataFrame to have Row1 and Row2 as index values, and Col1, Col2 as header values. I can specify the index as follows: ```py df = pd.DataFrame(data, index=data[:,0]) ``` However, I am unsure how to best assign column headers.", "response":"Specify data, index and columns to the DataFrame constructor, as follows: ``` >>> pd.DataFrame(data=data[1:,1:], # values ... index=data[1:,0], # 1st column as index ... 
columns=data[0,1:]) # 1st row as the column names ``` As @joris mentions, you may need to change above to np.int_(data[1:,1:]) to have the correct data type.", "best_answers_score":0.7658, "library_name":"pandas", "question_url":"https:\/\/stackoverflow.com\/questions\/20763012\/creating-a-pandas-dataframe-from-a-numpy-array-how-do-i-specify-the-index-colum", "best_answers_votes":449, "question_length":561, "response_length":327 }, { "question":"Keep selected column as DataFrame instead of Series When selecting a single column from a pandas DataFrame(say df.iloc[:, 0], df['A'], or df.A, etc), the resulting vector is automatically converted to a Series instead of a single-column DataFrame. However, I am writing some functions that takes a DataFrame as an input argument. Therefore, I prefer to deal with single-column DataFrame instead of Series so that the function can assume say df.columns is accessible. Right now I have to explicitly convert the Series into a DataFrame by using something like pd.DataFrame(df.iloc[:, 0]). This doesn't seem like the most clean method. Is there a more elegant way to index from a DataFrame directly so that the result is a single-column DataFrame instead of Series?", "response":"As @Jeff mentions there are a few ways to do this, but I recommend using loc\/iloc to be more explicit (and raise errors early if you're trying something ambiguous): ``` In [10]: df = pd.DataFrame([[1, 2], [3, 4]], columns=['A', 'B']) In [11]: df Out[11]: A B 0 1 2 1 3 4 In [12]: df[['A']] In [13]: df[[0]] In [14]: df.loc[:, ['A']] In [15]: df.iloc[:, [0]] Out[12-15]: # they all return the same thing: A 0 1 1 3 ``` The latter two choices remove ambiguity in the case of integer column names (precisely why loc\/iloc were created). 
For example: ``` In [16]: df = pd.DataFrame([[1, 2], [3, 4]], columns=['A', 0]) In [17]: df Out[17]: A 0 0 1 2 1 3 4 In [18]: df[[0]] # ambiguous Out[18]: A 0 1 1 3 ```", "best_answers_score":0.7657, "library_name":"pandas", "question_url":"https:\/\/stackoverflow.com\/questions\/16782323\/keep-selected-column-as-dataframe-instead-of-series", "best_answers_votes":147, "question_length":762, "response_length":701 }, { "question":"What does axis in pandas mean? Here is my code to generate a dataframe: ``` import pandas as pd import numpy as np dff = pd.DataFrame(np.random.randn(1, 2), columns=list('AB')) ``` then I got the dataframe: ``` A B 0 0.626386 1.52325 ``` When I type the command dff.mean(axis=1), I get: ``` 0 1.074821 dtype: float64 ``` According to the reference of pandas, axis=1 stands for columns and I expect the result of the command to be ``` A 0.626386 B 1.523255 dtype: float64 ``` So what does axis in pandas mean?", "response":"It specifies the axis along which the means are computed. By default axis=0. This is consistent with the numpy.mean usage when axis is specified explicitly (in numpy.mean, axis==None by default, which computes the mean value over the flattened array) , in which axis=0 along the rows (namely, index in pandas), and axis=1 along the columns. For added clarity, one may choose to specify axis='index' (instead of axis=0) or axis='columns' (instead of axis=1). ``` A B 0 0.626386 1.52325 \u2192 \u2192 axis=1 \u2192 \u2192 \u2193 \u2193 \u2193 axis=0 \u2193 \u2193 \u2193 ```", "best_answers_score":0.7655, "library_name":"pandas", "question_url":"https:\/\/stackoverflow.com\/questions\/22149584\/what-does-axis-in-pandas-mean", "best_answers_votes":557, "question_length":508, "response_length":522 }, { "question":"How to convert index of a pandas dataframe into a column How to convert an index of a dataframe into a column? 
For example: ```none gi ptt_loc 0 384444683 593 1 384444684 594 2 384444686 596 ``` to ```none index1 gi ptt_loc 0 0 384444683 593 1 1 384444684 594 2 2 384444686 596 ```", "response":"either: ``` df['index1'] = df.index ``` or .reset_index: ``` df = df.reset_index() ``` If you have a multi-index frame with 3 levels of index, like: ``` >>> df val tick tag obs 2016-02-26 C 2 0.0139 2016-02-27 A 2 0.5577 2016-02-28 C 6 0.0303 ``` and you want to convert the 1st (tick) and 3rd (obs) levels in the index into columns, you could do: ``` >>> df.reset_index(level=['tick', 'obs']) tick obs val tag C 2016-02-26 2 0.0139 A 2016-02-27 2 0.5577 C 2016-02-28 6 0.0303 ```", "best_answers_score":0.7653, "library_name":"pandas", "question_url":"https:\/\/stackoverflow.com\/questions\/20461165\/how-to-convert-index-of-a-pandas-dataframe-into-a-column", "best_answers_votes":1393, "question_length":281, "response_length":480 }, { "question":"pandas three-way joining multiple dataframes on columns I have 3 CSV files. Each has the first column as the (string) names of people, while all the other columns in each dataframe are attributes of that person. How can I \"join\" together all three CSV documents to create a single CSV with each row having all the attributes for each unique value of the person's string name? The join() function in pandas specifies that I need a multiindex, but I'm confused about what a hierarchical indexing scheme has to do with making a join based on a single index.", "response":"Zero's answer is basically a reduce operation. 
If I had more than a handful of dataframes, I'd put them in a list like this (generated via list comprehensions or loops or whatnot): ``` dfs = [df0, df1, df2, ..., dfN] ``` Assuming they have a common column, like name in your example, I'd do the following: ``` import functools as ft df_final = ft.reduce(lambda left, right: pd.merge(left, right, on='name'), dfs) ``` That way, your code should work with whatever number of dataframes you want to merge.", "best_answers_score":0.7647, "library_name":"pandas", "question_url":"https:\/\/stackoverflow.com\/questions\/23668427\/pandas-three-way-joining-multiple-dataframes-on-columns", "best_answers_votes":659, "question_length":554, "response_length":502 }, { "question":"Splitting a pandas dataframe column by delimiter i have a small sample data: ``` import pandas as pd df = {'ID': [3009, 129, 119, 120, 121, 122, 130, 3014, 266, 849, 174, 844], 'V': ['IGHV7-B*01', 'IGHV7-B*01', 'IGHV6-A*01', 'GHV6-A*01', 'IGHV6-A*01', 'IGHV6-A*01', 'IGHV4-L*03', 'IGHV4-L*03', 'IGHV5-A*01', 'IGHV5-A*04', 'IGHV6-A*02','IGHV6-A*02'], 'Prob': [1, 1, 0.8, 0.8056, 0.9, 0.805, 1, 1, 0.997, 0.401, 1, 1]} df = pd.DataFrame(df) ``` looks like ``` df Out[25]: ID Prob V 0 3009 1.0000 IGHV7-B*01 1 129 1.0000 IGHV7-B*01 2 119 0.8000 IGHV6-A*01 3 120 0.8056 IGHV6-A*01 4 121 0.9000 IGHV6-A*01 5 122 0.8050 IGHV6-A*01 6 130 1.0000 IGHV4-L*03 7 3014 1.0000 IGHV4-L*03 8 266 0.9970 IGHV5-A*01 9 849 0.4010 IGHV5-A*04 10 174 1.0000 IGHV6-A*02 11 844 1.0000 IGHV6-A*02 ``` I want to split the column 'V' by the '-' delimiter and move it to another column named 'allele' ``` Out[25]: ID Prob V allele 0 3009 1.0000 IGHV7 B*01 1 129 1.0000 IGHV7 B*01 2 119 0.8000 IGHV6 A*01 3 120 0.8056 IGHV6 A*01 4 121 0.9000 IGHV6 A*01 5 122 0.8050 IGHV6 A*01 6 130 1.0000 IGHV4 L*03 7 3014 1.0000 IGHV4 L*03 8 266 0.9970 IGHV5 A*01 9 849 0.4010 IGHV5 A*04 10 174 1.0000 IGHV6 A*02 11 844 1.0000 IGHV6 A*02 ``` the code i have tried so far is incomplete and didn't work: ``` df1 = 
pd.DataFrame() df1[['V']] = pd.DataFrame([ x.split('-') for x in df['V'].tolist() ]) ``` or ``` df.add(Series, axis='columns', level = None, fill_value = None) newdata = df.DataFrame({'V':df['V'].iloc[::2].values, 'Allele': df['V'].iloc[1::2].values}) ```", "response":"Use vectorized str.split with expand=True: ``` In [42]: df[['V','allele']] = df['V'].str.split('-',expand=True) df Out[42]: ID Prob V allele 0 3009 1.0000 IGHV7 B*01 1 129 1.0000 IGHV7 B*01 2 119 0.8000 IGHV6 A*01 3 120 0.8056 GHV6 A*01 4 121 0.9000 IGHV6 A*01 5 122 0.8050 IGHV6 A*01 6 130 1.0000 IGHV4 L*03 7 3014 1.0000 IGHV4 L*03 8 266 0.9970 IGHV5 A*01 9 849 0.4010 IGHV5 A*04 10 174 1.0000 IGHV6 A*02 11 844 1.0000 IGHV6 A*02 ```", "best_answers_score":0.7643, "library_name":"pandas", "question_url":"https:\/\/stackoverflow.com\/questions\/37333299\/splitting-a-pandas-dataframe-column-by-delimiter", "best_answers_votes":248, "question_length":1524, "response_length":434 }, { "question":"How can I obtain the element-wise logical NOT of a pandas Series? I have a pandas Series object containing boolean values. How can I get a series containing the logical NOT of each value? For example, consider a series containing: ``` True True True False ``` The series I'd like to get would contain: ``` False False False True ``` This seems like it should be reasonably simple, but apparently I've misplaced my mojo =(", "response":"To invert a boolean Series, use ~s: ``` In [7]: s = pd.Series([True, True, False, True]) In [8]: ~s Out[8]: 0 False 1 False 2 True 3 False dtype: bool ``` Using Python2.7, NumPy 1.8.0, Pandas 0.13.1: ``` In [119]: s = pd.Series([True, True, False, True]*10000) In [10]: %timeit np.invert(s) 10000 loops, best of 3: 91.8 \u00b5s per loop In [11]: %timeit ~s 10000 loops, best of 3: 73.5 \u00b5s per loop In [12]: %timeit (-s) 10000 loops, best of 3: 73.5 \u00b5s per loop ``` As of Pandas 0.13.0, Series are no longer subclasses of numpy.ndarray; they are now subclasses of pd.NDFrame.
This might have something to do with why np.invert(s) is no longer as fast as ~s or -s. Caveat: timeit results may vary depending on many factors including hardware, compiler, OS, Python, NumPy and Pandas versions.", "best_answers_score":0.7631, "library_name":"pandas", "question_url":"https:\/\/stackoverflow.com\/questions\/15998188\/how-can-i-obtain-the-element-wise-logical-not-of-a-pandas-series", "best_answers_votes":437, "question_length":421, "response_length":784 }, { "question":"Convert floats to ints in Pandas? I've been working with data imported from a CSV. Pandas changed some columns to float, so now the numbers in these columns get displayed as floating points! However, I need them to be displayed as integers or without comma. Is there a way to convert them to integers or not display the comma?", "response":"Use the pandas.DataFrame.astype() function to manipulate column dtypes. ``` >>> df = pd.DataFrame(np.random.rand(3,4), columns=list(\"ABCD\")) >>> df A B C D 0 0.542447 0.949988 0.669239 0.879887 1 0.068542 0.757775 0.891903 0.384542 2 0.021274 0.587504 0.180426 0.574300 >>> df[list(\"ABCD\")] = df[list(\"ABCD\")].astype(int) >>> df A B C D 0 0 0 0 0 1 0 0 0 0 2 0 0 0 0 ``` EDIT: To handle missing values: ``` >>> df A B C D 0 0.475103 0.355453 0.66 0.869336 1 0.260395 0.200287 NaN 0.617024 2 0.517692 0.735613 0.18 0.657106 >>> df[list(\"ABCD\")] = df[list(\"ABCD\")].fillna(0.0).astype(int) >>> df A B C D 0 0 0 0 0 1 0 0 0 0 2 0 0 0 0 ```", "best_answers_score":0.7627, "library_name":"pandas", "question_url":"https:\/\/stackoverflow.com\/questions\/21291259\/convert-floats-to-ints-in-pandas", "best_answers_votes":287, "question_length":326, "response_length":635 }, { "question":"How to \"select distinct\" across multiple data frame columns in pandas? I'm looking for a way to do the equivalent to the SQL ``` SELECT DISTINCT col1, col2 FROM dataframe_table ``` The pandas sql comparison doesn't have anything about distinct. 
.unique() only works for a single column, so I suppose I could concat the columns, or put them in a list\/tuple and compare that way, but this seems like something pandas should do in a more native way. Am I missing something obvious, or is there no way to do this?", "response":"You can use the drop_duplicates method to get the unique rows in a DataFrame: ``` In [29]: df = pd.DataFrame({'a':[1,2,1,2], 'b':[3,4,3,5]}) In [30]: df Out[30]: a b 0 1 3 1 2 4 2 1 3 3 2 5 In [32]: df.drop_duplicates() Out[32]: a b 0 1 3 1 2 4 3 2 5 ``` You can also provide the subset keyword argument if you only want to use certain columns to determine uniqueness. See the docstring.", "best_answers_score":0.7615, "library_name":"pandas", "question_url":"https:\/\/stackoverflow.com\/questions\/30530663\/how-to-select-distinct-across-multiple-data-frame-columns-in-pandas", "best_answers_votes":328, "question_length":509, "response_length":387 }, { "question":"How to sort a Pandas DataFrame by index? When there is a DataFrame like the following: ```py import pandas as pd df = pd.DataFrame(1, index=[100, 29, 234, 1, 150], columns=['A']) ``` How can I sort this dataframe by index with each combination of index and column value intact?", "response":"Dataframes have a sort_index method which returns a copy by default. Pass inplace=True to operate in place. 
``` import pandas as pd df = pd.DataFrame([1, 2, 3, 4, 5], index=[100, 29, 234, 1, 150], columns=['A']) df.sort_index(inplace=True) print(df.to_string()) ``` Gives me: ``` A 1 4 29 2 100 1 150 5 234 3 ```", "best_answers_score":0.7614, "library_name":"pandas", "question_url":"https:\/\/stackoverflow.com\/questions\/22211737\/how-to-sort-a-pandas-dataframe-by-index", "best_answers_votes":199, "question_length":277, "response_length":312 }, { "question":"How to convert dataframe to dictionary in pandas WITHOUT index I have a dataframe df as follows: ``` | name | coverage | |-------|----------| | Jason | 25.1 | ``` I want to convert it to a dictionary. I used the following command in pandas: ``` dict=df.to_dict() ``` The output of dict gave me the following: ``` {'coverage': {0: 25.1}, 'name': {0: 'Jason'}} ``` I do not want the 0 in my output. I believe this is captured because of the column index in my dataframe df. What can I do to eliminate the 0 in my output (I do not want the index to be captured)? Expected output: ``` {'coverage': 25.1, 'name': 'Jason'} ```", "response":"When I see your dataset with 2 columns I see a series and not a dataframe. Try this: d = df.set_index('name')['coverage'].to_dict() which will convert your dataframe to a series and output that. However, if your intent is to have more columns and not a common key, you could store them in an array instead using 'records': d = df.to_dict('r'). Runnable code: ``` import pandas as pd df = pd.DataFrame({ 'name': ['Jason'], 'coverage': [25.1] }) print(df.to_dict()) print(df.set_index('name')['coverage'].to_dict()) print(df.to_dict('records')) ``` Returns: ``` {'name': {0: 'Jason'}, 'coverage': {0: 25.1}} {'Jason': 25.1} [{'name': 'Jason', 'coverage': 25.1}] ``` And one more thing: try to avoid using the variable name dict, as it shadows the built-in type. Updated 2013-11-01.
Now use 'records' instead of 'r' thanks to comment.", "best_answers_score":0.7611, "library_name":"pandas", "question_url":"https:\/\/stackoverflow.com\/questions\/52547805\/how-to-convert-dataframe-to-dictionary-in-pandas-without-index", "best_answers_votes":101, "question_length":615, "response_length":814 }, { "question":"Binning a column with pandas I have a data frame column with numeric values: ``` df['percentage'].head() 46.5 44.2 100.0 42.12 ``` I want to see the column as bin counts: ``` bins = [0, 1, 5, 10, 25, 50, 100] ``` How can I get the result as bins with their value counts? ``` [0, 1] bin amount [1, 5] etc [5, 10] etc ... ```", "response":"You can use pandas.cut: ``` bins = [0, 1, 5, 10, 25, 50, 100] df['binned'] = pd.cut(df['percentage'], bins) print (df) percentage binned 0 46.50 (25, 50] 1 44.20 (25, 50] 2 100.00 (50, 100] 3 42.12 (25, 50] ``` ``` bins = [0, 1, 5, 10, 25, 50, 100] labels = [1,2,3,4,5,6] df['binned'] = pd.cut(df['percentage'], bins=bins, labels=labels) print (df) percentage binned 0 46.50 5 1 44.20 5 2 100.00 6 3 42.12 5 ``` Or numpy.searchsorted: ``` bins = [0, 1, 5, 10, 25, 50, 100] df['binned'] = np.searchsorted(bins, df['percentage'].values) print (df) percentage binned 0 46.50 5 1 44.20 5 2 100.00 6 3 42.12 5 ``` ...and then value_counts or groupby and aggregate size: ``` s = pd.cut(df['percentage'], bins=bins).value_counts() print (s) (25, 50] 3 (50, 100] 1 (10, 25] 0 (5, 10] 0 (1, 5] 0 (0, 1] 0 Name: percentage, dtype: int64 ``` ``` s = df.groupby(pd.cut(df['percentage'], bins=bins)).size() print (s) percentage (0, 1] 0 (1, 5] 0 (5, 10] 0 (10, 25] 0 (25, 50] 3 (50, 100] 1 dtype: int64 ``` By default cut returns categorical. 
Series methods like Series.value_counts() will use all categories, even if some categories are not present in the data; see the pandas docs on operations with categorical data.", "best_answers_score":0.761, "library_name":"pandas", "question_url":"https:\/\/stackoverflow.com\/questions\/45273731\/binning-a-column-with-pandas", "best_answers_votes":363, "question_length":323, "response_length":1176 }, { "question":"Putting many python pandas dataframes to one excel worksheet It is quite easy to add many pandas dataframes into an Excel workbook as long as they go to different worksheets. But, it is somewhat tricky to get many dataframes into one worksheet if you want to use pandas built-in df.to_excel functionality. ``` # Creating Excel Writer Object from Pandas writer = pd.ExcelWriter('test.xlsx',engine='xlsxwriter') workbook=writer.book worksheet=workbook.add_worksheet('Validation') df.to_excel(writer,sheet_name='Validation',startrow=0 , startcol=0) another_df.to_excel(writer,sheet_name='Validation',startrow=20, startcol=0) ``` The above code won't work. You will get the error of ``` Sheetname 'Validation', with case ignored, is already in use. ``` Now, I have experimented enough that I found a way to make it work. ``` writer = pd.ExcelWriter('test.xlsx',engine='xlsxwriter') # Creating Excel Writer Object from Pandas workbook=writer.book df.to_excel(writer,sheet_name='Validation',startrow=0 , startcol=0) another_df.to_excel(writer,sheet_name='Validation',startrow=20, startcol=0) ``` This will work. So, my purpose of posting this question on stackoverflow is twofold. Firstly, I hope this will help someone if he\/she is trying to put many dataframes into a single worksheet in Excel. Secondly, can someone help me understand the difference between those two blocks of code? It appears to me that they are pretty much the same, except that the first block of code creates a worksheet called \"Validation\" in advance while the second does not. I get that part.
What I don't understand is why it should be any different. Even if I don't create the worksheet in advance, this line, the line right before the last one, ``` df.to_excel(writer,sheet_name='Validation',startrow=0 , startcol=0) ``` will create a worksheet anyway. Consequently, by the time we reach the last line of code the worksheet \"Validation\" is already created as well in the second block of code. So, my question basically is: why should the second block of code work while the first doesn't? Please also share if there is another way to put many dataframes into excel using the built-in df.to_excel functionality!", "response":"To create the Worksheet in advance, you need to add the created sheet to the sheets dict: writer.sheets['Validation'] = worksheet Using your original code: ``` # Creating Excel Writer Object from Pandas writer = pd.ExcelWriter('test.xlsx',engine='xlsxwriter') workbook=writer.book worksheet=workbook.add_worksheet('Validation') writer.sheets['Validation'] = worksheet df.to_excel(writer,sheet_name='Validation',startrow=0 , startcol=0) another_df.to_excel(writer,sheet_name='Validation',startrow=20, startcol=0) ``` Explanation If we look at the pandas function to_excel, it uses the writer's write_cells function: ``` excel_writer.write_cells(formatted_cells, sheet_name, startrow=startrow, startcol=startcol) ``` So looking at the write_cells function for xlsxwriter: ``` def write_cells(self, cells, sheet_name=None, startrow=0, startcol=0): # Write the frame cells using xlsxwriter.
sheet_name = self._get_sheet_name(sheet_name) if sheet_name in self.sheets: wks = self.sheets[sheet_name] else: wks = self.book.add_worksheet(sheet_name) self.sheets[sheet_name] = wks ``` Here we can see that it checks for sheet_name in self.sheets, and so it needs to be added there as well.", "best_answers_score":0.7605, "library_name":"pandas", "question_url":"https:\/\/stackoverflow.com\/questions\/32957441\/putting-many-python-pandas-dataframes-to-one-excel-worksheet", "best_answers_votes":85, "question_length":2173, "response_length":1179 }, { "question":"How to simply add a column level to a pandas dataframe let's say I have a dataframe that looks like this: ``` df = pd.DataFrame({'A': range(5), 'B': range(5)}, index=list('abcde')) df Out[92]: A B a 0 0 b 1 1 c 2 2 d 3 3 e 4 4 ``` Asumming that this dataframe already exist, how can I simply add a level 'C' to the column index so I get this: ``` df Out[92]: A B C C a 0 0 b 1 1 c 2 2 d 3 3 e 4 4 ``` I saw SO anwser like this python\/pandas: how to combine two dataframes into one with hierarchical column index? but this concat different dataframe instead of adding a column level to an already existing dataframe.", "response":"As suggested by @StevenG himself, a better answer: ``` df.columns = pd.MultiIndex.from_product(df.columns.levels + [['C']]) print(df) # A B # C C # a 0 0 # b 1 1 # c 2 2 # d 3 3 # e 4 4 ```", "best_answers_score":0.76, "library_name":"pandas", "question_url":"https:\/\/stackoverflow.com\/questions\/40225683\/how-to-simply-add-a-column-level-to-a-pandas-dataframe", "best_answers_votes":156, "question_length":615, "response_length":189 }, { "question":"How to replace NaNs by preceding or next values in pandas DataFrame? 
Suppose I have a DataFrame with some NaNs: ``` >>> import pandas as pd >>> df = pd.DataFrame([[1, 2, 3], [4, None, None], [None, None, 9]]) >>> df 0 1 2 0 1 2 3 1 4 NaN NaN 2 NaN NaN 9 ``` What I need to do is replace every NaN with the first non-NaN value in the same column above it. It is assumed that the first row will never contain a NaN. So for the previous example the result would be ``` 0 1 2 0 1 2 3 1 4 2 3 2 4 2 9 ``` I can just loop through the whole DataFrame column-by-column, element-by-element and set the values directly, but is there an easy (optimally a loop-free) way of achieving this?", "response":"You could use the fillna method on the DataFrame and specify the method as ffill (forward fill): ``` >>> df = pd.DataFrame([[1, 2, 3], [4, None, None], [None, None, 9]]) >>> df.fillna(method='ffill') 0 1 2 0 1 2 3 1 4 2 3 2 4 2 9 ``` This method... propagate[s] last valid observation forward to next valid To go the opposite way, there's also a bfill method. This method doesn't modify the DataFrame inplace - you'll need to rebind the returned DataFrame to a variable or else specify inplace=True: ``` df.fillna(method='ffill', inplace=True) ```", "best_answers_score":0.7596, "library_name":"pandas", "question_url":"https:\/\/stackoverflow.com\/questions\/27905295\/how-to-replace-nans-by-preceding-or-next-values-in-pandas-dataframe", "best_answers_votes":415, "question_length":677, "response_length":547 }, { "question":"How can I get a value from a cell of a dataframe? I have constructed a condition that extracts exactly one row from my dataframe: ```py d2 = df[(df['l_ext']==l_ext) & (df['item']==item) & (df['wn']==wn) & (df['wd']==1)] ``` Now I would like to take a value from a particular column: ```py val = d2['col_name'] ``` But as a result, I get a dataframe that contains one row and one column (i.e., one cell). It is not what I need. I need one value (one float number). 
How can I do it in pandas?", "response":"If you have a DataFrame with only one row, then access the first (only) row as a Series using iloc, and then the value using the column name: ``` In [3]: sub_df Out[3]: A B 2 -0.133653 -0.030854 In [4]: sub_df.iloc[0] Out[4]: A -0.133653 B -0.030854 Name: 2, dtype: float64 In [5]: sub_df.iloc[0]['A'] Out[5]: -0.13365288513107493 ```", "best_answers_score":0.7591, "library_name":"pandas", "question_url":"https:\/\/stackoverflow.com\/questions\/16729574\/how-can-i-get-a-value-from-a-cell-of-a-dataframe", "best_answers_votes":834, "question_length":490, "response_length":334 }, { "question":"Save list of DataFrames to multisheet Excel spreadsheet How can I export a list of DataFrames into one Excel spreadsheet? The docs for to_excel state: Notes If passing an existing ExcelWriter object, then the sheet will be added to the existing workbook. This can be used to save different DataFrames to one workbook writer = ExcelWriter('output.xlsx') df1.to_excel(writer, 'sheet1') df2.to_excel(writer, 'sheet2') writer.save() Following this, I thought I could write a function which saves a list of DataFrames to one spreadsheet as follows: ``` from openpyxl.writer.excel import ExcelWriter def save_xls(list_dfs, xls_path): writer = ExcelWriter(xls_path) for n, df in enumerate(list_dfs): df.to_excel(writer,'sheet%s' % n) writer.save() ``` However (with a list of two small DataFrames, each of which can save to_excel individually), an exception is raised (Edit: traceback removed): ``` AttributeError: 'str' object has no attribute 'worksheets' ``` Presumably I am not calling ExcelWriter correctly, how should I be in order to do this?", "response":"You should be using pandas own ExcelWriter class: ``` from pandas import ExcelWriter # from pandas.io.parsers import ExcelWriter ``` Then the save_xls function works as expected: ``` def save_xls(list_dfs, xls_path): with ExcelWriter(xls_path) as writer: for n, df in enumerate(list_dfs): 
df.to_excel(writer,'sheet%s' % n) ```", "best_answers_score":0.7586, "library_name":"pandas", "question_url":"https:\/\/stackoverflow.com\/questions\/14225676\/save-list-of-dataframes-to-multisheet-excel-spreadsheet", "best_answers_votes":184, "question_length":1042, "response_length":326 }, { "question":"Convert Pandas DataFrame to JSON format I have a Pandas DataFrame with two columns \u2013 one with the filename and one with the hour in which it was generated: ``` File Hour F1 1 F1 2 F2 1 F3 1 ``` I am trying to convert it to a JSON file with the following format: ``` {\"File\":\"F1\",\"Hour\":\"1\"} {\"File\":\"F1\",\"Hour\":\"2\"} {\"File\":\"F2\",\"Hour\":\"1\"} {\"File\":\"F3\",\"Hour\":\"1\"} ``` When I use the command DataFrame.to_json(orient = \"records\"), I get the records in the below format: ``` [{\"File\":\"F1\",\"Hour\":\"1\"}, {\"File\":\"F1\",\"Hour\":\"2\"}, {\"File\":\"F2\",\"Hour\":\"1\"}, {\"File\":\"F3\",\"Hour\":\"1\"}] ``` I'm just wondering whether there is an option to get the JSON file in the desired format. Any help would be appreciated.", "response":"In newer versions of pandas (0.20.0+, I believe), this can be done directly: ``` df.to_json('temp.json', orient='records', lines=True) ``` Direct compression is also possible: ``` df.to_json('temp.json.gz', orient='records', lines=True, compression='gzip') ```", "best_answers_score":0.7583, "library_name":"pandas", "question_url":"https:\/\/stackoverflow.com\/questions\/39257147\/convert-pandas-dataframe-to-json-format", "best_answers_votes":113, "question_length":704, "response_length":260 }, { "question":"How to use rolling functions for GroupBy objects I have a time series object grouped of the type . grouped.sum() gives the desired result but I cannot get rolling_sum to work with the groupby object. Is there any way to apply rolling functions to groupby objects? 
For example: ``` x = range(0, 6) id = ['a', 'a', 'a', 'b', 'b', 'b'] df = DataFrame(zip(id, x), columns = ['id', 'x']) df.groupby('id').sum() id x a 3 b 12 ``` However, I would like to have something like: ``` id x 0 a 0 1 a 1 2 a 3 3 b 3 4 b 7 5 b 12 ```", "response":"For the Googlers who come upon this old question: Regarding @kekert's comment on @Garrett's answer to use the new ``` df.groupby('id')['x'].rolling(2).mean() ``` rather than the now-deprecated ``` df.groupby('id')['x'].apply(pd.rolling_mean, 2, min_periods=1) ``` curiously, it seems that the new .rolling().mean() approach returns a multi-indexed series, indexed by the group_by column first and then the index. Whereas, the old approach would simply return a series indexed singularly by the original df index, which perhaps makes less sense, but made it very convenient for adding that series as a new column into the original dataframe. So I think I've figured out a solution that uses the new rolling() method and still works the same: ``` df.groupby('id')['x'].rolling(2).mean().reset_index(0,drop=True) ``` which should give you the series ``` 0 0.0 1 0.5 2 1.5 3 3.0 4 3.5 5 4.5 ``` which you can add as a column: ``` df['x'] = df.groupby('id')['x'].rolling(2).mean().reset_index(0,drop=True) ```", "best_answers_score":0.7576, "library_name":"pandas", "question_url":"https:\/\/stackoverflow.com\/questions\/13996302\/how-to-use-rolling-functions-for-groupby-objects", "best_answers_votes":145, "question_length":519, "response_length":1004 }, { "question":"Remove duplicates by columns A, keeping the row with the highest value in column B I have a dataframe with repeat values in column A. I want to drop duplicates, keeping the row with the highest value in column B. 
So this: ``` A B 1 10 1 20 2 30 2 40 3 10 ``` Should turn into this: ``` A B 1 20 2 40 3 10 ``` I'm guessing there's probably an easy way to do this\u2014maybe as easy as sorting the DataFrame before dropping duplicates\u2014but I don't know groupby's internal logic well enough to figure it out. Any suggestions?", "response":"This takes the last. Not the maximum though: ``` In [10]: df.drop_duplicates(subset='A', keep=\"last\") Out[10]: A B 1 1 20 3 2 40 4 3 10 ``` You can do also something like: ``` In [12]: df.groupby('A', group_keys=False).apply(lambda x: x.loc[x.B.idxmax()]) Out[12]: A B A 1 1 20 2 2 40 3 3 10 ```", "best_answers_score":0.7573, "library_name":"pandas", "question_url":"https:\/\/stackoverflow.com\/questions\/12497402\/remove-duplicates-by-columns-a-keeping-the-row-with-the-highest-value-in-column", "best_answers_votes":395, "question_length":516, "response_length":295 }, { "question":"Redefining the Index in a Pandas DataFrame object I am trying to re-index a pandas DataFrame object, like so, ``` From: a b c 0 1 2 3 1 10 11 12 2 20 21 22 To : b c 1 2 3 10 11 12 20 21 22 ``` I am going about this as shown below and am getting the wrong answer. Any clues on how to do this? ``` >>> col = ['a','b','c'] >>> data = DataFrame([[1,2,3],[10,11,12],[20,21,22]],columns=col) >>> data a b c 0 1 2 3 1 10 11 12 2 20 21 22 >>> idx2 = data.a.values >>> idx2 array([ 1, 10, 20], dtype=int64) >>> data2 = DataFrame(data,index=idx2,columns=col[1:]) >>> data2 b c 1 11 12 10 NaN NaN 20 NaN NaN ``` Any idea why this is happening?", "response":"Why don't you simply use set_index method? 
``` In : col = ['a','b','c'] In : data = DataFrame([[1,2,3],[10,11,12],[20,21,22]],columns=col) In : data Out: a b c 0 1 2 3 1 10 11 12 2 20 21 22 In : data2 = data.set_index('a') In : data2 Out: b c a 1 2 3 10 11 12 20 21 22 ```", "best_answers_score":0.7572, "library_name":"pandas", "question_url":"https:\/\/stackoverflow.com\/questions\/10457584\/redefining-the-index-in-a-pandas-dataframe-object", "best_answers_votes":192, "question_length":632, "response_length":272 }, { "question":"How to determine the length of lists in a pandas dataframe column How can the length of the lists in the column be determine without iteration? I have a dataframe like this: ```none CreationDate 2013-12-22 15:25:02 [ubuntu, mac-osx, syslinux] 2009-12-14 14:29:32 [ubuntu, mod-rewrite, laconica, apache-2.2] 2013-12-22 15:42:00 [ubuntu, nat, squid, mikrotik] ``` I am calculating the length of lists in the CreationDate column and making a new Length column like this: ```py df['Length'] = df.CreationDate.apply(lambda x: len(x)) ``` Which gives me this: ```none CreationDate Length 2013-12-22 15:25:02 [ubuntu, mac-osx, syslinux] 3 2009-12-14 14:29:32 [ubuntu, mod-rewrite, laconica, apache-2.2] 4 2013-12-22 15:42:00 [ubuntu, nat, squid, mikrotik] 4 ``` Is there a more pythonic way to do this?", "response":"You can use the str accessor for some list operations as well. In this example, ``` df['CreationDate'].str.len() ``` returns the length of each list. See the docs for str.len. ``` df['Length'] = df['CreationDate'].str.len() df Out: CreationDate Length 2013-12-22 15:25:02 [ubuntu, mac-osx, syslinux] 3 2009-12-14 14:29:32 [ubuntu, mod-rewrite, laconica, apache-2.2] 4 2013-12-22 15:42:00 [ubuntu, nat, squid, mikrotik] 4 ``` For these operations, vanilla Python is generally faster. pandas handles NaNs though. 
Here are timings: ``` ser = pd.Series([random.sample(string.ascii_letters, random.randint(1, 20)) for _ in range(10**6)]) %timeit ser.apply(lambda x: len(x)) 1 loop, best of 3: 425 ms per loop %timeit ser.str.len() 1 loop, best of 3: 248 ms per loop %timeit [len(x) for x in ser] 10 loops, best of 3: 84 ms per loop %timeit pd.Series([len(x) for x in ser], index=ser.index) 1 loop, best of 3: 236 ms per loop ```", "best_answers_score":0.7562, "library_name":"pandas", "question_url":"https:\/\/stackoverflow.com\/questions\/41340341\/how-to-determine-the-length-of-lists-in-a-pandas-dataframe-column", "best_answers_votes":192, "question_length":795, "response_length":923 }, { "question":"Python Pandas How to assign groupby operation results back to columns in parent dataframe? I have the following data frame in IPython, where each row is a single stock: ``` In [261]: bdata Out[261]: Int64Index: 21210 entries, 0 to 21209 Data columns: BloombergTicker 21206 non-null values Company 21210 non-null values Country 21210 non-null values MarketCap 21210 non-null values PriceReturn 21210 non-null values SEDOL 21210 non-null values yearmonth 21210 non-null values dtypes: float64(2), int64(1), object(4) ``` I want to apply a groupby operation that computes cap-weighted average return across everything, per each date in the \"yearmonth\" column. This works as expected: ``` In [262]: bdata.groupby(\"yearmonth\").apply(lambda x: (x[\"PriceReturn\"]*x[\"MarketCap\"]\/x[\"MarketCap\"].sum()).sum()) Out[262]: yearmonth 201204 -0.109444 201205 -0.290546 ``` But then I want to sort of \"broadcast\" these values back to the indices in the original data frame, and save them as constant columns where the dates match. 
``` In [263]: dateGrps = bdata.groupby(\"yearmonth\") In [264]: dateGrps[\"MarketReturn\"] = dateGrps.apply(lambda x: (x[\"PriceReturn\"]*x[\"MarketCap\"]\/x[\"MarketCap\"].sum()).sum()) --------------------------------------------------------------------------- TypeError Traceback (most recent call last) \/mnt\/bos-devrnd04\/usr6\/home\/espears\/ws\/Research\/Projects\/python-util\/src\/util\/ in () ----> 1 dateGrps[\"MarketReturn\"] = dateGrps.apply(lambda x: (x[\"PriceReturn\"]*x[\"MarketCap\"]\/x[\"MarketCap\"].sum()).sum()) TypeError: 'DataFrameGroupBy' object does not support item assignment ``` I realize this naive assignment should not work. But what is the \"right\" Pandas idiom for assigning the result of a groupby operation into a new column on the parent dataframe? In the end, I want a column called \"MarketReturn\" than will be a repeated constant value for all indices that have matching date with the output of the groupby operation. One hack to achieve this would be the following: ``` marketRetsByDate = dateGrps.apply(lambda x: (x[\"PriceReturn\"]*x[\"MarketCap\"]\/x[\"MarketCap\"].sum()).sum()) bdata[\"MarketReturn\"] = np.repeat(np.NaN, len(bdata)) for elem in marketRetsByDate.index.values: bdata[\"MarketReturn\"][bdata[\"yearmonth\"]==elem] = marketRetsByDate.ix[elem] ``` But this is slow, bad, and unPythonic.", "response":"``` In [97]: df = pandas.DataFrame({'month': np.random.randint(0,11, 100), 'A': np.random.randn(100), 'B': np.random.randn(100)}) In [98]: df.join(df.groupby('month')['A'].sum(), on='month', rsuffix='_r') Out[98]: A B month A_r 0 -0.040710 0.182269 0 -0.331816 1 -0.004867 0.642243 1 2.448232 2 -0.162191 0.442338 4 2.045909 3 -0.979875 1.367018 5 -2.736399 4 -1.126198 0.338946 5 -2.736399 5 -0.992209 -1.343258 1 2.448232 6 -1.450310 0.021290 0 -0.331816 7 -0.675345 -1.359915 9 2.722156 ```", "best_answers_score":0.755, "library_name":"pandas", 
"question_url":"https:\/\/stackoverflow.com\/questions\/12200693\/python-pandas-how-to-assign-groupby-operation-results-back-to-columns-in-parent", "best_answers_votes":99, "question_length":2315, "response_length":493 }, { "question":"What's the fastest way in Python to calculate cosine similarity given sparse matrix data? Given a sparse matrix listing, what's the best way to calculate the cosine similarity between each of the columns (or rows) in the matrix? I would rather not iterate n-choose-two times. Say the input matrix is: ``` A= [0 1 0 0 1 0 0 1 1 1 1 1 0 1 0] ``` The sparse representation is: ``` A = 0, 1 0, 4 1, 2 1, 3 1, 4 2, 0 2, 1 2, 3 ``` In Python, it's straightforward to work with the matrix-input format: ``` import numpy as np from sklearn.metrics import pairwise_distances from scipy.spatial.distance import cosine A = np.array( [[0, 1, 0, 0, 1], [0, 0, 1, 1, 1], [1, 1, 0, 1, 0]]) dist_out = 1-pairwise_distances(A, metric=\"cosine\") dist_out ``` Gives: ``` array([[ 1. , 0.40824829, 0.40824829], [ 0.40824829, 1. , 0.33333333], [ 0.40824829, 0.33333333, 1. ]]) ``` That's fine for a full-matrix input, but I really want to start with the sparse representation (due to the size and sparsity of my matrix). Any ideas about how this could best be accomplished?", "response":"You can compute pairwise cosine similarity on the rows of a sparse matrix directly using sklearn. As of version 0.17 it also supports sparse output: ``` from sklearn.metrics.pairwise import cosine_similarity from scipy import sparse A = np.array([[0, 1, 0, 0, 1], [0, 0, 1, 1, 1],[1, 1, 0, 1, 0]]) A_sparse = sparse.csr_matrix(A) similarities = cosine_similarity(A_sparse) print('pairwise dense output:\\n {}\\n'.format(similarities)) #also can output sparse matrices similarities_sparse = cosine_similarity(A_sparse,dense_output=False) print('pairwise sparse output:\\n {}\\n'.format(similarities_sparse)) ``` Results: ``` pairwise dense output: [[ 1. 0.40824829 0.40824829] [ 0.40824829 1. 
0.33333333] [ 0.40824829 0.33333333 1. ]] pairwise sparse output: (0, 1) 0.408248290464 (0, 2) 0.408248290464 (0, 0) 1.0 (1, 0) 0.408248290464 (1, 2) 0.333333333333 (1, 1) 1.0 (2, 1) 0.333333333333 (2, 0) 0.408248290464 (2, 2) 1.0 ``` If you want column-wise cosine similarities, simply transpose your input matrix beforehand: ``` A_sparse.transpose() ```", "best_answers_score":0.7533, "library_name":"pandas", "question_url":"https:\/\/stackoverflow.com\/questions\/17627219\/whats-the-fastest-way-in-python-to-calculate-cosine-similarity-given-sparse-mat", "best_answers_votes":101, "question_length":1051, "response_length":1042 }, { "question":"Access index of last element in data frame I've been looking around for this but I can't seem to find it (though it must be extremely trivial). The problem that I have is that I would like to retrieve the value of a column for the first and last entries of a data frame. But if I do: ``` df.ix[0]['date'] ``` I get: ``` datetime.datetime(2011, 1, 10, 16, 0) ``` but if I do: ``` df[-1:]['date'] ``` I get: ``` myIndex 13 2011-12-20 16:00:00 Name: mydate ``` with a different format. Ideally, I would like to be able to access the value of the last index of the data frame, but I can't find how. I even tried to create a column (IndexCopy) with the values of the index and try: ``` df.ix[df.tail(1)['IndexCopy']]['mydate'] ``` but this also yields a different format (since df.tail(1)['IndexCopy'] does not output a simple integer).
Any ideas?", "response":"The former answer is now superseded by .iloc: ``` >>> df = pd.DataFrame({\"date\": range(10, 64, 8)}) >>> df.index += 17 >>> df date 17 10 18 18 19 26 20 34 21 42 22 50 23 58 >>> df[\"date\"].iloc[0] 10 >>> df[\"date\"].iloc[-1] 58 ``` The shortest way I can think of uses .iget(): ``` >>> df = pd.DataFrame({\"date\": range(10, 64, 8)}) >>> df.index += 17 >>> df date 17 10 18 18 19 26 20 34 21 42 22 50 23 58 >>> df['date'].iget(0) 10 >>> df['date'].iget(-1) 58 ``` Alternatively: ``` >>> df['date'][df.index[0]] 10 >>> df['date'][df.index[-1]] 58 ``` There's also .first_valid_index() and .last_valid_index(), but depending on whether or not you want to rule out NaNs they might not be what you want. Remember that df.ix[0] doesn't give you the first, but the one indexed by 0. For example, in the above case, df.ix[0] would produce ``` >>> df.ix[0] Traceback (most recent call last): File \"\", line 1, in df.ix[0] [...] KeyError: 0 ```", "best_answers_score":0.7531, "library_name":"pandas", "question_url":"https:\/\/stackoverflow.com\/questions\/15862034\/access-index-of-last-element-in-data-frame", "best_answers_votes":184, "question_length":837, "response_length":931 }, { "question":"Applying function with multiple arguments to create a new pandas column I want to create a new column in a pandas data frame by applying a function to two existing columns. Following this answer I've been able to create a new column when I only need one column as an argument: ``` import pandas as pd df = pd.DataFrame({\"A\": [10,20,30], \"B\": [20, 30, 10]}) def fx(x): return x * x print(df) df['newcolumn'] = df.A.apply(fx) print(df) ``` However, I cannot figure out how to do the same thing when the function requires multiple arguments. For example, how do I create a new column by passing column A and column B to the function below? 
``` def fxy(x, y): return x * y ```", "response":"You can go with @greenAfrican example, if it's possible for you to rewrite your function. But if you don't want to rewrite your function, you can wrap it into anonymous function inside apply, like this: ``` >>> def fxy(x, y): ... return x * y >>> df['newcolumn'] = df.apply(lambda x: fxy(x['A'], x['B']), axis=1) >>> df A B newcolumn 0 10 20 200 1 20 30 600 2 30 10 300 ```", "best_answers_score":0.7515, "library_name":"pandas", "question_url":"https:\/\/stackoverflow.com\/questions\/19914937\/applying-function-with-multiple-arguments-to-create-a-new-pandas-column", "best_answers_votes":431, "question_length":672, "response_length":373 }, { "question":"Normalize columns of a dataframe I have a dataframe in pandas where each column has different value range. For example: df: ``` A B C 1000 10 0.5 765 5 0.35 800 7 0.09 ``` Any idea how I can normalize the columns of this dataframe where each value is between 0 and 1? My desired output is: ``` A B C 1 1 1 0.765 0.5 0.7 0.8 0.7 0.18(which is 0.09\/0.5) ```", "response":"one easy way by using Pandas: (here I want to use mean normalization) ``` normalized_df=(df-df.mean())\/df.std() ``` to use min-max normalization: ``` normalized_df=(df-df.min())\/(df.max()-df.min()) ``` Edit: To address some concerns, need to say that Pandas automatically applies colomn-wise function in the code above.", "best_answers_score":0.7512, "library_name":"pandas", "question_url":"https:\/\/stackoverflow.com\/questions\/26414913\/normalize-columns-of-a-dataframe", "best_answers_votes":832, "question_length":355, "response_length":319 }, { "question":"pandas to_csv output quoting issue I'm having trouble getting the pandas dataframe.to_csv(...) output quoting strings right. 
``` import pandas as pd text = 'this is \"out text\"' df = pd.DataFrame(index=['1'],columns=['1','2']) df.loc['1','1']=123 df.loc['1','2']=text df.to_csv('foo.txt',index=False,header=False) ``` The output is: 123,\"this is \"\"out text\"\"\" But I would like: 123,this is \"out text\" Does anyone know how to get this right?", "response":"You could pass quoting=csv.QUOTE_NONE, for example: ``` >>> df.to_csv('foo.txt',index=False,header=False) >>> !cat foo.txt 123,\"this is \"\"out text\"\"\" >>> import csv >>> df.to_csv('foo.txt',index=False,header=False, quoting=csv.QUOTE_NONE) >>> !cat foo.txt 123,this is \"out text\" ``` but in my experience it's better to quote more, rather than less.", "best_answers_score":0.7511, "library_name":"pandas", "question_url":"https:\/\/stackoverflow.com\/questions\/21147058\/pandas-to-csv-output-quoting-issue", "best_answers_votes":139, "question_length":439, "response_length":348 }, { "question":"Replacing few values in a pandas dataframe column with another value [duplicate] This question already has answers here: Remap values in pandas column with a dict, preserve NaNs (12 answers) Closed 1 year ago. I have a pandas dataframe df as illustrated below: ``` BrandName Specialty A H B I ABC J D K AB L ``` I want to replace 'ABC' and 'AB' in column BrandName by 'A'. Can someone help with this?", "response":"The easiest way is to use the replace method on the column. 
The arguments are a list of the things you want to replace (here ['ABC', 'AB']) and what you want to replace them with (the string 'A' in this case): ``` >>> df['BrandName'].replace(['ABC', 'AB'], 'A') 0 A 1 B 2 A 3 D 4 A ``` This creates a new Series of values, so you need to assign this new column to the correct column name: ``` df['BrandName'] = df['BrandName'].replace(['ABC', 'AB'], 'A') ```", "best_answers_score":0.7506, "library_name":"pandas", "question_url":"https:\/\/stackoverflow.com\/questions\/27060098\/replacing-few-values-in-a-pandas-dataframe-column-with-another-value", "best_answers_votes":195, "question_length":400, "response_length":457 }, { "question":"Pandas DataFrame to List of Dictionaries I have the following DataFrame: ``` customer item1 item2 item3 1 apple milk tomato 2 water orange potato 3 juice mango chips ``` which I want to translate to a list of dictionaries per row ``` rows = [ { 'customer': 1, 'item1': 'apple', 'item2': 'milk', 'item3': 'tomato' }, { 'customer': 2, 'item1': 'water', 'item2': 'orange', 'item3': 'potato' }, { 'customer': 3, 'item1': 'juice', 'item2': 'mango', 'item3': 'chips' } ] ```", "response":"Use df.to_dict('records') -- it gives the output without having to transpose externally. ``` In [2]: df.to_dict('records') Out[2]: [{'customer': 1L, 'item1': 'apple', 'item2': 'milk', 'item3': 'tomato'}, {'customer': 2L, 'item1': 'water', 'item2': 'orange', 'item3': 'potato'}, {'customer': 3L, 'item1': 'juice', 'item2': 'mango', 'item3': 'chips'}] ```", "best_answers_score":0.7496, "library_name":"pandas", "question_url":"https:\/\/stackoverflow.com\/questions\/29815129\/pandas-dataframe-to-list-of-dictionaries", "best_answers_votes":601, "question_length":469, "response_length":350 }, { "question":"How to plot multiple bars grouped How do I plot multiple bars in matplotlib? When I try to call the bar function multiple times, they overlap, and as seen in the figure below only the highest value (red) can be seen.
How can I plot multiple bars with dates on the x-axis? So far, I tried this: ```py import matplotlib.pyplot as plt import datetime x = [ datetime.datetime(2011, 1, 4, 0, 0), datetime.datetime(2011, 1, 5, 0, 0), datetime.datetime(2011, 1, 6, 0, 0) ] y = [4, 9, 2] z = [1, 2, 3] k = [11, 12, 13] ax = plt.subplot(111) ax.bar(x, y, width=0.5, color='b', align='center') ax.bar(x, z, width=0.5, color='g', align='center') ax.bar(x, k, width=0.5, color='r', align='center') ax.xaxis_date() plt.show() ``` I got this: The result should be something like this, but with the dates on the x-axis and the bars next to each other:", "response":"``` import matplotlib.pyplot as plt from matplotlib.dates import date2num import datetime x = [ datetime.datetime(2011, 1, 4, 0, 0), datetime.datetime(2011, 1, 5, 0, 0), datetime.datetime(2011, 1, 6, 0, 0) ] x = date2num(x) y = [4, 9, 2] z = [1, 2, 3] k = [11, 12, 13] ax = plt.subplot(111) ax.bar(x-0.2, y, width=0.2, color='b', align='center') ax.bar(x, z, width=0.2, color='g', align='center') ax.bar(x+0.2, k, width=0.2, color='r', align='center') ax.xaxis_date() plt.show() ``` I don't know what \"y values are also overlapping\" means; does the following code solve your problem? ``` ax = plt.subplot(111) w = 0.3 ax.bar(x-w, y, width=w, color='b', align='center') ax.bar(x, z, width=w, color='g', align='center') ax.bar(x+w, k, width=w, color='r', align='center') ax.xaxis_date() ax.autoscale(tight=True) plt.show() ```", "best_answers_score":0.7492, "library_name":"pandas", "question_url":"https:\/\/stackoverflow.com\/questions\/14270391\/how-to-plot-multiple-bars-grouped", "best_answers_votes":159, "question_length":834, "response_length":830 }, { "question":"Pandas: rolling mean by time interval I've got a bunch of polling data; I want to compute a Pandas rolling mean to get an estimate for each day based on a three-day window.
According to this question, the rolling_* functions compute the window based on a specified number of values, and not a specific datetime range. How do I implement this functionality? Sample input data: ``` polls_subset.tail(20) Out[185]: favorable unfavorable other enddate 2012-10-25 0.48 0.49 0.03 2012-10-25 0.51 0.48 0.02 2012-10-27 0.51 0.47 0.02 2012-10-26 0.56 0.40 0.04 2012-10-28 0.48 0.49 0.04 2012-10-28 0.46 0.46 0.09 2012-10-28 0.48 0.49 0.03 2012-10-28 0.49 0.48 0.03 2012-10-30 0.53 0.45 0.02 2012-11-01 0.49 0.49 0.03 2012-11-01 0.47 0.47 0.05 2012-11-01 0.51 0.45 0.04 2012-11-03 0.49 0.45 0.06 2012-11-04 0.53 0.39 0.00 2012-11-04 0.47 0.44 0.08 2012-11-04 0.49 0.48 0.03 2012-11-04 0.52 0.46 0.01 2012-11-04 0.50 0.47 0.03 2012-11-05 0.51 0.46 0.02 2012-11-07 0.51 0.41 0.00 ``` Output would have only one row for each date.", "response":"In the meantime, a time-window capability was added. See this link. ``` In [1]: df = DataFrame({'B': range(5)}) In [2]: df.index = [Timestamp('20130101 09:00:00'), ...: Timestamp('20130101 09:00:02'), ...: Timestamp('20130101 09:00:03'), ...: Timestamp('20130101 09:00:05'), ...: Timestamp('20130101 09:00:06')] In [3]: df Out[3]: B 2013-01-01 09:00:00 0 2013-01-01 09:00:02 1 2013-01-01 09:00:03 2 2013-01-01 09:00:05 3 2013-01-01 09:00:06 4 In [4]: df.rolling(2, min_periods=1).sum() Out[4]: B 2013-01-01 09:00:00 0.0 2013-01-01 09:00:02 1.0 2013-01-01 09:00:03 3.0 2013-01-01 09:00:05 5.0 2013-01-01 09:00:06 7.0 In [5]: df.rolling('2s', min_periods=1).sum() Out[5]: B 2013-01-01 09:00:00 0.0 2013-01-01 09:00:02 1.0 2013-01-01 09:00:03 3.0 2013-01-01 09:00:05 3.0 2013-01-01 09:00:06 7.0 ```", "best_answers_score":0.7486, "library_name":"pandas", "question_url":"https:\/\/stackoverflow.com\/questions\/15771472\/pandas-rolling-mean-by-time-interval", "best_answers_votes":130, "question_length":1017, "response_length":795 }, { "question":"Find out the percentage of missing values in each column in the given dataset ``` import 
pandas as pd df = pd.read_csv('https:\/\/query.data.world\/s\/Hfu_PsEuD1Z_yJHmGaxWTxvkz7W_b0') percent= 100*(len(df.loc[:,df.isnull().sum(axis=0)>=1 ].index) \/ len(df.index)) print(round(percent,2)) ``` input is https:\/\/query.data.world\/s\/Hfu_PsEuD1Z_yJHmGaxWTxvkz7W_b0 and the output should be ``` Ord_id 0.00 Prod_id 0.00 Ship_id 0.00 Cust_id 0.00 Sales 0.24 Discount 0.65 Order_Quantity 0.65 Profit 0.65 Shipping_Cost 0.65 Product_Base_Margin 1.30 dtype: float64 ```", "response":"How about this? I think I actually found something similar on here once before, but I'm not seeing it now... ``` percent_missing = df.isnull().sum() * 100 \/ len(df) missing_value_df = pd.DataFrame({'column_name': df.columns, 'percent_missing': percent_missing}) ``` And if you want the missing percentages sorted, follow the above with: ``` missing_value_df.sort_values('percent_missing', inplace=True) ``` As mentioned in the comments, you may also be able to get by with just the first line in my code above, i.e.: ``` percent_missing = df.isnull().sum() * 100 \/ len(df) ```", "best_answers_score":0.748, "library_name":"pandas", "question_url":"https:\/\/stackoverflow.com\/questions\/51070985\/find-out-the-percentage-of-missing-values-in-each-column-in-the-given-dataset", "best_answers_votes":131, "question_length":554, "response_length":576 }, { "question":"Pandas How to filter a Series I have a Series like this after doing groupby('name') and used mean() function on other column ``` name 383 3.000000 663 1.000000 726 1.000000 737 9.000000 833 8.166667 ``` Could anyone please show me how to filter out the rows with 1.000000 mean values? 
Thank you and I greatly appreciate your help.", "response":"``` In [5]: import pandas as pd test = { 383: 3.000000, 663: 1.000000, 726: 1.000000, 737: 9.000000, 833: 8.166667 } s = pd.Series(test) s = s[s != 1] s Out[0]: 383 3.000000 737 9.000000 833 8.166667 dtype: float64 ```", "best_answers_score":0.7478, "library_name":"pandas", "question_url":"https:\/\/stackoverflow.com\/questions\/28272137\/pandas-how-to-filter-a-series", "best_answers_votes":205, "question_length":330, "response_length":218 }, { "question":"How to calculate 1st and 3rd quartiles? I have DataFrame: ``` time_diff avg_trips 0 0.450000 1.0 1 0.483333 1.0 2 0.500000 1.0 3 0.516667 1.0 4 0.533333 2.0 ``` I want to get 1st quartile, 3rd quartile and median for the column time_diff. To obtain median, I use np.median(df[\"time_diff\"].values). How can I calculate quartiles?", "response":"You can use np.percentile to calculate quartiles (including the median): ``` >>> np.percentile(df.time_diff, 25) # Q1 0.48333300000000001 >>> np.percentile(df.time_diff, 50) # median 0.5 >>> np.percentile(df.time_diff, 75) # Q3 0.51666699999999999 ``` Or all at once: ``` >>> np.percentile(df.time_diff, [25, 50, 75]) array([ 0.483333, 0.5 , 0.516667]) ```", "best_answers_score":0.7476, "library_name":"pandas", "question_url":"https:\/\/stackoverflow.com\/questions\/45926230\/how-to-calculate-1st-and-3rd-quartiles", "best_answers_votes":94, "question_length":328, "response_length":356 }, { "question":"How can I compute a histogram (frequency table) for a single Series? How can I generate a frequency table (or histogram) for a single Series? For example, if I have my_series = pandas.Series([1,2,2,3,3,3]), how can I get a result like {1: 1, 2: 2, 3: 3} - that is, a count of how many times each value appears in the Series?", "response":"Maybe .value_counts()? 
``` >>> import pandas >>> my_series = pandas.Series([1,2,2,3,3,3, \"fred\", 1.8, 1.8]) >>> my_series 0 1 1 2 2 2 3 3 4 3 5 3 6 fred 7 1.8 8 1.8 >>> counts = my_series.value_counts() >>> counts 3 3 2 2 1.8 2 fred 1 1 1 >>> len(counts) 5 >>> sum(counts) 9 >>> counts[\"fred\"] 1 >>> dict(counts) {1.8: 2, 2: 2, 3: 3, 1: 1, 'fred': 1} ```", "best_answers_score":0.7465, "library_name":"pandas", "question_url":"https:\/\/stackoverflow.com\/questions\/12207326\/how-can-i-compute-a-histogram-frequency-table-for-a-single-series", "best_answers_votes":173, "question_length":324, "response_length":354 }, { "question":"how to multiply multiple columns by a column in Pandas I would like to have: ``` df[['income_1', 'income_2']] * df['mtaz_proportion'] ``` return those columns multiplied by df['mtaz_proportion'] so that I can set ``` df[['mtaz_income_1', 'mtaz_income_2']] = df[['income_1', 'income_2']] * df['mtaz_proportion'] ``` but instead I get: ``` income_1 income_2 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 0 NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN ... 1 NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN ... 2 NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN ... ``` ect... what simple thing am I missing? Thank you!", "response":"use multiply method and set axis=\"index\": ``` df[[\"A\", \"B\"]].multiply(df[\"C\"], axis=\"index\") ```", "best_answers_score":0.746, "library_name":"pandas", "question_url":"https:\/\/stackoverflow.com\/questions\/22702760\/how-to-multiply-multiple-columns-by-a-column-in-pandas", "best_answers_votes":136, "question_length":711, "response_length":96 }, { "question":"Pandas recalculate index after a concatenation I have a problem where I produce a pandas dataframe by concatenating along the row axis (stacking vertically). Each of the constituent dataframes has an autogenerated index (ascending numbers). 
After concatenation, my index is screwed up: it counts up to n (where n is the shape[0] of the corresponding dataframe), and restarts at zero at the next dataframe. I am trying to \"re-calculate the index, given the current order\", or \"re-index\" (or so I thought). Turns out that isn't exactly what DataFrame.reindex seems to be doing. Here is what I tried to do: ``` train_df = pd.concat(train_class_df_list) train_df = train_df.reindex(index=[i for i in range(train_df.shape[0])]) ``` It failed with \"cannot reindex from a duplicate axis.\" I don't want to change the order of my data... just need to delete the old index and set up a new one, with the order of rows preserved.", "response":"If your index is autogenerated and you don't want to keep it, you can use the ignore_index option. ``` train_df = pd.concat(train_class_df_list, ignore_index=True) ``` This will autogenerate a new index for you, and my guess is that this is exactly what you are after.", "best_answers_score":0.7446, "library_name":"pandas", "question_url":"https:\/\/stackoverflow.com\/questions\/35528119\/pandas-recalculate-index-after-a-concatenation", "best_answers_votes":151, "question_length":918, "response_length":270 }, { "question":"How to add\/subtract time (hours, minutes, etc.) from a Pandas DataFrame.Index whose objects are of type datetime.time? I have a DataFrame whose index values are of type datetime.time. There is no method in DataFrame.Index to shift the time. datetime.time has replace but that will only work on individual items. Here's an example: ``` In[526]: dfa.index[:5] Out[526]: Index([21:12:19, 21:12:20, 21:12:21, 21:12:21, 21:12:22], dtype='object') In[527]: type(dfa.index[0]) Out[527]: datetime.time ```", "response":"
https:\/\/pandas.pydata.org\/pandas-docs\/stable\/timedeltas.html ``` pd.date_range('2014-01-01', periods=10) + pd.Timedelta(days=1) ```", "best_answers_score":0.7439, "library_name":"pandas", "question_url":"https:\/\/stackoverflow.com\/questions\/28954093\/how-to-add-subtract-time-hours-minutes-etc-from-a-pandas-dataframe-index-wh", "best_answers_votes":154, "question_length":496, "response_length":260 }, { "question":"How can I strip the whitespace from Pandas DataFrame headers? I am parsing data from an Excel file that has extra white space in some of the column headings. When I check the columns of the resulting dataframe, with df.columns, I see: ``` Index(['Year', 'Month ', 'Value']) ^ # Note the unwanted trailing space on 'Month ' ``` Consequently, I can't do: df[\"Month\"] Because it will tell me the column is not found, as I asked for \"Month\", not \"Month \". My question, then, is how can I strip out the unwanted white space from the column headings?", "response":"You can give functions to the rename method. The str.strip() method should do what you want: ``` In [5]: df Out[5]: Year Month Value 0 1 2 3 [1 rows x 3 columns] In [6]: df.rename(columns=lambda x: x.strip()) Out[6]: Year Month Value 0 1 2 3 [1 rows x 3 columns] ``` Note: that this returns a DataFrame object and it's shown as output on screen, but the changes are not actually set on your columns. To make the changes, either use this in a method chain or re-assign the df variabe: ``` df = df.rename(columns=lambda x: x.strip()) ```", "best_answers_score":0.7435, "library_name":"pandas", "question_url":"https:\/\/stackoverflow.com\/questions\/21606987\/how-can-i-strip-the-whitespace-from-pandas-dataframe-headers", "best_answers_votes":206, "question_length":544, "response_length":535 }, { "question":"changing sort in value_counts If I do ``` mt = mobile.PattLen.value_counts() # sort True by default ``` I get ``` 4 2831 3 2555 5 1561 [...] 
``` If I do ``` mt = mobile.PattLen.value_counts(sort=False) ``` I get ``` 8 225 9 120 2 1234 [...] ``` What I am trying to do is get the output in 2, 3, 4 ascending order (the left numeric column). Can I change value_counts somehow or do I need to use a different function.", "response":"I think you need sort_index, because the left column is called index. The full command would be mt = mobile.PattLen.value_counts().sort_index(). For example: ``` mobile = pd.DataFrame({'PattLen':[1,1,2,6,6,7,7,7,7,8]}) print (mobile) PattLen 0 1 1 1 2 2 3 6 4 6 5 7 6 7 7 7 8 7 9 8 print (mobile.PattLen.value_counts()) 7 4 6 2 1 2 8 1 2 1 Name: PattLen, dtype: int64 mt = mobile.PattLen.value_counts().sort_index() print (mt) 1 2 2 1 6 2 7 4 8 1 Name: PattLen, dtype: int64 ```", "best_answers_score":0.7434, "library_name":"pandas", "question_url":"https:\/\/stackoverflow.com\/questions\/43855474\/changing-sort-in-value-counts", "best_answers_votes":241, "question_length":415, "response_length":478 }, { "question":"Anti-Join Pandas I have two tables and I would like to append them so that only all the data in table A is retained and data from table B is only added if its key is unique (Key values are unique in table A and B however in some cases a Key will occur in both table A and B). I think the way to do this will involve some sort of filtering join (anti-join) to get values in table B that do not occur in table A then append the two tables. I am familiar with R and this is the code I would use to do this in R. 
``` library(\"dplyr\") ## Filtering join to remove values already in \"TableA\" from \"TableB\" FilteredTableB <- anti_join(TableB,TableA, by = \"Key\") ## Append \"FilteredTableB\" to \"TableA\" CombinedTable <- bind_rows(TableA,FilteredTableB) ``` How would I achieve this in python?", "response":"Passing indicator=True to the merge command will tell you which join was applied by creating a new column _merge with three possible values: left_only, right_only, both. Keep right_only and left_only. That is it. ``` outer_join = TableA.merge(TableB, how = 'outer', indicator = True) anti_join = outer_join[~(outer_join._merge == 'both')].drop('_merge', axis = 1) ``` easy! Here is a comparison with a solution from piRSquared: 1) When run on this example matching based on one column, piRSquared's solution is faster. 2) But it only works for matching on one column. If you want to match on several columns - my solution works just as well as with one column. So it's up to you to decide.", "best_answers_score":0.743, "library_name":"pandas", "question_url":"https:\/\/stackoverflow.com\/questions\/38516664\/anti-join-pandas", "best_answers_votes":88, "question_length":782, "response_length":675 }, { "question":"Create a Pandas Dataframe by appending one row at a time [duplicate] This question already has answers here: Creating an empty Pandas DataFrame, and then filling it (8 answers) Closed 1 year ago. How do I create an empty DataFrame, then add rows, one by one? I created an empty DataFrame: ``` df = pd.DataFrame(columns=('lib', 'qty1', 'qty2')) ``` Then I can add a new row at the end and fill a single field with: ``` df = df._set_value(index=len(df), col='qty1', value=10.0) ``` It works for only one field at a time. What is a better way to add a new row to df?
``` >>> import pandas as pd >>> from numpy.random import randint >>> df = pd.DataFrame(columns=['lib', 'qty1', 'qty2']) >>> for i in range(5): >>> df.loc[i] = ['name' + str(i)] + list(randint(10, size=2)) >>> df lib qty1 qty2 0 name0 3 3 1 name1 2 4 2 name2 2 8 3 name3 2 1 4 name4 9 6 ```", "best_answers_score":0.7423, "library_name":"pandas", "question_url":"https:\/\/stackoverflow.com\/questions\/10715965\/create-a-pandas-dataframe-by-appending-one-row-at-a-time", "best_answers_votes":924, "question_length":561, "response_length":391 }, { "question":"Difference between map, applymap and apply methods in Pandas Can you tell me when to use these vectorization methods with basic examples? I see that map is a Series method whereas the rest are DataFrame methods. I got confused about apply and applymap methods though. Why do we have two methods for applying a function to a DataFrame? Again, simple examples which illustrate the usage would be great!", "response":"apply works on a row \/ column basis of a DataFrame applymap works element-wise on a DataFrame map works element-wise on a Series Straight from Wes McKinney's Python for Data Analysis book, pg. 132 (I highly recommended this book): Another frequent operation is applying a function on 1D arrays to each column or row. DataFrame\u2019s apply method does exactly this: ``` In [116]: frame = DataFrame(np.random.randn(4, 3), columns=list('bde'), index=['Utah', 'Ohio', 'Texas', 'Oregon']) In [117]: frame Out[117]: b d e Utah -0.029638 1.081563 1.280300 Ohio 0.647747 0.831136 -1.549481 Texas 0.513416 -0.884417 0.195343 Oregon -0.485454 -0.477388 -0.309548 In [118]: f = lambda x: x.max() - x.min() In [119]: frame.apply(f) Out[119]: b 1.133201 d 1.965980 e 2.829781 dtype: float64 ``` Many of the most common array statistics (like sum and mean) are DataFrame methods, so using apply is not necessary. Element-wise Python functions can be used, too. 
Suppose you wanted to compute a formatted string from each floating point value in frame. You can do this with applymap: ``` In [120]: format = lambda x: '%.2f' % x In [121]: frame.applymap(format) Out[121]: b d e Utah -0.03 1.08 1.28 Ohio 0.65 0.83 -1.55 Texas 0.51 -0.88 0.20 Oregon -0.49 -0.48 -0.31 ``` The reason for the name applymap is that Series has a map method for applying an element-wise function: ``` In [122]: frame['e'].map(format) Out[122]: Utah 1.28 Ohio -1.55 Texas 0.20 Oregon -0.31 Name: e, dtype: object ```", "best_answers_score":0.7411, "library_name":"pandas", "question_url":"https:\/\/stackoverflow.com\/questions\/19798153\/difference-between-map-applymap-and-apply-methods-in-pandas", "best_answers_votes":753, "question_length":400, "response_length":1472 }, { "question":"How to add pandas data to an existing csv file? I want to know if it is possible to use the pandas to_csv() function to add a dataframe to an existing csv file. The csv file has the same structure as the loaded data.", "response":"You can specify a python write mode in the pandas to_csv function. For append it is 'a'. In your case: ``` df.to_csv('my_csv.csv', mode='a', header=False) ``` The default mode is 'w'. If the file initially might be missing, you can make sure the header is printed at the first write using this variation: ``` output_path='my_csv.csv' df.to_csv(output_path, mode='a', header=not os.path.exists(output_path)) ```", "best_answers_score":0.741, "library_name":"pandas", "question_url":"https:\/\/stackoverflow.com\/questions\/17530542\/how-to-add-pandas-data-to-an-existing-csv-file", "best_answers_votes":999, "question_length":216, "response_length":410 }, { "question":"How to select second level in multiindex when using columns? I have a dataframe with this index: ``` index = pd.MultiIndex.from_product([['stock1','stock2'...],['price','volume'...]]) ``` It's a useful structure for being able to do df['stock1'], but how do I select all the price data? 
I can't make any sense of the documentation. I've tried the following with no luck: df[:,'price'] df[:]['price'] df.loc(axis=1)[:,'close'] df['price] If this index style is generally agreed to be a bad idea for whatever reason, then what would be a better choice? Should I go for a multi-indexed index for the stocks as labels on the time series instead of at the column level? EDIT - I am using the multiindex for the columns, not the index (the wording got the better of me). The examples in the documentation focus on multi-level indexes rather than column structures.", "response":"Also using John's data sample: Using xs() is another way to slice a MultiIndex: ``` df 0 stock1 price 1 volume 2 stock2 price 3 volume 4 stock3 price 5 volume 6 df.xs('price', level=1, drop_level=False) 0 stock1 price 1 stock2 price 3 stock3 price 5 ``` Alternatively if you have a MultiIndex in place of columns: ``` df stock1 stock2 stock3 price volume price volume price volume 0 1 2 3 4 5 6 df.xs('price', axis=1, level=1, drop_level=False) stock1 stock2 stock3 price price price 0 1 3 5 ```", "best_answers_score":0.7407, "library_name":"pandas", "question_url":"https:\/\/stackoverflow.com\/questions\/45128523\/how-to-select-second-level-in-multiindex-when-using-columns", "best_answers_votes":146, "question_length":858, "response_length":495 }, { "question":"How do I create test and train samples from one dataframe with pandas? I have a fairly large dataset in the form of a dataframe and I was wondering how I would be able to split the dataframe into two random samples (80% and 20%) for training and testing. Thanks!", "response":"Scikit Learn's train_test_split is a good one. It will split both numpy arrays and dataframes. 
``` from sklearn.model_selection import train_test_split train, test = train_test_split(df, test_size=0.2) ```", "best_answers_score":0.74, "library_name":"pandas", "question_url":"https:\/\/stackoverflow.com\/questions\/24147278\/how-do-i-create-test-and-train-samples-from-one-dataframe-with-pandas", "best_answers_votes":989, "question_length":262, "response_length":205 }, { "question":"Convert timedelta64[ns] column to seconds in Python Pandas DataFrame A pandas DataFrame column duration contains timedelta64[ns] as shown. How can you convert them to seconds? ``` 0 00:20:32 1 00:23:10 2 00:24:55 3 00:13:17 4 00:18:52 Name: duration, dtype: timedelta64[ns] ``` I tried the following ``` print df[:5]['duration'] \/ np.timedelta64(1, 's') ``` but got the error ``` Traceback (most recent call last): File \"test.py\", line 16, in print df[0:5]['duration'] \/ np.timedelta64(1, 's') File \"C:\\Python27\\lib\\site-packages\\pandas\\core\\series.py\", line 130, in wrapper \"addition and subtraction, but the operator [%s] was passed\" % name) TypeError: can only operate on a timedeltas for addition and subtraction, but the operator [__div__] was passed ``` Also tried ``` print df[:5]['duration'].astype('timedelta64[s]') ``` but received the error ``` Traceback (most recent call last): File \"test.py\", line 17, in print df[:5]['duration'].astype('timedelta64[s]') File \"C:\\Python27\\lib\\site-packages\\pandas\\core\\series.py\", line 934, in astype values = com._astype_nansafe(self.values, dtype) File \"C:\\Python27\\lib\\site-packages\\pandas\\core\\common.py\", line 1653, in _astype_nansafe raise TypeError(\"cannot astype a timedelta from [%s] to [%s]\" % (arr.dtype,dtype)) TypeError: cannot astype a timedelta from [timedelta64[ns]] to [timedelta64[s]] ```", "response":"Use the Series dt accessor to get access to the methods and attributes of a datetime (timedelta) series. 
``` >>> s 0 -1 days +23:45:14.304000 1 -1 days +23:46:57.132000 2 -1 days +23:49:25.913000 3 -1 days +23:59:48.913000 4 00:00:00.820000 dtype: timedelta64[ns] >>> >>> s.dt.total_seconds() 0 -885.696 1 -782.868 2 -634.087 3 -11.087 4 0.820 dtype: float64 ``` There are other Pandas Series Accessors for String, Categorical, and Sparse data types.", "best_answers_score":0.74, "library_name":"pandas", "question_url":"https:\/\/stackoverflow.com\/questions\/26456825\/convert-timedelta64ns-column-to-seconds-in-python-pandas-dataframe", "best_answers_votes":97, "question_length":1356, "response_length":450 }, { "question":"How to split text in a column into multiple rows I'm working with a large csv file and the next to last column has a string of text that I want to split by a specific delimiter. I was wondering if there is a simple way to do this using pandas or python? ``` CustNum CustomerName ItemQty Item Seatblocks ItemExt 32363 McCartney, Paul 3 F04 2:218:10:4,6 60 31316 Lennon, John 25 F01 1:13:36:1,12 1:13:37:1,13 300 ``` I want to split by the space (' ') and then the colon (':') in the Seatblocks column, but each cell would result in a different number of columns. I have a function to rearrange the columns so the Seatblocks column is at the end of the sheet, but I'm not sure what to do from there. I can do it in excel with the built in text-to-columns function and a quick macro, but my dataset has too many records for excel to handle. Ultimately, I want to take records such John Lennon's and create multiple lines, with the info from each set of seats on a separate line.", "response":"This splits the Seatblocks by space and gives each its own row. 
``` In [43]: df Out[43]: CustNum CustomerName ItemQty Item Seatblocks ItemExt 0 32363 McCartney, Paul 3 F04 2:218:10:4,6 60 1 31316 Lennon, John 25 F01 1:13:36:1,12 1:13:37:1,13 300 In [44]: s = df['Seatblocks'].str.split(' ').apply(Series, 1).stack() In [45]: s.index = s.index.droplevel(-1) # to line up with df's index In [46]: s.name = 'Seatblocks' # needs a name to join In [47]: s Out[47]: 0 2:218:10:4,6 1 1:13:36:1,12 1 1:13:37:1,13 Name: Seatblocks, dtype: object In [48]: del df['Seatblocks'] In [49]: df.join(s) Out[49]: CustNum CustomerName ItemQty Item ItemExt Seatblocks 0 32363 McCartney, Paul 3 F04 60 2:218:10:4,6 1 31316 Lennon, John 25 F01 300 1:13:36:1,12 1 31316 Lennon, John 25 F01 300 1:13:37:1,13 ``` Or, to give each colon-separated string in its own column: ``` In [50]: df.join(s.apply(lambda x: Series(x.split(':')))) Out[50]: CustNum CustomerName ItemQty Item ItemExt 0 1 2 3 0 32363 McCartney, Paul 3 F04 60 2 218 10 4,6 1 31316 Lennon, John 25 F01 300 1 13 36 1,12 1 31316 Lennon, John 25 F01 300 1 13 37 1,13 ``` This is a little ugly, but maybe someone will chime in with a prettier solution.", "best_answers_score":0.7395, "library_name":"pandas", "question_url":"https:\/\/stackoverflow.com\/questions\/17116814\/how-to-split-text-in-a-column-into-multiple-rows", "best_answers_votes":243, "question_length":975, "response_length":1189 }, { "question":"Difference between data type 'datetime64[ns]' and ' pandas.core.frame.DataFrame: return pd.read_csv(path, skiprows=1, sep='\\t', comment='#') ```", "response":"Why not just use pd.DataFrame? 
``` import pandas as pd def csv_to_df(path: str) -> pd.DataFrame: return pd.read_csv(path, skiprows=1, sep='\\t', comment='#') ``` Result is the same: ``` > help(csv_to_df) Help on function csv_to_df in module __main__: csv_to_df(path:str) -> pandas.core.frame.DataFrame ```", "best_answers_score":0.7342, "library_name":"pandas", "question_url":"https:\/\/stackoverflow.com\/questions\/43890844\/pythonic-type-hints-with-pandas", "best_answers_votes":256, "question_length":575, "response_length":304 }, { "question":"How to get rid of \"Unnamed: 0\" column in a pandas DataFrame read in from CSV file? I have a situation wherein sometimes when I read a csv into a df I get an unwanted index-like column named unnamed:0. file.csv ``` ,A,B,C 0,1,2,3 1,4,5,6 2,7,8,9 ``` The CSV is read with this: ``` pd.read_csv('file.csv') Unnamed: 0 A B C 0 0 1 2 3 1 1 4 5 6 2 2 7 8 9 ``` This is very annoying! Does anyone have an idea on how to get rid of this?", "response":"It's the index column; pass df.to_csv(..., index=False) to not write out an unnamed index column in the first place, see the to_csv() docs.
Example: ``` In [37]: df = pd.DataFrame(np.random.randn(5,3), columns=list('abc')) pd.read_csv(io.StringIO(df.to_csv())) Out[37]: Unnamed: 0 a b c 0 0 0.109066 -1.112704 -0.545209 1 1 0.447114 1.525341 0.317252 2 2 0.507495 0.137863 0.886283 3 3 1.452867 1.888363 1.168101 4 4 0.901371 -0.704805 0.088335 ``` compare with: ``` In [38]: pd.read_csv(io.StringIO(df.to_csv(index=False))) Out[38]: a b c 0 0.109066 -1.112704 -0.545209 1 0.447114 1.525341 0.317252 2 0.507495 0.137863 0.886283 3 1.452867 1.888363 1.168101 4 0.901371 -0.704805 0.088335 ``` You could also optionally tell read_csv that the first column is the index column by passing index_col=0: ``` In [40]: pd.read_csv(io.StringIO(df.to_csv()), index_col=0) Out[40]: a b c 0 0.109066 -1.112704 -0.545209 1 0.447114 1.525341 0.317252 2 0.507495 0.137863 0.886283 3 1.452867 1.888363 1.168101 4 0.901371 -0.704805 0.088335 ```", "best_answers_score":0.7336, "library_name":"pandas", "question_url":"https:\/\/stackoverflow.com\/questions\/36519086\/how-to-get-rid-of-unnamed-0-column-in-a-pandas-dataframe-read-in-from-csv-fil", "best_answers_votes":405, "question_length":427, "response_length":1028 }, { "question":"IPython Notebook cell multiple outputs I am running this cell in IPython Notebook: ``` # salaries and teams are Pandas dataframe salaries.head() teams.head() ``` The result is that I am only getting the output of teams data-frame rather than of both salaries and teams. If I just run salaries.head() I get the result for salaries data-frame but on running both the statement I just see the output of teams.head(). 
How can I correct this?", "response":"An easier way: ``` from IPython.core.interactiveshell import InteractiveShell InteractiveShell.ast_node_interactivity = \"all\" ``` It saves you having to repeatedly type display(). Say the cell contains this: ``` from IPython.core.interactiveshell import InteractiveShell InteractiveShell.ast_node_interactivity = \"all\" a = 1 b = 2 a b ``` Then the output will be: ``` Out[1]: 1 Out[1]: 2 ``` If we use IPython.display.display: ``` from IPython.display import display a = 1 b = 2 display(a) display(b) ``` The output is: ``` 1 2 ``` So the same thing, but without the Out[n] part.", "best_answers_score":0.7333, "library_name":"pandas", "question_url":"https:\/\/stackoverflow.com\/questions\/34398054\/ipython-notebook-cell-multiple-outputs", "best_answers_votes":160, "question_length":437, "response_length":578 }, { "question":"How to print a specific row of a pandas DataFrame? I have a massive DataFrame, and I'm getting the error: ```none TypeError: (\"Empty 'DataFrame': no numeric data to plot\", 'occurred at index 159220') ``` I've already dropped nulls, and checked dtypes for the DataFrame so I have no guess as to why it's failing on that row. How do I print out just that row (at index 159220) of the DataFrame?", "response":"To print a specific row, we have a couple of pandas methods: loc - It only gets the label i.e. column name or features iloc - Here i stands for integer, representing the row number ix - It is a mix of label as well as integer (not available in pandas >=1.0) Below are examples of how to use the first two options for a specific row: loc ```python df.loc[row,column] ``` For the first row and all columns: ```python df.loc[0,:] ``` For the first row and some specific column: ```python df.loc[0,'column_name'] ``` iloc For the first row and all columns: ```python df.iloc[0,:] ``` For the first row and some specific columns i.e.
first three cols: ```python df.iloc[0,0:3] ```", "best_answers_score":0.7325, "library_name":"pandas", "question_url":"https:\/\/stackoverflow.com\/questions\/43772362\/how-to-print-a-specific-row-of-a-pandas-dataframe", "best_answers_votes":134, "question_length":392, "response_length":673 }, { "question":"What values are valid in Pandas 'Freq' tags? I am trying to use date_range. I came across some values valid for freq, like BME and BMS and I would like to be able to quickly look up the proper strings to get what I want. What values are valid in Pandas 'Freq' tags?", "response":"You can find it called Offset Aliases: A number of string aliases are given to useful common time series frequencies. We will refer to these aliases as offset aliases. ``` Alias Description B business day frequency C custom business day frequency D calendar day frequency W weekly frequency ME month end frequency SME semi-month end frequency (15th and end of month) BME business month end frequency CBME custom business month end frequency MS month start frequency SMS semi-month start frequency (1st and 15th) BMS business month start frequency CBMS custom business month start frequency QE quarter end frequency BQE business quarter end frequency QS quarter start frequency BQS business quarter start frequency YE year end frequency BYE business year end frequency YS year start frequency BYS business year start frequency h hourly frequency bh business hour frequency cbh custom business hour frequency min minutely frequency s secondly frequency ms milliseconds us microseconds ns nanoseconds ```", "best_answers_score":0.7323, "library_name":"pandas", "question_url":"https:\/\/stackoverflow.com\/questions\/35339139\/what-values-are-valid-in-pandas-freq-tags", "best_answers_votes":289, "question_length":265, "response_length":1001 }, { "question":"pandas groupby without turning grouped by column into index The default behavior of pandas groupby is to turn the group by columns into index and remove 
them from the list of columns of the dataframe. For instance, say I have a dataFrame with these columns ``` col1|col2|col3|col4 ``` if I apply a groupby say with columns col2 and col3 this way ``` df.groupby(['col2','col3']).sum() ``` The dataframe df no longer has the ['col2','col3'] in the list of columns. They are automatically turned into the indices of the resulting dataframe. My question is how can I perform groupby on a column and yet keep that column in the dataframe?", "response":"``` df.groupby(['col2','col3'], as_index=False).sum() ```", "best_answers_score":0.7312, "library_name":"pandas", "question_url":"https:\/\/stackoverflow.com\/questions\/32059397\/pandas-groupby-without-turning-grouped-by-column-into-index", "best_answers_votes":188, "question_length":633, "response_length":57 }, { "question":"Is there a way in Pandas to use previous row value in dataframe.apply when previous value is also calculated in the apply? I have the following dataframe: ``` Index_Date A B C D ================================ 2015-01-31 10 10 Nan 10 2015-02-01 2 3 Nan 22 2015-02-02 10 60 Nan 280 2015-02-03 10 100 Nan 250 ``` Require: ``` Index_Date A B C D ================================ 2015-01-31 10 10 10 10 2015-02-01 2 3 23 22 2015-02-02 10 60 290 280 2015-02-03 10 100 3000 250 ``` Column C is derived for 2015-01-31 by taking value of D. Then I need to use the value of C for 2015-01-31 and multiply by the value of A on 2015-02-01 and add B. 
I have attempted an apply and a shift using an if else but this gives a key error.", "response":"Given a column of numbers: ``` lst = [] cols = ['A'] for a in range(100, 105): lst.append([a]) df = pd.DataFrame(lst, columns=cols, index=range(5)) df A 0 100 1 101 2 102 3 103 4 104 ``` You can reference the previous row with shift: ``` df['Change'] = df.A - df.A.shift(1) df A Change 0 100 NaN 1 101 1.0 2 102 1.0 3 103 1.0 4 104 1.0 ``` You can fill the missing value with the fill_value parameter: ``` df['Change'] = df.A - df.A.shift(1, fill_value=df.A[0]) # fills in the missing value e.g. 100 df A Change 0 100 0.0 1 101 1.0 2 102 1.0 3 103 1.0 4 104 1.0 ```", "best_answers_score":0.731, "library_name":"pandas", "question_url":"https:\/\/stackoverflow.com\/questions\/34855859\/is-there-a-way-in-pandas-to-use-previous-row-value-in-dataframe-apply-when-previ", "best_answers_votes":100, "question_length":720, "response_length":560 }, { "question":"Count unique values per groups with Pandas [duplicate] This question already has answers here: Pandas 'count(distinct)' equivalent (12 answers) Closed 7 years ago. I need to count unique ID values in every domain.
I have data: ``` ID, domain 123, vk.com 123, vk.com 123, twitter.com 456, vk.com' 456, facebook.com 456, vk.com 456, google.com 789, twitter.com 789, vk.com ``` I try df.groupby(['domain', 'ID']).count() But I want to get ``` domain count vk.com 3 twitter.com 2 facebook.com 1 google.com 1 ```", "response":"You need nunique: ``` df = df.groupby('domain')['ID'].nunique() print (df) domain 'facebook.com' 1 'google.com' 1 'twitter.com' 2 'vk.com' 3 Name: ID, dtype: int64 ``` If you need to strip ' characters: ``` df = df.ID.groupby([df.domain.str.strip(\"'\")]).nunique() print (df) domain facebook.com 1 google.com 1 twitter.com 2 vk.com 3 Name: ID, dtype: int64 ``` Or as Jon Clements commented: ``` df.groupby(df.domain.str.strip(\"'\"))['ID'].nunique() ``` You can retain the column name like this: ``` df = df.groupby(by='domain', as_index=False).agg({'ID': pd.Series.nunique}) print(df) domain ID 0 fb 1 1 ggl 1 2 twitter 2 3 vk 3 ``` The difference is that nunique() returns a Series and agg() returns a DataFrame.", "best_answers_score":0.7305, "library_name":"pandas", "question_url":"https:\/\/stackoverflow.com\/questions\/38309729\/count-unique-values-per-groups-with-pandas", "best_answers_votes":433, "question_length":507, "response_length":711 }, { "question":"ValueError: Unknown label type: 'unknown' I try to run the following code. ``` import pandas as pd import numpy as np from sklearn.linear_model import LogisticRegression # data import and preparation trainData = pd.read_csv('train.csv') train = trainData.values testData = pd.read_csv('test.csv') test = testData.values X = np.c_[train[:, 0], train[:, 2], train[:, 6:7], train[:, 9]] X = np.nan_to_num(X) y = train[:, 1] Xtest = np.c_[test[:, 0:1], test[:, 5:6], test[:, 8]] Xtest = np.nan_to_num(Xtest) # model lr = LogisticRegression() lr.fit(X, y) ``` where y is a np.ndarray of 0s and 1s.
However, I receive the following error: ```none File \"C:\\Anaconda3\\lib\\site-packages\\sklearn\\linear_model\\logistic.py\", line >1174, in fit check_classification_targets(y) File \"C:\\Anaconda3\\lib\\site-packages\\sklearn\\utils\\multiclass.py\", line 172, >in check_classification_targets raise ValueError(\"Unknown label type: %r\" % y_type) ValueError: Unknown label type: 'unknown' ``` From sklearn documentation, I see that ```none y : array-like, shape (n_samples,) Target values (class labels in classification, real numbers in regression) ``` What is my error? FYI, y is np.array([0.0, 1.0, 1.0, ..., 0.0, 1.0, 0.0], dtype=object) whose size is (891,).", "response":"Your y is of type object, so sklearn cannot recognize its type. Add the line y=y.astype('int') right after the line y = train[:, 1].", "best_answers_score":0.7304, "library_name":"pandas", "question_url":"https:\/\/stackoverflow.com\/questions\/45346550\/valueerror-unknown-label-type-unknown", "best_answers_votes":197, "question_length":1237, "response_length":132 }, { "question":"How can I use the apply() function for a single column? I have a pandas dataframe with multiple columns. I want to change the values of the only the first column without affecting the other columns. How can I do that using apply() in pandas?", "response":"Given a sample dataframe df as: ``` a b 0 1 2 1 2 3 2 3 4 3 4 5 ``` what you want is: ``` df['a'] = df['a'].apply(lambda x: x + 1) ``` that returns: ``` a b 0 2 2 1 3 3 2 4 4 3 5 5 ```", "best_answers_score":0.7296, "library_name":"pandas", "question_url":"https:\/\/stackoverflow.com\/questions\/34962104\/how-can-i-use-the-apply-function-for-a-single-column", "best_answers_votes":706, "question_length":241, "response_length":184 }, { "question":"Using Pandas to pd.read_excel() for multiple (but not all) worksheets of the same workbook without reloading the whole file I have a large spreadsheet file (.xlsx) that I'm processing using python pandas. 
It happens that I need data from two tabs (sheets) in that large file. One of the tabs has a ton of data and the other is just a few square cells. When I use pd.read_excel() on any worksheet, it looks to me like the whole file is loaded (not just the worksheet I'm interested in). So when I use the method twice (once for each sheet), I effectively have to suffer the whole workbook being read in twice (even though we're only using the specified sheet). How do I only load specific sheet(s) with pd.read_excel()?", "response":"Try pd.ExcelFile: ``` xls = pd.ExcelFile('path_to_file.xls') df1 = pd.read_excel(xls, 'Sheet1') df2 = pd.read_excel(xls, 'Sheet2') ``` As noted by @HaPsantran, the entire Excel file is read in during the ExcelFile() call (there doesn't appear to be a way around this). This merely saves you from having to read the same file in each time you want to access a new sheet. Note that the sheet_name argument to pd.read_excel() can be the name of the sheet (as above), an integer specifying the sheet number (eg 0, 1, etc), a list of sheet names or indices, or None. If a list is provided, it returns a dictionary where the keys are the sheet names\/indices and the values are the data frames. The default is to simply return the first sheet (ie, sheet_name=0). If None is specified, all sheets are returned, as a {sheet_name:dataframe} dictionary.", "best_answers_score":0.7292, "library_name":"pandas", "question_url":"https:\/\/stackoverflow.com\/questions\/26521266\/using-pandas-to-pd-read-excel-for-multiple-but-not-all-worksheets-of-the-sam", "best_answers_votes":640, "question_length":718, "response_length":842 }, { "question":"Writing a pandas DataFrame to CSV file I have a dataframe in pandas which I would like to write to a CSV file. 
I am doing this using: ```py df.to_csv('out.csv') ``` And getting the following error: ```none UnicodeEncodeError: 'ascii' codec can't encode character u'\\u03b1' in position 20: ordinal not in range(128) ``` Is there any way to get around this easily (i.e. I have unicode characters in my data frame)? And is there a way to write to a tab delimited file instead of a CSV using e.g. a 'to-tab' method (that I don't think exists)?", "response":"To delimit by a tab you can use the sep argument of to_csv: ``` df.to_csv(file_name, sep='\\t') ``` To use a specific encoding (e.g. 'utf-8') use the encoding argument: ``` df.to_csv(file_name, sep='\\t', encoding='utf-8') ``` In many cases you will want to remove the index and add a header: ``` df.to_csv(file_name, sep='\\t', encoding='utf-8', index=False, header=True) ```", "best_answers_score":0.7279, "library_name":"pandas", "question_url":"https:\/\/stackoverflow.com\/questions\/16923281\/writing-a-pandas-dataframe-to-csv-file", "best_answers_votes":1549, "question_length":539, "response_length":373 }, { "question":"Executing an SQL query on a Pandas dataset I have a Pandas dataset called df. How can I do: ``` df.query(\"select * from df\") ```", "response":"This is not what pandas.query is supposed to do. You can look at the pandasql package (similar to sqldf in R). Update: Note pandasql hasn't been maintained since 2017. Use another library from an answer below.
``` import numpy as np import pandas as pd import pandasql as ps df = pd.DataFrame([[1234, 'Customer A', '123 Street', np.nan], [1234, 'Customer A', np.nan, '333 Street'], [1233, 'Customer B', '444 Street', '333 Street'], [1233, 'Customer B', '444 Street', '666 Street']], columns=['ID', 'Customer', 'Billing Address', 'Shipping Address']) q1 = \"\"\"SELECT ID FROM df \"\"\" print(ps.sqldf(q1, locals())) ID 0 1234 1 1234 2 1233 3 1233 ``` Update 2020-07-10: with an updated pandasql you can simply run ``` ps.sqldf(\"select * from df\") ```", "best_answers_score":0.7277, "library_name":"pandas", "question_url":"https:\/\/stackoverflow.com\/questions\/45865608\/executing-an-sql-query-on-a-pandas-dataset", "best_answers_votes":151, "question_length":128, "response_length":696 }, { "question":"Retrieve DataFrame of all but one specified column [duplicate] This question already has answers here: How to select all columns except one in pandas? (14 answers) Closed 6 years ago. Is there a way to select all but one column in a pandas DataFrame object? I've seen ways to delete a column, but I don't want to do that.", "response":"Use the drop method: ``` df.drop(column_name, axis=1) ```", "best_answers_score":0.7267, "library_name":"pandas", "question_url":"https:\/\/stackoverflow.com\/questions\/20230326\/retrieve-dataframe-of-all-but-one-specified-column", "best_answers_votes":160, "question_length":321, "response_length":53 }, { "question":"How to plot two columns of a pandas data frame using points I have a pandas dataframe and would like to plot values from one column versus the values from another column. Fortunately, there is a plot method associated with the dataframes that seems to do what I need: ```py df.plot(x='col_name_1', y='col_name_2') ``` Unfortunately, it looks like among the plot styles (listed here after the kind parameter), there are no points. I can use lines or bars or even density but not points.
Is there a work around that can help to solve this problem?", "response":"You can specify the style of the plotted line when calling df.plot: ``` df.plot(x='col_name_1', y='col_name_2', style='o') ``` The style argument can also be a dict or list, e.g.: ``` import numpy as np import pandas as pd d = {'one' : np.random.rand(10), 'two' : np.random.rand(10)} df = pd.DataFrame(d) df.plot(style=['o','rx']) ``` All the accepted style formats are listed in the documentation of matplotlib.pyplot.plot.", "best_answers_score":0.7242, "library_name":"pandas", "question_url":"https:\/\/stackoverflow.com\/questions\/17812978\/how-to-plot-two-columns-of-a-pandas-data-frame-using-points", "best_answers_votes":153, "question_length":544, "response_length":424 }, { "question":"Find the unique values in a column and then sort them I have a pandas dataframe. I want to print the unique values of one of its columns in ascending order. This is how I am doing it: ``` import pandas as pd df = pd.DataFrame({'A':[1,1,3,2,6,2,8]}) a = df['A'].unique() print a.sort() ``` The problem is that I am getting a None for the output.", "response":"sorted(iterable): Return a new sorted list from the items in iterable. CODE ``` import pandas as pd df = pd.DataFrame({'A':[1,1,3,2,6,2,8]}) a = df['A'].unique() print(sorted(a)) ``` OUTPUT ``` [1, 2, 3, 6, 8] ```", "best_answers_score":0.7241, "library_name":"pandas", "question_url":"https:\/\/stackoverflow.com\/questions\/32072076\/find-the-unique-values-in-a-column-and-then-sort-them", "best_answers_votes":428, "question_length":344, "response_length":213 }, { "question":"Keep other columns when doing groupby [duplicate] This question already has answers here: Converting a Pandas GroupBy multiindex output from Series back to DataFrame (13 answers) Closed 1 year ago. I'm using groupby on a pandas dataframe to drop all rows that don't have the minimum of a specific column. 
Something like this: ``` df1 = df.groupby(\"item\", as_index=False)[\"diff\"].min() ``` However, if I have more than those two columns, the other columns (e.g. otherstuff in my example) get dropped. Can I keep those columns using groupby, or am I going to have to find a different way to drop the rows? My data looks like: ``` item diff otherstuff 0 1 2 1 1 1 1 2 2 1 3 7 3 2 -1 0 4 2 1 3 5 2 4 9 6 2 -6 2 7 3 0 0 8 3 2 9 ``` and should end up like: ``` item diff otherstuff 0 1 1 2 1 2 -6 2 2 3 0 0 ``` but what I'm getting is: ``` item diff 0 1 1 1 2 -6 2 3 0 ``` I've been looking through the documentation and can't find anything. I tried: ``` df1 = df.groupby([\"item\", \"otherstuff\"], as_index=false)[\"diff\"].min() df1 = df.groupby(\"item\", as_index=false)[\"diff\"].min()[\"otherstuff\"] df1 = df.groupby(\"item\", as_index=false)[\"otherstuff\", \"diff\"].min() ``` But none of those work.", "response":"Method #1: use idxmin() to get the indices of the elements of minimum diff, and then select those: ``` >>> df.loc[df.groupby(\"item\")[\"diff\"].idxmin()] item diff otherstuff 1 1 1 2 6 2 -6 2 7 3 0 0 [3 rows x 3 columns] ``` Method #2: sort by diff, and then take the first element in each item group: ``` >>> df.sort_values(\"diff\").groupby(\"item\", as_index=False).first() item diff otherstuff 0 1 1 2 1 2 -6 2 2 3 0 0 [3 rows x 3 columns] ``` Note that the resulting indices are different even though the row content is the same.", "best_answers_score":0.7239, "library_name":"pandas", "question_url":"https:\/\/stackoverflow.com\/questions\/23394476\/keep-other-columns-when-doing-groupby", "best_answers_votes":176, "question_length":1185, "response_length":527 }, { "question":"Use pandas.shift() within a group I have a dataframe with panel data, let's say it's time series for 100 different objects: ``` object period value 1 1 24 1 2 67 ... 1 1000 56 2 1 59 2 2 46 ... 2 1000 64 3 1 54 ... 100 1 451 100 2 153 ... 
100 1000 21 ``` I want to add a new column prev_value that will store the previous value for each object: ``` object period value prev_value 1 1 24 nan 1 2 67 24 ... 1 99 445 1243 1 1000 56 445 2 1 59 nan 2 2 46 59 ... 2 1000 64 784 3 1 54 nan ... 100 1000 21 1121 ``` Can I use .shift() and .groupby() somehow to do that?", "response":"Pandas' grouped objects have a groupby.DataFrameGroupBy.shift method, which will shift a specified column in each group n periods, just like the regular dataframe's shift method: ``` df['prev_value'] = df.groupby('object')['value'].shift() ``` For the following example dataframe: ``` print(df) object period value 0 1 1 24 1 1 2 67 2 1 4 89 3 2 4 5 4 2 23 23 ``` The result would be: ``` object period value prev_value 0 1 1 24 NaN 1 1 2 67 24.0 2 1 4 89 67.0 3 2 4 5 NaN 4 2 23 23 5.0 ```", "best_answers_score":0.7238, "library_name":"pandas", "question_url":"https:\/\/stackoverflow.com\/questions\/53335567\/use-pandas-shift-within-a-group", "best_answers_votes":143, "question_length":589, "response_length":490 }, { "question":"Merge two data frames based on common column values in Pandas How can I merge two data frames on a common column so that the merged data frame contains only the rows whose value in that column appears in both?
I have 5000 rows of df1 in this format: ``` director_name actor_1_name actor_2_name actor_3_name movie_title 0 James Cameron CCH Pounder Joel David Moore Wes Studi Avatar 1 Gore Verbinski Johnny Depp Orlando Bloom Jack Davenport Pirates of the Caribbean: At World's End 2 Sam Mendes Christoph Waltz Rory Kinnear Stephanie Sigman Spectre ``` and 10000 rows of df2 as ``` movieId genres movie_title 1 Adventure|Animation|Children|Comedy|Fantasy Toy Story 2 Adventure|Children|Fantasy Jumanji 3 Comedy|Romance Grumpier Old Men 4 Comedy|Drama|Romance Waiting to Exhale ``` The 'movie_title' column is common to both, and based on it I want to get all rows where 'movie_title' matches. Other rows should be deleted. Any help\/suggestion would be appreciated. Note: I already tried ``` pd.merge(dfinal, df1, on='movie_title') ``` and the output comes out as one row ``` director_name actor_1_name actor_2_name actor_3_name movie_title movieId title genres ``` and with how=\"outer\"\/\"left\"\/\"right\" I tried them all and didn't get any rows after dropping NaN, although many common values do exist.", "response":"You can use pd.merge: ``` import pandas as pd pd.merge(df1, df2, on=\"movie_title\") ``` Only rows are kept for which common keys are found in both data frames. In case you want to keep all rows from the left data frame and only add values from df2 where a matching key is available, you can use how=\"left\": ``` pd.merge(df1, df2, on=\"movie_title\", how=\"left\") ```", "best_answers_score":0.7235, "library_name":"pandas", "question_url":"https:\/\/stackoverflow.com\/questions\/43297589\/merge-two-data-frames-based-on-common-column-values-in-pandas", "best_answers_votes":130, "question_length":1310, "response_length":362 }, { "question":"SQLAlchemy ORM conversion to pandas DataFrame Is there a solution for converting a SQLAlchemy query to a pandas DataFrame? Pandas has the capability to use pandas.read_sql but this requires use of raw SQL.
I have two reasons for wanting to avoid it: I already have everything using the ORM (a good reason in and of itself) and I'm using python lists as part of the query, e.g. db.session.query(Item).filter(Item.symbol.in_(add_symbols)) (where Item is my model class and add_symbols is a list). This is the equivalent of SQL SELECT ... from ... WHERE ... IN. Is anything possible?", "response":"Below should work in most cases: ``` df = pd.read_sql(query.statement, query.session.bind) ``` See pandas.read_sql documentation for more information on the parameters.", "best_answers_score":0.7233, "library_name":"pandas", "question_url":"https:\/\/stackoverflow.com\/questions\/29525808\/sqlalchemy-orm-conversion-to-pandas-dataframe", "best_answers_votes":274, "question_length":570, "response_length":168 }, { "question":"Filtering Pandas Dataframe using OR statement I have a pandas dataframe and I want to filter the whole df based on the value of two columns in the data frame. I want to get back all rows and columns where IBRD or IMF != 0. ``` alldata_balance = alldata[(alldata[IBRD] !=0) or (alldata[IMF] !=0)] ``` but this gives me a ValueError ValueError: The truth value of a Series is ambiguous. Use a.empty, a.bool(), a.item(), a.any() or a.all(). So I know I am not using the or statement correctly, is there a way to do this?", "response":"From the docs: Another common operation is the use of boolean vectors to filter the data. The operators are: | for or, & for and, and ~ for not. These must be grouped by using parentheses. https:\/\/pandas.pydata.org\/docs\/user_guide\/indexing.html#boolean-indexing Try: ``` alldata_balance = alldata[(alldata[IBRD] !=0) | (alldata[IMF] !=0)] ```", "best_answers_score":0.7228, "library_name":"pandas", "question_url":"https:\/\/stackoverflow.com\/questions\/29461185\/filtering-pandas-dataframe-using-or-statement", "best_answers_votes":241, "question_length":517, "response_length":342 }, { "question":"Make Pandas DataFrame apply() use all cores?
As of August 2017, Pandas DataFrame.apply() is unfortunately still limited to working with a single core, meaning that a multi-core machine will waste the majority of its compute-time when you run df.apply(myfunc, axis=1). How can you use all your cores to run apply on a dataframe in parallel?", "response":"You may use the swifter package: ``` pip install swifter ``` (Note that you may want to use this in a virtualenv to avoid version conflicts with installed dependencies.) Swifter works as a plugin for pandas, allowing you to reuse the apply function: ``` import swifter def some_function(data): return data * 10 data['out'] = data['in'].swifter.apply(some_function) ``` It will automatically figure out the most efficient way to parallelize the function, no matter if it's vectorized (as in the above example) or not. More examples and a performance comparison are available on GitHub. Note that the package is under active development, so the API may change. Also note that this will not work automatically for string columns. When using strings, Swifter will fall back to a \u201csimple\u201d Pandas apply, which will not be parallel. In this case, even forcing it to use dask will not create performance improvements, and you would be better off just splitting your dataset manually and parallelizing using multiprocessing.", "best_answers_score":0.7224, "library_name":"pandas", "question_url":"https:\/\/stackoverflow.com\/questions\/45545110\/make-pandas-dataframe-apply-use-all-cores", "best_answers_votes":176, "question_length":338, "response_length":1014 }, { "question":"Construct pandas DataFrame from list of tuples of (row,col,values) I have a list of tuples like ``` data = [ ('r1', 'c1', avg11, stdev11), ('r1', 'c2', avg12, stdev12), ('r2', 'c1', avg21, stdev21), ('r2', 'c2', avg22, stdev22) ] ``` and I would like to put them into a pandas DataFrame with rows named by the first column and columns named by the second column.
It seems the way to take care of the row names is something like pandas.DataFrame([x[1:] for x in data], index = [x[0] for x in data]) but how do I take care of the columns to get a 2x2 matrix (the output from the previous set is 3x4)? Is there a more intelligent way of taking care of row labels as well, instead of explicitly omitting them? EDIT It seems I will need 2 DataFrames - one for averages and one for standard deviations, is that correct? Or can I store a list of values in each \"cell\"?", "response":"You can pivot your DataFrame after creating: ``` >>> df = pd.DataFrame(data) >>> df.pivot(index=0, columns=1, values=2) # avg DataFrame 1 c1 c2 0 r1 avg11 avg12 r2 avg21 avg22 >>> df.pivot(index=0, columns=1, values=3) # stdev DataFrame 1 c1 c2 0 r1 stdev11 stdev12 r2 stdev21 stdev22 ```", "best_answers_score":0.7221, "library_name":"pandas", "question_url":"https:\/\/stackoverflow.com\/questions\/19961490\/construct-pandas-dataframe-from-list-of-tuples-of-row-col-values", "best_answers_votes":70, "question_length":858, "response_length":288 }, { "question":"Get row-index values of Pandas DataFrame as list? [duplicate] This question already has answers here: How do I convert a Pandas series or index to a NumPy array? [duplicate] (8 answers) Closed 6 years ago. I'm probably using poor search terms when trying to find this answer. Right now, before indexing a DataFrame, I'm getting a list of values in a column this way... ``` list = list(df['column']) ``` ...then I'll set_index on the column. This seems like a wasted step. When trying the above on an index, I get a key error. 
How can I grab the values in an index (both single and multi) and put them in a list or a list of tuples?", "response":"To get the index values as a list\/list of tuples for Index\/MultiIndex do: ``` df.index.values.tolist() # an ndarray method, you probably shouldn't depend on this ``` or ``` list(df.index.values) # this will always work in pandas ```", "best_answers_score":0.7215, "library_name":"pandas", "question_url":"https:\/\/stackoverflow.com\/questions\/18358938\/get-row-index-values-of-pandas-dataframe-as-list", "best_answers_votes":263, "question_length":631, "response_length":232 }, { "question":"Making heatmap from pandas DataFrame I have a dataframe generated from Python's Pandas package. How can I generate heatmap using DataFrame from pandas package. ``` import numpy as np from pandas import * Index= ['aaa','bbb','ccc','ddd','eee'] Cols = ['A', 'B', 'C','D'] df = DataFrame(abs(np.random.randn(5, 4)), index= Index, columns=Cols) >>> df A B C D aaa 2.431645 1.248688 0.267648 0.613826 bbb 0.809296 1.671020 1.564420 0.347662 ccc 1.501939 1.126518 0.702019 1.596048 ddd 0.137160 0.147368 1.504663 0.202822 eee 0.134540 3.708104 0.309097 1.641090 >>> ```", "response":"For people looking at this today, I would recommend the Seaborn heatmap() as documented here. The example above would be done as follows: ``` import numpy as np from pandas import DataFrame import seaborn as sns %matplotlib inline Index= ['aaa', 'bbb', 'ccc', 'ddd', 'eee'] Cols = ['A', 'B', 'C', 'D'] df = DataFrame(abs(np.random.randn(5, 4)), index=Index, columns=Cols) sns.heatmap(df, annot=True) ``` Where %matplotlib is an IPython magic function for those unfamiliar.", "best_answers_score":0.7212, "library_name":"pandas", "question_url":"https:\/\/stackoverflow.com\/questions\/12286607\/making-heatmap-from-pandas-dataframe", "best_answers_votes":281, "question_length":563, "response_length":472 }, { "question":"How can I make pandas dataframe column headers all lowercase? 
I want to make all column headers in my pandas data frame lower case Example If I have: ``` data = country country isocode year XRAT tcgdp 0 Canada CAN 2001 1.54876 924909.44207 1 Canada CAN 2002 1.56932 957299.91586 2 Canada CAN 2003 1.40105 1016902.00180 .... ``` I would like to change XRAT to xrat by doing something like: ``` data.headers.lowercase() ``` So that I get: ``` country country isocode year xrat tcgdp 0 Canada CAN 2001 1.54876 924909.44207 1 Canada CAN 2002 1.56932 957299.91586 2 Canada CAN 2003 1.40105 1016902.00180 3 Canada CAN 2004 1.30102 1096000.35500 .... ``` I will not know the names of each column header ahead of time.", "response":"You can do it like this: ``` data.columns = map(str.lower, data.columns) ``` or ``` data.columns = [x.lower() for x in data.columns] ``` example: ``` >>> data = pd.DataFrame({'A':range(3), 'B':range(3,0,-1), 'C':list('abc')}) >>> data A B C 0 0 3 a 1 1 2 b 2 2 1 c >>> data.columns = map(str.lower, data.columns) >>> data a b c 0 0 3 a 1 1 2 b 2 2 1 c ```", "best_answers_score":0.7207, "library_name":"pandas", "question_url":"https:\/\/stackoverflow.com\/questions\/19726029\/how-can-i-make-pandas-dataframe-column-headers-all-lowercase", "best_answers_votes":306, "question_length":710, "response_length":355 }, { "question":"python pandas remove duplicate columns What is the easiest way to remove duplicate columns from a dataframe? I am reading a text file that has duplicate columns via: ``` import pandas as pd df=pd.read_table(fname) ``` The column names are: ``` Time, Time Relative, N2, Time, Time Relative, H2, etc... ``` All the Time and Time Relative columns contain the same data. I want: ``` Time, Time Relative, N2, H2 ``` All my attempts at dropping, deleting, etc such as: ``` df=df.T.drop_duplicates().T ``` Result in uniquely valued index errors: ``` Reindexing only valid with uniquely valued index objects ``` Sorry for being a Pandas noob. Any Suggestions would be appreciated. 
Additional Details Pandas version: 0.9.0 Python Version: 2.7.3 Windows 7 (installed via Pythonxy 2.7.3.0) data file (note: in the real file, columns are separated by tabs, here they are separated by 4 spaces): ``` Time Time Relative [s] N2[%] Time Time Relative [s] H2[ppm] 2\/12\/2013 9:20:55 AM 6.177 9.99268e+001 2\/12\/2013 9:20:55 AM 6.177 3.216293e-005 2\/12\/2013 9:21:06 AM 17.689 9.99296e+001 2\/12\/2013 9:21:06 AM 17.689 3.841667e-005 2\/12\/2013 9:21:18 AM 29.186 9.992954e+001 2\/12\/2013 9:21:18 AM 29.186 3.880365e-005 ... etc ... 2\/12\/2013 2:12:44 PM 17515.269 9.991756+001 2\/12\/2013 2:12:44 PM 17515.269 2.800279e-005 2\/12\/2013 2:12:55 PM 17526.769 9.991754e+001 2\/12\/2013 2:12:55 PM 17526.769 2.880386e-005 2\/12\/2013 2:13:07 PM 17538.273 9.991797e+001 2\/12\/2013 2:13:07 PM 17538.273 3.131447e-005 ```", "response":"Here's a one line solution to remove columns based on duplicate column names: ``` df = df.loc[:,~df.columns.duplicated()].copy() ``` How it works: Suppose the columns of the data frame are ['alpha','beta','alpha'] df.columns.duplicated() returns a boolean array: a True or False for each column. If it is False then the column name is unique up to that point, if it is True then the column name is duplicated earlier. For example, using the given example, the returned value would be [False,False,True]. Pandas allows one to index using boolean values whereby it selects only the True values. Since we want to keep the unduplicated columns, we need the above boolean array to be flipped (ie [True, True, False] = ~[False,False,True]) Finally, df.loc[:,[True,True,False]] selects only the non-duplicated columns using the aforementioned indexing capability. The final .copy() is there to copy the dataframe to (mostly) avoid getting errors about trying to modify an existing dataframe later down the line. Note: the above only checks columns names, not column values. 
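For instance, here is a minimal, self-contained sketch of the name-based approach above (the column names are just placeholders standing in for the question's Time columns): ```py import pandas as pd # build a frame with a repeated column name, as in the question df = pd.DataFrame([[1, 2, 3], [4, 5, 6]], columns=['Time', 'N2', 'Time']) # keep only the first occurrence of each column name deduped = df.loc[:, ~df.columns.duplicated()].copy() print(list(deduped.columns)) # ['Time', 'N2'] ``` After this, deduped['Time'] returns a single Series instead of a two-column frame.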
To remove duplicated indexes Since it is similar enough, do the same thing on the index: ``` df = df.loc[~df.index.duplicated(),:].copy() ``` To remove duplicates by checking values without transposing Update and caveat: please be careful in applying this. Per the counter-example provided by DrWhat in the comments, this solution may not have the desired outcome in all cases. ``` df = df.loc[:,~df.apply(lambda x: x.duplicated(),axis=1).all()].copy() ``` This avoids the issue of transposing. Is it fast? No. Does it work? In some cases. Here, try it on this: ``` # create a large(ish) dataframe ldf = pd.DataFrame(np.random.randint(0,100,size= (736334,1312))) #to see size in gigs #ldf.memory_usage().sum()\/1e9 #it's about 3 gigs # duplicate a column ldf.loc[:,'dup'] = ldf.loc[:,101] # take out duplicated columns by values ldf = ldf.loc[:,~ldf.apply(lambda x: x.duplicated(),axis=1).all()].copy() ```", "best_answers_score":0.7203, "library_name":"pandas", "question_url":"https:\/\/stackoverflow.com\/questions\/14984119\/python-pandas-remove-duplicate-columns", "best_answers_votes":840, "question_length":1479, "response_length":1972 }, { "question":"How to select columns from dataframe by regex I have a dataframe in python pandas. The structure of the dataframe is as the following: ``` a b c d1 d2 d3 10 14 12 44 45 78 ``` I would like to select the columns which begin with d. 
Is there a simple way to achieve this in Python?", "response":"You can use DataFrame.filter this way: ```py import pandas as pd import numpy as np df = pd.DataFrame(np.array([[2,4,4],[4,3,3],[5,9,1]]),columns=['d','t','didi']) >> d t didi 0 2 4 4 1 4 3 3 2 5 9 1 df.filter(regex=(\"d.*\")) >> d didi 0 2 4 1 4 3 2 5 1 ``` The idea is to select columns by regex.", "best_answers_score":0.7193, "library_name":"pandas", "question_url":"https:\/\/stackoverflow.com\/questions\/30808430\/how-to-select-columns-from-dataframe-by-regex", "best_answers_votes":212, "question_length":280, "response_length":276 }, { "question":"Add column with number of days between dates in DataFrame pandas I want to subtract dates in 'A' from dates in 'B' and add a new column with the difference. ``` df A B one 2014-01-01 2014-02-28 two 2014-02-03 2014-03-01 ``` I've tried the following, but get an error when I try to include this in a for loop... ``` import datetime date1=df['A'][0] date2=df['B'][0] mdate1 = datetime.datetime.strptime(date1, \"%Y-%m-%d\").date() rdate1 = datetime.datetime.strptime(date2, \"%Y-%m-%d\").date() delta = (mdate1 - rdate1).days print delta ``` What should I do?", "response":"To remove the 'days' text element, you can also make use of the dt() accessor for series: https:\/\/pandas.pydata.org\/pandas-docs\/stable\/generated\/pandas.Series.dt.html So, ``` df[['A','B']] = df[['A','B']].apply(pd.to_datetime) #if conversion required df['C'] = (df['B'] - df['A']).dt.days ``` which returns: ``` A B C one 2014-01-01 2014-02-28 58 two 2014-02-03 2014-03-01 26 ```", "best_answers_score":0.7191, "library_name":"pandas", "question_url":"https:\/\/stackoverflow.com\/questions\/22132525\/add-column-with-number-of-days-between-dates-in-dataframe-pandas", "best_answers_votes":205, "question_length":553, "response_length":379 }, { "question":"Selecting a row of pandas series\/dataframe by integer index I am curious as to why df[2] is not supported, while df.ix[2] and df[2:3] both work.
``` In [26]: df.ix[2] Out[26]: A 1.027680 B 1.514210 C -1.466963 D -0.162339 Name: 2000-01-03 00:00:00 In [27]: df[2:3] Out[27]: A B C D 2000-01-03 1.02768 1.51421 -1.466963 -0.162339 ``` I would expect df[2] to work the same way as df[2:3] to be consistent with Python indexing convention. Is there a design reason for not supporting indexing row by single integer?", "response":"echoing @HYRY, see the new docs in 0.11 http:\/\/pandas.pydata.org\/pandas-docs\/stable\/indexing.html Here we have new operators, .iloc to explicitly support only integer indexing, and .loc to explicitly support only label indexing, e.g. imagine this scenario ``` In [1]: df = pd.DataFrame(np.random.rand(5,2),index=range(0,10,2),columns=list('AB')) In [2]: df Out[2]: A B 0 1.068932 -0.794307 2 -0.470056 1.192211 4 -0.284561 0.756029 6 1.037563 -0.267820 8 -0.538478 -0.800654 In [5]: df.iloc[[2]] Out[5]: A B 4 -0.284561 0.756029 In [6]: df.loc[[2]] Out[6]: A B 2 -0.470056 1.192211 ``` [] slices the rows (by label location) only", "best_answers_score":0.7188, "library_name":"pandas", "question_url":"https:\/\/stackoverflow.com\/questions\/16096627\/selecting-a-row-of-pandas-series-dataframe-by-integer-index", "best_answers_votes":814, "question_length":511, "response_length":626 }, { "question":"Custom sorting in pandas dataframe I have a python pandas dataframe in which a column contains month names. How can I do a custom sort using a dictionary, for example: ``` custom_dict = {'March':0, 'April':1, 'Dec':3} ```", "response":"Pandas 0.15 introduced Categorical Series, which allows a much clearer way to do this: First make the month column a categorical and specify the ordering to use. ``` In [21]: df['m'] = pd.Categorical(df['m'], [\"March\", \"April\", \"Dec\"]) In [22]: df # looks the same!
Out[22]: a b m 0 1 2 March 1 5 6 Dec 2 3 4 April ``` Now, when you sort the month column it will sort with respect to that list: ``` In [23]: df.sort_values(\"m\") Out[23]: a b m 0 1 2 March 2 3 4 April 1 5 6 Dec ``` Note: if a value is not in the list it will be converted to NaN. An older answer for those interested... You could create an intermediary series, and set_index on that: ``` df = pd.DataFrame([[1, 2, 'March'],[5, 6, 'Dec'],[3, 4, 'April']], columns=['a','b','m']) s = df['m'].apply(lambda x: {'March':0, 'April':1, 'Dec':3}[x]) s.sort_values() In [4]: df.set_index(s.index).sort() Out[4]: a b m 0 1 2 March 1 3 4 April 2 5 6 Dec ``` As commented, in newer pandas, Series has a replace method to do this more elegantly: ``` s = df['m'].replace({'March':0, 'April':1, 'Dec':3}) ``` The slight difference is that this won't raise if there is a value outside of the dictionary (it'll just stay the same).", "best_answers_score":0.7188, "library_name":"pandas", "question_url":"https:\/\/stackoverflow.com\/questions\/13838405\/custom-sorting-in-pandas-dataframe", "best_answers_votes":254, "question_length":219, "response_length":1180 }, { "question":"converting a pandas date to week number I would like to extract a week number from data in a pandas dataframe. 
The date format is datetime64[ns]. I have normalized the date to remove the time from it: ``` df['Date'] = df['Date'].apply(pd.datetools.normalize_date) ``` so the date now looks like 2015-06-17 in the data frame column, and now I would like to convert that to a week number.", "response":"Just access the week attribute of Series.dt.isocalendar(): Example: ``` In [286]: df['Date'].dt.isocalendar().week Out[286]: 0 25 dtype: int64 In [287]: df['Week_Number'] = df['Date'].dt.isocalendar().week df Out[287]: Date Week_Number 0 2015-06-17 25 ```", "best_answers_score":0.7188, "library_name":"pandas", "question_url":"https:\/\/stackoverflow.com\/questions\/31181295\/converting-a-pandas-date-to-week-number", "best_answers_votes":188, "question_length":379, "response_length":255 }, { "question":"How to reorder indexed rows based on a list in Pandas data frame I have a data frame that looks like this: ``` company Amazon Apple Yahoo name A 0 130 0 C 173 0 0 Z 0 0 150 ``` It was created using this code: ``` import pandas as pd df = pd.DataFrame({'name' : ['A', 'Z','C'], 'company' : ['Apple', 'Yahoo','Amazon'], 'height' : [130, 150,173]}) df = df.pivot(index=\"name\", columns=\"company\", values=\"height\").fillna(0) ``` What I want to do is to sort the rows (with index name) according to a predefined list: ``` [\"Z\", \"C\", \"A\"] ``` Resulting in this: ``` company Amazon Apple Yahoo name Z 0 0 150 C 173 0 0 A 0 130 0 ``` How can I achieve that?", "response":"You could set the index in a predefined order using reindex, like: ``` In [14]: df.reindex([\"Z\", \"C\", \"A\"]) Out[14]: company Amazon Apple Yahoo Z 0 0 150 C 173 0 0 A 0 130 0 ``` However, if it's alphabetical order, you could use sort_index(ascending=False) ``` In [12]: df.sort_index(ascending=False) Out[12]: company Amazon Apple Yahoo name Z 0 0 150 C 173 0 0 A 0 130 0 ``` As pointed out below, you need to assign it to some variable: ``` In [13]: df = df.sort_index(ascending=False) ```", "best_answers_score":0.7186,
"library_name":"pandas", "question_url":"https:\/\/stackoverflow.com\/questions\/30009948\/how-to-reorder-indexed-rows-based-on-a-list-in-pandas-data-frame", "best_answers_votes":175, "question_length":649, "response_length":479 }, { "question":"Replacing Pandas or Numpy Nan with a None to use with MysqlDB I am trying to write a Pandas dataframe (or can use a numpy array) to a mysql database using MysqlDB. MysqlDB doesn't seem to understand 'nan' and my database throws out an error saying nan is not in the field list. I need to find a way to convert the 'nan' into a NoneType. Any ideas?", "response":"For pandas > 1.3.0 see this answer. @bogatron has it right, you can use where, it's worth noting that you can do this natively in pandas: ``` df1 = df.where(pd.notnull(df), None) ``` Note: this changes the dtype of all columns to object. Example: ``` In [1]: df = pd.DataFrame([1, np.nan]) In [2]: df Out[2]: 0 0 1 1 NaN In [3]: df1 = df.where(pd.notnull(df), None) In [4]: df1 Out[4]: 0 0 1 1 None ``` Note: what you cannot do is recast the DataFrame's dtype to allow all datatypes, using astype, and then use the DataFrame fillna method: ``` df1 = df.astype(object).replace(np.nan, 'None') ``` Unfortunately neither this, nor using replace, works with None; see this (closed) issue. As an aside, it's worth noting that for most use cases you don't need to replace NaN with None, see this question about the difference between NaN and None in pandas. However, in this specific case it seems you do (at least at the time of this answer).", "best_answers_score":0.7183, "library_name":"pandas", "question_url":"https:\/\/stackoverflow.com\/questions\/14162723\/replacing-pandas-or-numpy-nan-with-a-none-to-use-with-mysqldb", "best_answers_votes":336, "question_length":345, "response_length":934 }, { "question":"Finding non-numeric rows in dataframe in pandas? 
I have a large dataframe in pandas that apart from the column used as index is supposed to have only numeric values: ``` df = pd.DataFrame({'a': [1, 2, 3, 'bad', 5], 'b': [0.1, 0.2, 0.3, 0.4, 0.5], 'item': ['a', 'b', 'c', 'd', 'e']}) df = df.set_index('item') ``` How can I find the row of the dataframe df that has a non-numeric value in it? In this example it's the fourth row in the dataframe, which has the string 'bad' in the a column. How can this row be found programmatically?", "response":"You could use np.isreal to check the type of each element (applymap applies a function to each element in the DataFrame): ``` In [11]: df.applymap(np.isreal) Out[11]: a b item a True True b True True c True True d False True e True True ``` If all in the row are True then they are all numeric: ``` In [12]: df.applymap(np.isreal).all(1) Out[12]: item a True b True c True d False e True dtype: bool ``` So to get the sub-DataFrame of rogues (note: the negation, ~, of the above finds the ones which have at least one rogue non-numeric): ``` In [13]: df[~df.applymap(np.isreal).all(1)] Out[13]: a b item d bad 0.4 ``` To find the location of the first offender, you could use argmin: ``` In [14]: np.argmin(df.applymap(np.isreal).all(1)) Out[14]: 'd' ``` As @CTZhu points out, it may be slightly faster to check whether it's an instance of either int or float (there is some additional overhead with np.isreal): ``` df.applymap(lambda x: isinstance(x, (int, float))) ```", "best_answers_score":0.7174, "library_name":"pandas", "question_url":"https:\/\/stackoverflow.com\/questions\/21771133\/finding-non-numeric-rows-in-dataframe-in-pandas", "best_answers_votes":97, "question_length":533, "response_length":981 }, { "question":"Number rows within group in increasing order in a pandas dataframe Given the following dataframe: ```py import pandas as pd import numpy as np df = pd.DataFrame({'A': ['A','A','A','B','B','B'], 'B': ['a','a','b','a','a','a'], }) df ``` ``` A B 0 A a 1 A a 2 A
b 3 B a 4 B a 5 B a ``` I'd like to create column 'C', which numbers the rows within each group in columns A and B like this: ```none A B C 0 A a 1 1 A a 2 2 A b 1 3 B a 1 4 B a 2 5 B a 3 ``` I've tried this so far: ```py df['C'] = df.groupby(['A','B'])['B'].transform('rank') ``` ...but it doesn't work!", "response":"Use groupby\/cumcount: ``` In [25]: df['C'] = df.groupby(['A','B']).cumcount()+1; df Out[25]: A B C 0 A a 1 1 A a 2 2 A b 1 3 B a 1 4 B a 2 5 B a 3 ```", "best_answers_score":0.7167, "library_name":"pandas", "question_url":"https:\/\/stackoverflow.com\/questions\/37997668\/number-rows-within-group-in-increasing-order-in-a-pandas-dataframe", "best_answers_votes":182, "question_length":564, "response_length":150 }, { "question":"How to Add Incremental Numbers to a New Column Using Pandas I have this simplified dataframe: ``` ID Fruit F1 Apple F2 Orange F3 Banana ``` I want to add in the begining of the dataframe a new column df['New_ID'] which has the number 880 that increments by one in each row. The output should be simply like: ``` New_ID ID Fruit 880 F1 Apple 881 F2 Orange 882 F3 Banana ``` I tried the following: ``` df['New_ID'] = [\"880\"] # but I want to do this without assigning it the list of numbers literally ``` Any idea how to solve this? Thanks!", "response":"``` df.insert(0, 'New_ID', range(880, 880 + len(df))) df ```", "best_answers_score":0.7161, "library_name":"pandas", "question_url":"https:\/\/stackoverflow.com\/questions\/38862293\/how-to-add-incremental-numbers-to-a-new-column-using-pandas", "best_answers_votes":207, "question_length":537, "response_length":60 }, { "question":"selecting from multi-index pandas I have a multi-index data frame with columns 'A' and 'B'. Is there is a way to select rows by filtering on one column of the multi-index without resetting the index to a single column index? For Example. ``` # has multi-index (A,B) df #can I do this? 
I know this doesn't work because the index is multi-index so I need to specify a tuple df.ix[df.A ==1] ```", "response":"One way is to use the get_level_values Index method: ``` In [11]: df Out[11]: 0 A B 1 4 1 2 5 2 3 6 3 In [12]: df.iloc[df.index.get_level_values('A') == 1] Out[12]: 0 A B 1 4 1 ``` In 0.13 you'll be able to use xs with drop_level argument: ``` df.xs(1, level='A', drop_level=False) # axis=1 if columns ``` Note: if this were column MultiIndex rather than index, you could use the same technique: ``` In [21]: df1 = df.T In [22]: df1.iloc[:, df1.columns.get_level_values('A') == 1] Out[22]: A 1 B 4 0 1 ```", "best_answers_score":0.7143, "library_name":"pandas", "question_url":"https:\/\/stackoverflow.com\/questions\/18835077\/selecting-from-multi-index-pandas", "best_answers_votes":196, "question_length":391, "response_length":505 }, { "question":"How to create a dictionary of two pandas DataFrame columns What is the most efficient way to organise the following pandas Dataframe: data = ``` Position Letter 1 a 2 b 3 c 4 d 5 e ``` into a dictionary like alphabet[1 : 'a', 2 : 'b', 3 : 'c', 4 : 'd', 5 : 'e']?", "response":"``` In [9]: pd.Series(df.Letter.values,index=df.Position).to_dict() Out[9]: {1: 'a', 2: 'b', 3: 'c', 4: 'd', 5: 'e'} ``` Speed comparion (using Wouter's method) ``` In [6]: df = pd.DataFrame(randint(0,10,10000).reshape(5000,2),columns=list('AB')) In [7]: %timeit dict(zip(df.A,df.B)) 1000 loops, best of 3: 1.27 ms per loop In [8]: %timeit pd.Series(df.A.values,index=df.B).to_dict() 1000 loops, best of 3: 987 us per loop ```", "best_answers_score":0.7136, "library_name":"pandas", "question_url":"https:\/\/stackoverflow.com\/questions\/17426292\/how-to-create-a-dictionary-of-two-pandas-dataframe-columns", "best_answers_votes":376, "question_length":262, "response_length":426 }, { "question":"Pandas isna() and isnull(), what is the difference? Pandas has both isna() and isnull(). 
I usually use isnull() to detect missing values and have never come across a case where I had to use anything other than that. So, when to use isna()?", "response":"isnull is an alias for isna. Literally in the source code of pandas: ``` isnull = isna ``` Indeed: ``` >>> pd.isnull ``` So I would recommend using isna.", "best_answers_score":0.7135, "library_name":"pandas", "question_url":"https:\/\/stackoverflow.com\/questions\/52086574\/pandas-isna-and-isnull-what-is-the-difference", "best_answers_votes":176, "question_length":226, "response_length":154 }, { "question":"create pandas dataframe from dictionary of dictionaries I have a dictionary of dictionaries of the form: ``` {'user':{movie:rating} } ``` For example, ``` {'Jill': {'Avenger: Age of Ultron': 7.0, 'Django Unchained': 6.5, 'Gone Girl': 9.0, 'Kill the Messenger': 8.0} 'Toby': {'Avenger: Age of Ultron': 8.5, 'Django Unchained': 9.0, 'Zoolander': 2.0}} ``` I want to convert this dict of dicts into a pandas dataframe with column 1 the user name and the other columns the movie ratings i.e. ``` user Gone_Girl Horrible_Bosses_2 Django_Unchained Zoolander etc. \\ ``` However, some users did not rate the movies and so these movies are not included in the values() for that user key(). It would be nice in these cases to just fill the entry with NaN.
As of now, I iterate over the keys, fill a list, and then use this list to create a dataframe: ``` data=[] for i,key in enumerate(movie_user_preferences.keys() ): try: data.append((key ,movie_user_preferences[key]['Gone Girl'] ,movie_user_preferences[key]['Horrible Bosses 2'] ,movie_user_preferences[key]['Django Unchained'] ,movie_user_preferences[key]['Zoolander'] ,movie_user_preferences[key]['Avenger: Age of Ultron'] ,movie_user_preferences[key]['Kill the Messenger'])) # if no entry, skip except: pass df=pd.DataFrame(data=data,columns=['user','Gone_Girl','Horrible_Bosses_2','Django_Unchained','Zoolander','Avenger_Age_of_Ultron','Kill_the_Messenger']) ``` But this only gives me a dataframe of users who rated all the movies in the set. My goal is to append to the data list by iterating over the movie labels (rather than the brute force approach shown above) and, secondly, create a dataframe that includes all users and that places null values in the elements that do not have movie ratings.", "response":"You can pass the dict of dict to the DataFrame constructor: ``` In [11]: d = {'Jill': {'Django Unchained': 6.5, 'Gone Girl': 9.0, 'Kill the Messenger': 8.0, 'Avenger: Age of Ultron': 7.0}, 'Toby': {'Django Unchained': 9.0, 'Zoolander': 2.0, 'Avenger: Age of Ultron': 8.5}} In [12]: pd.DataFrame(d) Out[12]: Jill Toby Avenger: Age of Ultron 7.0 8.5 Django Unchained 6.5 9.0 Gone Girl 9.0 NaN Kill the Messenger 8.0 NaN Zoolander NaN 2.0 ``` Or use the from_dict method: ``` In [13]: pd.DataFrame.from_dict(d) Out[13]: Jill Toby Avenger: Age of Ultron 7.0 8.5 Django Unchained 6.5 9.0 Gone Girl 9.0 NaN Kill the Messenger 8.0 NaN Zoolander NaN 2.0 In [14]: pd.DataFrame.from_dict(d, orient='index') Out[14]: Django Unchained Gone Girl Kill the Messenger Avenger: Age of Ultron Zoolander Jill 6.5 9 8 7.0 NaN Toby 9.0 NaN NaN 8.5 2 ```", "best_answers_score":0.7132, "library_name":"pandas", 
"question_url":"https:\/\/stackoverflow.com\/questions\/33157522\/create-pandas-dataframe-from-dictionary-of-dictionaries", "best_answers_votes":145, "question_length":1749, "response_length":832 }, { "question":"How to merge a Series and DataFrame If you came here looking for information on how to merge a DataFrame and Series on the index, please look at this answer. The OP's original intention was to ask how to assign series elements as columns to another DataFrame. If you are interested in knowing the answer to this, look at the accepted answer by EdChum. Best I can come up with is ``` df = pd.DataFrame({'a':[1, 2], 'b':[3, 4]}) # see EDIT below s = pd.Series({'s1':5, 's2':6}) for name in s.index: df[name] = s[name] a b s1 s2 0 1 3 5 6 1 2 4 5 6 ``` Can anybody suggest better syntax \/ faster method? My attempts: ``` df.merge(s) AttributeError: 'Series' object has no attribute 'columns' ``` and ``` df.join(s) ValueError: Other Series must have a name ``` EDIT The first two answers posted highlighted a problem with my question, so please use the following to construct df: ``` df = pd.DataFrame({'a':[np.nan, 2, 3], 'b':[4, 5, 6]}, index=[3, 5, 6]) ``` with the final result ``` a b s1 s2 3 NaN 4 5 6 5 2 5 5 6 6 3 6 5 6 ```", "response":"Update From v0.24.0 onwards, you can merge on DataFrame and Series as long as the Series is named. ``` df.merge(s.rename('new'), left_index=True, right_index=True) # If series is already named, # df.merge(s, left_index=True, right_index=True) ``` Nowadays, you can simply convert the Series to a DataFrame with to_frame(). 
So (if joining on index): ``` df.merge(s.to_frame(), left_index=True, right_index=True) ```", "best_answers_score":0.7129, "library_name":"pandas", "question_url":"https:\/\/stackoverflow.com\/questions\/26265819\/how-to-merge-a-series-and-dataframe", "best_answers_votes":249, "question_length":1028, "response_length":414 }, { "question":"Pandas: create new column in df with random integers from range I have a pandas data frame with 50k rows. I'm trying to add a new column that is a randomly generated integer from 1 to 5. If I want 50k random numbers I'd use: ```py df1['randNumCol'] = random.sample(range(50000), len(df1)) ``` but for this I'm not sure how to do it. Side note in R, I'd do: ```r sample(1:5, 50000, replace = TRUE) ``` Any suggestions?", "response":"One solution is to use numpy.random.randint: ``` import numpy as np df1['randNumCol'] = np.random.randint(1, 6, df1.shape[0]) ``` Or if the numbers are non-consecutive (albeit slower), you can use this: ``` df1['randNumCol'] = np.random.choice([1, 9, 20], df1.shape[0]) ``` In order to make the results reproducible you can set the seed with numpy.random.seed (e.g. np.random.seed(42))", "best_answers_score":0.7109, "library_name":"pandas", "question_url":"https:\/\/stackoverflow.com\/questions\/30327417\/pandas-create-new-column-in-df-with-random-integers-from-range", "best_answers_votes":171, "question_length":417, "response_length":385 }, { "question":"How to select rows in a DataFrame between two values I am trying to modify a DataFrame df to only contain rows for which the values in the column closing_price are between 99 and 101 and trying to do this with the code below. However, I get the error ValueError: The truth value of a Series is ambiguous. Use a.empty, a.bool(), a.item(), a.any() or a.all() and I am wondering if there is a way to do this without using loops. 
``` df = df[99 <= df['closing_price'] <= 101] ```", "response":"Consider Series.between: ``` df = df[df['closing_price'].between(99, 101)] ```", "best_answers_score":0.7108, "library_name":"pandas", "question_url":"https:\/\/stackoverflow.com\/questions\/31617845\/how-to-select-rows-in-a-dataframe-between-two-values", "best_answers_votes":365, "question_length":475, "response_length":78 }, { "question":"Append multiple pandas data frames at once [duplicate] This question already has answers here: Concatenate a list of pandas dataframes together (6 answers) Closed 2 years ago. I am trying to find some way of appending multiple pandas data frames at once rather than appending them one by one using ``` df.append(df) ``` Let us say there are 5 pandas data frames t1, t2, t3, t4, t5. How do I append them at once? Something equivalent of ``` df = rbind(t1,t2,t3,t4,t5) ```", "response":"I think you can use concat: ``` print pd.concat([t1, t2, t3, t4, t5]) ``` Maybe you can ignore_index: ``` print pd.concat([t1, t2, t3, t4, t5], ignore_index=True) ``` More info in docs.", "best_answers_score":0.7095, "library_name":"pandas", "question_url":"https:\/\/stackoverflow.com\/questions\/36526282\/append-multiple-pandas-data-frames-at-once", "best_answers_votes":129, "question_length":470, "response_length":185 }, { "question":"Pandas column of lists, create a row for each list element I have a dataframe where some cells contain lists of multiple values. Rather than storing multiple values in a cell, I'd like to expand the dataframe so that each item in the list gets its own row (with the same values in all other columns). 
So if I have: ``` import pandas as pd import numpy as np df = pd.DataFrame( {'trial_num': [1, 2, 3, 1, 2, 3], 'subject': [1, 1, 1, 2, 2, 2], 'samples': [list(np.random.randn(3).round(2)) for i in range(6)] } ) df Out[10]: samples subject trial_num 0 [0.57, -0.83, 1.44] 1 1 1 [-0.01, 1.13, 0.36] 1 2 2 [1.18, -1.46, -0.94] 1 3 3 [-0.08, -4.22, -2.05] 2 1 4 [0.72, 0.79, 0.53] 2 2 5 [0.4, -0.32, -0.13] 2 3 ``` How do I convert to long form, e.g.: ``` subject trial_num sample sample_num 0 1 1 0.57 0 1 1 1 -0.83 1 2 1 1 1.44 2 3 1 2 -0.01 0 4 1 2 1.13 1 5 1 2 0.36 2 6 1 3 1.18 0 # etc. ``` The index is not important, it's OK to set existing columns as the index and the final ordering isn't important.", "response":"A bit longer than I expected: ``` >>> df samples subject trial_num 0 [-0.07, -2.9, -2.44] 1 1 1 [-1.52, -0.35, 0.1] 1 2 2 [-0.17, 0.57, -0.65] 1 3 3 [-0.82, -1.06, 0.47] 2 1 4 [0.79, 1.35, -0.09] 2 2 5 [1.17, 1.14, -1.79] 2 3 >>> >>> s = df.apply(lambda x: pd.Series(x['samples']),axis=1).stack().reset_index(level=1, drop=True) >>> s.name = 'sample' >>> >>> df.drop('samples', axis=1).join(s) subject trial_num sample 0 1 1 -0.07 0 1 1 -2.90 0 1 1 -2.44 1 1 2 -1.52 1 1 2 -0.35 1 1 2 0.10 2 1 3 -0.17 2 1 3 0.57 2 1 3 -0.65 3 2 1 -0.82 3 2 1 -1.06 3 2 1 0.47 4 2 2 0.79 4 2 2 1.35 4 2 2 -0.09 5 2 3 1.17 5 2 3 1.14 5 2 3 -1.79 ``` If you want sequential index, you can apply reset_index(drop=True) to the result. 
update: ``` >>> res = df.set_index(['subject', 'trial_num'])['samples'].apply(pd.Series).stack() >>> res = res.reset_index() >>> res.columns = ['subject','trial_num','sample_num','sample'] >>> res subject trial_num sample_num sample 0 1 1 0 1.89 1 1 1 1 -2.92 2 1 1 2 0.34 3 1 2 0 0.85 4 1 2 1 0.24 5 1 2 2 0.72 6 1 3 0 -0.96 7 1 3 1 -2.72 8 1 3 2 -0.11 9 2 1 0 -1.33 10 2 1 1 3.13 11 2 1 2 -0.65 12 2 2 0 0.10 13 2 2 1 0.65 14 2 2 2 0.15 15 2 3 0 0.64 16 2 3 1 -0.10 17 2 3 2 -0.76 ```", "best_answers_score":0.709, "library_name":"pandas", "question_url":"https:\/\/stackoverflow.com\/questions\/27263805\/pandas-column-of-lists-create-a-row-for-each-list-element", "best_answers_votes":151, "question_length":1004, "response_length":1200 }, { "question":"Comparing two dataframes and getting the differences [duplicate] This question already has answers here: Find difference between two data frames (19 answers) Closed 3 years ago. I have two dataframes. Example: ``` df1: Date Fruit Num Color 2013-11-24 Banana 22.1 Yellow 2013-11-24 Orange 8.6 Orange 2013-11-24 Apple 7.6 Green 2013-11-24 Celery 10.2 Green df2: Date Fruit Num Color 2013-11-24 Banana 22.1 Yellow 2013-11-24 Orange 8.6 Orange 2013-11-24 Apple 7.6 Green 2013-11-24 Celery 10.2 Green 2013-11-25 Apple 22.1 Red 2013-11-25 Orange 8.6 Orange ``` Each dataframe has the Date as an index. Both dataframes have the same structure. What i want to do, is compare these two dataframes and find which rows are in df2 that aren't in df1. I want to compare the date (index) and the first column (Banana, APple, etc) to see if they exist in df2 vs df1. I have tried the following: Compare two DataFrames and output their differences side-by-side Comparing two pandas dataframes for differences For the first approach I get this error: \"Exception: Can only compare identically-labeled DataFrame objects\". I have tried removing the Date as index but get the same error. 
On the third approach, I get the assert to return False but cannot figure out how to actually see the different rows. Any pointers would be welcome", "response":"This approach, df1 != df2, works only for dataframes with identical rows and columns. In fact, all dataframes axes are compared with _indexed_same method, and exception is raised if differences found, even in columns\/indices order. If I got you right, you want not to find changes, but symmetric difference. For that, one approach might be concatenate dataframes: ``` >>> df = pd.concat([df1, df2]) >>> df = df.reset_index(drop=True) ``` group by ``` >>> df_gpby = df.groupby(list(df.columns)) ``` get index of unique records ``` >>> idx = [x[0] for x in df_gpby.groups.values() if len(x) == 1] ``` filter ``` >>> df.reindex(idx) Date Fruit Num Color 9 2013-11-25 Orange 8.6 Orange 8 2013-11-25 Apple 22.1 Red ```", "best_answers_score":0.709, "library_name":"pandas", "question_url":"https:\/\/stackoverflow.com\/questions\/20225110\/comparing-two-dataframes-and-getting-the-differences", "best_answers_votes":139, "question_length":1314, "response_length":713 }, { "question":"Convert Select Columns in Pandas Dataframe to Numpy Array I would like to convert everything but the first column of a pandas dataframe into a numpy array. For some reason using the columns= parameter of DataFrame.to_matrix() is not working. 
df: ``` viz a1_count a1_mean a1_std 0 n 3 2 0.816497 1 n 0 NaN NaN 2 n 2 51 50.000000 ``` I tried X=df.as_matrix(columns=[df[1:]]) but this yields an array of all NaNs", "response":"The easy way is the \"values\" property: df.iloc[:,1:].values ``` a=df.iloc[:,1:] b=df.iloc[:,1:].values print(type(df)) print(type(a)) print(type(b)) ``` so, you can check the types: ``` <class 'pandas.core.frame.DataFrame'> <class 'pandas.core.frame.DataFrame'> <class 'numpy.ndarray'> ```", "best_answers_score":0.7087, "library_name":"pandas", "question_url":"https:\/\/stackoverflow.com\/questions\/31789160\/convert-select-columns-in-pandas-dataframe-to-numpy-array", "best_answers_votes":126, "question_length":409, "response_length":183 }, { "question":"Splitting dataframe into multiple dataframes I have a very large dataframe (around 1 million rows) with data from an experiment (60 respondents). I would like to split the dataframe into 60 dataframes (a dataframe for each participant). In the dataframe, data, there is a variable called 'name', which is the unique code for each participant. I have tried the following, but nothing happens (or execution does not stop within an hour). What I intend to do is to split the data into smaller dataframes, and append these to a list (datalist): ``` import pandas as pd def splitframe(data, name='name'): n = data[name][0] df = pd.DataFrame(columns=data.columns) datalist = [] for i in range(len(data)): if data[name][i] == n: df = df.append(data.iloc[i]) else: datalist.append(df) df = pd.DataFrame(columns=data.columns) n = data[name][i] df = df.append(data.iloc[i]) return datalist ``` I do not get an error message, the script just seems to run forever! Is there a smart way to do it?", "response":"Can I ask why not just do it by slicing the data frame?
Something like ``` #create some data with Names column data = pd.DataFrame({'Names': ['Joe', 'John', 'Jasper', 'Jez'] *4, 'Ob1' : np.random.rand(16), 'Ob2' : np.random.rand(16)}) #create unique list of names UniqueNames = data.Names.unique() #create a data frame dictionary to store your data frames DataFrameDict = {elem : pd.DataFrame() for elem in UniqueNames} for key in DataFrameDict.keys(): DataFrameDict[key] = data[:][data.Names == key] ``` Hey presto you have a dictionary of data frames just as (I think) you want them. Need to access one? Just enter ``` DataFrameDict['Joe'] ```", "best_answers_score":0.7083, "library_name":"pandas", "question_url":"https:\/\/stackoverflow.com\/questions\/19790790\/splitting-dataframe-into-multiple-dataframes", "best_answers_votes":112, "question_length":983, "response_length":645 }, { "question":"Annotate bars with values on Pandas bar plots I was looking for a way to annotate my bars in a Pandas bar plot with the rounded numerical values from my DataFrame. ``` >>> df=pd.DataFrame({'A':np.random.rand(2),'B':np.random.rand(2)},index=['value1','value2'] ) >>> df A B value1 0.440922 0.911800 value2 0.588242 0.797366 ``` I would like to get something like this: I tried with this code sample, but the annotations are all centered on the x ticks: ``` >>> ax = df.plot(kind='bar') >>> for idx, label in enumerate(list(df.index)): for acc in df.columns: value = np.round(df.ix[idx][acc],decimals=2) ax.annotate(value, (idx, value), xytext=(0, 15), textcoords='offset points') ```", "response":"You get it directly from the axes' patches: ``` for p in ax.patches: ax.annotate(str(p.get_height()), (p.get_x() * 1.005, p.get_height() * 1.005)) ``` You'll want to tweak the string formatting and the offsets to get things centered, maybe use the width from p.get_width(), but that should get you started. 
It may not work with stacked bar plots unless you track the offsets somewhere.", "best_answers_score":0.708, "library_name":"pandas", "question_url":"https:\/\/stackoverflow.com\/questions\/25447700\/annotate-bars-with-values-on-pandas-bar-plots", "best_answers_votes":228, "question_length":682, "response_length":385 }, { "question":"Pandas date_range to generate monthly data at beginning of the month I'm trying to generate a date range of monthly data where the day is always at the beginning of the month: ``` pd.date_range(start='1\/1\/1980', end='11\/1\/1991', freq='M') ``` This generates 1\/31\/1980, 2\/29\/1980, and so on. Instead, I just want 1\/1\/1980, 2\/1\/1980,... I've seen other question ask about generating data that is always on a specific day of the month, with answers saying it wasn't possible, but beginning of month surely must be possible!", "response":"You can do this by changing the freq argument from 'M' to 'MS': ``` d = pandas.date_range(start='1\/1\/1980', end='11\/1\/1990', freq='MS') print(d) ``` This should now print: ``` DatetimeIndex(['1980-01-01', '1980-02-01', '1980-03-01', '1980-04-01', '1980-05-01', '1980-06-01', '1980-07-01', '1980-08-01', '1980-09-01', '1980-10-01', ... '1990-02-01', '1990-03-01', '1990-04-01', '1990-05-01', '1990-06-01', '1990-07-01', '1990-08-01', '1990-09-01', '1990-10-01', '1990-11-01'], dtype='datetime64[ns]', length=131, freq='MS', tz=None) ``` Look into the offset aliases part of the documentation. 
There it states that 'M' is for the end of the month (month end frequency) while 'MS' for the beginning (month start frequency).", "best_answers_score":0.7077, "library_name":"pandas", "question_url":"https:\/\/stackoverflow.com\/questions\/34915828\/pandas-date-range-to-generate-monthly-data-at-beginning-of-the-month", "best_answers_votes":224, "question_length":520, "response_length":720 }, { "question":"Pandas: ValueError: cannot convert float NaN to integer I get ValueError: cannot convert float NaN to integer for following: ``` df = pandas.read_csv('zoom11.csv') df[['x']] = df[['x']].astype(int) ``` The \"x\" is a column in the csv file, I cannot spot any float NaN in the file, and I don't understand the error or why I am getting it. When I read the column as String, then it has values like -1,0,1,...2000, all look very nice int numbers to me. When I read the column as float, then this can be loaded. Then it shows values as -1.0,0.0 etc, still there are no any NaN-s I tried with error_bad_lines = False and dtype parameter in read_csv to no avail. It just cancels loading with same exception. The file is not small (10+ M rows), so cannot inspect it manually, when I extract a small header part, then there is no error, but it happens with full file. So it is something in the file, but cannot detect what. Logically the csv should not have missing values, but even if there is some garbage then I would be ok to skip the rows. Or at least identify them, but I do not see way to scan through file and report conversion errors. 
Update: Using the hints in comments\/answers I got my data clean with this: ``` # x contained NaN df = df[~df['x'].isnull()] # Y contained some other garbage, so null check was not enough df = df[df['y'].str.isnumeric()] # final conversion now worked df[['x']] = df[['x']].astype(int) df[['y']] = df[['y']].astype(int) ```", "response":"For identifying NaN values use boolean indexing: ``` print(df[df['x'].isnull()]) ``` Then for removing all non-numeric values use to_numeric with parameter errors='coerce' - to replace non-numeric values to NaNs: ``` df['x'] = pd.to_numeric(df['x'], errors='coerce') ``` And for remove all rows with NaNs in column x use dropna: ``` df = df.dropna(subset=['x']) ``` Last convert values to ints: ``` df['x'] = df['x'].astype(int) ```", "best_answers_score":0.7073, "library_name":"pandas", "question_url":"https:\/\/stackoverflow.com\/questions\/47333227\/pandas-valueerror-cannot-convert-float-nan-to-integer", "best_answers_votes":111, "question_length":1456, "response_length":432 }, { "question":"How can I map True\/False to 1\/0 in a Pandas DataFrame? I have a column in python pandas DataFrame that has boolean True\/False values, but for further calculations I need 1\/0 representation. Is there a quick pandas\/numpy way to do that?", "response":"A succinct way to convert a single column of boolean values to a column of integers 1 or 0: ```py df[\"somecolumn\"] = df[\"somecolumn\"].astype(int) ```", "best_answers_score":0.707, "library_name":"pandas", "question_url":"https:\/\/stackoverflow.com\/questions\/17383094\/how-can-i-map-true-false-to-1-0-in-a-pandas-dataframe", "best_answers_votes":577, "question_length":235, "response_length":149 }, { "question":"pandas .at versus .loc I've been exploring how to optimize my code and ran across pandas .at method. Per the documentation Fast label-based scalar accessor Similarly to loc, at provides label based scalar lookups. You can also set using these indexers. 
So I ran some samples: Setup ``` import pandas as pd import numpy as np from string import letters, lowercase, uppercase lt = list(letters) lc = list(lowercase) uc = list(uppercase) def gdf(rows, cols, seed=None): \"\"\"rows and cols are what you'd pass to pd.MultiIndex.from_product()\"\"\" gmi = pd.MultiIndex.from_product df = pd.DataFrame(index=gmi(rows), columns=gmi(cols)) np.random.seed(seed) df.iloc[:, :] = np.random.rand(*df.shape) return df seed = [3, 1415] df = gdf([lc, uc], [lc, uc], seed) print df.head().T.head().T ``` df looks like: ``` a A B C D E a A 0.444939 0.407554 0.460148 0.465239 0.462691 B 0.032746 0.485650 0.503892 0.351520 0.061569 C 0.777350 0.047677 0.250667 0.602878 0.570528 D 0.927783 0.653868 0.381103 0.959544 0.033253 E 0.191985 0.304597 0.195106 0.370921 0.631576 ``` Lets use .at and .loc and ensure I get the same thing ``` print \"using .loc\", df.loc[('a', 'A'), ('c', 'C')] print \"using .at \", df.at[('a', 'A'), ('c', 'C')] using .loc 0.37374090276 using .at 0.37374090276 ``` Test speed using .loc ``` %%timeit df.loc[('a', 'A'), ('c', 'C')] 10000 loops, best of 3: 180 \u00b5s per loop ``` Test speed using .at ``` %%timeit df.at[('a', 'A'), ('c', 'C')] The slowest run took 6.11 times longer than the fastest. This could mean that an intermediate result is being cached. 100000 loops, best of 3: 8 \u00b5s per loop ``` This looks to be a huge speed increase. Even at the caching stage 6.11 * 8 is a lot faster than 180 Question What are the limitations of .at? I'm motivated to use it. The documentation says it's similar to .loc but it doesn't behave similarly. Example: ``` # small df sdf = gdf([lc[:2]], [uc[:2]], seed) print sdf.loc[:, :] A B a 0.444939 0.407554 b 0.460148 0.465239 ``` where as print sdf.at[:, :] results in TypeError: unhashable type So obviously not the same even if the intent is to be similar. 
That said, who can provide guidance on what can and cannot be done with the .at method?", "response":"Update: df.get_value is deprecated as of version 0.21.0. Using df.at or df.iat is the recommended method going forward. df.at can only access a single value at a time. df.loc can select multiple rows and\/or columns. Note that there is also df.get_value, which may be even quicker at accessing single values: ``` In [25]: %timeit df.loc[('a', 'A'), ('c', 'C')] 10000 loops, best of 3: 187 \u00b5s per loop In [26]: %timeit df.at[('a', 'A'), ('c', 'C')] 100000 loops, best of 3: 8.33 \u00b5s per loop In [35]: %timeit df.get_value(('a', 'A'), ('c', 'C')) 100000 loops, best of 3: 3.62 \u00b5s per loop ``` Under the hood, df.at[...] calls df.get_value, but it also does some type checking on the keys.", "best_answers_score":0.707, "library_name":"pandas", "question_url":"https:\/\/stackoverflow.com\/questions\/37216485\/pandas-at-versus-loc", "best_answers_votes":103, "question_length":2189, "response_length":684 }, { "question":"How to one-hot-encode from a pandas column containing a list? I would like to break down a pandas column consisting of a list of elements into as many columns as there are unique elements i.e. one-hot-encode them (with value 1 representing a given element existing in a row and 0 in the case of absence). For example, taking dataframe df ``` Col1 Col2 Col3 C 33 [Apple, Orange, Banana] A 2.5 [Apple, Grape] B 42 [Banana] ``` I would like to convert this to: df ``` Col1 Col2 Apple Orange Banana Grape C 33 1 1 1 0 A 2.5 1 0 0 1 B 42 0 0 1 0 ``` How can I use pandas\/sklearn to achieve this?", "response":"We can also use sklearn.preprocessing.MultiLabelBinarizer: Often we want to use a sparse DataFrame for real-world data in order to save a lot of RAM. 
Sparse solution (for Pandas v0.25.0+) ``` from sklearn.preprocessing import MultiLabelBinarizer mlb = MultiLabelBinarizer(sparse_output=True) df = df.join( pd.DataFrame.sparse.from_spmatrix( mlb.fit_transform(df.pop('Col3')), index=df.index, columns=mlb.classes_)) ``` result: ``` In [38]: df Out[38]: Col1 Col2 Apple Banana Grape Orange 0 C 33.0 1 1 0 1 1 A 2.5 1 0 1 0 2 B 42.0 0 1 0 0 In [39]: df.dtypes Out[39]: Col1 object Col2 float64 Apple Sparse[int32, 0] Banana Sparse[int32, 0] Grape Sparse[int32, 0] Orange Sparse[int32, 0] dtype: object In [40]: df.memory_usage() Out[40]: Index 128 Col1 24 Col2 24 Apple 16 # <--- NOTE! Banana 16 # <--- NOTE! Grape 8 # <--- NOTE! Orange 8 # <--- NOTE! dtype: int64 ``` Dense solution ``` mlb = MultiLabelBinarizer() df = df.join(pd.DataFrame(mlb.fit_transform(df.pop('Col3')), columns=mlb.classes_, index=df.index)) ``` Result: ``` In [77]: df Out[77]: Col1 Col2 Apple Banana Grape Orange 0 C 33.0 1 1 0 1 1 A 2.5 1 0 1 0 2 B 42.0 0 1 0 0 ```", "best_answers_score":0.7065, "library_name":"pandas", "question_url":"https:\/\/stackoverflow.com\/questions\/45312377\/how-to-one-hot-encode-from-a-pandas-column-containing-a-list", "best_answers_votes":103, "question_length":590, "response_length":1141 }, { "question":"Improve subplot size\/spacing with many subplots I need to generate a whole bunch of vertically-stacked plots in matplotlib. The result will be saved using savefig and viewed on a webpage, so I don't care how tall the final image is, as long as the subplots are spaced so they don't overlap. No matter how big I allow the figure to be, the subplots always seem to overlap. 
My code currently looks like ``` import matplotlib.pyplot as plt import my_other_module titles, x_lists, y_lists = my_other_module.get_data() fig = plt.figure(figsize=(10,60)) for i, y_list in enumerate(y_lists): plt.subplot(len(titles), 1, i) plt.xlabel(\"Some X label\") plt.ylabel(\"Some Y label\") plt.title(titles[i]) plt.plot(x_lists[i],y_list) fig.savefig('out.png', dpi=100) ```", "response":"Please review matplotlib: Tight Layout guide and try using matplotlib.pyplot.tight_layout, or matplotlib.figure.Figure.tight_layout As a quick example: ``` import matplotlib.pyplot as plt fig, axes = plt.subplots(nrows=4, ncols=4, figsize=(8, 8)) fig.tight_layout() # Or equivalently, \"plt.tight_layout()\" plt.show() ``` Without Tight Layout With Tight Layout", "best_answers_score":0.7059, "library_name":"pandas", "question_url":"https:\/\/stackoverflow.com\/questions\/6541123\/improve-subplot-size-spacing-with-many-subplots", "best_answers_votes":783, "question_length":754, "response_length":359 }, { "question":"What rules does Pandas use to generate a view vs a copy? I'm confused about the rules Pandas uses when deciding that a selection from a dataframe is a copy of the original dataframe, or a view on the original. If I have, for example, ```py df = pd.DataFrame(np.random.randn(8,8), columns=list('ABCDEFGH'), index=range(1,9)) ``` I understand that a query returns a copy so that something like ```py foo = df.query('2 < index <= 5') foo.loc[:,'E'] = 40 ``` will have no effect on the original dataframe, df. I also understand that scalar or named slices return a view, so that assignments to these, such as ```py df.iloc[3] = 70 ``` or ```py df.ix[1,'B':'E'] = 222 ``` will change df. But I'm lost when it comes to more complicated cases. For example, ```py df[df.C <= df.B] = 7654321 ``` changes df, but ```py df[df.C <= df.B].ix[:,'B':'E'] ``` does not. Is there a simple rule that Pandas is using that I'm just missing? 
What's going on in these specific cases; and in particular, how do I change all values (or a subset of values) in a dataframe that satisfy a particular query (as I'm attempting to do in the last example above)? Note: This is not the same as this question; and I have read the documentation, but am not enlightened by it. I've also read through the \"Related\" questions on this topic, but I'm still missing the simple rule Pandas is using, and how I'd apply it to \u2014 for example \u2014 modify the values (or a subset of values) in a dataframe that satisfy a particular query.", "response":"Here's the rules, subsequent override: All operations generate a copy If inplace=True is provided, it will modify in-place; only some operations support this An indexer that sets, e.g. .loc\/.iloc\/.iat\/.at will set inplace. An indexer that gets on a single-dtyped object is almost always a view (depending on the memory layout it may not be that's why this is not reliable). This is mainly for efficiency. (the example from above is for .query; this will always return a copy as its evaluated by numexpr) An indexer that gets on a multiple-dtyped object is always a copy. Your example of chained indexing ``` df[df.C <= df.B].loc[:,'B':'E'] ``` is not guaranteed to work (and thus you should never do this). Instead do: ``` df.loc[df.C <= df.B, 'B':'E'] ``` as this is faster and will always work The chained indexing is 2 separate python operations and thus cannot be reliably intercepted by pandas (you will oftentimes get a SettingWithCopyWarning, but that is not 100% detectable either). 
The dev docs, which you pointed to, offer a much fuller explanation.", "best_answers_score":0.7047, "library_name":"pandas", "question_url":"https:\/\/stackoverflow.com\/questions\/23296282\/what-rules-does-pandas-use-to-generate-a-view-vs-a-copy", "best_answers_votes":203, "question_length":1488, "response_length":1059 }, { "question":"Pandas column bind (cbind) two data frames I've got a dataframe df_a with id information: ```none unique_id lacet_number 15 5570613 TLA-0138365 24 5025490 EMP-0138757 36 4354431 DXN-0025343 ``` and another dataframe df_b, with the same number of rows that I know correspond to the rows in df_a: ```none latitude longitude 0 -93.193560 31.217029 1 -93.948082 35.360874 2 -103.131508 37.787609 ``` What I want to do is simply concatenate the two horizontally (similar to cbind in R) and get: ```none unique_id lacet_number latitude longitude 0 5570613 TLA-0138365 -93.193560 31.217029 1 5025490 EMP-0138757 -93.948082 35.360874 2 4354431 DXN-0025343 -103.131508 37.787609 ``` What I have tried: ```py df_c = pd.concat([df_a, df_b], axis=1) ``` which gives me an outer join. ```none unique_id lacet_number latitude longitude 0 NaN NaN -93.193560 31.217029 1 NaN NaN -93.948082 35.360874 2 NaN NaN -103.131508 37.787609 15 5570613 TLA-0138365 NaN NaN 24 5025490 EMP-0138757 NaN NaN 36 4354431 DXN-0025343 NaN NaN ``` The problem is that the indices for the two dataframes do not match. I read the documentation for pandas.concat, and saw that there is an option ignore_index. But that only applies to the concatenation axis, in my case the columns and it certainly is not the right choice for me. 
So my question is: is there a simple way to achieve this?", "response":"If you're sure the rows correspond, then to avoid alignment on mismatched index labels just call reset_index(); this will reset your index values back to start from 0: ``` df_c = pd.concat([df_a.reset_index(drop=True), df_b], axis=1) ```", "best_answers_score":0.7031, "library_name":"pandas", "question_url":"https:\/\/stackoverflow.com\/questions\/33088010\/pandas-column-bind-cbind-two-data-frames", "best_answers_votes":140, "question_length":1350, "response_length":244 }, { "question":"How to concatenate two dataframes without duplicates? I'd like to concatenate two dataframes A, B to a new one without duplicate rows (if rows in B already exist in A, don't add): Dataframe A: ``` I II 0 1 2 1 3 1 ``` Dataframe B: ``` I II 0 5 6 1 3 1 ``` New Dataframe: ``` I II 0 1 2 1 3 1 2 5 6 ``` How can I do this?", "response":"The simplest way is to just do the concatenation, and then drop duplicates. ``` >>> df1 A B 0 1 2 1 3 1 >>> df2 A B 0 5 6 1 3 1 >>> pandas.concat([df1,df2]).drop_duplicates().reset_index(drop=True) A B 0 1 2 1 3 1 2 5 6 ``` The reset_index(drop=True) is to fix up the index after the concat() and drop_duplicates(). Without it you will have an index of [0,1,0] instead of [0,1,2]. This could cause problems for further operations on this dataframe down the road if it isn't reset right away.", "best_answers_score":0.7021, "library_name":"pandas", "question_url":"https:\/\/stackoverflow.com\/questions\/21317384\/how-to-concatenate-two-dataframes-without-duplicates", "best_answers_votes":172, "question_length":320, "response_length":491 }, { "question":"Plot multiple Y axes I know pandas supports a secondary Y axis, but I'm curious if anyone knows a way to put a tertiary Y axis on plots. Currently I am achieving this with numpy+pyplot, but it is slow with large data sets. 
This is to plot different measurements with distinct units on the same graph for easy comparison (eg: Relative Humidity\/Temperature\/ and Electrical Conductivity). So really just curious if anyone knows if this is possible in pandas without too much work. [Edit] I doubt that there is a way to do this(without too much overhead) however I hope to be proven wrong, as this may be a limitation of matplotlib.", "response":"I think this might work: ``` import matplotlib.pyplot as plt import numpy as np from pandas import DataFrame df = DataFrame(np.random.randn(5, 3), columns=['A', 'B', 'C']) fig, ax = plt.subplots() ax3 = ax.twinx() rspine = ax3.spines['right'] rspine.set_position(('axes', 1.15)) ax3.set_frame_on(True) ax3.patch.set_visible(False) fig.subplots_adjust(right=0.7) df.A.plot(ax=ax, style='b-') # same ax as above since it's automatically added on the right df.B.plot(ax=ax, style='r-', secondary_y=True) df.C.plot(ax=ax3, style='g-') # add legend --> take advantage of pandas providing us access # to the line associated with the right part of the axis ax3.legend([ax.get_lines()[0], ax.right_ax.get_lines()[0], ax3.get_lines()[0]],\\ ['A','B','C'], bbox_to_anchor=(1.5, 0.5)) ``` Output:", "best_answers_score":0.7016, "library_name":"pandas", "question_url":"https:\/\/stackoverflow.com\/questions\/11640243\/plot-multiple-y-axes", "best_answers_votes":116, "question_length":628, "response_length":784 }, { "question":"List Highest Correlation Pairs from a Large Correlation Matrix in Pandas? How do you find the top correlations in a correlation matrix with Pandas? There are many answers on how to do this with R (Show correlations as an ordered list, not as a large matrix or Efficient way to get highly correlated pairs from large data set in Python or R), but I am wondering how to do it with pandas? 
In my case the matrix is 4460x4460, so can't do it visually.", "response":"You can use DataFrame.values to get a numpy array of the data and then use NumPy functions such as argsort() to get the most correlated pairs. But if you want to do this in pandas, you can unstack and sort the DataFrame: ``` import pandas as pd import numpy as np shape = (50, 4460) data = np.random.normal(size=shape) data[:, 1000] += data[:, 2000] df = pd.DataFrame(data) c = df.corr().abs() s = c.unstack() so = s.sort_values(kind=\"quicksort\") print(so[-4470:-4460]) ``` Here is the output: ``` 2192 1522 0.636198 1522 2192 0.636198 3677 2027 0.641817 2027 3677 0.641817 242 130 0.646760 130 242 0.646760 1171 2733 0.670048 2733 1171 0.670048 1000 2000 0.742340 2000 1000 0.742340 dtype: float64 ```", "best_answers_score":0.7007, "library_name":"pandas", "question_url":"https:\/\/stackoverflow.com\/questions\/17778394\/list-highest-correlation-pairs-from-a-large-correlation-matrix-in-pandas", "best_answers_votes":137, "question_length":447, "response_length":702 }, { "question":"Comparison between datetime and datetime64[ns] in pandas I'm writing a program that checks an excel file and if today's date is in the excel file's date column, I parse it I'm using: ``` cur_date = datetime.today() ``` for today's date. I'm checking if today is in the column with: ``` bool_val = cur_date in df['date'] #evaluates to false ``` I do know for a fact that today's date is in the file in question. The dtype of the series is datetime64[ns] Also, I am only checking the date itself and not the timestamp afterwards, if that matters. I'm doing this to make the timestamp 00:00:00: ``` cur_date = datetime.strptime(cur_date.strftime('%Y_%m_%d'), '%Y_%m_%d') ``` And the type of that object after printing is datetime as well", "response":"For anyone who also stumbled across this when comparing a dataframe date to a variable date, and for whom this did not exactly answer the question: you can use the code below. 
Instead of: ``` self.df[\"date\"] = pd.to_datetime(self.df[\"date\"]) ``` You can import datetime and then add .dt.date to the end like: ``` self.df[\"date\"] = pd.to_datetime(self.df[\"date\"]).dt.date ```", "best_answers_score":0.7004, "library_name":"pandas", "question_url":"https:\/\/stackoverflow.com\/questions\/51827134\/comparison-between-datetime-and-datetime64ns-in-pandas", "best_answers_votes":124, "question_length":734, "response_length":366 }, { "question":"append dictionary to data frame I have a function, which returns a dictionary like this: ``` {'truth': 185.179993, 'day1': 197.22307753038834, 'day2': 197.26118010160317, 'day3': 197.19846975345905, 'day4': 197.1490578795196, 'day5': 197.37179265011116} ``` I am trying to append this dictionary to a dataframe like so: ``` output = pd.DataFrame() output.append(dictionary, ignore_index=True) print(output.head()) ``` Unfortunately, the printing of the dataframe results in an empty dataframe. Any ideas?", "response":"Doesn't work starting pandas==2.0.0, deprecated since 1.4 You don't assign the value to the result. ``` output = pd.DataFrame() output = output.append(dictionary, ignore_index=True) print(output.head()) ```", "best_answers_score":0.6998, "library_name":"pandas", "question_url":"https:\/\/stackoverflow.com\/questions\/51774826\/append-dictionary-to-data-frame", "best_answers_votes":217, "question_length":504, "response_length":206 }, { "question":"Difference between groupby and pivot_table for pandas dataframes I just started learning Pandas and was wondering if there is any difference between groupby and pivot_table functions. Can anyone help me understand the difference between them?", "response":"Both pivot_table and groupby are used to aggregate your dataframe. The difference is only with regard to the shape of the result. 
Using pd.pivot_table(df, index=[\"a\"], columns=[\"b\"], values=[\"c\"], aggfunc=np.sum) a table is created where a is on the row axis, b is on the column axis, and the values are the sum of c. Example: ``` df = pd.DataFrame({\"a\": [1,2,3,1,2,3], \"b\":[1,1,1,2,2,2], \"c\":np.random.rand(6)}) pd.pivot_table(df, index=[\"a\"], columns=[\"b\"], values=[\"c\"], aggfunc=np.sum) b 1 2 a 1 0.528470 0.484766 2 0.187277 0.144326 3 0.866832 0.650100 ``` Using groupby, the dimensions given are placed into columns, and rows are created for each combination of those dimensions. In this example, we create a series of the sum of values c, grouped by all unique combinations of a and b. ``` df.groupby(['a','b'])['c'].sum() a b 1 1 0.528470 2 0.484766 2 1 0.187277 2 0.144326 3 1 0.866832 2 0.650100 Name: c, dtype: float64 ``` A similar usage of groupby is if we omit the ['c']. In this case, it creates a dataframe (not a series) of the sums of all remaining columns grouped by unique values of a and b. ``` print df.groupby([\"a\",\"b\"]).sum() c a b 1 1 0.528470 2 0.484766 2 1 0.187277 2 0.144326 3 1 0.866832 2 0.650100 ```", "best_answers_score":0.6994, "library_name":"pandas", "question_url":"https:\/\/stackoverflow.com\/questions\/34702815\/difference-between-groupby-and-pivot-table-for-pandas-dataframes", "best_answers_votes":141, "question_length":242, "response_length":1231 }, { "question":"Python pandas insert list into a cell I have a list 'abc' and a dataframe 'df': ``` abc = ['foo', 'bar'] df = A B 0 12 NaN 1 23 NaN ``` I want to insert the list into cell 1B, so I want this result: ``` A B 0 12 NaN 1 23 ['foo', 'bar'] ``` Ho can I do that? 1) If I use this: ``` df.ix[1,'B'] = abc ``` I get the following error message: ``` ValueError: Must have equal len keys and value when setting with an iterable ``` because it tries to insert the list (that has two elements) into a row \/ column but not into a cell. 
2) If I use this: ``` df.ix[1,'B'] = [abc] ``` then it inserts a list that has only one element that is the 'abc' list ( [['foo', 'bar']] ). 3) If I use this: ``` df.ix[1,'B'] = ', '.join(abc) ``` then it inserts a string: ( foo, bar ) but not a list. 4) If I use this: ``` df.ix[1,'B'] = [', '.join(abc)] ``` then it inserts a list but it has only one element ( ['foo, bar'] ) but not two as I want ( ['foo', 'bar'] ). Thanks for help! EDIT My new dataframe and the old list: ``` abc = ['foo', 'bar'] df2 = A B C 0 12 NaN 'bla' 1 23 NaN 'bla bla' ``` Another dataframe: ``` df3 = A B C D 0 12 NaN 'bla' ['item1', 'item2'] 1 23 NaN 'bla bla' [11, 12, 13] ``` I want insert the 'abc' list into df2.loc[1,'B'] and\/or df3.loc[1,'B']. If the dataframe has columns only with integer values and\/or NaN values and\/or list values then inserting a list into a cell works perfectly. If the dataframe has columns only with string values and\/or NaN values and\/or list values then inserting a list into a cell works perfectly. But if the dataframe has columns with integer and string values and other columns then the error message appears if I use this: df2.loc[1,'B'] = abc or df3.loc[1,'B'] = abc. Another dataframe: ``` df4 = A B 0 'bla' NaN 1 'bla bla' NaN ``` These inserts work perfectly: df.loc[1,'B'] = abc or df4.loc[1,'B'] = abc.", "response":"Since set_value has been deprecated since version 0.21.0, you should now use at. It can insert a list into a cell without raising a ValueError as loc does. I think this is because at always refers to a single value, while loc can refer to values as well as rows and columns. ``` df = pd.DataFrame(data={'A': [1, 2, 3], 'B': ['x', 'y', 'z']}) df.at[1, 'B'] = ['m', 'n'] df = A B 0 1 x 1 2 [m, n] 2 3 z ``` You also need to make sure the column you are inserting into has dtype=object. 
For example ``` >>> df = pd.DataFrame(data={'A': [1, 2, 3], 'B': [1,2,3]}) >>> df.dtypes A int64 B int64 dtype: object >>> df.at[1, 'B'] = [1, 2, 3] ValueError: setting an array element with a sequence >>> df['B'] = df['B'].astype('object') >>> df.at[1, 'B'] = [1, 2, 3] >>> df A B 0 1 1 1 2 [1, 2, 3] 2 3 3 ```", "best_answers_score":0.699, "library_name":"pandas", "question_url":"https:\/\/stackoverflow.com\/questions\/26483254\/python-pandas-insert-list-into-a-cell", "best_answers_votes":210, "question_length":1851, "response_length":795 }, { "question":"Why isn't my Pandas 'apply' function referencing multiple columns working? [closed] I have some problems with the Pandas apply function, when using multiple columns with the following dataframe ``` df = DataFrame ({'a' : np.random.randn(6), 'b' : ['foo', 'bar'] * 3, 'c' : np.random.randn(6)}) ``` and the following function ``` def my_test(a, b): return a % b ``` When I try to apply this function with: ``` df['Value'] = df.apply(lambda row: my_test(row[a], row[c]), axis=1) ``` I get the error message: ``` NameError: (\"global name 'a' is not defined\", u'occurred at index 0') ``` I do not understand this message, I defined the name properly. I would highly appreciate any help on this issue Update Thanks for your help. I made indeed some syntax mistakes with the code, the index should be put ''. However I still get the same issue using a more complex function such as: ``` def my_test(a): cum_diff = 0 for ix in df.index(): cum_diff = cum_diff + (a - df['a'][ix]) return cum_diff ```", "response":"It seems you forgot the quotes ('') around your column names. 
``` In [43]: df['Value'] = df.apply(lambda row: my_test(row['a'], row['c']), axis=1) In [44]: df Out[44]: a b c Value 0 -1.674308 foo 0.343801 0.044698 1 -2.163236 bar -2.046438 -0.116798 2 -0.199115 foo -0.458050 -0.199115 3 0.918646 bar -0.007185 -0.001006 4 1.336830 foo 0.534292 0.268245 5 0.976844 bar -0.773630 -0.570417 ``` BTW, in my opinion, the following way is more elegant: ``` In [53]: def my_test2(row): ....: return row['a'] % row['c'] ....: In [54]: df['Value'] = df.apply(my_test2, axis=1) ```", "best_answers_score":0.698, "library_name":"pandas", "question_url":"https:\/\/stackoverflow.com\/questions\/16353729\/why-isnt-my-pandas-apply-function-referencing-multiple-columns-working", "best_answers_votes":396, "question_length":1334, "response_length":546 }, { "question":"How to select the last column of dataframe I have done some searching for the answer to this question, but all I can figure out is this: ``` df[df.columns[len(df.columns)-1]] ``` which to me seems unwieldy, and un-pythonic (and slow?). What is the easiest way to select the data for the last column in a pandas dataframe without specifying the name of the column?", "response":"Use iloc and select all rows (:) against the last column (-1): ``` df.iloc[:,-1:] ```", "best_answers_score":0.6978, "library_name":"pandas", "question_url":"https:\/\/stackoverflow.com\/questions\/40144769\/how-to-select-the-last-column-of-dataframe", "best_answers_votes":218, "question_length":363, "response_length":85 }, { "question":"What is the most efficient way to loop through dataframes with pandas? I want to perform my own complex operations on financial data in dataframes in a sequential manner. 
For example I am using the following MSFT CSV file taken from Yahoo Finance: ``` Date,Open,High,Low,Close,Volume,Adj Close 2011-10-19,27.37,27.47,27.01,27.13,42880000,27.13 2011-10-18,26.94,27.40,26.80,27.31,52487900,27.31 2011-10-17,27.11,27.42,26.85,26.98,39433400,26.98 2011-10-14,27.31,27.50,27.02,27.27,50947700,27.27 .... ``` I then do the following: ``` #!\/usr\/bin\/env python from pandas import * df = read_csv('table.csv') for i, row in enumerate(df.values): date = df.index[i] open, high, low, close, adjclose = row #now perform analysis on open\/close based on date, etc.. ``` Is that the most efficient way? Given the focus on speed in pandas, I would assume there must be some special function to iterate through the values in a manner that one also retrieves the index (possibly through a generator to be memory efficient)? df.iteritems unfortunately only iterates column by column.", "response":"The newest versions of pandas now include a built-in function for iterating over rows. ``` for index, row in df.iterrows(): # do some logic here ``` Or, if you want it faster use itertuples() But, unutbu's suggestion to use numpy functions to avoid iterating over rows will produce the fastest code.", "best_answers_score":0.6958, "library_name":"pandas", "question_url":"https:\/\/stackoverflow.com\/questions\/7837722\/what-is-the-most-efficient-way-to-loop-through-dataframes-with-pandas", "best_answers_votes":426, "question_length":1065, "response_length":299 }, { "question":"How to suppress Pandas Future warning? When I run the program, Pandas gives 'Future warning' like below every time. ```none D:\\Python\\lib\\site-packages\\pandas\\core\\frame.py:3581: FutureWarning: rename with inplace=True will return None from pandas 0.11 onward \" from pandas 0.11 onward\", FutureWarning) ``` I got the message, but I just want to stop Pandas showing such message again and again. 
Is there any builtin parameter that I can set to make Pandas stop popping up the 'Future warning'?", "response":"Found this on github... ``` import warnings warnings.simplefilter(action='ignore', category=FutureWarning) import pandas ```", "best_answers_score":0.6956, "library_name":"pandas", "question_url":"https:\/\/stackoverflow.com\/questions\/15777951\/how-to-suppress-pandas-future-warning", "best_answers_votes":632, "question_length":493, "response_length":124 }, { "question":"is it possible to do fuzzy match merge with python pandas? I have two DataFrames which I want to merge based on a column. However, due to alternate spellings, different number of spaces, absence\/presence of diacritical marks, I would like to be able to merge as long as they are similar to one another. Any similarity algorithm will do (soundex, Levenshtein, difflib's). Say one DataFrame has the following data: ``` df1 = DataFrame([[1],[2],[3],[4],[5]], index=['one','two','three','four','five'], columns=['number']) number one 1 two 2 three 3 four 4 five 5 df2 = DataFrame([['a'],['b'],['c'],['d'],['e']], index=['one','too','three','fours','five'], columns=['letter']) letter one a too b three c fours d five e ``` Then I want to get the resulting DataFrame ``` number letter one 1 a two 2 b three 3 c four 4 d five 5 e ```", "response":"Similar to @locojay's suggestion, you can apply difflib's get_close_matches to df2's index and then apply a join: ``` In [23]: import difflib In [24]: difflib.get_close_matches Out[24]: <function difflib.get_close_matches> In [25]: df2.index = df2.index.map(lambda x: difflib.get_close_matches(x, df1.index)[0]) In [26]: df2 Out[26]: letter one a two b three c four d five e In [31]: df1.join(df2) Out[31]: number letter one 1 a two 2 b three 3 c four 4 d five 5 e ``` 
If these were columns, in the same vein you could apply to the column then merge: ``` df1 = DataFrame([[1,'one'],[2,'two'],[3,'three'],[4,'four'],[5,'five']], columns=['number', 'name']) df2 = DataFrame([['a','one'],['b','too'],['c','three'],['d','fours'],['e','five']], columns=['letter', 'name']) df2['name'] = df2['name'].apply(lambda x: difflib.get_close_matches(x, df1['name'])[0]) df1.merge(df2) ```", "best_answers_score":0.6952, "library_name":"pandas", "question_url":"https:\/\/stackoverflow.com\/questions\/13636848\/is-it-possible-to-do-fuzzy-match-merge-with-python-pandas", "best_answers_votes":111, "question_length":827, "response_length":838 }, { "question":"Make more than one chart in same IPython Notebook cell I have started my IPython Notebook with ``` ipython notebook --pylab inline ``` This is my code in one cell ``` df['korisnika'].plot() df['osiguranika'].plot() ``` This is working fine, it will draw two lines, but on the same chart. I would like to draw each line on a separate chart. And it would be great if the charts would be next to each other, not one after the other. I know that I can put the second line in the next cell, and then I would get two charts. But I would like the charts close to each other, because they represent the same logical unit.", "response":"You can also call the show() function after each plot. e.g ``` plt.plot(a) plt.show() plt.plot(b) plt.show() ```", "best_answers_score":0.6951, "library_name":"pandas", "question_url":"https:\/\/stackoverflow.com\/questions\/16392921\/make-more-than-one-chart-in-same-ipython-notebook-cell", "best_answers_votes":150, "question_length":613, "response_length":112 }, { "question":"How to assign a name to the size() column? I am using .size() on a groupby result in order to count how many items are in each group. I would like the result to be saved to a new column name without manually editing the column names array, how can it be done? 
This is what I have tried: ``` grpd = df.groupby(['A','B']) grpd['size'] = grpd.size() grpd ``` and the error I got: TypeError: 'DataFrameGroupBy' object does not support item assignment (on the second line)", "response":"The .size() built-in method of DataFrameGroupBy objects actually returns a Series object with the group sizes and not a DataFrame. If you want a DataFrame whose column is the group sizes, indexed by the groups, with a custom name, you can use the .to_frame() method and use the desired column name as its argument. ``` grpd = df.groupby(['A','B']).size().to_frame('size') ``` If you wanted the groups to be columns again you could add a .reset_index() at the end.", "best_answers_score":0.6951, "library_name":"pandas", "question_url":"https:\/\/stackoverflow.com\/questions\/17995024\/how-to-assign-a-name-to-the-size-column", "best_answers_votes":125, "question_length":467, "response_length":463 }, { "question":"Pandas get the most frequent values of a column I have this dataframe: ``` 0 name data 1 alex asd 2 helen sdd 3 alex dss 4 helen sdsd 5 john sdadd ``` and I am trying to get the most frequent value or values (in this case, two values), so what I do is: ``` dataframe['name'].value_counts().idxmax() ``` but it returns only the value alex, even though helen appears twice as well.", "response":"By using mode ``` df.name.mode() Out[712]: 0 alex 1 helen dtype: object ```", "best_answers_score":0.6946, "library_name":"pandas", "question_url":"https:\/\/stackoverflow.com\/questions\/48590268\/pandas-get-the-most-frequent-values-of-a-column", "best_answers_votes":140, "question_length":378, "response_length":75 }, { "question":"How to loop over grouped Pandas dataframe? 
DataFrame: ```none c_os_family_ss c_os_major_is l_customer_id_i 0 Windows 7 90418 1 Windows 7 90418 2 Windows 7 90418 ``` Code: ```py for name, group in df.groupby('l_customer_id_i').agg(lambda x: ','.join(x)): print name print group ``` I'm trying to just loop over the aggregated data, but I get the error: ```none ValueError: too many values to unpack ``` I wish to loop over every group. How do I do it?", "response":"df.groupby('l_customer_id_i').agg(lambda x: ','.join(x)) does already return a dataframe, so you cannot loop over the groups anymore. In general: df.groupby(...) returns a GroupBy object (a DataFrameGroupBy or SeriesGroupBy), and with this, you can iterate through the groups (as explained in the docs here). You can do something like: ``` grouped = df.groupby('A') for name, group in grouped: ... ``` When you apply a function on the groupby, in your example df.groupby(...).agg(...) (but this can also be transform, apply, mean, ...), you combine the result of applying the function to the different groups together in one dataframe (the apply and combine step of the 'split-apply-combine' paradigm of groupby). So the result of this will always be again a DataFrame (or a Series depending on the applied function).", "best_answers_score":0.694, "library_name":"pandas", "question_url":"https:\/\/stackoverflow.com\/questions\/27405483\/how-to-loop-over-grouped-pandas-dataframe", "best_answers_votes":404, "question_length":450, "response_length":817 }, { "question":"How to print pandas DataFrame without index I want to print the whole dataframe, but I don't want to print the index. Besides, one column is datetime type, and I just want to print the time, not the date. 
The dataframe looks like: ``` User ID Enter Time Activity Number 0 123 2014-07-08 00:09:00 1411 1 123 2014-07-08 00:18:00 893 2 123 2014-07-08 00:49:00 1041 ``` I want it to print as ``` User ID Enter Time Activity Number 123 00:09:00 1411 123 00:18:00 893 123 00:49:00 1041 ```", "response":"```py print(df.to_string(index=False)) ```", "best_answers_score":0.6939, "library_name":"pandas", "question_url":"https:\/\/stackoverflow.com\/questions\/24644656\/how-to-print-pandas-dataframe-without-index", "best_answers_votes":445, "question_length":467, "response_length":42 }, { "question":"Pandas: sum up multiple columns into one column without last column If I have a dataframe similar to this one ``` Apples Bananas Grapes Kiwis 2 3 nan 1 1 3 7 nan nan nan 2 3 ``` I would like to add a column like this ``` Apples Bananas Grapes Kiwis Fruit Total 2 3 nan 1 6 1 3 7 nan 11 nan nan 2 3 5 ``` I guess you could use df['Apples'] + df['Bananas'] and so on, but my actual dataframe is much larger than this. I was hoping a formula like df['Fruit Total']=df[-4:-1].sum could do the trick in one line of code. That didn't work however. 
Is there any way to do it without explicitly summing up all columns?", "response":"You can first select by iloc and then sum: ``` df['Fruit Total']= df.iloc[:, -4:-1].sum(axis=1) print (df) Apples Bananas Grapes Kiwis Fruit Total 0 2.0 3.0 NaN 1.0 5.0 1 1.0 3.0 7.0 NaN 11.0 2 NaN NaN 2.0 3.0 2.0 ``` To sum all columns, use: ``` df['Fruit Total']= df.sum(axis=1) ```", "best_answers_score":0.6938, "library_name":"pandas", "question_url":"https:\/\/stackoverflow.com\/questions\/42063716\/pandas-sum-up-multiple-columns-into-one-column-without-last-column", "best_answers_votes":142, "question_length":610, "response_length":284 }, { "question":"Pandas create new column with count from groupby I have a df that looks like the following: ``` id item color 01 truck red 02 truck red 03 car black 04 truck blue 05 car black ``` I am trying to create a df that looks like this: ``` item color count truck red 2 truck blue 1 car black 2 ``` I have tried ``` df[\"count\"] = df.groupby(\"item\")[\"color\"].transform('count') ``` But it is not quite what I am searching for. 
Any guidance is appreciated", "response":"That's not a new column, that's a new DataFrame: ``` In [11]: df.groupby([\"item\", \"color\"]).count() Out[11]: id item color car black 2 truck blue 1 red 2 ``` To get the result you want is to use reset_index: ``` In [12]: df.groupby([\"item\", \"color\"])[\"id\"].count().reset_index(name=\"count\") Out[12]: item color count 0 car black 2 1 truck blue 1 2 truck red 2 ``` To get a \"new column\" you could use transform: ``` In [13]: df.groupby([\"item\", \"color\"])[\"id\"].transform(\"count\") Out[13]: 0 2 1 2 2 2 3 1 4 2 dtype: int64 ``` I recommend reading the split-apply-combine section of the docs.", "best_answers_score":0.6916, "library_name":"pandas", "question_url":"https:\/\/stackoverflow.com\/questions\/29836477\/pandas-create-new-column-with-count-from-groupby", "best_answers_votes":150, "question_length":445, "response_length":589 }, { "question":"Boolean Series key will be reindexed to match DataFrame index Here is how I encountered the warning: ```py df.loc[a_list][df.a_col.isnull()] ``` The type of a_list is Int64Index, it contains a list of row indexes. All of these row indexes belong to df. The df.a_col.isnull() part is a condition I need for filtering. If I execute the following commands individually, I do not get any warnings: ```py df.loc[a_list] df[df.a_col.isnull()] ``` But if I put them together df.loc[a_list][df.a_col.isnull()], I get the warning message (but I can see the result): Boolean Series key will be reindexed to match DataFrame index What is the meaning of this warning message? Does it affect the result that it returned?", "response":"Your approach will work despite the warning, but it's best not to rely on implicit, unclear behavior. 
Solution 1, make the selection of indices in a_list a boolean mask: ``` df[df.index.isin(a_list) & df.a_col.isnull()] ``` Solution 2, do it in two steps: ``` df2 = df.loc[a_list] df2[df2.a_col.isnull()] ``` Solution 3, if you want a one-liner, use a trick found here: ``` df.loc[a_list].query('a_col != a_col') ``` The warning comes from the fact that the boolean vector df.a_col.isnull() is the length of df, while df.loc[a_list] is of the length of a_list, i.e. shorter. Therefore, some indices in df.a_col.isnull() are not in df.loc[a_list]. What pandas does is reindex the boolean series on the index of the calling dataframe. In effect, it gets from df.a_col.isnull() the values corresponding to the indices in a_list. This works, but the behavior is implicit, and could easily change in the future, so that's what the warning is about.", "best_answers_score":0.6902, "library_name":"pandas", "question_url":"https:\/\/stackoverflow.com\/questions\/41710789\/boolean-series-key-will-be-reindexed-to-match-dataframe-index", "best_answers_votes":132, "question_length":707, "response_length":943 }, { "question":"How to reversibly store and load a Pandas dataframe to\/from disk Right now I'm importing a fairly large CSV as a dataframe every time I run the script. Is there a good solution for keeping that dataframe constantly available in between runs so I don't have to spend all that time waiting for the script to run?", "response":"The easiest way is to pickle it using to_pickle: ``` df.to_pickle(file_name) # where to save it, usually as a .pkl ``` Then you can load it back using: ``` df = pd.read_pickle(file_name) ``` Note: before 0.11.1 save and load were the only way to do this (they are now deprecated in favor of to_pickle and read_pickle respectively). 
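To make the round-trip concrete, here is a minimal sketch (the frame and the file name are just illustrative); pickling preserves both the values and the dtypes:

```py
import pandas as pd

# a small frame with mixed dtypes (illustrative data)
df = pd.DataFrame({"a": [1, 2, 3], "b": [0.5, 1.5, 2.5], "c": ["x", "y", "z"]})

df.to_pickle("my_df.pkl")               # save it (any writable path works)
restored = pd.read_pickle("my_df.pkl")  # load it back

print(restored.equals(df))                   # True: values survive
print((restored.dtypes == df.dtypes).all())  # True: dtypes survive
```
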
Another popular choice is to use HDF5 (pytables) which offers very fast access times for large datasets: ``` import pandas as pd store = pd.HDFStore('store.h5') store['df'] = df # save it store['df'] # load it ``` More advanced strategies are discussed in the cookbook. Since 0.13 there's also msgpack which may be better for interoperability, as a faster alternative to JSON, or if you have python object\/text-heavy data (see this question).", "best_answers_score":0.6898, "library_name":"pandas", "question_url":"https:\/\/stackoverflow.com\/questions\/17098654\/how-to-reversibly-store-and-load-a-pandas-dataframe-to-from-disk", "best_answers_votes":705, "question_length":310, "response_length":777 }, { "question":"Pandas DataFrame concat vs append I have a list of 4 pandas dataframes containing a day of tick data that I want to merge into a single data frame. I cannot understand the behavior of concat on my timestamps. See details below: ``` data [ DatetimeIndex: 35228 entries, 2013-03-28 00:00:07.089000+02:00 to 2013-03-28 18:59:20.357000+02:00 Data columns: Price 4040 non-null values Volume 4040 non-null values BidQty 35228 non-null values BidPrice 35228 non-null values AskPrice 35228 non-null values AskQty 35228 non-null values dtypes: float64(6), DatetimeIndex: 33088 entries, 2013-04-01 00:03:17.047000+02:00 to 2013-04-01 18:59:58.175000+02:00 Data columns: Price 3969 non-null values Volume 3969 non-null values BidQty 33088 non-null values BidPrice 33088 non-null values AskPrice 33088 non-null values AskQty 33088 non-null values dtypes: float64(6), DatetimeIndex: 50740 entries, 2013-04-02 00:03:27.470000+02:00 to 2013-04-02 18:59:58.172000+02:00 Data columns: Price 7326 non-null values Volume 7326 non-null values BidQty 50740 non-null values BidPrice 50740 non-null values AskPrice 50740 non-null values AskQty 50740 non-null values dtypes: float64(6), DatetimeIndex: 60799 entries, 2013-04-03 00:03:06.994000+02:00 to 2013-04-03 18:59:58.180000+02:00 Data columns: 
Price 8258 non-null values Volume 8258 non-null values BidQty 60799 non-null values BidPrice 60799 non-null values AskPrice 60799 non-null values AskQty 60799 non-null values dtypes: float64(6)] ``` Using append I get: ``` pd.DataFrame().append(data) DatetimeIndex: 179855 entries, 2013-03-28 00:00:07.089000+02:00 to 2013-04-03 18:59:58.180000+02:00 Data columns: AskPrice 179855 non-null values AskQty 179855 non-null values BidPrice 179855 non-null values BidQty 179855 non-null values Price 23593 non-null values Volume 23593 non-null values dtypes: float64(6) ``` Using concat I get: ``` pd.concat(data) DatetimeIndex: 179855 entries, 2013-03-27 22:00:07.089000+02:00 to 2013-04-03 16:59:58.180000+02:00 Data columns: Price 23593 non-null values Volume 23593 non-null values BidQty 179855 non-null values BidPrice 179855 non-null values AskPrice 179855 non-null values AskQty 179855 non-null values dtypes: float64(6) ``` Notice how the index changes when using concat. Why is that happening and how would I go about using concat to reproduce the results obtained using append? (Since concat seems so much faster; 24.6 ms per loop vs 3.02 s per loop)", "response":"Pandas concat vs append vs join vs merge: Concat gives the flexibility to join along either axis (all rows or all columns). Append is the specific case (axis=0, join='outer') of concat (it is being deprecated; use concat instead). Join is based on the indexes (set by set_index), with the how argument taking one of ['left', 'right', 'inner', 'outer']. Merge is based on particular columns of each of the two dataframes; these columns are specified via 'left_on', 'right_on', or 'on'.", "best_answers_score":0.6895, "library_name":"pandas", "question_url":"https:\/\/stackoverflow.com\/questions\/15819050\/pandas-dataframe-concat-vs-append", "best_answers_votes":190, "question_length":2436, "response_length":441 }, { "question":"Is there a way to copy only the structure (not the data) of a Pandas DataFrame? 
I received a DataFrame from somewhere and want to create another DataFrame with the same number and names of columns and rows (indexes). For example, suppose that the original data frame was created as ``` import pandas as pd df1 = pd.DataFrame([[11,12],[21,22]], columns=['c1','c2'], index=['i1','i2']) ``` I copied the structure by explicitly defining the columns and names: ``` df2 = pd.DataFrame(columns=df1.columns, index=df1.index) ``` I don't want to copy the data, otherwise I could just write df2 = df1.copy(). In other words, after df2 is created it must contain only NaN elements: ``` In [1]: df1 Out[1]: c1 c2 i1 11 12 i2 21 22 In [2]: df2 Out[2]: c1 c2 i1 NaN NaN i2 NaN NaN ``` Is there a more idiomatic way of doing it?", "response":"That's a job for reindex_like. Start with the original: ``` df1 = pd.DataFrame([[11, 12], [21, 22]], columns=['c1', 'c2'], index=['i1', 'i2']) ``` Construct an empty DataFrame and reindex it like df1: ``` pd.DataFrame().reindex_like(df1) Out: c1 c2 i1 NaN NaN i2 NaN NaN ```", "best_answers_score":0.6888, "library_name":"pandas", "question_url":"https:\/\/stackoverflow.com\/questions\/27467730\/is-there-a-way-to-copy-only-the-structure-not-the-data-of-a-pandas-dataframe", "best_answers_votes":84, "question_length":817, "response_length":274 }, { "question":"How to create a large pandas dataframe from an sql query without running out of memory? I have trouble querying a table of > 5 million records from an MS SQL Server database. I want to select all of the records, but my code seems to fail when selecting too much data into memory. 
This works: ``` import pandas.io.sql as psql sql = \"SELECT TOP 1000000 * FROM MyTable\" data = psql.read_frame(sql, cnxn) ``` ...but this does not work: ``` sql = \"SELECT TOP 2000000 * FROM MyTable\" data = psql.read_frame(sql, cnxn) ``` It returns this error: ``` File \"inference.pyx\", line 931, in pandas.lib.to_object_array_tuples (pandas\\lib.c:42733) Memory Error ``` I have read here that a similar problem exists when creating a dataframe from a csv file, and that the work-around is to use the 'iterator' and 'chunksize' parameters like this: ``` read_csv('exp4326.csv', iterator=True, chunksize=1000) ``` Is there a similar solution for querying from an SQL database? If not, what is the preferred work-around? Should I use some other methods to read the records in chunks? I read a bit of discussion here about working with large datasets in pandas, but it seems like a lot of work to execute a SELECT * query. Surely there is a simpler approach.", "response":"As mentioned in a comment, starting from pandas 0.15, you have a chunksize option in read_sql to read and process the query chunk by chunk: ``` sql = \"SELECT * FROM My_Table\" for chunk in pd.read_sql_query(sql , engine, chunksize=5): print(chunk) ``` Reference: http:\/\/pandas.pydata.org\/pandas-docs\/version\/0.15.2\/io.html#querying", "best_answers_score":0.6882, "library_name":"pandas", "question_url":"https:\/\/stackoverflow.com\/questions\/18107953\/how-to-create-a-large-pandas-dataframe-from-an-sql-query-without-running-out-of", "best_answers_votes":83, "question_length":1229, "response_length":330 }, { "question":"How to invert the x or y axis I have a scatter plot graph with a bunch of random x, y coordinates. Currently the Y-Axis starts at 0 and goes up to the max value. I would like the Y-Axis to start at the max value and go up to 0. 
``` points = [(10,5), (5,11), (24,13), (7,8)] x_arr = [] y_arr = [] for x,y in points: x_arr.append(x) y_arr.append(y) plt.scatter(x_arr,y_arr) ```", "response":"There is a new API that makes this even simpler. ``` plt.gca().invert_xaxis() ``` and\/or ``` plt.gca().invert_yaxis() ```", "best_answers_score":0.687, "library_name":"pandas", "question_url":"https:\/\/stackoverflow.com\/questions\/2051744\/how-to-invert-the-x-or-y-axis", "best_answers_votes":848, "question_length":375, "response_length":121 }, { "question":"Pandas: Subtracting two date columns and the result being an integer I have two columns in a Pandas data frame that are dates. I am looking to subtract one column from another and the result being the difference in numbers of days as an integer. A peek at the data: ``` df_test.head(10) Out[20]: First_Date Second Date 0 2016-02-09 2015-11-19 1 2016-01-06 2015-11-30 2 NaT 2015-12-04 3 2016-01-06 2015-12-08 4 NaT 2015-12-09 5 2016-01-07 2015-12-11 6 NaT 2015-12-12 7 NaT 2015-12-14 8 2016-01-06 2015-12-14 9 NaT 2015-12-15 ``` I have created a new column successfully with the difference: ``` df_test['Difference'] = df_test['First_Date'].sub(df_test['Second Date'], axis=0) df_test.head() Out[22]: First_Date Second Date Difference 0 2016-02-09 2015-11-19 82 days 1 2016-01-06 2015-11-30 37 days 2 NaT 2015-12-04 NaT 3 2016-01-06 2015-12-08 29 days 4 NaT 2015-12-09 NaT ``` However I am unable to get a numeric version of the result: ``` df_test['Difference'] = df_test[['Difference']].apply(pd.to_numeric) df_test.head() Out[25]: First_Date Second Date Difference 0 2016-02-09 2015-11-19 7.084800e+15 1 2016-01-06 2015-11-30 3.196800e+15 2 NaT 2015-12-04 NaN 3 2016-01-06 2015-12-08 2.505600e+15 4 NaT 2015-12-09 NaN ```", "response":"How about: ``` df_test['Difference'] = (df_test['First_Date'] - df_test['Second Date']).dt.days ``` This will return difference as int if there are no missing values(NaT) and float if there is. 
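To make the int-vs-float point concrete, here is a minimal sketch (column names and dates are just illustrative):

```py
import pandas as pd

df = pd.DataFrame({
    "First_Date": pd.to_datetime(["2016-02-09", "2016-01-06", pd.NaT]),
    "Second_Date": pd.to_datetime(["2015-11-19", "2015-11-30", "2015-12-04"]),
})

# subtracting two datetime columns yields timedelta64; .dt.days extracts day counts
df["Difference"] = (df["First_Date"] - df["Second_Date"]).dt.days
print(df["Difference"].tolist())  # [82.0, 37.0, nan]
print(df["Difference"].dtype)     # float64, because the NaT row produced a NaN

# with the missing row dropped, the same expression comes back as integers
clean = df.dropna(subset=["First_Date"])
print((clean["First_Date"] - clean["Second_Date"]).dt.days.dtype)  # int64
```
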
Pandas has rich documentation on Time series \/ date functionality and Time deltas.", "best_answers_score":0.6869, "library_name":"pandas", "question_url":"https:\/\/stackoverflow.com\/questions\/37840812\/pandas-subtracting-two-date-columns-and-the-result-being-an-integer", "best_answers_votes":188, "question_length":1223, "response_length":278 }, { "question":"return column names from pyodbc execute() statement ``` from pandas import DataFrame import pyodbc cnxn = pyodbc.connect(databasez) cursor.execute(\"\"\"SELECT ID, NAME AS Nickname, ADDRESS AS Residence FROM tablez\"\"\") DF = DataFrame(cursor.fetchall()) ``` This is fine to populate my pandas DataFrame. But how do I get ``` DF.columns = ['ID', 'Nickname', 'Residence'] ``` straight from cursor? Is that information stored in cursor at all?", "response":"You can get the columns from the cursor description: columns = [column[0] for column in cursor.description]", "best_answers_score":0.6869, "library_name":"pandas", "question_url":"https:\/\/stackoverflow.com\/questions\/12704305\/return-column-names-from-pyodbc-execute-statement", "best_answers_votes":168, "question_length":436, "response_length":107 }, { "question":"Filter string data based on its string length I would like to filter out data whose string length is not equal to 10. To filter out any row whose column A's or B's string length is not equal to 10, I tried this. 
Based on the answers and comments below, the simplest solutions I found are: ``` df=df[df.A.apply(lambda x: len(str(x))==10)] df=df[df.B.apply(lambda x: len(str(x))==10)] ``` or ``` df=df[(df.A.apply(lambda x: len(str(x))==10)) & (df.B.apply(lambda x: len(str(x))==10))] ``` or ``` df=df[(df.A.astype(str).str.len()==10) & (df.B.astype(str).str.len()==10)] ```", "response":"``` import pandas as pd df = pd.read_csv('filex.csv') df['A'] = df['A'].astype('str') df['B'] = df['B'].astype('str') mask = (df['A'].str.len() == 10) & (df['B'].str.len() == 10) df = df.loc[mask] print(df) ``` Applied to filex.csv: ``` A,B 123,abc 1234,abcd 1234567890,abcdefghij ``` the code above prints ``` A B 2 1234567890 abcdefghij ```", "best_answers_score":0.6863, "library_name":"pandas", "question_url":"https:\/\/stackoverflow.com\/questions\/19937362\/filter-string-data-based-on-its-string-length", "best_answers_votes":209, "question_length":1089, "response_length":342 }, { "question":"Querying for NaN and other names in Pandas Say I have a dataframe df with a column value holding some float values and some NaN. How can I get the part of the dataframe where we have NaN using the query syntax? The following, for example, does not work: ``` df.query( '(value < 10) or (value == NaN)' ) ``` I get name NaN is not defined (same for df.query('value ==NaN')) Generally speaking, is there any way to use numpy names in query, such as inf, nan, pi, e, etc.?", "response":"In general, you could use @local_variable_name, so something like ``` >>> pi = np.pi; nan = np.nan >>> df = pd.DataFrame({\"value\": [3,4,9,10,11,np.nan,12]}) >>> df.query(\"(value < 10) and (value > @pi)\") value 1 4 2 9 ``` would work, but nan isn't equal to itself, so value == NaN will always be false. One way to hack around this is to use that fact, and use value != value as an isnan check. 
We have ``` >>> df.query(\"(value < 10)\") value 0 3 1 4 2 9 >>> df.query(\"(value < 10) or (value != value)\") value 0 3 1 4 2 9 5 NaN ```", "best_answers_score":0.6857, "library_name":"pandas", "question_url":"https:\/\/stackoverflow.com\/questions\/26535563\/querying-for-nan-and-other-names-in-pandas", "best_answers_votes":119, "question_length":468, "response_length":484 }, { "question":"converting list of header and row lists into pandas DataFrame I am reading the contents of a spreadsheet into pandas. DataNitro has a method that returns a rectangular selection of cells as a list of lists. So ``` table = Cell(\"A1\").table ``` gives ``` table = [['Heading1', 'Heading2'], [1 , 2], [3, 4]] headers = table.pop(0) # gives the headers as a list and leaves the data ``` I am busy writing code to translate this, but my guess is that it is such a simple use case that there must be a method to do this. Can't seem to find it in the documentation. Any pointers to the method that would simplify this?", "response":"Call the pd.DataFrame constructor directly: ``` df = pd.DataFrame(table, columns=headers) df Heading1 Heading2 0 1 2 1 3 4 ```", "best_answers_score":0.6852, "library_name":"pandas", "question_url":"https:\/\/stackoverflow.com\/questions\/19112398\/converting-list-of-header-and-row-lists-into-pandas-dataframe", "best_answers_votes":343, "question_length":588, "response_length":126 }, { "question":"Opposite of melt in python pandas I cannot figure out how to do \"reverse melt\" using Pandas in python. 
This is my starting data ``` label type value 0 x a 1 1 x b 2 2 x c 3 3 y a 4 4 y b 5 5 y c 6 6 z a 7 7 z b 8 8 z c 9 ``` This is the output I would like to have: ``` label a b c x 1 2 3 y 4 5 6 z 7 8 9 ``` I'm sure there is an easy way to do this, but I don't know how.", "response":"there are a few ways: using .pivot: ``` >>> origin.pivot(index='label', columns='type')['value'] type a b c label x 1 2 3 y 4 5 6 z 7 8 9 [3 rows x 3 columns] ``` using pivot_table: ``` >>> origin.pivot_table(values='value', index='label', columns='type') value type a b c label x 1 2 3 y 4 5 6 z 7 8 9 [3 rows x 3 columns] ``` or .groupby followed by .unstack: ``` >>> origin.groupby(['label', 'type'])['value'].aggregate('mean').unstack() type a b c label x 1 2 3 y 4 5 6 z 7 8 9 [3 rows x 3 columns] ```", "best_answers_score":0.6851, "library_name":"pandas", "question_url":"https:\/\/stackoverflow.com\/questions\/22127569\/opposite-of-melt-in-python-pandas", "best_answers_votes":142, "question_length":373, "response_length":506 }, { "question":"pandas groupby, then sort within groups I want to group my dataframe by two columns and then sort the aggregated results within those groups. ``` In [167]: df Out[167]: count job source 0 2 sales A 1 4 sales B 2 6 sales C 3 3 sales D 4 7 sales E 5 5 market A 6 3 market B 7 2 market C 8 4 market D 9 1 market E In [168]: df.groupby(['job','source']).agg({'count':sum}) Out[168]: count job source market A 5 B 3 C 2 D 4 E 1 sales A 2 B 4 C 6 D 3 E 7 ``` I would now like to sort the 'count' column in descending order within each of the groups, and then take only the top three rows. To get something like: ``` count job source market A 5 D 4 B 3 sales E 7 C 6 B 4 ```", "response":"You could also just do it in one go, by doing the sort first and using head to take the first 3 of each group. 
``` In[34]: df.sort_values(['job','count'],ascending=False).groupby('job').head(3) Out[35]: count job source 4 7 sales E 2 6 sales C 1 4 sales B 5 5 market A 8 4 market D 6 3 market B ```", "best_answers_score":0.685, "library_name":"pandas", "question_url":"https:\/\/stackoverflow.com\/questions\/27842613\/pandas-groupby-then-sort-within-groups", "best_answers_votes":333, "question_length":667, "response_length":298 }, { "question":"How to select all columns except one in pandas? I have a dataframe that look like this: ```none a b c d 0 0.418762 0.042369 0.869203 0.972314 1 0.991058 0.510228 0.594784 0.534366 2 0.407472 0.259811 0.396664 0.894202 3 0.726168 0.139531 0.324932 0.906575 ``` How I can get all columns except b?", "response":"When the columns are not a MultiIndex, df.columns is just an array of column names so you can do: ``` df.loc[:, df.columns != 'b'] a c d 0 0.561196 0.013768 0.772827 1 0.882641 0.615396 0.075381 2 0.368824 0.651378 0.397203 3 0.788730 0.568099 0.869127 ```", "best_answers_score":0.6849, "library_name":"pandas", "question_url":"https:\/\/stackoverflow.com\/questions\/29763620\/how-to-select-all-columns-except-one-in-pandas", "best_answers_votes":800, "question_length":295, "response_length":256 }, { "question":"Rename Pandas DataFrame Index I've a csv file without header, with a DateTime index. I want to rename the index and column name, but with df.rename() only the column name is renamed. Bug? 
I'm on version 0.12.0 ``` In [2]: df = pd.read_csv(r'D:\\Data\\DataTimeSeries_csv\/\/seriesSM.csv', header=None, parse_dates=[[0]], index_col=[0] ) In [3]: df.head() Out[3]: 1 0 2002-06-18 0.112000 2002-06-22 0.190333 2002-06-26 0.134000 2002-06-30 0.093000 2002-07-04 0.098667 In [4]: df.rename(index={0:'Date'}, columns={1:'SM'}, inplace=True) In [5]: df.head() Out[5]: SM 0 2002-06-18 0.112000 2002-06-22 0.190333 2002-06-26 0.134000 2002-06-30 0.093000 2002-07-04 0.098667 ```", "response":"The rename method takes a dictionary for the index which applies to index values. You want to rename to index level's name: ``` df.index.names = ['Date'] ``` A good way to think about this is that columns and index are the same type of object (Index or MultiIndex), and you can interchange the two via transpose. This is a little bit confusing since the index names have a similar meaning to columns, so here are some more examples: ``` In [1]: df = pd.DataFrame([[1, 2, 3], [4, 5 ,6]], columns=list('ABC')) In [2]: df Out[2]: A B C 0 1 2 3 1 4 5 6 In [3]: df1 = df.set_index('A') In [4]: df1 Out[4]: B C A 1 2 3 4 5 6 ``` You can see the rename on the index, which can change the value 1: ``` In [5]: df1.rename(index={1: 'a'}) Out[5]: B C A a 2 3 4 5 6 In [6]: df1.rename(columns={'B': 'BB'}) Out[6]: BB C A 1 2 3 4 5 6 ``` Whilst renaming the level names: ``` In [7]: df1.index.names = ['index'] df1.columns.names = ['column'] ``` Note: this attribute is just a list, and you could do the renaming as a list comprehension\/map. ``` In [8]: df1 Out[8]: column B C index 1 2 3 4 5 6 ```", "best_answers_score":0.6843, "library_name":"pandas", "question_url":"https:\/\/stackoverflow.com\/questions\/19851005\/rename-pandas-dataframe-index", "best_answers_votes":446, "question_length":664, "response_length":1086 }, { "question":"Multi Index Sorting in Pandas I have a dataset with multi-index columns in a pandas df that I would like to sort by values in a specific column. 
My dataset looks like: ```none Group1 Group2 A B C A B C 1 1 0 3 2 5 7 2 5 6 9 1 0 0 3 7 0 2 0 3 5 ``` I want to sort all data and the index by column C in Group 1 in descending order so my results look like: ```none Group1 Group2 A B C A B C 2 5 6 9 1 0 0 1 1 0 3 2 5 7 3 7 0 2 0 3 5 ``` Is it possible to do this sort with the structure that my data is in, or should I be swapping Group1 to the index side?", "response":"When sorting by a MultiIndex you need to contain the tuple describing the column inside a list*: ``` In [11]: df.sort_values([('Group1', 'C')], ascending=False) Out[11]: Group1 Group2 A B C A B C 2 5 6 9 1 0 0 1 1 0 3 2 5 7 3 7 0 2 0 3 5 ``` * so as not to confuse pandas into thinking you want to sort first by Group1 then by C. Note: Originally used .sort since deprecated then removed in 0.20, in favor of .sort_values.", "best_answers_score":0.6838, "library_name":"pandas", "question_url":"https:\/\/stackoverflow.com\/questions\/14733871\/multi-index-sorting-in-pandas", "best_answers_votes":166, "question_length":553, "response_length":422 }, { "question":"Pandas merge two dataframes with different columns I'm surely missing something simple here. Trying to merge two dataframes in pandas that have mostly the same column names, but the right dataframe has some columns that the left doesn't have, and vice versa. ``` >df_may id quantity attr_1 attr_2 0 1 20 0 1 1 2 23 1 1 2 3 19 1 1 3 4 19 0 0 >df_jun id quantity attr_1 attr_3 0 5 8 1 0 1 6 13 0 1 2 7 20 1 1 3 8 25 1 1 ``` I've tried joining with an outer join: ``` mayjundf = pd.DataFrame.merge(df_may, df_jun, how=\"outer\") ``` But that yields: ``` Left data columns not unique: Index([.... ``` I've also specified a single column to join on (on = \"id\", e.g.), but that duplicates all columns except id like attr_1_x, attr_1_y, which is not ideal. 
I've also passed the entire list of columns (there are many) to on: ``` mayjundf = pd.DataFrame.merge(df_may, df_jun, how=\"outer\", on=list(df_may.columns.values)) ``` Which yields: ``` ValueError: Buffer has wrong number of dimensions (expected 1, got 2) ``` What am I missing? I'd like to get a df with all rows appended, and attr_1, attr_2, attr_3 populated where possible, NaN where they don't show up. This seems like a pretty typical workflow for data munging, but I'm stuck.", "response":"I think in this case concat is what you want: ``` In [12]: pd.concat([df,df1], axis=0, ignore_index=True) Out[12]: attr_1 attr_2 attr_3 id quantity 0 0 1 NaN 1 20 1 1 1 NaN 2 23 2 1 1 NaN 3 19 3 0 0 NaN 4 19 4 1 NaN 0 5 8 5 0 NaN 1 6 13 6 1 NaN 1 7 20 7 1 NaN 1 8 25 ``` by passing axis=0 here you are stacking the df's on top of each other which I believe is what you want then producing NaN value where they are absent from their respective dfs.", "best_answers_score":0.6826, "library_name":"pandas", "question_url":"https:\/\/stackoverflow.com\/questions\/28097222\/pandas-merge-two-dataframes-with-different-columns", "best_answers_votes":186, "question_length":1228, "response_length":447 }, { "question":"Pandas: drop columns with all NaN's I have this DataFrame: ``` 0 1 2 3 4 5 6 7 0 #0915-8 NaN NaN NaN NaN NaN NaN NaN 1 NaN NaN NaN LIVE WGT NaN AMOUNT NaN TOTAL 2 GBW COD NaN NaN 2,280 NaN $0.60 NaN $1,368.00 3 POLLOCK NaN NaN 1,611 NaN $0.01 NaN $16.11 4 WHAKE NaN NaN 441 NaN $0.70 NaN $308.70 5 GBE HADDOCK NaN NaN 2,788 NaN $0.01 NaN $27.88 6 GBW HADDOCK NaN NaN 16,667 NaN $0.01 NaN $166.67 7 REDFISH NaN NaN 932 NaN $0.01 NaN $9.32 8 GB WINTER FLOUNDER NaN NaN 145 NaN $0.25 NaN $36.25 9 GOM WINTER FLOUNDER NaN NaN 25,070 NaN $0.35 NaN $8,774.50 10 GB YELLOWTAIL NaN NaN 26 NaN $1.75 NaN $45.50 ``` I want to drop all NaNs as well as any columns with more than 3 NaNs (either one, or both, should work I think). 
I tried this code: ``` fish_frame.dropna() fish_frame.dropna(thresh=len(fish_frame) - 3, axis=1) ``` but it seems not to have any effect on the DataFrame - I see the same results afterward. What is wrong with the code, and how do I fix it?", "response":"From the dropna docstring: Drop the columns where all elements are NaN: ``` df.dropna(axis=1, how='all') A B D 0 NaN 2.0 0 1 3.0 4.0 1 2 NaN NaN 5 ```", "best_answers_score":0.6819, "library_name":"pandas", "question_url":"https:\/\/stackoverflow.com\/questions\/45147100\/pandas-drop-columns-with-all-nans", "best_answers_votes":306, "question_length":958, "response_length":150 }, { "question":"extracting days from a numpy.timedelta64 value I am using pandas\/python and I have two date time series s1 and s2, that have been generated using the 'to_datetime' function on a field of the df containing dates\/times. When I subtract s1 from s2 s3 = s2 - s1 I get a series, s3, of type timedelta64[ns] ``` 0 385 days, 04:10:36 1 57 days, 22:54:00 2 642 days, 21:15:23 3 615 days, 00:55:44 4 160 days, 22:13:35 5 196 days, 23:06:49 6 23 days, 22:57:17 7 2 days, 22:17:31 8 622 days, 01:29:25 9 79 days, 20:15:14 10 23 days, 22:46:51 11 268 days, 19:23:04 12 NaT 13 NaT 14 583 days, 03:40:39 ``` How do I look at 1 element of the series: s3[10] I get something like this: numpy.timedelta64(2069211000000000,'ns') How do I extract days from s3 and maybe keep them as integers(not so interested in hours\/mins etc.)?", "response":"You can convert it to a timedelta with a day precision. To extract the integer value of days you divide it with a timedelta of one day. ``` >>> x = np.timedelta64(2069211000000000, 'ns') >>> days = x.astype('timedelta64[D]') >>> days \/ np.timedelta64(1, 'D') 23 ``` Or, as @PhillipCloud suggested, just days.astype(int) since the timedelta is just a 64bit integer that is interpreted in various ways depending on the second parameter you passed in ('D', 'ns', ...). 
You can find more about it here.", "best_answers_score":0.6812, "library_name":"pandas", "question_url":"https:\/\/stackoverflow.com\/questions\/18215317\/extracting-days-from-a-numpy-timedelta64-value", "best_answers_votes":197, "question_length":811, "response_length":498 }, { "question":"How to explode a list inside a Dataframe cell into separate rows I'm looking to turn a pandas cell containing a list into rows for each of those values. So, take this: If I'd like to unpack and stack the values in the nearest_neighbors column so that each value would be a row within each opponent index, how would I best go about this? Are there pandas methods that are meant for operations like this?", "response":"Exploding a list-like column has been simplified significantly in pandas 0.25 with the addition of the explode() method: ``` df = (pd.DataFrame({'name': ['A.J. Price'] * 3, 'opponent': ['76ers', 'blazers', 'bobcats'], 'nearest_neighbors': [['Zach LaVine', 'Jeremy Lin', 'Nate Robinson', 'Isaia']] * 3}) .set_index(['name', 'opponent'])) df.explode('nearest_neighbors') ``` Out: ``` nearest_neighbors name opponent A.J. 
Price 76ers Zach LaVine 76ers Jeremy Lin 76ers Nate Robinson 76ers Isaia blazers Zach LaVine blazers Jeremy Lin blazers Nate Robinson blazers Isaia bobcats Zach LaVine bobcats Jeremy Lin bobcats Nate Robinson bobcats Isaia ```", "best_answers_score":0.6806, "library_name":"pandas", "question_url":"https:\/\/stackoverflow.com\/questions\/32468402\/how-to-explode-a-list-inside-a-dataframe-cell-into-separate-rows", "best_answers_votes":76, "question_length":402, "response_length":645 }, { "question":"Prepend a level to a pandas MultiIndex I have a DataFrame with a MultiIndex created after some grouping: ```py import numpy as np import pandas as pd from numpy.random import randn df = pd.DataFrame({'A' : ['a1', 'a1', 'a2', 'a3'], 'B' : ['b1', 'b2', 'b3', 'b4'], 'Vals' : randn(4)} ).groupby(['A', 'B']).sum() # Vals # A B # a1 b1 -1.632460 # b2 0.596027 # a2 b3 -0.619130 # a3 b4 -0.002009 ``` How do I prepend a level to the MultiIndex so that I turn it into something like: ```py # Vals # FirstLevel A B # Foo a1 b1 -1.632460 # b2 0.596027 # a2 b3 -0.619130 # a3 b4 -0.002009 ```", "response":"A nice way to do this in one line using pandas.concat(): ``` import pandas as pd pd.concat([df], keys=['Foo'], names=['Firstlevel']) ``` An even shorter way: ``` pd.concat({'Foo': df}, names=['Firstlevel']) ``` This can be generalized to many data frames, see the docs.", "best_answers_score":0.6803, "library_name":"pandas", "question_url":"https:\/\/stackoverflow.com\/questions\/14744068\/prepend-a-level-to-a-pandas-multiindex", "best_answers_votes":269, "question_length":583, "response_length":269 }, { "question":"Pandas: Looking up the list of sheets in an excel file The new version of Pandas uses the following interface to load Excel files: ``` read_excel('path_to_file.xls', 'Sheet1', index_col=None, na_values=['NA']) ``` but what if I don't know the sheets that are available? 
For example, I am working with excel files that have the following sheets Data 1, Data 2 ..., Data N, foo, bar but I don't know N a priori. Is there any way to get the list of sheets from an excel document in Pandas?", "response":"You can still use the ExcelFile class (and the sheet_names attribute): ``` xl = pd.ExcelFile('foo.xls') xl.sheet_names # see all sheet names xl.parse(sheet_name) # read a specific sheet to DataFrame ``` see docs for parse for more options...", "best_answers_score":0.6802, "library_name":"pandas", "question_url":"https:\/\/stackoverflow.com\/questions\/17977540\/pandas-looking-up-the-list-of-sheets-in-an-excel-file", "best_answers_votes":529, "question_length":481, "response_length":241 }, { "question":"pandas apply function that returns multiple values to rows in pandas dataframe I have a dataframe with a timeindex and 3 columns containing the coordinates of a 3D vector: ``` x y z ts 2014-05-15 10:38 0.120117 0.987305 0.116211 2014-05-15 10:39 0.117188 0.984375 0.122070 2014-05-15 10:40 0.119141 0.987305 0.119141 2014-05-15 10:41 0.116211 0.984375 0.120117 2014-05-15 10:42 0.119141 0.983398 0.118164 ``` I would like to apply a transformation to each row that also returns a vector ``` def myfunc(a, b, c): do something return e, f, g ``` but if I do: ``` df.apply(myfunc, axis=1) ``` I end up with a Pandas series whose elements are tuples. This is because apply will take the result of myfunc without unpacking it. How can I change myfunc so that I obtain a new df with 3 columns? Edit: All solutions below work. The Series solution does allow for column names, the List solution seems to execute faster.
``` def myfunc1(args): e=args[0] + 2*args[1] f=args[1]*args[2] +1 g=args[2] + args[0] * args[1] return pd.Series([e,f,g], index=['a', 'b', 'c']) def myfunc2(args): e=args[0] + 2*args[1] f=args[1]*args[2] +1 g=args[2] + args[0] * args[1] return [e,f,g] %timeit df.apply(myfunc1 ,axis=1) 100 loops, best of 3: 4.51 ms per loop %timeit df.apply(myfunc2 ,axis=1) 100 loops, best of 3: 2.75 ms per loop ```", "response":"Return Series and it will put them in a DataFrame. ``` def myfunc(a, b, c): do something return pd.Series([e, f, g]) ``` This has the bonus that you can give labels to each of the resulting columns. If you return a DataFrame it just inserts multiple rows for the group.", "best_answers_score":0.6798, "library_name":"pandas", "question_url":"https:\/\/stackoverflow.com\/questions\/23690284\/pandas-apply-function-that-returns-multiple-values-to-rows-in-pandas-dataframe", "best_answers_votes":94, "question_length":1313, "response_length":269 }, { "question":"How to check if any value is NaN in a Pandas DataFrame How do I check whether a pandas DataFrame has NaN values? I know about pd.isnan but it returns a DataFrame of booleans. I also found this post but it doesn't exactly answer my question either.", "response":"jwilner's response is spot on. I was exploring to see if there's a faster option, since in my experience, summing flat arrays is (strangely) faster than counting. 
This code seems faster: ```py df.isnull().values.any() ``` ```py import numpy as np import pandas as pd import perfplot def setup(n): df = pd.DataFrame(np.random.randn(n)) df[df > 0.9] = np.nan return df def isnull_any(df): return df.isnull().any() def isnull_values_sum(df): return df.isnull().values.sum() > 0 def isnull_sum(df): return df.isnull().sum() > 0 def isnull_values_any(df): return df.isnull().values.any() perfplot.save( \"out.png\", setup=setup, kernels=[isnull_any, isnull_values_sum, isnull_sum, isnull_values_any], n_range=[2 ** k for k in range(25)], ) ``` df.isnull().sum().sum() is a bit slower, but of course, has additional information -- the number of NaNs.", "best_answers_score":0.679, "library_name":"pandas", "question_url":"https:\/\/stackoverflow.com\/questions\/29530232\/how-to-check-if-any-value-is-nan-in-a-pandas-dataframe", "best_answers_votes":884, "question_length":247, "response_length":842 }, { "question":"Divide multiple columns by another column in pandas I need to divide all but the first columns in a DataFrame by the first column. Here's what I'm doing, but I wonder if this isn't the \"right\" pandas way: ``` df = pd.DataFrame(np.random.rand(10,3), columns=list('ABC')) df[['B', 'C']] = (df.T.iloc[1:] \/ df.T.iloc[0]).T ``` Is there a way to do something like df[['B','C']] \/ df['A']? (That just gives a 10x12 dataframe of nan.) 
Also, after reading some similar questions on SO, I tried df['A'].div(df[['B', 'C']]) but that gives a broadcast error.", "response":"I believe df[['B','C']].div(df.A, axis=0) and df.iloc[:,1:].div(df.A, axis=0) work.", "best_answers_score":0.678, "library_name":"pandas", "question_url":"https:\/\/stackoverflow.com\/questions\/34540567\/divide-multiple-columns-by-another-column-in-pandas", "best_answers_votes":189, "question_length":548, "response_length":83 }, { "question":"Split pandas dataframe based on values in a column using groupby I want to split the following dataframe based on column ZZ ``` df = N0_YLDF ZZ MAT 0 6.286333 2 11.669069 1 6.317000 6 11.669069 2 6.324889 6 11.516454 3 6.320667 5 11.516454 4 6.325556 5 11.516454 5 6.359000 6 11.516454 6 6.359000 6 11.516454 7 6.361111 7 11.516454 8 6.360778 7 11.516454 9 6.361111 6 11.516454 ``` As output, I want a new DataFrame with the N0_YLDF column split into 4, one new column for each unique value of ZZ. How do I go about this? I can do groupby, but do not know what to do with the grouped object.", "response":"``` gb = df.groupby('ZZ') [gb.get_group(x) for x in gb.groups] ```", "best_answers_score":0.6771, "library_name":"pandas", "question_url":"https:\/\/stackoverflow.com\/questions\/23691133\/split-pandas-dataframe-based-on-values-in-a-column-using-groupby", "best_answers_votes":183, "question_length":591, "response_length":66 }, { "question":"How to dynamically update a plot in a loop in IPython notebook (within one cell) Environment: Python 2.7, Matplotlib 1.3, IPython notebook 1.1, Linux, and Chrome. The code is in one single input cell, using --pylab=inline. I want to use IPython notebook and Pandas to consume a stream and dynamically update a plot every five seconds. When I just use a print statement to print the data in text format, it works perfectly fine: the output cell just keeps printing data and adding new rows. 
But when I try to plot the data (and then update it in a loop), the plot never shows up in the output cell. But if I remove the loop, and just plot it once, it works fine. Then I did some simple test: ``` i = pd.date_range('2013-1-1',periods=100,freq='s') while True: plot(pd.Series(data=np.random.randn(100), index=i)) #pd.Series(data=np.random.randn(100), index=i).plot() also tried this one time.sleep(5) ``` The output will not show anything until I manually interrupt the process (Ctrl + M + I). And after I interrupt it, the plot shows correctly as multiple overlapped lines. But what I really want is a plot that shows up and gets updated every five seconds (or whenever the plot() function gets called, just like what print statement outputs I mentioned above, which works well). Only showing the final chart after the cell is completely done is not what I want. I even tried to explicitly add the draw() function after each plot(), etc. None of them works. How can I dynamically update a plot by a for\/while loop within one cell in IPython notebook?", "response":"Use the IPython.display module: ``` %matplotlib inline import time import pylab as pl from IPython import display for i in range(10): pl.plot(pl.randn(100)) display.clear_output(wait=True) display.display(pl.gcf()) time.sleep(1.0) ```", "best_answers_score":0.6762, "library_name":"pandas", "question_url":"https:\/\/stackoverflow.com\/questions\/21360361\/how-to-dynamically-update-a-plot-in-a-loop-in-ipython-notebook-within-one-cell", "best_answers_votes":147, "question_length":1548, "response_length":234 }, { "question":"Converting a Pandas GroupBy multiindex output from Series back to DataFrame I have a dataframe: ```none City Name 0 Seattle Alice 1 Seattle Bob 2 Portland Mallory 3 Seattle Mallory 4 Seattle Bob 5 Portland Mallory ``` I perform the following grouping: ```py g1 = df1.groupby([\"Name\", \"City\"]).count() ``` which when printed looks like: ```none City Name Name City Alice Seattle 1 1 Bob Seattle 
2 2 Mallory Portland 2 2 Seattle 1 1 ``` But what I want eventually is another DataFrame object that contains all the rows in the GroupBy object. In other words I want to get the following result: ```none City Name Name City Alice Seattle 1 1 Bob Seattle 2 2 Mallory Portland 2 2 Mallory Seattle 1 1 ``` How do I do it?", "response":"g1 here is a DataFrame. It has a hierarchical index, though: ``` In [19]: type(g1) Out[19]: pandas.core.frame.DataFrame In [20]: g1.index Out[20]: MultiIndex([('Alice', 'Seattle'), ('Bob', 'Seattle'), ('Mallory', 'Portland'), ('Mallory', 'Seattle')], dtype=object) ``` Perhaps you want something like this? ``` In [21]: g1.add_suffix('_Count').reset_index() Out[21]: Name City City_Count Name_Count 0 Alice Seattle 1 1 1 Bob Seattle 2 2 2 Mallory Portland 2 2 3 Mallory Seattle 1 1 ``` Or something like: ``` In [36]: DataFrame({'count' : df1.groupby( [ \"Name\", \"City\"] ).size()}).reset_index() Out[36]: Name City count 0 Alice Seattle 1 1 Bob Seattle 2 2 Mallory Portland 2 3 Mallory Seattle 1 ```", "best_answers_score":0.676, "library_name":"pandas", "question_url":"https:\/\/stackoverflow.com\/questions\/10373660\/converting-a-pandas-groupby-multiindex-output-from-series-back-to-dataframe", "best_answers_votes":707, "question_length":713, "response_length":698 }, { "question":"How to flatten a hierarchical index in columns I have a data frame with a hierarchical index in axis 1 (columns) (from a groupby.agg operation): ``` USAF WBAN year month day s_PC s_CL s_CD s_CNT tempf sum sum sum sum amax amin 0 702730 26451 1993 1 1 1 0 12 13 30.92 24.98 1 702730 26451 1993 1 2 0 0 13 13 32.00 24.98 2 702730 26451 1993 1 3 1 10 2 13 23.00 6.98 3 702730 26451 1993 1 4 1 0 12 13 10.04 3.92 4 702730 26451 1993 1 5 3 0 10 13 19.94 10.94 ``` I want to flatten it, so that it looks like this (names aren't critical - I could rename): ``` USAF WBAN year month day s_PC s_CL s_CD s_CNT tempf_amax tmpf_amin 0 702730 26451 1993 1 1 1 0 12 13 30.92 24.98 1 702730 
26451 1993 1 2 0 0 13 13 32.00 24.98 2 702730 26451 1993 1 3 1 10 2 13 23.00 6.98 3 702730 26451 1993 1 4 1 0 12 13 10.04 3.92 4 702730 26451 1993 1 5 3 0 10 13 19.94 10.94 ``` How do I do this? (I've tried a lot, to no avail.) Per a suggestion, here is the head in dict form ``` {('USAF', ''): {0: '702730', 1: '702730', 2: '702730', 3: '702730', 4: '702730'}, ('WBAN', ''): {0: '26451', 1: '26451', 2: '26451', 3: '26451', 4: '26451'}, ('day', ''): {0: 1, 1: 2, 2: 3, 3: 4, 4: 5}, ('month', ''): {0: 1, 1: 1, 2: 1, 3: 1, 4: 1}, ('s_CD', 'sum'): {0: 12.0, 1: 13.0, 2: 2.0, 3: 12.0, 4: 10.0}, ('s_CL', 'sum'): {0: 0.0, 1: 0.0, 2: 10.0, 3: 0.0, 4: 0.0}, ('s_CNT', 'sum'): {0: 13.0, 1: 13.0, 2: 13.0, 3: 13.0, 4: 13.0}, ('s_PC', 'sum'): {0: 1.0, 1: 0.0, 2: 1.0, 3: 1.0, 4: 3.0}, ('tempf', 'amax'): {0: 30.920000000000002, 1: 32.0, 2: 23.0, 3: 10.039999999999999, 4: 19.939999999999998}, ('tempf', 'amin'): {0: 24.98, 1: 24.98, 2: 6.9799999999999969, 3: 3.9199999999999982, 4: 10.940000000000001}, ('year', ''): {0: 1993, 1: 1993, 2: 1993, 3: 1993, 4: 1993}} ```", "response":"I think the easiest way to do this would be to set the columns to the top level: ``` df.columns = df.columns.get_level_values(0) ``` Note: if the to level has a name you can also access it by this, rather than 0. . If you want to combine\/join your MultiIndex into one Index (assuming you have just string entries in your columns) you could: ``` df.columns = [' '.join(col).strip() for col in df.columns.values] ``` Note: we must strip the whitespace for when there is no second index. 
``` In [11]: [' '.join(col).strip() for col in df.columns.values] Out[11]: ['USAF', 'WBAN', 'day', 'month', 's_CD sum', 's_CL sum', 's_CNT sum', 's_PC sum', 'tempf amax', 'tempf amin', 'year'] ```", "best_answers_score":0.676, "library_name":"pandas", "question_url":"https:\/\/stackoverflow.com\/questions\/14507794\/how-to-flatten-a-hierarchical-index-in-columns", "best_answers_votes":765, "question_length":1735, "response_length":681 }, { "question":"Python pandas: how to specify data types when reading an Excel file? I am importing an excel file into a pandas dataframe with the pandas.read_excel() function. One of the columns is the primary key of the table: it's all numbers, but it's stored as text (the little green triangle in the top left of the Excel cells confirms this). However, when I import the file into a pandas dataframe, the column gets imported as a float. This means that, for example, '0614' becomes 614. Is there a way to specify the datatype when importing a column? I understand this is possible when importing CSV files but couldn't find anything in the syntax of read_excel(). The only solution I can think of is to add an arbitrary letter at the beginning of the text (converting '0614' into 'A0614') in Excel, to make sure the column is imported as text, and then chopping off the 'A' in python, so I can match it to other tables I am importing from SQL.", "response":"You just specify converters. I created an excel spreadsheet of the following structure: ``` names ages bob 05 tom 4 suzy 3 ``` Where the \"ages\" column is formatted as strings. 
To load: ``` import pandas as pd df = pd.read_excel('Book1.xlsx',sheetname='Sheet1',header=0,converters={'names':str,'ages':str}) >>> df names ages 0 bob 05 1 tom 4 2 suzy 3 ```", "best_answers_score":0.6754, "library_name":"pandas", "question_url":"https:\/\/stackoverflow.com\/questions\/32591466\/python-pandas-how-to-specify-data-types-when-reading-an-excel-file", "best_answers_votes":203, "question_length":933, "response_length":353 }, { "question":"Make new column in Panda dataframe by adding values from other columns I have a dataframe with values like ``` A B 1 4 2 6 3 9 ``` I need to add a new column by adding values from column A and B, like ``` A B C 1 4 5 2 6 8 3 9 12 ``` I believe this can be done using lambda function, but I can't figure out how to do it.", "response":"Very simple: ``` df['C'] = df['A'] + df['B'] ```", "best_answers_score":0.6754, "library_name":"pandas", "question_url":"https:\/\/stackoverflow.com\/questions\/34023918\/make-new-column-in-panda-dataframe-by-adding-values-from-other-columns", "best_answers_votes":144, "question_length":320, "response_length":48 }, { "question":"How to shift a column in Pandas DataFrame I would like to shift a column in a Pandas DataFrame, but I haven't been able to find a method to do it from the documentation without rewriting the whole DF. Does anyone know how to do it? 
DataFrame: ``` ## x1 x2 ##0 206 214 ##1 226 234 ##2 245 253 ##3 265 272 ##4 283 291 ``` Desired output: ``` ## x1 x2 ##0 206 nan ##1 226 214 ##2 245 234 ##3 265 253 ##4 283 272 ##5 nan 291 ```", "response":"``` In [18]: a Out[18]: x1 x2 0 0 5 1 1 6 2 2 7 3 3 8 4 4 9 In [19]: a['x2'] = a.x2.shift(1) In [20]: a Out[20]: x1 x2 0 0 NaN 1 1 5 2 2 6 3 3 7 4 4 8 ```", "best_answers_score":0.6752, "library_name":"pandas", "question_url":"https:\/\/stackoverflow.com\/questions\/10982089\/how-to-shift-a-column-in-pandas-dataframe", "best_answers_votes":192, "question_length":424, "response_length":154 }, { "question":"Python pandas Filtering out nan from a data selection of a column of strings Without using groupby how would I filter out data without NaN? Let say I have a matrix where customers will fill in 'N\/A','n\/a' or any of its variations and others leave it blank: ``` import pandas as pd import numpy as np df = pd.DataFrame({'movie': ['thg', 'thg', 'mol', 'mol', 'lob', 'lob'], 'rating': [3., 4., 5., np.nan, np.nan, np.nan], 'name': ['John', np.nan, 'N\/A', 'Graham', np.nan, np.nan]}) nbs = df['name'].str.extract('^(N\/A|NA|na|n\/a)') nms=df[(df['name'] != nbs) ] ``` output: ``` >>> nms movie name rating 0 thg John 3 1 thg NaN 4 3 mol Graham NaN 4 lob NaN NaN 5 lob NaN NaN ``` How would I filter out NaN values so I can get results to work with like this: ``` movie name rating 0 thg John 3 3 mol Graham NaN ``` I am guessing I need something like ~np.isnan but the tilda does not work with strings.", "response":"Just drop them: ``` nms.dropna(thresh=2) ``` this will drop all rows where there are at least two non-NaN. 
You could then drop where name is NaN: ``` In [87]: nms Out[87]: movie name rating 0 thg John 3 1 thg NaN 4 3 mol Graham NaN 4 lob NaN NaN 5 lob NaN NaN [5 rows x 3 columns] In [89]: nms = nms.dropna(thresh=2) In [90]: nms[nms.name.notnull()] Out[90]: movie name rating 0 thg John 3 3 mol Graham NaN [2 rows x 3 columns] ``` EDIT Actually looking at what you originally want you can do just this without the dropna call: ``` nms[nms.name.notnull()] ``` UPDATE Looking at this question 3 years later, there is a mistake, firstly thresh arg looks for at least n non-NaN values so in fact the output should be: ``` In [4]: nms.dropna(thresh=2) Out[4]: movie name rating 0 thg John 3.0 1 thg NaN 4.0 3 mol Graham NaN ``` It's possible that I was either mistaken 3 years ago or that the version of pandas I was running had a bug, both scenarios are entirely possible.", "best_answers_score":0.6747, "library_name":"pandas", "question_url":"https:\/\/stackoverflow.com\/questions\/22551403\/python-pandas-filtering-out-nan-from-a-data-selection-of-a-column-of-strings", "best_answers_votes":341, "question_length":896, "response_length":974 }, { "question":"How to check if a value is in the list in selection from pandas data frame? [duplicate]
Looks ugly: ``` df_cut = df_new[ ( (df_new['l_ext']==31) | (df_new['l_ext']==22) | (df_new['l_ext']==30) | (df_new['l_ext']==25) | (df_new['l_ext']==64) ) ] ``` Does not work: ``` df_cut = df_new[(df_new['l_ext'] in [31, 22, 30, 25, 64])] ``` Is there an elegant and working solution of the above \"problem\"?", "response":"Use isin ``` df_new[df_new['l_ext'].isin([31, 22, 30, 25, 64])] ```", "best_answers_score":0.6747, "library_name":"pandas", "question_url":"https:\/\/stackoverflow.com\/questions\/18250298\/how-to-check-if-a-value-is-in-the-list-in-selection-from-pandas-data-frame", "best_answers_votes":262, "question_length":527, "response_length":67 }, { "question":"How to query if a list-type column contains something? I have a dataframe, which contains info about movies. It has a column called genre, which contains a list of genres it belongs to. For example: ``` df['genre'] ## returns 0 ['comedy', 'sci-fi'] 1 ['action', 'romance', 'comedy'] 2 ['documentary'] 3 ['crime','horror'] ... ``` I want to know how can I query the dataframe, so it returns the movie belonging to a certain genre? For example, something like df['genre'].contains('comedy') returns 0 or 1. 
I know for a list, I can do things like: ``` 'comedy' in ['comedy', 'sci-fi'] ``` However, in pandas, I didn't find something similar, the only thing I know is df['genre'].str.contains(), but it didn't work for the list type.", "response":"You can use apply for create mask and then boolean indexing: ``` mask = df.genre.apply(lambda x: 'comedy' in x) df1 = df[mask] print (df1) genre 0 [comedy, sci-fi] 1 [action, romance, comedy] ```", "best_answers_score":0.6743, "library_name":"pandas", "question_url":"https:\/\/stackoverflow.com\/questions\/41518920\/how-to-query-if-a-list-type-column-contains-something", "best_answers_votes":97, "question_length":730, "response_length":195 }, { "question":"Strings in a DataFrame, but dtype is object Why does Pandas tell me that I have objects, although every item in the selected column is a string \u2014 even after explicit conversion. This is my DataFrame: ``` Int64Index: 56992 entries, 0 to 56991 Data columns (total 7 columns): id 56992 non-null values attr1 56992 non-null values attr2 56992 non-null values attr3 56992 non-null values attr4 56992 non-null values attr5 56992 non-null values attr6 56992 non-null values dtypes: int64(2), object(5) ``` Five of them are dtype object. I explicitly convert those objects to strings: ``` for c in df.columns: if df[c].dtype == object: print \"convert \", df[c].name, \" to string\" df[c] = df[c].astype(str) ``` Then, df[\"attr2\"] still has dtype object, although type(df[\"attr2\"].ix[0] reveals str, which is correct. Pandas distinguishes between int64 and float64 and object. What is the logic behind it when there is no dtype str? Why is a str covered by object?", "response":"The dtype object comes from NumPy, it describes the type of element in a ndarray. Every element in an ndarray must have the same size in bytes. For int64 and float64, they are 8 bytes. But for strings, the length of the string is not fixed. 
So instead of saving the bytes of strings in the ndarray directly, Pandas uses an object ndarray, which saves pointers to objects; because of this the dtype of this kind ndarray is object. Here is an example: the int64 array contains 4 int64 value. the object array contains 4 pointers to 3 string objects.", "best_answers_score":0.6739, "library_name":"pandas", "question_url":"https:\/\/stackoverflow.com\/questions\/21018654\/strings-in-a-dataframe-but-dtype-is-object", "best_answers_votes":206, "question_length":953, "response_length":547 }, { "question":"Pandas equivalent of GROUP BY HAVING in SQL What is the most efficient way to use groupby and in parallel apply a filter in pandas? Basically I am asking for the equivalent in SQL of ```sql select * ... group by col_name having condition ``` I think there are many uses cases ranging from conditional means, sums, conditional probabilities, etc. which would make such a command very powerful. I need a very good performance, so ideally such a command would not be the result of several layered operations done in python.", "response":"As mentioned in unutbu's comment, groupby's filter is the equivalent of SQL'S HAVING: ``` In [11]: df = pd.DataFrame([[1, 2], [1, 3], [5, 6]], columns=['A', 'B']) In [12]: df Out[12]: A B 0 1 2 1 1 3 2 5 6 In [13]: g = df.groupby('A') # GROUP BY A In [14]: g.filter(lambda x: len(x) > 1) # HAVING COUNT(*) > 1 Out[14]: A B 0 1 2 1 1 3 ``` You can write more complicated functions (these are applied to each group), provided they return a plain ol' bool: ``` In [15]: g.filter(lambda x: x['B'].sum() == 5) Out[15]: A B 0 1 2 1 1 3 ``` Note: potentially there is a bug where you can't write you function to act on the columns you've used to groupby... a workaround is the groupby the columns manually i.e. 
g = df.groupby(df['A']).", "best_answers_score":0.6733, "library_name":"pandas", "question_url":"https:\/\/stackoverflow.com\/questions\/22105452\/pandas-equivalent-of-group-by-having-in-sql", "best_answers_votes":105, "question_length":520, "response_length":729 }, { "question":"Getting Google Spreadsheet CSV into A Pandas Dataframe I uploaded a file to Google Spreadsheets (to make a publicly accessible example IPython Notebook, with data). In its native form, the file could be read into a Pandas DataFrame. So now I use the following code to read the spreadsheet; it works fine, but the data just comes in as a string, and I'm not having any luck getting it back into a dataframe (you can get the data): ``` import requests r = requests.get('https:\/\/docs.google.com\/spreadsheet\/ccc?key=0Ak1ecr7i0wotdGJmTURJRnZLYlV3M2daNTRubTdwTXc&output=csv') data = r.content ``` The data ends up looking like: (1st row headers) ``` ',City,region,Res_Comm,mkt_type,Quradate,National_exp,Alabama_exp,Sales_exp,Inventory_exp,Price_exp,Credit_exp\\n0,Dothan,South_Central-Montgomery-Auburn-Wiregrass-Dothan,Residential,Rural,1\/15\/2010,2,2,3,2,3,3\\n10,Foley,South_Mobile-Baldwin,Residential,Suburban_Urban,1\/15\/2010,4,4,4,4,4,3\\n12,Birmingham,North_Central-Birmingham-Tuscaloosa-Anniston,Commercial,Suburban_Urban,1\/15\/2010,2,2,3,2,2,3\\n ``` The native pandas code that brings in the disk resident file looks like: ``` df = pd.io.parsers.read_csv('\/home\/tom\/Dropbox\/Projects\/annonallanswerswithmaster1012013.csv',index_col=0,parse_dates=['Quradate']) ``` A \"clean\" solution would be helpful to many, providing an easy way to share datasets for Pandas use! I tried a bunch of alternatives with no success and I'm pretty sure I'm missing something obvious again.
Just an update note: the new Google Spreadsheets has a different URL pattern. Just use this in place of the URL in the above example and\/or the below answer and you should be fine; here is an example: ``` https:\/\/docs.google.com\/spreadsheets\/d\/177_dFZ0i-duGxLiyg6tnwNDKruAYE-_Dd8vAQziipJQ\/export?format=csv&id ``` See the solution below from @Max Ghenis, which just uses pd.read_csv; no need for StringIO or requests...", "response":"Seems to work for me without the StringIO: ``` test = pd.read_csv('https:\/\/docs.google.com\/spreadsheets\/d\/' + '0Ak1ecr7i0wotdGJmTURJRnZLYlV3M2daNTRubTdwTXc' + '\/export?gid=0&format=csv', # Set first column as rownames in data frame index_col=0, # Parse column values to datetime parse_dates=['Quradate'] ) test.head(5) # Same result as @TomAugspurger ``` BTW, including the ?gid= enables importing different sheets; find the gid in the URL.", "best_answers_score":0.6726, "library_name":"pandas", "question_url":"https:\/\/stackoverflow.com\/questions\/19611729\/getting-google-spreadsheet-csv-into-a-pandas-dataframe", "best_answers_votes":88, "question_length":1879, "response_length":440 }, { "question":"How to count duplicate rows in pandas dataframe? I am trying to count the duplicates of each type of row in my dataframe. For example, say that I have a dataframe in pandas as follows: ``` df = pd.DataFrame({'one': pd.Series([1., 1, 1]), 'two': pd.Series([1., 2., 1])}) ``` I get a df that looks like this: ``` one two 0 1 1 1 1 2 2 1 1 ``` I imagine the first step is to find all the different unique rows, which I do by: ``` df.drop_duplicates() ``` This gives me the following df: ``` one two 0 1 1 1 1 2 ``` Now I want to take each row from the above df ([1 1] and [1 2]) and get a count of how many times each is in the initial df. My result would look something like this: ``` Row Count [1 1] 2 [1 2] 1 ``` How should I go about doing this last step?
Edit: Here's a larger example to make it clearer: ``` df = pd.DataFrame({'one': pd.Series([True, True, True, False]), 'two': pd.Series([True, False, False, True]), 'three': pd.Series([True, False, False, False])}) ``` gives me: ``` one three two 0 True True True 1 True False False 2 True False False 3 False False True ``` I want a result that tells me: ``` Row Count [True True True] 1 [True False False] 2 [False False True] 1 ```", "response":"You can groupby on all the columns and call size; the index indicates the duplicate values: ``` In [28]: df.groupby(df.columns.tolist(),as_index=False).size() Out[28]: one three two False False True 1 True False False 2 True True 1 dtype: int64 ```", "best_answers_score":0.672, "library_name":"pandas", "question_url":"https:\/\/stackoverflow.com\/questions\/35584085\/how-to-count-duplicate-rows-in-pandas-dataframe", "best_answers_votes":131, "question_length":1193, "response_length":247 }, { "question":"How to conditionally update DataFrame column in Pandas With this DataFrame, how can I conditionally set rating to 0 when line_race is equal to zero? ``` line_track line_race rating foreign 25 MTH 10 84 False 26 MTH 6 88 False 27 TAM 5 87 False 28 GP 2 86 False 29 GP 7 59 False 30 LCH 0 103 True 31 LEO 0 125 True 32 YOR 0 126 True 33 ASC 0 124 True ``` In other words, what is the proper way on a DataFrame to say if ColumnA = x then ColumnB = y else ColumnB = ColumnB", "response":"``` df.loc[df['line_race'] == 0, 'rating'] = 0 ```", "best_answers_score":0.6715, "library_name":"pandas", "question_url":"https:\/\/stackoverflow.com\/questions\/18196203\/how-to-conditionally-update-dataframe-column-in-pandas", "best_answers_votes":211, "question_length":469, "response_length":50 }, { "question":"Select only one index of multiindex DataFrame I am trying to create a new DataFrame using only one index from a multi-indexed DataFrame.
``` A B C first second bar one 0.895717 0.410835 -1.413681 two 0.805244 0.813850 1.607920 baz one -1.206412 0.132003 1.024180 two 2.565646 -0.827317 0.569605 foo one 1.431256 -0.076467 0.875906 two 1.340309 -1.187678 -2.211372 qux one -1.170299 1.130127 0.974466 two -0.226169 -1.436737 -2.006747 ``` Ideally, I would like something like this: ``` In: df.ix[level=\"first\"] ``` and: ``` Out: A B C first bar 0.895717 0.410835 -1.413681 0.805244 0.813850 1.607920 baz -1.206412 0.132003 1.024180 2.565646 -0.827317 0.569605 foo 1.431256 -0.076467 0.875906 1.340309 -1.187678 -2.211372 qux -1.170299 1.130127 0.974466 -0.226169 -1.436737 -2.006747 ``` Essentially I want to drop all the indexes of the multi-index other than level first. Is there an easy way to do this?", "response":"One way could be to simply rebind df.index to the desired level of the MultiIndex. You can do this by specifying the label name you want to keep: ``` df.index = df.index.get_level_values('first') ``` or use the level's integer value: ``` df.index = df.index.get_level_values(0) ``` All other levels of the MultiIndex would disappear here.", "best_answers_score":0.6713, "library_name":"pandas", "question_url":"https:\/\/stackoverflow.com\/questions\/28140771\/select-only-one-index-of-multiindex-dataframe", "best_answers_votes":151, "question_length":912, "response_length":338 }, { "question":"pandas.parser.CParserError: Error tokenizing data I'm trying to use pandas to manipulate a .csv file but I get this error: pandas.parser.CParserError: Error tokenizing data. C error: Expected 2 fields in line 3, saw 12 I have tried to read the pandas docs, but found nothing. My code is simple: ``` path = 'GOOG Key Ratios.csv' #print(open(path).read()) data = pd.read_csv(path) ``` How can I resolve this?
Should I use the csv module or another language?", "response":"You could also try: ``` data = pd.read_csv('file1.csv', on_bad_lines='skip') ``` Do note that this will cause the offending lines to be skipped. If you don't expect many bad lines and want to (at least) know their amount and IDs, use on_bad_lines='warn'. For advanced handling of bad lines, you can pass a callable. Edit: For Pandas < 1.3.0, try ``` data = pd.read_csv(\"file1.csv\", error_bad_lines=False) ``` as per the pandas API reference.", "best_answers_score":0.6706, "library_name":"pandas", "question_url":"https:\/\/stackoverflow.com\/questions\/18039057\/pandas-parser-cparsererror-error-tokenizing-data", "best_answers_votes":1069, "question_length":455, "response_length":430 }, { "question":"Why were pandas merges in python faster than data.table merges in R in 2012? I recently came across the pandas library for python, which according to this benchmark performs very fast in-memory merges. It's even faster than the data.table package in R (my language of choice for analysis). Why is pandas so much faster than data.table? Is it because of an inherent speed advantage python has over R, or is there some tradeoff I'm not aware of? Is there a way to perform inner and outer joins in data.table without resorting to merge(X, Y, all=FALSE) and merge(X, Y, all=TRUE)? Here's the R code and the Python code used to benchmark the various packages.", "response":"The reason pandas is faster is because I came up with a better algorithm, which is implemented very carefully using a fast hash table implementation - klib - and in C\/Cython to avoid the Python interpreter overhead for the non-vectorizable parts. The algorithm is described in some detail in my presentation: A look inside pandas design and development.
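For a concrete sense of the operation being benchmarked, here is a small illustrative sketch of this kind of 'cold' hash join (the frames, sizes, and seed are invented for illustration; this is not the original benchmark code):

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
# 100k-row left table with repeated keys, 1k-row right lookup table
left = pd.DataFrame({'key': rng.integers(0, 1000, size=100_000),
                     'lval': rng.random(100_000)})
right = pd.DataFrame({'key': np.arange(1000),
                      'rval': rng.random(1000)})

# pandas hashes the join keys at merge time; there is no precomputed index
merged = pd.merge(left, right, on='key', how='inner')
print(len(merged))  # 100000: every left key matches exactly one right row
```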
The comparison with data.table is actually a bit interesting because the whole point of R's data.table is that it contains pre-computed indexes for various columns to accelerate operations like data selection and merges. In this case (database joins) pandas' DataFrame contains no pre-computed information that is being used for the merge, so to speak it's a \"cold\" merge. If I had stored the factorized versions of the join keys, the join would be significantly faster - as factorizing is the biggest bottleneck for this algorithm. I should also add that the internal design of pandas' DataFrame is much more amenable to these kinds of operations than R's data.frame (which is just a list of arrays internally).", "best_answers_score":0.6696, "library_name":"pandas", "question_url":"https:\/\/stackoverflow.com\/questions\/8991709\/why-were-pandas-merges-in-python-faster-than-data-table-merges-in-r-in-2012", "best_answers_votes":190, "question_length":654, "response_length":1064 }, { "question":"Multiple aggregations of the same column using pandas GroupBy.agg() Is there a pandas built-in way to apply two different aggregating functions f1, f2 to the same column df[\"returns\"], without having to call agg() multiple times? Example dataframe: ``` import pandas as pd import datetime as dt import numpy as np pd.np.random.seed(0) df = pd.DataFrame({ \"date\" : [dt.date(2012, x, 1) for x in range(1, 11)], \"returns\" : 0.05 * np.random.randn(10), \"dummy\" : np.repeat(1, 10) }) ``` The syntactically wrong, but intuitively right, way to do it would be: ``` # Assume `f1` and `f2` are defined for aggregating. df.groupby(\"dummy\").agg({\"returns\": f1, \"returns\": f2}) ``` Obviously, Python doesn't allow duplicate keys. Is there any other manner for expressing the input to agg()? Perhaps a list of tuples [(column, function)] would work better, to allow multiple functions applied to the same column? But agg() seems like it only accepts a dictionary. 
Is there a workaround for this besides defining an auxiliary function that just applies both of the functions inside of it? (How would this work with aggregation anyway?)", "response":"TLDR; Pandas groupby.agg has a new, easier syntax for specifying (1) aggregations on multiple columns, and (2) multiple aggregations on a column. So, to do this for pandas >= 0.25, use ``` df.groupby('dummy').agg(Mean=('returns', 'mean'), Sum=('returns', 'sum')) Mean Sum dummy 1 0.036901 0.369012 ``` OR ``` df.groupby('dummy')['returns'].agg(Mean='mean', Sum='sum') Mean Sum dummy 1 0.036901 0.369012 ``` Pandas >= 0.25: Named Aggregation Pandas has changed the behavior of GroupBy.agg in favour of a more intuitive syntax for specifying named aggregations. See the 0.25 docs section on Enhancements as well as relevant GitHub issues GH18366 and GH26512. From the documentation, To support column-specific aggregation with control over the output column names, pandas accepts the special syntax in GroupBy.agg(), known as \u201cnamed aggregation\u201d, where the keywords are the output column names, and the values are tuples whose first element is the column to select and whose second element is the aggregation to apply to that column. Pandas provides the pandas.NamedAgg namedtuple with the fields ['column', 'aggfunc'] to make it clearer what the arguments are. As usual, the aggregation can be a callable or a string alias. You can now pass a tuple via keyword arguments. The tuples follow the format (column, aggfunc). ``` import pandas as pd pd.__version__ # '0.25.0.dev0+840.g989f912ee' # Setup df = pd.DataFrame({'kind': ['cat', 'dog', 'cat', 'dog'], 'height': [9.1, 6.0, 9.5, 34.0], 'weight': [7.9, 7.5, 9.9, 198.0] }) df.groupby('kind').agg( max_height=('height', 'max'), min_weight=('weight', 'min'),) max_height min_weight kind cat 9.5 7.9 dog 34.0 7.5 ``` Alternatively, you can use pd.NamedAgg (essentially a namedtuple) which makes things more explicit.
``` df.groupby('kind').agg( max_height=pd.NamedAgg(column='height', aggfunc='max'), min_weight=pd.NamedAgg(column='weight', aggfunc='min') ) max_height min_weight kind cat 9.5 7.9 dog 34.0 7.5 ``` It is even simpler for Series; just pass the aggfunc to a keyword argument. ``` df.groupby('kind')['height'].agg(max_height='max', min_height='min') max_height min_height kind cat 9.5 9.1 dog 34.0 6.0 ``` Lastly, if your column names aren't valid python identifiers, use a dictionary with unpacking: ``` df.groupby('kind')['height'].agg(**{'max height': 'max', ...}) ``` Pandas < 0.25 In more recent versions of pandas leading up to 0.24, if using a dictionary for specifying column names for the aggregation output, you will get a FutureWarning: ``` df.groupby('dummy').agg({'returns': {'Mean': 'mean', 'Sum': 'sum'}}) # FutureWarning: using a dict with renaming is deprecated and will be removed # in a future version ``` Using a dictionary for renaming columns is deprecated in v0.20. On more recent versions of pandas, this can be specified more simply by passing a list of tuples. If specifying the functions this way, all functions for that column need to be specified as tuples of (name, function) pairs. ``` df.groupby(\"dummy\").agg({'returns': [('op1', 'sum'), ('op2', 'mean')]}) returns op1 op2 dummy 1 0.328953 0.032895 ``` Or, ``` df.groupby(\"dummy\")['returns'].agg([('op1', 'sum'), ('op2', 'mean')]) op1 op2 dummy 1 0.328953 0.032895 ```", "best_answers_score":0.6694, "library_name":"pandas", "question_url":"https:\/\/stackoverflow.com\/questions\/12589481\/multiple-aggregations-of-the-same-column-using-pandas-groupby-agg", "best_answers_votes":266, "question_length":1121, "response_length":3192 }, { "question":"Selecting\/excluding sets of columns in pandas [duplicate] This question already has answers here: Delete a column from a Pandas DataFrame (23 answers). I would like to create views or dataframes from an existing dataframe based on column selections.
For example, I would like to create a dataframe df2 from a dataframe df1 that holds all columns from it except two of them. I tried doing the following, but it didn't work: ``` import numpy as np import pandas as pd # Create a dataframe with columns A,B,C and D df = pd.DataFrame(np.random.randn(100, 4), columns=list('ABCD')) # Try to create a second dataframe df2 from df with all columns except 'B' and D my_cols = set(df.columns) my_cols.remove('B').remove('D') # This returns an error (\"unhashable type: set\") df2 = df[my_cols] ``` What am I doing wrong? Perhaps more generally, what mechanisms does pandas have to support the picking and exclusion of arbitrary sets of columns from a dataframe?", "response":"You can either drop the columns you do not need OR select the ones you need: ``` # Using DataFrame.drop df.drop(df.columns[[1, 2]], axis=1, inplace=True) # drop by Name df1 = df1.drop(['B', 'C'], axis=1) # Select the ones you want df1 = df[['a','d']] ```", "best_answers_score":0.6692, "library_name":"pandas", "question_url":"https:\/\/stackoverflow.com\/questions\/14940743\/selecting-excluding-sets-of-columns-in-pandas", "best_answers_votes":745, "question_length":970, "response_length":253 }, { "question":"Remove non-numeric rows in one column with pandas There is a dataframe like the following, and it has one unclean column 'id', which should be a numeric column ``` id, name 1, A 2, B 3, C tt, D 4, E 5, F de, G ``` Is there a concise way to remove the rows, because tt and de are not numeric values ``` tt,D de,G ``` to make the dataframe clean? ``` id, name 1, A 2, B 3, C 4, E 5, F ```", "response":"Using pd.to_numeric: ``` In [1079]: df[pd.to_numeric(df['id'], errors='coerce').notnull()] Out[1079]: id name 0 1 A 1 2 B 2 3 C 4 4 E 5 5 F ``` Explanation: This will coerce all non-numeric values to NaN, which will then be flagged as False using notnull(). Numeric values will be converted to True.
This filtering mask is then passed to the dataframe to select those rows whose id is numeric only. If you want to overwrite the dataframe, don't forget to reassign the result.", "best_answers_score":0.6657, "library_name":"pandas", "question_url":"https:\/\/stackoverflow.com\/questions\/33961028\/remove-non-numeric-rows-in-one-column-with-pandas", "best_answers_votes":137, "question_length":385, "response_length":480 }, { "question":"Python Pandas: Convert \".value_counts\" output to dataframe Hi, I want to get the counts of unique values of the dataframe. value_counts implements this; however, I want to use its output somewhere else. How can I convert .value_counts output to a pandas dataframe? Here is some example code: ``` import pandas as pd df = pd.DataFrame({'a':[1, 1, 2, 2, 2]}) value_counts = df['a'].value_counts(dropna=True, sort=True) print(value_counts) print(type(value_counts)) ``` output is: ``` 2 3 1 2 Name: a, dtype: int64 ``` What I need is a dataframe like this: ``` unique_values counts 2 3 1 2 ``` Thank you.", "response":"Use rename_axis to name the column created from the index, and reset_index: ``` df = df.value_counts().rename_axis('unique_values').reset_index(name='counts') print (df) unique_values counts 0 2 3 1 1 2 ``` Or if you need a one-column DataFrame, use Series.to_frame: ``` df = df.value_counts().rename_axis('unique_values').to_frame('counts') print (df) counts unique_values 2 3 1 2 ```", "best_answers_score":0.6654, "library_name":"pandas", "question_url":"https:\/\/stackoverflow.com\/questions\/47136436\/python-pandas-convert-value-counts-output-to-dataframe", "best_answers_votes":242, "question_length":597, "response_length":365 }, { "question":"Coalesce values from 2 columns into a single column in a pandas dataframe I'm looking for a method that behaves similarly to coalesce in T-SQL. I have 2 columns (column A and B) that are sparsely populated in a pandas dataframe.
I'd like to create a new column using the following rules: if the value in column A is not null, use that value for the new column C; if the value in column A is null, use the value in column B for the new column C. Like I mentioned, this can be accomplished in MS SQL Server via the coalesce function. I haven't found a good pythonic method for this; does one exist?", "response":"Use combine_first(): ``` In [16]: df = pd.DataFrame(np.random.randint(0, 10, size=(10, 2)), columns=list('ab')) In [17]: df.loc[::2, 'a'] = np.nan In [18]: df Out[18]: a b 0 NaN 0 1 5.0 5 2 NaN 8 3 2.0 8 4 NaN 3 5 9.0 4 6 NaN 7 7 2.0 0 8 NaN 6 9 2.0 5 In [19]: df['c'] = df.a.combine_first(df.b) In [20]: df Out[20]: a b c 0 NaN 0 0.0 1 5.0 5 5.0 2 NaN 8 8.0 3 2.0 8 2.0 4 NaN 3 3.0 5 9.0 4 9.0 6 NaN 7 7.0 7 2.0 0 2.0 8 NaN 6 6.0 9 2.0 5 2.0 ```", "best_answers_score":0.6648, "library_name":"pandas", "question_url":"https:\/\/stackoverflow.com\/questions\/38152389\/coalesce-values-from-2-columns-into-a-single-column-in-a-pandas-dataframe", "best_answers_votes":220, "question_length":594, "response_length":446 }, { "question":"Plot a horizontal line on a given plot How do I add a horizontal line to an existing plot?", "response":"Use axhline (a horizontal axis line).
For example, this plots a horizontal line at y = 0.5: ``` import matplotlib.pyplot as plt plt.axhline(y=0.5, color='r', linestyle='-') plt.show() ```", "best_answers_score":0.6646, "library_name":"pandas", "question_url":"https:\/\/stackoverflow.com\/questions\/33382619\/plot-a-horizontal-line-on-a-given-plot", "best_answers_votes":919, "question_length":90, "response_length":187 }, { "question":"iloc giving 'IndexError: single positional indexer is out-of-bounds' I am trying to encode some information to read into a Machine Learning model using the following ```py import numpy as np import pandas as pd import matplotlib.pyplot as py Dataset = pd.read_csv('filename.csv', sep = ',') X = Dataset.iloc[:,:-1].values Y = Dataset.iloc[:,18].values from sklearn.preprocessing import LabelEncoder, OneHotEncoder labelencoder_X = LabelEncoder() X[:, 0] = labelencoder_X.fit_transform(X[:, 0]) onehotencoder = OneHotEncoder(categorical_features = [0]) X = onehotencoder.fit_transform(X).toarray() ``` however I am getting an error that reads ```none IndexError: single positional indexer is out-of-bounds ```", "response":"This error is caused by: ``` Y = Dataset.iloc[:,18].values ``` Indexing is out of bounds here most probably because there are less than 19 columns in your Dataset, so column 18 does not exist. The following code you provided doesn't use Y at all, so you can just comment out this line for now.", "best_answers_score":0.6642, "library_name":"pandas", "question_url":"https:\/\/stackoverflow.com\/questions\/42739327\/iloc-giving-indexerror-single-positional-indexer-is-out-of-bounds", "best_answers_votes":129, "question_length":708, "response_length":293 }, { "question":"Pandas every nth row Dataframe.resample() works only with timeseries data. I cannot find a way of getting every nth row from non-timeseries data. What is the best method?", "response":"I'd use iloc, which takes a row\/column slice, both based on integer position and following normal python syntax. 
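As a quick runnable sketch (the toy frame here is invented for illustration):

```python
import pandas as pd

# toy non-timeseries frame: 10 rows, plain integer index
df = pd.DataFrame({'a': range(10)})

every_third = df.iloc[::3]          # positional slice: rows 0, 3, 6, 9
print(every_third['a'].tolist())    # [0, 3, 6, 9]
```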
If you want every 5th row: ``` df.iloc[::5, :] ```", "best_answers_score":0.6635, "library_name":"pandas", "question_url":"https:\/\/stackoverflow.com\/questions\/25055712\/pandas-every-nth-row", "best_answers_votes":423, "question_length":170, "response_length":163 }, { "question":"Convert pandas Series to DataFrame I have a Pandas series sf: ``` email [email protected] [1.0, 0.0, 0.0] [email protected] [2.0, 0.0, 0.0] [email protected] [1.0, 0.0, 0.0] [email protected] [4.0, 0.0, 0.0] [email protected] [1.0, 0.0, 3.0] [email protected] [1.0, 5.0, 0.0] ``` And I would like to transform it to the following DataFrame: ``` index | email | list _____________________________________________ 0 | [email protected] | [1.0, 0.0, 0.0] 1 | [email protected] | [2.0, 0.0, 0.0] 2 | [email protected] | [1.0, 0.0, 0.0] 3 | [email protected] | [4.0, 0.0, 0.0] 4 | [email protected] | [1.0, 0.0, 3.0] 5 | [email protected] | [1.0, 5.0, 0.0] ``` I found a way to do it, but I doubt it's the most efficient one: ``` df1 = pd.DataFrame(data=sf.index, columns=['email']) df2 = pd.DataFrame(data=sf.values, columns=['list']) df = pd.merge(df1, df2, left_index=True, right_index=True) ```", "response":"Rather than creating 2 temporary dfs, you can just pass these as params within a dict using the DataFrame constructor: ``` pd.DataFrame({'email':sf.index, 'list':sf.values}) ``` There are lots of ways to construct a df; see the docs", "best_answers_score":0.6632, "library_name":"pandas", "question_url":"https:\/\/stackoverflow.com\/questions\/26097916\/convert-pandas-series-to-dataframe", "best_answers_votes":239, "question_length":893, "response_length":229 }, { "question":"Deleting DataFrame row in Pandas based on column value I have the following DataFrame: ```none daysago line_race rating rw wrating line_date 2007-03-31 62 11 56 1.000000 56.000000 2007-03-10 83 11 67 1.000000 67.000000 2007-02-10 111 9 66 1.000000 66.000000 2007-01-13 139 10 83 0.880678 73.096278 2006-12-23 160 10 88 0.793033
69.786942 2006-11-09 204 9 52 0.636655 33.106077 2006-10-22 222 8 66 0.581946 38.408408 2006-09-29 245 9 70 0.518825 36.317752 2006-09-16 258 11 68 0.486226 33.063381 2006-08-30 275 8 72 0.446667 32.160051 2006-02-11 475 5 65 0.164591 10.698423 2006-01-13 504 0 70 0.142409 9.968634 2006-01-02 515 0 64 0.134800 8.627219 2005-12-06 542 0 70 0.117803 8.246238 2005-11-29 549 0 70 0.113758 7.963072 2005-11-22 556 0 -1 0.109852 -0.109852 2005-11-01 577 0 -1 0.098919 -0.098919 2005-10-20 589 0 -1 0.093168 -0.093168 2005-09-27 612 0 -1 0.083063 -0.083063 2005-09-07 632 0 -1 0.075171 -0.075171 2005-06-12 719 0 69 0.048690 3.359623 2005-05-29 733 0 -1 0.045404 -0.045404 2005-05-02 760 0 -1 0.039679 -0.039679 2005-04-02 790 0 -1 0.034160 -0.034160 2005-03-13 810 0 -1 0.030915 -0.030915 2004-11-09 934 0 -1 0.016647 -0.016647 ``` I need to remove the rows where line_race is equal to 0. What's the most efficient way to do this?", "response":"If I'm understanding correctly, it should be as simple as: ``` df = df[df.line_race != 0] ```", "best_answers_score":0.6628, "library_name":"pandas", "question_url":"https:\/\/stackoverflow.com\/questions\/18172851\/deleting-dataframe-row-in-pandas-based-on-column-value", "best_answers_votes":1608, "question_length":1255, "response_length":93 }, { "question":"Get last \"column\" after .str.split() operation on column in pandas DataFrame I have a column in a pandas DataFrame that I would like to split on a single space. The splitting is simple enough with DataFrame.str.split(' '), but I can't make a new column from the last entry. When I .str.split() the column I get a list of arrays and I don't know how to manipulate this to get a new column for my DataFrame. Here is an example. Each entry in the column contains 'symbol data price' and I would like to split off the price (and eventually remove the \"p\"... or \"c\" in half the cases). 
``` import pandas as pd temp = pd.DataFrame({'ticker' : ['spx 5\/25\/2001 p500', 'spx 5\/25\/2001 p600', 'spx 5\/25\/2001 p700']}) temp2 = temp.ticker.str.split(' ') ``` which yields ``` 0 ['spx', '5\/25\/2001', 'p500'] 1 ['spx', '5\/25\/2001', 'p600'] 2 ['spx', '5\/25\/2001', 'p700'] ``` But temp2[0] just gives one list entry's array and temp2[:][-1] fails. How can I convert the last entry in each array to a new column? Thanks!", "response":"Do this: ``` In [43]: temp2.str[-1] Out[43]: 0 p500 1 p600 2 p700 Name: ticker ``` So all together it would be: ``` >>> temp = pd.DataFrame({'ticker' : ['spx 5\/25\/2001 p500', 'spx 5\/25\/2001 p600', 'spx 5\/25\/2001 p700']}) >>> temp['ticker'].str.split(' ').str[-1] 0 p500 1 p600 2 p700 Name: ticker, dtype: object ```", "best_answers_score":0.6627, "library_name":"pandas", "question_url":"https:\/\/stackoverflow.com\/questions\/12504976\/get-last-column-after-str-split-operation-on-column-in-pandas-dataframe", "best_answers_votes":242, "question_length":1001, "response_length":315 }, { "question":"Pandas Plotting with Multi-Index After performing a groupby.sum() on a DataFrame I'm having some trouble trying to create my intended plot. 
```py import pandas as pd import numpy as np np.random.seed(365) rows = 100 data = {'Month': np.random.choice(['2014-01', '2014-02', '2014-03', '2014-04'], size=rows), 'Code': np.random.choice(['A', 'B', 'C'], size=rows), 'ColA': np.random.randint(5, 125, size=rows), 'ColB': np.random.randint(0, 51, size=rows),} df = pd.DataFrame(data) Month Code ColA ColB 0 2014-03 C 59 47 1 2014-01 A 24 9 2 2014-02 C 77 50 dfg = df.groupby(['Code', 'Month']).sum() ColA ColB Code Month A 2014-01 124 102 2014-02 398 282 2014-03 474 198 2014-04 830 237 B 2014-01 477 300 2014-02 591 167 2014-03 522 192 2014-04 367 169 C 2014-01 412 180 2014-02 275 205 2014-03 795 291 2014-04 901 309 ``` How can I create a subplot (kind='bar') for each Code, where the x-axis is the Month and the bars are ColA and ColB?", "response":"I found the unstack(level) method to work perfectly, which has the added benefit of not needing a priori knowledge about how many Codes there are. ```py import matplotlib.pyplot as plt ax = dfg.unstack(level=0).plot(kind='bar', subplots=True, rot=0, figsize=(9, 7), layout=(2, 3)) plt.tight_layout() ```", "best_answers_score":0.6624, "library_name":"pandas", "question_url":"https:\/\/stackoverflow.com\/questions\/25386870\/pandas-plotting-with-multi-index", "best_answers_votes":143, "question_length":933, "response_length":271 }, { "question":"splitting at underscore in python and storing the first value I have a pandas data frame like df with a column construct_name ``` construct_name aaaa_t1_2 cccc_t4_10 bbbb_g3_3 ``` and so on. I want to first split all the names at the underscore and store the first element (aaaa, cccc, etc.) as another column name. Expected output: ``` construct_name name aaaa_t1_2 aaaa cccc_t4_10 cccc ``` and so on. I tried the following: df['construct_name'].map(lambda row:row.split(\"_\")) and it gives me a list like ``` [aaaa,t1,2] [cccc,t4,10] ``` and so on. But when I do df['construct_name'].map(lambda row:row.split(\"_\"))[0] to get the first element of the list I get an error.
Can you suggest a fix? Thanks.", "response":"Just use the vectorised str method split and use integer indexing on the list to get the first element: ``` In [228]: df['first'] = df['construct_name'].str.split('_').str[0] df Out[228]: construct_name first 0 aaaa_t1_2 aaaa 1 cccc_t4_10 cccc 2 bbbb_g3_3 bbbb ```", "best_answers_score":0.662, "library_name":"pandas", "question_url":"https:\/\/stackoverflow.com\/questions\/29947574\/splitting-at-underscore-in-python-and-storing-the-first-value", "best_answers_votes":190, "question_length":697, "response_length":264 }, { "question":"Pandas long to wide reshape, by two variables I have data in long format and am trying to reshape to wide, but there doesn't seem to be a straightforward way to do this using melt\/stack\/unstack: ``` Salesman Height product price Knut 6 bat 5 Knut 6 ball 1 Knut 6 wand 3 Steve 5 pen 2 ``` Becomes: ``` Salesman Height product_1 price_1 product_2 price_2 product_3 price_3 Knut 6 bat 5 ball 1 wand 3 Steve 5 pen 2 NA NA NA NA ``` I think Stata can do something like this with the reshape command.", "response":"Here's another solution, more fleshed out, taken from Chris Albon's site.
Create \"long\" dataframe ``` import pandas as pd raw_data = { 'patient': [1, 1, 1, 2, 2], 'obs': [1, 2, 3, 1, 2], 'treatment': [0, 1, 0, 1, 0], 'score': [6252, 24243, 2345, 2342, 23525]} df = pd.DataFrame(raw_data, columns=['patient', 'obs', 'treatment', 'score']) ``` ``` patient obs treatment score 0 1 1 0 6252 1 1 2 1 24243 2 1 3 0 2345 3 2 1 1 2342 4 2 2 0 23525 ``` Make a \"wide\" dataframe ``` df.pivot(index='patient', columns='obs', values='score') ``` ``` obs 1 2 3 patient 1 6252.0 24243.0 2345.0 2 2342.0 23525.0 NaN ```", "best_answers_score":0.6617, "library_name":"pandas", "question_url":"https:\/\/stackoverflow.com\/questions\/22798934\/pandas-long-to-wide-reshape-by-two-variables", "best_answers_votes":108, "question_length":494, "response_length":578 }, { "question":"Python Pandas - Changing some column types to categories I have fed the following CSV file into iPython Notebook: ``` public = pd.read_csv(\"categories.csv\") public ``` I've also imported pandas as pd, numpy as np and matplotlib.pyplot as plt. The following data types are present (the below is a summary - there are about 100 columns) ``` In [36]: public.dtypes Out[37]: parks object playgrounds object sports object roading object resident int64 children int64 ``` I want to change 'parks', 'playgrounds', 'sports' and 'roading' to categories (they have Likert scale responses in them; each column has different types of Likert responses, e.g. one has \"strongly agree\", \"agree\" etc., another has \"very important\", \"important\" etc.), leaving the remainder as int64.
I was able to create a separate dataframe - public1 - and change one of the columns to a category type using the following code: ``` public1 = {'parks': public.parks} public1 = public1['parks'].astype('category') ``` However, when I tried to change a number of columns at once using this code, I was unsuccessful: ``` public1 = {'parks': public.parks, 'playgrounds': public.parks} public1 = public1['parks', 'playgrounds'].astype('category') ``` Notwithstanding this, I don't want to create a separate dataframe with just the category columns. I would like them changed in the original dataframe. I tried numerous ways to achieve this, then tried the code here: Change column type in pandas. ``` public[['parks', 'playgrounds', 'sports', 'roading']] = public[['parks', 'playgrounds', 'sports', 'roading']].astype('category') ``` and got the following error: ``` NotImplementedError: > 1 ndim Categorical are not supported at this time ``` Is there a way to change 'parks', 'playgrounds', 'sports', 'roading' to categories (so the Likert scale responses can then be analysed), leaving 'resident' and 'children' (and the 94 other columns that are string, int + floats) untouched? I am using Python 2.7.", "response":"Sometimes, you just have to use a for-loop: ``` for col in ['parks', 'playgrounds', 'sports', 'roading']: public[col] = public[col].astype('category') ```", "best_answers_score":0.6602, "library_name":"pandas", "question_url":"https:\/\/stackoverflow.com\/questions\/28910851\/python-pandas-changing-some-column-types-to-categories", "best_answers_votes":162, "question_length":1965, "response_length":154 }, { "question":"Plotting categorical data with pandas and matplotlib I have a data frame with categorical data: ``` colour direction 1 red up 2 blue up 3 green down 4 red left 5 red right 6 yellow down 7 blue down ``` I want to generate some graphs, like pie charts and histograms based on the categories. Is it possible without creating dummy numeric variables?
Something like ``` df.plot(kind='hist') ```", "response":"You can simply use value_counts on the series: ``` df['colour'].value_counts().plot(kind='bar') ```", "best_answers_score":0.6598, "library_name":"pandas", "question_url":"https:\/\/stackoverflow.com\/questions\/31029560\/plotting-categorical-data-with-pandas-and-matplotlib", "best_answers_votes":271, "question_length":390, "response_length":99 }, { "question":"Turn Pandas Multi-Index into column I have a dataframe with 2 index levels: ``` value Trial measurement 1 0 13 1 3 2 4 2 0 NaN 1 12 3 0 34 ``` Which I want to turn into this: ``` Trial measurement value 1 0 13 1 1 3 1 2 4 2 0 NaN 2 1 12 3 0 34 ``` How can I best do this? I need this because I want to aggregate the data as instructed here, but I can't select my columns like that if they are in use as indices.", "response":"The reset_index() is a pandas DataFrame method that will transfer index values into the DataFrame as columns. The default setting for the parameter is drop=False (which will keep the index values as columns). All you have to do call .reset_index() after the name of the DataFrame: ``` df = df.reset_index() ```", "best_answers_score":0.6597, "library_name":"pandas", "question_url":"https:\/\/stackoverflow.com\/questions\/20110170\/turn-pandas-multi-index-into-column", "best_answers_votes":340, "question_length":411, "response_length":310 }, { "question":"Add column in dataframe from list I have a dataframe with some columns like this: ``` A B C 0 4 5 6 7 7 6 5 ``` The possible range of values in A are only from 0 to 7. Also, I have a list of 8 elements like this: ``` List=[2,5,6,8,12,16,26,32] \/\/There are only 8 elements in this list ``` If the element in column A is n, I need to insert the n th element from the List in a new column, say 'D'. How can I do this in one go without looping over the whole dataframe? 
The resulting dataframe would look like this: ``` A B C D 0 2 4 12 5 16 6 26 7 32 7 32 6 26 5 16 ``` Note: The dataframe is huge and iteration is the last option. But I can also arrange the elements in 'List' in any other data structure like dict if necessary.", "response":"Just assign the list directly: ``` df['new_col'] = mylist ``` Alternatively, convert the list to a series or array and then assign: ``` se = pd.Series(mylist) df['new_col'] = se.values ``` or ``` df['new_col'] = np.array(mylist) ```", "best_answers_score":0.6591, "library_name":"pandas", "question_url":"https:\/\/stackoverflow.com\/questions\/26666919\/add-column-in-dataframe-from-list", "best_answers_votes":445, "question_length":733, "response_length":229 }, { "question":"How to apply a function on every row of a dataframe? I am new to Python and I am not sure how to solve the following problem. I have a function: ``` def EOQ(D,p,ck,ch): Q = math.sqrt((2*D*ck)\/(ch*p)) return Q ``` Say I have the dataframe ``` df = pd.DataFrame({\"D\": [10,20,30], \"p\": [20, 30, 10]}) D p 0 10 20 1 20 30 2 30 10 ch=0.2 ck=5 ``` And ch and ck are float types. Now I want to apply the formula to every row of the dataframe and return it as an extra column 'Q'.
An example (that does not work) would be: ``` df['Q']= map(lambda p, D: EOQ(D,p,ck,ch),df['p'], df['D']) ``` (returns only 'map' types) I will need this type of processing more in my project and I hope to find something that works.", "response":"The following should work: ``` def EOQ(D,p,ck,ch): Q = math.sqrt((2*D*ck)\/(ch*p)) return Q ch=0.2 ck=5 df['Q'] = df.apply(lambda row: EOQ(row['D'], row['p'], ck, ch), axis=1) df ``` If all you're doing is calculating the square root of some result then use the np.sqrt method this is vectorised and will be significantly faster: ``` In [80]: df['Q'] = np.sqrt((2*df['D']*ck)\/(ch*df['p'])) df Out[80]: D p Q 0 10 20 5.000000 1 20 30 5.773503 2 30 10 12.247449 ``` Timings For a 30k row df: ``` In [92]: import math ch=0.2 ck=5 def EOQ(D,p,ck,ch): Q = math.sqrt((2*D*ck)\/(ch*p)) return Q %timeit np.sqrt((2*df['D']*ck)\/(ch*df['p'])) %timeit df.apply(lambda row: EOQ(row['D'], row['p'], ck, ch), axis=1) 1000 loops, best of 3: 622 \u00b5s per loop 1 loops, best of 3: 1.19 s per loop ``` You can see that the np method is ~1900 X faster", "best_answers_score":0.6589, "library_name":"pandas", "question_url":"https:\/\/stackoverflow.com\/questions\/33518124\/how-to-apply-a-function-on-every-row-on-a-dataframe", "best_answers_votes":126, "question_length":701, "response_length":828 }, { "question":"How to lowercase a pandas dataframe string column if it has missing values? The following code does not work. ``` import pandas as pd import numpy as np df=pd.DataFrame(['ONE','Two', np.nan],columns=['x']) xLower = df[\"x\"].map(lambda x: x.lower()) ``` How should I tweak it to get xLower = ['one','two',np.nan] ? 
Efficiency is important since the real data frame is huge.", "response":"Use pandas' vectorized string methods; as noted in the documentation, these methods exclude missing\/NA values automatically. .str.lower() is the very first example there: ``` >>> df['x'].str.lower() 0 one 1 two 2 NaN Name: x, dtype: object ```", "best_answers_score":0.6588, "library_name":"pandas", "question_url":"https:\/\/stackoverflow.com\/questions\/22245171\/how-to-lowercase-a-pandas-dataframe-string-column-if-it-has-missing-values", "best_answers_votes":313, "question_length":371, "response_length":235 }, { "question":"Pandas read in table without headers Using pandas, how do I read in only a subset of the columns (say the 4th and 7th columns) of a .csv file with no headers? I cannot seem to do so using usecols.", "response":"To read in a csv that doesn't have a header, and only certain columns, you need to pass the params header=None and usecols=[3,6] for the 4th and 7th columns: ``` df = pd.read_csv(file_path, header=None, usecols=[3,6]) ``` See the docs", "best_answers_score":0.6584, "library_name":"pandas", "question_url":"https:\/\/stackoverflow.com\/questions\/29287224\/pandas-read-in-table-without-headers", "best_answers_votes":575, "question_length":203, "response_length":241 }, { "question":"Pandas - Filtering None Values I'm using Pandas to explore some datasets. I have this dataframe: I want to exclude any row that has a value in column City. So I've tried: ``` new_df = all_df[(all_df[\"City\"] == \"None\") ] new_df ``` But then I got an empty dataframe: It works whenever I use any value other than None.
Any idea how to filter this dataframe?", "response":"Consider using isnull() to locate missing values: ``` all_df[all_df['City'].isnull()] ```", "best_answers_score":0.658, "library_name":"pandas", "question_url":"https:\/\/stackoverflow.com\/questions\/45117272\/pandas-filtering-none-values", "best_answers_votes":206, "question_length":355, "response_length":88 }, { "question":"How do I check if a pandas DataFrame is empty? How do I check if a pandas DataFrame is empty? I'd like to print some message in the terminal if the DataFrame is empty.", "response":"You can use the attribute df.empty to check whether it's empty or not: ``` if df.empty: print('DataFrame is empty!') ``` Source: Pandas Documentation", "best_answers_score":0.6578, "library_name":"pandas", "question_url":"https:\/\/stackoverflow.com\/questions\/19828822\/how-do-i-check-if-a-pandas-dataframe-is-empty", "best_answers_votes":1162, "question_length":167, "response_length":149 }, { "question":"float64 with pandas to_csv I'm reading a CSV with float numbers like this: ``` Bob,0.085 Alice,0.005 ``` I import it into a dataframe and write this dataframe to a new place ``` df = pd.read_csv(orig) df.to_csv(pandasfile) ``` Now this pandasfile has: ``` Bob,0.085000000000000006 Alice,0.0050000000000000001 ``` What happened? Maybe I have to cast to a different type like float32 or something? I'm using pandas 0.9.0 and numpy 1.6.2.
For an explanation of %g, see Format Specification Mini-Language.", "best_answers_score":0.6569, "library_name":"pandas", "question_url":"https:\/\/stackoverflow.com\/questions\/12877189\/float64-with-pandas-to-csv", "best_answers_votes":258, "question_length":432, "response_length":433 }, { "question":"Concatenate a list of pandas dataframes together I have a list of Pandas dataframes that I would like to combine into one Pandas dataframe. I am using Python 2.7.10 and Pandas 0.16.2 I created the list of dataframes from: ``` import pandas as pd dfs = [] sqlall = \"select * from mytable\" for chunk in pd.read_sql_query(sqlall , cnxn, chunksize=10000): dfs.append(chunk) ``` This returns a list of dataframes ``` type(dfs[0]) Out[6]: pandas.core.frame.DataFrame type(dfs) Out[7]: list len(dfs) Out[8]: 408 ``` Here is some sample data ``` # sample dataframes d1 = pd.DataFrame({'one' : [1., 2., 3., 4.], 'two' : [4., 3., 2., 1.]}) d2 = pd.DataFrame({'one' : [5., 6., 7., 8.], 'two' : [9., 10., 11., 12.]}) d3 = pd.DataFrame({'one' : [15., 16., 17., 18.], 'two' : [19., 10., 11., 12.]}) # list of dataframes mydfs = [d1, d2, d3] ``` I would like to combine d1, d2, and d3 into one pandas dataframe. 
Alternatively, a method of reading a large-ish table directly into a dataframe when using the chunksize option would be very helpful.", "response":"Given that all the dataframes have the same columns, you can simply concat them: ``` import pandas as pd df = pd.concat(list_of_dataframes) ```", "best_answers_score":0.6558, "library_name":"pandas", "question_url":"https:\/\/stackoverflow.com\/questions\/32444138\/concatenate-a-list-of-pandas-dataframes-together", "best_answers_votes":530, "question_length":1030, "response_length":143 }, { "question":"Trouble passing in lambda to apply for pandas DataFrame: \"TypeError: () got an unexpected keyword argument 'axis' \" I'm trying to apply a function to all rows of a pandas DataFrame (actually just one column in that DataFrame) I'm sure this is a syntax error but I'm know sure what I'm doing wrong ```py df['col'].apply(lambda x, y:(x - y).total_seconds(), args=[d1], axis=1) ``` The col column contains a bunch a datetime.datetime objects and d1 is the earliest of them. I'm trying to get a column of the total number of seconds for each of the rows. I keep getting the following error ```none TypeError: () got an unexpected keyword argument 'axis' ``` I don't understand why axis is getting passed to my lambda function I've also tried doing ```py def diff_dates(d1, d2): return (d1-d2).total_seconds() df['col'].apply(diff_dates, args=[d1], axis=1) ``` And I get the same error.", "response":"Note there is no axis param for a Series.apply call, as distinct to a DataFrame.apply call. Series.apply(func, convert_dtype=True, args=(), **kwds) ... func : function convert_dtype : boolean, default True Try to find better dtype for elementwise function results. If False, leave as dtype=object args : tuple Positional arguments to pass to function in addition to the value **kwds Additional keyword arguments passed to func. 
There is an axis param for DataFrame.apply, but it's unclear how you expect this to work when you're calling apply on a Series yet expecting it to operate on a row.", "best_answers_score":0.6551, "library_name":"pandas", "question_url":"https:\/\/stackoverflow.com\/questions\/29155310\/trouble-passing-in-lambda-to-apply-for-pandas-dataframe-typeerror-lambda", "best_answers_votes":136, "question_length":881, "response_length":577 }, { "question":"Pandas: add a column to a multiindex column dataframe I would like to add a column to the second level of a multiindex column dataframe. ``` In [151]: df Out[151]: first bar baz second one two one two A 0.487880 -0.487661 -1.030176 0.100813 B 0.267913 1.918923 0.132791 0.178503 C 1.550526 -0.312235 -1.177689 -0.081596 ``` The usual trick of direct assignment does not work: ``` In [152]: df['bar']['three'] = [0, 1, 2] In [153]: df Out[153]: first bar baz second one two one two A 0.487880 -0.487661 -1.030176 0.100813 B 0.267913 1.918923 0.132791 0.178503 C 1.550526 -0.312235 -1.177689 -0.081596 ``` How can I add the third column under \"bar\"?", "response":"It's actually pretty simple (FWIW, I originally thought to do it your way): ``` df['bar', 'three'] = [0, 1, 2] df = df.sort_index(axis=1) print(df) bar baz one two three one two A -0.212901 0.503615 0 -1.660945 0.446778 B -0.803926 -0.417570 1 -0.336827 0.989343 C 3.400885 -0.214245 2 0.895745 1.011671 ```", "best_answers_score":0.6542, "library_name":"pandas", "question_url":"https:\/\/stackoverflow.com\/questions\/16088741\/pandas-add-a-column-to-a-multiindex-column-dataframe", "best_answers_votes":127, "question_length":647, "response_length":307 }, { "question":"AttributeError: 'Series' object has no attribute 'reshape' I'm using scikit-learn's linear regression algorithm.
While scaling Y target feature with: ``` Ys = scaler.fit_transform(Y) ``` I got ValueError: Expected 2D array, got 1D array instead: After that I reshaped using: ``` Ys = scaler.fit_transform(Y.reshape(-1,1)) ``` But got error again: AttributeError: 'Series' object has no attribute 'reshape' So I checked the pandas.Series documentation page and it says: reshape(*args, **kwargs) Deprecated since version 0.19.0.", "response":"The solution was linked in the documentation for the deprecated reshape method. Instead of Y.reshape(-1,1) you need to use: ``` Y.values.reshape(-1,1) ```", "best_answers_score":0.6537, "library_name":"pandas", "question_url":"https:\/\/stackoverflow.com\/questions\/53723928\/attributeerror-series-object-has-no-attribute-reshape", "best_answers_votes":161, "question_length":521, "response_length":135 }, { "question":"Indexing Pandas data frames: integer rows, named columns Say df is a pandas dataframe. df.loc[] only accepts names df.iloc[] only accepts integers (actual placements) df.ix[] accepts both names and integers: When referencing rows, df.ix[row_idx, ] only wants to be given names. e.g. ``` df = pd.DataFrame({'a' : ['one', 'two', 'three','four', 'five', 'six'], '1' : np.arange(6)}) df = df.ix[2:6] print(df) 1 a 2 2 three 3 3 four 4 4 five 5 5 six df.ix[0, 'a'] ``` throws an error, it doesn't return 'two'. When referencing columns, ix prefers integers, not names. e.g. ``` df.ix[2, 1] ``` returns 'three', not 2. (Although df.ix[2, '1'] does return 2). Oddly, I'd like the exact opposite functionality. Usually my column names are very meaningful, so in my code I reference them directly. But due to a lot of observation cleaning, the row names in my pandas data frames don't usually correspond to range(len(df)). I realize I can use: ``` df.iloc[0].loc['a'] # returns three ``` But it seems ugly! Does anyone know of a better way to do this, so that the code would look like this?
``` df.foo[0, 'a'] # returns three ``` In fact, is it possible to add on my own new method to pandas.core.frame.DataFrames, so e.g. df.idx(rows, cols) is in fact df.iloc[rows].loc[cols]?", "response":"It's a late answer, but @unutbu's comment is still valid and a great solution to this problem. To index a DataFrame with integer rows and named columns (labeled columns): df.loc[df.index[#], 'NAME'] where # is a valid integer index and NAME is the name of the column.", "best_answers_score":0.6515, "library_name":"pandas", "question_url":"https:\/\/stackoverflow.com\/questions\/28754603\/indexing-pandas-data-frames-integer-rows-named-columns", "best_answers_votes":85, "question_length":1279, "response_length":267 }, { "question":"Concat DataFrame Reindexing only valid with uniquely valued Index objects I am trying to concat the following dataframes: df1 ``` price side timestamp timestamp 2016-01-04 00:01:15.631331072 0.7286 2 1451865675631331 2016-01-04 00:01:15.631399936 0.7286 2 1451865675631400 2016-01-04 00:01:15.631860992 0.7286 2 1451865675631861 2016-01-04 00:01:15.631866112 0.7286 2 1451865675631866 ``` and: df2 ``` bid bid_size offer offer_size timestamp 2016-01-04 00:00:31.331441920 0.7284 4000000 0.7285 1000000 2016-01-04 00:00:53.631324928 0.7284 4000000 0.7290 4000000 2016-01-04 00:01:03.131234048 0.7284 5000000 0.7286 4000000 2016-01-04 00:01:12.131444992 0.7285 1000000 0.7286 4000000 2016-01-04 00:01:15.631364096 0.7285 4000000 0.7290 4000000 ``` With ``` data = pd.concat([df1,df2], axis=1) ``` But I get the follwing output: ``` InvalidIndexError Traceback (most recent call last) in () ----> 1 data = pd.concat([df1,df2], axis=1) 2 data = data.fillna(method='pad') 3 data = data.fillna(method='bfill') 4 data['timestamp'] = data.index.values#converting to datetime 5 data['timestamp'] = pd.to_datetime(data['timestamp'])#converting to datetime \/usr\/local\/lib\/python2.7\/site-packages\/pandas\/tools\/merge.pyc in concat(objs, axis, join, join_axes, 
ignore_index, keys, levels, names, verify_integrity, copy) 810 keys=keys, levels=levels, names=names, 811 verify_integrity=verify_integrity, --> 812 copy=copy) 813 return op.get_result() 814 \/usr\/local\/lib\/python2.7\/site-packages\/pandas\/tools\/merge.pyc in __init__(self, objs, axis, join, join_axes, keys, levels, names, ignore_index, verify_integrity, copy) 947 self.copy = copy 948 --> 949 self.new_axes = self._get_new_axes() 950 951 def get_result(self): \/usr\/local\/lib\/python2.7\/site-packages\/pandas\/tools\/merge.pyc in _get_new_axes(self) 1013 if i == self.axis: 1014 continue -> 1015 new_axes[i] = self._get_comb_axis(i) 1016 else: 1017 if len(self.join_axes) != ndim - 1: \/usr\/local\/lib\/python2.7\/site-packages\/pandas\/tools\/merge.pyc in _get_comb_axis(self, i) 1039 raise TypeError(\"Cannot concatenate list of %s\" % types) 1040 -> 1041 return _get_combined_index(all_indexes, intersect=self.intersect) 1042 1043 def _get_concat_axis(self): \/usr\/local\/lib\/python2.7\/site-packages\/pandas\/core\/index.pyc in _get_combined_index(indexes, intersect) 6120 index = index.intersection(other) 6121 return index -> 6122 union = _union_indexes(indexes) 6123 return _ensure_index(union) 6124 \/usr\/local\/lib\/python2.7\/site-packages\/pandas\/core\/index.pyc in _union_indexes(indexes) 6149 6150 if hasattr(result, 'union_many'): -> 6151 return result.union_many(indexes[1:]) 6152 else: 6153 for other in indexes[1:]: \/usr\/local\/lib\/python2.7\/site-packages\/pandas\/tseries\/index.pyc in union_many(self, others) 959 else: 960 tz = this.tz --> 961 this = Index.union(this, other) 962 if isinstance(this, DatetimeIndex): 963 this.tz = tz \/usr\/local\/lib\/python2.7\/site-packages\/pandas\/core\/index.pyc in union(self, other) 1553 result.extend([x for x in other._values if x not in value_set]) 1554 else: -> 1555 indexer = self.get_indexer(other) 1556 indexer, = (indexer == -1).nonzero() 1557 
\/usr\/local\/lib\/python2.7\/site-packages\/pandas\/core\/index.pyc in get_indexer(self, target, method, limit, tolerance) 1890 1891 if not self.is_unique: -> 1892 raise InvalidIndexError('Reindexing only valid with uniquely' 1893 ' valued Index objects') 1894 InvalidIndexError: Reindexing only valid with uniquely valued Index objects ``` I have removed additional columns and removed duplicates and NA where there could be a conflict - but I simply do not know what's wrong.", "response":"Duplicated column names! In my case the problem was because I had duplicated column names.", "best_answers_score":0.6505, "library_name":"pandas", "question_url":"https:\/\/stackoverflow.com\/questions\/35084071\/concat-dataframe-reindexing-only-valid-with-uniquely-valued-index-objects", "best_answers_votes":161, "question_length":3592, "response_length":90 }, { "question":"Drop all duplicate rows across multiple columns in Python Pandas The pandas drop_duplicates function is great for \"uniquifying\" a dataframe. I would like to drop all rows which are duplicates across a subset of columns. Is this possible? ```none A B C 0 foo 0 A 1 foo 1 A 2 foo 1 B 3 bar 1 A ``` As an example, I would like to drop rows which match on columns A and C so this should drop rows 0 and 1.", "response":"This is much easier in pandas now with drop_duplicates and the keep parameter. ``` import pandas as pd df = pd.DataFrame({\"A\":[\"foo\", \"foo\", \"foo\", \"bar\"], \"B\":[0,1,1,1], \"C\":[\"A\",\"A\",\"B\",\"A\"]}) df.drop_duplicates(subset=['A', 'C'], keep=False) ```", "best_answers_score":0.6502, "library_name":"pandas", "question_url":"https:\/\/stackoverflow.com\/questions\/23667369\/drop-all-duplicate-rows-across-multiple-columns-in-python-pandas", "best_answers_votes":369, "question_length":401, "response_length":248 }, { "question":"Find the max of two or more columns with pandas I have a dataframe with columns A,B. I need to create a column C such that for every record \/ row: C = max(A, B). 
How should I go about doing this?", "response":"You can get the maximum like this: ``` >>> import pandas as pd >>> df = pd.DataFrame({\"A\": [1,2,3], \"B\": [-2, 8, 1]}) >>> df A B 0 1 -2 1 2 8 2 3 1 >>> df[[\"A\", \"B\"]] A B 0 1 -2 1 2 8 2 3 1 >>> df[[\"A\", \"B\"]].max(axis=1) 0 1 1 8 2 3 ``` and so: ``` >>> df[\"C\"] = df[[\"A\", \"B\"]].max(axis=1) >>> df A B C 0 1 -2 1 1 2 8 8 2 3 1 3 ``` If you know that \"A\" and \"B\" are the only columns, you could even get away with ``` >>> df[\"C\"] = df.max(axis=1) ``` And you could use .apply(max, axis=1) too, I guess.", "best_answers_score":0.6499, "library_name":"pandas", "question_url":"https:\/\/stackoverflow.com\/questions\/12169170\/find-the-max-of-two-or-more-columns-with-pandas", "best_answers_votes":317, "question_length":195, "response_length":500 }, { "question":"Delete the first three rows of a dataframe in pandas I need to delete the first three rows of a dataframe in pandas. I know df.ix[:-1] would remove the last row, but I can't figure out how to remove first n rows.", "response":"Use iloc: ``` df = df.iloc[3:] ``` will give you a new df without the first three rows.", "best_answers_score":0.6495, "library_name":"pandas", "question_url":"https:\/\/stackoverflow.com\/questions\/16396903\/delete-the-first-three-rows-of-a-dataframe-in-pandas", "best_answers_votes":476, "question_length":212, "response_length":87 }, { "question":"How can I split a column of tuples in a Pandas dataframe? 
I have a Pandas dataframe (this is only a little piece) ``` >>> d1 y norm test y norm train len(y_train) len(y_test) \\ 0 64.904368 116.151232 1645 549 1 70.852681 112.639876 1645 549 SVR RBF \\ 0 (35.652207342877873, 22.95533537448393) 1 (39.563683797747622, 27.382483096332511) LCV \\ 0 (19.365430594452338, 13.880062435173587) 1 (19.099614489458364, 14.018867136617146) RIDGE CV \\ 0 (4.2907610988480362, 12.416745648065584) 1 (4.18864306788194, 12.980833914392477) RF \\ 0 (9.9484841581029428, 16.46902345373697) 1 (10.139848213735391, 16.282141345406522) GB \\ 0 (0.012816232716538605, 15.950164822266007) 1 (0.012814519804493328, 15.305745202851712) ET DATA 0 (0.00034337162272515505, 16.284800366214057) j2m 1 (0.00024811554516431878, 15.556506191784194) j2m >>> ``` I want to split all the columns that contain tuples. For example, I want to replace the column LCV with the columns LCV-a and LCV-b. How can I do that?", "response":"You can do this by doing pd.DataFrame(col.tolist()) on that column: ``` In [2]: df = pd.DataFrame({'a':[1,2], 'b':[(1,2), (3,4)]}) In [3]: df Out[3]: a b 0 1 (1, 2) 1 2 (3, 4) In [4]: df['b'].tolist() Out[4]: [(1, 2), (3, 4)] In [5]: pd.DataFrame(df['b'].tolist(), index=df.index) Out[5]: 0 1 0 1 2 1 3 4 In [6]: df[['b1', 'b2']] = pd.DataFrame(df['b'].tolist(), index=df.index) In [7]: df Out[7]: a b b1 b2 0 1 (1, 2) 1 2 1 2 (3, 4) 3 4 ``` Note: in an earlier version, this answer recommended to use df['b'].apply(pd.Series) instead of pd.DataFrame(df['b'].tolist(), index=df.index). 
That works as well (because it makes a Series of each tuple, which is then seen as a row of a dataframe), but it is slower \/ uses more memory than the tolist version, as noted by the other answers here (thanks to denfromufa).", "best_answers_score":0.6495, "library_name":"pandas", "question_url":"https:\/\/stackoverflow.com\/questions\/29550414\/how-can-i-split-a-column-of-tuples-in-a-pandas-dataframe", "best_answers_votes":267, "question_length":977, "response_length":811 }, { "question":"How to reset index in a pandas dataframe? I have a dataframe from which I remove some rows. As a result, I get a dataframe in which index is something like [1,5,6,10,11] and I would like to reset it to [0,1,2,3,4]. How can I do it? The following seems to work: ```py df = df.reset_index() del df['index'] ``` The following does not work: ```py df = df.reindex() ```", "response":"DataFrame.reset_index is what you're looking for. If you don't want it saved as a column, then do: ``` df = df.reset_index(drop=True) ``` If you don't want to reassign: ``` df.reset_index(drop=True, inplace=True) ```", "best_answers_score":0.6494, "library_name":"pandas", "question_url":"https:\/\/stackoverflow.com\/questions\/20490274\/how-to-reset-index-in-a-pandas-dataframe", "best_answers_votes":1164, "question_length":365, "response_length":216 }, { "question":"How to check if a column exists in Pandas How do I check if a column exists in a Pandas DataFrame df? 
```none A B C 0 3 40 100 1 6 30 200 ``` How would I check if the column \"A\" exists in the above DataFrame so that I can compute: ```py df['sum'] = df['A'] + df['C'] ``` And if \"A\" doesn't exist: ```py df['sum'] = df['B'] + df['C'] ```", "response":"This will work: ``` if 'A' in df: ``` But for clarity, I'd probably write it as: ``` if 'A' in df.columns: ```", "best_answers_score":0.6486, "library_name":"pandas", "question_url":"https:\/\/stackoverflow.com\/questions\/24870306\/how-to-check-if-a-column-exists-in-pandas", "best_answers_votes":1278, "question_length":336, "response_length":110 }, { "question":"Adding a column that's the result of the difference in consecutive rows in pandas Let's say I have a dataframe like this ``` A B 0 a b 1 c d 2 e f 3 g h ``` 0,1,2,3 are times, a, c, e, g is one time series and b, d, f, h is another time series. I need to add two columns to the original dataframe, obtained by computing the differences of consecutive rows for certain columns. So I need something like this ``` A B dA 0 a b (a-c) 1 c d (c-e) 2 e f (e-g) 3 g h NaN ``` I saw something called diff on the dataframe\/series, but that does it slightly differently in that the first element will become NaN.", "response":"Use shift. ``` df['dA'] = df['A'] - df['A'].shift(-1) ```", "best_answers_score":0.6477, "library_name":"pandas", "question_url":"https:\/\/stackoverflow.com\/questions\/23142967\/adding-a-column-thats-result-of-difference-in-consecutive-rows-in-pandas", "best_answers_votes":158, "question_length":597, "response_length":57 }, { "question":"Get a list from Pandas DataFrame column headers I want to get a list of the column headers from a Pandas DataFrame. The DataFrame will come from user input, so I won't know how many columns there will be or what they will be called.
For example, if I'm given a DataFrame like this: ```none y gdp cap 0 1 2 5 1 2 3 9 2 8 7 2 3 3 4 7 4 6 7 7 5 4 8 3 6 8 2 8 7 9 9 10 8 6 6 4 9 10 10 7 ``` I would get a list like this: ```none ['y', 'gdp', 'cap'] ```", "response":"You can get the values as a list by doing: ``` list(my_dataframe.columns.values) ``` Also you can simply use (as shown in Ed Chum's answer): ``` list(my_dataframe) ```", "best_answers_score":0.6476, "library_name":"pandas", "question_url":"https:\/\/stackoverflow.com\/questions\/19482970\/get-a-list-from-pandas-dataframe-column-headers", "best_answers_votes":2056, "question_length":448, "response_length":167 }, { "question":"How do I select rows from a DataFrame based on column values? How can I select rows from a DataFrame based on values in some column in Pandas? In SQL, I would use: ```sql SELECT * FROM table WHERE column_name = some_value ```", "response":"To select rows whose column value equals a scalar, some_value, use ==: ``` df.loc[df['column_name'] == some_value] ``` To select rows whose column value is in an iterable, some_values, use isin: ``` df.loc[df['column_name'].isin(some_values)] ``` Combine multiple conditions with &: ``` df.loc[(df['column_name'] >= A) & (df['column_name'] <= B)] ``` Note the parentheses. Due to Python's operator precedence rules, & binds more tightly than <= and >=. Thus, the parentheses in the last example are necessary. Without the parentheses ``` df['column_name'] >= A & df['column_name'] <= B ``` is parsed as ``` df['column_name'] >= (A & df['column_name']) <= B ``` which results in a Truth value of a Series is ambiguous error.
To select rows whose column value does not equal some_value, use !=: ``` df.loc[df['column_name'] != some_value] ``` The isin returns a boolean Series, so to select rows whose value is not in some_values, negate the boolean Series using ~: ``` df = df.loc[~df['column_name'].isin(some_values)] # .loc is not in-place replacement ``` For example, ``` import pandas as pd import numpy as np df = pd.DataFrame({'A': 'foo bar foo bar foo bar foo foo'.split(), 'B': 'one one two three two two one three'.split(), 'C': np.arange(8), 'D': np.arange(8) * 2}) print(df) # A B C D # 0 foo one 0 0 # 1 bar one 1 2 # 2 foo two 2 4 # 3 bar three 3 6 # 4 foo two 4 8 # 5 bar two 5 10 # 6 foo one 6 12 # 7 foo three 7 14 print(df.loc[df['A'] == 'foo']) ``` yields ``` A B C D 0 foo one 0 0 2 foo two 2 4 4 foo two 4 8 6 foo one 6 12 7 foo three 7 14 ``` If you have multiple values you want to include, put them in a list (or more generally, any iterable) and use isin: ``` print(df.loc[df['B'].isin(['one','three'])]) ``` yields ``` A B C D 0 foo one 0 0 1 bar one 1 2 3 bar three 3 6 6 foo one 6 12 7 foo three 7 14 ``` Note, however, that if you wish to do this many times, it is more efficient to make an index first, and then use df.loc: ``` df = df.set_index(['B']) print(df.loc['one']) ``` yields ``` A C D B one foo 0 0 one bar 1 2 one foo 6 12 ``` or, to include multiple values from the index use df.index.isin: ``` df.loc[df.index.isin(['one','two'])] ``` yields ``` A C D B one foo 0 0 one bar 1 2 two foo 2 4 two foo 4 8 two bar 5 10 one foo 6 12 ```", "best_answers_score":0.647, "library_name":"pandas", "question_url":"https:\/\/stackoverflow.com\/questions\/17071871\/how-do-i-select-rows-from-a-dataframe-based-on-column-values", "best_answers_votes":6614, "question_length":225, "response_length":2117 }, { "question":"Find element's index in pandas Series I know this is a very basic question but for some reason I can't find an answer. 
How can I get the index of a certain element of a Series in pandas? (first occurrence would suffice) I.e., I'd like something like: ``` import pandas as pd myseries = pd.Series([1,4,0,7,5], index=[0,1,2,3,4]) print myseries.find(7) # should output 3 ``` Certainly, it is possible to define such a method with a loop: ``` def find(s, el): for i in s.index: if s[i] == el: return i return None print find(myseries, 7) ``` but I assume there should be a better way. Is there?", "response":"``` >>> myseries[myseries == 7] 3 7 dtype: int64 >>> myseries[myseries == 7].index[0] 3 ``` I admit that there should be a better way to do that, but this at least avoids iterating and looping through the object and moves it to the C level.", "best_answers_score":0.6466, "library_name":"pandas", "question_url":"https:\/\/stackoverflow.com\/questions\/18327624\/find-elements-index-in-pandas-series", "best_answers_votes":298, "question_length":596, "response_length":247 }, { "question":"pandas: merge (join) two data frames on multiple columns I am trying to join two pandas dataframes using two columns: ```py new_df = pd.merge(A_df, B_df, how='left', left_on='[A_c1,c2]', right_on = '[B_c1,c2]') ``` but got the following error: ```none pandas\/index.pyx in pandas.index.IndexEngine.get_loc (pandas\/index.c:4164)() pandas\/index.pyx in pandas.index.IndexEngine.get_loc (pandas\/index.c:4028)() pandas\/src\/hashtable_class_helper.pxi in pandas.hashtable.PyObjectHashTable.get_item (pandas\/hashtable.c:13166)() pandas\/src\/hashtable_class_helper.pxi in pandas.hashtable.PyObjectHashTable.get_item (pandas\/hashtable.c:13120)() KeyError: '[B_1, c2]' ``` Any idea what should be the right way to do this?", "response":"Try this ``` new_df = pd.merge( left=A_df, right=B_df, how='left', left_on=['A_c1', 'c2'], right_on=['B_c1', 'c2'], ) ``` https:\/\/pandas.pydata.org\/pandas-docs\/stable\/reference\/api\/pandas.DataFrame.merge.html left_on : label or list, or array-like Field names to join
on in left DataFrame. Can be a vector or list of vectors of the length of the DataFrame to use a particular vector as the join key instead of columns right_on : label or list, or array-like Field names to join on in right DataFrame or vector\/list of vectors per left_on docs", "best_answers_score":0.6465, "library_name":"pandas", "question_url":"https:\/\/stackoverflow.com\/questions\/41815079\/pandas-merge-join-two-data-frames-on-multiple-columns", "best_answers_votes":636, "question_length":709, "response_length":542 }, { "question":"How to drop rows of Pandas DataFrame whose value in a certain column is NaN I have this DataFrame and want only the records whose EPS column is not NaN: ```none STK_ID EPS cash STK_ID RPT_Date 601166 20111231 601166 NaN NaN 600036 20111231 600036 NaN 12 600016 20111231 600016 4.3 NaN 601009 20111231 601009 NaN NaN 601939 20111231 601939 2.5 NaN 000001 20111231 000001 NaN NaN ``` ...i.e. something like df.drop(....) to get this resulting dataframe: ```none STK_ID EPS cash STK_ID RPT_Date 600016 20111231 600016 4.3 NaN 601939 20111231 601939 2.5 NaN ``` How do I do that?", "response":"This question is already resolved, but... ...also consider the solution suggested by Wouter in his original comment. The ability to handle missing data, including dropna(), is built into pandas explicitly. Aside from potentially improved performance over doing it manually, these functions also come with a variety of options which may be useful. 
``` In [24]: df = pd.DataFrame(np.random.randn(10,3)) In [25]: df.iloc[::2,0] = np.nan; df.iloc[::4,1] = np.nan; df.iloc[::3,2] = np.nan; In [26]: df Out[26]: 0 1 2 0 NaN NaN NaN 1 2.677677 -1.466923 -0.750366 2 NaN 0.798002 -0.906038 3 0.672201 0.964789 NaN 4 NaN NaN 0.050742 5 -1.250970 0.030561 -2.678622 6 NaN 1.036043 NaN 7 0.049896 -0.308003 0.823295 8 NaN NaN 0.637482 9 -0.310130 0.078891 NaN ``` ``` In [27]: df.dropna() #drop all rows that have any NaN values Out[27]: 0 1 2 1 2.677677 -1.466923 -0.750366 5 -1.250970 0.030561 -2.678622 7 0.049896 -0.308003 0.823295 ``` ``` In [28]: df.dropna(how='all') #drop only if ALL columns are NaN Out[28]: 0 1 2 1 2.677677 -1.466923 -0.750366 2 NaN 0.798002 -0.906038 3 0.672201 0.964789 NaN 4 NaN NaN 0.050742 5 -1.250970 0.030561 -2.678622 6 NaN 1.036043 NaN 7 0.049896 -0.308003 0.823295 8 NaN NaN 0.637482 9 -0.310130 0.078891 NaN ``` ``` In [29]: df.dropna(thresh=2) #Drop row if it does not have at least two values that are **not** NaN Out[29]: 0 1 2 1 2.677677 -1.466923 -0.750366 2 NaN 0.798002 -0.906038 3 0.672201 0.964789 NaN 5 -1.250970 0.030561 -2.678622 7 0.049896 -0.308003 0.823295 9 -0.310130 0.078891 NaN ``` ``` In [30]: df.dropna(subset=[1]) #Drop only if NaN in specific column (as asked in the question) Out[30]: 0 1 2 1 2.677677 -1.466923 -0.750366 2 NaN 0.798002 -0.906038 3 0.672201 0.964789 NaN 5 -1.250970 0.030561 -2.678622 6 NaN 1.036043 NaN 7 0.049896 -0.308003 0.823295 9 -0.310130 0.078891 NaN ``` There are also other options (See docs at http:\/\/pandas.pydata.org\/pandas-docs\/stable\/generated\/pandas.DataFrame.dropna.html), including dropping columns instead of rows. 
Pretty handy!", "best_answers_score":0.6461, "library_name":"pandas", "question_url":"https:\/\/stackoverflow.com\/questions\/13413590\/how-to-drop-rows-of-pandas-dataframe-whose-value-in-a-certain-column-is-nan", "best_answers_votes":1223, "question_length":575, "response_length":2015 }, { "question":"Locate first and last non NaN values in a Pandas DataFrame I have a Pandas DataFrame indexed by date. There are a number of columns but many columns are only populated for part of the time series. I'd like to find where the first and last non-NaN values are located so that I can extract the dates and see how long the time series is for a particular column. Could somebody point me in the right direction as to how I could go about doing something like this?", "response":"Here are some helpful examples. Series ``` s = pd.Series([np.NaN, 1, np.NaN, 3, np.NaN], index=list('abcde')) s a NaN b 1.0 c NaN d 3.0 e NaN dtype: float64 # first valid index s.first_valid_index() # 'b' # first valid position s.index.get_loc(s.first_valid_index()) # 1 # last valid index s.last_valid_index() # 'd' # last valid position s.index.get_loc(s.last_valid_index()) # 3 ``` Alternative solution using notna and idxmax: ``` # first valid index s.notna().idxmax() # 'b' # last valid index s.notna()[::-1].idxmax() # 'd' ``` DataFrame ``` df = pd.DataFrame({ 'A': [np.NaN, 1, np.NaN, 3, np.NaN], 'B': [1, np.NaN, np.NaN, np.NaN, np.NaN] }) df A B 0 NaN 1.0 1 1.0 NaN 2 NaN NaN 3 3.0 NaN 4 NaN NaN ``` (first|last)_valid_index isn't defined on DataFrames, but you can apply them on each column using apply. ``` # first valid index for each column df.apply(pd.Series.first_valid_index) A 1 B 0 dtype: int64 # last valid index for each column df.apply(pd.Series.last_valid_index) A 3 B 0 dtype: int64 ``` As before, you can also use notna and idxmax. This is slightly more natural syntax. 
``` # first valid index df.notna().idxmax() A 1 B 0 dtype: int64 # last valid index df.notna()[::-1].idxmax() A 3 B 0 dtype: int64 ```", "best_answers_score":0.6442, "library_name":"pandas", "question_url":"https:\/\/stackoverflow.com\/questions\/22403469\/locate-first-and-last-non-nan-values-in-a-pandas-dataframe", "best_answers_votes":57, "question_length":463, "response_length":1226 }, { "question":"What are the differences between feather and parquet? Both are columnar (disk-)storage formats for use in data analysis systems. Both are integrated within Apache Arrow (pyarrow package for python) and are designed to correspond with Arrow as a columnar in-memory analytics layer. How do both formats differ? Should you always prefer feather when working with pandas when possible? What are the use cases where feather is more suitable than parquet and the other way round? Appendix I found some hints here https:\/\/github.com\/wesm\/feather\/issues\/188, but given the young age of this project, it's possibly a bit out of date. Not a serious speed test because I'm just dumping and loading a whole Dataframe but to give you some impression if you never heard of the formats before: ``` # IPython import numpy as np import pandas as pd import pyarrow as pa import pyarrow.feather as feather import pyarrow.parquet as pq import fastparquet as fp df = pd.DataFrame({'one': [-1, np.nan, 2.5], 'two': ['foo', 'bar', 'baz'], 'three': [True, False, True]}) print(\"pandas df to disk ####################################################\") print('example_feather:') %timeit feather.write_feather(df, 'example_feather') # 2.62 ms \u00b1 35.8 \u00b5s per loop (mean \u00b1 std. dev. of 7 runs, 100 loops each) print('example_parquet:') %timeit pq.write_table(pa.Table.from_pandas(df), 'example.parquet') # 3.19 ms \u00b1 51 \u00b5s per loop (mean \u00b1 std. dev. 
of 7 runs, 100 loops each) print() print(\"for comparison:\") print('example_pickle:') %timeit df.to_pickle('example_pickle') # 2.75 ms \u00b1 18.8 \u00b5s per loop (mean \u00b1 std. dev. of 7 runs, 100 loops each) print('example_fp_parquet:') %timeit fp.write('example_fp_parquet', df) # 7.06 ms \u00b1 205 \u00b5s per loop (mean \u00b1 std. dev. of 7 runs, 1 loop each) print('example_hdf:') %timeit df.to_hdf('example_hdf', 'key_to_store', mode='w', table=True) # 24.6 ms \u00b1 4.45 ms per loop (mean \u00b1 std. dev. of 7 runs, 100 loops each) print() print(\"pandas df from disk ##################################################\") print('example_feather:') %timeit feather.read_feather('example_feather') # 969 \u00b5s \u00b1 1.8 \u00b5s per loop (mean \u00b1 std. dev. of 7 runs, 1000 loops each) print('example_parquet:') %timeit pq.read_table('example.parquet').to_pandas() # 1.9 ms \u00b1 5.5 \u00b5s per loop (mean \u00b1 std. dev. of 7 runs, 1000 loops each) print(\"for comparison:\") print('example_pickle:') %timeit pd.read_pickle('example_pickle') # 1.07 ms \u00b1 6.21 \u00b5s per loop (mean \u00b1 std. dev. of 7 runs, 1000 loops each) print('example_fp_parquet:') %timeit fp.ParquetFile('example_fp_parquet').to_pandas() # 4.53 ms \u00b1 260 \u00b5s per loop (mean \u00b1 std. dev. of 7 runs, 1 loop each) print('example_hdf:') %timeit pd.read_hdf('example_hdf') # 10 ms \u00b1 43.4 \u00b5s per loop (mean \u00b1 std. dev. 
of 7 runs, 100 loops each) # pandas version: 0.22.0 # fastparquet version: 0.1.3 # numpy version: 1.13.3 # pandas version: 0.22.0 # pyarrow version: 0.8.0 # sys.version: 3.6.3 # example Dataframe taken from https:\/\/arrow.apache.org\/docs\/python\/parquet.html ```", "response":"Parquet format is designed for long-term storage, where Arrow is more intended for short term or ephemeral storage (Arrow may be more suitable for long-term storage after the 1.0.0 release happens, since the binary format will be stable then) Parquet is more expensive to write than Feather as it features more layers of encoding and compression. Feather is unmodified raw columnar Arrow memory. We will probably add simple compression to Feather in the future. Due to dictionary encoding, RLE encoding, and data page compression, Parquet files will often be much smaller than Feather files Parquet is a standard storage format for analytics that's supported by many different systems: Spark, Hive, Impala, various AWS services, in future by BigQuery, etc. So if you are doing analytics, Parquet is a good option as a reference storage format for query by multiple systems The benchmarks you showed are going to be very noisy since the data you read and wrote is very small. You should try compressing at least 100MB or upwards 1GB of data to get some more informative benchmarks, see e.g. http:\/\/wesmckinney.com\/blog\/python-parquet-multithreading\/", "best_answers_score":0.6434, "library_name":"pandas", "question_url":"https:\/\/stackoverflow.com\/questions\/48083405\/what-are-the-differences-between-feather-and-parquet", "best_answers_votes":274, "question_length":2997, "response_length":1148 }, { "question":"Loading a file with more than one line of JSON into Pandas I am trying to read in a JSON file into Python pandas (0.14.0) data frame. 
Here is the first line of the JSON file: ``` {\"votes\": {\"funny\": 0, \"useful\": 0, \"cool\": 0}, \"user_id\": \"P_Mk0ygOilLJo4_WEvabAA\", \"review_id\": \"OeT5kgUOe3vcN7H6ImVmZQ\", \"stars\": 3, \"date\": \"2005-08-26\", \"text\": \"This is a pretty typical cafe. The sandwiches and wraps are good but a little overpriced and the food items are the same. The chicken caesar salad wrap is my favorite here but everything else is pretty much par for the course.\", \"type\": \"review\", \"business_id\": \"Jp9svt7sRT4zwdbzQ8KQmw\"} ``` I am trying to do the following: df = pd.read_json(path). I am getting the following error (with full traceback): ``` Traceback (most recent call last): File \"\", line 1, in File \"\/Users\/d\/anaconda\/lib\/python2.7\/site-packages\/pandas\/io\/json.py\", line 198, in read_json date_unit).parse() File \"\/Users\/d\/anaconda\/lib\/python2.7\/site-packages\/pandas\/io\/json.py\", line 266, in parse self._parse_no_numpy() File \"\/Users\/d\/anaconda\/lib\/python2.7\/site-packages\/pandas\/io\/json.py\", line 483, in _parse_no_numpy loads(json, precise_float=self.precise_float), dtype=None) ValueError: Trailing data ``` What is the Trailing data error? How do I read it into a data frame? Following some suggestions, here are a few lines of the .json file: ``` {\"votes\": {\"funny\": 0, \"useful\": 0, \"cool\": 0}, \"user_id\": \"P_Mk0ygOilLJo4_WEvabAA\", \"review_id\": \"OeT5kgUOe3vcN7H6ImVmZQ\", \"stars\": 3, \"date\": \"2005-08-26\", \"text\": \"This is a pretty typical cafe. The sandwiches and wraps are good but a little overpriced and the food items are the same. 
The chicken caesar salad wrap is my favorite here but everything else is pretty much par for the course.\", \"type\": \"review\", \"business_id\": \"Jp9svt7sRT4zwdbzQ8KQmw\"} {\"votes\": {\"funny\": 0, \"useful\": 0, \"cool\": 0}, \"user_id\": \"TNJRTBrl0yjtpAACr1Bthg\", \"review_id\": \"qq3zF2dDUh3EjMDuKBqhEA\", \"stars\": 3, \"date\": \"2005-11-23\", \"text\": \"I agree with other reviewers - this is a pretty typical financial district cafe. However, they have fantastic pies. I ordered three pies for an office event (apple, pumpkin cheesecake, and pecan) - all were delicious, particularly the cheesecake. The sucker weighed in about 4 pounds - no joke.\\n\\nNo surprises on the cafe side - great pies and cakes from the catering business.\", \"type\": \"review\", \"business_id\": \"Jp9svt7sRT4zwdbzQ8KQmw\"} {\"votes\": {\"funny\": 0, \"useful\": 0, \"cool\": 0}, \"user_id\": \"H_mngeK3DmjlOu595zZMsA\", \"review_id\": \"i3eQTINJXe3WUmyIpvhE9w\", \"stars\": 3, \"date\": \"2005-11-23\", \"text\": \"Decent enough food, but very overpriced. Just a large soup is almost $5. Their specials are $6.50, and with an overpriced soda or juice, it's approaching $10. A bit much for a cafe lunch!\", \"type\": \"review\", \"business_id\": \"Jp9svt7sRT4zwdbzQ8KQmw\"} ``` This .json file I am using contains one JSON object in each line as per the specification. 
I tried the jsonlint.com website as suggested and it gives the following error: ``` Parse error on line 14: ...t7sRT4zwdbzQ8KQmw\"}{ \"votes\": { ----------------------^ Expecting 'EOF', '}', ',', ']' ```", "response":"From version 0.19.0 of Pandas you can use the lines parameter, like so: ``` import pandas as pd data = pd.read_json('\/path\/to\/file.json', lines=True) ```", "best_answers_score":0.6429, "library_name":"pandas", "question_url":"https:\/\/stackoverflow.com\/questions\/30088006\/loading-a-file-with-more-than-one-line-of-json-into-pandas", "best_answers_votes":382, "question_length":3144, "response_length":153 }, { "question":"Pandas read_csv from url I'm trying to read a csv file from a given URL, using Python 3.x: ``` import pandas as pd import requests url = \"https:\/\/github.com\/cs109\/2014_data\/blob\/master\/countries.csv\" s = requests.get(url).content c = pd.read_csv(s) ``` I get the following error: \"Expected file path name or file-like object, got type\". How can I fix this? I'm using Python 3.4", "response":"In the latest version of pandas (0.19.2) you can directly pass the URL: ``` import pandas as pd url = \"https:\/\/raw.githubusercontent.com\/cs109\/2014_data\/master\/countries.csv\" c = pd.read_csv(url) ```", "best_answers_score":0.6426, "library_name":"pandas", "question_url":"https:\/\/stackoverflow.com\/questions\/32400867\/pandas-read-csv-from-url", "best_answers_votes":364, "question_length":375, "response_length":198 }, { "question":"Rename specific column(s) in pandas I've got a dataframe called data. How would I rename only one column header? For example gdp to log(gdp)? ``` data = y gdp cap 0 1 2 5 1 2 3 9 2 8 7 2 3 3 4 7 4 6 7 7 5 4 8 3 6 8 2 8 7 9 9 10 8 6 6 4 9 10 10 7 ```", "response":"``` data.rename(columns={'gdp':'log(gdp)'}, inplace=True) ``` The rename docs show that it accepts a dict as a param for columns, so you just pass a dict with a single entry. 
Also see related", "best_answers_score":0.6421, "library_name":"pandas", "question_url":"https:\/\/stackoverflow.com\/questions\/19758364\/rename-specific-columns-in-pandas", "best_answers_votes":639, "question_length":253, "response_length":185 }, { "question":"Converting between datetime, Timestamp and datetime64 How do I convert a numpy.datetime64 object to a datetime.datetime (or Timestamp)? In the following code, I create a datetime, timestamp and datetime64 objects. ``` import datetime import numpy as np import pandas as pd dt = datetime.datetime(2012, 5, 1) # A strange way to extract a Timestamp object, there's surely a better way? ts = pd.DatetimeIndex([dt])[0] dt64 = np.datetime64(dt) In [7]: dt Out[7]: datetime.datetime(2012, 5, 1, 0, 0) In [8]: ts Out[8]: In [9]: dt64 Out[9]: numpy.datetime64('2012-05-01T01:00:00.000000+0100') ``` Note: it's easy to get the datetime from the Timestamp: ``` In [10]: ts.to_datetime() Out[10]: datetime.datetime(2012, 5, 1, 0, 0) ``` But how do we extract the datetime or Timestamp from a numpy.datetime64 (dt64)? . Update: a somewhat nasty example in my dataset (perhaps the motivating example) seems to be: ``` dt64 = numpy.datetime64('2002-06-28T01:00:00.000000000+0100') ``` which should be datetime.datetime(2002, 6, 28, 1, 0), and not a long (!) (1025222400000000000L)...", "response":"You can just use the pd.Timestamp constructor. 
The following diagram may be useful for this and related questions.", "best_answers_score":0.642, "library_name":"pandas", "question_url":"https:\/\/stackoverflow.com\/questions\/13703720\/converting-between-datetime-timestamp-and-datetime64", "best_answers_votes":350, "question_length":1070, "response_length":114 }, { "question":"How to plot different groups of data from a dataframe into a single figure I have a temperature file with many years of temperature records in the format below: ```none 2012-04-12,16:13:09,20.6 2012-04-12,17:13:09,20.9 2012-04-12,18:13:09,20.6 2007-05-12,19:13:09,5.4 2007-05-12,20:13:09,20.6 2007-05-12,20:13:09,20.6 2005-08-11,11:13:09,20.6 2005-08-11,11:13:09,17.5 2005-08-13,07:13:09,20.6 2006-04-13,01:13:09,20.6 ``` Every year has different numbers of time of records, so the pandas datetimeindices are all different. I want to plot the different year's data in the same figure for comparison: The X-axis is datetimeindices from Jan to Dec The Y-axis is the temperature How should I go about doing this?", "response":"Try: ``` ax = df1.plot() df2.plot(ax=ax) ```", "best_answers_score":0.6402, "library_name":"pandas", "question_url":"https:\/\/stackoverflow.com\/questions\/13872533\/how-to-plot-different-groups-of-data-from-a-dataframe-into-a-single-figure", "best_answers_votes":410, "question_length":709, "response_length":44 }, { "question":"Add a string prefix to each value in a pandas string column I would like to prepend a string to the start of each value in a said column of a pandas dataframe. I am currently using: ```py df.ix[(df['col'] != False), 'col'] = 'str' + df[(df['col'] != False), 'col'] ``` This seems an inelegant method. Do you know any other way (which maybe also adds the character to rows where that column is 0 or NaN)? 
As an example, I would like to turn: ```none col 1 a 2 0 ``` into: ```none col 1 stra 2 str0 ```", "response":"``` df['col'] = 'str' + df['col'].astype(str) ``` Example: ``` >>> df = pd.DataFrame({'col':['a',0]}) >>> df col 0 a 1 0 >>> df['col'] = 'str' + df['col'].astype(str) >>> df col 0 stra 1 str0 ```", "best_answers_score":0.64, "library_name":"pandas", "question_url":"https:\/\/stackoverflow.com\/questions\/20025882\/add-a-string-prefix-to-each-value-in-a-pandas-string-column", "best_answers_votes":465, "question_length":500, "response_length":195 }, { "question":"Renaming Column Names in Pandas Groupby function [duplicate] This question already has answers here: Multiple aggregations of the same column using pandas GroupBy.agg() (5 answers) Closed 6 years ago. Q1) I want to do a groupby, SQL-style aggregation and rename the output column: Example dataset: ``` ID Region count 0 100 Asia 2 1 101 Europe 3 2 102 US 1 3 103 Africa 5 4 100 Russia 5 5 101 Australia 7 6 102 US 8 7 104 Asia 10 8 105 Europe 11 9 110 Africa 23 ``` I want to group the observations of this dataset by ID and Region and summing the count for each group. So I used something like this... ``` >>> print(df.groupby(['ID','Region'],as_index=False).count().sum()) ID Region count 0 100 Asia 2 1 100 Russia 5 2 101 Australia 7 3 101 Europe 3 4 102 US 9 5 103 Africa 5 6 104 Asia 10 7 105 Europe 11 8 110 Africa 23 ``` On using as_index=False I am able to get \"SQL-Like\" output. My problem is that I am unable to rename the aggregate variable count here. So in SQL if wanted to do the above thing I would do something like this: ``` select ID, Region, sum(count) as Total_Numbers from df group by ID, Region order by ID, Region ``` As we see, it's very easy for me to rename the aggregate variable count to Total_Numbers in SQL. I wanted to do the same thing in Pandas but unable to find such an option in group-by function. Can somebody help? 
The second question (more of an observation): Q2) Is it possible to directly use column names in Pandas dataframe functions without enclosing them in quotes? I understand that the variable names are strings, so they have to be inside quotes, but I see that if we use them outside a dataframe function, as an attribute, we don't require them to be inside quotes. Like df.ID.sum() etc. It's only when we use them in a DataFrame function like df.sort() or df.groupby that we have to use them inside quotes. This is actually a bit of a pain, as in SQL or in SAS or other languages we simply use the variable name without quoting it. Any suggestion on this? Kindly reply to both questions (Q1 is the main one, Q2 more of an opinion).", "response":"For the first question I think the answer would be: ``` .rename(columns= {'count':'Total_Numbers'}) ``` or ``` .columns = ['ID', 'Region', 'Total_Numbers'] ``` If it is a Pandas Series, we have ``` .rename(\"Total_Numbers\") ``` As for the second one, I'd say the answer would be no. It's possible to use it like 'df.ID' because of the Python data model: Attribute references are translated to lookups in this dictionary, e.g., m.x is equivalent to m.__dict__[\"x\"]", "best_answers_score":0.6397, "library_name":"pandas", "question_url":"https:\/\/stackoverflow.com\/questions\/19523277\/renaming-column-names-in-pandas-groupby-function", "best_answers_votes":107, "question_length":2054, "response_length":441 }, { "question":"Pyspark: display a spark data frame in a table format I am using pyspark to read a parquet file like below: ``` my_df = sqlContext.read.parquet('hdfs:\/\/myPath\/myDB.db\/myTable\/**') ``` Then when I do my_df.take(5), it will show [Row(...)], instead of a table format like when we use the pandas data frame. Is it possible to display the data frame in a table format like a pandas data frame? Thanks!", "response":"The show method does what you're looking for. 
For example, given the following dataframe of 3 rows, I can print just the first two rows like this: ``` df = sqlContext.createDataFrame([(\"foo\", 1), (\"bar\", 2), (\"baz\", 3)], ('k', 'v')) df.show(n=2) ``` which yields: ``` +---+---+ | k| v| +---+---+ |foo| 1| |bar| 2| +---+---+ only showing top 2 rows ```", "best_answers_score":0.638, "library_name":"pandas", "question_url":"https:\/\/stackoverflow.com\/questions\/39067505\/pyspark-display-a-spark-data-frame-in-a-table-format", "best_answers_votes":123, "question_length":395, "response_length":351 }, { "question":"How to check if particular value (in cell) is NaN in pandas DataFrame? Let's say I have the following pandas DataFrame: ``` import pandas as pd import numpy as np df = pd.DataFrame({\"A\": [1, np.nan, 2], \"B\": [5, 6, 0]}) ``` Which would look like: ``` >>> df A B 0 1.0 5 1 NaN 6 2 2.0 0 ``` First option I know one way to check if a particular value is NaN: ``` >>> df.isnull().iloc[1,0] True ``` But this checks the whole dataframe just to get one value, so I imagine it's wasteful. Second option (not working) I thought below option, using iloc, would work as well, but it doesn't: ``` >>> df.iloc[1,0] == np.nan False ``` However if I check that value I get: ``` >>> df.iloc[1,0] nan ``` So, why is the second option not working? Is it possible to check for NaN values using iloc? 
Editor's note: This question previously used pd.np instead of np and .ix in addition to .iloc, but since these no longer exist, they have been edited out to keep it short and clear.", "response":"Try pd.isna(): ``` In [7]: pd.isna(df.iloc[1,0]) Out[7]: True ``` AKA pd.isnull", "best_answers_score":0.6378, "library_name":"pandas", "question_url":"https:\/\/stackoverflow.com\/questions\/47440077\/how-to-check-if-particular-value-in-cell-is-nan-in-pandas-dataframe", "best_answers_votes":175, "question_length":964, "response_length":79 }, { "question":"How to change the datetime format in Pandas My dataframe has a DOB column (example format 1\/26\/2016) which by default gets converted to Pandas dtype 'object'. Converting this to date format with df['DOB'] = pd.to_datetime(df['DOB']), the date gets converted to: 2016-01-26 and its dtype is datetime64[ns]. Now I want to convert this date format to 01\/26\/2016 or any other general date format. How do I do it? (Whatever the method I try, it always shows the date in 2016-01-26 format.)", "response":"You can use dt.strftime if you need to convert datetime to other formats (but note that then dtype of column will be object (string)): ``` import pandas as pd df = pd.DataFrame({'DOB': {0: '26\/1\/2016', 1: '26\/1\/2016'}}) print (df) DOB 0 26\/1\/2016 1 26\/1\/2016 df['DOB'] = pd.to_datetime(df.DOB) print (df) DOB 0 2016-01-26 1 2016-01-26 df['DOB1'] = df['DOB'].dt.strftime('%m\/%d\/%Y') print (df) DOB DOB1 0 2016-01-26 01\/26\/2016 1 2016-01-26 01\/26\/2016 ```", "best_answers_score":0.6375, "library_name":"pandas", "question_url":"https:\/\/stackoverflow.com\/questions\/38067704\/how-to-change-the-datetime-format-in-pandas", "best_answers_votes":433, "question_length":484, "response_length":453 }, { "question":"How to write to an existing excel file without overwriting data (using pandas)? 
I use pandas to write to an Excel file in the following fashion: ``` import pandas writer = pandas.ExcelWriter('Masterfile.xlsx') data_filtered.to_excel(writer, \"Main\", cols=['Diff1', 'Diff2']) writer.save() ``` Masterfile.xlsx already consists of a number of different tabs. However, it does not yet contain \"Main\". Pandas correctly writes to the \"Main\" sheet; unfortunately, it also deletes all other tabs.", "response":"The Pandas docs say it uses openpyxl for xlsx files. A quick look through the code in ExcelWriter gives a clue that something like this might work out: ``` import pandas from openpyxl import load_workbook book = load_workbook('Masterfile.xlsx') writer = pandas.ExcelWriter('Masterfile.xlsx', engine='openpyxl') writer.book = book ## ExcelWriter for some reason uses writer.sheets to access the sheet. ## If you leave it empty it will not know that sheet Main is already there ## and will create a new sheet. writer.sheets = dict((ws.title, ws) for ws in book.worksheets) data_filtered.to_excel(writer, \"Main\", cols=['Diff1', 'Diff2']) writer.save() ```", "best_answers_score":0.6374, "library_name":"pandas", "question_url":"https:\/\/stackoverflow.com\/questions\/20219254\/how-to-write-to-an-existing-excel-file-without-overwriting-data-using-pandas", "best_answers_votes":190, "question_length":478, "response_length":647 }, { "question":"Logical operators for Boolean indexing in Pandas I'm working with a Boolean index in Pandas. The question is why the statement: ``` a[(a['some_column']==some_number) & (a['some_other_column']==some_other_number)] ``` works fine, whereas ``` a[(a['some_column']==some_number) and (a['some_other_column']==some_other_number)] ``` exits with an error? Example: ``` a = pd.DataFrame({'x':[1,1],'y':[10,20]}) In: a[(a['x']==1)&(a['y']==10)] Out: x y 0 1 10 In: a[(a['x']==1) and (a['y']==10)] Out: ValueError: The truth value of an array with more than one element is ambiguous. 
Use a.any() or a.all() ```", "response":"When you say ``` (a['x']==1) and (a['y']==10) ``` You are implicitly asking Python to convert (a['x']==1) and (a['y']==10) to Boolean values. NumPy arrays (of length greater than 1) and Pandas objects such as Series do not have a Boolean value -- in other words, they raise ValueError: The truth value of an array is ambiguous. Use a.empty, a.any() or a.all(). when used as a Boolean value. That's because it's unclear when it should be True or False. Some users might assume they are True if they have non-zero length, like a Python list. Others might desire for it to be True only if all its elements are True. Others might want it to be True if any of its elements are True. Because there are so many conflicting expectations, the designers of NumPy and Pandas refuse to guess, and instead raise a ValueError. Instead, you must be explicit, by calling the empty(), all() or any() method to indicate which behavior you desire. In this case, however, it looks like you do not want Boolean evaluation, you want element-wise logical-and. That is what the & binary operator performs: ``` (a['x']==1) & (a['y']==10) ``` returns a boolean array. By the way, as alexpmil notes, the parentheses are mandatory since & has a higher operator precedence than ==. Without the parentheses, ``` a['x']==1 & a['y']==10 ``` would be evaluated as ``` a['x'] == (1 & a['y']) == 10 ``` which would in turn be equivalent to the chained comparison ``` (a['x'] == (1 & a['y'])) and ((1 & a['y']) == 10) ``` That is an expression of the form Series and Series. The use of and with two Series would again trigger the same ValueError as above. 
That's why the parentheses are mandatory.", "best_answers_score":0.6373, "library_name":"pandas", "question_url":"https:\/\/stackoverflow.com\/questions\/21415661\/logical-operators-for-boolean-indexing-in-pandas", "best_answers_votes":358, "question_length":596, "response_length":1661 }, { "question":"Finding common rows (intersection) in two Pandas dataframes Assume I have two dataframes of this format (call them df1 and df2): ``` +------------------------+------------------------+--------+ | user_id | business_id | rating | +------------------------+------------------------+--------+ | rLtl8ZkDX5vH5nAx9C3q5Q | eIxSLxzIlfExI6vgAbn2JA | 4 | | C6IOtaaYdLIT5fWd7ZYIuA | eIxSLxzIlfExI6vgAbn2JA | 5 | | mlBC3pN9GXlUUfQi1qBBZA | KoIRdcIfh3XWxiCeV1BDmA | 3 | +------------------------+------------------------+--------+ ``` I'm looking to get a dataframe of all the rows that have a common user_id in df1 and df2. (ie. if a user_id is in both df1 and df2, include the two rows in the output dataframe) I can think of many ways to approach this, but they all strike me as clunky. For example, we could find all the unique user_ids in each dataframe, create a set of each, find their intersection, filter the two dataframes with the resulting set and concatenate the two filtered dataframes. Maybe that's the best approach, but I know Pandas is clever. Is there a simpler way to do this? I've looked at merge but I don't think that's what I need.", "response":"My understanding is that this question is better answered over in this post. 
But briefly, the answer to the OP with this method is simply: ``` s1 = pd.merge(df1, df2, how='inner', on=['user_id']) ``` Which gives s1 with 5 columns: user_id and the other two columns from each of df1 and df2.", "best_answers_score":0.6372, "library_name":"pandas", "question_url":"https:\/\/stackoverflow.com\/questions\/19618912\/finding-common-rows-intersection-in-two-pandas-dataframes", "best_answers_votes":141, "question_length":1143, "response_length":290 }, { "question":"How to form tuple column from two columns in Pandas I've got a Pandas DataFrame and I want to combine the 'lat' and 'long' columns to form a tuple. ``` Int64Index: 205482 entries, 0 to 209018 Data columns: Month 205482 non-null values Reported by 205482 non-null values Falls within 205482 non-null values Easting 205482 non-null values Northing 205482 non-null values Location 205482 non-null values Crime type 205482 non-null values long 205482 non-null values lat 205482 non-null values dtypes: float64(4), object(5) ``` The code I tried to use was: ``` def merge_two_cols(series): return (series['lat'], series['long']) sample['lat_long'] = sample.apply(merge_two_cols, axis=1) ``` However, this returned the following error: ``` --------------------------------------------------------------------------- AssertionError Traceback (most recent call last) in () 2 return (series['lat'], series['long']) 3 ----> 4 sample['lat_long'] = sample.apply(merge_two_cols, axis=1) 5 ``` ... ``` AssertionError: Block shape incompatible with manager ``` How can I solve this problem?", "response":"Get comfortable with zip. It comes in handy when dealing with column data. ``` df['new_col'] = list(zip(df.lat, df.long)) ``` It's less complicated and faster than using apply or map. 
Something like np.dstack is twice as fast as zip, but wouldn't give you tuples.", "best_answers_score":0.6369, "library_name":"pandas", "question_url":"https:\/\/stackoverflow.com\/questions\/16031056\/how-to-form-tuple-column-from-two-columns-in-pandas", "best_answers_votes":387, "question_length":1077, "response_length":263 }, { "question":"How to select rows with NaN in particular column? Given this dataframe, how to select only those rows that have \"Col2\" equal to NaN? ``` df = pd.DataFrame([range(3), [0, np.NaN, 0], [0, 0, np.NaN], range(3), range(3)], columns=[\"Col1\", \"Col2\", \"Col3\"]) ``` which looks like: ``` 0 1 2 0 0 1 2 1 0 NaN 0 2 0 0 NaN 3 0 1 2 4 0 1 2 ``` The result should be this one: ``` 0 1 2 1 0 NaN 0 ```", "response":"Try the following: ``` df[df['Col2'].isnull()] ```", "best_answers_score":0.6369, "library_name":"pandas", "question_url":"https:\/\/stackoverflow.com\/questions\/43831539\/how-to-select-rows-with-nan-in-particular-column", "best_answers_votes":296, "question_length":387, "response_length":50 }, { "question":"How to remove timezone from a Timestamp column in pandas I read Pandas change timezone for forex DataFrame but I'd like to make the time column of my dataframe timezone naive for interoperability with an sqlite3 database. The data in my pandas dataframe is already converted to UTC data, but I do not want to have to maintain this UTC timezone information in the database. 
Given a sample of the data derived from other sources, it looks like this: ``` print(type(testdata)) print(testdata) print(testdata.applymap(type)) ``` gives: ``` time navd88_ft station_id new 0 2018-03-07 01:31:02+00:00 -0.030332 13 5 1 2018-03-07 01:21:02+00:00 -0.121653 13 5 2 2018-03-07 01:26:02+00:00 -0.072945 13 5 3 2018-03-07 01:16:02+00:00 -0.139917 13 5 4 2018-03-07 01:11:02+00:00 -0.152085 13 5 time navd88_ft station_id \\ 0 1 2 3 4 new 0 1 2 3 4 ``` but ``` newstamp = testdata['time'].tz_convert(None) ``` gives an eventual error: ``` TypeError: index is not a valid DatetimeIndex or PeriodIndex ``` What do I do to replace the column with a timezone naive timestamp?", "response":"The column must be a datetime dtype, for example after using pd.to_datetime. Then, you can use tz_localize to change the time zone; a naive timestamp corresponds to time zone None: ```py testdata['time'].dt.tz_localize(None) ``` Unless the column is an index (DatetimeIndex), the .dt accessor must be used to access pandas datetime functions.", "best_answers_score":0.6365, "library_name":"pandas", "question_url":"https:\/\/stackoverflow.com\/questions\/49198068\/how-to-remove-timezone-from-a-timestamp-column-in-pandas", "best_answers_votes":255, "question_length":1076, "response_length":342 }, { "question":"What is the difference between size and count in pandas? What is the difference between groupby(\"x\").count and groupby(\"x\").size in pandas ?
Does size just exclude nil ?", "response":"size includes NaN values, count does not: ``` In [46]: df = pd.DataFrame({'a':[0,0,1,2,2,2], 'b':[1,2,3,4,np.NaN,4], 'c':np.random.randn(6)}) df Out[46]: a b c 0 0 1 1.067627 1 0 2 0.554691 2 1 3 0.458084 3 2 4 0.426635 4 2 NaN -2.238091 5 2 4 1.256943 In [48]: print(df.groupby(['a'])['b'].count()) print(df.groupby(['a'])['b'].size()) a 0 2 1 1 2 2 Name: b, dtype: int64 a 0 2 1 1 2 3 dtype: int64 ```", "best_answers_score":0.6359, "library_name":"pandas", "question_url":"https:\/\/stackoverflow.com\/questions\/33346591\/what-is-the-difference-between-size-and-count-in-pandas", "best_answers_votes":179, "question_length":169, "response_length":403 }, { "question":"Groupby value counts on the dataframe pandas I have the following dataframe: ``` df = pd.DataFrame([ (1, 1, 'term1'), (1, 2, 'term2'), (1, 1, 'term1'), (1, 1, 'term2'), (2, 2, 'term3'), (2, 3, 'term1'), (2, 2, 'term1') ], columns=['id', 'group', 'term']) ``` I want to group it by id and group and calculate the number of each term for this id-group pair. So in the end I want to get something like this: Is there any way I can achieve this without looping?", "response":"I use groupby and size: ``` df.groupby(['id', 'group', 'term']).size().unstack(fill_value=0) ``` Timing 1,000,000 rows ``` df = pd.DataFrame(dict(id=np.random.choice(100, 1000000), group=np.random.choice(20, 1000000), term=np.random.choice(10, 1000000))) ```", "best_answers_score":0.6353, "library_name":"pandas", "question_url":"https:\/\/stackoverflow.com\/questions\/39132742\/groupby-value-counts-on-the-dataframe-pandas", "best_answers_votes":174, "question_length":447, "response_length":257 }, { "question":"What is dtype('O'), in pandas? I have a dataframe in pandas and I'm trying to figure out what the types of its values are. I am unsure what the type is of column 'Test'.
However, when I run myFrame['Test'].dtype, I get: ``` dtype('O') ``` What does this mean?", "response":"It means: ``` 'O' (Python) objects ``` Source. The first character specifies the kind of data and the remaining characters specify the number of bytes per item, except for Unicode, where it is interpreted as the number of characters. The item size must correspond to an existing type, or an error will be raised. The supported kinds are: ``` 'b' boolean 'i' (signed) integer 'u' unsigned integer 'f' floating-point 'c' complex-floating point 'O' (Python) objects 'S', 'a' (byte-)string 'U' Unicode 'V' raw data (void) ``` Another answer helps if you need to check types.", "best_answers_score":0.6345, "library_name":"pandas", "question_url":"https:\/\/stackoverflow.com\/questions\/37561991\/what-is-dtypeo-in-pandas", "best_answers_votes":208, "question_length":259, "response_length":636 }, { "question":"How to iterate over consecutive chunks of Pandas dataframe efficiently I have a large dataframe (several million rows). I want to be able to do a groupby operation on it, but just grouping by arbitrary consecutive (preferably equal-sized) subsets of rows, rather than using any particular property of the individual rows to decide which group they go to. The use case: I want to apply a function to each row via a parallel map in IPython. It doesn't matter which rows go to which back-end engine, as the function calculates a result based on one row at a time. (Conceptually at least; in reality it's vectorized.)
I've come up with something like this: ``` # Generate a number from 0-9 for each row, indicating which tenth of the DF it belongs to max_idx = dataframe.index.max() tenths = ((10 * dataframe.index) \/ (1 + max_idx)).astype(np.uint32) # Use this value to perform a groupby, yielding 10 consecutive chunks groups = [g[1] for g in dataframe.groupby(tenths)] # Process chunks in parallel results = dview.map_sync(my_function, groups) ``` But this seems very long-winded, and doesn't guarantee equal sized chunks. Especially if the index is sparse or non-integer or whatever. Any suggestions for a better way? Thanks!", "response":"Use numpy's array_split(): ``` import numpy as np import pandas as pd data = pd.DataFrame(np.random.rand(10, 3)) for chunk in np.array_split(data, 5): assert len(chunk) == len(data) \/ 5, \"This assert may fail for the last chunk if data length isn't divisible by 5\" ```", "best_answers_score":0.6335, "library_name":"pandas", "question_url":"https:\/\/stackoverflow.com\/questions\/25699439\/how-to-iterate-over-consecutive-chunks-of-pandas-dataframe-efficiently", "best_answers_votes":162, "question_length":1225, "response_length":268 }, { "question":"How to deal with SettingWithCopyWarning in Pandas I just upgraded my Pandas from 0.11 to 0.13.0rc1. Now, the application is popping out many new warnings. One of them like this: ```none E:\\FinReporter\\FM_EXT.py:449: SettingWithCopyWarning: A value is trying to be set on a copy of a slice from a DataFrame. Try using .loc[row_index,col_indexer] = value instead quote_df['TVol'] = quote_df['TVol']\/TVOL_SCALE ``` What exactly does it mean? Do I need to change something? How should I suspend the warning if I insist on using quote_df['TVol'] = quote_df['TVol']\/TVOL_SCALE?
The function that gives warnings ``` def _decode_stock_quote(list_of_150_stk_str): \"\"\"decode the webpage and return dataframe\"\"\" from cStringIO import StringIO str_of_all = \"\".join(list_of_150_stk_str) quote_df = pd.read_csv( StringIO(str_of_all), sep=',', names=list('ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefg')) #dtype={'A': object, 'B': object, 'C': np.float64} quote_df.rename( columns={ 'A':'STK', 'B':'TOpen', 'C':'TPCLOSE', 'D':'TPrice', 'E':'THigh', 'F':'TLow', 'I':'TVol', 'J':'TAmt', 'e':'TDate', 'f':'TTime'}, inplace=True) quote_df = quote_df.ix[:,[0,3,2,1,4,5,8,9,30,31]] quote_df['TClose'] = quote_df['TPrice'] quote_df['RT'] = 100 * (quote_df['TPrice']\/quote_df['TPCLOSE'] - 1) quote_df['TVol'] = quote_df['TVol']\/TVOL_SCALE quote_df['TAmt'] = quote_df['TAmt']\/TAMT_SCALE quote_df['STK_ID'] = quote_df['STK'].str.slice(13,19) quote_df['STK_Name'] = quote_df['STK'].str.slice(21,30)#.decode('gb2312') quote_df['TDate'] = quote_df.TDate.map(lambda x: x[0:4]+x[5:7]+x[8:10]) return quote_df ``` More warning messages ```none E:\\FinReporter\\FM_EXT.py:449: SettingWithCopyWarning: A value is trying to be set on a copy of a slice from a DataFrame. Try using .loc[row_index,col_indexer] = value instead quote_df['TVol'] = quote_df['TVol']\/TVOL_SCALE E:\\FinReporter\\FM_EXT.py:450: SettingWithCopyWarning: A value is trying to be set on a copy of a slice from a DataFrame. Try using .loc[row_index,col_indexer] = value instead quote_df['TAmt'] = quote_df['TAmt']\/TAMT_SCALE E:\\FinReporter\\FM_EXT.py:453: SettingWithCopyWarning: A value is trying to be set on a copy of a slice from a DataFrame. Try using .loc[row_index,col_indexer] = value instead quote_df['TDate'] = quote_df.TDate.map(lambda x: x[0:4]+x[5:7]+x[8:10]) ```", "response":"The SettingWithCopyWarning was created to flag potentially confusing \"chained\" assignments, such as the following, which does not always work as expected, particularly when the first selection returns a copy. 
[see GH5390 and GH5597 for background discussion.] ``` df[df['A'] > 2]['B'] = new_val # new_val not set in df ``` The warning offers a suggestion to rewrite as follows: ``` df.loc[df['A'] > 2, 'B'] = new_val ``` However, this doesn't fit your usage, which is equivalent to: ``` df = df[df['A'] > 2] df['B'] = new_val ``` While it's clear that you don't care about writes making it back to the original frame (since you are overwriting the reference to it), unfortunately this pattern cannot be differentiated from the first chained assignment example. Hence the (false positive) warning. The potential for false positives is addressed in the docs on indexing, if you'd like to read further. You can safely disable this new warning with the following assignment. ``` import pandas as pd pd.options.mode.chained_assignment = None # default='warn' ``` Other Resources pandas User Guide: Indexing and selecting data Python Data Science Handbook: Data Indexing and Selection Real Python: SettingWithCopyWarning in Pandas: Views vs Copies Dataquest: SettingwithCopyWarning: How to Fix This Warning in Pandas Towards Data Science: Explaining the SettingWithCopyWarning in pandas", "best_answers_score":0.6319, "library_name":"pandas", "question_url":"https:\/\/stackoverflow.com\/questions\/20625582\/how-to-deal-with-settingwithcopywarning-in-pandas", "best_answers_votes":1717, "question_length":2295, "response_length":1380 }, { "question":"Import CSV file as a Pandas DataFrame How do I read the following CSV file into a Pandas DataFrame? 
``` Date,\"price\",\"factor_1\",\"factor_2\" 2012-06-11,1600.20,1.255,1.548 2012-06-12,1610.02,1.258,1.554 2012-06-13,1618.07,1.249,1.552 2012-06-14,1624.40,1.253,1.556 2012-06-15,1626.15,1.258,1.552 2012-06-16,1626.15,1.263,1.558 2012-06-17,1626.15,1.264,1.572 ```", "response":"pandas.read_csv to the rescue: ``` import pandas as pd df = pd.read_csv(\"data.csv\") print(df) ``` This outputs a pandas DataFrame: ``` Date price factor_1 factor_2 0 2012-06-11 1600.20 1.255 1.548 1 2012-06-12 1610.02 1.258 1.554 2 2012-06-13 1618.07 1.249 1.552 3 2012-06-14 1624.40 1.253 1.556 4 2012-06-15 1626.15 1.258 1.552 5 2012-06-16 1626.15 1.263 1.558 6 2012-06-17 1626.15 1.264 1.572 ```", "best_answers_score":0.6304, "library_name":"pandas", "question_url":"https:\/\/stackoverflow.com\/questions\/14365542\/import-csv-file-as-a-pandas-dataframe", "best_answers_votes":231, "question_length":359, "response_length":398 }, { "question":"pandas loc vs. iloc vs. at vs. iat? Recently began branching out from my safe place (R) into Python and and am a bit confused by the cell localization\/selection in Pandas. I've read the documentation but I'm struggling to understand the practical implications of the various localization\/selection options. Is there a reason why I should ever use .loc or .iloc over at, and iat or vice versa? In what situations should I use which method? Note: future readers be aware that this question is old and was written before pandas v0.20 when there used to exist a function called .ix. This method was later split into two - loc and iloc - to make the explicit distinction between positional and label based indexing. Please beware that ix was discontinued due to inconsistent behavior and being hard to grok, and no longer exists in current versions of pandas (>= 1.0).", "response":"loc: only work on index iloc: work on position at: get scalar values. It's a very fast loc iat: Get scalar values. 
It's a very fast iloc. Also, at and iat are meant to access a scalar, that is, a single element in the dataframe, while loc and iloc are meant to access several elements at the same time, potentially to perform vectorized operations. http:\/\/pyciencia.blogspot.com\/2015\/05\/obtener-y-filtrar-datos-de-un-dataframe.html", "best_answers_score":0.6303, "library_name":"pandas", "question_url":"https:\/\/stackoverflow.com\/questions\/28757389\/pandas-loc-vs-iloc-vs-at-vs-iat", "best_answers_votes":206, "question_length":863, "response_length":430 }, { "question":"How to filter rows containing a string pattern from a Pandas dataframe [duplicate] This question already has answers here: Filter pandas DataFrame by substring criteria (18 answers) Closed 6 years ago. Assume we have a data frame in Python Pandas that looks like this: ``` df = pd.DataFrame({'vals': [1, 2, 3, 4], 'ids': [u'aball', u'bball', u'cnut', u'fball']}) ``` Or, in table form: ```none ids vals aball 1 bball 2 cnut 3 fball 4 ``` How do I filter rows which contain the key word \"ball?\" For example, the output should be: ```none ids vals aball 1 bball 2 fball 4 ```", "response":"``` In [3]: df[df['ids'].str.contains(\"ball\")] Out[3]: ids vals 0 aball 1 1 bball 2 3 fball 4 ```", "best_answers_score":0.6296, "library_name":"pandas", "question_url":"https:\/\/stackoverflow.com\/questions\/27975069\/how-to-filter-rows-containing-a-string-pattern-from-a-pandas-dataframe", "best_answers_votes":560, "question_length":573, "response_length":97 }, { "question":"How to get value counts for multiple columns at once in Pandas DataFrame? Given a Pandas DataFrame that has multiple columns with categorical values (0 or 1), is it possible to conveniently get the value_counts for every column at the same time?
For example, suppose I generate a DataFrame as follows: ``` import numpy as np import pandas as pd np.random.seed(0) df = pd.DataFrame(np.random.randint(0, 2, (10, 4)), columns=list('abcd')) ``` I can get a DataFrame like this: ``` a b c d 0 0 1 1 0 1 1 1 1 1 2 1 1 1 0 3 0 1 0 0 4 0 0 0 1 5 0 1 1 0 6 0 1 1 1 7 1 0 1 0 8 1 0 1 1 9 0 1 1 0 ``` How do I conveniently get the value counts for every column and obtain the following conveniently? ``` a b c d 0 6 3 2 6 1 4 7 8 4 ``` My current solution is: ``` pieces = [] for col in df.columns: tmp_series = df[col].value_counts() tmp_series.name = col pieces.append(tmp_series) df_value_counts = pd.concat(pieces, axis=1) ``` But there must be a simpler way, like stacking, pivoting, or groupby?", "response":"Just call apply and pass pd.Series.value_counts: ``` In [212]: df = pd.DataFrame(np.random.randint(0, 2, (10, 4)), columns=list('abcd')) df.apply(pd.Series.value_counts) Out[212]: a b c d 0 4 6 4 3 1 6 4 6 7 ```", "best_answers_score":0.629, "library_name":"pandas", "question_url":"https:\/\/stackoverflow.com\/questions\/32589829\/how-to-get-value-counts-for-multiple-columns-at-once-in-pandas-dataframe", "best_answers_votes":177, "question_length":989, "response_length":211 }, { "question":"How can I iterate over rows in a Pandas DataFrame? I have a pandas dataframe, df: ```none c1 c2 0 10 100 1 11 110 2 12 120 ``` How do I iterate over the rows of this dataframe? For every row, I want to access its elements (values in cells) by the name of the columns. 
For example: ```py for row in df.rows: print(row['c1'], row['c2']) ``` I found a similar question, which suggests using either of these: ```py for date, row in df.T.iteritems(): ``` ```py for row in df.iterrows(): ``` But I do not understand what the row object is and how I can work with it.", "response":"DataFrame.iterrows is a generator which yields both the index and row (as a Series): ``` import pandas as pd df = pd.DataFrame({'c1': [10, 11, 12], 'c2': [100, 110, 120]}) df = df.reset_index() # make sure indexes pair with number of rows for index, row in df.iterrows(): print(row['c1'], row['c2']) ``` ``` 10 100 11 110 12 120 ``` Obligatory disclaimer from the documentation Iterating through pandas objects is generally slow. In many cases, iterating manually over the rows is not needed and can be avoided with one of the following approaches: Look for a vectorized solution: many operations can be performed using built-in methods or NumPy functions, (boolean) indexing, \u2026 When you have a function that cannot work on the full DataFrame\/Series at once, it is better to use apply() instead of iterating over the values. See the docs on function application. If you need to do iterative manipulations on the values but performance is important, consider writing the inner loop with cython or numba. See the enhancing performance section for some examples of this approach. Other answers in this thread delve into greater depth on alternatives to iter* functions if you are interested to learn more.", "best_answers_score":0.6279, "library_name":"pandas", "question_url":"https:\/\/stackoverflow.com\/questions\/16476924\/how-can-i-iterate-over-rows-in-a-pandas-dataframe", "best_answers_votes":5493, "question_length":560, "response_length":1202 }, { "question":"Pandas: create two new columns in a dataframe with values calculated from a pre-existing column I am working with the pandas library and I want to add two new columns to a dataframe df with n columns (n > 0). 
These new columns result from the application of a function to one of the columns in the dataframe. The function to apply is like: ``` def calculate(x): ...operate... return z, y ``` One method for creating a new column for a function returning only a value is: ``` df['new_col'] = df['column_A'].map(a_function) ``` So, what I want, and tried unsuccessfully (*), is something like: ``` (df['new_col_zetas'], df['new_col_ys']) = df['column_A'].map(calculate) ``` What would be the best way to accomplish this? I scanned the documentation with no clue. **df['column_A'].map(calculate) returns a pandas Series each item consisting of a tuple z, y. And trying to assign this to two dataframe columns produces a ValueError.*", "response":"I'd just use zip: ``` In [1]: from pandas import * In [2]: def calculate(x): ...: return x*2, x*3 ...: In [3]: df = DataFrame({'a': [1,2,3], 'b': [2,3,4]}) In [4]: df Out[4]: a b 0 1 2 1 2 3 2 3 4 In [5]: df[\"A1\"], df[\"A2\"] = zip(*df[\"a\"].map(calculate)) In [6]: df Out[6]: a b A1 A2 0 1 2 2 3 1 2 3 4 6 2 3 4 6 9 ```", "best_answers_score":0.6274, "library_name":"pandas", "question_url":"https:\/\/stackoverflow.com\/questions\/12356501\/pandas-create-two-new-columns-in-a-dataframe-with-values-calculated-from-a-pre", "best_answers_votes":136, "question_length":930, "response_length":317 }, { "question":"Accessing Pandas column using squared brackets vs using a dot (like an attribute) In both the below cases: ``` import pandas d = {'col1': 2, 'col2': 2.5} df = pandas.DataFrame(data=d, index=[0]) print(df['col2']) print(df.col2) ``` Both methods can be used to index on a column and yield the same result, so is there any difference between them?", "response":"The \"dot notation\", i.e. df.col2 is the attribute access that's exposed as a convenience. You may access an index on a Series, column on a DataFrame, and an item on a Panel directly as an attribute: df['col2'] does the same: it returns a pd.Series of the column.
A few caveats about attribute access: you cannot add a column (df.new_col = x won't work, worse: it will silently actually create a new attribute rather than a column - think monkey-patching here) it won't work if you have spaces in the column name or if the column name is an integer.", "best_answers_score":0.6269, "library_name":"pandas", "question_url":"https:\/\/stackoverflow.com\/questions\/41130255\/accessing-pandas-column-using-squared-brackets-vs-using-a-dot-like-an-attribute", "best_answers_votes":96, "question_length":346, "response_length":548 }, { "question":"Extract values in Pandas value_counts() Say we have used pandas dataframe[column].value_counts() which outputs: ``` apple 5 sausage 2 banana 2 cheese 1 ``` How do you extract the values in the order same as shown above from max to min ? e.g: [apple,sausage,banana,cheese]", "response":"Try this: ``` dataframe[column].value_counts().index.tolist() ['apple', 'sausage', 'banana', 'cheese'] ```", "best_answers_score":0.6268, "library_name":"pandas", "question_url":"https:\/\/stackoverflow.com\/questions\/35523635\/extract-values-in-pandas-value-counts", "best_answers_votes":135, "question_length":271, "response_length":106 }, { "question":"Pandas: convert dtype 'object' to int I've read an SQL query into Pandas and the values are coming in as dtype 'object', although they are strings, dates and integers. I am able to convert the date 'object' to a Pandas datetime dtype, but I'm getting an error when trying to convert the string and integers. 
Here is an example: ``` >>> import pandas as pd >>> df = pd.read_sql_query('select * from my_table', conn) >>> df id date purchase 1 abc1 2016-05-22 1 2 abc2 2016-05-29 0 3 abc3 2016-05-22 2 4 abc4 2016-05-22 0 >>> df.dtypes id object date object purchase object dtype: object ``` Converting the df['date'] to a datetime works: ``` >>> pd.to_datetime(df['date']) 1 2016-05-22 2 2016-05-29 3 2016-05-22 4 2016-05-22 Name: date, dtype: datetime64[ns] ``` But I get an error when trying to convert the df['purchase'] to an integer: ``` >>> df['purchase'].astype(int) .... pandas\/lib.pyx in pandas.lib.astype_intsafe (pandas\/lib.c:16667)() pandas\/src\/util.pxd in util.set_value_at (pandas\/lib.c:67540)() TypeError: long() argument must be a string or a number, not 'java.lang.Long' ``` NOTE: I get a similar error when I tried .astype('float') And when trying to convert to a string, nothing seems to happen. ``` >>> df['id'].apply(str) 1 abc1 2 abc2 3 abc3 4 abc4 Name: id, dtype: object ```", "response":"Documenting the answer that worked for me based on the comment by @piRSquared. I needed to convert to a string first, then an integer. ``` >>> df['purchase'].astype(str).astype(int) ```", "best_answers_score":0.6265, "library_name":"pandas", "question_url":"https:\/\/stackoverflow.com\/questions\/39173813\/pandas-convert-dtype-object-to-int", "best_answers_votes":147, "question_length":1296, "response_length":185 }, { "question":"'DataFrame' object has no attribute 'sort' I am facing a problem here: in my Python environment I have installed numpy, but I still get this error: 'DataFrame' object has no attribute 'sort'. Can anyone give me some ideas?
This is my code : ``` final.loc[-1] =['', 'P','Actual'] final.index = final.index + 1 # shifting index final = final.sort() final.columns=[final.columns,final.iloc[0]] final = final.iloc[1:].reset_index(drop=True) final.columns.names = (None, None) ```", "response":"sort() was deprecated for DataFrames in favor of either: sort_values() to sort by column(s) sort_index() to sort by the index sort() was deprecated (but still available) in Pandas with release 0.17 (2015-10-09) with the introduction of sort_values() and sort_index(). It was removed from Pandas with release 0.20 (2017-05-05).", "best_answers_score":0.6263, "library_name":"pandas", "question_url":"https:\/\/stackoverflow.com\/questions\/44123874\/dataframe-object-has-no-attribute-sort", "best_answers_votes":281, "question_length":467, "response_length":326 }, { "question":"How to get the last N rows of a pandas DataFrame? I have pandas dataframe df1: ```none STK_ID RPT_Date TClose sales discount 0 000568 20060331 3.69 5.975 NaN 1 000568 20060630 9.14 10.143 NaN 2 000568 20060930 9.49 13.854 NaN 3 000568 20061231 15.84 19.262 NaN 4 000568 20070331 17.00 6.803 NaN 5 000568 20070630 26.31 12.940 NaN 6 000568 20070930 39.12 19.977 NaN 7 000568 20071231 45.94 29.269 NaN 8 000568 20080331 38.75 12.668 NaN 9 000568 20080630 30.09 21.102 NaN 10 000568 20080930 26.00 30.769 NaN ``` I wanted to select the last 3 rows and tried df1.ix[-3:], but it returns all the rows. Why? How to get the last 3 rows of df1? I'm using pandas 0.10.1.", "response":"Don't forget DataFrame.tail! e.g. df1.tail(10)", "best_answers_score":0.6259, "library_name":"pandas", "question_url":"https:\/\/stackoverflow.com\/questions\/14663004\/how-to-get-the-last-n-rows-of-a-pandas-dataframe", "best_answers_votes":569, "question_length":661, "response_length":46 }, { "question":"How to draw vertical lines on a given plot Given a plot of a signal in time representation, how can I draw lines marking the corresponding time index? 
Specifically, given a signal plot with a time index ranging from 0 to 2.6 (seconds), I want to draw vertical red lines indicating the corresponding time index for the list [0.22058956, 0.33088437, 2.20589566]. How can I do it?", "response":"The standard way to add vertical lines that will cover your entire plot window without you having to specify their actual height is plt.axvline ``` import matplotlib.pyplot as plt plt.axvline(x=0.22058956) plt.axvline(x=0.33088437) plt.axvline(x=2.20589566) ``` OR ``` xcoords = [0.22058956, 0.33088437, 2.20589566] for xc in xcoords: plt.axvline(x=xc) ``` You can use many of the keywords available for other plot commands (e.g. color, linestyle, linewidth ...). You can pass in keyword arguments ymin and ymax if you like in axes coordinates (e.g. ymin=0.25, ymax=0.75 will cover the middle half of the plot). There are corresponding functions for horizontal lines (axhline) and rectangles (axvspan).", "best_answers_score":0.625, "library_name":"pandas", "question_url":"https:\/\/stackoverflow.com\/questions\/24988448\/how-to-draw-vertical-lines-on-a-given-plot", "best_answers_votes":749, "question_length":377, "response_length":702 }, { "question":"Drop columns whose name contains a specific string from pandas DataFrame I have a pandas dataframe with the following column names: Result1, Test1, Result2, Test2, Result3, Test3, etc... I want to drop all the columns whose name contains the word \"Test\". The number of such columns is not static but depends on a previous function.
How can I do that?", "response":"Here is one way to do this: ``` df = df[df.columns.drop(list(df.filter(regex='Test')))] ```", "best_answers_score":0.625, "library_name":"pandas", "question_url":"https:\/\/stackoverflow.com\/questions\/19071199\/drop-columns-whose-name-contains-a-specific-string-from-pandas-dataframe", "best_answers_votes":333, "question_length":351, "response_length":91 }, { "question":"Convert commas decimal separators to dots within a Dataframe I am importing a CSV file like the one below, using pandas.read_csv: ``` df = pd.read_csv(Input, delimiter=\";\") ``` Example of CSV file: ``` 10;01.02.2015 16:58;01.02.2015 16:58;-0.59;0.1;-4.39;NotApplicable;0.79;0.2 11;01.02.2015 16:58;01.02.2015 16:58;-0.57;0.2;-2.87;NotApplicable;0.79;0.21 ``` The problem is that when I later on in my code try to use these values I get this error: TypeError: can't multiply sequence by non-int of type 'float' The error is because the number I'm trying to use is not written with a dot (.) as a decimal separator but a comma(,). After manually changing the commas to a dots my program works. I can't change the format of my input, and thus have to replace the commas in my DataFrame in order for my code to work, and I want python to do this without the need of doing it manually. Do you have any suggestions?", "response":"pandas.read_csv has a decimal parameter for this. I.e. try with: ``` df = pd.read_csv(Input, delimiter=\";\", decimal=\",\") ```", "best_answers_score":0.6238, "library_name":"pandas", "question_url":"https:\/\/stackoverflow.com\/questions\/31700691\/convert-commas-decimal-separators-to-dots-within-a-dataframe", "best_answers_votes":174, "question_length":909, "response_length":124 }, { "question":"Deleting multiple columns based on column names I have some data and when I import it, I get the following unneeded columns. I'm looking for an easy way to delete all of these. 
'Unnamed: 24', 'Unnamed: 25', 'Unnamed: 26', 'Unnamed: 27', 'Unnamed: 28', 'Unnamed: 29', 'Unnamed: 30', 'Unnamed: 31', 'Unnamed: 32', 'Unnamed: 33', 'Unnamed: 34', 'Unnamed: 35', 'Unnamed: 36', 'Unnamed: 37', 'Unnamed: 38', 'Unnamed: 39', 'Unnamed: 40', 'Unnamed: 41', 'Unnamed: 42', 'Unnamed: 43', 'Unnamed: 44', 'Unnamed: 45', 'Unnamed: 46', 'Unnamed: 47', 'Unnamed: 48', 'Unnamed: 49', 'Unnamed: 50', 'Unnamed: 51', 'Unnamed: 52', 'Unnamed: 53', 'Unnamed: 54', 'Unnamed: 55', 'Unnamed: 56', 'Unnamed: 57', 'Unnamed: 58', 'Unnamed: 59', 'Unnamed: 60' ``` They are indexed by 0-indexing so I tried something like ``` df.drop(df.columns[[22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 55]], axis=1, inplace=True) ``` But this isn't very efficient. I tried writing some for loops but this struck me as bad Pandas behaviour. Hence I ask the question here. I've seen some examples which are similar (Drop multiple columns in pandas) but this doesn't answer my question.", "response":"By far the simplest approach is: ``` yourdf.drop(['columnheading1', 'columnheading2'], axis=1, inplace=True) ```", "best_answers_score":0.6234, "library_name":"pandas", "question_url":"https:\/\/stackoverflow.com\/questions\/28538536\/deleting-multiple-columns-based-on-column-names", "best_answers_votes":294, "question_length":1145, "response_length":112 }, { "question":"Pandas DataFrame to List of Lists It's easy to turn a list of lists into a pandas dataframe: ``` import pandas as pd df = pd.DataFrame([[1,2,3],[3,4,5]]) ``` But how do I turn df back into a list of lists? ``` lol = df.what_to_do_now?
print lol # [[1,2,3],[3,4,5]] ```", "response":"You could access the underlying array and call its tolist method: ``` >>> df = pd.DataFrame([[1,2,3],[3,4,5]]) >>> lol = df.values.tolist() >>> lol [[1L, 2L, 3L], [3L, 4L, 5L]] ```", "best_answers_score":0.6217, "library_name":"pandas", "question_url":"https:\/\/stackoverflow.com\/questions\/28006793\/pandas-dataframe-to-list-of-lists", "best_answers_votes":296, "question_length":268, "response_length":180 }, { "question":"In pandas, how can I reset index without adding a new column? ``` In [37]: df = pd.DataFrame([[1, 2, 3, 4], [2, 3, 4, 5], [3, 4, 5, 6]]) In [38]: df2 = pd.concat([df, df]) In [39]: df2.reset_index() Out[39]: index 0 1 2 3 0 0 1 2 3 4 1 1 2 3 4 5 2 2 3 4 5 6 3 0 1 2 3 4 4 1 2 3 4 5 5 2 3 4 5 6 ``` How can I reset_index without adding a new column index?", "response":"You can use the drop=True option in reset_index(). See here.", "best_answers_score":0.6158, "library_name":"pandas", "question_url":"https:\/\/stackoverflow.com\/questions\/16167829\/in-pandas-how-can-i-reset-index-without-adding-a-new-column", "best_answers_votes":172, "question_length":354, "response_length":60 }, { "question":"Set value on an entire column of a pandas dataframe I'm trying to set the entire column of a dataframe to a specific value. ``` In [1]: df Out [1]: issueid industry 0 001 xxx 1 002 xxx 2 003 xxx 3 004 xxx 4 005 xxx ``` From what I've seen, loc is the best practice when replacing values in a dataframe (or isn't it?): ``` In [2]: df.loc[:,'industry'] = 'yyy' ``` However, I still received this much talked-about warning message: ``` A value is trying to be set on a copy of a slice from a DataFrame. Try using .loc[row_index,col_indexer] = value instead ``` If I do ``` In [3]: df['industry'] = 'yyy' ``` I got the same warning message. Any ideas? Working with Python 3.5.2 and pandas 0.18.1. 
EDIT Jan 2023: Given the volume of visits on this question, it's worth stating that my original question was really more about dataframe copy-versus-slice than \"setting value to an entire column\". On copy-versus-slice: My current understanding is that, in general, if you want to modify a subset of a dataframe after slicing, you should create the subset by .copy(). If you only want a view of the slice, no copy() needed. On setting value to an entire column: simply do df[col_name] = col_value", "response":"You can use the assign function: ``` df = df.assign(industry='yyy') ```", "best_answers_score":0.6157, "library_name":"pandas", "question_url":"https:\/\/stackoverflow.com\/questions\/44723183\/set-value-on-an-entire-column-of-a-pandas-dataframe", "best_answers_votes":166, "question_length":1188, "response_length":71 }, { "question":"Check if a value exists in pandas dataframe index I am sure there is an obvious way to do this but cant think of anything slick right now. Basically instead of raising exception I would like to get True or False to see if a value exists in pandas df index. ``` import pandas as pd df = pd.DataFrame({'test':[1,2,3,4]}, index=['a','b','c','d']) df.loc['g'] # (should give False) ``` What I have working now is the following ``` sum(df.index == 'g') ```", "response":"This should do the trick ``` 'g' in df.index ```", "best_answers_score":0.6155, "library_name":"pandas", "question_url":"https:\/\/stackoverflow.com\/questions\/23549231\/check-if-a-value-exists-in-pandas-dataframe-index", "best_answers_votes":388, "question_length":451, "response_length":48 }, { "question":"Pandas Split Dataframe into two Dataframes at a specific column I have pandas DataFrame which I have composed from concat. One row consists of 96 values, I would like to split the DataFrame from the value 72. So that the first 72 values of a row are stored in Dataframe1, and the next 24 values of a row in Dataframe2. 
I create my DF as follows: ``` temps = DataFrame(myData) datasX = concat( [temps.shift(72), temps.shift(71), temps.shift(70), temps.shift(69), temps.shift(68), temps.shift(67), temps.shift(66), temps.shift(65), temps.shift(64), temps.shift(63), temps.shift(62), temps.shift(61), temps.shift(60), temps.shift(59), temps.shift(58), temps.shift(57), temps.shift(56), temps.shift(55), temps.shift(54), temps.shift(53), temps.shift(52), temps.shift(51), temps.shift(50), temps.shift(49), temps.shift(48), temps.shift(47), temps.shift(46), temps.shift(45), temps.shift(44), temps.shift(43), temps.shift(42), temps.shift(41), temps.shift(40), temps.shift(39), temps.shift(38), temps.shift(37), temps.shift(36), temps.shift(35), temps.shift(34), temps.shift(33), temps.shift(32), temps.shift(31), temps.shift(30), temps.shift(29), temps.shift(28), temps.shift(27), temps.shift(26), temps.shift(25), temps.shift(24), temps.shift(23), temps.shift(22), temps.shift(21), temps.shift(20), temps.shift(19), temps.shift(18), temps.shift(17), temps.shift(16), temps.shift(15), temps.shift(14), temps.shift(13), temps.shift(12), temps.shift(11), temps.shift(10), temps.shift(9), temps.shift(8), temps.shift(7), temps.shift(6), temps.shift(5), temps.shift(4), temps.shift(3), temps.shift(2), temps.shift(1), temps, temps.shift(-1), temps.shift(-2), temps.shift(-3), temps.shift(-4), temps.shift(-5), temps.shift(-6), temps.shift(-7), temps.shift(-8), temps.shift(-9), temps.shift(-10), temps.shift(-11), temps.shift(-12), temps.shift(-13), temps.shift(-14), temps.shift(-15), temps.shift(-16), temps.shift(-17), temps.shift(-18), temps.shift(-19), temps.shift(-20), temps.shift(-21), temps.shift(-22), temps.shift(-23)], axis=1) ``` Question is: How can split them? 
:)", "response":"iloc ``` df1 = datasX.iloc[:, :72] df2 = datasX.iloc[:, 72:] ``` (iloc docs)", "best_answers_score":0.6098, "library_name":"pandas", "question_url":"https:\/\/stackoverflow.com\/questions\/41624241\/pandas-split-dataframe-into-two-dataframes-at-a-specific-column", "best_answers_votes":134, "question_length":2069, "response_length":76 }, { "question":"How to get\/set a pandas index column title or name? How do I get the index column name in Python's pandas? Here's an example dataframe: ```none Column 1 Index Title Apples 1 Oranges 2 Puppies 3 Ducks 4 ``` What I'm trying to do is get\/set the dataframe's index title. Here is what I tried: ```py import pandas as pd data = {'Column 1' : [1., 2., 3., 4.], 'Index Title': [\"Apples\", \"Oranges\", \"Puppies\", \"Ducks\"]} df = pd.DataFrame(data) df.index = df[\"Index Title\"] del df[\"Index Title\"] ``` Anyone know how to do this?", "response":"You can just get\/set the index via its name property ``` In [7]: df.index.name Out[7]: 'Index Title' In [8]: df.index.name = 'foo' In [9]: df.index.name Out[9]: 'foo' In [10]: df Out[10]: Column 1 foo Apples 1 Oranges 2 Puppies 3 Ducks 4 ```", "best_answers_score":0.6095, "library_name":"pandas", "question_url":"https:\/\/stackoverflow.com\/questions\/18022845\/how-to-get-set-a-pandas-index-column-title-or-name", "best_answers_votes":625, "question_length":519, "response_length":241 }, { "question":"Filter dataframe rows if value in column is in a set list of values [duplicate] This question already has answers here: How to filter Pandas dataframe using 'in' and 'not in' like in SQL (12 answers) Use a list of values to select rows from a Pandas dataframe (9 answers) Closed 6 years ago. 
I have a Python pandas DataFrame rpt: ``` rpt MultiIndex: 47518 entries, ('000002', '20120331') to ('603366', '20091231') Data columns: STK_ID 47518 non-null values STK_Name 47518 non-null values RPT_Date 47518 non-null values sales 47518 non-null values ``` I can filter the rows whose stock id is '600809' like this: rpt[rpt['STK_ID'] == '600809'] ``` MultiIndex: 25 entries, ('600809', '20120331') to ('600809', '20060331') Data columns: STK_ID 25 non-null values STK_Name 25 non-null values RPT_Date 25 non-null values sales 25 non-null values ``` and I want to get all the rows of some stocks together, such as ['600809','600141','600329']. That means I want a syntax like this: ``` stk_list = ['600809','600141','600329'] rst = rpt[rpt['STK_ID'] in stk_list] # this does not works in pandas ``` Since pandas not accept above command, how to achieve the target?", "response":"Use the isin method: rpt[rpt['STK_ID'].isin(stk_list)]", "best_answers_score":0.6087, "library_name":"pandas", "question_url":"https:\/\/stackoverflow.com\/questions\/12065885\/filter-dataframe-rows-if-value-in-column-is-in-a-set-list-of-values", "best_answers_votes":882, "question_length":1160, "response_length":54 }, { "question":"Lambda including if...elif...else I want to apply a lambda function to a DataFrame column using if...elif...else within the lambda function. The df and the code are something like: ``` df=pd.DataFrame({\"one\":[1,2,3,4,5],\"two\":[6,7,8,9,10]}) df[\"one\"].apply(lambda x: x*10 if x<2 elif x<4 x**2 else x+10) ``` Obviously, this doesn't work. Is there a way to apply if....elif....else to a lambda? How can I get the same result with List Comprehension?", "response":"Nest if .. 
elses: ``` lambda x: x*10 if x<2 else (x**2 if x<4 else x+10) ```", "best_answers_score":0.6081, "library_name":"pandas", "question_url":"https:\/\/stackoverflow.com\/questions\/44991438\/lambda-including-if-elif-else", "best_answers_votes":222, "question_length":448, "response_length":76 }, { "question":"Python - Dimension of Data Frame New to Python. In R, you can get the dimension of a matrix using dim(...). What is the corresponding function in Python Pandas for their data frame?", "response":"df.shape, where df is your DataFrame.", "best_answers_score":0.6034, "library_name":"pandas", "question_url":"https:\/\/stackoverflow.com\/questions\/13921647\/python-dimension-of-data-frame", "best_answers_votes":186, "question_length":181, "response_length":37 }, { "question":"Convert list of dictionaries to a pandas DataFrame How can I convert a list of dictionaries into a DataFrame? I want to turn ```py [{'points': 50, 'time': '5:00', 'year': 2010}, {'points': 25, 'time': '6:00', 'month': \"february\"}, {'points':90, 'time': '9:00', 'month': 'january'}, {'points_h1':20, 'month': 'june'}] ``` into ```none month points points_h1 time year 0 NaN 50 NaN 5:00 2010 1 february 25 NaN 6:00 NaN 2 january 90 NaN 9:00 NaN 3 june NaN 20 NaN NaN ```", "response":"If ds is a list of dicts: ``` df = pd.DataFrame(ds) ``` Note: this does not work with nested data.", "best_answers_score":0.6031, "library_name":"pandas", "question_url":"https:\/\/stackoverflow.com\/questions\/20638006\/convert-list-of-dictionaries-to-a-pandas-dataframe", "best_answers_votes":1690, "question_length":468, "response_length":98 }, { "question":"Why do we need to call zero_grad() in PyTorch? Why does zero_grad() need to be called during training? ``` | zero_grad(self) | Sets gradients of all model parameters to zero. 
```", "response":"In PyTorch, for every mini-batch during the training phase, we typically want to explicitly set the gradients to zero before starting to do backpropagation (i.e., updating the Weights and biases) because PyTorch accumulates the gradients on subsequent backward passes. This accumulating behavior is convenient while training RNNs or when we want to compute the gradient of the loss summed over multiple mini-batches. So, the default action has been set to accumulate (i.e. sum) the gradients on every loss.backward() call. Because of this, when you start your training loop, ideally you should zero out the gradients so that you do the parameter update correctly. Otherwise, the gradient would be a combination of the old gradient, which you have already used to update your model parameters and the newly-computed gradient. It would therefore point in some other direction than the intended direction towards the minimum (or maximum, in case of maximization objectives). Here is a simple example: ``` import torch from torch.autograd import Variable import torch.optim as optim def linear_model(x, W, b): return torch.matmul(x, W) + b data, targets = ... W = Variable(torch.randn(4, 3), requires_grad=True) b = Variable(torch.randn(3), requires_grad=True) optimizer = optim.Adam([W, b]) for sample, target in zip(data, targets): # clear out the gradients of all Variables # in this optimizer (i.e. W, b) optimizer.zero_grad() output = linear_model(sample, W, b) loss = (output - target) ** 2 loss.backward() optimizer.step() ``` Alternatively, if you're doing a vanilla gradient descent, then: ``` W = Variable(torch.randn(4, 3), requires_grad=True) b = Variable(torch.randn(3), requires_grad=True) for sample, target in zip(data, targets): # clear out the gradients of Variables # (i.e. 
W, b) W.grad.data.zero_() b.grad.data.zero_() output = linear_model(sample, W, b) loss = (output - target) ** 2 loss.backward() W -= learning_rate * W.grad.data b -= learning_rate * b.grad.data ``` Note: The accumulation (i.e., sum) of gradients happens when .backward() is called on the loss tensor. As of v1.7.0, Pytorch offers the option to reset the gradients to None optimizer.zero_grad(set_to_none=True) instead of filling them with a tensor of zeroes. The docs claim that this setting reduces memory requirements and slightly improves performance, but might be error-prone if not handled carefully.", "best_answers_score":0.8, "library_name":"pytorch", "question_url":"https:\/\/stackoverflow.com\/questions\/48001598\/why-do-we-need-to-call-zero-grad-in-pytorch", "best_answers_votes":532, "question_length":178, "response_length":2394 }, { "question":"How do I print the model summary in PyTorch? How do I print the summary of a model in PyTorch like what model.summary() does in Keras: ``` Model Summary: ____________________________________________________________________________________________________ Layer (type) Output Shape Param # Connected to ==================================================================================================== input_1 (InputLayer) (None, 1, 15, 27) 0 ____________________________________________________________________________________________________ convolution2d_1 (Convolution2D) (None, 8, 15, 27) 872 input_1[0][0] ____________________________________________________________________________________________________ maxpooling2d_1 (MaxPooling2D) (None, 8, 7, 27) 0 convolution2d_1[0][0] ____________________________________________________________________________________________________ flatten_1 (Flatten) (None, 1512) 0 maxpooling2d_1[0][0] ____________________________________________________________________________________________________ dense_1 (Dense) (None, 1) 1513 flatten_1[0][0] 
==================================================================================================== Total params: 2,385 Trainable params: 2,385 Non-trainable params: 0 ```", "response":"Yes, you can get exact Keras representation, using the pytorch-summary package. Example for VGG16: ``` from torchvision import models from torchsummary import summary vgg = models.vgg16() summary(vgg, (3, 224, 224)) ---------------------------------------------------------------- Layer (type) Output Shape Param # ================================================================ Conv2d-1 [-1, 64, 224, 224] 1,792 ReLU-2 [-1, 64, 224, 224] 0 Conv2d-3 [-1, 64, 224, 224] 36,928 ReLU-4 [-1, 64, 224, 224] 0 MaxPool2d-5 [-1, 64, 112, 112] 0 Conv2d-6 [-1, 128, 112, 112] 73,856 ReLU-7 [-1, 128, 112, 112] 0 Conv2d-8 [-1, 128, 112, 112] 147,584 ReLU-9 [-1, 128, 112, 112] 0 MaxPool2d-10 [-1, 128, 56, 56] 0 Conv2d-11 [-1, 256, 56, 56] 295,168 ReLU-12 [-1, 256, 56, 56] 0 Conv2d-13 [-1, 256, 56, 56] 590,080 ReLU-14 [-1, 256, 56, 56] 0 Conv2d-15 [-1, 256, 56, 56] 590,080 ReLU-16 [-1, 256, 56, 56] 0 MaxPool2d-17 [-1, 256, 28, 28] 0 Conv2d-18 [-1, 512, 28, 28] 1,180,160 ReLU-19 [-1, 512, 28, 28] 0 Conv2d-20 [-1, 512, 28, 28] 2,359,808 ReLU-21 [-1, 512, 28, 28] 0 Conv2d-22 [-1, 512, 28, 28] 2,359,808 ReLU-23 [-1, 512, 28, 28] 0 MaxPool2d-24 [-1, 512, 14, 14] 0 Conv2d-25 [-1, 512, 14, 14] 2,359,808 ReLU-26 [-1, 512, 14, 14] 0 Conv2d-27 [-1, 512, 14, 14] 2,359,808 ReLU-28 [-1, 512, 14, 14] 0 Conv2d-29 [-1, 512, 14, 14] 2,359,808 ReLU-30 [-1, 512, 14, 14] 0 MaxPool2d-31 [-1, 512, 7, 7] 0 Linear-32 [-1, 4096] 102,764,544 ReLU-33 [-1, 4096] 0 Dropout-34 [-1, 4096] 0 Linear-35 [-1, 4096] 16,781,312 ReLU-36 [-1, 4096] 0 Dropout-37 [-1, 4096] 0 Linear-38 [-1, 1000] 4,097,000 ================================================================ Total params: 138,357,544 Trainable params: 138,357,544 Non-trainable params: 0 ---------------------------------------------------------------- Input size (MB): 0.57 
Forward\/backward pass size (MB): 218.59 Params size (MB): 527.79 Estimated Total Size (MB): 746.96 ---------------------------------------------------------------- ```", "best_answers_score":0.8, "library_name":"pytorch", "question_url":"https:\/\/stackoverflow.com\/questions\/42480111\/how-do-i-print-the-model-summary-in-pytorch", "best_answers_votes":308, "question_length":1262, "response_length":1972 }, { "question":"What does `view()` do in PyTorch? What does view() do to the tensor x? What do negative values mean? ``` x = x.view(-1, 16 * 5 * 5) ```", "response":"view() reshapes the tensor without copying memory, similar to numpy's reshape(). Given a tensor a with 16 elements: ``` import torch a = torch.range(1, 16) ``` To reshape this tensor to make it a 4 x 4 tensor, use: ``` a = a.view(4, 4) ``` Now a will be a 4 x 4 tensor. Note that after the reshape the total number of elements need to remain the same. Reshaping the tensor a to a 3 x 5 tensor would not be appropriate. What is the meaning of parameter -1? If there is any situation that you don't know how many rows you want but are sure of the number of columns, then you can specify this with a -1. (Note that you can extend this to tensors with more dimensions. Only one of the axis value can be -1). This is a way of telling the library: \"give me a tensor that has these many columns and you compute the appropriate number of rows that is necessary to make this happen\". This can be seen in this model definition code. After the line x = self.pool(F.relu(self.conv2(x))) in the forward function, you will have a 16 depth feature map. You have to flatten this to give it to the fully connected layer. 
So you tell PyTorch to reshape the tensor you obtained to have specific number of columns and tell it to decide the number of rows by itself.", "best_answers_score":0.8, "library_name":"pytorch", "question_url":"https:\/\/stackoverflow.com\/questions\/42479902\/what-does-view-do-in-pytorch", "best_answers_votes":458, "question_length":135, "response_length":1245 }, { "question":"RuntimeError: Input type (torch.FloatTensor) and weight type (torch.cuda.FloatTensor) should be the same This: ``` device = torch.device(\"cuda\" if torch.cuda.is_available() else \"cpu\") model.to(device) for data in dataloader: inputs, labels = data outputs = model(inputs) ``` Gives the error: RuntimeError: Input type (torch.FloatTensor) and weight type (torch.cuda.FloatTensor) should be the same", "response":"You get this error because your model is on the GPU, but your data is on the CPU. So, you need to send your input tensors to the GPU. ```py inputs, labels = data # this is what you had inputs, labels = inputs.cuda(), labels.cuda() # add this line ``` Or like this, to stay consistent with the rest of your code: ```py device = torch.device(\"cuda:0\" if torch.cuda.is_available() else \"cpu\") inputs, labels = inputs.to(device), labels.to(device) ``` The same error will be raised if your input tensors are on the GPU but your model weights aren't. In this case, you need to send your model weights to the GPU. 
``` model = MyModel() if torch.cuda.is_available(): model.cuda() ``` See the documentation for cuda(), and its opposite, cpu().", "best_answers_score":0.8, "library_name":"pytorch", "question_url":"https:\/\/stackoverflow.com\/questions\/59013109\/runtimeerror-input-type-torch-floattensor-and-weight-type-torch-cuda-floatte", "best_answers_votes":322, "question_length":397, "response_length":735 }, { "question":"PyTorch preferred way to copy a tensor There seems to be several ways to create a copy of a tensor in PyTorch, including ``` y = tensor.new_tensor(x) #a y = x.clone().detach() #b y = torch.empty_like(x).copy_(x) #c y = torch.tensor(x) #d ``` b is explicitly preferred over a and d according to a UserWarning I get if I execute either a or d. Why is it preferred? Performance? I'd argue it's less readable. Any reasons for\/against using c?", "response":"TL;DR Use .clone().detach() (or preferrably .detach().clone()) If you first detach the tensor and then clone it, the computation path is not copied, the other way around it is copied and then abandoned. Thus, .detach().clone() is very slightly more efficient.-- pytorch forums as it's slightly fast and explicit in what it does. Using perfplot, I plotted the timing of various methods to copy a pytorch tensor. ``` y = tensor.new_tensor(x) # method a y = x.clone().detach() # method b y = torch.empty_like(x).copy_(x) # method c y = torch.tensor(x) # method d y = x.detach().clone() # method e ``` The x-axis is the dimension of tensor created, y-axis shows the time. The graph is in linear scale. As you can clearly see, the tensor() or new_tensor() takes more time compared to other three methods. Note: In multiple runs, I noticed that out of b, c, e, any method can have lowest time. The same is true for a and d. But, the methods b, c, e consistently have lower timing than a and d. 
``` import torch import perfplot perfplot.show( setup=lambda n: torch.randn(n), kernels=[ lambda a: a.new_tensor(a), lambda a: a.clone().detach(), lambda a: torch.empty_like(a).copy_(a), lambda a: torch.tensor(a), lambda a: a.detach().clone(), ], labels=[\"new_tensor()\", \"clone().detach()\", \"empty_like().copy()\", \"tensor()\", \"detach().clone()\"], n_range=[2 ** k for k in range(15)], xlabel=\"len(a)\", logx=False, logy=False, title='Timing comparison for copying a pytorch tensor', ) ```", "best_answers_score":0.8, "library_name":"pytorch", "question_url":"https:\/\/stackoverflow.com\/questions\/55266154\/pytorch-preferred-way-to-copy-a-tensor", "best_answers_votes":214, "question_length":438, "response_length":1474 }, { "question":"How to avoid \"CUDA out of memory\" in PyTorch I think it's a pretty common message for PyTorch users with low GPU memory: ``` RuntimeError: CUDA out of memory. Tried to allocate X MiB (GPU X; X GiB total capacity; X GiB already allocated; X MiB free; X cached) ``` I tried to process an image by loading each layer to GPU and then loading it back: ```py for m in self.children(): m.cuda() x = m(x) m.cpu() torch.cuda.empty_cache() ``` But it doesn't seem to be very effective. I'm wondering is there any tips and tricks to train large deep learning models while using little GPU memory.", "response":"Running ``` import torch torch.cuda.empty_cache() ``` is a good way to clear the occupied CUDA memory, and we can also manually clear the variables that are no longer in use: ``` import gc del variables gc.collect() ``` But even after using these commands, the error might appear again, because PyTorch doesn't actually clear the memory; it only clears the references to the memory occupied by the variables. So reducing the batch_size after restarting the kernel and finding the optimum batch_size is the best possible option (but sometimes not a very feasible one). 
Another way to get a deeper insight into the allocation of memory on the GPU is to use: ``` torch.cuda.memory_summary(device=None, abbreviated=False) ``` where both arguments are optional. This gives a readable summary of memory allocation and allows you to figure out why CUDA is running out of memory, and to restart the kernel to avoid the error from happening again (just like I did in my case). Passing the data iteratively might help, but changing the size of the layers of your network or breaking them down would also prove effective (as sometimes the model occupies a significant amount of memory, for example, while doing transfer learning).", "best_answers_score":0.8, "library_name":"pytorch", "question_url":"https:\/\/stackoverflow.com\/questions\/59129812\/how-to-avoid-cuda-out-of-memory-in-pytorch", "best_answers_votes":131, "question_length":585, "response_length":1221 }, { "question":"Pytorch tensor to numpy array I have a pytorch Tensor of shape [4, 3, 966, 1296]. I want to convert it to numpy array using the following code: ``` imgs = imgs.numpy()[:, ::-1, :, :] ``` How does that code work?", "response":"I believe you also have to use .detach(). I had to convert my Tensor to a numpy array on Colab which uses CUDA and GPU. 
I did it like the following: ``` # this is just my embedding matrix which is a Torch tensor object embedding = learn.model.u_weight embedding_list = list(range(0, 64382)) input = torch.cuda.LongTensor(embedding_list) tensor_array = embedding(input) # the output of the line below is a numpy array tensor_array.cpu().detach().numpy() ```", "best_answers_score":0.8, "library_name":"pytorch", "question_url":"https:\/\/stackoverflow.com\/questions\/49768306\/pytorch-tensor-to-numpy-array", "best_answers_votes":163, "question_length":211, "response_length":456 }, { "question":"Pytorch, what are the gradient arguments I am reading through the documentation of PyTorch and found an example where they write ``` gradients = torch.FloatTensor([0.1, 1.0, 0.0001]) y.backward(gradients) print(x.grad) ``` where x was an initial variable, from which y was constructed (a 3-vector). The question is, what are the 0.1, 1.0 and 0.0001 arguments of the gradients tensor ? The documentation is not very clear on that.", "response":"Explanation For neural networks, we usually use loss to assess how well the network has learned to classify the input image (or other tasks). The loss term is usually a scalar value. In order to update the parameters of the network, we need to calculate the gradient of loss w.r.t to the parameters, which are actually the leaf nodes in the computation graph (by the way, these parameters are mostly the weights and biases of various layers such as Convolution, Linear and so on). According to the chain rule, in order to calculate the gradient of loss w.r.t to a leaf node, we can compute the derivative of loss w.r.t some intermediate variable, and the gradient of the intermediate variable w.r.t to the leaf variable, do a dot product and sum all these up. The gradient argument of a Variable's backward() method is used to calculate a weighted sum of each element of a Variable w.r.t the leaf Variable. These weights are just the derivatives of the final loss w.r.t each element of the intermediate variable. 
A concrete example Let's take a concrete and simple example to understand this. ```py from torch.autograd import Variable import torch x = Variable(torch.FloatTensor([[1, 2, 3, 4]]), requires_grad=True) z = 2*x loss = z.sum(dim=1) # do backward for first element of z z.backward(torch.FloatTensor([[1, 0, 0, 0]]), retain_graph=True) print(x.grad.data) x.grad.data.zero_() #remove gradient in x.grad, or it will be accumulated # do backward for second element of z z.backward(torch.FloatTensor([[0, 1, 0, 0]]), retain_graph=True) print(x.grad.data) x.grad.data.zero_() # do backward for all elements of z, with weight equal to the derivative of # loss w.r.t z_1, z_2, z_3 and z_4 z.backward(torch.FloatTensor([[1, 1, 1, 1]]), retain_graph=True) print(x.grad.data) x.grad.data.zero_() # or we can directly backprop using loss loss.backward() # equivalent to loss.backward(torch.FloatTensor([1.0])) print(x.grad.data) ``` In the above example, the outcome of first print is 2 0 0 0 [torch.FloatTensor of size 1x4] which is exactly the derivative of z_1 w.r.t to x. The outcome of second print is : 0 2 0 0 [torch.FloatTensor of size 1x4] which is the derivative of z_2 w.r.t to x. Now if use a weight of [1, 1, 1, 1] to calculate the derivative of z w.r.t to x, the outcome is 1*dz_1\/dx + 1*dz_2\/dx + 1*dz_3\/dx + 1*dz_4\/dx. So no surprisingly, the output of 3rd print is: 2 2 2 2 [torch.FloatTensor of size 1x4] It should be noted that weight vector [1, 1, 1, 1] is exactly derivative of loss w.r.t to z_1, z_2, z_3 and z_4. 
The derivative of loss w.r.t to x is calculated as: ``` d(loss)\/dx = d(loss)\/dz_1 * dz_1\/dx + d(loss)\/dz_2 * dz_2\/dx + d(loss)\/dz_3 * dz_3\/dx + d(loss)\/dz_4 * dz_4\/dx ``` So the output of 4th print is the same as the 3rd print: 2 2 2 2 [torch.FloatTensor of size 1x4]", "best_answers_score":0.8, "library_name":"pytorch", "question_url":"https:\/\/stackoverflow.com\/questions\/43451125\/pytorch-what-are-the-gradient-arguments", "best_answers_votes":124, "question_length":429, "response_length":2762 }, { "question":"What does \"unsqueeze\" do in Pytorch? The PyTorch documentation says: Returns a new tensor with a dimension of size one inserted at the specified position. [...] ``` >>> x = torch.tensor([1, 2, 3, 4]) >>> torch.unsqueeze(x, 0) tensor([[ 1, 2, 3, 4]]) >>> torch.unsqueeze(x, 1) tensor([[ 1], [ 2], [ 3], [ 4]]) ```", "response":"unsqueeze turns an n.d. tensor into an (n+1).d. one by adding an extra dimension of depth 1. However, since it is ambiguous which axis the new dimension should lie across (i.e. in which direction it should be \"unsqueezed\"), this needs to be specified by the dim argument. e.g. unsqueeze can be applied to a 2d tensor three different ways: The resulting unsqueezed tensors have the same information, but the indices used to access them are different.", "best_answers_score":0.8, "library_name":"pytorch", "question_url":"https:\/\/stackoverflow.com\/questions\/57237352\/what-does-unsqueeze-do-in-pytorch", "best_answers_votes":201, "question_length":312, "response_length":449 }, { "question":"How do I visualize a net in Pytorch? 
Consider: ``` import torch import torch.nn as nn import torch.optim as optim import torch.utils.data as data import torchvision.models as models import torchvision.datasets as dset import torchvision.transforms as transforms from torch.autograd import Variable from torchvision.models.vgg import model_urls from torchviz import make_dot batch_size = 3 learning_rate =0.0002 epoch = 50 resnet = models.resnet50(pretrained=True) print resnet make_dot(resnet) ``` I want to visualize resnet from the PyTorch models. How can I do it? I tried to use torchviz, but it gives an error: 'ResNet' object has no attribute 'grad_fn'", "response":"Here are three different graph visualizations using different tools. In order to generate example visualizations, I'll use a simple RNN to perform sentiment analysis taken from an online tutorial: ```py class RNN(nn.Module): def __init__(self, input_dim, embedding_dim, hidden_dim, output_dim): super().__init__() self.embedding = nn.Embedding(input_dim, embedding_dim) self.rnn = nn.RNN(embedding_dim, hidden_dim) self.fc = nn.Linear(hidden_dim, output_dim) def forward(self, text): embedding = self.embedding(text) output, hidden = self.rnn(embedding) return self.fc(hidden.squeeze(0)) ``` Here is the output if you print() the model. ```py RNN( (embedding): Embedding(25002, 100) (rnn): RNN(100, 256) (fc): Linear(in_features=256, out_features=1, bias=True) ) ``` Below are the results from three different visualization tools. For all of them, you need to have dummy input that can pass through the model's forward() method. A simple way to get this input is to retrieve a batch from your Dataloader, like this: ```py batch = next(iter(dataloader_train)) yhat = model(batch.text) # Give dummy batch to forward(). ``` Torchviz https:\/\/github.com\/szagoruyko\/pytorchviz I believe this tool generates its graph using the backwards pass, so all the boxes use the PyTorch components for back-propagation. 
```py from torchviz import make_dot make_dot(yhat, params=dict(list(model.named_parameters()))).render(\"rnn_torchviz\", format=\"png\") ``` This tool produces the following output file: This is the only output that clearly mentions the three layers in my model, embedding, rnn, and fc. The operator names are taken from the backward pass, so some of them are difficult to understand. HiddenLayer https:\/\/github.com\/waleedka\/hiddenlayer This tool uses the forward pass, I believe. ```py import hiddenlayer as hl transforms = [ hl.transforms.Prune('Constant') ] # Removes Constant nodes from graph. graph = hl.build_graph(model, batch.text, transforms=transforms) graph.theme = hl.graph.THEMES['blue'].copy() graph.save('rnn_hiddenlayer', format='png') ``` Here is the output. I like the shade of blue. I find that the output has too much detail and obfuscates my architecture. For example, why is unsqueeze mentioned so many times? Netron https:\/\/github.com\/lutzroeder\/netron This tool is a desktop application for Mac, Windows, and Linux. It relies on the model being first exported into ONNX format. The application then reads the ONNX file and renders it. There is then an option to export the model to an image file. ```py input_names = ['Sentence'] output_names = ['yhat'] torch.onnx.export(model, batch.text, 'rnn.onnx', input_names=input_names, output_names=output_names) ``` Here's what the model looks like in the application. I think this tool is pretty slick: you can zoom and pan around, and you can drill into the layers and operators. The only negative I've found is that it only does vertical layouts.", "best_answers_score":0.8, "library_name":"pytorch", "question_url":"https:\/\/stackoverflow.com\/questions\/52468956\/how-do-i-visualize-a-net-in-pytorch", "best_answers_votes":143, "question_length":657, "response_length":2915 }, { "question":"Understanding `torch.nn.Parameter()` How does torch.nn.Parameter() work?", "response":"I will break it down for you. 
Tensors, as you might know, are multi dimensional matrices. Parameter, in its raw form, is a tensor i.e. a multi dimensional matrix. It sub-classes the Variable class. The difference between a Variable and a Parameter comes in when associated with a module. When a Parameter is associated with a module as a model attribute, it gets added to the parameter list automatically and can be accessed using the 'parameters' iterator. Initially in Torch, a Variable (which could for example be an intermediate state) would also get added as a parameter of the model upon assignment. Later on there were use cases identified where a need to cache the variables instead of having them added to the parameter list was identified. One such case, as mentioned in the documentation is that of RNN, where in you need to save the last hidden state so you don't have to pass it again and again. The need to cache a Variable instead of having it automatically register as a parameter to the model is why we have an explicit way of registering parameters to our model i.e. nn.Parameter class. 
For instance, run the following code - ``` import torch import torch.nn as nn from torch.optim import Adam class NN_Network(nn.Module): def __init__(self,in_dim,hid,out_dim): super(NN_Network, self).__init__() self.linear1 = nn.Linear(in_dim,hid) self.linear2 = nn.Linear(hid,out_dim) # nn.Linear stores its weight as (out_features, in_features) self.linear1.weight = torch.nn.Parameter(torch.zeros(hid,in_dim)) self.linear1.bias = torch.nn.Parameter(torch.ones(hid)) self.linear2.weight = torch.nn.Parameter(torch.zeros(out_dim,hid)) self.linear2.bias = torch.nn.Parameter(torch.ones(out_dim)) def forward(self, input_array): h = self.linear1(input_array) y_pred = self.linear2(h) return y_pred in_d = 5 hidn = 2 out_d = 3 net = NN_Network(in_d, hidn, out_d) ``` Now, check the parameter list associated with this model - ``` for param in net.parameters(): print(type(param.data), param.size()) \"\"\" Output <class 'torch.Tensor'> torch.Size([2, 5]) <class 'torch.Tensor'> torch.Size([2]) <class 'torch.Tensor'> torch.Size([3, 2]) <class 'torch.Tensor'> torch.Size([3]) \"\"\" ``` Or try, ``` list(net.parameters()) ``` This can easily be fed to your optimizer - ``` opt = Adam(net.parameters(), lr=0.001) ``` Also, note that Parameters have requires_grad set to True by default.", "best_answers_score":0.8, "library_name":"pytorch", "question_url":"https:\/\/stackoverflow.com\/questions\/50935345\/understanding-torch-nn-parameter", "best_answers_votes":188, "question_length":72, "response_length":2228 }, { "question":"Why `torch.cuda.is_available()` returns False even after installing pytorch with cuda? On a Windows 10 PC with an NVidia GeForce 820M I installed CUDA 9.2 and cudnn 7.1 successfully, and then installed PyTorch using the instructions at pytorch.org: ``` pip install torch==1.4.0+cu92 torchvision==0.5.0+cu92 -f https:\/\/download.pytorch.org\/whl\/torch_stable.html ``` But I get: ``` >>> import torch >>> torch.cuda.is_available() False ```", "response":"Your graphics card does not support CUDA 9.0.
Since I've seen a lot of questions that refer to issues like this, I'm writing a broad answer on how to check if your system is compatible with CUDA, specifically targeted at using PyTorch with CUDA support. Various circumstance-dependent options for resolving issues are described in the last section of this answer. The system requirements to use PyTorch with CUDA are as follows: Your graphics card must support the required version of CUDA Your graphics card driver must support the required version of CUDA The PyTorch binaries must be built with support for the compute capability of your graphics card Note: If you install pre-built binaries (using either pip or conda) then you do not need to install the CUDA toolkit or runtime on your system before installing PyTorch with CUDA support. This is because PyTorch, unless compiled from source, is always delivered with a copy of the CUDA library. 1. How to check if your GPU\/graphics card supports a particular CUDA version First, identify the model of your graphics card. Before moving forward, ensure that you've got an NVIDIA graphics card. AMD and Intel graphics cards do not support CUDA. NVIDIA doesn't do a great job of providing CUDA compatibility information in a single location. The best resource is probably this section on the CUDA Wikipedia page. To determine which versions of CUDA are supported Locate your graphics card model in the big table and take note of the compute capability version. For example, the GeForce 820M compute capability is 2.1. In the bullet list preceding the table check to see if the required CUDA version is supported by the compute capability of your graphics card. For example, CUDA 9.2 is not supported for compute capability 2.1. If your card doesn't support the required CUDA version then see the options in section 4 of this answer. Note: Compute capability refers to the computational features supported by your graphics card.
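As a quick aside (my own addition, not part of the original checklist): if PyTorch already detects a GPU, you can also query its compute capability directly from Python instead of looking it up in a table; torch.cuda.get_device_capability returns a (major, minor) tuple.

```python
import torch

# Query the compute capability of the first visible GPU, if PyTorch can see one.
# get_device_capability returns a (major, minor) tuple, e.g. (2, 1) for a GeForce 820M.
if torch.cuda.is_available():
    major, minor = torch.cuda.get_device_capability(0)
    print('compute capability: %d.%d' % (major, minor))
else:
    print('no CUDA-capable GPU visible to PyTorch')
```

Note that this only works once PyTorch can see the device at all, so it complements rather than replaces the table lookup above.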
Newer versions of the CUDA library rely on newer hardware features, which is why we need to determine the compute capability in order to determine the supported versions of CUDA. 2. How to check if your GPU\/graphics driver supports a particular CUDA version The graphics driver is the software that allows your operating system to communicate with your graphics card. Since CUDA relies on low-level communication with the graphics card, you need to have an up-to-date driver in order to use the latest versions of CUDA. First, make sure you have an NVIDIA graphics driver installed on your system. You can acquire the newest driver for your system from NVIDIA's website. If you've installed the latest driver version then your graphics driver probably supports every CUDA version compatible with your graphics card (see section 1). To verify, you can check Table 2 in the CUDA release notes. In rare cases I've heard of the latest recommended graphics drivers not supporting the latest CUDA releases. You should be able to get around this by installing the CUDA toolkit for the required CUDA version and selecting the option to install compatible drivers, though this usually isn't required. If you can't, or don't want to, upgrade the graphics driver then you can check to see if your current driver supports the specific CUDA version as follows: On Windows Determine your current graphics driver version (Source https:\/\/www.nvidia.com\/en-gb\/drivers\/drivers-faq\/) Right-click on your desktop and select NVIDIA Control Panel. From the NVIDIA Control Panel menu, select Help > System Information. The driver version is listed at the top of the Details window. For more advanced users, you can also get the driver version number from the Windows Device Manager. Right-click on your graphics device under display adapters and then select Properties. Select the Driver tab and read the Driver version. The last 5 digits are the NVIDIA driver version number.
Visit the CUDA release notes and scroll down to Table 2. Use this table to verify your graphics driver is new enough to support the required version of CUDA. On Linux\/OS X Run the following command in a terminal window ``` nvidia-smi ``` This should result in something like the following ```none Sat Apr 4 15:31:57 2020 +-----------------------------------------------------------------------------+ | NVIDIA-SMI 435.21 Driver Version: 435.21 CUDA Version: 10.1 | |-------------------------------+----------------------+----------------------+ | GPU Name Persistence-M| Bus-Id Disp.A | Volatile Uncorr. ECC | | Fan Temp Perf Pwr:Usage\/Cap| Memory-Usage | GPU-Util Compute M. | |===============================+======================+======================| | 0 GeForce RTX 206... Off | 00000000:01:00.0 On | N\/A | | 0% 35C P8 16W \/ 175W | 502MiB \/ 7974MiB | 1% Default | +-------------------------------+----------------------+----------------------+ +-----------------------------------------------------------------------------+ | Processes: GPU Memory | | GPU PID Type Process name Usage | |=============================================================================| | 0 1138 G \/usr\/lib\/xorg\/Xorg 300MiB | | 0 2550 G \/usr\/bin\/compiz 189MiB | | 0 5735 G \/usr\/lib\/firefox\/firefox 5MiB | | 0 7073 G \/usr\/lib\/firefox\/firefox 5MiB | +-----------------------------------------------------------------------------+ ``` Driver Version: ###.## is your graphic driver version. In the example above the driver version is 435.21. CUDA Version: ##.# is the latest version of CUDA supported by your graphics driver. In the example above the graphics driver supports CUDA 10.1 as well as all compatible CUDA versions before 10.1. Note: The CUDA Version displayed in this table does not indicate that the CUDA toolkit or runtime are actually installed on your system. This just indicates the latest version of CUDA your graphics driver is compatible with. 
To be extra sure that your driver supports the desired CUDA version you can visit Table 2 on the CUDA release notes page. 3. How to check if a particular version of PyTorch is compatible with your GPU\/graphics card compute capability Even if your graphics card supports the required version of CUDA then it's possible that the pre-compiled PyTorch binaries were not compiled with support for your compute capability. For example, PyTorch 0.3.1 dropped support for older compute capabilities. An easy way to check whether the installed PyTorch binaries support your GPU's compute capability is to try performing a CUDA operation: ``` >>> import torch >>> torch.zeros(1).cuda() ``` If you get an error message that reads ```none Found GPU0 XXXXX which is of cuda capability #.#. PyTorch no longer supports this GPU because it is too old. ``` then that means PyTorch was not compiled with support for your compute capability. If this runs without issue then you should be good to go. Update If you're installing an old version of PyTorch on a system with a newer GPU then it's possible that the old PyTorch release wasn't compiled with support for your compute capability. Assuming your GPU supports the version of CUDA used by PyTorch, then you should be able to rebuild PyTorch from source with the desired CUDA version or upgrade to a more recent version of PyTorch that was compiled with support for the newer compute capabilities. 4. Conclusion If your graphics card and driver support the required version of CUDA (sections 1 and 2) but the PyTorch binaries don't support your compute capability (section 3) then your options are Compile PyTorch from source with support for your compute capability (see here) Install PyTorch without CUDA support (CPU-only) Install an older version of the PyTorch binaries that support your compute capability (not recommended as PyTorch 0.3.1 is very outdated at this point).
AFAIK compute capability older than 3.X has never been supported in the pre-built binaries Upgrade your graphics card If your graphics card doesn't support the required version of CUDA (section 1) then your options are Install PyTorch without CUDA support (CPU-only) Install an older version of PyTorch that supports a CUDA version supported by your graphics card (still may require compiling from source if the binaries don't support your compute capability) Upgrade your graphics card", "best_answers_score":0.8, "library_name":"pytorch", "question_url":"https:\/\/stackoverflow.com\/questions\/60987997\/why-torch-cuda-is-available-returns-false-even-after-installing-pytorch-with", "best_answers_votes":248, "question_length":436, "response_length":8119 }, { "question":"L1\/L2 regularization in PyTorch How do I add L1\/L2 regularization in PyTorch without manually computing it?", "response":"Use weight_decay > 0 for L2 regularization: ``` optimizer = torch.optim.Adam(model.parameters(), lr=1e-4, weight_decay=1e-5) ```", "best_answers_score":0.8, "library_name":"pytorch", "question_url":"https:\/\/stackoverflow.com\/questions\/42704283\/l1-l2-regularization-in-pytorch", "best_answers_votes":109, "question_length":107, "response_length":128 }, { "question":"How to fix RuntimeError \"Expected object of scalar type Float but got scalar type Double for argument\"? I'm trying to train a classifier via PyTorch. However, I am experiencing problems with training when I feed the model with training data. 
I get this error on y_pred = model(X_trainTensor): RuntimeError: Expected object of scalar type Float but got scalar type Double for argument #4 'mat1' Here are key parts of my code: ```py # Hyper-parameters D_in = 47 # there are 47 parameters I investigate H = 33 D_out = 2 # output should be either 1 or 0 ``` ```py # Format and load the data y = np.array( df['target'] ) X = np.array( df.drop(columns = ['target'], axis = 1) ) X_train, X_test, y_train, y_test = train_test_split(X, y, train_size = 0.8) # split training\/test data X_trainTensor = torch.from_numpy(X_train) # convert to tensors y_trainTensor = torch.from_numpy(y_train) X_testTensor = torch.from_numpy(X_test) y_testTensor = torch.from_numpy(y_test) ``` ```py # Define the model model = torch.nn.Sequential( torch.nn.Linear(D_in, H), torch.nn.ReLU(), torch.nn.Linear(H, D_out), nn.LogSoftmax(dim = 1) ) ``` ```py # Define the loss function loss_fn = torch.nn.NLLLoss() ``` ```py for i in range(50): y_pred = model(X_trainTensor) loss = loss_fn(y_pred, y_trainTensor) model.zero_grad() loss.backward() with torch.no_grad(): for param in model.parameters(): param -= learning_rate * param.grad ```", "response":"Reference is from this github issue. When the error is RuntimeError: Expected object of scalar type Float but got scalar type Double for argument #4 'mat1', you would need to use the .float() function since it says Expected object of scalar type Float. Therefore, the solution is changing y_pred = model(X_trainTensor) to y_pred = model(X_trainTensor.float()). Likewise, when you get another error for loss = loss_fn(y_pred, y_trainTensor), you need y_trainTensor.long() since the error message says Expected object of scalar type Long. 
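To see where the Double tensors come from in the first place (a minimal illustration of my own, not from the linked issue): NumPy arrays of Python floats default to 64-bit, so torch.from_numpy produces Double tensors, while freshly created nn.Linear layers use 32-bit Float weights:

```python
import numpy as np
import torch

# NumPy defaults to float64, so the converted tensor is a Double tensor.
x = torch.from_numpy(np.zeros((2, 3)))
print(x.dtype)          # torch.float64
# .float() casts to the 32-bit dtype that nn.Linear layers expect by default.
print(x.float().dtype)  # torch.float32
```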
You could also do model.double(), as suggested by @Paddy.", "best_answers_score":0.8, "library_name":"pytorch", "question_url":"https:\/\/stackoverflow.com\/questions\/56741087\/how-to-fix-runtimeerror-expected-object-of-scalar-type-float-but-got-scalar-typ", "best_answers_votes":176, "question_length":1405, "response_length":595 }, { "question":"How does the \"number of workers\" parameter in PyTorch dataloader actually work? If num_workers is 2, does that mean that it will put 2 batches in the RAM and send 1 of them to the GPU, or does it put 3 batches in the RAM and then send 1 of them to the GPU? What actually happens when the number of workers is higher than the number of CPU cores? I tried it and it worked fine, but how does it work? (I thought that the maximum number of workers I can choose is the number of cores.) If I set num_workers to 3 and during the training there were no batches in the memory for the GPU, does the main process wait for its workers to read the batches, or does it read a single batch (without waiting for the workers)?", "response":"When num_workers>0, only these workers will retrieve data, the main process won't. So when num_workers=2 you have at most 2 workers simultaneously putting data into RAM, not 3. Well, our CPU can usually run like 100 processes without trouble, and these worker processes aren't special in any way, so having more workers than CPU cores is ok. But is it efficient? It depends on how busy your CPU cores are with other tasks, the speed of your CPU, the speed of your hard disk, etc. In short, it's complicated, so setting the number of workers to the number of cores is a good rule of thumb, nothing more. Nope. Remember DataLoader doesn't just randomly return from what's available in RAM right now, it uses batch_sampler to decide which batch to return next. Each batch is assigned to a worker, and the main process will wait until the desired batch is retrieved by its assigned worker.
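As a tiny sketch of my own (a hypothetical example, not from the original answer), the worker count is just a DataLoader argument; the workers prefetch whole batches while the main process consumes them:

```python
import torch
from torch.utils.data import DataLoader, TensorDataset

# Eight scalar samples; with num_workers=2, two worker processes
# prefetch complete batches into RAM for the main process.
ds = TensorDataset(torch.arange(8.0))
loader = DataLoader(ds, batch_size=2, num_workers=2)
for (batch,) in loader:
    print(batch.shape)  # torch.Size([2]), four times
```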
Lastly to clarify, it isn't DataLoader's job to send anything directly to GPU, you explicitly call cuda() for that. EDIT: Don't call cuda() inside Dataset's __getitem__() method, please look at @psarka's comment for the reasoning", "best_answers_score":0.8, "library_name":"pytorch", "question_url":"https:\/\/stackoverflow.com\/questions\/53998282\/how-does-the-number-of-workers-parameter-in-pytorch-dataloader-actually-work", "best_answers_votes":124, "question_length":710, "response_length":1066 }, { "question":"Convert PyTorch tensor to python list How do I convert a PyTorch Tensor into a python list? I want to convert a tensor of size [1, 2048, 1, 1] into a list of 2048 elements. My tensor has floating point values. Is there a solution which also works with other data types such as int?", "response":"Use Tensor.tolist() e.g: ``` >>> import torch >>> a = torch.randn(2, 2) >>> a.tolist() [[0.012766935862600803, 0.5415473580360413], [-0.08909505605697632, 0.7729271650314331]] >>> a[0,0].tolist() 0.012766935862600803 ``` To remove all dimensions of size 1, use a.squeeze().tolist(). Alternatively, if all but one dimension are of size 1 (or you wish to get a list of every element of the tensor) you may use a.flatten().tolist().", "best_answers_score":0.8, "library_name":"pytorch", "question_url":"https:\/\/stackoverflow.com\/questions\/53903373\/convert-pytorch-tensor-to-python-list", "best_answers_votes":181, "question_length":281, "response_length":429 }, { "question":"How do I convert a Pandas dataframe to a PyTorch tensor? How do I train a simple neural network with PyTorch on a pandas dataframe df? The column df[\"Target\"] is the target (e.g. labels) of the network. 
This doesn't work: ``` import pandas as pd import torch.utils.data as data_utils target = pd.DataFrame(df['Target']) train = data_utils.TensorDataset(df, target) train_loader = data_utils.DataLoader(train, batch_size=10, shuffle=True) ```", "response":"I'm referring to the question in the title as you haven't really specified anything else in the text, so just converting the DataFrame into a PyTorch tensor. Without information about your data, I'm just taking float values as example targets here. Convert Pandas dataframe to PyTorch tensor? ```py import pandas as pd import torch import random # creating dummy targets (float values) targets_data = [random.random() for i in range(10)] # creating DataFrame from targets_data targets_df = pd.DataFrame(data=targets_data) targets_df.columns = ['targets'] # creating tensor from targets_df torch_tensor = torch.tensor(targets_df['targets'].values) # printing out result print(torch_tensor) ``` Output: ``` tensor([ 0.5827, 0.5881, 0.1543, 0.6815, 0.9400, 0.8683, 0.4289, 0.5940, 0.6438, 0.7514], dtype=torch.float64) ``` Tested with Pytorch 0.4.0. I hope this helps, if you have any further questions - just ask. :)", "best_answers_score":0.8, "library_name":"pytorch", "question_url":"https:\/\/stackoverflow.com\/questions\/50307707\/how-do-i-convert-a-pandas-dataframe-to-a-pytorch-tensor", "best_answers_votes":102, "question_length":441, "response_length":914 }, { "question":"ModuleNotFoundError: No module named 'tools.nnwrap' I tried to install torch using: ```sh pip install torch ``` Installation started, but after a few seconds I got the error: ``` from tools.nnwrap import generate_wrappers as generate_nn_wrappers ModuleNotFoundError: No module named 'tools.nnwrap' ``` OS: Windows", "response":"Anyone who is looking for the solution, refer below: It seems the command to install torch is not working as expected; instead, you can try to install PyTorch using the below command. It's working and solved my above-mentioned issue.
Run the below command (for the below-specified OS, package manager, and language): ``` # for OS: Windows, package-manager: pip, Language: python3.6 (below command is valid for only mentioned python 3.6) pip3 install https:\/\/download.pytorch.org\/whl\/cu90\/torch-1.1.0-cp36-cp36m-win_amd64.whl pip3 install https:\/\/download.pytorch.org\/whl\/cu90\/torchvision-0.3.0-cp36-cp36m-win_amd64.whl ``` For another version\/type of the software (OS, package, Language) installed, the command must be generated from the below-mentioned link. https:\/\/pytorch.org\/get-started\/locally\/ Also, look for the Python version in your IDE (if you are using PyCharm) from the terminal using the command: python. If it returns 32-bit, this could happen; instead, install Python 64-bit.", "best_answers_score":0.8, "library_name":"pytorch", "question_url":"https:\/\/stackoverflow.com\/questions\/56859803\/modulenotfounderror-no-module-named-tools-nnwrap", "best_answers_votes":87, "question_length":313, "response_length":965 }, { "question":"Data Augmentation in PyTorch I am a little bit confused about the data augmentation performed in PyTorch. Now, as far as I know, when we are performing data augmentation, we are KEEPING our original dataset, and then adding other versions of it (Flipping, Cropping...etc). But that doesn't seem to be happening in PyTorch. As far as I understood from the references, when we use data.transforms in PyTorch, then it applies them one by one. So for example: ```python data_transforms = { 'train': transforms.Compose([ transforms.RandomResizedCrop(224), transforms.RandomHorizontalFlip(), transforms.ToTensor(), transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]) ]), 'val': transforms.Compose([ transforms.Resize(256), transforms.CenterCrop(224), transforms.ToTensor(), transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]) ]), } ``` Here, for the training, we are first randomly cropping the image and resizing it to shape (224,224).
Then we are taking these (224,224) images and horizontally flipping them. Therefore, our dataset now contains ONLY the horizontally flipped images, so our original images are lost in this case. Am I right? Is this understanding correct? If not, then where do we tell PyTorch in this code above (taken from the Official Documentation) to keep the original images and resize them to the expected shape (224,224)?", "response":"I assume you are asking whether these data augmentation transforms (e.g. RandomHorizontalFlip) actually increase the size of the dataset as well, or are they applied on each item in the dataset one by one and not adding to the size of the dataset. Running the following simple code snippet we could observe that the latter is true, i.e. if you have a dataset of 8 images and create a PyTorch dataset object for this dataset, then when you iterate through the dataset, the transformations are called on each data point, and the transformed data point is returned. So for example if you have random flipping, some of the data points are returned as original, some are returned as flipped (e.g. 4 flipped and 4 original). In other words, in one iteration through the dataset items, you get 8 data points (some flipped and some not). [Which is at odds with the conventional understanding of augmenting the dataset (e.g.
in this case having 16 data points in the augmented dataset)] ``` import torch from torch.utils.data import Dataset from torchvision import transforms class experimental_dataset(Dataset): def __init__(self, data, transform): self.data = data self.transform = transform def __len__(self): return self.data.shape[0] def __getitem__(self, idx): item = self.data[idx] item = self.transform(item) return item transform = transforms.Compose([ transforms.ToPILImage(), transforms.RandomHorizontalFlip(), transforms.ToTensor() ]) x = torch.rand(8, 1, 2, 2) print(x) dataset = experimental_dataset(x, transform) for item in dataset: print(item) ``` Results: (The little differences in floating points are caused by transforming to PIL image and back) Original dummy dataset: ``` tensor([[[[0.1872, 0.5518], [0.5733, 0.6593]]], [[[0.6570, 0.6487], [0.4415, 0.5883]]], [[[0.5682, 0.3294], [0.9346, 0.1243]]], [[[0.1829, 0.5607], [0.3661, 0.6277]]], [[[0.1201, 0.1574], [0.4224, 0.6146]]], [[[0.9301, 0.3369], [0.9210, 0.9616]]], [[[0.8567, 0.2297], [0.1789, 0.8954]]], [[[0.0068, 0.8932], [0.9971, 0.3548]]]]) ``` transformed dataset: ``` tensor([[[0.1843, 0.5490], [0.5725, 0.6588]]]) tensor([[[0.6549, 0.6471], [0.4392, 0.5882]]]) tensor([[[0.5647, 0.3255], [0.9333, 0.1216]]]) tensor([[[0.5569, 0.1804], [0.6275, 0.3647]]]) tensor([[[0.1569, 0.1176], [0.6118, 0.4196]]]) tensor([[[0.9294, 0.3333], [0.9176, 0.9608]]]) tensor([[[0.8549, 0.2275], [0.1765, 0.8941]]]) tensor([[[0.8902, 0.0039], [0.3529, 0.9961]]])", "best_answers_score":0.8, "library_name":"pytorch", "question_url":"https:\/\/stackoverflow.com\/questions\/51677788\/data-augmentation-in-pytorch", "best_answers_votes":102, "question_length":1372, "response_length":2407 }, { "question":"How to change the learning rate of an optimizer at any given moment (no LR schedule)? Is it possible in PyTorch to change the learning rate of the optimizer in the middle of training dynamically (I don't want to define a learning rate schedule beforehand)?
So let's say I have an optimizer: ``` optim = torch.optim.SGD(model.parameters(), lr=0.01) ``` Now due to some tests which I perform during training, I realize my learning rate is too high so I want to change it to say 0.001. There doesn't seem to be a method optim.set_lr(0.001) but is there some way to do this?", "response":"So the learning rate is stored in optim.param_groups[i]['lr']. optim.param_groups is a list of the different weight groups which can have different learning rates. Thus, simply doing: ``` for g in optim.param_groups: g['lr'] = 0.001 ``` will do the trick. **Alternatively**, as mentioned in the comments, if your learning rate only depends on the epoch number, you can use a learning rate scheduler. For example (modified example from the doc): ``` from torch.optim.lr_scheduler import LambdaLR optimizer = torch.optim.SGD(model.parameters(), lr=0.1, momentum=0.9) # Assuming the optimizer has two groups. lambda_group1 = lambda epoch: epoch \/\/ 30 lambda_group2 = lambda epoch: 0.95 ** epoch scheduler = LambdaLR(optimizer, lr_lambda=[lambda_group1, lambda_group2]) for epoch in range(100): train(...) validate(...) scheduler.step() ``` Also, there is a prebuilt learning rate scheduler to reduce on plateaus.", "best_answers_score":0.8, "library_name":"pytorch", "question_url":"https:\/\/stackoverflow.com\/questions\/48324152\/how-to-change-the-learning-rate-of-an-optimizer-at-any-given-moment-no-lr-sched", "best_answers_votes":187, "question_length":570, "response_length":898 }, { "question":"How to multiply matrices in PyTorch?
With numpy, I can do a simple matrix multiplication like this: ``` a = numpy.ones((3, 2)) b = numpy.ones((2, 1)) result = a.dot(b) ``` However, this does not work with PyTorch: ``` a = torch.ones((3, 2)) b = torch.ones((2, 1)) result = torch.dot(a, b) ``` This code throws the following error: RuntimeError: 1D tensors expected, but got 2D and 2D tensors How do I perform matrix multiplication in PyTorch?", "response":"Use torch.mm: ``` torch.mm(a, b) ``` torch.dot() behaves differently to np.dot(). There's been some discussion about what would be desirable here. Specifically, torch.dot() treats both a and b as 1D vectors (irrespective of their original shape) and computes their inner product. The error is thrown because this behaviour makes your a a vector of length 6 and your b a vector of length 2; hence their inner product can't be computed. For matrix multiplication in PyTorch, use torch.mm(). Numpy's np.dot() in contrast is more flexible; it computes the inner product for 1D arrays and performs matrix multiplication for 2D arrays. torch.matmul performs matrix multiplications if both arguments are 2D and computes their dot product if both arguments are 1D. For inputs of such dimensions, its behaviour is the same as np.dot. It also lets you do broadcasting or matrix x matrix, matrix x vector and vector x vector operations in batches. ``` # 1D inputs, same as torch.dot a = torch.rand(n) b = torch.rand(n) torch.matmul(a, b) # torch.Size([]) # 2D inputs, same as torch.mm a = torch.rand(m, k) b = torch.rand(k, j) torch.matmul(a, b) # torch.Size([m, j]) ```", "best_answers_score":0.8, "library_name":"pytorch", "question_url":"https:\/\/stackoverflow.com\/questions\/44524901\/how-to-multiply-matrices-in-pytorch", "best_answers_votes":133, "question_length":442, "response_length":1159 }, { "question":"Pytorch fails with CUDA error: device-side assert triggered on Colab I am trying to initialize a tensor on Google Colab with GPU enabled. 
```py device = torch.device('cuda' if torch.cuda.is_available() else 'cpu') t = torch.tensor([1,2], device=device) ``` But I am getting this strange error. ```bash RuntimeError: CUDA error: device-side assert triggered CUDA kernel errors might be asynchronously reported at some other API call,so the stacktrace below might be incorrect. For debugging consider passing CUDA_LAUNCH_BLOCKING=1 ``` Even setting that environment variable to 1 doesn't seem to show any further details. Anyone ever had this issue?", "response":"While I tried your code, and it did not give me an error, I can say that usually the best practice to debug CUDA runtime errors like your device-side assert is to switch Colab to CPU and recreate the error. It will give you a more useful traceback error. Most of the time, CUDA runtime errors are caused by some index mismatch, e.g. you tried to train a network with 10 output nodes on a dataset with 15 labels. And the thing with this CUDA error is that once you get this error once, you will receive it for every operation you do with torch.tensors. This forces you to restart your notebook. I suggest you restart your notebook, get a more accurate traceback by moving to CPU, and check the rest of your code, especially if you train a model on a set of targets somewhere. To gain a clearer insight into the typical utilization of GPUs in PyTorch applications, I recommend exploring deep learning projects on GitHub. Websites such as repo-rift.com can be particularly useful for this purpose. They allow you to perform text searches with queries like \"How does this paper use GPU\".
This can help you pinpoint the exact usage of CUDA in specific lines of code within extensive repositories.", "best_answers_score":0.8, "library_name":"pytorch", "question_url":"https:\/\/stackoverflow.com\/questions\/68166721\/pytorch-fails-with-cuda-error-device-side-assert-triggered-on-colab", "best_answers_votes":123, "question_length":647, "response_length":1196 }, { "question":"PyTorch: How to get the shape of a Tensor as a list of int In numpy, V.shape gives a tuple of ints of dimensions of V. In tensorflow V.get_shape().as_list() gives a list of integers of the dimensions of V. In pytorch, V.size() gives a size object, but how do I convert it to ints?", "response":"For PyTorch v1.0 and possibly above: ``` >>> import torch >>> var = torch.tensor([[1,0], [0,1]]) # Using .size function, returns a torch.Size object. >>> var.size() torch.Size([2, 2]) >>> type(var.size()) <class 'torch.Size'> # Similarly, using .shape >>> var.shape torch.Size([2, 2]) >>> type(var.shape) <class 'torch.Size'> ``` You can cast any torch.Size object to a native Python list: ``` >>> list(var.size()) [2, 2] >>> type(list(var.size())) <class 'list'> ``` In PyTorch v0.3 and 0.4: Simply list(var.size()), e.g.: ``` >>> import torch >>> from torch.autograd import Variable >>> from torch import IntTensor >>> var = Variable(IntTensor([[1,0],[0,1]])) >>> var Variable containing: 1 0 0 1 [torch.IntTensor of size 2x2] >>> var.size() torch.Size([2, 2]) >>> list(var.size()) [2, 2] ```", "best_answers_score":0.8, "library_name":"pytorch", "question_url":"https:\/\/stackoverflow.com\/questions\/46826218\/pytorch-how-to-get-the-shape-of-a-tensor-as-a-list-of-int", "best_answers_votes":114, "question_length":280, "response_length":740 }, { "question":"How to clear CUDA memory in PyTorch I am trying to get the output of a neural network which I have already trained. The input is an image of the size 300x300. I am using a batch size of 1, but I still get a CUDA error: out of memory error after I have successfully got the output for 25 images.
I tried torch.cuda.empty_cache(), but this still doesn't seem to solve the problem. Code: ``` device = torch.device(\"cuda:0\" if torch.cuda.is_available() else \"cpu\") train_x = torch.tensor(train_x, dtype=torch.float32).view(-1, 1, 300, 300) train_x = train_x.to(device) dataloader = torch.utils.data.DataLoader(train_x, batch_size=1, shuffle=False) right = [] for i, left in enumerate(dataloader): print(i) temp = model(left).view(-1, 1, 300, 300) right.append(temp.to('cpu')) del temp torch.cuda.empty_cache() ``` This for loop runs for 25 times every time before giving the memory error. Every time, I am sending a new image in the network for computation. So, I don't really need to store the previous computation results in the GPU after every iteration in the loop. Is there any way to achieve this?", "response":"I figured out where I was going wrong. I am posting the solution as an answer for others who might be struggling with the same problem. Basically, what PyTorch does is that it creates a computational graph whenever I pass the data through my network and stores the computations on the GPU memory, in case I want to calculate the gradient during backpropagation. But since I only wanted to perform a forward propagation, I simply needed to specify torch.no_grad() for my model. 
Thus, the for loop in my code could be rewritten as: ``` for i, left in enumerate(dataloader): print(i) with torch.no_grad(): temp = model(left).view(-1, 1, 300, 300) right.append(temp.to('cpu')) del temp torch.cuda.empty_cache() ``` Specifying no_grad() to my model tells PyTorch that I don't want to store any previous computations, thus freeing my GPU space.", "best_answers_score":0.8, "library_name":"pytorch", "question_url":"https:\/\/stackoverflow.com\/questions\/55322434\/how-to-clear-cuda-memory-in-pytorch", "best_answers_votes":126, "question_length":1099, "response_length":838 }, { "question":"Pytorch says that CUDA is not available (on Ubuntu) I'm trying to run Pytorch on a laptop that I have. It's an older model but it does have an Nvidia graphics card. I realize it is probably not going to be sufficient for real machine learning but I am trying to do it so I can learn the process of getting CUDA installed. I have followed the steps on the installation guide for Ubuntu 18.04 (my specific distribution is Xubuntu). My graphics card is a GeForce 845M, verified by lspci | grep nvidia: ``` 01:00.0 3D controller: NVIDIA Corporation GM107M [GeForce 845M] (rev a2) 01:00.1 Audio device: NVIDIA Corporation Device 0fbc (rev a1) ``` I also have gcc 7.5 installed, verified by gcc --version ``` gcc (Ubuntu 7.5.0-3ubuntu1~18.04) 7.5.0 Copyright (C) 2017 Free Software Foundation, Inc. This is free software; see the source for copying conditions. There is NO warranty; not even for MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. ``` And I have the correct headers installed, verified by trying to install them with sudo apt-get install linux-headers-$(uname -r): ``` Reading package lists... Done Building dependency tree Reading state information... Done linux-headers-4.15.0-106-generic is already the newest version (4.15.0-106.107). ``` I then followed the installation instructions using a local .deb for version 10.1. 
Now, when I run nvidia-smi, I get: ``` +-----------------------------------------------------------------------------+ | NVIDIA-SMI 418.87.00 Driver Version: 418.87.00 CUDA Version: 10.1 | |-------------------------------+----------------------+----------------------+ | GPU Name Persistence-M| Bus-Id Disp.A | Volatile Uncorr. ECC | | Fan Temp Perf Pwr:Usage\/Cap| Memory-Usage | GPU-Util Compute M. | |===============================+======================+======================| | 0 GeForce 845M On | 00000000:01:00.0 Off | N\/A | | N\/A 40C P0 N\/A \/ N\/A | 88MiB \/ 2004MiB | 1% Default | +-------------------------------+----------------------+----------------------+ +-----------------------------------------------------------------------------+ | Processes: GPU Memory | | GPU PID Type Process name Usage | |=============================================================================| | 0 982 G \/usr\/lib\/xorg\/Xorg 87MiB | +-----------------------------------------------------------------------------+ ``` and I run nvcc -V I get: ``` nvcc: NVIDIA (R) Cuda compiler driver Copyright (c) 2005-2019 NVIDIA Corporation Built on Sun_Jul_28_19:07:16_PDT_2019 Cuda compilation tools, release 10.1, V10.1.243 ``` I then performed the post-installation instructions from section 6.1, and so as a result, echo $PATH looks like this: ``` \/home\/isaek\/anaconda3\/envs\/stylegan2_pytorch\/bin:\/home\/isaek\/anaconda3\/bin:\/home\/isaek\/anaconda3\/condabin:\/usr\/local\/cuda-10.1\/bin:\/usr\/local\/sbin:\/usr\/local\/bin:\/usr\/sbin:\/usr\/bin:\/sbin:\/bin:\/usr\/games:\/usr\/local\/games:\/snap\/bin ``` echo $LD_LIBRARY_PATH looks like this: ```bash \/usr\/local\/cuda-10.1\/lib64 ``` and my \/etc\/udev\/rules.d\/40-vm-hotadd.rules file looks like this: ```bash # On Hyper-V and Xen Virtual Machines we want to add memory and cpus as soon as they appear ATTR{[dmi\/id]sys_vendor}==\"Microsoft Corporation\", ATTR{[dmi\/id]product_name}==\"Virtual Machine\", GOTO=\"vm_hotadd_apply\" 
ATTR{[dmi\/id]sys_vendor}==\"Xen\", GOTO=\"vm_hotadd_apply\" GOTO=\"vm_hotadd_end\" LABEL=\"vm_hotadd_apply\" # Memory hotadd request # CPU hotadd request SUBSYSTEM==\"cpu\", ACTION==\"add\", DEVPATH==\"\/devices\/system\/cpu\/cpu[0-9]*\", TEST==\"online\", ATTR{online}=\"1\" LABEL=\"vm_hotadd_end\" ``` After all of this, I even compiled and ran the samples. .\/deviceQuery returns: ``` .\/deviceQuery Starting... CUDA Device Query (Runtime API) version (CUDART static linking) Detected 1 CUDA Capable device(s) Device 0: \"GeForce 845M\" CUDA Driver Version \/ Runtime Version 10.1 \/ 10.1 CUDA Capability Major\/Minor version number: 5.0 Total amount of global memory: 2004 MBytes (2101870592 bytes) ( 4) Multiprocessors, (128) CUDA Cores\/MP: 512 CUDA Cores GPU Max Clock rate: 863 MHz (0.86 GHz) Memory Clock rate: 1001 Mhz Memory Bus Width: 64-bit L2 Cache Size: 1048576 bytes Maximum Texture Dimension Size (x,y,z) 1D=(65536), 2D=(65536, 65536), 3D=(4096, 4096, 4096) Maximum Layered 1D Texture Size, (num) layers 1D=(16384), 2048 layers Maximum Layered 2D Texture Size, (num) layers 2D=(16384, 16384), 2048 layers Total amount of constant memory: 65536 bytes Total amount of shared memory per block: 49152 bytes Total number of registers available per block: 65536 Warp size: 32 Maximum number of threads per multiprocessor: 2048 Maximum number of threads per block: 1024 Max dimension size of a thread block (x,y,z): (1024, 1024, 64) Max dimension size of a grid size (x,y,z): (2147483647, 65535, 65535) Maximum memory pitch: 2147483647 bytes Texture alignment: 512 bytes Concurrent copy and kernel execution: Yes with 1 copy engine(s) Run time limit on kernels: Yes Integrated GPU sharing Host Memory: No Support host page-locked memory mapping: Yes Alignment requirement for Surfaces: Yes Device has ECC support: Disabled Device supports Unified Addressing (UVA): Yes Device supports Compute Preemption: No Supports Cooperative Kernel Launch: No Supports MultiDevice Co-op Kernel Launch: 
No Device PCI Domain ID \/ Bus ID \/ location ID: 0 \/ 1 \/ 0 Compute Mode: deviceQuery, CUDA Driver = CUDART, CUDA Driver Version = 10.1, CUDA Runtime Version = 10.1, NumDevs = 1 Result = PASS ``` and .\/bandwidthTest returns: ``` [CUDA Bandwidth Test] - Starting... Running on... Device 0: GeForce 845M Quick Mode Host to Device Bandwidth, 1 Device(s) PINNED Memory Transfers Transfer Size (Bytes) Bandwidth(GB\/s) 32000000 11.7 Device to Host Bandwidth, 1 Device(s) PINNED Memory Transfers Transfer Size (Bytes) Bandwidth(GB\/s) 32000000 11.8 Device to Device Bandwidth, 1 Device(s) PINNED Memory Transfers Transfer Size (Bytes) Bandwidth(GB\/s) 32000000 14.5 Result = PASS NOTE: The CUDA Samples are not meant for performance measurements. Results may vary when GPU Boost is enabled. ``` But after all of this, this Python snippet (in a conda environment with all dependencies installed): ```py import torch torch.cuda.is_available() ``` returns False Does anybody have any idea about how to resolve this? I've tried to add \/usr\/local\/cuda-10.1\/bin to etc\/environment like this: ```bash PATH=$PATH:\/usr\/local\/cuda-10.1\/bin ``` And restarting the terminal, but that didn't fix it. I really don't know what else to try. EDIT - Results of collect_env for @kHarshit ``` Collecting environment information... 
PyTorch version: 1.5.0 Is debug build: No CUDA used to build PyTorch: 10.2 OS: Ubuntu 18.04.4 LTS GCC version: (Ubuntu 7.5.0-3ubuntu1~18.04) 7.5.0 CMake version: Could not collect Python version: 3.6 Is CUDA available: No CUDA runtime version: 10.1.243 GPU models and configuration: GPU 0: GeForce 845M Nvidia driver version: 418.87.00 cuDNN version: Could not collect Versions of relevant libraries: [pip] numpy==1.18.5 [pip] pytorch-ranger==0.1.1 [pip] stylegan2-pytorch==0.12.0 [pip] torch==1.5.0 [pip] torch-optimizer==0.0.1a12 [pip] torchvision==0.6.0 [pip] vector-quantize-pytorch==0.0.2 [conda] numpy 1.18.5 pypi_0 pypi [conda] pytorch-ranger 0.1.1 pypi_0 pypi [conda] stylegan2-pytorch 0.12.0 pypi_0 pypi [conda] torch 1.5.0 pypi_0 pypi [conda] torch-optimizer 0.0.1a12 pypi_0 pypi [conda] torchvision 0.6.0 pypi_0 pypi [conda] vector-quantize-pytorch 0.0.2 pypi_0 pypi ```", "response":"PyTorch doesn't use the system's CUDA library. When you install PyTorch from the precompiled binaries using either pip or conda, it is shipped with a copy of the specified version of the CUDA library, which is installed locally. In fact, you don't even need to install CUDA on your system to use PyTorch with CUDA support. There are two scenarios which could have caused your issue: 1. You installed the CPU-only version of PyTorch. In this case PyTorch wasn't compiled with CUDA support, so it didn't support CUDA. 2. You installed the CUDA 10.2 version of PyTorch. In this case the problem is that your graphics card currently uses the 418.87 drivers, which only support up to CUDA 10.1. The two potential fixes in this case would be to either install updated drivers (version >= 440.33 according to Table 2) or to install a version of PyTorch compiled against CUDA 10.1. To determine the appropriate command to use when installing PyTorch you can use the handy widget in the \"Install PyTorch\" section at pytorch.org.
Just select the appropriate operating system, package manager, and CUDA version, then run the recommended command. In your case one solution was to use ``` conda install pytorch torchvision cudatoolkit=10.1 -c pytorch ``` which explicitly specifies to conda that you want to install the version of PyTorch compiled against CUDA 10.1. For more information about PyTorch CUDA compatibility with respect to drivers and hardware see this answer. Edit After you added the output of collect_env we can see that the problem was that you had the CUDA 10.2 version of PyTorch installed. Based on that, an alternative solution would have been to update the graphics driver as elaborated in item 2 and the linked answer.", "best_answers_score":0.8, "library_name":"pytorch", "question_url":"https:\/\/stackoverflow.com\/questions\/62359175\/pytorch-says-that-cuda-is-not-available-on-ubuntu", "best_answers_votes":84, "question_length":7420, "response_length":1716 }, { "question":"Evaluating pytorch models: `with torch.no_grad` vs `model.eval()` When I want to evaluate the performance of my model on the validation set, is it preferred to use with torch.no_grad: or model.eval()?", "response":"TL;DR: Use both. They do different things, and have different scopes. with torch.no_grad - disables tracking of gradients in autograd. model.eval() changes the forward() behaviour of the module it is called upon, e.g., it disables dropout and has batch norm use the entire population statistics. with torch.no_grad The torch.autograd.no_grad documentation says: Context-manager that disabled [sic] gradient calculation. Disabling gradient calculation is useful for inference, when you are sure that you will not call Tensor.backward(). It will reduce memory consumption for computations that would otherwise have requires_grad=True. In this mode, the result of every computation will have requires_grad=False, even when the inputs have requires_grad=True.
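A minimal sketch of the context manager in action (illustrative only; the small linear module here is just a stand-in): ```python
import torch
import torch.nn as nn

model = nn.Linear(4, 2)  # throwaway module, purely for illustration
x = torch.rand(1, 4)

with torch.no_grad():
    out = model(x)

# no graph was recorded inside the context, so the output
# does not require gradients
print(out.requires_grad)  # False
```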
model.eval() The nn.Module.eval documentation says: Sets the module in evaluation mode. This has any effect only on certain modules. See documentations of particular modules for details of their behaviors in training\/evaluation mode, if they are affected, e.g. Dropout, BatchNorm, etc. The creator of pytorch said the documentation should be updated to suggest the usage of both, and I raised the pull request.", "best_answers_score":0.8, "library_name":"pytorch", "question_url":"https:\/\/stackoverflow.com\/questions\/55627780\/evaluating-pytorch-models-with-torch-no-grad-vs-model-eval", "best_answers_votes":104, "question_length":200, "response_length":1162 }, { "question":"PyTorch reshape tensor dimension I want to reshape a vector of shape (5,) into a matrix of shape (1, 5). With numpy, I can do: ``` >>> import numpy as np >>> a = np.array([1, 2, 3, 4, 5]) >>> a.shape (5,) >>> a = np.reshape(a, (1, 5)) >>> a.shape (1, 5) >>> a array([[1, 2, 3, 4, 5]]) ``` But how do I do this with PyTorch?", "response":"Use torch.unsqueeze(input, dim, out=None): ``` >>> import torch >>> a = torch.Tensor([1, 2, 3, 4, 5]) >>> a 1 2 3 4 5 [torch.FloatTensor of size 5] >>> a = a.unsqueeze(0) >>> a 1 2 3 4 5 [torch.FloatTensor of size 1x5] ```", "best_answers_score":0.8, "library_name":"pytorch", "question_url":"https:\/\/stackoverflow.com\/questions\/43328632\/pytorch-reshape-tensor-dimension", "best_answers_votes":56, "question_length":323, "response_length":222 }, { "question":"PyTorch: How to use DataLoaders for custom Datasets How to make use of the torch.utils.data.Dataset and torch.utils.data.DataLoader on your own data (not just the torchvision.datasets)? Is there a way to use the inbuilt DataLoaders which they use on TorchVisionDatasets to be used on any dataset?", "response":"Yes, that is possible. Just create the objects by yourself, e.g. 
``` import torch.utils.data as data_utils train = data_utils.TensorDataset(features, targets) train_loader = data_utils.DataLoader(train, batch_size=50, shuffle=True) ``` where features and targets are tensors. features has to be 2-D, i.e. a matrix where each line represents one training sample, and targets may be 1-D or 2-D, depending on whether you are trying to predict a scalar or a vector. EDIT: response to @sarthak's question Basically yes. If you create an object of type TensorDataset, then the constructor investigates whether the first dimensions of the feature tensor (which is actually called data_tensor) and the target tensor (called target_tensor) have the same length: ``` assert data_tensor.size(0) == target_tensor.size(0) ``` However, if you want to feed these data into a neural network subsequently, then you need to be careful. While convolution layers work on data like yours, (I think) all of the other types of layers expect the data to be given in matrix form. So, if you run into an issue like this, then an easy solution would be to convert your 4D dataset (given as some kind of tensor, e.g. FloatTensor) into a matrix by using the method view. For your 5000xnxnx3 dataset, this would look like this: ``` dataset_2d = dataset_4d.view(5000, -1) ``` (The value -1 tells PyTorch to figure out the length of the second dimension automatically.)", "best_answers_score":0.8, "library_name":"pytorch", "question_url":"https:\/\/stackoverflow.com\/questions\/41924453\/pytorch-how-to-use-dataloaders-for-custom-datasets", "best_answers_votes":80, "question_length":296, "response_length":1434 }, { "question":"Why do we call .detach() before calling .numpy() on a Pytorch Tensor? It has been firmly established that my_tensor.detach().numpy() is the correct way to get a numpy array from a torch tensor. I'm trying to get a better understanding of why. I have studied the internal workings of PyTorch's autodifferentiation library, and I'm still confused by these answers.
Why does it break the graph to move to numpy? Is it because any operations on the numpy array will not be tracked in the autodiff graph? I feel that a thorough high-quality Stack-Overflow answer that explains the reason for this to new users of PyTorch who don't yet understand autodifferentiation is called for here. In particular, I think it would be helpful to illustrate the graph through a figure and show how this code could be problematic if it didn't throw an error: ``` import torch tensor1 = torch.tensor([1.0,2.0],requires_grad=True) print(tensor1) print(type(tensor1)) tensor1 = tensor1.numpy() print(tensor1) print(type(tensor1)) ```", "response":"I think the most crucial point to understand here is the difference between a torch.tensor and an np.ndarray: While both objects are used to store n-dimensional matrices (aka \"Tensors\"), a torch.tensor has an additional \"layer\", which stores the computational graph leading to the associated n-dimensional matrix. So, if you are only interested in an efficient and easy way to perform mathematical operations on matrices, np.ndarray or torch.tensor can be used interchangeably. However, torch.tensors are designed to be used in the context of gradient descent optimization, and therefore they hold not only a tensor with numeric values, but (and more importantly) the computational graph leading to these values. This computational graph is then used (using the chain rule of derivatives) to compute the derivative of the loss function w.r.t each of the independent variables used to compute the loss. As mentioned before, an np.ndarray object does not have this extra \"computational graph\" layer and therefore, when converting a torch.tensor to an np.ndarray you must explicitly remove the computational graph of the tensor using the detach() command. Computational Graph From your comments it seems like this concept is a bit vague. I'll try and illustrate it with a simple example.
Consider a simple function of two (vector) variables, x and w: ```py x = torch.rand(4, requires_grad=True) w = torch.rand(4, requires_grad=True) y = x @ w # inner-product of x and w z = y ** 2 # square the inner product ``` If we are only interested in the value of z, we need not worry about any graphs, we simply move forward from the inputs, x and w, to compute y and then z. However, what would happen if we do not care so much about the value of z, but rather want to ask the question \"what is w that minimizes z for a given x\"? To answer that question, we need to compute the derivative of z w.r.t w. How can we do that? Using the chain rule we know that dz\/dw = dz\/dy * dy\/dw. That is, to compute the gradient of z w.r.t w we need to move backward from z back to w computing the gradient of the operation at each step as we trace back our steps from z to w. This \"path\" we trace back is the computational graph of z and it tells us how to compute the derivative of z w.r.t the inputs leading to z: ```py z.backward() # ask pytorch to trace back the computation of z ``` We can now inspect the gradient of z w.r.t w: ``` w.grad # the resulting gradient of z w.r.t w tensor([0.8010, 1.9746, 1.5904, 1.0408]) ``` Note that this is exactly equal to ``` 2*y*x tensor([0.8010, 1.9746, 1.5904, 1.0408], grad_fn=<MulBackward0>) ``` since dz\/dy = 2*y and dy\/dw = x. Each tensor along the path stores its \"contribution\" to the computation: ``` z tensor(1.4061, grad_fn=<PowBackward0>) ``` And ``` y tensor(1.1858, grad_fn=<DotBackward>) ``` As you can see, y and z store not only the \"forward\" value of <x, w> or y**2 but also the computational graph -- the grad_fn that is needed to compute the derivatives (using the chain rule) when tracing back the gradients from z (output) to w (inputs). These grad_fn are essential components to torch.tensors and without them one cannot compute derivatives of complicated functions. However, np.ndarrays do not have this capability at all and they do not have this information.
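As a small side illustration of the guard this creates (not part of the graph example; the values here are just for show): ```python
import torch

t = torch.ones(2, requires_grad=True)
# t.numpy() would raise a RuntimeError here, since t still carries its graph
arr = t.detach().numpy()  # detach() drops the graph; the conversion now works
print(arr)  # [1. 1.]
```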
Please see this answer for more information on tracing back the derivative using the backward() function. Since both np.ndarray and torch.tensor have a common \"layer\" storing an n-d array of numbers, pytorch uses the same storage to save memory: numpy() \u2192 numpy.ndarray Returns self tensor as a NumPy ndarray. This tensor and the returned ndarray share the same underlying storage. Changes to self tensor will be reflected in the ndarray and vice versa. The other direction works in the same way as well: torch.from_numpy(ndarray) \u2192 Tensor Creates a Tensor from a numpy.ndarray. The returned tensor and ndarray share the same memory. Modifications to the tensor will be reflected in the ndarray and vice versa. Thus, when creating an np.array from a torch.tensor or vice versa, both objects reference the same underlying storage in memory. Since an np.ndarray does not store\/represent the computational graph associated with the array, this graph should be explicitly removed using detach() when numpy and torch are to reference the same tensor. Note that if you wish, for some reason, to use pytorch only for mathematical operations without back-propagation, you can use the with torch.no_grad() context manager, in which case computational graphs are not created and torch.tensors and np.ndarrays can be used interchangeably. ```py with torch.no_grad(): x_t = torch.rand(3,4) y_np = np.ones((4, 2), dtype=np.float32) x_t @ torch.from_numpy(y_np) # dot product in torch np.dot(x_t.numpy(), y_np) # the same dot product in numpy ```", "best_answers_score":0.8, "library_name":"pytorch", "question_url":"https:\/\/stackoverflow.com\/questions\/63582590\/why-do-we-call-detach-before-calling-numpy-on-a-pytorch-tensor", "best_answers_votes":138, "question_length":1012, "response_length":4780 }, { "question":"Pytorch. How does pin_memory work in Dataloader? I want to understand how the pin_memory parameter in Dataloader works.
According to the documentation: pin_memory (bool, optional) \u2013 If True, the data loader will copy tensors into CUDA pinned memory before returning them. Below is a self-contained code example. ```py import torchvision import torch print('torch.cuda.is_available()', torch.cuda.is_available()) train_dataset = torchvision.datasets.CIFAR10(root='cifar10_pytorch', download=True, transform=torchvision.transforms.ToTensor()) train_dataloader = torch.utils.data.DataLoader(train_dataset, batch_size=64, pin_memory=True) x, y = next(iter(train_dataloader)) print('x.device', x.device) print('y.device', y.device) ``` Producing the following output: ```py torch.cuda.is_available() True x.device cpu y.device cpu ``` But I was expecting something like this, because I specified flag pin_memory=True in Dataloader. ```py torch.cuda.is_available() True x.device cuda:0 y.device cuda:0 ``` Also I run some benchmark: ```py import torchvision import torch import time import numpy as np pin_memory = True train_dataset = torchvision.datasets.CIFAR10(root='cifar10_pytorch', download=True, transform=torchvision.transforms.ToTensor()) train_dataloader = torch.utils.data.DataLoader(train_dataset, batch_size=64, pin_memory=pin_memory) print('pin_memory:', pin_memory) times = [] n_runs = 10 for i in range(n_runs): st = time.time() for bx, by in train_dataloader: bx, by = bx.cuda(), by.cuda() times.append(time.time() - st) print('average time:', np.mean(times)) ``` I got the following results. ``` pin_memory: False average time: 6.5701503753662 pin_memory: True average time: 7.0254474401474 ``` So pin_memory=True only makes things slower. Can someone explain me this behaviour?", "response":"The documentation is perhaps overly laconic, given that the terms used are fairly niche. In CUDA terms, pinned memory does not mean GPU memory but non-paged CPU memory. 
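You can inspect this distinction by hand (an illustrative aside; actually pinning a tensor requires a CUDA-enabled build, hence the guard): ```python
import torch

x = torch.randn(8)
print(x.is_pinned())  # False: ordinary pageable CPU memory
if torch.cuda.is_available():
    x = x.pin_memory()    # copy into page-locked (pinned) CPU memory
    print(x.is_pinned())  # True: still a CPU tensor, but non-paged
```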
The benefits and rationale are provided here, but the gist of it is that this flag allows the x.cuda() operation (which you still have to execute as usual) to avoid one implicit CPU-to-CPU copy, which makes it a bit more performant. Additionally, with pinned memory tensors you can use x.cuda(non_blocking=True) to perform the copy asynchronously with respect to the host. This can lead to performance gains in certain scenarios, namely if your code is structured as: 1. x.cuda(non_blocking=True) 2. perform some CPU operations 3. perform GPU operations using x. Since the copy initiated in 1. is asynchronous, it does not block 2. from proceeding while the copy is underway, and thus the two can happen side by side (which is the gain). Since step 3. requires x to be already copied over to the GPU, it cannot be executed until 1. is complete - therefore only 1. and 2. can overlap, and 3. will definitely take place afterwards. The duration of 2. is therefore the maximum time you can expect to save with non_blocking=True. Without non_blocking=True your CPU would be waiting idle for the transfer to complete before proceeding with 2. Note: perhaps step 2. could also comprise GPU operations, as long as they do not require x - I am not sure if this is true and please don't quote me on that. Edit: I believe you're missing the point with your benchmark. There are three issues with it: 1. You're not using non_blocking=True in your .cuda() calls. 2. You're not using multiprocessing in your DataLoader, which means that most of the work is done synchronously on the main thread anyway, trumping the memory transfer costs. 3. You're not performing any CPU work in your data loading loop (aside from .cuda() calls) so there is no work to be overlapped with memory transfers.
A benchmark closer to how pin_memory is meant to be used would be ``` import torchvision, torch, time import numpy as np pin_memory = True batch_size = 1024 # bigger memory transfers to make their cost more noticeable n_workers = 6 # parallel workers to free up the main thread and reduce data decoding overhead train_dataset = torchvision.datasets.CIFAR10( root='cifar10_pytorch', download=True, transform=torchvision.transforms.ToTensor() ) train_dataloader = torch.utils.data.DataLoader( train_dataset, batch_size=batch_size, pin_memory=pin_memory, num_workers=n_workers ) print('pin_memory:', pin_memory) times = [] n_runs = 10 def work(): # emulates the CPU work done time.sleep(0.1) for i in range(n_runs): st = time.time() for bx, by in train_dataloader: bx, by = bx.cuda(non_blocking=pin_memory), by.cuda(non_blocking=pin_memory) work() times.append(time.time() - st) print('average time:', np.mean(times)) ``` which gives an average of 5.48s for my machine with memory pinning and 5.72s without.", "best_answers_score":0.8, "library_name":"pytorch", "question_url":"https:\/\/stackoverflow.com\/questions\/55563376\/pytorch-how-does-pin-memory-work-in-dataloader", "best_answers_votes":101, "question_length":1791, "response_length":2926 }, { "question":"How to get the device type of a pytorch module conveniently? I have to stack some of my own layers on different kinds of pytorch models with different devices. E.g. A is a cuda model and B is a cpu model (but I don't know it before I get the device type). Then the new models are C and D respectively, where ```py class NewModule(torch.nn.Module): def __init__(self, base): super(NewModule, self).__init__() self.base = base self.extra = my_layer() # e.g. torch.nn.Linear() def forward(self,x): y = self.base(x) z = self.extra(y) return z ... C = NewModule(A) # cuda D = NewModule(B) # cpu ``` However I must move base and extra to the same device, i.e. base and extra of C are cuda models and D's are cpu models.
So I tried this __init__: ```py def __init__(self, base): super(NewModule, self).__init__() self.base = base self.extra = my_layer().to(base.device) ``` Unfortunately, there's no attribute device in torch.nn.Module (it raises AttributeError). What should I do to get the device type of base? Or is there any other method to make base and extra be on the same device automatically, even when the structure of base is unspecified?", "response":"This question has been asked many times (1, 2). Quoting the reply from a PyTorch developer: That\u2019s not possible. Modules can hold parameters of different types on different devices, and so it\u2019s not always possible to unambiguously determine the device. The recommended workflow (as described on PyTorch blog) is to create the device object separately and use that everywhere. Copy-pasting the example from the blog here: ``` # at beginning of the script device = torch.device(\"cuda:0\" if torch.cuda.is_available() else \"cpu\") ... # then whenever you get a new Tensor or Module # this won't copy if they are already on the desired device input = data.to(device) model = MyModule(...).to(device) ``` Do note that there is nothing stopping you from adding a .device property to the models. As mentioned by Kani (in the comments), if all the parameters in the model are on the same device, one could use next(model.parameters()).device.", "best_answers_score":0.8, "library_name":"pytorch", "question_url":"https:\/\/stackoverflow.com\/questions\/58926054\/how-to-get-the-device-type-of-a-pytorch-module-conveniently", "best_answers_votes":93, "question_length":1119, "response_length":936 }, { "question":"What is the difference between torch.tensor and torch.Tensor? What is the difference between torch.tensor and torch.Tensor? What was the reasoning for providing these two very similar and confusing alternatives?", "response":"In PyTorch torch.Tensor is the main tensor class. So all tensors are just instances of torch.Tensor.
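A quick check makes this concrete (illustrative): ```python
import torch

t = torch.tensor([1.0, 2.0])        # built via the factory function
print(isinstance(t, torch.Tensor))  # True: the result is an instance of the class
```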
When you call torch.Tensor() you will get an empty tensor without any data. In contrast, torch.tensor is a function which returns a tensor. In the documentation it says: ``` torch.tensor(data, dtype=None, device=None, requires_grad=False) \u2192 Tensor ``` Constructs a tensor with data. This also explains why it is no problem to create an empty tensor instance of `torch.Tensor` without `data` by calling: ``` tensor_without_data = torch.Tensor() ``` By contrast: ``` tensor_without_data = torch.tensor() ``` will lead to an error: ``` --------------------------------------------------------------------------- TypeError Traceback (most recent call last) in () ----> 1 torch.tensor() TypeError: tensor() missing 1 required positional arguments: \"data\" ``` But in general there is no reason to choose `torch.Tensor` over `torch.tensor`. Also, `torch.Tensor` lacks a docstring. Similar behaviour to creating a tensor without data, as with torch.Tensor(), can be achieved using: ``` torch.tensor(()) ``` Output: ``` tensor([]) ```", "best_answers_score":0.8, "library_name":"pytorch", "question_url":"https:\/\/stackoverflow.com\/questions\/51911749\/what-is-the-difference-between-torch-tensor-and-torch-tensor", "best_answers_votes":63, "question_length":211, "response_length":1136 }, { "question":"Get total amount of free GPU memory and available using pytorch I'm using google colab free Gpu's for experimentation and wanted to know how much GPU Memory available to play around, torch.cuda.memory_allocated() returns the current GPU memory occupied, but how do we determine total available memory using PyTorch.", "response":"PyTorch can provide you total, reserved and allocated info: ``` t = torch.cuda.get_device_properties(0).total_memory r = torch.cuda.memory_reserved(0) a = torch.cuda.memory_allocated(0) f = r-a # free inside reserved ``` Python bindings to NVIDIA can bring you the info for the whole GPU (0 in this case means first GPU device): ``` from pynvml import *
nvmlInit() h = nvmlDeviceGetHandleByIndex(0) info = nvmlDeviceGetMemoryInfo(h) print(f'total : {info.total}') print(f'free : {info.free}') print(f'used : {info.used}') ``` ```bash pip install pynvml ``` You may check nvidia-smi to get memory info. You may use nvtop, but this tool needs to be installed from source (at the moment of writing this). Another tool where you can check memory is gpustat (pip3 install gpustat). If you would like to use C++ CUDA: ``` #include <iostream> #include \"cuda.h\" #include \"cuda_runtime_api.h\" using namespace std; int main( void ) { int num_gpus; size_t free, total; cudaGetDeviceCount( &num_gpus ); for ( int gpu_id = 0; gpu_id < num_gpus; gpu_id++ ) { cudaSetDevice( gpu_id ); int id; cudaGetDevice( &id ); cudaMemGetInfo( &free, &total ); cout << \"GPU \" << id << \" memory: free=\" << free << \", total=\" << total << endl; } return 0; } ```", "best_answers_score":0.8, "library_name":"pytorch", "question_url":"https:\/\/stackoverflow.com\/questions\/58216000\/get-total-amount-of-free-gpu-memory-and-available-using-pytorch", "best_answers_votes":129, "question_length":315, "response_length":1223 }, { "question":"`CrossEntropyLoss()` in PyTorch Cross entropy formula: But why does the following give loss = 0.7437 instead of loss = 0 (since 1*log(1) = 0)? ```python import torch import torch.nn as nn from torch.autograd import Variable output = Variable(torch.FloatTensor([0,0,0,1])).view(1, -1) target = Variable(torch.LongTensor([3])) criterion = nn.CrossEntropyLoss() loss = criterion(output, target) print(loss) # 0.7437 ```", "response":"In your example you are treating output [0, 0, 0, 1] as probabilities as required by the mathematical definition of cross entropy. But PyTorch treats them as outputs that don\u2019t need to sum to 1, and need to be first converted into probabilities, for which it uses the softmax function.
So H(p, q) becomes: ``` H(p, softmax(output)) ``` Translating the output [0, 0, 0, 1] into probabilities: ``` softmax([0, 0, 0, 1]) = [0.1749, 0.1749, 0.1749, 0.4754] ``` whence: ``` -log(0.4754) = 0.7437 ```", "best_answers_score":0.8, "library_name":"pytorch", "question_url":"https:\/\/stackoverflow.com\/questions\/49390842\/crossentropyloss-in-pytorch", "best_answers_votes":105, "question_length":416, "response_length":494 }, { "question":"What is the difference between .pt, .pth and .pwf extentions in PyTorch? I have seen in some code examples, that people use .pwf as model file saving format. But in PyTorch documentation .pt and .pth are recommended. I used .pwf and worked fine for small 1->16->16 convolutional network. My question is what is the difference between these formats? Why is .pwf extension not even recommended in PyTorch documentation and why do people still use it?", "response":"There are no differences between the extensions that were listed: .pt, .pth, .pwf. One can use whatever extension (s)he wants. So, if you're using torch.save() for saving models, then it by default uses python pickle (pickle_module=pickle) to save the objects and some metadata. Thus, you have the liberty to choose the extension you want, as long as it doesn't cause collisions with any other standardized extensions. Having said that, it is however not recommended to use .pth extension when checkpointing models because it collides with Python path (.pth) configuration files. Because of this, I myself use .pth.tar or .pt but not .pth, or any other extensions. The standard way of checkpointing models in PyTorch is not finalized yet. 
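Whatever extension you pick, the usual pattern is torch.save on the model's state_dict and load_state_dict on a freshly built module; a minimal sketch (the checkpoint.pt file name and the tiny module are just examples): ```python
import torch
import torch.nn as nn

model = nn.Linear(2, 2)
# save only the parameters; the '.pt' extension is an arbitrary choice
torch.save(model.state_dict(), 'checkpoint.pt')

# later: rebuild the module and load the weights back
restored = nn.Linear(2, 2)
restored.load_state_dict(torch.load('checkpoint.pt'))
```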
Here is an open issue, as of this writing: Recommend a different file extension for models (.PTH is a special extension for Python) - issues\/14864 It's been suggested by @soumith to use: .pt for checkpointing models in pickle format .ptc for checkpointing models in pytorch compiled (for JIT)", "best_answers_score":0.8, "library_name":"pytorch", "question_url":"https:\/\/stackoverflow.com\/questions\/59095824\/what-is-the-difference-between-pt-pth-and-pwf-extentions-in-pytorch", "best_answers_votes":106, "question_length":448, "response_length":1031 }, { "question":"Pytorch: nn.Dropout vs. F.dropout There are two ways to perform dropout: torch.nn.Dropout torch.nn.functional.Dropout I ask: Is there a difference between them? When should I use one over the other? I don't see any performance difference when I switched them around.", "response":"The technical differences have already been shown in the other answer. However the main difference is that nn.Dropout is a torch Module itself which bears some convenience: A short example for illustration of some differences: ```py import torch import torch.nn as nn class Model1(nn.Module): # Model 1 using functional dropout def __init__(self, p=0.0): super().__init__() self.p = p def forward(self, inputs): return nn.functional.dropout(inputs, p=self.p, training=True) class Model2(nn.Module): # Model 2 using dropout module def __init__(self, p=0.0): super().__init__() self.drop_layer = nn.Dropout(p=p) def forward(self, inputs): return self.drop_layer(inputs) model1 = Model1(p=0.5) # functional dropout model2 = Model2(p=0.5) # dropout module # creating inputs inputs = torch.rand(10) # forwarding inputs in train mode print('Normal (train) model:') print('Model 1', model1(inputs)) print('Model 2', model2(inputs)) print() # switching to eval mode model1.eval() model2.eval() # forwarding inputs in evaluation mode print('Evaluation mode:') print('Model 1', model1(inputs)) print('Model 2', model2(inputs)) # show model summary 
print('Print summary:') print(model1) print(model2) ``` Output: ```py Normal (train) model: Model 1 tensor([ 1.5040, 0.0000, 0.0000, 0.8563, 0.0000, 0.0000, 1.5951, 0.0000, 0.0000, 0.0946]) Model 2 tensor([ 0.0000, 0.3713, 1.9303, 0.0000, 0.0000, 0.3574, 0.0000, 1.1273, 1.5818, 0.0946]) Evaluation mode: Model 1 tensor([ 0.0000, 0.3713, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000]) Model 2 tensor([ 0.7520, 0.1857, 0.9651, 0.4281, 0.7883, 0.1787, 0.7975, 0.5636, 0.7909, 0.0473]) Print summary: Model1() Model2( (drop_layer): Dropout(p=0.5) ) ``` So which should I use? Both are completely equivalent in terms of applying dropout and even though the differences in usage are not that big, there are some reasons to favour the nn.Dropout over nn.functional.dropout: Dropout is designed to be only applied during training, so when doing predictions or evaluation of the model you want dropout to be turned off. The dropout module nn.Dropout conveniently handles this and shuts dropout off as soon as your model enters evaluation mode, while the functional dropout does not care about the evaluation \/ prediction mode. Even though you can set functional dropout to training=False to turn it off, it is still not as convenient a solution as nn.Dropout. Also the drop rate is stored in the module, so you don't have to save it in an extra variable. In larger networks you might want to create different dropout layers with different drop rates - here nn.Dropout may increase readability and can also bring some convenience when using the layers multiple times. Finally, all modules which are assigned to your model are registered in your model. So your model class keeps track of them, which is why you can just turn off the dropout module by calling eval().
When using the functional dropout your model is not aware of it, thus it won't appear in any summary.", "best_answers_score":0.8, "library_name":"pytorch", "question_url":"https:\/\/stackoverflow.com\/questions\/53419474\/pytorch-nn-dropout-vs-f-dropout", "best_answers_votes":101, "question_length":266, "response_length":3020 }, { "question":"How can i process multi loss in pytorch? Such as this, I want to using some auxiliary loss to promoting my model performance. Which type code can implement it in pytorch? ``` #one loss1.backward() loss2.backward() loss3.backward() optimizer.step() #two loss1.backward() optimizer.step() loss2.backward() optimizer.step() loss3.backward() optimizer.step() #three loss = loss1+loss2+loss3 loss.backward() optimizer.step() ``` Thanks for your answer!", "response":"First and 3rd attempt are exactly the same and correct, while 2nd approach is completely wrong. In Pytorch, low layer gradients are Not \"overwritten\" by subsequent backward() calls, rather they are accumulated, or summed. This makes first and 3rd approach identical, though 1st approach might be preferable if you have low-memory GPU\/RAM (a batch size of 1024 with one backward() + step() call is same as having 8 batches of size 128 and 8 backward() calls, with one step() call in the end). To illustrate the idea, here is a simple example. 
We want to get our tensor x close to 40, 50 and 60 simultaneously (criterion here stands for some loss function; the gradients printed below correspond to a squared-error loss): ``` x = torch.tensor([1.0],requires_grad=True) loss1 = criterion(40,x) loss2 = criterion(50,x) loss3 = criterion(60,x) ``` Now the first approach: (we use tensor.grad to get the current gradient for our tensor x) ``` loss1.backward() loss2.backward() loss3.backward() print(x.grad) ``` This outputs: tensor([-294.]) (EDIT: put retain_graph=True in the first two backward calls for more complicated computational graphs) The third approach: ``` loss = loss1+loss2+loss3 loss.backward() print(x.grad) ``` Again the output is: tensor([-294.]) The 2nd approach is different because we don't call opt.zero_grad after calling the step() method. This means that in all 3 step calls the gradients of the first backward call are used. For example, if the 3 losses provide gradients 5,1,4 for the same weight, instead of having 10 (=5+1+4), your weight will now have 5*3+1*2+4*1=21 as gradient. For further reading: Link 1, Link 2", "best_answers_score":0.8, "library_name":"pytorch", "question_url":"https:\/\/stackoverflow.com\/questions\/53994625\/how-can-i-process-multi-loss-in-pytorch", "best_answers_votes":82, "question_length":447, "response_length":1492 }, { "question":"Where do I get a CPU-only version of PyTorch? I'm trying to get a basic app running with Flask + PyTorch, and host it on Heroku. However, I run into the issue that the maximum slug size is 500mb on the free version, and PyTorch itself is ~500mb. After some google searching, someone wrote about finding a cpu-only version of PyTorch, and using that, which is much smaller (how-and-why-i-built-an-ml-based-python-api-hosted-on-heroku-j74qbfwn1). However, I'm pretty lost as to how this is done, and the person didn't document this at all. Any advice is appreciated, thanks. EDIT: To be more specific about my problem, I tried installing torch by (as far as I understand) including a requirements.txt which listed torch as a dependency. Currently I have: torch==0.4.1.
However this doesn't work because of the size. My question is, do you know what I could write in the requirements file to get the cpu-only version of torch that is smaller, or alternatively, if the requirements.txt doesn't work for this, what I would do instead, to get the cpu version.", "response":"Per the Pytorch website, you can install pytorch-cpu with ``` conda install pytorch-cpu torchvision-cpu -c pytorch ``` You can see from the files on Anaconda cloud that the size varies between 26 and 56MB depending on the OS where you want to install it. You can get the wheel from http:\/\/download.pytorch.org\/whl\/cpu\/. The wheel is 87MB. You can set up the installation by putting the link to the wheel in the requirements.txt file. If you use Python 3.6 on Heroku: ``` http:\/\/download.pytorch.org\/whl\/cpu\/torch-0.4.1-cp36-cp36m-linux_x86_64.whl ``` otherwise, for Python 2.7: ``` http:\/\/download.pytorch.org\/whl\/cpu\/torch-0.4.1-cp27-cp27mu-linux_x86_64.whl ``` For example, if your requirements are pytorch-cpu, numpy and scipy and you're using Python 3.6, the requirements.txt would look like: ``` http:\/\/download.pytorch.org\/whl\/cpu\/torch-0.4.1-cp36-cp36m-linux_x86_64.whl numpy scipy ```", "best_answers_score":0.8, "library_name":"pytorch", "question_url":"https:\/\/stackoverflow.com\/questions\/51730880\/where-do-i-get-a-cpu-only-version-of-pytorch", "best_answers_votes":51, "question_length":1041, "response_length":891 }, { "question":"Pytorch Operation to detect NaNs Is there a Pytorch-internal procedure to detect NaNs in Tensors? Tensorflow has the tf.is_nan and the tf.check_numerics operations ... Does Pytorch have something similar, somewhere? I could not find something like this in the docs... I am looking specifically for a Pytorch internal routine, since I would like this to happen on the GPU as well as on the CPU.
This excludes numpy-based solutions (like np.isnan(sometensor.numpy()).any()) ...", "response":"You can always leverage the fact that nan != nan: ``` >>> import numpy as np >>> x = torch.tensor([1, 2, np.nan]) >>> x tensor([ 1., 2., nan]) >>> x != x tensor([ 0, 0, 1], dtype=torch.uint8) ``` With pytorch 0.4 there is also torch.isnan: ``` >>> torch.isnan(x) tensor([ 0, 0, 1], dtype=torch.uint8) ```", "best_answers_score":0.8, "library_name":"pytorch", "question_url":"https:\/\/stackoverflow.com\/questions\/48158017\/pytorch-operation-to-detect-nans", "best_answers_votes":90, "question_length":477, "response_length":276 }, { "question":"Differences in SciKit Learn, Keras, or Pytorch [closed] Are these libraries fairly interchangeable? Looking here, https:\/\/stackshare.io\/stackups\/keras-vs-pytorch-vs-scikit-learn, it seems the major difference is the underlying framework (at least for PyTorch).", "response":"Yes, there is a major difference. SciKit Learn is a general machine learning library, built on top of NumPy.
It features a lot of machine learning algorithms such as support vector machines, random forests, as well as a lot of utilities for general pre- and postprocessing of data. It is not a neural network framework. PyTorch is a deep learning framework, consisting of: a vectorized math library similar to NumPy, but with GPU support and a lot of neural network related operations (such as softmax or various kinds of activations); Autograd, an algorithm which can automatically calculate gradients of your functions, defined in terms of the basic operations; gradient-based optimization routines for large-scale optimization, dedicated to neural network optimization; and neural-network related utility functions. Keras is a higher-level deep learning framework, which abstracts many details away, making code simpler and more concise than in PyTorch or TensorFlow, at the cost of limited hackability. It abstracts away the computation backend, which can be TensorFlow, Theano or CNTK. It does not support a PyTorch backend, but that's not something unfathomable - you can consider it a simplified and streamlined subset of the above. In short, if you are going with \"classic\", non-neural algorithms, neither PyTorch nor Keras will be useful for you. If you're doing deep learning, scikit-learn may still be useful for its utility part; aside from it you will need the actual deep learning framework, where you can choose between Keras and PyTorch but you're unlikely to use both at the same time. This is very subjective, but in my view, if you're working on a novel algorithm, you're more likely to go with PyTorch (or TensorFlow or some other lower-level framework) for flexibility.
If you're adapting a known and tested algorithm to a new problem setting, you may want to go with Keras for its greater simplicity and lower entry level.", "best_answers_score":0.8, "library_name":"pytorch", "question_url":"https:\/\/stackoverflow.com\/questions\/54527439\/differences-in-scikit-learn-keras-or-pytorch", "best_answers_votes":159, "question_length":1134, "response_length":1936 }, { "question":"How to get a uniform distribution in a range [r1,r2] in PyTorch? I want to get a 2-D torch.Tensor with size [a,b] filled with values from a uniform distribution (in range [r1,r2]) in PyTorch.", "response":"If U is a random variable uniformly distributed on [0, 1], then (r1 - r2) * U + r2 is uniformly distributed on [r1, r2]. Thus, you just need: ``` (r1 - r2) * torch.rand(a, b) + r2 ``` Alternatively, you can simply use: ``` torch.FloatTensor(a, b).uniform_(r1, r2) ``` To fully explain this formulation, let's look at some concrete numbers: ```py r1 = 2 # Create uniform random numbers in half-open interval [2.0, 5.0) r2 = 5 a = 1 # Create tensor shape 1 x 7 b = 7 ``` We can break down the expression (r1 - r2) * torch.rand(a, b) + r2 as follows: torch.rand(a, b) produces an a x b (1x7) tensor with numbers uniformly distributed in the range [0.0, 1.0). ```py x = torch.rand(a, b) print(x) # tensor([[0.5671, 0.9814, 0.8324, 0.0241, 0.2072, 0.6192, 0.4704]]) ``` (r1 - r2) * torch.rand(a, b) produces numbers distributed in the uniform range [0.0, -3.0) ```py print((r1 - r2) * x) tensor([[-1.7014, -2.9441, -2.4972, -0.0722, -0.6216, -1.8577, -1.4112]]) ``` (r1 - r2) * torch.rand(a, b) + r2 produces numbers in the uniform range [5.0, 2.0) ```py print((r1 - r2) * x + r2) tensor([[3.2986, 2.0559, 2.5028, 4.9278, 4.3784, 3.1423, 3.5888]]) ``` Now, let's break down the answer suggested by @Jonasson: (r2 - r1) * torch.rand(a, b) + r1 Again, torch.rand(a, b) produces (1x7) numbers uniformly distributed in the range [0.0, 1.0). 
```py x = torch.rand(a, b) print(x) # tensor([[0.5671, 0.9814, 0.8324, 0.0241, 0.2072, 0.6192, 0.4704]]) ``` (r2 - r1) * torch.rand(a, b) produces numbers uniformly distributed in the range [0.0, 3.0). ```py print((r2 - r1) * x) # tensor([[1.7014, 2.9441, 2.4972, 0.0722, 0.6216, 1.8577, 1.4112]]) ``` (r2 - r1) * torch.rand(a, b) + r1 produces numbers uniformly distributed in the range [2.0, 5.0) ```py print((r2 - r1) * x + r1) tensor([[3.7014, 4.9441, 4.4972, 2.0722, 2.6216, 3.8577, 3.4112]]) ``` In summary, (r1 - r2) * torch.rand(a, b) + r2 produces numbers in the range [r2, r1), while (r2 - r1) * torch.rand(a, b) + r1 produces numbers in the range [r1, r2).", "best_answers_score":0.8, "library_name":"pytorch", "question_url":"https:\/\/stackoverflow.com\/questions\/44328530\/how-to-get-a-uniform-distribution-in-a-range-r1-r2-in-pytorch", "best_answers_votes":97, "question_length":191, "response_length":1999 }, { "question":"What are Torch Scripts in PyTorch? I've just found that PyTorch docs expose something that is called Torch Scripts. However, I do not know: When they should be used? How they should be used? What are their benefits?", "response":"Torch Script is one of two modes of using the PyTorch just in time compiler, the other being tracing. The benefits are explained in the linked documentation: Torch Script is a way to create serializable and optimizable models from PyTorch code. Any code written in Torch Script can be saved from your Python process and loaded in a process where there is no Python dependency. The above quote is actually true both of scripting and tracing. So You gain the ability to serialize your models and later run them outside of Python, via LibTorch, a C++ native module. This allows you to embed your DL models in various production environments like mobile or IoT. There is an official guide on exporting models to C++ here. 
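As a minimal sketch of the scripting path (the Gate module and the gate.pt file name are made-up examples), note that the data-dependent if statement survives compilation, which plain tracing could not capture: ```python
import torch
import torch.nn as nn

class Gate(nn.Module):
    def forward(self, x):
        # data-dependent control flow: scripting preserves this branch,
        # whereas tracing would bake in whichever path the example input took
        if x.sum() > 0:
            return x * 2
        return x - 1

scripted = torch.jit.script(Gate())
# the compiled module can be serialized and later loaded without Python
torch.jit.save(scripted, 'gate.pt')
loaded = torch.jit.load('gate.pt')
```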
PyTorch can compile your jit-able modules rather than running them as an interpreter, allowing for various optimizations and improving performance, both during training and inference. This is equally helpful for development and production. Regarding Torch Script specifically, in comparison to tracing, it is a subset of Python, specified in detail here, which, when adhered to, can be compiled by PyTorch. It is more laborious to write Torch Script modules instead of tracing regular nn.Module subclasses, but it allows for some extra features over tracing, most notably flow control like if statements or for loops. Tracing treats such flow control as \"constant\" - in other words, if you have an if model.training clause in your module and trace it with training=True, it will always behave this way, even if you change the training variable to False later on. To answer your first question, you need to use jit if you want to deploy your models outside Python and otherwise you should use jit if you want to gain some execution performance at the price of extra development effort (as not every model can be straightforwardly made compliant with jit). In particular, you should use Torch Script if your code cannot be jited with tracing alone because it relies on some features such as if statements. For maximum ergonomy, you probably want to mix the two on a case-by-case basis. Finally, for how they should be used, please refer to all the documentation and tutorial links.", "best_answers_score":0.8, "library_name":"pytorch", "question_url":"https:\/\/stackoverflow.com\/questions\/53900396\/what-are-torch-scripts-in-pytorch", "best_answers_votes":80, "question_length":215, "response_length":2197 }, { "question":"pytorch how to set .requires_grad False I want to set some of my model frozen. Following the official docs: ```python with torch.no_grad(): linear = nn.Linear(1, 1) linear.eval() print(linear.weight.requires_grad) ``` But it prints True instead of False. 
If I want to set the model in eval mode, what should I do?", "response":"requires_grad=False If you want to freeze part of your model and train the rest, you can set requires_grad of the parameters you want to freeze to False. For example, if you only want to keep the convolutional part of VGG16 fixed: ``` model = torchvision.models.vgg16(pretrained=True) for param in model.features.parameters(): param.requires_grad = False ``` By switching the requires_grad flags to False, no intermediate buffers will be saved until the computation gets to some point where one of the inputs of the operation requires the gradient. torch.no_grad() Using the context manager torch.no_grad is a different way to achieve that goal: in the no_grad context, all the results of the computations will have requires_grad=False, even if the inputs have requires_grad=True. Notice that you won't be able to backpropagate the gradient to layers before the no_grad. For example: ``` x = torch.randn(2, 2)
x.requires_grad = True
lin0 = nn.Linear(2, 2)
lin1 = nn.Linear(2, 2)
lin2 = nn.Linear(2, 2)
x1 = lin0(x)
with torch.no_grad():
    x2 = lin1(x1)
x3 = lin2(x2)
x3.sum().backward()
print(lin0.weight.grad, lin1.weight.grad, lin2.weight.grad) ``` outputs: ``` (None, None, tensor([[-1.4481, -1.1789], [-1.4481, -1.1789]])) ``` Here lin1.weight.requires_grad was True, but the gradient wasn't computed because the operation was done in the no_grad context. model.eval() If your goal is not to finetune, but to set your model in inference mode, the most convenient way is to use the torch.no_grad context manager.
In this case you also have to set your model to evaluation mode, this is achieved by calling eval() on the nn.Module, for example: ``` model = torchvision.models.vgg16(pretrained=True) model.eval() ``` This operation sets the attribute self.training of the layers to False, in practice this will change the behavior of operations like Dropout or BatchNorm that must behave differently at training and test time.", "best_answers_score":0.8, "library_name":"pytorch", "question_url":"https:\/\/stackoverflow.com\/questions\/51748138\/pytorch-how-to-set-requires-grad-false", "best_answers_votes":111, "question_length":313, "response_length":1926 }, { "question":"Bool value of Tensor with more than one value is ambiguous in Pytorch I want to create a model in pytorch, but I can't compute the loss. It's always return Bool value of Tensor with more than one value is ambiguous Actually, I run example code, it work. ``` loss = CrossEntropyLoss() input = torch.randn(8, 5) input target = torch.empty(8,dtype=torch.long).random_(5) target output = loss(input, target) ``` Here is my code, ``` ################################################################################ ## ## import torch from torch.nn import Conv2d, MaxPool2d, Linear, CrossEntropyLoss, MultiLabelSoftMarginLoss from torch.nn.functional import relu, conv2d, max_pool2d, linear, softmax from torch.optim import adadelta ## ## ## Train Train = {} Train[\"Image\"] = torch.rand(2000, 3, 76, 76) Train[\"Variable\"] = torch.rand(2000, 6) Train[\"Label\"] = torch.empty(2000, dtype=torch.long).random_(2) ## ## ## Valid Valid = {} Valid[\"Image\"] = torch.rand(150, 3, 76, 76) Valid[\"Variable\"] = torch.rand(150, 6) Valid[\"Label\"] = torch.empty(150, dtype=torch.long).random_(2) ################################################################################ ## ## ## Model ImageTerm = Train[\"Image\"] VariableTerm = Train[\"Variable\"] Pip = Conv2d(in_channels=3, out_channels=32, kernel_size=(3,3), stride=1, 
padding=0)(ImageTerm) Pip = MaxPool2d(kernel_size=(2,2), stride=None, padding=0)(Pip) Pip = Conv2d(in_channels=32, out_channels=64, kernel_size=(3,3), stride=1, padding=0)(Pip) Pip = MaxPool2d(kernel_size=(2,2), stride=None, padding=0)(Pip) Pip = Pip.view(2000, -1) Pip = torch.cat([Pip, VariableTerm], 1) Pip = Linear(in_features=18502, out_features=1000 , bias=True)(Pip) Pip = Linear(in_features=1000, out_features=2 , bias=True)(Pip) ## ## ## Loss Loss = CrossEntropyLoss(Pip, Train[\"Label\"]) ``` The error is on Loss = CrossEntropyLoss(Pip, Train[\"Label\"]), thanks.", "response":"In your minimal example, you create an object \"loss\" of the class \"CrossEntropyLoss\". This object is able to compute your loss as ``` loss(input, target) ``` However, in your actual code, you try to create the object \"Loss\", while passing Pip and the labels to the \"CrossEntropyLoss\" class constructor. Instead, try the following: ``` loss = CrossEntropyLoss() loss(Pip, Train[\"Label\"]) ``` Edit (explanation of the error message): The error Message Bool value of Tensor with more than one value is ambiguous appears when you try to cast a tensor into a bool value. This happens most commonly when passing the tensor to an if condition, e.g. ``` input = torch.randn(8, 5) if input: some_code() ``` The second argument of the CrossEntropyLoss class constructor expects a boolean. Thus, in the line ``` Loss = CrossEntropyLoss(Pip, Train[\"Label\"]) ``` the constructor will at some point try to use the passed tensor Train[\"Label\"] as a boolean, which throws the mentioned error message.", "best_answers_score":0.8, "library_name":"pytorch", "question_url":"https:\/\/stackoverflow.com\/questions\/52946920\/bool-value-of-tensor-with-more-than-one-value-is-ambiguous-in-pytorch", "best_answers_votes":92, "question_length":1874, "response_length":984 }, { "question":"PyTorch: Can't call numpy() on Variable that requires grad. 
Use var.detach().numpy() instead Calling tensor.numpy() gives the error: RuntimeError: Can't call numpy() on Variable that requires grad. Use var.detach().numpy() instead. tensor.cpu().detach().numpy() gives the same error.", "response":"Error reproduced ``` import torch tensor1 = torch.tensor([1.0,2.0],requires_grad=True) print(tensor1) print(type(tensor1)) tensor1 = tensor1.numpy() print(tensor1) print(type(tensor1)) ``` which leads to the exact same error for the line tensor1 = tensor1.numpy(): ``` tensor([1., 2.], requires_grad=True) Traceback (most recent call last): File \"\/home\/badScript.py\", line 8, in tensor1 = tensor1.numpy() RuntimeError: Can't call numpy() on Variable that requires grad. Use var.detach().numpy() instead. Process finished with exit code 1 ``` Generic solution this was suggested to you in your error message, just replace var with your variable name ``` import torch tensor1 = torch.tensor([1.0,2.0],requires_grad=True) print(tensor1) print(type(tensor1)) tensor1 = tensor1.detach().numpy() print(tensor1) print(type(tensor1)) ``` which returns as expected ``` tensor([1., 2.], requires_grad=True) [1. 2.] Process finished with exit code 0 ``` Some explanation You need to convert your tensor to another tensor that isn't requiring a gradient in addition to its actual value definition. This other tensor can be converted to a numpy array. Cf. this discuss.pytorch post. (I think, more precisely, that one needs to do that in order to get the actual tensor out of its pytorch Variable wrapper, cf. 
this other discuss.pytorch post).", "best_answers_score":0.8, "library_name":"pytorch", "question_url":"https:\/\/stackoverflow.com\/questions\/55466298\/pytorch-cant-call-numpy-on-variable-that-requires-grad-use-var-detach-num", "best_answers_votes":57, "question_length":283, "response_length":1334 }, { "question":"PyTorch - RuntimeError: Trying to backward through the graph a second time, but the buffers have already been freed I keep running into this error: RuntimeError: Trying to backward through the graph a second time, but the buffers have already been freed. Specify retain_graph=True when calling backward the first time. I had searched in Pytorch forum, but still can\u2019t find out what I have done wrong in my custom loss function. My model is nn.GRU, and here is my custom loss function: ``` def _loss(outputs, session, items): # `items` is a dict() contains embedding of all items def f(output, target): pos = torch.from_numpy(np.array([items[target[\"click\"]]])).float() neg = torch.from_numpy(np.array([items[idx] for idx in target[\"suggest_list\"] if idx != target[\"click\"]])).float() if USE_CUDA: pos, neg = pos.cuda(), neg.cuda() pos, neg = Variable(pos), Variable(neg) pos = F.cosine_similarity(output, pos) if neg.size()[0] == 0: return torch.mean(F.logsigmoid(pos)) neg = F.cosine_similarity(output.expand_as(neg), neg) return torch.mean(F.logsigmoid(pos - neg)) loss = map(f, outputs, session) return -torch.mean(torch.cat(loss)) ``` Training code: ``` # zero the parameter gradients model.zero_grad() # forward + backward + optimize outputs, hidden = model(inputs, hidden) loss = _loss(outputs, session, items) acc_loss += loss.data[0] loss.backward() # Add parameters' gradients to their values, multiplied by learning rate for p in model.parameters(): p.data.add_(-learning_rate, p.grad.data) ```", "response":"The problem is from my training loop: it doesn\u2019t detach or repackage the hidden state in between batches? 
If so, then loss.backward() is trying to back-propagate all the way through to the start of time, which works for the first batch but not for the second because the graph for the first batch has been discarded. There are two possible solutions: (1) detach\/repackage the hidden state in between batches - there are (at least) three ways to do this, and I chose hidden.detach_() (or equivalently hidden = hidden.detach()); (2) replace loss.backward() with loss.backward(retain_graph=True), but know that each successive batch will take more time than the previous one because it will have to back-propagate all the way through to the start of the first batch. Example", "best_answers_score":0.8, "library_name":"pytorch", "question_url":"https:\/\/stackoverflow.com\/questions\/48274929\/pytorch-runtimeerror-trying-to-backward-through-the-graph-a-second-time-but", "best_answers_votes":64, "question_length":1504, "response_length":1496 }, { "question":"Is .data still useful in pytorch? I'm new to pytorch. I have read much pytorch code which heavily uses tensor's .data member. But when I search for .data in the official documentation and on Google, I find little. I guess .data contains the data in the tensor, but I don't know when we need it and when not?", "response":".data was an attribute of Variable (the object representing a Tensor with history tracking, e.g. for automatic update), not Tensor. Actually, .data was giving access to the Variable's underlying Tensor. However, since PyTorch version 0.4.0, Variable and Tensor have been merged (into an updated Tensor structure), so .data disappeared along with the previous Variable object (well, Variable is still there for backward-compatibility, but is deprecated). Paragraph from Release Notes for version 0.4.0 (I recommend reading the whole section about Variable\/Tensor updates): What about .data? .data was the primary way to get the underlying Tensor from a Variable. After this merge, calling y = x.data still has similar semantics.
So y will be a Tensor that shares the same data with x, is unrelated to the computation history of x, and has requires_grad=False. However, .data can be unsafe in some cases. Any changes to x.data wouldn't be tracked by autograd, and the computed gradients would be incorrect if x is needed in a backward pass. A safer alternative is to use x.detach(), which also returns a Tensor that shares data with requires_grad=False, but will have its in-place changes reported by autograd if x is needed in backward.", "best_answers_score":0.8, "library_name":"pytorch", "question_url":"https:\/\/stackoverflow.com\/questions\/51743214\/is-data-still-useful-in-pytorch", "best_answers_votes":70, "question_length":286, "response_length":1224 }, { "question":"RuntimeError: expected scalar type Long but found Float I can't get the dtypes to match: either the loss wants long, or the model wants float if I change my tensors to long. The shapes of the tensors are 42000, 1, 28, 28 and 42000. I'm not sure where I can change what dtypes are required for the model or loss. I'm not sure if a dataloader is required; using Variable didn't work either.
``` dataloaders_train = torch.utils.data.DataLoader(Xt_train, batch_size=64) dataloaders_test = torch.utils.data.DataLoader(Yt_train, batch_size=64) class Network(nn.Module): def __init__(self): super().__init__() self.hidden = nn.Linear(42000, 256) self.output = nn.Linear(256, 10) self.sigmoid = nn.Sigmoid() self.softmax = nn.Softmax(dim=1) def forward(self, x): x = self.hidden(x) x = self.sigmoid(x) x = self.output(x) x = self.softmax(x) return x model = Network() input_size = 784 hidden_sizes = [28, 64] output_size = 10 model = nn.Sequential(nn.Linear(input_size, hidden_sizes[0]), nn.ReLU(), nn.Linear(hidden_sizes[0], hidden_sizes[1]), nn.ReLU(), nn.Linear(hidden_sizes[1], output_size), nn.Softmax(dim=1)) print(model) criterion = nn.NLLLoss() optimizer = optim.SGD(model.parameters(), lr=0.003) epochs = 5 for e in range(epochs): running_loss = 0 for images, labels in zip(dataloaders_train, dataloaders_test): images = images.view(images.shape[0], -1) #images, labels = Variable(images), Variable(labels) print(images.dtype) print(labels.dtype) optimizer.zero_grad() output = model(images) loss = criterion(output, labels) loss.backward() optimizer.step() running_loss += loss.item() else: print(f\"Training loss: {running_loss}\") ``` Which gives ``` RuntimeError Traceback (most recent call last) in 11 12 output = model(images) ---> 13 loss = criterion(output, labels) 14 loss.backward() 15 optimizer.step() \/opt\/conda\/lib\/python3.6\/site-packages\/torch\/nn\/modules\/module.py in __call__(self, *input, **kwargs) 530 result = self._slow_forward(*input, **kwargs) 531 else: --> 532 result = self.forward(*input, **kwargs) 533 for hook in self._forward_hooks.values(): 534 hook_result = hook(self, input, result) \/opt\/conda\/lib\/python3.6\/site-packages\/torch\/nn\/modules\/loss.py in forward(self, input, target) 202 203 def forward(self, input, target): --> 204 return F.nll_loss(input, target, weight=self.weight, ignore_index=self.ignore_index, reduction=self.reduction) 
205 206 \/opt\/conda\/lib\/python3.6\/site-packages\/torch\/nn\/functional.py in nll_loss(input, target, weight, size_average, ignore_index, reduce, reduction) 1836 .format(input.size(0), target.size(0))) 1837 if dim == 2: -> 1838 ret = torch._C._nn.nll_loss(input, target, weight, _Reduction.get_enum(reduction), ignore_index) 1839 elif dim == 4: 1840 ret = torch._C._nn.nll_loss2d(input, target, weight, _Reduction.get_enum(reduction), ignore_index) RuntimeError: expected scalar type Long but found Float ```", "response":"LongTensor is synonymous with integer. PyTorch won't accept a FloatTensor as categorical target, so it's telling you to cast your tensor to LongTensor. This is how you should change your target dtype: ``` Yt_train = Yt_train.type(torch.LongTensor) ``` This is very well documented on the PyTorch website, you definitely won't regret spending a minute or two reading this page. PyTorch essentially defines nine CPU tensor types and nine GPU tensor types: ``` \u2554\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2566\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2566\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2566\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2557 \u2551 Data type \u2551 dtype \u2551 CPU tensor \u2551 GPU tensor \u2551 
\u2560\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u256c\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u256c\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u256c\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2563 \u2551 32-bit floating point \u2551 torch.float32 or torch.float \u2551 torch.FloatTensor \u2551 torch.cuda.FloatTensor \u2551 \u2551 64-bit floating point \u2551 torch.float64 or torch.double \u2551 torch.DoubleTensor \u2551 torch.cuda.DoubleTensor \u2551 \u2551 16-bit floating point \u2551 torch.float16 or torch.half \u2551 torch.HalfTensor \u2551 torch.cuda.HalfTensor \u2551 \u2551 8-bit integer (unsigned) \u2551 torch.uint8 \u2551 torch.ByteTensor \u2551 torch.cuda.ByteTensor \u2551 \u2551 8-bit integer (signed) \u2551 torch.int8 \u2551 torch.CharTensor \u2551 torch.cuda.CharTensor \u2551 \u2551 16-bit integer (signed) \u2551 torch.int16 or torch.short \u2551 torch.ShortTensor \u2551 torch.cuda.ShortTensor \u2551 \u2551 32-bit integer (signed) \u2551 torch.int32 or torch.int \u2551 torch.IntTensor \u2551 torch.cuda.IntTensor \u2551 \u2551 64-bit integer (signed) \u2551 torch.int64 or torch.long \u2551 torch.LongTensor \u2551 torch.cuda.LongTensor \u2551 \u2551 Boolean \u2551 torch.bool \u2551 torch.BoolTensor \u2551 torch.cuda.BoolTensor \u2551 
\u255a\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2569\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2569\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2569\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u255d ```", "best_answers_score":0.8, "library_name":"pytorch", "question_url":"https:\/\/stackoverflow.com\/questions\/60440292\/runtimeerror-expected-scalar-type-long-but-found-float", "best_answers_votes":103, "question_length":2870, "response_length":1673 }, { "question":"Understanding PyTorch einsum I'm familiar with how einsum works in NumPy. A similar functionality is also offered by PyTorch: torch.einsum(). What are the similarities and differences, either in terms of functionality or performance? The information available at PyTorch documentation is rather scanty and doesn't provide any insights regarding this.", "response":"Since the description of einsum is skimpy in torch documentation, I decided to write this post to document, compare and contrast how torch.einsum() behaves when compared to numpy.einsum(). Differences: NumPy allows both small case and capitalized letters [a-zA-Z] for the \"subscript string\" whereas PyTorch allows only the small case letters [a-z]. NumPy accepts nd-arrays, plain Python lists (or tuples), list of lists (or tuple of tuples, list of tuples, tuple of lists) or even PyTorch tensors as operands (i.e. inputs). This is because the operands have only to be array_like and not strictly NumPy nd-arrays. On the contrary, PyTorch expects the operands (i.e. inputs) strictly to be PyTorch tensors. 
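A small sketch of that operand-type difference (illustrative values; the exact exception class can vary across versions, so two are caught): ```python
import numpy as np
import torch

a = [[1, 2], [3, 4]]                  # a plain Python list of lists

print(np.einsum('ij -> ji', a))       # NumPy happily transposes the list

t = torch.tensor(a)
print(torch.einsum('ij -> ji', t))    # PyTorch needs a real tensor

try:
    torch.einsum('ij -> ji', a)       # the same list is rejected
except (TypeError, RuntimeError) as err:
    print('rejected:', type(err).__name__)
```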
It will throw a TypeError if you pass either plain Python lists\/tuples (or its combinations) or NumPy nd-arrays. NumPy supports lot of keyword arguments (for e.g. optimize) in addition to nd-arrays while PyTorch doesn't offer such flexibility yet. Here are the implementations of some examples both in PyTorch and NumPy: ``` # input tensors to work with In [16]: vec Out[16]: tensor([0, 1, 2, 3]) In [17]: aten Out[17]: tensor([[11, 12, 13, 14], [21, 22, 23, 24], [31, 32, 33, 34], [41, 42, 43, 44]]) In [18]: bten Out[18]: tensor([[1, 1, 1, 1], [2, 2, 2, 2], [3, 3, 3, 3], [4, 4, 4, 4]]) ``` 1) Matrix multiplication PyTorch: torch.matmul(aten, bten) ; aten.mm(bten) NumPy : np.einsum(\"ij, jk -> ik\", arr1, arr2) ``` In [19]: torch.einsum('ij, jk -> ik', aten, bten) Out[19]: tensor([[130, 130, 130, 130], [230, 230, 230, 230], [330, 330, 330, 330], [430, 430, 430, 430]]) ``` 2) Extract elements along the main-diagonal PyTorch: torch.diag(aten) NumPy : np.einsum(\"ii -> i\", arr) ``` In [28]: torch.einsum('ii -> i', aten) Out[28]: tensor([11, 22, 33, 44]) ``` 3) Hadamard product (i.e. element-wise product of two tensors) PyTorch: aten * bten NumPy : np.einsum(\"ij, ij -> ij\", arr1, arr2) ``` In [34]: torch.einsum('ij, ij -> ij', aten, bten) Out[34]: tensor([[ 11, 12, 13, 14], [ 42, 44, 46, 48], [ 93, 96, 99, 102], [164, 168, 172, 176]]) ``` 4) Element-wise squaring PyTorch: aten ** 2 NumPy : np.einsum(\"ij, ij -> ij\", arr, arr) ``` In [37]: torch.einsum('ij, ij -> ij', aten, aten) Out[37]: tensor([[ 121, 144, 169, 196], [ 441, 484, 529, 576], [ 961, 1024, 1089, 1156], [1681, 1764, 1849, 1936]]) ``` General: Element-wise nth power can be implemented by repeating the subscript string and tensor n times. 
For e.g., computing element-wise 4th power of a tensor can be done using: ``` # NumPy: np.einsum('ij, ij, ij, ij -> ij', arr, arr, arr, arr) In [38]: torch.einsum('ij, ij, ij, ij -> ij', aten, aten, aten, aten) Out[38]: tensor([[ 14641, 20736, 28561, 38416], [ 194481, 234256, 279841, 331776], [ 923521, 1048576, 1185921, 1336336], [2825761, 3111696, 3418801, 3748096]]) ``` 5) Trace (i.e. sum of main-diagonal elements) PyTorch: torch.trace(aten) NumPy einsum: np.einsum(\"ii -> \", arr) ``` In [44]: torch.einsum('ii -> ', aten) Out[44]: tensor(110) ``` 6) Matrix transpose PyTorch: torch.transpose(aten, 1, 0) NumPy einsum: np.einsum(\"ij -> ji\", arr) ``` In [58]: torch.einsum('ij -> ji', aten) Out[58]: tensor([[11, 21, 31, 41], [12, 22, 32, 42], [13, 23, 33, 43], [14, 24, 34, 44]]) ``` 7) Outer Product (of vectors) PyTorch: torch.ger(vec, vec) NumPy einsum: np.einsum(\"i, j -> ij\", vec, vec) ``` In [73]: torch.einsum('i, j -> ij', vec, vec) Out[73]: tensor([[0, 0, 0, 0], [0, 1, 2, 3], [0, 2, 4, 6], [0, 3, 6, 9]]) ``` 8) Inner Product (of vectors) PyTorch: torch.dot(vec1, vec2) NumPy einsum: np.einsum(\"i, i -> \", vec1, vec2) ``` In [76]: torch.einsum('i, i -> ', vec, vec) Out[76]: tensor(14) ``` 9) Sum along axis 0 PyTorch: torch.sum(aten, 0) NumPy einsum: np.einsum(\"ij -> j\", arr) ``` In [85]: torch.einsum('ij -> j', aten) Out[85]: tensor([104, 108, 112, 116]) ``` 10) Sum along axis 1 PyTorch: torch.sum(aten, 1) NumPy einsum: np.einsum(\"ij -> i\", arr) ``` In [86]: torch.einsum('ij -> i', aten) Out[86]: tensor([ 50, 90, 130, 170]) ``` 11) Batch Matrix Multiplication PyTorch: torch.bmm(batch_tensor_1, batch_tensor_2) NumPy : np.einsum(\"bij, bjk -> bik\", batch_tensor_1, batch_tensor_2) ``` # input batch tensors to work with In [13]: batch_tensor_1 = torch.arange(2 * 4 * 3).reshape(2, 4, 3) In [14]: batch_tensor_2 = torch.arange(2 * 3 * 4).reshape(2, 3, 4) In [15]: torch.bmm(batch_tensor_1, batch_tensor_2) Out[15]: tensor([[[ 20, 23, 26, 29], [ 56, 68, 80, 92], [ 
92, 113, 134, 155], [ 128, 158, 188, 218]], [[ 632, 671, 710, 749], [ 776, 824, 872, 920], [ 920, 977, 1034, 1091], [1064, 1130, 1196, 1262]]]) # sanity check with the shapes In [16]: torch.bmm(batch_tensor_1, batch_tensor_2).shape Out[16]: torch.Size([2, 4, 4]) # batch matrix multiply using einsum In [17]: torch.einsum(\"bij, bjk -> bik\", batch_tensor_1, batch_tensor_2) Out[17]: tensor([[[ 20, 23, 26, 29], [ 56, 68, 80, 92], [ 92, 113, 134, 155], [ 128, 158, 188, 218]], [[ 632, 671, 710, 749], [ 776, 824, 872, 920], [ 920, 977, 1034, 1091], [1064, 1130, 1196, 1262]]]) # sanity check with the shapes In [18]: torch.einsum(\"bij, bjk -> bik\", batch_tensor_1, batch_tensor_2).shape ``` 12) Sum along axis 2 PyTorch: torch.sum(batch_ten, 2) NumPy einsum: np.einsum(\"ijk -> ij\", arr3D) ``` In [99]: torch.einsum(\"ijk -> ij\", batch_ten) Out[99]: tensor([[ 50, 90, 130, 170], [ 4, 8, 12, 16]]) ``` 13) Sum all the elements in an nD tensor PyTorch: torch.sum(batch_ten) NumPy einsum: np.einsum(\"ijk -> \", arr3D) ``` In [101]: torch.einsum(\"ijk -> \", batch_ten) Out[101]: tensor(480) ``` 14) Sum over multiple axes (i.e. marginalization) PyTorch: torch.sum(arr, dim=(dim0, dim1, dim2, dim3, dim4, dim6, dim7)) NumPy: np.einsum(\"ijklmnop -> n\", nDarr) ``` # 8D tensor In [103]: nDten = torch.randn((3,5,4,6,8,2,7,9)) In [104]: nDten.shape Out[104]: torch.Size([3, 5, 4, 6, 8, 2, 7, 9]) # marginalize out dimension 5 (i.e. \"n\" here) In [111]: esum = torch.einsum(\"ijklmnop -> n\", nDten) In [112]: esum Out[112]: tensor([ 98.6921, -206.0575]) # marginalize out axis 5 (i.e. sum over rest of the axes) In [113]: tsum = torch.sum(nDten, dim=(0, 1, 2, 3, 4, 6, 7)) In [115]: torch.allclose(tsum, esum) Out[115]: True ``` 15) Double Dot Products \/ Frobenius inner product (same as: torch.sum(hadamard-product) cf. 
3) PyTorch: torch.sum(aten * bten) NumPy : np.einsum(\"ij, ij -> \", arr1, arr2) ``` In [120]: torch.einsum(\"ij, ij -> \", aten, bten) Out[120]: tensor(1300) ```", "best_answers_score":0.8, "library_name":"pytorch", "question_url":"https:\/\/stackoverflow.com\/questions\/55894693\/understanding-pytorch-einsum", "best_answers_votes":114, "question_length":350, "response_length":6370 }, { "question":"PyTorch \/ Gensim - How do I load pre-trained word embeddings? I want to load a pre-trained word2vec embedding with gensim into a PyTorch embedding layer. How do I get the embedding weights loaded by gensim into the PyTorch embedding layer?", "response":"I just wanted to report my findings about loading a gensim embedding with PyTorch. Solution for PyTorch 0.4.0 and newer: From v0.4.0 there is a new function from_pretrained() which makes loading an embedding very comfortable. Here is an example from the documentation. ``` import torch import torch.nn as nn # FloatTensor containing pretrained weights weight = torch.FloatTensor([[1, 2.3, 3], [4, 5.1, 6.3]]) embedding = nn.Embedding.from_pretrained(weight) # Get embeddings for index 1 input = torch.LongTensor([1]) embedding(input) ``` The weights from gensim can easily be obtained by: ``` import gensim model = gensim.models.KeyedVectors.load_word2vec_format('path\/to\/file') weights = torch.FloatTensor(model.vectors) # formerly syn0, which is soon deprecated ``` As noted by @Guglie: in newer gensim versions the weights can be obtained by model.wv: ``` weights = model.wv ``` Solution for PyTorch version 0.3.1 and older: I'm using version 0.3.1 and from_pretrained() isn't available in this version. Therefore I created my own from_pretrained so I can also use it with 0.3.1. 
Code for from_pretrained for PyTorch versions 0.3.1 or lower: ``` def from_pretrained(embeddings, freeze=True): assert embeddings.dim() == 2, \\ 'Embeddings parameter is expected to be 2-dimensional' rows, cols = embeddings.shape embedding = torch.nn.Embedding(num_embeddings=rows, embedding_dim=cols) embedding.weight = torch.nn.Parameter(embeddings) embedding.weight.requires_grad = not freeze return embedding ``` The embedding can be loaded then just like this: ``` embedding = from_pretrained(weights) ``` I hope this is helpful for someone.", "best_answers_score":0.8, "library_name":"pytorch", "question_url":"https:\/\/stackoverflow.com\/questions\/49710537\/pytorch-gensim-how-do-i-load-pre-trained-word-embeddings", "best_answers_votes":85, "question_length":239, "response_length":1628 }, { "question":"reshaping a tensor with padding in pytorch How do I reshape a tensor with dimensions (30, 35, 49) to (30, 35, 512) by padding it?", "response":"While @nemo's solution works fine, there is a pytorch internal routine, torch.nn.functional.pad, that does the same - and which has a couple of properties that a torch.ones(*sizes)*pad_value solution does not (namely other forms of padding, like reflection padding or replicate padding ... it also checks some gradient-related properties): ```py import torch.nn.functional as F source = torch.rand((5,10)) # now we expand to size (7, 11) by appending a row of 0s at pos 0 and pos 6, # and a column of 0s at pos 10 result = F.pad(input=source, pad=(0, 1, 1, 1), mode='constant', value=0) ``` The semantics of the arguments are: input: the source tensor, pad: a list of length 2 * len(source.shape) of the form (begin last axis, end last axis, begin 2nd to last axis, end 2nd to last axis, begin 3rd to last axis, etc.) that states how many dimensions should be added to the beginning and end of each axis, mode: 'constant', 'reflect' or 'replicate'. 
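Applying those pad semantics to the exact shapes in the question is then a one-liner: the last axis must grow from 49 to 512, i.e. 512 - 49 = 463 zeros appended at its end (a sketch; the padding could also be split between the two ends): ```python
import torch
import torch.nn.functional as F

x = torch.rand(30, 35, 49)
# pad only the end of the last axis: (begin last axis, end last axis)
padded = F.pad(x, pad=(0, 463), mode='constant', value=0)

print(padded.shape)                      # torch.Size([30, 35, 512])
print(torch.equal(padded[..., :49], x))  # original values preserved -> True
```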
Default: 'constant' for the different kinds of padding value for constant padding.", "best_answers_score":0.8, "library_name":"pytorch", "question_url":"https:\/\/stackoverflow.com\/questions\/48686945\/reshaping-a-tensor-with-padding-in-pytorch", "best_answers_votes":63, "question_length":129, "response_length":1031 }, { "question":"How to remove the last FC layer from a ResNet model in PyTorch? I am using a ResNet152 model from PyTorch. I'd like to strip off the last FC layer from the model. Here's my code: ``` from torchvision import datasets, transforms, models model = models.resnet152(pretrained=True) print(model) ``` When I print the model, the last few lines look like this: ``` (2): Bottleneck( (conv1): Conv2d(2048, 512, kernel_size=(1, 1), stride=(1, 1), bias=False) (bn1): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (conv2): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) (bn2): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (conv3): Conv2d(512, 2048, kernel_size=(1, 1), stride=(1, 1), bias=False) (bn3): BatchNorm2d(2048, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu): ReLU(inplace) ) ) (avgpool): AvgPool2d(kernel_size=7, stride=1, padding=0) (fc): Linear(in_features=2048, out_features=1000, bias=True) ) ``` I want to remove that last fc layer from the model. 
I found an answer here on SO (How to convert pretrained FC layers to CONV layers in Pytorch), where mexmex seems to provide the answer I'm looking for: ``` list(model.modules()) # to inspect the modules of your model my_model = nn.Sequential(*list(model.modules())[:-1]) # strips off last linear layer ``` So I added those lines to my code like this: ``` model = models.resnet152(pretrained=True) list(model.modules()) # to inspect the modules of your model my_model = nn.Sequential(*list(model.modules())[:-1]) # strips off last linear layer print(my_model) ``` But this code doesn't work as advertised -- as least not for me. The rest of this post is a detailed explanation of why that answer doesn't work so this question doesn't get closed as a duplicate. First, the printed model is nearly 5x larger than before. I see the same model as before, but followed by what appears to be a repeat of the model, but perhaps flattened. ``` (2): Bottleneck( (conv1): Conv2d(2048, 512, kernel_size=(1, 1), stride=(1, 1), bias=False) (bn1): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (conv2): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) (bn2): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (conv3): Conv2d(512, 2048, kernel_size=(1, 1), stride=(1, 1), bias=False) (bn3): BatchNorm2d(2048, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu): ReLU(inplace) ) ) (avgpool): AvgPool2d(kernel_size=7, stride=1, padding=0) (fc): Linear(in_features=2048, out_features=1000, bias=True) ) (1): Conv2d(3, 64, kernel_size=(7, 7), stride=(2, 2), padding=(3, 3), bias=False) (2): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (3): ReLU(inplace) (4): MaxPool2d(kernel_size=3, stride=2, padding=1, dilation=1, ceil_mode=False) (5): Sequential( . . . this goes on for ~1600 more lines . . . 
(415): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (416): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) (417): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (418): Conv2d(512, 2048, kernel_size=(1, 1), stride=(1, 1), bias=False) (419): BatchNorm2d(2048, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (420): ReLU(inplace) (421): AvgPool2d(kernel_size=7, stride=1, padding=0) ) ``` Second, the fc layer is still there -- and the Conv2D layer after it looks just like the first layer of ResNet152. Third, if I try to invoke my_model.forward(), pytorch complains about a size mismatch. It expects size [1, 3, 224, 224], but the input was [1, 1000]. So it looks like a copy of the entire model (minus the fc layer) is getting appended to the original model. Bottom line, the only answer I found on SO doesn't actually work.", "response":"For ResNet model, you can use children attribute to access layers since ResNet model in pytorch consist of nn modules. (Tested on pytorch 0.4.1) ``` model = models.resnet152(pretrained=True) newmodel = torch.nn.Sequential(*(list(model.children())[:-1])) print(newmodel) ``` Update: Although there is not an universal answer for the question that can work on all pytorch models, it should work on all well structured ones. Existing layers you add to your model (such as torch.nn.Linear, torch.nn.Conv2d, torch.nn.BatchNorm2d...) all based on torch.nn.Module class. And if you implement a custom layer and add that to your network you should inherit it from pytorch's torch.nn.Module class. As written in documentation, children attribute lets you access the modules of your class\/model\/network. ``` def children(self): r\"\"\"Returns an iterator over immediate children modules. 
``` Update: It is important to note that children() returns \"immediate\" modules, which means if last module of your network is a sequential, it will return whole sequential.", "best_answers_score":0.8, "library_name":"pytorch", "question_url":"https:\/\/stackoverflow.com\/questions\/52548174\/how-to-remove-the-last-fc-layer-from-a-resnet-model-in-pytorch", "best_answers_votes":64, "question_length":3950, "response_length":1048 }, { "question":"How to repeat tensor in a specific new dimension in PyTorch If I have a tensor A which has shape [M, N], I want to repeat the tensor K times so that the result B has shape [M, K, N] and each slice B[:, k, :] should has the same data as A. Which is the best practice without a for loop. K might be in other dimension. torch.repeat_interleave() and tensor.repeat() does not seem to work. Or I am using it in a wrong way.", "response":"tensor.repeat should suit your needs but you need to insert a unitary dimension first. For this we could use either tensor.unsqueeze or tensor.reshape. Since unsqueeze is specifically defined to insert a unitary dimension we will use that. ``` B = A.unsqueeze(1).repeat(1, K, 1) ``` Code Description A.unsqueeze(1) turns A from an [M, N] to [M, 1, N] and .repeat(1, K, 1) repeats the tensor K times along the second dimension.", "best_answers_score":0.8, "library_name":"pytorch", "question_url":"https:\/\/stackoverflow.com\/questions\/57896357\/how-to-repeat-tensor-in-a-specific-new-dimension-in-pytorch", "best_answers_votes":90, "question_length":418, "response_length":426 }, { "question":"Should I use softmax as output when using cross entropy loss in PyTorch? I have a problem with classifying fully connected deep neural net with 2 hidden layers for MNIST dataset in pytorch. I want to use tanh as activations in both hidden layers, but in the end, I should use softmax. 
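A quick check of the unsqueeze-then-repeat recipe above, with small illustrative sizes (M=2, K=3, N=4): ```python
import torch

M, K, N = 2, 3, 4
A = torch.arange(M * N).reshape(M, N)
B = A.unsqueeze(1).repeat(1, K, 1)   # [M, N] -> [M, 1, N] -> [M, K, N]

print(B.shape)                       # torch.Size([2, 3, 4])
print(torch.equal(B[:, 1, :], A))    # every slice along dim 1 equals A -> True
```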
For the loss, I am choosing nn.CrossEntropyLoss() in PyTorch, which (as I have found out) does not want to take one-hot encoded labels as true labels, but takes LongTensor of classes instead. My model is nn.Sequential() and when I am using softmax in the end, it gives me worse results in terms of accuracy on testing data. Why? ```py import torch from torch import nn inputs, n_hidden0, n_hidden1, out = 784, 128, 64, 10 n_epochs = 500 model = nn.Sequential( nn.Linear(inputs, n_hidden0, bias=True), nn.Tanh(), nn.Linear(n_hidden0, n_hidden1, bias=True), nn.Tanh(), nn.Linear(n_hidden1, out, bias=True), nn.Softmax() # SHOULD THIS BE THERE? ) criterion = nn.CrossEntropyLoss() optimizer = torch.optim.SGD(model.parameters(), lr=0.1, momentum=0.5) for epoch in range(n_epochs): y_pred = model(X_train) loss = criterion(y_pred, Y_train) print('epoch: ', epoch+1,' loss: ', loss.item()) optimizer.zero_grad() loss.backward() optimizer.step() ```", "response":"As stated in the torch.nn.CrossEntropyLoss() doc: This criterion combines nn.LogSoftmax() and nn.NLLLoss() in one single class. 
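That combination can be verified numerically with random illustrative inputs: ```python
import torch
import torch.nn as nn

logits = torch.randn(8, 10)            # raw scores, no softmax applied
targets = torch.randint(0, 10, (8,))

ce = nn.CrossEntropyLoss()(logits, targets)
nll = nn.NLLLoss()(nn.LogSoftmax(dim=1)(logits), targets)

print(torch.allclose(ce, nll))         # True: the softmax is already inside the criterion
```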
Therefore, you should not use softmax before.", "best_answers_score":0.8, "library_name":"pytorch", "question_url":"https:\/\/stackoverflow.com\/questions\/55675345\/should-i-use-softmax-as-output-when-using-cross-entropy-loss-in-pytorch", "best_answers_votes":82, "question_length":1228, "response_length":173 }, { "question":"ValueError: Target size (torch.Size([16])) must be the same as input size (torch.Size([16, 1])) ``` ValueError Traceback (most recent call last) in 23 output = model(data) 24 # calculate the batch loss ---> 25 loss = criterion(output, target) 26 # backward pass: compute gradient of the loss with respect to model parameters 27 loss.backward() C:\\Users\\mnauf\\Anaconda3\\envs\\federated_learning\\lib\\site-packages\\torch\\nn\\modules\\module.py in __call__(self, *input, **kwargs) 487 result = self._slow_forward(*input, **kwargs) 488 else: --> 489 result = self.forward(*input, **kwargs) 490 for hook in self._forward_hooks.values(): 491 hook_result = hook(self, input, result) C:\\Users\\mnauf\\Anaconda3\\envs\\federated_learning\\lib\\site-packages\\torch\\nn\\modules\\loss.py in forward(self, input, target) 593 self.weight, 594 pos_weight=self.pos_weight, --> 595 reduction=self.reduction) 596 597 C:\\Users\\mnauf\\Anaconda3\\envs\\federated_learning\\lib\\site-packages\\torch\\nn\\functional.py in binary_cross_entropy_with_logits(input, target, weight, size_average, reduce, reduction, pos_weight) 2073 2074 if not (target.size() == input.size()): -> 2075 raise ValueError(\"Target size ({}) must be the same as input size ({})\".format(target.size(), input.size())) 2076 2077 return torch.binary_cross_entropy_with_logits(input, target, weight, pos_weight, reduction_enum) ValueError: Target size (torch.Size([16])) must be the same as input size (torch.Size([16, 1])) ``` I am training a CNN. Working on the Horses vs humans dataset. This is my code. 
I am using criterion = nn.BCEWithLogitsLoss() and optimizer = optim.RMSprop(model.parameters(), lr=0.01). My final layer is self.fc2 = nn.Linear(512, 1). Our last neuron will output 1 for horse and 0 for human, right? Or should I choose 2 neurons for output? 16 is the batch size. The error says ValueError: Target size (torch.Size([16])) must be the same as input size (torch.Size([16, 1])), but I don't understand where I need to make a change to rectify it.", "response":"Calling target = target.unsqueeze(1) before passing target to the criterion changed the target tensor size from [16] to [16, 1]; doing this solved the issue. Furthermore, I also needed to do target = target.float() before passing it to the criterion, because our outputs are in float. Besides, there was another error in the code. I was using a sigmoid activation function in the last layer, but I shouldn\u2019t, because the criterion I am using already has a sigmoid built in.", "best_answers_score":0.8, "library_name":"pytorch", "question_url":"https:\/\/stackoverflow.com\/questions\/57798033\/valueerror-target-size-torch-size16-must-be-the-same-as-input-size-torch", "best_answers_votes":68, "question_length":2009, "response_length":457 }, { "question":"Issues installing PyTorch 1.4 - \"No matching distribution found for torch===1.4.0\" Used the install guide on pytorch.org on how to install it and the command I'm using is ``` pip install torch===1.4.0 torchvision===0.5.0 -f https:\/\/download.pytorch.org\/whl\/torch_stable.html ``` But it's coming up with this error; ERROR: Could not find a version that satisfies the requirement torch===1.4.0 (from versions: 0.1.2, 0.1.2.post1, 0.1.2.post2) ERROR: No matching distribution found for torch===1.4.0 Is this even a me-related issue? Can other people use this command? Pip is installed and works for other modules, Python 3.8, CUDA version 10.1, Windows 10 Home 2004", "response":"Looks like this issue is related to the virtual environment. 
Did you try the recommended installation command in another\/new virtual environment? If that doesn't help, a possible solution is installing the packages using direct links to the PyTorch and TorchVision builds for your system: Windows: ```bash pip install https:\/\/download.pytorch.org\/whl\/cu101\/torch-1.4.0-cp38-cp38-win_amd64.whl pip install https:\/\/download.pytorch.org\/whl\/cu101\/torchvision-0.5.0-cp38-cp38-win_amd64.whl ``` Ubuntu (Linux): ```bash pip install https:\/\/download.pytorch.org\/whl\/cu101\/torch-1.4.0-cp38-cp38-linux_x86_64.whl pip install https:\/\/download.pytorch.org\/whl\/cu101\/torchvision-0.5.0-cp38-cp38-linux_x86_64.whl ```", "best_answers_score":0.8, "library_name":"pytorch", "question_url":"https:\/\/stackoverflow.com\/questions\/60137572\/issues-installing-pytorch-1-4-no-matching-distribution-found-for-torch-1-4", "best_answers_votes":33, "question_length":662, "response_length":691 }, { "question":"In distributed computing, what are world size and rank? I've been reading through some documentation and example code with the end goal of writing scripts for distributed computing (running PyTorch), but the concepts confuse me. Let's assume that we have a single node with 4 GPUs, and we want to run our script on those 4 GPUs (i.e. one process per GPU). In such a scenario, what are the world size and rank? I often find the explanation for world size: Total number of processes involved in the job, so I assume that that is four in our example, but what about rank? To explain it further, another example with multiple nodes and multiple GPUs could be useful, too.", "response":"These concepts are related to parallel computing. It would be helpful to learn a little about parallel computing, e.g., MPI. You can think of world as a group containing all the processes for your distributed training. Usually, each GPU corresponds to one process. 
Processes in the world can communicate with each other, which is why you can train your model in a distributed fashion and still get the correct gradient update. So world size is the number of processes for your training, which is usually the number of GPUs you are using for distributed training. Rank is the unique ID given to a process, so that other processes know how to identify a particular process. Local rank is a unique local ID for processes running on a single node; this is where my view differs from @zihaozhihao's. Let's take a concrete example. Suppose we run our training on 2 servers (some articles also call them nodes) and each server\/node has 4 GPUs. The world size is 4*2=8. The ranks for the processes will be [0, 1, 2, 3, 4, 5, 6, 7]. In each node, the local rank will be [0, 1, 2, 3]. I have also written a post about MPI collectives and basic concepts. The link is here.", "best_answers_score":0.8, "library_name":"pytorch", "question_url":"https:\/\/stackoverflow.com\/questions\/58271635\/in-distributed-computing-what-are-world-size-and-rank", "best_answers_votes":71, "question_length":672, "response_length":1152 }, { "question":"400% higher error with PyTorch compared with identical Keras model (with Adam optimizer) TLDR: A simple (single hidden-layer) feed-forward Pytorch model trained to predict the function y = sin(X1) + sin(X2) + ... sin(X10) substantially underperforms an identical model built\/trained with Keras. Why is this so and what can be done to mitigate the difference in performance? In training a regression model, I noticed that PyTorch drastically underperforms an identical model built with Keras. 
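The ID arithmetic behind the 2-node, 4-GPU example above can be spelled out in a few lines (pure bookkeeping, no PyTorch needed): ```python
nodes, gpus_per_node = 2, 4

world_size = nodes * gpus_per_node      # total number of processes: 8

# global rank of each process, and its local rank within its node
ranks = [node * gpus_per_node + local
         for node in range(nodes)
         for local in range(gpus_per_node)]
local_ranks = [rank % gpus_per_node for rank in ranks]

print(world_size)    # 8
print(ranks)         # [0, 1, 2, 3, 4, 5, 6, 7]
print(local_ranks)   # [0, 1, 2, 3, 0, 1, 2, 3]
```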
This phenomenon has been observed and reported previously: The same model produces worse results on pytorch than on tensorflow CNN model in pytorch giving 30% less accuracy to Tensoflowflow model: PyTorch Adam vs Tensorflow Adam Suboptimal convergence when compared with TensorFlow model RNN and Adam: slower convergence than Keras PyTorch comparable but worse than keras on a simple feed forward network Why is the PyTorch model doing worse than the same model in Keras even with the same weight initialization? Why Keras behave better than Pytorch under the same network configuration? The following explanations and suggestions have been made previously as well: Using the same decimal precision (32 vs 64): 1, 2, Using a CPU instead of a GPU: 1,2 Change retain_graph=True to create_graph=True in computing the 2nd derivative with autograd.grad: 1 Check if keras is using a regularizer, constraint, bias, or loss function in a different way from pytorch: 1,2 Ensure you are computing the validation loss in the same way: 1 Use the same initialization routine: 1,2 Training the pytorch model for longer epochs: 1 Trying several random seeds: 1 Ensure that model.eval() is called in validation step when training pytorch model: 1 The main issue is with the Adam optimizer, not the initialization: 1 To understand this issue, I trained a simple two-layer neural network (much simpler than my original model) in Keras and PyTorch, using the same hyperparameters and initialization routines, and following all the recommendations listed above. However, the PyTorch model results in a mean squared error (MSE) that is 400% higher than the MSE of the Keras model. Here is my code: 0. 
Imports ```py import numpy as np from scipy.stats import pearsonr from sklearn.preprocessing import MinMaxScaler from sklearn import metrics from torch.utils.data import Dataset, DataLoader import tensorflow as tf from tensorflow.keras import layers from tensorflow.keras.regularizers import L2 from tensorflow.keras.models import Model from tensorflow.keras.optimizers import Adam ``` 1. Generate a reproducible dataset ```py def get_data(): np.random.seed(0) Xtrain = np.random.normal(0, 1, size=(7000,10)) Xval = np.random.normal(0, 1, size=(700,10)) ytrain = np.sum(np.sin(Xtrain), axis=-1) yval = np.sum(np.sin(Xval), axis=-1) scaler = MinMaxScaler() ytrain = scaler.fit_transform(ytrain.reshape(-1,1)).reshape(-1) yval = scaler.transform(yval.reshape(-1,1)).reshape(-1) return Xtrain, Xval, ytrain, yval class XYData(Dataset): def __init__(self, X, y): super(XYData, self).__init__() self.X = torch.tensor(X, dtype=torch.float32) self.y = torch.tensor(y, dtype=torch.float32) self.len = len(y) def __getitem__(self, index): return (self.X[index], self.y[index]) def __len__(self): return self.len # Data, dataset, and dataloader Xtrain, Xval, ytrain, yval = get_data() traindata = XYData(Xtrain, ytrain) valdata = XYData(Xval, yval) trainloader = DataLoader(dataset=traindata, shuffle=True, batch_size=32, drop_last=False) valloader = DataLoader(dataset=valdata, shuffle=True, batch_size=32, drop_last=False) ``` 2. 
Build Keras and PyTorch models with identical hyperparameters and initialization methods ```py class TorchLinearModel(nn.Module): def __init__(self, input_dim=10, random_seed=0): super(TorchLinearModel, self).__init__() _ = torch.manual_seed(random_seed) self.hidden_layer = nn.Linear(input_dim,100) self.initialize_layer(self.hidden_layer) self.output_layer = nn.Linear(100, 1) self.initialize_layer(self.output_layer) def initialize_layer(self, layer): _ = torch.nn.init.xavier_normal_(layer.weight) #_ = torch.nn.init.xavier_uniform_(layer.weight) _ = torch.nn.init.constant(layer.bias,0) def forward(self, x): x = self.hidden_layer(x) x = self.output_layer(x) return x def mean_squared_error(ytrue, ypred): return torch.mean(((ytrue - ypred) ** 2)) def build_torch_model(): torch_model = TorchLinearModel() optimizer = optim.Adam(torch_model.parameters(), betas=(0.9,0.9999), eps=1e-7, lr=1e-3, weight_decay=0) return torch_model, optimizer def build_keras_model(): x = layers.Input(shape=10) z = layers.Dense(units=100, activation=None, use_bias=True, kernel_regularizer=None, bias_regularizer=None)(x) y = layers.Dense(units=1, activation=None, use_bias=True, kernel_regularizer=None, bias_regularizer=None)(z) keras_model = Model(x, y, name='linear') optimizer = Adam(learning_rate=1e-3, beta_1=0.9, beta_2=0.9999, epsilon=1e-7, amsgrad=False) keras_model.compile(optimizer=optimizer, loss='mean_squared_error') return keras_model # Instantiate models torch_model, optimizer = build_torch_model() keras_model = build_keras_model() ``` 3. 
Train PyTorch model for 100 epochs: ```py torch_trainlosses, torch_vallosses = [], [] for epoch in range(100): # Training losses = [] _ = torch_model.train() for i, (x,y) in enumerate(trainloader): optimizer.zero_grad() ypred = torch_model(x) loss = mean_squared_error(y, ypred) _ = loss.backward() _ = optimizer.step() losses.append(loss.item()) torch_trainlosses.append(np.mean(losses)) # Validation losses = [] _ = torch_model.eval() with torch.no_grad(): for i, (x, y) in enumerate(valloader): ypred = torch_model(x) loss = mean_squared_error(y, ypred) losses.append(loss.item()) torch_vallosses.append(np.mean(losses)) print(f\"epoch={epoch+1}, train_loss={torch_trainlosses[-1]:.4f}, val_loss={torch_vallosses[-1]:.4f}\") ``` 4. Train Keras model for 100 epochs: ```py history = keras_model.fit(Xtrain, ytrain, sample_weight=None, batch_size=32, epochs=100, validation_data=(Xval, yval)) ``` 5. Loss in training history ```py plt.plot(torch_trainlosses, color='blue', label='PyTorch Train') plt.plot(torch_vallosses, color='blue', linestyle='--', label='PyTorch Val') plt.plot(history.history['loss'], color='brown', label='Keras Train') plt.plot(history.history['val_loss'], color='brown', linestyle='--', label='Keras Val') plt.legend() ``` Keras records a much lower error in the training. Since this may be due to a difference in how Keras computes the loss, I calculated the prediction error on the validation set with sklearn.metrics.mean_squared_error 6. 
Validation error after training ```py ypred_keras = keras_model.predict(Xval).reshape(-1) ypred_torch = torch_model(torch.tensor(Xval, dtype=torch.float32)) ypred_torch = ypred_torch.detach().numpy().reshape(-1) mse_keras = metrics.mean_squared_error(yval, ypred_keras) mse_torch = metrics.mean_squared_error(yval, ypred_torch) print('Percent error difference:', (mse_torch \/ mse_keras - 1) * 100) r_keras = pearsonr(yval, ypred_keras)[0] r_pytorch = pearsonr(yval, ypred_torch)[0] print(\"r_keras:\", r_keras) print(\"r_pytorch:\", r_pytorch) plt.scatter(ypred_keras, yval); plt.title('Keras'); plt.show(); plt.close() plt.scatter(ypred_torch, yval); plt.title('Pytorch'); plt.show(); plt.close() ``` ```py Percent error difference: 479.1312469426776 r_keras: 0.9115184443702814 r_pytorch: 0.21728812737220082 ``` The correlation of predicted values with ground truth is 0.912 for Keras but 0.217 for Pytorch, and the error for Pytorch is 479% higher! 7. Other trials I also tried: Lowering the learning rate for Pytorch (lr=1e-4), R increases from 0.217 to 0.576, but it's still much worse than Keras (r=0.912). Increasing the learning rate for Pytorch (lr=1e-2), R is worse at 0.095 Training numerous times with different random seeds. The performance is roughly the same, regardless. Trained for longer than 100 epochs. No improvement was observed! Used torch.nn.init.xavier_uniform_ instead of torch.nn.init.xavier_normal_ in the initialization of the weights. R improves from 0.217 to 0.639, but it's still worse than Keras (0.912). What can be done to ensure that the PyTorch model converges to a reasonable error comparable with the Keras model?", "response":"The problem here is unintentional broadcasting in the PyTorch training loop. The result of a nn.Linear operation always has shape [B,D], where B is the batch size and D is the output dimension. Therefore, in your mean_squared_error function ypred has shape [32,1] and ytrue has shape [32]. 
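A standalone check makes the unintended broadcast easy to see (toy tensors with the same shapes as in the training loop):

```python
import torch

ypred = torch.randn(32, 1)  # shape of nn.Linear(100, 1) output for a batch of 32
ytrue = torch.randn(32)     # shape of the labels coming from the DataLoader

diff = ytrue - ypred
print(diff.shape)             # torch.Size([32, 32]) -- every label minus every prediction
print(ypred.flatten().shape)  # torch.Size([32])     -- the intended elementwise shape
```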
By the broadcasting rules used by NumPy and PyTorch this means that ytrue - ypred has shape [32,32]. What you almost certainly meant is for ypred to have shape [32]. This can be accomplished in many ways; probably the most readable is to use Tensor.flatten ```py class TorchLinearModel(nn.Module): ... def forward(self, x): x = self.hidden_layer(x) x = self.output_layer(x) return x.flatten() ``` which produces the following train\/val curves", "best_answers_score":0.8, "library_name":"pytorch", "question_url":"https:\/\/stackoverflow.com\/questions\/73600481\/400-higher-error-with-pytorch-compared-with-identical-keras-model-with-adam-op", "best_answers_votes":70, "question_length":8254, "response_length":732 }, { "question":"How does one use Pytorch (+ cuda) with an A100 GPU? I was trying to use my current code with an A100 gpu but I get this error: ``` ---> backend='nccl' \/home\/miranda9\/miniconda3\/envs\/metalearningpy1.7.1c10.2\/lib\/python3.8\/site-packages\/torch\/cuda\/__init__.py:104: UserWarning: A100-SXM4-40GB with CUDA capability sm_80 is not compatible with the current PyTorch installation. The current PyTorch install supports CUDA capabilities sm_37 sm_50 sm_60 sm_61 sm_70 sm_75 compute_37. If you want to use the A100-SXM4-40GB GPU with PyTorch, please check the instructions at https:\/\/pytorch.org\/get-started\/locally\/ ``` which is reather confusing because it points to the usual pytorch installation but doesn't tell me which combination of pytorch version + cuda version to use for my specific hardware (A100). What is the right way to install pytorch for an A100? 
These are some versions I've tried: ``` # conda install -y pytorch==1.8.0 torchvision cudatoolkit=10.2 -c pytorch # conda install -y pytorch torchvision cudatoolkit=10.2 -c pytorch #conda install -y pytorch==1.7.1 torchvision torchaudio cudatoolkit=10.2 -c pytorch -c conda-forge # conda install -y pytorch==1.6.0 torchvision cudatoolkit=10.2 -c pytorch #conda install -y pytorch==1.7.1 torchvision torchaudio cudatoolkit=11.1 -c pytorch -c conda-forge # conda install pytorch torchvision torchaudio cudatoolkit=11.0 -c pytorch # conda install pytorch torchvision torchaudio cudatoolkit=11.1 -c pytorch -c conda-forge # conda install -y pytorch torchvision cudatoolkit=9.2 -c pytorch # For Nano, CC # conda install pytorch torchvision torchaudio cudatoolkit=11.1 -c pytorch -c conda-forge ``` note that this can be subtle because I've had this error with this machine + pytorch version in the past: How to solve the famous `unhandled cuda error, NCCL version 2.7.8` error? Bonus 1: I still have errors: ``` ncclSystemError: System call (socket, malloc, munmap, etc) failed. 
Traceback (most recent call last): File \"\/home\/miranda9\/diversity-for-predictive-success-of-meta-learning\/div_src\/diversity_src\/experiment_mains\/main_dist_maml_l2l.py\", line 1423, in main() File \"\/home\/miranda9\/diversity-for-predictive-success-of-meta-learning\/div_src\/diversity_src\/experiment_mains\/main_dist_maml_l2l.py\", line 1365, in main train(args=args) File \"\/home\/miranda9\/diversity-for-predictive-success-of-meta-learning\/div_src\/diversity_src\/experiment_mains\/main_dist_maml_l2l.py\", line 1385, in train args.opt = move_opt_to_cherry_opt_and_sync_params(args) if is_running_parallel(args.rank) else args.opt File \"\/home\/miranda9\/ultimate-utils\/ultimate-utils-proj-src\/uutils\/torch_uu\/distributed.py\", line 456, in move_opt_to_cherry_opt_and_sync_params args.opt = cherry.optim.Distributed(args.model.parameters(), opt=args.opt, sync=syn) File \"\/home\/miranda9\/miniconda3\/envs\/meta_learning_a100\/lib\/python3.9\/site-packages\/cherry\/optim.py\", line 62, in __init__ self.sync_parameters() File \"\/home\/miranda9\/miniconda3\/envs\/meta_learning_a100\/lib\/python3.9\/site-packages\/cherry\/optim.py\", line 78, in sync_parameters dist.broadcast(p.data, src=root) File \"\/home\/miranda9\/miniconda3\/envs\/meta_learning_a100\/lib\/python3.9\/site-packages\/torch\/distributed\/distributed_c10d.py\", line 1090, in broadcast work = default_pg.broadcast([tensor], opts) RuntimeError: NCCL error in: ..\/torch\/lib\/c10d\/ProcessGroupNCCL.cpp:911, unhandled system error, NCCL version 2.7.8 ``` one of the answers suggested to have nvcca & pytorch.version.cuda to match but they do not: ``` (meta_learning_a100) [miranda9@hal-dgx ~]$ python -c \"import torch;print(torch.version.cuda)\" 11.1 (meta_learning_a100) [miranda9@hal-dgx ~]$ nvcc -V nvcc: NVIDIA (R) Cuda compiler driver Copyright (c) 2005-2020 NVIDIA Corporation Built on Wed_Jul_22_19:09:09_PDT_2020 Cuda compilation tools, release 11.0, V11.0.221 Build cuda_11.0_bu.TC445_37.28845127_0 
``` How do I match them? I this the error? Can someone display their pip, conda and nvcca version to see what set up works? More error messages: ``` hal-dgx:21797:21797 [0] NCCL INFO Bootstrap : Using [0]enp226s0:141.142.153.83 [1]virbr0:192.168.122.1 hal-dgx:21797:21797 [0] NCCL INFO NET\/Plugin : No plugin found (libnccl-net.so), using internal implementation hal-dgx:21797:21797 [0] NCCL INFO NET\/IB : Using [0]mlx5_0:1\/IB [1]mlx5_1:1\/IB [2]mlx5_2:1\/IB [3]mlx5_3:1\/IB [4]mlx5_4:1\/IB [5]mlx5_5:1\/IB [6]mlx5_6:1\/IB [7]mlx5_7:1\/IB ; OOB enp226s0:141.142.153.83 hal-dgx:21797:21797 [0] NCCL INFO Using network IB NCCL version 2.7.8+cuda11.1 hal-dgx:21805:21805 [2] NCCL INFO Bootstrap : Using [0]enp226s0:141.142.153.83 [1]virbr0:192.168.122.1 hal-dgx:21799:21799 [1] NCCL INFO Bootstrap : Using [0]enp226s0:141.142.153.83 [1]virbr0:192.168.122.1 hal-dgx:21805:21805 [2] NCCL INFO NET\/Plugin : No plugin found (libnccl-net.so), using internal implementation hal-dgx:21799:21799 [1] NCCL INFO NET\/Plugin : No plugin found (libnccl-net.so), using internal implementation hal-dgx:21811:21811 [3] NCCL INFO Bootstrap : Using [0]enp226s0:141.142.153.83 [1]virbr0:192.168.122.1 hal-dgx:21811:21811 [3] NCCL INFO NET\/Plugin : No plugin found (libnccl-net.so), using internal implementation hal-dgx:21811:21811 [3] NCCL INFO NET\/IB : Using [0]mlx5_0:1\/IB [1]mlx5_1:1\/IB [2]mlx5_2:1\/IB [3]mlx5_3:1\/IB [4]mlx5_4:1\/IB [5]mlx5_5:1\/IB [6]mlx5_6:1\/IB [7]mlx5_7:1\/IB ; OOB enp226s0:141.142.153.83 hal-dgx:21811:21811 [3] NCCL INFO Using network IB hal-dgx:21799:21799 [1] NCCL INFO NET\/IB : Using [0]mlx5_0:1\/IB [1]mlx5_1:1\/IB [2]mlx5_2:1\/IB [3]mlx5_3:1\/IB [4]mlx5_4:1\/IB [5]mlx5_5:1\/IB [6]mlx5_6:1\/IB [7]mlx5_7:1\/IB ; OOB enp226s0:141.142.153.83 hal-dgx:21805:21805 [2] NCCL INFO NET\/IB : Using [0]mlx5_0:1\/IB [1]mlx5_1:1\/IB [2]mlx5_2:1\/IB [3]mlx5_3:1\/IB [4]mlx5_4:1\/IB [5]mlx5_5:1\/IB [6]mlx5_6:1\/IB [7]mlx5_7:1\/IB ; OOB enp226s0:141.142.153.83 hal-dgx:21799:21799 [1] NCCL 
INFO Using network IB hal-dgx:21805:21805 [2] NCCL INFO Using network IB hal-dgx:21797:27906 [0] misc\/ibvwrap.cc:280 NCCL WARN Call to ibv_create_qp failed hal-dgx:21797:27906 [0] NCCL INFO transport\/net_ib.cc:360 -> 2 hal-dgx:21797:27906 [0] NCCL INFO transport\/net_ib.cc:437 -> 2 hal-dgx:21797:27906 [0] NCCL INFO include\/net.h:21 -> 2 hal-dgx:21797:27906 [0] NCCL INFO include\/net.h:51 -> 2 hal-dgx:21797:27906 [0] NCCL INFO init.cc:300 -> 2 hal-dgx:21797:27906 [0] NCCL INFO init.cc:566 -> 2 hal-dgx:21797:27906 [0] NCCL INFO init.cc:840 -> 2 hal-dgx:21797:27906 [0] NCCL INFO group.cc:73 -> 2 [Async thread] hal-dgx:21811:27929 [3] misc\/ibvwrap.cc:280 NCCL WARN Call to ibv_create_qp failed hal-dgx:21811:27929 [3] NCCL INFO transport\/net_ib.cc:360 -> 2 hal-dgx:21811:27929 [3] NCCL INFO transport\/net_ib.cc:437 -> 2 hal-dgx:21811:27929 [3] NCCL INFO include\/net.h:21 -> 2 hal-dgx:21811:27929 [3] NCCL INFO include\/net.h:51 -> 2 hal-dgx:21811:27929 [3] NCCL INFO init.cc:300 -> 2 hal-dgx:21811:27929 [3] NCCL INFO init.cc:566 -> 2 hal-dgx:21811:27929 [3] NCCL INFO init.cc:840 -> 2 hal-dgx:21811:27929 [3] NCCL INFO group.cc:73 -> 2 [Async thread] ``` after putting ``` import os os.environ[\"NCCL_DEBUG\"] = \"INFO\" ```", "response":"From the link pytorch site from @SimonB 's answer, I did: ``` pip3 install torch==1.9.0+cu111 torchvision==0.10.0+cu111 torchaudio==0.9.0 -f https:\/\/download.pytorch.org\/whl\/torch_stable.html ``` This solved the problem for me.", "best_answers_score":0.8, "library_name":"pytorch", "question_url":"https:\/\/stackoverflow.com\/questions\/66992585\/how-does-one-use-pytorch-cuda-with-an-a100-gpu", "best_answers_votes":48, "question_length":7035, "response_length":227 }, { "question":"Using CUDA with pytorch? Is there a way to reliably enable CUDA on the whole model? I want to run the training on my GPU. 
I found on some forums that I need to apply .cuda() on anything I want to use CUDA with (I've applied it to everything I could without making the program crash). Surprisingly, this makes the training even slower. Then, I found that you could use this torch.set_default_tensor_type('torch.cuda.FloatTensor') to use CUDA. With both enabled, nothing changes. What is happening?", "response":"You can use the tensor.to(device) command to move a tensor to a device. The .to() command is also used to move a whole model to a device, like in the post you linked to. Another possibility is to set the device of a tensor during creation using the device= keyword argument, like in t = torch.tensor(some_list, device=device) To set the device dynamically in your code, you can use ``` device = torch.device(\"cuda\" if torch.cuda.is_available() else \"cpu\") ``` to set cuda as your device if possible. There are various code examples on PyTorch Tutorials and in the documentation linked above that could help you.", "best_answers_score":0.8, "library_name":"pytorch", "question_url":"https:\/\/stackoverflow.com\/questions\/50954479\/using-cuda-with-pytorch", "best_answers_votes":93, "question_length":496, "response_length":611 }, { "question":"How to use 'collate_fn' with dataloaders? I am trying to train a pretrained roberta model using 3 inputs, 3 input_masks and a label as tensors of my training dataset. I do this using the following code: ``` from torch.utils.data import TensorDataset, DataLoader, RandomSampler, SequentialSampler batch_size = 32 # Create the DataLoader for our training set. train_data = TensorDataset(train_AT, train_BT, train_CT, train_maskAT, train_maskBT, train_maskCT, labels_trainT) train_dataloader = DataLoader(train_data, batch_size=batch_size) # Create the Dataloader for our validation set. 
validation_data = TensorDataset(val_AT, val_BT, val_CT, val_maskAT, val_maskBT, val_maskCT, labels_valT) val_dataloader = DataLoader(validation_data, batch_size=batch_size) # Pytorch Training training_args = TrainingArguments( output_dir='C:\/Users\/samvd\/Documents\/Master\/AppliedMachineLearning\/FinalProject\/results', # output directory num_train_epochs=1, # total # of training epochs per_device_train_batch_size=32, # batch size per device during training per_device_eval_batch_size=32, # batch size for evaluation warmup_steps=500, # number of warmup steps for learning rate scheduler weight_decay=0.01, # strength of weight decay logging_dir='C:\/Users\/samvd\/Documents\/Master\/AppliedMachineLearning\/FinalProject\/logs', # directory for storing logs ) trainer = Trainer( model=model, # the instantiated \ud83e\udd17 Transformers model to be trained args=training_args, # training arguments, defined above train_dataset = train_data, # training dataset eval_dataset = validation_data, # evaluation dataset ) trainer.train() ``` However this gives me the following error: TypeError: vars() argument must have dict attribute Now I have found out that it is probably because I don't use collate_fn when using DataLoader, but I can't really find a source that helps me define this correctly so the trainer understands the different tensors I put in. Can anyone point me in the right direction?", "response":"Basically, the collate_fn receives a list of tuples if your __getitem__ function from a Dataset subclass returns a tuple, or just a normal list if your Dataset subclass returns only one element. Its main objective is to create your batch without spending much time implementing it manually. Try to see it as a glue that you specify the way examples stick together in a batch. If you don\u2019t use it, PyTorch only put batch_size examples together as you would using torch.stack (not exactly it, but it is simple like that). 
Suppose for example, you want to create batches of a list of varying dimension tensors. The below code pads sequences with 0 until the maximum sequence size of the batch, that is why we need the collate_fn, because a standard batching algorithm (simply using torch.stack) won\u2019t work in this case, and we need to manually pad different sequences with variable length to the same size before creating the batch. ``` def collate_fn(data): \"\"\" data: is a list of tuples with (example, label, length) where 'example' is a tensor of arbitrary shape and label\/length are scalars \"\"\" _, labels, lengths = zip(*data) max_len = max(lengths) n_ftrs = data[0][0].size(1) features = torch.zeros((len(data), max_len, n_ftrs)) labels = torch.tensor(labels) lengths = torch.tensor(lengths) for i in range(len(data)): j, k = data[i][0].size(0), data[i][0].size(1) features[i] = torch.cat([data[i][0], torch.zeros((max_len - j, k))]) return features.float(), labels.long(), lengths.long() ``` The function above is fed to the collate_fn param in the DataLoader, as this example: ``` DataLoader(toy_dataset, collate_fn=collate_fn, batch_size=5) ``` With this collate_fn function, you always gonna have a tensor where all your examples have the same size. So, when you feed your forward() function with this data, you need to use the length to get the original data back, to not use those meaningless zeros in your computation. 
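Here is a runnable toy usage of the same idea (hypothetical two-item dataset; the padding is done with slice assignment instead of torch.cat, which has the identical effect):

```python
import torch
from torch.utils.data import DataLoader

# Each item is (example, label, length), as the collate_fn above expects.
toy_dataset = [
    (torch.ones(3, 2), 0, 3),  # sequence of length 3, 2 features per step
    (torch.ones(5, 2), 1, 5),  # sequence of length 5
]

def collate_fn(data):
    _, labels, lengths = zip(*data)
    max_len = max(lengths)
    n_ftrs = data[0][0].size(1)
    features = torch.zeros((len(data), max_len, n_ftrs))
    for i, (example, _, _) in enumerate(data):
        features[i, :example.size(0)] = example  # tail stays zero-padded
    return features.float(), torch.tensor(labels).long(), torch.tensor(lengths).long()

loader = DataLoader(toy_dataset, collate_fn=collate_fn, batch_size=2)
features, labels, lengths = next(iter(loader))
print(features.shape, lengths.tolist())  # torch.Size([2, 5, 2]) [3, 5]
```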
Source: Pytorch Forum", "best_answers_score":0.8, "library_name":"pytorch", "question_url":"https:\/\/stackoverflow.com\/questions\/65279115\/how-to-use-collate-fn-with-dataloaders", "best_answers_votes":78, "question_length":1962, "response_length":1949 }, { "question":"AttributeError: cannot assign module before Module.__init__() call I am getting the following error: ``` Traceback (most recent call last): File \"main.py\", line 63, in question_classifier = QuestionClassifier(corpus.dictionary, embeddings_index, corpus.max_sent_length, args) File \"\/net\/if5\/wua4nw\/wasi\/academic\/research_with_prof_chang\/projects\/question_answering\/duplicate_question_detection\/source\/question_classifier.py\", line 26, in __init__ self.embedding = EmbeddingLayer(len(dictionary), args.emsize, args.dropout) File \"\/if5\/wua4nw\/anaconda3\/lib\/python3.5\/site-packages\/torch\/nn\/modules\/module.py\", line 255, in __setattr__ \"cannot assign module before Module.__init__() call\") AttributeError: cannot assign module before Module.__init__() call ``` I have a class as follows: ```py class QuestionClassifier(nn.Module): def __init__(self, dictionary, embeddings_index, max_seq_length, args): self.embedding = EmbeddingLayer(len(dictionary), args.emsize, args.dropout) self.encoder = EncoderRNN(args.emsize, args.nhid, args.model, args.bidirection, args.nlayers, args.dropout) self.drop = nn.Dropout(args.dropout) ``` So, when I run the following line: ```py question_classifier = QuestionClassifier(corpus.dictionary, embeddings_index, corpus.max_sent_length, args) ``` I get the above-mentioned error. Here, EmbeddingLayer and EncoderRNN are classes written by me which inherit nn.Module, like the QuestionClassifier class.
What am I doing wrong here?", "response":"Looking at the pytorch source code for Module, we see in the docstring that an example of deriving from Module includes: ``` class Model(nn.Module): def __init__(self): super(Model, self).__init__() self.conv1 = nn.Conv2d(1, 20, 5) self.conv2 = nn.Conv2d(20, 20, 5) ``` So you probably want to call Module's init the same way in your derived class: ``` super(QuestionClassifier, self).__init__() ```", "best_answers_score":0.8, "library_name":"pytorch", "question_url":"https:\/\/stackoverflow.com\/questions\/43080583\/attributeerror-cannot-assign-module-before-module-init-call", "best_answers_votes":84, "question_length":1460, "response_length":394 }, { "question":"Cannot convert list to array: ValueError: only one element tensors can be converted to Python scalars I'm currently working with the PyTorch framework and trying to understand foreign code. I got an indices issue and wanted to print the shape of a list. The only way of doing so (as far as Google tells me) is to convert the list into a numpy array and then getting the shape with numpy.ndarray.shape(). But trying to convert my list into an array, I got a ValueError: only one element tensors can be converted to Python scalars.
My List is a converted PyTorch Tensor (list(pytorchTensor)) and looks somewhat like this: ``` [ tensor([[-0.2781, -0.2567, -0.2353, ..., -0.9640, -0.9855, -1.0069], [-0.2781, -0.2567, -0.2353, ..., -1.0069, -1.0283, -1.0927], [-0.2567, -0.2567, -0.2138, ..., -1.0712, -1.1141, -1.1784], ..., [-0.6640, -0.6425, -0.6211, ..., -1.0712, -1.1141, -1.0927], [-0.6640, -0.6425, -0.5997, ..., -0.9426, -0.9640, -0.9640], [-0.6640, -0.6425, -0.5997, ..., -0.9640, -0.9426, -0.9426]]), tensor([[-0.0769, -0.0980, -0.0769, ..., -0.9388, -0.9598, -0.9808], [-0.0559, -0.0769, -0.0980, ..., -0.9598, -1.0018, -1.0228], [-0.0559, -0.0769, -0.0769, ..., -1.0228, -1.0439, -1.0859], ..., [-0.4973, -0.4973, -0.4973, ..., -1.0018, -1.0439, -1.0228], [-0.4973, -0.4973, -0.4973, ..., -0.8757, -0.9177, -0.9177], [-0.4973, -0.4973, -0.4973, ..., -0.9177, -0.8967, -0.8967]]), tensor([[-0.1313, -0.1313, -0.1100, ..., -0.8115, -0.8328, -0.8753], [-0.1313, -0.1525, -0.1313, ..., -0.8541, -0.8966, -0.9391], [-0.1100, -0.1313, -0.1100, ..., -0.9391, -0.9816, -1.0666], ..., [-0.4502, -0.4714, -0.4502, ..., -0.8966, -0.8966, -0.8966], [-0.4502, -0.4714, -0.4502, ..., -0.8115, -0.8115, -0.7903], [-0.4502, -0.4714, -0.4502, ..., -0.8115, -0.7690, -0.7690]]), ] ``` Is there a way of getting the shape of that list without converting it into a numpy array?", "response":"It seems like you have a list of tensors. For each tensor you can see its size() (no need to convert to list\/numpy). If you insist, you can convert a tensor to numpy array using numpy(): Return a list of tensor shapes: ``` >> [t.size() for t in my_list_of_tensors] ``` Returns a list of numpy arrays: ``` >> [t.numpy() for t in my_list_of_tensors] ``` In terms of performance, it is always best to avoid casting of tensors into numpy arrays, as it may incur sync of device\/host memory. 
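A runnable version of the above, with small hypothetical tensors standing in for the question's list:

```python
import torch

my_list_of_tensors = [torch.randn(2, 3) for _ in range(4)]  # stand-in data

shapes = [t.size() for t in my_list_of_tensors]
print(shapes)  # [torch.Size([2, 3]), torch.Size([2, 3]), torch.Size([2, 3]), torch.Size([2, 3])]

# The "shape" of the list itself, without NumPy:
print(len(my_list_of_tensors), shapes[0])  # 4 torch.Size([2, 3])
```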
If you only need to check the shape of a tensor, use size() function.", "best_answers_score":0.8, "library_name":"pytorch", "question_url":"https:\/\/stackoverflow.com\/questions\/52074153\/cannot-convert-list-to-array-valueerror-only-one-element-tensors-can-be-conver", "best_answers_votes":39, "question_length":1865, "response_length":555 }, { "question":"How to cast a 1-d IntTensor to int in Pytorch How do I convert a 1-D IntTensor to an integer? This: ``` IntTensor.int() ``` Gives an error: ``` KeyError: Variable containing: 423 [torch.IntTensor of size 1] ```", "response":"The simplest and cleanest method I know: ``` IntTensor.item() ``` Returns the value of this tensor as a standard Python number. This only works for tensors with one element. For other cases, see tolist. PyTorch Docs", "best_answers_score":0.8, "library_name":"pytorch", "question_url":"https:\/\/stackoverflow.com\/questions\/47588682\/how-to-cast-a-1-d-inttensor-to-int-in-pytorch", "best_answers_votes":107, "question_length":210, "response_length":215 }, { "question":"How to flatten a tensor in PyTorch? Given a tensor of multiple dimensions, how do I flatten it so that it has a single dimension? ``` torch.Size([2, 3, 5]) \u27f6 flatten \u27f6 torch.Size([30]) ```", "response":"TL;DR: torch.flatten() Use torch.flatten() which was introduced in v0.4.1 and documented in v1.0rc1: ``` >>> t = torch.tensor([[[1, 2], [3, 4]], [[5, 6], [7, 8]]]) >>> torch.flatten(t) tensor([1, 2, 3, 4, 5, 6, 7, 8]) >>> torch.flatten(t, start_dim=1) tensor([[1, 2, 3, 4], [5, 6, 7, 8]]) ``` For v0.4.1 and earlier, use t.reshape(-1). With t.reshape(-1): If the requested view is contiguous in memory this will equivalent to t.view(-1) and memory will not be copied. Otherwise it will be equivalent to t.contiguous().view(-1). 
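The contiguity point can be demonstrated directly (a small sketch using a transposed, hence non-contiguous, tensor):

```python
import torch

t = torch.arange(6).reshape(2, 3).t()  # transposing makes the tensor non-contiguous
print(t.is_contiguous())               # False

print(t.reshape(-1))                   # works by copying: tensor([0, 3, 1, 4, 2, 5])

try:
    t.view(-1)                         # view cannot re-lay-out non-contiguous memory
except RuntimeError as err:
    print("view(-1) failed:", err)
```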
Other non-options: t.view(-1) won't copy memory, but may not work depending on original size and stride t.resize(-1) gives RuntimeError (see below) t.resize(t.numel()) warning about being a low-level method (see discussion below) (Note: pytorch's reshape() may change data but numpy's reshape() won't.) t.resize(t.numel()) needs some discussion. The torch.Tensor.resize_ documentation says: The storage is reinterpreted as C-contiguous, ignoring the current strides (unless the target size equals the current size, in which case the tensor is left unchanged) Given the current strides will be ignored with the new (1, numel()) size, the order of the elements may apppear in a different order than with reshape(-1). However, \"size\" may mean the memory size, rather than the tensor's size. It would be nice if t.resize(-1) worked for both convenience and efficiency, but with torch 1.0.1.post2, t = torch.rand([2, 3, 5]); t.resize(-1) gives: ``` RuntimeError: requested resize to -1 (-1 elements in total), but the given tensor has a size of 2x2 (4 elements). autograd's resize can only change the shape of a given tensor, while preserving the number of elements. ``` I raised a feature request for this here, but the consensus was that resize() was a low level method, and reshape() should be used in preference.", "best_answers_score":0.8, "library_name":"pytorch", "question_url":"https:\/\/stackoverflow.com\/questions\/55546873\/how-to-flatten-a-tensor-in-pytorch", "best_answers_votes":71, "question_length":188, "response_length":1839 }, { "question":"How to correctly give inputs to Embedding, LSTM and Linear layers in PyTorch? I need some clarity on how to correctly prepare inputs for batch-training using different components of the torch.nn module. Specifically, I'm looking to create an encoder-decoder network for a seq2seq model. 
Suppose I have a module with these three layers, in order: nn.Embedding nn.LSTM nn.Linear nn.Embedding Input: batch_size * seq_length Output: batch_size * seq_length * embedding_dimension I don't have any problems here, I just want to be explicit about the expected shape of the input and output. nn.LSTM Input: seq_length * batch_size * input_size (embedding_dimension in this case) Output: seq_length * batch_size * hidden_size last_hidden_state: batch_size * hidden_size last_cell_state: batch_size * hidden_size To use the output of the Embedding layer as input for the LSTM layer, I need to transpose axis 1 and 2. Many examples I've found online do something like x = embeds.view(len(sentence), self.batch_size , -1), but that confuses me. How does this view ensure that elements of the same batch remain in the same batch? What happens when len(sentence) and self.batch size are of same size? nn.Linear Input: batch_size x input_size (hidden_size of LSTM in this case or ??) Output: batch_size x output_size If I only need the last_hidden_state of LSTM, then I can give it as input to nn.Linear. But if I want to make use of Output (which contains all intermediate hidden states as well) then I need to change nn.Linear's input size to seq_length * hidden_size and to use Output as input to Linear module I need to transpose axis 1 and 2 of output and then I can view with Output_transposed(batch_size, -1). Is my understanding here correct? How do I carry out these transpose operations in tensors (tensor.transpose(0, 1))?", "response":"Your understanding of most of the concepts is accurate, but, there are some missing points here and there. Interfacing embedding to LSTM (Or any other recurrent unit) You have embedding output in the shape of (batch_size, seq_len, embedding_size). Now, there are various ways through which you can pass this to the LSTM. * You can pass this directly to the LSTM, if LSTM accepts input as batch_first. 
So, while creating your LSTM, pass the argument batch_first=True. * Or, you can pass input in the shape of (seq_len, batch_size, embedding_size). So, to convert your embedding output to this shape, you\u2019ll need to transpose the first and second dimensions using torch.transpose(tensor_name, 0, 1), like you mentioned. Q. I see many examples online which do something like x = embeds.view(len(sentence), self.batch_size , -1) which confuses me. A. This is wrong. It will mix up batches and you will be trying to learn a hopeless learning task. Wherever you see this, you can tell the author to change this statement and use transpose instead. There is an argument in favor of not using batch_first, which states that the underlying API provided by Nvidia CUDA runs considerably faster with batch as the secondary dimension. Using context size If you feed the embedding output directly to the LSTM, this fixes the input size of the LSTM to a context size of 1. This means that if your input is words to the LSTM, you will always be giving it one word at a time. But, this is not what we want all the time. So, you need to expand the context size. This can be done as follows - ``` # Assuming that embeds is the embedding output and context_size is a defined variable embeds = embeds.unfold(1, context_size, 1) # Keeping the step size to be 1 embeds = embeds.view(embeds.size(0), embeds.size(1), -1) ``` Unfold documentation Now, you can proceed as mentioned above to feed this to the LSTM, just remember that seq_len is now changed to seq_len - context_size + 1 and embedding_size (which is the input size of the LSTM) is now changed to context_size * embedding_size Using variable sequence lengths The input size of different instances in a batch will not always be the same. For example, some of your sentences might be 10 words long, some might be 15, and some might be 1000. So, you definitely want variable length sequence input to your recurrent unit.
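Before moving on, the two input layouts above can be made concrete with a small sketch (the sizes here are arbitrary examples of mine, not from the question):

```python
import torch
import torch.nn as nn

batch_size, seq_len, emb = 4, 7, 10
embeds = torch.randn(batch_size, seq_len, emb)  # typical embedding output

# Option 1: let the LSTM accept (batch, seq, features) directly.
lstm_bf = nn.LSTM(input_size=emb, hidden_size=20, batch_first=True)
out_bf, _ = lstm_bf(embeds)
print(out_bf.shape)  # torch.Size([4, 7, 20])

# Option 2: transpose to (seq, batch, features) -- NOT view/reshape,
# which would scramble elements across batches.
lstm = nn.LSTM(input_size=emb, hidden_size=20)
out, _ = lstm(embeds.transpose(0, 1))
print(out.shape)     # torch.Size([7, 4, 20])
```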
To do this, there are some additional steps that need to be performed before you can feed your input to the network. You can follow these steps - 1. Sort your batch from largest sequence to the smallest. 2. Create a seq_lengths array that defines the length of each sequence in the batch. (This can be a simple python list) 3. Pad all the sequences to be of equal length to the largest sequence. 4. Create a LongTensor Variable of this batch. 5. Now, after passing the above variable through embedding and creating the proper context size input, you\u2019ll need to pack your sequence as follows - ``` # Assuming embeds to be the proper input to the LSTM lstm_input = nn.utils.rnn.pack_padded_sequence(embeds, [x - context_size + 1 for x in seq_lengths], batch_first=False) ``` Understanding output of LSTM Now, once you have prepared your lstm_input according to your needs, you can call lstm as ``` lstm_outs, (h_t, h_c) = lstm(lstm_input, (h_t, h_c)) ``` Here, (h_t, h_c) needs to be provided as the initial hidden state and it will output the final hidden state. You can see why packing variable length sequences is required; otherwise the LSTM will run over the unnecessary padded words as well. Now, lstm_outs will be a packed sequence which is the output of the lstm at every step, and (h_t, h_c) are the final hidden state and the final cell state respectively. h_t and h_c will be of shape (batch_size, lstm_size). You can use these directly for further input, but if you want to use the intermediate outputs as well you\u2019ll need to unpack the lstm_outs first as below ``` lstm_outs, _ = nn.utils.rnn.pad_packed_sequence(lstm_outs) ``` Now, your lstm_outs will be of shape (max_seq_len - context_size + 1, batch_size, lstm_size). Now, you can extract the intermediate outputs of the lstm according to your need.
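Putting the sort, pad, pack, LSTM, unpack steps above together, here is a minimal runnable sketch (the sizes are my own toy choices, and the context-size bookkeeping from the answer is omitted for brevity):

```python
import torch
import torch.nn as nn

# Toy batch of 3 sequences, already sorted longest-first and padded,
# embedded to dimension 8, laid out as (max_seq_len, batch_size, emb).
seq_lengths = [5, 4, 2]
embeds = torch.randn(5, 3, 8)
lstm = nn.LSTM(input_size=8, hidden_size=16)

# Pack so the LSTM never runs over the padding.
packed = nn.utils.rnn.pack_padded_sequence(embeds, seq_lengths, batch_first=False)
packed_out, (h_t, h_c) = lstm(packed)

# Unpack to recover per-step outputs, zero-padded past each sequence length.
lstm_outs, out_lengths = nn.utils.rnn.pad_packed_sequence(packed_out)

print(lstm_outs.shape)  # (max_seq_len, batch_size, hidden)
print(h_t.shape)        # (num_layers * num_directions, batch_size, hidden)
```

Note that h_t here carries a leading num_layers * num_directions dimension (of size 1 for this single-layer, unidirectional LSTM), which you may squeeze away before feeding a linear layer.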
Remember that the unpacked output will have 0s after the size of each batch, which is just padding to match the length of the largest sequence (which is always the first one, as we sorted the input from largest to the smallest). Also note that, h_t will always be equal to the last element for each batch output. Interfacing lstm to linear Now, if you want to use just the output of the lstm, you can directly feed h_t to your linear layer and it will work. But, if you want to use intermediate outputs as well, then, you\u2019ll need to figure out, how are you going to input this to the linear layer (through some attention network or some pooling). You do not want to input the complete sequence to the linear layer, as different sequences will be of different lengths and you can\u2019t fix the input size of the linear layer. And yes, you\u2019ll need to transpose the output of lstm to be further used (Again you cannot use view here). Ending Note: I have purposefully left some points, such as using bidirectional recurrent cells, using step size in unfold, and interfacing attention, as they can get quite cumbersome and will be out of the scope of this answer.", "best_answers_score":0.8, "library_name":"pytorch", "question_url":"https:\/\/stackoverflow.com\/questions\/49466894\/how-to-correctly-give-inputs-to-embedding-lstm-and-linear-layers-in-pytorch", "best_answers_votes":64, "question_length":1818, "response_length":5278 }, { "question":"Random Choice with Pytorch? I have a tensor of pictures, and would like to randomly select from it. I'm looking for the equivalent of np.random.choice(). ``` import torch pictures = torch.randint(0, 256, (1000, 28, 28, 3)) ``` Let's say I want 10 of these pictures.", "response":"torch has no equivalent implementation of np.random.choice(), see the discussion here. The alternative is indexing with a shuffled index or random integers. 
To do it with replacement: Generate n random indices Index your original tensor with these indices ``` pictures[torch.randint(len(pictures), (10,))] ``` To do it without replacement: Shuffle the index Take the n first elements ``` indices = torch.randperm(len(pictures))[:10] pictures[indices] ``` Read more about torch.randint and torch.randperm. Second code snippet is inspired by this post in PyTorch Forums.", "best_answers_score":0.8, "library_name":"pytorch", "question_url":"https:\/\/stackoverflow.com\/questions\/59461811\/random-choice-with-pytorch", "best_answers_votes":72, "question_length":265, "response_length":568 }, { "question":"Get single random example from PyTorch DataLoader How do I get a single random example from a PyTorch DataLoader? If my DataLoader gives minbatches of multiple images and labels, how do I get a single random image and label? Note that I don't want a single image and label per minibatch, I want a total of one example.", "response":"If your DataLoader is something like this: ```py test_loader = DataLoader(image_datasets['val'], batch_size=batch_size, shuffle=True) ``` it is giving you a batch of size batch_size, and you can pick out a single random example by directly indexing the batch: ``` for test_images, test_labels in test_loader: sample_image = test_images[0] # Reshape them according to your needs. sample_label = test_labels[0] ``` Alternative solutions You can use RandomSampler to obtain random samples. Use a batch_size of 1 in your DataLoader. 
Directly take samples from your DataSet like so: ``` mnist_test = datasets.MNIST('..\/MNIST\/', train=False, transform=transform) ``` Now use this dataset to take samples: ``` for image, label in mnist_test: # do something with image and other attributes ``` (Probably the best) See here: ``` inputs, classes = next(iter(dataloader)) ```", "best_answers_score":0.8, "library_name":"pytorch", "question_url":"https:\/\/stackoverflow.com\/questions\/53570732\/get-single-random-example-from-pytorch-dataloader", "best_answers_votes":40, "question_length":318, "response_length":864 }, { "question":"Using torch.nn.DataParallel with a custom CUDA extension To my understanding, the built-in PyTorch operations all automatically handle batches through implicit vectorization, allowing parallelism across multiple GPUs. However, when writing a custom operation in CUDA as per the Documentation, the LLTM example given performs operations that are batch invariant, for example computing the gradient of the Sigmoid function elementwise. However, I have a use case that is not batch element invariant and not vectorizable. Running on a single GPU, I currently (inefficiently) loop over each element in the batch, performing a kernel launch for each, like so (written in the browser, just to demonstrate): ``` std::vector op_cuda_forward(at::Tensor input, at::Tensor elementSpecificParam) { auto output = at::zeros(torch::CUDA(\/* TYPE *\/), {\/* DIMENSIONS *\/}); const size_t blockDim = \/\/ const size_t gridDim = \/\/ const size_t = numBatches = \/\/ for (size_t i = 0; i >>(input[i], elementSpecificParam[i], output[i]); } return {output}; } ``` However, I wish to split this operation over multiple GPUs by batch element. How would the allocation of the output Tensor work in a multi-GPU scenario? Of course, one may create intermediate Tensors on each GPU before launching the appropriate kernel, however, the overhead of copying the input data to each GPU and back again would be problematic. 
Is there a simpler way to launch the kernels without first probing the environment for GPU information (# GPU's etc)? The end goal is to have a CUDA operation that works with torch.nn.DataParallel.", "response":"This is kind of unusual, as commonly \"Batch\" is exactly defined as all operations of the network being invariant along that dimension. So you could, for example, just introduce another dimension. So you have the \"former batch dimension\" in which your operation is not invariant. For this keep your current implementation. Then, parallelize over the new dimension of multiple \"actual batches\" of data. But, to stay closer to the question you asked, I see two options: As you said, inside your implementation figure out which original batch you are operating on (depending on total number of parallel splits, etc). This can become hairy. Consider your parameter as Part of Input! In your outside call, pass the parameter along your input data to the forward of your model. So (Pythonlike-Pseudocode): ``` Network(nn.Module): ... def forward(x, parameter): x=self.pre_modules(x) x=self.custom_module(x,parameter) return x parameter=torch.zeros(16,requires_grad=True) net=nn.DataParallel(model) net(input,parameter) ``` If your are willing to accept that this will be a leaky abstraction of the network and are mainly interested in getting things to work, I would try out the latter approach first.", "best_answers_score":0.8, "library_name":"pytorch", "question_url":"https:\/\/stackoverflow.com\/questions\/51400618\/using-torch-nn-dataparallel-with-a-custom-cuda-extension", "best_answers_votes":1, "question_length":1583, "response_length":1194 }, { "question":"How to Display Custom Images in Tensorboard (e.g. Matplotlib Plots)? The Image Dashboard section of the Tensorboard ReadMe says: Since the image dashboard supports arbitrary pngs, you can use this to embed custom visualizations (e.g. matplotlib scatterplots) into TensorBoard. 
I see how a pyplot image could be written to file, read back in as a tensor, and then used with tf.image_summary() to write it to TensorBoard, but this statement from the readme suggests there is a more direct way. Is there? If so, is there any further documentation and\/or examples of how to do this efficiently?", "response":"It is quite easy to do if you have the image in a memory buffer. Below, I show an example, where a pyplot is saved to a buffer and then converted to a TF image representation which is then sent to an image summary. ``` import io import matplotlib.pyplot as plt import tensorflow as tf def gen_plot(): \"\"\"Create a pyplot plot and save to buffer.\"\"\" plt.figure() plt.plot([1, 2]) plt.title(\"test\") buf = io.BytesIO() plt.savefig(buf, format='png') buf.seek(0) return buf # Prepare the plot plot_buf = gen_plot() # Convert PNG buffer to TF image image = tf.image.decode_png(plot_buf.getvalue(), channels=4) # Add the batch dimension image = tf.expand_dims(image, 0) # Add image summary summary_op = tf.summary.image(\"plot\", image) # Session with tf.Session() as sess: # Run summary = sess.run(summary_op) # Write summary writer = tf.train.SummaryWriter('.\/logs') writer.add_summary(summary) writer.close() ``` This gives the following TensorBoard visualization:", "best_answers_score":0.8, "library_name":"pytorch", "question_url":"https:\/\/stackoverflow.com\/questions\/38543850\/how-to-display-custom-images-in-tensorboard-e-g-matplotlib-plots", "best_answers_votes":48, "question_length":590, "response_length":958 }, { "question":"AttributeError: '_MultiProcessingDataLoaderIter' object has no attribute 'next' I am trying to load the dataset using Torch Dataset and DataLoader, but I got the following error: ``` AttributeError: '_MultiProcessingDataLoaderIter' object has no attribute 'next' ``` the code I use is: ``` class WineDataset(Dataset): def __init__(self): # Initialize data, download, etc. 
# read with numpy or pandas xy = np.loadtxt('.\/data\/wine.csv', delimiter=',', dtype=np.float32, skiprows=1) self.n_samples = xy.shape[0] # here the first column is the class label, the rest are the features self.x_data = torch.from_numpy(xy[:, 1:]) # size [n_samples, n_features] self.y_data = torch.from_numpy(xy[:, [0]]) # size [n_samples, 1] # support indexing such that dataset[i] can be used to get i-th sample def __getitem__(self, index): return self.x_data[index], self.y_data[index] # we can call len(dataset) to return the size def __len__(self): return self.n_samples dataset = WineDataset() train_loader = DataLoader(dataset=dataset, batch_size=4, shuffle=True, num_workers=2) ``` I tried setting num_workers=0, but I still have the same error. ``` Python version 3.8.9 PyTorch version 1.13.0 ```", "response":"I too faced the same issue, when I tried to call the next() method as follows ``` dataiter = iter(dataloader) data = dataiter.next() ``` You need to use the following instead and it works perfectly: ``` dataiter = iter(dataloader) data = next(dataiter) ``` Finally your code should look as follows: ``` class WineDataset(Dataset): def __init__(self): # Initialize data, download, etc.
# read with numpy or pandas xy = np.loadtxt('.\/data\/wine.csv', delimiter=',', dtype=np.float32, skiprows=1) self.n_samples = xy.shape[0] # here the first column is the class label, the rest are the features self.x_data = torch.from_numpy(xy[:, 1:]) # size [n_samples, n_features] self.y_data = torch.from_numpy(xy[:, [0]]) # size [n_samples, 1] # support indexing such that dataset[i] can be used to get i-th sample def __getitem__(self, index): return self.x_data[index], self.y_data[index] # we can call len(dataset) to return the size def __len__(self): return self.n_samples dataset = WineDataset() dataloader = DataLoader(dataset=dataset, batch_size=4, shuffle=True, num_workers=2) dataiter = iter(dataloader) data = next(dataiter) ```", "best_answers_score":0.8, "library_name":"pytorch", "question_url":"https:\/\/stackoverflow.com\/questions\/74289077\/attributeerror-multiprocessingdataloaderiter-object-has-no-attribute-next", "best_answers_votes":98, "question_length":1178, "response_length":1127 }, { "question":"How to apply layer-wise learning rate in Pytorch? I know that it is possible to freeze single layers in a network for example to train only the last layers of a pre-trained model. What I\u2019m looking for is a way to apply certain learning rates to different layers. So for example a very low learning rate of 0.000001 for the first layer and then increasing the learning rate gradually for each of the following layers. So that the last layer then ends up with a learning rate of 0.01 or so. Is this possible in pytorch? Any idea how I can achieve this?", "response":"Here is the solution: ``` from torch.optim import Adam model = Net() optim = Adam( [ {\"params\": model.fc.parameters(), \"lr\": 1e-3}, {\"params\": model.agroupoflayer.parameters()}, {\"params\": model.lastlayer.parameters(), \"lr\": 4e-2}, ], lr=5e-4, ) ``` Parameters that are not specified in the optimizer will not be optimized. 
So you should state all layers or groups (or only the layers you want to optimize). If you don't specify a learning rate for a group, it will take the global learning rate (5e-4). The trick is, when you create the model, to give names to the layers or to group them.", "best_answers_score":0.8, "library_name":"pytorch", "question_url":"https:\/\/stackoverflow.com\/questions\/51801648\/how-to-apply-layer-wise-learning-rate-in-pytorch", "best_answers_votes":77, "question_length":550, "response_length":585 }, { "question":"How to get an output dimension for each layer of the Neural Network in Pytorch? ``` class Model(nn.Module): def __init__(self): super(Model, self).__init__() self.net = nn.Sequential( nn.Conv2d(in_channels = 3, out_channels = 16), nn.ReLU(), nn.MaxPool2d(2), nn.Conv2d(in_channels = 16, out_channels = 16), nn.ReLU(), Flatten(), nn.Linear(4096, 64), nn.ReLU(), nn.Linear(64, 10)) def forward(self, x): return self.net(x) ``` I have created this model without firm knowledge of neural networks, and I just adjusted parameters until it worked in training. I am not sure how to get the output dimension for each layer (e.g. output dimension after the first layer). 
Is there an easy way to do this in Pytorch?", "response":"You can use torchsummary, for instance, for ImageNet dimension(3x224x224): ``` from torchvision import models from torchsummary import summary vgg = models.vgg16() summary(vgg, (3, 224, 224)) ---------------------------------------------------------------- Layer (type) Output Shape Param # ================================================================ Conv2d-1 [-1, 64, 224, 224] 1,792 ReLU-2 [-1, 64, 224, 224] 0 Conv2d-3 [-1, 64, 224, 224] 36,928 ReLU-4 [-1, 64, 224, 224] 0 MaxPool2d-5 [-1, 64, 112, 112] 0 Conv2d-6 [-1, 128, 112, 112] 73,856 ReLU-7 [-1, 128, 112, 112] 0 Conv2d-8 [-1, 128, 112, 112] 147,584 ReLU-9 [-1, 128, 112, 112] 0 MaxPool2d-10 [-1, 128, 56, 56] 0 Conv2d-11 [-1, 256, 56, 56] 295,168 ReLU-12 [-1, 256, 56, 56] 0 Conv2d-13 [-1, 256, 56, 56] 590,080 ReLU-14 [-1, 256, 56, 56] 0 Conv2d-15 [-1, 256, 56, 56] 590,080 ReLU-16 [-1, 256, 56, 56] 0 MaxPool2d-17 [-1, 256, 28, 28] 0 Conv2d-18 [-1, 512, 28, 28] 1,180,160 ReLU-19 [-1, 512, 28, 28] 0 Conv2d-20 [-1, 512, 28, 28] 2,359,808 ReLU-21 [-1, 512, 28, 28] 0 Conv2d-22 [-1, 512, 28, 28] 2,359,808 ReLU-23 [-1, 512, 28, 28] 0 MaxPool2d-24 [-1, 512, 14, 14] 0 Conv2d-25 [-1, 512, 14, 14] 2,359,808 ReLU-26 [-1, 512, 14, 14] 0 Conv2d-27 [-1, 512, 14, 14] 2,359,808 ReLU-28 [-1, 512, 14, 14] 0 Conv2d-29 [-1, 512, 14, 14] 2,359,808 ReLU-30 [-1, 512, 14, 14] 0 MaxPool2d-31 [-1, 512, 7, 7] 0 Linear-32 [-1, 4096] 102,764,544 ReLU-33 [-1, 4096] 0 Dropout-34 [-1, 4096] 0 Linear-35 [-1, 4096] 16,781,312 ReLU-36 [-1, 4096] 0 Dropout-37 [-1, 4096] 0 Linear-38 [-1, 1000] 4,097,000 ================================================================ Total params: 138,357,544 Trainable params: 138,357,544 Non-trainable params: 0 ---------------------------------------------------------------- Input size (MB): 0.57 Forward\/backward pass size (MB): 218.59 Params size (MB): 527.79 Estimated Total Size (MB): 746.96 
---------------------------------------------------------------- ``` Source: model-summary-in-pytorch", "best_answers_score":0.8, "library_name":"pytorch", "question_url":"https:\/\/stackoverflow.com\/questions\/55875279\/how-to-get-an-output-dimension-for-each-layer-of-the-neural-network-in-pytorch", "best_answers_votes":36, "question_length":706, "response_length":1981 }, { "question":"How to use PyTorch multiprocessing? I'm trying to use python's multiprocessing Pool method in pytorch to process a image. Here's the code: ``` from multiprocessing import Process, Pool from torch.autograd import Variable import numpy as np from scipy.ndimage import zoom def get_pred(args): img = args[0] scale = args[1] scales = args[2] img_scale = zoom(img.numpy(), (1., 1., scale, scale), order=1, prefilter=False, mode='nearest') # feed input data input_img = Variable(torch.from_numpy(img_scale), volatile=True).cuda() return input_img scales = [1,2,3,4,5] scale_list = [] for scale in scales: scale_list.append([img,scale,scales]) multi_pool = Pool(processes=5) predictions = multi_pool.map(get_pred,scale_list) multi_pool.close() multi_pool.join() ``` I'm getting this error: ``` `RuntimeError: Cannot re-initialize CUDA in forked subprocess. To use CUDA with multiprocessing, you must use the 'spawn' start method ``` ` In this line: ``` predictions = multi_pool.map(get_pred,scale_list) ``` Can anyone tell me what I'm doing wrong ?", "response":"As stated in pytorch documentation the best practice to handle multiprocessing is to use torch.multiprocessing instead of multiprocessing. Be aware that sharing CUDA tensors between processes is supported only in Python 3, either with spawn or forkserver as start method. 
Without touching your code, a workaround for the error you got is replacing ``` from multiprocessing import Process, Pool ``` with: ``` from torch.multiprocessing import Pool, Process, set_start_method try: set_start_method('spawn') except RuntimeError: pass ```", "best_answers_score":0.8, "library_name":"pytorch", "question_url":"https:\/\/stackoverflow.com\/questions\/48822463\/how-to-use-pytorch-multiprocessing", "best_answers_votes":52, "question_length":1041, "response_length":534 }, { "question":"Label Smoothing in PyTorch I'm building a ResNet-18 classification model for the Stanford Cars dataset using transfer learning. I would like to implement label smoothing to penalize overconfident predictions and improve generalization. TensorFlow has a simple keyword argument in CrossEntropyLoss. Has anyone built a similar function for PyTorch that I could plug-and-play with?", "response":"The generalization and learning speed of a multi-class neural network can often be significantly improved by using soft targets that are a weighted average of the hard targets and the uniform distribution over labels. Smoothing the labels in this way prevents the network from becoming over-confident and label smoothing has been used in many state-of-the-art models, including image classification, language translation, and speech recognition. Label Smoothing is already implemented in Tensorflow within the cross-entropy loss functions. BinaryCrossentropy, CategoricalCrossentropy. But currently, there is no official implementation of Label Smoothing in PyTorch. However, there is going an active discussion on it and hopefully, it will be provided with an official package. Here is that discussion thread: Issue #7455. Here We will bring some available best implementation of Label Smoothing (LS) from PyTorch practitioner. Basically, there are many ways to implement the LS. Please refer to this specific discussion on this, one is here, and another here. 
Here we will bring implementation in 2 unique ways with two versions of each; so total 4. Option 1: CrossEntropyLossWithProbs In this way, it accepts the one-hot target vector. The user must manually smooth their target vector. And it can be done within with torch.no_grad() scope, as it temporarily sets all of the requires_grad flags to false. Devin Yang: Source ``` import torch import numpy as np import torch.nn as nn import torch.nn.functional as F from torch.autograd import Variable from torch.nn.modules.loss import _WeightedLoss class LabelSmoothingLoss(nn.Module): def __init__(self, classes, smoothing=0.0, dim=-1, weight = None): \"\"\"if smoothing == 0, it's one-hot method if 0 < smoothing < 1, it's smooth method \"\"\" super(LabelSmoothingLoss, self).__init__() self.confidence = 1.0 - smoothing self.smoothing = smoothing self.weight = weight self.cls = classes self.dim = dim def forward(self, pred, target): assert 0 <= self.smoothing < 1 pred = pred.log_softmax(dim=self.dim) if self.weight is not None: pred = pred * self.weight.unsqueeze(0) with torch.no_grad(): true_dist = torch.zeros_like(pred) true_dist.fill_(self.smoothing \/ (self.cls - 1)) true_dist.scatter_(1, target.data.unsqueeze(1), self.confidence) return torch.mean(torch.sum(-true_dist * pred, dim=self.dim)) ``` Additionally, we've added an assertion checkmark on self. smoothing and added loss weighting support on this implementation. Shital Shah: Source Shital already posted the answer here. Here we're pointing out that this implementation is similar to Devin Yang's above implementation. However, here we're mentioning his code with minimizing a bit of code syntax. 
``` class SmoothCrossEntropyLoss(_WeightedLoss): def __init__(self, weight=None, reduction='mean', smoothing=0.0): super().__init__(weight=weight, reduction=reduction) self.smoothing = smoothing self.weight = weight self.reduction = reduction def k_one_hot(self, targets:torch.Tensor, n_classes:int, smoothing=0.0): with torch.no_grad(): targets = torch.empty(size=(targets.size(0), n_classes), device=targets.device) \\ .fill_(smoothing \/(n_classes-1)) \\ .scatter_(1, targets.data.unsqueeze(1), 1.-smoothing) return targets def reduce_loss(self, loss): return loss.mean() if self.reduction == 'mean' else loss.sum() \\ if self.reduction == 'sum' else loss def forward(self, inputs, targets): assert 0 <= self.smoothing < 1 targets = self.k_one_hot(targets, inputs.size(-1), self.smoothing) log_preds = F.log_softmax(inputs, -1) if self.weight is not None: log_preds = log_preds * self.weight.unsqueeze(0) return self.reduce_loss(-(targets * log_preds).sum(dim=-1)) ``` Check ``` import torch import numpy as np import torch.nn as nn import torch.nn.functional as F from torch.autograd import Variable from torch.nn.modules.loss import _WeightedLoss if __name__==\"__main__\": # 1. Devin Yang crit = LabelSmoothingLoss(classes=5, smoothing=0.5) predict = torch.FloatTensor([[0, 0.2, 0.7, 0.1, 0], [0, 0.9, 0.2, 0.2, 1], [1, 0.2, 0.7, 0.9, 1]]) v = crit(Variable(predict), Variable(torch.LongTensor([2, 1, 0]))) print(v) # 2. Shital Shah crit = SmoothCrossEntropyLoss(smoothing=0.5) predict = torch.FloatTensor([[0, 0.2, 0.7, 0.1, 0], [0, 0.9, 0.2, 0.2, 1], [1, 0.2, 0.7, 0.9, 1]]) v = crit(Variable(predict), Variable(torch.LongTensor([2, 1, 0]))) print(v) tensor(1.4178) tensor(1.4178) ``` Option 2: LabelSmoothingCrossEntropyLoss With this approach, the loss accepts the raw target vector, and the user doesn't manually smooth it; rather, the built-in module takes care of the label smoothing. It allows us to implement label smoothing in terms of F.nll_loss. (a). 
Wangleiofficial: Source - (AFAIK), Original Poster (b). Datasaurus: Source - Added Weighting Support Further, we slightly minimize the coding write-up to make it more concise. ``` class LabelSmoothingLoss(torch.nn.Module): def __init__(self, smoothing: float = 0.1, reduction=\"mean\", weight=None): super(LabelSmoothingLoss, self).__init__() self.smoothing = smoothing self.reduction = reduction self.weight = weight def reduce_loss(self, loss): return loss.mean() if self.reduction == 'mean' else loss.sum() \\ if self.reduction == 'sum' else loss def linear_combination(self, x, y): return self.smoothing * x + (1 - self.smoothing) * y def forward(self, preds, target): assert 0 <= self.smoothing < 1 if self.weight is not None: self.weight = self.weight.to(preds.device) n = preds.size(-1) log_preds = F.log_softmax(preds, dim=-1) loss = self.reduce_loss(-log_preds.sum(dim=-1)) nll = F.nll_loss( log_preds, target, reduction=self.reduction, weight=self.weight ) return self.linear_combination(loss \/ n, nll) ``` NVIDIA\/DeepLearningExamples: Source ``` class LabelSmoothing(nn.Module): \"\"\"NLL loss with label smoothing. \"\"\" def __init__(self, smoothing=0.0): \"\"\"Constructor for the LabelSmoothing module. 
:param smoothing: label smoothing factor \"\"\" super(LabelSmoothing, self).__init__() self.confidence = 1.0 - smoothing self.smoothing = smoothing def forward(self, x, target): logprobs = torch.nn.functional.log_softmax(x, dim=-1) nll_loss = -logprobs.gather(dim=-1, index=target.unsqueeze(1)) nll_loss = nll_loss.squeeze(1) smooth_loss = -logprobs.mean(dim=-1) loss = self.confidence * nll_loss + self.smoothing * smooth_loss return loss.mean() ``` Check ``` if __name__==\"__main__\": # Wangleiofficial crit = LabelSmoothingLoss(smoothing=0.3, reduction=\"mean\") predict = torch.FloatTensor([[0, 0.2, 0.7, 0.1, 0], [0, 0.9, 0.2, 0.2, 1], [1, 0.2, 0.7, 0.9, 1]]) v = crit(Variable(predict), Variable(torch.LongTensor([2, 1, 0]))) print(v) # NVIDIA crit = LabelSmoothing(smoothing=0.3) predict = torch.FloatTensor([[0, 0.2, 0.7, 0.1, 0], [0, 0.9, 0.2, 0.2, 1], [1, 0.2, 0.7, 0.9, 1]]) v = crit(Variable(predict), Variable(torch.LongTensor([2, 1, 0]))) print(v) tensor(1.3883) tensor(1.3883) ``` Update: Officially Added ``` torch.nn.CrossEntropyLoss(weight=None, size_average=None, ignore_index=- 100, reduce=None, reduction='mean', label_smoothing=0.0) ```", "best_answers_score":0.8, "library_name":"pytorch", "question_url":"https:\/\/stackoverflow.com\/questions\/55681502\/label-smoothing-in-pytorch", "best_answers_votes":52, "question_length":378, "response_length":7026 }, { "question":"Pytorch detection of CUDA Which is the command to see the \"correct\" CUDA Version that pytorch in conda env is seeing? This, is a similar question, but doesn't get me far. nvidia-smi says I have cuda version 10.1 conda list tells me cudatoolkit version is 10.2.89 torch.cuda.is_available() shows FALSE, so it sees No CUDA? print(torch.cuda.current_device()), I get 10.0.10 (10010??) (it looks like): AssertionError: The NVIDIA driver on your system is too old (found version 10010) print(torch._C._cuda_getCompiledVersion(), 'cuda compiled version') tells me my version is 10.0.20 (10020??)? 
10020 cuda compiled version Why are there so many different versions? What am I missing? P.S. I have Nvidia driver 430 on Ubuntu 16.04 with a Geforce 1050. It comes with libcuda1-430 when I installed the driver from the Additional Drivers tab in Ubuntu (Software and Updates). I installed pytorch with conda, which also installed the cudatoolkit, using conda install -c fastai -c pytorch -c anaconda fastai", "response":"In the conda env (myenv) where pytorch is installed, do the following: ``` conda activate myenv torch.version.cuda ``` Nvidia-smi only shows the compatible version. It does not seem to say anything about the version pytorch's own cuda is built on.", "best_answers_score":0.8, "library_name":"pytorch", "question_url":"https:\/\/stackoverflow.com\/questions\/64089854\/pytorch-detection-of-cuda", "best_answers_votes":53, "question_length":988, "response_length":231 }, { "question":"Whenever I try to install torch, it displays killed I just want to install pytorch, I ran this in the terminal: ``` pip install torch ``` And it displays: ``` Collecting torch Killed ``` What is the problem?", "response":"It says your free RAM is not enough to install the package, but there is a method that lets you still install it: ``` pip install torch --no-cache-dir ```", "best_answers_score":0.8, "library_name":"pytorch", "question_url":"https:\/\/stackoverflow.com\/questions\/62301268\/whenever-i-try-to-install-torch-it-displays-killed", "best_answers_votes":116, "question_length":207, "response_length":154 }, { "question":"No N-dimensional transpose in PyTorch PyTorch's torch.transpose function only transposes 2D inputs. Documentation is here. On the other hand, Tensorflow's tf.transpose function allows you to transpose a tensor of N arbitrary dimensions. Can someone please explain why PyTorch does not\/cannot have N-dimension transpose functionality? 
Is this due to the dynamic nature of the computation graph construction in PyTorch versus Tensorflow's Define-then-Run paradigm?", "response":"It's simply called differently in pytorch. torch.Tensor.permute will allow you to swap dimensions in pytorch like tf.transpose does in TensorFlow. As an example of how you'd convert a 4D image tensor from NHWC to NCHW (not tested, so might contain bugs): ``` >>> img_nhwc = torch.randn(10, 480, 640, 3) >>> img_nhwc.size() torch.Size([10, 480, 640, 3]) >>> img_nchw = img_nhwc.permute(0, 3, 1, 2) >>> img_nchw.size() torch.Size([10, 3, 480, 640]) ```", "best_answers_score":0.8, "library_name":"pytorch", "question_url":"https:\/\/stackoverflow.com\/questions\/44841654\/no-n-dimensional-tranpose-in-pytorch", "best_answers_votes":57, "question_length":461, "response_length":450 }, { "question":"How to construct a network with two inputs in PyTorch Suppose I want to have the general neural network architecture: ``` Input1 --> CNNLayer \\ ---> FCLayer ---> Output \/ Input2 --> FCLayer ``` Input1 is image data, input2 is non-image data. I have implemented this architecture in Tensorflow. All pytorch examples I have found are one input go through each layer. How can I define forward func to process 2 inputs separately then combine them in a middle layer?", "response":"By \"combine them\" I assume you mean to concatenate the two inputs. Assuming you concat along the second dimension: ``` import torch from torch import nn class TwoInputsNet(nn.Module): def __init__(self): super(TwoInputsNet, self).__init__() self.conv = nn.Conv2d( ... ) # set up your layer here self.fc1 = nn.Linear( ... ) # set up first FC layer self.fc2 = nn.Linear( ... 
) # set up the other FC layer def forward(self, input1, input2): c = self.conv(input1) f = self.fc1(input2) # now we can reshape `c` and `f` to 2D and concat them combined = torch.cat((c.view(c.size(0), -1), f.view(f.size(0), -1)), dim=1) out = self.fc2(combined) return out ``` Note that when you define the number of inputs to self.fc2 you need to take into account both out_channels of self.conv as well as the output spatial dimensions of c.", "best_answers_score":0.8, "library_name":"pytorch", "question_url":"https:\/\/stackoverflow.com\/questions\/51700729\/how-to-construct-a-network-with-two-inputs-in-pytorch", "best_answers_votes":39, "question_length":462, "response_length":818 }, { "question":"PyTorch: What's the difference between state_dict and parameters()? In order to access a model's parameters in pytorch, I saw two methods: using state_dict and using parameters() I wonder what's the difference, or if one is good practice and the other is bad practice. Thanks", "response":"The parameters() only gives the module parameters i.e. weights and biases. Returns an iterator over module parameters. You can check the list of the parameters as follows: ``` for name, param in model.named_parameters(): if param.requires_grad: print(name) ``` On the other hand, state_dict returns a dictionary containing a whole state of the module. Check its source code that contains not just the call to parameters but also buffers, etc. Both parameters and persistent buffers (e.g. running averages) are included. Keys are the corresponding parameter and buffer names. Check all keys that state_dict contains using: ``` model.state_dict().keys() ``` For example, in state_dict, you'll find entries like bn1.running_mean and running_var, which are not present in .parameters(). 
If you only want to access parameters, you can simply use .parameters(), while for purposes like saving and loading a model, as in transfer learning, you'll need to save the state_dict, not just the parameters.", "best_answers_score":0.8, "library_name":"pytorch", "question_url":"https:\/\/stackoverflow.com\/questions\/54746829\/pytorch-whats-the-difference-between-state-dict-and-parameters", "best_answers_votes":31, "question_length":275, "response_length":981 }, { "question":"Understanding a simple LSTM pytorch ``` import torch,ipdb import torch.autograd as autograd import torch.nn as nn import torch.nn.functional as F import torch.optim as optim from torch.autograd import Variable rnn = nn.LSTM(input_size=10, hidden_size=20, num_layers=2) input = Variable(torch.randn(5, 3, 10)) h0 = Variable(torch.randn(2, 3, 20)) c0 = Variable(torch.randn(2, 3, 20)) output, hn = rnn(input, (h0, c0)) ``` This is the LSTM example from the docs. I don't understand the following things: What is output-size and why is it not specified anywhere? Why does the input have 3 dimensions? What do 5 and 3 represent? What are 2 and 3 in h0 and c0, and what do those represent?
Edit: ``` import torch,ipdb import torch.autograd as autograd import torch.nn as nn import torch.nn.functional as F import torch.optim as optim from torch.autograd import Variable import torch.nn.functional as F num_layers=3 num_hyperparams=4 batch = 1 hidden_size = 20 rnn = nn.LSTM(input_size=num_hyperparams, hidden_size=hidden_size, num_layers=num_layers) input = Variable(torch.randn(1, batch, num_hyperparams)) # (seq_len, batch, input_size) h0 = Variable(torch.randn(num_layers, batch, hidden_size)) # (num_layers, batch, hidden_size) c0 = Variable(torch.randn(num_layers, batch, hidden_size)) output, hn = rnn(input, (h0, c0)) affine1 = nn.Linear(hidden_size, num_hyperparams) ipdb.set_trace() print output.size() print h0.size() ``` *** RuntimeError: matrices expected, got 3D, 2D tensors at", "response":"The output for the LSTM is the output for all the hidden nodes on the final layer. hidden_size - the number of LSTM blocks per layer. input_size - the number of input features per time-step. num_layers - the number of hidden layers. In total there are hidden_size * num_layers LSTM blocks. The input dimensions are (seq_len, batch, input_size). seq_len - the number of time steps in each input stream. batch - the size of each batch of input sequences. The hidden and cell dimensions are: (num_layers, batch, hidden_size) output (seq_len, batch, hidden_size * num_directions): tensor containing the output features (h_t) from the last layer of the RNN, for each t. So there will be hidden_size * num_directions outputs. You didn't initialise the RNN to be bidirectional so num_directions is 1. So output_size = hidden_size. Edit: You can change the number of outputs by using a linear layer: ``` out_rnn, hn = rnn(input, (h0, c0)) lin = nn.Linear(hidden_size, output_size) v1 = nn.View(seq_len*batch, hidden_size) v2 = nn.View(seq_len, batch, output_size) output = v2(lin(v1(out_rnn))) ``` Note: for this answer I assumed that we're only talking about non-bidirectional LSTMs. 
Source: PyTorch docs.", "best_answers_score":0.8, "library_name":"pytorch", "question_url":"https:\/\/stackoverflow.com\/questions\/45022734\/understanding-a-simple-lstm-pytorch", "best_answers_votes":47, "question_length":1488, "response_length":1198 }, { "question":"What is the difference between Tensor.size and Tensor.shape in PyTorch? What is the difference between Tensor.size and Tensor.shape in Pytorch? I want to get the number of elements and the dimensions of Tensor. For example for a tensor with the dimensions of 2 by 3 by 4 I expect 24 for number of elements and (2,3,4) for dimension. Thanks.", "response":".shape is an alias for .size(), and was added to more closely match numpy, see this discussion here.", "best_answers_score":0.8, "library_name":"pytorch", "question_url":"https:\/\/stackoverflow.com\/questions\/63263292\/what-is-the-difference-between-tensor-size-and-tensor-shape-in-pytorch", "best_answers_votes":53, "question_length":340, "response_length":100 }, { "question":"Pytorch LSTM vs LSTMCell What is the difference between LSTM and LSTMCell in Pytorch (currently version 1.1)? It seems that LSTMCell is a special case of LSTM (i.e. with only one layer, unidirectional, no dropout). Then, what's the purpose of having both implementations? Unless I'm missing something, it's trivial to use an LSTM object as an LSTMCell (or alternatively, it's pretty easy to use multiple LSTMCells to create the LSTM object)", "response":"Yes, you can emulate one with the other; the reason for having them separate is efficiency. LSTMCell is a cell that takes as arguments: an input of shape batch \u00d7 input dimension; a tuple of LSTM hidden states of shape batch \u00d7 hidden dimension. It is a straightforward implementation of the equations. LSTM is a layer applying an LSTM cell (or multiple LSTM cells) in a \"for loop\", but the loop is heavily optimized using cuDNN.
Its input is: a three-dimensional tensor of inputs of shape batch \u00d7 input length \u00d7 input dimension; optionally, an initial state of the LSTM, i.e., a tuple of hidden states of shape batch \u00d7 hidden dim (or a tuple of such tuples if the LSTM is bidirectional). You might often want to use the LSTM cell in a different context than applying it over a sequence, e.g. to make an LSTM that operates over a tree-like structure. When you write a decoder in sequence-to-sequence models, you also call the cell in a loop and stop the loop when the end-of-sequence symbol is decoded.", "best_answers_score":0.8, "library_name":"pytorch", "question_url":"https:\/\/stackoverflow.com\/questions\/57048120\/pytorch-lstm-vs-lstmcell", "best_answers_votes":51, "question_length":440, "response_length":981 }, { "question":"RuntimeError: Expected object of type torch.DoubleTensor but found type torch.FloatTensor for argument #2 'weight' My input tensor is torch.DoubleTensor type. But I got the RuntimeError below: ``` RuntimeError: Expected object of type torch.DoubleTensor but found type torch.FloatTensor for argument #2 'weight' ``` I didn't specify the type of the weight explicitly (i.e. I did not initialize the weight myself; the weight is created by PyTorch). What will influence the type of weight in the forward process? Thanks a lot!!", "response":"The default type for weights and biases is torch.FloatTensor. So, you'll need to cast either your model to torch.DoubleTensor or cast your inputs to torch.FloatTensor.
For casting your inputs you can do ``` X = X.float() ``` or cast your complete model to DoubleTensor as ``` model = model.double() ``` You can also set the default type for all tensors using ``` torch.set_default_tensor_type('torch.DoubleTensor') ``` It is better to convert your inputs to float rather than converting your model to double, because mathematical computations on the double datatype are considerably slower on GPU.", "best_answers_score":0.8, "library_name":"pytorch", "question_url":"https:\/\/stackoverflow.com\/questions\/49407303\/runtimeerror-expected-object-of-type-torch-doubletensor-but-found-type-torch-fl", "best_answers_votes":39, "question_length":520, "response_length":595 }, { "question":"Understanding accumulated gradients in PyTorch I am trying to comprehend the inner workings of gradient accumulation in PyTorch. My question is somewhat related to these two: Why do we need to call zero_grad() in PyTorch? Why do we need to explicitly call zero_grad()? Comments to the accepted answer to the second question suggest that accumulated gradients can be used if a minibatch is too large to perform a gradient update in a single forward pass, and thus has to be split into multiple sub-batches.
Consider the following toy example: ``` import numpy as np import torch class ExampleLinear(torch.nn.Module): def __init__(self): super().__init__() # Initialize the weight at 1 self.weight = torch.nn.Parameter(torch.Tensor([1]).float(), requires_grad=True) def forward(self, x): return self.weight * x if __name__ == \"__main__\": # Example 1 model = ExampleLinear() # Generate some data x = torch.from_numpy(np.array([4, 2])).float() y = 2 * x optimizer = torch.optim.SGD(model.parameters(), lr=0.01) y_hat = model(x) # forward pass loss = (y - y_hat) ** 2 loss = loss.mean() # MSE loss loss.backward() # backward pass optimizer.step() # weight update print(model.weight.grad) # tensor([-20.]) print(model.weight) # tensor([1.2000]) ``` Which is exactly the result one would expect. Now assume that we want to process the dataset sample-by-sample utilizing gradient accumulation: ``` # Example 2: MSE sample-by-sample model2 = ExampleLinear() optimizer = torch.optim.SGD(model2.parameters(), lr=0.01) # Compute loss sample-by-sample, then average it over all samples loss = [] for k in range(len(y)): y_hat = model2(x[k]) loss.append((y[k] - y_hat) ** 2) loss = sum(loss) \/ len(y) loss.backward() # backward pass optimizer.step() # weight update print(model2.weight.grad) # tensor([-20.]) print(model2.weight) # tensor([1.2000]) ``` Again as expected, the gradient is calculated when the .backward() method is called. Finally to my question: what exactly happens 'under the hood'? My understanding is that the computational graph is dynamically updated going from inputs to operations for the loss variable, and that no information about the data used for each forward pass is retained anywhere except for the loss tensor, which can be updated until the backward pass. Are there any caveats to the reasoning in the above paragraph? Lastly, are there any best practices to follow when using gradient accumulation (i.e.
can the approach I use in Example 2 backfire somehow)?", "response":"You are not actually accumulating gradients. Just leaving off optimizer.zero_grad() has no effect if you have a single .backward() call, as the gradients are already zero to begin with (technically None but they will be automatically initialised to zero). The only difference between your two versions is how you calculate the final loss. The for loop of the second example does the same calculations as PyTorch does in the first example, but you do them individually, and PyTorch cannot optimise (parallelise and vectorise) your for loop, which makes an especially staggering difference on GPUs, granted that the tensors aren't tiny. Before getting to gradient accumulation, let's start with your question: Finally to my question: what exactly happens 'under the hood'? Every operation on tensors is tracked in a computational graph if and only if one of the operands is already part of a computational graph. When you set requires_grad=True on a tensor, it creates a computational graph with a single vertex, the tensor itself, which will remain a leaf in the graph. Any operation with that tensor will create a new vertex, which is the result of the operation, hence there is an edge from the operands to it, tracking the operation that was performed. ```py a = torch.tensor(2.0, requires_grad=True) b = torch.tensor(4.0) c = a + b # => tensor(6., grad_fn=<AddBackward0>) a.requires_grad # => True a.is_leaf # => True b.requires_grad # => False b.is_leaf # => True c.requires_grad # => True c.is_leaf # => False ``` Every intermediate tensor automatically requires gradients and has a grad_fn, which is the function to calculate the partial derivatives with respect to its inputs. Thanks to the chain rule, we can traverse the whole graph in reverse order to calculate the derivatives with respect to every single leaf, which are the parameters we want to optimise.
That's the idea of backpropagation, also known as reverse mode differentiation. For more details I recommend reading Calculus on Computational Graphs: Backpropagation. PyTorch uses that exact idea: when you call loss.backward() it traverses the graph in reverse order, starting from loss, and calculates the derivatives for each vertex. Whenever a leaf is reached, the calculated derivative for that tensor is stored in its .grad attribute. In your first example, that would lead to: ``` MeanBackward -> PowBackward -> SubBackward -> MulBackward ``` The second example is almost identical, except that you calculate the mean manually, and instead of having a single path for the loss, you have multiple paths for each element of the loss calculation. To clarify, the single path also calculates the derivatives of each element, but internally, which again opens up the possibilities for some optimisations. ```py # Example 1 loss = (y - y_hat) ** 2 # => tensor([16., 4.], grad_fn=<PowBackward0>) # Example 2 loss = [] for k in range(len(y)): y_hat = model2(x[k]) loss.append((y[k] - y_hat) ** 2) loss # => [tensor([16.], grad_fn=<PowBackward0>), tensor([4.], grad_fn=<PowBackward0>)] ``` In either case a single graph is created that is backpropagated exactly once; that's the reason it's not considered gradient accumulation. Gradient Accumulation Gradient accumulation refers to the situation where multiple backward passes are performed before updating the parameters. The goal is to have the same model parameters for multiple inputs (batches) and then update the model's parameters based on all these batches, instead of performing an update after every single batch. Let's revisit your example. x has size [2], that's the size of our entire dataset. For some reason, we need to calculate the gradients based on the whole dataset. That is naturally the case when using a batch size of 2, since we would have the whole dataset at once. But what happens if we can only have batches of size 1?
We could run them individually and update the model after each batch as usual, but then we don't calculate the gradients over the whole dataset. What we need to do, is run each sample individually with the same model parameters and calculate the gradients without updating the model. Now you might be thinking, isn't that what you did in the second version? Almost, but not quite, and there is a crucial problem in your version, namely that you are using the same amount of memory as in the first version, because you have the same calculations and therefore the same number of values in the computational graph. How do we free memory? We need to get rid of the tensors of the previous batch and also the computational graph, because that uses a lot of memory to keep track of everything that's necessary for the backpropagation. The computational graph is automatically destroyed when .backward() is called (unless retain_graph=True is specified). ```py def calculate_loss(x: torch.Tensor) -> torch.Tensor: y = 2 * x y_hat = model(x) loss = (y - y_hat) ** 2 return loss.mean() # With mulitple batches of size 1 batches = [torch.tensor([4.0]), torch.tensor([2.0])] optimizer.zero_grad() for i, batch in enumerate(batches): # The loss needs to be scaled, because the mean should be taken across the whole # dataset, which requires the loss to be divided by the number of batches. 
loss = calculate_loss(batch) \/ len(batches) loss.backward() print(f\"Batch size 1 (batch {i}) - grad: {model.weight.grad}\") print(f\"Batch size 1 (batch {i}) - weight: {model.weight}\") # Updating the model only after all batches optimizer.step() print(f\"Batch size 1 (final) - grad: {model.weight.grad}\") print(f\"Batch size 1 (final) - weight: {model.weight}\") ``` Output (I removed the Parameter containing messages for readability): ``` Batch size 1 (batch 0) - grad: tensor([-16.]) Batch size 1 (batch 0) - weight: tensor([1.], requires_grad=True) Batch size 1 (batch 1) - grad: tensor([-20.]) Batch size 1 (batch 1) - weight: tensor([1.], requires_grad=True) Batch size 1 (final) - grad: tensor([-20.]) Batch size 1 (final) - weight: tensor([1.2000], requires_grad=True) ``` As you can see, the model kept the same parameter for all batches, while the gradients were accumulated, and there is a single update at the end. Note that the loss needs to be scaled per batch, in order to have the same significance over the whole dataset as if you used a single batch. While in this example the whole dataset is used before performing the update, you can easily change that to update the parameters after a certain number of batches, but you have to remember to zero out the gradients after an optimiser step was taken.
The general recipe would be: ```py accumulation_steps = 10 for i, batch in enumerate(batches): # Scale the loss to the mean of the accumulated batch size loss = calculate_loss(batch) \/ accumulation_steps loss.backward() if (i + 1) % accumulation_steps == 0: optimizer.step() # Reset gradients, for the next accumulated batches optimizer.zero_grad() ``` You can find that recipe and more techniques for working with large batch sizes in HuggingFace - Training Neural Nets on Larger Batches: Practical Tips for 1-GPU, Multi-GPU & Distributed setups.", "best_answers_score":0.8, "library_name":"pytorch", "question_url":"https:\/\/stackoverflow.com\/questions\/62067400\/understanding-accumulated-gradients-in-pytorch", "best_answers_votes":64, "question_length":2471, "response_length":7055 }, { "question":"What is the difference between nn.Module and nn.Sequential I am just learning to use PyTorch as a beginner. If anyone is familiar with PyTorch, would you tell me the difference between nn.Module and nn.Sequential? My questions are What is the advantage of using nn.Module instead of nn.Sequential? Which is regularly utilised to build the model? How should we select nn.Module or nn.Sequential?", "response":"TLDR; answering your questions What is the advantage of using nn.Module instead of nn.Sequential? While nn.Module is the base class to implement PyTorch models, nn.Sequential is a quick way to define sequential neural network structures inside or outside an existing nn.Module. Which is regularly utilized to build the model? Both are widely used. How should we select nn.Module or nn.Sequential? All neural networks are implemented with nn.Module. If the layers are sequentially used (self.layer3(self.layer2(self.layer1(x)))), you can leverage nn.Sequential to not have to define the forward function of the model. I should start by mentioning that nn.Module is the base class for all neural network modules in PyTorch.
As such, nn.Sequential is actually a direct subclass of nn.Module; you can look for yourself on this line. When creating a new neural network, you would usually create a new class inheriting from nn.Module and define two methods: __init__ (the initializer, where you define your layers) and forward (the inference code of your module, where you use your layers). That's all you need, since PyTorch will handle the backward pass with Autograd. Here is an example of a module: ``` class NN(nn.Module): def __init__(self): super().__init__() self.fc1 = nn.Linear(10, 4) self.fc2 = nn.Linear(4, 2) def forward(self, x): x = F.relu(self.fc1(x)) x = F.relu(self.fc2(x)) return x ``` If the model you are defining is sequential, i.e. the layers are called sequentially on the input, one by one, then you can simply use nn.Sequential. As I explained earlier, nn.Sequential is a special kind of nn.Module made for this particular widespread type of neural network. The equivalent here is: ``` class NN(nn.Sequential): def __init__(self): super().__init__( nn.Linear(10, 4), nn.ReLU(), nn.Linear(4, 2), nn.ReLU()) ``` Or a simpler way of putting it is: ``` NN = nn.Sequential( nn.Linear(10, 4), nn.ReLU(), nn.Linear(4, 2), nn.ReLU()) ``` The objective of nn.Sequential is to quickly implement sequential modules such that you are not required to write the forward definition, it being implicitly known because the layers are sequentially called on the outputs. In a more complicated module though, you might need to use multiple sequential submodules.
For instance, take a CNN classifier: you could define an nn.Sequential for the CNN part, then define another nn.Sequential for the fully connected classifier section of the model.", "best_answers_score":0.8, "library_name":"pytorch", "question_url":"https:\/\/stackoverflow.com\/questions\/68606661\/what-is-difference-between-nn-module-and-nn-sequential", "best_answers_votes":64, "question_length":388, "response_length":2453 }, { "question":"Why is PyTorch called PyTorch? [closed] I have been looking into deep learning frameworks lately and have been wondering about the origin of the name of PyTorch. With Keras, their home page nicely explains the name's origin, and with something like TensorFlow, the reasoning behind the name seems rather clear. For PyTorch, however, I cannot seem to come across why it is so named. Of course, I understand the \"Py-\" prefix and also know that PyTorch is a successor in some sense of Torch. But I am still wondering: what is the original idea behind the \"-Torch\" part? Is it known what the origin of the name is?", "response":"Here is a short answer, formed as another question: Torch, SMORCH ??? PyTorch developed from Torch7. A precursor to the original Torch was a library called SVM-Torch, which was developed around 2001. The SVM stands for Support Vector Machines. SVM-Torch is a decomposition algorithm similar to SVM-Light, but adapted to regression problems, according to this paper.
Also around this time, G. W. Flake described the sequential minimal optimization algorithm (SMO), which could be used to train SVMs on sparse data sets, and this was incorporated into NODElib. Interestingly, this was called the SMORCH algorithm. You can find out more about SMORCH in the NODElib docs: \"Optimization of the SVMs is performed by a variation of John Platt's sequential minimal optimization (SMO) algorithm. This version of SMO is generalized for regression, uses kernel caching, and incorporates several heuristics; for these reasons, we refer to the optimization algorithm as SMORCH.\" So SMORCH = Sequential Minimal Optimization Regression Caching Heuristics. I can't answer definitively, but my thinking is \"Torch\" is a riff or evolution of \"Light\" from SVM-Light combined with a large helping of SMORCHiness. You'd need to check in with the authors of SVMTorch and SVM-Light to confirm that this is indeed what \"sparked\" the name. It is reasonable to assume that the \"TO\" of Torch stands for some other optimization, rather than SMO, such as Tensor Optimization, but I haven't found any direct reference... yet.", "best_answers_score":0.8, "library_name":"pytorch", "question_url":"https:\/\/stackoverflow.com\/questions\/51530778\/why-is-pytorch-called-pytorch", "best_answers_votes":37, "question_length":985, "response_length":1486 }, { "question":"What is the class definition of nn.Linear in PyTorch? What is self.hidden in the following code? ``` import torch.nn as nn import torch.nn.functional as F class Network(nn.Module): def __init__(self): super().__init__() self.hidden = nn.Linear(784, 256) self.output = nn.Linear(256, 10) def forward(self, x): x = F.sigmoid(self.hidden(x)) x = F.softmax(self.output(x), dim=1) return x ``` self.hidden is nn.Linear and it can take a tensor x as argument.", "response":"What is the class definition of nn.Linear in pytorch?
From documentation: CLASS torch.nn.Linear(in_features, out_features, bias=True) Applies a linear transformation to the incoming data: y = x*W^T + b Parameters: in_features \u2013 size of each input sample (i.e. size of x) out_features \u2013 size of each output sample (i.e. size of y) bias \u2013 If set to False, the layer will not learn an additive bias. Default: True Note that the weights W have shape (out_features, in_features) and biases b have shape (out_features). They are initialized randomly and can be changed later (e.g. during the training of a Neural Network they are updated by some optimization algorithm). In your Neural Network, the self.hidden = nn.Linear(784, 256) defines a hidden (meaning that it is in between the input and output layers), fully connected linear layer, which takes input x of shape (batch_size, 784), where batch size is the number of inputs (each of size 784) which are passed to the network at once (as a single tensor), and transforms it by the linear equation y = x*W^T + b into a tensor y of shape (batch_size, 256). It is further transformed by the sigmoid function, x = F.sigmoid(self.hidden(x)) (which is not a part of the nn.Linear but an additional step). Let's see a concrete example: ``` import torch import torch.nn as nn x = torch.tensor([[1.0, -1.0], [0.0, 1.0], [0.0, 0.0]]) in_features = x.shape[1] # = 2 out_features = 2 m = nn.Linear(in_features, out_features) ``` where x contains three inputs (i.e. the batch size is 3), x[0], x[1] and x[2], each of size 2, and the output is going to be of shape (batch size, out_features) = (3, 2).
The values of the parameters (weights and biases) are: ``` >>> m.weight tensor([[-0.4500, 0.5856], [-0.1807, -0.4963]]) >>> m.bias tensor([ 0.2223, -0.6114]) ``` (because they were initialized randomly, most likely you will get different values from the above) The output is: ``` >>> y = m(x) tensor([[-0.8133, -0.2959], [ 0.8079, -1.1077], [ 0.2223, -0.6114]]) ``` and (behind the scenes) it is computed as: ``` y = x.matmul(m.weight.t()) + m.bias # y = x*W^T + b ``` i.e. ``` y[i,j] == x[i,0] * m.weight[j,0] + x[i,1] * m.weight[j,1] + m.bias[j] ``` where i is in the interval [0, batch_size) and j in [0, out_features).", "best_answers_score":0.8, "library_name":"pytorch", "question_url":"https:\/\/stackoverflow.com\/questions\/54916135\/what-is-the-class-definition-of-nn-linear-in-pytorch", "best_answers_votes":77, "question_length":453, "response_length":2258 }, { "question":"Backward function in PyTorch I have a question about PyTorch's backward function. I don't think I'm getting the right output: ``` import numpy as np import torch from torch.autograd import Variable a = Variable(torch.FloatTensor([[1,2,3],[4,5,6]]), requires_grad=True) out = a * a out.backward(a) print(a.grad) ``` the output is ``` tensor([[ 2., 8., 18.], [32., 50., 72.]]) ``` maybe it's 2*a*a but I think the output is supposed to be ``` tensor([[ 2., 4., 6.], [8., 10., 12.]]) ``` i.e. 2*a, because d(x^2)\/dx=2x", "response":"Please read the documentation on backward() carefully to better understand it. By default, pytorch expects backward() to be called for the last output of the network - the loss function. The loss function always outputs a scalar and therefore, the gradients of the scalar loss w.r.t. all other variables\/parameters are well defined (using the chain rule). Thus, by default, backward() is called on a scalar tensor and expects no arguments.
For example: ```py a = torch.tensor([[1,2,3],[4,5,6]], dtype=torch.float, requires_grad=True) for i in range(2): for j in range(3): out = a[i,j] * a[i,j] out.backward() print(a.grad) ``` yields ``` tensor([[ 2., 4., 6.], [ 8., 10., 12.]]) ``` As expected: d(a^2)\/da = 2a. However, when you call backward on the 2-by-3 out tensor (no longer a scalar function) - what do you expect a.grad to be? You'll actually need a 2-by-3-by-2-by-3 output: d out[i,j] \/ d a[k,l](!) PyTorch does not support such non-scalar function derivatives. Instead, pytorch assumes out is only an intermediate tensor and somewhere \"upstream\" there is a scalar loss function, that through the chain rule provides d loss\/ d out[i,j]. This \"upstream\" gradient is of size 2-by-3 and this is actually the argument you provide backward in this case: out.backward(g) where g_ij = d loss\/ d out_ij. The gradients are then calculated by the chain rule d loss \/ d a[i,j] = (d loss\/d out[i,j]) * (d out[i,j] \/ d a[i,j]) Since you provided a as the \"upstream\" gradients, you got ``` a.grad[i,j] = 2 * a[i,j] * a[i,j] ``` If you were to provide \"upstream\" gradients of all ones, ```py out.backward(torch.ones(2,3)) print(a.grad) ``` yields ``` tensor([[ 2., 4., 6.], [ 8., 10., 12.]]) ``` As expected. It's all in the chain rule.", "best_answers_score":0.8, "library_name":"pytorch", "question_url":"https:\/\/stackoverflow.com\/questions\/57248777\/backward-function-in-pytorch", "best_answers_votes":58, "question_length":507, "response_length":1725 }, { "question":"Calculate the accuracy every epoch in PyTorch I am working on a Neural Network problem, to classify data as 1 or 0. I am using Binary cross entropy loss to do this. The loss is fine, however, the accuracy is very low and isn't improving. I am assuming I made a mistake in the accuracy calculation. After every epoch, I am calculating the correct predictions after thresholding the output, and dividing that number by the total number of the dataset.
Is there anything wrong with what I did in the accuracy calculation? And why isn't it improving, but getting worse? This is my code: ```py net = Model() criterion = torch.nn.BCELoss(size_average=True) optimizer = torch.optim.SGD(net.parameters(), lr=0.1) num_epochs = 100 for epoch in range(num_epochs): for i, (inputs,labels) in enumerate (train_loader): inputs = Variable(inputs.float()) labels = Variable(labels.float()) output = net(inputs) optimizer.zero_grad() loss = criterion(output, labels) loss.backward() optimizer.step() #Accuracy output = (output>0.5).float() correct = (output == labels).float().sum() print(\"Epoch {}\/{}, Loss: {:.3f}, Accuracy: {:.3f}\".format(epoch+1,num_epochs, loss.data[0], correct\/x.shape[0])) ``` And this is the strange output I get: ``` Epoch 1\/100, Loss: 0.389, Accuracy: 0.035 Epoch 2\/100, Loss: 0.370, Accuracy: 0.036 Epoch 3\/100, Loss: 0.514, Accuracy: 0.030 Epoch 4\/100, Loss: 0.539, Accuracy: 0.030 Epoch 5\/100, Loss: 0.583, Accuracy: 0.029 Epoch 6\/100, Loss: 0.439, Accuracy: 0.031 Epoch 7\/100, Loss: 0.429, Accuracy: 0.034 Epoch 8\/100, Loss: 0.408, Accuracy: 0.035 Epoch 9\/100, Loss: 0.316, Accuracy: 0.035 Epoch 10\/100, Loss: 0.436, Accuracy: 0.035 Epoch 11\/100, Loss: 0.365, Accuracy: 0.034 Epoch 12\/100, Loss: 0.485, Accuracy: 0.031 Epoch 13\/100, Loss: 0.392, Accuracy: 0.033 Epoch 14\/100, Loss: 0.494, Accuracy: 0.030 Epoch 15\/100, Loss: 0.369, Accuracy: 0.035 Epoch 16\/100, Loss: 0.495, Accuracy: 0.029 Epoch 17\/100, Loss: 0.415, Accuracy: 0.034 Epoch 18\/100, Loss: 0.410, Accuracy: 0.035 Epoch 19\/100, Loss: 0.282, Accuracy: 0.038 Epoch 20\/100, Loss: 0.499, Accuracy: 0.031 Epoch 21\/100, Loss: 0.446, Accuracy: 0.030 Epoch 22\/100, Loss: 0.585, Accuracy: 0.026 Epoch 23\/100, Loss: 0.419, Accuracy: 0.035 Epoch 24\/100, Loss: 0.492, Accuracy: 0.031 Epoch 25\/100, Loss: 0.537, Accuracy: 0.031 Epoch 26\/100, Loss: 0.439, Accuracy: 0.033 Epoch 27\/100, Loss: 0.421, Accuracy: 0.035 Epoch 28\/100, Loss: 0.532, Accuracy: 0.034
Epoch 29\/100, Loss: 0.234, Accuracy: 0.038 Epoch 30\/100, Loss: 0.492, Accuracy: 0.027 Epoch 31\/100, Loss: 0.407, Accuracy: 0.035 Epoch 32\/100, Loss: 0.305, Accuracy: 0.038 Epoch 33\/100, Loss: 0.663, Accuracy: 0.025 Epoch 34\/100, Loss: 0.588, Accuracy: 0.031 Epoch 35\/100, Loss: 0.329, Accuracy: 0.035 Epoch 36\/100, Loss: 0.474, Accuracy: 0.033 Epoch 37\/100, Loss: 0.535, Accuracy: 0.031 Epoch 38\/100, Loss: 0.406, Accuracy: 0.033 Epoch 39\/100, Loss: 0.513, Accuracy: 0.030 Epoch 40\/100, Loss: 0.593, Accuracy: 0.030 Epoch 41\/100, Loss: 0.265, Accuracy: 0.036 Epoch 42\/100, Loss: 0.576, Accuracy: 0.031 Epoch 43\/100, Loss: 0.565, Accuracy: 0.027 Epoch 44\/100, Loss: 0.576, Accuracy: 0.030 Epoch 45\/100, Loss: 0.396, Accuracy: 0.035 Epoch 46\/100, Loss: 0.423, Accuracy: 0.034 Epoch 47\/100, Loss: 0.489, Accuracy: 0.033 Epoch 48\/100, Loss: 0.591, Accuracy: 0.029 Epoch 49\/100, Loss: 0.415, Accuracy: 0.034 Epoch 50\/100, Loss: 0.291, Accuracy: 0.039 Epoch 51\/100, Loss: 0.395, Accuracy: 0.033 Epoch 52\/100, Loss: 0.540, Accuracy: 0.026 Epoch 53\/100, Loss: 0.436, Accuracy: 0.033 Epoch 54\/100, Loss: 0.346, Accuracy: 0.036 Epoch 55\/100, Loss: 0.519, Accuracy: 0.029 Epoch 56\/100, Loss: 0.456, Accuracy: 0.031 Epoch 57\/100, Loss: 0.425, Accuracy: 0.035 Epoch 58\/100, Loss: 0.311, Accuracy: 0.039 Epoch 59\/100, Loss: 0.406, Accuracy: 0.034 Epoch 60\/100, Loss: 0.360, Accuracy: 0.035 Epoch 61\/100, Loss: 0.476, Accuracy: 0.030 Epoch 62\/100, Loss: 0.404, Accuracy: 0.034 Epoch 63\/100, Loss: 0.382, Accuracy: 0.036 Epoch 64\/100, Loss: 0.538, Accuracy: 0.031 Epoch 65\/100, Loss: 0.392, Accuracy: 0.034 Epoch 66\/100, Loss: 0.434, Accuracy: 0.033 Epoch 67\/100, Loss: 0.479, Accuracy: 0.031 Epoch 68\/100, Loss: 0.494, Accuracy: 0.031 Epoch 69\/100, Loss: 0.415, Accuracy: 0.034 Epoch 70\/100, Loss: 0.390, Accuracy: 0.036 Epoch 71\/100, Loss: 0.330, Accuracy: 0.038 Epoch 72\/100, Loss: 0.449, Accuracy: 0.030 Epoch 73\/100, Loss: 0.315, Accuracy: 0.039 Epoch 74\/100, 
Loss: 0.450, Accuracy: 0.031 Epoch 75\/100, Loss: 0.562, Accuracy: 0.030 Epoch 76\/100, Loss: 0.447, Accuracy: 0.031 Epoch 77\/100, Loss: 0.408, Accuracy: 0.038 Epoch 78\/100, Loss: 0.359, Accuracy: 0.034 Epoch 79\/100, Loss: 0.372, Accuracy: 0.035 Epoch 80\/100, Loss: 0.452, Accuracy: 0.034 Epoch 81\/100, Loss: 0.360, Accuracy: 0.035 Epoch 82\/100, Loss: 0.453, Accuracy: 0.031 Epoch 83\/100, Loss: 0.578, Accuracy: 0.030 Epoch 84\/100, Loss: 0.537, Accuracy: 0.030 Epoch 85\/100, Loss: 0.483, Accuracy: 0.035 Epoch 86\/100, Loss: 0.343, Accuracy: 0.036 Epoch 87\/100, Loss: 0.439, Accuracy: 0.034 Epoch 88\/100, Loss: 0.686, Accuracy: 0.023 Epoch 89\/100, Loss: 0.265, Accuracy: 0.039 Epoch 90\/100, Loss: 0.369, Accuracy: 0.035 Epoch 91\/100, Loss: 0.521, Accuracy: 0.027 Epoch 92\/100, Loss: 0.662, Accuracy: 0.027 Epoch 93\/100, Loss: 0.581, Accuracy: 0.029 Epoch 94\/100, Loss: 0.322, Accuracy: 0.034 Epoch 95\/100, Loss: 0.375, Accuracy: 0.035 Epoch 96\/100, Loss: 0.575, Accuracy: 0.031 Epoch 97\/100, Loss: 0.489, Accuracy: 0.030 Epoch 98\/100, Loss: 0.435, Accuracy: 0.033 Epoch 99\/100, Loss: 0.440, Accuracy: 0.031 Epoch 100\/100, Loss: 0.444, Accuracy: 0.033 ```", "response":"A better way would be calculating correct right after optimization step ```py for epoch in range(num_epochs): correct = 0 for i, (inputs,labels) in enumerate (train_loader): ... output = net(inputs) ... optimizer.step() correct += (output == labels).float().sum() accuracy = 100 * correct \/ len(trainset) # trainset, not train_loader # probably x in your case print(\"Accuracy = {}\".format(accuracy)) ```", "best_answers_score":0.8, "library_name":"pytorch", "question_url":"https:\/\/stackoverflow.com\/questions\/51503851\/calculate-the-accuracy-every-epoch-in-pytorch", "best_answers_votes":27, "question_length":5516, "response_length":403 }, { "question":"pytorch: \"multi-target not supported\" error message So I want to classify some (3, 50, 50) pictures. 
First I loaded the dataset from the file without a dataloader or batches, it worked. Now, after adding both things I get that error: ``` RuntimeError: multi-target not supported at \/pytorch\/aten\/src\/THCUNN\/generic\/ClassNLLCriterion.cu:15 ``` I found a lot of answers in the internet, mostly to use target.squeeze(1) but it doesn\u00b4t work for me. My target-batch looks like following: ``` tensor([[1, 0], [1, 0], [1, 0], [1, 0], [1, 0], [1, 0], [1, 0], [1, 0]], device='cuda:0') ``` Shouldn't that be okay? Here the full code (notice that Im only creating the structure of the model on which Im going to apply the full and correct dataset afterwards, because I dont have the full data yet, only 32 pictures and no labels, thats why I added torch.tensor([1, 0]) as a placeholder for all labels): ```py import torch import torch.utils.data import torch.nn as nn import torch.nn.functional as F import torch.optim from torch.autograd import Variable import numpy as np from PIL import Image class Model(nn.Module): def __init__(self): super(Model, self).__init__() # model structur: self.conv1 = nn.Conv2d(3, 10, kernel_size=(5,5), stride=(1,1)) self.conv2 = nn.Conv2d(10, 20, kernel_size=(5,5), stride=(1,1)) # with mapool: output = 20 * (9,9) feature-maps -> flatten self.fc1 = nn.Linear(20*9*9, 250) self.fc2 = nn.Linear(250, 100) self.fc3 = nn.Linear(100, 2) def forward(self, x): # conv layers x = F.relu(self.conv1(x)) # shape: 1, 10, 46, 46 x = F.max_pool2d(x, 2, 2) # shape: 1, 10, 23, 23 x = F.relu(self.conv2(x)) # shape: 1, 20, 19, 19 x = F.max_pool2d(x, 2, 2) # shape: 1, 20, 9, 9 # flatten to dense layer: x = x.view(-1, 20*9*9) # dense layers x = F.relu(self.fc1(x)) x = F.relu(self.fc2(x)) output = F.log_softmax(self.fc3(x), dim=1) return output class Run: def __init__(self, epochs, learning_rate, dropout, momentum): # load model self.model = Model().cuda() # hyperparameters: self.epochs = epochs self.learning_rate = learning_rate self.dropout = dropout def 
preporcessing(self): dataset_folder = \"\/media\/theodor\/hdd\/Programming\/BWKI\/dataset\/bilder\/\" dataset = [] for i in range(0, 35): sample_image = Image.open(dataset_folder + str(i) + \".png\") data = torch.from_numpy(np.array(sample_image)).type(\"torch.Tensor\").reshape(3, 50, 50) target = torch.tensor([[1, 0]]) sample = (data, target) dataset.append(sample) train_loader = torch.utils.data.DataLoader(dataset, batch_size=8) return train_loader def train(self): train_set = self.preporcessing() criterion = nn.CrossEntropyLoss() optimizer = torch.optim.SGD(self.model.parameters(), lr=self.learning_rate) for epoch in range(self.epochs): epoch_loss = 0 for i, data in enumerate(train_set, 0): sample, target = data # set data as cuda varibale sample = Variable(sample.float().cuda()) target = Variable(target.cuda()) # initialize optimizer optimizer.zero_grad() # predict output = self.model(sample) # backpropagation print(output, target.squeeze(1)) loss = criterion(output, target.squeeze(1)) # ERROR MESSAGE: RuntimeError: multi-target not supported at \/pytorch\/aten\/src\/THCUNN\/generic\/ClassNLLCriterion.cu:15 loss.backward() optimizer.step() epoch_loss += loss.item() print(\"loss after epoch [\", epoch, \"|\", self.epochs, \"] :\", epoch_loss) run = Run(10, 0.001, 0.5, 0.9) run.train() ``` So I expected it to start training (of course not learning anything because the labels are wrong).", "response":"For nn.CrossEntropyLoss the target has to be a single number from the interval [0, #classes] instead of a one-hot encoded target vector. Your target is [1, 0], thus PyTorch thinks you want to have multiple labels per input which is not supported. 
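For example, you can recover the class indices nn.CrossEntropyLoss expects from a one-hot batch with argmax (a sketch with made-up logits, not your exact data):

```python
import torch
import torch.nn as nn

criterion = nn.CrossEntropyLoss()
logits = torch.randn(8, 2)            # raw, unnormalized scores from a model
one_hot = torch.tensor([[1, 0]] * 8)  # one-hot targets as in the question
targets = one_hot.argmax(dim=1)       # class indices in [0, num_classes - 1], shape (8,)
loss = criterion(logits, targets)     # works: one index per sample
print(targets.tolist())               # [0, 0, 0, 0, 0, 0, 0, 0]
```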
Replace your one-hot-encoded targets: [1, 0] --> 0 [0, 1] --> 1", "best_answers_score":0.8, "library_name":"pytorch", "question_url":"https:\/\/stackoverflow.com\/questions\/57325844\/pytorch-multi-target-not-supported-error-message", "best_answers_votes":48, "question_length":3458, "response_length":310 }, { "question":"What's the purpose of torch.autograd.Variable? I load features and labels from my training dataset. Both of them are originally numpy arrays, but I change them to torch tensors using torch.from_numpy(features.copy()) and torch.tensor(labels.astype(np.bool)). And I notice that torch.autograd.Variable is something like a placeholder in TensorFlow. When I train my network, first I tried ```py features = features.cuda() labels = labels.cuda() outputs = Config.MODEL(features) loss = Config.LOSS(outputs, labels) ``` Then I tried ```py features = features.cuda() labels = labels.cuda() input_var = Variable(features) target_var = Variable(labels) outputs = Config.MODEL(input_var) loss = Config.LOSS(outputs, target_var) ``` Both blocks succeed in activating training, but I worried that there might be a trivial difference.", "response":"According to this question you no longer need Variables to use PyTorch autograd. Thanks to @skytree, we can make this even more explicit: Variables have been deprecated, i.e. you're not supposed to use them anymore. Autograd automatically supports tensors with requires_grad set to True. And more importantly, Variable(tensor) and Variable(tensor, requires_grad) still work as expected, but they return Tensors instead of Variables. This means that if your features and labels are tensors already (which they seem to be in your example), Variable(features) and Variable(labels) only return the same tensors again.
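A tiny check (a sketch, not from the linked answer) that a plain tensor with requires_grad=True already gives you autograd, with no Variable wrapper:

```python
import torch

# a tensor with requires_grad=True tracks gradients by itself
x = torch.ones(3, requires_grad=True)
y = (x * 2).sum()   # y = 2*x1 + 2*x2 + 2*x3
y.backward()        # populates x.grad
print(x.grad)       # tensor([2., 2., 2.])
```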
The original purpose of Variables was to be able to use automatic differentiation (Source): Variables are just wrappers for the tensors so you can now easily auto compute the gradients.", "best_answers_score":0.8, "library_name":"pytorch", "question_url":"https:\/\/stackoverflow.com\/questions\/57580202\/whats-the-purpose-of-torch-autograd-variable", "best_answers_votes":41, "question_length":823, "response_length":799 }, { "question":"What is the difference between .flatten() and .view(-1) in PyTorch? Both .flatten() and .view(-1) flatten a tensor in PyTorch. What's the difference? Does .flatten() copy the data of the tensor? Is .view(-1) faster? Is there any situation that .flatten() doesn't work?", "response":"In addition to @adeelh's comment, there is another difference: torch.flatten() results in a .reshape(), and the differences between .reshape() and .view() are: [...] torch.reshape may return a copy or a view of the original tensor. You can not count on that to return a view or a copy. Another difference is that reshape() can operate on both contiguous and non-contiguous tensor while view() can only operate on contiguous tensor. Also see here about the meaning of contiguous For context: The community requested for a flatten function for a while, and after Issue #7743, the feature was implemented in the PR #8578. 
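The contiguity difference mentioned above is easy to reproduce (a small sketch):

```python
import torch

t = torch.arange(6).reshape(2, 3).t()  # transposing makes the tensor non-contiguous
print(t.is_contiguous())               # False
print(t.flatten())                     # fine: flatten() falls back to a copy
try:
    t.view(-1)                         # view() requires contiguous memory
except RuntimeError:
    print('view(-1) raised RuntimeError')
```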
You can see the implementation of flatten here, where a call to .reshape() can be seen in return line.", "best_answers_score":0.8, "library_name":"pytorch", "question_url":"https:\/\/stackoverflow.com\/questions\/57234095\/what-is-the-difference-between-flatten-and-view-1-in-pytorch", "best_answers_votes":22, "question_length":268, "response_length":721 }, { "question":"Despite installing the torch vision pytorch library, I am getting an error saying that there is no module named torch vision The error that I am getting when I use import torchvision is this: Error Message ``` \"*Traceback (most recent call last): File \"\/Users\/gokulsrin\/Desktop\/torch_basics\/data.py\", line 4, in import torchvision ModuleNotFoundError: No module named 'torchvision'*\" ``` I don't know what to do. I have tried changing the version of python from the native one to the one downloaded through anaconda. I am using anaconda as a package manager and have installed torch vision through anaconda as well as through pip commands.", "response":"From PyTorch installing Docs you should follow these steps: In Anaconda use this command: conda install pytorch torchvision cpuonly -c pytorch In Pip use this command: pip3 install torch==1.3.1+cpu torchvision==0.4.2+cpu -f https:\/\/download.pytorch.org\/whl\/torch_stable.html Note: If you have an enabled CUDA card you can change the cpuonly option to cudatoolkit=10.1 or cudatoolkit=9.2 After successfully installing the package you can import it with the command import torchvision and the output should look like this: Otherwise, there is something wrong when you are downloading the package from the Internet", "best_answers_score":0.8, "library_name":"pytorch", "question_url":"https:\/\/stackoverflow.com\/questions\/59621736\/despite-installing-the-torch-vision-pytorch-library-i-am-getting-an-error-sayin", "best_answers_votes":18, "question_length":640, "response_length":611 }, { "question":"Load data into GPU directly using PyTorch In training 
loop, I load a batch of data into CPU and then transfer it to GPU: ``` import torch.utils as utils train_loader = utils.data.DataLoader(train_dataset, batch_size=128, shuffle=True, num_workers=4, pin_memory=True) for inputs, labels in train_loader: inputs, labels = inputs.to(device), labels.to(device) ``` This way of loading data is very time-consuming. Any way to directly load data into GPU without transfer step ?", "response":"@PeterJulian first of all thanks for the reply. As far as I know there is no single line command for loading a whole dataset to GPU. Actually in my reply I meant to use .to(device) in the __init__ of the data loader. There are some examples in the link that I had shared previously. Also, I left an example data loader code below. Hope both the examples in the link and the code below helps. ``` class SampleDataset(Dataset): def __init__(self, device='cuda'): super(SampleDataset, self).__init__() self.data = torch.ones(1000) self.data = self.data.to(device) def __len__(self): return len(self.data) def __getitem__(self, i): element = self.data[i] return element ```", "best_answers_score":0.8, "library_name":"pytorch", "question_url":"https:\/\/stackoverflow.com\/questions\/62111599\/load-data-into-gpu-directly-using-pytorch", "best_answers_votes":13, "question_length":472, "response_length":669 }, { "question":"How do I write a PyTorch sequential model? How do I write a sequential model in PyTorch, just like what we can do with Keras? I tried: ``` import torch import torch.nn as nn net = nn.Sequential() net.add(nn.Linear(3, 4)) net.add(nn.Sigmoid()) net.add(nn.Linear(4, 1)) net.add(nn.Sigmoid()) net.float() ``` But I get the error: AttributeError: 'Sequential' object has no attribute 'add'", "response":"Sequential does not have an add method at the moment, though there is some debate about adding this functionality. 
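If you want something closer to Keras' add, the add_module method inherited from nn.Module works on a Sequential too (a sketch):

```python
import torch
import torch.nn as nn

net = nn.Sequential()
net.add_module('fc1', nn.Linear(3, 4))   # name each layer as you add it
net.add_module('act1', nn.Sigmoid())
net.add_module('fc2', nn.Linear(4, 1))
net.add_module('act2', nn.Sigmoid())

out = net(torch.randn(2, 3))
print(out.shape)                         # torch.Size([2, 1])
```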
As you can read in the documentation, nn.Sequential takes the layers either as a sequence of positional arguments or as an OrderedDict. If you have a model with lots of layers, you can create a list first and then use the * operator to expand the list into positional arguments, like this: ```py layers = [] layers.append(nn.Linear(3, 4)) layers.append(nn.Sigmoid()) layers.append(nn.Linear(4, 1)) layers.append(nn.Sigmoid()) net = nn.Sequential(*layers) ``` This results in the same structure as adding the layers directly.", "best_answers_score":0.8, "library_name":"pytorch", "question_url":"https:\/\/stackoverflow.com\/questions\/46141690\/how-do-i-write-a-pytorch-sequential-model", "best_answers_votes":51, "question_length":385, "response_length":647 }, { "question":"How to access the network weights while using PyTorch 'nn.Sequential'? I'm building a neural network and I don't know how to access the model weights for each layer. I've tried ``` model.input_size.weight ``` Code: ``` input_size = 784 hidden_sizes = [128, 64] output_size = 10 # Build a feed-forward network model = nn.Sequential(nn.Linear(input_size, hidden_sizes[0]), nn.ReLU(), nn.Linear(hidden_sizes[0], hidden_sizes[1]), nn.ReLU(), nn.Linear(hidden_sizes[1], output_size), nn.Softmax(dim=1)) ``` I expected to get the weights but I got 'Sequential' object has no attribute 'input_size'", "response":"If you print out the model using print(model), you would get ```py Sequential( (0): Linear(in_features=784, out_features=128, bias=True) (1): ReLU() (2): Linear(in_features=128, out_features=64, bias=True) (3): ReLU() (4): Linear(in_features=64, out_features=10, bias=True) (5): Softmax(dim=1) ) ``` Now you have access to all layers by index, so you can get the weights of (let's say) the second linear layer with model[4].weight.", "best_answers_score":0.8, "library_name":"pytorch",
"question_url":"https:\/\/stackoverflow.com\/questions\/56435961\/how-to-access-the-network-weights-while-using-pytorch-nn-sequential", "best_answers_votes":24, "question_length":591, "response_length":425 }, { "question":"Pytorch ValueError: optimizer got an empty parameter list When trying to create a neural network and optimize it using Pytorch, I am getting ValueError: optimizer got an empty parameter list Here is the code. ``` import torch.nn as nn import torch.nn.functional as F from os.path import dirname from os import getcwd from os.path import realpath from sys import argv class NetActor(nn.Module): def __init__(self, args, state_vector_size, action_vector_size, hidden_layer_size_list): super(NetActor, self).__init__() self.args = args self.state_vector_size = state_vector_size self.action_vector_size = action_vector_size self.layer_sizes = hidden_layer_size_list self.layer_sizes.append(action_vector_size) self.nn_layers = [] self._create_net() def _create_net(self): prev_layer_size = self.state_vector_size for next_layer_size in self.layer_sizes: next_layer = nn.Linear(prev_layer_size, next_layer_size) prev_layer_size = next_layer_size self.nn_layers.append(next_layer) def forward(self, torch_state): activations = torch_state for i,layer in enumerate(self.nn_layers): if i != len(self.nn_layers)-1: activations = F.relu(layer(activations)) else: activations = layer(activations) probs = F.softmax(activations, dim=-1) return probs ``` and then the call ``` self.actor_nn = NetActor(self.args, 4, 2, [128]) self.actor_optimizer = optim.Adam(self.actor_nn.parameters(), lr=args.learning_rate) ``` gives the very informative error ValueError: optimizer got an empty parameter list I find it hard to understand what exactly in the network's definition makes the network have parameters. I am following and expanding the example I found in Pytorch's tutorial code. 
I can't really tell the difference between my code and theirs that makes mine think it has no parameters to optimize. How to make my network have parameters like the linked example?", "response":"Your NetActor does not directly store any nn.Parameter. Moreover, all other layers it eventually uses in forward are stored as a simple list in self.nn_layers. If you want self.actor_nn.parameters() to know that the items stored in the list self.nn_layers may contain trainable parameters, you should work with containers. Specifically, making self.nn_layers to be a nn.ModuleList instead of a simple list should solve your problem: ``` self.nn_layers = nn.ModuleList() ```", "best_answers_score":0.8, "library_name":"pytorch", "question_url":"https:\/\/stackoverflow.com\/questions\/54678896\/pytorch-valueerror-optimizer-got-an-empty-parameter-list", "best_answers_votes":44, "question_length":1849, "response_length":473 }, { "question":"What exactly is the definition of a 'Module' in PyTorch? Please excuse the novice question, but is Module just the same as saying model? That's what it sounds like, when the documentation says: Whenever you want a model more complex than a simple sequence of existing Modules you will need to define your model (as a custom Module subclass). Or... when they mention Module, are they referring to something more formal and computer-sciency, like a protocol \/ interface type thing?", "response":"It's a simple container. From the docs of nn.Module Base class for all neural network modules. Your models should also subclass this class. Modules can also contain other Modules, allowing to nest them in a tree structure. You can assign the submodules as regular attributes. Submodules assigned in this way will be registered, and will have their parameters converted too when you call .cuda(), etc. From the tutorial: All network components should inherit from nn.Module and override the forward() method. That is about it, as far as the boilerplate is concerned. 
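That boilerplate in full (a made-up TinyNet, just for illustration):

```python
import torch
import torch.nn as nn

class TinyNet(nn.Module):          # subclass nn.Module ...
    def __init__(self):
        super().__init__()
        self.fc = nn.Linear(4, 2)  # assigned submodules are registered automatically
    def forward(self, x):          # ... and override forward()
        return torch.relu(self.fc(x))

net = TinyNet()
print(sum(p.numel() for p in net.parameters()))  # 10 trainable values (4*2 weights + 2 biases)
```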
Inheriting from nn.Module provides functionality to your component. For example, it makes it keep track of its trainable parameters, and you can swap it between CPU and GPU with the .to(device) method, where device can be a CPU device torch.device(\"cpu\") or a CUDA device torch.device(\"cuda:0\"). A module is a container from which layers, model subparts (e.g. BasicBlock in resnet in torchvision) and models should inherit. Why should they? Because the inheritance from nn.Module allows you to call methods like to(\"cuda:0\"), .eval(), .parameters() or register hooks easily. Why not just call the 'module' a model, and call the layers 'layers'? I suppose maybe it's just semantics and splitting hairs, but still... That's an API design choice, and I find having only a Module class instead of two separate Model and Layer classes to be cleaner and to allow more freedom (it's easier to send just a part of the model to the GPU, or to get parameters only for some layers...).", "best_answers_score":0.8, "library_name":"pytorch", "question_url":"https:\/\/stackoverflow.com\/questions\/51804692\/what-exactly-is-the-definition-of-a-module-in-pytorch", "best_answers_votes":28, "question_length":479, "response_length":1519 }, { "question":"DCGAN debugging. Getting just garbage Introduction: I am trying to get a CDCGAN (Conditional Deep Convolutional Generative Adversarial Network) to work on the MNIST dataset, which should be fairly easy considering that the library (PyTorch) I am using has a tutorial on its website. But I can't seem to get it working; it just produces garbage, or the model collapses, or both.
What I tried: making the model Conditional semi-supervised learning using batch norm using dropout on each layer besides the input\/output layer on the generator and discriminator label smoothing to combat overconfidence adding noise to the images (I guess you call this instance noise) to get a better data distribution use leaky relu to avoid vanishing gradients using a replay buffer to combat forgetting of learned stuff and overfitting playing with hyperparameters comparing it to the model from PyTorch tutorial basically what I did besides some things like Embedding layer ect. Images my Model generated: Hyperparameters: batch_size=50, learning_rate_discrimiantor=0.0001, learning_rate_generator=0.0003, shuffle=True, ndf=64, ngf=64, droupout=0.5 batch_size=50, learning_rate_discriminator=0.0003, learning_rate_generator=0.0003, shuffle=True, ndf=64, ngf=64, dropout=0 Images Pytorch tutorial Model generated: Code for the pytorch tutorial dcgan model As comparison here are the images from the DCGAN from the pytorch turoial: My Code: ``` import torch import torch.nn as nn import torchvision from torchvision import transforms, datasets import torch.nn.functional as F from torch import optim as optim from torch.utils.tensorboard import SummaryWriter import numpy as np import os import time class Discriminator(torch.nn.Module): def __init__(self, ndf=16, dropout_value=0.5): # ndf feature map discriminator super().__init__() self.ndf = ndf self.droupout_value = dropout_value self.condi = nn.Sequential( nn.Linear(in_features=10, out_features=64 * 64) ) self.hidden0 = nn.Sequential( nn.Conv2d(in_channels=2, out_channels=self.ndf, kernel_size=4, stride=2, padding=1, bias=False), nn.LeakyReLU(0.2), ) self.hidden1 = nn.Sequential( nn.Conv2d(in_channels=self.ndf, out_channels=self.ndf * 2, kernel_size=4, stride=2, padding=1, bias=False), nn.BatchNorm2d(self.ndf * 2), nn.LeakyReLU(0.2), nn.Dropout(self.droupout_value) ) self.hidden2 = nn.Sequential( nn.Conv2d(in_channels=self.ndf 
* 2, out_channels=self.ndf * 4, kernel_size=4, stride=2, padding=1, bias=False), #nn.BatchNorm2d(self.ndf * 4), nn.LeakyReLU(0.2), nn.Dropout(self.droupout_value) ) self.hidden3 = nn.Sequential( nn.Conv2d(in_channels=self.ndf * 4, out_channels=self.ndf * 8, kernel_size=4, stride=2, padding=1, bias=False), nn.BatchNorm2d(self.ndf * 8), nn.LeakyReLU(0.2), nn.Dropout(self.droupout_value) ) self.out = nn.Sequential( nn.Conv2d(in_channels=self.ndf * 8, out_channels=1, kernel_size=4, stride=1, padding=0, bias=False), torch.nn.Sigmoid() ) def forward(self, x, y): y = self.condi(y.view(-1, 10)) y = y.view(-1, 1, 64, 64) x = torch.cat((x, y), dim=1) x = self.hidden0(x) x = self.hidden1(x) x = self.hidden2(x) x = self.hidden3(x) x = self.out(x) return x class Generator(torch.nn.Module): def __init__(self, n_features=100, ngf=16, c_channels=1, dropout_value=0.5): # ngf feature map of generator super().__init__() self.ngf = ngf self.n_features = n_features self.c_channels = c_channels self.droupout_value = dropout_value self.hidden0 = nn.Sequential( nn.ConvTranspose2d(in_channels=self.n_features + 10, out_channels=self.ngf * 8, kernel_size=4, stride=1, padding=0, bias=False), nn.BatchNorm2d(self.ngf * 8), nn.LeakyReLU(0.2) ) self.hidden1 = nn.Sequential( nn.ConvTranspose2d(in_channels=self.ngf * 8, out_channels=self.ngf * 4, kernel_size=4, stride=2, padding=1, bias=False), #nn.BatchNorm2d(self.ngf * 4), nn.LeakyReLU(0.2), nn.Dropout(self.droupout_value) ) self.hidden2 = nn.Sequential( nn.ConvTranspose2d(in_channels=self.ngf * 4, out_channels=self.ngf * 2, kernel_size=4, stride=2, padding=1, bias=False), nn.BatchNorm2d(self.ngf * 2), nn.LeakyReLU(0.2), nn.Dropout(self.droupout_value) ) self.hidden3 = nn.Sequential( nn.ConvTranspose2d(in_channels=self.ngf * 2, out_channels=self.ngf, kernel_size=4, stride=2, padding=1, bias=False), nn.BatchNorm2d(self.ngf), nn.LeakyReLU(0.2), nn.Dropout(self.droupout_value) ) self.out = nn.Sequential( # \"out_channels=1\" because gray scale 
nn.ConvTranspose2d(in_channels=self.ngf, out_channels=1, kernel_size=4, stride=2, padding=1, bias=False), nn.Tanh() ) def forward(self, x, y): x_cond = torch.cat((x, y), dim=1) # Combine flatten image with conditional input (class labels) x = self.hidden0(x_cond) # Image goes into a \"ConvTranspose2d\" layer x = self.hidden1(x) x = self.hidden2(x) x = self.hidden3(x) x = self.out(x) return x class Logger: def __init__(self, model_name, model1, model2, m1_optimizer, m2_optimizer, model_parameter, train_loader): self.out_dir = \"data\" self.model_name = model_name self.train_loader = train_loader self.model1 = model1 self.model2 = model2 self.model_parameter = model_parameter self.m1_optimizer = m1_optimizer self.m2_optimizer = m2_optimizer # Exclude Epochs of the model name. This make sense e.g. when we stop a training progress and continue later on. self.experiment_name = '_'.join(\"{!s}={!r}\".format(k, v) for (k, v) in model_parameter.items())\\ .replace(\"Epochs\" + \"=\" + str(model_parameter[\"Epochs\"]), \"\") self.d_error = 0 self.g_error = 0 self.tb = SummaryWriter(log_dir=str(self.out_dir + \"\/log\/\" + self.model_name + \"\/runs\/\" + self.experiment_name)) self.path_image = os.path.join(os.getcwd(), f'{self.out_dir}\/log\/{self.model_name}\/images\/{self.experiment_name}') self.path_model = os.path.join(os.getcwd(), f'{self.out_dir}\/log\/{self.model_name}\/model\/{self.experiment_name}') try: os.makedirs(self.path_image) except Exception as e: print(\"WARNING: \", str(e)) try: os.makedirs(self.path_model) except Exception as e: print(\"WARNING: \", str(e)) def log_graph(self, model1_input, model2_input, model1_label, model2_label): self.tb.add_graph(self.model1, input_to_model=(model1_input, model1_label)) self.tb.add_graph(self.model2, input_to_model=(model2_input, model2_label)) def log(self, num_epoch, d_error, g_error): self.d_error = d_error self.g_error = g_error self.tb.add_scalar(\"Discriminator Train Error\", self.d_error, num_epoch) 
self.tb.add_scalar(\"Generator Train Error\", self.g_error, num_epoch) def log_image(self, images, epoch, batch_num): grid = torchvision.utils.make_grid(images) torchvision.utils.save_image(grid, f'{self.path_image}\\\\Epoch_{epoch}_batch_{batch_num}.png') self.tb.add_image(\"Generator Image\", grid) def log_histogramm(self): for name, param in self.model2.named_parameters(): self.tb.add_histogram(name, param, self.model_parameter[\"Epochs\"]) self.tb.add_histogram(f'gen_{name}.grad', param.grad, self.model_parameter[\"Epochs\"]) for name, param in self.model1.named_parameters(): self.tb.add_histogram(name, param, self.model_parameter[\"Epochs\"]) self.tb.add_histogram(f'dis_{name}.grad', param.grad, self.model_parameter[\"Epochs\"]) def log_model(self, num_epoch): torch.save({ \"epoch\": num_epoch, \"model_generator_state_dict\": self.model1.state_dict(), \"model_discriminator_state_dict\": self.model2.state_dict(), \"optimizer_generator_state_dict\": self.m1_optimizer.state_dict(), \"optimizer_discriminator_state_dict\": self.m2_optimizer.state_dict(), }, str(self.path_model + f'\\\\{time.time()}_epoch{num_epoch}.pth')) def close(self, logger, images, num_epoch, d_error, g_error): logger.log_model(num_epoch) logger.log_histogramm() logger.log(num_epoch, d_error, g_error) self.tb.close() def display_stats(self, epoch, batch_num, dis_error, gen_error): print(f'Epoch: [{epoch}\/{self.model_parameter[\"Epochs\"]}] ' f'Batch: [{batch_num}\/{len(self.train_loader)}] ' f'Loss_D: {dis_error.data.cpu()}, ' f'Loss_G: {gen_error.data.cpu()}') def get_MNIST_dataset(num_workers_loader, model_parameter, out_dir=\"data\"): compose = transforms.Compose([ transforms.Resize((64, 64)), transforms.CenterCrop((64, 64)), transforms.ToTensor(), torchvision.transforms.Normalize(mean=[0.5], std=[0.5]) ]) dataset = datasets.MNIST( root=out_dir, train=True, download=True, transform=compose ) train_loader = torch.utils.data.DataLoader(dataset, batch_size=model_parameter[\"batch_size\"], 
num_workers=num_workers_loader, shuffle=model_parameter[\"shuffle\"]) return dataset, train_loader def train_discriminator(p_optimizer, p_noise, p_images, p_fake_target, p_real_target, p_images_labels, p_fake_labels, device): p_optimizer.zero_grad() # 1.1 Train on real data pred_dis_real = discriminator(p_images, p_images_labels) error_real = loss(pred_dis_real, p_real_target) error_real.backward() # 1.2 Train on fake data fake_data = generator(p_noise, p_fake_labels).detach() fake_data = add_noise_to_image(fake_data, device) pred_dis_fake = discriminator(fake_data, p_fake_labels) error_fake = loss(pred_dis_fake, p_fake_target) error_fake.backward() p_optimizer.step() return error_fake + error_real def train_generator(p_optimizer, p_noise, p_real_target, p_fake_labels, device): p_optimizer.zero_grad() fake_images = generator(p_noise, p_fake_labels) fake_images = add_noise_to_image(fake_images, device) pred_dis_fake = discriminator(fake_images, p_fake_labels) error_fake = loss(pred_dis_fake, p_real_target) # because \"\"\" We use \"p_real_target\" instead of \"p_fake_target\" because we want to maximize that the discriminator is wrong. \"\"\" error_fake.backward() p_optimizer.step() return fake_images, pred_dis_fake, error_fake # TODO change to a Truncated normal distribution def get_noise(batch_size, n_features=100): return torch.FloatTensor(batch_size, n_features, 1, 1).uniform_(-1, 1) # We flip label of real and fate data. 
Better gradient flow I have told def get_real_data_target(batch_size): return torch.FloatTensor(batch_size, 1, 1, 1).uniform_(0.0, 0.2) def get_fake_data_target(batch_size): return torch.FloatTensor(batch_size, 1, 1, 1).uniform_(0.8, 1.1) def image_to_vector(images): return torch.flatten(images, start_dim=1, end_dim=-1) def vector_to_image(images): return images.view(images.size(0), 1, 28, 28) def get_rand_labels(batch_size): return torch.randint(low=0, high=9, size=(batch_size,)) def load_model(model_load_path): if model_load_path: checkpoint = torch.load(model_load_path) discriminator.load_state_dict(checkpoint[\"model_discriminator_state_dict\"]) generator.load_state_dict(checkpoint[\"model_generator_state_dict\"]) dis_opti.load_state_dict(checkpoint[\"optimizer_discriminator_state_dict\"]) gen_opti.load_state_dict(checkpoint[\"optimizer_generator_state_dict\"]) return checkpoint[\"epoch\"] else: return 0 def init_model_optimizer(model_parameter, device): # Initialize the Models discriminator = Discriminator(ndf=model_parameter[\"ndf\"], dropout_value=model_parameter[\"dropout\"]).to(device) generator = Generator(ngf=model_parameter[\"ngf\"], dropout_value=model_parameter[\"dropout\"]).to(device) # train dis_opti = optim.Adam(discriminator.parameters(), lr=model_parameter[\"learning_rate_dis\"], betas=(0.5, 0.999)) gen_opti = optim.Adam(generator.parameters(), lr=model_parameter[\"learning_rate_gen\"], betas=(0.5, 0.999)) return discriminator, generator, dis_opti, gen_opti def get_hot_vector_encode(labels, device): return torch.eye(10)[labels].view(-1, 10, 1, 1).to(device) def add_noise_to_image(images, device, level_of_noise=0.1): return images[0].to(device) + (level_of_noise) * torch.randn(images.shape).to(device) if __name__ == \"__main__\": # Hyperparameter model_parameter = { \"batch_size\": 500, \"learning_rate_dis\": 0.0002, \"learning_rate_gen\": 0.0002, \"shuffle\": False, \"Epochs\": 10, \"ndf\": 64, \"ngf\": 64, \"dropout\": 0.5 } # Parameter 
r_frequent = 10 # How many samples we save for replay per batch (batch_size \/ r_frequent). model_name = \"CDCGAN\" # The name of your model, e.g. \"Gan\" num_workers_loader = 1 # How many workers should load the data sample_save_size = 16 # How many numbers your saved images should show device = \"cuda\" # Which device should be used to train the neural network model_load_path = \"\" # If set, load model instead of training from scratch num_epoch_log = 1 # How frequently you want to log torch.manual_seed(43) # Sets a seed for torch for reproducibility dataset_train, train_loader = get_MNIST_dataset(num_workers_loader, model_parameter) # Get dataset # Initialize the Models and optimizer discriminator, generator, dis_opti, gen_opti = init_model_optimizer(model_parameter, device) # Init model\/Optimizer start_epoch = load_model(model_load_path) # when we want to load a model # Init Logger logger = Logger(model_name, generator, discriminator, gen_opti, dis_opti, model_parameter, train_loader) loss = nn.BCELoss() images, labels = next(iter(train_loader)) # For logging # For testing # pred = generator(get_noise(model_parameter[\"batch_size\"]).to(device), get_hot_vector_encode(get_rand_labels(model_parameter[\"batch_size\"]), device)) # dis = discriminator(images.to(device), get_hot_vector_encode(labels, device)) logger.log_graph(get_noise(model_parameter[\"batch_size\"]).to(device), images.to(device), get_hot_vector_encode(get_rand_labels(model_parameter[\"batch_size\"]), device), get_hot_vector_encode(labels, device)) # Array to store exp_replay = torch.tensor([]).to(device) for num_epoch in range(start_epoch, model_parameter[\"Epochs\"]): for batch_num, data_loader in enumerate(train_loader): images, labels = data_loader images = add_noise_to_image(images, device) # Add noise to the images # 1.
Train Discriminator dis_error = train_discriminator( dis_opti, get_noise(model_parameter[\"batch_size\"]).to(device), images.to(device), get_fake_data_target(model_parameter[\"batch_size\"]).to(device), get_real_data_target(model_parameter[\"batch_size\"]).to(device), get_hot_vector_encode(labels, device), get_hot_vector_encode( get_rand_labels(model_parameter[\"batch_size\"]), device), device ) # 2. Train Generator fake_image, pred_dis_fake, gen_error = train_generator( gen_opti, get_noise(model_parameter[\"batch_size\"]).to(device), get_real_data_target(model_parameter[\"batch_size\"]).to(device), get_hot_vector_encode( get_rand_labels(model_parameter[\"batch_size\"]), device), device ) # Store a random point for experience replay perm = torch.randperm(fake_image.size(0)) r_idx = perm[:max(1, int(model_parameter[\"batch_size\"] \/ r_frequent))] r_samples = add_noise_to_image(fake_image[r_idx], device) exp_replay = torch.cat((exp_replay, r_samples), 0).detach() if exp_replay.size(0) >= model_parameter[\"batch_size\"]: # Train on experienced data dis_opti.zero_grad() r_label = get_hot_vector_encode(torch.zeros(exp_replay.size(0)).numpy(), device) pred_dis_real = discriminator(exp_replay, r_label) error_real = loss(pred_dis_real, get_fake_data_target(exp_replay.size(0)).to(device)) error_real.backward() dis_opti.step() print(f'Epoch: [{num_epoch}\/{model_parameter[\"Epochs\"]}] ' f'Batch: Replay\/Experience batch ' f'Loss_D: {error_real.data.cpu()}, ' ) exp_replay = torch.tensor([]).to(device) logger.display_stats(epoch=num_epoch, batch_num=batch_num, dis_error=dis_error, gen_error=gen_error) if batch_num % 100 == 0: logger.log_image(fake_image[:sample_save_size], num_epoch, batch_num) logger.log(num_epoch, dis_error, gen_error) if num_epoch % num_epoch_log == 0: logger.log_model(num_epoch) logger.log_histogramm() logger.close(logger, fake_image[:sample_save_size], num_epoch, dis_error, gen_error) ``` First link to my Code (Pastebin) Second link to my Code (0bin) 
Conclusion: Since I implemented all these things (e.g. label smoothing) which are considered beneficial to a GAN\/DCGAN, and my model still performs worse than the tutorial DCGAN from PyTorch, I think I might have a bug in my code but I can't seem to find it. Reproducibility: You should be able to just copy the code and run it, if you have the libraries that I imported installed, to look for yourself if you can find anything. I appreciate any feedback.", "response":"So I solved this issue a while ago, but forgot to post an answer on stack overflow. So I will simply post my code here, which should probably work pretty well. Some disclaimer: I am not quite sure if it works, since I did this a year ago; it's for 128x128px MNIST images; it's not a vanilla GAN; I used various optimization techniques. If you want to use it you need to change various details, such as the training dataset. Resources: Multi-Scale Gradients Instance Noise Various tricks I used More tricks ```py import torch from torch.autograd import Variable import torch.nn as nn import torch.nn.functional as F import torchvision import torchvision.transforms as transforms from torch.utils.data import DataLoader import pytorch_lightning as pl from pytorch_lightning import loggers from numpy.random import choice import os from pathlib import Path import shutil from collections import OrderedDict # custom weights initialization called on netG and netD def weights_init(m): classname = m.__class__.__name__ if classname.find('Conv') != -1: nn.init.normal_(m.weight.data, 0.0, 0.02) elif classname.find('BatchNorm') != -1: nn.init.normal_(m.weight.data, 1.0, 0.02) nn.init.constant_(m.bias.data, 0) # randomly flip some labels def noisy_labels(y, p_flip=0.05): # # flip labels with 5% probability # determine the number of labels to flip n_select = int(p_flip * y.shape[0]) # choose labels to flip flip_ix = choice([i for i in range(y.shape[0])], size=n_select) # invert the labels in place y[flip_ix] = 1 - y[flip_ix] return y class
AddGaussianNoise(object): def __init__(self, mean=0.0, std=0.1): self.std = std self.mean = mean def __call__(self, tensor): tensor = tensor.cuda() return tensor + (torch.randn(tensor.size()) * self.std + self.mean).cuda() def __repr__(self): return self.__class__.__name__ + '(mean={0}, std={1})'.format(self.mean, self.std) def resize2d(img, size): return (F.adaptive_avg_pool2d(img, size).data).cuda() def get_valid_labels(img): return ((0.8 - 1.1) * torch.rand(img.shape[0], 1, 1, 1) + 1.1).cuda() # soft labels def get_unvalid_labels(img): return (noisy_labels((0.0 - 0.3) * torch.rand(img.shape[0], 1, 1, 1) + 0.3)).cuda() # soft labels class Generator(pl.LightningModule): def __init__(self, ngf, nc, latent_dim): super(Generator, self).__init__() self.ngf = ngf self.latent_dim = latent_dim self.nc = nc self.fc0 = nn.Sequential( # input is Z, going into a convolution nn.utils.spectral_norm(nn.ConvTranspose2d(latent_dim, ngf * 16, 4, 1, 0, bias=False)), nn.LeakyReLU(0.2, inplace=True), nn.BatchNorm2d(ngf * 16) ) self.fc1 = nn.Sequential( # state size. (ngf*8) x 4 x 4 nn.utils.spectral_norm(nn.ConvTranspose2d(ngf * 16, ngf * 8, 4, 2, 1, bias=False)), nn.LeakyReLU(0.2, inplace=True), nn.BatchNorm2d(ngf * 8) ) self.fc2 = nn.Sequential( # state size. (ngf*4) x 8 x 8 nn.utils.spectral_norm(nn.ConvTranspose2d(ngf * 8, ngf * 4, 4, 2, 1, bias=False)), nn.LeakyReLU(0.2, inplace=True), nn.BatchNorm2d(ngf * 4) ) self.fc3 = nn.Sequential( # state size. (ngf*2) x 16 x 16 nn.utils.spectral_norm(nn.ConvTranspose2d(ngf * 4, ngf * 2, 4, 2, 1, bias=False)), nn.LeakyReLU(0.2, inplace=True), nn.BatchNorm2d(ngf * 2) ) self.fc4 = nn.Sequential( # state size. (ngf) x 32 x 32 nn.utils.spectral_norm(nn.ConvTranspose2d(ngf * 2, ngf, 4, 2, 1, bias=False)), nn.LeakyReLU(0.2, inplace=True), nn.BatchNorm2d(ngf) ) self.fc5 = nn.Sequential( # state size. (nc) x 64 x 64 nn.utils.spectral_norm(nn.ConvTranspose2d(ngf, nc, 4, 2, 1, bias=False)), nn.Tanh() ) # state size. 
(nc) x 128 x 128 # For Multi-Scale Gradient # Converting the intermediate layers into images self.fc0_r = nn.Conv2d(ngf * 16, self.nc, 1) self.fc1_r = nn.Conv2d(ngf * 8, self.nc, 1) self.fc2_r = nn.Conv2d(ngf * 4, self.nc, 1) self.fc3_r = nn.Conv2d(ngf * 2, self.nc, 1) self.fc4_r = nn.Conv2d(ngf, self.nc, 1) def forward(self, input): x_0 = self.fc0(input) x_1 = self.fc1(x_0) x_2 = self.fc2(x_1) x_3 = self.fc3(x_2) x_4 = self.fc4(x_3) x_5 = self.fc5(x_4) # For Multi-Scale Gradient # Converting the intermediate layers into images x_0_r = self.fc0_r(x_0) x_1_r = self.fc1_r(x_1) x_2_r = self.fc2_r(x_2) x_3_r = self.fc3_r(x_3) x_4_r = self.fc4_r(x_4) return x_5, x_0_r, x_1_r, x_2_r, x_3_r, x_4_r class Discriminator(pl.LightningModule): def __init__(self, ndf, nc): super(Discriminator, self).__init__() self.nc = nc self.ndf = ndf self.fc0 = nn.Sequential( # input is (nc) x 128 x 128 nn.utils.spectral_norm(nn.Conv2d(nc, ndf, 4, 2, 1, bias=False)), nn.LeakyReLU(0.2, inplace=True) ) self.fc1 = nn.Sequential( # state size. (ndf) x 64 x 64 nn.utils.spectral_norm(nn.Conv2d(ndf + nc, ndf * 2, 4, 2, 1, bias=False)), # \"+ nc\" because of multi scale gradient nn.LeakyReLU(0.2, inplace=True), nn.BatchNorm2d(ndf * 2) ) self.fc2 = nn.Sequential( # state size. (ndf*2) x 32 x 32 nn.utils.spectral_norm(nn.Conv2d(ndf * 2 + nc, ndf * 4, 4, 2, 1, bias=False)), # \"+ nc\" because of multi scale gradient nn.LeakyReLU(0.2, inplace=True), nn.BatchNorm2d(ndf * 4) ) self.fc3 = nn.Sequential( # state size. (ndf*4) x 16 x 16e nn.utils.spectral_norm(nn.Conv2d(ndf * 4 + nc, ndf * 8, 4, 2, 1, bias=False)), # \"+ nc\" because of multi scale gradient nn.LeakyReLU(0.2, inplace=True), nn.BatchNorm2d(ndf * 8), ) self.fc4 = nn.Sequential( # state size. (ndf*8) x 8 x 8 nn.utils.spectral_norm(nn.Conv2d(ndf * 8 + nc, ndf * 16, 4, 2, 1, bias=False)), nn.LeakyReLU(0.2, inplace=True), nn.BatchNorm2d(ndf * 16) ) self.fc5 = nn.Sequential( # state size. 
(ndf*8) x 4 x 4 nn.utils.spectral_norm(nn.Conv2d(ndf * 16 + nc, 1, 4, 1, 0, bias=False)), nn.Sigmoid() ) # state size. 1 x 1 x 1 def forward(self, input, detach_or_not): # When we train in combination with the generator we use multi-scale gradient. x, x_0_r, x_1_r, x_2_r, x_3_r, x_4_r = input if detach_or_not: x = x.detach() x_0 = self.fc0(x) x_0 = torch.cat((x_0, x_4_r), dim=1) # Concat Multi-Scale Gradient x_1 = self.fc1(x_0) x_1 = torch.cat((x_1, x_3_r), dim=1) # Concat Multi-Scale Gradient x_2 = self.fc2(x_1) x_2 = torch.cat((x_2, x_2_r), dim=1) # Concat Multi-Scale Gradient x_3 = self.fc3(x_2) x_3 = torch.cat((x_3, x_1_r), dim=1) # Concat Multi-Scale Gradient x_4 = self.fc4(x_3) x_4 = torch.cat((x_4, x_0_r), dim=1) # Concat Multi-Scale Gradient x_5 = self.fc5(x_4) return x_5 class DCGAN(pl.LightningModule): def __init__(self, hparams, checkpoint_folder, experiment_name): super().__init__() self.hparams = hparams self.checkpoint_folder = checkpoint_folder self.experiment_name = experiment_name # networks self.generator = Generator(ngf=hparams.ngf, nc=hparams.nc, latent_dim=hparams.latent_dim) self.discriminator = Discriminator(ndf=hparams.ndf, nc=hparams.nc) self.generator.apply(weights_init) self.discriminator.apply(weights_init) # cache for generated images self.generated_imgs = None self.last_imgs = None # For experience replay self.exp_replay_dis = torch.tensor([]) def forward(self, z): return self.generator(z) def adversarial_loss(self, y_hat, y): return F.binary_cross_entropy(y_hat, y) def training_step(self, batch, batch_nb, optimizer_idx): # For adding instance noise; for more visit: https:\/\/www.inference.vc\/instance-noise-a-trick-for-stabilising-gan-training\/ std_gaussian = max(0, self.hparams.level_of_noise - ( (self.hparams.level_of_noise * 2) * (self.current_epoch \/ self.hparams.epochs))) AddGaussianNoiseInst = AddGaussianNoise(std=std_gaussian) # the noise decays over time imgs, _ = batch imgs = AddGaussianNoiseInst(imgs) # Adding instance noise to
real images self.last_imgs = imgs # train generator if optimizer_idx == 0: # sample noise z = torch.randn(imgs.shape[0], self.hparams.latent_dim, 1, 1).cuda() # generate images self.generated_imgs = self(z) # ground truth result (ie: all fake) g_loss = self.adversarial_loss(self.discriminator(self.generated_imgs, False), get_valid_labels(self.generated_imgs[0])) # adversarial loss is binary cross-entropy; [0] is the image of the last layer tqdm_dict = {'g_loss': g_loss} log = {'g_loss': g_loss, \"std_gaussian\": std_gaussian} output = OrderedDict({ 'loss': g_loss, 'progress_bar': tqdm_dict, 'log': log }) return output # train discriminator if optimizer_idx == 1: # Measure discriminator's ability to classify real from generated samples # how well can it label as real? real_loss = self.adversarial_loss( self.discriminator([imgs, resize2d(imgs, 4), resize2d(imgs, 8), resize2d(imgs, 16), resize2d(imgs, 32), resize2d(imgs, 64)], False), get_valid_labels(imgs)) fake_loss = self.adversarial_loss(self.discriminator(self.generated_imgs, True), get_unvalid_labels( self.generated_imgs[0])) # how well can it label as fake?; [0] is the image of the last layer # discriminator loss is the average of these d_loss = (real_loss + fake_loss) \/ 2 tqdm_dict = {'d_loss': d_loss} log = {'d_loss': d_loss, \"std_gaussian\": std_gaussian} output = OrderedDict({ 'loss': d_loss, 'progress_bar': tqdm_dict, 'log': log }) return output def configure_optimizers(self): lr_gen = self.hparams.lr_gen lr_dis = self.hparams.lr_dis b1 = self.hparams.b1 b2 = self.hparams.b2 opt_g = torch.optim.Adam(self.generator.parameters(), lr=lr_gen, betas=(b1, b2)) opt_d = torch.optim.Adam(self.discriminator.parameters(), lr=lr_dis, betas=(b1, b2)) return [opt_g, opt_d], [] def backward(self, trainer, loss, optimizer, optimizer_idx: int) -> None: loss.backward(retain_graph=True) def train_dataloader(self): # transform = transforms.Compose([transforms.Resize((self.hparams.image_size, self.hparams.image_size)), # 
transforms.ToTensor(), # transforms.Normalize([0.5], [0.5])]) # dataset = torchvision.datasets.MNIST(os.getcwd(), train=False, download=True, transform=transform) # return DataLoader(dataset, batch_size=self.hparams.batch_size) # transform = transforms.Compose([transforms.Resize((self.hparams.image_size, self.hparams.image_size)), # transforms.ToTensor(), # transforms.Normalize([0.5], [0.5]) # ]) # train_dataset = torchvision.datasets.ImageFolder( # root=\".\/drive\/My Drive\/datasets\/flower_dataset\/\", # # root=\".\/drive\/My Drive\/datasets\/ghibli_dataset_small_overfit\/\", # transform=transform # ) # return DataLoader(train_dataset, num_workers=self.hparams.num_workers, shuffle=True, # batch_size=self.hparams.batch_size) transform = transforms.Compose([transforms.Resize((self.hparams.image_size, self.hparams.image_size)), transforms.ToTensor(), transforms.Normalize([0.5], [0.5]) ]) train_dataset = torchvision.datasets.ImageFolder( root=\"ghibli_dataset_small_overfit\/\", transform=transform ) return DataLoader(train_dataset, num_workers=self.hparams.num_workers, shuffle=True, batch_size=self.hparams.batch_size) def on_epoch_end(self): z = torch.randn(4, self.hparams.latent_dim, 1, 1).cuda() # match gpu device (or keep as cpu) if self.on_gpu: z = z.cuda(self.last_imgs.device.index) # log sampled images sample_imgs = self.generator(z)[0] torchvision.utils.save_image(sample_imgs, f'generated_images_epoch{self.current_epoch}.png') # save model if self.current_epoch % self.hparams.save_model_every_epoch == 0: trainer.save_checkpoint( self.checkpoint_folder + \"\/\" + self.experiment_name + \"_epoch_\" + str(self.current_epoch) + \".ckpt\") from argparse import Namespace args = { 'batch_size': 128, # batch size 'lr_gen': 0.0003, # TTUR; learning rate of both networks; tested value: 0.0002 'lr_dis': 0.0003, # TTUR; learning rate of both networks; tested value: 0.0002 'b1': 0.5, # Momentum for adam; tested value (dcgan paper): 0.5 'b2': 0.999, # Momentum for adam; tested
value (dcgan paper): 0.999 'latent_dim': 256, # tested value which worked (in V4_1): 100 'nc': 3, # number of color channels 'ndf': 8, # number of discriminator features 'ngf': 8, # number of generator features 'epochs': 4, # the maximal amount of epochs the algorithm should run 'save_model_every_epoch': 1, # how often we save our model 'image_size': 128, # size of the image 'num_workers': 3, 'level_of_noise': 0.1, # how much instance noise we introduce (std; tested values: 0.15 and 0.1) 'experience_save_per_batch': 1, # this value should be very low; tested value which works: 1 'experience_batch_size': 50 # this value shouldn't be too high; tested value which works: 50 } hparams = Namespace(**args) # Parameters experiment_name = \"DCGAN_6_2_MNIST_128px\" dataset_name = \"mnist\" checkpoint_folder = \"DCGAN\/\" tags = [\"DCGAN\", \"128x128\"] dirpath = Path(checkpoint_folder) # defining net net = DCGAN(hparams, checkpoint_folder, experiment_name) torch.autograd.set_detect_anomaly(True) trainer = pl.Trainer( # resume_from_checkpoint=\"DCGAN_V4_2_GHIBLI_epoch_999.ckpt\", max_epochs=args[\"epochs\"], gpus=1 ) trainer.fit(net) ```", "best_answers_score":0.8, "library_name":"pytorch", "question_url":"https:\/\/stackoverflow.com\/questions\/60421475\/dcgan-debugging-getting-just-garbage", "best_answers_votes":2, "question_length":15889, "response_length":12513 }, { "question":"How to compute the cosine_similarity in pytorch for all rows in a matrix with respect to all rows in another matrix In pytorch, given that I have 2 matrices, how would I compute the cosine similarity of all rows in each with all rows in the other?
For example, given the input = ``` matrix_1 = [a b] [c d] matrix_2 = [e f] [g h] ``` I would like the output to be output = ``` [cosine_sim([a b] [e f]) cosine_sim([a b] [g h])] [cosine_sim([c d] [e f]) cosine_sim([c d] [g h])] ``` At the moment I am using torch.nn.functional.cosine_similarity(matrix_1, matrix_2) which returns the cosine of the row with only that corresponding row in the other matrix. In my example I have only 2 rows, but I would like a solution which works for many rows. I would even like to handle the case where the number of rows in each matrix is different. I realize that I could use expand, however I want to do it without using such a large memory footprint.", "response":"By manually computing the similarity and playing with matrix multiplication + transposition: ```python import torch from scipy import spatial import numpy as np a = torch.randn(2, 2) b = torch.randn(3, 2) # different row number, for the fun # Given that cos_sim(u, v) = dot(u, v) \/ (norm(u) * norm(v)) # = dot(u \/ norm(u), v \/ norm(v)) # We first normalize the rows, before computing their dot products via transposition: a_norm = a \/ a.norm(dim=1)[:, None] b_norm = b \/ b.norm(dim=1)[:, None] res = torch.mm(a_norm, b_norm.transpose(0,1)) print(res) # 0.9978 -0.9986 -0.9985 # -0.8629 0.9172 0.9172 # ------- # Let's verify with numpy\/scipy if our computations are correct: a_n = a.numpy() b_n = b.numpy() res_n = np.zeros((2, 3)) for i in range(2): for j in range(3): # cos_sim(u, v) = 1 - cos_dist(u, v) res_n[i, j] = 1 - spatial.distance.cosine(a_n[i], b_n[j]) print(res_n) # [[ 0.9978022 -0.99855876 -0.99854881] # [-0.86285472 0.91716063 0.9172349 ]]
dropout in evaluation mode This is the model I defined; it is a simple LSTM with 2 fully connected layers. ``` import copy import torch import torch.nn as nn import torch.nn.functional as F import torch.optim as optim class mylstm(nn.Module): def __init__(self,input_dim, output_dim, hidden_dim,linear_dim): super(mylstm, self).__init__() self.hidden_dim=hidden_dim self.lstm=nn.LSTMCell(input_dim,self.hidden_dim) self.linear1=nn.Linear(hidden_dim,linear_dim) self.linear2=nn.Linear(linear_dim,output_dim) def forward(self, input): out,_=self.lstm(input) out=nn.Dropout(p=0.3)(out) out=self.linear1(out) out=nn.Dropout(p=0.3)(out) out=self.linear2(out) return out ``` x_train and x_val are float dataframes with shape (4478,30), while y_train and y_val are float dfs with shape (4478,10) ``` x_train.head() Out[271]: 0 1 2 3 ... 26 27 28 29 0 1.6110 1.6100 1.6293 1.6370 ... 1.6870 1.6925 1.6950 1.6905 1 1.6100 1.6293 1.6370 1.6530 ... 1.6925 1.6950 1.6905 1.6960 2 1.6293 1.6370 1.6530 1.6537 ... 1.6950 1.6905 1.6960 1.6930 3 1.6370 1.6530 1.6537 1.6620 ... 1.6905 1.6960 1.6930 1.6955 4 1.6530 1.6537 1.6620 1.6568 ...
1.6960 1.6930 1.6955 1.7040 [5 rows x 30 columns] x_train.shape Out[272]: (4478, 30) ``` Define the variables and do backprop once; I can see that the validation loss is 1.4941 ``` model=mylstm(30,10,200,100).double() from torch import optim optimizer=optim.RMSprop(model.parameters(), lr=0.001, alpha=0.9) criterion=nn.L1Loss() input_=torch.autograd.Variable(torch.from_numpy(np.array(x_train))) target=torch.autograd.Variable(torch.from_numpy(np.array(y_train))) input2_=torch.autograd.Variable(torch.from_numpy(np.array(x_val))) target2=torch.autograd.Variable(torch.from_numpy(np.array(y_val))) optimizer.zero_grad() output=model(input_) loss=criterion(output,target) loss.backward() optimizer.step() moniter=criterion(model(input2_),target2) moniter Out[274]: tensor(1.4941, dtype=torch.float64, grad_fn=) ``` But when I call the forward function again I get a different number due to the randomness of dropout ``` moniter=criterion(model(input2_),target2) moniter Out[275]: tensor(1.4943, dtype=torch.float64, grad_fn=) ``` What should I do to eliminate all the dropout in the prediction phase?
I tried eval(): ``` moniter=criterion(model.eval()(input2_),target2) moniter Out[282]: tensor(1.4942, dtype=torch.float64, grad_fn=) moniter=criterion(model.eval()(input2_),target2) moniter Out[283]: tensor(1.4945, dtype=torch.float64, grad_fn=) ``` And I tried passing an additional parameter p to control dropout: ``` import copy import torch import torch.nn as nn import torch.nn.functional as F import torch.optim as optim class mylstm(nn.Module): def __init__(self,input_dim, output_dim, hidden_dim,linear_dim,p): super(mylstm, self).__init__() self.hidden_dim=hidden_dim self.lstm=nn.LSTMCell(input_dim,self.hidden_dim) self.linear1=nn.Linear(hidden_dim,linear_dim) self.linear2=nn.Linear(linear_dim,output_dim) def forward(self, input,p): out,_=self.lstm(input) out=nn.Dropout(p=p)(out) out=self.linear1(out) out=nn.Dropout(p=p)(out) out=self.linear2(out) return out model=mylstm(30,10,200,100,0.3).double() output=model(input_) loss=criterion(output,target) loss.backward() optimizer.step() moniter=criterion(model(input2_,0),target2) Traceback (most recent call last): File \"\", line 1, in output=model(input_) File \"D:\\Users\\shan xu\\Anaconda3\\lib\\site-packages\\torch\\nn\\modules\\module.py\", line 489, in __call__ result = self.forward(*input, **kwargs) TypeError: forward() missing 1 required positional argument: 'p' ``` But neither of them worked.", "response":"You have to define your nn.Dropout layer in your __init__ and assign it to your model so that it responds to calling eval().
So changing your model like this should work for you: ``` class mylstm(nn.Module): def __init__(self,input_dim, output_dim, hidden_dim,linear_dim,p): super(mylstm, self).__init__() self.hidden_dim=hidden_dim self.lstm=nn.LSTMCell(input_dim,self.hidden_dim) self.linear1=nn.Linear(hidden_dim,linear_dim) self.linear2=nn.Linear(linear_dim,output_dim) # define dropout layer in __init__ self.drop_layer = nn.Dropout(p=p) def forward(self, input): out,_= self.lstm(input) # apply model dropout, responsive to eval() out= self.drop_layer(out) out= self.linear1(out) # apply model dropout, responsive to eval() out= self.drop_layer(out) out= self.linear2(out) return out ``` If you change it like this dropout will be inactive as soon as you call eval(). NOTE: If you want to continue training afterwards you need to call train() on your model to leave evaluation mode. You can also find a small working example for dropout with eval() for evaluation mode here: nn.Dropout vs. F.dropout pyTorch", "best_answers_score":0.8, "library_name":"pytorch", "question_url":"https:\/\/stackoverflow.com\/questions\/53879727\/pytorch-how-to-deactivate-dropout-in-evaluation-mode", "best_answers_votes":33, "question_length":3584, "response_length":1111 }, { "question":"set `torch.backends.cudnn.benchmark = True` or not? I am using pytorch and I wonder if I should use torch.backends.cudnn.benchmark = True. I find on google that I should use it when computation graph does not change. What is computation graph in pytorch?", "response":"If your model does not change and your input sizes remain the same - then you may benefit from setting torch.backends.cudnn.benchmark = True. 
However, if your model changes: for instance, if you have layers that are only \"activated\" when certain conditions are met, or you have layers inside a loop that can be iterated a different number of times, then setting torch.backends.cudnn.benchmark = True might stall your execution.", "best_answers_score":0.8, "library_name":"pytorch", "question_url":"https:\/\/stackoverflow.com\/questions\/58961768\/set-torch-backends-cudnn-benchmark-true-or-not", "best_answers_votes":32, "question_length":254, "response_length":427 }, { "question":"PyTorch set_grad_enabled(False) vs with no_grad(): Assuming autograd is on (as it is by default), is there any difference (besides indent) between doing: ``` with torch.no_grad(): ``` and ``` torch.set_grad_enabled(False) torch.set_grad_enabled(True) ```", "response":"Actually no, there is no difference in the way they are used in the question. When you take a look at the source code of no_grad, you see that it is actually using torch.set_grad_enabled to achieve this behaviour: ```py class no_grad(object): r\"\"\"Context-manager that disabled gradient calculation. Disabling gradient calculation is useful for inference, when you are sure that you will not call :meth:`Tensor.backward()`. It will reduce memory consumption for computations that would otherwise have `requires_grad=True`. In this mode, the result of every computation will have `requires_grad=False`, even when the inputs have `requires_grad=True`. Also functions as a decorator. Example:: >>> x = torch.tensor([1], requires_grad=True) >>> with torch.no_grad(): ... y = x * 2 >>> y.requires_grad False >>> @torch.no_grad() ... def doubler(x): ...
return x * 2 >>> z = doubler(x) >>> z.requires_grad False \"\"\" def __init__(self): self.prev = torch.is_grad_enabled() def __enter__(self): torch._C.set_grad_enabled(False) def __exit__(self, *args): torch.set_grad_enabled(self.prev) return False def __call__(self, func): @functools.wraps(func) def decorate_no_grad(*args, **kwargs): with self: return func(*args, **kwargs) return decorate_no_grad ``` However, there is additional functionality in torch.set_grad_enabled over torch.no_grad when used in a with-statement: it lets you switch gradient computation on or off: ```py >>> x = torch.tensor([1], requires_grad=True) >>> is_train = False >>> with torch.set_grad_enabled(is_train): ... y = x * 2 >>> y.requires_grad False ``` https:\/\/pytorch.org\/docs\/stable\/_modules\/torch\/autograd\/grad_mode.html Edit: @TomHale Regarding your comment. I just made a short test with PyTorch 1.0 and it turned out that the gradient will be active: ``` import torch w = torch.rand(5, requires_grad=True) print('Grad Before:', w.grad) torch.set_grad_enabled(False) with torch.enable_grad(): scalar = w.sum() scalar.backward() # Gradient tracking will be enabled here. torch.set_grad_enabled(True) print('Grad After:', w.grad) ``` Output: ``` Grad Before: None Grad After: tensor([1., 1., 1., 1., 1.]) ``` So gradients will be computed in this setting. The other setting you posted in your answer also yields the same result: ``` import torch w = torch.rand(5, requires_grad=True) print('Grad Before:', w.grad) with torch.no_grad(): with torch.enable_grad(): # Gradient tracking IS enabled here.
scalar = w.sum() scalar.backward() print('Grad After:', w.grad) ``` Output: ``` Grad Before: None Grad After: tensor([1., 1., 1., 1., 1.]) ```", "best_answers_score":0.8, "library_name":"pytorch", "question_url":"https:\/\/stackoverflow.com\/questions\/53447345\/pytorch-set-grad-enabledfalse-vs-with-no-grad", "best_answers_votes":28, "question_length":256, "response_length":2570 }, { "question":"How does Pytorch Dataloader handle variable size data? I have a dataset that looks like below. That is, the first item is the user id, followed by the set of items clicked by the user. ``` 0 24104 27359 6684 0 24104 27359 1 16742 31529 31485 1 16742 31529 2 6579 19316 13091 7181 6579 19316 13091 2 6579 19316 13091 7181 6579 19316 2 6579 19316 13091 7181 6579 19316 13091 6579 2 6579 19316 13091 7181 6579 4 19577 21608 4 19577 21608 4 19577 21608 18373 5 3541 9529 5 3541 9529 6 6832 19218 14144 6 6832 19218 7 9751 23424 25067 12606 26245 23083 12606 ``` I define a custom dataset to handle my click log data. ```py import torch.utils.data as data class ClickLogDataset(data.Dataset): def __init__(self, data_path): self.data_path = data_path self.uids = [] self.streams = [] with open(self.data_path, 'r') as fdata: for row in fdata: row = row.strip('\\n').split('\\t') self.uids.append(int(row[0])) self.streams.append(list(map(int, row[1:]))) def __len__(self): return len(self.uids) def __getitem__(self, idx): uid, stream = self.uids[idx], self.streams[idx] return uid, stream ``` Then I use a DataLoader to retrieve mini batches from the data for training. ```py from torch.utils.data.dataloader import DataLoader clicklog_dataset = ClickLogDataset(data_path) clicklog_data_loader = DataLoader(dataset=clicklog_dataset, batch_size=16) for uid_batch, stream_batch in stream_data_loader: print(uid_batch) print(stream_batch) ``` The code above does not return what I expected: I want stream_batch to be a 2D tensor of type integer of length 16.
However, what I get is a list with only one element, a 1D tensor of length 16, like below. Why is that? ``` #stream_batch [tensor([24104, 24104, 16742, 16742, 6579, 6579, 6579, 6579, 19577, 19577, 19577, 3541, 3541, 6832, 6832, 9751])] ```", "response":"This is the way I do it: ``` def collate_fn_padd(batch): ''' Pads batch of variable length note: it converts things ToTensor manually here since the ToTensor transform assumes it takes in images rather than arbitrary tensors. ''' ## get sequence lengths lengths = torch.tensor([ t.shape[0] for t in batch ]).to(device) ## pad batch = [ torch.Tensor(t).to(device) for t in batch ] batch = torch.nn.utils.rnn.pad_sequence(batch) ## compute mask mask = (batch != 0).to(device) return batch, lengths, mask ``` then I pass that to the dataloader as the collate_fn. There seems to be a giant list of different posts in the pytorch forum. Let me link to all of them. They all have answers of their own and discussions. It doesn't seem to me that there is one \"standard way to do it\", but if there is one from an authoritative reference, please share. It would be nice if the ideal answer mentioned efficiency, e.g. whether to do the processing on the GPU with torch in the collate function vs. numpy, things of that sort.
List: https:\/\/discuss.pytorch.org\/t\/how-to-create-batches-of-a-list-of-varying-dimension-tensors\/50773 https:\/\/discuss.pytorch.org\/t\/how-to-create-a-dataloader-with-variable-size-input\/8278 https:\/\/discuss.pytorch.org\/t\/using-variable-sized-input-is-padding-required\/18131 https:\/\/discuss.pytorch.org\/t\/dataloader-for-various-length-of-data\/6418 https:\/\/discuss.pytorch.org\/t\/how-to-do-padding-based-on-lengths\/24442 bucketing: - https:\/\/discuss.pytorch.org\/t\/tensorflow-esque-bucket-by-sequence-length\/41284", "best_answers_score":0.8, "library_name":"pytorch", "question_url":"https:\/\/stackoverflow.com\/questions\/55041080\/how-does-pytorch-dataloader-handle-variable-size-data", "best_answers_votes":19, "question_length":1819, "response_length":1512 }, { "question":"What \"exactly\" happens inside embedding layer in pytorch? From multiple searches and pytorch documentation itself I could figure out that inside embedding layer there is a lookup table where the embedding vectors are stored. What I am not able to understand: what exactly happens during training in this layer? What are the weights and how the gradients of those weights are computed? My intuition is that at least there should be a function with some parameters that produces the keys for the lookup table. If so, then what is that function? Any help in this will be appreciated. Thanks.", "response":"That is a really good question! The embedding layer of PyTorch (same goes for Tensorflow) serves as a lookup table just to retrieve the embeddings for each of the inputs, which are indices. Consider the following case, you have a sentence where each word is tokenized. Therefore, each word in your sentence is represented with a unique integer (index). 
In case the list of indices (words) is [1, 5, 9], and you want to encode each of the words with a 50 dimensional vector (embedding), you can do the following: ```py # The list of tokens tokens = torch.tensor([1,5,9], dtype=torch.long) # Define an embedding layer, where you know upfront that in total you # have 10 distinct words, and you want each word to be encoded with # a 50 dimensional vector embedding = torch.nn.Embedding(num_embeddings=10, embedding_dim=50) # Obtain the embeddings for each of the words in the sentence embedded_words = embedding(tokens) ``` Now, to answer your questions: During the forward pass, the values for each of the tokens in your sentence are going to be obtained in a similar way to how Numpy's indexing works. Because in the backend, this is a differentiable operation, during the backward pass (training), Pytorch is going to compute the gradients for each of the embeddings and readjust them accordingly. The weights are the embeddings themselves. The word embedding matrix is actually a weight matrix that will be learned during training. There is no actual function per se. As we defined above, the sentence is already tokenized (each word is represented with a unique integer), and we can just obtain the embeddings for each of the tokens in the sentence. Finally, as I mentioned the example with the indexing many times, let us try it out. ```py # Let us assume that we have a pre-trained embedding matrix pretrained_embeddings = torch.rand(10, 50) # We can initialize our embedding module from the embedding matrix embedding = torch.nn.Embedding.from_pretrained(pretrained_embeddings) # Some tokens tokens = torch.tensor([1,5,9], dtype=torch.long) # Token embeddings from the lookup table lookup_embeddings = embedding(tokens) # Token embeddings obtained with indexing indexing_embeddings = pretrained_embeddings[tokens] # Voila!
They are the same np.testing.assert_array_equal(lookup_embeddings.numpy(), indexing_embeddings.numpy()) ```", "best_answers_score":0.8, "library_name":"pytorch", "question_url":"https:\/\/stackoverflow.com\/questions\/58718612\/what-exactly-happens-inside-embedding-layer-in-pytorch", "best_answers_votes":37, "question_length":588, "response_length":2335 }, { "question":"PyTorch : How to properly create a list of nn.Linear() I have created a class that has nn.Module as subclass. In my class, I have to create N number of linear transformation, where N is given as class parameters. I therefore proceed as follow : ``` self.list_1 = [] for i in range(N): self.list_1.append(nn.Linear(self.x, 1, bias=mlp_bias)) ``` In the forward method, i call these matrices (with list_1[i]) and concat the results. Two things : 1) Even though I use model.cuda(), these Linear transform are used on cpu and i get the following error : RuntimeError: Expected object of type Variable[torch.cuda.FloatTensor] but found type Variable[torch.FloatTensor] for argument #1 'mat2' I have to do ``` self.list_1.append(nn.Linear(self.x, 1, bias=mlp_bias).cuda()) ``` This is not required if instead, i do : ``` self.nn = nn.Linear(self.x, 1, bias=mlp_bias) ``` and then use self.nn directly. 2) For more obvious reason, when I print(model) in my main, the Linear matrices in my list arent printed. Is there any other way. maybe using bmm ? I find it less easy, and i actually want to have my N results separately. 
Thank you in advance, M", "response":"You can use nn.ModuleList to wrap your list of linear layers as explained here ``` self.list_1 = nn.ModuleList(self.list_1) ```", "best_answers_score":0.8, "library_name":"pytorch", "question_url":"https:\/\/stackoverflow.com\/questions\/50463975\/pytorch-how-to-properly-create-a-list-of-nn-linear", "best_answers_votes":36, "question_length":1141, "response_length":127 }, { "question":"Pytorch Change the learning rate based on number of epochs When I set the learning rate and find the accuracy cannot increase after training few epochs ``` optimizer = optim.Adam(model.parameters(), lr = 1e-4) n_epochs = 10 for i in range(n_epochs): \/\/ some training here ``` If I want to use a step decay: reduce the learning rate by a factor of 10 every 5 epochs, how can I do so?", "response":"You can use the learning rate scheduler torch.optim.lr_scheduler.StepLR ```py from torch.optim.lr_scheduler import StepLR scheduler = StepLR(optimizer, step_size=5, gamma=0.1) ``` Decays the learning rate of each parameter group by gamma every step_size epochs; see the docs here. Example from the docs: ```py # Assuming optimizer uses lr = 0.05 for all groups # lr = 0.05 if epoch < 30 # lr = 0.005 if 30 <= epoch < 60 # lr = 0.0005 if 60 <= epoch < 90 # ... scheduler = StepLR(optimizer, step_size=30, gamma=0.1) for epoch in range(100): train(...) validate(...)
scheduler.step() ``` Example: ```py import torch import torch.optim as optim optimizer = optim.SGD([torch.rand((2,2), requires_grad=True)], lr=0.1) scheduler = optim.lr_scheduler.StepLR(optimizer, step_size=5, gamma=0.1) ``` ```py for epoch in range(1, 21): scheduler.step() print('Epoch-{0} lr: {1}'.format(epoch, optimizer.param_groups[0]['lr'])) if epoch % 5 == 0:print() ``` ```py Epoch-1 lr: 0.1 Epoch-2 lr: 0.1 Epoch-3 lr: 0.1 Epoch-4 lr: 0.1 Epoch-5 lr: 0.1 Epoch-6 lr: 0.010000000000000002 Epoch-7 lr: 0.010000000000000002 Epoch-8 lr: 0.010000000000000002 Epoch-9 lr: 0.010000000000000002 Epoch-10 lr: 0.010000000000000002 Epoch-11 lr: 0.0010000000000000002 Epoch-12 lr: 0.0010000000000000002 Epoch-13 lr: 0.0010000000000000002 Epoch-14 lr: 0.0010000000000000002 Epoch-15 lr: 0.0010000000000000002 Epoch-16 lr: 0.00010000000000000003 Epoch-17 lr: 0.00010000000000000003 Epoch-18 lr: 0.00010000000000000003 Epoch-19 lr: 0.00010000000000000003 Epoch-20 lr: 0.00010000000000000003 ``` More on How to adjust Learning Rate - torch.optim.lr_scheduler provides several methods to adjust the learning rate based on the number of epochs.", "best_answers_score":0.8, "library_name":"pytorch", "question_url":"https:\/\/stackoverflow.com\/questions\/60050586\/pytorch-change-the-learning-rate-based-on-number-of-epochs", "best_answers_votes":54, "question_length":382, "response_length":1681 }, { "question":"How can I generate and display a grid of images in PyTorch with plt.imshow and torchvision.utils.make_grid? I am trying to understand how torchvision interacts with mathplotlib to produce a grid of images. It's easy to generate images and display them iteratively: ``` import torch import torchvision import matplotlib.pyplot as plt w = torch.randn(10,3,640,640) for i in range (0,10): z = w[i] plt.imshow(z.permute(1,2,0)) plt.show() ``` However, displaying these images in a grid does not seem to be as straightforward. 
``` w = torch.randn(10,3,640,640) grid = torchvision.utils.make_grid(w, nrow=5) plt.imshow(grid) --------------------------------------------------------------------------- TypeError Traceback (most recent call last) in () 1 w = torch.randn(10,3,640,640) 2 grid = torchvision.utils.make_grid(w, nrow=5) ----> 3 plt.imshow(grid) \/anaconda3\/lib\/python3.6\/site-packages\/matplotlib\/pyplot.py in imshow(X, cmap, norm, aspect, interpolation, alpha, vmin, vmax, origin, extent, shape, filternorm, filterrad, imlim, resample, url, hold, data, **kwargs) 3203 filternorm=filternorm, filterrad=filterrad, 3204 imlim=imlim, resample=resample, url=url, data=data, -> 3205 **kwargs) 3206 finally: 3207 ax._hold = washold \/anaconda3\/lib\/python3.6\/site-packages\/matplotlib\/__init__.py in inner(ax, *args, **kwargs) 1853 \"the Matplotlib list!)\" % (label_namer, func.__name__), 1854 RuntimeWarning, stacklevel=2) -> 1855 return func(ax, *args, **kwargs) 1856 1857 inner.__doc__ = _add_data_doc(inner.__doc__, \/anaconda3\/lib\/python3.6\/site-packages\/matplotlib\/axes\/_axes.py in imshow(self, X, cmap, norm, aspect, interpolation, alpha, vmin, vmax, origin, extent, shape, filternorm, filterrad, imlim, resample, url, **kwargs) 5485 resample=resample, **kwargs) 5486 -> 5487 im.set_data(X) 5488 im.set_alpha(alpha) 5489 if im.get_clip_path() is None: \/anaconda3\/lib\/python3.6\/site-packages\/matplotlib\/image.py in set_data(self, A) 651 if not (self._A.ndim == 2 652 or self._A.ndim == 3 and self._A.shape[-1] in [3, 4]): --> 653 raise TypeError(\"Invalid dimensions for image data\") 654 655 if self._A.ndim == 3: TypeError: Invalid dimensions for image data ``` Even though PyTorch's documentation indicates that w is the correct shape, Python says that it isn't. 
So I tried to permute the indices of my tensor: ``` w = torch.randn(10,3,640,640) grid = torchvision.utils.make_grid(w.permute(0,2,3,1), nrow=5) plt.imshow(grid) --------------------------------------------------------------------------- RuntimeError Traceback (most recent call last) in () 1 w = torch.randn(10,3,640,640) ----> 2 grid = torchvision.utils.make_grid(w.permute(0,2,3,1), nrow=5) 3 plt.imshow(grid) \/anaconda3\/lib\/python3.6\/site-packages\/torchvision-0.2.1-py3.6.egg\/torchvision\/utils.py in make_grid(tensor, nrow, padding, normalize, range, scale_each, pad_value) 83 grid.narrow(1, y * height + padding, height - padding)\\ 84 .narrow(2, x * width + padding, width - padding)\\ ---> 85 .copy_(tensor[k]) 86 k = k + 1 87 return grid RuntimeError: The expanded size of the tensor (3) must match the existing size (640) at non-singleton dimension 0 ``` What's happening here? How can I place a bunch of randomly generated images into a grid and display them?", "response":"There's a small mistake in your code. torchvision.utils.make_grid() returns a tensor which contains the grid of images. But the channel dimension has to be moved to the end since that's what matplotlib recognizes. Below is the code that works fine: ``` In [107]: import torchvision # sample input (10 RGB images containing just Gaussian Noise) In [108]: batch_tensor = torch.randn(*(10, 3, 256, 256)) # (N, C, H, W) # make grid (2 rows and 5 columns) to display our 10 images In [109]: grid_img = torchvision.utils.make_grid(batch_tensor, nrow=5) # check shape In [110]: grid_img.shape Out[110]: torch.Size([3, 518, 1292]) # reshape and plot (because matplotlib needs channel as the last dimension) In [111]: plt.imshow(grid_img.permute(1, 2, 0)) Clipping input data to the valid range for imshow with RGB data ([0..1] for floats or [0..255] for integers). 
Out[111]: ``` which shows the output as:", "best_answers_score":0.8, "library_name":"pytorch", "question_url":"https:\/\/stackoverflow.com\/questions\/51329159\/how-can-i-generate-and-display-a-grid-of-images-in-pytorch-with-plt-imshow-and-t", "best_answers_votes":44, "question_length":3231, "response_length":898 }, { "question":"best way of tqdm for data loader How to use tqdm for data_loader ? is this the correct way? ``` for i,j in enumerate(data_loader,total = 100): pass ```", "response":"You need to wrap the iterable with tqdm, as their documentation clearly says: Instantly make your loops show a smart progress meter - just wrap any iterable with tqdm(iterable), and you\u2019re done! If you're enumerating over an iterable, you can do something like the following. Sleep is only for visualizing it. ``` from tqdm import tqdm from time import sleep data_loader = list(range(1000)) for i, j in enumerate(tqdm(data_loader)): sleep(0.01) ```", "best_answers_score":0.8, "library_name":"pytorch", "question_url":"https:\/\/stackoverflow.com\/questions\/63426545\/best-way-of-tqdm-for-data-loader", "best_answers_votes":37, "question_length":151, "response_length":448 }, { "question":"PyTorch model input shape I loaded a custom PyTorch model and I want to find out its input shape. Something like this: ``` model.input_shape ``` Is it possible to get this information? Update: print() and summary() don't show this model's input shape, so they are not what I'm looking for.", "response":"PyTorch flexibility PyTorch models are very flexible objects, to the point where they do not enforce or generally expect a fixed input shape for data. If you have certain layers there may be constraints e.g: a flatten followed by a fully connected layer of width N would enforce the dimensions of your original input (M1 x M2 x ... 
Mn) to have a product equal to N a 2d convolution of N input channels would enforce the data to be 3-dimensional, with the first dimension having size N But as you can see neither of these enforce the total shape of the data. We might not realize it right now, but in more complex models, getting the size of the first linear layer right is sometimes a source of frustration. We’ve heard stories of famous practitioners putting in arbitrary numbers and then relying on error messages from PyTorch to backtrack the correct sizes for their linear layers. Lame, eh? Nah, it’s all legit! Deep Learning with PyTorch Investigation Simple case: First layer is Fully Connected If your model's first layer is a fully connected one, then the first layer in print(model) will detail the expected dimensionality of a single sample. Ambiguous case: CNN If it is a convolutional layer however, since these are dynamic and will stride as long\/wide as the input permits, there is no simple way to retrieve this info from the model itself.1 This flexibility means that for many architectures multiple compatible input sizes2 will all be acceptable by the network. This is a feature of PyTorch's Dynamic computational graph. Manual inspection What you will need to do is investigate the network architecture, and once you've found an interpretable layer (if one is present e.g. fully connected) \"work backwards\" with its dimensions, determining how the previous layers (e.g. poolings and convolutions) have compressed\/modified it. Example e.g.
in the following model from Deep Learning with PyTorch (8.5.1): ```py class NetWidth(nn.Module): def __init__(self): super().__init__() self.conv1 = nn.Conv2d(3, 32, kernel_size=3, padding=1) self.conv2 = nn.Conv2d(32, 16, kernel_size=3, padding=1) self.fc1 = nn.Linear(16 * 8 * 8, 32) self.fc2 = nn.Linear(32, 2) def forward(self, x): out = F.max_pool2d(torch.tanh(self.conv1(x)), 2) out = F.max_pool2d(torch.tanh(self.conv2(out)), 2) out = out.view(-1, 16 * 8 * 8) out = torch.tanh(self.fc1(out)) out = self.fc2(out) return out ``` We see the model takes an input 2D image with 3 channels and: Conv2d -> sends it to an image of the same size with 32 channels max_pool2d(,2) -> halves the size of the image in each dimension Conv2d -> sends it to an image of the same size with 16 channels max_pool2d(,2) -> halves the size of the image in each dimension view -> reshapes the image Linear -> takes a tensor of size 16 * 8 * 8 and sends to size 32 ... So working backwards, we have: a tensor of shape 16 * 8 * 8 un-reshaped into shape (channels x height x width) un-max_pooled in 2d with factor 2, so height and width un-halved un-convolved from 16 channels to 32 Hypothesis: It is likely 16 in the product thus refers to the number of channels, and that the image seen by view was of shape (channels, 8,8), and currently is (channels, 16,16)2 un-max_pooled in 2d with factor 2, so height and width un-halved again (channels, 32,32) un-convolved from 32 channels to 3 So assuming the kernel_size and padding are sufficient that the convolutions themselves maintain image dimensions, it is likely that the input image is of shape (3,32,32) i.e. RGB 32x32 pixel square images. Notes: Even the external package pytorch-summary requires you to provide the input shape in order to display the shape of the output of each layer. It could however be any 2 numbers whose product equals 8*8 e.g.
(64,1), (32,2), (16,4) etc however since the code is written as 8*8 it is likely the authors used the actual dimensions.", "best_answers_score":0.8, "library_name":"pytorch", "question_url":"https:\/\/stackoverflow.com\/questions\/66488807\/pytorch-model-input-shape", "best_answers_votes":28, "question_length":289, "response_length":3865 }, { "question":"Difference between 1 LSTM with num_layers = 2 and 2 LSTMs in pytorch I am new to deep learning and currently working on using LSTMs for language modeling. I was looking at the pytorch documentation and was confused by it. If I create a ``` nn.LSTM(input_size, hidden_size, num_layers) ``` where hidden_size = 4 and num_layers = 2, I think I will have an architecture something like: ``` op0 op1 .... LSTM -> LSTM -> h3 LSTM -> LSTM -> h2 LSTM -> LSTM -> h1 LSTM -> LSTM -> h0 x0 x1 ..... ``` If I do something like ``` nn.LSTM(input_size, hidden_size, 1) nn.LSTM(input_size, hidden_size, 1) ``` I think the network architecture will look exactly like above. Am I wrong? And if yes, what is the difference between these two?", "response":"The multi-layer LSTM is better known as stacked LSTM where multiple layers of LSTM are stacked on top of each other. Your understanding is correct. The following two definitions of stacked LSTM are same. ``` nn.LSTM(input_size, hidden_size, 2) ``` and ``` nn.Sequential(OrderedDict([ ('LSTM1', nn.LSTM(input_size, hidden_size, 1), ('LSTM2', nn.LSTM(hidden_size, hidden_size, 1) ])) ``` Here, the input is feed into the lowest layer of LSTM and then the output of the lowest layer is forwarded to the next layer and so on so forth. Please note, the output size of the lowest LSTM layer and the rest of the LSTM layer's input size is hidden_size. 
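As a quick shape check (a hedged sketch with toy sizes assumed), note that the stacked module only returns the outputs of its top layer:

```py
# Sketch: output shapes of a 2-layer stacked LSTM (toy sizes assumed)
import torch
import torch.nn as nn

input_size, hidden_size, seq_len, batch = 4, 8, 5, 3
stacked = nn.LSTM(input_size, hidden_size, num_layers=2)
x = torch.randn(seq_len, batch, input_size)
out, (h, c) = stacked(x)
print(out.shape)  # torch.Size([5, 3, 8]) - top layer outputs for every step
print(h.shape)    # torch.Size([2, 3, 8]) - final hidden state of each layer
```

Because out exposes only the top layer, retrieving per-layer hidden states needs a different construction.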
However, you may have seen people define a stacked LSTM in the following way: ``` rnns = nn.ModuleList() for i in range(nlayers): input_size = input_size if i == 0 else hidden_size rnns.append(nn.LSTM(input_size, hidden_size, 1)) ``` The reason people sometimes use the above approach is that if you create a stacked LSTM using the first two approaches, you can't get the hidden states of each individual layer. Check out what LSTM returns in PyTorch. So, if you want to have the intermediate layers' hidden states, you have to declare each individual LSTM layer as a single LSTM and run through a loop to mimic the multi-layer LSTM operations. For example: ``` outputs = [] for i in range(nlayers): if i != 0: sent_variable = F.dropout(sent_variable, p=0.2, training=True) output, hidden = rnns[i](sent_variable) outputs.append(output) sent_variable = output ``` In the end, outputs will contain all the hidden states of each individual LSTM layer.", "best_answers_score":0.8, "library_name":"pytorch", "question_url":"https:\/\/stackoverflow.com\/questions\/49224413\/difference-between-1-lstm-with-num-layers-2-and-2-lstms-in-pytorch", "best_answers_votes":40, "question_length":723, "response_length":1591 }, { "question":"How does max_length, padding and truncation arguments work in HuggingFace' BertTokenizerFast.from_pretrained('bert-base-uncased')? I am working with Text Classification problem where I want to use the BERT model as the base followed by Dense layers. I want to know how does the 3 arguments work? For example, if I have 3 sentences as: ``` 'My name is slim shade and I am an aspiring AI Engineer', 'I am an aspiring AI Engineer', 'My name is Slim' ``` SO what will these 3 arguments do? What I think is as follows: max_length=5 will keep all the sentences as of length 5 strictly padding=max_length will add a padding of 1 to the third sentence truncate=True will truncate the first and second sentence so that their length will be strictly 5. Please correct me if I am wrong.
Below is my code which I have used. ``` ! pip install transformers==3.5.1 from transformers import BertTokenizerFast tokenizer = BertTokenizerFast.from_pretrained('bert-base-uncased') tokens = tokenizer.batch_encode_plus(text,max_length=5,padding='max_length', truncation=True) text_seq = torch.tensor(tokens['input_ids']) text_mask = torch.tensor(tokens['attention_mask']) ```", "response":"What you have assumed is almost correct, however, there are few differences. max_length=5, the max_length specifies the length of the tokenized text. By default, BERT performs word-piece tokenization. For example the word \"playing\" can be split into \"play\" and \"##ing\" (This may not be very precise, but just to help you understand about word-piece tokenization), followed by adding [CLS] token at the beginning of the sentence, and [SEP] token at the end of sentence. Thus, it first tokenizes the sentence, truncates it to max_length-2 (if truncation=True), then prepend [CLS] at the beginning and [SEP] token at the end.(So a total length of max_length) padding='max_length', In this example it is not very evident that the 3rd example will be padded, as the length exceeds 5 after appending [CLS] and [SEP] tokens. However, if you have a max_length of 10. The tokenized text corresponds to [101, 2026, 2171, 2003, 11754, 102, 0, 0, 0, 0], where 101 is id of [CLS] and 102 is id of [SEP] tokens. Thus, padded by zeros to make all the text to the length of max_length Likewise, truncate=True will ensure that the max_length is strictly adhered, i.e, longer sentences are truncated to max_length only if truncate=True", "best_answers_score":0.8, "library_name":"pytorch", "question_url":"https:\/\/stackoverflow.com\/questions\/65246703\/how-does-max-length-padding-and-truncation-arguments-work-in-huggingface-bertt", "best_answers_votes":36, "question_length":1153, "response_length":1217 }, { "question":"PyTorch custom loss function How should a custom loss function be implemented ? 
Using below code is causing error : ``` import torch import torch.nn as nn import torchvision import torchvision.transforms as transforms import numpy as np import matplotlib.pyplot as plt import torch.utils.data as data_utils import torch.nn as nn import torch.nn.functional as F num_epochs = 20 x1 = np.array([0,0]) x2 = np.array([0,1]) x3 = np.array([1,0]) x4 = np.array([1,1]) num_epochs = 200 class cus2(torch.nn.Module): def __init__(self): super(cus2,self).__init__() def forward(self, outputs, labels): # reshape labels to give a flat vector of length batch_size*seq_len labels = labels.view(-1) # mask out 'PAD' tokens mask = (labels >= 0).float() # the number of tokens is the sum of elements in mask num_tokens = int(torch.sum(mask).data[0]) # pick the values corresponding to labels and multiply by mask outputs = outputs[range(outputs.shape[0]), labels]*mask # cross entropy loss for all non 'PAD' tokens return -torch.sum(outputs)\/num_tokens x = torch.tensor([x1,x2,x3,x4]).float() y = torch.tensor([0,1,1,0]).long() train = data_utils.TensorDataset(x,y) train_loader = data_utils.DataLoader(train , batch_size=2 , shuffle=True) device = 'cpu' input_size = 2 hidden_size = 100 num_classes = 2 learning_rate = .0001 class NeuralNet(nn.Module) : def __init__(self, input_size, hidden_size, num_classes) : super(NeuralNet, self).__init__() self.fc1 = nn.Linear(input_size , hidden_size) self.relu = nn.ReLU() self.fc2 = nn.Linear(hidden_size , num_classes) def forward(self, x) : out = self.fc1(x) out = self.relu(out) out = self.fc2(out) return out for i in range(0 , 1) : model = NeuralNet(input_size, hidden_size, num_classes).to(device) criterion = nn.CrossEntropyLoss() # criterion = Regress_Loss() # criterion = cus2() optimizer = torch.optim.Adam(model.parameters(), lr=learning_rate) total_step = len(train_loader) for epoch in range(num_epochs) : for i,(images , labels) in enumerate(train_loader) : images = images.reshape(-1 , 2).to(device) labels = labels.to(device) outputs 
= model(images) loss = criterion(outputs , labels) optimizer.zero_grad() loss.backward() optimizer.step() # print(loss) outputs = model(x) print(outputs.data.max(1)[1]) ``` makes perfect predictions on training data : ``` tensor([0, 1, 1, 0]) ``` Using a custom loss function from here: is implemented in above code as cus2 Un-commenting code # criterion = cus2() to use this loss function returns : ``` tensor([0, 0, 0, 0]) ``` A warning is also returned : UserWarning: invalid index of a 0-dim tensor. This will be an error in PyTorch 0.5. Use tensor.item() to convert a 0-dim tensor to a Python number I've not implemented the custom loss function correctly ?", "response":"Your loss function is programmatically correct except for below: ``` # the number of tokens is the sum of elements in mask num_tokens = int(torch.sum(mask).data[0]) ``` When you do torch.sum it returns a 0-dimensional tensor and hence the warning that it can't be indexed. To fix this do int(torch.sum(mask).item()) as suggested or int(torch.sum(mask)) will work too. Now, are you trying to emulate the CE loss using the custom loss? If yes, then you are missing the log_softmax To fix that add outputs = torch.nn.functional.log_softmax(outputs, dim=1) before statement 4. Note that in case of tutorial that you have attached, log_softmax is already done in the forward call. You can do that too. Also, I noticed that the learning rate is slow and even with CE loss, results are not consistent. Increasing the learning rate to 1e-3 works well for me in case of custom as well as CE loss.", "best_answers_score":0.8, "library_name":"pytorch", "question_url":"https:\/\/stackoverflow.com\/questions\/53980031\/pytorch-custom-loss-function", "best_answers_votes":12, "question_length":2740, "response_length":887 }, { "question":"How to convert a pytorch tensor into a numpy array? 
How do I convert a torch tensor to numpy?", "response":"copied from pytorch doc: ``` a = torch.ones(5) print(a) ``` tensor([1., 1., 1., 1., 1.]) ``` b = a.numpy() print(b) ``` [1. 1. 1. 1. 1.] Following from the below discussion with @John: In case the tensor is (or can be) on GPU, or in case it (or it can) require grad, one can use ``` t.detach().cpu().numpy() ``` I recommend to uglify your code only as much as required.", "best_answers_score":0.8, "library_name":"pytorch", "question_url":"https:\/\/stackoverflow.com\/questions\/54268029\/how-to-convert-a-pytorch-tensor-into-a-numpy-array", "best_answers_votes":44, "question_length":93, "response_length":369 }, { "question":"How to check if a tensor is on cuda or send it to cuda in Pytorch? I have a tensor t = torch.zeros((4, 5, 6)) How to check if it is on gpu or not, and send it to gpu and back?", "response":"From the pytorch forum use t.is_cuda, t.cuda(), t.cpu() ``` t = torch.randn(2,2) t.is_cuda # returns False t = torch.randn(2,2).cuda() t.is_cuda # returns True t = t.cpu() t.is_cuda # returns False ``` When passing to and from gpu and cpu, new arrays are allocated on the relevant device.", "best_answers_score":0.8, "library_name":"pytorch", "question_url":"https:\/\/stackoverflow.com\/questions\/65381244\/how-to-check-if-a-tensor-is-on-cuda-or-send-it-to-cuda-in-pytorch", "best_answers_votes":45, "question_length":175, "response_length":288 }, { "question":"Pytorch matmul - RuntimeError: \"addmm_impl_cpu_\" not implemented for 'Half' When I am running pytorch matmul, the following error is thrown: ``` Traceback (most recent call last): File \"\/home\/omrastogi\/Desktop\/Side\/One-Class-Classification-Customer-Complaints\/pattern.py\", line 71, in print(obj.infer(list([df.text[0]]), list([df.reason[0]]))) File \"\/home\/omrastogi\/Desktop\/Side\/One-Class-Classification-Customer-Complaints\/pattern.py\", line 45, in infer cos_sm = self.batch_cosine_similarity(enc1, enc2) File 
\"\/home\/omrastogi\/Desktop\/Side\/One-Class-Classification-Customer-Complaints\/pattern.py\", line 51, in batch_cosine_similarity dot_prd = torch.matmul(inp1, inp2.transpose(0, 1)) RuntimeError: \"addmm_impl_cpu_\" not implemented for 'Half' ``` inp1 --> [1256] inp2 --> [1256]", "response":"The error was throwing because the data type of operands was float16. Changing it back to float32 solved the problem. I guess float16 is for GPU implementation only.", "best_answers_score":0.8, "library_name":"pytorch", "question_url":"https:\/\/stackoverflow.com\/questions\/73530569\/pytorch-matmul-runtimeerror-addmm-impl-cpu-not-implemented-for-half", "best_answers_votes":38, "question_length":780, "response_length":165 }, { "question":"PyTorch LogSoftmax vs Softmax for CrossEntropyLoss I understand that PyTorch's LogSoftmax function is basically just a more numerically stable way to compute Log(Softmax(x)). Softmax lets you convert the output from a Linear layer into a categorical probability distribution. The pytorch documentation says that CrossEntropyLoss combines nn.LogSoftmax() and nn.NLLLoss() in one single class. Looking at NLLLoss, I'm still confused...Are there 2 logs being used? I think of negative log as information content of an event. (As in entropy) After a bit more looking, I think that NLLLoss assumes that you're actually passing in log probabilities instead of just probabilities. Is this correct? It's kind of weird if so...", "response":"Yes, NLLLoss takes log-probabilities (log(softmax(x))) as input. Why?. Because if you add a nn.LogSoftmax (or F.log_softmax) as the final layer of your model's output, you can easily get the probabilities using torch.exp(output), and in order to get cross-entropy loss, you can directly use nn.NLLLoss. Of course, log-softmax is more stable as you said. And, there is only one log (it's in nn.LogSoftmax). There is no log in nn.NLLLoss. 
nn.CrossEntropyLoss() combines nn.LogSoftmax() (that is, log(softmax(x))) and nn.NLLLoss() in one single class. Therefore, the output from the network that is passed into nn.CrossEntropyLoss needs to be the raw output of the network (called logits), not the output of the softmax function.", "best_answers_score":0.8, "library_name":"pytorch", "question_url":"https:\/\/stackoverflow.com\/questions\/65192475\/pytorch-logsoftmax-vs-softmax-for-crossentropyloss", "best_answers_votes":35, "question_length":718, "response_length":726 }, { "question":"What is the difference between view() and unsqueeze() in Torch? Use of unsqueeze(): ``` input = torch.Tensor(2, 4, 3) # input: 2 x 4 x 3 print(input.unsqueeze(0).size()) # prints - torch.size([1, 2, 4, 3]) ``` Use of view(): ``` input = torch.Tensor(2, 4, 3) # input: 2 x 4 x 3 print(input.view(1, -1, -1, -1).size()) # prints - torch.size([1, 2, 4, 3]) ``` According to documentation, unsqueeze() inserts singleton dim at position given as parameter and view() creates a view with different dimensions of the storage associated with tensor. What view() does is clear to me, but I am unable to distinguish it from unsqueeze(). Moreover, I don't understand when to use view() and when to use unsqueeze()? Any help with good explanation would be appreciated!", "response":"view() can only take a single -1 argument. So, if you want to add a singleton dimension, you would need to provide all the dimensions as arguments. For e.g., if A is a 2x3x4 tensor, to add a singleton dimension, you would need to do A:view(2, 1, 3, 4). However, sometimes, the dimensionality of the input is unknown when the operation is being used. Thus, we dont know that A is 2x3x4, but we would still like to insert a singleton dimension. This happens a lot when using minibatches of tensors, where the last dimension is usually unknown. 
In these cases, unsqueeze() is useful and lets us insert the dimension without explicitly being aware of the other dimensions when writing the code.", "best_answers_score":0.8, "library_name":"pytorch", "question_url":"https:\/\/stackoverflow.com\/questions\/42866654\/what-is-the-difference-between-view-and-unsqueeze-in-torch", "best_answers_votes":28, "question_length":756, "response_length":695 }, { "question":"Pytorch: Why is the memory occupied by the `tensor` variable so small? In Pytorch 1.0.0, I found that a tensor variable occupies very small memory. I wonder how it stores so much data. Here's the code. ``` a = np.random.randn(1, 1, 128, 256) b = torch.tensor(a, device=torch.device('cpu')) a_size = sys.getsizeof(a) b_size = sys.getsizeof(b) ``` a_size is 262288. b_size is 72.", "response":"The answer is in two parts. From the documentation of sys.getsizeof, firstly All built-in objects will return correct results, but this does not have to hold true for third-party extensions as it is implementation specific. so it could be that for tensors __sizeof__ is undefined or defined differently than you would expect - this function is not something you can rely on. Secondly Only the memory consumption directly attributed to the object is accounted for, not the memory consumption of objects it refers to. which means that if the torch.Tensor object merely holds a reference to the actual memory, this won't show in sys.getsizeof.
This is indeed the case, if you check the size of the underlying storage instead, you will see the expected number ``` import torch, sys b = torch.randn(1, 1, 128, 256, dtype=torch.float64) sys.getsizeof(b) >> 72 sys.getsizeof(b.storage()) >> 262208 ``` Note: I am setting dtype to float64 explicitly, because that is the default dtype in numpy, whereas torch uses float32 by default.", "best_answers_score":0.8, "library_name":"pytorch", "question_url":"https:\/\/stackoverflow.com\/questions\/54361763\/pytorch-why-is-the-memory-occupied-by-the-tensor-variable-so-small", "best_answers_votes":70, "question_length":377, "response_length":1025 }, { "question":"Pytorch getting RuntimeError: Found dtype Double but expected Float I am trying to implement a neural net in PyTorch but it doesn't seem to work. The problem seems to be in the training loop. I've spend several hours into this but can't get it right. Please help, thanks. I haven't added the data preprocessing parts. ``` # importing libraries import pandas as pd import numpy as np import torch import torch.nn as nn from torch.utils.data import Dataset from torch.utils.data import DataLoader import torch.nn.functional as F ``` ``` # get x function (dataset related stuff) def Getx(idx): sample = samples[idx] vector = Calculating_bottom(sample) vector = torch.as_tensor(vector, dtype = torch.float64) return vector # get y function (dataset related stuff) def Gety(idx): y = np.array(train.iloc[idx, 4], dtype = np.float64) y = torch.as_tensor(y, dtype = torch.float64) return y ``` ``` # dataset class mydataset(Dataset): def __init__(self): super().__init__() def __getitem__(self, index): x = Getx(index) y = Gety(index) return x, y def __len__(self): return len(train) dataset = mydataset() ``` ``` # sample dataset value print(dataset.__getitem__(0)) ``` (tensor([ 5., 5., 8., 14.], dtype=torch.float64), tensor(-0.3403, dtype=torch.float64)) ``` # data-loader dataloader = DataLoader(dataset, batch_size = 1, shuffle = True) 
``` ``` # nn architecture class Net(nn.Module): def __init__(self): super().__init__() self.fc1 = nn.Linear(4, 4) self.fc2 = nn.Linear(4, 2) self.fc3 = nn.Linear(2, 1) def forward(self, x): x = x.float() x = F.relu(self.fc1(x)) x = F.relu(self.fc2(x)) x = self.fc3(x) return x model = Net() ``` ``` # device device = torch.device('cuda') if torch.cuda.is_available() else torch.device('cpu') model.to(device) ``` ``` # hyper-parameters criterion = nn.MSELoss() optimizer = torch.optim.SGD(model.parameters(), lr=0.001) ``` ``` # training loop for epoch in range(5): for batch in dataloader: # unpacking x, y = batch x.to(device) y.to(device) # reset gradients optimizer.zero_grad() # forward propagation through the network out = model(x) # calculate the loss loss = criterion(out, y) # backpropagation loss.backward() # update the parameters optimizer.step() ``` Error: ``` \/opt\/conda\/lib\/python3.7\/site-packages\/torch\/nn\/modules\/loss.py:446: UserWarning: Using a target size (torch.Size([1])) that is different to the input size (torch.Size([1, 1])). This will likely lead to incorrect results due to broadcasting. Please ensure they have the same size. 
return F.mse_loss(input, target, reduction=self.reduction) --------------------------------------------------------------------------- RuntimeError Traceback (most recent call last) in 20 21 # backpropagation ---> 22 loss.backward() 23 24 # update the parameters \/opt\/conda\/lib\/python3.7\/site-packages\/torch\/tensor.py in backward(self, gradient, retain_graph, create_graph) 219 retain_graph=retain_graph, 220 create_graph=create_graph) --> 221 torch.autograd.backward(self, gradient, retain_graph, create_graph) 222 223 def register_hook(self, hook): \/opt\/conda\/lib\/python3.7\/site-packages\/torch\/autograd\/__init__.py in backward(tensors, grad_tensors, retain_graph, create_graph, grad_variables) 130 Variable._execution_engine.run_backward( 131 tensors, grad_tensors_, retain_graph, create_graph, --> 132 allow_unreachable=True) # allow_unreachable flag 133 134 RuntimeError: Found dtype Double but expected Float ```", "response":"You need the data type of the data to match the data type of the model. Either convert the model to double (recommended for simple nets with no serious performance problems such as yours) ``` # nn architecture class Net(nn.Module): def __init__(self): super().__init__() self.fc1 = nn.Linear(4, 4) self.fc2 = nn.Linear(4, 2) self.fc3 = nn.Linear(2, 1) self.double() ``` or convert the data to float. 
``` class mydataset(Dataset): def __init__(self): super().__init__() def __getitem__(self, index): x = Getx(index) y = Gety(index) return x.float(), y.float() ```", "best_answers_score":0.8, "library_name":"pytorch", "question_url":"https:\/\/stackoverflow.com\/questions\/67456368\/pytorch-getting-runtimeerror-found-dtype-double-but-expected-float", "best_answers_votes":30, "question_length":3410, "response_length":562 }, { "question":"Pytorch tensor, how to switch channel position - Runtime error I have my training dataset as below, where X_train is 3D with 3 channels Shape of X_Train: (708, 256, 3) Shape of Y_Train: (708, 4) Then I convert them into a tensor and input into the dataloader: ``` X_train=torch.from_numpy(X_data) y_train=torch.from_numpy(y_data) training_dataset = torch.utils.data.TensorDataset(X_train, y_train) train_loader = torch.utils.data.DataLoader(training_dataset, batch_size=50, shuffle=False) ``` However when training the model, I get the following error: RuntimeError: Given groups=1, weight of size 24 3 5, expected input[708, 256, 3] to have 3 channels, but got 256 channels instead I suppose this is due to the position of the channel? In Tensorflow, the channel position is at the end, but in PyTorch the format is \"Batch Size x Channel x Height x Width\"? So how do I swap the positions in the x_train tensor to match the expected format in the dataloader? 
``` class TwoLayerNet(torch.nn.Module): def __init__(self): super(TwoLayerNet,self).__init__() self.conv1 = nn.Sequential( nn.Conv1d(3, 3*8, kernel_size=5, stride=1), nn.Sigmoid(), nn.AvgPool1d(kernel_size=2, stride=0)) self.conv2 = nn.Sequential( nn.Conv1d(3*8, 12, kernel_size=5, stride=1), nn.Sigmoid(), nn.AvgPool1d(kernel_size=2, stride = 0)) #self.drop_out = nn.Dropout() self.fc1 = nn.Linear(708, 732) self.fc2 = nn.Linear(732, 4) def forward(self, x): out = self.conv1(x) out = self.conv2(out) out = out.reshape(out.size(0), -1) out = self.drop_out(out) out = self.fc1(out) out = self.fc2(out) return out ```", "response":"Use permute. ``` X_train = torch.rand(708, 256, 3) X_train = X_train.permute(2, 0, 1) X_train.shape # => torch.Size([3, 708, 256]) ```", "best_answers_score":0.8, "library_name":"pytorch", "question_url":"https:\/\/stackoverflow.com\/questions\/59648324\/pytorch-tensor-how-to-switch-channel-position-runtime-error", "best_answers_votes":47, "question_length":1575, "response_length":134 }, { "question":"What does data.norm() < 1000 do in PyTorch? I am following the PyTorch tutorial here. It says that ``` x = torch.randn(3, requires_grad=True) y = x * 2 while y.data.norm() < 1000: y = y * 2 print(y) Out: tensor([-590.4467, 97.6760, 921.0221]) ``` Could someone explain what data.norm() does here? When I change .randn to .ones its output is tensor([ 1024., 1024., 1024.]).", "response":"It's simply the L2 norm (a.k.a Euclidean norm) of the tensor. Below is a reproducible illustration: ``` In [15]: x = torch.randn(3, requires_grad=True) In [16]: y = x * 2 In [17]: y.data Out[17]: tensor([-1.2510, -0.6302, 1.2898]) In [18]: y.data.norm() Out[18]: tensor(1.9041) # computing the norm using elementary operations In [19]: torch.sqrt(torch.sum(torch.pow(y, 2))) Out[19]: tensor(1.9041) ``` Explanation: First, it takes a square of every element in the input tensor x, then it sums them together, and finally it takes a square root of the resulting sum. 
All in all, these operations compute the so-called L2 or Euclidean norm.", "best_answers_score":0.8, "library_name":"pytorch", "question_url":"https:\/\/stackoverflow.com\/questions\/50753477\/what-does-data-norm-1000-do-in-pytorch", "best_answers_votes":33, "question_length":372, "response_length":638 }, { "question":"Replace all nonzero values by zero and all zero values by a specific value I have a 3d tensor which contains some zero and nonzero values. I want to replace all nonzero values by zero and zero values by a specific value. How can I do that?", "response":"Pretty much exactly how you would do it using numpy, like so: ```py tensor[tensor!=0] = 0 ``` In order to replace zeros and non-zeros, you can just chain them together. Just be sure to use a copy of the tensor, since they get modified: ```py def custom_replace(tensor, on_zero, on_non_zero): # we create a copy of the original tensor, # because of the way we are replacing them. res = tensor.clone() res[tensor==0] = on_zero res[tensor!=0] = on_non_zero return res ``` And use it like so: ```py >>>z (0 ,.,.) = 0 1 1 3 (1 ,.,.) = 0 1 1 0 [torch.LongTensor of size 2x2x2] >>>out = custom_replace(z, on_zero=5, on_non_zero=0) >>>out (0 ,.,.) = 5 0 0 0 (1 ,.,.) = 5 0 0 5 [torch.LongTensor of size 2x2x2] ```", "best_answers_score":0.8, "library_name":"pytorch", "question_url":"https:\/\/stackoverflow.com\/questions\/45384684\/replace-all-nonzero-values-by-zero-and-all-zero-values-by-a-specific-value", "best_answers_votes":33, "question_length":239, "response_length":705 }, { "question":"How to efficiently run multiple Pytorch Processes \/ Models at once ? Traceback: The paging file is too small for this operation to complete Background I have a very small network which I want to test with different random seeds. The network barely uses 1% of my GPUs compute power so i could in theory run 50 processes at once to try many different seeds at once. 
Problem Unfortunately i can't even import pytorch in multiple processes. When the nr of processes exceeds 4 I get a Traceback regarding a too small paging file. Minimal reproducible code\u00a7 - dispatcher.py ``` from subprocess import Popen import sys procs = [] for seed in range(50): procs.append(Popen([sys.executable, \"ml_model.py\", str(seed)])) for proc in procs: proc.wait() ``` \u00a7I increased the number of seeds so people with better machines can also reproduce this. Minimal reproducible code - ml_model.py ``` import torch import time time.sleep(10) ``` ``` Traceback (most recent call last): File \"ml_model.py\", line 1, in import torch File \"C:\\Users\\user\\AppData\\Local\\Programs\\Python\\Python38\\lib\\site-packages\\torch\\__init__.py\", line 117, in import torch File \"C:\\Users\\user\\AppData\\Local\\Programs\\Python\\Python38\\lib\\site-packages\\torch\\__init__.py\", line 117, in raise err OSError: [WinError 1455] The paging file is too small for this operation to complete. Error loading \"C:\\Users\\user\\AppData\\Local\\Programs\\Python\\Python38\\lib\\site-packages\\torch\\lib\\cudnn_cnn_infer64_8.dll\" or one of its dependencies. raise err ``` Further Investigation I noticed that each process loads a lot of dll's into RAM. And when i close all other programs which use a lot of RAM i can get up to 10 processes instead of 4. So it seems like a resource constraint. Questions Is there a workaround ? What's the recommended way to train many small networks with pytorch on a single gpu ? Should i write my own CUDA Kernel instead, or use a different framework to achieve this ? My goal would be to run around 50 processes at once (on a 16GB RAM Machine, 8GB GPU RAM)
Several DLLs, such as cusolver64_xx.dll, torch_cuda_cu.dll, and a few others, have .nv_fatb sections in them. These contain tons of different variations of CUDA code for different GPUs, so it ends up being several hundred megabytes to a couple gigabytes. When Python imports 'torch' it loads these DLLs, and maps the .nv_fatb section into memory. For some reason, instead of just being a memory mapped file, it is actually taking up memory. The section is set as 'copy on write', so it's possible something writes into it? I don't know. But anyway, if you look at Python using VMMap ( https:\/\/learn.microsoft.com\/en-us\/sysinternals\/downloads\/vmmap ) you can see that these DLLs are committing huge amounts of committed memory for this .nv_fatb section. The frustrating part is it doesn't seem to be using the memory. For example, right now my Python.exe has 2.7GB committed, but the working set is only 148MB. Every Python process that loads these DLLs will commit several GB of memory loading these DLLs. So if 1 Python process is wasting 2GB of memory, and you try running 8 workers, you need 16GB of memory to spare just to load the DLLs. It really doesn't seem like this memory is used, just committed. I don't know enough about these fatbinaries to try to fix it, but from looking at this for the past 2 hours it really seems like they are the issue. Perhaps it's an NVidia problem that these are committing memory? edit: I made this python script: https:\/\/gist.github.com\/cobryan05\/7d1fe28dd370e110a372c4d268dcb2e5 Get it and install its pefile dependency ( python -m pip install pefile ). Run it on your torch and cuda DLLs. In OPs case, command line might look like: ``` python fixNvPe.py --input=C:\\Users\\user\\AppData\\Local\\Programs\\Python\\Python38\\lib\\site-packages\\torch\\lib\\*.dll ``` (You also want to run this wherever your cusolver64_*.dll and friends are.
This may be in your torch\\lib folder, or it may be, eg, C:\\Program Files\\NVIDIA GPU Computing Toolkit\\CUDA\\vXX.X\\bin . If it is under Program Files, you will need to run the script with administrative privileges) What this script is going to do is scan through all DLLs specified by the input glob, and if it finds an .nv_fatb section it will back up the DLL, disable ASLR, and mark the .nv_fatb section read-only. ASLR is 'address space layout randomization.' It is a security feature that randomizes where a DLL is loaded in memory. We disable it for this DLL so that all Python processes will load the DLL into the same base virtual address. If all Python processes using the DLL load it at the same base address, they can all share the DLL. Otherwise each process needs its own copy. Marking the section 'read-only' lets Windows know that the contents will not change in memory. If you map a file into memory read\/write, Windows has to commit enough memory, backed by the pagefile, just in case you make a modification to it. If the section is read-only, there is no need to back it in the pagefile. We know there are no modifications to it, so it can always be found in the DLL. The theory behind the script is that by changing these 2 flags that less memory will be committed for the .nv_fatb, and more memory will be shared between the Python processes. In practice, it works. Not quite as well as I'd hope (it still commits a lot more than it uses), so my understanding may be flawed, but it significantly decreases memory commit. In my limited testing I haven't ran into any issues, but I can't guarantee there are no code paths that attempts to write to that section we marked 'read only.' If you start running into issues, though, you can just restore the backups. edit 2022-01-20: Per NVIDIA: \"We have gone ahead and marked the nv_fatb section as read-only, this change will be targeting next major CUDA release 11.7 . 
We are not changing the ASLR, as that is considered a safety feature .\" This should certainly help. If it's not enough without ASLR as well then the script should still work", "best_answers_score":0.8, "library_name":"pytorch", "question_url":"https:\/\/stackoverflow.com\/questions\/64837376\/how-to-efficiently-run-multiple-pytorch-processes-models-at-once-traceback", "best_answers_votes":30, "question_length":2022, "response_length":4205 }, { "question":"Saving PyTorch model with no access to model class code How can I save a PyTorch model without a need for the model class to be defined somewhere? Disclaimer: In Best way to save a trained model in PyTorch?, there are no solutions (or a working solution) for saving the model without access to the model class code.", "response":"If you plan to do inference with the Pytorch library available (i.e. Pytorch in Python, C++, or other platforms it supports) then the best way to do this is via TorchScript. I think the simplest thing is to use trace = torch.jit.trace(model, typical_input) and then torch.jit.save(trace, path). You can then load the traced model with torch.jit.load(path). Here's a really simple example. 
We make two files: train.py : ```py import torch class Model(torch.nn.Module): def __init__(self): super(Model, self).__init__() self.linear = torch.nn.Linear(4, 4) def forward(self, x): x = torch.relu(self.linear(x)) return x model = Model() x = torch.FloatTensor([[0.2, 0.3, 0.2, 0.7], [0.4, 0.2, 0.8, 0.9]]) with torch.no_grad(): print(model(x)) traced_cell = torch.jit.trace(model, (x)) torch.jit.save(traced_cell, \"model.pth\") ``` infer.py : ```py import torch x = torch.FloatTensor([[0.2, 0.3, 0.2, 0.7], [0.4, 0.2, 0.8, 0.9]]) loaded_trace = torch.jit.load(\"model.pth\") with torch.no_grad(): print(loaded_trace(x)) ``` Running these sequentially gives results: ```py python train.py tensor([[0.0000, 0.1845, 0.2910, 0.2497], [0.0000, 0.5272, 0.3481, 0.1743]]) python infer.py tensor([[0.0000, 0.1845, 0.2910, 0.2497], [0.0000, 0.5272, 0.3481, 0.1743]]) ``` The results are the same, so we are good. (Note that the result will be different each time here due to randomness of the initialisation of the nn.Linear layer). TorchScript provides for much more complex architectures and graph definitions (including if statements, while loops, and more) to be saved in a single file, without needing to redefine the graph at inference time. See the docs (linked above) for more advanced possibilities.", "best_answers_score":0.8, "library_name":"pytorch", "question_url":"https:\/\/stackoverflow.com\/questions\/59287728\/saving-pytorch-model-with-no-access-to-model-class-code", "best_answers_votes":33, "question_length":315, "response_length":1690 }, { "question":"Difference between Tensorflow's tf.keras.layers.Dense and PyTorch's torch.nn.Linear? I have a quick (and possibly silly) question about how Tensorflow defines its Linear layer. Within PyTorch, a Linear (or Dense) layer is defined as, y = x A^T + b where A and b are the weight matrix and bias vector for a Linear layer (see here). However, I can't precisely find an equivalent equation for Tensorflow! 
Is it the same as PyTorch or is it just y = x A + b ? Thank you in advance!", "response":"If we set activation to None in the dense layer in keras API, then they are technically equivalent. Tensorflow's ``` tf.keras.layers.Dense(..., activation=None) ``` According to the doc, more study here. activation: Activation function to use. If you don't specify anything, no activation is applied (ie. \"linear\" activation: a(x) = x). And in PyTorch's src. ``` torch.nn.Linear ``` They are now equal at this point. A linear transformation to the incoming data: y = x*W^T + b. See the following more concrete equivalent implementation of these two. In PyTorch, we do ``` class Network(torch.nn.Module): def __init__(self): super(Network, self).__init__() self.fc1 = torch.nn.Linear(5, 30) def forward(self, state): return self.fc1(state) ``` or, ``` trd = torch.nn.Linear(in_features = 3, out_features = 30) y = trd(torch.ones(5, 3)) print(y.size()) # torch.Size([5, 30]) ``` Its equivalent tf implementation would be ``` model = tf.keras.models.Sequential() model.add(tf.keras.layers.Dense(30, input_shape=(5,), activation=None)) ``` or, ``` tfd = tf.keras.layers.Dense(30, input_shape=(3,), activation=None) x = tfd(tf.ones(shape=(5, 3))) print(x.shape) # (5, 30) ```", "best_answers_score":0.8, "library_name":"pytorch", "question_url":"https:\/\/stackoverflow.com\/questions\/66626700\/difference-between-tensorflows-tf-keras-layers-dense-and-pytorchs-torch-nn-lin", "best_answers_votes":27, "question_length":477, "response_length":1170 }, { "question":"How does pytorch backprop through argmax? I'm building Kmeans in pytorch using gradient descent on centroid locations, instead of expectation-maximisation. Loss is the sum of square distances of each point to its nearest centroid. To identify which centroid is nearest to each point, I use argmin, which is not differentiable everywhere. 
However, pytorch is still able to backprop and update weights (centroid locations), giving similar performance to sklearn kmeans on the data. Any ideas how this is working, or how I can figure this out within pytorch? Discussion on pytorch github suggests argmax is not differentiable: https:\/\/github.com\/pytorch\/pytorch\/issues\/1339. Example code below (on random pts): ```py import numpy as np import torch num_pts, batch_size, n_dims, num_clusters, lr = 1000, 100, 200, 20, 1e-5 # generate random points vector = torch.from_numpy(np.random.rand(num_pts, n_dims)).float() # randomly pick starting centroids idx = np.random.choice(num_pts, size=num_clusters) kmean_centroids = vector[idx][:,None,:] # [num_clusters,1,n_dims] kmean_centroids = torch.tensor(kmean_centroids, requires_grad=True) for t in range(4001): # get batch idx = np.random.choice(num_pts, size=batch_size) vector_batch = vector[idx] distances = vector_batch - kmean_centroids # [num_clusters, #pts, #dims] distances = torch.sum(distances**2, dim=2) # [num_clusters, #pts] # argmin membership = torch.min(distances, 0)[1] # [#pts] # cluster distances cluster_loss = 0 for i in range(num_clusters): subset = torch.transpose(distances,0,1)[membership==i] if len(subset)!=0: # to prevent NaN cluster_loss += torch.sum(subset[:,i]) cluster_loss.backward() print(cluster_loss.item()) with torch.no_grad(): kmean_centroids -= lr * kmean_centroids.grad kmean_centroids.grad.zero_() ```", "response":"As alvas noted in the comments, argmax is not differentiable. However, once you compute it and assign each datapoint to a cluster, the derivative of loss with respect to the location of these clusters is well-defined. This is what your algorithm does. Why does it work? If you had only one cluster (so that the argmax operation didn't matter), your loss function would be quadratic, with minimum at the mean of the data points. 
Now with multiple clusters, you can see that your loss function is piecewise (in higher dimensions think volumewise) quadratic - for any set of centroids [C1, C2, C3, ...] each data point is assigned to some centroid CN and the loss is locally quadratic. The extent of this locality is given by all alternative centroids [C1', C2', C3', ...] for which the assignment coming from argmax remains the same; within this region the argmax can be treated as a constant, rather than a function and thus the derivative of loss is well-defined. Now, in reality, it's unlikely you can treat argmax as constant, but you can still treat the naive \"argmax-is-a-constant\" derivative as pointing approximately towards a minimum, because the majority of data points are likely to indeed belong to the same cluster between iterations. And once you get close enough to a local minimum such that the points no longer change their assignments, the process can converge to a minimum. Another, more theoretical way to look at it is that you're doing an approximation of expectation maximization. Normally, you would have the \"compute assignments\" step, which is mirrored by argmax, and the \"minimize\" step which boils down to finding the minimizing cluster centers given the current assignments. The minimum is given by d(loss)\/d([C1, C2, ...]) == 0, which for a quadratic loss is given analytically by the means of data points within each cluster. In your implementation, you're solving the same equation but with a gradient descent step. 
In fact, if you used a 2nd order (Newton) update scheme instead of 1st order gradient descent, you would be implicitly reproducing exactly the baseline EM scheme.", "best_answers_score":0.8, "library_name":"pytorch", "question_url":"https:\/\/stackoverflow.com\/questions\/54969646\/how-does-pytorch-backprop-through-argmax", "best_answers_votes":25, "question_length":1785, "response_length":2108 }, { "question":"PyTorch: manually setting weight parameters with numpy array for GRU \/ LSTM I'm trying to fill up GRU\/LSTM with manually defined parameters in pytorch. I have numpy arrays for parameters with shapes as defined in their documentation (https:\/\/pytorch.org\/docs\/stable\/nn.html#torch.nn.GRU). It seems to work but I'm not sure whether the returned values are correct. Is this a right way to fill up GRU\/LSTM with numpy parameters? ``` gru = nn.GRU(input_size, hidden_size, num_layers, bias=True, batch_first=False, dropout=dropout, bidirectional=bidirectional) def set_nn_wih(layer, parameter_name, w, l0=True): param = getattr(layer, parameter_name) if l0: for i in range(3*hidden_size): param.data[i] = w[i*input_size:(i+1)*input_size] else: for i in range(3*hidden_size): param.data[i] = w[i*num_directions*hidden_size:(i+1)*num_directions*hidden_size] def set_nn_whh(layer, parameter_name, w): param = getattr(layer, parameter_name) for i in range(3*hidden_size): param.data[i] = w[i*hidden_size:(i+1)*hidden_size] l0=True for i in range(num_directions): for j in range(num_layers): if j == 0: wih = w0[i, :, :3*input_size] whh = w0[i, :, 3*input_size:] # check l0=True else: wih = w[j-1, i, :, :num_directions*3*hidden_size] whh = w[j-1, i, :, num_directions*3*hidden_size:] l0=False if i == 0: set_nn_wih( gru, \"weight_ih_l{}\".format(j), torch.from_numpy(wih.flatten()),l0) set_nn_whh( gru, \"weight_hh_l{}\".format(j), torch.from_numpy(whh.flatten())) else: set_nn_wih( gru, \"weight_ih_l{}_reverse\".format(j), torch.from_numpy(wih.flatten()),l0) set_nn_whh( gru, 
\"weight_hh_l{}_reverse\".format(j), torch.from_numpy(whh.flatten())) y, hn = gru(x_t, h_t) ``` numpy arrays are defined as following: ``` rng = np.random.RandomState(313) w0 = rng.randn(num_directions, hidden_size, 3*(input_size + hidden_size)).astype(np.float32) w = rng.randn(max(1, num_layers-1), num_directions, hidden_size, 3*(num_directions*hidden_size + hidden_size)).astype(np.float32) ```", "response":"That is a good question, and you already give a decent answer. However, it reinvents the wheel - there is a very elegant Pytorch internal routine that will allow you to do the same without as much effort - and one that is applicable for any network. The core concept here is PyTorch's state_dict. The state dictionary effectively contains the parameters organized by the tree-structure given by the relationship of the nn.Modules and their submodules, etc. The short answer If you only want the code to load a value into a tensor using the state_dict, then try this line (where the dict contains a valid state_dict): ``` `model.load_state_dict(dict, strict=False)` ``` where strict=False is crucial if you want to load only some parameter values. The long answer - including an introduction to PyTorch's state_dict Here's an example of how a state dict looks for a GRU (I chose input_size = hidden_size = 2 so that I can print the entire state dict): ``` rnn = torch.nn.GRU(2, 2, 1) rnn.state_dict() # Out[10]: # OrderedDict([('weight_ih_l0', tensor([[-0.0023, -0.0460], # [ 0.3373, 0.0070], # [ 0.0745, -0.5345], # [ 0.5347, -0.2373], # [-0.2217, -0.2824], # [-0.2983, 0.4771]])), # ('weight_hh_l0', tensor([[-0.2837, -0.0571], # [-0.1820, 0.6963], # [ 0.4978, -0.6342], # [ 0.0366, 0.2156], # [ 0.5009, 0.4382], # [-0.7012, -0.5157]])), # ('bias_ih_l0', # tensor([-0.2158, -0.6643, -0.3505, -0.0959, -0.5332, -0.6209])), # ('bias_hh_l0', # tensor([-0.1845, 0.4075, -0.1721, -0.4893, -0.2427, 0.3973]))]) ``` So the state_dict all the parameters of the network. 
If we have \"nested\" nn.Modules, we get the tree represented by the parameter names: ``` class MLP(torch.nn.Module): def __init__(self): torch.nn.Module.__init__(self) self.lin_a = torch.nn.Linear(2, 2) self.lin_b = torch.nn.Linear(2, 2) mlp = MLP() mlp.state_dict() # Out[23]: # OrderedDict([('lin_a.weight', tensor([[-0.2914, 0.0791], # [-0.1167, 0.6591]])), # ('lin_a.bias', tensor([-0.2745, -0.1614])), # ('lin_b.weight', tensor([[-0.4634, -0.2649], # [ 0.4552, 0.3812]])), # ('lin_b.bias', tensor([ 0.0273, -0.1283]))]) class NestedMLP(torch.nn.Module): def __init__(self): torch.nn.Module.__init__(self) self.mlp_a = MLP() self.mlp_b = MLP() n_mlp = NestedMLP() n_mlp.state_dict() # Out[26]: # OrderedDict([('mlp_a.lin_a.weight', tensor([[ 0.2543, 0.3412], # [-0.1984, -0.3235]])), # ('mlp_a.lin_a.bias', tensor([ 0.2480, -0.0631])), # ('mlp_a.lin_b.weight', tensor([[-0.4575, -0.6072], # [-0.0100, 0.5887]])), # ('mlp_a.lin_b.bias', tensor([-0.3116, 0.5603])), # ('mlp_b.lin_a.weight', tensor([[ 0.3722, 0.6940], # [-0.5120, 0.5414]])), # ('mlp_b.lin_a.bias', tensor([0.3604, 0.0316])), # ('mlp_b.lin_b.weight', tensor([[-0.5571, 0.0830], # [ 0.5230, -0.1020]])), # ('mlp_b.lin_b.bias', tensor([ 0.2156, -0.2930]))]) ``` So - what if you want to not extract the state dict, but change it - and thereby the network's parameters? Use nn.Module.load_state_dict(state_dict, strict=True) (link to the docs) This method allows you to load an entire state_dict with arbitrary values into an instantiated model of the same kind as long as the keys (i.e. the parameter names) are correct and the values (i.e. the parameters) are torch.tensors of the right shape. If the strict kwarg is set to True (the default), the dict you load has to exactly match the original state dict, except for the values of the parameters. That is, there has to be one new value for each parameter. 
For the GRU example above, we need a tensor of the correct size (and the correct device, btw) for each of 'weight_ih_l0', 'weight_hh_l0', 'bias_ih_l0', 'bias_hh_l0'. As we sometimes only want to load some values (as I think you want to do), we can set the strict kwarg to False - and we can then load only partial state dicts, as e.g. one that only contains parameter values for 'weight_ih_l0'. As practical advice, I'd simply create the model you want to load values into, and then print the state dict (or at least a list of the keys and the respective tensor sizes) ``` print([(k, v.shape) for k, v in model.state_dict().items()]) ``` That tells you what the exact name of the parameter is you want to change. You then simply create a state dict with the respective parameter name and tensor, and load it: ``` from collections import OrderedDict new_state_dict = OrderedDict({'tensor_name_retrieved_from_original_dict': new_tensor_value}) model.load_state_dict(new_state_dict, strict=False) ```", "best_answers_score":0.8, "library_name":"pytorch", "question_url":"https:\/\/stackoverflow.com\/questions\/52945427\/pytorch-manually-setting-weight-parameters-with-numpy-array-for-gru-lstm", "best_answers_votes":27, "question_length":1960, "response_length":4434 }, { "question":"Understanding input shape to PyTorch LSTM This seems to be one of the most common questions about LSTMs in PyTorch, but I am still unable to figure out what should be the input shape to PyTorch LSTM. Even after following several posts (1, 2, 3) and trying out the solutions, it doesn't seem to work. Background: I have encoded text sequences (variable length) in a batch of size 12 and the sequences are padded and packed using pad_packed_sequence functionality. MAX_LEN for each sequence is 384 and each token (or word) in the sequence has a dimension of 768. Hence my batch tensor could have one of the following shapes: [12, 384, 768] or [384, 12, 768].
The batch will be my input to the PyTorch rnn module (lstm here). According to the PyTorch documentation for LSTMs, its input dimensions are (seq_len, batch, input_size) which I understand as following. seq_len - the number of time steps in each input stream (feature vector length). batch - the size of each batch of input sequences. input_size - the dimension for each input token or time step. lstm = nn.LSTM(input_size=?, hidden_size=?, batch_first=True) What should be the exact input_size and hidden_size values here?", "response":"You have explained the structure of your input, but you haven't made the connection between your input dimensions and the LSTM's expected input dimensions. Let's break down your input (assigning names to the dimensions): batch_size: 12 seq_len: 384 input_size \/ num_features: 768 That means the input_size of the LSTM needs to be 768. The hidden_size is not dependent on your input, but rather how many features the LSTM should create, which is then used for the hidden state as well as the output, since that is the last hidden state. You have to decide how many features you want to use for the LSTM. Finally, for the input shape, setting batch_first=True requires the input to have the shape [batch_size, seq_len, input_size], in your case that would be [12, 384, 768]. ```py import torch import torch.nn as nn # Size: [batch_size, seq_len, input_size] input = torch.randn(12, 384, 768) lstm = nn.LSTM(input_size=768, hidden_size=512, batch_first=True) output, _ = lstm(input) output.size() # => torch.Size([12, 384, 512]) ```", "best_answers_score":0.8, "library_name":"pytorch", "question_url":"https:\/\/stackoverflow.com\/questions\/61632584\/understanding-input-shape-to-pytorch-lstm", "best_answers_votes":28, "question_length":1180, "response_length":1029 }, { "question":"Tokens to Words mapping in the tokenizer decode step huggingface? 
Is there a way to know the mapping from the tokens back to the original words in the tokenizer.decode() function? For example: ```py from transformers.tokenization_roberta import RobertaTokenizer tokenizer = RobertaTokenizer.from_pretrained('roberta-large', do_lower_case=True) str = \"This is a tokenization example\" tokenized = tokenizer.tokenize(str) ## ['this', '\u0120is', '\u0120a', '\u0120token', 'ization', '\u0120example'] encoded = tokenizer.encode_plus(str) ## encoded['input_ids']=[0, 42, 16, 10, 19233, 1938, 1246, 2] decoded = tokenizer.decode(encoded['input_ids']) ## ' this is a tokenization example' ``` And the objective is to have a function that maps each token in the decode process to the correct input word, for here it will be: desired_output = [[1],[2],[3],[4,5],[6]] As this corresponds to id 42, while token and ization corresponds to ids [19244,1938] which are at indexes 4,5 of the input_ids array.", "response":"transformers version>=2.9.0: The FastTokenizers return a BatchEnconding object that you can utilize: ```py from transformers import RobertaTokenizerFast tokenizer = RobertaTokenizerFast.from_pretrained('roberta-large') example = \"This is a tokenization example\" enc = tokenizer(example, add_special_tokens=False) desired_output = [] #BatchEncoding.word_ids returns a list mapping words to tokens for w_idx in set(enc.word_ids()): #BatchEncoding.word_to_tokens tells us which and how many tokens are used for the specific word start, end = enc.word_to_tokens(w_idx) # we add +1 because you wanted to start with 1 and not with 0 start+=1 end+=1 desired_output.append(list(range(start,end))) ``` Output: ``` [[1], [2], [3], [4, 5], [6]] ``` transformers version<2.9.0: As far as I know, there is no built-in method for that, but you can create one by yourself: ```py from transformers.tokenization_roberta import RobertaTokenizer tokenizer = RobertaTokenizer.from_pretrained('roberta-large', do_lower_case=True) example = \"This is a tokenization example\" 
print({x : tokenizer.encode(x, add_special_tokens=False, add_prefix_space=True) for x in example.split()}) ``` Output: ``` {'This': [42], 'is': [16], 'a': [10], 'tokenization': [19233, 1938], 'example': [1246]} ``` To get exactly your desired output, you have to work with a list comprehension: ```py #start index because the number of special tokens is fixed for each model (but be aware of single sentence input and pairwise sentence input) idx = 1 enc =[tokenizer.encode(x, add_special_tokens=False, add_prefix_space=True) for x in example.split()] desired_output = [] for token in enc: tokenoutput = [] for ids in token: tokenoutput.append(idx) idx +=1 desired_output.append(tokenoutput) print(desired_output) ``` Output: ``` [[1], [2], [3], [4, 5], [6]] ```", "best_answers_score":0.8, "library_name":"pytorch", "question_url":"https:\/\/stackoverflow.com\/questions\/62317723\/tokens-to-words-mapping-in-the-tokenizer-decode-step-huggingface", "best_answers_votes":15, "question_length":972, "response_length":1814 }, { "question":"PyTorch BERT TypeError: forward() got an unexpected keyword argument 'labels' Training a BERT model using PyTorch transformers (following the tutorial here). 
Following statement in the tutorial ``` loss = model(b_input_ids, token_type_ids=None, attention_mask=b_input_mask, labels=b_labels) ``` leads to ``` TypeError: forward() got an unexpected keyword argument 'labels' ``` Here is the full error, ``` TypeError Traceback (most recent call last) in 26 optimizer.zero_grad() 27 # Forward pass ---> 28 loss = model(b_input_ids, token_type_ids=None, attention_mask=b_input_mask, labels=b_labels) 29 train_loss_set.append(loss.item()) 30 # Backward pass ~\/anaconda3\/envs\/systreviewclassifi\/lib\/python3.6\/site-packages\/torch\/nn\/modules\/module.py in __call__(self, *input, **kwargs) 539 result = self._slow_forward(*input, **kwargs) 540 else: --> 541 result = self.forward(*input, **kwargs) 542 for hook in self._forward_hooks.values(): 543 hook_result = hook(self, input, result) TypeError: forward() got an unexpected keyword argument 'labels' ``` I cant seem to figure out what kind of argument the forward() function expects. There is a similar problem here, but I still do not get what the solution is. System information: OS: Ubuntu 16.04 LTS Python version: 3.6.x Torch version: 1.3.0 Torch Vision version: 0.4.1 PyTorch transformers version: 1.2.0", "response":"As far as I know, the BertModel does not take labels in the forward() function. Check out the forward function parameters. I suspect you are trying to fine-tune the BertModel for sequence classification task and the API provides a class for that which is BertForSequenceClassification. As you can see its forward() function definition: ``` def forward(self, input_ids, attention_mask=None, token_type_ids=None, position_ids=None, head_mask=None, labels=None): ``` Please note, the forward() method returns the followings. 
``` Outputs: `Tuple` comprising various elements depending on the configuration (config) and inputs: **loss**: (`optional`, returned when ``labels`` is provided) ``torch.FloatTensor`` of shape ``(1,)``: Classification (or regression if config.num_labels==1) loss. **logits**: ``torch.FloatTensor`` of shape ``(batch_size, config.num_labels)`` Classification (or regression if config.num_labels==1) scores (before SoftMax). **hidden_states**: (`optional`, returned when ``config.output_hidden_states=True``) list of ``torch.FloatTensor`` (one for the output of each layer + the output of the embeddings) of shape ``(batch_size, sequence_length, hidden_size)``: Hidden-states of the model at the output of each layer plus the initial embedding outputs. **attentions**: (`optional`, returned when ``config.output_attentions=True``) list of ``torch.FloatTensor`` (one for each layer) of shape ``(batch_size, num_heads, sequence_length, sequence_length)``: Attentions weights after the attention softmax, used to compute the weighted average in the self-attention heads. ``` Hope this helps!", "best_answers_score":0.8, "library_name":"pytorch", "question_url":"https:\/\/stackoverflow.com\/questions\/58454157\/pytorch-bert-typeerror-forward-got-an-unexpected-keyword-argument-labels", "best_answers_votes":25, "question_length":1354, "response_length":1608 }, { "question":"Add dense layer on top of Huggingface BERT model I want to add a dense layer on top of the bare BERT Model transformer outputting raw hidden-states, and then fine tune the resulting model. Specifically, I am using this base model. 
This is what the model should do: Encode the sentence (a vector with 768 elements for each token of the sentence) Keep only the first vector (related to the first token) Add a dense layer on top of this vector, to get the desired transformation So far, I have successfully encoded the sentences: ``` from sklearn.neural_network import MLPRegressor import torch from transformers import AutoModel, AutoTokenizer # List of strings sentences = [...] # List of numbers labels = [...] tokenizer = AutoTokenizer.from_pretrained(\"dbmdz\/bert-base-italian-xxl-cased\") model = AutoModel.from_pretrained(\"dbmdz\/bert-base-italian-xxl-cased\") # 2D array, one line per sentence containing the embedding of the first token encoded_sentences = torch.stack([model(**tokenizer(s, return_tensors='pt'))[0][0][0] for s in sentences]).detach().numpy() regr = MLPRegressor() regr.fit(encoded_sentences, labels) ``` In this way I can train a neural network by feeding it with the encoded sentences. However, this approach clearly does not fine tune the base BERT model. Can anybody help me? How can I build a model (possibly in pytorch or using the Huggingface library) that can be entirely fine tuned?", "response":"There are two ways to do it: Since you are looking to fine-tune the model for a downstream task similar to classification, you can directly use: BertForSequenceClassification class. Performs fine-tuning of logistic regression layer on the output dimension of 768. Alternatively, you can define a custom module, that created a bert model based on the pre-trained weights and adds layers on top of it. 
``` import torch import torch.nn as nn import torch.nn.functional as F from transformers import BertModel, AutoTokenizer class CustomBERTModel(nn.Module): def __init__(self): super(CustomBERTModel, self).__init__() self.bert = BertModel.from_pretrained(\"dbmdz\/bert-base-italian-xxl-cased\") ### New layers: self.linear1 = nn.Linear(768, 256) self.linear2 = nn.Linear(256, 3) ## 3 is the number of classes in this example def forward(self, ids, mask): sequence_output, pooled_output = self.bert( ids, attention_mask=mask) # sequence_output has the following shape: (batch_size, sequence_length, 768) linear1_output = self.linear1(sequence_output[:,0,:].view(-1,768)) ## extract the 1st token's embeddings linear2_output = self.linear2(linear1_output) return linear2_output tokenizer = AutoTokenizer.from_pretrained(\"dbmdz\/bert-base-italian-xxl-cased\") model = CustomBERTModel() # You can pass the parameters if required to have more flexible model model.to(torch.device(\"cpu\")) ## can be gpu criterion = nn.NLLLoss() ## pairs with the log_softmax below; define your own criterion if required optimizer = torch.optim.Adam(filter(lambda p: p.requires_grad, model.parameters())) for epoch in range(epochs): for batch in data_loader: ## If you have a DataLoader() object to get the data. data = batch[0] targets = batch[1] ## assuming that data loader returns a tuple of data and its targets optimizer.zero_grad() encoding = tokenizer.batch_encode_plus(data, return_tensors='pt', padding=True, truncation=True, max_length=50, add_special_tokens=True) input_ids = encoding['input_ids'] attention_mask = encoding['attention_mask'] outputs = model(input_ids, attention_mask=attention_mask) outputs = F.log_softmax(outputs, dim=1) loss = criterion(outputs, targets) loss.backward() optimizer.step() ```", "best_answers_score":0.8, "library_name":"pytorch", "question_url":"https:\/\/stackoverflow.com\/questions\/64156202\/add-dense-layer-on-top-of-huggingface-bert-model", "best_answers_votes":38, "question_length":1410, "response_length":2079 }, { "question":"What are transforms in PyTorch used for? 
I am new with Pytorch and not very expert in CNN. I have done a successful classifier with the tutorial that they provide Tutorial Pytorch, but I don't really understand what I am doing when loading the data. They do some data augmentation and normalisation for training, but when I try to modify the parameters, the code does not work. ``` # Data augmentation and normalization for training # Just normalization for validation data_transforms = { 'train': transforms.Compose([ transforms.RandomResizedCrop(224), transforms.RandomHorizontalFlip(), transforms.ToTensor(), transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]) ]), 'val': transforms.Compose([ transforms.Resize(256), transforms.CenterCrop(224), transforms.ToTensor(), transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]) ]), } ``` Am I extending my training dataset? I don't see the data augmentation. Why if I modify the value of transforms.RandomResizedCrop(224) the data loading stop working? Do I need to transform as well the test dataset? I am a bit confused with this data transformation that they do.", "response":"transforms.Compose just clubs all the transforms provided to it. So, all the transforms in the transforms.Compose are applied to the input one by one. Train transforms transforms.RandomResizedCrop(224): This will extract a patch of size (224, 224) from your input image randomly. So, it might pick this path from topleft, bottomright or anywhere in between. So, you are doing data augmentation in this part. Also, changing this value won't play nice with the fully-connected layers in your model, so not advised to change this. transforms.RandomHorizontalFlip(): Once we have our image of size (224, 224), we can choose to flip it. This is another part of data augmentation. transforms.ToTensor(): This just converts your input image to PyTorch tensor. 
transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]): This is just input data scaling and these values (mean and std) must have been precomputed for your dataset. Changing these values is also not advised. Validation transforms transforms.Resize(256): First your input image is resized to be of size (256, 256) transforms.CenterCrop(224): Crops the center part of the image of shape (224, 224) Rest are the same as train P.S.: You can read more about these transformations in the official docs", "best_answers_score":0.8, "library_name":"pytorch", "question_url":"https:\/\/stackoverflow.com\/questions\/50002543\/what-are-transforms-in-pytorch-used-for", "best_answers_votes":44, "question_length":1140, "response_length":1260 }, { "question":"Can not get pytorch working with tensorboard I'm going through this tutorial to set up pytorch (v1.3.0 through conda) with tensorboard https:\/\/pytorch.org\/tutorials\/intermediate\/tensorboard_tutorial.html# but on the step ``` from torch.utils.tensorboard import SummaryWriter # default `log_dir` is \"runs\" - we'll be more specific here writer = SummaryWriter('runs\/fashion_mnist_experiment_1') ``` I keep getting the error ``` --------------------------------------------------------------------------- ModuleNotFoundError Traceback (most recent call last) C:\\ProgramData\\Anaconda3\\envs\\fastai_v1\\lib\\site-packages\\torch\\utils\\tensorboard\\__init__.py in 1 try: ----> 2 from tensorboard.summary.writer.record_writer import RecordWriter # noqa F401 3 except ImportError: ModuleNotFoundError: No module named 'tensorboard.summary'; 'tensorboard' is not a package During handling of the above exception, another exception occurred: ImportError Traceback (most recent call last) c:\\Users\\matt\\Documents\\code\\playground\\tensorboard.py in ----> 1 from torch.utils.tensorboard import SummaryWriter 2 3 # default `log_dir` is \"runs\" - we'll be more specific here 4 writer = SummaryWriter('runs\/fashion_mnist_experiment_1') 
C:\\ProgramData\\Anaconda3\\envs\\fastai_v1\\lib\\site-packages\\torch\\utils\\tensorboard\\__init__.py in 2 from tensorboard.summary.writer.record_writer import RecordWriter # noqa F401 3 except ImportError: ----> 4 raise ImportError('TensorBoard logging requires TensorBoard with Python summary writer installed. ' 5 'This should be available in 1.14 or above.') 6 from .writer import FileWriter, SummaryWriter # noqa F401 ImportError: TensorBoard logging requires TensorBoard with Python summary writer installed. This should be available in 1.14 or above. ``` Does anyone have any suggestions?", "response":"The error log says, among other things, ImportError: TensorBoard logging requires TensorBoard with Python summary writer installed. This should be available in 1.14 or above. So, when it tries to import TensorBoard, it's unable to do so because it's missing it in the search path. You can install the latest version (without specifying any version number), as in: ``` $ conda install -c conda-forge tensorboard ``` Apart from that, you might also need to install protobuf: ``` $ conda install -c conda-forge protobuf ``` These installations should fix the ImportErrors.", "best_answers_score":0.8, "library_name":"pytorch", "question_url":"https:\/\/stackoverflow.com\/questions\/58686400\/can-not-get-pytorch-working-with-tensorboard", "best_answers_votes":19, "question_length":1800, "response_length":569 }, { "question":"PyTorch transforms on TensorDataset I'm using TensorDataset to create dataset from numpy arrays. 
``` # convert numpy arrays to pytorch tensors X_train = torch.stack([torch.from_numpy(np.array(i)) for i in X_train]) y_train = torch.stack([torch.from_numpy(np.array(i)) for i in y_train]) # reshape into [C, H, W] X_train = X_train.reshape((-1, 1, 28, 28)).float() # create dataset and dataloaders train_dataset = torch.utils.data.TensorDataset(X_train, y_train) train_loader = torch.utils.data.DataLoader(train_dataset, batch_size=64) ``` How do I apply data augmentation (transforms) to TensorDataset? For example, using ImageFolder, I can specify transforms as one of its parameters torchvision.datasets.ImageFolder(root, transform=...). According to this reply by one of PyTorch's team members, it's not supported by default. Is there any alternative way to do so? Feel free to ask if more code is needed to explain the problem.", "response":"By default transforms are not supported for TensorDataset. But we can create our custom class to add that option. But, as I already mentioned, most of transforms are developed for PIL.Image. But anyway here is very simple MNIST example with very dummy transforms. csv file with MNIST here. Code: ```py import numpy as np import torch from torch.utils.data import Dataset, TensorDataset import torchvision import torchvision.transforms as transforms import matplotlib.pyplot as plt # Import mnist dataset from cvs file and convert it to torch tensor with open('mnist_train.csv', 'r') as f: mnist_train = f.readlines() # Images X_train = np.array([[float(j) for j in i.strip().split(',')][1:] for i in mnist_train]) X_train = X_train.reshape((-1, 1, 28, 28)) X_train = torch.tensor(X_train) # Labels y_train = np.array([int(i[0]) for i in mnist_train]) y_train = y_train.reshape(y_train.shape[0], 1) y_train = torch.tensor(y_train) del mnist_train class CustomTensorDataset(Dataset): \"\"\"TensorDataset with support of transforms. 
\"\"\" def __init__(self, tensors, transform=None): assert all(tensors[0].size(0) == tensor.size(0) for tensor in tensors) self.tensors = tensors self.transform = transform def __getitem__(self, index): x = self.tensors[0][index] if self.transform: x = self.transform(x) y = self.tensors[1][index] return x, y def __len__(self): return self.tensors[0].size(0) def imshow(img, title=''): \"\"\"Plot the image batch. \"\"\" plt.figure(figsize=(10, 10)) plt.title(title) plt.imshow(np.transpose( img.numpy(), (1, 2, 0)), cmap='gray') plt.show() # Dataset w\/o any tranformations train_dataset_normal = CustomTensorDataset(tensors=(X_train, y_train), transform=None) train_loader = torch.utils.data.DataLoader(train_dataset_normal, batch_size=16) # iterate for i, data in enumerate(train_loader): x, y = data imshow(torchvision.utils.make_grid(x, 4), title='Normal') break # we need just one batch # Let's add some transforms # Dataset with flipping tranformations def vflip(tensor): \"\"\"Flips tensor vertically. \"\"\" tensor = tensor.flip(1) return tensor def hflip(tensor): \"\"\"Flips tensor horizontally. 
\"\"\" tensor = tensor.flip(2) return tensor train_dataset_vf = CustomTensorDataset(tensors=(X_train, y_train), transform=vflip) train_loader = torch.utils.data.DataLoader(train_dataset_vf, batch_size=16) result = [] for i, data in enumerate(train_loader): x, y = data imshow(torchvision.utils.make_grid(x, 4), title='Vertical flip') break train_dataset_hf = CustomTensorDataset(tensors=(X_train, y_train), transform=hflip) train_loader = torch.utils.data.DataLoader(train_dataset_hf, batch_size=16) result = [] for i, data in enumerate(train_loader): x, y = data imshow(torchvision.utils.make_grid(x, 4), title='Horizontal flip') break ``` Output:", "best_answers_score":0.8, "library_name":"pytorch", "question_url":"https:\/\/stackoverflow.com\/questions\/55588201\/pytorch-transforms-on-tensordataset", "best_answers_votes":30, "question_length":930, "response_length":2761 }, { "question":"converting list of tensors to tensors pytorch I have list of tensor where each tensor has a different size. How can I convert this list of tensors into a tensor using PyTorch? For instance, ``` x[0].size() == torch.Size([4, 8]) x[1].size() == torch.Size([4, 7]) # different shapes! ``` This: ``` torch.tensor(x) ``` Gives the error: ValueError: only one element tensors can be converted to Python scalars", "response":"You might be looking for cat. However, tensors cannot hold variable length data. for example, here we have a list with two tensors that have different sizes(in their last dim(dim=2)) and we want to create a larger tensor consisting of both of them, so we can use cat and create a larger tensor containing both of their data. 
also note that you can't use cat with half tensors on cpu as of right now so you should convert them to float, do the concatenation and then convert back to half ``` import torch a = torch.arange(8).reshape(2, 2, 2) b = torch.arange(12).reshape(2, 2, 3) my_list = [a, b] my_tensor = torch.cat([a, b], dim=2) print(my_tensor.shape) #torch.Size([2, 2, 5]) ``` you haven't explained your goal so another option is to use pad_sequence like this: ``` from torch.nn.utils.rnn import pad_sequence a = torch.ones(25, 300) b = torch.ones(22, 300) c = torch.ones(15, 300) pad_sequence([a, b, c]).size() #torch.Size([25, 3, 300]) ``` edit: in this particular case, you can use torch.cat([x.float() for x in sequence], dim=1).half()", "best_answers_score":0.8, "library_name":"pytorch", "question_url":"https:\/\/stackoverflow.com\/questions\/55050717\/converting-list-of-tensors-to-tensors-pytorch", "best_answers_votes":22, "question_length":404, "response_length":1045 }, { "question":"How to solve \"RuntimeError: CUDA error: invalid device ordinal\"? I'm trying to run this code. I don't know what is wrong with it, but this code is not running. and I don't know how to solve this problem. 
``` import cv2 from facial_emotion_recognition import EmotionRecognition emotion_detector = EmotionRecognition(device='gpu', gpu_id=1) camera = cv2.VideoCapture(0) while True: image = camera.read()[1] image = emotion_detector.recognise_emotion(image, return_type='BGR') cv2.imshow('Camera', image) key = cv2.waitKey(1) if key == 27: break camera.release() cv2.destroyAllWindows() ``` but I'm getting this error: ``` Traceback (most recent call last): File \"\/home\/fahim\/Documents\/Python_projects\/Python tutorials\/pantech AI Master\/Computer_Vision\/Day 8 Face emotion recognition\/emotion.py\", line 4, in emotion_detector = EmotionRecognition(device='gpu', gpu_id=1) File \"\/home\/fahim\/anaconda3\/envs\/Computer_Vision\/lib\/python3.7\/site-packages\/facial_emotion_recognition\/facial_emotion_recognition.py\", line 25, in __init__ self.network = NetworkV2(in_c=1, nl=32, out_f=7).to(self.device) File \"\/home\/fahim\/anaconda3\/envs\/Computer_Vision\/lib\/python3.7\/site-packages\/torch\/nn\/modules\/module.py\", line 607, in to return self._apply(convert) File \"\/home\/fahim\/anaconda3\/envs\/Computer_Vision\/lib\/python3.7\/site-packages\/torch\/nn\/modules\/module.py\", line 354, in _apply module._apply(fn) File \"\/home\/fahim\/anaconda3\/envs\/Computer_Vision\/lib\/python3.7\/site-packages\/torch\/nn\/modules\/module.py\", line 354, in _apply module._apply(fn) File \"\/home\/fahim\/anaconda3\/envs\/Computer_Vision\/lib\/python3.7\/site-packages\/torch\/nn\/modules\/module.py\", line 376, in _apply param_applied = fn(param) File \"\/home\/fahim\/anaconda3\/envs\/Computer_Vision\/lib\/python3.7\/site-packages\/torch\/nn\/modules\/module.py\", line 605, in convert return t.to(device, dtype if t.is_floating_point() else None, non_blocking) RuntimeError: CUDA error: invalid device ordinal Process finished with exit code 1 ``` This is my the configuration of my computer: GPU: NVIDIA GeForce MX130 CPU: Intel i5-10210U (8) @ 4.200GHz Help me to solve this please.", 
"response":"Try changing: ```py emotion_detector = EmotionRecognition(device='gpu', gpu_id=1) ``` To: ```py emotion_detector = EmotionRecognition(device='gpu', gpu_id=0) ``` gpu_id is only effective when more than one GPU is detected, you only seem to have one GPU, so it throws an error since you tell the function to get GPU 2 (since we count from 0).", "best_answers_score":0.8, "library_name":"pytorch", "question_url":"https:\/\/stackoverflow.com\/questions\/64334033\/how-to-solve-runtimeerror-cuda-error-invalid-device-ordinal", "best_answers_votes":25, "question_length":2106, "response_length":341 }, { "question":"Is there really no padding=same option for PyTorch's Conv2d? I'm currently working on building a convolutional neural network (CNN) that will work on financial time series data. The input shape is (100, 40) - 100 time stamps by 40 features. The CNN that I'm using uses asymmetric kernel sizes (i.e. 1 x 2 and 4 x 1) and also asymmetric strides (i.e. 1 x 2 for the 1 x 2 layers and 1 x 1 for the 4 x 1 layers). In order to maintain the height dimension to stay 100, I needed to pad the data. In my research, I noticed that people who use TensorFlow or Keras simply use padding='same'; but this option is apparently unavailable in PyTorch. According to some answers in What is the difference between 'SAME' and 'VALID' padding in tf.nn.max_pool of tensorflow?, and also this answer on the PyTorch discussion forum, I can manually calculate how I need to pad my data, and use torch.nn.ZeroPad2d to solve the problem - since apparently normal torch.nn.Conv2d layers don't support asymmetric padding (I believe that the total padding I need is 3 in height and 0 in width). I tried this code: ``` import torch import torch.nn as nn conv = nn.Conv2d(1, 1, kernel_size=(4, 1)) pad = nn.ZeroPad2d((0, 0, 2, 1)) # Add 2 to top and 1 to bottom. 
x = torch.randint(low=0, high=9, size=(100, 40)) x = x.unsqueeze(0).unsqueeze(0) y = pad(x) x.shape # (1, 1, 100, 40) y.shape # (1, 1, 103, 40) print(conv(x.float()).shape) print(conv(y.float()).shape) # Output # x -> (1, 1, 97, 40) # y -> (1, 1, 100, 40) ``` It does work, in the sense that the data shape remains the same. However, is there really no padding='same' option available? Also, how can we decide which side to pad?", "response":"I had the same issue some time ago, so I implemented it myself using a ZeroPad2d layer as you are trying to do. Here is the right formula: ``` import torch.nn as nn from functools import reduce from operator import __add__ kernel_sizes = (4, 1) # Internal parameters used to reproduce Tensorflow \"Same\" padding. # For some reasons, padding dimensions are reversed wrt kernel sizes, # first comes width then height in the 2D case. conv_padding = reduce(__add__, [(k \/\/ 2 + (k - 2 * (k \/\/ 2)) - 1, k \/\/ 2) for k in kernel_sizes[::-1]]) pad = nn.ZeroPad2d(conv_padding) conv = nn.Conv2d(1, 1, kernel_size=kernel_sizes) print(x.shape) # (1, 1, 100, 40) print(conv(pad(x.float())).shape) # (1, 1, 100, 40) ``` Also, as mentioned by @akshayk07 and @Separius, I can confirm that it is the dynamic nature of pytorch that makes it hard. Here is a post about this point from a Pytorch developer.", "best_answers_score":0.8, "library_name":"pytorch", "question_url":"https:\/\/stackoverflow.com\/questions\/58307036\/is-there-really-no-padding-same-option-for-pytorchs-conv2d", "best_answers_votes":9, "question_length":1662, "response_length":859 }, { "question":"What's the reason of the error ValueError: Expected more than 1 value per channel? 
reference fast.ai github repository of fast.ai (as the code elevates the library which is built on top of PyTorch) Please scroll the discussion a bit I am running the following code, and get an error while trying to pass the data to the predict_array function The code is failing when i am trying to use it to predict directly on a single image but it run's perfectly when that same image is in a test folder ``` from fastai.conv_learner import * from planet import f2 PATH = 'data\/shopstyle\/' metrics=[f2] f_model = resnet34 def get_data(sz): tfms = tfms_from_model(f_model, sz, aug_tfms=transforms_side_on, max_zoom=1.05) return ImageClassifierData.from_csv(PATH, 'train', label_csv, tfms=tfms, suffix='.jpg', val_idxs=val_idxs, test_name='test') def print_list(list_or_iterator): return \"[\" + \", \".join( str(x) for x in list_or_iterator) + \"]\" label_csv = f'{PATH}prod_train.csv' n = len(list(open(label_csv)))-1 val_idxs = get_cv_idxs(n) sz = 64 data = get_data(sz) print(\"Loading model...\") learn = ConvLearner.pretrained(f_model, data, metrics=metrics) learn.load(f'{sz}') #learn.load(\"tmp\") print(\"Predicting...\") learn.precompute=False trn_tfms, val_tfrms = tfms_from_model(f_model, sz) #im = val_tfrms(open_image(f'{PATH}valid\/4500132.jpg')) im = val_tfrms(np.array(PIL.Image.open(f'{PATH}valid\/4500132.jpg'))) preds = learn.predict_array(im[None]) p=list(zip(data.classes, preds)) print(\"predictions = \" + print_list(p)) ``` Here's the Traceback I am Getting ``` Traceback (most recent call last): File \"predict.py\", line 34, in preds = learn.predict_array(im[None]) File \"\/home\/ubuntu\/fastai\/courses\/dl1\/fastai\/learner.py\", line 266, in predict_array def predict_array(self, arr): return to_np(self.model(V(T(arr).cuda()))) File \"\/home\/ubuntu\/src\/anaconda3\/envs\/fastai\/lib\/python3.6\/site-packages\/torch\/nn\/modules\/module.py\", line 325, in __call__ result = self.forward(*input, **kwargs) File 
\"\/home\/ubuntu\/src\/anaconda3\/envs\/fastai\/lib\/python3.6\/site-packages\/torch\/nn\/modules\/container.py\", line 67, in forward input = module(input) File \"\/home\/ubuntu\/src\/anaconda3\/envs\/fastai\/lib\/python3.6\/site-packages\/torch\/nn\/modules\/module.py\", line 325, in __call__ result = self.forward(*input, **kwargs) File \"\/home\/ubuntu\/src\/anaconda3\/envs\/fastai\/lib\/python3.6\/site-packages\/torch\/nn\/modules\/batchnorm.py\", line 37, in forward self.training, self.momentum, self.eps) File \"\/home\/ubuntu\/src\/anaconda3\/envs\/fastai\/lib\/python3.6\/site-packages\/torch\/nn\/functional.py\", line 1011, in batch_norm raise ValueError('Expected more than 1 value per channel when training, got input size {}'.format(size)) ValueError: Expected more than 1 value per channel when training, got input size [1, 1024] ``` Things I have Tried np.expand_dims(IMG,axis=0) or image = image[..., np.newaxis] Tried a different way of reading the image ``` img = cv2.imread(img_path) img = cv2.resize(img, dsize = (200,200)) img = np.einsum('ijk->kij', img) img = np.expand_dims(img, axis =0) img = torch.from_numpy(img) learn.model(Variable(img.float()).cuda()) ``` BTW the error still remains ``` ValueError: Expected more than 1 value per channel when training, got input size [1, 1024] ``` Can't find any reference in The Google search also..", "response":"It will fail on batches of size 1 if we use feature-wise batch normalization. As Batch normalization computes: ``` y = (x - mean(x)) \/ (std(x) + eps) ``` If we have one sample per batch then mean(x) = x, and the output will be entirely zero (ignoring the bias). 
We can't use that for learning...", "best_answers_score":0.8, "library_name":"pytorch", "question_url":"https:\/\/stackoverflow.com\/questions\/48343857\/whats-the-reason-of-the-error-valueerror-expected-more-than-1-value-per-channe", "best_answers_votes":57, "question_length":3299, "response_length":295 }, { "question":"AttributeError: 'collections.OrderedDict' object has no attribute 'eval' I have a model file which looks like this ``` OrderedDict([('inp.conv1.conv.weight', (0 ,0 ,0 ,.,.) = -1.5073e-01 6.4760e-02 1.9156e-01 1.2175e-01 3.5886e-02 1.3992e-01 -1.5903e-01 8.2055e-02 1.7820e-01 (0 ,0 ,1 ,.,.) = 1.0604e-01 -1.3653e-01 1.4803e-01 6.0276e-02 -1.4674e-02 2.3059e-06 -6.2192e-02 -5.1061e-03 -7.4145e-03 (0 ,0 ,2 ,.,.) = -5.5632e-02 3.5326e-02 6.5108e-02 1.1411e-01 -4.4160e-02 8.2610e-02 8.9979e-02 -3.5454e-02 4.2549e-02 (1 ,0 ,0 ,.,.) = 4.8523e-02 -4.3961e-02 5.3614e-02 -1.2644e-01 1.2777e-01 8.9547e-02 3.8392e-02 2.7016e-02 -1.4552e-01 (1 ,0 ,1 ,.,.) = 9.5537e-02 2.8748e-02 3.9772e-02 -6.2410e-02 1.1264e-01 7.8663e-02 -2.6374e-02 1.4401e-01 -1.7109e-01 (1 ,0 ,2 ,.,.) = 5.1791e-02 -1.6388e-01 -1.7605e-01 3.5028e-02 7.7164e-02 -1.4499e-01 -2.9189e-02 2.7064e-03 -2.3228e-02 (2 ,0 ,0 ,.,.) = -7.4446e-03 -9.7202e-02 -1.4704e-01 -1.0019e-02 8.1780e-02 -5.3530e-02 -1.8412e-01 1.5988e-01 -1.3450e-01 (2 ,0 ,1 ,.,.) = -1.1075e-01 -5.2478e-02 6.0658e-02 1.6739e-01 -2.9360e-02 1.2621e-01 2.0686e-02 1.1468e-01 1.2282e-01 ``` I want to do inference on this model, but when i do model.eval() i get, ``` AttributeError: 'collections.OrderedDict' object has no attribute 'eval ```", "response":"It is not a model file, instead, this is a state file. In a model file, the complete model is stored, whereas in a state file only the parameters are stored. So, your OrderedDict are just values for your model. You will need to create the model and then need to load these values into your model. 
So, the process will be something of the form: ``` import torch import torch.nn as nn class TempModel(nn.Module): def __init__(self): super(TempModel, self).__init__() self.conv1 = nn.Conv2d(3, 5, (3, 3)) def forward(self, inp): return self.conv1(inp) model = TempModel() model.load_state_dict(torch.load(file_path)) model.eval() ``` You'll need to define your model properly. The one given in the example above is just a dummy. If you construct your model yourself, you might need to update the keys of the saved dict file as mentioned here. The best course of action is to define your model in exactly the same way as when the state_dict was saved, and then directly executing model.load_state_dict will work.", "best_answers_score":0.8, "library_name":"pytorch", "question_url":"https:\/\/stackoverflow.com\/questions\/49941426\/attributeerror-collections-ordereddict-object-has-no-attribute-eval", "best_answers_votes":57, "question_length":1272, "response_length":973 }, { "question":"PyTorch: What is the difference between tensor.cuda() and tensor.to(torch.device(\"cuda:0\"))? In PyTorch, what is the difference between the following two methods in sending a tensor (or model) to GPU: Setup: ``` X = np.array([[1, 3, 2, 3], [2, 3, 5, 6], [1, 2, 3, 4]]) # X = model() X = torch.DoubleTensor(X) ``` Method 1: X.cuda() Method 2: device = torch.device(\"cuda:0\"); X = X.to(device) (I don't really need a detailed explanation of what is happening in the backend, just want to know if they are both essentially doing the same thing)", "response":"There is no difference between the two. Early versions of PyTorch had .cuda() and .cpu() methods to move tensors and models from cpu to gpu and back.
However, this made code writing a bit cumbersome: ```py if cuda_available: x = x.cuda() model.cuda() else: x = x.cpu() model.cpu() ``` Later versions introduced .to() that basically takes care of everything in an elegant way: ```py device = torch.device('cuda') if cuda_available else torch.device('cpu') x = x.to(device) model = model.to(device) ```", "best_answers_score":0.8, "library_name":"pytorch", "question_url":"https:\/\/stackoverflow.com\/questions\/62907815\/pytorch-what-is-the-difference-between-tensor-cuda-and-tensor-totorch-device", "best_answers_votes":32, "question_length":537, "response_length":500 }, { "question":"How to convert a pytorch tensor of ints to a tensor of booleans? I would like to cast a tensor of ints to a tensor of booleans. Specifically I would like to be able to have a function which transforms tensor([0,10,0,16]) to tensor([0,1,0,1]) This is trivial in Tensorflow by just using tf.cast(x,tf.bool). I want the cast to change all ints greater than 0 to a 1 and all ints equal to 0 to a 0. This is the equivalent of !! in most languages. Since pytorch does not seem to have a dedicated boolean type to cast to, what is the best approach here? Edit: I am looking for a vectorized solution opposed to looping through each element.", "response":"What you're looking for is to generate a boolean mask for the given integer tensor. For this, you can simply check for the condition: \"whether the values in the tensor are greater than 0\" using simple comparison operator (>) or using torch.gt(), which would then give us the desired result. 
``` # input tensor In [76]: t Out[76]: tensor([ 0, 10, 0, 16]) # generate the needed boolean mask In [78]: t > 0 Out[78]: tensor([0, 1, 0, 1], dtype=torch.uint8) ``` ``` # sanity check In [93]: mask = t > 0 In [94]: mask.type() Out[94]: 'torch.ByteTensor' ``` Note: In PyTorch version 1.4+, the above operation would return 'torch.BoolTensor' ``` In [9]: t > 0 Out[9]: tensor([False, True, False, True]) # alternatively, use `torch.gt()` API In [11]: torch.gt(t, 0) Out[11]: tensor([False, True, False, True]) ``` If you indeed want single bits (either 0s or 1s), cast it using: ``` In [14]: (t > 0).type(torch.uint8) Out[14]: tensor([0, 1, 0, 1], dtype=torch.uint8) # alternatively, use `torch.gt()` API In [15]: torch.gt(t, 0).int() Out[15]: tensor([0, 1, 0, 1], dtype=torch.int32) ``` The reason for this change has been discussed in this feature-request issue: issues\/4764 - Introduce torch.BoolTensor ... TL;DR: Simple one liner ``` t.bool().int() ```", "best_answers_score":0.8, "library_name":"pytorch", "question_url":"https:\/\/stackoverflow.com\/questions\/53562417\/how-to-convert-a-pytorch-tensor-of-ints-to-a-tensor-of-booleans", "best_answers_votes":30, "question_length":633, "response_length":1247 }, { "question":"How to check if a model is in train or eval mode in PyTorch? How to check from within a model if it is currently in train or eval mode?", "response":"From the Pytorch forum, with a small tweak: use ``` if self.training: # it's in train mode else: # it's in eval mode ``` Always better to have a stack overflow answer than to look at forums. 
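A small self-contained sketch of the self.training flag (the module here is an arbitrary stand-in): ```python
import torch.nn as nn

class Net(nn.Module):
    def __init__(self):
        super().__init__()
        self.fc = nn.Linear(2, 2)

    def forward(self, x):
        # self.training is flipped by model.train() / model.eval()
        mode = "train" if self.training else "eval"
        print(f"running in {mode} mode")
        return self.fc(x)

net = Net()
print(net.training)  # True -- modules start in train mode
net.eval()
print(net.training)  # False
net.train()
print(net.training)  # True
```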
For an explanation of the two modes, see the PyTorch documentation.", "best_answers_score":0.8, "library_name":"pytorch", "question_url":"https:\/\/stackoverflow.com\/questions\/65344578\/how-to-check-if-a-model-is-in-train-or-eval-mode-in-pytorch", "best_answers_votes":34, "question_length":135, "response_length":218 }, { "question":"How to modify path where Torch Hub models are downloaded When I download models through Torch Hub, models are automatically downloaded to \/home\/me\/.cache\/torch. How can I modify this behavior?", "response":"From the official documentation, there are several ways to modify this path. In priority order: Calling hub.set_dir() $TORCH_HOME\/hub, if environment variable TORCH_HOME is set. $XDG_CACHE_HOME\/torch\/hub, if environment variable XDG_CACHE_HOME is set. ~\/.cache\/torch\/hub So I just had to do: export TORCH_HOME=\/my\/path\/ Edit: TORCH_HUB appears to be deprecated; use TORCH_HOME instead.", "best_answers_score":0.8, "library_name":"pytorch", "question_url":"https:\/\/stackoverflow.com\/questions\/59134499\/how-to-modify-path-where-torch-hub-models-are-downloaded", "best_answers_votes":34, "question_length":193, "response_length":379 }, { "question":"What are the difference between .bin and .pt pytorch saved model types? Sometimes I see .bin files for pretrained pytorch, like the one here https:\/\/github.com\/allenai\/scibert#pytorch-models However, the files are usually saved as .pt files. What's the difference between these two parameter weights file formats? Why are there two?", "response":"There is no difference; it's just an extension. On UNIX-like OSes one can open a file no matter the extension (see here); Windows, on the other hand, is built with extensions in mind (here). torch can read either .bin or .pt or .anything, so it's probably a convention employed by the creators of that repository.
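A quick sanity check that torch.save and torch.load ignore the extension entirely (file names here are arbitrary): ```python
import os
import tempfile

import torch

t = torch.arange(4)
with tempfile.TemporaryDirectory() as d:
    for name in ("weights.bin", "weights.pt", "weights.anything"):
        path = os.path.join(d, name)
        torch.save(t, path)  # serialized format is identical regardless of extension
        assert torch.equal(torch.load(path), t)
print("all extensions round-trip identically")
```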
The standard approach is to use .pt or .pth, though the second extension collides with Python's path-configuration files (.pth files, which the interpreter reads from site-packages), so .pt seems to be the best idea for now (see this github issue).", "best_answers_score":0.8, "library_name":"pytorch", "question_url":"https:\/\/stackoverflow.com\/questions\/57245332\/what-are-the-difference-between-bin-and-pt-pytorch-saved-model-types", "best_answers_votes":28, "question_length":332, "response_length":513 }, { "question":"PyTorch - Getting the 'TypeError: pic should be PIL Image or ndarray. Got ' error I am getting the error TypeError: pic should be PIL Image or ndarray. Got when I try to load a non-image dataset through the DataLoader. The versions of torch and torchvision are 1.0.1, and 0.2.2.post3, respectively. Python's version is 3.7.1 on a Windows 10 machine. Here is the code: ``` class AndroDataset(Dataset): def __init__(self, csv_path): self.transform = transforms.Compose([transforms.ToTensor()]) csv_data = pd.read_csv(csv_path) self.csv_path = csv_path self.features = [] self.classes = [] self.features.append(csv_data.iloc[:, :-1].values) self.classes.append(csv_data.iloc[:, -1].values) def __getitem__(self, index): # the error occurs here return self.transform(self.features[index]), self.transform(self.classes[index]) def __len__(self): return len(self.features) ``` And I set the loader: ``` training_data = AndroDataset('android.csv') train_loader = DataLoader(dataset=training_data, batch_size=batch_size, shuffle=True) ``` Here is the full error stack trace: ``` Traceback (most recent call last): File \"C:\\Program Files\\JetBrains\\PyCharm 2018.1.2\\helpers\\pydev\\pydevd.py\", line 1758, in main() File \"C:\\Program Files\\JetBrains\\PyCharm 2018.1.2\\helpers\\pydev\\pydevd.py\", line 1752, in main globals = debugger.run(setup['file'], None, None, is_module) File \"C:\\Program Files\\JetBrains\\PyCharm 2018.1.2\\helpers\\pydev\\pydevd.py\", line 1147, in run pydev_imports.execfile(file, globals, locals) #
execute the script File \"C:\\Program Files\\JetBrains\\PyCharm 2018.1.2\\helpers\\pydev\\_pydev_imps\\_pydev_execfile.py\", line 18, in execfile exec(compile(contents+\"\\n\", file, 'exec'), glob, loc) File \"C:\/Users\/talha\/Documents\/PyCharmProjects\/DeepAndroid\/deep_test_conv1d.py\", line 231, in main() File \"C:\/Users\/talha\/Documents\/PyCharmProjects\/DeepAndroid\/deep_test_conv1d.py\", line 149, in main for i, (images, labels) in enumerate(train_loader): File \"C:\\Users\\talha\\Documents\\PyCharmProjects\\DeepAndroid\\venv\\lib\\site-packages\\torch\\utils\\data\\dataloader.py\", line 615, in __next__ batch = self.collate_fn([self.dataset[i] for i in indices]) File \"C:\\Users\\talha\\Documents\\PyCharmProjects\\DeepAndroid\\venv\\lib\\site-packages\\torch\\utils\\data\\dataloader.py\", line 615, in batch = self.collate_fn([self.dataset[i] for i in indices]) File \"C:\/Users\/talha\/Documents\/PyCharmProjects\/DeepAndroid\/deep_test_conv1d.py\", line 102, in __getitem__ return self.transform(self.features[index]), self.transform(self.classes[index]) File \"C:\\Users\\talha\\Documents\\PyCharmProjects\\DeepAndroid\\venv\\lib\\site-packages\\torchvision\\transforms\\transforms.py\", line 60, in __call__ img = t(img) File \"C:\\Users\\talha\\Documents\\PyCharmProjects\\DeepAndroid\\venv\\lib\\site-packages\\torchvision\\transforms\\transforms.py\", line 91, in __call__ return F.to_tensor(pic) File \"C:\\Users\\talha\\Documents\\PyCharmProjects\\DeepAndroid\\venv\\lib\\site-packages\\torchvision\\transforms\\functional.py\", line 50, in to_tensor raise TypeError('pic should be PIL Image or ndarray. Got {}'.format(type(pic))) TypeError: pic should be PIL Image or ndarray. Got ```", "response":"This happens because of the transformation you use: ``` self.transform = transforms.Compose([transforms.ToTensor()]) ``` As you can see in the documentation, torchvision.transforms.ToTensor converts a PIL Image or numpy.ndarray to tensor. 
So if you want to use this transformation, your data has to be of one of the above types.", "best_answers_score":0.8, "library_name":"pytorch", "question_url":"https:\/\/stackoverflow.com\/questions\/56741108\/pytorch-getting-the-typeerror-pic-should-be-pil-image-or-ndarray-got-class", "best_answers_votes":18, "question_length":3119, "response_length":328 }, { "question":"TypeError: can't convert np.ndarray of type numpy.object_ How to convert a numpy array of dtype=object to torch Tensor? ``` array([ array([0.5, 1.0, 2.0], dtype=float16), array([4.0, 6.0, 8.0], dtype=float16) ], dtype=object) ```", "response":"It is difficult to answer properly since you do not show us how you try to do it. From your error message I can see that you try to convert a numpy array containing objects to a torch tensor. This does not work, you will need a numeric data type: ``` import torch import numpy as np # Your test array without 'dtype=object' a = np.array([ np.array([0.5, 1.0, 2.0], dtype=np.float16), np.array([4.0, 6.0, 8.0], dtype=np.float16), ]) b = torch.from_numpy(a) print(a.dtype) # This should not be 'object' print(b) ``` Output ``` float16 tensor([[0.5000, 1.0000, 2.0000], [4.0000, 6.0000, 8.0000]], dtype=torch.float16) ```", "best_answers_score":0.8, "library_name":"pytorch", "question_url":"https:\/\/stackoverflow.com\/questions\/55724123\/typeerror-cant-convert-np-ndarray-of-type-numpy-object", "best_answers_votes":25, "question_length":229, "response_length":618 }, { "question":"Why does autograd not produce gradient for intermediate variables? trying to wrap my head around how gradients are represented and how autograd works: ``` import torch from torch.autograd import Variable x = Variable(torch.Tensor([2]), requires_grad=True) y = x * x z = y * y z.backward() print(x.grad) #Variable containing: #32 #[torch.FloatTensor of size 1] print(y.grad) #None ``` Why does it not produce a gradient for y? 
If y.grad = dz\/dy, then shouldn't it at least produce a variable like y.grad = 2*y?", "response":"By default, gradients are only retained for leaf variables. non-leaf variables' gradients are not retained to be inspected later. This was done by design, to save memory. -soumith chintala See: https:\/\/discuss.pytorch.org\/t\/why-cant-i-see-grad-of-an-intermediate-variable\/94 Option 1: Call y.retain_grad() ``` x = Variable(torch.Tensor([2]), requires_grad=True) y = x * x z = y * y y.retain_grad() z.backward() print(y.grad) #Variable containing: # 8 #[torch.FloatTensor of size 1] ``` Source: https:\/\/discuss.pytorch.org\/t\/why-cant-i-see-grad-of-an-intermediate-variable\/94\/16 Option 2: Register a hook, which is basically a function called when that gradient is calculated. Then you can save it, assign it, print it, whatever... ``` from __future__ import print_function import torch from torch.autograd import Variable x = Variable(torch.Tensor([2]), requires_grad=True) y = x * x z = y * y y.register_hook(print) ## this can be anything you need it to be z.backward() ``` output: ``` Variable containing: 8 [torch.FloatTensor of size 1 ``` Source: https:\/\/discuss.pytorch.org\/t\/why-cant-i-see-grad-of-an-intermediate-variable\/94\/2 Also see: https:\/\/discuss.pytorch.org\/t\/why-cant-i-see-grad-of-an-intermediate-variable\/94\/7", "best_answers_score":0.8, "library_name":"pytorch", "question_url":"https:\/\/stackoverflow.com\/questions\/45988168\/why-does-autograd-not-produce-gradient-for-intermediate-variables", "best_answers_votes":33, "question_length":509, "response_length":1227 }, { "question":"How to add parameters in module class in pytorch custom model? I tried to find the answer but I can't. I make a custom deep learning model using pytorch. 
For example, ``` class Net(nn.Module): def __init__(self): super(Net, self).__init__() self.nn_layers = nn.ModuleList() self.layer = nn.Linear(2,3).double() torch.nn.init.xavier_normal_(self.layer.weight) self.bias = torch.nn.Parameter(torch.randn(3)) self.nn_layers.append(self.layer) def forward(self, x): activation = torch.tanh output = activation(self.layer(x)) + self.bias return output ``` If I print ``` model = Net() print(list(model.parameters())) ``` it does not contain model.bias, so optimizer = optimizer.Adam(model.parameters()) does not update model.bias. How can I get around this? Thanks!", "response":"You need to register your parameters: ```py self.register_parameter(name='bias', param=torch.nn.Parameter(torch.randn(3))) ``` Update: In more recent versions of PyTorch, you no longer need to explicitly call register_parameter; it's enough to set a member of your nn.Module to an nn.Parameter to \"notify\" PyTorch that this variable should be treated as a trainable parameter: ```py self.bias = torch.nn.Parameter(torch.randn(3)) ``` Please note that if you want to have more complex data structures of parameters (e.g., lists, etc.) you should use dedicated containers like torch.nn.ParameterList or torch.nn.ParameterDict.", "best_answers_score":0.8, "library_name":"pytorch", "question_url":"https:\/\/stackoverflow.com\/questions\/59234238\/how-to-add-parameters-in-module-class-in-pytorch-custom-model", "best_answers_votes":30, "question_length":761, "response_length":618 }, { "question":"RuntimeError: output with shape [1, 224, 224] doesn't match the broadcast shape [3, 224, 224] This is the error I get when I try to train my network. The class we used to store images from the Caltech 101 dataset was provided to us by our teachers.
``` from torchvision.datasets import VisionDataset from PIL import Image import os import os.path import sys def pil_loader(path): # open path as file to avoid ResourceWarning (https:\/\/github.com\/python-pillow\/Pillow\/issues\/835) with open(path, 'rb') as f: img = Image.open(f) return img.convert('RGB') class Caltech(VisionDataset): def __init__(self, root, split='train', transform=None, target_transform=None): super(Caltech, self).__init__(root, transform=transform, target_transform=target_transform) self.split = split # This defines the split you are going to use # (split files are called 'train.txt' and 'test.txt') ''' - Here you should implement the logic for reading the splits files and accessing elements - If the RAM size allows it, it is faster to store all data in memory - PyTorch Dataset classes use indexes to read elements - You should provide a way for the __getitem__ method to access the image-label pair through the index - Labels should start from 0, so for Caltech you will have lables 0...100 (excluding the background class) ''' # Open file in read only mode and read all lines file = open(self.split, \"r\") lines = file.readlines() # Filter out the lines which start with 'BACKGROUND_Google' as asked in the homework self.elements = [i for i in lines if not i.startswith('BACKGROUND_Google')] # Delete BACKGROUND_Google class from dataset labels self.classes = sorted(os.listdir(os.path.join(self.root, \"\"))) self.classes.remove(\"BACKGROUND_Google\") def __getitem__(self, index): ''' __getitem__ should access an element through its index Args: index (int): Index Returns: tuple: (sample, target) where target is class_index of the target class. 
''' img = Image.open(os.path.join(self.root, self.elements[index].rstrip())) target = self.classes.index(self.elements[index].rstrip().split('\/')[0]) image, label = img, target # Provide a way to access image and label via index # Image should be a PIL Image # label can be int # Applies preprocessing when accessing the image if self.transform is not None: image = self.transform(image) return image, label def __len__(self): ''' The __len__ method returns the length of the dataset It is mandatory, as this is used by several other components ''' # Provides a way to get the length (number of elements) of the dataset length = len(self.elements) return length ``` Whereas the preprocessing phase is done by this code: ``` # Define transforms for training phase train_transform = transforms.Compose([transforms.Resize(256), # Resizes short size of the PIL image to 256 transforms.CenterCrop(224), # Crops a central square patch of the image # 224 because torchvision's AlexNet needs a 224x224 input! 
# Remember this when applying different transformations, otherwise you get an error transforms.ToTensor(), # Turn PIL Image to torch.Tensor transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5)) # Normalizes tensor with mean and standard deviation ]) # Define transforms for the evaluation phase eval_transform = transforms.Compose([transforms.Resize(256), transforms.CenterCrop(224), transforms.ToTensor(), transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5)) ``` In the end this is the preparation of datasets and dataloader: ``` # Clone github repository with data if not os.path.isdir('.\/Homework2-Caltech101'): !git clone https:\/\/github.com\/MachineLearning2020\/Homework2-Caltech101.git # Commands to execute when there is an error saying no file or directory related to .\/Homework2-Caltech101\/ # !rm -r .\/Homework2-Caltech101\/ # !git clone https:\/\/github.com\/MachineLearning2020\/Homework2-Caltech101.git DATA_DIR = 'Homework2-Caltech101\/101_ObjectCategories' SPLIT_TRAIN = 'Homework2-Caltech101\/train.txt' SPLIT_TEST = 'Homework2-Caltech101\/test.txt' # 1 - Data preparation myTrainDS = Caltech(DATA_DIR, split = SPLIT_TRAIN, transform=train_transform) myTestDS = Caltech(DATA_DIR, split = SPLIT_TEST, transform=eval_transform) print('My Train DS: {}'.format(len(myTrainDS))) print('My Test DS: {}'.format(len(myTestDS))) # 1 - Data preparation myTrain_dataloader = DataLoader(myTrainDS, batch_size=BATCH_SIZE, shuffle=True, num_workers=4, drop_last=True) myTest_dataloader = DataLoader(myTestDS, batch_size=BATCH_SIZE, shuffle=False, num_workers=4) ``` Okay now the two .txt files contain the lists of images we want to have in the train and test splits, so we have to get them from there, but that should have been done correctly. The thing is that when I approach my training phase (see code later) I am presented the error in the title. I already tried to add the following line in the transform function: ``` [...] 
transforms.Lambda(lambda x: x.repeat(3, 1, 1)), ``` after the centercrop, but it says that Image has no attribute repeat, so I'm kinda stuck. The training code line which gives me the error is the following: ``` # Iterate over the dataset for images, labels in myTrain_dataloader: ``` If needed, full error is: ``` RuntimeError Traceback (most recent call last) in () 47 48 # Iterate over the dataset ---> 49 for images, labels in myTrain_dataloader: 50 51 # Bring data over the device of choice 2 frames \/usr\/local\/lib\/python3.6\/dist-packages\/torch\/utils\/data\/dataloader.py in __next__(self) 817 else: 818 del self._task_info[idx] --> 819 return self._process_data(data) 820 821 next = __next__ # Python 2 compatibility \/usr\/local\/lib\/python3.6\/dist-packages\/torch\/utils\/data\/dataloader.py in _process_data(self, data) 844 self._try_put_index() 845 if isinstance(data, ExceptionWrapper): --> 846 data.reraise() 847 return data 848 \/usr\/local\/lib\/python3.6\/dist-packages\/torch\/_utils.py in reraise(self) 383 # (https:\/\/bugs.python.org\/issue2651), so we work around it. 384 msg = KeyErrorMessage(msg) --> 385 raise self.exc_type(msg) RuntimeError: Caught RuntimeError in DataLoader worker process 0. 
Original Traceback (most recent call last): File \"\/usr\/local\/lib\/python3.6\/dist-packages\/torch\/utils\/data\/_utils\/worker.py\", line 178, in _worker_loop data = fetcher.fetch(index) File \"\/usr\/local\/lib\/python3.6\/dist-packages\/torch\/utils\/data\/_utils\/fetch.py\", line 44, in fetch data = [self.dataset[idx] for idx in possibly_batched_index] File \"\/usr\/local\/lib\/python3.6\/dist-packages\/torch\/utils\/data\/_utils\/fetch.py\", line 44, in data = [self.dataset[idx] for idx in possibly_batched_index] File \"\", line 72, in __getitem__ image = self.transform(image) File \"\/usr\/local\/lib\/python3.6\/dist-packages\/torchvision\/transforms\/transforms.py\", line 70, in __call__ img = t(img) File \"\/usr\/local\/lib\/python3.6\/dist-packages\/torchvision\/transforms\/transforms.py\", line 175, in __call__ return F.normalize(tensor, self.mean, self.std, self.inplace) File \"\/usr\/local\/lib\/python3.6\/dist-packages\/torchvision\/transforms\/functional.py\", line 217, in normalize tensor.sub_(mean[:, None, None]).div_(std[:, None, None]) RuntimeError: output with shape [1, 224, 224] doesn't match the broadcast shape [3, 224, 224] ``` I'm using Alexnet and the code I was provided is the following: ``` net = alexnet() # Loading AlexNet model # AlexNet has 1000 output neurons, corresponding to the 1000 ImageNet's classes # We need 101 outputs for Caltech-101 net.classifier[6] = nn.Linear(4096, NUM_CLASSES) # nn.Linear in pytorch is a fully connected layer # The convolutional layer is nn.Conv2d # We just changed the last layer of AlexNet with a new fully connected layer with 101 outputs # It is mandatory to study torchvision.models.alexnet source code ```", "response":"The first dimension of the tensor means the color, so what your error means is that you are giving a grayscale picture (1 channel), while the data loader expects a RGB image (3 channels). You defined a pil_loader function that returns an image in RGB, but you are never using it. 
So you have two options: Work with the image in grayscale instead of RGB, which is computationally cheaper. Solution: In both the train and test transforms, change transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5)) to transforms.Normalize((0.5), (0.5)) Make sure your image is in RGB. I don't know how your images are stored, but I guess you downloaded the dataset in grayscale. One thing you could try is using the pil_loader function you defined. Try changing img = Image.open(os.path.join(self.root, self.elements[index].rstrip())) to img = pil_loader(os.path.join(self.root, self.elements[index].rstrip())) in your __getitem__ function. Let me know how it goes!", "best_answers_score":0.8, "library_name":"pytorch", "question_url":"https:\/\/stackoverflow.com\/questions\/59218671\/runtimeerror-output-with-shape-1-224-224-doesnt-match-the-broadcast-shape", "best_answers_votes":30, "question_length":7678, "response_length":955 }, { "question":"PyTorch `torch.no_grad` vs `torch.inference_mode` PyTorch has new functionality torch.inference_mode as of v1.9 which is \"analogous to torch.no_grad... Code run under this mode gets better performance by disabling view tracking and version counter bumps.\" If I am just evaluating my model at test time (i.e. not training), is there any situation where torch.no_grad is preferable to torch.inference_mode? I plan to replace every instance of the former with the latter, and I expect to use runtime errors as a guardrail (i.e. I trust that any issue would reveal itself as a runtime error, and if it doesn't surface as a runtime error then I assume it is indeed preferable to use torch.inference_mode). More details on why inference mode was developed are mentioned in the PyTorch Developer Podcast.", "response":"Yes, torch.inference_mode is indeed preferable to torch.no_grad in all situations where inference mode does not throw a runtime error.
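One concrete difference you can observe: a tensor produced under no_grad can later re-enter autograd, while an inference tensor cannot (a small sketch, with an arbitrary stand-in model): ```python
import torch

model = torch.nn.Linear(4, 2)
x = torch.randn(1, 4)

with torch.no_grad():
    y_ng = model(x)
y_ng.requires_grad_(True)        # fine: a no_grad result can re-enter autograd

with torch.inference_mode():
    y_im = model(x)
print(y_im.requires_grad)        # False, like under no_grad
try:
    y_im.requires_grad_(True)    # not allowed for inference tensors
except RuntimeError as e:
    print("RuntimeError:", e)
```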
Check here.", "best_answers_score":0.8, "library_name":"pytorch", "question_url":"https:\/\/stackoverflow.com\/questions\/69543907\/pytorch-torch-no-grad-vs-torch-inference-mode", "best_answers_votes":24, "question_length":797, "response_length":146 }, { "question":"How to implement dropout in Pytorch, and where to apply it I am quite unsure whether this is correct. It is really sad I can't find many good examples on how to parametrize a NN. What do you think of this way of dropping out in those two classes. First I'm writing the original class : ``` class NeuralNet(nn.Module): def __init__(self, input_size, hidden_size, num_classes, p = dropout): super(NeuralNet, self).__init__() self.fc1 = nn.Linear(input_size, hidden_size) self.fc2 = nn.Linear(hidden_size, hidden_size) self.fc3 = nn.Linear(hidden_size, num_classes) def forward(self, x): out = F.relu(self.fc1(x)) out = F.relu(self.fc2(out)) out = self.fc3(out) return out ``` and then here, I found two different ways to write things, which I don't know how to distinguish. 
The first one uses: ``` self.drop_layer = nn.Dropout(p=p) ``` whereas the second: ``` self.dropout = nn.Dropout(p) ``` and here is my result: ``` class NeuralNet(nn.Module): def __init__(self, input_size, hidden_size, num_classes, p = dropout): super(NeuralNet, self).__init__() self.fc1 = nn.Linear(input_size, hidden_size) self.fc2 = nn.Linear(hidden_size, hidden_size) self.fc3 = nn.Linear(hidden_size, num_classes) self.drop_layer = nn.Dropout(p=p) def forward(self, x): out = F.relu(self.fc1(x)) out = F.relu(self.fc2(out)) out = self.fc3(out) return out class NeuralNet(nn.Module): def __init__(self, input_size, hidden_size, num_classes, p = dropout): super(NeuralNet, self).__init__() self.fc1 = nn.Linear(input_size, hidden_size) self.fc2 = nn.Linear(hidden_size, hidden_size) self.fc3 = nn.Linear(hidden_size, num_classes) self.dropout = nn.Dropout(p) def forward(self, x): out = F.relu(self.fc1(x)) out = F.relu(self.fc2(out)) out = self.fc3(out) return out ``` Could this work? If not, how can I improve it, and is it giving me the result I'm expecting, meaning a neural network where I can drop out some neurons? Important detail: I only want to apply dropout to the second layer of the network, leaving the rest untouched!", "response":"The two examples you provided are exactly the same. self.drop_layer = nn.Dropout(p=p) and self.dropout = nn.Dropout(p) only differ because the authors assigned the layers to different variable names. The dropout layer is typically defined in the .__init__() method, and called in .forward().
Like this: ```py class NeuralNet(nn.Module): def __init__(self, input_size, hidden_size, num_classes, p = dropout): super(NeuralNet, self).__init__() self.fc1 = nn.Linear(input_size, hidden_size) self.fc2 = nn.Linear(hidden_size, hidden_size) self.fc3 = nn.Linear(hidden_size, num_classes) self.dropout = nn.Dropout(p) def forward(self, x): out = F.relu(self.fc1(x)) out = F.relu(self.fc2(out)) out = self.dropout(self.fc3(out)) return out ``` You can do the test: ```py import torch import torch.nn as nn m = nn.Dropout(p=0.5) input = torch.randn(20, 16) print(torch.sum(torch.nonzero(input))) print(torch.sum(torch.nonzero(m(input)))) ``` ```py tensor(5440) # sum of nonzero values tensor(2656) # sum on nonzero values after dropout ``` Let's visualize it: ```py import torch import torch.nn as nn input = torch.randn(5, 5) print(input) ``` ```py tensor([[ 1.1404, 0.2102, -0.1237, 0.4240, 0.0174], [-2.0872, 1.2790, 0.7804, -0.0962, -0.9730], [ 0.4788, -1.3408, 0.0483, 2.4125, -1.2463], [ 1.5761, 0.3592, 0.2302, 1.3980, 0.0154], [-0.4308, 0.2484, 0.8584, 0.1689, -1.3607]]) ``` Now, let's apply the dropout: ```py m = nn.Dropout(p=0.5) output = m(input) print(output) ``` ```py tensor([[ 0.0000, 0.0000, -0.0000, 0.8481, 0.0000], [-0.0000, 0.0000, 1.5608, -0.0000, -1.9459], [ 0.0000, -0.0000, 0.0000, 0.0000, -0.0000], [ 0.0000, 0.7184, 0.4604, 2.7959, 0.0308], [-0.0000, 0.0000, 0.0000, 0.0000, -0.0000]]) ``` Approximately half the neurons have been turned to zero, because we had probability p=0.5 that a neuron is set to zero!", "best_answers_score":0.8, "library_name":"pytorch", "question_url":"https:\/\/stackoverflow.com\/questions\/59003591\/how-to-implement-dropout-in-pytorch-and-where-to-apply-it", "best_answers_votes":19, "question_length":2005, "response_length":1828 }, { "question":"GUnicorn + CUDA: Cannot re-initialize CUDA in forked subprocess I am creating an inference service with torch, gunicorn and flask that should use CUDA. 
To reduce resource requirements, I use the preload option of gunicorn, so the model is shared between the worker processes. However, this leads to an issue with CUDA. The following code snipped shows a minimal reproducing example: ``` from flask import Flask, request import torch app = Flask('dummy') model = torch.rand(500) model = model.to('cuda:0') @app.route('\/', methods=['POST']) def f(): data = request.get_json() x = torch.rand((data['number'], 500)) x = x.to('cuda:0') res = x * model return { \"result\": res.sum().item() } ``` Starting the server with CUDA_VISIBLE_DEVICES=1 gunicorn -w 3 -b $HOST_IP:8080 --preload run_server:app lets the service start successfully. However, once doing the first request (curl -X POST -d '{\"number\": 1}'), the worker throws the following error: ``` [2022-06-28 09:42:00,378] ERROR in app: Exception on \/ [POST] Traceback (most recent call last): File \"\/home\/user\/.local\/lib\/python3.6\/site-packages\/flask\/app.py\", line 2447, in wsgi_app response = self.full_dispatch_request() File \"\/home\/user\/.local\/lib\/python3.6\/site-packages\/flask\/app.py\", line 1952, in full_dispatch_request rv = self.handle_user_exception(e) File \"\/home\/user\/.local\/lib\/python3.6\/site-packages\/flask\/app.py\", line 1821, in handle_user_exception reraise(exc_type, exc_value, tb) File \"\/home\/user\/.local\/lib\/python3.6\/site-packages\/flask\/_compat.py\", line 39, in reraise raise value File \"\/home\/user\/.local\/lib\/python3.6\/site-packages\/flask\/app.py\", line 1950, in full_dispatch_request rv = self.dispatch_request() File \"\/home\/user\/.local\/lib\/python3.6\/site-packages\/flask\/app.py\", line 1936, in dispatch_request return self.view_functions[rule.endpoint](**req.view_args) File \"\/home\/user\/project\/run_server.py\", line 14, in f x = x.to('cuda:0') File \"\/home\/user\/.local\/lib\/python3.6\/site-packages\/torch\/cuda\/__init__.py\", line 195, in _lazy_init \"Cannot re-initialize CUDA in forked subprocess. 
\" + msg) RuntimeError: Cannot re-initialize CUDA in forked subprocess. To use CUDA with multiprocessing, you must use the 'spawn' start method ``` I load the model in the parent process and it's accessible to each forked worker process. The problem occurs when creating a CUDA-backed tensor in the worker process. This re-initializes the CUDA context in the worker process, which fails because it was already initialized in the parent process. If we set x = data['number'] and remove x = x.to('cuda:0'), the inference succeeds. Adding torch.multiprocessing.set_start_method('spawn') or multiprocessing.set_start_method('spawn') won't change anything, probably because gunicorn will definitely use fork when being started with the --preload option. A solution could be not using the --preload option, which leads to multiple copies of the model in memory\/GPU. But this is what I am trying to avoid. Is there any possibility to overcome this issue without loading the model separately in each worker process?", "response":"Reason for the Error As correctly stated in the comments by @Newbie, the issue isn't the model itself, but the CUDA context. When new child processes are forked, the parent's memory is shared read-only with the child, but the CUDA context doesn't support this sharing, it must be copied to the child. Hence, it reports above-mentioned error. Spawn instead of Fork To resolve this issue, we have to change the start method for the child processes from fork to spawn with multiprocessing.set_start_method. 
The following simple example works fine: ``` import torch import torch.multiprocessing as mp def f(y): y[0] = 1000 if __name__ == '__main__': x = torch.zeros(1).cuda() x.share_memory_() mp.set_start_method('spawn') p = mp.Process(target=f, args=(x,), daemon=True) p.start() p.join() print(\"x =\", x.item()) ``` When running this code, a second CUDA context is initialized (this can be observed via watch -n 1 nvidia-smi in a second window), and f is executed after the context was initialized completely. After this, x = 1000.0 is printed on the console, thus, we confirmed that the tensor x was successfully shared between the processes. However, Gunicorn internally uses os.fork to start the worker processes, so multiprocessing.set_start_method has no influence on Gunicorn's behavior. Consequently, initializing the CUDA context in the root process must be avoided. Solution for Gunicorn In order to share the model among the worker processes, we thus must load the model in one single process and share it with the workers. Luckily, sending a CUDA tensor via a torch.multiprocessing.Queue to another process doesn't copy the parameters on the GPU, so we can use those queues for this problem. ``` import time import torch import torch.multiprocessing as mp def f(q): y = q.get() y[0] = 1000 def g(q): x = torch.zeros(1).cuda() x.share_memory_() q.put(x) q.put(x) while True: time.sleep(1) # this process must live as long as x is in use if __name__ == '__main__': queue = mp.Queue() pf = mp.Process(target=f, args=(queue,), daemon=True) pf.start() pg = mp.Process(target=g, args=(queue,), daemon=True) pg.start() pf.join() x = queue.get() print(\"x =\", x.item()) # Prints x = 1000.0 ``` For the Gunicorn server, we can use the same strategy: A model server process loads the model and serves it to each new worker process after its fork. In the post_fork hook the worker requests and receives the model from the model server. 
A Gunicorn configuration could look like this: ``` import logging from client import request_model from app import app logging.basicConfig(level=logging.INFO) bind = \"localhost:8080\" workers = 1 zmq_url = \"tcp:\/\/127.0.0.1:5555\" def post_fork(server, worker): app.config['MODEL'], app.config['COUNTER'] = request_model(zmq_url) ``` In the post_fork hook, we call request_model to get a model from the model server and store the model in the configuration of the Flask application. The method request_model is defined in my example in the file client.py as follows: ``` import logging import os from torch.multiprocessing.reductions import ForkingPickler import zmq def request_model(zmq_url: str): logging.info(\"Connecting\") context = zmq.Context() with context.socket(zmq.REQ) as socket: socket.connect(zmq_url) logging.info(\"Sending request\") socket.send(ForkingPickler.dumps(os.getpid())) logging.info(\"Waiting for a response\") model = ForkingPickler.loads(socket.recv()) logging.info(\"Got response from object server\") return model ``` We make use of ZeroMQ for inter-process communication here because it allows us to reference servers by name\/address and to outsource the server code into its own application. multiprocessing.Queue and multiprocessing.Process apparently don't work well with Gunicorn. multiprocessing.Queue uses the ForkingPickler internally to serialize the objects, and the module torch.multiprocessing alters it in a way that Torch data structures can be serialized appropriately and reliably. So, we use this class to serialize our model to send it to the worker processes.
The model is loaded and served in an application that is completely separate from Gunicorn and defined in server.py: ``` from argparse import ArgumentParser import logging import torch from torch.multiprocessing.reductions import ForkingPickler import zmq def load_model(): model = torch.nn.Linear(10000, 50000) model.cuda() model.share_memory() counter = torch.zeros(1).cuda() counter.share_memory_() return model, counter def share_object(obj, url): context = zmq.Context() socket = context.socket(zmq.REP) socket.bind(url) while True: logging.info(\"Waiting for requests on %s\", url) message = socket.recv() logging.info(\"Got a message from %d\", ForkingPickler.loads(message)) socket.send(ForkingPickler.dumps(obj)) if __name__ == '__main__': parser = ArgumentParser(description=\"Serve model\") parser.add_argument(\"--listen-address\", default=\"tcp:\/\/127.0.0.1:5555\") args = parser.parse_args() logging.basicConfig(level=logging.INFO) logging.info(\"Loading model\") model = load_model() share_object(model, args.listen_address) ``` For this test, we use a model of about 2GB in size to see an effect on the GPU memory allocation in nvidia-smi and a small tensor to verify that the data is actually shared among the processes. 
Our sample flask application runs the model with a random input, counts the number of requests and returns both results: ``` from flask import Flask import torch app = Flask(__name__) @app.route(\"\/\", methods=[\"POST\"]) def infer(): model: torch.nn.Linear = app.config['MODEL'] counter: torch.Tensor = app.config['COUNTER'] counter[0] += 1 # not thread-safe input_features = torch.rand(model.in_features).cuda() return { \"result\": model(input_features).sum().item(), \"counter\": counter.item() } ``` Test The example can be run as follows: ``` $ python server.py & INFO:root:Waiting for requests on tcp:\/\/127.0.0.1:5555 $ gunicorn -c config.py app:app [2023-02-01 16:45:34 +0800] [24113] [INFO] Starting gunicorn 20.1.0 [2023-02-01 16:45:34 +0800] [24113] [INFO] Listening at: http:\/\/127.0.0.1:8080 (24113) [2023-02-01 16:45:34 +0800] [24113] [INFO] Using worker: sync [2023-02-01 16:45:34 +0800] [24186] [INFO] Booting worker with pid: 24186 INFO:root:Connecting INFO:root:Sending request INFO:root:Waiting for a response INFO:root:Got response from object server ``` Using nvidia-smi, we can observe that now, two processes are using the GPU, and one of them allocates 2GB more VRAM than the other. Querying the flask application also works as expected: ``` $ curl -X POST localhost:8080 {\"counter\":1.0,\"result\":-23.956459045410156} $ curl -X POST localhost:8080 {\"counter\":2.0,\"result\":-8.161510467529297} $ curl -X POST localhost:8080 {\"counter\":3.0,\"result\":-37.823692321777344} ``` Let's introduce some chaos and terminate our only Gunicorn worker: ``` $ kill 24186 [2023-02-01 18:02:09 +0800] [24186] [INFO] Worker exiting (pid: 24186) [2023-02-01 18:02:09 +0800] [4196] [INFO] Booting worker with pid: 4196 INFO:root:Connecting INFO:root:Sending request INFO:root:Waiting for a response INFO:root:Got response from object server ``` It's restarting properly and ready to answer our requests. 
Benefit Initially, the amount of required VRAM for our service was (SizeOf(Model) + SizeOf(CUDA context)) * Num(Workers). By sharing the weights of the model, we can reduce this by SizeOf(Model) * (Num(Workers) - 1) to SizeOf(Model) + SizeOf(CUDA context) * Num(Workers). Caveats The reliability of this approach relies on the single model server process. If that process terminates, not only will newly started workers get stuck, but the models in the existing workers will become unavailable and all workers crash at once. The shared tensors\/models are only available as long as the server process is running. Even if the model server and Gunicorn workers are restarted, a short outage is certainly unavoidable. In a production environment, you thus should make sure this server process is kept alive. Additionally, sharing data among different processes can have side effects. When sharing changeable data, proper locks must be used to avoid race conditions.", "best_answers_score":0.8, "library_name":"pytorch", "question_url":"https:\/\/stackoverflow.com\/questions\/72779926\/gunicorn-cuda-cannot-re-initialize-cuda-in-forked-subprocess", "best_answers_votes":14, "question_length":3069, "response_length":8128 }, { "question":"PyTorch get all layers of model What's the easiest way to take a pytorch model and get a list of all the layers without any nn.Sequence groupings? For example, a better way to do this? ``` import pretrainedmodels def unwrap_model(model): for i in children(model): if isinstance(i, nn.Sequential): unwrap_model(i) else: l.append(i) model = pretrainedmodels.__dict__['xception'](num_classes=1000, pretrained='imagenet') l = [] unwrap_model(model) print(l) ```", "response":"You can iterate over all modules of a model (including those inside each Sequential) with the modules() method. 
Here's a simple example: ``` >>> model = nn.Sequential(nn.Linear(2, 2), nn.ReLU(), nn.Sequential(nn.Linear(2, 1), nn.Sigmoid())) >>> l = [module for module in model.modules() if not isinstance(module, nn.Sequential)] >>> l [Linear(in_features=2, out_features=2, bias=True), ReLU(), Linear(in_features=2, out_features=1, bias=True), Sigmoid()] ```", "best_answers_score":0.8, "library_name":"pytorch", "question_url":"https:\/\/stackoverflow.com\/questions\/54846905\/pytorch-get-all-layers-of-model", "best_answers_votes":37, "question_length":457, "response_length":458 }, { "question":"How does torch.distributed.barrier() work I've read all the documentations I could find about torch.distributed.barrier(), but still having trouble understanding how it's being used in this script and would really appreciate some help. So the official doc of torch.distributed.barrier says it \"Synchronizes all processes.This collective blocks processes until the whole group enters this function, if async_op is False, or if async work handle is called on wait().\" It's used in two places in the script: First place ``` if args.local_rank not in [-1, 0] and not evaluate: torch.distributed.barrier() # Make sure only the first process in distributed training process the dataset, and the others will use the cache ... (preprocesses the data and save the preprocessed data) if args.local_rank == 0 and not evaluate: torch.distributed.barrier() ``` Second place ``` if args.local_rank not in [-1, 0]: torch.distributed.barrier() # Make sure only the first process in distributed training will download model & vocab ... (loads the model and the vocabulary) if args.local_rank == 0: torch.distributed.barrier() # Make sure only the first process in distributed training will download model & vocab ``` I'm having trouble relating the comment in the code to the functionality of this function stated in the official doc. 
How does it make sure only the first process executes the code between the two calls of torch.distributed.barrier() and why it only checks whether the local rank is 0 before the second call? Thanks in advance!", "response":"First you need to understand the ranks. To be brief: in a multiprocessing context we typically assume that rank 0 is the first process or base process. The other processes are then ranked differently, e.g. 1, 2, 3, totalling four processes in total. Some operations are not necessary to be done in parallel or you just need one process to do some preprocessing or caching so that the other processes can use that data. In your example, if the first if statement is entered by the non-base processes (rank 1, 2, 3), they will block (or \"wait\") because they run into the barrier. They wait there, because barrier() blocks until all processes have reached a barrier, but the base process has not reached a barrier yet. So at this point the non-base processes (1, 2, 3) are blocked, but the base process (0) continues. The base process will do some operations (preprocess and cache data, in this case) until it reaches the second if-statement. There, the base process will run into a barrier. At this point, all processes have stopped at a barrier, meaning that all current barriers can be lifted and all processes can continue. Because the base process prepared the data, the other processes can now use that data. 
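This rank-0 pattern can also be sketched in runnable form. Note this is my own illustration, not code from the script: it uses Python's multiprocessing.Barrier in place of torch.distributed.barrier, so it runs without a distributed setup (assuming a Unix-like OS for the fork start method), but the control flow matches the two if-blocks above.

```python
import multiprocessing as mp

# Sketch of the rank-0 pattern: non-base ranks block first, the base rank
# does the shared work, then arrives at the barrier and releases everyone.
# multiprocessing.Barrier stands in for torch.distributed.barrier here.

def worker(rank, barrier, order):
    if rank != 0:
        barrier.wait()      # ranks 1..n block until rank 0 arrives
    order.append(rank)      # rank 0: build the cache; others: use it
    if rank == 0:
        barrier.wait()      # rank 0 arrives last, lifting the barrier

ctx = mp.get_context('fork')
with ctx.Manager() as manager:
    order = manager.list()
    barrier = ctx.Barrier(3)
    procs = [ctx.Process(target=worker, args=(r, barrier, order)) for r in range(3)]
    for p in procs:
        p.start()
    for p in procs:
        p.join()
    first_rank = list(order)[0]
print(first_rank)  # rank 0 always runs the guarded section first, so this prints 0
```

Rank 0 is guaranteed to record its entry first, because the other ranks only proceed once rank 0 has reached its own barrier call.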
Perhaps the most important thing to understand is: when a process encounters a barrier it will block; the position of the barrier is not important (not all processes have to enter the same if-statement, for instance); a process is blocked by a barrier until all processes have encountered a barrier, upon which those barriers are lifted for all processes", "best_answers_score":0.8, "library_name":"pytorch", "question_url":"https:\/\/stackoverflow.com\/questions\/59760328\/how-does-torch-distributed-barrier-work", "best_answers_votes":53, "question_length":1527, "response_length":1564 }, { "question":"what does dim=-1 or -2 mean in torch.sum()? let me take a 2D matrix as example: ``` mat = torch.arange(9).view(3, -1) tensor([[0, 1, 2], [3, 4, 5], [6, 7, 8]]) torch.sum(mat, dim=-2) tensor([ 9, 12, 15]) ``` I find the result of torch.sum(mat, dim=-2) is equal to torch.sum(mat, dim=0) and dim=-1 equal to dim=1. My question is how to understand the negative dimension here. What if the input matrix has 3 or more dimensions?", "response":"A tensor has multiple dimensions, which can be indexed forwards or backwards. Forward indexing uses positive integers, backward indexing uses negative integers. Example: for a 3-dimensional tensor, -1 will be the last dimension, i.e. dim=2; -2 will be dim=1; -3 will be dim=0. (In the 2-D matrix above, -1 is dim=1 and -2 is dim=0, which is why dim=-2 gives the same result as dim=0.)", "best_answers_score":0.8, "library_name":"pytorch", "question_url":"https:\/\/stackoverflow.com\/questions\/59702785\/what-does-dim-1-or-2-mean-in-torch-sum", "best_answers_votes":35, "question_length":425, "response_length":291 }, { "question":"How to make VScode launch.json for a Python module I'm researching self-supervised machine learning code. I want to debug the code with the Python debugger rather than pdb.set_trace(). This is the Python command for the Ubuntu terminal:
```bash python -m torch.distributed.launch --nproc_per_node=1 main_swav.py \\ --data_path \/dataset\/imagenet\/train \\ --epochs 400 \\ --base_lr 0.6 \\ --final_lr 0.0006 \\ --warmup_epochs 0 \\ --batch_size 8 \\ --size_crops 224 96 \\ --nmb_crops 2 6 \\ --min_scale_crops 0.14 0.05 \\ --max_scale_crops 1. 0.14 \\ --use_fp16 true \\ --freeze_prototypes_niters 5005 \\ --queue_length 380 \\ --epoch_queue_starts 15\\ --workers 10 ``` In order to debug the code with VScode, I tried to revise launch.json like below as referring stackoverflow question ```json { \"version\": \"0.2.0\", \"configurations\": [ { \"name\": \"Python: Current File\", \"type\": \"python\", \"module\": \"torch.distributed.launch --nproc_per_node=1 main_swav.py\", \"request\": \"launch\", \"console\": \"integratedTerminal\", \"args\": [\"--data_path\", \"\/dataset\/imagenet\/train\"] } ] } ``` I knew this would not work... Could you give me some advice?", "response":"Specify the module you want to run with \"module\": \"torch.distributed.launch\" You can ignore the -m flag. Put everything else under the args key. 
Note: Make sure to include --nproc_per_node and the name of the file (main_swav.py) in the list of arguments ``` { \"version\": \"0.2.0\", \"configurations\": [ { \"name\": \"Python: Current File\", \"type\": \"debugpy\", \"module\": \"torch.distributed.launch\", \"request\": \"launch\", \"console\": \"integratedTerminal\", \"args\": [ \"--nproc_per_node\", \"1\", \"main_swav.py\", \"--data_path\", \"\/dataset\/imagenet\/train\", ] } ] } ``` Read more here: https:\/\/code.visualstudio.com\/docs\/python\/debugging#_module", "best_answers_score":0.8, "library_name":"pytorch", "question_url":"https:\/\/stackoverflow.com\/questions\/67518928\/how-to-make-vscode-launch-json-for-a-python-module", "best_answers_votes":24, "question_length":1108, "response_length":621 }, { "question":"difference between Dataset and TensorDataset in pyTorch what is the difference between \"torch.utils.data.TensorDataset\" and \"torch.utils.data.Dataset\" - the docs are not clear about that and I could not find any answers on google.", "response":"The Dataset class is an abstract class that is used to define new types of (custom) datasets. Instead, TensorDataset is a ready-to-use class to represent your data as a list of tensors.
You can define your custom dataset in the following way: ```py class CustomDataset(torch.utils.data.Dataset): def __init__(self, *args, **kwargs): super().__init__(*args, **kwargs) # Your code self.instances = your_data def __getitem__(self, idx): return self.instances[idx] # In case you stored your data on a list called instances def __len__(self): return len(self.instances) ``` If you just want to create a dataset that contains tensors for input features and labels, then use the TensorDataset directly: ```py dataset = TensorDataset(input_features, labels) ``` Note that input_features and labels must match on the length of the first dimension.", "best_answers_score":0.8, "library_name":"pytorch", "question_url":"https:\/\/stackoverflow.com\/questions\/67683406\/difference-between-dataset-and-tensordataset-in-pytorch", "best_answers_votes":28, "question_length":230, "response_length":841 }, { "question":"How does the __getitem__'s idx work within PyTorch's DataLoader? I'm currently trying to use PyTorch's DataLoader to process data to feed into my deep learning model, but am facing some difficulty. The data that I need is of shape (minibatch_size=32, rows=100, columns=41). The __getitem__ code that I have within the custom Dataset class that I wrote looks something like this: ``` def __getitem__(self, idx): x = np.array(self.train.iloc[idx:100, :]) return x ``` The reason I wrote it like that is because I want the DataLoader to handle input instances of shape (100, 41) at a time, and we have 32 of these single instances. However, I noticed that contrary to my initial belief the idx argument the DataLoader passes to the function is not sequential (this is crucial because my data is time series data). 
For example, printing the values gave me something like this: ``` idx = 206000 idx = 113814 idx = 80597 idx = 3836 idx = 156187 idx = 54990 idx = 8694 idx = 190555 idx = 84418 idx = 161773 idx = 177725 idx = 178351 idx = 89217 idx = 11048 idx = 135994 idx = 15067 ``` Is this normal behavior? I'm posting this question because the data batches that are being returned are not what I initially wanted them to be. The original logic that I used to preprocess the data before using the DataLoader was: Read data in from either txt or csv file. Calculate how many batches are in the data and slice the data accordingly. For example, since one input instance is of shape (100, 41) and 32 of these form one minibatch, we usually end up with around 100 or so batches and reshape the data accordingly. One input is of shape (32, 100, 41). I'm not sure how else I should be handling the DataLoader hook methods. Any tips or advice are greatly appreciated. Thanks in advance.", "response":"What defines the idx is the sampler or batch_sampler, as you can see here (open-source projects are your friend). In this code (and comment\/docstring) you can see the difference between sampler and batch_sampler. If you look here you'll see how the index is chosen: ```py def __next__(self): index = self._next_index() # and _next_index is implemented on the base class (_BaseDataLoaderIter) def _next_index(self): return next(self._sampler_iter) # self._sampler_iter is defined in the __init__ like this: self._sampler_iter = iter(self._index_sampler) # and self._index_sampler is a property implemented like this (modified to one-liner for simplicity): self._index_sampler = self.batch_sampler if self._auto_collation else self.sampler ``` Pay attention that this is the _SingleProcessDataLoaderIter implementation; you can find the _MultiProcessingDataLoaderIter here (ofc, which one is used depends on the num_workers value, as you can see here). 
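To make this index flow concrete, here is a plain-Python sketch (my own mimic with made-up helper names, not the actual PyTorch classes) of how a sampler feeds indices to a batch sampler; the batch_sampler generator mirrors the BatchSampler.__iter__ shown below:

```python
import random

# Plain-Python mimic of the sampler -> batch_sampler index flow.

def sequential_sampler(n):
    return iter(range(n))                  # like SequentialSampler: 0, 1, 2, ...

def random_sampler(n, seed=0):
    indices = list(range(n))
    random.Random(seed).shuffle(indices)   # like RandomSampler: a permutation
    return iter(indices)

def batch_sampler(sampler, batch_size, drop_last=False):
    batch = []
    for idx in sampler:                    # pull indices one by one
        batch.append(idx)
        if len(batch) == batch_size:
            yield batch                    # a full batch of indices
            batch = []
    if batch and not drop_last:
        yield batch                        # the shorter final batch

batches = list(batch_sampler(sequential_sampler(7), 3))
print(batches)  # [[0, 1, 2], [3, 4, 5], [6]]
```

With random_sampler in place of sequential_sampler, the same batching logic yields shuffled, non-sequential indices, which is exactly the behavior observed in the question.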
Going back to the samplers, assuming your Dataset is not _DatasetKind.Iterable and that you are not providing a custom sampler, it means you are either using (dataloader.py#L212-L215): ```py if shuffle: sampler = RandomSampler(dataset) else: sampler = SequentialSampler(dataset) if batch_size is not None and batch_sampler is None: # auto_collation without custom batch_sampler batch_sampler = BatchSampler(sampler, batch_size, drop_last) ``` Let's take a look at how the default BatchSampler builds a batch: ```py def __iter__(self): batch = [] for idx in self.sampler: batch.append(idx) if len(batch) == self.batch_size: yield batch batch = [] if len(batch) > 0 and not self.drop_last: yield batch ``` Very simple: it gets indices from the sampler until the desired batch_size is reached. Now the question \"How does the __getitem__'s idx work within PyTorch's DataLoader?\" can be answered by seeing how each default sampler works. SequentialSampler (this is the full implementation -- very simple, isn't it?): ```py class SequentialSampler(Sampler): def __init__(self, data_source): self.data_source = data_source def __iter__(self): return iter(range(len(self.data_source))) def __len__(self): return len(self.data_source) ``` RandomSampler (let's see only the __iter__ implementation): ```py def __iter__(self): n = len(self.data_source) if self.replacement: return iter(torch.randint(high=n, size=(self.num_samples,), dtype=torch.int64).tolist()) return iter(torch.randperm(n).tolist()) ``` Therefore, as you did not provide any code, we can only assume: You are using shuffle=True in your DataLoader or You are using a custom sampler or Your Dataset is _DatasetKind.Iterable", "best_answers_score":0.8, "library_name":"pytorch", "question_url":"https:\/\/stackoverflow.com\/questions\/58834338\/how-does-the-getitem-s-idx-work-within-pytorchs-dataloader", "best_answers_votes":26, "question_length":1776, "response_length":2631 }, { "question":"Tensorflow 2.0 dataset and dataloader I am a 
pytorch user, and I am used to the data.dataset and data.dataloader api in pytorch. I am trying to build a same model with tensorflow 2.0, and I wonder whether there is an api that works similarly with these api in pytorch. If there is no such api, can any of you tell me how people usually do to implement the data loading part in tensorflow ? I've used tensorflow 1, but never had an experience with dataset api. I've hard coded before. I hope there is something like overriding getitem with only index as an input. Thanks much in advance.", "response":"When using the tf.data API, you will usually also make use of the map function. In PyTorch, your __getItem__ call basically fetches an element from your data structure given in __init__ and transforms it if necessary. In TF2.0, you do the same by initializing a Dataset using one of the Dataset.from_... functions (see from_generator, from_tensor_slices, from_tensors); this is essentially the __init__ part of a PyTorch Dataset. Then, you can call map to do the element-wise manipulations you would have in __getItem__. Tensorflow datasets are pretty much fancy iterators, so by design you don't access their elements using indices, but rather by traversing them. The guide on tf.data is very useful and provides a wide variety of examples.", "best_answers_score":0.8, "library_name":"pytorch", "question_url":"https:\/\/stackoverflow.com\/questions\/58505880\/tensorflow-2-0-dataset-and-dataloader", "best_answers_votes":21, "question_length":586, "response_length":741 }, { "question":"Using GPU inside docker container - CUDA Version: N\/A and torch.cuda.is_available returns False I'm trying to use GPU from inside my docker container. I'm using docker with version 19.03 on Ubuntu 18.04. 
Outside the docker container if I run nvidia-smi I get the below output: ``` +-----------------------------------------------------------------------------+ | NVIDIA-SMI 450.51.05 Driver Version: 450.51.05 CUDA Version: 11.0 | |-------------------------------+----------------------+----------------------+ | GPU Name Persistence-M| Bus-Id Disp.A | Volatile Uncorr. ECC | | Fan Temp Perf Pwr:Usage\/Cap| Memory-Usage | GPU-Util Compute M. | | | | MIG M. | |===============================+======================+======================| | 0 Tesla T4 On | 00000000:00:1E.0 Off | 0 | | N\/A 30C P8 9W \/ 70W | 0MiB \/ 15109MiB | 0% Default | | | | N\/A | +-------------------------------+----------------------+----------------------+ +-----------------------------------------------------------------------------+ | Processes: | | GPU GI CI PID Type Process name GPU Memory | | ID ID Usage | |=============================================================================| | No running processes found | +-----------------------------------------------------------------------------+ ``` If I run the same thing inside the container created from nvidia\/cuda docker image, I get the same output as above and everything is running smoothly. torch.cuda.is_available() returns True. But if I run the same nvidia-smi command inside any other docker container, it gives the following output where you can see that the CUDA Version is coming as N\/A. Inside the containers torch.cuda.is_available() also returns False. ``` +-----------------------------------------------------------------------------+ | NVIDIA-SMI 450.51.05 Driver Version: 450.51.05 CUDA Version: N\/A | |-------------------------------+----------------------+----------------------+ | GPU Name Persistence-M| Bus-Id Disp.A | Volatile Uncorr. ECC | | Fan Temp Perf Pwr:Usage\/Cap| Memory-Usage | GPU-Util Compute M. | | | | MIG M. 
| |===============================+======================+======================| | 0 Tesla T4 On | 00000000:00:1E.0 Off | 0 | | N\/A 30C P8 9W \/ 70W | 0MiB \/ 15109MiB | 0% Default | | | | N\/A | +-------------------------------+----------------------+----------------------+ +-----------------------------------------------------------------------------+ | Processes: | | GPU GI CI PID Type Process name GPU Memory | | ID ID Usage | |=============================================================================| | No running processes found | +-----------------------------------------------------------------------------+ ``` I have installed nvidia-container-toolkit using the following commands. ``` curl -s -L https:\/\/nvidia.github.io\/nvidia-docker\/gpgkey | sudo apt-key add - curl -s -L https:\/\/nvidia.github.io\/nvidia-docker\/ubuntu18.04\/nvidia-docker.list | sudo tee \/etc\/apt\/sources.list.d\/nvidia-docker.list sudo apt-get update sudo apt-get install nvidia-container-toolkit sudo systemctl restart docker ``` I started my containers using the following commands ``` sudo docker run --rm --gpus all nvidia\/cuda nvidia-smi sudo docker run -it --rm --gpus all ubuntu nvidia-smi ```", "response":"For anybody arriving here looking how to do it with docker compose, add to your service: ```yaml deploy: resources: reservations: devices: - driver: nvidia capabilities: - gpu - utility # nvidia-smi - compute # CUDA - video # NVDEC\/NVENC\/NVCUVID. For instance to use a hardware accelerated ffmpeg. Skip it if you don't need it ``` Note that, if the environment variable NVIDIA_DRIVER_CAPABILITIES is empty or unset, the container will use the default driver capabilities, which are utility and compute. 
If it's set to ALL, the container will use all the driver capabilities, but docker compose will still require you to set the capabilities in the docker-compose.yml, such as: ```yaml deploy: resources: reservations: devices: - driver: nvidia capabilities: # always required, whatever the value of NVIDIA_DRIVER_CAPABILITIES - gpu ``` You also need to use a nvidia\/cuda image. Doc: https:\/\/docs.docker.com\/compose\/gpu-support You can find a list of the driver capabilities here: https:\/\/docs.nvidia.com\/datacenter\/cloud-native\/container-toolkit\/latest\/docker-specialized.html#driver-capabilities", "best_answers_score":0.8, "library_name":"pytorch", "question_url":"https:\/\/stackoverflow.com\/questions\/63751883\/using-gpu-inside-docker-container-cuda-version-n-a-and-torch-cuda-is-availabl", "best_answers_votes":10, "question_length":3271, "response_length":1096 }, { "question":"In PyTorch, what exactly does the grad_fn attribute store and how is it used? In PyTorch, the Tensor class has a grad_fn attribute. This references the operation used to obtain the tensor: for instance, if a = b + 2, a.grad_fn will be AddBackward0. But what does \"reference\" mean exactly? Inspecting AddBackward0 using inspect.getmro(type(a.grad_fn)) will state that the only base class of AddBackward0 is object. Additionally, the source code for this class (and in fact, any other class which might be encountered in grad_fn) is nowhere to be found in the source code! All of this leads me to the following questions: What precisely is stored in grad_fn and how is it called during back-propagation? How come the objects that get stored in grad_fn do not have some sort of common super class, and why is there no source code for them on GitHub?", "response":"grad_fn is a function \"handle\", giving access to the applicable gradient function. The gradient at the given point is a coefficient for adjusting weights during back-propagation. 
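A minimal check of this (assuming a recent PyTorch install; the concrete grad_fn class names such as AddBackward0 are an implementation detail and may differ between versions):

```python
import torch

a = torch.ones(2, requires_grad=True)
b = a + 2                         # recorded op; b.grad_fn references its backward function
print(type(b.grad_fn).__name__)   # AddBackward0 on current versions
c = (b * b).sum()
c.backward()                      # autograd walks the chain of grad_fn objects back to a
print(a.grad)                     # tensor([6., 6.]), since the derivative is 2*(a + 2)
```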
\"Handle\" is a general term for an object descriptor, designed to give appropriate access to the object. For instance, when you open a file, open returns a file handle. When you instantiate a class, the constructor call returns a handle to the created instance. The handle contains references (usually memory addresses) to the data and functions for the item in question. It appears as the generic object class because it comes from the underlying implementation in another language, so it does not map exactly to the Python function type. PyTorch handles the inter-language call and return. This hand-off is part of the pre-compiled (shared-object) run-time system. Is that enough to clarify what you see?", "best_answers_score":0.8, "library_name":"pytorch", "question_url":"https:\/\/stackoverflow.com\/questions\/66402331\/in-pytorch-what-exactly-does-the-grad-fn-attribute-store-and-how-is-it-used", "best_answers_votes":8, "question_length":846, "response_length":888 }, { "question":"How to install nvidia apex on Google Colab what I did is follow the instructions on the official github site ``` !git clone https:\/\/github.com\/NVIDIA\/apex !cd apex !pip install -v --no-cache-dir .\/ ``` it gives me the error: ``` ERROR: Directory '.\/' is not installable. Neither 'setup.py' nor 'pyproject.toml' found.
Exception information: Traceback (most recent call last): File \"\/usr\/local\/lib\/python3.6\/dist-packages\/pip\/_internal\/cli\/base_command.py\", line 178, in main status = self.run(options, args) File \"\/usr\/local\/lib\/python3.6\/dist-packages\/pip\/_internal\/commands\/install.py\", line 326, in run self.name, wheel_cache File \"\/usr\/local\/lib\/python3.6\/dist-packages\/pip\/_internal\/cli\/base_command.py\", line 268, in populate_requirement_set wheel_cache=wheel_cache File \"\/usr\/local\/lib\/python3.6\/dist-packages\/pip\/_internal\/req\/constructors.py\", line 248, in install_req_from_line \"nor 'pyproject.toml' found.\" % name pip._internal.exceptions.InstallationError: Directory '.\/' is not installable. Neither 'setup.py' nor 'pyproject.toml' found. ```", "response":"(wanted to just add a comment but I don't have enough reputation...) it works for me but the cd is actually not required. Also, I needed the two global options as suggested here: https:\/\/github.com\/NVIDIA\/apex\/issues\/86 ``` %%writefile setup.sh git clone https:\/\/github.com\/NVIDIA\/apex pip install -v --no-cache-dir --global-option=\"--cpp_ext\" --global-option=\"--cuda_ext\" .\/apex ``` then ``` !sh setup.sh ```", "best_answers_score":0.8, "library_name":"pytorch", "question_url":"https:\/\/stackoverflow.com\/questions\/57284345\/how-to-install-nvidia-apex-on-google-colab", "best_answers_votes":19, "question_length":1053, "response_length":409 }, { "question":"How does pytorch broadcasting work? ``` torch.add(torch.ones(4,1), torch.randn(4)) ``` produces a Tensor with size: torch.Size([4,4]). Can someone provide a logic behind this?", "response":"PyTorch broadcasting is based on numpy broadcasting semantics which can be understood by reading numpy broadcasting rules or PyTorch broadcasting guide. Expounding the concept with an example would be intuitive to understand it better. 
So, please see the example below: ``` In [27]: t_rand Out[27]: tensor([ 0.23451, 0.34562, 0.45673]) In [28]: t_ones Out[28]: tensor([[ 1.], [ 1.], [ 1.], [ 1.]]) ``` Now for torch.add(t_rand, t_ones), visualize it like: ``` # shape of (3,) tensor([ 0.23451, 0.34562, 0.45673]) # (4, 1) | | | | | | | | | | | | tensor([[ 1.],____+ | | | ____+ | | | ____+ | | | [ 1.],______+ | | ______+ | | ______+ | | [ 1.],________+ | ________+ | ________+ | [ 1.]])_________+ __________+ __________+ ``` which should give the output with tensor of shape (4,3) as: ``` # shape of (4,3) In [33]: torch.add(t_rand, t_ones) Out[33]: tensor([[ 1.23451, 1.34562, 1.45673], [ 1.23451, 1.34562, 1.45673], [ 1.23451, 1.34562, 1.45673], [ 1.23451, 1.34562, 1.45673]]) ``` Also, note that we get exactly the same result even if we pass the arguments in a reverse order as compared to the previous one: ``` # shape of (4, 3) In [34]: torch.add(t_ones, t_rand) Out[34]: tensor([[ 1.23451, 1.34562, 1.45673], [ 1.23451, 1.34562, 1.45673], [ 1.23451, 1.34562, 1.45673], [ 1.23451, 1.34562, 1.45673]]) ``` Anyway, I prefer the former way of understanding for more straightforward intuitiveness. For pictorial understanding, I culled out more examples which are enumerated below: Example-1: Example-2:: T and F stand for True and False respectively and indicate along which dimensions we allow broadcasting (source: Theano). Example-3: Here are some shapes where the array b is broadcasted appropriately to attempt to match the shape of the array a. 
As shown above, the broadcasted b may still not match the shape of a, and so the operation a + b will fail whenever the final broadcasted shapes do not match.", "best_answers_score":0.8, "library_name":"pytorch", "question_url":"https:\/\/stackoverflow.com\/questions\/51371070\/how-does-pytorch-broadcasting-work", "best_answers_votes":46, "question_length":175, "response_length":1913 }, { "question":"Implementing dropout from scratch This code attempts to utilize a custom implementation of dropout : ``` %reset -f import torch import torch.nn as nn # import torchvision # import torchvision.transforms as transforms import torch import torch.nn as nn import torch.utils.data as data_utils import numpy as np import matplotlib.pyplot as plt import torch.nn.functional as F num_epochs = 1000 number_samples = 10 from sklearn.datasets import make_moons from matplotlib import pyplot from pandas import DataFrame # generate 2d classification dataset X, y = make_moons(n_samples=number_samples, noise=0.1) # scatter plot, dots colored by class value x_data = [a for a in enumerate(X)] x_data_train = x_data[:int(len(x_data) * .5)] x_data_train = [i[1] for i in x_data_train] x_data_train y_data = [y[i[0]] for i in x_data] y_data_train = y_data[:int(len(y_data) * .5)] y_data_train x_test = [a[1] for a in x_data[::-1][:int(len(x_data) * .5)]] y_test = [a for a in y_data[::-1][:int(len(y_data) * .5)]] x = torch.tensor(x_data_train).float() # print(x) y = torch.tensor(y_data_train).long() print(y) x_test = torch.tensor(x_test).float() print(x_test) y_test = torch.tensor(y_test).long() print(y_test) class Dropout(nn.Module): def __init__(self, p=0.5, inplace=False): # print(p) super(Dropout, self).__init__() if p < 0 or p > 1: raise ValueError(\"dropout probability has to be between 0 and 1, \" \"but got {}\".format(p)) self.p = p self.inplace = inplace def forward(self, input): print(list(input.shape)) return 
np.random.binomial([np.ones((len(input),np.array(list(input.shape))))],1-dropout_percent)[0] * (1.0\/(1-self.p)) def __repr__(self): inplace_str = ', inplace' if self.inplace else '' return self.__class__.__name__ + '(' \\ + 'p=' + str(self.p) \\ + inplace_str + ')' class MyLinear(nn.Linear): def __init__(self, in_feats, out_feats, drop_p, bias=True): super(MyLinear, self).__init__(in_feats, out_feats, bias=bias) self.custom_dropout = Dropout(p=drop_p) def forward(self, input): dropout_value = self.custom_dropout(self.weight) return F.linear(input, dropout_value, self.bias) my_train = data_utils.TensorDataset(x, y) train_loader = data_utils.DataLoader(my_train, batch_size=2, shuffle=True) my_test = data_utils.TensorDataset(x_test, y_test) test_loader = data_utils.DataLoader(my_train, batch_size=2, shuffle=True) # Device configuration device = 'cpu' print(device) # Hyper-parameters input_size = 2 hidden_size = 100 num_classes = 2 learning_rate = 0.0001 pred = [] # Fully connected neural network with one hidden layer class NeuralNet(nn.Module): def __init__(self, input_size, hidden_size, num_classes, p): super(NeuralNet, self).__init__() # self.drop_layer = nn.Dropout(p=p) # self.drop_layer = MyLinear() # self.fc1 = MyLinear(input_size, hidden_size, p) self.fc1 = MyLinear(input_size, hidden_size , p) self.relu = nn.ReLU() self.fc2 = nn.Linear(hidden_size, num_classes) def forward(self, x): # out = self.drop_layer(x) out = self.fc1(x) out = self.relu(out) out = self.fc2(out) return out model = NeuralNet(input_size, hidden_size, num_classes, p=0.9).to(device) # Loss and optimizer criterion = nn.CrossEntropyLoss() optimizer = torch.optim.Adam(model.parameters(), lr=learning_rate) # Train the model total_step = len(train_loader) for epoch in range(num_epochs): for i, (images, labels) in enumerate(train_loader): # Move tensors to the configured device images = images.reshape(-1, 2).to(device) labels = labels.to(device) # Forward pass outputs = model(images) loss = 
criterion(outputs, labels) # Backward and optimize optimizer.zero_grad() loss.backward() optimizer.step() if (epoch) % 100 == 0: print ('Epoch [{}\/{}], Step [{}\/{}], Loss: {:.4f}'.format(epoch+1, num_epochs, i+1, total_step, loss.item())) ``` Custom dropout is implemented as : ``` class Dropout(nn.Module): def __init__(self, p=0.5, inplace=False): # print(p) super(Dropout, self).__init__() if p < 0 or p > 1: raise ValueError(\"dropout probability has to be between 0 and 1, \" \"but got {}\".format(p)) self.p = p self.inplace = inplace def forward(self, input): print(list(input.shape)) return np.random.binomial([np.ones((len(input),np.array(list(input.shape))))],1-dropout_percent)[0] * (1.0\/(1-self.p)) def __repr__(self): inplace_str = ', inplace' if self.inplace else '' return self.__class__.__name__ + '(' \\ + 'p=' + str(self.p) \\ + inplace_str + ')' class MyLinear(nn.Linear): def __init__(self, in_feats, out_feats, drop_p, bias=True): super(MyLinear, self).__init__(in_feats, out_feats, bias=bias) self.custom_dropout = Dropout(p=drop_p) def forward(self, input): dropout_value = self.custom_dropout(self.weight) return F.linear(input, dropout_value, self.bias) ``` It seems I've implemented the dropout function incorrectly? : ``` np.random.binomial([np.ones((len(input),np.array(list(input.shape))))],1-dropout_percent)[0] * (1.0\/(1-self.p)) ``` How should I modify this in order to correctly utilize dropout? These posts were useful in getting to this point : Hinton's Dropout in 3 Lines of Python : https:\/\/iamtrask.github.io\/2015\/07\/28\/dropout\/ Making a Custom Dropout Function : https:\/\/discuss.pytorch.org\/t\/making-a-custom-dropout-function\/14053\/2
Inverted Dropout is how Dropout is implemented in practice in the various deep learning frameworks. What is inverted dropout? Before jumping into inverted dropout, it can be helpful to see how Dropout works for a single neuron: Since during the training phase a neuron is kept on with probability q (=1-p), during the testing phase we have to emulate the behavior of the ensemble of networks used in the training phase. To this end, the authors suggest scaling the activation function by a factor of q during the test phase in order to use the expected output produced in the training phase as the single output required in the test phase (Section 10, Multiplicative Gaussian Noise). Thus: Inverted dropout is a bit different. This approach consists in scaling the activations during the training phase, leaving the test phase untouched. The scale factor is the inverse of the keep probability, 1\/(1-p) = 1\/q, thus: Inverted dropout lets you define the model once and just change a parameter (the keep\/drop probability) to run train and test on the same model. Direct Dropout, instead, forces you to modify the network during the test phase, because if you don\u2019t multiply the output by q, the neuron will produce values that are higher than the ones expected by the successive neurons (thus the following neurons can saturate or explode): that\u2019s why Inverted Dropout is the more common implementation. References: Dropout Regularization, coursera by Andrew NG What is inverted dropout? Dropout: scaling the activation versus inverting the dropout Analysis of Dropout How to implement inverted dropout in PyTorch? 
``` class MyDropout(nn.Module): def __init__(self, p: float = 0.5): super(MyDropout, self).__init__() if p < 0 or p > 1: raise ValueError(\"dropout probability has to be between 0 and 1, \" \"but got {}\".format(p)) self.p = p def forward(self, X): if self.training: binomial = torch.distributions.binomial.Binomial(probs=1-self.p) return X * binomial.sample(X.size()) * (1.0\/(1-self.p)) return X ``` How to implement it in NumPy? ``` import numpy as np pKeep = 0.8 weights = np.ones([1, 5]) binary_value = np.random.rand(weights.shape[0], weights.shape[1]) < pKeep res = np.multiply(weights, binary_value) res \/= pKeep # this line is the inverted dropout technique print(res) ``` How to implement it in TensorFlow? ``` import tensorflow as tf tf.enable_eager_execution() weights = tf.ones(shape=[1, 5]) keep_prob = 0.8 random_tensor = keep_prob random_tensor += tf.random_uniform(weights.shape) # 0. if [keep_prob, 1.0) and 1. if [1.0, 1.0 + keep_prob) binary_tensor = tf.floor(random_tensor) ret = tf.div(weights, keep_prob) * binary_tensor print(ret) ```", "best_answers_score":0.8, "library_name":"pytorch", "question_url":"https:\/\/stackoverflow.com\/questions\/54109617\/implementing-dropout-from-scratch", "best_answers_votes":42, "question_length":5139, "response_length":2891 }, { "question":"Why can GPU do matrix multiplication faster than CPU? I've been using GPU for a while without questioning it but now I'm curious. Why can GPU do matrix multiplication much faster than CPU? Is it because of parallel processing? But I didn't write any parallel processing code. Does it do it automatically by itself? Any intuition \/ high-level explanation will be appreciated!", "response":"How do you parallelize the computations? GPUs are able to do a lot of parallel computations, a lot more than a CPU could. Look at this example of vector addition of, let's say, 1M elements. 
Using a CPU, let's say you have a maximum of 100 threads you can run (real numbers differ, but let's assume 100 for a while). In a typical multi-threading example, let's say you parallelize the additions across all threads. Here is what I mean by it: ``` c[0] = a[0] + b[0] # let's do it on thread 0 c[1] = a[1] + b[1] # let's do it on thread 1 c[101] = a[101] + b[101] # let's do it on thread 1 ``` We are able to do this because the value of c[0] doesn't depend upon any other values except a[0] and b[0]. So each addition is independent of the others. Hence, we are able to easily parallelize the task. As you see in the above example, the additions of 100 different elements take place simultaneously, saving you time. In this way it takes 1M\/100 = 10,000 steps to add all the elements. How efficiently does the GPU parallelize? Now consider today's GPUs with about 2048 threads: all threads can independently do 2048 different operations in constant time, hence the speed-up. In your case of matrix multiplication, you can parallelize the computations because GPUs have many more threads, and in each thread you have multiple blocks. So a lot of computations are parallelized, resulting in quick computations. But I didn't write any parallel processing for my GTX1080! Does it do it by itself? Almost all frameworks for machine learning use parallelized implementations of all the possible operations. This is achieved by CUDA programming, NVIDIA's API for doing parallel computations on NVIDIA GPUs. You don't write it explicitly; it's all done at a low level, and you do not even get to know. That doesn't mean that a C++ program you wrote will automatically be parallelized, just because you have a GPU. 
No, you need to write it using CUDA; only then will it be parallelized. But most programming frameworks have it, so it is not required from your end.", "best_answers_score":0.8, "library_name":"pytorch", "question_url":"https:\/\/stackoverflow.com\/questions\/51344018\/why-can-gpu-do-matrix-multiplication-faster-than-cpu", "best_answers_votes":24, "question_length":374, "response_length":2018 }, { "question":"What is tape-based autograd in Pytorch? I understand autograd is used to imply automatic differentiation. But what exactly is tape-based autograd in Pytorch, and why are there so many discussions that affirm or deny it? For example: this In pytorch, there is no traditional sense of tape and this We don\u2019t really build gradient tapes per se. But graphs. but not this Autograd is now a core torch package for automatic differentiation. It uses a tape based system for automatic differentiation. And for further reference, please compare it with GradientTape in Tensorflow.", "response":"There are different types of automatic differentiation, e.g. forward-mode, reverse-mode, hybrids (more explanation). The tape-based autograd in Pytorch simply refers to the use of reverse-mode automatic differentiation, source. Reverse-mode auto diff is simply a technique used to compute gradients efficiently, and it happens to be used by backpropagation, source. Now, in PyTorch, Autograd is the core torch package for automatic differentiation. It uses a tape-based system for automatic differentiation. In the forward phase, the autograd tape will remember all the operations it executed, and in the backward phase, it will replay the operations. Similarly, to differentiate automatically, TensorFlow also needs to remember what operations happen in what order during the forward pass. Then, during the backward pass, TensorFlow traverses this list of operations in reverse order to compute gradients. 
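A minimal sketch of this record-and-replay behavior in PyTorch (just an illustrative example, not the full machinery): 

```python
import torch

# forward phase: each operation on x is recorded by autograd
x = torch.tensor(2.0, requires_grad=True)
y = x * x + 3 * x  # recorded: mul, mul, add

# backward phase: the recorded operations are replayed in reverse
y.backward()

# dy wrt dx = 2*x + 3 = 7.0 at x = 2
print(x.grad)  # tensor(7.)
```
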
Now, TensorFlow provides the tf.GradientTape API for automatic differentiation; that is, computing the gradient of a computation with respect to some inputs, usually tf.Variables. TensorFlow records relevant operations executed inside the context of a tf.GradientTape onto a tape. TensorFlow then uses that tape to compute the gradients of a recorded computation using reverse mode differentiation. So, as we can see from the high-level viewpoint, both are doing the same operation. However, during the custom training loop, the forward pass and calculation of the loss are more explicit in TensorFlow, as it uses the tf.GradientTape API scope, whereas in PyTorch it's implicit for these operations, but gradient tracking needs to be suspended temporarily while updating the training parameters (weights and biases). For that, it uses the torch.no_grad API explicitly. In other words, TensorFlow's tf.GradientTape() is similar to PyTorch's loss.backward(). Below is a simplistic form of the above statements in code. ``` # TensorFlow [w, b] = tf_model.trainable_variables for epoch in range(epochs): with tf.GradientTape() as tape: # forward passing and loss calculations # within explicit tape scope predictions = tf_model(x) loss = squared_error(predictions, y) # compute gradients (grad) w_grad, b_grad = tape.gradient(loss, tf_model.trainable_variables) # update training variables w.assign(w - w_grad * learning_rate) b.assign(b - b_grad * learning_rate) # PyTorch [w, b] = torch_model.parameters() for epoch in range(epochs): # forward pass and loss calculation # implicit tape-based AD y_pred = torch_model(inputs) loss = squared_error(y_pred, labels) # compute gradients (grad) loss.backward() # update training variables \/ parameters with torch.no_grad(): w -= w.grad * learning_rate b -= b.grad * learning_rate w.grad.zero_() b.grad.zero_() ``` FYI, in the above, the trainable variables (w, b) are manually updated in both frameworks, but we generally use an optimizer (e.g. 
adam) to do the job. ``` # TensorFlow # .... # update training variables optimizer.apply_gradients(zip([w_grad, b_grad], model.trainable_weights)) # PyTorch # .... # update training variables \/ parameters optimizer.step() optimizer.zero_grad() ```", "best_answers_score":0.8, "library_name":"pytorch", "question_url":"https:\/\/stackoverflow.com\/questions\/64856195\/what-is-tape-based-autograd-in-pytorch", "best_answers_votes":25, "question_length":570, "response_length":3145 }, { "question":"pytorch RuntimeError: Expected object of scalar type Double but got scalar type Float I am trying to implement a custom dataset for my neural network. But got this error when running the forward function. The code is as follows. ``` import torch import torch.nn as nn import torch.nn.functional as F from torch.utils.data import Dataset, DataLoader import numpy as np class ParamData(Dataset): def __init__(self,file_name): self.data = torch.Tensor(np.loadtxt(file_name,delimiter = ',')) #first place def __len__(self): return self.data.size()[0] def __getitem__(self,i): return self.data[i] class Net(nn.Module): def __init__(self,in_size,out_size,layer_size=200): super(Net,self).__init__() self.layer = nn.Linear(in_size,layer_size) self.out_layer = nn.Linear(layer_size,out_size) def forward(self,x): x = F.relu(self.layer(x)) x = self.out_layer(x) return x datafile = 'data1.txt' net = Net(100,1) dataset = ParamData(datafile) n_samples = len(dataset) #dataset = torch.Tensor(dataset,dtype=torch.double) #second place #net.float() #thrid place net.forward(dataset[0]) #fourth place ``` In the file data1.txt is a csv formatted text file containing certain numbers, and each dataset[i] is a size 100 by 1 torch.Tensor object of dtype torch.float64. 
The error message is as follows: ``` Traceback (most recent call last): File \"Z:\\Wrong.py\", line 33, in net.forward(dataset[0]) File \"Z:\\Wrong.py\", line 23, in forward x = F.relu(self.layer(x)) File \"E:\\Python38\\lib\\site-packages\\torch\\nn\\modules\\module.py\", line 532, in __call__ result = self.forward(*input, **kwargs) File \"E:\\Python38\\lib\\site-packages\\torch\\nn\\modules\\linear.py\", line 87, in forward return F.linear(input, self.weight, self.bias) File \"E:\\Python38\\lib\\site-packages\\torch\\nn\\functional.py\", line 1372, in linear output = input.matmul(weight.t()) RuntimeError: Expected object of scalar type Double but got scalar type Float for argument #2 'mat2' in call to _th_mm ``` It seems that I should change the dtype of the numbers in dataset to torch.double. I tried things like changing the line at the first place to self.data = torch.tensor(np.loadtxt(file_name,delimiter = ','),dtype=torch.double) changing the line at the fourth place to net.forward(dataset[0].double()) uncommenting one of the two lines at the second or the thrid place I think these are the solutions I have seen from similar questions, but they either give new errors or don't do anything. What should I do? Update: So I got it working by changing the first place to ``` self.data = torch.from_numpy(np.loadtxt(file_name,delimiter = ',')).float() ``` which is weird because it is exactly the opposite of the error message. Is this a bug? I'd still like some explaining.", "response":"In short: your data has type double but your model has type float, this is not allowed in pytorch because only data with the same dtype can be fed into the model. In long: This issue is related to the default dtype of PyTorch and Numpy. I will first explain why this error happens and then suggest some solutions(but I think you will not need my solution once you understand the principle.) PyTorch has a couple of dtypes https:\/\/pytorch.org\/docs\/stable\/tensors.html. 
Two of them are closely related to the issue you had: torch.float32 (aka torch.float) torch.float64 (aka torch.double) It's important to know that the default dtype of PyTorch Tensors is torch.float32 (aka torch.float). This means when you create a tensor, its default dtype is torch.float32. Try torch.ones(1).dtype; this will print torch.float32 by default. The model's parameters are also of this dtype by default. In your case, net = Net(100,1) will create a model whose parameters have dtype torch.float32. Then we need to talk about Numpy: The default dtype of a Numpy ndarray is numpy.float64. This means when you create a numpy array, its default dtype is numpy.float64. Try np.ones(1).dtype; this will print dtype('float64') by default. In your case, your data comes from a local file loaded by np.loadtxt, so the data is first loaded as dtype('float64') (as a numpy array) and then converted to a torch tensor of dtype torch.float64 (aka torch.double). This is what happens when you convert a numpy array to a torch tensor: it will have the corresponding dtype. I think now the issue is pretty clear: you have a model whose parameters are of torch.float32 (aka torch.float) but try to run it on data of torch.float64 (aka torch.double). This is also what the error message tries to say: Expected object of scalar type Double but got scalar type Float for argument Solutions: You have already found one: convert your data to torch.float32 by calling tensor.float() You can also specify the dtype when loading the data: np.loadtxt(file_name,delimiter = ',',dtype=\"float32\")
Each example can have from 1 to 4-5 labels. At the moment, I'm training a classifier separately for each class with log_loss. As you can expect, it is taking quite some time to train 11 classifiers, and I would like to try another approach and train only 1 classifier. The idea is that the last layer of this classifier would have 11 nodes, and would output a real number per class, which would be converted to a probability by a sigmoid. The loss I want to optimize is the mean of the log_loss on all classes. Unfortunately, I'm some kind of noob with pytorch, and even by reading the source code of the losses, I can't figure out if one of the already existing losses does exactly what I want, or if I should create a new loss, and if that's the case, I don't really know how to do it. To be very specific, I want to give for each element of the batch one vector of size 11 (which contains a real number for each label; the closer to infinity, the closer this class is predicted to be 1), and 1 vector of size 11 (which contains a 1 at every true label), and be able to compute the mean log_loss on all 11 labels, and optimize my classifier based on that loss. Any help would be greatly appreciated :)", "response":"You are looking for torch.nn.BCELoss. Here's example code: ```py import torch batch_size = 2 num_classes = 11 loss_fn = torch.nn.BCELoss() outputs_before_sigmoid = torch.randn(batch_size, num_classes) sigmoid_outputs = torch.sigmoid(outputs_before_sigmoid) target_classes = torch.randint(0, 2, (batch_size, num_classes)).float() # randints in [0, 2), cast to float for BCELoss loss = loss_fn(sigmoid_outputs, target_classes) # alternatively, use BCE with logits, on outputs before sigmoid. 
loss_fn_2 = torch.nn.BCEWithLogitsLoss() loss2 = loss_fn_2(outputs_before_sigmoid, target_classes) assert loss == loss2 ```", "best_answers_score":0.8, "library_name":"pytorch", "question_url":"https:\/\/stackoverflow.com\/questions\/52855843\/multi-label-classification-in-pytorch", "best_answers_votes":27, "question_length":1318, "response_length":579 }, { "question":"Why there are different output between model.forward(input) and model(input) I'm using pytorch to build a simple model like VGG16,and I have overloaded the function forward in my model. I found everyone tends to use model(input) to get the output rather than model.forward(input), and I am interested in the difference between them. I try to input the same data, but the result is different. I'm confused. I have output the layer_weight before I input data, the weight not be changed, and I know when we using model(input) it using __call__ function, and this function will call model.forward. ``` vgg = VGG() vgg.double() for layer in vgg.modules(): if isinstance(layer,torch.nn.Linear): print(layer.weight) print(\" use model.forward(input) \") result = vgg.forward(array) for layer in vgg.modules(): if isinstance(layer,torch.nn.Linear): print(layer.weight) print(\" use model(input) \") result_2 = vgg(array) print(result) print(result_2) ``` output: ``` Variable containing:1.00000e-02 * -0.2931 0.6716 -0.3497 -2.0217 -0.0764 1.2162 1.4983 -1.2881 [torch.DoubleTensor of size 1x8] Variable containing: 1.00000e-02 * 0.5302 0.4494 -0.6866 -2.1657 -0.9504 1.0211 0.8308 -1.1665 [torch.DoubleTensor of size 1x8] ```", "response":"model.forward just calls the forward operations as you mention but __call__ does a little extra. If you dig into the code of nn.Module class you will see __call__ ultimately calls forward but internally handles the forward or backward hooks and manages some states that pytorch allows. 
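To make the hook difference concrete, here is a small sketch (assuming a trivial nn.Linear model; the calls list is just for illustration): 

```python
import torch
import torch.nn as nn

calls = []
model = nn.Linear(2, 1)
# forward hooks are run by __call__, i.e. by model(x)
model.register_forward_hook(lambda mod, inp, out: calls.append('hook'))

x = torch.randn(1, 2)
model(x)          # goes through __call__, so the hook fires
model.forward(x)  # direct method call, hook machinery is skipped
print(calls)      # the hook fired once, not twice
```
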
When calling a simple model like an MLP, it may not really be needed, but more complex models, such as those using spectral normalization, rely on hooks, and therefore you should use the model(.) form as much as possible unless you explicitly want to call model.forward. Also see Calling forward function without .forward() In this case, however, the difference may be due to a dropout layer; you should call vgg.eval() to make sure all the stochasticity in the network is turned off before comparing the outputs.", "best_answers_score":0.8, "library_name":"pytorch", "question_url":"https:\/\/stackoverflow.com\/questions\/55338756\/why-there-are-different-output-between-model-forwardinput-and-modelinput", "best_answers_votes":23, "question_length":1214, "response_length":792 }, { "question":"How to include batch size in pytorch basic example? I am new to pytorch. The following is the basic example of using nn module to train a simple one-layer model with some random data (from here) ``` import torch N, D_in, H, D_out = 64, 1000, 100, 10 x = torch.randn(N, D_in) y = torch.randn(N, D_out) model = torch.nn.Sequential( torch.nn.Linear(D_in, H), torch.nn.ReLU(), torch.nn.Linear(H, D_out), ) loss_fn = torch.nn.MSELoss(reduction='sum') optimizer = torch.optim.Adam(model.parameters(), lr=1e-4) for t in range(500): y_pred = model(x) loss = loss_fn(y_pred, y) print(t, loss.item()) optimizer.zero_grad() loss.backward() optimizer.step() ``` As far as I understand, the batch size is equal to 1 in the example, in other words, a single point (out of 64) is used to calculate gradients and update parameters. My question is: how to modify this example to train the model with the batch size greater than one?", "response":"To include batch size in PyTorch basic examples, the easiest and cleanest way is to use PyTorch torch.utils.data.DataLoader and torch.utils.data.TensorDataset. 
Dataset stores the samples and their corresponding labels, and DataLoader wraps an iterable around the Dataset to enable easy access to the samples. DataLoader will take care of creating batches for you. Building on your question, there is a complete code snippet, where we iterate over a dataset of 10000 examples for 2 epochs with a batch size of 64: ```py import torch from torch.utils.data import DataLoader, TensorDataset # Create the dataset with N_SAMPLES samples N_SAMPLES, D_in, H, D_out = 10000, 1000, 100, 10 x = torch.randn(N_SAMPLES, D_in) y = torch.randn(N_SAMPLES, D_out) # Define the batch size and the number of epochs BATCH_SIZE = 64 N_EPOCHS = 2 # Use torch.utils.data to create a DataLoader # that will take care of creating batches dataset = TensorDataset(x, y) dataloader = DataLoader(dataset, batch_size=BATCH_SIZE, shuffle=True) # Define model, loss and optimizer model = torch.nn.Sequential( torch.nn.Linear(D_in, H), torch.nn.ReLU(), torch.nn.Linear(H, D_out), ) loss_fn = torch.nn.MSELoss(reduction='sum') optimizer = torch.optim.Adam(model.parameters(), lr=1e-4) # Get the dataset size for printing (it is equal to N_SAMPLES) dataset_size = len(dataloader.dataset) # Loop over epochs for epoch in range(N_EPOCHS): print(f\"Epoch {epoch + 1}\\n-------------------------------\") # Loop over batches in an epoch using DataLoader for id_batch, (x_batch, y_batch) in enumerate(dataloader): y_batch_pred = model(x_batch) loss = loss_fn(y_batch_pred, y_batch) optimizer.zero_grad() loss.backward() optimizer.step() # Every 100 batches, print the loss for this batch # as well as the number of examples processed so far if id_batch % 100 == 0: loss, current = loss.item(), (id_batch + 1)* len(x_batch) print(f\"loss: {loss:>7f} [{current:>5d}\/{dataset_size:>5d}]\") ``` The output should be something like: ``` Epoch 1 ------------------------------- loss: 643.433716 [ 64\/10000] loss: 648.195435 [ 6464\/10000] Epoch 2 ------------------------------- loss: 613.619873 [ 
64\/10000] loss: 625.018555 [ 6464\/10000] ```", "best_answers_score":0.8, "library_name":"pytorch", "question_url":"https:\/\/stackoverflow.com\/questions\/51735001\/how-to-include-batch-size-in-pytorch-basic-example", "best_answers_votes":10, "question_length":915, "response_length":2190 }, { "question":"Reproducibility and performance in PyTorch The documentation states: Deterministic mode can have a performance impact, depending on your model. My question is, what is meant by performance here. Processing speed or model quality (i.e. minimal loss)? In other words, when setting manual seeds and making the model perform in a deterministic way, does that cause longer training time until minimal loss is found, or is that minimal loss worse than when the model is non-deterministic? For completeness' sake, I manually make the model deterministic by setting all of these properties: ``` def set_seed(seed): torch.manual_seed(seed) torch.cuda.manual_seed_all(seed) torch.backends.cudnn.deterministic = True torch.backends.cudnn.benchmark = False np.random.seed(seed) random.seed(seed) os.environ['PYTHONHASHSEED'] = str(seed) ```", "response":"Performance refers to the run time; CuDNN has several possible implementations for each operation, and when cudnn.deterministic is set to true, you're telling CuDNN that you only need the deterministic implementations (or what we believe they are). In a nutshell, when you are doing this, you should expect the same results on the CPU or the GPU on the same system when feeding the same inputs. Why would it affect the performance? CuDNN uses heuristics for the choice of the implementation. So, it actually depends on your model how CuDNN will behave; choosing it to be deterministic may affect the runtime, because there could have been, say, a faster implementation that is no longer allowed to be picked at that point. 
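A quick sanity check of the seeding part (a tiny sketch, assuming CPU tensors on a single machine): 

```python
import torch

torch.manual_seed(42)
a = torch.randn(3)

torch.manual_seed(42)  # re-seed with the same value
b = torch.randn(3)

# the same seed reproduces the same draws on the same system
print(torch.equal(a, b))  # True
```
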
Concerning your snippet, I do the exact same seeding; it has been working well (in terms of reproducibility) for 100+ DL experiments.", "best_answers_score":0.8, "library_name":"pytorch", "question_url":"https:\/\/stackoverflow.com\/questions\/56354461\/reproducibility-and-performance-in-pytorch", "best_answers_votes":13, "question_length":828, "response_length":812 }, { "question":"Generating new images with PyTorch I am studying GANs. I've completed one course which gave me an example of a program that generates images based on examples given as input. The example can be found here: ``` https:\/\/github.com\/davidsonmizael\/gan ``` So I decided to use that to generate new images based on a dataset of frontal photos of faces, but I am not having any success. Differently from the example above, the code only generates noise, while the input has actual images. Actually, I don't have any clue about what I should change to make the code go in the right direction and learn from images. I haven't changed a single value in the code provided in the example, yet it does not work. If anyone can help me understand this and point me in the right direction, it would be very helpful. Thanks in advance. 
My Discriminator: ``` class D(nn.Module): def __init__(self): super(D, self).__init__() self.main = nn.Sequential( nn.Conv2d(3, 64, 4, 2, 1, bias = False), nn.LeakyReLU(0.2, inplace = True), nn.Conv2d(64, 128, 4, 2, 1, bias = False), nn.BatchNorm2d(128), nn.LeakyReLU(0.2, inplace = True), nn.Conv2d(128, 256, 4, 2, 1, bias = False), nn.BatchNorm2d(256), nn.LeakyReLU(0.2, inplace = True), nn.Conv2d(256, 512, 4, 2, 1, bias = False), nn.BatchNorm2d(512), nn.LeakyReLU(0.2, inplace = True), nn.Conv2d(512, 1, 4, 1, 0, bias = False), nn.Sigmoid() ) def forward(self, input): return self.main(input).view(-1) ``` My Generator: ``` class G(nn.Module): def __init__(self): super(G, self).__init__() self.main = nn.Sequential( nn.ConvTranspose2d(100, 512, 4, 1, 0, bias = False), nn.BatchNorm2d(512), nn.ReLU(True), nn.ConvTranspose2d(512, 256, 4, 2, 1, bias = False), nn.BatchNorm2d(256), nn.ReLU(True), nn.ConvTranspose2d(256, 128, 4, 2, 1, bias = False), nn.BatchNorm2d(128), nn.ReLU(True), nn.ConvTranspose2d(128, 64, 4, 2, 1, bias = False), nn.BatchNorm2d(64), nn.ReLU(True), nn.ConvTranspose2d(64, 3, 4, 2, 1, bias = False), nn.Tanh() ) def forward(self, input): return self.main(input) ``` My function to start the weights: ``` def weights_init(m): classname = m.__class__.__name__ if classname.find('Conv') != -1: m.weight.data.normal_(0.0, 0.02) elif classname.find('BatchNorm') != -1: m.weight.data.normal_(1.0, 0.02) m.bias.data.fill_(0) ``` Full code can be seen here: ``` https:\/\/github.com\/davidsonmizael\/criminal-gan ``` Noise generated on epoch number 25: Input with real images:", "response":"The code from your example (https:\/\/github.com\/davidsonmizael\/gan) gave me the same noise as you show. The loss of the generator decreased way too quickly. There were a few things buggy, I'm not even sure anymore what - but I guess it's easy to figure out the differences yourself. For a comparison, also have a look at this tutorial: GANs in 50 lines of PyTorch ``` .... 
same as your code print(\"# Starting generator and descriminator...\") netG = G() netG.apply(weights_init) netD = D() netD.apply(weights_init) if torch.cuda.is_available(): netG.cuda() netD.cuda() #training the DCGANs criterion = nn.BCELoss() optimizerD = optim.Adam(netD.parameters(), lr = 0.0002, betas = (0.5, 0.999)) optimizerG = optim.Adam(netG.parameters(), lr = 0.0002, betas = (0.5, 0.999)) epochs = 25 timeElapsed = [] for epoch in range(epochs): print(\"# Starting epoch [%d\/%d]...\" % (epoch, epochs)) for i, data in enumerate(dataloader, 0): start = time.time() time.clock() #updates the weights of the discriminator nn netD.zero_grad() #trains the discriminator with a real image real, _ = data if torch.cuda.is_available(): inputs = Variable(real.cuda()).cuda() target = Variable(torch.ones(inputs.size()[0]).cuda()).cuda() else: inputs = Variable(real) target = Variable(torch.ones(inputs.size()[0])) output = netD(inputs) errD_real = criterion(output, target) errD_real.backward() #retain_graph=True #trains the discriminator with a fake image if torch.cuda.is_available(): D_noise = Variable(torch.randn(inputs.size()[0], 100, 1, 1).cuda()).cuda() target = Variable(torch.zeros(inputs.size()[0]).cuda()).cuda() else: D_noise = Variable(torch.randn(inputs.size()[0], 100, 1, 1)) target = Variable(torch.zeros(inputs.size()[0])) D_fake = netG(D_noise).detach() D_fake_ouput = netD(D_fake) errD_fake = criterion(D_fake_ouput, target) errD_fake.backward() # NOT:backpropagating the total error # errD = errD_real + errD_fake optimizerD.step() #for i, data in enumerate(dataloader, 0): #updates the weights of the generator nn netG.zero_grad() if torch.cuda.is_available(): G_noise = Variable(torch.randn(inputs.size()[0], 100, 1, 1).cuda()).cuda() target = Variable(torch.ones(inputs.size()[0]).cuda()).cuda() else: G_noise = Variable(torch.randn(inputs.size()[0], 100, 1, 1)) target = Variable(torch.ones(inputs.size()[0])) fake = netG(G_noise) G_output = netD(fake) errG = 
criterion(G_output, target) #backpropagating the error errG.backward() optimizerG.step() if i % 50 == 0: #prints the losses and save the real images and the generated images print(\"# Progress: \") print(\"[%d\/%d][%d\/%d] Loss_D: %.4f Loss_G: %.4f\" % (epoch, epochs, i, len(dataloader), errD_real.data[0], errG.data[0])) #calculates the remaining time by taking the avg seconds that every loop #and multiplying by the loops that still need to run timeElapsed.append(time.time() - start) avg_time = (sum(timeElapsed) \/ float(len(timeElapsed))) all_dtl = (epoch * len(dataloader)) + i rem_dtl = (len(dataloader) - i) + ((epochs - epoch) * len(dataloader)) remaining = (all_dtl - rem_dtl) * avg_time print(\"# Estimated remaining time: %s\" % (time.strftime(\"%H:%M:%S\", time.gmtime(remaining)))) if i % 100 == 0: vutils.save_image(real, \"%s\/real_samples.png\" % \".\/results\", normalize = True) vutils.save_image(fake.data, \"%s\/fake_samples_epoch_%03d.png\" % (\".\/results\", epoch), normalize = True) print (\"# Finished.\") ``` Result after 25 epochs (batchsize 256) on CIFAR-10:", "best_answers_score":0.8, "library_name":"pytorch", "question_url":"https:\/\/stackoverflow.com\/questions\/47275395\/generating-new-images-with-pytorch", "best_answers_votes":6, "question_length":2401, "response_length":3420 }, { "question":"img should be PIL Image. Got I'm trying to iterate through a loader to check if it's working, however the below error is given: TypeError: img should be PIL Image. Got I've tried adding both transforms.ToTensor() and transforms.ToPILImage() and it gives me an error asking for the opposite. i.e, with ToPILImage(), it will ask for tensor, and vice versa. 
``` # Imports here %matplotlib inline import matplotlib.pyplot as plt from torch import nn, optim import torch.nn.functional as F import torch from torchvision import transforms, datasets, models import seaborn as sns import pandas as pd import numpy as np data_dir = 'flowers' train_dir = data_dir + '\/train' valid_dir = data_dir + '\/valid' test_dir = data_dir + '\/test' #Creating transform for training set train_transforms = transforms.Compose( [transforms.Resize(255), transforms.CenterCrop(224), transforms.ToTensor(), transforms.RandomHorizontalFlip(), transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225])]) #Creating transform for test set test_transforms = transforms.Compose( [transforms.Resize(255), transforms.CenterCrop(224), transforms.ToTensor(), transforms.Normalize([0.485, 0.456, 0.406],[0.229, 0.224, 0.225])]) #transforming for all data train_data = datasets.ImageFolder(train_dir, transform=train_transforms) test_data = datasets.ImageFolder(test_dir, transform = test_transforms) valid_data = datasets.ImageFolder(valid_dir, transform = test_transforms) #Creating data loaders for test and training sets trainloader = torch.utils.data.DataLoader(train_data, batch_size = 32, shuffle = True) testloader = torch.utils.data.DataLoader(test_data, batch_size=32) images, labels = next(iter(trainloader)) ``` It should allow me to simply see the image once I run plt.imshow(images[0]), if its working correctly.", "response":"transforms.RandomHorizontalFlip() works on PIL.Images, not torch.Tensor. In your code above, you are applying transforms.ToTensor() prior to transforms.RandomHorizontalFlip(), which results in tensor. But, as per the official pytorch documentation here, transforms.RandomHorizontalFlip() horizontally flip the given PIL Image randomly with a given probability. 
So, just change the order of your transformations in the above code, like below: ``` train_transforms = transforms.Compose([transforms.Resize(255), transforms.CenterCrop(224), transforms.RandomHorizontalFlip(), transforms.ToTensor(), transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225])]) ```", "best_answers_score":0.8, "library_name":"pytorch", "question_url":"https:\/\/stackoverflow.com\/questions\/57079219\/img-should-be-pil-image-got-class-torch-tensor", "best_answers_votes":35, "question_length":1797, "response_length":662 }
That would work as follows: ``` from torch.sparse import FloatTensor as STensor batch_size = 4 seq_length = 6 feat_dim = 16 batch_idx = torch.LongTensor([i for i in range(batch_size) for s in range(seq_length)]) seq_idx = torch.LongTensor(list(range(seq_length))*batch_size) feat_idx = torch.LongTensor([[5, 3, 2, 11, 15, 15], [1, 4, 6, 7, 3, 3], [2, 4, 7, 8, 9, 10], [11, 12, 15, 2, 5, 7]]).view(24,) my_stack = torch.stack([batch_idx, seq_idx, feat_idx]) # indices must be nDim * nEntries my_final_array = STensor(my_stack, torch.ones(batch_size * seq_length), torch.Size([batch_size, seq_length, feat_dim])).to_dense() print(my_final_array) ``` Note: PyTorch is currently undergoing some work that will add numpy-style broadcasting and other functionality within the next two or three weeks. So it's possible there'll be better solutions available in the near future. Hope this helps you a bit.", "best_answers_score":0.8, "library_name":"pytorch", "question_url":"https:\/\/stackoverflow.com\/questions\/44461772\/creating-one-hot-vector-from-indices-given-as-a-tensor", "best_answers_votes":29, "question_length":826, "response_length":1515 }, { "question":"version `GLIBC_2.28' not found I'm trying to install PyTorch on the ARMv7 (32-bit) architecture, but PyTorch doesn\u2019t have official ARMv7 builds, so I tried this unofficial build. It installed successfully, but when I import torch I get the following error ``` >>import torch Traceback (most recent call last): File \"\", line 1, in File \"\/usr\/local\/lib\/python3.7\/site-packages\/torch\/__init__.py\", line 81, in from torch._C import * ImportError: \/lib\/arm-linux-gnueabihf\/libc.so.6: version `GLIBC_2.28' not found (required by \/usr\/local\/lib\/python3.7\/site-packages\/torch\/lib\/libtorch_python.so) ``` I tried the following ``` sudo apt-get update sudo apt-get install libc6 ``` but it seems that I have the newest version of libc6 ``` Reading package lists...
Done Building dependency tree Reading state information... Done libc6 is already the newest version (2.23-0ubuntu11). The following packages were automatically installed and are no longer required: busybox-initramfs cpio initramfs-tools initramfs-tools-bin initramfs-tools-core klibc-utils libdbusmenu-gtk4 libklibc libllvm3.8 libmircommon5 linux-base Use 'sudo apt autoremove' to remove them. 0 upgraded, 0 newly installed, 0 to remove and 10 not upgraded. ``` Here is my GLIBCXX and GLIBC versions that i have: ``` strings \/usr\/lib\/arm-linux-gnueabihf\/libstdc++.so.6 | grep GLIBC GLIBCXX_3.4 GLIBCXX_3.4.1 GLIBCXX_3.4.2 GLIBCXX_3.4.3 GLIBCXX_3.4.4 GLIBCXX_3.4.5 GLIBCXX_3.4.6 GLIBCXX_3.4.7 GLIBCXX_3.4.8 GLIBCXX_3.4.9 GLIBCXX_3.4.10 GLIBCXX_3.4.11 GLIBCXX_3.4.12 GLIBCXX_3.4.13 GLIBCXX_3.4.14 GLIBCXX_3.4.15 GLIBCXX_3.4.16 GLIBCXX_3.4.17 GLIBCXX_3.4.18 GLIBCXX_3.4.19 GLIBCXX_3.4.20 GLIBCXX_3.4.21 GLIBCXX_3.4.22 GLIBCXX_3.4.23 GLIBCXX_3.4.24 GLIBCXX_3.4.25 GLIBCXX_3.4.26 GLIBCXX_3.4.27 GLIBCXX_3.4.28 GLIBC_2.4 GLIBC_2.6 GLIBC_2.18 GLIBC_2.16 GLIBC_2.17 ``` Ldd version: ``` ldd --version ldd (Ubuntu GLIBC 2.23-0ubuntu11) 2.23 Copyright (C) 2016 Free Software Foundation, Inc. This is free software; see the source for copying conditions. There is NO warranty; not even for MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. Written by Roland McGrath and Ulrich Drepper. ``` My OS: ``` cat \/etc\/os-release NAME=\"Ubuntu\" VERSION=\"16.04.6 LTS (Xenial Xerus)\" ID=ubuntu ID_LIKE=debian PRETTY_NAME=\"Ubuntu 16.04.6 LTS\" VERSION_ID=\"16.04\" HOME_URL=\"http:\/\/www.ubuntu.com\/\" SUPPORT_URL=\"http:\/\/help.ubuntu.com\/\" BUG_REPORT_URL=\"http:\/\/bugs.launchpad.net\/ubuntu\/\" VERSION_CODENAME=xenial UBUNTU_CODENAME=xenial ``` So is it possible to install GLIBC_2.28 on my machine?", "response":"So is it possible to install GLIBC_2.28 on my machine? It is possible, but the chances of you making a mistake and rendering your system un-bootable are quite high. 
It is also very likely that doing so will break something else on your system (this is the reason distributions do not usually update the version of GLIBC from the one they originally shipped with). A much better solution is to built PyTorch targeting your system (i.e. using your \"normal\" toolchain). P.S. GLIBCXX has nothing to do with your problem, and just adds noise to your question.", "best_answers_score":0.8, "library_name":"pytorch", "question_url":"https:\/\/stackoverflow.com\/questions\/62324422\/version-glibc-2-28-not-found", "best_answers_votes":40, "question_length":2523, "response_length":554 }, { "question":"How to multiply a tensor row-wise by a vector in PyTorch? When I have a tensor m of shape [12, 10] and a vector s of scalars with shape [12], how can I multiply each row of m with the corresponding scalar in s?", "response":"You need to add a corresponding singleton dimension: ``` m * s[:, None] ``` s[:, None] has size of (12, 1) when multiplying a (12, 10) tensor by a (12, 1) tensor pytorch knows to broadcast s along the second singleton dimension and perform the \"element-wise\" product correctly.", "best_answers_score":0.8, "library_name":"pytorch", "question_url":"https:\/\/stackoverflow.com\/questions\/53987906\/how-to-multiply-a-tensor-row-wise-by-a-vector-in-pytorch", "best_answers_votes":38, "question_length":210, "response_length":277 }, { "question":"Pytorch: Image label I am working on an image classifier with 31 classes(Office dataset). There is one folder for each of the classes. I have a python script written using PyTorch that loads the dataset using datasets.ImageFolder and assigns a label to each image and then trains. 
Here is my code snippet for loading data: ``` from torchvision import datasets, transforms import torch def load_training(root_path, dir, batch_size, kwargs): transform = transforms.Compose( [transforms.Resize([256, 256]), transforms.RandomCrop(224), transforms.RandomHorizontalFlip(), transforms.ToTensor()]) data = datasets.ImageFolder(root=root_path + dir, transform=transform) train_loader = torch.utils.data.DataLoader(data, batch_size=batch_size, shuffle=True, drop_last=True, **kwargs) return train_loader ``` The code takes each folder, assigns the same label to all images in that folder. Is there any way to find which label is assigned to which image\/image folder?", "response":"The class ImageFolder has an attribute class_to_idx which is a dictionary mapping the name of the class to the index (label). So, you can access the classes with data.classes and for each class get the label with data.class_to_idx. For reference: https:\/\/github.com\/pytorch\/vision\/blob\/master\/torchvision\/datasets\/folder.py", "best_answers_score":0.8, "library_name":"pytorch", "question_url":"https:\/\/stackoverflow.com\/questions\/51906144\/pytorch-image-label", "best_answers_votes":37, "question_length":956, "response_length":323 }, { "question":"PyTorch: Testing with torchvision.datasets.ImageFolder and DataLoader I'm a newbie trying to make this PyTorch CNN work with the Cats&Dogs dataset from kaggle. As there are no targets for the test images, I manually classified some of the test images and put the class in the filename, to be able to test (maybe should have just used some of the train images). I used the torchvision.datasets.ImageFolder class to load the train and test images. The training seems to work. But what do I need to do to make the test-routine work? I don't know, how to connect my test_data_loader with the test loop at the bottom, via test_x and test_y. The Code is based on this MNIST example CNN. 
There, something like this is used right after the loaders are created. But I failed to rewrite it for my dataset: ``` test_x = Variable(torch.unsqueeze(test_data.test_data, dim=1), volatile=True).type(torch.FloatTensor)[:2000]\/255. # shape from (2000, 28, 28) to (2000, 1, 28, 28), value in range(0,1) test_y = test_data.test_labels[:2000] ``` The Code: ``` import os import numpy as np import torch import torch.nn as nn import torch.nn.functional as F from torch.autograd import Variable import torch.utils.data as data import torchvision from torchvision import transforms EPOCHS = 2 BATCH_SIZE = 10 LEARNING_RATE = 0.003 TRAIN_DATA_PATH = \".\/train_cl\/\" TEST_DATA_PATH = \".\/test_named_cl\/\" TRANSFORM_IMG = transforms.Compose([ transforms.Resize(256), transforms.CenterCrop(256), transforms.ToTensor(), transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225] ) ]) train_data = torchvision.datasets.ImageFolder(root=TRAIN_DATA_PATH, transform=TRANSFORM_IMG) train_data_loader = data.DataLoader(train_data, batch_size=BATCH_SIZE, shuffle=True, num_workers=4) test_data = torchvision.datasets.ImageFolder(root=TEST_DATA_PATH, transform=TRANSFORM_IMG) test_data_loader = data.DataLoader(test_data, batch_size=BATCH_SIZE, shuffle=True, num_workers=4) class CNN(nn.Module): # omitted... 
if __name__ == '__main__': print(\"Number of train samples: \", len(train_data)) print(\"Number of test samples: \", len(test_data)) print(\"Detected Classes are: \", train_data.class_to_idx) # classes are detected by folder structure model = CNN() optimizer = torch.optim.Adam(model.parameters(), lr=LEARNING_RATE) loss_func = nn.CrossEntropyLoss() # Training and Testing for epoch in range(EPOCHS): for step, (x, y) in enumerate(train_data_loader): b_x = Variable(x) # batch x (image) b_y = Variable(y) # batch y (target) output = model(b_x)[0] loss = loss_func(output, b_y) optimizer.zero_grad() loss.backward() optimizer.step() # Test -> this is where I have no clue if step % 50 == 0: test_x = Variable(test_data_loader) test_output, last_layer = model(test_x) pred_y = torch.max(test_output, 1)[1].data.squeeze() accuracy = sum(pred_y == test_y) \/ float(test_y.size(0)) print('Epoch: ', epoch, '| train loss: %.4f' % loss.data[0], '| test accuracy: %.2f' % accuracy) ```", "response":"Looking at the data from Kaggle and your code, it seems that there are problems in your data loading, both train and test set. First of all, the data should be in a different folder per label for the default PyTorch ImageFolder to load it correctly. In your case, since all the training data is in the same folder, PyTorch is loading it as one class and hence learning seems to be working. You can correct this by using a folder structure like - train\/dog, - train\/cat, - test\/dog, - test\/cat and then passing the train and the test folder to the train and test ImageFolder respectively. The training code seems fine, just change the folder structure and you should be good. 
Take a look at the official documentation of ImageFolder, which has a similar example.", "best_answers_score":0.8, "library_name":"pytorch", "question_url":"https:\/\/stackoverflow.com\/questions\/49073799\/pytorch-testing-with-torchvision-datasets-imagefolder-and-dataloader", "best_answers_votes":20, "question_length":2954, "response_length":760 }, { "question":"Calling the forward method in PyTorch vs. calling the model instance A lot of the PyTorch tutorials I've been viewing do something like this. Define model: ``` class Network(nn.Module): def __init__(): super().__init__() self.conv1 = .. ... def forward(x) ... ... ``` Once the Network has been instantiated (net = Network()), the people in the tutorials write net(input_data) instead of net.forward(input_data). I tried net.forward() and it gives the same results as net(). Why is this a common practice, and also why does this work?", "response":"You should avoid calling Module.forward. The difference is that all the hooks are dispatched in the __call__ function (see this), so if you call .forward and have hooks in your model, the hooks won\u2019t have any effect. In short, when you call Module.forward, PyTorch hooks won't have any effect. A detailed answer can be found in this post
You need to turn them off during model evaluation, and .eval() will do it for you. In addition, the common practice for evaluating\/validation is using torch.no_grad() in pair with model.eval() to turn off gradients computation: ```py # evaluate model: model.eval() with torch.no_grad(): ... out_data = model(data) ... ``` BUT, don't forget to turn back to training mode after eval step: ```py # training step ... model.train() ... ```", "best_answers_score":0.7941, "library_name":"pytorch", "question_url":"https:\/\/stackoverflow.com\/questions\/60018578\/what-does-model-eval-do-in-pytorch", "best_answers_votes":405, "question_length":205, "response_length":636 }, { "question":"Pytorch RuntimeError: The size of tensor a (4) must match the size of tensor b (3) at non-singleton dimension 0 I use code from here to train a model to predict printed style number from 0 to 9: ``` idx_to_class = {0: \"0\", 1: \"1\", 2: \"2\", 3: \"3\", 4: \"4\", 5: \"5\", 6: \"6\", 7:\"7\", 8: \"8\", 9:\"9\"} def predict(model, test_image_name): transform = image_transforms['test'] test_image = Image.open(test_image_name) plt.imshow(test_image) test_image_tensor = transform(test_image) if torch.cuda.is_available(): test_image_tensor = test_image_tensor.view(1, 3, 224, 224).cuda() else: test_image_tensor = test_image_tensor.view(1, 3, 224, 224) with torch.no_grad(): model.eval() # Model outputs log probabilities out = model(test_image_tensor) ps = torch.exp(out) topk, topclass = ps.topk(1, dim=1) # print(topclass.cpu().numpy()[0][0]) print(\"Image class: \", idx_to_class[topclass.cpu().numpy()[0][0]]) predict(model, \"path_of_test_image\") ``` But I get an error when try to use predict: ``` Traceback (most recent call last): File \"\", line 26, in predict(model, \"\/home\/x\/\u6587\u6863\/Deep_Learning\/pytorch\/MNIST\/test\/2\/QQ\u622a\u56fe20191022093955.png\") File \"\", line 9, in predict test_image_tensor = transform(test_image) File 
\"\/home\/x\/.local\/lib\/python3.6\/site-packages\/torchvision\/transforms\/transforms.py\", line 61, in __call__ img = t(img) File \"\/home\/x\/.local\/lib\/python3.6\/site-packages\/torchvision\/transforms\/transforms.py\", line 166, in __call__ return F.normalize(tensor, self.mean, self.std, self.inplace) File \"\/home\/x\/.local\/lib\/python3.6\/site-packages\/torchvision\/transforms\/functional.py\", line 217, in normalize tensor.sub_(mean[:, None, None]).div_(std[:, None, None]) RuntimeError: The size of tensor a (4) must match the size of tensor b (3) at non-singleton dimension 0 ``` How could I fix it? Thanks.", "response":"I suspect your test_image has an additional alpha channel per pixel, thus it has 4 channels instead of only three. Try: ```py test_image = Image.open(test_image_name).convert('RGB') ```", "best_answers_score":0.7929, "library_name":"pytorch", "question_url":"https:\/\/stackoverflow.com\/questions\/58496858\/pytorch-runtimeerror-the-size-of-tensor-a-4-must-match-the-size-of-tensor-b", "best_answers_votes":119, "question_length":1795, "response_length":185 }, { "question":"Impact of using data shuffling in Pytorch dataloader I implemented an image classification network to classify a dataset of 100 classes by using Alexnet as a pretrained model and changing the final output layers. I noticed when I was loading my data like ``` trainloader = torch.utils.data.DataLoader(train_data, batch_size=32, shuffle=False) ``` , I was getting accuracy on validation dataset around 2-3 % for around 10 epochs but when I just changed shuffle=True and retrained the network, the accuracy jumped to 70% in the first epoch itself. I was wondering if it happened because in the first case the network was being shown one example after the other continuously for just one class for few instances resulting in network making poor generalizations during training or is there some other reason behind it? But, I did not expect that to have such a drastic impact. 
P.S: All the code and parameters were exactly the same for both the cases except changing the shuffle option.", "response":"Yes it totally can affect the result! Shuffling the order of the data that we use to fit the classifier is so important, as the batches between epochs do not look alike. Checking the Data Loader Documentation it says: \"shuffle (bool, optional) \u2013 set to True to have the data reshuffled at every epoch\" In any case, it will make the model more robust and avoid over\/underfitting. In your case this heavy increase of accuracy (from the lack of awareness of the dataset) probably is due to how the dataset is \"organised\" as maybe, as an example, each category goes to a different batch, and in every epoch, a batch contains the same category, which derives to a very bad accuracy when you are testing.", "best_answers_score":0.7905, "library_name":"pytorch", "question_url":"https:\/\/stackoverflow.com\/questions\/54354465\/impact-of-using-data-shuffling-in-pytorch-dataloader", "best_answers_votes":26, "question_length":982, "response_length":698 }, { "question":"Why do we \"pack\" the sequences in PyTorch? I was trying to replicate How to use packing for variable-length sequence inputs for rnn but I guess I first need to understand why we need to \"pack\" the sequence. I understand why we \"pad\" them but why is \"packing\" (via pack_padded_sequence) necessary?", "response":"Here are some visual explanations1 that might help to develop better intuition for the functionality of pack_padded_sequence(). TL;DR: It is performed primarily to save compute. Consequently, the time required for training neural network models is also (drastically) reduced, especially when carried out on very large (a.k.a. web-scale) datasets. Let's assume we have 6 sequences (of variable lengths) in total. You can also consider this number 6 as the batch_size hyperparameter. (The batch_size will vary depending on the length of the sequence (cf. 
Fig.2 below)) Now, we want to pass these sequences to some recurrent neural network architecture(s). To do so, we have to pad all of the sequences (typically with 0s) in our batch to the maximum sequence length in our batch (max(sequence_lengths)), which in the below figure is 9. So, the data preparation work should be complete by now, right? Not really.. Because there is still one pressing problem, mainly in terms of how much compute do we have to do when compared to the actually required computations. For the sake of understanding, let's also assume that we will matrix multiply the above padded_batch_of_sequences of shape (6, 9) with a weight matrix W of shape (9, 3). Thus, we will have to perform 6x9 = 54 multiplication and 6x8 = 48 addition (nrows x (n-1)_cols) operations, only to throw away most of the computed results since they would be 0s (where we have pads). The actual required compute in this case is as follows: ``` 9-mult 8-add 8-mult 7-add 6-mult 5-add 4-mult 3-add 3-mult 2-add 2-mult 1-add --------------- 32-mult 26-add ------------------------------ #savings: 22-mult & 22-add ops (32-54) (26-48) ``` That's a LOT more savings even for this very simple (toy) example. You can now imagine how much compute (eventually: cost, energy, time, carbon emission etc.) can be saved using pack_padded_sequence() for large tensors with millions of entries, and million+ systems all over the world doing that, again and again. The functionality of pack_padded_sequence() can be understood from the figure below, with the help of the used color-coding: As a result of using pack_padded_sequence(), we will get a tuple of tensors containing (i) the flattened (along axis-1, in the above figure) sequences , (ii) the corresponding batch sizes, tensor([6,6,5,4,3,3,2,2,1]) for the above example. The data tensor (i.e. the flattened sequences) could then be passed to objective functions such as CrossEntropy for loss calculations. 
1 image credits to @sgrvinod", "best_answers_score":0.786, "library_name":"pytorch", "question_url":"https:\/\/stackoverflow.com\/questions\/51030782\/why-do-we-pack-the-sequences-in-pytorch", "best_answers_votes":167, "question_length":296, "response_length":2527 }, { "question":"What does model.train() do in PyTorch? Does it call forward() in nn.Module? I thought when we call the model, forward method is being used. Why do we need to specify train()?", "response":"model.train() tells your model that you are training the model. This helps inform layers such as Dropout and BatchNorm, which are designed to behave differently during training and evaluation. For instance, in training mode, BatchNorm updates a moving average on each new batch; whereas, for evaluation mode, these updates are frozen. More details: model.train() sets the mode to train (see source code). You can call either model.eval() or model.train(mode=False) to tell that you are testing. It is somewhat intuitive to expect train function to train model but it does not do that. It just sets the mode.", "best_answers_score":0.7854, "library_name":"pytorch", "question_url":"https:\/\/stackoverflow.com\/questions\/51433378\/what-does-model-train-do-in-pytorch", "best_answers_votes":333, "question_length":174, "response_length":607 }, { "question":"Torchtext 0.7 shows Field is being deprecated. What is the alternative? Looks like the previous paradigm of declaring Fields, Examples and using BucketIterator is deprecated and will move to legacy in 0.8. However, I don't seem to be able to find an example of the new paradigm for custom datasets (as in, not the ones included in torch.datasets) that doesn't use Field. Can anyone point me at an up-to-date example? Reference for deprecation: https:\/\/github.com\/pytorch\/text\/releases", "response":"It took me a little while to find the solution myself. 
The new paradigm is like so for prebuilt datasets: ``` from torchtext.experimental.datasets import AG_NEWS train, test = AG_NEWS(ngrams=3) ``` or like so for custom built datasets: ``` from torch.utils.data import DataLoader def collate_fn(batch): texts, labels = [], [] for label, txt in batch: texts.append(txt) labels.append(label) return texts, labels dataloader = DataLoader(train, batch_size=8, collate_fn=collate_fn) for idx, (texts, labels) in enumerate(dataloader): print(idx, texts, labels) ``` I've copied the examples from the Source", "best_answers_score":0.7849, "library_name":"pytorch", "question_url":"https:\/\/stackoverflow.com\/questions\/63539809\/torchtext-0-7-shows-field-is-being-deprecated-what-is-the-alternative", "best_answers_votes":11, "question_length":484, "response_length":600 }, { "question":"No module named \"Torch\" I successfully installed pytorch via conda: ```bash conda install pytorch-cpu torchvision-cpu -c pytorch ``` I also successfully installed pytorch via pip: ```bash pip3 install https:\/\/download.pytorch.org\/whl\/cpu\/torch-1.0.1-cp36-cp36m-win_amd64.whl pip3 install torchvision ``` But, it only works in a jupyter notebook. Whenever I try to execute a script from the console, I get the error message: ``` No module named \"torch\" ```", "response":"Try to install PyTorch using pip: First create a Conda environment using: ``` conda create -n env_pytorch python=3.6 ``` Activate the environment using: ``` conda activate env_pytorch ``` Now install PyTorch using pip: ``` pip install torchvision ``` Note: This will install both torch and torchvision. 
Now go to Python shell and import using the command: ``` import torch import torchvision ```", "best_answers_score":0.7841, "library_name":"pytorch", "question_url":"https:\/\/stackoverflow.com\/questions\/54843067\/no-module-named-torch", "best_answers_votes":115, "question_length":455, "response_length":395 }, { "question":"Printing all the contents of a tensor I came across this PyTorch tutorial (in neural_networks_tutorial.py) where they construct a simple neural network and run an inference. I would like to print the contents of the entire input tensor for debugging purposes. What I get when I try to print the tensor is something like this and not the entire tensor: I saw a similar link for numpy but was not sure about what would work for PyTorch. I can convert it to numpy and may be view it, but would like to avoid the extra overhead. Is there a way for me to print the entire tensor?", "response":"To avoid truncation and to control how much of the tensor data is printed use the same API as numpy's numpy.set_printoptions(threshold=10_000). Example: ``` x = torch.rand(1000, 2, 2) print(x) # prints the truncated tensor torch.set_printoptions(threshold=10_000) print(x) # prints the whole tensor ``` If your tensor is very large, adjust the threshold value to a higher number. 
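A minimal sketch of the threshold behaviour (assuming PyTorch's default threshold of 1000; the variable names are just for illustration): ```
import torch

x = torch.arange(2000.)  # more elements than the default threshold of 1000

torch.set_printoptions(threshold=10_000)  # threshold above numel: nothing is summarized
full = str(x)

torch.set_printoptions(threshold=1000)    # back to the default: large tensors are summarized
truncated = str(x)

print('...' in full)        # expect False: every element is printed
print('...' in truncated)   # expect True: the middle is elided
```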
Another option is: ``` torch.set_printoptions(profile=\"full\") print(x) # prints the whole tensor torch.set_printoptions(profile=\"default\") # reset print(x) # prints the truncated tensor ``` All the available set_printoptions arguments are documented here.", "best_answers_score":0.7798, "library_name":"pytorch", "question_url":"https:\/\/stackoverflow.com\/questions\/52673610\/printing-all-the-contents-of-a-tensor", "best_answers_votes":76, "question_length":574, "response_length":635 }, { "question":"Pytorch: RuntimeError: result type Float can't be cast to the desired output type Long I have a model which looks as follows: ```py IMG_WIDTH = IMG_HEIGHT = 224 class AlexNet(nn.Module): def __init__(self, output_dim): super(AlexNet, self).__init__() self._to_linear = None self.x = torch.randn(3, IMG_WIDTH, IMG_HEIGHT).view(-1, 3, IMG_WIDTH, IMG_HEIGHT) self.features = nn.Sequential( nn.Conv2d(3, 64, 3, 2, 1), # in_channels, out_channels, kernel_size, stride, padding nn.MaxPool2d(2), nn.ReLU(inplace=True), nn.Conv2d(64, 192, 3, padding=1), nn.MaxPool2d(2), nn.ReLU(inplace=True), nn.Conv2d(192, 384, 3, padding=1), nn.MaxPool2d(2), nn.ReLU(inplace=True), nn.Conv2d(384, 256, 3, padding=1), nn.MaxPool2d(2), nn.ReLU(inplace=True), nn.Conv2d(256, 512, 3, padding=1), nn.ReLU(inplace=True), nn.Conv2d(512, 256, 3, padding=1), nn.MaxPool2d(2), nn.ReLU(inplace=True) ) self.conv(self.x) self.classifier = nn.Sequential( nn.Dropout(.5), nn.Linear(self._to_linear, 4096), nn.ReLU(inplace=True), nn.Dropout(.5), nn.Linear(4096, 4096), nn.ReLU(inplace=True), nn.Linear(4096, output_dim), ) def conv(self, x): x = self.features(x) if self._to_linear is None: self._to_linear = x.shape[1] * x.shape[2] * x.shape[3] return x def forward(self, x): x = self.conv(x) h = x.view(x.shape[0], -1) x = self.classifier(h) return x, h ``` Here is my optimizer and loss functions: ```py optimizer = torch.optim.Adam(model.parameters()) criterion = nn.BCEWithLogitsLoss().to(device) ``` Here is my train 
and evaluate functions: ```py def train(model, iterator, optimizer, criterion, device): epoch_loss, epoch_acc = 0, 0 model.train() for (x, y) in iterator: # features and labels to the device x = x.to(device) y = y.to(device).long() # Zero the gradients optimizer.zero_grad() y_pred, _ = model(x) # Calculate the loss and accuracy loss = criterion(y_pred.squeeze(), y) acc = binary_accuracy(y_pred, y) # Backward propagate loss.backward() # Update the weights optimizer.step() epoch_loss +=loss.item() epoch_acc += acc.item() return epoch_loss\/len(iterator), epoch_acc\/len(iterator) def evaluate(model, iterator, criterion, device): epoch_loss, epoch_acc = 0, 0 model.eval() with torch.no_grad(): for (x, y) in iterator: x = x.to(device) y = y.to(device).long() y_pred, _ = model(x) loss = criterion(y_pred, y) acc = binary_accuracy(y_pred, y) epoch_loss += loss.item() epoch_acc += acc.item() return epoch_loss\/len(iterator), epoch_acc\/len(iterator) ``` This is the error that I'm getting: ``` RuntimeError: result type Float can't be cast to the desired output type Long ``` What may be possibly my problem because I have tried to convert my labels to long tensors as follows: ``` y = y.to(device).long() ``` But it seems not to work.", "response":"I was getting the same error doing this: ``` loss_fn(output, target) ``` where the output was Tensor torch.float32 and target was Tensor torch.int64. What solved this problem was calling the loss function like this: ``` loss_fn(output, target.float()) ```", "best_answers_score":0.7794, "library_name":"pytorch", "question_url":"https:\/\/stackoverflow.com\/questions\/70110429\/pytorch-runtimeerror-result-type-float-cant-be-cast-to-the-desired-output-typ", "best_answers_votes":45, "question_length":2722, "response_length":255 }, { "question":"What does .contiguous() do in PyTorch? 
What does x.contiguous() do for a tensor x?", "response":"There are a few operations on Tensors in PyTorch that do not change the contents of a tensor, but change the way the data is organized. These operations include: narrow(), view(), expand() and transpose() For example: when you call transpose(), PyTorch doesn't generate a new tensor with a new layout, it just modifies meta information in the Tensor object so that the offset and stride describe the desired new shape. In this example, the transposed tensor and original tensor share the same memory: ```py x = torch.randn(3,2) y = torch.transpose(x, 0, 1) x[0, 0] = 42 print(y[0,0]) # prints 42 ``` This is where the concept of contiguous comes in. In the example above, x is contiguous but y is not because its memory layout is different to that of a tensor of same shape made from scratch. Note that the word \"contiguous\" is a bit misleading because it's not that the content of the tensor is spread out around disconnected blocks of memory. Here bytes are still allocated in one block of memory but the order of the elements is different! When you call contiguous(), it actually makes a copy of the tensor such that the order of its elements in memory is the same as if it had been created from scratch with the same data. Normally you don't need to worry about this. You're generally safe to assume everything will work, and wait until you get a RuntimeError: input is not contiguous where PyTorch expects a contiguous tensor to add a call to contiguous().", "best_answers_score":0.7792, "library_name":"pytorch", "question_url":"https:\/\/stackoverflow.com\/questions\/48915810\/what-does-contiguous-do-in-pytorch", "best_answers_votes":379, "question_length":82, "response_length":1461 }, { "question":"PyTorch Optimizer: AdamW and Adam with weight decay Is there any difference between torch.optim.Adam(weight_decay=0.01) and torch.optim.AdamW(weight_decay=0.01)? 
Link to the docs: torch.optim.", "response":"Yes, Adam and AdamW weight decay are different. Loshchilov and Hutter pointed out in their paper (Decoupled Weight Decay Regularization) that the way weight decay is implemented in Adam in every library seems to be wrong, and proposed a simple way (which they call AdamW) to fix it. In Adam, the weight decay is usually implemented by adding wd*w (wd is weight decay here) to the gradients (Ist case), rather than actually subtracting from weights (IInd case). ``` # Ist: Adam weight decay implementation (L2 regularization) final_loss = loss + wd * all_weights.pow(2).sum() \/ 2 # IInd: equivalent to this in SGD w = w - lr * w.grad - lr * wd * w ``` These methods are the same for vanilla SGD, but as soon as we add momentum, or use a more sophisticated optimizer like Adam, L2 regularization (first equation) and weight decay (second equation) become different. AdamW follows the second equation for weight decay. In Adam weight_decay (float, optional) \u2013 weight decay (L2 penalty) (default: 0) In AdamW weight_decay (float, optional) \u2013 weight decay coefficient (default: 1e-2) Read more on the fastai blog.", "best_answers_score":0.7783, "library_name":"pytorch", "question_url":"https:\/\/stackoverflow.com\/questions\/64621585\/pytorch-optimizer-adamw-and-adam-with-weight-decay", "best_answers_votes":66, "question_length":192, "response_length":1089 }, { "question":"KL Divergence for two probability distributions in PyTorch I have two probability distributions. How should I find the KL-divergence between them in PyTorch? The regular cross entropy only accepts integer labels.", "response":"Yes, PyTorch has a method named kl_div under torch.nn.functional to directly compute KL-divergence between tensors. Suppose you have tensors a and b of the same shape.
You can use the following code: ``` import torch.nn.functional as F out = F.kl_div(a.log(), b) ``` Note that F.kl_div expects its first argument in log-space, so pass a.log() if a holds probabilities. For more details, see the above method documentation.", "best_answers_score":0.7706, "library_name":"pytorch", "question_url":"https:\/\/stackoverflow.com\/questions\/49886369\/kl-divergence-for-two-probability-distributions-in-pytorch", "best_answers_votes":24, "question_length":212, "response_length":309 }, { "question":"Adam optimizer with warmup on PyTorch In the paper Attention is all you need, under section 5.3, the authors suggested to increase the learning rate linearly and then decrease proportionally to the inverse square root of steps. How do we implement this in PyTorch with Adam optimizer? Preferably without additional packages.", "response":"PyTorch provides learning-rate-schedulers for implementing various methods of adjusting the learning rate during the training process. Some simple LR-schedulers are already implemented and can be found here: https:\/\/pytorch.org\/docs\/stable\/optim.html#how-to-adjust-learning-rate In your special case you can - just like the other LR-schedulers do - subclass _LRScheduler for implementing a variable schedule based on the number of epochs. For a bare-bones method you only need to implement __init__() and get_lr() methods. Just note that many of these schedulers expect you to call .step() once per epoch. But you can also update it more frequently or even pass a custom argument just like in the cosine-annealing LR-scheduler: https:\/\/pytorch.org\/docs\/stable\/_modules\/torch\/optim\/lr_scheduler.html#CosineAnnealingLR", "best_answers_score":0.7679, "library_name":"pytorch", "question_url":"https:\/\/stackoverflow.com\/questions\/65343377\/adam-optimizer-with-warmup-on-pytorch", "best_answers_votes":17, "question_length":324, "response_length":820 }, { "question":"\"RuntimeError: Expected 4-dimensional input for 4-dimensional weight 32 3 3, but got 3-dimensional input of size [3, 224, 224] instead\"?
I am trying to use a pre-trained model. Here's where the problem occurs. Isn't the model supposed to take in a simple colored image? Why is it expecting a 4-dimensional input? ``` RuntimeError Traceback (most recent call last) in () 33 34 # Forward pass the data through the model ---> 35 output = model(data) 36 init_pred = output.max(1, keepdim=True)[1] # get the index of the max log-probability 37 5 frames \/usr\/local\/lib\/python3.6\/dist-packages\/torch\/nn\/modules\/conv.py in forward(self, input) 336 _pair(0), self.dilation, self.groups) 337 return F.conv2d(input, self.weight, self.bias, self.stride, --> 338 self.padding, self.dilation, self.groups) 339 340 RuntimeError: Expected 4-dimensional input for 4-dimensional weight 32 3 3, but got 3-dimensional input of size [3, 224, 224] instead ``` Where ``` inception = models.inception_v3() model = inception.to(device) ```", "response":"From the PyTorch documentation on convolutional layers, Conv2d layers expect input with the shape ``` (n_samples, channels, height, width) # e.g., (1000, 1, 224, 224) ``` Passing grayscale images in their usual format (224, 224) won't work. To get the right shape, you will need to add a channel dimension. You can do it as follows: ``` x = np.expand_dims(x, 1) # if numpy array tensor = tensor.unsqueeze(1) # if torch tensor ``` The unsqueeze() method adds a dimension at the specified index. The result would have shape: ``` (1000, 1, 224, 224) ```", "best_answers_score":0.7676, "library_name":"pytorch", "question_url":"https:\/\/stackoverflow.com\/questions\/57237381\/runtimeerror-expected-4-dimensional-input-for-4-dimensional-weight-32-3-3-but", "best_answers_votes":35, "question_length":1014, "response_length":551 }, { "question":"What is the relationship between PyTorch and Torch?
There are two PyTorch repositories: https:\/\/github.com\/hughperkins\/pytorch https:\/\/github.com\/pytorch\/pytorch The first clearly requires Torch and lua and is a wrapper, but the second doesn't make any reference to the Torch project except with its name. How is it related to the Lua Torch?", "response":"Here is a short comparison of pytorch and torch. Torch: A Tensor library like numpy; unlike numpy, it has strong GPU support. Lua is a wrapper for Torch (Yes! you need to have a good understanding of Lua), and for that you will need the LuaRocks package manager. PyTorch: No need for the LuaRocks package manager, no need to write code in Lua. And because we are using Python, we can develop Deep Learning models with utmost flexibility. We can also exploit major Python packages like scipy, numpy, matplotlib and Cython with PyTorch's own autograd. There is a detailed discussion on this on the pytorch forum. Adding to that, both PyTorch and Torch use THNN. Torch provides lua wrappers to the THNN library while PyTorch provides Python wrappers for the same. PyTorch combines recurrent nets, weight sharing and efficient memory usage with the flexibility of interfacing with C, and the current speed of Torch. For more insights, have a look at this discussion here.", "best_answers_score":0.7641, "library_name":"pytorch", "question_url":"https:\/\/stackoverflow.com\/questions\/44371560\/what-is-the-relationship-between-pytorch-and-torch", "best_answers_votes":43, "question_length":342, "response_length":946 }, { "question":"`stack()` vs `cat()` in PyTorch OpenAI's REINFORCE and actor-critic example for reinforcement learning has the following code: REINFORCE: ```py policy_loss = torch.cat(policy_loss).sum() ``` actor-critic: ```py loss = torch.stack(policy_losses).sum() + torch.stack(value_losses).sum() ``` One is using torch.cat, the other uses torch.stack, for similar use cases. As far as my understanding goes, the doc doesn't give any clear distinction between them.
I would be happy to know the differences between the functions.", "response":"stack Concatenates sequence of tensors along a new dimension. cat Concatenates the given sequence of seq tensors in the given dimension. So if A and B are of shape (3, 4): torch.cat([A, B], dim=0) will be of shape (6, 4) torch.stack([A, B], dim=0) will be of shape (2, 3, 4)", "best_answers_score":0.7607, "library_name":"pytorch", "question_url":"https:\/\/stackoverflow.com\/questions\/54307225\/stack-vs-cat-in-pytorch", "best_answers_votes":250, "question_length":517, "response_length":274 }, { "question":"Pytorch: how to add L1 regularizer to activations? I would like to add the L1 regularizer to the activations output from a ReLU. More generally, how does one add a regularizer only to a particular layer in the network? Related material: This similar post refers to adding L2 regularization, but it appears to add the regularization penalty to all layers of the network. nn.modules.loss.L1Loss() seems relevant, but I do not yet understand how to use this. The legacy module L1Penalty seems relevant also, but why has it been deprecated?", "response":"Here is how you do this: In your Module's forward, return the final output and the layers' output for which you want to apply L1 regularization. The loss variable will be the sum of the cross-entropy loss of the output w.r.t. the targets and the L1 penalties.
Here's an example code ``` import torch from torch.autograd import Variable from torch.nn import functional as F class MLP(torch.nn.Module): def __init__(self): super(MLP, self).__init__() self.linear1 = torch.nn.Linear(128, 32) self.linear2 = torch.nn.Linear(32, 16) self.linear3 = torch.nn.Linear(16, 2) def forward(self, x): layer1_out = F.relu(self.linear1(x)) layer2_out = F.relu(self.linear2(layer1_out)) out = self.linear3(layer2_out) return out, layer1_out, layer2_out batchsize = 4 lambda1, lambda2 = 0.5, 0.01 model = MLP() optimizer = torch.optim.SGD(model.parameters(), lr=1e-4) # usually following code is looped over all batches # but let's just do a dummy batch for brevity inputs = Variable(torch.rand(batchsize, 128)) targets = Variable(torch.ones(batchsize).long()) optimizer.zero_grad() outputs, layer1_out, layer2_out = model(inputs) cross_entropy_loss = F.cross_entropy(outputs, targets) all_linear1_params = torch.cat([x.view(-1) for x in model.linear1.parameters()]) all_linear2_params = torch.cat([x.view(-1) for x in model.linear2.parameters()]) l1_regularization = lambda1 * torch.norm(all_linear1_params, 1) l2_regularization = lambda2 * torch.norm(all_linear2_params, 2) loss = cross_entropy_loss + l1_regularization + l2_regularization loss.backward() optimizer.step() ```", "best_answers_score":0.7584, "library_name":"pytorch", "question_url":"https:\/\/stackoverflow.com\/questions\/44641976\/pytorch-how-to-add-l1-regularizer-to-activations", "best_answers_votes":43, "question_length":536, "response_length":1527 }, { "question":"What is running loss in PyTorch and how is it calculated I had a look at this tutorial in the PyTorch docs for understanding Transfer Learning. There was one line that I failed to understand. After the loss is calculated using loss = criterion(outputs, labels), the running loss is calculated using running_loss += loss.item() * inputs.size(0) and finally, the epoch loss is calculated using running_loss \/ dataset_sizes[phase]. 
Isn't loss.item() supposed to be for an entire mini-batch (please correct me if I am wrong)? I.e., if the batch_size is 4, loss.item() would give the loss for the entire set of 4 images. If this is true, why is loss.item() being multiplied with inputs.size(0) while calculating running_loss? Isn't this step like an extra multiplication in this case? Any help would be appreciated. Thanks!", "response":"It's because the loss given by CrossEntropy or other loss functions is divided by the number of elements, i.e. the reduction parameter is mean by default. ```py torch.nn.CrossEntropyLoss(weight=None, size_average=None, ignore_index=-100, reduce=None, reduction='mean') ``` Hence, loss.item() contains the loss of the entire mini-batch, but divided by the batch size. That's why loss.item() is multiplied with batch size, given by inputs.size(0), while calculating running_loss.", "best_answers_score":0.7535, "library_name":"pytorch", "question_url":"https:\/\/stackoverflow.com\/questions\/61092523\/what-is-running-loss-in-pytorch-and-how-is-it-calculated", "best_answers_votes":33, "question_length":817, "response_length":472 }, { "question":"CUDA runtime error (59) : device-side assert triggered ```none THCudaCheck FAIL file=\/opt\/conda\/conda-bld\/pytorch_1524584710464\/work\/aten\/src\/THC\/generated\/..\/generic\/THCTensorMathPointwise.cu line=265 error=59 : device-side assert triggered Traceback (most recent call last): File \"main.py\", line 109, in train(loader_train, model, criterion, optimizer) File \"main.py\", line 54, in train optimizer.step() File \"\/usr\/local\/anaconda35\/lib\/python3.6\/site-packages\/torch\/optim\/sgd.py\", line 93, in step d_p.add_(weight_decay, p.data) RuntimeError: cuda runtime error (59) : device-side assert triggered at \/opt\/conda\/conda-bld\/pytorch_1524584710464\/work\/aten\/src\/THC\/generated\/..\/generic\/THCTensorMathPointwise.cu:265 ``` How do I resolve this error?", "response":"This is usually an indexing issue.
For example, if your ground truth label starts at 1: ``` target = [1,2,3,4,5] ``` Then you should subtract 1 for every label instead so that: ``` target = [0,1,2,3,4] ```", "best_answers_score":0.7529, "library_name":"pytorch", "question_url":"https:\/\/stackoverflow.com\/questions\/51691563\/cuda-runtime-error-59-device-side-assert-triggered", "best_answers_votes":128, "question_length":748, "response_length":205 }, { "question":"early stopping in PyTorch I tried to implement an early stopping function to avoid my neural network model overfit. I'm pretty sure that the logic is fine, but for some reason, it doesn't work. I want that when the validation loss is greater than the training loss over some epochs, the early stopping function returns True. But it returns False all the time, even though the validation loss becomes a lot greater than the training loss. Could you see where is the problem, please? early stopping function ``` def early_stopping(train_loss, validation_loss, min_delta, tolerance): counter = 0 if (validation_loss - train_loss) > min_delta: counter +=1 if counter >= tolerance: return True ``` calling the function during the training ``` for i in range(epochs): print(f\"Epoch {i+1}\") epoch_train_loss, pred = train_one_epoch(model, train_dataloader, loss_func, optimiser, device) train_loss.append(epoch_train_loss) # validation with torch.no_grad(): epoch_validate_loss = validate_one_epoch(model, validate_dataloader, loss_func, device) validation_loss.append(epoch_validate_loss) # early stopping if early_stopping(epoch_train_loss, epoch_validate_loss, min_delta=10, tolerance = 20): print(\"We are at epoch:\", i) break ``` EDIT: The train and validation loss: EDIT2: ``` def train_validate (model, train_dataloader, validate_dataloader, loss_func, optimiser, device, epochs): preds = [] train_loss = [] validation_loss = [] min_delta = 5 for e in range(epochs): print(f\"Epoch {e+1}\") epoch_train_loss, pred = train_one_epoch(model, train_dataloader, 
loss_func, optimiser, device) train_loss.append(epoch_train_loss) # validation with torch.no_grad(): epoch_validate_loss = validate_one_epoch(model, validate_dataloader, loss_func, device) validation_loss.append(epoch_validate_loss) # early stopping early_stopping = EarlyStopping(tolerance=2, min_delta=5) early_stopping(epoch_train_loss, epoch_validate_loss) if early_stopping.early_stop: print(\"We are at epoch:\", e) break return train_loss, validation_loss ```", "response":"Although @KarelZe's response solves your problem sufficiently and elegantly, I want to provide an alternative early stopping criterion that is arguably better. Your early stopping criterion is based on how much (and for how long) the validation loss diverges from the training loss. This will break when the validation loss is indeed decreasing but is generally not close enough to the training loss. The goal of training a model is to encourage the reduction of validation loss and not the reduction in the gap between training loss and validation loss. Hence, I would argue that a better early stopping criterion would be to watch the trend in validation loss alone, i.e., if the training is not lowering the validation loss then terminate it.
Here's an example implementation: ``` class EarlyStopper: def __init__(self, patience=1, min_delta=0): self.patience = patience self.min_delta = min_delta self.counter = 0 self.min_validation_loss = float('inf') def early_stop(self, validation_loss): if validation_loss < self.min_validation_loss: self.min_validation_loss = validation_loss self.counter = 0 elif validation_loss > (self.min_validation_loss + self.min_delta): self.counter += 1 if self.counter >= self.patience: return True return False ``` Here's how you'd use it: ``` early_stopper = EarlyStopper(patience=3, min_delta=10) for epoch in range(n_epochs): train_loss = train_one_epoch(model, train_loader) validation_loss = validate_one_epoch(model, validation_loader) if early_stopper.early_stop(validation_loss): break ```", "best_answers_score":0.7527, "library_name":"pytorch", "question_url":"https:\/\/stackoverflow.com\/questions\/71998978\/early-stopping-in-pytorch", "best_answers_votes":87, "question_length":2018, "response_length":1446 }, { "question":"Print exact value of PyTorch tensor (floating point precision) I'm trying to print torch.FloatTensor like: ``` a = torch.FloatTensor(3,3) print(a) ``` This way I can get a value like: ``` 0.0000e+00 0.0000e+00 3.2286e-41 1.2412e-40 1.2313e+00 1.6751e-37 2.6801e-36 3.5873e-41 9.4463e+21 ``` But I want to get a more accurate value, like 10 decimal points: ``` 0.1234567891+01 ``` With other python numerical objects, I could get it with: ``` print('{:.10f}'.format(a)) ``` but in the case of a tensor, I get this error: ``` TypeError: unsupported format string passed to torch.FloatTensor.__format__ ``` How can I print more precise values of tensors?", "response":"You can set the precision options: ``` torch.set_printoptions(precision=10) ``` There are more formatting options on the documentation page, it is very similar to numpy's.", "best_answers_score":0.7458, "library_name":"pytorch", "question_url":"https:\/\/stackoverflow.com\/questions\/47483733\/print-exact-value-of-pytorch-tensor-floating-point-precision", "best_answers_votes":50, "question_length":648,
"response_length":171 }, { "question":"Taking subsets of a pytorch dataset I have a network which I want to train on some dataset (as an example, say CIFAR10). I can create data loader object via ``` trainset = torchvision.datasets.CIFAR10(root='.\/data', train=True, download=True, transform=transform) trainloader = torch.utils.data.DataLoader(trainset, batch_size=4, shuffle=True, num_workers=2) ``` My question is as follows: Suppose I want to make several different training iterations. Let's say I want at first to train the network on all images in odd positions, then on all images in even positions and so on. In order to do that, I need to be able to access to those images. Unfortunately, it seems that trainset does not allow such access. That is, trying to do trainset[:1000] or more generally trainset[mask] will throw an error. I could do instead ``` trainset.train_data=trainset.train_data[mask] trainset.train_labels=trainset.train_labels[mask] ``` and then ``` trainloader = torch.utils.data.DataLoader(trainset, batch_size=4, shuffle=True, num_workers=2) ``` However, that will force me to create a new copy of the full dataset in each iteration (as I already changed trainset.train_data so I will need to redefine trainset). Is there some way to avoid it? 
Ideally, I would like to have something \"equivalent\" to ``` trainloader = torch.utils.data.DataLoader(trainset[mask], batch_size=4, shuffle=True, num_workers=2) ```", "response":"torch.utils.data.Subset is easier, supports shuffle, and doesn't require writing your own sampler: ```py import torchvision import torch trainset = torchvision.datasets.CIFAR10(root='.\/data', train=True, download=True, transform=None) evens = list(range(0, len(trainset), 2)) odds = list(range(1, len(trainset), 2)) trainset_1 = torch.utils.data.Subset(trainset, evens) trainset_2 = torch.utils.data.Subset(trainset, odds) trainloader_1 = torch.utils.data.DataLoader(trainset_1, batch_size=4, shuffle=True, num_workers=2) trainloader_2 = torch.utils.data.DataLoader(trainset_2, batch_size=4, shuffle=True, num_workers=2) ```", "best_answers_score":0.7447, "library_name":"pytorch", "question_url":"https:\/\/stackoverflow.com\/questions\/47432168\/taking-subsets-of-a-pytorch-dataset", "best_answers_votes":128, "question_length":1400, "response_length":624 }, { "question":"What is volatile variable in Pytorch What is volatile attribute of a Variable in Pytorch? Here's a sample code for defining a variable in PyTorch. ``` datatensor = Variable(data, volatile=True) ```", "response":"Basically, set the input to a network to volatile if you are doing inference only and won't be running backpropagation in order to conserve memory. From the docs: Volatile is recommended for purely inference mode, when you\u2019re sure you won\u2019t be even calling .backward(). It\u2019s more efficient than any other autograd setting - it will use the absolute minimal amount of memory to evaluate the model. volatile also determines that requires_grad is False. 
Edit: The volatile keyword has been deprecated as of pytorch version 0.4.0", "best_answers_score":0.7445, "library_name":"pytorch", "question_url":"https:\/\/stackoverflow.com\/questions\/49837638\/what-is-volatile-variable-in-pytorch", "best_answers_votes":31, "question_length":197, "response_length":525 }, { "question":"How does Pytorch's \"Fold\" and \"Unfold\" work? I've gone through the official doc. I'm having a hard time understanding what this function is used for and how it works. Can someone explain this in layman's terms?", "response":"unfold imagines a tensor as a longer tensor with repeated columns\/rows of values 'folded' on top of each other, which is then \"unfolded\": size determines how large the folds are step determines how often it is folded E.g. for a 2x5 tensor, unfolding it with step=1, and patch size=2 across dim=1: ```py x = torch.tensor([[1,2,3,4,5], [6,7,8,9,10]]) ``` ```py >>> x.unfold(1,2,1) tensor([[[ 1, 2], [ 2, 3], [ 3, 4], [ 4, 5]], [[ 6, 7], [ 7, 8], [ 8, 9], [ 9, 10]]]) ``` fold is roughly the opposite of this operation, but \"overlapping\" values are summed in the output.", "best_answers_score":0.7433, "library_name":"pytorch", "question_url":"https:\/\/stackoverflow.com\/questions\/53972159\/how-does-pytorchs-fold-and-unfold-work", "best_answers_votes":79, "question_length":210, "response_length":567 }, { "question":"How to run Pytorch on Macbook pro (M1) GPU? I tried to train a model using PyTorch on my Macbook pro. It uses the new generation apple M1 CPU. However, PyTorch couldn't recognize my GPUs. ``` GPU available: False, used: False TPU available: False, using: 0 TPU cores IPU available: False, using: 0 IPUs ``` Does anyone know any solution? I have updated all the libraries to the latest versions.", "response":"PyTorch added support for M1 GPU as of 2022-05-18 in the Nightly version. Read more about it in their blog post. 
Simply install nightly: conda install pytorch -c pytorch-nightly --force-reinstall Update: It's available in the stable version: Conda:conda install pytorch torchvision torchaudio -c pytorch pip: pip3 install torch torchvision torchaudio To use (source): ``` mps_device = torch.device(\"mps\") # Create a Tensor directly on the mps device x = torch.ones(5, device=mps_device) # Or x = torch.ones(5, device=\"mps\") # Any operation happens on the GPU y = x * 2 # Move your model to mps just like any other device model = YourFavoriteNet() model.to(mps_device) # Now every call runs on the GPU pred = model(x) ```", "best_answers_score":0.7393, "library_name":"pytorch", "question_url":"https:\/\/stackoverflow.com\/questions\/68820453\/how-to-run-pytorch-on-macbook-pro-m1-gpu", "best_answers_votes":39, "question_length":394, "response_length":720 }, { "question":"What does next() and iter() do in PyTorch's DataLoader() I have the following code: ``` import torch import numpy as np import pandas as pd from torch.utils.data import TensorDataset, DataLoader # Load dataset df = pd.read_csv(r'..\/iris.csv') # Extract features and target data = df.drop('target',axis=1).values labels = df['target'].values # Create tensor dataset iris = TensorDataset(torch.FloatTensor(data),torch.LongTensor(labels)) # Create random batches iris_loader = DataLoader(iris, batch_size=105, shuffle=True) next(iter(iris_loader)) ``` What does next() and iter() do in the above code? I have went through PyTorch's documentation and still can't quite understand what is next() and iter() doing here. Can anyone help in explaining this? Many thanks in advance.", "response":"These are built-in functions of python, they are used for working with iterables. Basically iter() calls the __iter__() method on the iris_loader which returns an iterator. next() then calls the __next__() method on that iterator to get the first iteration. Running next() again will get the second item of the iterator, etc. 
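To see the same mechanics without any PyTorch machinery, here is a plain-Python sketch (the list is just a stand-in for an iterable such as a DataLoader): ```
batches = ['batch0', 'batch1', 'batch2']  # stand-in for any iterable

it = iter(batches)   # calls batches.__iter__(), which returns an iterator
print(next(it))      # calls it.__next__() and yields 'batch0'
print(next(it))      # each further call advances the iterator: 'batch1'
```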
This logic often happens 'behind the scenes', for example when running a for loop. It calls the __iter__() method on the iterable, and then calls __next__() on the returned iterator until it reaches the end of the iterator. It then raises a StopIteration and the loop stops. Please see the documentation for further details and some nuances: https:\/\/docs.python.org\/3\/library\/functions.html#iter", "best_answers_score":0.7378, "library_name":"pytorch", "question_url":"https:\/\/stackoverflow.com\/questions\/62549990\/what-does-next-and-iter-do-in-pytorchs-dataloader", "best_answers_votes":45, "question_length":773, "response_length":721 }, { "question":"Where is the source code of pytorch conv2d? Where do I find the source code of the pytorch function conv2d? It should be in torch.nn.functional but I only find _add_docstr lines, if I search for conv2d. I looked here: https:\/\/github.com\/pytorch\/pytorch\/blob\/master\/torch\/nn\/functional.py Update: It's not a typo, I do mean the function. Conv2d class uses conv2d function from the nn.functional Here: https:\/\/github.com\/pytorch\/pytorch\/blob\/master\/torch\/nn\/modules\/conv.py On line 338: ``` return F.conv2d(F.pad(input, expanded_padding, mode='circular') ``` F is how they import functional So then I went there, but I don't find the code.", "response":"The functional code is implemented in C++. As of version 1.13.1 the entry point into the C++ code for conv2d is at aten\/src\/ATen\/native\/Convolution.cpp:804. If you are interested more generally in how functions are registered to the API then you can take a look at aten\/src\/ATen\/native\/README.md. A deeper dive will benefit from understanding some of the design decisions in PyTorch. For example, the dispatcher mechanism (see here). More general information can be found in the PyTorch developer's wiki, though keep in mind this wiki is primarily a tool for contributors and is not as polished as the Python API documentation.
IMO a good starting point is the Core Frontend Onboarding page which gives links to most everything needed to get your head around the PyTorch source code.", "best_answers_score":0.7352, "library_name":"pytorch", "question_url":"https:\/\/stackoverflow.com\/questions\/59127007\/where-is-the-source-code-of-pytorch-conv2d", "best_answers_votes":18, "question_length":643, "response_length":783 }, { "question":"How Pytorch Tensor get the index of specific value With python lists, we can do: ``` a = [1, 2, 3] assert a.index(2) == 1 ``` How can a pytorch tensor find the .index() directly?", "response":"I think there is no direct translation from list.index() to a pytorch function. However, you can achieve similar results using tensor==number and then the nonzero() function. For example: ``` t = torch.Tensor([1, 2, 3]) print ((t == 2).nonzero(as_tuple=True)[0]) ``` This piece of code returns 1 [torch.LongTensor of size 1x1]", "best_answers_score":0.7341, "library_name":"pytorch", "question_url":"https:\/\/stackoverflow.com\/questions\/47863001\/how-pytorch-tensor-get-the-index-of-specific-value", "best_answers_votes":100, "question_length":178, "response_length":326 }, { "question":"What's the difference between `reshape()` and `view()` in PyTorch? In numpy, we use ndarray.reshape() for reshaping an array. I noticed that in PyTorch, people use torch.view() for the same purpose, but at the same time, there is also a torch.reshape() existing. So I am wondering what the differences are between them and when I should use either of them?", "response":"torch.view has existed for a long time. It will return a tensor with the new shape. The returned tensor will share the underlying data with the original tensor. See the documentation here. On the other hand, it seems that torch.reshape has been introduced recently in version 0.4. According to the documentation, this method Returns a tensor with the same data and number of elements as input, but with the specified shape.
When possible, the returned tensor will be a view of input. Otherwise, it will be a copy. Contiguous inputs and inputs with compatible strides can be reshaped without copying, but you should not depend on the copying vs. viewing behavior. It means that torch.reshape may return a copy or a view of the original tensor. You cannot count on it to return a view or a copy. According to the developer: if you need a copy use clone(); if you need the same storage use view(). The semantics of reshape() are that it may or may not share the storage and you don't know beforehand. Another difference is that reshape() can operate on both contiguous and non-contiguous tensors while view() can only operate on contiguous tensors. Also see here about the meaning of contiguous.", "best_answers_score":0.7336, "library_name":"pytorch", "question_url":"https:\/\/stackoverflow.com\/questions\/49643225\/whats-the-difference-between-reshape-and-view-in-pytorch", "best_answers_votes":277, "question_length":356, "response_length":1191 }, { "question":"pytorch - connection between loss.backward() and optimizer.step() Where is an explicit connection between the optimizer and the loss? How does the optimizer know where to get the gradients of the loss without a call like this optimizer.step(loss)? -More context- When I minimize the loss, I didn't have to pass the gradients to the optimizer. ``` loss.backward() # Back Propagation optimizer.step() # Gradient Descent ```", "response":"Without delving too deep into the internals of pytorch, I can offer a simplistic answer: Recall that when initializing the optimizer you explicitly tell it what parameters (tensors) of the model it should be updating. The gradients are \"stored\" by the tensors themselves (they have grad and requires_grad attributes) once you call backward() on the loss.
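A toy plain-Python sketch of that wiring (the Param and SGD classes here are hypothetical stand-ins, not PyTorch internals): the optimizer is constructed with references to the parameter objects, and each parameter carries its own grad slot that a backward pass fills in.

```python
# Hypothetical stand-ins for illustration only, not PyTorch's classes.
class Param:
    def __init__(self, value):
        self.value = value
        self.grad = 0.0      # slot a backward pass would fill in

class SGD:
    def __init__(self, params, lr):
        self.params = params  # references to the Param objects, not copies
        self.lr = lr

    def step(self):
        # read whatever gradient was left on each parameter and update it
        for p in self.params:
            p.value -= self.lr * p.grad

w = Param(1.0)
opt = SGD([w], lr=0.1)
w.grad = 2.0   # stands in for what loss.backward() would store
opt.step()
assert abs(w.value - 0.8) < 1e-9
```

Because the optimizer holds references to the same objects the backward pass mutates, no explicit hand-off of gradients is needed.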
After computing the gradients for all tensors in the model, calling optimizer.step() makes the optimizer iterate over all parameters (tensors) it is supposed to update and use their internally stored grad to update their values. More info on computational graphs and the additional \"grad\" information stored in pytorch tensors can be found in this answer. Referencing the parameters by the optimizer can sometimes cause troubles, e.g., when the model is moved to GPU after initializing the optimizer. Make sure you are done setting up your model before constructing the optimizer. See this answer for more details.", "best_answers_score":0.7323, "library_name":"pytorch", "question_url":"https:\/\/stackoverflow.com\/questions\/53975717\/pytorch-connection-between-loss-backward-and-optimizer-step", "best_answers_votes":158, "question_length":421, "response_length":969 }, { "question":"Can't install pytorch with pip on Windows I'm trying to install Pytorch with Windows and I'm using the commands of the official site https:\/\/pytorch.org\/get-started\/locally\/ ``` pip3 install torch==1.2.0 torchvision==0.4.0 -f https:\/\/download.pytorch.org\/whl\/torch_stable.html ``` This is the command if I choose Windows, Cuda 10.0, and Python 3.7 But if I run this I get the error message: ``` ERROR: Could not find a version that satisfies the requirement torch==1.2.0 (from versions: 0.1.2, 0.1.2.post1, 0.1.2.post2) ERROR: No matching distribution found for torch==1.2.0 ``` So why does this happen? 
My pip is version 19.2 and I am in a newly installed python 3.7 environment", "response":"I tried multiple solutions and it wasn't working on Windows 10 until I tried this: ``` pip install torch==1.5.0+cpu -f https:\/\/download.pytorch.org\/whl\/torch_stable.html ``` If you want your GPU enabled then remove the \"+CPU\": ``` pip install torch==1.5.0 -f https:\/\/download.pytorch.org\/whl\/torch_stable.html ```", "best_answers_score":0.7321, "library_name":"pytorch", "question_url":"https:\/\/stackoverflow.com\/questions\/57499002\/cant-install-pytorch-with-pip-on-windows", "best_answers_votes":28, "question_length":679, "response_length":313 }, { "question":"Force GPU memory limit in PyTorch Is there a way to force a maximum value for the amount of GPU memory that I want to be available for a particular Pytorch instance? For example, my GPU may have 12Gb available, but I'd like to assign 4Gb max to a particular process.", "response":"Update PyTorch to 1.8.0 (pip install --upgrade torch==1.8.0) function: torch.cuda.set_per_process_memory_fraction(fraction, device=None) params: fraction (float) \u2013 Range: 0~1. Allowed memory equals total_memory * fraction. device (torch.device or int, optional) \u2013 selected device. If it is None the default CUDA device is used. eg: ```py import torch torch.cuda.set_per_process_memory_fraction(0.5, 0) torch.cuda.empty_cache() total_memory = torch.cuda.get_device_properties(0).total_memory # less than 0.5 will be ok: tmp_tensor = torch.empty(int(total_memory * 0.499), dtype=torch.int8, device='cuda') del tmp_tensor torch.cuda.empty_cache() # this allocation will raise an OOM: torch.empty(total_memory \/\/ 2, dtype=torch.int8, device='cuda') \"\"\" It raises an error as follows: RuntimeError: CUDA out of memory.
Tried to allocate 5.59 GiB (GPU 0; 11.17 GiB total capacity; 0 bytes already allocated; 10.91 GiB free; 5.59 GiB allowed; 0 bytes reserved in total by PyTorch) \"\"\" ```", "best_answers_score":0.7308, "library_name":"pytorch", "question_url":"https:\/\/stackoverflow.com\/questions\/49529372\/force-gpu-memory-limit-in-pytorch", "best_answers_votes":23, "question_length":266, "response_length":980 }, { "question":"How to multiply a matrix by a vector in PyTorch I'm playing around with PyTorch with the aim of learning it, and I have a very dumb question: how can I multiply a matrix by a single vector? Here's what I've tried: ``` >>> import torch >>> a = torch.rand(4,4) >>> a 0.3162 0.4434 0.9318 0.8752 0.0129 0.8609 0.6402 0.2396 0.5720 0.7262 0.7443 0.0425 0.4561 0.1725 0.4390 0.8770 [torch.FloatTensor of size 4x4] >>> b = torch.rand(4) >>> b 0.1813 0.7090 0.0329 0.7591 [torch.FloatTensor of size 4] >>> a.mm(b) Traceback (most recent call last): File \"\", line 1, in RuntimeError: invalid argument 2: dimension 1 out of range of 1D tensor at \/Users\/soumith\/code\/builder\/wheel\/pytorch-src\/torch\/lib\/TH\/generic\/THTensor.c:24 >>> a.mm(b.t()) Traceback (most recent call last): File \"\", line 1, in RuntimeError: t() expects a 2D tensor, but self is 1D >>> b.mm(a) Traceback (most recent call last): File \"\", line 1, in RuntimeError: matrices expected, got 1D, 2D tensors at \/Users\/soumith\/code\/builder\/wheel\/pytorch-src\/torch\/lib\/TH\/generic\/THTensorMath.c:1288 >>> b.t().mm(a) Traceback (most recent call last): File \"\", line 1, in RuntimeError: t() expects a 2D tensor, but self is 1D ``` On the other hand, if I do ``` >>> b = torch.rand(4,2) ``` then my first attempt, a.mm(b), works fine. So the problem is just that I'm multiplying a vector rather than a matrix --- but how can I do this?", "response":"You're looking for ``` torch.mv(a,b) ``` Note that for the future, you may also find torch.matmul() useful. 
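For intuition, what torch.mv computes is just a dot product of the vector with each matrix row; a plain-Python sketch (illustrative only, not how PyTorch implements it):

```python
# Each output element is the dot product of one matrix row with the vector,
# which is the operation torch.mv(a, b) performs for a 2D a and 1D b.
def mv(matrix, vec):
    return [sum(m * v for m, v in zip(row, vec)) for row in matrix]

a = [[1.0, 2.0], [3.0, 4.0]]
b = [0.5, 0.25]
assert mv(a, b) == [1.0, 2.5]
```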
torch.matmul() infers the dimensionality of your arguments and accordingly performs either dot products between vectors, matrix-vector or vector-matrix multiplication, matrix multiplication or batch matrix multiplication for higher order tensors.", "best_answers_score":0.7277, "library_name":"pytorch", "question_url":"https:\/\/stackoverflow.com\/questions\/47870003\/how-to-multiply-a-matrix-by-a-vector-in-pytorch", "best_answers_votes":37, "question_length":1388, "response_length":354 }, { "question":"Parallelization strategies for deep learning What strategies and forms of parallelization are feasible and available for training and serving a neural network?: inside a machine across cores (e.g. GPU \/ TPU \/ CPU) across machines on a network or a rack I'm also looking for evidence for how they may also be used in e.g. TensorFlow, PyTorch or MXNet. Training To my knowledge, when training large neural networks on large datasets, one could at least have: Different cores or machines operate on different parts of the graph (\"graph splitting\"). E.g. backpropagation through the graph itself can be parallelized e.g. by having different layers hosted on different machines since (I think?) the autodiff graph is always a DAG. Different cores or machines operate on different samples of data (\"data splitting\"). In SGD, the computation of gradients across batches or samples can also be parallelized (e.g. the gradients can be combined after computing them independently on different batches). I believe this is also called gradient accumulation (?). When is each strategy better for what type of problem or neural network? Which modes are supported by modern libraries? and can one combine all four (2x2) strategies? On top of that, I have read about: Asynchronous training Synchronous training but I don't know what exactly that refers to, e.g. is it the computation of gradients on different data batches or the computation of gradients on different subgraphs? 
Or perhaps it refers to something else altogether? Serving If the network is huge, prediction \/ inference may also be slow, and the model may not fit on a single machine in memory at serving time. Are there any known multi-core and multi-node prediction solutions that work that can handle such models?", "response":"Training In general, there are two strategies of parallelizing model training: data parallelism and model parallelism. 1. Data parallelism This strategy splits training data into N partitions, each of which will be trained on different \u201cdevices\u201d (different CPU cores, GPUs, or even machines). In contrast to training without data parallelism which produces one gradient per minibatch, we now have N gradients for each minibatch step. The next question is how we should combine these N gradients. One way to do it is by averaging all the N gradients and then updating the model parameters once based on the average. This technique is called synchronous distributed SGD. By doing the average, we have a more accurate gradient, but with a cost of waiting all the devices to finish computing its own local gradient. Another way is by not combining the gradients \u2014 each gradient will instead be used to update the model parameters independently. So, there will be N parameter updates for each minibatch step, in contrast to only one for the previous technique. This technique is called asynchronous distributed SGD. Because it doesn't have to wait other devices to finish, the async approach will take less time to complete a minibatch step than the sync approach will do. However, the async approach will produce a more noisy gradient, so it might need to complete more minibatch steps to catch up with the performance (in terms of loss) of the sync approach. There are many papers proposing some improvements and optimizations on either approach, but the main idea is generally the same as described above. 
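The synchronous averaging step described above can be sketched in plain Python (a hypothetical helper for illustration, not a real framework API): the N per-device gradients for one minibatch step are averaged, then applied in a single shared update.

```python
# Sketch of one synchronous data-parallel SGD step (illustrative only):
# every device contributes a local gradient, and the parameter is
# updated once with their average.
def sync_sgd_step(param, per_device_grads, lr):
    avg_grad = sum(per_device_grads) / len(per_device_grads)
    return param - lr * avg_grad

p = 1.0
device_grads = [0.2, 0.4, 0.6]   # one local gradient per device
p = sync_sgd_step(p, device_grads, lr=0.5)
assert abs(p - 0.8) < 1e-9
```

In the asynchronous variant, each of those gradients would instead be applied to the shared parameter as soon as it arrives, giving N (noisier) updates per minibatch step.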
In the literature there's been some disagreement on which technique is better in practice. At the end most people now settle on the synchronous approach. Data Parallelism in PyTorch To do synchronous SGD, we can wrap our model with torch.nn.parallel.DistributedDataParallel: ``` from torch.nn.parallel import DistributedDataParallel as DDP # `model` is the model we previously initialized model = ... # `rank` is a device number starting from 0 model = model.to(rank) ddp_model = DDP(model, device_ids=[rank]) ``` Then we can train it similarly. For more details, you can refer to the official tutorial. For doing asynchronous SGD in PyTorch, we need to implement it more manually since there is no wrapper similar to DistributedDataParallel for it. Data Parallelism in TensorFlow\/Keras For synchronous SGD, we can use tf.distribute.MirroredStrategy to wrap the model initalization: ``` import tensorflow as tf strategy = tf.distribute.MirroredStrategy() with strategy.scope(): model = Model(...) model.compile(...) ``` Then we can train it as usual. For more details, you can refer to the official guides on Keras website and TensorFlow website. For asynchronous SGD, we can use tf.distribute.experimental.ParameterServerStrategy similarly. 2. Model Parallelism This strategy splits the model into N parts, each of which will be computed on different devices. A common way to split the model is based on layers: different sets of layers are placed on different devices. But we can also split it more intricately depending on the model architecture. Model Parallelism in TensorFlow and PyTorch To implement model parallelism in either TensorFlow or PyTorch, the idea is the same: to move some model parameters into a different device. In PyTorch we can use torch.nn.Module.to method to move a module into a different device. 
For example, suppose we want to create two linear layers each of which is placed on a different GPU: ``` import torch.nn as nn linear1 = nn.Linear(16, 8).to('cuda:0') linear2 = nn.Linear(8, 4).to('cuda:1') ``` In TensorFlow we can use tf.device to place an operation into a specific device. To implement the PyTorch example above in TensorFlow: ``` import tensorflow as tf from tensorflow.keras import layers with tf.device('\/GPU:0'): linear1 = layers.Dense(8, input_dim=16) with tf.device('\/GPU:1'): linear2 = layers.Dense(4, input_dim=8) ``` For more details you can refer to the official PyTorch tutorial; or if you use TensorFlow you can even use a more high-level library like mesh. 3. Hybrid: Data and Model Parallelism Recall that data parallelism only splits the training data, whereas model parallelism only splits the model structures. If we have a model so large that even after using either parallelism strategy it still doesn't fit in the memory, we can always do both. In practice most people prefer data parallelism to model parallelism since the former is more decoupled (in fact, independent) from the model architecture than the latter. That is, by using data parallelism they can change the model architecture as they like, without worrying which part of the model should be parallelized. Model Inference \/ Serving Parallelizing model serving is easier than parallelizing model training since the model parameters are already fixed and each request can be processed independently. Similar to scaling a regular Python web service, we can scale model serving by spawning more processes (to workaround Python's GIL) in a single machine, or even spawning more machine instances. When we use a GPU to serve the model, though, we need to do more work to scale it. Because of how concurrency is handled differently by a GPU compared to a CPU, in order to maximize the performance, we need to do inference request batching. 
The idea is when a request comes, instead of immediately processing it, we wait some timeout duration for other requests to come. When the timeout is up, even if the number of requests is only one, we batch them all to be processed on the GPU. In order to minimize the average request latency, we need to find the optimal timeout duration. To find it we need to observe that there is a trade-off between minimizing the timeout duration and maximizing the number of batch size. If the timeout is too low, the batch size will be small, so the GPU will be underutilized. But if the timeout is too high, the requests that come early will wait too long before they get processed. So, the optimal timeout duration depends on the model complexity (hence, the inference duration) and the average requests per second to receive. Implementing a scheduler to do request batching is not a trivial task, so instead of doing it manually, we'd better use TensorFlow Serving or PyTorch Serve which already supports it. To learn more about parallel and distributed learning, you can read this review paper.", "best_answers_score":0.7269, "library_name":"pytorch", "question_url":"https:\/\/stackoverflow.com\/questions\/62759940\/parallelization-strategies-for-deep-learning", "best_answers_votes":13, "question_length":1765, "response_length":6453 }, { "question":"What does the parameter retain_graph mean in the Variable's backward() method? I'm going through the neural transfer pytorch tutorial and am confused about the use of retain_variable(deprecated, now referred to as retain_graph). The code example show: ``` class ContentLoss(nn.Module): def __init__(self, target, weight): super(ContentLoss, self).__init__() self.target = target.detach() * weight self.weight = weight self.criterion = nn.MSELoss() def forward(self, input): self.loss = self.criterion(input * self.weight, self.target) self.output = input return self.output def backward(self, retain_variables=True): #Why is retain_variables True?? 
self.loss.backward(retain_variables=retain_variables) return self.loss ``` From the documentation retain_graph (bool, optional) \u2013 If False, the graph used to compute the grad will be freed. Note that in nearly all cases setting this option to True is not needed and often can be worked around in a much more efficient way. Defaults to the value of create_graph. So by setting retain_graph= True, we're not freeing the memory allocated for the graph on the backward pass. What is the advantage of keeping this memory around, why do we need it?", "response":"@cleros is pretty on the point about the use of retain_graph=True. In essence, it will retain any necessary information to calculate a certain variable, so that we can do a backward pass on it. An illustrative example Suppose that we have a computation graph shown above. The variables d and e are the outputs, and a is the input. For example, ```py import torch from torch.autograd import Variable a = Variable(torch.rand(1, 4), requires_grad=True) b = a**2 c = b*2 d = c.mean() e = c.sum() ``` when we do d.backward(), that is fine. After this computation, the parts of the graph that calculate d will be freed by default to save memory. So if we do e.backward(), the error message will pop up. In order to do e.backward(), we have to set the parameter retain_graph to True in d.backward(), i.e., ```py d.backward(retain_graph=True) ``` As long as you use retain_graph=True in your backward method, you can do backward any time you want: ```py d.backward(retain_graph=True) # fine e.backward(retain_graph=True) # fine d.backward() # also fine e.backward() # error will occur! ``` More useful discussion can be found here. A real use case Right now, a real use case is multi-task learning where you have multiple losses that may be at different layers. Suppose that you have 2 losses: loss1 and loss2 and they reside in different layers.
To backpropagate the gradients of loss1 and loss2 w.r.t. the learnable weights of your network independently, you have to use retain_graph=True in the backward() call for the first back-propagated loss. ```py # suppose you first back-propagate loss1, then loss2 (you can also do the reverse) loss1.backward(retain_graph=True) loss2.backward() # now the graph is freed, and next process of batch gradient descent is ready optimizer.step() # update the network parameters ```", "best_answers_score":0.7247, "library_name":"pytorch", "question_url":"https:\/\/stackoverflow.com\/questions\/46774641\/what-does-the-parameter-retain-graph-mean-in-the-variables-backward-method", "best_answers_votes":144, "question_length":1191, "response_length":1810 }, { "question":"I have a GPU and CUDA installed in Windows 10 but Pytorch's torch.cuda.is_available() returns false; how can I correct this? I have PyTorch installed on a Windows 10 machine with a Nvidia GTX 1050 GPU. I have installed the CUDA Toolkit and tested it using Nvidia instructions and that has gone smoothly, including execution of the suggested tests. However, torch.cuda.is_available() returns False. How can I fix this?", "response":"I also had the same issue. Running a=torch.cuda.FloatTensor() gave the error AssertionError: Torch not compiled with CUDA enabled, which made it clear that I was running PyTorch without CUDA. Steps: Make sure you have uninstalled PyTorch by invoking the following command: pip uninstall torch Go to https:\/\/pytorch.org\/get-started\/locally\/ and select your system configurations (as shown in the figure).
Copy the exact command from the Run this command dialog and run it on your terminal.", "best_answers_score":0.724, "library_name":"pytorch", "question_url":"https:\/\/stackoverflow.com\/questions\/57238344\/i-have-a-gpu-and-cuda-installed-in-windows-10-but-pytorchs-torch-cuda-is-availa", "best_answers_votes":69, "question_length":417, "response_length":517 }, { "question":"ImportError: libc10.so: cannot open shared object file: No such file or directory While running smdataparallel, I see following error ``` # python Python 3.6.10 |Anaconda, Inc.| (default, May 8 2020, 02:54:21) [GCC 7.3.0] on linux Type \"help\", \"copyright\", \"credits\" or \"license\" for more information. >>> import smdistributed.dataparallel.torch.distributed as dist Traceback (most recent call last): File \"\", line 1, in File \"\/opt\/conda\/lib\/python3.6\/site-packages\/smdistributed\/dataparallel\/__init__.py\", line 16, in import smddpcommon as hc ImportError: libc10.so: cannot open shared object file: No such file or directory ```", "response":"libc10.so is made available by pytorch. Hence first ``` import torch ``` and then import packages that depend on pytorch.", "best_answers_score":0.7208, "library_name":"pytorch", "question_url":"https:\/\/stackoverflow.com\/questions\/65710713\/importerror-libc10-so-cannot-open-shared-object-file-no-such-file-or-director", "best_answers_votes":48, "question_length":631, "response_length":121 }, { "question":"RuntimeError: CUDA error: CUBLAS_STATUS_EXECUTION_FAILED when calling `cublasSgemm( handle)` with GPU only I'm working on the CNN with one-dimensional signal. It works totally fine with CPU device. However, when I training model in GPU, CUDA error occurred. I set os.environ['CUDA_LAUNCH_BLOCKING'] = \"1\" command after I got RuntimeError: CUDA error: CUBLAS_STATUS_ALLOC_FAILED when calling cublasCreate(handle). With doing this, a cublasSgemm error occurred instead of cublasCreate error. 
Though the NVIDIA documentation suggests a hardware problem, I can train other CNNs on images without any error. Below is my code for loading the data and setting it up for the training model. ``` idx = np.arange(len(dataset)) # shuffle dataset & labels in unison np.random.shuffle(idx) dataset = dataset[idx] sdnn = np.array(sdnn)[idx.astype(int)] train_data, val_data = dataset[:int(0.8 * len(dataset))], dataset[int(0.8 * len(dataset)):] train_label, val_label = sdnn[:int(0.8 * len(sdnn))], sdnn[int(0.8 * len(sdnn)):] train_set = DataLoader(dataset=train_data, batch_size=opt.batch_size, num_workers=opt.workers) for i, data in enumerate(train_set, 0): # data.shape = [batch_size, 3000(len(signal)), 1(channel)] tensor x = data.transpose(1, 2) label = torch.Tensor(train_label[i * opt.batch_size:i * opt.batch_size + opt.batch_size]) x = x.to(device, non_blocking=True) label = label.to(device, non_blocking=True) # [batch size] label = label.view([len(label), 1]) optim.zero_grad() # extract features of the signal y_predict = model(x) # [batch size, fc3 output] # Error occurred HERE loss = mse(y_predict, label) ``` Below is the error message from this code.
``` File C:\/Users\/Me\/Desktop\/Me\/Study\/Project\/Analysis\/Regression\/main.py\", line 217, in Processing y_predict = model(x) # [batch size, fc3 output] File \"C:\\Anaconda\\envs\\torch\\lib\\site-packages\\torch\\nn\\modules\\module.py\", line 722, in _call_impl result = self.forward(*input, **kwargs) File \"C:\\Users\\ME\\Desktop\\ME\\Study\\Project\\Analysis\\Regression\\cnn.py\", line 104, in forward x = self.fc1(x) File \"C:\\Anaconda\\envs\\torch\\lib\\site-packages\\torch\\nn\\modules\\module.py\", line 722, in _call_impl result = self.forward(*input, **kwargs) File \"C:\\Anaconda\\envs\\torch\\lib\\site-packages\\torch\\nn\\modules\\linear.py\", line 91, in forward return F.linear(input, self.weight, self.bias) File \"C:\\Anaconda\\envs\\torch\\lib\\site-packages\\torch\\nn\\functional.py\", line 1674, in linear ret = torch.addmm(bias, input, weight.t()) RuntimeError: CUDA error: CUBLAS_STATUS_EXECUTION_FAILED when calling `cublasSgemm( handle, opa, opb, m, n, k, &alpha, a, lda, b, ldb, &beta, c, ldc)` ``` I've tried to solve this error for weeks but can't find the solution. If you can see anything wrong here, please let me know.", "response":"Please know that, it can also be caused if you have a mismatch between the dimension of your input tensor and the dimensions of your nn.Linear module. (ex. input.shape = (a, b) and nn.Linear(c, c, bias=False) with c not matching).", "best_answers_score":0.7157, "library_name":"pytorch", "question_url":"https:\/\/stackoverflow.com\/questions\/66600362\/runtimeerror-cuda-error-cublas-status-execution-failed-when-calling-cublassge", "best_answers_votes":27, "question_length":2733, "response_length":230 }, { "question":"Parsing CSV into Pytorch tensors I have a CSV files with all numeric values except the header row. 
When trying to build tensors, I get the following exception: ``` Traceback (most recent call last): File \"pytorch.py\", line 14, in test_tensor = torch.tensor(test) ValueError: could not determine the shape of object type 'DataFrame' ``` This is my code: ``` import torch import dask.dataframe as dd device = torch.device(\"cuda:0\") print(\"Loading CSV...\") test = dd.read_csv(\"test.csv\", encoding = \"UTF-8\") train = dd.read_csv(\"train.csv\", encoding = \"UTF-8\") print(\"Converting to Tensor...\") test_tensor = torch.tensor(test) train_tensor = torch.tensor(train) ``` Using pandas instead of Dask for CSV parsing produced the same error. I also tried to specify dtype=torch.float64 inside the call to torch.tensor(data), but got the same error again.", "response":"Try converting it to an array first: ``` test_tensor = torch.Tensor(test.values) ```", "best_answers_score":0.7121, "library_name":"pytorch", "question_url":"https:\/\/stackoverflow.com\/questions\/51858067\/parsing-csv-into-pytorch-tensors", "best_answers_votes":24, "question_length":846, "response_length":84 }, { "question":"How to load a list of numpy arrays to pytorch dataset loader? I have a huge list of numpy arrays, where each array represents an image and I want to load it using torch.utils.data.Dataloader object. But the documentation of torch.utils.data.Dataloader mentions that it loads data directly from a folder. How do I modify it for my cause? I am new to pytorch and any help would be greatly appreciated. my numpy array for a single image looks something like this. The image is RBG image. ``` [[[ 70 82 94] [ 67 81 93] [ 66 82 94] ..., [182 182 188] [183 183 189] [188 186 192]] [[ 66 80 92] [ 62 78 91] [ 64 79 95] ..., [176 176 182] [178 178 184] [180 180 186]] [[ 62 82 93] [ 62 81 96] [ 65 80 99] ..., [169 172 177] [173 173 179] [172 172 178]] ..., ```", "response":"I think what DataLoader actually requires is an input that subclasses Dataset. 
You can either write your own dataset class that subclasses Dataset or use TensorDataset as I have done below: ``` import torch import numpy as np from torch.utils.data import TensorDataset, DataLoader my_x = [np.array([[1.0,2],[3,4]]),np.array([[5.,6],[7,8]])] # a list of numpy arrays my_y = [np.array([4.]), np.array([2.])] # another list of numpy arrays (targets) tensor_x = torch.Tensor(my_x) # transform to torch tensor tensor_y = torch.Tensor(my_y) my_dataset = TensorDataset(tensor_x,tensor_y) # create your dataset my_dataloader = DataLoader(my_dataset) # create your dataloader ``` Works for me.", "best_answers_score":0.7119, "library_name":"pytorch", "question_url":"https:\/\/stackoverflow.com\/questions\/44429199\/how-to-load-a-list-of-numpy-arrays-to-pytorch-dataset-loader", "best_answers_votes":202, "question_length":753, "response_length":682 }, { "question":"RuntimeError: module must have its parameters and buffers on device cuda:1 (device_ids[0]) but found one of them on device: cuda:2 I have 4 GPUs (0,1,2,3) and I want to run one Jupyter notebook on GPU 2 and another one on GPU 0.
Thus, after executing, ``` export CUDA_VISIBLE_DEVICES=0,1,2,3 ``` for the GPU 2 notebook I do, ``` device = torch.device( f'cuda:{2}' if torch.cuda.is_available() else 'cpu') device, torch.cuda.device_count(), torch.cuda.is_available(), torch.cuda.current_device(), torch.cuda.get_device_properties(1) ``` and after creating a new model or loading one, ``` model = nn.DataParallel( model, device_ids = [ 0, 1, 2, 3]) model = model.to( device) ``` Then, when I start training the model, I get, ``` RuntimeError Traceback (most recent call last) in 46 with torch.set_grad_enabled( phase == 'train'): 47 # [N, Nclass, H, W] ---> 48 prediction = model(X) 49 # print( prediction.shape, y.shape) 50 loss_matrix = criterion( prediction, y) ~\/.local\/lib\/python3.6\/site-packages\/torch\/nn\/modules\/module.py in __call__(self, *input, **kwargs) 491 result = self._slow_forward(*input, **kwargs) 492 else: --> 493 result = self.forward(*input, **kwargs) 494 for hook in self._forward_hooks.values(): 495 hook_result = hook(self, input, result) ~\/.local\/lib\/python3.6\/site-packages\/torch\/nn\/parallel\/data_parallel.py in forward(self, *inputs, **kwargs) 144 raise RuntimeError(\"module must have its parameters and buffers \" 145 \"on device {} (device_ids[0]) but found one of \" --> 146 \"them on device: {}\".format(self.src_device_obj, t.device)) 147 148 inputs, kwargs = self.scatter(inputs, kwargs, self.device_ids) RuntimeError: module must have its parameters and buffers on device cuda:0 (device_ids[0]) but found one of them on device: cuda:2 ```", "response":"DataParallel requires every input tensor be provided on the first device in its device_ids list. It basically uses that device as a staging area before scattering to the other GPUs and it's the device where final outputs are gathered before returning from forward. 
If you want device 2 to be the primary device, then you just need to put it at the front of the list as follows ``` model = nn.DataParallel(model, device_ids = [2, 0, 1, 3]) model.to(f'cuda:{model.device_ids[0]}') ``` After which all tensors provided to model should be on the first device as well. ``` x = ... # input tensor x = x.to(f'cuda:{model.device_ids[0]}') y = model(x) ```", "best_answers_score":0.7106, "library_name":"pytorch", "question_url":"https:\/\/stackoverflow.com\/questions\/59249563\/runtimeerror-module-must-have-its-parameters-and-buffers-on-device-cuda1-devi", "best_answers_votes":37, "question_length":1767, "response_length":646 }, { "question":"Indexing a multi-dimensional tensor with a tensor in PyTorch I have the following code: ``` a = torch.randint(0,10,[3,3,3,3]) b = torch.LongTensor([1,1,1,1]) ``` I have a multi-dimensional index b and want to use it to select a single cell in a. If b wasn't a tensor, I could do: ``` a[1,1,1,1] ``` Which returns the correct cell, but: ``` a[b] ``` Doesn't work, because it just selects a[1] four times. How can I do this? Thanks", "response":"You can split b into 4 using chunk, and then use the chunked b to index the specific element you want: ``` >> a = torch.arange(3*3*3*3).view(3,3,3,3) >> b = torch.LongTensor([[1,1,1,1], [2,2,2,2], [0, 0, 0, 0]]).t() >> a[b.chunk(chunks=4, dim=0)] # here's the trick! Out[24]: tensor([[40, 80, 0]]) ``` What's nice about it is that it can be easily generalized to any dimension of a, you just need to make the number of chunks equal the dimension of a.
I want the columns that correspond to a 1 value in the index vector. Both slicing and logical indexing are possible, but are they possible together? If so, how? My attempt keeps throwing the unhelpful error TypeError: indexing a tensor with an object of type ByteTensor. The only supported types are integers, slices, numpy scalars and torch.LongTensor or torch.ByteTensor as the only argument. MCVE Desired Output ``` import torch C = torch.LongTensor([[1, 3], [4, 6]]) # 1 3 # 4 6 ``` Logical indexing on the columns only: ``` A_log = torch.ByteTensor([1, 0, 1]) # the logical index B = torch.LongTensor([[1, 2, 3], [4, 5, 6]]) C = B[:, A_log] # Throws error ``` If the vectors are the same size, logical indexing works: ``` B_truncated = torch.LongTensor([1, 2, 3]) C = B_truncated[A_log] ``` And I can get the desired result by repeating the logical index so that it has the same size as the tensor I am indexing, but then I also have to reshape the output. ``` C = B[A_log.repeat(2, 1)] # [torch.LongTensor of size 4] C = C.resize_(2, 2) ``` I also tried using a list of indices: ``` A_idx = torch.LongTensor([0, 2]) # the index vector C = B[:, A_idx] # Throws error ``` If I want contiguous ranges of indices, slicing works: ``` C = B[:, 1:2] ```", "response":"I think this is implemented as the index_select function, you can try ``` import torch A_idx = torch.LongTensor([0, 2]) # the index vector B = torch.LongTensor([[1, 2, 3], [4, 5, 6]]) C = B.index_select(1, A_idx) # 1 3 # 4 6 ```", "best_answers_score":0.7071, "library_name":"pytorch", "question_url":"https:\/\/stackoverflow.com\/questions\/43989310\/can-i-slice-tensors-with-logical-indexing-or-lists-of-indices", "best_answers_votes":28, "question_length":1390, "response_length":228 }, { "question":"How to convert torch tensor to pandas dataframe? I'd like to convert a torch tensor to pandas dataframe but by using pd.DataFrame I'm getting a dataframe filled with tensors instead of numeric values. 
```py import torch import pandas as pd x = torch.rand(4,4) px = pd.DataFrame(x) ``` Here's what I get when clicking on px in the variable explorer: ``` 0 1 2 3 tensor(0.3880) tensor(0.4598) tensor(0.4239) tensor(0.7376) tensor(0.4174) tensor(0.9581) tensor(0.0987) tensor(0.6359) tensor(0.6199) tensor(0.8235) tensor(0.9947) tensor(0.9679) tensor(0.7164) tensor(0.9270) tensor(0.7853) tensor(0.6921) ```", "response":"I found one possible way by converting torch first to numpy: ``` import torch import pandas as pd x = torch.rand(4,4) px = pd.DataFrame(x.numpy()) ```", "best_answers_score":0.7054, "library_name":"pytorch", "question_url":"https:\/\/stackoverflow.com\/questions\/57942487\/how-to-convert-torch-tensor-to-pandas-dataframe", "best_answers_votes":40, "question_length":604, "response_length":150 }, { "question":"what the difference between att_mask and key_padding_mask in MultiHeadAttnetion What the difference between att_mask and key_padding_mask in MultiHeadAttnetion of pytorch: key_padding_mask \u2013 if provided, specified padding elements in the key will be ignored by the attention. When given a binary mask and a value is True, the corresponding value on the attention layer will be ignored. When given a byte mask and a value is non-zero, the corresponding value on the attention layer will be ignored attn_mask \u2013 2D or 3D mask that prevents attention to certain positions. A 2D mask will be broadcasted for all the batches while a 3D mask allows to specify a different mask for the entries of each batch. Thanks in advance.", "response":"The key_padding_mask is used to mask out positions that are padding, i.e., after the end of the input sequence. This is always specific to the input batch and depends on how long are the sequence in the batch compared to the longest one. It is a 2D tensor of shape batch size \u00d7 input length. On the other hand, attn_mask says what key-value pairs are valid. 
In a Transformer decoder, a triangle mask is used to simulate the inference time and prevent attending to the \"future\" positions. This is what att_mask is usually used for. If it is a 2D tensor, the shape is input length \u00d7 input length. You can also have a mask that is specific to every item in a batch. In that case, you can use a 3D tensor of shape (batch size \u00d7 num heads) \u00d7 input length \u00d7 input length. (So, in theory, you can simulate key_padding_mask with a 3D att_mask.)", "best_answers_score":0.7051, "library_name":"pytorch", "question_url":"https:\/\/stackoverflow.com\/questions\/62629644\/what-the-difference-between-att-mask-and-key-padding-mask-in-multiheadattnetion", "best_answers_votes":30, "question_length":719, "response_length":840 }, { "question":"PyTorch torch.max over multiple dimensions I have a tensor with x.shape = [3, 2, 2]. ```py import torch x = torch.tensor([ [[-0.3000, -0.2926],[-0.2705, -0.2632]], [[-0.1821, -0.1747],[-0.1526, -0.1453]], [[-0.0642, -0.0568],[-0.0347, -0.0274]] ]) ``` I need to take .max() over the 2nd and 3rd dimensions. I expect something like this [-0.2632, -0.1453, -0.0274] as output. I tried to use: x.max(dim=(1,2)), but this causes an error.", "response":"Now, you can do this. The PR was merged (Aug 28 2020) and it is now available in the nightly release. Simply use torch.amax(): ```py import torch x = torch.tensor([ [[-0.3000, -0.2926],[-0.2705, -0.2632]], [[-0.1821, -0.1747],[-0.1526, -0.1453]], [[-0.0642, -0.0568],[-0.0347, -0.0274]] ]) print(torch.amax(x, dim=(1, 2))) # Output: # >>> tensor([-0.2632, -0.1453, -0.0274]) ``` Original Answer As of today (April 11, 2020), there is no way to do .min() or .max() over multiple dimensions in PyTorch. There is an open issue about it that you can follow and see if it ever gets implemented.
A workaround in your case would be: ```py import torch x = torch.tensor([ [[-0.3000, -0.2926],[-0.2705, -0.2632]], [[-0.1821, -0.1747],[-0.1526, -0.1453]], [[-0.0642, -0.0568],[-0.0347, -0.0274]] ]) print(x.view(x.size(0), -1).max(dim=-1)) # output: # >>> values=tensor([-0.2632, -0.1453, -0.0274]), # >>> indices=tensor([3, 3, 3])) ``` So, if you need only the values: x.view(x.size(0), -1).max(dim=-1).values. If x is not a contiguous tensor, then .view() will fail. In this case, you should use .reshape() instead. Update August 26, 2020 This feature is being implemented in PR#43092 and the functions will be called amin and amax. They will return only the values. This is probably being merged soon, so you might be able to access these functions on the nightly build by the time you're reading this :) Have fun.", "best_answers_score":0.7044, "library_name":"pytorch", "question_url":"https:\/\/stackoverflow.com\/questions\/61156894\/pytorch-torch-max-over-multiple-dimensions", "best_answers_votes":44, "question_length":426, "response_length":1407 }, { "question":"Torch sum a tensor along an axis How do I sum over the columns of a tensor? ``` torch.Size([10, 100]) ---> torch.Size([10]) ```", "response":"The simplest and best solution is to use torch.sum(). To sum all elements of a tensor: ``` torch.sum(x) # gives back a scalar ``` To sum over all rows (i.e. for each column): ``` torch.sum(x, dim=0) # size = [ncol] ``` To sum over all columns (i.e. 
for each row): ``` torch.sum(x, dim=1) # size = [nrow] ``` It should be noted that the dimension summed over is eliminated from the resulting tensor.", "best_answers_score":0.7034, "library_name":"pytorch", "question_url":"https:\/\/stackoverflow.com\/questions\/44790670\/torch-sum-a-tensor-along-an-axis", "best_answers_votes":95, "question_length":127, "response_length":398 }, { "question":"How to solve the pytorch RuntimeError: Numpy is not available without upgrading numpy to the latest version because of other dependencies I am running a simple CNN using Pytorch for some audio classification on my Raspberry Pi 4 on Python 3.9.2 (64-bit). For the audio manipulation needed I am using librosa. librosa depends on the numba package which is only compatible with numpy version <= 1.20. When running my code, the line ``` spect_tensor = torch.from_numpy(spect).double() ``` throws the RuntimeError: ``` RuntimeError: Numpy is not available ``` Searching the internet for solutions, I found that upgrading Numpy to the latest version resolves that specific error, but throws another one, because Numba only works with Numpy <= 1.20. Is there a solution to this problem which does not include searching for an alternative to using librosa?", "response":"Getting the same error for numpy 2.0 as of March 18th 2025. The current solution is: ``` pip install \"numpy<2\" ```", "best_answers_score":0.7014, "library_name":"pytorch", "question_url":"https:\/\/stackoverflow.com\/questions\/71689095\/how-to-solve-the-pytorch-runtimeerror-numpy-is-not-available-without-upgrading", "best_answers_votes":58, "question_length":849, "response_length":107 }, { "question":"Autograd.grad() for Tensor in pytorch I want to compute the gradient between two tensors in a net. The input X tensor (batch size x m) is sent through a set of convolutional layers which give me back an output Y tensor (batch size x n). I\u2019m creating a new loss and I would like to know the gradient of Y w.r.t. X.
Something that in tensorflow would be like: tf.gradients(ys=Y, xs=X) Unfortunately, I\u2019ve been making tests with torch.autograd.grad(), but I could not figure out how to do it. I get errors like: \u201cRunTimeerror: grad can be implicitly created only for scalar outputs\u201d. What should be the inputs in torch.autograd.grad() if I want to know the gradient of Y w.r.t. X?", "response":"Let's start from a simple working example with a plain loss function and regular backward. We will build a short computational graph and do some grad computations on it. Code: ```py import torch from torch.autograd import grad import torch.nn as nn # Create some dummy data. x = torch.ones(2, 2, requires_grad=True) gt = torch.ones_like(x) * 16 - 0.5 # \"ground-truths\" # We will use MSELoss as an example. loss_fn = nn.MSELoss() # Do some computations. v = x + 2 y = v ** 2 # Compute loss. loss = loss_fn(y, gt) print(f'Loss: {loss}') # Now compute gradients: d_loss_dx = grad(outputs=loss, inputs=x) print(f'dloss\/dx:\\n {d_loss_dx}') ``` Output: ```py Loss: 42.25 dloss\/dx: (tensor([[-19.5000, -19.5000], [-19.5000, -19.5000]]),) ``` Ok, this works! Now let's try to reproduce the error \"grad can be implicitly created only for scalar outputs\". As you can notice, the loss in the previous example is a scalar. backward() and grad() by default deal with a single scalar value: loss.backward(torch.tensor(1.)). If you try to pass a tensor with more values you will get an error.
Code: ```py v = x + 2 y = v ** 2 try: dy_hat_dx = grad(outputs=y, inputs=x) except RuntimeError as err: print(err) ``` Output: grad can be implicitly created only for scalar outputs Therefore, when using grad() you need to specify grad_outputs parameter as follows: Code: ```py v = x + 2 y = v ** 2 dy_dx = grad(outputs=y, inputs=x, grad_outputs=torch.ones_like(y)) print(f'dy\/dx:\\n {dy_dx}') dv_dx = grad(outputs=v, inputs=x, grad_outputs=torch.ones_like(v)) print(f'dv\/dx:\\n {dv_dx}') ``` Output: ```py dy\/dx: (tensor([[6., 6.],[6., 6.]]),) dv\/dx: (tensor([[1., 1.], [1., 1.]]),) ``` NOTE: If you are using backward() instead, simply do y.backward(torch.ones_like(y)).", "best_answers_score":0.7009, "library_name":"pytorch", "question_url":"https:\/\/stackoverflow.com\/questions\/54754153\/autograd-grad-for-tensor-in-pytorch", "best_answers_votes":43, "question_length":677, "response_length":1728 }, { "question":"How do I save a trained model in PyTorch? How do I save a trained model in PyTorch? I have read that: torch.save()\/torch.load() is for saving\/loading a serializable object. model.state_dict()\/model.load_state_dict() is for saving\/loading model state.", "response":"Found this page on their github repo: Recommended approach for saving a model There are two main approaches for serializing and restoring a model. The first (recommended) saves and loads only the model parameters: ```py torch.save(the_model.state_dict(), PATH) ``` Then later: ```py the_model = TheModelClass(*args, **kwargs) the_model.load_state_dict(torch.load(PATH)) ``` The second saves and loads the entire model: ```py torch.save(the_model, PATH) ``` Then later: ``` the_model = torch.load(PATH) ``` However in this case, the serialized data is bound to the specific classes and the exact directory structure used, so it can break in various ways when used in other projects, or after some serious refactors. 
See also: Save and Load the Model section from the official PyTorch tutorials.", "best_answers_score":0.6962, "library_name":"pytorch", "question_url":"https:\/\/stackoverflow.com\/questions\/42703500\/how-do-i-save-a-trained-model-in-pytorch", "best_answers_votes":407, "question_length":250, "response_length":793 }, { "question":"Pytorch doesn't support one-hot vector? I am very confused by how Pytorch deals with one-hot vectors. In this tutorial, the neural network will generate a one-hot vector as its output. As far as I understand, the schematic structure of the neural network in the tutorial should be like: However, the labels are not in one-hot vector format. I get the following sizes ``` print(labels.size()) print(outputs.size()) output>>> torch.Size([4]) output>>> torch.Size([4, 10]) ``` Miraculously, when I pass the outputs and labels to criterion=CrossEntropyLoss(), there's no error at all. ``` loss = criterion(outputs, labels) # How come it has no error? ``` My hypothesis: Maybe pytorch automatically converts the labels to one-hot vector form. So, I try to convert labels to one-hot vectors before passing them to the loss function. ``` def to_one_hot_vector(num_class, label): b = np.zeros((label.shape[0], num_class)) b[np.arange(label.shape[0]), label] = 1 return b labels_one_hot = to_one_hot_vector(10,labels) labels_one_hot = torch.Tensor(labels_one_hot) labels_one_hot = labels_one_hot.type(torch.LongTensor) loss = criterion(outputs, labels_one_hot) # Now it gives me error ``` However, I got the following error RuntimeError: multi-target not supported at \/opt\/pytorch\/pytorch\/aten\/src\/THCUNN\/generic\/ClassNLLCriterion.cu:15 So, one-hot vectors are not supported in Pytorch? How does Pytorch calculate the cross entropy for the two tensor outputs = [1,0,0],[0,0,1] and labels = [0,2] ?
It doesn't make sense to me at all at the moment.", "response":"PyTorch states in its documentation for CrossEntropyLoss that This criterion expects a class index (0 to C-1) as the target for each value of a 1D tensor of size minibatch In other words, it has your to_one_hot_vector function conceptually built into CEL and does not expose the one-hot API. Notice that one-hot vectors are memory inefficient compared to storing class labels. If you are given one-hot vectors and need to go to class-label format (for instance to be compatible with CEL), you can use argmax like below: ``` import torch labels = torch.tensor([1, 2, 3, 5]) one_hot = torch.zeros(4, 6) one_hot[torch.arange(4), labels] = 1 reverted = torch.argmax(one_hot, dim=1) assert (labels == reverted).all().item() ```", "best_answers_score":0.6855, "library_name":"pytorch", "question_url":"https:\/\/stackoverflow.com\/questions\/55549843\/pytorch-doesnt-support-one-hot-vector", "best_answers_votes":37, "question_length":1534, "response_length":721 }, { "question":"AttributeError: module 'torchtext.data' has no attribute 'Field' I want to run a git project that uses pytorch and torchtext, but when I run it, it raises an error: ``` File \"main.py\", line 60, in main() File \"main.py\", line 50, in main train_iters, dev_iters, test_iters, vocab = load_dataset(config) File \"\/home\/esmailza\/style transfer\/style-transformer\/data.py\", line 23, in load_dataset TEXT = data.Field(batch_first=True, eos_token='') AttributeError: module 'torchtext.data' has no attribute 'Field' ``` torch version = 1.8.0 torchtext version = 0.9 ``` def load_dataset(config, train_pos='train.pos', train_neg='train.neg', dev_pos='dev.pos', dev_neg='dev.neg', test_pos='test.pos', test_neg='test.neg'): root = config.data_path TEXT = data.Field(batch_first=True, eos_token='') dataset_fn = lambda name: data.TabularDataset( path=root + name, format='tsv', fields=[('text', TEXT)] ) ```
torchtext.data.Field -> torchtext.legacy.data.Field This means, all features are still available, but within torchtext.legacy instead of torchtext. torchtext.data.Field has been moved to torchtext.legacy.data.Field And the imports would change this way: ``` from torchtext.legacy import data ```", "best_answers_score":0.6828, "library_name":"pytorch", "question_url":"https:\/\/stackoverflow.com\/questions\/66516388\/attributeerror-module-torchtext-data-has-no-attribute-field", "best_answers_votes":47, "question_length":885, "response_length":330 }, { "question":"Install PyTorch from requirements.txt Torch documentation says use ``` pip install torch==1.4.0+cpu torchvision==0.5.0+cpu -f https:\/\/download.pytorch.org\/whl\/torch_stable.html ``` to install the latest version of PyTorch. This works when I do it manually but when I add it to req.txt and do pip install -r req.txt, it fails and says ERROR: No matching distribution. Edit: adding the whole line from req.txt and error here. ``` torch==1.4.0+cpu -f https:\/\/download.pytorch.org\/whl\/torch_stable.html torchvision==0.5.0+cpu -f https:\/\/download.pytorch.org\/whl\/torch_stable.htmltorch==1.4.0+cpu ``` ``` ERROR: Could not find a version that satisfies the requirement torch==1.4.0+cpu (from -r requirements.txt (line 1)) (from versions: 0.1.2, 0.1.2.post1, 0.1.2.post2, 0.3.1, 0.4.0, 0.4.1, 1.0.0, 1.0.1, 1.0.1.post2, 1.1.0, 1.2.0, 1.3.0, 1.3.1, 1.4.0) ERROR: No matching distribution found for torch==1.4.0+cpu (from -r requirements.txt (line 1)) ```", "response":"Add --find-links in requirements.txt before torch ``` --find-links https:\/\/download.pytorch.org\/whl\/torch_stable.html torch==1.2.0+cpu ``` Source: https:\/\/github.com\/pytorch\/pytorch\/issues\/29745#issuecomment-553588171", "best_answers_score":0.6827, "library_name":"pytorch", "question_url":"https:\/\/stackoverflow.com\/questions\/60912744\/install-pytorch-from-requirements-txt", "best_answers_votes":73, "question_length":946, 
"response_length":217 }, { "question":"Pytorch softmax: What dimension to use? The function torch.nn.functional.softmax takes two parameters: input and dim. According to its documentation, the softmax operation is applied to all slices of input along the specified dim, and will rescale them so that the elements lie in the range (0, 1) and sum to 1. Let input be: ``` input = torch.randn((3, 4, 5, 6)) ``` Suppose I want the following, so that every entry in that array is 1: ``` sum = torch.sum(input, dim = 3) # sum's size is (3, 4, 5, 1) ``` How should I apply softmax? ``` softmax(input, dim = 0) # Way Number 0 softmax(input, dim = 1) # Way Number 1 softmax(input, dim = 2) # Way Number 2 softmax(input, dim = 3) # Way Number 3 ``` My intuition tells me that is the last one, but I am not sure. English is not my first language and the use of the word along seemed confusing to me because of that. I am not very clear on what \"along\" means, so I will use an example that could clarify things. Suppose we have a tensor of size (s1, s2, s3, s4), and I want this to happen", "response":"Steven's answer is not correct. See the snapshot below. It is actually the reverse way. Image transcribed as code: ``` >>> x = torch.tensor([[1,2],[3,4]],dtype=torch.float) >>> F.softmax(x,dim=0) tensor([[0.1192, 0.1192], [0.8808, 0.8808]]) >>> F.softmax(x,dim=1) tensor([[0.2689, 0.7311], [0.2689, 0.7311]]) ```", "best_answers_score":0.6789, "library_name":"pytorch", "question_url":"https:\/\/stackoverflow.com\/questions\/49036993\/pytorch-softmax-what-dimension-to-use", "best_answers_votes":52, "question_length":1036, "response_length":312 }, { "question":"Why do we need to explicitly call zero_grad()? [duplicate] This question already has answers here: Why do we need to call zero_grad() in PyTorch? (9 answers) Closed 5 years ago. Why do we need to explicitly zero the gradients in PyTorch? Why can't gradients be zeroed when loss.backward() is called? 
What scenario is served by keeping the gradients on the graph and asking the user to explicitly zero the gradients?", "response":"We explicitly need to call zero_grad() because, after loss.backward() (when gradients are computed), we need to use optimizer.step() to proceed with gradient descent. More specifically, the gradients are not automatically zeroed because these two operations, loss.backward() and optimizer.step(), are separated, and optimizer.step() requires the just-computed gradients. In addition, sometimes, we need to accumulate gradients over several batches; to do that, we can simply call backward multiple times and optimize once.", "best_answers_score":0.6736, "library_name":"pytorch", "question_url":"https:\/\/stackoverflow.com\/questions\/44732217\/why-do-we-need-to-explicitly-call-zero-grad", "best_answers_votes":65, "question_length":415, "response_length":514 }, { "question":"What does `gather()` do in PyTorch in layman terms? What does torch.gather() do? This answer is hard to understand.", "response":"torch.gather creates a new tensor from the input tensor by taking the values from each row along the input dimension dim. The values in torch.LongTensor, passed as index, specify which value to take from each 'row'. The dimension of the output tensor is the same as the dimension of the index tensor. The following illustration from the official docs explains it more clearly: (Note: In the illustration, indexing starts from 1 and not 0). In the first example, the dimension given is along rows (top to bottom), so for position (1,1) of the result, it takes the row value from the index, which is 1. The value at (1,1) in the source is 1, so the result outputs 1 at (1,1). Similarly, for (2,2), the row value from the index is 3. The value at (3,2) in src is 8, hence the output is 8, and so on.
Similarly, for the second example, indexing is along columns, and hence at position (2,2) of the result, the column value from the index is 3, so the value 6 at (2,3) in src is taken and output to the result at (2,2)", "best_answers_score":0.673, "library_name":"pytorch", "question_url":"https:\/\/stackoverflow.com\/questions\/50999977\/what-does-gather-do-in-pytorch-in-layman-terms", "best_answers_votes":356, "question_length":115, "response_length":985 }, { "question":"What does `-1` of `view()` mean in PyTorch? As the question says, what does -1 of view() do in PyTorch? ```py >>> a = torch.arange(1, 17) >>> a tensor([ 1., 2., 3., 4., 5., 6., 7., 8., 9., 10., 11., 12., 13., 14., 15., 16.]) >>> a.view(1,-1) tensor([[ 1., 2., 3., 4., 5., 6., 7., 8., 9., 10., 11., 12., 13., 14., 15., 16.]]) >>> a.view(-1,1) tensor([[ 1.], [ 2.], [ 3.], [ 4.], [ 5.], [ 6.], [ 7.], [ 8.], [ 9.], [ 10.], [ 11.], [ 12.], [ 13.], [ 14.], [ 15.], [ 16.]]) ``` Does -1 of view() in PyTorch generate an additional dimension? Does -1 of view() in PyTorch behave the same as -1 of reshape() in NumPy?", "response":"Yes, it does behave like -1 in numpy.reshape(), i.e. the actual value for this dimension will be inferred so that the number of elements in the view matches the original number of elements.
For instance: ```py import torch x = torch.arange(6) print(x.view(3, -1)) # inferred size will be 2 as 6 \/ 3 = 2 # tensor([[ 0., 1.], # [ 2., 3.], # [ 4., 5.]]) print(x.view(-1, 6)) # inferred size will be 1 as 6 \/ 6 = 1 # tensor([[ 0., 1., 2., 3., 4., 5.]]) print(x.view(1, -1, 2)) # inferred size will be 3 as 6 \/ (1 * 2) = 3 # tensor([[[ 0., 1.], # [ 2., 3.], # [ 4., 5.]]]) # print(x.view(-1, 5)) # throw error as there's no int N so that 5 * N = 6 # RuntimeError: invalid argument 2: size '[-1 x 5]' is invalid for input with 6 elements print(x.view(-1, -1, 3)) # throw error as only one dimension can be inferred # RuntimeError: invalid argument 1: only one dimension can be inferred ```", "best_answers_score":0.6682, "library_name":"pytorch", "question_url":"https:\/\/stackoverflow.com\/questions\/50792316\/what-does-1-of-view-mean-in-pytorch", "best_answers_votes":71, "question_length":607, "response_length":883 }, { "question":"When should I use nn.ModuleList and when should I use nn.Sequential? I am new to Pytorch and one thing that I don't quite understand is the usage of nn.ModuleList and nn.Sequential. Can I know when I should use one over the other? Thanks.", "response":"nn.ModuleList does not have a forward method, but nn.Sequential does have one. So you can wrap several modules in nn.Sequential and run it on the input. nn.ModuleList is just a Python list (though it's useful since the parameters can be discovered and trained via an optimizer). While nn.Sequential is a module that sequentially runs the component on the input.", "best_answers_score":0.6661, "library_name":"pytorch", "question_url":"https:\/\/stackoverflow.com\/questions\/47544051\/when-should-i-use-nn-modulelist-and-when-should-i-use-nn-sequential", "best_answers_votes":43, "question_length":238, "response_length":361 }, { "question":"PyTorch - How to get learning rate during training? While training, I'd like to know the value of learning_rate. What should I do? 
It's my code, like this: ``` my_optimizer = torch.optim.SGD(my_model.parameters(), lr=0.001, momentum=0.99, weight_decay=2e-3) ``` Thank you.", "response":"For only one parameter group like in the example you've given, you can use this function and call it during training to get the current learning rate: ``` def get_lr(optimizer): for param_group in optimizer.param_groups: return param_group['lr'] ```", "best_answers_score":0.6642, "library_name":"pytorch", "question_url":"https:\/\/stackoverflow.com\/questions\/52660985\/pytorch-how-to-get-learning-rate-during-training", "best_answers_votes":48, "question_length":272, "response_length":249 }, { "question":"how to flatten input in `nn.Sequential` in Pytorch how to flatten input inside the nn.Sequential ``` Model = nn.Sequential(x.view(x.shape[0],-1), nn.Linear(784,256), nn.ReLU(), nn.Linear(256,128), nn.ReLU(), nn.Linear(128,64), nn.ReLU(), nn.Linear(64,10), nn.LogSoftmax(dim=1)) ```", "response":"You can create a new module\/class as below and use it in the sequential as you are using other modules (call Flatten()). ``` class Flatten(torch.nn.Module): def forward(self, x): batch_size = x.shape[0] return x.view(batch_size, -1) ``` Ref: https:\/\/discuss.pytorch.org\/t\/flatten-layer-of-pytorch-build-by-sequential-container\/5983 EDIT: Flatten is part of torch now. See https:\/\/pytorch.org\/docs\/stable\/nn.html?highlight=flatten#torch.nn.Flatten", "best_answers_score":0.6636, "library_name":"pytorch", "question_url":"https:\/\/stackoverflow.com\/questions\/53953460\/how-to-flatten-input-in-nn-sequential-in-pytorch", "best_answers_votes":33, "question_length":281, "response_length":446 }, { "question":"Check the total number of parameters in a PyTorch model How do I count the total number of parameters in a PyTorch model? 
Something similar to model.count_params() in Keras.", "response":"PyTorch doesn't have a function to calculate the total number of parameters as Keras does, but it's possible to sum the number of elements for every parameter group: ``` pytorch_total_params = sum(p.numel() for p in model.parameters()) ``` If you want to calculate only the trainable parameters: ``` pytorch_total_params = sum(p.numel() for p in model.parameters() if p.requires_grad) ``` Answer inspired by this answer on PyTorch Forums.", "best_answers_score":0.6608, "library_name":"pytorch", "question_url":"https:\/\/stackoverflow.com\/questions\/49201236\/check-the-total-number-of-parameters-in-a-pytorch-model", "best_answers_votes":351, "question_length":173, "response_length":438 }, { "question":"How to use `stack()` in PyTorch? How do I use torch.stack() to stack two tensors with shapes a.shape = (2, 3, 4) and b.shape = (2, 3) without an in-place operation?", "response":"Stacking requires same number of dimensions. One way would be to unsqueeze and stack. For example: ``` a.size() # 2, 3, 4 b.size() # 2, 3 b = torch.unsqueeze(b, dim=2) # 2, 3, 1 # torch.unsqueeze(b, dim=-1) does the same thing torch.stack([a, b], dim=2) # 2, 3, 5 ```", "best_answers_score":0.6599, "library_name":"pytorch", "question_url":"https:\/\/stackoverflow.com\/questions\/52288635\/how-to-use-stack-in-pytorch", "best_answers_votes":37, "question_length":164, "response_length":267 }, { "question":"What's the difference between \"hidden\" and \"output\" in PyTorch LSTM? I'm having trouble understanding the documentation for PyTorch's LSTM module (and also RNN and GRU, which are similar). Regarding the outputs, it says: Outputs: output, (h_n, c_n) output (seq_len, batch, hidden_size * num_directions): tensor containing the output features (h_t) from the last layer of the RNN, for each t. If a torch.nn.utils.rnn.PackedSequence has been given as the input, the output will also be a packed sequence. 
h_n (num_layers * num_directions, batch, hidden_size): tensor containing the hidden state for t=seq_len c_n (num_layers * num_directions, batch, hidden_size): tensor containing the cell state for t=seq_len It seems that the variables output and h_n both give the values of the hidden state. Does h_n just redundantly provide the last time step that's already included in output, or is there something more to it than that?", "response":"I made a diagram. The names follow the PyTorch docs, although I renamed num_layers to w. output comprises all the hidden states in the last layer (\"last\" depth-wise, not time-wise). (h_n, c_n) comprises the hidden states after the last timestep, t = n, so you could potentially feed them into another LSTM. The batch dimension is not included.", "best_answers_score":0.6588, "library_name":"pytorch", "question_url":"https:\/\/stackoverflow.com\/questions\/48302810\/whats-the-difference-between-hidden-and-output-in-pytorch-lstm", "best_answers_votes":252, "question_length":925, "response_length":343 }, { "question":"Pytorch beginner : tensor.new method everyone, I have a small question. What is the purpose of the method tensor.new(..) in Pytorch, I didn't find anything in the documentation. It looks like it creates a new Tensor (like the name suggests), but why we don't just use torch.Tensor constructors instead of using this new method that requires an existing tensor. Thank you in advance.", "response":"As the documentation of tensor.new() says: Constructs a new tensor of the same data type as self tensor. Also note: For CUDA tensors, this method will create new tensor on the same device as this tensor.", "best_answers_score":0.6533, "library_name":"pytorch", "question_url":"https:\/\/stackoverflow.com\/questions\/49263588\/pytorch-beginner-tensor-new-method", "best_answers_votes":23, "question_length":382, "response_length":203 }, { "question":"How do I split a custom dataset into training and test datasets? 
``` import pandas as pd import numpy as np import cv2 from torch.utils.data.dataset import Dataset class CustomDatasetFromCSV(Dataset): def __init__(self, csv_path, transform=None): self.data = pd.read_csv(csv_path) self.labels = pd.get_dummies(self.data['emotion']).as_matrix() self.height = 48 self.width = 48 self.transform = transform def __getitem__(self, index): pixels = self.data['pixels'].tolist() faces = [] for pixel_sequence in pixels: face = [int(pixel) for pixel in pixel_sequence.split(' ')] # print(np.asarray(face).shape) face = np.asarray(face).reshape(self.width, self.height) face = cv2.resize(face.astype('uint8'), (self.width, self.height)) faces.append(face.astype('float32')) faces = np.asarray(faces) faces = np.expand_dims(faces, -1) return faces, self.labels def __len__(self): return len(self.data) ``` This is what I could manage to do by using references from other repositories. However, I want to split this dataset into train and test. How can I do that inside this class? Or do I need to make a separate class to do that?", "response":"Starting in PyTorch v0.4.1, you can use random_split. Since v1.13.0 You can specify the percentages as floats, they should sum up a value of 1. ```py train_dataset, test_dataset = torch.utils.data.random_split(full_dataset, [0.8, 0.2]) ``` Since v0.4.1 You can specify precisely how many items from your datasets should be delegated to each subset. ```py train_size = int(0.8 * len(full_dataset)) test_size = len(full_dataset) - train_size train_dataset, test_dataset = torch.utils.data.random_split(full_dataset, [train_size, test_size]) ```", "best_answers_score":0.6517, "library_name":"pytorch", "question_url":"https:\/\/stackoverflow.com\/questions\/50544730\/how-do-i-split-a-custom-dataset-into-training-and-test-datasets", "best_answers_votes":225, "question_length":1120, "response_length":542 }, { "question":"How to add a new dimension to a PyTorch tensor? 
In NumPy, I would do ```py a = np.zeros((4, 5, 6)) a = a[:, :, np.newaxis, :] assert a.shape == (4, 5, 1, 6) ``` How to do the same in PyTorch?", "response":"``` a = torch.zeros(4, 5, 6) a = a[:, :, None, :] assert a.shape == (4, 5, 1, 6) ```", "best_answers_score":0.651, "library_name":"pytorch", "question_url":"https:\/\/stackoverflow.com\/questions\/65470807\/how-to-add-a-new-dimension-to-a-pytorch-tensor", "best_answers_votes":72, "question_length":191, "response_length":84 }, { "question":"How do I initialize weights in PyTorch? How do I initialize weights and biases of a network (via e.g. He or Xavier initialization)?", "response":"Single layer To initialize the weights of a single layer, use a function from torch.nn.init. For instance: ``` conv1 = torch.nn.Conv2d(...) torch.nn.init.xavier_uniform(conv1.weight) ``` Alternatively, you can modify the parameters by writing to conv1.weight.data (which is a torch.Tensor). Example: ``` conv1.weight.data.fill_(0.01) ``` The same applies for biases: ``` conv1.bias.data.fill_(0.01) ``` nn.Sequential or custom nn.Module Pass an initialization function to torch.nn.Module.apply. It will initialize the weights in the entire nn.Module recursively. apply(fn): Applies fn recursively to every submodule (as returned by .children()) as well as self. Typical use includes initializing the parameters of a model (see also torch-nn-init). Example: ``` def init_weights(m): if isinstance(m, nn.Linear): torch.nn.init.xavier_uniform(m.weight) m.bias.data.fill_(0.01) net = nn.Sequential(nn.Linear(2, 2), nn.Linear(2, 2)) net.apply(init_weights) ```", "best_answers_score":0.6481, "library_name":"pytorch", "question_url":"https:\/\/stackoverflow.com\/questions\/49433936\/how-do-i-initialize-weights-in-pytorch", "best_answers_votes":359, "question_length":131, "response_length":955 }, { "question":"How do you alter the size of a Pytorch Dataset? 
Say I am loading MNIST from torchvision.datasets.MNIST, but I only want to load in 10000 images total, how would I slice the data to limit it to only some number of data points? I understand that the DataLoader is a generator yielding data in the size of the specified batch size, but how do you slice datasets? ``` tr = datasets.MNIST('..\/data', train=True, download=True, transform=transform) te = datasets.MNIST('..\/data', train=False, transform=transform) train_loader = DataLoader(tr, batch_size=args.batch_size, shuffle=True, num_workers=4, **kwargs) test_loader = DataLoader(te, batch_size=args.batch_size, shuffle=True, num_workers=4, **kwargs) ```", "response":"You can use torch.utils.data.Subset() e.g. for the first 10,000 elements: ```py import torch.utils.data as data_utils indices = torch.arange(10000) tr_10k = data_utils.Subset(tr, indices) ```", "best_answers_score":0.6475, "library_name":"pytorch", "question_url":"https:\/\/stackoverflow.com\/questions\/44856691\/how-do-you-alter-the-size-of-a-pytorch-dataset", "best_answers_votes":28, "question_length":824, "response_length":191 }, { "question":"How do I display a single image in PyTorch? How do I display a PyTorch Tensor of shape (3, 224, 224) representing a 224x224 RGB image?
Using plt.imshow(image) gives the error: TypeError: Invalid dimensions for image data", "response":"Given a Tensor representing the image, use .permute() to put the channels as the last dimension when passing them to matplotlib: ``` plt.imshow(tensor_image.permute(1, 2, 0)) ``` Note: permute does not copy or allocate memory, and from_numpy() doesn't either.", "best_answers_score":0.6465, "library_name":"pytorch", "question_url":"https:\/\/stackoverflow.com\/questions\/53623472\/how-do-i-display-a-single-image-in-pytorch", "best_answers_votes":171, "question_length":220, "response_length":259 }, { "question":"How do you load MNIST images into Pytorch DataLoader? The pytorch tutorial for data loading and processing is quite specific to one example, could someone help me with what the function should look like for a more generic simple loading of images? Tutorial: http:\/\/pytorch.org\/tutorials\/beginner\/data_loading_tutorial.html My Data: I have the MINST dataset as jpg's in the following folder structure. (I know I can just use the dataset class, but this is purely to see how to load simple images into pytorch without csv's or complex features). The folder name is the label and the images are 28x28 png's in greyscale, no transformations required. ``` data train 0 3.png 5.png 13.png 23.png ... 1 3.png 10.png 11.png ... 2 4.png 13.png ... 3 8.png ... 4 ... 5 ... 6 ... 7 ... 8 ... 9 ... 
```", "response":"Here's what I did for pytorch 0.4.1 (should still work in 1.3) ``` def load_dataset(): data_path = 'data\/train\/' train_dataset = torchvision.datasets.ImageFolder( root=data_path, transform=torchvision.transforms.ToTensor() ) train_loader = torch.utils.data.DataLoader( train_dataset, batch_size=64, num_workers=0, shuffle=True ) return train_loader for batch_idx, (data, target) in enumerate(load_dataset()): #train network ```", "best_answers_score":0.6454, "library_name":"pytorch", "question_url":"https:\/\/stackoverflow.com\/questions\/50052295\/how-do-you-load-mnist-images-into-pytorch-dataloader", "best_answers_votes":44, "question_length":790, "response_length":427 }, { "question":"How to efficiently retrieve the indices of maximum values in a Torch tensor? Assume to have a torch tensor, for example of the following shape: ``` x = torch.rand(20, 1, 120, 120) ``` What I would like now, is to get the indices of the maximum values of each 120x120 matrix. To simplify the problem I would first x.squeeze() to work with shape [20, 120, 120]. I would then like to get torch tensor which is a list of indices with shape [20, 2]. How can I do this fast?", "response":"torch.topk() is what you are looking for. From the docs, torch.topk(input, k, dim=None, largest=True, sorted=True, out=None) -> (Tensor, LongTensor) Returns the k largest elements of the given input tensor along a given dimension. If dim is not given, the last dimension of the input is chosen. If largest is False then the k smallest elements are returned. A namedtuple of (values, indices) is returned, where the indices are the indices of the elements in the original input tensor. 
The boolean option sorted if True, will make sure that the returned k elements are themselves sorted", "best_answers_score":0.6437, "library_name":"pytorch", "question_url":"https:\/\/stackoverflow.com\/questions\/53212507\/how-to-efficiently-retrieve-the-indices-of-maximum-values-in-a-torch-tensor", "best_answers_votes":20, "question_length":468, "response_length":585 }, { "question":"How do I check if PyTorch is using the GPU? How do I check if PyTorch is using the GPU? The nvidia-smi command can detect GPU activity, but I want to check it directly from inside a Python script.", "response":"These functions should help: ``` >>> import torch >>> torch.cuda.is_available() True >>> torch.cuda.device_count() 1 >>> torch.cuda.current_device() 0 >>> torch.cuda.device(0) >>> torch.cuda.get_device_name(0) 'GeForce GTX 950M' ``` This tells us: CUDA is available and can be used by one device. Device 0 refers to the GPU GeForce GTX 950M, and it is currently chosen by PyTorch.", "best_answers_score":0.6427, "library_name":"pytorch", "question_url":"https:\/\/stackoverflow.com\/questions\/48152674\/how-do-i-check-if-pytorch-is-using-the-gpu", "best_answers_votes":1038, "question_length":196, "response_length":381 }, { "question":"How to do gradient clipping in pytorch? What is the correct way to perform gradient clipping in pytorch? I have an exploding gradients problem.", "response":"A more complete example from here: ``` optimizer.zero_grad() loss, hidden = model(data, hidden, targets) loss.backward() torch.nn.utils.clip_grad_norm_(model.parameters(), args.clip) optimizer.step() ```", "best_answers_score":0.6398, "library_name":"pytorch", "question_url":"https:\/\/stackoverflow.com\/questions\/54716377\/how-to-do-gradient-clipping-in-pytorch", "best_answers_votes":198, "question_length":143, "response_length":203 }, { "question":"How to tell PyTorch to not use the GPU? 
I want to do some timing comparisons between CPU & GPU as well as some profiling and would like to know if there's a way to tell pytorch to not use the GPU and instead use the CPU only? I realize I could install another CPU-only pytorch, but hoping there's an easier way.", "response":"Before running your code, run this shell command to tell torch that there are no GPUs: ``` export CUDA_VISIBLE_DEVICES=\"\" ``` This will tell it to use only one GPU (the one with id 0) and so on: ``` export CUDA_VISIBLE_DEVICES=\"0\" ```", "best_answers_score":0.6397, "library_name":"pytorch", "question_url":"https:\/\/stackoverflow.com\/questions\/53266350\/how-to-tell-pytorch-to-not-use-the-gpu", "best_answers_votes":100, "question_length":311, "response_length":234 }, { "question":"PyTorch RuntimeError: DataLoader worker (pid(s) 15332) exited unexpectedly I am a beginner at PyTorch and I am just trying out some examples on this webpage. But I can't seem to get the 'super_resolution' program running due to this error: RuntimeError: DataLoader worker (pid(s) 15332) exited unexpectedly I searched the Internet and found that some people suggest setting num_workers to 0. But if I do that, the program tells me that I am running out of memory (either with CPU or GPU): RuntimeError: [enforce fail at ..\\c10\\core\\CPUAllocator.cpp:72] data. DefaultCPUAllocator: not enough memory: you tried to allocate 9663676416 bytes. Buy new RAM! or RuntimeError: CUDA out of memory. Tried to allocate 1024.00 MiB (GPU 0; 4.00 GiB total capacity; 2.03 GiB already allocated; 0 bytes free; 2.03 GiB reserved in total by PyTorch) How do I fix this? I am using python 3.8 on Win10(64bit) and pytorch 1.4.0. 
More complete error messages (--cuda means using GPU, --threads x means passing x to the num_worker parameter): with command line arguments --upscale_factor 1 --cuda ``` File \"E:\\Python38\\lib\\site-packages\\torch\\utils\\data\\dataloader.py\", line 761, in _try_get_data data = self._data_queue.get(timeout=timeout) File \"E:\\Python38\\lib\\multiprocessing\\queues.py\", line 108, in get raise Empty _queue.Empty During handling of the above exception, another exception occurred: Traceback (most recent call last): File \"Z:\\super_resolution\\main.py\", line 81, in train(epoch) File \"Z:\\super_resolution\\main.py\", line 48, in train for iteration, batch in enumerate(training_data_loader, 1): File \"E:\\Python38\\lib\\site-packages\\torch\\utils\\data\\dataloader.py\", line 345, in __next__ data = self._next_data() File \"E:\\Python38\\lib\\site-packages\\torch\\utils\\data\\dataloader.py\", line 841, in _next_data idx, data = self._get_data() File \"E:\\Python38\\lib\\site-packages\\torch\\utils\\data\\dataloader.py\", line 808, in _get_data success, data = self._try_get_data() File \"E:\\Python38\\lib\\site-packages\\torch\\utils\\data\\dataloader.py\", line 774, in _try_get_data raise RuntimeError('DataLoader worker (pid(s) {}) exited unexpectedly'.format(pids_str)) RuntimeError: DataLoader worker (pid(s) 16596, 9376, 12756, 9844) exited unexpectedly ``` with command line arguments --upscale_factor 1 --cuda --threads 0 ``` File \"Z:\\super_resolution\\main.py\", line 81, in train(epoch) File \"Z:\\super_resolution\\main.py\", line 52, in train loss = criterion(model(input), target) File \"E:\\Python38\\lib\\site-packages\\torch\\nn\\modules\\module.py\", line 532, in __call__ result = self.forward(*input, **kwargs) File \"Z:\\super_resolution\\model.py\", line 21, in forward x = self.relu(self.conv2(x)) File \"E:\\Python38\\lib\\site-packages\\torch\\nn\\modules\\module.py\", line 532, in __call__ result = self.forward(*input, **kwargs) File 
\"E:\\Python38\\lib\\site-packages\\torch\\nn\\modules\\conv.py\", line 345, in forward return self.conv2d_forward(input, self.weight) File \"E:\\Python38\\lib\\site-packages\\torch\\nn\\modules\\conv.py\", line 341, in conv2d_forward return F.conv2d(input, weight, self.bias, self.stride, RuntimeError: CUDA out of memory. Tried to allocate 1024.00 MiB (GPU 0; 4.00 GiB total capacity; 2.03 GiB already allocated; 954.35 MiB free; 2.03 GiB reserved in total by PyTorch) ```", "response":"This is the solution that worked for me. it may work for other Windows users. Just remove\/comment the num workers to disable parallel loads", "best_answers_score":0.6371, "library_name":"pytorch", "question_url":"https:\/\/stackoverflow.com\/questions\/60101168\/pytorch-runtimeerror-dataloader-worker-pids-15332-exited-unexpectedly", "best_answers_votes":36, "question_length":3259, "response_length":139 }, { "question":"RuntimeError: dimension out of range (expected to be in range of [-1, 0], but got 1) Im using a Pytorch Unet model to which i am feeding in a image as input and along with that i am feeding the label as the input image mask and traning the dataset on it. 
The Unet model I have picked up from somewhere else, and I am using the cross-entropy loss as a loss function but I get this dimension out of range error, ``` RuntimeError Traceback (most recent call last) in () 16 for epoch in range(0, num_epochs): 17 # train for one epoch ---> 18 curr_loss = train(train_loader, model, criterion, epoch, num_epochs) 19 20 # store best loss and save a model checkpoint in train(train_loader, model, criterion, epoch, num_epochs) 16 # measure loss 17 print (outputs.size(),labels.size()) ---> 18 loss = criterion(outputs, labels) 19 losses.update(loss.data[0], images.size(0)) 20 \/usr\/local\/lib\/python3.5\/dist-packages\/torch\/nn\/modules\/module.py in __call__(self, *input, **kwargs) 323 for hook in self._forward_pre_hooks.values(): 324 hook(self, input) --> 325 result = self.forward(*input, **kwargs) 326 for hook in self._forward_hooks.values(): 327 hook_result = hook(self, input, result) in forward(self, logits, targets) 9 probs_flat = probs.view(-1) 10 targets_flat = targets.view(-1) ---> 11 return self.crossEntropy_loss(probs_flat, targets_flat) \/usr\/local\/lib\/python3.5\/dist-packages\/torch\/nn\/modules\/module.py in __call__(self, *input, **kwargs) 323 for hook in self._forward_pre_hooks.values(): 324 hook(self, input) --> 325 result = self.forward(*input, **kwargs) 326 for hook in self._forward_hooks.values(): 327 hook_result = hook(self, input, result) \/usr\/local\/lib\/python3.5\/dist-packages\/torch\/nn\/modules\/loss.py in forward(self, input, target) 599 _assert_no_grad(target) 600 return F.cross_entropy(input, target, self.weight, self.size_average, --> 601 self.ignore_index, self.reduce) 602 603
\/usr\/local\/lib\/python3.5\/dist-packages\/torch\/nn\/functional.py in log_softmax(input, dim, _stacklevel) 784 if dim is None: 785 dim = _get_softmax_dim('log_softmax', input.dim(), _stacklevel) --> 786 return torch._C._nn.log_softmax(input, dim) 787 788 RuntimeError: dimension out of range (expected to be in range of [-1, 0], but got 1) ``` Part of my code looks like this ``` class crossEntropy(nn.Module): def __init__(self, weight = None, size_average = True): super(crossEntropy, self).__init__() self.crossEntropy_loss = nn.CrossEntropyLoss(weight, size_average) def forward(self, logits, targets): probs = F.sigmoid(logits) probs_flat = probs.view(-1) targets_flat = targets.view(-1) return self.crossEntropy_loss(probs_flat, targets_flat) class UNet(nn.Module): def __init__(self, imsize): super(UNet, self).__init__() self.imsize = imsize self.activation = F.relu self.pool1 = nn.MaxPool2d(2) self.pool2 = nn.MaxPool2d(2) self.pool3 = nn.MaxPool2d(2) self.pool4 = nn.MaxPool2d(2) self.conv_block1_64 = UNetConvBlock(4, 64) self.conv_block64_128 = UNetConvBlock(64, 128) self.conv_block128_256 = UNetConvBlock(128, 256) self.conv_block256_512 = UNetConvBlock(256, 512) self.conv_block512_1024 = UNetConvBlock(512, 1024) self.up_block1024_512 = UNetUpBlock(1024, 512) self.up_block512_256 = UNetUpBlock(512, 256) self.up_block256_128 = UNetUpBlock(256, 128) self.up_block128_64 = UNetUpBlock(128, 64) self.last = nn.Conv2d(64, 2, 1) def forward(self, x): block1 = self.conv_block1_64(x) pool1 = self.pool1(block1) block2 = self.conv_block64_128(pool1) pool2 = self.pool2(block2) block3 = self.conv_block128_256(pool2) pool3 = self.pool3(block3) block4 = self.conv_block256_512(pool3) pool4 = self.pool4(block4) block5 = self.conv_block512_1024(pool4) up1 = self.up_block1024_512(block5, block4) up2 = self.up_block512_256(up1, block3) up3 = self.up_block256_128(up2, block2) up4 = self.up_block128_64(up3, block1) return F.log_softmax(self.last(up4)) ```", "response":"According to your 
code: ``` probs_flat = probs.view(-1) targets_flat = targets.view(-1) return self.crossEntropy_loss(probs_flat, targets_flat) ``` You are giving two 1d tensor to nn.CrossEntropyLoss but according to documentation, it expects: ``` Input: (N,C) where C = number of classes Target: (N) where each value is 0 <= targets[i] <= C-1 Output: scalar. If reduce is False, then (N) instead. ``` I believe that is the cause of the problem you are encountering.", "best_answers_score":0.6319, "library_name":"pytorch", "question_url":"https:\/\/stackoverflow.com\/questions\/48377214\/runtimeerror-dimension-out-of-range-expected-to-be-in-range-of-1-0-but-go", "best_answers_votes":38, "question_length":4146, "response_length":466 }, { "question":"Why Pytorch officially use mean=[0.485, 0.456, 0.406] and std=[0.229, 0.224, 0.225] to normalize images? In this page (https:\/\/pytorch.org\/vision\/stable\/models.html), it says that \"All pre-trained models expect input images normalized in the same way, i.e. mini-batches of 3-channel RGB images of shape (3 x H x W), where H and W are expected to be at least 224. The images have to be loaded in to a range of [0, 1] and then normalized using mean = [0.485, 0.456, 0.406] and std = [0.229, 0.224, 0.225]\". Shouldn't the usual mean and std of normalization be [0.5, 0.5, 0.5] and [0.5, 0.5, 0.5]? Why is it setting such strange values?", "response":"Using the mean and std of Imagenet is a common practice. They are calculated based on millions of images. If you want to train from scratch on your own dataset, you can calculate the new mean and std. 
Otherwise, using the Imagenet pretrained model with its own mean and std is recommended.", "best_answers_score":0.6306, "library_name":"pytorch", "question_url":"https:\/\/stackoverflow.com\/questions\/58151507\/why-pytorch-officially-use-mean-0-485-0-456-0-406-and-std-0-229-0-224-0-2", "best_answers_votes":93, "question_length":633, "response_length":289 }, { "question":"name '_C' is not defined pytorch+jupyter notebook I have some code that uses pytorch, that runs fine from my IDE (pycharm). For research, I tried to run it from a jupyter notebook. The code in the notebook: ``` from algorithms import Argparser from algorithms import Session def main(): print(\"main started\") args = Argparser.parse() session = Session(args) session.run() ``` The package looks like: ``` |-algorithms |---__init__.py |---Argparser.py |---Session.py |---.py ``` some of those files do import torch When running the code in the notebook, I get NameError Traceback (most recent call last) in 1 from algorithms import Argparser ----> 2 from algorithms import Session 3 def main(): 4 print(\"main started\") 5 args = Argparser.parse() D:\\git\\stav\\stav-rl\\algorithms\\Session.py in 12 13 ---> 14 from algorithms.Episode import Episode 15 from algorithms.Agent import Agent 16 import torch D:\\git\\stav\\stav-rl\\algorithms\\Episode.py in 1 __author__ = 'Noam' 2 ----> 3 import torch 4 import numpy as np 5 import cv2 c:\\anaconda3\\envs\\threadartrl\\lib\\site-packages\\torch\\__init__.py in 84 from torch._C import * 85 ---> 86 __all__ += [name for name in dir(_C) 87 if name[0] != '_' and 88 not name.endswith('Base')] NameError: name '_C' is not defined The error is on from algorithms import Session-->...-->import torch How can I get the code to run?", "response":"Restarting the kernel will solve the problem.", "best_answers_score":0.627, "library_name":"pytorch", "question_url":"https:\/\/stackoverflow.com\/questions\/54408973\/name-c-is-not-defined-pytorchjupyter-notebook", "best_answers_votes":65,
"question_length":1341, "response_length":45 }, { "question":"How to disable TOKENIZERS_PARALLELISM=(true | false) warning? I use pytorch to train huggingface-transformers model, but every epoch, always output the warning: ``` The current process just got forked. Disabling parallelism to avoid deadlocks... To disable this warning, please explicitly set TOKENIZERS_PARALLELISM=(true | false) ``` How to disable this warning?", "response":"Set the environment variable to the string \"false\" either by ``` TOKENIZERS_PARALLELISM=false ``` in your shell or by: ``` import os os.environ[\"TOKENIZERS_PARALLELISM\"] = \"false\" ``` in the Python script", "best_answers_score":0.6072, "library_name":"pytorch", "question_url":"https:\/\/stackoverflow.com\/questions\/62691279\/how-to-disable-tokenizers-parallelism-true-false-warning", "best_answers_votes":106, "question_length":363, "response_length":204 }, { "question":"Pytorch AssertionError: Torch not compiled with CUDA enabled I am trying to run code from this repo. 
I have disabled cuda by changing lines 39\/40 in main.py from ``` parser.add_argument('--type', default='torch.cuda.FloatTensor', help='type of tensor - e.g torch.cuda.HalfTensor') ``` to ``` parser.add_argument('--type', default='torch.FloatTensor', help='type of tensor - e.g torch.HalfTensor') ``` Despite this, running the code gives me the following exception: ``` Traceback (most recent call last): File \"main.py\", line 190, in main() File \"main.py\", line 178, in main model, train_data, training=True, optimizer=optimizer) File \"main.py\", line 135, in forward for i, (imgs, (captions, lengths)) in enumerate(data): File \"\/Users\/lakshay\/anaconda\/lib\/python3.6\/site-packages\/torch\/utils\/data\/dataloader.py\", line 201, in __next__ return self._process_next_batch(batch) File \"\/Users\/lakshay\/anaconda\/lib\/python3.6\/site-packages\/torch\/utils\/data\/dataloader.py\", line 221, in _process_next_batch raise batch.exc_type(batch.exc_msg) AssertionError: Traceback (most recent call last): File \"\/Users\/lakshay\/anaconda\/lib\/python3.6\/site-packages\/torch\/utils\/data\/dataloader.py\", line 62, in _pin_memory_loop batch = pin_memory_batch(batch) File \"\/Users\/lakshay\/anaconda\/lib\/python3.6\/site-packages\/torch\/utils\/data\/dataloader.py\", line 123, in pin_memory_batch return [pin_memory_batch(sample) for sample in batch] File \"\/Users\/lakshay\/anaconda\/lib\/python3.6\/site-packages\/torch\/utils\/data\/dataloader.py\", line 123, in return [pin_memory_batch(sample) for sample in batch] File \"\/Users\/lakshay\/anaconda\/lib\/python3.6\/site-packages\/torch\/utils\/data\/dataloader.py\", line 117, in pin_memory_batch return batch.pin_memory() File \"\/Users\/lakshay\/anaconda\/lib\/python3.6\/site-packages\/torch\/tensor.py\", line 82, in pin_memory return type(self)().set_(storage.pin_memory()).view_as(self) File \"\/Users\/lakshay\/anaconda\/lib\/python3.6\/site-packages\/torch\/storage.py\", line 83, in pin_memory allocator = 
torch.cuda._host_allocator() File \"\/Users\/lakshay\/anaconda\/lib\/python3.6\/site-packages\/torch\/cuda\/__init__.py\", line 220, in _host_allocator _lazy_init() File \"\/Users\/lakshay\/anaconda\/lib\/python3.6\/site-packages\/torch\/cuda\/__init__.py\", line 84, in _lazy_init _check_driver() File \"\/Users\/lakshay\/anaconda\/lib\/python3.6\/site-packages\/torch\/cuda\/__init__.py\", line 51, in _check_driver raise AssertionError(\"Torch not compiled with CUDA enabled\") AssertionError: Torch not compiled with CUDA enabled ``` Spent some time looking through the issues in the Pytorch github, to no avail. Help, please?", "response":"Removing .cuda() works for me on macOS.", "best_answers_score":0.6071, "library_name":"pytorch", "question_url":"https:\/\/stackoverflow.com\/questions\/47312396\/pytorch-assertionerror-torch-not-compiled-with-cuda-enabled", "best_answers_votes":14, "question_length":2586, "response_length":39 }, { "question":"How do I get the value of a tensor in PyTorch? Printing a tensor x gives: ``` >>> x = torch.tensor([3]) >>> print(x) tensor([3]) ``` Indexing x.data gives: ``` >>> x.data[0] tensor(3) ``` How do I get just a regular non-tensor value 3?", "response":"You can use x.item() to get a Python number from a Tensor that has one element.", "best_answers_score":0.605, "library_name":"pytorch", "question_url":"https:\/\/stackoverflow.com\/questions\/57727372\/how-do-i-get-the-value-of-a-tensor-in-pytorch", "best_answers_votes":251, "question_length":235, "response_length":79 }, { "question":"sklearn error ValueError: Input contains NaN, infinity or a value too large for dtype('float64') I am using sklearn and having a problem with the affinity propagation. I have built an input matrix and I keep getting the following error. ``` ValueError: Input contains NaN, infinity or a value too large for dtype('float64'). 
``` I have run ``` np.isnan(mat.any()) #and gets False np.isfinite(mat.all()) #and gets True ``` I tried using ``` mat[np.isfinite(mat) == True] = 0 ``` to remove the infinite values but this did not work either. What can I do to get rid of the infinite values in my matrix, so that I can use the affinity propagation algorithm? I am using anaconda and python 2.7.9.", "response":"This might happen inside scikit, and it depends on what you're doing. I recommend reading the documentation for the functions you're using. You might be using one which depends e.g. on your matrix being positive definite and not fulfilling that criteria. EDIT: How could I miss that: ``` np.isnan(mat.any()) #and gets False np.isfinite(mat.all()) #and gets True ``` is obviously wrong. Right would be: ``` np.any(np.isnan(mat)) ``` and ``` np.all(np.isfinite(mat)) ``` You want to check whether any of the elements are NaN, and not whether the return value of the any function is a number...", "best_answers_score":0.8, "library_name":"scikit-learn", "question_url":"https:\/\/stackoverflow.com\/questions\/31323499\/sklearn-error-valueerror-input-contains-nan-infinity-or-a-value-too-large-for", "best_answers_votes":176, "question_length":691, "response_length":591 }, { "question":"ImportError: No module named sklearn.cross_validation I am using python 2.7 in Ubuntu 14.04. I installed scikit-learn, numpy and matplotlib with these commands: ``` sudo apt-get install build-essential python-dev python-numpy \\ python-numpy-dev python-scipy libatlas-dev g++ python-matplotlib \\ ipython ``` But when I import these packages: ``` from sklearn.cross_validation import train_test_split ``` It returns me this error: ``` ImportError: No module named sklearn.cross_validation ``` What I need to do?", "response":"It must relate to the renaming and deprecation of cross_validation sub-module to model_selection. 
Try substituting cross_validation to model_selection", "best_answers_score":0.8, "library_name":"scikit-learn", "question_url":"https:\/\/stackoverflow.com\/questions\/30667525\/importerror-no-module-named-sklearn-cross-validation", "best_answers_votes":827, "question_length":509, "response_length":150 }, { "question":"How to split data into 3 sets (train, validation and test)? I have a pandas dataframe and I wish to divide it to 3 separate sets. I know that using train_test_split from sklearn.cross_validation, one can divide the data in two sets (train and test). However, I couldn't find any solution about splitting the data into three sets. Preferably, I'd like to have the indices of the original data. I know that a workaround would be to use train_test_split two times and somehow adjust the indices. But is there a more standard \/ built-in way to split the data into 3 sets instead of 2?", "response":"Numpy solution. We will shuffle the whole dataset first (df.sample(frac=1, random_state=42)) and then split our data set into the following parts: 60% - train set, 20% - validation set, 20% - test set ``` In [305]: train, validate, test = \\ np.split(df.sample(frac=1, random_state=42), [int(.6*len(df)), int(.8*len(df))]) In [306]: train Out[306]: A B C D E 0 0.046919 0.792216 0.206294 0.440346 0.038960 2 0.301010 0.625697 0.604724 0.936968 0.870064 1 0.642237 0.690403 0.813658 0.525379 0.396053 9 0.488484 0.389640 0.599637 0.122919 0.106505 8 0.842717 0.793315 0.554084 0.100361 0.367465 7 0.185214 0.603661 0.217677 0.281780 0.938540 In [307]: validate Out[307]: A B C D E 5 0.806176 0.008896 0.362878 0.058903 0.026328 6 0.145777 0.485765 0.589272 0.806329 0.703479 In [308]: test Out[308]: A B C D E 4 0.521640 0.332210 0.370177 0.859169 0.401087 3 0.333348 0.964011 0.083498 0.670386 0.169619 ``` [int(.6*len(df)), int(.8*len(df))] - is an indices_or_sections array for numpy.split(). 
Here is a small demo for np.split() usage - let's split a 20-element array into the following parts: 80%, 10%, 10%: ``` In [45]: a = np.arange(1, 21) In [46]: a Out[46]: array([ 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20]) In [47]: np.split(a, [int(.8 * len(a)), int(.9 * len(a))]) Out[47]: [array([ 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16]), array([17, 18]), array([19, 20])] ```", "best_answers_score":0.8, "library_name":"scikit-learn", "question_url":"https:\/\/stackoverflow.com\/questions\/38250710\/how-to-split-data-into-3-sets-train-validation-and-test", "best_answers_votes":295, "question_length":580, "response_length":1414 }, { "question":"pandas dataframe columns scaling with sklearn I have a pandas dataframe with mixed type columns, and I'd like to apply sklearn's min_max_scaler to some of the columns. Ideally, I'd like to do these transformations in place, but haven't figured out a way to do that yet. I've written the following code that works: ``` import pandas as pd import numpy as np from sklearn import preprocessing scaler = preprocessing.MinMaxScaler() dfTest = pd.DataFrame({'A':[14.00,90.20,90.95,96.27,91.21],'B':[103.02,107.26,110.35,114.23,114.68], 'C':['big','small','big','small','small']}) min_max_scaler = preprocessing.MinMaxScaler() def scaleColumns(df, cols_to_scale): for col in cols_to_scale: df[col] = pd.DataFrame(min_max_scaler.fit_transform(pd.DataFrame(dfTest[col])),columns=[col]) return df dfTest A B C 0 14.00 103.02 big 1 90.20 107.26 small 2 90.95 110.35 big 3 96.27 114.23 small 4 91.21 114.68 small scaled_df = scaleColumns(dfTest,['A','B']) scaled_df A B C 0 0.000000 0.000000 big 1 0.926219 0.363636 small 2 0.935335 0.628645 big 3 1.000000 0.961407 small 4 0.938495 1.000000 small ``` I'm curious if this is the preferred\/most efficient way to do this transformation. Is there a way I could use df.apply that would be better? 
I'm also surprised I can't get the following code to work: ``` bad_output = min_max_scaler.fit_transform(dfTest['A']) ``` If I pass an entire dataframe to the scaler it works: ``` dfTest2 = dfTest.drop('C', axis = 1) good_output = min_max_scaler.fit_transform(dfTest2) good_output ``` I'm confused why passing a series to the scaler fails. In my full working code above I had hoped to just pass a series to the scaler then set the dataframe column = to the scaled series.", "response":"I am not sure if previous versions of pandas prevented this, but now the following snippet works perfectly for me and produces exactly what you want without having to use apply ``` >>> import pandas as pd >>> from sklearn.preprocessing import MinMaxScaler >>> scaler = MinMaxScaler() >>> dfTest = pd.DataFrame({'A':[14.00,90.20,90.95,96.27,91.21], 'B':[103.02,107.26,110.35,114.23,114.68], 'C':['big','small','big','small','small']}) >>> dfTest[['A', 'B']] = scaler.fit_transform(dfTest[['A', 'B']]) >>> dfTest A B C 0 0.000000 0.000000 big 1 0.926219 0.363636 small 2 0.935335 0.628645 big 3 1.000000 0.961407 small 4 0.938495 1.000000 small ```", "best_answers_score":0.8, "library_name":"scikit-learn", "question_url":"https:\/\/stackoverflow.com\/questions\/24645153\/pandas-dataframe-columns-scaling-with-sklearn", "best_answers_votes":363, "question_length":1702, "response_length":645 }, { "question":"Random state (Pseudo-random number) in Scikit learn I want to implement a machine learning algorithm in scikit learn, but I don't understand what the parameter random_state does. Why should I use it? I also could not understand what a pseudo-random number is.", "response":"train_test_split splits arrays or matrices into random train and test subsets. That means that every time you run it without specifying random_state, you will get a different result, this is expected behavior. 
For example: Run 1: ``` >>> a, b = np.arange(10).reshape((5, 2)), range(5) >>> train_test_split(a, b) [array([[6, 7], [8, 9], [4, 5]]), array([[2, 3], [0, 1]]), [3, 4, 2], [1, 0]] ``` Run 2: ``` >>> train_test_split(a, b) [array([[8, 9], [4, 5], [0, 1]]), array([[6, 7], [2, 3]]), [4, 2, 0], [3, 1]] ``` It changes. On the other hand if you use random_state=some_number, then you can guarantee that the output of Run 1 will be equal to the output of Run 2, i.e. your split will always be the same. It doesn't matter what the actual random_state number is: 42, 0, 21, ... The important thing is that every time you use 42, you will always get the same output the first time you make the split. This is useful if you want reproducible results, for example in the documentation, so that everybody can consistently see the same numbers when they run the examples. In practice I would say, you should set the random_state to some fixed number while you test stuff, but then remove it in production if you really need a random (and not a fixed) split. Regarding your second question, a pseudo-random number generator is a number generator that generates almost truly random numbers. Why they are not truly random is out of the scope of this question and probably won't matter in your case, you can take a look here for more details.", "best_answers_score":0.8, "library_name":"scikit-learn", "question_url":"https:\/\/stackoverflow.com\/questions\/28064634\/random-state-pseudo-random-number-in-scikit-learn", "best_answers_votes":303, "question_length":260, "response_length":1533 }, { "question":"RuntimeWarning: numpy.dtype size changed, may indicate binary incompatibility I get this error when trying to load a saved SVM model. I have tried uninstalling sklearn, NumPy and SciPy, then reinstalling the latest versions all together again (using pip). I am still getting this error. Why? 
``` In [1]: import sklearn; print sklearn.__version__ 0.18.1 In [3]: import numpy; print numpy.__version__ 1.11.2 In [5]: import scipy; print scipy.__version__ 0.18.1 In [7]: import pandas; print pandas.__version__ 0.19.1 In [10]: clf = joblib.load('model\/trained_model.pkl') --------------------------------------------------------------------------- RuntimeWarning Traceback (most recent call last) in () ----> 1 clf = joblib.load('sentiment_classification\/model\/trained_model.pkl') \/usr\/local\/lib\/python2.7\/dist-packages\/sklearn\/externals\/joblib\/numpy_pickle.pyc in load(filename, mmap_mode) 573 return load_compatibility(fobj) 574 --> 575 obj = _unpickle(fobj, filename, mmap_mode) 576 577 return obj \/usr\/local\/lib\/python2.7\/dist-packages\/sklearn\/externals\/joblib\/numpy_pickle.pyc in _unpickle(fobj, filename, mmap_mode) 505 obj = None 506 try: --> 507 obj = unpickler.load() 508 if unpickler.compat_mode: 509 warnings.warn(\"The file '%s' has been generated with a \" \/usr\/lib\/python2.7\/pickle.pyc in load(self) 862 while 1: 863 key = read(1) --> 864 dispatch[key](self) 865 except _Stop, stopinst: 866 return stopinst.value \/usr\/lib\/python2.7\/pickle.pyc in load_global(self) 1094 module = self.readline()[:-1] 1095 name = self.readline()[:-1] -> 1096 klass = self.find_class(module, name) 1097 self.append(klass) 1098 dispatch[GLOBAL] = load_global \/usr\/lib\/python2.7\/pickle.pyc in find_class(self, module, name) 1128 def find_class(self, module, name): 1129 # Subclasses may override this -> 1130 __import__(module) 1131 mod = sys.modules[module] 1132 klass = getattr(mod, name) \/usr\/local\/lib\/python2.7\/dist-packages\/sklearn\/svm\/__init__.py in () 11 # License: BSD 3 clause (C) INRIA 2010 12 ---> 13 from .classes import SVC, NuSVC, SVR, NuSVR, OneClassSVM, LinearSVC, \\ 14 LinearSVR 15 from .bounds import l1_min_c \/usr\/local\/lib\/python2.7\/dist-packages\/sklearn\/svm\/classes.py in () 2 import numpy as np 3 ----> 4 from .base import 
_fit_liblinear, BaseSVC, BaseLibSVM 5 from ..base import BaseEstimator, RegressorMixin 6 from ..linear_model.base import LinearClassifierMixin, SparseCoefMixin, \\ \/usr\/local\/lib\/python2.7\/dist-packages\/sklearn\/svm\/base.py in () 6 from abc import ABCMeta, abstractmethod 7 ----> 8 from . import libsvm, liblinear 9 from . import libsvm_sparse 10 from ..base import BaseEstimator, ClassifierMixin __init__.pxd in init sklearn.svm.libsvm (sklearn\/svm\/libsvm.c:10207)() RuntimeWarning: numpy.dtype size changed, may indicate binary incompatibility. Expected 96, got 80 ``` UPDATE: OK, by following here, and ``` pip uninstall -y scipy scikit-learn pip install --no-binary scipy scikit-learn ``` The error has now gone, though I still have no idea why it occurred in the first place...", "response":"According to MAINT: silence Cython warnings about changes dtype\/ufunc size. - numpy\/numpy: These warnings are visible whenever you import scipy (or another package) that was compiled against an older numpy than is installed. and the checks are inserted by Cython (hence are present in any module compiled with it). Long story short, these warnings should be benign in the particular case of numpy, and these messages are filtered out since numpy 1.8 (the branch this commit went onto). Meanwhile, scikit-learn 0.18.1 is compiled against numpy 1.6.1. To filter these warnings yourself, you can do the same as the patch does: ``` import warnings warnings.filterwarnings(\"ignore\", message=\"numpy.dtype size changed\") warnings.filterwarnings(\"ignore\", message=\"numpy.ufunc size changed\") ``` Of course, you can just recompile all affected modules from source against your local numpy with pip install --no-binary :all:\u00b9 instead if you have the tools for that. 
Longer story: the patch's proponent claims there should be no risk specifically with numpy, and 3rd-party packages are intentionally built against older versions: [Rebuilding everything against current numpy is] not a feasible solution, and certainly shouldn't be necessary. Scipy (as many other packages) is compatible with a number of versions of numpy. So when we distribute scipy binaries, we build them against the lowest supported numpy version (1.5.1 as of now) and they work with 1.6.x, 1.7.x and numpy master as well. The real correct would be for Cython only to issue warnings when the size of dtypes\/ufuncs has changes in a way that breaks the ABI, and be silent otherwise. As a result, Cython's devs agreed to trust the numpy team with maintaining binary compatibility by hand, so we can probably expect that using versions with breaking ABI changes would yield a specially-crafted exception or some other explicit show-stopper. \u00b9The previously available --no-use-wheel option has been removed since pip 10.0.0.", "best_answers_score":0.8, "library_name":"scikit-learn", "question_url":"https:\/\/stackoverflow.com\/questions\/40845304\/runtimewarning-numpy-dtype-size-changed-may-indicate-binary-incompatibility", "best_answers_votes":172, "question_length":3008, "response_length":1980 }, { "question":"How to convert a Scikit-learn dataset to a Pandas dataset How do I convert data from a Scikit-learn Bunch object to a Pandas DataFrame? ``` from sklearn.datasets import load_iris import pandas as pd data = load_iris() print(type(data)) data1 = pd. # Is there a Pandas method to accomplish this? ```", "response":"Manually, you can use pd.DataFrame constructor, giving a numpy array (data) and a list of the names of the columns (columns). To have everything in one DataFrame, you can concatenate the features and the target into one numpy array with np.c_[...] 
(note the []): ``` import numpy as np import pandas as pd from sklearn.datasets import load_iris # save load_iris() sklearn dataset to iris # if you'd like to check dataset type use: type(load_iris()) # if you'd like to view list of attributes use: dir(load_iris()) iris = load_iris() # np.c_ is the numpy concatenate function # which is used to concat iris['data'] and iris['target'] arrays # for pandas column argument: concat iris['feature_names'] list # and string list (in this case one string); you can make this anything you'd like.. # the original dataset would probably call this ['Species'] data1 = pd.DataFrame(data= np.c_[iris['data'], iris['target']], columns= iris['feature_names'] + ['target']) ```", "best_answers_score":0.8, "library_name":"scikit-learn", "question_url":"https:\/\/stackoverflow.com\/questions\/38105539\/how-to-convert-a-scikit-learn-dataset-to-a-pandas-dataset", "best_answers_votes":210, "question_length":298, "response_length":961 }, { "question":"Can anyone explain me StandardScaler? I am unable to understand the page of the StandardScaler in the documentation of sklearn. Can anyone explain this to me in simple terms?", "response":"Intro I assume that you have a matrix X where each row\/line is a sample\/observation and each column is a variable\/feature (this is the expected input for any sklearn ML function by the way -- X.shape should be [number_of_samples, number_of_features]). Core of method The main idea is to normalize\/standardize i.e. \u03bc = 0 and \u03c3 = 1 your features\/variables\/columns of X, individually, before applying any machine learning model. StandardScaler() will normalize the features i.e. each column of X, INDIVIDUALLY, so that each column\/feature\/variable will have \u03bc = 0 and \u03c3 = 1. P.S: I find the most upvoted answer on this page, wrong. I am quoting \"each value in the dataset will have the sample mean value subtracted\" -- This is neither true nor correct. 
See also: How and why to Standardize your data: A python tutorial Example with code ``` from sklearn.preprocessing import StandardScaler import numpy as np # 4 samples\/observations and 2 variables\/features data = np.array([[0, 0], [1, 0], [0, 1], [1, 1]]) scaler = StandardScaler() scaled_data = scaler.fit_transform(data) print(data) [[0, 0], [1, 0], [0, 1], [1, 1]]) print(scaled_data) [[-1. -1.] [ 1. -1.] [-1. 1.] [ 1. 1.]] ``` Verify that the mean of each feature (column) is 0: ``` scaled_data.mean(axis = 0) array([0., 0.]) ``` Verify that the std of each feature (column) is 1: ``` scaled_data.std(axis = 0) array([1., 1.]) ``` Appendix: The maths UPDATE 08\/2020: Concerning the input parameters with_mean and with_std to False\/True, I have provided an answer here: StandardScaler difference between \u201cwith_std=False or True\u201d and \u201cwith_mean=False or True\u201d", "best_answers_score":0.8, "library_name":"scikit-learn", "question_url":"https:\/\/stackoverflow.com\/questions\/40758562\/can-anyone-explain-me-standardscaler", "best_answers_votes":189, "question_length":174, "response_length":1612 }, { "question":"Logistic regression python solvers' definitions I am using the logistic regression function from sklearn, and was wondering what each of the solver is actually doing behind the scenes to solve the optimization problem. Can someone briefly describe what \"newton-cg\", \"sag\", \"lbfgs\" and \"liblinear\" are doing?", "response":"Well, I hope I'm not too late for the party! Let me first try to establish some intuition before digging into loads of information (warning: this is not a brief comparison, TL;DR) Introduction A hypothesis h(x), takes an input and gives us the estimated output value. This hypothesis can be as simple as a one-variable linear equation, .. up to a very complicated and long multivariate equation with respect to the type of algorithm we\u2019re using (e.g. linear regression, logistic regression..etc). 
Our task is to find the best Parameters (a.k.a Thetas or Weights) that give us the least error in predicting the output. We call the function that calculates this error a Cost or Loss Function, and apparently, our goal is to minimize the error in order to get the best-predicted output! One more thing to recall is that the relation between the parameter value and its effect on the cost function (i.e. the error) looks like a bell curve (i.e. Quadratic; recall this because it\u2019s important). So if we start at any point in that curve and keep taking the derivative (i.e. tangent line) of each point we stop at (assuming it's a univariate problem, otherwise, if we have multiple features, we take the partial derivative), we will end up at what is called the Global Optima as shown in this image: If we take the partial derivative at the minimum cost point (i.e. global optima) we find the slope of the tangent line = 0 (then we know that we reached our target). That\u2019s valid only if we have a Convex Cost Function, but if we don\u2019t, we may end up stuck at what is called Local Optima; consider this non-convex function: Now you should have the intuition about the relationship between what we are doing and the terms: Derivative, Tangent Line, Cost Function, Hypothesis ..etc. Side Note: The above-mentioned intuition is also related to the Gradient Descent Algorithm (see later). Background Linear Approximation: Given a function, f(x), we can find its tangent at x=a. The equation of the tangent line L(x) is: L(x)=f(a)+f\u2032(a)(x\u2212a). Take a look at the following graph of a function and its tangent line: From this graph we can see that near x=a, the tangent line and the function have nearly the same graph. On occasion, we will use the tangent line, L(x), as an approximation to the function, f(x), near x=a. In these cases, we call the tangent line the \"Linear Approximation\" to the function at x=a. 
Quadratic Approximation: Same as a linear approximation, yet this time we are dealing with a curve where we cannot find the point near to 0 by using only the tangent line. Instead, we use the parabola as it's shown in the following graph: In order to fit a good parabola, both parabola and quadratic function should have the same value, the same first derivative, AND the same second derivative. The formula will be (just out of curiosity): Qa(x) = f(a) + f'(a)(x-a) + f''(a)(x-a)2\/2 Now we should be ready to do the comparison in detail. Comparison between the methods 1. Newton\u2019s Method Recall the motivation for the gradient descent step at x: we minimize the quadratic function (i.e. Cost Function). Newton\u2019s method uses in a sense a better quadratic function minimisation. It's better because it uses the quadratic approximation (i.e. first AND second partial derivatives). You can imagine it as a twisted Gradient Descent with the Hessian (the Hessian is a square matrix of second-order partial derivatives of order n X n). Moreover, the geometric interpretation of Newton's method is that at each iteration one approximates f(x) by a quadratic function around xn, and then takes a step towards the maximum\/minimum of that quadratic function (in higher dimensions, this may also be a saddle point). Note that if f(x) happens to be a quadratic function, then the exact extremum is found in one step. Drawbacks: It\u2019s computationally expensive because of the Hessian Matrix (i.e. second partial derivatives calculations). It attracts to Saddle Points which are common in multivariable optimization (i.e. a point that its partial derivatives disagree over whether this input should be a maximum or a minimum point!). 2. 
Limited-memory Broyden\u2013Fletcher\u2013Goldfarb\u2013Shanno Algorithm: In a nutshell, it is an analogue of Newton\u2019s Method, yet here the Hessian matrix is approximated using updates specified by gradient evaluations (or approximate gradient evaluations). In other words, using estimation to the inverse Hessian matrix. The term Limited-memory simply means it stores only a few vectors that represent the approximation implicitly. If I dare say that when the dataset is small, L-BFGS relatively performs the best compared to other methods especially because it saves a lot of memory, however, there are some \u201cserious\u201d drawbacks such that if it is unsafeguarded, it may not converge to anything. Side note: This solver has become the default solver in sklearn LogisticRegression since version 0.22, replacing LIBLINEAR. 3. A Library for Large Linear Classification: It\u2019s a linear classification that supports logistic regression and linear support vector machines. The solver uses a Coordinate Descent (CD) algorithm that solves optimization problems by successively performing approximate minimization along coordinate directions or coordinate hyperplanes. LIBLINEAR is the winner of the ICML 2008 large-scale learning challenge. It applies automatic parameter selection (a.k.a L1 Regularization) and it\u2019s recommended when you have high dimension dataset (recommended for solving large-scale classification problems) Drawbacks: It may get stuck at a non-stationary point (i.e. non-optima) if the level curves of a function are not smooth. Also cannot run in parallel. It cannot learn a true multinomial (multiclass) model; instead, the optimization problem is decomposed in a \u201cone-vs-rest\u201d fashion, so separate binary classifiers are trained for all classes. Side note: According to Scikit Documentation: The \u201cliblinear\u201d solver was the one used by default for historical reasons before version 0.22. 
Since then, the default has been the Limited-memory Broyden\u2013Fletcher\u2013Goldfarb\u2013Shanno Algorithm. 4. Stochastic Average Gradient: The SAG method optimizes the sum of a finite number of smooth convex functions. Like stochastic gradient (SG) methods, the SAG method's iteration cost is independent of the number of terms in the sum. However, by incorporating a memory of previous gradient values, the SAG method achieves a faster convergence rate than black-box SG methods. It is faster than other solvers for large datasets when both the number of samples and the number of features are large. Drawbacks: It only supports L2 penalization. This is not really a drawback, but more like a comparison: although SAG is suitable for large datasets, with a memory cost of O(N), it can be less practical for very large N (as the most recent gradient evaluation for each function needs to be maintained in the memory). This is usually not a problem, but a better option would be SVRG 1, 2 which is unfortunately not implemented in scikit-learn! 5. SAGA: The SAGA solver is a variant of SAG that also supports the non-smooth penalty L1 option (i.e. L1 Regularization). This is therefore the solver of choice for sparse multinomial logistic regression. It also has a better theoretical convergence compared to SAG. Drawbacks: This is not really a drawback, but more like a comparison: SAGA is similar to SAG with regard to memory cost. That is, it's suitable for large datasets, yet in edge cases where the dataset is very large, the SVRG 1, 2 would be a better option (unfortunately not implemented in scikit-learn)! Side note: According to Scikit Documentation: The SAGA solver is often the best choice. Please note the attributes \"Large\" and \"Small\" used in Scikit-Learn and in this comparison are relative. AFAIK, there is no universal, unanimous and accurate definition of the dataset boundaries to be considered as \"Large\", \"Too Large\", \"Small\", \"Too Small\"...etc! 
Summary The following table is taken from Scikit Documentation Updated Table from the same link above (accessed 02\/11\/2021):", "best_answers_score":0.8, "library_name":"scikit-learn", "question_url":"https:\/\/stackoverflow.com\/questions\/38640109\/logistic-regression-python-solvers-definitions", "best_answers_votes":331, "question_length":307, "response_length":8081 }, { "question":"What are the pros and cons between get_dummies (Pandas) and OneHotEncoder (Scikit-learn)? I'm learning different methods to convert categorical variables to numeric for machine-learning classifiers. I came across the pd.get_dummies method and sklearn.preprocessing.OneHotEncoder() and I wanted to see how they differed in terms of performance and usage. I found a tutorial on how to use OneHotEncoder() on https:\/\/xgdgsc.wordpress.com\/2015\/03\/20\/note-on-using-onehotencoder-in-scikit-learn-to-work-on-categorical-features\/ since the sklearn documentation wasn't too helpful on this feature. I have a feeling I'm not doing it correctly...but can someone explain the pros and cons of using pd.dummies over sklearn.preprocessing.OneHotEncoder() and vice versa? I know that OneHotEncoder() gives you a sparse matrix but other than that I'm not sure how it is used and what the benefits are over the pandas method. Am I using it inefficiently? 
``` import pandas as pd import numpy as np from sklearn.datasets import load_iris sns.set() %matplotlib inline #Iris Plot iris = load_iris() n_samples, m_features = iris.data.shape #Load Data X, y = iris.data, iris.target D_target_dummy = dict(zip(np.arange(iris.target_names.shape[0]), iris.target_names)) DF_data = pd.DataFrame(X,columns=iris.feature_names) DF_data[\"target\"] = pd.Series(y).map(D_target_dummy) #sepal length (cm) sepal width (cm) petal length (cm) petal width (cm) \\ #0 5.1 3.5 1.4 0.2 #1 4.9 3.0 1.4 0.2 #2 4.7 3.2 1.3 0.2 #3 4.6 3.1 1.5 0.2 #4 5.0 3.6 1.4 0.2 #5 5.4 3.9 1.7 0.4 DF_dummies = pd.get_dummies(DF_data[\"target\"]) #setosa versicolor virginica #0 1 0 0 #1 1 0 0 #2 1 0 0 #3 1 0 0 #4 1 0 0 #5 1 0 0 from sklearn.preprocessing import OneHotEncoder, LabelEncoder def f1(DF_data): Enc_ohe, Enc_label = OneHotEncoder(), LabelEncoder() DF_data[\"Dummies\"] = Enc_label.fit_transform(DF_data[\"target\"]) DF_dummies2 = pd.DataFrame(Enc_ohe.fit_transform(DF_data[[\"Dummies\"]]).todense(), columns = Enc_label.classes_) return(DF_dummies2) %timeit pd.get_dummies(DF_data[\"target\"]) #1000 loops, best of 3: 777 \u00b5s per loop %timeit f1(DF_data) #100 loops, best of 3: 2.91 ms per loop ```", "response":"For machine learning, you almost definitely want to use sklearn.OneHotEncoder. For other tasks like simple analyses, you might be able to use pd.get_dummies, which is a bit more convenient. Note that sklearn.OneHotEncoder has been updated in the latest version so that it does accept strings for categorical variables, as well as integers. The crux of it is that the sklearn encoder creates a function which persists and can then be applied to new data sets which use the same categorical variables, with consistent results. ``` from sklearn.preprocessing import OneHotEncoder # Create the encoder. encoder = OneHotEncoder(handle_unknown=\"ignore\") encoder.fit(X_train) # Assume for simplicity all features are categorical. # Apply the encoder. 
X_train = encoder.transform(X_train) X_test = encoder.transform(X_test) ``` Note how we apply the same encoder we created via X_train to the new data set X_test. Consider what happens if X_test contains different levels than X_train for one of its variables. For example, let's say X_train[\"color\"] contains only \"red\" and \"green\", but in addition to those, X_test[\"color\"] sometimes contains \"blue\". If we use pd.get_dummies, X_test will end up with an additional \"color_blue\" column which X_train doesn't have, and the inconsistency will probably break our code later on, especially if we are feeding X_test to an sklearn model which we trained on X_train. And if we want to process the data like this in production, where we're receiving a single example at a time, pd.get_dummies won't be of use. With sklearn.OneHotEncoder on the other hand, once we've created the encoder, we can reuse it to produce the same output every time, with columns only for \"red\" and \"green\". And we can explicitly control what happens when it encounters the new level \"blue\": if we think that's impossible, then we can tell it to throw an error with handle_unknown=\"error\"; otherwise we can tell it to continue and simply set the red and green columns to 0, with handle_unknown=\"ignore\".", "best_answers_score":0.8, "library_name":"scikit-learn", "question_url":"https:\/\/stackoverflow.com\/questions\/36631163\/what-are-the-pros-and-cons-between-get-dummies-pandas-and-onehotencoder-sciki", "best_answers_votes":256, "question_length":2139, "response_length":2014 }, { "question":"What are the different use cases of joblib versus pickle? Background: I'm just getting started with scikit-learn, and read at the bottom of the page about joblib, versus pickle. 
it may be more interesting to use joblib\u2019s replacement of pickle (joblib.dump & joblib.load), which is more efficient on big data, but can only pickle to the disk and not to a string I read this Q&A on Pickle, Common use-cases for pickle in Python and wonder if the community here can share the differences between joblib and pickle? When should one use one over another?", "response":"joblib is usually significantly faster on large numpy arrays because it has a special handling for the array buffers of the numpy datastructure. To find about the implementation details you can have a look at the source code. It can also compress that data on the fly while pickling using zlib or lz4. joblib also makes it possible to memory map the data buffer of an uncompressed joblib-pickled numpy array when loading it which makes it possible to share memory between processes. if you don't pickle large numpy arrays, then regular pickle can be significantly faster, especially on large collections of small python objects (e.g. a large dict of str objects) because the pickle module of the standard library is implemented in C while joblib is pure python. since PEP 574 (Pickle protocol 5) has been merged in Python 3.8, it is now much more efficient (memory-wise and cpu-wise) to pickle large numpy arrays using the standard library. Large arrays in this context means 4GB or more. But joblib can still be useful with Python 3.8 to load objects that have nested numpy arrays in memory mapped mode with mmap_mode=\"r\".", "best_answers_score":0.8, "library_name":"scikit-learn", "question_url":"https:\/\/stackoverflow.com\/questions\/12615525\/what-are-the-different-use-cases-of-joblib-versus-pickle", "best_answers_votes":186, "question_length":549, "response_length":1123 }, { "question":"How to compute precision, recall, accuracy and f1-score for the multiclass case with scikit learn? 
I'm working on a sentiment analysis problem; the data looks like this: ``` label instances 5 1190 4 838 3 239 1 204 2 127 ``` So my data is unbalanced since 1190 instances are labeled with 5. For the classification I'm using scikit's SVC. The problem is I do not know how to balance my data in the right way in order to accurately compute the precision, recall, accuracy and f1-score for the multiclass case. So I tried the following approaches: First: ``` wclf = SVC(kernel='linear', C= 1, class_weight={1: 10}) wclf.fit(X, y) weighted_prediction = wclf.predict(X_test) print 'Accuracy:', accuracy_score(y_test, weighted_prediction) print 'F1 score:', f1_score(y_test, weighted_prediction,average='weighted') print 'Recall:', recall_score(y_test, weighted_prediction, average='weighted') print 'Precision:', precision_score(y_test, weighted_prediction, average='weighted') print '\\n clasification report:\\n', classification_report(y_test, weighted_prediction) print '\\n confussion matrix:\\n',confusion_matrix(y_test, weighted_prediction) ``` Second: ``` auto_wclf = SVC(kernel='linear', C= 1, class_weight='auto') auto_wclf.fit(X, y) auto_weighted_prediction = auto_wclf.predict(X_test) print 'Accuracy:', accuracy_score(y_test, auto_weighted_prediction) print 'F1 score:', f1_score(y_test, auto_weighted_prediction, average='weighted') print 'Recall:', recall_score(y_test, auto_weighted_prediction, average='weighted') print 'Precision:', precision_score(y_test, auto_weighted_prediction, average='weighted') print '\\n clasification report:\\n', classification_report(y_test,auto_weighted_prediction) print '\\n confussion matrix:\\n',confusion_matrix(y_test, auto_weighted_prediction) ``` Third: ``` clf = SVC(kernel='linear', C= 1) clf.fit(X, y) prediction = clf.predict(X_test) from sklearn.metrics import precision_score, \\ recall_score, confusion_matrix, classification_report, \\ accuracy_score, f1_score print 'Accuracy:', accuracy_score(y_test, prediction) print 'F1 score:', 
f1_score(y_test, prediction) print 'Recall:', recall_score(y_test, prediction) print 'Precision:', precision_score(y_test, prediction) print '\\n clasification report:\\n', classification_report(y_test,prediction) print '\\n confussion matrix:\\n',confusion_matrix(y_test, prediction) F1 score:\/usr\/local\/lib\/python2.7\/site-packages\/sklearn\/metrics\/classification.py:676: DeprecationWarning: The default `weighted` averaging is deprecated, and from version 0.18, use of precision, recall or F-score with multiclass or multilabel data or pos_label=None will result in an exception. Please set an explicit value for `average`, one of (None, 'micro', 'macro', 'weighted', 'samples'). In cross validation use, for instance, scoring=\"f1_weighted\" instead of scoring=\"f1\". sample_weight=sample_weight) \/usr\/local\/lib\/python2.7\/site-packages\/sklearn\/metrics\/classification.py:1172: DeprecationWarning: The default `weighted` averaging is deprecated, and from version 0.18, use of precision, recall or F-score with multiclass or multilabel data or pos_label=None will result in an exception. Please set an explicit value for `average`, one of (None, 'micro', 'macro', 'weighted', 'samples'). In cross validation use, for instance, scoring=\"f1_weighted\" instead of scoring=\"f1\". sample_weight=sample_weight) \/usr\/local\/lib\/python2.7\/site-packages\/sklearn\/metrics\/classification.py:1082: DeprecationWarning: The default `weighted` averaging is deprecated, and from version 0.18, use of precision, recall or F-score with multiclass or multilabel data or pos_label=None will result in an exception. Please set an explicit value for `average`, one of (None, 'micro', 'macro', 'weighted', 'samples'). In cross validation use, for instance, scoring=\"f1_weighted\" instead of scoring=\"f1\". 
sample_weight=sample_weight) 0.930416613529 ``` However, I'm getting warnings like this: ``` \/usr\/local\/lib\/python2.7\/site-packages\/sklearn\/metrics\/classification.py:1172: DeprecationWarning: The default `weighted` averaging is deprecated, and from version 0.18, use of precision, recall or F-score with multiclass or multilabel data or pos_label=None will result in an exception. Please set an explicit value for `average`, one of (None, 'micro', 'macro', 'weighted', 'samples'). In cross validation use, for instance, scoring=\"f1_weighted\" instead of scoring=\"f1\" ``` How can I deal with my unbalanced data correctly in order to compute the classifier's metrics in the right way?", "response":"I think there is a lot of confusion about which weights are used for what. I am not sure I know precisely what bothers you so I am going to cover different topics, bear with me ;). Class weights The weights from the class_weight parameter are used to train the classifier. They are not used in the calculation of any of the metrics you are using: with different class weights, the numbers will be different simply because the classifier is different. Basically in every scikit-learn classifier, the class weights are used to tell your model how important a class is. That means that during the training, the classifier will make extra efforts to properly classify the classes with high weights. How they do that is algorithm-specific. If you want details about how it works for SVC and the doc does not make sense to you, feel free to mention it. The metrics Once you have a classifier, you want to know how well it is performing. Here you can use the metrics you mentioned: accuracy, recall_score, f1_score... Usually when the class distribution is unbalanced, accuracy is considered a poor choice as it gives high scores to models which just predict the most frequent class. 
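As a quick, hypothetical illustration of that point (synthetic labels that only mirror the class counts from the question, not the real data), a baseline that always predicts the most frequent class already looks decent on accuracy while its macro f1-score collapses:

```python
import numpy as np
from sklearn.dummy import DummyClassifier
from sklearn.metrics import accuracy_score, f1_score

# Synthetic labels with the same class counts as in the question.
y = np.array([5] * 1190 + [4] * 838 + [3] * 239 + [1] * 204 + [2] * 127)
X = np.zeros((len(y), 1))  # placeholder features; this baseline ignores them

baseline = DummyClassifier(strategy='most_frequent').fit(X, y)
y_pred = baseline.predict(X)

acc = accuracy_score(y, y_pred)                  # about 0.46, the share of class 5
macro_f1 = f1_score(y, y_pred, average='macro')  # much lower, exposing the baseline
print(acc, macro_f1)
```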
I will not detail all these metrics but note that, with the exception of accuracy, they are naturally applied at the class level: as you can see in this print of a classification report they are defined for each class. They rely on concepts such as true positives or false negatives that require defining which class is the positive one. ``` precision recall f1-score support 0 0.65 1.00 0.79 17 1 0.57 0.75 0.65 16 2 0.33 0.06 0.10 17 avg \/ total 0.52 0.60 0.51 50 ``` The warning ``` F1 score:\/usr\/local\/lib\/python2.7\/site-packages\/sklearn\/metrics\/classification.py:676: DeprecationWarning: The default `weighted` averaging is deprecated, and from version 0.18, use of precision, recall or F-score with multiclass or multilabel data or pos_label=None will result in an exception. Please set an explicit value for `average`, one of (None, 'micro', 'macro', 'weighted', 'samples'). In cross validation use, for instance, scoring=\"f1_weighted\" instead of scoring=\"f1\". ``` You get this warning because you are using the f1-score, recall and precision without defining how they should be computed! The question could be rephrased: from the above classification report, how do you output one global number for the f1-score? You could: Take the average of the f1-score for each class: that's the avg \/ total result above. It's also called macro averaging. Compute the f1-score using the global count of true positives \/ false negatives, etc. (you sum the number of true positives \/ false negatives for each class). Aka micro averaging. Compute a weighted average of the f1-score. Using 'weighted' in scikit-learn will weigh the f1-score by the support of the class: the more elements a class has, the more important the f1-score for this class in the computation. These are 3 of the options in scikit-learn; the warning is there to say you have to pick one. So you have to specify an average argument for the score method. 
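To make those three options concrete, here is a small toy example (made-up labels, unrelated to the question's data) showing how the averages differ:

```python
from sklearn.metrics import f1_score

y_true = [0, 0, 0, 1, 1, 2]
y_pred = [0, 0, 1, 1, 1, 2]

# Per-class f1-scores here are 0.8, 0.8 and 1.0, with supports 3, 2 and 1.
macro = f1_score(y_true, y_pred, average='macro')        # plain mean: about 0.867
micro = f1_score(y_true, y_pred, average='micro')        # from global counts: about 0.833
weighted = f1_score(y_true, y_pred, average='weighted')  # support-weighted mean: about 0.833
print(macro, micro, weighted)
```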
Which one you choose is up to how you want to measure the performance of the classifier: for instance macro-averaging does not take class imbalance into account and the f1-score of class 1 will be just as important as the f1-score of class 5. If you use weighted averaging, however, you'll get more importance for class 5. The whole argument specification in these metrics is not super-clear in scikit-learn right now; it will get better in version 0.18 according to the docs. They are removing some non-obvious standard behavior and they are issuing warnings so that developers notice it. Computing scores The last thing I want to mention (feel free to skip it if you're aware of it) is that scores are only meaningful if they are computed on data that the classifier has never seen. This is extremely important as any score you get on data that was used in fitting the classifier is completely irrelevant. Here's a way to do it using StratifiedShuffleSplit, which gives you random splits of your data (after shuffling) that preserve the label distribution. ``` from sklearn.datasets import make_classification from sklearn.cross_validation import StratifiedShuffleSplit from sklearn.metrics import accuracy_score, f1_score, precision_score, recall_score, classification_report, confusion_matrix # We use a utility to generate artificial classification data. 
X, y = make_classification(n_samples=100, n_informative=10, n_classes=3) from sklearn.svm import SVC svc = SVC(kernel='linear') sss = StratifiedShuffleSplit(y, n_iter=1, test_size=0.5, random_state=0) for train_idx, test_idx in sss: X_train, X_test, y_train, y_test = X[train_idx], X[test_idx], y[train_idx], y[test_idx] svc.fit(X_train, y_train) y_pred = svc.predict(X_test) print(f1_score(y_test, y_pred, average=\"macro\")) print(precision_score(y_test, y_pred, average=\"macro\")) print(recall_score(y_test, y_pred, average=\"macro\")) ```", "best_answers_score":0.8, "library_name":"scikit-learn", "question_url":"https:\/\/stackoverflow.com\/questions\/31421413\/how-to-compute-precision-recall-accuracy-and-f1-score-for-the-multiclass-case", "best_answers_votes":206, "question_length":4525, "response_length":4937 }, { "question":"UndefinedMetricWarning: F-score is ill-defined and being set to 0.0 in labels with no predicted samples I'm getting this weird error: ``` classification.py:1113: UndefinedMetricWarning: F-score is ill-defined and being set to 0.0 in labels with no predicted samples. 'precision', 'predicted', average, warn_for)` ``` but then it also prints the f-score the first time I run: ``` metrics.f1_score(y_test, y_pred, average='weighted') ``` The second time I run, it provides the score without error. Why is that? 
``` >>> y_pred = test.predict(X_test) >>> y_test array([ 1, 10, 35, 9, 7, 29, 26, 3, 8, 23, 39, 11, 20, 2, 5, 23, 28, 30, 32, 18, 5, 34, 4, 25, 12, 24, 13, 21, 38, 19, 33, 33, 16, 20, 18, 27, 39, 20, 37, 17, 31, 29, 36, 7, 6, 24, 37, 22, 30, 0, 22, 11, 35, 30, 31, 14, 32, 21, 34, 38, 5, 11, 10, 6, 1, 14, 12, 36, 25, 8, 30, 3, 12, 7, 4, 10, 15, 12, 34, 25, 26, 29, 14, 37, 23, 12, 19, 19, 3, 2, 31, 30, 11, 2, 24, 19, 27, 22, 13, 6, 18, 20, 6, 34, 33, 2, 37, 17, 30, 24, 2, 36, 9, 36, 19, 33, 35, 0, 4, 1]) >>> y_pred array([ 1, 10, 35, 7, 7, 29, 26, 3, 8, 23, 39, 11, 20, 4, 5, 23, 28, 30, 32, 18, 5, 39, 4, 25, 0, 24, 13, 21, 38, 19, 33, 33, 16, 20, 18, 27, 39, 20, 37, 17, 31, 29, 36, 7, 6, 24, 37, 22, 30, 0, 22, 11, 35, 30, 31, 14, 32, 21, 34, 38, 5, 11, 10, 6, 1, 14, 30, 36, 25, 8, 30, 3, 12, 7, 4, 10, 15, 12, 4, 22, 26, 29, 14, 37, 23, 12, 19, 19, 3, 25, 31, 30, 11, 25, 24, 19, 27, 22, 13, 6, 18, 20, 6, 39, 33, 9, 37, 17, 30, 24, 9, 36, 39, 36, 19, 33, 35, 0, 4, 1]) >>> metrics.f1_score(y_test, y_pred, average='weighted') C:\\Users\\Michael\\Miniconda3\\envs\\snowflakes\\lib\\site-packages\\sklearn\\metrics\\classification.py:1113: UndefinedMetricWarning: F-score is ill-defined and being set to 0.0 in labels with no predicted samples. 'precision', 'predicted', average, warn_for) 0.87282051282051276 >>> metrics.f1_score(y_test, y_pred, average='weighted') 0.87282051282051276 >>> metrics.f1_score(y_test, y_pred, average='weighted') 0.87282051282051276 ``` Also, why is there a trailing 'precision', 'predicted', average, warn_for) error message? There is no open parenthesis so why does it end with a closing parenthesis? I am running sklearn 0.18.1 using Python 3.6.0 in a conda environment on Windows 10. I also looked at here and I don't know if it's the same bug. This SO post doesn't have solution either.", "response":"As mentioned in the comments, some labels in y_test don't appear in y_pred. 
Specifically in this case, label '2' is never predicted: ``` >>> set(y_test) - set(y_pred) {2} ``` This means that there is no F-score to calculate for this label, and thus the F-score for this case is considered to be 0.0. Since you requested an average of the score, you must take into account that a score of 0 was included in the calculation, and this is why scikit-learn is showing you that warning. This brings me to you not seeing the error a second time. As I mentioned, this is a warning, which is treated differently from an error in python. The default behavior in most environments is to show a specific warning only once. This behavior can be changed: ``` import warnings warnings.filterwarnings('always') # \"error\", \"ignore\", \"always\", \"default\", \"module\" or \"once\" ``` If you set this before importing the other modules, you will see the warning every time you run the code. There is no way to avoid seeing this warning the first time, aside from setting warnings.filterwarnings('ignore'). What you can do is decide that you are not interested in the scores of labels that were not predicted, and then explicitly specify the labels you are interested in (which are labels that were predicted at least once): ``` >>> metrics.f1_score(y_test, y_pred, average='weighted', labels=np.unique(y_pred)) 0.91076923076923078 ``` The warning will be gone.", "best_answers_score":0.8, "library_name":"scikit-learn", "question_url":"https:\/\/stackoverflow.com\/questions\/43162506\/undefinedmetricwarning-f-score-is-ill-defined-and-being-set-to-0-0-in-labels-wi", "best_answers_votes":193, "question_length":2329, "response_length":1435 }, { "question":"how to check which version of nltk, scikit learn installed? In a shell script I am checking whether these packages are installed or not; if they are not installed, then install them. 
So within the shell script: ``` import nltk echo nltk.__version__ ``` but it stops the shell script at the import line. In the Linux terminal I tried to check it in this manner: ``` which nltk ``` which gives nothing though it is installed. Is there any other way to verify this package installation in a shell script and, if it is not installed, also install it?", "response":"import nltk is Python syntax, and as such won't work in a shell script. To test the version of nltk and scikit_learn, you can write a Python script and run it. Such a script may look like ``` import nltk import sklearn print('The nltk version is {}.'.format(nltk.__version__)) print('The scikit-learn version is {}.'.format(sklearn.__version__)) # The nltk version is 3.0.0. # The scikit-learn version is 0.15.2. ``` Note that not all Python packages are guaranteed to have a __version__ attribute, so for some others it may fail, but for nltk and scikit-learn at least it will work.", "best_answers_score":0.8, "library_name":"scikit-learn", "question_url":"https:\/\/stackoverflow.com\/questions\/28501072\/how-to-check-which-version-of-nltk-scikit-learn-installed", "best_answers_votes":219, "question_length":497, "response_length":583 }, { "question":"Stratified Train\/Test-split in scikit-learn I need to split my data into a training set (75%) and test set (25%). I currently do that with the code below: ``` X, Xt, userInfo, userInfo_train = sklearn.cross_validation.train_test_split(X, userInfo) ``` However, I'd like to stratify my training dataset. How do I do that? I've been looking into the StratifiedKFold method, but it doesn't let me specify the 75%\/25% split and only stratifies the training dataset.", "response":"[update for 0.17] See the docs of sklearn.model_selection.train_test_split: ``` from sklearn.model_selection import train_test_split X_train, X_test, y_train, y_test = train_test_split(X, y, stratify=y, test_size=0.25) ``` [\/update for 0.17] There is a pull request here. 
But you can simply do train, test = next(iter(StratifiedKFold(...))) and use the train and test indices if you want.", "best_answers_score":0.8, "library_name":"scikit-learn", "question_url":"https:\/\/stackoverflow.com\/questions\/29438265\/stratified-train-test-split-in-scikit-learn", "best_answers_votes":245, "question_length":457, "response_length":388 }, { "question":"LogisticRegression: Unknown label type: 'continuous' using sklearn in python I have the following code to test some of most popular ML algorithms of sklearn python library: ``` import numpy as np from sklearn import metrics, svm from sklearn.linear_model import LinearRegression from sklearn.linear_model import LogisticRegression from sklearn.tree import DecisionTreeClassifier from sklearn.neighbors import KNeighborsClassifier from sklearn.discriminant_analysis import LinearDiscriminantAnalysis from sklearn.naive_bayes import GaussianNB from sklearn.svm import SVC trainingData = np.array([ [2.3, 4.3, 2.5], [1.3, 5.2, 5.2], [3.3, 2.9, 0.8], [3.1, 4.3, 4.0] ]) trainingScores = np.array( [3.4, 7.5, 4.5, 1.6] ) predictionData = np.array([ [2.5, 2.4, 2.7], [2.7, 3.2, 1.2] ]) clf = LinearRegression() clf.fit(trainingData, trainingScores) print(\"LinearRegression\") print(clf.predict(predictionData)) clf = svm.SVR() clf.fit(trainingData, trainingScores) print(\"SVR\") print(clf.predict(predictionData)) clf = LogisticRegression() clf.fit(trainingData, trainingScores) print(\"LogisticRegression\") print(clf.predict(predictionData)) clf = DecisionTreeClassifier() clf.fit(trainingData, trainingScores) print(\"DecisionTreeClassifier\") print(clf.predict(predictionData)) clf = KNeighborsClassifier() clf.fit(trainingData, trainingScores) print(\"KNeighborsClassifier\") print(clf.predict(predictionData)) clf = LinearDiscriminantAnalysis() clf.fit(trainingData, trainingScores) print(\"LinearDiscriminantAnalysis\") print(clf.predict(predictionData)) clf = GaussianNB() clf.fit(trainingData, trainingScores) 
print(\"GaussianNB\") print(clf.predict(predictionData)) clf = SVC() clf.fit(trainingData, trainingScores) print(\"SVC\") print(clf.predict(predictionData)) ``` The first two work OK, but I get the following error in the LogisticRegression call: ``` root@ubupc1:\/home\/ouhma# python stack.py LinearRegression [ 15.72023529 6.46666667] SVR [ 3.95570063 4.23426243] Traceback (most recent call last): File \"stack.py\", line 28, in clf.fit(trainingData, trainingScores) File \"\/usr\/local\/lib\/python2.7\/dist-packages\/sklearn\/linear_model\/logistic.py\", line 1174, in fit check_classification_targets(y) File \"\/usr\/local\/lib\/python2.7\/dist-packages\/sklearn\/utils\/multiclass.py\", line 172, in check_classification_targets raise ValueError(\"Unknown label type: %r\" % y_type) ValueError: Unknown label type: 'continuous' ``` The input data is the same as in the previous calls, so what is going on here? And by the way, why is there a huge difference in the first prediction of the LinearRegression() and SVR() algorithms (15.72 vs 3.95)?", "response":"You are passing floats to a classifier which expects categorical values as the target vector. If you convert it to int it will be accepted as input (although it will be questionable if that's the right way to do it). It would be better to convert your training scores by using scikit's LabelEncoder function. The same is true for your DecisionTree and KNeighbors classifiers. 
``` from sklearn import preprocessing from sklearn import utils lab_enc = preprocessing.LabelEncoder() encoded = lab_enc.fit_transform(trainingScores) >>> array([1, 3, 2, 0], dtype=int64) print(utils.multiclass.type_of_target(trainingScores)) >>> continuous print(utils.multiclass.type_of_target(trainingScores.astype('int'))) >>> multiclass print(utils.multiclass.type_of_target(encoded)) >>> multiclass ```", "best_answers_score":0.8, "library_name":"scikit-learn", "question_url":"https:\/\/stackoverflow.com\/questions\/41925157\/logisticregression-unknown-label-type-continuous-using-sklearn-in-python", "best_answers_votes":134, "question_length":2618, "response_length":782 }, { "question":"Scikit-learn: How to obtain True Positive, True Negative, False Positive and False Negative My problem: I have a dataset which is a large JSON file. I read it and store it in the trainList variable. Next, I pre-process it - in order to be able to work with it. Once I have done that I start the classification: I use the kfold cross validation method in order to obtain the mean accuracy and train a classifier. I make the predictions and obtain the accuracy & confusion matrix of that fold. After this, I would like to obtain the True Positive(TP), True Negative(TN), False Positive(FP) and False Negative(FN) values. I'll use these parameters to obtain the Sensitivity and Specificity. Finally, I would use this to put in HTML in order to show a chart with the TPs of each label. 
Code: The variables I have for the moment: ``` trainList #It is a list with all the data of my dataset in JSON form labelList #It is a list with all the labels of my data ``` Most part of the method: ``` #I transform the data from JSON form to a numerical one X=vec.fit_transform(trainList) #I scale the matrix (don't know why but without it, it makes an error) X=preprocessing.scale(X.toarray()) #I generate a KFold in order to make cross validation kf = KFold(len(X), n_folds=10, indices=True, shuffle=True, random_state=1) #I start the cross validation for train_indices, test_indices in kf: X_train=[X[ii] for ii in train_indices] X_test=[X[ii] for ii in test_indices] y_train=[listaLabels[ii] for ii in train_indices] y_test=[listaLabels[ii] for ii in test_indices] #I train the classifier trained=qda.fit(X_train,y_train) #I make the predictions predicted=qda.predict(X_test) #I obtain the accuracy of this fold ac=accuracy_score(predicted,y_test) #I obtain the confusion matrix cm=confusion_matrix(y_test, predicted) #I should calculate the TP,TN, FP and FN #I don't know how to continue ```", "response":"For the multi-class case, everything you need can be found from the confusion matrix. 
For example, if your confusion matrix looks like this: Then what you're looking for, per class, can be found like this: Using pandas\/numpy, you can do this for all classes at once like so: ```python FP = confusion_matrix.sum(axis=0) - np.diag(confusion_matrix) FN = confusion_matrix.sum(axis=1) - np.diag(confusion_matrix) TP = np.diag(confusion_matrix) TN = confusion_matrix.values.sum() - (FP + FN + TP) # Sensitivity, hit rate, recall, or true positive rate TPR = TP\/(TP+FN) # Specificity or true negative rate TNR = TN\/(TN+FP) # Precision or positive predictive value PPV = TP\/(TP+FP) # Negative predictive value NPV = TN\/(TN+FN) # Fall out or false positive rate FPR = FP\/(FP+TN) # False negative rate FNR = FN\/(TP+FN) # False discovery rate FDR = FP\/(TP+FP) # Overall accuracy ACC = (TP+TN)\/(TP+FP+FN+TN) ```", "best_answers_score":0.8, "library_name":"scikit-learn", "question_url":"https:\/\/stackoverflow.com\/questions\/31324218\/scikit-learn-how-to-obtain-true-positive-true-negative-false-positive-and-fal", "best_answers_votes":211, "question_length":1880, "response_length":900 }, { "question":"Why does one hot encoding improve machine learning performance? [closed] I have noticed that when One Hot encoding is used on a particular data set (a matrix) and used as training data for learning algorithms, it gives significantly better results with respect to prediction accuracy, compared to using the original matrix itself as training data. How does this performance increase happen?", "response":"Many learning algorithms either learn a single weight per feature, or they use distances between samples. The former is the case for linear models such as logistic regression, which are easy to explain. 
Suppose you have a dataset having only a single categorical feature \"nationality\", with values \"UK\", \"French\" and \"US\". Assume, without loss of generality, that these are encoded as 0, 1 and 2. You then have a weight w for this feature in a linear classifier, which will make some kind of decision based on the constraint w\u00d7x + b > 0, or equivalently w\u00d7x < b. The problem now is that the weight w cannot encode a three-way choice. The three possible values of w\u00d7x are 0, w and 2\u00d7w. Either these three all lead to the same decision (they're all < b or \u2265b) or \"UK\" and \"French\" lead to the same decision, or \"French\" and \"US\" give the same decision. There's no possibility for the model to learn that \"UK\" and \"US\" should be given the same label, with \"French\" the odd one out. By one-hot encoding, you effectively blow up the feature space to three features, which will each get their own weights, so the decision function is now w[UK]x[UK] + w[FR]x[FR] + w[US]x[US] < b, where all the x's are booleans. In this space, such a linear function can express any sum\/disjunction of the possibilities (e.g. \"UK or US\", which might be a predictor for someone speaking English). Similarly, any learner based on standard distance metrics (such as k-nearest neighbors) between samples will get confused without one-hot encoding. With the naive encoding and Euclidean distance, the distance between French and US is 1. The distance between US and UK is 2. But with the one-hot encoding, the pairwise distances between [1, 0, 0], [0, 1, 0] and [0, 0, 1] are all equal to \u221a2. 
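That distance claim is easy to check numerically (a small sketch with made-up data; note that OneHotEncoder accepts string features directly only since scikit-learn 0.20, while older versions needed a LabelEncoder pass first):

```python
import numpy as np
from sklearn.preprocessing import OneHotEncoder

nationalities = np.array([['UK'], ['French'], ['US']])
onehot = OneHotEncoder().fit_transform(nationalities).toarray()

def pairwise_euclidean(m):
    # Matrix of Euclidean distances between every pair of rows.
    diff = m[:, None, :] - m[None, :, :]
    return np.sqrt((diff ** 2).sum(axis=-1))

naive = np.array([[0.0], [1.0], [2.0]])  # the 0, 1, 2 encoding from the text
print(pairwise_euclidean(naive))         # off-diagonal distances: 1, 1 and 2
print(pairwise_euclidean(onehot))        # every off-diagonal distance is sqrt(2)
```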
This is not true for all learning algorithms; decision trees and derived models such as random forests, if deep enough, can handle categorical variables without one-hot encoding.", "best_answers_score":0.8, "library_name":"scikit-learn", "question_url":"https:\/\/stackoverflow.com\/questions\/17469835\/why-does-one-hot-encoding-improve-machine-learning-performance", "best_answers_votes":262, "question_length":633, "response_length":1943 }, { "question":"ConvergenceWarning: lbfgs failed to converge (status=1): STOP: TOTAL NO. of ITERATIONS REACHED LIMIT I have a dataset consisting of both numeric and categorical data and I want to predict adverse outcomes for patients based on their medical characteristics. I defined a prediction pipeline for my dataset like so: ```py X = dataset.drop(columns=['target']) y = dataset['target'] # define categorical and numeric transformers numeric_transformer = Pipeline(steps=[ ('knnImputer', KNNImputer(n_neighbors=2, weights=\"uniform\")), ('scaler', StandardScaler())]) categorical_transformer = Pipeline(steps=[ ('imputer', SimpleImputer(strategy='constant', fill_value='missing')), ('onehot', OneHotEncoder(handle_unknown='ignore'))]) # dispatch object columns to the categorical_transformer and remaining columns to numerical_transformer preprocessor = ColumnTransformer(transformers=[ ('num', numeric_transformer, selector(dtype_exclude=\"object\")), ('cat', categorical_transformer, selector(dtype_include=\"object\")) ]) # Append classifier to preprocessing pipeline. # Now we have a full prediction pipeline. clf = Pipeline(steps=[('preprocessor', preprocessor), ('classifier', LogisticRegression())]) X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2) clf.fit(X_train, y_train) print(\"model score: %.3f\" % clf.score(X_test, y_test)) ``` However, when running this code, I get the following warning message: ``` ConvergenceWarning: lbfgs failed to converge (status=1): STOP: TOTAL NO. of ITERATIONS REACHED LIMIT. 
Increase the number of iterations (max_iter) or scale the data as shown in: https:\/\/scikit-learn.org\/stable\/modules\/preprocessing.html Please also refer to the documentation for alternative solver options: https:\/\/scikit-learn.org\/stable\/modules\/linear_model.html#logistic-regression extra_warning_msg=_LOGISTIC_SOLVER_CONVERGENCE_MSG) model score: 0.988 ``` Can someone explain to me what this warning means? I am new to machine learning so am a little lost as to what I can do to improve the prediction model. As you can see from the numeric_transformer, I scaled the data through standardisation. I am also confused as to how the model score is quite high and whether this is a good or bad thing.", "response":"The warning means what it says: it suggests things to try to make the solver (the algorithm) converge. lbfgs stands for: \"Limited-memory Broyden\u2013Fletcher\u2013Goldfarb\u2013Shanno Algorithm\". It is one of the solver algorithms provided by the Scikit-Learn library. The term limited-memory simply means it stores only a few vectors that represent the gradients approximation implicitly. It has better convergence on relatively small datasets. But what is algorithm convergence? In simple words: if the solving error stays within a very small range (i.e., it is almost not changing), then that means the algorithm reached a solution (not necessarily the best solution, as it might be stuck at a so-called \"local optimum\"). On the other hand, if the error is varying noticeably (even if the error is relatively small [like in your case the score was good], but rather the differences between the errors per iteration are greater than some tolerance) then we say the algorithm did not converge. Now, you need to know that the Scikit-Learn API sometimes provides the user the option to specify the maximum number of iterations the algorithm should take while it's searching for the solution in an iterative manner: 
solver='lbfgs', max_iter=100 ...) ``` As you can see, the default solver in LogisticRegression is 'lbfgs' and the maximum number of iterations is 100 by default. Final words, please, however, note that increasing the maximum number of iterations does not necessarily guarantee convergence, but it certainly helps! Update: Based on your comment below, some tips to try (out of many) that might help the algorithm to converge are: Increase the number of iterations: As in this answer; Try a different optimizer: Look here; Scale your data: Look here; Add engineered features: Look here; Data pre-processing: Look here - use case and here; Add more data: Look here.", "best_answers_score":0.8, "library_name":"scikit-learn", "question_url":"https:\/\/stackoverflow.com\/questions\/62658215\/convergencewarning-lbfgs-failed-to-converge-status-1-stop-total-no-of-iter", "best_answers_votes":199, "question_length":2225, "response_length":1895 }, { "question":"Will scikit-learn utilize GPU? Reading implementation of scikit-learn in TensorFlow: http:\/\/learningtensorflow.com\/lesson6\/ and scikit-learn: http:\/\/scikit-learn.org\/stable\/modules\/generated\/sklearn.cluster.KMeans.html I'm struggling to decide which implementation to use. scikit-learn is installed as part of the tensorflow docker container so can use either implementation. Reason to use scikit-learn : scikit-learn contains less boilerplate than the tensorflow implementation. Reason to use tensorflow : If running on Nvidia GPU the algorithm will be run against in parallel , I'm not sure if scikit-learn will utilize all available GPUs? Reading https:\/\/www.quora.com\/What-are-the-main-differences-between-TensorFlow-and-SciKit-Learn TensorFlow is more low-level; basically, the Lego bricks that help you to implement machine learning algorithms whereas scikit-learn offers you off-the-shelf algorithms, e.g., algorithms for classification such as SVMs, Random Forests, Logistic Regression, and many, many more. 
TensorFlow shines if you want to implement deep learning algorithms, since it allows you to take advantage of GPUs for more efficient training. This statement re-enforces my assertion that \"scikit-learn contains less boilerplate than the tensorflow implementation\" but also suggests scikit-learn will not utilize all available GPUs?", "response":"Tensorflow only uses GPU if it is built against Cuda and CuDNN. By default it does not use GPU, especially if it is running inside Docker, unless you use nvidia-docker and an image with a built-in support. Scikit-learn is not intended to be used as a deep-learning framework and it does not provide any GPU support. Why is there no support for deep or reinforcement learning \/ Will there be support for deep or reinforcement learning in scikit-learn? Deep learning and reinforcement learning both require a rich vocabulary to define an architecture, with deep learning additionally requiring GPUs for efficient computing. However, neither of these fit within the design constraints of scikit-learn; as a result, deep learning and reinforcement learning are currently out of scope for what scikit-learn seeks to achieve. Extracted from http:\/\/scikit-learn.org\/stable\/faq.html#why-is-there-no-support-for-deep-or-reinforcement-learning-will-there-be-support-for-deep-or-reinforcement-learning-in-scikit-learn Will you add GPU support in scikit-learn? No, or at least not in the near future. The main reason is that GPU support will introduce many software dependencies and introduce platform specific issues. scikit-learn is designed to be easy to install on a wide variety of platforms. Outside of neural networks, GPUs don\u2019t play a large role in machine learning today, and much larger gains in speed can often be achieved by a careful choice of algorithms. 
Extracted from http:\/\/scikit-learn.org\/stable\/faq.html#will-you-add-gpu-support", "best_answers_score":0.8, "library_name":"scikit-learn", "question_url":"https:\/\/stackoverflow.com\/questions\/41567895\/will-scikit-learn-utilize-gpu", "best_answers_votes":157, "question_length":1348, "response_length":1537 }, { "question":"A progress bar for scikit-learn? Is there any way to have a progress bar to the fit method in scikit-learn ? Is it possible to include a custom one with something like Pyprind ?", "response":"If you initialize the model with verbose=1 before calling fit you should get some kind of output indicating the progress. For example sklearn.ensemble.GradientBoostingClassifer(verbose=1) provides progress output that looks like this: ``` Iter Train Loss Remaining Time 1 1.2811 0.71s 2 1.2595 0.58s 3 1.2402 0.50s 4 1.2263 0.46s 5 1.2121 0.43s 6 1.1999 0.41s 7 1.1876 0.39s 8 1.1761 0.38s 9 1.1673 0.37s 10 1.1591 0.36s 20 1.1021 0.29s 30 1.0511 0.27s 40 1.0116 0.25s 50 0.9830 0.22s 60 0.9581 0.19s 70 0.9377 0.16s 80 0.9169 0.14s 90 0.9049 0.12s 100 0.8973 0.10s ```", "best_answers_score":0.8, "library_name":"scikit-learn", "question_url":"https:\/\/stackoverflow.com\/questions\/34251980\/a-progress-bar-for-scikit-learn", "best_answers_votes":121, "question_length":177, "response_length":569 }, { "question":"RandomForestClassifier vs ExtraTreesClassifier in scikit learn Can anyone explain the difference between the RandomForestClassifier and ExtraTreesClassifier in scikit learn. I've spent a good bit of time reading the paper: P. Geurts, D. Ernst., and L. Wehenkel, \u201cExtremely randomized trees\u201d, Machine Learning, 63(1), 3-42, 2006 It seems these are the difference for ET: 1) When choosing variables at a split, samples are drawn from the entire training set instead of a bootstrap sample of the training set. 2) Splits are chosen completely at random from the range of values in the sample at each split. 
The result of these two things is many more \"leaves\".", "response":"Yes, both conclusions are correct, although the Random Forest implementation in scikit-learn makes it possible to enable or disable the bootstrap resampling. In practice, RFs are often more compact than ETs. ETs are generally cheaper to train from a computational point of view but can grow much bigger. ETs can sometimes generalize better than RFs but it's hard to guess when it's the case without trying both first (and tuning n_estimators, max_features and min_samples_split by cross-validated grid search).", "best_answers_score":0.8, "library_name":"scikit-learn", "question_url":"https:\/\/stackoverflow.com\/questions\/22409855\/randomforestclassifier-vs-extratreesclassifier-in-scikit-learn", "best_answers_votes":65, "question_length":659, "response_length":508 }, { "question":"ImportError: No module named model_selection I am trying to use the train_test_split function and write: ``` from sklearn.model_selection import train_test_split ``` and this causes ``` ImportError: No module named model_selection ``` Why? And how can I overcome it?", "response":"I guess you have the wrong version of scikit-learn; a similar situation was described here on GitHub. Previously (before v0.18), train_test_split was located in the cross_validation module: ``` from sklearn.cross_validation import train_test_split ``` However, now it's in the model_selection module: ``` from sklearn.model_selection import train_test_split ``` so you'll need the newest version. To upgrade to at least version 0.18, do: ``` pip install -U scikit-learn ``` (Or pip3, depending on your version of Python). 
If you installed it in a different way (for example via Anaconda), make sure you update it with the matching tool, e.g. conda update scikit-learn.", "best_answers_score":0.8, "library_name":"scikit-learn", "question_url":"https:\/\/stackoverflow.com\/questions\/40704484\/importerror-no-module-named-model-selection", "best_answers_votes":197, "question_length":256, "response_length":641 }, { "question":"LabelEncoder: TypeError: '>' not supported between instances of 'float' and 'str' I'm facing this error for multiple variables even after treating missing values. For example: ``` le = preprocessing.LabelEncoder() categorical = list(df.select_dtypes(include=['object']).columns.values) for cat in categorical: print(cat) df[cat].fillna('UNK', inplace=True) df[cat] = le.fit_transform(df[cat]) # print(le.classes_) # print(le.transform(le.classes_)) --------------------------------------------------------------------------- TypeError Traceback (most recent call last) in () 4 print(cat) 5 df[cat].fillna('UNK', inplace=True) ----> 6 df[cat] = le.fit_transform(df[cat].fillna('UNK')) 7 # print(le.classes_) 8 # print(le.transform(le.classes_)) C:\\Users\\paula.ceccon.ribeiro\\AppData\\Local\\Continuum\\Anaconda3\\lib\\site-packages\\sklearn\\preprocessing\\label.py in fit_transform(self, y) 129 y = column_or_1d(y, warn=True) 130 _check_numpy_unicode_bug(y) --> 131 self.classes_, y = np.unique(y, return_inverse=True) 132 return y 133 C:\\Users\\paula.ceccon.ribeiro\\AppData\\Local\\Continuum\\Anaconda3\\lib\\site-packages\\numpy\\lib\\arraysetops.py in unique(ar, return_index, return_inverse, return_counts) 209 210 if optional_indices: --> 211 perm = ar.argsort(kind='mergesort' if return_index else 'quicksort') 212 aux = ar[perm] 213 else: TypeError: '>' not supported between instances of 'float' and 'str' ``` Checking the variable that led to the error results in: ``` df['CRM do M\u00e9dico'].isnull().sum() 0 ``` Besides nan values, what could be causing this error?", "response":"This is due to the series
df[cat] containing elements that have varying data types, e.g. strings and\/or floats. This could be due to the way the data is read, i.e. numbers are read as float and text as strings, or the datatype was float and changed after the fillna operation. In other words, the pandas data type 'Object' indicates mixed types rather than str type, so using the following line: ``` df[cat] = le.fit_transform(df[cat].astype(str)) ``` should help", "best_answers_score":0.8, "library_name":"scikit-learn", "question_url":"https:\/\/stackoverflow.com\/questions\/46406720\/labelencoder-typeerror-not-supported-between-instances-of-float-and-str", "best_answers_votes":161, "question_length":1552, "response_length":456 }, { "question":"RandomForestClassfier.fit(): ValueError: could not convert string to float Given is a simple CSV file: ``` A,B,C Hello,Hi,0 Hola,Bueno,1 ``` Obviously the real dataset is far more complex than this, but this one reproduces the error. I'm attempting to build a random forest classifier for it, like so: ``` cols = ['A','B','C'] col_types = {'A': str, 'B': str, 'C': int} test = pd.read_csv('test.csv', dtype=col_types) train_y = test['C'] == 1 train_x = test[cols] clf_rf = RandomForestClassifier(n_estimators=50) clf_rf.fit(train_x, train_y) ``` But I just get this traceback when invoking fit(): ``` ValueError: could not convert string to float: 'Bueno' ``` scikit-learn version is 0.16.1.", "response":"You have to do some encoding before using fit(). As noted, fit() does not accept strings, but there are ways to solve this. There are several classes that can be used: LabelEncoder: turns your strings into incremental values; OneHotEncoder: uses the One-of-K algorithm to transform your strings into integers. I posted almost the same question on Stack Overflow some time ago. I wanted to have a scalable solution, but didn't get any answer. I selected OneHotEncoder, which binarizes all the strings.
It is quite effective, but if you have a lot of different strings the matrix will grow very quickly and a lot of memory will be required.", "best_answers_score":0.8, "library_name":"scikit-learn", "question_url":"https:\/\/stackoverflow.com\/questions\/30384995\/randomforestclassfier-fit-valueerror-could-not-convert-string-to-float", "best_answers_votes":111, "question_length":691, "response_length":623 }, { "question":"Sklearn Pipeline: Get feature names after OneHotEncode In ColumnTransformer I want to get feature names after I fit the pipeline. ``` categorical_features = ['brand', 'category_name', 'sub_category'] categorical_transformer = Pipeline(steps=[ ('imputer', SimpleImputer(strategy='constant', fill_value='missing')), ('onehot', OneHotEncoder(handle_unknown='ignore'))]) numeric_features = ['num1', 'num2', 'num3', 'num4'] numeric_transformer = Pipeline(steps=[ ('imputer', SimpleImputer(strategy='median')), ('scaler', StandardScaler())]) preprocessor = ColumnTransformer( transformers=[ ('num', numeric_transformer, numeric_features), ('cat', categorical_transformer, categorical_features)]) ``` Then ``` clf = Pipeline(steps=[('preprocessor', preprocessor), ('regressor', GradientBoostingRegressor())]) ``` After fitting with a pandas dataframe, I can get feature importances from clf.steps[1][1].feature_importances_ and I tried clf.steps[0][1].get_feature_names() but I got an error ``` AttributeError: Transformer num (type Pipeline) does not provide get_feature_names.
``` How can I get feature names from this?", "response":"You can access the feature_names using the following snippet: ``` clf.named_steps['preprocessor'].transformers_[1][1]\\ .named_steps['onehot'].get_feature_names(categorical_features) ``` Using sklearn >= 0.21 version, we can make it even simpler: ``` clf['preprocessor'].transformers_[1][1]\\ ['onehot'].get_feature_names(categorical_features) ``` Reproducible example: ``` import numpy as np import pandas as pd from sklearn.impute import SimpleImputer from sklearn.preprocessing import OneHotEncoder, StandardScaler from sklearn.pipeline import Pipeline from sklearn.compose import ColumnTransformer from sklearn.linear_model import LinearRegression df = pd.DataFrame({'brand': ['aaaa', 'asdfasdf', 'sadfds', 'NaN'], 'category': ['asdf', 'asfa', 'asdfas', 'as'], 'num1': [1, 1, 0, 0], 'target': [0.2, 0.11, 1.34, 1.123]}) numeric_features = ['num1'] numeric_transformer = Pipeline(steps=[ ('imputer', SimpleImputer(strategy='median')), ('scaler', StandardScaler())]) categorical_features = ['brand', 'category'] categorical_transformer = Pipeline(steps=[ ('imputer', SimpleImputer(strategy='constant', fill_value='missing')), ('onehot', OneHotEncoder(handle_unknown='ignore'))]) preprocessor = ColumnTransformer( transformers=[ ('num', numeric_transformer, numeric_features), ('cat', categorical_transformer, categorical_features)]) clf = Pipeline(steps=[('preprocessor', preprocessor), ('regressor', LinearRegression())]) clf.fit(df.drop('target', 1), df['target']) clf.named_steps['preprocessor'].transformers_[1][1]\\ .named_steps['onehot'].get_feature_names(categorical_features) # ['brand_NaN' 'brand_aaaa' 'brand_asdfasdf' 'brand_sadfds' 'category_as' # 'category_asdf' 'category_asdfas' 'category_asfa'] ```", "best_answers_score":0.8, "library_name":"scikit-learn", "question_url":"https:\/\/stackoverflow.com\/questions\/54646709\/sklearn-pipeline-get-feature-names-after-onehotencode-in-columntransformer", 
"best_answers_votes":94, "question_length":1112, "response_length":1713 }, { "question":"classifiers in scikit-learn that handle nan\/null I was wondering if there are classifiers that handle nan\/null values in scikit-learn. I thought random forest regressor handles this but I got an error when I call predict. ``` X_train = np.array([[1, np.nan, 3],[np.nan, 5, 6]]) y_train = np.array([1, 2]) clf = RandomForestRegressor(X_train, y_train) X_test = np.array([7, 8, np.nan]) y_pred = clf.predict(X_test) # Fails! ``` Can I not call predict with any scikit-learn algorithm with missing values? Edit. Now that I think about this, it makes sense. It's not an issue during training, but when you predict, how do you branch when the variable is null? Maybe you could just split both ways and average the result? It seems like k-NN should work fine as long as the distance function ignores nulls though. Edit 2 (older and wiser me) Some gbm libraries (such as xgboost) use a ternary tree instead of a binary tree precisely for this purpose: 2 children for the yes\/no decision and 1 child for the missing decision. sklearn uses a binary tree.", "response":"Short answer Sometimes missing values are simply not applicable. Imputing them is meaningless. In these cases you should use a model that can handle missing values. Scikit-learn's models cannot handle missing values. XGBoost can. More on scikit-learn and XGBoost As mentioned in this article, scikit-learn's decision trees and KNN algorithms are not (yet) robust enough to work with missing values. If imputation doesn't make sense, don't do it. Consider situations where imputation doesn't make sense. (Keep in mind this is a made-up example.) Consider a dataset with rows of cars (\"Danho Diesel\", \"Estal Electric\", \"Hesproc Hybrid\") and columns with their properties (Weight, Top speed, Acceleration, Power output, Sulfur Dioxide Emission, Range).
Electric cars do not produce exhaust fumes - so the sulfur dioxide emission of the Estal Electric should be a NaN-value (missing). You could argue that it should be set to 0 - but electric cars cannot produce sulfur dioxide. Imputing the value will ruin your predictions.", "best_answers_score":0.8, "library_name":"scikit-learn", "question_url":"https:\/\/stackoverflow.com\/questions\/30317119\/classifiers-in-scikit-learn-that-handle-nan-null", "best_answers_votes":47, "question_length":1046, "response_length":1202 }, { "question":"sklearn Logistic Regression \"ValueError: Found array with dim 3. Estimator expected <= 2.\" I am attempting to solve problem 6 in this notebook. The question is to train a simple model on this data using 50, 100, 1000 and 5000 training samples by using the LogisticRegression model from sklearn.linear_model. ``` lr = LogisticRegression() lr.fit(train_dataset,train_labels) ``` This is the code I am trying, and it gives me the error. ``` ValueError: Found array with dim 3. Estimator expected <= 2. ``` Any idea?", "response":"scikit-learn expects 2D numpy arrays for the training dataset in a fit function. The dataset you are passing in is a 3D array, so you need to reshape the array into 2D. ``` nsamples, nx, ny = train_dataset.shape d2_train_dataset = train_dataset.reshape((nsamples,nx*ny)) ```
I'm trying to predict a binary output with imbalanced classes (around 1.5% for Y=1). Classifier ``` model_logit = LogisticRegression(class_weight='auto') model_logit.fit(X_train_ridge, Y_train) ``` Roc curve ``` false_positive_rate, true_positive_rate, thresholds = roc_curve(Y_test, clf.predict_proba(xtest)[:,1]) ``` AUC's ``` auc(false_positive_rate, true_positive_rate) Out[490]: 0.82338034042531527 ``` and ``` roc_auc_score(Y_test, clf.predict(xtest)) Out[493]: 0.75944737191205602 ``` Can somebody explain this difference? I thought both were just calculating the area under the ROC curve. Might be because of the imbalanced dataset but I could not figure out why. Thanks!", "response":"AUC is not always area under the curve of a ROC curve. Area Under the Curve is an (abstract) area under some curve, so it is a more general thing than AUROC. With imbalanced classes, it may be better to find AUC for a precision-recall curve. See sklearn source for roc_auc_score: ``` def roc_auc_score(y_true, y_score, average=\"macro\", sample_weight=None): # docstring def _binary_roc_auc_score(y_true, y_score, sample_weight=None): # bla-bla fpr, tpr, tresholds = roc_curve(y_true, y_score, sample_weight=sample_weight) return auc(fpr, tpr, reorder=True) return _average_binary_score( _binary_roc_auc_score, y_true, y_score, average, sample_weight=sample_weight) ``` As you can see, this first gets a roc curve, and then calls auc() to get the area. I guess your problem is the predict_proba() call.
For a normal predict() the outputs are always the same: ``` import numpy as np from sklearn.linear_model import LogisticRegression from sklearn.metrics import roc_curve, auc, roc_auc_score est = LogisticRegression(class_weight='auto') X = np.random.rand(10, 2) y = np.random.randint(2, size=10) est.fit(X, y) false_positive_rate, true_positive_rate, thresholds = roc_curve(y, est.predict(X)) print auc(false_positive_rate, true_positive_rate) # 0.857142857143 print roc_auc_score(y, est.predict(X)) # 0.857142857143 ``` If you change the above for this, you'll sometimes get different outputs: ``` false_positive_rate, true_positive_rate, thresholds = roc_curve(y, est.predict_proba(X)[:,1]) # may differ print auc(false_positive_rate, true_positive_rate) print roc_auc_score(y, est.predict(X)) ```", "best_answers_score":0.8, "library_name":"scikit-learn", "question_url":"https:\/\/stackoverflow.com\/questions\/31159157\/different-result-with-roc-auc-score-and-auc", "best_answers_votes":62, "question_length":839, "response_length":1603 }, { "question":"The easiest way for getting feature names after running SelectKBest in Scikit Learn I'm trying to conduct a supervised machine-learning experiment using the SelectKBest feature of scikit-learn, but I'm not sure how to create a new dataframe after finding the best features: Let's assume I would like to conduct the experiment selecting the 5 best features: ``` from sklearn.feature_selection import SelectKBest, f_classif select_k_best_classifier = SelectKBest(score_func=f_classif, k=5).fit_transform(features_dataframe, targeted_class) ``` Now, if I add the line: ``` import pandas as pd dataframe = pd.DataFrame(select_k_best_classifier) ``` I receive a new dataframe without feature names (only an index starting from 0 to 4), but I want to create a dataframe with the new selected features, in a way like this: ``` dataframe = pd.DataFrame(fit_transformed_features, columns=features_names) ``` My question is how to create the features_names
list? I know that I should use: ``` select_k_best_classifier.get_support() ``` which returns an array of boolean values, where True indices represent the columns that should be selected in the original dataframe. How should I use this boolean array with the array of all feature names I can get via feature_names = list(features_dataframe.columns.values)?", "response":"This doesn't require loops. ``` # Create and fit selector selector = SelectKBest(f_classif, k=5) selector.fit(features_df, target) # Get columns to keep and create new dataframe with those only cols_idxs = selector.get_support(indices=True) features_df_new = features_df.iloc[:,cols_idxs] ```", "best_answers_score":0.8, "library_name":"scikit-learn", "question_url":"https:\/\/stackoverflow.com\/questions\/39839112\/the-easiest-way-for-getting-feature-names-after-running-selectkbest-in-scikit-le", "best_answers_votes":109, "question_length":1314, "response_length":292 }, { "question":"Impute categorical missing values in scikit-learn I've got pandas data with some columns of text type. There are some NaN values along with these text columns. What I'm trying to do is to impute those NaN's by sklearn.preprocessing.Imputer (replacing NaN by the most frequent value). The problem is in implementation. Suppose there is a Pandas dataframe df with 30 columns, 10 of which are of categorical nature. Once I run: ``` from sklearn.preprocessing import Imputer imp = Imputer(missing_values='NaN', strategy='most_frequent', axis=0) imp.fit(df) ``` Python generates an error: 'could not convert string to float: 'run1'', where 'run1' is an ordinary (non-missing) value from the first column with categorical data. Any help would be very welcome.", "response":"To use mean values for numeric columns and the most frequent value for non-numeric columns you could do something like this. You could further distinguish between integers and floats.
I guess it might make sense to use the median for integer columns instead. ``` import pandas as pd import numpy as np from sklearn.base import TransformerMixin class DataFrameImputer(TransformerMixin): def __init__(self): \"\"\"Impute missing values. Columns of dtype object are imputed with the most frequent value in column. Columns of other types are imputed with mean of column. \"\"\" def fit(self, X, y=None): self.fill = pd.Series([X[c].value_counts().index[0] if X[c].dtype == np.dtype('O') else X[c].mean() for c in X], index=X.columns) return self def transform(self, X, y=None): return X.fillna(self.fill) data = [ ['a', 1, 2], ['b', 1, 1], ['b', 2, 2], [np.nan, np.nan, np.nan] ] X = pd.DataFrame(data) xt = DataFrameImputer().fit_transform(X) print('before...') print(X) print('after...') print(xt) ``` which prints, ``` before... 0 1 2 0 a 1 2 1 b 1 1 2 b 2 2 3 NaN NaN NaN after... 0 1 2 0 a 1.000000 2.000000 1 b 1.000000 1.000000 2 b 2.000000 2.000000 3 b 1.333333 1.666667 ```", "best_answers_score":0.8, "library_name":"scikit-learn", "question_url":"https:\/\/stackoverflow.com\/questions\/25239958\/impute-categorical-missing-values-in-scikit-learn", "best_answers_votes":112, "question_length":752, "response_length":1172 }, { "question":"Feature\/Variable importance after a PCA analysis I have performed a PCA analysis over my original dataset and from the compressed dataset transformed by the PCA I have also selected the number of PC I want to keep (they explain almost the 94% of the variance). Now I am struggling with the identification of the original features that are important in the reduced dataset. How do I find out which feature is important and which is not among the remaining Principal Components after the dimension reduction? 
Here is my code: ``` from sklearn.decomposition import PCA pca = PCA(n_components=8) pca.fit(scaledDataset) projection = pca.transform(scaledDataset) ``` Furthermore, I tried also to perform a clustering algorithm on the reduced dataset but surprisingly for me, the score is lower than on the original dataset. How is it possible?", "response":"First of all, I assume that you call features the variables and not the samples\/observations. In this case, you could do something like the following by creating a biplot function that shows everything in one plot. In this example, I am using the iris data. Before the example, please note that the basic idea when using PCA as a tool for feature selection is to select variables according to the magnitude (from largest to smallest in absolute values) of their coefficients (loadings). See my last paragraph after the plot for more details. Overview: PART1: I explain how to check the importance of the features and how to plot a biplot. PART2: I explain how to check the importance of the features and how to save them into a pandas dataframe using the feature names. 
PART 1: ``` import numpy as np import matplotlib.pyplot as plt from sklearn import datasets from sklearn.decomposition import PCA import pandas as pd from sklearn.preprocessing import StandardScaler iris = datasets.load_iris() X = iris.data y = iris.target #In general a good idea is to scale the data scaler = StandardScaler() scaler.fit(X) X=scaler.transform(X) pca = PCA() x_new = pca.fit_transform(X) def myplot(score,coeff,labels=None): xs = score[:,0] ys = score[:,1] n = coeff.shape[0] scalex = 1.0\/(xs.max() - xs.min()) scaley = 1.0\/(ys.max() - ys.min()) plt.scatter(xs * scalex,ys * scaley, c = y) for i in range(n): plt.arrow(0, 0, coeff[i,0], coeff[i,1],color = 'r',alpha = 0.5) if labels is None: plt.text(coeff[i,0]* 1.15, coeff[i,1] * 1.15, \"Var\"+str(i+1), color = 'g', ha = 'center', va = 'center') else: plt.text(coeff[i,0]* 1.15, coeff[i,1] * 1.15, labels[i], color = 'g', ha = 'center', va = 'center') plt.xlim(-1,1) plt.ylim(-1,1) plt.xlabel(\"PC{}\".format(1)) plt.ylabel(\"PC{}\".format(2)) plt.grid() #Call the function. Use only the 2 PCs. myplot(x_new[:,0:2],np.transpose(pca.components_[0:2, :])) plt.show() ``` Visualize what's going on using the biplot Now, the importance of each feature is reflected by the magnitude of the corresponding values in the eigenvectors (higher magnitude - higher importance) Let's see first what amount of variance does each PC explain. ``` pca.explained_variance_ratio_ [0.72770452, 0.23030523, 0.03683832, 0.00515193] ``` PC1 explains 72% and PC2 23%. Together, if we keep PC1 and PC2 only, they explain 95%. Now, let's find the most important features. ``` print(abs( pca.components_ )) [[0.52237162 0.26335492 0.58125401 0.56561105] [0.37231836 0.92555649 0.02109478 0.06541577] [0.72101681 0.24203288 0.14089226 0.6338014 ] [0.26199559 0.12413481 0.80115427 0.52354627]] ``` Here, pca.components_ has shape [n_components, n_features]. 
Thus, by looking at the PC1 (First Principal Component) which is the first row: [0.52237162 0.26335492 0.58125401 0.56561105]] we can conclude that feature 1, 3 and 4 (or Var 1, 3 and 4 in the biplot) are the most important. This is also clearly visible from the biplot (that's why we often use this plot to summarize the information in a visual way). To sum up, look at the absolute values of the Eigenvectors' components corresponding to the k largest Eigenvalues. In sklearn the components are sorted by explained_variance_. The larger they are these absolute values, the more a specific feature contributes to that principal component. PART 2: The important features are the ones that influence more the components and thus, have a large absolute value\/score on the component. To get the most important features on the PCs with names and save them into a pandas dataframe use this: ``` from sklearn.decomposition import PCA import pandas as pd import numpy as np np.random.seed(0) # 10 samples with 5 features train_features = np.random.rand(10,5) model = PCA(n_components=2).fit(train_features) X_pc = model.transform(train_features) # number of components n_pcs= model.components_.shape[0] # get the index of the most important feature on EACH component # LIST COMPREHENSION HERE most_important = [np.abs(model.components_[i]).argmax() for i in range(n_pcs)] initial_feature_names = ['a','b','c','d','e'] # get the names most_important_names = [initial_feature_names[most_important[i]] for i in range(n_pcs)] # LIST COMPREHENSION HERE AGAIN dic = {'PC{}'.format(i): most_important_names[i] for i in range(n_pcs)} # build the dataframe df = pd.DataFrame(dic.items()) ``` This prints: ``` 0 1 0 PC0 e 1 PC1 d ``` So on the PC1 the feature named e is the most important and on PC2 the d. 
Nice article as well here: https:\/\/towardsdatascience.com\/pca-clearly-explained-how-when-why-to-use-it-and-feature-importance-a-guide-in-python-7c274582c37e?source=friends_link&sk=65bf5440e444c24aff192fedf9f8b64f", "best_answers_score":0.8, "library_name":"scikit-learn", "question_url":"https:\/\/stackoverflow.com\/questions\/50796024\/feature-variable-importance-after-a-pca-analysis", "best_answers_votes":141, "question_length":837, "response_length":4667 }, { "question":"difference between StratifiedKFold and StratifiedShuffleSplit in sklearn As from the title I am wondering what is the difference between StratifiedKFold with the parameter shuffle=True ``` StratifiedKFold(n_splits=10, shuffle=True, random_state=0) ``` and StratifiedShuffleSplit ``` StratifiedShuffleSplit(n_splits=10, test_size=\u2019default\u2019, train_size=None, random_state=0) ``` and what is the advantage of using StratifiedShuffleSplit", "response":"In stratKFolds, each test set should not overlap, even when shuffle is included. With stratKFolds and shuffle=True, the data is shuffled once at the start, and then divided into the number of desired splits. The test data is always one of the splits, the train data is the rest. In ShuffleSplit, the data is shuffled every time, and then split. This means the test sets may overlap between the splits. See this block for an example of the difference. Note the overlap of the elements in the test sets for ShuffleSplit. 
``` splits = 5 tx = range(10) ty = [0] * 5 + [1] * 5 from sklearn.model_selection import StratifiedShuffleSplit, StratifiedKFold from sklearn import datasets stratKfold = StratifiedKFold(n_splits=splits, shuffle=True, random_state=42) shufflesplit = StratifiedShuffleSplit(n_splits=splits, random_state=42, test_size=2) print(\"stratKFold\") for train_index, test_index in stratKfold.split(tx, ty): print(\"TRAIN:\", train_index, \"TEST:\", test_index) print(\"Shuffle Split\") for train_index, test_index in shufflesplit.split(tx, ty): print(\"TRAIN:\", train_index, \"TEST:\", test_index) ``` Output: ``` stratKFold TRAIN: [0 2 3 4 5 6 7 9] TEST: [1 8] TRAIN: [0 1 2 3 5 7 8 9] TEST: [4 6] TRAIN: [0 1 3 4 5 6 8 9] TEST: [2 7] TRAIN: [1 2 3 4 6 7 8 9] TEST: [0 5] TRAIN: [0 1 2 4 5 6 7 8] TEST: [3 9] Shuffle Split TRAIN: [8 4 1 0 6 5 7 2] TEST: [3 9] TRAIN: [7 0 3 9 4 5 1 6] TEST: [8 2] TRAIN: [1 2 5 6 4 8 9 0] TEST: [3 7] TRAIN: [4 6 7 8 3 5 1 2] TEST: [9 0] TRAIN: [7 2 6 5 4 3 0 9] TEST: [1 8] ``` As for when to use them, I tend to use stratKFolds for any cross validation, and I use ShuffleSplit with a split of 2 for my train\/test set splits. But I'm sure there are other use cases for both.", "best_answers_score":0.8, "library_name":"scikit-learn", "question_url":"https:\/\/stackoverflow.com\/questions\/45969390\/difference-between-stratifiedkfold-and-stratifiedshufflesplit-in-sklearn", "best_answers_votes":109, "question_length":434, "response_length":1709 }, { "question":"Principal Component Analysis (PCA) in Python I have a (26424 x 144) array and I want to perform PCA over it using Python. However, there is no particular place on the web that explains about how to achieve this task (There are some sites which just do PCA according to their own - there is no generalized way of doing so that I can find). 
Any sort of help would be great.", "response":"I posted my answer even though another answer has already been accepted; the accepted answer relies on a deprecated function; additionally, this deprecated function is based on Singular Value Decomposition (SVD), which (although perfectly valid) is the much more memory- and processor-intensive of the two general techniques for calculating PCA. This is particularly relevant here because of the size of the data array in the OP. Using covariance-based PCA, the array used in the computation flow is just 144 x 144, rather than 26424 x 144 (the dimensions of the original data array). Here's a simple working implementation of PCA using the linalg module from SciPy. Because this implementation first calculates the covariance matrix, and then performs all subsequent calculations on this array, it uses far less memory than SVD-based PCA. (the linalg module in NumPy can also be used with no change in the code below aside from the import statement, which would be from numpy import linalg as LA.) The two key steps in this PCA implementation are: calculating the covariance matrix; and taking the eigenvectors & eigenvalues of this cov matrix In the function below, the parameter dims_rescaled_data refers to the desired number of dimensions in the rescaled data matrix; this parameter has a default value of just two dimensions, but the code below isn't limited to two; it could be any value less than the column number of the original data array.
``` def PCA(data, dims_rescaled_data=2): \"\"\" returns: data transformed into dims_rescaled_data dims\/columns + eigenvalues + eigenvectors pass in: data as 2D NumPy array \"\"\" import numpy as NP from scipy import linalg as LA m, n = data.shape # mean center the data data -= data.mean(axis=0) # calculate the covariance matrix R = NP.cov(data, rowvar=False) # calculate eigenvectors & eigenvalues of the covariance matrix # use 'eigh' rather than 'eig' since R is symmetric, # the performance gain is substantial evals, evecs = LA.eigh(R) # sort eigenvalues in decreasing order idx = NP.argsort(evals)[::-1] evecs = evecs[:,idx] # sort eigenvectors according to same index evals = evals[idx] # select the first n eigenvectors (n is desired dimension # of rescaled data array, or dims_rescaled_data) evecs = evecs[:, :dims_rescaled_data] # carry out the transformation on the data using eigenvectors # and return the re-scaled data, eigenvalues, and eigenvectors return NP.dot(evecs.T, data.T).T, evals, evecs def test_PCA(data): ''' test by attempting to recover the original data array from the full set of eigenvectors & comparing that 'recovered' array with the original data ''' data_rescaled, evals, evecs = PCA(data, dims_rescaled_data=data.shape[1]) # note: PCA() mean-centers data in place, so this compares against the centered array data_recovered = NP.dot(data_rescaled, evecs.T) assert NP.allclose(data, data_recovered) def plot_pca(data): from matplotlib import pyplot as MPL clr1 = '#2026B2' fig = MPL.figure() ax1 = fig.add_subplot(111) data_resc, evals, evecs = PCA(data) ax1.plot(data_resc[:, 0], data_resc[:, 1], '.', mfc=clr1, mec=clr1) MPL.show() >>> # iris, probably the most widely used reference data set in ML >>> df = \"~\/iris.csv\" >>> data = NP.loadtxt(df, delimiter=',') >>> # remove class labels >>> data = data[:,:-1] >>> plot_pca(data) ``` The plot below is a visual representation of this PCA function on the iris data.
As you can see, a 2D transformation cleanly separates class I from class II and class III (but not class II from class III, which in fact requires another dimension).", "best_answers_score":0.8, "library_name":"scikit-learn", "question_url":"https:\/\/stackoverflow.com\/questions\/13224362\/principal-component-analysis-pca-in-python", "best_answers_votes":101, "question_length":383, "response_length":3509 }, { "question":"Difference between scikit-learn and sklearn (now deprecated) On OS X 10.11.6 and python 2.7.10 I need to import from sklearn manifold. I have numpy 1.8.0rc1, scipy 0.13.0b1 and scikit-learn 0.17.1 installed. I used pip to install sklearn(0.0), but when I try to import from sklearn manifold I get the following: Traceback (most recent call last): File \"\", line 1, in File \"\/Library\/Python\/2.7\/site-packages\/sklearn\/init.py\", line 57, in from .base import clone File \"\/Library\/Python\/2.7\/site-packages\/sklearn\/base.py\", line 11, in from .utils.fixes import signature File \"\/Library\/Python\/2.7\/site-packages\/sklearn\/utils\/init.py\", line 10, in from .murmurhash import murmurhash3_32 File \"numpy.pxd\", line 155, in init sklearn.utils.murmurhash (sklearn\/utils\/murmurhash.c:5029) ValueError: numpy.dtype has the wrong size, try recompiling. What is the difference between scikit-learn and sklearn? Also, I can't import scikit-learn because of a syntax error.", "response":"Regarding the difference sklearn vs. scikit-learn: The package \"scikit-learn\" is recommended to be installed using pip install scikit-learn but in your code imported using import sklearn. A bit confusing, because you can also do pip install sklearn and will end up with the same scikit-learn package installed, because there is a \"dummy\" pypi package sklearn which will install scikit-learn for you.
From this thread: scikit-learn is in install_requires of sklearn setup.py so you do end-up with scikit-learn installed So: At the end, pip install sklearn or pip install scikit-learn --- apart from the annoying sklearn (0.0) showed in the pip list --- will install the latest available build from PyPI.", "best_answers_score":0.8, "library_name":"scikit-learn", "question_url":"https:\/\/stackoverflow.com\/questions\/38733220\/difference-between-scikit-learn-and-sklearn-now-deprecated", "best_answers_votes":100, "question_length":951, "response_length":702 }, { "question":"Mixing categorial and continuous data in Naive Bayes classifier using scikit-learn I'm using scikit-learn in Python to develop a classification algorithm to predict the gender of certain customers. Amongst others, I want to use the Naive Bayes classifier but my problem is that I have a mix of categorical data (ex: \"Registered online\", \"Accepts email notifications\" etc) and continuous data (ex: \"Age\", \"Length of membership\" etc). I haven't used scikit much before but I suppose that that Gaussian Naive Bayes is suitable for continuous data and that Bernoulli Naive Bayes can be used for categorical data. However, since I want to have both categorical and continuous data in my model, I don't really know how to handle this. Any ideas would be much appreciated!", "response":"You have at least two options: Transform all your data into a categorical representation by computing percentiles for each continuous variables and then binning the continuous variables using the percentiles as bin boundaries. For instance for the height of a person create the following bins: \"very small\", \"small\", \"regular\", \"big\", \"very big\" ensuring that each bin contains approximately 20% of the population of your training set. We don't have any utility to perform this automatically in scikit-learn but it should not be too complicated to do it yourself. 
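The percentile binning described in the first option can be sketched as follows (the helper name and bin count here are illustrative, not a scikit-learn utility):

```python
import numpy as np

def bin_by_percentiles(values, n_bins=5):
    """Discretize a continuous feature into n_bins categories of roughly
    equal population, using percentiles as the bin boundaries."""
    values = np.asarray(values, dtype=float)
    # interior cut points: 20%, 40%, 60%, 80% for n_bins=5
    edges = np.percentile(values, np.linspace(0, 100, n_bins + 1)[1:-1])
    # digitize maps each value to its bin index (0 .. n_bins-1)
    return np.digitize(values, edges)

# e.g. heights: each bin ("very small" .. "very big") gets ~20% of the samples
heights = np.array([150, 160, 165, 170, 172, 175, 180, 185, 190, 200])
print(bin_by_percentiles(heights))  # -> [0 0 1 1 2 2 3 3 4 4]
```

The resulting integer categories can then be fed to a multinomial NB like any other categorical feature.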
Then fit a unique multinomial NB on that categorical representation of your data. Independently fit a gaussian NB model on the continuous part of the data and a multinomial NB model on the categorical part. Then transform the whole dataset by taking the class assignment probabilities (with predict_proba method) as new features: np.hstack((multinomial_probas, gaussian_probas)) and then refit a new model (e.g. a new gaussian NB) on the new features.", "best_answers_score":0.8, "library_name":"scikit-learn", "question_url":"https:\/\/stackoverflow.com\/questions\/14254203\/mixing-categorial-and-continuous-data-in-naive-bayes-classifier-using-scikit-lea", "best_answers_votes":76, "question_length":765, "response_length":1014 }, { "question":"Save MinMaxScaler model in sklearn I'm using the MinMaxScaler model in sklearn to normalize the features of a model. ``` training_set = np.random.rand(4,4)*10 training_set [[ 6.01144787, 0.59753007, 2.0014852 , 3.45433657], [ 6.03041646, 5.15589559, 6.64992437, 2.63440202], [ 2.27733136, 9.29927394, 0.03718093, 7.7679183 ], [ 9.86934288, 7.59003904, 6.02363739, 2.78294206]] scaler = MinMaxScaler() scaler.fit(training_set) scaler.transform(training_set) [[ 0.49184811, 0. , 0.29704831, 0.15972182], [ 0.4943466 , 0.52384506, 1. , 0. ], [ 0. , 1. , 0. , 1. ], [ 1. , 0.80357559, 0.9052909 , 0.02893534]] ``` Now I want to use the same scaler to normalize the test set: ``` [[ 8.31263467, 7.99782295, 0.02031658, 9.43249727], [ 1.03761228, 9.53173021, 5.99539478, 4.81456067], [ 0.19715961, 5.97702519, 0.53347403, 5.58747666], [ 9.67505429, 2.76225253, 7.39944931, 8.46746594]] ``` But I don't want to use the scaler.fit() with the training data all the time. Is there a way to save the scaler and load it later from a different file?", "response":"Update: sklearn.externals.joblib is deprecated. Install and use the pure joblib instead. Please see Engineero's answer below, which is otherwise identical to mine.
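With the standalone joblib package the same save/load pattern looks like this (a sketch; the filename is arbitrary):

```python
import joblib
import numpy as np
from sklearn.preprocessing import MinMaxScaler

training_set = np.random.rand(4, 4) * 10
scaler = MinMaxScaler()
scaler.fit(training_set)

# Persist the fitted scaler, then restore it without refitting
joblib.dump(scaler, 'scaler.save')
restored = joblib.load('scaler.save')

# The restored scaler applies the original training min/max to new data
test_set = np.random.rand(2, 4) * 10
print(restored.transform(test_set))
```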
Original answer Even better than pickle (which creates much larger files than this method), you can use sklearn's built-in tool: ``` from sklearn.externals import joblib scaler_filename = \"scaler.save\" joblib.dump(scaler, scaler_filename) # And now to load... scaler = joblib.load(scaler_filename) ```", "best_answers_score":0.8, "library_name":"scikit-learn", "question_url":"https:\/\/stackoverflow.com\/questions\/41993565\/save-minmaxscaler-model-in-sklearn", "best_answers_votes":119, "question_length":1036, "response_length":465 }, { "question":"Should I use `random.seed` or `numpy.random.seed` to control random number generation in `scikit-learn`? I'm using scikit-learn and numpy and I want to set the global seed so that my work is reproducible. Should I use numpy.random.seed or random.seed? From the link in the comments, I understand that they are different, and that the numpy version is not thread-safe. I want to know specifically which one to use to create IPython notebooks for data analysis. Some of the algorithms from scikit-learn involve generating random numbers, and I want to be sure that the notebook shows the same results on every run.", "response":"Should I use np.random.seed or random.seed? That depends on whether in your code you are using numpy's random number generator or the one in random. The random number generators in numpy.random and random have totally separate internal states, so numpy.random.seed() will not affect the random sequences produced by random.random(), and likewise random.seed() will not affect numpy.random.randn() etc. If you are using both random and numpy.random in your code then you will need to separately set the seeds for both. Update Your question seems to be specifically about scikit-learn's random number generators. As far as I can tell, scikit-learn uses numpy.random throughout, so you should use np.random.seed() rather than random.seed(). 
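A quick way to see that the two generators keep completely separate states (an illustrative sketch):

```python
import random
import numpy as np

# Re-seeding numpy reproduces the numpy draw...
np.random.seed(42)
a = np.random.rand()
np.random.seed(42)
b = np.random.rand()
print(a == b)  # True

# ...but seeding the stdlib generator does not touch numpy's state
random.seed(0)
np.random.seed(42)
c = np.random.rand()
print(b == c)  # True: random.seed(0) changed nothing for numpy
```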
One important caveat is that np.random is not thread-safe - if you set a global seed, then launch several subprocesses and generate random numbers within them using np.random, each subprocess will inherit the RNG state from its parent, meaning that you will get identical random variates in each subprocess. The usual way around this problem is to pass a different seed (or numpy.random.RandomState instance) to each subprocess, such that each one has a separate local RNG state. Since some parts of scikit-learn can run in parallel using joblib, you will see that some classes and functions have an option to pass either a seed or an np.random.RandomState instance (e.g. the random_state= parameter to sklearn.decomposition.MiniBatchSparsePCA). I tend to use a single global seed for a script, then generate new random seeds based on the global seed for any parallel functions.", "best_answers_score":0.8, "library_name":"scikit-learn", "question_url":"https:\/\/stackoverflow.com\/questions\/31057197\/should-i-use-random-seed-or-numpy-random-seed-to-control-random-number-gener", "best_answers_votes":66, "question_length":612, "response_length":1610 }, { "question":"Why is pydot unable to find GraphViz's executables in Windows 8? I have GraphViz 2.32 installed in Windows 8 and have added C:\Program Files (x86)\Graphviz2.32\bin to the System PATH variable. Still pydot is unable to find its executables.
``` Traceback (most recent call last): File \"\", line 1, in graph.write_png('example1_graph.png') File \"build\\bdist.win32\\egg\\pydot.py\", line 1809, in lambda path, f=frmt, prog=self.prog : self.write(path, format=f, prog=prog)) File \"build\\bdist.win32\\egg\\pydot.py\", line 1911, in write dot_fd.write(self.create(prog, format)) File \"build\\bdist.win32\\egg\\pydot.py\", line 1953, in create 'GraphViz\\'s executables not found' ) InvocationException: GraphViz's executables not found ``` I found this https:\/\/code.google.com\/p\/pydot\/issues\/detail?id=65 but am unable to get the problem solved.", "response":"The problem is that the path to GraphViz was not found by the pydot module as shown in the traceback: 'GraphViz\\'s executables not found' I solved this problem on my windows 7 machine by adding the GraphViz bin directory to my computer's PATH. Then restarting my python IDE to use the updated path. Install GraphViz if you haven't already (I used the MSI download) Get the path for gvedit.exe (for me it was \"C:\\Program Files (x86)\\Graphviz2.34\\bin\\\") Add this path to the computer's PATH One way to get to environment settings to set your path is to click on each of these button\/menu options: start->computer->system properties->advanced settings->environment variables Click Edit User path Add this string to the end of your Variable value list (including semicolon): ;C:\\Program Files (x86)\\Graphviz2.34\\bin Click OK Restart your Python IDE", "best_answers_score":0.8, "library_name":"scikit-learn", "question_url":"https:\/\/stackoverflow.com\/questions\/18438997\/why-is-pydot-unable-to-find-graphvizs-executables-in-windows-8", "best_answers_votes":77, "question_length":829, "response_length":844 }, { "question":"How to get most informative features for scikit-learn classifiers? 
The classifiers in machine learning packages like liblinear and nltk offer a method show_most_informative_features(), which is really helpful for debugging features: ``` viagra = None ok : spam = 4.5 : 1.0 hello = True ok : spam = 4.5 : 1.0 hello = None spam : ok = 3.3 : 1.0 viagra = True spam : ok = 3.3 : 1.0 casino = True spam : ok = 2.0 : 1.0 casino = None ok : spam = 1.5 : 1.0 ``` My question is if something similar is implemented for the classifiers in scikit-learn. I searched the documentation, but couldn't find anything like it. If there is no such function yet, does somebody know a workaround to get those values?", "response":"The classifiers themselves do not record feature names, they just see numeric arrays. However, if you extracted your features using a Vectorizer\/CountVectorizer\/TfidfVectorizer\/DictVectorizer, and you are using a linear model (e.g. LinearSVC or Naive Bayes) then you can apply the same trick that the document classification example uses. Example (untested, may contain a bug or two): ``` import numpy as np def print_top10(vectorizer, clf, class_labels): \"\"\"Prints features with the highest coefficient values, per class\"\"\" feature_names = vectorizer.get_feature_names() for i, class_label in enumerate(class_labels): top10 = np.argsort(clf.coef_[i])[-10:] print(\"%s: %s\" % (class_label, \" \".join(feature_names[j] for j in top10))) ``` This is for multiclass classification; for the binary case, I think you should use clf.coef_[0] only.
You may have to sort the class_labels.", "best_answers_score":0.8, "library_name":"scikit-learn", "question_url":"https:\/\/stackoverflow.com\/questions\/11116697\/how-to-get-most-informative-features-for-scikit-learn-classifiers", "best_answers_votes":68, "question_length":703, "response_length":858 }, { "question":"scikit-learn cross validation, negative values with mean squared error When I use the following code with Data matrix X of size (952,144) and output vector y of size (952), mean_squared_error metric returns negative values, which is unexpected. Do you have any idea? ``` from sklearn.svm import SVR from sklearn import cross_validation as CV reg = SVR(C=1., epsilon=0.1, kernel='rbf') scores = CV.cross_val_score(reg, X, y, cv=10, scoring='mean_squared_error') ``` all values in scores are then negative.", "response":"Trying to close this out, so am providing the answer that David and larsmans have eloquently described in the comments section: Yes, this is supposed to happen. The actual MSE is simply the positive version of the number you're getting. The unified scoring API always maximizes the score, so scores which need to be minimized are negated in order for the unified scoring API to work correctly. The score that is returned is therefore negated when it is a score that should be minimized and left positive if it is a score that should be maximized. This is also described in sklearn GridSearchCV with Pipeline.", "best_answers_score":0.8, "library_name":"scikit-learn", "question_url":"https:\/\/stackoverflow.com\/questions\/21443865\/scikit-learn-cross-validation-negative-values-with-mean-squared-error", "best_answers_votes":97, "question_length":504, "response_length":608 }, { "question":"What does the \"fit\" method in scikit-learn do? [closed]
Could you please explain what the \"fit\" method in scikit-learn does? Why is it useful?", "response":"In a nutshell: fitting is equal to training. Then, after it is trained, the model can be used to make predictions, usually with a .predict() method call. To elaborate: Fitting your model to (i.e. using the .fit() method on) the training data is essentially the training part of the modeling process. It finds the coefficients for the equation specified via the algorithm being used (take for example umutto's linear regression example, above). Then, for a classifier, you can classify incoming data points (from a test set, or otherwise) using the predict method. Or, in the case of regression, your model will interpolate\/extrapolate when predict is used on incoming data points. It also should be noted that sometimes the \"fit\" nomenclature is used for non-machine-learning methods, such as scalers and other preprocessing steps. In this case, you are merely \"applying\" the specified function to your data, as in the case with a min-max scaler, TF-IDF, or other transformation. Note: here are a couple of references... fit method in python sklearn http:\/\/scikit-learn.org\/stable\/tutorial\/basic\/tutorial.html", "best_answers_score":0.8, "library_name":"scikit-learn", "question_url":"https:\/\/stackoverflow.com\/questions\/45704226\/what-does-the-fit-method-in-scikit-learn-do", "best_answers_votes":107, "question_length":588, "response_length":1109 }, { "question":"where to put freeze_support() in a Python script? I am confused about using freeze_support() for multiprocessing and I get a Runtime Error without it. I am only running a script, not defining a function or a module. Can I still use it?
Or should the packages I'm importing be using it? Here is the documentation. Note that the specific issue is about scikit-learn calling GridSearchCV which tries to spawn processes in parallel. I am not sure if my script needs to be frozen for this, or some code that's called (from the Anaconda distro). If details are relevant to this question, please head over to the more specific question.", "response":"On Windows all of your multiprocessing-using code must be guarded by if __name__ == \"__main__\": So to be safe, I would put all of the code currently at the top-level of your script in a main() function, and then just do this at the top-level: ``` if __name__ == \"__main__\": main() ``` See the \"Safe importing of main module\" sub-section here for an explanation of why this is necessary. You probably don't need to call freeze_support at all, though it won't hurt anything to include it. Note that it's a best practice to use the if __name__ == \"__main__\" guard for scripts anyway, so that code isn't unexpectedly executed if you find you need to import your script into another script at some point in the future.", "best_answers_score":0.8, "library_name":"scikit-learn", "question_url":"https:\/\/stackoverflow.com\/questions\/24374288\/where-to-put-freeze-support-in-a-python-script", "best_answers_votes":120, "question_length":633, "response_length":718 }, { "question":"Convert numpy array type and values from Float64 to Float32 I am trying to convert the threshold array (pickle file of an isolation forest from scikit-learn) from Float64 to Float32 ``` for i in range(len(tree.tree_.threshold)): tree.tree_.threshold[i] = tree.tree_.threshold[i].astype(np.float32) ``` Then printing it ``` for value in tree.tree_.threshold[:5]: print(type(value)) print(value) ``` The output I am getting is: ``` 526226.0 91.9514312744 3.60330319405 -2.0 -2.0 ``` I am not getting a proper conversion to Float32.
I want to convert the values and their type to Float32. Does anybody have a workaround for this?", "response":"The problem is that you do not do any type conversion of the numpy array. You calculate a float32 variable and put it as an entry into a float64 numpy array. numpy then converts it properly back to float64. Try something like this: ``` a = np.zeros(4,dtype=\"float64\") print a.dtype print type(a[0]) a = np.float32(a) print a.dtype print type(a[0]) ``` The output (tested with Python 2.7): ``` float64 <type 'numpy.float64'> float32 <type 'numpy.float32'> ``` a is in your case the array tree.tree_.threshold", "best_answers_score":0.8, "library_name":"scikit-learn", "question_url":"https:\/\/stackoverflow.com\/questions\/45955186\/convert-numpy-array-type-and-values-from-float64-to-float32", "best_answers_votes":59, "question_length":628, "response_length":460 }, { "question":"How to find the corresponding class in clf.predict_proba() I have a number of classes and corresponding feature vectors, and when I run predict_proba() I will get this: ``` classes = ['one','two','three','one','three'] feature = [[0,1,1,0],[0,1,0,1],[1,1,0,0],[0,0,0,0],[0,1,1,1]] from sklearn.naive_bayes import BernoulliNB clf = BernoulliNB() clf.fit(feature,classes) clf.predict_proba([0,1,1,0]) >> array([[ 0.48247836, 0.40709111, 0.11043053]]) ``` I would like to know which probability corresponds to which class. On this page it says that they are ordered by arithmetical order, I'm not 100% sure of what that means: http:\/\/scikit-learn.org\/stable\/modules\/generated\/sklearn.svm.SVC.html#sklearn.svm.SVC.predict_proba Does it mean that I have to go through my training examples, assign the corresponding index to the first encounter of a class, or is there a command like clf.getClasses() = ['one','two','three']?
In your example that gives: ``` >>> clf.classes_ array(['one', 'three', 'two'], dtype='|S5') ``` And thanks for putting a minimalistic reproduction script in your question, it makes answering really easy by just copy and pasting in an IPython shell :)", "best_answers_score":0.8, "library_name":"scikit-learn", "question_url":"https:\/\/stackoverflow.com\/questions\/16858652\/how-to-find-the-corresponding-class-in-clf-predict-proba", "best_answers_votes":102, "question_length":916, "response_length":325 }, { "question":"Differences in SciKit Learn, Keras, or Pytorch [closed] Are these libraries fairly interchangeable? Looking here, https:\/\/stackshare.io\/stackups\/keras-vs-pytorch-vs-scikit-learn, it seems the major difference is the underlying framework (at least for PyTorch).", "response":"Yes, there is a major difference. SciKit Learn is a general machine learning library, built on top of NumPy. It features a lot of machine learning algorithms such as support vector machines, random forests, as well as a lot of utilities for general pre- and postprocessing of data. It is not a neural network framework.
PyTorch is a deep learning framework, consisting of A vectorized math library similar to NumPy, but with GPU support and a lot of neural network related operations (such as softmax or various kinds of activations) Autograd - an algorithm which can automatically calculate gradients of your functions, defined in terms of the basic operations Gradient-based optimization routines for large scale optimization, dedicated to neural network optimization Neural-network related utility functions Keras is a higher-level deep learning framework, which abstracts many details away, making code simpler and more concise than in PyTorch or TensorFlow, at the cost of limited hackability. It abstracts away the computation backend, which can be TensorFlow, Theano or CNTK. It does not support a PyTorch backend, but that's not something unfathomable - you can consider it a simplified and streamlined subset of the above. In short, if you are going with \"classic\", non-neural algorithms, neither PyTorch nor Keras will be useful for you. If you're doing deep learning, scikit-learn may still be useful for its utility part; aside from it you will need the actual deep learning framework, where you can choose between Keras and PyTorch but you're unlikely to use both at the same time. This is very subjective, but in my view, if you're working on a novel algorithm, you're more likely to go with PyTorch (or TensorFlow or some other lower-level framework) for flexibility. If you're adapting a known and tested algorithm to a new problem setting, you may want to go with Keras for its greater simplicity and lower entry level.", "best_answers_score":0.8, "library_name":"scikit-learn", "question_url":"https:\/\/stackoverflow.com\/questions\/54527439\/differences-in-scikit-learn-keras-or-pytorch", "best_answers_votes":159, "question_length":1134, "response_length":1936 }, { "question":"transform scipy sparse csr to pandas? 
I have used the ``` sklearn.preprocessing.OneHotEncoder ``` to transform some data. The output is scipy.sparse.csr.csr_matrix. How can I merge it back into my original dataframe along with the other columns? I tried to use pd.concat but I get ``` TypeError: cannot concatenate a non-NDFrame object ``` Thanks", "response":"If A is csr_matrix, you can use .toarray() (there's also .todense() that produces a numpy matrix, which also works for the DataFrame constructor): ``` df = pd.DataFrame(A.toarray()) ``` You can then use this with pd.concat(). ``` A = csr_matrix([[1, 0, 2], [0, 3, 0]]) (0, 0) 1 (0, 2) 2 (1, 1) 3 pd.DataFrame(A.todense()) 0 1 2 0 1 0 2 1 0 3 0 RangeIndex: 2 entries, 0 to 1 Data columns (total 3 columns): 0 2 non-null int64 1 2 non-null int64 2 2 non-null int64 ``` In version 0.20, pandas introduced sparse data structures, including the SparseDataFrame. In pandas 1.0, SparseDataFrame was removed: In older versions of pandas, the SparseSeries and SparseDataFrame classes were the preferred way to work with sparse data. With the advent of extension arrays, these subclasses are no longer needed. Their purpose is better served by using a regular Series or DataFrame with sparse values instead. The migration guide shows how to use these new data structures. For instance, to create a DataFrame from a sparse matrix: ``` from scipy.sparse import csr_matrix A = csr_matrix([[1, 0, 2], [0, 3, 0]]) df = pd.DataFrame.sparse.from_spmatrix(A, columns=['A', 'B', 'C']) df A B C 0 1 0 2 1 0 3 0 df.dtypes A Sparse[float64, 0] B Sparse[float64, 0] C Sparse[float64, 0] dtype: object ``` Alternatively, you can pass sparse matrices to sklearn to avoid running out of memory when converting back to pandas.
Just convert your other data to sparse format by passing a numpy array to the scipy.sparse.csr_matrix constructor and use scipy.sparse.hstack to combine (see docs).", "best_answers_score":0.8, "library_name":"scikit-learn", "question_url":"https:\/\/stackoverflow.com\/questions\/36967666\/transform-scipy-sparse-csr-to-pandas", "best_answers_votes":73, "question_length":344, "response_length":1569 }, { "question":"PCA For categorical features? In my understanding, I thought PCA can be performed only for continuous features. But while trying to understand the difference between onehot encoding and label encoding came through a post in the following link: When to use One Hot Encoding vs LabelEncoder vs DictVectorizor? It states that one hot encoding followed by PCA is a very good method, which basically means PCA is applied for categorical features. Hence confused, please suggest me on the same.", "response":"I disagree with the others. While you can use PCA on binary data (e.g. one-hot encoded data) that does not mean it is a good thing, or it will work very well. PCA is designed for continuous variables. It tries to minimize variance (=squared deviations). The concept of squared deviations breaks down when you have binary variables. So yes, you can use PCA. And yes, you get an output. It even is a least-squared output: it's not as if PCA would segfault on such data. It works, but it is just much less meaningful than you'd want it to be; and supposedly less meaningful than e.g. 
frequent pattern mining.", "best_answers_score":0.8, "library_name":"scikit-learn", "question_url":"https:\/\/stackoverflow.com\/questions\/40795141\/pca-for-categorical-features", "best_answers_votes":88, "question_length":488, "response_length":605 }, { "question":"Making SVM run faster in python Using the code below for SVM in Python: ``` from sklearn import datasets from sklearn.multiclass import OneVsRestClassifier from sklearn.svm import SVC iris = datasets.load_iris() X, y = iris.data, iris.target clf = OneVsRestClassifier(SVC(kernel='linear', probability=True, class_weight='auto')) clf.fit(X, y) proba = clf.predict_proba(X) ``` But it is taking a huge amount of time. Actual Data Dimensions: ``` train-set (1422392,29) test-set (233081,29) ``` How can I speed it up (parallel or some other way)? Please help. I have already tried PCA and downsampling. I have 6 classes. Edit: Found http:\/\/scikit-learn.org\/stable\/modules\/generated\/sklearn.linear_model.SGDClassifier.html but I wish for probability estimates and it seems not to do so for SVM.
Edit: ``` from sklearn import datasets from sklearn.multiclass import OneVsRestClassifier from sklearn.svm import SVC,LinearSVC from sklearn.linear_model import SGDClassifier import joblib import numpy as np from sklearn import grid_search import multiprocessing import numpy as np import math def new_func(a): #converts array(x) elements to (1\/(1 + e(-x))) a=1\/(1 + math.exp(-a)) return a if __name__ == '__main__': iris = datasets.load_iris() cores=multiprocessing.cpu_count()-2 X, y = iris.data, iris.target #loading dataset C_range = 10.0 ** np.arange(-4, 4); #c value range param_grid = dict(estimator__C=C_range.tolist()) svr = OneVsRestClassifier(LinearSVC(class_weight='auto'),n_jobs=cores) ################LinearSVC Code faster #svr = OneVsRestClassifier(SVC(kernel='linear', probability=True, ##################SVC code slow # class_weight='auto'),n_jobs=cores) clf = grid_search.GridSearchCV(svr, param_grid,n_jobs=cores,verbose=2) #grid search clf.fit(X, y) #training svm model decisions=clf.decision_function(X) #outputs decision functions #prob=clf.predict_proba(X) #only for SVC outputs probabilities print decisions[:5,:] vecfunc = np.vectorize(new_func) prob=vecfunc(decisions) #converts decisions to (1\/(1 + e(-x))) print prob[:5,:] ``` Edit 2: The answer by user3914041 yields very poor probability estimates.
Alternatively, I would also consider using a Random Forest classifier - it supports multi-class classification natively, it is fast and gives pretty good probability estimates when min_samples_leaf is set appropriately. I did a quick test on the iris dataset blown up 100 times with an ensemble of 10 SVCs, each one trained on 10% of the data. It is more than 10 times faster than a single classifier. These are the numbers I got on my laptop: Single SVC: 45s Ensemble SVC: 3s Random Forest Classifier: 0.5s See below the code that I used to produce the numbers: ``` import time import numpy as np from sklearn.ensemble import BaggingClassifier, RandomForestClassifier from sklearn import datasets from sklearn.multiclass import OneVsRestClassifier from sklearn.svm import SVC iris = datasets.load_iris() X, y = iris.data, iris.target X = np.repeat(X, 100, axis=0) y = np.repeat(y, 100, axis=0) start = time.time() clf = OneVsRestClassifier(SVC(kernel='linear', probability=True, class_weight='auto')) clf.fit(X, y) end = time.time() print \"Single SVC\", end - start, clf.score(X,y) proba = clf.predict_proba(X) n_estimators = 10 start = time.time() clf = OneVsRestClassifier(BaggingClassifier(SVC(kernel='linear', probability=True, class_weight='auto'), max_samples=1.0 \/ n_estimators, n_estimators=n_estimators)) clf.fit(X, y) end = time.time() print \"Bagging SVC\", end - start, clf.score(X,y) proba = clf.predict_proba(X) start = time.time() clf = RandomForestClassifier(min_samples_leaf=20) clf.fit(X, y) end = time.time() print \"Random Forest\", end - start, clf.score(X,y) proba = clf.predict_proba(X) ``` If you want to make sure that each record is used only once for training in the BaggingClassifier, you can set the bootstrap parameter to False.
"question":"How to create\/customize your own scorer function in scikit-learn? I am using Support Vector Regression as an estimator in GridSearchCV. But I want to change the error function: instead of using the default (R-squared: coefficient of determination), I would like to define my own custom error function. I tried to make one with make_scorer, but it didn't work. I read the documentation and found that it's possible to create custom estimators, but I don't need to remake the entire estimator - only the error\/scoring function. I think I can do it by defining a callable as a scorer, like it says in the docs. But I don't know how to use an estimator: in my case SVR. Would I have to switch to a classifier (such as SVC)? And how would I use it? My custom error function is as follows: ``` def my_custom_loss_func(X_train_scaled, Y_train_scaled): error, M = 0, 0 for i in range(0, len(Y_train_scaled)): z = (Y_train_scaled[i] - M) if X_train_scaled[i] > M and Y_train_scaled[i] > M and (X_train_scaled[i] - Y_train_scaled[i]) > 0: error_i = (abs(Y_train_scaled[i] - X_train_scaled[i]))**(2*np.exp(z)) if X_train_scaled[i] > M and Y_train_scaled[i] > M and (X_train_scaled[i] - Y_train_scaled[i]) M and Y_train_scaled[i] < M: error_i = -(abs(Y_train_scaled[i] - X_train_scaled[i]))**(2*np.exp(-z)) error += error_i return error ``` The variable M isn't null\/zero. I've just set it to zero for simplicity. Would anyone be able to show an example application of this custom scoring function? Thanks for your help!", "response":"Jamie has a fleshed out example, but here's an example using make_scorer straight from scikit-learn documentation: ``` import numpy as np def my_custom_loss_func(ground_truth, predictions): diff = np.abs(ground_truth - predictions).max() return np.log(1 + diff) # loss_func will negate the return value of my_custom_loss_func, # which will be np.log(2), 0.693, given the values for ground_truth # and predictions defined below. 
loss = make_scorer(my_custom_loss_func, greater_is_better=False) score = make_scorer(my_custom_loss_func, greater_is_better=True) ground_truth = [[1, 1]] predictions = [0, 1] from sklearn.dummy import DummyClassifier clf = DummyClassifier(strategy='most_frequent', random_state=0) clf = clf.fit(ground_truth, predictions) loss(clf,ground_truth, predictions) score(clf,ground_truth, predictions) ``` When defining a custom scorer via sklearn.metrics.make_scorer, the convention is that custom functions ending in _score return a value to maximize. And for scorers ending in _loss or _error, a value is returned to be minimized. You can use this functionality by setting the greater_is_better parameter inside make_scorer. That is, this parameter would be True for scorers where higher values are better, and False for scorers where lower values are better. GridSearchCV can then optimize in the appropriate direction. You can then convert your function as a scorer as follows: ``` from sklearn.metrics.scorer import make_scorer def custom_loss_func(X_train_scaled, Y_train_scaled): error, M = 0, 0 for i in range(0, len(Y_train_scaled)): z = (Y_train_scaled[i] - M) if X_train_scaled[i] > M and Y_train_scaled[i] > M and (X_train_scaled[i] - Y_train_scaled[i]) > 0: error_i = (abs(Y_train_scaled[i] - X_train_scaled[i]))**(2*np.exp(z)) if X_train_scaled[i] > M and Y_train_scaled[i] > M and (X_train_scaled[i] - Y_train_scaled[i]) M and Y_train_scaled[i] < M: error_i = -(abs(Y_train_scaled[i] - X_train_scaled[i]))**(2*np.exp(-z)) error += error_i return error custom_scorer = make_scorer(custom_loss_func, greater_is_better=True) ``` And then pass custom_scorer into GridSearchCV as you would any other scoring function: clf = GridSearchCV(scoring=custom_scorer).", "best_answers_score":0.8, "library_name":"scikit-learn", "question_url":"https:\/\/stackoverflow.com\/questions\/32401493\/how-to-create-customize-your-own-scorer-function-in-scikit-learn", "best_answers_votes":67, 
"question_length":1509, "response_length":2193 }, { "question":"Does the SVM in sklearn support incremental (online) learning? I am currently in the process of designing a recommender system for text articles (a binary case of 'interesting' or 'not interesting'). One of my specifications is that it should continuously update to changing trends. From what I can tell, the best way to do this is to make use of machine learning algorithm that supports incremental\/online learning. Algorithms like the Perceptron and Winnow support online learning but I am not completely certain about Support Vector Machines. Does the scikit-learn python library support online learning and if so, is a support vector machine one of the algorithms that can make use of it? I am obviously not completely tied down to using support vector machines, but they are usually the go to algorithm for binary classification due to their all round performance. I would be willing to change to whatever fits best in the end.", "response":"While online algorithms for SVMs do exist, it has become important to specify if you want kernel or linear SVMs, as many efficient algorithms have been developed for the special case of linear SVMs. For the linear case, if you use the SGD classifier in scikit-learn with the hinge loss and L2 regularization you will get an SVM that can be updated online\/incrementall. You can combine this with feature transforms that approximate a kernel to get similar to an online kernel SVM. One of my specifications is that it should continuously update to changing trends. This is referred to as concept drift, and will not be handled well by a simple online SVM. Using the PassiveAggresive classifier will likely give you better results, as it's learning rate does not decrease over time. 
Assuming you get feedback while training \/ running, you can attempt to detect decreases in accuracy over time and begin training a new model when the accuracy starts to decrease (and switch to the new one when you believe that it has become more accurate). JSAT has 2 drift detection methods (see jsat.driftdetectors) that can be used to track accuracy and alert you when it has changed. It also has more online linear and kernel methods. (bias note: I'm the author of JSAT).", "best_answers_score":0.8, "library_name":"scikit-learn", "question_url":"https:\/\/stackoverflow.com\/questions\/23056460\/does-the-svm-in-sklearn-support-incremental-online-learning", "best_answers_votes":45, "question_length":932, "response_length":1255 }, { "question":"What does `sample_weight` do to the way a `DecisionTreeClassifier` works in sklearn? I've read from the relevant documentation that : Class balancing can be done by sampling an equal number of samples from each class, or preferably by normalizing the sum of the sample weights (sample_weight) for each class to the same value. But, it is still unclear to me how this works. If I set sample_weight with an array of only two possible values, 1's and 2's, does this mean that the samples with 2's will get sampled twice as often as the samples with 1's when doing the bagging? I cannot think of a practical example for this.", "response":"Some quick preliminaries: Let's say we have a classification problem with K classes. In a region of feature space represented by the node of a decision tree, recall that the \"impurity\" of the region is measured by quantifying the inhomogeneity, using the probability of the class in that region. 
Normally, we estimate: ``` Pr(Class=k) = #(examples of class k in region) \/ #(total examples in region) ``` The impurity measure takes as input the array of class probabilities: ``` [Pr(Class=1), Pr(Class=2), ..., Pr(Class=K)] ``` and spits out a number, which tells you how \"impure\" or how inhomogeneous-by-class the region of feature space is. For example, the gini measure for a two-class problem is 2*p*(1-p), where p = Pr(Class=1) and 1-p=Pr(Class=2). Now, basically the short answer to your question is: sample_weight augments the probability estimates in the probability array ... which augments the impurity measure ... which augments how nodes are split ... which augments how the tree is built ... which augments how feature space is diced up for classification. I believe this is best illustrated through an example. First consider the following 2-class problem where the inputs are 1 dimensional: ``` from sklearn.tree import DecisionTreeClassifier as DTC X = [[0],[1],[2]] # 3 simple training examples Y = [ 1, 2, 1 ] # class labels dtc = DTC(max_depth=1) ``` So, we'll look at trees with just a root node and two children. Note that the default impurity measure is the gini measure. Case 1: no sample_weight ``` dtc.fit(X,Y) print dtc.tree_.threshold # [0.5, -2, -2] print dtc.tree_.impurity # [0.44444444, 0, 0.5] ``` The first value in the threshold array tells us that the 1st training example is sent to the left child node, and the 2nd and 3rd training examples are sent to the right child node. The last two values in threshold are placeholders and are to be ignored. The impurity array tells us the computed impurity values in the parent, left, and right nodes respectively. In the parent node, p = Pr(Class=1) = 2. \/ 3., so that gini = 2*(2.0\/3.0)*(1.0\/3.0) = 0.444..... You can confirm the child node impurities as well.
Case 2: with sample_weight Now, let's try: ``` dtc.fit(X,Y,sample_weight=[1,2,3]) print dtc.tree_.threshold # [1.5, -2, -2] print dtc.tree_.impurity # [0.44444444, 0.44444444, 0.] ``` You can see the feature threshold is different. sample_weight also affects the impurity measure in each node. Specifically, in the probability estimates, the first training example is counted the same, the second is counted double, and the third is counted triple, due to the sample weights we've provided. The impurity in the parent node region is the same. This is just a coincidence. We can compute it directly: ``` p = Pr(Class=1) = (1+3) \/ (1+2+3) = 2.0\/3.0 ``` The gini measure of 4\/9 follows. Now, you can see from the chosen threshold that the first and second training examples are sent to the left child node, while the third is sent to the right. We see that impurity is calculated to be 4\/9 also in the left child node because: ``` p = Pr(Class=1) = 1 \/ (1+2) = 1\/3. ``` The impurity of zero in the right child is due to only one training example lying in that region. You can extend this with non-integer sample weights similarly. I recommend trying something like sample_weight = [1,2,2.5], and confirming the computed impurities.", "best_answers_score":0.8, "library_name":"scikit-learn", "question_url":"https:\/\/stackoverflow.com\/questions\/34389624\/what-does-sample-weight-do-to-the-way-a-decisiontreeclassifier-works-in-skle", "best_answers_votes":73, "question_length":621, "response_length":3359 }, { "question":"How to get Best Estimator on GridSearchCV (Random Forest Classifier Scikit) I'm running GridSearchCV to optimize the parameters of a classifier in scikit. Once I'm done, I'd like to know which parameters were chosen as the best. Whenever I do so I get an AttributeError: 'RandomForestClassifier' object has no attribute 'best_estimator_', and can't tell why, as it seems to be a legitimate attribute on the documentation.
``` from sklearn.grid_search import GridSearchCV X = data[usable_columns] y = data[target] X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0) rfc = RandomForestClassifier(n_jobs=-1,max_features= 'sqrt' ,n_estimators=50, oob_score = True) param_grid = { 'n_estimators': [200, 700], 'max_features': ['auto', 'sqrt', 'log2'] } CV_rfc = GridSearchCV(estimator=rfc, param_grid=param_grid, cv= 5) print '\\n',CV_rfc.best_estimator_ ``` Yields: ``` AttributeError: 'GridSearchCV' object has no attribute 'best_estimator_' ```", "response":"You have to fit your data before you can get the best parameter combination. ``` from sklearn.grid_search import GridSearchCV from sklearn.datasets import make_classification from sklearn.ensemble import RandomForestClassifier # Build a classification task using 3 informative features X, y = make_classification(n_samples=1000, n_features=10, n_informative=3, n_redundant=0, n_repeated=0, n_classes=2, random_state=0, shuffle=False) rfc = RandomForestClassifier(n_jobs=-1,max_features= 'sqrt' ,n_estimators=50, oob_score = True) param_grid = { 'n_estimators': [200, 700], 'max_features': ['auto', 'sqrt', 'log2'] } CV_rfc = GridSearchCV(estimator=rfc, param_grid=param_grid, cv= 5) CV_rfc.fit(X, y) print CV_rfc.best_params_ ```", "best_answers_score":0.8, "library_name":"scikit-learn", "question_url":"https:\/\/stackoverflow.com\/questions\/30102973\/how-to-get-best-estimator-on-gridsearchcv-random-forest-classifier-scikit", "best_answers_votes":100, "question_length":978, "response_length":729 }, { "question":"Obtain eigen values and vectors from sklearn PCA How can I get the eigen values and eigen vectors of the PCA application? ``` from sklearn.decomposition import PCA clf=PCA(0.98,whiten=True) #conserve 98% variance X_train=clf.fit_transform(X_train) X_test=clf.transform(X_test) ``` I can't find it in docs. 1.I am \"not\" able to comprehend the different results here.
Edit: ``` def pca_code(data): #raw_implementation var_per=.98 data-=np.mean(data, axis=0) data\/=np.std(data, axis=0) cov_mat=np.cov(data, rowvar=False) evals, evecs = np.linalg.eigh(cov_mat) idx = np.argsort(evals)[::-1] evecs = evecs[:,idx] evals = evals[idx] variance_retained=np.cumsum(evals)\/np.sum(evals) index=np.argmax(variance_retained>=var_per) evecs = evecs[:,:index+1] reduced_data=np.dot(evecs.T, data.T).T print(evals) print(\"_\"*30) print(evecs) print(\"_\"*30) #using scipy package clf=PCA(var_per) X_train=data.T X_train=clf.fit_transform(X_train) print(clf.explained_variance_) print(\"_\"*30) print(clf.components_) print(\"__\"*30) ``` I wish to obtain all the eigenvalues and eigenvectors instead of just the reduced set with the convergence condition.", "response":"Your implementation You are computing the eigenvectors of the correlation matrix, that is the covariance matrix of the normalized variables. data\/=np.std(data, axis=0) is not part of the classic PCA, we only center the variables. So the sklearn PCA does not feature scale the data beforehand. Apart from that you are on the right track, if we abstract the fact that the code you provided did not run ;). You only got confused with the row\/column layouts. Honestly I think it's much easier to start with X = data.T and work only with X from there on. I added your code 'fixed' at the end of the post. Getting the eigenvalues You already noted that you can get the eigenvectors using clf.components_. So you have the principal components. They are eigenvectors of the covariance matrix \ud835\udc4b\u1d40\ud835\udc4b. A way to retrieve the eigenvalues from there is to apply this matrix to each principal component and project the result onto the component. Let v_1 be the first principal component and lambda_1 the associated eigenvalue. We have: Cov v_1 = lambda_1 v_1, and thus: lambda_1 = (v_1, Cov v_1), since (v_1, v_1) = 1, where (x, y) denotes the scalar product of vectors x and y.
Back in Python you can do: ``` n_samples = X.shape[0] # We center the data and compute the sample covariance matrix. X -= np.mean(X, axis=0) cov_matrix = np.dot(X.T, X) \/ n_samples for eigenvector in pca.components_: print(np.dot(eigenvector.T, np.dot(cov_matrix, eigenvector))) ``` And you get the eigenvalue associated with the eigenvector. Well, in my tests it turned out not to work with the last couple of eigenvalues but I'd attribute that to my absence of skills in numerical stability. Now that's not the best way to get the eigenvalues but it's nice to know where they come from. The eigenvalues represent the variance in the direction of the eigenvector. So you can get them through the pca.explained_variance_ attribute: ``` eigenvalues = pca.explained_variance_ ``` Here is a reproducible example that prints the eigenvalues you get with each method: ``` import numpy as np from sklearn.decomposition import PCA from sklearn.datasets import make_classification X, y = make_classification(n_samples=1000) n_samples = X.shape[0] pca = PCA() X_transformed = pca.fit_transform(X) # We center the data and compute the sample covariance matrix. X_centered = X - np.mean(X, axis=0) cov_matrix = np.dot(X_centered.T, X_centered) \/ n_samples eigenvalues = pca.explained_variance_ for eigenvalue, eigenvector in zip(eigenvalues, pca.components_): print(np.dot(eigenvector.T, np.dot(cov_matrix, eigenvector))) print(eigenvalue) ``` Your original code, fixed If you run it you'll see the values are consistent. They're not exactly equal because numpy and scikit-learn are not using the same algorithm here. The main thing was that you were using the correlation matrix instead of the covariance matrix, as mentioned above. Also you were getting the transposed eigenvectors from numpy, which made it very confusing.
``` import numpy as np from scipy.stats.mstats import zscore from sklearn.decomposition import PCA def pca_code(data): #raw_implementation var_per=.98 data-=np.mean(data, axis=0) # data\/=np.std(data, axis=0) cov_mat=np.cov(data, rowvar=False) evals, evecs = np.linalg.eigh(cov_mat) idx = np.argsort(evals)[::-1] evecs = evecs[:,idx] evals = evals[idx] variance_retained=np.cumsum(evals)\/np.sum(evals) index=np.argmax(variance_retained>=var_per) evecs = evecs[:,:index+1] reduced_data=np.dot(evecs.T, data.T).T print(\"evals\", evals) print(\"_\"*30) print(evecs.T[1, :]) print(\"_\"*30) #using scipy package clf=PCA(var_per) X_train=data X_train=clf.fit_transform(X_train) print(clf.explained_variance_) print(\"_\"*30) print(clf.components_[1,:]) print(\"__\"*30) ```", "best_answers_score":0.8, "library_name":"scikit-learn", "question_url":"https:\/\/stackoverflow.com\/questions\/31909945\/obtain-eigen-values-and-vectors-from-sklearn-pca", "best_answers_votes":91, "question_length":1135, "response_length":3639 }, { "question":"ImportError: cannot import name 'Imputer' from 'sklearn.preprocessing' Trying to import Imputer from sklearn.preprocessing, ``` from sklearn.preprocessing import Imputer ``` Shows ``` ImportError: cannot import name 'Imputer' from 'sklearn.preprocessing' (\/home\/codeknight13\/anaconda3\/lib\/python3.7\/site-packages\/sklearn\/preprocessing\/__init__.py) ```", "response":"from sklearn.preprocessing import Imputer was deprecated with scikit-learn v0.20.4 and removed as of v0.22.2. See the sklearn changelog.
``` from sklearn.impute import SimpleImputer import numpy as np imputer = SimpleImputer(missing_values=np.nan, strategy='mean') ``` pip install scikit-learn==0.20.4 or conda install scikit-learn=0.20.4 are not good options because scikit-learn==0.20.4 is more than 3 years out of date.", "best_answers_score":0.8, "library_name":"scikit-learn", "question_url":"https:\/\/stackoverflow.com\/questions\/59439096\/importerror-cannnot-import-name-imputer-from-sklearn-preprocessing", "best_answers_votes":135, "question_length":354, "response_length":423 }, { "question":"Scikit Learn - K-Means - Elbow - criterion Today I'm trying to learn something about K-means. I have understood the algorithm and I know how it works. Now I'm looking for the right k... I found the elbow criterion as a method to detect the right k but I do not understand how to use it with scikit-learn?! In scikit-learn I'm clustering things in this way ``` kmeans = KMeans(init='k-means++', n_clusters=n_clusters, n_init=10) kmeans.fit(data) ``` So should I do this several times for n_clusters = 1...n and watch the error rate to get the right k? I think this would be stupid and would take a lot of time?!", "response":"If the true label is not known in advance (as in your case), then K-Means clustering can be evaluated using either the Elbow Criterion or the Silhouette Coefficient. Elbow Criterion Method: The idea behind the elbow method is to run k-means clustering on a given dataset for a range of values of k (num_clusters, e.g. k=1 to 10), and for each value of k, calculate the sum of squared errors (SSE). After that, plot a line graph of the SSE for each value of k. If the line graph looks like an arm, the \"elbow\" on the arm (the point where the curve bends) is the value of the optimal k (number of clusters). Here, we want to minimize SSE.
SSE tends to decrease toward 0 as we increase k (and SSE is 0 when k is equal to the number of data points in the dataset, because then each data point is its own cluster, and there is no error between it and the center of its cluster). So the goal is to choose a small value of k that still has a low SSE, and the elbow usually represents where we start to have diminishing returns by increasing k. Let's consider the iris dataset: ``` import pandas as pd from sklearn.datasets import load_iris from sklearn.cluster import KMeans import matplotlib.pyplot as plt iris = load_iris() X = pd.DataFrame(iris.data, columns=iris['feature_names']) #print(X) data = X[['sepal length (cm)', 'sepal width (cm)', 'petal length (cm)']] sse = {} for k in range(1, 10): kmeans = KMeans(n_clusters=k, max_iter=1000).fit(data) data[\"clusters\"] = kmeans.labels_ #print(data[\"clusters\"]) sse[k] = kmeans.inertia_ # Inertia: Sum of squared distances of samples to their closest cluster center plt.figure() plt.plot(list(sse.keys()), list(sse.values())) plt.xlabel(\"Number of cluster\") plt.ylabel(\"SSE\") plt.show() ``` In the plot for the above code, we can see that 3 is the optimal number of clusters for the iris dataset, which is indeed correct. Silhouette Coefficient Method: From the sklearn documentation: A higher Silhouette Coefficient score relates to a model with better-defined clusters. The Silhouette Coefficient is defined for each sample and is composed of two scores: a: The mean distance between a sample and all other points in the same class. b: The mean distance between a sample and all other points in the next nearest cluster. The Silhouette Coefficient for a single sample is then given as: s = (b - a) \/ max(a, b). Now, to find the optimal value of k for KMeans, loop through 1..n for n_clusters in KMeans and calculate the Silhouette Coefficient for each sample. A higher Silhouette Coefficient indicates that the object is well matched to its own cluster and poorly matched to neighboring clusters.
``` from sklearn.metrics import silhouette_score from sklearn.datasets import load_iris from sklearn.cluster import KMeans X = load_iris().data y = load_iris().target for n_cluster in range(2, 11): kmeans = KMeans(n_clusters=n_cluster).fit(X) label = kmeans.labels_ sil_coeff = silhouette_score(X, label, metric='euclidean') print(\"For n_clusters={}, The Silhouette Coefficient is {}\".format(n_cluster, sil_coeff)) ``` Output - For n_clusters=2, The Silhouette Coefficient is 0.680813620271 For n_clusters=3, The Silhouette Coefficient is 0.552591944521 For n_clusters=4, The Silhouette Coefficient is 0.496992849949 For n_clusters=5, The Silhouette Coefficient is 0.488517550854 For n_clusters=6, The Silhouette Coefficient is 0.370380309351 For n_clusters=7, The Silhouette Coefficient is 0.356303270516 For n_clusters=8, The Silhouette Coefficient is 0.365164535737 For n_clusters=9, The Silhouette Coefficient is 0.346583642095 For n_clusters=10, The Silhouette Coefficient is 0.328266088778 As we can see, n_clusters=2 has the highest Silhouette Coefficient. This means that 2 should be the optimal number of clusters, right? But here's the catch. The iris dataset has 3 species of flower, which contradicts 2 as the optimal number of clusters. So despite n_clusters=2 having the highest Silhouette Coefficient, we would consider n_clusters=3 as the optimal number of clusters because: the iris dataset has 3 species (most important), and n_clusters=3 has the 2nd highest value of the Silhouette Coefficient. So choosing n_clusters=3 is the optimal number of clusters for the iris dataset. Choosing the optimal number of clusters will depend on the type of dataset and the problem we are trying to solve. But in most cases, taking the highest Silhouette Coefficient will yield the optimal number of clusters.
Hope it helps!", "best_answers_score":0.8, "library_name":"scikit-learn", "question_url":"https:\/\/stackoverflow.com\/questions\/19197715\/scikit-learn-k-means-elbow-criterion", "best_answers_votes":92, "question_length":612, "response_length":4388 }, { "question":"Difference between min_samples_split and min_samples_leaf in sklearn DecisionTreeClassifier I was going through the sklearn class DecisionTreeClassifier. Looking at the parameters for the class, we have two parameters min_samples_split and min_samples_leaf. The basic idea behind them looks similar: you specify a minimum number of samples required to decide whether a node should be a leaf or split further. Why do we need two parameters when one implies the other? Is there any reason or scenario which distinguishes them?", "response":"From the documentation: The main difference between the two is that min_samples_leaf guarantees a minimum number of samples in a leaf, while min_samples_split can create arbitrary small leaves, though min_samples_split is more common in the literature. To get a grasp of this piece of documentation I think you should make the distinction between a leaf (also called external node) and an internal node. An internal node will have further splits (also called children), while a leaf is by definition a node without any children (without any further splits). min_samples_split specifies the minimum number of samples required to split an internal node, while min_samples_leaf specifies the minimum number of samples required to be at a leaf node. For instance, if min_samples_split = 5, and there are 7 samples at an internal node, then the split is allowed. But let's say the split results in two leaves, one with 1 sample, and another with 6 samples. If min_samples_leaf = 2, then the split won't be allowed (even if the internal node has 7 samples) because one of the resulting leaves will have fewer than the minimum number of samples required to be at a leaf node.
As the documentation referenced above mentions, min_samples_leaf guarantees a minimum number of samples in every leaf, no matter the value of min_samples_split.", "best_answers_score":0.8, "library_name":"scikit-learn", "question_url":"https:\/\/stackoverflow.com\/questions\/46480457\/difference-between-min-samples-split-and-min-samples-leaf-in-sklearn-decisiontre", "best_answers_votes":93, "question_length":498, "response_length":1327 }, { "question":"Why is scikit-learn SVM.SVC() extremely slow? I tried to use SVM classifier to train a data with about 100k samples, but I found it to be extremely slow and even after two hours there was no response. When the dataset has around 1k samples, I can get the result immediately. I also tried SGDClassifier and na\u00efve bayes which is quite fast and I got results within couple of minutes. Could you explain this phenomena?", "response":"General remarks about SVM-learning SVM-training with nonlinear-kernels, which is default in sklearn's SVC, is complexity-wise approximately: O(n_samples^2 * n_features) link to some question with this approximation given by one of sklearn's devs. This applies to the SMO-algorithm used within libsvm, which is the core-solver in sklearn for this type of problem. This changes much when no kernels are used and one uses sklearn.svm.LinearSVC (based on liblinear) or sklearn.linear_model.SGDClassifier. So we can do some math to approximate the time-difference between 1k and 100k samples: ``` 1k = 1000^2 = 1.000.000 steps = Time X 100k = 100.000^2 = 10.000.000.000 steps = Time X * 10000 !!! ``` This is only an approximation and can be even worse or less worse (e.g. setting cache-size; trading-off memory for speed-gains)! Scikit-learn specific remarks The situation could also be much more complex because of all that nice stuff scikit-learn is doing for us behind the bars. The above is valid for the classic 2-class SVM. 
If you are by any chance trying to learn some multi-class data, scikit-learn will automatically use OneVsRest or OneVsAll approaches to do this (as the core SVM-algorithm does not support this). Read up on scikit-learn's docs to understand this part. The same warning applies to generating probabilities: SVMs do not naturally produce probabilities for final predictions. So to use these (activated by a parameter), scikit-learn uses a heavy cross-validation procedure called Platt scaling, which will take a lot of time too! Scikit-learn documentation Because sklearn has one of the best docs, there is often a good part within these docs to explain something like that (link):
Thus ungratefully, O Jupiter, do men receive their blessings!\" Our best blessings are often the least appreciated.\"\"\" tfs = tfidf.fit_transform(t.split(\" \")) str = 'tree cat travellers fruit jupiter' response = tfidf.transform([str]) feature_names = tfidf.get_feature_names() for col in response.nonzero()[1]: print(feature_names[col], ' - ', response[0, col]) ``` and this gives me ``` (0, 28) 0.443509712811 (0, 27) 0.517461475101 (0, 8) 0.517461475101 (0, 6) 0.517461475101 tree - 0.443509712811 travellers - 0.517461475101 jupiter - 0.517461475101 fruit - 0.517461475101 ``` which is good. For any new document that comes in, is there a way to get the top n terms with the highest tfidf score?", "response":"You have to do a little bit of a song and dance to get the matrices as numpy arrays instead, but this should do what you're looking for: ``` feature_array = np.array(tfidf.get_feature_names()) tfidf_sorting = np.argsort(response.toarray()).flatten()[::-1] n = 3 top_n = feature_array[tfidf_sorting][:n] ``` This gives me: ``` array([u'fruit', u'travellers', u'jupiter'], dtype=' learning_rate). I'm not seeing where the exact documentation for the sklearn wrapper is hidden, but the code for those classes is here: https:\/\/github.com\/dmlc\/xgboost\/blob\/master\/python-package\/xgboost\/sklearn.py For your reference here is how you would set the model object parameters directly. ``` >>> grid = {'max_depth':10} >>> >>> clf = XGBClassifier() >>> clf.max_depth 3 >>> clf.set_params(**grid) XGBClassifier(base_score=0.5, colsample_bylevel=1, colsample_bytree=1, gamma=0, learning_rate=0.1, max_delta_step=0, max_depth=10, min_child_weight=1, missing=None, n_estimators=100, nthread=-1, objective='binary:logistic', reg_alpha=0, reg_lambda=1, scale_pos_weight=1, seed=0, silent=True, subsample=1) >>> clf.max_depth 10 ``` EDIT: I suppose you can set parameters on model creation, it just isn't super typical to do so since most people grid search in some means. 
However if you do so you would need to either list them as full params or use **kwargs. For example: ``` >>> XGBClassifier(max_depth=10) XGBClassifier(base_score=0.5, colsample_bylevel=1, colsample_bytree=1, gamma=0, learning_rate=0.1, max_delta_step=0, max_depth=10, min_child_weight=1, missing=None, n_estimators=100, nthread=-1, objective='binary:logistic', reg_alpha=0, reg_lambda=1, scale_pos_weight=1, seed=0, silent=True, subsample=1) >>> XGBClassifier(**grid) XGBClassifier(base_score=0.5, colsample_bylevel=1, colsample_bytree=1, gamma=0, learning_rate=0.1, max_delta_step=0, max_depth=10, min_child_weight=1, missing=None, n_estimators=100, nthread=-1, objective='binary:logistic', reg_alpha=0, reg_lambda=1, scale_pos_weight=1, seed=0, silent=True, subsample=1) ``` Using a dictionary as input without **kwargs will set that parameter to literally be your dictionary: ``` >>> XGBClassifier(grid) XGBClassifier(base_score=0.5, colsample_bylevel=1, colsample_bytree=1, gamma=0, learning_rate=0.1, max_delta_step=0, max_depth={'max_depth': 10}, min_child_weight=1, missing=None, n_estimators=100, nthread=-1, objective='binary:logistic', reg_alpha=0, reg_lambda=1, scale_pos_weight=1, seed=0, silent=True, subsample=1) ```", "best_answers_score":0.8, "library_name":"scikit-learn", "question_url":"https:\/\/stackoverflow.com\/questions\/34674797\/xgboost-xgbclassifier-defaults-in-python", "best_answers_votes":49, "question_length":1472, "response_length":2563 }, { "question":"Determining the most contributing features for SVM classifier in sklearn I have a dataset and I want to train my model on that data. After training, I need to know the features that are major contributors in the classification for a SVM classifier. There is something called feature importance for forest algorithms, is there anything similar?", "response":"Yes, there is attribute coef_ for SVM classifier but it only works for SVM with linear kernel. 
For other kernels it is not possible because the data are transformed by the kernel method to another space, which is not related to the input space; check the explanation. ``` from matplotlib import pyplot as plt from sklearn import svm def f_importances(coef, names): imp = coef imp,names = zip(*sorted(zip(imp,names))) plt.barh(range(len(names)), imp, align='center') plt.yticks(range(len(names)), names) plt.show() features_names = ['input1', 'input2'] svm = svm.SVC(kernel='linear') svm.fit(X, Y) f_importances(svm.coef_, features_names) ``` And the output of the function looks like this:", "best_answers_score":0.8, "library_name":"scikit-learn", "question_url":"https:\/\/stackoverflow.com\/questions\/41592661\/determining-the-most-contributing-features-for-svm-classifier-in-sklearn", "best_answers_votes":64, "question_length":343, "response_length":677 }, { "question":"scikit-learn: Predicting new points with DBSCAN I am using DBSCAN to cluster some data using Scikit-Learn (Python 2.7): ```py from sklearn.cluster import DBSCAN dbscan = DBSCAN(random_state=0) dbscan.fit(X) ``` However, I found that there was no built-in function (aside from \"fit_predict\") that could assign the new data points, Y, to the clusters identified in the original data, X. The K-means method has a \"predict\" function but I want to be able to do the same with DBSCAN. Something like this: ```py dbscan.predict(X, Y) ``` So that the density can be inferred from X but the return values (cluster assignments\/labels) are only for Y. From what I can tell, this capability is available in R so I assume that it is also somehow available in Python. I just can't seem to find any documentation for this. Also, I have tried searching for reasons as to why DBSCAN may not be used for labeling new data but I haven't found any justifications.", "response":"While Anony-Mousse has some good points (clustering is indeed not classifying), I think the ability of assigning new points has its usefulness.
* Based on the original paper on DBSCAN and robertlayton's ideas on github.com\/scikit-learn, I suggest running through the core points and assigning to the cluster of the first core point that is within eps of your new point. Then it is guaranteed that your point will at least be a border point of the assigned cluster according to the definitions used for the clustering. (Be aware that your point might be deemed noise and not assigned to a cluster.) I've done a quick implementation: ```py import numpy as np import scipy as sp import scipy.spatial.distance def dbscan_predict(dbscan_model, X_new, metric=sp.spatial.distance.cosine): # Result is noise by default y_new = np.ones(shape=len(X_new), dtype=int)*-1 # Iterate all input samples for a label for j, x_new in enumerate(X_new): # Find a core sample closer than EPS for i, x_core in enumerate(dbscan_model.components_): if metric(x_new, x_core) < dbscan_model.eps: # Assign label of x_core to x_new y_new[j] = dbscan_model.labels_[dbscan_model.core_sample_indices_[i]] break return y_new ``` The labels obtained by clustering (dbscan_model = DBSCAN(...).fit(X)) and the labels obtained from the same model on the same data (dbscan_predict(dbscan_model, X)) sometimes differ. I'm not quite certain if this is a bug somewhere or a result of randomness. EDIT: I think the above problem of differing prediction outcomes could stem from the possibility that a border point can be close to multiple clusters. Please update if you test this and find an answer. Ambiguity might be solved by shuffling core points every time or by picking the closest instead of the first core point. *) Case at hand: I'd like to evaluate if the clusters obtained from a subset of my data make sense for another subset or are simply a special case.
If it generalises it supports the validity of the clusters and the earlier steps of pre-processing applied.", "best_answers_score":0.8, "library_name":"scikit-learn", "question_url":"https:\/\/stackoverflow.com\/questions\/27822752\/scikit-learn-predicting-new-points-with-dbscan", "best_answers_votes":37, "question_length":943, "response_length":1994 }, { "question":"Can You Consistently Keep Track of Column Labels Using Sklearn's Transformer API? This seems like a very important issue for this library, and so far I don't see a decisive answer, although it seems like for the most part, the answer is 'No.' Right now, any method that uses the transformer api in sklearn returns a numpy array as its results. Usually this is fine, but if you're chaining together a multi-step process that expands or reduces the number of columns, not having a clean way to track how they relate to the original column labels makes it difficult to use this section of the library to its fullest. As an example, here's a snippet that I just recently used, where the inability to map new columns to the ones originally in the dataset was a big drawback: ``` numeric_columns = train.select_dtypes(include=np.number).columns.tolist() cat_columns = train.select_dtypes(include=np.object).columns.tolist() numeric_pipeline = make_pipeline(SimpleImputer(strategy='median'), StandardScaler()) cat_pipeline = make_pipeline(SimpleImputer(strategy='most_frequent'), OneHotEncoder()) transformers = [ ('num', numeric_pipeline, numeric_columns), ('cat', cat_pipeline, cat_columns) ] combined_pipe = ColumnTransformer(transformers) train_clean = combined_pipe.fit_transform(train) test_clean = combined_pipe.transform(test) ``` In this example I split up my dataset using the ColumnTransformer and then added additional columns using the OneHotEncoder, so my arrangement of columns is not the same as what I started out with. I could easily have different arrangements if I used different modules that use the same API. 
OrdinalEncoder, select_k_best, etc. If you're doing multi-step transformations, is there a way to consistently see how your new columns relate to your original dataset? There's an extensive discussion about it here, but I don't think anything has been finalized yet.", "response":"Yes, you are right that there isn't complete support for tracking the feature_names in sklearn as of now. Initially, it was decided to keep it generic at the level of numpy arrays. The latest progress on the feature-names addition to sklearn estimators can be tracked here. Anyhow, we can create wrappers to get the feature names of the ColumnTransformer. I am not sure whether it can capture all the possible types of ColumnTransformers. But at least it can solve your problem. From the documentation of ColumnTransformer: Notes The order of the columns in the transformed feature matrix follows the order of how the columns are specified in the transformers list. Columns of the original feature matrix that are not specified are dropped from the resulting transformed feature matrix, unless specified in the passthrough keyword. Those columns specified with passthrough are added at the right to the output of the transformers. Try this!
``` import pandas as pd import numpy as np from sklearn.compose import ColumnTransformer from sklearn.pipeline import make_pipeline, Pipeline from sklearn.impute import SimpleImputer from sklearn.preprocessing import StandardScaler, OneHotEncoder, MinMaxScaler from sklearn.feature_extraction.text import _VectorizerMixin from sklearn.feature_selection._base import SelectorMixin from sklearn.feature_selection import SelectKBest from sklearn.feature_extraction.text import CountVectorizer train = pd.DataFrame({'age': [23,12, 12, np.nan], 'Gender': ['M','F', np.nan, 'F'], 'income': ['high','low','low','medium'], 'sales': [10000, 100020, 110000, 100], 'foo' : [1,0,0,1], 'text': ['I will test this', 'need to write more sentence', 'want to keep it simple', 'hope you got that these sentences are junk'], 'y': [0,1,1,1]}) numeric_columns = ['age'] cat_columns = ['Gender','income'] numeric_pipeline = make_pipeline(SimpleImputer(strategy='median'), StandardScaler()) cat_pipeline = make_pipeline(SimpleImputer(strategy='most_frequent'), OneHotEncoder()) text_pipeline = make_pipeline(CountVectorizer(), SelectKBest(k=5)) transformers = [ ('num', numeric_pipeline, numeric_columns), ('cat', cat_pipeline, cat_columns), ('text', text_pipeline, 'text'), ('simple_transformer', MinMaxScaler(), ['sales']), ] combined_pipe = ColumnTransformer( transformers, remainder='passthrough') transformed_data = combined_pipe.fit_transform( train.drop('y',1), train['y']) ``` ``` def get_feature_out(estimator, feature_in): if hasattr(estimator,'get_feature_names'): if isinstance(estimator, _VectorizerMixin): # handling all vectorizers return [f'vec_{f}' \\ for f in estimator.get_feature_names()] else: return estimator.get_feature_names(feature_in) elif isinstance(estimator, SelectorMixin): return np.array(feature_in)[estimator.get_support()] else: return feature_in def get_ct_feature_names(ct): # handles all estimators, pipelines inside ColumnTransfomer # doesn't work when remainder =='passthrough' # 
which requires the input column names. output_features = [] for name, estimator, features in ct.transformers_: if name!='remainder': if isinstance(estimator, Pipeline): current_features = features for step in estimator: current_features = get_feature_out(step, current_features) features_out = current_features else: features_out = get_feature_out(estimator, features) output_features.extend(features_out) elif estimator=='passthrough': output_features.extend(ct._feature_names_in[features]) return output_features pd.DataFrame(transformed_data, columns=get_ct_feature_names(combined_pipe)) ```", "best_answers_score":0.8, "library_name":"scikit-learn", "question_url":"https:\/\/stackoverflow.com\/questions\/57528350\/can-you-consistently-keep-track-of-column-labels-using-sklearns-transformer-api", "best_answers_votes":54, "question_length":1889, "response_length":3527 }, { "question":"Evaluate multiple scores on sklearn cross_val_score I'm trying to evaluate multiple machine learning algorithms with sklearn for a couple of metrics (accuracy, recall, precision and maybe more). For what I understood from the documentation here and from the source code(I'm using sklearn 0.17), the cross_val_score function only receives one scorer for each execution. 
So for calculating multiple scores, I have to either execute it multiple times or implement my own (time-consuming and error-prone) scorer. I've executed it multiple times with this code: ``` from sklearn.svm import SVC from sklearn.naive_bayes import GaussianNB from sklearn.tree import DecisionTreeClassifier from sklearn.cross_validation import cross_val_score import time from sklearn.datasets import load_iris iris = load_iris() models = [GaussianNB(), DecisionTreeClassifier(), SVC()] names = [\"Naive Bayes\", \"Decision Tree\", \"SVM\"] for model, name in zip(models, names): print name start = time.time() for score in [\"accuracy\", \"precision\", \"recall\"]: print score, print \" : \", print cross_val_score(model, iris.data, iris.target, scoring=score, cv=10).mean() print time.time() - start ``` And I get this output: ``` Naive Bayes accuracy : 0.953333333333 precision : 0.962698412698 recall : 0.953333333333 0.0383198261261 Decision Tree accuracy : 0.953333333333 precision : 0.958888888889 recall : 0.953333333333 0.0494720935822 SVM accuracy : 0.98 precision : 0.983333333333 recall : 0.98 0.063080072403 ``` Which is OK, but it's slow for my own data. How can I measure all scores?", "response":"Since the time of writing this post scikit-learn has updated and made my answer obsolete; see the much cleaner solution below. You can write your own scoring function to capture all three pieces of information; however, a scoring function for cross validation must only return a single number in scikit-learn (this is likely for compatibility reasons). Below is an example where each of the scores for each cross-validation slice prints to the console, and the returned value is just the sum of the three metrics. If you want to return all these values, you're going to have to make some changes to cross_val_score (line 1351 of cross_validation.py) and _score (line 1601 of the same file).
``` from sklearn.svm import SVC from sklearn.naive_bayes import GaussianNB from sklearn.tree import DecisionTreeClassifier from sklearn.cross_validation import cross_val_score import time from sklearn.datasets import load_iris from sklearn.metrics import accuracy_score, precision_score, recall_score iris = load_iris() models = [GaussianNB(), DecisionTreeClassifier(), SVC()] names = [\"Naive Bayes\", \"Decision Tree\", \"SVM\"] def getScores(estimator, x, y): yPred = estimator.predict(x) return (accuracy_score(y, yPred), precision_score(y, yPred, pos_label=3, average='macro'), recall_score(y, yPred, pos_label=3, average='macro')) def my_scorer(estimator, x, y): a, p, r = getScores(estimator, x, y) print a, p, r return a+p+r for model, name in zip(models, names): print name start = time.time() m = cross_val_score(model, iris.data, iris.target,scoring=my_scorer, cv=10).mean() print '\\nSum:',m, '\\n\\n' print 'time', time.time() - start, '\\n\\n' ``` Which gives: ``` Naive Bayes 0.933333333333 0.944444444444 0.933333333333 0.933333333333 0.944444444444 0.933333333333 1.0 1.0 1.0 0.933333333333 0.944444444444 0.933333333333 0.933333333333 0.944444444444 0.933333333333 0.933333333333 0.944444444444 0.933333333333 0.866666666667 0.904761904762 0.866666666667 1.0 1.0 1.0 1.0 1.0 1.0 1.0 1.0 1.0 Sum: 2.86936507937 time 0.0249638557434 Decision Tree 1.0 1.0 1.0 0.933333333333 0.944444444444 0.933333333333 1.0 1.0 1.0 0.933333333333 0.944444444444 0.933333333333 0.933333333333 0.944444444444 0.933333333333 0.866666666667 0.866666666667 0.866666666667 0.933333333333 0.944444444444 0.933333333333 0.933333333333 0.944444444444 0.933333333333 1.0 1.0 1.0 1.0 1.0 1.0 Sum: 2.86555555556 time 0.0237860679626 SVM 1.0 1.0 1.0 0.933333333333 0.944444444444 0.933333333333 1.0 1.0 1.0 1.0 1.0 1.0 1.0 1.0 1.0 0.933333333333 0.944444444444 0.933333333333 0.933333333333 0.944444444444 0.933333333333 1.0 1.0 1.0 1.0 1.0 1.0 1.0 1.0 1.0 Sum: 2.94333333333 time 0.043044090271 ``` As of 
scikit-learn 0.19.0 the solution becomes much easier ``` from sklearn.model_selection import cross_validate from sklearn.datasets import load_iris from sklearn.svm import SVC iris = load_iris() clf = SVC() scoring = {'acc': 'accuracy', 'prec_macro': 'precision_macro', 'rec_micro': 'recall_macro'} scores = cross_validate(clf, iris.data, iris.target, scoring=scoring, cv=5, return_train_score=True) print(scores.keys()) print(scores['test_acc']) ``` Which gives: ``` ['test_acc', 'score_time', 'train_acc', 'fit_time', 'test_rec_micro', 'train_rec_micro', 'train_prec_macro', 'test_prec_macro'] [ 0.96666667 1. 0.96666667 0.96666667 1. ] ```", "best_answers_score":0.8, "library_name":"scikit-learn", "question_url":"https:\/\/stackoverflow.com\/questions\/35876508\/evaluate-multiple-scores-on-sklearn-cross-val-score", "best_answers_votes":52, "question_length":1540, "response_length":3312 }, { "question":"Scikit-learn predict_proba gives wrong answers This is a follow-up question from How to know what classes are represented in return array from predict_proba in Scikit-learn In that question, I quoted the following code: ``` >>> import sklearn >>> sklearn.__version__ '0.13.1' >>> from sklearn import svm >>> model = svm.SVC(probability=True) >>> X = [[1,2,3], [2,3,4]] # feature vectors >>> Y = ['apple', 'orange'] # classes >>> model.fit(X, Y) >>> model.predict_proba([1,2,3]) array([[ 0.39097541, 0.60902459]]) ``` I discovered in that question this result represents the probability of the point belonging to each class, in the order given by model.classes_ ``` >>> zip(model.classes_, model.predict_proba([1,2,3])[0]) [('apple', 0.39097541289393828), ('orange', 0.60902458710606167)] ``` So... this answer, if interpreted correctly, says that the point is probably an 'orange' (with a fairly low confidence, due to the tiny amount of data). But intuitively, this result is obviously incorrect, since the point given was identical to the training data for 'apple'. 
Just to be sure, I tested the reverse as well: ``` >>> zip(model.classes_, model.predict_proba([2,3,4])[0]) [('apple', 0.60705475211840931), ('orange', 0.39294524788159074)] ``` Again, obviously incorrect, but in the other direction. Finally, I tried it with points that were much further away. ``` >>> X = [[1,1,1], [20,20,20]] # feature vectors >>> model.fit(X, Y) >>> zip(model.classes_, model.predict_proba([1,1,1])[0]) [('apple', 0.33333332048410247), ('orange', 0.66666667951589786)] ``` Again, the model predicts the wrong probabilities. BUT, the model.predict function gets it right! ``` >>> model.predict([1,1,1])[0] 'apple' ``` Now, I remember reading something in the docs about predict_proba being inaccurate for small datasets, though I can't seem to find it again. Is this the expected behaviour, or am I doing something wrong? If this IS the expected behaviour, then why does the predict and predict_proba function disagree one the output? And importantly, how big does the dataset need to be before I can trust the results from predict_proba? -------- UPDATE -------- Ok, so I did some more 'experiments' into this: the behaviour of predict_proba is heavily dependent on 'n', but not in any predictable way! ``` >>> def train_test(n): ... X = [[1,2,3], [2,3,4]] * n ... Y = ['apple', 'orange'] * n ... model.fit(X, Y) ... print \"n =\", n, zip(model.classes_, model.predict_proba([1,2,3])[0]) ... >>> train_test(1) n = 1 [('apple', 0.39097541289393828), ('orange', 0.60902458710606167)] >>> for n in range(1,10): ... train_test(n) ... 
n = 1 [('apple', 0.39097541289393828), ('orange', 0.60902458710606167)] n = 2 [('apple', 0.98437355278112448), ('orange', 0.015626447218875527)] n = 3 [('apple', 0.90235408180319321), ('orange', 0.097645918196806694)] n = 4 [('apple', 0.83333299908143665), ('orange', 0.16666700091856332)] n = 5 [('apple', 0.85714254878984497), ('orange', 0.14285745121015511)] n = 6 [('apple', 0.87499969631893626), ('orange', 0.1250003036810636)] n = 7 [('apple', 0.88888844127886335), ('orange', 0.11111155872113669)] n = 8 [('apple', 0.89999988018127364), ('orange', 0.10000011981872642)] n = 9 [('apple', 0.90909082368682159), ('orange', 0.090909176313178491)] ``` How should I use this function safely in my code? At the very least, is there any value of n for which it will be guaranteed to agree with the result of model.predict?", "response":"predict_probas is using the Platt scaling feature of libsvm to callibrate probabilities, see: How does sklearn.svm.svc's function predict_proba() work internally? So indeed the hyperplane predictions and the proba calibration can disagree, especially if you only have 2 samples in your dataset. It's weird that the internal cross validation done by libsvm for scaling the probabilities does not fail (explicitly) in this case. Maybe this is a bug. One would have to dive into the Platt scaling code of libsvm to understand what's happening.", "best_answers_score":0.8, "library_name":"scikit-learn", "question_url":"https:\/\/stackoverflow.com\/questions\/17017882\/scikit-learn-predict-proba-gives-wrong-answers", "best_answers_votes":27, "question_length":3438, "response_length":540 }, { "question":"How to split data on balanced training set and test set on sklearn I am using sklearn for multi-classification task. I need to split alldata into train_set and test_set. I want to take randomly the same sample number from each class. 
Actually, I am using this function ``` X_train, X_test, y_train, y_test = cross_validation.train_test_split(Data, Target, test_size=0.3, random_state=0) ``` but it gives an unbalanced dataset! Any suggestions?", "response":"Although Christian's suggestion is correct, technically train_test_split should give you stratified results by using the stratify param. So you could do: ``` X_train, X_test, y_train, y_test = cross_validation.train_test_split(Data, Target, test_size=0.3, random_state=0, stratify=Target) ``` The trick here is that this parameter is available starting from version 0.17 of sklearn. From the documentation about the parameter stratify: stratify : array-like or None (default is None) If not None, data is split in a stratified fashion, using this as the labels array. New in version 0.17: stratify splitting", "best_answers_score":0.8, "library_name":"scikit-learn", "question_url":"https:\/\/stackoverflow.com\/questions\/35472712\/how-to-split-data-on-balanced-training-set-and-test-set-on-sklearn", "best_answers_votes":49, "question_length":438, "response_length":580 }, { "question":"Using a sparse matrix versus numpy array I am creating some numpy arrays with word counts in Python: rows are documents, columns are counts for word X. If I have a lot of zero counts, people suggest using sparse matrices when processing these further, e.g. in a classifier. When feeding a numpy array versus a sparse matrix into the Scikit logistic regression classifier, it did not seem to make much of a difference, however. So I was wondering about three things: Wikipedia says a sparse matrix is a matrix in which most of the elements are zero. Is that an appropriate way to determine when to use a sparse matrix format - as soon as > 50 % of the values are zero? Or does it make sense to use it just in case? How much does a sparse matrix help performance in a task like mine, especially compared to a numpy array or a standard list?
So far, I collect my data into a numpy array, then convert it into a csr_matrix in Scipy. Is that the right way to do it? I could not figure out how to build a sparse matrix from the ground up, and that might be impossible. Any help is much appreciated!", "response":"The scipy sparse matrix package, and similar ones in MATLAB, was based on ideas developed from linear algebra problems, such as solving large sparse linear equations (e.g. finite difference and finite element implementations). So things like matrix product (the dot product for numpy arrays) and equation solvers are well developed. My rough experience is that a sparse csr matrix product has to have a 1% sparsity to be faster than the equivalent dense dot operation - in other words, one nonzero value for every 99 zeros. (But see the tests below.) People also try to use sparse matrices to save memory. Keep in mind, though, that such a matrix has to store 3 arrays of values (at least in the coo format). So the sparsity has to be less than 1\/3 to start saving memory. Obviously you aren't going to save memory if you first build the dense array and create the sparse one from that. The scipy package implements many sparse formats. The coo format is easiest to understand and build. Build one according to the documentation and look at its .data, .row, and .col attributes (3 1d arrays). csr and csc are typically built from the coo format; they compress the data a bit, making them a bit harder to understand. But they have most of the math functionality. It is also possible to index the csr format, though in general this is slower than the equivalent dense matrix\/array case. Other operations like changing values (especially from 0 to nonzero), concatenation, and incremental growth are also slower. lil (lists of lists) is also easy to understand, and best for incremental building. dok is actually a dictionary subclass.
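Since the question asks how to build a sparse matrix from the ground up, here is a small hedged sketch using the coo format described above; the counts and shape are made-up illustrative values:

```python
import numpy as np
from scipy import sparse

# Three nonzero word counts in a 4-document x 5-term matrix (made-up values):
# doc 0, term 1 -> 3;  doc 2, term 4 -> 1;  doc 3, term 0 -> 7
data = np.array([3, 1, 7])
row = np.array([0, 2, 3])
col = np.array([1, 4, 0])

# Build the sparse matrix directly, never materializing the dense array
M = sparse.coo_matrix((data, (row, col)), shape=(4, 5))
M_csr = M.tocsr()        # csr supports fast arithmetic and row slicing

print(M_csr[0, 1])       # 3
print(M_csr.nnz)         # 3 stored values instead of 20
print(M_csr.toarray())   # dense view, for inspection only
```

Only the nonzero entries and their coordinates are ever stored, which is exactly the memory argument made above.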
A key point is that a sparse matrix is limited to 2d, and in many ways behaves like the np.matrix class (though it isn't a subclass). A search for other questions using scikit-learn and sparse might be the best way of finding the pros\/cons of using these matrices. I've answered a number of questions, but I know the 'sparse' side better than the 'learn' side. I think they are useful, but I get the sense is that the fit isn't always the best. Any customization is on the learn side. So far the sparse package has not been optimized for this application. I just tried some matrix product tests, using the sparse.random method to create a sparse matrix with a specified sparsity. Sparse matrix multiplication performed better than I expected. ``` In [251]: M=sparse.random(1000,1000,.5) In [252]: timeit M1=M*M 1 loops, best of 3: 2.78 s per loop In [253]: timeit Ma=M.toarray(); M2=Ma.dot(Ma) 1 loops, best of 3: 4.28 s per loop ``` It is a size issue; for smaller matrix the dense dot is faster ``` In [255]: M=sparse.random(100,100,.5) In [256]: timeit M1=M*M 100 loops, best of 3: 3.24 ms per loop In [257]: timeit Ma=M.toarray(); M2=Ma.dot(Ma) 1000 loops, best of 3: 1.44 ms per loop ``` But compare indexing ``` In [268]: timeit M.tocsr()[500,500] 10 loops, best of 3: 86.4 ms per loop In [269]: timeit Ma[500,500] 1000000 loops, best of 3: 318 ns per loop In [270]: timeit Ma=M.toarray();Ma[500,500] 10 loops, best of 3: 23.6 ms per loop ```", "best_answers_score":0.8, "library_name":"scikit-learn", "question_url":"https:\/\/stackoverflow.com\/questions\/36969886\/using-a-sparse-matrix-versus-numpy-array", "best_answers_votes":46, "question_length":1087, "response_length":3066 }, { "question":"module 'sklearn' has no attribute 'cross_validation' I am trying to split my dataset into training and testing dataset, but I am getting this error: ``` X_train,X_test,Y_train,Y_test = sklearn.cross_validation.train_test_split(X,df1['ENTRIESn_hourly']) ``` ``` AttributeError Traceback (most 
recent call last) in () ----> 1 X_train,X_test,Y_train,Y_test = sklearn.cross_validation.train_test_split(X,df1['ENTRIESn_hourly']) AttributeError: module 'sklearn' has no attribute 'cross_validation' ``` How can I handle this?", "response":"sklearn does not automatically import its subpackages. If you only imported via: import sklearn, then it won't work. Import with import sklearn.cross_validation instead. Further, sklearn.cross_validation will be deprecated in version 0.20. Use sklearn.model_selection.train_test_split instead.", "best_answers_score":0.8, "library_name":"scikit-learn", "question_url":"https:\/\/stackoverflow.com\/questions\/46572475\/module-sklearn-has-no-attribute-cross-validation", "best_answers_votes":130, "question_length":520, "response_length":293 }, { "question":"GridSearch for an estimator inside a OneVsRestClassifier I want to perform GridSearchCV in a SVC model, but that uses the one-vs-all strategy. For the latter part, I can just do this: ``` model_to_set = OneVsRestClassifier(SVC(kernel=\"poly\")) ``` My problem is with the parameters. 
Let's say I want to try the following values: ``` parameters = {\"C\":[1,2,4,8], \"kernel\":[\"poly\",\"rbf\"],\"degree\":[1,2,3,4]} ``` In order to perform GridSearchCV, I should do something like: ``` cv_generator = StratifiedKFold(y, k=10) model_tunning = GridSearchCV(model_to_set, param_grid=parameters, score_func=f1_score, n_jobs=1, cv=cv_generator) ``` However, then I execute it I get: ``` Traceback (most recent call last): File \"\/...\/main.py\", line 66, in argclass_sys.set_model_parameters(model_name=\"SVC\", verbose=3, file_path=PATH_ROOT_MODELS) File \"\/...\/base.py\", line 187, in set_model_parameters model_tunning.fit(self.feature_encoder.transform(self.train_feats), self.label_encoder.transform(self.train_labels)) File \"\/usr\/local\/lib\/python2.7\/dist-packages\/sklearn\/grid_search.py\", line 354, in fit return self._fit(X, y) File \"\/usr\/local\/lib\/python2.7\/dist-packages\/sklearn\/grid_search.py\", line 392, in _fit for clf_params in grid for train, test in cv) File \"\/usr\/local\/lib\/python2.7\/dist-packages\/sklearn\/externals\/joblib\/parallel.py\", line 473, in __call__ self.dispatch(function, args, kwargs) File \"\/usr\/local\/lib\/python2.7\/dist-packages\/sklearn\/externals\/joblib\/parallel.py\", line 296, in dispatch job = ImmediateApply(func, args, kwargs) File \"\/usr\/local\/lib\/python2.7\/dist-packages\/sklearn\/externals\/joblib\/parallel.py\", line 124, in __init__ self.results = func(*args, **kwargs) File \"\/usr\/local\/lib\/python2.7\/dist-packages\/sklearn\/grid_search.py\", line 85, in fit_grid_point clf.set_params(**clf_params) File \"\/usr\/local\/lib\/python2.7\/dist-packages\/sklearn\/base.py\", line 241, in set_params % (key, self.__class__.__name__)) ValueError: Invalid parameter kernel for estimator OneVsRestClassifier ``` Basically, since the SVC is inside a OneVsRestClassifier and that's the estimator I send to the GridSearchCV, the SVC's parameters can't be accessed. 
In order to accomplish what I want, I see two solutions: When creating the SVC, somehow tell it not to use the one-vs-one strategy but the one-vs-all. Somehow indicate the GridSearchCV that the parameters correspond to the estimator inside the OneVsRestClassifier. I'm yet to find a way to do any of the mentioned alternatives. Do you know if there's a way to do any of them? Or maybe you could suggest another way to get to the same result? Thanks!", "response":"When you use nested estimators with grid search you can scope the parameters with __ as a separator. In this case the SVC model is stored as an attribute named estimator inside the OneVsRestClassifier model: ``` from sklearn.datasets import load_iris from sklearn.multiclass import OneVsRestClassifier from sklearn.svm import SVC from sklearn.grid_search import GridSearchCV from sklearn.metrics import f1_score iris = load_iris() model_to_set = OneVsRestClassifier(SVC(kernel=\"poly\")) parameters = { \"estimator__C\": [1,2,4,8], \"estimator__kernel\": [\"poly\",\"rbf\"], \"estimator__degree\":[1, 2, 3, 4], } model_tunning = GridSearchCV(model_to_set, param_grid=parameters, score_func=f1_score) model_tunning.fit(iris.data, iris.target) print model_tunning.best_score_ print model_tunning.best_params_ ``` That yields: ``` 0.973290762737 {'estimator__kernel': 'poly', 'estimator__C': 1, 'estimator__degree': 2} ```", "best_answers_score":0.8, "library_name":"scikit-learn", "question_url":"https:\/\/stackoverflow.com\/questions\/12632992\/gridsearch-for-an-estimator-inside-a-onevsrestclassifier", "best_answers_votes":84, "question_length":2602, "response_length":907 }, { "question":"sklearn and large datasets I have a dataset of 22 GB. I would like to process it on my laptop. Of course I can't load it in memory. I use a lot sklearn but for much smaller datasets. In this situations the classical approach should be something like. 
Read only part of the data -> partially train your estimator -> delete the data -> read another part of the data -> continue to train your estimator. I have seen that some sklearn algorithms have a partial_fit method that should allow us to train the estimator with various subsamples of the data. Now I am wondering: is there an easy way to do that in sklearn? I am looking for something like ``` r = read_part_of_data('data.csv') m = sk.my_model for i in range(n): x = r.read_next_chunk(20 lines) m.partial_fit(x) m.predict(new_x) ``` Maybe sklearn is not the right tool for this kind of thing? Let me know.", "response":"I think sklearn is fine for larger data. If your chosen algorithms support partial_fit or an online learning approach then you're on track. One thing to be aware of is that your chunk size may influence your success. This link may be useful... Working with big data in python and numpy, not enough ram, how to save partial results on disc? I agree that h5py is useful but you may wish to use tools that are already in your quiver. Another thing you can do is to randomly pick whether or not to keep a row in your csv file... and save the result to a .npy file so it loads quicker. That way you get a sampling of your data that will allow you to start playing with it with all algorithms... and deal with the bigger-data issue along the way (or not at all! sometimes a sample with a good approach is good enough, depending on what you want).", "best_answers_score":0.8, "library_name":"scikit-learn", "question_url":"https:\/\/stackoverflow.com\/questions\/23872942\/sklearn-and-large-datasets", "best_answers_votes":19, "question_length":859, "response_length":836 }, { "question":"Apply StandardScaler to parts of a data set [duplicate] This question already has answers here: pandas dataframe columns scaling with sklearn (9 answers) Closed 3 years ago. I want to use sklearn's StandardScaler. Is it possible to apply it to some feature columns but not others?
For instance, say my data is: ``` data = pd.DataFrame({'Name' : [3, 4,6], 'Age' : [18, 92,98], 'Weight' : [68, 59,49]}) Age Name Weight 0 18 3 68 1 92 4 59 2 98 6 49 col_names = ['Name', 'Age', 'Weight'] features = data[col_names] ``` I fit and transform the data ``` scaler = StandardScaler().fit(features.values) features = scaler.transform(features.values) scaled_features = pd.DataFrame(features, columns = col_names) Name Age Weight 0 -1.069045 -1.411004 1.202703 1 -0.267261 0.623041 0.042954 2 1.336306 0.787964 -1.245657 ``` But of course the names are not really integers but strings and I don't want to standardize them. How can I apply the fit and transform methods only on the columns Age and Weight?", "response":"Introduced in v0.20 is ColumnTransformer which applies transformers to a specified set of columns of an array or pandas DataFrame. ``` import pandas as pd data = pd.DataFrame({'Name' : [3, 4,6], 'Age' : [18, 92,98], 'Weight' : [68, 59,49]}) col_names = ['Name', 'Age', 'Weight'] features = data[col_names] from sklearn.compose import ColumnTransformer from sklearn.preprocessing import StandardScaler ct = ColumnTransformer([ ('somename', StandardScaler(), ['Age', 'Weight']) ], remainder='passthrough') ct.fit_transform(features) ``` NB: Like Pipeline it also has a shorthand version make_column_transformer which doesn't require naming the transformers Output ``` -1.41100443, 1.20270298, 3. 0.62304092, 0.04295368, 4. 0.78796352, -1.24565666, 6. ```", "best_answers_score":0.8, "library_name":"scikit-learn", "question_url":"https:\/\/stackoverflow.com\/questions\/38420847\/apply-standardscaler-to-parts-of-a-data-set", "best_answers_votes":64, "question_length":993, "response_length":752 }, { "question":"Using hyphen\/dash in python repository name and package name I am trying to make my git repository pip-installable. In preparation for that I am restructuring the repo to follow the right conventions. 
My understanding from looking at other repositories is that I should put all my source code in a package that has the same name as the repository name. E.g. if my repository is called myrepo, then the source code would all go into a package also called myrepo. My repository has a hyphen in it for readability: e.g. my-repo. So if I wanted to make a package for it with the same name, it would have a hyphen in it as well. In this tutorial it says \"don't use hyphens\" for python package names. However I've seen well-established packages such as scikit-learn that have hyphens in their name. One thing that I have noticed though is that in the scikit-learn repo, the package name is not the same as the repo name and is instead called sklearn. I think my discussion above boils down to the following questions: When packaging a repo, what is the relationship between the repository's name and the package's name? Is there anything to beware of when having names that don't match? Is it okay to have hyphens in package names? What about in repository names? If the package name for scikit-learn is sklearn, then how come when I install it I do pip install scikit-learn instead of pip install sklearn?", "response":"To answer your 1st point let me rephrase my answer to a different question. The biggest source of misunderstanding is that the word \"package\" is heavily overloaded. There are 4 different names in the game \u2014 the name of the repository, the name of the directory being used for development (the one that contains setup.py), the name of the directory containing __init__.py and other importable modules, the name of distribution at PyPI. Quite often these 4 are the same or similar but that's not required. The names of the repository and development directory can be any, their names don't play any role. Of course it's convenient to name them properly but that's only convenience. The name of the directory with Python files name the package to be imported. 
Once the package is named for import the name usually sticks and cannot be changed. The name of the distribution gives one a page at PyPI and the name of distribution files (source distribution, eggs, wheels). It's the name one puts in the setup(name='distribution') call. Let me show a detailed real example. I've been maintaining a templating library called CheetahTemplate. I develop it in the development directory called cheetah3\/. The distribution at PyPI is called Cheetah3; this is the name I put into setup(name='Cheetah3'). The top-level module is Cheetah hence one does import Cheetah.Template or from Cheetah import Template; that means that I have a directory cheetah3\/Cheetah\/. The answer to 2 is: you can have dashes in repository names and PyPI distribution names but not in package (directories with __init__.py files) names and module (.py files) names because you cannot write in Python import xy-zzy, that would be subtraction and a SyntaxError. Point 3: The site and the repository names are scikit-learn, as well as the distribution name, but the importable package (the top-level directory with __init__.py) is sklearn. PEP 8 has nothing to do with the question as it doesn't talk about distribution, only about importable packages and modules.", "best_answers_score":0.8, "library_name":"scikit-learn", "question_url":"https:\/\/stackoverflow.com\/questions\/54597212\/using-hyphen-dash-in-python-repository-name-and-package-name", "best_answers_votes":90, "question_length":1400, "response_length":2015 }, { "question":"How to tune parameters in Random Forest, using Scikit Learn?
``` class sklearn.ensemble.RandomForestClassifier(n_estimators=10, criterion='gini', max_depth=None, min_samples_split=2, min_samples_leaf=1, min_weight_fraction_leaf=0.0, max_features='auto', max_leaf_nodes=None, bootstrap=True, oob_score=False, n_jobs=1, random_state=None, verbose=0, warm_start=False, class_weight=None) ``` I'm using a random forest model with 9 samples and about 7000 attributes. Of these samples, there are 3 categories that my classifier recognizes. I know this is far from ideal conditions but I'm trying to figure out which attributes are the most important in feature predictions. Which parameters would be the best to tweak for optimizing feature importance? I tried different n_estimators and noticed that the amount of \"significant features\" (i.e. nonzero values in the feature_importances_ array) increased dramatically. I've read through the documentation but if anyone has any experience in this, I would like to know which parameters are the best to tune and a brief explanation why.", "response":"From my experience, there are three features worth exploring with the sklearn RandomForestClassifier, in order of importance: n_estimators max_features criterion n_estimators is not really worth optimizing. The more estimators you give it, the better it will do. 500 or 1000 is usually sufficient. max_features is worth exploring for many different values. It may have a large impact on the behavior of the RF because it decides how many features each tree in the RF considers at each split. criterion may have a small impact, but usually the default is fine. If you have the time, try it out. Make sure to use sklearn's GridSearch (preferably GridSearchCV, but your data set size is too small) when trying out these parameters. If I understand your question correctly, though, you only have 9 samples and 3 classes? Presumably 3 samples per class? 
It's very, very likely that your RF is going to overfit with that little amount of data, unless they are good, representative records.", "best_answers_score":0.8, "library_name":"scikit-learn", "question_url":"https:\/\/stackoverflow.com\/questions\/36107820\/how-to-tune-parameters-in-random-forest-using-scikit-learn", "best_answers_votes":70, "question_length":1078, "response_length":983 }, { "question":"SKLearn MinMaxScaler - scale specific columns only [duplicate] This question already has answers here: pandas dataframe columns scaling with sklearn (9 answers) Closed 3 years ago. I'd like to scale some (but not all) of the columns in a Pandas dataFrame using a MinMaxScaler. How can I do it?", "response":"Demo: ``` In [90]: df = pd.DataFrame(np.random.randn(5, 3), index=list('abcde'), columns=list('xyz')) In [91]: df Out[91]: x y z a -0.325882 -0.299432 -0.182373 b -0.833546 -0.472082 1.158938 c -0.328513 -0.664035 0.789414 d -0.031630 -1.040802 -1.553518 e 0.813328 0.076450 0.022122 In [92]: from sklearn.preprocessing import MinMaxScaler In [93]: mms = MinMaxScaler() In [94]: df[['x','z']] = mms.fit_transform(df[['x','z']]) In [95]: df Out[95]: x y z a 0.308259 -0.299432 0.505500 b 0.000000 -0.472082 1.000000 c 0.306662 -0.664035 0.863768 d 0.486932 -1.040802 0.000000 e 1.000000 0.076450 0.580891 ``` The same result can also be achieved using sklearn.preprocessing.minmax_scale: ``` from sklearn.preprocessing import minmax_scale df[['x','z']] = minmax_scale(df[['x','z']]) ```", "best_answers_score":0.8, "library_name":"scikit-learn", "question_url":"https:\/\/stackoverflow.com\/questions\/43834242\/sklearn-minmaxscaler-scale-specific-columns-only", "best_answers_votes":56, "question_length":293, "response_length":785 }, { "question":"Use sklearn's GridSearchCV with a pipeline, preprocessing just once I'm using scikit-learn to tune a model's hyper-parameters. I'm using a pipeline to chain the preprocessing with the estimator.
A simple version of my problem would look like this: ``` import numpy as np from sklearn.model_selection import GridSearchCV from sklearn.pipeline import make_pipeline from sklearn.preprocessing import StandardScaler from sklearn.linear_model import LogisticRegression grid = GridSearchCV(make_pipeline(StandardScaler(), LogisticRegression()), param_grid={'logisticregression__C': [0.1, 10.]}, cv=2, refit=False) _ = grid.fit(X=np.random.rand(10, 3), y=np.random.randint(2, size=(10,))) ``` In my case the preprocessing (what would be StandardScaler() in the toy example) is time consuming, and I'm not tuning any parameter of it. So, when I execute the example, the StandardScaler is executed 12 times. 2 fit\/predict * 2 cv * 3 parameters. But every time StandardScaler is executed for a different value of the parameter C, it returns the same output, so it'd be much more efficient to compute it once, and then just run the estimator part of the pipeline. I can manually split the pipeline between the preprocessing (no hyper-parameters tuned) and the estimator. But to apply the preprocessing to the data, I should provide the training set only. So, I would have to implement the splits manually, and not use GridSearchCV at all. Is there a simple\/standard way to avoid repeating the preprocessing while using GridSearchCV?", "response":"Update: Ideally, the answer below should not be used as it leads to data leakage as discussed in comments. In this answer, GridSearchCV will tune the hyperparameters on the data already preprocessed by StandardScaler, which is not correct. In most conditions that should not matter much, but algorithms which are too sensitive to scaling will give wrong results. Essentially, GridSearchCV is also an estimator, implementing fit() and predict() methods, used by the pipeline.
So instead of: ``` grid = GridSearchCV(make_pipeline(StandardScaler(), LogisticRegression()), param_grid={'logisticregression__C': [0.1, 10.]}, cv=2, refit=False) ``` Do this: ``` clf = make_pipeline(StandardScaler(), GridSearchCV(LogisticRegression(), param_grid={'C': [0.1, 10.]}, cv=2, refit=True)) clf.fit(X, y) clf.predict(X) ``` Note that because GridSearchCV here wraps LogisticRegression directly rather than the pipeline, the parameter key is 'C', not 'logisticregression__C'. What it will do is call the StandardScaler() only once, for one call to clf.fit() instead of multiple calls as you described. Edit: Changed refit to True when GridSearchCV is used inside a pipeline. As mentioned in the documentation: refit : boolean, default=True Refit the best estimator with the entire dataset. If \u201cFalse\u201d, it is impossible to make predictions using this GridSearchCV instance after fitting. If refit=False, clf.fit() will have no effect because the GridSearchCV object inside the pipeline will be reinitialized after fit(). When refit=True, the GridSearchCV will be refitted with the best scoring parameter combination on the whole data that is passed in fit(). So refit=False is appropriate only if you build the pipeline just to see the scores of the grid search. If you want to call the clf.predict() method, refit=True must be used, else a Not Fitted error will be thrown.", "best_answers_score":0.8, "library_name":"scikit-learn", "question_url":"https:\/\/stackoverflow.com\/questions\/43366561\/use-sklearns-gridsearchcv-with-a-pipeline-preprocessing-just-once", "best_answers_votes":58, "question_length":1525, "response_length":1735 }, { "question":"What does clf mean in machine learning? When doing fitting, I always come across code like ``` clf = svm.SVC(kernel='linear', C=1).fit(X_train, y_train) ``` (from http:\/\/scikit-learn.org\/stable\/modules\/cross_validation.html#k-fold) What does clf stand for?
I googled around but didn't find any clues.", "response":"In the scikit-learn tutorial, it's short for classifier: We call our estimator instance clf, as it is a classifier.", "best_answers_score":0.8, "library_name":"scikit-learn", "question_url":"https:\/\/stackoverflow.com\/questions\/34540017\/what-does-clf-mean-in-machine-learning", "best_answers_votes":70, "question_length":300, "response_length":116 }, { "question":"What's the difference between KFold and ShuffleSplit CV? It seems like KFold generates the same values every time the object is iterated over, while ShuffleSplit generates different indices every time. Is this correct? If so, what are the uses for one over the other? ``` cv = cross_validation.KFold(10, n_folds=2,shuffle=True,random_state=None) cv2 = cross_validation.ShuffleSplit(10,n_iter=2,test_size=0.5) print(list(iter(cv))) print(list(iter(cv))) print(list(iter(cv2))) print(list(iter(cv2))) ``` Yields the following output: ``` [(array([1, 3, 5, 8, 9]), array([0, 2, 4, 6, 7])), (array([0, 2, 4, 6, 7]), array([1, 3, 5, 8, 9]))] [(array([1, 3, 5, 8, 9]), array([0, 2, 4, 6, 7])), (array([0, 2, 4, 6, 7]), array([1, 3, 5, 8, 9]))] [(array([4, 6, 3, 2, 7]), array([8, 1, 9, 0, 5])), (array([3, 6, 7, 0, 5]), array([9, 1, 8, 4, 2]))] [(array([3, 0, 2, 1, 7]), array([5, 6, 9, 4, 8])), (array([0, 7, 1, 3, 8]), array([6, 2, 5, 4, 9]))] ```", "response":"Difference in KFold and ShuffleSplit output KFold will divide your data set into a prespecified number of folds, and every sample must be in one and only one fold. A fold is a subset of your dataset. ShuffleSplit will randomly sample your entire dataset during each iteration to generate a training set and a test set. The test_size and train_size parameters control how large the test and training sets should be for each iteration. Since you are sampling from the entire dataset during each iteration, values selected during one iteration could be selected again during another iteration.
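This difference can be sketched with the newer sklearn.model_selection API (a sketch; the modern KFold and ShuffleSplit classes behave like the older cross_validation ones used in the question):

```python
# Contrast KFold and ShuffleSplit on 10 toy samples.
import numpy as np
from sklearn.model_selection import KFold, ShuffleSplit

X = np.arange(10).reshape(-1, 1)

# KFold: every sample appears in exactly one test fold.
kf = KFold(n_splits=2, shuffle=True, random_state=0)
kf_tests = [set(test) for _, test in kf.split(X)]
assert set.union(*kf_tests) == set(range(10))
assert set.intersection(*kf_tests) == set()

# ShuffleSplit: each round resamples independently, so a sample may be
# drawn into the test set of several rounds (or none of them).
ss = ShuffleSplit(n_splits=2, test_size=0.5, random_state=0)
ss_tests = [set(test) for _, test in ss.split(X)]
assert all(len(t) == 5 for t in ss_tests)
```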
Summary: ShuffleSplit works iteratively, KFold just divides the dataset into k folds. Difference when doing validation In KFold, during each round you will use one fold as the test set and all the remaining folds as your training set. However, in ShuffleSplit, during each round n you should only use the training and test set from iteration n. As your data set grows, cross validation time increases, making ShuffleSplit a more attractive alternative. If you can train your algorithm with a certain percentage of your data as opposed to using all k-1 folds, ShuffleSplit is an attractive option.", "best_answers_score":0.8, "library_name":"scikit-learn", "question_url":"https:\/\/stackoverflow.com\/questions\/34731421\/whats-the-difference-between-kfold-and-shufflesplit-cv", "best_answers_votes":66, "question_length":944, "response_length":1190 }, { "question":"How to predict time series in scikit-learn? Scikit-learn utilizes a very convenient approach based on fit and predict methods. I have time-series data in the format suited for fit and predict. For example I have the following Xs: ``` [[1.0, 2.3, 4.5], [6.7, 2.7, 1.2], ..., [3.2, 4.7, 1.1]] ``` and the corresponding ys: ``` [[1.0], [2.3], ..., [7.7]] ``` These data have the following meaning. The values stored in ys form a time series. The values in Xs are corresponding time dependent \"factors\" that are known to have some influence on the values in ys (for example: temperature, humidity and atmospheric pressure). Now, of course, I can use fit(Xs,ys). But then I get a model in which future values in ys depend only on factors and do not depend on the previous Y values (at least directly) and this is a limitation of the model. I would like to have a model in which Y_n depends also on Y_{n-1} and Y_{n-2} and so on. For example I might want to use an exponential moving average as a model.
What is the most elegant way to do it in scikit-learn? ADDED As it has been mentioned in the comments, I can extend Xs by adding ys. But this way has some limitations. For example, if I add the last 5 values of y as 5 new columns to X, the information about time ordering of ys is lost. For example, there is no indication in X that values in the 5th column follow values in the 4th column and so on. As a model, I might want to have a linear fit of the last five ys and use the found linear function to make a prediction. But if I have 5 values in 5 columns it is not so trivial. ADDED 2 To make my problem even more clear, I would like to give one concrete example. I would like to have a \"linear\" model in which y_n = c + k1*x1 + k2*x2 + k3*x3 + k4*EMOV_n, where EMOV_n is just an exponential moving average. How can I implement this simple model in scikit-learn?", "response":"According to Wikipedia, EWMA works well with stationary data, but it does not work as expected in the presence of trends or seasonality. In those cases you should use a second or third order EWMA method, respectively. I decided to look at the pandas ewma function to see how it handled trends, and this is what I came up with: ``` import pandas, numpy as np ewma = pandas.stats.moments.ewma # make a hat function, and add noise x = np.linspace(0,1,100) x = np.hstack((x,x[::-1])) x += np.random.normal( loc=0, scale=0.1, size=200 ) plot( x, alpha=0.4, label='Raw' ) # take EWMA in both directions with a smaller span term fwd = ewma( x, span=15 ) # take EWMA in fwd direction bwd = ewma( x[::-1], span=15 ) # take EWMA in bwd direction c = np.vstack(( fwd, bwd[::-1] )) # lump fwd and bwd together c = np.mean( c, axis=0 ) # average # regular EWMA, with bias against trend plot( ewma( x, span=20 ), 'b', label='EWMA, span=20' ) # \"corrected\" (?)
EWMA plot( c, 'r', label='Reversed-Recombined' ) legend(loc=8) savefig( 'ewma_correction.png', fmt='png', dpi=100 ) ``` As you can see, the EWMA bucks the trend uphill and downhill. We can correct for this (without having to implement a second-order scheme ourselves) by taking the EWMA in both directions and then averaging. I hope your data was stationary!", "best_answers_score":0.8, "library_name":"scikit-learn", "question_url":"https:\/\/stackoverflow.com\/questions\/20841167\/how-to-predict-time-series-in-scikit-learn", "best_answers_votes":27, "question_length":1867, "response_length":1305 }, { "question":"Python scikit learn pca.explained_variance_ratio_ cutoff When choosing the number of principal components (k), we choose k to be the smallest value so that, for example, 99% of variance is retained. However, in Python's scikit-learn, I am not 100% sure pca.explained_variance_ratio_ = 0.99 is equal to \"99% of variance is retained\"? Could anyone enlighten? Thanks. The Python Scikit learn PCA manual is here http:\/\/scikit-learn.org\/stable\/modules\/generated\/sklearn.decomposition.PCA.html#sklearn.decomposition.PCA", "response":"Yes, you are nearly right. The pca.explained_variance_ratio_ parameter returns a vector of the variance explained by each dimension. Thus pca.explained_variance_ratio_[i] gives the variance explained solely by the i+1st dimension. You probably want to do pca.explained_variance_ratio_.cumsum(). That will return a vector x such that x[i] returns the cumulative variance explained by the first i+1 dimensions.
``` import numpy as np from sklearn.decomposition import PCA np.random.seed(0) my_matrix = np.random.randn(20, 5) my_model = PCA(n_components=5) my_model.fit_transform(my_matrix) print my_model.explained_variance_ print my_model.explained_variance_ratio_ print my_model.explained_variance_ratio_.cumsum() ``` ``` [ 1.50756565 1.29374452 0.97042041 0.61712667 0.31529082] [ 0.32047581 0.27502207 0.20629036 0.13118776 0.067024 ] [ 0.32047581 0.59549787 0.80178824 0.932976 1. ] ``` So in my random toy data, if I picked k=4 I would retain 93.3% of the variance.", "best_answers_score":0.8, "library_name":"scikit-learn", "question_url":"https:\/\/stackoverflow.com\/questions\/32857029\/python-scikit-learn-pca-explained-variance-ratio-cutoff", "best_answers_votes":108, "question_length":515, "response_length":969 }, { "question":"graph.write_pdf(\"iris.pdf\") AttributeError: 'list' object has no attribute 'write_pdf' My code follows Google's machine learning class. The two pieces of code are the same, but I don't know why mine shows an error. Maybe the type of a variable is wrong, since Google's code is the same as mine. Has anyone ever had this problem?
This is the error ``` [0 1 2] [0 1 2] Traceback (most recent call last): File \"\/media\/joyce\/oreo\/python\/machine_learn\/VisualizingADecisionTree.py\", line 34, in graph.write_pdf(\"iris.pdf\") AttributeError: 'list' object has no attribute 'write_pdf' [Finished in 0.4s with exit code 1] [shell_cmd: python -u \"\/media\/joyce\/oreo\/python\/machine_learn\/VisualizingADecisionTree.py\"] [dir: \/media\/joyce\/oreo\/python\/machine_learn] [path: \/usr\/local\/sbin:\/usr\/local\/bin:\/usr\/sbin:\/usr\/bin:\/sbin:\/bin:\/usr\/games:\/usr\/local\/games] ``` This is the code ``` import numpy as np from sklearn.datasets import load_iris from sklearn import tree iris = load_iris() test_idx = [0, 50, 100] # training data train_target = np.delete(iris.target, test_idx) train_data = np.delete(iris.data, test_idx, axis=0) # testing data test_target = iris.target[test_idx] test_data = iris.data[test_idx] clf = tree.DecisionTreeClassifier() clf.fit(train_data, train_target) print test_target print clf.predict(test_data) # viz code from sklearn.externals.six import StringIO import pydot dot_data = StringIO() tree.export_graphviz(clf, out_file=dot_data, feature_names=iris.feature_names, class_names=iris.target_names, filled=True, rounded=True, impurity=False) graph = pydot.graph_from_dot_data(dot_data.getvalue()) graph.write_pdf(\"iris.pdf\") ```", "response":"I think you are using a newer version of Python. Please try with pydotplus. ``` import pydotplus ...
graph = pydotplus.graph_from_dot_data(dot_data.getvalue()) graph.write_pdf(\"iris.pdf\") ``` This should do it.", "best_answers_score":0.8, "library_name":"scikit-learn", "question_url":"https:\/\/stackoverflow.com\/questions\/38176472\/graph-write-pdfiris-pdf-attributeerror-list-object-has-no-attribute-writ", "best_answers_votes":69, "question_length":1604, "response_length":208 }, { "question":"Controlling the threshold in Logistic Regression in Scikit Learn I am using the LogisticRegression() method in scikit-learn on a highly unbalanced data set. I have even turned the class_weight feature to auto. I know that in Logistic Regression it should be possible to know what is the threshold value for a particular pair of classes. Is it possible to know what the threshold value is in each of the One-vs-All classes the LogisticRegression() method designs? I did not find anything in the documentation page. Does it by default apply the 0.5 value as threshold for all the classes regardless of the parameter values?", "response":"There is a little trick that I use: instead of model.predict(test_data), use model.predict_proba(test_data).
Then use a range of values for thresholds to analyze the effects on the prediction: ``` import pandas as pd from sklearn import metrics from sklearn.metrics import confusion_matrix pred_proba_df = pd.DataFrame(model.predict_proba(x_test)) threshold_list = [0.05,0.1,0.15,0.2,0.25,0.3,0.35,0.4,0.45,0.5,0.55,0.6,0.65,.7,.75,.8,.85,.9,.95,.99] for i in threshold_list: print ('\\n******** For i = {} ******'.format(i)) Y_test_pred = pred_proba_df.applymap(lambda x: 1 if x>i else 0) test_accuracy = metrics.accuracy_score(Y_test.values.reshape(Y_test.values.size,1), Y_test_pred.iloc[:,1].values.reshape(Y_test_pred.iloc[:,1].values.size,1)) print('Our testing accuracy is {}'.format(test_accuracy)) print(confusion_matrix(Y_test.values.reshape(Y_test.values.size,1), Y_test_pred.iloc[:,1].values.reshape(Y_test_pred.iloc[:,1].values.size,1))) ``` Best!", "best_answers_score":0.8, "library_name":"scikit-learn", "question_url":"https:\/\/stackoverflow.com\/questions\/28716241\/controlling-the-threshold-in-logistic-regression-in-scikit-learn", "best_answers_votes":38, "question_length":621, "response_length":871 }, { "question":"Python scikit-learn: exporting trained classifier I am using a DBN (deep belief network) from nolearn based on scikit-learn. I have already built a Network which can classify my data very well, now I am interested in exporting the model for deployment, but I don't know how (I am training the DBN every time I want to predict something). In Matlab I would just export the weight matrix and import it in another machine. Does someone know how to export the model\/the weight matrix to be imported without needing to train the whole model again?", "response":"First, install joblib. You can use: ``` >>> import joblib >>> joblib.dump(clf, 'my_model.pkl', compress=9) ``` And then later, on the prediction server: ``` >>> import joblib >>> model_clone = joblib.load('my_model.pkl') ``` This is basically a Python pickle with an optimized handling for large numpy arrays. It has the same limitations as the regular pickle w.r.t.
code change: if the class structure of the pickle object changes you might no longer be able to unpickle the object with new versions of nolearn or scikit-learn. If you want long-term robust way of storing your model parameters you might need to write your own IO layer (e.g. using binary format serialization tools such as protocol buffers or avro or an inefficient yet portable text \/ json \/ xml representation such as PMML).", "best_answers_score":0.8, "library_name":"scikit-learn", "question_url":"https:\/\/stackoverflow.com\/questions\/17511968\/python-scikit-learn-exporting-trained-classifier", "best_answers_votes":71, "question_length":542, "response_length":794 }, { "question":"Sklearn SGDClassifier partial fit I'm trying to use SGD to classify a large dataset. As the data is too large to fit into memory, I'd like to use the partial_fit method to train the classifier. I have selected a sample of the dataset (100,000 rows) that fits into memory to test fit vs. partial_fit: ``` from sklearn.linear_model import SGDClassifier def batches(l, n): for i in xrange(0, len(l), n): yield l[i:i+n] clf1 = SGDClassifier(shuffle=True, loss='log') clf1.fit(X, Y) clf2 = SGDClassifier(shuffle=True, loss='log') n_iter = 60 for n in range(n_iter): for batch in batches(range(len(X)), 10000): clf2.partial_fit(X[batch[0]:batch[-1]+1], Y[batch[0]:batch[-1]+1], classes=numpy.unique(Y)) ``` I then test both classifiers with an identical test set. In the first case I get an accuracy of 100%. As I understand it, SGD by default passes 5 times over the training data (n_iter = 5). In the second case, I have to pass 60 times over the data to reach the same accuracy. Why this difference (5 vs. 60)? Or am I doing something wrong?", "response":"I have finally found the answer. You need to shuffle the training data between each iteration, as setting shuffle=True when instantiating the model will NOT shuffle the data when using partial_fit (it only applies to fit). 
Note: it would have been helpful to find this information on the sklearn.linear_model.SGDClassifier page. The amended code reads as follows: ``` from sklearn.linear_model import SGDClassifier import random clf2 = SGDClassifier(loss='log') # shuffle=True is useless here shuffledRange = list(range(len(X))) # list() so the indices can be shuffled in place under Python 3 n_iter = 5 for n in range(n_iter): random.shuffle(shuffledRange) shuffledX = [X[i] for i in shuffledRange] shuffledY = [Y[i] for i in shuffledRange] for batch in batches(range(len(shuffledX)), 10000): clf2.partial_fit(shuffledX[batch[0]:batch[-1]+1], shuffledY[batch[0]:batch[-1]+1], classes=numpy.unique(Y)) ```", "best_answers_score":0.8, "library_name":"scikit-learn", "question_url":"https:\/\/stackoverflow.com\/questions\/24617356\/sklearn-sgdclassifier-partial-fit", "best_answers_votes":78, "question_length":1038, "response_length":835 }, { "question":"How to import csv data file into scikit-learn? From my understanding, scikit-learn accepts data in (n-sample, n-feature) format which is a 2D array. Assuming I have data in the form ... ``` Stock prices indicator1 indicator2 2.0 123 1252 1.0 .. .. .. . . . ``` How do I import this?", "response":"A very good alternative to numpy loadtxt is read_csv from Pandas. The data is loaded into a Pandas dataframe with the big advantage that it can handle mixed data types, such as when some columns contain text and other columns contain numbers. You can then easily select only the numeric columns and convert to a numpy array with to_numpy. Pandas will also read\/write excel files and a bunch of other formats.
If we have a csv file named \"mydata.csv\": ``` point_latitude,point_longitude,line,construction,point_granularity 30.102261, -81.711777, Residential, Masonry, 1 30.063936, -81.707664, Residential, Masonry, 3 30.089579, -81.700455, Residential, Wood , 1 30.063236, -81.707703, Residential, Wood , 3 30.060614, -81.702675, Residential, Wood , 1 ``` This will read in the csv and convert the numeric columns into a numpy array for scikit_learn, then modify the order of columns and write it out to an excel spreadsheet: ``` import numpy as np import pandas as pd input_file = \"mydata.csv\" # comma delimited is the default df = pd.read_csv(input_file, header = 0) # for space delimited use: # df = pd.read_csv(input_file, header = 0, delimiter = \" \") # for tab delimited use: # df = pd.read_csv(input_file, header = 0, delimiter = \"\\t\") # put the original column names in a python list original_headers = list(df.columns.values) # remove the non-numeric columns df = df._get_numeric_data() # put the numeric column names in a python list numeric_headers = list(df.columns.values) # create a numpy array with the numeric values for input into scikit-learn numpy_array = df.to_numpy() # reverse the order of the columns numeric_headers.reverse() reverse_df = df[numeric_headers] # write the reverse_df to an excel spreadsheet reverse_df.to_excel('path_to_file.xls') ```", "best_answers_score":0.8, "library_name":"scikit-learn", "question_url":"https:\/\/stackoverflow.com\/questions\/11023411\/how-to-import-csv-data-file-into-scikit-learn", "best_answers_votes":71, "question_length":286, "response_length":1766 }, { "question":"Visualizing decision tree in scikit-learn I am trying to design a simple Decision Tree using scikit-learn in Python (I am using Anaconda's Ipython Notebook with Python 2.7.3 on Windows OS) and visualize it as follows: ``` from pandas import read_csv, DataFrame from sklearn import tree from os import system data = read_csv('D:\/training.csv') Y = data.Y X = 
data.ix[:,\"X0\":\"X33\"] dtree = tree.DecisionTreeClassifier(criterion = \"entropy\") dtree = dtree.fit(X, Y) dotfile = open(\"D:\/dtree2.dot\", 'w') dotfile = tree.export_graphviz(dtree, out_file = dotfile, feature_names = X.columns) dotfile.close() system(\"dot -Tpng D:.dot -o D:\/dtree2.png\") ``` However, I get the following error: ``` AttributeError: 'NoneType' object has no attribute 'close' ``` I use the following blog post as reference: Blogpost link The following stackoverflow question doesn't seem to work for me as well: Question Could someone help me with how to visualize the decision tree in scikit-learn?", "response":"Here is a one-liner for those who are using jupyter and sklearn (18.2+). You don't even need matplotlib for that. The only requirement is graphviz ``` pip install graphviz ``` then run (according to the code in the question, X is a pandas DataFrame) ``` from graphviz import Source from sklearn import tree Source( tree.export_graphviz(dtreg, out_file=None, feature_names=X.columns)) ``` This will display it in SVG format. The code above produces Graphviz's Source object (source_code - not scary) that will be rendered directly in jupyter.
Some things you are likely to do with it: Display it in jupyter: ``` from IPython.display import SVG graph = Source( tree.export_graphviz(dtreg, out_file=None, feature_names=X.columns)) SVG(graph.pipe(format='svg')) ``` Save as png: ``` graph = Source( tree.export_graphviz(dtreg, out_file=None, feature_names=X.columns)) graph.format = 'png' graph.render('dtree_render',view=True) ``` Get the png image, save it and view it: ``` graph = Source( tree.export_graphviz(dtreg, out_file=None, feature_names=X.columns)) png_bytes = graph.pipe(format='png') with open('dtree_pipe.png','wb') as f: f.write(png_bytes) from IPython.display import Image Image(png_bytes) ``` If you are going to play with that lib here are the links to examples and userguide", "best_answers_score":0.8, "library_name":"scikit-learn", "question_url":"https:\/\/stackoverflow.com\/questions\/27817994\/visualizing-decision-tree-in-scikit-learn", "best_answers_votes":39, "question_length":971, "response_length":1268 }, { "question":"How does sklearn.svm.svc's function predict_proba() work internally? I am using sklearn.svm.svc from scikit-learn to do binary classification. I am using its predict_proba() function to get probability estimates. Can anyone tell me how predict_proba() internally calculates the probability?", "response":"Scikit-learn uses LibSVM internally, and this in turn uses Platt scaling, as detailed in this note by the LibSVM authors, to calibrate the SVM to produce probabilities in addition to class predictions. Platt scaling requires first training the SVM as usual, then optimizing parameter vectors A and B such that ``` P(y|X) = 1 \/ (1 + exp(A * f(X) + B)) ``` where f(X) is the signed distance of a sample from the hyperplane (scikit-learn's decision_function method). You may recognize the logistic sigmoid in this definition, the same function that logistic regression and neural nets use for turning decision functions into probability estimates.
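That sigmoid mapping can be sketched in a few lines (A and B below are illustrative placeholders, not fitted values; LibSVM estimates them by cross-validated optimization):

```python
import math

def platt_probability(decision_value, A=-1.0, B=0.0):
    # P(y|X) = 1 / (1 + exp(A * f(X) + B)); A is typically negative so that
    # larger positive margins map to probabilities closer to 1.
    return 1.0 / (1.0 + math.exp(A * decision_value + B))

# A large positive margin yields a probability near 1,
# a large negative margin a probability near 0.
print(platt_probability(5.0), platt_probability(-5.0))
```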
Mind you: the B parameter, the \"intercept\" or \"bias\" or whatever you like to call it, can cause predictions based on probability estimates from this model to be inconsistent with the ones you get from the SVM decision function f. E.g. suppose that f(X) = 10, then the prediction for X is positive; but if B = -9.9 and A = 1, then P(y|X) = .475. I'm pulling these numbers out of thin air, but you've noticed that this can occur in practice. Effectively, Platt scaling trains a probability model on top of the SVM's outputs under a cross-entropy loss function. To prevent this model from overfitting, it uses an internal five-fold cross validation, meaning that training SVMs with probability=True can be quite a lot more expensive than a vanilla, non-probabilistic SVM.", "best_answers_score":0.8, "library_name":"scikit-learn", "question_url":"https:\/\/stackoverflow.com\/questions\/15111408\/how-does-sklearn-svm-svcs-function-predict-proba-work-internally", "best_answers_votes":80, "question_length":290, "response_length":1413 }, { "question":"Sklearn StratifiedKFold: ValueError: Supported target types are: ('binary', 'multiclass'). Got 'multilabel-indicator' instead Working with Sklearn stratified kfold split, when I attempt to split using multi-class, I receive an error (see below). When I try to split using binary, it works with no problem. ```python num_classes = len(np.unique(y_train)) y_train_categorical = keras.utils.to_categorical(y_train, num_classes) kf=StratifiedKFold(n_splits=5, shuffle=True, random_state=999) # splitting data into different folds for i, (train_index, val_index) in enumerate(kf.split(x_train, y_train_categorical)): x_train_kf, x_val_kf = x_train[train_index], x_train[val_index] y_train_kf, y_val_kf = y_train[train_index], y_train[val_index] ValueError: Supported target types are: ('binary', 'multiclass'). Got 'multilabel-indicator' instead. ```", "response":"keras.utils.to_categorical produces a one-hot encoded class vector, i.e.
the multilabel-indicator mentioned in the error message. StratifiedKFold is not designed to work with such input; from the split method docs: split(X, y, groups=None) [...] y : array-like, shape (n_samples,) The target variable for supervised learning problems. Stratification is done based on the y labels. i.e. your y must be a 1-D array of your class labels. Essentially, what you have to do is simply invert the order of the operations: split first (using your initial y_train), and convert to_categorical afterwards.", "best_answers_score":0.8, "library_name":"scikit-learn", "question_url":"https:\/\/stackoverflow.com\/questions\/48508036\/sklearn-stratifiedkfold-valueerror-supported-target-types-are-binary-mul", "best_answers_votes":39, "question_length":848, "response_length":596 }, { "question":"How do I find which attributes my tree splits on, when using scikit-learn? I have been exploring scikit-learn, making decision trees with both entropy and gini splitting criteria, and exploring the differences. My question is, how can I \"open the hood\" and find out exactly which attributes the trees are splitting on at each level, along with their associated information values, so I can see where the two criteria make different choices? So far, I have explored the 9 methods outlined in the documentation. They don't appear to allow access to this information. But surely this information is accessible? I'm envisioning a list or dict that has entries for node and gain.", "response":"Directly from the documentation ( http:\/\/scikit-learn.org\/0.12\/modules\/tree.html ): ``` from io import StringIO out = StringIO() out = tree.export_graphviz(clf, out_file=out) ``` The StringIO module is no longer supported in Python 3; import the io module instead. There is also the tree_ attribute in your decision tree object, which allows direct access to the whole structure.
And you can simply read it: ``` clf.tree_.children_left #array of left children clf.tree_.children_right #array of right children clf.tree_.feature #array of nodes splitting feature clf.tree_.threshold #array of nodes splitting points clf.tree_.value #array of nodes values ``` For more details, look at the source code of the export method. In general, you can use the inspect module ``` from inspect import getmembers print( getmembers( clf.tree_ ) ) ``` to get all the object's elements", "best_answers_score":0.8, "library_name":"scikit-learn", "question_url":"https:\/\/stackoverflow.com\/questions\/20156951\/how-do-i-find-which-attributes-my-tree-splits-on-when-using-scikit-learn", "best_answers_votes":37, "question_length":675, "response_length":856 }, { "question":"How to plot scikit learn classification report? Is it possible to plot a scikit-learn classification report with matplotlib? Let's assume I print the classification report like this: ``` print '\\n*Classification Report:\\n', classification_report(y_test, predictions) confusion_matrix_graph = confusion_matrix(y_test, predictions) ``` and I get: ``` Clasification Report: precision recall f1-score support 1 0.62 1.00 0.76 66 2 0.93 0.93 0.93 40 3 0.59 0.97 0.73 67 4 0.47 0.92 0.62 272 5 1.00 0.16 0.28 413 avg \/ total 0.77 0.57 0.49 858 ``` How can I \"plot\" the above chart?", "response":"Expanding on Bin's answer: ``` import matplotlib.pyplot as plt import numpy as np def show_values(pc, fmt=\"%.2f\", **kw): ''' Heatmap with text in each cell with matplotlib's pyplot Source: https:\/\/stackoverflow.com\/a\/25074150\/395857 By HYRY ''' from itertools import izip pc.update_scalarmappable() ax = pc.get_axes() #ax = pc.axes# FOR LATEST MATPLOTLIB #Use zip BELOW IN PYTHON 3 for p, color, value in izip(pc.get_paths(), pc.get_facecolors(), pc.get_array()): x, y = p.vertices[:-2, :].mean(0) if np.all(color[:3] > 0.5): color = (0.0, 0.0, 0.0) else: color = (1.0, 1.0, 1.0) ax.text(x, y, fmt % value, ha=\"center\",
va=\"center\", color=color, **kw) def cm2inch(*tupl): ''' Specify figure size in centimeter in matplotlib Source: https:\/\/stackoverflow.com\/a\/22787457\/395857 By gns-ank ''' inch = 2.54 if type(tupl[0]) == tuple: return tuple(i\/inch for i in tupl[0]) else: return tuple(i\/inch for i in tupl) def heatmap(AUC, title, xlabel, ylabel, xticklabels, yticklabels, figure_width=40, figure_height=20, correct_orientation=False, cmap='RdBu'): ''' Inspired by: - https:\/\/stackoverflow.com\/a\/16124677\/395857 - https:\/\/stackoverflow.com\/a\/25074150\/395857 ''' # Plot it out fig, ax = plt.subplots() #c = ax.pcolor(AUC, edgecolors='k', linestyle= 'dashed', linewidths=0.2, cmap='RdBu', vmin=0.0, vmax=1.0) c = ax.pcolor(AUC, edgecolors='k', linestyle= 'dashed', linewidths=0.2, cmap=cmap) # put the major ticks at the middle of each cell ax.set_yticks(np.arange(AUC.shape[0]) + 0.5, minor=False) ax.set_xticks(np.arange(AUC.shape[1]) + 0.5, minor=False) # set tick labels #ax.set_xticklabels(np.arange(1,AUC.shape[1]+1), minor=False) ax.set_xticklabels(xticklabels, minor=False) ax.set_yticklabels(yticklabels, minor=False) # set title and x\/y labels plt.title(title) plt.xlabel(xlabel) plt.ylabel(ylabel) # Remove last blank column plt.xlim( (0, AUC.shape[1]) ) # Turn off all the ticks ax = plt.gca() for t in ax.xaxis.get_major_ticks(): t.tick1On = False t.tick2On = False for t in ax.yaxis.get_major_ticks(): t.tick1On = False t.tick2On = False # Add color bar plt.colorbar(c) # Add text in each cell show_values(c) # Proper orientation (origin at the top left instead of bottom left) if correct_orientation: ax.invert_yaxis() ax.xaxis.tick_top() # resize fig = plt.gcf() #fig.set_size_inches(cm2inch(40, 20)) #fig.set_size_inches(cm2inch(40*4, 20*4)) fig.set_size_inches(cm2inch(figure_width, figure_height)) def plot_classification_report(classification_report, title='Classification report ', cmap='RdBu'): ''' Plot scikit-learn classification report. 
Extension based on https:\/\/stackoverflow.com\/a\/31689645\/395857 ''' lines = classification_report.split('\\n') classes = [] plotMat = [] support = [] class_names = [] for line in lines[2 : (len(lines) - 2)]: t = line.strip().split() if len(t) < 2: continue classes.append(t[0]) v = [float(x) for x in t[1: len(t) - 1]] support.append(int(t[-1])) class_names.append(t[0]) print(v) plotMat.append(v) print('plotMat: {0}'.format(plotMat)) print('support: {0}'.format(support)) xlabel = 'Metrics' ylabel = 'Classes' xticklabels = ['Precision', 'Recall', 'F1-score'] yticklabels = ['{0} ({1})'.format(class_names[idx], sup) for idx, sup in enumerate(support)] figure_width = 25 figure_height = len(class_names) + 7 correct_orientation = False heatmap(np.array(plotMat), title, xlabel, ylabel, xticklabels, yticklabels, figure_width, figure_height, correct_orientation, cmap=cmap) def main(): sampleClassificationReport = \"\"\" precision recall f1-score support Acacia 0.62 1.00 0.76 66 Blossom 0.93 0.93 0.93 40 Camellia 0.59 0.97 0.73 67 Daisy 0.47 0.92 0.62 272 Echium 1.00 0.16 0.28 413 avg \/ total 0.77 0.57 0.49 858\"\"\" plot_classification_report(sampleClassificationReport) plt.savefig('test_plot_classif_report.png', dpi=200, format='png', bbox_inches='tight') plt.close() if __name__ == \"__main__\": main() #cProfile.run('main()') # if you want to do some profiling ``` outputs: Example with more classes (~40):", "best_answers_score":0.8, "library_name":"scikit-learn", "question_url":"https:\/\/stackoverflow.com\/questions\/28200786\/how-to-plot-scikit-learn-classification-report", "best_answers_votes":44, "question_length":575, "response_length":3970 }, { "question":"Insert or delete a step in scikit-learn Pipeline Is it possible to delete or insert a step in a sklearn.pipeline.Pipeline object? I am trying to do a grid search with or without one step in the Pipeline object. And wondering whether I can insert or delete a step in the pipeline. 
I saw in the Pipeline source code that there is a self.steps object holding all the steps. We can get the steps by named_steps(). Before modifying it, I want to make sure I do not cause unexpected effects. Here is an example: ``` from sklearn.pipeline import Pipeline from sklearn.svm import SVC from sklearn.decomposition import PCA estimators = [('reduce_dim', PCA()), ('svm', SVC())] clf = Pipeline(estimators) clf ``` Is it possible that we do something like steps = clf.named_steps(), then insert or delete in this list? Does this cause an undesired effect on the clf object?", "response":"I see that everyone mentioned only the delete step. In case you want to also insert a step in the pipeline: ``` pipe.steps.append(['step name',transformer()]) ``` pipe.steps works in the same way as lists do, so you can also insert an item into a specific location: ``` pipe.steps.insert(1,['estimator',transformer()]) #insert as second step ```", "best_answers_score":0.8, "library_name":"scikit-learn", "question_url":"https:\/\/stackoverflow.com\/questions\/34324192\/insert-or-delete-a-step-in-scikit-learn-pipeline", "best_answers_votes":55, "question_length":858, "response_length":345 }, { "question":"grid search over multiple classifiers Is there a better inbuilt way to do grid search and test multiple models in a single pipeline? Of course the parameters of the models would be different, which made it complicated for me to figure out.
Here is what I did: ``` from sklearn.pipeline import Pipeline from sklearn.ensemble import RandomForestClassifier from sklearn.neighbors import KNeighborsClassifier from sklearn.svm import SVC from sklearn.naive_bayes import MultinomialNB from sklearn.grid_search import GridSearchCV def grid_search(): pipeline1 = Pipeline(( ('clf', RandomForestClassifier()), ('vec2', TfidfTransformer()) )) pipeline2 = Pipeline(( ('clf', KNeighborsClassifier()), )) pipeline3 = Pipeline(( ('clf', SVC()), )) pipeline4 = Pipeline(( ('clf', MultinomialNB()), )) parameters1 = { 'clf__n_estimators': [10, 20, 30], 'clf__criterion': ['gini', 'entropy'], 'clf__max_features': [5, 10, 15], 'clf__max_depth': ['auto', 'log2', 'sqrt', None] } parameters2 = { 'clf__n_neighbors': [3, 7, 10], 'clf__weights': ['uniform', 'distance'] } parameters3 = { 'clf__C': [0.01, 0.1, 1.0], 'clf__kernel': ['rbf', 'poly'], 'clf__gamma': [0.01, 0.1, 1.0], } parameters4 = { 'clf__alpha': [0.01, 0.1, 1.0] } pars = [parameters1, parameters2, parameters3, parameters4] pips = [pipeline1, pipeline2, pipeline3, pipeline4] print \"starting Gridsearch\" for i in range(len(pars)): gs = GridSearchCV(pips[i], pars[i], verbose=2, refit=False, n_jobs=-1) gs = gs.fit(X_train, y_train) print \"finished Gridsearch\" print gs.best_score_ ``` However, this approach is still giving the best model within each classifier, and not comparing between classifiers.", "response":"Although the solution from dubek is more straightforward, it does not help with interactions between parameters of pipeline elements that come before the classifier. Therefore, I have written a helper class to deal with it, which can be included in the default Pipeline setting of scikit.
A minimal example: ```py from sklearn.pipeline import Pipeline from sklearn.model_selection import GridSearchCV from sklearn.preprocessing import StandardScaler, MaxAbsScaler from sklearn.svm import LinearSVC from sklearn.ensemble import RandomForestClassifier from sklearn import datasets from pipelinehelper import PipelineHelper iris = datasets.load_iris() X_iris = iris.data y_iris = iris.target pipe = Pipeline([ ('scaler', PipelineHelper([ ('std', StandardScaler()), ('max', MaxAbsScaler()), ])), ('classifier', PipelineHelper([ ('svm', LinearSVC()), ('rf', RandomForestClassifier()), ])), ]) params = { 'scaler__selected_model': pipe.named_steps['scaler'].generate({ 'std__with_mean': [True, False], 'std__with_std': [True, False], 'max__copy': [True], # just for displaying }), 'classifier__selected_model': pipe.named_steps['classifier'].generate({ 'svm__C': [0.1, 1.0], 'rf__n_estimators': [100, 20], }) } grid = GridSearchCV(pipe, params, scoring='accuracy', verbose=1) grid.fit(X_iris, y_iris) print(grid.best_params_) print(grid.best_score_) ``` It can also be used for other elements of the pipeline, not just the classifier. The code is on GitHub if anyone wants to check it out. Edit: I have published this on PyPI if anyone is interested, just install it using pip install pipelinehelper.", "best_answers_score":0.8, "library_name":"scikit-learn", "question_url":"https:\/\/stackoverflow.com\/questions\/23045318\/grid-search-over-multiple-classifiers", "best_answers_votes":24, "question_length":1652, "response_length":1588 }, { "question":"How to insert Keras model into scikit-learn pipeline? I'm using a scikit-learn custom pipeline (sklearn.pipeline.Pipeline) in conjunction with RandomizedSearchCV for hyper-parameter optimization. This works great. Now I would like to insert a keras model as a first step into the pipeline. The parameters of the model should be optimized.
The computed (fitted) keras model should then be used later on in the pipeline by other steps, so I think I have to store the model as a global variable so that the other pipeline steps can use it. Is this right? I know that keras offers some wrappers for the scikit-learn API, but the problem is that these wrappers already do classification\/regression, but I only want to compute the keras model and nothing else. How can this be done? For example, I have a method which returns the model: ```py def create_model(file_path, argument2,...): ... return model ``` The method needs some fixed parameters like a file_path etc. but X and y are not needed (or can be ignored). The parameters of the model should be optimized (number of layers etc.).", "response":"You need to wrap your Keras model as a Scikit learn model first and then proceed as usual. Here's a quick example (I've omitted the imports for brevity). Here is a full blog post with this one and many other examples: Scikit-learn Pipeline Examples ```py # create a function that returns a model, taking as parameters things you # want to verify using cross-validation and model selection def create_model(optimizer='adagrad', kernel_initializer='glorot_uniform', dropout=0.2): model = Sequential() model.add(Dense(64,activation='relu',kernel_initializer=kernel_initializer)) model.add(Dropout(dropout)) model.add(Dense(1,activation='sigmoid',kernel_initializer=kernel_initializer)) model.compile(loss='binary_crossentropy',optimizer=optimizer, metrics=['accuracy']) return model # wrap the model using the function you created clf = KerasRegressor(build_fn=create_model,verbose=0) # just create the pipeline pipeline = Pipeline([ ('clf',clf) ]) pipeline.fit(X_train, y_train) ```", "best_answers_score":0.8, "library_name":"scikit-learn", "question_url":"https:\/\/stackoverflow.com\/questions\/42415076\/how-to-insert-keras-model-into-scikit-learn-pipeline", "best_answers_votes":40, "question_length":1083, "response_length":979 }, {
"question":"Pass a dict to scikit learn estimator I am trying to pass model parameters as a dict to a Scikit-learn estimator and am having no luck. It just seems to nest my dict into one of the parameters. For instance: ``` params = { 'copy_X': True, 'fit_intercept': False, 'normalize': True } lr = LinearRegression(params) ``` Gives me: ``` LinearRegression(copy_X=True, fit_intercept={'copy_X': True, 'fit_intercept': False,'normalize': True}, normalize=False) ``` Additionally, I created a function to iterate over the dict and can create a string like: ``` 'copy_X=True, fit_intercept=True, normalize=False' ``` This was equally as unsuccessful. Anyone have any advice here? The only restriction I have is the data will be coming to me as a dict (well actually a json object being loaded with json.uploads). Thanks.", "response":"The best solution to initialise your estimator with the right parameters would be to unpack your dictionary: ``` lr = LinearRegression(**params) ``` If for some reason you need to set some parameters afterwards, you could use: ``` lr.set_params(**params) ``` This has an advantage over using setattr in that it allows Scikit learn to perform some validation checks on the parameters.", "best_answers_score":0.8, "library_name":"scikit-learn", "question_url":"https:\/\/stackoverflow.com\/questions\/33110973\/pass-a-dict-to-scikit-learn-estimator", "best_answers_votes":95, "question_length":808, "response_length":383 }, { "question":"return coefficients from Pipeline object in sklearn I've fit a Pipeline object with RandomizedSearchCV ``` pipe_sgd = Pipeline([('scl', StandardScaler()), ('clf', SGDClassifier(n_jobs=-1))]) param_dist_sgd = {'clf__loss': ['log'], 'clf__penalty': [None, 'l1', 'l2', 'elasticnet'], 'clf__alpha': np.linspace(0.15, 0.35), 'clf__n_iter': [3, 5, 7]} sgd_randomized_pipe = RandomizedSearchCV(estimator = pipe_sgd, param_distributions=param_dist_sgd, cv=3, n_iter=30, n_jobs=-1) sgd_randomized_pipe.fit(X_train, y_train) ``` I want to 
access the coef_ attribute of the best_estimator_ but I'm unable to do that. I've tried accessing coef_ with the code below. sgd_randomized_pipe.best_estimator_.coef_ However I get the following AttributeError... AttributeError: 'Pipeline' object has no attribute 'coef_' The scikit-learn docs say that coef_ is an attribute of SGDClassifier, which is the class of my base_estimator_. What am I doing wrong?", "response":"You can always use the names you assigned to them while making the pipeline by using the named_steps dict. ``` scaler = sgd_randomized_pipe.best_estimator_.named_steps['scl'] classifier = sgd_randomized_pipe.best_estimator_.named_steps['clf'] ``` and then access all the attributes like coef_, intercept_ etc. which are available to corresponding fitted estimator. This is the formal attribute exposed by the Pipeline as specified in the documentation: named_steps : dict Read-only attribute to access any step parameter by user given name. Keys are step names and values are steps parameters.", "best_answers_score":0.8, "library_name":"scikit-learn", "question_url":"https:\/\/stackoverflow.com\/questions\/43856280\/return-coefficients-from-pipeline-object-in-sklearn", "best_answers_votes":48, "question_length":936, "response_length":593 }, { "question":"Can I use CountVectorizer in scikit-learn to count frequency of documents that were not used to extract the tokens? I have been working with the CountVectorizer class in scikit-learn. I understand that if used in the manner shown below, the final output will consist of an array containing counts of features, or tokens. These tokens are extracted from a set of keywords, i.e. 
``` tags = [ \"python, tools\", \"linux, tools, ubuntu\", \"distributed systems, linux, networking, tools\", ] ``` The next step is: ``` from sklearn.feature_extraction.text import CountVectorizer vec = CountVectorizer(tokenizer=tokenize) data = vec.fit_transform(tags).toarray() print data ``` Where we get ``` [[0 0 0 1 1 0] [0 1 0 0 1 1] [1 1 1 0 1 0]] ``` This is fine, but my situation is just a little bit different. I want to extract the features the same way as above, but I don't want the rows in data to be the same documents that the features were extracted from. In other words, how can I get counts of another set of documents, say, ``` list_of_new_documents = [ [\"python, chicken\"], [\"linux, cow, ubuntu\"], [\"machine learning, bird, fish, pig\"] ] ``` And get: ``` [[0 0 0 1 0 0] [0 1 0 0 0 1] [0 0 0 0 0 0]] ``` I have read the documentation for the CountVectorizer class, and came across the vocabulary argument, which is a mapping of terms to feature indices. I can't seem to get this argument to help me, however. Any advice is appreciated. PS: all credit due to Matthias Friedrich's Blog for the example I used above.", "response":"You're right that vocabulary is what you want. It works like this: ``` >>> cv = sklearn.feature_extraction.text.CountVectorizer(vocabulary=['hot', 'cold', 'old']) >>> cv.fit_transform(['pease porridge hot', 'pease porridge cold', 'pease porridge in the pot', 'nine days old']).toarray() array([[1, 0, 0], [0, 1, 0], [0, 0, 0], [0, 0, 1]], dtype=int64) ``` So you pass it a dict with your desired features as the keys. If you used CountVectorizer on one set of documents and then you want to use the set of features from those documents for a new set, use the vocabulary_ attribute of your original CountVectorizer and pass it to the new one. 
So in your example, you could do ``` newVec = CountVectorizer(vocabulary=vec.vocabulary_) ``` to create a new tokenizer using the vocabulary from your first one.", "best_answers_score":0.8, "library_name":"scikit-learn", "question_url":"https:\/\/stackoverflow.com\/questions\/22920801\/can-i-use-countvectorizer-in-scikit-learn-to-count-frequency-of-documents-that-w", "best_answers_votes":55, "question_length":1506, "response_length":803 }, { "question":"When should one use LinearSVC or SVC? From my research, I found three conflicting results: SVC(kernel=\"linear\") is better LinearSVC is better Doesn't matter Can someone explain when to use LinearSVC vs. SVC(kernel=\"linear\")? It seems like LinearSVC is marginally better than SVC and is usually more finicky. But if scikit decided to spend time on implementing a specific case for linear classification, why wouldn't LinearSVC outperform SVC?", "response":"Mathematically, optimizing an SVM is a convex optimization problem, usually with a unique minimizer. This means that there is only one solution to this mathematical optimization problem. The differences in results come from several aspects: SVC and LinearSVC are supposed to optimize the same problem, but in fact all liblinear estimators penalize the intercept, whereas libsvm ones don't (IIRC). This leads to a different mathematical optimization problem and thus different results. There may also be other subtle differences such as scaling and default loss function (edit: make sure you set loss='hinge' in LinearSVC). Next, in multiclass classification, liblinear does one-vs-rest by default whereas libsvm does one-vs-one. SGDClassifier(loss='hinge') is different from the other two in the sense that it uses stochastic gradient descent and not exact gradient descent and may not converge to the same solution. However the obtained solution may generalize better. 
Between SVC and LinearSVC, one important decision criterion is that LinearSVC tends to be faster to converge the larger the number of samples is. This is due to the fact that the linear kernel is a special case, which is optimized for in Liblinear, but not in Libsvm.", "best_answers_score":0.8, "library_name":"scikit-learn", "question_url":"https:\/\/stackoverflow.com\/questions\/35076586\/when-should-one-use-linearsvc-or-svc", "best_answers_votes":43, "question_length":441, "response_length":1237 }, { "question":"Understanding the `ngram_range` argument in a CountVectorizer in sklearn I'm a little confused about how to use ngrams in the scikit-learn library in Python, specifically, how the ngram_range argument works in a CountVectorizer. Running this code: ``` from sklearn.feature_extraction.text import CountVectorizer vocabulary = ['hi ', 'bye', 'run away'] cv = CountVectorizer(vocabulary=vocabulary, ngram_range=(1, 2)) print cv.vocabulary_ ``` gives me: ``` {'hi ': 0, 'bye': 1, 'run away': 2} ``` Where I was under the (obviously mistaken) impression that I would get unigrams and bigrams, like this: ``` {'hi ': 0, 'bye': 1, 'run away': 2, 'run': 3, 'away': 4} ``` I am working with the documentation here: http:\/\/scikit-learn.org\/stable\/modules\/feature_extraction.html Clearly there is something terribly wrong with my understanding of how to use ngrams. Perhaps the argument is having no effect or I have some conceptual issue with what an actual bigram is! I'm stumped. If anyone has a word of advice to throw my way, I'd be grateful. UPDATE: I have realized the folly of my ways. I was under the impression that the ngram_range would affect the vocabulary, not the corpus.", "response":"Setting the vocabulary explicitly means no vocabulary is learned from data. 
If you don't set it, you get: ``` >>> v = CountVectorizer(ngram_range=(1, 2)) >>> pprint(v.fit([\"an apple a day keeps the doctor away\"]).vocabulary_) {u'an': 0, u'an apple': 1, u'apple': 2, u'apple day': 3, u'away': 4, u'day': 5, u'day keeps': 6, u'doctor': 7, u'doctor away': 8, u'keeps': 9, u'keeps the': 10, u'the': 11, u'the doctor': 12} ``` An explicit vocabulary restricts the terms that will be extracted from text; the vocabulary is not changed: ``` >>> v = CountVectorizer(ngram_range=(1, 2), vocabulary={\"keeps\", \"keeps the\"}) >>> v.fit_transform([\"an apple a day keeps the doctor away\"]).toarray() array([[1, 1]]) # unigram and bigram found ``` (Note that stopword filtering is applied before n-gram extraction, hence \"apple day\".)", "best_answers_score":0.8, "library_name":"scikit-learn", "question_url":"https:\/\/stackoverflow.com\/questions\/24005762\/understanding-the-ngram-range-argument-in-a-countvectorizer-in-sklearn", "best_answers_votes":45, "question_length":1175, "response_length":818 }, { "question":"How to graph grid scores from GridSearchCV? I am looking for a way to graph grid_scores_ from GridSearchCV in sklearn. In this example I am trying to grid search for best gamma and C parameters for an SVR algorithm. 
My code looks as follows: ``` C_range = 10.0 ** np.arange(-4, 4) gamma_range = 10.0 ** np.arange(-4, 4) param_grid = dict(gamma=gamma_range.tolist(), C=C_range.tolist()) grid = GridSearchCV(SVR(kernel='rbf', gamma=0.1),param_grid, cv=5) grid.fit(X_train,y_train) print(grid.grid_scores_) ``` After I run the code and print the grid scores I get the following outcome: ``` [mean: -3.28593, std: 1.69134, params: {'gamma': 0.0001, 'C': 0.0001}, mean: -3.29370, std: 1.69346, params: {'gamma': 0.001, 'C': 0.0001}, mean: -3.28933, std: 1.69104, params: {'gamma': 0.01, 'C': 0.0001}, mean: -3.28925, std: 1.69106, params: {'gamma': 0.1, 'C': 0.0001}, mean: -3.28925, std: 1.69106, params: {'gamma': 1.0, 'C': 0.0001}, mean: -3.28925, std: 1.69106, params: {'gamma': 10.0, 'C': 0.0001},etc] ``` I would like to visualize all the scores (mean values) depending on the gamma and C parameters. The graph I am trying to obtain should look as follows, where the x-axis is gamma, the y-axis is the mean score (root mean square error in this case), and different lines represent different C values.", "response":"The code shown by @sascha is correct. However, the grid_scores_ attribute will soon be deprecated. It is better to use the cv_results_ attribute.
It can be implemented in a similar fashion to @sascha's method: ```py def plot_grid_search(cv_results, grid_param_1, grid_param_2, name_param_1, name_param_2): # Get Test Scores Mean and std for each grid search scores_mean = cv_results['mean_test_score'] scores_mean = np.array(scores_mean).reshape(len(grid_param_2),len(grid_param_1)) scores_sd = cv_results['std_test_score'] scores_sd = np.array(scores_sd).reshape(len(grid_param_2),len(grid_param_1)) # Plot Grid search scores _, ax = plt.subplots(1,1) # Param1 is the X-axis, Param 2 is represented as a different curve (color line) for idx, val in enumerate(grid_param_2): ax.plot(grid_param_1, scores_mean[idx,:], '-o', label= name_param_2 + ': ' + str(val)) ax.set_title(\"Grid Search Scores\", fontsize=20, fontweight='bold') ax.set_xlabel(name_param_1, fontsize=16) ax.set_ylabel('CV Average Score', fontsize=16) ax.legend(loc=\"best\", fontsize=15) ax.grid('on') # Calling Method plot_grid_search(pipe_grid.cv_results_, n_estimators, max_features, 'N Estimators', 'Max Features') ``` The above results in the following plot:", "best_answers_score":0.8, "library_name":"scikit-learn", "question_url":"https:\/\/stackoverflow.com\/questions\/37161563\/how-to-graph-grid-scores-from-gridsearchcv", "best_answers_votes":55, "question_length":1286, "response_length":1231 }, { "question":"scikit-learn DBSCAN memory usage UPDATED: In the end, the solution I opted to use for clustering my large dataset was one suggested by Anony-Mousse below. That is, using ELKI's DBSCAN implementation to do my clustering rather than scikit-learn's. It can be run from the command line and, with proper indexing, performs this task within a few hours. Use the GUI and small sample datasets to work out the options you want to use and then go to town. Worth looking into. Anywho, read on for a description of my original problem and some interesting discussion.
I have a dataset with ~2.5 million samples, each with 35 features (floating point values) that I'm trying to cluster. I've been trying to do this with scikit-learn's implementation of DBSCAN, using the Manhattan distance metric and a value of epsilon estimated from some small random samples drawn from the data. So far, so good. (here is the snippet, for reference) ``` db = DBSCAN(eps=40, min_samples=10, metric='cityblock').fit(mydata) ``` My issue at the moment is that I easily run out of memory. (I'm currently working on a machine with 16 GB of RAM) My question is, is DBSCAN calculating the pairwise distance matrix on the fly as it runs, and that's what's gobbling up my memory? (2.5 million ^ 2) * 8 bytes is obviously stupidly large, I would understand that. Should I not be using the fit() method? And more generally, is there a way around this issue, or am I generally barking up the wrong tree here? Apologies if the answer winds up being obvious. I've been puzzling over this for a few days. Thanks! Addendum: Also if anyone could explain the difference between fit(X) and fit_predict(X) to me more explicitly I'd also appreciate that--I'm afraid I just don't quite get it. Addendum #2: To be sure, I just tried this on a machine with ~550 GB of RAM and it still blew up, so I feel like DBSCAN is likely trying to make a pairwise distance matrix or something I clearly don't want it to do. I guess now the big question is how to stop that behavior, or find other methods that might suit my needs more. Thanks for bearing with me here. 
Addendum #3(!): I forgot to attach the traceback, here it is, ``` Traceback (most recent call last): File \"tDBSCAN.py\", line 34, in db = DBSCAN(eps=float(sys.argv[2]), min_samples=10, metric='cityblock').fit(mydata) File \"\/home\/jtownsend\/.local\/lib\/python2.6\/site-packages\/sklearn\/base.py\", line 329, in fit_predict self.fit(X) File \"\/home\/jtownsend\/.local\/lib\/python2.6\/site-packages\/sklearn\/cluster\/dbscan_.py\", line 186, in fit **self.get_params()) File \"\/home\/jtownsend\/.local\/lib\/python2.6\/site-packages\/sklearn\/cluster\/dbscan_.py\", line 69, in dbscan D = pairwise_distances(X, metric=metric) File \"\/home\/jtownsend\/.local\/lib\/python2.6\/site-packages\/sklearn\/metrics\/pairwise.py\", line 651, in pairwise_distances return func(X, Y, **kwds) File \"\/home\/jtownsend\/.local\/lib\/python2.6\/site-packages\/sklearn\/metrics\/pairwise.py\", line 237, in manhattan_distances D = np.abs(X[:, np.newaxis, :] - Y[np.newaxis, :, :]) MemoryError ```", "response":"The problem apparently is a non-standard DBSCAN implementation in scikit-learn. DBSCAN does not need a distance matrix. The algorithm was designed around using a database that can accelerate a regionQuery function, and return the neighbors within the query radius efficiently (a spatial index should support such queries in O(log n)). The implementation in scikit however, apparently, computes the full O(n^2) distance matrix, which comes at a cost both memory-wise and runtime-wise. So I see two choices: You may want to try the DBSCAN implementation in ELKI instead, which when used with an R*-tree index usually is substantially faster than a naive implementation. Otherwise, you may want to reimplement DBSCAN, as the implementation in scikit apparently isn't too good. Don't be scared of that: DBSCAN is really simple to implement yourself. The trickiest part of a good DBSCAN implementation is actually the regionQuery function. If you can get this query fast, DBSCAN will be fast. 
And you can actually reuse this function for other algorithms, too. Update: by now, sklearn no longer computes a distance matrix and can, e.g., use a kd-tree index. However, because of \"vectorization\" it will still precompute the neighbors of every point, so the memory usage of sklearn for large epsilon is O(n\u00b2), whereas to my understanding the version in ELKI will only use O(n) memory. So if you run out of memory, choose a smaller epsilon and\/or try ELKI.", "best_answers_score":0.8, "library_name":"scikit-learn", "question_url":"https:\/\/stackoverflow.com\/questions\/16381577\/scikit-learn-dbscan-memory-usage", "best_answers_votes":37, "question_length":3040, "response_length":1448 }, { "question":"What is \"random-state\" in sklearn.model_selection.train_test_split example? [duplicate] This question already has answers here: Random state (Pseudo-random number) in Scikit learn (8 answers) Closed 4 years ago. Can someone explain me what random_state means in below example? ``` import numpy as np from sklearn.model_selection import train_test_split X, y = np.arange(10).reshape((5, 2)), range(5) X_train, X_test, y_train, y_test = train_test_split( X, y, test_size=0.33, random_state=42) ``` Why is it hard coded to 42?", "response":"Isn't that obvious? 42 is the Answer to the Ultimate Question of Life, the Universe, and Everything. On a serious note, random_state simply sets a seed to the random generator, so that your train-test splits are always deterministic. If you don't set a seed, it is different each time. 
Relevant documentation: random_state : int, RandomState instance or None, optional (default=None) If int, random_state is the seed used by the random number generator; If RandomState instance, random_state is the random number generator; If None, the random number generator is the RandomState instance used by np.random.", "best_answers_score":0.8, "library_name":"scikit-learn", "question_url":"https:\/\/stackoverflow.com\/questions\/49147774\/what-is-random-state-in-sklearn-model-selection-train-test-split-example", "best_answers_votes":105, "question_length":523, "response_length":607 }, { "question":"ROC for multiclass classification I'm doing different text classification experiments. Now I need to calculate the AUC-ROC for each task. For the binary classifications, I already made it work with this code: ``` scaler = StandardScaler(with_mean=False) enc = LabelEncoder() y = enc.fit_transform(labels) feat_sel = SelectKBest(mutual_info_classif, k=200) clf = linear_model.LogisticRegression() pipe = Pipeline([('vectorizer', DictVectorizer()), ('scaler', StandardScaler(with_mean=False)), ('mutual_info', feat_sel), ('logistregress', clf)]) y_pred = model_selection.cross_val_predict(pipe, instances, y, cv=10) # instances is a list of dictionaries #visualisation ROC-AUC fpr, tpr, thresholds = roc_curve(y, y_pred) auc = auc(fpr, tpr) print('auc =', auc) plt.figure() plt.title('Receiver Operating Characteristic') plt.plot(fpr, tpr, 'b', label='AUC = %0.2f'% auc) plt.legend(loc='lower right') plt.plot([0,1],[0,1],'r--') plt.xlim([-0.1,1.2]) plt.ylim([-0.1,1.2]) plt.ylabel('True Positive Rate') plt.xlabel('False Positive Rate') plt.show() ``` But now I need to do it for the multiclass classification task. I read somewhere that I need to binarize the labels, but I really don't get how to calculate ROC for multiclass classification. 
Tips?", "response":"As people mentioned in comments you have to convert your problem into binary by using OneVsAll approach, so you'll have n_class number of ROC curves. A simple example: ``` from sklearn.metrics import roc_curve, auc from sklearn import datasets from sklearn.multiclass import OneVsRestClassifier from sklearn.svm import LinearSVC from sklearn.preprocessing import label_binarize from sklearn.model_selection import train_test_split import matplotlib.pyplot as plt iris = datasets.load_iris() X, y = iris.data, iris.target y = label_binarize(y, classes=[0,1,2]) n_classes = 3 # shuffle and split training and test sets X_train, X_test, y_train, y_test =\\ train_test_split(X, y, test_size=0.33, random_state=0) # classifier clf = OneVsRestClassifier(LinearSVC(random_state=0)) y_score = clf.fit(X_train, y_train).decision_function(X_test) # Compute ROC curve and ROC area for each class fpr = dict() tpr = dict() roc_auc = dict() for i in range(n_classes): fpr[i], tpr[i], _ = roc_curve(y_test[:, i], y_score[:, i]) roc_auc[i] = auc(fpr[i], tpr[i]) # Plot of a ROC curve for a specific class for i in range(n_classes): plt.figure() plt.plot(fpr[i], tpr[i], label='ROC curve (area = %0.2f)' % roc_auc[i]) plt.plot([0, 1], [0, 1], 'k--') plt.xlim([0.0, 1.0]) plt.ylim([0.0, 1.05]) plt.xlabel('False Positive Rate') plt.ylabel('True Positive Rate') plt.title('Receiver operating characteristic example') plt.legend(loc=\"lower right\") plt.show() ```", "best_answers_score":0.8, "library_name":"scikit-learn", "question_url":"https:\/\/stackoverflow.com\/questions\/45332410\/roc-for-multiclass-classification", "best_answers_votes":49, "question_length":1248, "response_length":1442 }, { "question":"sklearn doesn't have attribute 'datasets' I have started using sckikit-learn for my work. 
So I was going through the tutorial which gives standard procedure to load some datasets: ``` $ python >>> from sklearn import datasets >>> iris = datasets.load_iris() >>> digits = datasets.load_digits() ``` However, for my convenience, I tried loading the data in the following way: ``` In [1]: import sklearn In [2]: iris = sklearn.datasets.load_iris() ``` However, this throws following error: ``` --------------------------------------------------------------------------- AttributeError Traceback (most recent call last) in () ----> 1 iris = sklearn.datasets.load_iris() AttributeError: 'module' object has no attribute 'datasets' ``` However, if I use the apparently similar method: ``` In [3]: from sklearn import datasets In [4]: iris = datasets.load_iris() ``` It works without problem. In fact the following also works: ``` In [5]: iris = sklearn.datasets.load_iris() ``` I am completely confused about this. Am I missing something very trivial? What is the difference between the two approaches?", "response":"sklearn is a package. This answer said it very succinctly: when you import a package, only variables\/functions\/classes in the __init__.py file of that package are directly visible, not sub-packages or modules. datasets is a sub-package of sklearn. This is why this happens: ``` In [1]: import sklearn In [2]: sklearn.datasets --------------------------------------------------------------------------- AttributeError Traceback (most recent call last) in () ----> 1 sklearn.datasets AttributeError: module 'sklearn' has no attribute 'datasets' ``` However, the reason why this works: ``` In [3]: from sklearn import datasets In [4]: sklearn.datasets Out[4]: ``` is that when you load the sub-package datasets by doing from sklearn import datasets it is automatically added to the namespace of the package sklearn. This is one of the lesser-known \"traps\" of the Python import system. 
Also, note that if you look at the __init__.py for sklearn you will see 'datasets' as a member of __all__, but this only allows you to do: ``` In [1]: from sklearn import * In [2]: datasets Out[2]: ``` One last point to note is that if you inspect either sklearn or datasets you will see that, although they are packages, their type is module. This is because all packages are considered modules - however, not all modules are packages.", "best_answers_score":0.8, "library_name":"scikit-learn", "question_url":"https:\/\/stackoverflow.com\/questions\/41467570\/sklearn-doesnt-have-attribute-datasets", "best_answers_votes":64, "question_length":1097, "response_length":1321 }, { "question":"Get Confusion Matrix From a Keras Multiclass Model [duplicate] I am building a multiclass model with Keras. ``` model.compile(optimizer='adam', loss='categorical_crossentropy', metrics=['accuracy']) model.fit(X_train, y_train, batch_size=batch_size, epochs=epochs, verbose=1, callbacks=[checkpoint], validation_data=(X_test, y_test)) # starts training ``` Here is what my test data looks like (it's text data). ``` X_test Out[25]: array([[621, 139, 549, ..., 0, 0, 0], [621, 139, 543, ..., 0, 0, 0]]) y_test Out[26]: array([[0, 0, 1], [0, 1, 0]]) ``` After generating predictions... ``` predictions = model.predict(X_test) predictions Out[27]: array([[ 0.29071924, 0.2483743 , 0.46090645], [ 0.29566404, 0.45295066, 0.25138539]], dtype=float32) ``` I did the following to get the confusion matrix.
``` y_pred = (predictions > 0.5) confusion_matrix(y_test, y_pred) Traceback (most recent call last): File \"\", line 1, in confusion_matrix(y_test, y_pred) File \"\/Users\/abrahammathew\/anaconda3\/lib\/python3.6\/site-packages\/sklearn\/metrics\/classification.py\", line 252, in confusion_matrix raise ValueError(\"%s is not supported\" % y_type) ValueError: multilabel-indicator is not supported ``` However, I am getting the above error. How can I get a confusion matrix when doing a multiclass neural network in Keras?", "response":"Your input to confusion_matrix must be an array of int not one hot encodings. ``` matrix = metrics.confusion_matrix(y_test.argmax(axis=1), y_pred.argmax(axis=1)) ```", "best_answers_score":0.8, "library_name":"scikit-learn", "question_url":"https:\/\/stackoverflow.com\/questions\/50920908\/get-confusion-matrix-from-a-keras-multiclass-model", "best_answers_votes":58, "question_length":1437, "response_length":165 }, { "question":"sklearn : TFIDF Transformer : How to get tf-idf values of given words in document I used sklearn for calculating TFIDF (Term frequency inverse document frequency) values for documents using command as : ``` from sklearn.feature_extraction.text import CountVectorizer count_vect = CountVectorizer() X_train_counts = count_vect.fit_transform(documents) from sklearn.feature_extraction.text import TfidfTransformer tf_transformer = TfidfTransformer(use_idf=False).fit(X_train_counts) X_train_tf = tf_transformer.transform(X_train_counts) ``` X_train_tf is a scipy.sparse matrix of shape (2257, 35788). How can I get TF-IDF for words in a particular document? 
More specifically, how to get the words with the maximum TF-IDF values in a given document?", "response":"You can use TfidfVectorizer from sklearn: ``` from sklearn.feature_extraction.text import TfidfVectorizer import numpy as np from scipy.sparse.csr import csr_matrix # need this if you want to save tfidf_matrix tf = TfidfVectorizer(input='filename', analyzer='word', ngram_range=(1,6), min_df=0, stop_words='english', sublinear_tf=True) tfidf_matrix = tf.fit_transform(corpus) ``` The above tfidf_matrix has the TF-IDF values of all the documents in the corpus. This is a big sparse matrix. Now, ``` feature_names = tf.get_feature_names() ``` this gives you the list of all the tokens or n-grams or words. For the first document in your corpus, ``` doc = 0 feature_index = tfidf_matrix[doc,:].nonzero()[1] tfidf_scores = zip(feature_index, [tfidf_matrix[doc, x] for x in feature_index]) ``` Let's print them: ``` for w, s in [(feature_names[i], s) for (i, s) in tfidf_scores]: print(w, s) ```", "best_answers_score":0.8, "library_name":"scikit-learn", "question_url":"https:\/\/stackoverflow.com\/questions\/34449127\/sklearn-tfidf-transformer-how-to-get-tf-idf-values-of-given-words-in-documen", "best_answers_votes":81, "question_length":735, "response_length":889 }, { "question":"How to see top n entries of term-document matrix after tfidf in scikit-learn I am new to scikit-learn, and I was using TfidfVectorizer to find the tfidf values of terms in a set of documents. I used the following code to obtain the same. ``` vectorizer = TfidfVectorizer(stop_words=u'english',ngram_range=(1,5),lowercase=True) X = vectorizer.fit_transform(lectures) ``` Now if I print X, I am able to see all the entries in the matrix, but how can I find the top n entries based on tfidf score? In addition to that, is there any method that will help me to find the top n entries based on tfidf score per ngram, i.e.
top entries among unigram,bigram,trigram and so on?", "response":"Since version 0.15, the global term weighting of the features learnt by a TfidfVectorizer can be accessed through the attribute idf_, which will return an array of length equal to the feature dimension. Sort the features by this weighting to get the top weighted features: ``` from sklearn.feature_extraction.text import TfidfVectorizer import numpy as np lectures = [\"this is some food\", \"this is some drink\"] vectorizer = TfidfVectorizer() X = vectorizer.fit_transform(lectures) indices = np.argsort(vectorizer.idf_)[::-1] features = vectorizer.get_feature_names() top_n = 2 top_features = [features[i] for i in indices[:top_n]] print top_features ``` Output: ``` [u'food', u'drink'] ``` The second problem of getting the top features by ngram can be done using the same idea, with some extra steps of splitting the features into different groups: ``` from sklearn.feature_extraction.text import TfidfVectorizer from collections import defaultdict lectures = [\"this is some food\", \"this is some drink\"] vectorizer = TfidfVectorizer(ngram_range=(1,2)) X = vectorizer.fit_transform(lectures) features_by_gram = defaultdict(list) for f, w in zip(vectorizer.get_feature_names(), vectorizer.idf_): features_by_gram[len(f.split(' '))].append((f, w)) top_n = 2 for gram, features in features_by_gram.iteritems(): top_features = sorted(features, key=lambda x: x[1], reverse=True)[:top_n] top_features = [f[0] for f in top_features] print '{}-gram top:'.format(gram), top_features ``` Output: ``` 1-gram top: [u'drink', u'food'] 2-gram top: [u'some drink', u'some food'] ```", "best_answers_score":0.8, "library_name":"scikit-learn", "question_url":"https:\/\/stackoverflow.com\/questions\/25217510\/how-to-see-top-n-entries-of-term-document-matrix-after-tfidf-in-scikit-learn", "best_answers_votes":65, "question_length":654, "response_length":1567 }, { "question":"What is the difference between OneVsRestClassifier and 
MultiOutputClassifier in scikit-learn? Can someone please explain (with an example maybe) what is the difference between OneVsRestClassifier and MultiOutputClassifier in scikit-learn? I've read the documentation and I've understood that we use: OneVsRestClassifier - when we want to do multiclass or multilabel classification, and its strategy consists of fitting one classifier per class. For each classifier, the class is fitted against all the other classes. (This is pretty clear, and it means that the problem of multiclass\/multilabel classification is broken down into multiple binary classification problems.) MultiOutputClassifier - when we want to do multi-target classification (what is this?), and its strategy consists of fitting one classifier per target (what does target mean there?) I've already used OneVsRestClassifier for multilabel classification and I can understand how it works, but then I found MultiOutputClassifier and can't understand how it works differently from OneVsRestClassifier.", "response":"Multiclass classification To better illustrate the differences, let us assume that your goal is that of classifying SO questions into n_classes different, mutually exclusive classes. For the sake of simplicity in this example we will only consider four classes, namely 'Python', 'Java', 'C++' and 'Other language'. Let us assume that you have a dataset formed by just six SO questions, and the class labels of those questions are stored in an array y as follows: ``` import numpy as np y = np.asarray(['Java', 'C++', 'Other language', 'Python', 'C++', 'Python']) ``` The situation described above is usually referred to as multiclass classification (also known as multinomial classification). In order to fit the classifier and validate the model through the scikit-learn library you need to transform the text class labels into numerical labels.
To accomplish that you could use LabelEncoder: ``` from sklearn.preprocessing import LabelEncoder le = LabelEncoder() y_numeric = le.fit_transform(y) ``` This is how the labels of your dataset are encoded: ``` In [220]: y_numeric Out[220]: array([1, 0, 2, 3, 0, 3], dtype=int64) ``` where those numbers denote indices of the following array: ``` In [221]: le.classes_ Out[221]: array(['C++', 'Java', 'Other language', 'Python'], dtype='|S14') ``` An important particular case is when there are just two classes, i.e. n_classes = 2. This is usually called binary classification. Multilabel classification Let us now suppose that you wish to perform such multiclass classification using a pool of n_classes binary classifiers, being n_classes the number of different classes. Each of these binary classifiers makes a decision on whether an item is of a specific class or not. In this case you cannot encode class labels as integer numbers from 0 to n_classes - 1, you need to create a 2-dimensional indicator matrix instead. Consider that sample n is of class k. Then, the [n, k] entry of the indicator matrix is 1 and the rest of the elements in row n are 0. It is important to note that if the classes are not mutually exclusive there can be multiple 1's in a row. 
This approach is named multilabel classification and can be easily implemented through MultiLabelBinarizer: ``` from sklearn.preprocessing import MultiLabelBinarizer mlb = MultiLabelBinarizer() y_indicator = mlb.fit_transform(y[:, None]) ``` The indicator looks like this: ``` In [225]: y_indicator Out[225]: array([[0, 1, 0, 0], [1, 0, 0, 0], [0, 0, 1, 0], [0, 0, 0, 1], [1, 0, 0, 0], [0, 0, 0, 1]]) ``` and the column numbers where 1's are actually indices of this array: ``` In [226]: mlb.classes_ Out[226]: array(['C++', 'Java', 'Other language', 'Python'], dtype=object) ``` Multioutput classification What if you want to classify a particular SO question according to two different criteria simultaneously, for instance language and application? In this case you intend to do multioutput classification. For the sake of simplicity I will consider only three application classes, namely 'Computer Vision', 'Speech Processing' and 'Other application'. The label array of your dataset should be 2-dimensional: ``` y2 = np.asarray([['Java', 'Computer Vision'], ['C++', 'Speech Recognition'], ['Other language', 'Computer Vision'], ['Python', 'Other Application'], ['C++', 'Speech Recognition'], ['Python', 'Computer Vision']]) ``` Again, we need to transform text class labels into numeric labels. As far as I know this functionality is not implemented in scikit-learn yet, so you will need to write your own code. 
This thread describes some clever ways to do that, but for the purposes of this post the following one-liner should suffice: ``` y_multi = np.vstack((le.fit_transform(y2[:, i]) for i in range(y2.shape[1]))).T ``` The encoded labels look like this: ``` In [229]: y_multi Out[229]: array([[1, 0], [0, 2], [2, 0], [3, 1], [0, 2], [3, 0]], dtype=int64) ``` And the meaning of the values in each column can be inferred from the following arrays: ``` In [230]: le.fit(y2[:, 0]).classes_ Out[230]: array(['C++', 'Java', 'Other language', 'Python'], dtype='|S18') In [231]: le.fit(y2[:, 1]).classes_ Out[231]: array(['Computer Vision', 'Other Application', 'Speech Recognition'], dtype='|S18') ```", "best_answers_score":0.8, "library_name":"scikit-learn", "question_url":"https:\/\/stackoverflow.com\/questions\/42819460\/what-is-the-difference-between-onevsrestclassifier-and-multioutputclassifier-in", "best_answers_votes":39, "question_length":1060, "response_length":4214 }, { "question":"Understanding max_features parameter in RandomForestRegressor While constructing each tree in the random forest using bootstrapped samples, for each terminal node, we select m variables at random from p variables to find the best split (p is the total number of features in your data). My questions (for RandomForestRegressor) are: 1) What does max_features correspond to (m or p or something else)? 2) Are m variables selected at random from max_features variables (what is the value of m)? 3) If max_features corresponds to m, then why would I want to set it equal to p for regression (the default)? Where is the randomness with this setting (i.e., how is it different from bagging)? Thanks.", "response":"Straight from the documentation: [max_features] is the size of the random subsets of features to consider when splitting a node. So max_features is what you call m. 
When max_features=\"auto\", m = p and no feature subset selection is performed in the trees, so the \"random forest\" is actually a bagged ensemble of ordinary regression trees. The docs go on to say that Empirical good default values are max_features=n_features for regression problems, and max_features=sqrt(n_features) for classification tasks By setting max_features differently, you'll get a \"true\" random forest.", "best_answers_score":0.8, "library_name":"scikit-learn", "question_url":"https:\/\/stackoverflow.com\/questions\/23939750\/understanding-max-features-parameter-in-randomforestregressor", "best_answers_votes":27, "question_length":693, "response_length":579 }, { "question":"sklearn metrics for multiclass classification I have performed GaussianNB classification using sklearn. I tried to calculate the metrics using the following code: ``` print accuracy_score(y_test, y_pred) print precision_score(y_test, y_pred) ``` Accuracy score is working correctly but precision score calculation is showing error as: ValueError: Target is multiclass but average='binary'. Please choose another average setting. As target is multiclass, can i have the metric scores of precision, recall etc.?", "response":"The function call precision_score(y_test, y_pred) is equivalent to precision_score(y_test, y_pred, pos_label=1, average='binary'). The documentation (http:\/\/scikit-learn.org\/stable\/modules\/generated\/sklearn.metrics.precision_score.html) tells us: 'binary': Only report results for the class specified by pos_label. This is applicable only if targets (y_{true,pred}) are binary. So the problem is that your labels are not binary, but probably one-hot encoded. 
Fortunately, there are other options which should work with your data: precision_score(y_test, y_pred, average=None) will return the precision scores for each class, while precision_score(y_test, y_pred, average='micro') will return the total ratio of tp\/(tp + fp) The pos_label argument will be ignored if you choose another average option than binary.", "best_answers_score":0.8, "library_name":"scikit-learn", "question_url":"https:\/\/stackoverflow.com\/questions\/45890328\/sklearn-metrics-for-multiclass-classification", "best_answers_votes":67, "question_length":509, "response_length":812 }, { "question":"Is sklearn.metrics.mean_squared_error the larger the better (negated)? In general, the mean_squared_error is the smaller the better. When I am using the sklearn metrics package, it says in the document pages: http:\/\/scikit-learn.org\/stable\/modules\/model_evaluation.html All scorer objects follow the convention that higher return values are better than lower return values. Thus metrics which measure the distance between the model and the data, like metrics.mean_squared_error, are available as neg_mean_squared_error which return the negated value of the metric. and However, if I go to: http:\/\/scikit-learn.org\/stable\/modules\/generated\/sklearn.metrics.mean_squared_error.html#sklearn.metrics.mean_squared_error It says it is the Mean squared error regression loss, didn't say it is negated. And if I looked at the source code and checked the example there:https:\/\/github.com\/scikit-learn\/scikit-learn\/blob\/a24c8b46\/sklearn\/metrics\/regression.py#L183 it is doing the normal mean squared error, i.e. the smaller the better. So I am wondering if I missed anything about the negated part in the document. Thanks!", "response":"The actual function \"mean_squared_error\" doesn't have anything about the negative part. But the function implemented when you try 'neg_mean_squared_error' will return a negated version of the score. 
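The sign convention itself can be sketched with plain numpy (a hand-rolled MSE for illustration, not the sklearn scorer object):

```python
import numpy as np

y_true = np.array([3.0, -0.5, 2.0, 7.0])
y_pred = np.array([2.5, 0.0, 2.0, 8.0])

mse = np.mean((y_true - y_pred) ** 2)  # the plain loss: smaller is better
neg_mse = -mse                         # the scorer convention: greater is better
print(mse, neg_mse)  # 0.375 -0.375
```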
Please check how it's defined in the source code: ``` neg_mean_squared_error_scorer = make_scorer(mean_squared_error, greater_is_better=False) ``` Observe how the param greater_is_better is set to False. Now all these scores\/losses are used in various other things like cross_val_score, cross_val_predict, GridSearchCV etc. For example, in the case of 'accuracy_score' or 'f1_score', the higher score is better, but in the case of losses (errors), the lower score is better. To handle them both in the same way, it returns the negative. So this utility is made for handling the scores and losses in the same way without changing the source code for the specific loss or score. So, you did not miss anything. You just need to take care of the scenario where you want to use the loss function. If you only want to calculate the mean_squared_error you can use mean_squared_error only. But if you want to use it to tune your models, or cross_validate using the utilities present in Scikit, use 'neg_mean_squared_error'. Maybe add some details about that and I will explain more.", "best_answers_score":0.8, "library_name":"scikit-learn", "question_url":"https:\/\/stackoverflow.com\/questions\/48244219\/is-sklearn-metrics-mean-squared-error-the-larger-the-better-negated", "best_answers_votes":59, "question_length":1111, "response_length":1275 }, { "question":"In the LinearRegression method in sklearn, what exactly is the fit_intercept parameter doing? [closed] In the sklearn.linear_model.LinearRegression method, there is a parameter that is fit_intercept = TRUE or fit_intercept = FALSE. I am wondering if we set it to TRUE, does it add an additional intercept column of all 1's to your dataset?
If I already have a dataset with a column of 1's, does fit_intercept = FALSE account for that or does it force it to fit a zero intercept model? Update: It seems people do not get my question. The question is, what IF I had already a column of 1's in my dataset of predictors (the 1's are for the intercept). THEN, if I use fit_intercept = FALSE, will it remove the column of 1's? if I use fit_intercept = TRUE, will it add an EXTRA column of 1's?", "response":"fit_intercept=False sets the y-intercept to 0. If fit_intercept=True, the y-intercept will be determined by the line of best fit. ``` from sklearn.linear_model import LinearRegression from sklearn.datasets import make_regression import numpy as np import matplotlib.pyplot as plt bias = 100 X = np.arange(1000).reshape(-1,1) y_true = np.ravel(X.dot(0.3) + bias) noise = np.random.normal(0, 60, 1000) y = y_true + noise lr_fi_true = LinearRegression(fit_intercept=True) lr_fi_false = LinearRegression(fit_intercept=False) lr_fi_true.fit(X, y) lr_fi_false.fit(X, y) print('Intercept when fit_intercept=True : {:.5f}'.format(lr_fi_true.intercept_)) print('Intercept when fit_intercept=False : {:.5f}'.format(lr_fi_false.intercept_)) lr_fi_true_yhat = np.dot(X, lr_fi_true.coef_) + lr_fi_true.intercept_ lr_fi_false_yhat = np.dot(X, lr_fi_false.coef_) + lr_fi_false.intercept_ plt.scatter(X, y, label='Actual points') plt.plot(X, lr_fi_true_yhat, 'r--', label='fit_intercept=True') plt.plot(X, lr_fi_false_yhat, 'r-', label='fit_intercept=False') plt.legend() plt.vlines(0, 0, y.max()) plt.hlines(bias, X.min(), X.max()) plt.hlines(0, X.min(), X.max()) plt.show() ``` This example prints: ``` Intercept when fit_intercept=True : 100.32210 Intercept when fit_intercept=False : 0.00000 ``` Visually it becomes clear what fit_intercept does. When fit_intercept=True, the line of best fit is allowed to \"fit\" the y-axis (close to 100 in this example). When fit_intercept=False, the intercept is forced to the origin (0, 0). 
What happens if I include a column of ones or zeros and set fit_intercept to True or False? Below shows an example of how to inspect this. ``` from sklearn.linear_model import LinearRegression from sklearn.datasets import make_regression import numpy as np import matplotlib.pyplot as plt np.random.seed(1) bias = 100 X = np.arange(1000).reshape(-1,1) y_true = np.ravel(X.dot(0.3) + bias) noise = np.random.normal(0, 60, 1000) y = y_true + noise # with column of ones X_with_ones = np.hstack((np.ones((X.shape[0], 1)), X)) for b,data in ((True, X), (False, X), (True, X_with_ones), (False, X_with_ones)): lr = LinearRegression(fit_intercept=b) lr.fit(data, y) print(lr.intercept_, lr.coef_) ``` Take-away: ``` # fit_intercept=True, no column of zeros or ones 104.156765787 [ 0.29634031] # fit_intercept=False, no column of zeros or ones 0.0 [ 0.45265361] # fit_intercept=True, column of zeros or ones 104.156765787 [ 0. 0.29634031] # fit_intercept=False, column of zeros or ones 0.0 [ 104.15676579 0.29634031] ```", "best_answers_score":0.8, "library_name":"scikit-learn", "question_url":"https:\/\/stackoverflow.com\/questions\/46779605\/in-the-linearregression-method-in-sklearn-what-exactly-is-the-fit-intercept-par", "best_answers_votes":60, "question_length":1030, "response_length":2529 }, { "question":"GridSearchCV - XGBoost - Early Stopping i am trying to do hyperparemeter search with using scikit-learn's GridSearchCV on XGBoost. During gridsearch i'd like it to early stop, since it reduce search time drastically and (expecting to) have better results on my prediction\/regression task. I am using XGBoost via its Scikit-Learn API. 
``` model = xgb.XGBRegressor() GridSearchCV(model, paramGrid, verbose=verbose ,fit_params={'early_stopping_rounds':42}, cv=TimeSeriesSplit(n_splits=cv).get_n_splits([trainX, trainY]), n_jobs=n_jobs, iid=iid).fit(trainX,trainY) ``` I tried to give early stopping parameters with using fit_params, but then it throws this error which is basically because of lack of validation set which is required for early stopping: ``` \/opt\/anaconda\/anaconda3\/lib\/python3.5\/site-packages\/xgboost\/callback.py in callback(env=XGBoostCallbackEnv(model= 192 score = env.evaluation_result_list[-1][1] score = undefined env.evaluation_result_list = [] 193 if len(state) == 0: 194 init(env) 195 best_score = state['best_score'] 196 best_iteration = state['best_iteration'] ``` How can i apply GridSearch on XGBoost with using early_stopping_rounds? note: model is working without gridsearch, also GridSearch works without 'fit_params={'early_stopping_rounds':42}", "response":"When using early_stopping_rounds you also have to give eval_metric and eval_set as input parameter for the fit method. Early stopping is done via calculating the error on an evaluation set. The error has to decrease every early_stopping_rounds otherwise the generation of additional trees is stopped early. See the documentation of xgboosts fit method for details. 
Here you see a minimal fully working example: ``` import xgboost as xgb from sklearn.model_selection import GridSearchCV from sklearn.model_selection import TimeSeriesSplit cv = 2 trainX= [[1], [2], [3], [4], [5]] trainY = [1, 2, 3, 4, 5] # these are the evaluation sets testX = trainX testY = trainY paramGrid = {\"subsample\" : [0.5, 0.8]} fit_params={\"early_stopping_rounds\":42, \"eval_metric\" : \"mae\", \"eval_set\" : [[testX, testY]]} model = xgb.XGBRegressor() gridsearch = GridSearchCV(model, paramGrid, verbose=1 , fit_params=fit_params, cv=TimeSeriesSplit(n_splits=cv).get_n_splits([trainX,trainY])) gridsearch.fit(trainX,trainY) ```", "best_answers_score":0.8, "library_name":"scikit-learn", "question_url":"https:\/\/stackoverflow.com\/questions\/42993550\/gridsearchcv-xgboost-early-stopping", "best_answers_votes":23, "question_length":1274, "response_length":1001 }, { "question":"Multilabel-indicator is not supported for confusion matrix multilabel-indicator is not supported is the error message I get, when trying to run: confusion_matrix(y_test, predictions) y_test is a DataFrame which is of shape: ``` Horse | Dog | Cat 1 0 0 0 1 0 0 1 0 ... ... ... ``` predictions is a numpy array: ``` [[1, 0, 0], [0, 1, 0], [0, 1, 0]] ``` I've searched a bit for the error message, but haven't really found something I could apply. Any hints?", "response":"No, your input to confusion_matrix must be a list of predictions, not OHEs (one hot encodings). Call argmax on your y_test and y_pred, and you should get what you expect. 
``` confusion_matrix( y_test.values.argmax(axis=1), predictions.argmax(axis=1)) array([[1, 0], [0, 2]]) ```", "best_answers_score":0.8, "library_name":"scikit-learn", "question_url":"https:\/\/stackoverflow.com\/questions\/46953967\/multilabel-indicator-is-not-supported-for-confusion-matrix", "best_answers_votes":79, "question_length":455, "response_length":278 }, { "question":"How do I use a TimeSeriesSplit with a GridSearchCV object to tune a model in scikit-learn? I've searched the sklearn docs for TimeSeriesSplit and the docs for cross-validation but I haven't been able to find a working example. I'm using sklearn version 0.19. This is my setup ``` import xgboost as xgb from sklearn.model_selection import TimeSeriesSplit from sklearn.grid_search import GridSearchCV import numpy as np X = np.array([[4, 5, 6, 1, 0, 2], [3.1, 3.5, 1.0, 2.1, 8.3, 1.1]]).T y = np.array([1, 6, 7, 1, 2, 3]) tscv = TimeSeriesSplit(n_splits=2) for train, test in tscv.split(X): print(train, test) ``` gives: ``` [0 1] [2 3] [0 1 2 3] [4 5] ``` If I try: ``` model = xgb.XGBRegressor() param_search = {'max_depth' : [3, 5]} my_cv = TimeSeriesSplit(n_splits=2).split(X) gsearch = GridSearchCV(estimator=model, cv=my_cv, param_grid=param_search) gsearch.fit(X, y) ``` it gives: TypeError: object of type 'generator' has no len() I get the problem: GridSearchCV is trying to call len(cv) but my_cv is an iterator without length. However, the docs for GridSearchCV state I can use a int, cross-validation generator or an iterable, optional I tried using TimeSeriesSplit without the .split(X) but it still didn't work. I'm sure I'm overlooking something simple, thanks!!", "response":"It turns out the problem was I was using GridSearchCV from sklearn.grid_search, which is deprecated. 
Importing GridSearchCV from sklearn.model_selection resolved the problem: ``` import xgboost as xgb from sklearn.model_selection import TimeSeriesSplit, GridSearchCV import numpy as np X = np.array([[4, 5, 6, 1, 0, 2], [3.1, 3.5, 1.0, 2.1, 8.3, 1.1]]).T y = np.array([1, 6, 7, 1, 2, 3]) model = xgb.XGBRegressor() param_search = {'max_depth' : [3, 5]} tscv = TimeSeriesSplit(n_splits=2) gsearch = GridSearchCV(estimator=model, cv=tscv, param_grid=param_search) gsearch.fit(X, y) ``` gives: ``` GridSearchCV(cv=, error_score='raise', estimator=XGBRegressor(base_score=0.5, colsample_bylevel=1, colsample_bytree=1, gamma=0, learning_rate=0.1, max_delta_step=0, max_depth=3, min_child_weight=1, missing=None, n_estimators=100, nthread=-1, objective='reg:linear', reg_alpha=0, reg_lambda=1, scale_pos_weight=1, seed=0, silent=True, subsample=1), fit_params=None, iid=True, n_jobs=1, param_grid={'max_depth': [3, 5]}, pre_dispatch='2*n_jobs', refit=True, return_train_score=True, scoring=None, verbose=0) ```", "best_answers_score":0.8, "library_name":"scikit-learn", "question_url":"https:\/\/stackoverflow.com\/questions\/46732748\/how-do-i-use-a-timeseriessplit-with-a-gridsearchcv-object-to-tune-a-model-in-sci", "best_answers_votes":60, "question_length":1275, "response_length":1104 }, { "question":"What is the difference between SVC and SVM in scikit-learn? From the documentation scikit-learn implements SVC, NuSVC and LinearSVC which are classes capable of performing multi-class classification on a dataset. By the other hand I also read about that scikit learn also uses libsvm for support vector machine algorithm. I'm a bit confused about what's the difference between SVC and libsvm versions, by now I guess the difference is that SVC is the support vector machine algorithm fot the multiclass problem and libsvm is for the binary class problem. Could anybody help me to understad the difference between this?.", "response":"They are just different implementations of the same algorithm. 
The SVM module (SVC, NuSVC, etc) is a wrapper around the libsvm library and supports different kernels while LinearSVC is based on liblinear and only supports a linear kernel. So: ``` SVC(kernel = 'linear') ``` is in theory \"equivalent\" to: ``` LinearSVC() ``` Because the implementations are different in practice you will get different results, the most important ones being that LinearSVC only supports a linear kernel, is faster and can scale a lot better.", "best_answers_score":0.8, "library_name":"scikit-learn", "question_url":"https:\/\/stackoverflow.com\/questions\/27912872\/what-is-the-difference-between-svc-and-svm-in-scikit-learn", "best_answers_votes":44, "question_length":619, "response_length":523 }, { "question":"How is scikit-learn cross_val_predict accuracy score calculated? Does the cross_val_predict (see doc, v0.18) with k-fold method as shown in the code below calculate accuracy for each fold and average them finally or not? ``` cv = KFold(len(labels), n_folds=20) clf = SVC() ypred = cross_val_predict(clf, td, labels, cv=cv) accuracy = accuracy_score(labels, ypred) print accuracy ```", "response":"No, it does not! According to cross validation doc page, cross_val_predict does not return any scores but only the labels based on a certain strategy which is described here: The function cross_val_predict has a similar interface to cross_val_score, but returns, for each element in the input, the prediction that was obtained for that element when it was in the test set. Only cross-validation strategies that assign all elements to a test set exactly once can be used (otherwise, an exception is raised). And therefore by calling accuracy_score(labels, ypred) you are just calculating accuracy scores of labels predicted by aforementioned particular strategy compared to the true labels. 
This again is specified in the same documentation page: These predictions can then be used to evaluate the classifier: ``` predicted = cross_val_predict(clf, iris.data, iris.target, cv=10) metrics.accuracy_score(iris.target, predicted) ``` Note that the result of this computation may be slightly different from those obtained using cross_val_score as the elements are grouped in different ways. If you need accuracy scores of different folds you should try: ``` >>> scores = cross_val_score(clf, X, y, cv=cv) >>> scores array([ 0.96..., 1. ..., 0.96..., 0.96..., 1. ]) ``` and then for the mean accuracy of all folds use scores.mean(): ``` >>> print(\"Accuracy: %0.2f (+\/- %0.2f)\" % (scores.mean(), scores.std() * 2)) Accuracy: 0.98 (+\/- 0.03) ``` How to calculate Cohen kappa coefficient and confusion matrix for each fold? For calculating Cohen Kappa coefficient and confusion matrix I assumed you mean kappa coefficient and confusion matrix between true labels and each fold's predicted labels: ``` from sklearn.model_selection import KFold from sklearn.svm.classes import SVC from sklearn.metrics.classification import cohen_kappa_score from sklearn.metrics import confusion_matrix cv = KFold(len(labels), n_folds=20) clf = SVC() for train_index, test_index in cv.split(X): clf.fit(X[train_index], labels[train_index]) ypred = clf.predict(X[test_index]) kappa_score = cohen_kappa_score(labels[test_index], ypred) confusion_matrix = confusion_matrix(labels[test_index], ypred) ``` What does cross_val_predict return? It uses KFold to split the data into k parts and then for i=1..k iterations: takes i'th part as the test data and all other parts as training data trains the model with training data (all parts except i'th) then by using this trained model, predicts labels for i'th part (test data) In each iteration, label of i'th part of data gets predicted. In the end cross_val_predict merges all partially predicted labels and returns them as the final result.
This code shows this process step by step: ``` X = np.array([[0], [1], [2], [3], [4], [5]]) labels = np.array(['a', 'a', 'a', 'b', 'b', 'b']) cv = KFold(len(labels), n_folds=3) clf = SVC() ypred_all = np.chararray((labels.shape)) i = 1 for train_index, test_index in cv.split(X): print(\"iteration\", i, \":\") print(\"train indices:\", train_index) print(\"train data:\", X[train_index]) print(\"test indices:\", test_index) print(\"test data:\", X[test_index]) clf.fit(X[train_index], labels[train_index]) ypred = clf.predict(X[test_index]) print(\"predicted labels for data of indices\", test_index, \"are:\", ypred) ypred_all[test_index] = ypred print(\"merged predicted labels:\", ypred_all) i = i+1 print(\"=====================================\") y_cross_val_predict = cross_val_predict(clf, X, labels, cv=cv) print(\"predicted labels by cross_val_predict:\", y_cross_val_predict) ``` The result is: ``` iteration 1 : train indices: [2 3 4 5] train data: [[2] [3] [4] [5]] test indices: [0 1] test data: [[0] [1]] predicted labels for data of indices [0 1] are: ['b' 'b'] merged predicted labels: ['b' 'b' '' '' '' ''] ===================================== iteration 2 : train indices: [0 1 4 5] train data: [[0] [1] [4] [5]] test indices: [2 3] test data: [[2] [3]] predicted labels for data of indices [2 3] are: ['a' 'b'] merged predicted labels: ['b' 'b' 'a' 'b' '' ''] ===================================== iteration 3 : train indices: [0 1 2 3] train data: [[0] [1] [2] [3]] test indices: [4 5] test data: [[4] [5]] predicted labels for data of indices [4 5] are: ['a' 'a'] merged predicted labels: ['b' 'b' 'a' 'b' 'a' 'a'] ===================================== predicted labels by cross_val_predict: ['b' 'b' 'a' 'b' 'a' 'a'] ```", "best_answers_score":0.8, "library_name":"scikit-learn", "question_url":"https:\/\/stackoverflow.com\/questions\/41458834\/how-is-scikit-learn-cross-val-predict-accuracy-score-calculated", "best_answers_votes":113, "question_length":382, 
"response_length":4379 }, { "question":"Cache entry deserialization failed, entry ignored ``` C:\\Users\\deypr>pip3 install sklearn Collecting sklearn Cache entry deserialization failed, entry ignored Retrying (Retry(total=4, connect=None, read=None, redirect=None, status=None)) after connection broken by 'SSLError(SSLError(1, '[SSL: TLSV1_ALERT_ACCESS_DENIED] tlsv1 alert access denied (_ssl.c:777)'),)': \/simple\/sklearn\/ Retrying (Retry(total=3, connect=None, read=None, redirect=None, status=None)) after connection broken by 'SSLError(SSLError(1, '[SSL: TLSV1_ALERT_ACCESS_DENIED] tlsv1 alert access denied (_ssl.c:777)'),)': \/simple\/sklearn\/ Retrying (Retry(total=2, connect=None, read=None, redirect=None, status=None)) after connection broken by 'SSLError(SSLError(1, '[SSL: TLSV1_ALERT_ACCESS_DENIED] tlsv1 alert access denied (_ssl.c:777)'),)': \/simple\/sklearn\/ Retrying (Retry(total=1, connect=None, read=None, redirect=None, status=None)) after connection broken by 'SSLError(SSLError(1, '[SSL: TLSV1_ALERT_ACCESS_DENIED] tlsv1 alert access denied (_ssl.c:777)'),)': \/simple\/sklearn\/ Retrying (Retry(total=0, connect=None, read=None, redirect=None, status=None)) after connection broken by 'SSLError(SSLError(1, '[SSL: TLSV1_ALERT_ACCESS_DENIED] tlsv1 alert access denied (_ssl.c:777)'),)': \/simple\/sklearn\/ Could not fetch URL https:\/\/pypi.python.org\/simple\/sklearn\/: There was a problem confirming the ssl certificate: HTTPSConnectionPool(host='pypi.python.org', port=443): Max retries exceeded with url: \/simple\/sklearn\/ (Caused by SSLError(SSLError(1, '[SSL: TLSV1_ALERT_ACCESS_DENIED] tlsv1 alert access denied (_ssl.c:777)'),)) - skipping Could not find a version that satisfies the requirement sklearn (from versions: ) No matching distribution found for sklearn ``` I am getting this error whenever trying to install any python3 package. What could be the possible reasons? 
How to fix it ?", "response":"Regarding the error\/warning message in the question's title: Cache entry deserialization failed, entry ignored You can fix it by removing the pip cache: ``` pip cache purge ``` which is equivalent to: ``` rm -rf ~\/.cache\/pip ```", "best_answers_score":0.8, "library_name":"scikit-learn", "question_url":"https:\/\/stackoverflow.com\/questions\/49671215\/cache-entry-deserialization-failed-entry-ignored", "best_answers_votes":54, "question_length":1868, "response_length":228 }, { "question":"Adding words to scikit-learn's CountVectorizer's stop list Scikit-learn's CountVectorizer class lets you pass a string 'english' to the argument stop_words. I want to add some things to this predefined list. Can anyone tell me how to do this?", "response":"According to the source code for sklearn.feature_extraction.text, the full list (actually a frozenset, from stop_words) of ENGLISH_STOP_WORDS is exposed through __all__. Therefore if you want to use that list plus some more items, you could do something like: ``` from sklearn.feature_extraction import text stop_words = text.ENGLISH_STOP_WORDS.union(my_additional_stop_words) ``` (where my_additional_stop_words is any sequence of strings) and use the result as the stop_words argument. This input to CountVectorizer.__init__ is parsed by _check_stop_list, which will pass the new frozenset straight through.", "best_answers_score":0.8, "library_name":"scikit-learn", "question_url":"https:\/\/stackoverflow.com\/questions\/24386489\/adding-words-to-scikit-learns-countvectorizers-stop-list", "best_answers_votes":67, "question_length":242, "response_length":609 }, { "question":"How to write a custom estimator in sklearn and use cross-validation on it? I would like to check the prediction error of a new method trough cross-validation. I would like to know if I can pass my method to the cross-validation function of sklearn and in case how. I would like something like sklearn.cross_validation(cv=10).mymethod. 
I also need to know how to define mymethod: should it be a function, and what should its inputs and outputs be? For example, we can consider as mymethod an implementation of the least squares estimator (of course not the one in sklearn). I found this tutorial link but it is not very clear to me. In the documentation they use ``` >>> import numpy as np >>> from sklearn import cross_validation >>> from sklearn import datasets >>> from sklearn import svm >>> iris = datasets.load_iris() >>> iris.data.shape, iris.target.shape ((150, 4), (150,)) >>> clf = svm.SVC(kernel='linear', C=1) >>> scores = cross_validation.cross_val_score( ... clf, iris.data, iris.target, cv=5) ... >>> scores ``` But the problem is that they are using as the estimator clf, which is obtained by a function built into sklearn. How should I define my own estimator so that I can pass it to the cross_validation.cross_val_score function? So, for example, suppose a simple estimator that uses a linear model $y=x\\beta$ where beta is estimated as X[1,:]+alpha, where alpha is a parameter. How should I complete the code? ``` class my_estimator(): def fit(X,y): beta=X[1,:]+alpha #where can I pass alpha to the function? return beta def scorer(estimator, X, y) #what should the scorer function compute? return ?????
``` With the following code I received an error: ``` class my_estimator(): def fit(X, y, **kwargs): #alpha = kwargs['alpha'] beta=X[1,:]#+alpha return beta ``` ``` >>> cv=cross_validation.cross_val_score(my_estimator,x,y,scoring=\"mean_squared_error\") Traceback (most recent call last): File \"\", line 1, in File \"C:\\Python27\\lib\\site-packages\\scikit_learn-0.14.1-py2.7-win32.egg\\sklearn\\cross_validation.py\", line 1152, in cross_val_score for train, test in cv) File \"C:\\Python27\\lib\\site-packages\\scikit_learn-0.14.1-py2.7-win32.egg\\sklearn\\externals\\joblib\\parallel.py\", line 516, in __call__ for function, args, kwargs in iterable: File \"C:\\Python27\\lib\\site-packages\\scikit_learn-0.14.1-py2.7-win32.egg\\sklearn\\cross_validation.py\", line 1152, in for train, test in cv) File \"C:\\Python27\\lib\\site-packages\\scikit_learn-0.14.1-py2.7-win32.egg\\sklearn\\base.py\", line 43, in clone % (repr(estimator), type(estimator))) TypeError: Cannot clone object '' (type ): it does not seem to be a scikit-learn estimator a it does not implement a 'get_params' methods. >>> ```", "response":"The answer also lies in sklearn's documentation. You need to define two things: an estimator that implements the fit(X, y) function, X being the matrix with inputs and y being the vector of outputs a scorer function, or callable object that can be used with: scorer(estimator, X, y) and returns the score of given model Referring to your example: first of all, scorer shouldn't be a method of the estimator, it's a different notion. Just create a callable: ``` def scorer(estimator, X, y) return ????? # compute whatever you want, it's up to you to define # what does it mean that the given estimator is \"good\" or \"bad\" ``` Or even a more simple solution: you can pass a string 'mean_squared_error' or 'accuracy' (full list available in this part of the documentation) to cross_val_score function to use a predefined scorer. 
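As a quick illustration (a minimal sketch, not part of the original answer — it assumes a recent scikit-learn, where the squared-error scoring string is spelled 'neg_mean_squared_error' and custom estimators should subclass BaseEstimator so that get_params\/set_params come for free):

```python
import numpy as np
from sklearn.base import BaseEstimator, RegressorMixin
from sklearn.model_selection import cross_val_score

class MeanRegressor(BaseEstimator, RegressorMixin):
    """Toy estimator: always predicts the training-set mean plus an offset."""
    def __init__(self, offset=0.0):
        self.offset = offset  # stored as-is so BaseEstimator can clone us

    def fit(self, X, y):
        self.mean_ = np.mean(y)
        return self  # fit must return self

    def predict(self, X):
        return np.full(len(X), self.mean_ + self.offset)

X = np.arange(20).reshape(10, 2)
y = np.arange(10, dtype=float)
scores = cross_val_score(MeanRegressor(), X, y, cv=5,
                         scoring='neg_mean_squared_error')
print(scores.shape)  # one (negative) MSE per fold
```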
Another possibility is to use make_scorer factory function. As for the second thing, you can pass parameters to your model through the fit_params dict parameter of the cross_val_score function (as mentioned in the documentation). These parameters will be passed to the fit function. ``` class my_estimator(): def fit(X, y, **kwargs): alpha = kwargs['alpha'] beta=X[1,:]+alpha return beta ``` After reading all the error messages, which provide quite clear idea of what's missing, here is a simple example: ``` import numpy as np from sklearn.cross_validation import cross_val_score class RegularizedRegressor: def __init__(self, l = 0.01): self.l = l def combine(self, inputs): return sum([i*w for (i,w) in zip([1] + inputs, self.weights)]) def predict(self, X): return [self.combine(x) for x in X] def classify(self, inputs): return sign(self.predict(inputs)) def fit(self, X, y, **kwargs): self.l = kwargs['l'] X = np.matrix(X) y = np.matrix(y) W = (X.transpose() * X).getI() * X.transpose() * y self.weights = [w[0] for w in W.tolist()] def get_params(self, deep = False): return {'l':self.l} X = np.matrix([[0, 0], [1, 0], [0, 1], [1, 1]]) y = np.matrix([0, 1, 1, 0]).transpose() print cross_val_score(RegularizedRegressor(), X, y, fit_params={'l':0.1}, scoring = 'mean_squared_error') ```", "best_answers_score":0.8, "library_name":"scikit-learn", "question_url":"https:\/\/stackoverflow.com\/questions\/20330445\/how-to-write-a-custom-estimator-in-sklearn-and-use-cross-validation-on-it", "best_answers_votes":35, "question_length":2680, "response_length":2118 }, { "question":"Specificity in scikit learn I need specificity for my classification which is defined as : TN\/(TN+FP) I am writing a custom scorer function : ``` from sklearn.metrics import make_scorer def specificity_loss_func(ground_truth, predictions): print predictions tp, tn, fn, fp = 0.0,0.0,0.0,0.0 for l,m in enumerate(ground_truth): if m==predictions[l] and m==1: tp+=1 if m==predictions[l] and m==0: tn+=1 if 
m!=predictions[l] and m==1: fn+=1 if m!=predictions[l] and m==0: fp+=1 return tn\/(tn+fp) score = make_scorer(specificity_loss_func, greater_is_better=True) ``` Then, ``` from sklearn.dummy import DummyClassifier clf_dummy = DummyClassifier(strategy='most_frequent', random_state=0) ground_truth = [0,0,1,0,1,1,1,0,0,1,0,0,1] p = [0,0,0,1,0,1,1,1,1,0,0,1,0] clf_dummy = clf_dummy.fit(ground_truth, p) score(clf_dummy, ground_truth, p) ``` When I run these commands, I get p printed as: ``` [0 0 0 0 0 0 0 0 0 0 0 0 0] 1.0 ``` Why is my p changing to a series of zeros when I input p = [0,0,0,1,0,1,1,1,1,0,0,1,0]", "response":"You could get specificity from the confusion matrix. For a binary classification problem, it would be something like: ``` from sklearn.metrics import confusion_matrix y_true = [0, 0, 0, 1, 1, 1, 1, 1] y_pred = [0, 1, 0, 1, 0, 1, 0, 1] tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel() specificity = tn \/ (tn+fp) ```", "best_answers_score":0.8, "library_name":"scikit-learn", "question_url":"https:\/\/stackoverflow.com\/questions\/33275461\/specificity-in-scikit-learn", "best_answers_votes":86, "question_length":1017, "response_length":323 }, { "question":"How can I call scikit-learn classifiers from Java? I have a classifier that I trained using Python's scikit-learn. How can I use the classifier from a Java program? Can I use Jython? Is there some way to save the classifier in Python and load it in Java? Is there some other way to use it?", "response":"You cannot use Jython, as scikit-learn heavily relies on numpy and scipy, which have many compiled C and Fortran extensions and hence cannot work in Jython.
The easiest ways to use scikit-learn in a Java environment would be to: (1) expose the classifier as an HTTP \/ JSON service, for instance using a microframework such as flask, bottle or cornice, and call it from Java using an HTTP client library; (2) write a command-line wrapper application in Python that reads data on stdin and outputs predictions on stdout using some format such as CSV or JSON (or some lower-level binary representation), and call the Python program from Java, for instance using Apache Commons Exec; (3) make the Python program output the raw numerical parameters learnt at fit time (typically as an array of floating point values) and reimplement the predict function in Java (this is typically easy for predictive linear models where the prediction is often just a thresholded dot product). The last approach will be a lot more work if you need to re-implement feature extraction in Java as well. Finally, you can use a Java library such as Weka or Mahout that implements the algorithms you need instead of trying to use scikit-learn from Java.", "best_answers_score":0.8, "library_name":"scikit-learn", "question_url":"https:\/\/stackoverflow.com\/questions\/12738827\/how-can-i-call-scikit-learn-classifiers-from-java", "best_answers_votes":54, "question_length":289, "response_length":1200 }, { "question":"Using Scikit-Learn OneHotEncoder with a Pandas DataFrame I'm trying to replace a column within a Pandas DataFrame containing strings with a one-hot encoded equivalent using Scikit-Learn's OneHotEncoder.
My code below doesn't work: ```py from sklearn.preprocessing import OneHotEncoder # data is a Pandas DataFrame jobs_encoder = OneHotEncoder() jobs_encoder.fit(data['Profession'].unique().reshape(1, -1)) data['Profession'] = jobs_encoder.transform(data['Profession'].to_numpy().reshape(-1, 1)) ``` It produces the following error (strings in the list are omitted): ```py --------------------------------------------------------------------------- ValueError Traceback (most recent call last) in () 3 jobs_encoder = OneHotEncoder() 4 jobs_encoder.fit(data['Profession'].unique().reshape(1, -1)) ----> 5 data['Profession'] = jobs_encoder.transform(data['Profession'].to_numpy().reshape(-1, 1)) \/usr\/local\/anaconda3\/envs\/ml\/lib\/python3.6\/site-packages\/sklearn\/preprocessing\/_encoders.py in transform(self, X) 730 copy=True) 731 else: --> 732 return self._transform_new(X) 733 734 def inverse_transform(self, X): \/usr\/local\/anaconda3\/envs\/ml\/lib\/python3.6\/site-packages\/sklearn\/preprocessing\/_encoders.py in _transform_new(self, X) 678 \"\"\"New implementation assuming categorical input\"\"\" 679 # validation of X happens in _check_X called by _transform --> 680 X_int, X_mask = self._transform(X, handle_unknown=self.handle_unknown) 681 682 n_samples, n_features = X_int.shape \/usr\/local\/anaconda3\/envs\/ml\/lib\/python3.6\/site-packages\/sklearn\/preprocessing\/_encoders.py in _transform(self, X, handle_unknown) 120 msg = (\"Found unknown categories {0} in column {1}\" 121 \" during transform\".format(diff, i)) --> 122 raise ValueError(msg) 123 else: 124 # Set the problematic rows to an acceptable value and ValueError: Found unknown categories ['...', ..., '...'] in column 0 during transform ``` Here's some sample data: ```py data['Profession'] = 0 unkn 1 safe 2 rece 3 unkn 4 lead ... 
111988 indu 111989 seni 111990 mess 111991 seni 111992 proj Name: Profession, Length: 111993, dtype: object ``` What exactly am I doing wrong?", "response":"OneHotEncoder Encodes categorical integer features as a one-hot numeric array. Its Transform method returns a sparse matrix if sparse=True, otherwise it returns a 2-d array. You can't cast a 2-d array (or sparse matrix) into a Pandas Series. You must create a Pandas Serie (a column in a Pandas dataFrame) for each category. I would recommend pandas.get_dummies instead: ``` data = pd.get_dummies(data,prefix=['Profession'], columns = ['Profession'], drop_first=True) ``` EDIT: Using Sklearn OneHotEncoder: ``` transformed = jobs_encoder.transform(data['Profession'].to_numpy().reshape(-1, 1)) #Create a Pandas DataFrame of the hot encoded column ohe_df = pd.DataFrame(transformed, columns=jobs_encoder.get_feature_names()) #concat with original data data = pd.concat([data, ohe_df], axis=1).drop(['Profession'], axis=1) ``` Other Options: If you are doing hyperparameter tuning with GridSearch it's recommanded to use ColumnTransformer and FeatureUnion with Pipeline or directly make_column_transformer", "best_answers_score":0.8, "library_name":"scikit-learn", "question_url":"https:\/\/stackoverflow.com\/questions\/58101126\/using-scikit-learn-onehotencoder-with-a-pandas-dataframe", "best_answers_votes":49, "question_length":2122, "response_length":1003 }, { "question":"Scikit-learn is returning coefficient of determination (R^2) values less than -1 I'm doing a simple linear model. I have ``` fire = load_data() regr = linear_model.LinearRegression() scores = cross_validation.cross_val_score(regr, fire.data, fire.target, cv=10, scoring='r2') print scores ``` which yields ``` [ 0.00000000e+00 0.00000000e+00 -8.27299054e+02 -5.80431382e+00 -1.04444147e-01 -1.19367785e+00 -1.24843536e+00 -3.39950443e-01 1.95018287e-02 -9.73940970e-02] ``` How is this possible? 
When I do the same thing with the built in diabetes data, it works perfectly fine, but for my data, it returns these seemingly absurd results. Have I done something wrong?", "response":"There is no reason r^2 shouldn't be negative (despite the ^2 in its name). This is also stated in the doc. You can see r^2 as the comparison of your model fit (in the context of linear regression, e.g a model of order 1 (affine)) to a model of order 0 (just fitting a constant), both by minimizing a squared loss. The constant minimizing the squared error is the mean. Since you are doing cross validation with left out data, it can happen that the mean of your test set is wildly different from the mean of your training set. This alone can induce a much higher incurred squared error in your prediction versus just predicting the mean of the test data, which results in a negative r^2 score. In worst case, if your data do not explain your target at all, these scores can become very strongly negative. Try ``` import numpy as np rng = np.random.RandomState(42) X = rng.randn(100, 80) y = rng.randn(100) # y has nothing to do with X whatsoever from sklearn.linear_model import LinearRegression from sklearn.cross_validation import cross_val_score scores = cross_val_score(LinearRegression(), X, y, cv=5, scoring='r2') ``` This should result in negative r^2 values. ``` In [23]: scores Out[23]: array([-240.17927358, -5.51819556, -14.06815196, -67.87003867, -64.14367035]) ``` The important question now is whether this is due to the fact that linear models just do not find anything in your data, or to something else that may be fixed in the preprocessing of your data. Have you tried scaling your columns to have mean 0 and variance 1? You can do this using sklearn.preprocessing.StandardScaler. As a matter of fact, you should create a new estimator by concatenating a StandardScaler and the LinearRegression into a pipeline using sklearn.pipeline.Pipeline. 
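As a minimal sketch of that suggestion (illustrative code, not from the original answer — it uses the modern sklearn.model_selection import path and the same random data as above):

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.RandomState(42)
X = rng.randn(100, 80)
y = rng.randn(100)  # y has nothing to do with X

# Scale each column to mean 0 / variance 1 inside each CV fold,
# then fit the linear model on the scaled features.
model = Pipeline([('scaler', StandardScaler()),
                  ('regression', LinearRegression())])
scores = cross_val_score(model, X, y, cv=5, scoring='r2')
print(scores)  # can still be negative when X does not explain y
```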
Next you may want to try Ridge regression.", "best_answers_score":0.8, "library_name":"scikit-learn", "question_url":"https:\/\/stackoverflow.com\/questions\/23036866\/scikit-learn-is-returning-coefficient-of-determination-r2-values-less-than-1", "best_answers_votes":42, "question_length":667, "response_length":1805 }, { "question":"What is the difference between xgb.train and xgb.XGBRegressor (or xgb.XGBClassifier)? I already know \"xgboost.XGBRegressor is a Scikit-Learn Wrapper interface for XGBoost.\" But do they have any other difference?", "response":"xgboost.train is the low-level API to train the model via gradient boosting method. xgboost.XGBRegressor and xgboost.XGBClassifier are the wrappers (Scikit-Learn-like wrappers, as they call it) that prepare the DMatrix and pass in the corresponding objective function and parameters. In the end, the fit call simply boils down to: ```py self._Booster = train(params, dmatrix, self.n_estimators, evals=evals, early_stopping_rounds=early_stopping_rounds, evals_result=evals_result, obj=obj, feval=feval, verbose_eval=verbose) ``` This means that everything that can be done with XGBRegressor and XGBClassifier is doable via underlying xgboost.train function. The other way around it's obviously not true, for instance, some useful parameters of xgboost.train are not supported in XGBModel API. The list of notable differences includes: xgboost.train allows to set the callbacks applied at end of each iteration. xgboost.train allows training continuation via xgb_model parameter. 
xgboost.train allows not only minimization of the eval function, but maximization as well.", "best_answers_score":0.8, "library_name":"scikit-learn", "question_url":"https:\/\/stackoverflow.com\/questions\/47152610\/what-is-the-difference-between-xgb-train-and-xgb-xgbregressor-or-xgb-xgbclassif", "best_answers_votes":56, "question_length":211, "response_length":1066 }, { "question":"How to interpret scikit's learn confusion matrix and classification report? I have a sentiment analysis task, for which I'm using this corpus; the opinions have 5 classes (very neg, neg, neu, pos, very pos), from 1 to 5. So I do the classification as follows: ``` from sklearn.feature_extraction.text import TfidfVectorizer import numpy as np tfidf_vect= TfidfVectorizer(use_idf=True, smooth_idf=True, sublinear_tf=False, ngram_range=(2,2)) from sklearn.cross_validation import train_test_split, cross_val_score import pandas as pd df = pd.read_csv('\/corpus.csv', header=0, sep=',', names=['id', 'content', 'label']) X = tfidf_vect.fit_transform(df['content'].values) y = df['label'].values from sklearn import cross_validation X_train, X_test, y_train, y_test = cross_validation.train_test_split(X, y, test_size=0.33) from sklearn.svm import SVC svm_1 = SVC(kernel='linear') svm_1.fit(X, y) svm_1_prediction = svm_1.predict(X_test) ``` Then, with the metrics, I obtained the confusion matrix and classification report as follows: ``` print '\\nClasification report:\\n', classification_report(y_test, svm_1_prediction) print '\\nConfussion matrix:\\n',confusion_matrix(y_test, svm_1_prediction) ``` Then, this is the result: ``` Clasification report: precision recall f1-score support 1 1.00 0.76 0.86 71 2 1.00 0.84 0.91 43 3 1.00 0.74 0.85 89 4 0.98 0.95 0.96 288 5 0.87 1.00 0.93 367 avg \/ total 0.94 0.93 0.93 858 Confussion matrix: [[ 54 0 0 0 17] [ 0 36 0 1 6] [ 0 0 66 5 18] [ 0 0 0 273 15] [ 0 0 0 0 367]] ``` How can I interpret the above confusion matrix and classification report?
I tried reading the documentation and this question, but I still can't interpret what happened here, particularly with this data. Why is this matrix somehow \"diagonal\"? On the other hand, what do the recall, precision, f1-score and support mean for this data? What can I say about this data? Thanks in advance guys", "response":"Classification report must be straightforward - a report of P\/R\/F-Measure for each element in your test data. In multiclass problems, it is not a good idea to read Precision\/Recall and F-Measure over the whole data; any imbalance would make you feel you've reached better results. That's where such reports help. Coming to the confusion matrix, it is a much more detailed representation of what's going on with your labels. So there were 71 points in the first class (label 0). Out of these, your model was successful in identifying 54 of those correctly in label 0, but 17 were marked as label 4. Similarly look at the second row. There were 43 points in class 1, but 36 of them were marked correctly. Your classifier predicted 1 in class 3 and 6 in class 4. Now you can see the pattern this follows. An ideal classifier with 100% accuracy would produce a pure diagonal matrix with all the points predicted in their correct class. Coming to Recall\/Precision: they are some of the most widely used measures in evaluating how good your system works. Now you had 71 points in the first class (call it class 0). Out of them your classifier was able to get 54 elements correctly. That's your recall: 54\/71 = 0.76. Now look only at the first column in the table. There is one cell with entry 54; the rest are all zeros. This means your classifier marked 54 points in class 0, and all 54 of them were actually in class 0. This is precision: 54\/54 = 1. Look at the column marked 4. In this column, there are elements scattered across all five rows. 367 of them were marked correctly; the rest are all incorrect. So that reduces your precision. F-Measure is the harmonic mean of Precision and Recall.
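To connect the numbers above to code (an illustrative sketch, not part of the original answer), the per-class recall and precision can be read straight off the question's confusion matrix:

```python
import numpy as np

# Confusion matrix from the question: rows = true class, columns = predicted class
cm = np.array([[ 54,  0,  0,   0,  17],
               [  0, 36,  0,   1,   6],
               [  0,  0, 66,   5,  18],
               [  0,  0,  0, 273,  15],
               [  0,  0,  0,   0, 367]])

recall = np.diag(cm) / cm.sum(axis=1)     # correct / all true points per class
precision = np.diag(cm) / cm.sum(axis=0)  # correct / all predicted points per class
print(np.round(recall, 2))     # first entry: 54/71 = 0.76
print(np.round(precision, 2))  # first entry: 54/54 = 1.00
```

These match the recall and precision columns of the classification report above.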
Be sure you read details about these. https:\/\/en.wikipedia.org\/wiki\/Precision_and_recall", "best_answers_score":0.8, "library_name":"scikit-learn", "question_url":"https:\/\/stackoverflow.com\/questions\/30746460\/how-to-interpret-scikits-learn-confusion-matrix-and-classification-report", "best_answers_votes":72, "question_length":1907, "response_length":1749 }, { "question":"How to duplicate an estimator in order to use it on multiple data sets? Here is an example that creates two data sets: ``` from sklearn.linear_model import LogisticRegression from sklearn.datasets import make_classification # data set 1 X1, y1 = make_classification(n_classes=2, n_features=5, random_state=1) # data set 2 X2, y2 = make_classification(n_classes=2, n_features=5, random_state=2) ``` I want to use the LogisticRegression estimator with the same parameter values to fit a classifier on each data set: ``` lr = LogisticRegression() clf1 = lr.fit(X1, y1) clf2 = lr.fit(X2, y2) print \"Classifier for data set 1: \" print \" - intercept: \", clf1.intercept_ print \" - coef_: \", clf1.coef_ print \"Classifier for data set 2: \" print \" - intercept: \", clf2.intercept_ print \" - coef_: \", clf2.coef_ ``` The problem is that both classifiers are the same: ``` Classifier for data set 1: - intercept: [ 0.05191729] - coef_: [[ 0.06704494 0.00137751 -0.12453698 -0.05999127 0.05798146]] Classifier for data set 2: - intercept: [ 0.05191729] - coef_: [[ 0.06704494 0.00137751 -0.12453698 -0.05999127 0.05798146]] ``` For this simple example, I could use something like: ``` lr1 = LogisticRegression() lr2 = LogisticRegression() clf1 = lr1.fit(X1, y1) clf2 = lr2.fit(X2, y2) ``` to avoid the problem. 
However, the question remains: How to duplicate \/ copy an estimator with its particular parameter values in general?", "response":"``` from sklearn.base import clone lr1 = LogisticRegression() lr2 = clone(lr1) ```", "best_answers_score":0.8, "library_name":"scikit-learn", "question_url":"https:\/\/stackoverflow.com\/questions\/13701603\/how-to-duplicate-an-estimator-in-order-to-use-it-on-multiple-data-sets", "best_answers_votes":55, "question_length":1414, "response_length":82 }, { "question":"sklearn agglomerative clustering linkage matrix I'm trying to draw a complete-link scipy.cluster.hierarchy.dendrogram, and I found that scipy.cluster.hierarchy.linkage is slower than sklearn.AgglomerativeClustering. However, sklearn.AgglomerativeClustering doesn't return the distance between clusters and the number of original observations, which scipy.cluster.hierarchy.dendrogram needs. Is there a way to take them?", "response":"It's possible, but it isn't pretty. It requires (at a minimum) a small rewrite of AgglomerativeClustering.fit (source). The difficulty is that the method requires a number of imports, so it ends up getting a bit nasty looking. To add in this feature: Insert the following line after line 748: kwargs['return_distance'] = True Replace line 752 with: self.children_, self.n_components_, self.n_leaves_, parents, self.distance = \\ This will give you a new attribute, distance, that you can easily call. A couple things to note: When doing this, I ran into this issue about the check_array function on line 711. This can be fixed by using check_arrays (from sklearn.utils.validation import check_arrays). You can modify that line to become X = check_arrays(X)[0]. This appears to be a bug (I still have this issue on the most recent version of scikit-learn). Depending on which version of sklearn.cluster.hierarchical.linkage_tree you have, you may also need to modify it to be the one provided in the source. 
To make things easier for everyone, here is the full code that you will need to use: ``` from heapq import heapify, heappop, heappush, heappushpop import warnings import sys import numpy as np from scipy import sparse from sklearn.base import BaseEstimator, ClusterMixin from sklearn.externals.joblib import Memory from sklearn.externals import six from sklearn.utils.validation import check_arrays from sklearn.utils.sparsetools import connected_components from sklearn.cluster import _hierarchical from sklearn.cluster.hierarchical import ward_tree from sklearn.cluster._feature_agglomeration import AgglomerationTransform from sklearn.utils.fast_dict import IntFloatDict def _fix_connectivity(X, connectivity, n_components=None, affinity=\"euclidean\"): \"\"\" Fixes the connectivity matrix - copies it - makes it symmetric - converts it to LIL if necessary - completes it if necessary \"\"\" n_samples = X.shape[0] if (connectivity.shape[0] != n_samples or connectivity.shape[1] != n_samples): raise ValueError('Wrong shape for connectivity matrix: %s ' 'when X is %s' % (connectivity.shape, X.shape)) # Make the connectivity matrix symmetric: connectivity = connectivity + connectivity.T # Convert connectivity matrix to LIL if not sparse.isspmatrix_lil(connectivity): if not sparse.isspmatrix(connectivity): connectivity = sparse.lil_matrix(connectivity) else: connectivity = connectivity.tolil() # Compute the number of nodes n_components, labels = connected_components(connectivity) if n_components > 1: warnings.warn(\"the number of connected components of the \" \"connectivity matrix is %d > 1. Completing it to avoid \" \"stopping the tree early.\" % n_components, stacklevel=2) # XXX: Can we do without completing the matrix? 
for i in xrange(n_components): idx_i = np.where(labels == i)[0] Xi = X[idx_i] for j in xrange(i): idx_j = np.where(labels == j)[0] Xj = X[idx_j] D = pairwise_distances(Xi, Xj, metric=affinity) ii, jj = np.where(D == np.min(D)) ii = ii[0] jj = jj[0] connectivity[idx_i[ii], idx_j[jj]] = True connectivity[idx_j[jj], idx_i[ii]] = True return connectivity, n_components # average and complete linkage def linkage_tree(X, connectivity=None, n_components=None, n_clusters=None, linkage='complete', affinity=\"euclidean\", return_distance=False): \"\"\"Linkage agglomerative clustering based on a Feature matrix. The inertia matrix uses a Heapq-based representation. This is the structured version, that takes into account some topological structure between samples. Parameters ---------- X : array, shape (n_samples, n_features) feature matrix representing n_samples samples to be clustered connectivity : sparse matrix (optional). connectivity matrix. Defines for each sample the neighboring samples following a given structure of the data. The matrix is assumed to be symmetric and only the upper triangular half is used. Default is None, i.e, the Ward algorithm is unstructured. n_components : int (optional) Number of connected components. If None the number of connected components is estimated from the connectivity matrix. NOTE: This parameter is now directly determined directly from the connectivity matrix and will be removed in 0.18 n_clusters : int (optional) Stop early the construction of the tree at n_clusters. This is useful to decrease computation time if the number of clusters is not small compared to the number of samples. In this case, the complete tree is not computed, thus the 'children' output is of limited use, and the 'parents' output should rather be used. This option is valid only when specifying a connectivity matrix. linkage : {\"average\", \"complete\"}, optional, default: \"complete\" Which linkage critera to use. 
The linkage criterion determines which distance to use between sets of observation. - average uses the average of the distances of each observation of the two sets - complete or maximum linkage uses the maximum distances between all observations of the two sets. affinity : string or callable, optional, default: \"euclidean\". which metric to use. Can be \"euclidean\", \"manhattan\", or any distance know to paired distance (see metric.pairwise) return_distance : bool, default False whether or not to return the distances between the clusters. Returns ------- children : 2D array, shape (n_nodes-1, 2) The children of each non-leaf node. Values less than `n_samples` correspond to leaves of the tree which are the original samples. A node `i` greater than or equal to `n_samples` is a non-leaf node and has children `children_[i - n_samples]`. Alternatively at the i-th iteration, children[i][0] and children[i][1] are merged to form node `n_samples + i` n_components : int The number of connected components in the graph. n_leaves : int The number of leaves in the tree. parents : 1D array, shape (n_nodes, ) or None The parent of each node. Only returned when a connectivity matrix is specified, elsewhere 'None' is returned. distances : ndarray, shape (n_nodes-1,) Returned when return_distance is set to True. distances[i] refers to the distance between children[i][0] and children[i][1] when they are merged. 
See also -------- ward_tree : hierarchical clustering with ward linkage \"\"\" X = np.asarray(X) if X.ndim == 1: X = np.reshape(X, (-1, 1)) n_samples, n_features = X.shape linkage_choices = {'complete': _hierarchical.max_merge, 'average': _hierarchical.average_merge, } try: join_func = linkage_choices[linkage] except KeyError: raise ValueError( 'Unknown linkage option, linkage should be one ' 'of %s, but %s was given' % (linkage_choices.keys(), linkage)) if connectivity is None: from scipy.cluster import hierarchy # imports PIL if n_clusters is not None: warnings.warn('Partial build of the tree is implemented ' 'only for structured clustering (i.e. with ' 'explicit connectivity). The algorithm ' 'will build the full tree and only ' 'retain the lower branches required ' 'for the specified number of clusters', stacklevel=2) if affinity == 'precomputed': # for the linkage function of hierarchy to work on precomputed # data, provide as first argument an ndarray of the shape returned # by pdist: it is a flat array containing the upper triangular of # the distance matrix. 
i, j = np.triu_indices(X.shape[0], k=1) X = X[i, j] elif affinity == 'l2': # Translate to something understood by scipy affinity = 'euclidean' elif affinity in ('l1', 'manhattan'): affinity = 'cityblock' elif callable(affinity): X = affinity(X) i, j = np.triu_indices(X.shape[0], k=1) X = X[i, j] out = hierarchy.linkage(X, method=linkage, metric=affinity) children_ = out[:, :2].astype(np.int) if return_distance: distances = out[:, 2] return children_, 1, n_samples, None, distances return children_, 1, n_samples, None if n_components is not None: warnings.warn( \"n_components is now directly calculated from the connectivity \" \"matrix and will be removed in 0.18\", DeprecationWarning) connectivity, n_components = _fix_connectivity(X, connectivity) connectivity = connectivity.tocoo() # Put the diagonal to zero diag_mask = (connectivity.row != connectivity.col) connectivity.row = connectivity.row[diag_mask] connectivity.col = connectivity.col[diag_mask] connectivity.data = connectivity.data[diag_mask] del diag_mask if affinity == 'precomputed': distances = X[connectivity.row, connectivity.col] else: # FIXME We compute all the distances, while we could have only computed # the \"interesting\" distances distances = paired_distances(X[connectivity.row], X[connectivity.col], metric=affinity) connectivity.data = distances if n_clusters is None: n_nodes = 2 * n_samples - 1 else: assert n_clusters n_leaves: raise ValueError('Cannot extract more clusters than samples: ' '%s clusters where given for a tree with %s leaves.' 
% (n_clusters, n_leaves)) # In this function, we store nodes as a heap to avoid recomputing # the max of the nodes: the first element is always the smallest # We use negated indices as heaps work on smallest elements, and we # are interested in largest elements # children[-1] is the root of the tree nodes = [-(max(children[-1]) + 1)] for i in xrange(n_clusters - 1): # As we have a heap, nodes[0] is the smallest element these_children = children[-nodes[0] - n_leaves] # Insert the 2 children and remove the largest node heappush(nodes, -these_children[0]) heappushpop(nodes, -these_children[1]) label = np.zeros(n_leaves, dtype=np.intp) for i, node in enumerate(nodes): label[_hierarchical._hc_get_descendent(-node, children, n_leaves)] = i return label class AgglomerativeClustering(BaseEstimator, ClusterMixin): \"\"\" Agglomerative Clustering Recursively merges the pair of clusters that minimally increases a given linkage distance. Parameters ---------- n_clusters : int, default=2 The number of clusters to find. connectivity : array-like or callable, optional Connectivity matrix. Defines for each sample the neighboring samples following a given structure of the data. This can be a connectivity matrix itself or a callable that transforms the data into a connectivity matrix, such as derived from kneighbors_graph. Default is None, i.e, the hierarchical clustering algorithm is unstructured. affinity : string or callable, default: \"euclidean\" Metric used to compute the linkage. Can be \"euclidean\", \"l1\", \"l2\", \"manhattan\", \"cosine\", or 'precomputed'. If linkage is \"ward\", only \"euclidean\" is accepted. memory : Instance of joblib.Memory or string (optional) Used to cache the output of the computation of the tree. By default, no caching is done. If a string is given, it is the path to the caching directory. n_components : int (optional) Number of connected components. If None the number of connected components is estimated from the connectivity matrix. 
NOTE: This parameter is now directly determined from the connectivity matrix and will be removed in 0.18 compute_full_tree : bool or 'auto' (optional) Stop early the construction of the tree at n_clusters. This is useful to decrease computation time if the number of clusters is not small compared to the number of samples. This option is useful only when specifying a connectivity matrix. Note also that when varying the number of clusters and using caching, it may be advantageous to compute the full tree. linkage : {\"ward\", \"complete\", \"average\"}, optional, default: \"ward\" Which linkage criterion to use. The linkage criterion determines which distance to use between sets of observation. The algorithm will merge the pairs of cluster that minimize this criterion. - ward minimizes the variance of the clusters being merged. - average uses the average of the distances of each observation of the two sets. - complete or maximum linkage uses the maximum distances between all observations of the two sets. pooling_func : callable, default=np.mean This combines the values of agglomerated features into a single value, and should accept an array of shape [M, N] and the keyword argument ``axis=1``, and reduce it to an array of size [M]. Attributes ---------- labels_ : array [n_samples] cluster labels for each point n_leaves_ : int Number of leaves in the hierarchical tree. n_components_ : int The estimated number of connected components in the graph. children_ : array-like, shape (n_nodes-1, 2) The children of each non-leaf node. Values less than `n_samples` correspond to leaves of the tree which are the original samples. A node `i` greater than or equal to `n_samples` is a non-leaf node and has children `children_[i - n_samples]`. 
Alternatively at the i-th iteration, children[i][0] and children[i][1] are merged to form node `n_samples + i` \"\"\" def __init__(self, n_clusters=2, affinity=\"euclidean\", memory=Memory(cachedir=None, verbose=0), connectivity=None, n_components=None, compute_full_tree='auto', linkage='ward', pooling_func=np.mean): self.n_clusters = n_clusters self.memory = memory self.n_components = n_components self.connectivity = connectivity self.compute_full_tree = compute_full_tree self.linkage = linkage self.affinity = affinity self.pooling_func = pooling_func def fit(self, X, y=None): \"\"\"Fit the hierarchical clustering on the data Parameters ---------- X : array-like, shape = [n_samples, n_features] The samples a.k.a. observations. Returns ------- self \"\"\" X = check_arrays(X)[0] memory = self.memory if isinstance(memory, six.string_types): memory = Memory(cachedir=memory, verbose=0) if self.linkage == \"ward\" and self.affinity != \"euclidean\": raise ValueError(\"%s was provided as affinity. Ward can only \" \"work with euclidean distances.\" % (self.affinity, )) if self.linkage not in _TREE_BUILDERS: raise ValueError(\"Unknown linkage type %s.\" \"Valid options are %s\" % (self.linkage, _TREE_BUILDERS.keys())) tree_builder = _TREE_BUILDERS[self.linkage] connectivity = self.connectivity if self.connectivity is not None: if callable(self.connectivity): connectivity = self.connectivity(X) connectivity = check_arrays( connectivity, accept_sparse=['csr', 'coo', 'lil']) n_samples = len(X) compute_full_tree = self.compute_full_tree if self.connectivity is None: compute_full_tree = True if compute_full_tree == 'auto': # Early stopping is likely to give a speed up only for # a large number of clusters. 
The actual threshold # implemented here is heuristic compute_full_tree = self.n_clusters < max(100, .02 * n_samples) n_clusters = self.n_clusters if compute_full_tree: n_clusters = None # Construct the tree kwargs = {} kwargs['return_distance'] = True if self.linkage != 'ward': kwargs['linkage'] = self.linkage kwargs['affinity'] = self.affinity self.children_, self.n_components_, self.n_leaves_, parents, \\ self.distance = memory.cache(tree_builder)(X, connectivity, n_components=self.n_components, n_clusters=n_clusters, **kwargs) # Cut the tree if compute_full_tree: self.labels_ = _hc_cut(self.n_clusters, self.children_, self.n_leaves_) else: labels = _hierarchical.hc_get_heads(parents, copy=False) # copy to avoid holding a reference on the original array labels = np.copy(labels[:n_samples]) # Reasign cluster numbers self.labels_ = np.searchsorted(np.unique(labels), labels) return self ``` Below is a simple example showing how to use the modified AgglomerativeClustering class: ``` import numpy as np import AgglomerativeClustering # Make sure to use the new one!!! d = np.array( [ [1, 2, 3], [4, 5, 6], [7, 8, 9] ] ) clustering = AgglomerativeClustering(n_clusters=2, compute_full_tree=True, affinity='euclidean', linkage='complete') clustering.fit(d) print clustering.distance ``` That example has the following output: ``` [ 5.19615242 10.39230485] ``` This can then be compared to a scipy.cluster.hierarchy.linkage implementation: ``` import numpy as np from scipy.cluster.hierarchy import linkage d = np.array( [ [1, 2, 3], [4, 5, 6], [7, 8, 9] ] ) print linkage(d, 'complete') ``` Output: ``` [[ 1. 2. 5.19615242 2. ] [ 0. 3. 10.39230485 3. 
]] ``` Just for kicks I decided to follow up on your statement about performance: ``` import AgglomerativeClustering from scipy.cluster.hierarchy import linkage import numpy as np import time l = 1000; iters = 50 d = [np.random.random(100) for _ in xrange(1000)] t = time.time() for _ in xrange(iters): clustering = AgglomerativeClustering(n_clusters=l-1, affinity='euclidean', linkage='complete') clustering.fit(d) scikit_time = (time.time() - t) \/ iters print 'scikit-learn Time: {0}s'.format(scikit_time) t = time.time() for _ in xrange(iters): linkage(d, 'complete') scipy_time = (time.time() - t) \/ iters print 'SciPy Time: {0}s'.format(scipy_time) print 'scikit-learn Speedup: {0}'.format(scipy_time \/ scikit_time) ``` This gave me the following results: ``` scikit-learn Time: 0.566560001373s SciPy Time: 0.497740001678s scikit-learn Speedup: 0.878530077083 ``` According to this, the implementation from Scikit-Learn takes 0.88x the execution time of the SciPy implementation, i.e. SciPy's implementation is 1.14x faster. It should be noted that: I modified the original scikit-learn implementation I only did a small number of iterations I only tested a small number of test cases (both cluster size as well as number of items per dimension should be tested) I ran SciPy second, so it is had the advantage of obtaining more cache hits on the source data The two methods don't exactly do the same thing. With all of that in mind, you should really evaluate which method performs better for your specific application. There are also functional reasons to go with one implementation over the other.", "best_answers_score":0.8, "library_name":"scikit-learn", "question_url":"https:\/\/stackoverflow.com\/questions\/26851553\/sklearn-agglomerative-clustering-linkage-matrix", "best_answers_votes":17, "question_length":419, "response_length":17373 }, { "question":"Sklearn set_params takes exactly 1 argument? I'm trying to use SkLearn Bayes classification. 
``` gnb = GaussianNB() gnb.set_params('sigma__0.2') gnb.fit(np.transpose([xn, yn]), y) ``` But I get: ``` set_params() takes exactly 1 argument (2 given) ``` Now I try to use this code: ``` gnb = GaussianNB() arr = np.zeros((len(labs),len(y))) arr.fill(sigma) gnb.set_params(sigma_ = arr) ``` And get: ``` ValueError: Invalid parameter sigma_ for estimator GaussianNB ``` Is the parameter name or the value wrong?", "response":"I just stumbled upon this, so here is a solution for multiple arguments from a dictionary: ``` from sklearn import svm params_svm = {\"kernel\":\"rbf\", \"C\":0.1, \"gamma\":0.1, \"class_weight\":\"auto\"} clf = svm.SVC() clf.set_params(**params_svm) ```", "best_answers_score":0.8, "library_name":"scikit-learn", "question_url":"https:\/\/stackoverflow.com\/questions\/27122757\/sklearn-set-params-takes-exactly-1-argument", "best_answers_votes":71, "question_length":501, "response_length":242 }, { "question":"Using Smote with Gridsearchcv in Scikit-learn I'm dealing with an imbalanced dataset and want to do a grid search to tune my model's parameters using scikit's gridsearchcv. To oversample the data, I want to use SMOTE, and I know I can include that as a stage of a pipeline and pass it to gridsearchcv. My concern is that I think SMOTE will be applied to both train and validation folds, which is not what you are supposed to do. The validation set should not be oversampled. Am I right that the whole pipeline will be applied to both dataset splits? And if yes, how can I work around this? Thanks a lot in advance", "response":"Yes, it can be done, but with imblearn Pipeline. You see, imblearn has its own Pipeline to handle the samplers correctly. I described this in a similar question here. When predict() is called on an imblearn.Pipeline object, it will skip the sampling method and leave the data as it is to be passed to the next transformer.
You can confirm that by looking at the source code here: ``` if hasattr(transform, \"fit_sample\"): pass else: Xt = transform.transform(Xt) ``` So for this to work correctly, you need the following: ``` from imblearn.pipeline import Pipeline model = Pipeline([ ('sampling', SMOTE()), ('classification', LogisticRegression()) ]) grid = GridSearchCV(model, params, ...) grid.fit(X, y) ``` Fill in the details as necessary, and the pipeline will take care of the rest.", "best_answers_score":0.8, "library_name":"scikit-learn", "question_url":"https:\/\/stackoverflow.com\/questions\/50245684\/using-smote-with-gridsearchcv-in-scikit-learn", "best_answers_votes":60, "question_length":613, "response_length":775 }, { "question":"PCA on sklearn - how to interpret pca.components_ I ran PCA on a data frame with 10 features using this simple code: ``` pca = PCA() fit = pca.fit(dfPca) ``` The result of pca.explained_variance_ratio_ shows: ``` array([ 5.01173322e-01, 2.98421951e-01, 1.00968655e-01, 4.28813755e-02, 2.46887288e-02, 1.40976609e-02, 1.24905823e-02, 3.43255532e-03, 1.84516942e-03, 4.50314168e-16]) ``` I believe that means that the first PC explains 52% of the variance, the second component explains 29% and so on... What I don't understand is the output of pca.components_. If I do the following: ``` df = pd.DataFrame(pca.components_, columns=list(dfPca.columns)) ``` I get the data frame below where each line is a principal component. What I'd like to understand is how to interpret that table. I know that if I square all the features on each component and sum them I get 1, but what does the -0.56 on PC1 mean? Does it tell us something about \"Feature E\" since it is the highest magnitude on a component that explains 52% of the variance? 
Thanks", "response":"Terminology: First of all, the results of a PCA are usually discussed in terms of component scores, sometimes called factor scores (the transformed variable values corresponding to a particular data point), and loadings (the weight by which each standardized original variable should be multiplied to get the component score). PART1: I explain how to check the importance of the features and how to plot a biplot. PART2: I explain how to check the importance of the features and how to save them into a pandas dataframe using the feature names. Summary in an article: Python compact guide: https:\/\/towardsdatascience.com\/pca-clearly-explained-how-when-why-to-use-it-and-feature-importance-a-guide-in-python-7c274582c37e?source=friends_link&sk=65bf5440e444c24aff192fedf9f8b64f PART 1: In your case, the value -0.56 for Feature E is the score of this feature on the PC1. This value tells us 'how much' the feature influences the PC (in our case the PC1). So the higher the value in absolute value, the higher the influence on the principal component. After performing the PCA analysis, people usually plot the known 'biplot' to see the transformed features in the N dimensions (2 in our case) and the original variables (features). I wrote a function to plot this. 
Example using iris data: ``` import numpy as np import matplotlib.pyplot as plt from sklearn import datasets import pandas as pd from sklearn.preprocessing import StandardScaler from sklearn.decomposition import PCA iris = datasets.load_iris() X = iris.data y = iris.target #In general it is a good idea to scale the data scaler = StandardScaler() scaler.fit(X) X=scaler.transform(X) pca = PCA() pca.fit(X,y) x_new = pca.transform(X) def myplot(score,coeff,labels=None): xs = score[:,0] ys = score[:,1] n = coeff.shape[0] plt.scatter(xs ,ys, c = y) #without scaling for i in range(n): plt.arrow(0, 0, coeff[i,0], coeff[i,1],color = 'r',alpha = 0.5) if labels is None: plt.text(coeff[i,0]* 1.15, coeff[i,1] * 1.15, \"Var\"+str(i+1), color = 'g', ha = 'center', va = 'center') else: plt.text(coeff[i,0]* 1.15, coeff[i,1] * 1.15, labels[i], color = 'g', ha = 'center', va = 'center') plt.xlabel(\"PC{}\".format(1)) plt.ylabel(\"PC{}\".format(2)) plt.grid() #Call the function. myplot(x_new[:,0:2], pca.components_) plt.show() ``` Results PART 2: The important features are the ones that influence the components more, and thus have a large absolute value on the component.
To get the most important features on the PCs with names and save them into a pandas dataframe, use this: ``` from sklearn.decomposition import PCA import pandas as pd import numpy as np np.random.seed(0) # 10 samples with 5 features train_features = np.random.rand(10,5) model = PCA(n_components=2).fit(train_features) X_pc = model.transform(train_features) # number of components n_pcs= model.components_.shape[0] # get the index of the most important feature on EACH component # LIST COMPREHENSION HERE most_important = [np.abs(model.components_[i]).argmax() for i in range(n_pcs)] initial_feature_names = ['a','b','c','d','e'] # get the names most_important_names = [initial_feature_names[most_important[i]] for i in range(n_pcs)] # LIST COMPREHENSION HERE AGAIN dic = {'PC{}'.format(i): most_important_names[i] for i in range(n_pcs)} # build the dataframe df = pd.DataFrame(dic.items()) ``` This prints: ``` 0 1 0 PC0 e 1 PC1 d ``` So on PC1 the feature named e is the most important, and on PC2 it is d. Summary in an article: Python compact guide: https:\/\/towardsdatascience.com\/pca-clearly-explained-how-when-why-to-use-it-and-feature-importance-a-guide-in-python-7c274582c37e?source=friends_link&sk=65bf5440e444c24aff192fedf9f8b64f", "best_answers_score":0.8, "library_name":"scikit-learn", "question_url":"https:\/\/stackoverflow.com\/questions\/47370795\/pca-on-sklearn-how-to-interpret-pca-components", "best_answers_votes":47, "question_length":1031, "response_length":3667 }, { "question":"Merging results from model.predict() with original pandas DataFrame? I am trying to merge the results of a predict method back with the original data in a pandas.DataFrame object. ``` from sklearn.datasets import load_iris from sklearn.cross_validation import train_test_split from sklearn.tree import DecisionTreeClassifier import pandas as pd import numpy as np data = load_iris() # bear with me for the next few steps...
I'm trying to walk you through # how my data object landscape looks... i.e. how I get from raw data # to matrices with the actual data I have, not the iris dataset # put feature matrix into columnar format in dataframe df = pd.DataFrame(data = data.data) # add outcome variable df['class'] = data.target X = np.matrix(df.loc[:, [0, 1, 2, 3]]) y = np.array(df['class']) # finally, split into train-test X_train, X_test, y_train, y_test = train_test_split(X, y, train_size = 0.8) model = DecisionTreeClassifier() model.fit(X_train, y_train) # I've got my predictions now y_hats = model.predict(X_test) ``` To merge these predictions back with the original df, I try this: ``` df['y_hats'] = y_hats ``` But that raises: ValueError: Length of values does not match length of index I know I could split the df into train_df and test_df and this problem would be solved, but in reality I need to follow the path above to create the matrices X and y (my actual problem is a text classification problem in which I normalize the entire feature matrix before splitting into train and test). How can I align these predicted values with the appropriate rows in my df, since the y_hats array is zero-indexed and seemingly all information about which rows were included in the X_test and y_test is lost? Or will I be relegated to splitting dataframes into train-test first, and then building feature matrices? I'd like to just fill the rows included in train with np.nan values in the dataframe.", "response":"Your y_hats length will only be the length of the test data (20%) because you predicted on X_test. Once your model is validated and you're happy with the test predictions (by examining the accuracy of your model on the X_test predictions compared to the X_test true values), you should rerun the prediction on the full dataset (X). 
Add these two lines to the bottom: ``` y_hats2 = model.predict(X) df['y_hats'] = y_hats2 ``` EDIT per your comment, here is an updated result that returns the dataset with the predictions appended where they were in the test dataset: ``` from sklearn.datasets import load_iris from sklearn.cross_validation import train_test_split from sklearn.tree import DecisionTreeClassifier import pandas as pd import numpy as np data = load_iris() # bear with me for the next few steps... I'm trying to walk you through # how my data object landscape looks... i.e. how I get from raw data # to matrices with the actual data I have, not the iris dataset # put feature matrix into columnar format in dataframe df = pd.DataFrame(data = data.data) # add outcome variable df_class = pd.DataFrame(data = data.target) # finally, split into train-test X_train, X_test, y_train, y_test = train_test_split(df,df_class, train_size = 0.8) model = DecisionTreeClassifier() model.fit(X_train, y_train) # I've got my predictions now y_hats = model.predict(X_test) y_test['preds'] = y_hats df_out = pd.merge(df,y_test[['preds']],how = 'left',left_index = True, right_index = True) ```", "best_answers_score":0.8, "library_name":"scikit-learn", "question_url":"https:\/\/stackoverflow.com\/questions\/40729162\/merging-results-from-model-predict-with-original-pandas-dataframe", "best_answers_votes":43, "question_length":1905, "response_length":1482 }, { "question":"What's the best way to test whether an sklearn model has been fitted? What's the most elegant way to check whether an sklearn model has been fitted? i.e. 
whether its fit() function has been called after it was instantiated, or not.", "response":"You can do something like: ``` from sklearn.exceptions import NotFittedError for model in models: try: model.predict(some_test_data) except NotFittedError as e: print(repr(e)) ``` Ideally you would check the results of model.predict against expected results, but if all you want to know is whether the model is fitted or not, that should suffice. Update: Some commenters have suggested using check_is_fitted. I consider check_is_fitted an internal method. Most algorithms will call check_is_fitted inside their predict method, which in turn might raise NotFittedError if needed. The problem with using check_is_fitted directly is that it is model specific, i.e. you need to know which members to check depending on your algorithm. For example: ``` \u2554\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2566\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2557 \u2551 Tree models \u2551 check_is_fitted(self, 'tree_') \u2551 \u2551 Linear models \u2551 check_is_fitted(self, 'coefs_') \u2551 \u2551 KMeans \u2551 check_is_fitted(self, 'cluster_centers_') \u2551 \u2551 SVM \u2551 check_is_fitted(self, 'support_') \u2551 \u255a\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2569\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u255d ``` and so on. 
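As a small sketch of that model-specific behaviour (assuming a scikit-learn version where check_is_fitted is importable from sklearn.utils.validation), here is what probing a LogisticRegression before and after fitting looks like:

```python
# Probe a LogisticRegression with check_is_fitted before and after fit().
import numpy as np
from sklearn.exceptions import NotFittedError
from sklearn.linear_model import LogisticRegression
from sklearn.utils.validation import check_is_fitted

model = LogisticRegression()
try:
    check_is_fitted(model, 'coef_')   # coef_ only exists after fit()
except NotFittedError:
    print('not fitted yet')

model.fit(np.array([[0.0], [1.0]]), np.array([0, 1]))
check_is_fitted(model, 'coef_')       # passes silently once fitted
```

Note that the attribute name ('coef_' here) depends on the estimator, which is exactly the drawback described above.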
So in general I would recommend calling model.predict() and letting the specific algorithm handle the best way to check whether it is already fitted or not.", "best_answers_score":0.8, "library_name":"scikit-learn", "question_url":"https:\/\/stackoverflow.com\/questions\/39884009\/whats-the-best-way-to-test-whether-an-sklearn-model-has-been-fitted", "best_answers_votes":27, "question_length":231, "response_length":1243 }, { "question":"Scikit Learn: Logistic Regression model coefficients: Clarification I need to know how to return the logistic regression coefficients in such a manner that I can generate the predicted probabilities myself. My code looks like this: ``` lr = LogisticRegression() lr.fit(training_data, binary_labels) # Generate probabilities automatically predicted_probs = lr.predict_proba(binary_labels) ``` I had assumed the lr.coef_ values would follow typical logistic regression, so that I could return the predicted probabilities like this: ``` sigmoid( dot([val1, val2, offset], lr.coef_.T) ) ``` But this is not the appropriate formulation. Does anyone have the proper format for generating predicted probabilities from Scikit Learn LogisticRegression? Thanks!", "response":"Take a look at the documentation (http:\/\/scikit-learn.org\/stable\/modules\/generated\/sklearn.linear_model.LogisticRegression.html): the offset coefficient isn't stored in lr.coef_. From the docstring: coef_ : array, shape = [n_classes-1, n_features]. Coefficient of the features in the decision function; coef_ is a readonly property derived from raw_coef_ that follows the internal memory layout of liblinear. intercept_ : array, shape = [n_classes-1]. Intercept (a.k.a. bias) added to the decision function. It is available only when the parameter fit_intercept is set to True.
try: ``` sigmoid( dot([val1, val2], lr.coef_) + lr.intercept_ ) ```", "best_answers_score":0.8, "library_name":"scikit-learn", "question_url":"https:\/\/stackoverflow.com\/questions\/18993867\/scikit-learn-logistic-regression-model-coefficients-clarification", "best_answers_votes":32, "question_length":750, "response_length":605 }, { "question":"How do I get word frequency in a corpus using Scikit Learn CountVectorizer? I'm trying to compute a simple word frequency using scikit-learn's CountVectorizer. ``` import pandas as pd import numpy as np from sklearn.feature_extraction.text import CountVectorizer texts=[\"dog cat fish\",\"dog cat cat\",\"fish bird\",\"bird\"] cv = CountVectorizer() cv_fit=cv.fit_transform(texts) print cv.vocabulary_ {u'bird': 0, u'cat': 1, u'dog': 2, u'fish': 3} ``` I was expecting it to return {u'bird': 2, u'cat': 3, u'dog': 2, u'fish': 2}.", "response":"cv.vocabulary_ in this instance is a dict, where the keys are the words (features) that you've found and the values are indices, which is why they're 0, 1, 2, 3. It's just bad luck that it looked similar to your counts :) You need to work with the cv_fit object to get the counts ``` from sklearn.feature_extraction.text import CountVectorizer texts = [\"dog cat fish\", \"dog cat cat\", \"fish bird\", \"bird\"] cv = CountVectorizer() cv_fit = cv.fit_transform(texts) print(cv.get_feature_names()) print(cv_fit.toarray()) # [\"bird\", \"cat\", \"dog\", \"fish\"] # [[0 1 1 1] # [0 2 1 0] # [1 0 0 1] # [1 0 0 0]] ``` Each row in the array is one of your original documents (strings), each column is a feature (word), and the element is the count for that particular word and document. 
You can see that if you sum each column you'll get the correct number ``` print(cv_fit.toarray().sum(axis=0)) # [2 3 2 2] ``` Honestly though, I'd suggest using collections.Counter or something from NLTK, unless you have some specific reason to use scikit-learn, as it'll be simpler.", "best_answers_score":0.8, "library_name":"scikit-learn", "question_url":"https:\/\/stackoverflow.com\/questions\/27488446\/how-do-i-get-word-frequency-in-a-corpus-using-scikit-learn-countvectorizer", "best_answers_votes":61, "question_length":521, "response_length":1053 }, { "question":"Compute class weight function issue in 'sklearn' library when used in 'Keras' classification (Python 3.8, only in VS code) The classifier script I wrote is working fine; I recently added weight balancing to the fitting. Since I added the weight estimate function using the 'sklearn' library I get the following error: ``` compute_class_weight() takes 1 positional argument but 3 were given ``` This error does not make sense per the documentation. The script should have three inputs, but I am not sure why it says it expects only one. Full error and code information is shown below. Apparently, this is failing only in VS Code. I tested in the Jupyter notebook and it works fine. So it seems to be an issue with VS Code. Has anyone noticed this? (I am using Python 3.8 with the latest versions of other libraries) ``` from sklearn.utils import compute_class_weight train_classes = train_generator.classes class_weights = compute_class_weight( \"balanced\", np.unique(train_classes), train_classes ) class_weights = dict(zip(np.unique(train_classes), class_weights)), class_weights ``` In Jupyter Notebook,", "response":"After spending a lot of time, this is how I fixed it. I still don't know why, but when the code is modified as follows, it works fine. I got the idea after seeing this solution for a similar but slightly different issue.
``` class_weights = compute_class_weight( class_weight = \"balanced\", classes = np.unique(train_classes), y = train_classes ) class_weights = dict(zip(np.unique(train_classes), class_weights)) class_weights ```", "best_answers_score":0.8, "library_name":"scikit-learn", "question_url":"https:\/\/stackoverflow.com\/questions\/69783897\/compute-class-weight-function-issue-in-sklearn-library-when-used-in-keras-cl", "best_answers_votes":92, "question_length":1084, "response_length":429 }, { "question":"What type is a sklearn model? I'm writing some code which evaluates different sklearn models against some data. I am using type hints, both for my own education and to help other people who will eventually have to read my code. My question is how do I specify the type of a sklearn predictor (such as LinearRegression())? For example: ``` def model_tester(model : Predictor, parameter: int ) -> np.ndarray: \"\"\"An example function with type hints.\"\"\" # do stuff to model return values ``` I see the typing library can make new types or I can use TypeVar to do: ``` Predictor = TypeVar('Predictor') ``` but I wouldn't want to use this if there was already a conventional type for an sklearn model. Checking the type of LinearRegression() yields: ``` sklearn.linear_model.base.LinearRegression ``` and this is clearly of use, but only if I am interested in the LinearRegression model.", "response":"From Python 3.8 on (or earlier using typing-extensions), you can use typing.Protocol. Using protocols, you can use a concept called structural subtyping to define exactly the type's expected structure: ``` from typing import Any, Protocol # from typing_extensions import Protocol # for Python <3.8 class ScikitModel(Protocol): def fit(self, X, y, sample_weight=None): ... def predict(self, X): ... def score(self, X, y, sample_weight=None): ... def set_params(self, **params): ... def model_tester(model: ScikitModel, parameter: int) -> Any: model.fit(train_data, train_labels) # this type checks score = model.score(test_data, test_labels) # this type checks ...
```", "best_answers_score":0.8, "library_name":"scikit-learn", "question_url":"https:\/\/stackoverflow.com\/questions\/54868698\/what-type-is-a-sklearn-model", "best_answers_votes":33, "question_length":881, "response_length":419 }, { "question":"Issue with OneHotEncoder for categorical features I want to encode 3 categorical features out of 10 features in my dataset. I use preprocessing from sklearn.preprocessing to do so as follows: ``` from sklearn import preprocessing cat_features = ['color', 'director_name', 'actor_2_name'] enc = preprocessing.OneHotEncoder(categorical_features=cat_features) enc.fit(dataset.values) ``` However, I couldn't proceed as I am getting this error: ``` array = np.array(array, dtype=dtype, order=order, copy=copy) ValueError: could not convert string to float: PG ``` I am surprised that it is complaining about the string, as it is supposed to convert it! Am I missing something here?", "response":"If you read the docs for OneHotEncoder you'll see the input for fit is \"Input array of type int\". So you need to do two steps for your one hot encoded data: ``` from sklearn import preprocessing cat_features = ['color', 'director_name', 'actor_2_name'] enc = preprocessing.LabelEncoder() enc.fit(cat_features) new_cat_features = enc.transform(cat_features) print(new_cat_features) # [1 2 0] new_cat_features = new_cat_features.reshape(-1, 1) # Needs to be the correct shape ohe = preprocessing.OneHotEncoder(sparse=False) # Easier to read print(ohe.fit_transform(new_cat_features)) ``` Output: ``` [[ 0. 1. 0.] [ 0. 0. 1.] [ 1. 0.
0.]] ``` EDIT As of 0.20 this became a bit easier, not only because OneHotEncoder now handles strings nicely, but also because we can transform multiple columns easily using ColumnTransformer, see below for an example ``` from sklearn.compose import ColumnTransformer from sklearn.preprocessing import LabelEncoder, OneHotEncoder import numpy as np X = np.array([['apple', 'red', 1, 'round', 0], ['orange', 'orange', 2, 'round', 0.1], ['bannana', 'yellow', 2, 'long', 0], ['apple', 'green', 1, 'round', 0.2]]) ct = ColumnTransformer( [('oh_enc', OneHotEncoder(sparse=False), [0, 1, 3]),], # the column numbers I want to apply this to remainder='passthrough' # This leaves the rest of my columns in place ) print(ct.fit_transform(X)) # Notice the output is a string ``` Output: ``` [['1.0' '0.0' '0.0' '0.0' '0.0' '1.0' '0.0' '0.0' '1.0' '1' '0'] ['0.0' '0.0' '1.0' '0.0' '1.0' '0.0' '0.0' '0.0' '1.0' '2' '0.1'] ['0.0' '1.0' '0.0' '0.0' '0.0' '0.0' '1.0' '1.0' '0.0' '2' '0'] ['1.0' '0.0' '0.0' '1.0' '0.0' '0.0' '0.0' '0.0' '1.0' '1' '0.2']] ```", "best_answers_score":0.8, "library_name":"scikit-learn", "question_url":"https:\/\/stackoverflow.com\/questions\/43588679\/issue-with-onehotencoder-for-categorical-features", "best_answers_votes":51, "question_length":683, "response_length":1674 }, { "question":"Is there easy way to grid search without cross validation in python? There is an absolutely helpful class GridSearchCV in scikit-learn to do grid search and cross validation, but I don't want to do cross validation. I want to do grid search without cross validation and use the whole data to train. To be more specific, I need to evaluate my model made by RandomForestClassifier with \"oob score\" during grid search. Is there an easy way to do it? Or should I make a class by myself? The points are: I'd like to do grid search in an easy way. I don't want to do cross validation.
I need to use the whole dataset to train (I don't want to split it into training and test data). I need to use the oob score to evaluate during grid search.", "response":"I would really advise against using OOB to evaluate a model, but it is useful to know how to run a grid search outside of GridSearchCV() (I frequently do this so I can save the CV predictions from the best grid for easy model stacking). I think the easiest way is to create your grid of parameters via ParameterGrid() and then just loop through every set of params. For example assuming you have a grid dict, named \"grid\", and RF model object, named \"rf\" (created with oob_score=True so that rf.oob_score_ exists), then you can do something like this: ``` from sklearn.model_selection import ParameterGrid best_score = 0 for g in ParameterGrid(grid): rf.set_params(**g) rf.fit(X, y) # save if best if rf.oob_score_ > best_score: best_score = rf.oob_score_ best_grid = g print('OOB: %0.5f' % best_score) print('Grid:', best_grid) ```", "best_answers_score":0.8, "library_name":"scikit-learn", "question_url":"https:\/\/stackoverflow.com\/questions\/34624978\/is-there-easy-way-to-grid-search-without-cross-validation-in-python", "best_answers_votes":59, "question_length":710, "response_length":705 }, { "question":"Fitting data vs. transforming data in scikit-learn In scikit-learn, all estimators have a fit() method, and depending on whether they are supervised or unsupervised, they also have a predict() or transform() method. I am in the process of writing a transformer for an unsupervised learning task and was wondering if there is a rule of thumb where to put which kind of learning logic. The official documentation is not very helpful in this regard: fit_transform(X, y=None, **fit_params) Fit to data, then transform it. In this context, what is meant by both fitting data and transforming data?", "response":"Fitting finds the internal parameters of a model that will be used to transform data. Transforming applies the parameters to data. You may fit a model to one set of data, and then transform it on a completely different set.
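A minimal sketch of this idea with StandardScaler (illustrative data): the statistics are learned from one array and then applied to a different one:

```python
import numpy as np
from sklearn.preprocessing import StandardScaler

train = np.array([[1.0], [2.0], [3.0]])
new = np.array([[4.0]])

scaler = StandardScaler()
scaler.fit(train)             # learns mean_ and scale_ from train only
print(scaler.mean_)           # [2.]
print(scaler.transform(new))  # (4 - mean) / std, using the train statistics
```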
For example, you fit a linear model to data to get a slope and intercept. Then you use those parameters to transform (i.e., map) new or existing values of x to y. fit_transform is just doing both steps to the same data. A scikit example: You fit data to find the principal components. Then you transform your data to see how it maps onto these components: ``` from sklearn.decomposition import PCA pca = PCA(n_components=2) X = [[1,2],[2,4],[1,3]] pca.fit(X) # This is the model to map data pca.components_ array([[ 0.47185791, 0.88167459], [-0.88167459, 0.47185791]], dtype=float32) # Now we actually map the data pca.transform(X) array([[-1.03896057, -0.17796634], [ 1.19624651, -0.11592512], [-0.15728599, 0.29389156]]) # Or we can do both \"at once\" pca.fit_transform(X) array([[-1.03896058, -0.1779664 ], [ 1.19624662, -0.11592512], [-0.15728603, 0.29389152]], dtype=float32) ```", "best_answers_score":0.8, "library_name":"scikit-learn", "question_url":"https:\/\/stackoverflow.com\/questions\/31572487\/fitting-data-vs-transforming-data-in-scikit-learn", "best_answers_votes":53, "question_length":592, "response_length":1107 }, { "question":"Key Error: None of [Int64Index...] dtype='int64] are in the columns I'm trying to shuffle my indices using the np.random.shuffle() method, but I keep getting an error that I don't understand. I'd appreciate it if someone could help me puzzle this out. Thank you! 
I've tried to use the delimiter=',' and delim_whitespace=0 when I made my raw_csv_data variable at the beginning, as I saw that as the solution of another problem, but it kept throwing the same error ``` import pandas as pd import numpy as np from sklearn.preprocessing import StandardScaler #%% raw_csv_data= pd.read_csv('Absenteeism-data.csv') print(raw_csv_data) #%% df= raw_csv_data.copy() print(display(df)) #%% pd.options.display.max_columns=None pd.options.display.max_rows=None print(display(df)) #%% print(df.info()) #%% df=df.drop(['ID'], axis=1) #%% print(display(df.head())) #%% #Our goal is to see who is more likely to be absent. Let's define #our targets from our dependent variable, Absenteeism Time in Hours print(df['Absenteeism Time in Hours']) print(df['Absenteeism Time in Hours'].median()) #%% targets= np.where(df['Absenteeism Time in Hours']>df['Absenteeism Time in Hours'].median(),1,0) #%% print(targets) #%% df['Excessive Absenteeism']= targets #%% print(df.head()) #%% #Let's Separate the Day and Month Values to see if there is correlation #between Day of week\/month with absence print(type(df['Date'][0])) #%% df['Date']= pd.to_datetime(df['Date'], format='%d\/%m\/%Y') #%% print(df['Date']) print(type(df['Date'][0])) #%% #Extracting the Month Value print(df['Date'][0].month) #%% list_months=[] print(list_months) #%% print(df.shape) #%% for i in range(df.shape[0]): list_months.append(df['Date'][i].month) #%% print(list_months) #%% print(len(list_months)) #%% #Let's Create a Month Value Column for df df['Month Value']= list_months #%% print(df.head()) #%% #Now let's extract the day of the week from date df['Date'][699].weekday() #%% def date_to_weekday(date_value): return date_value.weekday() #%% df['Day of the Week']= df['Date'].apply(date_to_weekday) #%% print(df.head()) #%% df= df.drop(['Date'], axis=1) #%% print(df.columns.values) #%% reordered_columns= ['Reason for Absence', 'Month Value','Day of the Week','Transportation Expense', 'Distance to 
Work', 'Age', 'Daily Work Load Average', 'Body Mass Index', 'Education', 'Children', 'Pets', 'Absenteeism Time in Hours', 'Excessive Absenteeism'] #%% df=df[reordered_columns] print(df.head()) #%% #First Checkpoint df_date_mod= df.copy() #%% print(df_date_mod) #%% #Let's Standardize our inputs, ignoring the Reasons and Education Columns #Because they are labelled by a separate categorical criteria, not numerically print(df_date_mod.columns.values) #%% unscaled_inputs= df_date_mod.loc[:, ['Month Value','Day of the Week','Transportation Expense','Distance to Work','Age','Daily Work Load Average','Body Mass Index','Children','Pets','Absenteeism Time in Hours']] #%% print(display(unscaled_inputs)) #%% absenteeism_scaler= StandardScaler() #%% absenteeism_scaler.fit(unscaled_inputs) #%% scaled_inputs= absenteeism_scaler.transform(unscaled_inputs) #%% print(display(scaled_inputs)) #%% print(scaled_inputs.shape) #%% scaled_inputs= pd.DataFrame(scaled_inputs, columns=['Month Value','Day of the Week','Transportation Expense','Distance to Work','Age','Daily Work Load Average','Body Mass Index','Children','Pets','Absenteeism Time in Hours']) print(display(scaled_inputs)) #%% df_date_mod= df_date_mod.drop(['Month Value','Day of the Week','Transportation Expense','Distance to Work','Age','Daily Work Load Average','Body Mass Index','Children','Pets','Absenteeism Time in Hours'], axis=1) print(display(df_date_mod)) #%% df_date_mod=pd.concat([df_date_mod,scaled_inputs], axis=1) print(display(df_date_mod)) #%% df_date_mod= df_date_mod[reordered_columns] print(display(df_date_mod.head())) #%% #Checkpoint df_date_scale_mod= df_date_mod.copy() print(display(df_date_scale_mod.head())) #%% #Let's Analyze the Reason for Absence Category print(df_date_scale_mod['Reason for Absence']) #%% print(df_date_scale_mod['Reason for Absence'].min()) print(df_date_scale_mod['Reason for Absence'].max()) #%% print(df_date_scale_mod['Reason for Absence'].unique()) #%% print(len(df_date_scale_mod['Reason 
for Absence'].unique())) #%% print(sorted(df['Reason for Absence'].unique())) #%% reason_columns= pd.get_dummies(df['Reason for Absence']) print(reason_columns) #%% reason_columns['check']= reason_columns.sum(axis=1) print(reason_columns) #%% print(reason_columns['check'].sum(axis=0)) #%% print(reason_columns['check'].unique()) #%% reason_columns=reason_columns.drop(['check'], axis=1) print(reason_columns) #%% reason_columns=pd.get_dummies(df_date_scale_mod['Reason for Absence'], drop_first=True) print(reason_columns) #%% print(df_date_scale_mod.columns.values) #%% print(reason_columns.columns.values) #%% df_date_scale_mod= df_date_scale_mod.drop(['Reason for Absence'], axis=1) print(df_date_scale_mod) #%% reason_type_1= reason_columns.loc[:, 1:14].max(axis=1) reason_type_2= reason_columns.loc[:, 15:17].max(axis=1) reason_type_3= reason_columns.loc[:, 18:21].max(axis=1) reason_type_4= reason_columns.loc[:, 22:].max(axis=1) #%% print(reason_type_1) print(reason_type_2) print(reason_type_3) print(reason_type_4) #%% print(df_date_scale_mod.head()) #%% df_date_scale_mod= pd.concat([df_date_scale_mod, reason_type_1,reason_type_2, reason_type_3, reason_type_4], axis=1) print(df_date_scale_mod.head()) #%% print(df_date_scale_mod.columns.values) #%% column_names= ['Month Value','Day of the Week','Transportation Expense', 'Distance to Work','Age','Daily Work Load Average','Body Mass Index', 'Education','Children','Pets','Absenteeism Time in Hours', 'Excessive Absenteeism', 'Reason_1', 'Reason_2', 'Reason_3', 'Reason_4'] df_date_scale_mod.columns= column_names print(df_date_scale_mod.head()) #%% column_names_reordered= ['Reason_1', 'Reason_2', 'Reason_3', 'Reason_4','Month Value','Day of the Week','Transportation Expense', 'Distance to Work','Age','Daily Work Load Average','Body Mass Index', 'Education','Children','Pets','Absenteeism Time in Hours', 'Excessive Absenteeism'] df_date_scale_mod=df_date_scale_mod[column_names_reordered] print(display(df_date_scale_mod.head())) 
#%% #Checkpoint df_date_scale_mod_reas= df_date_scale_mod.copy() print(df_date_scale_mod_reas.head()) #%% #Let's Look at the Education column now print(df_date_scale_mod_reas['Education'].unique()) #This shows us that education is rated from 1-4 based on level #of completion #%% print(df_date_scale_mod_reas['Education'].value_counts()) #The overwhelming majority of workers are highschool educated, while the #rest have higher degrees #%% #We'll create our dummy variables as highschool and higher education df_date_scale_mod_reas['Education']= df_date_scale_mod_reas['Education'].map({1:0, 2:1, 3:1, 4:1}) #%% print(df_date_scale_mod_reas['Education'].unique()) #%% print(df_date_scale_mod_reas['Education'].value_counts()) #%% #Checkpoint df_preprocessed= df_date_scale_mod_reas.copy() print(display(df_preprocessed.head())) #%% #%% #Split Inputs from targets scaled_inputs_all= df_preprocessed.loc[:,'Reason_1':'Absenteeism Time in Hours'] print(display(scaled_inputs_all.head())) print(scaled_inputs_all.shape) #%% targets_all= df_preprocessed.loc[:,'Excessive Absenteeism'] print(display(targets_all.head())) print(targets_all.shape) #%% #Shuffle Inputs and targets shuffled_indices= np.arange(scaled_inputs_all.shape[0]) np.random.shuffle(shuffled_indices) shuffled_inputs= scaled_inputs_all[shuffled_indices] shuffled_targets= targets_all[shuffled_indices] ``` This is the error I keep getting when I try to shuffle my indices: ``` KeyError Traceback (most recent call last) in 1 shuffled_indices= np.arange(scaled_inputs_all.shape[0]) 2 np.random.shuffle(shuffled_indices) ----> 3 shuffled_inputs= scaled_inputs_all[shuffled_indices] 4 shuffled_targets= targets_all[shuffled_indices] ``` ~\\Anaconda3\\lib\\site-packages\\pandas\\core\\frame.py in getitem(self, key) 2932 key = list(key) 2933 indexer = self.loc._convert_to_indexer(key, axis=1, -> 2934 raise_missing=True) 2935 2936 # take() does not accept boolean indexers ~\\Anaconda3\\lib\\site-packages\\pandas\\core\\indexing.py in 
_convert_to_indexer(self, obj, axis, is_setter, raise_missing) 1352 kwargs = {'raise_missing': True if is_setter else 1353 raise_missing} -> 1354 return self._get_listlike_indexer(obj, axis, **kwargs)[1] 1355 else: 1356 try: ~\\Anaconda3\\lib\\site-packages\\pandas\\core\\indexing.py in _get_listlike_indexer(self, key, axis, raise_missing) 1159 self._validate_read_indexer(keyarr, indexer, 1160 o._get_axis_number(axis), -> 1161 raise_missing=raise_missing) 1162 return keyarr, indexer 1163 ~\\Anaconda3\\lib\\site-packages\\pandas\\core\\indexing.py in _validate_read_indexer(self, key, indexer, axis, raise_missing) 1244 raise KeyError( 1245 u\"None of [{key}] are in the [{axis}]\".format( -> 1246 key=key, axis=self.obj._get_axis_name(axis))) 1247 1248 # We (temporarily) allow for some missing keys with .loc, except in KeyError: \"None of [Int64Index([560, 320, 405, 141, 154, 370, 656, 26, 444, 307,\\n ...\\n 429, 542, 676, 588, 315, 284, 293, 607, 197, 250],\\n dtype='int64', length=700)] are in the [columns]\"", "response":"You created your scaled_inputs_all DataFrame using loc function, so it most likely contains no consecutive indices. On the other hand, you created shuffled_indices as a shuffle from just a range of consecutive numbers. Remember that scaled_inputs_all[shuffled_indices] gets rows of scaled_inputs_all which have index values equal to elements of shuffled_indices. Maybe you should write: ``` scaled_inputs_all.iloc[shuffled_indices] ``` Note that iloc provides integer-location based indexing, regardless of index values, i.e. just what you need.", "best_answers_score":0.8, "library_name":"scikit-learn", "question_url":"https:\/\/stackoverflow.com\/questions\/55667169\/key-error-none-of-int64index-dtype-int64-are-in-the-columns", "best_answers_votes":29, "question_length":9243, "response_length":545 }, { "question":"How do you access tree depth in Python's scikit-learn? I'm using scikit-learn to create a Random Forest. 
However, I want to find the individual depths of each tree. It seems like a simple attribute to have, but according to the documentation (http:\/\/scikit-learn.org\/stable\/modules\/generated\/sklearn.ensemble.RandomForestClassifier.html) there is no way of accessing it. If this isn't possible, is there a way of accessing the tree depth from a Decision Tree model? Any help would be appreciated. Thank you.", "response":"Each instance of RandomForestClassifier has an estimators_ attribute, which is a list of DecisionTreeClassifier instances. The documentation shows that an instance of DecisionTreeClassifier has a tree_ attribute, which is an instance of the (undocumented, I believe) Tree class. Some exploration in the interpreter shows that each Tree instance has a max_depth parameter which appears to be what you're looking for -- again, it's undocumented. In any case, if forest is your instance of RandomForestClassifier, then: ``` >>> [estimator.tree_.max_depth for estimator in forest.estimators_] [9, 10, 9, 11, 9, 9, 11, 7, 13, 10] ``` should do the trick. Each estimator also has a get_depth() method that can be used to retrieve the same value with briefer syntax: ``` >>> [estimator.get_depth() for estimator in forest.estimators_] [9, 10, 9, 11, 9, 9, 11, 7, 13, 10] ``` To avoid mixup, it should be noted that there is an attribute of each estimator (and not each estimator's tree_) called max_depth which returns the setting of the parameter rather than the depth of the actual tree.
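For instance, a quick sketch (the forest settings here are arbitrary) that collects one depth per tree and reports the deepest:

```python
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier

X, y = load_iris(return_X_y=True)
forest = RandomForestClassifier(n_estimators=10, random_state=0).fit(X, y)

depths = [est.get_depth() for est in forest.estimators_]
print(depths)       # individual depth of each tree
print(max(depths))  # deepest tree in the forest
```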
How estimator.get_depth(), estimator.tree_.max_depth, and estimator.max_depth relate to each other is clarified in the example below: ``` from sklearn.datasets import load_iris from sklearn.ensemble import RandomForestClassifier clf = RandomForestClassifier(n_estimators=3, random_state=4, max_depth=6) iris = load_iris() clf.fit(iris['data'], iris['target']) [(est.get_depth(), est.tree_.max_depth, est.max_depth) for est in clf.estimators_] ``` Out: ``` [(6, 6, 6), (3, 3, 6), (4, 4, 6)] ``` Setting max depth to the default value None would allow the first tree to expand to depth 7 and the output would be: ``` [(7, 7, None), (3, 3, None), (4, 4, None)] ```", "best_answers_score":0.8, "library_name":"scikit-learn", "question_url":"https:\/\/stackoverflow.com\/questions\/34214087\/how-do-you-access-tree-depth-in-pythons-scikit-learn", "best_answers_votes":58, "question_length":507, "response_length":1744 }, { "question":"Scikit-learn cross validation scoring for regression How can one use cross_val_score for regression? The default scoring seems to be accuracy, which is not very meaningful for regression. Supposedly I would like to use mean squared error, is it possible to specify that in cross_val_score? Tried the following two but doesn't work: ``` scores = cross_validation.cross_val_score(svr, diabetes.data, diabetes.target, cv=5, scoring='mean_squared_error') ``` and ``` scores = cross_validation.cross_val_score(svr, diabetes.data, diabetes.target, cv=5, scoring=metrics.mean_squared_error) ``` The first one generates a list of negative numbers while mean squared error should always be non-negative. 
The second one complains that: ``` mean_squared_error() takes exactly 2 arguments (3 given) ```", "response":"I don't have the reputation to comment, but I want to provide this link for you and\/or passersby where the negative output of the MSE in scikit learn is discussed - https:\/\/github.com\/scikit-learn\/scikit-learn\/issues\/2439 In addition (to make this a real answer) your first option is correct in that not only is MSE the metric you want to use to compare models but R^2 cannot be calculated depending (I think) on the type of cross-val you are using. If you choose MSE as a scorer, it outputs a list of errors which you can then take the mean of, like so: ``` # Doing linear regression with leave one out cross val from sklearn import cross_validation, linear_model import numpy as np # Including this to remind you that it is necessary to use numpy arrays rather # than lists otherwise you will get an error X_digits = np.array(x) Y_digits = np.array(y) loo = cross_validation.LeaveOneOut(len(Y_digits)) regr = linear_model.LinearRegression() scores = cross_validation.cross_val_score(regr, X_digits, Y_digits, scoring='mean_squared_error', cv=loo,) # This will print the mean of the list of errors that were output and # provide your metric for evaluation print(scores.mean()) ```", "best_answers_score":0.8, "library_name":"scikit-learn", "question_url":"https:\/\/stackoverflow.com\/questions\/24132237\/scikit-learn-cross-validation-scoring-for-regression", "best_answers_votes":42, "question_length":1905, "response_length":1180 }, { "question":"Spectral Clustering a graph in python I'd like to cluster a graph in python using spectral clustering. Spectral clustering is a more general technique which can be applied not only to graphs, but also images, or any sort of data; however, it's considered an exceptional graph clustering technique. Sadly, I can't find examples of spectral clustering graphs in python online.
Scikit Learn has two spectral clustering methods documented: SpectralClustering and spectral_clustering which seem like they're not aliases. Both of those methods mention that they could be used on graphs, but do not offer specific instructions. Neither does the user guide. I've asked for such an example from the developers, but they're overworked and haven't gotten to it. A good network to document this against is the Karate Club Network. It's included as a method in networkx. I'd love some direction in how to go about this. If someone can help me figure it out, I can add the documentation to scikit learn. Notes: A question much like this one has already been asked on this site.", "response":"Without much experience with Spectral-clustering and just going by the docs (skip to the end for the results!): Code: ``` import numpy as np import networkx as nx from sklearn.cluster import SpectralClustering from sklearn import metrics np.random.seed(1) # Get your mentioned graph G = nx.karate_club_graph() # Get ground-truth: club-labels -> transform to 0\/1 np-array # (possible overcomplicated networkx usage here) gt_dict = nx.get_node_attributes(G, 'club') gt = [gt_dict[i] for i in G.nodes()] gt = np.array([0 if i == 'Mr. 
Hi' else 1 for i in gt]) # Get adjacency-matrix as numpy-array adj_mat = nx.to_numpy_matrix(G) print('ground truth') print(gt) # Cluster sc = SpectralClustering(2, affinity='precomputed', n_init=100) sc.fit(adj_mat) # Compare ground-truth and clustering-results print('spectral clustering') print(sc.labels_) print('just for better-visualization: invert clusters (permutation)') print(np.abs(sc.labels_ - 1)) # Calculate some clustering metrics print(metrics.adjusted_rand_score(gt, sc.labels_)) print(metrics.adjusted_mutual_info_score(gt, sc.labels_)) ``` Output: ``` ground truth [0 0 0 0 0 0 0 0 0 1 0 0 0 0 1 1 0 0 1 0 1 0 1 1 1 1 1 1 1 1 1 1 1 1] spectral clustering [1 1 0 1 1 1 1 0 0 0 1 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0] just for better-visualization: invert clusters (permutation) [0 0 1 0 0 0 0 1 1 1 0 1 1 1 1 1 0 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1] 0.204094758281 0.271689477828 ``` The general idea: Introduction on the data and task from here: The nodes in the graph represent the 34 members in a college Karate club. (Zachary is a sociologist, and he was one of the members.) An edge between two nodes indicates that the two members spent significant time together outside normal club meetings. The dataset is interesting because while Zachary was collecting his data, there was a dispute in the Karate club, and it split into two factions: one led by \u201cMr. Hi\u201d, and one led by \u201cJohn A\u201d. It turns out that using only the connectivity information (the edges), it is possible to recover the two factions. Using sklearn & spectral-clustering to tackle this: If affinity is the adjacency matrix of a graph, this method can be used to find normalized graph cuts. This describes normalized graph cuts as: Find two disjoint partitions A and B of the vertices V of a graph, so that A \u222a B = V and A \u2229 B = \u2205 Given a similarity measure w(i,j) between two vertices (e.g. 
identity when they are connected) a cut value (and its normalized version) is defined as: cut(A, B) = SUM u in A, v in B: w(u, v) ... we seek the minimization of disassociation between the groups A and B and the maximization of the association within each group Sounds alright. So we create the adjacency matrix (nx.to_numpy_matrix(G)) and set the param affinity to precomputed (as our adjancency-matrix is our precomputed similarity-measure). Alternatively, using precomputed, a user-provided affinity matrix can be used. Edit: While unfamiliar with this, i looked for parameters to tune and found assign_labels: The strategy to use to assign labels in the embedding space. There are two ways to assign labels after the laplacian embedding. k-means can be applied and is a popular choice. But it can also be sensitive to initialization. Discretization is another approach which is less sensitive to random initialization. So trying the less sensitive approach: ``` sc = SpectralClustering(2, affinity='precomputed', n_init=100, assign_labels='discretize') ``` Output: ``` ground truth [0 0 0 0 0 0 0 0 0 1 0 0 0 0 1 1 0 0 1 0 1 0 1 1 1 1 1 1 1 1 1 1 1 1] spectral clustering [0 0 1 0 0 0 0 0 1 1 0 0 0 0 1 1 0 0 1 0 1 0 1 1 1 1 1 1 1 1 1 1 1 1] just for better-visualization: invert clusters (permutation) [1 1 0 1 1 1 1 1 0 0 1 1 1 1 0 0 1 1 0 1 0 1 0 0 0 0 0 0 0 0 0 0 0 0] 0.771725032425 0.722546051351 ``` That's a pretty much perfect fit to the ground-truth!", "best_answers_score":0.8, "library_name":"scikit-learn", "question_url":"https:\/\/stackoverflow.com\/questions\/46258657\/spectral-clustering-a-graph-in-python", "best_answers_votes":40, "question_length":1063, "response_length":3881 }, { "question":"How to apply LabelEncoder for a specific column in Pandas dataframe I have a dataset loaded by dataframe where the class label needs to be encoded using LabelEncoder from scikit-learn. 
The column label is the class label column which has the following classes: ``` [\u2018Standing\u2019, \u2018Walking\u2019, \u2018Running\u2019, \u2018null\u2019] ``` To perform label encoding, I tried the following but it does not work. How can I fix it? ``` from sklearn import preprocessing import pandas as pd df = pd.read_csv('dataset.csv', sep=',') df.apply(preprocessing.LabelEncoder().fit_transform(df['label'])) ```", "response":"You can try as following: ``` le = preprocessing.LabelEncoder() df['label'] = le.fit_transform(df.label.values) ``` Or following would work too: ``` df['label'] = le.fit_transform(df['label']) ``` It will replace original label values in dataframe with encoded labels.", "best_answers_score":0.8, "library_name":"scikit-learn", "question_url":"https:\/\/stackoverflow.com\/questions\/50258960\/how-to-apply-labelencoder-for-a-specific-column-in-pandas-dataframe", "best_answers_votes":64, "question_length":569, "response_length":268 }, { "question":"What are the pitfalls of using Dill to serialise scikit-learn\/statsmodels models? I need to serialise scikit-learn\/statsmodels models such that all the dependencies (code + data) are packaged in an artefact and this artefact can be used to initialise the model and make predictions. Using the pickle module is not an option because this will only take care of the data dependency (the code will not be packaged). So, I have been conducting experiments with Dill. To make my question more precise, the following is an example where I build a model and persist it. ``` from sklearn import datasets from sklearn import svm from sklearn.preprocessing import Normalizer import dill digits = datasets.load_digits() training_data_X = digits.data[:-5] training_data_Y = digits.target[:-5] test_data_X = digits.data[-5:] test_data_Y = digits.target[-5:] class Model: def __init__(self): self.normalizer = Normalizer() self.clf = svm.SVC(gamma=0.001, C=100.) 
def train(self, training_data_X, training_data_Y): normalised_training_data_X = self.normalizer.fit_transform(training_data_X) self.clf.fit(normalised_training_data_X, training_data_Y) def predict(self, test_data_X): return self.clf.predict(self.normalizer.transform(test_data_X)) model = Model() model.train(training_data_X, training_data_Y) print model.predict(test_data_X) dill.dump(model, open(\"my_model.dill\", 'w')) ``` Corresponding to this, here is how I initialise the persisted model (in a new session) and make a prediction. Note that this code does not explicitly initialise or have knowledge of the class Model. ``` import dill from sklearn import datasets digits = datasets.load_digits() training_data_X = digits.data[:-5] training_data_Y = digits.target[:-5] test_data_X = digits.data[-5:] test_data_Y = digits.target[-5:] with open(\"my_model.dill\") as model_file: model = dill.load(model_file) print model.predict(test_data_X) ``` Has anyone used Dill in this way? The idea is for a data scientist to extend a ModelWrapper class for each model they implement and then build the infrastructure around this that persists the models, deploys them as services and manages the entire lifecycle of the model. ``` class ModelWrapper(object): __metaclass__ = abc.ABCMeta def __init__(self, model): self.model = model @abc.abstractmethod def predict(self, input): return def dumps(self): return dill.dumps(self) def loads(self, model_string): self.model = dill.loads(model_string) ``` Other than the security implications (arbitrary code execution) and the requirement that modules like scikit-learn will have to be installed on the machine that's serving the model, are there any other pitfalls in this approach? Any comments or words of advice would be most helpful. I think that YHat and Dato have taken a similar approach but rolled out their own implementations of Dill for similar purposes.", "response":"I'm the dill author.
dill was built to do exactly what you are doing\u2026 (to persist numerical fits within class instances for statistics) where these objects can then be distributed to different resources and run in an embarrassingly parallel fashion. So, the answer is yes -- I have run code like yours, using mystic and\/or sklearn. Note that many of the authors of sklearn use cloudpickle for enabling parallel computing on sklearn objects, and not dill. dill can pickle more types of objects than cloudpickle, however cloudpickle is slightly better (at this time of writing) at pickling objects that make references to the global dictionary as part of a closure -- by default, dill does this by reference, while cloudpickle physically stores the dependencies. However, dill has a \"recurse\" mode that acts like cloudpickle, so the difference when using this mode is minor. (To enable \"recurse\" mode, do dill.settings['recurse'] = True, or use recurse=True as a flag in dill.dump). Another minor difference is that cloudpickle contains special support for things like scikits.timeseries and PIL.Image, while dill does not. On the plus side, dill does not pickle classes by reference, so by pickling a class instance, it serializes the class object itself -- which is a big advantage, as it serializes instances of derived classes of classifiers, models, etc. from sklearn in their exact state at the time of pickling\u2026 so if you make modifications to the class object, the instance still unpickles correctly. There are other advantages of dill over cloudpickle, aside from the broader range of objects (and typically a smaller pickle) -- however, I won't list them here. You asked for pitfalls, so differences are not pitfalls. Major pitfalls: You should have anything your classes refer to installed on the remote machine, just in case dill (or cloudpickle) pickles it by reference. You should try to make your classes and class methods as self-contained as possible (e.g.
don't refer to objects defined in the global scope from your classes). sklearn objects can be big, so saving many of them to a single pickle is not always a good idea\u2026 you might want to use klepto which has a dict interface to caching and archiving, and enables you to configure the archive interface to store each key-value pair individually (e.g. one entry per file).", "best_answers_score":0.8, "library_name":"scikit-learn", "question_url":"https:\/\/stackoverflow.com\/questions\/32757656\/what-are-the-pitfalls-of-using-dill-to-serialise-scikit-learn-statsmodels-models", "best_answers_votes":48, "question_length":2857, "response_length":2343 }, { "question":"Early stopping with Keras and sklearn GridSearchCV cross-validation I wish to implement early stopping with Keras and sklean's GridSearchCV. The working code example below is modified from How to Grid Search Hyperparameters for Deep Learning Models in Python With Keras. The data set may be downloaded from here. The modification adds the Keras EarlyStopping callback class to prevent over-fitting. For this to be effective it requires the monitor='val_acc' argument for monitoring validation accuracy. For val_acc to be available KerasClassifier requires the validation_split=0.1 to generate validation accuracy, else EarlyStopping raises RuntimeWarning: Early stopping requires val_acc available!. Note the FIXME: code comment! Note we could replace val_acc by val_loss! Question: How can I use the cross-validation data set generated by the GridSearchCV k-fold algorithm instead of wasting 10% of the training data for an early stopping validation set? 
```python # Use scikit-learn to grid search the learning rate and momentum import numpy from sklearn.model_selection import GridSearchCV from keras.models import Sequential from keras.layers import Dense from keras.wrappers.scikit_learn import KerasClassifier from keras.optimizers import SGD # Function to create model, required for KerasClassifier def create_model(learn_rate=0.01, momentum=0): # create model model = Sequential() model.add(Dense(12, input_dim=8, activation='relu')) model.add(Dense(1, activation='sigmoid')) # Compile model optimizer = SGD(lr=learn_rate, momentum=momentum) model.compile(loss='binary_crossentropy', optimizer=optimizer, metrics=['accuracy']) return model # Early stopping from keras.callbacks import EarlyStopping stopper = EarlyStopping(monitor='val_acc', patience=3, verbose=1) # fix random seed for reproducibility seed = 7 numpy.random.seed(seed) # load dataset dataset = numpy.loadtxt(\"pima-indians-diabetes.csv\", delimiter=\",\") # split into input (X) and output (Y) variables X = dataset[:,0:8] Y = dataset[:,8] # create model model = KerasClassifier( build_fn=create_model, epochs=100, batch_size=10, validation_split=0.1, # FIXME: Instead use GridSearchCV k-fold validation data. verbose=2) # define the grid search parameters learn_rate = [0.01, 0.1] momentum = [0.2, 0.4] param_grid = dict(learn_rate=learn_rate, momentum=momentum) grid = GridSearchCV(estimator=model, param_grid=param_grid, verbose=2, n_jobs=1) # Fitting parameters fit_params = dict(callbacks=[stopper]) # Grid search. 
grid_result = grid.fit(X, Y, **fit_params) # summarize results print(\"Best: %f using %s\" % (grid_result.best_score_, grid_result.best_params_)) means = grid_result.cv_results_['mean_test_score'] stds = grid_result.cv_results_['std_test_score'] params = grid_result.cv_results_['params'] for mean, stdev, param in zip(means, stds, params): print(\"%f (%f) with: %r\" % (mean, stdev, param)) ```", "response":"[Answer after the question was edited & clarified:] Before rushing into implementation issues, it is always a good practice to take some time to think about the methodology and the task itself; arguably, intermingling early stopping with the cross validation procedure is not a good idea. Let's make up an example to highlight the argument. Suppose that you indeed use early stopping with 100 epochs, and 5-fold cross validation (CV) for hyperparameter selection. Suppose also that you end up with a hyperparameter set X giving best performance, say 89.3% binary classification accuracy. Now suppose that your second-best hyperparameter set, Y, gives 89.2% accuracy. Examining closely the individual CV folds, you see that, for your best case X, 3 out of the 5 CV folds exhausted the max 100 epochs, while in the other 2 early stopping kicked in, say in 95 and 93 epochs respectively. Now imagine that, examining your second-best set Y, you see that again 3 out of the 5 CV folds exhausted the 100 epochs, while the other 2 both stopped early enough at ~ 80 epochs. What would be your conclusion from such an experiment? Arguably, you would have found yourself in an inconclusive situation; further experiments might reveal which is actually the best hyperparameter set, provided of course that you would have thought to look into these details of the results in the first place. And needless to say, if all this was automated through a callback, you might have missed your best model despite the fact that you would have actually tried it. 
The whole CV idea is implicitly based on the \"all other being equal\" argument (which of course is never true in practice, only approximated in the best possible way). If you feel that the number of epochs should be a hyperparameter, just include it explicitly in your CV as such, rather than inserting it through the back door of early stopping, thus possibly compromising the whole process (not to mention that early stopping has itself a hyperparameter, patience). Not intermingling these two techniques doesn't mean of course that you cannot use them sequentially: once you have obtained your best hyperparameters through CV, you can always employ early stopping when fitting the model in your whole training set (provided of course that you do have a separate validation set). The field of deep neural nets is still (very) young, and it is true that it has yet to establish its \"best practice\" guidelines; add the fact that, thanks to an amazing community, there are all sort of tools available in open source implementations, and you can easily find yourself into the (admittedly tempting) position of mixing things up just because they happen to be available. I am not necessarily saying that this is what you are attempting to do here - I am just urging for more caution when combining ideas that may have not been designed to work along together...", "best_answers_score":0.8, "library_name":"scikit-learn", "question_url":"https:\/\/stackoverflow.com\/questions\/48127550\/early-stopping-with-keras-and-sklearn-gridsearchcv-cross-validation", "best_answers_votes":38, "question_length":2882, "response_length":2897 }, { "question":"Information Gain calculation with Scikit-learn I am using Scikit-learn for text classification. I want to calculate the Information Gain for each attribute with respect to a class in a (sparse) document-term matrix. the Information Gain is defined as H(Class) - H(Class | Attribute), where H is the entropy. 
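The identity just stated — H(Class) - H(Class | Attribute) being exactly the mutual information — can be checked numerically. Here is a minimal sketch with made-up toy labels (assuming scikit-learn is available), computing the entropy difference by hand and comparing it to sklearn.metrics.mutual_info_score, which works in natural-log units:

```python
import numpy as np
from sklearn.metrics import mutual_info_score

# toy discrete data: class labels y and one binary attribute x
y = np.array([0, 0, 0, 1, 1, 1, 1, 1])
x = np.array([0, 0, 1, 1, 1, 1, 0, 1])

def entropy(v):
    _, counts = np.unique(v, return_counts=True)
    p = counts / counts.sum()
    return -np.sum(p * np.log(p))  # natural-log entropy, in nats

# information gain = H(y) - H(y | x), computed directly
h_y_given_x = sum((x == v).mean() * entropy(y[x == v]) for v in np.unique(x))
info_gain = entropy(y) - h_y_given_x

# coincides with sklearn's mutual information estimate
assert np.isclose(info_gain, mutual_info_score(x, y))
print(round(info_gain, 4))
```

The toy arrays are illustrative only; on real data the same comparison holds for any discrete attribute/class pair.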
In Weka, this would be calculated with InfoGainAttribute. But I haven't found this measure in scikit-learn. (It was suggested that the formula above for Information Gain is the same measure as mutual information. This also matches the definition in Wikipedia. Is it possible to use a specific setting for mutual information in scikit-learn to accomplish this task?)", "response":"You can use scikit-learn's mutual_info_classif. Here is an example: ``` from sklearn.datasets import fetch_20newsgroups from sklearn.feature_selection import mutual_info_classif from sklearn.feature_extraction.text import CountVectorizer categories = ['talk.religion.misc', 'comp.graphics', 'sci.space'] newsgroups_train = fetch_20newsgroups(subset='train', categories=categories) X, Y = newsgroups_train.data, newsgroups_train.target cv = CountVectorizer(max_df=0.95, min_df=2, max_features=10000, stop_words='english') X_vec = cv.fit_transform(X) res = dict(zip(cv.get_feature_names(), mutual_info_classif(X_vec, Y, discrete_features=True) )) print(res) ``` This will output a dictionary of each attribute, i.e.
item in the vocabulary as keys and their information gain as values here is a sample of the output ``` {'bible': 0.072327479595571439, 'christ': 0.057293733680219089, 'christian': 0.12862867565281702, 'christians': 0.068511328611810071, 'file': 0.048056478042481157, 'god': 0.12252523919766867, 'gov': 0.053547274485785577, 'graphics': 0.13044709565039875, 'jesus': 0.09245436105573257, 'launch': 0.059882179387444862, 'moon': 0.064977781072557236, 'morality': 0.050235104394123153, 'nasa': 0.11146392824624819, 'orbit': 0.087254803670582998, 'people': 0.068118370234354936, 'prb': 0.049176995204404481, 'religion': 0.067695617096125316, 'shuttle': 0.053440976618359261, 'space': 0.20115901737978983, 'thanks': 0.060202010019767334} ```", "best_answers_score":0.8, "library_name":"scikit-learn", "question_url":"https:\/\/stackoverflow.com\/questions\/46752650\/information-gain-calculation-with-scikit-learn", "best_answers_votes":32, "question_length":673, "response_length":1448 }, { "question":"Distinguishing overfitting vs good prediction These are questions on how to calculate & reduce overfitting in machine learning. I think many new to machine learning will have the same questions, so I tried to be clear with my examples and questions in hope that answers here can help others. I have a very small sample of texts and I'm trying to predict values associated with them. I've used sklearn to calculate tf-idf, and insert those into a regression model for prediction. This gives me 26 samples with 6323 features - not a lot.. I know: ``` >> count_vectorizer = CountVectorizer(min_n=1, max_n=1) >> term_freq = count_vectorizer.fit_transform(texts) >> transformer = TfidfTransformer() >> X = transformer.fit_transform(term_freq) >> print X.shape (26, 6323) ``` Inserting those 26 samples of 6323 features (X) and associated scores (y), into a LinearRegression model, gives good predictions. 
These are obtained using leave-one-out cross validation, from cross_validation.LeaveOneOut(X.shape[0], indices=True) : ``` using ngrams (n=1): human machine points-off %error 8.67 8.27 0.40 1.98 8.00 7.33 0.67 3.34 ... ... ... ... 5.00 6.61 1.61 8.06 9.00 7.50 1.50 7.50 mean: 7.59 7.64 1.29 6.47 std : 1.94 0.56 1.38 6.91 ``` Pretty good! Using ngrams (n=300) instead of unigrams (n=1), similar results occur, which is obviously not right. No 300-words occur in any of the texts, so the prediction should fail, but it doesn't: ``` using ngrams (n=300): human machine points-off %error 8.67 7.55 1.12 5.60 8.00 7.57 0.43 2.13 ... ... ... ... mean: 7.59 7.59 1.52 7.59 std : 1.94 0.08 1.32 6.61 ``` Question 1: This might mean that the prediction model is overfitting the data. I only know this because I chose an extreme value for the ngrams (n=300) which I KNOW can't produce good results. But if I didn't have this knowledge, how would you normally tell that the model is over-fitting? In other words, if a reasonable measure (n=1) were used, how would you know that the good prediction was a result of being overfit vs. the model just working well? Question 2: What is the best way of preventing over-fitting (in this situation) to be sure that the prediction results are good or not? Question 3: If LeaveOneOut cross validation is used, how can the model possibly over-fit with good results? Over-fitting means the prediction accuracy will suffer - so why doesn't it suffer on the prediction for the text being left out? The only reason I can think of: in a tf-idf sparse matrix of mainly 0s, there is strong overlap between texts because so many terms are 0s - the regression then thinks the texts correlate highly. Please answer any of the questions even if you don't know them all. Thanks!", "response":"how would you normally tell that the model is over-fitting? 
One useful rule of thumb is that you may be overfitting when your model's performance on its own training set is much better than on its held-out validation set or in a cross-validation setting. That's not all there is to it, though. The blog entry I linked to describes a procedure for testing for overfitting: plot training set and validation set error as a function of training set size. If they show a stable gap at the right end of the plot, you're probably overfitting. What is the best way of preventing over-fitting (in this situation) to be sure that the prediction results are good or not? Use a held-out test set. Only do evaluation on this set when you're completely done with model selection (hyperparameter tuning); don't train on it, don't use it in (cross-)validation. The score you get on the test set is the model's final evaluation. This should show whether you've accidentally overfit the validation set(s). [Machine learning conferences are sometimes set up like a competition, where the test set is not given to the researchers until after they've delivered their final model to the organisers. In the meanwhile, they can use the training set as they please, e.g. by testing models using cross-validation. Kaggle does something similar.] If LeaveOneOut cross validation is used, how can the model possibly over-fit with good results? Because you can tune the model as much as you want in this cross-validation setting, until it performs nearly perfectly in CV. As an extreme example, suppose that you've implemented an estimator that is essentially a random number generator. You can keep trying random seeds until you hit a \"model\" that produces very low error in cross-validation, but that doesn't mean you've hit the right model. It means you've overfit to the cross-validation.
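The train-vs-validation plot described above is available directly in scikit-learn as learning_curve. A minimal sketch (hedged: it reports accuracy scores rather than errors, so the diagnostic gap here is train score minus validation score, and the SVC settings are illustrative, not the questioner's text-regression setup):

```python
import numpy as np
from sklearn.datasets import load_digits
from sklearn.model_selection import learning_curve
from sklearn.svm import SVC

X, y = load_digits(return_X_y=True)

# training-set sizes vs. cross-validated train/validation scores
sizes, train_scores, val_scores = learning_curve(
    SVC(gamma=0.001), X, y, cv=5, train_sizes=np.linspace(0.1, 1.0, 5))

# a gap that stays large at the largest training size suggests overfitting
gap = train_scores.mean(axis=1) - val_scores.mean(axis=1)
for n, g in zip(sizes, gap):
    print(n, round(g, 3))
```

Plotting the two mean-score curves against sizes gives exactly the diagnostic plot the answer describes.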
See also this interesting war story.", "best_answers_score":0.8, "library_name":"scikit-learn", "question_url":"https:\/\/stackoverflow.com\/questions\/12253151\/distinguishing-overfitting-vs-good-prediction", "best_answers_votes":36, "question_length":2696, "response_length":1890 }, { "question":"tf-idf feature weights using sklearn.feature_extraction.text.TfidfVectorizer this page: http:\/\/scikit-learn.org\/stable\/modules\/feature_extraction.html mentions: As tf\u2013idf is very often used for text features, there is also another class called TfidfVectorizer that combines all the options of CountVectorizer and TfidfTransformer in a single model. Then I followed the code and used fit_transform() on my corpus. How to get the weight of each feature computed by fit_transform()? I tried: ``` In [39]: vectorizer.idf_ --------------------------------------------------------------------------- AttributeError Traceback (most recent call last) in () ----> 1 vectorizer.idf_ AttributeError: 'TfidfVectorizer' object has no attribute 'idf_' ``` but this attribute is missing.
Thanks", "response":"Since version 0.15, the tf-idf score of each feature can be retrieved via the attribute idf_ of the TfidfVectorizer object: ``` from sklearn.feature_extraction.text import TfidfVectorizer corpus = [\"This is very strange\", \"This is very nice\"] vectorizer = TfidfVectorizer(min_df=1) X = vectorizer.fit_transform(corpus) idf = vectorizer.idf_ print dict(zip(vectorizer.get_feature_names(), idf)) ``` Output: ``` {u'is': 1.0, u'nice': 1.4054651081081644, u'strange': 1.4054651081081644, u'this': 1.0, u'very': 1.0} ``` As discussed in the comments, prior to version 0.15, a workaround is to access the attribute idf_ via the supposedly hidden _tfidf (an instance of TfidfTransformer) of the vectorizer: ``` idf = vectorizer._tfidf.idf_ print dict(zip(vectorizer.get_feature_names(), idf)) ``` which should give the same output as above.", "best_answers_score":0.8, "library_name":"scikit-learn", "question_url":"https:\/\/stackoverflow.com\/questions\/23792781\/tf-idf-feature-weights-using-sklearn-feature-extraction-text-tfidfvectorizer", "best_answers_votes":84, "question_length":780, "response_length":833 }, { "question":"Scikit Learn GridSearchCV without cross validation (unsupervised learning) Is it possible to use GridSearchCV without cross validation? I am trying to optimize the number of clusters in KMeans clustering via grid search, and thus I don't need or want cross validation. The documentation is also confusing me because under the fit() method, it has an option for unsupervised learning (says to use None for unsupervised learning). But if you want to do unsupervised learning, you need to do it without cross validation and there appears to be no option to get rid of cross validation.", "response":"After much searching, I was able to find this thread. 
It appears that you can get rid of cross validation in GridSearchCV if you use: cv=[(slice(None), slice(None))] I have tested this against my own coded version of grid search without cross validation and I get the same results from both methods. I am posting this answer to my own question in case others have the same issue. Edit: to answer jjrr's question in the comments, here is an example use case: ``` from sklearn.metrics import silhouette_score as sc def cv_silhouette_scorer(estimator, X): estimator.fit(X) cluster_labels = estimator.labels_ num_labels = len(set(cluster_labels)) num_samples = len(X.index) if num_labels == 1 or num_labels == num_samples: return -1 else: return sc(X, cluster_labels) cv = [(slice(None), slice(None))] gs = GridSearchCV(estimator=sklearn.cluster.MeanShift(), param_grid=param_dict, scoring=cv_silhouette_scorer, cv=cv, n_jobs=-1) gs.fit(df[cols_of_interest]) ```", "best_answers_score":0.8, "library_name":"scikit-learn", "question_url":"https:\/\/stackoverflow.com\/questions\/44636370\/scikit-learn-gridsearchcv-without-cross-validation-unsupervised-learning", "best_answers_votes":51, "question_length":582, "response_length":958 }, { "question":"predict_proba for a cross-validated model I would like to predict the probability from Logistic Regression model with cross-validation. I know you can get the cross-validation scores, but is it possible to return the values from predict_proba instead of the scores? 
``` # imports from sklearn.linear_model import LogisticRegression from sklearn.cross_validation import (StratifiedKFold, cross_val_score, train_test_split) from sklearn import datasets # setup data iris = datasets.load_iris() X = iris.data y = iris.target # setup model cv = StratifiedKFold(y, 10) logreg = LogisticRegression() # cross-validation scores scores = cross_val_score(logreg, X, y, cv=cv) # predict probabilities Xtrain, Xtest, ytrain, ytest = train_test_split(X, y) logreg.fit(Xtrain, ytrain) proba = logreg.predict_proba(Xtest) ```", "response":"This is now implemented as part of scikit-learn version 0.18. You can pass a 'method' string parameter to the cross_val_predict method. Documentation is here. Example: ``` proba = cross_val_predict(logreg, X, y, cv=cv, method='predict_proba') ``` Also note that this is part of the new sklearn.model_selection package so you will need this import: ``` from sklearn.model_selection import cross_val_predict ```", "best_answers_score":0.8, "library_name":"scikit-learn", "question_url":"https:\/\/stackoverflow.com\/questions\/28787500\/predict-proba-for-a-cross-validated-model", "best_answers_votes":53, "question_length":810, "response_length":409 }, { "question":"PCA projection and reconstruction in scikit-learn I can perform PCA in scikit by code below: X_train has 279180 rows and 104 columns. ``` from sklearn.decomposition import PCA pca = PCA(n_components=30) X_train_pca = pca.fit_transform(X_train) ``` Now, when I want to project the eigenvectors onto feature space, I must do following: ``` \"\"\" Projection \"\"\" comp = pca.components_ #30x104 com_tr = np.transpose(pca.components_) #104x30 proj = np.dot(X_train,com_tr) #279180x104 * 104x30 = 297180x30 ``` But I am hesitating with this step, because Scikit documentation says: components_: array, [n_components, n_features] Principal axes in feature space, representing the directions of maximum variance in the data. 
It seems to me that it is already projected, but when I checked the source code, it returns only the eigenvectors. What is the right way to project it? Ultimately, I am aiming to calculate the MSE of reconstruction. ``` \"\"\" Reconstruct \"\"\" recon = np.dot(proj,comp) #297180x30 * 30x104 = 279180x104 \"\"\" MSE Error \"\"\" print \"MSE = %.6G\" %(np.mean((X_train - recon)**2)) ```", "response":"You can do ``` proj = pca.inverse_transform(X_train_pca) ``` That way you do not have to worry about how to do the multiplications. What you obtain after pca.fit_transform or pca.transform are what is usually called the \"loadings\" for each sample, meaning how much of each component you need to describe it best using a linear combination of the components_ (the principal axes in feature space). The projection you are aiming at is back in the original signal space. This means that you need to go back into signal space using the components and the loadings. So there are three steps to disambiguate here. Here you have, step by step, what you can do using the PCA object and how it is actually calculated: pca.fit estimates the components (using an SVD on the centered X_train): ``` from sklearn.decomposition import PCA import numpy as np from numpy.testing import assert_array_almost_equal
X_train = np.random.randn(100, 50) pca = PCA(n_components=30) pca.fit(X_train) U, S, VT = np.linalg.svd(X_train - X_train.mean(0)) assert_array_almost_equal(VT[:30], pca.components_) ``` pca.transform calculates the loadings as you describe ``` X_train_pca = pca.transform(X_train) X_train_pca2 = (X_train - pca.mean_).dot(pca.components_.T) assert_array_almost_equal(X_train_pca, X_train_pca2) ``` pca.inverse_transform obtains the projection onto components in signal space you are interested in ``` X_projected = pca.inverse_transform(X_train_pca) X_projected2 = X_train_pca.dot(pca.components_) + pca.mean_ assert_array_almost_equal(X_projected, X_projected2) ``` You can now evaluate the projection loss ``` loss = np.sum((X_train - X_projected) ** 2, axis=1).mean() ```", "best_answers_score":0.8, "library_name":"scikit-learn", "question_url":"https:\/\/stackoverflow.com\/questions\/36566844\/pca-projection-and-reconstruction-in-scikit-learn", "best_answers_votes":62, "question_length":1091, "response_length":1721 }, { "question":"What is the meaning of the nu parameter in Scikit-Learn's SVM class? I am following the example shown in http:\/\/scikit-learn.org\/stable\/auto_examples\/svm\/plot_oneclass.html#example-svm-plot-oneclass-py, where a one class SVM is used for anomaly detection. Now, this may be a notation unique to scikit-learn, but I couldn't find an explanation of how to use the parameter nu given to the OneClassSVM constructor. In http:\/\/scikit-learn.org\/stable\/modules\/svm.html#nusvc, it is stated that the parameter nu is a reparametrization of the parameter C (which is the regularization parameter which I am familiar with) - but doesn't state how to perform that reparameterization. Both a formula and an intuition will be much appreciated. Thanks!", "response":"The problem with C and the introduction of nu The problem with the parameter C is that it can take any positive value and has no direct interpretation.
It is therefore hard to choose correctly and one has to resort to cross validation or direct experimentation to find a suitable value. In response, Sch\u00f6lkopf et al. reformulated SVM to take a new regularization parameter nu. This parameter is bounded between 0 and 1 and has a direct interpretation. Interpretation of nu The parameter nu is an upper bound on the fraction of margin errors and a lower bound on the fraction of support vectors relative to the total number of training examples. For example, if you set it to 0.05 you are guaranteed to find at most 5% of your training examples being misclassified (at the cost of a small margin, though) and at least 5% of your training examples being support vectors. Relationship between C and nu The relation between C and nu is governed by the following formula: nu = A+B\/C A and B are constants which are unfortunately not that easy to calculate. Conclusion The takeaway message is that C and nu SVM are equivalent regarding their classification power. The regularization in terms of nu is easier to interpret compared to C, but the nu SVM is usually harder to optimize and runtime doesn't scale as well as the C variant with number of input samples. More details (including formulas for A and B) can be found here: Chang CC, Lin CJ - \"Training nu-support vector classifiers: theory and algorithms\"", "best_answers_score":0.8, "library_name":"scikit-learn", "question_url":"https:\/\/stackoverflow.com\/questions\/11230955\/what-is-the-meaning-of-the-nu-parameter-in-scikit-learns-svm-class", "best_answers_votes":59, "question_length":737, "response_length":1503 }, { "question":"sklearn LogisticRegression and changing the default threshold for classification I am using LogisticRegression from the sklearn package, and have a quick question about classification. I built a ROC curve for my classifier, and it turns out that the optimal threshold for my training data is around 0.25.
I'm assuming that the default threshold when creating predictions is 0.5. How can I change this default setting to find out what the accuracy is in my model when doing a 10-fold cross-validation? Basically, I want my model to predict a '1' for anyone greater than 0.25, not 0.5. I've been looking through all the documentation, and I can't seem to get anywhere.", "response":"I would like to give a practical answer ``` from sklearn.datasets import make_classification from sklearn.model_selection import train_test_split from sklearn.linear_model import LogisticRegression from sklearn.metrics import accuracy_score, confusion_matrix, recall_score, roc_auc_score, precision_score import numpy as np import pandas as pd X, y = make_classification( n_classes=2, class_sep=1.5, weights=[0.9, 0.1], n_features=20, n_samples=1000, random_state=10 ) X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.33, random_state=42) clf = LogisticRegression(class_weight=\"balanced\") clf.fit(X_train, y_train) THRESHOLD = 0.25 preds = np.where(clf.predict_proba(X_test)[:,1] > THRESHOLD, 1, 0) pd.DataFrame(data=[accuracy_score(y_test, preds), recall_score(y_test, preds), precision_score(y_test, preds), roc_auc_score(y_test, preds)], index=[\"accuracy\", \"recall\", \"precision\", \"roc_auc_score\"]) ``` By changing the THRESHOLD to 0.25, one can find that recall and precision scores are decreasing. However, by removing the class_weight argument, the accuracy increases but the recall score falls down.
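An optional extension (not part of the original answer; the data setup simply mirrors the snippet above): since the probabilities only need to be computed once, you can sweep several candidate thresholds in a loop and compare the scores directly:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import precision_score, recall_score

# same synthetic imbalanced setup as in the snippet above
X, y = make_classification(n_classes=2, class_sep=1.5, weights=[0.9, 0.1],
                           n_features=20, n_samples=1000, random_state=10)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.33,
                                                    random_state=42)
clf = LogisticRegression(class_weight='balanced').fit(X_train, y_train)

# compute the probabilities once, then vary only the cut-off
probs = clf.predict_proba(X_test)[:, 1]
for threshold in (0.25, 0.5, 0.75):
    preds = np.where(probs > threshold, 1, 0)
    print(threshold, precision_score(y_test, preds), recall_score(y_test, preds))
```

Raising the threshold can only shrink the set of predicted positives, so recall is guaranteed to be non-increasing, while precision usually (but not always) goes up.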
Refer to the @accepted answer", "best_answers_score":0.8, "library_name":"scikit-learn", "question_url":"https:\/\/stackoverflow.com\/questions\/31417487\/sklearn-logisticregression-and-changing-the-default-threshold-for-classification", "best_answers_votes":29, "question_length":666, "response_length":1136 }, { "question":"Get intermediate data state in scikit-learn Pipeline Given the following example: ``` from sklearn.feature_extraction.text import TfidfVectorizer from sklearn.decomposition import NMF from sklearn.pipeline import Pipeline import pandas as pd pipe = Pipeline([ (\"tf_idf\", TfidfVectorizer()), (\"nmf\", NMF()) ]) data = pd.DataFrame([[\"Salut comment tu vas\", \"Hey how are you today\", \"I am okay and you ?\"]]).T data.columns = [\"test\"] pipe.fit_transform(data.test) ``` I would like to get intermediate data state in scikit learn pipeline corresponding to tf_idf output (after fit_transform on tf_idf but not NMF) or NMF input. Or to say things in another way, it would be the same than to apply ``` TfidfVectorizer().fit_transform(data.test) ``` I know pipe.named_steps[\"tf_idf\"] ti get intermediate transformer, but I can't get data, only parameters of the transformer with this method.", "response":"As @Vivek Kumar suggested in the comment and as I answered here, I find a debug step that prints information or writes intermediate dataframes to csv useful: ``` from sklearn.feature_extraction.text import TfidfVectorizer from sklearn.decomposition import NMF from sklearn.pipeline import Pipeline import pandas as pd from sklearn.base import TransformerMixin, BaseEstimator class Debug(BaseEstimator, TransformerMixin): def transform(self, X): print(X.shape) self.shape = X.shape # what other output you want return X def fit(self, X, y=None, **fit_params): return self pipe = Pipeline([ (\"tf_idf\", TfidfVectorizer()), (\"debug\", Debug()), (\"nmf\", NMF()) ]) data = pd.DataFrame([[\"Salut comment tu vas\", \"Hey how are you today\", \"I am okay and you
?\"]]).T data.columns = [\"test\"] pipe.fit_transform(data.test) ``` Edit I now added a state to the debug transformer. Now you can access the shape as in the answer by @datasailor with: ``` pipe.named_steps[\"debug\"].shape ```", "best_answers_score":0.8, "library_name":"scikit-learn", "question_url":"https:\/\/stackoverflow.com\/questions\/48743032\/get-intermediate-data-state-in-scikit-learn-pipeline", "best_answers_votes":25, "question_length":883, "response_length":969 }, { "question":"SKlearn import MLPClassifier fails I am trying to use the multilayer perceptron from scikit-learn in python. My problem is, that the import is not working. All other modules from scikit-learn are working fine. ``` from sklearn.neural_network import MLPClassifier ``` Import Error: cannot import name MLPClassifier I'm using the Python Environment Python64-bit 3.4 in Visual Studio 2015. I installed sklearn over the console with: conda install scikit-learn I also installed numpy and pandas. After I had the error above I also installed scikit-neuralnetwork with: pip install scikit-neuralnetwork The installed scikit-learn version is 0.17. What have I done wrong? Am I missing an installation? ----- EDIT ---- In addition to the answer of tttthomasssss, I found the solution on how to install the sknn library for neuronal networks. I followed this tutorial. Do the following steps: pip install scikit-neuralnetwork download and install the GCC compiler install mingw with conda install mingw libpython You can use the sknn library after.", "response":"MLPClassifier is not yet available in scikit-learn v0.17 (as of 1 Dec 2015). 
If you really want to use it you could clone 0.18dev (however, I don't know how stable this branch currently is).", "best_answers_score":0.8, "library_name":"scikit-learn", "question_url":"https:\/\/stackoverflow.com\/questions\/34016238\/sklearn-import-mlpclassifier-fails", "best_answers_votes":32, "question_length":1039, "response_length":190 }, { "question":"ValueError: setting an array element with a sequence. while using SVM in scikit-learn I have been working on scikit-learn SVMs for a binary classification problem. I have calculated the features of audio files and wrote them into a CSV file. This is how each row in a CSV file looks like: ``` \"13_10 The Long And Winding Road \" \"[-6.5633095666136669e-16,-1.56E-15,-3.21E-15,-2.20E- 15,-2.52E-15,-3.04E-15,-3.39E-15,-3.47E-15,-3.07E-15,-6.02E-15,-3.00E-15,-4.77E-15,-3.05E- 15,-2.13E-15,-1.57E-15,-1.87E-15,-2.05E-15,-1.76E-15,-1.38E-15,-9.89E-16,-7.89E-16,-8.99E- 16,-1.09E-15,-7.26E-16,-8.68E-16,-4.68E-16,-2.82E-16,-1.99E-16,-1.75E-16,-2.18E-16,-1.43E- 16,-1.56E-16,-1.91E-16,-1.21E-16,-4.82E-17,-4.39E-17,-2.89E-17,-2.05E-17,0.0]\" 0 ``` The first column has the name of the Audio, second column has the feature array and the last element is the label {0,1} for binary classification. There are 39 float values in the array. I am using the following code to extract them from the CSV file. ``` with open('File.csv', 'rb') as csvfile: albumreader = csv.reader(csvfile, delimiter=' ') data = list() for row in albumreader: data.append(row[0:]) data = np.array(data) X_train = list() Y_train = list() k = data.shape[0] for i in range(k): feature = data[i][1] x = map(float, feature[1:-2].split(',')) X_train.append(x) label = data[i][2] y = float(label) Y_train.append(y) ``` So when I print X_train and Y_train I get exact values in an array. 
But when I use ``` clf = svm.SVC(C=1.0, cache_size=200,kernel='linear', max_iter=-1) clf.fit(X_train,Y_train) ``` I get the error saying ``` Traceback (most recent call last): File \"\", line 1, in File \"C:\\Python27\\lib\\site- packages\\spyderlib\\widgets\\externalshell\\sitecustomize.py\",line 540, in runfile execfile(filename, namespace) File \"SVM_test.py\", line 55, in clf.fit(X_train,Y_train) File \"sklearn\\svm\\base.py\", line 137, in fit File \"sklearn\\utils\\validation.py\", line 165, in atleast2d_or_csr File \"sklearn\\utils\\validation.py\", line 142, in _atleast2d_or_sparse File \"sklearn\\utils\\validation.py\", line 120, in array2d File \"C:\\Python27\\lib\\site-packages\\numpy\\core\\numeric.py\", line 460, in asarray return array(a, dtype, copy=False, order=order) ValueError: setting an array element with a sequence. ``` Can someone help me as to what I can do now? I am really not sure what is happening inside. Both the dimensions of X_train and Y_train are same [X_train has 21 vectors with 39 elements and Y_train has 21 floats {0 or 1}, I don't see what made these errors. Note: I have a feeling that something might be wrong while I convert the numpy array to string and then a string to a numpy array. Thanks in advance. Edit: X_Train is very large. Here it is.. 
``` [[93812.4999999983, 73189.57452, 48892.17363, 37682.69053, 33709.51536, 20815.68443, 12476.88854, 13364.13645, 9574.010981, 5844.293383, 7910.017736, 12721.38592, 14184.99241, 6988.131481, 9407.380437, 6333.852471, 5688.156663, 7167.61338, 6911.084942, 9210.064235, 5732.338515, 3585.039683, 4433.278772, 4757.658741, 3387.832928, 2711.640327, 2680.255742, 1649.410788, 2024.333977, 997.2348795, 1102.115501, 1386.86396, 1160.477719, 883.941971, 881.2712624, 749.3620066, 885.6355941, 514.1635441, 0.0], [93411.33935126709, 90714.51224, 89773.71828, 61018.71033, 28082.94493, 10120.93228, 11106.07725, 6204.140734, 5968.528906, 4970.099848, 6967.870007, 6990.611982, 7656.630743, 6615.957476, 5573.621516, 8957.245225, 8512.408652, 6976.021692, 7774.215884, 5301.046573, 4666.784091, 2539.587812, 2953.578612, 3529.863917, 2365.101263, 2579.870258, 2890.325096, 3302.179572, 2078.005268, 1425.18236, 1297.961119, 736.4896705, 640.0635888, 819.022382, 659.9559469, 438.2773842, 359.3957991, 193.9937669, 0.0], [95528.45827960827, 79000.64725, 75540.32258, 47915.39365, 29573.63325, 13554.15721, 10101.04124, 6935.685456, 13681.96711, 7726.754596, 9413.96529, 9468.785586, 10479.23762, 10070.81121, 8893.475453, 9517.553541, 8493.077533, 8021.721412, 8568.069341, 7687.282084, 9902.16325, 5442.263263, 5575.258138, 4748.557573, 4580.647869, 3014.91771, 3958.708771, 2851.846841, 3407.31788, 1982.369432, 1937.459179, 1689.049684, 1457.579778, 1055.411047, 1048.471861, 661.6174333, 827.8371903, 414.802354], [101683.46698748806, 62367.04137, 66444.15995, 49621.45404, 31623.19485, 16585.34427, 12271.46378, 12114.5615, 6666.281052, 9335.886213, 19314.70299, 22588.00911, 14133.31813, 12723.03772, 7994.399321, 11447.449, 15457.39519, 7419.208867, 9286.751692, 6128.746537, 5617.886066, 4461.131891, 4651.73188, 5835.270092, 3876.10397, 4499.228748, 2661.999151, 1431.362029, 1378.115091, 1048.827946, 1470.297845, 1087.453644, 825.6318213, 861.5003481, 804.8519616, 397.0719915, 368.8037827, 
293.36727], [96614.66763477474, 89674.79785, 73045.22026, 55387.48162, 32450.76131, 26161.93729, 16379.95699, 13446.77762, 6178.297767, 4499.9064, 6128.624979, 4928.968691, 7139.579976, 6442.404748, 7303.917218, 9064.476552, 8246.412739, 4526.169172, 4931.980606, 4022.38625, 3193.080061, 3991.709836, 4894.262891, 4523.545798, 5013.65655, 3165.268896, 2252.272798, 1971.857637, 1543.455559, 1248.305408, 1340.303682, 1069.466847, 1062.971087, 596.4763587, 541.7390803, 481.9598053, 261.6165905, 135.050925], [77116.86410716272, 85174.88022, 48949.81474, 39272.16867, 28721.41507, 26604.82082, 17057.75385, 11417.45143, 12775.94149, 8095.318819, 8318.738856, 7768.406613, 9501.155323, 8215.579012, 5801.439936, 6997.611748, 8358.126592, 6710.072432, 7903.976639, 4770.389995, 4443.449546, 3622.278619, 3628.985312, 4025.879147, 3378.124716, 1681.144815, 1873.675902, 1813.454359, 1203.261884, 734.9896092, 612.7767898, 581.1641439, 554.9952946, 338.9208239, 329.6306536, 210.3361409, 124.684456, 95.1698974], [86000.24134707314, 54315.80346, 61723.06357, 48194.93238, 34145.18298, 18060.21908, 17759.95552, 13594.71484, 10034.81255, 6892.428679, 13609.12234, 11345.97425, 12640.27575, 13636.73634, 8353.154837, 11543.51778, 9620.892875, 5364.536625, 6645.647746, 6939.929388, 6404.367983, 4279.002491, 5473.449778, 5173.72645, 4161.012572, 3189.349797, 1868.016199, 2370.813774, 1991.805589, 1862.750613, 1535.097522, 1195.019326, 824.4997101, 836.5762868, 758.8865079, 739.0096703, 426.339462, 495.362511], [88356.8775920093, 68677.18631, 56499.17126, 41069.83582, 34004.99481, 21584.94408, 16827.63584, 10875.88263, 8838.404327, 10399.33201, 10247.97332, 11592.57345, 6888.99984, 8027.86374, 4396.353004, 4926.542018, 4160.408132, 4829.051031, 5104.507749, 4445.908694, 4113.401198, 2070.059053, 2331.063956, 3091.764189, 2708.490628, 1357.792132, 1476.379979, 1099.46743, 895.2046416, 1017.410994, 855.9326154, 807.2299975, 817.8896259, 688.1633806, 620.1147918, 404.4791452, 355.3012015, 
155.124636], [129161.3158422606, 99871.12426, 69682.53863, 42152.57846, 27722.10719, 16851.46834, 12503.65957, 15820.8482, 10208.86252, 3737.281589, 11388.29292, 9216.418551, 8412.969115, 8915.691889, 7214.795344, 6312.935476, 5691.760401, 4452.333587, 6080.803383, 3169.211512, 4640.513939, 2965.070935, 2603.678979, 3427.596811, 2650.097593, 3407.197764, 2399.210804, 1585.540133, 900.6057596, 1562.799097, 1414.458688, 1085.727804, 862.853398, 1046.809149, 1299.422095, 452.1395434, 416.0278005, 342.487369], [97676.58730158686, 85928.37013, 60031.54702, 50283.65633, 30440.49477, 23396.44028, 17693.84492, 13834.72723, 13079.6, 9484.172923, 11026.12866, 15489.77935, 14751.23748, 7719.575611, 6916.062149, 9947.922301, 9860.230801, 6685.554777, 5314.504743, 6412.026375, 5126.472976, 3994.412881, 3469.94381, 3087.75188, 2150.012155, 2510.441776, 1633.896465, 1468.22101, 1451.997957, 1594.288508, 1208.749937, 1539.411357, 846.1440547, 1015.738147, 760.9050287, 531.4752058, 352.2906744, 256.992846], [99873.48353552721, 96128.33417, 56062.95108, 48316.51261, 33803.61475, 20090.40769, 14532.69355, 16973.62408, 11745.412, 10555.56359, 12415.12332, 11311.00716, 13055.02538, 13457.43473, 11949.02017, 13726.34027, 13210.19444, 6924.913491, 7526.293551, 6489.797287, 7504.193589, 3693.345327, 3173.144967, 4589.951959, 3817.607517, 2296.577132, 4241.66248, 2298.259695, 2104.233705, 1894.800787, 1435.902299, 1237.861542, 1008.052264, 743.557111, 447.3644689, 360.231905, 263.6887002, 252.53243], [118318.40927047582, 96894.04475, 72455.95855, 53538.90521, 34270.2485, 14028.66282, 6110.994324, 10831.06944, 6500.061124, 5648.546259, 9746.722376, 11098.67455, 12414.31738, 11859.15818, 5661.36057, 6467.490449, 7160.019668, 4986.101354, 4805.715894, 4384.860917, 4818.433908, 2776.480858, 2906.711958, 4180.355966, 3029.563639, 2121.677425, 2977.055372, 1650.875378, 1328.284924, 1641.967101, 1374.844716, 1269.983055, 756.2822371, 746.9782069, 635.1025738, 901.5181204, 500.4240422, 
124.234986], [99496.1074660524, 91134.19642, 64615.65163, 51749.95315, 27017.75136, 17498.19736, 8686.464718, 6354.494714, 6279.181765, 6011.661362, 9583.683802, 16802.58819, 12848.82539, 12448.85086, 9717.906293, 6025.712047, 8968.944145, 6116.427844, 8009.500521, 5857.252734, 5994.629798, 4602.865888, 5568.279578, 3847.961198, 3664.838032, 2285.641295, 2343.300802, 1538.656643, 1595.004126, 1438.685894, 1278.233128, 1138.847548, 1387.660031, 727.3346259, 443.3437923, 399.422316, 202.3671643, 210.818774], [97897.81181619188, 81534.24658, 86124.34023, 55859.41234, 43498.35095, 16317.93548, 9240.704588, 8335.639737, 5398.77203, 2959.587234, 7638.934756, 9237.569061, 9669.92492, 6395.762472, 5297.481894, 4628.757031, 5965.00084, 5360.168945, 4918.802753, 5403.035015, 7760.124783, 4316.46269, 3586.003412, 4862.517393, 2722.334238, 1950.153709, 2308.64693, 1738.602095, 1431.956923, 1195.875585, 903.5619486, 628.8441079, 378.5951575, 279.5559759, 290.8523867, 185.8872588, 124.4224622, 102.6474251], [92929.03125177219, 66037.16827, 85713.22692, 60594.81708, 21299.03928, 9728.745394, 7164.560274, 7530.287996, 3986.197072, 4768.423334, 7965.588661, 6884.742393, 7813.113615, 6783.772795, 5068.375149, 5563.205324, 4549.089711, 4178.977925, 7176.864923, 3595.204266, 4075.654498, 3667.874878, 5018.867408, 4632.204595, 4236.022945, 2419.634542, 1965.732854, 2017.314496, 1125.444672, 1776.994722, 1380.972752, 877.5693874, 1048.039171, 698.4293241, 587.3589805, 425.3561446, 374.9688448, 242.143167], [112279.4547224921, 85626.58906, 82479.88981, 41194.72139, 18581.67331, 17171.661, 11041.06798, 7470.697485, 5647.489476, 5413.921458, 6258.45235, 7817.02576, 5690.588758, 5018.057148, 2835.675844, 4192.365122, 5264.669752, 2899.863762, 4722.075443, 4359.368543, 4475.52712, 4364.193393, 4366.760559, 4466.265791, 3581.127965, 3229.694902, 3061.592084, 2761.368431, 2924.520852, 2278.74424, 1842.130366, 1353.160812, 1061.970453, 801.4987863, 559.7692834, 542.6125554, 365.0923416, 
345.207555], [103517.74691357967, 75660.08945, 77823.62831, 44178.34395, 30627.84085, 16822.14795, 12153.82383, 10477.03604, 6737.154621, 3948.567091, 5952.492101, 6657.190597, 8458.524435, 4644.542091, 3262.595869, 6196.748153, 4725.493005, 3131.648336, 3043.832975, 2397.211069, 2221.444205, 1846.007568, 1906.256992, 2565.899774, 1879.678929, 1983.431392, 2057.925713, 1379.158985, 1161.566123, 1269.932159, 1882.60896, 2175.463202, 1945.131584, 2617.451168, 1724.479089, 934.2682688, 703.1608361, 325.546], [93568.70860926574, 94747.49539969431, 46432.77848910925, 34021.920729891295, 20420.991633759644, 11780.466421174959, 11808.677934216039, 8356.053623407755, 5251.866299007888, 1837.1346095714694, 3388.9444867864604, 4160.840941722876, 3099.1858407062873, 2498.7020047692304, 2320.2543950190047, 3123.103649596546, 2353.3994748227874, 2282.188646923959, 2461.343070326571, 1867.3943363024234, 2097.623570329987, 2009.6550578189285, 3308.2027220730074, 2765.1388333698114, 1557.1149504889588, 1365.611056602633, 1960.1916988919656, 1357.4554303292293, 1174.7058492639005, 1132.4243713198962, 845.4972050001261, 1168.2703928255426, 792.9426082625839, 670.3864757268884, 462.94718979251763, 418.287362938786, 440.0999154920886, 177.49705973335398, 0.0], [95355.29766162521, 80982.187596427, 52611.83577098383, 32094.021907352668, 17954.16900608238, 10539.432714398477, 8455.780912660928, 7826.206728864228, 6509.019127983875, 3428.593131775805, 4133.834750579424, 4897.866949416399, 4527.962919676826, 3097.9755890532115, 2644.294656259542, 3715.9623636641186, 2820.3307205895694, 2502.4555417041665, 4294.009887075389, 3305.480815069842, 3473.739729060158, 3436.008663252062, 2646.057627969427, 2915.118003316749, 2807.214040627724, 2182.2047542975124, 2307.7279832228096, 2051.914227220658, 1701.1785697138466, 1387.86622139378, 1717.4780638249865, 1444.4320186566786, 1543.3397450160378, 1008.7972827019012, 804.7763630817929, 727.0076251244793, 661.7971983605773, 328.023137248546, 0.0], 
[106950.75545607107, 83171.33894927795, 98570.84168082179, 53995.601284217235, 34045.2113137451, 32682.002511908893, 20258.01771044016, 17863.78159524713, 10999.026649078776, 7606.650910143417, 8186.182643934389, 12307.240199947704, 6014.871257290792, 4781.08981401508, 5131.609324634855, 4391.107045269739, 4364.496837469433, 3795.810404058682, 5693.929878923241, 3511.0866864164072, 4967.40355405853, 3290.291028496737, 2401.232195128987, 2787.2578565673602, 2210.985797970096, 2106.714353398232, 1799.725035771931, 2223.1076215378416, 1189.234114777526, 1003.1624544891614, 1046.7700894681655, 812.1805254193989, 750.3209854314467, 893.172975198784, 492.44092578555313, 379.87738447537436, 169.4616484512177, 100.56120686339501, 0.0]] ``` Y_Train is just the labels for individual sets of features. It is like this: ``` [0.0, 1.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 1.0, 0.0, 0.0] ``` Hope that helps!", "response":"Finally I found the answer to my question with the help of some ideas from @larsmans and @eickenberg. The problem was that X_train did not have the same number of elements in all the arrays. So, it was not able to form a 2D array. Now that I have added an additional value to that array, the dimensionality matched for all the 1D arrays and X_train was able to form a 2D array. Thanks for the ideas guys!", "best_answers_score":0.8, "library_name":"scikit-learn", "question_url":"https:\/\/stackoverflow.com\/questions\/25485503\/valueerror-setting-an-array-element-with-a-sequence-while-using-svm-in-scikit", "best_answers_votes":61, "question_length":13619, "response_length":404 }, { "question":"KeyError when loading pickled scikit-learn model using joblib I have an object that contains within it two scikit-learn models, an IsolationForest and a RandomForestClassifier, that I would like to pickle and later unpickle and use to produce predictions. 
Apart from the two models, the object contains a couple of StandardScalers and a couple of Python lists. Pickling this object using joblib is unproblematic, but when I try to unpickle it later I get the following exception: ``` Traceback (most recent call last): File \"\", line 1, in File \"\/home\/(...)\/python3.5\/site-packages\/joblib\/numpy_pickle.py\", line 578, in load obj = _unpickle(fobj, filename, mmap_mode) File \"\/home\/(...)\/python3.5\/site-packages\/joblib\/numpy_pickle.py\", line 508, in _unpickle obj = unpickler.load() File \"\/usr\/lib\/python3.5\/pickle.py\", line 1039, in load dispatch[key[0]](self) KeyError: 0 ``` The same application both pickles and unpickles the object, so the versions of scikit-learn, joblib and other libraries are the same. I'm not sure where to start debugging, given the vague error. Any ideas or pointers?", "response":"The solution to this was pretty banal: Without being aware of it I was using the version of joblib in sklearn.externals.joblib for the pickling, but a newer version of joblib for unpickling the object. The problem was resolved when I used the newer version of joblib for both tasks.", "best_answers_score":0.8, "library_name":"scikit-learn", "question_url":"https:\/\/stackoverflow.com\/questions\/48948209\/keyerror-when-loading-pickled-scikit-learn-model-using-joblib", "best_answers_votes":43, "question_length":1094, "response_length":282 }, { "question":"How to obtain only the name of a model's object in SciKitLearn? I have a silly question. I have done Cross-validation in scikit learn and would like to make a more visual information with the values \u200b\u200bI got for each model. However, I can not access only the template name to insert into the dataframe. Always comes with the parameters together. Is there some method of objects created to access only the name of the model, without its parameters. Or will I have to create an external list with the names for it? 
I use: ``` for model in models: scores = cross_val_score(model, X, y, cv=5) print(f'Name model: {model} , Mean score: {scores.mean()}') ``` But I obtain the name with the parameters: ``` Name model: LinearRegression(copy_X=True, fit_intercept=True, n_jobs=1, normalize=False), Mean score: 0.8066782865537986 ``` In fact I want to get the information this way: ``` Name Model: LinearRegression, Mean Score: 0.8066782865537986 ``` Thanks!", "response":"You can do this: ``` model_name = type(model).__name__ ``` as in ``` Python 3.5.5 | packaged by conda-forge | (default, Jul 23 2018, 23:45:11) [GCC 4.2.1 Compatible Apple LLVM 6.1.0 (clang-602.0.53)] on darwin Type \"help\", \"copyright\", \"credits\" or \"license\" for more information. >>> from sklearn.linear_model import LinearRegression >>> model = LinearRegression() >>> model_name = type(model).__name__ >>> print(model_name) LinearRegression ```", "best_answers_score":0.8, "library_name":"scikit-learn", "question_url":"https:\/\/stackoverflow.com\/questions\/52763325\/how-to-obtain-only-the-name-of-a-models-object-in-scikitlearn", "best_answers_votes":63, "question_length":948, "response_length":446 }, { "question":"Using GridSearchCV with AdaBoost and DecisionTreeClassifier I am attempting to tune an AdaBoost Classifier (\"ABT\") using a DecisionTreeClassifier (\"DTC\") as the base_estimator. I would like to tune both ABT and DTC parameters simultaneously, but am not sure how to accomplish this - pipeline shouldn't work, as I am not \"piping\" the output of DTC to ABT. The idea would be to iterate hyper parameters for ABT and DTC in the GridSearchCV estimator. How can I specify the tuning parameters correctly? I tried the following, which generated an error below. 
``` [IN] from sklearn.tree import DecisionTreeClassifier from sklearn.ensemble import AdaBoostClassifier from sklearn.grid_search import GridSearchCV param_grid = {dtc__criterion : [\"gini\", \"entropy\"], dtc__splitter : [\"best\", \"random\"], abc__n_estimators: [none, 1, 2] } DTC = DecisionTreeClassifier(random_state = 11, max_features = \"auto\", class_weight = \"auto\",max_depth = None) ABC = AdaBoostClassifier(base_estimator = DTC) # run grid search grid_search_ABC = GridSearchCV(ABC, param_grid=param_grid, scoring = 'roc_auc') [OUT] ValueError: Invalid parameter dtc for estimator AdaBoostClassifier(algorithm='SAMME.R', base_estimator=DecisionTreeClassifier(class_weight='auto', criterion='gini', max_depth=None, max_features='auto', max_leaf_nodes=None, min_samples_leaf=1, min_samples_split=2, min_weight_fraction_leaf=0.0, random_state=11, splitter='best'), learning_rate=1.0, n_estimators=50, random_state=11) ```", "response":"There are several things wrong in the code you posted: The keys of the param_grid dictionary need to be strings. You should be getting a NameError. The key \"abc__n_estimators\" should just be \"n_estimators\": you are probably mixing this with the pipeline syntax. Here nothing tells Python that the string \"abc\" represents your AdaBoostClassifier. None (and not none) is not a valid value for n_estimators. The default value (probably what you meant) is 50. Here's the code with these fixes. To set the parameters of your Tree estimator you can use the \"__\" syntax that allows accessing nested parameters. 
``` from sklearn.tree import DecisionTreeClassifier from sklearn.ensemble import AdaBoostClassifier from sklearn.grid_search import GridSearchCV param_grid = {\"base_estimator__criterion\" : [\"gini\", \"entropy\"], \"base_estimator__splitter\" : [\"best\", \"random\"], \"n_estimators\": [1, 2] } DTC = DecisionTreeClassifier(random_state = 11, max_features = \"auto\", class_weight = \"auto\",max_depth = None) ABC = AdaBoostClassifier(base_estimator = DTC) # run grid search grid_search_ABC = GridSearchCV(ABC, param_grid=param_grid, scoring = 'roc_auc') ``` Also, 1 or 2 estimators does not really make sense for AdaBoost. But I'm guessing this is not the actual code you're running. Hope this helps.", "best_answers_score":0.8, "library_name":"scikit-learn", "question_url":"https:\/\/stackoverflow.com\/questions\/32210569\/using-gridsearchcv-with-adaboost-and-decisiontreeclassifier", "best_answers_votes":44, "question_length":1473, "response_length":1290 }, { "question":"TypeError: only integer arrays with one element can be converted to an index I'm getting the following error when performing recursive feature selection with cross-validation: ``` Traceback (most recent call last): File \"\/Users\/...\/srl\/main.py\", line 32, in argident_sys.train_classifier() File \"\/Users\/...\/srl\/identification.py\", line 194, in train_classifier feat_selector.fit(train_argcands_feats,train_argcands_target) File \"\/Library\/Frameworks\/Python.framework\/Versions\/2.7\/lib\/python2.7\/site-packages\/sklearn\/feature_selection\/rfe.py\", line 298, in fit ranking_ = rfe.fit(X[train], y[train]).ranking_ TypeError: only integer arrays with one element can be converted to an index ``` The code that generates the error is the following: ``` def train_classifier(self): # Get the argument candidates argcands = self.get_argcands(self.reader) # Extract the necessary features from the argument candidates train_argcands_feats = [] train_argcands_target = [] for argcand in argcands: 
train_argcands_feats.append(self.extract_features(argcand)) if argcand[\"info\"][\"label\"] == \"NULL\": train_argcands_target.append(\"NULL\") else: train_argcands_target.append(\"ARG\") # Transform the features to the format required by the classifier self.feat_vectorizer = DictVectorizer() train_argcands_feats = self.feat_vectorizer.fit_transform(train_argcands_feats) # Transform the target labels to the format required by the classifier self.target_names = list(set(train_argcands_target)) train_argcands_target = [self.target_names.index(target) for target in train_argcands_target] ## Train the appropriate supervised model # Recursive Feature Elimination self.classifier = LogisticRegression() feat_selector = RFECV(estimator=self.classifier, step=1, cv=StratifiedKFold(train_argcands_target, 10)) feat_selector.fit(train_argcands_feats,train_argcands_target) print feat_selector.n_features_ print feat_selector.support_ print feat_selector.ranking_ print feat_selector.cv_scores_ return ``` I know I should also perform GridSearch for the parameters of the LogisticRegression classifier, but I don't think that's the source of the error (or is it?). I should mention that I'm testing with around 50 features, and almost all of them are categoric (that's why I use the DictVectorizer to transform them appropriately). Any help or guidance you could give me is more than welcome. Thanks! 
EDIT Here's some training data examples: ``` train_argcands_feats = [{'head_lemma': u'Bras\\xedlia', 'head': u'Bras\\xedlia', 'head_postag': u'PROP'}, {'head_lemma': u'Pesquisa_Datafolha', 'head': u'Pesquisa_Datafolha', 'head_postag': u'N'}, {'head_lemma': u'dado', 'head': u'dado', 'head_postag': u'N'}, {'head_lemma': u'postura', 'head': u'postura', 'head_postag': u'N'}, {'head_lemma': u'maioria', 'head': u'maioria', 'head_postag': u'N'}, {'head_lemma': u'querer', 'head': u'quer', 'head_postag': u'V-FIN'}, {'head_lemma': u'PT', 'head': u'PT', 'head_postag': u'PROP'}, {'head_lemma': u'participar', 'head': u'participando', 'head_postag': u'V-GER'}, {'head_lemma': u'surpreendente', 'head': u'supreendente', 'head_postag': u'ADJ'}, {'head_lemma': u'Bras\\xedlia', 'head': u'Bras\\xedlia', 'head_postag': u'PROP'}, {'head_lemma': u'Pesquisa_Datafolha', 'head': u'Pesquisa_Datafolha', 'head_postag': u'N'}, {'head_lemma': u'revelar', 'head': u'revela', 'head_postag': u'V-FIN'}, {'head_lemma': u'recusar', 'head': u'recusando', 'head_postag': u'V-GER'}, {'head_lemma': u'maioria', 'head': u'maioria', 'head_postag': u'N'}, {'head_lemma': u'PT', 'head': u'PT', 'head_postag': u'PROP'}, {'head_lemma': u'participar', 'head': u'participando', 'head_postag': u'V-GER'}, {'head_lemma': u'surpreendente', 'head': u'supreendente', 'head_postag': u'ADJ'}, {'head_lemma': u'Bras\\xedlia', 'head': u'Bras\\xedlia', 'head_postag': u'PROP'}, {'head_lemma': u'Pesquisa_Datafolha', 'head': u'Pesquisa_Datafolha', 'head_postag': u'N'}, {'head_lemma': u'revelar', 'head': u'revela', 'head_postag': u'V-FIN'}, {'head_lemma': u'governo', 'head': u'Governo', 'head_postag': u'N'}, {'head_lemma': u'de', 'head': u'de', 'head_postag': u'PRP'}, {'head_lemma': u'governo', 'head': u'Governo', 'head_postag': u'N'}, {'head_lemma': u'recusar', 'head': u'recusando', 'head_postag': u'V-GER'}, {'head_lemma': u'maioria', 'head': u'maioria', 'head_postag': u'N'}, {'head_lemma': u'querer', 'head': u'quer', 'head_postag': 
u'V-FIN'}, {'head_lemma': u'PT', 'head': u'PT', 'head_postag': u'PROP'}, {'head_lemma': u'surpreendente', 'head': u'supreendente', 'head_postag': u'ADJ'}, {'head_lemma': u'Bras\\xedlia', 'head': u'Bras\\xedlia', 'head_postag': u'PROP'}, {'head_lemma': u'Pesquisa_Datafolha', 'head': u'Pesquisa_Datafolha', 'head_postag': u'N'}, {'head_lemma': u'revelar', 'head': u'revela', 'head_postag': u'V-FIN'}, {'head_lemma': u'muito', 'head': u'Muitas', 'head_postag': u'PRON-DET'}, {'head_lemma': u'prioridade', 'head': u'prioridades', 'head_postag': u'N'}, {'head_lemma': u'com', 'head': u'com', 'head_postag': u'PRP'}, {'head_lemma': u'prioridade', 'head': u'prioridades', 'head_postag': u'N'}] train_argcands_target = ['NULL', 'ARG', 'ARG', 'ARG', 'NULL', 'NULL', 'NULL', 'NULL', 'NULL', 'NULL', 'NULL', 'NULL', 'ARG', 'ARG', 'ARG', 'ARG', 'NULL', 'NULL', 'NULL', 'NULL', 'ARG', 'NULL', 'NULL', 'NULL', 'NULL', 'NULL', 'ARG', 'NULL', 'NULL', 'NULL', 'NULL', 'ARG', 'ARG', 'NULL', 'NULL'] ```", "response":"I finally got to solve the problem. Two things had to be done: train_argcands_target is a list and it has to be a numpy array. I'm surprised it worked well before when I just used the estimator directly. For some reason (I don't know why, yet), it doesn't work either if I use the sparse matrix created by the DictVectorizer. I had to, \"manually\", transform each feature dictionary to a feature array with just integers representing each feature value. The transformation process is similar to the one I present in the code for the target values. Thanks to everyone who tried to help!", "best_answers_score":0.8, "library_name":"scikit-learn", "question_url":"https:\/\/stackoverflow.com\/questions\/12444316\/typeerror-only-integer-arrays-with-one-element-can-be-converted-to-an-index", "best_answers_votes":33, "question_length":5347, "response_length":584 }, { "question":"Why does sklearn Imputer need to fit? 
I'm really new to this whole machine learning thing and I'm taking an online course on this subject. In this course, the instructors showed the following piece of code: ``` imputer = Imputer(missing_values='NaN', strategy='mean', axis=0) imputer = imputer.fit(X[:, 1:3]) X[:, 1:3] = imputer.transform(X[:, 1:3]) ``` I don't really get why this imputer object needs to fit. I mean, I'm just trying to get rid of missing values in my columns by replacing them with the column mean. From the little I know about programming, this is a pretty simple, iterative procedure, and wouldn't require a model that has to train on data to be accomplished. Can someone please explain how this imputer thing works and why it requires training to replace some missing values by the column mean? I have read scikit-learn's documentation, but it just shows how to use the methods, and not why they're required. Thank you.", "response":"The Imputer fills missing values with some statistic (e.g. mean, median, ...) of the data. To avoid data leakage during cross-validation, it computes the statistic on the train data during the fit, stores it, and uses it on the test data during the transform. ```py import numpy as np from sklearn.preprocessing import Imputer obj = Imputer(strategy='mean') obj.fit([[1, 2, 3], [2, 3, 4]]) print(obj.statistics_) # array([ 1.5, 2.5, 3.5]) X = obj.transform([[4, np.nan, 6], [5, 6, np.nan]]) print(X) # array([[ 4. , 2.5, 6. ], # [ 5. , 6. , 3.5]]) ``` You can do both steps in one if your train and test data are identical, using fit_transform. ```py X = obj.fit_transform([[1, 2, np.nan], [2, 3, 4]]) print(X) # array([[ 1. , 2. , 4. ], # [ 2. , 3. , 4. ]]) ``` This data leakage issue is important, since the data distribution may change from the training data to the testing data, and you don't want information from the testing data to already be present during the fit. 
See the doc for more information about cross-validation.", "best_answers_score":0.8, "library_name":"scikit-learn", "question_url":"https:\/\/stackoverflow.com\/questions\/46691596\/why-does-sklearn-imputer-need-to-fit", "best_answers_votes":42, "question_length":940, "response_length":1015 }, { "question":"Data Standardization vs Normalization vs Robust Scaler I am working on data preprocessing and want to compare the benefits of Data Standardization vs Normalization vs Robust Scaler practically. In theory, the guidelines are: Advantages: Standardization: scales features such that the distribution is centered around 0, with a standard deviation of 1. Normalization: shrinks the range such that the range is now between 0 and 1 (or -1 to 1 if there are negative values). Robust Scaler: similar to normalization but it instead uses the interquartile range, so that it is robust to outliers. Disadvantages: Standardization: not good if the data is not normally distributed (i.e. no Gaussian Distribution). Normalization: get influenced heavily by outliers (i.e. extreme values). Robust Scaler: doesn't take the median into account and only focuses on the parts where the bulk data is. I created 20 random numerical inputs and tried the above-mentioned methods (numbers in red color represent the outliers): I noticed that -indeed- the Normalization got affected negatively by the outliers and the change scale between the new values became tiny (all values almost identical -6 digits after the decimal point- 0.000000x) even there is noticeable differences between the original inputs! My questions are: Am I right to say that also Standardization gets affected negatively by the extreme values as well? If not, why according to the result provided? I really can't see how the Robust Scaler improved the data because I still have extreme values in the resulted data set? 
Any simple complete interpretation?", "response":"Am I right to say that also Standardization gets affected negatively by the extreme values as well? Indeed you are; the scikit-learn docs themselves clearly warn for such a case: However, when data contains outliers, StandardScaler can often be mislead. In such cases, it is better to use a scaler that is robust against outliers. More or less, the same holds true for the MinMaxScaler as well. I really can't see how the Robust Scaler improved the data because I still have extreme values in the resulted data set? Any simple -complete interpretation? Robust does not mean immune, or invulnerable, and the purpose of scaling is not to \"remove\" outliers and extreme values - this is a separate task with its own methodologies; this is again clearly mentioned in the relevant scikit-learn docs: RobustScaler [...] Note that the outliers themselves are still present in the transformed data. If a separate outlier clipping is desirable, a non-linear transformation is required (see below). where the \"see below\" refers to the QuantileTransformer and quantile_transform.", "best_answers_score":0.8, "library_name":"scikit-learn", "question_url":"https:\/\/stackoverflow.com\/questions\/51841506\/data-standardization-vs-normalization-vs-robust-scaler", "best_answers_votes":24, "question_length":1603, "response_length":1067 }, { "question":"Multiclass classification with xgboost classifier? I am trying out multi-class classification with xgboost and I've built it using this code, ``` clf = xgb.XGBClassifier(max_depth=7, n_estimators=1000) clf.fit(byte_train, y_train) train1 = clf.predict_proba(train_data) test1 = clf.predict_proba(test_data) ``` This gave me some good results. I've got log-loss below 0.7 for my case. But after looking through few pages I've found that we have to use another objective in XGBClassifier for multi-class problem. Here's what is recommended from those pages. 
``` clf = xgb.XGBClassifier(max_depth=5, objective='multi:softprob', n_estimators=1000, num_classes=9) clf.fit(byte_train, y_train) train1 = clf.predict_proba(train_data) test1 = clf.predict_proba(test_data) ``` This code is also working but it's taking a lot of time to complete compared when to my first code. Why is my first code also working for multi-class case? I have checked that it's default objective is binary:logistic used for binary classification but it worked really well for multi-class? Which one should I use if both are correct?", "response":"In fact, even if the default obj parameter of XGBClassifier is binary:logistic, it will internally judge the number of class of label y. When the class number is greater than 2, it will modify the obj parameter to multi:softmax. https:\/\/github.com\/dmlc\/xgboost\/blob\/master\/python-package\/xgboost\/sklearn.py ```py class XGBClassifier(XGBModel, XGBClassifierBase): # pylint: disable=missing-docstring,invalid-name,too-many-instance-attributes def __init__(self, objective=\"binary:logistic\", **kwargs): super().__init__(objective=objective, **kwargs) def fit(self, X, y, sample_weight=None, base_margin=None, eval_set=None, eval_metric=None, early_stopping_rounds=None, verbose=True, xgb_model=None, sample_weight_eval_set=None, callbacks=None): # pylint: disable = attribute-defined-outside-init,arguments-differ evals_result = {} self.classes_ = np.unique(y) self.n_classes_ = len(self.classes_) xgb_options = self.get_xgb_params() if callable(self.objective): obj = _objective_decorator(self.objective) # Use default value. Is it really not used ? 
xgb_options[\"objective\"] = \"binary:logistic\" else: obj = None if self.n_classes_ > 2: # Switch to using a multiclass objective in the underlying # XGB instance xgb_options['objective'] = 'multi:softprob' xgb_options['num_class'] = self.n_classes_ ```", "best_answers_score":0.8, "library_name":"scikit-learn", "question_url":"https:\/\/stackoverflow.com\/questions\/57986259\/multiclass-classification-with-xgboost-classifier", "best_answers_votes":56, "question_length":1103, "response_length":1298 }, { "question":"Installing scipy and scikit-learn on apple m1 The installation on the m1 chip for the following packages: Numpy 1.21.1, pandas 1.3.0, torch 1.9.0 and a few other ones works fine for me. They also seem to work properly while testing them. However when I try to install scipy or scikit-learn via pip this error appears: ERROR: Failed building wheel for numpy Failed to build numpy ERROR: Could not build wheels for numpy which use PEP 517 and cannot be installed directly Why should Numpy be build again when I have the latest version from pip already installed? Every previous installation was done using python3.9 -m pip install ... on Mac OS 11.3.1 with the apple m1 chip. Maybe somebody knows how to deal with this error or if its just a matter of time.", "response":"UPDATE: scikit-learn now works via pip \u2705 Just first brew install openblas - it has instructions for different processors (wikipedia) ```bash brew install openblas export OPENBLAS=$(\/opt\/homebrew\/bin\/brew --prefix openblas) export CFLAGS=\"-falign-functions=8 ${CFLAGS}\" # ^ no need to add to .zshrc, just doing this once. pip install scikit-learn ``` Worked great on Apple Silicon M1 \ud83c\udf89 Extra details about how Pip works Pip downloaded the source from Pipy, then built the wheel targeting MacOS X 12.0, and arm64 (apple silicon): scikit_learn-1.0.1-cp38-cp38-macosx_12_0_arm64.whl. 
``` Building wheels for collected packages: scikit-learn Building wheel for scikit-learn (pyproject.toml) ... done Created wheel for scikit-learn: filename=scikit_learn-1.0.1-cp38-cp38-macosx_12_0_arm64.whl size=6364030 sha256=0b0cc9a21af775e0c8077ee71698ff62da05ab62efc914c5c15cd4bf97867b31 Successfully built scikit-learn Installing collected packages: scipy, scikit-learn Successfully installed scikit-learn-1.0.1 scipy-1.7.3 ``` Note on PyPI: we usually download either a pre-built wheel (yay, this is excellent for reliable distribution and ensuring compatibility). Or, if no prebuilt wheel exists (sad), then we download a tar.gz and build it ourselves. This happens because the authors don't publish a prebuilt wheel to PyPI, but more and more people are adding this to their CI (GitHub Actions) workflow. Building the wheel ourselves takes more CPU time, and is generally less reliable, but works in this case. Here we are downloading a pre-built wheel that has very few limitations: it works for any version of Python 3, for any OS, for any architecture (like amd64 or arm64): click-8.0.3-py3-none-any.whl
done ``` Good luck and happy piping.", "best_answers_score":0.8, "library_name":"scikit-learn", "question_url":"https:\/\/stackoverflow.com\/questions\/68620927\/installing-scipy-and-scikit-learn-on-apple-m1", "best_answers_votes":47, "question_length":755, "response_length":2168 }, { "question":"How can i know probability of class predicted by predict() function in Support Vector Machine? How can i know sample's probability that it belongs to a class predicted by predict() function of Scikit-Learn in Support Vector Machine? ``` >>>print clf.predict([fv]) [5] ``` There is any function?", "response":"Definitely read this section of the docs as there's some subtleties involved. See also Scikit-learn predict_proba gives wrong answers Basically, if you have a multi-class problem with plenty of data predict_proba as suggested earlier works well. Otherwise, you may have to make do with an ordering that doesn't yield probability scores from decision_function. Here's a nice motif for using predict_proba to get a dictionary or list of class vs probability: ``` model = svm.SVC(probability=True) model.fit(X, Y) results = model.predict_proba(test_data)[0] # gets a dictionary of {'class_name': probability} prob_per_class_dictionary = dict(zip(model.classes_, results)) # gets a list of ['most_probable_class', 'second_most_probable_class', ..., 'least_class'] results_ordered_by_probability = map(lambda x: x[0], sorted(zip(model.classes_, results), key=lambda x: x[1], reverse=True)) ```", "best_answers_score":0.8, "library_name":"scikit-learn", "question_url":"https:\/\/stackoverflow.com\/questions\/15015710\/how-can-i-know-probability-of-class-predicted-by-predict-function-in-support-v", "best_answers_votes":37, "question_length":294, "response_length":888 }, { "question":"Python sci-kit learn (metrics): difference between r2_score and explained_variance_score? 
I noticed that r2_score and explained_variance_score are both built-in sklearn.metrics methods for regression problems. I was always under the impression that r2_score is the percent variance explained by the model. How is it different from explained_variance_score? When would you choose one over the other? Thanks!", "response":"Most of the answers I found (including here) emphasize the difference between R2 and the Explained Variance Score, that is: the Mean Residual (i.e. the Mean of Error). However, there is an important question left behind: why on earth do I need to consider the Mean of Error? Refresher: R2 is the Coefficient of Determination, which measures the amount of variation explained by the (least-squares) Linear Regression. You can look at it from a different angle for the purpose of evaluating the predicted values of y, like this: Variance(actual_y) \u00d7 R2 = Variance(predicted_y) So intuitively, the closer R2 is to 1, the more actual_y and predicted_y will have the same variance (i.e. the same spread). As previously mentioned, the main difference is the Mean of Error; and if we look at the formulas, we find that's true: ``` R2 = 1 - [(Sum of Squared Residuals \/ n) \/ Variance(actual_y)] Explained Variance Score = 1 - [Variance(predicted_y - actual_y) \/ Variance(actual_y)] ``` in which: ``` Variance(predicted_y - actual_y) = (Sum of Squared Residuals \/ n) - (Mean Error)^2 ``` So, obviously, the only difference is that the Explained Variance Score additionally subtracts the squared Mean Error! ... But why? When we compare the R2 Score with the Explained Variance Score, we are basically checking the Mean Error; so if R2 = Explained Variance Score, that means: the Mean Error = Zero! The Mean Error reflects the tendency of our estimator, that is: biased vs. unbiased estimation. 
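To make the bias effect concrete, here is a tiny sketch (the numbers are made up; r2_score and explained_variance_score are the standard sklearn.metrics functions). A prediction that is off by a constant +1 everywhere matches the spread of the targets perfectly, so the explained variance is 1.0, while R2 is penalized for the bias: ```python # Constant-offset (biased) predictions: each value is off by exactly +1,
# so Mean Error = 1 while the spread of the predictions matches the targets.
from sklearn.metrics import explained_variance_score, r2_score

y_true = [1, 2, 3, 4, 5]
y_pred = [2, 3, 4, 5, 6]

print(explained_variance_score(y_true, y_pred))  # 1.0 -> spread fully explained
print(r2_score(y_true, y_pred))                  # 0.5 -> penalized for the bias
``` With an unbiased prediction (Mean Error = 0), the two scores coincide.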
In Summary: If you want an unbiased estimator, so that your model is neither underestimating nor overestimating, you may consider taking the Mean of Error into account.", "best_answers_score":0.8, "library_name":"scikit-learn", "question_url":"https:\/\/stackoverflow.com\/questions\/24378176\/python-sci-kit-learn-metrics-difference-between-r2-score-and-explained-varian", "best_answers_votes":38, "question_length":411, "response_length":1618 }, { "question":"python scikit-learn clustering with missing data I want to cluster data with missing columns. Doing it manually, I would calculate the distance in the case of a missing column simply without this column. With scikit-learn, handling missing data is not possible. There is also no way to specify a user-defined distance function. Is there any chance to cluster with missing data? Example data: ``` n_samples = 1500 noise = 0.05 X, _ = make_swiss_roll(n_samples, noise) rnd = np.random.rand(X.shape[0],X.shape[1]) X[rnd<0.1] = np.nan ```", "response":"I think you can use an iterative EM-type algorithm: 1) initialize missing values to their column means; 2) repeat until convergence: perform K-means clustering on the filled-in data, then set the missing values to the centroid coordinates of the clusters to which they were assigned. Implementation ``` import numpy as np from sklearn.cluster import KMeans def kmeans_missing(X, n_clusters, max_iter=10): \"\"\"Perform K-Means clustering on data with missing values. Args: X: An [n_samples, n_features] array of data to cluster. n_clusters: Number of clusters to form. max_iter: Maximum number of EM iterations to perform. Returns: labels: An [n_samples] vector of integer labels. centroids: An [n_clusters, n_features] array of cluster centroids. X_hat: Copy of X with the missing values filled in. 
\"\"\" # Initialize missing values to their column means missing = ~np.isfinite(X) mu = np.nanmean(X, 0, keepdims=1) X_hat = np.where(missing, mu, X) for i in xrange(max_iter): if i > 0: # initialize KMeans with the previous set of centroids. this is much # faster and makes it easier to check convergence (since labels # won't be permuted on every iteration), but might be more prone to # getting stuck in local minima. cls = KMeans(n_clusters, init=prev_centroids) else: # do multiple random initializations in parallel cls = KMeans(n_clusters, n_jobs=-1) # perform clustering on the filled-in data labels = cls.fit_predict(X_hat) centroids = cls.cluster_centers_ # fill in the missing values based on their cluster centroids X_hat[missing] = centroids[labels][missing] # when the labels have stopped changing then we have converged if i > 0 and np.all(labels == prev_labels): break prev_labels = labels prev_centroids = cls.cluster_centers_ return labels, centroids, X_hat ``` Example with fake data ``` from sklearn.datasets import make_blobs from matplotlib import pyplot as plt from mpl_toolkits.mplot3d import Axes3D def make_fake_data(fraction_missing, n_clusters=5, n_samples=1500, n_features=3, seed=None): # complete data gen = np.random.RandomState(seed) X, true_labels = make_blobs(n_samples, n_features, n_clusters, random_state=gen) # with missing values missing = gen.rand(*X.shape) < fraction_missing Xm = np.where(missing, np.nan, X) return X, true_labels, Xm X, true_labels, Xm = make_fake_data(fraction_missing=0.3, n_clusters=5, seed=0) labels, centroids, X_hat = kmeans_missing(Xm, n_clusters=5) # plot the inferred points, color-coded according to the true cluster labels fig, ax = plt.subplots(1, 2, subplot_kw={'projection':'3d', 'aspect':'equal'}) ax[0].scatter3D(X[:, 0], X[:, 1], X[:, 2], c=true_labels, cmap='gist_rainbow') ax[1].scatter3D(X_hat[:, 0], X_hat[:, 1], X_hat[:, 2], c=true_labels, cmap='gist_rainbow') ax[0].set_title('Original data') 
ax[1].set_title('Imputed (30% missing values)') fig.tight_layout() ``` Benchmark To assess the algorithm's performance, we can use the adjusted mutual information between the true and inferred cluster labels. A score of 1 is perfect performance and 0 represents chance: ``` from sklearn.metrics import adjusted_mutual_info_score fraction = np.arange(0.0, 1.0, 0.05) n_repeat = 10 scores = np.empty((2, fraction.shape[0], n_repeat)) for i, frac in enumerate(fraction): for j in range(n_repeat): X, true_labels, Xm = make_fake_data(fraction_missing=frac, n_clusters=5) labels, centroids, X_hat = kmeans_missing(Xm, n_clusters=5) any_missing = np.any(~np.isfinite(Xm), 1) scores[0, i, j] = adjusted_mutual_info_score(labels, true_labels) scores[1, i, j] = adjusted_mutual_info_score(labels[any_missing], true_labels[any_missing]) fig, ax = plt.subplots(1, 1) scores_all, scores_missing = scores ax.errorbar(fraction * 100, scores_all.mean(-1), yerr=scores_all.std(-1), label='All labels') ax.errorbar(fraction * 100, scores_missing.mean(-1), yerr=scores_missing.std(-1), label='Labels with missing values') ax.set_xlabel('% missing values') ax.set_ylabel('Adjusted mutual information') ax.legend(loc='best', frameon=False) ax.set_ylim(0, 1) ax.set_xlim(-5, 100) ``` Update: In fact, after a quick Google search it seems that what I've come up with above is pretty much the same as the k-POD algorithm for K-means clustering of missing data (Chi, Chi & Baraniuk, 2016).", "best_answers_score":0.8, "library_name":"scikit-learn", "question_url":"https:\/\/stackoverflow.com\/questions\/35611465\/python-scikit-learn-clustering-with-missing-data", "best_answers_votes":48, "question_length":515, "response_length":4222 }, { "question":"How to save Scikit-Learn-Keras Model into a Persistence File (pickle\/hd5\/json\/yaml) I have the following code, using Keras Scikit-Learn Wrapper: ``` from keras.models import Sequential from sklearn import datasets from keras.layers import Dense from 
sklearn.model_selection import train_test_split from keras.wrappers.scikit_learn import KerasClassifier from sklearn.model_selection import StratifiedKFold from sklearn.model_selection import cross_val_score from sklearn import preprocessing import pickle import numpy as np import json def classifier(X, y): \"\"\" Description of classifier \"\"\" NOF_ROW, NOF_COL = X.shape def create_model(): # create model model = Sequential() model.add(Dense(12, input_dim=NOF_COL, init='uniform', activation='relu')) model.add(Dense(6, init='uniform', activation='relu')) model.add(Dense(1, init='uniform', activation='sigmoid')) # Compile model model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy']) return model # evaluate using 10-fold cross validation seed = 7 np.random.seed(seed) model = KerasClassifier(build_fn=create_model, nb_epoch=150, batch_size=10, verbose=0) return model def main(): \"\"\" Description of main \"\"\" iris = datasets.load_iris() X, y = iris.data, iris.target X = preprocessing.scale(X) X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.4, random_state=0) model_tt = classifier(X_train, y_train) model_tt.fit(X_train,y_train) #-------------------------------------------------- # This fail #-------------------------------------------------- filename = 'finalized_model.sav' pickle.dump(model_tt, open(filename, 'wb')) # load the model from disk loaded_model = pickle.load(open(filename, 'rb')) result = loaded_model.score(X_test, Y_test) print(result) #-------------------------------------------------- # This also fail #-------------------------------------------------- # from keras.models import load_model # model_tt.save('test_model.h5') #-------------------------------------------------- # This works OK #-------------------------------------------------- # print model_tt.score(X_test, y_test) # print model_tt.predict_proba(X_test) # print model_tt.predict(X_test) # Output of predict_proba # 2nd column is the 
probability that the prediction is 1 # this value is used as final score, which can be used # with other method as comparison # [ [ 0.25311464 0.74688536] # [ 0.84401423 0.15598579] # [ 0.96047372 0.03952631] # ..., # [ 0.25518912 0.74481088] # [ 0.91467732 0.08532269] # [ 0.25473493 0.74526507]] # Output of predict # [[1] # [0] # [0] # ..., # [1] # [0] # [1]] if __name__ == '__main__': main() ``` As stated in the code there it fails at this line: ``` pickle.dump(model_tt, open(filename, 'wb')) ``` With this error: ``` pickle.PicklingError: Can't pickle : it's not found as __main__.create_model ``` How can I get around it?", "response":"Edit 1 : Original answer about saving model With HDF5 : ``` # saving model json_model = model_tt.model.to_json() open('model_architecture.json', 'w').write(json_model) # saving weights model_tt.model.save_weights('model_weights.h5', overwrite=True) # loading model from keras.models import model_from_json model = model_from_json(open('model_architecture.json').read()) model.load_weights('model_weights.h5') # dont forget to compile your model model.compile(loss='binary_crossentropy', optimizer='adam') ``` Edit 2 : full code example with iris dataset ``` # Train model and make predictions import numpy import pandas from keras.models import Sequential, model_from_json from keras.layers import Dense from keras.utils import np_utils from sklearn import datasets from sklearn import preprocessing from sklearn.model_selection import train_test_split from sklearn.preprocessing import LabelEncoder # fix random seed for reproducibility seed = 7 numpy.random.seed(seed) # load dataset iris = datasets.load_iris() X, Y, labels = iris.data, iris.target, iris.target_names X = preprocessing.scale(X) # encode class values as integers encoder = LabelEncoder() encoder.fit(Y) encoded_Y = encoder.transform(Y) # convert integers to dummy variables (i.e. 
one hot encoded) y = np_utils.to_categorical(encoded_Y) def build_model(): # create model model = Sequential() model.add(Dense(4, input_dim=4, init='normal', activation='relu')) model.add(Dense(3, init='normal', activation='sigmoid')) model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy']) return model def save_model(model): # saving model json_model = model.to_json() open('model_architecture.json', 'w').write(json_model) # saving weights model.save_weights('model_weights.h5', overwrite=True) def load_model(): # loading model model = model_from_json(open('model_architecture.json').read()) model.load_weights('model_weights.h5') model.compile(loss='categorical_crossentropy', optimizer='adam') return model X_train, X_test, Y_train, Y_test = train_test_split(X, y, test_size=0.3, random_state=seed) # build model = build_model() model.fit(X_train, Y_train, nb_epoch=200, batch_size=5, verbose=0) # save save_model(model) # load model = load_model() # predictions predictions = model.predict_classes(X_test, verbose=0) print(predictions) # reverse encoding for pred in predictions: print(labels[pred]) ``` Please note that I used Keras only, not the wrapper. It only add some complexity in something simple. Also code is voluntarily not refactored so you can have the whole picture. Also, you said you want to output 1 or 0. It is not possible in this dataset because you have 3 output dims and classes (Iris-setosa, Iris-versicolor, Iris-virginica). If you had only 2 classes then your output dim and classes would be 0 or 1 using sigmoid output fonction.", "best_answers_score":0.8, "library_name":"scikit-learn", "question_url":"https:\/\/stackoverflow.com\/questions\/40396042\/how-to-save-scikit-learn-keras-model-into-a-persistence-file-pickle-hd5-json-ya", "best_answers_votes":18, "question_length":2857, "response_length":2841 }, { "question":"OLS Regression: Scikit vs. Statsmodels? [closed] Closed. This question needs to be more focused. 
It is not currently accepting answers. Closed 4 years ago. Short version: I was using the scikit LinearRegression on some data, but I'm used to p-values so put the data into the statsmodels OLS, and although the R^2 is about the same the variable coefficients are all different by large amounts. This concerns me since the most likely problem is that I've made an error somewhere and now I don't feel confident in either output (since likely I have made one model incorrectly but don't know which one). Longer version: Because I don't know where the issue is, I don't know exactly which details to include, and including everything is probably too much. I am also not sure about including code or data. I am under the impression that scikit's LR and statsmodels OLS should both be doing OLS, and as far as I know OLS is OLS so the results should be the same. For scikit's LR, the results are (statistically) the same whether or not I set normalize=True or =False, which I find somewhat strange. For statsmodels OLS, I normalize the data using StandardScaler from sklearn. I add a column of ones so it includes an intercept (since scikit's output includes an intercept). More on that here: http:\/\/statsmodels.sourceforge.net\/devel\/examples\/generated\/example_ols.html (Adding this column did not change the variable coefficients to any notable degree and the intercept was very close to zero.) StandardScaler didn't like that my ints weren't floats, so I tried this: https:\/\/github.com\/scikit-learn\/scikit-learn\/issues\/1709 That makes the warning go away but the results are exactly the same. 
Granted I'm using 5-folds cv for the sklearn approach (R^2 are consistent for both test and training data each time), and for statsmodels I just throw it all the data. R^2 is about 0.41 for both sklearn and statsmodels (this is good for social science). This could be a good sign or just a coincidence. The data is observations of avatars in WoW (from http:\/\/mmnet.iis.sinica.edu.tw\/dl\/wowah\/) which I munged about to make it weekly with some different features. Originally this was a class project for a data science class. Independent variables include number of observations in a week (int), character level (int), if in a guild (Boolean), when seen (Booleans on weekday day, weekday eve, weekday late, and the same three for weekend), a dummy for character class (at the time for the data collection, there were only 8 classes in WoW, so there are 7 dummy vars and the original string categorical variable is dropped), and others. The dependent variable is how many levels each character gained during that week (int). Interestingly, some of the relative order within like variables is maintained across statsmodels and sklearn. So, rank order of \"when seen\" is the same although the loadings are very different, and rank order for the character class dummies is the same although again the loadings are very different. I think this question is similar to this one: Difference in Python statsmodels OLS and R's lm I am good enough at Python and stats to make a go of it, but then not good enough to figure something like this out. I tried reading the sklearn docs and the statsmodels docs, but if the answer was there staring me in the face I did not understand it. I would love to know: Which output might be accurate? (Granted they might both be if I missed a kwarg.) If I made a mistake, what is it and how to fix it? Could I have figured this out without asking here, and if so how? 
I know this question has some rather vague bits (no code, no data, no output), but I am thinking it is more about the general processes of the two packages. Sure, one seems to be more stats and one seems to be more machine learning, but they're both OLS so I don't understand why the outputs aren't the same. (I even tried some other OLS calls to triangulate, one gave a much lower R^2, one looped for five minutes and I killed it, and one crashed.) Thanks!", "response":"It sounds like you are not feeding the same matrix of regressors X to both procedures (but see below). Here's an example to show you which options you need to use for sklearn and statsmodels to produce identical results. ``` import numpy as np import statsmodels.api as sm from sklearn.linear_model import LinearRegression # Generate artificial data (2 regressors + constant) nobs = 100 X = np.random.random((nobs, 2)) X = sm.add_constant(X) beta = [1, .1, .5] e = np.random.random(nobs) y = np.dot(X, beta) + e # Fit regression model sm.OLS(y, X).fit().params >> array([ 1.4507724 , 0.08612654, 0.60129898]) LinearRegression(fit_intercept=False).fit(X, y).coef_ >> array([ 1.4507724 , 0.08612654, 0.60129898]) ``` As a commenter suggested, even if you are giving both programs the same X, X may not have full column rank, and they sm\/sk could be taking (different) actions under-the-hood to make the OLS computation go through (i.e. dropping different columns). 
I recommend you use pandas and patsy to take care of this: ``` import pandas as pd from patsy import dmatrices dat = pd.read_csv('wow.csv') y, X = dmatrices('levels ~ week + character + guild', data=dat) ``` Or, alternatively, the statsmodels formula interface: ``` import statsmodels.formula.api as smf dat = pd.read_csv('wow.csv') mod = smf.ols('levels ~ week + character + guild', data=dat).fit() ``` Edit: This example might be useful: http:\/\/statsmodels.sourceforge.net\/devel\/example_formulas.html", "best_answers_score":0.8, "library_name":"scikit-learn", "question_url":"https:\/\/stackoverflow.com\/questions\/22054964\/ols-regression-scikit-vs-statsmodels", "best_answers_votes":42, "question_length":4375, "response_length":1465 }, { "question":"How to put more weight on certain features in machine learning? If using a library like scikit-learn, how do I assign more weight to certain features in the input to a classifier like SVM? Is this something people do or not?", "response":"First of all - you should probably not do it. The whole concept of machine learning is to use statistical analysis to assign optimal weights. You are interfering here with the whole concept, thus you need really strong evidence that this is crucial to the process you are trying to model, and for some reason your model is currently missing it. That being said - there is no general answer. This is purely model specific; some models will allow you to weight features - in a random forest you could bias the distribution from which you sample features to analyse towards the ones that you are interested in; in an SVM it should be enough to just multiply a given feature by a constant - remember when you were told to normalize your features in SVM? This is why - you can use the scale of features to 'steer' your classifier towards given features. The ones with high values will be preferred. 
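A minimal sketch of that scaling trick (the 5x multiplier and the choice of feature 0 are arbitrary illustrative assumptions, not part of the original answer): ```python # After standardizing, multiply the columns you care about by a constant > 1;
# a norm-regularized linear SVM will then lean more heavily on those columns.
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.svm import LinearSVC

rng = np.random.RandomState(0)
X = rng.randn(200, 3)
y = (X[:, 0] > 0).astype(int)  # the label depends only on feature 0

X_std = StandardScaler().fit_transform(X)
X_weighted = X_std.copy()
X_weighted[:, 0] *= 5.0  # emphasize feature 0

clf = LinearSVC().fit(X_weighted, y)
``` Keep in mind that the multiplier interacts with the regularization strength C, so it is best treated as a hyperparameter rather than a fixed constant.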
This will actually work for most linear weight norm-regularized models (regularized logistic regression, ridge regression, lasso etc.).", "best_answers_score":0.8, "library_name":"scikit-learn", "question_url":"https:\/\/stackoverflow.com\/questions\/38034702\/how-to-put-more-weight-on-certain-features-in-machine-learning", "best_answers_votes":39, "question_length":224, "response_length":1020 }, { "question":"Predicting how long an scikit-learn classification will take to run Is there a way to predict how long it will take to run a classifier from sci-kit learn based on the parameters and dataset? I know, pretty meta, right? Some classifiers\/parameter combinations are quite fast, and some take so long that I eventually just kill the process. I'd like a way to estimate in advance how long it will take. Alternatively, I'd accept some pointers on how to set common parameters to reduce the run time.", "response":"There are very specific classes of classifiers or regressors that directly report remaining time or progress of your algorithm (number of iterations etc.). Most of this can be turned on by passing the verbose=2 (any number > 1) option to the constructor of individual models. Note: this behavior is according to sklearn-0.14. Earlier versions have a bit different verbose output (still useful though). The best example of this is ensemble.RandomForestClassifier or ensemble.GradientBoostingClassifier that print the number of trees built so far and remaining time. ``` clf = ensemble.GradientBoostingClassifier(verbose=3) clf.fit(X, y) Out: Iter Train Loss Remaining Time 1 0.0769 0.10s ... ``` Or ``` clf = ensemble.RandomForestClassifier(verbose=3) clf.fit(X, y) Out: building tree 1 of 100 ... ``` This progress information is fairly useful to estimate the total time. Then there are other models like SVMs that print the number of optimization iterations completed, but do not directly report the remaining time. 
``` clf = svm.SVC(verbose=2) clf.fit(X, y) Out: * optimization finished, #iter = 1 obj = -1.802585, rho = 0.000000 nSV = 2, nBSV = 2 ... ``` Other models, like linear models, don't provide such diagnostic information as far as I know. Check this thread to know more about what the verbosity levels mean: scikit-learn fit remaining time", "best_answers_score":0.8, "library_name":"scikit-learn", "question_url":"https:\/\/stackoverflow.com\/questions\/22443041\/predicting-how-long-an-scikit-learn-classification-will-take-to-run", "best_answers_votes":39, "question_length":495, "response_length":1345 }, { "question":"What is the difference between sample weight and class weight options in scikit learn? I have a class imbalance problem and want to solve this using cost sensitive learning. under sample and over sample give weights to class to use a modified loss function Question Scikit learn has 2 options called class weights and sample weights. Is sample weight actually doing option 2) and class weight option 1)? Is option 2) the recommended way of handling class imbalance?", "response":"They're similar concepts, but with sample_weights you can force the estimator to pay more attention to some samples, and with class_weights you can force the estimator to learn with attention to some particular class. sample_weight=0 or class_weight=0 basically means that the estimator doesn't need to take such samples\/classes into consideration in the learning process at all. Thus a classifier (for example) will never predict some class if class_weight = 0 for this class. If some sample_weight\/class_weight is bigger than the sample_weight\/class_weight of other samples\/classes - the estimator will try to minimize error on those samples\/classes in the first place. You can use user-defined sample_weights and class_weights simultaneously. If you want to undersample\/oversample your training set with simple cloning\/removing - this will be equal to increasing\/decreasing of corresponding sample_weights\/class_weights. 
In more complex cases you can also try to artificially generate samples, with techniques like SMOTE.", "best_answers_score":0.8, "library_name":"scikit-learn", "question_url":"https:\/\/stackoverflow.com\/questions\/32492550\/what-is-the-difference-between-sample-weight-and-class-weight-options-in-scikit", "best_answers_votes":19, "question_length":468, "response_length":988 }, { "question":"Using cosine distance with scikit learn KNeighborsClassifier Is it possible to use something like 1 - cosine similarity with scikit learn's KNeighborsClassifier? This answer says no, but on the documentation for KNeighborsClassifier, it says the metrics mentioned in DistanceMetrics are available. Distance metrics don't include an explicit cosine distance, probably because it's not really a distance, but supposedly it's possible to input a function into the metric. I tried inputting the scikit learn linear kernel into KNeighborsClassifier but it gives me an error that the function needs two arrays as arguments. Anyone else tried this?", "response":"The cosine similarity is generally defined as x^T y \/ (||x|| * ||y||), and outputs 1 if they are the same and goes to -1 if they are completely different. This definition is not technically a metric, and so you can't use accelerating structures like ball and kd trees with it. If you force scikit learn to use the brute force approach, you should be able to use it as a distance if you pass it your own custom distance metric object. There are methods of transforming the cosine similarity into a valid distance metric if you would like to use ball trees (you can find one in the JSAT library). Notice though, that x^T y \/ (||x|| * ||y||) = (x\/||x||)^T (y\/||y||). The euclidean distance can be equivalently written as sqrt(x^T x + y^T y \u2212 2 x^T y). If we normalize every datapoint before giving it to the KNeighborsClassifier, then x^T x = 1 for all x. So the euclidean distance will degrade to sqrt(2 \u2212 2x^T y). 
For completely the same inputs, we would get sqrt(2-2*1) = 0 and for complete opposites sqrt(2-2*-1) = 2. Since this is a monotonic function of x^T y, you can get the same ordering as the cosine distance by normalizing your data and then using the euclidean distance. So long as you use the uniform weights option, the results will be identical to having used a correct Cosine Distance.", "best_answers_score":0.8, "library_name":"scikit-learn", "question_url":"https:\/\/stackoverflow.com\/questions\/34144632\/using-cosine-distance-with-scikit-learn-kneighborsclassifier", "best_answers_votes":71, "question_length":641, "response_length":1280 }, { "question":"sklearn: Turning off warnings When I'm fitting sklearn's LogisticRegression using a 1 column python pandas DataFrame (not a Series object), I get this warning: ``` \/Library\/Python\/2.7\/site-packages\/sklearn\/preprocessing\/label.py:125: DataConversionWarning: A column-vector y was passed when a 1d array was expected. Please change the shape of y to (n_samples, ), for example using ravel(). y = column_or_1d(y, warn=True) ``` I know I could easily avert this warning in my code, but how can I turn off these warnings?", "response":"You can use this: ``` import warnings from sklearn.exceptions import DataConversionWarning warnings.filterwarnings(action='ignore', category=DataConversionWarning) ```", "best_answers_score":0.8, "library_name":"scikit-learn", "question_url":"https:\/\/stackoverflow.com\/questions\/29086398\/sklearn-turning-off-warnings", "best_answers_votes":46, "question_length":517, "response_length":167 }, { "question":"sklearn stratified sampling based on a column I have a fairly large CSV file containing amazon review data which I read into a pandas data frame. 
I want to split the data 80-20 (train-test) but while doing so I want to ensure that the split data is proportionally representing the values of one column (Categories), i.e. all the different categories of reviews are present both in train and test data proportionally. The data looks like this: ``` **ReviewerID** **ReviewText** **Categories** **ProductId** 1212 good product Mobile 14444425 1233 will buy again drugs 324532 5432 not recomended dvd 789654123 ``` I'm using the following code to do so: ``` import pandas as pd Meta = pd.read_csv('C:\\\\Users\\\\xyz\\\\Desktop\\\\WM Project\\\\Joined.csv') import numpy as np from sklearn.cross_validation import train_test_split train, test = train_test_split(Meta.categories, test_size = 0.2, stratify=y) ``` It gives the following error: ``` NameError: name 'y' is not defined ``` As I'm relatively new to Python I can't figure out what I'm doing wrong or whether this code will stratify based on column categories. It seems to work fine when I remove the stratify option as well as the categories column from train-test split. 
Any help will be appreciated.", "response":"``` >>> import pandas as pd >>> Meta = pd.read_csv('C:\\\\Users\\\\*****\\\\Downloads\\\\so\\\\Book1.csv') >>> import numpy as np >>> from sklearn.model_selection import train_test_split >>> y = Meta.pop('Categories') >>> Meta ReviewerID ReviewText ProductId 0 1212 good product 14444425 1 1233 will buy again 324532 2 5432 not recomended 789654123 >>> y 0 Mobile 1 drugs 2 dvd Name: Categories, dtype: object >>> X = Meta >>> X_train, X_test, y_train, y_test = train_test_split( X, y, test_size=0.33, random_state=42, stratify=y) >>> X_test ReviewerID ReviewText ProductId 0 1212 good product 14444425 ```", "best_answers_score":0.8, "library_name":"scikit-learn", "question_url":"https:\/\/stackoverflow.com\/questions\/36997619\/sklearn-stratified-sampling-based-on-a-column", "best_answers_votes":40, "question_length":1240, "response_length":596 }, { "question":"'super' object has no attribute '__sklearn_tags__' I am encountering an AttributeError while fitting an XGBRegressor using RandomizedSearchCV from Scikit-learn. The error message states: ``` 'super' object has no attribute '__sklearn_tags__'. ``` This occurs when I invoke the fit method on the RandomizedSearchCV object. I suspect it could be related to compatibility issues between Scikit-learn and XGBoost or Python version. I am using Python 3.12, and both Scikit-learn and XGBoost are installed with their latest versions. I attempted to tune the hyperparameters of an XGBRegressor using RandomizedSearchCV from Scikit-learn. I expected the model to fit the training data without issues and provide the best parameters after cross-validation. I also checked for compatibility issues, ensured the libraries were up-to-date, and reinstalled Scikit-learn and XGBoost, but the error persists.", "response":"Scikit-learn version 1.6.0 modified the API around its \"tags\", and that's the cause of this error. XGBoost made the necessary changes in version 2.1.4 (specifically in PR11021). 
In sklearn 1.6.1, the error was downgraded to a warning (to be returned to an error in 1.7). So you should be OK with any of: xgboost >=2.1.4 sklearn >=1.6.1,<1.7, and expect DeprecationWarnings sklearn <1.6 See also sklearn Issue#30479 and 1.6.1 release notes, and xgboost 2.1.4 release notes.", "best_answers_score":0.8, "library_name":"scikit-learn", "question_url":"https:\/\/stackoverflow.com\/questions\/79290968\/super-object-has-no-attribute-sklearn-tags", "best_answers_votes":33, "question_length":893, "response_length":472 }, { "question":"roc_auc_score - Only one class present in y_true I am doing a k-fold XV on an existing dataframe, and I need to get the AUC score. The problem is - sometimes the test data only contains 0s, and not 1s! I tried using this example, but with different numbers: ``` import numpy as np from sklearn.metrics import roc_auc_score y_true = np.array([0, 0, 0, 0]) y_scores = np.array([1, 0, 0, 0]) roc_auc_score(y_true, y_scores) ``` And I get this exception: ValueError: Only one class present in y_true. ROC AUC score is not defined in that case. Is there any workaround that can make it work in such cases?", "response":"You could use try-except to prevent the error: ``` import numpy as np from sklearn.metrics import roc_auc_score y_true = np.array([0, 0, 0, 0]) y_scores = np.array([1, 0, 0, 0]) try: roc_auc_score(y_true, y_scores) except ValueError: pass ``` Now you can also set the roc_auc_score to be zero if there is only one class present. However, I wouldn't do this. I guess your test data is highly unbalanced. 
I would suggest using stratified K-fold instead so that you at least have both classes present.", "best_answers_score":0.8, "library_name":"scikit-learn", "question_url":"https:\/\/stackoverflow.com\/questions\/45139163\/roc-auc-score-only-one-class-present-in-y-true", "best_answers_votes":25, "question_length":600, "response_length":499 }, { "question":"Saving StandardScaler() model for use on new datasets [duplicate] This question already has answers here: Save MinMaxScaler model in sklearn (5 answers) Closed 5 months ago. How do I save the StandardScaler() model in Sklearn? I need to make a model operational and don't want to load training data again and again for StandardScaler to learn and then apply on new data on which I want to make predictions. ``` from sklearn.preprocessing import StandardScaler from sklearn.model_selection import train_test_split #standardizing after splitting X_train, X_test, y_train, y_test = train_test_split(data, target) sc = StandardScaler() X_train_std = sc.fit_transform(X_train) X_test_std = sc.transform(X_test) ```", "response":"You can use joblib's dump function to save the standard scaler model. Here's a complete example for reference. ``` from sklearn.preprocessing import StandardScaler from sklearn.model_selection import train_test_split from sklearn.datasets import load_iris data, target = load_iris(return_X_y=True) X_train, X_test, y_train, y_test = train_test_split(data, target) sc = StandardScaler() X_train_std = sc.fit_transform(X_train) ``` If you want to save the sc StandardScaler, use the following: ``` from sklearn.externals.joblib import dump, load dump(sc, 'std_scaler.bin', compress=True) ``` This will create the file std_scaler.bin and save the sklearn model. To read the model back later, use load: ``` sc=load('std_scaler.bin') ``` Note: sklearn.externals.joblib is deprecated. 
Install and use the pure joblib instead", "best_answers_score":0.8, "library_name":"scikit-learn", "question_url":"https:\/\/stackoverflow.com\/questions\/53152627\/saving-standardscaler-model-for-use-on-new-datasets", "best_answers_votes":41, "question_length":709, "response_length":810 }, { "question":"Memory leak using gridsearchcv Problem: My situation appears to be a memory leak when running gridsearchcv. This happens when I run with 1 or 32 concurrent workers (n_jobs=-1). Previously I have run this loads of times with no trouble on ubuntu 16.04, but recently upgraded to 18.04 and did a ram upgrade. ``` import os import pickle from xgboost import XGBClassifier from sklearn.model_selection import GridSearchCV,StratifiedKFold,train_test_split from sklearn.calibration import CalibratedClassifierCV from sklearn.metrics import make_scorer,log_loss from horsebet import performance scorer = make_scorer(log_loss,greater_is_better=True) kfold = StratifiedKFold(n_splits=3) # import and split data input_vectors = pickle.load(open(os.path.join('horsebet','data','x_normalized'),'rb')) output_vector = pickle.load(open(os.path.join('horsebet','data','y'),'rb')).ravel() x_train,x_test,y_train,y_test = train_test_split(input_vectors,output_vector,test_size=0.2) # XGB model = XGBClassifier() param = { 'booster':['gbtree'], 'tree_method':['hist'], 'objective':['binary:logistic'], 'n_estimators':[100,500], 'min_child_weight': [.8,1], 'gamma': [1,3], 'subsample': [0.1,.4,1.0], 'colsample_bytree': [1.0], 'max_depth': [10,20], } jobs = 8 model = GridSearchCV(model,param_grid=param,cv=kfold,scoring=scorer,pre_dispatch=jobs*2,n_jobs=jobs,verbose=5).fit(x_train,y_train) ``` Returns: UserWarning: A worker stopped while some jobs were given to the executor. This can be caused by a too short worker timeout or by a memory leak. \"timeout or by a memory leak.\", UserWarning OR TerminatedWorkerError: A worker process managed by the executor was unexpectedly terminated. 
This could be caused by a segmentation fault while calling the function or by an excessive memory usage causing the Operating System to kill the worker. The exit codes of the workers are {SIGKILL(-9)}", "response":"The cause of my issue was that I put n_jobs=-1 in GridSearchCV, when it should be placed in the classifier. This has solved the issue.", "best_answers_score":0.8, "library_name":"scikit-learn", "question_url":"https:\/\/stackoverflow.com\/questions\/55848101\/memory-leak-using-gridsearchcv", "best_answers_votes":58, "question_length":1869, "response_length":134 }, { "question":"Linear Regression and Gradient Descent in Scikit learn? In this Coursera course for machine learning, it says gradient descent should converge. I'm using Linear regression from scikit learn. It doesn't provide gradient descent info. I have seen many questions on StackOverflow to implement linear regression with gradient descent. How do we use Linear regression from scikit-learn in the real world? OR Why doesn't scikit-learn provide gradient descent info in linear regression output?", "response":"Scikit-learn provides two approaches to linear regression: The LinearRegression object uses the Ordinary Least Squares solver from scipy, as LR is one of the few models with a closed-form solution. Despite what the ML course teaches, you can actually learn this model by just inverting and multiplying some matrices. SGDRegressor is an implementation of stochastic gradient descent, a very generic one where you can choose your penalty terms. To obtain linear regression you set the loss to L2 and the penalty to none (linear regression) or L2 (Ridge regression). There is no \"typical gradient descent\" because it is rarely used in practice. 
If you can decompose your loss function into additive terms, then the stochastic approach is known to behave better (thus SGD); and if you can spare enough memory, the OLS method is faster and easier (thus the first solution).", "best_answers_score":0.8, "library_name":"scikit-learn", "question_url":"https:\/\/stackoverflow.com\/questions\/34469237\/linear-regression-and-gradient-descent-in-scikit-learn", "best_answers_votes":49, "question_length":482, "response_length":854 }, { "question":"How to use Isolation Forest I am trying to detect the outliers in my dataset and I found sklearn's Isolation Forest. I can't understand how to work with it. I fit my training data in it and it gives me back a vector with -1 and 1 values. Can anyone explain to me how it works and provide an example? How can I know that the outliers are 'real' outliers? Tuning Parameters? Here is my code: ``` clf = IsolationForest(max_samples=10000, random_state=10) clf.fit(x_train) y_pred_train = clf.predict(x_train) y_pred_test = clf.predict(x_test) [1 1 1 ..., -1 1 1] ```", "response":"It seems you have many questions; let me try to answer them one by one to the best of my knowledge. How it works? It works due to the nature of outliers in any data set: outliers are few and different, which distinguishes them from the assumptions behind typical clustering-based or distance-based algorithms. At the top level, it works on the logic that outliers take fewer steps to 'isolate' compared to the 'normal' points in any data set. To do so, this is what IF does: suppose you have a training data set X with n data points, each having m features. In training, IF creates Isolation trees (Binary search trees) for different features. For training, you have 3 parameters for tuning during the train phase: the number of isolation trees (n_estimators in sklearn_IsolationForest), the number of samples (max_samples in sklearn_IsolationForest), and the number of features to draw from X to train each base estimator (max_features in sklearn_IF). 
max_samples is the number of random samples it will pick from the original data set for creating Isolation trees. During the test phase: sklearn_IF finds the path length of the data point under test from all the trained Isolation Trees and finds the average path length. The higher the path length, the more normal the point, and vice-versa. Based on the average path length, it calculates the anomaly score; the decision_function of sklearn_IF can be used to get this. For sklearn_IF, the lower the score, the more anomalous the sample. Based on the anomaly score, you can decide whether the given sample is anomalous or not by setting the proper value of contamination in the sklearn_IF object. The default value of contamination is 0.1, which you can tune for deciding the threshold. The amount of contamination of the data set, i.e., the proportion of outliers in the data set. Tuning parameters Training -> n_estimators, max_samples, max_features. Testing -> contamination", "best_answers_score":0.8, "library_name":"scikit-learn", "question_url":"https:\/\/stackoverflow.com\/questions\/43063031\/how-to-use-isolation-forest", "best_answers_votes":34, "question_length":565, "response_length":1903 }, { "question":"What are the parameters for sklearn's score function? I recently looked at a bunch of sklearn tutorials, which were all similar in that they scored the goodness of fit by: ``` clf.fit(X_train, y_train) clf.score(X_test, y_test) ``` And it'll spit out: ``` 0.92345... ``` or some other score. I am curious as to the parameters of the clf.score function or how it scores the model. I looked all over the internet, but can't seem to find documentation for it. Does anyone know?", "response":"It takes a feature matrix X_test and the expected target values y_test. Predictions for X_test are compared with y_test and either accuracy (for classifiers) or R\u00b2 score (for regression estimators) is returned. This is stated very explicitly in the docstrings for score methods. 
The one for classification reads ``` Returns the mean accuracy on the given test data and labels. Parameters ---------- X : array-like, shape = (n_samples, n_features) Test samples. y : array-like, shape = (n_samples,) True labels for X. sample_weight : array-like, shape = [n_samples], optional Sample weights. Returns ------- score : float Mean accuracy of self.predict(X) wrt. y. ``` and the one for regression is similar.", "best_answers_score":0.8, "library_name":"scikit-learn", "question_url":"https:\/\/stackoverflow.com\/questions\/24458163\/what-are-the-parameters-for-sklearns-score-function", "best_answers_votes":31, "question_length":474, "response_length":703 }, { "question":"BaseEstimator in sklearn.base (Python) I've been learning and practicing sklearn library on my own. When I participated Kaggle competitions, I noticed the provided sample code used BaseEstimator from sklearn.base. I don't quite understand how\/why is BaseEstimator used. ``` from sklearn.base import BaseEstimator class FeatureMapper: def __init__(self, features): self.features = features #features contains feature_name, column_name, and extractor( which is CountVectorizer) def fit(self, X, y=None): for feature_name, column_name, extractor in self.features: extractor.fit(X[column_name], y) #my question is: is X features? if yes, where is it assigned? or else how can X call column_name by X[column_name]. ... ``` This is what I usually see on sklearn's tutorial page: ``` from sklearn import SomeClassifier X = [[0, 0], [1, 1],[2, 2],[3, 3]] Y = [0, 1, 2, 3] clf = SomeClassifier() clf = clf.fit(X, Y) ``` I couldn't find a good example or any documentations on sklearn's official page. Although I found the sklearn.base code on github, but I'd like some examples and explanation of how is it used. 
UPDATE Here is the link for the sample code: https:\/\/github.com\/benhamner\/JobSalaryPrediction\/blob\/master\/features.py Correction: I just realized BaseEstimator is used for the class SimpleTransform. I guess my first question is why is it needed? (because it's not used anywhere in the computation), the other question is: when we define fit, what is X, and how is it assigned? Because usually I see: ``` def mymethod(self, X, y=None): X=self.features # then do something to X[Column_name] ```", "response":"BaseEstimator provides among other things a default implementation for the get_params and set_params methods, see [the source code]. This is useful to make the model grid search-able with GridSearchCV for automated parameter tuning and to behave well with other estimators when combined in a Pipeline.", "best_answers_score":0.8, "library_name":"scikit-learn", "question_url":"https:\/\/stackoverflow.com\/questions\/15233632\/baseestimator-in-sklearn-base-python", "best_answers_votes":33, "question_length":1589, "response_length":289 }, { "question":"Complex dataset split - StratifiedGroupShuffleSplit I have a dataset of ~2m observations which I need to split into training, validation and test sets in the ratio 60:20:20. A simplified excerpt of my dataset looks like this: ``` +---------+------------+-----------+-----------+ | note_id | subject_id | category | note | +---------+------------+-----------+-----------+ | 1 | 1 | ECG | blah ... | | 2 | 1 | Discharge | blah ... | | 3 | 1 | Nursing | blah ... | | 4 | 2 | Nursing | blah ... | | 5 | 2 | Nursing | blah ... | | 6 | 3 | ECG | blah ... | +---------+------------+-----------+-----------+ ``` There are multiple categories - which are not evenly balanced - so I need to ensure that the training, validation and test sets all have the same proportions of categories as in the original dataset. This part is fine, I can just use StratifiedShuffleSplit from the sklearn library. 
However, I also need to ensure that the observations from each subject are not split across the training, validation and test datasets. All the observations from a given subject need to be in the same bucket to ensure my trained model has never seen the subject before when it comes to validation\/testing. E.g. every observation of subject_id 1 should be in the training set. I can't think of a way to ensure a stratified split by category, prevent contamination (for want of a better word) of subject_id across datasets, ensure a 60:20:20 split and ensure that the dataset is somehow shuffled. Any help would be appreciated! Thanks! EDIT: I've now learnt that grouping by a category and keeping groups together across dataset splits can also be accomplished by sklearn through the GroupShuffleSplit function. So essentially, what I need is a combined stratified and grouped shuffle split i.e. StratifiedGroupShuffleSplit which does not exist. Github issue: https:\/\/github.com\/scikit-learn\/scikit-learn\/issues\/12076", "response":"This is solved in scikit-learn 1.0 with StratifiedGroupKFold In this example you generate 3 folds after shuffling, keeping groups together and does stratification (as much as possible) ```py import numpy as np from sklearn.model_selection import StratifiedGroupKFold X = np.ones((30, 2)) y = np.array([0, 0, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 1, 1, 1,]) groups = np.array([1, 1, 2, 2, 3, 3, 3, 4, 5, 5, 5, 5, 6, 6, 7, 8, 8, 9, 9, 9, 10, 11, 11, 12, 12, 12, 13, 13, 13, 13]) print(\"ORIGINAL POSITIVE RATIO:\", y.mean()) cv = StratifiedGroupKFold(n_splits=3, shuffle=True) for fold, (train_idxs, test_idxs) in enumerate(cv.split(X, y, groups)): print(\"Fold :\", fold) print(\"TRAIN POSITIVE RATIO:\", y[train_idxs].mean()) print(\"TEST POSITIVE RATIO :\", y[test_idxs].mean()) print(\"TRAIN GROUPS :\", set(groups[train_idxs])) print(\"TEST GROUPS :\", set(groups[test_idxs])) ``` In the output you can see that the ratio 
of positive cases in the folds stays close to the original positive ratio and that the same group is never in both sets. Of course, the fewer\/bigger groups you have (i.e., the more imbalanced your classes are), the more difficult it will be to stay close to the original class distribution. Output: ``` ORIGINAL POSITIVE RATIO: 0.5 Fold : 0 TRAIN POSITIVE RATIO: 0.4375 TEST POSITIVE RATIO : 0.5714285714285714 TRAIN GROUPS : {1, 3, 4, 5, 6, 7, 10, 11} TEST GROUPS : {2, 8, 9, 12, 13} Fold : 1 TRAIN POSITIVE RATIO: 0.5 TEST POSITIVE RATIO : 0.5 TRAIN GROUPS : {2, 4, 5, 7, 8, 9, 11, 12, 13} TEST GROUPS : {1, 10, 3, 6} Fold : 2 TRAIN POSITIVE RATIO: 0.5454545454545454 TEST POSITIVE RATIO : 0.375 TRAIN GROUPS : {1, 2, 3, 6, 8, 9, 10, 12, 13} TEST GROUPS : {11, 4, 5, 7} ```", "best_answers_score":0.8, "library_name":"scikit-learn", "question_url":"https:\/\/stackoverflow.com\/questions\/56872664\/complex-dataset-split-stratifiedgroupshufflesplit", "best_answers_votes":6, "question_length":1902, "response_length":1723 }, { "question":"How to increase the model accuracy of logistic regression in Scikit python? I am trying to predict the admit variable with predictors such as gre, gpa and ranks. But the prediction accuracy is very low (0.66). The dataset is given below. 
https:\/\/gist.github.com\/abyalias\/3de80ab7fb93dcecc565cee21bd9501a The first few rows of the dataset looks like: ```none admit gre gpa rank_2 rank_3 rank_4 0 0 380 3.61 0.0 1.0 0.0 1 1 660 3.67 0.0 1.0 0.0 2 1 800 4.00 0.0 0.0 0.0 3 1 640 3.19 0.0 0.0 1.0 4 0 520 2.93 0.0 0.0 1.0 5 1 760 3.00 1.0 0.0 0.0 6 1 560 2.98 0.0 0.0 0.0 ``` My code: ```py from sklearn.model_selection import train_test_split from sklearn.linear_model import LogisticRegression from sklearn.metrics import confusion_matrix, accuracy_score y = data['admit'] x = data[data.columns[1:]] xtrain, xtest, ytrain, ytest = train_test_split(x, y, random_state=2) #modelling clf = LogisticRegression(penalty='l2') clf.fit(xtrain, ytrain) ypred_train = clf.predict(xtrain) ypred_test = clf.predict(xtest) #checking the classification accuracy accuracy_score(ytrain, ypred_train) # 0.70333333333333337 accuracy_score(ytest, ypred_test) # 0.66000000000000003 #confusion metrix... confusion_matrix(ytest, ypred) # array([[62, 1], # [33, 4]]) ``` The ones are wrongly predicted. How do I increase the model accuracy?", "response":"Since machine learning is more about experimenting with the features and the models, there is no correct answer to your question. Some of my suggestions to you would be: 1. Feature Scaling and\/or Normalization - Check the scales of your gre and gpa features. They differ on 2 orders of magnitude. Therefore, your gre feature will end up dominating the others in a classifier like Logistic Regression. You can normalize all your features to the same scale before putting them in a machine learning model.This is a good guide on the various feature scaling and normalization classes available in scikit-learn. 2. Class Imbalance - Look for class imbalance in your data. Since you are working with admit\/reject data, then the number of rejects would be significantly higher than the admits. Most classifiers in SkLearn including LogisticRegression have a class_weight parameter. 
Setting that to balanced might also work well in case of a class imbalance. 3. Optimize other scores - You can optimize on other metrics also such as Log Loss and F1-Score. The F1-Score could be useful, in case of class imbalance. This is a good guide that talks more about scoring. 4. Hyperparameter Tuning - Grid Search - You can improve your accuracy by performing a Grid Search to tune the hyperparameters of your model. For example in case of LogisticRegression, the parameter C is a hyperparameter. Also, you should avoid using the test data during grid search. Instead perform cross validation. Use your test data only to report the final numbers for your final model. Please note that GridSearch should be done for all models that you try because then only you will be able to tell what is the best you can get from each model. Scikit-Learn provides the GridSearchCV class for this. This article is also a good starting point. 5. Explore more classifiers - Logistic Regression learns a linear decision surface that separates your classes. It could be possible that your 2 classes may not be linearly separable. In such a case you might need to look at other classifiers such Support Vector Machines which are able to learn more complex decision boundaries. You can also start looking at Tree-Based classifiers such as Decision Trees which can learn rules from your data. Think of them as a series of If-Else rules which the algorithm automatically learns from the data. Often, it is difficult to get the right Bias-Variance Tradeoff with Decision Trees, so I would recommend you to look at Random Forests if you have a considerable amount of data. 6. Error Analysis - For each of your models, go back and look at the cases where they are failing. You might end up finding that some of your models work well on one part of the parameter space while others work better on other parts. 
If this is the case, then ensemble techniques such as a VotingClassifier often give the best results. Models that win Kaggle competitions are often ensemble models. 7. More Features - If all of this fails, then it means that you should start looking for more features.", "best_answers_score":0.8, "library_name":"scikit-learn", "question_url":"https:\/\/stackoverflow.com\/questions\/38077190\/how-to-increase-the-model-accuracy-of-logistic-regression-in-scikit-python", "best_answers_votes":97, "question_length":1313, "response_length":3051 }, { "question":"TypeError: get_params() missing 1 required positional argument: 'self' I was trying to use the scikit-learn package with python-3.4 to do a grid search, ``` from sklearn.feature_extraction.text import TfidfVectorizer from sklearn.linear_model.logistic import LogisticRegression from sklearn.pipeline import Pipeline from sklearn.grid_search import GridSearchCV import pandas as pd from sklearn.cross_validation import train_test_split from sklearn.metrics import precision_score, recall_score, accuracy_score from sklearn.preprocessing import LabelBinarizer import numpy as np pipeline = Pipeline([ ('vect', TfidfVectorizer(stop_words='english')), ('clf', LogisticRegression) ]) parameters = { 'vect__max_df': (0.25, 0.5, 0.75), 'vect__stop_words': ('english', None), 'vect__max_features': (2500, 5000, 10000, None), 'vect__ngram_range': ((1, 1), (1, 2)), 'vect__use_idf': (True, False), 'vect__norm': ('l1', 'l2'), 'clf__penalty': ('l1', 'l2'), 'clf__C': (0.01, 0.1, 1, 10) } if __name__ == '__main__': grid_search = GridSearchCV(pipeline, parameters, n_jobs=-1, verbose=1, scoring='accuracy', cv = 3) df = pd.read_csv('SMS Spam Collection\/SMSSpamCollection', delimiter='\\t', header=None) lb = LabelBinarizer() X, y = df[1], np.array([number[0] for number in lb.fit_transform(df[0])]) X_train, X_test, y_train, y_test = train_test_split(X, y) grid_search.fit(X_train, y_train) print('Best score: ', grid_search.best_score_)
print('Best parameter set:') best_parameters = grid_search.best_estimator_.get_params() for param_name in sorted(best_parameters): print(param_name, best_parameters[param_name]) ``` However, it does not run successfully, the error message looks like this: ``` Fitting 3 folds for each of 1536 candidates, totalling 4608 fits Traceback (most recent call last): File \"\/home\/xiangru\/PycharmProjects\/machine_learning_note_with_sklearn\/grid search.py\", line 36, in grid_search.fit(X_train, y_train) File \"\/usr\/local\/lib\/python3.4\/dist-packages\/sklearn\/grid_search.py\", line 732, in fit return self._fit(X, y, ParameterGrid(self.param_grid)) File \"\/usr\/local\/lib\/python3.4\/dist-packages\/sklearn\/grid_search.py\", line 493, in _fit base_estimator = clone(self.estimator) File \"\/usr\/local\/lib\/python3.4\/dist-packages\/sklearn\/base.py\", line 47, in clone new_object_params[name] = clone(param, safe=False) File \"\/usr\/local\/lib\/python3.4\/dist-packages\/sklearn\/base.py\", line 35, in clone return estimator_type([clone(e, safe=safe) for e in estimator]) File \"\/usr\/local\/lib\/python3.4\/dist-packages\/sklearn\/base.py\", line 35, in return estimator_type([clone(e, safe=safe) for e in estimator]) File \"\/usr\/local\/lib\/python3.4\/dist-packages\/sklearn\/base.py\", line 35, in clone return estimator_type([clone(e, safe=safe) for e in estimator]) File \"\/usr\/local\/lib\/python3.4\/dist-packages\/sklearn\/base.py\", line 35, in return estimator_type([clone(e, safe=safe) for e in estimator]) File \"\/usr\/local\/lib\/python3.4\/dist-packages\/sklearn\/base.py\", line 45, in clone new_object_params = estimator.get_params(deep=False) TypeError: get_params() missing 1 required positional argument: 'self' ``` I also tried to use only ``` if __name__ == '__main__': pipeline.get_params() ``` It gives the same error message. 
Does anyone know how to fix this?", "response":"This error is almost always misleading, and actually means that you're calling an instance method on the class, rather than the instance (like calling dict.keys() instead of d.keys() on a dict named d).* And that's exactly what's going on here. The docs imply that the best_estimator_ attribute, like the estimator parameter to the initializer, is not an estimator instance, it's an estimator type, and \"A object of that type is instantiated for each grid point.\" So, if you want to call methods, you have to construct an object of that type, for some particular grid point. However, from a quick glance at the docs, if you're trying to get the params that were used for the particular instance of the best estimator that returned the best score, isn't that just going to be best_params_? (I apologize that this part is a bit of a guess...) For the Pipeline call, you definitely have an instance there. And the only documentation for that method is a param spec which shows that it takes one optional argument, deep. But under the covers, it's probably forwarding the get_params() call to one of its attributes. And with ('clf', LogisticRegression), it looks like you're constructing it with the class LogisticRegression, rather than an instance of that class, so if that's what it ends up forwarding to, that would explain the problem.
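If that diagnosis is right, the fix is just to put an instance in the pipeline instead of the class. A minimal sketch (a tiny ad-hoc corpus stands in for the question's SMS data):

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import Pipeline

# Note the parentheses: LogisticRegression() is an *instance*, whose
# bound get_params() works; the bare class is what raises the TypeError.
pipeline = Pipeline([
    ('vect', TfidfVectorizer(stop_words='english')),
    ('clf', LogisticRegression()),
])

docs = ["free prize winner", "meeting at noon", "win cash prize", "lunch tomorrow"]
labels = [1, 0, 1, 0]
pipeline.fit(docs, labels)
print(sorted(pipeline.get_params())[:3])  # no TypeError any more
```

With an instance in place, grid search can clone the pipeline and set `clf__*` parameters on it as intended.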
* The reason the error says \"missing 1 required positional argument: 'self'\" instead of \"must be called on an instance\" or something is that in Python, d.keys() is effectively turned into dict.keys(d), and it's perfectly legal (and sometimes useful) to call it that way explicitly, so Python can't really tell you that dict.keys() is illegal, just that it's missing the self argument.", "best_answers_score":0.8, "library_name":"scikit-learn", "question_url":"https:\/\/stackoverflow.com\/questions\/30026960\/typeerror-get-params-missing-1-required-positional-argument-self", "best_answers_votes":56, "question_length":3237, "response_length":1719 }, { "question":"How to plot precision and recall of multiclass classifier? I'm using scikit-learn, and I want to plot the precision and recall curves. The classifier I'm using is RandomForestClassifier. All the resources in the scikit-learn documentation use binary classification. Also, can I plot a ROC curve for multiclass? Also, I only found an example for SVM for multilabel, and it has a decision_function, which RandomForest doesn't have", "response":"From scikit-learn documentation: Precision-Recall: Precision-recall curves are typically used in binary classification to study the output of a classifier. In order to extend the precision-recall curve and average precision to multi-class or multi-label classification, it is necessary to binarize the output. One curve can be drawn per label, but one can also draw a precision-recall curve by considering each element of the label indicator matrix as a binary prediction (micro-averaging). Receiver Operating Characteristic (ROC): ROC curves are typically used in binary classification to study the output of a classifier. In order to extend ROC curve and ROC area to multi-class or multi-label classification, it is necessary to binarize the output.
One ROC curve can be drawn per label, but one can also draw a ROC curve by considering each element of the label indicator matrix as a binary prediction (micro-averaging). Therefore, you should binarize the output and consider precision-recall and roc curves for each class. Moreover, you are going to use predict_proba to get class probabilities. I divide the code into three parts: general settings, learning and prediction precision-recall curve ROC curve 1. general settings, learning and prediction ``` import numpy as np from sklearn.datasets import fetch_openml from sklearn.model_selection import train_test_split from sklearn.ensemble import RandomForestClassifier from sklearn.multiclass import OneVsRestClassifier from sklearn.metrics import precision_recall_curve, roc_curve from sklearn.preprocessing import label_binarize import matplotlib.pyplot as plt #%matplotlib inline mnist = fetch_openml(\"mnist_784\") y = mnist.target y = y.astype(np.uint8) n_classes = len(set(y)) Y = label_binarize(mnist.target, classes=[*range(n_classes)]) X_train, X_test, y_train, y_test = train_test_split(mnist.data, Y, random_state = 42) clf = OneVsRestClassifier(RandomForestClassifier(n_estimators=50, max_depth=3, random_state=0)) clf.fit(X_train, y_train) y_score = clf.predict_proba(X_test) ``` 2. precision-recall curve ``` # precision recall curve precision = dict() recall = dict() for i in range(n_classes): precision[i], recall[i], _ = precision_recall_curve(y_test[:, i], y_score[:, i]) plt.plot(recall[i], precision[i], lw=2, label='class {}'.format(i)) plt.xlabel(\"recall\") plt.ylabel(\"precision\") plt.legend(loc=\"best\") plt.title(\"precision vs. recall curve\") plt.show() ``` 3.
ROC curve ``` # roc curve fpr = dict() tpr = dict() for i in range(n_classes): fpr[i], tpr[i], _ = roc_curve(y_test[:, i], y_score[:, i]) plt.plot(fpr[i], tpr[i], lw=2, label='class {}'.format(i)) plt.xlabel(\"false positive rate\") plt.ylabel(\"true positive rate\") plt.legend(loc=\"best\") plt.title(\"ROC curve\") plt.show() ```", "best_answers_score":0.8, "library_name":"scikit-learn", "question_url":"https:\/\/stackoverflow.com\/questions\/56090541\/how-to-plot-precision-and-recall-of-multiclass-classifier", "best_answers_votes":59, "question_length":420, "response_length":2746 }, { "question":"StratifiedKFold vs KFold in scikit-learn I use this code to test KFold and StratifiedKFold. ``` import numpy as np from sklearn.model_selection import KFold,StratifiedKFold X = np.array([ [1,2,3,4], [11,12,13,14], [21,22,23,24], [31,32,33,34], [41,42,43,44], [51,52,53,54], [61,62,63,64], [71,72,73,74] ]) y = np.array([0,0,0,0,1,1,1,1]) sfolder = StratifiedKFold(n_splits=4,random_state=0,shuffle=False) floder = KFold(n_splits=4,random_state=0,shuffle=False) for train, test in sfolder.split(X,y): print('Train: %s | test: %s' % (train, test)) print(\"StratifiedKFold done\") for train, test in floder.split(X,y): print('Train: %s | test: %s' % (train, test)) print(\"KFold done\") ``` I found that StratifiedKFold can keep the proportion of labels, but KFold can't. ``` Train: [1 2 3 5 6 7] | test: [0 4] Train: [0 2 3 4 6 7] | test: [1 5] Train: [0 1 3 4 5 7] | test: [2 6] Train: [0 1 2 4 5 6] | test: [3 7] StratifiedKFold done Train: [2 3 4 5 6 7] | test: [0 1] Train: [0 1 4 5 6 7] | test: [2 3] Train: [0 1 2 3 6 7] | test: [4 5] Train: [0 1 2 3 4 5] | test: [6 7] KFold done ``` It seems that StratifiedKFold is better, so should KFold not be used? When to use KFold instead of StratifiedKFold?", "response":"I think you should ask \"When to use StratifiedKFold instead of KFold?\". You need to know what \"KFold\" and \"Stratified\" are first.
KFold is a cross-validator that divides the dataset into k folds. Stratified ensures that each fold of the dataset has the same proportion of observations with a given label. In that sense, StratifiedKFold is an improved version of KFold. Therefore, the answer to this question is that we should prefer StratifiedKFold over KFold when dealing with classification tasks with imbalanced class distributions. FOR EXAMPLE: Suppose that there is a dataset with 16 data points and an imbalanced class distribution. In the dataset, 12 of the data points belong to class A and the rest (i.e. 4) belong to class B. The ratio of class B to class A is 1\/3. If we use StratifiedKFold and set k = 4, then, in each iteration, the training sets will include 9 data points from class A and 3 data points from class B while the test sets include 3 data points from class A and 1 data point from class B. As we can see, the class distribution of the dataset is preserved in the splits by StratifiedKFold, while KFold does not take this into consideration.", "best_answers_score":0.8, "library_name":"scikit-learn", "question_url":"https:\/\/stackoverflow.com\/questions\/65318931\/stratifiedkfold-vs-kfold-in-scikit-learn", "best_answers_votes":53, "question_length":1200, "response_length":1157 }, { "question":"Impute entire DataFrame (all columns) using Scikit-learn (sklearn) without iterating over columns I want to impute all of the columns on a pandas DataFrame...the only way I can think of doing this is column by column as shown below... Is there an operation where I can impute the entire DataFrame without iterating through the columns?
``` #!\/usr\/bin\/python from sklearn.preprocessing import Imputer import numpy as np import pandas as pd #Imputer fill_NaN = Imputer(missing_values=np.nan, strategy='mean', axis=1) #Model 1 DF = pd.DataFrame([[0,1,np.nan],[2,np.nan,3],[np.nan,2,5]]) DF.columns = \"c1.c2.c3\".split(\".\") DF.index = \"i1.i2.i3\".split(\".\") #Impute Series imputed_DF = DF for col in DF.columns: imputed_column = fill_NaN.fit_transform(DF[col]).T #Fill in Series on DataFrame imputed_DF[col] = imputed_column #DF #c1 c2 c3 #i1 0 1 NaN #i2 2 NaN 3 #i3 NaN 2 5 #imputed_DF #c1 c2 c3 #i1 0 1.0 4 #i2 2 1.5 3 #i3 1 2.0 5 ```", "response":"If you want the mean or median you could do something like: ``` fill_NaN = Imputer(missing_values=np.nan, strategy='mean', axis=1) imputed_DF = pd.DataFrame(fill_NaN.fit_transform(DF)) imputed_DF.columns = DF.columns imputed_DF.index = DF.index ``` If you want to fill them with 0s or something you could always just do: ``` DF[DF.isnull()] = 0 ```", "best_answers_score":0.8, "library_name":"scikit-learn", "question_url":"https:\/\/stackoverflow.com\/questions\/33660836\/impute-entire-dataframe-all-columns-using-scikit-learn-sklearn-without-itera", "best_answers_votes":44, "question_length":930, "response_length":348 }, { "question":"Machine Learning (tensorflow \/ sklearn) in Django? I have a django form, which is collecting user response. I also have a tensorflow sentences classification model. What is the best\/standard way to put these two together. Details: tensorflow model was trained on the Movie Review data from Rotten Tomatoes. Everytime a new row is made in my response model , i want the tensorflow code to classify it( + or - ). Basically I have a django project directory and two .py files for classification. Before going ahead myself , i wanted to know what is the standard way to implement machine learning algorithms to a web app. It'd be awesome if you could suggest a tutorial or a repo. 
Thank you !", "response":"Asynchronous processing If you don't need the classification result from the ML code to pass immediately to the user (e.g. as a response to the same POST request that submitted it), then you can always queue the classification job to be run in the background or even on a different server with more CPU\/memory resources (e.g. with django-background-tasks or Celery). A queued task would be, for example, to populate the field UserResponse.class_name (positive, negative) on the database rows that have that field blank (not yet classified). Real time notification If the ML code is slow and you want to return that result to the user as soon as it is available, you can use the asynchronous approach described above, and pair it with a real-time notification (e.g. socket.io) to the browser (this can be triggered from the queued task). This becomes necessary if ML execution time is so long that it might time-out the HTTP request in the synchronous approach described below. Synchronous processing, if ML code is not CPU intensive (fast enough) If you need that classification result returned immediately, and the ML classification is fast enough *, you can do so within the HTTP request-response cycle (the POST request returns after the ML code is done, synchronously). *Fast enough here means it wouldn't time-out the HTTP request\/response, and the user wouldn't lose patience.", "best_answers_score":0.8, "library_name":"scikit-learn", "question_url":"https:\/\/stackoverflow.com\/questions\/37374454\/machine-learning-tensorflow-sklearn-in-django", "best_answers_votes":32, "question_length":688, "response_length":1363 }, { "question":"How to use scikit-learn PCA for features reduction and know which features are discarded I am trying to run a PCA on a matrix of dimensions m x n where m is the number of features and n the number of samples. Suppose I want to preserve the nf features with the maximum variance.
With scikit-learn I am able to do it in this way: ``` from sklearn.decomposition import PCA nf = 100 pca = PCA(n_components=nf) # X is the matrix transposed (n samples on the rows, m features on the columns) pca.fit(X) X_new = pca.transform(X) ``` Now, I get a new matrix X_new that has a shape of n x nf. Is it possible to know which features have been discarded or the retained ones? Thanks", "response":"The features that your PCA object has determined during fitting are in pca.components_. The vector space orthogonal to the one spanned by pca.components_ is discarded. Please note that PCA does not \"discard\" or \"retain\" any of your pre-defined features (encoded by the columns you specify). It mixes all of them (by weighted sums) to find orthogonal directions of maximum variance. If this is not the behaviour you are looking for, then PCA dimensionality reduction is not the way to go. For some simple general feature selection methods, you can take a look at sklearn.feature_selection", "best_answers_score":0.8, "library_name":"scikit-learn", "question_url":"https:\/\/stackoverflow.com\/questions\/23294616\/how-to-use-scikit-learn-pca-for-features-reduction-and-know-which-features-are-d", "best_answers_votes":33, "question_length":671, "response_length":587 }, { "question":"scikit-learn clustering: predict(X) vs. fit_predict(X) In scikit-learn, some clustering algorithms have both predict(X) and fit_predict(X) methods, like KMeans and MeanShift, while others only have the latter, like SpectralClustering. According to the doc: ``` fit_predict(X[, y]): Performs clustering on X and returns cluster labels. predict(X): Predict the closest cluster each sample in X belongs to. ``` I don't really understand the difference between the two, they seem equivalent to me.", "response":"In order to use the 'predict' you must use the 'fit' method first. So using 'fit()' and then 'predict()' is definitely the same as using 'fit_predict()'. 
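That equivalence is easy to check for an estimator that has both methods; a small sketch with KMeans on toy data:

```python
import numpy as np
from sklearn.cluster import KMeans

# Two well-separated blobs.
X = np.array([[0.0, 0.0], [0.0, 1.0], [10.0, 10.0], [10.0, 11.0]])

# Route 1: fit, then predict on the same data.
km_a = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)
labels_a = km_a.predict(X)

# Route 2: fit_predict in one call.
labels_b = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)

# Same labeling of the training data either way.
print(np.array_equal(labels_a, labels_b))
```

The fixed random_state makes the two runs converge to the same centers, so the labelings match.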
However, using only 'fit()' can be useful in cases where you need access to the fitted model's parameters, whereas 'fit_predict()' only gives you the labeling results of running your model on the data.", "best_answers_score":0.8, "library_name":"scikit-learn", "question_url":"https:\/\/stackoverflow.com\/questions\/37106983\/scikit-learn-clustering-predictx-vs-fit-predictx", "best_answers_votes":24, "question_length":493, "response_length":414 }, { "question":"Getting the accuracy for multi-label prediction in scikit-learn In a multilabel classification setting, sklearn.metrics.accuracy_score only computes the subset accuracy (3): i.e. the set of labels predicted for a sample must exactly match the corresponding set of labels in y_true. This way of computing the accuracy is sometimes named, perhaps less ambiguously, exact match ratio (1): Is there any way to get the other typical way to compute the accuracy in scikit-learn, namely (as defined in (1) and (2), and less ambiguously referred to as the Hamming score (4) (since it is closely related to the Hamming loss), or label-based accuracy)? (1) Sorower, Mohammad S. \"A literature survey on algorithms for multi-label learning.\" Oregon State University, Corvallis (2010). (2) Tsoumakas, Grigorios, and Ioannis Katakis. \"Multi-label classification: An overview.\" Dept. of Informatics, Aristotle University of Thessaloniki, Greece (2006). (3) Ghamrawi, Nadia, and Andrew McCallum. \"Collective multi-label classification.\" Proceedings of the 14th ACM international conference on Information and knowledge management. ACM, 2005. (4) Godbole, Shantanu, and Sunita Sarawagi. \"Discriminative methods for multi-labeled classification.\" Advances in Knowledge Discovery and Data Mining. Springer Berlin Heidelberg, 2004. 22-30.", "response":"You can write one version yourself; here is an example that does not consider the weight and normalize arguments.
``` import numpy as np y_true = np.array([[0,1,0], [0,1,1], [1,0,1], [0,0,1]]) y_pred = np.array([[0,1,1], [0,1,1], [0,1,0], [0,0,0]]) def hamming_score(y_true, y_pred, normalize=True, sample_weight=None): ''' Compute the Hamming score (a.k.a. label-based accuracy) for the multi-label case http:\/\/stackoverflow.com\/q\/32239577\/395857 ''' acc_list = [] for i in range(y_true.shape[0]): set_true = set( np.where(y_true[i])[0] ) set_pred = set( np.where(y_pred[i])[0] ) #print('\\nset_true: {0}'.format(set_true)) #print('set_pred: {0}'.format(set_pred)) tmp_a = None if len(set_true) == 0 and len(set_pred) == 0: tmp_a = 1 else: tmp_a = len(set_true.intersection(set_pred))\/\\ float( len(set_true.union(set_pred)) ) #print('tmp_a: {0}'.format(tmp_a)) acc_list.append(tmp_a) return np.mean(acc_list) if __name__ == \"__main__\": print('Hamming score: {0}'.format(hamming_score(y_true, y_pred))) # 0.375 (= (0.5+1+0+0)\/4) # For comparison sake: import sklearn.metrics # Subset accuracy # 0.25 (= 0+1+0+0 \/ 4) --> 1 if the prediction for one sample fully matches the gold. 0 otherwise. print('Subset accuracy: {0}'.format(sklearn.metrics.accuracy_score(y_true, y_pred, normalize=True, sample_weight=None))) # Hamming loss (smaller is better) # $$ \\text{HammingLoss}(x_i, y_i) = \\frac{1}{|D|} \\sum_{i=1}^{|D|} \\frac{xor(x_i, y_i)}{|L|}, $$ # where # - \\\\(|D|\\\\) is the number of samples # - \\\\(|L|\\\\) is the number of labels # - \\\\(y_i\\\\) is the ground truth # - \\\\(x_i\\\\) is the prediction. 
# 0.416666666667 (= (1+0+3+1) \/ (3*4) ) print('Hamming loss: {0}'.format(sklearn.metrics.hamming_loss(y_true, y_pred))) ``` Outputs: ``` Hamming score: 0.375 Subset accuracy: 0.25 Hamming loss: 0.416666666667 ```", "best_answers_score":0.8, "library_name":"scikit-learn", "question_url":"https:\/\/stackoverflow.com\/questions\/32239577\/getting-the-accuracy-for-multi-label-prediction-in-scikit-learn", "best_answers_votes":27, "question_length":1318, "response_length":1794 }, { "question":"How is Elastic Net used? This is a beginner question on regularization with regression. Most information about Elastic Net and Lasso Regression online replicates the information from Wikipedia or the original 2005 paper by Zou and Hastie (Regularization and variable selection via the elastic net). Resource for simple theory? Is there a simple and easy explanation somewhere about what it does, when and why reguarization is neccessary, and how to use it - for those who are not statistically inclined? I understand that the original paper is the ideal source if you can understand it, but is there somewhere that more simply the problem and solution? How to use in sklearn? Is there a step by step example showing why elastic net is chosen (over ridge, lasso, or just simple OLS) and how the parameters are calculated? Many of the examples on sklearn just include alpha and rho parameters directly into the prediction model, for example: ``` from sklearn.linear_model import ElasticNet alpha = 0.1 enet = ElasticNet(alpha=alpha, rho=0.7) y_pred_enet = enet.fit(X_train, y_train).predict(X_test) ``` However, they don't explain how these were calculated. How do you calculate the parameters for the lasso or net?", "response":"The documentation is lacking. I created a new issue to improve it. As Andreas said the best resource is probably ESL II freely available online as PDF. 
To automatically tune the value of alpha it is indeed possible to use ElasticNetCV, which will spare redundant computation as opposed to using GridSearchCV on the ElasticNet class to tune alpha. In addition, you can use a regular GridSearchCV for finding the optimal value of rho. See the docstring of ElasticNetCV for more details. As for Lasso vs ElasticNet, ElasticNet will tend to select more variables, hence lead to larger models (also more expensive to train), but also be more accurate in general. In particular, Lasso is very sensitive to correlation between features and might randomly select one out of 2 very correlated informative features, while ElasticNet will be more likely to select both, which should lead to a more stable model (in terms of generalization ability on new samples).", "best_answers_score":0.8, "library_name":"scikit-learn", "question_url":"https:\/\/stackoverflow.com\/questions\/12283184\/how-is-elastic-net-used", "best_answers_votes":28, "question_length":1213, "response_length":952 }, { "question":"How to handle missing values (NaN) in categorical data when using scikit-learn OneHotEncoder? I have recently started learning python to develop a predictive model for a research project using machine learning methods. I have a large dataset comprised of both numerical and categorical data. The dataset has lots of missing values. I am currently trying to encode the categorical features using OneHotEncoder.
When I read about OneHotEncoder, my understanding was that for a missing value (NaN), OneHotEncoder would assign 0s to all the feature's categories, as such: ``` 0 Male 1 Female 2 NaN ``` After applying OneHotEncoder: ``` 0 10 1 01 2 00 ``` However, when running the following code: ```py # Encoding categorical data from sklearn.compose import ColumnTransformer from sklearn.preprocessing import OneHotEncoder ct = ColumnTransformer([('encoder', OneHotEncoder(handle_unknown='ignore'), [1])], remainder='passthrough') obj_df = np.array(ct.fit_transform(obj_df)) print(obj_df) ``` I am getting the error ValueError: Input contains NaN So I am guessing my previous understanding of how OneHotEncoder handles missing values is wrong. Is there a way for me to get the functionality described above? I know imputing the missing values before encoding will resolve this issue, but I am reluctant to do this as I am dealing with medical data and fear that imputation may decrease the predictive accuracy of my model. I found this question that is similar but the answer doesn't offer a detailed enough solution on how to deal with the NaN values. Let me know what your thoughts are, thanks.", "response":"You will need to impute the missing values before. 
You can define a Pipeline with an imputing step using SimpleImputer setting a constant strategy to input a new category for null fields, prior to the OneHot encoding: ``` from sklearn.compose import ColumnTransformer from sklearn.preprocessing import OneHotEncoder from sklearn.impute import SimpleImputer from sklearn.pipeline import Pipeline import numpy as np categorical_transformer = Pipeline(steps=[ ('imputer', SimpleImputer(strategy='constant', fill_value='missing')), ('encoder', OneHotEncoder(handle_unknown='ignore'))]) preprocessor = ColumnTransformer( transformers=[ ('cat', categorical_transformer, [0]) ]) ``` ``` df = pd.DataFrame(['Male', 'Female', np.nan]) preprocessor.fit_transform(df) array([[0., 1., 0.], [1., 0., 0.], [0., 0., 1.]]) ```", "best_answers_score":0.8, "library_name":"scikit-learn", "question_url":"https:\/\/stackoverflow.com\/questions\/62409303\/how-to-handle-missing-values-nan-in-categorical-data-when-using-scikit-learn-o", "best_answers_votes":14, "question_length":1594, "response_length":810 }, { "question":"Save and reuse TfidfVectorizer in scikit learn I am using TfidfVectorizer in scikit learn to create a matrix from text data. Now I need to save this object to reuse it later. I tried to use pickle, but it gave the following error. ```none loc=open('vectorizer.obj','w') pickle.dump(self.vectorizer,loc) *** TypeError: can't pickle instancemethod objects ``` I tried using joblib in sklearn.externals, which again gave similar error. Is there any way to save this object so that I can reuse it later? 
Here is my full object: ```py class changeToMatrix(object): def __init__(self,ngram_range=(1,1),tokenizer=StemTokenizer()): from sklearn.feature_extraction.text import TfidfVectorizer self.vectorizer = TfidfVectorizer(ngram_range=ngram_range,analyzer='word',lowercase=True, token_pattern='[a-zA-Z0-9]+',strip_accents='unicode', tokenizer=tokenizer) def load_ref_text(self,text_file): textfile = open(text_file,'r') lines = textfile.readlines() textfile.close() sent_tokenizer = nltk.data.load('tokenizers\/punkt\/english.pickle') sentences = [item.strip().strip('.') for item in sent_tokenizer.tokenize(' '.join(lines).strip())] #vectorizer is transformed in this step chk2 = pd.DataFrame(self.vectorizer.fit_transform(sentences).toarray()) return sentences, [chk2] def get_processed_data(self,data_loc): ref_sentences,ref_dataframes=self.load_ref_text(data_loc) loc = open(\"indexedData\/vectorizer.obj\",\"w\") pickle.dump(self.vectorizer,loc) #getting error here loc.close() return ref_sentences, ref_dataframes ```", "response":"Firstly, it's better to leave the import at the top of your code instead of within your class: ``` from sklearn.feature_extraction.text import TfidfVectorizer class changeToMatrix(object): def __init__(self,ngram_range=(1,1),tokenizer=StemTokenizer()): ... ``` Next, StemTokenizer doesn't seem to be a canonical class. Possibly you've got it from http:\/\/sahandsaba.com\/visualizing-philosophers-and-scientists-by-the-words-they-used-with-d3js-and-python.html or maybe somewhere else, so we'll assume it returns a list of strings.
``` class StemTokenizer(object): def __init__(self): self.ignore_set = {'footnote', 'nietzsche', 'plato', 'mr.'} def __call__(self, doc): words = [] for word in word_tokenize(doc): word = word.lower() w = wn.morphy(word) if w and len(w) > 1 and w not in self.ignore_set: words.append(w) return words ``` Now to answer your actual question, it's possible that you need to open a file in byte mode before dumping a pickle, i.e.: ``` >>> from sklearn.feature_extraction.text import TfidfVectorizer >>> from nltk import word_tokenize >>> import cPickle as pickle >>> vectorizer = TfidfVectorizer(ngram_range=(0,2),analyzer='word',lowercase=True, token_pattern='[a-zA-Z0-9]+',strip_accents='unicode',tokenizer=word_tokenize) >>> vectorizer TfidfVectorizer(analyzer='word', binary=False, decode_error=u'strict', dtype=, encoding=u'utf-8', input=u'content', lowercase=True, max_df=1.0, max_features=None, min_df=1, ngram_range=(0, 2), norm=u'l2', preprocessor=None, smooth_idf=True, stop_words=None, strip_accents='unicode', sublinear_tf=False, token_pattern='[a-zA-Z0-9]+', tokenizer=, use_idf=True, vocabulary=None) >>> with open('vectorizer.pk', 'wb') as fin: ... pickle.dump(vectorizer, fin) ... >>> exit() alvas@ubi:~$ ls -lah vectorizer.pk -rw-rw-r-- 1 alvas alvas 763 Jun 15 14:18 vectorizer.pk ``` Note: Using the with idiom for i\/o file access automatically closes the file once you get out of the with scope. Regarding the issue with SnowballStemmer(), note that SnowballStemmer('english') is an object while the stemming function is SnowballStemmer('english').stem. IMPORTANT: TfidfVectorizer's tokenizer parameter expects to take a string and return a list of string But Snowball stemmer does not take a string as input and return a list of string. So you will need to do this: ``` >>> from nltk.stem import SnowballStemmer >>> from nltk import word_tokenize >>> stemmer = SnowballStemmer('english').stem >>> def stem_tokenize(text): ... return [stemmer(i) for i in word_tokenize(text)] ... 
>>> vectorizer = TfidfVectorizer(ngram_range=(0,2),analyzer='word',lowercase=True, token_pattern='[a-zA-Z0-9]+',strip_accents='unicode',tokenizer=stem_tokenize) >>> with open('vectorizer.pk', 'wb') as fin: ... pickle.dump(vectorizer, fin) ... >>> exit() alvas@ubi:~$ ls -lah vectorizer.pk -rw-rw-r-- 1 alvas alvas 758 Jun 15 15:55 vectorizer.pk ```", "best_answers_score":0.8, "library_name":"scikit-learn", "question_url":"https:\/\/stackoverflow.com\/questions\/30843011\/save-and-reuse-tfidfvectorizer-in-scikit-learn", "best_answers_votes":17, "question_length":1512, "response_length":2869 }, { "question":"R internal handling of sparse matrices I have been comparing the performance of several PCA implementations from both Python and R, and noticed an interesting behavior: While it seems impossible to compute the PCA of a sparse matrix in Python (the only approach would be scikit-learn's TruncatedSVD, yet it does not support the mean-centering required to be equivalent to a covariance solution for PCA. Their argumentation is, that it would destroy the sparsity property of the matrix. Other implementations like Facebook's PCA algorithm or the PCA\/randomPCA method in scikit learn do not support sparse matrices for similar reasons. While all of that makes sense to me, several R packages, like irlba, rsvd, etc., are able to handle sparse matrices (e.g. generated with rsparsematrix), and even allow for specific center=True arguments. My question is, how R handles this internally, as it seems to be vastly more efficient than the comparable Python implementation. Does R still maintain the sparsity by doing Absolute Scaling instead (which would theoretically falsify the results, but at least maintain sparsity)? Or is there any way in which the mean can be stored explicitly for the zero values, and is only stored once (instead of for every value separately)? To get put off hold: How does R internally store matrices with mean-centering without exploding RAM usage. 
Hope that is concise enough....", "response":"The key here is that the underlying implementation for the partial SVD (restarted Lanczos bidiagonalization C code) doesn't store the matrix. You instead record the result of the linear operation from the matrix applied to a small set of vectors obtained from the previous iteration. Rather than explaining the concrete method used in the C code, which is quite advanced (see paper for description), I will explain it with a much simpler algorithm that captures the key idea in terms of how to preserve the efficiency from sparsity: the power method (or the subspace iteration method for its generalization to multiple eigenvalues). The algorithm returns the largest eigenvalue of a matrix A by iteratively applying a linear operator, then normalizing (or orthogonalizing a small set of vectors, in the case of subspace iteration). What you do at every iteration is ``` v=A*v v=v\/norm(v) ``` The matrix multiplication step is the crucial one, so let's see what happens when we try the same thing with a centered A. The matrix formula for centered A (with center as the vector with the mean column values and ones as the vector of ones) is: ``` A_center=A-ones*transpose(center) ``` So if we apply the iterative algorithm to this new matrix we would get ``` v=A*v-dotproduct(center,v)*ones ``` Since A was sparse we can use the sparse matrix-vector product on (A,v), and -dotproduct(center,v)*ones just entails subtracting the dot product of center and v from the resulting vector, which is linear in the dimension of A.", "best_answers_score":0.8, "library_name":"scikit-learn", "question_url":"https:\/\/stackoverflow.com\/questions\/50853136\/r-internal-handling-of-sparse-matrices", "best_answers_votes":3, "question_length":1405, "response_length":1515 }, { "question":"Plot PCA loadings and loading in biplot in sklearn (like R's autoplot) I saw this tutorial in R w\/ autoplot.
They plotted the loadings and loading labels: ``` autoplot(prcomp(df), data = iris, colour = 'Species', loadings = TRUE, loadings.colour = 'blue', loadings.label = TRUE, loadings.label.size = 3) ``` https:\/\/cran.r-project.org\/web\/packages\/ggfortify\/vignettes\/plot_pca.html I prefer Python 3 w\/ matplotlib, scikit-learn, and pandas for my data analysis. However, I don't know how to add these on? How can you plot these vectors w\/ matplotlib? I've been reading Recovering features names of explained_variance_ratio_ in PCA with sklearn but haven't figured it out yet Here's how I plot it in Python ``` import numpy as np import pandas as pd import matplotlib.pyplot as plt from sklearn.datasets import load_iris from sklearn.preprocessing import StandardScaler from sklearn import decomposition import seaborn as sns; sns.set_style(\"whitegrid\", {'axes.grid' : False}) %matplotlib inline np.random.seed(0) # Iris dataset DF_data = pd.DataFrame(load_iris().data, index = [\"iris_%d\" % i for i in range(load_iris().data.shape[0])], columns = load_iris().feature_names) Se_targets = pd.Series(load_iris().target, index = [\"iris_%d\" % i for i in range(load_iris().data.shape[0])], name = \"Species\") # Scaling mean = 0, var = 1 DF_standard = pd.DataFrame(StandardScaler().fit_transform(DF_data), index = DF_data.index, columns = DF_data.columns) # Sklearn for Principal Componenet Analysis # Dims m = DF_standard.shape[1] K = 2 # PCA (How I tend to set it up) Mod_PCA = decomposition.PCA(n_components=m) DF_PCA = pd.DataFrame(Mod_PCA.fit_transform(DF_standard), columns=[\"PC%d\" % k for k in range(1,m + 1)]).iloc[:,:K] # Color classes color_list = [{0:\"r\",1:\"g\",2:\"b\"}[x] for x in Se_targets] fig, ax = plt.subplots() ax.scatter(x=DF_PCA[\"PC1\"], y=DF_PCA[\"PC2\"], color=color_list) ```", "response":"You could do something like the following by creating a biplot function. 
Nice article here: https:\/\/towardsdatascience.com\/pca-clearly-explained-how-when-why-to-use-it-and-feature-importance-a-guide-in-python-7c274582c37e?source=friends_link&sk=65bf5440e444c24aff192fedf9f8b64f In this example I am using the iris data: ``` import numpy as np import matplotlib.pyplot as plt from sklearn import datasets from sklearn.decomposition import PCA import pandas as pd from sklearn.preprocessing import StandardScaler iris = datasets.load_iris() X = iris.data y = iris.target # In general, it's a good idea to scale the data prior to PCA. scaler = StandardScaler() scaler.fit(X) X=scaler.transform(X) pca = PCA() x_new = pca.fit_transform(X) def myplot(score,coeff,labels=None): xs = score[:,0] ys = score[:,1] n = coeff.shape[0] scalex = 1.0\/(xs.max() - xs.min()) scaley = 1.0\/(ys.max() - ys.min()) plt.scatter(xs * scalex,ys * scaley, c = y) for i in range(n): plt.arrow(0, 0, coeff[i,0], coeff[i,1],color = 'r',alpha = 0.5) if labels is None: plt.text(coeff[i,0]* 1.15, coeff[i,1] * 1.15, \"Var\"+str(i+1), color = 'g', ha = 'center', va = 'center') else: plt.text(coeff[i,0]* 1.15, coeff[i,1] * 1.15, labels[i], color = 'g', ha = 'center', va = 'center') plt.xlim(-1,1) plt.ylim(-1,1) plt.xlabel(\"PC{}\".format(1)) plt.ylabel(\"PC{}\".format(2)) plt.grid() #Call the function. Use only the 2 PCs. myplot(x_new[:,0:2],np.transpose(pca.components_[0:2, :])) plt.show() ``` RESULT", "best_answers_score":0.8, "library_name":"scikit-learn", "question_url":"https:\/\/stackoverflow.com\/questions\/39216897\/plot-pca-loadings-and-loading-in-biplot-in-sklearn-like-rs-autoplot", "best_answers_votes":46, "question_length":1885, "response_length":1469 }, { "question":"Using a transformer (estimator) to transform the target labels in sklearn.pipeline I understand that one can chain several estimators that implement the transform method to transform X (the feature set) in sklearn.pipeline. 
However, I have a use case where I would also like to transform the target labels (e.g. transform the labels to [1...K] instead of [0, K-1]), and I would love to do that as a component in my pipeline. Is it possible to do that at all using sklearn.pipeline?", "response":"There is now a nicer way to do this built into scikit-learn; using a compose.TransformedTargetRegressor. When constructing these objects you give them a regressor and a transformer. When you .fit() them they transform the targets before regressing, and when you .predict() them they transform their predicted targets back to the original space. It's important to note that you can pass them a pipeline object, so they should interface nicely with your existing setup. For example, take the following setup where I train a ridge regression to predict 1 target given 2 features: ```py # Imports import numpy as np from sklearn import compose, linear_model, metrics, pipeline, preprocessing # Generate some training and test features and targets X_train = np.random.rand(200).reshape(100,2) y_train = 1.2*X_train[:, 0]+3.4*X_train[:, 1]+5.6 X_test = np.random.rand(20).reshape(10,2) y_test = 1.2*X_test[:, 0]+3.4*X_test[:, 1]+5.6 # Define my model and scalers ridge = linear_model.Ridge(alpha=1e-2) scaler = preprocessing.StandardScaler() minmax = preprocessing.MinMaxScaler(feature_range=(-1,1)) # Construct a pipeline using these methods pipe = pipeline.make_pipeline(scaler, ridge) # Construct a TransformedTargetRegressor using this pipeline # ** So far the set-up has been standard ** regr = compose.TransformedTargetRegressor(regressor=pipe, transformer=minmax) # Fit and train the regr like you would a pipeline regr.fit(X_train, y_train) y_pred = regr.predict(X_test) print(\"MAE: {}\".format(metrics.mean_absolute_error(y_test, y_pred))) ``` This still isn't quite as smooth as I'd like it to be; for example, you can access the regressor contained by a TransformedTargetRegressor using .regressor_, but the coefficients
stored there are untransformed. This means there are some extra hoops to jump through if you want to work your way back to the equation that generated the data.", "best_answers_score":0.8, "library_name":"scikit-learn", "question_url":"https:\/\/stackoverflow.com\/questions\/18602489\/using-a-transformer-estimator-to-transform-the-target-labels-in-sklearn-pipeli", "best_answers_votes":32, "question_length":477, "response_length":1888 }, { "question":"How to pass elegantly Sklearn's GridseachCV's best parameters to another model? I have found a set of best hyperparameters for my KNN estimator with Grid Search CV: ``` >>> knn_gridsearch_model.best_params_ {'algorithm': 'auto', 'metric': 'manhattan', 'n_neighbors': 3} ``` So far, so good. I want to train my final estimator with these new-found parameters. Is there a way to feed the above hyperparameter dict to it directly? I tried this: ``` >>> new_knn_model = KNeighborsClassifier(knn_gridsearch_model.best_params_) ``` but instead the hoped result new_knn_model just got the whole dict as the first parameter of the model and left the remaining ones as default: ``` >>> knn_model KNeighborsClassifier(algorithm='auto', leaf_size=30, metric='minkowski', metric_params=None, n_jobs=1, n_neighbors={'n_neighbors': 3, 'metric': 'manhattan', 'algorithm': 'auto'}, p=2, weights='uniform') ``` Disappointing indeed.", "response":"You can do that as follows: ``` new_knn_model = KNeighborsClassifier() new_knn_model.set_params(**knn_gridsearch_model.best_params_) ``` Or just unpack directly as @taras suggested: ``` new_knn_model = KNeighborsClassifier(**knn_gridsearch_model.best_params_) ``` By the way, after finish running the grid search, the grid search object actually keeps (by default) the best parameters, so you can use the object itself. 
Alternatively, you could also access the classifier with the best parameters through ``` gs.best_estimator_ ```", "best_answers_score":0.8, "library_name":"scikit-learn", "question_url":"https:\/\/stackoverflow.com\/questions\/45074698\/how-to-pass-elegantly-sklearns-gridseachcvs-best-parameters-to-another-model", "best_answers_votes":51, "question_length":915, "response_length":531 }, { "question":"ValueError: Solver lbfgs supports only 'l2' or 'none' penalties, got l1 penalty I'm running the process of feature selection on classification problem, using the embedded method (L1 - Lasso) With LogisticRegression. I'm running the following code: ``` from sklearn.linear_model import Lasso, LogisticRegression from sklearn.feature_selection import SelectFromModel # using logistic regression with penalty l1. selection = SelectFromModel(LogisticRegression(C=1, penalty='l1')) selection.fit(x_train, y_train) ``` But I'm getting exception (on the fit command): ``` selection.fit(x_train, y_train) File \"C:\\Python37\\lib\\site-packages\\sklearn\\feature_selection\\_from_model.py\", line 222, in fit self.estimator_.fit(X, y, **fit_params) File \"C:\\Python37\\lib\\site-packages\\sklearn\\linear_model\\_logistic.py\", line 1488, in fit solver = _check_solver(self.solver, self.penalty, self.dual) File \"C:\\Python37\\lib\\site-packages\\sklearn\\linear_model\\_logistic.py\", line 445, in _check_solver \"got %s penalty.\" % (solver, penalty)) ValueError: Solver lbfgs supports only 'l2' or 'none' penalties, got l1 penalty. ``` I'm running under python 3.7.6 and sscikit-learn version is 0.22.2.post1 What is wrong and how can I fix it ?", "response":"This is cleared up in the documentation. solver : {\u2018newton-cg\u2019, \u2018lbfgs\u2019, \u2018liblinear\u2019, \u2018sag\u2019, \u2018saga\u2019}, default=\u2019lbfgs\u2019 ... 
\u2018newton-cg\u2019, \u2018lbfgs\u2019, \u2018sag\u2019 and \u2018saga\u2019 handle L2 or no penalty \u2018liblinear\u2019 and \u2018saga\u2019 also handle L1 penalty Call it like this: ``` LogisticRegression(C=1, penalty='l1', solver='liblinear') ```", "best_answers_score":0.8, "library_name":"scikit-learn", "question_url":"https:\/\/stackoverflow.com\/questions\/60868629\/valueerror-solver-lbfgs-supports-only-l2-or-none-penalties-got-l1-penalty", "best_answers_votes":46, "question_length":1216, "response_length":315 }, { "question":"Singleton array array(, dtype=object) cannot be considered a valid collection Not sure how to fix . Any help much appreciate. I saw thi Vectorization: Not a valid collection but not sure if i understood this ``` train = df1.iloc[:,[4,6]] target =df1.iloc[:,[0]] def train(classifier, X, y): X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=33) classifier.fit(X_train, y_train) print (\"Accuracy: %s\" % classifier.score(X_test, y_test)) return classifier trial1 = Pipeline([ ('vectorizer', TfidfVectorizer()), ('classifier', MultinomialNB()),]) train(trial1, train, target) ``` error below : ``` ----> 6 train(trial1, train, target) in train(classifier, X, y) 1 def train(classifier, X, y): ----> 2 X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=33) 3 4 classifier.fit(X_train, y_train) 5 print (\"Accuracy: %s\" % classifier.score(X_test, y_test)) \/home\/manisha\/anaconda3\/lib\/python3.5\/site-packages\/sklearn\/model_selection\/_split.py in train_test_split(*arrays, **options) 1687 test_size = 0.25 1688 -> 1689 arrays = indexable(*arrays) 1690 1691 if stratify is not None: \/home\/manisha\/anaconda3\/lib\/python3.5\/site-packages\/sklearn\/utils\/validation.py in indexable(*iterables) 204 else: 205 result.append(np.array(X)) --> 206 check_consistent_length(*result) 207 return result 208 
\/home\/manisha\/anaconda3\/lib\/python3.5\/site-packages\/sklearn\/utils\/validation.py in check_consistent_length(*arrays) 175 \"\"\" 176 --> 177 lengths = [_num_samples(X) for X in arrays if X is not None] 178 uniques = np.unique(lengths) 179 if len(uniques) > 1: \/home\/manisha\/anaconda3\/lib\/python3.5\/site-packages\/sklearn\/utils\/validation.py in (.0) 175 \"\"\" 176 --> 177 lengths = [_num_samples(X) for X in arrays if X is not None] 178 uniques = np.unique(lengths) 179 if len(uniques) > 1: \/home\/manisha\/anaconda3\/lib\/python3.5\/site-packages\/sklearn\/utils\/validation.py in _num_samples(x) 124 if len(x.shape) == 0: 125 raise TypeError(\"Singleton array %r cannot be considered\" --> 126 \" a valid collection.\" % x) 127 return x.shape[0] 128 else: TypeError: Singleton array array(, dtype=object) cannot be considered a valid collection. ____ ``` Not sure how to fix . Any help much appreciate. I saw thi Vectorization: Not a valid collection but not sure if i understood this", "response":"This error arises because your function train masks your variable train, and hence it is passed to itself. Explanation: You define a variable train like this: ``` train = df1.iloc[:,[4,6]] ``` Then after some lines, you define a method train like this: ``` def train(classifier, X, y): ``` So what actually happens is, your previous version of train is updated with new version. That means that the train now does not point to the Dataframe object as you wanted, but points to the function you defined. In the error it is cleared. ``` array(, dtype=object) ``` See the function train inside the error statement. Solution: Rename one of them (the variable or the method). 
Suggestion: Rename the function to some other name like training or training_func or something like that.", "best_answers_score":0.8, "library_name":"scikit-learn", "question_url":"https:\/\/stackoverflow.com\/questions\/43222882\/singleton-array-arrayfunction-train-at-0x7f3a311320d0-dtype-object-cannot-b", "best_answers_votes":22, "question_length":2322, "response_length":776 }, { "question":"How to apply standardization to SVMs in scikit-learn? I'm using the current stable version 0.13 of scikit-learn. I'm applying a linear support vector classifier to some data using the class sklearn.svm.LinearSVC. In the chapter about preprocessing in scikit-learn's documentation, I've read the following: Many elements used in the objective function of a learning algorithm (such as the RBF kernel of Support Vector Machines or the l1 and l2 regularizers of linear models) assume that all features are centered around zero and have variance in the same order. If a feature has a variance that is orders of magnitude larger that others, it might dominate the objective function and make the estimator unable to learn from other features correctly as expected. Question 1: Is standardization useful for SVMs in general, also for those with a linear kernel function as in my case? Question 2: As far as I understand, I have to compute the mean and standard deviation on the training data and apply this same transformation on the test data using the class sklearn.preprocessing.StandardScaler. However, what I don't understand is whether I have to transform the training data as well or just the test data prior to feeding it to the SVM classifier. 
That is, do I have to do this: ``` scaler = StandardScaler() scaler.fit(X_train) # only compute mean and std here X_test = scaler.transform(X_test) # perform standardization by centering and scaling clf = LinearSVC() clf.fit(X_train, y_train) clf.predict(X_test) ``` Or do I have to do this: ``` scaler = StandardScaler() X_train = scaler.fit_transform(X_train) # compute mean, std and transform training data as well X_test = scaler.transform(X_test) # same as above clf = LinearSVC() clf.fit(X_train, y_train) clf.predict(X_test) ``` In short, do I have to use scaler.fit(X_train) or scaler.fit_transform(X_train) on the training data in order to get reasonable results with LinearSVC?", "response":"Neither. scaler.transform(X_train) doesn't have any effect. The transform operation is not in-place. You have to do ``` X_train = scaler.fit_transform(X_train) X_test = scaler.transform(X_test) ``` or ``` X_train = scaler.fit(X_train).transform(X_train) ``` You always need to do the same preprocessing on both training or test data. And yes, standardization is always good if it reflects your believe for the data. In particular for kernel-svms it is often crucial.", "best_answers_score":0.8, "library_name":"scikit-learn", "question_url":"https:\/\/stackoverflow.com\/questions\/14688391\/how-to-apply-standardization-to-svms-in-scikit-learn", "best_answers_votes":39, "question_length":1934, "response_length":466 }, { "question":"sklearn: use Pipeline in a RandomizedSearchCV? I'd like to be able to use pipelines in the RandomizedSearchCV construct in sklearn. However right now I believe that only estimators are supported. 
Here's an example of what I'd like to be able to do: ``` import numpy as np from sklearn.grid_search import RandomizedSearchCV from sklearn.datasets import load_digits from sklearn.svm import SVC from sklearn.preprocessing import StandardScaler from sklearn.pipeline import Pipeline # get some data iris = load_digits() X, y = iris.data, iris.target # specify parameters and distributions to sample from param_dist = {'C': [1, 10, 100, 1000], 'gamma': [0.001, 0.0001], 'kernel': ['rbf', 'linear'],} # create pipeline with a scaler steps = [('scaler', StandardScaler()), ('rbf_svm', SVC())] pipeline = Pipeline(steps) # do search search = RandomizedSearchCV(pipeline, param_distributions=param_dist, n_iter=50) search.fit(X, y) print search.grid_scores_ ``` If you just run like this, you'll get the following error: ``` ValueError: Invalid parameter kernel for estimator Pipeline ``` Is there a good way to do this in sklearn?", "response":"RandomizedSearchCV, as well as GridSearchCV, do support pipelines (in fact, they're independent of their implementation, and pipelines are designed to be equivalent to usual classifiers). The key to the issue is pretty straightforward if you think, what parameters should search be done over. Since pipeline consists of many objects (several transformers + a classifier), one may want to find optimal parameters both for the classifier and transformers. Thus, you need to somehow distinguish where to get \/ set properties from \/ to. So what you need to do is to say that you want to find a value for, say, not just some abstract gamma (which pipeline doesn't have at all), but gamma of pipeline's classifier, which is called in your case rbf_svm (that also justifies the need for names). 
This can be achieved using double underscore syntax, widely used in sklearn for nested models: ``` param_dist = { 'rbf_svm__C': [1, 10, 100, 1000], 'rbf_svm__gamma': [0.001, 0.0001], 'rbf_svm__kernel': ['rbf', 'linear'], } ```", "best_answers_score":0.8, "library_name":"scikit-learn", "question_url":"https:\/\/stackoverflow.com\/questions\/28178763\/sklearn-use-pipeline-in-a-randomizedsearchcv", "best_answers_votes":34, "question_length":1122, "response_length":1014 }, { "question":"How to get SVMs to play nicely with missing data in scikit-learn? I am using scikit-learn for some data analysis, and my dataset has some missing values (represented by NA). I load the data in with genfromtxt with dtype='f8' and go about training my classifier. The classification is fine on RandomForestClassifier and GradientBoostingClassifier objects, but using SVC from sklearn.svm causes the following error: ``` probas = classifiers[i].fit(train[traincv], target[traincv]).predict_proba(train[testcv]) File \"C:\\Python27\\lib\\site-packages\\sklearn\\svm\\base.py\", line 409, in predict_proba X = self._validate_for_predict(X) File \"C:\\Python27\\lib\\site-packages\\sklearn\\svm\\base.py\", line 534, in _validate_for_predict X = atleast2d_or_csr(X, dtype=np.float64, order=\"C\") File \"C:\\Python27\\lib\\site-packages\\sklearn\\utils\\validation.py\", line 84, in atleast2d_or_csr assert_all_finite(X) File \"C:\\Python27\\lib\\site-packages\\sklearn\\utils\\validation.py\", line 20, in assert_all_finite raise ValueError(\"array contains NaN or infinity\") ValueError: array contains NaN or infinity ``` What gives? How can I make the SVM play nicely with the missing data? Keeping in mind that the missing data works fine for random forests and other classifiers..", "response":"You can do data imputation to handle missing values before using SVM. EDIT: In scikit-learn, there's a really easy way to do this, illustrated on this page. 
(copied from page and modified) ``` >>> import numpy as np >>> from sklearn.preprocessing import Imputer >>> # missing_values is the value of your placeholder, strategy is if you'd like mean, median or mode, and axis=0 means it calculates the imputation based on the other feature values for that sample >>> imp = Imputer(missing_values='NaN', strategy='mean', axis=0) >>> imp.fit(train) Imputer(axis=0, copy=True, missing_values='NaN', strategy='mean', verbose=0) >>> train_imp = imp.transform(train) ```", "best_answers_score":0.8, "library_name":"scikit-learn", "question_url":"https:\/\/stackoverflow.com\/questions\/11441751\/how-to-get-svms-to-play-nicely-with-missing-data-in-scikit-learn", "best_answers_votes":25, "question_length":1244, "response_length":662 }, { "question":"PCA on word2vec embeddings I am trying to reproduce the results of this paper: https:\/\/arxiv.org\/pdf\/1607.06520.pdf Specifically this part: To identify the gender subspace, we took the ten gender pair difference vectors and computed its principal components (PCs). As Figure 6 shows, there is a single direction that explains the majority of variance in these vectors. The first eigenvalue is significantly larger than the rest. I am using the same set of word vectors as the authors (Google News Corpus, 300 dimensions), which I load into word2vec. 
The 'ten gender pair difference vectors' the authors refer to are computed from the following word pairs: I've computed the differences between each normalized vector in the following way: ``` model = gensim.models.KeyedVectors.load_word2vec_format('GoogleNews-vectors- negative300.bin', binary = True) model.init_sims() pairs = [('she', 'he'), ('her', 'his'), ('woman', 'man'), ('Mary', 'John'), ('herself', 'himself'), ('daughter', 'son'), ('mother', 'father'), ('gal', 'guy'), ('girl', 'boy'), ('female', 'male')] difference_matrix = np.array([model.word_vec(a[0], use_norm=True) - model.word_vec(a[1], use_norm=True) for a in pairs]) ``` I then perform PCA on the resulting matrix, with 10 components, as per the paper: ``` from sklearn.decomposition import PCA pca = PCA(n_components=10) pca.fit(difference_matrix) ``` However I get very different results when I look at pca.explained_variance_ratio_ : ``` array([ 2.83391436e-01, 2.48616155e-01, 1.90642492e-01, 9.98411858e-02, 5.61260498e-02, 5.29706681e-02, 2.75670634e-02, 2.21957722e-02, 1.86491774e-02, 1.99108478e-32]) ``` or with a chart: The first component accounts for less than 30% of the variance when it should be above 60%! The results I get are similar to what I get when I try to do the PCA on randomly selected vectors, so I must be doing something wrong, but I can't figure out what. Note: I've tried without normalizing the vectors, but I get the same results.", "response":"They released the code for the paper on github: https:\/\/github.com\/tolga-b\/debiaswe Specifically, you can see their code for creating the PCA plot in this file. 
Here is the relevant snippet of code from that file: ``` def doPCA(pairs, embedding, num_components = 10): matrix = [] for a, b in pairs: center = (embedding.v(a) + embedding.v(b))\/2 matrix.append(embedding.v(a) - center) matrix.append(embedding.v(b) - center) matrix = np.array(matrix) pca = PCA(n_components = num_components) pca.fit(matrix) # bar(range(num_components), pca.explained_variance_ratio_) return pca ``` Based on the code, looks like they are taking the difference between each word in a pair and the average vector of the pair. To me, it's not clear this is what they meant in the paper. However, I ran this code with their pairs and was able to recreate the graph from the paper:", "best_answers_score":0.8, "library_name":"scikit-learn", "question_url":"https:\/\/stackoverflow.com\/questions\/48019843\/pca-on-word2vec-embeddings", "best_answers_votes":14, "question_length":1985, "response_length":857 }, { "question":"No module named 'sklearn.utils.linear_assignment_' I am trying to run a project from github , every object counter applications using sort algorithm. I can't run any of them because of a specific error, attaching errors screenshot. Can anyone help me about fixing this issue?", "response":"The linear_assignment function is deprecated in 0.21 and will be removed from 0.23, but sklearn.utils.linear_assignment_ can be replaced by scipy.optimize.linear_sum_assignment. 
You can use: ``` from scipy.optimize import linear_sum_assignment as linear_assignment ``` then you can run the file and don't need to change the code.", "best_answers_score":0.8, "library_name":"scikit-learn", "question_url":"https:\/\/stackoverflow.com\/questions\/62390517\/no-module-named-sklearn-utils-linear-assignment", "best_answers_votes":77, "question_length":275, "response_length":329 }, { "question":"Python import error: cannot import name 'six' from 'sklearn.externals' I'm using numpy and mlrose, and all i have written so far is: ``` import numpy as np import mlrose ``` However, when i run it, it comes up with an error message: ``` File \"C:\\Users\\\\AppData\\Local\\Programs\\Python\\Python38-32\\lib\\site-packages\\mlrose\\neural.py\", line 12, in from sklearn.externals import six ImportError: cannot import name 'six' from 'sklearn.externals' (C:\\Users\\\\AppData\\Local\\Programs\\Python\\Python38-32\\lib\\site-packages\\sklearn\\externals\\__init__.py) ``` Any help on sorting this problem will be greatly appreciated.", "response":"Solution: The real answer is that the dependency needs to be changed by the mlrose maintainers. A workaround is: ``` import six import sys sys.modules['sklearn.externals.six'] = six import mlrose ```", "best_answers_score":0.8, "library_name":"scikit-learn", "question_url":"https:\/\/stackoverflow.com\/questions\/61867945\/python-import-error-cannot-import-name-six-from-sklearn-externals", "best_answers_votes":76, "question_length":609, "response_length":199 }, { "question":"Clustering text documents using scikit-learn kmeans in Python I need to implement scikit-learn's kMeans for clustering text documents. The example code works fine as it is but takes some 20newsgroups data as input. 
I want to use the same code for clustering a list of documents as shown below: ``` documents = [\"Human machine interface for lab abc computer applications\", \"A survey of user opinion of computer system response time\", \"The EPS user interface management system\", \"System and human system engineering testing of EPS\", \"Relation of user perceived response time to error measurement\", \"The generation of random binary unordered trees\", \"The intersection graph of paths in trees\", \"Graph minors IV Widths of trees and well quasi ordering\", \"Graph minors A survey\"] ``` What changes do i need to do in kMeans example code to use this list as input? (Simply taking 'dataset = documents' doesn't work)", "response":"This is a simpler example: ``` from sklearn.feature_extraction.text import TfidfVectorizer from sklearn.cluster import KMeans from sklearn.metrics import adjusted_rand_score documents = [\"Human machine interface for lab abc computer applications\", \"A survey of user opinion of computer system response time\", \"The EPS user interface management system\", \"System and human system engineering testing of EPS\", \"Relation of user perceived response time to error measurement\", \"The generation of random binary unordered trees\", \"The intersection graph of paths in trees\", \"Graph minors IV Widths of trees and well quasi ordering\", \"Graph minors A survey\"] ``` vectorize the text i.e. 
convert the strings to numeric features ``` vectorizer = TfidfVectorizer(stop_words='english') X = vectorizer.fit_transform(documents) ``` cluster the documents ``` true_k = 2 model = KMeans(n_clusters=true_k, init='k-means++', max_iter=100, n_init=1) model.fit(X) ``` print the top terms per cluster ``` print(\"Top terms per cluster:\") order_centroids = model.cluster_centers_.argsort()[:, ::-1] terms = vectorizer.get_feature_names() for i in range(true_k): print(\"Cluster %d:\" % i, end='') for ind in order_centroids[i, :10]: print(' %s' % terms[ind], end='') print() ``` If you want to have a more visual idea of how this looks, see this answer.", "best_answers_score":0.8, "library_name":"scikit-learn", "question_url":"https:\/\/stackoverflow.com\/questions\/27889873\/clustering-text-documents-using-scikit-learn-kmeans-in-python", "best_answers_votes":77, "question_length":908, "response_length":1315 }, { "question":"Scikit-learn grid search with SVM regression I am learning cross-validation and grid search and came across this youtube playlist; the tutorial has also been uploaded to GitHub as an ipython notebook. I am trying to recreate the code in the Searching multiple parameters simultaneously section, but instead of using kNN I am using SVM regression.
This is my code ``` from sklearn.datasets import load_iris from sklearn import svm from sklearn.grid_search import GridSearchCV import matplotlib.pyplot as plt import numpy as np iris = load_iris() X = iris.data y = iris.target k=['rbf', 'linear','poly','sigmoid','precomputed'] c= range(1,100) g=np.arange(1e-4,1e-2,0.0001) g=g.tolist() param_grid=dict(kernel=k, C=c, gamma=g) print param_grid svr=svm.SVC() grid = GridSearchCV(svr, param_grid, cv=5,scoring='accuracy') grid.fit(X, y) print() print(\"Grid scores on development set:\") print() print grid.grid_scores_ print(\"Best parameters set found on development set:\") print() print(grid.best_params_) print(\"Grid best score:\") print() print (grid.best_score_) # create a list of the mean scores only grid_mean_scores = [result.mean_validation_score for result in grid.grid_scores_] print grid_mean_scores ``` But it's giving this error: ``` raise ValueError(\"X should be a square kernel matrix\") ValueError: X should be a square kernel matrix ```", "response":"Remove 'precomputed' from your parameter space. kernel='precomputed' can only be used when passing a (n_samples, n_samples) data matrix that represents pairwise similarities for the samples instead of the traditional (n_samples, n_features) rectangular data matrix.
See the documentation for more details on the meaning of the kernel parameter: http:\/\/scikit-learn.org\/stable\/modules\/generated\/sklearn.svm.SVC.html http:\/\/scikit-learn.org\/stable\/modules\/svm.html#svm-kernels", "best_answers_score":0.8, "library_name":"scikit-learn", "question_url":"https:\/\/stackoverflow.com\/questions\/36306555\/scikit-learn-grid-search-with-svm-regression", "best_answers_votes":74, "question_length":1340, "response_length":473 }, { "question":"access to numbers in classification_report - sklearn This is a simple example of a classification_report in sklearn: ``` from sklearn.metrics import classification_report y_true = [0, 1, 2, 2, 2] y_pred = [0, 0, 2, 2, 1] target_names = ['class 0', 'class 1', 'class 2'] print(classification_report(y_true, y_pred, target_names=target_names)) # precision recall f1-score support # # class 0 0.50 1.00 0.67 1 # class 1 0.00 0.00 0.00 1 # class 2 1.00 0.67 0.80 3 # #avg \/ total 0.70 0.60 0.61 5 ``` I want to have access to avg\/total row. For instance, I want to extract the f1-score from the report, which is 0.61. How can I have access to the number in classification_report?", "response":"You can output the classification report by adding output_dict=True to the report: ``` report = classification_report(y_true, y_pred, output_dict=True) ``` And then access its single values as in a normal python dictionary. 
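With the toy labels from the question, the dictionary form can be sanity-checked end to end (a quick sketch):

```python
from sklearn.metrics import classification_report

y_true = [0, 1, 2, 2, 2]
y_pred = [0, 0, 2, 2, 1]
report = classification_report(y_true, y_pred, output_dict=True)

# class labels become string keys; every metric is a plain float
print(round(report['0']['f1-score'], 2))
print(round(report['weighted avg']['f1-score'], 2))
print(report['accuracy'])
```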
For example, the macro metrics: ``` macro_precision = report['macro avg']['precision'] macro_recall = report['macro avg']['recall'] macro_f1 = report['macro avg']['f1-score'] ``` or Accuracy: ``` accuracy = report['accuracy'] ```", "best_answers_score":0.8, "library_name":"scikit-learn", "question_url":"https:\/\/stackoverflow.com\/questions\/48417867\/access-to-numbers-in-classification-report-sklearn", "best_answers_votes":27, "question_length":675, "response_length":453 }, { "question":"Separate pandas dataframe using sklearn's KFold I had obtained the index of training set and testing set with code below. ```py df = pandas.read_pickle(filepath + filename) kf = KFold(n_splits = n_splits, shuffle = shuffle, random_state = randomState) result = next(kf.split(df), None) #train can be accessed with result[0] #test can be accessed with result[1] ``` I wonder if there is any faster way to separate them into 2 dataframe respectively with the row indexes I retrieved.", "response":"You need DataFrame.iloc for select rows by positions: Sample: ``` np.random.seed(100) df = pd.DataFrame(np.random.random((10,5)), columns=list('ABCDE')) df.index = df.index * 10 print (df) A B C D E 0 0.543405 0.278369 0.424518 0.844776 0.004719 10 0.121569 0.670749 0.825853 0.136707 0.575093 20 0.891322 0.209202 0.185328 0.108377 0.219697 30 0.978624 0.811683 0.171941 0.816225 0.274074 40 0.431704 0.940030 0.817649 0.336112 0.175410 50 0.372832 0.005689 0.252426 0.795663 0.015255 60 0.598843 0.603805 0.105148 0.381943 0.036476 70 0.890412 0.980921 0.059942 0.890546 0.576901 80 0.742480 0.630184 0.581842 0.020439 0.210027 90 0.544685 0.769115 0.250695 0.285896 0.852395 ``` ``` from sklearn.model_selection import KFold #added some parameters kf = KFold(n_splits = 5, shuffle = True, random_state = 2) result = next(kf.split(df), None) print (result) (array([0, 2, 3, 5, 6, 7, 8, 9]), array([1, 4])) train = df.iloc[result[0]] test = df.iloc[result[1]] print (train) A B C D E 0 0.543405 0.278369 
0.424518 0.844776 0.004719 20 0.891322 0.209202 0.185328 0.108377 0.219697 30 0.978624 0.811683 0.171941 0.816225 0.274074 50 0.372832 0.005689 0.252426 0.795663 0.015255 60 0.598843 0.603805 0.105148 0.381943 0.036476 70 0.890412 0.980921 0.059942 0.890546 0.576901 80 0.742480 0.630184 0.581842 0.020439 0.210027 90 0.544685 0.769115 0.250695 0.285896 0.852395 print (test) A B C D E 10 0.121569 0.670749 0.825853 0.136707 0.575093 40 0.431704 0.940030 0.817649 0.336112 0.175410 ```", "best_answers_score":0.8, "library_name":"scikit-learn", "question_url":"https:\/\/stackoverflow.com\/questions\/45115964\/separate-pandas-dataframe-using-sklearns-kfold", "best_answers_votes":58, "question_length":481, "response_length":1490 }, { "question":"Scikit-Learn Vectorizer `max_features` How do I choose the number of the max_features parameter in the TfidfVectorizer module? Should I use the maximum number of elements in the data? The description of the parameter does not give me a clear vision of how to choose the value for it: max_features : int or None, default=None If not None, build a vocabulary that only consider the top max_features ordered by term frequency across the corpus. This parameter is ignored if vocabulary is not None.", "response":"This parameter is absolutely optional and should be calibrated according to rational thinking and the structure of your data. Sometimes it is not effective to transform the whole vocabulary, as the data may have some exceptionally rare words, which, if passed to TfidfVectorizer().fit(), will add unwanted dimensions to inputs in the future. One of the appropriate techniques in this case, for instance, would be to print out word frequencies across documents and then set a certain threshold for them. Imagine you have set a threshold of 50, and your data corpus consists of 100 words. After looking at the word frequencies, 20 words occur fewer than 50 times. Thus, you set max_features=80 and you are good to go.
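That thresholding idea can be sketched on a tiny made-up corpus; the words and the cutoff of 2 documents below are purely illustrative:

```python
import numpy as np
from sklearn.feature_extraction.text import CountVectorizer, TfidfVectorizer

docs = ['red apple pie', 'red apple tart', 'red banana split once']

# document frequency: in how many documents does each word occur
cv = CountVectorizer().fit(docs)
names = sorted(cv.vocabulary_, key=cv.vocabulary_.get)
doc_freq = np.asarray((cv.transform(docs) > 0).sum(axis=0)).ravel()
freq = dict(zip(names, doc_freq.tolist()))

# keep only words meeting the cutoff by sizing max_features accordingly
keep = sum(1 for c in freq.values() if c >= 2)
tf = TfidfVectorizer(max_features=keep).fit(docs)
print(freq)
print(sorted(tf.vocabulary_))
```

Note that TfidfVectorizer's min_df parameter applies such a document-frequency cutoff directly, without counting by hand.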
If max_features is set to None, then the whole corpus is considered during the TF-IDF transformation. Otherwise, if you pass, say, 5 to max_features, that would mean creating a feature matrix out of the 5 most frequent words across the text documents. Quick example Assume you work with hardware-related documents. Your raw data is the following: ``` from sklearn.feature_extraction.text import TfidfVectorizer data = ['gpu processor cpu performance', 'gpu performance ram computer', 'cpu computer ram processor jeans'] ``` You see the word jeans in the third document is hardly related and occurs only once in the whole dataset. The best way to omit the word, of course, would be to use the stop_words parameter, but imagine if there are plenty of such words; or words that are related to the topic but occur scarcely. In the second case, the max_features parameter might help. If you proceed with max_features=None, then it will create a 3x7 sparse matrix, while the best-case scenario would be a 3x6 matrix: ``` tf = TfidfVectorizer(max_features=None).fit(data) tf.vocabulary_.__len__() # returns 7 as we passed 7 words tf.fit_transform(data) # returns 3x7 sparse matrix tf = TfidfVectorizer(max_features=6).fit(data) # excluding 'jeans' tf.vocabulary_ # prints out every word except 'jeans' tf.vocabulary_.__len__() # returns 6 tf.fit_transform(data) # returns 3x6 sparse matrix ```", "best_answers_score":0.8, "library_name":"scikit-learn", "question_url":"https:\/\/stackoverflow.com\/questions\/46118910\/scikit-learn-vectorizer-max-features", "best_answers_votes":49, "question_length":490, "response_length":2088 }, { "question":"Feature Importance with XGBClassifier Hopefully I'm reading this wrong, but in the XGBoost library documentation there is a note about extracting the feature importance attributes using feature_importances_, much like sklearn's random forest.
However, for some reason, I keep getting this error: AttributeError: 'XGBClassifier' object has no attribute 'feature_importances_' My code snippet is below: ``` from sklearn import datasets import xgboost as xg iris = datasets.load_iris() X = iris.data Y = iris.target Y = iris.target[ Y < 2] # arbitrarily removing class 2 so it can be 0 and 1 X = X[range(1,len(Y)+1)] # cutting the dataframe to match the rows in Y xgb = xg.XGBClassifier() fit = xgb.fit(X, Y) fit.feature_importances_ ``` It seems that you can compute feature importance using the Booster object by calling the get_fscore attribute. The only reason I'm using XGBClassifier over Booster is because it is able to be wrapped in a sklearn pipeline. Any thoughts on feature extractions? Is anyone else experiencing this?", "response":"As the comments indicate, I suspect your issue is a versioning one. However if you do not want to\/can't update, then the following function should work for you. ``` def get_xgb_imp(xgb, feat_names): from numpy import array imp_vals = xgb.booster().get_fscore() imp_dict = {feat_names[i]:float(imp_vals.get('f'+str(i),0.)) for i in range(len(feat_names))} total = array(imp_dict.values()).sum() return {k:v\/total for k,v in imp_dict.items()} >>> import numpy as np >>> from xgboost import XGBClassifier >>> >>> feat_names = ['var1','var2','var3','var4','var5'] >>> np.random.seed(1) >>> X = np.random.rand(100,5) >>> y = np.random.rand(100).round() >>> xgb = XGBClassifier(n_estimators=10) >>> xgb = xgb.fit(X,y) >>> >>> get_xgb_imp(xgb,feat_names) {'var5': 0.0, 'var4': 0.20408163265306123, 'var1': 0.34693877551020408, 'var3': 0.22448979591836735, 'var2': 0.22448979591836735} ```", "best_answers_score":0.8, "library_name":"scikit-learn", "question_url":"https:\/\/stackoverflow.com\/questions\/38212649\/feature-importance-with-xgbclassifier", "best_answers_votes":17, "question_length":1022, "response_length":881 }, { "question":"Difference between predict vs predict_proba in scikit-learn 
Suppose I have created a model, and my target variable is either 0, 1 or 2. It seems that if I use predict, the answer is one of 0, 1 or 2. But if I use predict_proba, I get a row of three probabilities for each row, for example ``` model = ... Classifier # It could be any classifier m1 = model.predict(mytest) m2 = model.predict_proba(mytest) # Now suppose m2[3] = [0.6, 0.2, 0.2] ``` Suppose I use both predict and predict_proba. If at index 3 I get the above result from predict_proba, then at index 3 of the result of predict I should see 0. Is this the case? I am trying to understand how the results of predict and predict_proba on the same model relate to each other.", "response":"predict() is used to predict the actual class (in your case one of 0, 1, or 2). predict_proba() is used to predict the class probabilities. From the example output that you shared, predict() would output class 0, since the class probability for 0 is 0.6. [0.6, 0.2, 0.2] is the output of predict_proba, which simply denotes that the class probabilities for classes 0, 1, and 2 are 0.6, 0.2, and 0.2 respectively. Now, as the documentation mentions for predict_proba, the resulting array is ordered based on the labels you've been using: The returned estimates for all classes are ordered by the label of classes. Therefore, in your case where your class labels are [0, 1, 2], the corresponding output of predict_proba will contain the corresponding probabilities. 0.6 is the probability that the instance is classified as 0, and 0.2 are the probabilities that the instance is categorised as 1 and 2 respectively.
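The relationship can be illustrated with any classifier that exposes both methods; a LogisticRegression on iris is an arbitrary stand-in here:

```python
import numpy as np
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression

X, y = load_iris(return_X_y=True)
model = LogisticRegression(max_iter=1000).fit(X, y)

labels = model.predict(X[:5])       # hard class labels: 0, 1 or 2
probs = model.predict_proba(X[:5])  # one probability per class, rows sum to 1

# predict() agrees with the argmax of predict_proba() over the class axis
assert (labels == model.classes_[np.argmax(probs, axis=1)]).all()
print(labels)
print(np.round(probs, 3))
```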
For a more comprehensive explanation, refer to the article What is the difference between predict() and predict_proba() in scikit-learn on TDS.", "best_answers_score":0.8, "library_name":"scikit-learn", "question_url":"https:\/\/stackoverflow.com\/questions\/61184906\/difference-between-predict-vs-predict-proba-in-scikit-learn", "best_answers_votes":36, "question_length":746, "response_length":1049 }, { "question":"Using sklearn, how do I find depth of a decision tree? I am training a decision tree with sklearn. When I use: ``` dt_clf = tree.DecisionTreeClassifier() ``` the max_depth parameter defaults to None. According to the documentation, if max_depth is None, then nodes are expanded until all leaves are pure or until all leaves contain less than min_samples_split samples. After fitting my model, how do I find out what max_depth actually is? The get_params() function doesn't help: even after fitting, it still says None. How can I get the actual number for max_depth? Docs: https:\/\/scikit-learn.org\/stable\/modules\/generated\/sklearn.tree.DecisionTreeClassifier.html", "response":"Access max_depth on the underlying Tree object: ``` from sklearn import tree X = [[0, 0], [1, 1]] Y = [0, 1] clf = tree.DecisionTreeClassifier() clf = clf.fit(X, Y) print(clf.tree_.max_depth) >>> 1 ``` You may get more accessible attributes from the underlying tree object using: ``` help(clf.tree_) ``` These include max_depth, node_count, and other lower-level parameters.
model = LogisticRegression() fitting the model e.g. model.fit(xtrain, ytrain) predicting e.g. model.predict(xtest) use e.g. cross val score to test the fitted model accuracy. Where I'm confused is using sklearn kfolds with cross val score. As I understand it, the cross_val_score function will fit the model and predict on the kfolds, giving you an accuracy score for each fold, e.g. using code like this: ``` kf = KFold(n=data.shape[0], n_folds=5, shuffle=True, random_state=8) lr = linear_model.LogisticRegression() accuracies = cross_val_score(lr, X_train,y_train, scoring='accuracy', cv = kf) ``` So if I have a dataset with training and testing data, and I use the cross_val_score function with kfolds to determine the accuracy of the algorithm on my training data for each fold, is the model now fitted and ready for prediction on the testing data? So in the case above using lr.predict", "response":"No, the model is not fitted. Looking at the source code for cross_val_score: ``` scores=parallel(delayed(_fit_and_score)(clone(estimator),X,y,scorer, train,test,verbose,None,fit_params) ``` As you can see, cross_val_score clones the estimator before fitting the fold training data to it. cross_val_score will output an array of scores, which you can analyse to know how the estimator performs for different folds of the data, to check whether it overfits the data or not. You can know more about it here You need to fit the whole training data to the estimator once you are satisfied with the results of cross_val_score, before you can use it to predict on test data.", "best_answers_score":0.8, "library_name":"scikit-learn", "question_url":"https:\/\/stackoverflow.com\/questions\/42263915\/using-sklearn-cross-val-score-and-kfolds-to-fit-and-help-predict-model", "best_answers_votes":30, "question_length":1102, "response_length":668 }, { "question":"Lime vs TreeInterpreter for interpreting decision tree [closed]
Lime source: https:\/\/github.com\/marcotcr\/lime treeinterpreter source: tree interpreter I am trying to understand how the DecisionTree made its predictions using Lime and treeinterpreter. Both claim in their descriptions that they are able to interpret the decision tree, yet they interpret the same DecisionTree in different ways; that is, the feature contribution order differs. How is that possible, if both are looking at the same thing and are trying to describe the same event, but assign importance in a different order? Who should we trust? Especially where the top feature does matter in prediction. The code for tree ``` import sklearn import sklearn.datasets import sklearn.ensemble import numpy as np import lime import lime.lime_tabular from __future__ import print_function np.random.seed(1) from treeinterpreter import treeinterpreter as ti from sklearn.tree import DecisionTreeClassifier iris = sklearn.datasets.load_iris() dt = DecisionTreeClassifier(random_state=42) dt.fit(iris.data, iris.target) n = 100 instances = iris.data[n].reshape(1,-1) prediction, biases, contributions = ti.predict(dt, instances) for i in range(len(instances)): print (\"prediction:\",prediction) print (\"-\"*20) print (\"Feature contributions:\") print (\"-\"*20) for c, feature in sorted(zip(contributions[i], iris.feature_names), key=lambda x: ~abs(x[0].any())): print (feature, c) ``` The code for lime ``` import sklearn import sklearn.datasets import sklearn.ensemble import numpy as np import lime import lime.lime_tabular from __future__ import print_function np.random.seed(1) from sklearn.tree import DecisionTreeClassifier iris = sklearn.datasets.load_iris() dt = DecisionTreeClassifier(random_state=42) dt.fit(iris.data, iris.target) explainer = lime.lime_tabular.LimeTabularExplainer(iris.data,
feature_names=iris.feature_names, class_names=iris.target_names, discretize_continuous=False) n = 100 exp = explainer.explain_instance(iris.data[n], dt.predict_proba, num_features=4, top_labels=2) exp.show_in_notebook(show_table=True, predict_proba=True, show_predicted_value=True, show_all=False) ``` Let's look first at the output of the tree: it did correctly say it was a virginica, but it assigned importance in the order 1) petal width (cm), then 2) petal length (cm). Now let's look at the output of lime. Yes, it also says the algorithm predicted virginica, but looking at how it made that classification, we clearly see the following: 1) petal length (cm) > petal width (cm) in lime, instead of petal length (cm) < petal width (cm) as shown by the tree; 2) where the tree gave sepal width and sepal length zero contribution, lime claims they have certain nonzero values, as shown in the uploaded images. What is happening here? The problem grows when there are 1000+ features, where every one matters for the decision.", "response":"Why is it possible for the two approaches to have different results? Lime: A short explanation of how it works, taken from their github page: Intuitively, an explanation is a local linear approximation of the model's behaviour. While the model may be very complex globally, it is easier to approximate it around the vicinity of a particular instance. While treating the model as a black box, we perturb the instance we want to explain and learn a sparse linear model around it, as an explanation. The figure below illustrates the intuition for this procedure. The model's decision function is represented by the blue\/pink background, and is clearly nonlinear. The bright red cross is the instance being explained (let's call it X). We sample instances around X, and weight them according to their proximity to X (weight here is indicated by size). We then learn a linear model (dashed line) that approximates the model well in the vicinity of X, but not necessarily globally.
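That perturb-and-fit idea can be sketched without the lime package; everything below (the perturbation scale, the exponential proximity kernel, the Ridge surrogate) is an illustrative choice, not Lime's actual implementation:

```python
import numpy as np
from sklearn.datasets import load_iris
from sklearn.linear_model import Ridge
from sklearn.tree import DecisionTreeClassifier

iris = load_iris()
black_box = DecisionTreeClassifier(random_state=42).fit(iris.data, iris.target)

rng = np.random.default_rng(0)
x0 = iris.data[100]                            # instance to explain
Z = x0 + rng.normal(scale=0.5, size=(500, 4))  # perturb around x0
pz = black_box.predict_proba(Z)[:, 2]          # probability of class virginica

# weight perturbed samples by proximity to x0
d2 = ((Z - x0) ** 2).sum(axis=1)
w = np.exp(-d2)

# fit a weighted linear surrogate; its coefficients play the role of a local explanation
surrogate = Ridge(alpha=1.0).fit(Z, pz, sample_weight=w)
for name, coef in zip(iris.feature_names, surrogate.coef_):
    print(name, round(coef, 3))
```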
There is much more detailed information in various links on the github page. treeinterpreter: An explanation of how this one works is available on http:\/\/blog.datadive.net\/interpreting-random-forests\/ (this is for regression; an example for classification, which works very similarly, can be found here). In short: suppose we have a node that compares feature F to some value and splits instances based on that. Suppose that 50% of all instances reaching that node belong to class C. Suppose we have a new instance, and it ends up getting assigned to the left child of this node, where now 80% of all instances belong to class C. Then, the contribution of feature F for this decision is computed as 0.8 - 0.5 = 0.3 (plus additional terms if there are more nodes along the path to leaf that also use feature F). Comparison: The important thing to note is that Lime is a model-independent method (not specific to Decision Trees \/ RFs), which is based on local linear approximation. Treeinterpreter, on the other hand, specifically operates in a similar manner to the Decision Tree itself, and really looks at which features are actually used in comparisons by the algorithm. So they're really fundamentally doing quite different things. Lime says \"a feature is important if we wiggle it a bit and this results in a different prediction\". Treeinterpreter says \"a feature is important if it was compared to a threshold in one of our nodes and this caused us to take a split that drastically changed our prediction\". Which one to trust? This is difficult to answer definitively. They're probably both useful in their own way. Intuitively, you may be inclined to lean towards treeinterpreter at first glance, because it was specifically created for Decision Trees. However, consider the following example: Root Node: 50% of instances class 0, 50% class 1. IF F <= 50, go left, otherwise go right. Left Child: 48% of instances class 0, 52% class 1. Subtree below this. 
Right Child: 99% of instances class 0, 1% of instances class 1. Subtree below this. This kind of setup is possible if the majority of instances go left, only some right. Now suppose we have an instance with F = 49 that got assigned to the left and ultimately assigned class 1. Treeinterpreter won't care that F was really close to ending up on the other side of the equation in the root node, and only assign a low contribution of 0.48 - 0.50 = -0.02. Lime will notice that changing F just a little bit would completely change the odds. Which one is right? That's not really clear. You could say that F was really important because if it had been only a little bit different the prediction would be different (then lime wins). You could also argue that F did not contribute to our final prediction because we hardly got any closer to a decision after inspecting its value, and still had to investigate many other features afterwards. Then treeinterpreter wins. To get a better idea here, it may help to also actually plot the learned Decision Tree itself. Then you can manually follow along its decision path and decide which features you think were important and\/or see if you can understand why both Lime and treeinterpreter say what they say.", "best_answers_score":0.8, "library_name":"scikit-learn", "question_url":"https:\/\/stackoverflow.com\/questions\/48909418\/lime-vs-treeinterpreter-for-interpreting-decision-tree", "best_answers_votes":25, "question_length":3100, "response_length":4183 }, { "question":"sklearn pipeline - how to apply different transformations on different columns I have a dataset that has a mixture of text and numbers i.e. certain columns have text only and rest have integers (or floating point numbers). I was wondering if it was possible to build a pipeline where I can for example call LabelEncoder() on the text features and MinMaxScaler() on the numbers columns. 
The examples I have seen on the web mostly point towards using LabelEncoder() on the entire dataset and not on select columns. Is this possible? If so, any pointers would be greatly appreciated.", "response":"The way I usually do it is with a FeatureUnion, using a FunctionTransformer to pull out the relevant columns. Important notes: You have to define your functions with def, since annoyingly you can't use lambda or partial in FunctionTransformer if you want to pickle your model. You need to initialize FunctionTransformer with validate=False. Something like this: ``` from sklearn.pipeline import make_union, make_pipeline from sklearn.preprocessing import FunctionTransformer, LabelEncoder, MinMaxScaler def get_text_cols(df): return df[['name', 'fruit']] def get_num_cols(df): return df[['height','age']] vec = make_union(*[ make_pipeline(FunctionTransformer(get_text_cols, validate=False), LabelEncoder()), make_pipeline(FunctionTransformer(get_num_cols, validate=False), MinMaxScaler()) ]) ```", "best_answers_score":0.8, "library_name":"scikit-learn", "question_url":"https:\/\/stackoverflow.com\/questions\/39001956\/sklearn-pipeline-how-to-apply-different-transformations-on-different-columns", "best_answers_votes":37, "question_length":579, "response_length":768 }, { "question":"Does GridSearchCV perform cross-validation? I'm currently working on a problem which compares the performance of three different machine learning algorithms on the same data-set. I divided the data-set into 70\/30 training\/testing sets and then performed grid search for the best parameters of each algorithm using GridSearchCV and X_train, y_train. First question, am I supposed to perform grid search on the training set or is it supposed to be on the whole data-set? Second question, I know that GridSearchCV uses K-fold in its implementation, does it mean that I performed cross-validation if I used the same X_train, y_train for all three algorithms I compare in the GridSearchCV?
Any answer would be appreciated, thank you.", "response":"All estimators in scikit-learn whose names end with CV perform cross-validation. But you need to keep a separate test set for measuring the performance. So you need to split your whole data into train and test sets. Forget about this test data for a while. Then pass only this train data to the grid search. GridSearch will split this train data further into train and test to tune the hyper-parameters passed to it. Finally, it fits the model on the whole train data with the best parameters found. Now you need to test this model on the test data you kept aside in the beginning. This will give you the near real-world performance of the model. If you pass the whole data into GridSearchCV, then there would be leakage of test data into parameter tuning, and then the final model may not perform that well on newer unseen data. You can look at my other answers which describe the GridSearch in more detail: Model help using Scikit-learn when using GridSearch scikit-learn GridSearchCV with multiple repetitions", "best_answers_score":0.8, "library_name":"scikit-learn", "question_url":"https:\/\/stackoverflow.com\/questions\/49160206\/does-gridsearchcv-perform-cross-validation", "best_answers_votes":52, "question_length":720, "response_length":988 }, { "question":"\"No space left on device\" error while fitting Sklearn model I'm fitting an LDA model with lots of data using scikit-learn. The relevant code looks like this: ``` lda = LatentDirichletAllocation(n_topics = n_topics, max_iter = iters, learning_method = 'online', learning_offset = offset, random_state = 0, evaluate_every = 5, n_jobs = 3, verbose = 0) lda.fit(X) ``` (I guess the only possibly relevant detail here is that I'm using multiple jobs.) After some time I'm getting a \"No space left on device\" error, even though there is plenty of space on the disk and plenty of free memory.
I tried the same code several times, on two different computers (on my local machine and on a remote server), first using python3, then using python2, and each time I ended up with the same error. If I run the same code on a smaller sample of data everything works fine. The entire stack trace: ``` Failed to save to .npy file: Traceback (most recent call last): File \"\/home\/ubuntu\/anaconda2\/lib\/python2.7\/site-packages\/sklearn\/externals\/joblib\/numpy_pickle.py\", line 271, in save obj, filename = self._write_array(obj, filename) File \"\/home\/ubuntu\/anaconda2\/lib\/python2.7\/site-packages\/sklearn\/externals\/joblib\/numpy_pickle.py\", line 231, in _write_array self.np.save(filename, array) File \"\/home\/ubuntu\/anaconda2\/lib\/python2.7\/site-packages\/numpy\/lib\/npyio.py\", line 491, in save pickle_kwargs=pickle_kwargs) File \"\/home\/ubuntu\/anaconda2\/lib\/python2.7\/site-packages\/numpy\/lib\/format.py\", line 584, in write_array array.tofile(fp) IOError: 275500 requested and 210934 written IOErrorTraceback (most recent call last) in () 7 n_jobs = 3, 8 verbose = 0) ----> 9 lda.fit(X) \/home\/ubuntu\/anaconda2\/lib\/python2.7\/site-packages\/sklearn\/decomposition\/online_lda.pyc in fit(self, X, y) 509 for idx_slice in gen_batches(n_samples, batch_size): 510 self._em_step(X[idx_slice, :], total_samples=n_samples, --> 511 batch_update=False, parallel=parallel) 512 else: 513 # batch update \/home\/ubuntu\/anaconda2\/lib\/python2.7\/site-packages\/sklearn\/decomposition\/online_lda.pyc in _em_step(self, X, total_samples, batch_update, parallel) 403 # E-step 404 _, suff_stats = self._e_step(X, cal_sstats=True, random_init=True, --> 405 parallel=parallel) 406 407 # M-step \/home\/ubuntu\/anaconda2\/lib\/python2.7\/site-packages\/sklearn\/decomposition\/online_lda.pyc in _e_step(self, X, cal_sstats, random_init, parallel) 356 self.mean_change_tol, cal_sstats, 357 random_state) --> 358 for idx_slice in gen_even_slices(X.shape[0], n_jobs)) 359 360 # merge 
result \/home\/ubuntu\/anaconda2\/lib\/python2.7\/site-packages\/sklearn\/externals\/joblib\/parallel.pyc in __call__(self, iterable) 808 # consumption. 809 self._iterating = False --> 810 self.retrieve() 811 # Make sure that we get a last message telling us we are done 812 elapsed_time = time.time() - self._start_time \/home\/ubuntu\/anaconda2\/lib\/python2.7\/site-packages\/sklearn\/externals\/joblib\/parallel.pyc in retrieve(self) 725 job = self._jobs.pop(0) 726 try: --> 727 self._output.extend(job.get()) 728 except tuple(self.exceptions) as exception: 729 # Stop dispatching any new job in the async callback thread \/home\/ubuntu\/anaconda2\/lib\/python2.7\/multiprocessing\/pool.pyc in get(self, timeout) 565 return self._value 566 else: --> 567 raise self._value 568 569 def _set(self, i, obj): IOError: [Errno 28] No space left on device ```", "response":"Had the same problem with LatentDirichletAllocation. It seems that you are running out of shared memory (\/dev\/shm when you run df -h). Try setting the JOBLIB_TEMP_FOLDER environment variable to something different: e.g., to \/tmp. In my case this solved the problem. Or just increase the size of the shared memory, if you have the appropriate rights for the machine you are training the LDA on.
In the example below, ``` pipe = Pipeline([ ('scale', StandardScaler()), ('reduce_dims', PCA(n_components=4)), ('clf', SVC(kernel = 'linear', C = 1))]) param_grid = dict(reduce_dims__n_components=[4,6,8], clf__C=np.logspace(-4, 1, 6), clf__kernel=['rbf','linear']) grid = GridSearchCV(pipe, param_grid=param_grid, cv=3, n_jobs=1, verbose=2) grid.fit(X_train, y_train) print(grid.score(X_test, y_test)) ``` I am using StandardScaler(), is this the correct way to apply it to the test set as well?", "response":"Yes, this is the right way to do this, but there is a small mistake in your code. Let me break this down for you. When you use the StandardScaler as a step inside a Pipeline, then scikit-learn will internally do the job for you. What happens can be described as follows: Step 0: The data are split into TRAINING data and TEST data according to the cv parameter that you specified in the GridSearchCV. Step 1: the scaler is fitted on the TRAINING data Step 2: the scaler transforms the TRAINING data Step 3: the models are fitted\/trained using the transformed TRAINING data Step 4: the scaler is used to transform the TEST data Step 5: the trained models predict using the transformed TEST data Note: You should be using grid.fit(X, y) and NOT grid.fit(X_train, y_train) because the GridSearchCV will automatically split the data into training and testing data (this happens internally).
Use something like this: ``` from sklearn.pipeline import Pipeline from sklearn.svm import SVC from sklearn.preprocessing import StandardScaler from sklearn.model_selection import GridSearchCV from sklearn.decomposition import PCA pipe = Pipeline([ ('scale', StandardScaler()), ('reduce_dims', PCA(n_components=4)), ('clf', SVC(kernel = 'linear', C = 1))]) param_grid = dict(reduce_dims__n_components=[4,6,8], clf__C=np.logspace(-4, 1, 6), clf__kernel=['rbf','linear']) grid = GridSearchCV(pipe, param_grid=param_grid, cv=3, n_jobs=1, verbose=2, scoring= 'accuracy') grid.fit(X, y) print(grid.best_score_) print(grid.cv_results_) ``` Once you run this code (when you call grid.fit(X, y)), you can access the outcome of the grid search in the result object returned from grid.fit(). The best_score_ member provides access to the best score observed during the optimization procedure and the best_params_ describes the combination of parameters that achieved the best results. IMPORTANT EDIT 1: if you want to keep a validation dataset of the original dataset use this: ``` X_for_gridsearch, X_future_validation, y_for_gridsearch, y_future_validation = train_test_split(X, y, test_size=0.15, random_state=1) ``` Then use: ``` grid = GridSearchCV(pipe, param_grid=param_grid, cv=3, n_jobs=1, verbose=2, scoring= 'accuracy') grid.fit(X_for_gridsearch, y_for_gridsearch) ```", "best_answers_score":0.8, "library_name":"scikit-learn", "question_url":"https:\/\/stackoverflow.com\/questions\/51459406\/how-to-apply-standardscaler-in-pipeline-in-scikit-learn-sklearn", "best_answers_votes":43, "question_length":558, "response_length":2249 }, { "question":"Difference between cosine similarity and cosine distance It looks like scipy.spatial.distance.cdist cosine similariy distance: link to cos distance 1 ``` 1 - u*v\/(||u||||v||) ``` is different from sklearn.metrics.pairwise.cosine_similarity which is link to cos similarity 2 ``` u*v\/||u||||v|| ``` Does anybody know reason for different 
definitions?", "response":"Good question; yes, these are 2 different things, but they are connected by the following equation: Cosine_distance = 1 - cosine_similarity Why? Usually, people use the cosine similarity as a similarity metric between vectors. Now, the distance can be defined as 1-cos_similarity. The intuition behind this is that if 2 vectors are perfectly the same then similarity is 1 (angle=0) and thus, distance is 0 (1-1=0). Similarly you can define the cosine distance for the resulting similarity value range. Cosine similarity range: −1 meaning exactly opposite, 1 meaning exactly the same, 0 indicating orthogonality. References: Scipy wolfram", "best_answers_score":0.8, "library_name":"scikit-learn", "question_url":"https:\/\/stackoverflow.com\/questions\/58381092\/difference-between-cosine-similarity-and-cosine-distance", "best_answers_votes":44, "question_length":348, "response_length":630 }, { "question":"Multiple-output Gaussian Process regression in scikit-learn I am using scikit-learn for Gaussian process regression (GPR) to predict data.
My training data are as follows: ```python x_train = np.array([[0,0],[2,2],[3,3]]) #2-D cartesian coordinate points y_train = np.array([[200,250, 155],[321,345,210],[417,445,851]]) #observed output from three different datasources at respective input data points (x_train) ``` The test points (2-D) where mean and variance\/standard deviation need to be predicted are: ```python xvalues = np.array([0,1,2,3]) yvalues = np.array([0,1,2,3]) x,y = np.meshgrid(xvalues,yvalues) #Total 16 locations (2-D) positions = np.vstack([x.ravel(), y.ravel()]) x_test = (np.array(positions)).T ``` Now, after running the GPR (GaussianProcessRegressor) fit (Here, the product of ConstantKernel and RBF is used as Kernel in GaussianProcessRegressor), mean and variance\/standard deviation can be predicted by the following line of code: ```python y_pred_test, sigma = gp.predict(x_test, return_std =True) ``` While printing the predicted mean (y_pred_test) and variance (sigma), I get the following output printed in the console: In the predicted values (mean), the 'nested array' with three objects inside the inner array is printed. It can be presumed that the inner arrays are the predicted mean values of each data source at each 2-D test point location. However, the printed variance contains only a single array with 16 objects (perhaps for 16 test location points). I know that the variance provides an indication of the uncertainty of the estimation. Hence, I was expecting the predicted variance for each data source at each test point. Is my expectation wrong? How can I get the predicted variance for each data source at each test point? Is it due to wrong code?
As a prelude, let's make clear that the concepts of variance & standard deviation are defined only for scalar variables; for vector variables (like your own 3d output here), the concept of variance is no longer meaningful, and the covariance matrix is used instead (Wikipedia, Wolfram). Continuing on the prelude, the shape of your sigma is indeed as expected according to the scikit-learn docs on the predict method (i.e. there is no coding error in your case): Returns: y_mean : array, shape = (n_samples, [n_output_dims]) Mean of predictive distribution a query points y_std : array, shape = (n_samples,), optional Standard deviation of predictive distribution at query points. Only returned when return_std is True. y_cov : array, shape = (n_samples, n_samples), optional Covariance of joint predictive distribution a query points. Only returned when return_cov is True. Combined with my previous remark about the covariance matrix, the first choice would be to try the predict function with the argument return_cov=True instead (since asking for the variance of a vector variable is meaningless); but again, this will lead to a 16x16 matrix, instead of a 3x3 one (the expected shape of a covariance matrix for 3 output variables)... Having clarified these details, let's proceed to the essence of the issue. At the heart of your issue lies something rarely mentioned (or even hinted at) in practice and in relevant tutorials: Gaussian Process regression with multiple outputs is highly non-trivial and still a field of active research. Arguably, scikit-learn cannot really handle the case, despite the fact that it will superficially appear to do so, without issuing at least some relevant warning. 
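Before looking for corroboration, the shapes quoted above can be checked directly. This is a small sketch with random toy data of my own (not the asker's); note that newer scikit-learn versions may return per-target uncertainties, so only the leading dimensions matter here: ```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import ConstantKernel, RBF

rng = np.random.RandomState(0)
x_train = np.array([[0., 0.], [2., 2.], [3., 3.]])
y_train = rng.rand(3, 3)                      # 3 samples, 3-D output
gp = GaussianProcessRegressor(kernel=ConstantKernel() * RBF()).fit(x_train, y_train)

x_test = rng.rand(16, 2)                      # 16 query points
y_mean, y_std = gp.predict(x_test, return_std=True)
_, y_cov = gp.predict(x_test, return_cov=True)
print(y_mean.shape)                           # (16, 3): one mean per output dimension
print(y_std.shape, y_cov.shape)               # neither has the 3x3 per-point covariance structure
```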
Let's look for some corroboration of this claim in the recent scientific literature: Gaussian process regression with multiple response variables (2015) - quoting (emphasis mine): most GPR implementations model only a single response variable, due to the difficulty in the formulation of covariance function for correlated multiple response variables, which describes not only the correlation between data points, but also the correlation between responses. In the paper we propose a direct formulation of the covariance function for multi-response GPR, based on the idea that [...] Despite the high uptake of GPR for various modelling tasks, there still exists some outstanding issues with the GPR method. Of particular interest in this paper is the need to model multiple response variables. Traditionally, one response variable is treated as a Gaussian process, and multiple responses are modelled independently without considering their correlation. This pragmatic and straightforward approach was taken in many applications (e.g. [7, 26, 27]), though it is not ideal. A key to modelling multi-response Gaussian processes is the formulation of covariance function that describes not only the correlation between data points, but also the correlation between responses. Remarks on multi-output Gaussian process regression (2018) - quoting (emphasis in the original): Typical GPs are usually designed for single-output scenarios wherein the output is a scalar. However, the multi-output problems have arisen in various fields, [...]. Suppose that we attempt to approximate T outputs {f(t}, 1 \u2264t \u2264T , one intuitive idea is to use the single-output GP (SOGP) to approximate them individually using the associated training data D(t) = { X(t), y(t) }, see Fig. 1(a). Considering that the outputs are correlated in some way, modeling them individually may result in the loss of valuable information. 
Hence, an increasing diversity of engineering applications are embarking on the use of multi-output GP (MOGP), which is conceptually depicted in Fig. 1(b), for surrogate modeling. The study of MOGP has a long history and is known as multivariate Kriging or Co-Kriging in the geostatistic community; [...] The MOGP handles problems with the basic assumption that the outputs are correlated in some way. Hence, a key issue in MOGP is to exploit the output correlations such that the outputs can leverage information from one another in order to provide more accurate predictions in comparison to modeling them individually. Physics-Based Covariance Models for Gaussian Processes with Multiple Outputs (2013) - quoting: Gaussian process analysis of processes with multiple outputs is limited by the fact that far fewer good classes of covariance functions exist compared with the scalar (single-output) case. [...] The difficulty of finding \u201cgood\u201d covariance models for multiple outputs can have important practical consequences. An incorrect structure of the covariance matrix can significantly reduce the efficiency of the uncertainty quantification process, as well as the forecast efficiency in kriging inferences [16]. Therefore, we argue, the covariance model may play an even more profound role in co-kriging [7, 17]. This argument applies when the covariance structure is inferred from data, as is typically the case. Hence, my understanding, as I said, is that sckit-learn is not really capable of handling such cases, despite the fact that something like that is not mentioned or hinted at in the documentation (it may be interesting to open a relevant issue at the project page). This seems to be the conclusion in this relevant SO thread, too, as well as in this CrossValidated thread regarding the GPML (Matlab) toolbox. 
Having said that, and apart from reverting to the choice of simply modeling each output separately (not an invalid choice, as long as you keep in mind that you may be throwing away useful information from the correlation between your 3-D output elements), there is at least one Python toolbox which seems capable of modeling multiple-output GPs, namely the runlmc (paper, code, documentation).", "best_answers_score":0.8, "library_name":"scikit-learn", "question_url":"https:\/\/stackoverflow.com\/questions\/50185399\/multiple-output-gaussian-process-regression-in-scikit-learn", "best_answers_votes":38, "question_length":1800, "response_length":5882 }, { "question":"Want to know the diff among pd.factorize, pd.get_dummies, sklearn.preprocessing.LabelEncoder and OneHotEncoder [closed] All four functions seem really similar to me. In some situations some of them might give the same result, some not. Any help will be thankfully appreciated! Now I know and I assume that internally, factorize and LabelEncoder work the same way and have no big differences in terms of results. I am not sure whether they will take up similar time with large magnitudes of data. get_dummies and OneHotEncoder will yield the same result but OneHotEncoder can only handle numbers while get_dummies will take all kinds of input. get_dummies will generate new column names automatically for each column input, but OneHotEncoder will not (it rather will assign new column names 1,2,3....). So get_dummies is better in all respects. Please correct me if I am wrong!
Thank you!", "response":"These four encoders can be split into two categories: Encode labels into categorical variables: Pandas factorize and scikit-learn LabelEncoder. The result will have 1 dimension. Encode categorical variable into dummy\/indicator (binary) variables: Pandas get_dummies and scikit-learn OneHotEncoder. The result will have n dimensions, one per distinct value of the encoded categorical variable. The main difference between pandas and scikit-learn encoders is that scikit-learn encoders are made to be used in scikit-learn pipelines with fit and transform methods. Encode labels into categorical variables Pandas factorize and scikit-learn LabelEncoder belong to the first category. They can be used to create categorical variables, for example to transform characters into numbers. ``` from sklearn import preprocessing import pandas as pd from pandas import DataFrame # Test data df = DataFrame(['A', 'B', 'B', 'C'], columns=['Col']) df['Fact'] = pd.factorize(df['Col'])[0] le = preprocessing.LabelEncoder() df['Lab'] = le.fit_transform(df['Col']) print(df) # Col Fact Lab # 0 A 0 0 # 1 B 1 1 # 2 B 1 1 # 3 C 2 2 ``` Encode categorical variable into dummy\/indicator (binary) variables Pandas get_dummies and scikit-learn OneHotEncoder belong to the second category. They can be used to create binary variables. OneHotEncoder can only be used with categorical integers while get_dummies can be used with other types of variables.
``` df = DataFrame(['A', 'B', 'B', 'C'], columns=['Col']) df = pd.get_dummies(df) print(df) # Col_A Col_B Col_C # 0 1.0 0.0 0.0 # 1 0.0 1.0 0.0 # 2 0.0 1.0 0.0 # 3 0.0 0.0 1.0 from sklearn.preprocessing import OneHotEncoder, LabelEncoder df = DataFrame(['A', 'B', 'B', 'C'], columns=['Col']) # We need to transform the characters into integers first in order to use the OneHotEncoder le = LabelEncoder() df['Col'] = le.fit_transform(df['Col']) enc = OneHotEncoder() df = DataFrame(enc.fit_transform(df).toarray()) print(df) # 0 1 2 # 0 1.0 0.0 0.0 # 1 0.0 1.0 0.0 # 2 0.0 1.0 0.0 # 3 0.0 0.0 1.0 ``` I've also written a more detailed post based on this answer.", "best_answers_score":0.8, "library_name":"scikit-learn", "question_url":"https:\/\/stackoverflow.com\/questions\/40336502\/want-to-know-the-diff-among-pd-factorize-pd-get-dummies-sklearn-preprocessing", "best_answers_votes":39, "question_length":1288, "response_length":2042 }, { "question":"AttributeError: LinearRegression object has no attribute 'coef_' I've been attempting to fit this data by a Linear Regression, following a tutorial on bigdataexaminer. Everything was working fine up until this point. I imported LinearRegression from sklearn, and printed the number of coefficients just fine. This was the code before I attempted to grab the coefficients from the console.
``` import numpy as np import pandas as pd import scipy.stats as stats import matplotlib.pyplot as plt import sklearn from sklearn.datasets import load_boston from sklearn.linear_model import LinearRegression boston = load_boston() bos = pd.DataFrame(boston.data) bos.columns = boston.feature_names bos['PRICE'] = boston.target X = bos.drop('PRICE', axis = 1) lm = LinearRegression() ``` After I had all this set up I ran the following command, and it returned the proper output: ``` In [68]: print('Number of coefficients:', len(lm.coef_)) Number of coefficients: 13 ``` However, now if I ever try to print this same line again, or use 'lm.coef_', it tells me coef_ isn't an attribute of LinearRegression, right after I JUST used it successfully, and I didn't touch any of the code before I tried it again. ``` In [70]: print('Number of coefficients:', len(lm.coef_)) Traceback (most recent call last): File \"\", line 1, in print('Number of coefficients:', len(lm.coef_)) AttributeError: 'LinearRegression' object has no attribute 'coef_' ```", "response":"The coef_ attribute is created when the fit() method is called.
Before that, it will be undefined: ``` >>> import numpy as np >>> import pandas as pd >>> from sklearn.datasets import load_boston >>> from sklearn.linear_model import LinearRegression >>> boston = load_boston() >>> lm = LinearRegression() >>> lm.coef_ --------------------------------------------------------------------------- AttributeError Traceback (most recent call last) in () 7 8 lm = LinearRegression() ----> 9 lm.coef_ AttributeError: 'LinearRegression' object has no attribute 'coef_' ``` If we call fit(), the coefficients will be defined: ``` >>> lm.fit(boston.data, boston.target) >>> lm.coef_ array([ -1.07170557e-01, 4.63952195e-02, 2.08602395e-02, 2.68856140e+00, -1.77957587e+01, 3.80475246e+00, 7.51061703e-04, -1.47575880e+00, 3.05655038e-01, -1.23293463e-02, -9.53463555e-01, 9.39251272e-03, -5.25466633e-01]) ``` My guess is that somehow you forgot to call fit() when you ran the problematic line.", "best_answers_score":0.8, "library_name":"scikit-learn", "question_url":"https:\/\/stackoverflow.com\/questions\/38646040\/attributeerror-linearregression-object-has-no-attribute-coef", "best_answers_votes":36, "question_length":1431, "response_length":984 }, { "question":"How can I plot the probability density function for a fitted Gaussian mixture model under scikit-learn? I'm struggling with a rather simple task. I have a vector of floats to which I would like to fit a Gaussian mixture model with two Gaussian kernels: ``` from sklearn.mixture import GMM gmm = GMM(n_components=2) gmm.fit(values) # values is numpy vector of floats ``` I would now like to plot the probability density function for the mixture model I've created, but I can't seem to find any documentation on how to do this. How should I best proceed? Edit: Here is the vector of data I'm fitting. 
And below is a more detailed example of how I'm doing things: ``` from sklearn.mixture import GMM from matplotlib.pyplot import * import numpy as np try: import cPickle as pickle except: import pickle with open('\/path\/to\/kde.pickle') as f: # open the data file provided above kde = pickle.load(f) gmm = GMM(n_components=2) gmm.fit(kde) x = np.linspace(np.min(kde), np.max(kde), len(kde)) # Plot the data to which the GMM is being fitted figure() plot(x, kde, color='blue') ``` ``` # My half-baked attempt at replicating the scipy example fit = gmm.score_samples(x)[0] plot(x, fit, color='red') ``` The fitted curve doesn't look anything like what I'd expect. It doesn't even seem Gaussian, which is a bit strange given it was produced by a Gaussian process. Am I crazy?", "response":"I followed some examples mentioned in this thread and others and managed to get closer to the solution, but the final probability density function does not integrate to one. I guess that I will post the question for this in another thread. ``` import numpy as np import matplotlib.pyplot as plt from sklearn.mixture import GaussianMixture np.random.seed(1) mus = np.array([[0.2], [0.8]]) sigmas = np.array([[0.1], [0.1]]) ** 2 gmm = GaussianMixture(2) gmm.means_ = mus gmm.covars_ = sigmas gmm.weights_ = np.array([0.5, 0.5]) #Fit the GMM with random data from the correspondent gaussians gaus_samples_1 = np.random.normal(mus[0], sigmas[0], 10).reshape(10,1) gaus_samples_2 = np.random.normal(mus[1], sigmas[1], 10).reshape(10,1) fit_samples = np.concatenate((gaus_samples_1, gaus_samples_2)) gmm.fit(fit_samples) fig = plt.figure() ax = fig.add_subplot(111) x = np.linspace(0, 1, 1000).reshape(1000,1) logprob = gmm.score_samples(x) pdf = np.exp(logprob) #print np.max(pdf) -> 19.8409464401 !?
ax.plot(x, pdf, '-k') plt.show() ```", "best_answers_score":0.8, "library_name":"scikit-learn", "question_url":"https:\/\/stackoverflow.com\/questions\/23609756\/how-can-i-plot-the-probability-density-function-for-a-fitted-gaussian-mixture-mo", "best_answers_votes":10, "question_length":1368, "response_length":1034 }, { "question":"Sklearn preprocessing - PolynomialFeatures - How to keep column names\/headers of the output array \/ dataframe TLDR: How to get headers for the output numpy array from the sklearn.preprocessing.PolynomialFeatures() function? Let's say I have the following code... ``` import pandas as pd import numpy as np from sklearn import preprocessing as pp a = np.ones(3) b = np.ones(3) * 2 c = np.ones(3) * 3 input_df = pd.DataFrame([a,b,c]) input_df = input_df.T input_df.columns=['a', 'b', 'c'] input_df a b c 0 1 2 3 1 1 2 3 2 1 2 3 poly = pp.PolynomialFeatures(2) output_nparray = poly.fit_transform(input_df) print output_nparray [[ 1. 1. 2. 3. 1. 2. 3. 4. 6. 9.] [ 1. 1. 2. 3. 1. 2. 3. 4. 6. 9.] [ 1. 1. 2. 3. 1. 2. 3. 4. 6. 9.]] ``` How can I get that 3x10 matrix\/ output_nparray to carry over the a,b,c labels how they relate to the data above?", "response":"scikit-learn 0.18 added a nifty get_feature_names() method! 
``` >> input_df.columns Index(['a', 'b', 'c'], dtype='object') >> poly.fit_transform(input_df) array([[ 1., 1., 2., 3., 1., 2., 3., 4., 6., 9.], [ 1., 1., 2., 3., 1., 2., 3., 4., 6., 9.], [ 1., 1., 2., 3., 1., 2., 3., 4., 6., 9.]]) >> poly.get_feature_names(input_df.columns) ['1', 'a', 'b', 'c', 'a^2', 'a b', 'a c', 'b^2', 'b c', 'c^2'] ``` Note you have to provide it with the column names, since sklearn doesn't read them off from the DataFrame by itself.", "best_answers_score":0.8, "library_name":"scikit-learn", "question_url":"https:\/\/stackoverflow.com\/questions\/36728287\/sklearn-preprocessing-polynomialfeatures-how-to-keep-column-names-headers-of", "best_answers_votes":37, "question_length":842, "response_length":518 }, { "question":"Cross Validation in Keras I'm implementing a Multilayer Perceptron in Keras and using scikit-learn to perform cross-validation. For this, I was inspired by the code found in the issue Cross Validation in Keras ```python from sklearn.cross_validation import StratifiedKFold def load_data(): # load your data using this function def create_model(): # create your model using this function def train_and_evaluate__model(model, data[train], labels[train], data[test], labels[test]): # fit and evaluate here. if __name__ == \"__main__\": X, Y = load_data() kFold = StratifiedKFold(n_splits=10) for train, test in kFold.split(X, Y): model = None model = create_model() train_evaluate(model, X[train], Y[train], X[test], Y[test]) ``` In my studies on neural networks, I learned that the knowledge representation of a neural network is in the synaptic weights, and during the network training process the weights are updated to reduce the network error rate and improve its performance. (In my case, I'm using Supervised Learning.) For better training and assessment of neural network performance, a commonly used method is cross-validation, which returns partitions of the data set for training and evaluation of the model.
My doubt is... In this code snippet: ```python for train, test in kFold.split(X, Y): model = None model = create_model() train_evaluate(model, X[train], Y[train], X[test], Y[test]) ``` Do we define, train and evaluate a new neural net for each of the generated partitions? If my goal is to fine-tune the network for the entire dataset, why is it not correct to define a single neural network and train it with the generated partitions? That is, why is this piece of code like this? ```python for train, test in kFold.split(X, Y): model = None model = create_model() train_evaluate(model, X[train], Y[train], X[test], Y[test]) ``` and not like this? ```python model = None model = create_model() for train, test in kFold.split(X, Y): train_evaluate(model, X[train], Y[train], X[test], Y[test]) ``` Is my understanding of how the code works wrong? Or my theory?", "response":"\"If my goal is to fine-tune the network for the entire dataset\": it is not clear what you mean by \"fine-tune\", or even what exactly is your purpose for performing cross-validation (CV); in general, CV serves one of the following purposes: Model selection (choose the values of hyperparameters) Model assessment Since you don't define any search grid for hyperparameter selection in your code, it would seem that you are using CV in order to get the expected performance of your model (error, accuracy etc). Anyway, for whatever reason you are using CV, the first snippet is the correct one; your second snippet ``` model = None model = create_model() for train, test in kFold.split(X, Y): train_evaluate(model, X[train], Y[train], X[test], Y[test]) ``` will train your model sequentially over the different partitions (i.e. train on partition #1, then continue training on partition #2 etc), which essentially is just training on your whole data set, and it is certainly not cross-validation...
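To make the first (correct) pattern concrete, here is a runnable sketch; a scikit-learn classifier stands in for create_model(), since the point is the fold structure rather than the particular model: ```python
import numpy as np
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import StratifiedKFold

X, Y = load_iris(return_X_y=True)
scores = []
for train, test in StratifiedKFold(n_splits=10).split(X, Y):
    model = LogisticRegression(max_iter=1000)   # a FRESH model for every fold
    model.fit(X[train], Y[train])
    scores.append(model.score(X[test], Y[test]))
print(np.mean(scores), np.std(scores))          # a performance estimate, not a final model
```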
That said, a final step after the CV which is often only implied (and frequently missed by beginners) is that, after you are satisfied with your chosen hyperparameters and\/or model performance as given by your CV procedure, you go back and train again your model, this time with the entire available data.", "best_answers_score":0.8, "library_name":"scikit-learn", "question_url":"https:\/\/stackoverflow.com\/questions\/48085182\/cross-validation-in-keras", "best_answers_votes":25, "question_length":2083, "response_length":1297 }, { "question":"Predicting new data using sklearn after standardizing the training data I am using Sklearn to build a linear regression model (or any other model) with the following steps: X_train and Y_train are the training data Standardize the training data ``` X_train = preprocessing.scale(X_train) ``` fit the model ``` model.fit(X_train, Y_train) ``` Once the model is fit with scaled data, how can I predict with new data (either one or more data points at a time) using the fit model? What I am using is Scale the data ``` NewData_Scaled = preprocessing.scale(NewData) ``` Predict the data ``` PredictedTarget = model.predict(NewData_Scaled) ``` I think I am missing a transformation function with preprocessing.scale so that I can save it with the trained model and then apply it on the new unseen data? any help please.", "response":"Take a look at these docs. You can use the StandardScaler class of the preprocessing module to remember the scaling of your training data so you can apply it to future values. ``` from sklearn.preprocessing import StandardScaler X_train = np.array([[ 1., -1., 2.], [ 2., 0., 0.], [ 0., 1., -1.]]) scaler = StandardScaler().fit(X_train) ``` scaler has calculated the mean and scaling factor to standardize each feature. ``` >>>scaler.mean_ array([ 1. ..., 0. 
..., 0.33...]) >>>scaler.scale_ array([ 0.81..., 0.81..., 1.24...]) ``` To apply it to a dataset: ``` import numpy as np X_train_scaled = scaler.transform(X_train) new_data = np.array([[-1., 1., 0.]]) # note the 2-D shape: one sample with three features new_data_scaled = scaler.transform(new_data) >>>new_data_scaled array([[-2.44..., 1.22..., -0.26...]]) ```", "best_answers_score":0.8, "library_name":"scikit-learn", "question_url":"https:\/\/stackoverflow.com\/questions\/38780302\/predicting-new-data-using-sklearn-after-standardizing-the-training-data", "best_answers_votes":39, "question_length":814, "response_length":763 }, { "question":"sklearn.cross_validation.StratifiedShuffleSplit - error: \"indices are out-of-bounds\" I was trying to split the sample dataset using Scikit-learn's Stratified Shuffle Split. I followed the example shown on the Scikit-learn documentation here ``` import pandas as pd import numpy as np # UCI's wine dataset wine = pd.read_csv(\"https:\/\/s3.amazonaws.com\/demo-datasets\/wine.csv\") # separate target variable from dataset target = wine['quality'] data = wine.drop('quality',axis = 1) # Stratified Split of train and test data from sklearn.cross_validation import StratifiedShuffleSplit sss = StratifiedShuffleSplit(target, n_iter=3, test_size=0.2) for train_index, test_index in sss: xtrain, xtest = data[train_index], data[test_index] ytrain, ytest = target[train_index], target[test_index] # Check target series for distribution of classes ytrain.value_counts() ytest.value_counts() ``` However, upon running this script, I get the following error: ``` IndexError: indices are out-of-bounds ``` Could someone please point out what I am doing wrong here? Thanks!", "response":"You're running into the different conventions for Pandas DataFrame indexing versus NumPy ndarray indexing. The arrays train_index and test_index are collections of row indices.
But data is a Pandas DataFrame object, and when you use a single index into that object, as in data[train_index], Pandas is expecting train_index to contain column labels rather than row indices. You can either convert the dataframe to a NumPy array, using .values: ``` data_array = data.values for train_index, test_index in sss: xtrain, xtest = data_array[train_index], data_array[test_index] ytrain, ytest = target[train_index], target[test_index] ``` or use the Pandas .iloc accessor: ``` for train_index, test_index in sss: xtrain, xtest = data.iloc[train_index], data.iloc[test_index] ytrain, ytest = target[train_index], target[test_index] ``` I tend to favour the second approach, since it gives xtrain and xtest of type DataFrame rather than ndarray, and so keeps the column labels.", "best_answers_score":0.8, "library_name":"scikit-learn", "question_url":"https:\/\/stackoverflow.com\/questions\/30023927\/sklearn-cross-validation-stratifiedshufflesplit-error-indices-are-out-of-bou", "best_answers_votes":47, "question_length":1056, "response_length":968 }, { "question":"Is scikit-learn suitable for big data tasks? I'm working on a TREC task involving use of machine learning techniques, where the dataset consists of more than 5 terabytes of web documents, from which bag-of-words vectors are planned to be extracted. scikit-learn has a nice set of functionalities that seems to fit my need, but I don't know whether it is going to scale well to handle big data. For example, is HashingVectorizer able to handle 5 terabytes of documents, and is it feasible to parallelize it? Moreover, what are some alternatives out there for large-scale machine learning tasks?", "response":"HashingVectorizer will work if you iteratively chunk your data into batches of 10k or 100k documents that fit in memory for instance. You can then pass the batch of transformed documents to a linear classifier that supports the partial_fit method (e.g. 
SGDClassifier or PassiveAggressiveClassifier) and then iterate on new batches. You can start scoring the model on a held-out validation set (e.g. 10k documents) as you go to monitor the accuracy of the partially trained model without waiting for having seen all the samples. You can also do this in parallel on several machines on partitions of the data and then average the resulting coef_ and intercept_ attributes to get a final linear model for the whole dataset. I discuss this in this talk I gave in March 2013 at PyData: http:\/\/vimeo.com\/63269736 There is also sample code in this tutorial on parallelizing scikit-learn with IPython.parallel taken from: https:\/\/github.com\/ogrisel\/parallel_ml_tutorial", "best_answers_score":0.8, "library_name":"scikit-learn", "question_url":"https:\/\/stackoverflow.com\/questions\/17017878\/is-scikit-learn-suitable-for-big-data-tasks", "best_answers_votes":47, "question_length":593, "response_length":955 }, { "question":"Macbook m1 and python libraries [closed] Is the new MacBook M1 suitable for Data Science? Do Data Science python libraries such as pandas, numpy, sklearn etc work on the MacBook M1 (Apple Silicon) chip, and how fast compared to the previous generation Intel-based MacBooks?", "response":"This GitHub repository has lots of useful information about the Apple M1 chip and data science in Python https:\/\/github.com\/neurolabusc\/AppleSiliconForNeuroimaging. I have included representative quotes below.
This repository focuses on software for brain imaging analysis, but the takeaways are broad. Updated on 27 September 2021. TL;DR Unless you are a developer, I would strongly discourage scientists from purchasing an Apple Silicon computer in the short term. Productive work will require core tools to be ported. In the longer term, this architecture could have a profound impact on science. In particular if Apple develops servers that exploit the remarkable power efficiency of their CPUs (competing with AWS Graviton) and leverage the Metal language and GPUs for compute tasks (competing with NVidia's Tesla products and CUDA language). Limitations facing Apple Silicon The infrastructure scientists depend on is not yet available for this architecture. Here are some of the short term limitations: Native R can use the unstable R-devel 4.1. However, RStudio will require gdb. Julia does not yet natively support Apple Silicon. Python natively supports Apple Silicon. However, some modules have issues or are slow. See the NiBabel section below. Scientific modules of Python, R, and Julia require a Fortran compiler, which is currently only available in experimental form. While Apple's C Clang compiler generates fast native code, many scientific tools will need to wait until gcc and gFortran compilers are available. Tools like VirtualBox, VMware Fusion, Boot Camp and Parallels do not yet support Apple Silicon. Many users rely on these tools for using Windows and Linux programs on their macOS computers. Docker can support Apple Silicon. However, attempts to run Intel-based containers on Apple Silicon machines can crash as QEMU sometimes fails to run the container. These containers are popular with many neuroimaging tools. Homebrew 3.0 supports Apple Silicon. However, many homebrew components do not support Apple Silicon. MATLAB is used by many scientific tools, including SPM. 
While Matlab works in translation, it is not yet available natively (and mex files will need to be recompiled). FSL and AFNI do not yet natively support this architecture. While code may work in translation, creating some native tools must wait for compilers and libraries to be updated. This will likely require months. The current generation M1 only has four high-performance cores. Most neuroimaging pipelines combine sequential tasks that only require a single core (where the M1 excels) as well as parallel tasks. Those parallel tasks could exploit a CPU with more cores (as shown in the pigz and niimath tests below). Bear in mind that this mixture of serial and parallel code faces Amdahl's law, with diminishing returns for extra cores. The current generation M1 has a maximum of 16 GB of RAM. Neuroimaging datasets often have large memory demands (especially multi-band accelerated functional, resting-state and diffusion datasets). In general, the M1 and Intel-based Macs have identical OpenGL compatibility, with the M1 providing better performance than previous integrated solutions. However, there are corner cases that may break OpenGL tools. Here I describe four limitations. First, OpenGL geometry shaders are not supported (there is no Metal equivalent). Second, the new retina displays support wide color with 16 bitsPerSample that can cause issues for code that assumes 32-bit RGBA textures (such as the text in this Apple example code). Third, textures can be handled differently. Fourth, use of the GL_INT_2_10_10_10_REV data type will cripple performance (tested on macOS 11.2). This is unfortunate, as Apple advocated for this datatype once upon a time. In this case, code must be changed to use the less compact GL_HALF_FLOAT, which is natively supported by the M1 GPU.
This impacts neuroimaging scientists visualizing DTI tractography where GPU resources can be overwhelmed.", "best_answers_score":0.8, "library_name":"scikit-learn", "question_url":"https:\/\/stackoverflow.com\/questions\/65095614\/macbook-m1-and-python-libraries", "best_answers_votes":45, "question_length":721, "response_length":3998 }, { "question":"sklearn GridSearchCV not using sample_weight in score function I have data with differing weights for each sample. In my application, it is important that these weights are accounted for in estimating the model and comparing alternative models. I'm using sklearn to estimate models and to compare alternative hyperparameter choices. But this unit test shows that GridSearchCV does not apply sample_weights to estimate scores. Is there a way to have sklearn use sample_weight to score the models? Unit test: ``` from __future__ import division import numpy as np from sklearn.datasets import load_iris from sklearn.ensemble import RandomForestClassifier from sklearn.metrics import log_loss from sklearn.model_selection import GridSearchCV, RepeatedKFold def grid_cv(X_in, y_in, w_in, cv, max_features_grid, use_weighting): out_results = dict() for k in max_features_grid: clf = RandomForestClassifier(n_estimators=256, criterion=\"entropy\", warm_start=False, n_jobs=-1, random_state=RANDOM_STATE, max_features=k) for train_ndx, test_ndx in cv.split(X=X_in, y=y_in): X_train = X_in[train_ndx, :] y_train = y_in[train_ndx] w_train = w_in[train_ndx] y_test = y[test_ndx] clf.fit(X=X_train, y=y_train, sample_weight=w_train) y_hat = clf.predict_proba(X=X_in[test_ndx, :]) if use_weighting: w_test = w_in[test_ndx] w_i_sum = w_test.sum() score = w_i_sum \/ w_in.sum() * log_loss(y_true=y_test, y_pred=y_hat, sample_weight=w_test) else: score = log_loss(y_true=y_test, y_pred=y_hat) results = out_results.get(k, []) results.append(score) out_results.update({k: results}) for k, v in out_results.items(): if use_weighting: mean_score = sum(v) else: 
mean_score = np.mean(v) out_results.update({k: mean_score}) best_score = min(out_results.values()) best_param = min(out_results, key=out_results.get) return best_score, best_param if __name__ == \"__main__\": RANDOM_STATE = 1337 X, y = load_iris(return_X_y=True) sample_weight = np.array([1 + 100 * (i % 25) for i in range(len(X))]) # sample_weight = np.array([1 for _ in range(len(X))]) inner_cv = RepeatedKFold(n_splits=3, n_repeats=1, random_state=RANDOM_STATE) outer_cv = RepeatedKFold(n_splits=3, n_repeats=1, random_state=RANDOM_STATE) rfc = RandomForestClassifier(n_estimators=256, criterion=\"entropy\", warm_start=False, n_jobs=-1, random_state=RANDOM_STATE) search_params = {\"max_features\": [1, 2, 3, 4]} fit_params = {\"sample_weight\": sample_weight} my_scorer = make_scorer(log_loss, greater_is_better=False, needs_proba=True, needs_threshold=False) grid_clf = GridSearchCV(estimator=rfc, scoring=my_scorer, cv=inner_cv, param_grid=search_params, refit=True, return_train_score=False, iid=False) # in this usage, the results are the same for `iid=True` and `iid=False` grid_clf.fit(X, y, **fit_params) print(\"This is the best out-of-sample score using GridSearchCV: %.6f.\" % -grid_clf.best_score_) msg = \"\"\"This is the best out-of-sample score %s weighting using grid_cv: %.6f.\"\"\" score_with_weights, param_with_weights = grid_cv(X_in=X, y_in=y, w_in=sample_weight, cv=inner_cv, max_features_grid=search_params.get( \"max_features\"), use_weighting=True) print(msg % (\"WITH\", score_with_weights)) score_without_weights, param_without_weights = grid_cv(X_in=X, y_in=y, w_in=sample_weight, cv=inner_cv, max_features_grid=search_params.get( \"max_features\"), use_weighting=False) print(msg % (\"WITHOUT\", score_without_weights)) ``` Which produces output: ``` This is the best out-of-sample score using GridSearchCV: 0.135692. This is the best out-of-sample score WITH weighting using grid_cv: 0.099367. 
This is the best out-of-sample score WITHOUT weighting using grid_cv: 0.135692. ``` Explanation: Since manually computing the loss without weighting produces the same scoring as GridSearchCV, we know that the sample weights are not being used.", "response":"sklearn version >= 1.4 and 1.4 nightly releases From sklearn 1.4 (release date somewhere around October 2023), and nightly releases already available from September 2023 which you can install following guides from here, you can use the new metadata routing mechanism. You can simply decide which objects should or should not receive metadata such as sample_weight, like the following script: ```py import numpy as np from sklearn import set_config from sklearn.datasets import load_iris from sklearn.ensemble import RandomForestClassifier from sklearn.metrics import log_loss from sklearn.model_selection import GridSearchCV, RepeatedKFold from sklearn.metrics import make_scorer set_config(enable_metadata_routing=True) if __name__ == \"__main__\": RANDOM_STATE = 1337 X, y = load_iris(return_X_y=True) sample_weight = np.array([1 + 100 * (i % 25) for i in range(len(X))]) search_params = {\"max_features\": [1, 2, 3, 4]} cv = RepeatedKFold(n_splits=3, n_repeats=1, random_state=RANDOM_STATE) rfc_weighted = RandomForestClassifier( n_estimators=256, criterion=\"entropy\", warm_start=False, n_jobs=1, random_state=RANDOM_STATE, ).set_fit_request(sample_weight=True) rfc_unweighted = RandomForestClassifier( n_estimators=256, criterion=\"entropy\", warm_start=False, n_jobs=1, random_state=RANDOM_STATE, ).set_fit_request(sample_weight=False) scorer_weighted = make_scorer( log_loss, greater_is_better=False, needs_proba=True, needs_threshold=False ).set_score_request(sample_weight=True) scorer_unweighted = make_scorer( log_loss, greater_is_better=False, needs_proba=True, needs_threshold=False ).set_score_request(sample_weight=False) for rfc, rfc_is_weighted in zip([rfc_weighted, rfc_unweighted], [True, False]): for scorer, 
scorer_is_weighted in zip( [scorer_weighted, scorer_unweighted], [True, False] ): grid_clf = GridSearchCV( estimator=rfc, scoring=scorer, cv=cv, param_grid=search_params, refit=True, return_train_score=False, ) if rfc_is_weighted or scorer_is_weighted: grid_clf.fit(X, y, sample_weight=sample_weight) else: grid_clf.fit(X, y) print( \"This is the best out-of-sample score using GridSearchCV with \" f\"(is scorer weighted: {scorer_is_weighted}), (is rfc weighted: \" f\"{rfc_is_weighted}): {-grid_clf.best_score_}\" ) ``` which will output: ``` This is the best out-of-sample score using GridSearchCV with (is scorer weighted: True), (is rfc weighted: True): 0.09180030568650309 This is the best out-of-sample score using GridSearchCV with (is scorer weighted: False), (is rfc weighted: True): 0.1225297422810374 This is the best out-of-sample score using GridSearchCV with (is scorer weighted: True), (is rfc weighted: False): 0.09064253271691491 This is the best out-of-sample score using GridSearchCV with (is scorer weighted: False), (is rfc weighted: False): 0.12187958644498716 ``` sklearn version < 1.4 GridSearchCV takes a scoring argument as input, which can be a callable. You can see the details of how to change the scoring function, and also how to pass your own scoring function here. Here's the relevant piece of code from that page for the sake of completeness: EDIT: fit_params is passed only to the fit functions, and not to the score functions. If there are parameters which are supposed to be passed to the scorer, they should be passed to make_scorer. But that still doesn't solve the issue here, since that would mean that the whole sample_weight parameter would be passed to log_loss, whereas only the part which corresponds to y_test at the time of calculating the loss should be passed. sklearn does NOT support such a thing, but you can hack your way through, using a pandas.DataFrame. The good news is, sklearn understands a DataFrame, and keeps it that way.
Which means you can exploit the index of a DataFrame as you see in the code here: ``` # more code X, y = load_iris(return_X_y=True) index = ['r%d' % x for x in range(len(y))] y_frame = pd.DataFrame(y, index=index) sample_weight = np.array([1 + 100 * (i % 25) for i in range(len(X))]) sample_weight_frame = pd.DataFrame(sample_weight, index=index) # more code def score_f(y_true, y_pred, sample_weight): return log_loss(y_true.values, y_pred, sample_weight=sample_weight.loc[y_true.index.values].values.reshape(-1), normalize=True) score_params = {\"sample_weight\": sample_weight_frame} my_scorer = make_scorer(score_f, greater_is_better=False, needs_proba=True, needs_threshold=False, **score_params) grid_clf = GridSearchCV(estimator=rfc, scoring=my_scorer, cv=inner_cv, param_grid=search_params, refit=True, return_train_score=False, iid=False) # in this usage, the results are the same for `iid=True` and `iid=False` grid_clf.fit(X, y_frame) # more code ``` As you see, the score_f uses the index of y_true to find which parts of sample_weight to use. 
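To see the index trick in isolation, here is a minimal, self-contained sketch; the toy labels, weights and index values are invented purely for illustration:

```
import pandas as pd

# Labels and weights share the same index, as in the answer's y_frame and
# sample_weight_frame.
index = ["r0", "r1", "r2", "r3"]
y_frame = pd.DataFrame([0, 1, 1, 0], index=index)
sample_weight_frame = pd.DataFrame([10.0, 1.0, 5.0, 2.0], index=index)

# Simulate what a CV split hands to the scorer: a row subset of y_frame.
y_true = y_frame.loc[["r1", "r3"]]

# Recover exactly the matching weights via the preserved index, as score_f does.
w_test = sample_weight_frame.loc[y_true.index.values].values.reshape(-1)
print(w_test.tolist())  # [1.0, 2.0]
```

Because sklearn slices the DataFrame row-wise during cross-validation, the index labels survive the split, which is what makes this lookup possible.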
For the sake of completeness, here's the whole code: ``` from __future__ import division import numpy as np from sklearn.datasets import load_iris from sklearn.ensemble import RandomForestClassifier from sklearn.metrics import log_loss from sklearn.model_selection import GridSearchCV, RepeatedKFold from sklearn.metrics import make_scorer import pandas as pd def grid_cv(X_in, y_in, w_in, cv, max_features_grid, use_weighting): out_results = dict() for k in max_features_grid: clf = RandomForestClassifier(n_estimators=256, criterion=\"entropy\", warm_start=False, n_jobs=1, random_state=RANDOM_STATE, max_features=k) for train_ndx, test_ndx in cv.split(X=X_in, y=y_in): X_train = X_in[train_ndx, :] y_train = y_in[train_ndx] w_train = w_in[train_ndx] y_test = y_in[test_ndx] clf.fit(X=X_train, y=y_train, sample_weight=w_train) y_hat = clf.predict_proba(X=X_in[test_ndx, :]) if use_weighting: w_test = w_in[test_ndx] w_i_sum = w_test.sum() score = w_i_sum \/ w_in.sum() * log_loss(y_true=y_test, y_pred=y_hat, sample_weight=w_test) else: score = log_loss(y_true=y_test, y_pred=y_hat) results = out_results.get(k, []) results.append(score) out_results.update({k: results}) for k, v in out_results.items(): if use_weighting: mean_score = sum(v) else: mean_score = np.mean(v) out_results.update({k: mean_score}) best_score = min(out_results.values()) best_param = min(out_results, key=out_results.get) return best_score, best_param #if __name__ == \"__main__\": if True: RANDOM_STATE = 1337 X, y = load_iris(return_X_y=True) index = ['r%d' % x for x in range(len(y))] y_frame = pd.DataFrame(y, index=index) sample_weight = np.array([1 + 100 * (i % 25) for i in range(len(X))]) sample_weight_frame = pd.DataFrame(sample_weight, index=index) # sample_weight = np.array([1 for _ in range(len(X))]) inner_cv = RepeatedKFold(n_splits=3, n_repeats=1, random_state=RANDOM_STATE) outer_cv = RepeatedKFold(n_splits=3, n_repeats=1, random_state=RANDOM_STATE) rfc = RandomForestClassifier(n_estimators=256, 
criterion=\"entropy\", warm_start=False, n_jobs=1, random_state=RANDOM_STATE) search_params = {\"max_features\": [1, 2, 3, 4]} def score_f(y_true, y_pred, sample_weight): return log_loss(y_true.values, y_pred, sample_weight=sample_weight.loc[y_true.index.values].values.reshape(-1), normalize=True) score_params = {\"sample_weight\": sample_weight_frame} my_scorer = make_scorer(score_f, greater_is_better=False, needs_proba=True, needs_threshold=False, **score_params) grid_clf = GridSearchCV(estimator=rfc, scoring=my_scorer, cv=inner_cv, param_grid=search_params, refit=True, return_train_score=False, iid=False) # in this usage, the results are the same for `iid=True` and `iid=False` grid_clf.fit(X, y_frame) print(\"This is the best out-of-sample score using GridSearchCV: %.6f.\" % -grid_clf.best_score_) msg = \"\"\"This is the best out-of-sample score %s weighting using grid_cv: %.6f.\"\"\" score_with_weights, param_with_weights = grid_cv(X_in=X, y_in=y, w_in=sample_weight, cv=inner_cv, max_features_grid=search_params.get( \"max_features\"), use_weighting=True) print(msg % (\"WITH\", score_with_weights)) score_without_weights, param_without_weights = grid_cv(X_in=X, y_in=y, w_in=sample_weight, cv=inner_cv, max_features_grid=search_params.get( \"max_features\"), use_weighting=False) print(msg % (\"WITHOUT\", score_without_weights)) ``` The output of the code is then: ``` This is the best out-of-sample score using GridSearchCV: 0.095439. This is the best out-of-sample score WITH weighting using grid_cv: 0.099367. This is the best out-of-sample score WITHOUT weighting using grid_cv: 0.135692. ``` EDIT 2: as the comment below says: the difference between my score and the sklearn score using this solution originates in the way that I was computing a weighted average of scores.
If you omit the weighted average portion of the code, the two outputs match to machine precision.", "best_answers_score":0.8, "library_name":"scikit-learn", "question_url":"https:\/\/stackoverflow.com\/questions\/49581104\/sklearn-gridsearchcv-not-using-sample-weight-in-score-function", "best_answers_votes":32, "question_length":3787, "response_length":8614 }, { "question":"Does GridSearchCV use predict or predict_proba, when using auc_score as score function? Does GridSearchCV use predict or predict_proba, when using auc_score as score function? The predict function generates predicted class labels, which will always result in a triangular ROC-curve. A more curved ROC-curve is obtained using the predicted class probabilities. The latter one is, as far as I know, more accurate. If so, the area under the 'curved' ROC-curve is probably best to measure classification performance within the grid search. Therefore I am curious if either the class labels or class probabilities are used for the grid search, when using the area under the ROC-curve as performance measure. I tried to find the answer in the code, but could not figure it out. Does anyone here know the answer? Thanks", "response":"To use auc_score for grid searching you really need to use predict_proba or decision_function as you pointed out. This is not possible in the 0.13 release. If you do score_func=auc_score it will use predict which doesn't make any sense. [edit]Since 0.14[\/edit] it is possible to do grid-search using auc_score, by setting the new scoring parameter to roc_auc: GridSearch(est, param_grid, scoring='roc_auc'). It will do the right thing and use predict_proba (or decision_function if predict_proba is not available). See the whats new page of the current dev version. You need to install the current master from github to get this functionality or wait until April (?) 
for 0.14.", "best_answers_score":0.8, "library_name":"scikit-learn", "question_url":"https:\/\/stackoverflow.com\/questions\/14955458\/does-gridsearchcv-use-predict-or-predict-proba-when-using-auc-score-as-score-fu", "best_answers_votes":33, "question_length":812, "response_length":676 }, { "question":"Scikit-learn: preprocessing.scale() vs preprocessing.StandardScalar() I understand that scaling means centering the mean(mean=0) and making unit variance(variance=1). But, What is the difference between preprocessing.scale(x)and preprocessing.StandardScalar() in scikit-learn?", "response":"Those are doing exactly the same, but: preprocessing.scale(x) is just a function, which transforms some data preprocessing.StandardScaler() is a class supporting the Transformer API I would always use the latter, even if i would not need inverse_transform and co. supported by StandardScaler(). Excerpt from the docs: The function scale provides a quick and easy way to perform this operation on a single array-like dataset The preprocessing module further provides a utility class StandardScaler that implements the Transformer API to compute the mean and standard deviation on a training set so as to be able to later reapply the same transformation on the testing set. This class is hence suitable for use in the early steps of a sklearn.pipeline.Pipeline", "best_answers_score":0.8, "library_name":"scikit-learn", "question_url":"https:\/\/stackoverflow.com\/questions\/46257627\/scikit-learn-preprocessing-scale-vs-preprocessing-standardscalar", "best_answers_votes":34, "question_length":276, "response_length":758 }, { "question":"How to correlate an Ordinal Categorical column I have a DataFrame, df, with a non-numerical column CatColumn. ``` A B CatColumn 0 381.1396 7.343921 Medium 1 481.3268 6.786945 Medium 2 263.3766 7.628746 High 3 177.2400 5.225647 Medium-High ``` I want to include CatColumn in the correlation analysis with other columns in the Dataframe. 
I tried DataFrame.corr but it does not include columns with nominal values in the correlation analysis.", "response":"I am going to strongly disagree with the other comments. They miss the main point of correlation: how much does variable 1 increase or decrease as variable 2 increases or decreases? So in the very first place, order of the ordinal variable must be preserved during factorization\/encoding. If you alter the order of variables, correlation will change completely. If you are building a tree-based method, this is a non-issue, but for a correlation analysis, special attention must be paid to preservation of order in an ordinal variable. Let me make my argument reproducible. A and B are numeric, C is ordinal categorical in the following table, which is intentionally slightly altered from the one in the question. ``` import pandas as pd from io import StringIO rawText = StringIO(\"\"\" A B C 0 100.1396 1.343921 Medium 1 105.3268 1.786945 Medium 2 200.3766 9.628746 High 3 150.2400 4.225647 Medium-High \"\"\") myData = pd.read_csv(rawText, sep = \"\\s+\") ``` Notice: As C moves from Medium to Medium-High to High, both A and B increase monotonically. Hence we should see strong correlations between tuples (C,A) and (C,B). Let's reproduce the two proposed answers: ``` In[226]: myData.assign(C=myData.C.astype('category').cat.codes).corr() Out[226]: A B C A 1.000000 0.986493 -0.438466 B 0.986493 1.000000 -0.579650 C -0.438466 -0.579650 1.000000 ``` Wait... What? Negative correlations? How come? Something is definitely not right. So what is going on? What is going on is that C is factorized according to the alphanumerical sorting of its values. [High, Medium, Medium-High] are assigned [0, 1, 2], therefore the ordering is altered: 0 < 1 < 2 implies High < Medium < Medium-High, which is not true. Hence we accidentally calculated the response of A and B as C goes from High to Medium to Medium-High. The correct answer must preserve ordering, and assign [2, 0, 1] to [High, Medium, Medium-High].
Here is how: ``` In[227]: myData['C'] = myData['C'].astype('category') myData['C'].cat.categories = [2,0,1] myData['C'] = myData['C'].astype('float') myData.corr() Out[227]: A B C A 1.000000 0.986493 0.998874 B 0.986493 1.000000 0.982982 C 0.998874 0.982982 1.000000 ``` Much better! Note1: If you want to treat your variable as a nominal variable, you can look at things like contingency tables, Cramer's V and the like; or group the continuous variable by the nominal categories etc. I don't think it would be right, though. Note2: If you had another category called Low, my answer could be criticized due to the fact that I assigned equally spaced numbers to unequally spaced categories. You could make the argument that one should assign [2, 1, 1.5, 0] to [High, Medium, Medium-High, Low], which would be valid. I believe this is what people call the art part of data science.", "best_answers_score":0.8, "library_name":"scikit-learn", "question_url":"https:\/\/stackoverflow.com\/questions\/47894387\/how-to-correlate-an-ordinal-categorical-column", "best_answers_votes":32, "question_length":439, "response_length":2734 }, { "question":"Combining text stemming and removal of punctuation in NLTK and scikit-learn I am using a combination of NLTK and scikit-learn's CountVectorizer for stemming words and tokenization.
Below is an example of the plain usage of the CountVectorizer: ``` from sklearn.feature_extraction.text import CountVectorizer vocab = ['The swimmer likes swimming so he swims.'] vec = CountVectorizer().fit(vocab) sentence1 = vec.transform(['The swimmer likes swimming.']) sentence2 = vec.transform(['The swimmer swims.']) print('Vocabulary: %s' %vec.get_feature_names()) print('Sentence 1: %s' %sentence1.toarray()) print('Sentence 2: %s' %sentence2.toarray()) ``` Which will print ``` Vocabulary: ['he', 'likes', 'so', 'swimmer', 'swimming', 'swims', 'the'] Sentence 1: [[0 1 0 1 1 0 1]] Sentence 2: [[0 0 0 1 0 1 1]] ``` Now, let's say I want to remove stop words and stem the words. One option would be to do it like so: ``` from nltk import word_tokenize from nltk.stem.porter import PorterStemmer ####### # based on http:\/\/www.cs.duke.edu\/courses\/spring14\/compsci290\/assignments\/lab02.html stemmer = PorterStemmer() def stem_tokens(tokens, stemmer): stemmed = [] for item in tokens: stemmed.append(stemmer.stem(item)) return stemmed def tokenize(text): tokens = word_tokenize(text) stems = stem_tokens(tokens, stemmer) return stems ######## vect = CountVectorizer(tokenizer=tokenize, stop_words='english') vect.fit(vocab) sentence1 = vect.transform(['The swimmer likes swimming.']) sentence2 = vect.transform(['The swimmer swims.']) print('Vocabulary: %s' %vect.get_feature_names()) print('Sentence 1: %s' %sentence1.toarray()) print('Sentence 2: %s' %sentence2.toarray()) ``` Which prints: ``` Vocabulary: ['.', 'like', 'swim', 'swimmer'] Sentence 1: [[1 1 1 1]] Sentence 2: [[1 0 1 1]] ``` But how would I best get rid of the punctuation characters in this second version?", "response":"There are several options, try removing the punctuation before tokenization.
But this would mean that don't -> dont ``` import string def tokenize(text): text = \"\".join([ch for ch in text if ch not in string.punctuation]) tokens = word_tokenize(text) stems = stem_tokens(tokens, stemmer) return stems ``` Or try removing punctuation after tokenization. ``` def tokenize(text): tokens = word_tokenize(text) tokens = [i for i in tokens if i not in string.punctuation] stems = stem_tokens(tokens, stemmer) return stems ``` EDITED The above code will work, but it's rather slow because it loops through the same text multiple times: once to remove punctuation, a second time to tokenize, and a third time to stem. If you have more steps, like removing digits, removing stopwords or lowercasing, it would be better to lump the steps together as much as possible. Here are several better answers that are more efficient if your data requires more pre-processing steps: Applying NLTK-based text pre-proccessing on a pandas dataframe Why is my NLTK function slow when processing the DataFrame? https:\/\/www.kaggle.com\/alvations\/basic-nlp-with-nltk", "best_answers_score":0.8, "library_name":"scikit-learn", "question_url":"https:\/\/stackoverflow.com\/questions\/26126442\/combining-text-stemming-and-removal-of-punctuation-in-nltk-and-scikit-learn", "best_answers_votes":32, "question_length":1866, "response_length":1142 }, { "question":"How can I use sklearn.naive_bayes with (multiple) categorical features? [closed] I want to learn a Naive Bayes model for a problem where the class is boolean. Some of the features are boolean, but other features are categorical and can take on a small number of values (~5). If all my features were boolean then I would want to use sklearn.naive_bayes.BernoulliNB.
It seems clear that sklearn.naive_bayes.MultinomialNB is not what I want. One solution is to split up my categorical features into boolean features. For instance, if a variable \"X\" takes on values \"red\", \"green\", \"blue\", I can have three variables: \"X is red\", \"X is green\", \"X is blue\". That violates the assumption of conditional independence of the variables given the class, so it seems totally inappropriate. Another possibility is to encode the variable as a real-valued variable where 0.0 means red, 1.0 means green, and 2.0 means blue. That also seems totally inappropriate to use with GaussianNB (for obvious reasons). I don't understand how to fit what I am trying to do into the Naive Bayes models that sklearn gives me. [Edit to explain why I don't think multinomial NB is what I want]: My understanding is that in multinomial NB the feature vector consists of counts of how many times a token was observed in k iid samples. My understanding is that this is a fit for document classification, where there is an underlying class of document, and then each word in the document is assumed to be drawn from a categorical distribution specific to that class. A document would have k tokens, the feature vector would be of length equal to the vocabulary size, and the sum of the feature counts would be k. In my case, I have a number of Bernoulli variables, plus a couple of categorical ones. But there is no concept of the \"counts\" here. Example: classes are people who like or don't like math. Predictors are college major (categorical) and whether they went to graduate school (boolean). I don't think this fits multinomial since there are no counts here.", "response":"Some of the features are boolean, but other features are categorical and can take on a small number of values (~5). This is an interesting question, but it is actually more than a single one: How to deal with a categorical feature in NB.
How to deal with non-homogeneous features in NB (and, as I'll point out in the following, even two categorical features are non-homogeneous). How to do this in sklearn. Consider first a single categorical feature. NB assumes\/simplifies that the features are independent. Your idea of transforming this into several binary variables is exactly that of dummy variables. Clearly, these dummy variables are anything but independent. Your idea of then running a Bernoulli NB on the result implicitly assumes independence. While it is known that, in practice, NB does not necessarily break when faced with dependent variables, there is no reason to try to transform the problem into the worst configuration for NB, especially as multinomial NB is a very easy alternative. Conversely, suppose that after transforming the single categorical variable into a multi-column dataset using the dummy variables, you use a multinomial NB. The theory for multinomial NB states: With a multinomial event model, samples (feature vectors) represent the frequencies with which certain events have been generated by a multinomial ... where p_i is the probability that event i occurs. A feature vector ... is then a histogram, with x_i counting the number of times event i was observed in a particular instance. This is the event model typically used for document classification, with events representing the occurrence of a word in a single document (see bag of words assumption). So, here, each instance of your single categorical variable is a \"length-1 paragraph\", and the distribution is exactly multinomial. Specifically, each row has 1 in one position and 0 in all the rest because a length-1 paragraph must have exactly one word, and so those will be the frequencies. Note that from the point of view of sklearn's multinomial NB, the fact that the dataset is 5-columned does not now imply an assumption of independence.
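As a minimal sketch of the dummy-variables-plus-multinomial idea, using the toy example from the question (the data is invented for illustration, and LabelBinarizer is just one convenient way to build the dummy columns):

```
import numpy as np
from sklearn.naive_bayes import MultinomialNB
from sklearn.preprocessing import LabelBinarizer

# Class: likes math (boolean); single categorical feature: college major.
majors = np.array(["math", "physics", "history", "math", "history"])
likes_math = np.array([1, 1, 0, 1, 0])

# Dummy-encode the feature: each row is a "length-1 paragraph", i.e. exactly
# one 1 and zeros elsewhere, so the rows are multinomial histograms.
lb = LabelBinarizer()
X = lb.fit_transform(majors)

clf = MultinomialNB().fit(X, likes_math)
print(clf.predict(lb.transform(["math"])))  # [1]
```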
Now consider the case where you have a dataset consisting of several features: Categorical Bernoulli Normal Under the very assumption of using NB, these variables are independent. Consequently, you can do the following: Build an NB classifier for each of the categorical features separately, using your dummy variables and a multinomial NB. Build an NB classifier for all of the Bernoulli data at once - this is because sklearn's Bernoulli NB is simply a shortcut for several single-feature Bernoulli NBs. Same as step 2 for all the normal features. By the definition of independence, the probability for an instance is the product of the probabilities given by these classifiers.", "best_answers_score":0.8, "library_name":"scikit-learn", "question_url":"https:\/\/stackoverflow.com\/questions\/38621053\/how-can-i-use-sklearn-naive-bayes-with-multiple-categorical-features", "best_answers_votes":20, "question_length":2267, "response_length":2849 }, { "question":"How to debug dying Jupyter Python3 kernel? I'm running some code using scipy and scikits.learn on Jupyter notebook using Python 3 kernel. During the computation the kernel is being restarted with a message dialogue saying that \u201cThe kernel appears to have died. It will restart automatically.\u201d. The stderr of the underlying Jupyter process just logs the fact that the kernel dies and is going to be restarted without any helpful message. Is there any way of checking the underlying error? It might be a segfault coming from within some C++ code, but I can only guess. I searched for any relevant logs on the server and failed to find anything helpful.", "response":"I faced the exact same issue while reading close to 5000 images as a numpy array on a laptop with 8 GB of RAM, for a machine learning project. After doing a bit of math with the resolution of my images and the size of the resulting numpy array, I figured that 8 GB of RAM is not sufficient to handle the images.
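The back-of-the-envelope math mentioned above can be made concrete; the image count, resolution and dtype below are hypothetical, chosen only to show the order of magnitude:

```
# Rough memory estimate for a stack of RGB images held as a float32 numpy array.
n_images, height, width, channels = 5000, 768, 1024, 3
bytes_per_value = 4  # float32
total_gib = n_images * height * width * channels * bytes_per_value / 2**30
print(round(total_gib, 1))  # 43.9 -- far more than an 8 GB laptop can hold
```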
After a lot of research on the net, which involved suggestions like updating CUDA and cuDNN, downgrading TensorFlow (those users faced the same error while importing the relevant modules\/packages), updating numpy to the latest version, and updating the Intel Math Kernel Library (command: \"conda install -c intel mkl\"), a whole day's research, the solution that worked for me was to run the model training process on Google Colab. Now, getting back to your question: the displayed dialogue \u201cThe kernel appears to have died. It will restart automatically.\u201d is not an \"error\" per se. It is more like Jupyter Notebook helping itself by clearing out all the variables and restarting the kernel. It is Jupyter Notebook sending an SOS signal and getting help from itself so that it does not crash, which would otherwise cause the restarted Jupyter Notebook to lose the unsaved changes. (Well, it autosaves, but it does not \"auto checkpoint\".) This \"response\" of Jupyter Notebook occurs simply because the maximum RAM capacity of your laptop has been reached. This is the \"underlying error\" (response). Restarting deallocates the resources, enabling you to run the program again. Recall your computer hanging when you open too many Chrome tabs, or run a program that has too many variables' values to be stored (like in my 5000-images case)? Hanging or crashing could have been the alternative response of Jupyter Notebook when the RAM capacity is fully utilized. But instead, the developers have been kind enough to enable it to take care of itself. Note1: Running the same code as a .py script, errors will be more verbose. Note2: If you are using CUDA, remember that Jupyter Notebook fails to deallocate CUDA resources even when the session is terminated.
So this might be the reason for it to restart.", "best_answers_score":0.8, "library_name":"scikit-learn", "question_url":"https:\/\/stackoverflow.com\/questions\/39328658\/how-to-debug-dying-jupyter-python3-kernel", "best_answers_votes":3, "question_length":650, "response_length":2085 }, { "question":"Scikit-learn cross val score: too many indices for array I have the following code ``` from sklearn.ensemble import ExtraTreesClassifier from sklearn.cross_validation import cross_val_score #split the dataset for train and test combnum['is_train'] = np.random.uniform(0, 1, len(combnum)) ET: {1})\".format(label_columns, et_score)) ``` Checking the shape of the arrays: ``` features.shape Out[19]:(43069, 34) ``` And ``` labels.shape Out[20]:(43069, 1) ``` and I'm getting: ``` IndexError: too many indices for array ``` and this relevant part of the traceback: ``` ---> 22 et_score = cross_val_score(et, features, labels, n_jobs=-1) ``` I'm creating the data from Pandas dataframes, and I searched here and saw some reference to possible errors via this method, but I can't figure out how to correct it. What the data arrays look like: features ``` Out[21]: array([[ 0., 1., 1., ..., 0., 0., 1.], [ 0., 1., 1., ..., 0., 0., 1.], [ 1., 1., 1., ..., 0., 0., 1.], ..., [ 0., 0., 1., ..., 0., 0., 1.], [ 0., 0., 1., ..., 0., 0., 1.], [ 0., 0., 1., ..., 0., 0., 1.]]) ``` labels ``` Out[22]: array([[1], [1], [1], ..., [1], [1], [1]]) ```", "response":"When we do cross validation in scikit-learn, the process requires an (R,) shape label instead of (R,1). Although they are the same thing to some extent, their indexing mechanisms are different.
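The difference between the two shapes can be seen in isolation (a minimal sketch with made-up labels):

```python
import numpy as np

# Made-up labels: a column vector like Out[22] above, shape (R, 1)
labels = np.array([[1], [1], [0], [1]])
print(labels.shape)   # (4, 1)

# ravel() gives the flat (R,) view that cross_val_score expects
flat = labels.ravel()
print(flat.shape)     # (4,)

# Indexing differs: labels[0] is itself an array, flat[0] is a scalar
print(labels[0], flat[0])   # [1] 1
```

Reshaping with an explicit (c,) target or calling ravel() are equivalent here; ravel() just avoids unpacking the shape first.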
So in your case, just add: ``` c, r = labels.shape labels = labels.reshape(c,) ``` before passing it to the cross-validation function.", "best_answers_score":0.8, "library_name":"scikit-learn", "question_url":"https:\/\/stackoverflow.com\/questions\/31995175\/scikit-learn-cross-val-score-too-many-indices-for-array", "best_answers_votes":36, "question_length":1127, "response_length":328 }, { "question":"ValueError: Number of labels is 1. Valid values are 2 to n_samples - 1 (inclusive) when using silhouette_score I am trying to calculate silhouette score as I find the optimal number of clusters to create, but get an error that says: ``` ValueError: Number of labels is 1. Valid values are 2 to n_samples - 1 (inclusive) ``` I am unable to understand the reason for this. Here is the code, that I am using to cluster and calculate silhouette score. I read the csv that contains the text to be clustered and run K-Means on the n cluster values. What could be the reason I am getting this error? ``` #Create cluster using K-Means #Only creates graph import matplotlib #matplotlib.use('Agg') import re import os import nltk, math, codecs import csv from nltk.corpus import stopwords from gensim.models import Doc2Vec from sklearn.cluster import KMeans import matplotlib.pyplot as plt import pandas as pd from sklearn.metrics import silhouette_score model_name = checkpoint_save_path loaded_model = Doc2Vec.load(model_name) #Load the test csv file data = pd.read_csv(test_filename) overview = data['overview'].astype('str').tolist() overview = filter(bool, overview) vectors = [] def split_words(text): return ''.join([x if x.isalnum() or x.isspace() else \" \" for x in text ]).split() def preprocess_document(text): sp_words = split_words(text) return sp_words for i, t in enumerate(overview): vectors.append(loaded_model.infer_vector(preprocess_document(t))) sse = {} silhouette = {} for k in range(1,15): km = KMeans(n_clusters=k, max_iter=1000, verbose = 0).fit(vectors) sse[k] = km.inertia_ 
#FOLLOWING LINE CAUSES ERROR silhouette[k] = silhouette_score(vectors, km.labels_, metric='euclidean') best_cluster_size = 1 min_error = float(\"inf\") for cluster_size in sse: if sse[cluster_size] < min_error: min_error = sse[cluster_size] best_cluster_size = cluster_size print(sse) print(\"====\") print(silhouette) ```", "response":"The error is produced because you have a loop for different number of clusters n. During the first iteration, n_clusters is 1 and this leads to all(km.labels_ == 0)to be True. In other words, you have only one cluster with label 0 (thus, np.unique(km.labels_) prints array([0], dtype=int32)). silhouette_score requires more than 1 cluster labels. This causes the error. The error message is clear. Example: ``` from sklearn import datasets from sklearn.cluster import KMeans import numpy as np iris = datasets.load_iris() X = iris.data y = iris.target km = KMeans(n_clusters=3) km.fit(X,y) # check how many unique labels do you have np.unique(km.labels_) #array([0, 1, 2], dtype=int32) ``` We have 3 different clusters\/cluster labels. ``` silhouette_score(X, km.labels_, metric='euclidean') 0.38788915189699597 ``` The function works fine. Now, let's cause the error: ``` km2 = KMeans(n_clusters=1) km2.fit(X,y) silhouette_score(X, km2.labels_, metric='euclidean') ``` ``` ValueError: Number of labels is 1. 
Valid values are 2 to n_samples - 1 (inclusive) ```", "best_answers_score":0.8, "library_name":"scikit-learn", "question_url":"https:\/\/stackoverflow.com\/questions\/51382250\/valueerror-number-of-labels-is-1-valid-values-are-2-to-n-samples-1-inclusiv", "best_answers_votes":41, "question_length":1908, "response_length":1059 }, { "question":"Tuning parameters of the classifier used by BaggingClassifier Say that I want to train a BaggingClassifier that uses a DecisionTreeClassifier: ``` dt = DecisionTreeClassifier(max_depth = 1) bc = BaggingClassifier(dt, n_estimators = 500, max_samples = 0.5, max_features = 0.5) bc = bc.fit(X_train, y_train) ``` I would like to use GridSearchCV to find the best parameters for both BaggingClassifier and DecisionTreeClassifier (e.g. max_depth from DecisionTreeClassifier and max_samples from BaggingClassifier). What is the syntax for this?", "response":"I found the solution myself: ``` param_grid = { 'base_estimator__max_depth' : [1, 2, 3, 4, 5], 'max_samples' : [0.05, 0.1, 0.2, 0.5] } clf = GridSearchCV(BaggingClassifier(DecisionTreeClassifier(), n_estimators = 100, max_features = 0.5), param_grid, scoring = choosen_scoring) clf.fit(X_train, y_train) ``` The base_estimator__ prefix says that max_depth \"belongs to\" the base_estimator, i.e. my DecisionTreeClassifier in this case. This works and returns the correct results.", "best_answers_score":0.8, "library_name":"scikit-learn", "question_url":"https:\/\/stackoverflow.com\/questions\/47570307\/tuning-parameters-of-the-classifier-used-by-baggingclassifier", "best_answers_votes":31, "question_length":534, "response_length":459 }, { "question":"How to disregard the NaN data point in numpy array and generate the normalized data in Python? Say I have a numpy array that contains some float('nan') values. I don't want to impute that data now; I want to first normalize the rest and keep the NaN entries in their original positions. Is there any way I can do that?
Previously I used the normalize function from sklearn.preprocessing, but that function seems unable to take an array containing NaN as input.", "response":"You can mask your array using the numpy.ma.array function and subsequently apply any numpy operation: ``` import numpy as np a = np.random.rand(10) # Generate random data. a = np.where(a > 0.8, np.nan, a) # Set all data larger than 0.8 to NaN a = np.ma.array(a, mask=np.isnan(a)) # Use a mask to mark the NaNs a_norm = a \/ np.sum(a) # The sum function ignores the masked values. a_norm2 = a \/ np.std(a) # The std function ignores the masked values. ``` You can still access your raw data: ``` print(a.data) ```", "best_answers_score":0.8, "library_name":"scikit-learn", "question_url":"https:\/\/stackoverflow.com\/questions\/37749900\/how-to-disregard-the-nan-data-point-in-numpy-array-and-generate-the-normalized-d", "best_answers_votes":34, "question_length":430, "response_length":509 }, { "question":"ImportError: No module named grid_search, learning_curve Problem with scikit-learn: I can't use learning_curve or sklearn.grid_search. When I do import sklearn (it works) and from sklearn.cluster import bicluster (it works). I tried to reinstall scikit-learn, but the issue remains the same. I am using Python 3.5.6, scikit-learn version 0.20.0, Windows 10. ``` import sklearn from sklearn.model_selection import StratifiedKFold, cross_val_score, train_test_split from sklearn.grid_search import GridSearchCV from sklearn.learning_curve import learning_curve ```", "response":"In the new version these are in the model_selection module.
Use this: ``` from sklearn.model_selection import learning_curve, GridSearchCV ```", "best_answers_score":0.8, "library_name":"scikit-learn", "question_url":"https:\/\/stackoverflow.com\/questions\/54455632\/importerror-no-module-named-grid-search-learning-curve", "best_answers_votes":41, "question_length":560, "response_length":142 }, { "question":"Using scikit to determine contributions of each feature to a specific class prediction I am using a scikit extra trees classifier: ``` model = ExtraTreesClassifier(n_estimators=10000, n_jobs=-1, random_state=0) ``` Once the model is fitted and used to predict classes, I would like to find out the contributions of each feature to a specific class prediction. How do I do that in scikit learn? Is it possible with the extra trees classifier or do I need to use some other model?", "response":"Update Being more knowledgeable about ML today than I was 2.5 years ago, I will now say this approach only works for highly linear decision problems. If you carelessly apply it to a non-linear problem you will have trouble. Example: Imagine a feature for which neither very large nor very small values predict a class, but values in some intermediate interval do. That could be water intake to predict dehydration. But water intake probably interacts with salt intake, as eating more salt allows for a greater water intake. Now you have an interaction between two non-linear features. The decision boundary meanders around your feature space to model this non-linearity, and asking only how much one of the features influences the risk of dehydration is simply ignorant. It is not the right question. Alternative: Another, more meaningful, question you could ask is: If I didn't have this information (if I left out this feature), how much would my prediction of a given label suffer? To do this you simply leave out a feature, train a model and look at how much precision and recall drop for each of your classes.
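The leave-one-feature-out idea can be sketched on a synthetic dataset (this is my illustration, not the original poster's code; the helper name drop_col_score is made up):

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import ExtraTreesClassifier
from sklearn.model_selection import cross_val_score

# Toy data standing in for the real problem
X, y = make_classification(n_samples=300, n_features=5, n_informative=2,
                           random_state=0)

def drop_col_score(X, y, col):
    """Mean CV accuracy after removing one feature column."""
    X_drop = np.delete(X, col, axis=1)
    clf = ExtraTreesClassifier(n_estimators=50, random_state=0)
    return cross_val_score(clf, X_drop, y, cv=3).mean()

baseline = cross_val_score(ExtraTreesClassifier(n_estimators=50, random_state=0),
                           X, y, cv=3).mean()
for col in range(X.shape[1]):
    # Importance of a feature as the score drop caused by removing it
    print(col, baseline - drop_col_score(X, y, col))
```

The same loop works with any scorer (precision, recall per class, etc.) passed via the scoring argument of cross_val_score.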
It still informs about feature importance, but it makes no assumptions about linearity. Below is the old answer. I worked through a similar problem a while back and posted the same question on Cross Validated. The short answer is that there is no implementation in sklearn that does all of what you want. However, what you are trying to achieve is really quite simple, and can be done by multiplying the standardised mean value of each feature, split on each class, with the corresponding model.feature_importances_ array element. You can write a simple function that standardises your dataset, computes the mean of each feature split across class predictions, and does element-wise multiplication with the model.feature_importances_ array. The greater the absolute resulting values are, the more important the features will be to their predicted class, and better yet, the sign will tell you if it is small or large values that are important. Here's a super simple implementation that takes a data matrix X, a list of predictions Y and an array of feature importances, and outputs a JSON describing the importance of each feature to each class.
``` def class_feature_importance(X, Y, feature_importances): N, M = X.shape X = scale(X) out = {} for c in set(Y): out[c] = dict( zip(range(M), np.mean(X[Y==c, :], axis=0)*feature_importances) ) return out ``` (Note that the dict keys must run over the M features, not the N samples; in the example below both happen to be 6.) Example: ``` import numpy as np import json from sklearn.preprocessing import scale X = np.array([[ 2, 2, 2, 0, 3, -1], [ 2, 1, 2, -1, 2, 1], [ 0, -3, 0, 1, -2, 0], [-1, -1, 1, 1, -1, -1], [-1, 0, 0, 2, -3, 1], [ 2, 2, 2, 0, 3, 0]], dtype=float) Y = np.array([0, 0, 1, 1, 1, 0]) feature_importances = np.array([0.1, 0.2, 0.3, 0.2, 0.1, 0.1]) #feature_importances = model.feature_importances_ result = class_feature_importance(X, Y, feature_importances) print(json.dumps(result, indent=4)) { \"0\": { \"0\": 0.097014250014533204, \"1\": 0.16932975630904751, \"2\": 0.27854300726557774, \"3\": -0.17407765595569782, \"4\": 0.0961523947640823, \"5\": 0.0 }, \"1\": { \"0\": -0.097014250014533177, \"1\": -0.16932975630904754, \"2\": -0.27854300726557779, \"3\": 0.17407765595569782, \"4\": -0.0961523947640823, \"5\": 0.0 } } ``` The first level of keys in result are class labels, and the second level of keys are column indices, i.e. feature indices. Recall that large absolute values correspond to importance, and the sign tells you whether it's small (possibly negative) or large values that matter.", "best_answers_score":0.8, "library_name":"scikit-learn", "question_url":"https:\/\/stackoverflow.com\/questions\/35249760\/using-scikit-to-determine-contributions-of-each-feature-to-a-specific-class-pred", "best_answers_votes":25, "question_length":474, "response_length":3542 }, { "question":"multilayer_perceptron : ConvergenceWarning: Stochastic Optimizer: Maximum iterations reached and the optimization hasn't converged yet.Warning? I have written a basic program to understand what's happening in the MLP classifier.
``` from sklearn.neural_network import MLPClassifier ``` data: a dataset of body metrics (height, width, and shoe size) labeled male or female: ``` X = [[181, 80, 44], [177, 70, 43], [160, 60, 38], [154, 54, 37], [166, 65, 40], [190, 90, 47], [175, 64, 39], [177, 70, 40], [159, 55, 37], [171, 75, 42], [181, 85, 43]] y = ['male', 'male', 'female', 'female', 'male', 'male', 'female', 'female', 'female', 'male', 'male'] ``` prepare the model: ``` clf= MLPClassifier(hidden_layer_sizes=(3,), activation='logistic', solver='adam', alpha=0.0001,learning_rate='constant', learning_rate_init=0.001) ``` train ``` clf= clf.fit(X, y) ``` attributes of the learned classifier: ``` print('current loss computed with the loss function: ',clf.loss_) print('coefs: ', clf.coefs_) print('intercepts: ',clf.intercepts_) print(' number of iterations the solver: ', clf.n_iter_) print('num of layers: ', clf.n_layers_) print('Num of o\/p: ', clf.n_outputs_) ``` test ``` print('prediction: ', clf.predict([ [179, 69, 40],[175, 72, 45] ])) ``` calc. accuracy ``` print( 'accuracy: ',clf.score( [ [179, 69, 40],[175, 72, 45] ], ['female','male'], sample_weight=None )) ``` RUN1 ``` current loss computed with the loss function: 0.617580287851 coefs: [array([[ 0.17222046, -0.02541928, 0.02743722], [-0.19425909, 0.14586716, 0.17447281], [-0.4063903 , 0.148889 , 0.02523247]]), array([[-0.66332919], [ 0.04249613], [-0.10474769]])] intercepts: [array([-0.05611057, 0.32634023, 0.51251098]), array([ 0.17996649])] number of iterations the solver: 200 num of layers: 3 Num of o\/p: 1 prediction: ['female' 'male'] accuracy: 1.0 \/home\/anubhav\/anaconda3\/envs\/mytf\/lib\/python3.6\/site-packages\/sklearn\/neural_network\/multilayer_perceptron.py:563: ConvergenceWarning: Stochastic Optimizer: Maximum iterations reached and the optimization hasn't converged yet. 
% (), ConvergenceWarning) ``` RUN2 ``` current loss computed with the loss function: 0.639478303643 coefs: [array([[ 0.02300866, 0.21547873, -0.1272455 ], [-0.2859666 , 0.40159542, 0.55881399], [ 0.39902066, -0.02792529, -0.04498812]]), array([[-0.64446013], [ 0.60580985], [-0.22001532]])] intercepts: [array([-0.10482234, 0.0281211 , -0.16791644]), array([-0.19614561])] number of iterations the solver: 39 num of layers: 3 Num of o\/p: 1 prediction: ['female' 'female'] accuracy: 0.5 ``` RUN3 ``` current loss computed with the loss function: 0.691966937074 coefs: [array([[ 0.21882191, -0.48037975, -0.11774392], [-0.15890357, 0.06887471, -0.03684797], [-0.28321762, 0.48392007, 0.34104955]]), array([[ 0.08672174], [ 0.1071615 ], [-0.46085333]])] intercepts: [array([-0.36606747, 0.21969636, 0.10138625]), array([-0.05670653])] number of iterations the solver: 4 num of layers: 3 Num of o\/p: 1 prediction: ['male' 'male'] accuracy: 0.5 ``` RUN4: ``` current loss computed with the loss function: 0.697102567593 coefs: [array([[ 0.32489731, -0.18529689, -0.08712877], [-0.35425908, 0.04214241, 0.41249622], [-0.19993622, -0.38873908, -0.33057999]]), array([[ 0.43304555], [ 0.37959392], [ 0.55998979]])] intercepts: [array([ 0.11555407, -0.3473817 , -0.16852093]), array([ 0.31326347])] number of iterations the solver: 158 num of layers: 3 Num of o\/p: 1 prediction: ['male' 'male'] accuracy: 0.5 ``` ----------------------------------------------------------------- I have following questions: ``` 1.Why in the RUN1 the optimizer did not converge? 2.Why in RUN3 the number of iteration were suddenly becomes so low and in the RUN4 so high? 3.What else can be done to increase the accuracy which I get in RUN1.? ```", "response":"1: Your MLP didn't converge: The algorithm is optimizing by a stepwise convergence to a minimum and in run 1 your minimum wasn't found. 
2: Difference between runs: You have random starting values for your MLP, so you don't get the same results on the same data. It seems that you started very close to a minimum in your fourth run. You can set the random_state parameter of your MLP to a constant, e.g. random_state=0, to get the same result over and over. 3: This is the most difficult point. You can optimize parameters with ``` from sklearn.model_selection import GridSearchCV ``` Grid search splits your data set into equally sized parts, uses one part as test data and the rest as training data. So it trains as many classifiers as there are parts you split your data into. You need to specify the number of parts to split into (your data is small, so I suggest 2 or 3), a classifier (your MLP), and a grid of parameters you want to optimize, like this: ``` param_grid = [ { 'activation' : ['identity', 'logistic', 'tanh', 'relu'], 'solver' : ['lbfgs', 'sgd', 'adam'], 'hidden_layer_sizes': [ (1,),(2,),(3,),(4,),(5,),(6,),(7,),(8,),(9,),(10,),(11,), (12,),(13,),(14,),(15,),(16,),(17,),(18,),(19,),(20,),(21,) ] } ] ``` Because you once got 100 percent accuracy with a hidden layer of three neurons, you can try to optimize parameters like the learning rate and momentum instead of the hidden layers. Use grid search like this: ``` clf = GridSearchCV(MLPClassifier(), param_grid, cv=3, scoring='accuracy') clf.fit(X,y) print(\"Best parameters set found on development set:\") print(clf.best_params_) ```", "best_answers_score":0.8, "library_name":"scikit-learn", "question_url":"https:\/\/stackoverflow.com\/questions\/46028914\/multilayer-perceptron-convergencewarning-stochastic-optimizer-maximum-iterat", "best_answers_votes":33, "question_length":3776, "response_length":1585 }, { "question":"Why is Random Forest with a single tree much better than a Decision Tree classifier?
I apply the decision tree classifier and the random forest classifier to my data with the following code: ```python def decision_tree(train_X, train_Y, test_X, test_Y): clf = tree.DecisionTreeClassifier() clf.fit(train_X, train_Y) return clf.score(test_X, test_Y) def random_forest(train_X, train_Y, test_X, test_Y): clf = RandomForestClassifier(n_estimators=1) clf = clf.fit(X, Y) return clf.score(test_X, test_Y) ``` Why the result are so much better for the random forest classifier (for 100 runs, with randomly sampling 2\/3 of data for the training and 1\/3 for the test)? ```python 100%|\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588| 100\/100 [00:01<00:00, 73.59it\/s] Algorithm: Decision Tree Min : 0.3883495145631068 Max : 0.6476190476190476 Mean : 0.4861783113770316 Median : 0.48868030937802126 Stdev : 0.047158171852401135 Variance: 0.0022238931724605985 100%|\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588| 100\/100 [00:01<00:00, 85.38it\/s] Algorithm: Random Forest Min : 0.6846846846846847 Max : 0.8653846153846154 Mean : 0.7894823428836184 Median : 0.7906101571063208 Stdev : 0.03231671150915106 Variance: 0.0010443698427656967 ``` The random forest estimators with one estimator isn't just a decision tree? Have i done something wrong or misunderstood the concept?", "response":"The random forest estimators with one estimator isn't just a decision tree? Well, this is a good question, and the answer turns out to be no; the Random Forest algorithm is more than a simple bag of individually-grown decision trees. 
Apart from the randomness induced by ensembling many trees, the Random Forest (RF) algorithm also incorporates randomness when building individual trees in two distinct ways, neither of which is present in the simple Decision Tree (DT) algorithm. The first is the number of features to consider when looking for the best split at each tree node: while DT considers all the features, RF considers a random subset of them, of size equal to the parameter max_features (see the docs). The second is that, while DT considers the whole training set, a single RF tree considers only a bootstrapped sub-sample of it; from the docs again: The sub-sample size is always the same as the original input sample size but the samples are drawn with replacement if bootstrap=True (default). The RF algorithm is essentially the combination of two independent ideas: bagging, and random selection of features (see the Wikipedia entry for a nice overview). Bagging is essentially my second point above, but applied to an ensemble; random selection of features is my first point above, and it seems that it had been independently proposed by Tin Kam Ho before Breiman's RF (again, see the Wikipedia entry). Ho had already suggested that random feature selection alone improves performance. This is not exactly what you have done here (you still use the bootstrap sampling idea from bagging, too), but you could easily replicate Ho's idea by setting bootstrap=False in your RandomForestClassifier() arguments. The fact is that, given this research, the difference in performance is not unexpected... To replicate exactly the behaviour of a single tree in RandomForestClassifier(), you should use both bootstrap=False and max_features=None arguments, i.e.
```python clf = RandomForestClassifier(n_estimators=1, max_features=None, bootstrap=False) ``` in which case neither bootstrap sampling nor random feature selection will take place, and the performance should be roughly equal to that of a single decision tree.", "best_answers_score":0.8, "library_name":"scikit-learn", "question_url":"https:\/\/stackoverflow.com\/questions\/48239242\/why-is-random-forest-with-a-single-tree-much-better-than-a-decision-tree-classif", "best_answers_votes":38, "question_length":1346, "response_length":2226 }, { "question":"Possibility to apply online algorithms on big data files with sklearn? I would like to apply fast online dimensionality reduction techniques such as (online\/mini-batch) Dictionary Learning on big text corpora. My input data naturally do not fit in memory (this is why I want to use an online algorithm), so I am looking for an implementation that can iterate over a file rather than loading everything in memory. Is it possible to do this with sklearn? Are there alternatives? Thanks", "response":"For some algorithms supporting partial_fit, it would be possible to write an outer loop in a script to do out-of-core, large scale text classification. However there are some missing elements: a dataset reader that iterates over the data on disk as folders of flat files, or a SQL database server, or a NoSQL store, or a Solr index with stored fields, for instance. We also lack an online text vectorizer. Here is a sample integration template to explain how it would fit together.
``` import numpy as np from sklearn.linear_model import Perceptron from mymodule import SomeTextDocumentVectorizer from mymodule import DataSetReader dataset_reader = DataSetReader('\/path\/to\/raw\/data') expected_classes = dataset_reader.get_all_classes() # need to know the possible classes ahead of time feature_extractor = SomeTextDocumentVectorizer() classifier = Perceptron() dataset_reader = DataSetReader('\/path\/to\/raw\/data') for i, (documents, labels) in enumerate(dataset_reader.iter_chunks()): vectors = feature_extractor.transform(documents) classifier.partial_fit(vectors, labels, classes=expected_classes) if i % 100 == 0: # dump model to be able to monitor quality and later analyse convergence externally joblib.dump(classifier, 'model_%04d.pkl' % i) ``` The dataset reader class is application specific and will probably never make it into scikit-learn (except maybe for a folder of flat text files or CSV files that would not require to add a new dependency to the library). The text vectorizer part is more problematic. The current vectorizer does not have a partial_fit method because of the way we build the in-memory vocabulary (a python dict that is trimmed depending on max_df and min_df). We could maybe build one using an external store and drop the max_df and min_df features. Alternatively we could build an HashingTextVectorizer that would use the hashing trick to drop the dictionary requirements. None of those exist at the moment (although we already have some building blocks such as a murmurhash wrapper and a pull request for hashing features). In the mean time I would advise you to have a look at Vowpal Wabbit and maybe those python bindings. Edit: The sklearn.feature_extraction.FeatureHasher class has been merged into the master branch of scikit-learn and will be available in the next release (0.13). Have a look at the documentation on feature extraction. 
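A minimal runnable sketch of such an out-of-core loop, with FeatureHasher standing in for the missing online vectorizer (the toy in-memory chunks are my stand-in for the application-specific reader):

```python
from sklearn.feature_extraction import FeatureHasher
from sklearn.linear_model import SGDClassifier

# Toy stand-in for a DataSetReader: each chunk is (token-count dicts, labels)
chunks = [
    ([{"spam": 3, "win": 2}, {"meeting": 2, "report": 1}], [1, 0]),
    ([{"free": 2, "win": 1}, {"agenda": 1, "report": 2}], [1, 0]),
]
expected_classes = [0, 1]  # must be known ahead of time for partial_fit

hasher = FeatureHasher(n_features=2**18)  # stateless: no vocabulary, no fit
clf = SGDClassifier()

for docs, labels in chunks:
    X = hasher.transform(docs)  # sparse matrix with a fixed number of columns
    clf.partial_fit(X, labels, classes=expected_classes)

print(clf.predict(hasher.transform([{"win": 2, "free": 1}])))
```

Because the hasher is stateless, every chunk maps to the same fixed-width feature space without ever building a vocabulary in memory.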
Edit 2: 0.13 is now released with both FeatureHasher and HashingVectorizer that can directly deal with text data. Edit 3: there is now an example on out-of-core learning with the Reuters dataset in the official example gallery of the project.", "best_answers_score":0.8, "library_name":"scikit-learn", "question_url":"https:\/\/stackoverflow.com\/questions\/12460077\/possibility-to-apply-online-algorithms-on-big-data-files-with-sklearn", "best_answers_votes":25, "question_length":497, "response_length":2618 }, { "question":"Pruning Decision Trees Below is a snippet of the decision tree, as it is pretty huge. How can I make the tree stop growing when the lowest value in a node is under 5? Here is the code to produce the decision tree. On SciKit - Decision Tree we can see that the only way to do so is by min_impurity_decrease, but I am not sure how it specifically works. ``` import numpy as np import pandas as pd from sklearn.datasets import make_classification from sklearn.ensemble import RandomForestClassifier from sklearn.tree import DecisionTreeClassifier X, y = make_classification(n_samples=1000, n_features=6, n_informative=3, n_classes=2, random_state=0, shuffle=False) # Creating a dataFrame df = pd.DataFrame({'Feature 1':X[:,0], 'Feature 2':X[:,1], 'Feature 3':X[:,2], 'Feature 4':X[:,3], 'Feature 5':X[:,4], 'Feature 6':X[:,5], 'Class':y}) y_train = df['Class'] X_train = df.drop('Class',axis = 1) dt = DecisionTreeClassifier( random_state=42) dt.fit(X_train, y_train) from IPython.display import display, Image import pydotplus from sklearn import tree from sklearn.tree import _tree from sklearn import tree import collections import drawtree import os os.environ[\"PATH\"] += os.pathsep + 'C:\\\\Anaconda3\\\\Library\\\\bin\\\\graphviz' dot_data = tree.export_graphviz(dt, out_file = 'thisIsTheImagetree.dot', feature_names=X_train.columns, filled = True , rounded = True , special_characters = True) graph = pydotplus.graph_from_dot_file('thisIsTheImagetree.dot') thisIsTheImage =
Image(graph.create_png()) display(thisIsTheImage) #print(dt.tree_.feature) from subprocess import check_call check_call(['dot','-Tpng','thisIsTheImagetree.dot','-o','thisIsTheImagetree.png']) ``` Update: I think min_impurity_decrease can in a way help reach the goal, as tweaking min_impurity_decrease does actually prune the tree. Can anyone kindly explain min_impurity_decrease? I am trying to understand the equation in scikit-learn, but I am not sure what the values of right_impurity and left_impurity are. ``` N = 256 N_t = 256 impurity = ?? N_t_R = 242 N_t_L = 14 right_impurity = ?? left_impurity = ?? New_Value = N_t \/ N * (impurity - ((N_t_R \/ N_t) * right_impurity) - ((N_t_L \/ N_t) * left_impurity)) New_Value ``` Update 2: Instead of pruning at a certain value, we prune under a certain condition; e.g. we do split at 6\/4 and 5\/5 but not at 6000\/4 or 5000\/5. Say we prune when one value is under a certain percentage of its adjacent value in the node, rather than under an absolute value. ``` 11\/9 \/ \\ 6\/4 5\/5 \/ \\ \/ \\ 6\/0 0\/4 2\/2 3\/3 ```", "response":"Directly restricting the lowest value (number of occurrences of a particular class) of a leaf cannot be done with min_impurity_decrease or any other built-in stopping criterion. I think the only way you can accomplish this without changing the source code of scikit-learn is to post-prune your tree. To accomplish this, you can just traverse the tree and remove all children of the nodes with minimum class count less than 5 (or any other condition you can think of).
I will continue your example: ``` from sklearn.tree._tree import TREE_LEAF def prune_index(inner_tree, index, threshold): if inner_tree.value[index].min() < threshold: # turn node into a leaf by \"unlinking\" its children inner_tree.children_left[index] = TREE_LEAF inner_tree.children_right[index] = TREE_LEAF # if there are children, visit them as well if inner_tree.children_left[index] != TREE_LEAF: prune_index(inner_tree, inner_tree.children_left[index], threshold) prune_index(inner_tree, inner_tree.children_right[index], threshold) print(sum(dt.tree_.children_left < 0)) # start pruning from the root prune_index(dt.tree_, 0, 5) sum(dt.tree_.children_left < 0) ``` This code will first print 74, and then 91. It means that the code has created 17 new leaf nodes (by practically removing the links to their descendants). Comparing the tree before and after pruning (images omitted here), you can see that it has indeed decreased a lot.", "best_answers_score":0.8, "library_name":"scikit-learn", "question_url":"https:\/\/stackoverflow.com\/questions\/49428469\/pruning-decision-trees", "best_answers_votes":30, "question_length":2513, "response_length":1391 }, { "question":"How to obtain reproducible but distinct instances of GroupKFold In the GroupKFold source, the random_state is set to None ``` def __init__(self, n_splits=3): super(GroupKFold, self).__init__(n_splits, shuffle=False, random_state=None) ``` Hence, when run multiple times (code from here) ``` import numpy as np from sklearn.model_selection import GroupKFold for i in range(0,10): X = np.array([[1, 2], [3, 4], [5, 6], [7, 8]]) y = np.array([1, 2, 3, 4]) groups = np.array([0, 0, 2, 2]) group_kfold = GroupKFold(n_splits=2) group_kfold.get_n_splits(X, y, groups) print(group_kfold) for train_index, test_index in group_kfold.split(X, y, groups): print(\"TRAIN:\", train_index, \"TEST:\", test_index) X_train, X_test = X[train_index], X[test_index] y_train, y_test = y[train_index], y[test_index] print(X_train, X_test, y_train,
y_test) print print ``` o\/p ``` GroupKFold(n_splits=2) ('TRAIN:', array([0, 1]), 'TEST:', array([2, 3])) (array([[1, 2], [3, 4]]), array([[5, 6], [7, 8]]), array([1, 2]), array([3, 4])) ('TRAIN:', array([2, 3]), 'TEST:', array([0, 1])) (array([[5, 6], [7, 8]]), array([[1, 2], [3, 4]]), array([3, 4]), array([1, 2])) GroupKFold(n_splits=2) ('TRAIN:', array([0, 1]), 'TEST:', array([2, 3])) (array([[1, 2], [3, 4]]), array([[5, 6], [7, 8]]), array([1, 2]), array([3, 4])) ('TRAIN:', array([2, 3]), 'TEST:', array([0, 1])) (array([[5, 6], [7, 8]]), array([[1, 2], [3, 4]]), array([3, 4]), array([1, 2])) ``` etc ... The splits are identical. How do I set a random_state for GroupKFold in order to get a different (but repoducible) set of splits over a few different trials of cross validation? Eg, I want ``` GroupKFold(n_splits=2, random_state=42) ('TRAIN:', array([0, 1]), 'TEST:', array([2, 3])) ('TRAIN:', array([2, 3]), 'TEST:', array([0, 1])) GroupKFold(n_splits=2, random_state=13) ('TRAIN:', array([0, 2]), 'TEST:', array([1, 3])) ('TRAIN:', array([1, 3]), 'TEST:', array([0, 2])) ``` So far, it seems a strategy might be to use a sklearn.utils.shuffle first, as suggested in this post. However, this actually just rearranges the elements of each fold --- it doesn't give us new splits. 
``` from sklearn.utils import shuffle from sklearn.model_selection import GroupKFold import numpy as np import sys import pdb random_state = int(sys.argv[1]) X = np.arange(20).reshape((10,2)) y = np.arange(10) groups = np.array([0,0,0,1,2,3,4,5,6,7]) def cv(X, y, groups, random_state): X_s, y_s, groups_s = shuffle(X,y, groups, random_state=random_state) cv_out = GroupKFold(n_splits=2) cv_out_splits = cv_out.split(X_s, y_s, groups_s) for train, test in cv_out_splits: print \"---\" print X_s[test] print y_s[test] print \"test groups\", groups_s[test] print \"train groups\", groups_s[train] pdb.set_trace() print \"***\" cv(X, y, groups, random_state) ``` The output: ``` >python sshuf.py 32 *** --- [[ 2 3] [ 4 5] [ 0 1] [ 8 9] [12 13]] [1 2 0 4 6] test groups [0 0 0 2 4] train groups [7 6 1 3 5] --- [[18 19] [16 17] [ 6 7] [10 11] [14 15]] [9 8 3 5 7] test groups [7 6 1 3 5] train groups [0 0 0 2 4] >python sshuf.py 234 *** --- [[12 13] [ 4 5] [ 0 1] [ 2 3] [ 8 9]] [6 2 0 1 4] test groups [4 0 0 0 2] train groups [7 3 1 5 6] --- [[18 19] [10 11] [ 6 7] [14 15] [16 17]] [9 5 3 7 8] test groups [7 3 1 5 6] train groups [4 0 0 0 2] ```", "response":"KFold is only randomized if shuffle=True. Some datasets should not be shuffled. GroupKFold is not randomized at all. Hence the random_state=None. GroupShuffleSplit may be closer to what you're looking for. A comparison of the group-based splitters: In GroupKFold, the test sets form a complete partition of all the data. LeavePGroupsOut leaves all possible subsets of P groups out, combinatorially; test sets will overlap for P > 1. Since this means P ** n_groups splits altogether, often you want a small P, and most often want LeaveOneGroupOut which is basically the same as GroupKFold with k=1. GroupShuffleSplit makes no statement about the relationship between successive test sets; each train\/test split is performed independently. 
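For instance, here is a minimal sketch reusing the X, y and groups arrays from your shuffle experiment (toy data from the question, not a recommendation of these exact sizes): ```
import numpy as np
from sklearn.model_selection import GroupShuffleSplit

X = np.arange(20).reshape((10, 2))
y = np.arange(10)
groups = np.array([0, 0, 0, 1, 2, 3, 4, 5, 6, 7])

for seed in (42, 13):
    gss = GroupShuffleSplit(n_splits=2, test_size=0.5, random_state=seed)
    for train, test in gss.split(X, y, groups):
        # no group ever straddles the train-test boundary
        assert set(groups[train]).isdisjoint(groups[test])
        print('seed', seed, 'TRAIN:', train, 'TEST:', test)
``` Each seed yields a different but reproducible pair of group-disjoint splits, which is what you asked for.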
As an aside, Dmytro Lituiev has proposed an alternative GroupShuffleSplit algorithm which is better at getting the right number of samples (not merely the right number of groups) in the test set for a specified test_size.", "best_answers_score":0.8, "library_name":"scikit-learn", "question_url":"https:\/\/stackoverflow.com\/questions\/41859613\/how-to-obtain-reproducible-but-distinct-instances-of-groupkfold", "best_answers_votes":20, "question_length":3253, "response_length":959 }, { "question":"Scikit: calculate precision and recall using cross_val_score function I'm using scikit to perform a logistic regression on spam\/ham data. X_train is my training data and y_train the labels('spam' or 'ham') and I trained my LogisticRegression this way: ``` classifier = LogisticRegression() classifier.fit(X_train, y_train) ``` If I want to get the accuracies for a 10 fold cross validation, I just write: ``` accuracy = cross_val_score(classifier, X_train, y_train, cv=10) ``` I thought it was possible to calculate also the precisions and recalls by simply adding one parameter this way: ``` precision = cross_val_score(classifier, X_train, y_train, cv=10, scoring='precision') recall = cross_val_score(classifier, X_train, y_train, cv=10, scoring='recall') ``` But it results in a ValueError: ``` ValueError: pos_label=1 is not a valid label: array(['ham', 'spam'], dtype='|S4') ``` Is it related to the data (should I binarize the labels ?) or do they change the cross_val_score function ? 
Thank you in advance!", "response":"To compute the recall and precision, the data has to be indeed binarized, this way: ``` from sklearn import preprocessing lb = preprocessing.LabelBinarizer() lb.fit(y_train) ``` To go further, I was surprised that I didn't have to binarize the data when I wanted to calculate the accuracy: ``` accuracy = cross_val_score(classifier, X_train, y_train, cv=10) ``` It's just because the accuracy formula doesn't really need information about which class is considered as positive or negative: (TP + TN) \/ (TP + TN + FN + FP). We can indeed see that TP and TN are exchangeable, which is not the case for recall, precision and F1.", "best_answers_score":0.8, "library_name":"scikit-learn", "question_url":"https:\/\/stackoverflow.com\/questions\/27357121\/scikit-calculate-precision-and-recall-using-cross-val-score-function", "best_answers_votes":14, "question_length":1015, "response_length":621 }, { "question":"Scikit Learn Multilabel Classification: ValueError: You appear to be using a legacy multi-label data representation I am trying to use scikit-learn 0.17 with Anaconda 2.7 for a multilabel classification problem.
here is my code ``` import pandas as pd import pickle import re from sklearn.cross_validation import train_test_split from sklearn.metrics.metrics import classification_report, accuracy_score, confusion_matrix from nltk.stem import WordNetLemmatizer from sklearn.feature_extraction.text import TfidfVectorizer from sklearn.naive_bayes import MultinomialNB as MNB from sklearn.pipeline import Pipeline from sklearn.grid_search import GridSearchCV traindf = pickle.load(open(\"train.pkl\",\"rb\")) X, y = traindf['colC'], traindf['colB'].as_matrix() Xtrain, Xtest, ytrain, ytest = train_test_split(X, y, train_size=0.7) pip = Pipeline([ ('vect', TfidfVectorizer( analyzer='word', binary=False, decode_error='ignore', dtype=, encoding=u'utf-8', input=u'content', lowercase=True, max_df=0.25, max_features=None, min_df=1, ngram_range=(1, 1), norm=u'l2', preprocessor=None, smooth_idf=True, stop_words='english', strip_accents=None, sublinear_tf=True, token_pattern=u'(?u)\\\\b\\\\w\\\\w+\\\\b', tokenizer=nltk.data.load('tokenizers\/punkt\/english.pickle'), use_idf=True, vocabulary=None)), ('clf', LogisticRegression( C=10, class_weight=None, dual=False, fit_intercept=True, intercept_scaling=1, max_iter=100, multi_class='multinomial', n_jobs=1, penalty='l2', random_state=None, solver='lbfgs', tol=0.0001, verbose=0, warm_start=False)) ]) parameters = {} gridSearchTS = GridSearchCV(pip,parameters,n_jobs=3, verbose=1, scoring='accuracy') gridSearchTS.fit(Xtrain, ytrain) predictions = gridSearchTS.predict(Xtest) print ('Accuracy:', accuracy_score(ytest, predictions)) print ('Confusion Matrix:', confusion_matrix(ytest, predictions)) print ('Classification Report:', classification_report(ytest, predictions)) testdf = pickle.load(open(\"test.pkl\",\"rb\")) predictions=gridSearchTS.predict(testdf['colC']) testdf['colB'] = predictions print(testdf.info()) testdf.to_csv(\"res.csv\") ``` and here is what my data looks like training ``` colC colB some text [list of tags] some text [list of tags] 
``` test ``` colC some text some text ``` but i get the error ``` raise ValueError('You appear to be using a legacy multi-label data' ValueError: You appear to be using a legacy multi-label data representation. Sequence of sequences are no longer supported; use a binary array or sparse matrix instead. ``` what does this mean? here is the full stacktrace ``` Traceback (most recent call last): File \"X:\\asd.py\", line 34, in getTags gridSearchTS.fit(Xtrain, ytrain) File \"X:\\popol\\Continuum\\Anaconda2\\lib\\site-packages\\sklearn\\grid_search.py\", line 804, in fit return self._fit(X, y, ParameterGrid(self.param_grid)) File \"X:\\popol\\Continuum\\Anaconda2\\lib\\site-packages\\sklearn\\grid_search.py\", line 532, in _fit cv = check_cv(cv, X, y, classifier=is_classifier(estimator)) File \"X:\\popol\\Continuum\\Anaconda2\\lib\\site-packages\\sklearn\\cross_validation.py\", line 1676, in check_cv if type_of_target(y) in ['binary', 'multiclass']: File \"X:\\popol\\Continuum\\Anaconda2\\lib\\site-packages\\sklearn\\utils\\multiclass.py\", line 251, in type_of_target raise ValueError('You appear to be using a legacy multi-label data' ValueError: You appear to be using a legacy multi-label data representation. Sequence of sequences are no longer supported; use a binary array or sparse matrix instead. ``` how do i fix this? do i need to change the format of my data? why does gridSearchTS.fit(Xtrain, ytrain) fail? how do i make X and y suitable for the fit function? 
Edit I tried ``` from sklearn.preprocessing import MultiLabelBinarizer y=MultiLabelBinarizer().fit_transform(y) random_state = np.random.RandomState(0) # Split into training and test X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=.5, random_state=random_state) # Run classifier from sklearn import svm, datasets classifier = OneVsRestClassifier(svm.SVC(kernel='linear', probability=True, random_state=random_state)) y_score = classifier.fit(X_train, y_train).decision_function(X_test) ``` but now i get ``` ValueError: could not convert string to float: ``` on ``` y_score = classifier.fit(X_train, y_train).decision_function(X_test) ``` do i have to binarize X as well? why do i need to convert the X dimension to float?", "response":"The documentation gives this example: ``` >>> from sklearn.preprocessing import MultiLabelBinarizer >>> y = [[2, 3, 4], [2], [0, 1, 3], [0, 1, 2, 3, 4], [0, 1, 2]] >>> MultiLabelBinarizer().fit_transform(y) array([[0, 0, 1, 1, 1], [0, 0, 1, 0, 0], [1, 1, 0, 1, 0], [1, 1, 1, 1, 1], [1, 1, 1, 0, 0]]) ``` MultiLabelBinarizer.fit_transform takes in your labeled sets and can output the binary array. The output should then be alright to pass to your fit function.", "best_answers_score":0.8, "library_name":"scikit-learn", "question_url":"https:\/\/stackoverflow.com\/questions\/34213199\/scikit-learn-multilabel-classification-valueerror-you-appear-to-be-using-a-leg", "best_answers_votes":27, "question_length":4377, "response_length":461 }, { "question":"Python scikit-learn: Cannot clone object... as the constructor does not seem to set parameter I modified the BernoulliRBM class of scikit-learn to use groups of softmax visible units. 
In the process, I added an extra Numpy array visible_config as a class attribute which is initialized in the constructor as follows using: ``` self.visible_config = np.cumsum(np.concatenate((np.asarray([0]), visible_config), axis=0)) ``` where visible_config is a Numpy array passed as an input to the constructor. The code runs without errors when I directly use the fit() function to train the model. However, when I use the GridSearchCV structure, I get the following error ``` Cannot clone object SoftmaxRBM(batch_size=100, learning_rate=0.01, n_components=100, n_iter=100, random_state=0, verbose=True, visible_config=[ 0 21 42 63]), as the constructor does not seem to set parameter visible_config ``` This seems to be a problem in the equality check between the instance of the class and its copy created by sklearn.base.clone because visible_config does not get copied correctly. I'm not sure how to fix this. It says in the documentation that sklearn.base.clone uses a deepcopy(), so shouldn't visible_config also get copied? Can someone please explain what I can try here? Thanks!", "response":"Without seeing your code, it's hard to tell exactly what goes wrong, but you are violating a scikit-learn API convention here. The constructor in an estimator should only set attributes to the values the user passes as arguments. All computation should occur in fit, and if fit needs to store the result of a computation, it should do so in an attribute with a trailing underscore (_). This convention is what makes clone and meta-estimators such as GridSearchCV work. 
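To make that concrete, here is a hedged sketch of the convention; this SoftmaxRBM is a minimal stand-in built around the cumsum line from your question, not your actual class: ```
import numpy as np
from sklearn.base import BaseEstimator, clone

class SoftmaxRBM(BaseEstimator):
    def __init__(self, visible_config=None):
        # the constructor only stores the argument, untouched
        self.visible_config = visible_config

    def fit(self, X, y=None):
        # derived values are computed here and get a trailing underscore
        cfg = np.asarray(self.visible_config)
        self.visible_config_ = np.cumsum(np.concatenate((np.asarray([0]), cfg), axis=0))
        return self

est = SoftmaxRBM(visible_config=[21, 21, 21])
cloned = clone(est)  # no longer fails: the constructor sets the parameter as-is
print(cloned.get_params())
``` Because __init__ merely echoes its parameters, clone can rebuild an equivalent unfitted copy from get_params().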
As an aside, if you ever see an estimator in the main codebase that violates this rule: that would be a bug, and patches are welcome.", "best_answers_score":0.8, "library_name":"scikit-learn", "question_url":"https:\/\/stackoverflow.com\/questions\/24510510\/python-scikit-learn-cannot-clone-object-as-the-constructor-does-not-seem-to", "best_answers_votes":22, "question_length":1274, "response_length":593 }, { "question":"Finding the dimension with highest variance using scikit-learn PCA I need to use pca to identify the dimensions with the highest variance of a certain set of data. I'm using scikit-learn's pca to do it, but I can't identify from the output of the pca method what are the components of my data with the highest variance. Keep in mind that I don't want to eliminate those dimensions, only identify them. My data is organized as a matrix with 150 rows of data, each one with 4 dimensions. I'm doing as follows: ```py pca = sklearn.decomposition.PCA() pca.fit(data_matrix) ``` When I print pca.explained_variance_ratio_, it outputs an array of variance ratios ordered from highest to lowest, but it doesn't tell me which dimension from the data they correspond to (I've tried changing the order of columns on my matrix, and the resulting variance ratio array was the same). Printing pca.components_ gives me a 4x4 matrix (I left the original number of components as argument to pca) with some values I can't understand the meaning of... according to scikit's documentation, they should be the components with the maximum variance (the eigenvectors perhaps?), but no sign of which dimension those values refer to. Transforming the data doesn't help either, because the dimensions are changed in a way I can't really know which one they were originally. Is there any way I can get this information with scikit's pca? Thanks", "response":"The pca.explained_variance_ratio_ returned are the variances from principal components.
You can use them to find how many dimensions (components) to keep when transforming your data with PCA. You can use a threshold for that (e.g., counting how many variance ratios are greater than 0.5). After that, you can transform the data with PCA, keeping only the components whose explained variance ratio exceeds the chosen threshold. Note that the reduced data lives in the new component space, not in the original dimensions. You can check the example code from this link: http:\/\/scikit-learn.org\/dev\/tutorial\/statistical_inference\/unsupervised_learning.html#principal-component-analysis-pca", "best_answers_score":0.8, "library_name":"scikit-learn", "question_url":"https:\/\/stackoverflow.com\/questions\/15369006\/finding-the-dimension-with-highest-variance-using-scikit-learn-pca", "best_answers_votes":21, "question_length":1415, "response_length":712 }, { "question":"Why is training a random forest regressor with MAE criterion so slow compared to MSE? When training on even small applications (<50K rows <50 columns) using the mean absolute error criterion, sklearn's RandomForestRegressor is nearly 10x slower than using mean squared error. To illustrate even on a small data set: ``` import time from sklearn.ensemble import RandomForestRegressor from sklearn.datasets import load_boston X, y = load_boston(return_X_y=True) def fit_rf_criteria(criterion, X=X, y=y): reg = RandomForestRegressor(n_estimators=100, criterion=criterion, n_jobs=-1, random_state=1) start = time.time() reg.fit(X, y) end = time.time() print(end - start) fit_rf_criteria('mse') # 0.13266682624816895 fit_rf_criteria('mae') # 1.26043701171875 ``` Why does using the 'mae' criterion take so long for training a RandomForestRegressor? I want to optimize MAE for larger applications, but find the speed of the RandomForestRegressor tuned to this criterion prohibitively slow.", "response":"Thank you @hellpanderr for sharing a reference to the project issue.
To summarize \u2013 when the random forest regressor optimizes for MSE it optimizes for the L2-norm and a mean-based impurity metric. But when the regressor uses the MAE criterion it optimizes for the L1-norm which amounts to calculating the median. Unfortunately, sklearn's implementation of the MAE criterion currently appears to take O(N^2) time.", "best_answers_score":0.8, "library_name":"scikit-learn", "question_url":"https:\/\/stackoverflow.com\/questions\/57243267\/why-is-training-a-random-forest-regressor-with-mae-criterion-so-slow-compared-to", "best_answers_votes":20, "question_length":984, "response_length":411 }, { "question":"Retain feature names after Scikit Feature Selection After running a Variance Threshold from Scikit-Learn on a set of data, it removes a couple of features. I feel I'm doing something simple yet stupid, but I'd like to retain the names of the remaining features. The following code: ``` def VarianceThreshold_selector(data): selector = VarianceThreshold(.5) selector.fit(data) selector = (pd.DataFrame(selector.transform(data))) return selector x = VarianceThreshold_selector(data) print(x) ``` changes the following data (this is just a small subset of the rows): ``` Survived Pclass Sex Age SibSp Parch Nonsense 0 3 1 22 1 0 0 1 1 2 38 1 0 0 1 3 2 26 0 0 0 ``` into this (again just a small subset of the rows) ``` 0 1 2 3 0 3 22.0 1 0 1 1 38.0 1 0 2 3 26.0 0 0 ``` Using the get_support method, I know that these are Pclass, Age, Sibsp, and Parch, so I'd rather this return something more like : ``` Pclass Age Sibsp Parch 0 3 22.0 1 0 1 1 38.0 1 0 2 3 26.0 0 0 ``` Is there an easy way to do this? I'm very new with Scikit Learn, so I'm probably just doing something silly.", "response":"Would something like this help? If you pass it a pandas dataframe, it will get the columns and use get_support like you mentioned to iterate over the columns list by their indices to pull out only the column headers that met the variance threshold.
``` >>> df Survived Pclass Sex Age SibSp Parch Nonsense 0 0 3 1 22 1 0 0 1 1 1 2 38 1 0 0 2 1 3 2 26 0 0 0 >>> from sklearn.feature_selection import VarianceThreshold >>> def variance_threshold_selector(data, threshold=0.5): selector = VarianceThreshold(threshold) selector.fit(data) return data[data.columns[selector.get_support(indices=True)]] >>> variance_threshold_selector(df, 0.5) Pclass Age 0 3 22 1 1 38 2 3 26 >>> variance_threshold_selector(df, 0.9) Age 0 22 1 38 2 26 >>> variance_threshold_selector(df, 0.1) Survived Pclass Sex Age SibSp 0 0 3 1 22 1 1 1 1 2 38 1 2 1 3 2 26 0 ```", "best_answers_score":0.8, "library_name":"scikit-learn", "question_url":"https:\/\/stackoverflow.com\/questions\/39812885\/retain-feature-names-after-scikit-feature-selection", "best_answers_votes":43, "question_length":1076, "response_length":841 }, { "question":"Visualise word2vec generated from gensim using t-sne I have trained a doc2vec and corresponding word2vec on my own corpus using gensim. I want to visualise the word2vec using t-sne with the words. As in, each dot in the figure has the \"word\" also with it. I looked at a similar question here : t-sne on word2vec Following it, I have this code : import gensim import gensim.models as g ``` from sklearn.manifold import TSNE import re import matplotlib.pyplot as plt modelPath=\"\/Users\/tarun\/Desktop\/PE\/doc2vec\/model3_100_newCorpus60_1min_6window_100trainEpoch.bin\" model = g.Doc2Vec.load(modelPath) X = model[model.wv.vocab] print len(X) print X[0] tsne = TSNE(n_components=2) X_tsne = tsne.fit_transform(X[:1000,:]) plt.scatter(X_tsne[:, 0], X_tsne[:, 1]) plt.show() ``` This gives a figure with dots but no words. That is I don't know which dot is representative of which word. How can I display the word with the dot?", "response":"Two parts to the answer: how to get the word labels, and how to plot the labels on a scatterplot. Word labels in gensim's word2vec model.wv.vocab is a dict of {word: object of numeric vector}. 
To load the data into X for t-SNE, I made one change. ``` vocab = list(model.wv.key_to_index) X = model.wv[vocab] ``` This accomplishes two things: (1) it gets you a standalone vocab list for the final dataframe to plot, and (2) when you index model, you can be sure that you know the order of the words. Proceed as before with ``` tsne = TSNE(n_components=2) X_tsne = tsne.fit_transform(X) ``` Now let's put X_tsne together with the vocab list. This is easy with pandas, so import pandas as pd if you don't have that yet. ``` df = pd.DataFrame(X_tsne, index=vocab, columns=['x', 'y']) ``` The vocab words are the indices of the dataframe now. I don't have your dataset, but in the other SO you mentioned, an example df that uses sklearn's newsgroups would look something like ``` x y politics -1.524653e+20 -1.113538e+20 worry 2.065890e+19 1.403432e+20 mu -1.333273e+21 -5.648459e+20 format -4.780181e+19 2.397271e+19 recommended 8.694375e+20 1.358602e+21 arguing -4.903531e+19 4.734511e+20 or -3.658189e+19 -1.088200e+20 above 1.126082e+19 -4.933230e+19 ``` Scatterplot I like the object-oriented approach to matplotlib, so this starts out a little different. ``` fig = plt.figure() ax = fig.add_subplot(1, 1, 1) ax.scatter(df['x'], df['y']) ``` Lastly, the annotate method will label coordinates. The first two arguments are the text label and the 2-tuple. Using iterrows(), this can be very succinct: ``` for word, pos in df.iterrows(): ax.annotate(word, pos) ``` [Thanks to Ricardo in the comments for this suggestion.] Then do plt.show() or fig.savefig(). Depending on your data, you'll probably have to mess with ax.set_xlim and ax.set_ylim to see into a dense cloud. This is the newsgroup example without any tweaking: You can modify dot size, color, etc., too. 
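Putting it all together, a self-contained sketch; the ten words and their 50-dimensional vectors below are made up to stand in for your trained model: ```
import numpy as np
import pandas as pd
import matplotlib
matplotlib.use('Agg')  # headless backend; drop this line inside a notebook
import matplotlib.pyplot as plt
from sklearn.manifold import TSNE

rng = np.random.RandomState(0)
vocab = ['cat', 'dog', 'fish', 'bird', 'tree', 'car', 'road', 'sky', 'sun', 'moon']
X = rng.rand(len(vocab), 50)  # stand-in for model.wv[vocab]

# perplexity must be smaller than the number of samples
X_tsne = TSNE(n_components=2, perplexity=3, random_state=0).fit_transform(X)
df = pd.DataFrame(X_tsne, index=vocab, columns=['x', 'y'])

fig, ax = plt.subplots()
ax.scatter(df['x'], df['y'])
for word, pos in df.iterrows():
    ax.annotate(word, pos)  # label each dot with its word
fig.savefig('tsne_words.png')
``` With a real model you would replace the random X with model.wv[vocab] as above.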
Happy fine-tuning!", "best_answers_score":0.8, "library_name":"scikit-learn", "question_url":"https:\/\/stackoverflow.com\/questions\/43776572\/visualise-word2vec-generated-from-gensim-using-t-sne", "best_answers_votes":49, "question_length":918, "response_length":1981 }, { "question":"Is there a quicker way of running GridsearchCV I'm optimizing some paramters for an SVC in sklearn, and the biggest issue here is having to wait 30 minutes before I try out any other parameter ranges. Worse is the fact that I'd like to try more values for c and gamma within the same range (so I can create a smoother surface plot) but I know that it will just take longer and longer... When I ran it today I changed the cache_size from 200 to 600 (without really knowing what it does) to see if it made a difference. The time decreased by about a minute. Is this something I can help? Or am I just gonna have to deal with a very long time? ``` clf = svm.SVC(kernel=\"rbf\" , probability = True, cache_size = 600) gamma_range = [1e-7,1e-6,1e-5,1e-4,1e-3,1e-2,1e-1,1e0,1e1] c_range = [1e-3,1e-2,1e-1,1e0,1e1,1e2,1e3,1e4,1e5] param_grid = dict(gamma = gamma_range, C = c_range) grid = GridSearchCV(clf, param_grid, cv= 10, scoring=\"accuracy\") %time grid.fit(X_norm, y) ``` returns: ``` Wall time: 32min 59s GridSearchCV(cv=10, error_score='raise', estimator=SVC(C=1.0, cache_size=600, class_weight=None, coef0=0.0, degree=3, gamma=0.0, kernel='rbf', max_iter=-1, probability=True, random_state=None, shrinking=True, tol=0.001, verbose=False), fit_params={}, iid=True, loss_func=None, n_jobs=1, param_grid={'C': [0.001, 0.01, 0.1, 1.0, 10.0, 100.0, 1000.0, 10000.0, 100000.0], 'gamma': [1e-07, 1e-06, 1e-05, 0.0001, 0.001, 0.01, 0.1, 1.0, 10.0]}, pre_dispatch='2*n_jobs', refit=True, score_func=None, scoring='accuracy', verbose=0) ```", "response":"A few things: 10-fold CV is overkill and causes you to fit 10 models for each parameter group. 
You can get an instant 2-3x speedup by switching to 5- or 3-fold CV (i.e., cv=3 in the GridSearchCV call) without any meaningful difference in performance estimation. Try fewer parameter options at each round. With 9x9 combinations, you're trying 81 different combinations on each run. Typically, you'll find better performance at one end of the scale or the other, so maybe start with a coarse grid of 3-4 options, and then go finer as you start to identify the area that's more interesting for your data. 3x3 options means a 9x speedup vs. what you're doing now. You can get a trivial speedup by setting n_jobs to 2+ in your GridSearchCV call so you run multiple models at once. Depending on the size of your data, you may not be able to increase it too high, and you won't see an improvement increasing it past the number of cores you're running, but you can probably trim a bit of time that way.", "best_answers_score":0.8, "library_name":"scikit-learn", "question_url":"https:\/\/stackoverflow.com\/questions\/35655701\/is-there-a-quicker-way-of-running-gridsearchcv", "best_answers_votes":33, "question_length":1530, "response_length":993 }, { "question":"Combining random forest models in scikit learn I have two RandomForestClassifier models, and I would like to combine them into one meta model. They were both trained using similar, but different, data. How can I do this? ``` rf1 #this is my first fitted RandomForestClassifier object, with 250 trees rf2 #this is my second fitted RandomForestClassifier object, also with 250 trees ``` I want to create big_rf with all trees combined into one 500 tree model", "response":"I believe this is possible by modifying the estimators_ and n_estimators attributes on the RandomForestClassifier object. Each tree in the forest is stored as a DecisionTreeClassifier object, and the list of these trees is stored in the estimators_ attribute.
To make sure there is no discontinuity, it also makes sense to change the number of estimators in n_estimators. The advantage of this method is that you could build a bunch of small forests in parallel across multiple machines and combine them. Here's an example using the iris data set: ``` from sklearn.ensemble import RandomForestClassifier from sklearn.cross_validation import train_test_split from sklearn.datasets import load_iris def generate_rf(X_train, y_train, X_test, y_test): rf = RandomForestClassifier(n_estimators=5, min_samples_leaf=3) rf.fit(X_train, y_train) print \"rf score \", rf.score(X_test, y_test) return rf def combine_rfs(rf_a, rf_b): rf_a.estimators_ += rf_b.estimators_ rf_a.n_estimators = len(rf_a.estimators_) return rf_a iris = load_iris() X, y = iris.data[:, [0,1,2]], iris.target X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=.33) # in the line below, we create 10 random forest classifier models rfs = [generate_rf(X_train, y_train, X_test, y_test) for i in xrange(10)] # in this step below, we combine the list of random forest models into one giant model rf_combined = reduce(combine_rfs, rfs) # the combined model scores better than *most* of the component models print \"rf combined score\", rf_combined.score(X_test, y_test) ```", "best_answers_score":0.8, "library_name":"scikit-learn", "question_url":"https:\/\/stackoverflow.com\/questions\/28489667\/combining-random-forest-models-in-scikit-learn", "best_answers_votes":33, "question_length":456, "response_length":1546 }, { "question":"featureUnion vs columnTransformer? what is the difference between FeatureUnion() and ColumnTransformer() in sklearn? which should i use if i want to build a supervised model with features containing mixed data types (categorical, numeric, unstructured text) where i need to combine separate pipelines? 
source: https:\/\/scikit-learn.org\/stable\/modules\/generated\/sklearn.pipeline.FeatureUnion.html source: https:\/\/scikit-learn.org\/stable\/modules\/generated\/sklearn.compose.ColumnTransformer.html", "response":"According to the sklearn documentation: FeatureUnion: Concatenates results of multiple transformer objects. This estimator applies a list of transformer objects in parallel to the input data, then concatenates the results. This is useful to combine several feature extraction mechanisms into a single transformer. ColumnTransformer: Applies transformers to columns of an array or pandas DataFrame. This estimator allows different columns or column subsets of the input to be transformed separately and the features generated by each transformer will be concatenated to form a single feature space. This is useful for heterogeneous or columnar data, to combine several feature extraction mechanisms or transformations into a single transformer. So, FeatureUnion applies different transformers to the whole of the input data and then combines the results by concatenating them. ColumnTransformer, on the other hand, applies different transformers to different subsets of the whole input data, and again concatenates the results. For the case you propose, the ColumnTransformer should be the first step. 
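For example, a hedged sketch of that first step; the column names and toy rows here are invented for illustration: ```
import pandas as pd
from sklearn.compose import ColumnTransformer
from sklearn.preprocessing import OneHotEncoder, StandardScaler
from sklearn.feature_extraction.text import TfidfVectorizer

df = pd.DataFrame({
    'city': ['london', 'paris', 'london', 'rome'],                    # categorical
    'age': [23, 41, 35, 29],                                         # numeric
    'bio': ['likes cats', 'likes dogs', 'hates rain', 'loves sun'],  # free text
})

ct = ColumnTransformer([
    ('cat', OneHotEncoder(), ['city']),
    ('num', StandardScaler(), ['age']),
    ('txt', TfidfVectorizer(), 'bio'),  # single column name, no list: TfidfVectorizer expects 1-D input
])
X = ct.fit_transform(df)
print(X.shape)  # 4 rows; columns = one-hot cities + scaled age + tf-idf vocabulary
``` Each sub-transformer sees only its own columns, and the resulting blocks are concatenated into one feature matrix that downstream steps can consume.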
And then, once all the columns are converted to numeric, with FeatureUnion you could transform them even further by, e.g., combining PCA and SelectKBest. Finally, you could certainly use FeatureUnion as a ColumnTransformer, but you would have to include in each of the branches a column\/type selector that feeds only the columns of interest into the next transformer down the pipeline, as explained here: https:\/\/ramhiser.com\/post\/2018-04-16-building-scikit-learn-pipeline-with-pandas-dataframe\/ However, ColumnTransformer does exactly that and in a simpler way.", "best_answers_score":0.8, "library_name":"scikit-learn", "question_url":"https:\/\/stackoverflow.com\/questions\/55604249\/featureunion-vs-columntransformer", "best_answers_votes":36, "question_length":491, "response_length":1668 }, { "question":"Numpy hstack - \"ValueError: all the input arrays must have same number of dimensions\" - but they do I am trying to join two numpy arrays. In one I have a set of columns\/features after running TF-IDF on a single column of text. In the other I have one column\/feature which is an integer. So I read in a column of train and test data, run TF-IDF on this, and then I want to add another integer column because I think this will help my classifier learn more accurately how it should behave. Unfortunately, I am getting the error in the title when I try and run hstack to add this single column to my other numpy array.
Here is my code : ``` #reading in test\/train data for TF-IDF traindata = list(np.array(p.read_csv('FinalCSVFin.csv', delimiter=\";\"))[:,2]) testdata = list(np.array(p.read_csv('FinalTestCSVFin.csv', delimiter=\";\"))[:,2]) #reading in labels for training y = np.array(p.read_csv('FinalCSVFin.csv', delimiter=\";\"))[:,-2] #reading in single integer column to join AlexaTrainData = p.read_csv('FinalCSVFin.csv', delimiter=\";\")[[\"alexarank\"]] AlexaTestData = p.read_csv('FinalTestCSVFin.csv', delimiter=\";\")[[\"alexarank\"]] AllAlexaAndGoogleInfo = AlexaTestData.append(AlexaTrainData) tfv = TfidfVectorizer(min_df=3, max_features=None, strip_accents='unicode', analyzer='word',token_pattern=r'\\w{1,}',ngram_range=(1, 2), use_idf=1,smooth_idf=1,sublinear_tf=1) #tf-idf object rd = lm.LogisticRegression(penalty='l2', dual=True, tol=0.0001, C=1, fit_intercept=True, intercept_scaling=1.0, class_weight=None, random_state=None) #Classifier X_all = traindata + testdata #adding test and train data to put into tf-idf lentrain = len(traindata) #find length of train data tfv.fit(X_all) #fit tf-idf on all our text X_all = tfv.transform(X_all) #transform it X = X_all[:lentrain] #reduce to size of training set AllAlexaAndGoogleInfo = AllAlexaAndGoogleInfo[:lentrain] #reduce to size of training set X_test = X_all[lentrain:] #reduce to size of training set #printing debug info, output below : print \"X.shape => \" + str(X.shape) print \"AllAlexaAndGoogleInfo.shape => \" + str(AllAlexaAndGoogleInfo.shape) print \"X_all.shape => \" + str(X_all.shape) #line we get error on X = np.hstack((X, AllAlexaAndGoogleInfo)) ``` Below is the output and error message : ``` X.shape => (7395, 238377) AllAlexaAndGoogleInfo.shape => (7395, 1) X_all.shape => (10566, 238377) --------------------------------------------------------------------------- ValueError Traceback (most recent call last) in () 31 print \"X_all.shape => \" + str(X_all.shape) 32 #X = np.column_stack((X, AllAlexaAndGoogleInfo)) ---> 
33 X = np.hstack((X, AllAlexaAndGoogleInfo)) 34 sc = preprocessing.StandardScaler().fit(X) 35 X = sc.transform(X) C:\\Users\\Simon\\Anaconda\\lib\\site-packages\\numpy\\core\\shape_base.pyc in hstack(tup) 271 # As a special case, dimension 0 of 1-dimensional arrays is \"horizontal\" 272 if arrs[0].ndim == 1: --> 273 return _nx.concatenate(arrs, 0) 274 else: 275 return _nx.concatenate(arrs, 1) ValueError: all the input arrays must have same number of dimensions ``` What is causing my problem here? How can I fix this? As far as I can see I should be able to join these columns? What have I misunderstood? Thank you. Edit : Using the method in the answer below gets the following error : ``` --------------------------------------------------------------------------- ValueError Traceback (most recent call last) in () ---> 36 X = np.column_stack((X, AllAlexaAndGoogleInfo)) 37 sc = preprocessing.StandardScaler().fit(X) 38 X = sc.transform(X) C:\\Users\\Simon\\Anaconda\\lib\\site-packages\\numpy\\lib\\shape_base.pyc in column_stack(tup) 294 arr = array(arr,copy=False,subok=True,ndmin=2).T 295 arrays.append(arr) --> 296 return _nx.concatenate(arrays,1) 297 298 def dstack(tup): ValueError: all the input array dimensions except for the concatenation axis must match exactly ``` Interestingly, I tried to print the dtype of X and this worked fine : ``` X.dtype => float64 ``` However, trying to print the dtype of AllAlexaAndGoogleInfo like so : ``` print \"AllAlexaAndGoogleInfo.dtype => \" + str(AllAlexaAndGoogleInfo.dtype) ``` produces : ``` 'DataFrame' object has no attribute 'dtype' ```", "response":"As X is a sparse array, instead of numpy.hstack, use scipy.sparse.hstack to join the arrays. In my opinion the error message is kind of misleading here. 
This minimal example illustrates the situation: ``` import numpy as np from scipy import sparse X = sparse.rand(10, 10000) xt = np.random.random((10, 1)) print 'X shape:', X.shape print 'xt shape:', xt.shape print 'Stacked shape:', np.hstack((X,xt)).shape #print 'Stacked shape:', sparse.hstack((X,xt)).shape #This works ``` Based on the following output ``` X shape: (10, 10000) xt shape: (10, 1) ``` one may expect that the hstack in the following line will work, but the fact is that it throws this error: ``` ValueError: all the input arrays must have same number of dimensions ``` So, use scipy.sparse.hstack when you have a sparse array to stack. In fact, I have already answered this in a comment on another of your questions, and you mentioned that another error message pops up: ``` TypeError: no supported conversion for types: (dtype('float64'), dtype('O')) ``` First of all, AllAlexaAndGoogleInfo does not have a dtype as it is a DataFrame. To get its underlying numpy array, simply use AllAlexaAndGoogleInfo.values. Check its dtype. Based on the error message, it has a dtype of object, which means that it might contain non-numerical elements like strings.
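As a concrete sketch of that repair (the alexarank values below are made up, not the asker's data), one can coerce the object-dtype column to numbers with pd.to_numeric before stacking:

```python
import pandas as pd
from scipy import sparse

# Hypothetical stand-in for the object-dtype DataFrame column
df = pd.DataFrame({"alexarank": ["10", "250", "3"]})  # strings -> dtype object

# Coerce to numbers; unparseable entries become NaN instead of raising
numeric = pd.to_numeric(df["alexarank"], errors="coerce").to_numpy().reshape(-1, 1)

X = sparse.rand(3, 5, random_state=0)   # stands in for the TF-IDF matrix
stacked = sparse.hstack((X, numeric))   # now both inputs are numeric
print(stacked.shape)                    # (3, 6)
```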
This is a minimal example that reproduces this situation: ``` X = sparse.rand(100, 10000) xt = np.random.random((100, 1)) xt = xt.astype('object') # Comment this to fix the error print 'X:', X.shape, X.dtype print 'xt:', xt.shape, xt.dtype print 'Stacked shape:', sparse.hstack((X,xt)).shape ``` The error message: ``` TypeError: no supported conversion for types: (dtype('float64'), dtype('O')) ``` So, check whether there are any non-numerical values in AllAlexaAndGoogleInfo and repair them before doing the stacking.", "best_answers_score":0.8, "library_name":"scikit-learn", "question_url":"https:\/\/stackoverflow.com\/questions\/22257836\/numpy-hstack-valueerror-all-the-input-arrays-must-have-same-number-of-dimens", "best_answers_votes":23, "question_length":4171, "response_length":1828 }, { "question":"Under what parameters are SVC and LinearSVC in scikit-learn equivalent? I read this thread about the difference between SVC() and LinearSVC() in scikit-learn. Now I have a data set for a binary classification problem (for such a problem, the one-to-one\/one-to-rest strategy difference between the two functions can be ignored). I want to find out under what parameters these 2 functions would give me the same result. First of all, of course, we should set kernel='linear' for SVC(). However, I just could not get the same result from both functions. I could not find the answer in the documents; could anybody help me find the equivalent parameter set I am looking for? Updated: I modified the following code from an example on the scikit-learn website, and apparently they are not the same: ``` import numpy as np import matplotlib.pyplot as plt from sklearn import svm, datasets # import some data to play with iris = datasets.load_iris() X = iris.data[:, :2] # we only take the first two features.
We could # avoid this ugly slicing by using a two-dim dataset y = iris.target for i in range(len(y)): if (y[i]==2): y[i] = 1 h = .02 # step size in the mesh # we create an instance of SVM and fit out data. We do not scale our # data since we want to plot the support vectors C = 1.0 # SVM regularization parameter svc = svm.SVC(kernel='linear', C=C).fit(X, y) lin_svc = svm.LinearSVC(C=C, dual = True, loss = 'hinge').fit(X, y) # create a mesh to plot in x_min, x_max = X[:, 0].min() - 1, X[:, 0].max() + 1 y_min, y_max = X[:, 1].min() - 1, X[:, 1].max() + 1 xx, yy = np.meshgrid(np.arange(x_min, x_max, h), np.arange(y_min, y_max, h)) # title for the plots titles = ['SVC with linear kernel', 'LinearSVC (linear kernel)'] for i, clf in enumerate((svc, lin_svc)): # Plot the decision boundary. For that, we will assign a color to each # point in the mesh [x_min, m_max]x[y_min, y_max]. plt.subplot(1, 2, i + 1) plt.subplots_adjust(wspace=0.4, hspace=0.4) Z = clf.predict(np.c_[xx.ravel(), yy.ravel()]) # Put the result into a color plot Z = Z.reshape(xx.shape) plt.contourf(xx, yy, Z, cmap=plt.cm.Paired, alpha=0.8) # Plot also the training points plt.scatter(X[:, 0], X[:, 1], c=y, cmap=plt.cm.Paired) plt.xlabel('Sepal length') plt.ylabel('Sepal width') plt.xlim(xx.min(), xx.max()) plt.ylim(yy.min(), yy.max()) plt.xticks(()) plt.yticks(()) plt.title(titles[i]) plt.show() ``` Result: Output Figure from previous code", "response":"In mathematical sense you need to set: ``` SVC(kernel='linear', **kwargs) # by default it uses RBF kernel ``` and ``` LinearSVC(loss='hinge', **kwargs) # by default it uses squared hinge loss ``` Another element, which cannot be easily fixed is increasing intercept_scaling in LinearSVC, as in this implementation bias is regularized (which is not true in SVC nor should be true in SVM - thus this is not SVM) - consequently they will never be exactly equal (unless bias=0 for your problem), as they assume two different models SVC : 1\/2||w||^2 + C SUM xi_i 
LinearSVC: 1\/2||[w b]||^2 + C SUM xi_i Personally, I consider LinearSVC one of the mistakes of the sklearn developers - this class is simply not a linear SVM. (Figure: the result after increasing intercept scaling to 10.0.) However, if you scale it up too much - it will also fail, as tolerance and the number of iterations then become crucial. To sum up: LinearSVC is not a linear SVM; do not use it if you do not have to.", "best_answers_score":0.8, "library_name":"scikit-learn", "question_url":"https:\/\/stackoverflow.com\/questions\/33843981\/under-what-parameters-are-svc-and-linearsvc-in-scikit-learn-equivalent", "best_answers_votes":39, "question_length":2415, "response_length":942 }
``` pred = gs.best_estimator_.predict(X_test) ``` Calling each step in the pipeline individually. ``` X_test_fs = gs.best_estimator_.named_steps['fs'].transform(X_test) pred = gs.best_estimator_.named_steps['clf'].predict(X_test_fs) ```", "best_answers_score":0.8, "library_name":"scikit-learn", "question_url":"https:\/\/stackoverflow.com\/questions\/35388647\/how-to-use-gridsearchcv-output-for-a-scikit-prediction", "best_answers_votes":37, "question_length":708, "response_length":836 }, { "question":"ValueError: n_splits=10 cannot be greater than the number of members in each class I am trying to run the following code: ``` from sklearn.model_selection import StratifiedKFold X = [\"hey\", \"join now\", \"hello\", \"join today\", \"join us now\", \"not today\", \"join this trial\", \" hey hey\", \" no\", \"hola\", \"bye\", \"join today\", \"no\",\"join join\"] y = [\"n\", \"r\", \"n\", \"r\", \"r\", \"n\", \"n\", \"n\", \"n\", \"r\", \"n\", \"n\", \"n\", \"r\"] skf = StratifiedKFold(n_splits=10) for train, test in skf.split(X,y): print(\"%s %s\" % (train,test)) ``` But I get the following error: ``` ValueError: n_splits=10 cannot be greater than the number of members in each class. ``` I have looked here scikit-learn error: The least populated class in y has only 1 member but I'm still not really sure what is wrong with my code. My lists both have lengths of 14 (print(len(X)) and print(len(y))). Part of my confusion is that I am not sure what a member is defined as and what a class is in this context. Questions: How do I fix the error? What is a member? What is a class? (in this context)", "response":"Stratification means to keep the ratio of each class in each fold. So if your original dataset has 3 classes in the ratio of 60%, 20% and 20% then stratification will try to keep that ratio in each fold.
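A quick way to see this ratio-keeping behaviour in action (a toy sketch with made-up labels, not the asker's data):

```python
from collections import Counter
from sklearn.model_selection import StratifiedKFold

X = list(range(10))            # the features are irrelevant to the split itself
y = ["a"] * 6 + ["b"] * 4      # a 60% / 40% class balance

skf = StratifiedKFold(n_splits=2)
# count the class labels landing in each test fold
fold_counts = [Counter(y[i] for i in test) for _, test in skf.split(X, y)]
print(fold_counts)  # each fold keeps the 60/40 split: 3 of "a", 2 of "b"
```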
In your case, ``` X = [\"hey\", \"join now\", \"hello\", \"join today\", \"join us now\", \"not today\", \"join this trial\", \" hey hey\", \" no\", \"hola\", \"bye\", \"join today\", \"no\",\"join join\"] y = [\"n\", \"r\", \"n\", \"r\", \"r\", \"n\", \"n\", \"n\", \"n\", \"y\", \"n\", \"n\", \"n\", \"y\"] ``` You have a total of 14 samples (members) with the distribution: ``` class number of members percentage 'n' 9 64 'r' 3 22 'y' 2 14 ``` So StratifiedKFold will try to keep that ratio in each fold. Now you have specified 10 folds (n_splits). So that means that, to maintain the ratio in a single fold, class 'y' would need at least 2 \/ 10 = 0.2 members. But we cannot give less than 1 member (sample), so that's why it's throwing an error there. If instead of n_splits=10 you had set n_splits=2, it would have worked, because then the number of members for 'y' would be 2 \/ 2 = 1. For n_splits=10 to work correctly, you need to have at least 10 samples for each of your classes.
So essentially best_estimator_ is the same class object initialized with the best found params. So in the basic setup you cannot use multiple estimators in the grid-search. But as a workaround, you can have multiple estimators when using a pipeline in which the estimator is a \"parameter\" which the GridSearchCV can set. Something like this: ``` from sklearn.pipeline import Pipeline from sklearn.svm import SVC from sklearn.tree import DecisionTreeClassifier from sklearn.model_selection import GridSearchCV from sklearn.datasets import load_iris iris_data = load_iris() X, y = iris_data.data, iris_data.target # Just initialize the pipeline with any estimator you like pipe = Pipeline(steps=[('estimator', SVC())]) # Add a dict of estimator and estimator related parameters in this list params_grid = [{ 'estimator':[SVC()], 'estimator__C': [1, 10, 100, 1000], 'estimator__gamma': [0.001, 0.0001], }, { 'estimator': [DecisionTreeClassifier()], 'estimator__max_depth': [1,2,3,4,5], 'estimator__max_features': [None, \"auto\", \"sqrt\", \"log2\"], }, # {'estimator':[Any_other_estimator_you_want], # 'estimator__valid_param_of_your_estimator':[valid_values] ] grid = GridSearchCV(pipe, params_grid) ``` You can add as many dicts inside the list of params_grid as you like, but make sure that each dict has compatible parameters related to the 'estimator'.", "best_answers_score":0.8, "library_name":"scikit-learn", "question_url":"https:\/\/stackoverflow.com\/questions\/51629153\/more-than-one-estimator-in-gridsearchcvsklearn", "best_answers_votes":30, "question_length":335, "response_length":1626 }, { "question":"What is the difference between sklearn LabelEncoder and pd.get_dummies? I wanted to know the difference between sklearn LabelEncoder vs pandas get_dummies. Why would one choose LabelEncoder over get_dummies? What is the advantage of using one over another? Disadvantages?
As far as I understand if I have a class A ``` ClassA = [\"Apple\", \"Ball\", \"Cat\"] encoder = [1, 2, 3] ``` and ``` dummy = [001, 010, 100] ``` Am I understanding this incorrectly?", "response":"These are just convenience functions falling naturally into the way these two libraries tend to do things, respectively. The first one \"condenses\" the information by changing things to integers, and the second one \"expands\" the dimensions allowing (possibly) more convenient access. sklearn.preprocessing.LabelEncoder simply transforms data, from whatever domain, so that its domain is 0, ..., k - 1, where k is the number of classes. So, for example ``` [\"paris\", \"paris\", \"tokyo\", \"amsterdam\"] ``` could become ``` [0, 0, 1, 2] ``` pandas.get_dummies also takes a Series with elements from some domain, but expands it into a DataFrame whose columns correspond to the entries in the series, and the values are 0 or 1 depending on what they originally were. So, for example, the same ``` [\"paris\", \"paris\", \"tokyo\", \"amsterdam\"] ``` would become a DataFrame with labels ``` [\"paris\", \"tokyo\", \"amsterdam\"] ``` and whose \"paris\" entry would be the series ``` [1, 1, 0, 0] ``` The main advantage of the first method is that it conserves space. Conversely, encoding things as integers might give the impression (to you or to some machine learning algorithm) that the order means something. Is \"amsterdam\" closer to \"tokyo\" than to \"paris\" just because of the integer encoding? probably not. 
The second representation is a bit clearer on that.", "best_answers_score":0.8, "library_name":"scikit-learn", "question_url":"https:\/\/stackoverflow.com\/questions\/38413579\/what-is-the-difference-between-sklearn-labelencoder-and-pd-get-dummies", "best_answers_votes":24, "question_length":449, "response_length":1339 }, { "question":"AttributeError when using ColumnTransformer into a pipeline This is my first machine learning project and the first time that I use ColumnTransformer. My aim is to perform two steps of data preprocessing, and use ColumnTransformer for each of them. In the first step, I want to replace the missing values in my dataframe with the string 'missing_value' for some features, and the most frequent value for the remaining features. Therefore, I combine these two operations using ColumnTransformer and passing to it the corresponding columns of my dataframe. In the second step, I want to use the just preprocessed data and apply OrdinalEncoder or OneHotEncoder depending on the features. For that I use again ColumnTransformer. I then combine the two steps into a single pipeline. 
I am using the Kaggle Houses Price dataset, I have scikit-learn version 0.20 and this is a simplified version of my code: ``` cat_columns_fill_miss = ['PoolQC', 'Alley'] cat_columns_fill_freq = ['Street', 'MSZoning', 'LandContour'] cat_columns_ord = ['Street', 'Alley', 'PoolQC'] ord_mapping = [['Pave', 'Grvl'], # Street ['missing_value', 'Pave', 'Grvl'], # Alley ['missing_value', 'Fa', 'TA', 'Gd', 'Ex'] # PoolQC ] cat_columns_onehot = ['MSZoning', 'LandContour'] imputer_cat_pipeline = ColumnTransformer([ ('imp_miss', SimpleImputer(strategy='constant'), cat_columns_fill_miss), # fill_value='missing_value' by default ('imp_freq', SimpleImputer(strategy='most_frequent'), cat_columns_fill_freq), ]) encoder_cat_pipeline = ColumnTransformer([ ('ordinal', OrdinalEncoder(categories=ord_mapping), cat_columns_ord), ('pass_ord', OneHotEncoder(), cat_columns_onehot), ]) cat_pipeline = Pipeline([ ('imp_cat', imputer_cat_pipeline), ('cat_encoder', encoder_cat_pipeline), ]) ``` Unfortunately, when I apply it to housing_cat, the subset of my dataframe including only categorical features, ``` cat_pipeline.fit_transform(housing_cat) ``` I get the error: AttributeError: 'numpy.ndarray' object has no attribute 'columns' During handling of the above exception, another exception occurred: ... ValueError: Specifying the columns using strings is only supported for pandas DataFrames I have tried this simplified pipeline and it works properly: ``` new_cat_pipeline = Pipeline([ ('imp_cat', imputer_cat_pipeline), ('onehot', OneHotEncoder()), ]) ``` However, if I try: ``` enc_one = ColumnTransformer([ ('onehot', OneHotEncoder(), cat_columns_onehot), ('pass_ord', 'passthrough', cat_columns_ord) ]) new_cat_pipeline = Pipeline([ ('imp_cat', imputer_cat_pipeline), ('onehot_encoder', enc_one), ]) ``` I start to get the same error. I suspect then that this error is related to the use of ColumnTransformer in the second step, but I do not actually understand where it comes from. 
The way I identify the columns in the second step is the same as in the first step, so it remains unclear to me why only in the second step I get the Attribute Error...", "response":"ColumnTransformer returns numpy.array, so it can't have column attribute (as indicated by your error). If I may suggest a different solution, use pandas for both of your tasks, it will be easier. Step 1 - replacing missing values To replace missing value in a subset of columns with missing_value string use this: ``` dataframe[[\"PoolQC\", \"Alley\"]].fillna(\"missing_value\", inplace=True) ``` For the rest (imputing with mean of each column), this will work perfectly: ``` dataframe[[\"Street\", \"MSZoning\", \"LandContour\"]].fillna( dataframe[[\"Street\", \"MSZoning\", \"LandContour\"]].mean(), inplace=True ) ``` Step 2 - one hot encoding and categorical variables pandas provides get_dummies, which returns pandas Dataframe, unlike ColumnTransfomer, code for this would be: ``` encoded = pd.get_dummies(dataframe[['MSZoning', 'LandContour']], drop_first=True) pd.dropna(['MSZoning', 'LandContour'], axis=columns, inplace=True) dataframe = dataframe.join(encoded) ``` For ordinal variables and their encoding I would suggest you to look at this SO answer (unluckily some manual mapping would be needed in this case). If you want to use transformer anyway Get np.array from the dataframe using values attribute, pass it through the pipeline and recreate columns and indices from the array like this: ``` pd.DataFrame(data=your_array, index=np.arange(len(your_array)), columns=[\"A\", \"B\"]) ``` There is one caveat of this aprroach though; you will not know the names of custom created one-hot-encoded columns (the pipeline will not do this for you). Additionally, you could get the names of columns from sklearn's transforming objects (e.g. 
using categories_ attribute), but I think it would break the pipeline (someone correct me if I'm wrong).", "best_answers_score":0.8, "library_name":"scikit-learn", "question_url":"https:\/\/stackoverflow.com\/questions\/54298550\/attributeerror-when-using-columntransformer-into-a-pipeline", "best_answers_votes":13, "question_length":2923, "response_length":1733 }, { "question":"Clustering cosine similarity matrix A few questions on stackoverflow mention this problem, but I haven't found a concrete solution. I have a square matrix which consists of cosine similarities (values between 0 and 1), for example: ``` | A | B | C | D A | 1.0 | 0.1 | 0.6 | 0.4 B | 0.1 | 1.0 | 0.1 | 0.2 C | 0.6 | 0.1 | 1.0 | 0.7 D | 0.4 | 0.2 | 0.7 | 1.0 ``` The square matrix can be of any size. I want to get clusters (I don't know how many) which maximize the values between the elements in the cluster. I.e. for the above example I should get two clusters: B A, C, D The reason being because C & D have the highest value between them, and A & C also have the highest value between them. An item can be in only one cluster. Recall is not that important for this problem, but precision is very important. It is acceptable to output three clusters: 1) B, 2) A, 3) C, D . But it is not acceptable to output any solution where B is in a cluster with another element. I think the diagonal (1.0) is confusing me. My data is guaranteed to have at least one cluster of 2+ elements, and I want to find as many clusters as possible without sacrificing precision. I will have to implement this in Python.", "response":"You can easily do this using spectral clustering. You can use the ready implementations such as the one in sklearn or implement it yourself. It is rather an easy algorithm. 
Here is a piece of code doing it in python using sklearn: ``` import numpy as np from sklearn.cluster import SpectralClustering mat = np.matrix([[1.,.1,.6,.4],[.1,1.,.1,.2],[.6,.1,1.,.7],[.4,.2,.7,1.]]) SpectralClustering(2).fit_predict(mat) >>> array([0, 1, 0, 0], dtype=int32) ``` As you can see it returns the clustering you have mentioned. The algorithm takes the top k eigenvectors of the input matrix corresponding to the largest eigenvalues, then runs the k-mean algorithm on the new matrix. Here is a simple code that does this for your matrix: ``` from sklearn.cluster import KMeans eigen_values, eigen_vectors = np.linalg.eigh(mat) KMeans(n_clusters=2, init='k-means++').fit_predict(eigen_vectors[:, 2:4]) >>> array([0, 1, 0, 0], dtype=int32) ``` Note that the implementation of the algorithm in the sklearn library may differ from mine. The example I gave is the simplest way of doing it. There are some good tutorial available online describing the spectral clustering algorithm in depth. For the cases you want the algorithm to figure out the number of clusters by itself, you can use Density Based Clustering Algorithms like DBSCAN: ``` from sklearn.cluster import DBSCAN DBSCAN(min_samples=1).fit_predict(mat) array([0, 1, 2, 2]) ```", "best_answers_score":0.8, "library_name":"scikit-learn", "question_url":"https:\/\/stackoverflow.com\/questions\/30089675\/clustering-cosine-similarity-matrix", "best_answers_votes":24, "question_length":1197, "response_length":1421 }, { "question":"Should binary features be one-hot encoded? I'm working with data that consists of a few dozen binary features about people which basically come down to \"person has feature x\" [True\/False]. From what I can find online categorical data should be one-hot encoded instead of assigning an arbitrary value for each category because you can't say \"category 1 is less than category 2\". 
So the solution is to create a dummy variable for each category: ``` Cat || dummy 1 | dummy 2 | dummy 3 ____||_________|_________|________ 1 || 1 | 0 | 0 2 || 0 | 1 | 0 3 || 0 | 0 | 1 ``` Now for binary features one can choose between using the variable directly (1 for true, 0 for false) or using two dummy variables ((1, 0) for true, (0, 1) for false). But I can't find any sources that show\/explain what the best approach is. I myself am conflicted because, on one hand, the dummy variables reduce the importance of each individual variable and it has been shown that at least in some cases the accuracy of the model suffers, source. But on the other hand, this can also encode missing data (in the form of (0, 0)). Furthermore, is it possible to say \"False is less than True\"? I'm actually using a Random Forest in python, and I know that tree-based classifiers such as Random Forests support categorical data, but the Sklearn package hasn't implemented this yet. I wrote a small test on the Sklearn digits data set. This data set has a number of 8 by 8 images of digits (0-9), each pixel has a value between 0 and 16, and a simple model can use this to learn to recognize the digits. For my test I change the values of > 8 to True and <= 8 to False. The accuracy of course suffers when compared to the original data, but when I implement one-hot encoding, thus changing True to (1, 0) and False to (0, 1), I can't find a significant difference compared to the binary encoding.
An explanation of the recommended approach would be greatly appreciated!", "response":"Converting a binary variable that takes the values of [0, 1] into a one-hot encoded of [(0, 1), (1, 0)] is redundant and not recommended for the following reasons (some of them are already mentioned in the comment above but just to expand on this): It is redundant because the binary variable is already in a form similar to the one-hot encoded, where the last column is dropped as it does not make any difference with or without it, because it can be inferred from the first given column: If I give you [(0, ), (1,)], you can know the complementary column [(, 1), (, 0)]. Suppose you have more than one binary variable, say 4 for example. If you convert them into one-hot encoded form, the dimension will increase from 4 to 8. The latter is not recommended for the following reasons: The Curse of Dimensionality: High dimensional data can be so troublesome. That's because a lot of algorithms (e.g. clustering algorithms) use the Euclidean Distance which, due to the squared terms, is sensitive to noise. As a matter of fact, data points spread too thin as the dimensions increase, making data extremely noisy. Besides, the concept of neighborhood becomes meaningless, and approaches that are based on finding the relative contrast between distances of the data points become unreliable. Time & Memory Complexity: It is intuitive that increasing the number of features will cost the algorithm more execution time and memory space requirement. To name a few, algorithms that use the Covariance Matrix in its computation will get affected. Polynomial algorithms will end up with too many terms...and so on. In general, the learning usually is faster with less features especially if the extra features are redundant. 
Multi-Collinearity: Since the last column in the one-hot encoded form of the binary variable is redundant and 100% correlated with the first column, this will cause troubles to the Linear Regression-based Algorithms. For example, since the ordinary least squares estimates involve inverting the matrix, a computer algorithm may be unsuccessful in obtaining an approximate inverse, if a lot of features are correlated, and hence the inverse may be numerically inaccurate. Also, linear models work by observing the changes in the dependent variable y with the unit changes in one independent variable after holding all other independent variables as constants, yet in case independent variables are highly correlated, the latter fails (there are more other consequences of Multi-Collinearity) (although some other algorithms might be less sensitive to this as in Decision Trees). Overfitting-prone: In general, too many features (regardless if they're correlated or not) may overfit your model and fail to generalize to new examples, as every data point in your dataset will be fully identified by the given features (search Andrew NG lectures, he explained this in detail) Summary In a nutshell, converting a binary variable into a one-hot encoded one is redundant and may lead to troubles that are needless and unsolicited. Although correlated features may not always worsen your model, yet they will not always improve it either.", "best_answers_score":0.8, "library_name":"scikit-learn", "question_url":"https:\/\/stackoverflow.com\/questions\/43515877\/should-binary-features-be-one-hot-encoded", "best_answers_votes":19, "question_length":1921, "response_length":3146 }, { "question":"Scikit classification report - change the format of displayed results Scikit classification report would show precision and recall scores with two digits only. Is it possible to make it display 4 digits after the dot, I mean instead of 0.67 to show 0.6783? 
``` from sklearn.metrics import classification_report print classification_report(testLabels, p, labels=list(set(testLabels)), target_names=['POSITIVE', 'NEGATIVE', 'NEUTRAL']) precision recall f1-score support POSITIVE 1.00 0.82 0.90 41887 NEGATIVE 0.65 0.86 0.74 19989 NEUTRAL 0.62 0.67 0.64 10578 ``` Also, should I worry about a precision score of 1.00? Thanks!", "response":"I just came across this old question. It is indeed possible to have more precision points in classification_report. You just need to pass in a digits argument. ``` classification_report(y_true, y_pred, target_names=target_names, digits=4) ``` From the documentation: digits : int Number of digits for formatting output floating point values Demonstration: ``` from sklearn.metrics import classification_report y_true = [0, 1, 2, 2, 2] y_pred = [0, 0, 2, 2, 1] target_names = ['class 0', 'class 1', 'class 2'] print(classification_report(y_true, y_pred, target_names=target_names)) ``` Output: ``` precision recall f1-score support class 0 0.50 1.00 0.67 1 class 1 0.00 0.00 0.00 1 class 2 1.00 0.67 0.80 3 avg \/ total 0.70 0.60 0.61 5 ``` With 4 digits: ``` print(classification_report(y_true, y_pred, target_names=target_names, digits=4)) ``` Output: ``` precision recall f1-score support class 0 0.5000 1.0000 0.6667 1 class 1 0.0000 0.0000 0.0000 1 class 2 1.0000 0.6667 0.8000 3 avg \/ total 0.7000 0.6000 0.6133 5 ```", "best_answers_score":0.8, "library_name":"scikit-learn", "question_url":"https:\/\/stackoverflow.com\/questions\/22022410\/scikit-classification-report-change-the-format-of-displayed-results", "best_answers_votes":54, "question_length":622, "response_length":1021 }, { "question":"Invert MinMaxScaler from scikit_learn To feed my generative neural net, I need to normalize some data between -1 and 1. I do it with MinMaxScaler from Sklearn and it works great. Now, my generator is going to output data between -1 and 1. 
How to revert MinMaxScaler to get real data?", "response":"Let us start by defining a pandas dataframe: ``` cols = ['A', 'B'] data = pd.DataFrame(np.array([[2,3],[1.02,1.2],[0.5,0.3]]),columns=cols) ``` Then we scale the data using the MinMaxScaler: ``` scaler = preprocessing.MinMaxScaler(feature_range = (0,1)) scaled_data = scaler.fit_transform(data[cols]) ``` Now, to invert the transformation you should call the inverse transform: ``` scaler.inverse_transform(scaled_data) ```", "best_answers_score":0.8, "library_name":"scikit-learn", "question_url":"https:\/\/stackoverflow.com\/questions\/41551165\/invert-minmaxscaler-from-scikit-learn", "best_answers_votes":35, "question_length":284, "response_length":421 }, { "question":"XGboost python - classifier class weight option? Is there a way to set different class weights for xgboost classifier? For example in sklearn RandomForestClassifier this is done by the \"class_weight\" parameter.", "response":"For sklearn version >= 0.19 there is a simpler solution: ``` from sklearn.utils import class_weight classes_weights = class_weight.compute_sample_weight( class_weight='balanced', y=train_df['class'] ) xgb_classifier.fit(X, y, sample_weight=classes_weights) ```", "best_answers_score":0.8, "library_name":"scikit-learn", "question_url":"https:\/\/stackoverflow.com\/questions\/42192227\/xgboost-python-classifier-class-weight-option", "best_answers_votes":18, "question_length":210, "response_length":256 }, { "question":"How to get coefficients of polynomial features I know it is possible to obtain the polynomial features as numbers by using: polynomial_features.transform(X). According to the manual, for a degree of two the features are: [1, a, b, a^2, ab, b^2]. But how do I obtain a description of the features for higher orders? .get_params() does not show any list of features.", "response":"
```python from sklearn.preprocessing import PolynomialFeatures import pandas as pd import numpy as np data = pd.DataFrame.from_dict({ 'x': np.random.randint(low=1, high=10, size=5), 'y': np.random.randint(low=-1, high=1, size=5), }) p = PolynomialFeatures(degree=2).fit(data) print p.get_feature_names(data.columns) ``` This will output as follows: ``` ['1', 'x', 'y', 'x^2', 'x y', 'y^2'] ``` N.B. For some reason you gotta fit your PolynomialFeatures object before you will be able to use get_feature_names(). If you are Pandas-lover (as I am), you can easily form DataFrame with all new features like this: ```python features = DataFrame(p.transform(data), columns=p.get_feature_names(data.columns)) print features ``` Result will look like this: ``` 1 x y x^2 x y y^2 0 1.0 8.0 -1.0 64.0 -8.0 1.0 1 1.0 9.0 -1.0 81.0 -9.0 1.0 2 1.0 1.0 0.0 1.0 0.0 0.0 3 1.0 6.0 0.0 36.0 0.0 0.0 4 1.0 5.0 -1.0 25.0 -5.0 1.0 ```", "best_answers_score":0.8, "library_name":"scikit-learn", "question_url":"https:\/\/stackoverflow.com\/questions\/31290976\/how-to-get-coefficients-of-polynomial-features", "best_answers_votes":40, "question_length":365, "response_length":1005 }, { "question":"Does GridSearchCV store scores for all parameter combinations? After running a GridSearchCV, I would like to see the score for each parameter combination. How can I access the scores for each parameter combinations after running GridSearchCV? Here is an example code that I used in another post. 
``` from sklearn.feature_extraction.text import CountVectorizer from sklearn.feature_extraction.text import TfidfTransformer from sklearn.grid_search import GridSearchCV from sklearn.pipeline import Pipeline from sklearn.naive_bayes import MultinomialNB X_train = ['qwe rtyuiop', 'asd fghj kl', 'zx cv bnm', 'qw erty ui op', 'as df ghj kl', 'zxc vb nm', 'qwe rt yu iop', 'asdfg hj kl', 'zx cvb nm', 'qwe rt yui op', 'asd fghj kl', 'zx cvb nm', 'qwer tyui op', 'asd fg hjk l', 'zx cv b nm', 'qw ert yu iop', 'as df gh jkl', 'zx cvb nm', 'qwe rty uiop', 'asd fghj kl', 'zx cvbnm', 'qw erty ui op', 'as df ghj kl', 'zxc vb nm', 'qwe rtyu iop', 'as dfg hj kl', 'zx cvb nm', 'qwe rt yui op', 'asd fg hj kl', 'zx cvb nm', 'qwer tyuiop', 'asd fghjk l', 'zx cv b nm', 'qw ert yu iop', 'as df gh jkl', 'zx cvb nm'] y_train = ['1', '2', '3', '1', '1', '3', '1', '2', '3', '1', '2', '3', '1', '4', '1', '2', '2', '4', '1', '2', '3', '1', '1', '3', '1', '2', '3', '1', '2', '3', '1', '4', '1', '2', '2', '4'] parameters = { 'clf__alpha': (1e-1, 1e-2), 'vect__ngram_range': [(1,2),(1,3)], 'vect__max_df': (0.9, 0.98) } text_clf_Pipline_MultinomialNB = Pipeline([('vect', CountVectorizer()), ('tfidf', TfidfTransformer()), ('clf', MultinomialNB()), ]) gs_clf = GridSearchCV(text_clf_Pipline_MultinomialNB, parameters, n_jobs=-1) gs_classifier = gs_clf.fit(X_train, y_train) ```", "response":"Yes it does, exactly as it is stated in the docs: grid_scores_ : list of named tuples Contains scores for all parameter combinations in param_grid. Each entry corresponds to one parameter setting. 
Each named tuple has the attributes: parameters, a dict of parameter settings; mean_validation_score, the mean score over the cross-validation folds; cv_validation_scores, the list of scores for each fold.", "best_answers_score":0.8, "library_name":"scikit-learn", "question_url":"https:\/\/stackoverflow.com\/questions\/34274598\/does-gridsearchcv-store-scores-for-all-parameter-combinations", "best_answers_votes":31, "question_length":1659, "response_length":399 }, { "question":"Scikit-learn : Input contains NaN, infinity or a value too large for dtype ('float64') I'm using Python scikit-learn for simple linear regression on data obtained from csv. ``` reader = pandas.io.parsers.read_csv(\"data\/all-stocks-cleaned.csv\") stock = np.array(reader) openingPrice = stock[:, 1] closingPrice = stock[:, 5] print((np.min(openingPrice))) print((np.min(closingPrice))) print((np.max(openingPrice))) print((np.max(closingPrice))) openingPriceTrain, openingPriceTest, closingPriceTrain, closingPriceTest = \\ train_test_split(openingPrice, closingPrice, test_size=0.25, random_state=42) openingPriceTrain = np.reshape(openingPriceTrain,(openingPriceTrain.size,1)) openingPriceTrain = openingPriceTrain.astype(np.float64, copy=False) # openingPriceTrain = np.arange(openingPriceTrain, dtype=np.float64) closingPriceTrain = np.reshape(closingPriceTrain,(closingPriceTrain.size,1)) closingPriceTrain = closingPriceTrain.astype(np.float64, copy=False) openingPriceTest = np.reshape(openingPriceTest,(openingPriceTest.size,1)) closingPriceTest = np.reshape(closingPriceTest,(closingPriceTest.size,1)) regression = linear_model.LinearRegression() regression.fit(openingPriceTrain, closingPriceTrain) predicted = regression.predict(openingPriceTest) ``` The min and max values are shown as 0.0 0.6 41998.0 2593.9 Yet I'm getting this error: ValueError: Input contains NaN, infinity or a value too large for dtype('float64'). How should I remove this error?
Because from the above result it is true that it doesn't contain infinities or NaN values. What's the solution for this? Edit: all-stocks-cleaned.csv is available at http:\/\/www.sharecsv.com\/s\/cb31790afc9b9e33c5919cdc562630f3\/all-stocks-cleaned.csv", "response":"The problem with your regression is that somehow NaNs have sneaked into your data. This could be easily checked with the following code snippet: ``` import pandas as pd import numpy as np from sklearn import linear_model from sklearn.cross_validation import train_test_split reader = pd.io.parsers.read_csv(\".\/data\/all-stocks-cleaned.csv\") stock = np.array(reader) openingPrice = stock[:, 1] closingPrice = stock[:, 5] openingPriceTrain, openingPriceTest, closingPriceTrain, closingPriceTest = \\ train_test_split(openingPrice, closingPrice, test_size=0.25, random_state=42) openingPriceTrain = openingPriceTrain.reshape(openingPriceTrain.size,1) openingPriceTrain = openingPriceTrain.astype(np.float64, copy=False) closingPriceTrain = closingPriceTrain.reshape(closingPriceTrain.size,1) closingPriceTrain = closingPriceTrain.astype(np.float64, copy=False) openingPriceTest = openingPriceTest.reshape(openingPriceTest.size,1) openingPriceTest = openingPriceTest.astype(np.float64, copy=False) np.isnan(openingPriceTrain).any(), np.isnan(closingPriceTrain).any(), np.isnan(openingPriceTest).any() (True, True, True) ``` If you try imputing missing values like below: ``` openingPriceTrain[np.isnan(openingPriceTrain)] = np.median(openingPriceTrain[~np.isnan(openingPriceTrain)]) closingPriceTrain[np.isnan(closingPriceTrain)] = np.median(closingPriceTrain[~np.isnan(closingPriceTrain)]) openingPriceTest[np.isnan(openingPriceTest)] = np.median(openingPriceTest[~np.isnan(openingPriceTest)]) ``` your regression will run smoothly without a problem: ``` regression = linear_model.LinearRegression() regression.fit(openingPriceTrain, closingPriceTrain) predicted = regression.predict(openingPriceTest) predicted[:5] array([[
13598.74748173], [ 53281.04442146], [ 18305.4272186 ], [ 50753.50958453], [ 14937.65782778]]) ``` In short: you have missing values in your data, as the error message said. EDIT: perhaps an easier and more straightforward approach would be to check if you have any missing data right after you read the data with pandas: ``` data = pd.read_csv('.\/data\/all-stocks-cleaned.csv') data.isnull().any() Date False Open True High True Low True Last True Close True Total Trade Quantity True Turnover (Lacs) True ``` and then impute the data with either of the two lines below: ``` data = data.fillna(data.median()) ``` or ``` data = data.fillna(method='ffill') ```", "best_answers_score":0.8, "library_name":"scikit-learn", "question_url":"https:\/\/stackoverflow.com\/questions\/34779961\/scikit-learn-input-contains-nan-infinity-or-a-value-too-large-for-dtype-flo", "best_answers_votes":40, "question_length":1707, "response_length":2383 }, { "question":"How are TF-IDF scores calculated by the scikit-learn TfidfVectorizer I run the following code to convert the text matrix to a TF-IDF matrix. ``` text = ['This is a string','This is another string','TFIDF computation calculation','TfIDF is the product of TF and IDF'] from sklearn.feature_extraction.text import TfidfVectorizer vectorizer = TfidfVectorizer(max_df=1.0, min_df=1, stop_words='english',norm = None) X = vectorizer.fit_transform(text) X_vocab = vectorizer.get_feature_names_out() X_mat = X.todense() X_idf = vectorizer.idf_ ``` I get the following output X_vocab = ``` [u'calculation', u'computation', u'idf', u'product', u'string', u'tf', u'tfidf'] ``` and X_mat = ``` ([[ 0. , 0. , 0. , 0. , 1.51082562, 0. , 0. ], [ 0. , 0. , 0. , 0. , 1.51082562, 0. , 0. ], [ 1.91629073, 1.91629073, 0. , 0. , 0. , 0. , 1.51082562], [ 0. , 0. , 1.91629073, 1.91629073, 0. , 1.91629073, 1.51082562]]) ``` Now I don't understand how these scores are computed.
My idea is that for the text[0], a score for only 'string' is computed and there is a score in the 5th column. But as TF-IDF is the product of the term frequency, which is 2, and the IDF, which is log(4\/2) = 1.39, the score should be 1.39 and not 1.51 as shown in the matrix. How is the TF-IDF score calculated in scikit-learn?", "response":"TF-IDF is done in multiple steps by Scikit Learn's TfidfVectorizer, which in fact uses TfidfTransformer and inherits CountVectorizer. Let me summarize the steps it does to make it more straightforward: tfs are calculated by CountVectorizer's fit_transform(); idfs are calculated by TfidfTransformer's fit(); tfidfs are calculated by TfidfTransformer's transform(). You can check the source code here. Back to your example. Here is the calculation that is done for the tfidf weight for the 5th term of the vocabulary, 1st document (X_mat[0,4]): First, the tf for 'string', in the 1st document: ``` tf = 1 ``` Second, the idf for 'string', with smoothing enabled (default behavior): ``` df = 2 N = 4 idf = ln((N + 1) \/ (df + 1)) + 1 = ln(5 \/ 3) + 1 = 1.5108256238 ``` And finally, the tfidf weight for (document 0, feature 4): ``` tfidf(0,4) = tf * idf = 1 * 1.5108256238 = 1.5108256238 ``` I noticed you chose not to normalize the tfidf matrix. Keep in mind normalizing the tfidf matrix is a common and usually recommended approach, since most models will require the feature matrix (or design matrix) to be normalized. TfidfVectorizer will L-2 normalize the output matrix by default, as a final step of the calculation. Having it normalized means it will have only weights between 0 and 1.", "best_answers_score":0.8, "library_name":"scikit-learn", "question_url":"https:\/\/stackoverflow.com\/questions\/36966019\/how-aretf-idf-calculated-by-the-scikit-learn-tfidfvectorizer", "best_answers_votes":29, "question_length":1237, "response_length":1284 }, { "question":"how to save a scikit-learn pipeline with keras regressor inside to disk?
I have a scikit-learn pipeline with KerasRegressor in it: ``` estimators = [ ('standardize', StandardScaler()), ('mlp', KerasRegressor(build_fn=baseline_model, nb_epoch=5, batch_size=1000, verbose=1)) ] pipeline = Pipeline(estimators) ``` After training the pipeline, I am trying to save it to disk using joblib... ``` joblib.dump(pipeline, filename, compress=9) ``` But I am getting an error: RuntimeError: maximum recursion depth exceeded How would you save the pipeline to disk?", "response":"I struggled with the same problem as there are no direct ways to do this. Here is a hack which worked for me. I saved my pipeline into two files. The first file stored a pickled object of the sklearn pipeline and the second one was used to store the Keras model: ``` ... from keras.models import load_model from sklearn.externals import joblib ... pipeline = Pipeline([ ('scaler', StandardScaler()), ('estimator', KerasRegressor(build_model)) ]) pipeline.fit(X_train, y_train) # Save the Keras model first: pipeline.named_steps['estimator'].model.save('keras_model.h5') # This hack allows us to save the sklearn pipeline: pipeline.named_steps['estimator'].model = None # Finally, save the pipeline: joblib.dump(pipeline, 'sklearn_pipeline.pkl') del pipeline ``` And here is how the model could be loaded back: ``` # Load the pipeline first: pipeline = joblib.load('sklearn_pipeline.pkl') # Then, load the Keras model: pipeline.named_steps['estimator'].model = load_model('keras_model.h5') y_pred = pipeline.predict(X_test) ```", "best_answers_score":0.8, "library_name":"scikit-learn", "question_url":"https:\/\/stackoverflow.com\/questions\/37984304\/how-to-save-a-scikit-learn-pipline-with-keras-regressor-inside-to-disk", "best_answers_votes":33, "question_length":550, "response_length":1026 }, { "question":"Skip forbidden parameter combinations when using GridSearchCV I want to greedily search the entire parameter space of my support vector classifier using GridSearchCV.
However, some combinations of parameters are forbidden by LinearSVC and throw an exception. In particular, there are mutually exclusive combinations of the dual, penalty, and loss parameters. For example, this code: ``` from sklearn import svm, datasets from sklearn.model_selection import GridSearchCV iris = datasets.load_iris() parameters = {'dual':[True, False], 'penalty' : ['l1', 'l2'], \\ 'loss': ['hinge', 'squared_hinge']} svc = svm.LinearSVC() clf = GridSearchCV(svc, parameters) clf.fit(iris.data, iris.target) ``` Returns ValueError: Unsupported set of arguments: The combination of penalty='l2' and loss='hinge' are not supported when dual=False, Parameters: penalty='l2', loss='hinge', dual=False My question is: is it possible to make GridSearchCV skip combinations of parameters which the model forbids? If not, is there an easy way to construct a parameter space which won't violate the rules?", "response":"I solved this problem by passing error_score=0.0 to GridSearchCV: error_score : ‘raise’ (default) or numeric. Value to assign to the score if an error occurs in estimator fitting. If set to ‘raise’, the error is raised. If a numeric value is given, FitFailedWarning is raised. This parameter does not affect the refit step, which will always raise the error. UPDATE: newer versions of sklearn print out a bunch of ConvergenceWarning and FitFailedWarning.
I had a hard time suppressing them with contextlib.suppress, but there is a hack around that involving a testing context manager: ```py from sklearn import svm, datasets from sklearn.utils._testing import ignore_warnings from sklearn.exceptions import FitFailedWarning, ConvergenceWarning from sklearn.model_selection import GridSearchCV with ignore_warnings(category=[ConvergenceWarning, FitFailedWarning]): iris = datasets.load_iris() parameters = {'dual':[True, False], 'penalty' : ['l1', 'l2'], \\ 'loss': ['hinge', 'squared_hinge']} svc = svm.LinearSVC() clf = GridSearchCV(svc, parameters, error_score=0.0) clf.fit(iris.data, iris.target) ```", "best_answers_score":0.8, "library_name":"scikit-learn", "question_url":"https:\/\/stackoverflow.com\/questions\/43009566\/skip-forbidden-parameter-combinations-when-using-gridsearchcv", "best_answers_votes":28, "question_length":1076, "response_length":1102 }, { "question":"sklearn pipeline - Applying sample weights after applying a polynomial feature transformation in a pipeline I want to apply sample weights and at the same time use a pipeline from sklearn which should make a feature transformation, e.g. polynomial, and then apply a regressor, e.g. ExtraTrees.
I am using the following packages in the two examples below: ``` from sklearn.ensemble import ExtraTreesRegressor import numpy as np from sklearn.pipeline import Pipeline from sklearn.preprocessing import PolynomialFeatures ``` Everything works well as long as I seperately transform the features and generate and train the model afterwards: ``` #Feature generation X = np.random.rand(200,4) Y = np.random.rand(200) #Feature transformation poly = PolynomialFeatures(degree=2) poly.fit_transform(X) #Model generation and fit clf = ExtraTreesRegressor(n_estimators=5, max_depth = 3) weights = [1]*100 + [2]*100 clf.fit(X,Y, weights) ``` But doing it in a pipeline, does not work: ``` #Pipeline generation pipe = Pipeline([('poly2', PolynomialFeatures(degree=2)), ('ExtraTrees', ExtraTreesRegressor(n_estimators=5, max_depth = 3))]) #Feature generation X = np.random.rand(200,4) Y = np.random.rand(200) #Fitting model clf = pipe weights = [1]*100 + [2]*100 clf.fit(X,Y, weights) ``` I get the following error: TypeError: fit() takes at most 3 arguments (4 given) In this simple example, it is no issue to modify the code, but when I want to run several different tests on my real data in my real code, being able to use pipelines and sample weight", "response":"There is mention of **fit_params in the fit method of Pipeline documentation. You must specify which step of the pipeline you want to apply the parameter to. You can achieve this by following the naming rules in the docs: For this, it enables setting parameters of the various steps using their names and the parameter name separated by a \u2018__\u2019, as in the example below. 
So all that being said, try changing the last line to: ``` clf.fit(X,Y, **{'ExtraTrees__sample_weight': weights}) ``` Updated link: This is a good example of how to work with parameters in pipelines.", "best_answers_score":0.8, "library_name":"scikit-learn", "question_url":"https:\/\/stackoverflow.com\/questions\/36205850\/sklearn-pipeline-applying-sample-weights-after-applying-a-polynomial-feature-t", "best_answers_votes":33, "question_length":1538, "response_length":569 }, { "question":"How to specify the prior probability for scikit-learn's Naive Bayes I'm using the scikit-learn machine learning library (Python) for a machine learning project. One of the algorithms I'm using is the Gaussian Naive Bayes implementation. One of the attributes of the GaussianNB() function is the following: ``` class_prior_ : array, shape (n_classes,) ``` I want to alter the class prior manually since the data I use is very skewed and the recall of one of the classes is very important. By assigning a high prior probability to that class the recall should increase. However, I can't figure out how to set the attribute correctly. I've read the below topics already but their answers don't work for me. How can the prior probabilities manually set for the Naive Bayes clf in scikit-learn? How do I know what prior's I'm giving to sci-kit learn? (Naive-bayes classifiers.) This is my code: ``` gnb = GaussianNB() gnb.class_prior_ = [0.1, 0.9] gnb.fit(data.XTrain, yTrain) yPredicted = gnb.predict(data.XTest) ``` I figured this was the correct syntax and I could find out which class belongs to which place in the array by playing with the values but the results remain unchanged. Also no errors were given. What is the correct way of setting the attributes of the GaussianNB algorithm from scikit-learn library? Link to the scikit documentation of GaussianNB", "response":"@Jianxun Li: there is in fact a way to set prior probabilities in GaussianNB. 
It's called 'priors' and it's available as a parameter. See documentation: \"Parameters: priors : array-like, shape (n_classes,) Prior probabilities of the classes. If specified the priors are not adjusted according to the data.\" So let me give you an example: ``` from sklearn.naive_bayes import GaussianNB # minimal dataset X = [[1, 0], [1, 0], [0, 1]] y = [0, 0, 1] # use empirical prior, learned from y mn = GaussianNB() print mn.fit(X,y).predict([1,1]) print mn.class_prior_ >>>[0] >>>[ 0.66666667 0.33333333] ``` But if you change the prior probabilities, it will give a different answer, which is what you are looking for, I believe. ``` # use custom prior to make 1 more likely mn = GaussianNB(priors=[0.1, 0.9]) mn.fit(X,y).predict([1,1]) >>> array([1]) ```", "best_answers_score":0.8, "library_name":"scikit-learn", "question_url":"https:\/\/stackoverflow.com\/questions\/30896367\/how-to-specify-the-prior-probability-for-scikit-learns-naive-bayes", "best_answers_votes":18, "question_length":1359, "response_length":841 }, { "question":"Using scikit-learn vectorizers and vocabularies with gensim I am trying to recycle scikit-learn vectorizer objects with gensim topic models. The reasons are simple: first of all, I already have a great deal of vectorized data; second, I prefer the interface and flexibility of scikit-learn vectorizers; third, even though topic modelling with gensim is very fast, computing its dictionaries (Dictionary()) is relatively slow in my experience. Similar questions have been asked before, especially here and here, and the bridging solution is gensim's Sparse2Corpus() function which transforms a Scipy sparse matrix into a gensim corpus object. However, this conversion does not make use of the vocabulary_ attribute of sklearn vectorizers, which holds the mapping between words and feature ids.
This mapping is necessary in order to print the discriminant words for each topic (id2word in gensim topic models, described as \"a mapping from word ids (integers) to words (strings)\"). I am aware of the fact that gensim's Dictionary objects are much more complex (and slower to compute) than scikit's vect.vocabulary_ (a simple Python dict)... Any ideas on how to use vect.vocabulary_ as id2word in gensim models? Some example code: ``` # our data documents = [u'Human machine interface for lab abc computer applications', u'A survey of user opinion of computer system response time', u'The EPS user interface management system', u'System and human system engineering testing of EPS', u'Relation of user perceived response time to error measurement', u'The generation of random binary unordered trees', u'The intersection graph of paths in trees', u'Graph minors IV Widths of trees and well quasi ordering', u'Graph minors A survey'] from sklearn.feature_extraction.text import CountVectorizer # compute vector space with sklearn vect = CountVectorizer(min_df=1, ngram_range=(1, 1), max_features=25000) corpus_vect = vect.fit_transform(documents) # each doc is a scipy sparse matrix print vect.vocabulary_ #{u'and': 1, u'minors': 20, u'generation': 9, u'testing': 32, u'iv': 15, u'engineering': 5, u'computer': 4, u'relation': 28, u'human': 11, u'measurement': 19, u'unordered': 37, u'binary': 3, u'abc': 0, u'for': 8, u'ordering': 23, u'graph': 10, u'system': 31, u'machine': 17, u'to': 35, u'quasi': 26, u'time': 34, u'random': 27, u'paths': 24, u'of': 21, u'trees': 36, u'applications': 2, u'management': 18, u'lab': 16, u'interface': 13, u'intersection': 14, u'response': 29, u'perceived': 25, u'in': 12, u'widths': 40, u'well': 39, u'eps': 6, u'survey': 30, u'error': 7, u'opinion': 22, u'the': 33, u'user': 38} import gensim # transform sparse matrix into gensim corpus corpus_vect_gensim = gensim.matutils.Sparse2Corpus(corpus_vect, documents_columns=False) lsi =
gensim.models.LsiModel(corpus_vect_gensim, num_topics=4) # I instead would like something like this line below # lsi = gensim.models.LsiModel(corpus_vect_gensim, id2word=vect.vocabulary_, num_topics=2) print lsi.print_topics(2) #['0.622*\"21\" + 0.359*\"31\" + 0.256*\"38\" + 0.206*\"29\" + 0.206*\"34\" + 0.197*\"36\" + 0.170*\"33\" + 0.168*\"1\" + 0.158*\"10\" + 0.147*\"4\"', '0.399*\"36\" + 0.364*\"10\" + -0.295*\"31\" + 0.245*\"20\" + -0.226*\"38\" + 0.194*\"26\" + 0.194*\"15\" + 0.194*\"39\" + 0.194*\"23\" + 0.194*\"40\"'] ```", "response":"Gensim doesn't require Dictionary objects. You can use your plain dict as input to id2word directly, as long as it maps ids (integers) to words (strings). In fact anything dict-like will do (including dict, Dictionary, SqliteDict...). (Btw gensim's Dictionary is a simple Python dict underneath. Not sure where your remarks on Dictionary performance come from, you can't get a mapping much faster than a plain dict in Python. Maybe you're confusing it with text preprocessing (not part of gensim), which can indeed be slow.)", "best_answers_score":0.8, "library_name":"scikit-learn", "question_url":"https:\/\/stackoverflow.com\/questions\/21552518\/using-scikit-learn-vectorizers-and-vocabularies-with-gensim", "best_answers_votes":12, "question_length":3255, "response_length":524 }, { "question":"How to penalize False Negatives more than False Positives From the business perspective, false negatives lead to about tenfold higher costs (real money) than false positives. Given my standard binary classification models (logit, random forest, etc.), how can I incorporate this into my model? Do I have to change (weight) the loss function in favor of the 'preferred' error (FP) ? If so, how to do that?", "response":"There are several options for you: As suggested in the comments, class_weight should boost the loss function towards the preferred class. 
This option is supported by various estimators, including sklearn.linear_model.LogisticRegression, sklearn.svm.SVC, sklearn.ensemble.RandomForestClassifier, and others. Note there's no theoretical limit to the weight ratio, so even if 1 to 100 isn't strong enough for you, you can go on with 1 to 500, etc. You can also select the decision threshold very low during the cross-validation to pick the model that gives highest recall (though possibly low precision). The recall close to 1.0 effectively means false_negatives close to 0.0, which is what you want. For that, use sklearn.model_selection.cross_val_predict and sklearn.metrics.precision_recall_curve functions: ``` y_scores = cross_val_predict(classifier, x_train, y_train, cv=3, method=\"decision_function\") precisions, recalls, thresholds = precision_recall_curve(y_train, y_scores) ``` If you plot the precisions and recalls against the thresholds, you should see a picture like this: After picking the best threshold, you can use the raw scores from classifier.decision_function() method for your final classification. Finally, try not to over-optimize your classifier, because you can easily end up with a trivial constant classifier (which is obviously never wrong, but is useless).", "best_answers_score":0.8, "library_name":"scikit-learn", "question_url":"https:\/\/stackoverflow.com\/questions\/49151325\/how-to-penalize-false-negatives-more-than-false-positives", "best_answers_votes":23, "question_length":404, "response_length":1382 }, { "question":"ValueError: Data is not binary and pos_label is not specified I am trying to calculate roc_auc_score, but I am getting the following error.
``` \"ValueError: Data is not binary and pos_label is not specified\" ``` My code snippet is as follows: ``` import numpy as np from sklearn.metrics import roc_auc_score y_scores=np.array([ 0.63, 0.53, 0.36, 0.02, 0.70 ,1 , 0.48, 0.46, 0.57]) y_true=np.array(['0', '1', '0', '0', '1', '1', '1', '1', '1']) roc_auc_score(y_true, y_scores) ``` Please tell me what is wrong with it.", "response":"You only need to change y_true so it looks like this: ``` y_true=np.array([0, 1, 0, 0, 1, 1, 1, 1, 1]) ``` Explanation: If you take a look at what the roc_auc_score function does in https:\/\/github.com\/scikit-learn\/scikit-learn\/blob\/0.15.X\/sklearn\/metrics\/metrics.py you will see that y_true is evaluated as follows: ``` classes = np.unique(y_true) if (pos_label is None and not (np.all(classes == [0, 1]) or np.all(classes == [-1, 1]) or np.all(classes == [0]) or np.all(classes == [-1]) or np.all(classes == [1]))): raise ValueError(\"Data is not binary and pos_label is not specified\") ``` At the moment of the execution pos_label is None, but as long as you are defining y_true as an array of characters the np.all checks are always false, and as all of them are negated the if condition is true and the exception is raised.", "best_answers_score":0.8, "library_name":"scikit-learn", "question_url":"https:\/\/stackoverflow.com\/questions\/18401112\/valueerror-data-is-not-binary-and-pos-label-is-not-specified", "best_answers_votes":19, "question_length":513, "response_length":819 }, { "question":"How to compare ROC AUC scores of different binary classifiers and assess statistical significance in Python? (p-value, confidence interval) I would like to compare different binary classifiers in Python. For that, I want to calculate the ROC AUC scores, measure the 95% confidence interval (CI), and p-value to assess statistical significance.
Below is a minimal example in scikit-learn which trains three different models on a binary classification dataset, plots the ROC curves and calculates the AUC scores. Here are my specific questions: How to calculate the 95% confidence interval (CI) of the ROC AUC scores on the test set? (e.g. with bootstrapping). How to compare the AUC scores (on test set) and measure the p-value to assess statistical significance? (The null hypothesis is that the models are not different. Rejecting the null hypothesis means the difference in AUC scores is statistically significant.) ``` import numpy as np np.random.seed(2018) from sklearn.datasets import load_breast_cancer from sklearn.metrics import roc_auc_score, roc_curve from sklearn.model_selection import train_test_split from sklearn.naive_bayes import GaussianNB from sklearn.ensemble import RandomForestClassifier from sklearn.neural_network import MLPClassifier import matplotlib import matplotlib.pyplot as plt data = load_breast_cancer() X = data.data y = data.target X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.5, random_state=17) # Naive Bayes Classifier nb_clf = GaussianNB() nb_clf.fit(X_train, y_train) nb_prediction_proba = nb_clf.predict_proba(X_test)[:, 1] # Random Forest Classifier rf_clf = RandomForestClassifier(n_estimators=20) rf_clf.fit(X_train, y_train) rf_prediction_proba = rf_clf.predict_proba(X_test)[:, 1] # Multi-layer Perceptron Classifier mlp_clf = MLPClassifier(alpha=1, hidden_layer_sizes=150) mlp_clf.fit(X_train, y_train) mlp_prediction_proba = mlp_clf.predict_proba(X_test)[:, 1] def roc_curve_and_score(y_test, pred_proba): fpr, tpr, _ = roc_curve(y_test.ravel(), pred_proba.ravel()) roc_auc = roc_auc_score(y_test.ravel(), pred_proba.ravel()) return fpr, tpr, roc_auc plt.figure(figsize=(8, 6)) matplotlib.rcParams.update({'font.size': 14}) plt.grid() fpr, tpr, roc_auc = roc_curve_and_score(y_test, rf_prediction_proba) plt.plot(fpr, tpr, color='darkorange', lw=2, label='ROC
AUC={0:.3f}'.format(roc_auc)) fpr, tpr, roc_auc = roc_curve_and_score(y_test, nb_prediction_proba) plt.plot(fpr, tpr, color='green', lw=2, label='ROC AUC={0:.3f}'.format(roc_auc)) fpr, tpr, roc_auc = roc_curve_and_score(y_test, mlp_prediction_proba) plt.plot(fpr, tpr, color='crimson', lw=2, label='ROC AUC={0:.3f}'.format(roc_auc)) plt.plot([0, 1], [0, 1], color='navy', lw=1, linestyle='--') plt.legend(loc=\"lower right\") plt.xlim([0.0, 1.0]) plt.ylim([0.0, 1.05]) plt.xlabel('1 - Specificity') plt.ylabel('Sensitivity') plt.show() ```", "response":"Bootstrap for 95% confidence interval You want to repeat your analysis on multiple resamplings of your data. In the general case, assume you have a function f(x) that determines whatever statistic you need from data x and you can bootstrap like this: ``` def bootstrap(x, f, nsamples=1000): stats = [f(x[np.random.randint(x.shape[0], size=x.shape[0])]) for _ in range(nsamples)] return np.percentile(stats, (2.5, 97.5)) ``` This gives you so-called plug-in estimates of the 95% confidence interval (i.e. you just take the percentiles of the bootstrap distribution). In your case, you can write a more specific function like this ``` def bootstrap_auc(clf, X_train, y_train, X_test, y_test, nsamples=1000): auc_values = [] for b in range(nsamples): idx = np.random.randint(X_train.shape[0], size=X_train.shape[0]) clf.fit(X_train[idx], y_train[idx]) pred = clf.predict_proba(X_test)[:, 1] roc_auc = roc_auc_score(y_test.ravel(), pred.ravel()) auc_values.append(roc_auc) return np.percentile(auc_values, (2.5, 97.5)) ``` Here, clf is the classifier for which you want to test the performance and X_train, y_train, X_test, y_test are like in your code. 
This gives me the following confidence intervals (rounded to three digits, 1000 bootstrap samples): Naive Bayes: 0.986 [0.980 0.988] (estimate, lower and upper limit of confidence interval) Random Forest: 0.983 [0.974 0.989] Multilayer Perceptron: 0.974 [0.223 0.98] Permutation tests to test against chance performance. A permutation test would technically go over all permutations of your observation sequence and evaluate your roc curve with the permuted target values (features are not permuted). This is ok if you have a few observations, but it becomes very costly if you have more observations. It is therefore common to subsample the number of permutations and simply do a number of random permutations. Here, the implementation depends a bit more on the specific thing you want to test. The following function does that for your roc_auc values: ``` def permutation_test(clf, X_train, y_train, X_test, y_test, nsamples=1000): idx1 = np.arange(X_train.shape[0]) idx2 = np.arange(X_test.shape[0]) auc_values = np.empty(nsamples) for b in range(nsamples): np.random.shuffle(idx1) # Shuffles in-place np.random.shuffle(idx2) clf.fit(X_train, y_train[idx1]) pred = clf.predict_proba(X_test)[:, 1] roc_auc = roc_auc_score(y_test[idx2].ravel(), pred.ravel()) auc_values[b] = roc_auc clf.fit(X_train, y_train) pred = clf.predict_proba(X_test)[:, 1] roc_auc = roc_auc_score(y_test.ravel(), pred.ravel()) return roc_auc, np.mean(auc_values >= roc_auc) ``` This function again takes your classifier as clf and returns the AUC value on the unshuffled data and the p-value (i.e. probability to observe an AUC value larger than or equal to what you have in the unshuffled data). Running this with 1000 samples gives p-values of 0 for all three classifiers. Note that these are not exact because of the sampling, but they are an indication that all of these classifiers perform better than chance. Permutation test for differences between classifiers. This is much easier.
Given two classifiers, you have a prediction for every observation. You just shuffle the assignment between predictions and classifiers like this: ``` def permutation_test_between_clfs(y_test, pred_proba_1, pred_proba_2, nsamples=1000): auc_differences = [] auc1 = roc_auc_score(y_test.ravel(), pred_proba_1.ravel()) auc2 = roc_auc_score(y_test.ravel(), pred_proba_2.ravel()) observed_difference = auc1 - auc2 for _ in range(nsamples): mask = np.random.randint(2, size=len(pred_proba_1.ravel())) p1 = np.where(mask, pred_proba_1.ravel(), pred_proba_2.ravel()) p2 = np.where(mask, pred_proba_2.ravel(), pred_proba_1.ravel()) auc1 = roc_auc_score(y_test.ravel(), p1) auc2 = roc_auc_score(y_test.ravel(), p2) auc_differences.append(auc1 - auc2) return observed_difference, np.mean(auc_differences >= observed_difference) ``` With this test and 1000 samples, I find no significant differences between the three classifiers: Naive Bayes vs Random Forest: diff=0.0029, p(diff>)=0.311 Naive Bayes vs MLP: diff=0.0117, p(diff>)=0.186 Random Forest vs MLP: diff=0.0088, p(diff>)=0.203 Here diff denotes the difference in ROC AUC between the two classifiers and p(diff>) is the empirical probability to observe a larger difference on a shuffled data set.", "best_answers_score":0.8, "library_name":"scikit-learn", "question_url":"https:\/\/stackoverflow.com\/questions\/52373318\/how-to-compare-roc-auc-scores-of-different-binary-classifiers-and-assess-statist", "best_answers_votes":16, "question_length":2873, "response_length":4353 }, { "question":"List of all classification algorithms [closed] Closed. This question needs to be more focused. It is not currently accepting answers. 
Closed 2 years ago. I have a classification problem and I would like to test all the available algorithms to test their performance in tackling the problem. If you know any classification algorithm other than these listed below, please list it here. ``` GradientBoostingClassifier() DecisionTreeClassifier() RandomForestClassifier() LinearDiscriminantAnalysis() LogisticRegression() KNeighborsClassifier() GaussianNB() ExtraTreesClassifier() BaggingClassifier() ```", "response":"The answers did not provide the full list of classifiers, so I have listed them below. ``` from sklearn.tree import ExtraTreeClassifier from sklearn.tree import DecisionTreeClassifier from sklearn.svm import OneClassSVM from sklearn.neural_network import MLPClassifier from sklearn.neighbors import RadiusNeighborsClassifier from sklearn.neighbors import KNeighborsClassifier from sklearn.multioutput import ClassifierChain from sklearn.multioutput import MultiOutputClassifier from sklearn.multiclass import OutputCodeClassifier from sklearn.multiclass import OneVsOneClassifier from sklearn.multiclass import OneVsRestClassifier from sklearn.linear_model import SGDClassifier from sklearn.linear_model import RidgeClassifierCV from sklearn.linear_model import RidgeClassifier from sklearn.linear_model import PassiveAggressiveClassifier from sklearn.gaussian_process import GaussianProcessClassifier from sklearn.ensemble import VotingClassifier from sklearn.ensemble import AdaBoostClassifier from sklearn.ensemble import GradientBoostingClassifier from sklearn.ensemble import BaggingClassifier from sklearn.ensemble import ExtraTreesClassifier from sklearn.ensemble import RandomForestClassifier from sklearn.naive_bayes import BernoulliNB from sklearn.calibration import CalibratedClassifierCV from sklearn.naive_bayes import GaussianNB from sklearn.semi_supervised import LabelPropagation from sklearn.semi_supervised import LabelSpreading from sklearn.discriminant_analysis import LinearDiscriminantAnalysis from sklearn.svm import LinearSVC from sklearn.linear_model import LogisticRegression from sklearn.linear_model import LogisticRegressionCV from sklearn.naive_bayes import MultinomialNB from sklearn.neighbors import NearestCentroid from sklearn.svm import NuSVC from sklearn.linear_model import Perceptron from sklearn.discriminant_analysis import QuadraticDiscriminantAnalysis from sklearn.svm import SVC from sklearn.mixture import GaussianMixture ``` Note that the mixture models are density estimators rather than classifiers in the strict sense, and that the old GMM, DPGMM and VBGMM classes were removed in scikit-learn 0.20 in favour of GaussianMixture and BayesianGaussianMixture.", "best_answers_score":0.8, "library_name":"scikit-learn", "question_url":"https:\/\/stackoverflow.com\/questions\/41844311\/list-of-all-classification-algorithms", "best_answers_votes":56, "question_length":1068, "response_length":2258 }, { "question":"Getting a low ROC AUC score but a high accuracy Using a LogisticRegression class in scikit-learn on a version of the flight delay dataset. 
I use pandas to select some columns: ```python df = df[[\"MONTH\", \"DAY_OF_MONTH\", \"DAY_OF_WEEK\", \"ORIGIN\", \"DEST\", \"CRS_DEP_TIME\", \"ARR_DEL15\"]] ``` I fill in NaN values with 0: ```python df = df.fillna({'ARR_DEL15': 0}) ``` Make sure the categorical columns are marked with the 'category' data type: ```python df[\"ORIGIN\"] = df[\"ORIGIN\"].astype('category') df[\"DEST\"] = df[\"DEST\"].astype('category') ``` Then call get_dummies() from pandas: ```python df = pd.get_dummies(df) ``` Now I train and test my data set: ```python from sklearn.linear_model import LogisticRegression lr = LogisticRegression() test_set, train_set = train_test_split(df, test_size=0.2, random_state=42) train_set_x = train_set.drop('ARR_DEL15', axis=1) train_set_y = train_set[\"ARR_DEL15\"] test_set_x = test_set.drop('ARR_DEL15', axis=1) test_set_y = test_set[\"ARR_DEL15\"] lr.fit(train_set_x, train_set_y) ``` Once I call the score method I get around 0.867. However, when I call the roc_auc_score method I get a much lower number of around 0.583 ```python probabilities = lr.predict_proba(test_set_x) roc_auc_score(test_set_y, probabilities[:, 1]) ``` Is there any reason why the ROC AUC is much lower than what the score method provides?", "response":"To start with, saying that an AUC of 0.583 is \"lower\" than a score* of 0.867 is exactly like comparing apples with oranges. [* I assume your score is mean accuracy, but this is not critical for this discussion - it could be anything else in principle] According to my experience at least, most ML practitioners think that the AUC score measures something different from what it actually does: the common (and unfortunate) use is just like any other the-higher-the-better metric, like accuracy, which may naturally lead to puzzles like the one you express yourself. The truth is that, roughly speaking, the AUC measures the performance of a binary classifier averaged across all possible decision thresholds. 
The (decision) threshold in binary classification is the value above which we decide to label a sample as 1 (recall that probabilistic classifiers actually return a value p in [0, 1], usually interpreted as a probability - in scikit-learn it is what predict_proba returns). Now, this threshold, in methods like scikit-learn predict which return labels (1\/0), is set to 0.5 by default, but this is not the only possibility, and it may not even be desirable in some cases (imbalanced data, for example). The point to take home is that: when you ask for score (which under the hood uses predict, i.e. labels and not probabilities), you have also implicitly set this threshold to 0.5; when you ask for AUC (which, in contrast, uses probabilities returned with predict_proba), no threshold is involved, and you get (something like) the accuracy averaged across all possible thresholds. Given these clarifications, your particular example provides a very interesting case in point: I get a good-enough accuracy ~ 87% with my model; should I care that, according to an AUC of 0.58, my classifier does only slightly better than mere random guessing? Provided that the class representation in your data is reasonably balanced, the answer by now should hopefully be obvious: no, you should not care; for all practical cases, what you care for is a classifier deployed with a specific threshold, and what this classifier does in a purely theoretical and abstract situation when averaged across all possible thresholds should be of very little interest to a practitioner (it is of interest for a researcher coming up with a new algorithm, but I assume that this is not your case). (For imbalanced data, the argument changes; accuracy here is practically useless, and you should consider precision, recall, and the confusion matrix instead). 
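This threshold dependence is easy to see on a toy example (the labels and probabilities below are made up): accuracy changes as the decision threshold moves, while the AUC, computed from the probabilities alone, involves no threshold at all.

```python
import numpy as np
from sklearn.metrics import accuracy_score, roc_auc_score

y_true = np.array([0, 0, 0, 1, 1, 1, 0, 1])                      # toy ground truth
proba = np.array([0.1, 0.4, 0.35, 0.8, 0.65, 0.9, 0.45, 0.55])   # toy predict_proba output

# AUC is computed from the probabilities directly -- no threshold involved
auc = roc_auc_score(y_true, proba)

# Accuracy requires committing to a threshold; 0.5 is only the default
accs = {thr: accuracy_score(y_true, (proba >= thr).astype(int))
        for thr in (0.3, 0.5, 0.7)}
print(auc, accs)
```

With these numbers the AUC is 1.0 (every positive is ranked above every negative), while the accuracy ranges from 0.625 to 1.0 depending on the cut-off.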
For this reason, AUC has started receiving serious criticism in the literature (don't misread this - the analysis of the ROC curve itself is highly informative and useful); the Wikipedia entry and the references provided therein are highly recommended reading: Thus, the practical value of the AUC measure has been called into question, raising the possibility that the AUC may actually introduce more uncertainty into machine learning classification accuracy comparisons than resolution. [...] One recent explanation of the problem with ROC AUC is that reducing the ROC Curve to a single number ignores the fact that it is about the tradeoffs between the different systems or performance points plotted and not the performance of an individual system Emphasis mine - see also On the dangers of AUC...", "best_answers_score":0.8, "library_name":"scikit-learn", "question_url":"https:\/\/stackoverflow.com\/questions\/47104129\/getting-a-low-roc-auc-score-but-a-high-accuracy", "best_answers_votes":50, "question_length":1351, "response_length":3340 }, { "question":"Using mca package in Python I am trying to use the mca package to do multiple correspondence analysis in Python. I am a bit confused as to how to use it. With PCA I would expect to fit some data (i.e. find principal components for those data) and then later I would be able to use the principal components that I found to transform unseen data. Based on the MCA documentation, I cannot work out how to do this last step. I also don't understand what any of the weirdly cryptically named properties and methods do (i.e. .E, .L, .K, .k etc). 
So far if I have a DataFrame with a column containing strings (assume this is the only column in the DF) I would do something like ``` import mca ca = mca.MCA(pd.get_dummies(df, drop_first=True)) ``` from what I can gather ``` ca.fs_r(1) ``` is the transformation of the data in df and ``` ca.L ``` is supposed to be the eigenvalues (although I get a vector of 1s that is one element fewer that my number of features?). now if I had some more data with the same features, let's say df_new and assuming I've already converted this correctly to dummy variables, how do I find the equivalent of ca.fs_r(1) for the new data", "response":"One other method is to use the library prince which enables easy usage of tools such as: Multiple correspondence analysis (MCA) Principal component analysis (PCA) Multiple factor analysis (MFA) You can begin first by installing with: ``` pip install --user prince ``` To use MCA, it is fairly simple and can be done in a couple of steps (just like sklearn PCA method.) We first build our dataframe. ``` import pandas as pd import prince X = pd.read_csv('https:\/\/archive.ics.uci.edu\/ml\/machine-learning-databases\/balloons\/adult+stretch.data') X.columns = ['Color', 'Size', 'Action', 'Age', 'Inflated'] print(X.head()) mca = prince.MCA() # outputs >> Color Size Action Age Inflated 0 YELLOW SMALL STRETCH ADULT T 1 YELLOW SMALL STRETCH CHILD F 2 YELLOW SMALL DIP ADULT F 3 YELLOW SMALL DIP CHILD F 4 YELLOW LARGE STRETCH ADULT T ``` Followed by calling the fit and transform method. ``` mca = mca.fit(X) # same as calling ca.fs_r(1) mca = mca.transform(X) # same as calling ca.fs_r_sup(df_new) for *another* test set. 
print(mca) # outputs >> 0 1 0 0.705387 8.373126e-15 1 -0.386586 8.336230e-15 2 -0.386586 6.335675e-15 3 -0.852014 6.726393e-15 4 0.783539 -6.333333e-01 5 0.783539 -6.333333e-01 6 -0.308434 -6.333333e-01 7 -0.308434 -6.333333e-01 8 -0.773862 -6.333333e-01 9 0.783539 6.333333e-01 10 0.783539 6.333333e-01 11 -0.308434 6.333333e-01 12 -0.308434 6.333333e-01 13 -0.773862 6.333333e-01 14 0.861691 -5.893240e-15 15 0.861691 -5.893240e-15 16 -0.230282 -5.930136e-15 17 -0.230282 -7.930691e-15 18 -0.695710 -7.539973e-15 ``` You can even print out the picture diagram of it, since it incorporates matplotlib library. ``` ax = mca.plot_coordinates( X=X, ax=None, figsize=(6, 6), show_row_points=True, row_points_size=10, show_row_labels=False, show_column_points=True, column_points_size=30, show_column_labels=False, legend_n_cols=1 ) ax.get_figure().savefig('images\/mca_coordinates.svg') ```", "best_answers_score":0.8, "library_name":"scikit-learn", "question_url":"https:\/\/stackoverflow.com\/questions\/48521740\/using-mca-package-in-python", "best_answers_votes":37, "question_length":1159, "response_length":1902 }, { "question":"How do I calculate the Adjusted R-squared score using scikit-learn? I'm already using the r2_score function but don't understand how I can get the \"adjusted\" R^2 score from this. The description at this page doesn't mention it - maybe it's the adjusted score by default?", "response":"Adjusted R2 requires number of independent variables as well. That's why it will not be calculated using such an independent metrics function (as we are not providing, how ypred was calculated). 
However you can calculate the adjusted R2 from R2 with a simple formula: Adjusted R2 = 1 - (1 - R2) * (n - 1) \/ (n - p - 1), where n is the number of observations in the sample and p is the number of independent variables in the model", "best_answers_score":0.8, "library_name":"scikit-learn", "question_url":"https:\/\/stackoverflow.com\/questions\/49381661\/how-do-i-calculate-the-adjusted-r-squared-score-using-scikit-learn", "best_answers_votes":28, "question_length":270, "response_length":371 }, { "question":"python tsne.transform does not exist? I am trying to transform two datasets: x_train and x_test using tsne. I assume the way to do this is to fit tsne to x_train, and then transform x_test and x_train. But, I am not able to transform any of the datasets. tsne = TSNE(random_state = 420, n_components=2, verbose=1, perplexity=5, n_iter=350).fit(x_train) I assume that tsne has been fitted to x_train. But, when I do this: x_train_tse = tsne.transform(x_subset) I get: AttributeError: 'TSNE' object has no attribute 'transform' Any help will be appreciated. (I know I could do fit_transform, but wouldn't I get the same error on x_test?)", "response":"Judging by the documentation of sklearn, TSNE simply does not have any transform method. Also, TSNE is an unsupervised method for dimensionality reduction\/visualization, so it does not really work with a TRAIN and TEST. You simply take all of your data and use fit_transform to have the transformation and plot it. EDIT - It is actually not possible to learn a transformation and reuse it on different data (i.e. Train and Test), as T-sne does not learn a mapping function on a lower dimensional space, but rather runs an iterative procedure on a subspace to find an equilibrium that minimizes a loss\/distance ON SOME DATA. Therefore if you want to preprocess and reduce dimensionality of both Train and Test datasets, the way to go is PCA\/SVD or Autoencoders. 
T-Sne will only help you for unsupervised tasks :)", "best_answers_score":0.8, "library_name":"scikit-learn", "question_url":"https:\/\/stackoverflow.com\/questions\/59214232\/python-tsne-transform-does-not-exist", "best_answers_votes":28, "question_length":635, "response_length":812 }, { "question":"How do I create a sklearn.datasets.base.Bunch object in scikit-learn from my own data? In most of the Scikit-learn algorithms, the data must be loaded as a Bunch object. For many examples in the tutorial load_files() or other functions are used to populate the Bunch object. Functions like load_files() expect data to be present in a certain format, but I have data stored in a different format, namely a CSV file with strings for each field. How do I parse this and load data in the Bunch object format?", "response":"You can do it like this (note that in recent scikit-learn versions the Bunch class lives in sklearn.utils): ``` import numpy as np from sklearn.utils import Bunch examples = [] examples.append('some text') examples.append('another example text') examples.append('example 3') target = np.zeros((3,), dtype=np.int64) target[0] = 0 target[1] = 1 target[2] = 0 dataset = Bunch(data=examples, target=target) ```", "best_answers_score":0.8, "library_name":"scikit-learn", "question_url":"https:\/\/stackoverflow.com\/questions\/20485592\/how-do-i-create-a-sklearn-datasets-base-bunch-object-in-scikit-learn-from-my-own", "best_answers_votes":24, "question_length":501, "response_length":337 }, { "question":"how to split a dataset into training and validation set keeping ratio between classes? 
I have a multi class classification problem and my dataset is skewed: I have 100 instances of a particular class and, say, only 10 of some different class. I want to split my dataset keeping the ratio between classes, so if I have 100 instances of a particular class and I want 30% of records to go in the training set, I want to have there 30 instances of my 100-record represented class and 3 instances of my 10-record represented class, and so on.", "response":"You can use sklearn's StratifiedKFold, from the online docs: Stratified K-Folds cross validation iterator Provides train\/test indices to split data in train test sets. This cross-validation object is a variation of KFold that returns stratified folds. The folds are made by preserving the percentage of samples for each class. ``` >>> from sklearn import cross_validation >>> X = np.array([[1, 2], [3, 4], [1, 2], [3, 4]]) >>> y = np.array([0, 0, 1, 1]) >>> skf = cross_validation.StratifiedKFold(y, n_folds=2) >>> len(skf) 2 >>> print(skf) sklearn.cross_validation.StratifiedKFold(labels=[0 0 1 1], n_folds=2, shuffle=False, random_state=None) >>> for train_index, test_index in skf: ... print(\"TRAIN:\", train_index, \"TEST:\", test_index) ... X_train, X_test = X[train_index], X[test_index] ... y_train, y_test = y[train_index], y[test_index] TRAIN: [1 3] TEST: [0 2] TRAIN: [0 2] TEST: [1 3] ``` This preserves your class ratios, so the splits retain them; it works fine with pandas DataFrames. 
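In current scikit-learn the cross-validation utilities live in sklearn.model_selection rather than sklearn.cross_validation, and for a single split the stratify argument of train_test_split gives the same ratio-preserving behaviour; a minimal sketch with made-up skewed labels (100 vs 10 instances, as in the question):

```python
import numpy as np
from sklearn.model_selection import train_test_split

X = np.arange(220).reshape(110, 2)     # made-up features
y = np.array([0] * 100 + [1] * 10)     # skewed labels: 100 vs 10 instances

# stratify=y keeps the 10:1 class ratio in both splits
X_train, X_test, y_train, y_test = train_test_split(
    X, y, train_size=0.3, stratify=y, random_state=0)

print(np.bincount(y_train))  # 30 of the majority class, 3 of the minority
print(np.bincount(y_test))
```

StratifiedKFold and StratifiedShuffleSplit live in the same module when repeated stratified splits are needed.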
As suggested by @Ali_m you could use StratifiedShuffleSplit, which accepts a split ratio param: sss = StratifiedShuffleSplit(y, 3, test_size=0.7, random_state=0) would put 70% of the data in the test split (leaving 30% for training).", "best_answers_score":0.8, "library_name":"scikit-learn", "question_url":"https:\/\/stackoverflow.com\/questions\/29082001\/how-to-split-a-dataset-into-training-and-validation-set-keeping-ratio-between-cl", "best_answers_votes":20, "question_length":524, "response_length":1203 }, { "question":"PLS-DA algorithm in python Partial Least Squares (PLS) algorithm is implemented in the scikit-learn library, as documented here: http:\/\/scikit-learn.org\/0.12\/auto_examples\/plot_pls.html In the case where y is a binary vector, a variant of this algorithm is being used, the Partial least squares Discriminant Analysis (PLS-DA) algorithm. Does the PLSRegression module in sklearn.pls also implement this binary case? If not, where can I find a python implementation for it? In my binary case, I'm trying to use the PLSRegression: ``` pls = PLSRegression(n_components=10) pls.fit(x, y) x_r, y_r = pls.transform(x, y, copy=True) ``` In the transform function, the code raises an exception at this line: ``` y_scores = np.dot(Yc, self.y_rotations_) ``` The error message is \"ValueError: matrices are not aligned\". Yc is the normalized y vector, and self.y_rotations_ = [1.]. In the fit function, self.y_rotations_ = np.ones(1) if the original y is a univariate vector (y.shape[1]=1).", "response":"PLS-DA is really a \"trick\" to use PLS for categorical outcomes instead of the usual continuous vector\/matrix. The trick consists of creating a dummy identity matrix of zeros\/ones which represents membership to each of the categories. So if you have a binary outcome to be predicted (i.e. male\/female, yes\/no, etc.) your dummy matrix will have TWO columns representing the membership to either category. For example, consider the outcome gender for four people: 2 males and 2 females. 
The dummy matrix should be coded as : ``` import numpy as np dummy=np.array([[1,1,0,0],[0,0,1,1]]).T ``` , where each column represents the membership to the two categories (male, female) Then your model for data in variable Xdata ( shape 4 rows,arbitrary columns ) would be: ``` myplsda=PLSRegression().fit(X=Xdata,Y=dummy) ``` The predicted categories can be extracted from comparison of the two indicator variables in mypred: ``` mypred= myplsda.predict(Xdata) ``` For each row\/case the predicted gender is that with the highest predicted membership.", "best_answers_score":0.8, "library_name":"scikit-learn", "question_url":"https:\/\/stackoverflow.com\/questions\/18390150\/pls-da-algorithm-in-python", "best_answers_votes":34, "question_length":972, "response_length":1037 }, { "question":"sklearn.compose.ColumnTransformer: fit_transform() takes 2 positional arguments but 3 were given I am working on an example of using ColumnTransformer and LabelEncoder for pre-processing the well-known Titanic data set X: ``` Age Embarked Fare Sex 0 22.0 S 7.2500 male 1 38.0 C 71.2833 female 2 26.0 S 7.9250 female 3 35.0 S 53.1000 female 4 35.0 S 8.0500 male ``` Calling the transformer like this: ``` from sklearn.compose import ColumnTransformer from sklearn.preprocessing import LabelEncoder ColumnTransformer( transformers=[ (\"label-encode categorical\", LabelEncoder(), [\"Sex\", \"Embarked\"]) ] ).fit(X).transform(X) ``` results in: ``` --------------------------------------------------------------------------- TypeError Traceback (most recent call last) in 4 (\"label-encode categorical\", LabelEncoder(), [\"Sex\", \"Embarked\"]) 5 ] ----> 6 ).fit(X).transform(X) ~\/anaconda3\/lib\/python3.7\/site-packages\/sklearn\/compose\/_column_transformer.py in fit(self, X, y) 418 # we use fit_transform to make sure to set sparse_output_ (for which we 419 # need the transformed data) to have consistent output type in predict --> 420 self.fit_transform(X, y=y) 421 return self 422 
~\/anaconda3\/lib\/python3.7\/site-packages\/sklearn\/compose\/_column_transformer.py in fit_transform(self, X, y) 447 self._validate_remainder(X) 448 --> 449 result = self._fit_transform(X, y, _fit_transform_one) 450 451 if not result: ~\/anaconda3\/lib\/python3.7\/site-packages\/sklearn\/compose\/_column_transformer.py in _fit_transform(self, X, y, func, fitted) 391 _get_column(X, column), y, weight) 392 for _, trans, column, weight in self._iter( --> 393 fitted=fitted, replace_strings=True)) 394 except ValueError as e: 395 if \"Expected 2D array, got 1D array instead\" in str(e): ~\/anaconda3\/lib\/python3.7\/site-packages\/sklearn\/externals\/joblib\/parallel.py in __call__(self, iterable) 915 # remaining jobs. 916 self._iterating = False --> 917 if self.dispatch_one_batch(iterator): 918 self._iterating = self._original_iterator is not None 919 ~\/anaconda3\/lib\/python3.7\/site-packages\/sklearn\/externals\/joblib\/parallel.py in dispatch_one_batch(self, iterator) 757 return False 758 else: --> 759 self._dispatch(tasks) 760 return True 761 ~\/anaconda3\/lib\/python3.7\/site-packages\/sklearn\/externals\/joblib\/parallel.py in _dispatch(self, batch) 714 with self._lock: 715 job_idx = len(self._jobs) --> 716 job = self._backend.apply_async(batch, callback=cb) 717 # A job can complete so quickly than its callback is 718 # called before we get here, causing self._jobs to ~\/anaconda3\/lib\/python3.7\/site-packages\/sklearn\/externals\/joblib\/_parallel_backends.py in apply_async(self, func, callback) 180 def apply_async(self, func, callback=None): 181 \"\"\"Schedule a func to be run\"\"\" --> 182 result = ImmediateResult(func) 183 if callback: 184 callback(result) ~\/anaconda3\/lib\/python3.7\/site-packages\/sklearn\/externals\/joblib\/_parallel_backends.py in __init__(self, batch) 547 # Don't delay the application, to avoid keeping the input 548 # arguments in memory --> 549 self.results = batch() 550 551 def get(self): 
~\/anaconda3\/lib\/python3.7\/site-packages\/sklearn\/externals\/joblib\/parallel.py in __call__(self) 223 with parallel_backend(self._backend, n_jobs=self._n_jobs): 224 return [func(*args, **kwargs) --> 225 for func, args, kwargs in self.items] 226 227 def __len__(self): ~\/anaconda3\/lib\/python3.7\/site-packages\/sklearn\/externals\/joblib\/parallel.py in (.0) 223 with parallel_backend(self._backend, n_jobs=self._n_jobs): 224 return [func(*args, **kwargs) --> 225 for func, args, kwargs in self.items] 226 227 def __len__(self): ~\/anaconda3\/lib\/python3.7\/site-packages\/sklearn\/pipeline.py in _fit_transform_one(transformer, X, y, weight, **fit_params) 612 def _fit_transform_one(transformer, X, y, weight, **fit_params): 613 if hasattr(transformer, 'fit_transform'): --> 614 res = transformer.fit_transform(X, y, **fit_params) 615 else: 616 res = transformer.fit(X, y, **fit_params).transform(X) TypeError: fit_transform() takes 2 positional arguments but 3 were given ``` What is the problem with **fit_params here? To me this looks like a bug in sklearn or at least an incompatibility.", "response":"There are two major reasons why this will not work for your purpose. LabelEncoder() is designed to be used for the target variable (y). That is the reason for getting the positional argument error, when ColumnTransformer() tries to feed X, y=None, fit_params={}. From Documentation: Encode labels with value between 0 and n_classes-1. fit(y) Fit label encoder Parameters: y : array-like of shape (n_samples,) Target values. Even if you work around the empty dictionary, LabelEncoder() still cannot take a 2D array (basically multiple features at a time) because it takes only 1D y values. Short answer - we should not be using LabelEncoder() for input features. Now, what is the solution to encode the input features? Use OrdinalEncoder() if your features are ordinal, or OneHotEncoder() in the case of nominal features. 
Example: ```py >>> from sklearn.compose import ColumnTransformer >>> from sklearn.preprocessing import OrdinalEncoder, OneHotEncoder >>> X = np.array([[1000., 100., 'apple', 'green'], ... [1100., 100., 'orange', 'blue']]) >>> ct = ColumnTransformer( ... [(\"ordinal\", OrdinalEncoder(), [0, 1]), (\"nominal\", OneHotEncoder(), [2, 3])]) >>> ct.fit_transform(X) array([[0., 0., 1., 0., 0., 1.], [1., 0., 0., 1., 1., 0.]]) ```", "best_answers_score":0.8, "library_name":"scikit-learn", "question_url":"https:\/\/stackoverflow.com\/questions\/55953284\/sklearn-compose-columntransformer-fit-transform-takes-2-positional-arguments", "best_answers_votes":28, "question_length":4144, "response_length":1261 }, { "question":"Get selected feature names TFIDF Vectorizer I'm using python and I want to get the TFIDF representation for a large corpus of data, I'm using the following code to convert the docs into their TFIDF form. ``` from sklearn.feature_extraction.text import TfidfVectorizer tfidf_vectorizer = TfidfVectorizer( min_df=1, # min count for relevant vocabulary max_features=4000, # maximum number of features strip_accents='unicode', # replace all accented unicode char # by their corresponding ASCII char analyzer='word', # features made of words token_pattern=r'\\w{1,}', # tokenize only words of 4+ chars ngram_range=(1, 1), # features made of a single tokens use_idf=True, # enable inverse-document-frequency reweighting smooth_idf=True, # prevents zero division for unseen words sublinear_tf=False) tfidf_df = tfidf_vectorizer.fit_transform(df['text']) ``` Here I pass a parameter max_features. The vectorizer will select the best features and return a scipy sparse matrix. Problem is I dont know which features are getting selected and how do I map those feature names back to the scipy matrix I get? Basically for the n selected features from the m number of documents, I want a m x n matrix with the selected features as the column names instead of their integer ids. 
How do I accomplish this?", "response":"You can use tfidf_vectorizer.get_feature_names(). This will print the feature names (terms) selected from the raw documents. You can also use the tfidf_vectorizer.vocabulary_ attribute to get a dict which will map the feature names to their indices, but it will not be sorted. The array from get_feature_names() will be sorted by index. In scikit-learn 1.0 and later, use get_feature_names_out() instead; get_feature_names() was deprecated and subsequently removed.", "best_answers_score":0.8, "library_name":"scikit-learn", "question_url":"https:\/\/stackoverflow.com\/questions\/42525072\/get-selected-feature-names-tfidf-vectorizer", "best_answers_votes":30, "question_length":1289, "response_length":334 }, { "question":"ImportError: cannot import name 'cross_validate' I'm trying to do: ``` from sklearn.model_selection import cross_validate ``` as mentioned here. But get the error: ``` ImportError: cannot import name 'cross_validate' ``` Everything else in Sklearn seems to work fine, it's just this bit. Error even occurs when I run this one line and nothing else.", "response":"cross-validate is new in version 0.19.0 (change log): Cross validation is now able to return the results from multiple metric evaluations. The new model_selection.cross_validate can return many scores on the test data as well as training set performance and timings, and we have extended the scoring and refit parameters for grid\/randomized search to handle multiple metrics. In order to use it, update your scikit-learn installation with either conda upgrade scikit-learn if you are using conda, or pip install --upgrade scikit-learn. 
If updating is not possible, you can still use cross_val_score instead.", "best_answers_score":0.8, "library_name":"scikit-learn", "question_url":"https:\/\/stackoverflow.com\/questions\/45661139\/importerror-cannot-import-name-cross-validate", "best_answers_votes":22, "question_length":348, "response_length":606 }, { "question":"import check_arrays from sklearn I'm trying to use a svm function from the scikit learn package for python but I get the error message: ``` from sklearn.utils.validation import check_arrays ``` ImportError: cannot import name 'check_arrays' I'm using python 3.4. Can anyone give me an advice? Thanks in advance.", "response":"This method was removed in 0.16, replaced by a (very different) check_array function. You are likely getting this error because you didn't upgrade from 0.15 to 0.16 properly. [Or because you relied on a not-really-public function in sklearn]. See http:\/\/scikit-learn.org\/dev\/install.html#canopy-and-anaconda-for-all-supported-platforms . If you installed using anaconda \/ conda, you should use the conda mechanism to upgrade, not pip. Otherwise old .pyc files might remain in your folder.", "best_answers_score":0.8, "library_name":"scikit-learn", "question_url":"https:\/\/stackoverflow.com\/questions\/29596237\/import-check-arrays-from-sklearn", "best_answers_votes":20, "question_length":311, "response_length":488 }, { "question":"VotingClassifier: Different Feature Sets I have two different feature sets (so, with same number of rows and the labels are the same), in my case DataFrames: df1: ``` | A | B | C | ------------- | 1 | 4 | 2 | | 1 | 4 | 8 | | 2 | 1 | 1 | | 2 | 3 | 0 | | 3 | 2 | 5 | ``` df2: ``` | E | F | --------- | 6 | 1 | | 1 | 3 | | 8 | 1 | | 2 | 8 | | 5 | 2 | ``` labels: ``` | labels | ---------- | 5 | | 5 | | 1 | | 7 | | 3 | ``` I want to use them to train a VotingClassifier. But the fitting step only allows to specify a single feature set. Goal is to fit clf1 with df1 and clf2 with df2. 
``` eclf = VotingClassifier(estimators=[('df1-clf', clf1), ('df2-clf', clf2)], voting='soft') eclf.fit(...) ``` How should I proceed with this kind of situation? Is there any easy solution?", "response":"Its pretty easy to make custom functions to do what you want to achieve. Import the prerequisites: ``` import numpy as np from sklearn.preprocessing import LabelEncoder def fit_multiple_estimators(classifiers, X_list, y, sample_weights = None): # Convert the labels `y` using LabelEncoder, because the predict method is using index-based pointers # which will be converted back to original data later. le_ = LabelEncoder() le_.fit(y) transformed_y = le_.transform(y) # Fit all estimators with their respective feature arrays estimators_ = [clf.fit(X, y) if sample_weights is None else clf.fit(X, y, sample_weights) for clf, X in zip([clf for _, clf in classifiers], X_list)] return estimators_, le_ def predict_from_multiple_estimator(estimators, label_encoder, X_list, weights = None): # Predict 'soft' voting with probabilities pred1 = np.asarray([clf.predict_proba(X) for clf, X in zip(estimators, X_list)]) pred2 = np.average(pred1, axis=0, weights=weights) pred = np.argmax(pred2, axis=1) # Convert integer predictions to original labels: return label_encoder.inverse_transform(pred) ``` The logic is taken from VotingClassifier source. Now test the above methods. 
First get some data: ``` from sklearn.datasets import load_iris data = load_iris() X = data.data y = [] #Convert int classes to string labels for x in data.target: if x==0: y.append('setosa') elif x==1: y.append('versicolor') else: y.append('virginica') ``` Split the data into train and test: ``` from sklearn.model_selection import train_test_split X_train, X_test, y_train, y_test = train_test_split(X, y) ``` Divide the X into different feature datas: ``` X_train1, X_train2 = X_train[:,:2], X_train[:,2:] X_test1, X_test2 = X_test[:,:2], X_test[:,2:] X_train_list = [X_train1, X_train2] X_test_list = [X_test1, X_test2] ``` Get list of classifiers: ``` from sklearn.neighbors import KNeighborsClassifier from sklearn.svm import SVC # Make sure the number of estimators here are equal to number of different feature datas classifiers = [('knn', KNeighborsClassifier(3)), ('svc', SVC(kernel=\"linear\", C=0.025, probability=True))] ``` Fit the classifiers with the data: ``` fitted_estimators, label_encoder = fit_multiple_estimators(classifiers, X_train_list, y_train) ``` Predict using the test data: ``` y_pred = predict_from_multiple_estimator(fitted_estimators, label_encoder, X_test_list) ``` Get accuracy of predictions: ``` from sklearn.metrics import accuracy_score print(accuracy_score(y_test, y_pred)) ``` Feel free to ask if any doubt.", "best_answers_score":0.8, "library_name":"scikit-learn", "question_url":"https:\/\/stackoverflow.com\/questions\/45074579\/votingclassifier-different-feature-sets", "best_answers_votes":17, "question_length":771, "response_length":2518 }, { "question":"ValueError: This solver needs samples of at least 2 classes in the data, but the data contains only one class: 0.0 I have applied Logistic Regression on train set after splitting the data set into test and train sets, but I got the above error. I tried to work it out, and when i tried to print my response vector y_train in the console it prints integer values like 0 or 1. 
But when I wrote it into a file I found the values were float numbers like 0.0 and 1.0. If that's the problem, how can I overcome it? ``` lenreg = LogisticRegression() print y_train[0:10] y_train.to_csv(path='ytard.csv') lenreg.fit(X_train, y_train) y_pred = lenreg.predict(X_test) print metrics.accuracy_score(y_test, y_pred) ``` The stack trace is as follows: ``` Traceback (most recent call last): File \"\/home\/amey\/prog\/pd.py\", line 82, in <module> lenreg.fit(X_train, y_train) File \"\/usr\/lib\/python2.7\/dist-packages\/sklearn\/linear_model\/logistic.py\", line 1154, in fit self.max_iter, self.tol, self.random_state) File \"\/usr\/lib\/python2.7\/dist-packages\/sklearn\/svm\/base.py\", line 885, in _fit_liblinear \" class: %r\" % classes_[0]) ValueError: This solver needs samples of at least 2 classes in the data, but the data contains only one class: 0.0 ``` Meanwhile, I've come across the linked question, which was unanswered. Is there a solution?", "response":"The problem here is that your y_train vector, for whatever reason, only has zeros. It is actually not your fault, and it's kind of a bug (I think). The classifier needs 2 classes or else it throws this error. It makes sense: if your y_train vector only has zeros (i.e. only 1 class), then the classifier doesn't really need to do any work, since all predictions should just be the one class. In my opinion the classifier should still complete and just predict the one class (all zeros in this case) and then throw a warning, but it doesn't. It throws the error instead.
A way to check for this condition is like this: ``` lenreg = LogisticRegression() print y_train[0:10] y_train.to_csv(path='ytard.csv') # np.sum(y_train) equals len(y_train) when every label is 1, and 0 when every label is 0 if np.sum(y_train) in [len(y_train), 0]: print \"all one class\" #do something else else: #OK to proceed lenreg.fit(X_train, y_train) y_pred = lenreg.predict(X_test) print metrics.accuracy_score(y_test, y_pred) ``` To overcome the problem more easily, I would recommend just including more samples in your test set, like 100 or 1000 instead of 10.", "best_answers_score":0.8, "library_name":"scikit-learn", "question_url":"https:\/\/stackoverflow.com\/questions\/40524790\/valueerror-this-solver-needs-samples-of-at-least-2-classes-in-the-data-but-the", "best_answers_votes":23, "question_length":1293, "response_length":1056 }, { "question":"How to extract sklearn decision tree rules to pandas boolean conditions? There are so many posts like this about how to extract sklearn decision tree rules but I could not find any about using pandas. Take this data and model for example, as below ``` # Create Decision Tree classifier object clf = DecisionTreeClassifier(criterion=\"entropy\", max_depth=3) # Train Decision Tree classifier clf = clf.fit(X_train, y_train) ``` The result: Expected: There are 8 rules in this example. From left to right, notice that the dataframe is df ``` r1 = (df['glucose']127.5) & (df['bmi']>28.15) & (df['glucose']>158.5) ``` I'm not a master of extracting sklearn decision tree rules. Getting the pandas boolean conditions will help me calculate samples and other metrics for each rule. So I want to extract each rule to a pandas boolean condition.", "response":"First of all let's use the scikit documentation on decision tree structure to get information about the tree that was constructed: ``` n_nodes = clf.tree_.node_count children_left = clf.tree_.children_left children_right = clf.tree_.children_right feature = clf.tree_.feature threshold = clf.tree_.threshold ``` We then define two recursive functions.
The first one will find the path from the tree's root to create a specific node (all the leaves in our case). The second one will write the specific rules used to create a node using its creation path: ``` def find_path(node_numb, path, x): path.append(node_numb) if node_numb == x: return True left = False right = False if (children_left[node_numb] !=-1): left = find_path(children_left[node_numb], path, x) if (children_right[node_numb] !=-1): right = find_path(children_right[node_numb], path, x) if left or right : return True path.remove(node_numb) return False def get_rule(path, column_names): mask = '' for index, node in enumerate(path): #We check if we are not in the leaf if index!=len(path)-1: # Do we go under or over the threshold ? if (children_left[node] == path[index+1]): mask += \"(df['{}']<= {}) \\t \".format(column_names[feature[node]], threshold[node]) else: mask += \"(df['{}']> {}) \\t \".format(column_names[feature[node]], threshold[node]) # We insert the & at the right places mask = mask.replace(\"\\t\", \"&\", mask.count(\"\\t\") - 1) mask = mask.replace(\"\\t\", \"\") return mask ``` Finally, we use those two functions to first store the creation path of each leaf.
And then to store the rules used to create each leaf : ``` # Leaves leave_id = clf.apply(X_test) paths ={} for leaf in np.unique(leave_id): path_leaf = [] find_path(0, path_leaf, leaf) paths[leaf] = np.unique(np.sort(path_leaf)) rules = {} for key in paths: rules[key] = get_rule(paths[key], pima.columns) ``` With the data you gave the output is : ``` rules = {3: \"(df['insulin'] 9.100000381469727) \", 6: \"(df['insulin'] 26.450000762939453) & (df['skin'] 26.450000762939453) & (df['skin']> 27.5) \", 10: \"(df['insulin']> 127.5) & (df['bp'] 127.5) & (df['bp'] 145.5) \", 13: \"(df['insulin']> 127.5) & (df['bp']> 28.149999618530273) & (df['insulin'] 127.5) & (df['bp']> 28.149999618530273) & (df['insulin']> 158.5) \"} ``` Since the rules are strings, you can't directly call them using df[rules[3]], you have to use the eval function like so df[eval(rules[3])]", "best_answers_score":0.8, "library_name":"scikit-learn", "question_url":"https:\/\/stackoverflow.com\/questions\/56334210\/how-to-extract-sklearn-decision-tree-rules-to-pandas-boolean-conditions", "best_answers_votes":27, "question_length":829, "response_length":2303 }, { "question":"Cross-validation in sklearn: do I need to call fit() as well as cross_val_score()? I would like to use k-fold cross validation while learning a model. 
So far I am doing it like this: ``` # splitting dataset into training and test sets X_train, X_test, y_train, y_test = train_test_split(dataset_1, df1['label'], test_size=0.25, random_state=4222) # learning a model model = MultinomialNB() model.fit(X_train, y_train) scores = cross_val_score(model, X_train, y_train, cv=5) ``` At this step I am not quite sure whether I should use model.fit() or not, because in the official documentation of sklearn they do not fit but just call cross_val_score as following (they do not even split the data into training and test sets): ``` from sklearn.model_selection import cross_val_score clf = svm.SVC(kernel='linear', C=1) scores = cross_val_score(clf, iris.data, iris.target, cv=5) ``` I would like to tune the hyper parameters of the model while learning the model. What is the right pipeline?", "response":"If you want to do hyperparameter selection then look into RandomizedSearchCV or GridSearchCV. If you want to use the best model afterwards, then call either of these with refit=True and then use best_estimator_. 
``` from sklearn.linear_model import LogisticRegression from sklearn.model_selection import RandomizedSearchCV log_params = {'penalty': ['l1', 'l2'], 'C': [1E-7, 1E-6, 1E-5, 1E-4, 1E-3]} clf = LogisticRegression() search = RandomizedSearchCV(clf, scoring='average_precision', cv=10, n_iter=10, param_distributions=log_params, refit=True, n_jobs=-1) search.fit(X_train, y_train) clf = search.best_estimator_ ``` http:\/\/scikit-learn.org\/stable\/modules\/generated\/sklearn.model_selection.RandomizedSearchCV.html", "best_answers_score":0.8, "library_name":"scikit-learn", "question_url":"https:\/\/stackoverflow.com\/questions\/50329349\/cross-validation-in-sklearn-do-i-need-to-call-fit-as-well-as-cross-val-score", "best_answers_votes":16, "question_length":987, "response_length":719 }, { "question":"How to standardize data with sklearn's cross_val_score() Let's say I want to use a LinearSVC to perform k-fold-cross-validation on a dataset. How would I perform standardization on the data? The best practice I have read is to build your standardization model on your training data then apply this model to the testing data. When one uses a simple train_test_split(), this is easy as we can just do: ``` X_train, X_test, y_train, y_test = train_test_split(X, y, stratify=y) clf = svm.LinearSVC() scalar = StandardScaler() X_train = scalar.fit_transform(X_train) X_test = scalar.transform(X_test) clf.fit(X_train, y_train) predicted = clf.predict(X_test) ``` How would one go about standardizing data while doing k-fold-cross-validation? The problem comes from the fact that every data point will be for training\/testing so you cannot standardize everything before cross_val_score(). Wouldn't you need a different standardization for each cross validation? The docs do not mention standardization happening internally within the function. Am I SOL?
EDIT: This post is super helpful: Python - What is exactly sklearn.pipeline.Pipeline?", "response":"You can use a Pipeline to combine both of the processes and then send it into the cross_val_score(). When the fit() is called on the pipeline, it will fit all the transforms one after the other and transform the data, then fit the transformed data using the final estimator. And during predict() (Only available if last object in pipeline is an estimator, otherwise transform()) it will apply transforms to the data, and predict with the final estimator. Like this: ``` scalar = StandardScaler() clf = svm.LinearSVC() pipeline = Pipeline([('transformer', scalar), ('estimator', clf)]) cv = KFold(n_splits=4) scores = cross_val_score(pipeline, X, y, cv = cv) ``` Check out various examples of pipeline to understand it better: http:\/\/scikit-learn.org\/stable\/modules\/generated\/sklearn.pipeline.Pipeline.html#examples-using-sklearn-pipeline-pipeline", "best_answers_score":0.8, "library_name":"scikit-learn", "question_url":"https:\/\/stackoverflow.com\/questions\/44446501\/how-to-standardize-data-with-sklearns-cross-val-score", "best_answers_votes":29, "question_length":1133, "response_length":846 }, { "question":"How to store scaling parameters for later use I want to apply the scaling sklearn.preprocessing.scale module that scikit-learn offers for centering a dataset that I will use to train an svm classifier. How can I then store the standardization parameters so that I can also apply them to the data that I want to classify? I know I can use the standarScaler but can I somehow serialize it to a file so that I wont have to fit it to my data every time I want to run the classifier?", "response":"I think that the best way is to pickle it post fit, as this is the most generic option. Perhaps you'll later create a pipeline composed of both a feature extractor and scaler. By pickling a (possibly compound) stage, you're making things more generic. 
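For instance, a fitted scaler can be serialized with the standard pickle module; a minimal sketch (the array values are arbitrary, and joblib.dump would work the same way):

```python
import pickle

import numpy as np
from sklearn.preprocessing import StandardScaler

train = np.array([[1.0], [2.0], [3.0], [4.0]])
scaler = StandardScaler().fit(train)

# Serialize the fitted scaler (to a file in practice; bytes here).
blob = pickle.dumps(scaler)

# Later: restore it and reuse the training-set mean and scale.
restored = pickle.loads(blob)
print(restored.transform([[2.5]]))  # -> [[0.]] since the training mean is 2.5
```

The restored object carries the fitted mean_ and scale_ attributes, so new data is standardized exactly as the training data was.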
The sklearn documentation on model persistence discusses how to do this. Having said that, you can query sklearn.preprocessing.StandardScaler for the fit parameters: scale_ : ndarray, shape (n_features,) Per feature relative scaling of the data. New in version 0.17: scale_ is recommended instead of deprecated std_. mean_ : array of floats with shape [n_features] The mean value for each feature in the training set. The following short snippet illustrates this: ``` from sklearn import preprocessing import numpy as np s = preprocessing.StandardScaler() s.fit(np.array([[1., 2, 3, 4]]).T) >>> s.mean_, s.scale_ (array([ 2.5]), array([ 1.11803399])) ```", "best_answers_score":0.8, "library_name":"scikit-learn", "question_url":"https:\/\/stackoverflow.com\/questions\/35944783\/how-to-store-scaling-parameters-for-later-use", "best_answers_votes":12, "question_length":478, "response_length":906 }, { "question":"sklearn LabelBinarizer returns vector when there are 2 classes The following code: ``` from sklearn.preprocessing import LabelBinarizer lb = LabelBinarizer() lb.fit_transform(['yes', 'no', 'no', 'yes']) ``` returns: ``` array([[1], [0], [0], [1]]) ``` However, I would like for there to be one column per class: ``` array([[1, 0], [0, 1], [0, 1], [1, 0]]) ``` (I need the data in this format so I can give it to a neural network that uses the softmax function at the output layer) When there are more than 2 classes, LabelBinarizer behaves as desired: ``` from sklearn.preprocessing import LabelBinarizer lb = LabelBinarizer() lb.fit_transform(['yes', 'no', 'no', 'yes', 'maybe']) ``` returns ``` array([[0, 0, 1], [0, 1, 0], [0, 1, 0], [0, 0, 1], [1, 0, 0]]) ``` Above, there is 1 column per class. Is there any simple way to achieve the same (1 column per class) when there are 2 classes? 
Edit: Based on yangjie's answer I wrote a class to wrap LabelBinarizer to produce the desired behavior described above: http:\/\/pastebin.com\/UEL2dP62 ``` import numpy as np from sklearn.preprocessing import LabelBinarizer class LabelBinarizer2: def __init__(self): self.lb = LabelBinarizer() def fit(self, X): # Convert X to array X = np.array(X) # Fit X using the LabelBinarizer object self.lb.fit(X) # Save the classes self.classes_ = self.lb.classes_ def fit_transform(self, X): # Convert X to array X = np.array(X) # Fit + transform X using the LabelBinarizer object Xlb = self.lb.fit_transform(X) # Save the classes self.classes_ = self.lb.classes_ if len(self.classes_) == 2: Xlb = np.hstack((Xlb, 1 - Xlb)) return Xlb def transform(self, X): # Convert X to array X = np.array(X) # Transform X using the LabelBinarizer object Xlb = self.lb.transform(X) if len(self.classes_) == 2: Xlb = np.hstack((Xlb, 1 - Xlb)) return Xlb def inverse_transform(self, Xlb): # Convert Xlb to array Xlb = np.array(Xlb) if len(self.classes_) == 2: X = self.lb.inverse_transform(Xlb[:, 0]) else: X = self.lb.inverse_transform(Xlb) return X ``` Edit 2: It turns out yangjie has also written a new version of LabelBinarizer, awesome!", "response":"I think there is no direct way to do it especially if you want to have inverse_transform. 
But you can use numpy to construct the label easily ``` In [18]: import numpy as np In [19]: from sklearn.preprocessing import LabelBinarizer In [20]: lb = LabelBinarizer() In [21]: label = lb.fit_transform(['yes', 'no', 'no', 'yes']) In [22]: label = np.hstack((label, 1 - label)) In [23]: label Out[23]: array([[1, 0], [0, 1], [0, 1], [1, 0]]) ``` Then you can use inverse_transform by slicing the first column ``` In [24]: lb.inverse_transform(label[:, 0]) Out[24]: array(['yes', 'no', 'no', 'yes'], dtype='>> param_grid = {'C': [0.001, 0.01, 0.1, 1, 10, 100, 1000] } >>> clf = GridSearchCV(LogisticRegression(penalty='l2'), param_grid) GridSearchCV(cv=None, estimator=LogisticRegression(C=1.0, intercept_scaling=1, dual=False, fit_intercept=True, penalty='l2', tol=0.0001), param_grid={'C': [0.001, 0.01, 0.1, 1, 10, 100, 1000]}) ``` See the GridSearchCv document for more details on your application.", "best_answers_score":0.7935, "library_name":"scikit-learn", "question_url":"https:\/\/stackoverflow.com\/questions\/21816346\/fine-tuning-parameters-in-logistic-regression", "best_answers_votes":35, "question_length":1773, "response_length":515 }, { "question":"How to pass a parameter to only one part of a pipeline object in scikit learn? 
I need to pass a parameter, sample_weight, to my RandomForestClassifier like so: ``` X = np.array([[2.0, 2.0, 1.0, 0.0, 1.0, 3.0, 3.0, 1.0, 0.0, 0.0, 0.0, 0.0, 0.0, 1.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 1.0, 0.0, 0.0, 5.0, 3.0, 2.0, '0'], [15.0, 2.0, 5.0, 5.0, 0.466666666667, 4.0, 3.0, 2.0, 0.0, 0.0, 0.0, 0.0, 0.0, 1.0, 0.0, 0.0, 0.0, 0.0, 1.0, 0.0, 0.0, 1.0, 0.0, 0.0, 7.0, 14.0, 2.0, '0'], [3.0, 4.0, 3.0, 1.0, 1.33333333333, 1.0, 1.0, 1.0, 0.0, 0.0, 0.0, 0.0, 1.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 1.0, 0.0, 0.0, 0.0, 9.0, 8.0, 2.0, '0'], [3.0, 2.0, 3.0, 0.0, 0.666666666667, 2.0, 2.0, 1.0, 0.0, 0.0, 0.0, 0.0, 1.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 1.0, 0.0, 0.0, 0.0, 5.0, 3.0, 1.0, '0']], dtype=object) y = np.array([ 0., 0., 1., 0.]) m = sklearn.ensemble.RandomForestClassifier( random_state=0, oob_score=True, n_estimators=100, min_samples_leaf=5, max_depth=10) m.fit(X, y, sample_weight=np.array([3,4,2,3])) ``` The above code works perfectly fine. Then, I try to do this in a pipeline object like so, using pipeline object instead of only random forest: ``` m = sklearn.pipeline.Pipeline([ ('feature_selection', sklearn.feature_selection.SelectKBest( score_func=sklearn.feature_selection.f_regression, k=25)), ('model', sklearn.ensemble.RandomForestClassifier( random_state=0, oob_score=True, n_estimators=500, min_samples_leaf=5, max_depth=10))]) m.fit(X, y, sample_weight=np.array([3,4,2,3])) ``` Now this breaks in the fit method with \"ValueError: need more than 1 value to unpack\". ``` ValueError Traceback (most recent call last) in () 25 max_depth=10))]) 26 ---> 27 m.fit(X, y, sample_weights=np.array([3,4,2,3])) \/usr\/local\/lib\/python2.7\/dist-packages\/sklearn\/pipeline.pyc in fit(self, X, y, **fit_params) 128 data, then fit the transformed data using the final estimator. 
129 \"\"\" --> 130 Xt, fit_params = self._pre_transform(X, y, **fit_params) 131 self.steps[-1][-1].fit(Xt, y, **fit_params) 132 return self \/usr\/local\/lib\/python2.7\/dist-packages\/sklearn\/pipeline.pyc in _pre_transform(self, X, y, **fit_params) 113 fit_params_steps = dict((step, {}) for step, _ in self.steps) 114 for pname, pval in six.iteritems(fit_params): --> 115 step, param = pname.split('__', 1) 116 fit_params_steps[step][param] = pval 117 Xt = X ValueError: need more than 1 value to unpack ``` I am using sklearn version 0.14. I think that the problem is that the F selection step in the pipeline does not take in an argument for sample_weights. how do I pass this parameter to only one step in the pipeline with I run \"fit\"? Thanks.", "response":"From the documentation: The purpose of the pipeline is to assemble several steps that can be cross-validated together while setting different parameters. For this, it enables setting parameters of the various steps using their names and the parameter name separated by a \u2018__\u2019, as in the example below. So you can simply insert model__ in front of whatever fit parameter kwargs you want to pass to your 'model' step: ``` m.fit(X, y, model__sample_weight=np.array([3,4,2,3])) ```", "best_answers_score":0.7933, "library_name":"scikit-learn", "question_url":"https:\/\/stackoverflow.com\/questions\/35632634\/how-to-pass-a-parameter-to-only-one-part-of-a-pipeline-object-in-scikit-learn", "best_answers_votes":35, "question_length":2617, "response_length":477 }, { "question":"Non-Integer Class Labels Scikit-Learn Quick SVM question for scikit-learn. When you train an SVM, it's something like ``` from sklearn import svm s = svm.SVC() s.fit(training_data, labels) ``` Is there any way for labels to be a list of a non-numeric type? For instance, if I want to classify vectors as 'cat' or 'dog,' without having to have some kind of external lookup table that encodes 'cat' and 'dog' into 1's and 2's. 
When I try to just pass a list of strings, I get ... ValueError: invalid literal for float(): cat So, it doesn't look like just shoving strings in labels will work. Any ideas?", "response":"Passing strings as classes directly is on my todo, but it is not supported in the SVMs yet. For the moment, we have the LabelEncoder that can do the book keeping for you. [edit]This should work now out of the box[\/edit]", "best_answers_score":0.7913, "library_name":"scikit-learn", "question_url":"https:\/\/stackoverflow.com\/questions\/13300160\/non-integer-class-labels-scikit-learn", "best_answers_votes":21, "question_length":600, "response_length":219 }, { "question":"TypeError: Singleton array array(True) cannot be considered a valid collection I want split the dataset that I have into test\/train while also ensuring that the distribution of classified labels are same in both test\/train. To do this I am using the stratify option but it is throwing an error as follows: ``` X_full_train, X_full_test, Y_full_train, Y_full_test = train_test_split(X_values_full, Y_values, test_size = 0.33, random_state = 42, stratify = True) ``` Error message: ``` TypeError Traceback (most recent call last) in 19 20 ---> 21 X_full_train, X_full_test, Y_full_train, Y_full_test = train_test_split(X_values_full, Y_values, test_size = 0.33, random_state = 42, stratify = True) 22 23 ~\/anaconda3\/lib\/python3.8\/site-packages\/sklearn\/model_selection\/_split.py in train_test_split(*arrays, **options) 2150 random_state=random_state) 2151 -> 2152 train, test = next(cv.split(X=arrays[0], y=stratify)) 2153 2154 return list(chain.from_iterable((_safe_indexing(a, train), ~\/anaconda3\/lib\/python3.8\/site-packages\/sklearn\/model_selection\/_split.py in split(self, X, y, groups) 1744 to an integer. 
1745 \"\"\" -> 1746 y = check_array(y, ensure_2d=False, dtype=None) 1747 return super().split(X, y, groups) 1748 ~\/anaconda3\/lib\/python3.8\/site-packages\/sklearn\/utils\/validation.py in inner_f(*args, **kwargs) 71 FutureWarning) 72 kwargs.update({k: arg for k, arg in zip(sig.parameters, args)}) ---> 73 return f(**kwargs) 74 return inner_f 75 ~\/anaconda3\/lib\/python3.8\/site-packages\/sklearn\/utils\/validation.py in check_array(array, accept_sparse, accept_large_sparse, dtype, order, copy, force_all_finite, ensure_2d, allow_nd, ensure_min_samples, ensure_min_features, estimator) 647 648 if ensure_min_samples > 0: --> 649 n_samples = _num_samples(array) 650 if n_samples 196 raise TypeError(\"Singleton array %r cannot be considered\" 197 \" a valid collection.\" % x) 198 # Check that shape is returning an integer or default to len TypeError: Singleton array array(True) cannot be considered a valid collection. ``` When I try to do this without the stratify option it does not give me an error. I thought that this was because my Y labels don't have the minimum number of samples required to distribute the labels evenly between test\/train but: ``` pp.pprint(Counter(Y_values)) ``` gives: ``` Counter({13: 1084, 1: 459, 7: 364, 8: 310, 38: 295, 15: 202, 4: 170, 37: 105, 3: 98, 0: 85, 24: 79, 20: 78, 35: 76, 2: 75, 12: 74, 39: 72, 22: 71, 9: 63, 26: 59, 11: 55, 18: 55, 32: 53, 19: 53, 33: 53, 5: 52, 30: 42, 29: 42, 25: 41, 10: 39, 23: 38, 21: 38, 6: 38, 27: 37, 14: 36, 36: 36, 34: 34, 28: 33, 17: 31, 31: 30, 16: 30}) ```", "response":"Per the sklearn documentation: stratifyarray-like, default=None If not None, data is split in a stratified fashion, using this as the class labels. Thus, it does not accept a boolean value like True or False, but the class labels themselves. 
So, you need to change: ``` X_full_train, X_full_test, Y_full_train, Y_full_test = train_test_split(X_values_full, Y_values, test_size = 0.33, random_state = 42, stratify = True) ``` to: ``` X_full_train, X_full_test, Y_full_train, Y_full_test = train_test_split(X_values_full, Y_values, test_size = 0.33, random_state = 42, stratify = Y_values) ```", "best_answers_score":0.7905, "library_name":"scikit-learn", "question_url":"https:\/\/stackoverflow.com\/questions\/63851453\/typeerror-singleton-array-arraytrue-cannot-be-considered-a-valid-collection", "best_answers_votes":46, "question_length":2629, "response_length":591 }, { "question":"Best way to combine probabilistic classifiers in scikit-learn I have a logistic regression and a random forest and I'd like to combine them (ensemble) for the final classification probability calculation by taking an average. Is there a built-in way to do this in scikit-learn? Some way where I can use the ensemble of the two as a classifier itself? Or would I need to roll my own classifier?", "response":"NOTE: The scikit-learn Voting Classifier is probably the best way to do this now. OLD ANSWER: For what it's worth I ended up doing this as follows: ``` import numpy as np from sklearn.base import BaseEstimator, ClassifierMixin class EnsembleClassifier(BaseEstimator, ClassifierMixin): def __init__(self, classifiers=None): self.classifiers = classifiers def fit(self, X, y): for classifier in self.classifiers: classifier.fit(X, y) return self def predict_proba(self, X): self.predictions_ = list() for classifier in self.classifiers: self.predictions_.append(classifier.predict_proba(X)) return np.mean(self.predictions_, axis=0) ```", "best_answers_score":0.7897, "library_name":"scikit-learn", "question_url":"https:\/\/stackoverflow.com\/questions\/21506128\/best-way-to-combine-probabilistic-classifiers-in-scikit-learn", "best_answers_votes":34, "question_length":394, "response_length":546 }, { "question":"How to find the importance of the features for a logistic regression model?
I have a binary prediction model trained by the logistic regression algorithm. I want to know which features (predictors) are more important for the decision of positive or negative class. I know there is a coef_ parameter which comes from the scikit-learn package, but I don't know whether it is enough for the importance. Another thing is how I can evaluate the coef_ values in terms of the importance for negative and positive classes. I also read about standardized regression coefficients and I don't know what they are. Let's say there are features like size of tumor, weight of tumor, etc. to make a decision for a test case like malignant or not malignant. I want to know which of the features are more important for malignant and not malignant prediction.", "response":"One of the simplest options to get a feeling for the \"influence\" of a given parameter in a linear classification model (logistic being one of those) is to consider the magnitude of its coefficient times the standard deviation of the corresponding parameter in the data. Consider this example: ``` import numpy as np from sklearn.linear_model import LogisticRegression x1 = np.random.randn(100) x2 = 4*np.random.randn(100) x3 = 0.5*np.random.randn(100) y = (3 + x1 + x2 + x3 + 0.2*np.random.randn()) > 0 X = np.column_stack([x1, x2, x3]) m = LogisticRegression() m.fit(X, y) # The estimated coefficients will all be around 1: print(m.coef_) # Those values, however, will show that the second parameter # is more influential print(np.array(np.std(X, 0))*m.coef_) ``` An alternative way to get a similar result is to examine the coefficients of the model fit on standardized parameters: ``` m.fit(X \/ np.std(X, 0), y) print(m.coef_) ``` Note that this is the most basic approach and a number of other techniques for finding feature importance or parameter influence exist (using p-values, bootstrap scores, various \"discriminative indices\", etc).
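As an illustration of one such alternative (not part of the original answer): a hand-rolled permutation importance, which shuffles one column at a time and measures the drop in training accuracy. The synthetic data here mirrors the example above; this is a sketch of the idea, not a full implementation:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.RandomState(0)
x1 = rng.randn(200)
x2 = 4 * rng.randn(200)
x3 = 0.5 * rng.randn(200)
y = (3 + x1 + x2 + x3 + 0.2 * rng.randn(200)) > 0
X = np.column_stack([x1, x2, x3])

m = LogisticRegression(max_iter=1000).fit(X, y)
base = m.score(X, y)

# Shuffle each column in turn; a large accuracy drop means the
# model leans heavily on that feature.
drops = []
for j in range(X.shape[1]):
    Xp = X.copy()
    Xp[:, j] = rng.permutation(Xp[:, j])
    drops.append(base - m.score(Xp, y))
print(drops)  # the high-variance x2 column should show the largest drop
```

Scoring on held-out data rather than the training set would be the more rigorous variant of the same idea.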
I am pretty sure you would get more interesting answers at https:\/\/stats.stackexchange.com\/.", "best_answers_score":0.7884, "library_name":"scikit-learn", "question_url":"https:\/\/stackoverflow.com\/questions\/34052115\/how-to-find-the-importance-of-the-features-for-a-logistic-regression-model", "best_answers_votes":95, "question_length":830, "response_length":1237 }, { "question":"ImportError: No module named sklearn (Python) I want to use scikit-learn. I have typed ``` pip install -U scikit-learn pip3 install sklearn ``` to install it; but when I type ``` $ Python >>> import sklearn ``` it returns ``` ImportError: No module named sklearn ``` I followed other tutorials, but it doesn't work. Furthermore, my environment returns this warning: If you have installed scikit-learn from source, please do not forget to build the package before using it: run python setup.py install or make in the source directory. What is the correct command to type in the terminal? I tried to type python setup.py install in the terminal but it doesn't work", "response":"Make sure that pip and python are the same version. For example, if you run pip for Python 2.7, it will install the package only in 2.7, and if your python command points to, for example, a Python 3.3 interpreter, it will not have that package", "best_answers_score":0.7882, "library_name":"scikit-learn", "question_url":"https:\/\/stackoverflow.com\/questions\/36404042\/importerror-no-module-named-sklearn-python", "best_answers_votes":18, "question_length":652, "response_length":237 }, { "question":"How to remove common rows in two dataframes in Pandas? I have two dataframes - df1 and df2. ``` df1 has row1,row2,row3,row4,row5 df2 has row2,row5 ``` I want to have a new dataframe such that df1-df2. That is, the resultant dataframe should have rows as - row1,row3,row4.
``` In [1]: import pandas as pd df_1 = pd.DataFrame({\"A\":[\"foo\", \"foo\", \"foo\", \"bar\"], \"B\":[0,1,1,1], \"C\":[\"A\",\"A\",\"B\",\"A\"]}) df_2 = pd.DataFrame({\"A\":[\"foo\", \"bar\", \"foo\", \"bar\"], \"B\":[1,0,1,0], \"C\":[\"A\",\"B\",\"A\",\"B\"]}) In [2]: df = pd.concat([df_1, df_2]) In [3]: df Out[3]: A B C 0 foo 0 A 1 foo 1 A 2 foo 1 B 3 bar 1 A 0 foo 1 A 1 bar 0 B 2 foo 1 A 3 bar 0 B In [4]: df.drop_duplicates(keep=False) Out[4]: A B C 0 foo 0 A 2 foo 1 B 3 bar 1 A ```", "best_answers_score":0.7878, "library_name":"scikit-learn", "question_url":"https:\/\/stackoverflow.com\/questions\/38681340\/how-to-remove-common-rows-in-two-dataframes-in-pandas", "best_answers_votes":20, "question_length":271, "response_length":588 }, { "question":"How to extract the decision rules from scikit-learn decision-tree? Can I extract the underlying decision rules (or 'decision paths') from a trained decision tree as a textual list? Something like: ``` if A>0.4 then if B<0.2 then if C>0.8 then class='X' ```", "response":"I believe that this answer is more correct than the other answers here: ``` from sklearn.tree import _tree def tree_to_code(tree, feature_names): tree_ = tree.tree_ feature_name = [ feature_names[i] if i != _tree.TREE_UNDEFINED else \"undefined!\" for i in tree_.feature ] print \"def tree({}):\".format(\", \".join(feature_names)) def recurse(node, depth): indent = \" \" * depth if tree_.feature[node] != _tree.TREE_UNDEFINED: name = feature_name[node] threshold = tree_.threshold[node] print \"{}if {} <= {}:\".format(indent, name, threshold) recurse(tree_.children_left[node], depth + 1) print \"{}else: # if {} > {}\".format(indent, name, threshold) recurse(tree_.children_right[node], depth + 1) else: print \"{}return {}\".format(indent, tree_.value[node]) recurse(0, 1) ``` This prints out a valid Python function. Here's an example output for a tree that is trying to return its input, a number between 0 and 10.
``` def tree(f0): if f0 <= 6.0: if f0 <= 1.5: return [[ 0.]] else: # if f0 > 1.5 if f0 <= 4.5: if f0 <= 3.5: return [[ 3.]] else: # if f0 > 3.5 return [[ 4.]] else: # if f0 > 4.5 return [[ 5.]] else: # if f0 > 6.0 if f0 <= 8.5: if f0 <= 7.5: return [[ 7.]] else: # if f0 > 7.5 return [[ 8.]] else: # if f0 > 8.5 return [[ 9.]] ``` Here are some stumbling blocks that I see in other answers: Using tree_.threshold == -2 to decide whether a node is a leaf isn't a good idea. What if it's a real decision node with a threshold of -2? Instead, you should look at tree.feature or tree.children_*. The line features = [feature_names[i] for i in tree_.feature] crashes with my version of sklearn, because some values of tree.tree_.feature are -2 (specifically for leaf nodes). There is no need to have multiple if statements in the recursive function, just one is fine.", "best_answers_score":0.7876, "library_name":"scikit-learn", "question_url":"https:\/\/stackoverflow.com\/questions\/20224526\/how-to-extract-the-decision-rules-from-scikit-learn-decision-tree", "best_answers_votes":182, "question_length":251, "response_length":1504 }, { "question":"scikit-learn return value of LogisticRegression.predict_proba What exactly does the LogisticRegression.predict_proba function return? In my example I get a result like this: ```py array([ [4.65761066e-03, 9.95342389e-01], [9.75851270e-01, 2.41487300e-02], [9.99983374e-01, 1.66258341e-05] ]) ``` From other calculations, using the sigmoid function, I know that the second column is the probabilities. The documentation says that the first column is n_samples, but that can't be, because my samples are reviews, which are texts and not numbers. The documentation also says that the second column is n_classes. That certainly can't be, since I only have two classes (namely, +1 and -1) and the function is supposed to be about calculating probabilities of samples really being of a class, but not the classes themselves.
What is the first column really and why it is there?", "response":"``` 4.65761066e-03 + 9.95342389e-01 = 1 9.75851270e-01 + 2.41487300e-02 = 1 9.99983374e-01 + 1.66258341e-05 = 1 ``` The first column is the probability that the entry has the -1 label and the second column is the probability that the entry has the +1 label. Note that classes are ordered as they are in self.classes_. If you would like to get the predicted probabilities for the positive label only, you can use logistic_model.predict_proba(data)[:,1]. This will yield you the [9.95342389e-01, 2.41487300e-02, 1.66258341e-05] result.", "best_answers_score":0.7874, "library_name":"scikit-learn", "question_url":"https:\/\/stackoverflow.com\/questions\/36681449\/scikit-learn-return-value-of-logisticregression-predict-proba", "best_answers_votes":80, "question_length":872, "response_length":533 }, { "question":"No module named 'sklearn.datasets.samples_generator' When trying to create 4 clusters of random data, I am getting the following error message: ``` # Generate 4 clusters of random data. from sklearn.datasets.samples_generator import make_blobs data, _ = make_blobs(n_samples=300, centers=4, cluster_std=0.60, random_state=0) ``` Error: ``` ModuleNotFoundError Traceback (most recent call last) in 1 # Generate 4 clusters of random data. 
----> 2 from sklearn.datasets.samples_generator import make_blobs 3 4 data, _ = make_blobs(n_samples=300, centers=4, 5 cluster_std=0.60, random_state=0) ModuleNotFoundError: No module named 'sklearn.datasets.samples_generator' ``` I have tried: pip install sckit-learn and pip install sckit-datasets", "response":"In the latest versions of scikit-learn, there is no module sklearn.datasets.samples_generator - it has been replaced with sklearn.datasets (see the docs); so, according to the make_blobs documentation, your import should simply be: ``` from sklearn.datasets import make_blobs ``` As a general rule, the official documentation is your best friend, and you should definitely consult it first before anything else.", "best_answers_score":0.7874, "library_name":"scikit-learn", "question_url":"https:\/\/stackoverflow.com\/questions\/65898399\/no-module-named-sklearn-datasets-samples-generator", "best_answers_votes":110, "question_length":738, "response_length":411 }, { "question":"How to normalize a confusion matrix? I calculated a confusion matrix for my classifier using confusion_matrix() from scikit-learn. The diagonal elements of the confusion matrix represent the number of points for which the predicted label is equal to the true label, while off-diagonal elements are those that are mislabeled by the classifier. I would like to normalize my confusion matrix so that it contains only numbers between 0 and 1. I would like to read the percentage of correctly classified samples from the matrix. 
I found several methods for normalizing a matrix (row and column normalization) but I don't know much about maths and am not sure if this is the correct approach.", "response":"Suppose that ``` >>> y_true = [0, 0, 1, 1, 2, 0, 1] >>> y_pred = [0, 1, 0, 1, 2, 2, 1] >>> C = confusion_matrix(y_true, y_pred) >>> C array([[1, 1, 1], [1, 2, 0], [0, 0, 1]]) ``` Then, to find out how many samples per class have received their correct label, you need ``` >>> C \/ C.astype(np.float).sum(axis=1, keepdims=True) array([[ 0.33333333, 0.33333333, 0.33333333], [ 0.33333333, 0.66666667, 0. ], [ 0. , 0. , 1. ]]) ``` The diagonal contains the required values. Another way to compute these is to realize that what you're computing is the recall per class: ``` >>> from sklearn.metrics import precision_recall_fscore_support >>> _, recall, _, _ = precision_recall_fscore_support(y_true, y_pred) >>> recall array([ 0.33333333, 0.66666667, 1. ]) ``` Similarly, if you divide by the sum over axis=0, you get the precision (fraction of class-k predictions that have ground truth label k): ``` >>> C \/ C.astype(np.float).sum(axis=0) array([[ 0.5 , 0.33333333, 0.5 ], [ 0.5 , 0.66666667, 0. ], [ 0. , 0. , 0.5 ]]) >>> prec, _, _, _ = precision_recall_fscore_support(y_true, y_pred) >>> prec array([ 0.5 , 0.66666667, 0.5 ]) ```", "best_answers_score":0.787, "library_name":"scikit-learn", "question_url":"https:\/\/stackoverflow.com\/questions\/20927368\/how-to-normalize-a-confusion-matrix", "best_answers_votes":45, "question_length":687, "response_length":1107 }, { "question":"How to specify a distance function for clustering? I'd like to cluster points given a custom distance and, strangely, it seems that neither scipy nor sklearn clustering methods allow the specification of a distance function. For instance, in sklearn.cluster.AgglomerativeClustering, the only thing I may do is enter an affinity matrix (which will be very memory-heavy).
In order to build this very matrix, it is recommended to use sklearn.neighbors.kneighbors_graph, but I don't understand how I can specify a distance function either between two points. Could someone enlighten me?", "response":"All of the scipy hierarchical clustering routines will accept a custom distance function that accepts two 1D vectors specifying a pair of points and returns a scalar. For example, using fclusterdata: ``` import numpy as np from scipy.cluster.hierarchy import fclusterdata # a custom function that just computes Euclidean distance def mydist(p1, p2): diff = p1 - p2 return np.vdot(diff, diff) ** 0.5 X = np.random.randn(100, 2) fclust1 = fclusterdata(X, 1.0, metric=mydist) fclust2 = fclusterdata(X, 1.0, metric='euclidean') print(np.allclose(fclust1, fclust2)) # True ``` Valid inputs for the metric= kwarg are the same as for scipy.spatial.distance.pdist.", "best_answers_score":0.7831, "library_name":"scikit-learn", "question_url":"https:\/\/stackoverflow.com\/questions\/33721996\/how-to-specify-a-distance-function-for-clustering", "best_answers_votes":28, "question_length":584, "response_length":656 }, { "question":"sklearn: Found arrays with inconsistent numbers of samples when calling LinearRegression.fit() Just trying to do a simple linear regression but I'm baffled by this error for: ``` regr = LinearRegression() regr.fit(df2.iloc[1:1000, 5].values, df2.iloc[1:1000, 2].values) ``` which produces: ``` ValueError: Found arrays with inconsistent numbers of samples: [ 1 999] ``` These selections must have the same dimensions, and they should be numpy arrays, so what am I missing?", "response":"It looks like sklearn requires the data shape of (row number, column number). If your data shape is (row number, ) like (999, ), it does not work. By using numpy.reshape(), you should change the shape of the array to (999, 1), e.g. 
using ``` data=data.reshape((999,1)) ``` In my case, it worked with that.", "best_answers_score":0.7825, "library_name":"scikit-learn", "question_url":"https:\/\/stackoverflow.com\/questions\/30813044\/sklearn-found-arrays-with-inconsistent-numbers-of-samples-when-calling-linearre", "best_answers_votes":121, "question_length":472, "response_length":305 }, { "question":"Recovering features names of explained_variance_ratio_ in PCA with sklearn I'm trying to recover from a PCA done with scikit-learn, which features are selected as relevant. A classic example with IRIS dataset. ``` import pandas as pd import pylab as pl from sklearn import datasets from sklearn.decomposition import PCA # load dataset iris = datasets.load_iris() df = pd.DataFrame(iris.data, columns=iris.feature_names) # normalize data df_norm = (df - df.mean()) \/ df.std() # PCA pca = PCA(n_components=2) pca.fit_transform(df_norm.values) print pca.explained_variance_ratio_ ``` This returns ``` In [42]: pca.explained_variance_ratio_ Out[42]: array([ 0.72770452, 0.23030523]) ``` How can I recover which two features allow these two explained variance among the dataset ? Said diferently, how can i get the index of this features in iris.feature_names ? ``` In [47]: print iris.feature_names ['sepal length (cm)', 'sepal width (cm)', 'petal length (cm)', 'petal width (cm)'] ```", "response":"This information is included in the pca attribute: components_. 
As described in the documentation, pca.components_ outputs an array of [n_components, n_features], so to get how components are linearly related with the different features you have to: Note: each coefficient represents the correlation between a particular pair of component and feature ``` import pandas as pd import pylab as pl from sklearn import datasets from sklearn.decomposition import PCA # load dataset iris = datasets.load_iris() df = pd.DataFrame(iris.data, columns=iris.feature_names) # normalize data from sklearn import preprocessing data_scaled = pd.DataFrame(preprocessing.scale(df),columns = df.columns) # PCA pca = PCA(n_components=2) pca.fit_transform(data_scaled) # Dump components relations with features: print(pd.DataFrame(pca.components_,columns=data_scaled.columns,index = ['PC-1','PC-2'])) sepal length (cm) sepal width (cm) petal length (cm) petal width (cm) PC-1 0.522372 -0.263355 0.581254 0.565611 PC-2 -0.372318 -0.925556 -0.021095 -0.065416 ``` IMPORTANT: As a side comment, note the PCA sign does not affect its interpretation since the sign does not affect the variance contained in each component. Only the relative signs of features forming the PCA dimension are important. In fact, if you run the PCA code again, you might get the PCA dimensions with the signs inverted. For an intuition about this, think about a vector and its negative in 3-D space - both are essentially representing the same direction in space. 
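This sign invariance is easy to verify numerically; the snippet below is an illustrative sketch with synthetic data (not part of the original answer): the variance captured along a component is unchanged when the component's sign is flipped.

```python
import numpy as np
from sklearn.decomposition import PCA

# synthetic data purely for illustration
rng = np.random.RandomState(0)
X = rng.randn(100, 4)

pca = PCA(n_components=2).fit(X)

# a component and its negative define the same axis in feature space,
# so the variance of the projection onto it is identical either way
Xc = X - X.mean(axis=0)
var_original = (Xc @ pca.components_[0]).var()
var_flipped = (Xc @ (-pca.components_[0])).var()
assert np.isclose(var_original, var_flipped)
```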
Check this post for further reference.", "best_answers_score":0.7815, "library_name":"scikit-learn", "question_url":"https:\/\/stackoverflow.com\/questions\/22984335\/recovering-features-names-of-explained-variance-ratio-in-pca-with-sklearn", "best_answers_votes":118, "question_length":981, "response_length":1555 }, { "question":"Passing categorical data to Sklearn Decision Tree There are several posts about how to encode categorical data to Sklearn Decision trees, but from Sklearn documentation, we got these Some advantages of decision trees are: (...) Able to handle both numerical and categorical data. Other techniques are usually specialized in analyzing datasets that have only one type of variable. See the algorithms for more information. But running the following script ```python import pandas as pd from sklearn.tree import DecisionTreeClassifier data = pd.DataFrame() data['A'] = ['a','a','b','a'] data['B'] = ['b','b','a','b'] data['C'] = [0, 0, 1, 0] data['Class'] = ['n','n','y','n'] tree = DecisionTreeClassifier() tree.fit(data[['A','B','C']], data['Class']) ``` outputs the following error: ``` Traceback (most recent call last): File \"\", line 1, in File \"\/usr\/local\/lib\/python2.7\/site-packages\/sklearn\/tree\/tree.py\", line 154, in fit X = check_array(X, dtype=DTYPE, accept_sparse=\"csc\") File \"\/usr\/local\/lib\/python2.7\/site-packages\/sklearn\/utils\/validation.py\", line 377, in check_array array = np.array(array, dtype=dtype, order=order, copy=copy) ValueError: could not convert string to float: b ``` I know that in R it is possible to pass categorical data, with Sklearn, is it possible?", "response":"(This is just a reformat of my comment from 2016...it still holds true.) The accepted answer for this question is misleading. As it stands, sklearn decision trees do not handle categorical data - see issue #5442. The recommended approach of using Label Encoding converts to integers which the DecisionTreeClassifier() will treat as numeric. 
If your categorical data is not ordinal, this is not good - you'll end up with splits that do not make sense. Using a OneHotEncoder is the only current valid way, allowing arbitrary splits not dependent on the label ordering, but is computationally expensive.", "best_answers_score":0.7813, "library_name":"scikit-learn", "question_url":"https:\/\/stackoverflow.com\/questions\/38108832\/passing-categorical-data-to-sklearn-decision-tree", "best_answers_votes":116, "question_length":1282, "response_length":600 }, { "question":"Running Jupyter notebook in a virtualenv: installed sklearn module not available I have installed a created a virtualenv machinelearn and installed a few python modules (pandas, scipy and sklearn) in that environment. When I run jupyter notebook, I can import pandas and scipy in my notebooks - however, when I try to import sklearn, I get the following error message: ``` import sklearn --------------------------------------------------------------------------- ImportError Traceback (most recent call last) in () ----> 1 import sklearn ImportError: No module named 'sklearn' ``` I am able to import all modules, at the command line - so I know they have been successfully installed: ``` (machinelearn) me@yourbox:~\/path\/to\/machinelearn$ python -c \"import pandas, scipy, sklearn\" (machinelearn) me@yourbox:~\/path\/to\/machinelearn$ ``` How can I import sklearn in my jupyter notebook running in a virtualenv?", "response":"You probably have not installed jupyter \/ IPython in your virtualenv. Try the following: ``` python -c \"import IPython\" ``` and check that the jupyter command found in your $PATH is the one from the bin folder of your venv: ``` which jupyter ``` For windows users in a powershell console, you can use the following to check that the jupyter command in your $env:Path is the one from the Scripts folder of you venv: ``` get-command jupyter ``` Edit: if this is the problem, just run python -m pip install jupyter in your venv. 
Edit 2: actually you might also need: ``` python -m ipykernel install --user --name=my-virtualenv-name ``` and then switch the kernel named \"my-virtualenv-name\" in the jupyter user interface. Edit 3: maybe the --user flag in the last command is a bad idea: ``` python -m ipykernel install --name=my-virtualenv-name ```", "best_answers_score":0.781, "library_name":"scikit-learn", "question_url":"https:\/\/stackoverflow.com\/questions\/42449814\/running-jupyter-notebook-in-a-virtualenv-installed-sklearn-module-not-available", "best_answers_votes":103, "question_length":909, "response_length":844 }, { "question":"scikit-learn TfidfVectorizer meaning? I was reading about the TfidfVectorizer implementation of scikit-learn, and I don't understand the output of the method, for example: ``` new_docs = ['He watches basketball and baseball', 'Julie likes to play basketball', 'Jane loves to play baseball'] new_term_freq_matrix = tfidf_vectorizer.transform(new_docs) print tfidf_vectorizer.vocabulary_ print new_term_freq_matrix.todense() ``` output: ``` {u'me': 8, u'basketball': 1, u'julie': 4, u'baseball': 0, u'likes': 5, u'loves': 7, u'jane': 3, u'linda': 6, u'more': 9, u'than': 10, u'he': 2} [[ 0.57735027 0.57735027 0.57735027 0. 0. 0. 0. 0. 0. 0. 0. ] [ 0. 0.68091856 0. 0. 0.51785612 0.51785612 0. 0. 0. 0. 0. ] [ 0.62276601 0. 0. 0.62276601 0. 0. 0. 0.4736296 0. 0. 0. ]] ``` What is this (e.g.: u'me': 8)? ``` {u'me': 8, u'basketball': 1, u'julie': 4, u'baseball': 0, u'likes': 5, u'loves': 7, u'jane': 3, u'linda': 6, u'more': 9, u'than': 10, u'he': 2} ``` Is this a matrix or just a vector? I can't understand what the output is telling me: ``` [[ 0.57735027 0.57735027 0.57735027 0. 0. 0. 0. 0. 0. 0. 0. ] [ 0. 0.68091856 0. 0. 0.51785612 0.51785612 0. 0. 0. 0. 0. ] [ 0.62276601 0. 0. 0.62276601 0. 0. 0. 0.4736296 0. 0. 0. ]] ``` Could anybody explain these outputs in more detail?
Thanks!", "response":"TfidfVectorizer - Transforms text to feature vectors that can be used as input to estimator. vocabulary_ Is a dictionary that converts each token (word) to feature index in the matrix, each unique token gets a feature index. What is?(e.g.: u'me': 8 ) It tells you that the token 'me' is represented as feature number 8 in the output matrix. is this a matrix or just a vector? Each sentence is a vector, the sentences you've entered are matrix with 3 vectors. In each vector the numbers (weights) represent features tf-idf score. For example: 'julie': 4 --> Tells you that the in each sentence 'Julie' appears you will have non-zero (tf-idf) weight. As you can see in the 2'nd vector: [ 0. 0.68091856 0. 0. 0.51785612 0.51785612 0. 0. 0. 0. 0. ] The 5'th element scored 0.51785612 - the tf-idf score for 'Julie'. For more info about Tf-Idf scoring read here: http:\/\/en.wikipedia.org\/wiki\/Tf%E2%80%93idf", "best_answers_score":0.7807, "library_name":"scikit-learn", "question_url":"https:\/\/stackoverflow.com\/questions\/25902119\/scikit-learn-tfidfvectorizer-meaning", "best_answers_votes":20, "question_length":1288, "response_length":901 }, { "question":"sklearn: Hyperparameter tuning by gradient descent? Is there a way to perform hyperparameter tuning in scikit-learn by gradient descent? While a formula for the gradient of hyperparameters might be difficult to compute, numerical computation of the hyperparameter gradient by evaluating two close points in hyperparameter space should be pretty easy. Is there an existing implementation of this approach? Why is or isn't this approach a good idea?", "response":"The calculation of the gradient is the least of problems. At least in times of advanced automatic differentiation software. (Implementing this in a general way for all sklearn-classifiers of course is not easy) And while there are works of people who used this kind of idea, they only did this for some specific and well-formulated problem (e.g. 
SVM-tuning). Furthermore there probably were a lot of assumptions because: Why is this not a good idea? Hyper-param optimization is in general: non-smooth GD really likes smooth functions as a gradient of zero is not helpful (Each hyper-parameter which is defined by some discrete-set (e.g. choice of l1 vs. l2 penalization) introduces non-smooth surfaces) Hyper-param optimization is in general: non-convex The whole convergence-theory of GD assumes that the underlying problem is convex Good-case: you obtain some local-minimum (can be arbitrarily bad) Worst-case: GD is not even converging to some local-minimum I might add that your general problem is the worst kind of optimization problem one can consider because it's: non-smooth, non-convex and even stochastic \/ noisy as most underlying algorithms are heuristic approximations with some variance in regards to the final output (and often even PRNG-based random-behaviour). The last part is the reason why the offered methods in sklearn are that simple: random-search: if we can't infer something because the problem is too hard, just try many instances and pick the best grid-search: let's assume there is some kind of smoothness instead of random-sampling, we sample in regards to our smoothness-assumption (and other assumptions like: param is probably big -> np.logspace to analyze more big numbers) While there are a lot of Bayesian-approaches including available python-software like hyperopt and spearmint, many people think that random-search is the best method in general (which might be surprising but emphasizes the mentioned problems).", "best_answers_score":0.7798, "library_name":"scikit-learn", "question_url":"https:\/\/stackoverflow.com\/questions\/43420493\/sklearn-hyperparameter-tuning-by-gradient-descent", "best_answers_votes":30, "question_length":447, "response_length":1956 }, { "question":"How would one use Kernel Density Estimation as a 1D clustering method in scikit learn? 
I need to cluster a simple univariate data set into a preset number of clusters. Technically it would be closer to binning or sorting the data since it is only 1D, but my boss is calling it clustering, so I'm going to stick to that name. The current method used by the system I'm on is K-means, but that seems like overkill. Is there a better way of performing this task? Answers to some other posts are mentioning KDE (Kernel Density Estimation), but that is a density estimation method, how would that work? I see how KDE returns a density, but how do I tell it to split the data into bins? How do I have a fixed number of bins independent of the data (that's one of my requirements) ? More specifically, how would one pull this off using scikit learn? My input file looks like: ``` str ID sls 1 10 2 11 3 9 4 23 5 21 6 11 7 45 8 20 9 11 10 12 ``` I want to group the sls number into clusters or bins, such that: ``` Cluster 1: [10 11 9 11 11 12] Cluster 2: [23 21 20] Cluster 3: [45] ``` And my output file will look like: ``` str ID sls Cluster ID Cluster centroid 1 10 1 10.66 2 11 1 10.66 3 9 1 10.66 4 23 2 21.33 5 21 2 21.33 6 11 1 10.66 7 45 3 45 8 20 2 21.33 9 11 1 10.66 10 12 1 10.66 ```", "response":"Write code yourself. Then it fits your problem best! Boilerplate: Never assume code you download from the net to be correct or optimal... make sure to fully understand it before using it. 
```py %matplotlib inline import numpy as np from numpy import array, linspace from sklearn.neighbors import KernelDensity from matplotlib.pyplot import plot a = array([10,11,9,23,21,11,45,20,11,12]).reshape(-1, 1) kde = KernelDensity(kernel='gaussian', bandwidth=3).fit(a) s = linspace(0,50) e = kde.score_samples(s.reshape(-1,1)) plot(s, e) ``` ```py from scipy.signal import argrelextrema mi, ma = argrelextrema(e, np.less)[0], argrelextrema(e, np.greater)[0] print \"Minima:\", s[mi] print \"Maxima:\", s[ma] > Minima: [ 17.34693878 33.67346939] > Maxima: [ 10.20408163 21.42857143 44.89795918] ``` Your clusters therefore are ```py print a[a < mi[0]], a[(a >= mi[0]) * (a <= mi[1])], a[a >= mi[1]] > [10 11 9 11 11 12] [23 21 20] [45] ``` and visually, we did this split: ```py plot(s[:mi[0]+1], e[:mi[0]+1], 'r', s[mi[0]:mi[1]+1], e[mi[0]:mi[1]+1], 'g', s[mi[1]:], e[mi[1]:], 'b', s[ma], e[ma], 'go', s[mi], e[mi], 'ro') ``` We cut at the red markers. The green markers are our best estimates for the cluster centers.", "best_answers_score":0.7783, "library_name":"scikit-learn", "question_url":"https:\/\/stackoverflow.com\/questions\/35094454\/how-would-one-use-kernel-density-estimation-as-a-1d-clustering-method-in-scikit", "best_answers_votes":107, "question_length":1286, "response_length":1154 }, { "question":"k-fold stratified cross-validation with imbalanced classes I have data with 4 classes and I am trying to build a classifier. I have ~1000 vectors for one class, ~10^4 for another, ~10^5 for the third and ~10^6 for the fourth. I was hoping to use cross-validation so I looked at the scikit-learn docs. My first try was to use StratifiedShuffleSplit but this gives the same percentage for each class, leaving the classes drastically imbalanced still. Is there a way to do cross-validation but with the classes balanced in the training and test set? As a side note, I couldn't work out the difference between StratifiedShuffleSplit and StratifiedKFold.
The descriptions look very similar to me.", "response":"My first try was to use StratifiedShuffleSplit but this gives the same percentage for each class, leaving the classes drastically imbalanced still. I get the feeling that you're confusing what a stratified strategy will do, but you'll need to show your code and your results to say for sure what's going on (the same percentage as their percentage in the original set, or the same percentage within the returned train \/ test set? The first one is how it's supposed to be). As a side note, I couldn't work out the difference between StratifiedShuffleSplit and StratifiedKFold . The descriptions look very similar to me. One of these should definitely work. The description of the first one is definitely a little confusing, but here's what they do. StratifiedShuffleSplit Provides train\/test indices to split data in train test sets. This means that it splits your data into a train and test set. The stratified part means that percentages will be maintained in this split. So if 10% of your data is in class 1 and 90% is in class 2, this will ensure that 10% of your train set will be in class 1 and 90% will be in class 2. Same for the test set. Your post makes it sound like you'd want 50% of each class in the test set. That isn't what stratification does, stratification maintains the original percentages. You should maintain them, because otherwise you'll give yourself an irrelevant idea about the performance of your classifier: who cares how well it classified a 50\/50 split, when in practice you'll see 10\/90 splits? StratifiedKFold This cross-validation object is a variation of KFold that returns stratified folds. The folds are made by preserving the percentage of samples for each class. See k-fold cross validation. Without stratification, it just splits your data into k folds. Then, each fold 1 <= i <= k is used once as the test set, while the others are used for training. The results are averaged in the end. 
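The preserved class proportions can be checked directly; this is a small sketch with made-up labels (using the modern sklearn.model_selection import, which replaced the older cross_validation module):

```python
import numpy as np
from sklearn.model_selection import StratifiedKFold

# made-up imbalanced labels: 10% class 1, 90% class 0
y = np.array([1] * 10 + [0] * 90)
X = np.zeros((100, 1))  # feature values don't matter for the split itself

skf = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
for train_idx, test_idx in skf.split(X, y):
    # each fold keeps the original 10/90 class ratio
    assert y[train_idx].mean() == 0.1
    assert y[test_idx].mean() == 0.1
```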
It's similar to running the ShuffleSplit k times. Stratification will ensure that the percentages of each class in your entire data will be the same (or very close to) within each individual fold. There is a lot of literature that deals with imbalanced classes. Some simple to use methods involve using class weights and analysis the ROC curve. I suggest the following resources for starting points on this: A scikit-learn example of using class weights. A quora question about implementing neural networks for imbalanced data. This stats.stackexchange question with more in-depth answers.", "best_answers_score":0.7781, "library_name":"scikit-learn", "question_url":"https:\/\/stackoverflow.com\/questions\/32615429\/k-fold-stratified-cross-validation-with-imbalanced-classes", "best_answers_votes":27, "question_length":693, "response_length":2518 }, { "question":"ValueError: Expected 2D array, got 1D array instead: While practicing Simple Linear Regression Model I got this error, I think there is something wrong with my data set. Here is my data set: Here is independent variable X: Here is dependent variable Y: Here is X_train Here Is Y_train This is error body: ``` ValueError: Expected 2D array, got 1D array instead: array=[ 7. 8.4 10.1 6.5 6.9 7.9 5.8 7.4 9.3 10.3 7.3 8.1]. Reshape your data either using array.reshape(-1, 1) if your data has a single feature or array.reshape(1, -1) if it contains a single sample. 
``` And this is my code: ``` import pandas as pd import matplotlib as pt #import data set dataset = pd.read_csv('Sample-data-sets-for-linear-regression1.csv') x = dataset.iloc[:, 1].values y = dataset.iloc[:, 2].values #Splitting the dataset into Training set and Test Set from sklearn.cross_validation import train_test_split x_train, x_test, y_train, y_test = train_test_split(x, y, test_size= 0.2, random_state=0) #linear Regression from sklearn.linear_model import LinearRegression regressor = LinearRegression() regressor.fit(x_train,y_train) y_pred = regressor.predict(x_test) ``` Thank you", "response":"You need to give both the fit and predict methods 2D arrays. Your x_train and x_test are currently only 1 dimensional. What is suggested by the console should work: ``` x_train = x_train.reshape(-1, 1) x_test = x_test.reshape(-1, 1) ``` This uses numpy's reshape to transform your array. For example, x = [1, 2, 3] would be transformed to a matrix x' = [[1], [2], [3]] (-1 gives the x dimension of the matrix, inferred from the length of the array and remaining dimensions, 1 is the y dimension - giving us an n x 1 matrix where n is the input length). Questions about reshape have been answered in the past, this for example should answer what reshape(-1,1) fully means: What does -1 mean in numpy reshape? (also some of the other below answers explain this very well too)", "best_answers_score":0.7774, "library_name":"scikit-learn", "question_url":"https:\/\/stackoverflow.com\/questions\/51150153\/valueerror-expected-2d-array-got-1d-array-instead", "best_answers_votes":47, "question_length":1159, "response_length":772 }, { "question":"Save classifier to disk in scikit-learn How do I save a trained Naive Bayes classifier to disk and use it to predict data? 
I have the following sample program from the scikit-learn website: ``` from sklearn import datasets iris = datasets.load_iris() from sklearn.naive_bayes import GaussianNB gnb = GaussianNB() y_pred = gnb.fit(iris.data, iris.target).predict(iris.data) print \"Number of mislabeled points : %d\" % (iris.target != y_pred).sum() ```", "response":"You can also use joblib.dump and joblib.load which is much more efficient at handling numerical arrays than the default python pickler. Joblib is included in scikit-learn: ``` >>> import joblib >>> from sklearn.datasets import load_digits >>> from sklearn.linear_model import SGDClassifier >>> digits = load_digits() >>> clf = SGDClassifier().fit(digits.data, digits.target) >>> clf.score(digits.data, digits.target) # evaluate training error 0.9526989426822482 >>> filename = '\/tmp\/digits_classifier.joblib.pkl' >>> _ = joblib.dump(clf, filename, compress=9) >>> clf2 = joblib.load(filename) >>> clf2 SGDClassifier(alpha=0.0001, class_weight=None, epsilon=0.1, eta0=0.0, fit_intercept=True, learning_rate='optimal', loss='hinge', n_iter=5, n_jobs=1, penalty='l2', power_t=0.5, rho=0.85, seed=0, shuffle=False, verbose=0, warm_start=False) >>> clf2.score(digits.data, digits.target) 0.9526989426822482 ``` Edit: in Python 3.8+ it's now possible to use pickle for efficient pickling of object with large numerical arrays as attributes if you use pickle protocol 5 (which is not the default).", "best_answers_score":0.777, "library_name":"scikit-learn", "question_url":"https:\/\/stackoverflow.com\/questions\/10592605\/save-classifier-to-disk-in-scikit-learn", "best_answers_votes":251, "question_length":449, "response_length":1090 }, { "question":"How to use warm_start I'd like to use the warm_start parameter to add training data to my random forest classifier. I expected it to be used like this: ``` clf = RandomForestClassifier(...) 
clf.fit(get_data()) clf.fit(get_more_data(), warm_start=True) ``` But the warm_start parameter is a constructor parameter. So do I do something like this? ``` clf = RandomForestClassifier() clf.fit(get_data()) clf = RandomForestClassifier(warm_start=True) clf.fit(get_more_data()) ``` That makes no sense to me. Won't the new call to the constructor discard previous training data? I think I'm missing something.", "response":"The basic pattern of (taken from Miriam's answer): ``` clf = RandomForestClassifier(warm_start=True) clf.fit(get_data()) clf.fit(get_more_data()) ``` would be the correct usage API-wise. But there is an issue here. As the docs say the following: When set to True, reuse the solution of the previous call to fit and add more estimators to the ensemble, otherwise, just fit a whole new forest. it means that the only thing warm_start can do for you is add new DecisionTrees. All the previous trees seem to be untouched! Let's check this with some sources: ``` n_more_estimators = self.n_estimators - len(self.estimators_) if n_more_estimators < 0: raise ValueError('n_estimators=%d must be larger or equal to ' 'len(estimators_)=%d when warm_start==True' % (self.n_estimators, len(self.estimators_))) elif n_more_estimators == 0: warn(\"Warm-start fitting without increasing n_estimators does not \" \"fit new trees.\") ``` This basically tells us that you would need to increase the number of estimators before approaching a new fit! I have no idea what kind of usage sklearn expects here. I'm not sure if fitting, increasing internal variables and fitting again is correct usage, but I somehow doubt it (especially as n_estimators is not a public class-variable). Your basic approach (in regards to this library and this classifier) is probably not a good idea for your out-of-core learning here!
I would not pursue this further.", "best_answers_score":0.7759, "library_name":"scikit-learn", "question_url":"https:\/\/stackoverflow.com\/questions\/42757892\/how-to-use-warm-start", "best_answers_votes":36, "question_length":601, "response_length":1433 }, { "question":"Convert categorical variables from String to int representation I have a numpy array of classification of text in the form of String array, i.e. y_train = ['A', 'B', 'A', 'C',...]. I am trying to apply SKlearn multinomial NB algorithm to predict classes for entire dataset. I want to convert the String classes into integers to be able to input into the algorithm and convert ['A', 'B', 'A', 'C', ...] into ['1', '2', '1', '3', ...] I can write a for loop to go through array and create a new one with int classifiers but is there a direct function to achieve this", "response":"Try factorize method: ``` In [264]: y_train = pd.Series(['A', 'B', 'A', 'C']) In [265]: y_train Out[265]: 0 A 1 B 2 A 3 C dtype: object In [266]: pd.factorize(y_train) Out[266]: (array([0, 1, 0, 2], dtype=int64), Index(['A', 'B', 'C'], dtype='object')) ``` Demo: ``` In [271]: fct = pd.factorize(y_train)[0]+1 In [272]: fct Out[272]: array([1, 2, 1, 3], dtype=int64) ```", "best_answers_score":0.7746, "library_name":"scikit-learn", "question_url":"https:\/\/stackoverflow.com\/questions\/41078003\/convert-categorical-variables-from-string-to-int-representation", "best_answers_votes":18, "question_length":564, "response_length":370 }, { "question":"Efficiently create sparse pivot tables in pandas? I'm working turning a list of records with two columns (A and B) into a matrix representation. I have been using the pivot function within pandas, but the result ends up being fairly large. Does pandas support pivoting into a sparse format? I know I can pivot it and then turn it into some kind of sparse representation, but isn't as elegant as I would like. My end goal is to use it as the input for a predictive model. 
Alternatively, is there some kind of sparse pivot capability outside of pandas? edit: here is an example of a non-sparse pivot ``` import pandas as pd frame=pd.DataFrame() frame['person']=['me','you','him','you','him','me'] frame['thing']=['a','a','b','c','d','d'] frame['count']=[1,1,1,1,1,1] frame person thing count 0 me a 1 1 you a 1 2 him b 1 3 you c 1 4 him d 1 5 me d 1 frame.pivot('person','thing') count thing a b c d person him NaN 1 NaN 1 me 1 NaN NaN 1 you 1 NaN 1 NaN ``` This creates a matrix that could contain all possible combinations of persons and things, but it is not sparse. http:\/\/docs.scipy.org\/doc\/scipy\/reference\/sparse.html Sparse matrices take up less space because they can imply things like NaN or 0. If I have a very large data set, this pivoting function can generate a matrix that should be sparse due to the large number of NaNs or 0s. I was hoping that I could save a lot of space\/memory by generating something that was sparse right off the bat rather than creating a dense matrix and then converting it to sparse.", "response":"Here is a method that creates a sparse scipy matrix based on data and indices of person and thing. person_u and thing_u are lists representing the unique entries for your rows and columns of pivot you want to create. Note: this assumes that your count column already has the value you want in it. 
``` import numpy as np from scipy.sparse import csr_matrix person_u = list(np.sort(frame.person.unique())) thing_u = list(np.sort(frame.thing.unique())) data = frame['count'].tolist() row = frame.person.astype('category', categories=person_u).cat.codes col = frame.thing.astype('category', categories=thing_u).cat.codes sparse_matrix = csr_matrix((data, (row, col)), shape=(len(person_u), len(thing_u))) >>> sparse_matrix <3x4 sparse matrix of type '<type 'numpy.int64'>' with 6 stored elements in Compressed Sparse Row format> >>> sparse_matrix.todense() matrix([[0, 1, 0, 1], [1, 0, 0, 1], [1, 0, 1, 0]]) ``` Based on your original question, the scipy sparse matrix should be sufficient for your needs, but should you wish to have a sparse dataframe you can do the following: ``` dfs=pd.SparseDataFrame([ pd.SparseSeries(sparse_matrix[i].toarray().ravel(), fill_value=0) for i in np.arange(sparse_matrix.shape[0]) ], index=person_u, columns=thing_u, default_fill_value=0) >>> dfs a b c d him 0 1 0 1 me 1 0 0 1 you 1 0 1 0 >>> type(dfs) pandas.sparse.frame.SparseDataFrame ```", "best_answers_score":0.7742, "library_name":"scikit-learn", "question_url":"https:\/\/stackoverflow.com\/questions\/31661604\/efficiently-create-sparse-pivot-tables-in-pandas", "best_answers_votes":35, "question_length":1521, "response_length":1302 }, { "question":"Using OrdinalEncoder to transform categorical values I have a dataset which has the following columns: ``` No Name Sex Blood Grade Height Study 1 Tom M O 56 160 Math 2 Harry M A 76 192 Math 3 John M A 45 178 English 4 Nancy F B 78 157 Biology 5 Mike M O 79 167 Math 6 Kate F AB 66 156 English 7 Mary F O 99 166 Science ``` I want to change it to be something like this: ``` No Name Sex Blood Grade Height Study 1 Tom 0 0 56 160 0 2 Harry 0 1 76 192 0 3 John 0 1 45 178 1 4 Nancy 1 2 78 157 2 5 Mike 0 0 79 167 0 6 Kate 1 3 66 156 1 7 Mary 0 0 99 166 3 ``` I know there is a library that can do it ``` from sklearn.preprocessing import OrdinalEncoder ``` I've tried this, but it did not work ``` enc = OrdinalEncoder()
enc.fit(df[[\"Sex\",\"Blood\", \"Study\"]]) ``` Can anyone help me find what I am doing wrong and how to do it?", "response":"You were almost there ! Basically the fit method, prepare the encoder (fit on your data i.e. prepare the mapping) but don't transform the data. You have to call transform to transform the data , or use fit_transform which fit and transform the same data. ``` enc = OrdinalEncoder() enc.fit(df[[\"Sex\",\"Blood\", \"Study\"]]) df[[\"Sex\",\"Blood\", \"Study\"]] = enc.transform(df[[\"Sex\",\"Blood\", \"Study\"]]) ``` or directly ``` enc = OrdinalEncoder() df[[\"Sex\",\"Blood\", \"Study\"]] = enc.fit_transform(df[[\"Sex\",\"Blood\", \"Study\"]]) ``` Note: The values won't be the one that you provided, since internally the fit method use numpy.unique which gives result sorted in alphabetic order and not by order of appearance. As you can see from enc.categories_ ``` [array(['F', 'M'], dtype=object), array(['A', 'AB', 'B', 'O'], dtype=object), array(['Biology', 'English', 'Math', 'Science'], dtype=object)]``` ``` Each value in the array is encoded by it's position. (F will be encoded as 0 , M as 1)", "best_answers_score":0.7733, "library_name":"scikit-learn", "question_url":"https:\/\/stackoverflow.com\/questions\/56502864\/using-ordinalencoder-to-transform-categorical-values", "best_answers_votes":47, "question_length":828, "response_length":976 }, { "question":"sklearn KMeans is not working as I only get 'NoneType' object has no attribute 'split' on nonEmpty Array I don't know what is wrong but suddenly KMeans from sklearn is not working anymore and I don't know what I am doing wrong. Has anyone encountered this problem yet or knows how I can fix it? 
```python from sklearn.cluster import KMeans kmeanModel = KMeans(n_clusters=k, random_state=0) kmeanModel.fit(allLocations) ``` allLocations looks like this: ```python array([[12.40236 , 51.38086 ], [12.40999 , 51.38494 ], [12.40599 , 51.37284 ], [12.28692 , 51.32039 ], [12.41349 , 51.34443 ], ...]) ``` and allLocations.dtype gives dtype('float64'). The scikit-learn version is 1.0.2 and the NumPy version is 1.22.2 and I am using Jupyter Notebook. The Error says: 'NoneType' object has no attribute 'split' The whole Error looks like this: ```none AttributeError Traceback (most recent call last) in 12 for k in K: 13 kmeanModel = KMeans(n_clusters=k, random_state=0) ---> 14 kmeanModel.fit(allLocations) 15 distortions.append(kmeanModel.inertia_) 16 #Plotting the distortions ~\\anaconda3\\lib\\site-packages\\sklearn\\cluster\\_kmeans.py in fit(self, X, y, sample_weight) 1169 if self._algorithm == \"full\": 1170 kmeans_single = _kmeans_single_lloyd -> 1171 self._check_mkl_vcomp(X, X.shape[0]) 1172 else: 1173 kmeans_single = _kmeans_single_elkan ~\\anaconda3\\lib\\site-packages\\sklearn\\cluster\\_kmeans.py in _check_mkl_vcomp(self, X, n_samples) 1026 active_threads = int(np.ceil(n_samples \/ CHUNK_SIZE)) 1027 if active_threads 1028 modules = threadpool_info() 1029 has_vcomp = \"vcomp\" in [module[\"prefix\"] for module in modules] 1030 has_mkl = (\"mkl\", \"intel\") in [ ~\\anaconda3\\lib\\site-packages\\sklearn\\utils\\fixes.py in threadpool_info() 323 return controller.info() 324 else: --> 325 return threadpoolctl.threadpool_info() 326 327 ~\\anaconda3\\lib\\site-packages\\threadpoolctl.py in threadpool_info() 122 In addition, each module may contain internal_api specific entries. 
123 \"\"\" --> 124 return _ThreadpoolInfo(user_api=_ALL_USER_APIS).todicts() 125 126 ~\\anaconda3\\lib\\site-packages\\threadpoolctl.py in __init__(self, user_api, prefixes, modules) 338 339 self.modules = [] --> 340 self._load_modules() 341 self._warn_if_incompatible_openmp() 342 else: ~\\anaconda3\\lib\\site-packages\\threadpoolctl.py in _load_modules(self) 371 self._find_modules_with_dyld() 372 elif sys.platform == \"win32\": --> 373 self._find_modules_with_enum_process_module_ex() 374 else: 375 self._find_modules_with_dl_iterate_phdr() ~\\anaconda3\\lib\\site-packages\\threadpoolctl.py in _find_modules_with_enum_process_module_ex(self) 483 484 # Store the module if it is supported and selected --> 485 self._make_module_from_path(filepath) 486 finally: 487 kernel_32.CloseHandle(h_process) ~\\anaconda3\\lib\\site-packages\\threadpoolctl.py in _make_module_from_path(self, filepath) 513 if prefix in self.prefixes or user_api in self.user_api: 514 module_class = globals()[module_class] --> 515 module = module_class(filepath, prefix, user_api, internal_api) 516 self.modules.append(module) 517 ~\\anaconda3\\lib\\site-packages\\threadpoolctl.py in __init__(self, filepath, prefix, user_api, internal_api) 604 self.internal_api = internal_api 605 self._dynlib = ctypes.CDLL(filepath, mode=_RTLD_NOLOAD) --> 606 self.version = self.get_version() 607 self.num_threads = self.get_num_threads() 608 self._get_extra_info() ~\\anaconda3\\lib\\site-packages\\threadpoolctl.py in get_version(self) 644 lambda: None) 645 get_config.restype = ctypes.c_char_p --> 646 config = get_config().split() 647 if config[0] == b\"OpenBLAS\": 648 return config[1].decode(\"utf-8\") AttributeError: 'NoneType' object has no attribute 'split' ```", "response":"Upgrade threadpoolctl to version >3. 
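For example (assuming a pip-managed environment; use the equivalent conda command if you installed it with conda, and restart the kernel afterwards):

```shell
# Upgrade threadpoolctl to 3.x or newer, which fixes the broken version probe
pip install --upgrade 'threadpoolctl>=3'
```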
This works for all versions of numpy.", "best_answers_score":0.7727, "library_name":"scikit-learn", "question_url":"https:\/\/stackoverflow.com\/questions\/71352354\/sklearn-kmeans-is-not-working-as-i-only-get-nonetype-object-has-no-attribute", "best_answers_votes":69, "question_length":3673, "response_length":74 }, { "question":"Getting No loop matching the specified signature and casting error I'm a beginner to python and machine learning . I get below error when i try to fit data into statsmodels.formula.api OLS.fit() Traceback (most recent call last): File \"\", line 47, in regressor_OLS = sm.OLS(y , X_opt).fit() File \"E:\\Anaconda\\lib\\site-packages\\statsmodels\\regression\\linear_model.py\", line 190, in fit self.pinv_wexog, singular_values = pinv_extended(self.wexog) File \"E:\\Anaconda\\lib\\site-packages\\statsmodels\\tools\\tools.py\", line 342, in pinv_extended u, s, vt = np.linalg.svd(X, 0) File \"E:\\Anaconda\\lib\\site-packages\\numpy\\linalg\\linalg.py\", line 1404, in svd u, s, vt = gufunc(a, signature=signature, extobj=extobj) TypeError: No loop matching the specified signature and casting was found for ufunc svd_n_s code ``` #Importing Libraries import numpy as np # linear algebra import pandas as pd # data processing import matplotlib.pyplot as plt #Visualization #Importing the dataset dataset = pd.read_csv('Video_Games_Sales_as_at_22_Dec_2016.csv') #dataset.head(10) #Encoding categorical data using panda get_dummies function . 
Easier and straight forward than OneHotEncoder in sklearn #dataset = pd.get_dummies(data = dataset , columns=['Platform' , 'Genre' , 'Rating' ] , drop_first = True ) #drop_first use to fix dummy varible trap dataset=dataset.replace('tbd',np.nan) #Separating Independent & Dependant Varibles #X = pd.concat([dataset.iloc[:,[11,13]], dataset.iloc[:,13: ]] , axis=1).values #Getting important variables X = dataset.iloc[:,[10,12]].values y = dataset.iloc[:,9].values #Dependant Varible (Global sales) #Taking care of missing data from sklearn.preprocessing import Imputer imputer = Imputer(missing_values = 'NaN' , strategy = 'mean' , axis = 0) imputer = imputer.fit(X[:,0:2]) X[:,0:2] = imputer.transform(X[:,0:2]) #Splitting the dataset into the Training set and Test set from sklearn.cross_validation import train_test_split X_train, X_test, y_train, y_test = train_test_split(X,y,test_size = 0.2 , random_state = 0) #Fitting Mutiple Linear Regression to the Training Set from sklearn.linear_model import LinearRegression regressor = LinearRegression() regressor.fit(X_train,y_train) #Predicting the Test set Result y_pred = regressor.predict(X_test) #Building the optimal model using Backward Elimination (p=0.050) import statsmodels.formula.api as sm X = np.append(arr = np.ones((16719,1)).astype(float) , values = X , axis = 1) X_opt = X[:, [0,1,2]] regressor_OLS = sm.OLS(y , X_opt).fit() regressor_OLS.summary() ``` Dataset dataset link Couldn't find anything helpful to solve this issue on stack-overflow or google .", "response":"try specifiying the dtype = 'float' When the matrix is created. 
Example: ``` a=np.matrix([[1,2],[3,4]], dtype='float') ``` Hope this works!", "best_answers_score":0.7698, "library_name":"scikit-learn", "question_url":"https:\/\/stackoverflow.com\/questions\/47838306\/getting-no-loop-matching-the-specified-signature-and-casting-error", "best_answers_votes":67, "question_length":2638, "response_length":139 }, { "question":"xgboost sklearn wrapper value 0for Parameter num_class should be greater equal to 1 I am trying to use the XGBClassifier wrapper provided by sklearn for a multiclass problem. My classes are [0, 1, 2], the objective that I use is multi:softmax. When I am trying to fit the classifier I get xgboost.core.XGBoostError: value 0for Parameter num_class should be greater equal to 1 If I try to set the num_class parameter the I get the error got an unexpected keyword argument 'num_class' Sklearn is setting this parameter automatically so I am not supposed to pass that argument. But why do I get the first error?", "response":"You need to manually add the parameter num_class to the xgb_param ``` # Model is an XGBClassifier xgb_param = model.get_xgb_params() xgb_param['num_class'] = 3 cvresult = xgb.cv(xgb_param, ...) ``` The XGBClassifier does set this value automatically if you use its fit method, but does not in the cv method", "best_answers_score":0.7684, "library_name":"scikit-learn", "question_url":"https:\/\/stackoverflow.com\/questions\/40116215\/xgboost-sklearn-wrapper-value-0for-parameter-num-class-should-be-greater-equal-t", "best_answers_votes":18, "question_length":608, "response_length":306 }, { "question":"Python\/Scikit-Learn - Can't handle mix of multiclass and continuous I'm trying to fit an SGDRegressor to my data and then check the accuracy. The fitting works fine, but then the predictions are not in the same datatype(?) 
as the original target data, and I get the error ``` ValueError: Can't handle mix of multiclass and continuous ``` When calling print \"Accuracy:\", ms.accuracy_score(y_test,predictions). The data looks like this (just 200 thousand + rows): ``` Product_id\/Date\/product_group1\/Price\/Net price\/Purchase price\/Hour\/Quantity\/product_group2 0 107 12\/31\/2012 10 300 236 220 10 1 108 ``` The code is as follows: ``` from sklearn.preprocessing import StandardScaler import numpy as np from sklearn.linear_model import SGDRegressor import numpy as np from sklearn import metrics as ms msk = np.random.rand(len(beers)) < 0.8 train = beers[msk] test = beers[~msk] X = train [['Price', 'Net price', 'Purchase price','Hour','Product_id','product_group2']] y = train[['Quantity']] y = y.as_matrix().ravel() X_test = test [['Price', 'Net price', 'Purchase price','Hour','Product_id','product_group2']] y_test = test[['Quantity']] y_test = y_test.as_matrix().ravel() clf = SGDRegressor(n_iter=2000) clf.fit(X, y) predictions = clf.predict(X_test) print \"Accuracy:\", ms.accuracy_score(y_test,predictions) ``` What should I do differently? Thank you!", "response":"Accuracy is a classification metric. You can't use it with a regression. See the documentation for info on the various metrics.", "best_answers_score":0.7658, "library_name":"scikit-learn", "question_url":"https:\/\/stackoverflow.com\/questions\/37367405\/python-scikit-learn-cant-handle-mix-of-multiclass-and-continuous", "best_answers_votes":81, "question_length":1353, "response_length":127 }, { "question":"Difference between Standard scaler and MinMaxScaler What is the difference between MinMaxScaler() and StandardScaler(). 
mms = MinMaxScaler(feature_range = (0, 1)) (Used in a machine learning model) sc = StandardScaler() (In another machine learning model they used standard-scaler and not min-max-scaler)", "response":"MinMaxScaler(feature_range = (0, 1)) will transform each value in the column proportionally within the range [0,1]. Use this as the first scaler choice to transform a feature, as it will preserve the shape of the dataset (no distortion). StandardScaler() will transform each value in the column to range about the mean 0 and standard deviation 1, ie, each value will be normalised by subtracting the mean and dividing by standard deviation. Use StandardScaler if you know the data distribution is normal. If there are outliers, use RobustScaler(). Alternatively you could remove the outliers and use either of the above 2 scalers (choice depends on whether data is normally distributed) Additional Note: If scaler is used before train_test_split, data leakage will happen. Do use scaler after train_test_split", "best_answers_score":0.7653, "library_name":"scikit-learn", "question_url":"https:\/\/stackoverflow.com\/questions\/51237635\/difference-between-standard-scaler-and-minmaxscaler", "best_answers_votes":87, "question_length":304, "response_length":809 }, { "question":"Use scikit-learn to classify into multiple categories I'm trying to use one of scikit-learn's supervised learning methods to classify pieces of text into one or more categories. The predict function of all the algorithms I tried just returns one match. For example I have a piece of text: ``` \"Theaters in New York compared to those in London\" ``` And I have trained the algorithm to pick a place for every text snippet I feed it. In the above example I would want it to return New York and London, but it only returns New York. Is it possible to use scikit-learn to return multiple results? Or even return the label with the next highest probability? Thanks for your help. 
---Update I tried using OneVsRestClassifier but I still only get one option back per piece of text. Below is the sample code I am using ``` y_train = ('New York','London') train_set = (\"new york nyc big apple\", \"london uk great britain\") vocab = {'new york' :0,'nyc':1,'big apple':2,'london' : 3, 'uk': 4, 'great britain' : 5} count = CountVectorizer(analyzer=WordNGramAnalyzer(min_n=1, max_n=2),vocabulary=vocab) test_set = ('nice day in nyc','london town','hello welcome to the big apple. enjoy it here and london too') X_vectorized = count.transform(train_set).todense() smatrix2 = count.transform(test_set).todense() base_clf = MultinomialNB(alpha=1) clf = OneVsRestClassifier(base_clf).fit(X_vectorized, y_train) Y_pred = clf.predict(smatrix2) print Y_pred ``` Result: ['New York' 'London' 'London']", "response":"What you want is called multi-label classification. scikit-learn can do that. See here: http:\/\/scikit-learn.org\/dev\/modules\/multiclass.html. I'm not sure what's going wrong in your example; my version of sklearn apparently doesn't have WordNGramAnalyzer. Perhaps it's a question of using more training examples or trying a different classifier? Though note that the multi-label classifier expects the target to be a list of tuples\/lists of labels.
The following works for me: ``` import numpy as np from sklearn.pipeline import Pipeline from sklearn.feature_extraction.text import CountVectorizer from sklearn.svm import LinearSVC from sklearn.feature_extraction.text import TfidfTransformer from sklearn.multiclass import OneVsRestClassifier X_train = np.array([\"new york is a hell of a town\", \"new york was originally dutch\", \"the big apple is great\", \"new york is also called the big apple\", \"nyc is nice\", \"people abbreviate new york city as nyc\", \"the capital of great britain is london\", \"london is in the uk\", \"london is in england\", \"london is in great britain\", \"it rains a lot in london\", \"london hosts the british museum\", \"new york is great and so is london\", \"i like london better than new york\"]) y_train = [[0],[0],[0],[0],[0],[0],[1],[1],[1],[1],[1],[1],[0,1],[0,1]] X_test = np.array(['nice day in nyc', 'welcome to london', 'hello welcome to new york. enjoy it here and london too']) target_names = ['New York', 'London'] classifier = Pipeline([ ('vectorizer', CountVectorizer(min_n=1,max_n=2)), ('tfidf', TfidfTransformer()), ('clf', OneVsRestClassifier(LinearSVC()))]) classifier.fit(X_train, y_train) predicted = classifier.predict(X_test) for item, labels in zip(X_test, predicted): print '%s => %s' % (item, ', '.join(target_names[x] for x in labels)) ``` For me, this produces the output: ``` nice day in nyc => New York welcome to london => London hello welcome to new york. enjoy it here and london too => New York, London ```", "best_answers_score":0.7642, "library_name":"scikit-learn", "question_url":"https:\/\/stackoverflow.com\/questions\/10526579\/use-scikit-learn-to-classify-into-multiple-categories", "best_answers_votes":111, "question_length":1478, "response_length":1953 }, { "question":"label-encoder encoding missing values I am using the label encoder to convert categorical data into numeric values. How does LabelEncoder handle missing values? 
``` from sklearn.preprocessing import LabelEncoder import pandas as pd import numpy as np a = pd.DataFrame(['A','B','C',np.nan,'D','A']) le = LabelEncoder() le.fit_transform(a) ``` Output: ``` array([1, 2, 3, 0, 4, 1]) ``` For the above example, label encoder changed NaN values to a category. How would I know which category represents missing values?", "response":"Don't use LabelEncoder with missing values. I don't know which version of scikit-learn you're using, but in 0.17.1 your code raises TypeError: unorderable types: str() > float(). As you can see in the source it uses numpy.unique against the data to encode, which raises TypeError if missing values are found. If you want to encode missing values, first change its type to a string: ``` a[pd.isnull(a)] = 'NaN' ```", "best_answers_score":0.7642, "library_name":"scikit-learn", "question_url":"https:\/\/stackoverflow.com\/questions\/36808434\/label-encoder-encoding-missing-values", "best_answers_votes":22, "question_length":513, "response_length":413 }, { "question":"Got continuous is not supported error in RandomForestRegressor I'm just trying to do a simple RandomForestRegressor example. 
But while testing the accuracy I get this error ``` \/Users\/noppanit\/anaconda\/lib\/python2.7\/site-packages\/sklearn\/metrics\/classification.pyc ``` in accuracy_score(y_true, y_pred, normalize, sample_weight) 177 178 # Compute accuracy for each possible representation --> 179 y_type, y_true, y_pred = _check_targets(y_true, y_pred) 180 if y_type.startswith('multilabel'): 181 differing_labels = count_nonzero(y_true - y_pred, axis=1) ``` \/Users\/noppanit\/anaconda\/lib\/python2.7\/site-packages\/sklearn\/metrics\/classification.pyc ``` in _check_targets(y_true, y_pred) 90 if (y_type not in [\"binary\", \"multiclass\", \"multilabel-indicator\", 91 \"multilabel-sequences\"]): ---> 92 raise ValueError(\"{0} is not supported\".format(y_type)) 93 94 if y_type in [\"binary\", \"multiclass\"]: ``` ValueError: continuous is not supported ``` This is the sample of the data. I can't show the real data. ``` target, func_1, func_2, func_2, ... func_200 float, float, float, float, ... float ``` Here's my code. ``` import pandas as pd import numpy as np from sklearn.preprocessing import Imputer from sklearn.ensemble import RandomForestClassifier, RandomForestRegressor, ExtraTreesRegressor, GradientBoostingRegressor from sklearn.cross_validation import train_test_split from sklearn.metrics import accuracy_score from sklearn import tree train = pd.read_csv('data.txt', sep='\\t') labels = train.target train.drop('target', axis=1, inplace=True) cat = ['cat'] train_cat = pd.get_dummies(train[cat]) train.drop(train[cat], axis=1, inplace=True) train = np.hstack((train, train_cat)) imp = Imputer(missing_values='NaN', strategy='mean', axis=0) imp.fit(train) train = imp.transform(train) x_train, x_test, y_train, y_test = train_test_split(train, labels.values, test_size = 0.2) clf = RandomForestRegressor(n_estimators=10) clf.fit(x_train, y_train) y_pred = clf.predict(x_test) accuracy_score(y_test, y_pred) # This is where I get the error. 
```", "response":"It's because accuracy_score is for classification tasks only. For regression you should use something different, for example: ``` clf.score(X_test, y_test) ``` Where X_test is samples, y_test is corresponding ground truth values. It will compute predictions inside.", "best_answers_score":0.764, "library_name":"scikit-learn", "question_url":"https:\/\/stackoverflow.com\/questions\/32664717\/got-continuous-is-not-supported-error-in-randomforestregressor", "best_answers_votes":91, "question_length":2044, "response_length":265 }, { "question":"Fastest pairwise distance metric in python I have an 1D array of numbers, and want to calculate all pairwise euclidean distances. I have a method (thanks to SO) of doing this with broadcasting, but it's inefficient because it calculates each distance twice. And it doesn't scale well. Here's an example that gives me what I want with an array of 1000 numbers. ``` import numpy as np import random r = np.array([random.randrange(1, 1000) for _ in range(0, 1000)]) dists = np.abs(r - r[:, None]) ``` What's the fastest implementation in scipy\/numpy\/scikit-learn that I can use to do this, given that it has to scale to situations where the 1D array has >10k values. Note: the matrix is symmetric, so I'm guessing that it's possible to get at least a 2x speedup by addressing that, I just don't know how.", "response":"Neither of the other answers quite answered the question - 1 was in Cython, one was slower. But both provided very useful hints. Following up on them suggests that scipy.spatial.distance.pdist is the way to go. 
Here's some code: ``` import numpy as np import random import sklearn.metrics.pairwise import scipy.spatial.distance r = np.array([random.randrange(1, 1000) for _ in range(0, 1000)]) c = r[:, None] def option1(r): dists = np.abs(r - r[:, None]) def option2(r): dists = scipy.spatial.distance.pdist(r, 'cityblock') def option3(r): dists = sklearn.metrics.pairwise.manhattan_distances(r) ``` Timing with IPython: ``` In [36]: timeit option1(r) 100 loops, best of 3: 5.31 ms per loop In [37]: timeit option2(c) 1000 loops, best of 3: 1.84 ms per loop In [38]: timeit option3(c) 100 loops, best of 3: 11.5 ms per loop ``` I didn't try the Cython implementation (I can't use it for this project), but comparing my results to the other answer that did, it looks like scipy.spatial.distance.pdist is roughly a third slower than the Cython implementation (taking into account the different machines by benchmarking on the np.abs solution).", "best_answers_score":0.763, "library_name":"scikit-learn", "question_url":"https:\/\/stackoverflow.com\/questions\/20277982\/fastest-pairwise-distance-metric-in-python", "best_answers_votes":34, "question_length":801, "response_length":1142 }, { "question":"Save python random forest model to file In R, after running \"random forest\" model, I can use save.image(\"***.RData\") to store the model. Afterwards, I can just load the model to do predictions directly. Can you do a similar thing in python? I separate the Model and Prediction into two files. And in Model file: ``` rf= RandomForestRegressor(n_estimators=250, max_features=9,compute_importances=True) fit= rf.fit(Predx, Predy) ``` I tried to return rf or fit, but still can't load the model in the prediction file. Can you separate the model and prediction using the sklearn random forest package?", "response":"``` ... 
import cPickle rf = RandomForestRegresor() rf.fit(X, y) with open('path\/to\/file', 'wb') as f: cPickle.dump(rf, f) # in your prediction file with open('path\/to\/file', 'rb') as f: rf = cPickle.load(f) preds = rf.predict(new_X) ```", "best_answers_score":0.7599, "library_name":"scikit-learn", "question_url":"https:\/\/stackoverflow.com\/questions\/20662023\/save-python-random-forest-model-to-file", "best_answers_votes":45, "question_length":597, "response_length":236 }, { "question":"Parameter \"stratify\" from method \"train_test_split\" (scikit Learn) I am trying to use train_test_split from package scikit Learn, but I am having trouble with parameter stratify. Hereafter is the code: ```py from sklearn import cross_validation, datasets X = iris.data[:,:2] y = iris.target cross_validation.train_test_split(X,y,stratify=y) ``` However, I keep getting the following problem: ``` raise TypeError(\"Invalid parameters passed: %s\" % str(options)) TypeError: Invalid parameters passed: {'stratify': array([0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2])} ``` Does someone have an idea what is going on? Below is the function documentation. [...] stratify : array-like or None (default is None) If not None, data is split in a stratified fashion, using this as the labels array. New in version 0.17: stratify splitting [...]", "response":"This stratify parameter makes a split so that the proportion of values in the sample produced will be the same as the proportion of values provided by parameter stratify. 
For example: a binary categorical classification problem, if y is the dependent variable or target\\label column within dataframe following values: 0 25% data is zeros 1 75% data is ones Then stratify=y will make sure that your random split has: 25% of 0's 75% of 1's", "best_answers_score":0.7577, "library_name":"scikit-learn", "question_url":"https:\/\/stackoverflow.com\/questions\/34842405\/parameter-stratify-from-method-train-test-split-scikit-learn", "best_answers_votes":533, "question_length":1238, "response_length":437 }, { "question":"Sklearn kmeans equivalent of elbow method Let's say I'm examining up to 10 clusters, with scipy I usually generate the 'elbow' plot as follows: ``` from scipy import cluster cluster_array = [cluster.vq.kmeans(my_matrix, i) for i in range(1,10)] pyplot.plot([var for (cent,var) in cluster_array]) pyplot.show() ``` I have since became motivated to use sklearn for clustering, however I'm not sure how to create the array needed to plot as in the scipy case. My best guess was: ``` from sklearn.cluster import KMeans km = [KMeans(n_clusters=i) for i range(1,10)] cluster_array = [km[i].fit(my_matrix)] ``` That unfortunately resulted in an invalid command error. What is the best way sklearn way to go about this? Thank you", "response":"you can use the inertia attribute of Kmeans class. 
Assuming X is your dataset: ``` from sklearn.cluster import KMeans from matplotlib import pyplot as plt X = ...  # your dataset distortions = [] for k in range(2, 20): kmeans = KMeans(n_clusters=k) kmeans.fit(X) distortions.append(kmeans.inertia_) fig = plt.figure(figsize=(15, 5)) plt.plot(range(2, 20), distortions) plt.grid(True) plt.title('Elbow curve') ```", "best_answers_score":0.7569, "library_name":"scikit-learn", "question_url":"https:\/\/stackoverflow.com\/questions\/41540751\/sklearn-kmeans-equivalent-of-elbow-method", "best_answers_votes":52, "question_length":721, "response_length":395 }, { "question":"Sklearn transform error: Expected 2D array, got 1D array instead I use sklearn to transform data with this code. ``` sc = MinMaxScaler() test= df['outcome'] y = sc.fit_transform(test) ``` It shows this error. ``` ValueError: Expected 2D array, got 1D array instead: array=[ 21000. 36000. 5000. ... 7000. 12000. 11000.]. Reshape your data either using array.reshape(-1, 1) if your data has a single feature or array.reshape(1, -1) if it contains a single sample. ``` How to fix it?", "response":"If I remember correctly, MinMaxScaler can accept a pandas dataframe but not a series, so just do test = df[['outcome']] (a dataframe with one column) instead of test = df['outcome'] (a series).", "best_answers_score":0.7569, "library_name":"scikit-learn", "question_url":"https:\/\/stackoverflow.com\/questions\/58498187\/sklearn-transform-error-expected-2d-array-got-1d-array-instead", "best_answers_votes":31, "question_length":484, "response_length":191 }, { "question":"Cannot understand with sklearn's PolynomialFeatures Need help in sklearn's Polynomial Features. It works quite well with one feature but whenever I add multiple features, it also outputs some values in the array besides the values raised to the power of the degrees.
For ex: For this array, ``` X=np.array([[230.1,37.8,69.2]]) ``` when I try to ``` X_poly=poly.fit_transform(X) ``` It outputs ``` [[ 1.00000000e+00 2.30100000e+02 3.78000000e+01 6.92000000e+01 5.29460100e+04 8.69778000e+03 1.59229200e+04 1.42884000e+03 2.61576000e+03 4.78864000e+03]] ``` Here, what is 8.69778000e+03,1.59229200e+04,2.61576000e+03 ?", "response":"If you have features [a, b, c], the default polynomial features (in sklearn the degree is 2) are [1, a, b, c, a^2, ab, ac, b^2, bc, c^2], in that order. 2.61576000e+03 is 37.8x69.2=2615.76 (2615.76 = 2.61576000 x 10^3), i.e. the product bc. In a simple way, with PolynomialFeatures you can create new features. There is a good reference here. Of course there are also disadvantages (\"overfitting\") of using PolynomialFeatures (see here). Edit: We have to be careful when using polynomial features. The formula for calculating the number of polynomial features is N(n,d)=C(n+d,d), where n is the number of features, d is the degree of the polynomial, and C is the binomial coefficient (combination). In our case the number is C(3+2,2)=5!\/(5-2)!2!=10, but when the number of features or the degree is high, the polynomial features become too many. For example: ``` N(100,2)=5151 N(100,5)=96560646 ``` So in this case you may need to apply regularization to penalize some of the weights. It is quite possible that the algorithm will start to suffer from the curse of dimensionality (here is also a very nice discussion).", "best_answers_score":0.7537, "library_name":"scikit-learn", "question_url":"https:\/\/stackoverflow.com\/questions\/51906274\/cannot-understand-with-sklearns-polynomialfeatures", "best_answers_votes":37, "question_length":616, "response_length":1083 }, { "question":"What is the difference between partial fit and warm start? Context: I am using Passive Aggressor from scikit library and confused whether to use warm start or partial fit.
Efforts hitherto: Referred this thread discussion: https:\/\/github.com\/scikit-learn\/scikit-learn\/issues\/1585 Gone through the scikit code for _fit and _partial_fit. My observations: _fit in turn calls _partial_fit. When warm_start is set, _fit calls _partial_fit with self.coef_ When _partial_fit is called without coef_init parameter and self.coef_ is set, it continues to use self.coef_ Question: I feel both are ultimately providing the same functionalities.Then, what is the basic difference between them? In which contexts, either of them are used? Am I missing something evident? Any help is appreciated!", "response":"I don't know about the Passive Aggressor, but at least when using the SGDRegressor, partial_fit will only fit for 1 epoch, whereas fit will fit for multiple epochs (until the loss converges or max_iter is reached). Therefore, when fitting new data to your model, partial_fit will only correct the model one step towards the new data, but with fit and warm_start it will act as if you would combine your old data and your new data together and fit the model once until convergence. Example: ``` from sklearn.linear_model import SGDRegressor import numpy as np np.random.seed(0) X = np.linspace(-1, 1, num=50).reshape(-1, 1) Y = (X * 1.5 + 2).reshape(50,) modelFit = SGDRegressor(learning_rate=\"adaptive\", eta0=0.01, random_state=0, verbose=1, shuffle=True, max_iter=2000, tol=1e-3, warm_start=True) modelPartialFit = SGDRegressor(learning_rate=\"adaptive\", eta0=0.01, random_state=0, verbose=1, shuffle=True, max_iter=2000, tol=1e-3, warm_start=False) # first fit some data modelFit.fit(X, Y) modelPartialFit.fit(X, Y) # for both: Convergence after 50 epochs, Norm: 1.46, NNZs: 1, Bias: 2.000027, T: 2500, Avg. loss: 0.000237 print(modelFit.coef_, modelPartialFit.coef_) # for both: [1.46303288] # now fit new data (zeros) newX = X newY = 0 * Y # fits only for 1 epoch, Norm: 1.23, NNZs: 1, Bias: 1.208630, T: 50, Avg. 
loss: 1.595492: modelPartialFit.partial_fit(newX, newY) # Convergence after 49 epochs, Norm: 0.04, NNZs: 1, Bias: 0.000077, T: 2450, Avg. loss: 0.000313: modelFit.fit(newX, newY) print(modelFit.coef_, modelPartialFit.coef_) # [0.04245779] vs. [1.22919864] newX = np.reshape([2], (-1, 1)) print(modelFit.predict(newX), modelPartialFit.predict(newX)) # [0.08499296] vs. [3.66702685] ```", "best_answers_score":0.7535, "library_name":"scikit-learn", "question_url":"https:\/\/stackoverflow.com\/questions\/38052342\/what-is-the-difference-between-partial-fit-and-warm-start", "best_answers_votes":15, "question_length":781, "response_length":1701 }, { "question":"What does KFold in python exactly do? I am looking at this tutorial: https:\/\/www.dataquest.io\/mission\/74\/getting-started-with-kaggle I got to part 9, making predictions. There is some data in a dataframe called titanic, which is then divided up in folds using: ``` # Generate cross validation folds for the titanic dataset. It return the row indices corresponding to train and test. # We set random_state to ensure we get the same splits every time we run this. kf = KFold(titanic.shape[0], n_folds=3, random_state=1) ``` I am not sure what it is exactly doing and what kind of object kf is. I tried reading the documentation but it did not help much. Also, there are three folds (n_folds=3), so why is it later only accessing train and test (and how do I know they are called train and test) in this line? ``` for train, test in kf: ```", "response":"KFold will provide train\/test indices to split the data into train and test sets. It will split the dataset into k consecutive folds (without shuffling by default). Each fold is then used as a validation set once while the k - 1 remaining folds form the training set (source). Let's say you have some data indices from 1 to 10.
If you use n_folds=k, in the i-th iteration (i<=k) you will get the i-th fold as test indices and the remaining (k-1) folds (without that i-th fold) together as train indices. An example ``` import numpy as np from sklearn.cross_validation import KFold x = [1,2,3,4,5,6,7,8,9,10,11,12] kf = KFold(12, n_folds=3) for train_index, test_index in kf: print (train_index, test_index) ``` Output Fold 1: [ 4 5 6 7 8 9 10 11] [0 1 2 3] Fold 2: [ 0 1 2 3 8 9 10 11] [4 5 6 7] Fold 3: [0 1 2 3 4 5 6 7] [ 8 9 10 11] Import update for sklearn 0.20: the KFold object was moved to the sklearn.model_selection module in version 0.20. To import KFold in sklearn 0.20+ use from sklearn.model_selection import KFold. KFold current documentation source", "best_answers_score":0.7512, "library_name":"scikit-learn", "question_url":"https:\/\/stackoverflow.com\/questions\/36063014\/what-does-kfold-in-python-exactly-do", "best_answers_votes":29, "question_length":843, "response_length":1034 }, { "question":"Using the predict_proba() function of RandomForestClassifier in the safe and right way I'm using Scikit-learn. Sometimes I need to have the probabilities of labels\/classes instead of the labels\/classes themselves. Instead of having Spam\/Not Spam as labels of emails, I wish to have only for example: 0.78 probability a given email is Spam. For such purpose, I'm using predict_proba() with RandomForestClassifier as follows: ``` clf = RandomForestClassifier(n_estimators=10, max_depth=None, min_samples_split=1, random_state=0) scores = cross_val_score(clf, X, y) print(scores.mean()) classifier = clf.fit(X,y) predictions = classifier.predict_proba(Xtest) print(predictions) ``` And I got those results: ``` [ 0.4 0.6] [ 0.1 0.9] [ 0.2 0.8] [ 0.7 0.3] [ 0.3 0.7] [ 0.3 0.7] [ 0.7 0.3] [ 0.4 0.6] ``` Where the second column is for class: Spam. However, I have two main issues with the results about which I am not confident.
The first issue: do the results represent the probabilities of the labels without being affected by the size of my data? The second issue is that the results show only one digit, which is not very specific in some cases, where a 0.701 probability is very different from 0.708. Is there any way to get the next 5 digits, for example?", "response":"A RandomForestClassifier is a collection of DecisionTreeClassifier's. No matter how big your training set, a decision tree simply returns: a decision. One class has probability 1, the other classes have probability 0. The RandomForest simply votes among the results. predict_proba() returns the number of votes for each class (each tree in the forest makes its own decision and chooses exactly one class), divided by the number of trees in the forest. Hence, your precision is exactly 1\/n_estimators. Want more \"precision\"? Add more estimators. If you want to see variation at the 5th digit, you will need 10**5 = 100,000 estimators, which is excessive. You normally don't want more than 100 estimators, and often not that many.", "best_answers_score":0.7508, "library_name":"scikit-learn", "question_url":"https:\/\/stackoverflow.com\/questions\/30814231\/using-the-predict-proba-function-of-randomforestclassifier-in-the-safe-and-rig", "best_answers_votes":38, "question_length":1261, "response_length":728 }, { "question":"Unbalanced classification using RandomForestClassifier in sklearn I have a dataset where the classes are unbalanced. The classes are either '1' or '0' where the ratio of class '1':'0' is 5:1. How do you calculate the prediction error for each class and then rebalance the weights accordingly in sklearn with Random Forest, kind of like in the following link: http:\/\/www.stat.berkeley.edu\/~breiman\/RandomForests\/cc_home.htm#balance", "response":"You can pass a sample_weight argument to the Random Forest fit method ``` sample_weight : array-like, shape = [n_samples] or None ``` Sample weights.
If None, then samples are equally weighted. Splits that would create child nodes with net zero or negative weight are ignored while searching for a split in each node. In the case of classification, splits are also ignored if they would result in any single class carrying a negative weight in either child node. In older versions there was a preprocessing.balance_weights method to generate balance weights for given samples, such that classes become uniformly distributed. It is still there, in the internal but still usable preprocessing._weights module, but it is deprecated and will be removed in future versions. I don't know the exact reasons for this. Update Some clarification, as you seem to be confused. sample_weight usage is straightforward once you remember that its purpose is to balance target classes in the training dataset. That is, if you have X as observations and y as classes (labels), then len(X) == len(y) == len(sample_weight), and each element of the sample_weight 1-d array represents the weight for a corresponding (observation, label) pair. For your case, if class 1 is represented 5 times as often as class 0, and you want to balance the class distributions, you could simply use ``` sample_weight = np.array([5 if i == 0 else 1 for i in y]) ``` assigning a weight of 5 to all 0 instances and a weight of 1 to all 1 instances. See the link above for the slightly more crafty balance_weights weight-evaluation function.", "best_answers_score":0.7506, "library_name":"scikit-learn", "question_url":"https:\/\/stackoverflow.com\/questions\/20082674\/unbalanced-classification-using-randomforestclassifier-in-sklearn", "best_answers_votes":52, "question_length":425, "response_length":1539 }, { "question":"What is exactly sklearn.pipeline.Pipeline? I can't figure out how the sklearn.pipeline.Pipeline works exactly. There are a few explanations in the doc. For example, what do they mean by: Pipeline of transforms with a final estimator. To make my question clearer, what are steps? How do they work?
Edit Thanks to the answers I can make my question clearer: When I call pipeline and pass, as steps, two transformers and one estimator, e.g.: ``` pipln = Pipeline([(\"trsfm1\",transformer_1), (\"trsfm2\",transformer_2), (\"estmtr\",estimator)]) ``` What happens when I call this? ``` pipln.fit() OR pipln.fit_transform() ``` I can't figure out how an estimator can be a transformer and how a transformer can be fitted.", "response":"Transformer in scikit-learn - some class that has fit and transform methods, or a fit_transform method. Predictor - some class that has fit and predict methods, or a fit_predict method. Pipeline is just an abstract notion, it's not some existing ml algorithm. Often in ML tasks you need to perform a sequence of different transformations (find a set of features, generate new features, select only some good features) of the raw dataset before applying a final estimator. Here is a good example of Pipeline usage. Pipeline gives you a single interface for all 3 steps of transformation and the resulting estimator. It encapsulates transformers and predictors inside, and now you can do something like: ``` vect = CountVectorizer() tfidf = TfidfTransformer() clf = SGDClassifier() vX = vect.fit_transform(Xtrain) tfidfX = tfidf.fit_transform(vX) clf.fit(tfidfX, ytrain) predicted = clf.predict(tfidfX) # Now evaluate all steps on the test set vX = vect.transform(Xtest) tfidfX = tfidf.transform(vX) predicted = clf.predict(tfidfX) ``` With just: ``` pipeline = Pipeline([ ('vect', CountVectorizer()), ('tfidf', TfidfTransformer()), ('clf', SGDClassifier()), ]) predicted = pipeline.fit(Xtrain, ytrain).predict(Xtrain) # Now evaluate all steps on the test set predicted = pipeline.predict(Xtest) ``` With pipelines you can easily perform a grid-search over the set of parameters for each step of this meta-estimator, as described in the link above. All steps except the last one must be transforms; the last step can be a transformer or a predictor.
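As a hedged sketch of that grid-search point (the toy corpus and the searched parameter values below are illustrative assumptions, but the step-name double-underscore key convention is real sklearn behavior):

```python
# Tune parameters of individual Pipeline steps with GridSearchCV,
# addressing each step through '<step name>__<parameter>' keys.
from sklearn.pipeline import Pipeline
from sklearn.feature_extraction.text import CountVectorizer, TfidfTransformer
from sklearn.linear_model import SGDClassifier
from sklearn.model_selection import GridSearchCV

docs = ['spam spam buy now', 'cheap spam offer',
        'meeting at noon', 'project deadline today'] * 5
labels = [1, 1, 0, 0] * 5

pipeline = Pipeline([
    ('vect', CountVectorizer()),
    ('tfidf', TfidfTransformer()),
    ('clf', SGDClassifier(random_state=0)),
])

param_grid = {
    'vect__ngram_range': [(1, 1), (1, 2)],  # parameter of the 'vect' step
    'clf__alpha': [1e-4, 1e-3],             # parameter of the 'clf' step
}
search = GridSearchCV(pipeline, param_grid, cv=2)
search.fit(docs, labels)
print(search.best_params_)
```

Note that GridSearchCV clones and refits the whole pipeline for every parameter combination, so each candidate re-runs the vectorizer as well.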
Answer to edit: When you call pipln.fit() - each transformer inside the pipeline will be fitted on the outputs of the previous transformer (the first transformer is fitted on the raw dataset). The last estimator may be a transformer or a predictor; you can call fit_transform() on the pipeline only if your last estimator is a transformer (that implements fit_transform, or transform and fit methods separately), and you can call fit_predict() or predict() on the pipeline only if your last estimator is a predictor. So you just can't call fit_transform or transform on a pipeline whose last step is a predictor.", "best_answers_score":0.7493, "library_name":"scikit-learn", "question_url":"https:\/\/stackoverflow.com\/questions\/33091376\/what-is-exactly-sklearn-pipeline-pipeline", "best_answers_votes":235, "question_length":706, "response_length":2059 }, { "question":"Preprocessing in scikit learn - single sample - Deprecation warning On a fresh installation of Anaconda under Ubuntu... I am preprocessing my data in various ways prior to a classification task using Scikit-Learn. ``` from sklearn import preprocessing scaler = preprocessing.MinMaxScaler().fit(train) train = scaler.transform(train) test = scaler.transform(test) ``` This all works fine, but if I have a new sample (temp below) that I want to classify (and thus want to preprocess in the same way): ``` temp = [1,2,3,4,5,5,6,....................,7] temp = scaler.transform(temp) ``` Then I get a deprecation warning... ``` DeprecationWarning: Passing 1d arrays as data is deprecated in 0.17 and will raise ValueError in 0.19. Reshape your data either using X.reshape(-1, 1) if your data has a single feature or X.reshape(1, -1) if it contains a single sample. ``` So the question is how should I be rescaling a single sample like this? I suppose an alternative (not very good one) would be...
``` temp = [temp, temp] temp = scaler.transform(temp) temp = temp[0] ``` But I'm sure there are better ways.", "response":"Just listen to what the warning is telling you: Reshape your data using either X.reshape(-1, 1) if your data has a single feature\/column or X.reshape(1, -1) if it contains a single sample. For your example (a single sample with more than one feature\/column): ``` temp = np.array(temp).reshape(1, -1) ``` For one feature\/column: ``` temp = np.array(temp).reshape(-1, 1) ```", "best_answers_score":0.7446, "library_name":"scikit-learn", "question_url":"https:\/\/stackoverflow.com\/questions\/35082140\/preprocessing-in-scikit-learn-single-sample-depreciation-warning", "best_answers_votes":51, "question_length":1112, "response_length":340 }, { "question":"What are different options for objective functions available in xgboost.XGBClassifier? Apart from binary:logistic (which is the default objective function), is there any other built-in objective function that can be used in xgboost.XGBClassifier ?", "response":"That's true that binary:logistic is the default objective for XGBClassifier, but I don't see any reason why you couldn't use other objectives offered by the XGBoost package. For example, you can see in the sklearn.py source code that multi:softprob is used explicitly in the multiclass case. Moreover, if it's really necessary, you can provide a custom objective function (details here).", "best_answers_score":0.7442, "library_name":"scikit-learn", "question_url":"https:\/\/stackoverflow.com\/questions\/45815708\/what-are-different-options-for-objective-functions-available-in-xgboost-xgbclass", "best_answers_votes":29, "question_length":247, "response_length":375 }, { "question":"Understanding min_df and max_df in scikit CountVectorizer I have five text files that I input to a CountVectorizer. When specifying min_df and max_df to the CountVectorizer instance, what does the min\/max document frequency exactly mean?
Is it the frequency of a word in its particular text file or is it the frequency of the word in the entire overall corpus (five text files)? What are the differences when min_df and max_df are provided as integers or as floats? The documentation doesn't seem to provide a thorough explanation nor does it supply an example to demonstrate the use of these two parameters. Could someone provide an explanation or example demonstrating min_df and max_df?", "response":"max_df is used for removing terms that appear too frequently, also known as \"corpus-specific stop words\". For example: max_df = 0.50 means \"ignore terms that appear in more than 50% of the documents\". max_df = 25 means \"ignore terms that appear in more than 25 documents\". The default max_df is 1.0, which means \"ignore terms that appear in more than 100% of the documents\". Thus, the default setting does not ignore any terms. min_df is used for removing terms that appear too infrequently. For example: min_df = 0.01 means \"ignore terms that appear in less than 1% of the documents\". min_df = 5 means \"ignore terms that appear in less than 5 documents\". The default min_df is 1, which means \"ignore terms that appear in less than 1 document\". Thus, the default setting does not ignore any terms.", "best_answers_score":0.744, "library_name":"scikit-learn", "question_url":"https:\/\/stackoverflow.com\/questions\/27697766\/understanding-min-df-and-max-df-in-scikit-countvectorizer", "best_answers_votes":367, "question_length":688, "response_length":797 }, { "question":"How to upgrade the classifier to the latest version of scikit-learn I have a big trained TfidfVectorizer dumped with joblib.dump. It was created on my laptop with scikit-learn version 0.18. 
When I try to move it to my server, where the newest version of scikit-learn 0.18.1 is installed, I get the following warnings: ``` \/usr\/local\/lib\/python2.7\/dist-packages\/sklearn\/base.py:315: UserWarning: Trying to unpickle estimator TfidfTransformer from version 0.18 when using version 0.18.1. This might lead to breaking code or invalid results. Use at your own risk. UserWarning) \/usr\/local\/lib\/python2.7\/dist-packages\/sklearn\/base.py:315: UserWarning: Trying to unpickle estimator TfidfVectorizer from version 0.18 when using version 0.18.1. This might lead to breaking code or invalid results. Use at your own risk. UserWarning) ``` Is there a natural way to upgrade my TfidfVectorizer to prevent any problems? Or would it be better to uninstall scikit-learn 0.18.1 and install version 0.18 on the server instead?", "response":"Yes, you should install the same version on your server as you used for development. Best practice is to keep a requirements.txt with all the requirements of your project and install a fresh environment on your server using conda or virtualenv; this will save you the trouble of setting everything up manually.", "best_answers_score":0.7439, "library_name":"scikit-learn", "question_url":"https:\/\/stackoverflow.com\/questions\/40665227\/how-to-upgrade-the-classifier-to-the-latest-version-of-scikit-learn", "best_answers_votes":16, "question_length":1011, "response_length":306 }, { "question":"what is the difference between transformer and estimator in sklearn? I saw both transformer and estimator were mentioned in the sklearn documentation. Is there any difference between these two words?", "response":"The basic difference is that a Transformer transforms the input data (X) in some way, while an Estimator predicts a new value (or values) (y) by using the input data (X). Both the Transformer and the Estimator should have a fit() method which can be used to train them (they learn some characteristics of the data).
The signature is: ``` fit(X, y) ``` fit() returns the estimator itself and stores the learnt characteristics inside the object. Here X represents the samples (feature vectors) and y is the target vector (which may have single or multiple values per corresponding sample in X). Note that y can be optional in some transformers where it's not needed, but it's mandatory for most estimators (supervised estimators). Look at StandardScaler, for example. It needs the initial data X for finding the mean and std of the data (it learns the characteristics of X; y is not needed). Each Transformer should have a transform(X) function which, like fit(), takes the input X and returns a new, transformed version of X (which generally should have the same number of samples but may or may not have the same features). On the other hand, an Estimator should have a predict(X) method which outputs the predicted value of y from the given X. Some classes in scikit-learn implement both transform() and predict(), like KMeans; in that case, carefully reading the documentation should solve your doubts.", "best_answers_score":0.7423, "library_name":"scikit-learn", "question_url":"https:\/\/stackoverflow.com\/questions\/54899647\/what-is-the-difference-between-transformer-and-estimator-in-sklearn", "best_answers_votes":16, "question_length":199, "response_length":1389 }, { "question":"How to get a regression summary in scikit-learn like R does? As an R user, I wanted to also get up to speed on scikit. Creating a linear regression model(s) is fine, but can't seem to find a reasonable way to get a standard summary of regression output.
Code example: ``` # Linear Regression import numpy as np from sklearn import datasets from sklearn.linear_model import LinearRegression # Load the diabetes datasets dataset = datasets.load_diabetes() # Fit a linear regression model to the data model = LinearRegression() model.fit(dataset.data, dataset.target) print(model) # Make predictions expected = dataset.target predicted = model.predict(dataset.data) # Summarize the fit of the model mse = np.mean((predicted-expected)**2) print model.intercept_, model.coef_, mse, print(model.score(dataset.data, dataset.target)) ``` Issues: seems like the intercept and coef are built into the model, and I just type print (second to last line) to see them. What about all the other standard regression output like R^2, adjusted R^2, p values, etc. If I read the examples correctly, seems like you have to write a function\/equation for each of these and then print it. So, is there no standard summary output for lin. reg. models? Also, in my printed array of outputs of coefficients, there are no variable names associated with each of these? I just get the numeric array. Is there a way to print these where I get an output of the coefficients and the variable they go with? My printed output: ``` LinearRegression(copy_X=True, fit_intercept=True, normalize=False) 152.133484163 [ -10.01219782 -239.81908937 519.83978679 324.39042769 -792.18416163 476.74583782 101.04457032 177.06417623 751.27932109 67.62538639] 2859.69039877 0.517749425413 ``` Notes: Started off with Linear, Ridge and Lasso. I have gone through the examples. 
Below is for the basic OLS.", "response":"I use: ``` import numpy as np import sklearn.metrics as metrics def regression_results(y_true, y_pred): # Regression metrics explained_variance=metrics.explained_variance_score(y_true, y_pred) mean_absolute_error=metrics.mean_absolute_error(y_true, y_pred) mse=metrics.mean_squared_error(y_true, y_pred) mean_squared_log_error=metrics.mean_squared_log_error(y_true, y_pred) median_absolute_error=metrics.median_absolute_error(y_true, y_pred) r2=metrics.r2_score(y_true, y_pred) print('explained_variance: ', round(explained_variance,4)) print('mean_squared_log_error: ', round(mean_squared_log_error,4)) print('median_absolute_error: ', round(median_absolute_error,4)) print('r2: ', round(r2,4)) print('MAE: ', round(mean_absolute_error,4)) print('MSE: ', round(mse,4)) print('RMSE: ', round(np.sqrt(mse),4)) ```", "best_answers_score":0.74, "library_name":"scikit-learn", "question_url":"https:\/\/stackoverflow.com\/questions\/26319259\/how-to-get-a-regression-summary-in-scikit-learn-like-r-does", "best_answers_votes":54, "question_length":1855, "response_length":728 }, { "question":"Is it possible to toggle a certain step in sklearn pipeline? I wonder if we can set up an \"optional\" step in sklearn.pipeline. For example, for a classification problem, I may want to try an ExtraTreesClassifier with AND without a PCA transformation ahead of it. In practice, it might be a pipeline with an extra parameter specifying the toggle of the PCA step, so that I can optimize on it via GridSearch, etc. I don't see such an implementation in sklearn source, but is there any work-around? Furthermore, since the possible parameter values of a following step in the pipeline might depend on the parameters in a previous step (e.g., valid values of ExtraTreesClassifier.max_features depend on PCA.n_components), is it possible to specify such a conditional dependency in sklearn.pipeline and sklearn.grid_search?
Thank you!", "response":"Pipeline steps cannot currently be made optional in a grid search, but as a quick workaround you could wrap the PCA class into your own OptionalPCA component with a boolean parameter to turn off PCA when requested. You might want to have a look at hyperopt to set up more complex search spaces. I think it has good sklearn integration to support this kind of pattern by default, but I cannot find the doc anymore. Maybe have a look at this talk. For the dependent parameters problem, GridSearchCV supports trees of parameters to handle this case, as demonstrated in the documentation.", "best_answers_score":0.74, "library_name":"scikit-learn", "question_url":"https:\/\/stackoverflow.com\/questions\/19262621\/is-it-possible-to-toggle-a-certain-step-in-sklearn-pipeline", "best_answers_votes":18, "question_length":827, "response_length":581 }, { "question":"Comparing Results from StandardScaler vs Normalizer in Linear Regression I'm working through some examples of Linear Regression under different scenarios, comparing the results from using Normalizer and StandardScaler, and the results are puzzling. I'm using the boston housing dataset, and prepping it this way: ``` import numpy as np import pandas as pd from sklearn.datasets import load_boston from sklearn.preprocessing import Normalizer from sklearn.preprocessing import StandardScaler from sklearn.linear_model import LinearRegression #load the data boston = load_boston() df = pd.DataFrame(boston.data) df.columns = boston.feature_names df['PRICE'] = boston.target ``` I'm currently trying to reason about the results I get from the following scenarios: Initializing Linear Regression with the parameter normalize=True vs using Normalizer Initializing Linear Regression with the parameter fit_intercept = False with and without standardization. Collectively, I find the results confusing.
Here's how I'm setting everything up: ``` # Prep the data X = df.iloc[:, :-1] y = df.iloc[:, -1:] normal_X = Normalizer().fit_transform(X) scaled_X = StandardScaler().fit_transform(X) #now prepare some of the models reg1 = LinearRegression().fit(X, y) reg2 = LinearRegression(normalize=True).fit(X, y) reg3 = LinearRegression().fit(normal_X, y) reg4 = LinearRegression().fit(scaled_X, y) reg5 = LinearRegression(fit_intercept=False).fit(scaled_X, y) ``` Then, I created 3 separate dataframes to compare the R_score, coefficient values, and predictions from each model. To create the dataframe to compare coefficient values from each model, I did the following: ``` #Create a dataframe of the coefficients coef = pd.DataFrame({ 'coeff': reg1.coef_[0], 'coeff_normalize_true': reg2.coef_[0], 'coeff_normalizer': reg3.coef_[0], 'coeff_scaler': reg4.coef_[0], 'coeff_scaler_no_int': reg5.coef_[0] }) ``` Here's how I created the dataframe to compare the R^2 values from each model: ``` scores = pd.DataFrame({ 'score': reg1.score(X, y), 'score_normalize_true': reg2.score(X, y), 'score_normalizer': reg3.score(normal_X, y), 'score_scaler': reg4.score(scaled_X, y), 'score_scaler_no_int': reg5.score(scaled_X, y) }, index=range(1) ) ``` Lastly, here's the dataframe that compares the predictions from each: ``` predictions = pd.DataFrame({ 'pred': reg1.predict(X).ravel(), 'pred_normalize_true': reg2.predict(X).ravel(), 'pred_normalizer': reg3.predict(normal_X).ravel(), 'pred_scaler': reg4.predict(scaled_X).ravel(), 'pred_scaler_no_int': reg5.predict(scaled_X).ravel() }, index=range(len(y))) ``` Here are the resulting dataframes: COEFFICIENTS: SCORES: PREDICTIONS: I have three questions that I can't reconcile: Why is there absolutely no difference between the first two models? It appears that setting normalize=False does nothing. 
I can understand having predictions and R^2 values that are the same, but my features have different numerical scales, so I'm not sure why normalizing would have no effect at all. This is doubly confusing when you consider that using StandardScaler changes the coefficients considerably. I don't understand why the model using Normalizer causes such radically different coefficient values from the others, especially when the model with LinearRegression(normalize=True) makes no change at all. If you were to look at the documentation for each, it appears they're very similar if not identical. From the docs on sklearn.linear_model.LinearRegression(): normalize : boolean, optional, default False This parameter is ignored when fit_intercept is set to False. If True, the regressors X will be normalized before regression by subtracting the mean and dividing by the l2-norm. Meanwhile, the docs on sklearn.preprocessing.Normalizer state that it normalizes to the l2 norm by default. I don't see a difference between what these two options do, and I don't see why one would have such radical differences in coefficient values from the other. The results from the model using the StandardScaler are coherent to me, but I don't understand why the model using StandardScaler and setting fit_intercept=False performs so poorly. From the docs on the Linear Regression module: fit_intercept : boolean, optional, default True whether to calculate the intercept for this model. If set to False, no intercept will be used in calculations (e.g. data is expected to be already centered). The StandardScaler centers your data, so I don't understand why using it with fit_intercept=False produces incoherent results.", "response":"The reason there is no difference in coefficients between the first two models is that sklearn de-normalizes the coefficients behind the scenes after computing them from the normalized input data.
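To make the de-normalization concrete, here is a hedged sketch (an illustrative reconstruction, not the actual sklearn internals): fit on column-normalized data, then rescale the coefficients back to raw-data units and compare with a plain fit.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.RandomState(0)
X = rng.normal(size=(50, 3)) * [1.0, 10.0, 100.0]   # features on very different scales
y = X @ np.array([1.5, -2.0, 0.5]) + 3.0 + rng.normal(size=50)

raw = LinearRegression().fit(X, y)                  # reference fit on raw data

# Center each column and scale it to unit l2 norm, then fit on that.
mean = X.mean(axis=0)
scale = np.linalg.norm(X - mean, axis=0)
norm_fit = LinearRegression().fit((X - mean) / scale, y)

# De-normalize: the same best-fit line expressed in raw-data units.
coef = norm_fit.coef_ / scale
intercept = norm_fit.intercept_ - mean @ coef
print(np.allclose(coef, raw.coef_), np.isclose(intercept, raw.intercept_))
```

Both comparisons come out equal: normalizing changes the fitted coefficient values, not the fitted line.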
Reference. This de-normalization is done so that, for test data, we can apply the coefficients directly and get predictions without normalizing the test data. Hence, setting normalize=True does have an impact on the coefficients, but they don't affect the best-fit line anyway. Normalizer does the normalization with respect to each sample (meaning row-wise). You can see the reference code here. From the documentation: Normalize samples individually to unit norm. In contrast, normalize=True does the normalization with respect to each column\/feature. Reference Example to understand the impact of normalization along different dimensions of the data. Let us take two dimensions x1 & x2, and let y be the target variable. The target variable value is color-coded in the figure. ``` import numpy as np import matplotlib.pyplot as plt from sklearn.preprocessing import Normalizer, StandardScaler, normalize n = 50 x1 = np.random.normal(0, 2, size=n) x2 = np.random.normal(0, 2, size=n) noise = np.random.normal(0, 1, size=n) y = 5 + 0.5*x1 + 2.5*x2 + noise fig, ax = plt.subplots(1, 4, figsize=(20, 6)) ax[0].scatter(x1, x2, c=y) ax[0].set_title('raw_data', size=15) X = np.column_stack((x1, x2)) column_normalized = normalize(X, axis=0) ax[1].scatter(column_normalized[:,0], column_normalized[:,1], c=y) ax[1].set_title('column_normalized data', size=15) row_normalized = Normalizer().fit_transform(X) ax[2].scatter(row_normalized[:,0], row_normalized[:,1], c=y) ax[2].set_title('row_normalized data', size=15) standardized_data = StandardScaler().fit_transform(X) ax[3].scatter(standardized_data[:,0], standardized_data[:,1], c=y) ax[3].set_title('standardized data', size=15) plt.subplots_adjust(left=0.3, bottom=None, right=0.9, top=None, wspace=0.3, hspace=None) plt.show() ``` You can see that the best-fit line for the data in figs. 1, 2 and 4 would be the same, which signifies that the R^2 score will not change due to column\/feature normalization or standardization; it just ends up with different coefficient values.
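To back up the claim about the score, here is a quick check on made-up data (again a sketch): the R^2 from a fit on raw features matches the fits on standardized and on column-normalized features.

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.preprocessing import StandardScaler, normalize

rng = np.random.RandomState(1)
X = rng.normal(size=(60, 2)) * [1.0, 50.0]
y = 5 + 0.5 * X[:, 0] + 2.5 * X[:, 1] + rng.normal(size=60)

r2_raw = LinearRegression().fit(X, y).score(X, y)

Xs = StandardScaler().fit_transform(X)          # column-wise standardization
r2_std = LinearRegression().fit(Xs, y).score(Xs, y)

Xn = normalize(X, axis=0)                       # column-wise l2 normalization
r2_col = LinearRegression().fit(Xn, y).score(Xn, y)

print(np.isclose(r2_raw, r2_std), np.isclose(r2_raw, r2_col))  # True True
```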
Note: the best-fit line for fig. 3 would be different. When you set fit_intercept=False, the bias term is dropped from the model; that is, the intercept is fixed at zero, whereas otherwise it would have been the mean of the target variable. A prediction with the intercept fixed at zero would be expected to perform badly for problems where the target variable is not centered (mean = 0). You can see a difference of 22.532 in every row, which reflects the impact of the missing intercept on the output.", "best_answers_score":0.7379, "library_name":"scikit-learn", "question_url":"https:\/\/stackoverflow.com\/questions\/54067474\/comparing-results-from-standardscaler-vs-normalizer-in-linear-regression", "best_answers_votes":16, "question_length":4511, "response_length":2626 }, { "question":"How to list all scikit-learn classifiers that support predict_proba() I need a list of all scikit-learn classifiers that support the predict_proba() method. Since the documentation provides no easy way of getting that information, how can I get this programmatically?", "response":"``` from sklearn.utils import all_estimators estimators = all_estimators() for name, class_ in estimators: if hasattr(class_, 'predict_proba'): print(name) ``` You can also use CalibratedClassifierCV to make any classifier into one that has predict_proba. This was asked before on SO, but I can't find it, so you should be excused for the duplicate ;)", "best_answers_score":0.7372, "library_name":"scikit-learn", "question_url":"https:\/\/stackoverflow.com\/questions\/30056331\/how-to-list-all-scikit-learn-classifiers-that-support-predict-proba", "best_answers_votes":57, "question_length":264, "response_length":351 }, { "question":"How to standard scale a 3D matrix? I am working on a signal classification problem and would like to scale the dataset matrix first, but my data is in a 3D format (batch, length, channels).
I tried to use the Scikit-learn StandardScaler: ``` from sklearn.preprocessing import StandardScaler sc = StandardScaler() X_train = sc.fit_transform(X_train) X_test = sc.transform(X_test) ``` But I got this error message: Found array with dim 3. StandardScaler expected <= 2 I think one solution would be to split the matrix by channel into multiple 2D matrices, scale them separately and then put them back in 3D format, but I wonder if there is a better solution. Thank you very much.", "response":"With only 3 lines of code... ``` scaler = StandardScaler() X_train = scaler.fit_transform(X_train.reshape(-1, X_train.shape[-1])).reshape(X_train.shape) X_test = scaler.transform(X_test.reshape(-1, X_test.shape[-1])).reshape(X_test.shape) ```", "best_answers_score":0.7369, "library_name":"scikit-learn", "question_url":"https:\/\/stackoverflow.com\/questions\/50125844\/how-to-standard-scale-a-3d-matrix", "best_answers_votes":32, "question_length":676, "response_length":241 }, { "question":"Avoid certain parameter combinations in GridSearchCV I'm using scikit-learn's GridSearchCV to iterate over a parameter space to tune a model. Specifically, I'm using it to test different hyperparameters in a neural network. The grid is as follows: ``` params = {'num_hidden_layers': [0,1,2], 'hidden_layer_size': [64,128,256], 'activation': ['sigmoid', 'relu', 'tanh']} ``` The problem is that I end up running redundant models when num_hidden_layers is set to 0. It will run a model with 0 hidden layers and 64 units, another with 128 units, and another with 256 units. All of these models are equivalent since there is no hidden layer. This is highly inefficient and it means I need to write more code to remove redundancy in the results. Is there a way to prevent such parameter combinations, perhaps by passing a tuple of parameters?", "response":"The sklearn documentation suggests two parameter grids.
So you could do something like this: ``` param_grid = [ {'num_hidden_layers': [1,2], 'hidden_layer_size': [64,128,256], 'activation': ['sigmoid', 'relu', 'tanh']}, {'num_hidden_layers': [0], 'hidden_layer_size': [64], 'activation': ['sigmoid', 'relu', 'tanh']} ] ```", "best_answers_score":0.7356, "library_name":"scikit-learn", "question_url":"https:\/\/stackoverflow.com\/questions\/45352420\/avoid-certain-parameter-combinations-in-gridsearchcv", "best_answers_votes":32, "question_length":844, "response_length":322 }, { "question":"Cosine similarity between each row in a Dataframe in Python I have a DataFrame containing multiple vectors, each having 3 entries. Each row is a vector in my representation. I need to calculate the cosine similarity between each of these vectors. Is converting this to a matrix representation better, or is there a cleaner approach within the DataFrame itself? Here is the code that I have tried. ``` import pandas as pd from scipy import spatial df = pd.DataFrame([X,Y,Z]).T similarities = df.values.tolist() for x in similarities: for y in similarities: result = 1 - spatial.distance.cosine(x, y) ```", "response":"You can directly just use sklearn.metrics.pairwise.cosine_similarity. Demo ``` import numpy as np; import pandas as pd from sklearn.metrics.pairwise import cosine_similarity df = pd.DataFrame(np.random.randint(0, 2, (3, 5))) df ## 0 1 2 3 4 ## 0 1 1 1 0 0 ## 1 0 0 1 1 1 ## 2 0 1 0 1 0 cosine_similarity(df) ## array([[ 1. , 0.33333333, 0.40824829], ## [ 0.33333333, 1. , 0.40824829], ## [ 0.40824829, 0.40824829, 1. ]]) ```", "best_answers_score":0.7355, "library_name":"scikit-learn", "question_url":"https:\/\/stackoverflow.com\/questions\/45387476\/cosine-similarity-between-each-row-in-a-dataframe-in-python", "best_answers_votes":52, "question_length":594, "response_length":424 }, { "question":"What's the difference between predict_proba and decision_function in scikit-learn?
I'm studying a scikit-learn example (Classifier comparison) and got confused with predict_proba and decision_function. They plot the classification results by drawing the contours using either Z = clf.decision_function(), or Z = clf.predict_proba(). What's the difference between these two? Is it so that each classification method has either of the two as a score? Which one is more appropriate for interpreting the classification result, and how should I choose from the two?", "response":"The latter, predict_proba, is a method of a (soft) classifier outputting the probability of the instance being in each of the classes. The former, decision_function, finds the distance to the separating hyperplane. For example, a(n) SVM classifier finds hyperplanes separating the space into areas associated with classification outcomes. This function, given a point, finds the distance to the separators. I'd guess that predict_proba is more useful in your case, in general - the other method is more specific to the algorithm.", "best_answers_score":0.7339, "library_name":"scikit-learn", "question_url":"https:\/\/stackoverflow.com\/questions\/36543137\/whats-the-difference-between-predict-proba-and-decision-function-in-scikit-lear", "best_answers_votes":44, "question_length":549, "response_length":527 }, { "question":"Efficiently count word frequencies in python I'd like to count frequencies of all words in a text file. ``` >>> countInFile('test.txt') ``` should return {'aaa':1, 'bbb': 2, 'ccc':1} if the target text file is like: ``` # test.txt aaa bbb ccc bbb ``` I've implemented it with pure python following some posts. However, I've found out pure-python ways are insufficient due to huge file size (> 1GB). I think borrowing sklearn's power is a candidate. If you let CountVectorizer count frequencies for each line, I guess you will get word frequencies by summing up each column. But that sounds a bit indirect.
What is the most efficient and straightforward way to count words in a file with Python? Update My (very slow) code is here: ``` import string from collections import Counter def get_term_frequency_in_file(source_file_path): wordcount = {} with open(source_file_path) as f: for line in f: line = line.lower().translate(None, string.punctuation) this_wordcount = Counter(line.split()) wordcount = add_merge_two_dict(wordcount, this_wordcount) return wordcount def add_merge_two_dict(x, y): return { k: x.get(k, 0) + y.get(k, 0) for k in set(x) | set(y) } ```", "response":"The most succinct approach is to use the tools Python gives you. ``` from future_builtins import map # Only on Python 2 from collections import Counter from itertools import chain def countInFile(filename): with open(filename) as f: return Counter(chain.from_iterable(map(str.split, f))) ``` That's it. map(str.split, f) makes an iterator that returns lists of words from each line. Wrapping it in chain.from_iterable converts that to a single iterator that produces a word at a time. Counter takes an input iterable and counts all unique values in it. At the end, you return a dict-like object (a Counter) that stores all unique words and their counts, and during creation, you only store a line of data at a time and the total counts, not the whole file at once. In theory, on Python 2.7 and 3.1, you might do slightly better looping over the chained results yourself and using a dict or collections.defaultdict(int) to count (because Counter is implemented in Python, which can make it slower in some cases), but letting Counter do the work is simpler and more self-documenting (I mean, the whole goal is counting, so use a Counter). Beyond that, on CPython (the reference interpreter) 3.2 and higher, Counter has a C-level accelerator for counting iterable inputs that will run faster than anything you could write in pure Python.
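For reference, a Python 3 variant of the same idea, exercised on a throwaway file (the file contents below are made up):

```python
import tempfile
from collections import Counter
from itertools import chain

def count_in_file(filename):
    # stream the file line by line; Counter aggregates the chained word stream
    with open(filename) as f:
        return Counter(chain.from_iterable(map(str.split, f)))

with tempfile.NamedTemporaryFile('w', suffix='.txt', delete=False) as f:
    f.write("aaa bbb ccc\nbbb\n")
    path = f.name

counts = count_in_file(path)
print(counts)  # Counter({'bbb': 2, 'aaa': 1, 'ccc': 1})
```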
Update: You seem to want punctuation stripped and case-insensitivity, so here's a variant of my earlier code that does that: ``` from string import punctuation def countInFile(filename): with open(filename) as f: linewords = (line.translate(None, punctuation).lower().split() for line in f) return Counter(chain.from_iterable(linewords)) ``` Your code runs much more slowly because it's creating and destroying many small Counter and set objects, rather than .update-ing a single Counter once per line (which, while slightly slower than what I gave in the updated code block, would be at least algorithmically similar in scaling factor).", "best_answers_score":0.7323, "library_name":"scikit-learn", "question_url":"https:\/\/stackoverflow.com\/questions\/35857519\/efficiently-count-word-frequencies-in-python", "best_answers_votes":51, "question_length":1152, "response_length":1972 }, { "question":"Plotting a ROC curve in scikit yields only 3 points TLDR: scikit's roc_curve function is only returning 3 points for a certain dataset. Why could this be, and how do we control how many points to get back? I'm trying to draw a ROC curve, but consistently get a \"ROC triangle\". ``` lr = LogisticRegression(multi_class = 'multinomial', solver = 'newton-cg') y = data['target'].values X = data[['feature']].values model = lr.fit(X,y) # get probabilities for clf probas_ = model.predict_log_proba(X) ``` Just to make sure the lengths are ok: ``` print len(y) print len(probas_[:, 1]) ``` Returns 13759 on both. Then running: ``` false_pos_rate, true_pos_rate, thresholds = roc_curve(y, probas_[:, 1]) print false_pos_rate ``` returns [ 0. 0.28240129 1. ] If I call thresholds, I get array([ 0.4822225 , -0.5177775 , -0.84595197]) (always only 3 points). It is therefore no surprise that my ROC curve looks like a triangle. What I cannot understand is why scikit's roc_curve is only returning 3 points.
Help hugely appreciated.", "response":"The number of points depends on the number of unique values in the input. Since the input vector has only 2 unique values, the function gives the correct output.", "best_answers_score":0.7306, "library_name":"scikit-learn", "question_url":"https:\/\/stackoverflow.com\/questions\/30051284\/plotting-a-roc-curve-in-scikit-yields-only-3-points", "best_answers_votes":20, "question_length":1023, "response_length":156 }, { "question":"ValueError: Unknown label type: 'unknown' I try to run the following code. ``` import pandas as pd import numpy as np from sklearn.linear_model import LogisticRegression # data import and preparation trainData = pd.read_csv('train.csv') train = trainData.values testData = pd.read_csv('test.csv') test = testData.values X = np.c_[train[:, 0], train[:, 2], train[:, 6:7], train[:, 9]] X = np.nan_to_num(X) y = train[:, 1] Xtest = np.c_[test[:, 0:1], test[:, 5:6], test[:, 8]] Xtest = np.nan_to_num(Xtest) # model lr = LogisticRegression() lr.fit(X, y) ``` where y is an np.ndarray of 0s and 1s. However, I receive the following error: ```none File \"C:\\Anaconda3\\lib\\site-packages\\sklearn\\linear_model\\logistic.py\", line >1174, in fit check_classification_targets(y) File \"C:\\Anaconda3\\lib\\site-packages\\sklearn\\utils\\multiclass.py\", line 172, >in check_classification_targets raise ValueError(\"Unknown label type: %r\" % y_type) ValueError: Unknown label type: 'unknown' ``` From the sklearn documentation, I see that ```none y : array-like, shape (n_samples,) Target values (class labels in classification, real numbers in regression) ``` What is my error? FYI, y is np.array([0.0, 1.0, 1.0, ..., 0.0, 1.0, 0.0], dtype=object) whose size is (891,).", "response":"Your y is of type object, so sklearn cannot recognize its type.
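A minimal sketch of the failure and the cast that fixes it (toy data, not the question's CSV; the exact error text varies across scikit-learn versions):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

X = np.array([[0.0], [1.0], [2.0], [3.0]])
y_obj = np.array([0.0, 0.0, 1.0, 1.0], dtype=object)  # object dtype, like train[:, 1]

try:
    LogisticRegression().fit(X, y_obj)   # raises ValueError on affected versions
    raised = False
except ValueError:
    raised = True

clf = LogisticRegression().fit(X, y_obj.astype('int'))  # the cast fixes it
print(raised, clf.predict(X))
```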
Add the line y=y.astype('int') right after the line y = train[:, 1].", "best_answers_score":0.7304, "library_name":"scikit-learn", "question_url":"https:\/\/stackoverflow.com\/questions\/45346550\/valueerror-unknown-label-type-unknown", "best_answers_votes":197, "question_length":1237, "response_length":132 }, { "question":"sklearn plot confusion matrix with labels I want to plot a confusion matrix to visualize the classifier's performance, but it shows only the numbers of the labels, not the labels themselves: ``` from sklearn.metrics import confusion_matrix import pylab as pl y_test=['business', 'business', 'business', 'business', 'business', 'business', 'business', 'business', 'business', 'business', 'business', 'business', 'business', 'business', 'business', 'business', 'business', 'business', 'business', 'business'] pred=array(['health', 'business', 'business', 'business', 'business', 'business', 'health', 'health', 'business', 'business', 'business', 'business', 'business', 'business', 'business', 'business', 'health', 'health', 'business', 'health'], dtype='|S8') cm = confusion_matrix(y_test, pred) pl.matshow(cm) pl.title('Confusion matrix of the classifier') pl.colorbar() pl.show() ``` How can I add the labels (health, business, etc.) to the confusion matrix?", "response":"UPDATE: Check the ConfusionMatrixDisplay. OLD ANSWER: I think it's worth mentioning the use of seaborn.heatmap here.
``` import seaborn as sns import matplotlib.pyplot as plt ax = plt.subplot() sns.heatmap(cm, annot=True, fmt='g', ax=ax); # annot=True to annotate cells, fmt='g' to disable scientific notation # labels, title and ticks ax.set_xlabel('Predicted labels'); ax.set_ylabel('True labels'); ax.set_title('Confusion Matrix'); ax.xaxis.set_ticklabels(['business', 'health']); ax.yaxis.set_ticklabels(['health', 'business']); ```", "best_answers_score":0.7292, "library_name":"scikit-learn", "question_url":"https:\/\/stackoverflow.com\/questions\/19233771\/sklearn-plot-confusion-matrix-with-labels", "best_answers_votes":108, "question_length":959, "response_length":532 }, { "question":"Feature extraction and take color histogram I am working on image-processing feature extraction. I have a photo of a bird from which I have to extract the bird area and tell what color the bird has. I used the Canny edge-detection method to get the edges of the bird. How do I extract only the bird area and make the background blue? An OpenCV solution would also be fine. ``` import skimage import numpy as np %matplotlib inline import matplotlib.pyplot as plt import os filename = os.path.join(os.getcwd(),'image\\image_bird.jpeg') from skimage import io bird = io.imread(filename, as_grey=True) plt.imshow(bird) ``` ``` from skimage import feature edges = feature.canny(bird, sigma=1) plt.imshow(edges) ``` Actual bird image can be taken from bird link", "response":"1. Identify the edges of your image. 2. Binarize the image via automatic thresholding. 3. Use contour detection to identify the black regions which are inside a white region and merge them with the white region. (Mockup, image may slightly vary.) 4. Use the created image as a mask to color the background. This can be done by simply setting each background pixel (black) to its respective color. As you can see, the approach is far from perfect, but should give you a general idea about how to accomplish your task.
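The steps above can be sketched on a synthetic image (a bright disk stands in for the bird; assumes scikit-image and SciPy are available, and uses hole-filling in place of explicit contour merging):

```python
import numpy as np
from scipy import ndimage
from skimage import feature

# synthetic grayscale image: a bright disk on a dark background
img = np.zeros((64, 64))
yy, xx = np.ogrid[:64, :64]
img[(yy - 32) ** 2 + (xx - 32) ** 2 < 15 ** 2] = 1.0

edges = feature.canny(img, sigma=1)       # step 1: edge map
mask = ndimage.binary_fill_holes(edges)   # steps 2-3: fill the closed contour

# step 4: paint the background blue and keep the foreground pixels
rgb = np.zeros(img.shape + (3,))
rgb[..., 2] = 1.0                         # blue background
rgb[mask] = img[mask][:, None]            # broadcast gray values over RGB

print(bool(mask[32, 32]), bool(mask[0, 0]))  # True False
```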
The final image quality might be improved by slightly eroding the mask to tighten it to the contours of the bird. You then also use the mask to calculate your color histogram by only taking foreground pixels into account. Edit: Look here: Eroded mask Final image", "best_answers_score":0.7276, "library_name":"scikit-learn", "question_url":"https:\/\/stackoverflow.com\/questions\/52153979\/feature-extraction-and-take-color-histogram", "best_answers_votes":23, "question_length":748, "response_length":769 }, { "question":"How is the R2 value in Scikit learn calculated? The R^2 value returned by scikit learn (metrics.r2_score()) can be negative. The docs say: \"Unlike most other scores, R\u00b2 score may be negative (it need not actually be the square of a quantity R).\" However the wikipedia article on R^2 mentions no R (not squared) quantity. Perhaps it uses absolute differences instead of square differences. I really have no idea.", "response":"The R^2 in scikit learn is essentially the same as what is described in the wikipedia article on the coefficient of determination (grep for \"the most general definition\"). It is 1 - residual sum of squares \/ total sum of squares. The big difference between a classical stats setting and what you usually try to do with machine learning is that in machine learning you evaluate your score on unseen data, which can lead to results outside [0,1]. If you apply R^2 to the same data you used to fit your model, it will lie within [0, 1]. See also this very similar question", "best_answers_score":0.7275, "library_name":"scikit-learn", "question_url":"https:\/\/stackoverflow.com\/questions\/23309073\/how-is-the-r2-value-in-scikit-learn-calculated", "best_answers_votes":39, "question_length":410, "response_length":568 }, { "question":"Adjust size of ConfusionMatrixDisplay (ScikitLearn) How to set the size of the figure plotted by ScikitLearn's Confusion Matrix?
```py import numpy as np from sklearn.metrics import ConfusionMatrixDisplay, confusion_matrix cm = confusion_matrix(np.arange(25), np.arange(25)) cmp = ConfusionMatrixDisplay(cm, display_labels=np.arange(25)) cmp.plot() ``` The code above shows this figure, which is too tight:", "response":"You can send a matplotlib.axes object to the .plot method of sklearn.metrics.ConfusionMatrixDisplay. Set the size of the figure in matplotlib.pyplot.subplots first. ```py import numpy as np from sklearn.metrics import ConfusionMatrixDisplay, confusion_matrix import matplotlib.pyplot as plt cm = confusion_matrix(np.arange(25), np.arange(25)) cmp = ConfusionMatrixDisplay(cm, display_labels=np.arange(25)) fig, ax = plt.subplots(figsize=(10,10)) cmp.plot(ax=ax) ```", "best_answers_score":0.7231, "library_name":"scikit-learn", "question_url":"https:\/\/stackoverflow.com\/questions\/66483409\/adjust-size-of-confusionmatrixdisplay-scikitlearn", "best_answers_votes":47, "question_length":405, "response_length":465 }, { "question":"python - how to append numpy array to a pandas dataframe I have trained a Logistic Regression classifier to predict whether a review is positive or negative. Now, I want to append the predicted probabilities returned by the predict_proba-function to my Pandas data frame containing the reviews. I tried doing something like: ``` test_data['prediction'] = sentiment_model.predict_proba(test_matrix) ``` Obviously, that doesn't work, since predict_proba returns a 2D-numpy array. So, what is the most efficient way of doing this? I created test_matrix with SciKit-Learn's CountVectorizer: ``` vectorizer = CountVectorizer(token_pattern=r'\\b\\w+\\b') train_matrix = vectorizer.fit_transform(train_data['review_clean'].values.astype('U')) test_matrix = vectorizer.transform(test_data['review_clean'].values.astype('U')) ``` Sample data looks like: ``` | Review | Prediction | | ------------------------------------------ | ------------------ | | \"Toy was great! 
Our six-year old loved it!\"| 0.986 | ```", "response":"Assign the predictions to a variable and then extract the columns from the variable to be assigned to the pandas dataframe columns. If x is the 2D numpy array with predictions, ``` x = sentiment_model.predict_proba(test_matrix) ``` then you can do: ``` test_data['prediction0'] = x[:,0] test_data['prediction1'] = x[:,1] ```", "best_answers_score":0.7207, "library_name":"scikit-learn", "question_url":"https:\/\/stackoverflow.com\/questions\/42314542\/python-how-to-append-numpy-array-to-a-pandas-dataframe", "best_answers_votes":26, "question_length":996, "response_length":321 }, { "question":"FastAPI {\"detail\":\"Method Not Allowed\"} I am using FastAPI for my ML model. I have a pipeline. ```py lr_tfidf = Pipeline([('vect', tfidf), ('clf', LogisticRegression(penalty='l2'))]) ``` Now in FastAPI, when I want to predict and display the result via the API, my code is ```py app = FastAPI() @app.post('\/predict') def predict_species(data: str): data = np.array([data]) prob = lr_tfidf.predict_proba(data).max() pred = lr_tfidf.predict(data) return {'Probability': f'{prob}', 'Predictions':f'{pred}'} ``` I copied it from a tutorial. When I test it in the GUI provided by FastAPI, it works well, as shown in the image, i.e. it shows the probability and predictions. When I go to the request URL, as provided by the GUI, which is http:\/\/127.0.0.1:8000\/predict?data=hello (the test data is hello), it gives me an error. ```py {\"detail\":\"Method Not Allowed\"} ``` On my terminal, the error message is ```py INFO: 127.0.0.1:42568 - \"GET \/predict?data=hello HTTP\/1.1\" 405 Method Not Allowed ```", "response":"The method of the endpoint is defined as POST (@app.post('\/predict')). When you call the URL from your browser, the HTTP method is GET. A simple solution is to change the endpoint's method to GET via @app.get. But this will most likely violate how REST-API endpoints should be named and when to use which HTTP method.
A good starting point is https:\/\/restfulapi.net\/resource-naming\/. Or maybe you are implementing an RPC (remote procedure call)? Then it can be different as well.", "best_answers_score":0.7203, "library_name":"scikit-learn", "question_url":"https:\/\/stackoverflow.com\/questions\/66622020\/fastapi-detailmethod-not-allowed", "best_answers_votes":34, "question_length":950, "response_length":478 }, { "question":"Normalise between 0 and 1 ignoring NaN For a list of numbers ranging from x to y that may contain NaN, how can I normalise between 0 and 1, ignoring the NaN values (they stay as NaN)? Typically I would use MinMaxScaler (ref page) from sklearn.preprocessing, but this cannot handle NaN and recommends imputing the values based on mean or median etc.; it doesn't offer the option to ignore all the NaN values.", "response":"Consider the pd.Series s ``` s = pd.Series(np.random.choice([3, 4, 5, 6, np.nan], 100)) s.hist() ``` Option 1 Min Max Scaling ``` new = s.sub(s.min()).div((s.max() - s.min())) new.hist() ``` NOT WHAT OP ASKED FOR: I put these in because I wanted to Option 2 sigmoid ``` sigmoid = lambda x: 1 \/ (1 + np.exp(-x)) new = sigmoid(s.sub(s.mean())) new.hist() ``` Option 3 tanh (hyperbolic tangent) ``` new = np.tanh(s.sub(s.mean())).add(1).div(2) new.hist() ```", "best_answers_score":0.7197, "library_name":"scikit-learn", "question_url":"https:\/\/stackoverflow.com\/questions\/39758449\/normalise-between-0-and-1-ignoring-nan", "best_answers_votes":13, "question_length":406, "response_length":450 }, { "question":"Get U, Sigma, V* matrix from Truncated SVD in scikit-learn I am using truncated SVD from the scikit-learn package. In the definition of SVD, an original matrix A is approximated as a product A \u2248 U\u03a3V* where U and V have orthonormal columns, and \u03a3 is non-negative diagonal. I need to get the U, \u03a3 and V* matrices. Looking at the source code here I found out that V* is stored in the self.components_ field after calling fit_transform.
Is it possible to get U and \u03a3 matrices? My code: ``` import sklearn.decomposition as skd import numpy as np matrix = np.random.random((20,20)) trsvd = skd.TruncatedSVD(n_components=15) transformed = trsvd.fit_transform(matrix) VT = trsvd.components_ ```", "response":"Looking into the source via the link you provided, TruncatedSVD is basically a wrapper around sklearn.utils.extmath.randomized_svd; you can manually call this yourself like this: ``` from sklearn.utils.extmath import randomized_svd U, Sigma, VT = randomized_svd(X, n_components=15, n_iter=5, random_state=None) ```", "best_answers_score":0.7192, "library_name":"scikit-learn", "question_url":"https:\/\/stackoverflow.com\/questions\/31523575\/get-u-sigma-v-matrix-from-truncated-svd-in-scikit-learn", "best_answers_votes":63, "question_length":677, "response_length":314 }, { "question":"ValueError: Dimension mismatch I use SciPy and scikit-learn to train and apply a Multinomial Naive Bayes Classifier for binary text classification. Precisely, I use the module sklearn.feature_extraction.text.CountVectorizer for creating sparse matrices that hold word feature counts from text and the module sklearn.naive_bayes.MultinomialNB as the classifier implementation for training the classifier on training data and applying it on test data. The input to the CountVectorizer is a list of text documents represented as unicode strings. The training data is much larger than the test data. 
My code looks like this (simplified): ``` vectorizer = CountVectorizer(**kwargs) # sparse matrix with training data X_train = vectorizer.fit_transform(list_of_documents_for_training) # vector holding target values (=classes, either -1 or 1) for training documents # this vector has the same number of elements as the list of documents y_train = numpy.array([1, 1, 1, -1, -1, 1, -1, -1, 1, 1, -1, -1, -1, ...]) # sparse matrix with test data X_test = vectorizer.fit_transform(list_of_documents_for_testing) # Training stage of NB classifier classifier = MultinomialNB() classifier.fit(X=X_train, y=y_train) # Prediction of log probabilities on test data X_log_proba = classifier.predict_log_proba(X_test) ``` Problem: As soon as MultinomialNB.predict_log_proba() is called, I get ValueError: dimension mismatch. According to the IPython stacktrace below, the error occurs in SciPy: ``` \/path\/to\/my\/code.pyc --> 177 X_log_proba = classifier.predict_log_proba(X_test) \/...\/sklearn\/naive_bayes.pyc in predict_log_proba(self, X) 76 in the model, where classes are ordered arithmetically. 77 \"\"\" --> 78 jll = self._joint_log_likelihood(X) 79 # normalize by P(x) = P(f_1, ..., f_n) 80 log_prob_x = logsumexp(jll, axis=1) \/...\/sklearn\/naive_bayes.pyc in _joint_log_likelihood(self, X) 345 \"\"\"Calculate the posterior log probability of the samples X\"\"\" 346 X = atleast2d_or_csr(X) --> 347 return (safe_sparse_dot(X, self.feature_log_prob_.T) 348 + self.class_log_prior_) 349 \/...\/sklearn\/utils\/extmath.pyc in safe_sparse_dot(a, b, dense_output) 71 from scipy import sparse 72 if sparse.issparse(a) or sparse.issparse(b): --> 73 ret = a * b 74 if dense_output and hasattr(ret, \"toarray\"): 75 ret = ret.toarray() \/...\/scipy\/sparse\/base.pyc in __mul__(self, other) 276 277 if other.shape[0] != self.shape[1]: --> 278 raise ValueError('dimension mismatch') 279 280 result = self._mul_multivector(np.asarray(other)) ``` I have no idea why this error occurs. 
Can anybody please explain it to me and provide a solution for this problem? Thanks a lot in advance!", "response":"Sounds to me like you just need to use vectorizer.transform for the test dataset, since the training dataset fixes the vocabulary (you cannot know the full vocabulary including the training set after all). Just to be clear, that's vectorizer.transform instead of vectorizer.fit_transform.", "best_answers_score":0.7187, "library_name":"scikit-learn", "question_url":"https:\/\/stackoverflow.com\/questions\/12484310\/valueerror-dimension-mismatch", "best_answers_votes":61, "question_length":2641, "response_length":287 }, { "question":"pip: pulling updates from remote git repository I installed scikit-learn from GitHub a couple of weeks ago: ``` pip install git+git:\/\/github.com\/scikit-learn\/scikit-learn@master ``` I went to GitHub and there have been several changes to the master branch since then. How can I update my local installation of scikit-learn? I tried pip install scikit-learn --upgrade but I got: ``` Requirement already up-to-date Cleaning up ... ```", "response":"pip searches for the library in the Python package index. Your version is newer than the newest one in there, so pip won't update it. You'll have to reinstall from Git: ``` $ pip install git+git:\/\/github.com\/scikit-learn\/scikit-learn@main ```", "best_answers_score":0.7177, "library_name":"scikit-learn", "question_url":"https:\/\/stackoverflow.com\/questions\/17710947\/pip-pulling-updates-from-remote-git-repository", "best_answers_votes":31, "question_length":432, "response_length":242 }, { "question":"Keep TFIDF result for predicting new content I am using sklearn in Python to do some clustering. I've trained on 200,000 data points, and the code below works well.
``` corpus = open(\"token_from_xml.txt\") vectorizer = CountVectorizer(decode_error=\"replace\") transformer = TfidfTransformer() tfidf = transformer.fit_transform(vectorizer.fit_transform(corpus)) km = KMeans(30) kmresult = km.fit(tfidf).predict(tfidf) ``` But when I have new testing content, I'd like to cluster it into the existing clusters I'd trained. So I'm wondering how to save the IDF result, so that I can do TFIDF on the new testing content and make sure the result for the new testing content has the same array length. Thanks in advance. UPDATE I may need to save the \"transformer\" or \"tfidf\" variable to a file (txt or other), if one of them contains the trained IDF result. UPDATE For example, I have the training data: ``` [\"a\", \"b\", \"c\"] [\"a\", \"b\", \"d\"] ``` And after TFIDF, the result will contain 4 features (a,b,c,d). When I TEST: ``` [\"a\", \"c\", \"d\"] ``` to see which cluster (already made by k-means) it belongs to, TFIDF will only give a result with 3 features (a,c,d), so the clustering in k-means will fail. (If I test [\"a\", \"b\", \"e\"], there may be other problems.) So how do I store the feature list for testing data (and, even better, store it in a file)?", "response":"I successfully saved the feature list by saving vectorizer.vocabulary_ and reusing it via CountVectorizer(decode_error=\"replace\", vocabulary=vectorizer.vocabulary_). Code below: ``` corpus = np.array([\"aaa bbb ccc\", \"aaa bbb ddd\"]) vectorizer = CountVectorizer(decode_error=\"replace\") vec_train = vectorizer.fit_transform(corpus) # Save vectorizer.vocabulary_ pickle.dump(vectorizer.vocabulary_, open(\"feature.pkl\",\"wb\")) # Load it later transformer = TfidfTransformer() loaded_vec = CountVectorizer(decode_error=\"replace\", vocabulary=pickle.load(open(\"feature.pkl\", \"rb\"))) tfidf = transformer.fit_transform(loaded_vec.fit_transform(np.array([\"aaa ccc eee\"]))) ``` That works.
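An alternative worth noting (a sketch with made-up strings): pickling the whole fitted vectorizer keeps the vocabulary and the IDF weights together, so new documents always come out with the training feature length.

```python
import pickle
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer

corpus = np.array(["aaa bbb ccc", "aaa bbb ddd"])
vec = TfidfVectorizer().fit(corpus)      # vocabulary: aaa, bbb, ccc, ddd

blob = pickle.dumps(vec)                 # round-trip, e.g. via a file on disk
loaded = pickle.loads(blob)

tfidf = loaded.transform(np.array(["aaa ccc eee"]))  # 'eee' is simply ignored
print(tfidf.shape)  # (1, 4) -- same 4 features as the training vocabulary
```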
tfidf will have the same feature length as the training data.", "best_answers_score":0.7174, "library_name":"scikit-learn", "question_url":"https:\/\/stackoverflow.com\/questions\/29788047\/keep-tfidf-result-for-predicting-new-content", "best_answers_votes":34, "question_length":1294, "response_length":720 }, { "question":"Stratified Sampling in Pandas I've looked at the Sklearn stratified sampling docs as well as the pandas docs and also Stratified samples from Pandas and sklearn stratified sampling based on a column but they do not address this issue. I'm looking for a fast pandas\/sklearn\/numpy way to generate stratified samples of size n from a dataset. However, for rows with less than the specified sampling number, it should take all of the entries. Concrete example: Thank you! :)", "response":"Use min when passing the number to sample. Consider the dataframe df ``` df = pd.DataFrame(dict( A=[1, 1, 1, 2, 2, 2, 2, 3, 4, 4], B=range(10) )) df.groupby('A', group_keys=False).apply(lambda x: x.sample(min(len(x), 2))) A B 1 1 1 2 1 2 3 2 3 6 2 6 7 3 7 9 4 9 8 4 8 ```", "best_answers_score":0.7145, "library_name":"scikit-learn", "question_url":"https:\/\/stackoverflow.com\/questions\/44114463\/stratified-sampling-in-pandas", "best_answers_votes":118, "question_length":469, "response_length":271 }, { "question":"How to convert a pandas dataframe subset of columns and rows into a numpy array I am selecting rows by a boolean condition and then a subset of columns, e.g.: ``` df[df.c > 0.5][locs] a d 0 0.945686 0.892892 ``` My final goal is to convert the result to a numpy array to pass into an sklearn regression algorithm, so I will use the code above like this: ``` training_set = array(df[df.c > 0.5][locs]) ``` ... and that peeves me since I end up with a huge array copy in memory.
Perhaps there's a better way for that too?", "response":"Use its value directly: ``` In [79]: df[df.c > 0.5][['b', 'e']].values Out[79]: array([[ 0.98836259, 0.82403141], [ 0.337358 , 0.02054435], [ 0.29271728, 0.37813099], [ 0.70033513, 0.69919695]]) ```", "best_answers_score":0.7096, "library_name":"scikit-learn", "question_url":"https:\/\/stackoverflow.com\/questions\/17682613\/how-to-convert-a-pandas-dataframe-subset-of-columns-and-rows-into-a-numpy-array", "best_answers_votes":71, "question_length":1146, "response_length":198 }, { "question":"DBSCAN in scikit-learn of Python: save the cluster points in an array following the example Demo of DBSCAN clustering algorithm of Scikit Learning i am trying to store in an array the x, y of each clustering class ``` import numpy as np from sklearn.cluster import DBSCAN from sklearn import metrics from sklearn.datasets.samples_generator import make_blobs from sklearn.preprocessing import StandardScaler from pylab import * # Generate sample data centers = [[1, 1], [-1, -1], [1, -1]] X, labels_true = make_blobs(n_samples=750, centers=centers, cluster_std=0.4, random_state=0) X = StandardScaler().fit_transform(X) xx, yy = zip(*X) scatter(xx,yy) show() ``` ``` db = DBSCAN(eps=0.3, min_samples=10).fit(X) core_samples = db.core_sample_indices_ labels = db.labels_ n_clusters_ = len(set(labels)) - (1 if -1 in labels else 0) print n_clusters_ 3 ``` I'm trying to understand the DBSCAN implementation by scikit-learn, but from this point I'm having trouble. 
The number of cluster is 3 (n_clusters_) and I wish to store the x, y of each cluster in an array", "response":"The first cluster is X[labels == 0], etc.: ``` clusters = [X[labels == i] for i in xrange(n_clusters_)] ``` and the outliers are ``` outliers = X[labels == -1] ```", "best_answers_score":0.7094, "library_name":"scikit-learn", "question_url":"https:\/\/stackoverflow.com\/questions\/18237479\/dbscan-in-scikit-learn-of-python-save-the-cluster-points-in-an-array", "best_answers_votes":41, "question_length":1058, "response_length":163 }, { "question":"How to normalize a numpy array to a unit vector I would like to convert a NumPy array to a unit vector. More specifically, I am looking for an equivalent version of this normalisation function: ``` def normalize(v): norm = np.linalg.norm(v) if norm == 0: return v return v \/ norm ``` This function handles the situation where vector v has the norm value of 0. Is there any similar functions provided in sklearn or numpy?", "response":"If you're using scikit-learn you can use sklearn.preprocessing.normalize: ``` import numpy as np from sklearn.preprocessing import normalize x = np.random.rand(1000)*10 norm1 = x \/ np.linalg.norm(x) norm2 = normalize(x[:,np.newaxis], axis=0).ravel() print np.all(norm1 == norm2) # True ```", "best_answers_score":0.7087, "library_name":"scikit-learn", "question_url":"https:\/\/stackoverflow.com\/questions\/21030391\/how-to-normalize-a-numpy-array-to-a-unit-vector", "best_answers_votes":250, "question_length":420, "response_length":289 }, { "question":"How to one-hot-encode from a pandas column containing a list? I would like to break down a pandas column consisting of a list of elements into as many columns as there are unique elements i.e. one-hot-encode them (with value 1 representing a given element existing in a row and 0 in the case of absence). 
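Before the concrete example below, here is a minimal library-free sketch of what such a one-hot encoding does (the column names follow the question; the intermediate variables are illustrative):

```python
rows = [
    {"Col1": "C", "Col2": 33,  "Col3": ["Apple", "Orange", "Banana"]},
    {"Col1": "A", "Col2": 2.5, "Col3": ["Apple", "Grape"]},
    {"Col1": "B", "Col2": 42,  "Col3": ["Banana"]},
]

# collect the unique list elements, keeping first-seen order
elements = []
for r in rows:
    for e in r["Col3"]:
        if e not in elements:
            elements.append(e)

# replace the list column with one 0/1 indicator column per element
encoded = []
for r in rows:
    out = {"Col1": r["Col1"], "Col2": r["Col2"]}
    for e in elements:
        out[e] = 1 if e in r["Col3"] else 0
    encoded.append(out)

print(encoded[0])
# {'Col1': 'C', 'Col2': 33, 'Apple': 1, 'Orange': 1, 'Banana': 1, 'Grape': 0}
```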
For example, taking dataframe df ``` Col1 Col2 Col3 C 33 [Apple, Orange, Banana] A 2.5 [Apple, Grape] B 42 [Banana] ``` I would like to convert this to: df ``` Col1 Col2 Apple Orange Banana Grape C 33 1 1 1 0 A 2.5 1 0 0 1 B 42 0 0 1 0 ``` How can I use pandas\/sklearn to achieve this?", "response":"We can also use sklearn.preprocessing.MultiLabelBinarizer: Often we want to use sparse DataFrame for the real world data in order to save a lot of RAM. Sparse solution (for Pandas v0.25.0+) ``` from sklearn.preprocessing import MultiLabelBinarizer mlb = MultiLabelBinarizer(sparse_output=True) df = df.join( pd.DataFrame.sparse.from_spmatrix( mlb.fit_transform(df.pop('Col3')), index=df.index, columns=mlb.classes_)) ``` result: ``` In [38]: df Out[38]: Col1 Col2 Apple Banana Grape Orange 0 C 33.0 1 1 0 1 1 A 2.5 1 0 1 0 2 B 42.0 0 1 0 0 In [39]: df.dtypes Out[39]: Col1 object Col2 float64 Apple Sparse[int32, 0] Banana Sparse[int32, 0] Grape Sparse[int32, 0] Orange Sparse[int32, 0] dtype: object In [40]: df.memory_usage() Out[40]: Index 128 Col1 24 Col2 24 Apple 16 # <--- NOTE! Banana 16 # <--- NOTE! Grape 8 # <--- NOTE! Orange 8 # <--- NOTE! dtype: int64 ``` Dense solution ``` mlb = MultiLabelBinarizer() df = df.join(pd.DataFrame(mlb.fit_transform(df.pop('Col3')), columns=mlb.classes_, index=df.index)) ``` Result: ``` In [77]: df Out[77]: Col1 Col2 Apple Banana Grape Orange 0 C 33.0 1 1 0 1 1 A 2.5 1 0 1 0 2 B 42.0 0 1 0 0 ```", "best_answers_score":0.7065, "library_name":"scikit-learn", "question_url":"https:\/\/stackoverflow.com\/questions\/45312377\/how-to-one-hot-encode-from-a-pandas-column-containing-a-list", "best_answers_votes":103, "question_length":590, "response_length":1141 }, { "question":"Cannot import scikits-learn even though it seems to be installed Per the scikit-learn user guide, I installed scikit-learn using pip install -U scikit-learn. 
So using pip search scikit-learn, I get this search result: ``` scikit-learn - A set of python modules for machine learning and data mining INSTALLED: 0.12.1 (latest) ``` But when I go into Python and try to import sklearn, I get an ImportError: No module named sklearn. This really should have just worked. I am using Enthought's free distribution of Python (2.7.3) on a Mac OS 10.6.8 with NumPy 1.6.1 and SciPy 0.10.1. Yes, I'm aware that EPD Free comes with scikit-learn but pip should have upgraded my version so that I can actually use scikit-learn.", "response":"I had the same problem. @Alan gave a correct solution, but the hard way. Here are easier steps to resolve the issue; I am on Mac OS X, so the steps below are for that platform: ``` Ameys-Mac-mini:~ amey$ python --version Python 2.7.2 Ameys-Mac-mini:~ amey$ cd \/Library\/Python\/2.7\/site-packages\/ Ameys-Mac-mini:site-packages amey$ brew install gcc Ameys-Mac-mini:site-packages amey$ sudo pip install -t . numpy scipy scikit-learn ```", "best_answers_score":0.7058, "library_name":"scikit-learn", "question_url":"https:\/\/stackoverflow.com\/questions\/13212987\/cannot-import-scikits-learn-even-though-it-seems-to-be-installed", "best_answers_votes":30, "question_length":712, "response_length":396 }, { "question":"Arrays used as indices must be of integer (or boolean) type Errors are like this: ``` Traceback (most recent call last): File \"NearestCentroid.py\", line 53, in clf.fit(X_train.todense(),y_train) File \"\/usr\/local\/lib\/python2.7\/dist-packages\/scikit_learn-0.13.1-py2.7-linux-i686.egg\/sklearn\/neighbors\/nearest_centroid.py\", line 115, in fit variance = np.array(np.power(X - self.centroids_[y], 2)) IndexError: arrays used as indices must be of integer (or boolean) type ``` Codes are like this: ``` distancemetric=['euclidean','l2'] for mtrc in distancemetric: for shrkthrshld in [None]: #shrkthrshld=0 #while (shrkthrshld <=1.0): clf = NearestCentroid(metric=mtrc,shrink_threshold=shrkthrshld) clf.fit(X_train.todense(),y_train)
y_predicted = clf.predict(X_test.todense()) ``` I am using scikit-learn package, X-train, y_train are in LIBSVM format, X is the feature:value pair, y_train is the target\/label, X_train is in CSR matric format, the shrink_threshold does not support CSR sparse matrix, so I add .todense() to X_train, then I got this error, could anyone help me fix this? Thanks a lot!", "response":"I had a similar problem using the Pystruct pystruct.learners.OneSlackSSVM. It occured because my training labels were floats, in stead of integers. In my case, it was because I initialized the labels with np.ones, without specifying dtype=np.int8. Hope it helps.", "best_answers_score":0.7054, "library_name":"scikit-learn", "question_url":"https:\/\/stackoverflow.com\/questions\/17393989\/arrays-used-as-indices-must-be-of-integer-or-boolean-type", "best_answers_votes":41, "question_length":1095, "response_length":262 }, { "question":"What is python's equivalent of R's NA? What is python's equivalent of R's NA? To be more specific: R has NaN, NA, NULL, Inf and -Inf. NA is generally used when there is missing data. What is python's equivalent? How libraries such as numpy and pandas handle missing values? How does scikit-learn handle missing values? Is it different for python 2.7 and python 3?", "response":"nan in numpy is handled well with many functions: ``` >>> import numpy as np >>> a = [1, np.nan, 2, 3] >>> np.nanmean(a) 2.0 >>> np.nansum(a) 6.0 >>> np.isnan(a) array([False, True, False, False], dtype=bool) ```", "best_answers_score":0.7048, "library_name":"scikit-learn", "question_url":"https:\/\/stackoverflow.com\/questions\/28654325\/what-is-pythons-equivalent-of-rs-na", "best_answers_votes":29, "question_length":363, "response_length":212 }, { "question":"Macro VS Micro VS Weighted VS Samples F1 Score In sklearn.metrics.f1_score, the f1 score has a parameter called \"average\". What does macro, micro, weighted, and samples mean? 
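For concreteness, the averaging modes asked about can be computed by hand on toy labels (a minimal library-free sketch, not sklearn itself; the labels are made up for illustration):

```python
def per_class_f1(y_true, y_pred, cls):
    # F1 of a single class, treated as the "positive" label
    tp = sum(t == cls and p == cls for t, p in zip(y_true, y_pred))
    fp = sum(t != cls and p == cls for t, p in zip(y_true, y_pred))
    fn = sum(t == cls and p != cls for t, p in zip(y_true, y_pred))
    prec = tp / (tp + fp) if tp + fp else 0.0
    rec = tp / (tp + fn) if tp + fn else 0.0
    return 2 * prec * rec / (prec + rec) if prec + rec else 0.0

y_true = [0, 0, 0, 1, 1]
y_pred = [0, 1, 0, 1, 0]
classes = [0, 1]

# macro: unweighted mean of per-class F1
macro = sum(per_class_f1(y_true, y_pred, c) for c in classes) / len(classes)

# weighted: per-class F1 weighted by class support
weighted = sum(per_class_f1(y_true, y_pred, c) * y_true.count(c)
               for c in classes) / len(y_true)

# micro: pooled counts; for single-label multiclass this equals accuracy
micro = sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)

print(round(macro, 4), round(weighted, 4), round(micro, 4))  # 0.5833 0.6 0.6
```

Macro treats both classes equally, weighted tilts toward the larger class, and micro pools all decisions into one count.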
Please elaborate, because in the documentation, it was not explained properly. Or simply answer the following: Why is \"samples\" best parameter for multilabel classification? Why is micro best for an imbalanced dataset? what's the difference between weighted and macro?", "response":"The question is about the meaning of the average parameter in sklearn.metrics.f1_score. As you can see from the code: average=micro says the function to compute f1 by considering total true positives, false negatives and false positives (no matter of the prediction for each label in the dataset) average=macro says the function to compute f1 for each label, and returns the average without considering the proportion for each label in the dataset. average=weighted says the function to compute f1 for each label, and returns the average considering the proportion for each label in the dataset. average=samples says the function to compute f1 for each instance, and returns the average. Use it for multilabel classification.", "best_answers_score":0.7045, "library_name":"scikit-learn", "question_url":"https:\/\/stackoverflow.com\/questions\/55740220\/macro-vs-micro-vs-weighted-vs-samples-f1-score", "best_answers_votes":71, "question_length":443, "response_length":725 }, { "question":"How to get a classifier's confidence score for a prediction in sklearn? I would like to get a confidence score of each of the predictions that it makes, showing on how sure the classifier is on its prediction that it is correct. I want something like this: How sure is the classifier on its prediction? 
Class 1: 81% that this is class 1 Class 2: 10% Class 3: 6% Class 4: 3% Samples of my code: ``` features_train, features_test, labels_train, labels_test = cross_validation.train_test_split(main, target, test_size = 0.4) # Determine amount of time to train t0 = time() model = SVC() #model = SVC(kernel='poly') #model = GaussianNB() model.fit(features_train, labels_train) print 'training time: ', round(time()-t0, 3), 's' # Determine amount of time to predict t1 = time() pred = model.predict(features_test) print 'predicting time: ', round(time()-t1, 3), 's' accuracy = accuracy_score(labels_test, pred) print 'Confusion Matrix: ' print confusion_matrix(labels_test, pred) # Accuracy in the 0.9333, 9.6667, 1.0 range print accuracy model.predict(sub_main) # Determine amount of time to predict t1 = time() pred = model.predict(sub_main) print 'predicting time: ', round(time()-t1, 3), 's' print '' print 'Prediction: ' print pred ``` I suspect that I would use the score() function, but I seem to keep implementing it correctly. I don't know if that's the right function or not, but how would one get the confidence percentage of a classifier's prediction?", "response":"Per the SVC documentation, it looks like you need to change how you construct the SVC: ``` model = SVC(probability=True) ``` and then use the predict_proba method: ``` class_probabilities = model.predict_proba(sub_main) ```", "best_answers_score":0.7, "library_name":"scikit-learn", "question_url":"https:\/\/stackoverflow.com\/questions\/31129592\/how-to-get-a-classifiers-confidence-score-for-a-prediction-in-sklearn", "best_answers_votes":33, "question_length":1459, "response_length":223 }, { "question":"confusion matrix error \"Classification metrics can't handle a mix of multilabel-indicator and multiclass targets\" I am getting a ``` Classification metrics can't handle a mix of multilabel-indicator and multiclass targets ``` error when I try to use confusion matrix. I am doing my first deep learning project. 
I am new to it. I am using the mnist dataset provided by keras. I have trained and tested my model successfully. However, when I try to use the scikit learn confusion matrix I get the error stated above. I have searched for an answer and while there are answers on this error, none of them worked for me. From what I found online it probably has something to do with the loss function (I use the categorical_crossentropy in my code). I tried changing it to sparse_categorical_crossentropy but that just gave me the ``` Error when checking target: expected dense_2 to have shape (1,) but got array with shape (10,) ``` when I run the fit() function on the model. This is the code. (I have left out the imports for the sake of brevity) ``` model = Sequential() model.add(Dense(512, activation='relu', input_shape=(28 * 28,))) model.add(Dense(10, activation='softmax')) model.compile(optimizer='Adam', loss='categorical_crossentropy', metrics=['accuracy']) (train_images, train_labels), (test_images, test_labels) = mnist.load_data() train_images = train_images.reshape((60000, 28 * 28)) train_images = train_images.astype('float32') \/ 255 test_images = test_images.reshape((10000, 28 * 28)) test_images = test_images.astype('float32') \/ 255 train_labels = to_categorical(train_labels) test_labels = to_categorical(test_labels) model.fit(train_images, train_labels, epochs=10, batch_size=128) rounded_predictions = model.predict_classes(test_images, batch_size=128, verbose=0) cm = confusion_matrix(test_labels, rounded_predictions) ``` How can i fix this?", "response":"Confusion matrix needs both labels & predictions as single-digits, not as one-hot encoded vectors; although you have done this with your predictions using model.predict_classes(), i.e. 
```python rounded_predictions = model.predict_classes(test_images, batch_size=128, verbose=0) rounded_predictions[1] # 2 ``` your test_labels are still one-hot encoded: ```python test_labels[1] # array([0., 0., 1., 0., 0., 0., 0., 0., 0., 0.], dtype=float32) ``` So, you should convert them too to single-digit ones, as follows: ```python import numpy as np rounded_labels=np.argmax(test_labels, axis=1) rounded_labels[1] # 2 ``` After which, the confusion matrix should come up OK: ```python from sklearn.metrics import confusion_matrix cm = confusion_matrix(rounded_labels, rounded_predictions) cm # result: array([[ 971, 0, 0, 2, 1, 0, 2, 1, 3, 0], [ 0, 1121, 2, 1, 0, 1, 3, 0, 7, 0], [ 5, 4, 990, 7, 5, 3, 2, 7, 9, 0], [ 0, 0, 0, 992, 0, 2, 0, 7, 7, 2], [ 2, 0, 2, 0, 956, 0, 3, 3, 2, 14], [ 3, 0, 0, 10, 1, 872, 3, 0, 1, 2], [ 5, 3, 1, 1, 9, 10, 926, 0, 3, 0], [ 0, 7, 10, 1, 0, 2, 0, 997, 1, 10], [ 5, 0, 3, 7, 5, 7, 3, 4, 937, 3], [ 5, 5, 0, 9, 10, 3, 0, 8, 3, 966]]) ```", "best_answers_score":0.6989, "library_name":"scikit-learn", "question_url":"https:\/\/stackoverflow.com\/questions\/54589669\/confusion-matrix-error-classification-metrics-cant-handle-a-mix-of-multilabel", "best_answers_votes":44, "question_length":1864, "response_length":1163 }, { "question":"Using explicit (predefined) validation set for grid search with sklearn I have a dataset, which has previously been split into 3 sets: train, validation and test. These sets have to be used as given in order to compare the performance across different algorithms. I would now like to optimize the parameters of my SVM using the validation set. However, I cannot find how to input the validation set explicitly into sklearn.grid_search.GridSearchCV(). Below is some code I've previously used for doing K-fold cross-validation on the training set. However, for this problem I need to use the validation set as given. How can I do that? 
```py from sklearn import svm, cross_validation from sklearn.grid_search import GridSearchCV # (some code left out to simplify things) skf = cross_validation.StratifiedKFold(y_train, n_folds=5, shuffle = True) clf = GridSearchCV(svm.SVC(tol=0.005, cache_size=6000, class_weight=penalty_weights), param_grid=tuned_parameters, n_jobs=2, pre_dispatch=\"n_jobs\", cv=skf, scoring=scorer) clf.fit(X_train, y_train) ```", "response":"Use PredefinedSplit ``` ps = PredefinedSplit(test_fold=your_test_fold) ``` then set cv=ps in GridSearchCV test_fold : \u201carray-like, shape (n_samples,) test_fold[i] gives the test set fold of sample i. A value of -1 indicates that the corresponding sample is not part of any test set folds, but will instead always be put into the training fold. Also see here when using a validation set, set the test_fold to 0 for all samples that are part of the validation set, and to -1 for all other samples.", "best_answers_score":0.6984, "library_name":"scikit-learn", "question_url":"https:\/\/stackoverflow.com\/questions\/31948879\/using-explicit-predefined-validation-set-for-grid-search-with-sklearn", "best_answers_votes":49, "question_length":1045, "response_length":495 }, { "question":"Deprecation warnings from sklearn I am using cross_validation from sklearn, ``` from sklearn.cross_validation import train_test_split ``` I get the below warning: cross_validation.py:44: DeprecationWarning: This module was deprecated in version 0.18 in favor of the model_selection module into which all the refactored classes and functions are moved.", "response":"Problem: The deprecation warning means that the module is deprecated, i.e. no longer supported. You are using a version for which sklearn.cross_validation is not a module any longer. 
Solution: ``` from sklearn.model_selection import train_test_split ``` (Courtesy of a related Stack Overflow answer.)", "best_answers_score":0.6977, "library_name":"scikit-learn", "question_url":"https:\/\/stackoverflow.com\/questions\/43302400\/deprecation-warnings-from-sklearn", "best_answers_votes":51, "question_length":351, "response_length":269 }, { "question":"Tensorflow Precision \/ Recall \/ F1 score and Confusion matrix I would like to know if there is a way to implement the scoring functions from the scikit-learn package, like this one: ``` from sklearn.metrics import confusion_matrix confusion_matrix(y_true, y_pred) ``` into a tensorflow model to get the different scores. ``` with tf.Session(config=tf.ConfigProto(log_device_placement=True)) as sess: init = tf.initialize_all_variables() sess.run(init) for epoch in xrange(1): avg_cost = 0. total_batch = len(train_arrays) \/ batch_size for batch in range(total_batch): train_step.run(feed_dict = {x: train_arrays, y: train_labels}) avg_cost += sess.run(cost, feed_dict={x: train_arrays, y: train_labels})\/total_batch if epoch % display_step == 0: print \"Epoch:\", '%04d' % (epoch+1), \"cost=\", \"{:.9f}\".format(avg_cost) print \"Optimization Finished!\" correct_prediction = tf.equal(tf.argmax(pred, 1), tf.argmax(y, 1)) # Calculate accuracy accuracy = tf.reduce_mean(tf.cast(correct_prediction, \"float\")) print \"Accuracy:\", batch, accuracy.eval({x: test_arrays, y: test_labels}) ``` Will I have to run the session again to get the predictions?", "response":"You do not really need sklearn to calculate precision\/recall\/f1 score.
You can easily express them in a TF-ish way by looking at their standard definitions in terms of TP, TN, FP and FN. Now if you have your actual and predicted values as vectors of 0\/1, you can calculate TP, TN, FP, FN using tf.count_nonzero: ``` TP = tf.count_nonzero(predicted * actual) TN = tf.count_nonzero((predicted - 1) * (actual - 1)) FP = tf.count_nonzero(predicted * (actual - 1)) FN = tf.count_nonzero((predicted - 1) * actual) ``` Now your metrics are easy to calculate: ``` precision = TP \/ (TP + FP) recall = TP \/ (TP + FN) f1 = 2 * precision * recall \/ (precision + recall) ```", "best_answers_score":0.6966, "library_name":"scikit-learn", "question_url":"https:\/\/stackoverflow.com\/questions\/35365007\/tensorflow-precision-recall-f1-score-and-confusion-matrix", "best_answers_votes":61, "question_length":1144, "response_length":615 }, { "question":"Converting LinearSVC's decision function to probabilities (Scikit learn python ) I use linear SVM from scikit learn (LinearSVC) for binary classification problem. I understand that LinearSVC can give me the predicted labels, and the decision scores but I wanted probability estimates (confidence in the label). I want to continue using LinearSVC because of speed (as compared to sklearn.svm.SVC with linear kernel) Is it reasonable to use a logistic function to convert the decision scores to probabilities? ``` import sklearn.svm as suppmach # Fit model: svmmodel=suppmach.LinearSVC(penalty='l1',C=1) predicted_test= svmmodel.predict(x_test) predicted_test_scores= svmmodel.decision_function(x_test) ``` I want to check if it makes sense to obtain Probability estimates simply as [1 \/ (1 + exp(-x)) ] where x is the decision score. Alternatively, are there other options w.r.t. classifiers that I can use to do this efficiently?
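As an aside to the TensorFlow answer above: the same counting trick works in plain NumPy, which makes it easy to sanity-check on toy vectors (the values here are made up for illustration):

```python
import numpy as np

actual = np.array([1, 1, 0, 0, 1])
predicted = np.array([1, 0, 0, 1, 1])

# products of (possibly shifted) 0/1 vectors isolate each confusion-matrix cell
TP = np.count_nonzero(predicted * actual)              # both 1
TN = np.count_nonzero((predicted - 1) * (actual - 1))  # both 0
FP = np.count_nonzero(predicted * (actual - 1))        # predicted 1, actual 0
FN = np.count_nonzero((predicted - 1) * actual)        # predicted 0, actual 1

precision = TP / (TP + FP)
recall = TP / (TP + FN)
f1 = 2 * precision * recall / (precision + recall)
print(TP, TN, FP, FN, round(f1, 4))  # 2 1 1 1 0.6667
```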
Thanks.", "response":"scikit-learn provides CalibratedClassifierCV which can be used to solve this problem: it allows to add probability output to LinearSVC or any other classifier which implements decision_function method: ``` svm = LinearSVC() clf = CalibratedClassifierCV(svm) clf.fit(X_train, y_train) y_proba = clf.predict_proba(X_test) ``` User guide has a nice section on that. By default CalibratedClassifierCV+LinearSVC will get you Platt scaling, but it also provides other options (isotonic regression method), and it is not limited to SVM classifiers.", "best_answers_score":0.6956, "library_name":"scikit-learn", "question_url":"https:\/\/stackoverflow.com\/questions\/26478000\/converting-linearsvcs-decision-function-to-probabilities-scikit-learn-python", "best_answers_votes":136, "question_length":932, "response_length":541 }, { "question":"Will pandas dataframe object work with sklearn kmeans clustering? dataset is pandas dataframe. This is sklearn.cluster.KMeans ``` km = KMeans(n_clusters = n_Clusters) km.fit(dataset) prediction = km.predict(dataset) ``` This is how I decide which entity belongs to which cluster: ``` for i in range(len(prediction)): cluster_fit_dict[dataset.index[i]] = prediction[i] ``` This is how dataset looks: ``` A 1 2 3 4 5 6 B 2 3 4 5 6 7 C 1 4 2 7 8 1 ... 
``` where A,B,C are indices Is this the correct way of using k-means?", "response":"Assuming all the values in the dataframe are numeric, ``` # Convert DataFrame to matrix mat = dataset.values # Using sklearn km = sklearn.cluster.KMeans(n_clusters=5) km.fit(mat) # Get cluster assignment labels labels = km.labels_ # Format results as a DataFrame results = pandas.DataFrame([dataset.index,labels]).T ``` Alternatively, you could try KMeans++ for Pandas.", "best_answers_score":0.6944, "library_name":"scikit-learn", "question_url":"https:\/\/stackoverflow.com\/questions\/28017091\/will-pandas-dataframe-object-work-with-sklearn-kmeans-clustering", "best_answers_votes":39, "question_length":518, "response_length":369 }, { "question":"Use sklearn TfidfVectorizer with already tokenized inputs? I have a list of tokenized sentences and would like to fit a tfidf Vectorizer. I tried the following: ``` tokenized_list_of_sentences = [['this', 'is', 'one'], ['this', 'is', 'another']] def identity_tokenizer(text): return text tfidf = TfidfVectorizer(tokenizer=identity_tokenizer, stop_words='english') tfidf.fit_transform(tokenized_list_of_sentences) ``` which errors out as ``` AttributeError: 'list' object has no attribute 'lower' ``` is there a way to do this? I have a billion sentences and do not want to tokenize them again. They are tokenized before for another stage before this.", "response":"Try initializing the TfidfVectorizer object with the parameter lowercase=False (assuming this is actually desired as you've lowercased your tokens in previous stages). 
```py tokenized_list_of_sentences = [['this', 'is', 'one', 'basketball'], ['this', 'is', 'a', 'football']] def identity_tokenizer(text): return text tfidf = TfidfVectorizer(tokenizer=identity_tokenizer, stop_words='english', lowercase=False) tfidf.fit_transform(tokenized_list_of_sentences) ``` Note that I changed the sentences as they apparently only contained stop words which caused another error due to an empty vocabulary.", "best_answers_score":0.6944, "library_name":"scikit-learn", "question_url":"https:\/\/stackoverflow.com\/questions\/48671270\/use-sklearn-tfidfvectorizer-with-already-tokenized-inputs", "best_answers_votes":27, "question_length":650, "response_length":596 }, { "question":"what is the difference between 'transform' and 'fit_transform' in sklearn In the sklearn-python toolbox, there are two functions transform and fit_transform about sklearn.decomposition.RandomizedPCA. The description of two functions are as follows But what is the difference between them ?", "response":"In scikit-learn estimator api, fit() : used for generating learning model parameters from training data transform() : parameters generated from fit() method,applied upon model to generate transformed data set. fit_transform() : combination of fit() and transform() api on same data set Checkout Chapter-4 from this book & answer from stackexchange for more clarity", "best_answers_score":0.6911, "library_name":"scikit-learn", "question_url":"https:\/\/stackoverflow.com\/questions\/23838056\/what-is-the-difference-between-transform-and-fit-transform-in-sklearn", "best_answers_votes":164, "question_length":289, "response_length":364 }, { "question":"What is the difference between pipeline and make_pipeline in scikit-learn? I got this from the sklearn webpage: Pipeline: Pipeline of transforms with a final estimator Make_pipeline: Construct a Pipeline from the given estimators. This is a shorthand for the Pipeline constructor. 
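Relating back to the fit/transform answer just above: the contract can be sanity-checked with a toy transformer. This class is an illustrative stand-in for any sklearn transformer, not real sklearn code.

```python
class ToyScaler:
    """Learns the mean on fit(); transform() only applies it."""
    def fit(self, xs):
        self.mean_ = sum(xs) / len(xs)     # parameter learned from the data
        return self
    def transform(self, xs):
        return [x - self.mean_ for x in xs]
    def fit_transform(self, xs):
        return self.fit(xs).transform(xs)  # fit(), then transform(), in one call

train = [1.0, 2.0, 3.0]
test = [10.0, 20.0]

a = ToyScaler().fit_transform(train)   # learns mean 2.0 and applies it
scaler = ToyScaler().fit(train)
b = scaler.transform(train)            # same result in two steps
print(a == b, scaler.transform(test))  # True [8.0, 18.0]
```

The key point: transform(test) reuses the parameters learned from train, which is why test data should never go through fit_transform.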
But I still do not understand when I have to use each one. Can anyone give me an example?", "response":"The only difference is that make_pipeline generates names for steps automatically. Step names are needed e.g. if you want to use a pipeline with model selection utilities (e.g. GridSearchCV). With grid search you need to specify parameters for various steps of a pipeline: ``` pipe = Pipeline([('vec', CountVectorizer()), ('clf', LogisticRegression())]) param_grid = [{'clf__C': [1, 10, 100, 1000]}] gs = GridSearchCV(pipe, param_grid) gs.fit(X, y) ``` compare it with make_pipeline: ``` pipe = make_pipeline(CountVectorizer(), LogisticRegression()) param_grid = [{'logisticregression__C': [1, 10, 100, 1000]}] gs = GridSearchCV(pipe, param_grid) gs.fit(X, y) ``` So, with Pipeline: names are explicit, you don't have to figure them out if you need them; name doesn't change if you change estimator\/transformer used in a step, e.g. if you replace LogisticRegression() with LinearSVC() you can still use clf__C. make_pipeline: shorter and arguably more readable notation; names are auto-generated using a straightforward rule (lowercase name of an estimator). When to use them is up to you :) I prefer make_pipeline for quick experiments and Pipeline for more stable code; a rule of thumb: IPython Notebook -> make_pipeline; Python module in a larger project -> Pipeline. But it is certainly not a big deal to use make_pipeline in a module or Pipeline in a short script or a notebook.", "best_answers_score":0.6907, "library_name":"scikit-learn", "question_url":"https:\/\/stackoverflow.com\/questions\/40708077\/what-is-the-difference-between-pipeline-and-make-pipeline-in-scikit-learn", "best_answers_votes":143, "question_length":370, "response_length":1380 }, { "question":"What does calling fit() multiple times on the same model do? After I instantiate a scikit model (e.g. LinearRegression), if I call its fit() method multiple times (with different X and y data), what happens?
Does it fit the model on the data as if I had just re-instantiated the model (i.e. from scratch), or does it take into account the data from the previous call to fit()? Trying with LinearRegression (also looking at its source code) it seems to me that every time I call fit(), it fits from scratch, ignoring the result of any previous call to the same method. I wonder if this is true in general, and whether I can rely on this behavior for all models\/pipelines of scikit-learn.", "response":"If you execute model.fit(X_train, y_train) a second time, it will overwrite all previously fitted coefficients, weights, intercept (bias), etc. If you want to fit just a portion of your data set and then improve your model by fitting new data, you can use estimators supporting \"incremental learning\" (those that implement the partial_fit() method).", "best_answers_score":0.6891, "library_name":"scikit-learn", "question_url":"https:\/\/stackoverflow.com\/questions\/49841324\/what-does-calling-fit-multiple-times-on-the-same-model-do", "best_answers_votes":103, "question_length":685, "response_length":364 }, { "question":"scikit-learn - ROC curve with confidence intervals I am able to get a ROC curve using scikit-learn with fpr, tpr, thresholds = metrics.roc_curve(y_true,y_pred, pos_label=1), where y_true is a list of values based on my gold standard (i.e., 0 for negative and 1 for positive cases) and y_pred is a corresponding list of scores (e.g., 0.053497243, 0.008521122, 0.022781548, 0.101885263, 0.012913795, 0.0, 0.042881547 [...]) I am trying to figure out how to add confidence intervals to that curve, but didn't find any easy way to do that with sklearn.", "response":"You can bootstrap the ROC computations (sample with replacement new versions of y_true \/ y_pred out of the original y_true \/ y_pred and recompute a new value for roc_curve each time) and then estimate a confidence interval this way.
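The parenthetical recipe above (resample with replacement, recompute the statistic each time) is generic. A minimal sketch with a plain mean in place of roc_curve, before the full ROC example below (data values and helper name are illustrative):

```python
import random

random.seed(0)  # reproducible resampling
data = [0.0, 1.0, 1.0, 2.0, 4.0]

def bootstrap_stat(values, stat, n_rounds=200):
    out = []
    for _ in range(n_rounds):
        # sample with replacement, same size as the original
        resample = [random.choice(values) for _ in values]
        out.append(stat(resample))
    return out

scores = bootstrap_stat(data, lambda v: sum(v) / len(v))
print(len(scores), min(scores), max(scores))
```

Each resampled score is a mean of values drawn from data, so all scores stay within the original min/max range.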
To take the variability induced by the train test split into account, you can also use the ShuffleSplit CV iterator many times, fit a model on the train split, generate y_pred for each model and thus gather an empirical distribution of roc_curves as well and finally compute confidence intervals for those. Edit: bootstrapping in python Here is an example for bootstrapping the ROC AUC score out of the predictions of a single model. I chose to bootstrap the ROC AUC to make it easier to follow as a Stack Overflow answer, but it can be adapted to bootstrap the whole curve instead: ``` import numpy as np from scipy.stats import sem from sklearn.metrics import roc_auc_score y_pred = np.array([0.21, 0.32, 0.63, 0.35, 0.92, 0.79, 0.82, 0.99, 0.04]) y_true = np.array([0, 1, 0, 0, 1, 1, 0, 1, 0 ]) print(\"Original ROC area: {:0.3f}\".format(roc_auc_score(y_true, y_pred))) n_bootstraps = 1000 rng_seed = 42 # control reproducibility bootstrapped_scores = [] rng = np.random.RandomState(rng_seed) for i in range(n_bootstraps): # bootstrap by sampling with replacement on the prediction indices indices = rng.randint(0, len(y_pred), len(y_pred)) if len(np.unique(y_true[indices])) < 2: # We need at least one positive and one negative sample for ROC AUC # to be defined: reject the sample continue score = roc_auc_score(y_true[indices], y_pred[indices]) bootstrapped_scores.append(score) print(\"Bootstrap #{} ROC area: {:0.3f}\".format(i + 1, score)) ``` You can see that we need to reject some invalid resamples. However on real data with many predictions this is a very rare event and should not impact the confidence interval significantly (you can try to vary the rng_seed to check). Here is the histogram: ``` import matplotlib.pyplot as plt plt.hist(bootstrapped_scores, bins=50) plt.title('Histogram of the bootstrapped ROC AUC scores') plt.show() ``` Note that the resampled scores are censored in the [0 - 1] range causing a high number of scores in the last bin. 
To get a confidence interval one can sort the samples: ``` sorted_scores = np.array(bootstrapped_scores) sorted_scores.sort() # Computing the lower and upper bound of the 90% confidence interval # You can change the bounds percentiles to 0.025 and 0.975 to get # a 95% confidence interval instead. confidence_lower = sorted_scores[int(0.05 * len(sorted_scores))] confidence_upper = sorted_scores[int(0.95 * len(sorted_scores))] print(\"Confidence interval for the score: [{:0.3f} - {:0.3f}]\".format( confidence_lower, confidence_upper)) ``` which gives: ``` Confidence interval for the score: [0.444 - 1.000] ``` The confidence interval is very wide but this is probably a consequence of my choice of predictions (3 mistakes out of 9 predictions) and the total number of predictions is quite small. Another remark on the plot: the scores are quantized (many empty histogram bins). This is a consequence of the small number of predictions. One could introduce a bit of Gaussian noise on the scores (or the y_pred values) to smooth the distribution and make the histogram look better. But then the choice of the smoothing bandwidth is tricky. Finally, as stated earlier, this confidence interval is specific to your training set. To get a better estimate of the variability of the ROC induced by your model class and parameters, you should do iterated cross-validation instead. However this is often much more costly as you need to train a new model for each random train \/ test split.
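As a shorter variant of the sorting step above, np.percentile computes the same bounds directly (a sketch; the scores below are made-up stand-ins for the bootstrapped_scores list built earlier):

```python
import numpy as np

# hypothetical stand-in for the bootstrapped ROC AUC scores computed above
bootstrapped_scores = [0.50, 0.62, 0.70, 0.71, 0.80, 0.83, 0.90, 0.95, 1.00, 1.00]

# the 5th and 95th percentiles bound a 90% confidence interval
lower, upper = np.percentile(bootstrapped_scores, [5, 95])
print('Confidence interval for the score: [{:0.3f} - {:0.3f}]'.format(lower, upper))
```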
EDIT: since I first wrote this reply, there is a bootstrap implementation in scipy directly: https:\/\/docs.scipy.org\/doc\/scipy\/reference\/generated\/scipy.stats.bootstrap.html", "best_answers_score":0.6884, "library_name":"scikit-learn", "question_url":"https:\/\/stackoverflow.com\/questions\/19124239\/scikit-learn-roc-curve-with-confidence-intervals", "best_answers_votes":48, "question_length":548, "response_length":3850 }, { "question":"Scikit-learn train_test_split with indices How do I get the original indices of the data when using train_test_split()? What I have is the following ``` from sklearn.cross_validation import train_test_split import numpy as np data = np.reshape(np.random.randn(20),(10,2)) # 10 training examples labels = np.random.randint(2, size=10) # 10 labels x1, x2, y1, y2 = train_test_split(data, labels, test_size=0.2) ``` But this does not give the indices of the original data. One workaround is to add the indices to data (e.g. data = [(i, d) for i, d in enumerate(data)]) and then pass them inside train_test_split and then expand again.
Are there any cleaner solutions?", "response":"You can use pandas dataframes or series as Julien said but if you want to restrict your-self to numpy you can pass an additional array of indices: ``` from sklearn.model_selection import train_test_split import numpy as np n_samples, n_features, n_classes = 10, 2, 2 data = np.random.randn(n_samples, n_features) # 10 training examples labels = np.random.randint(n_classes, size=n_samples) # 10 labels indices = np.arange(n_samples) ( data_train, data_test, labels_train, labels_test, indices_train, indices_test, ) = train_test_split(data, labels, indices, test_size=0.2) ```", "best_answers_score":0.6867, "library_name":"scikit-learn", "question_url":"https:\/\/stackoverflow.com\/questions\/31521170\/scikit-learn-train-test-split-with-indices", "best_answers_votes":137, "question_length":652, "response_length":576 }, { "question":"What is the inverse of regularization strength in Logistic Regression? How should it affect my code? I am using sklearn.linear_model.LogisticRegression in scikit learn to run a Logistic Regression. ```none C : float, optional (default=1.0) Inverse of regularization strength; must be a positive float. Like in support vector machines, smaller values specify stronger regularization. ``` What does C mean here in simple terms? What is regularization strength?", "response":"Regularization is applying a penalty to increasing the magnitude of parameter values in order to reduce overfitting. When you train a model such as a logistic regression model, you are choosing parameters that give you the best fit to the data. This means minimizing the error between what the model predicts for your dependent variable given your data compared to what your dependent variable actually is. The problem comes when you have a lot of parameters (a lot of independent variables) but not too much data. 
In this case, the model will often tailor the parameter values to idiosyncrasies in your data -- which means it fits your data almost perfectly. However because those idiosyncrasies don't appear in future data you see, your model predicts poorly. To solve this, as well as minimizing the error as already discussed, you add to what is minimized and also minimize a function that penalizes large values of the parameters. Most often the function is \u03bb\u03a3\u03b8j^2, which is some constant \u03bb times the sum of the squared parameter values \u03b8j^2. The larger \u03bb is, the less likely it is that the parameters will be increased in magnitude simply to adjust for small perturbations in the data. In your case however, rather than specifying \u03bb, you specify C=1\/\u03bb.", "best_answers_score":0.6864, "library_name":"scikit-learn", "question_url":"https:\/\/stackoverflow.com\/questions\/22851316\/what-is-the-inverse-of-regularization-strength-in-logistic-regression-how-shoul", "best_answers_votes":108, "question_length":458, "response_length":1255 }, { "question":"'verbose' argument in scikit-learn Many scikit-learn functions have a verbose argument that, according to their documentation, \"[c]ontrols the verbosity: the higher, the more messages\" (e.g., GridSearchCV). Unfortunately, no guidance is provided on which integers are allowed (e.g., can a user set verbosity to 100?) and what level of verbosity corresponds to which integers. I cannot find this information anywhere in the documentation. My question is, which integers map to which levels of verbosity?", "response":"Higher integers map to higher verbosity as the docstring says. You can set verbosity=100 but I'm pretty sure it will be the same as verbosity=10. If you are looking for a list of what exactly is printed for each estimator for each integer, you have to look into the source.
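As a quick illustration (a sketch on the iris data; the exact messages printed differ between estimators and versions), verbose is just an integer knob passed to the estimator:

```python
from sklearn.datasets import load_iris
from sklearn.model_selection import GridSearchCV
from sklearn.svm import SVC

X, y = load_iris(return_X_y=True)

# verbose=0 is silent; verbose=2 prints one line per fit during the search
gs = GridSearchCV(SVC(), {'C': [1, 10]}, cv=3, verbose=2)
gs.fit(X, y)
print(gs.best_params_)
```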
I think most estimators only have two or three levels of verbosity; 3 or above will likely be the most verbose you can get.", "best_answers_score":0.6856, "library_name":"scikit-learn", "question_url":"https:\/\/stackoverflow.com\/questions\/29995249\/verbose-argument-in-scikit-learn", "best_answers_votes":33, "question_length":502, "response_length":398 }, { "question":"Getting model attributes from pipeline I typically get PCA loadings like this: ``` pca = PCA(n_components=2) X_t = pca.fit(X).transform(X) loadings = pca.components_ ``` If I run PCA using a scikit-learn pipeline: ``` from sklearn.pipeline import Pipeline pipeline = Pipeline(steps=[ ('scaling',StandardScaler()), ('pca',PCA(n_components=2)) ]) X_t=pipeline.fit_transform(X) ``` is it possible to get the loadings? Simply trying loadings = pipeline.components_ fails: ``` AttributeError: 'Pipeline' object has no attribute 'components_' ``` (Also interested in extracting attributes like coef_ from pipelines.)", "response":"Did you look at the documentation: http:\/\/scikit-learn.org\/dev\/modules\/pipeline.html I feel it is pretty clear. Update: in 0.21 you can use just square brackets: ``` pipeline['pca'] ``` or indices ``` pipeline[1] ``` There are two ways to get to the steps in a pipeline, either using indices or using the string names you gave: ``` pipeline.named_steps['pca'] pipeline.steps[1][1] ``` This will give you the PCA object, on which you can get components. With named_steps you can also use attribute access with a .
which allows autocompletion: pipeline.named_steps.pca.", "best_answers_score":0.685, "library_name":"scikit-learn", "question_url":"https:\/\/stackoverflow.com\/questions\/28822756\/getting-model-attributes-from-pipeline", "best_answers_votes":119, "question_length":610, "response_length":567 }, { "question":"Random forest class_weight and sample_weight parameters I have a class imbalance problem and have been experimenting with a weighted Random Forest using the implementation in scikit-learn (>= 0.16). I have noticed that the implementation takes a class_weight parameter in the tree constructor and sample_weight parameter in the fit method to help solve class imbalance. Those two seem to be multiplied though to decide a final weight. I have trouble understanding the following: In what stages of the tree construction\/training\/prediction are those weights used? I have seen some papers for weighted trees, but I am not sure what scikit implements. What exactly is the difference between class_weight and sample_weight?", "response":"RandomForests are built on Trees, which are very well documented. Check how Trees use the sample weighting: User guide on decision trees - tells exactly what algorithm is used Decision tree API - explains how sample_weight is used by trees (which for random forests, as you have determined, is the product of class_weight and sample_weight). As for the difference between class_weight and sample_weight: much can be determined simply by the nature of their datatypes. sample_weight is a 1D array of length n_samples, assigning an explicit weight to each example used for training. class_weight is either a dictionary mapping each class to a uniform weight for that class (e.g., {1:.9, 2:.5, 3:.01}), or is a string telling sklearn how to automatically determine this dictionary.
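For example (a minimal sketch on synthetic data), the two can be combined in one fit; each training example then effectively carries class_weight[class] * sample_weight:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.RandomState(0)
X = rng.randn(100, 4)
y = (X[:, 0] > 0).astype(int)

# class 1 counts double via class_weight; the last 50 rows count half via sample_weight
clf = RandomForestClassifier(n_estimators=10, class_weight={0: 1.0, 1: 2.0}, random_state=0)
weights = np.ones(len(y))
weights[50:] = 0.5
clf.fit(X, y, sample_weight=weights)
print(clf.score(X, y))
```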
So the training weight for a given example is the product of its explicitly named sample_weight (or 1 if sample_weight is not provided), and its class_weight (or 1 if class_weight is not provided).", "best_answers_score":0.6848, "library_name":"scikit-learn", "question_url":"https:\/\/stackoverflow.com\/questions\/30805192\/random-forest-class-weight-and-sample-weight-parameters", "best_answers_votes":21, "question_length":714, "response_length":971 }, { "question":"Invalid parameter for sklearn estimator pipeline I am implementing an example from the O'Reilly book \"Introduction to Machine Learning with Python\", using Python 2.7 and sklearn 0.16. The code I am using: ``` pipe = make_pipeline(TfidfVectorizer(), LogisticRegression()) param_grid = {\"logisticregression_C\": [0.001, 0.01, 0.1, 1, 10, 100], \"tfidfvectorizer_ngram_range\": [(1,1), (1,2), (1,3)]} grid = GridSearchCV(pipe, param_grid, cv=5) grid.fit(X_train, y_train) print(\"Best cross-validation score: {:.2f}\".format(grid.best_score_)) ``` The error being returned boils down to: ``` ValueError: Invalid parameter logisticregression_C for estimator Pipeline ``` Is this an error related to using Make_pipeline from v.0.16? What is causing this error?", "response":"There should be two underscores between the estimator name and its parameters in a Pipeline: logisticregression__C. Do the same for tfidfvectorizer. It is mentioned in the user guide here: https:\/\/scikit-learn.org\/stable\/modules\/compose.html#nested-parameters.
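Applied to the pipeline from the question, the parameter names then look like this (a sketch; nothing is fitted here, we just check that the grid keys resolve against the pipeline):

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

pipe = make_pipeline(TfidfVectorizer(), LogisticRegression())

# double underscore between the auto-generated step name and the parameter
param_grid = {'logisticregression__C': [0.001, 0.01, 0.1, 1, 10, 100],
              'tfidfvectorizer__ngram_range': [(1, 1), (1, 2), (1, 3)]}

# every grid key must be a valid pipeline parameter
print(set(param_grid) <= set(pipe.get_params()))
```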
See the example at https:\/\/scikit-learn.org\/stable\/auto_examples\/compose\/plot_compare_reduction.html#sphx-glr-auto-examples-compose-plot-compare-reduction-py", "best_answers_score":0.6832, "library_name":"scikit-learn", "question_url":"https:\/\/stackoverflow.com\/questions\/41899132\/invalid-parameter-for-sklearn-estimator-pipeline", "best_answers_votes":75, "question_length":750, "response_length":413 }, { "question":"confused about random_state in decision tree of scikit learn Confused about the random_state parameter, not sure why decision tree training needs some randomness. My thoughts: is it related to random forests? Is it related to splitting the training\/testing data set? If so, why not use the train\/test split method directly (http:\/\/scikit-learn.org\/stable\/modules\/generated\/sklearn.cross_validation.train_test_split.html)? http:\/\/scikit-learn.org\/stable\/modules\/generated\/sklearn.tree.DecisionTreeClassifier.html ``` from sklearn.datasets import load_iris from sklearn.cross_validation import cross_val_score from sklearn.tree import DecisionTreeClassifier clf = DecisionTreeClassifier(random_state=0) iris = load_iris() cross_val_score(clf, iris.data, iris.target, cv=10) ... ... array([ 1. , 0.93..., 0.86..., 0.93..., 0.93..., 0.93..., 0.93..., 1. , 0.93..., 1. ]) ```", "response":"This is explained in the documentation: The problem of learning an optimal decision tree is known to be NP-complete under several aspects of optimality and even for simple concepts. Consequently, practical decision-tree learning algorithms are based on heuristic algorithms such as the greedy algorithm where locally optimal decisions are made at each node. Such algorithms cannot guarantee to return the globally optimal decision tree. This can be mitigated by training multiple trees in an ensemble learner, where the features and samples are randomly sampled with replacement.
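A quick way to see this in practice (a sketch on the iris data): fixing the seed makes repeated fits reproduce the exact same tree, even though the randomness itself is always there:

```python
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier

X, y = load_iris(return_X_y=True)

# same integer seed -> same random choices -> identical trees
a = DecisionTreeClassifier(random_state=0).fit(X, y)
b = DecisionTreeClassifier(random_state=0).fit(X, y)
print((a.predict(X) == b.predict(X)).all())
```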
So, basically, a sub-optimal greedy algorithm is repeated a number of times using random selections of features and samples (a similar technique used in random forests). The random_state parameter allows controlling these random choices. The interface documentation specifically states: If int, random_state is the seed used by the random number generator; If RandomState instance, random_state is the random number generator; If None, the random number generator is the RandomState instance used by np.random. So, the random algorithm will be used in any case. Passing any value (whether a specific int, e.g., 0, or a RandomState instance), will not change that. The only rationale for passing in an int value (0 or otherwise) is to make the outcome consistent across calls: if you call this with random_state=0 (or any other value), then each and every time, you'll get the same result.", "best_answers_score":0.6829, "library_name":"scikit-learn", "question_url":"https:\/\/stackoverflow.com\/questions\/39158003\/confused-about-random-state-in-decision-tree-of-scikit-learn", "best_answers_votes":42, "question_length":858, "response_length":1467 }, { "question":"scikit learn - feature importance calculation in decision trees I'm trying to understand how feature importance is calculated for decision trees in sci-kit learn. This question has been asked before, but I am unable to reproduce the results the algorithm is providing. 
For example: ``` from StringIO import StringIO from sklearn.datasets import load_iris from sklearn.tree import DecisionTreeClassifier from sklearn.tree.export import export_graphviz from sklearn.feature_selection import mutual_info_classif X = [[1,0,0], [0,0,0], [0,0,1], [0,1,0]] y = [1,0,1,1] clf = DecisionTreeClassifier() clf.fit(X, y) feat_importance = clf.tree_.compute_feature_importances(normalize=False) print(\"feat importance = \" + str(feat_importance)) out = StringIO() out = export_graphviz(clf, out_file='test\/tree.dot') ``` results in feature importance: ``` feat importance = [0.25 0.08333333 0.04166667] ``` and gives the following decision tree: Now, this answer to a similar question suggests the importance is calculated as Where G is the node impurity, in this case the gini impurity. This is the impurity reduction as far as I understood it. However, for feature 1 this should be: This answer suggests the importance is weighted by the probability of reaching the node (which is approximated by the proportion of samples reaching that node). Again, for feature 1 this should be: Both formulas provide the wrong result. How is the feature importance calculated correctly?", "response":"I think feature importance depends on the implementation so we need to look at the documentation of scikit-learn. The feature importances. The higher, the more important the feature. The importance of a feature is computed as the (normalized) total reduction of the criterion brought by that feature. It is also known as the Gini importance That reduction or weighted information gain is defined as : The weighted impurity decrease equation is the following: N_t \/ N * (impurity - N_t_R \/ N_t * right_impurity - N_t_L \/ N_t * left_impurity) where N is the total number of samples, N_t is the number of samples at the current node, N_t_L is the number of samples in the left child, and N_t_R is the number of samples in the right child. 
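Plugging the node class counts from the tree in the question into this formula reproduces the printed importances (a quick arithmetic sketch; the counts below are read off that tree, which splits on X[2], then X[1], then X[0]):

```python
def gini(counts):
    # gini impurity from a list of class counts
    total = float(sum(counts))
    return 1.0 - sum((c / total) ** 2 for c in counts)

g_root = gini([1, 3])   # 4 samples at the root          -> 0.375
g_left = gini([1, 2])   # 3 samples after the X[2] split -> 0.444...
g_next = gini([1, 1])   # 2 samples after the X[1] split -> 0.5

imp_x2 = (4.0 / 4) * (g_root - (3.0 / 4) * g_left)
imp_x1 = (3.0 / 4) * (g_left - (2.0 / 3) * g_next)
imp_x0 = (2.0 / 4) * g_next
print([round(v, 8) for v in (imp_x0, imp_x1, imp_x2)])
```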
http:\/\/scikit-learn.org\/stable\/modules\/generated\/sklearn.tree.DecisionTreeClassifier.html#sklearn.tree.DecisionTreeClassifier Since each feature is used once in your case, feature importance must be equal to the equation above. For X[2] : feature_importance = (4 \/ 4) * (0.375 - (0.75 * 0.444)) = 0.042 For X[1] : feature_importance = (3 \/ 4) * (0.444 - (2\/3 * 0.5)) = 0.083 For X[0] : feature_importance = (2 \/ 4) * (0.5) = 0.25", "best_answers_score":0.6826, "library_name":"scikit-learn", "question_url":"https:\/\/stackoverflow.com\/questions\/49170296\/scikit-learn-feature-importance-calculation-in-decision-trees", "best_answers_votes":37, "question_length":1460, "response_length":1162 }, { "question":"Lasso on sklearn does not converge When I run something like ``` import numpy from sklearn import linear_model A= #something b= #something clf=linear_model.Lasso(alpha=0.015, fit_intercept=False, tol=0.00000000000001, max_iter=10000000000000, positive=True) clf.fit(A,b) ``` I get the error: ``` usr\/local\/lib\/python2.7\/dist-packages\/scikit_learn-0.14.1-py2.7-linux-x86_64.egg\/ sklearn\/linear_model\/coordinate_descent.py:418: UserWarning: Objective did not converge. You might want to increase the number of iterations ' to increase the number of iterations') ``` The interesting thing is that A is never rank deficient. (I think)", "response":"Try increasing tol. From the documentation: tol : float, optional The tolerance for the optimization: if the updates are smaller than tol, the optimization code checks the dual gap for optimality and continues until it is smaller than tol. The default for tol is 0.0001 on my version of scikit-learn.
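In other words, keep tol near its default instead of pushing it towards machine precision; a minimal sketch on synthetic data (not the matrices from the question):

```python
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.RandomState(0)
A = rng.randn(50, 10)
b = A.dot(rng.rand(10))

# default-ish tolerance: coordinate descent stops once the updates are small enough
clf = Lasso(alpha=0.015, fit_intercept=False, positive=True, tol=1e-4, max_iter=10000)
clf.fit(A, b)
print(clf.n_iter_)
```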
I assume that your tolerance is so small that the optimization never reaches a lower value.", "best_answers_score":0.6806, "library_name":"scikit-learn", "question_url":"https:\/\/stackoverflow.com\/questions\/20681864\/lasso-on-sklearn-does-not-converge", "best_answers_votes":39, "question_length":631, "response_length":392 }, { "question":"Difference between cross_val_score and cross_val_predict I want to evaluate a regression model built with scikit-learn using cross-validation and am getting confused about which of the two functions cross_val_score and cross_val_predict I should use. One option would be : ``` cvs = DecisionTreeRegressor(max_depth = depth) scores = cross_val_score(cvs, predictors, target, cv=cvfolds, scoring='r2') print(\"R2-Score: %0.2f (+\/- %0.2f)\" % (scores.mean(), scores.std() * 2)) ``` Another one is to use the cv-predictions with the standard r2_score: ``` cvp = DecisionTreeRegressor(max_depth = depth) predictions = cross_val_predict(cvp, predictors, target, cv=cvfolds) print (\"CV R^2-Score: {}\".format(r2_score(target, predictions))) ``` I would assume that both methods are valid and give similar results. But that is only the case with small k-folds. While the r^2 is roughly the same for 10-fold-cv, it gets increasingly lower for higher k-values in the case of the first version using \"cross_val_score\". The second version is mostly unaffected by changing numbers of folds. Is this behavior to be expected and do I lack some understanding regarding CV in SKLearn?", "response":"cross_val_score returns the score of the test fold, whereas cross_val_predict returns the predicted y values for the test fold. For cross_val_score(), you are using the average of the output, which will be affected by the number of folds because some folds may have a high error (not fit correctly). Whereas, cross_val_predict() returns, for each element in the input, the prediction that was obtained for that element when it was in the test set.
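A side-by-side sketch on synthetic data (not the tree setup from the question) makes the difference in outputs concrete:

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.metrics import r2_score
from sklearn.model_selection import cross_val_predict, cross_val_score

rng = np.random.RandomState(0)
X = rng.randn(200, 3)
y = 2 * X[:, 0] + 0.1 * rng.randn(200)

est = LinearRegression()
scores = cross_val_score(est, X, y, cv=5, scoring='r2')  # one score per fold
preds = cross_val_predict(est, X, y, cv=5)               # one prediction per sample
print(scores.shape, preds.shape, round(r2_score(y, preds), 3))
```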
[Note that only cross-validation strategies that assign all elements to a test set exactly once can be used]. So increasing the number of folds only increases the training data for each test element, and hence its result may not be affected much. Edit (after comment) Please have a look at the following answer on how cross_val_predict works: How is scikit-learn cross_val_predict accuracy score calculated? I think that cross_val_predict will overfit because as the folds increase, more data is used for training and less for testing. So the resultant label is more dependent on the training data. Also, as already said above, the prediction for one sample is done only once, so it may be more susceptible to the splitting of the data. That's why most places or tutorials recommend using cross_val_score for analysis.", "best_answers_score":0.68, "library_name":"scikit-learn", "question_url":"https:\/\/stackoverflow.com\/questions\/43613443\/difference-between-cross-val-score-and-cross-val-predict", "best_answers_votes":43, "question_length":1162, "response_length":1280 }, { "question":"TfidfVectorizer in scikit-learn : ValueError: np.nan is an invalid document I'm using TfidfVectorizer from scikit-learn to do some feature extraction from text data. I have a CSV file with a Score (can be +1 or -1) and a Review (text). I pulled this data into a DataFrame so I can run the Vectorizer.
This is my code: ``` import pandas as pd import numpy as np from sklearn.feature_extraction.text import TfidfVectorizer df = pd.read_csv(\"train_new.csv\", names = ['Score', 'Review'], sep=',') # x = df['Review'] == np.nan # # print x.to_csv(path='FindNaN.csv', sep=',', na_rep = 'string', index=True) # # print df.isnull().values.any() v = TfidfVectorizer(decode_error='replace', encoding='utf-8') x = v.fit_transform(df['Review']) ``` This is the traceback for the error I get: ``` Traceback (most recent call last): File \"\/home\/PycharmProjects\/Review\/src\/feature_extraction.py\", line 16, in x = v.fit_transform(df['Review']) File \"\/home\/b\/hw1\/local\/lib\/python2.7\/site- packages\/sklearn\/feature_extraction\/text.py\", line 1305, in fit_transform X = super(TfidfVectorizer, self).fit_transform(raw_documents) File \"\/home\/b\/work\/local\/lib\/python2.7\/site-packages\/sklearn\/feature_extraction\/text.py\", line 817, in fit_transform self.fixed_vocabulary_) File \"\/home\/b\/work\/local\/lib\/python2.7\/site- packages\/sklearn\/feature_extraction\/text.py\", line 752, in _count_vocab for feature in analyze(doc): File \"\/home\/b\/work\/local\/lib\/python2.7\/site-packages\/sklearn\/feature_extraction\/text.py\", line 238, in tokenize(preprocess(self.decode(doc))), stop_words) File \"\/home\/b\/work\/local\/lib\/python2.7\/site-packages\/sklearn\/feature_extraction\/text.py\", line 118, in decode raise ValueError(\"np.nan is an invalid document, expected byte or \" ValueError: np.nan is an invalid document, expected byte or unicode string. ``` I checked the CSV file and DataFrame for anything that's being read as NaN but I can't find anything. There are 18000 rows, none of which return isnan as True. This is what df['Review'].head() looks like: ``` 0 This book is such a life saver. It has been s... 1 I bought this a few times for my older son and... 2 This is great for basics, but I wish the space... 3 This book is perfect! I'm a first time new mo... 
4 During your postpartum stay at the hospital th... Name: Review, dtype: object ```", "response":"You need to convert the dtype object to unicode string as is clearly mentioned in the traceback. ``` x = v.fit_transform(df['Review'].values.astype('U')) ## Even astype(str) would work ``` From the Doc page of TFIDF Vectorizer: fit_transform(raw_documents, y=None) Parameters: raw_documents : iterable an iterable which yields either str, unicode or file objects", "best_answers_score":0.6794, "library_name":"scikit-learn", "question_url":"https:\/\/stackoverflow.com\/questions\/39303912\/tfidfvectorizer-in-scikit-learn-valueerror-np-nan-is-an-invalid-document", "best_answers_votes":157, "question_length":2307, "response_length":362 }, { "question":"Feature names from OneHotEncoder I am using OneHotEncoder to encode few categorical variables (eg - Sex and AgeGroup). The resulting feature names from the encoder are like - 'x0_female', 'x0_male', 'x1_0.0', 'x1_15.0' etc. ``` >>> train_X = pd.DataFrame({'Sex':['male', 'female']*3, 'AgeGroup':[0,15,30,45,60,75]}) >>> from sklearn.preprocessing import OneHotEncoder >>> encoder = OneHotEncoder() >>> train_X_encoded = encoder.fit_transform(train_X[['Sex', 'AgeGroup']]) ``` ``` >>> encoder.get_feature_names() >>> array(['x0_female', 'x0_male', 'x1_0.0', 'x1_15.0', 'x1_30.0', 'x1_45.0', 'x1_60.0', 'x1_75.0'], dtype=object) ``` Is there a way to tell OneHotEncoder to create the feature names in such a way that the column name is added at the beginning, something like - Sex_female, AgeGroup_15.0 etc, similar to what Pandas get_dummies() does.", "response":"A list with the original column names can be passed to get_feature_names. ```py >>> encoder.get_feature_names(['Sex', 'AgeGroup']) array(['Sex_female', 'Sex_male', 'AgeGroup_0', 'AgeGroup_15', 'AgeGroup_30', 'AgeGroup_45', 'AgeGroup_60', 'AgeGroup_75'], dtype=object) ``` DEPRECATED: get_feature_names is deprecated in 1.0 and will be removed in 1.2. 
Please use get_feature_names_out instead. As per sklearn.preprocessing.OneHotEncoder. ```py >>> encoder.get_feature_names_out(['Sex', 'AgeGroup']) array(['Sex_female', 'Sex_male', 'AgeGroup_0', 'AgeGroup_15', 'AgeGroup_30', 'AgeGroup_45', 'AgeGroup_60', 'AgeGroup_75'], dtype=object) ```", "best_answers_score":0.677, "library_name":"scikit-learn", "question_url":"https:\/\/stackoverflow.com\/questions\/54570947\/feature-names-from-onehotencoder", "best_answers_votes":81, "question_length":848, "response_length":638 }, { "question":"sklearn GridSearchCV with Pipeline I am trying to build a pipeline which first does RandomizedPCA on my training data and then fits a ridge regression model. Here is my code: ``` pca = RandomizedPCA(1000, whiten=True) rgn = Ridge() pca_ridge = Pipeline([('pca', pca), ('ridge', rgn)]) parameters = {'ridge__alpha': 10 ** np.linspace(-5, -2, 3)} grid_search = GridSearchCV(pca_ridge, parameters, cv=2, n_jobs=1, scoring='mean_squared_error') grid_search.fit(train_x, train_y[:, 1:]) ``` I know about the RidgeCV function but I want to try out Pipeline and GridSearchCV. I want the grid search CV to report RMSE error, but this doesn't seem supported in sklearn so I'm making do with MSE. However, the scores it reports are negative: ``` In [41]: grid_search.grid_scores_ Out[41]: [mean: -0.02665, std: 0.00007, params: {'ridge__alpha': 1.0000000000000001e-05}, mean: -0.02658, std: 0.00009, params: {'ridge__alpha': 0.031622776601683791}, mean: -0.02626, std: 0.00008, params: {'ridge__alpha': 100.0}] ``` Obviously this isn't possible for mean squared error - what am I doing wrong here?", "response":"Those scores are negative MSE scores, i.e. negate them and you get the MSE.
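A small demonstration of the sign convention (a sketch; note that current scikit-learn versions spell the scorer 'neg_mean_squared_error'):

```python
from sklearn.datasets import make_regression
from sklearn.linear_model import Ridge
from sklearn.model_selection import cross_val_score

X, y = make_regression(n_samples=100, n_features=5, random_state=0)
scores = cross_val_score(Ridge(), X, y, scoring='neg_mean_squared_error', cv=3)

mse = -scores  # negate to recover the ordinary (non-negative) MSE
print((scores <= 0).all(), (mse >= 0).all())
```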
The thing is that GridSearchCV, by convention, always tries to maximize its score so loss functions like MSE have to be negated.", "best_answers_score":0.6736, "library_name":"scikit-learn", "question_url":"https:\/\/stackoverflow.com\/questions\/21050110\/sklearn-gridsearchcv-with-pipeline", "best_answers_votes":48, "question_length":1089, "response_length":204 }, { "question":"One-Hot-Encode categorical variables and scale continuous ones simultaneouely I'm confused because it's going to be a problem if you first do OneHotEncoder and then StandardScaler because the scaler will also scale the columns previously transformed by OneHotEncoder. Is there a way to perform encoding and scaling at the same time and then concatenate the results together?", "response":"Sure thing. Just separately scale and one-hot-encode the separate columns as needed: ``` # Import libraries and download example data from sklearn.preprocessing import StandardScaler, OneHotEncoder dataset = pd.read_csv(\"https:\/\/stats.idre.ucla.edu\/stat\/data\/binary.csv\") print(dataset.head(5)) # Define which columns should be encoded vs scaled columns_to_encode = ['rank'] columns_to_scale = ['gre', 'gpa'] # Instantiate encoder\/scaler scaler = StandardScaler() ohe = OneHotEncoder(sparse=False) # Scale and Encode Separate Columns scaled_columns = scaler.fit_transform(dataset[columns_to_scale]) encoded_columns = ohe.fit_transform(dataset[columns_to_encode]) # Concatenate (Column-Bind) Processed Columns Back Together processed_data = np.concatenate([scaled_columns, encoded_columns], axis=1) ```", "best_answers_score":0.6674, "library_name":"scikit-learn", "question_url":"https:\/\/stackoverflow.com\/questions\/43798377\/one-hot-encode-categorical-variables-and-scale-continuous-ones-simultaneouely", "best_answers_votes":37, "question_length":374, "response_length":801 }, { "question":"Sklearn changing string class label to int I have a pandas dataframe and I'm trying to change the values in a given column 
which are represented by strings into integers. For instance: ``` df = index fruit quantity price 0 apple 5 0.99 1 apple 2 0.99 2 orange 4 0.89 4 banana 1 1.64 ... 10023 kiwi 10 0.92 ``` I would like it to look like: ``` df = index fruit quantity price 0 1 5 0.99 1 1 2 0.99 2 2 4 0.89 4 3 1 1.64 ... 10023 5 10 0.92 ``` I can do this using ``` df[\"fruit\"] = df[\"fruit\"].map({\"apple\": 1, \"orange\": 2,...}) ``` which works if I have a small list to change, but I'm looking at a column with over 500 different labels. Is there any way of changing this from a string to an int?", "response":"You can use sklearn.preprocessing ``` from sklearn import preprocessing le = preprocessing.LabelEncoder() le.fit(df.fruit) df['categorical_label'] = le.transform(df.fruit) ``` Transform labels back to original encoding. ``` le.inverse_transform(df['categorical_label']) ```", "best_answers_score":0.666, "library_name":"scikit-learn", "question_url":"https:\/\/stackoverflow.com\/questions\/42320834\/sklearn-changing-string-class-label-to-int", "best_answers_votes":36, "question_length":697, "response_length":273 }, { "question":"Put customized functions in Sklearn pipeline In my classification scheme, there are several steps including: SMOTE (Synthetic Minority Over-sampling Technique) Fisher criteria for feature selection Standardization (Z-score normalisation) SVC (Support Vector Classifier) The main parameters to be tuned in the scheme above are percentile (2.) and hyperparameters for SVC (4.) and I want to go through grid search for tuning.
The current solution builds a \"partial\" pipeline including step 3 and 4 in the scheme clf = Pipeline([('normal',preprocessing.StandardScaler()),('svc',svm.SVC(class_weight='auto'))]) and breaks the scheme into two parts: Tune the percentile of features to keep through the first grid search ```py skf = StratifiedKFold(y) for train_ind, test_ind in skf: X_train, X_test, y_train, y_test = X[train_ind], X[test_ind], y[train_ind], y[test_ind] # SMOTE synthesizes the training data (we want to keep test data intact) X_train, y_train = SMOTE(X_train, y_train) for percentile in percentiles: # Fisher returns the indices of the selected features specified by the parameter 'percentile' selected_ind = Fisher(X_train, y_train, percentile) X_train_selected, X_test_selected = X_train[selected_ind,:], X_test[selected_ind, :] model = clf.fit(X_train_selected, y_train) y_predict = model.predict(X_test_selected) f1 = f1_score(y_predict, y_test) ``` The f1 scores will be stored and then be averaged through all fold partitions for all percentiles, and the percentile with the best CV score is returned. The purpose of putting 'percentile for loop' as the inner loop is to allow fair competition as we have the same training data (including synthesized data) across all fold partitions for all percentiles. 
After determining the percentile, tune the hyperparameters by a second grid search ```py skf = StratifiedKFold(y) for train_ind, test_ind in skf: X_train, X_test, y_train, y_test = X[train_ind], X[test_ind], y[train_ind], y[test_ind] # SMOTE synthesizes the training data (we want to keep test data intact) X_train, y_train = SMOTE(X_train, y_train) for parameters in parameter_comb: # Select the features based on the tuned percentile selected_ind = Fisher(X_train, y_train, best_percentile) X_train_selected, X_test_selected = X_train[selected_ind,:], X_test[selected_ind, :] clf.set_params(svc__C=parameters['C'], svc__gamma=parameters['gamma']) model = clf.fit(X_train_selected, y_train) y_predict = model.predict(X_test_selected) f1 = f1_score(y_predict, y_test) ``` It is done in a very similar way, except we tune the hyperparameter for SVC rather than the percentile of features to select. My questions are: In the current solution, I only involve 3. and 4. in the clf and do 1. and 2. kinda \"manually\" in two nested loops as described above. Is there any way to include all four steps in a pipeline and do the whole process at once? If it is okay to keep the first nested loop, then is it possible (and how) to simplify the next nested loop using a single pipeline ```py clf_all = Pipeline([('smote', SMOTE()), ('fisher', Fisher(percentile=best_percentile)), ('normal',preprocessing.StandardScaler()), ('svc',svm.SVC(class_weight='auto'))]) ``` and simply use GridSearchCV(clf_all, parameter_comb) for tuning? Please note that both SMOTE and Fisher (ranking criteria) have to be done only on the training data in each fold partition. Any comments would be much appreciated.
SMOTE and Fisher are shown below: ```py def Fscore(X, y, percentile=None): X_pos, X_neg = X[y==1], X[y==0] X_mean = X.mean(axis=0) X_pos_mean, X_neg_mean = X_pos.mean(axis=0), X_neg.mean(axis=0) deno = (1.0\/(shape(X_pos)[0]-1))*X_pos.var(axis=0) + (1.0\/(shape(X_neg)[0]-1))*X_neg.var(axis=0) num = (X_pos_mean - X_mean)**2 + (X_neg_mean - X_mean)**2 F = num\/deno sort_F = argsort(F)[::-1] n_feature = (float(percentile)\/100)*shape(X)[1] ind_feature = sort_F[:ceil(n_feature)] return(ind_feature) ``` SMOTE is from https:\/\/github.com\/blacklab\/nyan\/blob\/master\/shared_modules\/smote.py; it returns the synthesized data. I modified it to return the original input data stacked with the synthesized data, along with the original and synthesized labels. ```py def smote(X, y): n_pos, n_neg = sum(y==1), sum(y==0) n_syn = (n_neg-n_pos)\/float(n_pos) X_pos = X[y==1] X_syn = SMOTE(X_pos, int(round(n_syn))*100, 5) y_syn = np.ones(shape(X_syn)[0]) X, y = np.vstack([X, X_syn]), np.concatenate([y, y_syn]) return(X, y) ```", "response":"I don't know where your SMOTE() and Fisher() functions are coming from, but the answer is yes, you can definitely do this. In order to do so you will need to write a wrapper class around those functions though. The easiest way to do this is to inherit sklearn's BaseEstimator and TransformerMixin classes, see this for an example: http:\/\/scikit-learn.org\/stable\/auto_examples\/hetero_feature_union.html If this isn't making sense to you, post the details of at least one of your functions (the library it comes from or your code if you wrote it yourself) and we can go from there. EDIT: I apologize, I didn't look at your functions closely enough to realize that they transform your target in addition to your training data (i.e. both X and y). Pipeline does not support transformations to your target, so you will have to do them beforehand, as you originally were.
For your reference, here is what it would look like to write your custom class for your Fisher process which would work if the function itself did not need to affect your target variable. ``` >>> from sklearn.base import BaseEstimator, TransformerMixin >>> from sklearn.preprocessing import StandardScaler >>> from sklearn.svm import SVC >>> from sklearn.pipeline import Pipeline >>> from sklearn.grid_search import GridSearchCV >>> from sklearn.datasets import load_iris >>> >>> class Fisher(BaseEstimator, TransformerMixin): ... def __init__(self,percentile=0.95): ... self.percentile = percentile ... def fit(self, X, y): ... from numpy import shape, argsort, ceil ... X_pos, X_neg = X[y==1], X[y==0] ... X_mean = X.mean(axis=0) ... X_pos_mean, X_neg_mean = X_pos.mean(axis=0), X_neg.mean(axis=0) ... deno = (1.0\/(shape(X_pos)[0]-1))*X_pos.var(axis=0) + (1.0\/(shape(X_neg)[0]-1))*X_neg.var(axis=0) ... num = (X_pos_mean - X_mean)**2 + (X_neg_mean - X_mean)**2 ... F = num\/deno ... sort_F = argsort(F)[::-1] ... n_feature = (float(self.percentile)\/100)*shape(X)[1] ... self.ind_feature = sort_F[:ceil(n_feature)] ... return self ... def transform(self, x): ... return x[self.ind_feature,:] ... >>> >>> data = load_iris() >>> >>> pipeline = Pipeline([ ... ('fisher', Fisher()), ... ('normal',StandardScaler()), ... ('svm',SVC(class_weight='auto')) ... ]) >>> >>> grid = { ... 'fisher__percentile':[0.75,0.50], ... 'svm__C':[1,2] ... 
} >>> >>> model = GridSearchCV(estimator = pipeline, param_grid=grid, cv=2) >>> model.fit(data.data,data.target) Traceback (most recent call last): File \"\", line 1, in File \"\/Users\/dmcgarry\/anaconda\/lib\/python2.7\/site-packages\/sklearn\/grid_search.py\", line 596, in fit return self._fit(X, y, ParameterGrid(self.param_grid)) File \"\/Users\/dmcgarry\/anaconda\/lib\/python2.7\/site-packages\/sklearn\/grid_search.py\", line 378, in _fit for parameters in parameter_iterable File \"\/Users\/dmcgarry\/anaconda\/lib\/python2.7\/site-packages\/sklearn\/externals\/joblib\/parallel.py\", line 653, in __call__ self.dispatch(function, args, kwargs) File \"\/Users\/dmcgarry\/anaconda\/lib\/python2.7\/site-packages\/sklearn\/externals\/joblib\/parallel.py\", line 400, in dispatch job = ImmediateApply(func, args, kwargs) File \"\/Users\/dmcgarry\/anaconda\/lib\/python2.7\/site-packages\/sklearn\/externals\/joblib\/parallel.py\", line 138, in __init__ self.results = func(*args, **kwargs) File \"\/Users\/dmcgarry\/anaconda\/lib\/python2.7\/site-packages\/sklearn\/cross_validation.py\", line 1239, in _fit_and_score estimator.fit(X_train, y_train, **fit_params) File \"\/Users\/dmcgarry\/anaconda\/lib\/python2.7\/site-packages\/sklearn\/pipeline.py\", line 130, in fit self.steps[-1][-1].fit(Xt, y, **fit_params) File \"\/Users\/dmcgarry\/anaconda\/lib\/python2.7\/site-packages\/sklearn\/svm\/base.py\", line 149, in fit (X.shape[0], y.shape[0])) ValueError: X and y have incompatible shapes. X has 1 samples, but y has 75. ```", "best_answers_score":0.6636, "library_name":"scikit-learn", "question_url":"https:\/\/stackoverflow.com\/questions\/31259891\/put-customized-functions-in-sklearn-pipeline", "best_answers_votes":17, "question_length":4407, "response_length":3733 }, { "question":"Get nearest point to centroid, scikit-learn? I am using K-means for a clustering problem. 
I am trying to find the data point closest to the centroid, which I believe is called the medoid. Is there a way to do this in scikit-learn?", "response":"This is not the medoid, but here's something you can try: ``` >>> import numpy as np >>> from sklearn.cluster import KMeans >>> from sklearn.metrics import pairwise_distances_argmin_min >>> X = np.random.randn(10, 4) >>> km = KMeans(n_clusters=2).fit(X) >>> closest, _ = pairwise_distances_argmin_min(km.cluster_centers_, X) >>> closest array([0, 8]) ``` The array closest contains the index of the point in X that is closest to each centroid. So X[0] is the closest point in X to centroid 0, and X[8] is the closest to centroid 1.", "best_answers_score":0.6633, "library_name":"scikit-learn", "question_url":"https:\/\/stackoverflow.com\/questions\/21660937\/get-nearest-point-to-centroid-scikit-learn", "best_answers_votes":39, "question_length":242, "response_length":531 }, { "question":"Missing values in scikits machine learning Is it possible to have missing values in scikit-learn? How should they be represented? I couldn't find any documentation about that.", "response":"Missing values are simply not supported in scikit-learn. There has been discussion on the mailing list about this before, but no attempt to actually write code to handle them. Whatever you do, don't use NaN to encode missing values, since many of the algorithms refuse to handle samples containing NaNs. The above answer is outdated; the latest release of scikit-learn has a class Imputer that does simple, per-feature missing value imputation.
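As a quick sketch of that imputation step (note that in recent scikit-learn releases the class has been renamed to SimpleImputer in sklearn.impute; the toy array below is invented):

```python
import numpy as np
from sklearn.impute import SimpleImputer  # modern name for the old preprocessing.Imputer

# NaN marks the missing entries; each column is imputed from its own statistics
X = np.array([[1.0, 2.0],
              [np.nan, 3.0],
              [7.0, 6.0]])
imp = SimpleImputer(strategy='mean')   # also: 'median', 'most_frequent'
X_filled = imp.fit_transform(X)
print(X_filled)   # the NaN in column 0 becomes (1 + 7) / 2 = 4
```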
You can feed it arrays containing NaNs to have those replaced by the mean, median or mode of the corresponding feature.", "best_answers_score":0.6629, "library_name":"scikit-learn", "question_url":"https:\/\/stackoverflow.com\/questions\/9365982\/missing-values-in-scikits-machine-learning", "best_answers_votes":31, "question_length":176, "response_length":564 }, { "question":"Factor Loadings using sklearn I want the correlations between individual variables and principal components in python. I am using PCA in sklearn. I don't understand how can I achieve the loading matrix after I have decomposed my data? My code is here. ``` iris = load_iris() data, y = iris.data, iris.target pca = PCA(n_components=2) transformed_data = pca.fit(data).transform(data) eigenValues = pca.explained_variance_ratio_ ``` http:\/\/scikit-learn.org\/stable\/modules\/generated\/sklearn.decomposition.PCA.html doesn't mention how this can be achieved.", "response":"I think that @RickardSjogren is describing the eigenvectors, while @BigPanda is giving the loadings. There's a big difference: Loadings vs eigenvectors in PCA: when to use one or another?. I created this PCA class with a loadings method. Loadings, as given by pca.components_ * np.sqrt(pca.explained_variance_), are more analogous to coefficients in a multiple linear regression. I don't use .T here because in the PCA class linked above, the components are already transposed. numpy.linalg.svd produces u, s, and vt, where vt is the Hermetian transpose, so you first need to back into v with vt.T. There is also one other important detail: the signs (positive\/negative) on the components and loadings in sklearn.PCA may differ from packages such as R. 
More on that here: In sklearn.decomposition.PCA, why are components_ negative?.", "best_answers_score":0.6615, "library_name":"scikit-learn", "question_url":"https:\/\/stackoverflow.com\/questions\/21217710\/factor-loadings-using-sklearn", "best_answers_votes":20, "question_length":552, "response_length":832 }, { "question":"Can sklearn random forest directly handle categorical features? Say I have a categorical feature, color, which takes the values ['red', 'blue', 'green', 'orange'], and I want to use it to predict something in a random forest. If I one-hot encode it (i.e. I change it to four dummy variables), how do I tell sklearn that the four dummy variables are really one variable? Specifically, when sklearn is randomly selecting features to use at different nodes, it should either include the red, blue, green and orange dummies together, or it shouldn't include any of them. I've heard that there's no way to do this, but I'd imagine there must be a way to deal with categorical variables without arbitrarily coding them as numbers or something like that.", "response":"No, there isn't. Somebody's working on this and the patch might be merged into mainline some day, but right now there's no support for categorical variables in scikit-learn except dummy (one-hot) encoding.", "best_answers_score":0.6606, "library_name":"scikit-learn", "question_url":"https:\/\/stackoverflow.com\/questions\/24715230\/can-sklearn-random-forest-directly-handle-categorical-features", "best_answers_votes":68, "question_length":747, "response_length":205 }, { "question":"How to tune parameters of nested Pipelines by GridSearchCV in scikit-learn? Is it possible to tune parameters of nested Pipelines in scikit-learn? E.g.: ``` svm = Pipeline([ ('chi2', SelectKBest(chi2)), ('cls', LinearSVC(class_weight='auto')) ]) classifier = Pipeline([ ('vectorizer', TfIdfVectorizer()), ('ova_svm', OneVsRestClassifier(svm)) }) parameters = ? 
GridSearchCV(classifier, parameters) ``` If it's not possible to do this directly, what could be a workaround?", "response":"For the estimator that you have created you can get the list of parameters with their tags as follows. ``` import pprint as pp pp.pprint(sorted(classifier.get_params().keys())) ``` ['ova_svm', 'ova_svm__estimator', 'ova_svm__estimator__chi2', 'ova_svm__estimator__chi2__k', 'ova_svm__estimator__chi2__score_func', 'ova_svm__estimator__cls', 'ova_svm__estimator__cls__C', 'ova_svm__estimator__cls__class_weight', 'ova_svm__estimator__cls__dual', 'ova_svm__estimator__cls__fit_intercept', 'ova_svm__estimator__cls__intercept_scaling', 'ova_svm__estimator__cls__loss', 'ova_svm__estimator__cls__max_iter', 'ova_svm__estimator__cls__multi_class', 'ova_svm__estimator__cls__penalty', 'ova_svm__estimator__cls__random_state', 'ova_svm__estimator__cls__tol', 'ova_svm__estimator__cls__verbose', 'ova_svm__estimator__steps', 'ova_svm__n_jobs', 'steps', 'vectorizer', 'vectorizer__analyzer', 'vectorizer__binary', 'vectorizer__decode_error', 'vectorizer__dtype', 'vectorizer__encoding', 'vectorizer__input', 'vectorizer__lowercase', 'vectorizer__max_df', 'vectorizer__max_features', 'vectorizer__min_df', 'vectorizer__ngram_range', 'vectorizer__norm', 'vectorizer__preprocessor', 'vectorizer__smooth_idf', 'vectorizer__stop_words', 'vectorizer__strip_accents', 'vectorizer__sublinear_tf', 'vectorizer__token_pattern', 'vectorizer__tokenizer', 'vectorizer__use_idf', 'vectorizer__vocabulary'] From this list you can then set the parameters you want to do a GridSearchCV on.", "best_answers_score":0.66, "library_name":"scikit-learn", "question_url":"https:\/\/stackoverflow.com\/questions\/16437022\/how-to-tune-parameters-of-nested-pipelines-by-gridsearchcv-in-scikit-learn", "best_answers_votes":23, "question_length":471, "response_length":1463 }, { "question":"scikit-learn: how to scale back the 'y' predicted result I'm trying to learn scikit-learn and Machine Learning by using 
the Boston Housing Data Set. ``` # I splitted the initial dataset ('housing_X' and 'housing_y') from sklearn.cross_validation import train_test_split X_train, X_test, y_train, y_test = train_test_split(housing_X, housing_y, test_size=0.25, random_state=33) # I scaled those two datasets from sklearn.preprocessing import StandardScaler scalerX = StandardScaler().fit(X_train) scalery = StandardScaler().fit(y_train) X_train = scalerX.transform(X_train) y_train = scalery.transform(y_train) X_test = scalerX.transform(X_test) y_test = scalery.transform(y_test) # I created the model from sklearn import linear_model clf_sgd = linear_model.SGDRegressor(loss='squared_loss', penalty=None, random_state=42) train_and_evaluate(clf_sgd,X_train,y_train) ``` Based on this new model clf_sgd, I am trying to predict the y based on the first instance of X_train. ``` X_new_scaled = X_train[0] print (X_new_scaled) y_new = clf_sgd.predict(X_new_scaled) print (y_new) ``` However, the result is quite odd for me (1.34032174, instead of 20-30, the range of the price of the houses) ``` [-0.32076092 0.35553428 -1.00966618 -0.28784917 0.87716097 1.28834383 0.4759489 -0.83034371 -0.47659648 -0.81061061 -2.49222645 0.35062335 -0.39859013] [ 1.34032174] ``` I guess that this 1.34032174 value should be scaled back, but I am trying to figure out how to do it with no success. Any tip is welcome. Thank you very much.", "response":"You can use inverse_transform using your scalery object: ``` y_new_inverse = scalery.inverse_transform(y_new) ```", "best_answers_score":0.6595, "library_name":"scikit-learn", "question_url":"https:\/\/stackoverflow.com\/questions\/38058774\/scikit-learn-how-to-scale-back-the-y-predicted-result", "best_answers_votes":66, "question_length":1521, "response_length":113 }, { "question":"Custom transformer for sklearn Pipeline that alters both X and y I want to create my own transformer for use with the sklearn Pipeline. 
I am creating a class that implements both fit and transform methods. The purpose of the transformer will be to remove rows from the matrix that have more than a specified number of NaNs. The issue I am facing is how I can change both the X and y matrices that are passed to the transformer. I believe this has to be done in the fit method, since it has access to both X and y. Since Python passes arguments by assignment, once I reassign X to a new matrix with fewer rows, the reference to the original X is lost (and of course the same is true for y). Is it possible to maintain this reference? I\u2019m using a pandas DataFrame to easily drop the rows that have too many NaNs, though this may not be the right way to do it for my use case. The current code looks like this: ``` class Dropna(): # thresh is max number of NaNs allowed in a row def __init__(self, thresh=0): self.thresh = thresh def fit(self, X, y): total = X.shape[1] # +1 to account for 'y' being added to the dframe new_thresh = total + 1 - self.thresh df = pd.DataFrame(X) df['y'] = y df.dropna(thresh=new_thresh, inplace=True) X = df.drop('y', axis=1).values y = df['y'].values return self def transform(self, X): return X ```", "response":"You have to modify the internal code of sklearn Pipeline. We define a transformer that removes samples where at least one feature or the target is NaN during fitting (fit_transform), while during inference (transform) it removes only the samples where at least one feature is NaN. It is important to note that our transformer returns X and y from fit_transform, so we need to handle this behaviour in the sklearn Pipeline.
``` class Dropna(): def fit(self, X, y): return self def fit_transform(self, X, y): mask = (np.isnan(X).any(-1) | np.isnan(y)) if hasattr(X, 'loc'): X = X.loc[~mask] else: X = X[~mask] if hasattr(y, 'loc'): y = y.loc[~mask] else: y = y[~mask] return X, y ###### make fit_transform return X and y def transform(self, X): mask = np.isnan(X).any(-1) if hasattr(X, 'loc'): X = X.loc[~mask] else: X = X[~mask] return X ``` We only have to modify the original sklearn Pipeline in only two specific points in fit and in _fit method. The rest remains unchanged. ``` from sklearn import pipeline from sklearn.base import clone from sklearn.utils import _print_elapsed_time from sklearn.utils.validation import check_memory class Pipeline(pipeline.Pipeline): def _fit(self, X, y=None, **fit_params_steps): self.steps = list(self.steps) self._validate_steps() memory = check_memory(self.memory) fit_transform_one_cached = memory.cache(pipeline._fit_transform_one) for (step_idx, name, transformer) in self._iter( with_final=False, filter_passthrough=False ): if transformer is None or transformer == \"passthrough\": with _print_elapsed_time(\"Pipeline\", self._log_message(step_idx)): continue try: # joblib >= 0.12 mem = memory.location except AttributeError: mem = memory.cachedir finally: cloned_transformer = clone(transformer) if mem else transformer X, fitted_transformer = fit_transform_one_cached( cloned_transformer, X, y, None, message_clsname=\"Pipeline\", message=self._log_message(step_idx), **fit_params_steps[name], ) if isinstance(X, tuple): ###### unpack X if is tuple: X = (X,y) X, y = X self.steps[step_idx] = (name, fitted_transformer) return X, y def fit(self, X, y=None, **fit_params): fit_params_steps = self._check_fit_params(**fit_params) Xt = self._fit(X, y, **fit_params_steps) if isinstance(Xt, tuple): ###### unpack X if is tuple: X = (X,y) Xt, y = Xt with _print_elapsed_time(\"Pipeline\", self._log_message(len(self.steps) - 1)): if self._final_estimator != \"passthrough\": 
fit_params_last_step = fit_params_steps[self.steps[-1][0]] self._final_estimator.fit(Xt, y, **fit_params_last_step) return self ``` This is required in order to unpack the values generated by Dropna().fit_transform(X, y) in the new X and y. Here is the full pipeline at work: ``` from sklearn.linear_model import Ridge X = np.random.uniform(0,1, (100,3)) y = np.random.uniform(0,1, (100,)) X[np.random.uniform(0,1, (100)) < 0.1] = np.nan y[np.random.uniform(0,1, (100)) < 0.1] = np.nan pipe = Pipeline([('dropna', Dropna()), ('model', Ridge())]) pipe.fit(X, y) pipe.predict(X).shape ``` Another trial with a further intermediate preprocessing step: ``` from sklearn.preprocessing import StandardScaler pipe = Pipeline([('dropna', Dropna()), ('scaler', StandardScaler()), ('model', Ridge())]) pipe.fit(X, y) pipe.predict(X).shape ``` More complex behaviors can be achieved with other simple modifications according to the needs. If you are interested also in Pipeline().fit_transform or Pipeline().fit_predict you need to operate the same changes.", "best_answers_score":0.6588, "library_name":"scikit-learn", "question_url":"https:\/\/stackoverflow.com\/questions\/25539311\/custom-transformer-for-sklearn-pipeline-that-alters-both-x-and-y", "best_answers_votes":13, "question_length":1319, "response_length":3463 }, { "question":"Scikit-learn balanced subsampling I'm trying to create N balanced random subsamples of my large unbalanced dataset. Is there a way to do this simply with scikit-learn \/ pandas or do I have to implement it myself? Any pointers to code that does this? These subsamples should be random and can be overlapping as I feed each to separate classifier in a very large ensemble of classifiers. In Weka there is tool called spreadsubsample, is there equivalent in sklearn? 
http:\/\/wiki.pentaho.com\/display\/DATAMINING\/SpreadSubsample (I know about weighting but that's not what I'm looking for.)", "response":"Here is my first version that seems to be working fine, feel free to copy or make suggestions on how it could be more efficient (I have quite a long experience with programming in general but not that long with Python or NumPy). This function creates a single random balanced subsample. edit: The subsample size now samples down minority classes, this should probably be changed. ``` def balanced_subsample(x,y,subsample_size=1.0): class_xs = [] min_elems = None for yi in np.unique(y): elems = x[(y == yi)] class_xs.append((yi, elems)) if min_elems == None or elems.shape[0] < min_elems: min_elems = elems.shape[0] use_elems = min_elems if subsample_size < 1: use_elems = int(min_elems*subsample_size) xs = [] ys = [] for ci,this_xs in class_xs: if len(this_xs) > use_elems: np.random.shuffle(this_xs) x_ = this_xs[:use_elems] y_ = np.empty(use_elems) y_.fill(ci) xs.append(x_) ys.append(y_) xs = np.concatenate(xs) ys = np.concatenate(ys) return xs,ys ``` For anyone trying to make the above work with a Pandas DataFrame, you need to make a couple of changes: Replace the np.random.shuffle line with this_xs = this_xs.reindex(np.random.permutation(this_xs.index)) Replace the np.concatenate lines with xs = pd.concat(xs) ys = pd.Series(data=np.concatenate(ys),name='target')", "best_answers_score":0.6579, "library_name":"scikit-learn", "question_url":"https:\/\/stackoverflow.com\/questions\/23455728\/scikit-learn-balanced-subsampling", "best_answers_votes":29, "question_length":584, "response_length":1085 }, { "question":"How can I capture return value with Python timeit module? I'm running several machine learning algorithms with sklearn in a for loop and want to see how long each of them takes. The problem is I also need to return a value and don't want to have to run it more than once because each algorithm takes so long. Is there a way to capture the return value 'clf' using python's timeit module or a similar one with a function like this...
``` def RandomForest(train_input, train_output): clf = ensemble.RandomForestClassifier(n_estimators=10) clf.fit(train_input, train_output) return clf ``` when I call the function like this ``` t = Timer(lambda : RandomForest(trainX,trainy)) print t.timeit(number=1) ``` P.S. I also don't want to set a global 'clf' because I might want to do multithreading or multiprocessing later.", "response":"For Python 3.5 you can override the value of timeit.template: ``` timeit.template = \"\"\" def inner(_it, _timer{init}): {setup} _t0 = _timer() for _i in _it: retval = {stmt} _t1 = _timer() return _t1 - _t0, retval \"\"\" ``` unutbu's answer works for Python 3.4 but not 3.5, as the _template_func function appears to have been removed in 3.5.", "best_answers_score":0.6567, "library_name":"scikit-learn", "question_url":"https:\/\/stackoverflow.com\/questions\/24812253\/how-can-i-capture-return-value-with-python-timeit-module", "best_answers_votes":25, "question_length":812, "response_length":334 }, { "question":"What does the CV stand for in sklearn.linear_model.LogisticRegressionCV? scikit-learn has two logistic regression functions: sklearn.linear_model.LogisticRegression sklearn.linear_model.LogisticRegressionCV I'm just curious what the CV stands for in the second one. The only acronym I know in ML that matches \"CV\" is cross-validation, but I'm guessing that's not it, since that would be achieved in scikit-learn with a wrapper function, not as part of the logistic regression function itself (I think).", "response":"You are right in guessing that the latter allows the user to perform cross-validation. The user can pass the number of folds as an argument cv of the function to perform k-fold cross-validation (default is 10 folds with StratifiedKFold).
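A minimal sketch of the cross-validated variant on a toy dataset (cv=5 here is just for illustration):

```python
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegressionCV

X, y = load_iris(return_X_y=True)
# cv=5 -> 5-fold stratified cross-validation over the built-in grid of C values
clf = LogisticRegressionCV(cv=5, max_iter=1000).fit(X, y)
print(clf.C_)   # regularization strength chosen by cross-validation, per class
```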
I would recommend reading the documentation for the functions LogisticRegression and LogisticRegressionCV.", "best_answers_score":0.6567, "library_name":"scikit-learn", "question_url":"https:\/\/stackoverflow.com\/questions\/46507606\/what-does-the-cv-stand-for-in-sklearn-linear-model-logisticregressioncv", "best_answers_votes":18, "question_length":502, "response_length":343 }, { "question":"[sklearn][standardscaler] can I inverse the standardscaler for the model output? I have some data structured as below, trying to predict t from the features. ``` train_df t: time to predict f1: feature1 f2: feature2 f3:...... ``` Can t be scaled with StandardScaler, so I instead predict t' and then inverse the StandardScaler to get back the real time? For example: ``` from sklearn.preprocessing import StandardScaler scaler = StandardScaler() scaler.fit(train_df['t']) train_df['t']= scaler.transform(train_df['t']) ``` After this, I would like to: run the regression model, check the score, and check the predicted t' against the real time value using the inverse StandardScaler. Is this possible?", "response":"Yeah, and it's conveniently called inverse_transform. The documentation provides examples of its use.", "best_answers_score":0.656, "library_name":"scikit-learn", "question_url":"https:\/\/stackoverflow.com\/questions\/44552031\/sklearnstandardscaler-can-i-inverse-the-standardscaler-for-the-model-output", "best_answers_votes":25, "question_length":686, "response_length":101 }, { "question":"how to explain the decision tree from scikit-learn I have two problems with understanding the result of a decision tree from scikit-learn. For example, this is one of my decision trees: My question is: how can I use the tree? The first question is: if a sample satisfies the condition, then it goes to the LEFT branch (if it exists), otherwise it goes RIGHT. In my case, if a sample has X[7] > 63521.3984, then it will go to the green box. Correct?
The second question is: when a sample reaches the leaf node, how can I know which category it belongs to? In this example, I have three categories to classify. In the red box, 91, 212, and 113 samples satisfy the condition, respectively. But how can I decide the category? I know there is a function clf.predict(sample) to tell the category. Can I do that from the graph? Many thanks.", "response":"The value line in each box is telling you how many samples at that node fall into each category, in order. That's why, in each box, the numbers in value add up to the number shown in sample. For instance, in your red box, 91+212+113=416. So this means if you reach this node, there were 91 data points in category 1, 212 in category 2, and 113 in category 3. If you were going to predict the outcome for a new data point that reached that leaf in the decision tree, you would predict category 2, because that is the most common category for samples at that node.", "best_answers_score":0.6555, "library_name":"scikit-learn", "question_url":"https:\/\/stackoverflow.com\/questions\/23557545\/how-to-explain-the-decision-tree-from-scikit-learn", "best_answers_votes":34, "question_length":866, "response_length":562 }, { "question":"TFIDF for Large Dataset I have a corpus which has around 8 million news articles, and I need to get the TFIDF representation of them as a sparse matrix. I have been able to do that using scikit-learn for a relatively lower number of samples, but I believe it can't be used for such a huge dataset, as it loads the input matrix into memory first and that's an expensive process. Does anyone know what would be the best way to extract the TFIDF vectors for large datasets?", "response":"Gensim has an efficient tf-idf model and does not need to have everything in memory at once. Your corpus simply needs to be an iterable, so it does not need to have the whole corpus in memory at a time.
The make_wiki script runs over Wikipedia in about 50m on a laptop according to the comments.", "best_answers_score":0.6554, "library_name":"scikit-learn", "question_url":"https:\/\/stackoverflow.com\/questions\/25145552\/tfidf-for-large-dataset", "best_answers_votes":32, "question_length":468, "response_length":295 }, { "question":"Web application that uses scikit-learn I have locally trained a sklearn classifier and I have to create a simple web application that demonstrate its use. I'm a complete noob on web app development and I don't want to waste hours on creating a web app using a framework that doesn't support the modules I'm using. What do you suggest would be a good approach for this task? What web app development framework should I use (if any)? Do I have to dive into things like Heroku , django etc. or is there more simple and quicker solutions for a simple scientific demo? My thought was to take the classifier I trained, pickle it and un-pickle it on the server, then to run classify from the server, but I'm not sure where to begin.", "response":"If this is just for a demo, train your classifier offline, pickle the model and then use a simple python web framework such as flask or bottle to unpickle the model at server startup time and call the predict function in an HTTP request handler. django is a feature complete framework hence is longer to learn than flask or bottle but it has a great documentation and a larger community. heroku is a service to host your application in the cloud. It's possible to host flask applications on heroku, here is a simple template project + instructions to do so. 
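A minimal sketch of that pattern (the model.pkl file name and the /predict route are invented for illustration, and a small iris model stands in for the real classifier):

```python
import pickle
from flask import Flask, jsonify, request
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression

# offline step: train and pickle a stand-in model
X, y = load_iris(return_X_y=True)
with open('model.pkl', 'wb') as f:
    pickle.dump(LogisticRegression(max_iter=1000).fit(X, y), f)

# web app: unpickle the model once at startup, call predict() in the handler
app = Flask(__name__)
with open('model.pkl', 'rb') as f:
    model = pickle.load(f)

@app.route('/predict', methods=['POST'])
def predict():
    features = request.get_json()['features']   # e.g. [[5.1, 3.5, 1.4, 0.2]]
    return jsonify(prediction=model.predict(features).tolist())
```

For a demo, app.run() (or app.test_client() in a script) is enough to exercise the endpoint without any deployment setup.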
For \"production\" setups I would advise you not to use pickle but to write your own persistence layer for the machine learning model so as to have full control on the parameters you store and be more robust to library upgrades that might break the unpickling of old models.", "best_answers_score":0.6533, "library_name":"scikit-learn", "question_url":"https:\/\/stackoverflow.com\/questions\/11600471\/web-application-that-uses-scikit-learn", "best_answers_votes":19, "question_length":725, "response_length":831 }, { "question":"How to give column names after one-hot encoding with sklearn? Here is my question, I hope someone can help me to figure it out.. To explain, there are more than 10 categorical columns in my data set and each of them has 200-300 categories. I want to convert them into binary values. For that I first used a label encoder to convert string categories into numbers. The Label Encoder code and the output is shown below. After Label Encoder, I used One Hot Encoder from scikit-learn again and it worked. BUT THE PROBLEM IS, I need column names after one hot encoder. For example, column A with categorical values before encoding. A = [1,2,3,4,..] It should be like that after encoding, A-1, A-2, A-3 Does anyone know how to assign column names (old column name - value name or number) after one hot encoding? Here is my one hot encoding and its output; I need columns with names because I trained an ANN, but every time data comes up I cannot convert all past data again and again. So, I want to add just the new ones every time. Thanks anyway.", "response":"You can get the column names using the .get_feature_names() attribute. ``` >>> ohenc.get_feature_names() >>> x_cat_df.columns = ohenc.get_feature_names() ``` Detailed example is here.
Update from Version 1.0, use get_feature_names_out", "best_answers_score":0.6519, "library_name":"scikit-learn", "question_url":"https:\/\/stackoverflow.com\/questions\/56338847\/how-to-give-column-names-after-one-hot-encoding-with-sklearn", "best_answers_votes":39, "question_length":1036, "response_length":230 }, { "question":"Sklearn pass fit() parameters to xgboost in pipeline Similar to How to pass a parameter to only one part of a pipeline object in scikit learn? I want to pass parameters to only one part of a pipeline. Usually, it should work fine like: ``` estimator = XGBClassifier() pipeline = Pipeline([ ('clf', estimator) ]) ``` and executed like ``` pipeline.fit(X_train, y_train, clf__early_stopping_rounds=20) ``` but it fails with: ``` \/usr\/local\/lib\/python3.5\/site-packages\/sklearn\/pipeline.py in fit(self, X, y, **fit_params) 114 \"\"\" 115 Xt, yt, fit_params = self._pre_transform(X, y, **fit_params) --> 116 self.steps[-1][-1].fit(Xt, yt, **fit_params) 117 return self 118 \/usr\/local\/lib\/python3.5\/site-packages\/xgboost-0.6-py3.5.egg\/xgboost\/sklearn.py in fit(self, X, y, sample_weight, eval_set, eval_metric, early_stopping_rounds, verbose) 443 early_stopping_rounds=early_stopping_rounds, 444 evals_result=evals_result, obj=obj, feval=feval, --> 445 verbose_eval=verbose) 446 447 self.objective = xgb_options[\"objective\"] \/usr\/local\/lib\/python3.5\/site-packages\/xgboost-0.6-py3.5.egg\/xgboost\/training.py in train(params, dtrain, num_boost_round, evals, obj, feval, maximize, early_stopping_rounds, evals_result, verbose_eval, learning_rates, xgb_model, callbacks) 201 evals=evals, 202 obj=obj, feval=feval, --> 203 xgb_model=xgb_model, callbacks=callbacks) 204 205 \/usr\/local\/lib\/python3.5\/site-packages\/xgboost-0.6-py3.5.egg\/xgboost\/training.py in _train_internal(params, dtrain, num_boost_round, evals, obj, feval, xgb_model, callbacks) 97 end_iteration=num_boost_round, 98 rank=rank, ---> 99 
evaluation_result_list=evaluation_result_list)) 100 except EarlyStopException: 101 break \/usr\/local\/lib\/python3.5\/site-packages\/xgboost-0.6-py3.5.egg\/xgboost\/callback.py in callback(env) 196 def callback(env): 197 \"\"\"internal function\"\"\" --> 198 score = env.evaluation_result_list[-1][1] 199 if len(state) == 0: 200 init(env) IndexError: list index out of range ``` Whereas a ``` estimator.fit(X_train, y_train, early_stopping_rounds=20) ``` works just fine.", "response":"For the early stopping rounds, you must always specify the validation set given by the argument eval_set. Here is how the error in your code can be fixed. ``` pipeline.fit(X_train, y_train, clf__early_stopping_rounds=20, clf__eval_set=[(test_X, test_y)]) ```", "best_answers_score":0.6517, "library_name":"scikit-learn", "question_url":"https:\/\/stackoverflow.com\/questions\/40329576\/sklearn-pass-fit-parameters-to-xgboost-in-pipeline", "best_answers_votes":20, "question_length":2046, "response_length":258 }, { "question":"Scikit-learn - feature reduction using RFECV and GridSearch. Where are the coefficients stored? I am using Scikit-learn RFECV to select most significant features for a logistic regression using a Cross Validation. 
Assume X is a [n,x] dataframe of features, and y represents the response variable: ``` from sklearn.pipeline import make_pipeline from sklearn.grid_search import GridSearchCV from sklearn.cross_validation import StratifiedKFold from sklearn import preprocessing from sklearn.feature_selection import RFECV import sklearn import sklearn.linear_model as lm import sklearn.grid_search as gs # Create a logistic regression estimator logreg = lm.LogisticRegression() # Use RFECV to pick best features, using Stratified Kfold rfecv = RFECV(estimator=logreg, cv=StratifiedKFold(y, 3), scoring='roc_auc') # Fit the features to the response variable rfecv.fit(X, y) # Put the best features into new df X_new X_new = rfecv.transform(X) # pipe = make_pipeline(preprocessing.StandardScaler(), lm.LogisticRegression()) # Define a range of hyper parameters for grid search C_range = 10.**np.arange(-5, 1) penalty_options = ['l1', 'l2'] skf = StratifiedKFold(y, 3) param_grid = dict(logisticregression__C=C_range, logisticregression__penalty=penalty_options) grid = GridSearchCV(pipe, param_grid, cv=skf, scoring='roc_auc') grid.fit(X_new, y) ``` Two questions: a) Is this the correct process for feature, hyper-parameter selection and fitting? b) Where can I find the fitted coefficients for the selected features?", "response":"Is this the correct process for feature selection? This is ONE of the many ways of feature selection. Recursive feature elimination is an automated approach to this, others are listed in scikit.learn documentation. They have different pros and cons, and usually feature selection is best achieved by also involving common sense and trying models with different features. RFE is a quick way of selecting a good set of features, but does not necessarily give you the ultimately best. By the way, you don't need to build your StratifiedKFold separately. 
If you just set the cv parameter to cv=3, both RFECV and GridSearchCV will automatically use StratifiedKFold if the y values are binary or multiclass, which I'm assuming is most likely the case since you are using LogisticRegression. You can also combine ``` # Fit the features to the response variable rfecv.fit(X, y) # Put the best features into new df X_new X_new = rfecv.transform(X) ``` into ``` X_new = rfecv.fit_transform(X, y) ``` Is this the correct process for hyper-parameter selection? GridSearchCV is basically an automated way of systematically trying a whole set of combinations of model parameters and picking the best among these according to some performance metric. It's a good way of finding well-suited parameters, yes. Is this the correct process for fitting? Yes, this is a valid way of fitting the model. When you call grid.fit(X_new, y), it makes a grid of LogisticRegression estimators (each with a set of parameters that are tried) and fits each of them. It will keep the one with the best performance under grid.best_estimator_, the parameters of this estimator in grid.best_params_ and the performance score for this estimator under grid.best_score_. It will return itself, and not the best estimator. Remember that with incoming new X values that you will use the model to predict on, you have to apply the transform with the fitted RFECV model. So, you can actually add this step to the pipeline as well. Where can I find the fitted coefficients for the selected features? The grid.best_estimator_ attribute is a LogisticRegression object with all this information, so grid.best_estimator_.coef_ has all the coefficients (and grid.best_estimator_.intercept_ is the intercept). 
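For instance, on toy data (using the newer sklearn.model_selection import path rather than the deprecated sklearn.grid_search used in the question):

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import GridSearchCV

X, y = make_classification(n_samples=200, n_features=5, random_state=0)
grid = GridSearchCV(LogisticRegression(max_iter=1000),
                    {"C": [0.1, 1.0, 10.0]}, cv=3, scoring="roc_auc")
grid.fit(X, y)

best = grid.best_estimator_          # the refitted LogisticRegression
print(grid.best_params_)             # the winning parameter combination
print(best.coef_)                    # one coefficient per feature
print(best.intercept_)
```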
Note that to be able to get this grid.best_estimator_, the refit parameter on GridSearchCV needs to be set to True, but this is the default anyway.", "best_answers_score":0.65, "library_name":"scikit-learn", "question_url":"https:\/\/stackoverflow.com\/questions\/31059123\/scikit-learn-feature-reduction-using-rfecv-and-gridsearch-where-are-the-coeff", "best_answers_votes":31, "question_length":1514, "response_length":2406 }, { "question":"How does the class_weight parameter in scikit-learn work? I am having a lot of trouble understanding how the class_weight parameter in scikit-learn's Logistic Regression operates. The Situation I want to use logistic regression to do binary classification on a very unbalanced data set. The classes are labelled 0 (negative) and 1 (positive) and the observed data is in a ratio of about 19:1 with the majority of samples having negative outcome. First Attempt: Manually Preparing Training Data I split the data I had into disjoint sets for training and testing (about 80\/20). Then I randomly sampled the training data by hand to get training data in different proportions than 19:1; from 2:1 -> 16:1. I then trained logistic regression on these different training data subsets and plotted recall (= TP\/(TP+FN)) as a function of the different training proportions. Of course, the recall was computed on the disjoint TEST samples which had the observed proportions of 19:1. Note, although I trained the different models on different training data, I computed recall for all of them on the same (disjoint) test data. The results were as expected: the recall was about 60% at 2:1 training proportions and fell off rather fast by the time it got to 16:1. There were several proportions 2:1 -> 6:1 where the recall was decently above 5%. Second Attempt: Grid Search Next, I wanted to test different regularization parameters and so I used GridSearchCV and made a grid of several values of the C parameter as well as the class_weight parameter. 
To translate my n:m proportions of negative:positive training samples into the dictionary language of class_weight I thought that I just specify several dictionaries as follows: ``` { 0:0.67, 1:0.33 } #expected 2:1 { 0:0.75, 1:0.25 } #expected 3:1 { 0:0.8, 1:0.2 } #expected 4:1 ``` and I also included None and auto. This time the results were totally wacked. All my recalls came out tiny (< 0.05) for every value of class_weight except auto. So I can only assume that my understanding of how to set the class_weight dictionary is wrong. Interestingly, the class_weight value of 'auto' in the grid search was around 59% for all values of C, and I guessed it balances to 1:1? My Questions How do you properly use class_weight to achieve different balances in training data from what you actually give it? Specifically, what dictionary do I pass to class_weight to use n:m proportions of negative:positive training samples? If you pass various class_weight dictionaries to GridSearchCV, during cross-validation will it rebalance the training fold data according to the dictionary but use the true given sample proportions for computing my scoring function on the test fold? This is critical since any metric is only useful to me if it comes from data in the observed proportions. What does the auto value of class_weight do as far as proportions? I read the documentation and I assume \"balances the data inversely proportional to their frequency\" just means it makes it 1:1. Is this correct? If not, can someone clarify?", "response":"First off, it might not be good to just go by recall alone. You can simply achieve a recall of 100% by classifying everything as the positive class. I usually suggest using AUC for selecting parameters, and then finding a threshold for the operating point (say a given precision level) that you are interested in. For how class_weight works: It penalizes mistakes in samples of class[i] with class_weight[i] instead of 1. 
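A quick way to see the effect on a toy imbalanced problem (the exact prediction counts depend on the random data, but the direction of the change does not):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.RandomState(0)
# ~19:1 imbalance: class 1 is rare and shifted to the right
X = np.r_[rng.normal(0, 1, 950), rng.normal(2, 1, 50)].reshape(-1, 1)
y = np.r_[np.zeros(950), np.ones(50)]

plain = LogisticRegression().fit(X, y)
upweighted = LogisticRegression(class_weight={0: 0.1, 1: 0.9}).fit(X, y)

# up-weighting the rare class makes the model predict it much more often
print((plain.predict(X) == 1).sum(), (upweighted.predict(X) == 1).sum())
```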
So higher class-weight means you want to put more emphasis on a class. From what you say it seems class 0 is 19 times more frequent than class 1. So you should increase the class_weight of class 1 relative to class 0, say {0:.1, 1:.9}. If the class_weight doesn't sum to 1, it will basically change the regularization parameter. For how class_weight=\"auto\" works, you can have a look at this discussion. In the dev version you can use class_weight=\"balanced\", which is easier to understand: it basically means replicating the smaller class until you have as many samples as in the larger one, but in an implicit way.", "best_answers_score":0.6478, "library_name":"scikit-learn", "question_url":"https:\/\/stackoverflow.com\/questions\/30972029\/how-does-the-class-weight-parameter-in-scikit-learn-work", "best_answers_votes":181, "question_length":3057, "response_length":1038 }, { "question":"AttributeError: 'str' object has no attribute 'decode' in fitting Logistic Regression Model I am currently trying to create a binary classification using Logistic regression. Currently I am in determining the feature importance. I already did the data preprocessing (One Hot Encoding and sampling) and ran it with XGBoost and RandomFOrestClassifier, no problem However, when I tried to fit a LogisticRegression model (below is my code in Notebook), ``` from sklearn.linear_model import LogisticRegression #Logistic Regression # fit the model model = LogisticRegression() # fit the model model.fit(np.array(X_over), np.array(y_over)) # get importance importance = model.coef_[0] # summarize feature importance df_imp = pd.DataFrame({'feature':list(X_over.columns), 'importance':importance}) display(df_imp.sort_values('importance', ascending=False).head(20)) # plot feature importance plt.bar(list(X_over.columns), importance) plt.show() ``` it gave an error ``` ... 
~\\AppData\\Local\\Continuum\\anaconda3\\lib\\site-packages\\joblib\\parallel.py in (.0) 223 with parallel_backend(self._backend, n_jobs=self._n_jobs): 224 return [func(*args, **kwargs) --> 225 for func, args, kwargs in self.items] 226 227 def __len__(self): ~\\AppData\\Local\\Continuum\\anaconda3\\lib\\site-packages\\sklearn\\linear_model\\_logistic.py in _logistic_regression_path(X, y, pos_class, Cs, fit_intercept, max_iter, tol, verbose, solver, coef, class_weight, dual, penalty, intercept_scaling, multi_class, random_state, check_input, max_squared_sum, sample_weight, l1_ratio) 762 n_iter_i = _check_optimize_result( 763 solver, opt_res, max_iter, --> 764 extra_warning_msg=_LOGISTIC_SOLVER_CONVERGENCE_MSG) 765 w0, loss = opt_res.x, opt_res.fun 766 elif solver == 'newton-cg': ~\\AppData\\Local\\Continuum\\anaconda3\\lib\\site-packages\\sklearn\\utils\\optimize.py in _check_optimize_result(solver, result, max_iter, extra_warning_msg) 241 \" https:\/\/scikit-learn.org\/stable\/modules\/\" 242 \"preprocessing.html\" --> 243 ).format(solver, result.status, result.message.decode(\"latin1\")) 244 if extra_warning_msg is not None: 245 warning_msg += \"\\n\" + extra_warning_msg AttributeError: 'str' object has no attribute 'decode' ``` I googled it and mostly all the responses said that this error is because the scikit-learn library tried to decode an already decoded string. But I don't know how to solve it in my case here. 
I made sure all my data is either integer or float64, and no strings.", "response":"I tried to upgrade my scikit-learn using the below command, still, that didn't solve the AttributeError: 'str' object has no attribute 'decode' issue ``` pip install scikit-learn -U ``` Finally, below code snippet solved the issue, add the solver as liblinear ``` model = LogisticRegression(solver='liblinear') ```", "best_answers_score":0.645, "library_name":"scikit-learn", "question_url":"https:\/\/stackoverflow.com\/questions\/65682019\/attributeerror-str-object-has-no-attribute-decode-in-fitting-logistic-regre", "best_answers_votes":66, "question_length":2437, "response_length":314 }, { "question":"Get feature importance from GridSearchCV Is there a way to get feature importance from a sklearn's GridSearchCV? For example : ``` from sklearn.model_selection import GridSearchCV print(\"starting grid search ......\") optimized_GBM = GridSearchCV(LGBMRegressor(), params, cv=3, n_jobs=-1) # optimized_GBM.fit(tr, yvar) preds2 = optimized_GBM.predict(te) ``` Is there a way I can access feature importance ? Maybe something like ``` optimized_GBM.feature_importances_ ```", "response":"This one works ``` optimized_GBM.best_estimator_.feature_importances_ ```", "best_answers_score":0.6449, "library_name":"scikit-learn", "question_url":"https:\/\/stackoverflow.com\/questions\/48377296\/get-feature-importance-from-gridsearchcv", "best_answers_votes":37, "question_length":469, "response_length":73 }, { "question":"find the \"elbow point\" on an optimization curve with Python i have a list of points which are the inertia values of a kmeans algorithm. To determine the optimum amount of clusters i need to find the point, where this curve starts to flatten. 
Data example Here is how my list of values is created and filled: ``` sum_squared_dist = [] K = range(1,50) for k in K: km = KMeans(n_clusters=k, random_state=0) km = km.fit(normalized_modeling_data) sum_squared_dist.append(km.inertia_) print(sum_squared_dist) ``` How can i find a point, where the pitch of this curve increases (the curve is falling, so the first derivation is negative)? My approach ``` derivates = [] for i in range(len(sum_squared_dist)): derivates.append(sum_squared_dist[i] - sum_squared_dist[i-1]) ``` I want to find the optimum number of clusters any given data using the elbow method. Could someone help me how i can find the point where the list of the inertia values starts to flatten? Edit Datapoints: ``` [7342.1301373073857, 6881.7109460930769, 6531.1657905495022, 6356.2255554679778, 6209.8382535595829, 6094.9052166741121, 5980.0191582610196, 5880.1869867848218, 5779.8957906367368, 5691.1879324562778, 5617.5153566271356, 5532.2613232619951, 5467.352265375117, 5395.4493783888756, 5345.3459908298091, 5290.6769823693812, 5243.5271656371888, 5207.2501206569532, 5164.9617535255456] ``` Graph:", "response":"I worked on a Python package modeled after the Kneedle algorithm. It finds x=5 as the point where the curve starts to flatten. The documentation and the paper discuss the algorithm for choosing the knee point in more detail. 
``` y = [7342.1301373073857, 6881.7109460930769, 6531.1657905495022, 6356.2255554679778, 6209.8382535595829, 6094.9052166741121, 5980.0191582610196, 5880.1869867848218, 5779.8957906367368, 5691.1879324562778, 5617.5153566271356, 5532.2613232619951, 5467.352265375117, 5395.4493783888756, 5345.3459908298091, 5290.6769823693812, 5243.5271656371888, 5207.2501206569532, 5164.9617535255456] x = range(1, len(y)+1) from kneed import KneeLocator kn = KneeLocator(x, y, curve='convex', direction='decreasing') print(kn.knee) 5 import matplotlib.pyplot as plt plt.xlabel('number of clusters k') plt.ylabel('Sum of squared distances') plt.plot(x, y, 'bx-') plt.vlines(kn.knee, plt.ylim()[0], plt.ylim()[1], linestyles='dashed') ```", "best_answers_score":0.6439, "library_name":"scikit-learn", "question_url":"https:\/\/stackoverflow.com\/questions\/51762514\/find-the-elbow-point-on-an-optimization-curve-with-python", "best_answers_votes":51, "question_length":1367, "response_length":948 }, { "question":"Facing ValueError: Target is multiclass but average='binary' I'm trying to use Naive Bayes algorithm for my dataset. I'm able to find out the accuracy but trying to find out precision and recall for the same. But, it is throwing the following error: ```none ValueError: Target is multiclass but average='binary'. Please choose another average setting. ``` Can anyone please suggest me how to proceed with it. I have tried using average ='micro' in the precision and the recall scores. It worked without any errors but it is giving the same score for accuracy, precision, recall. 
My dataset: train_data.csv: ```none review,label Colors & clarity is superb,positive Sadly the picture is not nearly as clear or bright as my 40 inch Samsung,negative ``` test_data.csv: ```none review,label The picture is clear and beautiful,positive Picture is not clear,negative ``` My code: ```py import pandas as pd from sklearn.feature_extraction.text import CountVectorizer from sklearn.naive_bayes import MultinomialNB from sklearn.metrics import precision_score from sklearn.metrics import recall_score from sklearn.metrics import confusion_matrix X_train, y_train = pd.read_csv('train_data.csv') X_test, y_test = pd.read_csv('test_data.csv') vec = CountVectorizer() X_train_transformed = vec.fit_transform(X_train) X_test_transformed = vec.transform(X_test) clf = MultinomialNB() clf.fit(X_train_transformed, y_train) score = clf.score(X_test_transformed, y_test) y_pred = clf.predict(X_test_transformed) cm = confusion_matrix(y_test, y_pred) precision = precision_score(y_test, y_pred, pos_label='positive') recall = recall_score(y_test, y_pred, pos_label='positive') ```", "response":"You need to add the 'average' param. According to the documentation: average : string, [None, \u2018binary\u2019 (default), \u2018micro\u2019, \u2018macro\u2019, \u2018samples\u2019, \u2018weighted\u2019] This parameter is required for multiclass\/multilabel targets. If None, the scores for each class are returned. Otherwise, this determines the type of averaging performed on the data: Do this: ``` print(\"Precision Score : \",precision_score(y_test, y_pred, pos_label='positive' average='micro')) print(\"Recall Score : \",recall_score(y_test, y_pred, pos_label='positive' average='micro')) ``` Replace 'micro' with any one of the above options except 'binary'. Also, in the multiclass setting, there is no need to provide the 'pos_label' as it will be anyways ignored. Update for comment: Yes, they can be equal. 
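A quick check of that equality on a toy single-label multiclass example:

```python
from sklearn.metrics import accuracy_score, precision_score, recall_score

y_true = [0, 1, 2, 2, 1, 0]
y_pred = [0, 2, 2, 2, 1, 1]

p = precision_score(y_true, y_pred, average="micro")
r = recall_score(y_true, y_pred, average="micro")
a = accuracy_score(y_true, y_pred)
print(p, r, a)  # all three equal the fraction of correct predictions
```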
It's given in the user guide here: Note that for \u201cmicro\u201d-averaging in a multiclass setting with all labels included will produce equal precision, recall and F, while \u201cweighted\u201d averaging may produce an F-score that is not between precision and recall.", "best_answers_score":0.6435, "library_name":"scikit-learn", "question_url":"https:\/\/stackoverflow.com\/questions\/52269187\/facing-valueerror-target-is-multiclass-but-average-binary", "best_answers_votes":55, "question_length":1660, "response_length":1014 }, { "question":"Reshape your data either using array.reshape(-1, 1) if your data has a single feature or array.reshape(1, -1) if it contains a single sample While I am predicting one sample from my data, it gives a reshape error even though my model has an equal number of rows. Here is my code: ```py import pandas as pd from sklearn.linear_model import LinearRegression import numpy as np x = np.array([2.0 , 2.4, 1.5, 3.5, 3.5, 3.5, 3.5, 3.7, 3.7]) y = np.array([196, 221, 136, 255, 244, 230, 232, 255, 267]) lr = LinearRegression() lr.fit(x,y) print(lr.predict(2.4)) ``` The error is ```none if it contains a single sample.\".format(array)) ValueError: Expected 2D array, got scalar array instead: array=2.4. Reshape your data either using array.reshape(-1, 1) if your data has a single feature or array.reshape(1, -1) if it contains a single sample. ```", "response":"You should reshape your X to be a 2D array, not a 1D array. Fitting a model requires a 2D array.
i.e (n_samples, n_features) ``` x = np.array([2.0 , 2.4, 1.5, 3.5, 3.5, 3.5, 3.5, 3.7, 3.7]) y = np.array([196, 221, 136, 255, 244, 230, 232, 255, 267]) lr = LinearRegression() lr.fit(x.reshape(-1, 1), y) print(lr.predict([[2.4]])) ```", "best_answers_score":0.6433, "library_name":"scikit-learn", "question_url":"https:\/\/stackoverflow.com\/questions\/58663739\/reshape-your-data-either-using-array-reshape-1-1-if-your-data-has-a-single-fe", "best_answers_votes":33, "question_length":832, "response_length":338 }, { "question":"scikit-learn .predict() default threshold I'm working on a classification problem with unbalanced classes (5% 1's). I want to predict the class, not the probability. In a binary classification problem, is scikit's classifier.predict() using 0.5 by default? If it doesn't, what's the default method? If it does, how do I change it? In scikit some classifiers have the class_weight='auto' option, but not all do. With class_weight='auto', would .predict() use the actual population proportion as a threshold? What would be the way to do this in a classifier like MultinomialNB that doesn't support class_weight? Other than using predict_proba() and then calculation the classes myself.", "response":"The threshold can be set using clf.predict_proba() for example: ``` from sklearn.tree import DecisionTreeClassifier clf = DecisionTreeClassifier(random_state = 2) clf.fit(X_train,y_train) # y_pred = clf.predict(X_test) # default threshold is 0.5 y_pred = (clf.predict_proba(X_test)[:,1] >= 0.3).astype(bool) # set threshold as 0.3 ```", "best_answers_score":0.6415, "library_name":"scikit-learn", "question_url":"https:\/\/stackoverflow.com\/questions\/19984957\/scikit-learn-predict-default-threshold", "best_answers_votes":88, "question_length":683, "response_length":334 }, { "question":"How to obtain features' weights I am dealing with highly imbalanced data set and my idea is to obtain values of feature weights from my libSVM model. 
As for now I am OK with the linear kernel, where I can obtain feature weights, but when I am using rbf or poly, I fail to reach my objective. Here I am using sklearn for my model and it's easy to obtain feature weights for linear kernel using .coef_. Can anyone help me to do same thing for rbf or poly? What I've tried to do so far is given below: ``` svr = SVC(C=10, cache_size=200, class_weight='auto', coef0=0.0, degree=3.0, gamma=0.12,kernel='rbf', max_iter=-1, probability=True, random_state=0,shrinking=True, tol=0.001, verbose=False) clf = svr.fit(data_train,target_train) print clf.coef_ ```", "response":"This is not only impossible, as stated in the documentation: Weights asigned to the features (coefficients in the primal problem). This is only available in the case of linear kernel. but also it doesn't make sense. In linear SVM the resulting separating plane is in the same space as your input features. Therefore its coefficients can be viewed as weights of the input's \"dimensions\". In other kernels, the separating plane exists in another space - a result of kernel transformation of the original space. Its coefficients are not directly related to the input space. In fact, for the rbf kernel the transformed space is infinite-dimensional (you can get a starting point on this on Wikipedia of course).", "best_answers_score":0.6404, "library_name":"scikit-learn", "question_url":"https:\/\/stackoverflow.com\/questions\/21260691\/how-to-obtain-features-weights", "best_answers_votes":23, "question_length":750, "response_length":707 }, { "question":"Sklearn How to Save a Model Created From a Pipeline and GridSearchCV Using Joblib or Pickle? After identifying the best parameters using a pipeline and GridSearchCV, how do I pickle\/joblib this process to re-use later? I see how to do this when it's a single classifier... 
```py import joblib joblib.dump(clf, 'filename.pkl') ``` But how do I save this overall pipeline with the best parameters after performing and completing a gridsearch? I tried: joblib.dump(grid, 'output.pkl') - But that dumped every gridsearch attempt (many files) joblib.dump(pipeline, 'output.pkl') - But I don't think that contains the best parameters ```py X_train = df['Keyword'] y_train = df['Ad Group'] pipeline = Pipeline([ ('tfidf', TfidfVectorizer()), ('sgd', SGDClassifier()) ]) parameters = {'tfidf__ngram_range': [(1, 1), (1, 2)], 'tfidf__use_idf': (True, False), 'tfidf__max_df': [0.25, 0.5, 0.75, 1.0], 'tfidf__max_features': [10, 50, 100, 250, 500, 1000, None], 'tfidf__stop_words': ('english', None), 'tfidf__smooth_idf': (True, False), 'tfidf__norm': ('l1', 'l2', None), } grid = GridSearchCV(pipeline, parameters, cv=2, verbose=1) grid.fit(X_train, y_train) #These were the best combination of tuning parameters discovered ##best_params = {'tfidf__max_features': None, 'tfidf__use_idf': False, ## 'tfidf__smooth_idf': False, 'tfidf__ngram_range': (1, 2), ## 'tfidf__max_df': 1.0, 'tfidf__stop_words': 'english', ## 'tfidf__norm': 'l2'} ```", "response":"``` import joblib joblib.dump(grid.best_estimator_, 'filename.pkl') ``` If you want to dump your object into one file - use: ``` joblib.dump(grid.best_estimator_, 'filename.pkl', compress = 1) ```", "best_answers_score":0.638, "library_name":"scikit-learn", "question_url":"https:\/\/stackoverflow.com\/questions\/34143829\/sklearn-how-to-save-a-model-created-from-a-pipeline-and-gridsearchcv-using-jobli", "best_answers_votes":72, "question_length":1431, "response_length":196 }, { "question":"When scale the data, why the train dataset use 'fit' and 'transform', but the test dataset only use 'transform'? When scale the data, why the train dataset use 'fit' and 'transform', but the test dataset only use 'transform'? 
``` SAMPLE_COUNT = 5000 TEST_COUNT = 20000 seed(0) sample = list() test_sample = list() for index, line in enumerate(open('covtype.data','rb')): if index < SAMPLE_COUNT: sample.append(line) else: r = randint(0,index) if r < SAMPLE_COUNT: sample[r] = line else: k = randint(0,index) if k < TEST_COUNT: if len(test_sample) < TEST_COUNT: test_sample.append(line) else: test_sample[k] = line from sklearn.preprocessing import StandardScaler for n, line in enumerate(sample): sample[n] = map(float, line.strip().split(',')) y = np.array(sample)[:,-1] scaling = StandardScaler() X = scaling.fit_transform(np.array(sample)[:,:-1]) ##here use fit and transform for n,line in enumerate(test_sample): test_sample[n] = map(float,line.strip().split(',')) yt = np.array(test_sample)[:,-1] Xt = scaling.transform(np.array(test_sample)[:,:-1])##why here only use transform ``` As the annotation says, why Xt only use transform but no fit?", "response":"We use fit_transform() on the train data so that we learn the parameters of scaling on the train data and in the same time we scale the train data. We only use transform() on the test data because we use the scaling paramaters learned on the train data to scale the test data. This is the standart procedure to scale. You always learn your scaling parameters on the train and then use them on the test. Here is an article that explane it very well : https:\/\/sebastianraschka.com\/faq\/docs\/scale-training-test.html", "best_answers_score":0.6355, "library_name":"scikit-learn", "question_url":"https:\/\/stackoverflow.com\/questions\/43675665\/when-scale-the-data-why-the-train-dataset-use-fit-and-transform-but-the-te", "best_answers_votes":46, "question_length":1149, "response_length":512 }, { "question":"ValueError: continuous format is not supported I have written a simple function where I am using the average_precision_score from scikit-learn to compute average precision. 
My Code: ```py def compute_average_precision(predictions, gold): gold_predictions = np.zeros(predictions.size, dtype=np.int) for idx in range(gold): gold_predictions[idx] = 1 return average_precision_score(predictions, gold_predictions) ``` When the function is executed, it produces the following error. ```none Traceback (most recent call last): File \"test.py\", line 91, in total_avg_precision += compute_average_precision(np.asarray(probs), len(gold_candidates)) File \"test.py\", line 29, in compute_average_precision return average_precision_score(predictions, gold_predictions) File \"\/if5\/wua4nw\/anaconda3\/lib\/python3.5\/site-packages\/sklearn\/metrics\/ranking.py\", line 184, in average_precision_score average, sample_weight=sample_weight) File \"\/if5\/wua4nw\/anaconda3\/lib\/python3.5\/site-packages\/sklearn\/metrics\/base.py\", line 81, in _average_binary_score raise ValueError(\"{0} format is not supported\".format(y_type)) ValueError: continuous format is not supported ``` If I print the two numpy arrays predictions and gold_predictions, say for one example, it looks alright. [One example is provided below.] ```none [ 0.40865014 0.26047812 0.07588802 0.26604077 0.10586583 0.17118802 0.26797949 0.34618672 0.33659923 0.22075308 0.42288553 0.24908153 0.26506338 0.28224747 0.32942101 0.19986877 0.39831917 0.23635269 0.34715138 0.39831917 0.23635269 0.35822859 0.12110706] [1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0] ``` What I am doing wrong here? What is the meaning of the error?", "response":"Just taking a look at the sklearn docs Parameters: y_true : array, shape = [n_samples] or [n_samples, n_classes] True binary labels in binary label indicators. y_score : array, shape = [n_samples] or [n_samples, n_classes] Target scores, can either be probability estimates of the positive class, confidence values, or non-thresholded measure of decisions (as returned by \u201cdecision_function\u201d on some classifiers). 
So your first argument has to be an array of binary labels, but you are passing some sort of float array as the first argument. I believe you need to reverse the order of the arguments you are passing.", "best_answers_score":0.6341, "library_name":"scikit-learn", "question_url":"https:\/\/stackoverflow.com\/questions\/44468172\/valueerror-continuous-format-is-not-supported", "best_answers_votes":32, "question_length":1661, "response_length":618 }, { "question":"scikit-learn: clustering text documents using DBSCAN I'm trying to use scikit-learn to cluster text documents. On the whole, I find my way around, but I have my problems with specific issues. Most of the examples I found illustrate clustering using scikit-learn with k-means as the clustering algorithm. Adapting these examples with k-means to my setting works in principle. However, k-means is not suitable since I don't know the number of clusters. From what I read so far -- please correct me here if needed -- DBSCAN or MeanShift seem to be more appropriate in my case. The scikit-learn website provides examples for each cluster algorithm. The problem now is that with both DBSCAN and MeanShift I get errors I cannot comprehend, let alone solve. My minimal code is as follows: ``` docs = [] for item in [database]: docs.append(item) vectorizer = TfidfVectorizer(min_df=1) X = vectorizer.fit_transform(docs) X = X.todense() # <-- This line was needed to resolve the issue db = DBSCAN(eps=0.3, min_samples=10).fit(X) ... ``` (My documents are already processed, i.e., stopwords have been removed and a Porter Stemmer has been applied.) When I run this code, I get the following error when instantiating DBSCAN and calling fit(): ``` ...
File \"\/usr\/local\/lib\/python2.7\/dist-packages\/sklearn\/cluster\/dbscan_.py\", line 248, in fit clust = dbscan(X, **self.get_params()) File \"\/usr\/local\/lib\/python2.7\/dist-packages\/sklearn\/cluster\/dbscan_.py\", line 86, in dbscan n = X.shape[0] IndexError: tuple index out of range ``` Clicking on the line in dbscan_.py that throws the error, I noticed the following lines ``` ... X = np.asarray(X) n = X.shape[0] ... ``` When I use these two lines directly in my code for testing, I get the same error. I don't really know what np.asarray(X) is doing here, but after that command runs, X.shape is (). Hence X.shape[0] bombs -- before, X.shape[0] correctly referred to the number of documents. Out of curiosity, I removed X = np.asarray(X) from dbscan_.py. When I do this, something computes heavily for a few seconds, and then I get another error: ``` ... File \"\/usr\/lib\/python2.7\/dist-packages\/scipy\/sparse\/csr.py\", line 214, in extractor (min_indx,max_indx) = check_bounds(indices,N) File \"\/usr\/lib\/python2.7\/dist-packages\/scipy\/sparse\/csr.py\", line 198, in check_bounds max_indx = indices.max() File \"\/usr\/lib\/python2.7\/dist-packages\/numpy\/core\/_methods.py\", line 17, in _amax out=out, keepdims=keepdims) ValueError: zero-size array to reduction operation maximum which has no identity ``` In short, I have no clue how to get DBSCAN working, or what I might have missed, in general.", "response":"It looks like sparse representations for DBSCAN are supported as of Jan. 2015.
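For example, a minimal sketch of clustering sparse TF-IDF vectors directly (the toy documents and the eps\/min_samples values are my own illustration, not from the question; assumes a scikit-learn version recent enough to accept sparse input, so the .todense() workaround is no longer needed): ```py
# Hedged sketch: DBSCAN accepts the sparse matrix from TfidfVectorizer directly
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import DBSCAN

docs = [
    'machine learning with python',
    'python machine learning basics',
    'learning python for machine learning',
    'how to cook pasta',
    'pasta cooking tips',
    'tips for cooking pasta well',
]
X = TfidfVectorizer().fit_transform(docs)   # sparse CSR matrix, no .todense()
db = DBSCAN(eps=0.8, min_samples=2, metric='cosine').fit(X)
print(db.labels_)   # one cluster label per document; -1 marks noise points
```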
I upgraded sklearn to 0.16.1 and it worked for me on text.", "best_answers_score":0.6324, "library_name":"scikit-learn", "question_url":"https:\/\/stackoverflow.com\/questions\/25217065\/scikit-learn-clustering-text-documents-using-dbscan", "best_answers_votes":15, "question_length":2604, "response_length":137 }, { "question":"Python scikit learn MLPClassifier \"hidden_layer_sizes\" I am lost in the scikit learn 0.18 user manual (http:\/\/scikit-learn.org\/dev\/modules\/generated\/sklearn.neural_network.MLPClassifier.html#sklearn.neural_network.MLPClassifier): ``` hidden_layer_sizes : tuple, length = n_layers - 2, default (100,) The ith element represents the number of neurons in the ith hidden layer. ``` If I am looking for only 1 hidden layer and 7 hidden units in my model, should I put like this? Thanks! ``` hidden_layer_sizes=(7, 1) ```", "response":"hidden_layer_sizes=(7,) if you want only 1 hidden layer with 7 hidden units. length = n_layers - 2 is because you have 1 input layer and 1 output layer.", "best_answers_score":0.6323, "library_name":"scikit-learn", "question_url":"https:\/\/stackoverflow.com\/questions\/35363530\/python-scikit-learn-mlpclassifier-hidden-layer-sizes", "best_answers_votes":86, "question_length":515, "response_length":152 }, { "question":"True Positive Rate and False Positive Rate (TPR, FPR) for Multi-Class Data in python [duplicate] This question already has answers here: How to get precision, recall and f-measure from confusion matrix in Python [duplicate] (3 answers) calculate precision and recall in a confusion matrix (5 answers) Closed 4 years ago. How do you compute the true- and false- positive rates of a multi-class classification problem? Say, ``` y_true = [1, -1, 0, 0, 1, -1, 1, 0, -1, 0, 1, -1, 1, 0, 0, -1, 0] y_prediction = [-1, -1, 1, 0, 0, 0, 0, -1, 1, -1, 1, 1, 0, 0, 1, 1, -1] ``` The confusion matrix is computed by metrics.confusion_matrix(y_true, y_prediction), but that just shifts the problem. 
EDIT after @seralouk's answer. Here, the class -1 is to be considered as the negatives, while 0 and 1 are variations of positives.", "response":"Using your data, you can get all the metrics for all the classes at once: ``` import numpy as np from sklearn.metrics import confusion_matrix y_true = [1, -1, 0, 0, 1, -1, 1, 0, -1, 0, 1, -1, 1, 0, 0, -1, 0] y_prediction = [-1, -1, 1, 0, 0, 0, 0, -1, 1, -1, 1, 1, 0, 0, 1, 1, -1] cnf_matrix = confusion_matrix(y_true, y_prediction) print(cnf_matrix) #[[1 1 3] # [3 2 2] # [1 3 1]] FP = cnf_matrix.sum(axis=0) - np.diag(cnf_matrix) FN = cnf_matrix.sum(axis=1) - np.diag(cnf_matrix) TP = np.diag(cnf_matrix) TN = cnf_matrix.sum() - (FP + FN + TP) FP = FP.astype(float) FN = FN.astype(float) TP = TP.astype(float) TN = TN.astype(float) # Sensitivity, hit rate, recall, or true positive rate TPR = TP\/(TP+FN) # Specificity or true negative rate TNR = TN\/(TN+FP) # Precision or positive predictive value PPV = TP\/(TP+FP) # Negative predictive value NPV = TN\/(TN+FN) # Fall out or false positive rate FPR = FP\/(FP+TN) # False negative rate FNR = FN\/(TP+FN) # False discovery rate FDR = FP\/(TP+FP) # Overall accuracy ACC = (TP+TN)\/(TP+FP+FN+TN) ``` For a general case where we have a lot of classes, these metrics are represented graphically in the following image:", "best_answers_score":0.6312, "library_name":"scikit-learn", "question_url":"https:\/\/stackoverflow.com\/questions\/50666091\/true-positive-rate-and-false-positive-rate-tpr-fpr-for-multi-class-data-in-py", "best_answers_votes":62, "question_length":816, "response_length":1158 }, { "question":"How to generate a train-test-split based on a group id? I have the following data: ```none Group_ID Item_id Target 0 1 1 0 1 1 2 0 2 1 3 1 3 2 4 0 4 2 5 1 5 2 6 1 6 3 7 0 7 4 8 0 8 5 9 0 9 5 10 1 ``` I need to split the dataset into a training and testing set based on the \"Group_ID\" so that 80% of the data goes into a training set and 20% into a test set. 
That is, I need my training set to look something like: ```none Group_ID Item_id Target 0 1 1 0 1 1 2 0 2 1 3 1 3 2 4 0 4 2 5 1 5 2 6 1 6 3 7 0 7 4 8 0 ``` And the test set: ```none Group_ID Item_id Target 8 5 9 0 9 5 10 1 ``` What would be the simplest way to do this? As far as I know, the standard train_test_split function in sklearn does not support splitting by groups in a way where I can also indicate the size of the split (e.g. 80\/20).", "response":"I figured out the answer. This seems to work: ``` from sklearn.model_selection import GroupShuffleSplit splitter = GroupShuffleSplit(test_size=.20, n_splits=2, random_state=7) split = splitter.split(df, groups=df['Group_ID']) train_inds, test_inds = next(split) train = df.iloc[train_inds] test = df.iloc[test_inds] ```", "best_answers_score":0.6309, "library_name":"scikit-learn", "question_url":"https:\/\/stackoverflow.com\/questions\/54797508\/how-to-generate-a-train-test-split-based-on-a-group-id", "best_answers_votes":77, "question_length":799, "response_length":321 }, { "question":"Splitting data using time-based splitting in test and train datasets I know that train_test_split splits the data randomly, but I need to know how to split it based on time. ``` X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.33, random_state=42) # this splits the data randomly into 67% train and 33% test ``` How do I split the same data set based on time, with 67% train and 33% test? The dataset has a TimeStamp column. I tried searching similar questions but was not sure about the approach. Can someone explain briefly?", "response":"One easy way to do it:
First, sort the data by time. Second: ``` import numpy as np train_set, test_set = np.split(data, [int(.67 * len(data))]) ``` That makes train_set the first 67% of the data, and test_set the remaining 33%.", "best_answers_score":0.6309, "library_name":"scikit-learn", "question_url":"https:\/\/stackoverflow.com\/questions\/50879915\/splitting-data-using-time-based-splitting-in-test-and-train-datasets", "best_answers_votes":19, "question_length":541, "response_length":247 }, { "question":"AttributeError: 'GridSearchCV' object has no attribute 'best_params_' Grid search is a way to find the best parameters for any model out of the combinations we specify. I have formed a grid search on my model in the manner below and wish to find the best parameters identified by this grid search. ```py from sklearn.model_selection import GridSearchCV from sklearn.ensemble import RandomForestClassifier # Create the parameter grid based on the results of random search param_grid = { 'bootstrap': [True], 'max_depth': [20, 30, 40, 100, 110], 'max_features': ['sqrt'], 'min_samples_leaf': [5, 10, 15], 'min_samples_split': [40, 50, 60], 'n_estimators': [150, 200, 250] } # Create a base model rf = RandomForestClassifier() # Instantiate the grid search model grid_search = GridSearchCV(estimator=rf, param_grid=param_grid, cv=3, n_jobs=-1, verbose=2) ``` Now I want to find the best parameters of the grid search as the output ```py grid_search.best_params_ ``` Error: ```none ----> grid_search.best_params_ AttributeError: 'GridSearchCV' object has no attribute 'best_params_' ``` What am I missing?", "response":"You cannot get the best parameters without fitting the data. Fit the data: ``` grid_search.fit(X_train, y_train) ``` Now find the best parameters.
``` grid_search.best_params_ ``` grid_search.best_params_ will work after fitting on X_train and y_train.", "best_answers_score":0.6296, "library_name":"scikit-learn", "question_url":"https:\/\/stackoverflow.com\/questions\/60786220\/attributeerror-gridsearchcv-object-has-no-attribute-best-params", "best_answers_votes":39, "question_length":1049, "response_length":247 }, { "question":"Scikit-learn: How to run KMeans on a one-dimensional array? I have an array of 13.876(13,876) values between 0 and 1. I would like to apply sklearn.cluster.KMeans to only this vector to find the different clusters in which the values are grouped. However, it seems KMeans works with a multidimensional array and not with one-dimensional ones. I guess there is a trick to make it work but I don't know how. I saw that KMeans.fit() accepts \"X : array-like or sparse matrix, shape=(n_samples, n_features)\", but it wants the n_samples to be bigger than one I tried putting my array on a np.zeros() matrix and run KMeans, but then is putting all the non-null values on class 1 and the rest on class 0. 
Can anyone help in running this algorithm on a one-dimensional array?", "response":"You have many samples of 1 feature, so you can reshape the array to (13,876, 1) using numpy's reshape: ``` from sklearn.cluster import KMeans import numpy as np x = np.random.random(13876) km = KMeans() km.fit(x.reshape(-1,1)) # -1 will be calculated to be 13876 here ```", "best_answers_score":0.6266, "library_name":"scikit-learn", "question_url":"https:\/\/stackoverflow.com\/questions\/28416408\/scikit-learn-how-to-run-kmeans-on-a-one-dimensional-array", "best_answers_votes":56, "question_length":766, "response_length":271 }, { "question":"How to pass a parameter to Scikit-Learn Keras model function I have the following code, using Keras Scikit-Learn Wrapper, which work fine: ``` from keras.models import Sequential from keras.layers import Dense from sklearn import datasets from keras.wrappers.scikit_learn import KerasClassifier from sklearn.model_selection import StratifiedKFold from sklearn.model_selection import cross_val_score import numpy as np def create_model(): # create model model = Sequential() model.add(Dense(12, input_dim=4, init='uniform', activation='relu')) model.add(Dense(6, init='uniform', activation='relu')) model.add(Dense(1, init='uniform', activation='sigmoid')) # Compile model model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy']) return model def main(): \"\"\" Description of main \"\"\" iris = datasets.load_iris() X, y = iris.data, iris.target NOF_ROW, NOF_COL = X.shape # evaluate using 10-fold cross validation seed = 7 np.random.seed(seed) model = KerasClassifier(build_fn=create_model, nb_epoch=150, batch_size=10, verbose=0) kfold = StratifiedKFold(n_splits=10, shuffle=True, random_state=seed) results = cross_val_score(model, X, y, cv=kfold) print(results.mean()) # 0.666666666667 if __name__ == '__main__': main() ``` The pima-indians-diabetes.data can be downloaded here. 
Now what I want to do is pass a value NOF_COL as a parameter of the create_model() function in the following way: ``` model = KerasClassifier(build_fn=create_model(input_dim=NOF_COL), nb_epoch=150, batch_size=10, verbose=0) ``` With a create_model() function that looks like this: ``` def create_model(input_dim=None): # create model model = Sequential() model.add(Dense(12, input_dim=input_dim, init='uniform', activation='relu')) model.add(Dense(6, init='uniform', activation='relu')) model.add(Dense(1, init='uniform', activation='sigmoid')) # Compile model model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy']) return model ``` But it fails, giving this error: ``` TypeError: __call__() takes at least 2 arguments (1 given) ``` What's the right way to do it?", "response":"You can add an input_dim keyword argument to the KerasClassifier constructor: ``` model = KerasClassifier(build_fn=create_model, input_dim=5, nb_epoch=150, batch_size=10, verbose=0) ```", "best_answers_score":0.6242, "library_name":"scikit-learn", "question_url":"https:\/\/stackoverflow.com\/questions\/40393629\/how-to-pass-a-parameter-to-scikit-learn-keras-model-function", "best_answers_votes":22, "question_length":2090, "response_length":185 }, { "question":"Extract blocks or patches from NumPy Array I have a 2-d numpy array as follows: ``` a = np.array([[1,5,9,13], [2,6,10,14], [3,7,11,15], [4,8,12,16]]) ``` I want to extract it into patches of size 2 by 2 without repeating the elements; the result should contain exactly the same values. This can be a 3-d array or a list, with the same order of elements as below: ``` [[[1,5], [2,6]], [[3,7], [4,8]], [[9,13], [10,14]], [[11,15], [12,16]]] ``` How can I do it easily? In my real problem the size of a is (36, 72). I cannot do it one by one.
I want a programmatic way of doing it.", "response":"Using scikit-image: ``` import numpy as np from skimage.util import view_as_blocks a = np.array([[1,5,9,13], [2,6,10,14], [3,7,11,15], [4,8,12,16]]) print(view_as_blocks(a, (2, 2))) ```", "best_answers_score":0.621, "library_name":"scikit-learn", "question_url":"https:\/\/stackoverflow.com\/questions\/31527755\/extract-blocks-or-patches-from-numpy-array", "best_answers_votes":29, "question_length":559, "response_length":185 }, { "question":"ImportError in importing from sklearn: cannot import name check_build I am getting the following error while trying to import from sklearn: ``` >>> from sklearn import svm Traceback (most recent call last): File \"\", line 1, in from sklearn import svm File \"C:\\Python27\\lib\\site-packages\\sklearn\\__init__.py\", line 16, in from . import check_build ImportError: cannot import name check_build ``` I am using python 2.7, scipy-0.12.0b1 superpack, numpy-1.6.0 superpack, and scikit-learn-0.11 on a Windows 7 machine. I have checked several answers for this issue, but none of them gives a way out of this error.", "response":"Worked for me after installing scipy.", "best_answers_score":0.6139, "library_name":"scikit-learn", "question_url":"https:\/\/stackoverflow.com\/questions\/15274696\/importerror-in-importing-from-sklearn-cannot-import-name-check-build", "best_answers_votes":164, "question_length":607, "response_length":37 }, { "question":"Sklearn, gridsearch: how to print out progress during the execution? I am using GridSearch from sklearn to optimize parameters of the classifier. There is a lot of data, so the whole process of optimization takes a while: more than a day. I would like to watch the performance of the already-tried combinations of parameters during the execution. Is it possible?", "response":"Set the verbose parameter in GridSearchCV to a positive number (the greater the number, the more detail you will get).
For instance: ``` GridSearchCV(clf, param_grid, cv=cv, scoring='accuracy', verbose=10) ```", "best_answers_score":0.6123, "library_name":"scikit-learn", "question_url":"https:\/\/stackoverflow.com\/questions\/24121018\/sklearn-gridsearch-how-to-print-out-progress-during-the-execution", "best_answers_votes":179, "question_length":362, "response_length":208 }, { "question":"Relationship between SciPy and NumPy SciPy appears to provide most (but not all [1]) of NumPy's functions in its own namespace. In other words, if there's a function named numpy.foo, there's almost certainly a scipy.foo. Most of the time, the two appear to be exactly the same, oftentimes even pointing to the same function object. Sometimes, they're different. To give an example that came up recently: numpy.log10 is a ufunc that returns NaNs for negative arguments; scipy.log10 returns complex values for negative arguments and doesn't appear to be a ufunc. The same can be said about log, log2 and logn, but not about log1p [2]. On the other hand, numpy.exp and scipy.exp appear to be different names for the same ufunc. This is also true of scipy.log1p and numpy.log1p. Another example is numpy.linalg.solve vs scipy.linalg.solve. They're similar, but the latter offers some additional features over the former. Why the apparent duplication? If this is meant to be a wholesale import of numpy into the scipy namespace, why the subtle differences in behaviour and the missing functions? Is there some overarching logic that would help clear up the confusion? [1] numpy.min, numpy.max, numpy.abs and a few others have no counterparts in the scipy namespace. [2] Tested using NumPy 1.5.1 and SciPy 0.9.0rc2.", "response":"Last time I checked it, the scipy __init__ method executes a ``` from numpy import * ``` so that the whole numpy namespace is included into scipy when the scipy module is imported. The log10 behavior you are describing is interesting, because both versions are coming from numpy. 
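You can reproduce the divergence with numpy alone -- a small sketch of my own (it relies only on numpy.lib.scimath, the module those scipy names come from, so it does not need an old SciPy installed): ```py
# Sketch: the plain log10 ufunc vs. the numpy.lib.scimath wrapper
import numpy as np
from numpy.lib import scimath

with np.errstate(invalid='ignore'):   # the ufunc warns on negative input
    print(np.log10(-100.0))           # nan: the ufunc stays in the real domain
print(scimath.log10(-100.0))          # approx. 2+1.364j: complex branch-cut result
```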
One is a ufunc, the other is a numpy.lib function. Why scipy is preferring the library function over the ufunc, I don't know off the top of my head. EDIT: In fact, I can answer the log10 question. Looking in the scipy __init__ method I see this: ``` # Import numpy symbols to scipy name space import numpy as _num from numpy import oldnumeric from numpy import * from numpy.random import rand, randn from numpy.fft import fft, ifft from numpy.lib.scimath import * ``` The log10 function you get in scipy comes from numpy.lib.scimath. Looking at that code, it says: ``` \"\"\" Wrapper functions to more user-friendly calling of certain math functions whose output data-type is different than the input data-type in certain domains of the input. For example, for functions like log() with branch cuts, the versions in this module provide the mathematically valid answers in the complex plane: >>> import math >>> from numpy.lib import scimath >>> scimath.log(-math.exp(1)) == (1+1j*math.pi) True Similarly, sqrt(), other base logarithms, power() and trig functions are correctly handled. See their respective docstrings for specific examples. \"\"\" ``` It seems that module overlays the base numpy ufuncs for sqrt, log, log2, logn, log10, power, arccos, arcsin, and arctanh. That explains the behavior you are seeing. The underlying design reason why it is done like that is probably buried in a mailing list post somewhere.", "best_answers_score":0.8, "library_name":"scipy", "question_url":"https:\/\/stackoverflow.com\/questions\/6200910\/relationship-between-scipy-and-numpy", "best_answers_votes":156, "question_length":1309, "response_length":1697 }, { "question":"How to do exponential and logarithmic curve fitting in Python? I found only polynomial fitting I have a set of data and I want to compare which line describes it best (polynomials of different orders, exponential or logarithmic). I use Python and Numpy and for polynomial fitting there is a function polyfit(). 
But I found no such functions for exponential and logarithmic fitting. Are there any? Or how to solve it otherwise?", "response":"For fitting y = A + B log x, just fit y against (log x). ``` >>> x = numpy.array([1, 7, 20, 50, 79]) >>> y = numpy.array([10, 19, 30, 35, 51]) >>> numpy.polyfit(numpy.log(x), y, 1) array([ 8.46295607, 6.61867463]) # y \u2248 8.46 log(x) + 6.62 ``` For fitting y = AeBx, take the logarithm of both side gives log y = log A + Bx. So fit (log y) against x. Note that fitting (log y) as if it is linear will emphasize small values of y, causing large deviation for large y. This is because polyfit (linear regression) works by minimizing \u2211i (\u0394Y)2 = \u2211i (Yi \u2212 \u0176i)2. When Yi = log yi, the residues \u0394Yi = \u0394(log yi) \u2248 \u0394yi \/ |yi|. So even if polyfit makes a very bad decision for large y, the \"divide-by-|y|\" factor will compensate for it, causing polyfit favors small values. This could be alleviated by giving each entry a \"weight\" proportional to y. polyfit supports weighted-least-squares via the w keyword argument. ``` >>> x = numpy.array([10, 19, 30, 35, 51]) >>> y = numpy.array([1, 7, 20, 50, 79]) >>> numpy.polyfit(x, numpy.log(y), 1) array([ 0.10502711, -0.40116352]) # y \u2248 exp(-0.401) * exp(0.105 * x) = 0.670 * exp(0.105 * x) # (^ biased towards small values) >>> numpy.polyfit(x, numpy.log(y), 1, w=numpy.sqrt(y)) array([ 0.06009446, 1.41648096]) # y \u2248 exp(1.42) * exp(0.0601 * x) = 4.12 * exp(0.0601 * x) # (^ not so biased) ``` Note that Excel, LibreOffice and most scientific calculators typically use the unweighted (biased) formula for the exponential regression \/ trend lines. If you want your results to be compatible with these platforms, do not include the weights even if it provides better results. Now, if you can use scipy, you could use scipy.optimize.curve_fit to fit any model without transformations. 
For y = A + B log x the result is the same as the transformation method: ``` >>> x = numpy.array([1, 7, 20, 50, 79]) >>> y = numpy.array([10, 19, 30, 35, 51]) >>> scipy.optimize.curve_fit(lambda t,a,b: a+b*numpy.log(t), x, y) (array([ 6.61867467, 8.46295606]), array([[ 28.15948002, -7.89609542], [ -7.89609542, 2.9857172 ]])) # y \u2248 6.62 + 8.46 log(x) ``` For y = AeBx, however, we can get a better fit since it computes \u0394(log y) directly. But we need to provide an initialize guess so curve_fit can reach the desired local minimum. ``` >>> x = numpy.array([10, 19, 30, 35, 51]) >>> y = numpy.array([1, 7, 20, 50, 79]) >>> scipy.optimize.curve_fit(lambda t,a,b: a*numpy.exp(b*t), x, y) (array([ 5.60728326e-21, 9.99993501e-01]), array([[ 4.14809412e-27, -1.45078961e-08], [ -1.45078961e-08, 5.07411462e+10]])) # oops, definitely wrong. >>> scipy.optimize.curve_fit(lambda t,a,b: a*numpy.exp(b*t), x, y, p0=(4, 0.1)) (array([ 4.88003249, 0.05531256]), array([[ 1.01261314e+01, -4.31940132e-02], [ -4.31940132e-02, 1.91188656e-04]])) # y \u2248 4.88 exp(0.0553 x). much better. ```", "best_answers_score":0.8, "library_name":"scipy", "question_url":"https:\/\/stackoverflow.com\/questions\/3433486\/how-to-do-exponential-and-logarithmic-curve-fitting-in-python-i-found-only-poly", "best_answers_votes":328, "question_length":426, "response_length":2791 }, { "question":"How to add a new row to an empty numpy array Using standard Python arrays, I can do the following: ``` arr = [] arr.append([1,2,3]) arr.append([4,5,6]) # arr is now [[1,2,3],[4,5,6]] ``` However, I cannot do the same thing in numpy. 
For example: ``` arr = np.array([]) arr = np.append(arr, np.array([1,2,3])) arr = np.append(arr, np.array([4,5,6])) # arr is now [1,2,3,4,5,6] ``` I also looked into vstack, but when I use vstack on an empty array, I get: ``` ValueError: all the input array dimensions except for the concatenation axis must match exactly ``` So how do I do append a new row to an empty array in numpy?", "response":"The way to \"start\" the array that you want is: ``` arr = np.empty((0,3), int) ``` Which is an empty array but it has the proper dimensionality. ``` >>> arr array([], shape=(0, 3), dtype=int64) ``` Then be sure to append along axis 0: ``` arr = np.append(arr, np.array([[1,2,3]]), axis=0) arr = np.append(arr, np.array([[4,5,6]]), axis=0) ``` But, @jonrsharpe is right. In fact, if you're going to be appending in a loop, it would be much faster to append to a list as in your first example, then convert to a numpy array at the end, since you're really not using numpy as intended during the loop: ``` In [210]: %%timeit .....: l = [] .....: for i in xrange(1000): .....: l.append([3*i+1,3*i+2,3*i+3]) .....: l = np.asarray(l) .....: 1000 loops, best of 3: 1.18 ms per loop In [211]: %%timeit .....: a = np.empty((0,3), int) .....: for i in xrange(1000): .....: a = np.append(a, 3*i+np.array([[1,2,3]]), 0) .....: 100 loops, best of 3: 18.5 ms per loop In [214]: np.allclose(a, l) Out[214]: True ``` The numpythonic way to do it depends on your application, but it would be more like: ``` In [220]: timeit n = np.arange(1,3001).reshape(1000,3) 100000 loops, best of 3: 5.93 \u00b5s per loop In [221]: np.allclose(a, n) Out[221]: True ```", "best_answers_score":0.8, "library_name":"scipy", "question_url":"https:\/\/stackoverflow.com\/questions\/22392497\/how-to-add-a-new-row-to-an-empty-numpy-array", "best_answers_votes":343, "question_length":618, "response_length":1232 }, { "question":"How to normalize a NumPy array to within a certain range? 
After doing some processing on an audio or image array, it needs to be normalized within a range before it can be written back to a file. This can be done like so: ``` # Normalize audio channels to between -1.0 and +1.0 audio[:,0] = audio[:,0]\/abs(audio[:,0]).max() audio[:,1] = audio[:,1]\/abs(audio[:,1]).max() # Normalize image to between 0 and 255 image = image\/(image.max()\/255.0) ``` Is there a less verbose, convenience function way to do this? matplotlib.colors.Normalize() doesn't seem to be related.", "response":"``` # Normalize audio channels to between -1.0 and +1.0 audio \/= np.max(np.abs(audio),axis=0) # Normalize image to between 0 and 255 image *= (255.0\/image.max()) ``` Using \/= and *= allows you to eliminate an intermediate temporary array, thus saving some memory. Multiplication is less expensive than division, so ``` image *= 255.0\/image.max() # Uses 1 division and image.size multiplications ``` is marginally faster than ``` image \/= image.max()\/255.0 # Uses 1+image.size divisions ``` Since we are using basic numpy methods here, I think this is about as efficient a solution in numpy as can be. In-place operations do not change the dtype of the container array. Since the desired normalized values are floats, the audio and image arrays need to have floating-point point dtype before the in-place operations are performed. If they are not already of floating-point dtype, you'll need to convert them using astype. For example, ``` image = image.astype('float64') ```", "best_answers_score":0.8, "library_name":"scipy", "question_url":"https:\/\/stackoverflow.com\/questions\/1735025\/how-to-normalize-a-numpy-array-to-within-a-certain-range", "best_answers_votes":218, "question_length":566, "response_length":973 }, { "question":"What are the differences between Pandas and NumPy+SciPy in Python? [closed] Closed. This question is opinion-based. It is not currently accepting answers. Want to improve this question? 
Because this question may lead to opinionated discussion, debate, and answers, it has been closed. You may edit the question if you feel you can improve it so that it requires answers that include facts and citations or a detailed explanation of the proposed solution. If edited, the question will be reviewed and might be reopened. Closed 10 years ago. Improve this question They both seem exceedingly similar and I'm curious as to which package would be more beneficial for financial data analysis.", "response":"pandas provides high level data manipulation tools built on top of NumPy. NumPy by itself is a fairly low-level tool, similar to MATLAB. pandas on the other hand provides rich time series functionality, data alignment, NA-friendly statistics, groupby, merge and join methods, and lots of other conveniences. It has become very popular in recent years in financial applications. I will have a chapter dedicated to financial data analysis using pandas in my upcoming book.", "best_answers_score":0.8, "library_name":"scipy", "question_url":"https:\/\/stackoverflow.com\/questions\/11077023\/what-are-the-differences-between-pandas-and-numpyscipy-in-python", "best_answers_votes":329, "question_length":686, "response_length":470 }, { "question":"Numpy Resize\/Rescale Image I would like to take an image and change the scale of the image, while it is a numpy array. For example I have this image of a coca-cola bottle: bottle-1 Which translates to a numpy array of shape (528, 203, 3) and I want to resize that to say the size of this second image: bottle-2 Which has a shape of (140, 54, 3). How do I change the size of the image to a certain shape while still maintaining the original image? Other answers suggest stripping every other or third row out, but what I want to do is basically shrink the image how you would via an image editor but in python code. 
Are there any libraries to do this in numpy\/SciPy?", "response":"Yeah, you can install opencv (this is a library used for image processing, and computer vision), and use the cv2.resize function. And for instance use: ``` import cv2 import numpy as np img = cv2.imread('your_image.jpg') res = cv2.resize(img, dsize=(54, 140), interpolation=cv2.INTER_CUBIC) ``` Here img is thus a numpy array containing the original image, whereas res is a numpy array containing the resized image. An important aspect is the interpolation parameter: there are several ways how to resize an image. Especially since you scale down the image, and the size of the original image is not a multiple of the size of the resized image. Possible interpolation schemas are: INTER_NEAREST - a nearest-neighbor interpolation INTER_LINEAR - a bilinear interpolation (used by default) INTER_AREA - resampling using pixel area relation. It may be a preferred method for image decimation, as it gives moire\u2019-free results. But when the image is zoomed, it is similar to the INTER_NEAREST method. INTER_CUBIC - a bicubic interpolation over 4x4 pixel neighborhood INTER_LANCZOS4 - a Lanczos interpolation over 8x8 pixel neighborhood Like with most options, there is no \"best\" option in the sense that for every resize schema, there are scenarios where one strategy can be preferred over another.", "best_answers_score":0.8, "library_name":"scipy", "question_url":"https:\/\/stackoverflow.com\/questions\/48121916\/numpy-resize-rescale-image", "best_answers_votes":223, "question_length":665, "response_length":1293 }, { "question":"Peak-finding algorithm for Python\/SciPy I can write something myself by finding zero-crossings of the first derivative or something, but it seems like a common-enough function to be included in standard libraries. Anyone know of one? My particular application is a 2D array, but usually it would be used for finding peaks in FFTs, etc. 
Specifically, in these kinds of problems, there are multiple strong peaks, and then lots of smaller \"peaks\" that are just caused by noise that should be ignored. These are just examples; not my actual data: 1-dimensional peaks: 2-dimensional peaks: The peak-finding algorithm would find the location of these peaks (not just their values), and ideally would find the true inter-sample peak, not just the index with maximum value, probably using quadratic interpolation or something. Typically you only care about a few strong peaks, so they'd either be chosen because they're above a certain threshold, or because they're the first n peaks of an ordered list, ranked by amplitude. As I said, I know how to write something like this myself. I'm just asking if there's a pre-existing function or package that's known to work well. Update: I translated a MATLAB script and it works decently for the 1-D case, but could be better. Updated update: sixtenbe created a better version for the 1-D case.", "response":"The function scipy.signal.find_peaks, as its name suggests, is useful for this. But it's important to understand well its parameters width, threshold, distance and above all prominence to get a good peak extraction. According to my tests and the documentation, the concept of prominence is \"the useful concept\" to keep the good peaks, and discard the noisy peaks. What is (topographic) prominence? It is \"the minimum height necessary to descend to get from the summit to any higher terrain\", as it can be seen here: The idea is: The higher the prominence, the more \"important\" the peak is. Test: I used a (noisy) frequency-varying sinusoid on purpose because it shows many difficulties. We can see that the width parameter is not very useful here because if you set a minimum width too high, then it won't be able to track very close peaks in the high frequency part. If you set width too low, you would have many unwanted peaks in the left part of the signal. 
Same problem with distance. threshold only compares with the direct neighbours, which is not useful here. prominence is the one that gives the best solution. Note that you can combine many of these parameters! Code: ``` import numpy as np import matplotlib.pyplot as plt from scipy.signal import find_peaks x = np.sin(2*np.pi*(2**np.linspace(2,10,1000))*np.arange(1000)\/48000) + np.random.normal(0, 1, 1000) * 0.15 peaks, _ = find_peaks(x, distance=20) peaks2, _ = find_peaks(x, prominence=1) # BEST! peaks3, _ = find_peaks(x, width=20) peaks4, _ = find_peaks(x, threshold=0.4) # Required vertical distance to its direct neighbouring samples, pretty useless plt.subplot(2, 2, 1) plt.plot(peaks, x[peaks], \"xr\"); plt.plot(x); plt.legend(['distance']) plt.subplot(2, 2, 2) plt.plot(peaks2, x[peaks2], \"ob\"); plt.plot(x); plt.legend(['prominence']) plt.subplot(2, 2, 3) plt.plot(peaks3, x[peaks3], \"vg\"); plt.plot(x); plt.legend(['width']) plt.subplot(2, 2, 4) plt.plot(peaks4, x[peaks4], \"xk\"); plt.plot(x); plt.legend(['threshold']) plt.show() ```", "best_answers_score":0.8, "library_name":"scipy", "question_url":"https:\/\/stackoverflow.com\/questions\/1713335\/peak-finding-algorithm-for-python-scipy", "best_answers_votes":190, "question_length":1330, "response_length":2007 }, { "question":"How do I use numpy.where()? What should I pass, and what does the result mean? [closed] I tried reading the documentation for numpy.where(), but I'm still confused. What should I pass for the condition, x and y values?
When I pass only condition, what does the result mean and how can I use it? What about when I pass all three? I found How does python numpy.where() work? but it didn't answer my question because it seems to be about the implementation rather than about how to use it. Numpy where() on a 2D matrix also didn't explain things for me; I'm looking for a step-by-step explanation, rather than a how-to guide for a specific case. Please include examples with both 1D and 2D source data.", "response":"After fiddling around for a while, I figured things out, and am posting them here hoping it will help others. Intuitively, np.where is like asking \"tell me where in this array, entries satisfy a given condition\". ``` >>> a = np.arange(5, 10) >>> np.where(a < 8) (array([0, 1, 2]),) >>> a[np.where(a < 8)] # selects from a entries 0, 1, 2 array([5, 6, 7]) ``` The same works for a 2d array: ``` >>> a = np.arange(4, 10).reshape(2, 3) >>> a array([[4, 5, 6], [7, 8, 9]]) >>> np.where(a > 8) (array([1]), array([2])) ``` As in the 1d case, we can use np.where() to get entries in the 2d array that satisfy the condition: ``` >>> a[np.where(a > 8)] array([9]) ``` Note: for a 2d input, np.where() returns a tuple holding an array of row idx's and an array of col idx's; for a 1d input, the tuple holds just a single array of indices.", "best_answers_score":0.8, "library_name":"scipy", "question_url":"https:\/\/stackoverflow.com\/questions\/34667282\/how-do-i-use-numpy-where-what-should-i-pass-and-what-does-the-result-mean", "best_answers_votes":304, "question_length":1145, "response_length":706 }, { "question":"A tool to convert MATLAB code to Python [closed]
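The answer above only exercises the condition-only form of np.where; since the question also asks about passing all three arguments, here is a small hedged sketch (the array values are my own toy example) of np.where(condition, x, y), which picks elementwise from x where the condition holds and from y elsewhere:

```python
import numpy as np

a = np.arange(5, 10)  # array([5, 6, 7, 8, 9])
# three-argument form: take from `a` where the condition is True, else take -1
masked = np.where(a < 8, a, -1)
print(masked)  # -> [ 5  6  7 -1 -1]
```

This works elementwise on arrays of any shape, with x and y broadcast against the condition.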
I have a bunch of MATLAB code from my MS thesis which I now want to convert to Python (using numpy\/scipy and matplotlib) and distribute as open-source. I know the similarity between MATLAB and Python scientific libraries, and converting it manually will take no more than a fortnight (provided that I work towards it every day for some time). I was wondering if there was already any tool available which can do the conversion.", "response":"There are several tools for converting Matlab to Python code. The only one that's seen recent activity (last commit from June 2018) is Small Matlab to Python compiler (also developed here: SMOP@chiselapp). Other options include: LiberMate: translate from Matlab to Python and SciPy (Requires Python 2, last update 4 years ago). OMPC: Matlab to Python (a bit outdated). Mat2py: Matlab to Python (Requires Python 2). Also, for those interested in an interface between the two languages and not conversion: pymatlab: communicate from Python by sending data to the MATLAB workspace, operating on them with scripts and pulling back the resulting data. Python-Matlab wormholes: both directions of interaction supported. Python-Matlab bridge: use Matlab from within Python, offers matlab_magic for iPython, to execute normal matlab code from within ipython. PyMat: Control Matlab session from Python. pymat2: continuation of the seemingly abandoned PyMat. mlabwrap, mlabwrap-purepy: make Matlab look like Python library (based on PyMat). oct2py (repository): run GNU Octave commands from within Python. pymex: Embeds the Python Interpreter in Matlab, also on File Exchange. matpy: Access MATLAB in various ways: create variables, access .mat files, direct interface to MATLAB engine (requires MATLAB be installed). MatPy: Python package for numerical linear algebra and plotting with a MatLab-like interface.
Btw, it might be helpful to look here for other migration tips: http:\/\/bci2000.org\/downloads\/BCPy2000\/Migration.html On a different note, for people who might find it useful there is: matlab2fortran", "best_answers_score":0.8, "library_name":"scipy", "question_url":"https:\/\/stackoverflow.com\/questions\/9845292\/a-tool-to-convert-matlab-code-to-python", "best_answers_votes":186, "question_length":930, "response_length":1596 }, { "question":"binning data in python with scipy\/numpy Is there a more efficient way to take an average of an array in prespecified bins? For example, I have an array of numbers and an array corresponding to bin start and end positions in that array, and I want to just take the mean in those bins. I have code that does it below, but I am wondering how it can be cut down and improved. Thanks. ``` from scipy import * from numpy import * def get_bin_mean(a, b_start, b_end): ind_upper = nonzero(a >= b_start)[0] a_upper = a[ind_upper] a_range = a_upper[nonzero(a_upper < b_end)[0]] mean_val = mean(a_range) return mean_val data = rand(100) bins = linspace(0, 1, 10) binned_data = [] n = 0 for n in range(0, len(bins)-1): b_start = bins[n] b_end = bins[n+1] binned_data.append(get_bin_mean(data, b_start, b_end)) print binned_data ```", "response":"It's probably faster and easier to use numpy.digitize(): ``` import numpy data = numpy.random.random(100) bins = numpy.linspace(0, 1, 10) digitized = numpy.digitize(data, bins) bin_means = [data[digitized == i].mean() for i in range(1, len(bins))] ``` An alternative to this is to use numpy.histogram(): ``` bin_means = (numpy.histogram(data, bins, weights=data)[0] \/ numpy.histogram(data, bins)[0]) ``` Try for yourself which one is faster...
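As a hedged aside to the digitize\/histogram approaches above (not part of either original answer): SciPy also provides scipy.stats.binned_statistic, which computes per-bin statistics such as the mean directly; the toy data here mirrors the arrays used in the thread:

```python
import numpy as np
from scipy.stats import binned_statistic

data = np.random.random(100)
bins = np.linspace(0, 1, 10)
# statistic='mean' averages the values that fall into each of the 9 bins
bin_means, bin_edges, bin_number = binned_statistic(data, data, statistic='mean', bins=bins)
```

Note that, like the list-comprehension version, any empty bin comes out as NaN.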
:)", "best_answers_score":0.8, "library_name":"scipy", "question_url":"https:\/\/stackoverflow.com\/questions\/6163334\/binning-data-in-python-with-scipy-numpy", "best_answers_votes":230, "question_length":818, "response_length":446 }, { "question":"How to check BLAS\/LAPACK linkage in NumPy and SciPy? I am building my numpy\/scipy environment based on blas and lapack, more or less following this walkthrough. When I am done, how can I check that my numpy\/scipy functions really do use the previously built blas\/lapack functionalities?", "response":"The method numpy.show_config() (or numpy.__config__.show()) outputs information about linkage gathered at build time. My output looks like this. I think it means I am using the BLAS\/LAPACK that ships with Mac OS. ``` >>> import numpy as np >>> np.show_config() lapack_opt_info: extra_link_args = ['-Wl,-framework', '-Wl,Accelerate'] extra_compile_args = ['-msse3'] define_macros = [('NO_ATLAS_INFO', 3)] blas_opt_info: extra_link_args = ['-Wl,-framework', '-Wl,Accelerate'] extra_compile_args = ['-msse3', '-I\/System\/Library\/Frameworks\/vecLib.framework\/Headers'] define_macros = [('NO_ATLAS_INFO', 3)] ```", "best_answers_score":0.8, "library_name":"scipy", "question_url":"https:\/\/stackoverflow.com\/questions\/9000164\/how-to-check-blas-lapack-linkage-in-numpy-and-scipy", "best_answers_votes":317, "question_length":285, "response_length":605 }, { "question":"Can't install Scipy through pip When installing scipy through pip with: ``` pip install scipy ``` Pip fails to build scipy and throws the following error: ``` Cleaning up...
Command \/Users\/administrator\/dev\/KaggleAux\/env\/bin\/python2.7 -c \"import setuptools, tokenize;__file__='\/Users\/administrator\/dev\/KaggleAux\/env\/build\/scipy\/setup.py';exec(compile(getattr(tokenize, 'open', open)(__file__).read().replace('\\r\\n', '\\n'), __file__, 'exec'))\" install --record \/var\/folders\/zl\/7698ng4d4nxd49q1845jd9340000gn\/T\/pip-eO8gua-record\/install-record.txt --single-version-externally-managed --compile --install-headers \/Users\/administrator\/dev\/KaggleAux\/env\/bin\/..\/include\/site\/python2.7 failed with error code 1 in \/Users\/administrator\/dev\/KaggleAux\/env\/build\/scipy Storing debug log for failure in \/Users\/administrator\/.pip\/pip.log ``` How can I get scipy to build successfully? This may be a new issue with OSX Yosemite since I just upgraded and haven't had issues installing scipy before. Debug log: ``` Cleaning up... Removing temporary dir \/Users\/administrator\/dev\/KaggleAux\/env\/build... Command \/Users\/administrator\/dev\/KaggleAux\/env\/bin\/python2.7 -c \"import setuptools, tokenize;__file__='\/Users\/administrator\/dev\/KaggleAux\/env\/build\/scipy\/setup.py';exec(compile(getattr(tokenize, 'open', open)(__file__).read().replace('\\r\\n', '\\n'), __file__, 'exec'))\" install --record \/var\/folders\/zl\/7698ng4d4nxd49q1845jd9340000gn\/T\/pip-eO8gua-record\/install-record.txt --single-version-externally-managed --compile --install-headers \/Users\/administrator\/dev\/KaggleAux\/env\/bin\/..\/include\/site\/python2.7 failed with error code 1 in \/Users\/administrator\/dev\/KaggleAux\/env\/build\/scipy Exception information: Traceback (most recent call last): File \"\/Users\/administrator\/dev\/KaggleAux\/env\/lib\/python2.7\/site-packages\/pip\/basecommand.py\", line 122, in main status = self.run(options, args) File \"\/Users\/administrator\/dev\/KaggleAux\/env\/lib\/python2.7\/site-packages\/pip\/commands\/install.py\", line 283, in run requirement_set.install(install_options, global_options, 
root=options.root_path) File \"\/Users\/administrator\/dev\/KaggleAux\/env\/lib\/python2.7\/site-packages\/pip\/req.py\", line 1435, in install requirement.install(install_options, global_options, *args, **kwargs) File \"\/Users\/administrator\/dev\/KaggleAux\/env\/lib\/python2.7\/site-packages\/pip\/req.py\", line 706, in install cwd=self.source_dir, filter_stdout=self._filter_install, show_stdout=False) File \"\/Users\/administrator\/dev\/KaggleAux\/env\/lib\/python2.7\/site-packages\/pip\/util.py\", line 697, in call_subprocess % (command_desc, proc.returncode, cwd)) InstallationError: Command \/Users\/administrator\/dev\/KaggleAux\/env\/bin\/python2.7 -c \"import setuptools, tokenize;__file__='\/Users\/administrator\/dev\/KaggleAux\/env\/build\/scipy\/setup.py';exec(compile(getattr(tokenize, 'open', open)(__file__).read().replace('\\r\\n', '\\n'), __file__, 'exec'))\" install --record \/var\/folders\/zl\/7698ng4d4nxd49q1845jd9340000gn\/T\/pip-eO8gua-record\/install-record.txt --single-version-externally-managed --compile --install-headers \/Users\/administrator\/dev\/KaggleAux\/env\/bin\/..\/include\/site\/python2.7 failed with error code 1 in \/Users\/administrator\/dev\/KaggleAux\/env\/build\/scipy ```", "response":"After opening up an issue with the SciPy team, we found that you need to upgrade pip with: ``` pip install --upgrade pip ``` And in Python 3 this works: ``` python3 -m pip install --upgrade pip ``` for SciPy to install properly. Why? Because: Older versions of pip have to be told to use wheels, IIRC with --use-wheel. Or you can upgrade pip itself, then it should pick up the wheels. 
Upgrading pip solves the issue, but you might be able to just use the --use-wheel flag as well.", "best_answers_score":0.8, "library_name":"scipy", "question_url":"https:\/\/stackoverflow.com\/questions\/26575587\/cant-install-scipy-through-pip", "best_answers_votes":135, "question_length":3193, "response_length":480 }, { "question":"Plotting a fast Fourier transform in Python I have access to NumPy and SciPy and want to create a simple FFT of a data set. I have two lists, one that is y values and the other is timestamps for those y values. What is the simplest way to feed these lists into a SciPy or NumPy method and plot the resulting FFT? I have looked up examples, but they all rely on creating a set of fake data with some certain number of data points, and frequency, etc. and don't really show how to do it with just a set of data and the corresponding timestamps. I have tried the following example: ``` from scipy.fftpack import fft # Number of samplepoints N = 600 # Sample spacing T = 1.0 \/ 800.0 x = np.linspace(0.0, N*T, N) y = np.sin(50.0 * 2.0*np.pi*x) + 0.5*np.sin(80.0 * 2.0*np.pi*x) yf = fft(y) xf = np.linspace(0.0, 1.0\/(2.0*T), N\/2) import matplotlib.pyplot as plt plt.plot(xf, 2.0\/N * np.abs(yf[0:N\/2])) plt.grid() plt.show() ``` But when I change the argument of fft to my data set and plot it, I get extremely odd results, and it appears the scaling for the frequency may be off. I am unsure. Here is a pastebin of the data I am attempting to FFT http:\/\/pastebin.com\/0WhjjMkb http:\/\/pastebin.com\/ksM4FvZS When I use fft() on the whole thing it just has a huge spike at zero and nothing else. 
Here is my code: ``` ## Perform FFT with SciPy signalFFT = fft(yInterp) ## Get power spectral density signalPSD = np.abs(signalFFT) ** 2 ## Get frequencies corresponding to signal PSD fftFreq = fftfreq(len(signalPSD), spacing) ## Get positive half of frequencies i = fftfreq>0 ## plt.figurefigsize = (8, 4) plt.plot(fftFreq[i], 10*np.log10(signalPSD[i])); #plt.xlim(0, 100); plt.xlabel('Frequency [Hz]'); plt.ylabel('PSD [dB]') ``` Spacing is just equal to xInterp[1]-xInterp[0].", "response":"So I run a functionally equivalent form of your code in an IPython notebook: ``` %matplotlib inline import numpy as np import matplotlib.pyplot as plt import scipy.fftpack # Number of samplepoints N = 600 # sample spacing T = 1.0 \/ 800.0 x = np.linspace(0.0, N*T, N) y = np.sin(50.0 * 2.0*np.pi*x) + 0.5*np.sin(80.0 * 2.0*np.pi*x) yf = scipy.fftpack.fft(y) xf = np.linspace(0.0, 1.0\/(2.0*T), N\/\/2) fig, ax = plt.subplots() ax.plot(xf, 2.0\/N * np.abs(yf[:N\/\/2])) plt.show() ``` I get what I believe to be very reasonable output. It's been longer than I care to admit since I was in engineering school thinking about signal processing, but spikes at 50 and 80 are exactly what I would expect. So what's the issue? In response to the raw data and comments being posted The problem here is that you don't have periodic data. You should always inspect the data that you feed into any algorithm to make sure that it's appropriate. 
``` import pandas import matplotlib.pyplot as plt #import seaborn %matplotlib inline # the OP's data x = pandas.read_csv('http:\/\/pastebin.com\/raw.php?i=ksM4FvZS', skiprows=2, header=None).values y = pandas.read_csv('http:\/\/pastebin.com\/raw.php?i=0WhjjMkb', skiprows=2, header=None).values fig, ax = plt.subplots() ax.plot(x, y) ```", "best_answers_score":0.8, "library_name":"scipy", "question_url":"https:\/\/stackoverflow.com\/questions\/25735153\/plotting-a-fast-fourier-transform-in-python", "best_answers_votes":127, "question_length":1765, "response_length":1256 }, { "question":"How to calculate probability in a normal distribution given mean & standard deviation? How to calculate probability in normal distribution given mean, std in Python? I can always explicitly code my own function according to the definition like the OP in this question did: Calculating Probability of a Random Variable in a Distribution in Python Just wondering if there is a library function call will allow you to do this. In my imagine it would like this: ``` nd = NormalDistribution(mu=100, std=12) p = nd.prob(98) ``` There is a similar question in Perl: How can I compute the probability at a point given a normal distribution in Perl?. But I didn't see one in Python. Numpy has a random.normal function, but it's like sampling, not exactly what I want.", "response":"There's one in scipy.stats: ``` >>> import scipy.stats >>> scipy.stats.norm(0, 1) >>> scipy.stats.norm(0, 1).pdf(0) 0.3989422804014327 >>> scipy.stats.norm(0, 1).cdf(0) 0.5 >>> scipy.stats.norm(100, 12) >>> scipy.stats.norm(100, 12).pdf(98) 0.032786643008494994 >>> scipy.stats.norm(100, 12).cdf(98) 0.43381616738909634 >>> scipy.stats.norm(100, 12).cdf(100) 0.5 ``` [One thing to beware of -- just a tip -- is that the parameter passing is a little broad. 
Because of the way the code is set up, if you accidentally write scipy.stats.norm(mean=100, std=12) instead of scipy.stats.norm(100, 12) or scipy.stats.norm(loc=100, scale=12), then it'll accept it, but silently discard those extra keyword arguments and give you the default (0,1).]", "best_answers_score":0.8, "library_name":"scipy", "question_url":"https:\/\/stackoverflow.com\/questions\/12412895\/how-to-calculate-probability-in-a-normal-distribution-given-mean-standard-devi", "best_answers_votes":170, "question_length":758, "response_length":741 }, { "question":"How can I perform two-dimensional interpolation using scipy? This Q&A is intended as a canonical(-ish) post concerning two-dimensional (and multi-dimensional) interpolation using scipy. There are often questions concerning the basic syntax of various multidimensional interpolation methods; I hope to set these straight too. I have a set of scattered two-dimensional data points, and I would like to plot them as a nice surface, preferably using something like contourf or plot_surface in matplotlib.pyplot. How can I interpolate my two-dimensional or multidimensional data to a mesh using scipy? I've found the scipy.interpolate sub-package, but I keep getting errors when using interp2d or bisplrep or griddata or RBFInterpolator (or the older Rbf). What is the proper syntax of these methods?", "response":"Disclaimer: I'm mostly writing this post with syntactical considerations and general behaviour in mind. I'm not familiar with the memory and CPU aspect of the methods described, and I aim this answer at those who have reasonably small sets of data, such that the quality of the interpolation can be the main aspect to consider. I am aware that when working with very large data sets, the better-performing methods (namely griddata and RBFInterpolator without a neighbors keyword argument) might not be feasible. Note that this answer uses the new RBFInterpolator class introduced in SciPy 1.7.0.
For the legacy Rbf class see the previous version of this answer. I'm going to compare three kinds of multi-dimensional interpolation methods (interp2d\/splines, griddata and RBFInterpolator). I will subject them to two kinds of interpolation tasks and two kinds of underlying functions (points from which are to be interpolated). The specific examples will demonstrate two-dimensional interpolation, but the viable methods are applicable in arbitrary dimensions. Each method provides various kinds of interpolation; in all cases I will use cubic interpolation (or something close1). It's important to note that whenever you use interpolation you introduce bias compared to your raw data, and the specific methods used affect the artifacts that you will end up with. Always be aware of this, and interpolate responsibly. The two interpolation tasks will be upsampling (input data is on a rectangular grid, output data is on a denser grid) interpolation of scattered data onto a regular grid The two functions (over the domain [x, y] in [-1, 1]x[-1, 1]) will be a smooth and friendly function: cos(pi*x)*sin(pi*y); range in [-1, 1] an evil (and in particular, non-continuous) function: x*y \/ (x^2 + y^2) with a value of 0.5 near the origin; range in [-0.5, 0.5] Here's how they look: I will first demonstrate how the three methods behave under these four tests, then I'll detail the syntax of all three. If you know what you should expect from a method, you might not want to waste your time learning its syntax (looking at you, interp2d). Test data For the sake of explicitness, here is the code with which I generated the input data. While in this specific case I'm obviously aware of the function underlying the data, I will only use this to generate input for the interpolation methods. I use numpy for convenience (and mostly for generating the data), but scipy alone would suffice too. 
``` import numpy as np import scipy.interpolate as interp # auxiliary function for mesh generation def gimme_mesh(n): minval = -1 maxval = 1 # produce an asymmetric shape in order to catch issues with transpositions return np.meshgrid(np.linspace(minval, maxval, n), np.linspace(minval, maxval, n + 1)) # set up underlying test functions, vectorized def fun_smooth(x, y): return np.cos(np.pi*x) * np.sin(np.pi*y) def fun_evil(x, y): # watch out for singular origin; function has no unique limit there return np.where(x**2 + y**2 > 1e-10, x*y\/(x**2+y**2), 0.5) # sparse input mesh, 6x7 in shape N_sparse = 6 x_sparse, y_sparse = gimme_mesh(N_sparse) z_sparse_smooth = fun_smooth(x_sparse, y_sparse) z_sparse_evil = fun_evil(x_sparse, y_sparse) # scattered input points, 10^2 altogether (shape (100,)) N_scattered = 10 rng = np.random.default_rng() x_scattered, y_scattered = rng.random((2, N_scattered**2))*2 - 1 z_scattered_smooth = fun_smooth(x_scattered, y_scattered) z_scattered_evil = fun_evil(x_scattered, y_scattered) # dense output mesh, 20x21 in shape N_dense = 20 x_dense, y_dense = gimme_mesh(N_dense) ``` Smooth function and upsampling Let's start with the easiest task. Here's how an upsampling from a mesh of shape [6, 7] to one of [20, 21] works out for the smooth test function: Even though this is a simple task, there are already subtle differences between the outputs. At a first glance all three outputs are reasonable. There are two features to note, based on our prior knowledge of the underlying function: the middle case of griddata distorts the data most. Note the y == -1 boundary of the plot (nearest the x label): the function should be strictly zero (since y == -1 is a nodal line for the smooth function), yet this is not the case for griddata. 
Also note the x == -1 boundary of the plots (behind, to the left): the underlying function has a local maximum (implying zero gradient near the boundary) at [-1, -0.5], yet the griddata output shows clearly non-zero gradient in this region. The effect is subtle, but it's a bias none the less. Evil function and upsampling A bit harder task is to perform upsampling on our evil function: Clear differences are starting to show among the three methods. Looking at the surface plots, there are clear spurious extrema appearing in the output from interp2d (note the two humps on the right side of the plotted surface). While griddata and RBFInterpolator seem to produce similar results at first glance, producing local minima near [0.4, -0.4] that is absent from the underlying function. However, there is one crucial aspect in which RBFInterpolator is far superior: it respects the symmetry of the underlying function (which is of course also made possible by the symmetry of the sample mesh). The output from griddata breaks the symmetry of the sample points, which is already weakly visible in the smooth case. Smooth function and scattered data Most often one wants to perform interpolation on scattered data. For this reason I expect these tests to be more important. As shown above, the sample points were chosen pseudo-uniformly in the domain of interest. In realistic scenarios you might have additional noise with each measurement, and you should consider whether it makes sense to interpolate your raw data to begin with. Output for the smooth function: Now there's already a bit of a horror show going on. I clipped the output from interp2d to between [-1, 1] exclusively for plotting, in order to preserve at least a minimal amount of information. It's clear that while some of the underlying shape is present, there are huge noisy regions where the method completely breaks down. 
The second case of griddata reproduces the shape fairly nicely, but note the white regions at the border of the contour plot. This is due to the fact that griddata only works inside the convex hull of the input data points (in other words, it doesn't perform any extrapolation). I kept the default NaN value for output points lying outside the convex hull.2 Considering these features, RBFInterpolator seems to perform best. Evil function and scattered data And the moment we've all been waiting for: It's no huge surprise that interp2d gives up. In fact, during the call to interp2d you should expect some friendly RuntimeWarnings complaining about the impossibility of the spline to be constructed. As for the other two methods, RBFInterpolator seems to produce the best output, even near the borders of the domain where the result is extrapolated. So let me say a few words about the three methods, in decreasing order of preference (so that the worst is the least likely to be read by anybody). scipy.interpolate.RBFInterpolator The RBF in the name of the RBFInterpolator class stands for \"radial basis functions\". To be honest I've never considered this approach until I started researching for this post, but I'm pretty sure I'll be using these in the future. Just like the spline-based methods (see later), usage comes in two steps: first one creates a callable RBFInterpolator class instance based on the input data, and then calls this object for a given output mesh to obtain the interpolated result. 
Example from the smooth upsampling test: ```py import scipy.interpolate as interp sparse_points = np.stack([x_sparse.ravel(), y_sparse.ravel()], -1) # shape (N, 2) in 2d dense_points = np.stack([x_dense.ravel(), y_dense.ravel()], -1) # shape (N, 2) in 2d zfun_smooth_rbf = interp.RBFInterpolator(sparse_points, z_sparse_smooth.ravel(), smoothing=0, kernel='cubic') # explicit default smoothing=0 for interpolation z_dense_smooth_rbf = zfun_smooth_rbf(dense_points).reshape(x_dense.shape) # not really a function, but a callable class instance zfun_evil_rbf = interp.RBFInterpolator(sparse_points, z_sparse_evil.ravel(), smoothing=0, kernel='cubic') # explicit default smoothing=0 for interpolation z_dense_evil_rbf = zfun_evil_rbf(dense_points).reshape(x_dense.shape) # not really a function, but a callable class instance ``` Note that we had to do some array building gymnastics to make the API of RBFInterpolator happy. Since we have to pass the 2d points as arrays of shape (N, 2), we have to flatten the input grid and stack the two flattened arrays. The constructed interpolator also expects query points in this format, and the result will be a 1d array of shape (N,) which we have to reshape back to match our 2d grid for plotting. Since RBFInterpolator makes no assumptions about the number of dimensions of the input points, it supports arbitrary dimensions for interpolation. 
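As a minimal hedged sketch of this dimension-agnostic API (my own toy data, not from the test suite above): in one dimension the observation points still have to be passed with shape (n, 1), and with smoothing=0 the interpolant reproduces the data values at the nodes:

```python
import numpy as np
from scipy.interpolate import RBFInterpolator

# toy 1d data; RBFInterpolator always expects points of shape (n, ndim)
x_obs = np.linspace(0, 1, 10)[:, None]       # shape (10, 1)
y_obs = np.sin(2 * np.pi * x_obs[:, 0])      # shape (10,)

rbf = RBFInterpolator(x_obs, y_obs, smoothing=0, kernel='thin_plate_spline')

x_new = np.linspace(0, 1, 50)[:, None]       # query points, shape (50, 1)
y_new = rbf(x_new)                           # interpolated values, shape (50,)
```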
So, scipy.interpolate.RBFInterpolator
- produces well-behaved output even for crazy input data
- supports interpolation in higher dimensions
- extrapolates outside the convex hull of the input points (of course extrapolation is always a gamble, and you should generally not rely on it at all)
- creates an interpolator as a first step, so evaluating it in various output points is less additional effort
- can have output point arrays of arbitrary shape (as opposed to being constrained to rectangular meshes, see later)
- is more likely to preserve the symmetry of the input data
- supports multiple kinds of radial functions for the keyword kernel: multiquadric, inverse_multiquadric, inverse_quadratic, gaussian, linear, cubic, quintic, thin_plate_spline (the default); as of SciPy 1.7.0 the class doesn't allow passing a custom callable due to technical reasons, but this is likely to be added in a future version
- can give inexact interpolations by increasing the smoothing parameter
One drawback of RBF interpolation is that interpolating N data points involves inverting an N x N matrix. This quadratic complexity very quickly blows up memory need for a large number of data points. However, the new RBFInterpolator class also supports a neighbors keyword parameter that restricts computation of each radial basis function to k nearest neighbours, thereby reducing memory need. scipy.interpolate.griddata My former favourite, griddata, is a general workhorse for interpolation in arbitrary dimensions. It doesn't perform extrapolation beyond setting a single preset value for points outside the convex hull of the nodal points, but since extrapolation is a very fickle and dangerous thing, this is not necessarily a con.
Usage example: ```py sparse_points = np.stack([x_sparse.ravel(), y_sparse.ravel()], -1) # shape (N, 2) in 2d z_dense_smooth_griddata = interp.griddata(sparse_points, z_sparse_smooth.ravel(), (x_dense, y_dense), method='cubic') # default method is linear ``` Note that the same array transformations were necessary for the input arrays as for RBFInterpolator. The input points have to be specified in an array of shape [N, D] in D dimensions, or alternatively as a tuple of 1d arrays: ```py z_dense_smooth_griddata = interp.griddata((x_sparse.ravel(), y_sparse.ravel()), z_sparse_smooth.ravel(), (x_dense, y_dense), method='cubic') ``` The output point arrays can be specified as a tuple of arrays of arbitrary dimensions (as in both above snippets), which gives us some more flexibility. In a nutshell, scipy.interpolate.griddata
- produces well-behaved output even for crazy input data
- supports interpolation in higher dimensions
- does not perform extrapolation; a single value can be set for the output outside the convex hull of the input points (see fill_value)
- computes the interpolated values in a single call, so probing multiple sets of output points starts from scratch
- can have output points of arbitrary shape
- supports nearest-neighbour and linear interpolation in arbitrary dimensions, cubic in 1d and 2d; nearest-neighbour and linear interpolation use NearestNDInterpolator and LinearNDInterpolator under the hood, respectively, while 1d cubic interpolation uses a spline and 2d cubic interpolation uses CloughTocher2DInterpolator to construct a continuously differentiable piecewise-cubic interpolator
- might violate the symmetry of the input data
scipy.interpolate.interp2d\/scipy.interpolate.bisplrep The only reason I'm discussing interp2d and its relatives is that it has a deceptive name, and people are likely to try using it. Spoiler alert: don't use it. interp2d was deprecated in SciPy version 1.10, and will be removed in SciPy 1.12. See this mailing list discussion for details.
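Given that deprecation, here is a hedged migration sketch (my own toy grid, using only the default linear method): for data sampled on a regular grid, scipy.interpolate.RegularGridInterpolator is one commonly suggested replacement for interp2d:

```python
import numpy as np
from scipy.interpolate import RegularGridInterpolator

# 1d coordinate vectors defining the regular grid
x = np.linspace(-1, 1, 6)
y = np.linspace(-1, 1, 7)
# grid values; indexing='ij' matches the (len(x), len(y)) layout the class expects
xx, yy = np.meshgrid(x, y, indexing='ij')
zz = np.cos(np.pi * xx) * np.sin(np.pi * yy)

rgi = RegularGridInterpolator((x, y), zz)    # method='linear' by default

# query at arbitrary (x, y) pairs, passed as an array of shape (n, 2)
pts = np.array([[0.0, 0.25], [0.5, -0.5]])
vals = rgi(pts)
```

Unlike interp2d, the query points need not lie on a rectangular mesh.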
It's also more special than the previous subjects in that it's specifically used for two-dimensional interpolation, but I suspect this is by far the most common case for multivariate interpolation. As far as syntax goes, interp2d is similar to RBFInterpolator in that it first needs constructing an interpolation instance, which can be called to provide the actual interpolated values. There's a catch, however: the output points have to be located on a rectangular mesh, so inputs going into the call to the interpolator have to be 1d vectors which span the output grid, as if from numpy.meshgrid: ``` # reminder: x_sparse and y_sparse are of shape [6, 7] from numpy.meshgrid zfun_smooth_interp2d = interp.interp2d(x_sparse, y_sparse, z_sparse_smooth, kind='cubic') # default kind is 'linear' # reminder: x_dense and y_dense are of shape (20, 21) from numpy.meshgrid xvec = x_dense[0,:] # 1d array of unique x values, 20 elements yvec = y_dense[:,0] # 1d array of unique y values, 21 elements z_dense_smooth_interp2d = zfun_smooth_interp2d(xvec, yvec) # output is (20, 21)-shaped array ``` One of the most common mistakes when using interp2d is putting your full 2d meshes into the interpolation call, which leads to explosive memory consumption, and hopefully to a hasty MemoryError. Now, the greatest problem with interp2d is that it often doesn't work. In order to understand this, we have to look under the hood. It turns out that interp2d is a wrapper for the lower-level functions bisplrep + bisplev, which are in turn wrappers for FITPACK routines (written in Fortran). 
The equivalent call to the previous example would be ``` kind = 'cubic' if kind == 'linear': kx = ky = 1 elif kind == 'cubic': kx = ky = 3 elif kind == 'quintic': kx = ky = 5 # bisplrep constructs a spline representation, bisplev evaluates the spline at given points bisp_smooth = interp.bisplrep(x_sparse.ravel(), y_sparse.ravel(), z_sparse_smooth.ravel(), kx=kx, ky=ky, s=0) z_dense_smooth_bisplrep = interp.bisplev(xvec, yvec, bisp_smooth).T # note the transpose ``` Now, here's the thing about interp2d: (in scipy version 1.7.0) there is a nice comment in interpolate\/interpolate.py for interp2d: ``` if not rectangular_grid: # TODO: surfit is really not meant for interpolation! self.tck = fitpack.bisplrep(x, y, z, kx=kx, ky=ky, s=0.0) ``` and indeed in interpolate\/fitpack.py, in bisplrep there's some setup and ultimately ``` tx, ty, c, o = _fitpack._surfit(x, y, z, w, xb, xe, yb, ye, kx, ky, task, s, eps, tx, ty, nxest, nyest, wrk, lwrk1, lwrk2) ``` And that's it. The routines underlying interp2d are not really meant to perform interpolation. They might suffice for sufficiently well-behaved data, but under realistic circumstances you will probably want to use something else. Just to conclude, interpolate.interp2d can lead to artifacts even with well-tempered data is specifically for bivariate problems (although there's the limited interpn for input points defined on a grid) performs extrapolation creates an interpolator as a first step, so evaluating it in various output points is less additional effort can only produce output over a rectangular grid, for scattered output you would have to call the interpolator in a loop supports linear, cubic and quintic interpolation might violate the symmetry of the input data 1I'm fairly certain that the cubic and linear kind of basis functions of RBFInterpolator do not exactly correspond to the other interpolators of the same name. 
2These NaNs are also the reason for why the surface plot seems so odd: matplotlib historically has difficulties with plotting complex 3d objects with proper depth information. The NaN values in the data confuse the renderer, so parts of the surface that should be in the back are plotted to be in the front. This is an issue with visualization, and not interpolation.", "best_answers_score":0.8, "library_name":"scipy", "question_url":"https:\/\/stackoverflow.com\/questions\/37872171\/how-can-i-perform-two-dimensional-interpolation-using-scipy", "best_answers_votes":221, "question_length":790, "response_length":16593 }, { "question":"Histogram Matplotlib So I have a little problem. I have a data set in scipy that is already in the histogram format, so I have the center of the bins and the number of events per bin. How can I now plot it as a histogram? I tried just doing ``` bins, n=hist() ``` but it didn't like that. Any recommendations?", "response":"``` import matplotlib.pyplot as plt import numpy as np mu, sigma = 100, 15 x = mu + sigma * np.random.randn(10000) hist, bins = np.histogram(x, bins=50) width = 0.7 * (bins[1] - bins[0]) center = (bins[:-1] + bins[1:]) \/ 2 plt.bar(center, hist, align='center', width=width) plt.show() ``` The object-oriented interface is also straightforward: ``` fig, ax = plt.subplots() ax.bar(center, hist, align='center', width=width) fig.savefig(\"1.png\") ``` If you are using custom (non-constant) bins, you can compute the widths using np.diff, pass the widths to ax.bar and use ax.set_xticks to label the bin edges: ``` import matplotlib.pyplot as plt import numpy as np mu, sigma = 100, 15 x = mu + sigma * np.random.randn(10000) bins = [0, 40, 60, 75, 90, 110, 125, 140, 160, 200] hist, bins = np.histogram(x, bins=bins) width = np.diff(bins) center = (bins[:-1] + bins[1:]) \/ 2 fig, ax = plt.subplots(figsize=(8,3)) ax.bar(center, hist, align='center', width=width) ax.set_xticks(bins) fig.savefig(\"\/tmp\/out.png\") plt.show() ```", 
"best_answers_score":0.8, "library_name":"scipy", "question_url":"https:\/\/stackoverflow.com\/questions\/5328556\/histogram-matplotlib", "best_answers_votes":268, "question_length":309, "response_length":1027 }, { "question":"savefig without frames, axes, only content In numpy\/scipy I have an image stored in an array. I can display it, I want to save it using savefig without any borders, axes, labels, titles,... Just pure image, nothing else. I want to avoid packages like PyPNG or scipy.misc.imsave, they are sometimes problematic (they do not always install well), only basic savefig() for me", "response":"EDIT Changed aspect='normal to aspect='auto' since that changed in more recent versions of matplotlib (thanks to @Luke19). Assuming : ```py import matplotlib.pyplot as plt ``` To make a figure without the frame : ```py fig = plt.figure(frameon=False) fig.set_size_inches(w,h) ``` To make the content fill the whole figure ```py ax = plt.Axes(fig, [0., 0., 1., 1.]) ax.set_axis_off() fig.add_axes(ax) ``` Then draw your image on it : ```py ax.imshow(your_image, aspect='auto') fig.savefig(fname, dpi) ``` The aspect parameter changes the pixel size to make sure they fill the figure size specified in fig.set_size_inches(\u2026). To get a feel of how to play with this sort of things, read through matplotlib's documentation, particularly on the subject of Axes, Axis and Artist.", "best_answers_score":0.8, "library_name":"scipy", "question_url":"https:\/\/stackoverflow.com\/questions\/8218608\/savefig-without-frames-axes-only-content", "best_answers_votes":152, "question_length":372, "response_length":773 }, { "question":"Quantile-Quantile Plot using SciPy How would you create a qq-plot using Python? Assuming that you have a large set of measurements and are using some plotting function that takes XY-values as input. The function should plot the quantiles of the measurements against the corresponding quantiles of some distribution (normal, uniform...). 
The resulting plot lets us then evaluate whether our measurement follows the assumed distribution or not. http:\/\/en.wikipedia.org\/wiki\/Quantile-quantile_plot Both R and Matlab provide ready-made functions for this, but I am wondering what the cleanest method for implementing it in Python would be.", "response":"Update: As folks have pointed out this answer is not correct. A probplot is different from a quantile-quantile plot. Please see those comments and other answers before you make an error in interpreting or conveying your distributions' relationship. I think that scipy.stats.probplot will do what you want. See the documentation for more detail. ``` import numpy as np import pylab import scipy.stats as stats measurements = np.random.normal(loc = 20, scale = 5, size=100) stats.probplot(measurements, dist=\"norm\", plot=pylab) pylab.show() ``` Result", "best_answers_score":0.8, "library_name":"scipy", "question_url":"https:\/\/stackoverflow.com\/questions\/13865596\/quantile-quantile-plot-using-scipy", "best_answers_votes":139, "question_length":630, "response_length":549 }, { "question":"How to calculate the inverse of the normal cumulative distribution function in python? How do I calculate the inverse of the cumulative distribution function (CDF) of the normal distribution in Python? Which library should I use? Possibly scipy?", "response":"NORMSINV (mentioned in a comment) is the inverse of the CDF of the standard normal distribution. Using scipy, you can compute this with the ppf method of the scipy.stats.norm object. The acronym ppf stands for percent point function, which is another name for the quantile function. ``` In [20]: from scipy.stats import norm In [21]: norm.ppf(0.95) Out[21]: 1.6448536269514722 ``` Check that it is the inverse of the CDF: ``` In [34]: norm.cdf(norm.ppf(0.95)) Out[34]: 0.94999999999999996 ``` By default, norm.ppf uses mean=0 and stddev=1, which is the \"standard\" normal distribution. 
You can use a different mean and standard deviation by specifying the loc and scale arguments, respectively. ``` In [35]: norm.ppf(0.95, loc=10, scale=2) Out[35]: 13.289707253902945 ``` If you look at the source code for scipy.stats.norm, you'll find that the ppf method ultimately calls scipy.special.ndtri. So to compute the inverse of the CDF of the standard normal distribution, you could use that function directly: ``` In [43]: from scipy.special import ndtri In [44]: ndtri(0.95) Out[44]: 1.6448536269514722 ``` ndtri is much faster than norm.ppf: ``` In [46]: %timeit norm.ppf(0.95) 240 \u00b5s \u00b1 1.75 \u00b5s per loop (mean \u00b1 std. dev. of 7 runs, 1,000 loops each) In [47]: %timeit ndtri(0.95) 1.47 \u00b5s \u00b1 1.3 ns per loop (mean \u00b1 std. dev. of 7 runs, 1,000,000 loops each) ```", "best_answers_score":0.8, "library_name":"scipy", "question_url":"https:\/\/stackoverflow.com\/questions\/20626994\/how-to-calculate-the-inverse-of-the-normal-cumulative-distribution-function-in-p", "best_answers_votes":180, "question_length":245, "response_length":1358 }, { "question":"Principal component analysis in Python I'd like to use principal component analysis (PCA) for dimensionality reduction. Does numpy or scipy already have it, or do I have to roll my own using numpy.linalg.eigh? I don't just want to use singular value decomposition (SVD) because my input data are quite high-dimensional (~460 dimensions), so I think SVD will be slower than computing the eigenvectors of the covariance matrix. I was hoping to find a premade, debugged implementation that already makes the right decisions for when to use which method, and which maybe does other optimizations that I don't know about.", "response":"Months later, here's a small class PCA, and a picture: ``` #!\/usr\/bin\/env python \"\"\" a small class for Principal Component Analysis Usage: p = PCA( A, fraction=0.90 ) In: A: an array of e.g. 
1000 observations x 20 variables, 1000 rows x 20 columns fraction: use principal components that account for e.g. 90 % of the total variance Out: p.U, p.d, p.Vt: from numpy.linalg.svd, A = U . d . Vt p.dinv: 1\/d or 0, see NR p.eigen: the eigenvalues of A*A, in decreasing order (p.d**2). eigen[j] \/ eigen.sum() is variable j's fraction of the total variance; look at the first few eigen[] to see how many PCs get to 90 %, 95 % ... p.npc: number of principal components, e.g. 2 if the top 2 eigenvalues are >= `fraction` of the total. It's ok to change this; methods use the current value. Methods: The methods of class PCA transform vectors or arrays of e.g. 20 variables, 2 principal components and 1000 observations, using partial matrices U' d' Vt', parts of the full U d Vt: A ~ U' . d' . Vt' where e.g. U' is 1000 x 2 d' is diag([ d0, d1 ]), the 2 largest singular values Vt' is 2 x 20. Dropping the primes, d . Vt 2 principal vars = p.vars_pc( 20 vars ) U 1000 obs = p.pc_obs( 2 principal vars ) U . d . Vt 1000 obs, p.obs( 20 vars ) = pc_obs( vars_pc( vars )) fast approximate A . vars, using the `npc` principal components Ut 2 pcs = p.obs_pc( 1000 obs ) V . dinv 20 vars = p.pc_vars( 2 principal vars ) V . dinv . Ut 20 vars, p.vars( 1000 obs ) = pc_vars( obs_pc( obs )), fast approximate Ainverse . obs: vars that give ~ those obs. Notes: PCA does not center or scale A; you usually want to first A -= A.mean(A, axis=0) A \/= A.std(A, axis=0) with the little class Center or the like, below. 
See also: http:\/\/en.wikipedia.org\/wiki\/Principal_component_analysis http:\/\/en.wikipedia.org\/wiki\/Singular_value_decomposition Press et al., Numerical Recipes (2 or 3 ed), SVD PCA micro-tutorial iris-pca .py .png \"\"\" from __future__ import division import numpy as np dot = np.dot # import bz.numpyutil as nu # dot = nu.pdot __version__ = \"2010-04-14 apr\" __author_email__ = \"denis-bz-py at t-online dot de\" #............................................................................... class PCA: def __init__( self, A, fraction=0.90 ): assert 0 <= fraction <= 1 self.U, self.d, self.Vt = np.linalg.svd( A, full_matrices=False ) assert np.all( self.d[:-1] >= self.d[1:] ) # sorted self.eigen = self.d**2 self.sumvariance = np.cumsum(self.eigen) self.sumvariance \/= self.sumvariance[-1] self.npc = np.searchsorted( self.sumvariance, fraction ) + 1 self.dinv = np.array([ 1\/d if d > self.d[0] * 1e-6 else 0 for d in self.d ]) def pc( self ): \"\"\" e.g. 1000 x 2 U[:, :npc] * d[:npc], to plot etc. \"\"\" n = self.npc return self.U[:, :n] * self.d[:n] # These 1-line methods may not be worth the bother; # then use U d Vt directly -- def vars_pc( self, x ): n = self.npc return self.d[:n] * dot( self.Vt[:n], x.T ).T # 20 vars -> 2 principal def pc_vars( self, p ): n = self.npc return dot( self.Vt[:n].T, (self.dinv[:n] * p).T ) .T # 2 PC -> 20 vars def pc_obs( self, p ): n = self.npc return dot( self.U[:, :n], p.T ) # 2 principal -> 1000 obs def obs_pc( self, obs ): n = self.npc return dot( self.U[:, :n].T, obs ) .T # 1000 obs -> 2 principal def obs( self, x ): return self.pc_obs( self.vars_pc(x) ) # 20 vars -> 2 principal -> 1000 obs def vars( self, obs ): return self.pc_vars( self.obs_pc(obs) ) # 1000 obs -> 2 principal -> 20 vars class Center: \"\"\" A -= A.mean() \/= A.std(), inplace -- use A.copy() if need be uncenter(x) == original A . x \"\"\" # mttiw def __init__( self, A, axis=0, scale=True, verbose=1 ): self.mean = A.mean(axis=axis) if verbose: print \"Center -= A.mean:\", self.mean A -= self.mean if scale: std = A.std(axis=axis) self.std = np.where( std, std, 1. 
) if verbose: print \"Center \/= A.std:\", self.std A \/= self.std else: self.std = np.ones( A.shape[-1] ) self.A = A def uncenter( self, x ): return np.dot( self.A, x * self.std ) + np.dot( x, self.mean ) #............................................................................... if __name__ == \"__main__\": import sys csv = \"iris4.csv\" # wikipedia Iris_flower_data_set # 5.1,3.5,1.4,0.2 # ,Iris-setosa ... N = 1000 K = 20 fraction = .90 seed = 1 exec \"\\n\".join( sys.argv[1:] ) # N= ... np.random.seed(seed) np.set_printoptions( 1, threshold=100, suppress=True ) # .1f try: A = np.genfromtxt( csv, delimiter=\",\" ) N, K = A.shape except IOError: A = np.random.normal( size=(N, K) ) # gen correlated ? print \"csv: %s N: %d K: %d fraction: %.2g\" % (csv, N, K, fraction) Center(A) print \"A:\", A print \"PCA ...\" , p = PCA( A, fraction=fraction ) print \"npc:\", p.npc print \"% variance:\", p.sumvariance * 100 print \"Vt[0], weights that give PC 0:\", p.Vt[0] print \"A . Vt[0]:\", dot( A, p.Vt[0] ) print \"pc:\", p.pc() print \"\\nobs pc x: with fraction=1, diffs should be ~ 0\" x = np.ones(K) # x = np.ones(( 3, K )) print \"x:\", x pc = p.vars_pc(x) # d' Vt' x print \"vars_pc(x):\", pc print \"back to ~ x:\", p.pc_vars(pc) Ax = dot( A, x.T ) pcx = p.obs(x) # U' d' Vt' x print \"Ax:\", Ax print \"A'x:\", pcx print \"max |Ax - A'x|: %.2g\" % np.linalg.norm( Ax - pcx, np.inf ) b = Ax # ~ back to original x, Ainv A x back = p.vars(b) print \"~ back again:\", back print \"max |back - x|: %.2g\" % np.linalg.norm( back - x, np.inf ) # end pca.py ```", "best_answers_score":0.8, "library_name":"scipy", "question_url":"https:\/\/stackoverflow.com\/questions\/1730600\/principal-component-analysis-in-python", "best_answers_votes":67, "question_length":616, "response_length":5182 }, { "question":"Creating lowpass filter in SciPy - understanding methods and units I am trying to filter a noisy heart rate signal with python. 
Because heart rates should never be above about 220 beats per minute, I want to filter out all noise above 220 bpm. I converted 220\/minute into 3.66666666 Hertz and then converted that Hertz to rad\/s to get 23.0383461 rad\/sec. The sampling frequency of the chip that takes data is 30Hz so I converted that to rad\/s to get 188.495559 rad\/s. After looking up some stuff online I found some functions for a bandpass filter that I wanted to make into a lowpass. Here is the link to the bandpass code, so I converted it to be this: ``` from scipy.signal import butter, lfilter from scipy.signal import freqs def butter_lowpass(cutOff, fs, order=5): nyq = 0.5 * fs normalCutoff = cutOff \/ nyq b, a = butter(order, normalCutoff, btype='low', analog = True) return b, a def butter_lowpass_filter(data, cutOff, fs, order=4): b, a = butter_lowpass(cutOff, fs, order=order) y = lfilter(b, a, data) return y cutOff = 23.1 #cutoff frequency in rad\/s fs = 188.495559 #sampling frequency in rad\/s order = 20 #order of filter #print sticker_data.ps1_dxdt2 y = butter_lowpass_filter(data, cutOff, fs, order) plt.plot(y) ``` I am very confused by this though because I am pretty sure the butter function takes in the cutoff and sampling frequency in rad\/s but I seem to be getting a weird output. Is it actually in Hz? Secondly, what is the purpose of these two lines: ``` nyq = 0.5 * fs normalCutoff = cutOff \/ nyq ``` I know it's something about normalization but I thought the Nyquist was 2 times the sampling frequency, not one half. And why are you using the Nyquist as a normalizer? Can someone explain more about how to create filters with these functions? 
I plotted the filter using: ``` w, h = signal.freqs(b, a) plt.plot(w, 20 * np.log10(abs(h))) plt.xscale('log') plt.title('Butterworth filter frequency response') plt.xlabel('Frequency [radians \/ second]') plt.ylabel('Amplitude [dB]') plt.margins(0, 0.1) plt.grid(which='both', axis='both') plt.axvline(100, color='green') # cutoff frequency plt.show() ``` and got this which clearly does not cut-off at 23 rad\/s:", "response":"A few comments: The Nyquist frequency is half the sampling rate. You are working with regularly sampled data, so you want a digital filter, not an analog filter. This means you should not use analog=True in the call to butter, and you should use scipy.signal.freqz (not freqs) to generate the frequency response. One goal of those short utility functions is to allow you to leave all your frequencies expressed in Hz. You shouldn't have to convert to rad\/sec. As long as you express your frequencies with consistent units, the fs parameter of the SciPy functions will take care of the scaling for you. Here's my modified version of your script, followed by the plot that it generates. ``` import numpy as np from scipy.signal import butter, lfilter, freqz import matplotlib.pyplot as plt def butter_lowpass(cutoff, fs, order=5): return butter(order, cutoff, fs=fs, btype='low', analog=False) def butter_lowpass_filter(data, cutoff, fs, order=5): b, a = butter_lowpass(cutoff, fs, order=order) y = lfilter(b, a, data) return y # Filter requirements. order = 6 fs = 30.0 # sample rate, Hz cutoff = 3.667 # desired cutoff frequency of the filter, Hz # Get the filter coefficients so we can check its frequency response. b, a = butter_lowpass(cutoff, fs, order) # Plot the frequency response. 
w, h = freqz(b, a, fs=fs, worN=8000) plt.subplot(2, 1, 1) plt.plot(w, np.abs(h), 'b') plt.plot(cutoff, 0.5*np.sqrt(2), 'ko') plt.axvline(cutoff, color='k') plt.xlim(0, 0.5*fs) plt.title(\"Lowpass Filter Frequency Response\") plt.xlabel('Frequency [Hz]') plt.grid() # Demonstrate the use of the filter. # First make some data to be filtered. T = 5.0 # seconds n = int(T * fs) # total number of samples t = np.linspace(0, T, n, endpoint=False) # \"Noisy\" data. We want to recover the 1.2 Hz signal from this. data = np.sin(1.2*2*np.pi*t) + 1.5*np.cos(9*2*np.pi*t) + 0.5*np.sin(12.0*2*np.pi*t) # Filter the data, and plot both the original and filtered signals. y = butter_lowpass_filter(data, cutoff, fs, order) plt.subplot(2, 1, 2) plt.plot(t, data, 'b-', label='data') plt.plot(t, y, 'g-', linewidth=2, label='filtered data') plt.xlabel('Time [sec]') plt.grid() plt.legend() plt.subplots_adjust(hspace=0.35) plt.show() ```", "best_answers_score":0.8, "library_name":"scipy", "question_url":"https:\/\/stackoverflow.com\/questions\/25191620\/creating-lowpass-filter-in-scipy-understanding-methods-and-units", "best_answers_votes":211, "question_length":2182, "response_length":2208 }, { "question":"How to delete columns in numpy.array I would like to delete selected columns in a numpy.array . This is what I do: ``` n [397]: a = array([[ NaN, 2., 3., NaN], .....: [ 1., 2., 3., 9]]) In [398]: print a [[ NaN 2. 3. NaN] [ 1. 2. 3. 9.]] In [399]: z = any(isnan(a), axis=0) In [400]: print z [ True False False True] In [401]: delete(a, z, axis = 1) Out[401]: array([[ 3., NaN], [ 3., 9.]]) ``` In this example my goal is to delete all the columns that contain NaN's. 
I expect the last command to result in: ``` array([[2., 3.], [2., 3.]]) ``` How can I do that?", "response":"Given its name, I think the standard way should be delete: ``` import numpy as np A = np.delete(A, 1, 0) # delete second row of A B = np.delete(B, 2, 0) # delete third row of B C = np.delete(C, 1, 1) # delete second column of C ``` According to numpy's documentation page, the parameters for numpy.delete are as follow: numpy.delete(arr, obj, axis=None) arr refers to the input array, obj refers to which sub-arrays (e.g. column\/row no. or slice of the array) and axis refers to either column wise (axis = 1) or row-wise (axis = 0) delete operation.", "best_answers_score":0.8, "library_name":"scipy", "question_url":"https:\/\/stackoverflow.com\/questions\/1642730\/how-to-delete-columns-in-numpy-array", "best_answers_votes":180, "question_length":562, "response_length":549 }, { "question":"How to implement band-pass Butterworth filter with Scipy.signal.butter UPDATE: I found a Scipy Recipe based in this question! So, for anyone interested, go straight to: Contents \u00bb Signal processing \u00bb Butterworth Bandpass I'm having a hard time to achieve what seemed initially a simple task of implementing a Butterworth band-pass filter for 1-D numpy array (time-series). The parameters I have to include are the sample_rate, cutoff frequencies IN HERTZ and possibly order (other parameters, like attenuation, natural frequency, etc. are more obscure to me, so any \"default\" value would do). What I have now is this, which seems to work as a high-pass filter but I'm no way sure if I'm doing it right: ``` def butter_highpass(interval, sampling_rate, cutoff, order=5): nyq = sampling_rate * 0.5 stopfreq = float(cutoff) cornerfreq = 0.4 * stopfreq # (?) ws = cornerfreq\/nyq wp = stopfreq\/nyq # for bandpass: # wp = [0.2, 0.5], ws = [0.1, 0.6] N, wn = scipy.signal.buttord(wp, ws, 3, 16) # (?) 
# for hardcoded order: # N = order b, a = scipy.signal.butter(N, wn, btype='high') # should 'high' be here for bandpass? sf = scipy.signal.lfilter(b, a, interval) return sf ``` The docs and examples are confusing and obscure, but I'd like to implement the form presented in the comment marked as \"for bandpass\". The question marks in the comments show where I just copy-pasted some example without understanding what is happening. I am no electrical engineer or scientist, just a medical equipment designer needing to perform some rather straightforward bandpass filtering on EMG signals.", "response":"You could skip the use of buttord, and instead just pick an order for the filter and see if it meets your filtering criterion. To generate the filter coefficients for a bandpass filter, give butter() the filter order, the cutoff frequencies Wn=[lowcut, highcut], the sampling rate fs (expressed in the same units as the cutoff frequencies) and the band type btype=\"band\". Here's a script that defines a couple convenience functions for working with a Butterworth bandpass filter. When run as a script, it makes two plots. One shows the frequency response at several filter orders for the same sampling rate and cutoff frequencies. The other plot demonstrates the effect of the filter (with order=6) on a sample time series. ``` from scipy.signal import butter, lfilter def butter_bandpass(lowcut, highcut, fs, order=5): return butter(order, [lowcut, highcut], fs=fs, btype='band') def butter_bandpass_filter(data, lowcut, highcut, fs, order=5): b, a = butter_bandpass(lowcut, highcut, fs, order=order) y = lfilter(b, a, data) return y if __name__ == \"__main__\": import numpy as np import matplotlib.pyplot as plt from scipy.signal import freqz # Sample rate and desired cutoff frequencies (in Hz). fs = 5000.0 lowcut = 500.0 highcut = 1250.0 # Plot the frequency response for a few different orders. 
plt.figure(1) plt.clf() for order in [3, 6, 9]: b, a = butter_bandpass(lowcut, highcut, fs, order=order) w, h = freqz(b, a, fs=fs, worN=2000) plt.plot(w, abs(h), label=\"order = %d\" % order) plt.plot([0, 0.5 * fs], [np.sqrt(0.5), np.sqrt(0.5)], '--', label='sqrt(0.5)') plt.xlabel('Frequency (Hz)') plt.ylabel('Gain') plt.grid(True) plt.legend(loc='best') # Filter a noisy signal. T = 0.05 nsamples = T * fs t = np.arange(0, nsamples) \/ fs a = 0.02 f0 = 600.0 x = 0.1 * np.sin(2 * np.pi * 1.2 * np.sqrt(t)) x += 0.01 * np.cos(2 * np.pi * 312 * t + 0.1) x += a * np.cos(2 * np.pi * f0 * t + .11) x += 0.03 * np.cos(2 * np.pi * 2000 * t) plt.figure(2) plt.clf() plt.plot(t, x, label='Noisy signal') y = butter_bandpass_filter(x, lowcut, highcut, fs, order=6) plt.plot(t, y, label='Filtered signal (%g Hz)' % f0) plt.xlabel('time (seconds)') plt.hlines([-a, a], 0, T, linestyles='--') plt.grid(True) plt.axis('tight') plt.legend(loc='upper left') plt.show() ``` Here are the plots that are generated by this script:", "best_answers_score":0.8, "library_name":"scipy", "question_url":"https:\/\/stackoverflow.com\/questions\/12093594\/how-to-implement-band-pass-butterworth-filter-with-scipy-signal-butter", "best_answers_votes":150, "question_length":1585, "response_length":2311 }, { "question":"Two-sample Kolmogorov-Smirnov Test in Python Scipy I can't figure out how to do a Two-sample KS test in Scipy. After reading the documentation of scipy kstest, I can see how to test whether a distribution is identical to standard normal distribution ```py from scipy.stats import kstest import numpy as np x = np.random.normal(0,1,1000) test_stat = kstest(x, 'norm') #>>> test_stat #(0.021080234718821145, 0.76584491300591395) ``` Which means that at p-value of 0.76 we cannot reject the null hypothesis that the two distributions are identical. 
However, I want to compare two distributions and see if I can reject the null hypothesis that they are identical, something like: ```py from scipy.stats import kstest import numpy as np x = np.random.normal(0,1,1000) z = np.random.normal(1.1,0.9, 1000) ``` and test whether x and z are identical. I tried the naive: ```py test_stat = kstest(x, z) ``` and got the following error: ```none TypeError: 'numpy.ndarray' object is not callable ``` Is there a way to do a two-sample KS test in Python? If so, how should I do it?", "response":"You are using the one-sample KS test. You probably want the two-sample test ks_2samp: ``` >>> from scipy.stats import ks_2samp >>> import numpy as np >>> >>> np.random.seed(12345678) >>> x = np.random.normal(0, 1, 1000) >>> y = np.random.normal(0, 1, 1000) >>> z = np.random.normal(1.1, 0.9, 1000) >>> >>> ks_2samp(x, y) Ks_2sampResult(statistic=0.022999999999999909, pvalue=0.95189016804849647) >>> ks_2samp(x, z) Ks_2sampResult(statistic=0.41800000000000004, pvalue=3.7081494119242173e-77) ``` Results can be interpreted as follows: You can either compare the statistic value given by Python to the KS-test critical value table according to your sample size. When the statistic value is higher than the critical value, the two distributions are different. Or you can compare the p-value to a level of significance a, usually a=0.05 or 0.01 (you decide, the lower a is, the more significant). If the p-value is lower than a, then it is very probable that the two distributions are different.", "best_answers_score":0.8, "library_name":"scipy", "question_url":"https:\/\/stackoverflow.com\/questions\/10884668\/two-sample-kolmogorov-smirnov-test-in-python-scipy", "best_answers_votes":157, "question_length":1067, "response_length":986 }, { "question":"How to check the version of scipy How can I check the version of scipy installed on my system?", "response":"``` In [95]: import scipy In [96]: scipy.__version__ Out[96]: '0.12.0' In [104]: scipy.version.*version? 
scipy.version.full_version scipy.version.short_version scipy.version.version In [105]: scipy.version.full_version Out[105]: '0.12.0' In [106]: scipy.version.git_revision Out[106]: 'cdd6b32233bbecc3e8cbc82531905b74f3ea66eb' In [107]: scipy.version.release Out[107]: True In [108]: scipy.version.short_version Out[108]: '0.12.0' In [109]: scipy.version.version Out[109]: '0.12.0' ``` See SciPy developer documentation for reference.", "best_answers_score":0.8, "library_name":"scipy", "question_url":"https:\/\/stackoverflow.com\/questions\/21385196\/how-to-check-the-version-of-scipy", "best_answers_votes":132, "question_length":94, "response_length":535 }, { "question":"shuffle vs permute numpy What is the difference between numpy.random.shuffle(x) and numpy.random.permutation(x)? I have read the doc pages but I could not understand if there was any difference between the two when I just want to randomly shuffle the elements of an array. To be more precise suppose I have an array x=[1,4,2,8]. If I want to generate random permutations of x, then what is the difference between shuffle(x) and permutation(x)?", "response":"np.random.permutation has two differences from np.random.shuffle: if passed an array, it will return a shuffled copy of the array; np.random.shuffle shuffles the array in place if passed an integer, it will return a shuffled range i.e. np.random.shuffle(np.arange(n)) If x is an integer, randomly permute np.arange(x). If x is an array, make a copy and shuffle the elements randomly. The source code might help to understand this: ``` 3280 def permutation(self, object x): ... 
3307 if isinstance(x, (int, np.integer)): 3308 arr = np.arange(x) 3309 else: 3310 arr = np.array(x) 3311 self.shuffle(arr) 3312 return arr ```", "best_answers_score":0.8, "library_name":"scipy", "question_url":"https:\/\/stackoverflow.com\/questions\/15474159\/shuffle-vs-permute-numpy", "best_answers_votes":138, "question_length":443, "response_length":618 }, { "question":"How do you find the IQR in Numpy? Is there a baked-in Numpy\/Scipy function to find the interquartile range? I can do it pretty easily myself, but mean() exists which is basically sum\/len... ``` def IQR(dist): return np.percentile(dist, 75) - np.percentile(dist, 25) ```", "response":"np.percentile takes multiple percentile arguments, and you are slightly better off doing: ``` q75, q25 = np.percentile(x, [75 ,25]) iqr = q75 - q25 ``` or ``` iqr = np.subtract(*np.percentile(x, [75, 25])) ``` than making two calls to percentile: ``` In [8]: x = np.random.rand(1e6) In [9]: %timeit q75, q25 = np.percentile(x, [75 ,25]); iqr = q75 - q25 10 loops, best of 3: 24.2 ms per loop In [10]: %timeit iqr = np.subtract(*np.percentile(x, [75, 25])) 10 loops, best of 3: 24.2 ms per loop In [11]: %timeit iqr = np.percentile(x, 75) - np.percentile(x, 25) 10 loops, best of 3: 33.7 ms per loop ```", "best_answers_score":0.8, "library_name":"scipy", "question_url":"https:\/\/stackoverflow.com\/questions\/23228244\/how-do-you-find-the-iqr-in-numpy", "best_answers_votes":160, "question_length":269, "response_length":602 }, { "question":"Find out if a matrix is positive definite with NumPy How can I find out if a matrix is positive definite? My matrix is a NumPy matrix. I was expecting to find a related method in the NumPy library, but I didn't have any success.", "response":"You can also check if all the eigenvalues of the matrix are positive. 
If so, the matrix is positive definite: ``` import numpy as np def is_pos_def(x): return np.all(np.linalg.eigvals(x) > 0) ```", "best_answers_score":0.8, "library_name":"scipy", "question_url":"https:\/\/stackoverflow.com\/questions\/16266720\/find-out-if-a-matrix-is-positive-definite-with-numpy", "best_answers_votes":114, "question_length":230, "response_length":191 }, { "question":"Save \/ load scipy sparse csr_matrix in portable data format How do you save\/load a scipy sparse csr_matrix in a portable format? The scipy sparse matrix is created on Python 3 (Windows 64-bit) to run on Python 2 (Linux 64-bit). Initially, I used pickle (with protocol=2 and fix_imports=True) but this didn't work going from Python 3.2.2 (Windows 64-bit) to Python 2.7.2 (Windows 32-bit) and got the error: ``` TypeError: ('data type not understood', , (, (0,), '[98]')). ``` Next, tried numpy.save and numpy.load as well as scipy.io.mmwrite() and scipy.io.mmread() and none of these methods worked either.", "response":"edit: scipy 0.19 now has scipy.sparse.save_npz and scipy.sparse.load_npz. ``` from scipy import sparse sparse.save_npz(\"yourmatrix.npz\", your_matrix) your_matrix_back = sparse.load_npz(\"yourmatrix.npz\") ``` For both functions, the file argument may also be a file-like object (i.e. the result of open) instead of a filename. Got an answer from the Scipy user group: A csr_matrix has 3 data attributes that matter: .data, .indices, and .indptr. All are simple ndarrays, so numpy.save will work on them. 
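To make those three attributes concrete, here is a minimal sketch (the tiny 2x3 matrix is made up for illustration):

```python
# Minimal sketch: the three ndarrays that fully describe a CSR matrix.
import numpy as np
from scipy.sparse import csr_matrix

m = csr_matrix(np.array([[1, 0, 2],
                         [0, 0, 3]]))
print(m.data)     # nonzero values, stored row by row -> [1 2 3]
print(m.indices)  # column index of each value in m.data -> [0 2 2]
print(m.indptr)   # m.data[m.indptr[i]:m.indptr[i+1]] holds row i -> [0 2 3]
```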
Save the three arrays with numpy.save or numpy.savez, load them back with numpy.load, and then recreate the sparse matrix object with: ``` new_csr = csr_matrix((data, indices, indptr), shape=(M, N)) ``` So for example: ``` def save_sparse_csr(filename, array): np.savez(filename, data=array.data, indices=array.indices, indptr=array.indptr, shape=array.shape) def load_sparse_csr(filename): loader = np.load(filename) return csr_matrix((loader['data'], loader['indices'], loader['indptr']), shape=loader['shape']) ```", "best_answers_score":0.8, "library_name":"scipy", "question_url":"https:\/\/stackoverflow.com\/questions\/8955448\/save-load-scipy-sparse-csr-matrix-in-portable-data-format", "best_answers_votes":146, "question_length":605, "response_length":1019 }, { "question":"Resampling a numpy array representing an image I am looking for how to resample a numpy array representing image data at a new size, preferably having a choice of the interpolation method (nearest, bilinear, etc.). I know there is ``` scipy.misc.imresize ``` which does exactly this by wrapping PIL's resize function. The only problem is that since it uses PIL, the numpy array has to conform to image formats, giving me a maximum of 4 \"color\" channels. I want to be able to resize arbitrary images, with any number of \"color\" channels. I was wondering if there is a simple way to do this in scipy\/numpy, or if I need to roll my own. I have two ideas for how to concoct one myself: a function that runs scipy.misc.imresize on every channel separately create my own using scipy.ndimage.interpolation.affine_transform The first one would probably be slow for large data, and the second one does not seem to offer any other interpolation method except splines.", "response":"Based on your description, you want scipy.ndimage.zoom. Bilinear interpolation would be order=1, nearest is order=0, and cubic is the default (order=3). 
zoom is specifically for regularly-gridded data that you want to resample to a new resolution. As a quick example: ``` import numpy as np import scipy.ndimage x = np.arange(9).reshape(3,3) print 'Original array:' print x print 'Resampled by a factor of 2 with nearest interpolation:' print scipy.ndimage.zoom(x, 2, order=0) print 'Resampled by a factor of 2 with bilinear interpolation:' print scipy.ndimage.zoom(x, 2, order=1) print 'Resampled by a factor of 2 with cubic interpolation:' print scipy.ndimage.zoom(x, 2, order=3) ``` And the result: ``` Original array: [[0 1 2] [3 4 5] [6 7 8]] Resampled by a factor of 2 with nearest interpolation: [[0 0 1 1 2 2] [0 0 1 1 2 2] [3 3 4 4 5 5] [3 3 4 4 5 5] [6 6 7 7 8 8] [6 6 7 7 8 8]] Resampled by a factor of 2 with bilinear interpolation: [[0 0 1 1 2 2] [1 2 2 2 3 3] [2 3 3 4 4 4] [4 4 4 5 5 6] [5 5 6 6 6 7] [6 6 7 7 8 8]] Resampled by a factor of 2 with cubic interpolation: [[0 0 1 1 2 2] [1 1 1 2 2 3] [2 2 3 3 4 4] [4 4 5 5 6 6] [5 6 6 7 7 7] [6 6 7 7 8 8]] ``` Edit: As Matt S. pointed out, there are a couple of caveats for zooming multi-band images. I'm copying the portion below almost verbatim from one of my earlier answers: Zooming also works for 3D (and nD) arrays. However, be aware that if you zoom by 2x, for example, you'll zoom along all axes. ``` data = np.arange(27).reshape(3,3,3) print 'Original:\\n', data print 'Zoomed by 2x gives an array of shape:', ndimage.zoom(data, 2).shape ``` This yields: ``` Original: [[[ 0 1 2] [ 3 4 5] [ 6 7 8]] [[ 9 10 11] [12 13 14] [15 16 17]] [[18 19 20] [21 22 23] [24 25 26]]] Zoomed by 2x gives an array of shape: (6, 6, 6) ``` In the case of multi-band images, you usually don't want to interpolate along the \"z\" axis, creating new bands. 
If you have something like a 3-band, RGB image that you'd like to zoom, you can do this by specifying a sequence of tuples as the zoom factor: ``` print 'Zoomed by 2x along the last two axes:' print ndimage.zoom(data, (1, 2, 2)) ``` This yields: ``` Zoomed by 2x along the last two axes: [[[ 0 0 1 1 2 2] [ 1 1 1 2 2 3] [ 2 2 3 3 4 4] [ 4 4 5 5 6 6] [ 5 6 6 7 7 7] [ 6 6 7 7 8 8]] [[ 9 9 10 10 11 11] [10 10 10 11 11 12] [11 11 12 12 13 13] [13 13 14 14 15 15] [14 15 15 16 16 16] [15 15 16 16 17 17]] [[18 18 19 19 20 20] [19 19 19 20 20 21] [20 20 21 21 22 22] [22 22 23 23 24 24] [23 24 24 25 25 25] [24 24 25 25 26 26]]] ```", "best_answers_score":0.8, "library_name":"scipy", "question_url":"https:\/\/stackoverflow.com\/questions\/13242382\/resampling-a-numpy-array-representing-an-image", "best_answers_votes":134, "question_length":957, "response_length":2534 }, { "question":"Fitting a Normal distribution to 1D data I have a 1 dimensional array. I can compute the \"mean\" and \"standard deviation\" of this sample and plot the \"Normal distribution\" but I have a problem: I want to plot the data and Normal distribution in the same figure. I dont know how to plot both the data and the normal distribution. Any Idea about \"Gaussian probability density function in scipy.stats\"? ``` s = np.std(array) m = np.mean(array) plt.plot(norm.pdf(array,m,s)) ```", "response":"You can use matplotlib to plot the histogram and the PDF (as in the link in @MrE's answer). For fitting and for computing the PDF, you can use scipy.stats.norm, as follows. ``` import numpy as np from scipy.stats import norm import matplotlib.pyplot as plt # Generate some data for this demonstration. data = norm.rvs(10.0, 2.5, size=500) # Fit a normal distribution to the data: mu, std = norm.fit(data) # Plot the histogram. plt.hist(data, bins=25, density=True, alpha=0.6, color='g') # Plot the PDF. 
xmin, xmax = plt.xlim() x = np.linspace(xmin, xmax, 100) p = norm.pdf(x, mu, std) plt.plot(x, p, 'k', linewidth=2) title = "Fit results: mu = %.2f, std = %.2f" % (mu, std) plt.title(title) plt.show() ``` Here's the plot generated by the script:", "best_answers_score":0.8, "library_name":"scipy", "question_url":"https:\/\/stackoverflow.com\/questions\/20011122\/fitting-a-normal-distribution-to-1d-data", "best_answers_votes":191, "question_length":473, "response_length":747 }, { "question":"Mesh grid functions in Python (meshgrid mgrid ogrid ndgrid) I'm looking for a clear comparison of meshgrid-like functions. Unfortunately I don't find it! Numpy http:\/\/docs.scipy.org\/doc\/numpy\/reference\/ provides mgrid ogrid meshgrid Scitools http:\/\/hplgit.github.io\/scitools\/doc\/api\/html\/index.html provides ndgrid boxgrid Ideally a table summarizing all this would be perfect!", "response":"numpy.meshgrid is modelled after Matlab's meshgrid command. It is used to vectorise functions of two variables, so that you can write ``` x = numpy.array([1, 2, 3]) y = numpy.array([10, 20, 30]) XX, YY = numpy.meshgrid(x, y) ZZ = XX + YY ZZ => array([[11, 12, 13], [21, 22, 23], [31, 32, 33]]) ``` So ZZ contains all the combinations of x and y put into the function. When you think about it, meshgrid is a bit superfluous for numpy arrays, as they broadcast. This means you can do ``` XX, YY = numpy.atleast_2d(x, y) YY = YY.T # transpose to allow broadcasting ZZ = XX + YY ``` and get the same result. mgrid and ogrid are helper classes which use index notation so that you can create XX and YY in the previous examples directly, without having to use something like linspace. The order in which the outputs are generated is reversed.
``` YY, XX = numpy.mgrid[10:40:10, 1:4] ZZ = XX + YY # These are equivalent to the output of meshgrid YY, XX = numpy.ogrid[10:40:10, 1:4] ZZ = XX + YY # These are equivalent to the atleast_2d example ``` I am not familiar with the scitools stuff, but ndgrid seems equivalent to meshgrid, while BoxGrid is actually a whole class to help with this kind of generation.", "best_answers_score":0.8, "library_name":"scipy", "question_url":"https:\/\/stackoverflow.com\/questions\/12402045\/mesh-grid-functions-in-python-meshgrid-mgrid-ogrid-ndgrid", "best_answers_votes":93, "question_length":377, "response_length":1201 }, { "question":"How do I plot list of tuples? I have the following data set. I would like to use Python or Gnuplot to plot the data. The tuples are of the form (x, y). The Y-axis should be a log axis, that is, log(y). A scatter plot or line plot would be ideal. How can this be done? ``` [(0, 6.0705199999997801e-08), (1, 2.1015700100300739e-08), (2, 7.6280656623374823e-09), (3, 5.7348209304555086e-09), (4, 3.6812203579604238e-09), (5, 4.1572516753310418e-09)] ```", "response":"If I get your question correctly, you could do something like this. 
``` >>> import matplotlib.pyplot as plt >>> testList =[(0, 6.0705199999997801e-08), (1, 2.1015700100300739e-08), (2, 7.6280656623374823e-09), (3, 5.7348209304555086e-09), (4, 3.6812203579604238e-09), (5, 4.1572516753310418e-09)] >>> from math import log >>> testList2 = [(elem1, log(elem2)) for elem1, elem2 in testList] >>> testList2 [(0, -16.617236475334405), (1, -17.67799605473062), (2, -18.691431541177973), (3, -18.9767093108359), (4, -19.420021520728017), (5, -19.298411635970396)] >>> zip(*testList2) [(0, 1, 2, 3, 4, 5), (-16.617236475334405, -17.67799605473062, -18.691431541177973, -18.9767093108359, -19.420021520728017, -19.298411635970396)] >>> plt.scatter(*zip(*testList2)) >>> plt.show() ``` which would give you something like Or as a line plot, ``` >>> plt.plot(*zip(*testList2)) >>> plt.show() ``` EDIT - If you want to add a title and labels for the axis, you could do something like ``` >>> plt.scatter(*zip(*testList2)) >>> plt.title('Random Figure') >>> plt.xlabel('X-Axis') >>> plt.ylabel('Y-Axis') >>> plt.show() ``` which would give you", "best_answers_score":0.8, "library_name":"scipy", "question_url":"https:\/\/stackoverflow.com\/questions\/18458734\/how-do-i-plot-list-of-tuples", "best_answers_votes":119, "question_length":450, "response_length":1130 }, { "question":"Working with big data in python and numpy, not enough ram, how to save partial results on disc? I am trying to implement algorithms for 1000-dimensional data with 200k+ datapoints in python. I want to use numpy, scipy, sklearn, networkx, and other useful libraries. I want to perform operations such as pairwise distance between all of the points and do clustering on all of the points. I have implemented working algorithms that perform what I want with reasonable complexity but when I try to scale them to all of my data I run out of RAM. Of course, I do, creating the matrix for pairwise distances on 200k+ data takes a lot of memory. 
Here comes the catch: I would really like to do this on crappy computers with low amounts of RAM. Is there a feasible way for me to make this work without the constraints of low RAM? That it will take a much longer time is really not a problem, as long as the time reqs don't go to infinity! I would like to be able to put my algorithms to work and then come back an hour or five later and not have it stuck because it ran out of RAM! I would like to implement this in python, and be able to use the numpy, scipy, sklearn, and networkx libraries. I would like to be able to calculate the pairwise distance to all my points etc Is this feasible? And how would I go about it, what can I start to read up on?", "response":"Using numpy.memmap you create arrays directly mapped into a file: ``` import numpy a = numpy.memmap('test.mymemmap', dtype='float32', mode='w+', shape=(200000,1000)) # here you will see a 762MB file created in your working directory ``` You can treat it as a conventional array: a += 1000. It is even possible to assign more arrays to the same file, controlling it from multiple sources if needed. But I've experienced some tricky things here. To open the full array you have to \"close\" the previous one first, using del: ``` del a b = numpy.memmap('test.mymemmap', dtype='float32', mode='r+', shape=(200000,1000)) ``` But opening only some part of the array makes it possible to achieve the simultaneous control: ``` b = numpy.memmap('test.mymemmap', dtype='float32', mode='r+', shape=(2,1000)) b[1,5] = 123456. print a[1,5] #123456.0 ``` Great! a was changed together with b. And the changes are already written on disk. The other important thing worth commenting is the offset. Suppose you want to take not the first 2 lines in b, but lines 150000 and 150001. ``` b = numpy.memmap('test.mymemmap', dtype='float32', mode='r+', shape=(2,1000), offset=150000*1000*32\/8) b[1,2] = 999999.
print a[150001,2] #999999.0 ``` Now you can access and update any part of the array in simultaneous operations. Note the byte-size going in the offset calculation. So for a 'float64' this example would be 150000*1000*64\/8. Other references: Is it possible to map discontinuous data on disk to an array with python? numpy.memmap documentation here.", "best_answers_score":0.8, "library_name":"scipy", "question_url":"https:\/\/stackoverflow.com\/questions\/16149803\/working-with-big-data-in-python-and-numpy-not-enough-ram-how-to-save-partial-r", "best_answers_votes":102, "question_length":1344, "response_length":1536 }, { "question":"What is the difference between numpy.fft and scipy.fftpack? Is the latter just a synonym of the former, or are they two different implementations of FFT? Which one is better?", "response":"SciPy does more: http:\/\/docs.scipy.org\/doc\/numpy\/reference\/routines.fft.html http:\/\/docs.scipy.org\/doc\/scipy\/reference\/fftpack.html# In addition, SciPy exports some of the NumPy features through its own interface, for example if you execute scipy.fftpack.helper.fftfreq and numpy.fft.helper.fftfreq you're actually running the same code. However, SciPy has its own implementations of much functionality. The source has performance benchmarks that compare the original NumPy and new SciPy versions.
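A rough comparison can also be run directly with timeit (a minimal sketch; the array size and call count are arbitrary, and in modern SciPy the scipy.fft module supersedes scipy.fftpack, though fftpack still works):

```python
# Rough sketch: time numpy's and scipy's FFT implementations on the same input.
import timeit

setup = "import numpy as np; import scipy.fftpack; x = np.random.rand(1024)"
t_numpy = timeit.timeit("np.fft.fft(x)", setup=setup, number=1000)
t_scipy = timeit.timeit("scipy.fftpack.fft(x)", setup=setup, number=1000)
print("numpy: %.4f s  scipy: %.4f s" % (t_numpy, t_scipy))
```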
My archaic laptop shows something like this: ``` Fast Fourier Transform ================================================= | real input | complex input ------------------------------------------------- size | scipy | numpy | scipy | numpy ------------------------------------------------- 100 | 0.07 | 0.06 | 0.06 | 0.07 (secs for 7000 calls) 1000 | 0.06 | 0.09 | 0.09 | 0.09 (secs for 2000 calls) 256 | 0.11 | 0.11 | 0.12 | 0.11 (secs for 10000 calls) 512 | 0.16 | 0.21 | 0.20 | 0.21 (secs for 10000 calls) 1024 | 0.03 | 0.04 | 0.04 | 0.04 (secs for 1000 calls) 2048 | 0.05 | 0.09 | 0.08 | 0.08 (secs for 1000 calls) 4096 | 0.05 | 0.08 | 0.07 | 0.09 (secs for 500 calls) 8192 | 0.10 | 0.20 | 0.19 | 0.21 (secs for 500 calls) ``` It does seem that SciPy runs significantly faster as the array increases in size, though these are just contrived examples and it would be worth experimenting with both for your particular project. It's worth checking out the source code http:\/\/www.scipy.org\/Download#head-312ad78cdf85a9ca6fa17a266752069d23f785d1 . Yes those .f files really are Fortran! :-D", "best_answers_score":0.8, "library_name":"scipy", "question_url":"https:\/\/stackoverflow.com\/questions\/6363154\/what-is-the-difference-between-numpy-fft-and-scipy-fftpack", "best_answers_votes":47, "question_length":173, "response_length":1585 }, { "question":"How to understand numpy strides for layman? I am currently going through numpy and there is a topic in numpy called \"strides\". I understand what it is. But how does it work? I did not find any useful information online. Can anyone let me understand in a layman's terms?", "response":"The actual data of a numpy array is stored in a homogeneous and contiguous block of memory called data buffer. For more information see NumPy internals. Using the (default) row-major order, a 2D array looks like this: To map the indices i,j,k,... 
of a multidimensional array to the positions in the data buffer (the offset, in bytes), NumPy uses the notion of strides. Strides are the number of bytes to jump over in memory in order to get from one item to the next item along each direction\/dimension of the array. In other words, it's the byte-separation between consecutive items for each dimension. For example: ``` >>> a = np.arange(1,10).reshape(3,3) >>> a array([[1, 2, 3], [4, 5, 6], [7, 8, 9]]) ``` This 2D array has two directions, axis-0 (running vertically downwards across rows), and axis-1 (running horizontally across columns), with each item having size: ``` >>> a.itemsize # in bytes 4 ``` So to go from a[0, 0] -> a[0, 1] (moving horizontally along the 0th row, from the 0th column to the 1st column) the byte-step in the data buffer is 4. Same for a[0, 1] -> a[0, 2], a[1, 0] -> a[1, 1] etc. This means that the stride for the horizontal direction (axis-1) is 4 bytes. However, to go from a[0, 0] -> a[1, 0] (moving vertically along the 0th column, from the 0th row to the 1st row), you first need to traverse all the remaining items on the 0th row to get to the 1st row, and then move through the 1st row to get to the item a[1, 0], i.e. a[0, 0] -> a[0, 1] -> a[0, 2] -> a[1, 0]. Therefore the stride for the vertical direction (axis-0) is 3*4 = 12 bytes. Note that going from a[0, 2] -> a[1, 0], and in general from the last item of the i-th row to the first item of the (i+1)-th row, is also 4 bytes because the array a is stored in row-major order. That's why ``` >>> a.strides # (strides[0], strides[1]) (12, 4) ``` Here's another example showing that the stride in the horizontal direction (axis-1), strides[1], of a 2D array is not necessarily equal to the item size (e.g. an array with column-major order): ``` >>> b = np.array([[1, 4, 7], [2, 5, 8], [3, 6, 9]]).T >>> b array([[1, 2, 3], [4, 5, 6], [7, 8, 9]]) >>> b.strides (4, 12) ``` Here strides[1] is a multiple of the item-size.
Although the array b looks identical to the array a, it's a different array: internally b is stored as |1|4|7|2|5|8|3|6|9| (because transposing doesn't affect the data buffer but only swaps the strides and the shape), whereas a as |1|2|3|4|5|6|7|8|9|. What makes them look alike is the different strides. That is, the byte-step for b[0, 0] -> b[0, 1] is 3*4=12 bytes and for b[0, 0] -> b[1, 0] is 4 bytes, whereas for a[0, 0] -> a[0, 1] is 4 bytes and for a[0, 0] -> a[1, 0] is 12 bytes. Last but not least, NumPy allows to create views of existing arrays with the option of modifying the strides and the shape, see stride tricks. For example: ``` >>> np.lib.stride_tricks.as_strided(a, shape=a.shape[::-1], strides=a.strides[::-1]) array([[1, 4, 7], [2, 5, 8], [3, 6, 9]]) ``` which is equivalent to transposing the array a. Let me just add, but without going into much detail, that one can even define strides that are not multiples of the item size. Here's an example: ``` >>> a = np.lib.stride_tricks.as_strided(np.array([1, 512, 0, 3], dtype=np.int16), shape=(3,), strides=(3,)) >>> a array([1, 2, 3], dtype=int16) >>> a.strides[0] 3 >>> a.itemsize 2 ```", "best_answers_score":0.8, "library_name":"scipy", "question_url":"https:\/\/stackoverflow.com\/questions\/53097952\/how-to-understand-numpy-strides-for-layman", "best_answers_votes":134, "question_length":269, "response_length":3401 }, { "question":"Colorize Voronoi Diagram I'm trying to colorize a Voronoi Diagram created using scipy.spatial.Voronoi. 
Here's my code: ``` import numpy as np import matplotlib.pyplot as plt from scipy.spatial import Voronoi, voronoi_plot_2d # make up data points points = np.random.rand(15,2) # compute Voronoi tesselation vor = Voronoi(points) # plot voronoi_plot_2d(vor) # colorize for region in vor.regions: if not -1 in region: polygon = [vor.vertices[i] for i in region] plt.fill(*zip(*polygon)) plt.show() ``` The resulting image: As you can see some of the Voronoi regions at the border of the image are not colored. That is because some indices to the Voronoi vertices for these regions are set to -1, i.e., for those vertices outside the Voronoi diagram. According to the docs: regions: (list of list of ints, shape (nregions, *)) Indices of the Voronoi vertices forming each Voronoi region. -1 indicates vertex outside the Voronoi diagram. In order to colorize these regions as well, I've tried to just remove these \"outside\" vertices from the polygon, but that didn't work. I think, I need to fill in some points at the border of the image region, but I can't seem to figure out how to achieve this reasonably. Can anyone help?", "response":"The Voronoi data structure contains all the necessary information to construct positions for the \"points at infinity\". Qhull also reports them simply as -1 indices, so Scipy doesn't compute them for you. https:\/\/gist.github.com\/pv\/8036995 http:\/\/nbviewer.ipython.org\/gist\/pv\/8037100 ``` import numpy as np import matplotlib.pyplot as plt from scipy.spatial import Voronoi def voronoi_finite_polygons_2d(vor, radius=None): \"\"\" Reconstruct infinite voronoi regions in a 2D diagram to finite regions. Parameters ---------- vor : Voronoi Input diagram radius : float, optional Distance to 'points at infinity'. Returns ------- regions : list of tuples Indices of vertices in each revised Voronoi regions. vertices : list of tuples Coordinates for revised Voronoi vertices. 
Same as coordinates of input vertices, with 'points at infinity' appended to the end. \"\"\" if vor.points.shape[1] != 2: raise ValueError(\"Requires 2D input\") new_regions = [] new_vertices = vor.vertices.tolist() center = vor.points.mean(axis=0) if radius is None: radius = vor.points.ptp().max() # Construct a map containing all ridges for a given point all_ridges = {} for (p1, p2), (v1, v2) in zip(vor.ridge_points, vor.ridge_vertices): all_ridges.setdefault(p1, []).append((p2, v1, v2)) all_ridges.setdefault(p2, []).append((p1, v1, v2)) # Reconstruct infinite regions for p1, region in enumerate(vor.point_region): vertices = vor.regions[region] if all(v >= 0 for v in vertices): # finite region new_regions.append(vertices) continue # reconstruct a non-finite region ridges = all_ridges[p1] new_region = [v for v in vertices if v >= 0] for p2, v1, v2 in ridges: if v2 < 0: v1, v2 = v2, v1 if v1 >= 0: # finite ridge: already in the region continue # Compute the missing endpoint of an infinite ridge t = vor.points[p2] - vor.points[p1] # tangent t \/= np.linalg.norm(t) n = np.array([-t[1], t[0]]) # normal midpoint = vor.points[[p1, p2]].mean(axis=0) direction = np.sign(np.dot(midpoint - center, n)) * n far_point = vor.vertices[v2] + direction * radius new_region.append(len(new_vertices)) new_vertices.append(far_point.tolist()) # sort region counterclockwise vs = np.asarray([new_vertices[v] for v in new_region]) c = vs.mean(axis=0) angles = np.arctan2(vs[:,1] - c[1], vs[:,0] - c[0]) new_region = np.array(new_region)[np.argsort(angles)] # finish new_regions.append(new_region.tolist()) return new_regions, np.asarray(new_vertices) # make up data points np.random.seed(1234) points = np.random.rand(15, 2) # compute Voronoi tesselation vor = Voronoi(points) # plot regions, vertices = voronoi_finite_polygons_2d(vor) print \"--\" print regions print \"--\" print vertices # colorize for region in regions: polygon = vertices[region] plt.fill(*zip(*polygon), alpha=0.4) plt.plot(points[:,0], points[:,1], 'ko')
plt.xlim(vor.min_bound[0] - 0.1, vor.max_bound[0] + 0.1) plt.ylim(vor.min_bound[1] - 0.1, vor.max_bound[1] + 0.1) plt.show() ```", "best_answers_score":0.8, "library_name":"scipy", "question_url":"https:\/\/stackoverflow.com\/questions\/20515554\/colorize-voronoi-diagram", "best_answers_votes":81, "question_length":1222, "response_length":2885 }, { "question":"How to extend an array in-place in Numpy? Currently, I have some code like this ``` import numpy as np ret = np.array([]) for i in range(100000): tmp = get_input(i) ret = np.append(ret, np.zeros(len(tmp))) ret = np.append(ret, np.ones(fixed_length)) ``` I think this code is not efficient as np.append needs to return a copy of the array instead of modify the ret in-place I was wondering whether I can use the extend for a numpy array like this: ``` import numpy as np from somewhere import np_extend ret = np.array([]) for i in range(100000): tmp = get_input(i) np_extend(ret, np.zeros(len(tmp))) np_extend(ret, np.ones(fixed_length)) ``` So that the extend would be much more efficient. Does anyone have ideas about this? Thanks!", "response":"Imagine a numpy array as occupying one contiguous block of memory. Now imagine other objects, say other numpy arrays, which are occupying the memory just to the left and right of our numpy array. There would be no room to append to or extend our numpy array. The underlying data in a numpy array always occupies a contiguous block of memory. So any request to append to or extend our numpy array can only be satisfied by allocating a whole new larger block of memory, copying the old data into the new block and then appending or extending. So: It will not occur in-place. It will not be efficient.", "best_answers_score":0.8, "library_name":"scipy", "question_url":"https:\/\/stackoverflow.com\/questions\/13215525\/how-to-extend-an-array-in-place-in-numpy", "best_answers_votes":81, "question_length":732, "response_length":598 }, { "question":"How to access sparse matrix elements? 
``` type(A) A.shape (8529, 60877) print A[0,:] (0, 25) 1.0 (0, 7422) 1.0 (0, 26062) 1.0 (0, 31804) 1.0 (0, 41602) 1.0 (0, 43791) 1.0 print A[1,:] (0, 7044) 1.0 (0, 31418) 1.0 (0, 42341) 1.0 (0, 47125) 1.0 (0, 54376) 1.0 print A[:,0] #nothing returned ``` Now what I don't understand is that A[1,:] should select elements from the 2nd row, yet I get elements from the 1st row via print A[1,:]. Also, print A[:,0] should return the first column but I get nothing printed. Why?", "response":"A[1,:] is itself a sparse matrix with shape (1, 60877). This is what you are printing, and it has only one row, so all the row coordinates are 0. For example: ``` In [41]: a = csc_matrix([[1, 0, 0, 0], [0, 0, 10, 11], [0, 0, 0, 99]]) In [42]: a.todense() Out[42]: matrix([[ 1, 0, 0, 0], [ 0, 0, 10, 11], [ 0, 0, 0, 99]], dtype=int64) In [43]: print(a[1, :]) (0, 2) 10 (0, 3) 11 In [44]: print(a) (0, 0) 1 (1, 2) 10 (1, 3) 11 (2, 3) 99 In [45]: print(a[1, :].toarray()) [[ 0 0 10 11]] ``` You can select columns, but if there are no nonzero elements in the column, nothing is displayed when it is output with print: ``` In [46]: a[:, 3].toarray() Out[46]: array([[ 0], [11], [99]]) In [47]: print(a[:,3]) (1, 0) 11 (2, 0) 99 In [48]: a[:, 1].toarray() Out[48]: array([[0], [0], [0]]) In [49]: print(a[:, 1]) In [50]: ``` The last print call shows no output because the column a[:, 1] has no nonzero elements.", "best_answers_score":0.8, "library_name":"scipy", "question_url":"https:\/\/stackoverflow.com\/questions\/15115765\/how-to-access-sparse-matrix-elements", "best_answers_votes":77, "question_length":513, "response_length":907 }, { "question":"AttributeError: 'module' object (scipy) has no attribute 'misc' I updated from ubuntu 12.04 to ubuntu 12.10 and the python module I have written suddenly no longer works with the error message that the module scipy does not have the attribute 'misc'. This worked previously. I am still using python 2.7 after the update. 
Here is where the code crashes ``` import scipy scipy.misc.imsave(slice,dat) ``` Any ideas?", "response":"``` >>> import scipy >>> scipy.misc Traceback (most recent call last): File \"<stdin>\", line 1, in <module> AttributeError: 'module' object has no attribute 'misc' >>> >>> >>> import scipy.misc >>> scipy.misc.imsave >>> ``` Which seems to be quite common with scipy.", "best_answers_score":0.8, "library_name":"scipy", "question_url":"https:\/\/stackoverflow.com\/questions\/13581593\/attributeerror-module-object-scipy-has-no-attribute-misc", "best_answers_votes":93, "question_length":412, "response_length":251 }, { "question":"Does `anaconda` create a separate PYTHONPATH variable for each new environment? I am starting to work with the Python Anaconda distribution from Continuum.io to do scipy work. I have been able to get Anaconda up and running, but I cannot tell whether Anaconda creates a new PYTHONPATH environment variable for each new environment it creates, or whether it relies on the common system PYTHONPATH. I could not find any information on this in the documentation. Further, when I did a printenv, I did not see a PYTHONPATH variable in the newly created environment --though I did find a few new anaconda created environment variables. The best I can find is that Anaconda added some Anaconda directories and the new environment directory to the head of PATH variable --but this does not necessarily isolate the new package from the system environment but it is close. Does anyone know the answer to this question or found a way to deal with this concern?", "response":"Anaconda does not use the PYTHONPATH. One should however note that if the PYTHONPATH is set it could be used to load a library that is not in the anaconda environment.
That is why before activating an environment it might be good to do a ``` unset PYTHONPATH ``` For instance this PYTHONPATH points to an incorrect pandas lib: ``` export PYTHONPATH=\/home\/john\/share\/usr\/anaconda\/lib\/python source activate anaconda-2.7 python >>>> import pandas as pd \/home\/john\/share\/usr\/lib\/python\/pandas-0.12.0-py2.7-linux-x86_64.egg\/pandas\/hashtable.so: undefined symbol: PyUnicodeUCS2_DecodeUTF8 Traceback (most recent call last): File \"\", line 1, in File \"\/home\/john\/share\/usr\/lib\/python\/pandas-0.12.0-py2.7-linux-x86_64.egg\/pandas\/__init__.py\", line 6, in from . import hashtable, tslib, lib ImportError: \/home\/john\/share\/usr\/lib\/python\/pandas-0.12.0-py2.7-linux-x86_64.egg\/pandas\/hashtable.so: undefined symbol: PyUnicodeUCS2_DecodeUTF8 ``` unsetting the PYTHONPATH prevents the wrong pandas lib from being loaded: ``` unset PYTHONPATH source activate anaconda-2.7 python >>>> import pandas as pd >>>> ```", "best_answers_score":0.8, "library_name":"scipy", "question_url":"https:\/\/stackoverflow.com\/questions\/17386880\/does-anaconda-create-a-separate-pythonpath-variable-for-each-new-environment", "best_answers_votes":44, "question_length":950, "response_length":1098 }, { "question":"Selecting Pandas Columns by dtype I was wondering if there is an elegant and shorthand way in Pandas DataFrames to select columns by data type (dtype). i.e. Select only int64 columns from a DataFrame. To elaborate, something along the lines of ``` df.select_columns(dtype=float64) ```", "response":"Since 0.14.1 there's a select_dtypes method so you can do this more elegantly\/generally. 
``` In [11]: df = pd.DataFrame([[1, 2.2, 'three']], columns=['A', 'B', 'C']) In [12]: df.select_dtypes(include=['int']) Out[12]: A 0 1 ``` To select all numeric types use the numpy dtype numpy.number ``` In [13]: df.select_dtypes(include=[np.number]) Out[13]: A B 0 1 2.2 In [14]: df.select_dtypes(exclude=[object]) Out[14]: A B 0 1 2.2 ```", "best_answers_score":0.8, "library_name":"scipy", "question_url":"https:\/\/stackoverflow.com\/questions\/21271581\/selecting-pandas-columns-by-dtype", "best_answers_votes":98, "question_length":284, "response_length":429 }, { "question":"How do I install SciPy on 64 bit Windows? How do I install SciPy on my system? For the NumPy part (that SciPy depends on) there is actually an installer for 64 bit Windows: numpy-1.3.0.win-amd64-py2.6.msi (is direct download URL, 2310144 bytes). Running the SciPy superpack installer results in this message in a dialog box: Cannot install. Python version 2.6 required, which was not found in the registry. I already have Python 2.6.2 installed (and a working Django installation in it), but I don't know about any Registry story. 
The registry entries seem to already exist: ``` REGEDIT4 [HKEY_LOCAL_MACHINE\\SOFTWARE\\Python] [HKEY_LOCAL_MACHINE\\SOFTWARE\\Python\\PythonCore] [HKEY_LOCAL_MACHINE\\SOFTWARE\\Python\\PythonCore\\2.6] [HKEY_LOCAL_MACHINE\\SOFTWARE\\Python\\PythonCore\\2.6\\Help] [HKEY_LOCAL_MACHINE\\SOFTWARE\\Python\\PythonCore\\2.6\\Help\\Main Python Documentation] @=\"D:\\\\Python262\\\\Doc\\\\python262.chm\" [HKEY_LOCAL_MACHINE\\SOFTWARE\\Python\\PythonCore\\2.6\\InstallPath] @=\"D:\\\\Python262\\\\\" [HKEY_LOCAL_MACHINE\\SOFTWARE\\Python\\PythonCore\\2.6\\InstallPath\\InstallGroup] @=\"Python 2.6\" [HKEY_LOCAL_MACHINE\\SOFTWARE\\Python\\PythonCore\\2.6\\Modules] [HKEY_LOCAL_MACHINE\\SOFTWARE\\Python\\PythonCore\\2.6\\PythonPath] @=\"D:\\\\Python262\\\\Lib;D:\\\\Python262\\\\DLLs;D:\\\\Python262\\\\Lib\\\\lib-tk\" ``` What I have done so far: Step 1 Downloaded the NumPy superpack installer numpy-1.3.0rc2-win32-superpack-python2.6.exe (direct download URL, 4782592 bytes). Running this installer resulted in the same message, \"Cannot install. Python version 2.6 required, which was not found in the registry.\". Update: there is actually an installer for NumPy that works - see beginning of the question. Step 2 Tried to install NumPy in another way. Downloaded the zip package numpy-1.3.0rc2.zip (direct download URL, 2404011 bytes), extracted the zip file in a normal way to a temporary directory, D:\\temp7\\numpy-1.3.0rc2 (where setup.py and README.txt are). I then opened a command line window and: ``` d: cd D:\\temp7\\numpy-1.3.0rc2 setup.py install ``` This ran for a long time and also included use of cl.exe (part of Visual Studio). Here is a nearly 5000-line transcript (230 KB). This seemed to work.
I can now do this in Python: ``` import numpy as np np.random.random(10) ``` with this result: ``` array([ 0.35667511, 0.56099423, 0.38423629, 0.09733172, 0.81560421, 0.18813222, 0.10566666, 0.84968066, 0.79472597, 0.30997724]) ``` Step 3 Downloaded the SciPy superpack installer, scipy-0.7.1rc3-win32-superpack-python2.6.exe (direct download URL, 45597175 bytes). Running this installer resulted in the message listed in the beginning. Step 4 Tried to install SciPy in another way. Downloaded the zip package scipy-0.7.1rc3.zip (direct download URL, 5506562 bytes), extracted the zip file in a normal way to a temporary directory, D:\\temp7\\scipy-0.7.1 (where setup.py and README.txt are). I then opened a command line window and: ``` d: cd D:\\temp7\\scipy-0.7.1 setup.py install ``` This did not achieve much - here is a transcript (about 95 lines). And it fails: ``` >>> import scipy as sp2 Traceback (most recent call last): File \"<stdin>\", line 1, in <module> ImportError: No module named scipy ``` Platform: Python 2.6.2 installed in directory D:\\Python262, Windows XP 64 bit SP2, 8 GB RAM, Visual Studio 2008 Professional Edition installed. The startup screen of the installed Python is: ``` Python 2.6.2 (r262:71605, Apr 14 2009, 22:46:50) [MSC v.1500 64 bit (AMD64)] on win32 Type \"help\", \"copyright\", \"credits\" or \"license\" for more information.
>>> ``` Value of PATH, result from SET in a command line window: ``` Path=D:\\Perl64\\site\\bin;D:\\Perl64\\bin;C:\\Program Files (x86)\\PC Connectivity Solution\\;D:\\Perl\\site\\bin;D:\\Perl\\bin;C:\\WINDOWS\\system32;C:\\WINDOWS;C:\\WINDOWS\\System32\\Wbem;C:\\Program Files (x86)\\ATI Technologies\\ATI.ACE\\Core-Static;d:\\Program Files (x86)\\WinSCP\\;D:\\MassLynx\\;D:\\Program Files (x86)\\Analyst\\bin;d:\\Python262;d:\\Python262\\Scripts;D:\\Program Files (x86)\\TortoiseSVN\\bin;D:\\Program Files\\TortoiseSVN\\bin;C:\\WINDOWS\\system32\\WindowsPowerShell\\v1.0;D:\\Program Files (x86)\\IDM Computer Solutions\\UltraEdit\\ ```", "response":"Unofficial 64-bit installers for NumPy and SciPy are available at http:\/\/www.lfd.uci.edu\/~gohlke\/pythonlibs\/ Make sure that you download & install the packages (aka. wheels) that match your CPython version and bitness (ie. cp35 = Python v3.5; win_amd64 = x86_64). You'll want to install NumPy first. From a CMD prompt with administrator privileges for a system-wide (aka. Program Files) install: ``` C:\\>pip install numpy‑+mkl‑cp‑cpm‑.whl ``` Or include the --user flag to install to the current user's application folder (Typically %APPDATA%\\Python on Windows) from a non-admin CMD prompt: ``` C:\\>pip install --user numpy‑+mkl‑cp‑cpm‑.whl ``` Then do the same for SciPy: ``` C:\\>pip install [--user] scipy‑‑cp‑cpm‑.whl ``` Don't forget to replace , , and appropriately if you copy & paste any of these examples. Note also that you must use the numpy & scipy packages from the lfd.uci.edu link above (or else you will get errors if you try to mix & match incompatible packages -- uninstall any conflicting packages first [ie.
pip list]).", "best_answers_score":0.8, "library_name":"scipy", "question_url":"https:\/\/stackoverflow.com\/questions\/1517129\/how-do-i-install-scipy-on-64-bit-windows", "best_answers_votes":64, "question_length":4098, "response_length":1038 }, { "question":"Find matching rows in 2 dimensional numpy array I would like to get the index of a 2 dimensional Numpy array that matches a row. For example, my array is this: ``` vals = np.array([[0, 0], [1, 0], [2, 0], [0, 1], [1, 1], [2, 1], [0, 2], [1, 2], [2, 2], [0, 3], [1, 3], [2, 3], [0, 0], [1, 0], [2, 0], [0, 1], [1, 1], [2, 1], [0, 2], [1, 2], [2, 2], [0, 3], [1, 3], [2, 3]]) ``` I would like to get the index that matches the row [0, 1], which is indexes 3 and 15. When I do something like numpy.where(vals == [0, 1]) I get... ``` (array([ 0, 3, 3, 4, 5, 6, 9, 12, 15, 15, 16, 17, 18, 21]), array([0, 0, 1, 1, 1, 0, 0, 0, 0, 1, 1, 1, 0, 0])) ``` I want index array([3, 15]).", "response":"You need the np.where function to get the indexes: ``` >>> np.where((vals == (0, 1)).all(axis=1)) (array([ 3, 15]),) ``` Or, as the documentation states: If only condition is given, return condition.nonzero() You could directly call .nonzero() on the array returned by .all: ``` >>> (vals == (0, 1)).all(axis=1).nonzero() (array([ 3, 15]),) ``` To disassemble that: ``` >>> vals == (0, 1) array([[ True, False], [False, False], ...
[ True, False], [False, False], [False, False]], dtype=bool) ``` and calling the .all method on that array (with axis=1) gives you True where both are True: ``` >>> (vals == (0, 1)).all(axis=1) array([False, False, False, True, False, False, False, False, False, False, False, False, False, False, False, True, False, False, False, False, False, False, False, False], dtype=bool) ``` and to get which indexes are True: ``` >>> np.where((vals == (0, 1)).all(axis=1)) (array([ 3, 15]),) ``` or ``` >>> (vals == (0, 1)).all(axis=1).nonzero() (array([ 3, 15]),) ``` I find my solution a bit more readable, but as unutbu points out, the following may be faster, and returns the same value as (vals == (0, 1)).all(axis=1): ``` >>> (vals[:, 0] == 0) & (vals[:, 1] == 1) ```", "best_answers_score":0.8, "library_name":"scipy", "question_url":"https:\/\/stackoverflow.com\/questions\/25823608\/find-matching-rows-in-2-dimensional-numpy-array", "best_answers_votes":97, "question_length":670, "response_length":1199 }, { "question":"transform scipy sparse csr to pandas? I have used the ``` sklearn.preprocessing.OneHotEncoder ``` to transform some data. The output is scipy.sparse.csr.csr_matrix. How can I merge it back into my original dataframe along with the other columns? I tried to use pd.concat but I get ``` TypeError: cannot concatenate a non-NDFrame object ``` Thanks", "response":"If A is csr_matrix, you can use .toarray() (there's also .todense(), which produces a numpy matrix and also works for the DataFrame constructor): ``` df = pd.DataFrame(A.toarray()) ``` You can then use this with pd.concat(). ``` A = csr_matrix([[1, 0, 2], [0, 3, 0]]) (0, 0) 1 (0, 2) 2 (1, 1) 3 pd.DataFrame(A.todense()) 0 1 2 0 1 0 2 1 0 3 0 RangeIndex: 2 entries, 0 to 1 Data columns (total 3 columns): 0 2 non-null int64 1 2 non-null int64 2 2 non-null int64 ``` In version 0.20, pandas introduced sparse data structures, including the SparseDataFrame.
In pandas 1.0, SparseDataFrame was removed: In older versions of pandas, the SparseSeries and SparseDataFrame classes were the preferred way to work with sparse data. With the advent of extension arrays, these subclasses are no longer needed. Their purpose is better served by using a regular Series or DataFrame with sparse values instead. The migration guide shows how to use these new data structures. For instance, to create a DataFrame from a sparse matrix: ``` from scipy.sparse import csr_matrix A = csr_matrix([[1, 0, 2], [0, 3, 0]]) df = pd.DataFrame.sparse.from_spmatrix(A, columns=['A', 'B', 'C']) df A B C 0 1 0 2 1 0 3 0 df.dtypes A Sparse[float64, 0] B Sparse[float64, 0] C Sparse[float64, 0] dtype: object ``` Alternatively, you can pass sparse matrices to sklearn to avoid running out of memory when converting back to pandas. Just convert your other data to sparse format by passing a numpy array to the scipy.sparse.csr_matrix constructor and use scipy.sparse.hstack to combine (see docs).", "best_answers_score":0.8, "library_name":"scipy", "question_url":"https:\/\/stackoverflow.com\/questions\/36967666\/transform-scipy-sparse-csr-to-pandas", "best_answers_votes":73, "question_length":344, "response_length":1569 }, { "question":"Convert Pandas dataframe to Sparse Numpy Matrix directly I am creating a matrix from a Pandas dataframe as follows: ``` dense_matrix = np.array(df.as_matrix(columns = None), dtype=bool).astype(np.int) ``` And then into a sparse matrix with: ``` sparse_matrix = scipy.sparse.csr_matrix(dense_matrix) ``` Is there any way to go from a df straight to a sparse matrix? Thanks in advance.", "response":"df.values is a numpy array, and accessing values that way is always faster than np.array. ``` scipy.sparse.csr_matrix(df.values) ``` You might need to take the transpose first, like df.values.T. 
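As a minimal sketch of the one-liner above (the frame contents here are made up):

```python
import pandas as pd
from scipy.sparse import csr_matrix

# hypothetical frame of mostly zeros
df = pd.DataFrame({'a': [1, 0, 0], 'b': [0, 0, 2]})

sparse = csr_matrix(df.values)      # rows of df become rows of the matrix
sparse_t = csr_matrix(df.values.T)  # transposed layout, if that is what you need

print(sparse.shape, sparse.nnz)     # (3, 2) 2
```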
In DataFrames, the rows are axis 0 and the columns are axis 1.", "best_answers_score":0.8, "library_name":"scipy", "question_url":"https:\/\/stackoverflow.com\/questions\/20459536\/convert-pandas-dataframe-to-sparse-numpy-matrix-directly", "best_answers_votes":75, "question_length":383, "response_length":233 }, { "question":"scipy linkage format I have written my own clustering routine and would like to produce a dendrogram. The easiest way to do this would be to use the scipy dendrogram function. However, this requires the input to be in the same format that the scipy linkage function produces. I cannot find an example of how the output of this is formatted. I was wondering whether someone out there can enlighten me.", "response":"I agree with https:\/\/stackoverflow.com\/users\/1167475\/mortonjt that the documentation does not fully explain the indexing of intermediate clusters, and I also agree with https:\/\/stackoverflow.com\/users\/1354844\/dkar that the format is otherwise precisely explained. Using the example data from this question: Tutorial for scipy.cluster.hierarchy ``` A = np.array([[0.1, 2.5], [1.5, .4 ], [0.3, 1 ], [1 , .8 ], [0.5, 0 ], [0 , 0.5], [0.5, 0.5], [2.7, 2 ], [2.2, 3.1], [3 , 2 ], [3.2, 1.3]]) ``` A linkage matrix can be built using single linkage (i.e., merging the closest matching points): ``` z = hac.linkage(a, method=\"single\") array([[ 7. , 9. , 0.3 , 2. ], [ 4. , 6. , 0.5 , 2. ], [ 5. , 12. , 0.5 , 3. ], [ 2. , 13. , 0.53851648, 4. ], [ 3. , 14. , 0.58309519, 5. ], [ 1. , 15. , 0.64031242, 6. ], [ 10. , 11. , 0.72801099, 3. ], [ 8. , 17. , 1.2083046 , 4. ], [ 0. , 16. , 1.5132746 , 7. ], [ 18. , 19. , 1.92353841, 11. ]]) ``` As the documentation explains, the clusters below n (here: 11) are simply the data points in the original matrix A. The intermediate clusters going forward are indexed successively. Thus, clusters 7 and 9 (the first merge) are merged into cluster 11, clusters 4 and 6 into 12.
Then observe line three, merging cluster 5 (from A) and cluster 12 (the intermediate cluster created in line two), resulting in a Within-Cluster Distance (WCD) of 0.5. The single method entails that the new WCD is 0.5, which is the distance between A[5] and the closest point in cluster 12, A[4] and A[6]. Let's check: ``` In [198]: norm([a[5]-a[4]]) Out[198]: 0.70710678118654757 In [199]: norm([a[5]-a[6]]) Out[199]: 0.5 ``` This cluster should now be intermediate cluster 13, which subsequently is merged with A[2]. Thus, the new distance should be the closest between the points A[2] and A[4,5,6]. ``` In [200]: norm([a[2]-a[4]]) Out[200]: 1.019803902718557 In [201]: norm([a[2]-a[5]]) Out[201]: 0.58309518948452999 In [202]: norm([a[2]-a[6]]) Out[202]: 0.53851648071345048 ``` Which, as can be seen, also checks out and explains the intermediate format of new clusters.", "best_answers_score":0.8, "library_name":"scipy", "question_url":"https:\/\/stackoverflow.com\/questions\/9838861\/scipy-linkage-format", "best_answers_votes":56, "question_length":396, "response_length":2067 }, { "question":"Obtain eigen values and vectors from sklearn PCA How can I get the eigenvalues and eigenvectors of the PCA application? ``` from sklearn.decomposition import PCA clf=PCA(0.98,whiten=True) #conserve 98% variance X_train=clf.fit_transform(X_train) X_test=clf.transform(X_test) ``` I can't find it in docs. 1. I am \"not\" able to comprehend the different results here.
Edit: ``` def pca_code(data): #raw_implementation var_per=.98 data-=np.mean(data, axis=0) data\/=np.std(data, axis=0) cov_mat=np.cov(data, rowvar=False) evals, evecs = np.linalg.eigh(cov_mat) idx = np.argsort(evals)[::-1] evecs = evecs[:,idx] evals = evals[idx] variance_retained=np.cumsum(evals)\/np.sum(evals) index=np.argmax(variance_retained>=var_per) evecs = evecs[:,:index+1] reduced_data=np.dot(evecs.T, data.T).T print(evals) print(\"_\"*30) print(evecs) print(\"_\"*30) #using scipy package clf=PCA(var_per) X_train=data.T X_train=clf.fit_transform(X_train) print(clf.explained_variance_) print(\"_\"*30) print(clf.components_) print(\"__\"*30) ``` I wish to obtain all the eigenvalues and eigenvectors instead of just the reduced set with the convergence condition.", "response":"Your implementation You are computing the eigenvectors of the correlation matrix, that is the covariance matrix of the normalized variables. data\/=np.std(data, axis=0) is not part of the classic PCA; we only center the variables. So the sklearn PCA does not feature scale the data beforehand. Apart from that you are on the right track, if we abstract the fact that the code you provided did not run ;). You only got confused with the row\/column layouts. Honestly I think it's much easier to start with X = data.T and work only with X from there on. I added your code 'fixed' at the end of the post. Getting the eigenvalues You already noted that you can get the eigenvectors using clf.components_. So you have the principal components. They are eigenvectors of the covariance matrix 𝑋ᵀ𝑋. A way to retrieve the eigenvalues from there is to apply this matrix to each principal component and project the results onto the component. Let v_1 be the first principal component and lambda_1 the associated eigenvalue. With C the covariance matrix, we have: C v_1 = lambda_1 v_1 and thus: lambda_1 = (v_1, C v_1), since (v_1, v_1) = 1. (x, y) is the scalar product of vectors x and y.
Back in Python you can do: ``` n_samples = X.shape[0] # We center the data and compute the sample covariance matrix. X -= np.mean(X, axis=0) cov_matrix = np.dot(X.T, X) \/ n_samples for eigenvector in pca.components_: print(np.dot(eigenvector.T, np.dot(cov_matrix, eigenvector))) ``` And you get the eigenvalue associated with the eigenvector. Well, in my tests it turned out not to work for the last couple of eigenvalues, but I'd attribute that to my absence of skills in numerical stability. Now that's not the best way to get the eigenvalues, but it's nice to know where they come from. The eigenvalues represent the variance in the direction of the eigenvector. So you can get them through the pca.explained_variance_ attribute: ``` eigenvalues = pca.explained_variance_ ``` Here is a reproducible example that prints the eigenvalues you get with each method: ``` import numpy as np from sklearn.decomposition import PCA from sklearn.datasets import make_classification X, y = make_classification(n_samples=1000) n_samples = X.shape[0] pca = PCA() X_transformed = pca.fit_transform(X) # We center the data and compute the sample covariance matrix. X_centered = X - np.mean(X, axis=0) cov_matrix = np.dot(X_centered.T, X_centered) \/ n_samples eigenvalues = pca.explained_variance_ for eigenvalue, eigenvector in zip(eigenvalues, pca.components_): print(np.dot(eigenvector.T, np.dot(cov_matrix, eigenvector))) print(eigenvalue) ``` Your original code, fixed If you run it, you'll see the values are consistent. They're not exactly equal because numpy and scikit-learn are not using the same algorithm here. The main thing was that you were using the correlation matrix instead of the covariance matrix, as mentioned above. Also you were getting the transposed eigenvectors from numpy, which made it very confusing.
``` import numpy as np from scipy.stats.mstats import zscore from sklearn.decomposition import PCA def pca_code(data): #raw_implementation var_per=.98 data-=np.mean(data, axis=0) # data\/=np.std(data, axis=0) cov_mat=np.cov(data, rowvar=False) evals, evecs = np.linalg.eigh(cov_mat) idx = np.argsort(evals)[::-1] evecs = evecs[:,idx] evals = evals[idx] variance_retained=np.cumsum(evals)\/np.sum(evals) index=np.argmax(variance_retained>=var_per) evecs = evecs[:,:index+1] reduced_data=np.dot(evecs.T, data.T).T print(\"evals\", evals) print(\"_\"*30) print(evecs.T[1, :]) print(\"_\"*30) #using scipy package clf=PCA(var_per) X_train=data X_train=clf.fit_transform(X_train) print(clf.explained_variance_) print(\"_\"*30) print(clf.components_[1,:]) print(\"__\"*30) ```", "best_answers_score":0.8, "library_name":"scipy", "question_url":"https:\/\/stackoverflow.com\/questions\/31909945\/obtain-eigen-values-and-vectors-from-sklearn-pca", "best_answers_votes":91, "question_length":1135, "response_length":3639 }, { "question":"scipy csr_matrix: understand indptr Every once in a while, I get to manipulate a csr_matrix but I always forget how the parameters indices and indptr work together to build a sparse matrix. I am looking for a clear and intuitive explanation on how the indptr interacts with both the data and indices parameters when defining a sparse matrix using the notation csr_matrix((data, indices, indptr), [shape=(M, N)]). I can see from the scipy documentation that the data parameter contains all the non-zero data, and the indices parameter contains the columns associated to that data (as such, indices is equal to col in the example given in the documentation). But how can we explain in clear terms the indptr parameter?", "response":"Maybe this explanation can help understand the concept: data is an array containing all the non zero elements of the sparse matrix. indices is an array mapping each element in data to its column in the sparse matrix. 
indptr then maps the elements of data and indices to the rows of the sparse matrix. This is done with the following reasoning: If the sparse matrix has M rows, indptr is an array containing M+1 elements. For row i, [indptr[i]:indptr[i+1]] returns the indices of elements to take from data and indices corresponding to row i. So if indptr[i]=k and indptr[i+1]=l, the data corresponding to row i would be data[k:l] at columns indices[k:l]. This is the tricky part, and I hope the following example helps in understanding it. EDIT: I replaced the numbers in data by letters to avoid confusion in the following example. Note: the values in indptr are necessarily increasing, because the next cell in indptr (the next row) refers to the next values in data and indices corresponding to that row.
alpha(a=3.57, loc=0.00, scale=1.00) anglit(loc=0.00, scale=1.00) arcsine(loc=0.00, scale=1.00) beta(a=2.31, loc=0.00, scale=1.00, b=0.63) betaprime(a=5.00, loc=0.00, scale=1.00, b=6.00) bradford(loc=0.00, c=0.30, scale=1.00) burr(loc=0.00, c=10.50, scale=1.00, d=4.30) cauchy(loc=0.00, scale=1.00) chi(df=78.00, loc=0.00, scale=1.00) chi2(df=55.00, loc=0.00, scale=1.00) cosine(loc=0.00, scale=1.00) dgamma(a=1.10, loc=0.00, scale=1.00) dweibull(loc=0.00, c=2.07, scale=1.00) erlang(a=2.00, loc=0.00, scale=1.00) expon(loc=0.00, scale=1.00) exponnorm(loc=0.00, K=1.50, scale=1.00) exponpow(loc=0.00, scale=1.00, b=2.70) exponweib(a=2.89, loc=0.00, c=1.95, scale=1.00) f(loc=0.00, dfn=29.00, scale=1.00, dfd=18.00) fatiguelife(loc=0.00, c=29.00, scale=1.00) fisk(loc=0.00, c=3.09, scale=1.00) foldcauchy(loc=0.00, c=4.72, scale=1.00) foldnorm(loc=0.00, c=1.95, scale=1.00) frechet_l(loc=0.00, c=3.63, scale=1.00) frechet_r(loc=0.00, c=1.89, scale=1.00) gamma(a=1.99, loc=0.00, scale=1.00) gausshyper(a=13.80, loc=0.00, c=2.51, scale=1.00, b=3.12, z=5.18) genexpon(a=9.13, loc=0.00, c=3.28, scale=1.00, b=16.20) genextreme(loc=0.00, c=-0.10, scale=1.00) gengamma(a=4.42, loc=0.00, c=-3.12, scale=1.00) genhalflogistic(loc=0.00, c=0.77, scale=1.00) genlogistic(loc=0.00, c=0.41, scale=1.00) gennorm(loc=0.00, beta=1.30, scale=1.00) genpareto(loc=0.00, c=0.10, scale=1.00) gilbrat(loc=0.00, scale=1.00) gompertz(loc=0.00, c=0.95, scale=1.00) gumbel_l(loc=0.00, scale=1.00) gumbel_r(loc=0.00, scale=1.00) halfcauchy(loc=0.00, scale=1.00) halfgennorm(loc=0.00, beta=0.68, scale=1.00) halflogistic(loc=0.00, scale=1.00) halfnorm(loc=0.00, scale=1.00) hypsecant(loc=0.00, scale=1.00) invgamma(a=4.07, loc=0.00, scale=1.00) invgauss(mu=0.14, loc=0.00, scale=1.00) invweibull(loc=0.00, c=10.60, scale=1.00) johnsonsb(a=4.32, loc=0.00, scale=1.00, b=3.18) johnsonsu(a=2.55, loc=0.00, scale=1.00, b=2.25) ksone(loc=0.00, scale=1.00, n=1000.00) kstwobign(loc=0.00, scale=1.00) laplace(loc=0.00, scale=1.00) 
levy(loc=0.00, scale=1.00) levy_l(loc=0.00, scale=1.00) loggamma(loc=0.00, c=0.41, scale=1.00) logistic(loc=0.00, scale=1.00) loglaplace(loc=0.00, c=3.25, scale=1.00) lognorm(loc=0.00, s=0.95, scale=1.00) lomax(loc=0.00, c=1.88, scale=1.00) maxwell(loc=0.00, scale=1.00) mielke(loc=0.00, s=3.60, scale=1.00, k=10.40) nakagami(loc=0.00, scale=1.00, nu=4.97) ncf(loc=0.00, dfn=27.00, nc=0.42, dfd=27.00, scale=1.00) nct(df=14.00, loc=0.00, scale=1.00, nc=0.24) ncx2(df=21.00, loc=0.00, scale=1.00, nc=1.06) norm(loc=0.00, scale=1.00) pareto(loc=0.00, scale=1.00, b=2.62) pearson3(loc=0.00, skew=0.10, scale=1.00) powerlaw(a=1.66, loc=0.00, scale=1.00) powerlognorm(loc=0.00, s=0.45, scale=1.00, c=2.14) powernorm(loc=0.00, c=4.45, scale=1.00) rayleigh(loc=0.00, scale=1.00) rdist(loc=0.00, c=0.90, scale=1.00) recipinvgauss(mu=0.63, loc=0.00, scale=1.00) reciprocal(a=0.01, loc=0.00, scale=1.00, b=1.01) rice(loc=0.00, scale=1.00, b=0.78) semicircular(loc=0.00, scale=1.00) t(df=2.74, loc=0.00, scale=1.00) triang(loc=0.00, c=0.16, scale=1.00) truncexpon(loc=0.00, scale=1.00, b=4.69) truncnorm(a=0.10, loc=0.00, scale=1.00, b=2.00) tukeylambda(loc=0.00, scale=1.00, lam=3.13) uniform(loc=0.00, scale=1.00) vonmises(loc=0.00, scale=1.00, kappa=3.99) vonmises_line(loc=0.00, scale=1.00, kappa=3.99) wald(loc=0.00, scale=1.00) weibull_max(loc=0.00, c=2.87, scale=1.00) weibull_min(loc=0.00, c=1.79, scale=1.00) wrapcauchy(loc=0.00, c=0.03, scale=1.00) Generation Code Here is the Jupyter Notebook used to generate the plots. ``` %matplotlib inline import io import numpy as np import pandas as pd import scipy.stats as stats import matplotlib import matplotlib.pyplot as plt matplotlib.rcParams['figure.figsize'] = (16.0, 14.0) matplotlib.style.use('ggplot') ``` ``` # Distributions to check, shape constants were taken from the examples on the scipy.stats distribution documentation pages. 
DISTRIBUTIONS = [ stats.alpha(a=3.57, loc=0.0, scale=1.0), stats.anglit(loc=0.0, scale=1.0), stats.arcsine(loc=0.0, scale=1.0), stats.beta(a=2.31, b=0.627, loc=0.0, scale=1.0), stats.betaprime(a=5, b=6, loc=0.0, scale=1.0), stats.bradford(c=0.299, loc=0.0, scale=1.0), stats.burr(c=10.5, d=4.3, loc=0.0, scale=1.0), stats.cauchy(loc=0.0, scale=1.0), stats.chi(df=78, loc=0.0, scale=1.0), stats.chi2(df=55, loc=0.0, scale=1.0), stats.cosine(loc=0.0, scale=1.0), stats.dgamma(a=1.1, loc=0.0, scale=1.0), stats.dweibull(c=2.07, loc=0.0, scale=1.0), stats.erlang(a=2, loc=0.0, scale=1.0), stats.expon(loc=0.0, scale=1.0), stats.exponnorm(K=1.5, loc=0.0, scale=1.0), stats.exponweib(a=2.89, c=1.95, loc=0.0, scale=1.0), stats.exponpow(b=2.7, loc=0.0, scale=1.0), stats.f(dfn=29, dfd=18, loc=0.0, scale=1.0), stats.fatiguelife(c=29, loc=0.0, scale=1.0), stats.fisk(c=3.09, loc=0.0, scale=1.0), stats.foldcauchy(c=4.72, loc=0.0, scale=1.0), stats.foldnorm(c=1.95, loc=0.0, scale=1.0), stats.frechet_r(c=1.89, loc=0.0, scale=1.0), stats.frechet_l(c=3.63, loc=0.0, scale=1.0), stats.genlogistic(c=0.412, loc=0.0, scale=1.0), stats.genpareto(c=0.1, loc=0.0, scale=1.0), stats.gennorm(beta=1.3, loc=0.0, scale=1.0), stats.genexpon(a=9.13, b=16.2, c=3.28, loc=0.0, scale=1.0), stats.genextreme(c=-0.1, loc=0.0, scale=1.0), stats.gausshyper(a=13.8, b=3.12, c=2.51, z=5.18, loc=0.0, scale=1.0), stats.gamma(a=1.99, loc=0.0, scale=1.0), stats.gengamma(a=4.42, c=-3.12, loc=0.0, scale=1.0), stats.genhalflogistic(c=0.773, loc=0.0, scale=1.0), stats.gilbrat(loc=0.0, scale=1.0), stats.gompertz(c=0.947, loc=0.0, scale=1.0), stats.gumbel_r(loc=0.0, scale=1.0), stats.gumbel_l(loc=0.0, scale=1.0), stats.halfcauchy(loc=0.0, scale=1.0), stats.halflogistic(loc=0.0, scale=1.0), stats.halfnorm(loc=0.0, scale=1.0), stats.halfgennorm(beta=0.675, loc=0.0, scale=1.0), stats.hypsecant(loc=0.0, scale=1.0), stats.invgamma(a=4.07, loc=0.0, scale=1.0), stats.invgauss(mu=0.145, loc=0.0, scale=1.0), stats.invweibull(c=10.6, 
loc=0.0, scale=1.0), stats.johnsonsb(a=4.32, b=3.18, loc=0.0, scale=1.0), stats.johnsonsu(a=2.55, b=2.25, loc=0.0, scale=1.0), stats.ksone(n=1e+03, loc=0.0, scale=1.0), stats.kstwobign(loc=0.0, scale=1.0), stats.laplace(loc=0.0, scale=1.0), stats.levy(loc=0.0, scale=1.0), stats.levy_l(loc=0.0, scale=1.0), stats.levy_stable(alpha=0.357, beta=-0.675, loc=0.0, scale=1.0), stats.logistic(loc=0.0, scale=1.0), stats.loggamma(c=0.414, loc=0.0, scale=1.0), stats.loglaplace(c=3.25, loc=0.0, scale=1.0), stats.lognorm(s=0.954, loc=0.0, scale=1.0), stats.lomax(c=1.88, loc=0.0, scale=1.0), stats.maxwell(loc=0.0, scale=1.0), stats.mielke(k=10.4, s=3.6, loc=0.0, scale=1.0), stats.nakagami(nu=4.97, loc=0.0, scale=1.0), stats.ncx2(df=21, nc=1.06, loc=0.0, scale=1.0), stats.ncf(dfn=27, dfd=27, nc=0.416, loc=0.0, scale=1.0), stats.nct(df=14, nc=0.24, loc=0.0, scale=1.0), stats.norm(loc=0.0, scale=1.0), stats.pareto(b=2.62, loc=0.0, scale=1.0), stats.pearson3(skew=0.1, loc=0.0, scale=1.0), stats.powerlaw(a=1.66, loc=0.0, scale=1.0), stats.powerlognorm(c=2.14, s=0.446, loc=0.0, scale=1.0), stats.powernorm(c=4.45, loc=0.0, scale=1.0), stats.rdist(c=0.9, loc=0.0, scale=1.0), stats.reciprocal(a=0.00623, b=1.01, loc=0.0, scale=1.0), stats.rayleigh(loc=0.0, scale=1.0), stats.rice(b=0.775, loc=0.0, scale=1.0), stats.recipinvgauss(mu=0.63, loc=0.0, scale=1.0), stats.semicircular(loc=0.0, scale=1.0), stats.t(df=2.74, loc=0.0, scale=1.0), stats.triang(c=0.158, loc=0.0, scale=1.0), stats.truncexpon(b=4.69, loc=0.0, scale=1.0), stats.truncnorm(a=0.1, b=2, loc=0.0, scale=1.0), stats.tukeylambda(lam=3.13, loc=0.0, scale=1.0), stats.uniform(loc=0.0, scale=1.0), stats.vonmises(kappa=3.99, loc=0.0, scale=1.0), stats.vonmises_line(kappa=3.99, loc=0.0, scale=1.0), stats.wald(loc=0.0, scale=1.0), stats.weibull_min(c=1.79, loc=0.0, scale=1.0), stats.weibull_max(c=2.87, loc=0.0, scale=1.0), stats.wrapcauchy(c=0.0311, loc=0.0, scale=1.0) ] ``` ``` bins = 32 size = 16384 plotData = [] for distribution in 
DISTRIBUTIONS: try: # Create random data rv = pd.Series(distribution.rvs(size=size)) # Get sane start and end points of distribution start = distribution.ppf(0.01) end = distribution.ppf(0.99) # Build PDF and turn into pandas Series x = np.linspace(start, end, size) y = distribution.pdf(x) pdf = pd.Series(y, x) # Get histogram of random data b = np.linspace(start, end, bins+1) y, x = np.histogram(rv, bins=b, normed=True) x = [(a+x[i+1])\/2.0 for i,a in enumerate(x[0:-1])] hist = pd.Series(y, x) # Create distribution name and parameter string title = '{}({})'.format(distribution.dist.name, ', '.join(['{}={:0.2f}'.format(k,v) for k,v in distribution.kwds.items()])) # Store data for later plotData.append({ 'pdf': pdf, 'hist': hist, 'title': title }) except Exception: print 'could not create data', distribution.dist.name ``` ``` plotMax = len(plotData) for i, data in enumerate(plotData): w = abs(abs(data['hist'].index[0]) - abs(data['hist'].index[1])) # Display plt.figure(figsize=(10, 6)) ax = data['pdf'].plot(kind='line', label='Model PDF', legend=True, lw=2) ax.bar(data['hist'].index, data['hist'].values, label='Random Sample', width=w, align='center', alpha=0.5) ax.set_title(data['title']) # Grab figure fig = matplotlib.pyplot.gcf() # Output 'file' fig.savefig('~\/Desktop\/dist\/'+data['title']+'.png', format='png', bbox_inches='tight') matplotlib.pyplot.close() ```", "best_answers_score":0.8, "library_name":"scipy", "question_url":"https:\/\/stackoverflow.com\/questions\/37559470\/what-do-all-the-distributions-available-in-scipy-stats-look-like", "best_answers_votes":139, "question_length":406, "response_length":9594 }, { "question":"plotting results of hierarchical clustering on top of a matrix of data How can I plot a dendrogram right on top of a matrix of values, reordered appropriately to reflect the clustering, in Python? 
An example is the following figure: This is Figure 6 from: A panel of induced pluripotent stem cells from chimpanzees: a resource for comparative functional genomics I use scipy.cluster.dendrogram to make my dendrogram and perform hierarchical clustering on a matrix of data. How can I then plot the data as a matrix where the rows have been reordered to reflect a clustering induced by the cutting the dendrogram at a particular threshold, and have the dendrogram plotted alongside the matrix? I know how to plot the dendrogram in scipy, but not how to plot the intensity matrix of data with the right scale bar next to it.", "response":"The question does not define matrix very well: \"matrix of values\", \"matrix of data\". I assume that you mean a distance matrix. In other words, element D_ij in the symmetric nonnegative N-by-N distance matrix D denotes the distance between two feature vectors, x_i and x_j. Is that correct? If so, then try this (edited June 13, 2010, to reflect two different dendrograms). Tested in python 3.10 and matplotlib 3.5.1 ```py import numpy as np import matplotlib.pyplot as plt import scipy.cluster.hierarchy as sch from scipy.spatial.distance import squareform # Generate random features and distance matrix. np.random.seed(200) # for reproducible data x = np.random.rand(40) D = np.zeros([40, 40]) for i in range(40): for j in range(40): D[i,j] = abs(x[i] - x[j]) condensedD = squareform(D) # Compute and plot first dendrogram. fig = plt.figure(figsize=(8, 8)) ax1 = fig.add_axes([0.09, 0.1, 0.2, 0.6]) Y = sch.linkage(condensedD, method='centroid') Z1 = sch.dendrogram(Y, orientation='left') ax1.set_xticks([]) ax1.set_yticks([]) # Compute and plot second dendrogram. ax2 = fig.add_axes([0.3, 0.71, 0.6, 0.2]) Y = sch.linkage(condensedD, method='single') Z2 = sch.dendrogram(Y) ax2.set_xticks([]) ax2.set_yticks([]) # Plot distance matrix. 
axmatrix = fig.add_axes([0.3, 0.1, 0.6, 0.6]) idx1 = Z1['leaves'] idx2 = Z2['leaves'] D = D[idx1,:] D = D[:,idx2] im = axmatrix.matshow(D, aspect='auto', origin='lower', cmap=plt.cm.YlGnBu) axmatrix.set_xticks([]) # remove axis labels axmatrix.set_yticks([]) # remove axis labels # Plot colorbar. axcolor = fig.add_axes([0.91, 0.1, 0.02, 0.6]) plt.colorbar(im, cax=axcolor) plt.show() fig.savefig('dendrogram.png') ``` Edit: For different colors, adjust the cmap attribute in imshow. See the scipy\/matplotlib docs for examples. That page also describes how to create your own colormap. For convenience, I recommend using a preexisting colormap. In my example, I used YlGnBu. Edit: add_axes (see documentation here) accepts a list or tuple: (left, bottom, width, height). For example, (0.5,0,0.5,1) adds an Axes on the right half of the figure. (0,0.5,1,0.5) adds an Axes on the top half of the figure. Most people probably use add_subplot for its convenience. I like add_axes for its control. To remove the border, use add_axes([left,bottom,width,height], frame_on=False). See example here.", "best_answers_score":0.8, "library_name":"scipy", "question_url":"https:\/\/stackoverflow.com\/questions\/2982929\/plotting-results-of-hierarchical-clustering-on-top-of-a-matrix-of-data", "best_answers_votes":108, "question_length":821, "response_length":2328 }, { "question":"How to use numpy.void type I loaded a MATLAB .mat file via scipy.io.loadmat and it gave me a list of numpy.void objects. What are they, how can they be used and where can I get some reference documentation on them?", "response":"According to the numpy documentation: http:\/\/docs.scipy.org\/doc\/numpy\/reference\/arrays.dtypes.html, numpy.void types are defined as flexible data types. Basically, these are data types where there is no pre-defined type associated to the variable(s) you're looking at. If you look at numpy, you have data types such as float, uint8, bool, string, etc. 
void accommodates more generic and flexible types; it is for those data types that don't necessarily fall into any one of these pre-defined data types. This situation is mostly encountered when you're loading in a struct where each element has multiple fields, each with its own data type. Each structure element could have a combination of different data types, and the amalgamation of all of these data types to represent an instance of this structure element thus leads us to numpy.void. With the documentation, you can certainly do the same operations as you would with any other data type. Take a look at the generic data type methods here: http:\/\/docs.scipy.org\/doc\/numpy\/reference\/generated\/numpy.generic.html#numpy.generic . In fact, all numpy data types are derived from this generic class, including numpy.void. The first link I provided at the beginning of this post shows a good example of how to create a custom record type, where a record is a combination of a tuple of numbers and a string. When creating a list of these records, each element in the list is of type numpy.void, which demonstrates that a record is of this data type. However, bear in mind that the record list itself has the record's data type, while each element of the list is of type numpy.void. 
However, for the sake of self-containment, let's re-create the example here: Let's create a custom record type with two fields for each variable you create: A 16-character string with a field named name A 2-element tuple of floating point numbers that are 64-bits each, with a field named grades As such, you'd do something like: ``` import numpy as np dt = np.dtype([('name', np.str_, 16), ('grades', np.float64, (2,))]) ``` Next, let's create an example list of two elements and instantiate their fields: ``` x = np.array([('Sarah', (8.0, 7.0)), ('John', (6.0, 7.0))], dtype=dt) ``` Because we made this list into a numpy.array, we expect its data type to be so: ``` type(x) ``` We get: ``` numpy.ndarray ``` Remember, the list itself is a numpy.array, but not the individual elements. To access the second element of this list, which is the second record, we do: ``` x[1] ``` We get: ``` ('John', [6.0, 7.0]) ``` To check the type of the second record, we do: ``` type(x[1]) ``` We get: ``` numpy.void # As expected ``` Some additional bonuses for you To access the name of the second record, we do: ``` x[1]['name'] ``` We get: ``` 'John' ``` To access the grades of the second record, we do: ``` x[1]['grades'] ``` We get: ``` array([ 6., 7.]) ``` To check the type of the name inside the second record, we do: ``` type(x[1]['name']) ``` We get: ``` numpy.str_ ``` To check the type of the grades inside the second record, we do: ``` type(x[1]['grades']) ``` We get: ``` numpy.ndarray ``` Take note that each element in this list is of type numpy.void. However, the individual fields for each element in our list are either a tuple of numbers, or a string. 
The collection of these elements together is of type numpy.void.", "best_answers_score":0.8, "library_name":"scipy", "question_url":"https:\/\/stackoverflow.com\/questions\/25247190\/how-to-use-numpy-void-type", "best_answers_votes":86, "question_length":214, "response_length":3357 }, { "question":"P-value from Chi sq test statistic in Python I have computed a test statistic that is distributed as a chi square with 1 degree of freedom, and want to find out what P-value this corresponds to using python. I'm a python and maths\/stats newbie so I think what I want here is the probability denisty function for the chi2 distribution from SciPy. However, when I use this like so: ``` from scipy import stats stats.chi2.pdf(3.84 , 1) 0.029846 ``` However some googling and talking to some colleagues who know maths but not python have said it should be 0.05. Any ideas? Cheers, Davy", "response":"Quick refresher here: Probability Density Function: think of it as a point value; how dense is the probability at a given point? Cumulative Distribution Function: this is the mass of probability of the function up to a given point; what percentage of the distribution lies on one side of this point? In your case, you took the PDF, for which you got the correct answer. If you try 1 - CDF: ``` >>> 1 - stats.chi2.cdf(3.84, 1) 0.050043521248705147 ``` PDF CDF", "best_answers_score":0.8, "library_name":"scipy", "question_url":"https:\/\/stackoverflow.com\/questions\/11725115\/p-value-from-chi-sq-test-statistic-in-python", "best_answers_votes":66, "question_length":581, "response_length":458 }, { "question":"Scipy sparse... arrays? So, I'm doing some Kmeans classification using numpy arrays that are quite sparse-- lots and lots of zeroes. I figured that I'd use scipy's 'sparse' package to reduce the storage overhead, but I'm a little confused about how to create arrays, not matrices. 
I've gone through this tutorial on how to create sparse matrices: http:\/\/www.scipy.org\/SciPy_Tutorial#head-c60163f2fd2bab79edd94be43682414f18b90df7 To mimic an array, I just create a 1xN matrix, but as you may guess, Asp.dot(Bsp) doesn't quite work because you can't multiply two 1xN matrices. I'd have to transpose each array to Nx1, and that's pretty lame, since I'd be doing it for every dot-product calculation. Next up, I tried to create an NxN matrix where column 1 == row 1 (such that you can multiply two matrices and just take the top-left corner as the dot product), but that turned out to be really inefficient. I'd love to use scipy's sparse package as a magic replacement for numpy's array(), but as yet, I'm not really sure what to do. Any advice?", "response":"Use a scipy.sparse format that is row or column based: csc_matrix and csr_matrix. These use efficient, C implementations under the hood (including multiplication), and transposition is a no-op (esp. if you call transpose(copy=False)), just like with numpy arrays. EDIT: some timings via ipython: ``` import numpy, scipy.sparse n = 100000 x = (numpy.random.rand(n) * 2).astype(int).astype(float) # 50% sparse vector x_csr = scipy.sparse.csr_matrix(x) x_dok = scipy.sparse.dok_matrix(x.reshape(x_csr.shape)) ``` Now x_csr and x_dok are 50% sparse: ``` print repr(x_csr) ' with 49757 stored elements in Compressed Sparse Row format> ``` And the timings: ``` timeit numpy.dot(x, x) 10000 loops, best of 3: 123 us per loop timeit x_dok * x_dok.T 1 loops, best of 3: 1.73 s per loop timeit x_csr.multiply(x_csr).sum() 1000 loops, best of 3: 1.64 ms per loop timeit x_csr * x_csr.T 100 loops, best of 3: 3.62 ms per loop ``` So it looks like I told a lie. Transposition is very cheap, but there is no efficient C implementation of csr * csc (in the latest scipy 0.9.0). 
A new csr object is constructed in each call :-( As a hack (though scipy is relatively stable these days), you can do the dot product directly on the sparse data: ``` timeit numpy.dot(x_csr.data, x_csr.data) 10000 loops, best of 3: 62.9 us per loop ``` Note this last approach does a numpy dense multiplication again. The sparsity is 50%, so it's actually faster than dot(x, x) by a factor of 2.", "best_answers_score":0.8, "library_name":"scipy", "question_url":"https:\/\/stackoverflow.com\/questions\/2540059\/scipy-sparse-arrays", "best_answers_votes":36, "question_length":1042, "response_length":1458 }, { "question":"How to read images into a script without using using imageio or scikit image? I am trying to read a png image in python. The imread function in scipy is being deprecated and they recommend using imageio library. However, I am would rather restrict my usage of external libraries to scipy, numpy and matplotlib libraries. Thus, using imageio or scikit image is not a good option for me. Are there any methods in python or scipy, numpy or matplotlib to read images, which are not being deprecated?", "response":"With matplotlib you can use (as shown in the matplotlib documentation) ``` import matplotlib.pyplot as plt import matplotlib.image as mpimg img=mpimg.imread('image_name.png') ``` And plot the image if you want ``` imgplot = plt.imshow(img) ```", "best_answers_score":0.8, "library_name":"scipy", "question_url":"https:\/\/stackoverflow.com\/questions\/48729915\/how-to-read-images-into-a-script-without-using-using-imageio-or-scikit-image", "best_answers_votes":67, "question_length":495, "response_length":243 }, { "question":"Python finite difference functions? I've been looking around in Numpy\/Scipy for modules containing finite difference functions. 
However, the closest thing I've found is numpy.gradient(), which is good for 1st-order finite differences of 2nd order accuracy, but not so much if you're wanting higher-order derivatives or more accurate methods. I haven't even found very many specific modules for this sort of thing; most people seem to be doing a \"roll-your-own\" thing as they need them. So my question is if anyone knows of any modules (either part of Numpy\/Scipy or a third-party module) that are specifically dedicated to higher-order (both in accuracy and derivative) finite difference methods. I've got my own code that I'm working on, but it's currently kind of slow, and I'm not going to attempt to optimize it if there's something already available. Note that I am talking about finite differences, not derivatives. I've seen both scipy.misc.derivative() and Numdifftools, which take the derivative of an analytical function, which I don't have.", "response":"One way to do this quickly is by convolution with the derivative of a gaussian kernel. The simple case is a convolution of your array with [-1, 1] which gives exactly the simple finite difference formula. Beyond that, (f*g)'= f'*g = f*g' where the * is convolution, so you end up with your derivative convolved with a plain gaussian, so of course this will smooth your data a bit, which can be minimized by choosing the smallest reasonable kernel. 
``` import numpy as np from scipy import ndimage import matplotlib.pyplot as plt #Data: x = np.linspace(0,2*np.pi,100) f = np.sin(x) + .02*(np.random.rand(100)-.5) #Normalization: dx = x[1] - x[0] # use np.diff(x) if x is not uniform dxdx = dx**2 #First derivatives: df = np.diff(f) \/ dx cf = np.convolve(f, [1,-1]) \/ dx gf = ndimage.gaussian_filter1d(f, sigma=1, order=1, mode='wrap') \/ dx #Second derivatives: ddf = np.diff(f, 2) \/ dxdx ccf = np.convolve(f, [1, -2, 1]) \/ dxdx ggf = ndimage.gaussian_filter1d(f, sigma=1, order=2, mode='wrap') \/ dxdx #Plotting: plt.figure() plt.plot(x, f, 'k', lw=2, label='original') plt.plot(x[:-1], df, 'r.', label='np.diff, 1') plt.plot(x, cf[:-1], 'r--', label='np.convolve, [1,-1]') plt.plot(x, gf, 'r', label='gaussian, 1') plt.plot(x[:-2], ddf, 'g.', label='np.diff, 2') plt.plot(x, ccf[:-2], 'g--', label='np.convolve, [1,-2,1]') plt.plot(x, ggf, 'g', label='gaussian, 2') ``` Since you mentioned np.gradient I assumed you had at least 2d arrays, so the following applies to that: This is built into the scipy.ndimage package if you want to do it for ndarrays. Be cautious though, because of course this doesn't give you the full gradient but I believe the product of all directions. Someone with better expertise will hopefully speak up. 
Here's an example: ``` from scipy import ndimage x = np.linspace(0,2*np.pi,100) sine = np.sin(x) im = sine * sine[...,None] d1 = ndimage.gaussian_filter(im, sigma=5, order=1, mode='wrap') d2 = ndimage.gaussian_filter(im, sigma=5, order=2, mode='wrap') plt.figure() plt.subplot(131) plt.imshow(im) plt.title('original') plt.subplot(132) plt.imshow(d1) plt.title('first derivative') plt.subplot(133) plt.imshow(d2) plt.title('second derivative') ``` Use of the gaussian_filter1d allows you to take a directional derivative along a certain axis: ``` imx = im * x d2_0 = ndimage.gaussian_filter1d(imx, axis=0, sigma=5, order=2, mode='wrap') d2_1 = ndimage.gaussian_filter1d(imx, axis=1, sigma=5, order=2, mode='wrap') plt.figure() plt.subplot(131) plt.imshow(imx) plt.title('original') plt.subplot(132) plt.imshow(d2_0) plt.title('derivative along axis 0') plt.subplot(133) plt.imshow(d2_1) plt.title('along axis 1') ``` The first set of results above is a little confusing to me (peaks in the original show up as peaks in the second derivative when the curvature should point down). Without looking further into how the 2d version works, I can only really recommend the 1d version. If you want the magnitude, simply do something like: ``` d2_mag = np.sqrt(d2_0**2 + d2_1**2) ``` ", "best_answers_score":0.8, "library_name":"scipy", "question_url":"https:\/\/stackoverflow.com\/questions\/18991408\/python-finite-difference-functions", "best_answers_votes":59, "question_length":1051, "response_length":3006 }, { "question":"How do I transform a \"SciPy sparse matrix\" to a \"NumPy matrix\"? I am using a python function called \"incidence_matrix(G)\", which returns the incidence matrix of a graph. It is from the Networkx package. The problem that I am facing is the return type of this function is \"Scipy Sparse Matrix\". I need to have the incidence matrix in the format of numpy matrix or array. I was wondering if there is any easy way of doing that or not? 
Or is there any built-in function that can do this transformation for me or not? Thanks", "response":"The scipy.sparse.*_matrix has several useful methods, for example, if a is e.g. scipy.sparse.csr_matrix: a.toarray() - Return a dense ndarray representation of this matrix. (numpy.array, recommended) a.todense() - Return a dense matrix representation of this matrix. (numpy.matrix) Previously, these methods had shorthands (.A for .toarray(), and .M for .todense()), but these have been or will be deprecated as of Scipy v1.14.0.", "best_answers_score":0.8, "library_name":"scipy", "question_url":"https:\/\/stackoverflow.com\/questions\/26576524\/how-do-i-transform-a-scipy-sparse-matrix-to-a-numpy-matrix", "best_answers_votes":80, "question_length":512, "response_length":429 }, { "question":"Iterating through a scipy.sparse vector (or matrix) I'm wondering what the best way is to iterate nonzero entries of sparse matrices with scipy.sparse. For example, if I do the following: ``` from scipy.sparse import lil_matrix x = lil_matrix( (20,1) ) x[13,0] = 1 x[15,0] = 2 c = 0 for i in x: print c, i c = c+1 ``` the output is ``` 0 1 2 3 4 5 6 7 8 9 10 11 12 13 (0, 0) 1.0 14 15 (0, 0) 2.0 16 17 18 19 ``` so it appears the iterator is touching every element, not just the nonzero entries. I've had a look at the API http:\/\/docs.scipy.org\/doc\/scipy\/reference\/generated\/scipy.sparse.lil_matrix.html and searched around a bit, but I can't seem to find a solution that works.", "response":"Edit: bbtrb's method (using coo_matrix) is much faster than my original suggestion, using nonzero. Sven Marnach's suggestion to use itertools.izip also improves the speed. 
Current fastest is using_tocoo_izip: ``` import scipy.sparse import random import itertools def using_nonzero(x): rows,cols = x.nonzero() for row,col in zip(rows,cols): ((row,col), x[row,col]) def using_coo(x): cx = scipy.sparse.coo_matrix(x) for i,j,v in zip(cx.row, cx.col, cx.data): (i,j,v) def using_tocoo(x): cx = x.tocoo() for i,j,v in zip(cx.row, cx.col, cx.data): (i,j,v) def using_tocoo_izip(x): cx = x.tocoo() for i,j,v in itertools.izip(cx.row, cx.col, cx.data): (i,j,v) N=200 x = scipy.sparse.lil_matrix( (N,N) ) for _ in xrange(N): x[random.randint(0,N-1),random.randint(0,N-1)]=random.randint(1,100) ``` yields these timeit results: ``` % python -mtimeit -s'import test' 'test.using_tocoo_izip(test.x)' 1000 loops, best of 3: 670 usec per loop % python -mtimeit -s'import test' 'test.using_tocoo(test.x)' 1000 loops, best of 3: 706 usec per loop % python -mtimeit -s'import test' 'test.using_coo(test.x)' 1000 loops, best of 3: 802 usec per loop % python -mtimeit -s'import test' 'test.using_nonzero(test.x)' 100 loops, best of 3: 5.25 msec per loop ```", "best_answers_score":0.8, "library_name":"scipy", "question_url":"https:\/\/stackoverflow.com\/questions\/4319014\/iterating-through-a-scipy-sparse-vector-or-matrix", "best_answers_votes":80, "question_length":678, "response_length":1239 }, { "question":"Correct way to obtain confidence interval with scipy I have a 1-dimensional array of data: ``` a = np.array([1,2,3,4,4,4,5,5,5,5,4,4,4,6,7,8]) ``` for which I want to obtain the 68% confidence interval (ie: the 1 sigma). 
The first comment in this answer states that this can be achieved using scipy.stats.norm.interval from the scipy.stats.norm function, via: ``` from scipy import stats import numpy as np mean, sigma = np.mean(a), np.std(a) conf_int = stats.norm.interval(0.68, loc=mean, scale=sigma) ``` But a comment in this post states that the actual correct way of obtaining the confidence interval is: ``` conf_int = stats.norm.interval(0.68, loc=mean, scale=sigma \/ np.sqrt(len(a))) ``` that is, sigma is divided by the square-root of the sample size: np.sqrt(len(a)). The question is: which version is the correct one?", "response":"The 68% confidence interval for a single draw from a normal distribution with mean mu and std deviation sigma is ``` stats.norm.interval(0.68, loc=mu, scale=sigma) ``` The 68% confidence interval for the mean of N draws from a normal distribution with mean mu and std deviation sigma is ``` stats.norm.interval(0.68, loc=mu, scale=sigma\/sqrt(N)) ``` Intuitively, these formulas make sense, since if you hold up a jar of jelly beans and ask a large number of people to guess the number of jelly beans, each individual may be off by a lot -- the same std deviation sigma -- but the average of the guesses will do a remarkably fine job of estimating the actual number and this is reflected by the standard deviation of the mean shrinking by a factor of 1\/sqrt(N). If a single draw has variance sigma**2, then by the Bienaym\u00e9 formula, the sum of N uncorrelated draws has variance N*sigma**2. The mean is equal to the sum divided by N. When you multiply a random variable (like the sum) by a constant, the variance is multiplied by the constant squared. That is ``` Var(cX) = c**2 * Var(X) ``` So the variance of the mean equals ``` (variance of the sum)\/N**2 = N * sigma**2 \/ N**2 = sigma**2 \/ N ``` and so the standard deviation of the mean (which is the square root of the variance) equals ``` sigma\/sqrt(N). ``` This is the origin of the sqrt(N) in the denominator. 
Here is some example code, based on Tom's code, which demonstrates the claims made above: ``` import numpy as np from scipy import stats N = 10000 M = 1000 a = np.random.normal(0, 1, N) mean, sigma = a.mean(), a.std(ddof=1) conf_int_a = stats.norm.interval(0.68, loc=mean, scale=sigma) # Draw M samples of size N and keep the mean of each one b = np.random.normal(0, 1, (M, N)).mean(axis=1) conf_int_b = stats.norm.interval(0.68, loc=0, scale=1 \/ np.sqrt(N)) print('{:0.2%} of the single draws are in conf_int_a' .format(((a >= conf_int_a[0]) & (a < conf_int_a[1])).sum() \/ float(N))) print('{:0.2%} of the means are in conf_int_b' .format(((b >= conf_int_b[0]) & (b < conf_int_b[1])).sum() \/ float(M))) ``` prints ``` 68.03% of the single draws are in conf_int_a 67.78% of the means are in conf_int_b ``` Beware that if you define conf_int_b with the estimates for mean and sigma based on the sample a, the mean may not fall in conf_int_b with the desired frequency. If you take a sample from a distribution and compute the sample mean and std deviation, ``` mean, sigma = a.mean(), a.std() ``` be careful to note that there is no guarantee that these will equal the population mean and standard deviation and that we are assuming the population is normally distributed -- those are not automatic givens! If you take a sample and want to estimate the population mean and standard deviation, you should use ``` mean, sigma = a.mean(), a.std(ddof=1) ``` since this value for sigma is the unbiased estimator for the population standard deviation.", "best_answers_score":0.8, "library_name":"scipy", "question_url":"https:\/\/stackoverflow.com\/questions\/28242593\/correct-way-to-obtain-confidence-interval-with-scipy", "best_answers_votes":94, "question_length":828, "response_length":2630 }, { "question":"Scipy Normaltest how is it used? I need to use normaltest in scipy for testing if the dataset is normally distributed. But I can't seem to find any good examples how to use scipy.stats.normaltest. 
My dataset has more than 100 values.", "response":"``` In [12]: import scipy.stats as stats In [13]: x = stats.norm.rvs(size = 100) In [14]: stats.normaltest(x) Out[14]: (1.627533590094232, 0.44318552909231262) ``` normaltest returns a 2-tuple of the chi-squared statistic, and the associated p-value. Given the null hypothesis that x came from a normal distribution, the p-value represents the probability that a chi-squared statistic that large (or larger) would be seen. If the p-val is very small, it means it is unlikely that the data came from a normal distribution. For example: ``` In [15]: y = stats.uniform.rvs(size = 100) In [16]: stats.normaltest(y) Out[16]: (31.487039026711866, 1.4543748291516241e-07) ```", "best_answers_score":0.8, "library_name":"scipy", "question_url":"https:\/\/stackoverflow.com\/questions\/12838993\/scipy-normaltest-how-is-it-used", "best_answers_votes":83, "question_length":230, "response_length":668 }, { "question":"Inverse Distance Weighted (IDW) Interpolation with Python The Question: What is the best way to calculate inverse distance weighted (IDW) interpolation in Python, for point locations? Some Background: Currently I'm using RPy2 to interface with R and its gstat module. Unfortunately, the gstat module conflicts with arcgisscripting which I got around by running RPy2 based analysis in a separate process. Even if this issue is resolved in a recent\/future release, and efficiency can be improved, I'd still like to remove my dependency on installing R. The gstat website does provide a stand alone executable, which is easier to package with my python script, but I still hope for a Python solution which doesn't require multiple writes to disk and launching external processes. The number of calls to the interpolation function, of separate sets of points and values, can approach 20,000 in the processing I'm performing. 
I specifically need to interpolate for points, so using the IDW function in ArcGIS to generate rasters sounds even worse than using R, in terms of performance.....unless there is a way to efficiently mask out only the points I need. Even with this modification, I wouldn't expect performance to be all that great. I will look into this option as another alternative. UPDATE: The problem here is you are tied to the cell size you are using. If you reduce the cell-size to get better accuracy, processing takes a long time. You also need to follow up by extracting by points.....over all an ugly method if you want values for specific points. I have looked at the scipy documentation, but it doesn't look like there is a straight forward way to calculate IDW. I'm thinking of rolling my own implementation, possibly using some of the scipy functionality to locate the closest points and calculate distances. Am I missing something obvious? Is there a python module I haven't seen that does exactly what I want? Is creating my own implementation with the aid of scipy a wise choice?", "response":"changed 20 Oct: this class Invdisttree combines inverse-distance weighting and scipy.spatial.KDTree. Forget the original brute-force answer; this is imho the method of choice for scattered-data interpolation. ``` \"\"\" invdisttree.py: inverse-distance-weighted interpolation using KDTree fast, solid, local \"\"\" from __future__ import division import numpy as np from scipy.spatial import cKDTree as KDTree # http:\/\/docs.scipy.org\/doc\/scipy\/reference\/spatial.html __date__ = \"2010-11-09 Nov\" # weights, doc #............................................................................... 
class Invdisttree: \"\"\" inverse-distance-weighted interpolation using KDTree: invdisttree = Invdisttree( X, z ) -- data points, values interpol = invdisttree( q, nnear=3, eps=0, p=1, weights=None, stat=0 ) interpolates z from the 3 points nearest each query point q; For example, interpol[ a query point q ] finds the 3 data points nearest q, at distances d1 d2 d3 and returns the IDW average of the values z1 z2 z3 (z1\/d1 + z2\/d2 + z3\/d3) \/ (1\/d1 + 1\/d2 + 1\/d3) = .55 z1 + .27 z2 + .18 z3 for distances 1 2 3 q may be one point, or a batch of points. eps: approximate nearest, dist = 0 w \/= np.sum(w) wz = np.dot( w, self.z[ix] ) if self.stat: self.wn += 1 self.wsum += w interpol[jinterpol] = wz jinterpol += 1 return interpol if qdim > 1 else interpol[0] #............................................................................... if __name__ == \"__main__\": import sys N = 10000 Ndim = 2 Nask = N # N Nask 1e5: 24 sec 2d, 27 sec 3d on mac g4 ppc Nnear = 8 # 8 2d, 11 3d => 5 % chance one-sided -- Wendel, mathoverflow.com leafsize = 10 eps = .1 # approximate nearest, dist <= (1 + eps) * true nearest p = 1 # weights ~ 1 \/ distance**p cycle = .25 seed = 1 exec \"\\n\".join( sys.argv[1:] ) # python this.py N= ... np.random.seed(seed ) np.set_printoptions( 3, threshold=100, suppress=True ) # .3f print \"\\nInvdisttree: N %d Ndim %d Nask %d Nnear %d leafsize %d eps %.2g p %.2g\" % ( N, Ndim, Nask, Nnear, leafsize, eps, p) def terrain(x): \"\"\" ~ rolling hills \"\"\" return np.sin( (2*np.pi \/ cycle) * np.mean( x, axis=-1 )) known = np.random.uniform( size=(N,Ndim) ) ** .5 # 1\/(p+1): density x^p z = terrain( known ) ask = np.random.uniform( size=(Nask,Ndim) ) #............................................................................... 
invdisttree = Invdisttree( known, z, leafsize=leafsize, stat=1 ) interpol = invdisttree( ask, nnear=Nnear, eps=eps, p=p ) print \"average distances to nearest points: %s\" % \\ np.mean( invdisttree.distances, axis=0 ) print \"average weights: %s\" % (invdisttree.wsum \/ invdisttree.wn) # see Wikipedia Zipf's law err = np.abs( terrain(ask) - interpol ) print \"average |terrain() - interpolated|: %.2g\" % np.mean(err) # print \"interpolate a single point: %.2g\" % \\ # invdisttree( known[0], nnear=Nnear, eps=eps ) ```", "best_answers_score":0.8, "library_name":"scipy", "question_url":"https:\/\/stackoverflow.com\/questions\/3104781\/inverse-distance-weighted-idw-interpolation-with-python", "best_answers_votes":44, "question_length":2000, "response_length":2837 }, { "question":"Difference between scipy.spatial.KDTree and scipy.spatial.cKDTree What is the difference between these two algorithms?", "response":"From SciPy 1.6 on, cKDTree and KDTree are identical, and you should prefer KDTree if you aren't worried about pre-1.6 compatibility. Before SciPy 1.6, cKDTree was a subset of KDTree, implemented in C++ wrapped in Cython, so therefore faster.", "best_answers_score":0.8, "library_name":"scipy", "question_url":"https:\/\/stackoverflow.com\/questions\/6931209\/difference-between-scipy-spatial-kdtree-and-scipy-spatial-ckdtree", "best_answers_votes":59, "question_length":118, "response_length":241 }, { "question":"Concatenate sparse matrices in Python using SciPy\/Numpy What would be the most efficient way to concatenate sparse matrices in Python using SciPy\/Numpy? Here I used the following: ``` >>> np.hstack((X, X2)) array([ ' with 1135520 stored elements in Compressed Sparse Row format>, ' with 1135520 stored elements in Compressed Sparse Row format>], dtype=object) ``` I would like to use both predictors in a regression, but the current format is obviously not what I'm looking for. 
Would it be possible to get the following: ``` ' with 2271040 stored elements in Compressed Sparse Row format> ``` It is too large to be converted to a dense format.", "response":"You can use the scipy.sparse.hstack to concatenate sparse matrices with the same number of rows (horizontal concatenation): ``` from scipy.sparse import hstack hstack((X, X2)) ``` Similarly, you can use scipy.sparse.vstack to concatenate sparse matrices with the same number of columns (vertical concatenation). Using numpy.hstack or numpy.vstack will create an array with two sparse matrix objects.", "best_answers_score":0.8, "library_name":"scipy", "question_url":"https:\/\/stackoverflow.com\/questions\/19710602\/concatenate-sparse-matrices-in-python-using-scipy-numpy", "best_answers_votes":90, "question_length":643, "response_length":399 }, { "question":"Invertible STFT and ISTFT in Python Is there any general-purpose form of short-time Fourier transform with corresponding inverse transform built into SciPy or NumPy or whatever? There's the pyplot specgram function in matplotlib, which calls ax.specgram(), which calls mlab.specgram(), which calls _spectral_helper(): ``` #The checks for if y is x are so that we can use the same function to #implement the core of psd(), csd(), and spectrogram() without doing #extra calculations. We return the unaveraged Pxy, freqs, and t. ``` but This is a helper function that implements the commonality between the psd, csd, and spectrogram. It is NOT meant to be used outside of mlab I'm not sure if this can be used to do an STFT and ISTFT, though. Is there anything else, or should I translate something like these MATLAB functions? 
I know how to write my own ad-hoc implementation; I'm just looking for something full-featured, which can handle different windowing functions (but has a sane default), is fully invertible with COLA windows (istft(stft(x))==x), tested by multiple people, no off-by-one errors, handles the ends and zero padding well, fast RFFT implementation for real input, etc.", "response":"Here is my Python code, simplified for this answer: ``` import numpy as np import pylab def stft(x, fs, framesz, hop): framesamp = int(framesz*fs) hopsamp = int(hop*fs) w = np.hanning(framesamp) X = np.array([np.fft.fft(w*x[i:i+framesamp]) for i in range(0, len(x)-framesamp, hopsamp)]) return X def istft(X, fs, T, hop): x = np.zeros(T*fs) framesamp = X.shape[1] hopsamp = int(hop*fs) for n,i in enumerate(range(0, len(x)-framesamp, hopsamp)): x[i:i+framesamp] += np.real(np.fft.ifft(X[n])) return x ``` Notes: The list comprehension is a little trick I like to use to simulate block processing of signals in numpy\/scipy. It's like blkproc in Matlab. Instead of a for loop, I apply a command (e.g., fft) to each frame of the signal inside a list comprehension, and then np.array casts it to a 2D-array. I use this to make spectrograms, chromagrams, MFCC-grams, and much more. For this example, I use a naive overlap-and-add method in istft. In order to reconstruct the original signal the sum of the sequential window functions must be constant, preferably equal to unity (1.0). In this case, I've chosen the Hann (or hanning) window and a 50% overlap which works perfectly. See this discussion for more information. There are probably more principled ways of computing the ISTFT. This example is mainly meant to be educational. A test: ``` if __name__ == '__main__': f0 = 440 # Compute the STFT of a 440 Hz sinusoid fs = 8000 # sampled at 8 kHz T = 5 # lasting 5 seconds framesz = 0.050 # with a frame size of 50 milliseconds hop = 0.025 # and hop size of 25 milliseconds. # Create test signal and STFT. 
t = np.linspace(0, T, T*fs, endpoint=False) x = np.sin(2*np.pi*f0*t) X = stft(x, fs, framesz, hop) # Plot the magnitude spectrogram. pylab.figure() pylab.imshow(np.abs(X.T), origin='lower', aspect='auto', interpolation='nearest') pylab.xlabel('Time') pylab.ylabel('Frequency') pylab.show() # Compute the ISTFT. xhat = istft(X, fs, T, hop) # Plot the input and output signals over 0.1 seconds. T1 = int(0.1*fs) pylab.figure() pylab.plot(t[:T1], x[:T1], t[:T1], xhat[:T1]) pylab.xlabel('Time (seconds)') pylab.figure() pylab.plot(t[-T1:], x[-T1:], t[-T1:], xhat[-T1:]) pylab.xlabel('Time (seconds)') ```", "best_answers_score":0.8, "library_name":"scipy", "question_url":"https:\/\/stackoverflow.com\/questions\/2459295\/invertible-stft-and-istft-in-python", "best_answers_votes":65, "question_length":1192, "response_length":2225 }, { "question":"Prevent anti-aliasing for imshow in matplotlib When I use matplotlib's imshow() method to represent a small numpy matrix, it ends up doing some smoothing between pixels. Is there any way to disable this? It makes my figures misleading in presentations. The figure above is a 28x28 image, so I should be seeing large squares of single colors representing each pixel (as matlab would display it when using imagesc()). But instead, the pixels seem to be blurred with neighboring pixels. Is there a way to disable this behavior?", "response":"There is an interpolation option for imshow which controls how and if interpolation will be applied to the rendering of the matrix. If you try ``` imshow(array, interpolation=\"nearest\") ``` you might get something more like you want. 
As an example ``` A=10*np.eye(10) + np.random.rand(100).reshape(10,10) imshow(A) ``` compared with ``` A=10*np.eye(10) + np.random.rand(100).reshape(10,10) imshow(A, interpolation=\"nearest\") ```", "best_answers_score":0.8, "library_name":"scipy", "question_url":"https:\/\/stackoverflow.com\/questions\/8376609\/prevent-anti-aliasing-for-imshow-in-matplotlib", "best_answers_votes":56, "question_length":526, "response_length":428 }, { "question":"Using a sparse matrix versus numpy array I am creating some numpy arrays with word counts in Python: rows are documents, columns are counts for word X. If I have a lot of zero counts, people suggest using sparse matrices when processing these further, e.g. in a classifier. When feeding a numpy array versus a sparse matrix into the Scikit logistic regression classifier, it did not seem to make much of a difference, however. So I was wondering about three things: Wikipedia says a sparse matrix is a matrix in which most of the elements are zero Is that an appropriate way to determine when to use a sparse matrix format - as soon as > 50 % of the values are zero? Or does it make sense to use just in case? How much does a sparse matrix help performance in a task like mine, especially compared to a numpy array or a standard list? So far, I collect my data into a numpy array, then convert into the csr_matrix in Scipy. Is that the right way to do it? I could not figure out how to build a sparse matrix from the ground up, and that might be impossible. Any help is much appreciated!", "response":"The scipy sparse matrix package, and similar ones in MATLAB, was based on ideas developed from linear algebra problems, such as solving large sparse linear equations (e.g. finite difference and finite element implementations). So things like matrix product (the dot product for numpy arrays) and equation solvers are well developed. 
My rough experience is that a sparse csr matrix product has to have a 1% sparsity to be faster than the equivalent dense dot operation - in other words, one nonzero value for every 99 zeros. (but see tests below) But people also try to use sparse matrices to save memory. But keep in mind that such a matrix has to store 3 arrays of values (at least in the coo format). So the sparsity has to be less than 1\/3 to start saving memory. Obviously you aren't going to save memory if you first build the dense array, and create the sparse one from that. The scipy package implements many sparse formats. The coo format is easiest to understand and build. Build one according to documentation and look at its .data, .row, and .col attributes (3 1d arrays). csr and csc are typically built from the coo format, and compress the data a bit, making them a bit harder to understand. But they have most of the math functionality. It is also possible to index csr format, though in general this is slower than the equivalent dense matrix\/array case. Other operations like changing values (especially from 0 to nonzero), concatenation, incremental growth, are also slower. lil (lists of lists) is also easy to understand, and best for incremental building. dok is actually a dictionary subclass. A key point is that a sparse matrix is limited to 2d, and in many ways behaves like the np.matrix class (though it isn't a subclass). A search for other questions using scikit-learn and sparse might be the best way of finding the pros\/cons of using these matrices. I've answered a number of questions, but I know the 'sparse' side better than the 'learn' side. I think they are useful, but I get the sense that the fit isn't always the best. Any customization is on the learn side. So far the sparse package has not been optimized for this application. I just tried some matrix product tests, using the sparse.random method to create a sparse matrix with a specified sparsity.
Sparse matrix multiplication performed better than I expected. ``` In [251]: M=sparse.random(1000,1000,.5) In [252]: timeit M1=M*M 1 loops, best of 3: 2.78 s per loop In [253]: timeit Ma=M.toarray(); M2=Ma.dot(Ma) 1 loops, best of 3: 4.28 s per loop ``` It is a size issue; for smaller matrix the dense dot is faster ``` In [255]: M=sparse.random(100,100,.5) In [256]: timeit M1=M*M 100 loops, best of 3: 3.24 ms per loop In [257]: timeit Ma=M.toarray(); M2=Ma.dot(Ma) 1000 loops, best of 3: 1.44 ms per loop ``` But compare indexing ``` In [268]: timeit M.tocsr()[500,500] 10 loops, best of 3: 86.4 ms per loop In [269]: timeit Ma[500,500] 1000000 loops, best of 3: 318 ns per loop In [270]: timeit Ma=M.toarray();Ma[500,500] 10 loops, best of 3: 23.6 ms per loop ```", "best_answers_score":0.8, "library_name":"scipy", "question_url":"https:\/\/stackoverflow.com\/questions\/36969886\/using-a-sparse-matrix-versus-numpy-array", "best_answers_votes":46, "question_length":1087, "response_length":3066 }, { "question":"Plot Normal distribution with Matplotlib [duplicate] This question already has answers here: How to plot normal distribution (10 answers) Closed 3 years ago. 
please help me to plot the normal distribution of the following data: DATA: ``` import numpy as np import matplotlib.pyplot as plt from scipy.stats import norm h = [186, 176, 158, 180, 186, 168, 168, 164, 178, 170, 189, 195, 172, 187, 180, 186, 185, 168, 179, 178, 183, 179, 170, 175, 186, 159, 161, 178, 175, 185, 175, 162, 173, 172, 177, 175, 172, 177, 180] std = np.std(h) mean = np.mean(h) plt.plot(norm.pdf(h,mean,std)) ``` output: ``` Standard Deviation = 8.54065575872 mean = 176.076923077 ``` the plot is incorrect, what is wrong with my code?", "response":"Note: This solution is using pylab, not matplotlib.pyplot. You may try using hist to put your data info along with the fitted curve as below: ``` import numpy as np import scipy.stats as stats import pylab as pl h = sorted([186, 176, 158, 180, 186, 168, 168, 164, 178, 170, 189, 195, 172, 187, 180, 186, 185, 168, 179, 178, 183, 179, 170, 175, 186, 159, 161, 178, 175, 185, 175, 162, 173, 172, 177, 175, 172, 177, 180]) #sorted fit = stats.norm.pdf(h, np.mean(h), np.std(h)) #this is a fitting indeed pl.plot(h,fit,'-o') pl.hist(h,normed=True) #use this to draw histogram of your data pl.show() #you may also need to add this ```", "best_answers_score":0.8, "library_name":"scipy", "question_url":"https:\/\/stackoverflow.com\/questions\/20011494\/plot-normal-distribution-with-matplotlib", "best_answers_votes":100, "question_length":710, "response_length":625 }, { "question":"\"ImportError: cannot import name 'triu' from 'scipy.linalg'\" when importing Gensim I am trying to use Gensim, but running import gensim raises this error: ```none Traceback (most recent call last): File \"\", line 1, in File \"\/usr\/local\/lib\/python3.10\/dist-packages\/gensim\/__init__.py\", line 11, in from gensim import parsing, corpora, matutils, interfaces, models, similarities, utils # noqa:F401 File \"\/usr\/local\/lib\/python3.10\/dist-packages\/gensim\/corpora\/__init__.py\", line 6, in from .indexedcorpus import IndexedCorpus # noqa:F401
must appear before the other classes File \"\/usr\/local\/lib\/python3.10\/dist-packages\/gensim\/corpora\/indexedcorpus.py\", line 14, in from gensim import interfaces, utils File \"\/usr\/local\/lib\/python3.10\/dist-packages\/gensim\/interfaces.py\", line 19, in from gensim import utils, matutils File \"\/usr\/local\/lib\/python3.10\/dist-packages\/gensim\/matutils.py\", line 20, in from scipy.linalg import get_blas_funcs, triu ImportError: cannot import name 'triu' from 'scipy.linalg' (\/usr\/local\/lib\/python3.10\/dist-packages\/scipy\/linalg\/__init__.py) ``` Why is this happening and how can I fix it?", "response":"I found the issue. The scipy.linalg functions tri, triu & tril are deprecated and will be removed in SciPy 1.13. \u2014 SciPy 1.11.0 Release Notes \u00a7 Deprecated features So, I installed SciPy v1.10.1 instead of the latest version and it was working well. ```bash pip install scipy==1.10.1 ```", "best_answers_score":0.8, "library_name":"scipy", "question_url":"https:\/\/stackoverflow.com\/questions\/78279136\/importerror-cannot-import-name-triu-from-scipy-linalg-when-importing-gens", "best_answers_votes":66, "question_length":1121, "response_length":286 }, { "question":"How to convert a column or row matrix to a diagonal matrix in Python? I have a row vector A, A = [a1 a2 a3 ..... an] and I would like to create a diagonal matrix, B = diag(a1, a2, a3, ....., an) with the elements of this row vector. How can this be done in Python? 
UPDATE This is the code to illustrate the problem: ``` import numpy as np a = np.matrix([1,2,3,4]) d = np.diag(a) print (d) ``` the output of this code is [1], but my desired output is: ``` [[1 0 0 0] [0 2 0 0] [0 0 3 0] [0 0 0 4]] ```", "response":"You can use diag method: ``` import numpy as np a = np.array([1,2,3,4]) d = np.diag(a) # or simpler: d = np.diag([1,2,3,4]) print(d) ``` Results in: ``` [[1 0 0 0] [0 2 0 0] [0 0 3 0] [0 0 0 4]] ``` If you have a row vector, you can do this: ``` a = np.array([[1, 2, 3, 4]]) d = np.diag(a[0]) ``` Results in: ``` [[1 0 0 0] [0 2 0 0] [0 0 3 0] [0 0 0 4]] ``` For the given matrix in the question: ``` import numpy as np a = np.matrix([1,2,3,4]) d = np.diag(a.A1) print (d) ``` Result is again: ``` [[1 0 0 0] [0 2 0 0] [0 0 3 0] [0 0 0 4]] ```", "best_answers_score":0.8, "library_name":"scipy", "question_url":"https:\/\/stackoverflow.com\/questions\/28598572\/how-to-convert-a-column-or-row-matrix-to-a-diagonal-matrix-in-python", "best_answers_votes":73, "question_length":500, "response_length":543 }, { "question":"What's the difference between KFold and ShuffleSplit CV? It seems like KFold generates the same values every time the object is iterated over, while Shuffle Split generates different indices every time. Is this correct? If so, what are the uses for one over the other? 
``` cv = cross_validation.KFold(10, n_folds=2,shuffle=True,random_state=None) cv2 = cross_validation.ShuffleSplit(10,n_iter=2,test_size=0.5) print(list(iter(cv))) print(list(iter(cv))) print(list(iter(cv2))) print(list(iter(cv2))) ``` Yields the following output: ``` [(array([1, 3, 5, 8, 9]), array([0, 2, 4, 6, 7])), (array([0, 2, 4, 6, 7]), array([1, 3, 5, 8, 9]))] [(array([1, 3, 5, 8, 9]), array([0, 2, 4, 6, 7])), (array([0, 2, 4, 6, 7]), array([1, 3, 5, 8, 9]))] [(array([4, 6, 3, 2, 7]), array([8, 1, 9, 0, 5])), (array([3, 6, 7, 0, 5]), array([9, 1, 8, 4, 2]))] [(array([3, 0, 2, 1, 7]), array([5, 6, 9, 4, 8])), (array([0, 7, 1, 3, 8]), array([6, 2, 5, 4, 9]))] ```", "response":"Difference in KFold and ShuffleSplit output KFold will divide your data set into prespecified number of folds, and every sample must be in one and only one fold. A fold is a subset of your dataset. ShuffleSplit will randomly sample your entire dataset during each iteration to generate a training set and a test set. The test_size and train_size parameters control how large the test and training test set should be for each iteration. Since you are sampling from the entire dataset during each iteration, values selected during one iteration, could be selected again during another iteration. Summary: ShuffleSplit works iteratively, KFold just divides the dataset into k folds. Difference when doing validation In KFold, during each round you will use one fold as the test set and all the remaining folds as your training set. However, in ShuffleSplit, during each round n you should only use the training and test set from iteration n. As your data set grows, cross validation time increases, making shufflesplits a more attractive alternate. 
If you can train your algorithm with a certain percentage of your data, as opposed to using all k-1 folds, ShuffleSplit is an attractive option.
One section of the code utilizes sparse matrices - which seem to have a single (annoying) type in Matlab, and I'm trying to figure out which type I should use2 in python. 1: This is for a class. Most people are doing the project in Matlab, but I like to create unnecessary work and confusion --- apparently. 2: This is an academic question: I have the code working properly with the 'CSR' format, but I'm interested in knowing what the optimal usages are.", "response":"Sorry if I'm not answering this completely enough, but hopefully I can provide some insight. CSC (Compressed Sparse Column) and CSR (Compressed Sparse Row) are more compact and efficient, but difficult to construct \"from scratch\". Coo (Coordinate) and DOK (Dictionary of Keys) are easier to construct, and can then be converted to CSC or CSR via matrix.tocsc() or matrix.tocsr(). CSC is more efficient at accessing column-vectors or column operations, generally, as it is stored as arrays of columns and their value at each row. CSR matrices are the opposite; stored as arrays of rows and their values at each column, and are more efficient at accessing row-vectors or row operations.", "best_answers_score":0.8, "library_name":"scipy", "question_url":"https:\/\/stackoverflow.com\/questions\/15755270\/scipy-sparse-matrices-purpose-and-usage-of-different-implementations", "best_answers_votes":54, "question_length":768, "response_length":684 }, { "question":"How to elementwise-multiply a scipy.sparse matrix by a broadcasted dense 1d array? Suppose I have a 2d sparse array.
In my real usecase both the number of rows and columns are much bigger (say 20000 and 50000) hence it cannot fit in memory when a dense representation is used: ``` >>> import numpy as np >>> import scipy.sparse as ssp >>> a = ssp.lil_matrix((5, 3)) >>> a[1, 2] = -1 >>> a[4, 1] = 2 >>> a.todense() matrix([[ 0., 0., 0.], [ 0., 0., -1.], [ 0., 0., 0.], [ 0., 0., 0.], [ 0., 2., 0.]]) ``` Now suppose I have a dense 1d array with all non-zeros components with size 3 (or 50000 in my real life case): ``` >>> d = np.ones(3) * 3 >>> d array([ 3., 3., 3.]) ``` I would like to compute the elementwise multiplication of a and d using the usual broadcasting semantics of numpy. However, sparse matrices in scipy are of the np.matrix: the '*' operator is overloaded to have it behave like a matrix-multiply instead of the elementwise-multiply: ``` >>> a * d array([ 0., -3., 0., 0., 6.]) ``` One solution would be to make 'a' switch to the array semantics for the '*' operator, that would give the expected result: ``` >>> a.toarray() * d array([[ 0., 0., 0.], [ 0., 0., -3.], [ 0., 0., 0.], [ 0., 0., 0.], [ 0., 6., 0.]]) ``` But I cannot do that since the call to toarray() would materialize the dense version of 'a' which does not fit in memory (and the result will be dense too): ``` >>> ssp.issparse(a.toarray()) False ``` Any idea how to build this while keeping only sparse datastructures and without having to do a unefficient python loop on the columns of 'a'?", "response":"I replied over at scipy.org as well, but I thought I should add an answer here, in case others find this page when searching. You can turn the vector into a sparse diagonal matrix and then use matrix multiplication (with *) to do the same thing as broadcasting, but efficiently. 
``` >>> d = ssp.lil_matrix((3,3)) >>> d.setdiag(np.ones(3)*3) >>> a*d ' with 2 stored elements in Compressed Sparse Row format> >>> (a*d).todense() matrix([[ 0., 0., 0.], [ 0., 0., -3.], [ 0., 0., 0.], [ 0., 0., 0.], [ 0., 6., 0.]]) ``` Hope that helps!", "best_answers_score":0.8, "library_name":"scipy", "question_url":"https:\/\/stackoverflow.com\/questions\/3247775\/how-to-elementwise-multiply-a-scipy-sparse-matrix-by-a-broadcasted-dense-1d-arra", "best_answers_votes":52, "question_length":1578, "response_length":532 }, { "question":"Use Distance Matrix in scipy.cluster.hierarchy.linkage()? I have a distance matrix n*n M where M_ij is the distance between object_i and object_j. So as expected, it takes the following form: ``` \/ 0 M_01 M_02 ... M_0n\\ | M_10 0 M_12 ... M_1n | | M_20 M_21 0 ... M2_n | | ... | \\ M_n0 M_n2 M_n2 ... 0 \/ ``` Now I wish to cluster these n objects with hierarchical clustering. Python has an implementation of this called scipy.cluster.hierarchy.linkage(y, method='single', metric='euclidean'). Its documentation says: y must be a {n \\choose 2} sized vector where n is the number of original observations paired in the distance matrix. y : ndarray A condensed or redundant distance matrix. A condensed distance matrix is a flat array containing the upper triangular of the distance matrix. This is the form that pdist returns. Alternatively, a collection of m observation vectors in n dimensions may be passed as an m by n array. I am confused by this description of y. Can I directly feed my M in as the input y? Update @hongbo-zhu-cn has raised this issue up in GitHub. This is exactly what I am concerning about. However, as a newbie to GitHub, I don't know how it works and therefore have no idea how this issue is dealt with.", "response":"It seems that indeed we cannot directly pass the redundant square matrix in, although the documentation claims we can do so. 
To benefit anyone who faces the same problem in the future, I write my solution as an additional answer here. So the copy-and-paste guys can just proceed with the clustering. Use the following snippet to condense the matrix and happily proceed. ``` import scipy.spatial.distance as ssd # convert the redundant n*n square matrix form into a condensed nC2 array distArray = ssd.squareform(distMatrix) # distArray[{n choose 2}-{n-i choose 2} + (j-i-1)] is the distance between points i and j ``` Please correct me if I am wrong.", "best_answers_score":0.8, "library_name":"scipy", "question_url":"https:\/\/stackoverflow.com\/questions\/18952587\/use-distance-matrix-in-scipy-cluster-hierarchy-linkage", "best_answers_votes":50, "question_length":1227, "response_length":650 }, { "question":"Plotting power spectrum in python I have an array with 301 values, which were gathered from a movie clip with 301 frames. This means 1 value from 1 frame. The movie clip is running at 30 fps, so is in fact 10 sec long Now I would like to get the power spectrum of this \"signal\" ( with the right Axis). I tried: ``` X = fft(S_[:,2]); pl.plot(abs(X)) pl.show() ``` I also tried: ``` X = fft(S_[:,2]); pl.plot(abs(X)**2) pl.show() ``` Though I don't think this is the real spectrum. the signal: The spectrum: The power spectrum : Can anyone provide some help with this ? I would like to have a plot in Hz.", "response":"Numpy has a convenience function, np.fft.fftfreq to compute the frequencies associated with FFT components: ``` from __future__ import division import numpy as np import matplotlib.pyplot as plt data = np.random.rand(301) - 0.5 ps = np.abs(np.fft.fft(data))**2 time_step = 1 \/ 30 freqs = np.fft.fftfreq(data.size, time_step) idx = np.argsort(freqs) plt.plot(freqs[idx], ps[idx]) ``` Note that the largest frequency you see in your case is not 30 Hz, but ``` In [7]: max(freqs) Out[7]: 14.950166112956811 ``` You never see the sampling frequency in a power spectrum. 
If you had had an even number of samples, then you would have reached the Nyquist frequency, 15 Hz in your case (although numpy would have calculated it as -15).", "best_answers_score":0.8, "library_name":"scipy", "question_url":"https:\/\/stackoverflow.com\/questions\/15382076\/plotting-power-spectrum-in-python", "best_answers_votes":65, "question_length":602, "response_length":727 }, { "question":"Quadratic Program (QP) Solver that only depends on NumPy\/SciPy? I would like students to solve a quadratic program in an assignment without them having to install extra software like cvxopt etc. Is there a python implementation available that only depends on NumPy\/SciPy?", "response":"I'm not very familiar with quadratic programming, but I think you can solve this sort of problem just using scipy.optimize's constrained minimization algorithms. Here's an example: ``` import numpy as np from scipy import optimize from matplotlib import pyplot as plt from mpl_toolkits.mplot3d.axes3d import Axes3D # minimize # F = x[1]^2 + 4x[2]^2 -32x[2] + 64 # subject to: # x[1] + x[2] = 0 # x[2] >= 0 # x[2] <= 4 # in matrix notation: # F = (1\/2)*x.T*H*x + c*x + c0 # subject to: # Ax <= b # where: # H = [[2, 0], # [0, 8]] # c = [0, -32] # c0 = 64 # A = [[ 1, 1], # [-1, 2], # [-1, 0], # [0, -1], # [0, 1]] # b = [7,4,0,0,4] H = np.array([[2., 0.], [0., 8.]]) c = np.array([0, -32]) c0 = 64 A = np.array([[ 1., 1.], [-1., 2.], [-1., 0.], [0., -1.], [0., 1.]]) b = np.array([7., 4., 0., 0., 4.]) x0 = np.random.randn(2) def loss(x, sign=1.): return sign * (0.5 * np.dot(x.T, np.dot(H, x))+ np.dot(c, x) + c0) def jac(x, sign=1.): return sign * (np.dot(x.T, H) + c) cons = {'type':'ineq', 'fun':lambda x: b - np.dot(A,x), 'jac':lambda x: -A} opt = {'disp':False} def solve(): res_cons = optimize.minimize(loss, x0, jac=jac,constraints=cons, method='SLSQP', options=opt) res_uncons = optimize.minimize(loss, x0, jac=jac, method='SLSQP', options=opt) print '\\nConstrained:' print res_cons print 
'\\nUnconstrained:' print res_uncons x1, x2 = res_cons['x'] f = res_cons['fun'] x1_unc, x2_unc = res_uncons['x'] f_unc = res_uncons['fun'] # plotting xgrid = np.mgrid[-2:4:0.1, 1.5:5.5:0.1] xvec = xgrid.reshape(2, -1).T F = np.vstack([loss(xi) for xi in xvec]).reshape(xgrid.shape[1:]) ax = plt.axes(projection='3d') ax.hold(True) ax.plot_surface(xgrid[0], xgrid[1], F, rstride=1, cstride=1, cmap=plt.cm.jet, shade=True, alpha=0.9, linewidth=0) ax.plot3D([x1], [x2], [f], 'og', mec='w', label='Constrained minimum') ax.plot3D([x1_unc], [x2_unc], [f_unc], 'oy', mec='w', label='Unconstrained minimum') ax.legend(fancybox=True, numpoints=1) ax.set_xlabel('x1') ax.set_ylabel('x2') ax.set_zlabel('F') ``` Output: ``` Constrained: status: 0 success: True njev: 4 nfev: 4 fun: 7.9999999999997584 x: array([ 2., 3.]) message: 'Optimization terminated successfully.' jac: array([ 4., -8., 0.]) nit: 4 Unconstrained: status: 0 success: True njev: 3 nfev: 5 fun: 0.0 x: array([ -2.66453526e-15, 4.00000000e+00]) message: 'Optimization terminated successfully.' jac: array([ -5.32907052e-15, -3.55271368e-15, 0.00000000e+00]) nit: 3 ```", "best_answers_score":0.8, "library_name":"scipy", "question_url":"https:\/\/stackoverflow.com\/questions\/17009774\/quadratic-program-qp-solver-that-only-depends-on-numpy-scipy", "best_answers_votes":55, "question_length":271, "response_length":2422 }, { "question":"kalman 2d filter in python My input is 2d (x,y) time series of a dot moving on a screen for a tracker software. It has some noise I want to remove using Kalman filter. Does someone can point me for a python code for Kalman 2d filter? In scipy cookbook I found only a 1d example: http:\/\/www.scipy.org\/Cookbook\/KalmanFiltering I saw there is implementation for Kalman filter in OpenCV, but couldn't find code examples. Thanks!", "response":"Here is my implementation of the Kalman filter based on the equations given on wikipedia. 
Please be aware that my understanding of Kalman filters is very rudimentary so there are most likely ways to improve this code. (For example, it suffers from the numerical instability problem discussed here. As I understand it, this only affects the numerical stability when Q, the motion noise, is very small. In real life, the noise is usually not small, so fortunately (at least for my implementation) in practice the numerical instability does not show up.) In the example below, kalman_xy assumes the state vector is a 4-tuple: 2 numbers for the location, and 2 numbers for the velocity. The F and H matrices have been defined specifically for this state vector: If x is a 4-tuple state, then ``` new_x = F * x position = H * x ``` It then calls kalman, which is the generalized Kalman filter. It is general in the sense it is still useful if you wish to define a different state vector -- perhaps a 6-tuple representing location, velocity and acceleration. You just have to define the equations of motion by supplying the appropriate F and H. ``` import numpy as np import matplotlib.pyplot as plt def kalman_xy(x, P, measurement, R, motion = np.matrix('0. 0. 0. 0.').T, Q = np.matrix(np.eye(4))): \"\"\" Parameters: x: initial state 4-tuple of location and velocity: (x0, x1, x0_dot, x1_dot) P: initial uncertainty convariance matrix measurement: observed position R: measurement noise motion: external motion added to state vector x Q: motion noise (same shape as P) \"\"\" return kalman(x, P, measurement, R, motion, Q, F = np.matrix(''' 1. 0. 1. 0.; 0. 1. 0. 1.; 0. 0. 1. 0.; 0. 0. 0. 1. '''), H = np.matrix(''' 1. 0. 0. 0.; 0. 1. 0. 
0.''')) def kalman(x, P, measurement, R, motion, Q, F, H): ''' Parameters: x: initial state P: initial uncertainty convariance matrix measurement: observed position (same shape as H*x) R: measurement noise (same shape as H) motion: external motion added to state vector x Q: motion noise (same shape as P) F: next state function: x_prime = F*x H: measurement function: position = H*x Return: the updated and predicted new values for (x, P) See also http:\/\/en.wikipedia.org\/wiki\/Kalman_filter This version of kalman can be applied to many different situations by appropriately defining F and H ''' # UPDATE x, P based on measurement m # distance between measured and current position-belief y = np.matrix(measurement).T - H * x S = H * P * H.T + R # residual convariance K = P * H.T * S.I # Kalman gain x = x + K*y I = np.matrix(np.eye(F.shape[0])) # identity matrix P = (I - K*H)*P # PREDICT x, P based on motion x = F*x + motion P = F*P*F.T + Q return x, P def demo_kalman_xy(): x = np.matrix('0. 0. 0. 0.').T P = np.matrix(np.eye(4))*1000 # initial uncertainty N = 20 true_x = np.linspace(0.0, 10.0, N) true_y = true_x**2 observed_x = true_x + 0.05*np.random.random(N)*true_x observed_y = true_y + 0.05*np.random.random(N)*true_y plt.plot(observed_x, observed_y, 'ro') result = [] R = 0.01**2 for meas in zip(observed_x, observed_y): x, P = kalman_xy(x, P, meas, R) result.append((x[:2]).tolist()) kalman_x, kalman_y = zip(*result) plt.plot(kalman_x, kalman_y, 'g-') plt.show() demo_kalman_xy() ``` The red dots show the noisy position measurements, the green line shows the Kalman predicted positions.", "best_answers_score":0.8, "library_name":"scipy", "question_url":"https:\/\/stackoverflow.com\/questions\/13901997\/kalman-2d-filter-in-python", "best_answers_votes":67, "question_length":424, "response_length":3332 }, { "question":"Experience with using h5py to do analytical work on big data in Python? I do a lot of statistical work and use Python as my main language. 
Some of the data sets I work with though can take 20GB of memory, which makes operating on them using in-memory functions in numpy, scipy, and PyIMSL nearly impossible. The statistical analysis language SAS has a big advantage here in that it can operate on data from hard disk as opposed to strictly in-memory processing. But, I want to avoid having to write a lot of code in SAS (for a variety of reasons) and am therefore trying to determine what options I have with Python (besides buying more hardware and memory). I should clarify that approaches like map-reduce will not help in much of my work because I need to operate on complete sets of data (e.g. computing quantiles or fitting a logistic regression model). Recently I started playing with h5py and think it is the best option I have found for allowing Python to act like SAS and operate on data from disk (via hdf5 files), while still being able to leverage numpy\/scipy\/matplotlib, etc. I would like to hear if anyone has experience using Python and h5py in a similar setting and what they have found. Has anyone been able to use Python in \"big data\" settings heretofore dominated by SAS? EDIT: Buying more hardware\/memory certainly can help, but from an IT perspective it is hard for me to sell Python to an organization that needs to analyze huge data sets when Python (or R, or MATLAB etc) need to hold data in memory. SAS continues to have a strong selling point here because while disk-based analytics may be slower, you can confidently deal with huge data sets. So, I am hoping that Stackoverflow-ers can help me figure out how to reduce the perceived risk around using Python as a mainstay big-data analytics language.", "response":"We use Python in conjunction with h5py, numpy\/scipy and boost::python to do data analysis. Our typical datasets have sizes of up to a few hundred GBs. 
HDF5 advantages: data can be inspected conveniently using the h5view application, h5py\/ipython and the h5* commandline tools APIs are available for different platforms and languages structure data using groups annotating data using attributes worry-free built-in data compression io on single datasets is fast HDF5 pitfalls: Performance breaks down, if a h5 file contains too many datasets\/groups (> 1000), because traversing them is very slow. On the other side, io is fast for a few big datasets. Advanced data queries (SQL like) are clumsy to implement and slow (consider SQLite in that case) HDF5 is not thread-safe in all cases: one has to ensure, that the library was compiled with the correct options changing h5 datasets (resize, delete etc.) blows up the file size (in the best case) or is impossible (in the worst case) (the whole h5 file has to be copied to flatten it again)", "best_answers_score":0.8, "library_name":"scipy", "question_url":"https:\/\/stackoverflow.com\/questions\/4871670\/experience-with-using-h5py-to-do-analytical-work-on-big-data-in-python", "best_answers_votes":49, "question_length":1827, "response_length":1037 }, { "question":"how to plot and annotate hierarchical clustering dendrograms in scipy\/matplotlib I'm using dendrogram from scipy to plot hierarchical clustering using matplotlib as follows: ``` mat = array([[1, 0.5, 0.9], [0.5, 1, -0.5], [0.9, -0.5, 1]]) plt.subplot(1,2,1) plt.title(\"mat\") dist_mat = mat linkage_matrix = linkage(dist_mat, \"single\") print \"linkage2:\" print linkage(1-dist_mat, \"single\") dendrogram(linkage_matrix, color_threshold=1, labels=[\"a\", \"b\", \"c\"], show_leaf_counts=True) plt.subplot(1,2,2) plt.title(\"1 - mat\") dist_mat = 1 - mat linkage_matrix = linkage(dist_mat, \"single\") dendrogram(linkage_matrix, color_threshold=1, labels=[\"a\", \"b\", \"c\"], show_leaf_counts=True) ``` My questions are: first, why does mat and 1-mat give identical clusterings here? 
and second, how can I annotate the distance along each branch of the tree using dendrogram so that the distances between pairs of nodes can be compared? Finally, it seems that the show_leaf_counts flag is ignored; is there a way to turn it on so that the number of objects in each class is shown? thanks.", "response":"The input to linkage() is either an n x m array, representing n points in m-dimensional space, or a one-dimensional array containing the condensed distance matrix. In your example, mat is 3 x 3, so you are clustering three 3-d points. Clustering is based on the distance between these points. Why does mat and 1-mat give identical clusterings here? The arrays mat and 1-mat produce the same clustering because the clustering is based on distances between the points, and neither a reflection (-mat) nor a translation (mat + offset) of the entire data set changes the relative distances between the points. How can I annotate the distance along each branch of the tree using dendrogram so that the distances between pairs of nodes can be compared? In the code below, I show how you can use the data returned by dendrogram to label the horizontal segments of the diagram with the corresponding distance. The values associated with the keys icoord and dcoord give the x and y coordinates of each three-segment inverted-U of the figure. In augmented_dendrogram this data is used to add a label with the distance (i.e. the y value) of each horizontal line segment in the dendrogram.
``` from scipy.cluster.hierarchy import dendrogram import matplotlib.pyplot as plt def augmented_dendrogram(*args, **kwargs): ddata = dendrogram(*args, **kwargs) if not kwargs.get('no_plot', False): for i, d in zip(ddata['icoord'], ddata['dcoord']): x = 0.5 * sum(i[1:3]) y = d[1] plt.plot(x, y, 'ro') plt.annotate(\"%.3g\" % y, (x, y), xytext=(0, -8), textcoords='offset points', va='top', ha='center') return ddata ``` For your mat array, the augmented dendrogram is So point 'a' and 'c' are 1.01 units apart, and point 'b' is 1.57 units from the cluster ['a', 'c']. It seems that show_leaf_counts flag is ignored, is there a way to turn it on so that the number of objects in each class is shown? The flag show_leaf_counts only applies when not all the original data points are shown as leaves. For example, when trunc_mode = \"lastp\", only the last p nodes are show. Here's an example with 100 points: ``` import numpy as np from scipy.cluster.hierarchy import linkage import matplotlib.pyplot as plt from augmented_dendrogram import augmented_dendrogram # Generate a random sample of `n` points in 2-d. 
np.random.seed(12312) n = 100 x = np.random.multivariate_normal([0, 0], np.array([[4.0, 2.5], [2.5, 1.4]]), size=(n,)) plt.figure(1, figsize=(6, 5)) plt.clf() plt.scatter(x[:, 0], x[:, 1]) plt.axis('equal') plt.grid(True) linkage_matrix = linkage(x, \"single\") plt.figure(2, figsize=(10, 4)) plt.clf() plt.subplot(1, 2, 1) show_leaf_counts = False ddata = augmented_dendrogram(linkage_matrix, color_threshold=1, p=6, truncate_mode='lastp', show_leaf_counts=show_leaf_counts, ) plt.title(\"show_leaf_counts = %s\" % show_leaf_counts) plt.subplot(1, 2, 2) show_leaf_counts = True ddata = augmented_dendrogram(linkage_matrix, color_threshold=1, p=6, truncate_mode='lastp', show_leaf_counts=show_leaf_counts, ) plt.title(\"show_leaf_counts = %s\" % show_leaf_counts) plt.show() ``` These are the points in the data set: With p=6 and trunc_mode=\"lastp\", dendrogram only shows the \"top\" of the dendrogram. The following shows the effect of show_leaf_counts.", "best_answers_score":0.8, "library_name":"scipy", "question_url":"https:\/\/stackoverflow.com\/questions\/11917779\/how-to-plot-and-annotate-hierarchical-clustering-dendrograms-in-scipy-matplotlib", "best_answers_votes":70, "question_length":1063, "response_length":3218 }, { "question":"Python web hosting: Numpy, Matplotlib, Scientific Computing I write scientific software in Numpy\/Scipy\/Matplotlib. Having developed applications on my home computer, I am now interested in writing simple web applications. Example: user uploads image or audio file, my program processes it using Numpy\/Scipy, and output is displayed on the browser using Matplotlib, or perhaps the user can download a processed file. I already pay for hosting that does have Python 2.4.3 installed, but no Numpy\/Scipy. I don't have shell access via command line, either. Just drag-and-drop FTP. Pretty limited, but I can get simple Python\/CGI scripts working. 
Surprisingly, a web search revealed few suitable options for web hosting with these capabilities already built in. (Please guide me if I am wrong.) I am learning about the Google App Engine, but I still don't have a full understanding about its tools and limitations. What the web did tell me is that others have similar concerns. Hoping for solutions, I thought I would ask these simple questions to the awesome SO community: Is there a simple way of installing numpy (or any third-party package\/library) onto my already hosted space? I know the Python path on my hosted space, and I know the relevant Python\/Numpy directories on my home computer. Can I simply copy files over and have it work? Both local and remote systems run Ubuntu. What hosting sites exist (either free or paid) which have Numpy\/Matplotlib installed or, if not installed, the possibility of installing it? Are there any documented sites that you can reference with working applications, no matter how simple? Can Google App Engine help me in any way? Or is it totally for something else? Have you or others used it to write scientific applications in Python\/Numpy? If so, could you reference them? Thank you for your help. EDIT: After the useful answers below, I bought the $20 plan at Slicehost, and I love it so far! (I first tried Amazon EC2. I must be stupid, but I just couldn't get it to work.) Setting up the Ubuntu server with Apache took mere hours (and I'm an Apache novice). It allows me to do exactly what I wanted with Python plus much more. I now have my own remote repository for version control, too. Thanks again! EDIT 2: Nearly two years later, I tried Linode and EC2 (again). Linode is great. EC2 seemed easier this time around -- maybe it's just added experience, or maybe it's the improvements that Amazon made to the AWS management console. 
For those interested in Numpy\/Scipy\/Matplotlib\/Audiolab, here is my Ubuntu cheat sheet whenever I launch an EC2 instance: ``` ec2:~$ sudo aptitude install build-essential python-scipy ipython python-matplotlib python-dev python-setuptools libsndfile-dev libasound2-dev mysql-server python-mysqldb Upload scikits.audiolab-0.11.0 ec2:~\/scikits.audiolab-0.11.0$ sudo python setup.py install ec2:~$ sudo rm -rf scikits.audiolab-0.11.0 ec2:~$ nano .ipython\/ipy_user_conf.py ip.ex('import matplotlib; matplotlib.use(\"Agg\"); import scipy, pylab, scipy.signal as sig, scipy.linalg as lin, scipy.sparse as spar, os, sys, MySQLdb, boto; from scikits import audiolab') import ipy_greedycompleter import ipy_autoreload ```", "response":"1: Installing third party packages to hosted spaces You can indeed install third party packages to your hosted space. If it's a pure python package, all that's needed is to unpack it to a directory and then add that directory to your PYTHONPATH environment variable or sys.path. This can be tiring to do often, and won't work easily for compiled modules. If you have shell access to your python host, the excellent virtualenv package allows you to do set up a private python environment with its own libraries. To set up your virtualenv, you'll do something like this at the shell: ``` $ virtualenv $HOME\/my_python $ $HOME\/my_python\/bin\/easy_install numpy ``` You can keep running easy_install for anything else you want to install in your personal python environment. 
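A quick diagnostic (a small sketch, not part of the original answer) confirms that scripts are really picking up the private environment rather than the system installation: ``` import sys import numpy # Both of these should point inside the virtualenv (e.g. under my_python) # rather than into the system Python installation. print(sys.executable) print(numpy.__file__) print(numpy.__version__) ``` If numpy.__file__ still points at the system site-packages, the wrong interpreter or import path is being used.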
Now, when you write your python scripts, you will want to use your private python interpreter, if that is possible: ``` #!\/home\/myuser\/my_python\/bin\/python import numpy # script here ``` If your python env cannot be specified (such as if run by mod_wsgi), you will need to add it to the import path: ``` import sys sys.path.insert(0, '\/home\/myuser\/my_python\/lib\/python2.5\/site-packages') import numpy ``` 2: Hosting sites with numpy I can't think of any hosting sites offhand which offer numpy pre-installed. However, Dreamhost\/Bluehost for sharedhosts provide SSH access, and with shell access you can install numpy using the methods I described above. Any Virtual Private Server such as Linode\/Slicehost will allow you to install whatever you desire, as well. 3: AppEngine As mentioned above, AppEngine will not allow you to install C extensions (but pure python ones do work) so it's unlikely numpy will work for you on there, since I suspect some of its features use C speedups.", "best_answers_score":0.8, "library_name":"scipy", "question_url":"https:\/\/stackoverflow.com\/questions\/2080110\/python-web-hosting-numpy-matplotlib-scientific-computing", "best_answers_votes":18, "question_length":3188, "response_length":1751 }, { "question":"Trouble installing scipy in virtualenv on a amazon ec2 linux micro instance I have successfully installed scipy in the default python compiler on an amazon ec2 micro instance (Ubuntu 13.04). However i am not able to install scipy in a virtualenv. pip install scipy ends with this error ``` scipy\/sparse\/sparsetools\/csr_wrap.cxx: In function \u2018void init_csr()\u2019: scipy\/sparse\/sparsetools\/csr_wrap.cxx:73303:21: warning: variable \u2018md\u2019 set but not used [-Wunused-but-set-variable] c++: internal compiler error: Killed (program cc1plus) Please submit a full bug report, with preprocessed source if appropriate. See for instructions. ---------------------------------------- Cleaning up... 
Command \/home\/ubuntu\/pnr\/bin\/python -c \"import setuptools;__file__='\/home\/ubuntu\/pnr\/build\/scipy\/setup.py';exec(compile(open(__file__).read().replace('\\r\\n', '\\n'), __file__, 'exec'))\" install --record \/tmp\/pip-t8Drvd-record\/install-record.txt --single-version-externally-managed --install-headers \/home\/ubuntu\/pnr\/include\/site\/python2.7 failed with error code -9 in \/home\/ubuntu\/pnr\/build\/scipy ``` and ``` Traceback (most recent call last): File \"\/home\/ubuntu\/pnr\/bin\/pip\", line 9, in load_entry_point('pip==1.4.1', 'console_scripts', 'pip')() File \"\/home\/ubuntu\/pnr\/local\/lib\/python2.7\/site-packages\/pip\/__init__.py\", line 148, in main return command.main(args[1:], options) File \"\/home\/ubuntu\/pnr\/local\/lib\/python2.7\/site-packages\/pip\/basecommand.py\", line 169, in main text = '\\n'.join(complete_log) UnicodeDecodeError: 'ascii' codec can't decode byte 0xe2 in position 53: ordinal not in range(128) ``` Before anyone asks. pip freeze for default compiler returns ``` Cheetah==2.4.4 Landscape-Client==12.12 M2Crypto==0.21.1 PAM==0.4.2 Pillow==2.0.0 PyYAML==3.10 Twisted-Core==12.3.0 Twisted-Names==12.3.0 Twisted-Web==12.3.0 apt-xapian-index==0.45 argparse==1.2.1 boto==2.3.0 chardet==2.0.1 cloud-init==0.7.2 configobj==4.7.2 distribute==0.6.34 distro-info==0.10 euca2ools==2.1.1 numpy==1.7.1 oauth==1.0.1 paramiko==1.7.7.1 prettytable==0.6.1 pyOpenSSL==0.13 pycrypto==2.6 pycurl==7.19.0 pygobject==3.8.0 pyserial==2.6 python-apt==0.8.8ubuntu6 python-debian==0.1.21-nmu2ubuntu1 requests==1.1.0 scipy==0.11.0 six==1.2.0 ssh-import-id==3.14 urllib3==1.5 virtualenv==1.10.1 wsgiref==0.1.2 zope.interface==4.0.5 ``` pip freeze command for virtualenv returns ``` Cython==0.19.2 Flask==0.10.1 Flask-Bootstrap==3.0.0.1 Flask-WTF==0.9.3 Jinja2==2.7.1 MarkupSafe==0.18 WTForms==1.0.5 Werkzeug==0.9.4 argparse==1.2.1 beautifulsoup4==4.3.2 itsdangerous==0.23 numpy==1.7.1 pymongo==2.6.2 requests==2.0.0 wsgiref==0.1.2 ```", "response":"One 
solution is to temporarily enable swap on your micro instance. As described at this SO post, enable 1gb swap via: ``` sudo \/bin\/dd if=\/dev\/zero of=\/var\/swap.1 bs=1M count=1024 sudo \/sbin\/mkswap \/var\/swap.1 sudo \/sbin\/swapon \/var\/swap.1 ``` Once swap is on, install scipy via pip: ``` sudo apt-get install -y libatlas-base-dev gfortran python-dev build-essential g++ sudo pip install numpy sudo pip install scipy ``` Once scipy successfully installs, you can disable it via: ``` sudo swapoff \/var\/swap.1 sudo rm \/var\/swap.1 ```", "best_answers_score":0.8, "library_name":"scipy", "question_url":"https:\/\/stackoverflow.com\/questions\/19595944\/trouble-installing-scipy-in-virtualenv-on-a-amazon-ec2-linux-micro-instance", "best_answers_votes":101, "question_length":2599, "response_length":530 }, { "question":"Fast tensor rotation with NumPy At the heart of an application (written in Python and using NumPy) I need to rotate a 4th order tensor. Actually, I need to rotate a lot of tensors many times and this is my bottleneck. My naive implementation (below) involving eight nested loops seems to be quite slow, but I cannot see a way to leverage NumPy's matrix operations and, hopefully, speed things up. I've a feeling I should be using np.tensordot, but I don't see how. Mathematically, elements of the rotated tensor, T', are given by: T'ijkl = \u03a3 gia gjb gkc gld Tabcd with the sum being over the repeated indices on the right hand side. T and Tprime are 3*3*3*3 NumPy arrays and the rotation matrix g is a 3*3 NumPy array. My slow implementation (taking ~0.04 seconds per call) is below. 
``` #!\/usr\/bin\/env python import numpy as np def rotT(T, g): Tprime = np.zeros((3,3,3,3)) for i in range(3): for j in range(3): for k in range(3): for l in range(3): for ii in range(3): for jj in range(3): for kk in range(3): for ll in range(3): gg = g[ii,i]*g[jj,j]*g[kk,k]*g[ll,l] Tprime[i,j,k,l] = Tprime[i,j,k,l] + \\ gg*T[ii,jj,kk,ll] return Tprime if __name__ == \"__main__\": T = np.array([[[[ 4.66533067e+01, 5.84985000e-02, -5.37671310e-01], [ 5.84985000e-02, 1.56722231e+01, 2.32831900e-02], [ -5.37671310e-01, 2.32831900e-02, 1.33399259e+01]], [[ 4.60051700e-02, 1.54658176e+01, 2.19568200e-02], [ 1.54658176e+01, -5.18223500e-02, -1.52814920e-01], [ 2.19568200e-02, -1.52814920e-01, -2.43874100e-02]], [[ -5.35577630e-01, 1.95558600e-02, 1.31108757e+01], [ 1.95558600e-02, -1.51342210e-01, -6.67615000e-03], [ 1.31108757e+01, -6.67615000e-03, 6.90486240e-01]]], [[[ 4.60051700e-02, 1.54658176e+01, 2.19568200e-02], [ 1.54658176e+01, -5.18223500e-02, -1.52814920e-01], [ 2.19568200e-02, -1.52814920e-01, -2.43874100e-02]], [[ 1.57414726e+01, -3.86167500e-02, -1.55971950e-01], [ -3.86167500e-02, 4.65601977e+01, -3.57741000e-02], [ -1.55971950e-01, -3.57741000e-02, 1.34215636e+01]], [[ 2.58256300e-02, -1.49072770e-01, -7.38843000e-03], [ -1.49072770e-01, -3.63410500e-02, 1.32039847e+01], [ -7.38843000e-03, 1.32039847e+01, 1.38172700e-02]]], [[[ -5.35577630e-01, 1.95558600e-02, 1.31108757e+01], [ 1.95558600e-02, -1.51342210e-01, -6.67615000e-03], [ 1.31108757e+01, -6.67615000e-03, 6.90486240e-01]], [[ 2.58256300e-02, -1.49072770e-01, -7.38843000e-03], [ -1.49072770e-01, -3.63410500e-02, 1.32039847e+01], [ -7.38843000e-03, 1.32039847e+01, 1.38172700e-02]], [[ 1.33639532e+01, -1.26331100e-02, 6.84650400e-01], [ -1.26331100e-02, 1.34222177e+01, 1.67851800e-02], [ 6.84650400e-01, 1.67851800e-02, 4.89151396e+01]]]]) g = np.array([[ 0.79389393, 0.54184237, 0.27593346], [-0.59925749, 0.62028664, 0.50609776], [ 0.10306737, -0.56714313, 0.8171449 ]]) for i in range(100): Tprime 
= rotT(T,g) ``` Is there a way to make this go faster? Making the code generalise to other ranks of tensor would be useful, but is less important.", "response":"To use tensordot, compute the outer product of the g tensors: ``` def rotT(T, g): gg = np.outer(g, g) gggg = np.outer(gg, gg).reshape(4 * g.shape) axes = ((0, 2, 4, 6), (0, 1, 2, 3)) return np.tensordot(gggg, T, axes) ``` On my system, this is around seven times faster than Sven's solution. If the g tensor doesn't change often, you can also cache the gggg tensor. If you do this and turn on some micro-optimizations (inlining the tensordot code, no checks, no generic shapes), you can still make it two times faster: ``` def rotT(T, gggg): return np.dot(gggg.transpose((1, 3, 5, 7, 0, 2, 4, 6)).reshape((81, 81)), T.reshape(81, 1)).reshape((3, 3, 3, 3)) ``` Results of timeit on my home laptop (500 iterations): ``` Your original code: 19.471129179 Sven's code: 0.718412876129 My first code: 0.118047952652 My second code: 0.0690279006958 ``` The numbers on my work machine are: ``` Your original code: 9.77922987938 Sven's code: 0.137110948563 My first code: 0.0569641590118 My second code: 0.0308079719543 ```", "best_answers_score":0.8, "library_name":"scipy", "question_url":"https:\/\/stackoverflow.com\/questions\/4962606\/fast-tensor-rotation-with-numpy", "best_answers_votes":41, "question_length":2923, "response_length":1013 }, { "question":"Calculate Matrix Rank using scipy I'd like to calculate the mathematical rank of a matrix using scipy. The most obvious function numpy.rank calculates the dimension of an array (ie. scalars have dimension 0, vectors 1, matrices 2, etc...). I am aware that the numpy.linalg.lstsq module has this capability, but I was wondering if such a fundamental operation is built into the matrix class somewhere. 
Here is an explicit example: ``` from numpy import matrix, rank A = matrix([[1,3,7],[2,8,3],[7,8,1]]) print rank(A) ``` This gives 2 (the number of array dimensions), where I'm looking for an answer of 3 (the mathematical rank).", "response":"Numpy provides numpy.linalg.matrix_rank(): ``` >>> import numpy >>> numpy.__version__ '1.5.1' >>> A = numpy.matrix([[1,3,7],[2,8,3],[7,8,1]]) >>> numpy.linalg.matrix_rank(A) 3 ```", "best_answers_score":0.8, "library_name":"scipy", "question_url":"https:\/\/stackoverflow.com\/questions\/2473983\/calculate-matrix-rank-using-scipy", "best_answers_votes":62, "question_length":586, "response_length":179 }, { "question":"What is a \"scalar\" in NumPy? The documentation states the purpose of scalars, such as the fact that conventional Python numbers like float and integer are too primitive, and therefore more complex data types are necessary. It also states certain kinds of scalars (data type hierarchy); as well as a couple of attributes of scalars. But it never gives a concrete definition of exactly what a scalar is in the context of Python. I want to get to the heart of the issue on this.
In the simplest terms possible, what is a Pythonic scalar?", "response":"A NumPy scalar is any object which is an instance of np.generic or whose type is in np.ScalarType: ```none In [12]: np.ScalarType Out[13]: (int, float, complex, long, bool, str, unicode, buffer, numpy.int16, numpy.float16, numpy.int8, numpy.uint64, numpy.complex192, numpy.void, numpy.uint32, numpy.complex128, numpy.unicode_, numpy.uint32, numpy.complex64, numpy.string_, numpy.uint16, numpy.timedelta64, numpy.bool_, numpy.uint8, numpy.datetime64, numpy.object_, numpy.int64, numpy.float96, numpy.int32, numpy.float64, numpy.int32, numpy.float32) ``` This definition comes from looking at the source code for np.isscalar: ``` def isscalar(num): if isinstance(num, generic): return True else: return type(num) in ScalarType ``` Note that you can test if something is a scalar by using np.isscalar: ```none >>> np.isscalar(3.1) True >>> np.isscalar([3.1]) False >>> np.isscalar(False) True ``` How do we know what we know? I like learning how people know what they know\u2014more than the answers themselves. So let me try to explain where the above answer comes from. Having the right tools can help you figure out things like this for yourself. I found this out by using IPython. Using its TAB-completion feature, typing ```none In [19]: import numpy as np In [20]: np.[TAB] ``` causes IPython to display all variables in the np module namespace. A search for the string \"scalar\" will lead you to np.ScalarType and np.isscalar. Typing ```none In [20]: np.isscalar? ``` (note the question mark at the end) prompts IPython to show you where np.isscalar is defined: ```none File: \/data1\/unutbu\/.virtualenvs\/dev\/lib\/python2.7\/site-packages\/numpy\/core\/numeric.py ``` which is how I got to the definition of isscalar. 
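A few concrete checks make the definition tangible (behavior assumed from the isscalar definition quoted above, against a reasonably recent NumPy): ``` import numpy as np print(np.isscalar(3.1)) # True: plain Python float, in np.ScalarType print(np.isscalar(np.float64(3.1))) # True: NumPy scalar print(isinstance(np.float64(3.1), np.generic)) # True: instance of np.generic print(np.isscalar(np.array(3.1))) # False: a 0-d array is not a scalar print(np.isscalar([3.1])) # False: a sequence is not a scalar ``` Note in particular the 0-d array case: np.array(3.1) holds a single value but is an ndarray, not a scalar, which trips up many first-time users.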
Alternatively, the NumPy documentation for isscalar has a link to the source code as well.", "best_answers_score":0.8, "library_name":"scipy", "question_url":"https:\/\/stackoverflow.com\/questions\/21968643\/what-is-a-scalar-in-numpy", "best_answers_votes":54, "question_length":533, "response_length":1799 }, { "question":"sorting points to form a continuous line I have a list of (x,y)-coordinates that represent a line skeleton. The list is obtained directly from a binary image: ``` import numpy as np list=np.where(img_skeleton>0) ``` Now the points in the list are sorted according to their position in the image along one of the axes. I would like to sort the list such that the order represents a smooth path along the line. (This is currently not the case where the line curves back). Subsequently, I want to fit a spline to these points. A similar problem has been described and solved using arcPy here. Is there a convenient way to achieve this using python, numpy, scipy, openCV (or another library?) below is an example image. it results in a list of 59 (x,y)-coordinates. when I send the list to scipy's spline fitting routine, I am running into a problem because the points aren't 'ordered' on the line:", "response":"I apologize for the long answer in advance :P (the problem is not that simple). Lets start by rewording the problem. Finding a line that connects all the points, can be reformulated as a shortest path problem in a graph, where (1) the graph nodes are the points in the space, (2) each node is connected to its 2 nearest neighbors, and (3) the shortest path passes through each of the nodes only once. That last constrain is a very important (and quite hard one to optimize). Essentially, the problem is to find a permutation of length N, where the permutation refers to the order of each of the nodes (N is the total number of nodes) in the path. Finding all the possible permutations and evaluating their cost is too expensive (there are N! 
permutations if I'm not wrong, which is far too many for any non-trivial problem). Below I propose an approach that finds the N best permutations (the optimal permutation for each of the N points) and then finds the permutation (from those N) that minimizes the error\/cost. 1. Create a random problem with unordered points Now, let's start by creating a sample problem: ``` import matplotlib.pyplot as plt import numpy as np x = np.linspace(0, 2 * np.pi, 100) y = np.sin(x) plt.plot(x, y) plt.show() ``` And here, the unsorted version of the points [x, y], to simulate random points in space connected in a line: ``` idx = np.random.permutation(x.size) x = x[idx] y = y[idx] plt.plot(x, y) plt.show() ``` The problem is then to order those points to recover their original order so that the line is plotted properly. 2. Create 2-NN graph between nodes We can first rearrange the points in a [N, 2] array: ``` points = np.c_[x, y] ``` Then, we can start by creating a nearest neighbour graph to connect each of the nodes to its 2 nearest neighbors: ``` from sklearn.neighbors import NearestNeighbors clf = NearestNeighbors(n_neighbors=2).fit(points) G = clf.kneighbors_graph() ``` G is a sparse N x N matrix, where each row represents a node, and the non-zero elements of the columns hold the euclidean distance to those points. We can then use networkx to construct a graph from this sparse matrix: ``` import networkx as nx T = nx.from_scipy_sparse_array(G) ``` 3. Find shortest path from source And here begins the magic: we can extract the paths using dfs_preorder_nodes, which will essentially create a path through all the nodes (passing through each of them exactly once) given a starting node (if not given, the 0 node will be selected). ``` order = list(nx.dfs_preorder_nodes(T, 0)) xx = x[order] yy = y[order] plt.plot(xx, yy) plt.show() ``` Well, it's not too bad, but we can notice that the reconstruction is not optimal.
This is because the point 0 in the unordered list lies in the middle of the line, which is why it first goes in one direction, and then comes back and finishes in the other direction. 4. Find the path with smallest cost from all sources So, in order to obtain the optimal order, we can just get the best order for all the nodes: ``` paths = [list(nx.dfs_preorder_nodes(T, i)) for i in range(len(points))] ``` Now that we have the optimal path starting from each of the N = 100 nodes, we can discard them and find the one that minimizes the distances between the connections (optimization problem): ``` mindist = np.inf minidx = 0 for i in range(len(points)): p = paths[i] # order of nodes ordered = points[p] # ordered nodes # find cost of that order by the sum of squared euclidean distances between consecutive points (i) and (i+1) cost = (((ordered[:-1] - ordered[1:])**2).sum(1)).sum() if cost < mindist: mindist = cost minidx = i ``` The points are ordered for each of the optimal paths, and then a cost is computed (by summing the squared euclidean distance between consecutive points i and i+1). If the path starts at the start or end point, it will have the smallest cost as all the nodes will be consecutive. On the other hand, if the path starts at a node that lies in the middle of the line, the cost will be very high at some point, as it will need to travel from the end (or beginning) of the line to the initial position to explore the other direction. The path that minimizes that cost is the one starting at an optimal point.
``` opt_order = paths[minidx] ``` Now, we can reconstruct the order properly: ``` xx = x[opt_order] yy = y[opt_order] plt.plot(xx, yy) plt.show() ```", "best_answers_score":0.8, "library_name":"scipy", "question_url":"https:\/\/stackoverflow.com\/questions\/37742358\/sorting-points-to-form-a-continuous-line", "best_answers_votes":42, "question_length":894, "response_length":4316 }, { "question":"Ctrl-C crashes Python after importing scipy.stats I'm running 64-bit Python 2.7.3 on Win7 64-bit. I can reliably crash the Python interpreter by doing this: ``` >>> from scipy import stats >>> import time >>> time.sleep(3) ``` and pressing Control-C during the sleep. A KeyboardInterrupt is not raised; the interpreter crashes. The following is printed: ``` forrtl: error (200): program aborting due to control-C event Image PC Routine Line Source libifcoremd.dll 00000000045031F8 Unknown Unknown Unknown libifcoremd.dll 00000000044FC789 Unknown Unknown Unknown libifcoremd.dll 00000000044E8583 Unknown Unknown Unknown libifcoremd.dll 000000000445725D Unknown Unknown Unknown libifcoremd.dll 00000000044672A6 Unknown Unknown Unknown kernel32.dll 0000000077B74AF3 Unknown Unknown Unknown kernel32.dll 0000000077B3F56D Unknown Unknown Unknown ntdll.dll 0000000077C73281 Unknown Unknown Unknown ``` This makes it impossible to interrupt long-running scipy calculations. Googling for \"forrtl\" and the like, I see suggestions that this kind of problem is due to use of a Fortran library that overrides Ctrl-C handling. I don't see a bug on the Scipy tracker, but given that Scipy is a library for use with Python, I would consider this a bug. It breaks Python's handling of Ctrl-C. Is there any workaround for this? Edit: Following @cgohlke's suggestion I tried to add my own handler after importing scipy. This question about a related issue shows that adding a signal handler doesn't work.
I tried using the Windows API SetConsoleCtrlHandler function via pywin32: ``` from scipy import stats import win32api def doSaneThing(sig, func=None): print \"Here I am\" raise KeyboardInterrupt win32api.SetConsoleCtrlHandler(doSaneThing, 1) ``` After this, hitting Ctrl-C prints \"Here I am\", but Python still crashes with the forrtl error. Sometimes I also get a message saying \"ConsoleCtrlHandler function failed\", which quickly disappears. If I run this in IPython, I can see a normal Python KeyboardInterrupt traceback before the forrtl error. I also see a normal Python traceback followed by the forrtl error if I raise some other error instead of KeyboardInterrupt (e.g., ValueError): ``` ValueError Traceback (most recent call last) in doSaneThing(sig, func) 3 def doSaneThing(sig, func=None): 4 print \"Here I am\" ----> 5 raise ValueError 6 win32api.SetConsoleCtrlHandler(doSaneThing, 1) ValueError: forrtl: error (200): program aborting due to control-C event [etc.] ``` It seems that whatever the underlying handler is doing, it's not just trapping Ctrl-C directly, but is reacting to the error condition (ValueError) and crashing itself. Is there any way to eliminate this?", "response":"Here's a variation on your posted solution that may work. Maybe there's a better way to solve this problem -- or maybe even avoid it all together by setting an environment variable that tells the DLL to skip installing a handler. Hopefully this helps until you find a better way. Both the time module (lines 868-876) and _multiprocessing module (lines 312-321) call SetConsoleCtrlHandler. In the case of the time module, its console control handler sets a Windows event, hInterruptEvent. For the main thread, time.sleep waits on this event via WaitForSingleObject(hInterruptEvent, ul_millis), where ul_millis is the number of milliseconds to sleep unless interrupted by Ctrl+C. 
Since the handler that you've installed returns True, the time module's handler never gets called to set hInterruptEvent, which means sleep cannot be interrupted. I tried using imp.init_builtin('time') to reinitialize the time module, but apparently SetConsoleCtrlHandler ignores the 2nd call. It seems the handler has to be removed and then reinserted. Unfortunately, the time module doesn't export a function for that. So, as a kludge, just make sure you import the time module after you install your handler. Since importing scipy also imports time, you need to pre-load libifcoremd.dll using ctypes to get the handlers in the right order. Finally, add a call to thread.interrupt_main to make sure Python's SIGINT handler gets called[1]. For example: ```python import os import imp import ctypes import thread import win32api # Load the DLL manually to ensure its handler gets # set before our handler. basepath = imp.find_module('numpy')[1] ctypes.CDLL(os.path.join(basepath, 'core', 'libmmd.dll')) ctypes.CDLL(os.path.join(basepath, 'core', 'libifcoremd.dll')) # Now set our handler for CTRL_C_EVENT. Other control event # types will chain to the next handler. def handler(dwCtrlType, hook_sigint=thread.interrupt_main): if dwCtrlType == 0: # CTRL_C_EVENT hook_sigint() return 1 # don't chain to the next handler return 0 # chain to the next handler win32api.SetConsoleCtrlHandler(handler, 1) >>> import time >>> from scipy import stats >>> time.sleep(10) Traceback (most recent call last): File \"\", line 1, in KeyboardInterrupt ``` [1] interrupt_main calls PyErr_SetInterrupt. This trips Handlers[SIGINT] and calls Py_AddPendingCall to add checksignals_witharg. In turn this calls PyErr_CheckSignals. Since Handlers[SIGINT] is tripped, this calls Handlers[SIGINT].func. 
Finally, if func is signal.default_int_handler, you'll get a KeyboardInterrupt exception.", "best_answers_score":0.8, "library_name":"scipy", "question_url":"https:\/\/stackoverflow.com\/questions\/15457786\/ctrl-c-crashes-python-after-importing-scipy-stats", "best_answers_votes":23, "question_length":2667, "response_length":2544 }, { "question":"interpolate 3D volume with numpy and or scipy I am extremely frustrated because after several hours I can't seem to be able to do a seemingly easy 3D interpolation in python. In Matlab all I had to do was ``` Vi = interp3(x,y,z,V,xi,yi,zi) ``` What is the exact equivalent of this using scipy's ndimage.map_coordinate or other numpy methods? Thanks", "response":"In scipy 0.14 or later, there is a new function scipy.interpolate.RegularGridInterpolator which closely resembles interp3. The MATLAB command Vi = interp3(x,y,z,V,xi,yi,zi) would translate to something like: ``` from numpy import array from scipy.interpolate import RegularGridInterpolator as rgi my_interpolating_function = rgi((x,y,z), V) Vi = my_interpolating_function(array([xi,yi,zi]).T) ``` Here is a full example demonstrating both; it will help you understand the exact differences... MATLAB CODE: ``` x = linspace(1,4,11); y = linspace(4,7,22); z = linspace(7,9,33); V = zeros(22,11,33); for i=1:11 for j=1:22 for k=1:33 V(j,i,k) = 100*x(i) + 10*y(j) + z(k); end end end xq = [2,3]; yq = [6,5]; zq = [8,7]; Vi = interp3(x,y,z,V,xq,yq,zq); ``` The result is Vi=[268 357] which is indeed the value at those two points (2,6,8) and (3,5,7). SCIPY CODE: ``` from scipy.interpolate import RegularGridInterpolator from numpy import linspace, zeros, array x = linspace(1,4,11) y = linspace(4,7,22) z = linspace(7,9,33) V = zeros((11,22,33)) for i in range(11): for j in range(22): for k in range(33): V[i,j,k] = 100*x[i] + 10*y[j] + z[k] fn = RegularGridInterpolator((x,y,z), V) pts = array([[2,6,8],[3,5,7]]) print(fn(pts)) ``` Again it's [268,357]. 
So you see some slight differences: Scipy uses x,y,z index order while MATLAB uses y,x,z (strangely); In Scipy you define a function in a separate step and when you call it, the coordinates are grouped like (x1,y1,z1),(x2,y2,z2),... while MATLAB uses (x1,x2,...),(y1,y2,...),(z1,z2,...). Other than that, the two are similar and equally easy to use.", "best_answers_score":0.8, "library_name":"scipy", "question_url":"https:\/\/stackoverflow.com\/questions\/21836067\/interpolate-3d-volume-with-numpy-and-or-scipy", "best_answers_votes":36, "question_length":348, "response_length":1601 }, { "question":"Fit sigmoid function (\"S\" shape curve) to data using Python I'm trying to fit a sigmoid function to some data I have but I keep getting: ValueError: Unable to determine number of fit parameters. My data looks like this: My code is: ``` from scipy.optimize import curve_fit def sigmoid(x): return (1\/(1+np.exp(-x))) popt, pcov = curve_fit(sigmoid, xdata, ydata, method='dogbox') ``` Then I get: ``` --------------------------------------------------------------------------- ValueError Traceback (most recent call last) in 2 return (1\/(1+np.exp(-x))) 3 ----> 4 popt, pcov = curve_fit(sigmoid, xdata, ydata, method='dogbox') ~\\Anaconda3\\lib\\site-packages\\scipy\\optimize\\minpack.py in curve_fit(f, xdata, ydata, p0, sigma, absolute_sigma, check_finite, bounds, method, jac, **kwargs) 685 args, varargs, varkw, defaults = _getargspec(f) 686 if len(args) < 2: 687 raise ValueError(\"Unable to determine number of fit parameters.\") 688 n = len(args) - 1 689 else: ValueError: Unable to determine number of fit parameters. ``` I'm not sure why this does not work, it seems like a trivial action--> fit a curve to some point. The desired curve would look like this: Sorry for the graphics.. I did it in PowerPoint... How can I find the best sigmoid (\"S\" shape) curve? 
UPDATE Thanks to @Brenlla I've changed my code to: ``` def sigmoid(k,x,x0): return (1 \/ (1 + np.exp(-k*(x-x0)))) popt, pcov = curve_fit(sigmoid, xdata, ydata, method='dogbox') ``` Now I do not get an error, but the curve is not as desired: ``` x = np.linspace(0, 1600, 1000) y = sigmoid(x, *popt) plt.plot(xdata, ydata, 'o', label='data') plt.plot(x,y, label='fit') plt.ylim(0, 1.3) plt.legend(loc='best') ``` and the result is: How can I improve it so it will fit the data better? UPDATE2 The code is now: ``` def sigmoid(x, L,x0, k, b): y = L \/ (1 + np.exp(-k*(x-x0)))+b ``` But the result is still...", "response":"After great help from @Brenlla the code was modified to: ``` def sigmoid(x, L ,x0, k, b): y = L \/ (1 + np.exp(-k*(x-x0))) + b return (y) p0 = [max(ydata), np.median(xdata),1,min(ydata)] # this is an mandatory initial guess popt, pcov = curve_fit(sigmoid, xdata, ydata,p0, method='dogbox') ``` The parameters optimized are L, x0, k, b, who are initially assigned in p0, the point the optimization starts from. L is responsible for scaling the output range from [0,1] to [0,L] b adds bias to the output and changes its range from [0,L] to [b,L+b] k is responsible for scaling the input, which remains in (-inf,inf) x0 is the point in the middle of the Sigmoid, i.e. the point where Sigmoid should originally output the value 1\/2 [since if x=x0, we get 1\/(1+exp(0)) = 1\/2]. And the result:", "best_answers_score":0.8, "library_name":"scipy", "question_url":"https:\/\/stackoverflow.com\/questions\/55725139\/fit-sigmoid-function-s-shape-curve-to-data-using-python", "best_answers_votes":50, "question_length":1859, "response_length":786 }, { "question":"scipy, lognormal distribution - parameters I want to fit lognormal distribution to my data, using python scipy.stats.lognormal.fit. According to the manual, fit returns shape, loc, scale parameters. But, lognormal distribution normally needs only two parameters: mean and standard deviation. 
How to interpret the results from scipy fit function? How to get mean and std.dev.?", "response":"The distributions in scipy are coded in a generic way wrt two parameter location and scale so that location is the parameter (loc) which shifts the distribution to the left or right, while scale is the parameter which compresses or stretches the distribution. For the two parameter lognormal distribution, the \"mean\" and \"std dev\" correspond to log(scale) and shape (you can let loc=0). The following illustrates how to fit a lognormal distribution to find the two parameters of interest: ``` In [56]: import numpy as np In [57]: from scipy import stats In [58]: logsample = stats.norm.rvs(loc=10, scale=3, size=1000) # logsample ~ N(mu=10, sigma=3) In [59]: sample = np.exp(logsample) # sample ~ lognormal(10, 3) In [60]: shape, loc, scale = stats.lognorm.fit(sample, floc=0) # hold location to 0 while fitting In [61]: shape, loc, scale Out[61]: (2.9212650122639419, 0, 21318.029350592606) In [62]: np.log(scale), shape # mu, sigma Out[62]: (9.9673084420467362, 2.9212650122639419) ```", "best_answers_score":0.8, "library_name":"scipy", "question_url":"https:\/\/stackoverflow.com\/questions\/8747761\/scipy-lognormal-distribution-parameters", "best_answers_votes":48, "question_length":375, "response_length":987 }, { "question":"Evaluate sympy expression from an array of values I'm experimenting with sympy and I've hit upon an issue I can't work out. Using scipy I can write an expression and evaluate it for an array of x values as follows: ``` import scipy xvals = scipy.arange(-100,100,0.1) f = lambda x: x**2 f(xvals) ``` Using sympy I can write the same expression as follows: ``` import sympy x = sympy.symbols('x') g = x**2 ``` I can evaluate this expression for a single value by doing the following: ``` g.evalf(subs={x:10}) ``` However I can't work out how to evaluate it for an array of x values, like I did with scipy. 
How would I do this?", "response":"First of all, at the moment SymPy does not guarantee support for numpy arrays which is what you want in this case. Check this bug report http:\/\/code.google.com\/p\/sympy\/issues\/detail?id=537 Second, If you want to evaluate something numerically for many values SymPy is not the best choice (it is a symbolic library after all). Use numpy and scipy. However, a valid reason to evaluate something numerically will be that deriving the expression to be evaluated was hard so you derive it in SymPy and then evaluate it in NumPy\/SciPy\/C\/Fortran. To translate an expression to numpy just use ``` from sympy.utilities.lambdify import lambdify func = lambdify(x, big_expression_containing_x,'numpy') # returns a numpy-ready function numpy_array_of_results = func(numpy_array_of_arguments) ``` Check the docstring of lambdify for more details. Be aware that lambdify still has some issues and may need a rewrite. And just as a side note, if you want to evaluate the expressions really many times, you can use the codegen\/autowrap module from sympy in order to create fortran or C code that is wrapped and callable from python. EDIT: An updates list of ways to do numerics in SymPy can be found on the wiki https:\/\/github.com\/sympy\/sympy\/wiki\/Philosophy-of-Numerics-and-Code-Generation-in-SymPy", "best_answers_score":0.8, "library_name":"scipy", "question_url":"https:\/\/stackoverflow.com\/questions\/10678843\/evaluate-sympy-expression-from-an-array-of-values", "best_answers_votes":52, "question_length":624, "response_length":1283 }, { "question":"pandas columns correlation with statistical significance What is the best way, given a pandas dataframe, df, to get the correlation between its columns df.1 and df.2? I do not want the output to count rows with NaN, which pandas built-in correlation does. But I also want it to output a pvalue or a standard error, which the built-in does not. 
SciPy seems to get caught up by the NaNs, though I believe it does report significance. Data example: ``` 1 2 0 2 NaN 1 NaN 1 2 1 2 3 -4 3 4 1.3 1 5 NaN NaN ```", "response":"To calculate all the p-values at once, you can use calculate_pvalues function (code below): ``` df = pd.DataFrame({'A':[1,2,3], 'B':[2,5,3], 'C':[5,2,1], 'D':['text',2,3] }) calculate_pvalues(df) ``` The output is similar to the corr() (but with p-values): ``` A B C A 0 0.7877 0.1789 B 0.7877 0 0.6088 C 0.1789 0.6088 0 ``` Details: Column D is automatically ignored as it contains text. p-values are rounded to 4 decimals You can subset to indicate exact columns: calculate_pvalues(df[['A','B','C']] Following is the code of the function: ``` from scipy.stats import pearsonr import pandas as pd def calculate_pvalues(df): dfcols = pd.DataFrame(columns=df.columns) pvalues = dfcols.transpose().join(dfcols, how='outer') for r in df.columns: for c in df.columns: tmp = df[df[r].notnull() & df[c].notnull()] pvalues[r][c] = round(pearsonr(tmp[r], tmp[c])[1], 4) return pvalues ```", "best_answers_score":0.8, "library_name":"scipy", "question_url":"https:\/\/stackoverflow.com\/questions\/25571882\/pandas-columns-correlation-with-statistical-significance", "best_answers_votes":63, "question_length":504, "response_length":880 }, { "question":"Map each list value to its corresponding percentile I'd like to create a function that takes a (sorted) list as its argument and outputs a list containing each element's corresponding percentile. For example, fn([1,2,3,4,17]) returns [0.0, 0.25, 0.50, 0.75, 1.00]. Can anyone please either: Help me correct my code below? OR Offer a better alternative than my code for mapping values in a list to their corresponding percentiles? 
My current code: ``` def median(mylist): length = len(mylist) if not length % 2: return (mylist[length \/ 2] + mylist[length \/ 2 - 1]) \/ 2.0 return mylist[length \/ 2] ############################################################################### # PERCENTILE FUNCTION ############################################################################### def percentile(x): \"\"\" Find the correspoding percentile of each value relative to a list of values. where x is the list of values Input list should already be sorted! \"\"\" # sort the input list # list_sorted = x.sort() # count the number of elements in the list list_elementCount = len(x) #obtain set of values from list listFromSetFromList = list(set(x)) # count the number of unique elements in the list list_uniqueElementCount = len(set(x)) # define extreme quantiles percentileZero = min(x) percentileHundred = max(x) # define median quantile mdn = median(x) # create empty list to hold percentiles x_percentile = [0.00] * list_elementCount # initialize unique count uCount = 0 for i in range(list_elementCount): if x[i] == percentileZero: x_percentile[i] = 0.00 elif x[i] == percentileHundred: x_percentile[i] = 1.00 elif x[i] == mdn: x_percentile[i] = 0.50 else: subList_elementCount = 0 for j in range(i): if x[j] listFromSetFromList[uCount]]) \/ list_elementCount) if i == 0: continue else: if x[i] == x[i-1]: continue else: uCount = uCount + 1 return x_percentile ``` Currently, if I submit percentile([1,2,3,4,17]), the list [0.0, 0.0, 0.5, 0.0, 1.0] is returned.", "response":"I think your example input\/output does not correspond to typical ways of calculating percentile. If you calculate the percentile as \"proportion of data points strictly less than this value\", then the top value should be 0.8 (since 4 of 5 values are less than the largest one). If you calculate it as \"percent of data points less than or equal to this value\", then the bottom value should be 0.2 (since 1 of 5 values equals the smallest one). 
Thus the percentiles would be [0, 0.2, 0.4, 0.6, 0.8] or [0.2, 0.4, 0.6, 0.8, 1]. Your definition seems to be \"the number of data points strictly less than this value, considered as a proportion of the number of data points not equal to this value\", but in my experience this is not a common definition (see for instance wikipedia). With the typical percentile definitions, the percentile of a data point is equal to its rank divided by the number of data points. (See for instance this question on Stats SE asking how to do the same thing in R.) Differences in how to compute the percentile amount to differences in how to compute the rank (for instance, how to rank tied values). The scipy.stats.percentileofscore function provides four ways of computing percentiles: ``` >>> x = [1, 1, 2, 2, 17] >>> [stats.percentileofscore(x, a, 'rank') for a in x] [30.0, 30.0, 70.0, 70.0, 100.0] >>> [stats.percentileofscore(x, a, 'weak') for a in x] [40.0, 40.0, 80.0, 80.0, 100.0] >>> [stats.percentileofscore(x, a, 'strict') for a in x] [0.0, 0.0, 40.0, 40.0, 80.0] >>> [stats.percentileofscore(x, a, 'mean') for a in x] [20.0, 20.0, 60.0, 60.0, 90.0] ``` (I used a dataset containing ties to illustrate what happens in such cases.) The \"rank\" method assigns tied groups a rank equal to the average of the ranks they would cover (i.e., a three-way tie for 2nd place gets a rank of 3 because it \"takes up\" ranks 2, 3 and 4). The \"weak\" method assigns a percentile based on the proportion of data points less than or equal to a given point; \"strict\" is the same but counts proportion of points strictly less than the given point. The \"mean\" method is the average of the latter two. As Kevin H. Lin noted, calling percentileofscore in a loop is inefficient since it has to recompute the ranks on every pass. 
However, these percentile calculations can be easily replicated using different ranking methods provided by scipy.stats.rankdata, letting you calculate all the percentiles at once: ``` >>> from scipy import stats >>> stats.rankdata(x, \"average\")\/len(x) array([ 0.3, 0.3, 0.7, 0.7, 1. ]) >>> stats.rankdata(x, 'max')\/len(x) array([ 0.4, 0.4, 0.8, 0.8, 1. ]) >>> (stats.rankdata(x, 'min')-1)\/len(x) array([ 0. , 0. , 0.4, 0.4, 0.8]) ``` In the last case the ranks are adjusted down by one to make them start from 0 instead of 1. (I've omitted \"mean\", but it could easily be obtained by averaging the results of the latter two methods.) I did some timings. With small data such as that in your example, using rankdata is somewhat slower than Kevin H. Lin's solution (presumably due to the overhead scipy incurs in converting things to numpy arrays under the hood) but faster than calling percentileofscore in a loop as in reptilicus's answer: ``` In [11]: %timeit [stats.percentileofscore(x, i) for i in x] 1000 loops, best of 3: 414 \u00b5s per loop In [12]: %timeit list_to_percentiles(x) 100000 loops, best of 3: 11.1 \u00b5s per loop In [13]: %timeit stats.rankdata(x, \"average\")\/len(x) 10000 loops, best of 3: 39.3 \u00b5s per loop ``` With a large dataset, however, the performance advantage of numpy takes effect and using rankdata is 10 times faster than Kevin's list_to_percentiles: ``` In [18]: x = np.random.randint(0, 10000, 1000) In [19]: %timeit [stats.percentileofscore(x, i) for i in x] 1 loops, best of 3: 437 ms per loop In [20]: %timeit list_to_percentiles(x) 100 loops, best of 3: 1.08 ms per loop In [21]: %timeit stats.rankdata(x, \"average\")\/len(x) 10000 loops, best of 3: 102 \u00b5s per loop ``` This advantage will only become more pronounced on larger and larger datasets.", "best_answers_score":0.8, "library_name":"scipy", "question_url":"https:\/\/stackoverflow.com\/questions\/12414043\/map-each-list-value-to-its-corresponding-percentile", 
"best_answers_votes":61, "question_length":1950, "response_length":4015 }, { "question":"Getting the r-squared value using curve_fit I am a beginner with both Python and all its libs. But I have managed to make a small program that works as intended. It takes a string, counts the occurrence of the different letters and plots them in a graph and then applies an equation and its curve. Now I would like to get the r-squared value of the fit. The overall idea is to compare different kinds of text from articles on different levels and see how strong the overall pattern is. It's just an exercise and I am new, so an easy to understand answer would be awesome. The code is: ``` import numpy as np import math import matplotlib.pyplot as plt from matplotlib.pylab import figure, show from scipy.optimize import curve_fit s=\"\"\"det, og deres unders\u00f8gelse af hvor meget det bliver brugt viser, at der kun er seks plugins, som benyttes af mere end 5 % af Chrome-brugere. Problemet med teknologien er, at den ivivuilv rduyd iytf ouyf ouy yg oyuf yd iyt erzypu zhrpyh dfgopaehr poargi ah pargoh ertao gehorg aeophgrpaoghraprbpaenbtibaeriber en af hoved\u00e5rsagerne til sikkerhedshuller, ustabilitet og deciderede nedbrud af browseren. Der vil ikke bve lukket for API'et ivivuilv rduyd iytf ouyf ouy yg oyuf yd iyt erzypu zhrpyh dfgopaehr poargi ah pargoh ertao gehorg aeophgrpaoghraprbpaenbtibaeriber en af hoved\u00e5rsagerne til sikkerhedshuller, ustabilitet og deciderede nedbrud af browseren. Der vil ikke blive lukket for API'et p\u00e5 \u00e9n gang, men det vil blive udfaset i l\u00f8bet af et \u00e5rs tid. De mest popul\u00e6re plugins f\u00e5r lov at fungere i udfasningsperioden; Det drejer sig om: Silverlight (anvendt af 15 % af Chrome-brugere sidste m\u00e5ned), Unity (9,1 %), Google Earth (9,1 %), Java (8,9%), Google Talk (8,7 %) og Facebook Video (6,0 %). 
Det er muligt at hvidliste andre plugins, men i slutningen af 2014 forventer udviklerne helt at lukke for brugen af dem.\"\"\" fordel=[] alf=['a','b','c','d','e','f','g','h','i','j','k','l','m','n','o','p','q','r','s','t','u','v','w','x','y','z','\u00e6','\u00f8','\u00e5'] i=1 p=0 fig = figure() ax1 = fig.add_subplot(1,2,0) for i in range(len(alf)): fordel.append(s.count(alf[i])) i=i+1 fordel=sorted(fordel,key=int,reverse=True) yFit=fordel xFit=[0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28] def func(x, a, b): return a * (b ** x) popt, pcov = curve_fit(func, xFit, yFit) t = np.arange(0.0, 30.0, 0.1) a=popt[0] b=popt[1] s = (a*b**t) ax1.plot(t,s) print(popt) yMax=math.ceil(fordel[0]+5) ax1.axis([0,30,0,yMax]) for i in range(0,int(len(alf))*2,2): fordel.insert(i,p) p=p+1 for i in range(0,int(len(fordel)\/2)): ax1.scatter(fordel[0],fordel[1]) fordel.pop(0) fordel.pop(0) plt.show() show() ```", "response":"Computing R^2: The R^2 value can be found using the mean of the data (y_bar), the total sum of squares (SS_tot), and the residual sum of squares (SS_res). Each is defined as: SS_res = sum_i (y_i - f_i)^2, SS_tot = sum_i (y_i - y_bar)^2, and R^2 = 1 - SS_res \/ SS_tot, where f_i is the function value at point x_i. Taken from Wikipedia. From scipy.optimize.curve_fit(): You can get the parameters (popt) from curve_fit() with popt, pcov = curve_fit(f, xdata, ydata) You can get the residual sum of squares (SS_res) with residuals = ydata - f(xdata, *popt) ss_res = numpy.sum(residuals**2) You can get the total sum of squares (SS_tot) with ss_tot = numpy.sum((ydata-numpy.mean(ydata))**2) And finally, the R^2-value with r_squared = 1 - (ss_res \/ ss_tot)", "best_answers_score":0.8, "library_name":"scipy", "question_url":"https:\/\/stackoverflow.com\/questions\/19189362\/getting-the-r-squared-value-using-curve-fit", "best_answers_votes":104, "question_length":2645, "response_length":602 }, { "question":"How to create a white image in Python? 
The following code creates a black picture: ``` from scipy.misc import imread,imsave from numpy import zeros img = zeros([100,100,3]) for y in range(len(img)): for x in range(len(img[0])): img[y,x] = [255,255,255] imsave(\"Result.jpg\",img) ``` I would have assumed it to be white. Edit in 2024: The code block is deprecated and won't run anymore.", "response":"Every color in an image is represented by one byte. So to create an image array, you should set its dtype to uint8. And, you don't need a for-loop to set every element to 255, you can use fill() method or slice index: ``` import numpy as np img = np.zeros([100,100,3],dtype=np.uint8) img.fill(255) # or img[:] = 255 ```", "best_answers_score":0.8, "library_name":"scipy", "question_url":"https:\/\/stackoverflow.com\/questions\/10465747\/how-to-create-a-white-image-in-python", "best_answers_votes":87, "question_length":384, "response_length":319 }, { "question":"Tutorial for scipy.cluster.hierarchy [closed] I'm trying to understand how to manipulate a hierarchy cluster but the documentation is too ... technical?... and I can't understand how it works. Is there any tutorial that can help me to start with, explaining step by step some simple tasks? Let's say I have the following data set: ``` a = np.array([[0, 0 ], [1, 0 ], [0, 1 ], [1, 1 ], [0.5, 0 ], [0, 0.5], [0.5, 0.5], [2, 2 ], [2, 3 ], [3, 2 ], [3, 3 ]]) ``` I can easily do the hierarchy cluster and plot the dendrogram: ``` z = linkage(a) d = dendrogram(z) ``` Now, how can I recover a specific cluster? 
Let's say the one with elements [0,1,2,4,5,6] in the dendrogram? How can I get back the values of those elements?", "response":"There are three steps in hierarchical agglomerative clustering (HAC): Quantify Data (metric argument) Cluster Data (method argument) Choose the number of clusters Doing ``` z = linkage(a) ``` will accomplish the first two steps. Since you did not specify any parameters it uses the standard values metric = 'euclidean' method = 'single' So z = linkage(a) will give you a single linked hierarchical agglomerative clustering of a. This clustering is kind of a hierarchy of solutions. From this hierarchy you get some information about the structure of your data. What you might do now is: Check which metric is appropriate, e.g. cityblock or chebychev will quantify your data differently (cityblock, euclidean and chebychev correspond to L1, L2, and L_inf norm) Check the different properties \/ behaviours of the methods (e.g. single, complete and average) Check how to determine the number of clusters, e.g. by reading the wiki about it Compute indices on the found solutions (clusterings) such as the silhouette coefficient (with this coefficient you get a feedback on the quality of how good a point\/observation fits to the cluster it is assigned to by the clustering). Different indices use different criteria to qualify a clustering. 
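To answer the \"recover a specific cluster\" part directly before the fuller demo: a minimal sketch (using the a array from the question; the cut height 0.6 is an assumption read off this particular dendrogram). fcluster flattens the hierarchy at a given cut, and boolean indexing then returns the member indices and their values:

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

a = np.array([[0, 0], [1, 0], [0, 1], [1, 1], [0.5, 0], [0, 0.5],
              [0.5, 0.5], [2, 2], [2, 3], [3, 2], [3, 3]])
z = linkage(a)  # single linkage, euclidean metric (the defaults)

# keep every merge below height 0.6 as one flat cluster
labels = fcluster(z, 0.6, criterion='distance')
member_idx = np.where(labels == labels[0])[0]
print(member_idx)     # [0 1 2 4 5 6]
print(a[member_idx])  # the coordinates of those points
```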
Here is something to start with ``` import numpy as np import scipy.cluster.hierarchy as hac import matplotlib.pyplot as plt a = np.array([[0.1, 2.5], [1.5, .4 ], [0.3, 1 ], [1 , .8 ], [0.5, 0 ], [0 , 0.5], [0.5, 0.5], [2.7, 2 ], [2.2, 3.1], [3 , 2 ], [3.2, 1.3]]) fig, axes23 = plt.subplots(2, 3) for method, axes in zip(['single', 'complete'], axes23): z = hac.linkage(a, method=method) # Plotting axes[0].plot(range(1, len(z)+1), z[::-1, 2]) knee = np.diff(z[::-1, 2], 2) axes[0].plot(range(2, len(z)), knee) num_clust1 = knee.argmax() + 2 knee[knee.argmax()] = 0 num_clust2 = knee.argmax() + 2 axes[0].text(num_clust1, z[::-1, 2][num_clust1-1], 'possible\\n<- knee point') part1 = hac.fcluster(z, num_clust1, 'maxclust') part2 = hac.fcluster(z, num_clust2, 'maxclust') clr = ['#2200CC' ,'#D9007E' ,'#FF6600' ,'#FFCC00' ,'#ACE600' ,'#0099CC' , '#8900CC' ,'#FF0000' ,'#FF9900' ,'#FFFF00' ,'#00CC01' ,'#0055CC'] for part, ax in zip([part1, part2], axes[1:]): for cluster in set(part): ax.scatter(a[part == cluster, 0], a[part == cluster, 1], color=clr[cluster]) m = '\\n(method: {})'.format(method) plt.setp(axes[0], title='Screeplot{}'.format(m), xlabel='partition', ylabel='{}\\ncluster distance'.format(m)) plt.setp(axes[1], title='{} Clusters'.format(num_clust1)) plt.setp(axes[2], title='{} Clusters'.format(num_clust2)) plt.tight_layout() plt.show() ``` Gives", "best_answers_score":0.8, "library_name":"scipy", "question_url":"https:\/\/stackoverflow.com\/questions\/21638130\/tutorial-for-scipy-cluster-hierarchy", "best_answers_votes":74, "question_length":1171, "response_length":2602 }, { "question":"Highest Posterior Density Region and Central Credible Region Given a posterior p(\u0398|D) over some parameters \u0398, one can define the following: Highest Posterior Density Region: The Highest Posterior Density Region is the set of most probable values of \u0398 that, in total, constitute 100(1-\u03b1) % of the posterior mass. 
In other words, for a given \u03b1, we look for a p* that satisfies: P(p(\u0398|D) >= p* | D) = 1 - \u03b1 and then obtain the Highest Posterior Density Region as the set: C_\u03b1 = {\u0398 : p(\u0398|D) >= p*} Central Credible Region: Using the same notation as above, a Credible Region (or interval) is defined as: any set C_\u03b1 such that P(\u0398 \u2208 C_\u03b1 | D) = 1 - \u03b1. Depending on the distribution, there could be many such intervals. The central credible interval is defined as a credible interval where there is (1-\u03b1)\/2 mass on each tail. Computation: For general distributions, given samples from the distribution, are there any built-ins to obtain the two quantities above in Python or PyMC? For common parametric distributions (e.g. Beta, Gaussian, etc.) are there any built-ins or libraries to compute this using SciPy or statsmodels?", "response":"From my understanding \"central credible region\" is not any different from how confidence intervals are calculated; all you need is the inverse of cdf function at alpha\/2 and 1-alpha\/2; in scipy this is called ppf (percent point function); so as for Gaussian posterior distribution: ``` >>> from scipy.stats import norm >>> alpha = .05 >>> l, u = norm.ppf(alpha \/ 2), norm.ppf(1 - alpha \/ 2) ``` to verify that [l, u] covers (1-alpha) of posterior density: ``` >>> norm.cdf(u) - norm.cdf(l) 0.94999999999999996 ``` similarly for Beta posterior with say a=1 and b=3: ``` >>> from scipy.stats import beta >>> l, u = beta.ppf(alpha \/ 2, a=1, b=3), beta.ppf(1 - alpha \/ 2, a=1, b=3) ``` and again: ``` >>> beta.cdf(u, a=1, b=3) - beta.cdf(l, a=1, b=3) 0.94999999999999996 ``` here you can see parametric distributions that are included in scipy; and I guess all of them have ppf function; As for highest posterior density region, it is more tricky, since pdf function is not necessarily invertible; and in general such a region may not even be connected; for example, in the case of Beta with a = b = .5 (as can be seen here); But, in the case of Gaussian distribution, it is easy to see that \"Highest Posterior Density Region\" coincides with \"Central 
Credible Region\"; and I think that is the case for all symmetric uni-modal distributions (i.e. if the pdf is symmetric around the mode of the distribution) A possible numerical approach for the general case would be binary search over the value of p* using numerical integration of pdf; utilizing the fact that the integral is a monotone function of p*; Here is an example for mixture Gaussian: [ 1 ] First thing you need is an analytical pdf function; for mixture Gaussian that is easy: ``` import numpy as np from scipy.stats import norm def mix_norm_pdf(x, loc, scale, weight): return np.dot(weight, norm.pdf(x, loc, scale)) ``` so for example for location, scale and weight values as in ``` loc = np.array([-1, 3]) # mean values scale = np.array([.5, .8]) # standard deviations weight = np.array([.4, .6]) # mixture probabilities ``` you will get two nice Gaussian distributions holding hands: [ 2 ] now, you need an error function which given a test value for p* integrates pdf function above p* and returns squared error from the desired value 1 - alpha: ``` def errfn( p, alpha, *args): from scipy import integrate def fn( x ): pdf = mix_norm_pdf(x, *args) return pdf if pdf > p else 0 # ideally integration limits should not # be hard coded but inferred lb, ub = -3, 6 prob = integrate.quad(fn, lb, ub)[0] return (prob + alpha - 1.0)**2 ``` [ 3 ] now, for a given value of alpha we can minimize the error function to obtain p*: ``` alpha = .05 from scipy.optimize import fmin p = fmin(errfn, x0=0, args=(alpha, loc, scale, weight))[0] ``` which results in p* = 0.0450, and HPD as below; the red area represents 1 - alpha of the distribution, and the horizontal dashed line is p*.", "best_answers_score":0.8, "library_name":"scipy", "question_url":"https:\/\/stackoverflow.com\/questions\/22284502\/highest-posterior-density-region-and-central-credible-region", "best_answers_votes":25, "question_length":1022, "response_length":2926 }, { "question":"Calculating Slopes in Numpy (or Scipy) I am trying to 
find the fastest and most efficient way to calculate slopes using Numpy and Scipy. I have a data set of three Y variables and one X variable and I need to calculate their individual slopes. For example, I can easily do this one row at a time, as shown below, but I was hoping there was a more efficient way of doing this. I also don't think linregress is the best way to go because I don't need any of the auxiliary variables like intercept, standard error, etc in my results. Any help is greatly appreciated. ``` import numpy as np from scipy import stats Y = [[ 2.62710000e+11 3.14454000e+11 3.63609000e+11 4.03196000e+11 4.21725000e+11 2.86698000e+11 3.32909000e+11 4.01480000e+11 4.21215000e+11 4.81202000e+11] [ 3.11612352e+03 3.65968334e+03 4.15442691e+03 4.52470938e+03 4.65011423e+03 3.10707392e+03 3.54692896e+03 4.20656404e+03 4.34233412e+03 4.88462501e+03] [ 2.21536396e+01 2.59098311e+01 2.97401268e+01 3.04784552e+01 3.13667639e+01 2.76377113e+01 3.27846013e+01 3.73223417e+01 3.51249997e+01 4.42563658e+01]] X = [ 1990. 1991. 1992. 1993. 1994. 1995. 1996. 1997. 1998. 1999.] 
slope_0, intercept, r_value, p_value, std_err = stats.linregress(X, Y[0,:]) slope_1, intercept, r_value, p_value, std_err = stats.linregress(X, Y[1,:]) slope_2, intercept, r_value, p_value, std_err = stats.linregress(X, Y[2,:]) slope_0 = slope\/Y[0,:][0] slope_1 = slope\/Y[1,:][0] slope_2 = slope\/Y[2,:][0] b, a = polyfit(X, Y[1,:], 1) slope_1_a = b\/Y[1,:][0] ```", "response":"The fastest and the most efficient way would be to use a native scipy function from linregress which calculates everything: slope : slope of the regression line intercept : intercept of the regression line r-value : correlation coefficient p-value : two-sided p-value for a hypothesis test whose null hypothesis is that the slope is zero stderr : Standard error of the estimate And here is an example: ``` a = [15, 12, 8, 8, 7, 7, 7, 6, 5, 3] b = [10, 25, 17, 11, 13, 17, 20, 13, 9, 15] from scipy.stats import linregress linregress(a, b) ``` will return you: ``` LinregressResult(slope=0.20833333333333337, intercept=13.375, rvalue=0.14499815458068521, pvalue=0.68940144811669501, stderr=0.50261704627083648) ``` P.S. Just a mathematical formula for slope: slope = (n*sum(x*y) - sum(x)*sum(y)) \/ (n*sum(x**2) - (sum(x))**2)", "best_answers_score":0.8, "library_name":"scipy", "question_url":"https:\/\/stackoverflow.com\/questions\/9538525\/calculating-slopes-in-numpy-or-scipy", "best_answers_votes":74, "question_length":1505, "response_length":757 }, { "question":"correct and efficient way to flatten array in numpy in python? [duplicate] This question already has answers here: From ND to 1D arrays (7 answers) Closed 9 years ago. I have: ``` a = array([[1,2,3],[4,5,6]]) ``` and I'd like to flatten it, joining the two inner lists into one flat array entry. I can do: ``` array(list(flatten(a))) ``` but that seems inefficient due to the list cast (I want to end up with an array and not a generator.) 
Also, how can this be generalized to an array like this: ``` b = array([[[1,2,3],[4,5,6]], [[10,11,12],[13,14,15]]]) ``` where the result should be: ``` b = array([[1,2,3,4,5,6], [10,11,12,13,14,15]]) ``` are there builtin\/efficient numpy\/scipy operators for this? thanks.", "response":"You might need to check out numpy.flatten and numpy.ravel, both return a 1-d array from an n-d array. Furthermore, if you're not going to modify the returned 1-d array, I suggest you use numpy.ravel, since it doesn't make a copy of the array, but just return a view of the array, which is much faster than numpy.flatten. ``` >>>a = np.arange(10000).reshape((100,100)) >>>%timeit a.flatten() 100000 loops, best of 3: 4.02 \u00b5s per loop >>>%timeit a.ravel() 1000000 loops, best of 3: 412 ns per loop ``` Also check out this post.", "best_answers_score":0.8, "library_name":"scipy", "question_url":"https:\/\/stackoverflow.com\/questions\/9057379\/correct-and-efficient-way-to-flatten-array-in-numpy-in-python", "best_answers_votes":46, "question_length":712, "response_length":525 }, { "question":"How to force zero interception in linear regression? 
I have some more or less linear data of the form: ``` x = [0.1, 0.2, 0.4, 0.6, 0.8, 1.0, 2.0, 4.0, 6.0, 8.0, 10.0, 20.0, 40.0, 60.0, 80.0] y = [0.50505332505407008, 1.1207373784533172, 2.1981844719020001, 3.1746209003398689, 4.2905482471260044, 6.2816226678076958, 11.073788414382639, 23.248479770546009, 32.120462301367183, 44.036117671229206, 54.009003143831116, 102.7077685684846, 185.72880217806673, 256.12183145545811, 301.97120103079675] ``` I am using scipy.optimize.leastsq to fit a linear regression to this: ``` def lin_fit(x, y): '''Fits a linear fit of the form mx+b to the data''' fitfunc = lambda params, x: params[0] * x + params[1] #create fitting function of form mx+b errfunc = lambda p, x, y: fitfunc(p, x) - y #create error function for least squares fit init_a = 0.5 #find initial value for a (gradient) init_b = min(y) #find initial value for b (y axis intersection) init_p = numpy.array((init_a, init_b)) #bundle initial values in initial parameters #calculate best fitting parameters (i.e. m and b) using the error function p1, success = scipy.optimize.leastsq(errfunc, init_p.copy(), args = (x, y)) f = fitfunc(p1, x) #create a fit with those parameters return p1, f ``` And it works beautifully (although I am not sure if scipy.optimize is the right thing to use here, it might be a bit over the top?). However, due to the way the data points lie it does not give me a y-axis interception at 0. I do know though that it has to be zero in this case, if x = 0 than y = 0. Is there any way I can force this?", "response":"As @AbhranilDas mentioned, just use a linear method. There's no need for a non-linear solver like scipy.optimize.lstsq. Typically, you'd use numpy.polyfit to fit a line to your data, but in this case you'll need to do use numpy.linalg.lstsq directly, as you want to set the intercept to zero. 
As a quick example: ``` import numpy as np import matplotlib.pyplot as plt x = np.array([0.1, 0.2, 0.4, 0.6, 0.8, 1.0, 2.0, 4.0, 6.0, 8.0, 10.0, 20.0, 40.0, 60.0, 80.0]) y = np.array([0.50505332505407008, 1.1207373784533172, 2.1981844719020001, 3.1746209003398689, 4.2905482471260044, 6.2816226678076958, 11.073788414382639, 23.248479770546009, 32.120462301367183, 44.036117671229206, 54.009003143831116, 102.7077685684846, 185.72880217806673, 256.12183145545811, 301.97120103079675]) # Our model is y = a * x, so things are quite simple, in this case... # x needs to be a column vector instead of a 1D vector for this, however. x = x[:,np.newaxis] a, _, _, _ = np.linalg.lstsq(x, y) plt.plot(x, y, 'bo') plt.plot(x, a*x, 'r-') plt.show() ```", "best_answers_score":0.8, "library_name":"scipy", "question_url":"https:\/\/stackoverflow.com\/questions\/9990789\/how-to-force-zero-interception-in-linear-regression", "best_answers_votes":58, "question_length":1583, "response_length":1035 }, { "question":"Get coordinates of local maxima in 2D array above certain value ``` from PIL import Image import numpy as np from scipy.ndimage.filters import maximum_filter import pylab # the picture (256 * 256 pixels) contains bright spots of which I wanna get positions # problem: data has high background around value 900 - 1000 im = Image.open('slice0000.png') data = np.array(im) # as far as I understand, data == maximum_filter gives True-value for pixels # being the brightest in their neighborhood (here 10 * 10 pixels) maxima = (data == maximum_filter(data,10)) # How can I get only maxima, outstanding the background a certain value, let's say 500 ? ``` I'm afraid I don't really understand the scipy.ndimage.filters.maximum_filter() function. Is there a way to obtain pixel-coordinates only within the spots and not within the background? 
https:\/\/i.sstatic.net\/RImHW.png (16-bit grayscale picture, 256*256 pixels)", "response":"``` import numpy as np import scipy import scipy.ndimage as ndimage import matplotlib.pyplot as plt fname = '\/tmp\/slice0000.png' neighborhood_size = 5 threshold = 1500 data = scipy.misc.imread(fname) data_max = ndimage.maximum_filter(data, neighborhood_size) # apply maximum filter with a size of neighborhood_size maxima = (data == data_max) # boolean mask: local maximum within neighborhood_size data_min = ndimage.minimum_filter(data, neighborhood_size) # apply minimum filter with a size of neighborhood_size diff = ((data_max - data_min) > threshold) # boolean mask where the difference of the filters exceeds threshold maxima[~diff] = False # remove the local maxima which do not satisfy the minimum difference in neighborhood labeled, num_objects = ndimage.label(maxima) # label connected components on maxima binary array (boolean mask) slices = ndimage.find_objects(labeled) # slices are 2d rect x, y = [], [] for dy,dx in slices: x_center = (dx.start + dx.stop - 1)\/2 x.append(x_center) y_center = (dy.start + dy.stop - 1)\/2 y.append(y_center) plt.imshow(data) plt.savefig('\/tmp\/data.png', bbox_inches = 'tight') plt.autoscale(False) plt.plot(x,y, 'ro') plt.savefig('\/tmp\/result.png', bbox_inches = 'tight') ``` Given data.png: the above program yields result.png with threshold = 1500. Lower the threshold to pick up more local maxima: References: J.F. 
Sebastian counts nuclei Joe Kington finds paw prints Ivan finds local maximums", "best_answers_score":0.8, "library_name":"scipy", "question_url":"https:\/\/stackoverflow.com\/questions\/9111711\/get-coordinates-of-local-maxima-in-2d-array-above-certain-value", "best_answers_votes":66, "question_length":909, "response_length":1442 }, { "question":"My scipy.misc module appears to be missing imsave I open the python3 interpreter and type ``` import scipy.misc scipy.misc.imsave ``` with the result ``` Traceback (most recent call last): File \"\", line 1, in AttributeError: 'module' object has no attribute 'imsave' ``` Has the name changed? It works fine in python2 but I would rather not migrate backwards so to speak. I have python 3.3.1 on Lubuntu 13.04 with all the modules downloaded from the default repositories. Scipy is installed and print(scipy.misc.__doc__) shows that imsave should be there. EDIT: scipy.__version__ gives 0.11.0 from scipy.misc import imsave gives ``` Traceback (most recent call last): File \"\", line 1, in ImportError: cannot import name imsave ```", "response":"scipy.misc.imsave has been deprecated in newer Scipy versions. Change your code to: ``` import imageio imageio.imwrite('filename.jpg', array) ```", "best_answers_score":0.8, "library_name":"scipy", "question_url":"https:\/\/stackoverflow.com\/questions\/19991665\/my-scipy-misc-module-appears-to-be-missing-imsave", "best_answers_votes":62, "question_length":732, "response_length":145 }, { "question":"`ValueError: A value in x_new is above the interpolation range.` - what other reasons than not ascending values? I receive this error in scipy interp1d function. Normally, this error would be generated if the x was not monotonically increasing. 
``` import scipy.interpolate as spi def refine(coarsex,coarsey,step): finex = np.arange(min(coarsex),max(coarsex)+step,step) intfunc = spi.interp1d(coarsex, coarsey,axis=0) finey = intfunc(finex) return finex, finey for num, tfile in enumerate(files): tfile = tfile.dropna(how='any') x = np.array(tfile['col1']) y = np.array(tfile['col2']) finex, finey = refine(x,y,0.01) ``` The code is correct, because it successfully worked on 6 data files and threw the error for the 7th. So there must be something wrong with the data. But as far as I can tell, the data increase all the way down. I am sorry for not providing an example, because I am not able to reproduce the error on an example. There are two things that could help me: Some brainstorming - if the data are indeed monotonically increasing, what else could produce this error? Another hint, regarding the decimals, could be in this question, but I think my solution (the min and max of x) is robust enough to avoid it. Or isn't it? Is it possible (how?) to return the value of x_new and it's index when throwing the ValueError: A value in x_new is above the interpolation range. so that I could actually see where in the file is the problem? UPDATE So the problem is that, for some reason, max(finex) is larger than max(coarsex) (one is .x39 and the other is .x4). I hoped rounding the original values to 2 significant digits would solve the problem, but it didn't, it displays fewer digits but still counts with the undisplayed. What can I do about it?", "response":"If you are running Scipy v. 0.17.0 or newer, then you can pass fill_value='extrapolate' to spi.interp1d, and it will extrapolate to accomadate these values of your's that lie outside the interpolation range. So define your interpolation function like so: intfunc = spi.interp1d(coarsex, coarsey,axis=0, fill_value=\"extrapolate\") Be forewarned, however! 
Depending on what your data looks like and the type on interpolation you are performing, the extrapolated values can be erroneous. This is especially true if you have noisy or non-monotonic data. In your case you might be ok because your x_new value is only slighly beyond your interpolation range. Here's simple demonstration of how this feature can work nicely but also give erroneous results. ``` import scipy.interpolate as spi import numpy as np x = np.linspace(0,1,100) y = x + np.random.randint(-1,1,100)\/100 x_new = np.linspace(0,1.1,100) intfunc = spi.interp1d(x,y,fill_value=\"extrapolate\") y_interp = intfunc(x_new) import matplotlib.pyplot as plt plt.plot(x_new,y_interp,'r', label='interp\/extrap') plt.plot(x,y, 'b--', label='data') plt.legend() plt.show() ``` So the interpolated portion (in red) worked well, but the extrapolated portion clearly fails to follow the otherwise linear trend in this data because of the noise. So have some understanding of your data and proceed with caution.", "best_answers_score":0.8, "library_name":"scipy", "question_url":"https:\/\/stackoverflow.com\/questions\/45429831\/valueerror-a-value-in-x-new-is-above-the-interpolation-range-what-other-re", "best_answers_votes":76, "question_length":1756, "response_length":1356 }, { "question":"Determining the byte size of a scipy.sparse matrix? Is it possible to determine the byte size of a scipy.sparse matrix? In NumPy you can determine the size of an array by doing the following: ``` import numpy as np print(np.zeros((100, 100, 100).nbytes) 8000000 ```", "response":"A sparse matrix is constructed from regular numpy arrays, so you can get the byte count for any of these just as you would a regular array. 
If you just want the number of bytes of the array elements: ``` >>> from scipy.sparse import csr_matrix >>> a = csr_matrix(np.arange(12).reshape((4,3))) >>> a.data.nbytes 88 ``` If you want the byte counts of all arrays required to build the sparse matrix, then I think you want: ``` >>> print a.data.nbytes + a.indptr.nbytes + a.indices.nbytes 152 ```", "best_answers_score":0.8, "library_name":"scipy", "question_url":"https:\/\/stackoverflow.com\/questions\/11173019\/determining-the-byte-size-of-a-scipy-sparse-matrix", "best_answers_votes":64, "question_length":265, "response_length":492 }, { "question":"In Scipy how and why does curve_fit calculate the covariance of the parameter estimates I have been using scipy.optimize.leastsq to fit some data. I would like to get some confidence intervals on these estimates so I look into the cov_x output but the documentation is very unclear as to what this is and how to get the covariance matrix for my parameters from this. First of all it says that it is a Jacobian, but in the notes it also says that \"cov_x is a Jacobian approximation to the Hessian\" so that it is not actually a Jacobian but a Hessian using some approximation from the Jacobian. Which of these statements is correct? Secondly this sentence to me is confusing: This matrix must be multiplied by the residual variance to get the covariance of the parameter estimates \u2013 see curve_fit. I indeed go look at the source code for curve_fit where they do: ``` s_sq = (func(popt, *args)**2).sum()\/(len(ydata)-len(p0)) pcov = pcov * s_sq ``` which corresponds to multiplying cov_x by s_sq but I cannot find this equation in any reference. Can someone explain why this equation is correct? My intuition tells me that it should be the other way around since cov_x is supposed to be a derivative (Jacobian or Hessian) so I was thinking: cov_x * covariance(parameters) = sum of errors(residuals) where sigma(parameters) is the thing I want. 
How do I connect the thing curve_fit is doing with what I see at eg. wikipedia: http:\/\/en.wikipedia.org\/wiki\/Propagation_of_uncertainty#Non-linear_combinations", "response":"OK, I think I found the answer. First the solution: cov_x*s_sq is simply the covariance of the parameters which is what you want. Taking sqrt of the diagonal elements will give you standard deviation (but be careful about covariances!). Residual variance = reduced chi square = s_sq = sum[(f(x)-y)^2]\/(N-n), where N is number of data points and n is the number of fitting parameters. Reduced chi square. The reason for my confusion is that cov_x as given by leastsq is not actually what is called cov(x) in other places rather it is the reduced cov(x) or fractional cov(x). The reason it does not show up in any of the other references is that it is a simple rescaling which is useful in numerical computations, but is not relevant for a textbook. About Hessian versus Jacobian, the documentation is poorly worded. It is the Hessian that is calculated in both cases as is obvious since the Jacobian is zero at a minimum. What they mean is that they are using an approximation to the Jacobian to find the Hessian. A further note. It seems that the curve_fit result does not actually account for the absolute size of the errors, but only take into account the relative size of the sigmas provided. This means that the pcov returned doesn't change even if the errorbars change by a factor of a million. This is of course not right, but seems to be standard practice ie. Matlab does the same thing when using their Curve fitting toolbox. 
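Numerically, that rescaling is just: parameter covariance = residual variance times the inverse of X-transpose-X. A minimal plain-numpy sketch on made-up data (an illustration of the idea, not the internals of curve_fit):

```python
import numpy as np

# made-up, nearly linear data for illustration
x = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
y = np.array([1.1, 2.9, 5.2, 6.8, 9.1])

# design matrix for the model y = m*x + c
X = np.column_stack([x, np.ones_like(x)])
params = np.linalg.lstsq(X, y, rcond=None)[0]

# reduced covariance: inverse of X.T @ X (the analogue of cov_x from leastsq)
cov_reduced = np.linalg.inv(X.T @ X)

# residual variance s_sq = SSR / (N - n), i.e. the reduced chi-square
resid = y - X @ params
s_sq = (resid ** 2).sum() / (len(y) - len(params))

# what curve_fit does: pcov = cov_x * s_sq
pcov = cov_reduced * s_sq
perr = np.sqrt(np.diag(pcov))  # one-sigma errors on m and c
```

Here perr holds the one-sigma parameter errors; as noted above, this is only meaningful once the absolute scale of the residuals can be trusted.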
The correct procedure is described here: https:\/\/en.wikipedia.org\/wiki\/Linear_least_squares_(mathematics)#Parameter_errors_and_correlation It seems fairly straightforward to do this once the optimum has been found, at least for Linear Least squares.", "best_answers_score":0.8, "library_name":"scipy", "question_url":"https:\/\/stackoverflow.com\/questions\/14854339\/in-scipy-how-and-why-does-curve-fit-calculate-the-covariance-of-the-parameter-es", "best_answers_votes":33, "question_length":1499, "response_length":1683 }, { "question":"Show confidence limits and prediction limits in scatter plot I have two arrays of data for height and weight: ``` import numpy as np, matplotlib.pyplot as plt heights = np.array([50,52,53,54,58,60,62,64,66,67,68,70,72,74,76,55,50,45,65]) weights = np.array([25,50,55,75,80,85,50,65,85,55,45,45,50,75,95,65,50,40,45]) plt.plot(heights,weights,'bo') plt.show() ``` How can I produce a plot similar to the following?", "response":"Here's what I put together. I tried to closely emulate your screenshot. Given ``` import numpy as np import scipy as sp import scipy.stats as stats import matplotlib.pyplot as plt %matplotlib inline # Raw Data heights = np.array([50,52,53,54,58,60,62,64,66,67,68,70,72,74,76,55,50,45,65]) weights = np.array([25,50,55,75,80,85,50,65,85,55,45,45,50,75,95,65,50,40,45]) ``` Two detailed options to plot confidence intervals: ``` def plot_ci_manual(t, s_err, n, x, x2, y2, ax=None): \"\"\"Return an axes of confidence bands using a simple approach. Notes ----- .. math:: \\left| \\: \\hat{\\mu}_{y|x0} - \\mu_{y|x0} \\: \\right| \\; \\leq \\; T_{n-2}^{.975} \\; \\hat{\\sigma} \\; \\sqrt{\\frac{1}{n}+\\frac{(x_0-\\bar{x})^2}{\\sum_{i=1}^n{(x_i-\\bar{x})^2}}} .. math:: \\hat{\\sigma} = \\sqrt{\\sum_{i=1}^n{\\frac{(y_i-\\hat{y})^2}{n-2}}} References ---------- .. [1] M. Duarte. \"Curve fitting,\" Jupyter Notebook. 
http:\/\/nbviewer.ipython.org\/github\/demotu\/BMC\/blob\/master\/notebooks\/CurveFitting.ipynb \"\"\" if ax is None: ax = plt.gca() ci = t * s_err * np.sqrt(1\/n + (x2 - np.mean(x))**2 \/ np.sum((x - np.mean(x))**2)) ax.fill_between(x2, y2 + ci, y2 - ci, color=\"#b9cfe7\", edgecolor=\"\") return ax def plot_ci_bootstrap(xs, ys, resid, nboot=500, ax=None): \"\"\"Return an axes of confidence bands using a bootstrap approach. Notes ----- The bootstrap approach iteratively resampling residuals. It plots `nboot` number of straight lines and outlines the shape of a band. The density of overlapping lines indicates improved confidence. Returns ------- ax : axes - Cluster of lines - Upper and Lower bounds (high and low) (optional) Note: sensitive to outliers References ---------- .. [1] J. Stults. \"Visualizing Confidence Intervals\", Various Consequences. http:\/\/www.variousconsequences.com\/2010\/02\/visualizing-confidence-intervals.html \"\"\" if ax is None: ax = plt.gca() bootindex = sp.random.randint for _ in range(nboot): resamp_resid = resid[bootindex(0, len(resid) - 1, len(resid))] # Make coeffs of for polys pc = sp.polyfit(xs, ys + resamp_resid, 1) # Plot bootstrap cluster ax.plot(xs, sp.polyval(pc, xs), \"b-\", linewidth=2, alpha=3.0 \/ float(nboot)) return ax ``` Code ``` # Computations ---------------------------------------------------------------- # Modeling with Numpy def equation(a, b): \"\"\"Return a 1D polynomial.\"\"\" return np.polyval(a, b) x = heights y = weights p, cov = np.polyfit(x, y, 1, cov=True) # parameters and covariance from of the fit of 1-D polynom. y_model = equation(p, x) # model using the fit parameters; NOTE: parameters here are coefficients # Statistics n = weights.size # number of observations m = p.size # number of parameters dof = n - m # degrees of freedom t = stats.t.ppf(0.975, n - m) # t-statistic; used for CI and PI bands # Estimates of Error in Data\/Model resid = y - y_model # residuals; diff. 
actual data from predicted values chi2 = np.sum((resid \/ y_model)**2) # chi-squared; estimates error in data chi2_red = chi2 \/ dof # reduced chi-squared; measures goodness of fit s_err = np.sqrt(np.sum(resid**2) \/ dof) # standard deviation of the error # Plotting -------------------------------------------------------------------- fig, ax = plt.subplots(figsize=(8, 6)) # Data ax.plot( x, y, \"o\", color=\"#b9cfe7\", markersize=8, markeredgewidth=1, markeredgecolor=\"b\", markerfacecolor=\"None\" ) # Fit ax.plot(x, y_model, \"-\", color=\"0.1\", linewidth=1.5, alpha=0.5, label=\"Fit\") x2 = np.linspace(np.min(x), np.max(x), 100) y2 = equation(p, x2) # Confidence Interval (select one) plot_ci_manual(t, s_err, n, x, x2, y2, ax=ax) #plot_ci_bootstrap(x, y, resid, ax=ax) # Prediction Interval pi = t * s_err * np.sqrt(1 + 1\/n + (x2 - np.mean(x))**2 \/ np.sum((x - np.mean(x))**2)) ax.fill_between(x2, y2 + pi, y2 - pi, color=\"None\", linestyle=\"--\") ax.plot(x2, y2 - pi, \"--\", color=\"0.5\", label=\"95% Prediction Limits\") ax.plot(x2, y2 + pi, \"--\", color=\"0.5\") #plt.show() ``` The following modifications are optional, originally implemented to mimic the OP's desired result. 
``` # Figure Modifications -------------------------------------------------------- # Borders ax.spines[\"top\"].set_color(\"0.5\") ax.spines[\"bottom\"].set_color(\"0.5\") ax.spines[\"left\"].set_color(\"0.5\") ax.spines[\"right\"].set_color(\"0.5\") ax.get_xaxis().set_tick_params(direction=\"out\") ax.get_yaxis().set_tick_params(direction=\"out\") ax.xaxis.tick_bottom() ax.yaxis.tick_left() # Labels plt.title(\"Fit Plot for Weight\", fontsize=\"14\", fontweight=\"bold\") plt.xlabel(\"Height\") plt.ylabel(\"Weight\") plt.xlim(np.min(x) - 1, np.max(x) + 1) # Custom legend handles, labels = ax.get_legend_handles_labels() display = (0, 1) anyArtist = plt.Line2D((0, 1), (0, 0), color=\"#b9cfe7\") # create custom artists legend = plt.legend( [handle for i, handle in enumerate(handles) if i in display] + [anyArtist], [label for i, label in enumerate(labels) if i in display] + [\"95% Confidence Limits\"], loc=9, bbox_to_anchor=(0, -0.21, 1., 0.102), ncol=3, mode=\"expand\" ) frame = legend.get_frame().set_edgecolor(\"0.5\") # Save Figure plt.tight_layout() plt.savefig(\"filename.png\", bbox_extra_artists=(legend,), bbox_inches=\"tight\") plt.show() ``` Output Using plot_ci_manual(): Using plot_ci_bootstrap(): Hope this helps. Cheers. Details I believe that since the legend is outside the figure, it does not show up in matplotblib's popup window. It works fine in Jupyter using %maplotlib inline. The primary confidence interval code (plot_ci_manual()) is adapted from another source producing a plot similar to the OP. You can select a more advanced technique called residual bootstrapping by uncommenting the second option plot_ci_bootstrap(). Updates This post has been updated with revised code compatible with Python 3. stats.t.ppf() accepts the lower tail probability. According to the following resources, t = sp.stats.t.ppf(0.95, n - m) was corrected to t = sp.stats.t.ppf(0.975, n - m) to reflect a two-sided 95% t-statistic (or one-sided 97.5% t-statistic). 
original notebook and equation statistics reference (thanks @Bonlenfum and @tryptofan) verified t-value given dof=17 y2 was updated to respond more flexibly with a given model (@regeneration). An abstracted equation function was added to wrap the model function. Non-linear regressions are possible although not demonstrated. Amend appropriate variables as needed (thanks @PJW). See Also This post on plotting bands with statsmodels library. This tutorial on plotting bands and computing confidence intervals with uncertainties library (install with caution in a separate environment).", "best_answers_score":0.8, "library_name":"scipy", "question_url":"https:\/\/stackoverflow.com\/questions\/27164114\/show-confidence-limits-and-prediction-limits-in-scatter-plot", "best_answers_votes":83, "question_length":413, "response_length":6501 }, { "question":"Plot a plane based on a normal vector and a point in Matlab or matplotlib How would one go plotting a plane in matlab or matplotlib from a normal vector and a point?", "response":"For all the copy\/pasters out there, here is similar code for Python using matplotlib: ```py import numpy as np import matplotlib.pyplot as plt from mpl_toolkits.mplot3d import Axes3D point = np.array([1, 2, 3]) normal = np.array([1, 1, 2]) # a plane is a*x+b*y+c*z+d=0 # [a,b,c] is the normal. Thus, we have to calculate # d and we're set d = -point.dot(normal) # create x,y xx, yy = np.meshgrid(range(10), range(10)) # calculate corresponding z z = (-normal[0] * xx - normal[1] * yy - d) * 1. \/normal[2] # plot the surface plt3d = plt.figure().gca(projection='3d') plt3d.plot_surface(xx, yy, z) plt.show() ``` EDIT For newer matplotlib>3.6: ```py import numpy as np import matplotlib.pyplot as plt from mpl_toolkits.mplot3d import Axes3D point = np.array([1, 2, 3]) normal = np.array([1, 1, 2]) # a plane is a*x+b*y+c*z+d=0 # [a,b,c] is the normal. 
Thus, we have to calculate # d and we're set d = -point.dot(normal) # create x,y xx, yy = np.meshgrid(range(10), range(10)) # calculate corresponding z z = (-normal[0] * xx - normal[1] * yy - d) * 1. \/normal[2] # plot the surface fig = plt.figure() plt3d = fig.add_subplot(projection='3d') plt3d.plot_surface(xx, yy, z) plt.show() ```", "best_answers_score":0.8, "library_name":"scipy", "question_url":"https:\/\/stackoverflow.com\/questions\/3461869\/plot-a-plane-based-on-a-normal-vector-and-a-point-in-matlab-or-matplotlib", "best_answers_votes":59, "question_length":165, "response_length":1184 }, { "question":"Overflow Error in Python's numpy.exp function I want to use numpy.exp like this: ``` cc = np.array([ [0.120,0.34,-1234.1] ]) print 1\/(1+np.exp(-cc)) ``` But this gives me error: ``` \/usr\/local\/lib\/python2.7\/site-packages\/ipykernel\/__main__.py:5: RuntimeWarning: overflow encountered in exp ``` I can't understand why? How can I fix this? It seems the problem is with third number (-1234.1)", "response":"As fuglede says, the issue here is that np.float64 can't handle a number as large as exp(1234.1). Try using np.float128 instead: ``` >>> cc = np.array([[0.120,0.34,-1234.1]], dtype=np.float128) >>> cc array([[ 0.12, 0.34, -1234.1]], dtype=float128) >>> 1 \/ (1 + np.exp(-cc)) array([[ 0.52996405, 0.58419052, 1.0893812e-536]], dtype=float128) ``` Note however, that there are certain quirks with using extended precision. It may not work on Windows; you don't actually get the full 128 bits of precision; and you might lose the precision whenever the number passes through pure python. You can read more about the details here. For most practical purposes, you can probably approximate 1 \/ (1 + ) to zero. That is to say, just ignore the warning and move on. 
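If the goal is simply to silence the warning for that one expression, the errstate context manager is another option (a general numpy idiom, offered as a sketch rather than part of the original answer):

```python
import numpy as np

cc = np.array([[0.120, 0.34, -1234.1]])

# suppress only the overflow warning, and only inside this block;
# exp still overflows to inf, and 1/(1 + inf) cleanly becomes 0.0
with np.errstate(over='ignore'):
    result = 1 / (1 + np.exp(-cc))
```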
Numpy takes care of the approximation for you (when using np.float64): ``` >>> 1 \/ (1 + np.exp(-cc)) \/usr\/local\/bin\/ipython3:1: RuntimeWarning: overflow encountered in exp #!\/usr\/local\/bin\/python3.4 array([[ 0.52996405, 0.58419052, 0. ]]) ``` If you want to suppress the warning, you could use scipy.special.expit, as suggested by WarrenWeckesser in a comment to the question: ``` >>> from scipy.special import expit >>> expit(cc) array([[ 0.52996405, 0.58419052, 0. ]]) ```", "best_answers_score":0.8, "library_name":"scipy", "question_url":"https:\/\/stackoverflow.com\/questions\/40726490\/overflow-error-in-pythons-numpy-exp-function", "best_answers_votes":54, "question_length":389, "response_length":1232 }, { "question":"B\u00e9zier curve fitting with SciPy I have a set of points which approximate a 2D curve. I would like to use Python with numpy and scipy to find a cubic B\u00e9zier path which approximately fits the points, where I specify the exact coordinates of two endpoints, and it returns the coordinates of the other two control points. I initially thought scipy.interpolate.splprep() might do what I want, but it seems to force the curve to pass through each one of the data points (as I suppose you would want for interpolation). I'll assume that I was on the wrong track with that. My question is similar to this one: How can I fit a B\u00e9zier curve to a set of data?, except that they said they didn't want to use numpy. My preference would be to find what I need already implemented somewhere in scipy or numpy. Otherwise, I plan to implement the algorithm linked from one of the answers to that question, using numpy: An algorithm for automatically fitting digitized curves (pdf.page 622). Thank you for any suggestions! 
Edit: I understand that a cubic B\u00e9zier curve is not guaranteed to pass through all the points; I want one which passes through two given endpoints, and which is as close as possible to the specified interior points.", "response":"Here's a way to do Bezier curves with numpy: ``` import numpy as np from scipy.special import comb def bernstein_poly(i, n, t): \"\"\" The Bernstein polynomial of n, i as a function of t \"\"\" return comb(n, i) * ( t**(n-i) ) * (1 - t)**i def bezier_curve(points, nTimes=1000): \"\"\" Given a set of control points, return the bezier curve defined by the control points. points should be a list of lists, or list of tuples such as [ [1,1], [2,3], [4,5], ..[Xn, Yn] ] nTimes is the number of time steps, defaults to 1000 See http:\/\/processingjs.nihongoresources.com\/bezierinfo\/ \"\"\" nPoints = len(points) xPoints = np.array([p[0] for p in points]) yPoints = np.array([p[1] for p in points]) t = np.linspace(0.0, 1.0, nTimes) polynomial_array = np.array([ bernstein_poly(i, nPoints-1, t) for i in range(0, nPoints) ]) xvals = np.dot(xPoints, polynomial_array) yvals = np.dot(yPoints, polynomial_array) return xvals, yvals if __name__ == \"__main__\": from matplotlib import pyplot as plt nPoints = 4 points = np.random.rand(nPoints,2)*200 xpoints = [p[0] for p in points] ypoints = [p[1] for p in points] xvals, yvals = bezier_curve(points, nTimes=1000) plt.plot(xvals, yvals) plt.plot(xpoints, ypoints, \"ro\") for nr in range(len(points)): plt.text(points[nr][0], points[nr][1], nr) plt.show() ```", "best_answers_score":0.8, "library_name":"scipy", "question_url":"https:\/\/stackoverflow.com\/questions\/12643079\/b%c3%a9zier-curve-fitting-with-scipy", "best_answers_votes":25, "question_length":1220, "response_length":1284 }, { "question":"How to select inverse of indexes of a numpy array? I have a large set of data in which I need to compare the distances of a set of samples from this array with all the other elements of the array. 
Below is a very simple example of my data set. ``` import numpy as np import scipy.spatial.distance as sd data = np.array( [[ 0.93825827, 0.26701143], [ 0.99121108, 0.35582816], [ 0.90154837, 0.86254049], [ 0.83149103, 0.42222948], [ 0.27309625, 0.38925281], [ 0.06510739, 0.58445673], [ 0.61469637, 0.05420098], [ 0.92685408, 0.62715114], [ 0.22587817, 0.56819403], [ 0.28400409, 0.21112043]] ) sample_indexes = [1,2,3] # I'd rather not make this other_indexes = list(set(range(len(data))) - set(sample_indexes)) sample_data = data[sample_indexes] other_data = data[other_indexes] # compare them dists = sd.cdist(sample_data, other_data) ``` Is there a way to index a numpy array for indexes that are NOT the sample indexes? In my above example I make a list called other_indexes. I'd rather not have to do this for various reasons (large data set, threading, a very VERY low amount of memory on the system this is running on etc. etc. etc.). Is there a way to do something like.. ``` other_data = data[ indexes not in sample_indexes] ``` I read that numpy masks can do this but I tried... ``` other_data = data[~sample_indexes] ``` And this gives me an error. Do I have to create a mask?", "response":"``` mask = np.ones(len(data), np.bool) mask[sample_indexes] = 0 other_data = data[mask] ``` not the most elegant for what perhaps should be a single-line statement, but its fairly efficient, and the memory overhead is minimal too. If memory is your prime concern, np.delete would avoid the creation of the mask, and fancy-indexing creates a copy anyway. 
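As a quick sanity check of the mask idea on stand-in data (note that newer numpy spells the dtype as the builtin bool; np.bool has since been removed):

```python
import numpy as np

data = np.arange(20).reshape(10, 2)     # stand-in for a 10x2 data array
sample_indexes = [1, 2, 3]

mask = np.ones(len(data), dtype=bool)   # True for every row...
mask[sample_indexes] = False            # ...except the sampled ones
other_data = data[mask]                 # rows 0 and 4-9 remain
```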
On second thought; np.delete does not modify the existing array, so its pretty much exactly the single line statement you are looking for.", "best_answers_score":0.8, "library_name":"scipy", "question_url":"https:\/\/stackoverflow.com\/questions\/25330959\/how-to-select-inverse-of-indexes-of-a-numpy-array", "best_answers_votes":47, "question_length":1386, "response_length":492 }, { "question":"How do I get a lognormal distribution in Python with Mu and Sigma? I have been trying to get the result of a lognormal distribution using Scipy. I already have the Mu and Sigma, so I don't need to do any other prep work. If I need to be more specific (and I am trying to be with my limited knowledge of stats), I would say that I am looking for the cumulative function (cdf under Scipy). The problem is that I can't figure out how to do this with just the mean and standard deviation on a scale of 0-1 (ie the answer returned should be something from 0-1). I'm also not sure which method from dist, I should be using to get the answer. I've tried reading the documentation and looking through SO, but the relevant questions (like this and this) didn't seem to provide the answers I was looking for. Here is a code sample of what I am working with. Thanks. ``` from scipy.stats import lognorm stddev = 0.859455801705594 mean = 0.418749176686875 total = 37 dist = lognorm.cdf(total,mean,stddev) ``` UPDATE: So after a bit of work and a little research, I got a little further. But I still am getting the wrong answer. The new code is below. According to R and Excel, the result should be .7434, but that's clearly not what is happening. Is there a logic flaw I am missing? ``` dist = lognorm([1.744],loc=2.0785) dist.cdf(25) # yields=0.96374596, expected=0.7434 ``` UPDATE 2: Working lognorm implementation which yields the correct 0.7434 result. 
``` def lognorm(self,x,mu=0,sigma=1): a = (math.log(x) - mu)\/math.sqrt(2*sigma**2) p = 0.5 + 0.5*math.erf(a) return p lognorm(25,1.744,2.0785) > 0.7434 ```", "response":"I know this is a bit late (almost one year!) but I've been doing some research on the lognorm function in scipy.stats. A lot of folks seem confused about the input parameters, so I hope to help these people out. The example above is almost correct, but I found it strange to set the mean to the location (\"loc\") parameter - this signals that the cdf or pdf doesn't 'take off' until the value is greater than the mean. Also, the mean and standard deviation arguments should be in the form exp(Ln(mean)) and Ln(StdDev), respectively. Simply put, the arguments are (x, shape, loc, scale), with the parameter definitions below: loc - No equivalent, this gets subtracted from your data so that 0 becomes the infimum of the range of the data. scale - exp \u03bc, where \u03bc is the mean of the log of the variate. (When fitting, typically you'd use the sample mean of the log of the data.) shape - the standard deviation of the log of the variate. I went through the same frustration as most people with this function, so I'm sharing my solution. Just be careful because the explanations aren't very clear without a compendium of resources. 
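That erf-based formula needs nothing beyond the standard library; here is a standalone sketch (note: the 0.7434 figure comes out when 2.0785 is treated as the log-space mean and 1.744 as the log-space standard deviation, which is my reading of the numbers above rather than anything stated explicitly):

```python
import math

def lognorm_cdf(x, mu=0.0, sigma=1.0):
    # lognormal CDF expressed through the error function
    a = (math.log(x) - mu) / math.sqrt(2 * sigma ** 2)
    return 0.5 + 0.5 * math.erf(a)

value = lognorm_cdf(25, mu=2.0785, sigma=1.744)  # approximately 0.7434
```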
For more information, I found these sources helpful: http:\/\/docs.scipy.org\/doc\/scipy\/reference\/generated\/scipy.stats.lognorm.html#scipy.stats.lognorm https:\/\/stats.stackexchange.com\/questions\/33036\/fitting-log-normal-distribution-in-r-vs-scipy And here is an example, taken from @serv-inc 's answer, posted on this page here: ``` import math from scipy import stats # standard deviation of normal distribution sigma = 0.859455801705594 # mean of normal distribution mu = 0.418749176686875 # hopefully, total is the value where you need the cdf total = 37 frozen_lognorm = stats.lognorm(s=sigma, scale=math.exp(mu)) frozen_lognorm.cdf(total) # use whatever function and value you need here ```", "best_answers_score":0.8, "library_name":"scipy", "question_url":"https:\/\/stackoverflow.com\/questions\/8870982\/how-do-i-get-a-lognormal-distribution-in-python-with-mu-and-sigma", "best_answers_votes":51, "question_length":1600, "response_length":1818 }, { "question":"Numpy transpose of 1D array not giving expected result I am trying a very basic example in Python scipy module for transpose() method but it's not giving expected result. I am using Ipython with pylab mode. ``` a = array([1,2,3] print a.shape >> (3,) b = a.transpose() print b.shape >> (3,) ``` If I print the contents of arrays \"a\" and \"b\", they are similar. Expectation is: (which will be result in Matlab on transpose) ``` [1, 2, 3] ```", "response":"NumPy's transpose() effectively reverses the shape of an array. If the array is one-dimensional, this means it has no effect. In NumPy, the arrays ``` array([1, 2, 3]) ``` and ``` array([1, 2, 3]) ``` are actually the same \u2013 they only differ in whitespace. What you probably want are the corresponding two-dimensional arrays, for which transpose() would work fine. 
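For example, here is a small sketch (not from the original answer) showing how an explicit 2D row or column vector makes transpose behave the way a Matlab user would expect:

```python
import numpy as np

a = np.array([1, 2, 3])   # shape (3,): transposing this is a no-op
row = a.reshape(1, -1)    # shape (1, 3): an explicit row vector
col = a[:, np.newaxis]    # shape (3, 1): an explicit column vector

print(a.T.shape)          # (3,)  -- unchanged
print(row.T.shape)        # (3, 1) -- now a column vector
print(col.T.shape)        # (1, 3) -- back to a row vector
```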
Also consider using NumPy's matrix type: ``` In [1]: numpy.matrix([1, 2, 3]) Out[1]: matrix([[1, 2, 3]]) In [2]: numpy.matrix([1, 2, 3]).T Out[2]: matrix([[1], [2], [3]]) ``` Note that for most applications, the plain one-dimensional array would work fine as both a row or column vector, but when coming from Matlab, you might prefer using numpy.matrix.", "best_answers_score":0.8, "library_name":"scipy", "question_url":"https:\/\/stackoverflow.com\/questions\/11885503\/numpy-transpose-of-1d-array-not-giving-expected-result", "best_answers_votes":38, "question_length":439, "response_length":718 }, { "question":"Parse a Pandas column to Datetime when importing table from SQL database and filtering rows by date I have a DataFrame with column named date. How can we convert\/parse the 'date' column to a DateTime object? I loaded the date column from a Postgresql database using sql.read_frame(). An example of the date column is 2013-04-04. What I am trying to do is to select all rows in a dataframe that has their date columns within a certain period, like after 2013-04-01 and before 2013-04-04. 
My attempt below gives the error 'Series' object has no attribute 'read' Attempt ``` import dateutil df['date'] = dateutil.parser.parse(df['date']) ``` Error ``` AttributeError Traceback (most recent call last) in () 15 16 # Parse 'Date' Column to Datetime ---> 17 df['date'] = dateutil.parser.parse(df['date']) 18 19 # SELECT RECENT SALES C:\\Python27\\lib\\site-packages\\dateutil\\parser.pyc in parse(timestr, parserinfo, **kwargs) 695 return parser(parserinfo).parse(timestr, **kwargs) 696 else: --> 697 return DEFAULTPARSER.parse(timestr, **kwargs) 698 699 C:\\Python27\\lib\\site-packages\\dateutil\\parser.pyc in parse(self, timestr, default, ignoretz, tzinfos, **kwargs) 299 default = datetime.datetime.now().replace(hour=0, minute=0, 300 second=0, microsecond=0) --> 301 res = self._parse(timestr, **kwargs) 302 if res is None: 303 raise ValueError, \"unknown string format\" C:\\Python27\\lib\\site-packages\\dateutil\\parser.pyc in _parse(self, timestr, dayfirst, yearfirst, fuzzy) 347 yearfirst = info.yearfirst 348 res = self._result() --> 349 l = _timelex.split(timestr) 350 try: 351 C:\\Python27\\lib\\site-packages\\dateutil\\parser.pyc in split(cls, s) 141 142 def split(cls, s): --> 143 return list(cls(s)) 144 split = classmethod(split) 145 C:\\Python27\\lib\\site-packages\\dateutil\\parser.pyc in next(self) 135 136 def next(self): --> 137 token = self.get_token() 138 if token is None: 139 raise StopIteration C:\\Python27\\lib\\site-packages\\dateutil\\parser.pyc in get_token(self) 66 nextchar = self.charstack.pop(0) 67 else: ---> 68 nextchar = self.instream.read(1) 69 while nextchar == '\\x00': 70 nextchar = self.instream.read(1) AttributeError: 'Series' object has no attribute 'read' ``` df['date'].apply(dateutil.parser.parse) gives me the error AttributeError: 'datetime.date' object has no attribute 'read' df['date'].truncate(after='2013\/04\/01') gives the error TypeError: can't compare datetime.datetime to long df['date'].dtype returns dtype('O'). 
Is it already a datetime object?", "response":"Pandas is aware of the object datetime but when you use some of the import functions it is taken as a string. So what you need to do is make sure the column is set as the datetime type not as a string. Then you can make your query. ``` df['date'] = pd.to_datetime(df['date']) df_masked = df[(df['date'] > datetime.date(2012,4,1)) & (df['date'] < datetime.date(2012,4,4))] ```", "best_answers_score":0.8, "library_name":"scipy", "question_url":"https:\/\/stackoverflow.com\/questions\/16412099\/parse-a-pandas-column-to-datetime-when-importing-table-from-sql-database-and-fil", "best_answers_votes":62, "question_length":2471, "response_length":375 }, { "question":"Fitting a 2D Gaussian function using scipy.optimize.curve_fit - ValueError and minpack.error I intend to fit a 2D Gaussian function to images showing a laser beam to get its parameters like FWHM and position. So far I tried to understand how to define a 2D Gaussian function in Python and how to pass x and y variables to it. I've written a little script which defines that function, plots it, adds some noise to it and then tries to fit it using curve_fit. Everything seems to work except the last step in which I try to fit my model function to the noisy data. 
Here is my code: ``` import scipy.optimize as opt import numpy as np import pylab as plt #define model function and pass independant variables x and y as a list def twoD_Gaussian((x,y), amplitude, xo, yo, sigma_x, sigma_y, theta, offset): xo = float(xo) yo = float(yo) a = (np.cos(theta)**2)\/(2*sigma_x**2) + (np.sin(theta)**2)\/(2*sigma_y**2) b = -(np.sin(2*theta))\/(4*sigma_x**2) + (np.sin(2*theta))\/(4*sigma_y**2) c = (np.sin(theta)**2)\/(2*sigma_x**2) + (np.cos(theta)**2)\/(2*sigma_y**2) return offset + amplitude*np.exp( - (a*((x-xo)**2) + 2*b*(x-xo)*(y-yo) + c*((y-yo)**2))) # Create x and y indices x = np.linspace(0, 200, 201) y = np.linspace(0, 200, 201) x,y = np.meshgrid(x, y) #create data data = twoD_Gaussian((x, y), 3, 100, 100, 20, 40, 0, 10) # plot twoD_Gaussian data generated above plt.figure() plt.imshow(data) plt.colorbar() # add some noise to the data and try to fit the data generated beforehand initial_guess = (3,100,100,20,40,0,10) data_noisy = data + 0.2*np.random.normal(size=len(x)) popt, pcov = opt.curve_fit(twoD_Gaussian, (x,y), data_noisy, p0 = initial_guess) ``` Here is the error message I get when running the script using winpython 64-bit Python 2.7: ``` ValueError: object too deep for desired array Traceback (most recent call last): File \"\", line 1, in File \"C:\\Python\\WinPython-64bit-2.7.6.2\\python-2.7.6.amd64\\lib\\site-packages\\spyderlib\\widgets\\externalshell\\sitecustomize.py\", line 540, in runfile execfile(filename, namespace) File \"E:\/Work Computer\/Software\/Python\/Fitting scripts\/2D Gaussian function fit\/2D_Gaussian_LevMarq_v2.py\", line 39, in popt, pcov = opt.curve_fit(twoD_Gaussian, (x,y), data_noisy, p0 = initial_guess) File \"C:\\Python\\WinPython-64bit-2.7.6.2\\python-2.7.6.amd64\\lib\\site-packages\\scipy\\optimize\\minpack.py\", line 533, in curve_fit res = leastsq(func, p0, args=args, full_output=1, **kw) File 
\"C:\\Python\\WinPython-64bit-2.7.6.2\\python-2.7.6.amd64\\lib\\site-packages\\scipy\\optimize\\minpack.py\", line 378, in leastsq gtol, maxfev, epsfcn, factor, diag) minpack.error: Result from function call is not a proper array of floats. ``` What is it that am I doing wrong? Is it how I pass the independent variables to the model function\/curve_fit?", "response":"The output of twoD_Gaussian needs to be 1D. What you can do is add a .ravel() onto the end of the last line, like this: ``` def twoD_Gaussian(xy, amplitude, xo, yo, sigma_x, sigma_y, theta, offset): x, y = xy xo = float(xo) yo = float(yo) a = (np.cos(theta)**2)\/(2*sigma_x**2) + (np.sin(theta)**2)\/(2*sigma_y**2) b = -(np.sin(2*theta))\/(4*sigma_x**2) + (np.sin(2*theta))\/(4*sigma_y**2) c = (np.sin(theta)**2)\/(2*sigma_x**2) + (np.cos(theta)**2)\/(2*sigma_y**2) g = offset + amplitude*np.exp( - (a*((x-xo)**2) + 2*b*(x-xo)*(y-yo) + c*((y-yo)**2))) return g.ravel() ``` You'll obviously need to reshape the output for plotting, e.g: ``` # Create x and y indices x = np.linspace(0, 200, 201) y = np.linspace(0, 200, 201) x, y = np.meshgrid(x, y) #create data data = twoD_Gaussian((x, y), 3, 100, 100, 20, 40, 0, 10) # plot twoD_Gaussian data generated above plt.figure() plt.imshow(data.reshape(201, 201)) plt.colorbar() ``` Do the fitting as before: ``` # add some noise to the data and try to fit the data generated beforehand initial_guess = (3,100,100,20,40,0,10) data_noisy = data + 0.2*np.random.normal(size=data.shape) popt, pcov = opt.curve_fit(twoD_Gaussian, (x, y), data_noisy, p0=initial_guess) ``` And plot the results: ``` data_fitted = twoD_Gaussian((x, y), *popt) fig, ax = plt.subplots(1, 1) #ax.hold(True) For older versions. 
This has now been deprecated and later removed ax.imshow(data_noisy.reshape(201, 201), cmap=plt.cm.jet, origin='lower', extent=(x.min(), x.max(), y.min(), y.max())) ax.contour(x, y, data_fitted.reshape(201, 201), 8, colors='w') plt.show() ```", "best_answers_score":0.8, "library_name":"scipy", "question_url":"https:\/\/stackoverflow.com\/questions\/21566379\/fitting-a-2d-gaussian-function-using-scipy-optimize-curve-fit-valueerror-and-m", "best_answers_votes":55, "question_length":2769, "response_length":1581 }, { "question":"Check if two scipy.sparse.csr_matrix are equal I want to check if two csr_matrix are equal. If I do: ``` x.__eq__(y) ``` I get: ``` raise ValueError(\"The truth value of an array with more than one \" ValueError: The truth value of an array with more than one element is ambiguous. Use a.any() or a.all(). ``` This, However, works well: ``` assert (z in x for z in y) ``` Is there a better way to do it? maybe using some scipy optimized function instead? Thanks so much", "response":"Can we assume they are the same shape? ``` In [202]: a=sparse.csr_matrix([[0,1],[1,0]]) In [203]: b=sparse.csr_matrix([[0,1],[1,1]]) In [204]: (a!=b).nnz==0 Out[204]: False ``` This checks the sparsity of the inequality array. It will give you an efficiency warning if you try a==b (at least the 1st time you use it). That's because it has to test all those zeros. It can't take much advantage of the sparsity. You need a relatively recent version to use logical operators like this. Were you trying to use x.__eq__(y) in some if expression, or did you get error from just that expression? In general you probably want to check several parameters first. Same shape, same nnz, same dtype. You need to be careful with floats. For dense arrays np.allclose is a good way of testing equality. And if the sparse arrays aren't too large, that might be good as well ``` np.allclose(a.A, b.A) ``` allclose uses all(less_equal(abs(x-y), atol + rtol * abs(y))). 
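Putting those checks together, a small helper for exact comparison might look like this (a sketch, not from the original answer; the name csr_equal is made up):

```python
from scipy import sparse

def csr_equal(a, b):
    # Comparing matrices of different shapes would raise, so check shape first.
    if a.shape != b.shape:
        return False
    # No stored "unequal" entries means the matrices are elementwise equal.
    return (a != b).nnz == 0

x = sparse.csr_matrix([[0, 1], [1, 0]])
y = sparse.csr_matrix([[0, 1], [1, 1]])
print(csr_equal(x, x))  # True
print(csr_equal(x, y))  # False
```

For float data you would still want the allclose-style tolerance test described above rather than exact equality.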
You can use a-b, but I suspect that this too will give an efficiency warning.", "best_answers_score":0.8, "library_name":"scipy", "question_url":"https:\/\/stackoverflow.com\/questions\/30685024\/check-if-two-scipy-sparse-csr-matrix-are-equal", "best_answers_votes":53, "question_length":467, "response_length":1027 }, { "question":"How to plot a 3D density map in python with matplotlib I have a large dataset of (x,y,z) protein positions and would like to plot areas of high occupancy as a heatmap. Ideally the output should look similar to the volumetric visualisation below, but I'm not sure how to achieve this with matplotlib. My initial idea was to display my positions as a 3D scatter plot and color their density via a KDE. I coded this up as follows with test data: ```py import numpy as np from scipy import stats import matplotlib.pyplot as plt from mpl_toolkits.mplot3d import Axes3D mu, sigma = 0, 0.1 x = np.random.normal(mu, sigma, 1000) y = np.random.normal(mu, sigma, 1000) z = np.random.normal(mu, sigma, 1000) xyz = np.vstack([x,y,z]) density = stats.gaussian_kde(xyz)(xyz) idx = density.argsort() x, y, z, density = x[idx], y[idx], z[idx], density[idx] fig = plt.figure() ax = fig.add_subplot(111, projection='3d') ax.scatter(x, y, z, c=density) plt.show() ``` This works well! However, my real data contains many thousands of data points, and calculating the KDE and the scatter plot becomes extremely slow. A small sample of my real data: My research would suggest that a better option is to evaluate the gaussian kde on a grid.
I\u2019m just not sure how to do this in 3D: ```py import numpy as np from scipy import stats import matplotlib.pyplot as plt from mpl_toolkits.mplot3d import Axes3D mu, sigma = 0, 0.1 x = np.random.normal(mu, sigma, 1000) y = np.random.normal(mu, sigma, 1000) nbins = 50 xy = np.vstack([x,y]) density = stats.gaussian_kde(xy) xi, yi = np.mgrid[x.min():x.max():nbins*1j, y.min():y.max():nbins*1j] di = density(np.vstack([xi.flatten(), yi.flatten()])) fig = plt.figure() ax = fig.add_subplot(111) ax.pcolormesh(xi, yi, di.reshape(xi.shape)) plt.show() ```", "response":"Thanks to mwaskon for suggesting the mayavi library. I recreated the density scatter plot in mayavi as follows: ```py import numpy as np from scipy import stats from mayavi import mlab mu, sigma = 0, 0.1 x = 10*np.random.normal(mu, sigma, 5000) y = 10*np.random.normal(mu, sigma, 5000) z = 10*np.random.normal(mu, sigma, 5000) xyz = np.vstack([x,y,z]) kde = stats.gaussian_kde(xyz) density = kde(xyz) # Plot scatter with mayavi figure = mlab.figure('DensityPlot') pts = mlab.points3d(x, y, z, density, scale_mode='none', scale_factor=0.07) mlab.axes() mlab.show() ``` Setting the scale_mode to 'none' prevents glyphs from being scaled in proportion to the density vector. In addition, for large datasets, I disabled scene rendering and used a mask to reduce the number of points.
``` # Plot scatter with mayavi figure = mlab.figure('DensityPlot') figure.scene.disable_render = True pts = mlab.points3d(x, y, z, density, scale_mode='none', scale_factor=0.07) mask = pts.glyph.mask_points mask.maximum_number_of_points = x.size mask.on_ratio = 1 pts.glyph.mask_input_points = True figure.scene.disable_render = False mlab.axes() mlab.show() ``` Next, to evaluate the gaussian kde on a grid: ``` import numpy as np from scipy import stats from mayavi import mlab mu, sigma = 0, 0.1 x = 10*np.random.normal(mu, sigma, 5000) y = 10*np.random.normal(mu, sigma, 5000) z = 10*np.random.normal(mu, sigma, 5000) xyz = np.vstack([x,y,z]) kde = stats.gaussian_kde(xyz) # Evaluate kde on a grid xmin, ymin, zmin = x.min(), y.min(), z.min() xmax, ymax, zmax = x.max(), y.max(), z.max() xi, yi, zi = np.mgrid[xmin:xmax:30j, ymin:ymax:30j, zmin:zmax:30j] coords = np.vstack([item.ravel() for item in [xi, yi, zi]]) density = kde(coords).reshape(xi.shape) # Plot scatter with mayavi figure = mlab.figure('DensityPlot') grid = mlab.pipeline.scalar_field(xi, yi, zi, density) min = density.min() max=density.max() mlab.pipeline.volume(grid, vmin=min, vmax=min + .5*(max-min)) mlab.axes() mlab.show() ``` As a final improvement, I sped up the evaluation of the kernel density function by calling the kde function in parallel.
``` import numpy as np from scipy import stats from mayavi import mlab import multiprocessing def calc_kde(data): return kde(data.T) mu, sigma = 0, 0.1 x = 10*np.random.normal(mu, sigma, 5000) y = 10*np.random.normal(mu, sigma, 5000) z = 10*np.random.normal(mu, sigma, 5000) xyz = np.vstack([x,y,z]) kde = stats.gaussian_kde(xyz) # Evaluate kde on a grid xmin, ymin, zmin = x.min(), y.min(), z.min() xmax, ymax, zmax = x.max(), y.max(), z.max() xi, yi, zi = np.mgrid[xmin:xmax:30j, ymin:ymax:30j, zmin:zmax:30j] coords = np.vstack([item.ravel() for item in [xi, yi, zi]]) # Multiprocessing cores = multiprocessing.cpu_count() pool = multiprocessing.Pool(processes=cores) results = pool.map(calc_kde, np.array_split(coords.T, 2)) density = np.concatenate(results).reshape(xi.shape) # Plot scatter with mayavi figure = mlab.figure('DensityPlot') grid = mlab.pipeline.scalar_field(xi, yi, zi, density) min = density.min() max=density.max() mlab.pipeline.volume(grid, vmin=min, vmax=min + .5*(max-min)) mlab.axes() mlab.show() ```", "best_answers_score":0.8, "library_name":"scipy", "question_url":"https:\/\/stackoverflow.com\/questions\/25286811\/how-to-plot-a-3d-density-map-in-python-with-matplotlib", "best_answers_votes":54, "question_length":1765, "response_length":3127 }, { "question":"How to correctly use scipy's skew and kurtosis functions? The skewness is a parameter to measure the symmetry of a data set and the kurtosis to measure how heavy its tails are compared to a normal distribution, see for example here. scipy.stats provides an easy way to calculate these two quantities, see scipy.stats.kurtosis and scipy.stats.skew. In my understanding, the skewness and kurtosis of a normal distribution should both be 0 using the functions just mentioned. 
That is, however, not the case with my code: ``` import numpy as np from scipy.stats import kurtosis from scipy.stats import skew x = np.linspace( -5, 5, 1000 ) y = 1.\/(np.sqrt(2.*np.pi)) * np.exp( -.5*(x)**2 ) # normal distribution print( 'excess kurtosis of normal distribution (should be 0): {}'.format( kurtosis(y) )) print( 'skewness of normal distribution (should be 0): {}'.format( skew(y) )) ``` The output is: excess kurtosis of normal distribution (should be 0): -0.307393087742 skewness of normal distribution (should be 0): 1.11082371392 What am I doing wrong? The versions I am using are ``` python: 2.7.6 scipy : 0.17.1 numpy : 1.12.1 ```", "response":"These functions calculate moments of the probability density distribution (that's why they take only one parameter) and don't care about the \"functional form\" of the values. These are meant for \"random datasets\" (think of them as measures like mean, standard deviation, variance): ``` import numpy as np from scipy.stats import kurtosis, skew x = np.random.normal(0, 2, 10000) # create random values based on a normal distribution print( 'excess kurtosis of normal distribution (should be 0): {}'.format( kurtosis(x) )) print( 'skewness of normal distribution (should be 0): {}'.format( skew(x) )) ``` which gives: ``` excess kurtosis of normal distribution (should be 0): -0.024291887786943356 skewness of normal distribution (should be 0): 0.009666157036010928 ``` changing the number of random values increases the accuracy: ``` x = np.random.normal(0, 2, 10000000) ``` Leading to: ``` excess kurtosis of normal distribution (should be 0): -0.00010309478605163847 skewness of normal distribution (should be 0): -0.0006751744848755031 ``` In your case the function \"assumes\" that each value has the same \"probability\" (because the values are equally distributed and each value occurs only once) so from the point of view of skew and kurtosis it's dealing with a non-Gaussian probability density (not sure what exactly
this is) which explains why the resulting values aren't even close to 0: ``` import numpy as np from scipy.stats import kurtosis, skew x_random = np.random.normal(0, 2, 10000) x = np.linspace( -5, 5, 10000 ) y = 1.\/(np.sqrt(2.*np.pi)) * np.exp( -.5*(x)**2 ) # normal distribution import matplotlib.pyplot as plt f, (ax1, ax2) = plt.subplots(1, 2) ax1.hist(x_random, bins='auto') ax1.set_title('probability density (random)') ax2.hist(y, bins='auto') ax2.set_title('(your dataset)') plt.tight_layout() ```", "best_answers_score":0.8, "library_name":"scipy", "question_url":"https:\/\/stackoverflow.com\/questions\/45483890\/how-to-correctly-use-scipys-skew-and-kurtosis-functions", "best_answers_votes":51, "question_length":1126, "response_length":1825 }, { "question":"Reverse Box-Cox transformation I am using SciPy's boxcox function to perform a Box-Cox transformation on a continuous variable. ``` from scipy.stats import boxcox import numpy as np y = np.random.random(100) y_box, lambda_ = ss.boxcox(y + 1) # Add 1 to be able to transform 0 values ``` Then, I fit a statistical model to predict the values of this Box-Cox transformed variable. The model predictions are in the Box-Cox scale and I want to transform them to the original scale of the variable. ``` from sklearn.ensemble import RandomForestRegressor rf = RandomForestRegressor() X = np.random.random((100, 100)) rf.fit(X, y_box) pred_box = rf.predict(X) ``` However, I can't find a SciPy function that performs a reverse Box-Cox transformation given transformed data and lambda. Is there such a function? I coded an inverse transformation for now. ``` pred_y = np.power((y_box * lambda_) + 1, 1 \/ lambda_) - 1 ```", "response":"SciPy has added an inverse Box-Cox transformation. https:\/\/docs.scipy.org\/doc\/scipy\/reference\/generated\/scipy.special.inv_boxcox.html scipy.special.inv_boxcox scipy.special.inv_boxcox(y, lmbda) = Compute the inverse of the Box-Cox transformation. 
Find x such that: ``` y = (x**lmbda - 1) \/ lmbda if lmbda != 0 log(x) if lmbda == 0 ``` Parameters: y : array_like Data to be transformed. lmbda : array_like Power parameter of the Box-Cox transform. Returns: x : array Transformed data. Notes New in version 0.16.0. Example: ``` from scipy.special import boxcox, inv_boxcox y = boxcox([1, 4, 10], 2.5) inv_boxcox(y, 2.5) output: array([1., 4., 10.]) ```", "best_answers_score":0.8, "library_name":"scipy", "question_url":"https:\/\/stackoverflow.com\/questions\/26391454\/reverse-box-cox-transformation", "best_answers_votes":24, "question_length":912, "response_length":650 }, { "question":"Populate a Pandas SparseDataFrame from a SciPy Sparse Matrix I noticed Pandas now has support for Sparse Matrices and Arrays. Currently, I create DataFrame()s like this: ``` return DataFrame(matrix.toarray(), columns=features, index=observations) ``` Is there a way to create a SparseDataFrame() with a scipy.sparse.csc_matrix() or csr_matrix()? Converting to dense format kills RAM badly. Thanks!", "response":"A direct conversion is not supported ATM. Contributions are welcome! 
Try this; it should be OK on memory, as the SparseSeries is much like a csc_matrix (for one column) and pretty space-efficient: ``` In [36]: row = np.array([0,2,2,0,1,2]) In [37]: col = np.array([0,0,1,2,2,2]) In [38]: data = np.array([1,2,3,4,5,6],dtype='float64') In [39]: m = csc_matrix( (data,(row,col)), shape=(3,3) ) In [40]: m Out[40]: <3x3 sparse matrix of type '<type 'numpy.float64'>' with 6 stored elements in Compressed Sparse Column format> In [46]: pd.SparseDataFrame([ pd.SparseSeries(m[i].toarray().ravel()) for i in np.arange(m.shape[0]) ]) Out[46]: 0 1 2 0 1 0 4 1 0 0 5 2 2 3 6 In [47]: df = pd.SparseDataFrame([ pd.SparseSeries(m[i].toarray().ravel()) for i in np.arange(m.shape[0]) ]) In [48]: type(df) Out[48]: pandas.sparse.frame.SparseDataFrame ```", "best_answers_score":0.8, "library_name":"scipy", "question_url":"https:\/\/stackoverflow.com\/questions\/17818783\/populate-a-pandas-sparsedataframe-from-a-scipy-sparse-matrix", "best_answers_votes":30, "question_length":397, "response_length":744 }, { "question":"SciPy build\/install Mac Osx I successfully built\/installed NumPy on my Mac OS X for Python 2.7.3. Now I would like to build\/install scipy as well. I downloaded it from GitHub. Went into the directory.
Ran python setup.py build and it seemed to be working until it came across this error: ``` customize Gnu95FCompiler Could not locate executable gfortran Could not locate executable f95 customize NAGFCompiler customize AbsoftFCompiler Could not locate executable f90 Could not locate executable f77 customize IBMFCompiler Could not locate executable xlf90 Could not locate executable xlf customize IntelFCompiler Could not locate executable ifort Could not locate executable ifc customize GnuFCompiler Could not locate executable g77 customize G95FCompiler Could not locate executable g95 customize PGroupFCompiler Could not locate executable pgfortran don't know how to compile Fortran code on platform 'posix' building 'dfftpack' library error: library dfftpack has Fortran sources but no Fortran compiler found ``` I thought that I had Fortran installed for NumPy...guess not? How would I download it?", "response":"Your problem is that you need to install a Fortran compiler to build scipy. Also, if you already have a numpy that's built with Fortran support disabled, you may have to replace it. Some of Apple's pre-installed Python versions have such a numpy build pre-installed. The easiest way to get Fortran is with Homebrew. As the docs say, you need to install Xcode and its Command Line Tools first. (The way to install the Command Line Tools changes with almost each major version of Xcode, so see the linked docs for an up-to-date explanation.) Then install Homebrew. The installation URL has changed a few times, so see the Homebrew home page or installation instructions(http:\/\/brew.sh\/), but it will be something like: ``` ruby -e \"$(curl -fsSL https:\/\/raw.githubusercontent.com\/Homebrew\/install\/master\/install)\" ``` Then: ``` brew install gcc ``` (Note that until some time in 2014, gfortran was a separate recipe from gcc, so the command was brew install gfortran. 
But if you try that now, you'll get an error saying \"GNU Fortran is now provided as part of GCC, and can be installed with: brew install gcc\".) You really want to use pip to install scipy, so if you don't have that, get it first. Apple's pre-installed Python, at least in 10.7 and 10.8, includes easy_install but not pip, so the easiest way to do that is: ``` sudo easy_install pip ``` However, you may want to consider using a virtualenv instead of a global install (in which case you also want to remove the sudo on the following commands). Now that you've got gfortran and pip, all you have to do is this: ``` sudo pip install --upgrade numpy sudo pip install scipy ``` Caveats: The instructions above are for Apple's pre-installed version(s) of Python. If you're using a different version of Python, you really should consider not doing so. Keeping the paths, installed packages, etc. in sync is a nightmare. The exception to this is if you want a Python 3.x version, in which case installing it from python.org or Homebrew is perfectly reasonable. There will be no collisions, because python, pip2.7, etc. will be for Apple's Python; python3, pip3.3, etc. for the 3.x version. If you already have pip, but fear it may be out of date, pip install --upgrade pip. (Besides the security and robustness benefits, this may save you a whole lot of time by making you compatible with binary wheels for some of the scientific stack or other modules.) For most non-Apple Python installations (and maybe even Apple's in 10.9 or 10.10; I haven't checked), you should not use easy_install to install pip. Follow the pip install instructions. But first make sure you don't already have it. If you're using virtualenv\/venv, your virtual environments will already include pip. Python 3.4 or later may (and will, if from a python.org installer) include a pip bootstrap. If your 3.4+ doesn't already have pip, you may want to python -m ensurepip to install it. 
Some third-party installs, like Homebrew or ActiveState, include pip. For Python 3.3 or later, you may want to use the built-in venv instead of virtualenv. If you're using MacPorts, Fink, gentoo-alt, etc., you should install the scipy package that comes with your package manager, and it will drag in whatever else it needs (maybe even including rebuilding Python and GCC). Third-party binary installs like Enthought and ActiveState may already include scipy and everything else you need. If not, the instructions are basically the same as above, but you'll have to guess which steps to skip or follow, whether to sudo, etc. If you're using a non-Apple build of Python 2.7, and you want to avoid the PATH problems, you have to do two things: First, do not, ever, install any Python packages that include scripts or binaries (including pip itself) in more than one Python. For example, if you install ipython for both Apple 2.7 and Homebrew 2.7, both will attempt to create scripts named \/usr\/local\/bin\/ipython and \/usr\/local\/bin\/ipython-2.7. If you're lucky, one install will fail. Otherwise, they'll both succeed, one will end up overwriting the other, and you will have no way of running the overwritten version. Second, make sure the path to the alternate Python's scripts and binaries comes before Apple's in the PATH. Depending on which alternate Python you've installed and which instructions you followed, this could be: \/usr\/local\/bin \/Library\/Frameworks\/Python.framework\/Versions\/2.7\/bin \/usr\/local\/share\/python2.7\/bin \/usr\/local\/Cellar\/python\/2.7.3\/bin something else Whatever the path is, you need to edit your PATH variable. If you want to affect GUI apps (and LaunchAgents, etc.), there is apparently no longer a supported way to do this, but the deprecated QA1067 does seem to still work in Lion. It's also what the Homebrew FAQ and Python FAQ suggest. 
If you only care about command-line sessions (both Terminal.app and remote ssh), you can instead just do the standard Unix thing of editing the appropriate profile file. Which profile file is appropriate depends on what you want to affect. (All users or just one user? bash or any shell? And so on.) If you don't know which one you want, you really should do some research. If you don't want to bother learning, just do ~\/.profile and then don't complain if it wasn't what you wanted. Either way, you need to make sure that the appropriate path comes before \/usr\/bin in the PATH. So, for example, you could add the following to ~\/.profile: ``` PATH=\/usr\/local\/bin:$PATH export PATH ``` (You will of course need to either create a new Terminal shell, or source the script, before it takes effect.) If you're using Homebrew, brew doctor will tell you if you got it right.", "best_answers_score":0.8, "library_name":"scipy", "question_url":"https:\/\/stackoverflow.com\/questions\/14821297\/scipy-build-install-mac-osx", "best_answers_votes":94, "question_length":1105, "response_length":5649 }, { "question":"Python out of memory on large CSV file (numpy) I have a 3GB CSV file that I try to read with Python; I need the column-wise median. ``` from numpy import * def data(): return genfromtxt('All.csv',delimiter=',') data = data() # This is where it fails already.
med = zeros(len(data[0])) data = data.T for i in xrange(len(data)): m = median(data[i]) med[i] = 1.0\/float(m) print med ``` The error that I get is this: ``` Python(1545) malloc: *** mmap(size=16777216) failed (error code=12) *** error: can't allocate region *** set a breakpoint in malloc_error_break to debug Traceback (most recent call last): File \"Normalize.py\", line 40, in data = data() File \"Normalize.py\", line 39, in data return genfromtxt('All.csv',delimiter=',') File \"\/Library\/Frameworks\/Python.framework\/Versions\/2.6\/lib\/python2.6\/site-packages\/numpy\/lib\/npyio.py\", line 1495, in genfromtxt for (i, line) in enumerate(itertools.chain([first_line, ], fhd)): MemoryError ``` I think it's just an out-of-memory error. I am running 64-bit Mac OS X with 4GB of RAM and both numpy and Python compiled in 64-bit mode. How do I fix this? Should I try a distributed approach, just for the memory management? Thanks EDIT: Also tried with this but no luck... ``` genfromtxt('All.csv',delimiter=',', dtype=float16) ```", "response":"As other folks have mentioned, for a really large file, you're better off iterating. However, you do commonly want the entire thing in memory for various reasons. genfromtxt is much less efficient than loadtxt (though it handles missing data, whereas loadtxt is more \"lean and mean\", which is why the two functions co-exist). If your data is very regular (e.g. just simple delimited rows of all the same type), you can also improve on either by using numpy.fromiter. If you have enough RAM, consider using np.loadtxt('yourfile.txt', delimiter=',') (You may also need to specify skiprows if you have a header on the file.) As a quick comparison, loading a ~500MB text file with loadtxt uses ~900MB of RAM at peak usage, while loading the same file with genfromtxt uses ~2.5GB. (Memory-profile plots for loadtxt and genfromtxt omitted.) Alternatively, consider something like the following. It will only work for very simple, regular data, but it's quite fast.
(loadtxt and genfromtxt do a lot of guessing and error-checking. If your data is very simple and regular, you can improve on them greatly.) ``` import numpy as np def generate_text_file(length=1e6, ncols=20): data = np.random.random((length, ncols)) np.savetxt('large_text_file.csv', data, delimiter=',') def iter_loadtxt(filename, delimiter=',', skiprows=0, dtype=float): def iter_func(): with open(filename, 'r') as infile: for _ in range(skiprows): next(infile) for line in infile: line = line.rstrip().split(delimiter) for item in line: yield dtype(item) iter_loadtxt.rowlength = len(line) data = np.fromiter(iter_func(), dtype=dtype) data = data.reshape((-1, iter_loadtxt.rowlength)) return data #generate_text_file() data = iter_loadtxt('large_text_file.csv') ``` Fromiter", "best_answers_score":0.8, "library_name":"scipy", "question_url":"https:\/\/stackoverflow.com\/questions\/8956832\/python-out-of-memory-on-large-csv-file-numpy", "best_answers_votes":73, "question_length":1278, "response_length":1693 }, { "question":"Calling statistics functions from Scipy This may well be completely trivial. I want to call the spearmanr function from scipy: (the data are just examples) ``` import scipy from numpy import * Len = [2,3,5,7,2] Pop = [5,2,6,3,2] a = array(Len) b = array(Pop) print scipy.stats.spearmanr(a,b) ``` This generates the error: ``` AttributeError: 'module' object has no attribute 'spearmanr' ``` What am I doing wrong? Thanks", "response":"Use import scipy.stats. Then it works. Importing a package does not automatically import all the subpackages\/modules. 
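For the arrays in the question, the fix looks like this (a minimal sketch using only the standard scipy.stats API):

```python
import numpy as np
from scipy import stats  # import the subpackage explicitly

a = np.array([2, 3, 5, 7, 2])
b = np.array([5, 2, 6, 3, 2])
# spearmanr returns the correlation coefficient and a p-value
rho, p = stats.spearmanr(a, b)
print(rho, p)
```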
In these cases you have to do this explicitly.", "best_answers_score":0.8, "library_name":"scipy", "question_url":"https:\/\/stackoverflow.com\/questions\/6664996\/calling-statistics-functions-from-scipy", "best_answers_votes":53, "question_length":420, "response_length":164 }, { "question":"fitting exponential decay with no initial guessing Does anyone know a scipy\/numpy module which will allow to fit exponential decay to data? Google search returned a few blog posts, for example - http:\/\/exnumerus.blogspot.com\/2010\/04\/how-to-fit-exponential-decay-example-in.html , but that solution requires y-offset to be pre-specified, which is not always possible EDIT: curve_fit works, but it can fail quite miserably with no initial guess for parameters, and that is sometimes needed. The code I'm working with is ``` #!\/usr\/bin\/env python import numpy as np import scipy as sp import pylab as pl from scipy.optimize.minpack import curve_fit x = np.array([ 50., 110., 170., 230., 290., 350., 410., 470., 530., 590.]) y = np.array([ 3173., 2391., 1726., 1388., 1057., 786., 598., 443., 339., 263.]) smoothx = np.linspace(x[0], x[-1], 20) guess_a, guess_b, guess_c = 4000, -0.005, 100 guess = [guess_a, guess_b, guess_c] exp_decay = lambda x, A, t, y0: A * np.exp(x * t) + y0 params, cov = curve_fit(exp_decay, x, y, p0=guess) A, t, y0 = params print \"A = %s\\nt = %s\\ny0 = %s\\n\" % (A, t, y0) pl.clf() best_fit = lambda x: A * np.exp(t * x) + y0 pl.plot(x, y, 'b.') pl.plot(smoothx, best_fit(smoothx), 'r-') pl.show() ``` which works, but if we remove \"p0=guess\", it fails miserably.", "response":"You have two options: Linearize the system, and fit a line to the log of the data. Use a non-linear solver (e.g. scipy.optimize.curve_fit The first option is by far the fastest and most robust. However, it requires that you know the y-offset a-priori, otherwise it's impossible to linearize the equation. (i.e. 
y = A * exp(K * t) can be linearized by fitting log(y) = log(A * exp(K * t)) = K * t + log(A), but y = A*exp(K*t) + C can only be linearized by fitting log(y - C) = K*t + log(A), and since C appears inside the logarithm of the observed y, C must be known beforehand for this to be a linear system. If you use a non-linear method, it's a) not guaranteed to converge and yield a solution, b) will be much slower, c) gives a much poorer estimate of the uncertainty in your parameters, and d) is often much less precise. However, a non-linear method has one huge advantage over a linear inversion: It can solve a non-linear system of equations. In your case, this means that you don't have to know C beforehand. Just to give an example, let's solve for y = A * exp(K * t) with some noisy data using both linear and nonlinear methods: ``` import numpy as np import matplotlib.pyplot as plt import scipy as sp import scipy.optimize def main(): # Actual parameters A0, K0, C0 = 2.5, -4.0, 2.0 # Generate some data based on these tmin, tmax = 0, 0.5 num = 20 t = np.linspace(tmin, tmax, num) y = model_func(t, A0, K0, C0) # Add noise noisy_y = y + 0.5 * (np.random.random(num) - 0.5) fig = plt.figure() ax1 = fig.add_subplot(2,1,1) ax2 = fig.add_subplot(2,1,2) # Non-linear Fit A, K, C = fit_exp_nonlinear(t, noisy_y) fit_y = model_func(t, A, K, C) plot(ax1, t, y, noisy_y, fit_y, (A0, K0, C0), (A, K, C)) ax1.set_title('Non-linear Fit') # Linear Fit (Note that we have to provide the y-offset (\"C\") value!!)
A, K = fit_exp_linear(t, noisy_y, C0) fit_y = model_func(t, A, K, C0) plot(ax2, t, y, noisy_y, fit_y, (A0, K0, C0), (A, K, C0)) ax2.set_title('Linear Fit') plt.show() def model_func(t, A, K, C): return A * np.exp(K * t) + C def fit_exp_linear(t, y, C=0): y = y - C y = np.log(y) K, A_log = np.polyfit(t, y, 1) A = np.exp(A_log) return A, K def fit_exp_nonlinear(t, y): opt_parms, parm_cov = sp.optimize.curve_fit(model_func, t, y, maxfev=1000) A, K, C = opt_parms return A, K, C def plot(ax, t, y, noisy_y, fit_y, orig_parms, fit_parms): A0, K0, C0 = orig_parms A, K, C = fit_parms ax.plot(t, y, 'k--', label='Actual Function:\\n $y = %0.2f e^{%0.2f t} + %0.2f$' % (A0, K0, C0)) ax.plot(t, fit_y, 'b-', label='Fitted Function:\\n $y = %0.2f e^{%0.2f t} + %0.2f$' % (A, K, C)) ax.plot(t, noisy_y, 'ro') ax.legend(bbox_to_anchor=(1.05, 1.1), fancybox=True, shadow=True) if __name__ == '__main__': main() ``` Note that the linear solution provides a result much closer to the actual values. However, we have to provide the y-offset value in order to use a linear solution.
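As a quick sanity check of that linearization (a hypothetical noise-free case with the offset C assumed known; not part of the code above):

```python
import numpy as np

A0, K0, C0 = 2.5, -4.0, 2.0
t = np.linspace(0, 0.5, 20)
y = A0 * np.exp(K0 * t) + C0

# With C known, log(y - C) = K*t + log(A) is a straight line in t,
# so an ordinary degree-1 polynomial fit recovers the parameters.
K, log_A = np.polyfit(t, np.log(y - C0), 1)
print(K, np.exp(log_A))  # K0 and A0, up to floating point
```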
The non-linear solution doesn't require this a-priori knowledge.", "best_answers_score":0.8, "library_name":"scipy", "question_url":"https:\/\/stackoverflow.com\/questions\/3938042\/fitting-exponential-decay-with-no-initial-guessing", "best_answers_votes":66, "question_length":1284, "response_length":2906 }, { "question":"python built-in function to do matrix reduction Does python have a built-in function that converts a matrix into row echelon form (also known as upper triangular)?", "response":"If you can use sympy, Matrix.rref() can do it: ``` In [8]: sympy.Matrix(np.random.random((4,4))).rref() Out[8]: ([1, 1.42711055402454e-17, 0, -1.38777878078145e-17] [0, 1.0, 0, 2.22044604925031e-16] [0, -2.3388341405089e-16, 1, -2.22044604925031e-16] [0, 3.65674099486992e-17, 0, 1.0], [0, 1, 2, 3]) ```", "best_answers_score":0.8, "library_name":"scipy", "question_url":"https:\/\/stackoverflow.com\/questions\/7664246\/python-built-in-function-to-do-matrix-reduction", "best_answers_votes":45, "question_length":163, "response_length":303 }, { "question":"Computing the correlation coefficient between two multi-dimensional arrays I have two arrays that have the shapes N X T and M X T. I'd like to compute the correlation coefficient across T between every possible pair of rows n and m (from N and M, respectively). What's the fastest, most pythonic way to do this? (Looping over N and M would seem to me to be neither fast nor pythonic.) I'm expecting the answer to involve numpy and\/or scipy. Right now my arrays are numpy arrays, but I'm open to converting them to a different type. I'm expecting my output to be an array with the shape N X M. N.B. When I say \"correlation coefficient,\" I mean the Pearson product-moment correlation coefficient. Here are some things to note: The numpy function correlate requires input arrays to be one-dimensional. The numpy function corrcoef accepts two-dimensional arrays, but they must have the same shape. 
The scipy.stats function pearsonr requires input arrays to be one-dimensional.", "response":"Correlation (default 'valid' case) between two 2D arrays: You can simply use matrix-multiplication np.dot like so - ``` out = np.dot(arr_one,arr_two.T) ``` Correlation with the default \"valid\" case between each pairwise row combinations (row1,row2) of the two input arrays would correspond to multiplication result at each (row1,row2) position. Row-wise Correlation Coefficient calculation for two 2D arrays: ``` def corr2_coeff(A, B): # Rowwise mean of input arrays & subtract from input arrays themeselves A_mA = A - A.mean(1)[:, None] B_mB = B - B.mean(1)[:, None] # Sum of squares across rows ssA = (A_mA**2).sum(1) ssB = (B_mB**2).sum(1) # Finally get corr coeff return np.dot(A_mA, B_mB.T) \/ np.sqrt(np.dot(ssA[:, None],ssB[None])) ``` This is based upon this solution to How to apply corr2 functions in Multidimentional arrays in MATLAB Benchmarking This section compares runtime performance with the proposed approach against generate_correlation_map & loopy pearsonr based approach listed in the other answer.(taken from the function test_generate_correlation_map() without the value correctness verification code at the end of it). Please note the timings for the proposed approach also include a check at the start to check for equal number of columns in the two input arrays, as also done in that other answer. The runtimes are listed next. 
Case #1: ``` In [106]: A = np.random.rand(1000, 100) In [107]: B = np.random.rand(1000, 100) In [108]: %timeit corr2_coeff(A, B) 100 loops, best of 3: 15 ms per loop In [109]: %timeit generate_correlation_map(A, B) 100 loops, best of 3: 19.6 ms per loop ``` Case #2: ``` In [110]: A = np.random.rand(5000, 100) In [111]: B = np.random.rand(5000, 100) In [112]: %timeit corr2_coeff(A, B) 1 loops, best of 3: 368 ms per loop In [113]: %timeit generate_correlation_map(A, B) 1 loops, best of 3: 493 ms per loop ``` Case #3: ``` In [114]: A = np.random.rand(10000, 10) In [115]: B = np.random.rand(10000, 10) In [116]: %timeit corr2_coeff(A, B) 1 loops, best of 3: 1.29 s per loop In [117]: %timeit generate_correlation_map(A, B) 1 loops, best of 3: 1.83 s per loop ``` The other loopy pearsonr based approach seemed too slow, but here are the runtimes for one small datasize - ``` In [118]: A = np.random.rand(1000, 100) In [119]: B = np.random.rand(1000, 100) In [120]: %timeit corr2_coeff(A, B) 100 loops, best of 3: 15.3 ms per loop In [121]: %timeit generate_correlation_map(A, B) 100 loops, best of 3: 19.7 ms per loop In [122]: %timeit pearsonr_based(A, B) 1 loops, best of 3: 33 s per loop ```", "best_answers_score":0.8, "library_name":"scipy", "question_url":"https:\/\/stackoverflow.com\/questions\/30143417\/computing-the-correlation-coefficient-between-two-multi-dimensional-arrays", "best_answers_votes":47, "question_length":972, "response_length":2550 }, { "question":"Exponential curve fitting in SciPy I have two NumPy arrays x and y. 
When I try to fit my data using exponential function and curve_fit (SciPy) with this simple code ``` #!\/usr\/bin\/env python from pylab import * from scipy.optimize import curve_fit x = np.array([399.75, 989.25, 1578.75, 2168.25, 2757.75, 3347.25, 3936.75, 4526.25, 5115.75, 5705.25]) y = np.array([109,62,39,13,10,4,2,0,1,2]) def func(x, a, b, c, d): return a*np.exp(b-c*x)+d popt, pcov = curve_fit(func, x, y) ``` I get wrong coefficients popt ``` [a,b,c,d] = [1., 1., 1., 24.19999988] ``` What is the problem?", "response":"First comment: since a*exp(b - c*x) = (a*exp(b))*exp(-c*x) = A*exp(-c*x), a or b is redundant. I'll drop b and use: ``` import matplotlib.pyplot as plt def func(x, a, c, d): return a*np.exp(-c*x)+d ``` That isn't the main issue. The problem is simply that curve_fit fails to converge to a solution to this problem when you use the default initial guess (which is all 1s). Check pcov; you'll see that it is inf. This is not surprising, because if c is 1, most of the values of exp(-c*x) underflow to 0: ``` In [32]: np.exp(-x) Out[32]: array([ 2.45912644e-174, 0.00000000e+000, 0.00000000e+000, 0.00000000e+000, 0.00000000e+000, 0.00000000e+000, 0.00000000e+000, 0.00000000e+000, 0.00000000e+000, 0.00000000e+000]) ``` This suggests that c should be small. A better initial guess is, say, p0 = (1, 1e-6, 1). 
Then I get: ``` In [36]: popt, pcov = curve_fit(func, x, y, p0=(1, 1e-6, 1)) In [37]: popt Out[37]: array([ 1.63561656e+02, 9.71142196e-04, -1.16854450e+00]) ``` This looks reasonable: ``` In [42]: xx = np.linspace(300, 6000, 1000) In [43]: yy = func(xx, *popt) In [44]: plt.plot(x, y, 'ko') Out[44]: [<matplotlib.lines.Line2D at ...>] In [45]: plt.plot(xx, yy) Out[45]: [<matplotlib.lines.Line2D at ...>] ```", "best_answers_score":0.8, "library_name":"scipy", "question_url":"https:\/\/stackoverflow.com\/questions\/21420792\/exponential-curve-fitting-in-scipy", "best_answers_votes":53, "question_length":578, "response_length":1152 }, { "question":"SciPy\/Python install on Ubuntu I'm currently following the tutorial Installing the SciPy Stack to install SciPy on Ubuntu 12.04 (Precise Pangolin) (I can't use apt-get install because I need a recent version). However, I get errors when I do the following commands: ``` python setup.py build sudo python setup.py install --prefix=\/usr\/local # Installs to \/usr\/local python setup.py build michael@michael-laptop-ubuntu:~\/Downloads\/scipy-0.11.0rc1$ python setup.py build Running from scipy source directory.
blas_opt_info: blas_mkl_info: libraries mkl,vml,guide not found in \/usr\/local\/lib libraries mkl,vml,guide not found in \/usr\/lib libraries mkl,vml,guide not found in \/usr\/lib\/i386-linux-gnu NOT AVAILABLE atlas_blas_threads_info: Setting PTATLAS=ATLAS libraries ptf77blas,ptcblas,atlas not found in \/usr\/local\/lib libraries ptf77blas,ptcblas,atlas not found in \/usr\/lib\/sse2 libraries ptf77blas,ptcblas,atlas not found in \/usr\/lib libraries ptf77blas,ptcblas,atlas not found in \/usr\/lib\/i386-linux-gnu\/sse2 libraries ptf77blas,ptcblas,atlas not found in \/usr\/lib\/i386-linux-gnu NOT AVAILABLE atlas_blas_info: libraries f77blas,cblas,atlas not found in \/usr\/local\/lib libraries f77blas,cblas,atlas not found in \/usr\/lib\/sse2 libraries f77blas,cblas,atlas not found in \/usr\/lib libraries f77blas,cblas,atlas not found in \/usr\/lib\/i386-linux-gnu\/sse2 libraries f77blas,cblas,atlas not found in \/usr\/lib\/i386-linux-gnu NOT AVAILABLE \/usr\/lib\/python2.7\/dist-packages\/numpy\/distutils\/system_info.py:1423: UserWarning: Atlas (http:\/\/math-atlas.sourceforge.net\/) libraries not found. Directories to search for the libraries can be specified in the numpy\/distutils\/site.cfg file (section [atlas]) or by setting the ATLAS environment variable. warnings.warn(AtlasNotFoundError.__doc__) blas_info: libraries blas not found in \/usr\/local\/lib libraries blas not found in \/usr\/lib libraries blas not found in \/usr\/lib\/i386-linux-gnu NOT AVAILABLE \/usr\/lib\/python2.7\/dist-packages\/numpy\/distutils\/system_info.py:1432: UserWarning: Blas (http:\/\/www.netlib.org\/blas\/) libraries not found. Directories to search for the libraries can be specified in the numpy\/distutils\/site.cfg file (section [blas]) or by setting the BLAS environment variable. 
warnings.warn(BlasNotFoundError.__doc__) blas_src_info: NOT AVAILABLE \/usr\/lib\/python2.7\/dist-packages\/numpy\/distutils\/system_info.py:1435: UserWarning: Blas (http:\/\/www.netlib.org\/blas\/) sources not found. Directories to search for the sources can be specified in the numpy\/distutils\/site.cfg file (section [blas_src]) or by setting the BLAS_SRC environment variable. warnings.warn(BlasSrcNotFoundError.__doc__) Traceback (most recent call last): File \"setup.py\", line 208, in setup_package() File \"setup.py\", line 199, in setup_package configuration=configuration ) File \"\/usr\/lib\/python2.7\/dist-packages\/numpy\/distutils\/core.py\", line 152, in setup config = configuration() File \"setup.py\", line 136, in configuration config.add_subpackage('scipy') File \"\/usr\/lib\/python2.7\/dist-packages\/numpy\/distutils\/misc_util.py\", line 1002, in add_subpackage caller_level = 2) File \"\/usr\/lib\/python2.7\/dist-packages\/numpy\/distutils\/misc_util.py\", line 971, in get_subpackage caller_level = caller_level + 1) File \"\/usr\/lib\/python2.7\/dist-packages\/numpy\/distutils\/misc_util.py\", line 908, in _get_configuration_from_setup_py config = setup_module.configuration(*args) File \"scipy\/setup.py\", line 8, in configuration config.add_subpackage('integrate') File \"\/usr\/lib\/python2.7\/dist-packages\/numpy\/distutils\/misc_util.py\", line 1002, in add_subpackage caller_level = 2) File \"\/usr\/lib\/python2.7\/dist-packages\/numpy\/distutils\/misc_util.py\", line 971, in get_subpackage caller_level = caller_level + 1) File \"\/usr\/lib\/python2.7\/dist-packages\/numpy\/distutils\/misc_util.py\", line 908, in _get_configuration_from_setup_py config = setup_module.configuration(*args) File \"scipy\/integrate\/setup.py\", line 10, in configuration blas_opt = get_info('blas_opt',notfound_action=2) File \"\/usr\/lib\/python2.7\/dist-packages\/numpy\/distutils\/system_info.py\", line 320, in get_info return cl().get_info(notfound_action) File 
\"\/usr\/lib\/python2.7\/dist-packages\/numpy\/distutils\/system_info.py\", line 471, in get_info raise self.notfounderror(self.notfounderror.__doc__) numpy.distutils.system_info.BlasNotFoundError: Blas (http:\/\/www.netlib.org\/blas\/) libraries not found. Directories to search for the libraries can be specified in the numpy\/distutils\/site.cfg file (section [blas]) or by setting the BLAS environment variable. Error in sys.excepthook: Traceback (most recent call last): File \"\/usr\/lib\/python2.7\/dist-packages\/apport_python_hook.py\", line 64, in apport_excepthook from apport.fileutils import likely_packaged, get_recent_crashes File \"\/usr\/lib\/python2.7\/dist-packages\/apport\/__init__.py\", line 1, in from apport.report import Report File \"\/usr\/lib\/python2.7\/dist-packages\/apport\/report.py\", line 18, in import problem_report File \"\/usr\/lib\/python2.7\/dist-packages\/problem_report.py\", line 14, in import zlib, base64, time, sys, gzip, struct, os File \"\/usr\/lib\/python2.7\/gzip.py\", line 10, in import io File \"\/home\/michael\/Downloads\/scipy-0.11.0rc1\/scipy\/io\/__init__.py\", line 83, in from matlab import loadmat, savemat, byteordercodes File \"\/home\/michael\/Downloads\/scipy-0.11.0rc1\/scipy\/io\/matlab\/__init__.py\", line 11, in from mio import loadmat, savemat File \"\/home\/michael\/Downloads\/scipy-0.11.0rc1\/scipy\/io\/matlab\/mio.py\", line 15, in from mio4 import MatFile4Reader, MatFile4Writer File \"\/home\/michael\/Downloads\/scipy-0.11.0rc1\/scipy\/io\/matlab\/mio4.py\", line 9, in import scipy.sparse File \"\/home\/michael\/Downloads\/scipy-0.11.0rc1\/scipy\/sparse\/__init__.py\", line 180, in from csr import * File \"\/home\/michael\/Downloads\/scipy-0.11.0rc1\/scipy\/sparse\/csr.py\", line 12, in from sparsetools import csr_tocsc, csr_tobsr, csr_count_blocks, \\ File \"\/home\/michael\/Downloads\/scipy-0.11.0rc1\/scipy\/sparse\/sparsetools\/__init__.py\", line 4, in from csr import * File 
\"\/home\/michael\/Downloads\/scipy-0.11.0rc1\/scipy\/sparse\/sparsetools\/csr.py\", line 25, in _csr = swig_import_helper() File \"\/home\/michael\/Downloads\/scipy-0.11.0rc1\/scipy\/sparse\/sparsetools\/csr.py\", line 17, in swig_import_helper import _csr ImportError: No module named _csr Original exception was: Traceback (most recent call last): File \"setup.py\", line 208, in setup_package() File \"setup.py\", line 199, in setup_package configuration=configuration ) File \"\/usr\/lib\/python2.7\/dist-packages\/numpy\/distutils\/core.py\", line 152, in setup config = configuration() File \"setup.py\", line 136, in configuration config.add_subpackage('scipy') File \"\/usr\/lib\/python2.7\/dist-packages\/numpy\/distutils\/misc_util.py\", line 1002, in add_subpackage caller_level = 2) File \"\/usr\/lib\/python2.7\/dist-packages\/numpy\/distutils\/misc_util.py\", line 971, in get_subpackage caller_level = caller_level + 1) File \"\/usr\/lib\/python2.7\/dist-packages\/numpy\/distutils\/misc_util.py\", line 908, in _get_configuration_from_setup_py config = setup_module.configuration(*args) File \"scipy\/setup.py\", line 8, in configuration config.add_subpackage('integrate') File \"\/usr\/lib\/python2.7\/dist-packages\/numpy\/distutils\/misc_util.py\", line 1002, in add_subpackage caller_level = 2) File \"\/usr\/lib\/python2.7\/dist-packages\/numpy\/distutils\/misc_util.py\", line 971, in get_subpackage caller_level = caller_level + 1) File \"\/usr\/lib\/python2.7\/dist-packages\/numpy\/distutils\/misc_util.py\", line 908, in _get_configuration_from_setup_py config = setup_module.configuration(*args) File \"scipy\/integrate\/setup.py\", line 10, in configuration blas_opt = get_info('blas_opt',notfound_action=2) File \"\/usr\/lib\/python2.7\/dist-packages\/numpy\/distutils\/system_info.py\", line 320, in get_info return cl().get_info(notfound_action) File \"\/usr\/lib\/python2.7\/dist-packages\/numpy\/distutils\/system_info.py\", line 471, in get_info raise 
self.notfounderror(self.notfounderror.__doc__) numpy.distutils.system_info.BlasNotFoundError: Blas (http:\/\/www.netlib.org\/blas\/) libraries not found. Directories to search for the libraries can be specified in the numpy\/distutils\/site.cfg file (section [blas]) or by setting the BLAS environment variable. ``` sudo python setup.py install --prefix=\/usr\/local # installs to \/usr\/local ``` michael@michael-laptop-ubuntu:~\/Downloads\/scipy-0.11.0rc1$ sudo python setup.py install --prefix=\/usr\/local [sudo] password for michael: Running from scipy source directory. blas_opt_info: blas_mkl_info: libraries mkl,vml,guide not found in \/usr\/local\/lib libraries mkl,vml,guide not found in \/usr\/lib libraries mkl,vml,guide not found in \/usr\/lib\/i386-linux-gnu NOT AVAILABLE atlas_blas_threads_info: Setting PTATLAS=ATLAS libraries ptf77blas,ptcblas,atlas not found in \/usr\/local\/lib libraries ptf77blas,ptcblas,atlas not found in \/usr\/lib\/sse2 libraries ptf77blas,ptcblas,atlas not found in \/usr\/lib libraries ptf77blas,ptcblas,atlas not found in \/usr\/lib\/i386-linux-gnu\/sse2 libraries ptf77blas,ptcblas,atlas not found in \/usr\/lib\/i386-linux-gnu NOT AVAILABLE atlas_blas_info: libraries f77blas,cblas,atlas not found in \/usr\/local\/lib libraries f77blas,cblas,atlas not found in \/usr\/lib\/sse2 libraries f77blas,cblas,atlas not found in \/usr\/lib libraries f77blas,cblas,atlas not found in \/usr\/lib\/i386-linux-gnu\/sse2 libraries f77blas,cblas,atlas not found in \/usr\/lib\/i386-linux-gnu NOT AVAILABLE \/usr\/lib\/python2.7\/dist-packages\/numpy\/distutils\/system_info.py:1423: UserWarning: Atlas (http:\/\/math-atlas.sourceforge.net\/) libraries not found. Directories to search for the libraries can be specified in the numpy\/distutils\/site.cfg file (section [atlas]) or by setting the ATLAS environment variable. 
warnings.warn(AtlasNotFoundError.__doc__) blas_info: libraries blas not found in \/usr\/local\/lib libraries blas not found in \/usr\/lib libraries blas not found in \/usr\/lib\/i386-linux-gnu NOT AVAILABLE \/usr\/lib\/python2.7\/dist-packages\/numpy\/distutils\/system_info.py:1432: UserWarning: Blas (http:\/\/www.netlib.org\/blas\/) libraries not found. Directories to search for the libraries can be specified in the numpy\/distutils\/site.cfg file (section [blas]) or by setting the BLAS environment variable. warnings.warn(BlasNotFoundError.__doc__) blas_src_info: NOT AVAILABLE \/usr\/lib\/python2.7\/dist-packages\/numpy\/distutils\/system_info.py:1435: UserWarning: Blas (http:\/\/www.netlib.org\/blas\/) sources not found. Directories to search for the sources can be specified in the numpy\/distutils\/site.cfg file (section [blas_src]) or by setting the BLAS_SRC environment variable. warnings.warn(BlasSrcNotFoundError.__doc__) Traceback (most recent call last): File \"setup.py\", line 208, in setup_package() File \"setup.py\", line 199, in setup_package configuration=configuration ) File \"\/usr\/lib\/python2.7\/dist-packages\/numpy\/distutils\/core.py\", line 152, in setup config = configuration() File \"setup.py\", line 136, in configuration config.add_subpackage('scipy') File \"\/usr\/lib\/python2.7\/dist-packages\/numpy\/distutils\/misc_util.py\", line 1002, in add_subpackage caller_level = 2) File \"\/usr\/lib\/python2.7\/dist-packages\/numpy\/distutils\/misc_util.py\", line 971, in get_subpackage caller_level = caller_level + 1) File \"\/usr\/lib\/python2.7\/dist-packages\/numpy\/distutils\/misc_util.py\", line 908, in _get_configuration_from_setup_py config = setup_module.configuration(*args) File \"scipy\/setup.py\", line 8, in configuration config.add_subpackage('integrate') File \"\/usr\/lib\/python2.7\/dist-packages\/numpy\/distutils\/misc_util.py\", line 1002, in add_subpackage caller_level = 2) File 
\"\/usr\/lib\/python2.7\/dist-packages\/numpy\/distutils\/misc_util.py\", line 971, in get_subpackage caller_level = caller_level + 1) File \"\/usr\/lib\/python2.7\/dist-packages\/numpy\/distutils\/misc_util.py\", line 908, in _get_configuration_from_setup_py config = setup_module.configuration(*args) File \"scipy\/integrate\/setup.py\", line 10, in configuration blas_opt = get_info('blas_opt',notfound_action=2) File \"\/usr\/lib\/python2.7\/dist-packages\/numpy\/distutils\/system_info.py\", line 320, in get_info return cl().get_info(notfound_action) File \"\/usr\/lib\/python2.7\/dist-packages\/numpy\/distutils\/system_info.py\", line 471, in get_info raise self.notfounderror(self.notfounderror.__doc__) numpy.distutils.system_info.BlasNotFoundError: Blas (http:\/\/www.netlib.org\/blas\/) libraries not found. Directories to search for the libraries can be specified in the numpy\/distutils\/site.cfg file (section [blas]) or by setting the BLAS environment variable. Error in sys.excepthook: Traceback (most recent call last): File \"\/usr\/lib\/python2.7\/dist-packages\/apport_python_hook.py\", line 64, in apport_excepthook from apport.fileutils import likely_packaged, get_recent_crashes File \"\/usr\/lib\/python2.7\/dist-packages\/apport\/__init__.py\", line 1, in from apport.report import Report File \"\/usr\/lib\/python2.7\/dist-packages\/apport\/report.py\", line 18, in import problem_report File \"\/usr\/lib\/python2.7\/dist-packages\/problem_report.py\", line 14, in import zlib, base64, time, sys, gzip, struct, os File \"\/usr\/lib\/python2.7\/gzip.py\", line 10, in import io File \"\/home\/michael\/Downloads\/scipy-0.11.0rc1\/scipy\/io\/__init__.py\", line 83, in from matlab import loadmat, savemat, byteordercodes File \"\/home\/michael\/Downloads\/scipy-0.11.0rc1\/scipy\/io\/matlab\/__init__.py\", line 11, in from mio import loadmat, savemat File \"\/home\/michael\/Downloads\/scipy-0.11.0rc1\/scipy\/io\/matlab\/mio.py\", line 15, in from mio4 import 
MatFile4Reader, MatFile4Writer File \"\/home\/michael\/Downloads\/scipy-0.11.0rc1\/scipy\/io\/matlab\/mio4.py\", line 9, in import scipy.sparse File \"\/home\/michael\/Downloads\/scipy-0.11.0rc1\/scipy\/sparse\/__init__.py\", line 180, in from csr import * File \"\/home\/michael\/Downloads\/scipy-0.11.0rc1\/scipy\/sparse\/csr.py\", line 12, in from sparsetools import csr_tocsc, csr_tobsr, csr_count_blocks, \\ File \"\/home\/michael\/Downloads\/scipy-0.11.0rc1\/scipy\/sparse\/sparsetools\/__init__.py\", line 4, in from csr import * File \"\/home\/michael\/Downloads\/scipy-0.11.0rc1\/scipy\/sparse\/sparsetools\/csr.py\", line 25, in _csr = swig_import_helper() File \"\/home\/michael\/Downloads\/scipy-0.11.0rc1\/scipy\/sparse\/sparsetools\/csr.py\", line 17, in swig_import_helper import _csr ImportError: No module named _csr Original exception was: Traceback (most recent call last): File \"setup.py\", line 208, in setup_package() File \"setup.py\", line 199, in setup_package configuration=configuration ) File \"\/usr\/lib\/python2.7\/dist-packages\/numpy\/distutils\/core.py\", line 152, in setup config = configuration() File \"setup.py\", line 136, in configuration config.add_subpackage('scipy') File \"\/usr\/lib\/python2.7\/dist-packages\/numpy\/distutils\/misc_util.py\", line 1002, in add_subpackage caller_level = 2) File \"\/usr\/lib\/python2.7\/dist-packages\/numpy\/distutils\/misc_util.py\", line 971, in get_subpackage caller_level = caller_level + 1) File \"\/usr\/lib\/python2.7\/dist-packages\/numpy\/distutils\/misc_util.py\", line 908, in _get_configuration_from_setup_py config = setup_module.configuration(*args) File \"scipy\/setup.py\", line 8, in configuration config.add_subpackage('integrate') File \"\/usr\/lib\/python2.7\/dist-packages\/numpy\/distutils\/misc_util.py\", line 1002, in add_subpackage caller_level = 2) File \"\/usr\/lib\/python2.7\/dist-packages\/numpy\/distutils\/misc_util.py\", line 971, in get_subpackage caller_level = caller_level + 1) 
File \"\/usr\/lib\/python2.7\/dist-packages\/numpy\/distutils\/misc_util.py\", line 908, in _get_configuration_from_setup_py config = setup_module.configuration(*args) File \"scipy\/integrate\/setup.py\", line 10, in configuration blas_opt = get_info('blas_opt',notfound_action=2) File \"\/usr\/lib\/python2.7\/dist-packages\/numpy\/distutils\/system_info.py\", line 320, in get_info return cl().get_info(notfound_action) File \"\/usr\/lib\/python2.7\/dist-packages\/numpy\/distutils\/system_info.py\", line 471, in get_info raise self.notfounderror(self.notfounderror.__doc__) numpy.distutils.system_info.BlasNotFoundError: Blas (http:\/\/www.netlib.org\/blas\/) libraries not found. Directories to search for the libraries can be specified in the numpy\/distutils\/site.cfg file (section [blas]) or by setting the BLAS environment variable. ``` Moreover, how do I test if this module was installed correctly?", "response":"My usual work flow is to use a virtualenv to have a Python distribution with up-to-date packages. Within this environment you can than install and update all packages you need with pip and without any sudo calls. So if you only need SciPy (and NumPy) this would be: ``` $ sudo apt-get install python-virtualenv python-pip $ sudo apt-get build-dep python-numpy python-scipy $ # Create virtualenv in home $ virtualenv .myenv $ # Activate the virtualenv $ source .myenv\/bin\/activate (myenv)$ pip install -U numpy (myenv)$ pip install -U scipy ``` (If you don't have root access, you can install virtualenv and pip as described here. However, you need the dependencies of NumPy and SciPy.) You can include source .myenv\/bin\/activate in your .bash_profile and your shell will always start with that environment. 
If you use requirement files it is easy to install and maintain the same environments on all your machines.", "best_answers_score":0.8, "library_name":"scipy", "question_url":"https:\/\/stackoverflow.com\/questions\/11863775\/scipy-python-install-on-ubuntu", "best_answers_votes":28, "question_length":16154, "response_length":914 }, { "question":"scipy.special import issue I have an issue with importing the scipy.special package. It isn't harmful, just annoying\/interesting. When I import scipy using import scipy as sp and then try to access sp.special I get: ``` >>> import scipy as sp >>> sp.special Traceback (most recent call last): File \"\", line 1, in AttributeError: 'module' object has no attribute 'special' >>> ``` but if I then do import scipy.special I can access the special module through scipy.special and sp.special: ``` >>> import scipy as sp >>> import scipy.special >>> scipy.special >>> sp.special >>> ``` So I now have the special module accessible through both sp and scipy namespaces. The interesting bit is that I can access the rest of scipy through the scipy namespace. First question: Why does the special module not import first time round? Second question: How can I get access to the special module through the sp namespace only, without defining the scipy namespace? Edit: using Python 2.7.2 and scipy 0.10.1", "response":"By default, \"import scipy\" does not import any subpackage. There are too many subpackages with large Fortran extension modules that are slow to load. I do not recommend doing import scipy or the abbreviated import scipy as sp. It's just not very useful. 
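For example (a minimal sketch; the choice of special.gamma is just illustrative):

```python
# Importing the subpackage explicitly always works:
from scipy import special

print(special.gamma(5))  # 24.0, since gamma(5) = 4!
```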
Use from scipy import special, from scipy import linalg, etc.", "best_answers_score":0.8, "library_name":"scipy", "question_url":"https:\/\/stackoverflow.com\/questions\/9819733\/scipy-special-import-issue", "best_answers_votes":52, "question_length":997, "response_length":315 }, { "question":"expanding (adding a row or column) a scipy.sparse matrix Suppose I have an NxN matrix M (lil_matrix or csr_matrix) from scipy.sparse, and I want to make it (N+1)xN where M_modified[i,j] = M[i,j] for 0 <= i < N (and all j) and M[N,j] = 0 for all j. Basically, I want to add a row of zeros to the bottom of M and preserve the remainder of the matrix. Is there a way to do this without copying the data?", "response":"Scipy doesn't have a way to do this without copying the data but you can do it yourself by changing the attributes that define the sparse matrix. There are 4 attributes that make up the csr_matrix: data: An array containing the actual values in the matrix indices: An array containing the column index corresponding to each value in data indptr: An array that specifies the index before the first value in data for each row. If the row is empty then the index is the same as the previous row. shape: A tuple containing the shape of the matrix If you are simply adding a row of zeros to the bottom all you have to do is change the shape and indptr for your matrix. ``` x = np.ones((3,5)) x = csr_matrix(x) x.toarray() >> array([[ 1., 1., 1., 1., 1.], [ 1., 1., 1., 1., 1.], [ 1., 1., 1., 1., 1.]]) # reshape is not implemented for csr_matrix but you can cheat and do it yourself. x._shape = (4,5) # Update indptr to let it know we added a row with nothing in it. So just append the last # value in indptr to the end.
# note that you are still copying the indptr array x.indptr = np.hstack((x.indptr,x.indptr[-1])) x.toarray() array([[ 1., 1., 1., 1., 1.], [ 1., 1., 1., 1., 1.], [ 1., 1., 1., 1., 1.], [ 0., 0., 0., 0., 0.]]) ``` Here is a function to handle the more general case of vstacking any 2 csr_matrices. You still end up copying the underlying numpy arrays but it is still significantly faster than the scipy vstack method. ``` def csr_vappend(a,b): \"\"\" Takes in 2 csr_matrices and appends the second one to the bottom of the first one. Much faster than scipy.sparse.vstack but assumes the type to be csr and overwrites the first matrix instead of copying it. The data, indices, and indptr still get copied.\"\"\" a.data = np.hstack((a.data,b.data)) a.indices = np.hstack((a.indices,b.indices)) a.indptr = np.hstack((a.indptr,(b.indptr + a.nnz)[1:])) a._shape = (a.shape[0]+b.shape[0],b.shape[1]) return a ```", "best_answers_score":0.8, "library_name":"scipy", "question_url":"https:\/\/stackoverflow.com\/questions\/4695337\/expanding-adding-a-row-or-column-a-scipy-sparse-matrix", "best_answers_votes":32, "question_length":399, "response_length":1918 }, { "question":"Use scipy.integrate.quad to integrate complex numbers I'm currently using scipy.integrate.quad to successfully integrate some real integrands. Now a situation appeared that I need to integrate a complex integrand. quad seems not to be able to do it, as the other scipy.integrate routines, so I ask: is there any way to integrate a complex integrand using scipy.integrate, without having to separate the integral in the real and the imaginary parts?", "response":"What's wrong with just separating it out into real and imaginary parts? scipy.integrate.quad requires the integrated function to return floats (aka real numbers) for the algorithm it uses.
``` import scipy from scipy.integrate import quad def complex_quadrature(func, a, b, **kwargs): def real_func(x): return scipy.real(func(x)) def imag_func(x): return scipy.imag(func(x)) real_integral = quad(real_func, a, b, **kwargs) imag_integral = quad(imag_func, a, b, **kwargs) return (real_integral[0] + 1j*imag_integral[0], real_integral[1:], imag_integral[1:]) ``` E.g., ``` >>> complex_quadrature(lambda x: (scipy.exp(1j*x)), 0,scipy.pi\/2) ((0.99999999999999989+0.99999999999999989j), (1.1102230246251564e-14,), (1.1102230246251564e-14,)) ``` which is what you expect, up to rounding error - integral of exp(i x) from 0, pi\/2 is (1\/i)(e^i pi\/2 - e^0) = -i(i - 1) = 1 + i ~ (0.99999999999999989+0.99999999999999989j). And for the record in case it isn't 100% clear to everyone, integration is a linear functional, meaning that \u222b { f(x) + k g(x) } dx = \u222b f(x) dx + k \u222b g(x) dx (where k is a constant with respect to x). Or for our specific case \u222b z(x) dx = \u222b Re z(x) dx + i \u222b Im z(x) dx as z(x) = Re z(x) + i Im z(x). If you are trying to do an integration over a path in the complex plane (other than along the real axis) or region in the complex plane, you'll need a more sophisticated algorithm. Note: Scipy.integrate will not directly handle complex integration. Why? It does the heavy lifting in the FORTRAN QUADPACK library, specifically in qagse.f which explicitly requires the functions\/variables to be real before doing its \"global adaptive quadrature based on 21-point Gauss\u2013Kronrod quadrature within each subinterval, with acceleration by Peter Wynn's epsilon algorithm.\" So unless you want to try and modify the underlying FORTRAN to get it to handle complex numbers, compile it into a new library, you aren't going to get it to work. If you really want to do the Gauss-Kronrod method with complex numbers in exactly one integration, look at Wikipedia's page and implement directly as done below (using 15-pt, 7-pt rule).
Note, I memoize'd function to repeat common calls to the common variables (assuming function calls are slow as if the function is very complex). Also only did 7-pt and 15-pt rule, since I didn't feel like calculating the nodes\/weights myself and those were the ones listed on wikipedia, but getting reasonable errors for test cases (~1e-14) ``` import scipy from scipy import array def quad_routine(func, a, b, x_list, w_list): c_1 = (b-a)\/2.0 c_2 = (b+a)\/2.0 eval_points = map(lambda x: c_1*x+c_2, x_list) func_evals = map(func, eval_points) return c_1 * sum(array(func_evals) * array(w_list)) def quad_gauss_7(func, a, b): x_gauss = [-0.949107912342759, -0.741531185599394, -0.405845151377397, 0, 0.405845151377397, 0.741531185599394, 0.949107912342759] w_gauss = array([0.129484966168870, 0.279705391489277, 0.381830050505119, 0.417959183673469, 0.381830050505119, 0.279705391489277,0.129484966168870]) return quad_routine(func,a,b,x_gauss, w_gauss) def quad_kronrod_15(func, a, b): x_kr = [-0.991455371120813,-0.949107912342759, -0.864864423359769, -0.741531185599394, -0.586087235467691,-0.405845151377397, -0.207784955007898, 0.0, 0.207784955007898,0.405845151377397, 0.586087235467691, 0.741531185599394, 0.864864423359769, 0.949107912342759, 0.991455371120813] w_kr = [0.022935322010529, 0.063092092629979, 0.104790010322250, 0.140653259715525, 0.169004726639267, 0.190350578064785, 0.204432940075298, 0.209482141084728, 0.204432940075298, 0.190350578064785, 0.169004726639267, 0.140653259715525, 0.104790010322250, 0.063092092629979, 0.022935322010529] return quad_routine(func,a,b,x_kr, w_kr) class Memoize(object): def __init__(self, func): self.func = func self.eval_points = {} def __call__(self, *args): if args not in self.eval_points: self.eval_points[args] = self.func(*args) return self.eval_points[args] def quad(func,a,b): ''' Output is the 15 point estimate; and the estimated error ''' func = Memoize(func) # Memoize function to skip repeated function calls. 
g7 = quad_gauss_7(func,a,b) k15 = quad_kronrod_15(func,a,b) # I don't have much faith in this error estimate taken from wikipedia # without incorporating how it should scale with changing limits return [k15, (200*scipy.absolute(g7-k15))**1.5] ``` Test case: ``` >>> quad(lambda x: scipy.exp(1j*x), 0,scipy.pi\/2.0) [(0.99999999999999711+0.99999999999999689j), 9.6120083407040365e-19] ``` I don't trust the error estimate -- I took something from wiki for recommended error estimate when integrating from [-1 to 1] and the values don't seem reasonable to me. E.g., the error above compared with truth is ~5e-15 not ~1e-19. I'm sure if someone consulted num recipes, you could get a more accurate estimate. (Probably have to multiply by (a-b)\/2 to some power or something similar). Recall, the python version is less accurate than just calling scipy's QUADPACK based integration twice. (You could improve upon it if desired).", "best_answers_score":0.8, "library_name":"scipy", "question_url":"https:\/\/stackoverflow.com\/questions\/5965583\/use-scipy-integrate-quad-to-integrate-complex-numbers", "best_answers_votes":61, "question_length":449, "response_length":5023 }, { "question":"How to find last occurrence of maximum value in a numpy.ndarray I have a numpy.ndarray in which the maximum value will mostly occur more than once. EDIT: This is subtly different from numpy.argmax: how to get the index corresponding to the *last* occurrence, in case of multiple occurrences of the maximum values because the author says Or, even better, is it possible to get a list of indices of all the occurrences of the maximum value in the array? whereas in my case getting such a list may prove very expensive. Is it possible to find the index of the last occurrence of the maximum value by using something like numpy.argmax?
I want to find only the index of the last occurrence, not an array of all occurrences (since several hundreds may be there). For example, this will return the index of the first occurrence, i.e. 2: ``` import numpy as np a=np.array([0,0,4,4,4,4,2,2,2,2]) print np.argmax(a) ``` However, I want it to output 5.", "response":"numpy.argmax only returns the index of the first occurrence. You could apply argmax to a reversed view of the array: ``` import numpy as np a = np.array([0,0,4,4,4,4,2,2,2,2]) b = a[::-1] i = len(b) - np.argmax(b) - 1 i # 5 a[i:] # array([4, 2, 2, 2, 2]) ``` Note numpy doesn't copy the array but instead creates a view of the original with a stride that accesses it in reverse order. ``` id(a) == id(b.base) # True ```", "best_answers_score":0.8, "library_name":"scipy", "question_url":"https:\/\/stackoverflow.com\/questions\/8768540\/how-to-find-last-occurrence-of-maximum-value-in-a-numpy-ndarray", "best_answers_votes":72, "question_length":933, "response_length":419 }, { "question":"how to annotate heatmap with text in matplotlib I am plotting a heatmap in matplotlib using: plt.pcolor(rand(5,5)) how can I annotate the heatmap with the actual numbers plotted? meaning in each cell of the plotted heatmap, put the value corresponding to that cell in the 5x5 matrix passed to pcolor.
thanks.", "response":"There is no automatic feature to do such a thing, but you could loop through each point and put text in the appropriate location: ``` import matplotlib.pyplot as plt import numpy as np data = np.random.rand(5, 4) heatmap = plt.pcolor(data) for y in range(data.shape[0]): for x in range(data.shape[1]): plt.text(x + 0.5, y + 0.5, '%.4f' % data[y, x], horizontalalignment='center', verticalalignment='center', ) plt.colorbar(heatmap) plt.show() ``` HTH", "best_answers_score":0.8, "library_name":"scipy", "question_url":"https:\/\/stackoverflow.com\/questions\/11917547\/how-to-annotate-heatmap-with-text-in-matplotlib", "best_answers_votes":55, "question_length":308, "response_length":450 }, { "question":"Scipy\/Numpy FFT Frequency Analysis I'm looking for how to turn the frequency axis in a fft (taken via scipy.fftpack.fftfreq) into a frequency in Hertz, rather than bins or fractional bins. I tried the code below to test out the FFT: ``` t = scipy.linspace(0,120,4000) acc = lambda t: 10*scipy.sin(2*pi*2.0*t) + 5*scipy.sin(2*pi*8.0*t) + 2*scipy.random.random(len(t)) signal = acc(t) FFT = abs(scipy.fft(signal)) FFT = scipy.fftpack.fftshift(FFT) freqs = scipy.fftpack.fftfreq(signal.size) pylab.plot(freqs,FFT,'x') pylab.show() ``` The sampling rate should be 4000 samples \/ 120 seconds = 33.34 samples\/sec. The signal has a 2.0 Hz signal, an 8.0 Hz signal, and some random noise. I take the FFT, grab the frequencies, and plot it. The numbers are pretty nonsensical. If I multiply the frequencies by 33.34 (the sampling frequency), then I get peaks at about 8 Hz and 15 Hz, which seems wrong (also, the frequencies should be a factor of 4 apart, not 2!).
Any thoughts on what I'm doing wrong here?", "response":"I think you don't need to do fftshift(), and you can pass sampling period to fftfreq(): ``` import scipy import scipy.fftpack import pylab from scipy import pi t = scipy.linspace(0,120,4000) acc = lambda t: 10*scipy.sin(2*pi*2.0*t) + 5*scipy.sin(2*pi*8.0*t) + 2*scipy.random.random(len(t)) signal = acc(t) FFT = abs(scipy.fft(signal)) freqs = scipy.fftpack.fftfreq(signal.size, t[1]-t[0]) pylab.subplot(211) pylab.plot(t, signal) pylab.subplot(212) pylab.plot(freqs,20*scipy.log10(FFT),'x') pylab.show() ``` From the graph you can see there are two peaks, at 2 Hz and 8 Hz.", "best_answers_score":0.8, "library_name":"scipy", "question_url":"https:\/\/stackoverflow.com\/questions\/9456037\/scipy-numpy-fft-frequency-analysis", "best_answers_votes":48, "question_length":996, "response_length":569 }, { "question":"Restrict scipy.optimize.minimize to integer values I'm using scipy.optimize.minimize to optimize a real-world problem for which the answers can only be integers. My current code looks like this: ``` from scipy.optimize import minimize def f(x): return (481.79\/(5+x[0]))+(412.04\/(4+x[1]))+(365.54\/(3+x[2]))+(375.88\/(3+x[3]))+(379.75\/(3+x[4]))+(632.92\/(5+x[5]))+(127.89\/(1+x[6]))+(835.71\/(6+x[7]))+(200.21\/(1+x[8])) def con(x): return sum(x)-7 cons = {'type':'eq', 'fun': con} print scipy.optimize.minimize(f, [1,1,1,1,1,1,1,0,0], constraints=cons, bounds=([0,7],[0,7],[0,7],[0,7],[0,7],[0,7],[0,7],[0,7],[0,7])) ``` This yields: ``` x: array([ 2.91950510e-16, 2.44504019e-01, 9.97850733e-01, 1.05398840e+00, 1.07481251e+00, 2.60570253e-01, 1.36470363e+00, 4.48527831e-02, 1.95871767e+00] ``` But I want it optimized with integer values (rounding all x to the nearest whole number doesn't always give the minimum). Is there a way to use scipy.optimize.minimize with only integer values?
(I guess I could create an array with all possible permutations of x and evaluate f(x) for each combination, but that doesn't seem like a very elegant or quick solution.)", "response":"pulp solution After some research, I don't think your objective function is linear. I recreated the problem in the Python pulp library but pulp doesn't like that we're dividing by a float and 'LpAffineExpression'. This answer suggests that linear programming \"doesn't understand divisions\" but that comment is in context of adding constraints, not the objective function. That comment pointed me to \"Mixed Integer Linear Fractional Programming (MILFP)\" and on Wikipedia. Here's how you could do it in pulp if it actually worked (maybe someone can figure out why): ``` import pulp data = [(481.79, 5), (412.04, 4), (365.54, 3)] #, (375.88, 3), (379.75, 3), (632.92, 5), (127.89, 1), (835.71, 6), (200.21, 1)] x = pulp.LpVariable.dicts('x', range(len(data)), lowBound=0, upBound=7, cat=pulp.LpInteger) numerator = dict((i,tup[0]) for i,tup in enumerate(data)) denom_int = dict((i,tup[1]) for i,tup in enumerate(data)) problem = pulp.LpProblem('Mixed Integer Linear Programming', sense=pulp.LpMinimize) # objective function (doesn't work) # TypeError: unsupported operand type(s) for \/: 'float' and 'LpAffineExpression' problem += sum([numerator[i] \/ (denom_int[i] + x[i]) for i in range(len(data))]) problem.solve() for v in problem.variables(): print(v.name, \"=\", v.varValue) ``` brute solution with scipy.optimize You can use brute and ranges of slices for each x in your function. If you have 3 xs in your function, you'll also have 3 slices in your ranges tuple. The key to all of this is to add the step size of 1 to the slice(start, stop,step) so slice(#, #, 1). 
``` from scipy.optimize import brute import itertools def f(x): return (481.79\/(5+x[0]))+(412.04\/(4+x[1]))+(365.54\/(3+x[2])) ranges = (slice(0, 9, 1),) * 3 result = brute(f, ranges, disp=True, finish=None) print(result) ``` itertools solution Or you can use itertools to generate all combinations: ``` combinations = list(itertools.product(*[[0,1,2,3,4,5,6,7,8]]*3)) values = [] for combination in combinations: values.append((combination, f(combination))) best = [c for c,v in values if v == min([v for c,v in values])] print(best) ``` Note: this is a scaled-down version of your original function for example purposes.", "best_answers_score":0.8, "library_name":"scipy", "question_url":"https:\/\/stackoverflow.com\/questions\/39236863\/restrict-scipy-optimize-minimize-to-integer-values", "best_answers_votes":14, "question_length":1155, "response_length":2187 }, { "question":"Fitting to Poisson histogram I am trying to fit a curve over the histogram of a Poisson distribution that looks like this I have modified the fit function so that it resembles a Poisson distribution, with the parameter t as a variable. But the curve_fit function can not be plotted and I am not sure why. ``` def histo(bsize): N = bsize #binwidth bw = (dt.max()-dt.min())\/(N-1.) bin1 = dt.min()+ bw*np.arange(N) #define the array to hold the occurrence count bincount= np.array([]) for bin in bin1: count = np.where((dt>=bin)&(dt'), ha='right') plt.show() ``` To do this in n-dimensions, we just need to pass in the appropriate sized arrays: ``` import numpy as np from scipy import ndimage data = np.arange(3*5*9).reshape((3,5,9)).astype(np.float) coords = np.array([[1.2, 3.5, 7.8], [0.5, 0.5, 6.8]]) zi = ndimage.map_coordinates(data, coords.T) ``` As far as scaling and memory usage goes, map_coordinates will create a filtered copy of the array if you're using an order > 1 (i.e. not linear interpolation). If you just want to interpolate at a very small number of points, this is a rather large overhead. 
It doesn't increase with the number of points you want to interpolate at, however. As long as you have enough RAM for a single temporary copy of your input data array, you'll be fine. If you can't store a copy of your data in memory, you can either a) specify prefilter=False and order=1 and use linear interpolation, or b) replace your original data with a filtered version using ndimage.spline_filter, and then call map_coordinates with prefilter=False. Even if you have enough RAM, keeping the filtered dataset around can be a big speedup if you need to call map_coordinates multiple times (e.g. interactive use, etc).", "best_answers_score":0.8, "library_name":"scipy", "question_url":"https:\/\/stackoverflow.com\/questions\/6238250\/multivariate-spline-interpolation-in-python-scipy", "best_answers_votes":48, "question_length":889, "response_length":3014 }, { "question":"Understanding scipy deconvolve I'm trying to understand scipy.signal.deconvolve. From the mathematical point of view a convolution is just the multiplication in Fourier space so I would expect that for two functions f and g: Deconvolve(Convolve(f,g) , g) == f In numpy\/scipy this is either not the case or I'm missing an important point. Although there are some questions related to deconvolve on SO already (like here and here) they do not address this point, others remain unclear (this) or unanswered (here). There are also two questions on SignalProcessing SE (this and this) the answers to which are not helpful in understanding how scipy's deconvolve function works. The question would be: How do you reconstruct the original signal f from a convoluted signal, assuming you know the convolving function g? Or in other words: How does this pseudocode Deconvolve(Convolve(f,g) , g) == f translate into numpy \/ scipy? Edit: Note that this question is not targeted at preventing numerical inaccuracies (although this is also an open question) but at understanding how convolve\/deconvolve work together in scipy.
The following code tries to do that with a Heaviside function and a gaussian filter. As can be seen in the image, the result of the deconvolution of the convolution is not at all the original Heaviside function. I would be glad if someone could shed some light on this issue. ``` import numpy as np import scipy.signal import matplotlib.pyplot as plt # Define heaviside function H = lambda x: 0.5 * (np.sign(x) + 1.) #define gaussian gauss = lambda x, sig: np.exp(-( x\/float(sig))**2 ) X = np.linspace(-5, 30, num=3501) X2 = np.linspace(-5,5, num=1001) # convolute a heaviside with a gaussian H_c = np.convolve( H(X), gauss(X2, 1), mode=\"same\" ) # deconvolute the result H_dc, er = scipy.signal.deconvolve(H_c, gauss(X2, 1) ) #### Plot #### fig , ax = plt.subplots(nrows=4, figsize=(6,7)) ax[0].plot( H(X), color=\"#907700\", label=\"Heaviside\", lw=3 ) ax[1].plot( gauss(X2, 1), color=\"#907700\", label=\"Gauss filter\", lw=3 ) ax[2].plot( H_c\/H_c.max(), color=\"#325cab\", label=\"convoluted\" , lw=3 ) ax[3].plot( H_dc, color=\"#ab4232\", label=\"deconvoluted\", lw=3 ) for i in range(len(ax)): ax[i].set_xlim([0, len(X)]) ax[i].set_ylim([-0.07, 1.2]) ax[i].legend(loc=4) plt.show() ``` Edit: Note that there is a matlab example, showing how to convolve\/deconvolve a rectangular signal using ``` yc=conv(y,c,'full').\/sum(c); ydc=deconv(yc,c).*sum(c); ``` In the spirit of this question it would also help if someone was able to translate this example into python.", "response":"After some trial and error I found out how to interpret the results of scipy.signal.deconvolve() and I post my findings as an answer.
Let's start with a working code example: ``` import numpy as np import scipy.signal import matplotlib.pyplot as plt # let the signal be box-like signal = np.repeat([0., 1., 0.], 100) # and use a gaussian filter # the filter should be shorter than the signal # the filter should be such that it's much bigger than zero everywhere gauss = np.exp(-( (np.linspace(0,50)-25.)\/float(12))**2 ) print gauss.min() # = 0.013 >> 0 # calculate the convolution (np.convolve and scipy.signal.convolve identical) # the keyword argument mode=\"same\" ensures that the convolution spans the same # shape as the input array. #filtered = scipy.signal.convolve(signal, gauss, mode='same') filtered = np.convolve(signal, gauss, mode='same') deconv, _ = scipy.signal.deconvolve( filtered, gauss ) #the deconvolution has n = len(signal) - len(gauss) + 1 points n = len(signal)-len(gauss)+1 # so we need to expand it by s = (len(signal)-n)\/2 #on both sides. deconv_res = np.zeros(len(signal)) deconv_res[s:len(signal)-s-1] = deconv deconv = deconv_res # now deconv contains the deconvolution # expanded to the original shape (filled with zeros) #### Plot #### fig , ax = plt.subplots(nrows=4, figsize=(6,7)) ax[0].plot(signal, color=\"#907700\", label=\"original\", lw=3 ) ax[1].plot(gauss, color=\"#68934e\", label=\"gauss filter\", lw=3 ) # we need to divide by the sum of the filter window to get the convolution normalized to 1 ax[2].plot(filtered\/np.sum(gauss), color=\"#325cab\", label=\"convoluted\" , lw=3 ) ax[3].plot(deconv, color=\"#ab4232\", label=\"deconvoluted\", lw=3 ) for i in range(len(ax)): ax[i].set_xlim([0, len(signal)]) ax[i].set_ylim([-0.07, 1.2]) ax[i].legend(loc=1, fontsize=11) if i != len(ax)-1 : ax[i].set_xticklabels([]) plt.savefig(__file__ + \".png\") plt.show() ``` This code produces the following image, showing exactly what we want (Deconvolve(Convolve(signal,gauss) , gauss) == signal) Some important findings are: The filter should be shorter than the signal The filter should be much
bigger than zero everywhere (here > 0.013 is good enough). Using the keyword argument mode = 'same' to the convolution ensures that it lives on the same array shape as the signal. The deconvolution has n = len(signal) - len(gauss) + 1 points. So in order to let it also reside on the same original array shape we need to expand it by s = (len(signal)-n)\/2 on both sides. Of course, further findings, comments and suggestions to this question are still welcome.", "best_answers_score":0.8, "library_name":"scipy", "question_url":"https:\/\/stackoverflow.com\/questions\/40615034\/understanding-scipy-deconvolve", "best_answers_votes":25, "question_length":2570, "response_length":2570 }, { "question":"Efficiently get indices of histogram bins in Python Short Question I have a large 10000x10000 elements image, which I bin into a few hundred different sectors\/bins. I then need to perform some iterative calculation on the values contained within each bin. How do I extract the indices of each bin to efficiently perform my calculation using the bins' values? What I am looking for is a solution which avoids the bottleneck of having to select every time ind == j from my large array. Is there a way to obtain directly, in one go, the indices of the elements belonging to every bin? Detailed Explanation 1. Straightforward Solution One way to achieve what I need is to use code like the following (see e.g. THIS related answer), where I digitize my values and then have a j-loop selecting digitized indices equal to j like below ``` import numpy as np # This function func() is just a placeholder for a much more complicated function. # I am aware that my problem could be easily sped up in the specific case of # of the sum() function, but I am looking for a general solution to the problem.
def func(x): y = np.sum(x) return y vals = np.random.random(1e8) nbins = 100 bins = np.linspace(0, 1, nbins+1) ind = np.digitize(vals, bins) result = [func(vals[ind == j]) for j in range(1, nbins)] ``` This is not what I want as it selects every time ind == j from my large array. This makes this solution very inefficient and slow. 2. Using binned_statistic The above approach turns out to be the same as that implemented in scipy.stats.binned_statistic, for the general case of a user-defined function. Using Scipy directly an identical output can be obtained with the following ``` import numpy as np from scipy.stats import binned_statistic vals = np.random.random(1e8) results = binned_statistic(vals, vals, statistic=func, bins=100, range=[0, 1])[0] ``` 3. Using labeled_comprehension Another Scipy alternative is to use scipy.ndimage.measurements.labeled_comprehension. Using that function, the above example would become ``` import numpy as np from scipy.ndimage import labeled_comprehension vals = np.random.random(1e8) nbins = 100 bins = np.linspace(0, 1, nbins+1) ind = np.digitize(vals, bins) result = labeled_comprehension(vals, ind, np.arange(1, nbins), func, float, 0) ``` Unfortunately, this form is also inefficient and in particular, it has no speed advantage over my original example. 4. Comparison with IDL language To further clarify, what I am looking for is a functionality equivalent to the REVERSE_INDICES keyword in the HISTOGRAM function of the IDL language HERE. Can this very useful functionality be efficiently replicated in Python?
Specifically, using the IDL language the above example could be written as ``` vals = randomu(s, 1e8) nbins = 100 bins = [0:1:1.\/nbins] h = histogram(vals, MIN=bins[0], MAX=bins[-2], NBINS=nbins, REVERSE_INDICES=r) result = dblarr(nbins) for j=0, nbins-1 do begin jbins = r[r[j]:r[j+1]-1] ; Selects indices of bin j result[j] = func(vals[jbins]) endfor ``` The above IDL implementation is about 10 times faster than the Numpy one, due to the fact that the indices of the bins do not have to be selected for every bin. And the speed difference in favour of the IDL implementation increases with the number of bins.", "response":"I found that a particular sparse matrix constructor can achieve the desired result very efficiently. It's a bit obscure but we can abuse it for this purpose. The function below can be used in nearly the same way as scipy.stats.binned_statistic but can be orders of magnitude faster ``` import numpy as np from scipy.sparse import csr_matrix def binned_statistic(x, values, func, nbins, range): '''The usage is nearly the same as scipy.stats.binned_statistic''' N = len(values) r0, r1 = range digitized = (float(nbins)\/(r1 - r0)*(x - r0)).astype(int) S = csr_matrix((values, [digitized, np.arange(N)]), shape=(nbins, N)) return [func(group) for group in np.split(S.data, S.indptr[1:-1])] ``` I avoided np.digitize because it doesn't use the fact that all bins are equal width and hence is slow, but the method I used instead may not handle all edge cases perfectly.", "best_answers_score":0.8, "library_name":"scipy", "question_url":"https:\/\/stackoverflow.com\/questions\/26783719\/efficiently-get-indices-of-histogram-bins-in-python", "best_answers_votes":13, "question_length":3258, "response_length":864 }, { "question":"Installing scipy and scikit-learn on apple m1 The installation on the m1 chip for the following packages: Numpy 1.21.1, pandas 1.3.0, torch 1.9.0 and a few other ones works fine for me. They also seem to work properly while testing them. 
However when I try to install scipy or scikit-learn via pip this error appears: ERROR: Failed building wheel for numpy Failed to build numpy ERROR: Could not build wheels for numpy which use PEP 517 and cannot be installed directly Why should Numpy be built again when I have the latest version from pip already installed? Every previous installation was done using python3.9 -m pip install ... on Mac OS 11.3.1 with the apple m1 chip. Maybe somebody knows how to deal with this error or if it's just a matter of time.", "response":"UPDATE: scikit-learn now works via pip \u2705 Just first brew install openblas - it has instructions for different processors (wikipedia) ```bash brew install openblas export OPENBLAS=$(\/opt\/homebrew\/bin\/brew --prefix openblas) export CFLAGS=\"-falign-functions=8 ${CFLAGS}\" # ^ no need to add to .zshrc, just doing this once. pip install scikit-learn ``` Worked great on Apple Silicon M1 \ud83c\udf89 Extra details about how Pip works Pip downloaded the source from PyPI, then built the wheel targeting MacOS X 12.0, and arm64 (apple silicon): scikit_learn-1.0.1-cp38-cp38-macosx_12_0_arm64.whl. ``` Building wheels for collected packages: scikit-learn Building wheel for scikit-learn (pyproject.toml) ... done Created wheel for scikit-learn: filename=scikit_learn-1.0.1-cp38-cp38-macosx_12_0_arm64.whl size=6364030 sha256=0b0cc9a21af775e0c8077ee71698ff62da05ab62efc914c5c15cd4bf97867b31 Successfully built scikit-learn Installing collected packages: scipy, scikit-learn Successfully installed scikit-learn-1.0.1 scipy-1.7.3 ``` Note on PyPI: we usually download either a pre-built wheel (yay, this is excellent for reliable distribution and ensuring compatibility). Or, if no prebuilt wheel exists (sad) then we download a tar.gz and build it ourselves. This happens because the authors don't publish a prebuilt wheel to PyPI, but more and more people are adding this to their CI (github actions) workflow.
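One quick way to see in advance whether a prebuilt wheel could match your machine (standard pip command; the exact tag list varies per interpreter and OS):

```shell
# list the wheel compatibility tags this interpreter accepts; a published
# wheel must match one of these tags or pip falls back to a source build
python3 -m pip debug --verbose | head -n 25
```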
Building the wheel ourselves takes more CPU time, and is generally less reliable but works in this case. Here we are downloading a pre-built wheel that has very few limitations: it works for any version of Python 3, for any OS, for any architecture (like amd64 or arm64): click-8.0.3-py3-none-any.whl ``` Collecting click>=7.0 Downloading click-8.0.3-py3-none-any.whl ``` Here apparently we had no wheel available, so we have to build it ourselves with setuptools running setup.py. ``` Collecting grpcio>=1.28.1 Downloading grpcio-1.42.0.tar.gz (21.3 MB) |\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588| 21.3 MB 12.7 MB\/s Preparing metadata (setup.py) ... done ## later in the process it installs using setuptools Running setup.py install for grpcio ... done ``` Good luck and happy piping.", "best_answers_score":0.8, "library_name":"scipy", "question_url":"https:\/\/stackoverflow.com\/questions\/68620927\/installing-scipy-and-scikit-learn-on-apple-m1", "best_answers_votes":47, "question_length":755, "response_length":2168 }, { "question":"How can I efficiently process a numpy array in blocks similar to Matlab's blkproc (blockproc) function I'm looking for a good approach for efficiently dividing an image into small regions, processing each region separately, and then re-assembling the results from each process into a single processed image. Matlab had a tool for this called blkproc (replaced by blockproc in newer versions of Matlab). In an ideal world, the function or class would support overlap between the divisions in the input matrix too. In the Matlab help, blkproc is defined as: B = blkproc(A,[m n],[mborder nborder],fun,...)
A is your input matrix, [m n] is the block size, [mborder, nborder] is the size of your border region (optional), and fun is a function to apply to each block. I have kludged together an approach, but it strikes me as clumsy and I bet there's a much better way. At the risk of my own embarrassment, here's my code: ``` import numpy as np def segmented_process(M, blk_size=(16,16), overlap=(0,0), fun=None): rows = [] for i in range(0, M.shape[0], blk_size[0]): cols = [] for j in range(0, M.shape[1], blk_size[1]): cols.append(fun(M[i:i+blk_size[0], j:j+blk_size[1]])) rows.append(np.concatenate(cols, axis=1)) return np.concatenate(rows, axis=0) R = np.random.rand(128,128) passthrough = lambda x: x Rprime = segmented_process(R, blk_size=(16,16), overlap=(0,0), fun=passthrough) np.all(R==Rprime) ```", "response":"Here are some examples of a different (loop free) way to work with blocks: ``` import numpy as np from numpy.lib.stride_tricks import as_strided as ast A = np.arange(36).reshape(6, 6) print(A) #[[ 0 1 2 3 4 5] # [ 6 7 8 9 10 11] # ... # [30 31 32 33 34 35]] # 2x2 block view; strides are derived from A.strides so this works for any dtype B = ast(A, shape=(3, 3, 2, 2), strides=(2*A.strides[0], 2*A.strides[1], A.strides[0], A.strides[1])) print(B[1, 1]) #[[14 15] # [20 21]] # for preserving original shape B[:, :] = np.dot(B[:, :], np.array([[0, 1], [1, 0]])) print(A) #[[ 1 0 3 2 5 4] # [ 7 6 9 8 11 10] # ... # [31 30 33 32 35 34]] print(B[1, 1]) #[[15 14] # [21 20]] # for reducing shape, processing in 3D is enough C = B.reshape(3, 3, -1) print(C.sum(-1)) #[[ 14 22 30] # [ 62 70 78] # [110 118 126]] ``` So just trying to simply copy the Matlab functionality to numpy is not always the best way to proceed. Sometimes some 'off the hat' thinking is needed. Caveat: In general, implementations based on stride tricks may (but do not necessarily) suffer some performance penalties. So be prepared to always measure your performance.
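If raw as_strided feels too magic, the same non-overlapping block view can be sketched with plain reshape/swapaxes (assuming each block dimension evenly divides the matching array dimension; block_view_reshape is just an illustrative name):

```python
import numpy as np

def block_view_reshape(A, block=(2, 2)):
    # Non-overlapping block view via reshape/swapaxes; assumes the
    # block size evenly divides the shape of A, no stride arithmetic.
    bh, bw = block
    h, w = A.shape
    return A.reshape(h // bh, bh, w // bw, bw).swapaxes(1, 2)

A = np.arange(36).reshape(6, 6)
B = block_view_reshape(A, (2, 2))
print(B.shape)    # (3, 3, 2, 2)
print(B[1, 1])    # [[14 15]
                  #  [20 21]]
```

B[i, j] is the block at block-row i, block-column j, matching the as_strided example while sidestepping hand-computed strides.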
In any case it's wise to first check whether the needed functionality (or something similar enough to adapt easily) has already been implemented in numpy or scipy. Update: Please note that there is no real magic involved here with the strides, so I'll provide a simple function to get a block_view of any suitable 2D numpy array. So here we go: ``` from numpy.lib.stride_tricks import as_strided as ast def block_view(A, block=(3, 3)): \"\"\"Provide a 2D block view to 2D array. No error checking made. Therefore meaningful (as implemented) only for blocks strictly compatible with the shape of A.\"\"\" # simple shape and strides computations may seem at first strange # unless one is able to recognize the 'tuple additions' involved ;-) shape = (A.shape[0] \/\/ block[0], A.shape[1] \/\/ block[1]) + block strides = (block[0] * A.strides[0], block[1] * A.strides[1]) + A.strides return ast(A, shape=shape, strides=strides) if __name__ == '__main__': from numpy import arange A = arange(144).reshape(12, 12) print(block_view(A)[0, 0]) #[[ 0 1 2] # [12 13 14] # [24 25 26]] print(block_view(A, (2, 6))[0, 0]) #[[ 0 1 2 3 4 5] # [12 13 14 15 16 17]] print(block_view(A, (3, 12))[0, 0]) #[[ 0 1 2 3 4 5 6 7 8 9 10 11] # [12 13 14 15 16 17 18 19 20 21 22 23] # [24 25 26 27 28 29 30 31 32 33 34 35]] ```", "best_answers_score":0.8, "library_name":"scipy", "question_url":"https:\/\/stackoverflow.com\/questions\/5073767\/how-can-i-efficiently-process-a-numpy-array-in-blocks-similar-to-matlabs-blkpro", "best_answers_votes":26, "question_length":1397, "response_length":2309 }, { "question":"numpy\/scipy equivalent of R ecdf(x)(x) function? What is the equivalent of R's ecdf(x)(x) function in Python, in either numpy or scipy? Is ecdf(x)(x) basically the same as: ``` import numpy as np def ecdf(x): # normalize X to sum to 1 x = x \/ np.sum(x) return np.cumsum(x) ``` or is something else required?
EDIT: how can one control the number of bins used by ecdf?", "response":"The OP implementation for ecdf is wrong; you are not supposed to cumsum() the values. So not ys = np.cumsum(x)\/np.sum(x) but rather ys = np.cumsum(np.ones(len(x)))\/float(len(x)), or better ys = np.arange(1, len(x)+1)\/float(len(x)) You either go with statsmodels' ECDF if you are OK with that extra dependency or provide your own implementation. See below: ``` import numpy as np import matplotlib.pyplot as plt from statsmodels.distributions.empirical_distribution import ECDF %matplotlib inline grades = (93.5,93,60.8,94.5,82,87.5,91.5,99.5,86,93.5,92.5,78,76,69,94.5, 89.5,92.8,78,65.5,98,98.5,92.3,95.5,76,91,95,61) def ecdf_wrong(x): xs = np.sort(x) # need to be sorted ys = np.cumsum(xs)\/np.sum(xs) # normalize so sum == 1 return (xs,ys) def ecdf(x): xs = np.sort(x) ys = np.arange(1, len(xs)+1)\/float(len(xs)) return xs, ys xs, ys = ecdf_wrong(grades) plt.plot(xs, ys, label=\"wrong cumsum\") xs, ys = ecdf(grades) plt.plot(xs, ys, label=\"handwritten\", marker=\">\", markerfacecolor='none') cdf = ECDF(grades) plt.plot(cdf.x, cdf.y, label=\"statsmodels\", marker=\"<\", markerfacecolor='none') plt.legend() plt.show() ```
Given a set of real numbers observed from an experiment, let us say they belong to one of the many distributions out there (like Weibull, Erlang, Cauchy, Exponential etc.), are there any automated ways of finding the right distribution and the distribution parameters for the data? Are there any good tutorials that walk me through the process? Real-world Scenario: For instance, let us say I initiated a small survey and recorded information about how many people a person talks to every day for say 300 people and I have the following information: ``` 1 10 2 5 3 20 ... ... ``` where X Y tells me that person X talked to Y people during the period of the survey. Now using the information from the 300 people, I want to fit this into a model. The question boils down to are there any automated ways of finding out the right distribution and distribution parameters for this data or if not, is there a good step-by-step procedure to achieve the same?", "response":"This is a complicated question, and there are no perfect answers. I'll try to give you an overview of the major concepts, and point you in the direction of some useful reading on the topic. Assume that you a one dimensional set of data, and you have a finite set of probability distribution functions that you think the data may have been generated from. You can consider each distribution independently, and try to find parameters that are reasonable given your data. There are two methods for setting parameters for a probability distribution function given data: Least Squares Maximum Likelihood In my experience, Maximum Likelihood has been preferred in recent years, although this may not be the case in every field. Here's a concrete example of how to estimate parameters in R. 
Consider a set of random points generated from a Gaussian distribution with mean of 0 and standard deviation of 1: ``` x = rnorm( n = 100, mean = 0, sd = 1 ) ``` Assume that you know the data were generated using a Gaussian process, but you've forgotten (or never knew!) the parameters for the Gaussian. You'd like to use the data to give you reasonable estimates of the mean and standard deviation. In R, there is a standard library that makes this very straightforward: ``` library(MASS) params = fitdistr( x, \"normal\" ) print( params ) ``` This gave me the following output: ``` mean sd -0.17922360 1.01636446 ( 0.10163645) ( 0.07186782) ``` Those are fairly close to the right answer, and the numbers in parentheses are confidence intervals around the parameters. Remember that every time you generate a new set of points, you'll get a new answer for the estimates. Mathematically, this is using maximum likelihood to estimate both the mean and standard deviation of the Gaussian. Likelihood means (in this case) \"probability of data given values of the parameters.\" Maximum likelihood means \"the values of the parameters that maximize the probability of generating my input data.\" Maximum likelihood estimation is the algorithm for finding the values of the parameters which maximize the probability of generating the input data, and for some distributions it can involve numerical optimization algorithms. In R, most of the work is done by fitdistr, which in certain cases will call optim. You can extract the log-likelihood from your parameters like this: ``` print( params$loglik ) [1] -139.5772 ``` It's more common to work with the log-likelihood rather than likelihood to avoid rounding errors. Estimating the joint probability of your data involves multiplying probabilities, which are all less than 1. Even for a small set of data, the joint probability approaches 0 very quickly, and adding the log-probabilities of your data is equivalent to multiplying the probabilities. 
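For readers following along in Python rather than R, the Gaussian maximum-likelihood estimates have a closed form (the sample mean and the ddof=0 standard deviation), so the fitdistr example can be sketched without any special library; for the general case, scipy.stats distributions expose a similar fit method:

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(loc=0.0, scale=1.0, size=1000)

# Closed-form maximum-likelihood estimates for a Gaussian
mu_hat = x.mean()
sd_hat = x.std()      # ddof=0 is the MLE; the unbiased estimator uses ddof=1

# Log-likelihood of the data at the fitted parameters
loglik = np.sum(-0.5 * np.log(2 * np.pi * sd_hat**2)
                - (x - mu_hat) ** 2 / (2 * sd_hat**2))
print(mu_hat, sd_hat, loglik)
```

As in the R output, the estimates hover near the true values (0 and 1) and vary from sample to sample.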
The likelihood is maximized as the log-likelihood approaches 0, and thus more negative numbers are worse fits to your data. With computational tools like this, it's easy to estimate parameters for any distribution. Consider this example: ``` x = x[ x >= 0 ] distributions = c(\"normal\",\"exponential\") for ( dist in distributions ) { print( paste( \"fitting parameters for \", dist ) ) params = fitdistr( x, dist ) print( params ) print( summary( params ) ) print( params$loglik ) } ``` The exponential distribution doesn't generate negative numbers, so I removed them in the first line. The output (which is stochastic) looked like this: ``` [1] \"fitting parameters for normal\" mean sd 0.72021836 0.54079027 (0.07647929) (0.05407903) Length Class Mode estimate 2 -none- numeric sd 2 -none- numeric n 1 -none- numeric loglik 1 -none- numeric [1] -40.21074 [1] \"fitting parameters for exponential\" rate 1.388468 (0.196359) Length Class Mode estimate 1 -none- numeric sd 1 -none- numeric n 1 -none- numeric loglik 1 -none- numeric [1] -33.58996 ``` The exponential distribution is actually slightly more likely to have generated this data than the normal distribution, likely because the exponential distribution doesn't have to assign any probability density to negative numbers. All of these estimation problems get worse when you try to fit your data to more distributions. Distributions with more parameters are more flexible, so they'll fit your data better than distributions with less parameters. Also, some distributions are special cases of other distributions (for example, the Exponential is a special case of the Gamma). Because of this, it's very common to use prior knowledge to constrain your choice models to a subset of all possible models. One trick to get around some problems in parameter estimation is to generate a lot of data, and leave some of the data out for cross-validation. 
To cross-validate your fit of parameters to data, leave some of the data out of your estimation procedure, and then measure each model's likelihood on the left-out data.", "best_answers_score":0.8, "library_name":"scipy", "question_url":"https:\/\/stackoverflow.com\/questions\/4290081\/fitting-data-to-distributions", "best_answers_votes":40, "question_length":1349, "response_length":4838 }, { "question":"Slicing Sparse Matrices in Scipy -- Which Types Work Best? The SciPy Sparse Matrix tutorial is very good -- but it actually leaves the section on slicing un(der)developed (still in outline form -- see section: \"Handling Sparse Matrices\"). I will try and update the tutorial, once this question is answered. I have a large sparse matrix -- currently in dok_matrix format. ``` import numpy as np from scipy import sparse M = sparse.dok_matrix((10**6, 10**6)) ``` For various methods I want to be able to slice columns and for others I want to slice rows. Ideally I would use advanced-indexing (i.e. a boolean vector, bool_vect) with which to slice a sparse matrix M -- as in: ``` bool_vect = np.arange(10**6)%2 # every even index out = M[bool_vect,:] # Want to select every even row ``` or ``` out = M[:,bool_vect] # Want to select every even column ``` First off, dok_matrices do not support this -- but I think it works (slowly) if I first cast to lil_matrices, via sparse.lil_matrix(M) As far as I can gather from the tutorial -- to slice columns I want to use CSC and to slice rows I want to slice CSR. So does that mean I should cast the matrix M via: ``` M.tocsc()[:,bool_vect] ``` or ``` M.tocsr()[bool_vect,:] ``` I am kinda guessing here and my code is slow because of it. Any help from someone who understands how this works would be appreciated. Thanks in advance. If it turns out I should not be indexing my matrix with a boolean array, but rather a list of integers (indices) -- that is also something I would be happy to find out. Whichever is more efficient. 
Finally -- this is a big matrix, so bonus points if this can happen in place \/ with broadcasting.", "response":"Ok, so I'm pretty sure the \"right\" way to do this is: if you are slicing columns, use tocsc() and slice using a list\/array of integers. Boolean vectors does not seem to do the trick with sparse matrices -- the way it does with ndarrays in numpy. Which means the answer is. ``` indices = np.where(bool_vect)[0] out1 = M.tocsc()[:,indices] out2 = M.tocsr()[indices,:] ``` But question: is this the best way? Is this in place? In practice this does seem to be happening in place -- and it is much faster than prior attempts (using lil_matrix).", "best_answers_score":0.8, "library_name":"scipy", "question_url":"https:\/\/stackoverflow.com\/questions\/13352280\/slicing-sparse-matrices-in-scipy-which-types-work-best", "best_answers_votes":49, "question_length":1669, "response_length":540 }, { "question":"Calculating gradient with NumPy I really can not understand what numpy.gradient function does and how to use it for computation of multivariable function gradient. For example, I have such a function: ``` def func(q, chi, delta): return q * chi * delta ``` I need to compute it's 3-dimensional gradient (in other words, I want to compute partial derivatives with respect to all variables (q, chi, delta)). How can I calculate this gradient using NumPy?", "response":"The problem is, that numpy can't give you the derivatives directly and you have two options: With NUMPY What you essentially have to do, is to define a grid in three dimension and to evaluate the function on this grid. Afterwards you feed this table of function values to numpy.gradient to get an array with the numerical derivative for every dimension (variable). Example from here: ``` from numpy import * x,y,z = mgrid[-100:101:25., -100:101:25., -100:101:25.] 
V = 2*x**2 + 3*y**2 - 4*z # just a random function for the potential Ex,Ey,Ez = gradient(V) ``` Without NUMPY You could also calculate the derivative yourself by using the centered difference quotient. This is essentially, what numpy.gradient is doing for every point of your predefined grid.", "best_answers_score":0.8, "library_name":"scipy", "question_url":"https:\/\/stackoverflow.com\/questions\/16078818\/calculating-gradient-with-numpy", "best_answers_votes":27, "question_length":452, "response_length":756 }, { "question":"Implementing a Kolmogorov Smirnov test in python scipy I have a data set on N numbers that I want to test for normality. I know scipy.stats has a kstest function but there are no examples on how to use it and how to interpret the results. Is anyone here familiar with it that can give me some advice? According to the documentation, using kstest returns two numbers, the KS test statistic D and the p-value. If the p-value is greater than the significance level (say 5%), then we cannot reject the hypothesis that the data come from the given distribution. When I do a test run by drawing 10000 samples from a normal distribution and testing for gaussianity: ``` import numpy as np from scipy.stats import kstest mu,sigma = 0.07, 0.89 kstest(np.random.normal(mu,sigma,10000),'norm') ``` I get the following output: (0.04957880905196102, 8.9249710700788814e-22) The p-value is less than 5% which means that we can reject the hypothesis that the data are normally distributed. But the samples were drawn from a normal distribution! Can someone understand and explain to me the discrepancy here? (Does testing for normality assume mu = 0 and sigma = 1? If so, how can I test that my data are gaussianly distributed but with a different mu and sigma?)", "response":"Your data was generated with mu=0.07 and sigma=0.89. You are testing this data against a normal distribution with mean 0 and standard deviation of 1. 
The null hypothesis (H0) is that the distribution of which your data is a sample is equal to the standard normal distribution with mean 0, std deviation 1. The small p-value is indicating that a test statistic as large as D would be expected with probability p-value. In other words, (with p-value ~8.9e-22) it is highly unlikely that H0 is true. That is reasonable, since the means and std deviations don't match. Compare your result with: ``` In [22]: import numpy as np In [23]: import scipy.stats as stats In [24]: stats.kstest(np.random.normal(0,1,10000),'norm') Out[24]: (0.007038739782416259, 0.70477679457831155) ``` To test your data is gaussian, you could shift and rescale it so it is normal with mean 0 and std deviation 1: ``` data=np.random.normal(mu,sigma,10000) normed_data=(data-mu)\/sigma print(stats.kstest(normed_data,'norm')) # (0.0085805670733036798, 0.45316245879609179) ``` Warning: (many thanks to user333700 (aka scipy developer Josef Perktold)) If you don't know mu and sigma, estimating the parameters makes the p-value invalid: ``` import numpy as np import scipy.stats as stats mu = 0.3 sigma = 5 num_tests = 10**5 num_rejects = 0 alpha = 0.05 for i in xrange(num_tests): data = np.random.normal(mu, sigma, 10000) # normed_data = (data - mu) \/ sigma # this is okay # 4915\/100000 = 0.05 rejects at rejection level 0.05 (as expected) normed_data = (data - data.mean()) \/ data.std() # this is NOT okay # 20\/100000 = 0.00 rejects at rejection level 0.05 (not expected) D, pval = stats.kstest(normed_data, 'norm') if pval < alpha: num_rejects += 1 ratio = float(num_rejects) \/ num_tests print('{}\/{} = {:.2f} rejects at rejection level {}'.format( num_rejects, num_tests, ratio, alpha)) ``` prints ``` 20\/100000 = 0.00 rejects at rejection level 0.05 (not expected) ``` which shows that stats.kstest may not reject the expected number of null hypotheses if the sample is normalized using the sample's mean and standard deviation ``` normed_data = (data - data.mean()) 
\/ data.std() # this is NOT okay ```", "best_answers_score":0.8, "library_name":"scipy", "question_url":"https:\/\/stackoverflow.com\/questions\/7903977\/implementing-a-kolmogorov-smirnov-test-in-python-scipy", "best_answers_votes":27, "question_length":1247, "response_length":2177 }, { "question":"Does Python have a function which computes multinomial coefficients? I was looking for a Python library function which computes multinomial coefficients. I could not find any such function in any of the standard libraries. For binomial coefficients (of which multinomial coefficients are a generalization) there is scipy.special.binom and also scipy.misc.comb. Also, numpy.random.multinomial draws samples from a multinomial distribution, and sympy.ntheory.multinomial.multinomial_coefficients returns a dictionary related to multinomial coefficients. However, I could not find a multinomial coefficients function proper, which given a,b,...,z returns (a+b+...+z)!\/(a! b! ... z!). Did I miss it? Is there a good reason there is none available? I would be happy to contribute an efficient implementation to SciPy say. (I would have to figure out how to contribute, as I have never done this). For background, they do come up when expanding (a+b+...+z)^n. Also, they count the ways of depositing a+b+...+z distinct objects into distinct bins such that the first bin contains a objects, etc. I need them occasionally for a Project Euler problem. BTW, other languages do offer this function: Mathematica, MATLAB, Maple.", "response":"To partially answer my own question, here is my simple and fairly efficient implementation of the multinomial function: ``` def multinomial(lst): res, i = 1, 1 for a in lst: for j in range(1,a+1): res *= i res \/\/= j i += 1 return res ``` It seems from the comments so far that no efficient implementation of the function exists in any of the standard libraries. Update (January 2020). 
As Don Hatch has pointed out in the comments, this can be further improved by looking for the largest argument (especially for the case that it dominates all others): ``` def multinomial(lst): res, i = 1, sum(lst) i0 = lst.index(max(lst)) for a in lst[:i0] + lst[i0+1:]: for j in range(1,a+1): res *= i res \/\/= j i -= 1 return res ```", "best_answers_score":0.8, "library_name":"scipy", "question_url":"https:\/\/stackoverflow.com\/questions\/46374185\/does-python-have-a-function-which-computes-multinomial-coefficients", "best_answers_votes":14, "question_length":1215, "response_length":719 }, { "question":"Numpy Indexing: Return the rest A simple example of numpy indexing: ``` In: a = numpy.arange(10) In: sel_id = numpy.arange(5) In: a[sel_id] Out: array([0,1,2,3,4]) ``` How do I return the rest of the array that is not indexed by sel_id? What I can think of is: ``` In: numpy.array([x for x in a if x not in a[sel_id]]) out: array([5,6,7,8,9]) ``` Is there any easier way?", "response":"For this simple 1D case, I'd actually use a boolean mask: ``` a = numpy.arange(10) include_index = numpy.arange(4) include_idx = set(include_index) # Set is more efficient, but doesn't reorder your elements if that is desirable mask = numpy.array([(i in include_idx) for i in range(len(a))]) ``` Now you can get your values: ``` included = a[mask] # array([0, 1, 2, 3]) excluded = a[~mask] # array([4, 5, 6, 7, 8, 9]) ``` Note that a[mask] doesn't necessarily yield the same thing as a[include_index], since the order of include_index matters for the output in that scenario (it should be roughly equivalent to a[sorted(include_index)]). However, since the order of your excluded items isn't well defined, this should work OK.
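A loop-free variant of the same idea: np.setdiff1d computes the complementary index set directly, avoiding the Python-level membership test entirely (a sketch, not necessarily the fastest option for very large arrays):

```python
import numpy as np

a = np.arange(10)
include_index = np.arange(4)

# Indices NOT in include_index, computed without a Python loop
excluded_idx = np.setdiff1d(np.arange(len(a)), include_index)

included = a[include_index]   # array([0, 1, 2, 3])
excluded = a[excluded_idx]    # array([4, 5, 6, 7, 8, 9])
print(excluded)
```

Note that setdiff1d returns sorted indices, which matches the "order of excluded items isn't well defined" caveat above.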
EDIT A better way to create the mask is: ``` mask = np.zeros(a.shape,dtype=bool) mask[include_idx] = True ``` (thanks to seberg).", "best_answers_score":0.8, "library_name":"scipy", "question_url":"https:\/\/stackoverflow.com\/questions\/12518043\/numpy-indexing-return-the-rest", "best_answers_votes":21, "question_length":368, "response_length":856 }, { "question":"Fit a gaussian function I have a histogram (see below) and I am trying to find the mean and standard deviation along with code which fits a curve to my histogram. I think there is something in SciPy or matplotlib that can help, but every example I've tried doesn't work. ``` import matplotlib.pyplot as plt import numpy as np with open('gau_b_g_s.csv') as f: v = np.loadtxt(f, delimiter= ',', dtype=\"float\", skiprows=1, usecols=None) fig, ax = plt.subplots() plt.hist(v, bins=500, color='#7F38EC', histtype='step') plt.title(\"Gaussian\") plt.axis([-1, 2, 0, 20000]) plt.show() ```", "response":"Take a look at this answer for fitting arbitrary curves to data. Basically you can use scipy.optimize.curve_fit to fit any function you want to your data. The code below shows how you can fit a Gaussian to some random data (credit to this SciPy-User mailing list post). ``` import numpy from scipy.optimize import curve_fit import matplotlib.pyplot as plt # Define some test data which is close to Gaussian data = numpy.random.normal(size=10000) hist, bin_edges = numpy.histogram(data, density=True) bin_centres = (bin_edges[:-1] + bin_edges[1:])\/2 # Define model function to be used to fit to the data above: def gauss(x, *p): A, mu, sigma = p return A*numpy.exp(-(x-mu)**2\/(2.*sigma**2)) # p0 is the initial guess for the fitting coefficients (A, mu and sigma above) p0 = [1., 0., 1.] 
coeff, var_matrix = curve_fit(gauss, bin_centres, hist, p0=p0) # Get the fitted curve hist_fit = gauss(bin_centres, *coeff) plt.plot(bin_centres, hist, label='Test data') plt.plot(bin_centres, hist_fit, label='Fitted data') # Finally, let's get the fitting parameters, i.e. the mean and standard deviation: print('Fitted mean =', coeff[1]) print('Fitted standard deviation =', coeff[2]) plt.show() ```", "best_answers_score":0.8, "library_name":"scipy", "question_url":"https:\/\/stackoverflow.com\/questions\/11507028\/fit-a-gaussian-function", "best_answers_votes":49, "question_length":579, "response_length":1187 }, { "question":"Capturing high multi-collinearity in statsmodels Say I fit a model in statsmodels ``` mod = smf.ols('dependent ~ first_category + second_category + other', data=df).fit() ``` When I do mod.summary() I may see the following: ``` Warnings: [1] The condition number is large, 1.59e+05. This might indicate that there are strong multicollinearity or other numerical problems. ``` Sometimes the warning is different (e.g. based on eigenvalues of the design matrix). How can I capture high-multi-collinearity conditions in a variable? Is this warning stored somewhere in the model object? Also, where can I find a description of the fields in summary()?", "response":"You can detect high multi-collinearity by inspecting the eigenvalues of the correlation matrix. A very low eigenvalue shows that the data are collinear, and the corresponding eigenvector shows which variables are collinear.
If there is no collinearity in the data, you would expect that none of the eigen values are close to zero: ``` >>> xs = np.random.randn(100, 5) # independent variables >>> corr = np.corrcoef(xs, rowvar=0) # correlation matrix >>> w, v = np.linalg.eig(corr) # eigen values & eigen vectors >>> w array([ 1.256 , 1.1937, 0.7273, 0.9516, 0.8714]) ``` However, if say x[4] - 2 * x[0] - 3 * x[2] = 0, then ``` >>> noise = np.random.randn(100) # white noise >>> xs[:,4] = 2 * xs[:,0] + 3 * xs[:,2] + .5 * noise # collinearity >>> corr = np.corrcoef(xs, rowvar=0) >>> w, v = np.linalg.eig(corr) >>> w array([ 0.0083, 1.9569, 1.1687, 0.8681, 0.9981]) ``` one of the eigen values (here the very first one), is close to zero. The corresponding eigen vector is: ``` >>> v[:,0] array([-0.4077, 0.0059, -0.5886, 0.0018, 0.6981]) ``` Ignoring almost zero coefficients, above basically says x[0], x[2] and x[4] are colinear (as expected). If one standardizes xs values and multiplies by this eigen vector, the result will hover around zero with small variance: ``` >>> std_xs = (xs - xs.mean(axis=0)) \/ xs.std(axis=0) # standardized values >>> ys = std_xs.dot(v[:,0]) >>> ys.mean(), ys.var() (0, 0.0083) ``` Note that ys.var() is basically the eigen value which was close to zero. So, in order to capture high multi-linearity, look at the eigen values of correlation matrix.", "best_answers_score":0.8, "library_name":"scipy", "question_url":"https:\/\/stackoverflow.com\/questions\/25676145\/capturing-high-multi-collinearity-in-statsmodels", "best_answers_votes":58, "question_length":647, "response_length":1581 }, { "question":"scipy signal find_peaks_cwt not finding the peaks accurately? I've got a 1-D signal in which I'm trying to find the peaks. I'm looking to find them perfectly. I'm currently doing: ``` import scipy.signal as signal peaks = signal.find_peaks_cwt(data, np.arange(100,200)) ``` The following is a graph with red spots which show the location of the peaks as found by find_peaks_cwt(). 
As you can see, the calculated peaks aren't accurate enough. The ones that are really important are the three on the right-hand side. My question: How do I make this more accurate? UPDATE: Data is here: http:\/\/pastebin.com\/KSBTRUmW For some background, what I'm trying to do is locate the space in-between the fingers in an image. What is plotted is the x-coordinate of the contour around the hand. Cyan spots = peaks. If there is a more reliable\/robust approach to this, please leave a comment.", "response":"Solved, solution: Filter the data first: ``` window = signal.general_gaussian(51, p=0.5, sig=20) filtered = signal.fftconvolve(window, data) filtered = (np.average(data) \/ np.average(filtered)) * filtered filtered = np.roll(filtered, -25) ``` Then use argrelextrema as per rapelpy's answer. Result:
Both of these issues are the result of errors introduced by truncation and rounding errors, which always happen with iterative algorithms using floating-point arithmetic. Note that the Matlab results also produced negative eigenvalues. Now, for a more interesting aspect of the issue: why is Matlab's result real, whereas SciPy's result has some complex components? Matlab's eig detects if the input matrix is real symmetric or Hermitian and uses Cholesky factorization when it is. See the description of the chol argument in the eig documentation. This is not done automatically in SciPy. If you want to use an algorithm that exploits the structure of a real symmetric or Hermitian matrix, use scipy.linalg.eigh. For the example in the question: ``` >>> eigh(C, eigvals_only=True) array([ -3.73825923e-17, -1.60154836e-17, 8.11704449e-19, 3.65055777e-17, 7.90175615e-01]) ``` This result is the same as Matlab's, if you round to the same number of digits of precision that Matlab printed.", "best_answers_score":0.8, "library_name":"scipy", "question_url":"https:\/\/stackoverflow.com\/questions\/8765310\/scipy-linalg-eig-return-complex-eigenvalues-for-covariance-matrix", "best_answers_votes":45, "question_length":770, "response_length":1114 }, { "question":"How much of NumPy and SciPy is in C? Are parts of NumPy and\/or SciPy programmed in C\/C++? And how does the overhead of calling C from Python compare to the overhead of calling C from Java and\/or C#? I'm just wondering if Python is a better option than Java or C# for scientific apps. If I look at the shootouts, Python loses by a huge margin. But I guess this is because they don't use 3rd-party libraries in those benchmarks.", "response":"I would question any benchmark which doesn't show the source for each implementation (or did I miss something)? It's entirely possible that either or both of those solutions are coded badly which would result in an unfair appraisal of either or both language's performance. 
[Edit] Oops, now I see the source. As others have pointed out though, it's not using the NumPy\/SciPy libraries so those benchmarks are not going to help you make a decision. I believe the vast majority of NumPy and SciPy is written in C and wrapped in Python for ease of use. It probably depends what you're doing in any of those languages as to how much overhead there is for a particular application. I've used Python for data processing and analysis for a couple of years now so I would say it's certainly fit for purpose. What are you trying to achieve at the end of the day? If you want a fast way to develop readable code, Python is an excellent option and certainly fast enough for a first stab at whatever it is you're trying to solve. Why not have a bash at each for a small subset of your problem and benchmark the results in terms of development time and run time? Then you can make an objective decision based on some relevant data ...or at least that's what I'd do :-)", "best_answers_score":0.8, "library_name":"scipy", "question_url":"https:\/\/stackoverflow.com\/questions\/1825857\/how-much-of-numpy-and-scipy-is-in-c", "best_answers_votes":19, "question_length":426, "response_length":1255 }, { "question":"How to compute cluster assignments from linkage\/distance matrices if you have this hierarchical clustering call in scipy in Python: ``` from scipy.cluster.hierarchy import linkage # dist_matrix is long form distance matrix linkage_matrix = linkage(squareform(dist_matrix), linkage_method) ``` then what's an efficient way to go from this to cluster assignments for individual points? i.e. a vector of length N where N is number of points, where each entry i is the cluster number of point i, given the number of clusters generated by a given threshold thresh on the resulting clustering? To clarify: The cluster number would be the cluster that it's in after applying a threshold to the tree. 
In that case, each point would get a unique cluster assignment, unique in the sense that each point belongs to one \"most specific cluster\" which is defined by the threshold where you cut the dendrogram. I know that scipy.cluster.hierarchy.fclusterdata gives you this cluster assignment as its return value, but I am starting from a custom-made distance matrix and distance metric, so I cannot use fclusterdata. The question boils down to: how can I compute what fclusterdata is computing -- the cluster assignments?", "response":"If I understand you right, that is what fcluster does: scipy.cluster.hierarchy.fcluster(Z, t, criterion='inconsistent', depth=2, R=None, monocrit=None) Forms flat clusters from the hierarchical clustering defined by the linkage matrix Z. ... Returns: An array of length n. T[i] is the flat cluster number to which original observation i belongs. So just call fcluster(linkage_matrix, t), where t is your threshold.", "best_answers_score":0.8, "library_name":"scipy", "question_url":"https:\/\/stackoverflow.com\/questions\/15951711\/how-to-compute-cluster-assignments-from-linkage-distance-matrices", "best_answers_votes":32, "question_length":1239, "response_length":414 }, { "question":"Root mean square of a function in python I want to calculate root mean square of a function in Python. My function is in a simple form like y = f(x). x and y are arrays. I tried Numpy and Scipy Docs and couldn't find anything.", "response":"I'm going to assume that you want to compute the expression given by the following pseudocode: ``` ms = 0 for i = 1 ... N ms = ms + y[i]^2 ms = ms \/ N rms = sqrt(ms) ``` i.e. the square root of the mean of the squared values of elements of y.
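That pseudocode translates directly into standard-library Python; here is a small sketch, just to check the arithmetic (the vectorized NumPy version below is what you would actually use):

```python
import math

def rms(values):
    # mean of the squared values, then the square root -- exactly the pseudocode
    ms = sum(v * v for v in values) / len(values)
    return math.sqrt(ms)

y = [0, 0, 1, 1, 0, 1, 0, 1, 1, 1]
print(rms(y))  # sqrt(0.6) = 0.7745966692414834
```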
In numpy, you can simply square y, take its mean and then its square root as follows: ``` rms = np.sqrt(np.mean(y**2)) ``` So, for example: ``` >>> y = np.array([0, 0, 1, 1, 0, 1, 0, 1, 1, 1]) # Six 1's >>> y.size 10 >>> np.mean(y**2) 0.59999999999999998 >>> np.sqrt(np.mean(y**2)) 0.7745966692414834 ``` Do clarify your question if you mean to ask something else.", "best_answers_score":0.8, "library_name":"scipy", "question_url":"https:\/\/stackoverflow.com\/questions\/40963659\/root-mean-square-of-a-function-in-python", "best_answers_votes":68, "question_length":226, "response_length":607 }, { "question":"Converting image from RGB to HSV color space I couldn't find such function (i.e. RGB_to_HSV()) in Scipy or Matplotlib's documentations, and Google doesn't show pointers, except ActiveState recipe which demonstrates rgb2hsv function, though not usable on Numpy array as is. Does someone know of a shortcut? Edit: Sorry, just found matplotlib.colors.rgb_to_hsv() which is exactly what I was looking for. Should I delete this question?", "response":"Matplotlib provides RGB to HSV conversion function: matplotlib.colors.rgb_to_hsv(): matplotlib.colors.rgb_to_hsv(arr) convert rgb values in a numpy array to hsv values input and output arrays should have shape (M,N,3)", "best_answers_score":0.8, "library_name":"scipy", "question_url":"https:\/\/stackoverflow.com\/questions\/15278323\/converting-image-from-rgb-to-hsv-color-space", "best_answers_votes":53, "question_length":432, "response_length":217 }, { "question":"Chi-Squared test in Python I've used the following code in R to determine how well observed values (20, 20, 0 and 0 for example) fit expected values\/ratios (25% for each of the four cases, for example): ``` > chisq.test(c(20,20,0,0), p=c(0.25, 0.25, 0.25, 0.25)) Chi-squared test for given probabilities data: c(20, 20, 0, 0) X-squared = 40, df = 3, p-value = 1.066e-08 ``` How can I replicate this in Python? 
I've tried using the chisquare function from scipy but the results I obtained were very different; I'm not sure if this is even the correct function to use. I've searched through the scipy documentation, but it's quite daunting as it runs to 1000+ pages; the numpy documentation is almost 50% more than that.", "response":"scipy.stats.chisquare expects observed and expected absolute frequencies, not ratios. You can obtain what you want with ``` >>> observed = np.array([20., 20., 0., 0.]) >>> expected = np.array([.25, .25, .25, .25]) * np.sum(observed) >>> chisquare(observed, expected) (40.0, 1.065509033425585e-08) ``` Although in the case that the expected values are uniformly distributed over the classes, you can leave out the computation of the expected values: ``` >>> chisquare(observed) (40.0, 1.065509033425585e-08) ``` The first returned value is the \u03c7\u00b2 statistic, the second the p-value of the test.", "best_answers_score":0.8, "library_name":"scipy", "question_url":"https:\/\/stackoverflow.com\/questions\/9330114\/chi-squared-test-in-python", "best_answers_votes":37, "question_length":718, "response_length":592 }, { "question":"import problems with scipy.io I've been trying to get started with scipy, but the package is giving me some problems. The tutorial leans heavily on scipy.io, but when I import scipy and try to use scipy.io, I get errors: ``` In [1]: import scipy In [2]: help(scipy.io) --------------------------------------------------------------------------- AttributeError Traceback (most recent call last) \/home\/chris\/dev\/scipy\/ in () ----> 1 help(scipy.io) AttributeError: 'module' object has no attribute 'io' ``` I've run system updates and I uninstalled scipy then installed it again.
Interestingly enough, I can import the module this way: ``` In [1]: import scipy.io ``` But then when I try to use it, I get an error as soon as I use a method: ``` In [2]: arr = scipy.array([[1.0,2.0],[3.0,4.0],[5.0,6.0]]) In [3]: outFile = file('tmpdata1.txt', 'w') In [4]: scipy.io.write_array(outFile, arr) --------------------------------------------------------------------------- AttributeError Traceback (most recent call last) \/home\/chris\/dev\/scipy\/ in () ----> 1 scipy.io.write_array(outFile, arr) AttributeError: 'module' object has no attribute 'write_array' ``` I'm sure I'm missing something embarrassingly basic, but I've not been able to find an answer to this problem on Google or in the stackoverflow archives.", "response":"Two things here. First, you cannot in general access a module in a package by doing import package and then trying to access package.module. You often have to do what you did, import package.module, or (if you don't want to type package.module all the time) you can do from package import module. So you can also do from scipy import io. Second, the scipy.io module does not provide a write_array function. It looks like maybe it used to, but they got rid of it. You may be looking at an outdated tutorial. (What tutorial are you using?) Googling around, it seems they suggest to use numpy's savetxt function instead, so you might want to look into that.", "best_answers_score":0.8, "library_name":"scipy", "question_url":"https:\/\/stackoverflow.com\/questions\/11172623\/import-problems-with-scipy-io", "best_answers_votes":37, "question_length":1305, "response_length":654 }, { "question":"Is there an \"enhanced\" numpy\/scipy dot method? Problem I would like to compute the following using numpy or scipy: ``` Y = A**T * Q * A ``` where A is an m x n matrix, A**T is the transpose of A and Q is an m x m diagonal matrix. Since Q is a diagonal matrix I store only its diagonal elements as a vector.
Ways of solving for Y Currently I can think of two ways to calculate Y: Y = np.dot(np.dot(A.T, np.diag(Q)), A) and Y = np.dot(A.T * Q, A). Clearly option 2 is better than option 1 since no real matrix has to be created with diag(Q) (if this is what numpy really does...) However, both methods suffer from the defect of having to allocate more memory than is really necessary, since A.T * Q and np.dot(A.T, np.diag(Q)) have to be stored along with A in order to calculate Y. Question Is there a method in numpy\/scipy that would eliminate the unnecessary allocation of extra memory where you would only pass two matrices A and B (in my case B is A.T) and a weighting vector Q along with it?", "response":"(w\/r\/t the last sentence of the OP: I am not aware of such a numpy\/scipy method, but w\/r\/t the Question in the OP Title (i.e., improving NumPy dot performance) what's below should be of some help. In other words, my answer is directed to improving performance of most of the steps comprising your function for Y).
First, this should give you a noticeable boost over the vanilla NumPy dot method: ``` >>> from scipy.linalg import blas as FB >>> vx = FB.dgemm(alpha=1., a=v1, b=v2, trans_b=True) ``` Note that the two arrays, v1 and v2, are both in Fortran order here. You can access the memory layout of a NumPy array through the array's flags attribute like so: ``` >>> c = NP.ones((4, 3)) >>> c.flags C_CONTIGUOUS : True # refers to C-contiguous order F_CONTIGUOUS : False # fortran-contiguous OWNDATA : True MASKNA : False OWNMASKNA : False WRITEABLE : True ALIGNED : True UPDATEIFCOPY : False ``` To change the order of one of the arrays so both are aligned, just call the NumPy array constructor, pass in the array, and set the appropriate order flag: ``` >>> c = NP.array(c, order=\"F\") >>> c.flags C_CONTIGUOUS : False F_CONTIGUOUS : True OWNDATA : True MASKNA : False OWNMASKNA : False WRITEABLE : True ALIGNED : True UPDATEIFCOPY : False ``` You can further optimize by exploiting array-order alignment to reduce excess memory consumption caused by copying the original arrays. But why are the arrays copied before being passed to dot? The dot product relies on BLAS operations. These operations require arrays stored in C-contiguous order--it's this constraint that causes the arrays to be copied. On the other hand, the transpose does not effect a copy, though unfortunately it returns the result in Fortran order. Therefore, to remove the performance bottleneck, you need to eliminate the predicate array-copying step; to do that just requires passing both arrays to dot in C-contiguous order.
So to calculate dot(A.T, A) without making an extra copy: ``` >>> import scipy.linalg.blas as FB >>> vx = FB.dgemm(alpha=1.0, a=A.T, b=A.T, trans_b=True) ``` In sum, the expression just above (along with the predicate import statement) can substitute for dot, to supply the same functionality but better performance. You can bind that expression to a function like so: ``` >>> super_dot = lambda v, w: FB.dgemm(alpha=1., a=v.T, b=w.T, trans_b=True) ```", "best_answers_score":0.8, "library_name":"scipy", "question_url":"https:\/\/stackoverflow.com\/questions\/9478791\/is-there-an-enhanced-numpy-scipy-dot-method", "best_answers_votes":26, "question_length":1006, "response_length":2347 }, { "question":"Integrating a multidimensional integral in scipy Motivation: I have a multidimensional integral, which for completeness I have reproduced below. It comes from the computation of the second virial coefficient when there is significant anisotropy: Here W is a function of all the variables. It is a known function, one which I can define a python function for. Programming Question: How do I get scipy to integrate this expression? I was thinking of chaining two triple quads (scipy.integrate.tplquad) together, but I'm worried about performance and accuracy. Is there a higher dimensional integrator in scipy, one that can handle an arbitrary number of nested integrals? If not, what is the best way to do this?", "response":"With a higher-dimensional integral like this, monte carlo methods are often a useful technique - they converge on the answer as the inverse square root of the number of function evaluations, which is better for higher dimensions than you'll generally get out of even fairly sophisticated adaptive methods (unless you know something very specific about your integrand - symmetries that can be exploited, etc.)
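That inverse-square-root convergence is easy to see with the standard library alone. This is a toy sketch (the integrand x*x on [0, 1) is a made-up stand-in, not the W from the question), just to show the mechanics:

```python
import random

def mc_estimate(f, n, seed=12345):
    # plain Monte Carlo: average n uniform samples of f over [0, 1)
    rng = random.Random(seed)
    return sum(f(rng.random()) for _ in range(n)) / n

# toy integrand with a known answer: the integral of x^2 over [0, 1) is 1/3
for n in (100, 10_000, 1_000_000):
    err = abs(mc_estimate(lambda x: x * x, n) - 1.0 / 3.0)
    print(n, err)  # the error shrinks roughly like 1/sqrt(n)
```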
The mcint package performs a monte carlo integration: running with a non-trivial W that is nonetheless integrable so we know the answer we get (note that I've truncated r to be from [0,1); you'll have to do some sort of log transform or something to get that semi-unbounded domain into something tractable for most numerical integrators): ``` import mcint import random import math def w(r, theta, phi, alpha, beta, gamma): return(-math.log(theta * beta)) def integrand(x): r = x[0] theta = x[1] alpha = x[2] beta = x[3] gamma = x[4] phi = x[5] k = 1. T = 1. ww = w(r, theta, phi, alpha, beta, gamma) return (math.exp(-ww\/(k*T)) - 1.)*r*r*math.sin(beta)*math.sin(theta) def sampler(): while True: r = random.uniform(0.,1.) theta = random.uniform(0.,2.*math.pi) alpha = random.uniform(0.,2.*math.pi) beta = random.uniform(0.,2.*math.pi) gamma = random.uniform(0.,2.*math.pi) phi = random.uniform(0.,math.pi) yield (r, theta, alpha, beta, gamma, phi) domainsize = math.pow(2*math.pi,4)*math.pi*1 expected = 16*math.pow(math.pi,5)\/3. 
for nmc in [1000, 10000, 100000, 1000000, 10000000, 100000000]: random.seed(1) result, error = mcint.integrate(integrand, sampler(), measure=domainsize, n=nmc) diff = abs(result - expected) print \"Using n = \", nmc print \"Result = \", result, \"estimated error = \", error print \"Known result = \", expected, \" error = \", diff, \" = \", 100.*diff\/expected, \"%\" print \" \" ``` Running gives ``` Using n = 1000 Result = 1654.19633236 estimated error = 399.360391622 Known result = 1632.10498552 error = 22.0913468345 = 1.35354937522 % Using n = 10000 Result = 1634.88583778 estimated error = 128.824988953 Known result = 1632.10498552 error = 2.78085225405 = 0.170384397984 % Using n = 100000 Result = 1646.72936 estimated error = 41.3384733174 Known result = 1632.10498552 error = 14.6243744747 = 0.8960437352 % Using n = 1000000 Result = 1640.67189792 estimated error = 13.0282663003 Known result = 1632.10498552 error = 8.56691239895 = 0.524899591322 % Using n = 10000000 Result = 1635.52135088 estimated error = 4.12131562436 Known result = 1632.10498552 error = 3.41636536248 = 0.209322647304 % Using n = 100000000 Result = 1631.5982799 estimated error = 1.30214644297 Known result = 1632.10498552 error = 0.506705620147 = 0.0310461413109 % ``` You could greatly speed this up by vectorizing the random number generation, etc. Of course, you can chain the triple integrals as you suggest: ``` import numpy import scipy.integrate import math def w(r, theta, phi, alpha, beta, gamma): return(-math.log(theta * beta)) def integrand(phi, alpha, gamma, r, theta, beta): ww = w(r, theta, phi, alpha, beta, gamma) k = 1. T = 1. return (math.exp(-ww\/(k*T)) - 1.)*r*r*math.sin(beta)*math.sin(theta) # limits of integration def zero(x, y=0): return 0. def one(x, y=0): return 1. 
def pi(x, y=0): return math.pi def twopi(x, y=0): return 2.*math.pi # integrate over phi [0, Pi), alpha [0, 2 Pi), gamma [0, 2 Pi) def secondIntegrals(r, theta, beta): res, err = scipy.integrate.tplquad(integrand, 0., 2.*math.pi, zero, twopi, zero, pi, args=(r, theta, beta)) return res # integrate over r [0, 1), beta [0, 2 Pi), theta [0, 2 Pi) def integral(): return scipy.integrate.tplquad(secondIntegrals, 0., 2.*math.pi, zero, twopi, zero, one) expected = 16*math.pow(math.pi,5)\/3. result, err = integral() diff = abs(result - expected) print \"Result = \", result, \" estimated error = \", err print \"Known result = \", expected, \" error = \", diff, \" = \", 100.*diff\/expected, \"%\" ``` which is slow but gives very good results for this simple case. Which is better is going to come down to how complicated your W is and what your accuracy requirements are. Simple (fast to evaluate) W with high accuracy will push you to this sort of method; complicated (slow to evaluate) W with moderate accuracy requirements will push you towards MC techniques. ``` Result = 1632.10498552 estimated error = 3.59054059995e-11 Known result = 1632.10498552 error = 4.54747350886e-13 = 2.7862628625e-14 % ```", "best_answers_score":0.8, "library_name":"scipy", "question_url":"https:\/\/stackoverflow.com\/questions\/14071704\/integrating-a-multidimensional-integral-in-scipy", "best_answers_votes":36, "question_length":710, "response_length":4393 }, { "question":"matplotlib: limits when using plot and imshow in same axes I've been trying to plot an ellipse into an imshow plot. It works, but plotting the ellipse after plotting the image seems to increase xlim and ylim, resulting in a border, which I'd like to get rid of: Note that there is NO white border directly after calling imshow only. 
My code looks as follows: ``` self.dpi = 100 self.fig = Figure((6.0, 6.0), dpi=self.dpi) self.canvas = FigureCanvas(self.fig) self.canvas.setMinimumSize(800, 400) self.cax = None self.axes = self.fig.add_subplot(111) self.axes.imshow(channel1, interpolation=\"nearest\") self.canvas.draw() self.axes.plot(dat[0], dat[1], \"b-\") ``` I've tried setting the limits before and after calling \"plot\", with no effect ``` # get limits after calling imshow xlim, ylim = pylab.xlim(), pylab.ylim() ... # set limits before\/after calling plot self.axes.set_xlim(xlim) self.axes.set_ylim(ylim) ``` How can I force plot not to increase existing figure limits? Solution (thanks to Joe): ``` #for newer matplotlib versions self.axes.imshow(channel1, interpolation=\"nearest\") self.axes.autoscale(False) self.axes.plot(dat[0], dat[1], \"b-\") #for older matplotlib versions (worked for me using 0.99.1.1) self.axes.imshow(channel1, interpolation=\"nearest\") self.axes.plot(dat[0], dat[1], \"b-\", scalex=False, scaley=False) ```", "response":"What's happening is that the axis is autoscaling to match the extents of each item you plot. Images are autoscaled much tighter than lines, etc (imshow basically calls ax.axis('image')). Getting the axis limits before and setting them after should have worked. (It's cleaner to just do limits = axes.axis() before and axes.axis(limits) after, though.) However, if you don't want things to autoscale, it's best to just turn autoscaling off after the initial plot. Try axes.autoscale(False) after plotting the image. 
As an example, compare this: ``` import matplotlib.pyplot as plt import numpy as np fig, ax = plt.subplots() ax.imshow(np.random.random((10,10))) ax.plot(range(11)) plt.show() ``` With this: ``` import matplotlib.pyplot as plt import numpy as np fig, ax = plt.subplots() ax.imshow(np.random.random((10,10))) ax.autoscale(False) ax.plot(range(11)) plt.show() ```", "best_answers_score":0.8, "library_name":"scipy", "question_url":"https:\/\/stackoverflow.com\/questions\/9120749\/matplotlib-limits-when-using-plot-and-imshow-in-same-axes", "best_answers_votes":35, "question_length":1335, "response_length":876 }, { "question":"what does numpy.apply_along_axis perform exactly? I have come across the numpy.apply_along_axis function in some code. And I don't understand the documentation about it. This is an example of the documentation: ``` >>> def new_func(a): ... \"\"\"Divide elements of a by 2.\"\"\" ... return a * 0.5 >>> b = np.array([[1,2,3], [4,5,6], [7,8,9]]) >>> np.apply_along_axis(new_func, 0, b) array([[ 0.5, 1. , 1.5], [ 2. , 2.5, 3. ], [ 3.5, 4. , 4.5]]) ``` As far I as thought I understood the documentation, I would have expected: ``` array([[ 0.5, 1. , 1.5], [ 4 , 5 , 6 ], [ 7 , 8 , 9 ]]) ``` i.e. having applied the function along the axis [1,2,3] which is axis 0 in [[1,2,3], [4,5,6], [7,8,9]] Obviously I am wrong. Could you correct me ?", "response":"apply_along_axis applies the supplied function along 1D slices of the input array, with the slices taken along the axis you specify. So in your example, new_func is applied over each slice of the array along the first axis. It becomes clearer if you use a vector valued function, rather than a scalar, like this: ``` In [20]: b = np.array([[1,2,3], [4,5,6], [7,8,9]]) In [21]: np.apply_along_axis(np.diff,0,b) Out[21]: array([[3, 3, 3], [3, 3, 3]]) In [22]: np.apply_along_axis(np.diff,1,b) Out[22]: array([[1, 1], [1, 1], [1, 1]]) ``` Here, numpy.diff (i.e. 
the arithmetic difference of adjacent array elements) is applied along each slice of either the first or second axis (dimension) of the input array.", "best_answers_score":0.8, "library_name":"scipy", "question_url":"https:\/\/stackoverflow.com\/questions\/9019581\/what-does-numpy-apply-along-axis-perform-exactly", "best_answers_votes":26, "question_length":730, "response_length":707 }, { "question":"calling dot products and linear algebra operations in Cython? I'm trying to use dot products, matrix inversion and other basic linear algebra operations that are available in numpy from Cython. Functions like numpy.linalg.inv (inversion), numpy.dot (dot product), X.t (transpose of matrix\/array). There's a large overhead to calling numpy.* from Cython functions and the rest of the function is written in Cython, so I'd like to avoid this. If I assume users have numpy installed, is there a way to do something like: ``` #include \"numpy\/npy_math.h\" ``` as an extern, and call these functions? Or alternatively call BLAS directly (or whatever it is that numpy calls for these core operations)? To give an example, imagine you have a function in Cython that does many things and in the end needs to make a computation involving dot products and matrix inverses: ``` cdef myfunc(...): # ... do many things faster than Python could # ... # compute one value using dot products and inv # without using # import numpy as np # np.* val = gammaln(sum(v)) - sum(gammaln(v)) + dot((v - 1).T, log(x).T) ``` how can this be done? If there's a library that implements these in Cython already, I can also use that, but have not found anything. Even if those procedures are less optimized than BLAS directly, not having the overhead of calling numpy Python module from Cython will still make things overall faster. 
Example functions I'd like to call: dot product (np.dot) matrix inversion (np.linalg.inv) matrix multiplication taking transpose (equivalent of x.T in numpy) gammaln function (like scipy.gammaln equivalent, which should be available in C) I realize as it says on numpy mailing list (https:\/\/groups.google.com\/forum\/?fromgroups=#!topic\/cython-users\/XZjMVSIQnTE) that if you call these functions on large matrices, there is no point in doing it from Cython, since calling it from numpy will just result in the majority of the time spent in the optimized C code that numpy calls. However, in my case, I have many calls to these linear algebra operations on small matrices -- in that case, the overhead introduced by repeatedly going from Cython back to numpy and back to Cython will far outweigh the time spent actually computing the operation from BLAS. Therefore, I'd like to keep everything at the C\/Cython level for these simple operations and not go through python. I'd prefer not to go through GSL, since that adds another dependency and since it's unclear if GSL is actively maintained. Since I'm assuming users of the code already have scipy\/numpy installed, I can safely assume that they have all the associated C code that goes along with these libraries, so I just want to be able to tap into that code and call it from Cython. edit: I found a library that wraps BLAS in Cython (https:\/\/github.com\/tokyo\/tokyo) which is close but not what I'm looking for. I'd like to call the numpy\/scipy C functions directly (I'm assuming the user has these installed.)", "response":"Calling BLAS bundled with Scipy is \"fairly\" straightforward, here's one example for calling DGEMM to compute matrix multiplication: https:\/\/gist.github.com\/pv\/5437087 Note that BLAS and LAPACK expect all arrays to be Fortran-contiguous (modulo the lda\/b\/c parameters), hence order=\"F\" and double[::1,:] which are required for correct functioning. 
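Before dropping down to Cython, it can help to exercise the same BLAS routine from plain Python first, as a sanity check that the call does what you expect. A sketch, assuming SciPy is installed (dgemm computes alpha * a @ b):

```python
import numpy as np
from scipy.linalg import blas

# Fortran-ordered inputs avoid internal copies when handing arrays to BLAS
a = np.asfortranarray(np.arange(12, dtype=float).reshape(4, 3))
b = np.asfortranarray(np.arange(15, dtype=float).reshape(3, 5))

c = blas.dgemm(alpha=1.0, a=a, b=b)  # C = 1.0 * A @ B via BLAS
assert np.allclose(c, a @ b)
print(c.shape)  # (4, 5)
```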
Computing inverses can be similarly done by applying the LAPACK function dgesv on the identity matrix. For the signature, see here. All this requires dropping down to rather low-level coding; you need to allocate temporary work arrays yourself etc etc. --- however these can be encapsulated into your own convenience functions, or just reuse the code from tokyo by replacing the lib_* functions with function pointers obtained from Scipy in the above way. If you use Cython's memoryview syntax (double[::1,:]) your transpose is the same x.T as usual. Alternatively, you can compute the transpose by writing a function of your own that swaps elements of the array across the diagonal. Numpy doesn't actually contain this operation, x.T only changes the strides of the array and doesn't move the data around. It would probably be possible to rewrite the tokyo module to use the BLAS\/LAPACK exported by Scipy and bundle it in scipy.linalg, so that you could just do from scipy.linalg.blas cimport dgemm. Pull requests are accepted if someone wants to get down to it. As you can see, it all boils down to passing function pointers around. As alluded to above, Cython does in fact provide its own protocol for exchanging function pointers. For an example, consider from scipy.spatial import qhull; print(qhull.__pyx_capi__) --- those functions could be accessed via from scipy.spatial.qhull cimport XXXX in Cython (they're private though, so don't do that). However, at present, scipy.special does not offer this C-API. It would however in fact be quite simple to provide it, given that the interface module in scipy.special is written in Cython. I don't think there is at the moment any sane and portable way to access the function doing the heavy lifting for gamln (although you could snoop around the UFunc object, but that's not a sane solution :), so at the moment it's probably best to just grab the relevant part of source code from scipy.special and bundle it with your project, or use e.g.
GSL.", "best_answers_score":0.8, "library_name":"scipy", "question_url":"https:\/\/stackoverflow.com\/questions\/16114100\/calling-dot-products-and-linear-algebra-operations-in-cython", "best_answers_votes":26, "question_length":2963, "response_length":2349 }, { "question":"DFT matrix in python What's the easiest way to get the DFT matrix for 2-d DFT in python? I could not find such function in numpy.fft. Thanks!", "response":"The easiest and most likely the fastest method would be using fft from SciPy. ``` import scipy as sp def dftmtx(N): return sp.fft(sp.eye(N)) ``` If you know an even faster way (it might be more complicated) I'd appreciate your input. Just to make it more relevant to the main question - you can also do it with numpy: ``` import numpy as np dftmtx = np.fft.fft(np.eye(N)) ``` When I benchmarked both of them I had the impression the scipy one was marginally faster, but I have not done it thoroughly and it was some time ago, so don't take my word for it. Here's a pretty good source on FFT implementations in python: http:\/\/nbviewer.ipython.org\/url\/jakevdp.github.io\/downloads\/notebooks\/UnderstandingTheFFT.ipynb It's written rather from a speed perspective, but in this case we can actually see that sometimes speed comes with simplicity too.", "best_answers_score":0.8, "library_name":"scipy", "question_url":"https:\/\/stackoverflow.com\/questions\/19739503\/dft-matrix-in-python", "best_answers_votes":23, "question_length":141, "response_length":820 }, { "question":"Scipy.optimize Inequality Constraint - Which side of the inequality is considered? I am using the scipy.optimize module to find optimal input weights that would minimize my output. From the examples I've seen, we define the constraint with a one-sided equation; then we create a variable that's of the type 'inequality'. My question is how does the optimization package know whether the sum of the variables in my constraint need to be smaller than 1 or larger than 1? ...
``` def constraint1(x): return x[0]+x[1]+x[2]+x[3]-1 ``` .... ``` con1 = {'type': 'ineq', 'fun': constraint1} ``` link to full solution I'm using in my example: http:\/\/apmonitor.com\/che263\/index.php\/Main\/PythonOptimization", "response":"Refer to https:\/\/docs.scipy.org\/doc\/scipy-0.18.1\/reference\/tutorial\/optimize.html and scroll down to Constrained minimization of multivariate scalar functions (minimize), you can find that This algorithm allows to deal with constrained minimization problems of the form: where the inequalities are of the form C_j(x) >= 0. So when you define the constraint as ``` def constraint1(x): return x[0]+x[1]+x[2]+x[3]-1 ``` and specify the type of the constraint as ``` con1 = {'type': 'ineq', 'fun': constraint1} ``` it automatically assumes that the constraint is in the standard form x[0]+x[1]+x[2]+x[3]-1>=0 i.e., x[0]+x[1]+x[2]+x[3]>=1", "best_answers_score":0.8, "library_name":"scipy", "question_url":"https:\/\/stackoverflow.com\/questions\/42303470\/scipy-optimize-inequality-constraint-which-side-of-the-inequality-is-considere", "best_answers_votes":43, "question_length":695, "response_length":633 }, { "question":"Draw polygons more efficiently with matplotlib I have a dateset of around 60000 shapes (with lat\/lon coordinates of each corner) which I want to draw on a map using matplotlib and basemap. This is the way I am doing it at the moment: ``` for ii in range(len(data)): lons = np.array([data['lon1'][ii],data['lon3'][ii],data['lon4'][ii],data['lon2'][ii]],'f2') lats = np.array([data['lat1'][ii],data['lat3'][ii],data['lat4'][ii],data['lat2'][ii]],'f2') x,y = m(lons,lats) poly = Polygon(zip(x,y),facecolor=colorval[ii],edgecolor='none') plt.gca().add_patch(poly) ``` However, this takes around 1.5 minutes on my machine and I was thinking whether it is possible to speed things up a little. 
Is there a more efficient way to draw polygons and add them to the map?", "response":"You could consider creating Collections of polygons instead of individual polygons. The relevant docs can be found here: http:\/\/matplotlib.org\/api\/collections_api.html With a example worth picking appart here: http:\/\/matplotlib.org\/examples\/api\/collections_demo.html As an example: ``` import numpy as np import matplotlib.pyplot as plt from matplotlib.collections import PolyCollection import matplotlib as mpl # Generate data. In this case, we'll make a bunch of center-points and generate # verticies by subtracting random offsets from those center-points numpoly, numverts = 100, 4 centers = 100 * (np.random.random((numpoly,2)) - 0.5) offsets = 10 * (np.random.random((numverts,numpoly,2)) - 0.5) verts = centers + offsets verts = np.swapaxes(verts, 0, 1) # In your case, \"verts\" might be something like: # verts = zip(zip(lon1, lat1), zip(lon2, lat2), ...) # If \"data\" in your case is a numpy array, there are cleaner ways to reorder # things to suit. # Color scalar... # If you have rgb values in your \"colorval\" array, you could just pass them # in as \"facecolors=colorval\" when you create the PolyCollection z = np.random.random(numpoly) * 500 fig, ax = plt.subplots() # Make the collection and add it to the plot. coll = PolyCollection(verts, array=z, cmap=mpl.cm.jet, edgecolors='none') ax.add_collection(coll) ax.autoscale_view() # Add a colorbar for the PolyCollection fig.colorbar(coll, ax=ax) plt.show() ``` HTH,", "best_answers_score":0.8, "library_name":"scipy", "question_url":"https:\/\/stackoverflow.com\/questions\/12881848\/draw-polygons-more-efficiently-with-matplotlib", "best_answers_votes":39, "question_length":759, "response_length":1427 }, { "question":"Second Derivative in Python - scipy\/numpy\/pandas I'm trying to take a second derivative in python with two numpy arrays of data. 
For example, the arrays in question look like this: ``` import numpy as np x = np.array([ 120. , 121.5, 122. , 122.5, 123. , 123.5, 124. , 124.5, 125. , 125.5, 126. , 126.5, 127. , 127.5, 128. , 128.5, 129. , 129.5, 130. , 130.5, 131. , 131.5, 132. , 132.5, 133. , 133.5, 134. , 134.5, 135. , 135.5, 136. , 136.5, 137. , 137.5, 138. , 138.5, 139. , 139.5, 140. , 140.5, 141. , 141.5, 142. , 142.5, 143. , 143.5, 144. , 144.5, 145. , 145.5, 146. , 146.5, 147. ]) y = np.array([ 1.25750000e+01, 1.10750000e+01, 1.05750000e+01, 1.00750000e+01, 9.57500000e+00, 9.07500000e+00, 8.57500000e+00, 8.07500000e+00, 7.57500000e+00, 7.07500000e+00, 6.57500000e+00, 6.07500000e+00, 5.57500000e+00, 5.07500000e+00, 4.57500000e+00, 4.07500000e+00, 3.57500000e+00, 3.07500000e+00, 2.60500000e+00, 2.14500000e+00, 1.71000000e+00, 1.30500000e+00, 9.55000000e-01, 6.65000000e-01, 4.35000000e-01, 2.70000000e-01, 1.55000000e-01, 9.00000000e-02, 5.00000000e-02, 2.50000000e-02, 1.50000000e-02, 1.00000000e-02, 1.00000000e-02, 1.00000000e-02, 1.00000000e-02, 1.00000000e-02, 1.00000000e-02, 1.00000000e-02, 5.00000000e-03, 5.00000000e-03, 5.00000000e-03, 5.00000000e-03, 5.00000000e-03, 5.00000000e-03, 5.00000000e-03, 5.00000000e-03, 5.00000000e-03, 5.00000000e-03, 5.00000000e-03, 5.00000000e-03, 5.00000000e-03, 5.00000000e-03, 5.00000000e-03]) ``` I currently then have f(x) = y, and I want d^2 y \/ dx^2. Numerically, I know I can either interpolate the function and take the derivative analytically or use higher order finite-differences. I think that there is enough data to use either, if one or the other is considered faster, more accurate, etc. I have looked at np.interp() and scipy.interpolate with no success, as this returns me a fitted (linear or cubic) spline, but don't know how to get the derivative at that point. Any guidance is much appreciated.", "response":"You can interpolate your data using scipy's 1-D Splines functions. 
The computed spline has a convenient derivative method for computing derivatives. For the data of your example, using UnivariateSpline gives the following fit ``` import matplotlib.pyplot as plt from scipy.interpolate import UnivariateSpline y_spl = UnivariateSpline(x,y,s=0,k=4) plt.semilogy(x,y,'ro',label = 'data') x_range = np.linspace(x[0],x[-1],1000) plt.semilogy(x_range,y_spl(x_range)) ``` The fit seems reasonably good, at least visually. You might want to experiment with the parameters used by UnivariateSpline. The second derivative of the spline fit can be simply obtained as ``` y_spl_2d = y_spl.derivative(n=2) plt.plot(x_range,y_spl_2d(x_range)) ``` The outcome appears somewhat unnatural (in case your data corresponds to some physical process). You may either want to change the spline fit parameters, improve your data (e.g., provide more samples, perform less noisy measurements), or decide on an analytic function to model your data and perform a curve fit (e.g., using scipy's curve_fit)", "best_answers_score":0.8, "library_name":"scipy", "question_url":"https:\/\/stackoverflow.com\/questions\/40226357\/second-derivative-in-python-scipy-numpy-pandas", "best_answers_votes":27, "question_length":1973, "response_length":1074 }, { "question":"Using adaptive step sizes with scipy.integrate.ode The (brief) documentation for scipy.integrate.ode says that two methods (dopri5 and dop853) have stepsize control and dense output. Looking at the examples and the code itself, I can only see a very simple way to get output from an integrator. Namely, it looks like you just step the integrator forward by some fixed dt, get the function value(s) at that time, and repeat. My problem has pretty variable timescales, so I'd like to just get the values at whatever time steps it needs to evaluate to achieve the required tolerances. That is, early on, things are changing slowly, so the output time steps can be big.
But as things get interesting, the output time steps have to be smaller. I don't actually want dense output at equal intervals, I just want the time steps the adaptive function uses. EDIT: Dense output A related notion (almost the opposite) is \"dense output\", whereby the steps taken are as large as the stepper cares to take, but the values of the function are interpolated (usually with accuracy comparable to the accuracy of the stepper) to whatever you want. The Fortran underlying scipy.integrate.ode is apparently capable of this, but ode does not have the interface. odeint, on the other hand, is based on a different code, and does evidently do dense output. (You can output every time your right-hand side is called to see when that happens, and see that it has nothing to do with the output times.) So I could still take advantage of adaptivity, as long as I could decide on the output time steps I want ahead of time. Unfortunately, for my favorite system, I don't even know what the approximate timescales are as functions of time, until I run the integration. So I'll have to combine the idea of taking one integrator step with this notion of dense output. EDIT 2: Dense output again Apparently, scipy 1.0.0 introduced support for dense output through a new interface. In particular, they recommend moving away from scipy.integrate.odeint and towards scipy.integrate.solve_ivp, which has a keyword dense_output. If set to True, the returned object has an attribute sol that you can call with an array of times, which then returns the integrated function's values at those times. That still doesn't solve the problem for this question, but it is useful in many cases.", "response":"Since SciPy 0.13.0, the intermediate results from the dopri family of ODE solvers can now be accessed by a solout callback function.
``` import numpy as np from scipy.integrate import ode import matplotlib.pyplot as plt def logistic(t, y, r): return r * y * (1.0 - y) r = .01 t0 = 0 y0 = 1e-5 t1 = 5000.0 backend = 'dopri5' # backend = 'dop853' solver = ode(logistic).set_integrator(backend) sol = [] def solout(t, y): sol.append([t, *y]) solver.set_solout(solout) solver.set_initial_value(y0, t0).set_f_params(r) solver.integrate(t1) sol = np.array(sol) plt.plot(sol[:,0], sol[:,1], 'b.-') plt.show() ``` Result: The result seems to be slightly different from Tim D's, although they both use the same backend. I suspect this having to do with FSAL property of dopri5. In Tim's approach, I think the result k7 from the seventh stage is discarded, so k1 is calculated afresh. Note: There's a known bug with set_solout not working if you set it after setting initial values. It was fixed as of SciPy 0.17.0.", "best_answers_score":0.8, "library_name":"scipy", "question_url":"https:\/\/stackoverflow.com\/questions\/12926393\/using-adaptive-step-sizes-with-scipy-integrate-ode", "best_answers_votes":17, "question_length":2343, "response_length":1005 }, { "question":"Load just part of an image in python This might be a silly question, but... I have several thousand images that I would like to load into Python and then convert into numpy arrays. Obviously this goes a little slowly. But, I am actually only interested in a small portion of each image. (The same portion, just 100x100 pixels in the center of the image.) Is there any way to load just part of the image to make things go faster? Here is some sample code where I generate some sample images, save them, and load them back in. 
``` import numpy as np import matplotlib.pyplot as plt import Image, time # Generate sample images num_images = 5 for i in range(0,num_images): Z = np.random.rand(2000,2000) print 'saving %i'%i plt.imsave('%03i.png'%i,Z) # load the images for i in range(0,num_images): t = time.time() im = Image.open('%03i.png'%i) w,h = im.size imc = im.crop((w-50,h-50,w+50,h+50)) print 'Time to open: %.4f seconds'%(time.time()-t) # convert them to numpy arrays data = np.array(imc) ```", "response":"Save your files as uncompressed 24-bit BMPs. These store pixel data in a very regular way. Check out the \"Image Data\" portion of this diagram from Wikipedia. Note that most of the complexity in the diagram is just from the headers: For example, let's say you are storing this image (here shown zoomed in): This is what the pixel data section looks like, if it's stored as a 24-bit uncompressed BMP. Note that the data is stored bottom-up, for some reason, and in BGR form instead of RGB, so the first line in the file is the bottom-most line of the image, the second line is the second-bottom-most, etc: ``` 00 00 FF FF FF FF 00 00 FF 00 00 00 FF 00 00 00 ``` That data is explained as follows: ``` | First column | Second Column | Padding -----------+----------------+-----------------+----------- Second Row | 00 00 FF | FF FF FF | 00 00 -----------+----------------+-----------------+----------- First Row | FF 00 00 | 00 FF 00 | 00 00 -----------+----------------+-----------------+----------- ``` or: ``` | First column | Second Column | Padding -----------+----------------+-----------------+----------- Second Row | red | white | 00 00 -----------+----------------+-----------------+----------- First Row | blue | green | 00 00 -----------+----------------+-----------------+----------- ``` The padding is there to pad the row size to a multiple of 4 bytes.
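The layout described above can be verified with a few lines (an illustrative sketch, not part of the original answer): parse the 16 example pixel-data bytes back into RGB pixels, honoring the bottom-up row order, the BGR byte order, and the per-row padding.

```python
# The 16 pixel-data bytes from the 2x2 example above: two 8-byte rows
# (6 BGR bytes + 2 padding bytes each), stored bottom-up.
raw = bytes.fromhex("0000FFFFFFFF0000" "FF000000FF000000")

def parse_pixel_rows(raw, width=2, bytes_per_pixel=3):
    row_size = width * bytes_per_pixel
    padded = row_size + (-row_size) % 4          # round up to a multiple of 4
    rows = [raw[i:i + row_size] for i in range(0, len(raw), padded)]
    rows.reverse()                               # bottom-up storage -> top-to-bottom
    # flip each BGR triple to RGB
    return [[tuple(r[j:j + 3][::-1]) for j in range(0, row_size, 3)] for r in rows]

pixels = parse_pixel_rows(raw)
# top row: blue, green; bottom row: red, white
```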
So, all you have to do is implement a reader for this particular file format, and then calculate the byte offset of where you have to start and stop reading each row: ``` def calc_bytes_per_row(width, bytes_per_pixel): res = width * bytes_per_pixel if res % 4 != 0: res += 4 - res % 4 return res def calc_row_offsets(pixel_array_offset, bmp_width, bmp_height, x, y, row_width): if x + row_width > bmp_width: raise ValueError(\"This is only for calculating offsets within a row\") bytes_per_row = calc_bytes_per_row(bmp_width, 3) whole_row_offset = pixel_array_offset + bytes_per_row * (bmp_height - y - 1) start_row_offset = whole_row_offset + x * 3 end_row_offset = start_row_offset + row_width * 3 return (start_row_offset, end_row_offset) ``` Then you just have to process the proper byte offsets. For example, say you want to read the 400x400 chunk starting at position 500x500 in a 10000x10000 bitmap: ``` def process_row_bytes(row_bytes): ... some efficient way to process the bytes ... bmpf = open(..., \"rb\") pixel_array_offset = ... extract from bmp header ... bmp_width = 10000 bmp_height = 10000 start_x = 500 start_y = 500 end_x = 500 + 400 end_y = 500 + 400 for cur_y in xrange(start_y, end_y): start, end = calc_row_offsets(pixel_array_offset, bmp_width, bmp_height, start_x, cur_y, end_x - start_x) bmpf.seek(start) cur_row_bytes = bmpf.read(end - start) process_row_bytes(cur_row_bytes) ``` Note that it's important how you process the bytes. You can probably do something clever using PIL and just dumping the pixel data into it but I'm not entirely sure. If you do it in an inefficient manner then it might not be worth it. 
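One efficient way to process the row bytes is to reinterpret them with numpy (a sketch, not from the original answer; it assumes the slice contains whole BGR triples and no padding bytes, and the function name is illustrative):

```python
import numpy as np

def row_bytes_to_rgb(row_bytes):
    # Reinterpret the raw BGR byte triples as an (N, 3) uint8 array, then flip to RGB.
    bgr = np.frombuffer(row_bytes, dtype=np.uint8).reshape(-1, 3)
    return bgr[:, ::-1]

row = row_bytes_to_rgb(b"\xff\x00\x00\x00\xff\x00")  # a blue and a green pixel
```

np.frombuffer avoids copying the bytes, so this stays cheap even for wide rows.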
If speed is a huge concern, you might consider writing it with Pyrex or implementing the above in C and just calling it from Python.", "best_answers_score":0.8, "library_name":"scipy", "question_url":"https:\/\/stackoverflow.com\/questions\/19695249\/load-just-part-of-an-image-in-python", "best_answers_votes":10, "question_length":994, "response_length":3136 }, { "question":"sine calculation orders of magnitude slower than cosine tl;dr Of the same numpy array, calculating np.cos takes 3.2 seconds, whereas np.sin runs 548 seconds (nine minutes) on Linux Mint. See this repo for full code. I've got a pulse signal (see image below) which I need to modulate onto an HF carrier, simulating a Laser Doppler Vibrometer. Therefore the signal and its time basis need to be resampled to match the carrier's higher sampling rate. In the following demodulation process both the in-phase carrier cos(omega * t) and the phase-shifted carrier sin(omega * t) are needed. Oddly, the time to evaluate these functions depends highly on the way the time vector has been calculated. The time vector t1 is being calculated using np.linspace directly, t2 uses the method implemented in scipy.signal.resample. ``` pulse = np.load('data\/pulse.npy') # 768 samples pulse_samples = len(pulse) pulse_samplerate = 960 # 960 Hz pulse_duration = pulse_samples \/ pulse_samplerate # here: 0.8 s pulse_time = np.linspace(0, pulse_duration, pulse_samples, endpoint=False) carrier_freq = 40e6 # 40 MHz carrier_samplerate = 100e6 # 100 MHz carrier_samples = pulse_duration * carrier_samplerate # 80 million t1 = np.linspace(0, pulse_duration, carrier_samples) # method used in scipy.signal.resample # https:\/\/github.com\/scipy\/scipy\/blob\/v0.17.0\/scipy\/signal\/signaltools.py#L1754 t2 = np.arange(0, carrier_samples) * (pulse_time[1] - pulse_time[0]) \\ * pulse_samples \/ float(carrier_samples) + pulse_time[0] ``` As can be seen in the picture below, the time vectors are not identical.
At 80 million samples the difference t1 - t2 reaches 1e-8. Calculating the in-phase and shifted carrier of t1 takes 3.2 seconds each on my machine. With t2, however, calculating the shifted carrier takes 540 seconds. Nine minutes. For nearly the same 80 million values. ``` omega_t1 = 2 * np.pi * carrier_frequency * t1 np.cos(omega_t1) # 3.2 seconds np.sin(omega_t1) # 3.3 seconds omega_t2 = 2 * np.pi * carrier_frequency * t2 np.cos(omega_t2) # 3.2 seconds np.sin(omega_t2) # 9 minutes ``` I can reproduce this bug on both my 32-bit laptop and my 64-bit tower, both running Linux Mint 17. On my flat mate's MacBook, however, the \"slow sine\" takes as little time as the other three calculations. I run a Linux Mint 17.03 on a 64-bit AMD processor and Linux Mint 17.2 on 32-bit Intel processor.", "response":"I don't think numpy has anything to do with this: I think you're tripping across a performance bug in the C math library on your system, one which affects sin near large multiples of pi. (I'm using \"bug\" in a pretty broad sense here -- for all I know, since the sine of large floats is poorly defined, the \"bug\" is actually the library behaving correctly to handle corner cases!) On linux, I get: ``` >>> %timeit -n 10000 math.sin(6e7*math.pi) 10000 loops, best of 3: 191 \u00b5s per loop >>> %timeit -n 10000 math.sin(6e7*math.pi+0.12) 10000 loops, best of 3: 428 ns per loop ``` and other Linux-using types from the Python chatroom report ``` 10000 loops, best of 3: 49.4 \u00b5s per loop 10000 loops, best of 3: 206 ns per loop ``` and ``` In [3]: %timeit -n 10000 math.sin(6e7*math.pi) 10000 loops, best of 3: 116 \u00b5s per loop In [4]: %timeit -n 10000 math.sin(6e7*math.pi+0.12) 10000 loops, best of 3: 428 ns per loop ``` but a Mac user reported ``` In [3]: timeit -n 10000 math.sin(6e7*math.pi) 10000 loops, best of 3: 300 ns per loop In [4]: %timeit -n 10000 math.sin(6e7*math.pi+0.12) 10000 loops, best of 3: 361 ns per loop ``` for no order-of-magnitude difference. 
As a workaround, you might try taking things mod 2 pi first: ``` >>> new = np.sin(omega_t2[-1000:] % (2*np.pi)) >>> old = np.sin(omega_t2[-1000:]) >>> abs(new - old).max() 7.83773902468434e-09 ``` which has better performance: ``` >>> %timeit -n 1000 new = np.sin(omega_t2[-1000:] % (2*np.pi)) 1000 loops, best of 3: 63.8 \u00b5s per loop >>> %timeit -n 1000 old = np.sin(omega_t2[-1000:]) 1000 loops, best of 3: 6.82 ms per loop ``` Note that as expected, a similar effect happens for cos, just shifted: ``` >>> %timeit -n 1000 np.cos(6e7*np.pi + np.pi\/2) 1000 loops, best of 3: 37.6 \u00b5s per loop >>> %timeit -n 1000 np.cos(6e7*np.pi + np.pi\/2 + 0.12) 1000 loops, best of 3: 2.46 \u00b5s per loop ```", "best_answers_score":0.8, "library_name":"scipy", "question_url":"https:\/\/stackoverflow.com\/questions\/35815093\/sine-calculation-orders-of-magnitude-slower-than-cosine", "best_answers_votes":18, "question_length":2363, "response_length":1854 }, { "question":"skew normal distribution in scipy Does anyone know how to plot a skew normal distribution with scipy? I suppose that stats.norm class can be used but I just can't figure out how.
Furthermore, how can I estimate the parameters describing the skew normal distribution of a unidimensional dataset?", "response":"From the Wikipedia description, ``` from scipy import linspace from scipy import pi,sqrt,exp from scipy.special import erf from pylab import plot,show def pdf(x): return 1\/sqrt(2*pi) * exp(-x**2\/2) def cdf(x): return (1 + erf(x\/sqrt(2))) \/ 2 def skew(x,e=0,w=1,a=0): t = (x-e) \/ w return 2 \/ w * pdf(t) * cdf(a*t) # You can of course use the scipy.stats.norm versions # return 2 * norm.pdf(t) * norm.cdf(a*t) n = 2**10 e = 1.0 # location w = 2.0 # scale x = linspace(-10,10,n) for a in range(-3,4): p = skew(x,e,w,a) plot(x,p) show() ``` If you want to find the scale, location, and shape parameters from a dataset use scipy.optimize.leastsq, for example using e=1.0,w=2.0 and a=1.0, ``` fzz = skew(x,e,w,a) + norm.rvs(0,0.04,size=n) # fuzzy data def optm(l,x): return skew(x,l[0],l[1],l[2]) - fzz print leastsq(optm,[0.5,0.5,0.5],(x,)) ``` should give you something like, ``` (array([ 1.05206154, 1.96929465, 0.94590444]), 1) ```", "best_answers_score":0.8, "library_name":"scipy", "question_url":"https:\/\/stackoverflow.com\/questions\/5884768\/skew-normal-distribution-in-scipy", "best_answers_votes":43, "question_length":293, "response_length":930 }, { "question":"Why can't I install a Python package with the Python requirement \">=3.8,\"] [tool.poetry.dependencies] python = \"^3.9\" [tool.poetry.dev-dependencies] pytest = \"^5.2\" [build-system] requires = [\"poetry-core>=1.0.0\"] build-backend = \"poetry.core.masonry.api\" ``` When I run poetry add scipy, it tries to install the latest version of SciPy, which right now is 1.8.1. I get the following error: ``` $ poetry add scipy Creating virtualenv scipy-test-4EDXm154-py3.9 in \/home\/mattwelke\/.cache\/pypoetry\/virtualenvs Using version ^1.8.1 for scipy Updating dependencies Resolving dependencies... 
(0.1s) SolverProblemError The current project's Python requirement (>=3.9,<4.0) is not compatible with some of the required packages Python requirement: - scipy requires Python >=3.8,<3.11, so it will not be satisfied for Python >=3.11,<4.0 Because scipy (1.8.1) requires Python >=3.8,<3.11, scipy is forbidden. For scipy, a possible solution would be to set the `python` property to \">=3.9,<3.11\" ``` Following that hint, I tried a narrower range: ``` python = \">=3.10,<3.11\" ``` If I use this value, it works again, allowing me to install the dependency. And this time, it looks like it restricts my project to only being compatible with 3.10, as desired. But it looks a bit verbose. I still don't understand why setting it to \"^3.9\" (or \"^3.10\") didn't work. Is there something I'm missing here? If so, how would I change my pyproject.toml file to make it compatible with this dependency I want to add to my project?", "response":"The caret requirement you specify... ``` [tool.poetry.dependencies] python = \"^3.9\" ``` ...means \"This Python code has compatibility of 3.9 <= python < 4.0\". scipy 1.8.1, however, only declares support for Python >=3.8,<3.11, so the two ranges conflict for Python >=3.11. Capping the upper bound resolves it: ``` python = \">=3.10,<3.11\" ```", "best_answers_score":0.8, "library_name":"scipy", "question_url":"https:\/\/stackoverflow.com\/questions\/73116647\/why-cant-i-install-a-python-package-with-the-python-requirement-3-8-3-11-i", "best_answers_votes":46, "question_length":1152, "response_length":157 }, { "question":"Difference between cosine similarity and cosine distance It looks like scipy.spatial.distance.cdist cosine similarity distance: link to cos distance 1 ``` 1 - u*v\/(||u||||v||) ``` is different from sklearn.metrics.pairwise.cosine_similarity which is link to cos similarity 2 ``` u*v\/||u||||v|| ``` Does anybody know the reason for the different definitions?", "response":"Good question but yes, these are 2 different things but connected by the following equation: Cosine_distance = 1 - cosine_similarity Why? Usually, people use the cosine similarity as a similarity metric between vectors. Now, the distance can be defined as 1-cos_similarity. The intuition behind this is that if 2 vectors are perfectly the same then similarity is 1 (angle=0) and thus, distance is 0 (1-1=0). Similarly you can define the cosine distance for the resulting similarity value range.
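The relation cosine_distance = 1 - cosine_similarity can be checked numerically — a quick sketch, not from the original answer (scipy.spatial.distance.cosine returns the distance):

```python
import numpy as np
from scipy.spatial.distance import cosine

u = np.array([1.0, 2.0, 3.0])
v = np.array([2.0, 0.0, 1.0])

similarity = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))
distance = cosine(u, v)  # scipy returns the cosine *distance*

# distance and 1 - similarity agree to floating-point precision
assert np.isclose(distance, 1.0 - similarity)
```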
Cosine similarity range: \u22121 meaning exactly opposite, 1 meaning exactly the same, 0 indicating orthogonality. References: Scipy wolfram", "best_answers_score":0.8, "library_name":"scipy", "question_url":"https:\/\/stackoverflow.com\/questions\/58381092\/difference-between-cosine-similarity-and-cosine-distance", "best_answers_votes":44, "question_length":348, "response_length":630 }, { "question":"Can scipy.stats identify and mask obvious outliers? With scipy.stats.linregress I am performing a simple linear regression on some sets of highly correlated x,y experimental data, and initially visually inspecting each x,y scatter plot for outliers. More generally (i.e. programmatically) is there a way to identify and mask outliers?", "response":"The statsmodels package has what you need. Look at this little code snippet and its output: ``` # Imports # import statsmodels.api as smapi import statsmodels.graphics as smgraphics # Make data # x = range(30) y = [y*10 for y in x] # Add outlier # x.insert(6,15) y.insert(6,220) # Make graph # regression = smapi.OLS(x, y).fit() figure = smgraphics.regressionplots.plot_fit(regression, 0) # Find outliers # test = regression.outlier_test() outliers = ((x[i],y[i]) for i,t in enumerate(test) if t[2] < 0.5) print 'Outliers: ', list(outliers) ``` Outliers: [(15, 220)] Edit With the newer version of statsmodels, things have changed a bit. Here is a new code snippet that shows the same type of outlier detection. 
``` # Imports # from random import random import statsmodels.api as smapi from statsmodels.formula.api import ols import statsmodels.graphics as smgraphics # Make data # x = range(30) y = [y*(10+random())+200 for y in x] # Add outlier # x.insert(6,15) y.insert(6,220) # Make fit # regression = ols(\"data ~ x\", data=dict(data=y, x=x)).fit() # Find outliers # test = regression.outlier_test() outliers = ((x[i],y[i]) for i,t in enumerate(test.icol(2)) if t < 0.5) print 'Outliers: ', list(outliers) # Figure # figure = smgraphics.regressionplots.plot_fit(regression, 1) # Add line # smgraphics.regressionplots.abline_plot(model_results=regression, ax=figure.axes[0]) ``` Outliers: [(15, 220)]", "best_answers_score":0.8, "library_name":"scipy", "question_url":"https:\/\/stackoverflow.com\/questions\/10231206\/can-scipy-stats-identify-and-mask-obvious-outliers", "best_answers_votes":29, "question_length":334, "response_length":1402 }, { "question":"What is dimension order of numpy shape for image data? I am using the nibabel lib to load data from a nii file. I read the documentation of the lib at http:\/\/nipy.org\/nibabel\/gettingstarted.html, and found that This information is available without the need to load anything of the main image data into the memory. Of course there is also access to the image data as a NumPy array This is my code to load the data and its shape ``` import nibabel as nib img = nib.load('example.nii') data = img.get_data() data = np.squeeze(data) data = np.copy(data, order=\"C\") print data.shape ``` I got the result ``` 128, 128, 64 ``` What is the order of the data shape? Is it Width x Height x Depth? And my input must be arranged as depth, height, width. So I will use input=data.transpose(2,0,1). Is it right?
Thanks all. Update: I found that Numpy will read the image in the order Height x Width x Depth, as in this reference: http:\/\/www.python-course.eu\/images\/axis.jpeg", "response":"OK, here's my take: Using scipy.ndimage.imread('img.jpg', mode='RGB'), the resulting array will always have this order: (H, W, D) i.e. (height, width, depth) because of the terminology that numpy uses for ndarrays (axis=0, axis=1, axis=2) or analogously (Y, X, Z) if one would like to visualize in 3 dimensions. ``` # read image In [21]: img = scipy.ndimage.imread('suza.jpg', mode='RGB') # image shape as (H, W, D) In [22]: img.shape Out[22]: (634, 1366, 3) # transpose to shape as (D, H, W) In [23]: tr_img = img.transpose((-1, 0, 1)) In [24]: tr_img.shape Out[24]: (3, 634, 1366) ``` If you consider the img_shape as a tuple, ``` # index (0, 1, 2) img_shape = (634, 1366, 3) # or index (-3, -2, -1) ``` Choose whichever is a convenient way for you to remember. NOTE: The scipy.ndimage.imread() API has been removed since Scipy 1.2.0. So, it is now recommended to use imageio.imread(), which reads the image and returns an Array, a subclass of numpy array, following the same conventions discussed above. ``` # read image img = imageio.imread('suza.jpg', format='jpg') # convert the image to a numpy array img_np = np.asarray(img) ``` PS: It should also be noted that libraries like tensorflow also (almost) follow the same convention as numpy. tf.image.decode_jpeg() returns: A Tensor of type uint8. 3-D with shape [height, width, channels]", "best_answers_score":0.8, "library_name":"scipy", "question_url":"https:\/\/stackoverflow.com\/questions\/43272848\/what-is-dimension-order-of-numpy-shape-for-image-data", "best_answers_votes":42, "question_length":928, "response_length":1345 }, { "question":"How do I put a constraint on SciPy curve fit? I'm trying to fit the distribution of some experimental values with a custom probability density function.
Obviously, the integral of the resulting function should always be equal to 1, but the results of simple scipy.optimize.curve_fit(function, dataBincenters, dataCounts) never satisfy this condition. What is the best way to solve this problem?", "response":"You can define your own residuals function, including a penalization parameter, as detailed in the code below, where it is known beforehand that the integral along the interval must be 2. If you test without the penalization you will see that what you are getting is the conventional curve_fit: ``` import matplotlib.pyplot as plt import scipy from scipy.optimize import curve_fit, minimize, leastsq from scipy.integrate import quad from scipy import pi, sin x = scipy.linspace(0, pi, 100) y = scipy.sin(x) + (0. + scipy.rand(len(x))*0.4) def func1(x, a0, a1, a2, a3): return a0 + a1*x + a2*x**2 + a3*x**3 # here you include the penalization factor def residuals(p, x, y): integral = quad(func1, 0, pi, args=(p[0], p[1], p[2], p[3]))[0] penalization = abs(2.-integral)*10000 return y - func1(x, p[0], p[1], p[2], p[3]) - penalization popt1, pcov1 = curve_fit(func1, x, y) popt2, pcov2 = leastsq(func=residuals, x0=(1., 1., 1., 1.), args=(x, y)) y_fit1 = func1(x, *popt1) y_fit2 = func1(x, *popt2) plt.scatter(x, y, marker='.') plt.plot(x, y_fit1, color='g', label='curve_fit') plt.plot(x, y_fit2, color='y', label='constrained') plt.legend() plt.xlim(-0.1, 3.5) plt.ylim(0, 1.4) print('Exact integral:', quad(sin, 0, pi)[0]) print('Approx integral1:', quad(func1, 0, pi, args=(popt1[0], popt1[1], popt1[2], popt1[3]))[0]) print('Approx integral2:', quad(func1, 0, pi, args=(popt2[0], popt2[1], popt2[2], popt2[3]))[0]) plt.show() #Exact integral: 2.0 #Approx integral1: 2.60068579748 #Approx integral2: 2.00001911981 ``` Other related questions: SciPy LeastSq Goodness of Fit Estimator", "best_answers_score":0.8, "library_name":"scipy",
"question_url":"https:\/\/stackoverflow.com\/questions\/16541171\/how-do-i-put-a-constraint-on-scipy-curve-fit", "best_answers_votes":23, "question_length":394, "response_length":1589 }, { "question":"AttributeError: 'module' object (scipy) has no attribute *** Why does this error occur? In scipy, the error occurs quite often. ``` >>> import scipy >>> scipy.integrate.trapz(gyroSeries, timeSeries) Traceback (most recent call last): File \"\", line 1, in AttributeError: 'module' object has no attribute 'integrate' >>> ``` I figured out how to solve this problem by doing the following: ``` >>> >>> import scipy.integrate >>> scipy.integrate.trapz(gyroSeries, timeSeries) >>> 1.2 ``` My question: Why does the error occur? Why would that fix the error?", "response":"Most probably because scipy is a library (package) that contains modules, and to import a specific module from the scipy library, you need to specify it and import the module itself. As it's a separate module (sub-package), once you import it, its attributes are available to you by using the regular scipy.module.attribute", "best_answers_score":0.8, "library_name":"scipy", "question_url":"https:\/\/stackoverflow.com\/questions\/18049687\/attributeerror-module-object-scipy-has-no-attribute-why-does-this-error", "best_answers_votes":25, "question_length":552, "response_length":323 }, { "question":"Find the area between two curves plotted in matplotlib (fill_between area) I have a list of x and y values for two curves, both having weird shapes, and I don't have a function for any of them. I need to do two things: Plot it and shade the area between the curves like the image below. Find the total area of this shaded region between the curves. I'm able to plot and shade the area between those curves with fill_between and fill_betweenx in matplotlib, but I have no idea on how to calculate the exact area between them, especially because I don't have a function for any of those curves. Any ideas?
I looked everywhere and can't find a simple solution for this. I'm quite desperate, so any help is much appreciated. Thank you very much! EDIT: For future reference (in case anyone runs into the same problem), here is how I've solved this: connected the first and last node\/point of each curve together, resulting in a big weird-shaped polygon, then used shapely to calculate the polygon's area automatically, which is the exact area between the curves, no matter which way they go or how nonlinear they are. Works like a charm! :) Here is my code: ```py from shapely.geometry import Polygon x_y_curve1 = [(0.121,0.232),(2.898,4.554),(7.865,9.987)] #these are your points for curve 1 (I just put some random numbers) x_y_curve2 = [(1.221,1.232),(3.898,5.554),(8.865,7.987)] #these are your points for curve 2 (I just put some random numbers) polygon_points = [] #creates an empty list where we will append the points to create the polygon for xyvalue in x_y_curve1: polygon_points.append([xyvalue[0],xyvalue[1]]) #append all xy points for curve 1 for xyvalue in x_y_curve2[::-1]: polygon_points.append([xyvalue[0],xyvalue[1]]) #append all xy points for curve 2 in the reverse order (from last point to first point) for xyvalue in x_y_curve1[0:1]: polygon_points.append([xyvalue[0],xyvalue[1]]) #append the first point in curve 1 again, so it \"closes\" the polygon polygon = Polygon(polygon_points) area = polygon.area print(area) ``` EDIT 2: Thank you for the answers. Like Kyle explained, this only works for positive values. If your curves go below 0 (which is not my case, as shown in the example chart), then you would have to work with absolute numbers.", "response":"The area calculation is straightforward in blocks where the two curves don't intersect: that's the trapezium as has been pointed out above. If they intersect, then you create two triangles between x[i] and x[i+1], and you should add the area of the two.
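For the no-crossing case, the trapezium rule can be checked on a tiny example (a minimal numeric sketch, not from the original answer; it is valid only while one curve stays above the other over the whole interval):

```python
import numpy as np

x = np.array([0.0, 1.0, 2.0])
y1 = np.array([3.0, 4.0, 5.0])  # upper curve
y2 = np.array([1.0, 1.0, 2.0])  # lower curve

d = y1 - y2
# One trapezium per interval: 0.5 * (d[i] + d[i+1]) * (x[i+1] - x[i])
area = float(np.sum(0.5 * (d[:-1] + d[1:]) * np.diff(x)))  # 2.5 + 3.0 = 5.5
```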
If you want to do it directly, you should handle the two cases separately. Here's a basic working example to solve your problem. First, I will start with some fake data: ``` #!\/usr\/bin\/python import numpy as np # let us generate fake test data x = np.arange(10) y1 = np.random.rand(10) * 20 y2 = np.random.rand(10) * 20 ``` Now, the main code. Based on your plot, looks like you have y1 and y2 defined at the same X points. Then we define, ``` z = y1-y2 dx = x[1:] - x[:-1] cross_test = np.sign(z[:-1] * z[1:]) ``` cross_test will be negative whenever the two graphs cross. At these points, we want to calculate the x coordinate of the crossover. For simplicity, I will calculate x coordinates of the intersection of all segments of y. For places where the two curves don't intersect, they will be useless values, and we won't use them anywhere. This just keeps the code easier to understand. Suppose you have z1 and z2 at x1 and x2, then we are solving for x0 such that z = 0: ``` # (z2 - z1)\/(x2 - x1) = (z0 - z1) \/ (x0 - x1) = -z1\/(x0 - x1) # x0 = x1 - (x2 - x1) \/ (z2 - z1) * z1 x_intersect = x[:-1] - dx \/ (z[1:] - z[:-1]) * z[:-1] dx_intersect = - dx \/ (z[1:] - z[:-1]) * z[:-1] ``` Where the curves don't intersect, area is simply given by: ``` areas_pos = abs(z[:-1] + z[1:]) * 0.5 * dx # signs of both z are same ``` Where they intersect, we add areas of both triangles: ``` areas_neg = 0.5 * dx_intersect * abs(z[:-1]) + 0.5 * (dx - dx_intersect) * abs(z[1:]) ``` Now, the area in each block x[i] to x[i+1] is to be selected, for which I use np.where: ``` areas = np.where(cross_test = 0) plot(x, y1) plot(x, y2) plot(x, z) plt.vlines(x_intersect[negatives], -20, 20) ```", "best_answers_score":0.8, "library_name":"scipy", "question_url":"https:\/\/stackoverflow.com\/questions\/25439243\/find-the-area-between-two-curves-plotted-in-matplotlib-fill-between-area", "best_answers_votes":7, "question_length":2260, "response_length":1934 }, { "question":"Python eigenvectors: 
differences among numpy.linalg, scipy.linalg and scipy.sparse.linalg Scipy and Numpy have between them three different functions for finding eigenvectors for a given square matrix, these are: numpy.linalg.eig(a) scipy.linalg.eig(a), and scipy.sparse.linalg.eig(A, k) Focusing specifically on the situation that all the optional arguments I've left off the last two are left at their defaults and that a\/A is real-valued, I am curious about the differences among these three which are ambiguous from the documentation - especially: Why does (3) have a note that it can't find all eigenvectors? Why must the other two compute all solutions - why don't they take a k argument? (1) has a note saying that the eigenvalues are returned in no particular order; (3) has an optional argument to control the order. Does (2) make any guarantees about this? Does (3) assume that A is sparse? (mathematically speaking, rather than being represented as a scipy sparse matrix) Can it be inefficient, or even give wrong results, if this assumption doesn't hold? Are there other factors I should consider when choosing among these?", "response":"The special behaviour of the third one has to do with the Lanczos algorithm, which works very well with sparse matrices. The documentation of scipy.sparse.linalg.eig says it uses a wrapper for ARPACK, which in turn uses \"the Implicitly Restarted Arnoldi Method (IRAM) or, in the case of symmetric matrices, the corresponding variant of the Lanczos algorithm.\" (1). Now, the Lanczos algorithm has the property that it works better for large eigenvalues (in fact, it uses the maximum eigenvalue): In practice, this simple algorithm does not work very well for computing very many of the eigenvectors because any round-off error will tend to introduce slight components of the more significant eigenvectors back into the computation, degrading the accuracy of the computation. 
(2) So, whereas the Lanczos algorithm is only an approximation, I guess the other two methods use algorithms that find the exact eigenvalues -- and seemingly all of them, which probably depends on the algorithms used, too.", "best_answers_score":0.8, "library_name":"scipy", "question_url":"https:\/\/stackoverflow.com\/questions\/11083660\/python-eigenvectors-differences-among-numpy-linalg-scipy-linalg-and-scipy-spar", "best_answers_votes":14, "question_length":1135, "response_length":989 }, { "question":"Fast linear interpolation in Numpy \/ Scipy \"along a path\" Let's say that I have data from weather stations at 3 (known) altitudes on a mountain. Specifically, each station records a temperature measurement at its location every minute. I have two kinds of interpolation I'd like to perform. And I'd like to be able to perform each quickly. So let's set up some data: ``` import numpy as np from scipy.interpolate import interp1d import pandas as pd import seaborn as sns np.random.seed(0) N, sigma = 1000., 5 basetemps = 70 + (np.random.randn(N) * sigma) midtemps = 50 + (np.random.randn(N) * sigma) toptemps = 40 + (np.random.randn(N) * sigma) alltemps = np.array([basetemps, midtemps, toptemps]).T # note transpose! trend = np.sin(4 \/ N * np.arange(N)) * 30 trend = trend[:, np.newaxis] altitudes = np.array([500, 1500, 4000]).astype(float) finaltemps = pd.DataFrame(alltemps + trend, columns=altitudes) finaltemps.index.names, finaltemps.columns.names = ['Time'], ['Altitude'] finaltemps.plot() ``` Great, so our temperatures look like this: Interpolate all times for the same altitude: I think this one is pretty straightforward. Say I want to get the temperature at an altitude of 1,000 for each time.
I can just use built-in scipy interpolation methods: ``` interping_function = interp1d(altitudes, finaltemps.values) interped_to_1000 = interping_function(1000) fig, ax = plt.subplots(1, 1, figsize=(8, 5)) finaltemps.plot(ax=ax, alpha=0.15) ax.plot(interped_to_1000, label='Interped') ax.legend(loc='best', title=finaltemps.columns.name) ``` This works nicely. And let's see about speed: ``` %%timeit res = interp1d(altitudes, finaltemps.values)(1000) #-> 1000 loops, best of 3: 207 \u00b5s per loop ``` Interpolate \"along a path\": So now I have a second, related problem. Say I know the altitude of a hiking party as a function of time, and I want to compute the temperature at their (moving) location by linearly interpolating my data through time. In particular, the times at which I know the location of the hiking party are the same times at which I know the temperatures at my weather stations. I can do this without too much effort: ``` location = np.linspace(altitudes[0], altitudes[-1], N) interped_along_path = np.array([interp1d(altitudes, finaltemps.values[i, :])(loc) for i, loc in enumerate(location)]) fig, ax = plt.subplots(1, 1, figsize=(8, 5)) finaltemps.plot(ax=ax, alpha=0.15) ax.plot(interped_along_path, label='Interped') ax.legend(loc='best', title=finaltemps.columns.name) ``` So this works really nicely, but it's important to note that the key line above is using a list comprehension to hide an enormous amount of work. In the previous case, scipy is creating a single interpolation function for us, and evaluating it once on a large amount of data. In this case, scipy is actually constructing N individual interpolating functions and evaluating each once on a small amount of data. This feels inherently inefficient. There is a for loop lurking here (in the list comprehension) and moreover, this just feels flabby.
Not surprisingly, this is much slower than the previous case: ``` %%timeit res = np.array([interp1d(altitudes, finaltemps.values[i, :])(loc) for i, loc in enumerate(location)]) #-> 10 loops, best of 3: 145 ms per loop ``` So the second example runs 1,000 times slower than the first. I.e. consistent with the idea that the heavy lifting is the \"make a linear interpolation function\" step...which is happening 1,000 times in the second example but only once in the first. So, the question: is there a better way to approach the second problem? For example, is there a good way to set it up with 2-dimensional interpolation (which could perhaps handle the case where the times at which the hiking party locations are known are not the times at which the temperatures have been sampled)? Or is there a particularly slick way to handle things here where the times do line up? Or other?", "response":"A linear interpolation between two values y1, y2 at locations x1 and x2, with respect to point xi is simply: ``` yi = y1 + (y2-y1) * (xi-x1) \/ (x2-x1) ``` With some vectorized Numpy expressions we can select the relevant points from the dataset and apply the above function: ``` I = np.searchsorted(altitudes, location) x1 = altitudes[I-1] x2 = altitudes[I] time = np.arange(len(alltemps)) y1 = alltemps[time,I-1] y2 = alltemps[time,I] xI = location yI = y1 + (y2-y1) * (xI-x1) \/ (x2-x1) ``` The trouble is that some points lie on the boundaries of (or even outside of) the known range, which should be taken into account: ``` I = np.searchsorted(altitudes, location) same = (location == altitudes.take(I, mode='clip')) out_of_range = ~same & ((I == 0) | (I == altitudes.size)) I[out_of_range] = 1 # Prevent index-errors x1 = altitudes[I-1] x2 = altitudes[I] time = np.arange(len(alltemps)) y1 = alltemps[time,I-1] y2 = alltemps[time,I] xI = location yI = y1 + (y2-y1) * (xI-x1) \/ (x2-x1) yI[out_of_range] = np.nan ``` Luckily, Scipy already provides ND interpolation, which also just as easily takes care
of the mismatching times, for example: ``` from scipy.interpolate import interpn time = np.arange(len(alltemps)) M = 150 hiketime = np.linspace(time[0], time[-1], M) location = np.linspace(altitudes[0], altitudes[-1], M) xI = np.column_stack((hiketime, location)) yI = interpn((time, altitudes), alltemps, xI) ``` Here's the benchmark code (without any pandas actually, but I did include the solution from the other answer): ``` import numpy as np from scipy.interpolate import interp1d, interpn def original(): return np.array([interp1d(altitudes, alltemps[i, :])(loc) for i, loc in enumerate(location)]) def OP_self_answer(): return np.diagonal(interp1d(altitudes, alltemps)(location)) def interp_checked(): I = np.searchsorted(altitudes, location) same = (location == altitudes.take(I, mode='clip')) out_of_range = ~same & ((I == 0) | (I == altitudes.size)) I[out_of_range] = 1 # Prevent index-errors x1 = altitudes[I-1] x2 = altitudes[I] time = np.arange(len(alltemps)) y1 = alltemps[time,I-1] y2 = alltemps[time,I] xI = location yI = y1 + (y2-y1) * (xI-x1) \/ (x2-x1) yI[out_of_range] = np.nan return yI def scipy_interpn(): time = np.arange(len(alltemps)) xI = np.column_stack((time, location)) yI = interpn((time, altitudes), alltemps, xI) return yI N, sigma = 1000., 5 basetemps = 70 + (np.random.randn(N) * sigma) midtemps = 50 + (np.random.randn(N) * sigma) toptemps = 40 + (np.random.randn(N) * sigma) trend = np.sin(4 \/ N * np.arange(N)) * 30 trend = trend[:, np.newaxis] alltemps = np.array([basetemps, midtemps, toptemps]).T + trend altitudes = np.array([500, 1500, 4000], dtype=float) location = np.linspace(altitudes[0], altitudes[-1], N) funcs = [original, OP_self_answer, interp_checked, scipy_interpn] for func in funcs: print(func.func_name) %timeit func() from itertools import combinations outs = [func() for func in funcs] print('Output allclose:') print([np.allclose(out1, out2) for out1, out2 in combinations(outs, 2)]) ``` With the following result on my system: ```none original 10 loops,
best of 3: 184 ms per loop OP_self_answer 10 loops, best of 3: 89.3 ms per loop interp_checked 1000 loops, best of 3: 224 \u00b5s per loop scipy_interpn 1000 loops, best of 3: 1.36 ms per loop Output allclose: [True, True, True, True, True, True] ``` Scipy's interpn suffers somewhat in terms of speed compared to the very fastest method, but for its generality and ease of use it's definitely the way to go.", "best_answers_score":0.8, "library_name":"scipy", "question_url":"https:\/\/stackoverflow.com\/questions\/33069366\/fast-linear-interpolation-in-numpy-scipy-along-a-path", "best_answers_votes":11, "question_length":3922, "response_length":3505 }, { "question":"What statistics module for python supports one way ANOVA with post hoc tests (Tukey, Scheffe or other)? I have tried looking through multiple statistics modules for Python but can't seem to find any that support one-way ANOVA post hoc tests.", "response":"One-way ANOVA can be used like ``` from scipy import stats f_value, p_value = stats.f_oneway(data1, data2, data3, data4, ...) ``` This is one-way ANOVA, and it returns the F value and P value. There is a significant difference if the P value is below your chosen significance level. The Tukey-Kramer HSD test can be used like ``` from statsmodels.stats.multicomp import pairwise_tukeyhsd print pairwise_tukeyhsd(Data, Group) ``` This performs multiple comparisons. The output is like ``` Multiple Comparison of Means - Tukey HSD,FWER=0.05 ================================================ group1 group2 meandiff lower upper reject ------------------------------------------------ 0 1 -35.2153 -114.8741 44.4434 False 0 2 46.697 -40.4993 133.8932 False 0 3 -7.5709 -87.49 72.3482 False 1 2 81.9123 5.0289 158.7956 True 1 3 27.6444 -40.8751 96.164 False 2 3 -54.2679 -131.4209 22.8852 False ------------------------------------------------ ``` Please refer to this site for how to set the arguments. The tukeyhsd of statsmodels doesn't return a P value.
So, if you want the P value, calculate it from these output values or use R.", "best_answers_score":0.8, "library_name":"scipy", "question_url":"https:\/\/stackoverflow.com\/questions\/16049552\/what-statistics-module-for-python-supports-one-way-anova-with-post-hoc-tests-tu", "best_answers_votes":44, "question_length":241, "response_length":1088 }, { "question":"Correct fitting with scipy curve_fit including errors in x? I'm trying to fit a histogram with some data in it using scipy.optimize.curve_fit. If I want to add an error in y, I can simply do so by applying a weight to the fit. But how do I apply the error in x (i.e. the error due to binning in the case of histograms)?
My question also applies to errors in x when making a linear regression with curve_fit or polyfit; I know how to add errors in y, but not in x. Here is an example (partly from the matplotlib documentation): ``` import numpy as np import pylab as P from scipy.optimize import curve_fit # create the data histogram mu, sigma = 200, 25 x = mu + sigma*P.randn(10000) # define fit function def gauss(x, *p): A, mu, sigma = p return A*np.exp(-(x-mu)**2\/(2*sigma**2)) # the histogram of the data n, bins, patches = P.hist(x, 50, histtype='step') sigma_n = np.sqrt(n) # Adding Poisson errors in y bin_centres = (bins[:-1] + bins[1:])\/2 sigma_x = (bins[1] - bins[0])\/np.sqrt(12) # Binning error in x P.setp(patches, 'facecolor', 'g', 'alpha', 0.75) # fitting and plotting p0 = [700, 200, 25] popt, pcov = curve_fit(gauss, bin_centres, n, p0=p0, sigma=sigma_n, absolute_sigma=True) x = np.arange(100, 300, 0.5) fit = gauss(x, *popt) P.plot(x, fit, 'r--') ``` Now, this fit (when it doesn't fail) does consider the y-errors sigma_n, but I haven't found a way to make it consider sigma_x. I scanned a couple of threads on the scipy mailing list and found out how to use the absolute_sigma value and a post on Stackoverflow about asymmetrical errors, but nothing about errors in both directions. Is it possible to achieve this?", "response":"scipy.optimize.curve_fit uses standard non-linear least squares optimization and therefore only minimizes the deviation in the response variables. If you want errors in the independent variable to be considered, you can try scipy.odr, which uses orthogonal distance regression. As its name suggests it minimizes in both independent and dependent variables. Have a look at the sample below. The fit_type parameter determines whether scipy.odr does full ODR (fit_type=0) or least squares optimization (fit_type=2).
EDIT: Although the example worked, it did not make much sense, since the y data was calculated on the noisy x data, which just resulted in an unequally spaced independent variable. I updated the sample, which now also shows how to use RealData, which allows specifying the standard error of the data instead of the weights. ``` from scipy.odr import ODR, Model, Data, RealData import numpy as np from pylab import * def func(beta, x): y = beta[0]+beta[1]*x+beta[2]*x**3 return y #generate data x = np.linspace(-3,2,100) y = func([-2.3,7.0,-4.0], x) # add some noise x += np.random.normal(scale=0.3, size=100) y += np.random.normal(scale=0.1, size=100) data = RealData(x, y, 0.3, 0.1) model = Model(func) odr = ODR(data, model, [1,0,0]) odr.set_job(fit_type=2) output = odr.run() xn = np.linspace(-3,2,50) yn = func(output.beta, xn) hold(True) plot(x,y,'ro') plot(xn,yn,'k-',label='leastsq') odr.set_job(fit_type=0) output = odr.run() yn = func(output.beta, xn) plot(xn,yn,'g-',label='odr') legend(loc=0) ```", "best_answers_score":0.8, "library_name":"scipy", "question_url":"https:\/\/stackoverflow.com\/questions\/26058792\/correct-fitting-with-scipy-curve-fit-including-errors-in-x", "best_answers_votes":29, "question_length":1142, "response_length":1526 }, { "question":"Constrained Linear Regression in Python I have a classic linear regression problem of the form: y = X b, where y is a response vector, X is a matrix of input variables, and b is the vector of fit parameters I am searching for. Python provides b = numpy.linalg.lstsq( X , y ) for solving problems of this form. However, when I use this I tend to get either extremely large or extremely small values for the components of b. I'd like to perform the same fit, but constrain the values of b between 0 and 255. It looks like scipy.optimize.fmin_slsqp() is an option, but I found it extremely slow for the size of problem I'm interested in (X is something like 3375 by 1500 and hopefully even larger).
Are there any other Python options for performing constrained least squares fits? Or are there Python routines for performing Lasso Regression or Ridge Regression or some other regression method which penalizes large b coefficient values?", "response":"You mention you would find Lasso Regression or Ridge Regression acceptable. These and many other constrained linear models are available in the scikit-learn package. Check out the section on generalized linear models. Usually constraining the coefficients involves some kind of regularization parameter (C or alpha)---some of the models (the ones ending in CV) can use cross-validation to automatically set these parameters. You can also further constrain models to use only positive coefficients---for example, there is an option for this on the Lasso model.", "best_answers_score":0.8, "library_name":"scipy", "question_url":"https:\/\/stackoverflow.com\/questions\/10154922\/constrained-linear-regression-in-python", "best_answers_votes":10, "question_length":931, "response_length":558 }, { "question":"how do I get the subtrees of dendrogram made by scipy.cluster.hierarchy I had some confusion regarding this module (scipy.cluster.hierarchy) ... and still have some! For example, we have the following dendrogram: My question is how can I extract the coloured subtrees (each one represents a cluster) in a nice format, say SIF format?
Now, the code to get the plot above is: ``` import scipy import scipy.cluster.hierarchy as sch import matplotlib.pylab as plt X = scipy.randn(100,2) d = sch.distance.pdist(X) Z = sch.linkage(d,method='complete') P = sch.dendrogram(Z) plt.savefig('plot_dendrogram.png') T = sch.fcluster(Z, 0.5*d.max(), 'distance') #array([4, 5, 3, 2, 2, 3, 5, 2, 2, 5, 2, 2, 2, 3, 2, 3, 2, 5, 4, 5, 2, 5, 2, # 3, 3, 3, 1, 3, 4, 2, 2, 4, 2, 4, 3, 3, 2, 5, 5, 5, 3, 2, 2, 2, 5, 4, # 2, 4, 2, 2, 5, 5, 1, 2, 3, 2, 2, 5, 4, 2, 5, 4, 3, 5, 4, 4, 2, 2, 2, # 4, 2, 5, 2, 2, 3, 3, 2, 4, 5, 3, 4, 4, 2, 1, 5, 4, 2, 2, 5, 5, 2, 2, # 5, 5, 5, 4, 3, 3, 2, 4], dtype=int32) sch.leaders(Z,T) # (array([190, 191, 182, 193, 194], dtype=int32), # array([2, 3, 1, 4,5],dtype=int32)) ``` So now, the output of fcluster() gives the clustering of the nodes (by their id's), and leaders(), described here, is supposed to return 2 arrays: the first one contains the leader nodes of the clusters generated by Z (here we can see we have 5 clusters, as in the plot), and the second one the id's of these clusters. So if leaders() returns, respectively, L and M: L[2]=182 and M[2]=1, then cluster 1 is led by node id 182, which doesn't exist in the observation set X; the documentation says \"... then it corresponds to a non-singleton cluster\". But I can't get it ... Also, I converted Z to a tree by sch.to_tree(Z), which returns an easy-to-use tree object that I want to visualize, but which tool should I use as a graphical platform that manipulates these kinds of tree objects as inputs?", "response":"Answering the part of your question regarding tree manipulation... As explained in another answer, you can read the coordinates of the branches by reading icoord and dcoord from the tree object. For each branch the coordinates are given from the left to the right.
If you want to manually plot the tree you can use something like: ``` def plot_tree(P, pos=None): plt.clf() icoord = scipy.array(P['icoord']) dcoord = scipy.array(P['dcoord']) color_list = scipy.array(P['color_list']) xmin, xmax = icoord.min(), icoord.max() ymin, ymax = dcoord.min(), dcoord.max() if pos: icoord = icoord[pos] dcoord = dcoord[pos] color_list = color_list[pos] for xs, ys, color in zip(icoord, dcoord, color_list): plt.plot(xs, ys, color) plt.xlim(xmin-10, xmax + 0.1*abs(xmax)) plt.ylim(ymin, ymax + 0.1*abs(ymax)) plt.show() ``` Where, in your code, plot_tree(P) gives: The function allows you to select just some branches: ``` plot_tree(P, range(10)) ``` Now you have to know which branches to plot. Maybe the fcluster() output is a little obscure, and another way to find which branches to plot, based on a minimum and a maximum distance tolerance, would be using the output of linkage() directly (Z in the OP's case): ``` dmin = 0.2 dmax = 0.3 pos = scipy.all( (Z[:,2] >= dmin, Z[:,2] <= dmax), axis=0 ).nonzero() plot_tree( P, pos ) ``` Recommended references: How does condensed distance matrix work? (pdist) how to plot and annotate hierarchical clustering dendrograms in scipy\/matplotlib", "best_answers_score":0.8, "library_name":"scipy", "question_url":"https:\/\/stackoverflow.com\/questions\/16883412\/how-do-i-get-the-subtrees-of-dendrogram-made-by-scipy-cluster-hierarchy", "best_answers_votes":25, "question_length":1878, "response_length":1471 }, { "question":"savetxt: How to change the type from float64 to int or double I have been trying to use the savetxt function in numpy. The problem I am running into is that even though I define my variables accordingly, i.e. int() or double(), the text file I am getting out has floats in them. How can I change that?
Input is as follows: pNoise=[int(i), around(pNoise[0], decimals=3), around(pNoise[1], decimals=3), around(pNoise[2], decimals=3)] savetxt line is as follows: savetxt(noutF, pNoisetot) What I expect is: 0 1.567 8.865 Instead I get: 0.000000000000000000e+00 1.015909999999999940e+02 2.600000000000000089e-01", "response":"You can define how the output has to be formatted with the fmt parameter of np.savetxt, e.g.: for floats rounded to five decimals: ``` np.savetxt(\"file.txt\", output, fmt='%10.5f', delimiter='\\t') ``` for integers: ``` np.savetxt(\"file.txt\", output, fmt='%i', delimiter='\\t') ``` Here you can find more information about the possibilities of fmt: http:\/\/docs.scipy.org\/doc\/numpy\/reference\/generated\/numpy.savetxt.html", "best_answers_score":0.8, "library_name":"scipy", "question_url":"https:\/\/stackoverflow.com\/questions\/5346362\/savetxt-how-change-the-type-from-float64-to-int-or-double", "best_answers_votes":39, "question_length":603, "response_length":416 }, { "question":"How to disregard the NaN data point in numpy array and generate the normalized data in Python? Say I have a numpy array that has some float('nan'); I don't want to impute those data now, and I want to first normalize the rest and keep the NaN data in their original places. Is there any way I can do that? Previously I used the normalize function in sklearn.preprocessing, but that function doesn't seem to accept a NaN-containing array as input.", "response":"You can mask your array using the numpy.ma.array function and subsequently apply any numpy operation: ``` import numpy as np a = np.random.rand(10) # Generate random data. a = np.where(a > 0.8, np.nan, a) # Set all data larger than 0.8 to NaN a = np.ma.array(a, mask=np.isnan(a)) # Use a mask to mark the NaNs a_norm = a \/ np.sum(a) # The sum function ignores the masked values. a_norm2 = a \/ np.std(a) # The std function ignores the masked values.
``` You can still access your raw data: ``` print a.data ```", "best_answers_score":0.8, "library_name":"scipy", "question_url":"https:\/\/stackoverflow.com\/questions\/37749900\/how-to-disregard-the-nan-data-point-in-numpy-array-and-generate-the-normalized-d", "best_answers_votes":34, "question_length":430, "response_length":509 }, { "question":"How to change elements in sparse matrix in Python's SciPy? I have built a small code that I want to use for solving eigenvalue problems involving large sparse matrices. It's working fine; all I want to do now is to set some elements in the sparse matrix to zero, i.e. the ones in the very top row (which corresponds to implementing boundary conditions). I can just adjust the column vectors (C0, C1, and C2) below to achieve that. However, I wondered if there is a more direct way. Evidently, NumPy indexing does not work with SciPy's sparse package. ``` import scipy.sparse as sp import scipy.sparse.linalg as la import numpy as np import matplotlib.pyplot as plt #discretize x-axis N = 11 x = np.linspace(-5,5,N) print(x) V = x * x \/ 2 h = len(x)\/(N) hi2 = 1.\/(h**2) #discretize Schroedinger Equation, i.e. build #banded matrix from difference equation C0 = np.ones(N)*30. + V C1 = np.ones(N) * -16. C2 = np.ones(N) * 1. diagonals = np.array([-2,-1,0,1,2]) H = sp.spdiags([C2, C1, C0,C1,C2],[-2,-1,0,1,2], N, N) H *= hi2 * (- 1.\/12.) * (- 1. \/ 2.) #solve for eigenvalues EV = la.eigsh(H,return_eigenvectors = False) #check structure of H plt.figure() plt.spy(H) plt.show() ``` This is a visualisation of the matrix that is built by the code above. I want to set the elements in the first row to zero.", "response":"As suggested in the comments, I'll post the answer that I found to my own question. There are several matrix classes in SciPy's sparse package; they are listed here. One can convert sparse matrices from one class to another.
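A side note (my addition, not part of the original answer): if you need to change many individual elements, the lil_matrix class is designed for cheap item assignment, and you can convert to csr_matrix afterwards for fast arithmetic and solvers. A minimal sketch:

```python
import scipy.sparse as sp

H = sp.lil_matrix((4, 4))   # LIL: cheap element-wise assignment
H[0, 1] = 2.0
H[1, 0] = -1.0
H[2, 3] = 5.0
H_csr = H.tocsr()           # CSR: efficient arithmetic and linear algebra
print(H_csr.nnz)            # number of stored entries
```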
So for what I need to do, I choose to convert my sparse matrix to the class csr_matrix, simply by ``` H = sp.csr_matrix(H) ``` Then I can set the elements in the first row to 0 by using the regular NumPy notation: ``` H[0,0] = 0 H[0,1] = 0 H[0,2] = 0 ``` For completeness, I post the full modified code snippet below. ``` #SciPy Sparse linear algebra takes care of sparse matrix computations #http:\/\/docs.scipy.org\/doc\/scipy\/reference\/sparse.linalg.html import scipy.sparse as sp import scipy.sparse.linalg as la import numpy as np import matplotlib.pyplot as plt #discretize x-axis N = 1100 x = np.linspace(-100,100,N) V = x * x \/ 2. h = len(x)\/(N) hi2 = 1.\/(h**2) #discretize Schroedinger Equation, i.e. build #banded matrix from difference equation C0 = np.ones(N)*30. + V C1 = np.ones(N) * -16. C2 = np.ones(N) * 1. H = sp.spdiags([C2, C1, C0, C1, C2],[-2,-1,0,1,2], N, N) H *= hi2 * (- 1.\/12.) * (- 1. \/ 2.) H = sp.csr_matrix(H) H[0,0] = 0 H[0,1] = 0 H[0,2] = 0 #check structure of H plt.figure() plt.spy(H) plt.show() EV = la.eigsh(H,return_eigenvectors = False) ```", "best_answers_score":0.8, "library_name":"scipy", "question_url":"https:\/\/stackoverflow.com\/questions\/15896588\/how-to-change-elements-in-sparse-matrix-in-pythons-scipy", "best_answers_votes":28, "question_length":1299, "response_length":1300 }, { "question":"Scientific libraries for Lua? [closed] Closed. This question is seeking recommendations for software libraries, tutorials, tools, books, or other off-site resources. It does not meet Stack Overflow guidelines and is not currently accepting answers. Closed 10 years ago. Are there any scientific packages for Lua comparable to Scipy?", "response":"You should try Torch7 (github).
Torch7 has a very nice and efficient vector\/matrix\/tensor numerical library with a Lua front-end. It also has a bunch of functions for computer vision and machine learning. It's pretty recent but getting better quickly.", "best_answers_score":0.8, "library_name":"scipy", "question_url":"https:\/\/stackoverflow.com\/questions\/388172\/scientific-libraries-for-lua", "best_answers_votes":12, "question_length":555, "response_length":251 }, { "question":"Why isn't `curve_fit` able to estimate the covariance of the parameter if the parameter fits exactly? I don't understand why curve_fit isn't able to estimate the covariance of the parameter, thus raising the OptimizeWarning below. The following MCVE explains my problem: MCVE Python snippet ``` from scipy.optimize import curve_fit func = lambda x, a: a * x popt, pcov = curve_fit(f = func, xdata = [1], ydata = [1]) print(popt, pcov) ``` Output ``` \\python-3.4.4\\lib\\site-packages\\scipy\\optimize\\minpack.py:715: OptimizeWarning: Covariance of the parameters could not be estimated category=OptimizeWarning) [ 1.] [[ inf]] ``` For a = 1 the function fits xdata and ydata exactly. Why isn't the error\/variance 0, or something close to 0, but inf instead? There is this quote from the curve_fit SciPy Reference Guide: If the Jacobian matrix at the solution doesn\u2019t have a full rank, then \u2018lm\u2019 method returns a matrix filled with np.inf, on the other hand \u2018trf\u2019 and \u2018dogbox\u2019 methods use Moore-Penrose pseudoinverse to compute the covariance matrix. So, what's the underlying problem? Why doesn't the Jacobian matrix at the solution have a full rank?", "response":"The formula for the covariance of the parameters (Wikipedia) has the number of degrees of freedom in the denominator. The degrees of freedom are computed as (number of data points) - (number of parameters), which is 1 - 1 = 0 in your example. And this is where SciPy checks the number of degrees of freedom before dividing by it.
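A quick way to see the degrees-of-freedom arithmetic (a sketch of my own, assuming NumPy is available; the ddof=1 sample variance is the textbook analogue, not part of the original answer):

```python
import warnings
import numpy as np

n_points, n_params = 1, 1
dof = n_points - n_params   # 0: nothing to divide the residual variance by

# The same arithmetic makes the ddof=1 sample variance of a single
# observation undefined: the sum of squared deviations (0) is divided
# by N - 1 = 0, so NumPy returns nan and emits a RuntimeWarning.
with warnings.catch_warnings():
    warnings.simplefilter('ignore')
    v = np.var([1.0], ddof=1)
print(dof, np.isnan(v))
```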
With xdata = [1, 2], ydata = [1, 2] you would get zero covariance (note that the model still fits exactly: exact fit is not the problem). This is the same sort of issue as sample variance being undefined if the sample size N is 1 (the formula for sample variance has (N-1) in the denominator). If we only took a size-1 sample out of the population, we don't estimate the variance as zero; we know nothing about the variance.", "best_answers_score":0.8, "library_name":"scipy", "question_url":"https:\/\/stackoverflow.com\/questions\/41725377\/why-isnt-curve-fit-able-to-estimate-the-covariance-of-the-parameter-if-the-pa", "best_answers_votes":24, "question_length":1142, "response_length":989 }, { "question":"In SciPy, what is 'slinear' interpolation? I can't find an explanation in the documentation or anywhere online. What does 'slinear' stand for and what does it do?", "response":"Looking at the source of scipy\/interpolate\/interpolate.py, slinear is a spline of order 1: ``` if kind in ['zero', 'slinear', 'quadratic', 'cubic']: order = {'nearest': 0, 'zero': 0,'slinear': 1, 'quadratic': 2, 'cubic': 3}[kind] kind = 'spline' ``` ... ``` if kind in ('linear', 'nearest'): # Make a \"view\" of the y array that is rotated to the interpolation # axis. minval = 2 if kind == 'linear': self._call = self._call_linear elif kind == 'nearest': self.x_bds = (x[1:] + x[:-1]) \/ 2.0 self._call = self._call_nearest else: minval = order + 1 self._call = self._call_spline self._spline = splmake(x, y, order=order) ``` Since the docs for splmake state: ``` def splmake(xk, yk, order=3, kind='smoothest', conds=None): \"\"\" Return a representation of a spline given data-points at internal knots ...
```", "best_answers_score":0.8, "library_name":"scipy", "question_url":"https:\/\/stackoverflow.com\/questions\/17572390\/in-scipy-what-is-slinear-interpolation", "best_answers_votes":24, "question_length":162, "response_length":805 }, { "question":"TypeError: cannot unpack non-iterable int object How can I solve this error, which appears after running my code as follows? I am using the function below and implementing a running-window for loop on it, but end up getting the error below. The for loop works but hangs at a point. ``` def get_grps(s, thresh=-1, Nmin=3): \"\"\" Nmin : int > 0 Min number of consecutive values below threshold. \"\"\" m = np.logical_and.reduce([s.shift(-i).le(thresh) for i in range(Nmin)]) if Nmin > 1: m = pd.Series(m, index=s.index).replace({False: np.NaN}).ffill(limit=Nmin - 1).fillna(False) else: m = pd.Series(m, index=s.index) # Form consecutive groups gps = m.ne(m.shift(1)).cumsum().where(m) # Return None if no groups, else the aggregations if gps.isnull().all(): return 0 else: agg = s.groupby(gps).agg([list, sum, 'size']).reset_index(drop=True) # agg2 = s2.groupby(gps).agg([list, sum, 'size']).reset_index(drop=True) return agg, gps data_spi = [-0.32361498 -0.5229471 0.15702732 0.28753752 -0.01069884 -0.8163699 -1.3169327 0.4413181 0.75815576 1.3858147 0.49990863 -0.06357133 -0.78432 -0.95337325 -1.663739 0.18965477 0.81183237 0.8360347 0.99537593 -0.12197364 -0.31432647 -2.0865853 0.2084263 0.13332903 -0.05270813 -1.0090573 -1.6578217 -1.2969246 -0.70916456 0.70059913 -1.2127264 -0.659762 -1.1612778 -2.1216285 -0.8054617 -0.6293912 -2.2103117 -1.9373081 -2.530625 -2.4089663 -1.950846 -1.6129876] lon = data_spi.lon lat = data_spi.lat print(len(data_spi)) n=6 for x in range(len(lat)): for y in range(len(lon)): if data_spi[0, x, y] != 0: for i in range(len(data_spi)-70): ts = data_spi[i:i+10, x, y].fillna(1) print(ts) # print(np.array(ts)) agg, gps = get_grps(pd.Series(ts), thresh=-1, Nmin=3) duration = np.nanmean(agg['sum']) frequency = len(agg['sum'])
severity = np.abs(np.mean(agg['sum'])) intensity = np.mean(np.abs(agg['sum'] \/ agg['size'])) print(f'intensity {intensity}') ``` I get this error ``` Traceback (most recent call last): File \"\/Users\/mada0007\/PycharmProjects\/Research_ass \/FREQ_MEAN_INT_DUR_CORR.py\", line 80, in agg, gps = get_grps(pd.Series(ts), thresh=-1, Nmin=3) typeError: cannot unpack non-iterable int object ``` How can I resolve this error?", "response":"Just replace return 0 by return 0, 0, or better: raise an error instead of returning 0. When your if condition is True, you only return 0. Then later, when you do agg, gps = get_grps(...), you tell Python to unpack the result of the function. Then, Python expects a 2-length iterable and tries to unpack it, but as it says, it 'cannot unpack non-iterable int object'... So a quick workaround is to return a tuple (0, 0) with return 0, 0, but it is quite bad because you return integers where objects are expected. Your script will crash on the next line duration = np.nanmean(agg['sum']) (since agg is 0). A cleaner solution to handle this case would be to unpack in a second step: ``` def get_grps(s, thresh=-1, Nmin=3): # ... if gps.isnull().all(): return None else: # ... return agg, gps for i in range(len(data_spi)-70): ts = data_spi[i:i+10, x, y].fillna(1) result = get_grps(pd.Series(ts), thresh=-1, Nmin=3) if result is None: break agg, gps = result duration = np.nanmean(agg['sum']) frequency = len(agg['sum']) ```", "best_answers_score":0.8, "library_name":"scipy", "question_url":"https:\/\/stackoverflow.com\/questions\/57722456\/typeerror-cannot-unpack-non-iterable-int-objec", "best_answers_votes":14, "question_length":2154, "response_length":1030 }, { "question":"Integer step size in scipy optimize minimize I have a computer vision algorithm I want to tune up using scipy.optimize.minimize.
Right now I only want to tune up two parameters but the number of parameters might eventually grow, so I would like to use a technique that can do high-dimensional gradient searches. The Nelder-Mead implementation in SciPy seemed like a good fit. I got the code all set up but it seems that the minimize function really wants to use floating point values with a step size that is less than one. The current set of parameters are both integers and one has a step size of one and the other has a step size of two (i.e. the value must be odd; if it isn't, the thing I am trying to optimize will convert it to an odd number). Roughly, one parameter is a window size in pixels and the other parameter is a threshold (a value from 0-255). For what it is worth I am using a fresh build of scipy from the git repo. Does anyone know how to tell scipy to use a specific step size for each parameter? Is there some way I can roll my own gradient function? Is there a scipy flag that could help me out? I am aware that this could be done with a simple parameter sweep, but I would eventually like to apply this code to much larger sets of parameters. The code itself is dead simple: ``` import numpy as np from scipy.optimize import minimize from ScannerUtil import straightenImg import bson def doSingleIteration(parameters): # do some machine vision magic # return the difference between my value and the truth value parameters = np.array([11,10]) res = minimize( doSingleIteration, parameters, method='Nelder-Mead',options={'xtol': 1e-2, 'disp': True,'ftol':1.0,}) #not sure if these params do anything print \"~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\" print res ``` This is what my output looks like. As you can see we are repeating a lot of runs and not getting anywhere in the minimization. ``` *+++++++++++++++++++++++++++++++++++++++++ [ 11. 10.]
<-- Output from scipy minimize {'block_size': 11, 'degree': 10} <-- input to my algorithm rounded and made int +++++++++++++++++++++++++++++++++++++++++ 120 <-- output of the function I am trying to minimize +++++++++++++++++++++++++++++++++++++++++ [ 11.55 10. ] {'block_size': 11, 'degree': 10} +++++++++++++++++++++++++++++++++++++++++ 120 +++++++++++++++++++++++++++++++++++++++++ [ 11. 10.5] {'block_size': 11, 'degree': 10} +++++++++++++++++++++++++++++++++++++++++ 120 +++++++++++++++++++++++++++++++++++++++++ [ 11.55 9.5 ] {'block_size': 11, 'degree': 9} +++++++++++++++++++++++++++++++++++++++++ 120 +++++++++++++++++++++++++++++++++++++++++ [ 11.1375 10.25 ] {'block_size': 11, 'degree': 10} +++++++++++++++++++++++++++++++++++++++++ 120 +++++++++++++++++++++++++++++++++++++++++ [ 11.275 10. ] {'block_size': 11, 'degree': 10} +++++++++++++++++++++++++++++++++++++++++ 120 +++++++++++++++++++++++++++++++++++++++++ [ 11. 10.25] {'block_size': 11, 'degree': 10} +++++++++++++++++++++++++++++++++++++++++ 120 +++++++++++++++++++++++++++++++++++++++++ [ 11.275 9.75 ] {'block_size': 11, 'degree': 9} +++++++++++++++++++++++++++++++++++++++++ 120 +++++++++++++++++++++++++++++++++++++++++ ~~~ SNIP ~~~ +++++++++++++++++++++++++++++++++++++++++ [ 11. 10.0078125] {'block_size': 11, 'degree': 10} +++++++++++++++++++++++++++++++++++++++++ 120 Optimization terminated successfully. Current function value: 120.000000 Iterations: 7 Function evaluations: 27 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ status: 0 nfev: 27 success: True fun: 120.0 x: array([ 11., 10.]) message: 'Optimization terminated successfully.' nit: 7* ```", "response":"Assuming that the function to minimize is arbitrarily complex (nonlinear), this is a very hard problem in general. It cannot be guaranteed to be solved optimal unless you try every possible option. 
I do not know if there are any integer-constrained nonlinear optimizers (I somewhat doubt it) and I will assume you know that Nelder-Mead should work fine if it were a continuous function. Edit: Considering the comment from @Dougal I will just add here: Set up a coarse+fine grid search first; if you then feel like trying whether your Nelder-Mead works (and converges faster), the points below may help... But maybe some points that help: Considering how the whole integer constraint is very difficult, maybe it would be an option to do some simple interpolation to help the optimizer. It should still converge to an integer solution. Of course this requires calculating extra points, but it might solve many other problems. (even in linear integer programming it's common to solve the unconstrained system first AFAIK) Nelder-Mead starts with N+1 points; these are hard-wired in scipy (at least older versions) to (1+0.05) * x0[j] (for j in all dimensions, unless x0[j] is 0), which you will see in your first evaluation steps. Maybe these can be supplied in newer versions, otherwise you could just change\/copy the scipy code (it is pure python) and set it to something more reasonable. Or if you feel that is simpler, scale all input variables down so that (1+0.05)*x0 is of sensible size. Maybe you should cache all function evaluations, since if you use Nelder-Mead I would guess you can always run into duplicate evaluations (at least at the end). You have to check how likely it is that Nelder-Mead will just shrink to a single value and give up, because it always finds the same result. You generally must check if your function is well behaved at all... This optimization is doomed if the function does not change smoothly over the parameter space, and even then it can easily run into local minima if there are any. (since you cached all evaluations - see 2.
- you could at least plot those and have a look at the error landscape without needing to do any extra evaluations)", "best_answers_score":0.8, "library_name":"scipy", "question_url":"https:\/\/stackoverflow.com\/questions\/12180822\/integer-step-size-in-scipy-optimize-minimize", "best_answers_votes":6, "question_length":3591, "response_length":2169 }, { "question":"How to determine what is the probability distribution function from a numpy array? I have searched around and to my surprise it seems that this question has not been answered. I have a Numpy array containing 10000 values from measurements. I have plotted a histogram with Matplotlib, and by visual inspection the values seem to be normally distributed: However, I would like to validate this. I have found a normality test implemented under scipy.stats.mstats.normaltest, but the result says otherwise. I get this output: ``` (masked_array(data = [1472.8855375088663], mask = [False], fill_value = 1e+20) , masked_array(data = [ 0.], mask = False, fill_value = 1e+20) ``` ) which means that the chances that the dataset is normally distributed are 0. I have re-run the experiments and tested them again obtaining the same outcome, and in the \"best\" case the p value was 3.0e-290. I have tested the function with the following code and it seems to do what I want: ``` import numpy import scipy.stats as stats mu, sigma = 0, 0.1 s = numpy.random.normal(mu, sigma, 10000) print stats.normaltest(s) (1.0491016699730547, 0.59182113002186942) ``` If I have understood and used the function correctly it means that the values are not normally distributed. (And honestly I have no idea why there is a difference in the output, i.e. less details.) I was pretty sure that it is a normal distribution (although my knowledge of statistics is basic), and I don't know what the alternative could be. How can I check what the probability distribution function in question is?
EDIT: My Numpy array containing 10000 values is generated like this (I know that's not the best way to populate a Numpy array), and afterwards the normaltest is run: ``` values = numpy.empty(shape=(10000, 1)) for i in range(0, 10000): values[i] = measurement(...) # The function returns a float print normaltest(values) ``` EDIT 2: I have just realised that the discrepancy between the outputs is because I have inadvertently used two different functions (scipy.stats.normaltest() and scipy.stats.mstats.normaltest()), but it does not make a difference since the relevant part of the output is the same regardless of the used function. EDIT 3: Fitting the histogram with the suggestion from askewchan: ``` plt.plot(bin_edges, scipy.stats.norm.pdf(bin_edges, loc=values.mean(), scale=values.std())) ``` results in this: EDIT 4: Fitting the histogram with the suggestion from user user333700: ``` scipy.stats.t.fit(data) ``` results in this:", "response":"Assuming you have used the test correctly, my guess is that you have a small deviation from a normal distribution and because your sample size is so large, even small deviations will lead to a rejection of the null hypothesis of a normal distribution. One possibility is to visually inspect your data by plotting a normed histogram with a large number of bins and the pdf with loc=data.mean() and scale=data.std(). There are alternative tests for normality: statsmodels has Anderson-Darling and Lilliefors (Kolmogorov-Smirnov) tests when the distribution parameters are estimated. However, I expect that the results will not differ much given the large sample size. The main question is whether you want to test whether your sample comes \"exactly\" from a normal distribution, or whether you are just interested in whether your sample comes from a distribution that is very close to the normal distribution, close in terms of practical usage.
To elaborate on the last point: http:\/\/jpktd.blogspot.ca\/2012\/10\/tost-statistically-significant.html http:\/\/www.graphpad.com\/guides\/prism\/6\/statistics\/index.htm?testing_for_equivalence2.htm As the sample size increases a hypothesis test gains more power, that means that the test will be able to reject the null hypothesis of equality even for smaller and smaller differences. If we keep our significance level fixed, then eventually we will reject tiny differences that we don't really care about. An alternative type of hypothesis test is where we want to show that our sample is close to the given point hypothesis, for example two samples have almost the same mean. The problem is that we have to define what our equivalence region is. In the case of goodness of fit tests we need to choose a distance measure and define a threshold for the distance measure between the sample and the hypothesized distribution. I have not found any explanation where intuition would help to choose this distance threshold. stats.normaltest is based on deviations of skew and kurtosis from those of the normal distribution. Anderson-Darling is based on a integral of the weighted squared differences between the cdf. Kolmogorov-Smirnov is based on the maximum absolute difference between the cdf. chisquare for binned data would be based on the weighted sum of squared bin probabilities. and so on. I only ever tried equivalence testing with binned or discretized data, where I used a threshold from some reference cases which was still rather arbitrary. 
In medical equivalence testing there are some predefined standards to specify when two treatments can be considered as equivalent, or similarly as inferior or superior in the one-sided version.", "best_answers_score":0.8, "library_name":"scipy", "question_url":"https:\/\/stackoverflow.com\/questions\/23251759\/how-to-determine-what-is-the-probability-distribution-function-from-a-numpy-arra", "best_answers_votes":5, "question_length":2499, "response_length":2683 }, { "question":"High Pass Filter for image processing in python by using scipy\/numpy I am currently studying image processing. In SciPy, I know there is a median filter in scipy.signal. Is there a filter similar to a high pass filter?", "response":"\"High pass filter\" is a very generic term. There are an infinite number of different \"highpass filters\" that do very different things (e.g. an edge detection filter, as mentioned earlier, is technically a highpass (most are actually a bandpass) filter, but has a very different effect from what you probably had in mind.) At any rate, based on most of the questions you've been asking, you should probably look into scipy.ndimage instead of scipy.signal, especially if you're going to be working with large images (ndimage can perform operations in-place, conserving memory). As a basic example, showing a few different ways of doing things: ``` import matplotlib.pyplot as plt import numpy as np from scipy import ndimage import Image def plot(data, title): plot.i += 1 plt.subplot(2,2,plot.i) plt.imshow(data) plt.gray() plt.title(title) plot.i = 0 # Load the data...
im = Image.open('lena.png') data = np.array(im, dtype=float) plot(data, 'Original') # A very simple and very narrow highpass filter kernel = np.array([[-1, -1, -1], [-1, 8, -1], [-1, -1, -1]]) highpass_3x3 = ndimage.convolve(data, kernel) plot(highpass_3x3, 'Simple 3x3 Highpass') # A slightly \"wider\", but still very simple highpass filter kernel = np.array([[-1, -1, -1, -1, -1], [-1, 1, 2, 1, -1], [-1, 2, 4, 2, -1], [-1, 1, 2, 1, -1], [-1, -1, -1, -1, -1]]) highpass_5x5 = ndimage.convolve(data, kernel) plot(highpass_5x5, 'Simple 5x5 Highpass') # Another way of making a highpass filter is to simply subtract a lowpass # filtered image from the original. Here, we'll use a simple gaussian filter # to \"blur\" (i.e. a lowpass filter) the original. lowpass = ndimage.gaussian_filter(data, 3) gauss_highpass = data - lowpass plot(gauss_highpass, r'Gaussian Highpass, $\\sigma = 3 pixels$') plt.show() ```", "best_answers_score":0.8, "library_name":"scipy", "question_url":"https:\/\/stackoverflow.com\/questions\/6094957\/high-pass-filter-for-image-processing-in-python-by-using-scipy-numpy", "best_answers_votes":54, "question_length":218, "response_length":1774 }, { "question":"Multidimensional Euclidean Distance in Python I want to calculate the Euclidean distance in multiple dimensions (24 dimensions) between 2 arrays. I'm using numpy-Scipy. Here is my code: ``` import numpy,scipy; A=numpy.array([116.629, 7192.6, 4535.66, 279714, 176404, 443608, 295522, 1.18399e+07, 7.74233e+06, 2.85839e+08, 2.30168e+08, 5.6919e+08, 168989, 7.48866e+06, 1.45261e+06, 7.49496e+07, 2.13295e+07, 3.74361e+08, 54.5, 3349.39, 262.614, 16175.8, 3693.79, 205865]); B=numpy.array([0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 151246, 6795630, 4566625, 2.0355328e+08, 1.4250515e+08, 3.2699482e+08, 95635, 4470961, 589043, 29729866, 6124073, 222.3]); ``` However, I used scipy.spatial.distance.cdist(A[numpy.newaxis,:],B,'euclidean') to calculate the Euclidean distance.
But it gave me an error ``` raise ValueError('XB must be a 2-dimensional array.'); ``` I don't seem to understand it. I looked up scipy.spatial.distance.pdist but don't understand how to use it. Is there any better way to do it?", "response":"Perhaps scipy.spatial.distance.euclidean? Examples ``` >>> from scipy.spatial import distance >>> distance.euclidean([1, 0, 0], [0, 1, 0]) 1.4142135623730951 >>> distance.euclidean([1, 1, 0], [0, 1, 0]) 1.0 ```", "best_answers_score":0.8, "library_name":"scipy", "question_url":"https:\/\/stackoverflow.com\/questions\/9414722\/multidimensional-euclidean-distance-in-python", "best_answers_votes":27, "question_length":999, "response_length":210 }, { "question":"multidimensional confidence intervals [closed] Closed. This question needs to be more focused. It is not currently accepting answers. Want to improve this question? Guide the asker to update the question so it focuses on a single, specific problem. Narrowing the question will help others answer the question concisely. You may edit the question if you feel you can improve it yourself. If edited, the question will be reviewed and might be reopened. Closed 4 years ago. Improve this question I have numerous tuples (par1,par2), i.e. points in a 2 dimensional parameter space obtained from repeating an experiment multiple times. I'm looking for a possibility to calculate and visualize confidence ellipses (not sure if that's the correct term for this). Here is an example plot that I found on the web to show what I mean: source: blogspot.ch\/2011\/07\/classification-and-discrimination-with.html So in principle one has to fit a multivariate normal distribution to a 2D histogram of data points, I guess. Can somebody help me with this?
If so, consider something like this (from some code for a paper here: https:\/\/github.com\/joferkington\/oost_paper_code\/blob\/master\/error_ellipse.py): ``` import numpy as np import matplotlib.pyplot as plt from matplotlib.patches import Ellipse def plot_point_cov(points, nstd=2, ax=None, **kwargs): \"\"\" Plots an `nstd` sigma ellipse based on the mean and covariance of a point \"cloud\" (points, an Nx2 array). Parameters ---------- points : An Nx2 array of the data points. nstd : The radius of the ellipse in numbers of standard deviations. Defaults to 2 standard deviations. ax : The axis that the ellipse will be plotted on. Defaults to the current axis. Additional keyword arguments are passed on to the ellipse patch. Returns ------- A matplotlib ellipse artist \"\"\" pos = points.mean(axis=0) cov = np.cov(points, rowvar=False) return plot_cov_ellipse(cov, pos, nstd, ax, **kwargs) def plot_cov_ellipse(cov, pos, nstd=2, ax=None, **kwargs): \"\"\" Plots an `nstd` sigma error ellipse based on the specified covariance matrix (`cov`). Additional keyword arguments are passed on to the ellipse patch artist. Parameters ---------- cov : The 2x2 covariance matrix to base the ellipse on pos : The location of the center of the ellipse. Expects a 2-element sequence of [x0, y0]. nstd : The radius of the ellipse in numbers of standard deviations. Defaults to 2 standard deviations. ax : The axis that the ellipse will be plotted on. Defaults to the current axis. Additional keyword arguments are passed on to the ellipse patch.
Returns ------- A matplotlib ellipse artist \"\"\" def eigsorted(cov): vals, vecs = np.linalg.eigh(cov) order = vals.argsort()[::-1] return vals[order], vecs[:,order] if ax is None: ax = plt.gca() vals, vecs = eigsorted(cov) theta = np.degrees(np.arctan2(*vecs[:,0][::-1])) # Width and height are \"full\" widths, not radius width, height = 2 * nstd * np.sqrt(vals) ellip = Ellipse(xy=pos, width=width, height=height, angle=theta, **kwargs) ax.add_artist(ellip) return ellip if __name__ == '__main__': #-- Example usage ----------------------- # Generate some random, correlated data points = np.random.multivariate_normal( mean=(1,1), cov=[[0.4, 9],[9, 10]], size=1000 ) # Plot the raw points... x, y = points.T plt.plot(x, y, 'ro') # Plot a transparent 3 standard deviation covariance ellipse plot_point_cov(points, nstd=3, alpha=0.5, color='green') plt.show() ```", "best_answers_score":0.8, "library_name":"scipy", "question_url":"https:\/\/stackoverflow.com\/questions\/12301071\/multidimensional-confidence-intervals", "best_answers_votes":40, "question_length":1031, "response_length":2454 }, { "question":"Cannot import name '_centered' from 'scipy.signal.signaltools' Unable to import functions from scipy module. Gives error : ``` from scipy.signal.signaltools import _centered Cannot import name '_centered' from 'scipy.signal.signaltools' scipy.__version__ 1.8.0 ```", "response":"I encountered the same problem while using statsmodels~=0.12.x. Increasing the statsmodels package to version 0.13.2, this import issue is resolved. 
UPDATE with more notes: before: installation of fixed version of statsmodels==0.12.2 which is dependent on scipy there was newly released scipy==1.8.0 - 2022-02-05 when installing it, got this problem: ```sh from statsmodels.tsa.seasonal import seasonal_decompose File \"\/usr\/local\/lib\/python3.8\/site-packages\/statsmodels\/tsa\/seasonal.py\", line 12, in from statsmodels.tsa.filters.filtertools import convolution_filter File \"\/usr\/local\/lib\/python3.8\/site-packages\/statsmodels\/tsa\/filters\/filtertools.py\", line 18, in from scipy.signal.signaltools import _centered as trim_centered ImportError: cannot import name '_centered' from 'scipy.signal.signaltools' (\/usr\/local\/lib\/python3.8\/site-packages\/scipy\/signal\/signaltools.py) ``` after: when bumping up statsmodels to the latest version available 0.13.2 release 2022-02-08, it works If you are not using statsmodels but other package which is dependent on scipy, have a look if there is newer version available (after the release of scipy to v.1.8.0)", "best_answers_score":0.8, "library_name":"scipy", "question_url":"https:\/\/stackoverflow.com\/questions\/71106940\/cannot-import-name-centered-from-scipy-signal-signaltools", "best_answers_votes":22, "question_length":264, "response_length":1150 }, { "question":"How to filter\/smooth with SciPy\/Numpy? I am trying to filter\/smooth signal obtained from a pressure transducer of sampling frequency 50 kHz. A sample signal is shown below: I would like to obtain a smooth signal obtained by loess in MATLAB (I am not plotting the same data, values are different). 
I calculated the power spectral density using matplotlib's psd() function and the power spectral density is also provided below: I have tried using the following code and obtained a filtered signal: ``` import csv import numpy as np import matplotlib.pyplot as plt import scipy as sp from scipy.signal import butter, lfilter, freqz def butter_lowpass(cutoff, fs, order=5): nyq = 0.5 * fs normal_cutoff = cutoff \/ nyq b, a = butter(order, normal_cutoff, btype='low', analog=False) return b, a def butter_lowpass_filter(data, cutoff, fs, order=5): b, a = butter_lowpass(cutoff, fs, order=order) y = lfilter(b, a, data) return y data = np.loadtxt('data.dat', skiprows=2, delimiter=',', unpack=True).transpose() time = data[:,0] pressure = data[:,1] cutoff = 2000 fs = 50000 pressure_smooth = butter_lowpass_filter(pressure, cutoff, fs) figure_pressure_trace = plt.figure(figsize=(5.15, 5.15)) figure_pressure_trace.clf() plot_P_vs_t = plt.subplot(111) plot_P_vs_t.plot(time, pressure, linewidth=1.0) plot_P_vs_t.plot(time, pressure_smooth, linewidth=1.0) plot_P_vs_t.set_ylabel('Pressure (bar)', labelpad=6) plot_P_vs_t.set_xlabel('Time (ms)', labelpad=6) plt.show() plt.close() ``` The output I get is: I need more smoothing, I tried changing the cutoff frequency but still satisfactory results can not be obtained. I can't get the same smoothness by MATLAB. I am sure it can be done in Python, but how? You can find the data here. Update I applied lowess smoothing from statsmodels, this also does not provide satisfactory results.", "response":"Here are a couple suggestions. 
First, try the lowess function from statsmodels with it=0, and tweak the frac argument a bit: ``` In [328]: from statsmodels.nonparametric.smoothers_lowess import lowess In [329]: filtered = lowess(pressure, time, is_sorted=True, frac=0.025, it=0) In [330]: plot(time, pressure, 'r') Out[330]: [] In [331]: plot(filtered[:,0], filtered[:,1], 'b') Out[331]: [] ``` A second suggestion is to use scipy.signal.filtfilt instead of lfilter to apply the Butterworth filter. filtfilt is the forward-backward filter. It applies the filter twice, once forward and once backward, resulting in zero phase delay. Here's a modified version of your script. The significant changes are the use of filtfilt instead of lfilter, and the change of cutoff from 3000 to 1500. You might want to experiment with that parameter--higher values result in better tracking of the onset of the pressure increase, but a value that is too high doesn't filter out the 3kHz (roughly) oscillations after the pressure increase. ``` import numpy as np import matplotlib.pyplot as plt from scipy.signal import butter, filtfilt def butter_lowpass(cutoff, fs, order=5): nyq = 0.5 * fs normal_cutoff = cutoff \/ nyq b, a = butter(order, normal_cutoff, btype='low', analog=False) return b, a def butter_lowpass_filtfilt(data, cutoff, fs, order=5): b, a = butter_lowpass(cutoff, fs, order=order) y = filtfilt(b, a, data) return y data = np.loadtxt('data.dat', skiprows=2, delimiter=',', unpack=True).transpose() time = data[:,0] pressure = data[:,1] cutoff = 1500 fs = 50000 pressure_smooth = butter_lowpass_filtfilt(pressure, cutoff, fs) figure_pressure_trace = plt.figure() figure_pressure_trace.clf() plot_P_vs_t = plt.subplot(111) plot_P_vs_t.plot(time, pressure, 'r', linewidth=1.0) plot_P_vs_t.plot(time, pressure_smooth, 'b', linewidth=1.0) plt.show() plt.close() ``` Here's the plot of the result. Note the deviation in the filtered signal at the right edge. 
To handle that, you can experiment with the padtype and padlen parameters of filtfilt, or, if you know you have enough data, you can discard the edges of the filtered signal.", "best_answers_score":0.8, "library_name":"scipy", "question_url":"https:\/\/stackoverflow.com\/questions\/28536191\/how-to-filter-smooth-with-scipy-numpy", "best_answers_votes":38, "question_length":1827, "response_length":2129 }, { "question":"numpy array creating with a sequence I am on my transitional trip from MATLAB to scipy(+numpy)+matplotlib. I keep having issues when implementing some things. I want to create a simple vector array in three different parts. In MATLAB I would do something like: ``` vector=[0.2,1:60,60.8]; ``` This results in a one dimensional array of 62 positions. I'm trying to implement this using scipy. The closest I am right now is this: ``` a=[[0.2],linspace(1,60,60),[60.8]] ``` However this creates a list, not an array, and hence I cannot reshape it to a vector array. But then, when I do this, I get an error ``` a=array([[0.2],linspace(1,60,60),[60.8]]) ValueError: setting an array element with a sequence. ``` I believe my main obstacle is that I can't figure out how to translate this simple operation in MATLAB: ``` a=[1:2:20]; ``` to numpy. I know how to do it to access positions in an array, although not when creating a sequence. Any help will be appreciated, thanks!", "response":"Well NumPy implements MATLAB's array-creation function, vector, using two functions instead of one--each implicitly specifies a particular axis along which concatenation ought to occur. These functions are: r_ (row-wise concatenation) and c_ (column-wise) So for your example, the NumPy equivalent is: ``` >>> import numpy as NP >>> v = NP.r_[.2, 1:10, 60.8] >>> print(v) [ 0.2 1. 2. 3. 4. 5. 6. 7. 8. 9. 60.8] ``` The column-wise counterpart is: ``` >>> NP.c_[.2, 1:10, 60.8] ``` slice notation works as expected [start:stop:step]: ``` >>> v = NP.r_[.2, 1:25:7, 60.8] >>> v array([ 0.2, 1. 
, 8. , 15. , 22. , 60.8]) ``` Though if an imaginary number is used as the third argument, the slicing notation behaves like linspace: ``` >>> v = NP.r_[.2, 1:25:7j, 60.8] >>> v array([ 0.2, 1. , 5. , 9. , 13. , 17. , 21. , 25. , 60.8]) ``` Otherwise, it behaves like arange: ``` >>> v = NP.r_[.2, 1:25:7, 60.8] >>> v array([ 0.2, 1. , 8. , 15. , 22. , 60.8]) ```", "best_answers_score":0.8, "library_name":"scipy", "question_url":"https:\/\/stackoverflow.com\/questions\/10753528\/numpy-array-creating-with-a-sequence", "best_answers_votes":20, "question_length":971, "response_length":954 }, { "question":"Pass args for solve_ivp (new SciPy ODE API) For solving simple ODEs using SciPy, I used to use the odeint function, with form: ``` scipy.integrate.odeint(func, y0, t, args=(), Dfun=None, col_deriv=0, full_output=0, ml=None, mu=None, rtol=None, atol=None, tcrit=None, h0=0.0, hmax=0.0, hmin=0.0, ixpr=0, mxstep=0, mxhnil=0, mxordn=12, mxords=5, printmessg=0)[source] ``` where a simple function to be integrated could include additional arguments of the form: ``` def dy_dt(t, y, arg1, arg2): # processing code here ``` In SciPy 1.0, it seems the ode and odeint funcs have been replaced by a newer solve_ivp method. ``` scipy.integrate.solve_ivp(fun, t_span, y0, method='RK45', t_eval=None, dense_output=False, events=None, vectorized=False, **options) ``` However, this doesn't seem to offer an args parameter, nor any indication in the documentation as to implementing the passing of args. Therefore, I wonder if arg passing is possible with the new API, or is this a feature that has yet to be added? (It would seem an oversight to me if this feature has been intentionally removed?) Reference: https:\/\/docs.scipy.org\/doc\/scipy\/reference\/integrate.html
``` And they argue that there is already enough overhead for this not to matter.", "best_answers_score":0.8, "library_name":"scipy", "question_url":"https:\/\/stackoverflow.com\/questions\/48245765\/pass-args-for-solve-ivp-new-scipy-ode-api", "best_answers_votes":16, "question_length":1155, "response_length":240 }, { "question":"Matlab vs Python: Reshape So I found this: When converting MATLAB code it might be necessary to first reshape a matrix to a linear sequence, perform some indexing operations and then reshape back. As reshape (usually) produces views onto the same storage, it should be possible to do this fairly efficiently. Note that the scan order used by reshape in Numpy defaults to the 'C' order, whereas MATLAB uses the Fortran order. If you are simply converting to a linear sequence and back this doesn't matter. But if you are converting reshapes from MATLAB code which relies on the scan order, then this MATLAB code: ``` z = reshape(x,3,4); ``` should become ``` z = x.reshape(3,4,order='F').copy() ``` in Numpy. I have a multidimensional 16*2 array called mafs, when I do in MATLAB: ``` mafs2 = reshape(mafs,[4,4,2]) ``` I get something different than when in python I do: ``` mafs2 = reshape(mafs,(4,4,2)) ``` or even ``` mafs2 = mafs.reshape((4,4,2),order='F').copy() ``` Any help on this? 
Thank you all.", "response":"Example: MATLAB: ``` >> mafs = [(1:16)' (17:32)'] mafs = 1 17 2 18 3 19 4 20 5 21 6 22 7 23 8 24 9 25 10 26 11 27 12 28 13 29 14 30 15 31 16 32 >> reshape(mafs,[4 4 2]) ans(:,:,1) = 1 5 9 13 2 6 10 14 3 7 11 15 4 8 12 16 ans(:,:,2) = 17 21 25 29 18 22 26 30 19 23 27 31 20 24 28 32 ``` Python: ``` >>> import numpy as np >>> mafs = np.c_[np.arange(1,17), np.arange(17,33)] >>> mafs.shape (16, 2) >>> mafs[:,0] array([ 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16]) >>> mafs[:,1] array([17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32]) >>> r = np.reshape(mafs, (4,4,2), order=\"F\") >>> r.shape (4, 4, 2) >>> r[:,:,0] array([[ 1, 5, 9, 13], [ 2, 6, 10, 14], [ 3, 7, 11, 15], [ 4, 8, 12, 16]]) >>> r[:,:,1] array([[17, 21, 25, 29], [18, 22, 26, 30], [19, 23, 27, 31], [20, 24, 28, 32]]) ```", "best_answers_score":0.8, "library_name":"scipy", "question_url":"https:\/\/stackoverflow.com\/questions\/11892358\/matlab-vs-python-reshape", "best_answers_votes":35, "question_length":1002, "response_length":811 }, { "question":"Plotting a histogram with a function line on top I'm trying to do a little bit of distribution plotting and fitting in Python using SciPy for stats and matplotlib for the plotting. I'm having good luck with some things like creating a histogram: ``` seed(2) alpha=5 loc=100 beta=22 data=ss.gamma.rvs(alpha,loc=loc,scale=beta,size=5000) myHist = hist(data, 100, normed=True) ``` Brilliant! I can even take the same gamma parameters and plot the line function of the probability distribution function (after some googling): ``` rv = ss.gamma(5,100,22) x = np.linspace(0,600) h = plt.plot(x, rv.pdf(x)) ``` How would I go about plotting the histogram myHist with the PDF line h superimposed on top of the histogram? I'm hoping this is trivial, but I have been unable to figure it out.", "response":"just put both pieces together. 
``` import scipy.stats as ss import numpy as np import matplotlib.pyplot as plt alpha, loc, beta=5, 100, 22 data=ss.gamma.rvs(alpha,loc=loc,scale=beta,size=5000) myHist = plt.hist(data, 100, normed=True) rv = ss.gamma(alpha,loc,beta) x = np.linspace(0,600) h = plt.plot(x, rv.pdf(x), lw=2) plt.show() ``` to make sure you get what you want in any specific plot instance, try to create a figure object first ``` import scipy.stats as ss import numpy as np import matplotlib.pyplot as plt # setting up the axes fig = plt.figure(figsize=(8,8)) ax = fig.add_subplot(111) # now plot alpha, loc, beta=5, 100, 22 data=ss.gamma.rvs(alpha,loc=loc,scale=beta,size=5000) myHist = ax.hist(data, 100, normed=True) rv = ss.gamma(alpha,loc,beta) x = np.linspace(0,600) h = ax.plot(x, rv.pdf(x), lw=2) # show plt.show() ```", "best_answers_score":0.8, "library_name":"scipy", "question_url":"https:\/\/stackoverflow.com\/questions\/11315641\/plotting-a-histogram-with-a-function-line-on-top", "best_answers_votes":20, "question_length":781, "response_length":838 }, { "question":"Plot a (polar) color wheel based on a colormap using Python\/Matplotlib I am trying to create a color wheel in Python, preferably using Matplotlib. The following works OK: ``` import numpy as np import matplotlib as mpl import matplotlib.pyplot as plt xval = np.arange(0, 2*pi, 0.01) yval = np.ones_like(xval) colormap = plt.get_cmap('hsv') norm = mpl.colors.Normalize(0.0, 2*np.pi) ax = plt.subplot(1, 1, 1, polar=True) ax.scatter(xval, yval, c=xval, s=300, cmap=colormap, norm=norm, linewidths=0) ax.set_yticks([]) ``` However, this attempt has two serious drawbacks. First, when saving the resulting figure as a vector (figure_1.svg), the color wheel consists (as expected) of 621 different shapes, corresponding to the different (x,y) values being plotted. Although the result looks like a circle, it isn't really. 
I would greatly prefer to use an actual circle, defined by a few path points and Bezier curves between them, as in e.g. matplotlib.patches.Circle. This seems to me the 'proper' way of doing it, and the result would look nicer (no banding, better gradient, better anti-aliasing). Second (relatedly), the final plotted markers (the last few before 2*pi) overlap the first few. It's very hard to see in the pixel rendering, but if you zoom in on the vector-based rendering you can clearly see the last disc overlap the first few. I tried using different markers (. or |), but none of them go around the second issue. Bottom line: can I draw a circle in Python\/Matplotlib which is defined in the proper vector\/Bezier curve way, and which has an edge color defined according to a colormap (or, failing that, an arbitrary color gradient)?", "response":"One way I have found is to produce a colormap and then project it onto a polar axis. Here is a working example - it includes a nasty hack, though (clearly commented). I'm sure there's a way to either adjust limits or (harder) write your own Transform to get around it, but I haven't quite managed that yet. I thought the bounds on the call to Normalize would do that, but apparently not. ``` import matplotlib.pyplot as plt import numpy as np from matplotlib import cm import matplotlib as mpl fig = plt.figure() display_axes = fig.add_axes([0.1,0.1,0.8,0.8], projection='polar') display_axes._direction = 2*np.pi ## This is a nasty hack - using the hidden field to ## multiply the values such that 1 become 2*pi ## this field is supposed to take values 1 or -1 only!! 
norm = mpl.colors.Normalize(0.0, 2*np.pi) # Plot the colorbar onto the polar axis # note - use orientation horizontal so that the gradient goes around # the wheel rather than centre out quant_steps = 2056 cb = mpl.colorbar.ColorbarBase(display_axes, cmap=cm.get_cmap('hsv',quant_steps), norm=norm, orientation='horizontal') # aesthetics - get rid of border and axis labels cb.outline.set_visible(False) display_axes.set_axis_off() plt.show() # Replace with plt.savefig if you want to save a file ``` This produces If you want a ring rather than a wheel, use this before plt.show() or plt.savefig ``` display_axes.set_rlim([-1,1]) ``` This gives As per @EelkeSpaak in comments - if you save the graphic as an SVG as per the OP, here is a tip for working with the resulting graphic: The little elements of the resulting SVG image are touching and non-overlapping. This leads to faint grey lines in some renderers (Inkscape, Adobe Reader, probably not in print). A simple solution to this is to apply a small (e.g. 120%) scaling to each of the individual gradient elements, using e.g. Inkscape or Illustrator. Note you'll have to apply the transform to each element separately (the mentioned software provides functionality to do this automatically), rather than to the whole drawing, otherwise it has no effect.", "best_answers_score":0.8, "library_name":"scipy", "question_url":"https:\/\/stackoverflow.com\/questions\/31940285\/plot-a-polar-color-wheel-based-on-a-colormap-using-python-matplotlib", "best_answers_votes":22, "question_length":1650, "response_length":2078 }, { "question":"Clustering geo location coordinates (lat,long pairs) using KMeans algorithm with Python Using the following code to cluster geolocation coordinates results in 3 clusters: ``` import numpy as np import matplotlib.pyplot as plt from scipy.cluster.vq import kmeans2, whiten coordinates= np.array([ [lat, long], [lat, long], ... 
[lat, long] ]) x, y = kmeans2(whiten(coordinates), 3, iter = 20) plt.scatter(coordinates[:,0], coordinates[:,1], c=y); plt.show() ``` Is it right to use Kmeans for location clustering, as it uses Euclidean distance and not Haversine formula as a distance function?", "response":"k-means is not a good algorithm to use for spatial clustering, for the reasons you meantioned. Instead, you could do this clustering job using scikit-learn's DBSCAN with the haversine metric and ball-tree algorithm. This tutorial demonstrates clustering latitude-longitude spatial data with DBSCAN\/haversine and avoids all those Euclidean-distance problems: ``` df = pd.read_csv('gps.csv') coords = df.as_matrix(columns=['lat', 'lon']) db = DBSCAN(eps=eps, min_samples=ms, algorithm='ball_tree', metric='haversine').fit(np.radians(coords)) ``` Note that this specifically uses scikit-learn v0.15, as some earlier\/later versions seem to require a full distance matrix to be computed. Also notice that the eps value is in radians and that .fit() takes the coordinates in radian units for the haversine metric.", "best_answers_score":0.8, "library_name":"scipy", "question_url":"https:\/\/stackoverflow.com\/questions\/24762435\/clustering-geo-location-coordinates-lat-long-pairs-using-kmeans-algorithm-with", "best_answers_votes":27, "question_length":589, "response_length":807 }, { "question":"Fastest 2D convolution or image filter in Python Several users have asked about the speed or memory consumption of image convolutions in numpy or scipy [1, 2, 3, 4]. From the responses and my experience using Numpy, I believe this may be a major shortcoming of numpy compared to Matlab or IDL. None of the answers so far have addressed the overall question, so here it is: \"What is the fastest method for computing a 2D convolution in Python?\" Common python modules are fair game: numpy, scipy, and PIL (others?). 
For the sake of a challenging comparison, I'd like to propose the following rules: Input matrices are 2048x2048 and 32x32, respectively. Single or double precision floating point are both acceptable. Time spent converting your input matrix to the appropriate format doesn't count -- just the convolution step. Replacing the input matrix with your output is acceptable (does any python library support that?) Direct DLL calls to common C libraries are alright -- lapack or scalapack PyCUDA is right out. It's not fair to use your custom GPU hardware.", "response":"It really depends on what you want to do... A lot of the time, you don't need a fully generic (read: slower) 2D convolution... (i.e. If the filter is separable, you use two 1D convolutions instead... This is why the various scipy.ndimage.gaussian, scipy.ndimage.uniform, are much faster than the same thing implemented as a generic n-D convolutions.) At any rate, as a point of comparison: ``` t = timeit.timeit(stmt='ndimage.convolve(x, y, output=x)', number=1, setup=\"\"\" import numpy as np from scipy import ndimage x = np.random.random((2048, 2048)).astype(np.float32) y = np.random.random((32, 32)).astype(np.float32) \"\"\") print t ``` This takes 6.9 sec on my machine... Compare this with fftconvolve ``` t = timeit.timeit(stmt=\"signal.fftconvolve(x, y, mode='same')\", number=1, setup=\"\"\" import numpy as np from scipy import signal x = np.random.random((2048, 2048)).astype(np.float32) y = np.random.random((32, 32)).astype(np.float32) \"\"\") print t ``` This takes about 10.8 secs. 
However, with different input sizes, using fft's to do a convolution can be considerably faster (Though I can't seem to come up with a good example, at the moment...).", "best_answers_score":0.8, "library_name":"scipy", "question_url":"https:\/\/stackoverflow.com\/questions\/5710842\/fastest-2d-convolution-or-image-filter-in-python", "best_answers_votes":12, "question_length":1063, "response_length":1153 }, { "question":"How do I apply a DCT to an image in Python? I want to apply a Discrete Cosine Transform (as well as the inverse) to an image in Python and I'm wondering what is the best way to do it and how. I've looked at PIL and OpenCV but I still don't understand how to use it.", "response":"Example with scipy.fftpack: ``` from scipy.fftpack import dct, idct # implement 2D DCT def dct2(a): return dct(dct(a.T, norm='ortho').T, norm='ortho') # implement 2D IDCT def idct2(a): return idct(idct(a.T, norm='ortho').T, norm='ortho') from skimage.io import imread from skimage.color import rgb2gray import numpy as np import matplotlib.pylab as plt # read lena RGB image and convert to grayscale im = rgb2gray(imread('images\/lena.jpg')) imF = dct2(im) im1 = idct2(imF) # check if the reconstructed image is nearly equal to the original image np.allclose(im, im1) # True # plot original and reconstructed images with matplotlib.pylab plt.gray() plt.subplot(121), plt.imshow(im), plt.axis('off'), plt.title('original image', size=20) plt.subplot(122), plt.imshow(im1), plt.axis('off'), plt.title('reconstructed image (DCT+IDCT)', size=20) plt.show() ``` Also, if you plot a small slice of the 2D DCT coefficients array imF (in log domain), you will get a figure like the following (with a checkerboard pattern):", "best_answers_score":0.8, "library_name":"scipy", "question_url":"https:\/\/stackoverflow.com\/questions\/7110899\/how-do-i-apply-a-dct-to-an-image-in-python", "best_answers_votes":15, "question_length":265, "response_length":1013 }, { "question":"fitting multivariate 
curve_fit in python I'm trying to fit a simple function to two arrays of independent data in python. I understand that I need to bunch the data for my independent variables into one array, but something still seems to be wrong with the way I'm passing variables when I try to do the fit. (There are a couple previous posts related to this one, but they haven't been much help.) ``` import numpy as np import matplotlib.pyplot as plt from scipy.optimize import curve_fit def fitFunc(x_3d, a, b, c, d): return a + b*x_3d[0,:] + c*x_3d[1,:] + d*x_3d[0,:]*x_3d[1,:] x_3d = np.array([[1,2,3],[4,5,6]]) p0 = [5.11, 3.9, 5.3, 2] fitParams, fitCovariances = curve_fit(fitFunc, x_3d[:2,:], x_3d[2,:], p0) print ' fit coefficients:\\n', fitParams ``` The error I get reads, ``` raise TypeError('Improper input: N=%s must not exceed M=%s' % (n, m)) TypeError: Improper input: N=4 must not exceed M=3 ``` What is M the length of? Is N the length of p0? What am I doing wrong here?", "response":"N and M are defined in the help for the function. N is the number of data points and M is the number of parameters. Your error therefore basically means you need at least as many data points as you have parameters, which makes perfect sense. This code works for me: ``` import numpy as np import matplotlib.pyplot as plt from scipy.optimize import curve_fit def fitFunc(x, a, b, c, d): return a + b*x[0] + c*x[1] + d*x[0]*x[1] x_3d = np.array([[1,2,3,4,6],[4,5,6,7,8]]) p0 = [5.11, 3.9, 5.3, 2] fitParams, fitCovariances = curve_fit(fitFunc, x_3d, x_3d[1,:], p0) print ' fit coefficients:\\n', fitParams ``` I have included more data. I have also changed fitFunc to be written in a form that scans as only being a function of a single x - the fitter will handle calling this for all the data points. 
The code as you posted also referenced x_3d[2,:], which was causing an error.", "best_answers_score":0.8, "library_name":"scipy", "question_url":"https:\/\/stackoverflow.com\/questions\/20769340\/fitting-multivariate-curve-fit-in-python", "best_answers_votes":26, "question_length":988, "response_length":876 }, { "question":"Building and updating a sparse matrix in python using scipy I'm trying to build and update a sparse matrix as I read data from file. The matrix is of size 100000X40000 What is the most efficient way of updating multiple entries of the sparse matrix? specifically I need to increment each entry by 1. Let's say I have row indices [2, 236, 246, 389, 1691] and column indices [117, 3, 34, 2757, 74, 1635, 52] so all the following entries must be incremented by one: (2,117) (2,3) (2,34) (2,2757) ... (236,117) (236,3) (236, 34) (236,2757) ... and so on. I'm already using lil_matrix as it gave me a warning to use while I tried to update a single entry. lil_matrix format is already not supporting multiple updating. matrix[1:3,0] += [2,3] is giving me a notimplemented error. I can do this naively, by incrementing every entry individually. I was wondering if there is any better way to do this, or better sparse matrix implementation that I can use. My computer is also an average i5 machine with 4GB RAM, so I have to be careful not to blow it up :)", "response":"Creating a second matrix with 1s in your new coordinates and adding it to the existing one is a possible way of doing this: ``` >>> import scipy.sparse as sps >>> shape = (1000, 2000) >>> rows, cols = 1000, 2000 >>> sps_acc = sps.coo_matrix((rows, cols)) # empty matrix >>> for j in xrange(100): # add 100 sets of 100 1's ... r = np.random.randint(rows, size=100) ... c = np.random.randint(cols, size=100) ... d = np.ones((100,)) ... sps_acc = sps_acc + sps.coo_matrix((d, (r, c)), shape=(rows, cols)) ... 
>>> sps_acc <1000x2000 sparse matrix of type '<type 'numpy.float64'>' with 9985 stored elements in Compressed Sparse Row format> ```", "best_answers_score":0.8, "library_name":"scipy", "question_url":"https:\/\/stackoverflow.com\/questions\/20583381\/building-and-updating-a-sparse-matrix-in-python-using-scipy", "best_answers_votes":13, "question_length":1049, "response_length":582 }, { "question":"Scipy.sparse.csr_matrix: How to get top ten values and indices? I have a large csr_matrix and I am interested in the top ten values and their indices in each row. But I did not find a decent way to manipulate the matrix. Here is my current solution and the main idea is to process them row by row: ``` row = csr_matrix.getrow(row_number).toarray()[0].ravel() top_ten_indicies = row.argsort()[-10:] top_ten_values = row[row.argsort()[-10:]] ``` By doing this, the advantages of csr_matrix are not fully used. It's more like a brute force solution.", "response":"I don't see what the advantages of csr format are in this case. Sure, all the nonzero values are collected in one .data array, with the corresponding column indexes in .indices. But they are in blocks of varying length. And that means they can't be processed in parallel or with numpy array strides. One solution is to pad those blocks into common length blocks. That's what .toarray() does. Then you can find the maximum values with argsort(axis=1) or with argpartition. Another is to break them into row sized blocks, and process each of those. That's what you are doing with the .getrow. Another way of breaking them up is to convert to lil format, and process the sublists of the .data and .rows arrays. A possible third option is to use the ufunc reduceat method. This lets you apply ufunc reduction methods to sequential blocks of an array. There are established ufuncs like np.add that take advantage of this. argsort is not such a function. But there is a way of constructing a ufunc from a Python function, and gain some modest speed over regular Python iteration.
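For reductions that already exist as ufuncs, reduceat can be applied to the csr blocks directly. A sketch of per-row maxima done this way (the empty-row handling is my own addition, not part of the discussion above):

```python
import numpy as np
from scipy import sparse

A = sparse.random(6, 8, density=0.3, format="csr", random_state=0)

# np.maximum is a ufunc, so reduceat reduces each row's block of .data.
# Empty rows must be skipped: reduceat misbehaves when start == stop.
row_max = np.zeros(A.shape[0])
nonempty = np.diff(A.indptr) > 0
if nonempty.any():
    # Starts of the nonempty blocks; the data between consecutive starts
    # belongs entirely to the earlier row, since empty rows hold no data.
    starts = A.indptr[:-1][nonempty]
    row_max[nonempty] = np.maximum.reduceat(A.data, starts)

# Matches the dense row maxima (sparse.random draws positive values)
print(np.allclose(row_max, np.asarray(A.todense()).max(axis=1)))
```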
[I need to look up a recent SO question that illustrates this.] I'll illustrate some of this with a simpler function, sum over rows. If A2 is a csr matrix: ``` A2.sum(axis=1) # the fastest compiled csr method A2.A.sum(axis=1) # same, but with a dense intermediary [np.sum(l.data) for l in A2] # iterate over the rows of A2 [np.sum(A2.getrow(i).data) for i in range(A2.shape[0])] # iterate with index [np.sum(l) for l in A2.tolil().data] # sum the sublists of lil format np.add.reduceat(A2.data, A2.indptr[:-1]) # with reduceat ``` A2.sum(axis=1) is implemented as a matrix multiplication. That's not relevant to the sort problem, but still an interesting way of looking at the summation problem. Remember csr format was developed for efficient multiplication. For my current sample matrix (created for another SO sparse question) ``` <... sparse matrix with 32 stored elements in Compressed Sparse Row format> ``` some comparative times are ``` In [694]: timeit np.add.reduceat(A2.data, A2.indptr[:-1]) 100000 loops, best of 3: 7.41 \u00b5s per loop In [695]: timeit A2.sum(axis=1) 10000 loops, best of 3: 71.6 \u00b5s per loop In [696]: timeit [np.sum(l) for l in A2.tolil().data] 1000 loops, best of 3: 280 \u00b5s per loop ``` Everything else is 1ms or more. I suggest focusing on developing your one-row function, something like: ``` def max_n(row_data, row_indices, n): i = row_data.argsort()[-n:] # i = row_data.argpartition(-n)[-n:] top_values = row_data[i] top_indices = row_indices[i] # do the sparse indices matter? return top_values, top_indices, i ``` Then see how it fits in one of these iteration methods. tolil() looks most promising. I haven't addressed the question of how to collect these results. Should they be lists of lists, array with 10 columns, another sparse matrix with 10 values per row, etc.? sorting each row of a large sparse & saving top K values & column index - Similar question from several years back, but unanswered.
Argmax of each row or column in scipy sparse matrix - Recent question seeking argmax for rows of csr. I discuss some of the same issues. how to speed up loop in numpy? - example of how to use np.frompyfunc to create a ufunc. I don't know if the resulting function has the .reduceat method. Increasing value of top k elements in sparse matrix - get the top k elements of csr (not by row). Case for argpartition. The row summation implemented with np.frompyfunc: ``` In [741]: def foo(a,b): return a+b In [742]: vfoo=np.frompyfunc(foo,2,1) In [743]: timeit vfoo.reduceat(A2.data,A2.indptr[:-1],dtype=object).astype(float) 10000 loops, best of 3: 26.2 \u00b5s per loop ``` That's respectable speed. But I can't think of a way of writing a binary function (taking two arguments) that would implement argsort via reduction. So this is probably a dead end for this problem.", "best_answers_score":0.8, "library_name":"scipy", "question_url":"https:\/\/stackoverflow.com\/questions\/31790819\/scipy-sparse-csr-matrix-how-to-get-top-ten-values-and-indices", "best_answers_votes":9, "question_length":542, "response_length":3855 }, { "question":"Positive directional derivative for linesearch What does the smode of scipy.optimize 'Positive directional derivative for linesearch' mean? For example in fmin_slsqp http:\/\/docs.scipy.org\/doc\/scipy\/reference\/generated\/scipy.optimize.fmin_slsqp.html", "response":"These optimization algorithms typically work by choosing a descent direction, and then performing a line search in that direction.
I think this message means that the optimizer got into a position where it did not manage to find a direction where the value of the objective function decreases (fast enough), but could also not verify that the current position is a minimum.", "best_answers_score":0.8, "library_name":"scipy", "question_url":"https:\/\/stackoverflow.com\/questions\/11155721\/positive-directional-derivative-for-linesearch", "best_answers_votes":32, "question_length":248, "response_length":373 }, { "question":"Storing numpy sparse matrix in HDF5 (PyTables) I am having trouble storing a numpy csr_matrix with PyTables. I'm getting this error: ``` TypeError: objects of type ``csr_matrix`` are not supported in this context, sorry; supported objects are: NumPy array, record or scalar; homogeneous list or tuple, integer, float, complex or string ``` My code: ``` f = tables.openFile(path,'w') atom = tables.Atom.from_dtype(self.count_vector.dtype) ds = f.createCArray(f.root, 'count', atom, self.count_vector.shape) ds[:] = self.count_vector f.close() ``` Any ideas? Thanks", "response":"The answer by DaveP is almost right... but can cause problems for very sparse matrices: if the last column(s) or row(s) are empty, they are dropped. So to be sure that everything works, the \"shape\" attribute must be stored too. 
This is the code I regularly use: ``` import tables as tb from numpy import array from scipy import sparse def store_sparse_mat(m, name, store='store.h5'): msg = \"This code only works for csr matrices\" assert(m.__class__ == sparse.csr.csr_matrix), msg with tb.openFile(store,'a') as f: for par in ('data', 'indices', 'indptr', 'shape'): full_name = '%s_%s' % (name, par) try: n = getattr(f.root, full_name) n._f_remove() except AttributeError: pass arr = array(getattr(m, par)) atom = tb.Atom.from_dtype(arr.dtype) ds = f.createCArray(f.root, full_name, atom, arr.shape) ds[:] = arr def load_sparse_mat(name, store='store.h5'): with tb.openFile(store) as f: pars = [] for par in ('data', 'indices', 'indptr', 'shape'): pars.append(getattr(f.root, '%s_%s' % (name, par)).read()) m = sparse.csr_matrix(tuple(pars[:3]), shape=pars[3]) return m ``` It is trivial to adapt it to csc matrices.", "best_answers_score":0.8, "library_name":"scipy", "question_url":"https:\/\/stackoverflow.com\/questions\/11129429\/storing-numpy-sparse-matrix-in-hdf5-pytables", "best_answers_votes":35, "question_length":563, "response_length":1115 }, { "question":"harmonic mean in python The Harmonic Mean function in Python (scipy.stats.hmean) requires that the input be positive numbers. For example: ``` from scipy import stats print stats.hmean([ -50.2 , 100.5 ]) ``` results in: ``` ValueError: Harmonic mean only defined if all elements greater than zero ``` I don't mathematically see why this should be the case, except for the rare instance where you would end up dividing by zero. Instead of checking for a divide by zero, hmean() then throws an error upon inputing any positive number, whether a harmonic mean can be found or not. Am I missing something here in the maths? Or is this really a limitation in SciPy? 
How would you go about finding the harmonic mean of a set of numbers which might be positive or negative in python?", "response":"The harmonic mean is only defined for sets of positive real numbers. If you try and compute it for sets with negatives you get all kinds of strange and useless results even if you don't hit div by 0. For example, applying the formula to the set (3, -3, 4) gives a mean of 12!", "best_answers_score":0.8, "library_name":"scipy", "question_url":"https:\/\/stackoverflow.com\/questions\/10712842\/harmonic-mean-in-python", "best_answers_votes":36, "question_length":776, "response_length":275 }, { "question":"How to compute the probability of a value given a list of samples from a distribution in Python? Not sure if this belongs in statistics, but I am trying to use Python to achieve this. I essentially just have a list of integers: ``` data = [300,244,543,1011,300,125,300 ... ] ``` And I would like to know the probability of a value occurring given this data. I graphed histograms of the data using matplotlib and obtained these: In the first graph, the numbers represent the amount of characters in a sequence. In the second graph, it's a measured amount of time in milliseconds. The minimum is greater than zero, but there isn't necessarily a maximum. The graphs were created using millions of examples, but I'm not sure I can make any other assumptions about the distribution. I want to know the probability of a new value given that I have a few million examples of values. In the first graph, I have a few million sequences of different lengths. I would like to know the probability of a length of 200, for example. I know that for a continuous distribution the probability of any exact point is supposed to be zero, but given a stream of new values, I need to be able to say how likely each value is.
I've looked through some of the numpy\/scipy probability density functions, but I'm not sure which to choose from or how to query for new values once I run something like scipy.stats.norm.pdf(data). It seems like different probability density functions will fit the data differently. Given the shape of the histograms I'm not sure how to decide which to use.", "response":"Since you don't seem to have a specific distribution in mind, but you might have a lot of data samples, I suggest using a non-parametric density estimation method. One of the data types you describe (time in ms) is clearly continuous, and one method for non-parametric estimation of a probability density function (PDF) for continuous random variables is the histogram that you already mentioned. However, as you will see below, Kernel Density Estimation (KDE) can be better. The second type of data you describe (number of characters in a sequence) is of the discrete kind. Here, kernel density estimation can also be useful and can be seen as a smoothing technique for the situations where you don't have a sufficient amount of samples for all values of the discrete variable. 
Estimating Density The example below shows how to first generate data samples from a mixture of 2 Gaussian distributions and then apply kernel density estimation to find the probability density function: ``` import numpy as np import matplotlib.pyplot as plt import matplotlib.mlab as mlab from sklearn.neighbors import KernelDensity # Generate random samples from a mixture of 2 Gaussians # with modes at 5 and 10 data = np.concatenate((5 + np.random.randn(10, 1), 10 + np.random.randn(30, 1))) # Plot the true distribution x = np.linspace(0, 16, 1000)[:, np.newaxis] norm_vals = mlab.normpdf(x, 5, 1) * 0.25 + mlab.normpdf(x, 10, 1) * 0.75 plt.plot(x, norm_vals) # Plot the data using a normalized histogram plt.hist(data, 50, normed=True) # Do kernel density estimation kd = KernelDensity(kernel='gaussian', bandwidth=0.75).fit(data) # Plot the estimated densty kd_vals = np.exp(kd.score_samples(x)) plt.plot(x, kd_vals) # Show the plots plt.show() ``` This will produce the following plot, where the true distribution is shown in blue, the histogram is shown in green, and the PDF estimated using KDE is shown in red: As you can see, in this situation, the PDF approximated by the histogram is not very useful, while KDE provides a much better estimate. However, with a larger number of data samples and a proper choice of bin size, histogram might produce a good estimate as well. The parameters you can tune in case of KDE are the kernel and the bandwidth. You can think about the kernel as the building block for the estimated PDF, and several kernel functions are available in Scikit Learn: gaussian, tophat, epanechnikov, exponential, linear, cosine. Changing the bandwidth allows you to adjust the bias-variance trade-off. Larger bandwidth will result in increased bias, which is good if you have less data samples. Smaller bandwidth will increase variance (fewer samples are included into the estimation), but will give a better estimate when more samples are available. 
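The bandwidth itself can be chosen by cross-validation rather than by hand. A minimal sketch using scikit-learn's GridSearchCV (the search range here is an arbitrary choice, and in current releases GridSearchCV lives in sklearn.model_selection):

```python
import numpy as np
from sklearn.model_selection import GridSearchCV
from sklearn.neighbors import KernelDensity

rng = np.random.RandomState(0)
data = np.concatenate((5 + rng.randn(20, 1), 10 + rng.randn(60, 1)))

# KernelDensity.score returns the total log-likelihood, so GridSearchCV
# picks the bandwidth that maximizes the held-out likelihood.
grid = GridSearchCV(KernelDensity(kernel='gaussian'),
                    {'bandwidth': np.linspace(0.1, 2.0, 20)},
                    cv=5)
grid.fit(data)
best_bw = grid.best_params_['bandwidth']
print(best_bw)
```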
Calculating Probability For a PDF, probability is obtained by calculating the integral over a range of values. As you noticed, that will lead to the probability 0 for a specific value. Scikit Learn does not seem to have a builtin function for calculating probability. However, it is easy to estimate the integral of the PDF over a range. We can do it by evaluating the PDF multiple times within the range and summing the obtained values multiplied by the step size between each evaluation point. In the example below, N samples are obtained with step step. ``` # Get probability for range of values start = 5 # Start of the range end = 6 # End of the range N = 100 # Number of evaluation points step = (end - start) \/ (N - 1) # Step size x = np.linspace(start, end, N)[:, np.newaxis] # Generate values in the range kd_vals = np.exp(kd.score_samples(x)) # Get PDF values for each x probability = np.sum(kd_vals * step) # Approximate the integral of the PDF print(probability) ``` Please note that kd.score_samples generates log-likelihood of the data samples. Therefore, np.exp is needed to obtain likelihood. 
The same computation can be performed using builtin SciPy integration methods, which will give a bit more accurate result: ``` from scipy.integrate import quad probability = quad(lambda x: np.exp(kd.score_samples(x)), start, end)[0] ``` For instance, for one run, the first method calculated the probability as 0.0859024655305, while the second method produced 0.0850974209996139.", "best_answers_score":0.8, "library_name":"scipy", "question_url":"https:\/\/stackoverflow.com\/questions\/38711541\/how-to-compute-the-probability-of-a-value-given-a-list-of-samples-from-a-distrib", "best_answers_votes":37, "question_length":1550, "response_length":4250 }, { "question":"Multiplying Numpy\/Scipy Sparse and Dense Matrices Efficiently I'm working to implement the following equation: ``` X =(Y.T * Y + Y.T * C * Y) ^ -1 ``` Y is a (n x f) matrix and C is (n x n) diagonal one; n is about 300k and f will vary between 100 and 200. As part of an optimization process this equation will be used almost 100 million times so it has to be processed really fast. Y is initialized randomly and C is a very sparse matrix with only a few numbers out of the 300k on the diagonal will be different than 0.Since Numpy's diagonal functions creates dense matrices, I created C as a sparse csr matrix. But when trying to solve the first part of the equation: ``` r = dot(C, Y) ``` The computer crashes due Memory limits. I decided then trying to convert Y to csr_matrix and make the same operation: ``` r = dot(C, Ysparse) ``` and this approach took 1.38 ms. But this solution is somewhat \"tricky\" since I'm using a sparse matrix to store a dense one, I wonder how efficient this really. So my question is if is there some way of multiplying the sparse C and the dense Y without having to turn Y into sparse and improve performance? If somehow C could be represented as diagonal dense without consuming tons of memory maybe this would lead to very efficient performance but I don't know if this is possible. 
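For what it's worth, a diagonal matrix can be held sparsely via scipy.sparse.diags, which stores only the diagonal and never materializes the n x n array; a small sketch (sizes scaled down from the real problem):

```python
import numpy as np
from scipy import sparse

n, f = 3000, 100
Y = np.random.rand(n, f)

c = np.zeros(n)
c[:5] = 2.0           # only a few nonzero diagonal entries
C = sparse.diags(c)   # stores the diagonal only, not an n x n dense array

r1 = C.dot(Y)          # sparse-diagonal times dense
r2 = c[:, None] * Y    # same result via broadcasting, no sparse machinery
print(np.allclose(r1, r2))
```

The broadcasting form is often the fastest of all for a purely diagonal C, since it is just a row-wise scaling of Y.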
I appreciate your help!", "response":"The reason the dot product runs into memory issues when computing r = dot(C,Y) is because numpy's dot function does not have native support for handling sparse matrices. What is happening is numpy thinks of the sparse matrix C as a python object, and not a numpy array. If you inspect on small scale you can see the problem first hand: ``` >>> from numpy import dot, array >>> from scipy import sparse >>> Y = array([[1,2],[3,4]]) >>> C = sparse.csr_matrix(array([[1,0], [0,2]])) >>> dot(C,Y) array([[ (0, 0) 1 (1, 1) 2, (0, 0) 2 (1, 1) 4], [ (0, 0) 3 (1, 1) 6, (0, 0) 4 (1, 1) 8]], dtype=object) ``` Clearly the above is not the result you are interested in. Instead what you want to do is compute using scipy's sparse.csr_matrix.dot function: ``` r = sparse.csr_matrix.dot(C, Y) ``` or more compactly ``` r = C.dot(Y) ```", "best_answers_score":0.8, "library_name":"scipy", "question_url":"https:\/\/stackoverflow.com\/questions\/13272453\/multiplying-numpy-scipy-sparse-and-dense-matrices-efficiently", "best_answers_votes":34, "question_length":1342, "response_length":823 }, { "question":"Fast interpolation of grid data I have a large 3d np.ndarray of data that represents a physical variable sampled over a volume in a regular grid fashion (as in the value in array[0,0,0] represents the value at physical coords (0,0,0)). I would like to go to a finer grid spacing by interpolating the data in the rough grid. At the moment I'm using scipy griddata linear interpolation but it's pretty slow (~90secs for 20x20x20 array). It's a bit overengineered for my purposes, allowing random sampling of the volume data. Is there anything out there that can take advantage of my regularly spaced data and the fact that there is only a limited set of specific points I want to interpolate to?", "response":"Sure! There are two options that do different things but both exploit the regularly-gridded nature of the original data. The first is scipy.ndimage.zoom. 
If you just want to produce a denser regular grid based on interpolating the original data, this is the way to go. The second is scipy.ndimage.map_coordinates. If you'd like to interpolate a few (or many) arbitrary points in your data, but still exploit the regularly-gridded nature of the original data (e.g. no quadtree required), it's the way to go. \"Zooming\" an array (scipy.ndimage.zoom) As a quick example (This will use cubic interpolation. Use order=1 for bilinear, order=0 for nearest, etc.): ``` import numpy as np import scipy.ndimage as ndimage data = np.arange(9).reshape(3,3) print 'Original:\\n', data print 'Zoomed by 2x:\\n', ndimage.zoom(data, 2) ``` This yields: ``` Original: [[0 1 2] [3 4 5] [6 7 8]] Zoomed by 2x: [[0 0 1 1 2 2] [1 1 1 2 2 3] [2 2 3 3 4 4] [4 4 5 5 6 6] [5 6 6 7 7 7] [6 6 7 7 8 8]] ``` This also works for 3D (and nD) arrays. However, be aware that if you zoom by 2x, for example, you'll zoom along all axes. ``` data = np.arange(27).reshape(3,3,3) print 'Original:\\n', data print 'Zoomed by 2x gives an array of shape:', ndimage.zoom(data, 2).shape ``` This yields: ``` Original: [[[ 0 1 2] [ 3 4 5] [ 6 7 8]] [[ 9 10 11] [12 13 14] [15 16 17]] [[18 19 20] [21 22 23] [24 25 26]]] Zoomed by 2x gives an array of shape: (6, 6, 6) ``` If you have something like a 3-band, RGB image that you'd like to zoom, you can do this by specifying a sequence of tuples as the zoom factor: ``` print 'Zoomed by 2x along the last two axes:' print ndimage.zoom(data, (1, 2, 2)) ``` This yields: ``` Zoomed by 2x along the last two axes: [[[ 0 0 1 1 2 2] [ 1 1 1 2 2 3] [ 2 2 3 3 4 4] [ 4 4 5 5 6 6] [ 5 6 6 7 7 7] [ 6 6 7 7 8 8]] [[ 9 9 10 10 11 11] [10 10 10 11 11 12] [11 11 12 12 13 13] [13 13 14 14 15 15] [14 15 15 16 16 16] [15 15 16 16 17 17]] [[18 18 19 19 20 20] [19 19 19 20 20 21] [20 20 21 21 22 22] [22 22 23 23 24 24] [23 24 24 25 25 25] [24 24 25 25 26 26]]] ``` Arbitrary interpolation of regularly-gridded data using map_coordinates The first thing to 
understand about map_coordinates is that it operates in pixel coordinates (e.g. just like you'd index the array, but the values can be floats). From your description, this is exactly what you want, but it often confuses people. For example, if you have x, y, z \"real-world\" coordinates, you'll need to transform them to index-based \"pixel\" coordinates. At any rate, let's say we wanted to interpolate the value in the original array at position 1.2, 0.3, 1.4. If you're thinking of this in terms of the earlier RGB image case, the first coordinate corresponds to the \"band\", the second to the \"row\" and the last to the \"column\". What order corresponds to what depends entirely on how you decide to structure your data, but I'm going to use these as \"z, y, x\" coordinates, as it makes the comparison to the printed array easier to visualize. ``` import numpy as np import scipy.ndimage as ndimage data = np.arange(27).reshape(3,3,3) print 'Original:\\n', data print 'Sampled at 1.2, 0.3, 1.4:' print ndimage.map_coordinates(data, [[1.2], [0.3], [1.4]]) ``` This yields: ``` Original: [[[ 0 1 2] [ 3 4 5] [ 6 7 8]] [[ 9 10 11] [12 13 14] [15 16 17]] [[18 19 20] [21 22 23] [24 25 26]]] Sampled at 1.2, 0.3, 1.4: [14] ``` Once again, this is cubic interpolation by default. Use the order kwarg to control the type of interpolation. It's worth noting here that all of scipy.ndimage's operations preserve the dtype of the original array. If you want floating point results, you'll need to cast the original array as a float: ``` In [74]: ndimage.map_coordinates(data.astype(float), [[1.2], [0.3], [1.4]]) Out[74]: array([ 13.5965]) ``` Another thing you may notice is that the interpolated coordinates format is rather cumbersome for a single point (e.g. it expects a 3xN array instead of an Nx3 array). However, it's arguably nicer when you have sequences of coordinates.
For example, consider the case of sampling along a line that passes through the \"cube\" of data: ``` xi = np.linspace(0, 2, 10) yi = 0.8 * xi zi = 1.2 * xi print ndimage.map_coordinates(data, [zi, yi, xi]) ``` This yields: ``` [ 0 1 4 8 12 17 21 24 0 0] ``` This is also a good place to mention how boundary conditions are handled. By default, anything outside of the array is set to 0. Thus the last two values in the sequence are 0. (i.e. zi is > 2 for the last two elements). If we wanted the points outside the array to be, say -999 (We can't use nan as this is an integer array. If you want nan, you'll need to cast to floats.): ``` In [75]: ndimage.map_coordinates(data, [zi, yi, xi], cval=-999) Out[75]: array([ 0, 1, 4, 8, 12, 17, 21, 24, -999, -999]) ``` If we wanted it to return the nearest value for points outside the array, we'd do: ``` In [76]: ndimage.map_coordinates(data, [zi, yi, xi], mode='nearest') Out[76]: array([ 0, 1, 4, 8, 12, 17, 21, 24, 25, 25]) ``` You can also use \"reflect\" and \"wrap\" as boundary modes, in addition to \"nearest\" and the default \"constant\". These are fairly self-explanatory, but try experimenting a bit if you're confused. 
For example, let's interpolate a line along the first row of the first band in the array that extends for twice the distance of the array: ``` xi = np.linspace(0, 5, 10) yi, zi = np.zeros_like(xi), np.zeros_like(xi) ``` The default gives: ``` In [77]: ndimage.map_coordinates(data, [zi, yi, xi]) Out[77]: array([0, 0, 1, 2, 0, 0, 0, 0, 0, 0]) ``` Compare this to: ``` In [78]: ndimage.map_coordinates(data, [zi, yi, xi], mode='reflect') Out[78]: array([0, 0, 1, 2, 2, 1, 2, 1, 0, 0]) In [78]: ndimage.map_coordinates(data, [zi, yi, xi], mode='wrap') Out[78]: array([0, 0, 1, 2, 0, 1, 1, 2, 0, 1]) ``` Hopefully that clarifies things a bit!", "best_answers_score":0.8, "library_name":"scipy", "question_url":"https:\/\/stackoverflow.com\/questions\/16983843\/fast-interpolation-of-grid-data", "best_answers_votes":39, "question_length":693, "response_length":5819 }, { "question":"How to interpolate a 2D curve in Python I have a set of x & y coordinates which form a curve \/ shape, and I want to smooth the curve \/ shape and plot a graph. I tried different interpolations to smooth the curve \/ shape, but it still does not meet my expectation. Using points to draw a smooth curve \/ shape, like the following: using x, y points to get a smooth circle \/ curve. However, I get something like circle.jpg curve.jpg square.jpg I also run into trouble with spline interpolation and rbf interpolation. for cubic_spline_interpolation, I got ValueError: Error on input data for univariate_spline_interpolated, I got ValueError: x must be strictly increasing for rbf, I got numpy.linalg.linalg.LinAlgError: Matrix is singular. I have no idea how to fix them and get the correct shape and curve. Many thanks for help. Edit For those who cannot download the source code and x, y coordinate file, I post the code and x, y coordinates in the question.
The following is my code: ``` #!\/usr\/bin\/env python3 from std_lib import * import os import numpy as np import cv2 from scipy import interpolate import matplotlib.pyplot as plt CUR_DIR = os.getcwd() CIRCLE_FILE = \"circle.txt\" CURVE_FILE = \"curve.txt\" SQUARE_FILE = \"square.txt\" #test CIRCLE_NAME = \"circle\" CURVE_NAME = \"curve\" SQUARE_NAME = \"square\" SYS_TOKEN_CNT = 2 # x, y total_pt_cnt = 0 # total no. of points x_arr = np.array([]) # x position set y_arr = np.array([]) # y position set def convert_coord_to_array(file_path): global total_pt_cnt global x_arr global y_arr if file_path == \"\": return FALSE with open(file_path) as f: content = f.readlines() content = [x.strip() for x in content] total_pt_cnt = len(content) if (total_pt_cnt (content[-1])): # is_reverse = TRUE for x in content: token_cnt = get_token_cnt(x, ',') if (token_cnt != SYS_TOKEN_CNT): return FALSE for idx in range(token_cnt): token_string = get_token_string(x, ',', idx) token_string = token_string.strip() if (not token_string.isdigit()): return FALSE # save x, y set if (idx == 0): x_arr = np.append(x_arr, int(token_string)) else: y_arr = np.append(y_arr, int(token_string)) return TRUE def linear_interpolation(fig, axs): xnew = np.linspace(x_arr.min(), x_arr.max(), len(x_arr)) f = interpolate.interp1d(xnew , y_arr) axs.plot(xnew, f(xnew)) axs.set_title('linear') def cubic_interpolation(fig, axs): xnew = np.linspace(x_arr.min(), x_arr.max(), len(x_arr)) f = interpolate.interp1d(xnew , y_arr, kind='cubic') axs.plot(xnew, f(xnew)) axs.set_title('cubic') def cubic_spline_interpolation(fig, axs): xnew = np.linspace(x_arr.min(), x_arr.max(), len(x_arr)) tck = interpolate.splrep(x_arr, y_arr, s=0) #always fail (ValueError: Error on input data) ynew = interpolate.splev(xnew, tck, der=0) axs.plot(xnew, ynew) axs.set_title('cubic spline') def parametric_spline_interpolation(fig, axs): xnew = np.linspace(x_arr.min(), x_arr.max(), len(x_arr)) tck, u = interpolate.splprep([x_arr, y_arr], s=0) 
out = interpolate.splev(xnew, tck) axs.plot(out[0], out[1]) axs.set_title('parametric spline') def univariate_spline_interpolated(fig, axs): s = interpolate.InterpolatedUnivariateSpline(x_arr, y_arr)# ValueError: x must be strictly increasing xnew = np.linspace(x_arr.min(), x_arr.max(), len(x_arr)) ynew = s(xnew) axs.plot(xnew, ynew) axs.set_title('univariate spline') def rbf(fig, axs): xnew = np.linspace(x_arr.min(), x_arr.max(), len(x_arr)) rbf = interpolate.Rbf(x_arr, y_arr) # numpy.linalg.linalg.LinAlgError: Matrix is singular. fi = rbf(xnew) axs.plot(xnew, fi) axs.set_title('rbf') def interpolation(): fig, axs = plt.subplots(nrows=4) axs[0].plot(x_arr, y_arr, 'r-') axs[0].set_title('org') cubic_interpolation(fig, axs[1]) # cubic_spline_interpolation(fig, axs[2]) parametric_spline_interpolation(fig, axs[2]) # univariate_spline_interpolated(fig, axs[3]) # rbf(fig, axs[3]) linear_interpolation(fig, axs[3]) plt.show() #------- main ------- if __name__ == \"__main__\": # np.seterr(divide='ignore', invalid='ignore') file_name = CUR_DIR + \"\/\" + CIRCLE_FILE convert_coord_to_array(file_name) #file_name = CUR_DIR + \"\/\" + CURVE_FILE #convert_coord_to_array(file_name) #file_name = CUR_DIR + \"\/\" + SQUARE_FILE #convert_coord_to_array(file_name) # interpolation() ``` circle x, y coordinate ``` 307, 91 308, 90 339, 90 340, 91 348, 91 349, 92 351, 92 352, 93 357, 93 358, 94 361, 94 362, 95 364, 95 365, 96 369, 96 370, 97 374, 97 375, 98 376, 98 377, 99 379, 99 380, 100 382, 100 383, 101 386, 101 387, 102 389, 102 390, 103 392, 103 393, 104 394, 104 395, 105 398, 105 399, 106 400, 106 401, 107 402, 107 403, 108 405, 108 406, 109 407, 109 408, 110 410, 110 411, 111 413, 111 414, 112 415, 112 416, 113 417, 113 418, 114 419, 114 420, 115 421, 115 422, 116 423, 116 425, 118 426, 118 428, 120 429, 120 430, 121 430, 122 431, 122 433, 124 434, 124 435, 125 435, 126 437, 128 437, 129 441, 133 441, 134 442, 135 442, 137 443, 137 444, 138 444, 140 445, 141 445, 142 446, 143 446, 
146 447, 147 447, 148 448, 149 448, 153 449, 154 449, 191 448, 192 448, 223 447, 224 447, 240 446, 241 446, 242 445, 243 445, 248 444, 249 444, 253 443, 254 443, 256 442, 257 442, 259 441, 260 441, 263 440, 264 440, 267 439, 268 439, 269 438, 270 438, 272 436, 274 436, 275 435, 276 435, 279 434, 280 434, 281 433, 282 433, 283 431, 285 431, 288 429, 290 429, 291 428, 292 428, 293 426, 295 426, 296 425, 297 425, 298 424, 299 424, 300 423, 301 423, 303 422, 304 422, 305 420, 307 420, 308 419, 309 419, 310 417, 312 417, 313 415, 315 415, 316 414, 317 414, 318 412, 320 411, 320 410, 321 410, 322 409, 323 409, 324 408, 325 407, 325 402, 330 401, 330 401, 331 399, 333 398, 333 395, 336 395, 337 394, 338 393, 338 390, 341 388, 341 387, 342 387, 343 386, 344 384, 344 383, 345 382, 345 380, 347 379, 347 377, 349 376, 349 374, 351 373, 351 373, 352 372, 353 370, 353 369, 354 368, 354 367, 355 366, 355 365, 356 364, 356 363, 357 362, 357 359, 360 358, 360 357, 361 356, 361 355, 362 353, 362 353, 363 352, 364 348, 364 347, 365 314, 365 313, 364 297, 364 296, 363 284, 363 283, 362 280, 362 279, 361 273, 361 272, 360 271, 360 270, 359 265, 359 264, 358 262, 358 261, 357 260, 357 258, 355 257, 355 256, 354 255, 354 252, 351 251, 351 246, 346 245, 346 237, 338 237, 337 235, 335 234, 335 231, 332 231, 331 230, 330 230, 329 222, 321 222, 320 217, 315 217, 314 213, 310 213, 309 210, 306 210, 305 204, 299 204, 298 203, 297 203, 296 199, 292 199, 291 198, 290 198, 289 197, 289 194, 286 194, 285 191, 282 191, 280 187, 276 187, 275 185, 273 185, 271 184, 270 184, 269 183, 268 183, 266 182, 265 182, 264 180, 262 180, 261 179, 260 179, 258 177, 256 177, 254 176, 253 176, 251 175, 250 175, 249 174, 248 174, 246 173, 245 173, 243 171, 241 171, 237 170, 236 170, 232 169, 231 169, 230 168, 229 168, 211 169, 210 169, 205 170, 204 170, 199 171, 198 171, 195 172, 194 172, 193 173, 192 173, 189 174, 188 174, 185 176, 183 176, 180 177, 179 177, 177 178, 176 178, 175 179, 174 179, 173 180, 172 180, 
170 182, 168 182, 167 183, 166 183, 165 185, 163 185, 162 186, 161 186, 160 189, 157 189, 156 191, 154 191, 153 192, 152 192, 149 197, 144 197, 143 203, 137 204, 137 207, 134 208, 134 211, 131 213, 131 216, 128 217, 128 218, 127 219, 127 221, 125 222, 125 223, 124 224, 124 225, 123 226, 123 227, 122 228, 122 229, 121 231, 121 233, 119 234, 119 237, 116 239, 116 240, 115 241, 115 242, 114 244, 114 245, 113 246, 113 247, 112 250, 112 251, 111 252, 111 253, 110 256, 110 257, 109 258, 109 259, 108 262, 108 263, 107 266, 107 267, 106 269, 106 272, 103 274, 103 275, 102 276, 102 277, 101 278, 101 279, 100 281, 100 282, 99 283, 99 284, 98 286, 98 287, 97 288, 97 289, 96 290, 96 291, 95 293, 95 295, 93 298, 93 299, 92 302, 92 303, 91 ``` Solved ``` def linear_interpolateion(self, x, y): points = np.array([x, y]).T # a (nbre_points x nbre_dim) array # Linear length along the line: distance = np.cumsum( np.sqrt(np.sum( np.diff(points, axis=0)**2, axis=1 )) ) distance = np.insert(distance, 0, 0) alpha = np.linspace(distance.min(), int(distance.max()), len(x)) interpolator = interpolate.interp1d(distance, points, kind='slinear', axis=0) interpolated_points = interpolator(alpha) out_x = interpolated_points.T[0] out_y = interpolated_points.T[1] return out_x, out_y ```", "response":"Because the interpolation is wanted for generic 2d curve i.e. (x, y)=f(s) where s is the coordinates along the curve, rather than y = f(x), the distance along the line s have to be computed first. Then, the interpolation for each coordinates is performed relatively to s. (for instance, in the circle case y = f(x) have two solutions) s (or distance in the code here) is calculated as the cumulative sum of the length of each segments between the given points. 
``` import numpy as np from scipy.interpolate import interp1d import matplotlib.pyplot as plt # Define some points: points = np.array([[0, 1, 8, 2, 2], [1, 0, 6, 7, 2]]).T # a (nbre_points x nbre_dim) array # Linear length along the line: distance = np.cumsum( np.sqrt(np.sum( np.diff(points, axis=0)**2, axis=1 )) ) distance = np.insert(distance, 0, 0)\/distance[-1] # Interpolation for different methods: interpolations_methods = ['slinear', 'quadratic', 'cubic'] alpha = np.linspace(0, 1, 75) interpolated_points = {} for method in interpolations_methods: interpolator = interp1d(distance, points, kind=method, axis=0) interpolated_points[method] = interpolator(alpha) # Graph: plt.figure(figsize=(7,7)) for method_name, curve in interpolated_points.items(): plt.plot(*curve.T, '-', label=method_name); plt.plot(*points.T, 'ok', label='original points'); plt.axis('equal'); plt.legend(); plt.xlabel('x'); plt.ylabel('y'); ``` which gives: Regarding the graphs, it seems you are looking for a smoothing method rather than an interpolation of the points. 
Here is a similar approach used to fit a spline separately on each coordinate of the given curve (see Scipy UnivariateSpline): ``` import numpy as np import matplotlib.pyplot as plt from scipy.interpolate import UnivariateSpline # Define some points: theta = np.linspace(-3, 2, 40) points = np.vstack( (np.cos(theta), np.sin(theta)) ).T # add some noise: points = points + 0.05*np.random.randn(*points.shape) # Linear length along the line: distance = np.cumsum( np.sqrt(np.sum( np.diff(points, axis=0)**2, axis=1 )) ) distance = np.insert(distance, 0, 0)\/distance[-1] # Build a list of the spline function, one for each dimension: splines = [UnivariateSpline(distance, coords, k=3, s=.2) for coords in points.T] # Compute the spline for the asked distances: alpha = np.linspace(0, 1, 75) points_fitted = np.vstack( spl(alpha) for spl in splines ).T # Graph: plt.plot(*points.T, 'ok', label='original points'); plt.plot(*points_fitted.T, '-r', label='fitted spline k=3, s=.2'); plt.axis('equal'); plt.legend(); plt.xlabel('x'); plt.ylabel('y'); ``` which gives:", "best_answers_score":0.8, "library_name":"scipy", "question_url":"https:\/\/stackoverflow.com\/questions\/52014197\/how-to-interpolate-a-2d-curve-in-python", "best_answers_votes":39, "question_length":8164, "response_length":2579 }, { "question":"Custom cluster colors of SciPy dendrogram in Python (link_color_func?) I want to color my clusters with a color map that I made in the form of a dictionary (i.e. {leaf: color}). I've tried following https:\/\/joernhees.de\/blog\/2015\/08\/26\/scipy-hierarchical-clustering-and-dendrogram-tutorial\/ but the colors get messed up for some reason. The default plot looks good, I just want to assign those colors differently. I saw that there was a link_color_func but when I tried using my color map (D_leaf_color dictionary) I got an error b\/c it wasn't a function. I've created D_leaf_color to customize the colors of the leaves associated with particular clusters.
In my actual dataset, the colors mean something so I'm steering away from arbitrary color assignments. I don't want to use color_threshold b\/c in my actual data, I have way more clusters and SciPy repeats the colors, hence this question. . . How can I use my leaf-color dictionary to customize the color of my dendrogram clusters? I made a GitHub issue https:\/\/github.com\/scipy\/scipy\/issues\/6346 where I further elaborated on the approach to color the leaves in Interpreting the output of SciPy's hierarchical clustering dendrogram? (maybe found a bug...) but I still can't figure out how to actually either: (i) use dendrogram output to reconstruct my dendrogram with my specified color dictionary or (ii) reformat my D_leaf_color dictionary for the link_color_func parameter. ``` # Init import pandas as pd import numpy as np import matplotlib.pyplot as plt import seaborn as sns; sns.set() # Load data from sklearn.datasets import load_diabetes # Clustering from scipy.cluster.hierarchy import dendrogram, fcluster, leaves_list from scipy.spatial import distance from fastcluster import linkage # You can use SciPy one too %matplotlib inline # Dataset A_data = load_diabetes().data DF_diabetes = pd.DataFrame(A_data, columns = [\"attr_%d\" % j for j in range(A_data.shape[1])]) # Absolute value of correlation matrix, then subtract from 1 for disimilarity DF_dism = 1 - np.abs(DF_diabetes.corr()) # Compute average linkage A_dist = distance.squareform(DF_dism.as_matrix()) Z = linkage(A_dist,method=\"average\") # Color mapping D_leaf_colors = {\"attr_1\": \"#808080\", # Unclustered gray \"attr_4\": \"#B061FF\", # Cluster 1 indigo \"attr_5\": \"#B061FF\", \"attr_2\": \"#B061FF\", \"attr_8\": \"#B061FF\", \"attr_6\": \"#B061FF\", \"attr_7\": \"#B061FF\", \"attr_0\": \"#61ffff\", # Cluster 2 cyan \"attr_3\": \"#61ffff\", \"attr_9\": \"#61ffff\", } # Dendrogram # To get this dendrogram coloring below `color_threshold=0.7` D = dendrogram(Z=Z, labels=DF_dism.index, 
color_threshold=None, leaf_font_size=12, leaf_rotation=45, link_color_func=D_leaf_colors) # TypeError: 'dict' object is not callable ``` I also tried how do I get the subtrees of dendrogram made by scipy.cluster.hierarchy", "response":"Here a solution that uses the return matrix Z of linkage() (described early but a little hidden in the docs) and link_color_func: ``` # see question for code prior to \"color mapping\" # Color mapping dflt_col = \"#808080\" # Unclustered gray D_leaf_colors = {\"attr_1\": dflt_col, \"attr_4\": \"#B061FF\", # Cluster 1 indigo \"attr_5\": \"#B061FF\", \"attr_2\": \"#B061FF\", \"attr_8\": \"#B061FF\", \"attr_6\": \"#B061FF\", \"attr_7\": \"#B061FF\", \"attr_0\": \"#61ffff\", # Cluster 2 cyan \"attr_3\": \"#61ffff\", \"attr_9\": \"#61ffff\", } # notes: # * rows in Z correspond to \"inverted U\" links that connect clusters # * rows are ordered by increasing distance # * if the colors of the connected clusters match, use that color for link link_cols = {} for i, i12 in enumerate(Z[:,:2].astype(int)): c1, c2 = (link_cols[x] if x > len(Z) else D_leaf_colors[\"attr_%d\"%x] for x in i12) link_cols[i+1+len(Z)] = c1 if c1 == c2 else dflt_col # Dendrogram D = dendrogram(Z=Z, labels=DF_dism.index, color_threshold=None, leaf_font_size=12, leaf_rotation=45, link_color_func=lambda x: link_cols[x]) ``` Here the output:", "best_answers_score":0.8, "library_name":"scipy", "question_url":"https:\/\/stackoverflow.com\/questions\/38153829\/custom-cluster-colors-of-scipy-dendrogram-in-python-link-color-func", "best_answers_votes":18, "question_length":2806, "response_length":1071 }, { "question":"What is the fastest way to minimize a function in python? So I have the following problem to minimize. 
I have a vector w that I need to find in order to minimize the following function: ``` import numpy as np from scipy.optimize import minimize matrix = np.array([[1.0, 1.5, -2.], [0.5, 3.0, 2.5], [1.0, 0.25, 0.75]]) def fct(x): return x.dot(matrix).dot(x) x0 = np.ones(3) \/ 3 cons = ({'type': 'eq', 'fun': lambda x: x.sum() - 1.0}) bnds = [(0, 1)] * 3 w = minimize(fct, x0, method='SLSQP', bounds=bnds, constraints=cons)['x'] ``` I chose the method='SLSQP' because it seems like it is the only one that allows for bounds and constraints. My problem is I will have to loop my solution on multiple selections so I am trying to gain some speed here. Is my solution the fastest one using an optimizer or would there be any other faster solutions? Thank you.", "response":"Introduction In general the fastest approach will always be the most tailored to the problem. As all optimization-algorithms within scipy.minimize are quite general, there will always be faster methods, gaining performance from special characteristics of your problem. It will be a trade-off, how much analysis and work is done to gain performance. It's important to note, that SLSQP for example is an algorithm, which is able to tackle non-convex problems, in which case convergence to some local-optimum is guaranteed (ignoring numerical-trouble within the implementation; which is always a possible problem). This power comes with a price: SLSQP will be less fast and less robust compared to algorithms which are specifically designed for convex problems (and even within convex problems, although they are all polynomially solvable, there are easier ones as LinearProgramming and harder ones as SemidefiniteProgramming). 
Problem Analysis As indicated in the comments above, for some general indefinite matrix M, this problem is non-convex (with a high probability; i'm not giving formal proof), which means, that there is no general feasible approach without further assumptions (ignoring special analysis as some non-convex problems can be solved globally in polynomial time). This means: every optimization algorithm within scipy, will at most guarantee a local-optimum, which might be arbitrarily bad compared to the global-optimum Assumption: M is positive-definite \/ negative-definite If we assume matrix M is either positive-definite or negative-definite, but not indefinite, this is a convex-optimization problem. As you seem to be interested in this case, here are some remarks and approaches. This means: SLSQP is guaranteed to converge to the global-optimum You can use solvers specifically designed for convex optimization problems Commercial solvers: Gurobi, CPLEX, Mosek Open-Source solvers: ECOS, SCS Example code using Python + cvxpy + ecos\/scs There is no special convex-optimization solver except for linprog, which is for Linear Programming and is therefore unable to tackle this problem. There are other alternatives though, as mentioned above and there are many possible routes to use them. Here i will present one of the most simple ones: cvxpy is used for model-formulation It will automatically proof that this problem is convex! 
(Model-building and convexity-inference can be costly) ecos General-purpose solver for many convex optimization problems Based on the Interior-Point-Method scs General-purpose solver for many convex optimization problems Based on alternating direction method of multipliers (ADMM) Supports two different approaches to solve linear equations: direct (factorization based) indirect (conjugate-gradient based) GPU support for this one as it's all about matrix-vector products Less accurate solutions in general compared to ECOS, but often much faster Example code: Example using: 1000x1000 matrix Solver: SCS indirect-mode CPU Multithreaded (automatically if BLAS available) Code: ``` import time import numpy as np from cvxpy import * # Convex-Opt \"\"\" Create some random pos-def matrix \"\"\" N = 1000 matrix_ = np.random.normal(size=(N,N)) matrix = np.dot(matrix_, matrix_.T) \"\"\" CVXPY-based Convex-Opt \"\"\" print('\\ncvxpy\\n') x = Variable(N) constraints = [x >= 0, x = 100); faster = orders of magnitude for big N I checked the equivalence of this method compared to SLSQP for small examples", "best_answers_score":0.8, "library_name":"scipy", "question_url":"https:\/\/stackoverflow.com\/questions\/43648073\/what-is-the-fastest-way-to-minimize-a-function-in-python", "best_answers_votes":21, "question_length":855, "response_length":3444 }, { "question":"SciPy: leastsq vs least_squares SciPy provides two functions for nonlinear least squares problems: optimize.leastsq() uses the Levenberg-Marquardt algorithm only. optimize.least_squares() allows us to choose the Levenberg-Marquardt, Trust Region Reflective, or Trust Region Dogleg algorithm. Should we always use least_squares() instead of leastsq()? If so, what purpose does the latter serve?", "response":"Short answer Should we always use least_squares() instead of leastsq()? Yes. If so, what purpose does the latter serve? Backward compatibility. Explanation The least_squares function is new in 0.17.1. 
Its documentation refers to leastsq as A legacy wrapper for the MINPACK implementation of the Levenberg-Marquadt algorithm. The original commit introducing least_squares actually called leastsq when the method was chosen to be 'lm'. But the contributor (Nikolay Mayorov) then decided that least_squares might feel more solid and homogeneous if I write a new wrapper to MINPACK functions, instead of calling leastsq. and so he did. So, leastsq is no longer required by least_squares, but I'd expect it to be kept at least for a while, to avoid breaking old code.", "best_answers_score":0.8, "library_name":"scipy", "question_url":"https:\/\/stackoverflow.com\/questions\/41315270\/scipy-leastsq-vs-least-squares", "best_answers_votes":24, "question_length":393, "response_length":762 }, { "question":"What to do if I want 3D spline\/smooth interpolation of random unstructured data? I was inspired by this answer by @James to see how griddata and map_coordinates might be used. In the examples below I'm showing 2D data, but my interest is in 3D. I noticed that griddata only provides splines for 1D and 2D, and is limited to linear interpolation for 3D and higher (probably for very good reasons). However, map_coordinates seems to be fine with 3D using higher order (smoother than piece-wise linear) interpolation. My primary question: if I have random, unstructured data (where I can not use map_coordinates) in 3D, is there some way to get smoother than piece-wise linear interpolation within the NumPy SciPy universe, or at least nearby? My secondary question: is spline for 3D not available in griddata because it is difficult or tedious to implement, or is there a fundamental difficulty? The images and horrible python below show my current understanding of how griddata and map_coordinates can or can't be used. Interpolation is done along the thick black line. 
STRUCTURED DATA: UNSTRUCTURED DATA: Horrible python: ``` import numpy as np import matplotlib.pyplot as plt def g(x, y): return np.exp(-((x-1.0)**2 + (y-1.0)**2)) def findit(x, X): # or could use some 1D interpolation fraction = (x - X[0]) \/ (X[-1]-X[0]) return fraction * float(X.shape[0]-1) nth, nr = 12, 11 theta_min, theta_max = 0.2, 1.3 r_min, r_max = 0.7, 2.0 theta = np.linspace(theta_min, theta_max, nth) r = np.linspace(r_min, r_max, nr) R, TH = np.meshgrid(r, theta) Xp, Yp = R*np.cos(TH), R*np.sin(TH) array = g(Xp, Yp) x, y = np.linspace(0.0, 2.0, 200), np.linspace(0.0, 2.0, 200) X, Y = np.meshgrid(x, y) blob = g(X, Y) xtest = np.linspace(0.25, 1.75, 40) ytest = np.zeros_like(xtest) + 0.75 rtest = np.sqrt(xtest**2 + ytest**2) thetatest = np.arctan2(xtest, ytest) ir = findit(rtest, r) it = findit(thetatest, theta) plt.figure() plt.subplot(2,1,1) plt.scatter(100.0*Xp.flatten(), 100.0*Yp.flatten()) plt.plot(100.0*xtest, 100.0*ytest, '-k', linewidth=3) plt.hold plt.imshow(blob, origin='lower', cmap='gray') plt.text(5, 5, \"don't use jet!\", color='white') exact = g(xtest, ytest) import scipy.ndimage.interpolation as spndint ndint0 = spndint.map_coordinates(array, [it, ir], order=0) ndint1 = spndint.map_coordinates(array, [it, ir], order=1) ndint2 = spndint.map_coordinates(array, [it, ir], order=2) import scipy.interpolate as spint points = np.vstack((Xp.flatten(), Yp.flatten())).T # could use np.array(zip(...)) grid_x = xtest grid_y = np.array([0.75]) g0 = spint.griddata(points, array.flatten(), (grid_x, grid_y), method='nearest') g1 = spint.griddata(points, array.flatten(), (grid_x, grid_y), method='linear') g2 = spint.griddata(points, array.flatten(), (grid_x, grid_y), method='cubic') plt.subplot(4,2,5) plt.plot(exact, 'or') #plt.plot(ndint0) plt.plot(ndint1) plt.plot(ndint2) plt.title(\"map_coordinates\") plt.subplot(4,2,6) plt.plot(exact, 'or') #plt.plot(g0) plt.plot(g1) plt.plot(g2) plt.title(\"griddata\") plt.subplot(4,2,7) #plt.plot(ndint0 - exact) 
plt.plot(ndint1 - exact) plt.plot(ndint2 - exact) plt.title(\"error map_coordinates\") plt.subplot(4,2,8) #plt.plot(g0 - exact) plt.plot(g1 - exact) plt.plot(g2 - exact) plt.title(\"error griddata\") plt.show() seed_points_rand = 2.0 * np.random.random((400, 2)) rr = np.sqrt((seed_points_rand**2).sum(axis=-1)) thth = np.arctan2(seed_points_rand[...,1], seed_points_rand[...,0]) isinside = (rr>r_min) * (rrtheta_min) * (thth 0.0: if x1 >= b: return None,None x1 = x2; f1 = f2 x2 = x1 + dx; f2 = f(x2) return x1,x2 def bisect(f,x1,x2,switch=0,epsilon=1.0e-9): f1 = f(x1) if f1 == 0.0: return x1 f2 = f(x2) if f2 == 0.0: return x2 if f1*f2 > 0.0: print('Root is not bracketed') return None n = int(math.ceil(math.log(abs(x2 - x1)\/epsilon)\/math.log(2.0))) for i in range(n): x3 = 0.5*(x1 + x2); f3 = f(x3) if (switch == 1) and (abs(f3) >abs(f1)) and (abs(f3) > abs(f2)): return None if f3 == 0.0: return x3 if f2*f3 < 0.0: x1 = x3 f1 = f3 else: x2 =x3 f2 = f3 return (x1 + x2)\/2.0 def roots(f, a, b, eps=1e-6): print ('The roots on the interval [%f, %f] are:' % (a,b)) while 1: x1,x2 = rootsearch(f,a,b,eps) if x1 != None: a = x2 root = bisect(f,x1,x2,1) if root != None: pass print (round(root,-int(math.log(eps, 10)))) else: print ('\\nDone') break f=lambda x:x*math.cos(x-4) roots(f, -3, 3) ``` roots finds all roots of f in the interval [a, b].", "best_answers_score":0.8, "library_name":"scipy", "question_url":"https:\/\/stackoverflow.com\/questions\/13054758\/python-finding-multiple-roots-of-nonlinear-equation", "best_answers_votes":23, "question_length":576, "response_length":1158 }, { "question":"Parallel optimizations in SciPy I have a simple function ``` def square(x, a=1): return [x**2 + a, 2*x] ``` I want to minimize it over x, for several parameters a. 
I currently have loops that, in spirit, do something like this: ``` In [89]: from scipy import optimize In [90]: res = optimize.minimize(square, 25, method='BFGS', jac=True) In [91]: [res.x, res.fun] Out[91]: [array([ 0.]), 1.0] In [92]: l = lambda x: square(x, 2) In [93]: res = optimize.minimize(l, 25, method='BFGS', jac=True) In [94]: [res.x, res.fun] Out[94]: [array([ 0.]), 2.0] ``` Now, the function is already vectorized ``` In [98]: square(array([2,3])) Out[98]: [array([ 5, 10]), array([4, 6])] In [99]: square(array([2,3]), array([2,3])) Out[99]: [array([ 6, 12]), array([4, 6])] ``` Which means it would probably be much faster to run all the optimizations in parallel rather than looping. Is that something that's easily do-able with SciPy? Or any other 3rd party tool?", "response":"Here's another try, based on my original answer and the discussion that followed. As far as I know, the scipy.optimize module is for functions with scalar or vector inputs and a scalar output, or \"cost\". Since you're treating each equation as independent of the others, my best idea is to use the multiprocessing module to do the work in parallel. If the functions you're minimizing are as simple as the ones in your question, I'd say it's not worth the effort. 
If the functions are more complex, and you'd like to divide the work up, try something like: ``` import numpy as np from scipy import optimize from multiprocessing import Pool def square(x, a=1): return [np.sum(x**2 + a), 2*x] def minimize(args): f,x,a = args res = optimize.minimize(f, x, method = 'BFGS', jac = True, args = [a]) return res.x # your a values a = np.arange(1,11) # initial guess for all the x values x = np.empty(len(a)) x[:] = 25 args = [(square,a[i],x[i]) for i in range(10)] p = Pool(4) print p.map(minimize,args) ```", "best_answers_score":0.8, "library_name":"scipy", "question_url":"https:\/\/stackoverflow.com\/questions\/12874756\/parallel-optimizations-in-scipy", "best_answers_votes":25, "question_length":946, "response_length":999 }, { "question":"scipy curve_fit raises \"OptimizeWarning: Covariance of the parameters could not be estimated\" I am trying to fit this function to some data: But when I use my code ```py import numpy as np from scipy.optimize import curve_fit import matplotlib.pyplot as plt def f(x, start, end): res = np.empty_like(x) res[x < start] = -1 res[x > end] = 1 linear = np.all([[start <= x], [x <= end]], axis=0)[0] res[linear] = np.linspace(-1., 1., num=np.sum(linear)) return res if __name__ == '__main__': xdata = np.linspace(0., 1000., 1000) ydata = -np.ones(1000) ydata[500:1000] = 1.
ydata = ydata + np.random.normal(0., 0.25, len(ydata)) popt, pcov = curve_fit(f, xdata, ydata, p0=[495., 505.]) print(popt, pcov) plt.figure() plt.plot(xdata, f(xdata, *popt), 'r-', label='fit') plt.plot(xdata, ydata, 'b-', label='data') plt.show() ``` I get the warning ```none OptimizeWarning: Covariance of the parameters could not be estimated ``` Output: In this example start and end should be closer to 500, but they don't change at all from my initial guess.", "response":"The warning (not error) of ``` OptimizeWarning: Covariance of the parameters could not be estimated ``` means that the fit could not determine the uncertainties (variance) of the fitting parameters. The main problem is that your model function f treats the parameters start and end as discrete values -- they are used as integer locations for the change in functional form. scipy's curve_fit (and all other optimization routines in scipy.optimize) assume that parameters are continuous variables, not discrete. The fitting procedure will try to take small steps (typically around machine precision) in the parameters to get a numerical derivative of the residual with respect to the variables (the Jacobian). With values used as discrete variables, these derivatives will be zero and the fitting procedure will not know how to change the values to improve the fit. It looks like you're trying to fit a step function to some data. Allow me to recommend trying lmfit (https:\/\/lmfit.github.io\/lmfit-py) which provides a higher-level interface to curve fitting, and has many built-in models. For example, it includes a StepModel that should be able to model your data. For a slight modification of your data (so that it has a finite step), the following script with lmfit can fit such data: ``` #!\/usr\/bin\/python import numpy as np from lmfit.models import StepModel, LinearModel import matplotlib.pyplot as plt np.random.seed(0) xdata = np.linspace(0., 1000., 1000) ydata = -np.ones(1000) ydata[500:1000] = 1. 
# note that a linear step is added here: ydata[490:510] = -1 + np.arange(20)\/10.0 ydata = ydata + np.random.normal(size=len(xdata), scale=0.1) # model data as Step + Line step_mod = StepModel(form='linear', prefix='step_') line_mod = LinearModel(prefix='line_') model = step_mod + line_mod # make named parameters, giving initial values: pars = model.make_params(line_intercept=ydata.min(), line_slope=0, step_center=xdata.mean(), step_amplitude=ydata.std(), step_sigma=2.0) # fit data to this model with these parameters out = model.fit(ydata, pars, x=xdata) # print results print(out.fit_report()) # plot data and best-fit plt.plot(xdata, ydata, 'b') plt.plot(xdata, out.best_fit, 'r-') plt.show() ``` which prints out a report of ``` [[Model]] (Model(step, prefix='step_', form='linear') + Model(linear, prefix='line_')) [[Fit Statistics]] # fitting method = leastsq # function evals = 49 # data points = 1000 # variables = 5 chi-square = 9.72660131 reduced chi-square = 0.00977548 Akaike info crit = -4622.89074 Bayesian info crit = -4598.35197 [[Variables]] step_sigma: 20.6227793 +\/- 0.77214167 (3.74%) (init = 2) step_center: 490.167878 +\/- 0.44804412 (0.09%) (init = 500) step_amplitude: 1.98946656 +\/- 0.01304854 (0.66%) (init = 0.996283) line_intercept: -1.00628058 +\/- 0.00706005 (0.70%) (init = -1.277259) line_slope: 1.3947e-05 +\/- 2.2340e-05 (160.18%) (init = 0) [[Correlations]] (unreported correlations are < 0.100) C(step_amplitude, line_slope) = -0.875 C(step_sigma, step_center) = -0.863 C(line_intercept, line_slope) = -0.774 C(step_amplitude, line_intercept) = 0.461 C(step_sigma, step_amplitude) = 0.170 C(step_sigma, line_slope) = -0.147 C(step_center, step_amplitude) = -0.146 C(step_center, line_slope) = 0.127 ``` and produces a plot of Lmfit has lots of extra features. 
For example, if you want to set bounds on some of the parameter values or fix some from varying, you can do the following: ``` # make named parameters, giving initial values: pars = model.make_params(line_intercept=ydata.min(), line_slope=0, step_center=xdata.mean(), step_amplitude=ydata.std(), step_sigma=2.0) # now set max and min values for step amplitude pars['step_amplitude'].min = 0 pars['step_amplitude'].max = 100 # fix the intercept of the line to be -1.0 pars['line_intercept'].value = -1.0 pars['line_intercept'].vary = False # then run fit with these parameters out = model.fit(ydata, pars, x=xdata) ``` If you know the model should be Step+Constant and that the constant should be fixed, you could also modify the model to be ``` from lmfit.models import ConstantModel # model data as Step + Constant step_mod = StepModel(form='linear', prefix='step_') const_mod = ConstantModel(prefix='const_') model = step_mod + const_mod pars = model.make_params(const_c=-1, step_center=xdata.mean(), step_amplitude=ydata.std(), step_sigma=2.0) pars['const_c'].vary = False ```", "best_answers_score":0.8, "library_name":"scipy", "question_url":"https:\/\/stackoverflow.com\/questions\/50371428\/scipy-curve-fit-raises-optimizewarning-covariance-of-the-parameters-could-not", "best_answers_votes":36, "question_length":1014, "response_length":4375 }, { "question":"Slicing arrays in Numpy \/ Scipy I have an array like: ``` a = array([[1,2,3],[3,4,5],[4,5,6]]) ``` What's the most efficient way to slice out a 3x2 array out of this that has only the last two columns of \"a\"? i.e. ``` array([[2,3],[4,5],[5,6]]) in this case.
```", "response":"Two dimensional numpy arrays are indexed using a[i,j] (not a[i][j]), but you can use the same slicing notation with numpy arrays and matrices as you can with ordinary matrices in python (just put them in a single []): ``` >>> from numpy import array >>> a = array([[1,2,3],[3,4,5],[4,5,6]]) >>> a[:,1:] array([[2, 3], [4, 5], [5, 6]]) ```", "best_answers_score":0.8, "library_name":"scipy", "question_url":"https:\/\/stackoverflow.com\/questions\/2725750\/slicing-arrays-in-numpy-scipy", "best_answers_votes":28, "question_length":263, "response_length":338 }, { "question":"How to add clipboard support to Matplotlib figures? In MATLAB, there is a very convenient option to copy the current figure to the clipboard. Although Python\/numpy\/scipy\/matplotlib is a great alternative to MATLAB, such an option is unfortunately missing. Can this option easily be added to Matplotlib figures? Preferably, all MPL figures should automatically benefit from this functionality. I'm using MPL's Qt4Agg backend, with PySide.", "response":"Yes, it can. The idea is to replace the default plt.figure with a custom one (a technique known as monkey patching) that injects a keyboard handler for copying to the clipboard. 
The following code will allow you to copy any MPL figure to the clipboard by pressing Ctrl+C: ``` import io import matplotlib.pyplot as plt from PySide.QtGui import QApplication, QImage def add_clipboard_to_figures(): # use monkey-patching to replace the original plt.figure() function with # our own, which supports clipboard-copying oldfig = plt.figure def newfig(*args, **kwargs): fig = oldfig(*args, **kwargs) def clipboard_handler(event): if event.key == 'ctrl+c': # store the image in a buffer using savefig(), this has the # advantage of applying all the default savefig parameters # such as background color; those would be ignored if you simply # grab the canvas using Qt buf = io.BytesIO() fig.savefig(buf) QApplication.clipboard().setImage(QImage.fromData(buf.getvalue())) buf.close() fig.canvas.mpl_connect('key_press_event', clipboard_handler) return fig plt.figure = newfig add_clipboard_to_figures() ``` Note that if you want to use from matplotlib.pyplot import * (e.g. in an interactive session), you need to do so after you've executed the above code, otherwise the figure you import into the default namespace will be the unpatched version.", "best_answers_score":0.8, "library_name":"scipy", "question_url":"https:\/\/stackoverflow.com\/questions\/31607458\/how-to-add-clipboard-support-to-matplotlib-figures", "best_answers_votes":15, "question_length":437, "response_length":1337 }, { "question":"How to get confidence intervals from curve_fit My question involves statistics and python and I am a beginner in both. I am running a simulation, and for each value for the independent variable (X) I produce 1000 values for the dependent variable (Y). What I have done is that I calculated the average of Y for each value of X and fitted these averages using scipy.optimize.curve_fit. The curve fits nicely, but I want to draw also the confidence intervals. 
I am not sure if what I am doing is correct or if what I want to do can be done, but my question is how can I get the confidence intervals from the covariance matrix produced by curve_fit. The code reads the averages from files first, then simply uses curve_fit. ``` import numpy as np import matplotlib.pyplot as plt from scipy.optimize import curve_fit def readTDvsTx(L, B, P, fileformat): # L should be '_Fixed_' or '_' TD = [] infile = open(fileformat.format(L, B, P), 'r') infile.readline() # To remove header for line in infile: l = line.split() # each line contains TxR followed by CD followed by TD if eval(l[0]) >= 70 and eval(l[0]) <=190: td = eval(l[2]) TD.append(td) infile.close() tdArray = np.array(TD) return tdArray def rec(x, a, b): return a * (1 \/ (x**2)) + b fileformat = 'Densities_file{}BS{}_PRNTS{}.txt' txR = np.array(range(70, 200, 20)) parents = np.array(range(1,6)) disc_p1 = readTDvsTx('_Fixed_', 5, 1, fileformat) popt, pcov = curve_fit(rec, txR, disc_p1) plt.plot(txR, rec(txR, popt[0], popt[1]), 'r-') plt.plot(txR, disc_p1, '.') print(popt) plt.show() ``` And here is the resulting fit:", "response":"Here's a quick and wrong answer: you can approximate the errors from the covariance matrix for your a and b parameters as the square root of its diagonals: np.sqrt(np.diagonal(pcov)). The parameter uncertainties can then be used to draw the confidence intervals. The answer is wrong because, before you fit your data to a model, you'll need an estimate of the errors on your averaged disc_p1 points. When averaging, you have lost the information about the scatter of the population, leading curve_fit to believe that the y-points you feed it are absolute and indisputable. This might cause an underestimation of your parameter errors. For an estimate of the uncertainties of your averaged Y values, you need to estimate their dispersion measure and pass it along to curve_fit while saying that your errors are absolute.
Below is an example of how to do this for a random dataset where each of your points consists of a 1000 samples drawn from a normal distribution. ``` from scipy.optimize import curve_fit import matplotlib.pylab as plt import numpy as np # model function func = lambda x, a, b: a * (1 \/ (x**2)) + b # approximating OP points n_ypoints = 7 x_data = np.linspace(70, 190, n_ypoints) # approximating the original scatter in Y-data n_nested_points = 1000 point_errors = 50 y_data = [func(x, 4e6, -100) + np.random.normal(x, point_errors, n_nested_points) for x in x_data] # averages and dispersion of data y_means = np.array(y_data).mean(axis = 1) y_spread = np.array(y_data).std(axis = 1) best_fit_ab, covar = curve_fit(func, x_data, y_means, sigma = y_spread, absolute_sigma = True) sigma_ab = np.sqrt(np.diagonal(covar)) from uncertainties import ufloat a = ufloat(best_fit_ab[0], sigma_ab[0]) b = ufloat(best_fit_ab[1], sigma_ab[1]) text_res = \"Best fit parameters:\\na = {}\\nb = {}\".format(a, b) print(text_res) # plotting the unaveraged data flier_kwargs = dict(marker = 'o', markerfacecolor = 'silver', markersize = 3, alpha=0.7) line_kwargs = dict(color = 'k', linewidth = 1) bp = plt.boxplot(y_data, positions = x_data, capprops = line_kwargs, boxprops = line_kwargs, whiskerprops = line_kwargs, medianprops = line_kwargs, flierprops = flier_kwargs, widths = 5, manage_ticks = False) # plotting the averaged data with calculated dispersion #plt.scatter(x_data, y_means, facecolor = 'silver', alpha = 1) #plt.errorbar(x_data, y_means, y_spread, fmt = 'none', ecolor = 'black') # plotting the model hires_x = np.linspace(50, 190, 100) plt.plot(hires_x, func(hires_x, *best_fit_ab), 'black') bound_upper = func(hires_x, *(best_fit_ab + sigma_ab)) bound_lower = func(hires_x, *(best_fit_ab - sigma_ab)) # plotting the confidence intervals plt.fill_between(hires_x, bound_lower, bound_upper, color = 'black', alpha = 0.15) plt.text(140, 800, text_res) plt.xlim(40, 200) plt.ylim(0, 1000) 
plt.show() ``` Edit: If you are not considering the intrinsic errors on the data points, you are probably fine with using the \"quick and wrong\" case I mentioned before. The square root of the diagonal entries of the covariance matrix can then be used to calculate your confidence intervals. However, note that the confidence intervals have shrunk now that we've dropped the uncertainties: ``` from scipy.optimize import curve_fit import matplotlib.pylab as plt import numpy as np func = lambda x, a, b: a * (1 \/ (x**2)) + b n_ypoints = 7 x_data = np.linspace(70, 190, n_ypoints) y_data = np.array([786.31, 487.27, 341.78, 265.49, 224.76, 208.04, 200.22]) best_fit_ab, covar = curve_fit(func, x_data, y_data) sigma_ab = np.sqrt(np.diagonal(covar)) # an easy way to properly format parameter errors from uncertainties import ufloat a = ufloat(best_fit_ab[0], sigma_ab[0]) b = ufloat(best_fit_ab[1], sigma_ab[1]) text_res = \"Best fit parameters:\\na = {}\\nb = {}\".format(a, b) print(text_res) plt.scatter(x_data, y_data, facecolor = 'silver', edgecolor = 'k', s = 10, alpha = 1) # plotting the model hires_x = np.linspace(50, 200, 100) plt.plot(hires_x, func(hires_x, *best_fit_ab), 'black') bound_upper = func(hires_x, *(best_fit_ab + sigma_ab)) bound_lower = func(hires_x, *(best_fit_ab - sigma_ab)) # plotting the confidence intervals plt.fill_between(hires_x, bound_lower, bound_upper, color = 'black', alpha = 0.15) plt.text(140, 630, text_res) plt.xlim(60, 200) plt.ylim(0, 800) plt.show() ``` If you're unsure whether to include the absolute errors or how to estimate them in your case, you'd be better off asking for advice at Cross Validated, as Stack Overflow is mainly for discussion on implementations of regression methods and not for discussion on the underlying statistics.
"question_length":1582, "response_length":4588 }, { "question":"Interpolating a closed curve using scipy I'm writing a python script to interpolate a given set of points with splines. The points are defined by their [x, y] coordinates. I tried to use this code: ``` x = np.array([23, 24, 24, 25, 25]) y = np.array([13, 12, 13, 12, 13]) tck, u = scipy.interpolate.splprep([x,y], s=0) unew = np.arange(0, 1.00, 0.005) out = scipy.interpolate.splev(unew, tck) ``` which gives me a curve like this: However, I need to have a smooth closed curve - on the picture above the derivatives at one of the points are obviously not the same. How can I achieve this?", "response":"Your closed path can be considered as a parametric curve, x=f(u), y=g(u) where u is distance along the curve, bounded on the interval [0, 1). You can use scipy.interpolate.splprep with per=True to treat your x and y points as periodic, then evaluate the fitted splines using scipy.interpolate.splev: ``` import numpy as np from scipy import interpolate from matplotlib import pyplot as plt x = np.array([23, 24, 24, 25, 25]) y = np.array([13, 12, 13, 12, 13]) # append the starting x,y coordinates x = np.r_[x, x[0]] y = np.r_[y, y[0]] # fit splines to x=f(u) and y=g(u), treating both as periodic. also note that s=0 # is needed in order to force the spline fit to pass through all the input points. tck, u = interpolate.splprep([x, y], s=0, per=True) # evaluate the spline fits for 1000 evenly spaced distance values xi, yi = interpolate.splev(np.linspace(0, 1, 1000), tck) # plot the result fig, ax = plt.subplots(1, 1) ax.plot(x, y, 'or') ax.plot(xi, yi, '-b') ```", "best_answers_score":0.8, "library_name":"scipy", "question_url":"https:\/\/stackoverflow.com\/questions\/33962717\/interpolating-a-closed-curve-using-scipy", "best_answers_votes":27, "question_length":588, "response_length":968 }, { "question":"What is the difference between cholesky in numpy and scipy? 
I use Cholesky decomposition to sample random variables from multi-dimension Gaussian, and calculate the power spectrum of the random variables. The result I get from numpy.linalg.cholesky always has higher power in high frequencies than from scipy.linalg.cholesky. What are the differences between these two functions that could possibly cause this result? Which one is more numerically stable? Here is the code I use: ``` n = 2000 m = 10000 c0 = np.exp(-.05*np.arange(n)) C = linalg.toeplitz(c0) Xn = np.dot(np.random.randn(m,n),np.linalg.cholesky(C)) Xs = np.dot(np.random.randn(m,n),linalg.cholesky(C)) Xnf = np.fft.fft(Xn) Xsf = np.fft.fft(Xs) Xnp = np.mean(Xnf*Xnf.conj(),axis=0) Xsp = np.mean(Xsf*Xsf.conj(),axis=0) ```", "response":"scipy.linalg.cholesky is giving you the upper-triangular decomposition by default, whereas np.linalg.cholesky is giving you the lower-triangular version. From the docs for scipy.linalg.cholesky: ``` cholesky(a, lower=False, overwrite_a=False) Compute the Cholesky decomposition of a matrix. Returns the Cholesky decomposition, :math:`A = L L^*` or :math:`A = U^* U` of a Hermitian positive-definite matrix A. Parameters ---------- a : ndarray, shape (M, M) Matrix to be decomposed lower : bool Whether to compute the upper or lower triangular Cholesky factorization. Default is upper-triangular. overwrite_a : bool Whether to overwrite data in `a` (may improve performance). ``` For example: ``` >>> scipy.linalg.cholesky([[1,2], [1,9]]) array([[ 1. , 2. ], [ 0. , 2.23606798]]) >>> scipy.linalg.cholesky([[1,2], [1,9]], lower=True) array([[ 1. , 0. ], [ 1. , 2.82842712]]) >>> np.linalg.cholesky([[1,2], [1,9]]) array([[ 1. , 0. ], [ 1. 
, 2.82842712]]) ``` If I modify your code to use the same random matrix both times and to use linalg.cholesky(C,lower=True) instead, then I get answers like: ``` >>> Xnp array([ 79621.02629287+0.j, 78060.96077912+0.j, 77110.92428806+0.j, ..., 75526.55192199+0.j, 77110.92428806+0.j, 78060.96077912+0.j]) >>> Xsp array([ 79621.02629287+0.j, 78060.96077912+0.j, 77110.92428806+0.j, ..., 75526.55192199+0.j, 77110.92428806+0.j, 78060.96077912+0.j]) >>> np.allclose(Xnp, Xsp) True ```", "best_answers_score":0.8, "library_name":"scipy", "question_url":"https:\/\/stackoverflow.com\/questions\/16699163\/what-is-the-difference-between-cholesky-in-numpy-and-scipy", "best_answers_votes":26, "question_length":786, "response_length":1418 }, { "question":"Do not print \"optimization terminated successfully\" scipy.optimize.fmin? Is there a way to \"quietly\" use scipy.optimize.fmin? That is, that it does not print e.g. ``` Optimization terminated successfully. Current function value: 0.000000 Iterations: 13 Function evaluations: 30 ``` when running the code? This could be useful in e.g. loops that last a for a while.", "response":"Setting the argument disp=False stops scipy.optimize.fmin from printing messages.", "best_answers_score":0.8, "library_name":"scipy", "question_url":"https:\/\/stackoverflow.com\/questions\/22330620\/do-not-print-optimization-terminated-successfully-scipy-optimize-fmin", "best_answers_votes":22, "question_length":364, "response_length":81 }, { "question":"Python 4D linear interpolation on a rectangular grid I need to interpolate temperature data linearly in 4 dimensions (latitude, longitude, altitude and time). The number of points is fairly high (360x720x50x8) and I need a fast method of computing the temperature at any point in space and time within the data bounds. I have tried using scipy.interpolate.LinearNDInterpolator but using Qhull for triangulation is inefficient on a rectangular grid and takes hours to complete. 
By reading this SciPy ticket, the solution seemed to be implementing a new nd interpolator using the standard interp1d to calculate a higher number of data points, and then use a \"nearest neighbor\" approach with the new dataset. This, however, takes a long time again (minutes). Is there a quick way of interpolating data on a rectangular grid in 4 dimensions without it taking minutes to accomplish? I thought of using interp1d 4 times without calculating a higher density of points, but leaving it for the user to call with the coordinates, but I can't get my head around how to do this. Otherwise would writing my own 4D interpolator specific to my needs be an option here? Here's the code I've been using to test this: Using scipy.interpolate.LinearNDInterpolator: ``` import numpy as np from scipy.interpolate import LinearNDInterpolator lats = np.arange(-90,90.5,0.5) lons = np.arange(-180,180,0.5) alts = np.arange(1,1000,21.717) time = np.arange(8) data = np.random.rand(len(lats)*len(lons)*len(alts)*len(time)).reshape((len(lats),len(lons),len(alts),len(time))) coords = np.zeros((len(lats),len(lons),len(alts),len(time),4)) coords[...,0] = lats.reshape((len(lats),1,1,1)) coords[...,1] = lons.reshape((1,len(lons),1,1)) coords[...,2] = alts.reshape((1,1,len(alts),1)) coords[...,3] = time.reshape((1,1,1,len(time))) coords = coords.reshape((data.size,4)) interpolatedData = LinearNDInterpolator(coords,data) ``` Using scipy.interpolate.interp1d: ``` import numpy as np from scipy.interpolate import LinearNDInterpolator lats = np.arange(-90,90.5,0.5) lons = np.arange(-180,180,0.5) alts = np.arange(1,1000,21.717) time = np.arange(8) data = np.random.rand(len(lats)*len(lons)*len(alts)*len(time)).reshape((len(lats),len(lons),len(alts),len(time))) interpolatedData = np.array([None, None, None, None]) interpolatedData[0] = interp1d(lats,data,axis=0) interpolatedData[1] = interp1d(lons,data,axis=1) interpolatedData[2] = interp1d(alts,data,axis=2) interpolatedData[3] = 
interp1d(time,data,axis=3) ``` Thank you very much for your help!", "response":"In the same ticket you have linked, there is an example implementation of what they call tensor product interpolation, showing the proper way to nest recursive calls to interp1d. This is equivalent to quadrilinear interpolation if you choose the default kind='linear' parameter for your interp1d's. While this may be good enough, this is not linear interpolation, and there will be higher order terms in the interpolation function, as this image from the wikipedia entry on bilinear interpolation shows: This may very well be good enough for what you are after, but there are applications where a triangulated, really piecewise linear, interpolation is preferred. If you really need this, there is an easy way of working around the slowness of qhull. Once LinearNDInterpolator has been set up, there are two steps to coming up with an interpolated value for a given point: figure out inside which triangle (4D hypertetrahedron in your case) the point is, and interpolate using the barycentric coordinates of the point relative to the vertices as weights. You probably do not want to mess with barycentric coordinates, so better leave that to LinearNDInterpolator. But you do know some things about the triangulation. Mostly that, because you have a regular grid, within each hypercube the triangulation is going to be the same.
So to interpolate a single value, you could first determine in which subcube your point is, build a LinearNDInterpolator with the 16 vertices of that cube, and use it to interpolate your value: ``` from itertools import product def interpolator(coords, data, point) : dims = len(point) indices = [] sub_coords = [] for j in xrange(dims) : idx = np.digitize([point[j]], coords[j])[0] indices += [[idx - 1, idx]] sub_coords += [coords[j][indices[-1]]] indices = np.array([j for j in product(*indices)]) sub_coords = np.array([j for j in product(*sub_coords)]) sub_data = data[list(np.swapaxes(indices, 0, 1))] li = LinearNDInterpolator(sub_coords, sub_data) return li([point])[0] >>> point = np.array([12.3,-4.2, 500.5, 2.5]) >>> interpolator((lats, lons, alts, time), data, point) 0.386082399091 ``` This cannot work on vectorized data, since that would require storing a LinearNDInterpolator for every possible subcube, and even though it probably would be faster than triangulating the whole thing, it would still be very slow.", "best_answers_score":0.8, "library_name":"scipy", "question_url":"https:\/\/stackoverflow.com\/questions\/14119892\/python-4d-linear-interpolation-on-a-rectangular-grid", "best_answers_votes":13, "question_length":2523, "response_length":2355 }, { "question":"SciPy SVD vs. Numpy SVD Both SciPy and Numpy have built in functions for singular value decomposition (SVD). The commands are basically scipy.linalg.svd and numpy.linalg.svd. What is the difference between these two? Is any of them better than the other one?", "response":"From the FAQ page, it says scipy.linalg submodule provides a more complete wrapper for the Fortran LAPACK library whereas numpy.linalg tries to be able to build independent of LAPACK. 
I did some benchmarks for the different implementations of the svd functions and found scipy.linalg.svd is faster than the numpy counterpart: However, jax-wrapped numpy, aka jax.numpy.linalg.svd, is even faster: The full notebook for the benchmarks is available here.", "best_answers_score":0.8, "library_name":"scipy", "question_url":"https:\/\/stackoverflow.com\/questions\/32569188\/scipy-svd-vs-numpy-svd", "best_answers_votes":14, "question_length":258, "response_length":446 }, { "question":"How to perform a chi-squared goodness of fit test using scientific libraries in Python? Let's assume I have some data I obtained empirically: ``` from scipy import stats size = 10000 x = 10 * stats.expon.rvs(size=size) + 0.2 * np.random.uniform(size=size) ``` It is exponentially distributed (with some noise) and I want to verify this using a chi-squared goodness of fit (GoF) test. What is the simplest way of doing this using the standard scientific libraries in Python (e.g. scipy or statsmodels) with the least amount of manual steps and assumptions? I can fit a model with: ``` param = stats.expon.fit(x) plt.hist(x, normed=True, color='white', hatch='\/') plt.plot(grid, distr.pdf(np.linspace(0, 100, 10000), *param)) ``` It is very elegant to calculate the Kolmogorov-Smirnov test. ``` >>> stats.kstest(x, lambda x : stats.expon.cdf(x, *param)) (0.0061000000000000004, 0.85077099515985011) ``` However, I can't find a good way of calculating the chi-squared test. There is a chi-squared GoF function in statsmodel, but it assumes a discrete distribution (and the exponential distribution is continuous). The official scipy.stats tutorial only covers a case for a custom distribution and probabilities are built by fiddling with many expressions (npoints, npointsh, nbound, normbound), so it's not quite clear to me how to do it for other distributions. The chisquare examples assume the expected values and DoF are already obtained.
Also, I am not looking for a way to \"manually\" perform the test as was already discussed here, but would like to know how to apply one of the available library functions.", "response":"An approximate solution for equal probability bins: Estimate the parameters of the distribution. Use the inverse cdf, ppf if it's a scipy.stats.distribution, to get the binedges for a regular probability grid, e.g. distribution.ppf(np.linspace(0, 1, n_bins + 1), *args). Then use np.histogram to count the number of observations in each bin, and use the chisquare test on the frequencies. An alternative would be to find the bin edges from the percentiles of the sorted data, and use the cdf to find the actual probabilities. This is only approximate, since the theory for the chisquare test assumes that the parameters are estimated by maximum likelihood on the binned data. And I'm not sure whether the selection of binedges based on the data affects the asymptotic distribution. I haven't looked into this in a long time.
If an approximate solution is not good enough, then I would recommend that you ask the question on stats.stackexchange.", "best_answers_score":0.8, "library_name":"scipy", "question_url":"https:\/\/stackoverflow.com\/questions\/24371051\/how-to-perform-a-chi-squared-goodness-of-fit-test-using-scientific-libraries-in", "best_answers_votes":5, "question_length":1610, "response_length":941 }, { "question":"ImportError: cannot import name NUMPY_MKL I am trying to run the following simple code ``` import scipy scipy.test() ``` But I am getting the following error ``` Traceback (most recent call last): File \"\", line 1, in File \"C:\\Python27\\lib\\site-packages\\spyderlib\\widgets\\externalshell\\sitecustomize.py\", line 586, in runfile execfile(filename, namespace) File \"C:\/Users\/Mustafa\/Documents\/My Python Code\/SpectralGraphAnalysis\/main.py\", line 8, in import scipy File \"C:\\Python27\\lib\\site-packages\\scipy\\__init__.py\", line 61, in from numpy._distributor_init import NUMPY_MKL # requires numpy+mkl ImportError: cannot import name NUMPY_MKL ``` I am using python 2.7 under windows 10. I have installed scipy but that does not seem to solve the problem Any help is appreciated.", "response":"If you look at the line which is causing the error, you'll see this: ``` from numpy._distributor_init import NUMPY_MKL # requires numpy+mkl ``` This line comment states the dependency as numpy+mkl (numpy with Intel Math Kernel Library). This means that you've installed the numpy by pip, but the scipy was installed by precompiled archive, which expects numpy+mkl. 
This problem can be easily solved by installing numpy+mkl from a whl file from here.", "best_answers_score":0.7974, "library_name":"scipy", "question_url":"https:\/\/stackoverflow.com\/questions\/37267399\/importerror-cannot-import-name-numpy-mkl", "best_answers_votes":220, "question_length":774, "response_length":451 }, { "question":"Calculate the Cumulative Distribution Function (CDF) in Python How can I calculate in python the Cumulative Distribution Function (CDF)? I want to calculate it from an array of points I have (discrete distribution), not with the continuous distributions that, for example, scipy has.", "response":"(It is possible that my interpretation of the question is wrong. If the question is how to get from a discrete PDF into a discrete CDF, then np.cumsum divided by a suitable constant will do if the samples are equispaced. If the array is not equispaced, then np.cumsum of the array multiplied by the distances between the points will do.) If you have a discrete array of samples, and you would like to know the CDF of the sample, then you can just sort the array. If you look at the sorted result, you'll realize that the smallest value represents 0%, and the largest value represents 100%. If you want to know the value at 50% of the distribution, just look at the array element which is in the middle of the sorted array. Let us have a closer look at this with a simple example: ``` import matplotlib.pyplot as plt import numpy as np # create some randomly distributed data: data = np.random.randn(10000) # sort the data: data_sorted = np.sort(data) # calculate the proportional values of samples p = 1.
* np.arange(len(data)) \/ (len(data) - 1) # plot the sorted data: fig = plt.figure() ax1 = fig.add_subplot(121) ax1.plot(p, data_sorted) ax1.set_xlabel('$p$') ax1.set_ylabel('$x$') ax2 = fig.add_subplot(122) ax2.plot(data_sorted, p) ax2.set_xlabel('$x$') ax2.set_ylabel('$p$') ``` This gives the following plot where the right-hand-side plot is the traditional cumulative distribution function. It should reflect the CDF of the process behind the points, but naturally it is not exact as long as the number of points is finite. This function is easy to invert, and it depends on your application which form you need.", "best_answers_score":0.7974, "library_name":"scipy", "question_url":"https:\/\/stackoverflow.com\/questions\/24788200\/calculate-the-cumulative-distribution-function-cdf-in-python", "best_answers_votes":60, "question_length":283, "response_length":1615 }, { "question":"How to compute scipy sparse matrix determinant without turning it to dense? I am trying to figure out the fastest method to find the determinant of sparse, symmetric, real matrices in Python using the scipy sparse module, but am really surprised that there is no determinant function. I am aware I could use LU factorization to compute the determinant, but don't see an easy way to do it because the return of scipy.sparse.linalg.splu is an object and instantiating a dense L and U matrix is not worth it - I may as well do sp.linalg.det(A.todense()) where A is my scipy sparse matrix. I am also a bit surprised why others have not faced the problem of efficient determinant computation within scipy. How would one use splu to compute the determinant? I looked into pySparse and scikits.sparse.cholmod. The latter is not practical right now for me - it needs package installations and I am also not sure how fast the code is before I go into all the trouble. Any solutions?
Thanks in advance.", "response":"You can use scipy.sparse.linalg.splu to obtain sparse matrices for the lower (L) and upper (U) triangular matrices of an M=LU decomposition: ``` from scipy.sparse.linalg import splu lu = splu(M) ``` The determinant det(M) can be then represented as: ``` det(M) = det(LU) = det(L)det(U) ``` The determinant of triangular matrices is just the product of the diagonal terms: ``` diagL = lu.L.diagonal() diagU = lu.U.diagonal() d = diagL.prod()*diagU.prod() ``` However, for large matrices underflow or overflow commonly occurs, which can be avoided by working with the logarithms. ``` diagL = diagL.astype(np.complex128) diagU = diagU.astype(np.complex128) logdet = np.log(diagL).sum() + np.log(diagU).sum() ``` Note that I invoke complex arithmetic to account for negative numbers that might appear in the diagonals. Now, from logdet you can recover the determinant: ``` det = np.exp(logdet) # usually underflows\/overflows for large matrices ``` whereas the sign of the determinant can be calculated directly from diagL and diagU (important for example when implementing Crisfield's arc-length method): ``` sign = swap_sign*np.sign(diagL).prod()*np.sign(diagU).prod() ``` where swap_sign is a term to consider the number of permutations in the LU decomposition. 
Thanks to @Luiz Felippe Rodrigues, it can be calculated: ``` def minimumSwaps(arr): \"\"\" Minimum number of swaps needed to order a permutation array \"\"\" # from https:\/\/www.thepoorcoder.com\/hackerrank-minimum-swaps-2-solution\/ a = dict(enumerate(arr)) b = {v:k for k,v in a.items()} count = 0 for i in a: x = a[i] if x!=i: y = b[i] a[y] = x b[x] = y count+=1 return count swap_sign = (-1)**minimumSwaps(lu.perm_r) ```", "best_answers_score":0.7973, "library_name":"scipy", "question_url":"https:\/\/stackoverflow.com\/questions\/19107617\/how-to-compute-scipy-sparse-matrix-determinant-without-turning-it-to-dense", "best_answers_votes":10, "question_length":973, "response_length":1675 }, { "question":"How to detect a shift between images I am analyzing multiple images and need to be able to tell if they are shifted compared to a reference image. The purpose is to tell if the camera moved at all in between capturing images. I would ideally like to be able to correct the shift in order to still do the analysis, but at a minimum I need to be able to determine if an image is shifted and discard it if it's beyond a certain threshold. Here are some examples of the shifts in an image I would like to detect: I will use the first image as a reference and then compare all of the following images to it to figure out if they are shifted. The images are gray-scale (they are just displayed in color using a heat-map) and are stored in a 2-D numpy array. Any ideas how I can do this? I would prefer to use the packages I already have installed (scipy, numpy, PIL, matplotlib).", "response":"As Lukas Graf hints, you are looking for cross-correlation. It works well, if: The scale of your images does not change considerably. There is no rotation change in the images. There is no significant illumination change in the images. For plain translations cross-correlation is very good. The simplest cross-correlation tool is scipy.signal.correlate.
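As a quick one-dimensional illustration of what the correlation peak encodes (a small sketch, not from the original answer; the peak index gives the lag between the two signals):

```python
import numpy as np
from scipy import signal

# two copies of the same unit pulse; the pulse in `a` sits one sample
# to the right of the pulse in `b`
a = np.array([0.0, 0.0, 1.0, 0.0, 0.0])
b = np.array([0.0, 1.0, 0.0, 0.0, 0.0])

corr = signal.correlate(a, b, mode='same')
# in 'same' mode the zero-lag term sits at the centre of the output,
# so peak position minus centre gives the displacement of a relative to b
lag = int(np.argmax(corr)) - len(a) // 2
print(lag)  # 1
```

The same peak-minus-centre bookkeeping carries over to the two-dimensional image case.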
However, it uses the trivial method for cross-correlation, which is O(n^4) for a two-dimensional image with side length n. In practice, with your images it'll take very long. A better tool is scipy.signal.fftconvolve, as convolution and correlation are closely related. Something like this: ``` import numpy as np import scipy.signal def cross_image(im1, im2): # get rid of the color channels by performing a grayscale transform # the type cast into 'float' is to avoid overflows im1_gray = np.sum(im1.astype('float'), axis=2) im2_gray = np.sum(im2.astype('float'), axis=2) # get rid of the averages, otherwise the results are not good im1_gray -= np.mean(im1_gray) im2_gray -= np.mean(im2_gray) # calculate the correlation image; note the flipping of one of the images return scipy.signal.fftconvolve(im1_gray, im2_gray[::-1,::-1], mode='same') ``` The funny-looking indexing of im2_gray[::-1,::-1] rotates it by 180\u00b0 (mirrors both horizontally and vertically). This is the difference between convolution and correlation: correlation is a convolution with the second signal mirrored. Now if we just correlate the first (topmost) image with itself, we get: This gives a measure of self-similarity of the image. The brightest spot is at (201, 200), which is in the center of the (402, 400) image. The brightest spot coordinates can be found: ``` np.unravel_index(np.argmax(corr_img), corr_img.shape) ``` The linear position of the brightest pixel is returned by argmax, but it has to be converted back into the 2D coordinates with unravel_index. Next, we try the same by correlating the first image with the second image: The correlation image looks similar, but the best correlation has moved to (149,200), i.e. 52 pixels upwards in the image. This is the offset between the two images. This seems to work with these simple images. However, there may be false correlation peaks, as well, and any of the problems outlined in the beginning of this answer may ruin the results.
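Putting the peak-to-centre bookkeeping into a small helper (an illustrative sketch; find_shift is not from the original answer, and it assumes the 'same'-mode convention where zero shift puts the peak at the image centre):

```python
import numpy as np

def find_shift(corr_img):
    # (row, col) offset of the correlation peak from the image centre;
    # (0, 0) means the images are aligned under this convention
    peak = np.unravel_index(np.argmax(corr_img), corr_img.shape)
    center = np.array(corr_img.shape) // 2
    return tuple(int(v) for v in np.array(peak) - center)

# synthetic correlation image whose peak is two rows above the centre
corr = np.zeros((11, 11))
corr[3, 5] = 1.0
print(find_shift(corr))  # (-2, 0)
```

With the numbers above, a peak moving from (201, 200) to (149, 200) would come out as a shift of (-52, 0), i.e. 52 pixels upwards.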
In any case you should consider using a windowing function. The choice of the function is not that important, as long as something is used. Also, if you have problems with small rotation or scale changes, try correlating several small areas against the surrounding image. That will give you different displacements at different positions of the image.", "best_answers_score":0.7946, "library_name":"scipy", "question_url":"https:\/\/stackoverflow.com\/questions\/24768222\/how-to-detect-a-shift-between-images", "best_answers_votes":34, "question_length":873, "response_length":2680 }, { "question":"Correlation coefficients and p values for all pairs of rows of a matrix I have a matrix data with m rows and n columns. I used to compute the correlation coefficients between all pairs of rows using np.corrcoef: ``` import numpy as np data = np.array([[0, 1, -1], [0, -1, 1]]) np.corrcoef(data) ``` Now I would also like to have a look at the p-values of these coefficients. np.corrcoef doesn't provide these; scipy.stats.pearsonr does. However, scipy.stats.pearsonr does not accept a matrix on input. Is there a quick way to compute both the coefficient and the p-value for all pairs of rows (arriving e.g. at two m by m matrices, one with correlation coefficients, the other with corresponding p-values) without having to manually go through all pairs?", "response":"I have encountered the same problem today. After half an hour of googling, I couldn't find any code in the numpy\/scipy library that could help me do this.
So I wrote my own version of corrcoef ``` import numpy as np from scipy.stats import pearsonr from scipy.special import betainc def corrcoef(matrix): r = np.corrcoef(matrix) rf = r[np.triu_indices(r.shape[0], 1)] df = matrix.shape[1] - 2 ts = rf * rf * (df \/ (1 - rf * rf)) pf = betainc(0.5 * df, 0.5, df \/ (df + ts)) p = np.zeros(shape=r.shape) p[np.triu_indices(p.shape[0], 1)] = pf p[np.tril_indices(p.shape[0], -1)] = p.T[np.tril_indices(p.shape[0], -1)] p[np.diag_indices(p.shape[0])] = np.ones(p.shape[0]) return r, p def corrcoef_loop(matrix): rows, cols = matrix.shape[0], matrix.shape[1] r = np.ones(shape=(rows, rows)) p = np.ones(shape=(rows, rows)) for i in range(rows): for j in range(i+1, rows): r_, p_ = pearsonr(matrix[i], matrix[j]) r[i, j] = r[j, i] = r_ p[i, j] = p[j, i] = p_ return r, p ``` The first version uses the result of np.corrcoef and then calculates the p-values from the upper-triangle values of the corrcoef matrix. The second, loop version just iterates over the rows and calls pearsonr manually. ``` def test_corrcoef(): a = np.array([ [1, 2, 3, 4], [1, 3, 1, 4], [8, 3, 8, 5], [2, 3, 2, 1]]) r1, p1 = corrcoef(a) r2, p2 = corrcoef_loop(a) assert np.allclose(r1, r2) assert np.allclose(p1, p2) ``` The test passed, they are the same.
``` def test_timing(): import time a = np.random.randn(100, 2500) def timing(func, *args, **kwargs): t0 = time.time() loops = 10 for _ in range(loops): func(*args, **kwargs) print('{} takes {} seconds loops={}'.format( func.__name__, time.time() - t0, loops)) timing(corrcoef, a) timing(corrcoef_loop, a) if __name__ == '__main__': test_corrcoef() test_timing() ``` The performance on my Macbook against 100x2500 matrix corrcoef takes 0.06608104705810547 seconds loops=10 corrcoef_loop takes 7.585600137710571 seconds loops=10", "best_answers_score":0.7945, "library_name":"scipy", "question_url":"https:\/\/stackoverflow.com\/questions\/24432101\/correlation-coefficients-and-p-values-for-all-pairs-of-rows-of-a-matrix", "best_answers_votes":16, "question_length":758, "response_length":1887 }, { "question":"Is there a difference between scipy.pi, numpy.pi, or math.pi? In a project using SciPy and NumPy, when should one use scipy.pi vs numpy.pi vs just math.pi? Is there a difference between these values?", "response":"``` >>> import math >>> import numpy as np >>> import scipy >>> math.pi == np.pi == scipy.pi True ``` So it doesn't matter, they are all the same value. The only reason all three modules provide a pi value is so if you are using just one of the three modules, you can conveniently have access to pi without having to import another module. They're not providing different values for pi.", "best_answers_score":0.7935, "library_name":"scipy", "question_url":"https:\/\/stackoverflow.com\/questions\/12645547\/is-there-a-difference-between-scipy-pi-numpy-pi-or-math-pi", "best_answers_votes":257, "question_length":199, "response_length":386 }, { "question":"Speedup scipy griddata for multiple interpolations between two irregular grids I have several values that are defined on the same irregular grid (x, y, z) that I want to interpolate onto a new grid (x1, y1, z1). 
i.e., I have f(x, y, z), g(x, y, z), h(x, y, z) and I want to calculate f(x1, y1, z1), g(x1, y1, z1), h(x1, y1, z1). At the moment I am doing this using scipy.interpolate.griddata and it works well. However, because I have to perform each interpolation separately and there are many points, it is quite slow, with a great deal of duplication in the calculation (i.e. finding which points are closest, setting up the grids, etc.). Is there a way to speed up the calculation and reduce the duplicated calculations? i.e. something along the lines of defining the two grids, then changing the values for the interpolation?", "response":"There are several things going on every time you make a call to scipy.interpolate.griddata: First, a call to sp.spatial.qhull.Delaunay is made to triangulate the irregular grid coordinates. Then, for each point in the new grid, the triangulation is searched to find in which triangle (actually, in which simplex, which in your 3D case will be in which tetrahedron) it lies. The barycentric coordinates of each new grid point with respect to the vertices of the enclosing simplex are computed. An interpolated value is computed for that grid point, using the barycentric coordinates, and the values of the function at the vertices of the enclosing simplex. The first three steps are identical for all your interpolations, so if you could store, for each new grid point, the indices of the vertices of the enclosing simplex and the weights for the interpolation, you would minimize the amount of computations by a lot.
This is unfortunately not easy to do directly with the functionality available, although it is indeed possible: ``` import itertools import numpy as np import scipy.interpolate as spint from scipy.spatial import Delaunay def interp_weights(xyz, uvw, d=3): tri = Delaunay(xyz) simplex = tri.find_simplex(uvw) vertices = np.take(tri.simplices, simplex, axis=0) temp = np.take(tri.transform, simplex, axis=0) delta = uvw - temp[:, d] bary = np.einsum('njk,nk->nj', temp[:, :d, :], delta) return vertices, np.hstack((bary, 1 - bary.sum(axis=1, keepdims=True))) def interpolate(values, vtx, wts): return np.einsum('nj,nj->n', np.take(values, vtx), wts) ``` The function interp_weights does the calculations for the first three steps I listed above. Then the function interpolate uses those calculated values to do step 4 very fast: ``` m, n, d = 35000, 3000, 3 # make sure no new grid point is extrapolated bounding_cube = np.array(list(itertools.product([0, 1], repeat=d))) xyz = np.vstack((bounding_cube, np.random.rand(m - len(bounding_cube), d))) f = np.random.rand(m) g = np.random.rand(m) uvw = np.random.rand(n, d) In [2]: vtx, wts = interp_weights(xyz, uvw) In [3]: np.allclose(interpolate(f, vtx, wts), spint.griddata(xyz, f, uvw)) Out[3]: True In [4]: %timeit spint.griddata(xyz, f, uvw) 1 loops, best of 3: 2.81 s per loop In [5]: %timeit interp_weights(xyz, uvw) 1 loops, best of 3: 2.79 s per loop In [6]: %timeit interpolate(f, vtx, wts) 10000 loops, best of 3: 66.4 us per loop In [7]: %timeit interpolate(g, vtx, wts) 10000 loops, best of 3: 67 us per loop ``` So first, it does the same as griddata, which is good. Second, setting up the interpolation, i.e. computing vtx and wts, takes roughly the same as a call to griddata. But third, you can now interpolate for different values on the same grid in virtually no time. The only thing that griddata does that is not contemplated here is assigning fill_value to points that have to be extrapolated.
You could do that by checking for points for which at least one of the weights is negative, e.g.: ``` def interpolate(values, vtx, wts, fill_value=np.nan): ret = np.einsum('nj,nj->n', np.take(values, vtx), wts) ret[np.any(wts < 0, axis=1)] = fill_value return ret ```", "best_answers_score":0.7927, "library_name":"scipy", "question_url":"https:\/\/stackoverflow.com\/questions\/20915502\/speedup-scipy-griddata-for-multiple-interpolations-between-two-irregular-grids", "best_answers_votes":60, "question_length":828, "response_length":3132 }, { "question":"Official abbreviation for: import scipy as sp\/sc I've seen both: ``` import scipy as sp ``` and: ``` import scipy as sc ``` Is there an official preference listed anywhere? For example, in the Introduction of the Scipy documentation, it is recommended to ``` import numpy as np import matplotlib as mpl import matplotlib.pyplot as plt ``` but a similar abbreviation is not offered for the Scipy package. In this question, sp is recommended, but the link to the Scipy docs doesn't actually specify sp over sc.", "response":"EDIT: As of version 1.9, scipy has changed so that import scipy now allows access to the modules but only actually loads them when they are used. Please refer to the answer from tupui below (and upvote that answer to make it the top one!). For historical purposes, here is my answer from 2016: The \"official\" answer, according to the Scipy documentation, is that there is really no reason to ever ``` import scipy ``` since all of the interesting functions in Scipy are actually located in the submodules, which are not automatically imported. 
Therefore, the recommended method is to use ``` from scipy import fftpack from scipy import integrate ``` then, functions can be called with ``` fftpack.fft() ``` Personally, I always use ``` import scipy.fftpack ``` and live with the slightly longer function call ``` scipy.fftpack.fft(data) ``` This way I know where the functions are coming from.", "best_answers_score":0.7922, "library_name":"scipy", "question_url":"https:\/\/stackoverflow.com\/questions\/36014733\/official-abbreviation-for-import-scipy-as-sp-sc", "best_answers_votes":56, "question_length":508, "response_length":893 }, { "question":"Random Number from Histogram Suppose I create a histogram using scipy\/numpy, so I have two arrays: one for the bin counts, and one for the bin edges. If I use the histogram to represent a probability distribution function, how can I efficiently generate random numbers from that distribution?", "response":"It's probably what np.random.choice does in @Ophion's answer, but you can construct a normalized cumulative density function, then choose based on a uniform random number: ``` from __future__ import division import numpy as np import matplotlib.pyplot as plt data = np.random.normal(size=1000) hist, bins = np.histogram(data, bins=50) bin_midpoints = bins[:-1] + np.diff(bins)\/2 cdf = np.cumsum(hist) cdf = cdf \/ cdf[-1] values = np.random.rand(10000) value_bins = np.searchsorted(cdf, values) random_from_cdf = bin_midpoints[value_bins] plt.subplot(121) plt.hist(data, 50) plt.subplot(122) plt.hist(random_from_cdf, 50) plt.show() ``` A 2D case can be done as follows: ``` data = np.column_stack((np.random.normal(scale=10, size=1000), np.random.normal(scale=20, size=1000))) x, y = data.T hist, x_bins, y_bins = np.histogram2d(x, y, bins=(50, 50)) x_bin_midpoints = x_bins[:-1] + np.diff(x_bins)\/2 y_bin_midpoints = y_bins[:-1] + np.diff(y_bins)\/2 cdf = np.cumsum(hist.ravel()) cdf = cdf \/ cdf[-1] values = np.random.rand(10000) value_bins = np.searchsorted(cdf, values) 
x_idx, y_idx = np.unravel_index(value_bins, (len(x_bin_midpoints), len(y_bin_midpoints))) random_from_cdf = np.column_stack((x_bin_midpoints[x_idx], y_bin_midpoints[y_idx])) new_x, new_y = random_from_cdf.T plt.subplot(121, aspect='equal') plt.hist2d(x, y, bins=(50, 50)) plt.subplot(122, aspect='equal') plt.hist2d(new_x, new_y, bins=(50, 50)) plt.show() ```", "best_answers_score":0.7918, "library_name":"scipy", "question_url":"https:\/\/stackoverflow.com\/questions\/17821458\/random-number-from-histogram", "best_answers_votes":50, "question_length":292, "response_length":1432 }, { "question":"Installing SciPy and NumPy using pip I'm trying to create required libraries in a package I'm distributing. It requires both the SciPy and NumPy libraries. While developing, I installed both using ``` apt-get install scipy ``` which installed SciPy 0.9.0 and NumPy 1.5.1, and it worked fine. I would like to do the same using pip install - in order to be able to specify dependencies in a setup.py of my own package. The problem is, when I try: ``` pip install 'numpy==1.5.1' ``` it works fine. But then ``` pip install 'scipy==0.9.0' ``` fails miserably, with ``` raise self.notfounderror(self.notfounderror.__doc__) numpy.distutils.system_info.BlasNotFoundError: Blas (http:\/\/www.netlib.org\/blas\/) libraries not found. Directories to search for the libraries can be specified in the numpy\/distutils\/site.cfg file (section [blas]) or by setting the BLAS environment variable. ``` How do I get it to work?", "response":"This worked for me on Ubuntu 14.04: ``` sudo apt-get install libblas-dev liblapack-dev libatlas-base-dev gfortran pip install scipy ```", "best_answers_score":0.7917, "library_name":"scipy", "question_url":"https:\/\/stackoverflow.com\/questions\/11114225\/installing-scipy-and-numpy-using-pip", "best_answers_votes":333, "question_length":905, "response_length":135 }, { "question":"How to get the indices list of all NaN value in numpy array? 
Say now I have a numpy array which is defined as, ``` [[1,2,3,4], [2,3,NaN,5], [NaN,5,2,3]] ``` Now I want to have a list that contains all the indices of the missing values, which is [(1,2),(2,0)] at this case. Is there any way I can do that?", "response":"np.isnan combined with np.argwhere ``` x = np.array([[1,2,3,4], [2,3,np.nan,5], [np.nan,5,2,3]]) np.argwhere(np.isnan(x)) ``` output: ``` array([[1, 2], [2, 0]]) ```", "best_answers_score":0.7907, "library_name":"scipy", "question_url":"https:\/\/stackoverflow.com\/questions\/37754948\/how-to-get-the-indices-list-of-all-nan-value-in-numpy-array", "best_answers_votes":234, "question_length":304, "response_length":165 }, { "question":"how to zscore normalize pandas column with nans? I have a pandas dataframe with a column of real values that I want to zscore normalize: ``` >> a array([ nan, 0.0767, 0.4383, 0.7866, 0.8091, 0.1954, 0.6307, 0.6599, 0.1065, 0.0508]) >> df = pandas.DataFrame({\"a\": a}) ``` The problem is that a single nan value makes all the array nan: ``` >> from scipy.stats import zscore >> zscore(df[\"a\"]) array([ nan, nan, nan, nan, nan, nan, nan, nan, nan, nan]) ``` What's the correct way to apply zscore (or an equivalent function not from scipy) to a column of a pandas dataframe and have it ignore the nan values? I'd like it to be same dimension as original column with np.nan for values that can't be normalized edit: maybe the best solution is to use scipy.stats.nanmean and scipy.stats.nanstd? 
I don't see why the degrees of freedom need to be changed for std for this purpose: ``` zscore = lambda x: (x - scipy.stats.nanmean(x)) \/ scipy.stats.nanstd(x) ```", "response":"Well, the pandas versions of mean and std will handle the NaN, so you could just compute that way (to get the same as scipy zscore I think you need to use ddof=0 on std): ``` df['zscore'] = (df.a - df.a.mean())\/df.a.std(ddof=0) print df a zscore 0 NaN NaN 1 0.0767 -1.148329 2 0.4383 0.071478 3 0.7866 1.246419 4 0.8091 1.322320 5 0.1954 -0.747912 6 0.6307 0.720512 7 0.6599 0.819014 8 0.1065 -1.047803 9 0.0508 -1.235699 ```", "best_answers_score":0.7893, "library_name":"scipy", "question_url":"https:\/\/stackoverflow.com\/questions\/23451244\/how-to-zscore-normalize-pandas-column-with-nans", "best_answers_votes":28, "question_length":953, "response_length":422 }, { "question":"Convert an image RGB->Lab with python What is the preferred way of doing the conversion using PIL\/Numpy\/SciPy today?", "response":"Since 2010, when the linked question was asked, the corresponding code has moved from scipy to a separate toolkit: http:\/\/scikit-image.org\/ So here's the code I was actually looking for: ``` from skimage import io, color rgb = io.imread(filename) lab = color.rgb2lab(rgb) ``` It should also be noted that, due to the nature of Lab, the sRGB->Lab conversion depends on an additional parameter: the whitepoint, e.g.: \u2022 Photoshop uses a white point called D50 (which is a standard for ICC) \u2022 OpenCV and skimage use D65 (which is a standard for sRGB). \u2022 the default Matlab implementation uses D50 (it is capable of using others). This nice FAQ explains it this way: You should use D65 unless you have a good reason to use something else. The print industry commonly uses D50 and photography commonly uses D55. These represent compromises between the conditions of indoor (tungsten) and daylight viewing.
You can tell which whitepoint you're dealing with by converting RGB (0,0,255) to Lab: \u2022 D50 would give you (30, 68, -112) \u2022 D55 (30, 73, -110) \u2022 D65 (32, 79, -108) The numbers after 'D' correspond to the color temperature of the white point used internally: D50 = 5003 K (yellowish), D65 = 6504 K (blueish). I'm grateful to Alex and Roman for their answers because they pointed me in the right direction.", "best_answers_score":0.789, "library_name":"scipy", "question_url":"https:\/\/stackoverflow.com\/questions\/13405956\/convert-an-image-rgb-lab-with-python", "best_answers_votes":82, "question_length":116, "response_length":1270 }, { "question":"How to specify a distance function for clustering? I'd like to cluster points according to a custom distance and, strangely, it seems that neither scipy nor sklearn clustering methods allow the specification of a distance function. For instance, in sklearn.cluster.AgglomerativeClustering, the only thing I may do is enter an affinity matrix (which will be very memory-heavy). In order to build this very matrix, it is recommended to use sklearn.neighbors.kneighbors_graph, but I don't understand how I can specify a distance function between two points there either. Could someone enlighten me?", "response":"All of the scipy hierarchical clustering routines will accept a custom distance function that accepts two 1D vectors specifying a pair of points and returns a scalar.
For example, using fclusterdata: ``` import numpy as np from scipy.cluster.hierarchy import fclusterdata # a custom function that just computes Euclidean distance def mydist(p1, p2): diff = p1 - p2 return np.vdot(diff, diff) ** 0.5 X = np.random.randn(100, 2) fclust1 = fclusterdata(X, 1.0, metric=mydist) fclust2 = fclusterdata(X, 1.0, metric='euclidean') print(np.allclose(fclust1, fclust2)) # True ``` Valid inputs for the metric= kwarg are the same as for scipy.spatial.distance.pdist.", "best_answers_score":0.7831, "library_name":"scipy", "question_url":"https:\/\/stackoverflow.com\/questions\/33721996\/how-to-specify-a-distance-function-for-clustering", "best_answers_votes":28, "question_length":584, "response_length":656 }, { "question":"Python baseline correction library I am currently working with some Raman Spectra data, and I am trying to correct my data caused by florescence skewing. Take a look at the graph below: I am pretty close to achieving what I want. As you can see, I am trying to fit a polynomial in all my data whereas I should really just be fitting a polynomial at the local minimas. Ideally I would want to have a polynomial fitting which when subtracted from my original data would result in something like this: Are there any built in libs that does this already? If not, any simple algorithm one can recommend for me?", "response":"I found an answer to my question, just sharing for everyone who stumbles upon this. There is an algorithm called \"Asymmetric Least Squares Smoothing\" by P. Eilers and H. Boelens in 2005. The paper is free and you can find it on google. 
``` import numpy as np from scipy import sparse from scipy.sparse.linalg import spsolve def baseline_als(y, lam, p, niter=10): L = len(y) D = sparse.csc_matrix(np.diff(np.eye(L), 2)) w = np.ones(L) for i in range(niter): W = sparse.spdiags(w, 0, L, L) Z = W + lam * D.dot(D.transpose()) z = spsolve(Z, w*y) w = p * (y > z) + (1-p) * (y < z) return z ```", "best_answers_score":0.782, "library_name":"scipy", "question_url":"https:\/\/stackoverflow.com\/questions\/29156532\/python-baseline-correction-library", "best_answers_votes":43, "question_length":605, "response_length":506 }, { "question":"How to make scipy.interpolate give an extrapolated result beyond the input range? I'm trying to port a program which uses a hand-rolled interpolator (developed by a mathematician colleague) over to use the interpolators provided by scipy. I'd like to use or wrap the scipy interpolator so that its behavior is as close as possible to the old interpolator. A key difference between the two functions is that in our original interpolator - if the input value is above or below the input range, our original interpolator will extrapolate the result. If you try this with the scipy interpolator it raises a ValueError. Consider this program as an example: ``` import numpy as np from scipy import interpolate x = np.arange(0,10) y = np.exp(-x\/3.0) f = interpolate.interp1d(x, y) print f(9) print f(11) # Causes ValueError, because it's greater than max(x) ``` Is there a sensible way to make it so that instead of crashing, the final line will simply do a linear extrapolation, continuing the gradients defined by the first and last two points to infinity. Note that in the real software I'm not actually using the exp function - that's here for illustration only!", "response":"As of SciPy version 0.17.0, there is a new option for scipy.interpolate.interp1d that allows extrapolation. Simply set fill_value='extrapolate' in the call.
Modifying your code in this way gives: ``` import numpy as np from scipy import interpolate x = np.arange(0,10) y = np.exp(-x\/3.0) f = interpolate.interp1d(x, y, fill_value='extrapolate') print f(9) print f(11) ``` and the output is: ``` 0.0497870683679 0.010394302658 ``` Unfortunately, as of 2024, there is a warning in the docs for interp1d: This class is considered legacy and will no longer receive updates. This could also mean it will be removed in future SciPy versions.", "best_answers_score":0.7767, "library_name":"scipy", "question_url":"https:\/\/stackoverflow.com\/questions\/2745329\/how-to-make-scipy-interpolate-give-an-extrapolated-result-beyond-the-input-range", "best_answers_votes":104, "question_length":1158, "response_length":635 }, { "question":"Scipy.optimize: how to restrict argument values I'm trying to use scipy.optimize functions to find a global minimum of a complicated function with several arguments. scipy.optimize.minimize seems to do the job best of all, namely, the 'Nelder-Mead' method. However, it tends to go to the areas out of arguments' domain (to assign negative values to arguments that can only be positive) and thus returns an error in such cases. Is there a way to restrict the arguments' bounds within the scipy.optimize.minimize function itself? Or maybe within other scipy.optimize functions? I've found the following advice: When the parameters fall out of the admissible range, return a wildly huge number (far from the data to be fitted). This will (hopefully) penalize this choice of parameters so much that curve_fit will settle on some other admissible set of parameters as optimal. given in this previous answer, but the procedure will take a lot of computational time in my case.", "response":"The minimize function has a bounds parameter which can be used to restrict the bounds for each variable when using the L-BFGS-B, TNC, COBYLA or SLSQP methods. 
For example, ``` import scipy.optimize as optimize fun = lambda x: (x[0] - 1)**2 + (x[1] - 2.5)**2 res = optimize.minimize(fun, (2, 0), method='TNC', tol=1e-10) print(res.x) # [ 1. 2.49999999] bnds = ((0.25, 0.75), (0, 2.0)) res = optimize.minimize(fun, (2, 0), method='TNC', bounds=bnds, tol=1e-10) print(res.x) # [ 0.75 2. ] ```", "best_answers_score":0.7763, "library_name":"scipy", "question_url":"https:\/\/stackoverflow.com\/questions\/19244527\/scipy-optimize-how-to-restrict-argument-values", "best_answers_votes":48, "question_length":970, "response_length":489 }, { "question":"How to specify upper and lower limits when using numpy.random.normal I want to be able to pick values from a normal distribution that only ever fall between 0 and 1. In some cases I want to be able to basically just return a completely random distribution, and in other cases I want to return values that fall in the shape of a gaussian. At the moment I am using the following function: ``` def blockedgauss(mu,sigma): while True: numb = random.gauss(mu,sigma) if (numb > 0 and numb < 1): break return numb ``` It picks a value from a normal distribution, then discards it if it falls outside of the range 0 to 1, but I feel like there must be a better way of doing this.", "response":"It sounds like you want a truncated normal distribution. 
Using scipy, you could use scipy.stats.truncnorm to generate random variates from such a distribution: ``` import matplotlib.pyplot as plt import scipy.stats as stats lower, upper = 3.5, 6 mu, sigma = 5, 0.7 X = stats.truncnorm( (lower - mu) \/ sigma, (upper - mu) \/ sigma, loc=mu, scale=sigma) N = stats.norm(loc=mu, scale=sigma) fig, ax = plt.subplots(2, sharex=True) ax[0].hist(X.rvs(10000), density=True) ax[1].hist(N.rvs(10000), density=True) plt.show() ``` The top figure shows the truncated normal distribution, the lower figure shows the normal distribution with the same mean mu and standard deviation sigma.", "best_answers_score":0.7758, "library_name":"scipy", "question_url":"https:\/\/stackoverflow.com\/questions\/18441779\/how-to-specify-upper-and-lower-limits-when-using-numpy-random-normal", "best_answers_votes":72, "question_length":671, "response_length":671 }, { "question":"Python: Calculate Voronoi Tesselation from Scipy's Delaunay Triangulation in 3D I have about 50,000 data points in 3D on which I have run scipy.spatial.Delaunay from the new scipy (I'm using 0.10) which gives me a very useful triangulation. Based on: http:\/\/en.wikipedia.org\/wiki\/Delaunay_triangulation (section \"Relationship with the Voronoi diagram\") ...I was wondering if there is an easy way to get to the \"dual graph\" of this triangulation, which is the Voronoi Tesselation. Any clues? My searching around on this seems to show no pre-built scipy functions, which I find almost strange! Thanks, Edward", "response":"The adjacency information can be found in the neighbors attribute of the Delaunay object. Unfortunately, the code does not expose the circumcenters to the user at the moment, so you'll have to recompute those yourself. Also, the Voronoi edges that extend to infinity are not directly obtained in this way. It's still probably possible, but needs some more thinking.
``` import numpy as np from scipy.spatial import Delaunay points = np.random.rand(30, 2) tri = Delaunay(points) p = tri.points[tri.vertices] # Triangle vertices A = p[:,0,:].T B = p[:,1,:].T C = p[:,2,:].T # See http:\/\/en.wikipedia.org\/wiki\/Circumscribed_circle#Circumscribed_circles_of_triangles # The following is just a direct transcription of the formula there a = A - C b = B - C def dot2(u, v): return u[0]*v[0] + u[1]*v[1] def cross2(u, v, w): \"\"\"u x (v x w)\"\"\" return dot2(u, w)*v - dot2(u, v)*w def ncross2(u, v): \"\"\"|| u x v ||^2\"\"\" return sq2(u)*sq2(v) - dot2(u, v)**2 def sq2(u): return dot2(u, u) cc = cross2(sq2(a) * b - sq2(b) * a, a, b) \/ (2*ncross2(a, b)) + C # Grab the Voronoi edges vc = cc[:,tri.neighbors] vc[:,tri.neighbors == -1] = np.nan # edges at infinity, plotting those would need more work... lines = [] lines.extend(zip(cc.T, vc[:,:,0].T)) lines.extend(zip(cc.T, vc[:,:,1].T)) lines.extend(zip(cc.T, vc[:,:,2].T)) # Plot it import matplotlib.pyplot as plt from matplotlib.collections import LineCollection lines = LineCollection(lines, edgecolor='k') plt.hold(1) plt.plot(points[:,0], points[:,1], '.') plt.plot(cc[0], cc[1], '*') plt.gca().add_collection(lines) plt.axis('equal') plt.xlim(-0.1, 1.1) plt.ylim(-0.1, 1.1) plt.show() ```", "best_answers_score":0.7756, "library_name":"scipy", "question_url":"https:\/\/stackoverflow.com\/questions\/10650645\/python-calculate-voronoi-tesselation-from-scipys-delaunay-triangulation-in-3d", "best_answers_votes":20, "question_length":609, "response_length":1631 }, { "question":"Efficiently create sparse pivot tables in pandas? I'm working turning a list of records with two columns (A and B) into a matrix representation. I have been using the pivot function within pandas, but the result ends up being fairly large. Does pandas support pivoting into a sparse format? I know I can pivot it and then turn it into some kind of sparse representation, but isn't as elegant as I would like. 
My end goal is to use it as the input for a predictive model. Alternatively, is there some kind of sparse pivot capability outside of pandas? edit: here is an example of a non-sparse pivot ``` import pandas as pd frame=pd.DataFrame() frame['person']=['me','you','him','you','him','me'] frame['thing']=['a','a','b','c','d','d'] frame['count']=[1,1,1,1,1,1] frame person thing count 0 me a 1 1 you a 1 2 him b 1 3 you c 1 4 him d 1 5 me d 1 frame.pivot('person','thing') count thing a b c d person him NaN 1 NaN 1 me 1 NaN NaN 1 you 1 NaN 1 NaN ``` This creates a matrix that could contain all possible combinations of persons and things, but it is not sparse. http:\/\/docs.scipy.org\/doc\/scipy\/reference\/sparse.html Sparse matrices take up less space because they can imply things like NaN or 0. If I have a very large data set, this pivoting function can generate a matrix that should be sparse due to the large number of NaNs or 0s. I was hoping that I could save a lot of space\/memory by generating something that was sparse right off the bat rather than creating a dense matrix and then converting it to sparse.", "response":"Here is a method that creates a sparse scipy matrix based on data and indices of person and thing. person_u and thing_u are lists representing the unique entries for your rows and columns of pivot you want to create. Note: this assumes that your count column already has the value you want in it. 
``` import numpy as np from scipy.sparse import csr_matrix person_u = list(np.sort(frame.person.unique())) thing_u = list(np.sort(frame.thing.unique())) data = frame['count'].tolist() row = frame.person.astype('category', categories=person_u).cat.codes col = frame.thing.astype('category', categories=thing_u).cat.codes sparse_matrix = csr_matrix((data, (row, col)), shape=(len(person_u), len(thing_u))) >>> sparse_matrix ' with 6 stored elements in Compressed Sparse Row format> >>> sparse_matrix.todense() matrix([[0, 1, 0, 1], [1, 0, 0, 1], [1, 0, 1, 0]]) ``` Based on your original question, the scipy sparse matrix should be sufficient for your needs, but should you wish to have a sparse dataframe you can do the following: ``` dfs=pd.SparseDataFrame([ pd.SparseSeries(sparse_matrix[i].toarray().ravel(), fill_value=0) for i in np.arange(sparse_matrix.shape[0]) ], index=person_u, columns=thing_u, default_fill_value=0) >>> dfs a b c d him 0 1 0 1 me 1 0 0 1 you 1 0 1 0 >>> type(dfs) pandas.sparse.frame.SparseDataFrame ```", "best_answers_score":0.7742, "library_name":"scipy", "question_url":"https:\/\/stackoverflow.com\/questions\/31661604\/efficiently-create-sparse-pivot-tables-in-pandas", "best_answers_votes":35, "question_length":1521, "response_length":1302 }, { "question":"Pythonic way to create a numpy array from a list of numpy arrays I generate a list of one dimensional numpy arrays in a loop and later convert this list to a 2d numpy array. I would've preallocated a 2d numpy array if I knew the number of items ahead of time, but I don't, therefore I put everything in a list.
The mock-up is below: ``` >>> list_of_arrays = map(lambda x: x*ones(2), range(5)) >>> list_of_arrays [array([ 0., 0.]), array([ 1., 1.]), array([ 2., 2.]), array([ 3., 3.]), array([ 4., 4.])] >>> arr = array(list_of_arrays) >>> arr array([[ 0., 0.], [ 1., 1.], [ 2., 2.], [ 3., 3.], [ 4., 4.]]) ``` My question is the following: Is there a better way (performance-wise) to go about the task of collecting sequential numerical data (in my case numpy arrays) than putting them in a list and then making a numpy.array out of it (I am creating a new object and copying the data)? Is there an \"expandable\" matrix data structure available in a well-tested module? A typical size of my 2d matrix would be between 100x10 and 5000x10 floats. EDIT: In this example I'm using map, but in my actual application I have a for loop", "response":"Suppose you know that the final array arr will never be larger than 5000x10. Then you could pre-allocate an array of maximum size, populate it with data as you go through the loop, and then use arr.resize to cut it down to the discovered size after exiting the loop. The tests below suggest doing so will be slightly faster than constructing intermediate Python lists no matter what the ultimate size of the array is. Also, arr.resize de-allocates the unused memory, so the final (though maybe not the intermediate) memory footprint is smaller than what is used by python_lists_to_array.
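In its minimal form, the pre-allocate-and-resize pattern looks like this (the loop length of 137 and the row values are just made up for illustration):

```python
import numpy as np

MAX_ROWS, M = 5000, 10           # known upper bound on the number of rows
arr = np.empty((MAX_ROWS, M))    # pre-allocate once, before the loop

n = 0
for x in range(137):             # loop length only discovered at run time
    arr[n] = x                   # fill row n in place (scalar broadcasts across the row)
    n += 1

arr.resize((n, M))               # shrink in place to the discovered size
print(arr.shape)                 # (137, 10)
```

Note that ndarray.resize (the method, unlike the np.resize function) modifies the array in place and can fail if other references to the array exist.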
This shows numpy_all_the_way is faster: ``` % python -mtimeit -s\"import test\" \"test.numpy_all_the_way(100)\" 100 loops, best of 3: 1.78 msec per loop % python -mtimeit -s\"import test\" \"test.numpy_all_the_way(1000)\" 100 loops, best of 3: 18.1 msec per loop % python -mtimeit -s\"import test\" \"test.numpy_all_the_way(5000)\" 10 loops, best of 3: 90.4 msec per loop % python -mtimeit -s\"import test\" \"test.python_lists_to_array(100)\" 1000 loops, best of 3: 1.97 msec per loop % python -mtimeit -s\"import test\" \"test.python_lists_to_array(1000)\" 10 loops, best of 3: 20.3 msec per loop % python -mtimeit -s\"import test\" \"test.python_lists_to_array(5000)\" 10 loops, best of 3: 101 msec per loop ``` This shows numpy_all_the_way uses less memory: ``` % test.py Initial memory usage: 19788 After python_lists_to_array: 20976 After numpy_all_the_way: 20348 ``` test.py: ``` import numpy as np import os def memory_usage(): pid = os.getpid() return next(line for line in open('\/proc\/%s\/status' % pid).read().splitlines() if line.startswith('VmSize')).split()[-2] N, M = 5000, 10 def python_lists_to_array(k): list_of_arrays = list(map(lambda x: x * np.ones(M), range(k))) arr = np.array(list_of_arrays) return arr def numpy_all_the_way(k): arr = np.empty((N, M)) for x in range(k): arr[x] = x * np.ones(M) arr.resize((k, M)) return arr if __name__ == '__main__': print('Initial memory usage: %s' % memory_usage()) arr = python_lists_to_array(5000) print('After python_lists_to_array: %s' % memory_usage()) arr = numpy_all_the_way(5000) print('After numpy_all_the_way: %s' % memory_usage()) ```", "best_answers_score":0.7727, "library_name":"scipy", "question_url":"https:\/\/stackoverflow.com\/questions\/2106287\/pythonic-way-to-create-a-numpy-array-from-a-list-of-numpy-arrays", "best_answers_votes":21, "question_length":1122, "response_length":2169 }, { "question":"Confusion between numpy, scipy, matplotlib and pylab Numpy, scipy, matplotlib, and pylab are common terms among 
those who use Python for scientific computation. I have just learned a bit about pylab, and I got confused. Whenever I want to import numpy, I can always do: ``` import numpy as np ``` I assumed that once I do ``` from pylab import * ``` numpy will be imported as well (with the np alias). So basically the second one does more things than the first one. There are a few things I want to ask: Is it right that pylab is just a wrapper for numpy, scipy and matplotlib? As np is the numpy alias in pylab, what is the scipy and matplotlib alias in pylab? (As far as I know, plt is an alias of matplotlib.pyplot, but I don't know the alias for matplotlib itself.)", "response":"No, pylab is part of matplotlib (in matplotlib.pylab) and tries to give you a MATLAB-like environment. matplotlib has a number of dependencies, among them numpy, which it imports under the common alias np. scipy is not a dependency of matplotlib. If you run ipython --pylab, an automatic import will put all symbols from matplotlib.pylab into global scope. As you wrote, numpy gets imported under the np alias. Symbols from matplotlib are available under the mpl alias.", "best_answers_score":0.7701, "library_name":"scipy", "question_url":"https:\/\/stackoverflow.com\/questions\/12987624\/confusion-between-numpy-scipy-matplotlib-and-pylab", "best_answers_votes":136, "question_length":774, "response_length":468 }, { "question":"Why does scipy.optimize.curve_fit not fit to the data? I've been trying to fit an exponential to some data for a while using scipy.optimize.curve_fit, but I'm having real difficulty. I really can't see any reason why this wouldn't work, but it just produces a straight line, no idea why!
Any help would be much appreciated ``` from __future__ import division import numpy from scipy.optimize import curve_fit import matplotlib.pyplot as pyplot def func(x,a,b,c): return a*numpy.exp(-b*x)-c yData = numpy.load('yData.npy') xData = numpy.load('xData.npy') trialX = numpy.linspace(xData[0],xData[-1],1000) # Fit a polynomial fitted = numpy.polyfit(xData, yData, 10)[::-1] y = numpy.zeros(len(trailX)) for i in range(len(fitted)): y += fitted[i]*trialX**i # Fit an exponential popt, pcov = curve_fit(func, xData, yData) yEXP = func(trialX, *popt) pyplot.figure() pyplot.plot(xData, yData, label='Data', marker='o') pyplot.plot(trialX, yEXP, 'r-',ls='--', label=\"Exp Fit\") pyplot.plot(trialX, y, label = '10 Deg Poly') pyplot.legend() pyplot.show() ``` ``` xData = [1e-06, 2e-06, 3e-06, 4e-06, 5e-06, 6e-06, 7e-06, 8e-06, 9e-06, 1e-05, 2e-05, 3e-05, 4e-05, 5e-05, 6e-05, 7e-05, 8e-05, 9e-05, 0.0001, 0.0002, 0.0003, 0.0004, 0.0005, 0.0006, 0.0007, 0.0008, 0.0009, 0.001, 0.002, 0.003, 0.004, 0.005, 0.006, 0.007, 0.008, 0.009, 0.01] yData = [6.37420666067e-09, 1.13082012115e-08, 1.52835756975e-08, 2.19214493931e-08, 2.71258852882e-08, 3.38556130078e-08, 3.55765277358e-08, 4.13818145846e-08, 4.72543475372e-08, 4.85834751151e-08, 9.53876562077e-08, 1.45110636413e-07, 1.83066627931e-07, 2.10138415308e-07, 2.43503982686e-07, 2.72107045549e-07, 3.02911771395e-07, 3.26499455951e-07, 3.48319349445e-07, 5.13187669283e-07, 5.98480176303e-07, 6.57028222701e-07, 6.98347073045e-07, 7.28699930335e-07, 7.50686502279e-07, 7.7015576866e-07, 7.87147246927e-07, 7.99607141001e-07, 8.61398763228e-07, 8.84272900407e-07, 8.96463883243e-07, 9.04105135329e-07, 9.08443443149e-07, 9.12391264185e-07, 9.150842683e-07, 9.16878548643e-07, 9.18389990067e-07] ```", "response":"Numerical algorithms tend to work better when not fed extremely small (or large) numbers. In this case, the graph shows your data has extremely small x and y values. 
If you scale them, the fit is remarkably better: ``` xData = np.load('xData.npy')*10**5 yData = np.load('yData.npy')*10**5 ``` ``` from __future__ import division import os os.chdir(os.path.expanduser('~\/tmp')) import numpy as np import scipy.optimize as optimize import matplotlib.pyplot as plt def func(x,a,b,c): return a*np.exp(-b*x)-c xData = np.load('xData.npy')*10**5 yData = np.load('yData.npy')*10**5 print(xData.min(), xData.max()) print(yData.min(), yData.max()) trialX = np.linspace(xData[0], xData[-1], 1000) # Fit a polynomial fitted = np.polyfit(xData, yData, 10)[::-1] y = np.zeros(len(trialX)) for i in range(len(fitted)): y += fitted[i]*trialX**i # Fit an exponential popt, pcov = optimize.curve_fit(func, xData, yData) print(popt) yEXP = func(trialX, *popt) plt.figure() plt.plot(xData, yData, label='Data', marker='o') plt.plot(trialX, yEXP, 'r-',ls='--', label=\"Exp Fit\") plt.plot(trialX, y, label = '10 Deg Poly') plt.legend() plt.show() ``` Note that after rescaling xData and yData, the parameters returned by curve_fit must also be rescaled. In this case, a, b and c each must be divided by 10**5 to obtain fitted parameters for the original data. One objection you might have to the above is that the scaling has to be chosen rather \"carefully\". (Read: Not every reasonable choice of scale works!) You can improve the robustness of curve_fit by providing a reasonable initial guess for the parameters. Usually you have some a priori knowledge about the data which can motivate ballpark \/ back-of-the-envelope type guesses for reasonable parameter values.
For example, calling curve_fit with ``` guess = (-1, 0.1, 0) popt, pcov = optimize.curve_fit(func, xData, yData, guess) ``` helps improve the range of scales on which curve_fit succeeds in this case.", "best_answers_score":0.7694, "library_name":"scipy", "question_url":"https:\/\/stackoverflow.com\/questions\/15624070\/why-does-scipy-optimize-curve-fit-not-fit-to-the-data", "best_answers_votes":45, "question_length":2036, "response_length":1945 }, { "question":"Fitting only one parameter of a function with many parameters in python In python I have a function which has many parameters. I want to fit this function to a data set, but using only one parameter, the rest of the parameters I want to supply on on my own. Here is an example: ``` def func(x,a,b): return a*x*x + b for b in xrange(10): popt,pcov = curve_fit(func,x1,x2) ``` In this I want that the fitting is done only for a and the parameter b takes the value of the loop variable. How can this be done?", "response":"You can wrap func in a lambda, as follows: ``` def func(x, a, b): return a*x*x + b for b in xrange(10): popt, pcov = curve_fit(lambda x, a: func(x, a, b), x1, x2) ``` A lambda is an anonymous function, which in Python can only be used for simple one line functions. Basically, it's normally used to reduce the amount of code when don't need to assign a name to the function. A more detailed description is given in the official documentation: http:\/\/docs.python.org\/tutorial\/controlflow.html#lambda-forms In this case, a lambda is used to fix one of the arguments of func. The newly created function accepts only two arguments: x and a, whereas b is fixed to the value taken from the local b variable. 
This new function is then passed into curve_fit as an argument.", "best_answers_score":0.7691, "library_name":"scipy", "question_url":"https:\/\/stackoverflow.com\/questions\/12208634\/fitting-only-one-parameter-of-a-function-with-many-parameters-in-python", "best_answers_votes":70, "question_length":505, "response_length":765 }, { "question":"ndimage missing from scipy I'm trying to use the ndimage library from scipy, but its apparently missing. I have run the tests from both numpy and scipy and the results were OK. I am using numpy 1.6.1 and scipy 0.10.0 installed from the official packages on sourceforge. Running ``` import numpy import scipy import pprint print(scipy.version.version) print(numpy.version.version) img = scipy.ndimage.imread(\"\") ``` gives ``` 0.10.0 1.6.1 Traceback (most recent call last): File \"extract.py\", line 8, in img = scipy.ndimage.imread(\"\") AttributeError: 'module' object has no attribute 'ndimage' ```", "response":"You have to import the module: ``` import scipy.ndimage ```", "best_answers_score":0.7658, "library_name":"scipy", "question_url":"https:\/\/stackoverflow.com\/questions\/8200348\/ndimage-missing-from-scipy", "best_answers_votes":65, "question_length":597, "response_length":59 }, { "question":"python numpy euclidean distance calculation between matrices of row vectors I am new to Numpy and I would like to ask you how to calculate euclidean distance between points stored in a vector. Let's assume that we have a numpy.array each row is a vector and a single numpy.array. I would like to know if it is possible to calculate the euclidean distance between all the points and this single point and store them in one numpy.array. Here is an interface: ``` points #2d list of row-vectors singlePoint #one row-vector listOfDistances= procedure( points,singlePoint) ``` Can we have something like this? Or is it possible to have one command to have the single point as a list of other points and at the end we get a matrix of distances? 
Thanks", "response":"While you can use vectorize, @Karl's approach will be rather slow with numpy arrays. The easier approach is to just do np.hypot(*(points - single_point).T). (The transpose assumes that points is an Nx2 array, rather than a 2xN. If it's 2xN, you don't need the .T.) However, this is a bit unreadable, so you can write it out more explicitly like this (using some canned example data...): ``` import numpy as np single_point = [3, 4] points = np.arange(20).reshape((10,2)) dist = (points - single_point)**2 dist = np.sum(dist, axis=1) dist = np.sqrt(dist) ```", "best_answers_score":0.7647, "library_name":"scipy", "question_url":"https:\/\/stackoverflow.com\/questions\/4370975\/python-numpy-euclidean-distance-calculation-between-matrices-of-row-vectors", "best_answers_votes":32, "question_length":745, "response_length":549 }, { "question":"How to transform numpy.matrix or array to scipy sparse matrix For a SciPy sparse matrix, one can use todense() or toarray() to transform it to a NumPy matrix or array. What are the functions to do the inverse? I searched, but got no idea what keywords should be the right hit.", "response":"You can pass a numpy array or matrix as an argument when initializing a sparse matrix. For a CSR matrix, for example, you can do the following. ``` >>> import numpy as np >>> from scipy import sparse >>> A = np.array([[1,2,0],[0,0,3],[1,0,4]]) >>> B = np.matrix([[1,2,0],[0,0,3],[1,0,4]]) >>> A array([[1, 2, 0], [0, 0, 3], [1, 0, 4]]) >>> sA = sparse.csr_matrix(A) # Here's the initialization of the sparse matrix.
>>> sB = sparse.csr_matrix(B) >>> sA ' with 5 stored elements in Compressed Sparse Row format> >>> print sA (0, 0) 1 (0, 1) 2 (1, 2) 3 (2, 0) 1 (2, 2) 4 ```", "best_answers_score":0.763, "library_name":"scipy", "question_url":"https:\/\/stackoverflow.com\/questions\/7922487\/how-to-transform-numpy-matrix-or-array-to-scipy-sparse-matrix", "best_answers_votes":162, "question_length":269, "response_length":572 }, { "question":"Fastest pairwise distance metric in python I have a 1D array of numbers, and want to calculate all pairwise euclidean distances. I have a method (thanks to SO) of doing this with broadcasting, but it's inefficient because it calculates each distance twice. And it doesn't scale well. Here's an example that gives me what I want with an array of 1000 numbers. ``` import numpy as np import random r = np.array([random.randrange(1, 1000) for _ in range(0, 1000)]) dists = np.abs(r - r[:, None]) ``` What's the fastest implementation in scipy\/numpy\/scikit-learn that I can use to do this, given that it has to scale to situations where the 1D array has >10k values? Note: the matrix is symmetric, so I'm guessing that it's possible to get at least a 2x speedup by addressing that, I just don't know how.", "response":"Neither of the other answers quite answered the question - one was in Cython, one was slower. But both provided very useful hints. Following up on them suggests that scipy.spatial.distance.pdist is the way to go.
Here's some code: ``` import numpy as np import random import sklearn.metrics.pairwise import scipy.spatial.distance r = np.array([random.randrange(1, 1000) for _ in range(0, 1000)]) c = r[:, None] def option1(r): dists = np.abs(r - r[:, None]) def option2(r): dists = scipy.spatial.distance.pdist(r, 'cityblock') def option3(r): dists = sklearn.metrics.pairwise.manhattan_distances(r) ``` Timing with IPython: ``` In [36]: timeit option1(r) 100 loops, best of 3: 5.31 ms per loop In [37]: timeit option2(c) 1000 loops, best of 3: 1.84 ms per loop In [38]: timeit option3(c) 100 loops, best of 3: 11.5 ms per loop ``` I didn't try the Cython implementation (I can't use it for this project), but comparing my results to the other answer that did, it looks like scipy.spatial.distance.pdist is roughly a third slower than the Cython implementation (taking into account the different machines by benchmarking on the np.abs solution).", "best_answers_score":0.763, "library_name":"scipy", "question_url":"https:\/\/stackoverflow.com\/questions\/20277982\/fastest-pairwise-distance-metric-in-python", "best_answers_votes":34, "question_length":801, "response_length":1142 }, { "question":"generalized cumulative functions in NumPy\/SciPy? Is there a function in numpy or scipy (or some other library) that generalizes the idea of cumsum and cumprod to arbitrary function. For example, consider the (theoretical) function ``` cumf( func, array) ``` func is a function that accepts two floats, and returns a float. Particular cases ``` lambda x,y: x+y ``` and ``` lambda x,y: x*y ``` are cumsum and cumprod respectively. For example, if ``` func = lambda x,prev_x: x^2*prev_x ``` and I apply it to: ``` cumf(func, np.array( 1, 2, 3) ) ``` I would like ``` np.array( 1, 4, 9*4 ) ```", "response":"The ValueError above is still a bug using Numpy 1.20.1 (with Python 3.9.1). 
Luckily a workaround was discovered that uses casting: https:\/\/groups.google.com\/forum\/#!topic\/numpy\/JgUltPe2hqw ``` import numpy as np uadd = np.frompyfunc(lambda x, y: x + y, 2, 1) uadd.accumulate([1,2,3], dtype=object).astype(int) # array([1, 3, 6]) ``` Note that since the custom operation works on an object type, it won't benefit from the efficient memory management of numpy. So the operation may be slower than one that didn't need casting to object for extremely large arrays.", "best_answers_score":0.7621, "library_name":"scipy", "question_url":"https:\/\/stackoverflow.com\/questions\/13828599\/generalized-cumulative-functions-in-numpy-scipy", "best_answers_votes":15, "question_length":589, "response_length":561 }, { "question":"Python double free error for huge datasets I have a very simple script in Python, but for some reason I get the following error when running a large amount of data: ``` *** glibc detected *** python: double free or corruption (out): 0x00002af5a00cc010 *** ``` I am used to these errors coming up in C or C++, when one tries to free memory that has already been freed. However, by my understanding of Python (and especially the way I've written the code), I really don't understand why this should happen. Here is the code: ``` #!\/usr\/bin\/python -tt import sys, commands, string import numpy as np import scipy.io as io from time import clock W = io.loadmat(sys.argv[1])['W'] size = W.shape[0] numlabels = int(sys.argv[2]) Q = np.zeros((size, numlabels), dtype=np.double) P = np.zeros((size, numlabels), dtype=np.double) Q += 1.0 \/ Q.shape[1] nu = 0.001 mu = 0.01 start = clock() mat = -nu + mu*(W*(np.log(Q)-1)) end = clock() print >> sys.stderr, \"Time taken to compute matrix: %.2f seconds\"%(end-start) ``` One may ask, why declare a P and a Q numpy array? I simply do that to reflect the actual conditions (as this code is simply a segment of what I actually do, where I need a P matrix and declare it beforehand). 
I have access to a 192GB machine, and so I tested this out on a very large SciPy sparse matrix (2.2 million by 2.2 million, but very sparse, that's not the issue). The main memory is taken up by the Q, P, and mat matrices, as they are all 2.2 million by 2000 matrices (size = 2.2 million, numlabels = 2000). The peak memory goes up to 131GB, which comfortably fits in memory. While the mat matrix is being computed, I get the glibc error, and my process automatically goes into the sleep (S) state, without deallocating the 131GB it has taken up. Given the bizarre (for Python) error (I am not explicitly deallocating anything), and the fact that this works nicely for smaller matrix sizes (around 1.5 million by 2000), I am really not sure where to start to debug this. As a starting point, I have set \"ulimit -s unlimited\" before running, but to no avail. Any help or insight into numpy's behavior with really large amounts of data would be welcome. Note that this is NOT an out of memory error - I have 196GB, and my process reaches around 131GB and stays there for some time before giving the error below. Update: February 16, 2013 (1:10 PM PST): As per suggestions, I ran Python with GDB. 
Interestingly, on one GDB run I forgot to set the stack size limit to \"unlimited\", and got the following output: ``` *** glibc detected *** \/usr\/bin\/python: munmap_chunk(): invalid pointer: 0x00007fe7508a9010 *** ======= Backtrace: ========= \/lib64\/libc.so.6(+0x733b6)[0x7ffff6ec23b6] \/usr\/lib64\/python2.7\/site-packages\/numpy\/core\/multiarray.so(+0x4a496)[0x7ffff69fc496] \/usr\/lib64\/libpython2.7.so.1.0(PyEval_EvalFrameEx+0x4e67)[0x7ffff7af48c7] \/usr\/lib64\/libpython2.7.so.1.0(PyEval_EvalCodeEx+0x309)[0x7ffff7af6c49] \/usr\/lib64\/libpython2.7.so.1.0(PyEval_EvalCode+0x32)[0x7ffff7b25592] \/usr\/lib64\/libpython2.7.so.1.0(+0xfcc61)[0x7ffff7b33c61] \/usr\/lib64\/libpython2.7.so.1.0(PyRun_FileExFlags+0x84)[0x7ffff7b34074] \/usr\/lib64\/libpython2.7.so.1.0(PyRun_SimpleFileExFlags+0x189)[0x7ffff7b347c9] \/usr\/lib64\/libpython2.7.so.1.0(Py_Main+0x36c)[0x7ffff7b3e1bc] \/lib64\/libc.so.6(__libc_start_main+0xfd)[0x7ffff6e6dbfd] \/usr\/bin\/python[0x4006e9] ======= Memory map: ======== 00400000-00401000 r-xp 00000000 09:01 50336181 \/usr\/bin\/python2.7 00600000-00601000 r--p 00000000 09:01 50336181 \/usr\/bin\/python2.7 00601000-00602000 rw-p 00001000 09:01 50336181 \/usr\/bin\/python2.7 00602000-00e5f000 rw-p 00000000 00:00 0 [heap] 7fdf2584c000-7ffff0a66000 rw-p 00000000 00:00 0 7ffff0a66000-7ffff0a6b000 r-xp 00000000 09:01 50333916 \/usr\/lib64\/python2.7\/lib-dynload\/mmap.so 7ffff0a6b000-7ffff0c6a000 ---p 00005000 09:01 50333916 \/usr\/lib64\/python2.7\/lib-dynload\/mmap.so 7ffff0c6a000-7ffff0c6b000 r--p 00004000 09:01 50333916 \/usr\/lib64\/python2.7\/lib-dynload\/mmap.so 7ffff0c6b000-7ffff0c6c000 rw-p 00005000 09:01 50333916 \/usr\/lib64\/python2.7\/lib-dynload\/mmap.so 7ffff0c6c000-7ffff0c77000 r-xp 00000000 00:12 54138483 \/home\/avneesh\/.local\/lib\/python2.7\/site-packages\/scipy\/io\/matlab\/streams.so 7ffff0c77000-7ffff0e76000 ---p 0000b000 00:12 54138483 \/home\/avneesh\/.local\/lib\/python2.7\/site-packages\/scipy\/io\/matlab\/streams.so 
7ffff0e76000-7ffff0e77000 r--p 0000a000 00:12 54138483 \/home\/avneesh\/.local\/lib\/python2.7\/site-packages\/scipy\/io\/matlab\/streams.so 7ffff0e77000-7ffff0e78000 rw-p 0000b000 00:12 54138483 \/home\/avneesh\/.local\/lib\/python2.7\/site-packages\/scipy\/io\/matlab\/streams.so 7ffff0e78000-7ffff0e79000 rw-p 00000000 00:00 0 7ffff0e79000-7ffff0e9b000 r-xp 00000000 00:12 54138481 \/home\/avneesh\/.local\/lib\/python2.7\/site-packages\/scipy\/io\/matlab\/mio5_utils.so 7ffff0e9b000-7ffff109a000 ---p 00022000 00:12 54138481 \/home\/avneesh\/.local\/lib\/python2.7\/site-packages\/scipy\/io\/matlab\/mio5_utils.so 7ffff109a000-7ffff109b000 r--p 00021000 00:12 54138481 \/home\/avneesh\/.local\/lib\/python2.7\/site-packages\/scipy\/io\/matlab\/mio5_utils.so 7ffff109b000-7ffff109f000 rw-p 00022000 00:12 54138481 \/home\/avneesh\/.local\/lib\/python2.7\/site-packages\/scipy\/io\/matlab\/mio5_utils.so 7ffff109f000-7ffff10a0000 rw-p 00000000 00:00 0 7ffff10a0000-7ffff10a5000 r-xp 00000000 09:01 50333895 \/usr\/lib64\/python2.7\/lib-dynload\/zlib.so 7ffff10a5000-7ffff12a4000 ---p 00005000 09:01 50333895 \/usr\/lib64\/python2.7\/lib-dynload\/zlib.so 7ffff12a4000-7ffff12a5000 r--p 00004000 09:01 50333895 \/usr\/lib64\/python2.7\/lib-dynload\/zlib.so 7ffff12a5000-7ffff12a7000 rw-p 00005000 09:01 50333895 \/usr\/lib64\/python2.7\/lib-dynload\/zlib.so 7ffff12a7000-7ffff12ad000 r-xp 00000000 00:12 54138491 \/home\/avneesh\/.local\/lib\/python2.7\/site-packages\/scipy\/io\/matlab\/mio_utils.so 7ffff12ad000-7ffff14ac000 ---p 00006000 00:12 54138491 \/home\/avneesh\/.local\/lib\/python2.7\/site-packages\/scipy\/io\/matlab\/mio_utils.so 7ffff14ac000-7ffff14ad000 r--p 00005000 00:12 54138491 \/home\/avneesh\/.local\/lib\/python2.7\/site-packages\/scipy\/io\/matlab\/mio_utils.so 7ffff14ad000-7ffff14ae000 rw-p 00006000 00:12 54138491 \/home\/avneesh\/.local\/lib\/python2.7\/site-packages\/scipy\/io\/matlab\/mio_utils.so 7ffff14ae000-7ffff14b5000 r-xp 00000000 00:12 54138562 
\/home\/avneesh\/.local\/lib\/python2.7\/site-packages\/scipy\/sparse\/sparsetools\/_csgraph.so 7ffff14b5000-7ffff16b4000 ---p 00007000 00:12 54138562 \/home\/avneesh\/.local\/lib\/python2.7\/site-packages\/scipy\/sparse\/sparsetools\/_csgraph.so 7ffff16b4000-7ffff16b5000 r--p 00006000 00:12 54138562 \/home\/avneesh\/.local\/lib\/python2.7\/site-packages\/scipy\/sparse\/sparsetools\/_csgraph.so 7ffff16b5000-7ffff16b6000 rw-p 00007000 00:12 54138562 \/home\/avneesh\/.local\/lib\/python2.7\/site-packages\/scipy\/sparse\/sparsetools\/_csgraph.so 7ffff16b6000-7ffff17c2000 r-xp 00000000 00:12 54138558 \/home\/avneesh\/.local\/lib\/python2.7\/site-packages\/scipy\/sparse\/sparsetools\/_bsr.so 7ffff17c2000-7ffff19c2000 ---p 0010c000 00:12 54138558 \/home\/avneesh\/.local\/lib\/python2.7\/site-packages\/scipy\/sparse\/sparsetools\/_bsr.so 7ffff19c2000-7ffff19c3000 r--p 0010c000 00:12 54138558 \/home\/avneesh\/.local\/lib\/python2.7\/site-packages\/scipy\/sparse\/sparsetools\/_bsr.so 7ffff19c3000-7ffff19c6000 rw-p 0010d000 00:12 54138558 \/home\/avneesh\/.local\/lib\/python2.7\/site-packages\/scipy\/sparse\/sparsetools\/_bsr.so 7ffff19c6000-7ffff19d5000 r-xp 00000000 00:12 54138561 \/home\/avneesh\/.local\/lib\/python2.7\/site-packages\/scipy\/sparse\/sparsetools\/_dia.so 7ffff19d5000-7ffff1bd4000 ---p 0000f000 00:12 54138561 \/home\/avneesh\/.local\/lib\/python2.7\/site-packages\/scipy\/sparse\/sparsetools\/_dia.so 7ffff1bd4000-7ffff1bd5000 r--p 0000e000 00:12 54138561 \/home\/avneesh\/.local\/lib\/python2.7\/site-packages\/scipy\/sparse\/sparsetools\/_dia.so Program received signal SIGABRT, Aborted. 0x00007ffff6e81ab5 in raise () from \/lib64\/libc.so.6 (gdb) bt #0 0x00007ffff6e81ab5 in raise () from \/lib64\/libc.so.6 #1 0x00007ffff6e82fb6 in abort () from \/lib64\/libc.so.6 #2 0x00007ffff6ebcdd3 in __libc_message () from \/lib64\/libc.so.6 #3 0x00007ffff6ec23b6 in malloc_printerr () from \/lib64\/libc.so.6 #4 0x00007ffff69fc496 in ?? 
() from \/usr\/lib64\/python2.7\/site-packages\/numpy\/core\/multiarray.so #5 0x00007ffff7af48c7 in PyEval_EvalFrameEx () from \/usr\/lib64\/libpython2.7.so.1.0 #6 0x00007ffff7af6c49 in PyEval_EvalCodeEx () from \/usr\/lib64\/libpython2.7.so.1.0 #7 0x00007ffff7b25592 in PyEval_EvalCode () from \/usr\/lib64\/libpython2.7.so.1.0 #8 0x00007ffff7b33c61 in ?? () from \/usr\/lib64\/libpython2.7.so.1.0 #9 0x00007ffff7b34074 in PyRun_FileExFlags () from \/usr\/lib64\/libpython2.7.so.1.0 #10 0x00007ffff7b347c9 in PyRun_SimpleFileExFlags () from \/usr\/lib64\/libpython2.7.so.1.0 #11 0x00007ffff7b3e1bc in Py_Main () from \/usr\/lib64\/libpython2.7.so.1.0 #12 0x00007ffff6e6dbfd in __libc_start_main () from \/lib64\/libc.so.6 #13 0x00000000004006e9 in _start () ``` When I set the stack size limit to unlimited\", I get the following: ``` *** glibc detected *** \/usr\/bin\/python: double free or corruption (out): 0x00002abb2732c010 *** ^X^C Program received signal SIGINT, Interrupt. 0x00002aaaab9d08fe in __lll_lock_wait_private () from \/lib64\/libc.so.6 (gdb) bt #0 0x00002aaaab9d08fe in __lll_lock_wait_private () from \/lib64\/libc.so.6 #1 0x00002aaaab969f2e in _L_lock_9927 () from \/lib64\/libc.so.6 #2 0x00002aaaab9682d1 in free () from \/lib64\/libc.so.6 #3 0x00002aaaaaabbfe2 in _dl_scope_free () from \/lib64\/ld-linux-x86-64.so.2 #4 0x00002aaaaaab70a4 in _dl_map_object_deps () from \/lib64\/ld-linux-x86-64.so.2 #5 0x00002aaaaaabcaa0 in dl_open_worker () from \/lib64\/ld-linux-x86-64.so.2 #6 0x00002aaaaaab85f6 in _dl_catch_error () from \/lib64\/ld-linux-x86-64.so.2 #7 0x00002aaaaaabc5da in _dl_open () from \/lib64\/ld-linux-x86-64.so.2 #8 0x00002aaaab9fb530 in do_dlopen () from \/lib64\/libc.so.6 #9 0x00002aaaaaab85f6 in _dl_catch_error () from \/lib64\/ld-linux-x86-64.so.2 #10 0x00002aaaab9fb5cf in dlerror_run () from \/lib64\/libc.so.6 #11 0x00002aaaab9fb637 in __libc_dlopen_mode () from \/lib64\/libc.so.6 #12 0x00002aaaab9d60c5 in init () from \/lib64\/libc.so.6 #13 
0x00002aaaab080933 in pthread_once () from \/lib64\/libpthread.so.0 #14 0x00002aaaab9d61bc in backtrace () from \/lib64\/libc.so.6 #15 0x00002aaaab95dde7 in __libc_message () from \/lib64\/libc.so.6 #16 0x00002aaaab9633b6 in malloc_printerr () from \/lib64\/libc.so.6 #17 0x00002aaaab9682dc in free () from \/lib64\/libc.so.6 #18 0x00002aaaabef1496 in ?? () from \/usr\/lib64\/python2.7\/site-packages\/numpy\/core\/multiarray.so #19 0x00002aaaaad888c7 in PyEval_EvalFrameEx () from \/usr\/lib64\/libpython2.7.so.1.0 #20 0x00002aaaaad8ac49 in PyEval_EvalCodeEx () from \/usr\/lib64\/libpython2.7.so.1.0 #21 0x00002aaaaadb9592 in PyEval_EvalCode () from \/usr\/lib64\/libpython2.7.so.1.0 #22 0x00002aaaaadc7c61 in ?? () from \/usr\/lib64\/libpython2.7.so.1.0 #23 0x00002aaaaadc8074 in PyRun_FileExFlags () from \/usr\/lib64\/libpython2.7.so.1.0 #24 0x00002aaaaadc87c9 in PyRun_SimpleFileExFlags () from \/usr\/lib64\/libpython2.7.so.1.0 #25 0x00002aaaaadd21bc in Py_Main () from \/usr\/lib64\/libpython2.7.so.1.0 #26 0x00002aaaab90ebfd in __libc_start_main () from \/lib64\/libc.so.6 #27 0x00000000004006e9 in _start () ``` This makes me believe the basic issue is with the numpy multiarray core module (line #4 in the first output and line #18 in the second). I will bring it up as a bug report in both numpy and scipy just in case. Has anyone seen this before? Update: February 17, 2013 (4:45 PM PST) I found a machine that I could run the code on that had a more recent version of SciPy (0.11) and NumPy (1.7.0). Running the code straight up (without GDB) resulted in a seg fault without any output to stdout or stderr. Running again through GDB, I get the following: ``` Program received signal SIGSEGV, Segmentation fault. 0x00002aaaabead970 in ?? () from \/lib\/x86_64-linux-gnu\/libc.so.6 (gdb) bt #0 0x00002aaaabead970 in ?? 
() from \/lib\/x86_64-linux-gnu\/libc.so.6 #1 0x00002aaaac5fcd04 in PyDataMem_FREE (ptr=, $K8=) at numpy\/core\/src\/multiarray\/multiarraymodule.c:3510 #2 array_dealloc (self=0xc00ab7edbfc228fe) at numpy\/core\/src\/multiarray\/arrayobject.c:416 #3 0x0000000000498eac in PyEval_EvalFrameEx () #4 0x000000000049f1c0 in PyEval_EvalCodeEx () #5 0x00000000004a9081 in PyRun_FileExFlags () #6 0x00000000004a9311 in PyRun_SimpleFileExFlags () #7 0x00000000004aa8bd in Py_Main () #8 0x00002aaaabe4f76d in __libc_start_main () from \/lib\/x86_64-linux-gnu\/libc.so.6 #9 0x000000000041b9b1 in _start () ``` I understand this is not as useful as a NumPy compiled with debugging symbols, I will try doing that and post the output later.", "response":"After discussions on the same issue on the Numpy Github page (https:\/\/github.com\/numpy\/numpy\/issues\/2995) it has been brought to my attention that Numpy\/Scipy will not support such a large number of non-zeros in the resulting sparse matrix. Basically, W is a sparse matrix, and Q (or np.log(Q)-1) is a dense matrix. When multiplying a dense matrix with a sparse one, the resulting product will also be represented in sparse matrix form (which makes a lot of sense). However, note that since I have no zero rows in my W matrix, the resulting product W*(np.log(Q)-1) will have nnz > 2^31 (2.2 million multiplied by 2000) and this exceeds the maximum number of elements in a sparse matrix in current versions of Scipy. At this stage, I'm not sure how else to get this to work, barring a re-implementation in another language. Perhaps it can still be done in Python, but it might be better to just write up a C++ and Eigen implementation. A special thanks to pv. 
for helping out on this to pinpoint the exact issue, and thanks to everyone else for the brainstorming!", "best_answers_score":0.7601, "library_name":"scipy", "question_url":"https:\/\/stackoverflow.com\/questions\/14906962\/python-double-free-error-for-huge-datasets", "best_answers_votes":6, "question_length":12456, "response_length":1062 }, { "question":"Where can I find mad (mean absolute deviation) in scipy? It seems scipy once provided a function mad to calculate the mean absolute deviation for a set of numbers: http:\/\/projects.scipy.org\/scipy\/browser\/trunk\/scipy\/stats\/models\/utils.py?rev=3473 However, I can not find it anywhere in current versions of scipy. Of course it is possible to just copy the old code from repository but I prefer to use scipy's version. Where can I find it, or has it been replaced or removed?", "response":"[EDIT] Since this keeps on getting downvoted: I know that median absolute deviation is a more commonly-used statistic, but the questioner asked for mean absolute deviation, and here's how to do it: ``` from numpy import mean, absolute def mad(data, axis=None): return mean(absolute(data - mean(data, axis)), axis) ```", "best_answers_score":0.7594, "library_name":"scipy", "question_url":"https:\/\/stackoverflow.com\/questions\/8930370\/where-can-i-find-mad-mean-absolute-deviation-in-scipy", "best_answers_votes":65, "question_length":473, "response_length":317 }, { "question":"Why does from scipy import spatial work, while scipy.spatial doesn't work after import scipy? I would like to use scipy.spatial.distance.cosine in my code. I can import the spatial submodule if I do something like import scipy.spatial or from scipy import spatial, but if I simply import scipy calling scipy.spatial.distance.cosine(...) results in the following error: AttributeError: 'module' object has no attribute 'spatial'. What is wrong with the second approach?", "response":"Importing a package does not import submodule automatically. 
You need to import the submodule explicitly. For example, import xml does not import the submodule xml.dom ``` >>> import xml >>> xml.dom Traceback (most recent call last): File \"\", line 1, in AttributeError: 'module' object has no attribute 'dom' >>> import xml.dom >>> xml.dom ``` There's an exception like os.path. (The os module itself imports the submodule into its namespace.) ``` >>> import os >>> os.path ```", "best_answers_score":0.7588, "library_name":"scipy", "question_url":"https:\/\/stackoverflow.com\/questions\/21071715\/why-does-from-scipy-import-spatial-work-while-scipy-spatial-doesnt-work-after", "best_answers_votes":32, "question_length":468, "response_length":470 }, { "question":"Installing NumPy and SciPy on 64-bit Windows (with Pip) I found out that it's impossible to install NumPy\/SciPy via installers on Windows 64-bit, that's only possible on 32-bit. Because I need more memory than a 32-bit installation gives me, I need the 64-bit version of everything. I tried to install everything via Pip and most things worked. But when I came to SciPy, it complained about missing a Fortran compiler. So I installed Fortran via MinGW\/MSYS. But you can't install SciPy right away after that, you need to reinstall NumPy. So I tried that, but now it doesn't work anymore via Pip nor via easy_install. Both give these errors: There are a lot of errors about LNK2019 and LNK1120. I get a lot of errors in the range of C2065, C2054, C2085, C2143, etc. They belong together, I believe. There is no Fortran linker found, but I have no idea how to install that, and can't find anything on it. And many more errors which are already out of the visible part of my cmd windows... 
The fatal error is about LNK1120: build\\lib.win-amd64-2.7\\numpy\\linalg\\lapack_lite.pyd : fatal error LNK1120: 7 unresolved externals error: Setup script exited with error: Command \"C:\\Users\\me\\AppData\\Local\\Programs\\Common\\Microsoft\\Visual C++ for Python\\9.0\\VC\\Bin\\amd64\\link.exe \/DLL \/nologo \/INCREMENTAL:NO \/LIBPATH:C:\\BLAS \/LIBPATH:C:\\Python27\\libs \/LIBPATH:C:\\Python27\\PCbuild\\amd64 \/LIBPATH:build\\temp.win-amd64-2.7 lapack.lib blas.lib \/EXPORT:initlapack_lite build\\temp.win-amd64-2.7\\Release\\numpy\\linalg\\lapack_litemodule.obj \/OUT:build\\lib.win-amd64-2.7\\numpy\\linalg\\lapack_lite.pyd \/IMPLIB:build\\temp.win-amd64-2.7\\Release\\numpy\\linalg\\lapack_lite.lib \/MANIFESTFILE:build\\temp.win-amd64-2.7\\Release\\numpy\\linalg\\lapack_lite.pyd.manifest\" failed with exit status 1120 What is the correct way to install the 64-bit versions of NumPy and SciPy on a 64-bit Windows machine? Did I miss anything? Do I need to specify something somewhere? There is no information for Windows on these problems that I can find, only for Linux or Mac OS X, but they don't help me as I can't use their commands.", "response":"You can install scipy and numpy using their wheels. First install the wheel package if it's not already there... ``` pip install wheel ``` Just select the package you want from http:\/\/www.lfd.uci.edu\/~gohlke\/pythonlibs\/#scipy Example: if you're running 64-bit Python 3.5 on Windows, choose scipy-0.18.1-cp35-cp35m-win_amd64.whl and it will download automatically. Then go to the command line, change the directory to the Downloads folder, and install the above wheel using pip. 
Example: ``` cd C:\\Users\\[user]\\Downloads pip install scipy-0.18.1-cp35-cp35m-win_amd64.whl ```", "best_answers_score":0.757, "library_name":"scipy", "question_url":"https:\/\/stackoverflow.com\/questions\/26657334\/installing-numpy-and-scipy-on-64-bit-windows-with-pip", "best_answers_votes":38, "question_length":2076, "response_length":570 }, { "question":"Sklearn kmeans equivalent of elbow method Let's say I'm examining up to 10 clusters, with scipy I usually generate the 'elbow' plot as follows: ``` from scipy import cluster cluster_array = [cluster.vq.kmeans(my_matrix, i) for i in range(1,10)] pyplot.plot([var for (cent,var) in cluster_array]) pyplot.show() ``` I have since became motivated to use sklearn for clustering, however I'm not sure how to create the array needed to plot as in the scipy case. My best guess was: ``` from sklearn.cluster import KMeans km = [KMeans(n_clusters=i) for i range(1,10)] cluster_array = [km[i].fit(my_matrix)] ``` That unfortunately resulted in an invalid command error. What is the best way sklearn way to go about this? Thank you", "response":"you can use the inertia attribute of Kmeans class. Assuming X is your dataset: ``` from sklearn.cluster import KMeans from matplotlib import pyplot as plt X = # distorsions = [] for k in range(2, 20): kmeans = KMeans(n_clusters=k) kmeans.fit(X) distorsions.append(kmeans.inertia_) fig = plt.figure(figsize=(15, 5)) plt.plot(range(2, 20), distorsions) plt.grid(True) plt.title('Elbow curve') ```", "best_answers_score":0.7569, "library_name":"scipy", "question_url":"https:\/\/stackoverflow.com\/questions\/41540751\/sklearn-kmeans-equivalent-of-elbow-method", "best_answers_votes":52, "question_length":721, "response_length":395 }, { "question":"sampling random floats on a range in numpy How can I sample random floats on an interval [a, b] in numpy? Not just integers, but any real numbers. For example, random_float(5, 10) would return random numbers between [5, 10]. 
thanks.", "response":"The uniform distribution would probably do what you are asking. ``` np.random.uniform(5,10) # A single value np.random.uniform(5,10,[2,3]) # A 2x3 array ```", "best_answers_score":0.7538, "library_name":"scipy", "question_url":"https:\/\/stackoverflow.com\/questions\/11873741\/sampling-random-floats-on-a-range-in-numpy", "best_answers_votes":109, "question_length":232, "response_length":156 }, { "question":"Moving average or running mean Is there a SciPy function or NumPy function or module for Python that calculates the running mean of a 1D array given a specific window?", "response":"NOTE: More efficient solutions may include scipy.ndimage.uniform_filter1d (see this answer), or using newer libraries including talib's talib.MA. Use np.convolve: ``` np.convolve(x, np.ones(N)\/N, mode='valid') ``` Explanation The running mean is a case of the mathematical operation of convolution. For the running mean, you slide a window along the input and compute the mean of the window's contents. For discrete 1D signals, convolution is the same thing, except instead of the mean you compute an arbitrary linear combination, i.e., multiply each element by a corresponding coefficient and add up the results. Those coefficients, one for each position in the window, are sometimes called the convolution kernel. The arithmetic mean of N values is (x_1 + x_2 + ... + x_N) \/ N, so the corresponding kernel is (1\/N, 1\/N, ..., 1\/N), and that's exactly what we get by using np.ones(N)\/N. Edges The mode argument of np.convolve specifies how to handle the edges. I chose the valid mode here because I think that's how most people expect the running mean to work, but you may have other priorities. 
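As a quick numeric sketch (not part of the original answer) of how the three modes differ in output length for an N-point signal and an M-point window:

```python
import numpy as np

x = np.arange(8, dtype=float)   # N = 8 samples
kernel = np.ones(3) / 3         # M = 3, a 3-sample averaging window

for mode in ('full', 'same', 'valid'):
    out = np.convolve(x, kernel, mode=mode)
    print(mode, len(out))  # full -> 10 (N+M-1), same -> 8 (N), valid -> 6 (N-M+1)
```

With 'valid', only windows lying fully inside the signal are kept, so the first output sample is simply the mean of x[0:3].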
Here is a plot that illustrates the difference between the modes: ``` import numpy as np import matplotlib.pyplot as plt modes = ['full', 'same', 'valid'] for m in modes: plt.plot(np.convolve(np.ones(200), np.ones(50)\/50, mode=m)); plt.axis([-10, 251, -.1, 1.1]); plt.legend(modes, loc='lower center'); plt.show() ```", "best_answers_score":0.7517, "library_name":"scipy", "question_url":"https:\/\/stackoverflow.com\/questions\/13728392\/moving-average-or-running-mean", "best_answers_votes":382, "question_length":167, "response_length":1413 }, { "question":"Fitting a Weibull distribution using Scipy I am trying to recreate maximum likelihood distribution fitting, I can already do this in Matlab and R, but now I want to use scipy. In particular, I would like to estimate the Weibull distribution parameters for my data set. I have tried this: ``` import scipy.stats as s import numpy as np import matplotlib.pyplot as plt def weib(x,n,a): return (a \/ n) * (x \/ n)**(a - 1) * np.exp(-(x \/ n)**a) data = np.loadtxt(\"stack_data.csv\") (loc, scale) = s.exponweib.fit_loc_scale(data, 1, 1) print loc, scale x = np.linspace(data.min(), data.max(), 1000) plt.plot(x, weib(x, loc, scale)) plt.hist(data, data.max(), density=True) plt.show() ``` And get this: ``` (2.5827280639441961, 3.4955032285727947) ``` And a distribution that looks like this: I have been using the exponweib after reading this http:\/\/www.johndcook.com\/distributions_scipy.html. I have also tried the other Weibull functions in scipy (just in case!). In Matlab (using the Distribution Fitting Tool - see screenshot) and in R (using both the MASS library function fitdistr and the GAMLSS package) I get a (loc) and b (scale) parameters more like 1.58463497 5.93030013. I believe all three methods use the maximum likelihood method for distribution fitting. I have posted my data here if you would like to have a go! And for completeness I am using Python 2.7.5, Scipy 0.12.0, R 2.15.2 and Matlab 2012b. 
Why am I getting a different result!?", "response":"My guess is that you want to estimate the shape parameter and the scale of the Weibull distribution while keeping the location fixed. Fixing loc assumes that the values of your data and of the distribution are positive with lower bound at zero. floc=0 keeps the location fixed at zero, f0=1 keeps the first shape parameter of the exponential weibull fixed at one. ``` >>> stats.exponweib.fit(data, floc=0, f0=1) [1, 1.8553346917584836, 0, 6.8820748596850905] >>> stats.weibull_min.fit(data, floc=0) [1.8553346917584836, 0, 6.8820748596850549] ``` The fit compared to the histogram looks ok, but not very good. The parameter estimates are a bit higher than the ones you mention are from R and matlab. Update The closest I can get to the plot that is now available is with unrestricted fit, but using starting values. The plot is still less peaked. Note values in fit that don't have an f in front are used as starting values. ``` >>> from scipy import stats >>> import matplotlib.pyplot as plt >>> plt.plot(data, stats.exponweib.pdf(data, *stats.exponweib.fit(data, 1, 1, scale=02, loc=0))) >>> _ = plt.hist(data, bins=np.linspace(0, 16, 33), normed=True, alpha=0.5); >>> plt.show() ```", "best_answers_score":0.7503, "library_name":"scipy", "question_url":"https:\/\/stackoverflow.com\/questions\/17481672\/fitting-a-weibull-distribution-using-scipy", "best_answers_votes":41, "question_length":1447, "response_length":1185 }, { "question":"Writing wav file in Python with wavfile.write from SciPy I have this code: ``` import numpy as np import scipy.io.wavfile import math rate, data = scipy.io.wavfile.read('xenencounter_23.wav') data2 = [] for i in range(len(data)): data2.append([int(round(math.sin(data[i][0])*3000)), int(round(math.sin(data[i][1])*3000))]) data2 = np.asarray(data2) print data2 scipy.io.wavfile.write('xenencounter_23sin3.wav',rate,data2) ``` This prints (truncated): ``` [[-2524 2728] [ -423 -2270] [ 2270 423] ..., [-2524 
0] [ 2524 -2728] [-2270 838]] ``` The wav file opens and plays in Windows Media Player, so at least its the proper format. However, when opening it with Audacity and looking at the individual samples, they're all 0, and concordantly the file plays no sound at all. What I don't understand is how that numpy array listed above becomes all 0's. It should be below the maximum value for a sample (or above, if it's negative).", "response":"I found that scipy.io.wavfile.write() writes in 16-bit integer, which explains the larger file sizes when trying to use a 32-bit integer (the default) instead. While I couldn't find a way to change this in wavfile.write, I did find that by changing: ``` data2 = np.asarray(data2) ``` to ``` data2 = np.asarray(data2, dtype=np.int16) ``` I could write a working file.", "best_answers_score":0.7497, "library_name":"scipy", "question_url":"https:\/\/stackoverflow.com\/questions\/18645544\/writing-wav-file-in-python-with-wavfile-write-from-scipy", "best_answers_votes":17, "question_length":929, "response_length":366 }, { "question":"Pyinstaller --onefile warning pyconfig.h when importing scipy or scipy.signal This is very simple to recreate. If my script foo.py is: ``` import scipy ``` Then run: ``` python pyinstaller.py --onefile foo.py ``` When I launch foo.exe I get: ``` WARNING: file already exists but should not: C:\\Users\\username\\AppData\\Local\\Temp\\_MEI86402\\Include\\pyconfig.h ``` I've tested a few versions but the latest I've confirmed is 2.1dev-e958e02 running on Win7, Python 2.7.5 (32 bit), Scipy version 0.12.0 I've submitted a ticket with the Pyinstaller folks but haven't heard anything yet. 
Any clues how to debug this further?", "response":"You can hack the spec file to remove the second instance by adding these lines after a=Analysis: ``` for d in a.datas: if 'pyconfig' in d[0]: a.datas.remove(d) break ```", "best_answers_score":0.747, "library_name":"scipy", "question_url":"https:\/\/stackoverflow.com\/questions\/19055089\/pyinstaller-onefile-warning-pyconfig-h-when-importing-scipy-or-scipy-signal", "best_answers_votes":21, "question_length":616, "response_length":169 }, { "question":"Creating a Confidence Ellipse in a scatterplot using matplotlib How do I create a confidence ellipse in a scatterplot using matplotlib? The following code works until creating scatter plot. Then, is anyone familiar with putting confidence ellipses over the scatter plot? ``` import numpy as np import matplotlib.pyplot as plt x = [5,7,11,15,16,17,18] y = [8, 5, 8, 9, 17, 18, 25] plt.scatter(x,y) plt.show() ``` Following is the reference for Confidence Ellipses from SAS. http:\/\/support.sas.com\/documentation\/cdl\/en\/grstatproc\/62603\/HTML\/default\/viewer.htm#a003160800.htm The code in sas is like this: ``` proc sgscatter data=sashelp.iris(where=(species=\"Versicolor\")); title \"Versicolor Length and Width\"; compare y=(sepalwidth petalwidth) x=(sepallength petallength) \/ reg ellipse=(type=mean) spacing=4; run; ```", "response":"After giving the accepted answer a go, I found that it doesn't choose the quadrant correctly when calculating theta, as it relies on np.arccos: Taking a look at the 'possible duplicate' and Joe Kington's solution on github, I watered his code down to this: ``` import numpy as np import matplotlib.pyplot as plt from matplotlib.patches import Ellipse def eigsorted(cov): vals, vecs = np.linalg.eigh(cov) order = vals.argsort()[::-1] return vals[order], vecs[:,order] x = [5,7,11,15,16,17,18] y = [25, 18, 17, 9, 8, 5, 8] nstd = 2 ax = plt.subplot(111) cov = np.cov(x, y) vals, vecs = eigsorted(cov) theta = np.degrees(np.arctan2(*vecs[:,0][::-1])) 
w, h = 2 * nstd * np.sqrt(vals) ell = Ellipse(xy=(np.mean(x), np.mean(y)), width=w, height=h, angle=theta, color='black') ell.set_facecolor('none') ax.add_artist(ell) plt.scatter(x, y) plt.show() ```", "best_answers_score":0.7437, "library_name":"scipy", "question_url":"https:\/\/stackoverflow.com\/questions\/20126061\/creating-a-confidence-ellipse-in-a-scatterplot-using-matplotlib", "best_answers_votes":29, "question_length":815, "response_length":847 }, { "question":"Structure of inputs to scipy minimize function I have inherited some code that is trying to minimize a function using scipy.optimize.minimize. I am having trouble understanding some of the inputs to the fun and jac arguments. The call to minimize looks something like this: ```py result = minimize(func, jac=jac_func, args=(D_neg, D, C), method = 'TNC' ...other arguments) ``` func looks like the following: ```py def func(G, D_neg, D, C): #do stuff ``` jac_func has the following structure: ```py def jac_func(G, D_neg, D, C): #do stuff ``` What I don't understand is where the G input to func and jac_func is coming from. Is that somehow specified in the minimize function, or by the fact that the method is specified as TNC? I've tried to do some research into the structure of this optimization function but I'm having trouble finding the answer I need.", "response":"The short answer is that G is maintained by the optimizer as part of the minimization process, while the (D_neg, D, and C) arguments are passed in as-is from the args tuple. By default, scipy.optimize.minimize takes a function fun(x) that accepts one argument x (which might be an array or the like) and returns a scalar. scipy.optimize.minimize then finds an argument value xp such that fun(xp) is less than fun(x) for other values of x. The optimizer is responsible for creating values of x and passing them to fun for evaluation. 
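To make that calling convention concrete, here is a minimal sketch of the default one-argument case (the quadratic objective below is invented for illustration, not the questioner's func):

```python
import numpy as np
from scipy.optimize import minimize

def fun(x):
    # toy objective: minimized at x = [3]; the optimizer supplies x
    return (x[0] - 3.0) ** 2

result = minimize(fun, x0=np.array([0.0]))
print(result.x)  # approximately [3.]
```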
But what if you happen to have a function fun(x, y) that has some additional parameter y that needs to be passed in separately (but is considered a constant for the purposes of the optimization)? This is what the args tuple is for. The documentation tries to explain how the args tuple is used, but it can be a little hard to parse: args: tuple, optional Extra arguments passed to the objective function and its derivatives (Jacobian, Hessian). Effectively, scipy.optimize.minimize will pass whatever is in args as the remainder of the arguments to fun, using the asterisk arguments notation: the function is then called as fun(x, *args) during optimization. The x portion is passed in by the optimizer, and the args tuple is given as the remaining arguments. So, in your code, the value of the G element is maintained by the optimizer while evaluating possible values of G, and the (D_neg, D, C) tuple is passed in as-is.", "best_answers_score":0.7431, "library_name":"scipy", "question_url":"https:\/\/stackoverflow.com\/questions\/19843752\/structure-of-inputs-to-scipy-minimize-function", "best_answers_votes":66, "question_length":857, "response_length":1455 }, { "question":"Vectorized way of calculating row-wise dot product two matrices with Scipy I want to calculate the row-wise dot product of two matrices of the same dimension as fast as possible. This is the way I am doing it: ``` import numpy as np a = np.array([[1,2,3], [3,4,5]]) b = np.array([[1,2,3], [1,2,3]]) result = np.array([]) for row1, row2 in a, b: result = np.append(result, np.dot(row1, row2)) print result ``` and of course the output is: ``` [ 26. 14.] 
```", "response":"Straightforward way to do that is: ```python import numpy as np a=np.array([[1,2,3],[3,4,5]]) b=np.array([[1,2,3],[1,2,3]]) np.sum(a*b, axis=1) ``` which avoids the python loop and is faster in cases like: ```python def npsumdot(x, y): return np.sum(x*y, axis=1) def loopdot(x, y): result = np.empty((x.shape[0])) for i in range(x.shape[0]): result[i] = np.dot(x[i], y[i]) return result timeit npsumdot(np.random.rand(500000,50),np.random.rand(500000,50)) # 1 loops, best of 3: 861 ms per loop timeit loopdot(np.random.rand(500000,50),np.random.rand(500000,50)) # 1 loops, best of 3: 1.58 s per loop ```", "best_answers_score":0.7374, "library_name":"scipy", "question_url":"https:\/\/stackoverflow.com\/questions\/15616742\/vectorized-way-of-calculating-row-wise-dot-product-two-matrices-with-scipy", "best_answers_votes":71, "question_length":456, "response_length":603 }, { "question":"Resolve warning \"A NumPy version >=1.16.5 and =1.16.5 and <1.23.0 is required for this version of SciPy (detected version 1.23.1 ``` It's true that I am running NumPy version 1.23.1, however this message is a mystery to me since I am running SciPy version 1.7.3, which, according to SciPy's documentation, is compatible with NumPy <1.24.0. Anyone having this problem or know how to resolve it? I am using Conda as an environment manager, and all my packages are up to date as far as I know. python: 3.9.12 numpy: 1.23.1 scipy: 1.7.3 Thanks in advance if anyone has any clues !", "response":"According to the setup.py file of the scipy 1.7.3, numpy is indeed <1.23.0. As @Libra said, the docs must be incorrect. 
You can: ignore this warning, use scipy 1.8, or use numpy < 1.23.0. Edit: This is now fixed in the dev docs of scipy https:\/\/scipy.github.io\/devdocs\/dev\/toolchain.html", "best_answers_score":0.7367, "library_name":"scipy", "question_url":"https:\/\/stackoverflow.com\/questions\/73072257\/resolve-warning-a-numpy-version-1-16-5-and-1-23-0-is-required-for-this-versi", "best_answers_votes":12, "question_length":576, "response_length":281 }, { "question":"Ignoring -Inf values in arrays using numpy\/scipy in Python I have an NxM array in numpy that I would like to take the log of, and ignore entries that were negative prior to taking the log. When I take the log of negative entries, it returns -Inf, so I will have a matrix with some -Inf values as a result. I then want to sum over the columns of this matrix, but ignoring the -Inf values -- how can I do this? For example, ``` mylogarray = log(myarray) # take sum, but ignore -Inf? sum(mylogarray, 0) ``` I know there's nansum and I need the equivalent, something like infsum. Thanks.", "response":"The easiest way to do this is to use numpy.ma.masked_invalid(): ``` a = numpy.log(numpy.arange(15)) a.sum() # -inf numpy.ma.masked_invalid(a).sum() # 25.19122118273868 ```", "best_answers_score":0.7341, "library_name":"scipy", "question_url":"https:\/\/stackoverflow.com\/questions\/4485779\/ignoring-inf-values-in-arrays-using-numpy-scipy-in-python", "best_answers_votes":44, "question_length":583, "response_length":171 }, { "question":"python numpy\/scipy curve fitting I have some points and I am trying to fit a curve to these points. I know that there exists a scipy.optimize.curve_fit function, but I do not understand the documentation, i.e. how to use this function. My points: ```py np.array([(1, 1), (2, 4), (3, 1), (9, 3)]) ``` Can anybody explain how to do that?", "response":"I suggest you start with a simple polynomial fit; scipy.optimize.curve_fit tries to fit a function f that you must know to a set of points. 
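For comparison, a minimal curve_fit sketch on the same points — the quadratic model below is an assumption chosen for illustration, since curve_fit requires you to supply some model up front:

```python
import numpy as np
from scipy.optimize import curve_fit

points = np.array([(1, 1), (2, 4), (3, 1), (9, 3)], dtype=float)
x, y = points[:, 0], points[:, 1]

def f(x, a, b, c):
    # assumed model; curve_fit estimates a, b, c by least squares
    return a * x**2 + b * x + c

popt, pcov = curve_fit(f, x, y)
print(popt)  # best-fit (a, b, c)
```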
This is a simple 3 degree polynomial fit using numpy.polyfit and poly1d, the first performs a least squares polynomial fit and the second calculates the new points: ``` import numpy as np import matplotlib.pyplot as plt points = np.array([(1, 1), (2, 4), (3, 1), (9, 3)]) # get x and y vectors x = points[:,0] y = points[:,1] # calculate polynomial z = np.polyfit(x, y, 3) f = np.poly1d(z) # calculate new x's and y's x_new = np.linspace(x[0], x[-1], 50) y_new = f(x_new) plt.plot(x,y,'o', x_new, y_new) plt.xlim([x[0]-1, x[-1] + 1 ]) plt.show() ```", "best_answers_score":0.7315, "library_name":"scipy", "question_url":"https:\/\/stackoverflow.com\/questions\/19165259\/python-numpy-scipy-curve-fitting", "best_answers_votes":127, "question_length":330, "response_length":690 }, { "question":"How to display a 3D plot of a 3D array isosurface with mplot3D or similar I have a 3-dimensional numpy array. I'd like to display (in matplotlib) a nice 3D plot of an isosurface of this array (or more strictly, display an isosurface of the 3D scalar field defined by interpolating between the sample points). matplotlib's mplot3D part provides nice 3D plot support, but (so far as I can see) its API doesn't have anything which will simply take a 3D array of scalar values and display an isosurface. However, it does support displaying a collection of polygons, so presumably I could implement the marching cubes algorithm to generate such polygons. It does seem quite likely that a scipy-friendly marching cubes has already been implemented somewhere and that I haven't found it, or that I'm missing some easy way of doing this. Alternatively I'd welcome any pointers to other tools for visualising 3D array data easily usable from the Python\/numpy\/scipy world.", "response":"Complementing the answer of @DanHickstein, you can also use trisurf to visualize the polygons obtained in the marching cubes phase. 
``` import numpy as np from numpy import sin, cos, pi from skimage import measure import matplotlib.pyplot as plt from mpl_toolkits.mplot3d import Axes3D def fun(x, y, z): return cos(x) + cos(y) + cos(z) x, y, z = pi*np.mgrid[-1:1:31j, -1:1:31j, -1:1:31j] vol = fun(x, y, z) iso_val=0.0 verts, faces = measure.marching_cubes(vol, iso_val, spacing=(0.1, 0.1, 0.1)) fig = plt.figure() ax = fig.add_subplot(111, projection='3d') ax.plot_trisurf(verts[:, 0], verts[:,1], faces, verts[:, 2], cmap='Spectral', lw=1) plt.show() ``` Update: May 11, 2018 As mentioned by @DrBwts, now marching_cubes return 4 values. The following code works. ``` import numpy as np from numpy import sin, cos, pi from skimage import measure import matplotlib.pyplot as plt from mpl_toolkits.mplot3d import Axes3D def fun(x, y, z): return cos(x) + cos(y) + cos(z) x, y, z = pi*np.mgrid[-1:1:31j, -1:1:31j, -1:1:31j] vol = fun(x, y, z) iso_val=0.0 verts, faces, _, _ = measure.marching_cubes(vol, iso_val, spacing=(0.1, 0.1, 0.1)) fig = plt.figure() ax = fig.add_subplot(111, projection='3d') ax.plot_trisurf(verts[:, 0], verts[:,1], faces, verts[:, 2], cmap='Spectral', lw=1) plt.show() ``` Update: February 2, 2020 Adding to my previous answer, I should mention that since then PyVista has been released, and it makes this kind of tasks somewhat effortless. Following the same example as before. ```py from numpy import cos, pi, mgrid import pyvista as pv #%% Data x, y, z = pi*mgrid[-1:1:31j, -1:1:31j, -1:1:31j] vol = cos(x) + cos(y) + cos(z) grid = pv.StructuredGrid(x, y, z) grid[\"vol\"] = vol.flatten() contours = grid.contour([0]) #%% Visualization pv.set_plot_theme('document') p = pv.Plotter() p.add_mesh(contours, scalars=contours.points[:, 2], show_scalar_bar=False) p.show() ``` With the following result Update: February 24, 2020 As mentioned by @HenriMenke, marching_cubes has been renamed to marching_cubes_lewiner. The \"new\" snippet is the following. 
```py import numpy as np from numpy import cos, pi from skimage.measure import marching_cubes_lewiner import matplotlib.pyplot as plt from mpl_toolkits.mplot3d import Axes3D x, y, z = pi*np.mgrid[-1:1:31j, -1:1:31j, -1:1:31j] vol = cos(x) + cos(y) + cos(z) iso_val=0.0 verts, faces, _, _ = marching_cubes_lewiner(vol, iso_val, spacing=(0.1, 0.1, 0.1)) fig = plt.figure() ax = fig.add_subplot(111, projection='3d') ax.plot_trisurf(verts[:, 0], verts[:,1], faces, verts[:, 2], cmap='Spectral', lw=1) plt.show() ```", "best_answers_score":0.7308, "library_name":"scipy", "question_url":"https:\/\/stackoverflow.com\/questions\/6030098\/how-to-display-a-3d-plot-of-a-3d-array-isosurface-with-mplot3d-or-similar", "best_answers_votes":46, "question_length":962, "response_length":2583 }, { "question":"How does pip decide which many linux wheel to use? Binary many-linux wheels are now supported: https:\/\/github.com\/pypa\/manylinux Specifically I would like to install the many linux wheel for scipy on Travis, using the trusty beta operating system. The wheels are listed here: https:\/\/pypi.python.org\/pypi\/scipy\/0.17.1 I get: ``` Collecting scipy Downloading scipy-0.17.1.tar.gz (12.4MB) 100% |\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588| 12.4MB 100kB\/s ``` Instead of: ``` Collecting scipy Downloading scipy-0.17.1-cp27-cp27mu-manylinux1_x86_64.whl (39.5MB) 100% |\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588| 39.5MB 37kB\/s ``` So, in order to fix this, I would like to know, how pip determines which wheel to download and install. And yes, I did update pip to version 8.1.2 which supports binary many linux wheels. 
Specifically, I am not interested in alternative solutions, just answer the question, if you can.", "response":"You need pip 8.1 or later and a Linux distribution that is based on glibc (and not musl libc, as on Alpine Linux, for instance). EDIT: the function pip._internal.utils.compatibility_tags.get_supported() should return the list of supported platform tags in order. Pip prefers wheel tags that appear earlier in this list over tags that appear later. Also, may I kindly suggest you use Python 3.5 instead of 2.7 ;)", "best_answers_score":0.7305, "library_name":"scipy", "question_url":"https:\/\/stackoverflow.com\/questions\/37793046\/how-does-pip-decide-which-many-linux-wheel-to-use", "best_answers_votes":16, "question_length":890, "response_length":408 }, { "question":"How to raise a numpy array to a power? (corresponding to repeated matrix multiplications, not elementwise) I want to raise a 2-dimensional numpy array, let's call it A, to the power of some number n, but I have thus far failed to find the function or operator to do that. I'm aware that I could cast it to the matrix type and use the fact that then (similar to what would be the behaviour in Matlab) A**n does just what I want (for arrays the same expression means elementwise exponentiation). Casting to matrix and back seems like a rather ugly workaround though.
Surely there must be a good way to perform that calculation while keeping the format to array?", "response":"I believe you want numpy.linalg.matrix_power As a quick example: ``` import numpy as np x = np.arange(9).reshape(3,3) y = np.matrix(x) a = y**3 b = np.linalg.matrix_power(x, 3) print a print b assert np.all(a==b) ``` This yields: ``` In [19]: a Out[19]: matrix([[ 180, 234, 288], [ 558, 720, 882], [ 936, 1206, 1476]]) In [20]: b Out[20]: array([[ 180, 234, 288], [ 558, 720, 882], [ 936, 1206, 1476]]) ```", "best_answers_score":0.7298, "library_name":"scipy", "question_url":"https:\/\/stackoverflow.com\/questions\/5018552\/how-to-raise-a-numpy-array-to-a-power-corresponding-to-repeated-matrix-multipl", "best_answers_votes":35, "question_length":660, "response_length":406 }, { "question":"scipy.misc module has no attribute imread? I am trying to read an image with scipy. However it does not accept the scipy.misc.imread part. What could be the cause of this? ``` >>> import scipy >>> scipy.misc >>> scipy.misc.imread('test.tif') Traceback (most recent call last): File \"\", line 1, in scipy.misc.imread('test.tif') AttributeError: 'module' object has no attribute 'imread' ```", "response":"imread is deprecated in SciPy 1.0.0, and will be removed in 1.2.0. Use imageio.imread instead. 
``` import imageio im = imageio.imread('astronaut.png') im.shape # im is a numpy array (512, 512, 3) imageio.imwrite('imageio:astronaut-gray.jpg', im[:, :, 0]) ```", "best_answers_score":0.7296, "library_name":"scipy", "question_url":"https:\/\/stackoverflow.com\/questions\/15345790\/scipy-misc-module-has-no-attribute-imread", "best_answers_votes":177, "question_length":390, "response_length":258 }, { "question":"integrating 2D samples on a rectangular grid using SciPy SciPy has three methods for doing 1D integrals over samples (trapz, simps, and romb) and one way to do a 2D integral over a function (dblquad), but it doesn't seem to have methods for doing a 2D integral over samples -- even ones on a rectangular grid. The closest thing I see is scipy.interpolate.RectBivariateSpline.integral -- you can create a RectBivariateSpline from data on a rectangular grid and then integrate it. However, that isn't terribly fast. I want something more accurate than the rectangle method (i.e. just summing everything up). I could, say, use a 2D Simpson's rule by making an array with the correct weights, multiplying that by the array I want to integrate, and then summing up the result. However, I don't want to reinvent the wheel if there's already something better out there. Is there?", "response":"Use the 1D rule twice. 
``` >>> from scipy.integrate import simps >>> import numpy as np >>> x = np.linspace(0, 1, 20) >>> y = np.linspace(0, 1, 30) >>> z = np.cos(x[:,None])**4 + np.sin(y)**2 >>> simps(simps(z, y), x) 0.85134099743259539 >>> import sympy >>> xx, yy = sympy.symbols('x y') >>> sympy.integrate(sympy.cos(xx)**4 + sympy.sin(yy)**2, (xx, 0, 1), (yy, 0, 1)).evalf() 0.851349922021627 ```", "best_answers_score":0.7277, "library_name":"scipy", "question_url":"https:\/\/stackoverflow.com\/questions\/20668689\/integrating-2d-samples-on-a-rectangular-grid-using-scipy", "best_answers_votes":26, "question_length":872, "response_length":399 }, { "question":"kmeans scatter plot: plot different colors per cluster I am trying to do a scatter plot of a kmeans output which clusters sentences of the same topic together. The problem I am facing is plotting the points that belong to each cluster in a certain color. ``` sentence_list=[\"Hi how are you\", \"Good morning\" ...] #i have 10 sentences km = KMeans(n_clusters=5, init='k-means++',n_init=10, verbose=1) #with 5 clusters, i want 5 different colors km.fit(vectorized) km.labels_ # [0,1,2,3,3,4,4,5,2,5] pipeline = Pipeline([('tfidf', TfidfVectorizer())]) X = pipeline.fit_transform(sentence_list).todense() pca = PCA(n_components=2).fit(X) data2D = pca.transform(X) plt.scatter(data2D[:,0], data2D[:,1]) km.fit(X) centers2D = pca.transform(km.cluster_centers_) plt.hold(True) labels=np.array([km.labels_]) print labels ``` My problem is in the bottom code for plt.scatter(); what should I use for the parameter c? 
When I use c=labels in the code, I get this error: number in rgb sequence outside 0-1 range. When I set c=km.labels_ instead, I get the error: ValueError: Color array must be two-dimensional ``` plt.scatter(centers2D[:,0], centers2D[:,1], marker='x', s=200, linewidths=3, c=labels) plt.show() ```", "response":"``` from sklearn.cluster import KMeans import matplotlib.pyplot as plt # Fit KMeans with 5 clusters model = KMeans(n_clusters=5).fit(X) # Visualize it: plt.figure(figsize=(8, 6)) plt.scatter(data[:,0], data[:,1], c=model.labels_.astype(float)) ``` Now you have a different color for each cluster.", "best_answers_score":0.7274, "library_name":"scipy", "question_url":"https:\/\/stackoverflow.com\/questions\/28227340\/kmeans-scatter-plot-plot-different-colors-per-cluster", "best_answers_votes":33, "question_length":1197, "response_length":303 }, { "question":"Convert Z-score (Z-value, standard score) to p-value for normal distribution in Python How does one convert a Z-score from the Z-distribution (standard normal distribution, Gaussian distribution) to a p-value? 
I have yet to find the magical function in Scipy's stats module to do this, but one must be there.", "response":"I like the survival function (upper tail probability) of the normal distribution a bit better, because the function name is more informative: ``` p_values = scipy.stats.norm.sf(abs(z_scores)) #one-sided p_values = scipy.stats.norm.sf(abs(z_scores))*2 #twosided ``` The normal distribution \"norm\" is one of around 90 distributions in scipy.stats. norm.sf also calls the corresponding function in scipy.special, as in gotgenes' example. A small advantage of the survival function, sf: numerical precision should be better for quantiles close to 1 than using the cdf.", "best_answers_score":0.7262, "library_name":"scipy", "question_url":"https:\/\/stackoverflow.com\/questions\/3496656\/convert-z-score-z-value-standard-score-to-p-value-for-normal-distribution-in", "best_answers_votes":76, "question_length":308, "response_length":546 }, { "question":"Generate correlated data in Python (3.3) In R there is a function (cm.rnorm.cor, from package CreditMetrics), that takes the amount of samples, the amount of variables, and a correlation matrix in order to create correlated data. Is there an equivalent in Python?", "response":"The method multivariate_normal of the Generator class in numpy.random is the function that you want. Example: ``` import numpy as np import matplotlib.pyplot as plt num_samples = 400 # The desired mean values of the sample. mu = np.array([5.0, 0.0, 10.0]) # The desired covariance matrix. r = np.array([ [ 3.40, -2.75, -2.00], [ -2.75, 5.50, 1.50], [ -2.00, 1.50, 1.25] ]) # Generate the random samples. rng = np.random.default_rng() y = rng.multivariate_normal(mu, r, size=num_samples) # Plot various projections of the samples. 
plt.subplot(2,2,1) plt.plot(y[:,0], y[:,1], 'b.', alpha=0.25) plt.plot(mu[0], mu[1], 'ro', ms=3.5) plt.ylabel('y[1]') plt.axis('equal') plt.grid(True) plt.subplot(2,2,3) plt.plot(y[:,0], y[:,2], 'b.', alpha=0.25) plt.plot(mu[0], mu[2], 'ro', ms=3.5) plt.xlabel('y[0]') plt.ylabel('y[2]') plt.axis('equal') plt.grid(True) plt.subplot(2,2,4) plt.plot(y[:,1], y[:,2], 'b.', alpha=0.25) plt.plot(mu[1], mu[2], 'ro', ms=3.5) plt.xlabel('y[1]') plt.axis('equal') plt.grid(True) plt.show() ``` Result: See also CorrelatedRandomSamples in the SciPy Cookbook.", "best_answers_score":0.7256, "library_name":"scipy", "question_url":"https:\/\/stackoverflow.com\/questions\/16024677\/generate-correlated-data-in-python-3-3", "best_answers_votes":31, "question_length":263, "response_length":1080 }, { "question":"numpy\/scipy analog of matlab's fminsearch I am converting some Matlab code into python using numpy. Everything worked pretty smoothly but recently I encountered fminsearch function. So, to cut it short: is there an easy way to make in python something like this: ``` banana = @(x)100*(x(2)-x(1)^2)^2+(1-x(1))^2; [x,fval] = fminsearch(banana,[-1.2, 1]) ``` which will return ``` x = 1.0000 1.0000 fval = 8.1777e-010 ``` Up till now I have not found anything that looks similar in numpy. The only thing that I found similar is scipy.optimize.fmin. Based on the definition it Minimize a function using the downhill simplex algorithm. But right now I can not find to write the above-mentioned Matlab code using this function", "response":"It's just a straight-forward conversion from Matlab syntax to python syntax: ``` import scipy.optimize banana = lambda x: 100*(x[1]-x[0]**2)**2+(1-x[0])**2 xopt = scipy.optimize.fmin(func=banana, x0=[-1.2,1]) ``` with output: ``` Optimization terminated successfully. 
Current function value: 0.000000 Iterations: 85 Function evaluations: 159 array([ 1.00002202, 1.00004222]) ```", "best_answers_score":0.7255, "library_name":"scipy", "question_url":"https:\/\/stackoverflow.com\/questions\/19070943\/numpy-scipy-analog-of-matlabs-fminsearch", "best_answers_votes":31, "question_length":720, "response_length":378 }, { "question":"Numpy: Divide each row by a vector element Suppose I have a numpy array: ``` data = np.array([[1,1,1],[2,2,2],[3,3,3]]) ``` and I have a corresponding \"vector:\" ``` vector = np.array([1,2,3]) ``` How do I operate on data along each row to either subtract or divide so the result is: ``` sub_result = [[0,0,0], [0,0,0], [0,0,0]] div_result = [[1,1,1], [1,1,1], [1,1,1]] ``` Long story short: How do I perform an operation on each row of a 2D array with a 1D array of scalars that correspond to each row?", "response":"Here you go. You just need to use None (or alternatively np.newaxis) combined with broadcasting: ``` In [6]: data - vector[:,None] Out[6]: array([[0, 0, 0], [0, 0, 0], [0, 0, 0]]) In [7]: data \/ vector[:,None] Out[7]: array([[1, 1, 1], [1, 1, 1], [1, 1, 1]]) ```", "best_answers_score":0.7214, "library_name":"scipy", "question_url":"https:\/\/stackoverflow.com\/questions\/19602187\/numpy-divide-each-row-by-a-vector-element", "best_answers_votes":268, "question_length":502, "response_length":262 }, { "question":"Get U, Sigma, V* matrix from Truncated SVD in scikit-learn I am using truncated SVD from scikit-learn package. In the definition of SVD, an original matrix A is approxmated as a product A \u2248 U\u03a3V* where U and V have orthonormal columns, and \u03a3 is non-negative diagonal. I need to get the U, \u03a3 and V* matrices. Looking at the source code here I found out that V* is stored in self.components_ field after calling fit_transform. Is it possible to get U and \u03a3 matrices? 
My code: ``` import sklearn.decomposition as skd import numpy as np matrix = np.random.random((20,20)) trsvd = skd.TruncatedSVD(n_components=15) transformed = trsvd.fit_transform(matrix) VT = trsvd.components_ ```", "response":"Looking into the source via the link you provided, TruncatedSVD is basically a wrapper around sklearn.utils.extmath.randomized_svd; you can manually call this yourself like this: ``` from sklearn.utils.extmath import randomized_svd U, Sigma, VT = randomized_svd(X, n_components=15, n_iter=5, random_state=None) ```", "best_answers_score":0.7192, "library_name":"scipy", "question_url":"https:\/\/stackoverflow.com\/questions\/31523575\/get-u-sigma-v-matrix-from-truncated-svd-in-scikit-learn", "best_answers_votes":63, "question_length":677, "response_length":314 }, { "question":"ValueError: Dimension mismatch I use SciPy and scikit-learn to train and apply a Multinomial Naive Bayes Classifier for binary text classification. Precisely, I use the module sklearn.feature_extraction.text.CountVectorizer for creating sparse matrices that hold word feature counts from text and the module sklearn.naive_bayes.MultinomialNB as the classifier implementation for training the classifier on training data and applying it on test data. The input to the CountVectorizer is a list of text documents represented as unicode strings. The training data is much larger than the test data. 
My code looks like this (simplified): ``` vectorizer = CountVectorizer(**kwargs) # sparse matrix with training data X_train = vectorizer.fit_transform(list_of_documents_for_training) # vector holding target values (=classes, either -1 or 1) for training documents # this vector has the same number of elements as the list of documents y_train = numpy.array([1, 1, 1, -1, -1, 1, -1, -1, 1, 1, -1, -1, -1, ...]) # sparse matrix with test data X_test = vectorizer.fit_transform(list_of_documents_for_testing) # Training stage of NB classifier classifier = MultinomialNB() classifier.fit(X=X_train, y=y_train) # Prediction of log probabilities on test data X_log_proba = classifier.predict_log_proba(X_test) ``` Problem: As soon as MultinomialNB.predict_log_proba() is called, I get ValueError: dimension mismatch. According to the IPython stacktrace below, the error occurs in SciPy: ``` \/path\/to\/my\/code.pyc --> 177 X_log_proba = classifier.predict_log_proba(X_test) \/...\/sklearn\/naive_bayes.pyc in predict_log_proba(self, X) 76 in the model, where classes are ordered arithmetically. 77 \"\"\" --> 78 jll = self._joint_log_likelihood(X) 79 # normalize by P(x) = P(f_1, ..., f_n) 80 log_prob_x = logsumexp(jll, axis=1) \/...\/sklearn\/naive_bayes.pyc in _joint_log_likelihood(self, X) 345 \"\"\"Calculate the posterior log probability of the samples X\"\"\" 346 X = atleast2d_or_csr(X) --> 347 return (safe_sparse_dot(X, self.feature_log_prob_.T) 348 + self.class_log_prior_) 349 \/...\/sklearn\/utils\/extmath.pyc in safe_sparse_dot(a, b, dense_output) 71 from scipy import sparse 72 if sparse.issparse(a) or sparse.issparse(b): --> 73 ret = a * b 74 if dense_output and hasattr(ret, \"toarray\"): 75 ret = ret.toarray() \/...\/scipy\/sparse\/base.pyc in __mul__(self, other) 276 277 if other.shape[0] != self.shape[1]: --> 278 raise ValueError('dimension mismatch') 279 280 result = self._mul_multivector(np.asarray(other)) ``` I have no idea why this error occurs. 
Can anybody please explain it to me and provide a solution for this problem? Thanks a lot in advance!", "response":"Sounds to me like you just need to use vectorizer.transform for the test dataset, since the training dataset fixes the vocabulary (you cannot know the full vocabulary including the training set after all). Just to be clear, that's vectorizer.transform instead of vectorizer.fit_transform.", "best_answers_score":0.7187, "library_name":"scipy", "question_url":"https:\/\/stackoverflow.com\/questions\/12484310\/valueerror-dimension-mismatch", "best_answers_votes":61, "question_length":2641, "response_length":287 }, { "question":"scipy.io.loadmat nested structures (i.e. dictionaries) Using the given routines (how to load Matlab .mat files with scipy), I could not access deeper nested structures to recover them into dictionaries. To present the problem I run into in more detail, I give the following toy example: ``` import scipy.io as spio a = {'b':{'c':{'d': 3}}} # my dictionary: a['b']['c']['d'] = 3 spio.savemat('xy.mat',a) ``` Now I want to read the .mat file back into Python. I tried the following: ``` vig=spio.loadmat('xy.mat',squeeze_me=True) ``` If I now want to access the fields I get: ``` >> vig['b'] array(((array(3),),), dtype=[('c', '|O8')]) >> vig['b']['c'] array(array((3,), dtype=[('d', '|O8')]), dtype=object) >> vig['b']['c']['d'] --------------------------------------------------------------------------- ValueError Traceback (most recent call last) \/ in () ValueError: field named d not found. 
``` However, by using the option struct_as_record=False the field could be accessed: ``` v=spio.loadmat('xy.mat',squeeze_me=True,struct_as_record=False) ``` Now it was possible to access it by ``` >> v['b'].c.d array(3) ```", "response":"Here are the functions, which reconstructs the dictionaries just use this loadmat instead of scipy.io's loadmat: ``` import scipy.io as spio def loadmat(filename): ''' this function should be called instead of direct spio.loadmat as it cures the problem of not properly recovering python dictionaries from mat files. It calls the function check keys to cure all entries which are still mat-objects ''' data = spio.loadmat(filename, struct_as_record=False, squeeze_me=True) return _check_keys(data) def _check_keys(dict): ''' checks if entries in dictionary are mat-objects. If yes todict is called to change them to nested dictionaries ''' for key in dict: if isinstance(dict[key], spio.matlab.mio5_params.mat_struct): dict[key] = _todict(dict[key]) return dict def _todict(matobj): ''' A recursive function which constructs from matobjects nested dictionaries ''' dict = {} for strg in matobj._fieldnames: elem = matobj.__dict__[strg] if isinstance(elem, spio.matlab.mio5_params.mat_struct): dict[strg] = _todict(elem) else: dict[strg] = elem return dict ```", "best_answers_score":0.7181, "library_name":"scipy", "question_url":"https:\/\/stackoverflow.com\/questions\/7008608\/scipy-io-loadmat-nested-structures-i-e-dictionaries", "best_answers_votes":67, "question_length":1113, "response_length":1059 }, { "question":"Plotting a Pandas DataSeries.GroupBy I am new to python and pandas, and have the following DataFrame. How can I plot the DataFrame where each ModelID is a separate plot, saledate is the x-axis and MeanToDate is the y-axis? 
Attempt ``` data[40:76].groupby('ModelID').plot() ``` DataFrame", "response":"You can make the plots by looping over the groups from groupby: ``` import matplotlib.pyplot as plt for title, group in df.groupby('ModelID'): group.plot(x='saleDate', y='MeanToDate', title=title) ``` See the following for more information on plotting with pandas DataFrames: http:\/\/pandas.pydata.org\/pandas-docs\/stable\/visualization.html and for looping over a groupby object: http:\/\/pandas.pydata.org\/pandas-docs\/stable\/groupby.html#iterating-through-groups", "best_answers_score":0.7181, "library_name":"scipy", "question_url":"https:\/\/stackoverflow.com\/questions\/16376159\/plotting-a-pandas-dataseries-groupby", "best_answers_votes":34, "question_length":286, "response_length":445 }, { "question":"Is there a python (scipy) function to determine parameters needed to obtain a target power? In R there is a very useful function that helps with determining parameters for a two sided t-test in order to obtain a target statistical power. The function is called power.prop.test. http:\/\/stat.ethz.ch\/R-manual\/R-patched\/library\/stats\/html\/power.prop.test.html You can call it using: ``` power.prop.test(p1 = .50, p2 = .75, power = .90) ``` And it will tell you n, the sample size needed to obtain this power. This is extremely useful for determining sample sizes for tests. 
Is there a similar function in the scipy package?", "response":"I've managed to replicate the function using the below formula for n and the inverse survival function norm.isf from scipy.stats ``` from scipy.stats import norm, zscore def sample_power_probtest(p1, p2, power=0.8, sig=0.05): z = norm.isf([sig\/2]) #two-sided t test zp = -1 * norm.isf([power]) d = (p1-p2) s =2*((p1+p2) \/2)*(1-((p1+p2) \/2)) n = s * ((zp + z)**2) \/ (d**2) return int(round(n[0])) def sample_power_difftest(d, s, power=0.8, sig=0.05): z = norm.isf([sig\/2]) zp = -1 * norm.isf([power]) n = s * ((zp + z)**2) \/ (d**2) return int(round(n[0])) if __name__ == '__main__': n = sample_power_probtest(0.1, 0.11, power=0.8, sig=0.05) print n #14752 n = sample_power_difftest(0.1, 0.5, power=0.8, sig=0.05) print n #392 ```", "best_answers_score":0.7176, "library_name":"scipy", "question_url":"https:\/\/stackoverflow.com\/questions\/15204070\/is-there-a-python-scipy-function-to-determine-parameters-needed-to-obtain-a-ta", "best_answers_votes":28, "question_length":616, "response_length":728 }, { "question":"ImportError: No module named scipy I am using Python 2.7 and trying to get PyBrain to work. 
But I get this error even though scipy is installed - ``` Traceback (most recent call last): File \"\", line 1, in File \"\/usr\/local\/lib\/python2.7\/site-packages\/PyBrain-0.3.1- py2.7.egg\/pybrain\/__init__.py\", line 1, in from pybrain.structure.__init__ import * File \"\/usr\/local\/lib\/python2.7\/site-packages\/PyBrain-0.3.1-py2.7.egg\/pybrain\/structure\/__init__.py\", line 1, in from pybrain.structure.connections.__init__ import * File \"\/usr\/local\/lib\/python2.7\/site-packages\/PyBrain-0.3.1-py2.7.egg\/pybrain\/structure\/connections\/__init__.py\", line 1, in from pybrain.structure.connections.full import FullConnection File \"\/usr\/local\/lib\/python2.7\/site-packages\/PyBrain-0.3.1-py2.7.egg\/pybrain\/structure\/connections\/full.py\", line 3, in from scipy import reshape, dot, outer ImportError: No module named scipy ``` I have installed scipy using this command - ``` sudo apt-get install python-scipy ``` I get - ``` Reading package lists... Done Building dependency tree Reading state information... Done python-scipy is already the newest version. 0 upgraded, 0 newly installed, 0 to remove and 0 not upgraded. ``` What should I do?", "response":"Try to install it as a python package using pip. You said you already tried: ``` sudo apt-get install python-scipy ``` Now run: ``` pip install scipy ``` I ran both and it worked on my Debian-based box.", "best_answers_score":0.7169, "library_name":"scipy", "question_url":"https:\/\/stackoverflow.com\/questions\/24808043\/importerror-no-module-named-scipy", "best_answers_votes":186, "question_length":1217, "response_length":202 }, { "question":"Generating a dense matrix from a sparse matrix in numpy python I have a Sqlite database that contains following type of schema: ``` termcount(doc_num, term , count) ``` This table contains terms with their respective counts in the document. like ``` (doc1 , term1 ,12) (doc1, term 22, 2) . . 
(docn,term1 , 10) ``` This matrix can be considered a sparse matrix, as each document contains very few terms that will have a non-zero value. How would I create a dense matrix from this sparse matrix using numpy, as I have to calculate the similarity among documents using cosine similarity? This dense matrix will look like a table that has docid as the first column, all the terms listed in the first row, and counts in the remaining cells.", "response":"``` from scipy.sparse import csr_matrix A = csr_matrix([[1,0,2],[0,3,0]]) >>> A <2x3 sparse matrix of type '<type 'numpy.int64'>' with 3 stored elements in Compressed Sparse Row format> >>> A.todense() matrix([[1, 0, 2], [0, 3, 0]]) >>> A.toarray() array([[1, 0, 2], [0, 3, 0]]) ``` This is an example of how to convert a sparse matrix to a dense matrix, taken from scipy.", "best_answers_score":0.7111, "library_name":"scipy", "question_url":"https:\/\/stackoverflow.com\/questions\/16505670\/generating-a-dense-matrix-from-a-sparse-matrix-in-numpy-python", "best_answers_votes":109, "question_length":752, "response_length":321 }, { "question":"How to apply piecewise linear fit in Python? I am trying to fit a piecewise linear fit, as shown in fig. 1, for a data set. This figure was obtained by setting on the lines. 
I attempted to apply a piecewise linear fit using the code: ``` from scipy import optimize import matplotlib.pyplot as plt import numpy as np x = np.array([1, 2, 3, 4, 5, 6, 7, 8, 9, 10 ,11, 12, 13, 14, 15]) y = np.array([5, 7, 9, 11, 13, 15, 28.92, 42.81, 56.7, 70.59, 84.47, 98.36, 112.25, 126.14, 140.03]) def linear_fit(x, a, b): return a * x + b fit_a, fit_b = optimize.curve_fit(linear_fit, x[0:5], y[0:5])[0] y_fit = fit_a * x[0:7] + fit_b fit_a, fit_b = optimize.curve_fit(linear_fit, x[6:14], y[6:14])[0] y_fit = np.append(y_fit, fit_a * x[6:14] + fit_b) figure = plt.figure(figsize=(5.15, 5.15)) figure.clf() plot = plt.subplot(111) ax1 = plt.gca() plot.plot(x, y, linestyle = '', linewidth = 0.25, markeredgecolor='none', marker = 'o', label = r'\\textit{y_a}') plot.plot(x, y_fit, linestyle = ':', linewidth = 0.25, markeredgecolor='none', marker = '', label = r'\\textit{y_b}') plot.set_ylabel('Y', labelpad = 6) plot.set_xlabel('X', labelpad = 6) figure.savefig('test.pdf', box_inches='tight') plt.close() ``` But this gave me a fit of the form shown in fig. 2. I tried playing with the values, but nothing changed, and I can't get the fit of the upper line right. The most important requirement for me is that Python finds the gradient change point itself. I want the code to recognize and fit two linear fits in the appropriate ranges. 
How can this be done in Python?", "response":"You can use numpy.piecewise() to create the piecewise function and then use curve_fit(), Here is the code ``` from scipy import optimize import matplotlib.pyplot as plt import numpy as np %matplotlib inline x = np.array([1, 2, 3, 4, 5, 6, 7, 8, 9, 10 ,11, 12, 13, 14, 15], dtype=float) y = np.array([5, 7, 9, 11, 13, 15, 28.92, 42.81, 56.7, 70.59, 84.47, 98.36, 112.25, 126.14, 140.03]) def piecewise_linear(x, x0, y0, k1, k2): return np.piecewise(x, [x < x0], [lambda x:k1*x + y0-k1*x0, lambda x:k2*x + y0-k2*x0]) p , e = optimize.curve_fit(piecewise_linear, x, y) xd = np.linspace(0, 15, 100) plt.plot(x, y, \"o\") plt.plot(xd, piecewise_linear(xd, *p)) ``` the output: For an N parts fitting, please reference segments_fit.ipynb", "best_answers_score":0.7086, "library_name":"scipy", "question_url":"https:\/\/stackoverflow.com\/questions\/29382903\/how-to-apply-piecewise-linear-fit-in-python", "best_answers_votes":88, "question_length":1536, "response_length":729 }, { "question":"Overflow in exp in scipy\/numpy in Python? What does the following error: ``` Warning: overflow encountered in exp ``` in scipy\/numpy using Python generally mean? I'm computing a ratio in log form, i.e. log(a) + log(b) and then taking the exponent of the result, using exp, and using a sum with logsumexp, as follows: ``` c = log(a) + log(b) c = c - logsumexp(c) ``` some values in the array b are intentionally set to 0. Their log will be -Inf. What could be the cause of this warning? thanks.", "response":"In your case, it means that b is very small somewhere in your array, and you're getting a number (a\/b or exp(log(a) - log(b))) that is too large for whatever dtype (float32, float64, etc) the array you're using to store the output is. 
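As a toy illustration (the arrays a and b below are made up, with a zero placed in b on purpose, mirroring the setup in the question), keeping the whole computation in log space and only exponentiating after subtracting logsumexp keeps every intermediate value representable:

```python
import numpy as np
from scipy.special import logsumexp

# hypothetical example values; b contains a deliberate zero, so log(b) -> -inf
a = np.array([1e-300, 1e-10, 1.0])
b = np.array([0.0, 1e-10, 1.0])

with np.errstate(divide="ignore"):   # the log(0) -> -inf here is intentional
    c = np.log(a) + np.log(b)

c = c - logsumexp(c)                 # normalize in log space; handles -inf safely
probs = np.exp(c)                    # exp(-inf) == 0.0, so nothing overflows
```

The entries where b was zero simply come out as exact zeros in probs, and the remaining entries still sum to 1.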
NumPy can be configured to: ignore these sorts of errors; print the error, but not raise a warning to stop the execution (the default); log the error; raise a warning; raise an error; or call a user-defined function. See numpy.seterr to control how it handles under\/overflows, etc. in floating point arrays.", "best_answers_score":0.7085, "library_name":"scipy", "question_url":"https:\/\/stackoverflow.com\/questions\/4359959\/overflow-in-exp-in-scipy-numpy-in-python", "best_answers_votes":33, "question_length":493, "response_length":540 }, { "question":"Installing SciPy with pip It is possible to install NumPy with pip using pip install numpy. Is there a similar possibility with SciPy? (Doing pip install scipy does not work.) Update The package SciPy is now available to be installed with pip!", "response":"Prerequisite: ``` sudo apt-get install build-essential gfortran libatlas-base-dev python-pip python-dev sudo pip install --upgrade pip ``` Actual packages: ``` sudo pip install numpy sudo pip install scipy ``` Optional packages: ``` sudo pip install matplotlib OR sudo apt-get install python-matplotlib sudo pip install -U scikit-learn sudo pip install pandas ``` src", "best_answers_score":0.7075, "library_name":"scipy", "question_url":"https:\/\/stackoverflow.com\/questions\/2213551\/installing-scipy-with-pip", "best_answers_votes":220, "question_length":243, "response_length":367 }, { "question":"T-test in Pandas If I want to calculate the mean of two categories in Pandas, I can do it like this: ``` data = {'Category': ['cat2','cat1','cat2','cat1','cat2','cat1','cat2','cat1','cat1','cat1','cat2'], 'values': [1,2,3,1,2,3,1,2,3,5,1]} my_data = DataFrame(data) my_data.groupby('Category').mean() Category: values: cat1 2.666667 cat2 1.600000 ``` I have a lot of data formatted this way, and now I need to do a T-test to see if the mean of cat1 and cat2 are statistically different. 
How can I do that?", "response":"It depends what sort of t-test you want to do (one-sided or two-sided, dependent or independent), but it should be as simple as: ``` from scipy.stats import ttest_ind cat1 = my_data[my_data['Category']=='cat1'] cat2 = my_data[my_data['Category']=='cat2'] ttest_ind(cat1['values'], cat2['values']) >>> (1.4927289925706944, 0.16970867501294376) ``` It returns a tuple with the t-statistic & the p-value. See here for other t-tests: http:\/\/docs.scipy.org\/doc\/scipy\/reference\/stats.html", "best_answers_score":0.7074, "library_name":"scipy", "question_url":"https:\/\/stackoverflow.com\/questions\/13404468\/t-test-in-pandas", "best_answers_votes":122, "question_length":505, "response_length":478 }, { "question":"Error \"Import Error: No module named numpy\" on Windows [duplicate] This question already has answers here: ImportError: No module named requests (39 answers) Closed last year. I have a very similar question to this question, but I am still one step behind. I have only one version of Python 3 installed on my Windows 7 (sorry) 64-bit system. I installed NumPy following this link - as suggested in the question. The installation went fine but when I execute ``` import numpy ``` I got the following error: Import error: No module named numpy", "response":"You can simply use ``` pip install numpy ``` Or for python3, use ``` pip3 install numpy ```", "best_answers_score":0.7054, "library_name":"scipy", "question_url":"https:\/\/stackoverflow.com\/questions\/7818811\/error-import-error-no-module-named-numpy-on-windows", "best_answers_votes":390, "question_length":541, "response_length":91 }, { "question":"If I use python pandas, is there any need for structured arrays? Now that pandas provides a data frame structure, is there any need for structured\/record arrays in numpy? 
There are some modifications I need to make to an existing code which requires this structured array type framework, but I am considering using pandas in its place from this point forward. Will I at any point find that I need some functionality of structured\/record arrays that pandas does not provide?", "response":"pandas's DataFrame is a high level tool while structured arrays are a very low-level tool, enabling you to interpret a binary blob of data as a table-like structure. One thing that is hard to do in pandas is nested data types with the same semantics as structured arrays, though this can be imitated with hierarchical indexing (structured arrays can't do most things you can do with hierarchical indexing). Structured arrays are also amenable to working with massive tabular data sets loaded via memory maps (np.memmap). This is a limitation that will be addressed in pandas eventually, though.", "best_answers_score":0.705, "library_name":"scipy", "question_url":"https:\/\/stackoverflow.com\/questions\/12052067\/if-i-use-python-pandas-is-there-any-need-for-structured-arrays", "best_answers_votes":16, "question_length":473, "response_length":594 }, { "question":"Does Conda replace the need for virtualenv? I recently discovered Conda after I was having trouble installing SciPy, specifically on a Heroku app that I am developing. With Conda you create environments, very similar to what virtualenv does. My questions are: If I use Conda will it replace the need for virtualenv? If not, how do I use the two together? Do I install virtualenv in Conda, or Conda in virtualenv? Do I still need to use pip? If so, will I still be able to install packages with pip in an isolated environment?", "response":"Conda replaces virtualenv. In my opinion it is better. It is not limited to Python but can be used for other languages too. In my experience it provides a much smoother experience, especially for scientific packages. 
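To make the comparison concrete, here is roughly how the usual virtualenv workflow maps onto conda commands (the environment name myenv and the packages are placeholders; on conda 4.4+ the activation commands are conda activate / conda deactivate instead of the source variants):

```shell
# create an isolated environment, the counterpart of `virtualenv myenv`
conda create --name myenv python=3.5 numpy scipy

# enter the environment (newer conda: `conda activate myenv`)
source activate myenv

# pip works inside the environment, alongside conda packages
pip install requests

# leave the environment (newer conda: `conda deactivate`)
source deactivate
```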
The first time I got MayaVi properly installed on Mac was with conda. You can still use pip. In fact, conda installs pip in each new environment. It knows about pip-installed packages. For example: ``` conda list ``` lists all installed packages in your current environment. Conda-installed packages show up like this: ``` sphinx_rtd_theme 0.1.7 py35_0 defaults ``` and the ones installed via pip have the marker: ``` wxpython-common 3.0.0.0 ```", "best_answers_score":0.7009, "library_name":"scipy", "question_url":"https:\/\/stackoverflow.com\/questions\/34398676\/does-conda-replace-the-need-for-virtualenv", "best_answers_votes":232, "question_length":525, "response_length":664 }, { "question":"Fitting a closed curve to a set of points I have a set of points pts which form a loop and it looks like this: This is somewhat similar to 31243002, but instead of putting points in between pairs of points, I would like to fit a smooth curve through the points (coordinates are given at the end of the question), so I tried something similar to scipy documentation on Interpolation: ``` values = pts tck = interpolate.splrep(values[:,0], values[:,1], s=1) xnew = np.arange(2,7,0.01) ynew = interpolate.splev(xnew, tck, der=0) ``` but I get this error: ValueError: Error on input data Is there any way to find such a fit? 
Coordinates of the points: ``` pts = array([[ 6.55525 , 3.05472 ], [ 6.17284 , 2.802609], [ 5.53946 , 2.649209], [ 4.93053 , 2.444444], [ 4.32544 , 2.318749], [ 3.90982 , 2.2875 ], [ 3.51294 , 2.221875], [ 3.09107 , 2.29375 ], [ 2.64013 , 2.4375 ], [ 2.275444, 2.653124], [ 2.137945, 3.26562 ], [ 2.15982 , 3.84375 ], [ 2.20982 , 4.31562 ], [ 2.334704, 4.87873 ], [ 2.314264, 5.5047 ], [ 2.311709, 5.9135 ], [ 2.29638 , 6.42961 ], [ 2.619374, 6.75021 ], [ 3.32448 , 6.66353 ], [ 3.31582 , 5.68866 ], [ 3.35159 , 5.17255 ], [ 3.48482 , 4.73125 ], [ 3.70669 , 4.51875 ], [ 4.23639 , 4.58968 ], [ 4.39592 , 4.94615 ], [ 4.33527 , 5.33862 ], [ 3.95968 , 5.61967 ], [ 3.56366 , 5.73976 ], [ 3.78818 , 6.55292 ], [ 4.27712 , 6.8283 ], [ 4.89532 , 6.78615 ], [ 5.35334 , 6.72433 ], [ 5.71583 , 6.54449 ], [ 6.13452 , 6.46019 ], [ 6.54478 , 6.26068 ], [ 6.7873 , 5.74615 ], [ 6.64086 , 5.25269 ], [ 6.45649 , 4.86206 ], [ 6.41586 , 4.46519 ], [ 5.44711 , 4.26519 ], [ 5.04087 , 4.10581 ], [ 4.70013 , 3.67405 ], [ 4.83482 , 3.4375 ], [ 5.34086 , 3.43394 ], [ 5.76392 , 3.55156 ], [ 6.37056 , 3.8778 ], [ 6.53116 , 3.47228 ]]) ```", "response":"Actually, you were not far from the solution in your question. Using scipy.interpolate.splprep for parametric B-spline interpolation would be the simplest approach. It also natively supports closed curves, if you provide the per=1 parameter: ``` import numpy as np from scipy.interpolate import splprep, splev import matplotlib.pyplot as plt # define pts from the question tck, u = splprep(pts.T, u=None, s=0.0, per=1) u_new = np.linspace(u.min(), u.max(), 1000) x_new, y_new = splev(u_new, tck, der=0) plt.plot(pts[:,0], pts[:,1], 'ro') plt.plot(x_new, y_new, 'b--') plt.show() ``` Fundamentally, this approach is not very different from the one in @Joe Kington's answer.
It will probably be a bit more robust, though, because the equivalent of the i vector is chosen, by default, based on the distances between points and not simply their index (see the splprep documentation for the u parameter).", "best_answers_score":0.6998, "library_name":"scipy", "question_url":"https:\/\/stackoverflow.com\/questions\/31464345\/fitting-a-closed-curve-to-a-set-of-points", "best_answers_votes":43, "question_length":1742, "response_length":894 }, { "question":"Python statistics package: difference between statsmodel and scipy.stats [closed] As it currently stands, this question is not a good fit for our Q&A format. We expect answers to be supported by facts, references, or expertise, but this question will likely solicit debate, arguments, polling, or extended discussion. If you feel that this question can be improved and possibly reopened, visit the help center for guidance. Closed 12 years ago. I need some advice on selecting a statistics package for Python. I've done quite some search, but I am not sure if I get everything right, specifically on the differences between statsmodels and scipy.stats. One thing that I know is that those with the scikits namespace are specific \"branches\" of scipy, and what used to be scikits.statsmodels is now called statsmodels. On the other hand there is also scipy.stats. What are the differences between the two, and which one is the statistics package for Python? Thanks. --EDIT-- I changed the title because some answers are not really related to the question, and I suppose that's because the title is not clear enough.", "response":"Statsmodels has scipy.stats as a dependency. Scipy.stats has all of the probability distributions and some statistical tests. It's more like library code in the vein of numpy and scipy. Statsmodels on the other hand provides statistical models with a formula framework similar to R, and it works with pandas DataFrames.
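As an illustration of that difference, here is a minimal sketch of the statsmodels formula interface (the data and the column names x and y are invented for this example):

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# toy data: y is a noisy linear function of x
rng = np.random.default_rng(0)
df = pd.DataFrame({"x": rng.normal(size=200)})
df["y"] = 2.0 * df["x"] + 1.0 + rng.normal(scale=0.1, size=200)

# R-style formula fitted by ordinary least squares on a pandas DataFrame
result = smf.ols("y ~ x", data=df).fit()
print(result.params)
```

The fitted result object carries the model-level output (result.summary(), confidence intervals, diagnostics) that scipy.stats does not aim to provide.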
There are also statistical tests, plotting, and plenty of helper functions in statsmodels. Really it depends on what you need, but you definitely don't have to choose one. They have different aims and strengths.", "best_answers_score":0.6995, "library_name":"scipy", "question_url":"https:\/\/stackoverflow.com\/questions\/14573728\/python-statistics-package-difference-between-statsmodel-and-scipy-stats", "best_answers_votes":42, "question_length":1097, "response_length":530 }, { "question":"Installing SciPy on Ubuntu I have Python 2.7 running and trying to install scipy by using easy_install which returns following errors: ``` Searching for scipy Reading http:\/\/pypi.python.org\/simple\/scipy\/ Reading http:\/\/www.scipy.org Reading http:\/\/sourceforge.net\/project\/showfiles.php?group_id=27747&package_id=19531 Reading http:\/\/new.scipy.org\/Wiki\/Download Best match: scipy 0.11.0 Downloading http:\/\/pypi.python.org\/packages\/source\/s\/scipy\/scipy-0.11.0.zip#md5=40b700ddde9ddab643b640fff7a9d753 Processing scipy-0.11.0.zip Running scipy-0.11.0\/setup.py -q bdist_egg --dist-dir \/tmp\/easy_install-49BQSz\/scipy-0.11.0\/egg-dist-tmp-KMjwKy Running from scipy source directory. \/usr\/lib64\/python2.7\/site-packages\/numpy\/distutils\/system_info.py:1425: UserWarning: Atlas (http:\/\/math-atlas.sourceforge.net\/) libraries not found. Directories to search for the libraries can be specified in the numpy\/distutils\/site.cfg file (section [atlas]) or by setting the ATLAS environment variable. warnings.warn(AtlasNotFoundError.__doc__) \/usr\/lib64\/python2.7\/site-packages\/numpy\/distutils\/system_info.py:1434: UserWarning: Blas (http:\/\/www.netlib.org\/blas\/) libraries not found. Directories to search for the libraries can be specified in the numpy\/distutils\/site.cfg file (section [blas]) or by setting the BLAS environment variable. 
warnings.warn(BlasNotFoundError.__doc__) \/usr\/lib64\/python2.7\/site-packages\/numpy\/distutils\/system_info.py:1437: UserWarning: Blas (http:\/\/www.netlib.org\/blas\/) sources not found. Directories to search for the sources can be specified in the numpy\/distutils\/site.cfg file (section [blas_src]) or by setting the BLAS_SRC environment variable. warnings.warn(BlasSrcNotFoundError.__doc__) error: Blas (http:\/\/www.netlib.org\/blas\/) libraries not found. Directories to search for the libraries can be specified in the numpy\/distutils\/site.cfg file (section [blas]) or by setting the BLAS environment variable. ``` So I installed Blas and Atalas by executing ``` apt-get install blas apt-get install atlas ``` However, the easy_install errors don't disappear. Thanks!", "response":"What worked for me: To actually install scipy from PIP, you need packages libatlas-base-dev (libraries etc. for ATLAS\/BLAS) and gfortran (GNU Fortran compiler). Once these packages are installed, the scipy installer should finish as expected.", "best_answers_score":0.6988, "library_name":"scipy", "question_url":"https:\/\/stackoverflow.com\/questions\/14408123\/installing-scipy-on-ubuntu", "best_answers_votes":74, "question_length":2083, "response_length":242 }, { "question":"How To apply a filter to a signal in python is there any prepared function in python to apply a filter (for example Butterworth filter) to a given signal? I looking for such a function in 'scipy.signal' but I haven't find any useful functions more than filter design ones. actually I want this function to convolve a filter with the signal.", "response":"Yes! There are two: ``` scipy.signal.filtfilt scipy.signal.lfilter ``` There are also methods for convolution (convolve and fftconvolve), but these are probably not appropriate for your application because it involves IIR filters. 
Full code sample: ``` b, a = scipy.signal.butter(N, Wn, 'low') output_signal = scipy.signal.filtfilt(b, a, input_signal) ``` You can read more about the arguments and usage in the documentation. One gotcha is that Wn is a fraction of the Nyquist frequency (half the sampling frequency). So if the sampling rate is 1000Hz and you want a cutoff of 250Hz, you should use Wn=0.5. By the way, I highly recommend the use of filtfilt over lfilter (which is called just filter in Matlab) for most applications. As the documentation states: This function applies a linear filter twice, once forward and once backwards. The combined filter has linear phase. What this means is that each value of the output is a function of both \"past\" and \"future\" points in the input equally. Therefore it will not lag the input. In contrast, lfilter uses only \"past\" values of the input. This inevitably introduces a time lag, which will be frequency-dependent. There are of course a few applications for which this is desirable (notably real-time filtering), but most users are far better off with filtfilt.", "best_answers_score":0.6967, "library_name":"scipy", "question_url":"https:\/\/stackoverflow.com\/questions\/13740348\/how-to-apply-a-filter-to-a-signal-in-python", "best_answers_votes":47, "question_length":340, "response_length":1315 }, { "question":"Has someone made the Parula colormap in Matplotlib? I know that there's been some discussion of Matlab copyrighting their new default colormap, but I'm wondering if any intrepid user has created the colormap in Matplotlib. 
Viridis is great, but it's a bit dark for what I'm trying to do.", "response":"In case the link that @tom provided breaks, here it is: ``` from matplotlib.colors import LinearSegmentedColormap cm_data = [[0.2081, 0.1663, 0.5292], [0.2116238095, 0.1897809524, 0.5776761905], [0.212252381, 0.2137714286, 0.6269714286], [0.2081, 0.2386, 0.6770857143], [0.1959047619, 0.2644571429, 0.7279], [0.1707285714, 0.2919380952, 0.779247619], [0.1252714286, 0.3242428571, 0.8302714286], [0.0591333333, 0.3598333333, 0.8683333333], [0.0116952381, 0.3875095238, 0.8819571429], [0.0059571429, 0.4086142857, 0.8828428571], [0.0165142857, 0.4266, 0.8786333333], [0.032852381, 0.4430428571, 0.8719571429], [0.0498142857, 0.4585714286, 0.8640571429], [0.0629333333, 0.4736904762, 0.8554380952], [0.0722666667, 0.4886666667, 0.8467], [0.0779428571, 0.5039857143, 0.8383714286], [0.079347619, 0.5200238095, 0.8311809524], [0.0749428571, 0.5375428571, 0.8262714286], [0.0640571429, 0.5569857143, 0.8239571429], [0.0487714286, 0.5772238095, 0.8228285714], [0.0343428571, 0.5965809524, 0.819852381], [0.0265, 0.6137, 0.8135], [0.0238904762, 0.6286619048, 0.8037619048], [0.0230904762, 0.6417857143, 0.7912666667], [0.0227714286, 0.6534857143, 0.7767571429], [0.0266619048, 0.6641952381, 0.7607190476], [0.0383714286, 0.6742714286, 0.743552381], [0.0589714286, 0.6837571429, 0.7253857143], [0.0843, 0.6928333333, 0.7061666667], [0.1132952381, 0.7015, 0.6858571429], [0.1452714286, 0.7097571429, 0.6646285714], [0.1801333333, 0.7176571429, 0.6424333333], [0.2178285714, 0.7250428571, 0.6192619048], [0.2586428571, 0.7317142857, 0.5954285714], [0.3021714286, 0.7376047619, 0.5711857143], [0.3481666667, 0.7424333333, 0.5472666667], [0.3952571429, 0.7459, 0.5244428571], [0.4420095238, 0.7480809524, 0.5033142857], [0.4871238095, 0.7490619048, 0.4839761905], [0.5300285714, 0.7491142857, 0.4661142857], [0.5708571429, 0.7485190476, 0.4493904762], [0.609852381, 0.7473142857, 0.4336857143], [0.6473, 0.7456, 0.4188], 
[0.6834190476, 0.7434761905, 0.4044333333], [0.7184095238, 0.7411333333, 0.3904761905], [0.7524857143, 0.7384, 0.3768142857], [0.7858428571, 0.7355666667, 0.3632714286], [0.8185047619, 0.7327333333, 0.3497904762], [0.8506571429, 0.7299, 0.3360285714], [0.8824333333, 0.7274333333, 0.3217], [0.9139333333, 0.7257857143, 0.3062761905], [0.9449571429, 0.7261142857, 0.2886428571], [0.9738952381, 0.7313952381, 0.266647619], [0.9937714286, 0.7454571429, 0.240347619], [0.9990428571, 0.7653142857, 0.2164142857], [0.9955333333, 0.7860571429, 0.196652381], [0.988, 0.8066, 0.1793666667], [0.9788571429, 0.8271428571, 0.1633142857], [0.9697, 0.8481380952, 0.147452381], [0.9625857143, 0.8705142857, 0.1309], [0.9588714286, 0.8949, 0.1132428571], [0.9598238095, 0.9218333333, 0.0948380952], [0.9661, 0.9514428571, 0.0755333333], [0.9763, 0.9831, 0.0538]] parula_map = LinearSegmentedColormap.from_list('parula', cm_data) # For use of \"viscm view\" test_cm = parula_map if __name__ == \"__main__\": import matplotlib.pyplot as plt import numpy as np try: from viscm import viscm viscm(parula_map) except ImportError: print(\"viscm not found, falling back on simple display\") plt.imshow(np.linspace(0, 100, 256)[None, :], aspect='auto', cmap=parula_map) plt.show() ```", "best_answers_score":0.6963, "library_name":"scipy", "question_url":"https:\/\/stackoverflow.com\/questions\/34859628\/has-someone-made-the-parula-colormap-in-matplotlib", "best_answers_votes":16, "question_length":287, "response_length":3162 }, { "question":"How to perform cubic spline interpolation in python? I have two lists to describe the function y(x): ``` x = [0,1,2,3,4,5] y = [12,14,22,39,58,77] ``` I would like to perform cubic spline interpolation so that given some value u in the domain of x, e.g. ``` u = 1.25 ``` I can find y(u). 
I found this in SciPy but I am not sure how to use it.", "response":"In case scipy is not installed: ``` import numpy as np from math import sqrt def cubic_interp1d(x0, x, y): \"\"\" Interpolate a 1-D function using cubic splines. x0 : a float or an 1d-array x : (N,) array_like A 1-D array of real\/complex values. y : (N,) array_like A 1-D array of real values. The length of y along the interpolation axis must be equal to the length of x. Implements a trick to generate at the first step the Cholesky matrix L of the tridiagonal matrix A (thus L is a bidiagonal matrix that can be solved in two distinct loops). additional ref: www.math.uh.edu\/~jingqiu\/math4364\/spline.pdf \"\"\" x = np.asfarray(x) y = np.asfarray(y) # remove non finite values # indexes = np.isfinite(x) # x = x[indexes] # y = y[indexes] # check if sorted if np.any(np.diff(x) < 0): indexes = np.argsort(x) x = x[indexes] y = y[indexes] size = len(x) xdiff = np.diff(x) ydiff = np.diff(y) # allocate buffer matrices Li = np.empty(size) Li_1 = np.empty(size-1) z = np.empty(size) # fill diagonals Li and Li-1 and solve [L][y] = [B] Li[0] = sqrt(2*xdiff[0]) Li_1[0] = 0.0 B0 = 0.0 # natural boundary z[0] = B0 \/ Li[0] for i in range(1, size-1, 1): Li_1[i] = xdiff[i-1] \/ Li[i-1] Li[i] = sqrt(2*(xdiff[i-1]+xdiff[i]) - Li_1[i-1] * Li_1[i-1]) Bi = 6*(ydiff[i]\/xdiff[i] - ydiff[i-1]\/xdiff[i-1]) z[i] = (Bi - Li_1[i-1]*z[i-1])\/Li[i] i = size - 1 Li_1[i-1] = xdiff[-1] \/ Li[i-1] Li[i] = sqrt(2*xdiff[-1] - Li_1[i-1] * Li_1[i-1]) Bi = 0.0 # natural boundary z[i] = (Bi - Li_1[i-1]*z[i-1])\/Li[i] # solve [L.T][x] = [y] i = size-1 z[i] = z[i] \/ Li[i] for i in range(size-2, -1, -1): z[i] = (z[i] - Li_1[i-1]*z[i+1])\/Li[i] # find index index = x.searchsorted(x0) np.clip(index, 1, size-1, index) xi1, xi0 = x[index], x[index-1] yi1, yi0 = y[index], y[index-1] zi1, zi0 = z[index], z[index-1] hi1 = xi1 - xi0 # calculate cubic f0 = zi0\/(6*hi1)*(xi1-x0)**3 + \\ zi1\/(6*hi1)*(x0-xi0)**3 + \\ (yi1\/hi1 - zi1*hi1\/6)*(x0-xi0) +
\\ (yi0\/hi1 - zi0*hi1\/6)*(xi1-x0) return f0 if __name__ == '__main__': import matplotlib.pyplot as plt x = np.linspace(0, 10, 11) y = np.sin(x) plt.scatter(x, y) x_new = np.linspace(0, 10, 201) plt.plot(x_new, cubic_interp1d(x_new, x, y)) plt.show() ```", "best_answers_score":0.6962, "library_name":"scipy", "question_url":"https:\/\/stackoverflow.com\/questions\/31543775\/how-to-perform-cubic-spline-interpolation-in-python", "best_answers_votes":43, "question_length":342, "response_length":2156 }, { "question":"Python\/Numpy MemoryError Basically, I am getting a memory error in python when trying to perform an algebraic operation on a numpy matrix. The variable u, is a large matrix of double (in the failing case its a 288x288x156 matrix of doubles. I only get this error in this huge case, but I am able to do this on other large matrices, just not this big). Here is the Python error: ``` Traceback (most recent call last): File \"S:\\3D_Simulation_Data\\Patient SPM Segmentation\\20 pc t perim erosion flattop\\SwSim.py\", line 121, in __init__ self.mainSimLoop() File \"S:\\3D_Simulation_Data\\Patient SPM Segmentation\\20 pc t perim erosion flattop\\SwSim.py\", line 309, in mainSimLoop u = solver.solve_cg(u,b,tensors,param,fdHold,resid) # Solve the left hand si de of the equation Au=b with conjugate gradient method to approximate u File \"S:\\3D_Simulation_Data\\Patient SPM Segmentation\\20 pc t perim erosion flattop\\conjugate_getb.py\", line 47, in solv e_cg u = u + alpha*p MemoryError ``` u = u + alpha*p is the line of code that fails. alpha is just a double, while u and r are the large matrices described above (both of the same size). I don't know that much about memory errors especially in Python. Any insight\/tips into solving this would be very appreciated! Thanks", "response":"Rewrite to ``` p *= alpha u += p ``` and this will use much less memory. 
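A small sketch showing that these in-place forms reuse the existing buffers instead of allocating a new array for each step (the shapes here are arbitrary, much smaller than the 288x288x156 case in the question):

```python
import numpy as np

u = np.zeros((50, 50))
p = np.ones((50, 50))
alpha = 0.5

u_id = id(u)
p *= alpha   # scale p in place; no temporary matrix is allocated
u += p       # accumulate into u's existing memory

assert id(u) == u_id        # u still lives in the same buffer
assert np.allclose(u, 0.5)  # same values as u = u + alpha*p would give
```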
Whereas p = p*alpha allocates a whole new matrix for the result of p*alpha and then discards the old p; p *= alpha does the same thing in place. In general, with big matrices, try to use op= assignment.", "best_answers_score":0.6955, "library_name":"scipy", "question_url":"https:\/\/stackoverflow.com\/questions\/4318615\/python-numpy-memoryerror", "best_answers_votes":52, "question_length":1260, "response_length":274 }, { "question":"Compute a confidence interval from sample data I have sample data which I would like to compute a confidence interval for, assuming a normal distribution. I have found and installed the numpy and scipy packages and have gotten numpy to return a mean and standard deviation (numpy.mean(data) with data being a list). Any advice on getting a sample confidence interval would be much appreciated.", "response":"Here is a shortened version of shasan's code, calculating the 95% confidence interval of the mean of array a: ``` import numpy as np, scipy.stats as st st.t.interval(0.95, len(a)-1, loc=np.mean(a), scale=st.sem(a)) ``` But using StatsModels' tconfint_mean is arguably even nicer: ``` import statsmodels.stats.api as sms sms.DescrStatsW(a).tconfint_mean() ``` The underlying assumptions for both are that the sample (array a) was drawn independently from a normal distribution with unknown standard deviation (see MathWorld or Wikipedia). For large sample size n, the sample mean is normally distributed, and one can calculate its confidence interval using st.norm.interval() (as suggested in Jaime's comment). But the above solutions are correct also for small n, where st.norm.interval() gives confidence intervals that are too narrow (i.e., \"fake confidence\"). See my answer to a similar question for more details (and one of Russ's comments here).
Here is an example where the correct options give (essentially) identical confidence intervals: ``` In [9]: a = range(10,14) In [10]: mean_confidence_interval(a) Out[10]: (11.5, 9.4457397432391215, 13.554260256760879) In [11]: st.t.interval(0.95, len(a)-1, loc=np.mean(a), scale=st.sem(a)) Out[11]: (9.4457397432391215, 13.554260256760879) In [12]: sms.DescrStatsW(a).tconfint_mean() Out[12]: (9.4457397432391197, 13.55426025676088) ``` And finally, the incorrect result using st.norm.interval(): ``` In [13]: st.norm.interval(0.95, loc=np.mean(a), scale=st.sem(a)) Out[13]: (10.23484868811834, 12.76515131188166) ```", "best_answers_score":0.6941, "library_name":"scipy", "question_url":"https:\/\/stackoverflow.com\/questions\/15033511\/compute-a-confidence-interval-from-sample-data", "best_answers_votes":210, "question_length":393, "response_length":1562 }, { "question":"can't install scipy - freezes on \"Running setup.py install for scipy\" when I run ``` sudo pip install -U scipy ``` it is first downloaded and then it goes on to show ``` Running setup.py install for scipy ``` but it freezes there. I tried upgrading pip itself. Worked fine. My pip version is 1.5.4. The only error I get is InsecurePlatformWarning. The complete output looks like this: ``` tom@tom-ThinkPad-Edge-E430:~$ sudo pip install -U scipy The directory '\/home\/tom\/.cache\/pip\/http' or its parent directory is not owned by the current user and the cache has been disabled. Please check the permissions and owner of that directory. If executing pip with sudo, you may want sudo's -H flag. The directory '\/home\/tom\/.cache\/pip' or its parent directory is not owned by the current user and caching wheels has been disabled. check the permissions and owner of that directory. If executing pip with sudo, you may want sudo's -H flag.
Collecting scipy \/usr\/local\/lib\/python2.7\/dist-packages\/pip\/_vendor\/requests\/packages\/urllib3\/util\/ssl_.py:90: InsecurePlatformWarning: A true SSLContext object is not available. This prevents urllib3 from configuring SSL appropriately and may cause certain SSL connections to fail. For more information, see https:\/\/urllib3.readthedocs.org\/en\/latest\/security.html#insecureplatformwarning. InsecurePlatformWarning Downloading scipy-0.16.1.tar.gz (12.2MB) 100% |\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588| 12.2MB 32kB\/s Installing collected packages: scipy Running setup.py install for scipy ```", "response":"It took unexpectedly long, but it finished after several minutes. So problem solved.", "best_answers_score":0.693, "library_name":"scipy", "question_url":"https:\/\/stackoverflow.com\/questions\/33848709\/cant-install-scipy-freezes-on-running-setup-py-install-for-scipy", "best_answers_votes":72, "question_length":1514, "response_length":84 }, { "question":"Downsample a 1D numpy array I have a 1-d numpy array which I would like to downsample. Any of the following methods are acceptable if the downsampling raster doesn't perfectly fit the data: overlap downsample intervals convert whatever number of values remains at the end to a separate downsampled value interpolate to fit raster basically if I have ``` 1 2 6 2 1 ``` and I am downsampling by a factor of 3, all of the following are ok: ``` 3 3 3 1.5 ``` or whatever an interpolation would give me here. I'm just looking for the fastest\/easiest way to do this. I found scipy.signal.decimate, but that sounds like it decimates the values (takes them out as needed and only leaves one in X). scipy.signal.resample seems to have the right name, but I do not understand where they are going with the whole fourier thing in the description. 
My signal is not particularly periodic. Could you give me a hand here? This seems like a really simple task to do, but all these functions are quite intricate...", "response":"In the simple case where your array's size is divisible by the downsampling factor (R), you can reshape your array, and take the mean along the new axis: ``` import numpy as np a = np.array([1.,2,6,2,1,7]) R = 3 a.reshape(-1, R) => array([[ 1., 2., 6.], [ 2., 1., 7.]]) a.reshape(-1, R).mean(axis=1) => array([ 3. , 3.33333333]) ``` In the general case, you can pad your array with NaNs to a size divisible by R, and take the mean using np.nanmean: ``` import math b = np.append(a, [ 4 ]) b.shape => (7,) pad_size = int(math.ceil(float(b.size)\/R)*R - b.size) b_padded = np.append(b, np.zeros(pad_size)*np.nan) b_padded.shape => (9,) np.nanmean(b_padded.reshape(-1,R), axis=1) => array([ 3. , 3.33333333, 4.]) ```", "best_answers_score":0.6917, "library_name":"scipy", "question_url":"https:\/\/stackoverflow.com\/questions\/20322079\/downsample-a-1d-numpy-array", "best_answers_votes":35, "question_length":997, "response_length":720 }, { "question":"Defining a white noise process in Python I need to draw samples from a white noise process in order to implement a particular integral numerically. How do I generate this with Python (i.e., numpy, scipy, etc.)?", "response":"You can achieve this through the numpy.random.normal function, which draws a given number of samples from a Gaussian distribution.
``` import numpy import matplotlib.pyplot as plt mean = 0 std = 1 num_samples = 1000 samples = numpy.random.normal(mean, std, size=num_samples) plt.plot(samples) plt.show() ```", "best_answers_score":0.6915, "library_name":"scipy", "question_url":"https:\/\/stackoverflow.com\/questions\/32237769\/defining-a-white-noise-process-in-python", "best_answers_votes":39, "question_length":210, "response_length":307 }, { "question":"scipy is not optimizing and returns \"Desired error not necessarily achieved due to precision loss\" I have the following code which attempts to minimize a log likelihood function. ``` #!\/usr\/bin\/python import math import random import numpy as np from scipy.optimize import minimize def loglikelihood(params, data): (mu, alpha, beta) = params tlist = np.array(data) r = np.zeros(len(tlist)) for i in xrange(1,len(tlist)): r[i] = math.exp(-beta*(tlist[i]-tlist[i-1]))*(1+r[i-1]) loglik = -tlist[-1]*mu loglik = loglik+alpha\/beta*sum(np.exp(-beta*(tlist[-1]-tlist))-1) loglik = loglik+np.sum(np.log(mu+alpha*r)) return -loglik atimes = [ 148.98894201, 149.70253172, 151.13717804, 160.35968355, 160.98322609, 161.21331798, 163.60755544, 163.68994973, 164.26131871, 228.79436067] a= 0.01 alpha = 0.5 beta = 0.6 print loglikelihood((a, alpha, beta), atimes) res = minimize(loglikelihood, (0.01, 0.1,0.1), method = 'BFGS',args = (atimes,)) print res ``` It gives me ``` 28.3136498357 .\/test.py:17: RuntimeWarning: invalid value encountered in log loglik = loglik+np.sum(np.log(mu+alpha*r)) status: 2 success: False njev: 14 nfev: 72 hess_inv: array([[1, 0, 0], [0, 1, 0], [0, 0, 1]]) fun: 32.131359359964378 x: array([ 0.01, 0.1 , 0.1 ]) message: 'Desired error not necessarily achieved due to precision loss.' jac: array([ -2.8051672 , 13.06962156, -48.97879982]) ``` Notice that it hasn't managed to optimize the parameters at all and the minimized value 32 is bigger than 28 which is what you get with a= 0.01, alpha = 0.5, beta = 0.6 . 
It's possible this problem could be avoided by choosing better initial guesses but if so, how can I do this automatically? Nelder-Mead, TNC and SLSQP work as drop-in replacements. None of the other methods do.", "response":"I copied your example and tried a little bit. It looks like if you stick with the BFGS solver, after a few iterations mu + alpha*r will have some negative numbers, and that's how you get the RuntimeWarning. The easiest fix I can think of is to switch to the Nelder-Mead solver. ``` res = minimize(loglikelihood, (0.01, 0.1,0.1), method = 'Nelder-Mead',args = (atimes,)) ``` And it will give you this result: ``` 28.3136498357 status: 0 nfev: 159 success: True fun: 27.982451280648817 x: array([ 0.01410906, 0.68346023, 0.90837568]) message: 'Optimization terminated successfully.' nit: 92 ```", "best_answers_score":0.6912, "library_name":"scipy", "question_url":"https:\/\/stackoverflow.com\/questions\/24767191\/scipy-is-not-optimizing-and-returns-desired-error-not-necessarily-achieved-due", "best_answers_votes":41, "question_length":1742, "response_length":585 }, { "question":"How to install python packages without root privileges? I am using numpy \/ scipy \/ pynest to do some research computing on Mac OS X. For performance, we rent a 400-node cluster (with Linux) from our university so that the tasks could be done in parallel. The problem is that we are NOT allowed to install any extra packages on the cluster (no sudo or any installation tool), they only provide the raw python itself. How can I run my scripts on the cluster then? Is there any way to integrate the modules (numpy and scipy also have some compiled binaries I think) so that it could be interpreted and executed without installing packages?", "response":"You don't need root privileges to install packages in your home directory.
You can do that with a command such as ``` pip install --user numpy ``` or from source ``` python setup.py install --user ``` See https:\/\/stackoverflow.com\/a\/7143496\/284795 The first alternative is much more convenient, so if the server doesn't have pip or easy_install, you should politely ask the admins to add it, explaining the benefit to them (they won't be bothered anymore by requests for individual packages).", "best_answers_score":0.6906, "library_name":"scipy", "question_url":"https:\/\/stackoverflow.com\/questions\/14179941\/how-to-install-python-packages-without-root-privileges", "best_answers_votes":45, "question_length":633, "response_length":492 }, { "question":"Pixel neighbors in 2d array (image) using Python I have a numpy array like this: ``` x = np.array([[1,2,3],[4,5,6],[7,8,9]]) ``` I need to create a function let's call it \"neighbors\" with the following input parameter: x: a numpy 2d array (i,j): the index of an element in a 2d array d: neighborhood radius As output I want to get the neighbors of the cell i,j with a given distance d. So if I run ``` neighbors(im, i, j, d=1) with i = 1 and j = 1 (element value = 5) ``` I should get the indices of the following values: [1,2,3,4,6,7,8,9]. I hope I make it clear. Is there any library like scipy which deal with this? I've done something working but it's a rough solution. ``` def pixel_neighbours(self, p): rows, cols = self.im.shape i, j = p[0], p[1] rmin = i - 1 if i - 1 >= 0 else 0 rmax = i + 1 if i + 1 = 0 else 0 cmax = j + 1 if j + 1 < cols else j neighbours = [] for x in xrange(rmin, rmax + 1): for y in xrange(cmin, cmax + 1): neighbours.append([x, y]) neighbours.remove([p[0], p[1]]) return neighbours ``` How can I improve this?", "response":"Have a look at scipy.ndimage.generic_filter. 
As an example: ``` import numpy as np import scipy.ndimage as ndimage def test_func(values): print(values) return values.sum() x = np.array([[1,2,3],[4,5,6],[7,8,9]]) footprint = np.array([[1,1,1], [1,0,1], [1,1,1]]) results = ndimage.generic_filter(x, test_func, footprint=footprint) ``` By default, it will \"reflect\" the values at the boundaries. You can control this with the mode keyword argument. However, if you're wanting to do something like this, there's a good chance that you can express your problem as a convolution of some sort. If so, it will be much faster to break it down into convolutional steps and use more optimized functions (e.g. most of scipy.ndimage).", "best_answers_score":0.6896, "library_name":"scipy", "question_url":"https:\/\/stackoverflow.com\/questions\/10996769\/pixel-neighbors-in-2d-array-image-using-python", "best_answers_votes":29, "question_length":1042, "response_length":722 }, { "question":"Converting a pandas multi-index series to a dataframe by using second index as columns Hi I have a DataFrame\/Series with 2-level multi-index and one column. I would like to take the second-level index and use it as a column. 
For example (code taken from multi-index docs): ``` import pandas as pd import numpy as np arrays = [['bar', 'bar', 'baz', 'baz', 'foo', 'foo', 'qux', 'qux'], ['one', 'two', 'one', 'two', 'one', 'two', 'one', 'two']] tuples = list(zip(*arrays)) index = pd.MultiIndex.from_tuples(tuples, names=['first', 'second']) s = pd.DataFrame(np.random.randn(8), index=index, columns=[\"col\"]) ``` Which looks like: ``` first second bar one -0.982656 two -0.078237 baz one -0.345640 two -0.160661 foo one -0.605568 two -0.140384 qux one 1.434702 two -1.065408 dtype: float64 ``` What I would like is to have a DataFrame with index [bar, baz, foo, qux] and columns [one, two].", "response":"You just need to unstack the \"col\" series: ``` >>> s['col'].unstack(level=1) second one two first bar -0.713374 0.556993 baz 0.523611 0.328348 foo 0.338351 -0.571854 qux 0.036694 -0.161852 ```", "best_answers_score":0.6878, "library_name":"scipy", "question_url":"https:\/\/stackoverflow.com\/questions\/44142591\/converting-a-pandas-multi-index-series-to-a-dataframe-by-using-second-index-as-c", "best_answers_votes":57, "question_length":887, "response_length":180 }, { "question":"Geometric median of multidimensional points I have an array of 3D points: ``` a = np.array([[2., 3., 8.], [10., 4., 3.], [58., 3., 4.], [34., 2., 43.]]) ``` How can I compute the geometric median of those points?", "response":"I implemented Yehuda Vardi and Cun-Hui Zhang's algorithm for the geometric median, described in their paper \"The multivariate L1-median and associated data depth\". Everything is vectorized in numpy, so it should be very fast. I didn't implement weights - only unweighted points.
``` import numpy as np from scipy.spatial.distance import cdist, euclidean def geometric_median(X, eps=1e-5): y = np.mean(X, 0) while True: D = cdist(X, [y]) nonzeros = (D != 0)[:, 0] Dinv = 1 \/ D[nonzeros] Dinvs = np.sum(Dinv) W = Dinv \/ Dinvs T = np.sum(W * X[nonzeros], 0) num_zeros = len(X) - np.sum(nonzeros) if num_zeros == 0: y1 = T elif num_zeros == len(X): return y else: R = (T - y) * Dinvs r = np.linalg.norm(R) rinv = 0 if r == 0 else num_zeros\/r y1 = max(0, 1-rinv)*T + min(1, rinv)*y if euclidean(y, y1) < eps: return y1 y = y1 ``` In addition to the default SO license terms, I release the code above under the zlib license, if you so prefer.", "best_answers_score":0.687, "library_name":"scipy", "question_url":"https:\/\/stackoverflow.com\/questions\/30299267\/geometric-median-of-multidimensional-points", "best_answers_votes":36, "question_length":212, "response_length":933 }, { "question":"Scikit-learn train_test_split with indices How do I get the original indices of the data when using train_test_split()? What I have is the following ``` from sklearn.cross_validation import train_test_split import numpy as np data = np.reshape(np.randn(20),(10,2)) # 10 training examples labels = np.random.randint(2, size=10) # 10 labels x1, x2, y1, y2 = train_test_split(data, labels, size=0.2) ``` But this does not give the indices of the original data. One workaround is to add the indices to data (e.g. data = [(i, d) for i, d in enumerate(data)]) and then pass them inside train_test_split and then expand again. 
Are there any cleaner solutions?", "response":"You can use pandas dataframes or series as Julien said, but if you want to restrict yourself to numpy, you can pass an additional array of indices: ``` from sklearn.model_selection import train_test_split import numpy as np n_samples, n_features, n_classes = 10, 2, 2 data = np.random.randn(n_samples, n_features) # 10 training examples labels = np.random.randint(n_classes, size=n_samples) # 10 labels indices = np.arange(n_samples) ( data_train, data_test, labels_train, labels_test, indices_train, indices_test, ) = train_test_split(data, labels, indices, test_size=0.2) ```", "best_answers_score":0.6867, "library_name":"scipy", "question_url":"https:\/\/stackoverflow.com\/questions\/31521170\/scikit-learn-train-test-split-with-indices", "best_answers_votes":137, "question_length":652, "response_length":576 }, { "question":"Any way to solve a system of coupled differential equations in python? I've been working with sympy and scipy, but can't find or figure out how to solve a system of coupled differential equations (non-linear, first-order). So is there any way to solve coupled differential equations? The equations are of the form: ``` v11'(s) = -12*v12(s)**2 v22'(s) = 12*v12(s)**2 v12'(s) = 6*v11(s)*v12(s) - 6*v12(s)*v22(s) - 36*v12(s) ``` with initial conditions for v11(s), v22(s), v12(s).", "response":"In addition to SciPy methods odeint and ode that were already mentioned, it now has solve_ivp which is newer and often more convenient. A complete example, encoding [v11, v22, v12] as an array v: ``` from scipy.integrate import solve_ivp def rhs(s, v): return [-12*v[2]**2, 12*v[2]**2, 6*v[0]*v[2] - 6*v[2]*v[1] - 36*v[2]] res = solve_ivp(rhs, (0, 0.1), [2, 3, 4]) ``` This solves the system on the interval (0, 0.1) with initial value [2, 3, 4]. The result has independent variable (s in your notation) as res.t: ``` array([ 0.
, 0.01410735, 0.03114023, 0.04650042, 0.06204205, 0.07758368, 0.0931253 , 0.1 ]) ``` These values were chosen automatically. One can provide t_eval to have the solution evaluated at desired points: for example, t_eval=np.linspace(0, 0.1). The dependent variable (the function we are looking for) is in res.y: ``` array([[ 2. , 0.54560138, 0.2400736 , 0.20555144, 0.2006393 , 0.19995753, 0.1998629 , 0.1998538 ], [ 3. , 4.45439862, 4.7599264 , 4.79444856, 4.7993607 , 4.80004247, 4.8001371 , 4.8001462 ], [ 4. , 1.89500744, 0.65818761, 0.24868116, 0.09268216, 0.0345318 , 0.01286543, 0.00830872]]) ``` With Matplotlib, this solution is plotted as plt.plot(res.t, res.y.T) (the plot would be smoother if I provided t_eval as mentioned). Finally, if the system involved equations of order higher than 1, one would need to use reduction to a 1st order system.", "best_answers_score":0.6857, "library_name":"scipy", "question_url":"https:\/\/stackoverflow.com\/questions\/16909779\/any-way-to-solve-a-system-of-coupled-differential-equations-in-python", "best_answers_votes":17, "question_length":477, "response_length":1384 }, { "question":"Is there a test suite for numpy \/ scipy? I'm about to reinstall numpy and scipy on my Ubuntu Lucid. As these things carry quite a few dependencies, I'm wondering if there is a comprehensive test suite to check if the new install really works. Of course, I can just take a bunch of my scripts and run them one by one to see if they keep working, but that won't guard against a situation where at some point in the future I'll try to use something I didn't use before and it'll break (or, worse, silently produce nonsense).", "response":"Yes. Both packages have a test method for this.
``` import numpy numpy.test('full') import scipy scipy.test('full') ``` You will need to have pytest and hypothesis installed to run numpy.test.", "best_answers_score":0.6801, "library_name":"scipy", "question_url":"https:\/\/stackoverflow.com\/questions\/9200727\/is-there-a-test-suite-for-numpy-scipy", "best_answers_votes":59, "question_length":521, "response_length":192 }, { "question":"Improving FFT performance in Python What is the fastest FFT implementation in Python? It seems numpy.fft and scipy.fftpack both are based on fftpack, and not FFTW. Is fftpack as fast as FFTW? What about using multithreaded FFT, or using distributed (MPI) FFT?", "response":"You could certainly wrap whatever FFT implementation that you wanted to test using Cython or other like-minded tools that allow you to access external libraries. GPU-based If you're going to test FFT implementations, you might also take a look at GPU-based codes (if you have access to the proper hardware). There are several: reikna.fft, scikits.cuda. CPU-based There's also a CPU based python FFTW wrapper pyFFTW. (There is pyFFTW3 as well, but it is not so actively maintained as pyFFTW, and it does not work with Python3. (source)) I don't have experience with any of these. It's probably going to fall to you to do some digging around and benchmark different codes for your particular application if speed is important to you.", "best_answers_score":0.6794, "library_name":"scipy", "question_url":"https:\/\/stackoverflow.com\/questions\/6365623\/improving-fft-performance-in-python", "best_answers_votes":23, "question_length":259, "response_length":731 }, { "question":"Factorial in numpy and scipy How can I import factorial function from numpy and scipy separately in order to see which one is faster? I already imported factorial from python itself by import math. 
But, it does not work for numpy and scipy.", "response":"You can import them like this: ``` In [7]: import scipy, numpy, math In [8]: scipy.math.factorial, numpy.math.factorial, math.factorial Out[8]: (<function math.factorial>, <function math.factorial>, <function math.factorial>) ``` scipy.math.factorial and numpy.math.factorial seem to simply be aliases for\/references to math.factorial; that is, both scipy.math.factorial is math.factorial and numpy.math.factorial is math.factorial evaluate to True.", "best_answers_score":0.6791, "library_name":"scipy", "question_url":"https:\/\/stackoverflow.com\/questions\/21753841\/factorial-in-numpy-and-scipy", "best_answers_votes":106, "question_length":240, "response_length":373 }, { "question":"Generating Discrete random variables with specified weights using SciPy or NumPy I am looking for a simple function that can generate an array of specified random values based on their corresponding (also specified) probabilities. I only need it to generate float values, but I don't see why it shouldn't be able to generate any scalar. I can think of many ways of building this from existing functions, but I think I probably just missed an obvious SciPy or NumPy function. E.g.: ``` >>> values = [1.1, 2.2, 3.3] >>> probabilities = [0.2, 0.5, 0.3] >>> print some_function(values, probabilities, size=10) (2.2, 1.1, 3.3, 3.3, 2.2, 2.2, 1.1, 2.2, 3.3, 2.2) ``` Note: I found scipy.stats.rv_discrete but I don't understand how it works. Specifically, I do not understand what this (below) means nor what it should do: ``` numargs = generic.numargs [ ] = ['Replace with resonable value', ]*numargs ``` If rv_discrete is what I should be using, could you please provide me with a simple example and an explanation of the above \"shape\" statement?", "response":"Drawing from a discrete distribution is directly built into numpy. The function is called random.choice (difficult to find without any reference to discrete distributions in the numpy docs).
``` elements = [1.1, 2.2, 3.3] probabilities = [0.2, 0.5, 0.3] np.random.choice(elements, 10, p=probabilities) ```", "best_answers_score":0.6784, "library_name":"scipy", "question_url":"https:\/\/stackoverflow.com\/questions\/11373192\/generating-discrete-random-variables-with-specified-weights-using-scipy-or-numpy", "best_answers_votes":99, "question_length":1043, "response_length":305 }, { "question":"Finding the null space of a matrix I'm trying to find the null space (solution space of Ax=0) of a given matrix. I've found two examples, but I can't seem to get either to work. Moreover, I can't understand what they're doing to get there, so I can't debug. Can someone walk me through this? The documentation pages (numpy.linalg.svd, and numpy.compress) are opaque to me. I learned to do this by creating the matrix C = [A|0], finding the reduced row echelon form and solving for variables by row. I can't seem to follow how it's being done in these examples. Thanks for any and all help! Here is my sample matrix, which is the same as the wikipedia example: ``` A = matrix([ [2,3,5], [-4,2,3] ]) ``` Method (found here, and here): ``` import scipy from scipy import linalg, matrix def null(A, eps=1e-15): u, s, vh = scipy.linalg.svd(A) null_mask = (s >> import scipy >>> from scipy import linalg, matrix >>> def null(A, eps=1e-15): ... u, s, vh = scipy.linalg.svd(A) ... null_mask = (s >> A = matrix([ ... [2,3,5], ... [-4,2,3] ... ]) >>> >>> null(A) array([], shape=(3, 0), dtype=float64) >>> ```", "response":"Sympy makes this straightforward. 
``` >>> from sympy import Matrix >>> A = [[2, 3, 5], [-4, 2, 3], [0, 0, 0]] >>> A = Matrix(A) >>> A * A.nullspace()[0] Matrix([ [0], [0], [0]]) >>> A.nullspace() [Matrix([ [-1\/16], [-13\/8], [ 1]])] ```", "best_answers_score":0.6776, "library_name":"scipy", "question_url":"https:\/\/stackoverflow.com\/questions\/5889142\/finding-the-null-space-of-a-matrix", "best_answers_votes":27, "question_length":1099, "response_length":235 }, { "question":"NumPy and SciPy - Difference between .todense() and .toarray() I am wondering if there is any difference (advantage\/disadvantage) of using .toarray() vs. .todense() on sparse NumPy arrays. E.g., ```py import scipy as sp import numpy as np sparse_m = sp.sparse.bsr_matrix(np.array([[1,0,0,0,1], [1,0,0,0,1]])) %timeit sparse_m.toarray() 1000 loops, best of 3: 299 \u00b5s per loop %timeit sparse_m.todense() 1000 loops, best of 3: 305 \u00b5s per loop ```", "response":"toarray returns an ndarray; todense returns a matrix. If you want a matrix, use todense; otherwise, use toarray.", "best_answers_score":0.677, "library_name":"scipy", "question_url":"https:\/\/stackoverflow.com\/questions\/30416695\/numpy-and-scipy-difference-between-todense-and-toarray", "best_answers_votes":53, "question_length":444, "response_length":112 }, { "question":"How does condensed distance matrix work? (pdist) scipy.spatial.distance.pdist returns a condensed distance matrix. From the documentation: Returns a condensed distance matrix Y. For each and (where ), the metric dist(u=X[i], v=X[j]) is computed and stored in entry ij. I thought ij meant i*j. But I think I might be wrong. Consider ```py X = array([[1,2], [1,2], [3,4]]) dist_matrix = pdist(X) ``` then the documentation says that dist(X[0], X[2]) should be dist_matrix[0*2]. However, dist_matrix[0*2] is 0 -- not 2.8 as it should be. What's the formula I should use to access the similarity of a two vectors, given i and j?", "response":"You can look at it this way: Suppose x is m by n. 
The possible pairs of m rows, chosen two at a time, are given by itertools.combinations(range(m), 2), e.g., for m=3: ``` >>> from itertools import combinations >>> list(combinations(range(3), 2)) [(0, 1), (0, 2), (1, 2)] ``` So if d = pdist(x), the kth tuple in combinations(range(m), 2) gives the indices of the rows of x associated with d[k]. Example: ``` >>> x = array([[0,10],[10,10],[20,20]]) >>> pdist(x) array([ 10. , 22.36067977, 14.14213562]) ``` The first element is dist(x[0], x[1]), the second is dist(x[0], x[2]) and the third is dist(x[1], x[2]). Or you can view it as the elements in the upper triangular part of the square distance matrix, strung together into a 1D array. E.g. ``` >>> squareform(pdist(x)) array([[ 0. , 10. , 22.361], [ 10. , 0. , 14.142], [ 22.361, 14.142, 0. ]]) >>> y = array([[0,10],[10,10],[20,20],[10,0]]) >>> squareform(pdist(y)) array([[ 0. , 10. , 22.361, 14.142], [ 10. , 0. , 14.142, 10. ], [ 22.361, 14.142, 0. , 22.361], [ 14.142, 10. , 22.361, 0. ]]) >>> pdist(y) array([ 10. , 22.361, 14.142, 14.142, 10. , 22.361]) ```", "best_answers_score":0.6756, "library_name":"scipy", "question_url":"https:\/\/stackoverflow.com\/questions\/13079563\/how-does-condensed-distance-matrix-work-pdist", "best_answers_votes":113, "question_length":624, "response_length":1089 }, { "question":"Python curve_fit with multiple independent variables Python's curve_fit calculates the best-fit parameters for a function with a single independent variable, but is there a way, using curve_fit or something else, to fit for a function with multiple independent variables? For example: ``` def func(x, y, a, b, c): return log(a) + b*log(x) + c*log(y) ``` where x and y are the independent variables and we would like to fit for a, b, and c.", "response":"You can pass curve_fit a multi-dimensional array for the independent variables, but then your func must accept the same thing.
For example, calling this array X and unpacking it to x, y for clarity: ``` import numpy as np from scipy.optimize import curve_fit def func(X, a, b, c): x,y = X return np.log(a) + b*np.log(x) + c*np.log(y) # some artificially noisy data to fit x = np.linspace(0.1,1.1,101) y = np.linspace(1.,2., 101) a, b, c = 10., 4., 6. z = func((x,y), a, b, c) * 1 + np.random.random(101) \/ 100 # initial guesses for a,b,c: p0 = 8., 2., 7. print(curve_fit(func, (x,y), z, p0)) ``` Gives the fit: ``` (array([ 9.99933937, 3.99710083, 6.00875164]), array([[ 1.75295644e-03, 9.34724308e-05, -2.90150983e-04], [ 9.34724308e-05, 5.09079478e-06, -1.53939905e-05], [ -2.90150983e-04, -1.53939905e-05, 4.84935731e-05]])) ```", "best_answers_score":0.6755, "library_name":"scipy", "question_url":"https:\/\/stackoverflow.com\/questions\/28372597\/python-curve-fit-with-multiple-independent-variables", "best_answers_votes":97, "question_length":438, "response_length":831 }, { "question":"Removing duplicate columns and rows from a NumPy 2D array I'm using a 2D shape array to store pairs of longitudes+latitudes. At one point, I have to merge two of these 2D arrays, and then remove any duplicated entry. I've been searching for a function similar to numpy.unique, but I've had no luck. Any implementation I've been thinking on looks very \"unoptimizied\". For example, I'm trying with converting the array to a list of tuples, removing duplicates with set, and then converting to an array again: ``` coordskeys = np.array(list(set([tuple(x) for x in coordskeys]))) ``` Are there any existing solutions, so I do not reinvent the wheel? 
To make it clear, I'm looking for: ``` >>> a = np.array([[1, 1], [2, 3], [1, 1], [5, 4], [2, 3]]) >>> unique_rows(a) array([[1, 1], [2, 3],[5, 4]]) ``` BTW, I wanted to use just a list of tuples for it, but the lists were so big that they consumed my 4Gb RAM + 4Gb swap (numpy arrays are more memory efficient).", "response":"This should do the trick: ``` def unique_rows(a): a = np.ascontiguousarray(a) unique_a = np.unique(a.view([('', a.dtype)]*a.shape[1])) return unique_a.view(a.dtype).reshape((unique_a.shape[0], a.shape[1])) ``` Example: ``` >>> a = np.array([[1, 1], [2, 3], [1, 1], [5, 4], [2, 3]]) >>> unique_rows(a) array([[1, 1], [2, 3], [5, 4]]) ```", "best_answers_score":0.6744, "library_name":"scipy", "question_url":"https:\/\/stackoverflow.com\/questions\/8560440\/removing-duplicate-columns-and-rows-from-a-numpy-2d-array", "best_answers_votes":34, "question_length":957, "response_length":336 }, { "question":"Is there a numpy\/scipy dot product, calculating only the diagonal entries of the result? 
Imagine having 2 numpy arrays: ``` > A, A.shape = (n,p) > B, B.shape = (p,p) ``` Typically p is a smaller number (p < n). I would like to compute only the diagonal entries of A.dot(B).dot(A.T), something like: ``` > result = diag_dot(A.dot(B), A.T) ``` Is there a premade functionality like this and can this be done efficiently without the need for allocating the intermediate (n x n) array?", "response":"I think I got it on my own, but nevertheless will share the solution: since getting only the diagonals of a matrix multiplication ``` > Z = N.diag(X.dot(Y)) ``` is equivalent to the individual sum of the scalar product of rows of X and columns of Y, the previous statement is equivalent to: ``` > Z = (X * Y.T).sum(-1) ``` For the original variables this means: ``` > result = (A.dot(B) * A).sum(-1) ``` Please correct me if I am wrong but this should be it ...", "best_answers_score":0.674, "library_name":"scipy", "question_url":"https:\/\/stackoverflow.com\/questions\/14758283\/is-there-a-numpy-scipy-dot-product-calculating-only-the-diagonal-entries-of-the", "best_answers_votes":70, "question_length":382, "response_length":461 }, { "question":"What is the difference between numpy.linalg.lstsq and scipy.linalg.lstsq? lstsq tries to solve Ax=b minimizing |b - Ax|. Both scipy and numpy provide a linalg.lstsq function with a very similar interface. The documentation does not mention which kind of algorithm is used, neither for scipy.linalg.lstsq nor for numpy.linalg.lstsq, but it seems to do pretty much the same. The implementation seems to be different for scipy.linalg.lstsq and numpy.linalg.lstsq. Both seem to use LAPACK, both algorithms seem to use an SVD. Where is the difference? Which one should I use? Note: do not confuse linalg.lstsq with scipy.optimize.leastsq which can solve also non-linear optimization problems.", "response":"If I read the source code right (Numpy 1.8.2, Scipy 0.14.1), numpy.linalg.lstsq() uses the LAPACK routine xGELSD and scipy.linalg.lstsq() uses xGELSS. The LAPACK Manual Sec.
2.4 states The subroutine xGELSD is significantly faster than its older counterpart xGELSS, especially for large problems, but may require somewhat more workspace depending on the matrix dimensions. That means that Numpy is faster but uses more memory. Update August 2017: Scipy now uses xGELSD by default https:\/\/docs.scipy.org\/doc\/scipy\/reference\/generated\/scipy.linalg.lstsq.html", "best_answers_score":0.674, "library_name":"scipy", "question_url":"https:\/\/stackoverflow.com\/questions\/29372559\/what-is-the-difference-between-numpy-linalg-lstsq-and-scipy-linalg-lstsq", "best_answers_votes":34, "question_length":686, "response_length":556 }, { "question":"How to create a pydub AudioSegment using a numpy array? I have the following code in python ``` from scipy.io.wavfile import read rate, signal = read('.\/data\/input.wav') # get only one channel signal = signal[:,0] # do a bunch of processing here ``` Now I want to create a pydub segment using 'signal' and 'rate' ``` audio_segment = pydub.AudioSegment() ``` So how can I create this audio segment, and after that, how can I get back my signal as a numpy array?", "response":"I was able to run this code on my machine: ``` from scipy.io.wavfile import read from pydub import AudioSegment rate, signal = read(\".\/test\/data\/test1.wav\") channel1 = signal[:,0] audio_segment = AudioSegment( channel1.tobytes(), frame_rate=rate, sample_width=channel1.dtype.itemsize, channels=1 ) # test that it sounds right (requires ffplay, or pyaudio): from pydub.playback import play play(audio_segment) ```", "best_answers_score":0.6735, "library_name":"scipy", "question_url":"https:\/\/stackoverflow.com\/questions\/35735497\/how-to-create-a-pydub-audiosegment-using-a-numpy-array", "best_answers_votes":20, "question_length":463, "response_length":418 }, { "question":"How to plot gamma distribution with alpha and beta parameters in python I want to plot a gamma distribution with alpha = 29 (the scale) and beta = 3 (the
size). In other words, I want to plot the pdf for Gamma(29,3). How do I do this if according to the documentation, the python gamma function only has parameters a and x and the size parameter doesn't exist? I thought loc was beta, but I think it's actually offset, so the code below is wrong... ``` import numpy as np import scipy.stats as stats from matplotlib import pyplot as plt x = np.linspace (0, 100, 200) y1 = stats.gamma.pdf(x, a=29, loc=3) #a is alpha, loc is beta??? plt.plot(x, y1, \"y-\", label=(r'$\\alpha=29, \\beta=3$')) plt.ylim([0,0.08]) plt.xlim([0,150]) plt.show() ```", "response":"According to the documentation, you want to use the scale parameter (theta), but since you are defining beta, which is the inverse of theta, then you pass scale with the value of 1\/beta, which in your example would be 1\/3 or 0.33333. Therefore, try: ``` y1 = stats.gamma.pdf(x, a=29, scale=0.33333) ```", "best_answers_score":0.6699, "library_name":"scipy", "question_url":"https:\/\/stackoverflow.com\/questions\/42150965\/how-to-plot-gamma-distribution-with-alpha-and-beta-parameters-in-python", "best_answers_votes":27, "question_length":738, "response_length":302 }, { "question":"Integrate stiff ODEs with Python I'm looking for a good library that will integrate stiff ODEs in Python. The issue is, scipy's odeint gives me good solutions sometimes, but the slightest change in the initial conditions causes it to fall down and give up. The same problem is solved quite happily by MATLAB's stiff solvers (ode15s and ode23s), but I can't use it (even from Python, because none of the Python bindings for the MATLAB C API implement callbacks, and I need to pass a function to the ODE solver). I'm trying PyGSL, but it's horrendously complex. Any suggestions would be greatly appreciated. EDIT: The specific problem I'm having with PyGSL is choosing the right step function. 
There are several of them, but no direct analogues to ode15s or ode23s (bdf formula and modified Rosenbrock if that makes sense). So what is a good step function to choose for a stiff system? I have to solve this system for a really long time to ensure that it reaches steady-state, and the GSL solvers either choose a miniscule time-step or one that's too large.", "response":"Python can call C. The industry standard is LSODE in ODEPACK. It is public-domain. You can download the C version. These solvers are extremely tricky, so it's best to use some well-tested code. Added: Be sure you really have a stiff system, i.e. if the rates (eigenvalues) differ by more than 2 or 3 orders of magnitude. Also, if the system is stiff, but you are only looking for a steady-state solution, these solvers give you the option of solving some of the equations algebraically. Otherwise, a good Runge-Kutta solver like DVERK will be a good, and much simpler, solution. Added here because it would not fit in a comment: This is from the DLSODE header doc: ``` C T :INOUT Value of the independent variable. On return it C will be the current value of t (normally TOUT). C C TOUT :IN Next point where output is desired (.NE. T). ``` Also, yes Michaelis-Menten kinetics is nonlinear. The Aitken acceleration works with it, though. (If you want a short explanation, first consider the simple case of Y being a scalar. You run the system to get 3 Y(T) points. Fit an exponential curve through them (simple algebra). Then set Y to the asymptote and repeat. Now just generalize to Y being a vector. Assume the 3 points are in a plane - it's OK if they're not.) Besides, unless you have a forcing function (like a constant IV drip), the MM elimination will decay away and the system will approach linearity. 
Hope that helps.", "best_answers_score":0.6696, "library_name":"scipy", "question_url":"https:\/\/stackoverflow.com\/questions\/2088473\/integrate-stiff-odes-with-python", "best_answers_votes":18, "question_length":1055, "response_length":1425 }, { "question":"How to create a density plot In R I can create the desired output by doing: ``` data = c(rep(1.5, 7), rep(2.5, 2), rep(3.5, 8), rep(4.5, 3), rep(5.5, 1), rep(6.5, 8)) plot(density(data, bw=0.5)) ``` In python (with matplotlib) the closest I got was with a simple histogram: ``` import matplotlib.pyplot as plt data = [1.5]*7 + [2.5]*2 + [3.5]*8 + [4.5]*3 + [5.5]*1 + [6.5]*8 plt.hist(data, bins=6) plt.show() ``` I also tried the normed=True parameter but couldn't get anything other than trying to fit a gaussian to the histogram. My latest attempts were around scipy.stats and gaussian_kde, following examples on the web, but I've been unsuccessful so far.", "response":"Five years later, when I Google \"how to create a kernel density plot using python\", this thread still shows up at the top! Today, a much easier way to do this is to use seaborn, a package that provides many convenient plotting functions and good style management. ``` import numpy as np import seaborn as sns data = [1.5]*7 + [2.5]*2 + [3.5]*8 + [4.5]*3 + [5.5]*1 + [6.5]*8 sns.set_style('whitegrid') sns.kdeplot(np.array(data), bw=0.5) ```", "best_answers_score":0.6693, "library_name":"scipy", "question_url":"https:\/\/stackoverflow.com\/questions\/4150171\/how-to-create-a-density-plot", "best_answers_votes":205, "question_length":658, "response_length":440 }, { "question":"Efficient way to normalize a Scipy Sparse Matrix I'd like to write a function that normalizes the rows of a large sparse matrix (such that they sum to one). 
``` from pylab import * import scipy.sparse as sp def normalize(W): z = W.sum(0) z[z < 1e-6] = 1e-6 return W \/ z[None,:] w = (rand(10,10)<0.1)*rand(10,10) w = sp.csr_matrix(w) w = normalize(w) ``` However this gives the following exception: ``` File \"\/usr\/lib\/python2.6\/dist-packages\/scipy\/sparse\/base.py\", line 325, in __div__ return self.__truediv__(other) File \"\/usr\/lib\/python2.6\/dist-packages\/scipy\/sparse\/compressed.py\", line 230, in __truediv__ raise NotImplementedError ``` Are there any reasonably simple solutions? I have looked at this, but am still unclear on how to actually do the division.", "response":"This has been implemented in scikit-learn sklearn.preprocessing.normalize. ``` from sklearn.preprocessing import normalize w_normalized = normalize(w, norm='l1', axis=1) ``` axis=1 should normalize by rows, axis=0 to normalize by column. Use the optional argument copy=False to modify the matrix in place.", "best_answers_score":0.6691, "library_name":"scipy", "question_url":"https:\/\/stackoverflow.com\/questions\/12305021\/efficient-way-to-normalize-a-scipy-sparse-matrix", "best_answers_votes":49, "question_length":761, "response_length":305 }, { "question":"Scipy sparse matrix multiplication I have this example of matrix by matrix multiplication using numpy arrays: ``` import numpy as np m = np.array([[1,2,3],[4,5,6],[7,8,9]]) c = np.array([0,1,2]) m * c array([[ 0, 2, 6], [ 0, 5, 12], [ 0, 8, 18]]) ``` How can i do the same thing if m is scipy sparse CSR matrix? This gives dimension mismatch: ``` sp.sparse.csr_matrix(m)*sp.sparse.csr_matrix(c) ```", "response":"You can call the multiply method of csr_matrix to do pointwise multiplication. 
``` sparse.csr_matrix(m).multiply(sparse.csr_matrix(c)).todense() # matrix([[ 0, 2, 6], # [ 0, 5, 12], # [ 0, 8, 18]], dtype=int64) ```", "best_answers_score":0.6682, "library_name":"scipy", "question_url":"https:\/\/stackoverflow.com\/questions\/42537943\/scipy-sparse-matrix-multiplication", "best_answers_votes":24, "question_length":398, "response_length":214 }, { "question":"How to filter numpy array by list of indices? I have a numpy array, filtered__rows, comprised of LAS data [x, y, z, intensity, classification]. I have created a cKDTree of points and have found nearest neighbors, query_ball_point, which is a list of indices for the point and its neighbors. Is there a way to filter filtered__rows to create an array of only points whose index is in the list returned by query_ball_point?", "response":"It looks like you just need basic integer array indexing: ``` import numpy as np filter_indices = [1,3,5] np.array([11,13,155,22,0xff,32,56,88])[filter_indices] ```", "best_answers_score":0.6679, "library_name":"scipy", "question_url":"https:\/\/stackoverflow.com\/questions\/19821425\/how-to-filter-numpy-array-by-list-of-indices", "best_answers_votes":112, "question_length":421, "response_length":147 }, { "question":"Consecutive, Overlapping Subsets of Array (NumPy, Python) I have a NumPy array [1,2,3,4,5,6,7,8,9,10,11,12,13,14] and want to have an array structured like [[1,2,3,4], [2,3,4,5], [3,4,5,6], ..., [11,12,13,14]]. Sure this is possible by looping over the large array and adding arrays of length four to the new array, but I'm curious if there is some secret 'magic' Python method doing just this :)", "response":"You should use stride_tricks. When I first saw this, the word 'magic' did spring to mind. It's simple and is by far the fastest method.
``` >>> as_strided = numpy.lib.stride_tricks.as_strided >>> a = numpy.arange(1,15) >>> a array([ 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14]) >>> b = as_strided(a, (11,4), a.strides*2) >>> b array([[ 1, 2, 3, 4], [ 2, 3, 4, 5], [ 3, 4, 5, 6], [ 4, 5, 6, 7], [ 5, 6, 7, 8], [ 6, 7, 8, 9], [ 7, 8, 9, 10], [ 8, 9, 10, 11], [ 9, 10, 11, 12], [10, 11, 12, 13], [11, 12, 13, 14]]) ``` Be aware that the values in array b are those in a, just viewed differently. Do a .copy() on b if you plan to modify it. I saw this at a SciPy conference. Here are the slides for more explanation.", "best_answers_score":0.6669, "library_name":"scipy", "question_url":"https:\/\/stackoverflow.com\/questions\/2485669\/consecutive-overlapping-subsets-of-array-numpy-python", "best_answers_votes":35, "question_length":396, "response_length":715 }, { "question":"Fastest way to compute k largest eigenvalues and corresponding eigenvectors with numpy I have a large NxN dense symmetric matrix and want the eigenvectors corresponding to the k largest eigenvalues. What's the best way to find them (preferably using numpy but perhaps in general using blas\/atlas\/lapack if that's the only way to go)? In general N is much much larger then k (say N > 5000, k < 10). Numpy seems to only have functions for finding the k largest eigenvalues if my starting matrix is sparse.", "response":"In SciPy, you can use the linalg.eigh function, with the eigvals parameter. eigvals : tuple (lo, hi) Indexes of the smallest and largest (in ascending order) eigenvalues and corresponding eigenvectors to be returned: 0 <= lo < hi <= M-1. If omitted, all eigenvalues and eigenvectors are returned. 
Which in your case should be set to (N-k,N-1).", "best_answers_score":0.6642, "library_name":"scipy", "question_url":"https:\/\/stackoverflow.com\/questions\/12167654\/fastest-way-to-compute-k-largest-eigenvalues-and-corresponding-eigenvectors-with", "best_answers_votes":23, "question_length":503, "response_length":343 }, { "question":"Multiple variables in SciPy's optimize.minimize According to the SciPy documentation, it is possible to minimize functions with multiple variables, yet it doesn't say how to optimize such functions. ```py from scipy.optimize import minimize from math import * def f(c): return sqrt((sin(pi\/2) + sin(0) + sin(c) - 2)**2 + (cos(pi\/2) + cos(0) + cos(c) - 1)**2) print(minimize(f, 3.14\/2 + 3.14\/7)) ``` The above code try to minimize the function f, but for my task I need to minimize with respect to three variables. Simply introducing a second argument and adjusting minimize accordingly yields an error: ```none TypeError: f() takes exactly 2 arguments (1 given) ``` How does minimize work when minimizing with multiple variables?", "response":"Pack the multiple variables into a single array: ``` import scipy.optimize as optimize def f(params): # print(params) # <-- you'll see that params is a NumPy array a, b, c = params # <-- for readability you may wish to assign names to the component variables return a**2 + b**2 + c**2 initial_guess = [1, 1, 1] result = optimize.minimize(f, initial_guess) if result.success: fitted_params = result.x print(fitted_params) else: raise ValueError(result.message) ``` yields ``` [ -1.66705302e-08 -1.66705302e-08 -1.66705302e-08] ```", "best_answers_score":0.6609, "library_name":"scipy", "question_url":"https:\/\/stackoverflow.com\/questions\/13670333\/multiple-variables-in-scipys-optimize-minimize", "best_answers_votes":115, "question_length":729, "response_length":529 }, { "question":"Swap two values in a numpy array. 
Is there something more efficient than the following code to swap two values of a numpy 1D array? ``` input_seq = arange(64) ix1 = randint(len(input_seq)) ix2 = randint(len(input_seq)) temp = input_seq[ix2] input_seq[ix2] = input_seq[ix1] input_seq[ix1] = temp ```", "response":"I see you're using numpy arrays. In that case, you can also do this: ``` input_seq[[ix1, ix2]] = input_seq[[ix2, ix1]] ```", "best_answers_score":0.6594, "library_name":"scipy", "question_url":"https:\/\/stackoverflow.com\/questions\/22847410\/swap-two-values-in-a-numpy-array", "best_answers_votes":58, "question_length":299, "response_length":122 }, { "question":"Distance between point and a line (from two points) I'm using Python+Numpy (can maybe also use Scipy) and have three 2D points ``` (P1, P2, P3); ``` I am trying to get the distance from P3 perpendicular to a line drawn between P1 and P2. Let P1=(x1,y1), P2=(x2,y2) and P3=(x3,y3). In vector notation this would be pretty easy, but I'm fairly new to python\/numpy and can't get anything that works (or even close). Any tips appreciated, thanks!", "response":"Try using the norm function from numpy.linalg ``` d = norm(np.cross(p2-p1, p1-p3))\/norm(p2-p1) ```", "best_answers_score":0.6562, "library_name":"scipy", "question_url":"https:\/\/stackoverflow.com\/questions\/39840030\/distance-between-point-and-a-line-from-two-points", "best_answers_votes":65, "question_length":440, "response_length":98 }, { "question":"how to explain the decision tree from scikit-learn I have two problems with understanding the result of a decision tree from scikit-learn. For example, this is one of my decision trees: My question is: how can I use the tree? The first question is: if a sample satisfies the condition, it goes to the LEFT branch (if it exists), otherwise it goes RIGHT. In my case, if a sample has X[7] > 63521.3984, then the sample will go to the green box. Correct? 
The second question is: when a sample reaches the leaf node, how can I know which category it belongs to? In this example, I have three categories to classify. In the red box, 91, 212, and 113 samples satisfy the condition, respectively. But how can I decide the category? I know there is a function clf.predict(sample) to tell the category. Can I do that from the graph??? Many thanks.", "response":"The value line in each box is telling you how many samples at that node fall into each category, in order. That's why, in each box, the numbers in value add up to the number shown in sample. For instance, in your red box, 91+212+113=416. So this means if you reach this node, there were 91 data points in category 1, 212 in category 2, and 113 in category 3. If you were going to predict the outcome for a new data point that reached that leaf in the decision tree, you would predict category 2, because that is the most common category for samples at that node.", "best_answers_score":0.6555, "library_name":"scipy", "question_url":"https:\/\/stackoverflow.com\/questions\/23557545\/how-to-explain-the-decision-tree-from-scikit-learn", "best_answers_votes":34, "question_length":866, "response_length":562 }, { "question":"What is the fastest way to slice a scipy.sparse matrix? I normally use ``` matrix[:, i:] ``` It does not seem to work as fast as I expected.", "response":"To obtain a sparse matrix as output, the fastest way to do row slicing is to have a csr type, and for column slicing csc, as detailed here. 
In both cases you just have to do what you are currently doing: ``` matrix[l1:l2, c1:c2] ``` If you want a ndarray as output it might be faster to perform the slicing directly in the ndarray object, which you can obtain from the sparse matrix using the .A attribute or the .toarray() method: ``` matrix.A[l1:l2, c1:c2] ``` or: ``` matrix.toarray()[l1:l2, c1:c2] ``` As mentioned in the comment below, converting the sparse array to a dense array might lead to memory errors if the array is big enough.", "best_answers_score":0.6549, "library_name":"scipy", "question_url":"https:\/\/stackoverflow.com\/questions\/13843352\/what-is-the-fastest-way-to-slice-a-scipy-sparse-matrix", "best_answers_votes":23, "question_length":133, "response_length":641 }, { "question":"Is the order of a Python dictionary guaranteed over iterations? I'm currently implementing a complex microbial food-web in Python using SciPy.integrate.ode. I need the ability to easily add species and reactions to the system, so I have to code up something quite general. My scheme looks something like this: ``` class Reaction(object): def __init__(self): #stuff common to all reactions def __getReactionRate(self, **kwargs): raise NotImplementedError ... Reaction subclasses that ... implement specific types of reactions class Species(object): def __init__(self, reactionsDict): self.reactionsDict = reactionsDict #reactionsDict looks like {'ReactionName':reactionObject, ...} #stuff common to all species def sumOverAllReactionsForThisSpecies(self, **kwargs): #loop over all the reactions and return the #cumulative change in the concentrations of all solutes ...Species subclasses where for each species ... 
are defined and passed to the superclass constructor class FermentationChamber(object): def __init__(self, speciesList, timeToSolve, *args): #do initialization def step(self): #loop over each species, which in turn loops #over each reaction inside it and return a #cumulative dictionary of total change for each #solute in the whole system if __name__==__main__: f = FermentationChamber(...) o = ode(...) #initialize ode solver while o.successful() and o.t>> import matplotlib.pyplot as plt >>> x = [1,2,3,4] >>> y = [1,2,3,4] >>> m = [[15,14,13,12],[14,12,10,8],[13,10,7,4],[12,8,4,0]] >>> cs = plt.contour(x,y,m, [9.5]) >>> cs.collections[0].get_paths() ``` The result of this call into cs.collections[0].get_paths() is: ``` [Path([[ 4. 1.625 ] [ 3.25 2. ] [ 3. 2.16666667] [ 2.16666667 3. ] [ 2. 3.25 ] [ 1.625 4. ]], None)] ``` Based on the plots, this result makes sense and appears to be collection of (y,x) pairs for the contour line. Other than manually looping over this return value, extracting the coordinates and assembling arrays for the line, are there better ways to get data back from a matplotlib.path object? Are there pitfalls to be aware of when extracting data from a matplotlib.path? Alternatively, are there alternatives within matplotlib or better yet numpy\/scipy to do a similar thing? Ideal thing would be to get a high resolution vector of (x,y) pairs describing the line, which could be used for further analysis, as in general my datasets are not a small or simple as the example above.", "response":"For a given path, you can get the points like this: ``` p = cs.collections[0].get_paths()[0] v = p.vertices x = v[:,0] y = v[:,1] ```", "best_answers_score":0.6528, "library_name":"scipy", "question_url":"https:\/\/stackoverflow.com\/questions\/5666056\/matplotlib-extracting-data-from-contour-lines", "best_answers_votes":65, "question_length":1346, "response_length":133 }, { "question":"Computing cross-correlation function? 
In R, I am using ccf or acf to compute the pair-wise cross-correlation function so that I can find out which shift gives me the maximum value. From the looks of it, R gives me a normalized sequence of values. Is there something similar in Python's scipy or am I supposed to do it using the fft module? Currently, I am doing it as follows: ``` xcorr = lambda x,y : irfft(rfft(x)*rfft(y[::-1])) x = numpy.array([0,0,1,1]) y = numpy.array([1,1,0,0]) print xcorr(x,y) ```", "response":"To cross-correlate 1d arrays use numpy.correlate. For 2d arrays, use scipy.signal.correlate2d. There is also scipy.stsci.convolve.correlate2d. There is also matplotlib.pyplot.xcorr which is based on numpy.correlate. See this post on the SciPy mailing list for some links to different implementations. Edit: @user333700 added a link to the SciPy ticket for this issue in a comment.", "best_answers_score":0.6521, "library_name":"scipy", "question_url":"https:\/\/stackoverflow.com\/questions\/6991471\/computing-cross-correlation-function", "best_answers_votes":56, "question_length":505, "response_length":380 }, { "question":"Interpolation over an irregular grid So, I have three numpy arrays which store latitude, longitude, and some property value on a grid -- that is, I have LAT(y,x), LON(y,x), and, say temperature T(y,x), for some limits of x and y. The grid isn't necessarily regular -- in fact, it's tripolar. I then want to interpolate these property (temperature) values onto a bunch of different lat\/lon points (stored as lat1(t), lon1(t), for about 10,000 t...) which do not fall on the actual grid points. I've tried matplotlib.mlab.griddata, but that takes far too long (it's not really designed for what I'm doing, after all). I've also tried scipy.interpolate.interp2d, but I get a MemoryError (my grids are about 400x400). Is there any sort of slick, preferably fast way of doing this? I can't help but think the answer is something obvious... 
Thanks!!", "response":"There is a nice inverse distance example by Roger Veciana i Rovira along with some code using GDAL to write to geotiff if you're into that. This is of course to a regular grid, but assuming you project the data first to a pixel grid with pyproj or something, all the while being careful what projection is used for your data. A copy of his algorithm and example script: ``` from math import pow from math import sqrt import numpy as np import matplotlib.pyplot as plt def pointValue(x,y,power,smoothing,xv,yv,values): nominator=0 denominator=0 for i in range(0,len(values)): dist = sqrt((x-xv[i])*(x-xv[i])+(y-yv[i])*(y-yv[i])+smoothing*smoothing); #If the point is really close to one of the data points, return the data point value to avoid singularities if(dist < 0.0000000001): return values[i] nominator=nominator+(values[i]\/pow(dist,power)) denominator=denominator+(1\/pow(dist,power)) #Return NODATA if the denominator is zero if denominator > 0: value = nominator\/denominator else: value = -9999 return value def invDist(xv,yv,values,xsize=100,ysize=100,power=2,smoothing=0): valuesGrid = np.zeros((ysize,xsize)) for x in range(0,xsize): for y in range(0,ysize): valuesGrid[y][x] = pointValue(x,y,power,smoothing,xv,yv,values) return valuesGrid if __name__ == \"__main__\": power=1 smoothing=20 #Creating some data, with each coordinate and the values stored in separate lists xv = [10,60,40,70,10,50,20,70,30,60] yv = [10,20,30,30,40,50,60,70,80,90] values = [1,2,2,3,4,6,7,7,8,10] #Creating the output grid (100x100, in the example) ti = np.linspace(0, 100, 100) XI, YI = np.meshgrid(ti, ti) #Creating the interpolation function and populating the output matrix value ZI = invDist(xv,yv,values,100,100,power,smoothing) # Plotting the result n = plt.normalize(0.0, 100.0) plt.subplot(1, 1, 1) plt.pcolor(XI, YI, ZI) plt.scatter(xv, yv, 100, values) plt.title('Inv dist interpolation - power: ' + str(power) + ' smoothing: ' + str(smoothing)) plt.xlim(0, 100) plt.ylim(0, 100) plt.colorbar() plt.show() ```", "best_answers_score":0.65, "library_name":"scipy", 
"question_url":"https:\/\/stackoverflow.com\/questions\/3242382\/interpolation-over-an-irregular-grid", "best_answers_votes":9, "question_length":843, "response_length":1826 }, { "question":"How do I create an array whose elements are all equal to a specified value? How do I create an array where every entry is the same value? I know numpy.ones() and numpy.zeros() do this for 1's and 0's, but what about -1? For example: ``` >>import numpy as np >>np.zeros((3,3)) array([[ 0., 0., 0.], [ 0., 0., 0.], [ 0., 0., 0.]]) >>np.ones((2,5)) array([[ 1., 1., 1., 1., 1.], [ 1., 1., 1., 1., 1.]]) >>np.negative_ones((2,5)) ??? ```", "response":"I don't know if there's a nice one-liner without an arithmetic operation, but probably the fastest approach is to create an uninitialized array using empty and then use .fill() to set the values. For comparison: ``` >>> timeit m = np.zeros((3,3)); m += -1 100000 loops, best of 3: 6.9 us per loop >>> timeit m = np.ones((3,3)); m *= -1 100000 loops, best of 3: 9.49 us per loop >>> timeit m = np.zeros((3,3)); m.fill(-1) 100000 loops, best of 3: 2.31 us per loop >>> timeit m = np.empty((3,3)); m[:] = -1 100000 loops, best of 3: 3.18 us per loop >>> timeit m = np.empty((3,3)); m.fill(-1) 100000 loops, best of 3: 2.09 us per loop ``` but to be honest, I tend to either add to the zero matrix or multiply the ones matrix instead, as initialization is seldom a bottleneck.", "best_answers_score":0.65, "library_name":"scipy", "question_url":"https:\/\/stackoverflow.com\/questions\/14112235\/how-do-i-create-an-array-whose-elements-are-all-equal-to-a-specified-value", "best_answers_votes":21, "question_length":433, "response_length":772 }, { "question":"python equivalent of qnorm, qf and qchi2 of R I need the quantile of some distributions in python. In r it is possible to compute these values using the qf, qnorm and qchi2 functions. Is there any python equivalent of these R functions? 
I have been looking at scipy but I did not find anything.", "response":"The equivalent of the R pnorm() function in Python is scipy.stats.norm.cdf(). The equivalent of the R qnorm() function is scipy.stats.norm.ppf().", "best_answers_score":0.6492, "library_name":"scipy", "question_url":"https:\/\/stackoverflow.com\/questions\/24695174\/python-equivalent-of-qnorm-qf-and-qchi2-of-r", "best_answers_votes":57, "question_length":294, "response_length":159 }, { "question":"Read .mat files in Python Is it possible to read binary MATLAB .mat files in Python? I've seen that SciPy has alleged support for reading .mat files, but I'm unsuccessful with it. I installed SciPy version 0.7.0, and I can't find the loadmat() method.", "response":"An import is required, import scipy.io... ``` import scipy.io mat = scipy.io.loadmat('file.mat') ```", "best_answers_score":0.6452, "library_name":"scipy", "question_url":"https:\/\/stackoverflow.com\/questions\/874461\/read-mat-files-in-python", "best_answers_votes":819, "question_length":251, "response_length":100 }, { "question":"How to perform two-sample one-tailed t-test with numpy\/scipy In R, it is possible to perform a two-sample one-tailed t-test simply by using ``` > A = c(0.19826790, 1.36836629, 1.37950911, 1.46951540, 1.48197798, 0.07532846) > B = c(0.6383447, 0.5271385, 1.7721380, 1.7817880) > t.test(A, B, alternative=\"greater\") Welch Two Sample t-test data: A and B t = -0.4189, df = 6.409, p-value = 0.6555 alternative hypothesis: true difference in means is greater than 0 95 percent confidence interval: -1.029916 Inf sample estimates: mean of x mean of y 0.9954942 1.1798523 ``` In the Python world, scipy provides a similar function, ttest_ind, but it can only do two-tailed t-tests. The closest information on the topic I found is this link, but it seems to be rather a discussion of the policy of implementing one-tailed vs two-tailed tests in scipy. 
Therefore, my question is: does anyone know of any examples or instructions on how to perform a one-tailed version of the test using numpy\/scipy?", "response":"From your mailing list link: because the one-sided tests can be backed out from the two-sided tests. (With symmetric distributions the one-sided p-value is just half of the two-sided p-value.) It goes on to say that scipy always gives the test statistic as signed. This means that given p and t values from a two-tailed test, you would reject the null hypothesis of a greater-than test when p\/2 < alpha and t > 0, and of a less-than test when p\/2 < alpha and t < 0.", "best_answers_score":0.6445, "library_name":"scipy", "question_url":"https:\/\/stackoverflow.com\/questions\/15984221\/how-to-perform-two-sample-one-tailed-t-test-with-numpy-scipy", "best_answers_votes":94, "question_length":972, "response_length":444 }, { "question":"SciPy Create 2D Polygon Mask I need to create a numpy 2D array which represents a binary mask of a polygon, using standard Python packages. input: polygon vertices, image dimensions output: binary mask of polygon (numpy 2D array) (Larger context: I want to get the distance transform of this polygon using scipy.ndimage.morphology.distance_transform_edt.) Can anyone show me how to do this?", "response":"The answer turns out to be quite simple: ``` import numpy from PIL import Image, ImageDraw # polygon = [(x1,y1),(x2,y2),...] or [x1,y1,x2,y2,...] # width = ? # height = ? 
img = Image.new('L', (width, height), 0) ImageDraw.Draw(img).polygon(polygon, outline=1, fill=1) mask = numpy.array(img) ```", "best_answers_score":0.643, "library_name":"scipy", "question_url":"https:\/\/stackoverflow.com\/questions\/3654289\/scipy-create-2d-polygon-mask", "best_answers_votes":99, "question_length":390, "response_length":295 }, { "question":"scipy minimize with constraints Let's suppose I have a matrix ```py arr = array([[0.8, 0.2],[-0.1, 0.14]]) ``` with a target function ```py def matr_t(t): return array([[t[0], 0],[t[2]+complex(0,1)*t[3], t[1]]]) def target(t): arr2 = matr_t(t) ret = 0 for i, v1 in enumerate(arr): for j, v2 in enumerate(v1): ret += abs(arr[i][j]-arr2[i][j])**2 return ret ``` now I want to minimize this target function under the assumption that the t[i] are real numbers, and something like t[0]+t[1]=1.", "response":"This constraint ``` t[0] + t[1] = 1 ``` would be an equality (type='eq') constraint, where you make a function that must equal zero: ``` def con(t): return t[0] + t[1] - 1 ``` Then you make a dict of your constraint (list of dicts if more than one): ``` cons = {'type':'eq', 'fun': con} ``` I've never tried it, but I believe that to keep t real, you could use: ``` def con_real(t): return np.sum(np.iscomplex(t)) ``` And make your cons include both constraints: ``` cons = [{'type':'eq', 'fun': con}, {'type':'eq', 'fun': con_real}] ``` Then you feed cons into minimize as: ``` scipy.optimize.minimize(func, x0, constraints=cons) ```", "best_answers_score":0.6402, "library_name":"scipy", "question_url":"https:\/\/stackoverflow.com\/questions\/20075714\/scipy-minimize-with-constraints", "best_answers_votes":73, "question_length":487, "response_length":630 }, { "question":"Error \"filename.whl is not a supported wheel on this platform\" I would like to install scipy-0.15.1-cp33-none-win_amd64.whl that I have saved to the local drive. 
I am using: ```none pip 6.0.8 from C:\\Python27\\Lib\\site-packages python 2.7.9 (default, Dec 10 2014, 12:28:03) [MSC v.1500 64 bit (AMD64)] ``` When I run: ```none pip install scipy-0.15.1-cp33-none-win_amd64.whl ``` I get the following error: scipy-0.15.1-cp33-none-win_amd64.whl is not a supported wheel on this platform What is the problem?", "response":"cp33 means CPython 3.3. You need scipy\u20110.15.1\u2011cp27\u2011none\u2011win_amd64.whl instead.", "best_answers_score":0.6398, "library_name":"scipy", "question_url":"https:\/\/stackoverflow.com\/questions\/28568070\/error-filename-whl-is-not-a-supported-wheel-on-this-platform", "best_answers_votes":560, "question_length":504, "response_length":78 }, { "question":"Fast interpolation of regularly sampled 3D data with different intervals in x,y, and z I have some volumetric imaging data consisting of values sampled on a regular grid in x,y,z, but with a non-cubic voxel shape (the space between adjacent points in z is greater than in x,y). I would eventually like to be able to interpolate the values on some arbitrary 2D plane that passes through the volume, like this: I'm aware of scipy.ndimage.map_coordinates, but in my case using it is less straightforward because it implicitly assumes that the spacing of the elements in the input array are equal across dimensions. I could first resample my input array according to the smallest voxel dimension (so that all of my voxels would then be cubes), then use map_coordinates to interpolate over my plane, but it doesn't seem like a great idea to interpolate my data twice. I'm also aware that scipy has various interpolators for irregularly-spaced ND data (LinearNDInterpolator, NearestNDInterpolator etc.), but these are very slow and memory-intensive for my purposes. What is the best way of interpolating my data given that I know that the values are regularly spaced within each dimension?", "response":"You can use map_coordinates with a little bit of algebra. 
Let's say the spacings of your grid are dx, dy and dz. We need to map these real-world coordinates to array index coordinates, so let's define three new variables: ``` xx = x \/ dx yy = y \/ dy zz = z \/ dz ``` The array index input to map_coordinates is an array of shape (d, ...) where d is the number of dimensions of your original data. If you define an array such as: ``` scaling = np.array([dx, dy, dz]) ``` you can transform your real-world coordinates to array index coordinates by dividing by scaling with a little broadcasting magic: ``` idx = coords \/ scaling[(slice(None),) + (None,)*(coords.ndim-1)] ``` To put it all together in an example: ``` dx, dy, dz = 1, 1, 2 scaling = np.array([dx, dy, dz]) data = np.random.rand(10, 15, 5) ``` Let's say we want to interpolate values along the plane 2*y - z = 0. We take two vectors perpendicular to the plane's normal vector: ``` u = np.array([1, 0, 0]) v = np.array([0, 1, 2]) ``` And get the coordinates at which we want to interpolate as: ``` coords = (u[:, None, None] * np.linspace(0, 9, 10)[None, :, None] + v[:, None, None] * np.linspace(0, 2.5, 10)[None, None, :]) ``` We convert them to array index coordinates and interpolate using map_coordinates: ``` idx = coords \/ scaling[(slice(None),) + (None,)*(coords.ndim-1)] new_data = ndi.map_coordinates(data, idx) ``` This last array is of shape (10, 10) and has in position [u_idx, v_idx] the value corresponding to the coordinate coords[:, u_idx, v_idx]. 
You could build on this idea to handle interpolation where your coordinates don't start at zero, by adding an offset before the scaling.", "best_answers_score":0.6388, "library_name":"scipy", "question_url":"https:\/\/stackoverflow.com\/questions\/16217995\/fast-interpolation-of-regularly-sampled-3d-data-with-different-intervals-in-x-y", "best_answers_votes":19, "question_length":1183, "response_length":1657 }, { "question":"Efficient method of calculating density of irregularly spaced points I am attempting to generate map overlay images that would assist in identifying hot-spots, that is areas on the map that have high density of data points. None of the approaches that I've tried are fast enough for my needs. Note: I forgot to mention that the algorithm should work well under both low and high zoom scenarios (or low and high data point density). I looked through numpy, pyplot and scipy libraries, and the closest I could find was numpy.histogram2d. As you can see in the image below, the histogram2d output is rather crude. (Each image includes points overlaying the heatmap for better understanding) My second attempt was to iterate over all the data points, and then calculate the hot-spot value as a function of distance. This produced a better looking image, however it is too slow to use in my application. Since it's O(n), it works ok with 100 points, but blows out when I use my actual dataset of 30000 points. My final attempt was to store the data in an KDTree, and use the nearest 5 points to calculate the hot-spot value. This algorithm is O(1), so much faster with large dataset. It's still not fast enough, it takes about 20 seconds to generate a 256x256 bitmap, and I would like this to happen in around 1 second time. Edit The boxsum smoothing solution provided by 6502 works well at all zoom levels and is much faster than my original methods. The gaussian filter solution suggested by Luke and Neil G is the fastest. 
You can see all four approaches below, using 1000 data points in total, at 3x zoom there are around 60 points visible. Complete code that generates my original 3 attempts, the boxsum smoothing solution provided by 6502 and gaussian filter suggested by Luke (improved to handle edges better and allow zooming in) is here: ``` import matplotlib import numpy as np from matplotlib.mlab import griddata import matplotlib.cm as cm import matplotlib.pyplot as plt import math from scipy.spatial import KDTree import time import scipy.ndimage as ndi def grid_density_kdtree(xl, yl, xi, yi, dfactor): zz = np.empty([len(xi),len(yi)], dtype=np.uint8) zipped = zip(xl, yl) kdtree = KDTree(zipped) for xci in range(0, len(xi)): xc = xi[xci] for yci in range(0, len(yi)): yc = yi[yci] density = 0. retvalset = kdtree.query((xc,yc), k=5) for dist in retvalset[0]: density = density + math.exp(-dfactor * pow(dist, 2)) \/ 5 zz[yci][xci] = min(density, 1.0) * 255 return zz def grid_density(xl, yl, xi, yi): ximin, ximax = min(xi), max(xi) yimin, yimax = min(yi), max(yi) xxi,yyi = np.meshgrid(xi,yi) #zz = np.empty_like(xxi) zz = np.empty([len(xi),len(yi)]) for xci in range(0, len(xi)): xc = xi[xci] for yci in range(0, len(yi)): yc = yi[yci] density = 0. 
for i in range(0,len(xl)): xd = math.fabs(xl[i] - xc) yd = math.fabs(yl[i] - yc) if xd < 1 and yd < 1: dist = math.sqrt(math.pow(xd, 2) + math.pow(yd, 2)) density = density + math.exp(-5.0 * pow(dist, 2)) zz[yci][xci] = density return zz def boxsum(img, w, h, r): st = [0] * (w+1) * (h+1) for x in xrange(w): st[x+1] = st[x] + img[x] for y in xrange(h): st[(y+1)*(w+1)] = st[y*(w+1)] + img[y*w] for x in xrange(w): st[(y+1)*(w+1)+(x+1)] = st[(y+1)*(w+1)+x] + st[y*(w+1)+(x+1)] - st[y*(w+1)+x] + img[y*w+x] for y in xrange(h): y0 = max(0, y - r) y1 = min(h, y + r + 1) for x in xrange(w): x0 = max(0, x - r) x1 = min(w, x + r + 1) img[y*w+x] = st[y0*(w+1)+x0] + st[y1*(w+1)+x1] - st[y1*(w+1)+x0] - st[y0*(w+1)+x1] def grid_density_boxsum(x0, y0, x1, y1, w, h, data): kx = (w - 1) \/ (x1 - x0) ky = (h - 1) \/ (y1 - y0) r = 15 border = r * 2 imgw = (w + 2 * border) imgh = (h + 2 * border) img = [0] * (imgw * imgh) for x, y in data: ix = int((x - x0) * kx) + border iy = int((y - y0) * ky) + border if 0 <= ix < imgw and 0 <= iy < imgh: img[iy * imgw + ix] += 1 for p in xrange(4): boxsum(img, imgw, imgh, r) a = np.array(img).reshape(imgh,imgw) b = a[border:(border+h),border:(border+w)] return b def grid_density_gaussian_filter(x0, y0, x1, y1, w, h, data): kx = (w - 1) \/ (x1 - x0) ky = (h - 1) \/ (y1 - y0) r = 20 border = r imgw = (w + 2 * border) imgh = (h + 2 * border) img = np.zeros((imgh,imgw)) for x, y in data: ix = int((x - x0) * kx) + border iy = int((y - y0) * ky) + border if 0 <= ix < imgw and 0 <= iy < imgh: img[iy][ix] += 1 return ndi.gaussian_filter(img, (r,r)) ## gaussian convolution def generate_graph(): n = 1000 # data points range data_ymin = -2. data_ymax = 2. data_xmin = -2. data_xmax = 2. 
# view area range view_ymin = -.5 view_ymax = .5 view_xmin = -.5 view_xmax = .5 # generate data xl = np.random.uniform(data_xmin, data_xmax, n) yl = np.random.uniform(data_ymin, data_ymax, n) zl = np.random.uniform(0, 1, n) # get visible data points xlvis = [] ylvis = [] for i in range(0,len(xl)): if view_xmin < xl[i] < view_xmax and view_ymin < yl[i] < view_ymax: xlvis.append(xl[i]) ylvis.append(yl[i]) fig = plt.figure() # plot histogram plt1 = fig.add_subplot(221) plt1.set_axis_off() t0 = time.clock() zd, xe, ye = np.histogram2d(yl, xl, bins=10, range=[[view_ymin, view_ymax],[view_xmin, view_xmax]], normed=True) plt.title('numpy.histogram2d - '+str(time.clock()-t0)+\"sec\") plt.imshow(zd, origin='lower', extent=[view_xmin, view_xmax, view_ymin, view_ymax]) plt.scatter(xlvis, ylvis) # plot density calculated with kdtree plt2 = fig.add_subplot(222) plt2.set_axis_off() xi = np.linspace(view_xmin, view_xmax, 256) yi = np.linspace(view_ymin, view_ymax, 256) t0 = time.clock() zd = grid_density_kdtree(xl, yl, xi, yi, 70) plt.title('function of 5 nearest using kdtree\\n'+str(time.clock()-t0)+\"sec\") cmap=cm.jet A = (cmap(zd\/256.0)*255).astype(np.uint8) #A[:,:,3] = zd plt.imshow(A , origin='lower', extent=[view_xmin, view_xmax, view_ymin, view_ymax]) plt.scatter(xlvis, ylvis) # gaussian filter plt3 = fig.add_subplot(223) plt3.set_axis_off() t0 = time.clock() zd = grid_density_gaussian_filter(view_xmin, view_ymin, view_xmax, view_ymax, 256, 256, zip(xl, yl)) plt.title('ndi.gaussian_filter - '+str(time.clock()-t0)+\"sec\") plt.imshow(zd , origin='lower', extent=[view_xmin, view_xmax, view_ymin, view_ymax]) plt.scatter(xlvis, ylvis) # boxsum smoothing plt3 = fig.add_subplot(224) plt3.set_axis_off() t0 = time.clock() zd = grid_density_boxsum(view_xmin, view_ymin, view_xmax, view_ymax, 256, 256, zip(xl, yl)) plt.title('boxsum smoothing - '+str(time.clock()-t0)+\"sec\") plt.imshow(zd, origin='lower', extent=[view_xmin, view_xmax, view_ymin, view_ymax]) plt.scatter(xlvis, 
ylvis) if __name__=='__main__': generate_graph() plt.show() ```", "response":"This approach is along the lines of some previous answers: increment a pixel for each spot, then smooth the image with a gaussian filter. A 256x256 image runs in about 350ms on my 6-year-old laptop. ``` import numpy as np import scipy.ndimage as ndi data = np.random.rand(30000,2) ## create random dataset inds = (data * 255).astype('uint') ## convert to indices img = np.zeros((256,256)) ## blank image for i in xrange(data.shape[0]): ## draw pixels img[inds[i,0], inds[i,1]] += 1 img = ndi.gaussian_filter(img, (10,10)) ```", "best_answers_score":0.6385, "library_name":"scipy", "question_url":"https:\/\/stackoverflow.com\/questions\/6652671\/efficient-method-of-calculating-density-of-irregularly-spaced-points", "best_answers_votes":31, "question_length":6527, "response_length":525 }, { "question":"Obtaining values used in boxplot, using python and matplotlib I can draw a boxplot from data: ``` import numpy as np import matplotlib.pyplot as plt data = np.random.rand(100) plt.boxplot(data) ``` Then, the box will range from the 25th-percentile to 75th-percentile, and the whisker will range from the smallest value to the largest value between (25th-percentile - 1.5*IQR, 75th-percentile + 1.5*IQR), where the IQR denotes the inter-quartile range. (Of course, the value 1.5 is customizable). Now I want to know the values used in the boxplot, i.e. the median, upper and lower quartile, the upper whisker end point and the lower whisker end point. While the former three are easy to obtain by using np.median() and np.percentile(), the end point of the whiskers will require some verbose coding: ``` median = np.median(data) upper_quartile = np.percentile(data, 75) lower_quartile = np.percentile(data, 25) iqr = upper_quartile - lower_quartile upper_whisker = data[data<=upper_quartile+1.5*iqr].max() lower_whisker = data[data>=lower_quartile-1.5*iqr].min() ``` I was wondering, while this is acceptable, would there be a neater way to do this? 
It seems that the values should be ready to pull out of the boxplot, as it's already drawn.", "response":"Why do you want to do so? What you are doing is already pretty direct. Yeah, if you want to fetch them from the plot, when the plot is already made, simply use the get_ydata() method. ``` B = plt.boxplot(data) [item.get_ydata() for item in B['whiskers']] ``` It returns an array of shape (2,) for each whisker; the second element is the value we want: ``` [item.get_ydata()[1] for item in B['whiskers']] ```", "best_answers_score":0.6378, "library_name":"scipy", "question_url":"https:\/\/stackoverflow.com\/questions\/23461713\/obtaining-values-used-in-boxplot-using-python-and-matplotlib", "best_answers_votes":39, "question_length":1184, "response_length":411 }, { "question":"scipy.stats seed? I am trying to generate scipy.stats.pareto.rvs(b, loc=0, scale=1, size=1) with different seeds. In numpy we can seed using numpy.random.seed(seed=233423). Is there any way to seed the random numbers generated by scipy.stats? Note: I am not using numpy's pareto because I want to give different values for scale.", "response":"scipy.stats just uses numpy.random to generate its random numbers, so numpy.random.seed() will work here as well. E.g., ``` import numpy as np from scipy.stats import pareto b = 0.9 np.random.seed(seed=233423) print pareto.rvs(b, loc=0, scale=1, size=5) np.random.seed(seed=233423) print pareto.rvs(b, loc=0, scale=1, size=5) ``` will print [ 9.7758784 10.78405752 4.19704602 1.19256849 1.02750628] twice.", "best_answers_score":0.6377, "library_name":"scipy", "question_url":"https:\/\/stackoverflow.com\/questions\/16016959\/scipy-stats-seed", "best_answers_votes":60, "question_length":325, "response_length":405 }, { "question":"Convert ndarray from float64 to integer I've got an ndarray in python with a dtype of float64. I'd like to convert the array to be an array of integers. How should I do this? 
int() won't work, as it says it can't convert it to a scalar. Changing the dtype field itself obviously doesn't work, as the actual bytes haven't changed. I can't seem to find anything on Google or in the documentation - what's the best way to do this?", "response":"Use .astype. ``` >>> a = numpy.array([1, 2, 3, 4], dtype=numpy.float64) >>> a array([ 1., 2., 3., 4.]) >>> a.astype(numpy.int64) array([1, 2, 3, 4]) ``` See the documentation for more options.", "best_answers_score":0.6373, "library_name":"scipy", "question_url":"https:\/\/stackoverflow.com\/questions\/8855574\/convert-ndarray-from-float64-to-integer", "best_answers_votes":97, "question_length":427, "response_length":192 }, { "question":"Multivariate normal density in Python? Is there any python package that allows the efficient computation of the PDF (probability density function) of a multivariate normal distribution? It doesn't seem to be included in Numpy\/Scipy, and surprisingly a Google search didn't turn up any useful thing.", "response":"The multivariate normal is now available on SciPy 0.14.0.dev-16fc0af: ``` from scipy.stats import multivariate_normal var = multivariate_normal(mean=[0,0], cov=[[1,0],[0,1]]) var.pdf([1,0]) ```", "best_answers_score":0.6345, "library_name":"scipy", "question_url":"https:\/\/stackoverflow.com\/questions\/11615664\/multivariate-normal-density-in-python", "best_answers_votes":101, "question_length":298, "response_length":193 }, { "question":"Multiple linear regression in Python I can't seem to find any python libraries that do multiple regression. The only things I find only do simple regression. I need to regress my dependent variable (y) against several independent variables (x1, x2, x3, etc.). 
For example, with this data: ``` print 'y x1 x2 x3 x4 x5 x6 x7' for t in texts: print \"{:>7.1f}{:>10.2f}{:>9.2f}{:>9.2f}{:>10.2f}{:>7.2f}{:>7.2f}{:>9.2f}\" \/ .format(t.y,t.x1,t.x2,t.x3,t.x4,t.x5,t.x6,t.x7) ``` (output for above:) ``` y x1 x2 x3 x4 x5 x6 x7 -6.0 -4.95 -5.87 -0.76 14.73 4.02 0.20 0.45 -5.0 -4.55 -4.52 -0.71 13.74 4.47 0.16 0.50 -10.0 -10.96 -11.64 -0.98 15.49 4.18 0.19 0.53 -5.0 -1.08 -3.36 0.75 24.72 4.96 0.16 0.60 -8.0 -6.52 -7.45 -0.86 16.59 4.29 0.10 0.48 -3.0 -0.81 -2.36 -0.50 22.44 4.81 0.15 0.53 -6.0 -7.01 -7.33 -0.33 13.93 4.32 0.21 0.50 -8.0 -4.46 -7.65 -0.94 11.40 4.43 0.16 0.49 -8.0 -11.54 -10.03 -1.03 18.18 4.28 0.21 0.55 ``` How would I regress these in python, to get the linear regression formula: Y = a1x1 + a2x2 + a3x3 + a4x4 + a5x5 + a6x6 + +a7x7 + c", "response":"sklearn.linear_model.LinearRegression will do it: ``` from sklearn import linear_model clf = linear_model.LinearRegression() clf.fit([[getattr(t, 'x%d' % i) for i in range(1, 8)] for t in texts], [t.y for t in texts]) ``` Then clf.coef_ will have the regression coefficients. sklearn.linear_model also has similar interfaces to do various kinds of regularizations on the regression.", "best_answers_score":0.6316, "library_name":"scipy", "question_url":"https:\/\/stackoverflow.com\/questions\/11479064\/multiple-linear-regression-in-python", "best_answers_votes":119, "question_length":1050, "response_length":382 }, { "question":"Does Python SciPy need BLAS? ``` numpy.distutils.system_info.BlasNotFoundError: Blas (http:\/\/www.netlib.org\/blas\/) libraries not found. Directories to search for the libraries can be specified in the numpy\/distutils\/site.cfg file (section [blas]) or by setting the BLAS environment variable. ``` Which tar do I need to download off this site? 
I've tried the Fortran packages, but I keep getting this error (after setting the environment variable, obviously).", "response":"If you need to use the latest versions of SciPy rather than the packaged version, without going through the hassle of building BLAS and LAPACK, you can follow the procedure below. Install the linear algebra libraries from the repository (on Ubuntu): ``` sudo apt-get install gfortran libopenblas-dev liblapack-dev ``` Then install SciPy (after downloading the SciPy source) with python setup.py install, or ``` pip install scipy ``` as the case may be.", "best_answers_score":0.6303, "library_name":"scipy", "question_url":"https:\/\/stackoverflow.com\/questions\/7496547\/does-python-scipy-need-blas", "best_answers_votes":341, "question_length":449, "response_length":441 }, { "question":"Linear regression of arrays containing NAN values I have two arrays, say varx and vary. Both contain NaN values at various positions. However, I would like to do a linear regression on both to show how much the two arrays correlate. This was very helpful so far. However, using the following ```py slope, intercept, r_value, p_value, std_err = stats.linregress(varx, vary) ``` results in NaNs for every output variable. What is the most convenient way to take only valid values from both arrays as input to the linear regression? I heard about masking arrays, but am not sure how it works exactly.", "response":"You can remove NaNs using a mask: ``` mask = ~np.isnan(varx) & ~np.isnan(vary) slope, intercept, r_value, p_value, std_err = stats.linregress(varx[mask], vary[mask]) ```", "best_answers_score":0.6279, "library_name":"scipy", "question_url":"https:\/\/stackoverflow.com\/questions\/13643363\/linear-regression-of-arrays-containing-nan-values", "best_answers_votes":53, "question_length":597, "response_length":169 }, { "question":"plus\/minus operator for python \u00b1 I am looking for a way to do a plus\/minus operation in python 2 or 3.
I do not know the command or operator, and I cannot find a command or operator to do this. Am I missing something?", "response":"If you are looking to print the \u00b1 symbol, just use: ``` print(u\"\\u00B1\") ```", "best_answers_score":0.6269, "library_name":"scipy", "question_url":"https:\/\/stackoverflow.com\/questions\/27872250\/plus-minus-operator-for-python-%c2%b1", "best_answers_votes":54, "question_length":217, "response_length":76 }, { "question":"How do I read CSV data into a record array in NumPy? Is there a direct way to import the contents of a CSV file into a record array, just like how R's read.table(), read.delim(), and read.csv() import data into R dataframes? Or should I use csv.reader() and then apply numpy.core.records.fromrecords()?", "response":"Use numpy.genfromtxt() by setting the delimiter kwarg to a comma: ``` from numpy import genfromtxt my_data = genfromtxt('my_file.csv', delimiter=',') ```", "best_answers_score":0.626, "library_name":"scipy", "question_url":"https:\/\/stackoverflow.com\/questions\/3518778\/how-do-i-read-csv-data-into-a-record-array-in-numpy", "best_answers_votes":884, "question_length":302, "response_length":153 }, { "question":"Building up an array in numpy\/scipy by iteration in Python? Often, I am building an array by iterating through some data, e.g.: ``` my_array = [] for n in range(1000): # do operation, get value my_array.append(value) # cast to array my_array = array(my_array) ``` I find that I have to first build a list and then cast it (using \"array\") to an array. Is there a way around these? All these casting calls clutter the code... 
How can I iteratively build up \"my_array\", with it being an array from the start?", "response":"NumPy provides a 'fromiter' method: ``` def myfunc(n): for i in range(n): yield i**2 np.fromiter(myfunc(5), dtype=int) ``` which yields ``` array([ 0, 1, 4, 9, 16]) ```", "best_answers_score":0.6227, "library_name":"scipy", "question_url":"https:\/\/stackoverflow.com\/questions\/2641691\/building-up-an-array-in-numpy-scipy-by-iteration-in-python", "best_answers_votes":36, "question_length":505, "response_length":168 }, { "question":"How to solve a pair of nonlinear equations using Python? What's the (best) way to solve a pair of nonlinear equations using Python? (Numpy, Scipy or Sympy) eg: x+y^2 = 4 e^x+ xy = 3 A code snippet which solves the above pair would be great", "response":"For a numerical solution, you can use fsolve: http:\/\/docs.scipy.org\/doc\/scipy\/reference\/generated\/scipy.optimize.fsolve.html#scipy.optimize.fsolve ``` from scipy.optimize import fsolve import math def equations(p): x, y = p return (x+y**2-4, math.exp(x) + x*y - 3) x, y = fsolve(equations, (1, 1)) print equations((x, y)) ```", "best_answers_score":0.6225, "library_name":"scipy", "question_url":"https:\/\/stackoverflow.com\/questions\/8739227\/how-to-solve-a-pair-of-nonlinear-equations-using-python", "best_answers_votes":102, "question_length":239, "response_length":323 }, { "question":"How to calculate the statistics \"t-test\" with numpy I'm looking to generate some statistics about a model I created in python. I'd like to generate the t-test on it, but was wondering if there was an easy way to do this with numpy\/scipy. Are there any good explanations around? For example, I have three related datasets that look like this: ``` [55.0, 55.0, 47.0, 47.0, 55.0, 55.0, 55.0, 63.0] ``` Now, I would like to do Student's t-test on them.", "response":"The scipy.stats package has a few ttest_... functions.
See example from here: ``` >>> print 't-statistic = %6.3f pvalue = %6.4f' % stats.ttest_1samp(x, m) t-statistic = 0.391 pvalue = 0.6955 ```", "best_answers_score":0.6223, "library_name":"scipy", "question_url":"https:\/\/stackoverflow.com\/questions\/2324438\/how-to-calculate-the-statistics-t-test-with-numpy", "best_answers_votes":30, "question_length":452, "response_length":199 }, { "question":"Extract blocks or patches from NumPy Array I have a 2-d numpy array as follows: ``` a = np.array([[1,5,9,13], [2,6,10,14], [3,7,11,15], [4,8,12,16]] ``` I want to extract it into patches of 2 by 2 sizes with out repeating the elements. The answer should exactly be the same. This can be 3-d array or list with the same order of elements as below: ``` [[[1,5], [2,6]], [[3,7], [4,8]], [[9,13], [10,14]], [[11,15], [12,16]]] ``` How can do it easily? In my real problem the size of a is (36, 72). I can not do it one by one. I want programmatic way of doing it.", "response":"Using scikit-image: ``` import numpy as np from skimage.util import view_as_blocks a = np.array([[1,5,9,13], [2,6,10,14], [3,7,11,15], [4,8,12,16]]) print(view_as_blocks(a, (2, 2))) ```", "best_answers_score":0.621, "library_name":"scipy", "question_url":"https:\/\/stackoverflow.com\/questions\/31527755\/extract-blocks-or-patches-from-numpy-array", "best_answers_votes":29, "question_length":559, "response_length":185 }, { "question":"ImportError in importing from sklearn: cannot import name check_build I am getting the following error while trying to import from sklearn: ``` >>> from sklearn import svm Traceback (most recent call last): File \"\", line 1, in from sklearn import svm File \"C:\\Python27\\lib\\site-packages\\sklearn\\__init__.py\", line 16, in from . 
import check_build ImportError: cannot import name check_build ``` I am using python 2.7, scipy-0.12.0b1 superpack, numpy-1.6.0 superpack, scikit-learn-0.11 I have a windows 7 machine I have checked several answers for this issue but none of them gives a way out of this error.", "response":"Worked for me after installing scipy.", "best_answers_score":0.6139, "library_name":"scipy", "question_url":"https:\/\/stackoverflow.com\/questions\/15274696\/importerror-in-importing-from-sklearn-cannot-import-name-check-build", "best_answers_votes":164, "question_length":607, "response_length":37 }, { "question":"Failed building wheel for spacy I'm trying to install spacy by running pip install spacy for python version 3.6.1 but continuously i'm getting errors like below,how to get rid of this issue? previously i was having cl.exe not found error, after that i added visual studio path in environment variables where cl.exe exists. ``` Failed building wheel for spacy Running setup.py clean for spacy Running setup.py bdist_wheel for murmurhash ... error Complete output from command c:\\users\\sh00428701\\appdata\\local\\programs\\python\\python36\\python.exe -u -c \"import setuptools, tokenize;__file__='C:\\\\Users\\\\SH0042~1\\\\AppData\\\\Local\\\\Temp\\\\pip-build-joi6voav\\\\murmurhash\\\\setup.py';f=getattr(tokenize, 'open', open)(__file__);code=f.read().replace('\\r\\n', '\\n');f.close();exec(compile(code, __file__, 'exec'))\" bdist_wheel -d C:\\Users\\SH0042~1\\AppData\\Local\\Temp\\tmpa6tzdkovpip-wheel- --python-tag cp36: running bdist_wheel running build running build_py ---------------------------------------- Failed building wheel for murmurhash Running setup.py clean for murmurhash Running setup.py bdist_wheel for cymem ... 
error Complete output from command c:\\users\\sh00428701\\appdata\\local\\programs\\python\\python36\\python.exe -u -c \"import setuptools, tokenize;__file__='C:\\\\Users\\\\SH0042~1\\\\AppData\\\\Local\\\\Temp\\\\pip-build-joi6voav\\\\cymem\\\\setup.py';f=getattr(tokenize, 'open', open)(__file__);code=f.read().replace('\\r\\n', '\\n');f.close();exec(compile(code, __file__, 'exec'))\" bdist_wheel -d C:\\Users\\SH0042~1\\AppData\\Local\\Temp\\tmpz7p6hkiwpip-wheel- --python-tag cp36: ---------------------------------------- Failed building wheel for cymem Running setup.py clean for cymem Running setup.py bdist_wheel for preshed ... error Complete output from command c:\\users\\sh00428701\\appdata\\local\\programs\\python\\python36\\python.exe -u -c \"import setuptools, tokenize;__file__='C:\\\\Users\\\\SH0042~1\\\\AppData\\\\Local\\\\Temp\\\\pip-build-joi6voav\\\\preshed\\\\setup.py';f=getattr(tokenize, 'open', open)(__file__);code=f.read().replace('\\r\\n', '\\n');f.close();exec(compile(code, __file__, 'exec'))\" bdist_wheel -d C:\\Users\\SH0042~1\\AppData\\Local\\Temp\\tmpwppgmyp9pip-wheel- --python-tag cp36: ---------------------------------------- Failed building wheel for preshed Running setup.py clean for preshed Running setup.py bdist_wheel for thinc ... error ---------------------------------------- Failed building wheel for thinc Running setup.py clean for thinc Running setup.py bdist_wheel for ujson ... error ---------------------------------------- Failed building wheel for ujson Running setup.py clean for ujson Running setup.py bdist_wheel for cytoolz ... error ---------------------------------------- Failed building wheel for cytoolz Running setup.py clean for cytoolz Failed to build spacy murmurhash cymem preshed thinc ujson cytoolz Installing collected packages: murmurhash, cymem, preshed, wrapt, tqdm, toolz, cytoolz, plac, pyreadline, dill, termcolor, pathlib, thinc, ujson, regex, spacy Running setup.py install for murmurhash ... 
error C:\\Program Files (x86)\\Microsoft Visual Studio 14.0\\VC\\bin\\cl.exe \/c \/nologo \/Ox \/W3 \/GL \/DNDEBUG \/MD -Ic:\\users\\sh00428701\\appdata\\local\\programs\\python\\python36\\include -IC:\\Users\\SH0042~1\\AppData\\Local\\Temp\\pip-build-joi6voav\\murmurhash\\murmurhash\\include -Ic:\\users\\sh00428701\\appdata\\local\\programs\\python\\python36\\include -Ic:\\users\\sh00428701\\appdata\\local\\programs\\python\\python36\\include \/EHsc \/Tpmurmurhash\/mrmr.cpp \/Fobuild\\temp.win-amd64-3.6\\Release\\murmurhash\/mrmr.obj \/Ox \/EHsc mrmr.cpp c1xx: fatal error C1083: Cannot open source file: 'murmurhash\/mrmr.cpp': No such file or directory error: command 'C:\\\\Program Files (x86)\\\\Microsoft Visual Studio 14.0\\\\VC\\\\bin\\\\cl.exe' failed with exit status 2 ---------------------------------------- Command \"c:\\users\\sh00428701\\appdata\\local\\programs\\python\\python36\\python.exe -u -c \"import setuptools, tokenize;__file__='C:\\\\Users\\\\SH0042~1\\\\AppData\\\\Local\\\\Temp\\\\pip-build-joi6voav\\\\murmurhash\\\\setup.py';f=getattr(tokenize, 'open', open)(__file__);code=f.read().replace('\\r\\n', '\\n');f.close();exec(compile(code, __file__, 'exec'))\" install --record C:\\Users\\SH0042~1\\AppData\\Local\\Temp\\pip-_j1cxej1-record\\install-record.txt --single-version-externally-managed --compile\" failed with error code 1 in C:\\Users\\SH0042~1\\AppData\\Local\\Temp\\pip-build-joi6voav\\murmurhash\\ ```", "response":"for me, pip install --no-cache-dir spacy worked", "best_answers_score":0.6134, "library_name":"scipy", "question_url":"https:\/\/stackoverflow.com\/questions\/43370851\/failed-building-wheel-for-spacy", "best_answers_votes":33, "question_length":4353, "response_length":47 }, { "question":"Minimize function with parameters Currently I have the following code that defines the function f. 
``` a = #something b = #something c = #something def f(x): \"\"\"Evaluates some function that depends on parameters a, b, and c\"\"\" someNumber = #some calculation return someNumber ``` Ideally I would do def f(x, a, b, c), BUT I am minimizing f with respect to x and SciPy's optimization toolbox doesn't allow for one to minimize functions with parameters in the arguments. That said I would like to run my minimization code for multiple values of a, b and c. Is there a way I can do this?", "response":"You can specify additional arguments in args ``` from scipy.optimize import minimize minimize(f, x0, args=(a, b, c)) ```", "best_answers_score":0.6126, "library_name":"scipy", "question_url":"https:\/\/stackoverflow.com\/questions\/43017792\/minimize-function-with-parameters", "best_answers_votes":37, "question_length":584, "response_length":120 }, { "question":"Sorting arrays in NumPy by column How do I sort a NumPy array by its nth column? For example, given: ``` a = array([[9, 2, 3], [4, 5, 6], [7, 0, 5]]) ``` I want to sort the rows of a by the second column to obtain: ``` array([[7, 0, 5], [9, 2, 3], [4, 5, 6]]) ```", "response":"To sort by the second column of a: ``` a[a[:, 1].argsort()] ```", "best_answers_score":0.6125, "library_name":"scipy", "question_url":"https:\/\/stackoverflow.com\/questions\/2828059\/sorting-arrays-in-numpy-by-column", "best_answers_votes":1017, "question_length":263, "response_length":63 }, { "question":"FutureWarning: Using a non-tuple sequence for multidimensional indexing is deprecated use `arr[tuple(seq)]` I have searched S\/O but I couldn't find a answer for this. When I try to plot a distribution plot using seaborn I am getting a futurewarning. I was wondering what could be the issue here. 
``` import pandas as pd import numpy as np import seaborn as sns import matplotlib.pyplot as plt % matplotlib inline from sklearn import datasets iris = datasets.load_iris() df = pd.DataFrame(iris.data, columns=iris.feature_names) df['class'] = iris.target df['species'] = df['class'].map({idx:s for idx, s in enumerate(iris.target_names)}) fig, ((ax1,ax2),(ax3,ax4))= plt.subplots(2,2, figsize =(13,9)) sns.distplot(a = df.iloc[:,0], ax=ax1) sns.distplot(a = df.iloc[:,1], ax=ax2) sns.distplot(a = df.iloc[:,2], ax=ax3) sns.distplot(a = df.iloc[:,3], ax=ax4) plt.show() ``` This is the warning: ``` C:\\ProgramData\\Anaconda3\\lib\\site-packages\\scipy\\stats\\stats.py:1713: FutureWarning: Using a non-tuple sequence for multidimensional indexing is deprecated; use `arr[tuple(seq)]` instead of `arr[seq]`. In the future this will be interpreted as an array index, `arr[np.array(seq)]`, which will result either in an error or a different result. return np.add.reduce(sorted[indexer] * weights, axis=axis) \/ sumval ``` Any help? You can run the above code. You'll get the warning. Pandas : 0.23.4, seaborn : 0.9.0, matplotlib : 2.2.3, scipy : 1.1.0, numpy: 1.15.0'", "response":"For python>=3.7 you need to upgrade your scipy>=1.2.", "best_answers_score":0.6065, "library_name":"scipy", "question_url":"https:\/\/stackoverflow.com\/questions\/52594235\/futurewarning-using-a-non-tuple-sequence-for-multidimensional-indexing-is-depre", "best_answers_votes":22, "question_length":1455, "response_length":52 }, { "question":"Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX AVX2 I have recently installed tensorflow (Windows CPU version) and received the following message: Successfully installed tensorflow-1.4.0 tensorflow-tensorboard-0.4.0rc2 Then when I tried to run ```py import tensorflow as tf hello = tf.constant('Hello, TensorFlow!') sess = tf.Session() sess.run(hello) 'Hello, TensorFlow!' 
a = tf.constant(10) b = tf.constant(32) sess.run(a + b) 42 sess.close() ``` (which I found through https:\/\/github.com\/tensorflow\/tensorflow) I received the following message: 2017-11-02 01:56:21.698935: I C:\\tf_jenkins\\home\\workspace\\rel-win\\M\\windows\\PY\\36\\tensorflow\\core\\platform\\cpu_feature_guard.cc:137] Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX AVX2 But when I ran ```py import tensorflow as tf hello = tf.constant('Hello, TensorFlow!') sess = tf.Session() print(sess.run(hello)) ``` it ran as it should and output Hello, TensorFlow!, which indicates that the installation was successful indeed but there is something else that is wrong. Do you know what the problem is and how to fix it?", "response":"What is this warning about? Modern CPUs provide a lot of low-level instructions, besides the usual arithmetic and logic, known as extensions, e.g. SSE2, SSE4, AVX, etc. From the Wikipedia: Advanced Vector Extensions (AVX) are extensions to the x86 instruction set architecture for microprocessors from Intel and AMD proposed by Intel in March 2008 and first supported by Intel with the Sandy Bridge processor shipping in Q1 2011 and later on by AMD with the Bulldozer processor shipping in Q3 2011. AVX provides new features, new instructions and a new coding scheme. In particular, AVX introduces fused multiply-accumulate (FMA) operations, which speed up linear algebra computation, namely dot-product, matrix multiply, convolution, etc. Almost every machine-learning training involves a great deal of these operations, hence will be faster on a CPU that supports AVX and FMA (up to 300%). The warning states that your CPU does support AVX (hooray!). I'd like to stress here: it's all about CPU only. Why isn't it used then? Because tensorflow default distribution is built without CPU extensions, such as SSE4.1, SSE4.2, AVX, AVX2, FMA, etc. 
The default builds (ones from pip install tensorflow) are intended to be compatible with as many CPUs as possible. Another argument is that even with these extensions CPU is a lot slower than a GPU, and it's expected for medium- and large-scale machine-learning training to be performed on a GPU. What should you do? If you have a GPU, you shouldn't care about AVX support, because most expensive ops will be dispatched on a GPU device (unless explicitly set not to). In this case, you can simply ignore this warning by ``` # Just disables the warning, doesn't take advantage of AVX\/FMA to run faster import os os.environ['TF_CPP_MIN_LOG_LEVEL'] = '2' ``` ... or by setting export TF_CPP_MIN_LOG_LEVEL=2 if you're on Unix. Tensorflow is working fine anyway, but you won't see these annoying warnings. If you don't have a GPU and want to utilize CPU as much as possible, you should build tensorflow from the source optimized for your CPU with AVX, AVX2, and FMA enabled if your CPU supports them. It's been discussed in this question and also this GitHub issue. Tensorflow uses an ad-hoc build system called bazel and building it is not that trivial, but is certainly doable. After this, not only will the warning disappear, tensorflow performance should also improve.", "best_answers_score":0.8, "library_name":"tensorflow", "question_url":"https:\/\/stackoverflow.com\/questions\/47068709\/your-cpu-supports-instructions-that-this-tensorflow-binary-was-not-compiled-to-u", "best_answers_votes":971, "question_length":1154, "response_length":2412 }, { "question":"How to save\/restore a model after training? After you train a model in Tensorflow: How do you save the trained model? How do you later restore this saved model?", "response":"In (and after) Tensorflow version 0.11: Save the model: ``` import tensorflow as tf #Prepare to feed input, i.e. 
feed_dict and placeholders w1 = tf.placeholder(\"float\", name=\"w1\") w2 = tf.placeholder(\"float\", name=\"w2\") b1= tf.Variable(2.0,name=\"bias\") feed_dict ={w1:4,w2:8} #Define a test operation that we will restore w3 = tf.add(w1,w2) w4 = tf.multiply(w3,b1,name=\"op_to_restore\") sess = tf.Session() sess.run(tf.global_variables_initializer()) #Create a saver object which will save all the variables saver = tf.train.Saver() #Run the operation by feeding input print sess.run(w4,feed_dict) #Prints 24 which is sum of (w1+w2)*b1 #Now, save the graph saver.save(sess, 'my_test_model',global_step=1000) ``` Restore the model: ``` import tensorflow as tf sess=tf.Session() #First let's load meta graph and restore weights saver = tf.train.import_meta_graph('my_test_model-1000.meta') saver.restore(sess,tf.train.latest_checkpoint('.\/')) # Access saved Variables directly print(sess.run('bias:0')) # This will print 2, which is the value of bias that we saved # Now, let's access and create placeholders variables and # create feed-dict to feed new data graph = tf.get_default_graph() w1 = graph.get_tensor_by_name(\"w1:0\") w2 = graph.get_tensor_by_name(\"w2:0\") feed_dict ={w1:13.0,w2:17.0} #Now, access the op that you want to run. op_to_restore = graph.get_tensor_by_name(\"op_to_restore:0\") print sess.run(op_to_restore,feed_dict) #This will print 60 which is calculated ``` This and some more advanced use-cases have been explained very well here. A quick complete tutorial to save and restore Tensorflow models", "best_answers_score":0.8, "library_name":"tensorflow", "question_url":"https:\/\/stackoverflow.com\/questions\/33759623\/how-to-save-restore-a-model-after-training", "best_answers_votes":277, "question_length":160, "response_length":1615 }, { "question":"What is the difference between 'SAME' and 'VALID' padding in tf.nn.max_pool of tensorflow? What is the difference between 'SAME' and 'VALID' padding in tf.nn.max_pool of tensorflow? 
In my opinion, 'VALID' means there will be no zero padding outside the edges when we do max pool. According to A guide to convolution arithmetic for deep learning, it says that there will be no padding in pool operator, i.e. just use 'VALID' of tensorflow. But what is 'SAME' padding of max pool in tensorflow?", "response":"If you like ascii art: \"VALID\" = without padding: ``` inputs: 1 2 3 4 5 6 7 8 9 10 11 (12 13) |________________| dropped |_________________| ``` \"SAME\" = with zero padding: ``` pad| |pad inputs: 0 |1 2 3 4 5 6 7 8 9 10 11 12 13|0 0 |________________| |_________________| |________________| ``` In this example: Input width = 13 Filter width = 6 Stride = 5 Notes: \"VALID\" only ever drops the right-most columns (or bottom-most rows). \"SAME\" tries to pad evenly left and right, but if the amount of columns to be added is odd, it will add the extra column to the right, as is the case in this example (the same logic applies vertically: there may be an extra row of zeros at the bottom). Edit: About the name: With \"SAME\" padding, if you use a stride of 1, the layer's outputs will have the same spatial dimensions as its inputs. With \"VALID\" padding, there's no \"made-up\" padding inputs. The layer only uses valid input data.", "best_answers_score":0.8, "library_name":"tensorflow", "question_url":"https:\/\/stackoverflow.com\/questions\/37674306\/what-is-the-difference-between-same-and-valid-padding-in-tf-nn-max-pool-of-t", "best_answers_votes":786, "question_length":492, "response_length":924 }, { "question":"Could not find a version that satisfies the requirement tensorflow I installed the latest version of Python (3.6.4 64-bit) and the latest version of PyCharm (2017.3.3 64-bit). 
Then I installed some modules in PyCharm (Numpy, Pandas, etc), but when I tried installing Tensorflow it didn't install, and I got the error message: Could not find a version that satisfies the requirement TensorFlow (from versions: ) No matching distribution found for TensorFlow. Then I tried installing TensorFlow from the command prompt and I got the same error message. I did however successfully install tflearn. I also installed Python 2.7, but I got the same error message again. I googled the error and tried some of the things which were suggested to other people, but nothing worked (this included installing Flask). How can I install Tensorflow? Thanks.", "response":"The latest requirements for running TensorFlow are documented in the installation documentation. TensorFlow only supports 64-bit Python TensorFlow only supports certain versions of Python (for example, Python 3.6 is not supported) So, if you're using an out-of-range version of Python (older or newer) or a 32-bit version, then you'll need to use a different version.", "best_answers_score":0.8, "library_name":"tensorflow", "question_url":"https:\/\/stackoverflow.com\/questions\/48720833\/could-not-find-a-version-that-satisfies-the-requirement-tensorflow", "best_answers_votes":276, "question_length":841, "response_length":367 }, { "question":"How to find which version of TensorFlow is installed in my system? I need to find which version of TensorFlow I have installed. I'm using Ubuntu 16.04 Long Term Support.", "response":"This depends on how you installed TensorFlow. I am going to use the same headings used by TensorFlow's installation instructions to structure this answer. Pip installation Run: ``` python -c 'import tensorflow as tf; print(tf.__version__)' # for Python 2 python3 -c 'import tensorflow as tf; print(tf.__version__)' # for Python 3 ``` Note that python is symlinked to \/usr\/bin\/python3 in some Linux distributions, so use python instead of python3 in these cases. 
pip list | grep tensorflow for Python 2 or pip3 list | grep tensorflow for Python 3 will also show the version of Tensorflow installed. Virtualenv installation Run: ``` python -c 'import tensorflow as tf; print(tf.__version__)' # for both Python 2 and Python 3 ``` pip list | grep tensorflow will also show the version of Tensorflow installed. For example, I have installed TensorFlow 0.9.0 in a virtualenv for Python 3. So, I get: ``` $ python -c 'import tensorflow as tf; print(tf.__version__)' 0.9.0 $ pip list | grep tensorflow tensorflow (0.9.0) ```", "best_answers_score":0.8, "library_name":"tensorflow", "question_url":"https:\/\/stackoverflow.com\/questions\/38549253\/how-to-find-which-version-of-tensorflow-is-installed-in-my-system", "best_answers_votes":547, "question_length":169, "response_length":1016 }, { "question":"How to prevent tensorflow from allocating the totality of a GPU memory? I work in an environment in which computational resources are shared, i.e., we have a few server machines equipped with a few Nvidia Titan X GPUs each. For small to moderate size models, the 12 GB of the Titan X is usually enough for 2\u20133 people to run training concurrently on the same GPU. If the models are small enough that a single model does not take full advantage of all the computational units of the GPU, this can actually result in a speedup compared with running one training process after the other. Even in cases where the concurrent access to the GPU does slow down the individual training time, it is still nice to have the flexibility of having multiple users simultaneously train on the GPU. The problem with TensorFlow is that, by default, it allocates the full amount of available GPU memory when it is launched. Even for a small two-layer neural network, I see that all 12 GB of the GPU memory is used up. 
Is there a way to make TensorFlow only allocate, say, 4 GB of GPU memory, if one knows that this is enough for a given model?", "response":"You can set the fraction of GPU memory to be allocated when you construct a tf.Session by passing a tf.GPUOptions as part of the optional config argument: ``` # Assume that you have 12GB of GPU memory and want to allocate ~4GB: gpu_options = tf.GPUOptions(per_process_gpu_memory_fraction=0.333) sess = tf.Session(config=tf.ConfigProto(gpu_options=gpu_options)) ``` The per_process_gpu_memory_fraction acts as a hard upper bound on the amount of GPU memory that will be used by the process on each GPU on the same machine. Currently, this fraction is applied uniformly to all of the GPUs on the same machine; there is no way to set this on a per-GPU basis.", "best_answers_score":0.8, "library_name":"tensorflow", "question_url":"https:\/\/stackoverflow.com\/questions\/34199233\/how-to-prevent-tensorflow-from-allocating-the-totality-of-a-gpu-memory", "best_answers_votes":338, "question_length":1123, "response_length":655 }, { "question":"Convert a tensor to numpy array in Tensorflow? How to convert a tensor into a numpy array when using Tensorflow with Python bindings?", "response":"TensorFlow 2.x Eager Execution is enabled by default, so just call .numpy() on the Tensor object. ``` import tensorflow as tf a = tf.constant([[1, 2], [3, 4]]) b = tf.add(a, 1) a.numpy() # array([[1, 2], # [3, 4]], dtype=int32) b.numpy() # array([[2, 3], # [4, 5]], dtype=int32) tf.multiply(a, b).numpy() # array([[ 2, 6], # [12, 20]], dtype=int32) ``` See NumPy Compatibility for more. It is worth noting (from the docs), Numpy array may share a memory with the Tensor object. Any changes to one may be reflected in the other. Bold emphasis mine. A copy may or may not be returned, and this is an implementation detail based on whether the data is in CPU or GPU (in the latter case, a copy has to be made from GPU to host memory). 
But why am I getting the AttributeError: 'Tensor' object has no attribute 'numpy'? A lot of folks have commented about this issue; there are a couple of possible reasons: TF 2.0 is not correctly installed (in which case, try re-installing), or TF 2.0 is installed, but eager execution is disabled for some reason. In such cases, call tf.compat.v1.enable_eager_execution() to enable it, or see below. If Eager Execution is disabled, you can build a graph and then run it through tf.compat.v1.Session: ``` a = tf.constant([[1, 2], [3, 4]]) b = tf.add(a, 1) out = tf.multiply(a, b) out.eval(session=tf.compat.v1.Session()) # array([[ 2, 6], # [12, 20]], dtype=int32) ``` See also TF 2.0 Symbols Map for a mapping of the old API to the new one.", "best_answers_score":0.8, "library_name":"tensorflow", "question_url":"https:\/\/stackoverflow.com\/questions\/34097281\/convert-a-tensor-to-numpy-array-in-tensorflow", "best_answers_votes":249, "question_length":133, "response_length":1473 }, { "question":"Which TensorFlow and CUDA version combinations are compatible? I have noticed that some newer TensorFlow versions are incompatible with older CUDA and cuDNN versions. Does an overview of the compatible versions or even a list of officially tested combinations exist? I can't find it in the TensorFlow documentation.", "response":"TL;DR: See this table: https:\/\/www.tensorflow.org\/install\/source#gpu Generally: Check the CUDA version: ``` cat \/usr\/local\/cuda\/version.txt ``` and cuDNN version: ``` grep CUDNN_MAJOR -A 2 \/usr\/local\/cuda\/include\/cudnn.h ``` and install a combination as given below in the images or here. 
The following images and the link provide an overview of the officially supported\/tested combinations of CUDA and TensorFlow on Linux, macOS and Windows: Minor configurations: Since the given specifications below in some cases might be too broad, here is one specific configuration that works: tensorflow-gpu==1.12.0 cuda==9.0 cuDNN==7.1.4 The corresponding cudnn can be downloaded here. Tested build configurations Please refer to https:\/\/www.tensorflow.org\/install\/source#gpu for an up-to-date compatibility chart (for official TF wheels). (figures updated May 20, 2020) Linux GPU Linux CPU macOS GPU macOS CPU Windows GPU Windows CPU Updated as of Dec 5 2020: For the updated information please refer to Link for Linux and Link for Windows.", "best_answers_score":0.8, "library_name":"tensorflow", "question_url":"https:\/\/stackoverflow.com\/questions\/50622525\/which-tensorflow-and-cuda-version-combinations-are-compatible", "best_answers_votes":404, "question_length":315, "response_length":1028 }, { "question":"How to print the value of a Tensor object in TensorFlow? I have been using the introductory example of matrix multiplication in TensorFlow. ``` matrix1 = tf.constant([[3., 3.]]) matrix2 = tf.constant([[2.],[2.]]) product = tf.matmul(matrix1, matrix2) ``` When I print the product, it is displaying it as a Tensor object: ``` ``` But how do I know the value of product? The following doesn't help: ``` print product Tensor(\"MatMul:0\", shape=TensorShape([Dimension(1), Dimension(1)]), dtype=float32) ``` I know that graphs run on Sessions, but isn't there any way I can check the output of a Tensor object without running the graph in a session?", "response":"The easiest[A] way to evaluate the actual value of a Tensor object is to pass it to the Session.run() method, or call Tensor.eval() when you have a default session (i.e. in a with tf.Session(): block, or see below). In general[B], you cannot print the value of a tensor without running some code in a session. 
If you are experimenting with the programming model, and want an easy way to evaluate tensors, the tf.InteractiveSession lets you open a session at the start of your program, and then use that session for all Tensor.eval() (and Operation.run()) calls. This can be easier in an interactive setting, such as the shell or an IPython notebook, when it's tedious to pass around a Session object everywhere. For example, the following works in a Jupyter notebook: ``` with tf.Session() as sess: print(product.eval()) ``` This might seem silly for such a small expression, but one of the key ideas in Tensorflow 1.x is deferred execution: it's very cheap to build a large and complex expression, and when you want to evaluate it, the back-end (to which you connect with a Session) is able to schedule its execution more efficiently (e.g. executing independent parts in parallel and using GPUs). [A]: To print the value of a tensor without returning it to your Python program, you can use the tf.print() operator, as Andrzej suggests in another answer. According to the official documentation: To make sure the operator runs, users need to pass the produced op to tf.compat.v1.Session's run method, or to use the op as a control dependency for executed ops by specifying with tf.compat.v1.control_dependencies([print_op]), which is printed to standard output. Also note that: In Jupyter notebooks and colabs, tf.print prints to the notebook cell outputs. It will not write to the notebook kernel's console logs. [B]: You might be able to use the tf.get_static_value() function to get the constant value of the given tensor if its value is efficiently calculable.", "best_answers_score":0.8, "library_name":"tensorflow", "question_url":"https:\/\/stackoverflow.com\/questions\/33633370\/how-to-print-the-value-of-a-tensor-object-in-tensorflow", "best_answers_votes":282, "question_length":644, "response_length":1964 }, { "question":"What's the difference of name scope and a variable scope in tensorflow? 
What are the differences between these functions? tf.variable_op_scope(values, name, default_name, initializer=None) Returns a context manager for defining an op that creates variables. This context manager validates that the given values are from the same graph, ensures that that graph is the default graph, and pushes a name scope and a variable scope. tf.op_scope(values, name, default_name=None) Returns a context manager for use when defining a Python op. This context manager validates that the given values are from the same graph, ensures that that graph is the default graph, and pushes a name scope. tf.name_scope(name) Wrapper for Graph.name_scope() using the default graph. See Graph.name_scope() for more details. tf.variable_scope(name_or_scope, reuse=None, initializer=None) Returns a context for variable scope. Variable scope allows to create new variables and to share already created ones while providing checks to not create or share by accident. For details, see the Variable Scope How To, here we present only a few basic examples.", "response":"Let's begin by a short introduction to variable sharing. It is a mechanism in TensorFlow that allows for sharing variables accessed in different parts of the code without passing references to the variable around. The method tf.get_variable can be used with the name of the variable as the argument to either create a new variable with such name or retrieve the one that was created before. This is different from using the tf.Variable constructor which will create a new variable every time it is called (and potentially add a suffix to the variable name if a variable with such name already exists). It is for the purpose of the variable sharing mechanism that a separate type of scope (variable scope) was introduced. 
As a result, we end up having two different types of scopes: name scope, created using tf.name_scope variable scope, created using tf.variable_scope Both scopes have the same effect on all operations as well as variables created using tf.Variable, i.e., the scope will be added as a prefix to the operation or variable name. However, name scope is ignored by tf.get_variable. We can see that in the following example: ```python with tf.name_scope(\"my_scope\"): v1 = tf.get_variable(\"var1\", [1], dtype=tf.float32) v2 = tf.Variable(1, name=\"var2\", dtype=tf.float32) a = tf.add(v1, v2) print(v1.name) # var1:0 print(v2.name) # my_scope\/var2:0 print(a.name) # my_scope\/Add:0 ``` The only way to place a variable accessed using tf.get_variable in a scope is to use a variable scope, as in the following example: ```python with tf.variable_scope(\"my_scope\"): v1 = tf.get_variable(\"var1\", [1], dtype=tf.float32) v2 = tf.Variable(1, name=\"var2\", dtype=tf.float32) a = tf.add(v1, v2) print(v1.name) # my_scope\/var1:0 print(v2.name) # my_scope\/var2:0 print(a.name) # my_scope\/Add:0 ``` This allows us to easily share variables across different parts of the program, even within different name scopes: ```python with tf.name_scope(\"foo\"): with tf.variable_scope(\"var_scope\"): v = tf.get_variable(\"var\", [1]) with tf.name_scope(\"bar\"): with tf.variable_scope(\"var_scope\", reuse=True): v1 = tf.get_variable(\"var\", [1]) assert v1 == v print(v.name) # var_scope\/var:0 print(v1.name) # var_scope\/var:0 ``` UPDATE As of version r0.11, op_scope and variable_op_scope are both deprecated and replaced by name_scope and variable_scope.", "best_answers_score":0.8, "library_name":"tensorflow", "question_url":"https:\/\/stackoverflow.com\/questions\/35919020\/whats-the-difference-of-name-scope-and-a-variable-scope-in-tensorflow", "best_answers_votes":401, "question_length":1124, "response_length":2335 }, { "question":"How to get current available GPUs in tensorflow? 
I have a plan to use distributed TensorFlow, and I saw TensorFlow can use GPUs for training and testing. In a cluster environment, each machine could have 0 or 1 or more GPUs, and I want to run my TensorFlow graph into GPUs on as many machines as possible. I found that when running tf.Session() TensorFlow gives information about GPU in the log messages like below: ``` I tensorflow\/core\/common_runtime\/gpu\/gpu_init.cc:126] DMA: 0 I tensorflow\/core\/common_runtime\/gpu\/gpu_init.cc:136] 0: Y I tensorflow\/core\/common_runtime\/gpu\/gpu_device.cc:838] Creating TensorFlow device (\/gpu:0) -> (device: 0, name: GeForce GTX 1080, pci bus id: 0000:01:00.0) ``` My question is how do I get information about current available GPU from TensorFlow? I can get loaded GPU information from the log, but I want to do it in a more sophisticated, programmatic way. I also could restrict GPUs intentionally using the CUDA_VISIBLE_DEVICES environment variable, so I don't want to know a way of getting GPU information from OS kernel. In short, I want a function like tf.get_available_gpus() that will return ['\/gpu:0', '\/gpu:1'] if there are two GPUs available in the machine. How can I implement this?", "response":"There is an undocumented method called device_lib.list_local_devices() that enables you to list the devices available in the local process. (N.B. As an undocumented method, this is subject to backwards incompatible changes.) The function returns a list of DeviceAttributes protocol buffer objects. 
You can extract a list of string device names for the GPU devices as follows: ``` from tensorflow.python.client import device_lib def get_available_gpus(): local_device_protos = device_lib.list_local_devices() return [x.name for x in local_device_protos if x.device_type == 'GPU'] ``` Note that (at least up to TensorFlow 1.4), calling device_lib.list_local_devices() will run some initialization code that, by default, will allocate all of the GPU memory on all of the devices (GitHub issue). To avoid this, first create a session with an explicitly small per_process_gpu_memory_fraction, or allow_growth=True, to prevent all of the memory being allocated. See this question for more details.", "best_answers_score":0.8, "library_name":"tensorflow", "question_url":"https:\/\/stackoverflow.com\/questions\/38559755\/how-to-get-current-available-gpus-in-tensorflow", "best_answers_votes":310, "question_length":1231, "response_length":984 }, { "question":"Keras, How to get the output of each layer? I have trained a binary classification model with CNN, and here is my code ``` model = Sequential() model.add(Convolution2D(nb_filters, kernel_size[0], kernel_size[1], border_mode='valid', input_shape=input_shape)) model.add(Activation('relu')) model.add(Convolution2D(nb_filters, kernel_size[0], kernel_size[1])) model.add(Activation('relu')) model.add(MaxPooling2D(pool_size=pool_size)) # (16, 16, 32) model.add(Convolution2D(nb_filters*2, kernel_size[0], kernel_size[1])) model.add(Activation('relu')) model.add(Convolution2D(nb_filters*2, kernel_size[0], kernel_size[1])) model.add(Activation('relu')) model.add(MaxPooling2D(pool_size=pool_size)) # (8, 8, 64) = (2048) model.add(Flatten()) model.add(Dense(1024)) model.add(Activation('relu')) model.add(Dropout(0.5)) model.add(Dense(2)) # define a binary classification problem model.add(Activation('softmax')) model.compile(loss='categorical_crossentropy', optimizer='adadelta', metrics=['accuracy']) model.fit(x_train, y_train, batch_size=batch_size, 
nb_epoch=nb_epoch, verbose=1, validation_data=(x_test, y_test)) ``` And here, I want to get the output of each layer, just like in TensorFlow. How can I do that?", "response":"You can easily get the outputs of any layer by using: model.layers[index].output For all layers use this: ``` from keras import backend as K inp = model.input # input placeholder outputs = [layer.output for layer in model.layers] # all layer outputs functors = [K.function([inp, K.learning_phase()], [out]) for out in outputs] # evaluation functions # Testing test = np.random.random(input_shape)[np.newaxis,...] layer_outs = [func([test, 1.]) for func in functors] print layer_outs ``` Note: To simulate Dropout, use learning_phase as 1. in layer_outs; otherwise use 0. Edit: (based on comments) K.function creates theano\/tensorflow tensor functions which are later used to get the output from the symbolic graph given the input. Now K.learning_phase() is required as an input as many Keras layers like Dropout\/BatchNormalization depend on it to change behavior during training and test time. So if you remove the dropout layer in your code you can simply use: ``` from keras import backend as K inp = model.input # input placeholder outputs = [layer.output for layer in model.layers] # all layer outputs functors = [K.function([inp], [out]) for out in outputs] # evaluation functions # Testing test = np.random.random(input_shape)[np.newaxis,...] layer_outs = [func([test]) for func in functors] print layer_outs ``` Edit 2: More optimized I just realized that the previous answer is not that optimized, as for each function evaluation the data will be transferred CPU->GPU memory and also the tensor calculations need to be done for the lower layers over and over. 
Instead, this is a much better way, as you don't need multiple functions but a single function giving you the list of all outputs: ``` from keras import backend as K inp = model.input # input placeholder outputs = [layer.output for layer in model.layers] # all layer outputs functor = K.function([inp, K.learning_phase()], outputs) # evaluation function # Testing test = np.random.random(input_shape)[np.newaxis,...] layer_outs = functor([test, 1.]) print layer_outs ```", "best_answers_score":0.8, "library_name":"tensorflow", "question_url":"https:\/\/stackoverflow.com\/questions\/41711190\/keras-how-to-get-the-output-of-each-layer", "best_answers_votes":243, "question_length":1206, "response_length":2031 }, { "question":"How to build and use Google TensorFlow C++ api I'm really eager to start using Google's new Tensorflow library in C++. The website and docs are just really unclear in terms of how to build the project's C++ API and I don't know where to start. Can someone with more experience help by sharing a guide to using tensorflow's C++ API?", "response":"To get started, you should download the source code from Github, by following the instructions here (you'll need Bazel and a recent version of GCC). The C++ API (and the backend of the system) is in tensorflow\/core. Right now, only the C++ Session interface and the C API are being supported. You can use either of these to execute TensorFlow graphs that have been built using the Python API and serialized to a GraphDef protocol buffer. There is also an experimental feature for building graphs in C++, but this is currently not quite as full-featured as the Python API (e.g. no support for auto-differentiation at present). You can see an example program that builds a small graph in C++ here. The second part of the C++ API is the API for adding a new OpKernel, which is the class containing implementations of numerical kernels for CPU and GPU. 
There are numerous examples of how to build these in tensorflow\/core\/kernels, as well as a tutorial for adding a new op in C++.", "best_answers_score":0.8, "library_name":"tensorflow", "question_url":"https:\/\/stackoverflow.com\/questions\/33620794\/how-to-build-and-use-google-tensorflow-c-api", "best_answers_votes":58, "question_length":347, "response_length":977 }, { "question":"How to run Tensorflow on CPU I have installed the GPU version of tensorflow on Ubuntu 14.04. I am on a GPU server where tensorflow can access the available GPUs. I want to run tensorflow on the CPUs. Normally I can use env CUDA_VISIBLE_DEVICES=0 to run on GPU no. 0. How can I pick between the CPUs instead? I am not interested in rewriting my code with with tf.device(\"\/cpu:0\"):", "response":"You can also set the environment variable to ``` CUDA_VISIBLE_DEVICES=\"\" ``` without having to modify the source code.", "best_answers_score":0.8, "library_name":"tensorflow", "question_url":"https:\/\/stackoverflow.com\/questions\/37660312\/how-to-run-tensorflow-on-cpu", "best_answers_votes":217, "question_length":382, "response_length":118 }, { "question":"What is the role of \"Flatten\" in Keras? I am trying to understand the role of the Flatten function in Keras. Below is my code, which is a simple two-layer network. It takes in 2-dimensional data of shape (3, 2), and outputs 1-dimensional data of shape (1, 4): ``` model = Sequential() model.add(Dense(16, input_shape=(3, 2))) model.add(Activation('relu')) model.add(Flatten()) model.add(Dense(4)) model.compile(loss='mean_squared_error', optimizer='SGD') x = np.array([[[1, 2], [3, 4], [5, 6]]]) y = model.predict(x) print y.shape ``` This prints out that y has shape (1, 4). However, if I remove the Flatten line, then it prints out that y has shape (1, 3, 4). I don't understand this. From my understanding of neural networks, the model.add(Dense(16, input_shape=(3, 2))) function is creating a hidden fully-connected layer, with 16 nodes. 
Each of these nodes is connected to each of the 3x2 input elements. Therefore, the 16 nodes at the output of this first layer are already \"flat\". So, the output shape of the first layer should be (1, 16). Then, the second layer takes this as an input, and outputs data of shape (1, 4). So if the output of the first layer is already \"flat\" and of shape (1, 16), why do I need to further flatten it?", "response":"If you read the Keras documentation entry for Dense, you will see that this call: ``` Dense(16, input_shape=(5,3)) ``` would result in a Dense network with 3 inputs and 16 outputs which would be applied independently to each of the 5 steps. So, if D(x) transforms a 3-dimensional vector to a 16-d vector, what you'll get as output from your layer would be a sequence of vectors: [D(x[0,:]), D(x[1,:]),..., D(x[4,:])] with shape (5, 16). In order to have the behavior you specify, you may first Flatten your input to a 15-d vector and then apply Dense: ``` model = Sequential() model.add(Flatten(input_shape=(3, 2))) model.add(Dense(16)) model.add(Activation('relu')) model.add(Dense(4)) model.compile(loss='mean_squared_error', optimizer='SGD') ``` EDIT: As some people struggled to understand - here is an explanatory image:", "best_answers_score":0.8, "library_name":"tensorflow", "question_url":"https:\/\/stackoverflow.com\/questions\/43237124\/what-is-the-role-of-flatten-in-keras", "best_answers_votes":187, "question_length":1240, "response_length":822 }, { "question":"TensorFlow, why was python the chosen language? [closed] 
I recently started studying deep learning and other ML techniques, and I started searching for frameworks that simplify the process of building a net and training it. Then I found TensorFlow. Having little experience in the field, it seems to me that speed is a big factor for making a big ML system, even more so when working with deep learning. So why was Python chosen by Google to make TensorFlow? Wouldn't it be better to make it over a language that can be compiled and not interpreted? What are the advantages of using Python over a language like C++ for machine learning?", "response":"The most important thing to realize about TensorFlow is that, for the most part, the core is not written in Python: It's written in a combination of highly-optimized C++ and CUDA (Nvidia's language for programming GPUs). Much of that happens, in turn, by using Eigen (a high-performance C++ and CUDA numerical library) and NVidia's cuDNN (a very optimized DNN library for NVidia GPUs, for functions such as convolutions). The model for TensorFlow is that the programmer uses \"some language\" (most likely Python!) to express the model. This model, written in the TensorFlow constructs such as: ``` h1 = tf.nn.relu(tf.matmul(l1, W1) + b1) h2 = ... ``` is not actually executed when the Python code is run. Instead, what's actually created is a dataflow graph that says to take particular inputs, apply particular operations, supply the results as the inputs to other operations, and so on. This model is executed by fast C++ code, and for the most part, the data going between operations is never copied back to the Python code. 
Then the programmer \"drives\" the execution of this model by pulling on nodes -- for training, usually in Python, and for serving, sometimes in Python and sometimes in raw C++: ``` sess.run(eval_results) ``` This one Python (or C++) function call uses either an in-process call to C++ or an RPC for the distributed version to call into the C++ TensorFlow server to tell it to execute, and then copies back the results. So, with that said, let's re-phrase the question: Why did TensorFlow choose Python as the first well-supported language for expressing and controlling the training of models? The answer to that is simple: Python is probably the most comfortable language for a large range of data scientists and machine learning experts, and it is also easy to integrate with and use to control a C++ backend, while also being general, widely-used both inside and outside of Google, and open source. Given that with the basic model of TensorFlow, the performance of Python isn't that important, it was a natural fit. It's also a huge plus that NumPy makes it easy to do pre-processing in Python -- also with high performance -- before feeding it into TensorFlow for the truly CPU-heavy things. There's also a bunch of complexity in expressing the model that isn't used when executing it -- shape inference (e.g., if you do matmul(A, B), what is the shape of the resulting data?) and automatic gradient computation. It turns out to have been nice to be able to express those in Python, though I think in the long term they'll probably move to the C++ backend to make adding other languages easier. (The hope, of course, is to support other languages in the future for creating and expressing models. 
It's already quite straightforward to run inference using several other languages -- C++ works now, someone from Facebook contributed Go bindings that we're reviewing now, etc.)", "best_answers_score":0.8, "library_name":"tensorflow", "question_url":"https:\/\/stackoverflow.com\/questions\/35677724\/tensorflow-why-was-python-the-chosen-language", "best_answers_votes":295, "question_length":1247, "response_length":2894 }, { "question":"Understanding TensorBoard (weight) histograms It is really straightforward to see and understand the scalar values in TensorBoard. However, it's not clear how to understand histogram graphs. For example, they are the histograms of my network weights. (After fixing a bug thanks to sunside) What is the best way to interpret these? Layer 1 weights look mostly flat, what does this mean? I added the network construction code here. ``` X = tf.placeholder(tf.float32, [None, input_size], name=\"input_x\") x_image = tf.reshape(X, [-1, 6, 10, 1]) tf.summary.image('input', x_image, 4) # First layer of weights with tf.name_scope(\"layer1\"): W1 = tf.get_variable(\"W1\", shape=[input_size, hidden_layer_neurons], initializer=tf.contrib.layers.xavier_initializer()) layer1 = tf.matmul(X, W1) layer1_act = tf.nn.tanh(layer1) tf.summary.histogram(\"weights\", W1) tf.summary.histogram(\"layer\", layer1) tf.summary.histogram(\"activations\", layer1_act) # Second layer of weights with tf.name_scope(\"layer2\"): W2 = tf.get_variable(\"W2\", shape=[hidden_layer_neurons, hidden_layer_neurons], initializer=tf.contrib.layers.xavier_initializer()) layer2 = tf.matmul(layer1_act, W2) layer2_act = tf.nn.tanh(layer2) tf.summary.histogram(\"weights\", W2) tf.summary.histogram(\"layer\", layer2) tf.summary.histogram(\"activations\", layer2_act) # Third layer of weights with tf.name_scope(\"layer3\"): W3 = tf.get_variable(\"W3\", shape=[hidden_layer_neurons, hidden_layer_neurons], initializer=tf.contrib.layers.xavier_initializer()) layer3 = tf.matmul(layer2_act, W3) 
layer3_act = tf.nn.tanh(layer3) tf.summary.histogram(\"weights\", W3) tf.summary.histogram(\"layer\", layer3) tf.summary.histogram(\"activations\", layer3_act) # Fourth layer of weights with tf.name_scope(\"layer4\"): W4 = tf.get_variable(\"W4\", shape=[hidden_layer_neurons, output_size], initializer=tf.contrib.layers.xavier_initializer()) Qpred = tf.nn.softmax(tf.matmul(layer3_act, W4)) # Bug fixed: Qpred = tf.nn.softmax(tf.matmul(layer3, W4)) tf.summary.histogram(\"weights\", W4) tf.summary.histogram(\"Qpred\", Qpred) # We need to define the parts of the network needed for learning a policy Y = tf.placeholder(tf.float32, [None, output_size], name=\"input_y\") advantages = tf.placeholder(tf.float32, name=\"reward_signal\") # Loss function # Sum (Ai*logp(yi|xi)) log_lik = -Y * tf.log(Qpred) loss = tf.reduce_mean(tf.reduce_sum(log_lik * advantages, axis=1)) tf.summary.scalar(\"Q\", tf.reduce_mean(Qpred)) tf.summary.scalar(\"Y\", tf.reduce_mean(Y)) tf.summary.scalar(\"log_likelihood\", tf.reduce_mean(log_lik)) tf.summary.scalar(\"loss\", loss) # Learning train = tf.train.AdamOptimizer(learning_rate=learning_rate).minimize(loss) ```", "response":"It appears that the network hasn't learned anything in the layers one to three. The last layer does change, so that means that there either may be something wrong with the gradients (if you're tampering with them manually), you're constraining learning to the last layer by optimizing only its weights or the last layer really 'eats up' all error. It could also be that only biases are learned. The network appears to learn something though, but it might not be using its full potential. More context would be needed here, but playing around with the learning rate (e.g. using a smaller one) might be worth a shot. In general, histograms display the number of occurrences of a value relative to each other values. 
Simply speaking, if the possible values are in a range of 0..9 and you see a spike of amount 10 on the value 0, this means that 10 inputs assume the value 0; in contrast, if the histogram shows a plateau of 1 for all values of 0..9, it means that for 10 inputs, each possible value 0..9 occurs exactly once. You can also use histograms to visualize probability distributions when you normalize all histogram values by their total sum; if you do that, you'll intuitively obtain the likelihood with which a certain value (on the x axis) will appear (compared to other inputs). Now for layer1\/weights, the plateau means that: most of the weights are in the range of -0.15 to 0.15 it is (mostly) equally likely for a weight to have any of these values, i.e. they are (almost) uniformly distributed Said differently, almost the same number of weights have the values -0.15, 0.0, 0.15 and everything in between. There are some weights having slightly smaller or higher values. So in short, this simply looks like the weights have been initialized using a uniform distribution with zero mean and value range -0.15..0.15 ... give or take. If you do indeed use uniform initialization, then this is typical when the network has not been trained yet. In comparison, layer1\/activations forms a bell curve (gaussian)-like shape: The values are centered around a specific value, in this case 0, but they may also be greater or smaller than that (equally likely so, since it's symmetric). Most values appear close around the mean of 0, but values do range from -0.8 to 0.8. I assume that the layer1\/activations is taken as the distribution over all layer outputs in a batch. You can see that the values do change over time. The layer 4 histogram doesn't tell me anything specific. 
From the shape, it's just showing that some weight values around -0.1, 0.05 and 0.25 tend to occur with a higher probability; a reason could be that different parts of each neuron there actually pick up the same information and are basically redundant. This can mean that you could actually use a smaller network or that your network has the potential to learn more distinguishing features in order to prevent overfitting. These are just assumptions though. Also, as already stated in the comments below, do add bias units. By leaving them out, you are forcefully constraining your network to a possibly invalid solution.", "best_answers_score":0.8, "library_name":"tensorflow", "question_url":"https:\/\/stackoverflow.com\/questions\/42315202\/understanding-tensorboard-weight-histograms", "best_answers_votes":174, "question_length":2653, "response_length":3105 }, { "question":"What does tf.nn.embedding_lookup function do? ``` tf.nn.embedding_lookup(params, ids, partition_strategy='mod', name=None) ``` I cannot understand the purpose of this function. Is it like a lookup table? Which means to return the parameters corresponding to each id (in ids)? For instance, in the skip-gram model if we use tf.nn.embedding_lookup(embeddings, train_inputs), then for each train_input it finds the corresponding embedding?", "response":"Yes, this function is hard to understand, until you get the point. In its simplest form, it is similar to tf.gather. It returns the elements of params according to the indexes specified by ids. For example (assuming you are inside tf.InteractiveSession()) ``` params = tf.constant([10,20,30,40]) ids = tf.constant([0,1,2,3]) print tf.nn.embedding_lookup(params,ids).eval() ``` would return [10 20 30 40], because the first element (index 0) of params is 10, the second element of params (index 1) is 20, etc. Similarly, ``` params = tf.constant([10,20,30,40]) ids = tf.constant([1,1,3]) print tf.nn.embedding_lookup(params,ids).eval() ``` would return [20 20 40]. 
But embedding_lookup is more than that. The params argument can be a list of tensors, rather than a single tensor. ``` params1 = tf.constant([1,2]) params2 = tf.constant([10,20]) ids = tf.constant([2,0,2,1,2,3]) result = tf.nn.embedding_lookup([params1, params2], ids) ``` In such a case, the indexes, specified in ids, correspond to elements of tensors according to a partition strategy, where the default partition strategy is 'mod'. In the 'mod' strategy, index 0 corresponds to the first element of the first tensor in the list. Index 1 corresponds to the first element of the second tensor. Index 2 corresponds to the first element of the third tensor, and so on. Simply put, index i corresponds to the first element of the (i+1)th tensor, for all the indexes 0..(n-1), assuming params is a list of n tensors. Now, index n cannot correspond to tensor n+1, because the list params contains only n tensors. So index n corresponds to the second element of the first tensor. Similarly, index n+1 corresponds to the second element of the second tensor, etc. So, in the code ``` params1 = tf.constant([1,2]) params2 = tf.constant([10,20]) ids = tf.constant([2,0,2,1,2,3]) result = tf.nn.embedding_lookup([params1, params2], ids) ``` index 0 corresponds to the first element of the first tensor: 1 index 1 corresponds to the first element of the second tensor: 10 index 2 corresponds to the second element of the first tensor: 2 index 3 corresponds to the second element of the second tensor: 20 Thus, the result would be: ``` [ 2 1 2 10 2 20] ```", "best_answers_score":0.8, "library_name":"tensorflow", "question_url":"https:\/\/stackoverflow.com\/questions\/34870614\/what-does-tf-nn-embedding-lookup-function-do", "best_answers_votes":233, "question_length":430, "response_length":2205 }, { "question":"Can I run Keras model on gpu? I'm running a Keras model, with a submission deadline of 36 hours. If I train my model on the CPU it will take approx. 50 hours. Is there a way to run Keras on the GPU? 
I'm using Tensorflow backend and running it on my Jupyter notebook, without anaconda installed.", "response":"Yes you can run keras models on GPU. Few things you will have to check first. your system has GPU (Nvidia. As AMD doesn't work yet) You have installed the GPU version of tensorflow You have installed CUDA installation instructions Verify that tensorflow is running with GPU check if GPU is working sess = tf.Session(config=tf.ConfigProto(log_device_placement=True)) for TF > v2.0 sess = tf.compat.v1.Session(config=tf.compat.v1.ConfigProto(log_device_placement=True)) (Thanks @nbro and @Ferro for pointing this out in the comments) OR ``` from tensorflow.python.client import device_lib print(device_lib.list_local_devices()) ``` output will be something like this: ``` [ name: \"\/cpu:0\"device_type: \"CPU\", name: \"\/gpu:0\"device_type: \"GPU\" ] ``` Once all this is done your model will run on GPU: To Check if keras(>=2.1.1) is using GPU: ``` from keras import backend as K K.tensorflow_backend._get_available_gpus() ``` All the best.", "best_answers_score":0.8, "library_name":"tensorflow", "question_url":"https:\/\/stackoverflow.com\/questions\/45662253\/can-i-run-keras-model-on-gpu", "best_answers_votes":230, "question_length":289, "response_length":931 }, { "question":"Tensorflow - ValueError: Failed to convert a NumPy array to a Tensor (Unsupported object type float) Continuation from previous question: Tensorflow - TypeError: 'int' object is not iterable My training data is a list of lists each comprised of 1000 floats. For example, x_train[0] = ``` [0.0, 0.0, 0.1, 0.25, 0.5, ...] 
``` Here is my model: ``` model = Sequential() model.add(LSTM(128, activation='relu', input_shape=(1000, 1), return_sequences=True)) model.add(Dropout(0.2)) model.add(LSTM(128, activation='relu')) model.add(Dropout(0.2)) model.add(Dense(32, activation='relu')) model.add(Dropout(0.2)) model.add(Dense(1, activation='sigmoid')) opt = tf.keras.optimizers.Adam(lr=1e-3, decay=1e-5) model.compile(optimizer='rmsprop', loss='binary_crossentropy', metrics=['accuracy']) model.fit(x_train, y_train, epochs=3, validation_data=(x_test, y_test)) ``` Here is the error I'm getting: ``` Traceback (most recent call last): File \"C:\\Users\\bencu\\Desktop\\ProjectFiles\\Code\\Program.py\", line 88, in FitModel model.fit(x_train, y_train, epochs=3, validation_data=(x_test, y_test)) File \"C:\\Users\\bencu\\AppData\\Local\\Programs\\Python\\Python37\\lib\\site-packages\\tensorflow_core\\python\\keras\\engine\\training.py\", line 728, in fit use_multiprocessing=use_multiprocessing) File \"C:\\Users\\bencu\\AppData\\Local\\Programs\\Python\\Python37\\lib\\site-packages\\tensorflow_core\\python\\keras\\engine\\training_v2.py\", line 224, in fit distribution_strategy=strategy) File \"C:\\Users\\bencu\\AppData\\Local\\Programs\\Python\\Python37\\lib\\site-packages\\tensorflow_core\\python\\keras\\engine\\training_v2.py\", line 547, in _process_training_inputs use_multiprocessing=use_multiprocessing) File \"C:\\Users\\bencu\\AppData\\Local\\Programs\\Python\\Python37\\lib\\site-packages\\tensorflow_core\\python\\keras\\engine\\training_v2.py\", line 606, in _process_inputs use_multiprocessing=use_multiprocessing) File \"C:\\Users\\bencu\\AppData\\Local\\Programs\\Python\\Python37\\lib\\site-packages\\tensorflow_core\\python\\keras\\engine\\data_adapter.py\", line 479, in __init__ batch_size=batch_size, shuffle=shuffle, **kwargs) File \"C:\\Users\\bencu\\AppData\\Local\\Programs\\Python\\Python37\\lib\\site-packages\\tensorflow_core\\python\\keras\\engine\\data_adapter.py\", line 321, in 
__init__ dataset_ops.DatasetV2.from_tensors(inputs).repeat() File \"C:\\Users\\bencu\\AppData\\Local\\Programs\\Python\\Python37\\lib\\site-packages\\tensorflow_core\\python\\data\\ops\\dataset_ops.py\", line 414, in from_tensors return TensorDataset(tensors) File \"C:\\Users\\bencu\\AppData\\Local\\Programs\\Python\\Python37\\lib\\site-packages\\tensorflow_core\\python\\data\\ops\\dataset_ops.py\", line 2335, in __init__ element = structure.normalize_element(element) File \"C:\\Users\\bencu\\AppData\\Local\\Programs\\Python\\Python37\\lib\\site-packages\\tensorflow_core\\python\\data\\util\\structure.py\", line 111, in normalize_element ops.convert_to_tensor(t, name=\"component_%d\" % i)) File \"C:\\Users\\bencu\\AppData\\Local\\Programs\\Python\\Python37\\lib\\site-packages\\tensorflow_core\\python\\framework\\ops.py\", line 1184, in convert_to_tensor return convert_to_tensor_v2(value, dtype, preferred_dtype, name) File \"C:\\Users\\bencu\\AppData\\Local\\Programs\\Python\\Python37\\lib\\site-packages\\tensorflow_core\\python\\framework\\ops.py\", line 1242, in convert_to_tensor_v2 as_ref=False) File \"C:\\Users\\bencu\\AppData\\Local\\Programs\\Python\\Python37\\lib\\site-packages\\tensorflow_core\\python\\framework\\ops.py\", line 1296, in internal_convert_to_tensor ret = conversion_func(value, dtype=dtype, name=name, as_ref=as_ref) File \"C:\\Users\\bencu\\AppData\\Local\\Programs\\Python\\Python37\\lib\\site-packages\\tensorflow_core\\python\\framework\\tensor_conversion_registry.py\", line 52, in _default_conversion_function return constant_op.constant(value, dtype, name=name) File \"C:\\Users\\bencu\\AppData\\Local\\Programs\\Python\\Python37\\lib\\site-packages\\tensorflow_core\\python\\framework\\constant_op.py\", line 227, in constant allow_broadcast=True) File \"C:\\Users\\bencu\\AppData\\Local\\Programs\\Python\\Python37\\lib\\site-packages\\tensorflow_core\\python\\framework\\constant_op.py\", line 235, in _constant_impl t = 
convert_to_eager_tensor(value, ctx, dtype) File \"C:\\Users\\bencu\\AppData\\Local\\Programs\\Python\\Python37\\lib\\site-packages\\tensorflow_core\\python\\framework\\constant_op.py\", line 96, in convert_to_eager_tensor return ops.EagerTensor(value, ctx.device_name, dtype) ValueError: Failed to convert a NumPy array to a Tensor (Unsupported object type float). ``` I've tried googling the error myself, I found something about using the tf.convert_to_tensor function. I tried passing my training and testing lists through this but the function won't take them.", "response":"TL;DR Several possible errors, most fixed with x = np.asarray(x).astype('float32'). Others may be faulty data preprocessing; ensure everything is properly formatted (categoricals, nans, strings, etc). Below shows what the model expects: ```py [print(i.shape, i.dtype) for i in model.inputs] [print(o.shape, o.dtype) for o in model.outputs] [print(l.name, l.input_shape, l.dtype) for l in model.layers] ``` The problem's rooted in using lists as inputs, as opposed to Numpy arrays; Keras\/TF doesn't support former. A simple conversion is: x_array = np.asarray(x_list). The next step's to ensure data is fed in expected format; for LSTM, that'd be a 3D tensor with dimensions (batch_size, timesteps, features) - or equivalently, (num_samples, timesteps, channels). Lastly, as a debug pro-tip, print ALL the shapes for your data. Code accomplishing all of the above, below: ```py Sequences = np.asarray(Sequences) Targets = np.asarray(Targets) show_shapes() Sequences = np.expand_dims(Sequences, -1) Targets = np.expand_dims(Targets, -1) show_shapes() ``` ```py # OUTPUTS Expected: (num_samples, timesteps, channels) Sequences: (200, 1000) Targets: (200,) Expected: (num_samples, timesteps, channels) Sequences: (200, 1000, 1) Targets: (200, 1) ``` As a bonus tip, I notice you're running via main(), so your IDE probably lacks a Jupyter-like cell-based execution; I strongly recommend the Spyder IDE. 
It's as simple as adding # In[], and pressing Ctrl + Enter below: Function used: ```py def show_shapes(): # can make yours to take inputs; this'll use local variable values print(\"Expected: (num_samples, timesteps, channels)\") print(\"Sequences: {}\".format(Sequences.shape)) print(\"Targets: {}\".format(Targets.shape)) ```", "best_answers_score":0.8, "library_name":"tensorflow", "question_url":"https:\/\/stackoverflow.com\/questions\/58636087\/tensorflow-valueerror-failed-to-convert-a-numpy-array-to-a-tensor-unsupporte", "best_answers_votes":150, "question_length":4591, "response_length":1719 }, { "question":"Could not load dynamic library 'cudart64_101.dll' on tensorflow CPU-only installation I just installed the latest version of Tensorflow via pip install tensorflow and whenever I run a program, I get the log message: W tensorflow\/stream_executor\/platform\/default\/dso_loader.cc:55] Could not load dynamic library 'cudart64_101.dll'; dlerror: cudart64_101.dll not found Is this bad? How do I fix the error?", "response":"Tensorflow 2.1+ What's going on? With the new Tensorflow 2.1 release, the default tensorflow pip package contains both CPU and GPU versions of TF. In previous TF versions, not finding the CUDA libraries would emit an error and raise an exception, while now the library dynamically searches for the correct CUDA version and, if it doesn't find it, emits the warning (The W in the beginning stands for warnings, errors have an E (or F for fatal errors) and falls back to CPU-only mode. In fact, this is also written in the log as an info message right after the warning (do note that if you have a higher minimum log level that the default, you might not see info messages). 
The full log is (emphasis mine): 2020-01-20 12:27:44.554767: W tensorflow\/stream_executor\/platform\/default\/dso_loader.cc:55] Could not load dynamic library 'cudart64_101.dll'; dlerror: cudart64_101.dll not found 2020-01-20 12:27:44.554964: I tensorflow\/stream_executor\/cuda\/cudart_stub.cc:29] Ignore above cudart dlerror if you do not have a GPU set up on your machine. Should I worry? How do I fix it? If you don't have a CUDA-enabled GPU on your machine, or if you don't care about not having GPU acceleration, no need to worry. If, on the other hand, you installed tensorflow and wanted GPU acceleration, check your CUDA installation (TF 2.1 requires CUDA 10.1, not 10.2 or 10.0). If you just want to get rid of the warning, you can adapt TF's logging level to suppress warnings, but that might be overkill, as it will silence all warnings. Tensorflow 1.X or 2.0: Your CUDA setup is broken, ensure you have the correct version installed.", "best_answers_score":0.8, "library_name":"tensorflow", "question_url":"https:\/\/stackoverflow.com\/questions\/59823283\/could-not-load-dynamic-library-cudart64-101-dll-on-tensorflow-cpu-only-install", "best_answers_votes":148, "question_length":403, "response_length":1613 }, { "question":"What does this tensorflow message mean? Any side effect? Was the installation successful? I just installed tensorflow v2.3 on anaconda python. I tried to test out the installation using the python command below; ``` $ python -c \"import tensorflow as tf; x = [[2.]]; print('tensorflow version', tf.__version__); print('hello, {}'.format(tf.matmul(x, x)))\" ``` I got the following message; ```none 2020-12-15 07:59:12.411952: I tensorflow\/core\/platform\/cpu_feature_guard.cc:142] This TensorFlow binary is optimized with oneAPI Deep Neural Network Library (oneDNN)to use the following CPU instructions in performance-critical operations: AVX AVX2 To enable them in other operations, rebuild TensorFlow with the appropriate compiler flags. 
hello, [[4.]] ``` From the message, it seems that the installation was installed successfully. But what does This TensorFlow binary is optimized with oneAPI Deep Neural Network Library (oneDNN)to use the following CPU instructions in performance-critical operations: AVX AVX2 mean exactly? Am I using a tensorflow version with some limited features? Any side effects? I am using Windows 10.", "response":"An important part of Tensorflow is that it is supposed to be fast. With a suitable installation, it works with CPUs, GPUs, or TPUs. Part of going fast means that it uses different code depending on your hardware. Some CPUs support operations that other CPUs do not, such as vectorized addition (adding multiple variables at once). Tensorflow is simply telling you that the version you have installed can use the AVX and AVX2 operations and is set to do so by default in certain situations (say inside a forward or back-prop matrix multiply), which can speed things up. This is not an error, it is just telling you that it can and will take advantage of your CPU to get that extra speed out. Note: AVX stands for Advanced Vector Extensions.", "best_answers_score":0.8, "library_name":"tensorflow", "question_url":"https:\/\/stackoverflow.com\/questions\/65298241\/what-does-this-tensorflow-message-mean-any-side-effect-was-the-installation-su", "best_answers_votes":304, "question_length":1126, "response_length":739 }, { "question":"Deep-Learning Nan loss reasons [closed] Closed. This question is not about programming or software development. It is not currently accepting answers. This question does not appear to be about a specific programming problem, a software algorithm, or software tools primarily used by programmers. If you believe the question would be on-topic on another Stack Exchange site, you can leave a comment to explain where the question may be able to be answered. Closed 8 months ago. 
What would cause a Convolutional Neural Network to diverge? Specifics: I am using Tensorflow's iris_training model with some of my own data and keep getting ERROR:tensorflow:Model diverged with loss = NaN. Traceback... tensorflow.contrib.learn.python.learn.monitors.NanLossDuringTrainingError: NaN loss during training. Traceback originated with line: ``` tf.contrib.learn.DNNClassifier(feature_columns=feature_columns, hidden_units=[300, 300, 300], #optimizer=tf.train.ProximalAdagradOptimizer(learning_rate=0.001, l1_regularization_strength=0.00001), n_classes=11, model_dir=\"\/tmp\/iris_model\") ``` I've tried adjusting the optimizer, using a zero for learning rate, and using no optimizer.", "response":"There are lots of things I have seen make a model diverge. Too high of a learning rate. You can often tell if this is the case if the loss begins to increase and then diverges to infinity. I am not too familiar with the DNNClassifier but I am guessing it uses the categorical cross entropy cost function. This involves taking the log of the prediction which diverges as the prediction approaches zero. That is why people usually add a small epsilon value to the prediction to prevent this divergence. I am guessing the DNNClassifier probably does this or uses the tensorflow op for it. Probably not the issue. Other numerical stability issues can exist such as division by zero where adding the epsilon can help. Another less obvious one is the square root whose derivative can diverge if not properly simplified when dealing with finite precision numbers. Yet again I doubt this is the issue in the case of the DNNClassifier. You may have an issue with the input data. Try calling assert not np.any(np.isnan(x)) on the input data to make sure you are not introducing NaNs. Also make sure all of the target values are valid. 
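A minimal sketch of that NaN check (plain NumPy; the array here is a made-up toy batch for illustration):

```python
import numpy as np

# Toy batch with one bad value smuggled in
x = np.array([[0.1, 0.2], [np.nan, 0.4]])

# The check from the answer: detect NaNs before they reach the loss
has_nan = bool(np.any(np.isnan(x)))
print(has_nan)  # True -> this batch would poison the training loss

# After cleaning (here: simply replacing NaN with 0), the assertion passes
x_clean = np.nan_to_num(x)
assert not np.any(np.isnan(x_clean)), "input still contains NaN values"
```

Whether replacing NaNs with zeros is appropriate depends on your data; dropping or re-imputing the affected rows is often the safer choice.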
Finally, make sure the data is properly normalized. You probably want to have the pixels in the range [-1, 1] and not [0, 255]. The labels must be in the domain of the loss function, so if using a logarithmic-based loss function all labels must be non-negative (as noted by evan pu and the comments below).", "best_answers_score":0.8, "library_name":"tensorflow", "question_url":"https:\/\/stackoverflow.com\/questions\/40050397\/deep-learning-nan-loss-reasons", "best_answers_votes":231, "question_length":1305, "response_length":1434 }, { "question":"What does tf.nn.conv2d do in tensorflow? I was looking at the docs of tensorflow about tf.nn.conv2d here. But I can't understand what it does or what it is trying to achieve. It says on the docs, #1 : Flattens the filter to a 2-D matrix with shape [filter_height * filter_width * in_channels, output_channels]. Now what does that do? Is that element-wise multiplication or just plain matrix multiplication? I also could not understand the other two points mentioned in the docs. I have written them below : # 2: Extracts image patches from the the input tensor to form a virtual tensor of shape [batch, out_height, out_width, filter_height * filter_width * in_channels]. # 3: For each patch, right-multiplies the filter matrix and the image patch vector. It would be really helpful if anyone could give an example, a piece of code (extremely helpful) maybe and explain what is going on there and why the operation is like this. I've tried coding a small portion and printing out the shape of the operation. Still, I can't understand. I tried something like this: ```python op = tf.shape(tf.nn.conv2d(tf.random_normal([1,10,10,10]), tf.random_normal([2,10,10,10]), strides=[1, 2, 2, 1], padding='SAME')) with tf.Session() as sess: result = sess.run(op) print(result) ``` I understand bits and pieces of convolutional neural networks. I studied them here. But the implementation on tensorflow is not what I expected. So it raised the question. 
EDIT: So, I implemented a much simpler code. But I can't figure out what's going on. I mean how the results are like this. It would be extremely helpful if anyone could tell me what process yields this output. ```python input = tf.Variable(tf.random_normal([1,2,2,1])) filter = tf.Variable(tf.random_normal([1,1,1,1])) op = tf.nn.conv2d(input, filter, strides=[1, 1, 1, 1], padding='SAME') init = tf.initialize_all_variables() with tf.Session() as sess: sess.run(init) print(\"input\") print(input.eval()) print(\"filter\") print(filter.eval()) print(\"result\") result = sess.run(op) print(result) ``` output ``` input [[[[ 1.60314465] [-0.55022103]] [[ 0.00595062] [-0.69889867]]]] filter [[[[-0.59594476]]]] result [[[[-0.95538563] [ 0.32790133]] [[-0.00354624] [ 0.41650501]]]] ```", "response":"Ok I think this is about the simplest way to explain it all. Your example is 1 image, size 2x2, with 1 channel. You have 1 filter, with size 1x1, and 1 channel (size is height x width x channels x number of filters). For this simple case the resulting 2x2, 1 channel image (size 1x2x2x1, number of images x height x width x x channels) is the result of multiplying the filter value by each pixel of the image. Now let's try more channels: ```py input = tf.Variable(tf.random_normal([1,3,3,5])) filter = tf.Variable(tf.random_normal([1,1,5,1])) op = tf.nn.conv2d(input, filter, strides=[1, 1, 1, 1], padding='VALID') ``` Here the 3x3 image and the 1x1 filter each have 5 channels. The resulting image will be 3x3 with 1 channel (size 1x3x3x1), where the value of each pixel is the dot product across channels of the filter with the corresponding pixel in the input image. Now with a 3x3 filter ```py input = tf.Variable(tf.random_normal([1,3,3,5])) filter = tf.Variable(tf.random_normal([3,3,5,1])) op = tf.nn.conv2d(input, filter, strides=[1, 1, 1, 1], padding='VALID') ``` Here we get a 1x1 image, with 1 channel (size 1x1x1x1). The value is the sum of the 9, 5-element dot products. 
But you could just call this a 45-element dot product. Now with a bigger image ```py input = tf.Variable(tf.random_normal([1,5,5,5])) filter = tf.Variable(tf.random_normal([3,3,5,1])) op = tf.nn.conv2d(input, filter, strides=[1, 1, 1, 1], padding='VALID') ``` The output is a 3x3 1-channel image (size 1x3x3x1). Each of these values is a sum of 9, 5-element dot products. Each output is made by centering the filter on one of the 9 center pixels of the input image, so that none of the filter sticks out. The xs below represent the filter centers for each output pixel. ``` ..... .xxx. .xxx. .xxx. ..... ``` Now with \"SAME\" padding: ```py input = tf.Variable(tf.random_normal([1,5,5,5])) filter = tf.Variable(tf.random_normal([3,3,5,1])) op = tf.nn.conv2d(input, filter, strides=[1, 1, 1, 1], padding='SAME') ``` This gives a 5x5 output image (size 1x5x5x1). This is done by centering the filter at each position on the image. Any of the 5-element dot products where the filter sticks out past the edge of the image get a value of zero. So the corners are only sums of 4, 5-element dot products. Now with multiple filters. ```py input = tf.Variable(tf.random_normal([1,5,5,5])) filter = tf.Variable(tf.random_normal([3,3,5,7])) op = tf.nn.conv2d(input, filter, strides=[1, 1, 1, 1], padding='SAME') ``` This still gives a 5x5 output image, but with 7 channels (size 1x5x5x7). Where each channel is produced by one of the filters in the set. Now with strides 2,2: ```py input = tf.Variable(tf.random_normal([1,5,5,5])) filter = tf.Variable(tf.random_normal([3,3,5,7])) op = tf.nn.conv2d(input, filter, strides=[1, 2, 2, 1], padding='SAME') ``` Now the result still has 7 channels, but is only 3x3 (size 1x3x3x7). This is because instead of centering the filters at every point on the image, the filters are centered at every other point on the image, taking steps (strides) of width 2. The x's below represent the filter center for each output pixel, on the input image. ``` x.x.x ..... x.x.x ..... 
x.x.x ``` And of course the first dimension of the input is the number of images so you can apply it over a batch of 10 images, for example: ```py input = tf.Variable(tf.random_normal([10,5,5,5])) filter = tf.Variable(tf.random_normal([3,3,5,7])) op = tf.nn.conv2d(input, filter, strides=[1, 2, 2, 1], padding='SAME') ``` This performs the same operation, for each image independently, giving a stack of 10 images as the result (size 10x3x3x7)", "best_answers_score":0.8, "library_name":"tensorflow", "question_url":"https:\/\/stackoverflow.com\/questions\/34619177\/what-does-tf-nn-conv2d-do-in-tensorflow", "best_answers_votes":166, "question_length":2221, "response_length":3626 }, { "question":"How are the new tf.contrib.summary summaries in TensorFlow evaluated? I'm having a bit of trouble understanding the new tf.contrib.summary API. In the old one, it seemed that all one was supposed to do was to run tf.summary.merge_all() and run that as an op. But now we have things like tf.contrib.summary.record_summaries_every_n_global_steps, which can be used like this: ``` import tensorflow.contrib.summary as tfsum summary_writer = tfsum.create_file_writer(logdir, flush_millis=3000) summaries = [] # First we create one summary which runs every n global steps with summary_writer.as_default(), tfsum.record_summaries_every_n_global_steps(30): summaries.append(tfsum.scalar(\"train\/loss\", loss)) # And then one that runs every single time? with summary_writer.as_default(), tfsum.always_record_summaries(): summaries.append(tfsum.scalar(\"train\/accuracy\", accuracy)) # Then create an optimizer which uses a global step step = tf.create_global_step() train = tf.train.AdamOptimizer().minimize(loss, global_step=step) ``` And now come a few questions: If we just run session.run(summaries) in a loop, I assume that the accuracy summary would get written every single time, while the loss one wouldn't, because it only gets written if the global step is divisible by 30? 
Assuming the summaries automatically evaluate their dependencies, I never need to run session.run([accuracy, summaries]) but can just run, session.run(summaries) since they have a dependency in the graph, right? If 2) is true, can't I just add a control dependency to the training step so that the summaries are written on every train run? Or is this a bad practice? Is there any downside to using control dependencies in general for things that are going to be evaluated at the same time anyway? Why does tf.contrib.summary.scalar (and others) take in a step parameter? By adding a control dependency in 3) I mean doing this: ``` tf.control_dependencies(summaries): train = tf.train.AdamOptimizer().minimize(loss, global_step=step) ```", "response":"answer moved from edit to self-answer as requested I just played around with this a little bit, and it seems that if one combines tf.control_dependencies with tf.record_summaries_every_n_global_steps it behaves as expected and the summary only gets recorded every nth step. But if they are run together within a session, such as session.run([train, summs]), the summaries are stored every once in a while, but not exactly every nth step. I tested this with n=2 and with the second approach the summary was often written at odd steps, while with the control dependency approach it was always on an even step.", "best_answers_score":0.8, "library_name":"tensorflow", "question_url":"https:\/\/stackoverflow.com\/questions\/49083680\/how-are-the-new-tf-contrib-summary-summaries-in-tensorflow-evaluated", "best_answers_votes":2, "question_length":2008, "response_length":607 }, { "question":"Should we do learning rate decay for adam optimizer I'm training a network for image localization with Adam optimizer, and someone suggest me to use exponential decay. I don't want to try that because Adam optimizer itself decays learning rate. But that guy insists and he said he did that before. 
So should I do that and is there any theory behind your suggestion?", "response":"It depends. ADAM updates any parameter with an individual learning rate. This means that every parameter in the network has a specific learning rate associated. But the single learning rate for each parameter is computed using lambda (the initial learning rate) as an upper limit. This means that every single learning rate can vary from 0 (no update) to lambda (maximum update). It's true that the learning rates adapt themselves during training steps, but if you want to be sure that every update step doesn't exceed lambda you can then lower lambda using exponential decay or whatever. It can help to reduce loss during the latest step of training, when the computed loss with the previously associated lambda parameter has stopped decreasing.", "best_answers_score":0.8, "library_name":"tensorflow", "question_url":"https:\/\/stackoverflow.com\/questions\/39517431\/should-we-do-learning-rate-decay-for-adam-optimizer", "best_answers_votes":169, "question_length":365, "response_length":748 }, { "question":"TensorFlow, why there are 3 files after saving the model? Having read the docs, I saved a model in TensorFlow, here is my demo code: ```py # Create some variables. v1 = tf.Variable(..., name=\"v1\") v2 = tf.Variable(..., name=\"v2\") ... # Add an op to initialize the variables. init_op = tf.global_variables_initializer() # Add ops to save and restore all the variables. saver = tf.train.Saver() # Later, launch the model, initialize the variables, do some work, save the # variables to disk. with tf.Session() as sess: sess.run(init_op) # Do some work with the model. .. # Save the variables to disk. 
save_path = saver.save(sess, \"\/tmp\/model.ckpt\") print(\"Model saved in file: %s\" % save_path) ``` but after that, I found there are 3 files ``` model.ckpt.data-00000-of-00001 model.ckpt.index model.ckpt.meta ``` And I can't restore the model by restore the model.ckpt file, since there is no such file. Here is my code ```py with tf.Session() as sess: # Restore variables from disk. saver.restore(sess, \"\/tmp\/model.ckpt\") ``` So, why there are 3 files?", "response":"Try this: ``` with tf.Session() as sess: saver = tf.train.import_meta_graph('\/tmp\/model.ckpt.meta') saver.restore(sess, \"\/tmp\/model.ckpt\") ``` The TensorFlow save method saves three kinds of files because it stores the graph structure separately from the variable values. The .meta file describes the saved graph structure, so you need to import it before restoring the checkpoint (otherwise it doesn't know what variables the saved checkpoint values correspond to). Alternatively, you could do this: ``` # Recreate the EXACT SAME variables v1 = tf.Variable(..., name=\"v1\") v2 = tf.Variable(..., name=\"v2\") ... # Now load the checkpoint variable values with tf.Session() as sess: saver = tf.train.Saver() saver.restore(sess, \"\/tmp\/model.ckpt\") ``` Even though there is no file named model.ckpt, you still refer to the saved checkpoint by that name when restoring it. From the saver.py source code: Users only need to interact with the user-specified prefix... instead of any physical pathname.", "best_answers_score":0.8, "library_name":"tensorflow", "question_url":"https:\/\/stackoverflow.com\/questions\/41265035\/tensorflow-why-there-are-3-files-after-saving-the-model", "best_answers_votes":126, "question_length":1050, "response_length":993 }, { "question":"Will scikit-learn utilize GPU? 
Reading implementation of scikit-learn in TensorFlow: http:\/\/learningtensorflow.com\/lesson6\/ and scikit-learn: http:\/\/scikit-learn.org\/stable\/modules\/generated\/sklearn.cluster.KMeans.html I'm struggling to decide which implementation to use. scikit-learn is installed as part of the tensorflow docker container so can use either implementation. Reason to use scikit-learn : scikit-learn contains less boilerplate than the tensorflow implementation. Reason to use tensorflow : If running on Nvidia GPU the algorithm will be run against in parallel , I'm not sure if scikit-learn will utilize all available GPUs? Reading https:\/\/www.quora.com\/What-are-the-main-differences-between-TensorFlow-and-SciKit-Learn TensorFlow is more low-level; basically, the Lego bricks that help you to implement machine learning algorithms whereas scikit-learn offers you off-the-shelf algorithms, e.g., algorithms for classification such as SVMs, Random Forests, Logistic Regression, and many, many more. TensorFlow shines if you want to implement deep learning algorithms, since it allows you to take advantage of GPUs for more efficient training. This statement re-enforces my assertion that \"scikit-learn contains less boilerplate than the tensorflow implementation\" but also suggests scikit-learn will not utilize all available GPUs?", "response":"Tensorflow only uses GPU if it is built against Cuda and CuDNN. By default it does not use GPU, especially if it is running inside Docker, unless you use nvidia-docker and an image with a built-in support. Scikit-learn is not intended to be used as a deep-learning framework and it does not provide any GPU support. Why is there no support for deep or reinforcement learning \/ Will there be support for deep or reinforcement learning in scikit-learn? Deep learning and reinforcement learning both require a rich vocabulary to define an architecture, with deep learning additionally requiring GPUs for efficient computing. 
However, neither of these fit within the design constraints of scikit-learn; as a result, deep learning and reinforcement learning are currently out of scope for what scikit-learn seeks to achieve. Extracted from http:\/\/scikit-learn.org\/stable\/faq.html#why-is-there-no-support-for-deep-or-reinforcement-learning-will-there-be-support-for-deep-or-reinforcement-learning-in-scikit-learn Will you add GPU support in scikit-learn? No, or at least not in the near future. The main reason is that GPU support will introduce many software dependencies and introduce platform specific issues. scikit-learn is designed to be easy to install on a wide variety of platforms. Outside of neural networks, GPUs don\u2019t play a large role in machine learning today, and much larger gains in speed can often be achieved by a careful choice of algorithms. Extracted from http:\/\/scikit-learn.org\/stable\/faq.html#will-you-add-gpu-support", "best_answers_score":0.8, "library_name":"tensorflow", "question_url":"https:\/\/stackoverflow.com\/questions\/41567895\/will-scikit-learn-utilize-gpu", "best_answers_votes":157, "question_length":1348, "response_length":1537 }, { "question":"Tensorflow Strides Argument I am trying to understand the strides argument in tf.nn.avg_pool, tf.nn.max_pool, tf.nn.conv2d. The documentation repeatedly says strides: A list of ints that has length >= 4. The stride of the sliding window for each dimension of the input tensor. My questions are: What do each of the 4+ integers represent? Why must they have strides[0] = strides[3] = 1 for convnets? In this example we see tf.reshape(_X,shape=[-1, 28, 28, 1]). Why -1? Sadly the examples in the docs for reshape using -1 don't translate too well to this scenario.", "response":"The pooling and convolutional ops slide a \"window\" across the input tensor. 
Using tf.nn.conv2d as an example: If the input tensor has 4 dimensions: [batch, height, width, channels], then the convolution operates on a 2D window on the height, width dimensions. strides determines how much the window shifts by in each of the dimensions. The typical use sets the first (the batch) and last (the depth) stride to 1. Let's use a very concrete example: Running a 2-d convolution over a 32x32 greyscale input image. I say greyscale because then the input image has depth=1, which helps keep it simple. Let that image look like this: ``` 00 01 02 03 04 ... 10 11 12 13 14 ... 20 21 22 23 24 ... 30 31 32 33 34 ... ... ``` Let's run a 2x2 convolution window over a single example (batch size = 1). We'll give the convolution an output channel depth of 8. The input to the convolution has shape=[1, 32, 32, 1]. If you specify strides=[1,1,1,1] with padding=SAME, then the output of the filter will be [1, 32, 32, 8]. The filter will first create an output for: ``` F(00 01 10 11) ``` And then for: ``` F(01 02 11 12) ``` and so on. Then it will move to the second row, calculating: ``` F(10, 11 20, 21) ``` then ``` F(11, 12 21, 22) ``` If you specify a stride of [1, 2, 2, 1] it won't do overlapping windows. It will compute: ``` F(00, 01 10, 11) ``` and then ``` F(02, 03 12, 13) ``` The stride operates similarly for the pooling operators. Question 2: Why strides [1, x, y, 1] for convnets The first 1 is the batch: You don't usually want to skip over examples in your batch, or you shouldn't have included them in the first place. :) The last 1 is the depth of the convolution: You don't usually want to skip inputs, for the same reason. The conv2d operator is more general, so you could create convolutions that slide the window along other dimensions, but that's not a typical use in convnets. The typical use is to use them spatially. 
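To make the stride arithmetic above concrete, here is a small pure-Python sketch (an illustration, not TensorFlow's implementation): with padding=SAME, the output spatial size along each strided dimension is ceil(input_size \/ stride). ```python
import math

def same_padding_output_size(input_size, stride):
    # With padding=SAME, TensorFlow pads the input so that the window
    # visits every position: output = ceil(input / stride).
    return math.ceil(input_size / stride)

# The 32x32 example above: stride [1, 1, 1, 1] keeps 32x32,
# while stride [1, 2, 2, 1] halves it to 16x16.
print(same_padding_output_size(32, 1))  # 32
print(same_padding_output_size(32, 2))  # 16
``` With padding=VALID the formula differs because no padding is added, but the SAME case is the one used in the example above.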
Why reshape to -1 -1 is a placeholder that says \"adjust as necessary to match the size needed for the full tensor.\" It's a way of making the code be independent of the input batch size, so that you can change your pipeline and not have to adjust the batch size everywhere in the code.", "best_answers_score":0.8, "library_name":"tensorflow", "question_url":"https:\/\/stackoverflow.com\/questions\/34642595\/tensorflow-strides-argument", "best_answers_votes":232, "question_length":562, "response_length":2217 }, { "question":"Can Keras with Tensorflow backend be forced to use CPU or GPU at will? I have Keras installed with the Tensorflow backend and CUDA. I'd like to sometimes on demand force Keras to use CPU. Can this be done without say installing a separate CPU-only Tensorflow in a virtual environment? If so how? If the backend were Theano, the flags could be set, but I have not heard of Tensorflow flags accessible via Keras.", "response":"If you want to force Keras to use CPU Way 1 ``` import os os.environ[\"CUDA_DEVICE_ORDER\"] = \"PCI_BUS_ID\" # see issue #152 os.environ[\"CUDA_VISIBLE_DEVICES\"] = \"\" ``` before Keras \/ Tensorflow is imported. Way 2 Run your script as ``` $ CUDA_VISIBLE_DEVICES=\"\" .\/your_keras_code.py ``` See also https:\/\/github.com\/keras-team\/keras\/issues\/152 https:\/\/github.com\/fchollet\/keras\/issues\/4613", "best_answers_score":0.8, "library_name":"tensorflow", "question_url":"https:\/\/stackoverflow.com\/questions\/40690598\/can-keras-with-tensorflow-backend-be-forced-to-use-cpu-or-gpu-at-will", "best_answers_votes":120, "question_length":410, "response_length":386 }, { "question":"Does model.compile() initialize all the weights and biases in Keras (tensorflow backend)? When I start training a model, there is no model saved previously. I can use model.compile() safely. I have now saved the model in a h5 file for further training using checkpoint. Say, I want to train the model further. 
I am confused at this point: can I use model.compile() here? And should it be placed before or after the model = load_model() statement? If model.compile() reinitializes all the weights and biases, I should place it before model = load_model() statement. After discovering some discussions, it seems to me that model.compile() is only needed when I have no model saved previously. Once I have saved the model, there is no need to use model.compile(). Is it true or false? And when I want to predict using the trained model, should I use model.compile() before predicting?", "response":"When to use? If you're using compile, surely it must be after load_model(). After all, you need a model to compile. (PS: load_model automatically compiles the model with the optimizer that was saved along with the model) What does compile do? Compile defines the loss function, the optimizer and the metrics. That's all. It has nothing to do with the weights and you can compile a model as many times as you want without causing any problem to pretrained weights. You need a compiled model to train (because training uses the loss function and the optimizer). But it's not necessary to compile a model for predicting. Do you need to use compile more than once? Only if: You want to change one of these: Loss function Optimizer \/ Learning rate Metrics The trainable property of some layer You loaded (or created) a model that is not compiled yet. Or your load\/save method didn't consider the previous compilation. Consequences of compiling again: If you compile a model again, you will lose the optimizer states. This means that your training will suffer a little at the beginning until it adjusts the learning rate, the momentums, etc. 
But there is absolutely no damage to the weights (unless, of course, your initial learning rate is so big that the first training step wildly changes the fine tuned weights).", "best_answers_score":0.8, "library_name":"tensorflow", "question_url":"https:\/\/stackoverflow.com\/questions\/47995324\/does-model-compile-initialize-all-the-weights-and-biases-in-keras-tensorflow", "best_answers_votes":231, "question_length":881, "response_length":1310 }, { "question":"What is the difference between sparse_categorical_crossentropy and categorical_crossentropy? What is the difference between sparse_categorical_crossentropy and categorical_crossentropy? When should one loss be used as opposed to the other? For example, are these losses suitable for linear regression?", "response":"Simply: categorical_crossentropy (cce) produces a one-hot array containing the probable match for each category, sparse_categorical_crossentropy (scce) produces a category index of the most likely matching category. Consider a classification problem with 5 categories (or classes). In the case of cce, the one-hot target may be [0, 1, 0, 0, 0] and the model may predict [.2, .5, .1, .1, .1] (probably right) In the case of scce, the target index may be [1] and the model may predict: [.5]. Consider now a classification problem with 3 classes. In the case of cce, the one-hot target might be [0, 0, 1] and the model may predict [.5, .1, .4] (probably inaccurate, given that it gives more probability to the first class) In the case of scce, the target index might be [0], and the model may predict [.5] Many categorical models produce scce output because you save space, but lose A LOT of information (for example, in the 2nd example, index 2 was also very close.) I generally prefer cce output for model reliability. There are a number of situations to use scce, including: when your classes are mutually exclusive, i.e. 
you don't care at all about other close-enough predictions, or the number of categories is so large that the one-hot prediction output becomes overwhelming. 220405: response to \"one-hot encoding\" comments: one-hot encoding is used for a category feature INPUT to select a specific category (e.g. male versus female). This encoding allows the model to train more efficiently: the training weight is a product of the category, which is 0 for all categories except for the given one. cce and scce are a model OUTPUT. cce is a probability array over the categories, totaling 1.0. scce shows the MOST LIKELY category, totaling 1.0. scce is technically a one-hot array, just like a hammer used as a door stop is still a hammer, but its purpose is different. cce is NOT one-hot.", "best_answers_score":0.8, "library_name":"tensorflow", "question_url":"https:\/\/stackoverflow.com\/questions\/58565394\/what-is-the-difference-between-sparse-categorical-crossentropy-and-categorical-c", "best_answers_votes":97, "question_length":301, "response_length":1864 }, { "question":"Can I use TensorBoard with Google Colab? Is there any way to use TensorBoard when training a TensorFlow model on Google Colab?", "response":"EDIT: You probably want to give the official %tensorboard magic a go, available from TensorFlow 1.13 onward. Prior to the existence of the %tensorboard magic, the standard way to achieve this was to proxy network traffic to the Colab VM using ngrok. A Colab example can be found here. These are the steps (the code snippets represent cells of type \"code\" in colab): Get TensorBoard running in the background. Inspired by this answer. ```py LOG_DIR = '\/tmp\/log' get_ipython().system_raw( 'tensorboard --logdir {} --host 0.0.0.0 --port 6006 &' .format(LOG_DIR) ) ``` Download and unzip ngrok. Replace the link passed to wget with the correct download link for your OS. ```py ! wget https:\/\/bin.equinox.io\/c\/4VmDzA7iaHb\/ngrok-stable-linux-amd64.zip !
unzip ngrok-stable-linux-amd64.zip ``` Launch ngrok background process... ```py get_ipython().system_raw('.\/ngrok http 6006 &') ``` ...and retrieve public url. Source ```py ! curl -s http:\/\/localhost:4040\/api\/tunnels | python3 -c \\ \"import sys, json; print(json.load(sys.stdin)['tunnels'][0]['public_url'])\" ```", "best_answers_score":0.8, "library_name":"tensorflow", "question_url":"https:\/\/stackoverflow.com\/questions\/47818822\/can-i-use-tensorboard-with-google-colab", "best_answers_votes":98, "question_length":126, "response_length":1059 }, { "question":"How to set adaptive learning rate for GradientDescentOptimizer? I am using TensorFlow to train a neural network. This is how I am initializing the GradientDescentOptimizer: ``` init = tf.initialize_all_variables() sess = tf.Session() sess.run(init) mse = tf.reduce_mean(tf.square(out - out_)) train_step = tf.train.GradientDescentOptimizer(0.3).minimize(mse) ``` The thing here is that I don't know how to set an update rule for the learning rate or a decay value for that. How can I use an adaptive learning rate here?", "response":"First of all, tf.train.GradientDescentOptimizer is designed to use a constant learning rate for all variables in all steps. TensorFlow also provides out-of-the-box adaptive optimizers including the tf.train.AdagradOptimizer and the tf.train.AdamOptimizer, and these can be used as drop-in replacements. However, if you want to control the learning rate with otherwise-vanilla gradient descent, you can take advantage of the fact that the learning_rate argument to the tf.train.GradientDescentOptimizer constructor can be a Tensor object. This allows you to compute a different value for the learning rate in each step, for example: ``` learning_rate = tf.placeholder(tf.float32, shape=[]) # ... train_step = tf.train.GradientDescentOptimizer( learning_rate=learning_rate).minimize(mse) sess = tf.Session() # Feed different values for learning rate to each training step. 
sess.run(train_step, feed_dict={learning_rate: 0.1}) sess.run(train_step, feed_dict={learning_rate: 0.1}) sess.run(train_step, feed_dict={learning_rate: 0.01}) sess.run(train_step, feed_dict={learning_rate: 0.01}) ``` Alternatively, you could create a scalar tf.Variable that holds the learning rate, and assign it each time you want to change the learning rate.", "best_answers_score":0.8, "library_name":"tensorflow", "question_url":"https:\/\/stackoverflow.com\/questions\/33919948\/how-to-set-adaptive-learning-rate-for-gradientdescentoptimizer", "best_answers_votes":197, "question_length":519, "response_length":1233 }, { "question":"TensorFlow saving into\/loading a graph from a file From what I've gathered so far, there are several different ways of dumping a TensorFlow graph into a file and then loading it into another program, but I haven't been able to find clear examples\/information on how they work. What I already know is this: Save the model's variables into a checkpoint file (.ckpt) using a tf.train.Saver() and restore them later (source) Save a model into a .pb file and load it back in using tf.train.write_graph() and tf.import_graph_def() (source) Load in a model from a .pb file, retrain it, and dump it into a new .pb file using Bazel (source) Freeze the graph to save the graph and weights together (source) Use as_graph_def() to save the model, and for weights\/variables, map them into constants (source) However, I haven't been able to clear up several questions regarding these different methods: Regarding checkpoint files, do they only save the trained weights of a model? Could checkpoint files be loaded into a new program, and be used to run the model, or do they simply serve as ways to save the weights in a model at a certain time\/stage? Regarding tf.train.write_graph(), are the weights\/variables saved as well? Regarding Bazel, can it only save into\/load from .pb files for retraining? Is there a simple Bazel command just to dump a graph into a .pb? 
Regarding freezing, can a frozen graph be loaded in using tf.import_graph_def()? The Android demo for TensorFlow loads in Google's Inception model from a .pb file. If I wanted to substitute my own .pb file, how would I go about doing that? Would I need to change any native code\/methods? In general, what exactly is the difference between all these methods? Or more broadly, what is the difference between as_graph_def()\/.ckpt\/.pb? In short, what I'm looking for is a method to save both a graph (as in, the various operations and such) and its weights\/variables into a file, which can then be used to load the graph and weights into another program, for use (not necessarily continuing\/retraining). Documentation about this topic isn't very straightforward, so any answers\/information would be greatly appreciated.", "response":"There are many ways to approach the problem of saving a model in TensorFlow, which can make it a bit confusing. Taking each of your sub-questions in turn: The checkpoint files (produced e.g. by calling saver.save() on a tf.train.Saver object) contain only the weights, and any other variables defined in the same program. To use them in another program, you must re-create the associated graph structure (e.g. by running code to build it again, or calling tf.import_graph_def()), which tells TensorFlow what to do with those weights. Note that calling saver.save() also produces a file containing a MetaGraphDef, which contains a graph and details of how to associate the weights from a checkpoint with that graph. See the tutorial for more details. tf.train.write_graph() only writes the graph structure; not the weights. Bazel is unrelated to reading or writing TensorFlow graphs. (Perhaps I misunderstand your question: feel free to clarify it in a comment.) A frozen graph can be loaded using tf.import_graph_def(). In this case, the weights are (typically) embedded in the graph, so you don't need to load a separate checkpoint. 
The main change would be to update the names of the tensor(s) that are fed into the model, and the names of the tensor(s) that are fetched from the model. In the TensorFlow Android demo, this would correspond to the inputName and outputName strings that are passed to TensorFlowClassifier.initializeTensorFlow(). The GraphDef is the program structure, which typically does not change through the training process. The checkpoint is a snapshot of the state of a training process, which typically changes at every step of the training process. As a result, TensorFlow uses different storage formats for these types of data, and the low-level API provides different ways to save and load them. Higher-level libraries, such as the MetaGraphDef libraries, Keras, and skflow build on these mechanisms to provide more convenient ways to save and restore an entire model.", "best_answers_score":0.8, "library_name":"tensorflow", "question_url":"https:\/\/stackoverflow.com\/questions\/38947658\/tensorflow-saving-into-loading-a-graph-from-a-file", "best_answers_votes":87, "question_length":2168, "response_length":1997 }, { "question":"AttributeError: 'Tensor' object has no attribute 'numpy' I downloaded this code from GitHub. ``` predicted_id = tf.multinomial(tf.exp(predictions), num_samples=1)[0][0].numpy() ``` But I get an error that says: ``` AttributeError: 'Tensor' object has no attribute 'numpy' ``` What is wrong, and how do I fix it?", "response":"Since the accepted answer did not solve the problem for me so I thought it might be helpful for some people who face the problem and that already have tensorflow version >= 2.2.0 and eager execution enabled. The issue seems to be that for certain functions during the fitting model.fit() the @tf.function decorator prohibits the execution of functions like tensor.numpy() for performance reasons. 
The solution for me was to pass the flag run_eagerly=True to the model.compile() like this: ``` model.compile(..., run_eagerly=True) ```", "best_answers_score":0.8, "library_name":"tensorflow", "question_url":"https:\/\/stackoverflow.com\/questions\/52357542\/attributeerror-tensor-object-has-no-attribute-numpy", "best_answers_votes":104, "question_length":311, "response_length":533 }, { "question":"What does global_step mean in Tensorflow? In this is tutorial code from TensorFlow website, could anyone help explain what does global_step mean? I found on the Tensorflow website written that global step is used count training steps, but I don't quite get what exactly it means. Also, what does the number 0 mean when setting up global_step? ```py def training(loss,learning_rate): tf.summary.scalar('loss',loss) optimizer = tf.train.GradientDescentOptimizer(learning_rate) # Why 0 as the first parameter of the global_step tf.Variable? global_step = tf.Variable(0, name='global_step',trainable=False) train_op = optimizer.minimize(loss, global_step=global_step) return train_op ``` According to Tensorflow doc global_step: increment by one after the variables have been updated. Does that mean after one update global_step becomes 1?", "response":"global_step refers to the number of batches seen by the graph. Every time a batch is provided, the weights are updated in the direction that minimizes the loss. global_step just keeps track of the number of batches seen so far. When it is passed in the minimize() argument list, the variable is increased by one. Have a look at optimizer.minimize(). You can get the global_step value using tf.train.global_step(). Also handy are the utility methods tf.train.get_global_step or tf.train.get_or_create_global_step. 
0 is the initial value of the global step in this context.", "best_answers_score":0.8, "library_name":"tensorflow", "question_url":"https:\/\/stackoverflow.com\/questions\/41166681\/what-does-global-step-mean-in-tensorflow", "best_answers_votes":124, "question_length":835, "response_length":571 }, { "question":"Tensorflow set CUDA_VISIBLE_DEVICES within jupyter I have two GPUs and would like to run two different networks via ipynb simultaneously, however the first notebook always allocates both GPUs. Using CUDA_VISIBLE_DEVICES, I can hide devices for python files, however I am unsure of how to do so within a notebook. Is there anyway to hide different GPUs in to notebooks running on the same server?", "response":"You can set environment variables in the notebook using os.environ. Do the following before initializing TensorFlow to limit TensorFlow to first GPU. ``` import os os.environ[\"CUDA_DEVICE_ORDER\"]=\"PCI_BUS_ID\" # see issue #152 os.environ[\"CUDA_VISIBLE_DEVICES\"]=\"0\" ``` You can double check that you have the correct devices visible to TF ``` from tensorflow.python.client import device_lib print device_lib.list_local_devices() ``` I tend to use it from utility module like notebook_util ``` import notebook_util notebook_util.pick_gpu_lowest_memory() import tensorflow as tf ```", "best_answers_score":0.8, "library_name":"tensorflow", "question_url":"https:\/\/stackoverflow.com\/questions\/37893755\/tensorflow-set-cuda-visible-devices-within-jupyter", "best_answers_votes":206, "question_length":395, "response_length":579 }, { "question":"How to get Tensorflow tensor dimensions (shape) as int values? Suppose I have a Tensorflow tensor. How do I get the dimensions (shape) of the tensor as integer values? I know there are two methods, tensor.get_shape() and tf.shape(tensor), but I can't get the shape values as integer int32 values. 
For example, below I've created a 2-D tensor, and I need to get the number of rows and columns as int32 so that I can call reshape() to create a tensor of shape (num_rows * num_cols, 1). However, the method tensor.get_shape() returns values as Dimension type, not int32. ``` import tensorflow as tf import numpy as np sess = tf.Session() tensor = tf.convert_to_tensor(np.array([[1001,1002,1003],[3,4,5]]), dtype=tf.float32) sess.run(tensor) # array([[ 1001., 1002., 1003.], # [ 3., 4., 5.]], dtype=float32) tensor_shape = tensor.get_shape() tensor_shape # TensorShape([Dimension(2), Dimension(3)]) print tensor_shape # (2, 3) num_rows = tensor_shape[0] # ??? num_cols = tensor_shape[1] # ??? tensor2 = tf.reshape(tensor, (num_rows*num_cols, 1)) # Traceback (most recent call last): # File \"\", line 1, in # File \"\/usr\/local\/lib\/python2.7\/site-packages\/tensorflow\/python\/ops\/gen_array_ops.py\", line 1750, in reshape # name=name) # File \"\/usr\/local\/lib\/python2.7\/site-packages\/tensorflow\/python\/framework\/op_def_library.py\", line 454, in apply_op # as_ref=input_arg.is_ref) # File \"\/usr\/local\/lib\/python2.7\/site-packages\/tensorflow\/python\/framework\/ops.py\", line 621, in convert_to_tensor # ret = conversion_func(value, dtype=dtype, name=name, as_ref=as_ref) # File \"\/usr\/local\/lib\/python2.7\/site-packages\/tensorflow\/python\/framework\/constant_op.py\", line 180, in _constant_tensor_conversion_function # return constant(v, dtype=dtype, name=name) # File \"\/usr\/local\/lib\/python2.7\/site-packages\/tensorflow\/python\/framework\/constant_op.py\", line 163, in constant # tensor_util.make_tensor_proto(value, dtype=dtype, shape=shape)) # File \"\/usr\/local\/lib\/python2.7\/site-packages\/tensorflow\/python\/framework\/tensor_util.py\", line 353, in make_tensor_proto # _AssertCompatible(values, dtype) # File \"\/usr\/local\/lib\/python2.7\/site-packages\/tensorflow\/python\/framework\/tensor_util.py\", line 290, in _AssertCompatible # (dtype.name, 
repr(mismatch), type(mismatch).__name__)) # TypeError: Expected int32, got Dimension(6) of type 'Dimension' instead. ```", "response":"To get the shape as a list of ints, do tensor.get_shape().as_list(). To complete your tf.shape() call, try tensor2 = tf.reshape(tensor, tf.TensorShape([num_rows*num_cols, 1])). Or you can directly do tensor2 = tf.reshape(tensor, tf.TensorShape([-1, 1])) where its first dimension can be inferred.", "best_answers_score":0.8, "library_name":"tensorflow", "question_url":"https:\/\/stackoverflow.com\/questions\/40666316\/how-to-get-tensorflow-tensor-dimensions-shape-as-int-values", "best_answers_votes":147, "question_length":2337, "response_length":296 }, { "question":"How to solve \"AttributeError: module 'google.protobuf.descriptor' has no attribute '_internal_create_key\"? I encountered it while executing from object_detection.utils import label_map_util in jupyter notebook. It is actually the tensorflow object detection tutorial notebook(it comes with the tensorflow object detection api) The complete error log: ```none AttributeError Traceback (most recent call last) in 1 from object_detection.utils import ops as utils_ops ----> 2 from object_detection.utils import label_map_util 3 from object_detection.utils import visualization_utils as vis_util ~\\AppData\\Roaming\\Python\\Python37\\site-packages\\object_detection\\utils\\label_map_util.py in 25 import tensorflow as tf 26 from google.protobuf import text_format ---> 27 from object_detection.protos import string_int_label_map_pb2 28 29 ~\\AppData\\Roaming\\Python\\Python37\\site-packages\\object_detection\\protos\\string_int_label_map_pb2.py in 19 syntax='proto2', 20 serialized_options=None, ---> 21 create_key=_descriptor._internal_create_key, 22 serialized_pb=b'\\n2object_detection\/protos\/string_int_label_map.proto\\x12\\x17object_detection.protos\\\"\\xc0\\x01\\n\\x15StringIntLabelMapItem\\x12\\x0c\\n\\x04name\\x18\\x01 \\x01(\\t\\x12\\n\\n\\x02id\\x18\\x02 
\\x01(\\x05\\x12\\x14\\n\\x0c\\x64isplay_name\\x18\\x03 \\x01(\\t\\x12M\\n\\tkeypoints\\x18\\x04 \\x03(\\x0b\\x32:.object_detection.protos.StringIntLabelMapItem.KeypointMap\\x1a(\\n\\x0bKeypointMap\\x12\\n\\n\\x02id\\x18\\x01 \\x01(\\x05\\x12\\r\\n\\x05label\\x18\\x02 \\x01(\\t\\\"Q\\n\\x11StringIntLabelMap\\x12<\\n\\x04item\\x18\\x01 \\x03(\\x0b\\x32..object_detection.protos.StringIntLabelMapItem' 23 ) AttributeError: module 'google.protobuf.descriptor' has no attribute '_internal_create_key' ```", "response":"The protoc version I got through pip show protobuf and protoc --version were different. The version in pip was a bit outdated. After I upgraded the pip version with ```sh pip install --upgrade protobuf ``` the problem was solved.", "best_answers_score":0.8, "library_name":"tensorflow", "question_url":"https:\/\/stackoverflow.com\/questions\/61922334\/how-to-solve-attributeerror-module-google-protobuf-descriptor-has-no-attribu", "best_answers_votes":199, "question_length":1686, "response_length":229 }, { "question":"When importing tensorflow, I get the following error: No module named 'numpy.core._multiarray_umath' I have installed Ancaconda3 and Tensorflow. When I try to import Tensorflow in python shell I receive the following error: ModuleNotFoundError: No module named 'numpy.core._multiarray_umath' ImportError: numpy.core.multiarray failed to import The above exception was the direct cause of the following exception: Traceback (most recent call last): File \"\", line 980, in _find_and_load SystemError: returned a result with an error set ImportError: numpy.core._multiarray_umath failed to import ImportError: numpy.core.umath failed to import I am not sure what the problem is as numpy is installed on my system and can be successfully imported in python. I am using Windows10.", "response":"I also had the same issue. It got resloved once I upgraded the numpy from 1.15.4 to 1.16.1. 
If you're using pip: pip install numpy --upgrade Numpy that came with Anaconda3 is of version 1.15.4. so i upgraded and it worked. Side note: if you're also using scikit-image in your script, be aware that numpy 1.16.3 has a conflict with old versions of scikit-image (e.g. you may get ImportError: cannot import name '_validate_lengths'). In that case, pip install --upgrade scikit-image from terminal solved the issue for me.", "best_answers_score":0.8, "library_name":"tensorflow", "question_url":"https:\/\/stackoverflow.com\/questions\/54665842\/when-importing-tensorflow-i-get-the-following-error-no-module-named-numpy-cor", "best_answers_votes":142, "question_length":775, "response_length":519 }, { "question":"What does batch, repeat, and shuffle do with TensorFlow Dataset? I'm currently learning TensorFlow but I came across a confusion in the below code snippet: ``` dataset = dataset.shuffle(buffer_size = 10 * batch_size) dataset = dataset.repeat(num_epochs).batch(batch_size) return dataset.make_one_shot_iterator().get_next() ``` I know that first the dataset will hold all the data but what shuffle(),repeat(), and batch() do to the dataset? Please help me with an example and explanation.", "response":"Update: Here is a small collaboration notebook for demonstration of this answer. Imagine, you have a dataset: [1, 2, 3, 4, 5, 6], then: How ds.shuffle() works dataset.shuffle(buffer_size=3) will allocate a buffer of size 3 for picking random entries. This buffer will be connected to the source dataset. We could image it like this: ``` Random buffer | | Source dataset where all other elements live | | \u2193 \u2193 [1,2,3] <= [4,5,6] ``` Let's assume that entry 2 was taken from the random buffer. 
Free space is filled by the next element from the source buffer, that is, 4: ``` 2 <= [1,3,4] <= [5,6] ``` We continue reading until nothing is left: ``` 1 <= [3,4,5] <= [6] 5 <= [3,4,6] <= [] 3 <= [4,6] <= [] 6 <= [4] <= [] 4 <= [] <= [] ``` How ds.repeat() works As soon as all the entries are read from the dataset and you try to read the next element, the dataset will throw an error. That's where ds.repeat() comes into play. It will re-initialize the dataset, making it again like this: ``` [1,2,3] <= [4,5,6] ``` What will ds.batch() produce The ds.batch() will take the first batch_size entries and make a batch out of them. So, a batch size of 3 for our example dataset will produce two batch records: ``` [2,1,5] [3,6,4] ``` As we have a ds.repeat() before the batch, the generation of the data will continue. But the order of the elements will be different, due to the ds.shuffle(). What should be taken into account is that 6 will never be present in the first batch, due to the size of the random buffer.", "best_answers_score":0.8, "library_name":"tensorflow", "question_url":"https:\/\/stackoverflow.com\/questions\/53514495\/what-does-batch-repeat-and-shuffle-do-with-tensorflow-dataset", "best_answers_votes":166, "question_length":487, "response_length":1505 }, { "question":"Using a pre-trained word embedding (word2vec or Glove) in TensorFlow I've recently reviewed an interesting implementation for convolutional text classification.
However, all TensorFlow code I've reviewed uses random (not pre-trained) embedding vectors like the following: ``` with tf.device('\/cpu:0'), tf.name_scope(\"embedding\"): W = tf.Variable( tf.random_uniform([vocab_size, embedding_size], -1.0, 1.0), name=\"W\") self.embedded_chars = tf.nn.embedding_lookup(W, self.input_x) self.embedded_chars_expanded = tf.expand_dims(self.embedded_chars, -1) ``` Does anybody know how to use the results of Word2vec or a GloVe pre-trained word embedding instead of a random one?", "response":"There are a few ways that you can use a pre-trained embedding in TensorFlow. Let's say that you have the embedding in a NumPy array called embedding, with vocab_size rows and embedding_dim columns and you want to create a tensor W that can be used in a call to tf.nn.embedding_lookup(). Simply create W as a tf.constant() that takes embedding as its value: ``` W = tf.constant(embedding, name=\"W\") ``` This is the easiest approach, but it is not memory efficient because the value of a tf.constant() is stored multiple times in memory. Since embedding can be very large, you should only use this approach for toy examples. Create W as a tf.Variable and initialize it from the NumPy array via a tf.placeholder(): ``` W = tf.Variable(tf.constant(0.0, shape=[vocab_size, embedding_dim]), trainable=False, name=\"W\") embedding_placeholder = tf.placeholder(tf.float32, [vocab_size, embedding_dim]) embedding_init = W.assign(embedding_placeholder) # ... sess = tf.Session() sess.run(embedding_init, feed_dict={embedding_placeholder: embedding}) ``` This avoids storing a copy of embedding in the graph, but it does require enough memory to keep two copies of the matrix in memory at once (one for the NumPy array, and one for the tf.Variable). Note that I've assumed that you want to hold the embedding matrix constant during training, so W is created with trainable=False.
If the embedding was trained as part of another TensorFlow model, you can use a tf.train.Saver to load the value from the other model's checkpoint file. This means that the embedding matrix can bypass Python altogether. Create W as in option 2, then do the following: ``` W = tf.Variable(...) embedding_saver = tf.train.Saver({\"name_of_variable_in_other_model\": W}) # ... sess = tf.Session() embedding_saver.restore(sess, \"checkpoint_filename.ckpt\") ```", "best_answers_score":0.8, "library_name":"tensorflow", "question_url":"https:\/\/stackoverflow.com\/questions\/35687678\/using-a-pre-trained-word-embedding-word2vec-or-glove-in-tensorflow", "best_answers_votes":132, "question_length":670, "response_length":1819 }, { "question":"How to choose cross-entropy loss in TensorFlow? Classification problems, such as logistic regression or multinomial logistic regression, optimize a cross-entropy loss. Normally, the cross-entropy layer follows the softmax layer, which produces probability distribution. In tensorflow, there are at least a dozen of different cross-entropy loss functions: tf.losses.softmax_cross_entropy tf.losses.sparse_softmax_cross_entropy tf.losses.sigmoid_cross_entropy tf.contrib.losses.softmax_cross_entropy tf.contrib.losses.sigmoid_cross_entropy tf.nn.softmax_cross_entropy_with_logits tf.nn.sigmoid_cross_entropy_with_logits ... Which one works only for binary classification and which are suitable for multi-class problems? When should you use sigmoid instead of softmax? How are sparse functions different from others and why is it only softmax? Related (more math-oriented) discussion: What are the differences between all these cross-entropy losses in Keras and TensorFlow?.", "response":"Preliminary facts In functional sense, the sigmoid is a partial case of the softmax function, when the number of classes equals 2. Both of them do the same operation: transform the logits (see below) to probabilities. 
In simple binary classification, there's no big difference between the two, however in case of multinomial classification, sigmoid allows to deal with non-exclusive labels (a.k.a. multi-labels), while softmax deals with exclusive classes (see below). A logit (also called a score) is a raw unscaled value associated with a class, before computing the probability. In terms of neural network architecture, this means that a logit is an output of a dense (fully-connected) layer. Tensorflow naming is a bit strange: all of the functions below accept logits, not probabilities, and apply the transformation themselves (which is simply more efficient). Sigmoid functions family tf.nn.sigmoid_cross_entropy_with_logits tf.nn.weighted_cross_entropy_with_logits tf.losses.sigmoid_cross_entropy tf.contrib.losses.sigmoid_cross_entropy (DEPRECATED) As stated earlier, sigmoid loss function is for binary classification. But tensorflow functions are more general and allow to do multi-label classification, when the classes are independent. In other words, tf.nn.sigmoid_cross_entropy_with_logits solves N binary classifications at once. The labels must be one-hot encoded or can contain soft class probabilities. tf.losses.sigmoid_cross_entropy in addition allows to set the in-batch weights, i.e. make some examples more important than others. tf.nn.weighted_cross_entropy_with_logits allows to set class weights (remember, the classification is binary), i.e. make positive errors larger than negative errors. This is useful when the training data is unbalanced. Softmax functions family tf.nn.softmax_cross_entropy_with_logits (DEPRECATED IN 1.5) tf.nn.softmax_cross_entropy_with_logits_v2 tf.losses.softmax_cross_entropy tf.contrib.losses.softmax_cross_entropy (DEPRECATED) These loss functions should be used for multinomial mutually exclusive classification, i.e. pick one out of N classes. Also applicable when N = 2. 
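Concretely, what these softmax losses compute from raw logits can be sketched in plain NumPy (my own sketch; the real TensorFlow ops fuse this into one numerically stable step):

```python
import numpy as np

logits = np.array([2.0, 1.0, 0.1])
labels = np.array([1.0, 0.0, 0.0])   # one-hot target (could also be soft probabilities)

# Stable log-softmax: subtract the max logit before exponentiating.
shifted = logits - logits.max()
log_softmax = shifted - np.log(np.exp(shifted).sum())
loss = -(labels * log_softmax).sum()

# Same value as the naive probabilities-then-log computation:
naive = -(labels * np.log(np.exp(logits) / np.exp(logits).sum())).sum()
assert abs(loss - naive) < 1e-12
```

This also shows why the functions take logits rather than probabilities: fusing the transform avoids computing exponentials twice and sidesteps overflow.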
The labels must be one-hot encoded or can contain soft class probabilities: a particular example can belong to class A with 50% probability and class B with 50% probability. Note that strictly speaking it doesn't mean that it belongs to both classes, but one can interpret the probabilities this way. Just like in sigmoid family, tf.losses.softmax_cross_entropy allows to set the in-batch weights, i.e. make some examples more important than others. As far as I know, as of tensorflow 1.3, there's no built-in way to set class weights. [UPD] In tensorflow 1.5, v2 version was introduced and the original softmax_cross_entropy_with_logits loss got deprecated. The only difference between them is that in a newer version, backpropagation happens into both logits and labels (here's a discussion why this may be useful). Sparse functions family tf.nn.sparse_softmax_cross_entropy_with_logits tf.losses.sparse_softmax_cross_entropy tf.contrib.losses.sparse_softmax_cross_entropy (DEPRECATED) Like ordinary softmax above, these loss functions should be used for multinomial mutually exclusive classification, i.e. pick one out of N classes. The difference is in labels encoding: the classes are specified as integers (class index), not one-hot vectors. Obviously, this doesn't allow soft classes, but it can save some memory when there are thousands or millions of classes. However, note that logits argument must still contain logits per each class, thus it consumes at least [batch_size, classes] memory. Like above, tf.losses version has a weights argument which allows to set the in-batch weights. Sampled softmax functions family tf.nn.sampled_softmax_loss tf.contrib.nn.rank_sampled_softmax_loss tf.nn.nce_loss These functions provide another alternative for dealing with huge number of classes. Instead of computing and comparing an exact probability distribution, they compute a loss estimate from a random sample. 
The arguments weights and biases specify a separate fully-connected layer that is used to compute the logits for a chosen sample. Like above, labels are not one-hot encoded, but have the shape [batch_size, num_true]. Sampled functions are only suitable for training. In test time, it's recommended to use a standard softmax loss (either sparse or one-hot) to get an actual distribution. Another alternative loss is tf.nn.nce_loss, which performs noise-contrastive estimation (if you're interested, see this very detailed discussion). I've included this function to the softmax family, because NCE guarantees approximation to softmax in the limit.", "best_answers_score":0.8, "library_name":"tensorflow", "question_url":"https:\/\/stackoverflow.com\/questions\/47034888\/how-to-choose-cross-entropy-loss-in-tensorflow", "best_answers_votes":151, "question_length":971, "response_length":4697 }, { "question":"How do display different runs in TensorBoard? TensorBoard seems to have a feature to display multiple different runs and toggle them. How can I make multiple runs show up here and how can assign a name to them to differentiate them?", "response":"In addition to TensorBoard scanning subdirectories (so you can pass a directory containing the directories with your runs), you can also pass multiple directories to TensorBoard explicitly and give custom names (example taken from the --help output): ``` tensorboard --logdir=name1:\/path\/to\/logs\/1,name2:\/path\/to\/logs\/2 ``` More information can be found at the TensorBoard documentation. In recent versions of TensorBoard, aliasing this way requires a different argument, however its use is discouraged (quote from current documentation on github - linked above): Logdir & Logdir_spec (Legacy Mode) You may also pass a comma separated list of log directories, and TensorBoard will watch each directory. 
You can also assign names to individual log directories by putting a colon between the name and the path, as in tensorboard --logdir_spec name1:\/path\/to\/logs\/1,name2:\/path\/to\/logs\/2 This flag (--logdir_spec) is discouraged and can usually be avoided. TensorBoard walks log directories recursively; for finer-grained control, prefer using a symlink tree. Some features may not work when using --logdir_spec instead of --logdir.", "best_answers_score":0.8, "library_name":"tensorflow", "question_url":"https:\/\/stackoverflow.com\/questions\/36182380\/how-do-display-different-runs-in-tensorboard", "best_answers_votes":161, "question_length":232, "response_length":1129 }, { "question":"How to *actually* read CSV data in TensorFlow? I'm relatively new to the world of TensorFlow, and pretty perplexed by how you'd actually read CSV data into a usable example\/label tensors in TensorFlow. The example from the TensorFlow tutorial on reading CSV data is pretty fragmented and only gets you part of the way to being able to train on CSV data. 
Here's my code that I've pieced together, based off that CSV tutorial: ``` from __future__ import print_function import tensorflow as tf def file_len(fname): with open(fname) as f: for i, l in enumerate(f): pass return i + 1 filename = \"csv_test_data.csv\" # setup text reader file_length = file_len(filename) filename_queue = tf.train.string_input_producer([filename]) reader = tf.TextLineReader(skip_header_lines=1) _, csv_row = reader.read(filename_queue) # setup CSV decoding record_defaults = [[0],[0],[0],[0],[0]] col1,col2,col3,col4,col5 = tf.decode_csv(csv_row, record_defaults=record_defaults) # turn features back into a tensor features = tf.stack([col1,col2,col3,col4]) print(\"loading, \" + str(file_length) + \" line(s)\\n\") with tf.Session() as sess: tf.initialize_all_variables().run() # start populating filename queue coord = tf.train.Coordinator() threads = tf.train.start_queue_runners(coord=coord) for i in range(file_length): # retrieve a single instance example, label = sess.run([features, col5]) print(example, label) coord.request_stop() coord.join(threads) print(\"\\ndone loading\") ``` And here is a brief example from the CSV file I'm loading - pretty basic data - 4 feature columns, and 1 label column: ``` 0,0,0,0,0 0,15,0,0,0 0,30,0,0,0 0,45,0,0,0 ``` All the code above does is print each example from the CSV file, one by one, which, while nice, is pretty darn useless for training. What I'm struggling with here is how you'd actually turn those individual examples, loaded one-by-one, into a training dataset. For example, here's a notebook I was working on in the Udacity Deep Learning course. I basically want to take the CSV data I'm loading, and plop it into something like train_dataset and train_labels: ``` def reformat(dataset, labels): dataset = dataset.reshape((-1, image_size * image_size)).astype(np.float32) # Map 2 to [0.0, 1.0, 0.0 ...], 3 to [0.0, 0.0, 1.0 ...]
labels = (np.arange(num_labels) == labels[:,None]).astype(np.float32) return dataset, labels train_dataset, train_labels = reformat(train_dataset, train_labels) valid_dataset, valid_labels = reformat(valid_dataset, valid_labels) test_dataset, test_labels = reformat(test_dataset, test_labels) print('Training set', train_dataset.shape, train_labels.shape) print('Validation set', valid_dataset.shape, valid_labels.shape) print('Test set', test_dataset.shape, test_labels.shape) ``` I've tried using tf.train.shuffle_batch, like this, but it just inexplicably hangs: ``` for i in range(file_length): # retrieve a single instance example, label = sess.run([features, colRelevant]) example_batch, label_batch = tf.train.shuffle_batch([example, label], batch_size=file_length, capacity=file_length, min_after_dequeue=10000) print(example, label) ``` So to sum up, here are my questions: What am I missing about this process? It feels like there is some key intuition that I'm missing about how to properly build an input pipeline. Is there a way to avoid having to know the length of the CSV file? It feels pretty inelegant to have to know the number of lines you want to process (the for i in range(file_length) line of code above) Edit: As soon as Yaroslav pointed out that I was likely mixing up imperative and graph-construction parts here, it started to become clearer. 
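As an aside (my own illustration), the one-hot broadcasting trick used in the reformat function shown above is easy to verify in isolation with plain NumPy:

```python
import numpy as np

num_labels = 3
labels = np.array([0, 2, 1])

# Broadcasting an (N,) range against an (M, 1) label column gives an
# (M, N) boolean matrix with True exactly where column index == label.
one_hot = (np.arange(num_labels) == labels[:, None]).astype(np.float32)

assert one_hot.shape == (3, 3)
assert one_hot.tolist() == [[1.0, 0.0, 0.0], [0.0, 0.0, 1.0], [0.0, 1.0, 0.0]]
```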
I was able to pull together the following code, which I think is closer to what would typically be done when training a model from CSV (excluding any model training code): ``` from __future__ import print_function import numpy as np import tensorflow as tf import math as math import argparse parser = argparse.ArgumentParser() parser.add_argument('dataset') args = parser.parse_args() def file_len(fname): with open(fname) as f: for i, l in enumerate(f): pass return i + 1 def read_from_csv(filename_queue): reader = tf.TextLineReader(skip_header_lines=1) _, csv_row = reader.read(filename_queue) record_defaults = [[0],[0],[0],[0],[0]] colHour,colQuarter,colAction,colUser,colLabel = tf.decode_csv(csv_row, record_defaults=record_defaults) features = tf.stack([colHour,colQuarter,colAction,colUser]) label = tf.stack([colLabel]) return features, label def input_pipeline(batch_size, num_epochs=None): filename_queue = tf.train.string_input_producer([args.dataset], num_epochs=num_epochs, shuffle=True) example, label = read_from_csv(filename_queue) min_after_dequeue = 10000 capacity = min_after_dequeue + 3 * batch_size example_batch, label_batch = tf.train.shuffle_batch( [example, label], batch_size=batch_size, capacity=capacity, min_after_dequeue=min_after_dequeue) return example_batch, label_batch file_length = file_len(args.dataset) - 1 examples, labels = input_pipeline(file_length, 1) with tf.Session() as sess: tf.initialize_all_variables().run() # start populating filename queue coord = tf.train.Coordinator() threads = tf.train.start_queue_runners(coord=coord) try: while not coord.should_stop(): example_batch, label_batch = sess.run([examples, labels]) print(example_batch) except tf.errors.OutOfRangeError: print('Done training, epoch reached') finally: coord.request_stop() coord.join(threads) ```
The operation tf.train.shuffle_batch creates a new queue node, and a single node can be used to process the entire dataset. So I think you are hanging because you created a bunch of shuffle_batch queues in your for loop and didn't start queue runners for them. Normal input pipeline usage looks like this: Add nodes like shuffle_batch to input pipeline (optional, to prevent unintentional graph modification) finalize graph --- end of graph construction, beginning of imperative programming -- tf.train.start_queue_runners while(True): session.run() To be more scalable (to avoid Python GIL), you could generate all of your data using the TensorFlow pipeline. However, if performance is not critical, you can hook up a numpy array to an input pipeline by using slice_input_producer. Here's an example with some Print nodes to see what's going on (messages in Print go to stdout when node is run): ``` import numpy as np import tensorflow as tf tf.reset_default_graph() num_examples = 5 num_features = 2 data = np.reshape(np.arange(num_examples*num_features), (num_examples, num_features)) print data (data_node,) = tf.train.slice_input_producer([tf.constant(data)], num_epochs=1, shuffle=False) data_node_debug = tf.Print(data_node, [data_node], \"Dequeueing from data_node \") data_batch = tf.train.batch([data_node_debug], batch_size=2) data_batch_debug = tf.Print(data_batch, [data_batch], \"Dequeueing from data_batch \") sess = tf.InteractiveSession() sess.run(tf.initialize_all_variables()) tf.get_default_graph().finalize() tf.train.start_queue_runners() try: while True: print sess.run(data_batch_debug) except tf.errors.OutOfRangeError as e: print \"No more inputs.\" ``` You should see something like this ``` [[0 1] [2 3] [4 5] [6 7] [8 9]] [[0 1] [2 3]] [[4 5] [6 7]] No more inputs. ``` The \"8, 9\" numbers didn't fill up the full batch, so they didn't get produced. Also, tf.Print messages are printed to sys.stdout, so they show up separately in the terminal for me.
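The batching behavior above (full batches only, with the leftover 8, 9 row dropped at the end of the epoch) can be reproduced without any queues; here is my own plain-NumPy sketch of what the pipeline yields:

```python
import numpy as np

num_examples, num_features, batch_size = 5, 2, 2
data = np.arange(num_examples * num_features).reshape(num_examples, num_features)

# Like the queue pipeline: emit only full batches, drop the remainder.
batches = [data[i:i + batch_size]
           for i in range(0, num_examples - batch_size + 1, batch_size)]

assert len(batches) == 2
assert batches[0].tolist() == [[0, 1], [2, 3]]
assert batches[1].tolist() == [[4, 5], [6, 7]]
```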
PS: a minimal example of connecting a batch to a manually initialized queue is in github issue 2193 Also, for debugging purposes you might want to set a timeout on your session so that your IPython notebook doesn't hang on empty queue dequeues. I use this helper function for my sessions: ``` def create_session(): config = tf.ConfigProto(log_device_placement=True) config.gpu_options.per_process_gpu_memory_fraction=0.3 # don't hog all vRAM config.operation_timeout_in_ms=60000 # terminate on long hangs # create interactive session to register a default session sess = tf.InteractiveSession(\"\", config=config) return sess ``` Scalability Notes: tf.constant inlines a copy of your data into the Graph. There's a fundamental limit of 2GB on the size of the Graph definition, so that's an upper limit on the size of the data. You could get around that limit by using v=tf.Variable and saving the data into it by running v.assign_op with a tf.placeholder on the right-hand side and feeding a numpy array to the placeholder (feed_dict). That still creates two copies of the data, so to save memory you could make your own version of slice_input_producer which operates on numpy arrays, and uploads rows one at a time using feed_dict", "best_answers_score":0.8, "library_name":"tensorflow", "question_url":"https:\/\/stackoverflow.com\/questions\/37091899\/how-to-actually-read-csv-data-in-tensorflow", "best_answers_votes":28, "question_length":5446, "response_length":3152 }, { "question":"Failed to get convolution algorithm. This is probably because cuDNN failed to initialize, In Tensorflow\/ Keras when running the code from https:\/\/github.com\/pierluigiferrari\/ssd_keras, use the estimator: ssd300_evaluation. I received this error. Failed to get convolution algorithm. This is probably because cuDNN failed to initialize, so try looking to see if a warning log message was printed above.
This is very similar to the unsolved question: Google Colab Error : Failed to get convolution algorithm. This is probably because cuDNN failed to initialize. For this issue, I'm running: python: 3.6.4. Tensorflow Version: 1.12.0. Keras Version: 2.2.4. CUDA: V10.0. cuDNN: V7.4.1.5. NVIDIA GeForce GTX 1080. Also I ran: ``` import tensorflow as tf with tf.device('\/gpu:0'): a = tf.constant([1.0, 2.0, 3.0, 4.0, 5.0, 6.0], shape=[2, 3], name='a') b = tf.constant([1.0, 2.0, 3.0, 4.0, 5.0, 6.0], shape=[3, 2], name='b') c = tf.matmul(a, b) with tf.Session() as sess: print (sess.run(c)) ``` With no errors or issues. The minimalist example is: ``` from keras import backend as K from keras.models import load_model from keras.optimizers import Adam from scipy.misc import imread import numpy as np from matplotlib import pyplot as plt from models.keras_ssd300 import ssd_300 from keras_loss_function.keras_ssd_loss import SSDLoss from keras_layers.keras_layer_AnchorBoxes import AnchorBoxes from keras_layers.keras_layer_DecodeDetections import DecodeDetections from keras_layers.keras_layer_DecodeDetectionsFast import DecodeDetectionsFast from keras_layers.keras_layer_L2Normalization import L2Normalization from data_generator.object_detection_2d_data_generator import DataGenerator from eval_utils.average_precision_evaluator import Evaluator import tensorflow as tf %matplotlib inline import keras keras.__version__ # Set a few configuration parameters. img_height = 300 img_width = 300 n_classes = 20 model_mode = 'inference' K.clear_session() # Clear previous models from memory.
model = ssd_300(image_size=(img_height, img_width, 3), n_classes=n_classes, mode=model_mode, l2_regularization=0.0005, scales=[0.1, 0.2, 0.37, 0.54, 0.71, 0.88, 1.05], # The scales for MS COCO [0.07, 0.15, 0.33, 0.51, 0.69, 0.87, 1.05] aspect_ratios_per_layer=[[1.0, 2.0, 0.5], [1.0, 2.0, 0.5, 3.0, 1.0\/3.0], [1.0, 2.0, 0.5, 3.0, 1.0\/3.0], [1.0, 2.0, 0.5, 3.0, 1.0\/3.0], [1.0, 2.0, 0.5], [1.0, 2.0, 0.5]], two_boxes_for_ar1=True, steps=[8, 16, 32, 64, 100, 300], offsets=[0.5, 0.5, 0.5, 0.5, 0.5, 0.5], clip_boxes=False, variances=[0.1, 0.1, 0.2, 0.2], normalize_coords=True, subtract_mean=[123, 117, 104], swap_channels=[2, 1, 0], confidence_thresh=0.01, iou_threshold=0.45, top_k=200, nms_max_output_size=400) # 2: Load the trained weights into the model. # TODO: Set the path of the trained weights. weights_path = 'C:\/Users\/USAgData\/TF SSD Keras\/weights\/VGG_VOC0712Plus_SSD_300x300_iter_240000.h5' model.load_weights(weights_path, by_name=True) # 3: Compile the model so that Keras won't complain the next time you load it. adam = Adam(lr=0.001, beta_1=0.9, beta_2=0.999, epsilon=1e-08, decay=0.0) ssd_loss = SSDLoss(neg_pos_ratio=3, alpha=1.0) model.compile(optimizer=adam, loss=ssd_loss.compute_loss) dataset = DataGenerator() # TODO: Set the paths to the dataset here. dir= \"C:\/Users\/USAgData\/TF SSD Keras\/VOC\/VOCtest_06-Nov-2007\/VOCdevkit\/VOC2007\/\" Pascal_VOC_dataset_images_dir = dir+ 'JPEGImages' Pascal_VOC_dataset_annotations_dir = dir + 'Annotations\/' Pascal_VOC_dataset_image_set_filename = dir+'ImageSets\/Main\/test.txt' # The XML parser needs to now what object class names to look for and in which order to map them to integers. 
classes = ['background', 'aeroplane', 'bicycle', 'bird', 'boat', 'bottle', 'bus', 'car', 'cat', 'chair', 'cow', 'diningtable', 'dog', 'horse', 'motorbike', 'person', 'pottedplant', 'sheep', 'sofa', 'train', 'tvmonitor'] dataset.parse_xml(images_dirs=[Pascal_VOC_dataset_images_dir], image_set_filenames=[Pascal_VOC_dataset_image_set_filename], annotations_dirs=[Pascal_VOC_dataset_annotations_dir], classes=classes, include_classes='all', exclude_truncated=False, exclude_difficult=False, ret=False) evaluator = Evaluator(model=model, n_classes=n_classes, data_generator=dataset, model_mode=model_mode) results = evaluator(img_height=img_height, img_width=img_width, batch_size=8, data_generator_mode='resize', round_confidences=False, matching_iou_threshold=0.5, border_pixels='include', sorting_algorithm='quicksort', average_precision_mode='sample', num_recall_points=11, ignore_neutral_boxes=True, return_precisions=True, return_recalls=True, return_average_precisions=True, verbose=True) ```", "response":"I've seen this error message for three different reasons, with different solutions: 1. You have cache issues I regularly work around this error by shutting down my python process, removing the ~\/.nv directory (on linux, rm -rf ~\/.nv), and restarting the Python process. I don't exactly know why this works. It's probably at least partly related to the second option: 2. You're out of memory The error can also show up if you run out of graphics card RAM. With an nvidia GPU you can check graphics card memory usage with nvidia-smi. This will give you a readout of how much GPU RAM you have in use (something like 6025MiB \/ 6086MiB if you're almost at the limit) as well as a list of what processes are using GPU RAM. If you've run out of RAM, you'll need to restart the process (which should free up the RAM) and then take a less memory-intensive approach. 
A few options are: reducing your batch size, using a simpler model, using less data, or limiting the TensorFlow GPU memory fraction. For example, the following will make sure TensorFlow uses <= 90% of your RAM: ```py import keras import tensorflow as tf config = tf.ConfigProto() config.gpu_options.per_process_gpu_memory_fraction = 0.9 # 0.6 sometimes works better for folks keras.backend.tensorflow_backend.set_session(tf.Session(config=config)) ``` This can slow down your model evaluation if not used together with the items above, presumably since the large data set will have to be swapped in and out to fit into the small amount of memory you've allocated. A second option is to have TensorFlow start out using only a minimum amount of memory and then allocate more as needed (documented here): ```py import os os.environ['TF_FORCE_GPU_ALLOW_GROWTH'] = 'true' ``` 3. You have incompatible versions of CUDA, TensorFlow, NVIDIA drivers, etc. If you've never had similar models working, you're not running out of VRAM and your cache is clean, I'd go back and set up CUDA + TensorFlow using the best available installation guide - I have had the most success with following the instructions at https:\/\/www.tensorflow.org\/install\/gpu rather than those on the NVIDIA \/ CUDA site. Lambda Stack is also a good way to go.", "best_answers_score":0.8, "library_name":"tensorflow", "question_url":"https:\/\/stackoverflow.com\/questions\/53698035\/failed-to-get-convolution-algorithm-this-is-probably-because-cudnn-failed-to-in", "best_answers_votes":114, "question_length":4624, "response_length":2153 }, { "question":"How could I use batch normalization in TensorFlow? I would like to use batch normalization in TensorFlow. I found the related C++ source code in core\/ops\/nn_ops.cc. However, I did not find it documented on tensorflow.org. BN has different semantics in MLP and CNN, so I am not sure what exactly this BN does.
I did not find a method called MovingMoments either.", "response":"Update July 2016 The easiest way to use batch normalization in TensorFlow is through the higher-level interfaces provided in either contrib\/layers, tflearn, or slim. Previous answer if you want to DIY: The documentation string for this has improved since the release - see the docs comment in the master branch instead of the one you found. It clarifies, in particular, that it's the output from tf.nn.moments. You can see a very simple example of its use in the batch_norm test code. For a more real-world use example, I've included below the helper class and use notes that I scribbled up for my own use (no warranty provided!): ```py \"\"\"A helper class for managing batch normalization state. This class is designed to simplify adding batch normalization (http:\/\/arxiv.org\/pdf\/1502.03167v3.pdf) to your model by managing the state variables associated with it. Important use note: The function get_assigner() returns an op that must be executed to save the updated state. A suggested way to do this is to make execution of the model optimizer force it, e.g., by: update_assignments = tf.group(bn1.get_assigner(), bn2.get_assigner()) with tf.control_dependencies([optimizer]): optimizer = tf.group(update_assignments) \"\"\" import tensorflow as tf class ConvolutionalBatchNormalizer(object): \"\"\"Helper class that groups the normalization logic and variables. Use: ewma = tf.train.ExponentialMovingAverage(decay=0.99) bn = ConvolutionalBatchNormalizer(depth, 0.001, ewma, True) update_assignments = bn.get_assigner() x = bn.normalize(y, train=training?) (the output x will be batch-normalized). 
\"\"\" def __init__(self, depth, epsilon, ewma_trainer, scale_after_norm): self.mean = tf.Variable(tf.constant(0.0, shape=[depth]), trainable=False) self.variance = tf.Variable(tf.constant(1.0, shape=[depth]), trainable=False) self.beta = tf.Variable(tf.constant(0.0, shape=[depth])) self.gamma = tf.Variable(tf.constant(1.0, shape=[depth])) self.ewma_trainer = ewma_trainer self.epsilon = epsilon self.scale_after_norm = scale_after_norm def get_assigner(self): \"\"\"Returns an EWMA apply op that must be invoked after optimization.\"\"\" return self.ewma_trainer.apply([self.mean, self.variance]) def normalize(self, x, train=True): \"\"\"Returns a batch-normalized version of x.\"\"\" if train: mean, variance = tf.nn.moments(x, [0, 1, 2]) assign_mean = self.mean.assign(mean) assign_variance = self.variance.assign(variance) with tf.control_dependencies([assign_mean, assign_variance]): return tf.nn.batch_norm_with_global_normalization( x, mean, variance, self.beta, self.gamma, self.epsilon, self.scale_after_norm) else: mean = self.ewma_trainer.average(self.mean) variance = self.ewma_trainer.average(self.variance) local_beta = tf.identity(self.beta) local_gamma = tf.identity(self.gamma) return tf.nn.batch_norm_with_global_normalization( x, mean, variance, local_beta, local_gamma, self.epsilon, self.scale_after_norm) ``` Note that I called it a ConvolutionalBatchNormalizer because it pins the use of tf.nn.moments to sum across axes 0, 1, and 2, whereas for non-convolutional use you might only want axis 0. Feedback appreciated if you use it.", "best_answers_score":0.8, "library_name":"tensorflow", "question_url":"https:\/\/stackoverflow.com\/questions\/33949786\/how-could-i-use-batch-normalization-in-tensorflow", "best_answers_votes":57, "question_length":361, "response_length":3135 }, { "question":"Can I measure the execution time of individual operations with TensorFlow? 
I know I can measure the execution time of a call to sess.run(), but is it possible to get a finer granularity and measure the execution time of individual operations?", "response":"I have used the Timeline object to get the time of execution for each node in the graph: you use a classic sess.run() but also specify the optional arguments options and run_metadata you then create a Timeline object with the run_metadata.step_stats data Here is an example program that measures the performance of a matrix multiplication: ```py import tensorflow as tf from tensorflow.python.client import timeline x = tf.random_normal([1000, 1000]) y = tf.random_normal([1000, 1000]) res = tf.matmul(x, y) # Run the graph with full trace option with tf.Session() as sess: run_options = tf.RunOptions(trace_level=tf.RunOptions.FULL_TRACE) run_metadata = tf.RunMetadata() sess.run(res, options=run_options, run_metadata=run_metadata) # Create the Timeline object, and write it to a json tl = timeline.Timeline(run_metadata.step_stats) ctf = tl.generate_chrome_trace_format() with open('timeline.json', 'w') as f: f.write(ctf) ``` You can then open Google Chrome, go to the page chrome:\/\/tracing and load the timeline.json file. You should see something like:", "best_answers_score":0.8, "library_name":"tensorflow", "question_url":"https:\/\/stackoverflow.com\/questions\/34293714\/can-i-measure-the-execution-time-of-individual-operations-with-tensorflow", "best_answers_votes":111, "question_length":242, "response_length":1058 }, { "question":"What's the difference between tf.Session() and tf.InteractiveSession()? In which cases should tf.Session() and tf.InteractiveSession() be considered for what purpose? 
When I tried to use the former one, some functions (for example, .eval()) didn't work, and when I changed to the latter one, it worked.", "response":"Mainly taken from the official documentation: The only difference with a regular Session is that an InteractiveSession installs itself as the default session on construction. The methods Tensor.eval() and Operation.run() will use that session to run ops. This allows using an interactive context, like a shell, as it avoids having to pass an explicit Session object to run ops: ``` sess = tf.InteractiveSession() a = tf.constant(5.0) b = tf.constant(6.0) c = a * b # We can just use 'c.eval()' without passing 'sess' print(c.eval()) sess.close() ``` In other words, InteractiveSession requires less typing, as it allows evaluating tensors without needing to constantly refer to the session object.", "best_answers_score":0.8, "library_name":"tensorflow", "question_url":"https:\/\/stackoverflow.com\/questions\/41791469\/whats-the-difference-between-tf-session-and-tf-interactivesession", "best_answers_votes":83, "question_length":301, "response_length":700 }, { "question":"Tensorflow: None of the MLIR optimization passes are enabled (registered 1) I am using a very small model for testing purposes using tensorflow 2.3 and keras. Looking at my terminal, I get the following warning: ``` I tensorflow\/compiler\/mlir\/mlir_graph_optimization_pass.cc:118] None of the MLIR optimization passes are enabled (registered 1) ``` However, the code works as expected. But what does this message mean? Thanks.", "response":"MLIR is being used as an alternative approach to implementing and optimizing Tensorflow logic. This informative message is benign and says that MLIR is not being used. This is expected, as in TF 2.3 the MLIR-based implementation is still being developed and proven, so end users are generally not expected to use the MLIR implementation and are instead expected to use the non-MLIR, feature-complete implementation.
Update: still experimental on version 2.9.1. On the docs it is written: DO NOT USE, DEV AND TESTING ONLY AT THE MOMENT.", "best_answers_score":0.8, "library_name":"tensorflow", "question_url":"https:\/\/stackoverflow.com\/questions\/63886762\/tensorflow-none-of-the-mlir-optimization-passes-are-enabled-registered-1", "best_answers_votes":82, "question_length":425, "response_length":528 }, { "question":"How to export Keras .h5 to tensorflow .pb? I have fine-tuned inception model with a new dataset and saved it as \".h5\" model in Keras. now my goal is to run my model on android Tensorflow which accepts \".pb\" extension only. question is that is there any library in Keras or tensorflow to do this conversion? I have seen this post so far : https:\/\/blog.keras.io\/keras-as-a-simplified-interface-to-tensorflow-tutorial.html but can't figure out yet.", "response":"Keras does not include by itself any means to export a TensorFlow graph as a protocol buffers file, but you can do it using regular TensorFlow utilities. Here is a blog post explaining how to do it using the utility script freeze_graph.py included in TensorFlow, which is the \"typical\" way it is done. However, I personally find a nuisance having to make a checkpoint and then run an external script to obtain a model, and instead prefer to do it from my own Python code, so I use a function like this: ```py def freeze_session(session, keep_var_names=None, output_names=None, clear_devices=True): \"\"\" Freezes the state of a session into a pruned computation graph. Creates a new computation graph where variable nodes are replaced by constants taking their current value in the session. The new graph will be pruned so subgraphs that are not necessary to compute the requested outputs are removed. @param session The TensorFlow session to be frozen. @param keep_var_names A list of variable names that should not be frozen, or None to freeze all the variables in the graph. 
@param output_names Names of the relevant graph outputs. @param clear_devices Remove the device directives from the graph for better portability. @return The frozen graph definition. \"\"\" graph = session.graph with graph.as_default(): freeze_var_names = list(set(v.op.name for v in tf.global_variables()).difference(keep_var_names or [])) output_names = output_names or [] output_names += [v.op.name for v in tf.global_variables()] input_graph_def = graph.as_graph_def() if clear_devices: for node in input_graph_def.node: node.device = \"\" frozen_graph = tf.graph_util.convert_variables_to_constants( session, input_graph_def, output_names, freeze_var_names) return frozen_graph ``` This is inspired by the implementation of freeze_graph.py, and the parameters are similar to the script's. session is the TensorFlow session object. keep_var_names is only needed if you want to keep some variable not frozen (e.g. for stateful models), so generally not. output_names is a list with the names of the operations that produce the outputs that you want. clear_devices just removes any device directives to make the graph more portable. So, for a typical Keras model with one output, you would do something like: ``` from keras import backend as K # Create, compile and train model... frozen_graph = freeze_session(K.get_session(), output_names=[out.op.name for out in model.outputs]) ``` Then you can write the graph to a file as usual with tf.train.write_graph: ``` tf.train.write_graph(frozen_graph, \"some_directory\", \"my_model.pb\", as_text=False) ```", "best_answers_score":0.8, "library_name":"tensorflow", "question_url":"https:\/\/stackoverflow.com\/questions\/45466020\/how-to-export-keras-h5-to-tensorflow-pb", "best_answers_votes":101, "question_length":445, "response_length":2621 }, { "question":"How to count total number of trainable parameters in a tensorflow model? Is there a function call or another way to count the total number of parameters in a tensorflow model?
By parameters I mean: an N dim vector of trainable variables has N parameters, a NxM matrix has N*M parameters, etc. So essentially I'd like to sum the product of the shape dimensions of all the trainable variables in a tensorflow session.", "response":"Loop over the shape of every variable in tf.trainable_variables(). ``` total_parameters = 0 for variable in tf.trainable_variables(): # shape is an array of tf.Dimension shape = variable.get_shape() print(shape) print(len(shape)) variable_parameters = 1 for dim in shape: print(dim) variable_parameters *= dim.value print(variable_parameters) total_parameters += variable_parameters print(total_parameters) ``` Update: I wrote an article to clarify the dynamic\/static shapes in Tensorflow because of this answer: https:\/\/pgaleone.eu\/tensorflow\/2018\/07\/28\/understanding-tensorflow-tensors-shape-static-dynamic\/", "best_answers_score":0.8, "library_name":"tensorflow", "question_url":"https:\/\/stackoverflow.com\/questions\/38160940\/how-to-count-total-number-of-trainable-parameters-in-a-tensorflow-model", "best_answers_votes":94, "question_length":415, "response_length":609 }, { "question":"Keras model.summary() object to string I want to write a *.txt file with the neural network hyperparameters and the model architecture. Is it possible to write the object model.summary() to my output file? ``` (...) summary = str(model.summary()) (...) out = open(filename + 'report.txt','w') out.write(summary) out.close ``` It happens that I'm getting \"None\" as you can see below. ``` Hyperparameters ========================= learning_rate: 0.01 momentum: 0.8 decay: 0.0 batch size: 128 no. 
epochs: 3 dropout: 0.5 ------------------------- None val_acc: 0.232323229313 val_loss: 3.88496732712 train_acc: 0.0965207634216 train_loss: 4.07161939425 train\/val loss ratio: 1.04804469418 ``` Any idea how to deal with that?", "response":"With my version of Keras (2.0.6) and Python (3.5.0), this works for me: ``` # Create an empty model from keras.models import Sequential model = Sequential() # Open the file with open(filename + 'report.txt','w') as fh: # Pass the file handle in as a lambda function to make it callable model.summary(print_fn=lambda x: fh.write(x + '\\n')) ``` This outputs the following lines to the file: ``` _________________________________________________________________ Layer (type) Output Shape Param # ================================================================= Total params: 0 Trainable params: 0 Non-trainable params: 0 _________________________________________________________________ ```", "best_answers_score":0.8, "library_name":"tensorflow", "question_url":"https:\/\/stackoverflow.com\/questions\/41665799\/keras-model-summary-object-to-string", "best_answers_votes":79, "question_length":720, "response_length":688 }, { "question":"How to \"reset\" tensorboard data after killing tensorflow instance I'm testing different hyperparameters for a cnn model I built, but I'm having a small annoyance when viewing the summaries in Tensorboard. The problem seems to be that the data is just \"added\" in consecutive runs, so the functions result in a weird superposition unless I see the information as \"relative\" instead of \"by step\". See here: I've tried killing tensorboard's process and erasing the log files, but it seems it is not enough. So the question is, how do I reset this information? Thanks!!", "response":"Note: The solution you've posted (erase TensorBoard's log files and kill the process) will work, but it isn't preferred, because it destroys historical information about your training. 
Instead, you can have each new training job write to a new subdirectory (of your top-level log directory). Then, TensorBoard will consider each job a new \"run\" and will create a nice comparison view so you can see how the training differed between iterations of your model. The following is an example from https:\/\/www.tensorflow.org\/tensorboard\/get_started: ``` model = create_model() ... model.compile(...) log_dir = \"logs\/fit\/\" + datetime.datetime.now().strftime(\"%Y%m%d-%H%M%S\") tensorboard_callback = tf.keras.callbacks.TensorBoard(log_dir=log_dir, histogram_freq=1) model.fit(..., callbacks=[tensorboard_callback]) ```", "best_answers_score":0.8, "library_name":"tensorflow", "question_url":"https:\/\/stackoverflow.com\/questions\/34454721\/how-to-reset-tensorboard-data-after-killing-tensorflow-instance", "best_answers_votes":49, "question_length":564, "response_length":809 }, { "question":"TensorFlow - Importing data from a TensorBoard TFEvent file? I've run several training sessions with different graphs in TensorFlow. The summaries I set up show interesting results in the training and validation. Now, I'd like to take the data I've saved in the summary logs and perform some statistical analysis, and in general plot and look at the summary data in different ways. Is there any existing way to easily access this data? More specifically, is there any built-in way to read a TFEvent record back into Python? If there is no simple way to do this, TensorFlow states that all its file formats are protobuf files. From my understanding of protobufs (which is limited), I think I'd be able to extract this data if I have the TFEvent protocol specification. Is there an easy way to get hold of this? Thank you much.", "response":"As Fabrizio says, TensorBoard is a great tool for visualizing the contents of your summary logs.
However, if you want to perform a custom analysis, you can use the tf.train.summary_iterator() function to loop over all of the tf.Event and tf.Summary protocol buffers in the log: ``` for summary in tf.train.summary_iterator(\"\/path\/to\/log\/file\"): # Perform custom processing in here. ``` UPDATE for tf2: ``` from tensorflow.python.summary.summary_iterator import summary_iterator ``` You need to import it explicitly; as of 2.0.0-rc2, that module is not imported by default.", "best_answers_score":0.8, "library_name":"tensorflow", "question_url":"https:\/\/stackoverflow.com\/questions\/37304461\/tensorflow-importing-data-from-a-tensorboard-tfevent-file", "best_answers_votes":67, "question_length":825, "response_length":569 }, { "question":"Tensorflow One Hot Encoder? Does tensorflow have something similar to scikit-learn's one-hot encoder for processing categorical data? Would using a placeholder of tf.string behave as categorical data? I realize I can manually pre-process the data before sending it to tensorflow, but having it built in is very convenient.", "response":"As of TensorFlow 0.8, there is now a native one-hot op, tf.one_hot, that can convert a set of sparse labels to a dense one-hot representation. This is in addition to tf.nn.sparse_softmax_cross_entropy_with_logits, which can in some cases let you compute the cross entropy directly on the sparse labels instead of converting them to one-hot. Previous answer, in case you want to do it the old way: @Salvador's answer is correct - there (used to be) no native op to do it.
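As an aside, the encoding itself is easy to picture in plain NumPy (a sketch of my own, not part of either approach; tf.one_hot produces the same kind of dense matrix):

```python
import numpy as np

labels = np.array([0, 2, 1])   # integer class labels for three samples
num_labels = 3
# indexing the identity matrix by label yields one one-hot row per sample
one_hot = np.eye(num_labels, dtype=np.float32)[labels]
# one_hot is now [[1,0,0], [0,0,1], [0,1,0]]
```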
Instead of doing it in numpy, though, you can do it natively in tensorflow using the sparse-to-dense operators: ``` num_labels = 10 # label_batch is a tensor of numeric labels to process # 0 <= label < num_labels sparse_labels = tf.reshape(label_batch, [-1, 1]) derived_size = tf.shape(label_batch)[0] indices = tf.reshape(tf.range(0, derived_size, 1), [-1, 1]) concated = tf.concat(1, [indices, sparse_labels]) outshape = tf.pack([derived_size, num_labels]) labels = tf.sparse_to_dense(concated, outshape, 1.0, 0.0) ``` The output, labels, is a one-hot matrix of batch_size x num_labels. Note also that as of 2016-02-12 (which I assume will eventually be part of a 0.7 release), TensorFlow also has the tf.nn.sparse_softmax_cross_entropy_with_logits op, which in some cases can let you do training without needing to convert to a one-hot encoding. Edited to add: At the end, you may need to explicitly set the shape of labels. The shape inference doesn't recognize the size of the num_labels component. If you don't need a dynamic batch size with derived_size, this can be simplified. Edited 2016-02-12 to change the assignment of outshape per comment below.", "best_answers_score":0.8, "library_name":"tensorflow", "question_url":"https:\/\/stackoverflow.com\/questions\/33681517\/tensorflow-one-hot-encoder", "best_answers_votes":68, "question_length":322, "response_length":1629 }, { "question":"How do I use TensorFlow GPU? How do I use TensorFlow GPU version instead of CPU version in Python 3.6 x64? ``` import tensorflow as tf ``` Python is using my CPU for calculations. I can notice it because I have an error: Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX2 I have installed tensorflow and tensorflow-gpu. How do I switch to GPU version?", "response":"Follow this tutorial Tensorflow GPU I did it and it works perfect. Attention! - install version 9.0! 
Newer versions are not supported by tensorflow-gpu. Steps: Uninstall your old tensorflow. Install tensorflow-gpu: pip install tensorflow-gpu. Install the Nvidia graphics card and drivers (you probably already have them). Download and install CUDA. Download and install cuDNN. Verify with a simple program: ```py from tensorflow.python.client import device_lib print(device_lib.list_local_devices()) ```", "best_answers_score":0.8, "library_name":"tensorflow", "question_url":"https:\/\/stackoverflow.com\/questions\/51306862\/how-do-i-use-tensorflow-gpu", "best_answers_votes":80, "question_length":389, "response_length":474 }, { "question":"How to add if condition in a TensorFlow graph? Let's say I have the following code: ``` x = tf.placeholder(\"float32\", shape=[None, ins_size**2*3], name = \"x_input\") condition = tf.placeholder(\"int32\", shape=[1, 1], name = \"condition\") W = tf.Variable(tf.zeros([ins_size**2*3,label_option]), name = \"weights\") b = tf.Variable(tf.zeros([label_option]), name = \"bias\") if condition > 0: y = tf.nn.softmax(tf.matmul(x, W) + b) else: y = tf.nn.softmax(tf.matmul(x, W) - b) ``` Would the if statement work in the calculation (I do not think so)? If not, how can I add an if statement into the TensorFlow calculation graph?", "response":"You're correct that the if statement doesn't work here, because the condition is evaluated at graph construction time, whereas presumably you want the condition to depend on the value fed to the placeholder at runtime. (In fact, it will always take the first branch, because condition > 0 evaluates to a Tensor, which is \"truthy\" in Python.) To support conditional control flow, TensorFlow provides the tf.cond() operator, which evaluates one of two branches, depending on a boolean condition.
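The semantics can be sketched with an eager stand-in in plain Python/NumPy (my own illustration, not TensorFlow code: exactly one of the two branch callables runs):

```python
import numpy as np

def cond(pred, true_fn, false_fn):
    # eager stand-in for tf.cond: call exactly one branch function
    return true_fn() if pred else false_fn()

x = np.ones((2, 3))
W = np.ones((3, 4))
b = np.ones(4)
y = cond(True, lambda: x @ W + b, lambda: x @ W - b)
# each entry of x @ W is 3, so y is all 4s when the first branch is taken
```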
To show you how to use it, I'll rewrite your program so that condition is a scalar tf.int32 value for simplicity: ```py x = tf.placeholder(tf.float32, shape=[None, ins_size**2*3], name=\"x_input\") condition = tf.placeholder(tf.int32, shape=[], name=\"condition\") W = tf.Variable(tf.zeros([ins_size**2 * 3, label_option]), name=\"weights\") b = tf.Variable(tf.zeros([label_option]), name=\"bias\") y = tf.cond(condition > 0, lambda: tf.matmul(x, W) + b, lambda: tf.matmul(x, W) - b) ```", "best_answers_score":0.8, "library_name":"tensorflow", "question_url":"https:\/\/stackoverflow.com\/questions\/35833011\/how-to-add-if-condition-in-a-tensorflow-graph", "best_answers_votes":109, "question_length":612, "response_length":973 }, { "question":"Unbalanced data and weighted cross entropy I'm trying to train a network with an unbalanced data. I have A (198 samples), B (436 samples), C (710 samples), D (272 samples) and I have read about the \"weighted_cross_entropy_with_logits\" but all the examples I found are for binary classification so I'm not very confident in how to set those weights. Total samples: 1616 A_weight: 198\/1616 = 0.12? The idea behind, if I understood, is to penalize the errors of the majority class and value more positively the hits in the minority one, right? My piece of code: ``` weights = tf.constant([0.12, 0.26, 0.43, 0.17]) cost = tf.reduce_mean(tf.nn.weighted_cross_entropy_with_logits(logits=pred, targets=y, pos_weight=weights)) ``` I have read this one and others examples with binary classification but still not very clear.", "response":"Note that weighted_cross_entropy_with_logits is the weighted variant of sigmoid_cross_entropy_with_logits. Sigmoid cross entropy is typically used for binary classification. 
Yes, it can handle multiple labels, but sigmoid cross entropy basically makes a (binary) decision on each of them -- for example, for a face recognition net, those (not mutually exclusive) labels could be \"Does the subject wear glasses?\", \"Is the subject female?\", etc. In binary classification(s), each output channel corresponds to a binary (soft) decision. Therefore, the weighting needs to happen within the computation of the loss. This is what weighted_cross_entropy_with_logits does, by weighting one term of the cross-entropy over the other. In mutually exclusive multilabel classification, we use softmax_cross_entropy_with_logits, which behaves differently: each output channel corresponds to the score of a class candidate. The decision comes after, by comparing the respective outputs of each channel. Weighting in before the final decision is therefore a simple matter of modifying the scores before comparing them, typically by multiplication with weights. For example, for a ternary classification task, ``` # your class weights class_weights = tf.constant([[1.0, 2.0, 3.0]]) # deduce weights for batch samples based on their true label weights = tf.reduce_sum(class_weights * onehot_labels, axis=1) # compute your (unweighted) softmax cross entropy loss unweighted_losses = tf.nn.softmax_cross_entropy_with_logits(onehot_labels, logits) # apply the weights, relying on broadcasting of the multiplication weighted_losses = unweighted_losses * weights # reduce the result to get your final loss loss = tf.reduce_mean(weighted_losses) ``` You could also rely on tf.losses.softmax_cross_entropy to handle the last three steps. In your case, where you need to tackle data imbalance, the class weights could indeed be inversely proportional to their frequency in your train data. Normalizing them so that they sum up to one or to the number of classes also makes sense. Note that in the above, we penalized the loss based on the true label of the samples. 
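(As a quick sanity check of that broadcasting, here is the per-sample weight computation mirrored in plain NumPy, with toy values of my own:)

```python
import numpy as np

class_weights = np.array([[1.0, 2.0, 3.0]])      # shape (1, num_classes)
onehot_labels = np.array([[1.0, 0.0, 0.0],
                          [0.0, 0.0, 1.0]])      # two samples
# (1,3) * (2,3) broadcasts row-wise; summing over classes picks
# each sample's own class weight
weights = np.sum(class_weights * onehot_labels, axis=1)
# weights == [1.0, 3.0]
```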
We could also have penalized the loss based on the estimated labels by simply defining ``` weights = class_weights ``` and the rest of the code need not change thanks to broadcasting magic. In the general case, you would want weights that depend on the kind of error you make. In other words, for each pair of labels X and Y, you could choose how to penalize choosing label X when the true label is Y. You end up with a whole prior weight matrix, which results in weights above being a full (num_samples, num_classes) tensor. This goes a bit beyond what you want, but it might be useful to know nonetheless that only your definition of the weight tensor need to change in the code above.", "best_answers_score":0.8, "library_name":"tensorflow", "question_url":"https:\/\/stackoverflow.com\/questions\/44560549\/unbalanced-data-and-weighted-cross-entropy", "best_answers_votes":96, "question_length":816, "response_length":2827 }, { "question":"Nvidia Cudatoolkit vs Conda Cudatoolkit Till date, I have been using Tensorflow-GPU by installing it using pip and the Cuda related software and Nvidia softwares\/drivers from Nvidia's website. Recently, I found that using conda install tensorflow-gpu also installs cudatoolkit and cudnn. So, how are these(the ones provided by conda) different from the ones that I downloaded from Nvidia's website? In my first (previous) environment, conda list showed that I have installed only TensorFlow(from PyPi) and no cudnn\/cudatoolkit, but still everything worked. Also, in a new environment in which I ran conda install tensorflow-gpu, conda list showed me tensorflow-gpu has been installed along with cudatoolkit and cudnn by Anaconda. And in this environment also, everything worked fine. 
So does this mean that downloading and installing CUDA from Nvidia's website is only necessary if I use pip to install TensorFlow?", "response":"If using anaconda to install tensorflow-gpu, yes, it will install CUDA and cuDNN for you in the same conda environment as tensorflow-gpu. All you need to install yourself is the latest nvidia-driver (so that it works with the latest CUDA level and all older CUDA levels you use). This has many advantages over the pip install tensorflow-gpu method: Anaconda will always install the CUDA and cuDNN version that the TensorFlow code was compiled to use. You can have multiple conda environments with different levels of TensorFlow, CUDA, and cuDNN and just use conda activate to switch between them. You don't have to deal with installing CUDA and cuDNN manually at the system-wide level. The disadvantage compared to pip install tensorflow-gpu is that the latest version of tensorflow is added to PyPI weeks before Anaconda is able to update the conda recipe and publish their builds of the latest TensorFlow version.", "best_answers_score":0.8, "library_name":"tensorflow", "question_url":"https:\/\/stackoverflow.com\/questions\/59529804\/nvidia-cudatoolkit-vs-conda-cudatoolkit", "best_answers_votes":63, "question_length":915, "response_length":912 }, { "question":"How do I check if keras is using gpu version of tensorflow? When I run a keras script, I get the following output: ``` Using TensorFlow backend. 2017-06-14 17:40:44.621761: W tensorflow\/core\/platform\/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use SSE4.1 instructions, but these are available on your machine and could speed up CPU computations. 2017-06-14 17:40:44.621783: W tensorflow\/core\/platform\/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use SSE4.2 instructions, but these are available on your machine and could speed up CPU computations.
2017-06-14 17:40:44.621788: W tensorflow\/core\/platform\/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use AVX instructions, but these are available on your machine and could speed up CPU computations. 2017-06-14 17:40:44.621791: W tensorflow\/core\/platform\/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use AVX2 instructions, but these are available on your machine and could speed up CPU computations. 2017-06-14 17:40:44.621795: W tensorflow\/core\/platform\/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use FMA instructions, but these are available on your machine and could speed up CPU computations. 2017-06-14 17:40:44.721911: I tensorflow\/stream_executor\/cuda\/cuda_gpu_executor.cc:901] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero 2017-06-14 17:40:44.722288: I tensorflow\/core\/common_runtime\/gpu\/gpu_device.cc:887] Found device 0 with properties: name: GeForce GTX 850M major: 5 minor: 0 memoryClockRate (GHz) 0.9015 pciBusID 0000:0a:00.0 Total memory: 3.95GiB Free memory: 3.69GiB 2017-06-14 17:40:44.722302: I tensorflow\/core\/common_runtime\/gpu\/gpu_device.cc:908] DMA: 0 2017-06-14 17:40:44.722307: I tensorflow\/core\/common_runtime\/gpu\/gpu_device.cc:918] 0: Y 2017-06-14 17:40:44.722312: I tensorflow\/core\/common_runtime\/gpu\/gpu_device.cc:977] Creating TensorFlow device (\/gpu:0) -> (device: 0, name: GeForce GTX 850M, pci bus id: 0000:0a:00.0) ``` What does this mean? Am I using GPU or CPU version of tensorflow? Before installing keras, I was working with the GPU version of tensorflow. Also sudo pip3 list shows tensorflow-gpu(1.1.0) and nothing like tensorflow-cpu. Running the command mentioned on [this stackoverflow question], gives the following: ``` The TensorFlow library wasn't compiled to use SSE4.1 instructions, but these are available on your machine and could speed up CPU computations. 
2017-06-14 17:53:31.424793: W tensorflow\/core\/platform\/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use SSE4.2 instructions, but these are available on your machine and could speed up CPU computations. 2017-06-14 17:53:31.424803: W tensorflow\/core\/platform\/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use AVX instructions, but these are available on your machine and could speed up CPU computations. 2017-06-14 17:53:31.424812: W tensorflow\/core\/platform\/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use AVX2 instructions, but these are available on your machine and could speed up CPU computations. 2017-06-14 17:53:31.424820: W tensorflow\/core\/platform\/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use FMA instructions, but these are available on your machine and could speed up CPU computations. 2017-06-14 17:53:31.540959: I tensorflow\/stream_executor\/cuda\/cuda_gpu_executor.cc:901] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero 2017-06-14 17:53:31.541359: I tensorflow\/core\/common_runtime\/gpu\/gpu_device.cc:887] Found device 0 with properties: name: GeForce GTX 850M major: 5 minor: 0 memoryClockRate (GHz) 0.9015 pciBusID 0000:0a:00.0 Total memory: 3.95GiB Free memory: 128.12MiB 2017-06-14 17:53:31.541407: I tensorflow\/core\/common_runtime\/gpu\/gpu_device.cc:908] DMA: 0 2017-06-14 17:53:31.541420: I tensorflow\/core\/common_runtime\/gpu\/gpu_device.cc:918] 0: Y 2017-06-14 17:53:31.541441: I tensorflow\/core\/common_runtime\/gpu\/gpu_device.cc:977] Creating TensorFlow device (\/gpu:0) -> (device: 0, name: GeForce GTX 850M, pci bus id: 0000:0a:00.0) 2017-06-14 17:53:31.547902: E tensorflow\/stream_executor\/cuda\/cuda_driver.cc:893] failed to allocate 128.12M (134348800 bytes) from device: CUDA_ERROR_OUT_OF_MEMORY Device mapping: \/job:localhost\/replica:0\/task:0\/gpu:0 -> device: 0, 
name: GeForce GTX 850M, pci bus id: 0000:0a:00.0 2017-06-14 17:53:31.549482: I tensorflow\/core\/common_runtime\/direct_session.cc:257] Device mapping: \/job:localhost\/replica:0\/task:0\/gpu:0 -> device: 0, name: GeForce GTX 850M, pci bus id: 0000:0a:00.0 ```", "response":"You are using the GPU version. You can list the available tensorflow devices with (also check this question): ``` from tensorflow.python.client import device_lib print(device_lib.list_local_devices()) # list of DeviceAttributes ``` EDIT: With tensorflow >= 1.4 you can run the following function: ``` import tensorflow as tf tf.test.is_gpu_available() # True\/False # Or only check for gpu's with cuda support tf.test.is_gpu_available(cuda_only=True) ``` EDIT 2: The above function is deprecated in tensorflow > 2.1. Instead you should use the following function: ``` import tensorflow as tf tf.config.list_physical_devices('GPU') ``` NOTE: In your case both the cpu and gpu are available, if you use the cpu version of tensorflow the gpu will not be listed. In your case, without setting your tensorflow device (with tf.device(\"..\")), tensorflow will automatically pick your gpu! In addition, your sudo pip3 list clearly shows you are using tensorflow-gpu. If you would have the tensoflow cpu version the name would be something like tensorflow(1.1.0). Check this issue for information about the warnings.", "best_answers_score":0.8, "library_name":"tensorflow", "question_url":"https:\/\/stackoverflow.com\/questions\/44544766\/how-do-i-check-if-keras-is-using-gpu-version-of-tensorflow", "best_answers_votes":118, "question_length":4764, "response_length":1105 }, { "question":"How to define max_queue_size, workers and use_multiprocessing in keras fit_generator()? I am applying transfer-learning on a pre-trained network using the GPU version of keras. I don't understand how to define the parameters max_queue_size, workers, and use_multiprocessing. 
If I change these parameters (primarily to speed-up learning), I am unsure whether all data is still seen per epoch. max_queue_size: maximum size of the internal training queue which is used to \"precache\" samples from the generator Question: Does this refer to how many batches are prepared on CPU? How is it related to workers? How to define it optimally? workers: number of threads generating batches in parallel. Batches are computed in parallel on the CPU and passed on the fly onto the GPU for neural network computations Question: How do I find out how many batches my CPU can\/should generate in parallel? use_multiprocessing: whether to use process-based threading Question: Do I have to set this parameter to true if I change workers? Does it relate to CPU usage? Related questions can be found here: Detailed explanation of model.fit_generator() parameters: queue size, workers and use_multiprocessing What does worker mean in fit_generator in Keras? What is the parameter \u201cmax_q_size\u201d used for in \u201cmodel.fit_generator\u201d? A detailed example of how to use data generators with Keras. I am using fit_generator() as follows: ``` history = model.fit_generator(generator=trainGenerator, steps_per_epoch=trainGenerator.samples\/\/nBatches, # total number of steps (batches of samples) epochs=nEpochs, # number of epochs to train the model verbose=2, # verbosity mode. 
0 = silent, 1 = progress bar, 2 = one line per epoch callbacks=callback, # keras.callbacks.Callback instances to apply during training validation_data=valGenerator, # generator or tuple on which to evaluate the loss and any model metrics at the end of each epoch validation_steps= valGenerator.samples\/\/nBatches, # number of steps (batches of samples) to yield from validation_data generator before stopping at the end of every epoch class_weight=classWeights, # optional dictionary mapping class indices (integers) to a weight (float) value, used for weighting the loss function max_queue_size=10, # maximum size for the generator queue workers=1, # maximum number of processes to spin up when using process-based threading use_multiprocessing=False, # whether to use process-based threading shuffle=True, # whether to shuffle the order of the batches at the beginning of each epoch initial_epoch=0) ``` The specs of my machine are: ``` CPU : 2xXeon E5-2260 2.6 GHz Cores: 10 Graphic card: Titan X, Maxwell, GM200 RAM: 128 GB HDD: 4TB SSD: 512 GB ```", "response":"Q_0: Question: Does this refer to how many batches are prepared on CPU? How is it related to workers? How to define it optimally? From the link you posted, you can learn that your CPU keeps creating batches until the queue is at the maximum queue size or reaches the stop. You want to have batches ready for your GPU to \"take\" so that the GPU doesn't have to wait for the CPU. An ideal value for the queue size would be to make it large enough that your GPU is always running near the maximum and never has to wait for the CPU to prepare new batches. Q_1: Question: How do I find out how many batches my CPU can\/should generate in parallel? If you see that your GPU is idling and waiting for batches, try to increase the amount of workers and perhaps also the queue size. Q_2: Do I have to set this parameter to true if I change workers? Does it relate to CPU usage? 
Here is a practical analysis of what happens when you set it to True or False. Here is a recommendation to set it to False to prevent freezing (in my setup True works fine without freezing). Perhaps someone else can increase our understanding of the topic. In summary: Try not to have a sequential setup, try to enable the CPU to provide enough data for the GPU. Also: You could (should?) create several questions the next time, so that it is easier to answer them.", "best_answers_score":0.8, "library_name":"tensorflow", "question_url":"https:\/\/stackoverflow.com\/questions\/55531427\/how-to-define-max-queue-size-workers-and-use-multiprocessing-in-keras-fit-gener", "best_answers_votes":68, "question_length":2694, "response_length":1332 }, { "question":"TensorFlow - regularization with L2 loss, how to apply to all weights, not just last one? I am playing with a ANN which is part of Udacity DeepLearning course. I have an assignment which involves introducing generalization to the network with one hidden ReLU layer using L2 loss. I wonder how to properly introduce it so that ALL weights are penalized, not only weights of the output layer. Code for network without generalization is at the bottom of the post (code to actually run the training is out of the scope of the question). Obvious way of introducing the L2 is to replace the loss calculation with something like this (if beta is 0.01): ``` loss = tf.reduce_mean( tf.nn.softmax_cross_entropy_with_logits(out_layer, tf_train_labels) + 0.01*tf.nn.l2_loss(out_weights)) ``` But in such case it will take into account values of output layer's weights. I am not sure, how do we properly penalize the weights which come INTO the hidden ReLU layer. Is it needed at all or introducing penalization of output layer will somehow keep the hidden weights in check also? 
``` #some importing from __future__ import print_function import numpy as np import tensorflow as tf from six.moves import cPickle as pickle from six.moves import range #loading data pickle_file = '\/home\/maxkhk\/Documents\/Udacity\/DeepLearningCourse\/SourceCode\/tensorflow\/examples\/udacity\/notMNIST.pickle' with open(pickle_file, 'rb') as f: save = pickle.load(f) train_dataset = save['train_dataset'] train_labels = save['train_labels'] valid_dataset = save['valid_dataset'] valid_labels = save['valid_labels'] test_dataset = save['test_dataset'] test_labels = save['test_labels'] del save # hint to help gc free up memory print('Training set', train_dataset.shape, train_labels.shape) print('Validation set', valid_dataset.shape, valid_labels.shape) print('Test set', test_dataset.shape, test_labels.shape) #prepare data to have right format for tensorflow #i.e. data is flat matrix, labels are onehot image_size = 28 num_labels = 10 def reformat(dataset, labels): dataset = dataset.reshape((-1, image_size * image_size)).astype(np.float32) # Map 0 to [1.0, 0.0, 0.0 ...], 1 to [0.0, 1.0, 0.0 ...] labels = (np.arange(num_labels) == labels[:,None]).astype(np.float32) return dataset, labels train_dataset, train_labels = reformat(train_dataset, train_labels) valid_dataset, valid_labels = reformat(valid_dataset, valid_labels) test_dataset, test_labels = reformat(test_dataset, test_labels) print('Training set', train_dataset.shape, train_labels.shape) print('Validation set', valid_dataset.shape, valid_labels.shape) print('Test set', test_dataset.shape, test_labels.shape) #now is the interesting part - we are building a network with #one hidden ReLU layer and out usual output linear layer #we are going to use SGD so here is our size of batch batch_size = 128 #building tensorflow graph graph = tf.Graph() with graph.as_default(): # Input data. For the training data, we use a placeholder that will be fed # at run time with a training minibatch. 
tf_train_dataset = tf.placeholder(tf.float32, shape=(batch_size, image_size * image_size)) tf_train_labels = tf.placeholder(tf.float32, shape=(batch_size, num_labels)) tf_valid_dataset = tf.constant(valid_dataset) tf_test_dataset = tf.constant(test_dataset) #now let's build our new hidden layer #that's how many hidden neurons we want num_hidden_neurons = 1024 #its weights hidden_weights = tf.Variable( tf.truncated_normal([image_size * image_size, num_hidden_neurons])) hidden_biases = tf.Variable(tf.zeros([num_hidden_neurons])) #now the layer itself. It multiplies data by weights, adds biases #and takes ReLU over result hidden_layer = tf.nn.relu(tf.matmul(tf_train_dataset, hidden_weights) + hidden_biases) #time to go for output linear layer #out weights connect hidden neurons to output labels #biases are added to output labels out_weights = tf.Variable( tf.truncated_normal([num_hidden_neurons, num_labels])) out_biases = tf.Variable(tf.zeros([num_labels])) #compute output out_layer = tf.matmul(hidden_layer,out_weights) + out_biases #our real output is a softmax of prior result #and we also compute its cross-entropy to get our loss loss = tf.reduce_mean( tf.nn.softmax_cross_entropy_with_logits(out_layer, tf_train_labels)) #now we just minimize this loss to actually train the network optimizer = tf.train.GradientDescentOptimizer(0.5).minimize(loss) #nice, now let's calculate the predictions on each dataset for evaluating the #performance so far # Predictions for the training, validation, and test data. 
train_prediction = tf.nn.softmax(out_layer) valid_relu = tf.nn.relu( tf.matmul(tf_valid_dataset, hidden_weights) + hidden_biases) valid_prediction = tf.nn.softmax( tf.matmul(valid_relu, out_weights) + out_biases) test_relu = tf.nn.relu( tf.matmul( tf_test_dataset, hidden_weights) + hidden_biases) test_prediction = tf.nn.softmax(tf.matmul(test_relu, out_weights) + out_biases) ```", "response":"A shorter and scalable way of doing this would be ; ``` vars = tf.trainable_variables() lossL2 = tf.add_n([ tf.nn.l2_loss(v) for v in vars ]) * 0.001 ``` This basically sums the l2_loss of all your trainable variables. You could also make a dictionary where you specify only the variables you want to add to your cost and use the second line above. Then you can add lossL2 with your softmax cross entropy value in order to calculate your total loss. Edit : As mentioned by Piotr Dabkowski, the code above will also regularise biases. This can be avoided by adding an if statement in the second line ; ``` lossL2 = tf.add_n([ tf.nn.l2_loss(v) for v in vars if 'bias' not in v.name ]) * 0.001 ``` This can be used to exclude other variables.", "best_answers_score":0.8, "library_name":"tensorflow", "question_url":"https:\/\/stackoverflow.com\/questions\/38286717\/tensorflow-regularization-with-l2-loss-how-to-apply-to-all-weights-not-just", "best_answers_votes":106, "question_length":4924, "response_length":739 }, { "question":"Dimension of shape in conv1D I have tried to build a CNN with one layer, but I have some problem with it. 
Indeed, the compiler tells me that ValueError: Error when checking model input: expected conv1d_1_input to have 3 dimensions, but got array with shape (569, 30) This is the code: ```py import numpy from keras.models import Sequential from keras.layers.convolutional import Conv1D numpy.random.seed(7) datasetTraining = numpy.loadtxt(\"CancerAdapter.csv\",delimiter=\",\") X = datasetTraining[:,1:31] Y = datasetTraining[:,0] datasetTesting = numpy.loadtxt(\"CancereEvaluation.csv\",delimiter=\",\") X_test = datasetTraining[:,1:31] Y_test = datasetTraining[:,0] model = Sequential() model.add(Conv1D(2,2,activation='relu',input_shape=X.shape)) model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy']) model.fit(X, Y, epochs=150, batch_size=5) scores = model.evaluate(X_test, Y_test) print(\"\\n%s: %.2f%%\" % (model.metrics_names[1], scores[1]*100)) ```
a .5 .31 -.2 | boy .5 .8 -.4 \|\/ ``` And the way we would set the input to the conv in this case: ``` maxlen = 4 input_dim = 3 model.add(Conv1D(2,2,activation='relu',input_shape=(maxlen, input_dim))) ``` In your case, you will treat the features as the spatial dimensions with each feature having length 1 (see below). Here is an example from your dataset: ``` att1 .04 | att2 .05 | < -- kernel convolving along this dimension att3 .1 | notice the features have length 1. each att4 .5 \|\/ example has these 4 features. ``` And we would set the Conv1D example as: ``` maxlen = num_features = 4 # this would be 30 in your case input_dim = 1 # since this is the length of _each_ feature (as shown above) model.add(Conv1D(2,2,activation='relu',input_shape=(maxlen, input_dim))) ``` As you see, your dataset has to be reshaped into (569, 30, 1); use: ``` X = np.expand_dims(X, axis=2) # reshape (569, 30) to (569, 30, 1) # now input can be set as model.add(Conv1D(2,2,activation='relu',input_shape=(30, 1))) ``` Here is a full-fledged example that you can run (I'll use the Functional API): ``` from keras.models import Model from keras.layers import Conv1D, Dense, MaxPool1D, Flatten, Input import numpy as np inp = Input(shape=(5, 1)) conv = Conv1D(filters=2, kernel_size=2)(inp) pool = MaxPool1D(pool_size=2)(conv) flat = Flatten()(pool) dense = Dense(1)(flat) model = Model(inp, dense) model.compile(loss='mse', optimizer='adam') print(model.summary()) # get some data X = np.expand_dims(np.random.randn(10, 5), axis=2) y = np.random.randn(10, 1) # fit model model.fit(X, y) ```", "best_answers_score":0.8, "library_name":"tensorflow", "question_url":"https:\/\/stackoverflow.com\/questions\/43396572\/dimension-of-shape-in-conv1d", "best_answers_votes":136, "question_length":973, "response_length":2491 }, { "question":"tf.nn.conv2d vs tf.layers.conv2d Is there any advantage in using tf.nn.* over tf.layers.*?
Most of the examples in the doc use tf.nn.conv2d, for instance, but it is not clear why they do so.", "response":"As GBY mentioned, they use the same implementation. There is a slight difference in the parameters. For tf.nn.conv2d: ``` filter: A Tensor. Must have the same type as input. A 4-D tensor of shape [filter_height, filter_width, in_channels, out_channels] ``` For tf.layers.conv2d: ``` filters: Integer, the dimensionality of the output space (i.e. the number of filters in the convolution). ``` I would use tf.nn.conv2d when loading a pretrained model (example code: https:\/\/github.com\/ry\/tensorflow-vgg16), and tf.layers.conv2d for a model trained from scratch.", "best_answers_score":0.8, "library_name":"tensorflow", "question_url":"https:\/\/stackoverflow.com\/questions\/42785026\/tf-nn-conv2d-vs-tf-layers-conv2d", "best_answers_votes":43, "question_length":190, "response_length":560 }, { "question":"Error running basic tensorflow example I have just reinstalled latest tensorflow on ubuntu: ``` $ sudo pip install --upgrade https:\/\/storage.googleapis.com\/tensorflow\/linux\/cpu\/tensorflow-0.7.1-cp27-none-linux_x86_64.whl [sudo] password for ubuntu: The directory '\/home\/ubuntu\/.cache\/pip\/http' or its parent directory is not owned by the current user and the cache has been disabled. Please check the permissions and owner of that directory. If executing pip with sudo, you may want sudo's -H flag. The directory '\/home\/ubuntu\/.cache\/pip' or its parent directory is not owned by the current user and caching wheels has been disabled. check the permissions and owner of that directory. If executing pip with sudo, you may want sudo's -H flag. 
Collecting tensorflow==0.7.1 from https:\/\/storage.googleapis.com\/tensorflow\/linux\/cpu\/tensorflow-0.7.1-cp27-none-linux_x86_64.whl Downloading https:\/\/storage.googleapis.com\/tensorflow\/linux\/cpu\/tensorflow-0.7.1-cp27-none-linux_x86_64.whl (13.8MB) 100% |\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588| 13.8MB 32kB\/s Requirement already up-to-date: six>=1.10.0 in \/usr\/local\/lib\/python2.7\/dist-packages (from tensorflow==0.7.1) Requirement already up-to-date: protobuf==3.0.0b2 in \/usr\/local\/lib\/python2.7\/dist-packages (from tensorflow==0.7.1) Requirement already up-to-date: wheel in \/usr\/local\/lib\/python2.7\/dist-packages (from tensorflow==0.7.1) Requirement already up-to-date: numpy>=1.8.2 in \/usr\/local\/lib\/python2.7\/dist-packages (from tensorflow==0.7.1) Requirement already up-to-date: setuptools in \/usr\/local\/lib\/python2.7\/dist-packages (from protobuf==3.0.0b2->tensorflow==0.7.1) Installing collected packages: tensorflow Found existing installation: tensorflow 0.7.1 Uninstalling tensorflow-0.7.1: Successfully uninstalled tensorflow-0.7.1 Successfully installed tensorflow-0.7.1 ``` When following the directions to test it fails with cannot import name pywrap_tensorflow: ``` $ ipython \/git\/tensorflow\/tensorflow\/__init__.py in () 21 from __future__ import print_function 22 ---> 23 from tensorflow.python import * \/git\/tensorflow\/tensorflow\/python\/__init__.py in () 43 _default_dlopen_flags = sys.getdlopenflags() 44 sys.setdlopenflags(_default_dlopen_flags | ctypes.RTLD_GLOBAL) ---> 45 from tensorflow.python import pywrap_tensorflow 46 sys.setdlopenflags(_default_dlopen_flags) 47 ImportError: cannot import name pywrap_tensorflow ``` Is there an additional change needed to my python or ubuntu\/bash environment?", "response":"From the path in your stack trace 
(\/git\/tensorflow\/tensorflow\/\u2026), it looks like your Python path may be loading the tensorflow libraries from the source directory, rather than the version that you have installed. As a result, it is unable to find the (compiled) pywrap_tensorflow library, which is installed in a different directory. A common solution is to cd out of the \/git\/tensorflow directory before starting python or ipython.", "best_answers_score":0.8, "library_name":"tensorflow", "question_url":"https:\/\/stackoverflow.com\/questions\/35953210\/error-running-basic-tensorflow-example", "best_answers_votes":173, "question_length":2456, "response_length":432 }, { "question":"How to understand static shape and dynamic shape in TensorFlow? In TensorFlow FAQ, it says: In TensorFlow, a tensor has both a static (inferred) shape and a dynamic (true) shape. The static shape can be read using the tf.Tensor.get_shape() method: this shape is inferred from the operations that were used to create the tensor, and may be partially complete. If the static shape is not fully defined, the dynamic shape of a Tensor t can be determined by evaluating tf.shape(t). But I still cannot fully understand the relationship between static shape and dynamic shape. Are there any examples showing their differences? Thanks.", "response":"Sometimes the shape of a tensor depends on a value that is computed at runtime. Let's take the following example, where x is defined as a tf.placeholder() vector with four elements: ``` x = tf.placeholder(tf.int32, shape=[4]) print x.get_shape() # ==> '(4,)' ``` The value of x.get_shape() is the static shape of x, and the (4,) means that it is a vector of length 4. Now let's apply the tf.unique() op to x ``` y, _ = tf.unique(x) print y.get_shape() # ==> '(?,)' ``` The (?,) means that y is a vector of unknown length. Why is it unknown? 
tf.unique(x) returns the unique values from x, and the values of x are unknown because it is a tf.placeholder(), so it doesn't have a value until you feed it. Let's see what happens if you feed two different values: ``` sess = tf.Session() print sess.run(y, feed_dict={x: [0, 1, 2, 3]}).shape # ==> '(4,)' print sess.run(y, feed_dict={x: [0, 0, 0, 0]}).shape # ==> '(1,)' ``` Hopefully this makes it clear that a tensor can have a different static and dynamic shape. The dynamic shape is always fully defined (it has no ? dimensions), but the static shape can be less specific. This is what allows TensorFlow to support operations like tf.unique() and tf.dynamic_partition(), which can have variable-sized outputs and are used in advanced applications. Finally, the tf.shape() op can be used to get the dynamic shape of a tensor and use it in a TensorFlow computation: ``` z = tf.shape(y) print sess.run(z, feed_dict={x: [0, 1, 2, 3]}) # ==> [4] print sess.run(z, feed_dict={x: [0, 0, 0, 0]}) # ==> [1] ``` Here's a schematic image showing both:", "best_answers_score":0.8, "library_name":"tensorflow", "question_url":"https:\/\/stackoverflow.com\/questions\/37096225\/how-to-understand-static-shape-and-dynamic-shape-in-tensorflow", "best_answers_votes":92, "question_length":628, "response_length":1584 }, { "question":"Tensorflow vs OpenCV [closed] I'm new to the AI world. I've started doing some work using Python & OpenCV for face detection and so on.
I know that with the implementation of some algorithms I can develop AI systems using Python & OpenCV. So my question is: what is the position of TensorFlow here? Can I say TensorFlow is an alternative to OpenCV, in the way that Python is an alternative programming language to Java (for example)?", "response":"The main difference is that TensorFlow is a framework for machine learning, and OpenCV is a library for computer vision. It can be a good start to check the link below to get a grasp of the difference between a framework and a library: What is the difference between a framework and a library? You can do image recognition with TensorFlow, though it is suited for more general problems as well, such as classification, clustering and regression. I guess people downvoted because this question might be more relevant to: https:\/\/datascience.stackexchange.com\/", "best_answers_score":0.8, "library_name":"tensorflow", "question_url":"https:\/\/stackoverflow.com\/questions\/49919300\/tensorflow-vs-opencv", "best_answers_votes":72, "question_length":916, "response_length":556 }, { "question":"ValueError: Shapes (None, 1) and (None, 2) are incompatible I am training a facial expression (angry vs happy) model. The last dense output layer previously had 1 unit, but when I predicted an image its output was always 1 with 64% accuracy. So I changed it to 2 units for 2 outputs.
But now i am getting this error:: ``` Epoch 1\/15 --------------------------------------------------------------------------- ValueError Traceback (most recent call last) in () 11 epochs=epochs, 12 validation_data = val_data_gen, ---> 13 validation_steps = validation_steps, 14 15 ) 10 frames \/usr\/local\/lib\/python3.6\/dist-packages\/tensorflow\/python\/framework\/func_graph.py in wrapper(*args, **kwargs) 966 except Exception as e: # pylint:disable=broad-except 967 if hasattr(e, \"ag_error_metadata\"): --> 968 raise e.ag_error_metadata.to_exception(e) 969 else: 970 raise ValueError: in user code: \/usr\/local\/lib\/python3.6\/dist-packages\/tensorflow\/python\/keras\/engine\/training.py:571 train_function * outputs = self.distribute_strategy.run( \/usr\/local\/lib\/python3.6\/dist-packages\/tensorflow\/python\/distribute\/distribute_lib.py:951 run ** return self._extended.call_for_each_replica(fn, args=args, kwargs=kwargs) \/usr\/local\/lib\/python3.6\/dist-packages\/tensorflow\/python\/distribute\/distribute_lib.py:2290 call_for_each_replica return self._call_for_each_replica(fn, args, kwargs) \/usr\/local\/lib\/python3.6\/dist-packages\/tensorflow\/python\/distribute\/distribute_lib.py:2649 _call_for_each_replica return fn(*args, **kwargs) \/usr\/local\/lib\/python3.6\/dist-packages\/tensorflow\/python\/keras\/engine\/training.py:533 train_step ** y, y_pred, sample_weight, regularization_losses=self.losses) \/usr\/local\/lib\/python3.6\/dist-packages\/tensorflow\/python\/keras\/engine\/compile_utils.py:205 __call__ loss_value = loss_obj(y_t, y_p, sample_weight=sw) \/usr\/local\/lib\/python3.6\/dist-packages\/tensorflow\/python\/keras\/losses.py:143 __call__ losses = self.call(y_true, y_pred) \/usr\/local\/lib\/python3.6\/dist-packages\/tensorflow\/python\/keras\/losses.py:246 call return self.fn(y_true, y_pred, **self._fn_kwargs) \/usr\/local\/lib\/python3.6\/dist-packages\/tensorflow\/python\/keras\/losses.py:1527 categorical_crossentropy return 
K.categorical_crossentropy(y_true, y_pred, from_logits=from_logits) \/usr\/local\/lib\/python3.6\/dist-packages\/tensorflow\/python\/keras\/backend.py:4561 categorical_crossentropy target.shape.assert_is_compatible_with(output.shape) \/usr\/local\/lib\/python3.6\/dist-packages\/tensorflow\/python\/framework\/tensor_shape.py:1117 assert_is_compatible_with raise ValueError(\"Shapes %s and %s are incompatible\" % (self, other)) ValueError: Shapes (None, 1) and (None, 2) are incompatible ``` The relevant code is : ``` model = Sequential([ Conv2D(32,3, activation='relu', input_shape=(48,48,1)), BatchNormalization(), MaxPooling2D(pool_size=(3, 3)), Flatten(), Dense(512, activation='relu'), Dense(2,activation='softmax') ]) model.compile(optimizer='adam', loss='categorical_crossentropy', metrics=['accuracy']) model.summary() Model: \"sequential_4\" _________________________________________________________________ Layer (type) Output Shape Param # ================================================================= conv2d_6 (Conv2D) (None, 46, 46, 32) 320 _________________________________________________________________ batch_normalization_4 (Batch (None, 46, 46, 32) 128 _________________________________________________________________ max_pooling2d_6 (MaxPooling2 (None, 15, 15, 32) 0 _________________________________________________________________ flatten_4 (Flatten) (None, 7200) 0 _________________________________________________________________ dense_8 (Dense) (None, 512) 3686912 _________________________________________________________________ dense_9 (Dense) (None, 2) 1026 ================================================================= Total params: 3,688,386 Trainable params: 3,688,322 Non-trainable params: 64 _________________________________________________________________ epochs = 15 steps_per_epoch = train_data_gen.n\/\/train_data_gen.batch_size validation_steps = val_data_gen.n\/\/val_data_gen.batch_size history = model.fit( x=train_data_gen, 
steps_per_epoch=steps_per_epoch, epochs=epochs, validation_data = val_data_gen, validation_steps = validation_steps, ) ```", "response":"I was facing the same problem; my shapes were ``` shape of X (271, 64, 64, 3) shape of y (271,) shape of trainX (203, 64, 64, 3) shape of trainY (203, 1) shape of testX (68, 64, 64, 3) shape of testY (68, 1) ``` and ``` loss=\"categorical_crossentropy\" ``` I changed it to ``` loss=\"sparse_categorical_crossentropy\" ``` and it worked like a charm for me", "best_answers_score":0.8, "library_name":"tensorflow", "question_url":"https:\/\/stackoverflow.com\/questions\/61742556\/valueerror-shapes-none-1-and-none-2-are-incompatible", "best_answers_votes":89, "question_length":4233, "response_length":351 }, { "question":"Negative dimension size caused by subtracting 3 from 1 for 'Conv2D' I'm using Keras with TensorFlow as the backend; here is my code: ``` import numpy as np np.random.seed(1373) import tensorflow as tf tf.python.control_flow_ops = tf import os from keras.datasets import mnist from keras.models import Sequential from keras.layers.core import Dense, Dropout, Activation, Flatten from keras.layers.convolutional import Convolution2D, MaxPooling2D from keras.utils import np_utils batch_size = 128 nb_classes = 10 nb_epoch = 12 img_rows, img_cols = 28, 28 nb_filters = 32 nb_pool = 2 nb_conv = 3 (X_train, y_train), (X_test, y_test) = mnist.load_data() print(X_train.shape[0]) X_train = X_train.reshape(X_train.shape[0], 1, img_rows, img_cols) X_test = X_test.reshape(X_test.shape[0], 1, img_rows, img_cols) X_train = X_train.astype('float32') X_test = X_test.astype('float32') X_train \/= 255 X_test \/= 255 print('X_train shape:', X_train.shape) print(X_train.shape[0], 'train samples') print(X_test.shape[0], 'test samples') Y_train = np_utils.to_categorical(y_train, nb_classes) Y_test = np_utils.to_categorical(y_test, nb_classes) model = Sequential() model.add(Convolution2D(nb_filters, nb_conv, nb_conv, border_mode='valid',
input_shape=(1, img_rows, img_cols))) model.add(Activation('relu')) model.add(Convolution2D(nb_filters, nb_conv, nb_conv)) model.add(Activation('relu')) model.add(MaxPooling2D(pool_size=(nb_pool, nb_pool))) model.add(Dropout(0.25)) model.add(Flatten()) model.add(Dense(128)) model.add(Activation('relu')) model.add(Dropout(0.5)) model.add(Dense(nb_classes)) model.add(Activation('softmax')) model.compile(loss='categorical_crossentropy', optimizer='adadelta', metrics=[\"accuracy\"]) model.fit(X_train, Y_train, batch_size=batch_size, nb_epoch=nb_epoch, verbose=1, validation_data=(X_test, Y_test)) score = model.evaluate(X_test, Y_test, verbose=0) print('Test score:', score[0]) print('Test accuracy:', score[1]) ``` and Trackback error: ``` Using TensorFlow backend. 60000 ('X_train shape:', (60000, 1, 28, 28)) (60000, 'train samples') (10000, 'test samples') Traceback (most recent call last): File \"mnist.py\", line 154, in input_shape=(1, img_rows, img_cols))) File \"\/usr\/local\/lib\/python2.7\/dist-packages\/keras\/models.py\", line 276, in add layer.create_input_layer(batch_input_shape, input_dtype) File \"\/usr\/local\/lib\/python2.7\/dist-packages\/keras\/engine\/topology.py\", line 370, in create_input_layer self(x) File \"\/usr\/local\/lib\/python2.7\/dist-packages\/keras\/engine\/topology.py\", line 514, in __call__ self.add_inbound_node(inbound_layers, node_indices, tensor_indices) File \"\/usr\/local\/lib\/python2.7\/dist-packages\/keras\/engine\/topology.py\", line 572, in add_inbound_node Node.create_node(self, inbound_layers, node_indices, tensor_indices) File \"\/usr\/local\/lib\/python2.7\/dist-packages\/keras\/engine\/topology.py\", line 149, in create_node output_tensors = to_list(outbound_layer.call(input_tensors[0], mask=input_masks[0])) File \"\/usr\/local\/lib\/python2.7\/dist-packages\/keras\/layers\/convolutional.py\", line 466, in call filter_shape=self.W_shape) File 
\"\/usr\/local\/lib\/python2.7\/dist-packages\/keras\/backend\/tensorflow_backend.py\", line 1579, in conv2d x = tf.nn.conv2d(x, kernel, strides, padding=padding) File \"\/usr\/local\/lib\/python2.7\/dist-packages\/tensorflow\/python\/ops\/gen_nn_ops.py\", line 396, in conv2d data_format=data_format, name=name) File \"\/usr\/local\/lib\/python2.7\/dist-packages\/tensorflow\/python\/framework\/op_def_library.py\", line 759, in apply_op op_def=op_def) File \"\/usr\/local\/lib\/python2.7\/dist-packages\/tensorflow\/python\/framework\/ops.py\", line 2242, in create_op set_shapes_for_outputs(ret) File \"\/usr\/local\/lib\/python2.7\/dist-packages\/tensorflow\/python\/framework\/ops.py\", line 1617, in set_shapes_for_outputs shapes = shape_func(op) File \"\/usr\/local\/lib\/python2.7\/dist-packages\/tensorflow\/python\/framework\/ops.py\", line 1568, in call_with_requiring return call_cpp_shape_fn(op, require_shape_fn=True) File \"\/usr\/local\/lib\/python2.7\/dist-packages\/tensorflow\/python\/framework\/common_shapes.py\", line 610, in call_cpp_shape_fn debug_python_shape_fn, require_shape_fn) File \"\/usr\/local\/lib\/python2.7\/dist-packages\/tensorflow\/python\/framework\/common_shapes.py\", line 675, in _call_cpp_shape_fn_impl raise ValueError(err.message) ValueError: Negative dimension size caused by subtracting 3 from 1 for 'Conv2D' (op: 'Conv2D') with input shapes: [?,1,28,28], [3,3,28,32]. ``` First I saw some answers that problem is with Tensorflow version so I upgrade Tensorflow to 0.12.0, but still exist , is that problem with network or I missing something, what should input_shape looks like? Update Here is .\/keras\/keras.json: ``` { \"image_dim_ordering\": \"tf\", \"epsilon\": 1e-07, \"floatx\": \"float32\", \"backend\": \"tensorflow\" } ```", "response":"Your issue comes from the image_ordering_dim in keras.json. From Keras Image Processing doc: dim_ordering: One of {\"th\", \"tf\"}. 
\"tf\" mode means that the images should have shape (samples, height, width, channels), \"th\" mode means that the images should have shape (samples, channels, height, width). It defaults to the image_dim_ordering value found in your Keras config file at ~\/.keras\/keras.json. If you never set it, then it will be \"tf\". Keras maps the convolution operation to the chosen backend (theano or tensorflow). However, both backends have made different choices for the ordering of the dimensions. If your image batch is of N images of HxW size with C channels, theano uses the NCHW ordering while tensorflow uses the NHWC ordering. Keras allows you to choose which ordering you prefer and will do the conversion to map to the backends behind. But if you choose image_ordering_dim=\"th\" it expects Theano-style ordering (NCHW, the one you have in your code) and if image_ordering_dim=\"tf\" it expects tensorflow-style ordering (NHWC). Since your image_ordering_dim is set to \"tf\", if you reshape your data to the tensorflow style it should work: ``` X_train = X_train.reshape(X_train.shape[0], img_cols, img_rows, 1) X_test = X_test.reshape(X_test.shape[0], img_cols, img_rows, 1) ``` and ``` input_shape=(img_cols, img_rows, 1) ```", "best_answers_score":0.8, "library_name":"tensorflow", "question_url":"https:\/\/stackoverflow.com\/questions\/41651628\/negative-dimension-size-caused-by-subtracting-3-from-1-for-conv2d", "best_answers_votes":95, "question_length":4758, "response_length":1345 }, { "question":"tf.data with multiple inputs \/ outputs in Keras For the application, such as pair text similarity, the input data is similar to: pair_1, pair_2. In these problems, we usually have multiple input data. Previously, I implemented my models successfully: ``` model.fit([pair_1, pair_2], labels, epochs=50) ``` I decided to replace my input pipeline with tf.data API. 
To this end, I create a Dataset similar to: ``` dataset = tf.data.Dataset.from_tensor_slices((pair_1, pair2, labels)) ``` It compiles successfully, but when it starts to train it throws the following exception: ``` AttributeError: 'tuple' object has no attribute 'ndim' ``` My Keras and TensorFlow versions are 2.1.6 and 1.11.0 respectively. I found a similar issue in the Tensorflow repository: tf.keras multi-input models don't work when using tf.data.Dataset. Does anyone know how to fix the issue? Here is the main part of the code: ```py (q1_test, q2_test, label_test) = test (q1_train, q2_train, label_train) = train def tfdata_generator(sent1, sent2, labels, is_training): '''Construct a data generator using tf.Dataset''' dataset = tf.data.Dataset.from_tensor_slices((sent1, sent2, labels)) if is_training: dataset = dataset.shuffle(1000) # depends on sample size dataset = dataset.repeat() dataset = dataset.prefetch(tf.contrib.data.AUTOTUNE) return dataset train_dataset = tfdata_generator(q1_train, q2_train, label_train, is_training=True, batch_size=_BATCH_SIZE) test_dataset = tfdata_generator(q1_test, q2_test, label_test, is_training=False, batch_size=_BATCH_SIZE) inps1 = keras.layers.Input(shape=(50,)) inps2 = keras.layers.Input(shape=(50,)) embed = keras.layers.Embedding(input_dim=nb_vocab, output_dim=300, weights=[embedding], trainable=False) embed1 = embed(inps1) embed2 = embed(inps2) gru = keras.layers.CuDNNGRU(256) gru1 = gru(embed1) gru2 = gru(embed2) concat = keras.layers.concatenate([gru1, gru2]) preds = keras.layers.Dense(1, 'sigmoid')(concat) model = keras.models.Model(inputs=[inps1, inps2], outputs=preds) model.compile(optimizer='adam', loss='binary_crossentropy', metrics=['accuracy']) print(model.summary()) model.fit( train_dataset.make_one_shot_iterator(), steps_per_epoch=len(q1_train) \/\/ _BATCH_SIZE, epochs=50, validation_data=test_dataset.make_one_shot_iterator(), validation_steps=len(q1_test) \/\/ _BATCH_SIZE, verbose=1) ```", "response":"I'm not using Keras
but I would go with a tf.data.Dataset.from_generator(), like: ``` def _input_fn(): sent1 = np.array([1, 2, 3, 4, 5, 6, 7, 8], dtype=np.int64) sent2 = np.array([20, 25, 35, 40, 600, 30, 20, 30], dtype=np.int64) sent1 = np.reshape(sent1, (8, 1, 1)) sent2 = np.reshape(sent2, (8, 1, 1)) labels = np.array([40, 30, 20, 10, 80, 70, 50, 60], dtype=np.int64) labels = np.reshape(labels, (8, 1)) def generator(): for s1, s2, l in zip(sent1, sent2, labels): yield {\"input_1\": s1, \"input_2\": s2}, l dataset = tf.data.Dataset.from_generator(generator, output_types=({\"input_1\": tf.int64, \"input_2\": tf.int64}, tf.int64)) dataset = dataset.batch(2) return dataset ... model.fit(_input_fn(), epochs=10, steps_per_epoch=4) ``` This generator can iterate over, e.g., your text files or numpy arrays and yield one example on every call. In this example, I assume that the words of the sentences are already converted to indices in the vocabulary. Edit: Since OP asked, it should also be possible with Dataset.from_tensor_slices(): ``` def _input_fn(): sent1 = np.array([1, 2, 3, 4, 5, 6, 7, 8], dtype=np.int64) sent2 = np.array([20, 25, 35, 40, 600, 30, 20, 30], dtype=np.int64) sent1 = np.reshape(sent1, (8, 1)) sent2 = np.reshape(sent2, (8, 1)) labels = np.array([40, 30, 20, 10, 80, 70, 50, 60], dtype=np.int64) labels = np.reshape(labels, (8)) dataset = tf.data.Dataset.from_tensor_slices(({\"input_1\": sent1, \"input_2\": sent2}, labels)) dataset = dataset.batch(2, drop_remainder=True) return dataset ```", "best_answers_score":0.8, "library_name":"tensorflow", "question_url":"https:\/\/stackoverflow.com\/questions\/52582275\/tf-data-with-multiple-inputs-outputs-in-keras", "best_answers_votes":76, "question_length":2324, "response_length":1510 }, { "question":"How do I convert a directory of jpeg images to TFRecords file in tensorflow? I have training data that is a directory of jpeg images and a corresponding text file containing the file name and the associated category label.
I am trying to convert this training data into a tfrecords file as described in the tensorflow documentation. I have spent quite some time trying to get this to work, but there are no examples in tensorflow that demonstrate how to use any of the readers to read in jpeg files and add them to a tfrecord using TFRecordWriter.", "response":"I hope this helps: ```py filename_queue = tf.train.string_input_producer(['\/Users\/HANEL\/Desktop\/tf.png']) # list of files to read reader = tf.WholeFileReader() key, value = reader.read(filename_queue) my_img = tf.image.decode_png(value) # use decode_png or decode_jpeg decoder based on your files. init_op = tf.initialize_all_variables() with tf.Session() as sess: sess.run(init_op) # Start populating the filename queue. coord = tf.train.Coordinator() threads = tf.train.start_queue_runners(coord=coord) for i in range(1): #length of your filename list image = my_img.eval() #here is your image Tensor :) print(image.shape) Image.show(Image.fromarray(np.asarray(image))) coord.request_stop() coord.join(threads) ``` To get all images as an array of tensors, use the following code example. Github repo of ImageFlow Update: In the previous answer I only explained how to read an image in TF format, not how to save it in TFRecords.
For that you should use: ```py def _int64_feature(value): return tf.train.Feature(int64_list=tf.train.Int64List(value=[value])) def _bytes_feature(value): return tf.train.Feature(bytes_list=tf.train.BytesList(value=[value])) # images and labels array as input def convert_to(images, labels, name): num_examples = labels.shape[0] if images.shape[0] != num_examples: raise ValueError(\"Images size %d does not match label size %d.\" % (images.shape[0], num_examples)) rows = images.shape[1] cols = images.shape[2] depth = images.shape[3] filename = os.path.join(FLAGS.directory, name + '.tfrecords') print('Writing', filename) writer = tf.python_io.TFRecordWriter(filename) for index in range(num_examples): image_raw = images[index].tostring() example = tf.train.Example(features=tf.train.Features(feature={ 'height': _int64_feature(rows), 'width': _int64_feature(cols), 'depth': _int64_feature(depth), 'label': _int64_feature(int(labels[index])), 'image_raw': _bytes_feature(image_raw)})) writer.write(example.SerializeToString()) ``` More info here And you read the data like this: ```py # Remember to generate a file name queue of you 'train.TFRecord' file path def read_and_decode(filename_queue): reader = tf.TFRecordReader() _, serialized_example = reader.read(filename_queue) features = tf.parse_single_example( serialized_example, dense_keys=['image_raw', 'label'], # Defaults are not specified since both keys are required. dense_types=[tf.string, tf.int64]) # Convert from a scalar string tensor (whose single string has image = tf.decode_raw(features['image_raw'], tf.uint8) image = tf.reshape(image, [my_cifar.n_input]) image.set_shape([my_cifar.n_input]) # OPTIONAL: Could reshape into a 28x28 image and apply distortions # here. Since we are not applying any distortions in this # example, and the next step expects the image to be flattened # into a vector, we don't bother. # Convert from [0, 255] -> [-0.5, 0.5] floats. 
image = tf.cast(image, tf.float32) * (1. \/ 255) - 0.5 # Convert label from a scalar uint8 tensor to an int32 scalar. label = tf.cast(features['label'], tf.int32) return image, label ```", "best_answers_score":0.8, "library_name":"tensorflow", "question_url":"https:\/\/stackoverflow.com\/questions\/33849617\/how-do-i-convert-a-directory-of-jpeg-images-to-tfrecords-file-in-tensorflow", "best_answers_votes":49, "question_length":545, "response_length":3084 }, { "question":"\"Could not load dynamic library 'libcudnn.so.8'\" when running tensorflow on ubuntu 20.04 Note: there are many similar questions, but for different versions of ubuntu and somewhat different specific libraries. I have not been able to figure out what combination of symbolic links and additional environment variables such as LD_LIBRARY_PATH would work. Here is my nvidia configuration: ``` $ nvidia-smi Tue Apr 6 11:35:54 2021 +-----------------------------------------------------------------------------+ | NVIDIA-SMI 450.80.02 Driver Version: 450.80.02 CUDA Version: 11.0 | |-------------------------------+----------------------+----------------------+ | GPU Name Persistence-M| Bus-Id Disp.A | Volatile Uncorr. ECC | | Fan Temp Perf Pwr:Usage\/Cap| Memory-Usage | GPU-Util Compute M. | | | | MIG M.
| |===============================+======================+======================| | 0 GeForce RTX 2070 Off | 00000000:01:00.0 Off | N\/A | | 18% 25C P8 9W \/ 175W | 25MiB \/ 7982MiB | 0% Default | | | | N\/A | +-------------------------------+----------------------+----------------------+ +-----------------------------------------------------------------------------+ | Processes: | | GPU GI CI PID Type Process name GPU Memory | | ID ID Usage | |=============================================================================| | 0 N\/A N\/A 1081 G \/usr\/lib\/xorg\/Xorg 20MiB | | 0 N\/A N\/A 1465 G \/usr\/bin\/gnome-shell 3MiB | +-----------------------------------------------------------------------------+ ``` When running a TF program the following happened: ``` 2021-04-06 14:35:01.589906: W tensorflow\/stream_executor\/platform\/default\/dso_loader.cc:60] Could not load dynamic library 'libcudnn.so.8'; dlerror: libcudnn.so.8: cannot open shared object file: No such file or directory 2021-04-06 14:35:01.589914: W tensorflow\/core\/common_runtime\/gpu\/gpu_device.cc:1757] Cannot dlopen some GPU libraries. Please make sure the missing libraries mentioned above are installed properly if you would like to use GPU. Follow the guide at https:\/\/www.tensorflow.org\/install\/gpu for how to download and setup the required libraries for your platform. Skipping registering GPU devices... ``` Has anyone seen this particular mix and how did you resolve it? Here is one of the additional fixes attempted, but with no change: ``` conda install cudatoolkit=11.0 ```", "response":"So I had the same issue. As the comments say, it's because you need to install CUDNN. For that, there is a guide here. 
But since I already know your distro (Ubuntu 20.04), I can give you the command lines directly: ``` wget https:\/\/developer.download.nvidia.com\/compute\/cuda\/repos\/ubuntu2004\/x86_64\/cuda-ubuntu2004.pin sudo mv cuda-ubuntu2004.pin \/etc\/apt\/preferences.d\/cuda-repository-pin-600 export last_public_key=3bf863cc # SEE NOTE BELOW sudo apt-key adv --fetch-keys https:\/\/developer.download.nvidia.com\/compute\/cuda\/repos\/ubuntu2004\/x86_64\/${last_public_key}.pub sudo add-apt-repository \"deb https:\/\/developer.download.nvidia.com\/compute\/cuda\/repos\/ubuntu2004\/x86_64\/ \/\" sudo apt-get update sudo apt-get install libcudnn8 sudo apt-get install libcudnn8-dev ``` where ${last_public_key} is the last public key (file with .pub extension) published on https:\/\/developer.download.nvidia.com\/compute\/cuda\/repos\/ubuntu2004\/x86_64\/. (As of March 8th 2023, when this post was edited, it was 3bf863cc). And if you want to install a specific version, the last 2 commands would be replaced with ``` sudo apt-get install libcudnn8=${cudnn_version}-1+${cuda_version} sudo apt-get install libcudnn8-dev=${cudnn_version}-1+${cuda_version} ``` where ${cudnn_version} is for example 8.2.4.* and ${cuda_version} is for example cuda11.0 (as I see you have 11.0 in the nvidia-smi output; I have not tested it since mine was 11.4, but I guess it should work OK)
I have built a simple Keras network: ``` import numpy as np; from keras.models import Sequential; from keras.layers import Dense,Activation; data= np.genfromtxt(\".\/kerastests\/mydata.csv\", delimiter=';') x_target=data[:,29] x_training=np.delete(data,6,axis=1) x_training=np.delete(x_training,28,axis=1) model=Sequential() model.add(Dense(20,activation='relu', input_dim=x_training.shape[1])) model.add(Dense(10,activation='relu')) model.add(Dense(1)); model.compile(optimizer='adam',loss='mean_squared_error',metrics=['accuracy']) model.fit(x_training, x_target) ``` From my source data, I have removed 2 columns, as you can see. One is a column that came with dates in a string format (in the dataset, besides it, I have a column for the day, another for the month, and another for the year, so I don't need that column) and the other column is the column I use as target for the model). When I train this model I get this output: ``` 32\/816 [>.............................] - ETA: 23s - loss: 13541942.0000 - acc: 0.0000e+00 800\/816 [============================>.] - ETA: 0s - loss: 11575466.0400 - acc: 0.0000e+00 816\/816 [==============================] - 1s - loss: 11536905.2353 - acc: 0.0000e+00 Epoch 2\/10 32\/816 [>.............................] - ETA: 0s - loss: 6794785.0000 - acc: 0.0000e+00 816\/816 [==============================] - 0s - loss: 5381360.4314 - acc: 0.0000e+00 Epoch 3\/10 32\/816 [>.............................] - ETA: 0s - loss: 6235184.0000 - acc: 0.0000e+00 800\/816 [============================>.] - ETA: 0s - loss: 5199512.8700 - acc: 0.0000e+00 816\/816 [==============================] - 0s - loss: 5192977.4216 - acc: 0.0000e+00 Epoch 4\/10 32\/816 [>.............................] - ETA: 0s - loss: 4680165.5000 - acc: 0.0000e+00 736\/816 [==========================>...] 
- ETA: 0s - loss: 5050110.3043 - acc: 0.0000e+00 816\/816 [==============================] - 0s - loss: 5168771.5490 - acc: 0.0000e+00 Epoch 5\/10 32\/816 [>.............................] - ETA: 0s - loss: 5932391.0000 - acc: 0.0000e+00 768\/816 [===========================>..] - ETA: 0s - loss: 5198882.9167 - acc: 0.0000e+00 816\/816 [==============================] - 0s - loss: 5159585.9020 - acc: 0.0000e+00 Epoch 6\/10 32\/816 [>.............................] - ETA: 0s - loss: 4488318.0000 - acc: 0.0000e+00 768\/816 [===========================>..] - ETA: 0s - loss: 5144843.8333 - acc: 0.0000e+00 816\/816 [==============================] - 0s - loss: 5151492.1765 - acc: 0.0000e+00 Epoch 7\/10 32\/816 [>.............................] - ETA: 0s - loss: 6920405.0000 - acc: 0.0000e+00 800\/816 [============================>.] - ETA: 0s - loss: 5139358.5000 - acc: 0.0000e+00 816\/816 [==============================] - 0s - loss: 5169839.2941 - acc: 0.0000e+00 Epoch 8\/10 32\/816 [>.............................] - ETA: 0s - loss: 3973038.7500 - acc: 0.0000e+00 672\/816 [=======================>......] - ETA: 0s - loss: 5183285.3690 - acc: 0.0000e+00 816\/816 [==============================] - 0s - loss: 5141417.0000 - acc: 0.0000e+00 Epoch 9\/10 32\/816 [>.............................] - ETA: 0s - loss: 4969548.5000 - acc: 0.0000e+00 768\/816 [===========================>..] - ETA: 0s - loss: 5126550.1667 - acc: 0.0000e+00 816\/816 [==============================] - 0s - loss: 5136524.5098 - acc: 0.0000e+00 Epoch 10\/10 32\/816 [>.............................] - ETA: 0s - loss: 6334703.5000 - acc: 0.0000e+00 768\/816 [===========================>..] - ETA: 0s - loss: 5197778.8229 - acc: 0.0000e+00 816\/816 [==============================] - 0s - loss: 5141391.2059 - acc: 0.0000e+00 ``` Why is this happening? My data is a time series. I know that for time series people do not usually use Dense neurons, but it is just a test. 
What really puzzles me is that the accuracy is always 0. And, with other tests I did, even the loss gets to a \"NAN\" value. Could anybody help here?", "response":"Your model seems to correspond to a regression model for the following reasons: You are using linear (the default one) as an activation function in the output layer (and relu in the layer before). Your loss is loss='mean_squared_error'. However, the metric that you use - metrics=['accuracy'] - corresponds to a classification problem. If you want to do regression, remove metrics=['accuracy']. That is, use ``` model.compile(optimizer='adam',loss='mean_squared_error') ``` Here is a list of keras metrics for regression and classification (taken from this blog post): Keras Regression Metrics \u2022Mean Squared Error: mean_squared_error, MSE or mse \u2022Mean Absolute Error: mean_absolute_error, MAE, mae \u2022Mean Absolute Percentage Error: mean_absolute_percentage_error, MAPE, mape \u2022Cosine Proximity: cosine_proximity, cosine Keras Classification Metrics \u2022Binary Accuracy: binary_accuracy, acc \u2022Categorical Accuracy: categorical_accuracy, acc \u2022Sparse Categorical Accuracy: sparse_categorical_accuracy \u2022Top k Categorical Accuracy: top_k_categorical_accuracy (requires you to specify a k parameter) \u2022Sparse Top k Categorical Accuracy: sparse_top_k_categorical_accuracy (requires you to specify a k parameter)", "best_answers_score":0.8, "library_name":"tensorflow", "question_url":"https:\/\/stackoverflow.com\/questions\/45632549\/why-is-the-accuracy-for-my-keras-model-always-0-when-training", "best_answers_votes":103, "question_length":3936, "response_length":1189 }, { "question":"ImportError: libcublas.so.10.0: cannot open shared object file: No such file or directory I have installed Cuda 10.1 and cudnn on Ubuntu 18.04, and they seem to be installed properly: when I type nvcc and nvidia-smi, I get a proper response: ``` user:~$ nvcc -V nvcc: NVIDIA (R) Cuda compiler driver Copyright (c) 2005-2019 NVIDIA
Corporation Built on Fri_Feb__8_19:08:17_PST_2019 Cuda compilation tools, release 10.1, V10.1.105 user:~$ nvidia-smi Mon Mar 18 14:36:47 2019 +-----------------------------------------------------------------------------+ | NVIDIA-SMI 418.43 Driver Version: 418.43 CUDA Version: 10.1 | |-------------------------------+----------------------+----------------------+ | GPU Name Persistence-M| Bus-Id Disp.A | Volatile Uncorr. ECC | | Fan Temp Perf Pwr:Usage\/Cap| Memory-Usage | GPU-Util Compute M. | |===============================+======================+======================| | 0 Quadro K5200 Off | 00000000:03:00.0 On | Off | | 26% 39C P8 14W \/ 150W | 225MiB \/ 8118MiB | 0% Default | +-------------------------------+----------------------+----------------------+ +-----------------------------------------------------------------------------+ | Processes: GPU Memory | | GPU PID Type Process name Usage | |=============================================================================| | 0 1538 G \/usr\/lib\/xorg\/Xorg 32MiB | | 0 1583 G \/usr\/bin\/gnome-shell 5MiB | | 0 3008 G \/usr\/lib\/xorg\/Xorg 100MiB | | 0 3120 G \/usr\/bin\/gnome-shell 82MiB | +-----------------------------------------------------------------------------+ ``` I have installed tensorflow using: user:~$ sudo pip3 install --upgrade tensorflow-gpu ``` The directory '\/home\/amin\/.cache\/pip\/http' or its parent directory is not owned by the current user and the cache has been disabled. Please check the permissions and owner of that directory. If executing pip with sudo, you may want sudo's -H flag. The directory '\/home\/amin\/.cache\/pip' or its parent directory is not owned by the current user and caching wheels has been disabled. check the permissions and owner of that directory. If executing pip with sudo, you may want sudo's -H flag. 
Requirement already up-to-date: tensorflow-gpu in \/usr\/local\/lib\/python3.6\/dist-packages (1.13.1) Requirement already satisfied, skipping upgrade: keras-applications>=1.0.6 in \/usr\/local\/lib\/python3.6\/dist-packages (from tensorflow-gpu) (1.0.7) Requirement already satisfied, skipping upgrade: protobuf>=3.6.1 in \/usr\/local\/lib\/python3.6\/dist-packages (from tensorflow-gpu) (3.6.1) Requirement already satisfied, skipping upgrade: wheel>=0.26 in \/usr\/local\/lib\/python3.6\/dist-packages (from tensorflow-gpu) (0.32.3) Requirement already satisfied, skipping upgrade: absl-py>=0.1.6 in \/usr\/local\/lib\/python3.6\/dist-packages (from tensorflow-gpu) (0.7.0) Requirement already satisfied, skipping upgrade: keras-preprocessing>=1.0.5 in \/usr\/local\/lib\/python3.6\/dist-packages (from tensorflow-gpu) (1.0.9) Requirement already satisfied, skipping upgrade: gast>=0.2.0 in \/usr\/local\/lib\/python3.6\/dist-packages (from tensorflow-gpu) (0.2.2) Requirement already satisfied, skipping upgrade: termcolor>=1.1.0 in \/usr\/local\/lib\/python3.6\/dist-packages (from tensorflow-gpu) (1.1.0) Requirement already satisfied, skipping upgrade: grpcio>=1.8.6 in \/usr\/local\/lib\/python3.6\/dist-packages (from tensorflow-gpu) (1.18.0) Requirement already satisfied, skipping upgrade: tensorflow-estimator=1.13.0 in \/usr\/local\/lib\/python3.6\/dist-packages (from tensorflow-gpu) (1.13.0) Requirement already satisfied, skipping upgrade: six>=1.10.0 in \/usr\/lib\/python3\/dist-packages (from tensorflow-gpu) (1.11.0) Requirement already satisfied, skipping upgrade: numpy>=1.13.3 in \/usr\/lib\/python3\/dist-packages (from tensorflow-gpu) (1.13.3) Requirement already satisfied, skipping upgrade: astor>=0.6.0 in \/usr\/local\/lib\/python3.6\/dist-packages (from tensorflow-gpu) (0.7.1) Requirement already satisfied, skipping upgrade: tensorboard=1.13.0 in \/usr\/local\/lib\/python3.6\/dist-packages (from tensorflow-gpu) (1.13.1) Requirement already satisfied, skipping 
upgrade: h5py in \/usr\/local\/lib\/python3.6\/dist-packages (from keras-applications>=1.0.6->tensorflow-gpu) (2.9.0) Requirement already satisfied, skipping upgrade: setuptools in \/usr\/local\/lib\/python3.6\/dist-packages (from protobuf>=3.6.1->tensorflow-gpu) (40.6.3) Requirement already satisfied, skipping upgrade: mock>=2.0.0 in \/usr\/local\/lib\/python3.6\/dist-packages (from tensorflow-estimator=1.13.0->tensorflow-gpu) (2.0.0) Requirement already satisfied, skipping upgrade: werkzeug>=0.11.15 in \/usr\/local\/lib\/python3.6\/dist-packages (from tensorboard=1.13.0->tensorflow-gpu) (0.14.1) Requirement already satisfied, skipping upgrade: markdown>=2.6.8 in \/usr\/local\/lib\/python3.6\/dist-packages (from tensorboard=1.13.0->tensorflow-gpu) (3.0.1) Requirement already satisfied, skipping upgrade: pbr>=0.11 in \/usr\/local\/lib\/python3.6\/dist-packages (from mock>=2.0.0->tensorflow-estimator=1.13.0->tensorflow-gpu) (5.1.1) ``` However when I am trying to import tensorflow I am getting error about libcublas.so.10.0: ``` user:~$ python3 Python 3.6.7 (default, Oct 22 2018, 11:32:17) [GCC 8.2.0] on linux Type \"help\", \"copyright\", \"credits\" or \"license\" for more information. 
>>> import tensorflow as tf Traceback (most recent call last): File \"\/usr\/local\/lib\/python3.6\/dist-packages\/tensorflow\/python\/pywrap_tensorflow.py\", line 58, in from tensorflow.python.pywrap_tensorflow_internal import * File \"\/usr\/local\/lib\/python3.6\/dist-packages\/tensorflow\/python\/pywrap_tensorflow_internal.py\", line 28, in _pywrap_tensorflow_internal = swig_import_helper() File \"\/usr\/local\/lib\/python3.6\/dist-packages\/tensorflow\/python\/pywrap_tensorflow_internal.py\", line 24, in swig_import_helper _mod = imp.load_module('_pywrap_tensorflow_internal', fp, pathname, description) File \"\/usr\/lib\/python3.6\/imp.py\", line 243, in load_module return load_dynamic(name, filename, file) File \"\/usr\/lib\/python3.6\/imp.py\", line 343, in load_dynamic return _load(spec) ImportError: libcublas.so.10.0: cannot open shared object file: No such file or directory During handling of the above exception, another exception occurred: Traceback (most recent call last): File \"\", line 1, in File \"\/usr\/local\/lib\/python3.6\/dist-packages\/tensorflow\/__init__.py\", line 24, in from tensorflow.python import pywrap_tensorflow # pylint: disable=unused-import File \"\/usr\/local\/lib\/python3.6\/dist-packages\/tensorflow\/python\/__init__.py\", line 49, in from tensorflow.python import pywrap_tensorflow File \"\/usr\/local\/lib\/python3.6\/dist-packages\/tensorflow\/python\/pywrap_tensorflow.py\", line 74, in raise ImportError(msg) ImportError: Traceback (most recent call last): File \"\/usr\/local\/lib\/python3.6\/dist-packages\/tensorflow\/python\/pywrap_tensorflow.py\", line 58, in from tensorflow.python.pywrap_tensorflow_internal import * File \"\/usr\/local\/lib\/python3.6\/dist-packages\/tensorflow\/python\/pywrap_tensorflow_internal.py\", line 28, in _pywrap_tensorflow_internal = swig_import_helper() File \"\/usr\/local\/lib\/python3.6\/dist-packages\/tensorflow\/python\/pywrap_tensorflow_internal.py\", line 24, in swig_import_helper _mod = 
imp.load_module('_pywrap_tensorflow_internal', fp, pathname, description) File \"\/usr\/lib\/python3.6\/imp.py\", line 243, in load_module return load_dynamic(name, filename, file) File \"\/usr\/lib\/python3.6\/imp.py\", line 343, in load_dynamic return _load(spec) ImportError: libcublas.so.10.0: cannot open shared object file: No such file or directory Failed to load the native TensorFlow runtime. See https:\/\/www.tensorflow.org\/install\/errors for some common reasons and solutions. Include the entire stack trace above this error message when asking for help. ``` What I am missing? and How can I resolve this? Thanks", "response":"I downloaded cuda 10.0 from the following link CUDA 10.0 Then I installed it using the following commands: ``` sudo dpkg -i cuda-repo-ubuntu1804_10.0.130-1_amd64.deb sudo apt-key adv --fetch-keys https:\/\/developer.download.nvidia.com\/compute\/cuda\/repos\/ubuntu1804\/x86_64\/7fa2af80.pub sudo apt-get update sudo apt-get install cuda-10-0 ``` I then installed cudnn v7.5.0 for CUDA 10.0 by going to link CUDNN download and you need to logon using an account. 
After choosing the correct version, I downloaded it via the CUDNN power link. After that, I added the include and lib files for cudnn as follows: ``` sudo cp -P cuda\/targets\/ppc64le-linux\/include\/cudnn.h \/usr\/local\/cuda-10.0\/include\/ sudo cp -P cuda\/targets\/ppc64le-linux\/lib\/libcudnn* \/usr\/local\/cuda-10.0\/lib64\/ sudo chmod a+r \/usr\/local\/cuda-10.0\/lib64\/libcudnn* ``` Then I modified the .bashrc for the lib and path of cuda 10.0; if you do not have these lines, you need to add them to .bashrc: ``` export PATH=\/usr\/local\/cuda-10.0\/bin${PATH:+:${PATH}} export LD_LIBRARY_PATH=\/usr\/local\/cuda-10.0\/lib64:${LD_LIBRARY_PATH:+:${LD_LIBRARY_PATH}} ``` And after all these steps, I managed to import tensorflow in python3 successfully.", "best_answers_score":0.8, "library_name":"tensorflow", "question_url":"https:\/\/stackoverflow.com\/questions\/55224016\/importerror-libcublas-so-10-0-cannot-open-shared-object-file-no-such-file-or", "best_answers_votes":53, "question_length":7736, "response_length":1172 }, { "question":"Tensorflow installation error: not a supported wheel on this platform When I try to install TensorFlow by cloning from Git, I run into the error \"no module named copyreg,\" so I tried installing using a virtualenv. However, I then run into this error: ```none pip install https:\/\/storage.googleapis.com\/tensorflow\/mac\/tensorflow-0.5.0-py2-none-any.whl tensorflow-0.5.0-py2-none-any.whl is not a supported wheel on this platform. ``` I don't see this under the common problems section. I am using OS X v10.10.5 (Yosemite) and Python 3.4.3, but I also have Python 2.7 (I am unsure if pip differentiates between these or how to switch between them).", "response":"I too got the same problem. I downloaded get-pip.py from https:\/\/bootstrap.pypa.io\/get-pip.py and then ran python2.7 get-pip.py to install pip2.7. And then I ran the pip install command with python2.7 as follows.
For Ubuntu\/Linux: ```none python2.7 -m pip install https:\/\/storage.googleapis.com\/tensorflow\/linux\/cpu\/tensorflow-0.5.0-cp27-none-linux_x86_64.whl ``` For Mac OS X: ```none python2.7 -m pip install https:\/\/storage.googleapis.com\/tensorflow\/mac\/tensorflow-0.5.0-py2-none-any.whl ``` This should work just fine as it did for me :) I followed these instructions from here.", "best_answers_score":0.8, "library_name":"tensorflow", "question_url":"https:\/\/stackoverflow.com\/questions\/33622613\/tensorflow-installation-error-not-a-supported-wheel-on-this-platform", "best_answers_votes":56, "question_length":645, "response_length":584 }, { "question":"Adjust Single Value within Tensor -- TensorFlow I feel embarrassed asking this, but how do you adjust a single value within a tensor? Suppose you want to add '1' to only one value within your tensor? Doing it by indexing doesn't work: ``` TypeError: 'Tensor' object does not support item assignment ``` One approach would be to build an identically shaped tensor of 0's. And then adjusting a 1 at the position you want. Then you would add the two tensors together. Again this runs into the same problem as before. I've read through the API docs several times and can't seem to figure out how to do this. Thanks in advance!", "response":"UPDATE: TensorFlow 1.0 includes a tf.scatter_nd() operator, which can be used to create delta below without creating a tf.SparseTensor. This is actually surprisingly tricky with the existing ops! Perhaps somebody can suggest a nicer way to wrap up the following, but here's one way to do it. Let's say you have a tf.constant() tensor: ``` c = tf.constant([[0.0, 0.0, 0.0], [0.0, 0.0, 0.0], [0.0, 0.0, 0.0]]) ``` ...and you want to add 1.0 at location [1, 1]. One way you could do this is to define a tf.SparseTensor, delta, representing the change: ``` indices = [[1, 1]] # A list of coordinates to update. values = [1.0] # A list of values corresponding to the respective # coordinate in indices. 
shape = [3, 3] # The shape of the corresponding dense tensor, same as `c`. delta = tf.SparseTensor(indices, values, shape) ``` Then you can use the tf.sparse_tensor_to_dense() op to make a dense tensor from delta and add it to c: ``` result = c + tf.sparse_tensor_to_dense(delta) sess = tf.Session() sess.run(result) # ==> array([[ 0., 0., 0.], # [ 0., 1., 0.], # [ 0., 0., 0.]], dtype=float32) ```", "best_answers_score":0.8, "library_name":"tensorflow", "question_url":"https:\/\/stackoverflow.com\/questions\/34685947\/adjust-single-value-within-tensor-tensorflow", "best_answers_votes":72, "question_length":622, "response_length":1096 }, { "question":"MemoryError in TensorFlow; and \"successful NUMA node read from SysFS had negative value (-1)\" with xen I am using tensor flow version : 0.12.1 Cuda tool set version is 8. ``` lrwxrwxrwx 1 root root 19 May 28 17:27 cuda -> \/usr\/local\/cuda-8.0 ``` As documented here I have downloaded and installed cuDNN. But while execeting following line from my python script I am getting error messages mentioned in header: ``` model.fit_generator(train_generator, steps_per_epoch= len(train_samples), validation_data=validation_generator, validation_steps=len(validation_samples), epochs=9) ``` Detailed error message is as follows: ``` Using TensorFlow backend. 
I tensorflow\/stream_executor\/dso_loader.cc:128] successfully opened CUDA library libcublas.so locally I tensorflow\/stream_executor\/dso_loader.cc:128] successfully opened CUDA library libcudnn.so locally I tensorflow\/stream_executor\/dso_loader.cc:128] successfully opened CUDA library libcufft.so locally I tensorflow\/stream_executor\/dso_loader.cc:128] successfully opened CUDA library libcuda.so.1 locally I tensorflow\/stream_executor\/dso_loader.cc:128] successfully opened CUDA library libcurand.so locally Epoch 1\/9 Exception in thread Thread-1: Traceback (most recent call last): File \" lib\/python3.5\/threading.py\", line 914, in _bootstrap_inner self.run() File \" lib\/python3.5\/threading.py\", line 862, in run self._target(*self._args, **self._kwargs) File \" lib\/python3.5\/site-packages\/keras\/engine\/training.py\", line 612, in data_generator_task generator_output = next(self._generator) StopIteration I tensorflow\/stream_executor\/cuda\/cuda_gpu_executor.cc:937] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero I tensorflow\/core\/common_runtime\/gpu\/gpu_device.cc:885] Found device 0 with properties: name: GRID K520 major: 3 minor: 0 memoryClockRate (GHz) 0.797 pciBusID 0000:00:03.0 Total memory: 3.94GiB Free memory: 3.91GiB I tensorflow\/core\/common_runtime\/gpu\/gpu_device.cc:906] DMA: 0 I tensorflow\/core\/common_runtime\/gpu\/gpu_device.cc:916] 0: Y I tensorflow\/core\/common_runtime\/gpu\/gpu_device.cc:975] Creating TensorFlow device (\/gpu:0) -> (device: 0, name: GRID K520, pci bus id: 0000:00:03.0) Traceback (most recent call last): File \"model_new.py\", line 82, in model.fit_generator(train_generator, steps_per_epoch= len(train_samples),validation_data=validation_generator, validation_steps=len(validation_samples),epochs=9) File \" lib\/python3.5\/site-packages\/keras\/legacy\/interfaces.py\", line 88, in wrapper return func(*args, **kwargs) File \" 
lib\/python3.5\/site-packages\/keras\/models.py\", line 1110, in fit_generator initial_epoch=initial_epoch) File \" lib\/python3.5\/site-packages\/keras\/legacy\/interfaces.py\", line 88, in wrapper return func(*args, **kwargs) File \" lib\/python3.5\/site-packages\/keras\/engine\/training.py\", line 1890, in fit_generator class_weight=class_weight) File \" lib\/python3.5\/site-packages\/keras\/engine\/training.py\", line 1633, in train_on_batch outputs = self.train_function(ins) File \" lib\/python3.5\/site-packages\/keras\/backend\/tensorflow_backend.py\", line 2229, in __call__ feed_dict=feed_dict) File \" lib\/python3.5\/site-packages\/tensorflow\/python\/client\/session.py\", line 766, in run run_metadata_ptr) File \" lib\/python3.5\/site-packages\/tensorflow\/python\/client\/session.py\", line 937, in _run np_val = np.asarray(subfeed_val, dtype=subfeed_dtype) File \" lib\/python3.5\/site-packages\/numpy\/core\/numeric.py\", line 531, in asarray return array(a, dtype, copy=False, order=order) MemoryError ``` If any suggestion to resolve this error is appreciated. EDIT: Issue is fatal. 
``` uname -a Linux ip-172-31-76-109 4.4.0-78-generic #99-Ubuntu SMP Thu Apr 27 15:29:09 UTC 2017 x86_64 x86_64 x86_64 GNU\/Linux sudo lshw -short [sudo] password for carnd: H\/W path Device Class Description ========================================== system HVM domU \/0 bus Motherboard \/0\/0 memory 96KiB BIOS \/0\/401 processor Intel(R) Xeon(R) CPU E5-2670 0 @ 2.60GHz \/0\/402 processor CPU \/0\/403 processor CPU \/0\/404 processor CPU \/0\/405 processor CPU \/0\/406 processor CPU \/0\/407 processor CPU \/0\/408 processor CPU \/0\/1000 memory 15GiB System Memory \/0\/1000\/0 memory 15GiB DIMM RAM \/0\/100 bridge 440FX - 82441FX PMC [Natoma] \/0\/100\/1 bridge 82371SB PIIX3 ISA [Natoma\/Triton II] \/0\/100\/1.1 storage 82371SB PIIX3 IDE [Natoma\/Triton II] \/0\/100\/1.3 bridge 82371AB\/EB\/MB PIIX4 ACPI \/0\/100\/2 display GD 5446 \/0\/100\/3 display GK104GL [GRID K520] \/0\/100\/1f generic Xen Platform Device \/1 eth0 network Ethernet interface ``` EDIT 2: This is an EC2 instance in the Amazon cloud, and all the numa_node files hold the value -1. ``` :\/sys$ find . -name numa_node -exec cat '{}' \\; find: \u2018.\/fs\/fuse\/connections\/39\u2019: Permission denied -1 -1 -1 -1 -1 -1 -1 find: \u2018.\/kernel\/debug\u2019: Permission denied ``` EDIT 3: After updating the numa_node files, the NUMA-related error disappeared, but all the other errors listed above remain. And again I got a fatal error. ``` Using TensorFlow backend.
I tensorflow\/stream_executor\/dso_loader.cc:128] successfully opened CUDA library libcublas.so locally I tensorflow\/stream_executor\/dso_loader.cc:128] successfully opened CUDA library libcudnn.so locally I tensorflow\/stream_executor\/dso_loader.cc:128] successfully opened CUDA library libcufft.so locally I tensorflow\/stream_executor\/dso_loader.cc:128] successfully opened CUDA library libcuda.so.1 locally I tensorflow\/stream_executor\/dso_loader.cc:128] successfully opened CUDA library libcurand.so locally Epoch 1\/9 Exception in thread Thread-1: Traceback (most recent call last): File \" lib\/python3.5\/threading.py\", line 914, in _bootstrap_inner self.run() File \" lib\/python3.5\/threading.py\", line 862, in run self._target(*self._args, **self._kwargs) File \" lib\/python3.5\/site-packages\/keras\/engine\/training.py\", line 612, in data_generator_task generator_output = next(self._generator) StopIteration I tensorflow\/core\/common_runtime\/gpu\/gpu_device.cc:885] Found device 0 with properties: name: GRID K520 major: 3 minor: 0 memoryClockRate (GHz) 0.797 pciBusID 0000:00:03.0 Total memory: 3.94GiB Free memory: 3.91GiB I tensorflow\/core\/common_runtime\/gpu\/gpu_device.cc:906] DMA: 0 I tensorflow\/core\/common_runtime\/gpu\/gpu_device.cc:916] 0: Y I tensorflow\/core\/common_runtime\/gpu\/gpu_device.cc:975] Creating TensorFlow device (\/gpu:0) -> (device: 0, name: GRID K520, pci bus id: 0000:00:03.0) Traceback (most recent call last): File \"model_new.py\", line 85, in model.fit_generator(train_generator, steps_per_epoch= len(train_samples),validation_data=validation_generator, validation_steps=len(validation_samples),epochs=9) File \" lib\/python3.5\/site-packages\/keras\/legacy\/interfaces.py\", line 88, in wrapper return func(*args, **kwargs) File \" lib\/python3.5\/site-packages\/keras\/models.py\", line 1110, in fit_generator initial_epoch=initial_epoch) File \" lib\/python3.5\/site-packages\/keras\/legacy\/interfaces.py\", line 88, in wrapper 
return func(*args, **kwargs) File \" lib\/python3.5\/site-packages\/keras\/engine\/training.py\", line 1890, in fit_generator class_weight=class_weight) File \" lib\/python3.5\/site-packages\/keras\/engine\/training.py\", line 1633, in train_on_batch outputs = self.train_function(ins) File \" lib\/python3.5\/site-packages\/keras\/backend\/tensorflow_backend.py\", line 2229, in __call__ feed_dict=feed_dict) File \" lib\/python3.5\/site-packages\/tensorflow\/python\/client\/session.py\", line 766, in run run_metadata_ptr) File \" lib\/python3.5\/site-packages\/tensorflow\/python\/client\/session.py\", line 937, in _run np_val = np.asarray(subfeed_val, dtype=subfeed_dtype) File \" lib\/python3.5\/site-packages\/numpy\/core\/numeric.py\", line 531, in asarray return array(a, dtype, copy=False, order=order) MemoryError ```", "response":"The code below is what prints the message \"successful NUMA node read from SysFS had negative value (-1)\", and it is not a fatal error, it is just a warning. The real error is the MemoryError in your File \"model_new.py\", line 85. We need more of your source code to check this error. Try to make your model smaller or run it on a server with more RAM. About the NUMA node warning: https:\/\/github.com\/tensorflow\/tensorflow\/blob\/e4296aefff97e6edd3d7cee9a09b9dd77da4c034\/tensorflow\/stream_executor\/cuda\/cuda_gpu_executor.cc#L855 ```cpp \/\/ Attempts to read the NUMA node corresponding to the GPU device's PCI bus out \/\/ of SysFS. Returns -1 if it cannot... static int TryToReadNumaNode(const string &pci_bus_id, int device_ordinal) {... string filename = port::Printf(\"\/sys\/bus\/pci\/devices\/%s\/numa_node\", pci_bus_id.c_str()); FILE *file = fopen(filename.c_str(), \"r\"); if (file == nullptr) { LOG(ERROR) Description: This file contains the NUMA node to which the PCI device is attached, or -1 if the node is unknown. The initial value comes from an ACPI _PXM method or a similar firmware source.
If that is missing or incorrect, this file can be written to override the node. In that case, please report a firmware bug to the system vendor. Writing to this file taints the kernel with TAINT_FIRMWARE_WORKAROUND, which reduces the supportability of your system. ``` There is a quick (kludge) workaround for this error: find the numa_node of your GPU and, with a root account, run this command after every boot, where NNNNN is the PCI id of your card (search in lspci output and in the \/sys\/bus\/pci\/devices\/ directory) ``` echo 0 | sudo tee -a \/sys\/bus\/pci\/devices\/NNNNN\/numa_node ``` Or just echo it into every such file, it should be rather safe: ``` for a in \/sys\/bus\/pci\/devices\/*; do echo 0 | sudo tee -a $a\/numa_node; done ``` Also, your lshw output shows that it is not a PC, but a Xen virtual guest. There is something wrong between the Xen platform (ACPI) emulation and the Linux PCI bus NUMA-support code.", "best_answers_score":0.8, "library_name":"tensorflow", "question_url":"https:\/\/stackoverflow.com\/questions\/44232898\/memoryerror-in-tensorflow-and-successful-numa-node-read-from-sysfs-had-negativ", "best_answers_votes":91, "question_length":7718, "response_length":1949 }, { "question":"How to set layer-wise learning rate in Tensorflow? I am wondering if there is a way that I can use different learning rates for different layers, like what is in Caffe. I am trying to modify a pre-trained model and use it for other tasks. What I want is to speed up the training for the newly added layers and keep the trained layers at a low learning rate in order to prevent them from being distorted. For example, I have a 5-conv-layer pre-trained model. Now I add a new conv layer and fine tune it. The first 5 layers would have a learning rate of 0.00001 and the last one would have 0.001.
Any idea how to achieve this?", "response":"It can be achieved quite easily with 2 optimizers: ``` var_list1 = [variables from first 5 layers] var_list2 = [the rest of variables] train_op1 = GradientDescentOptimizer(0.00001).minimize(loss, var_list=var_list1) train_op2 = GradientDescentOptimizer(0.0001).minimize(loss, var_list=var_list2) train_op = tf.group(train_op1, train_op2) ``` One disadvantage of this implementation is that it computes tf.gradients(.) twice inside the optimizers and thus it might not be optimal in terms of execution speed. This can be mitigated by explicitly calling tf.gradients(.), splitting the list into 2 and passing the corresponding gradients to both optimizers. Related question: Holding variables constant during optimizer EDIT: Added a more efficient but longer implementation: ``` var_list1 = [variables from first 5 layers] var_list2 = [the rest of variables] opt1 = tf.train.GradientDescentOptimizer(0.00001) opt2 = tf.train.GradientDescentOptimizer(0.0001) grads = tf.gradients(loss, var_list1 + var_list2) grads1 = grads[:len(var_list1)] grads2 = grads[len(var_list1):] train_op1 = opt1.apply_gradients(zip(grads1, var_list1)) train_op2 = opt2.apply_gradients(zip(grads2, var_list2)) train_op = tf.group(train_op1, train_op2) ``` You can use tf.trainable_variables() to get all training variables and decide to select from them. The difference is that in the first implementation tf.gradients(.) is called twice inside the optimizers. This may cause some redundant operations to be executed (e.g. gradients on the first layer can reuse some computations for the gradients of the following layers).
Both Tensorflow Keras models and Tensorflow Estimators are able to train neural network models and use them to predict new data. They are both high-level APIs that sit on top of the low-level core TensorFlow API. So when should I use one over the other?", "response":"As @jaromir pointed out - estimators are deprecated and unavailable from Tensorflow 2.16. Use the Keras APIs instead. From the documentation: Warning: TensorFlow 2.15 included the final release of the tf-estimator package. Estimators will not be available in TensorFlow 2.16 or after. See the migration guide for more information about how to convert off of Estimators. Below is the original answer from 2018. Background The Estimators API was added to Tensorflow in Release 1.1, and provides a high-level abstraction over lower-level Tensorflow core operations. It works with an Estimator instance, which is TensorFlow's high-level representation of a complete model. Keras is similar to the Estimators API in that it abstracts deep learning model components such as layers, activation functions and optimizers, to make it easier for developers. It is a model-level library, and does not handle low-level operations, which is the job of tensor manipulation libraries, or backends. Keras supports three backends - Tensorflow, Theano and CNTK. Keras was not part of Tensorflow until Release 1.4.0 (2 Nov 2017). Now, when you use tf.keras (or talk about 'Tensorflow Keras'), you are simply using the Keras interface with the Tensorflow backend to build and train your model. So both the Estimator API and the Keras API provide a high-level API over the low-level core Tensorflow API, and you can use either to train your model. But in most cases, if you are working with Tensorflow, you'd want to use the Estimators API for the reasons listed below. Distribution You can conduct distributed training across multiple servers with the Estimators API, but not with the Keras API.
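In a distributed Estimator setup, each server in the cluster is pointed at its peers through the TF_CONFIG environment variable. A minimal sketch of its shape (hostnames, ports and the exact cluster roles here are placeholders, not taken from the original answer): ``` TF_CONFIG={ \"cluster\": { \"worker\": [\"host1:2222\", \"host2:2222\"], \"ps\": [\"host3:2222\"] }, \"task\": { \"type\": \"worker\", \"index\": 0 } } ```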
From the Tensorflow Keras Guide, it says that: The Estimators API is used for training models for distributed environments. And from the Tensorflow Estimators Guide, it says that: You can run Estimator-based models on a local host or on a distributed multi-server environment without changing your model. Furthermore, you can run Estimator-based models on CPUs, GPUs, or TPUs without recoding your model. Pre-made Estimator Whilst Keras provides abstractions that make building your models easier, you still have to write code to build your model. With Estimators, Tensorflow provides Pre-made Estimators, which are models which you can use straight away, simply by plugging in the hyperparameters. Pre-made Estimators are similar to how you'd work with scikit-learn. For example, the tf.estimator.LinearRegressor from Tensorflow is similar to the sklearn.linear_model.LinearRegression from scikit-learn. Integration with Other Tensorflow Tools Tensorflow provides a visualization tool called TensorBoard that helps you visualize your graph and statistics. By using an Estimator, you can easily save summaries to be visualized with Tensorboard.
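To make the scikit-learn comparison above concrete: a pre-made estimator exposes a fit-then-predict surface. The sketch below is plain Python (illustrative only; TinyLinearRegressor is a made-up name, not the real tf.estimator.LinearRegressor):

```python
class TinyLinearRegressor:
    # minimal fit and predict object in the spirit of sklearn and pre-made Estimators
    def fit(self, xs, ys):
        n = len(xs)
        mx = sum(xs) / n
        my = sum(ys) / n
        # closed-form least squares for a single feature
        num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
        den = sum((x - mx) ** 2 for x in xs)
        self.slope = num / den
        self.intercept = my - self.slope * mx
        return self

    def predict(self, xs):
        return [self.slope * x + self.intercept for x in xs]

model = TinyLinearRegressor().fit([0, 1, 2, 3], [1, 3, 5, 7])
print(model.predict([10]))  # data follows y = 2x + 1 -> [21.0]
```

The real pre-made Estimators follow the same train-then-evaluate shape, just with input functions and feature columns instead of raw lists.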
Converting Keras Model to Estimator To migrate a Keras model to an Estimator, use the tf.keras.estimator.model_to_estimator method.", "best_answers_score":0.8, "library_name":"tensorflow", "question_url":"https:\/\/stackoverflow.com\/questions\/51455863\/whats-the-difference-between-a-tensorflow-keras-model-and-estimator", "best_answers_votes":88, "question_length":324, "response_length":2941 }, { "question":"Cuda 12 + tf-nightly 2.12: Could not find cuda drivers on your machine, GPU will not be used, while every checking is fine and in torch it works tf-nightly version = 2.12.0-dev2023203 Python version = 3.10.6 CUDA drivers version = 525.85.12 CUDA version = 12.0 Cudnn version = 8.5.0 I am using Linux (x86_64, Ubuntu 22.04) I am coding in Visual Studio Code on a venv virtual environment I am trying to run some models on the GPU (NVIDIA GeForce RTX 3050) using tensorflow nightly 2.12 (to be able to use Cuda 12.0). The problem that I have is that apparently every checking that I am making seems to be correct, but in the end the script is not able to detect the GPU. I've dedicated a lot of time trying to see what is happening and nothing seems to work, so any advice or solution will be more than welcomed. The GPU seems to be working for torch as you can see at the very end of the question. 
I will show some of the most common checkings regarding CUDA that I did (Visual Studio Code terminal), I hope you find them useful: Check CUDA version: $ nvcc --version ``` nvcc: NVIDIA (R) Cuda compiler driver Copyright (c) 2005-2023 NVIDIA Corporation Built on Fri_Jan__6_16:45:21_PST_2023 Cuda compilation tools, release 12.0, V12.0.140 Build cuda_12.0.r12.0\/compiler.32267302_0 ``` Check if the connection with the CUDA libraries is correct: $ echo $LD_LIBRARY_PATH ``` \/usr\/cuda\/lib ``` Check nvidia drivers for the GPU and check if GPU is readable for the venv: $ nvidia-smi ``` +-----------------------------------------------------------------------------+ | NVIDIA-SMI 525.85.12 Driver Version: 525.85.12 CUDA Version: 12.0 | |-------------------------------+----------------------+----------------------+ | GPU Name Persistence-M| Bus-Id Disp.A | Volatile Uncorr. ECC | | Fan Temp Perf Pwr:Usage\/Cap| Memory-Usage | GPU-Util Compute M. | | | | MIG M. | |===============================+======================+======================| | 0 NVIDIA GeForce ... 
On | 00000000:01:00.0 On | N\/A | | N\/A 40C P5 6W \/ 20W | 46MiB \/ 4096MiB | 22% Default | | | | N\/A | +-------------------------------+----------------------+----------------------+ +-----------------------------------------------------------------------------+ | Processes: | | GPU GI CI PID Type Process name GPU Memory | | ID ID Usage | |=============================================================================| | 0 N\/A N\/A 1356 G \/usr\/lib\/xorg\/Xorg 45MiB | +-----------------------------------------------------------------------------+ ``` Add cuda\/bin PATH and Check it: $ export PATH=\"\/usr\/local\/cuda\/bin:$PATH\" $ echo $PATH ``` \/usr\/local\/cuda-12.0\/bin:\/home\/victus-linux\/Escritorio\/MasterThesis_CODE\/to_share\/venv_master\/bin:\/usr\/local\/sbin:\/usr\/local\/bin:\/usr\/sbin:\/usr\/bin:\/sbin:\/bin:\/usr\/games:\/usr\/local\/games:\/snap\/bin:\/snap\/bin ``` Custom function to check if CUDA is correctly installed: [function by Sherlock] ```bash function lib_installed() { \/sbin\/ldconfig -N -v $(sed 's\/:\/ \/' \/dev\/null | grep $1; } function check() { lib_installed $1 && echo \"$1 is installed\" || echo \"ERROR: $1 is NOT installed\"; } check libcuda check libcudart ``` ``` libcudart.so.12 -> libcudart.so.12.0.146 libcuda.so.1 -> libcuda.so.525.85.12 libcuda.so.1 -> libcuda.so.525.85.12 libcudadebugger.so.1 -> libcudadebugger.so.525.85.12 libcuda is installed libcudart.so.12 -> libcudart.so.12.0.146 libcudart is installed ``` Custom function to check if Cudnn is correctly installed: [function by Sherlock] ```bash function lib_installed() { \/sbin\/ldconfig -N -v $(sed 's\/:\/ \/' \/dev\/null | grep $1; } function check() { lib_installed $1 && echo \"$1 is installed\" || echo \"ERROR: $1 is NOT installed\"; } check libcudnn ``` ``` libcudnn_cnn_train.so.8 -> libcudnn_cnn_train.so.8.8.0 libcudnn_cnn_infer.so.8 -> libcudnn_cnn_infer.so.8.8.0 libcudnn_adv_train.so.8 -> libcudnn_adv_train.so.8.8.0 libcudnn.so.8 -> libcudnn.so.8.8.0 
libcudnn_ops_train.so.8 -> libcudnn_ops_train.so.8.8.0 libcudnn_adv_infer.so.8 -> libcudnn_adv_infer.so.8.8.0 libcudnn_ops_infer.so.8 -> libcudnn_ops_infer.so.8.8.0 libcudnn is installed ``` So, once I did these previous checkings I used a script to evaluate if everything was finally ok and then the following error appeared: ```py import tensorflow as tf print(f'\\nTensorflow version = {tf.__version__}\\n') print(f'\\n{tf.config.list_physical_devices(\"GPU\")}\\n') ``` ``` 2023-03-02 12:05:09.463343: I tensorflow\/tsl\/cuda\/cudart_stub.cc:28] Could not find cuda drivers on your machine, GPU will not be used. 2023-03-02 12:05:09.489911: I tensorflow\/tsl\/cuda\/cudart_stub.cc:28] Could not find cuda drivers on your machine, GPU will not be used. 2023-03-02 12:05:09.490522: I tensorflow\/core\/platform\/cpu_feature_guard.cc:182] This TensorFlow binary is optimized to use available CPU instructions in performance-critical operations. To enable the following instructions: AVX2 FMA, in other operations, rebuild TensorFlow with the appropriate compiler flags. 2023-03-02 12:05:10.066759: W tensorflow\/compiler\/tf2tensorrt\/utils\/py_utils.cc:38] TF-TRT Warning: Could not find TensorRT Tensorflow version = 2.12.0-dev20230203 2023-03-02 12:05:10.748675: I tensorflow\/compiler\/xla\/stream_executor\/cuda\/cuda_gpu_executor.cc:996] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero. See more at https:\/\/github.com\/torvalds\/linux\/blob\/v6.0\/Documentation\/ABI\/testing\/sysfs-bus-pci#L344-L355 2023-03-02 12:05:10.771263: W tensorflow\/core\/common_runtime\/gpu\/gpu_device.cc:1956] Cannot dlopen some GPU libraries. Please make sure the missing libraries mentioned above are installed properly if you would like to use GPU. Follow the guide at https:\/\/www.tensorflow.org\/install\/gpu for how to download and setup the required libraries for your platform. Skipping registering GPU devices... 
[] ``` Extra check: I tried to run a checking script on torch and in here it worked so I guess the problem is related with tensorflow\/tf-nightly ```py import torch print(f'\\nAvailable cuda = {torch.cuda.is_available()}') print(f'\\nGPUs availables = {torch.cuda.device_count()}') print(f'\\nCurrent device = {torch.cuda.current_device()}') print(f'\\nCurrent Device location = {torch.cuda.device(0)}') print(f'\\nName of the device = {torch.cuda.get_device_name(0)}') ``` ``` Available cuda = True GPUs availables = 1 Current device = 0 Current Device location = Name of the device = NVIDIA GeForce RTX 3050 Laptop GPU ``` Please, if you know something that might help solve this issue, don't hesitate on telling me.", "response":"I think that, as of March 2023, the only tensorflow distribution for cuda 12 is the docker package from NVIDIA. A tf package for cuda 12 should show the following info ``` >>> tf.sysconfig.get_build_info() OrderedDict([('cpu_compiler', '\/usr\/bin\/x86_64-linux-gnu-gcc-11'), ('cuda_compute_capabilities', ['compute_86']), ('cuda_version', '12.0'), ('cudnn_version', '8'), ('is_cuda_build', True), ('is_rocm_build', False), ('is_tensorrt_build', True)]) ``` But if we run tf.sysconfig.get_build_info() on any tensorflow package installed via pip, it stills tells that cuda_version is 11.x So your alternatives are: install docker with the nvidia cloud instructions and run one of the recent containers compile tensorflow from source, either nightly or last release. Caveat, it takes a lot of RAM and some time, as all good compilations do, and the occasional error to be corrected on the run. In my case, to define kFP8, the new 8-bits float. 
wait", "best_answers_score":0.8, "library_name":"tensorflow", "question_url":"https:\/\/stackoverflow.com\/questions\/75614728\/cuda-12-tf-nightly-2-12-could-not-find-cuda-drivers-on-your-machine-gpu-will", "best_answers_votes":34, "question_length":6552, "response_length":944 }, { "question":"Keras: change learning rate I'm trying to change the learning rate of my model after it has been trained with a different learning rate. I read here, here, here and some other places i can't even find anymore. I tried: ``` model.optimizer.learning_rate.set_value(0.1) model.optimizer.lr = 0.1 model.optimizer.learning_rate = 0.1 K.set_value(model.optimizer.learning_rate, 0.1) K.set_value(model.optimizer.lr, 0.1) model.optimizer.lr.assign(0.1) ``` ... but none of them worked! I don't understand how there could be such confusion around such a simple thing. Am I missing something? EDIT: Working example Here is a working example of what I'd like to do: ``` from keras.models import Sequential from keras.layers import Dense import keras import numpy as np model = Sequential() model.add(Dense(1, input_shape=(10,))) optimizer = keras.optimizers.Adam(lr=0.01) model.compile(loss='mse', optimizer=optimizer) model.fit(np.random.randn(50,10), np.random.randn(50), epochs=50) # Change learning rate to 0.001 and train for 50 more epochs model.fit(np.random.randn(50,10), np.random.randn(50), initial_epoch=50, epochs=50) ```", "response":"You can change the learning rate as follows: ``` from keras import backend as K K.set_value(model.optimizer.learning_rate, 0.001) ``` Included into your complete example it looks as follows: ``` from keras.models import Sequential from keras.layers import Dense from keras import backend as K import keras import numpy as np model = Sequential() model.add(Dense(1, input_shape=(10,))) optimizer = keras.optimizers.Adam(lr=0.01) model.compile(loss='mse', optimizer=optimizer) print(\"Learning rate before first fit:\", model.optimizer.learning_rate.numpy()) 
model.fit(np.random.randn(50,10), np.random.randn(50), epochs=50, verbose=0) # Change learning rate to 0.001 and train for 50 more epochs K.set_value(model.optimizer.learning_rate, 0.001) print(\"Learning rate before second fit:\", model.optimizer.learning_rate.numpy()) model.fit(np.random.randn(50,10), np.random.randn(50), initial_epoch=50, epochs=50, verbose=0) ``` I've just tested this with keras 2.3.1. Not sure why the approach didn't seem to work for you.", "best_answers_score":0.8, "library_name":"tensorflow", "question_url":"https:\/\/stackoverflow.com\/questions\/59737875\/keras-change-learning-rate", "best_answers_votes":66, "question_length":1122, "response_length":1017 }, { "question":"How to make a custom activation function with only Python in Tensorflow? Suppose you need to make an activation function which is not possible using only pre-defined tensorflow building-blocks, what can you do? So in Tensorflow it is possible to make your own activation function. But it is quite complicated, you have to write it in C++ and recompile the whole of tensorflow [1] [2]. Is there a simpler way?", "response":"Yes There is! Credit: It was hard to find the information and get it working but here is an example copying from the principles and code found here and here. Requirements: Before we start, there are two requirement for this to be able to succeed. First you need to be able to write your activation as a function on numpy arrays. Second you have to be able to write the derivative of that function either as a function in Tensorflow (easier) or in the worst case scenario as a function on numpy arrays. 
Writing Activation function: So let's take for example this function which we would want to use as an activation function: ``` def spiky(x): r = x % 1 if r <= 0.5: return r else: return 0 ``` which looks as follows: The first step is making it into a numpy function, this is easy: ``` import numpy as np np_spiky = np.vectorize(spiky) ``` Now we should write its derivative. Gradient of Activation: In our case it is easy, it is 1 if x mod 1 <= 0.5 and 0 otherwise. So: ``` def d_spiky(x): r = x % 1 if r <= 0.5: return 1 else: return 0 np_d_spiky = np.vectorize(d_spiky) ``` Now for the hard part of making a TensorFlow function out of it. Making a numpy fct to a tensorflow fct: We will start by making np_d_spiky into a tensorflow function. There is a function in tensorflow tf.py_func(func, inp, Tout, stateful=stateful, name=name) [doc] which transforms any numpy function to a tensorflow function, so we can use it: ``` import tensorflow as tf from tensorflow.python.framework import ops np_d_spiky_32 = lambda x: np_d_spiky(x).astype(np.float32) def tf_d_spiky(x,name=None): with tf.name_scope(name, \"d_spiky\", [x]) as name: y = tf.py_func(np_d_spiky_32, [x], [tf.float32], name=name, stateful=False) return y[0] ``` tf.py_func acts on lists of tensors (and returns a list of tensors), that is why we have [x] (and return y[0]). The stateful option is to tell tensorflow whether the function always gives the same output for the same input (stateful = False), in which case tensorflow can simplify the tensorflow graph; this is our case and will probably be the case in most situations. One thing to be careful of at this point is that numpy uses float64 but tensorflow uses float32, so you need to convert your function to use float32 before you can convert it to a tensorflow function, otherwise tensorflow will complain. This is why we need to make np_d_spiky_32 first. What about the Gradients?
The problem with only doing the above is that even though we now have tf_d_spiky which is the tensorflow version of np_d_spiky, we couldn't use it as an activation function if we wanted to because tensorflow doesn't know how to calculate the gradients of that function. Hack to get Gradients: As explained in the sources mentioned above, there is a hack to define gradients of a function using tf.RegisterGradient [doc] and tf.Graph.gradient_override_map [doc]. Copying the code from harpone we can modify the tf.py_func function to make it define the gradient at the same time: ``` def py_func(func, inp, Tout, stateful=True, name=None, grad=None): # Need to generate a unique name to avoid duplicates: rnd_name = 'PyFuncGrad' + str(np.random.randint(0, 1E+8)) tf.RegisterGradient(rnd_name)(grad) # see _MySquareGrad for grad example g = tf.get_default_graph() with g.gradient_override_map({\"PyFunc\": rnd_name}): return tf.py_func(func, inp, Tout, stateful=stateful, name=name) ``` Now we are almost done, the only thing is that the grad function we need to pass to the above py_func function needs to take a special form. It needs to take in an operation, and the previous gradients before the operation, and propagate the gradients backward after the operation. Gradient Function: So for our spiky activation function this is how we would do it: ``` def spikygrad(op, grad): x = op.inputs[0] n_gr = tf_d_spiky(x) return grad * n_gr ``` The activation function has only one input, that is why x = op.inputs[0]. If the operation had many inputs, we would need to return a tuple, one gradient for each input. For example, if the operation was a-b, the gradient with respect to a is +1 and with respect to b is -1, so we would have return +1*grad,-1*grad. Notice that we need to return tensorflow functions of the input, that is why we need tf_d_spiky; np_d_spiky would not have worked because it cannot act on tensorflow tensors.
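Before wiring anything into TensorFlow, it is worth sanity-checking that d_spiky really is the derivative of spiky. A plain-Python check against central finite differences, evaluated away from the jump points (this is an illustrative sketch, not part of the original answer):

```python
def spiky(x):
    # same piecewise function as above, scalar version
    r = x % 1
    return r if r <= 0.5 else 0.0

def d_spiky(x):
    # hand-written derivative: slope 1 on the ramp, 0 on the flat part
    return 1.0 if (x % 1) <= 0.5 else 0.0

h = 1e-6
for x in [0.1, 0.3, 0.45, 0.7, 0.9, 1.2, 1.7]:
    # central difference approximation of the slope at x
    approx = (spiky(x + h) - spiky(x - h)) / (2 * h)
    assert abs(approx - d_spiky(x)) < 1e-4, (x, approx)
print('derivative matches finite differences')
```

At the discontinuities (x mod 1 equal to 0 or 0.5) the finite difference is meaningless, which is why those points are excluded from the check.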
Alternatively we could have written the derivative using tensorflow functions: ``` def spikygrad2(op, grad): x = op.inputs[0] r = tf.mod(x,1) n_gr = tf.to_float(tf.less_equal(r, 0.5)) return grad * n_gr ``` Combining it all together: Now that we have all the pieces, we can combine them all together: ``` np_spiky_32 = lambda x: np_spiky(x).astype(np.float32) def tf_spiky(x, name=None): with tf.name_scope(name, \"spiky\", [x]) as name: y = py_func(np_spiky_32, [x], [tf.float32], name=name, grad=spikygrad) # <-- here's the call to the gradient return y[0] ``` And now we are done. And we can test it. Test: ``` with tf.Session() as sess: x = tf.constant([0.2,0.7,1.2,1.7]) y = tf_spiky(x) tf.initialize_all_variables().run() print(x.eval(), y.eval(), tf.gradients(y, [x])[0].eval()) ``` [ 0.2 0.69999999 1.20000005 1.70000005] [ 0.2 0. 0.20000005 0.] [ 1. 0. 1. 0.] Success!", "best_answers_score":0.8, "library_name":"tensorflow", "question_url":"https:\/\/stackoverflow.com\/questions\/39921607\/how-to-make-a-custom-activation-function-with-only-python-in-tensorflow", "best_answers_votes":86, "question_length":408, "response_length":5195 }, { "question":"\"freeze\" some variables\/scopes in tensorflow: stop_gradient vs passing variables to minimize I am trying to implement Adversarial NN, which requires to 'freeze' one or the other part of the graph during alternating training minibatches. I.e. there two sub-networks: G and D. ``` G( Z ) -> Xz D( X ) -> Y ``` where loss function of G depends on D[G(Z)], D[X]. First I need to train parameters in D with all G parameters fixed, and then parameters in G with parameters in D fixed. Loss function in first case will be negative loss function in the second case and the update will have to apply to the parameters of whether first or second subnetwork. I saw that tensorflow has tf.stop_gradient function. 
For the purpose of training the D (downstream) subnetwork I can use this function to block the gradient flow to ``` Z -> [ G ] -> tf.stop_gradient(Xz) -> [ D ] -> Y ``` The tf.stop_gradient is very succinctly annotated with no in-line example (and example seq2seq.py is too long and not that easy to read), but it looks like it must be called during the graph creation. Does it imply that if I want to block\/unblock gradient flow in alternating batches, I need to re-create and re-initialize the graph model? Also it seems that one cannot block the gradient flowing through the G (upstream) network by means of tf.stop_gradient, right? As an alternative I saw that one can pass the list of variables to the optimizer call as opt_op = opt.minimize(cost, ), which would be an easy solution if one could get all variables in the scopes of each subnetwork. Can one get a list of variables for a tf.scope?", "response":"The easiest way to achieve this, as you mention in your question, is to create two optimizer operations using separate calls to opt.minimize(cost, ...). By default, the optimizer will use all of the variables in tf.trainable_variables().
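The freezing idea can be seen in miniature without TensorFlow: on each minibatch, apply a gradient step to only one variable group while the other stays fixed. A toy plain-Python sketch (the quadratic loss and names here are invented for illustration, not the TF API):

```python
# two parameter groups, echoing the G and D subnetworks from the question
params = {'G': 0.0, 'D': 0.0}
targets = {'G': 3.0, 'D': -2.0}  # minima of a toy per-group quadratic loss

def grad(name):
    # gradient of (w - target)**2 with respect to w
    return 2.0 * (params[name] - targets[name])

for step in range(200):
    group = 'D' if step % 2 == 0 else 'G'  # train one group per batch
    params[group] -= 0.1 * grad(group)     # the other group stays frozen

print(params)  # each group converges toward its own target
```

The two-optimizer construction above does exactly this, except the group selection is made by choosing which train_op to run in the session.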
If you want to filter the variables to a particular scope, you can use the optional scope argument to tf.get_collection() as follows: ``` optimizer = tf.train.AdagradOptimizer(0.01) first_train_vars = tf.get_collection(tf.GraphKeys.TRAINABLE_VARIABLES, \"scope\/prefix\/for\/first\/vars\") first_train_op = optimizer.minimize(cost, var_list=first_train_vars) second_train_vars = tf.get_collection(tf.GraphKeys.TRAINABLE_VARIABLES, \"scope\/prefix\/for\/second\/vars\") second_train_op = optimizer.minimize(cost, var_list=second_train_vars) ```", "best_answers_score":0.8, "library_name":"tensorflow", "question_url":"https:\/\/stackoverflow.com\/questions\/35298326\/freeze-some-variables-scopes-in-tensorflow-stop-gradient-vs-passing-variables", "best_answers_votes":72, "question_length":1577, "response_length":768 }, { "question":"Tensorflow estimator ValueError: logits and labels must have the same shape ((?, 1) vs (?,)) I'm classifying movie reviews as positive or negative using binary crossentropy. So, when I'm trying to wrap my keras model with tensorflow estimator, I get the error: ``` Tensorflow estimator ValueError: logits and labels must have the same shape ((?, 1) vs (?,)) ``` I'm using sigmoid activation as my last layer, guess I'm missing something trivial here. Any help? ``` from tensorflow import keras import tensorflow as tf print(\"Tensorflow {} loaded\".format(tf.__version__)) import numpy as np keras.__version__ from keras.datasets import imdb (train_data, train_labels), (test_data, test_labels) = imdb.load_data(num_words=10000) def vectorize_sequences(sequences, dimension=10000): # Create an all-zero matrix of shape (len(sequences), dimension) results = np.zeros((len(sequences), dimension)) for i, sequence in enumerate(sequences): results[i, sequence] = 1.
# set specific indices of results[i] to 1s return results.astype('float32') # Our vectorized training data x_train = vectorize_sequences(train_data) # Our vectorized test data x_test = vectorize_sequences(test_data) # Our vectorized labels y_train = np.asarray(train_labels).astype('float32') y_test = np.asarray(test_labels).astype('float32') x_val = x_train[:10000] partial_x_train = x_train[10000:] y_val = y_train[:10000] partial_y_train = y_train[10000:] model = keras.models.Sequential() model.add(keras.layers.Dense(16, activation='relu', input_shape=(10000,), name='reviews')) model.add(keras.layers.Dense(16, activation='relu')) model.add(keras.layers.Dense(1, activation='sigmoid')) model.compile(optimizer='rmsprop', loss='binary_crossentropy', metrics=['accuracy']) estimator_model = keras.estimator.model_to_estimator(keras_model=model) def input_function(features,labels=None,shuffle=False,epochs=None,batch_size=None): input_fn = tf.estimator.inputs.numpy_input_fn( x={\"reviews_input\": features}, y=labels, shuffle=shuffle, num_epochs=epochs, batch_size=batch_size ) return input_fn estimator_model.train(input_fn=input_function(partial_x_train, partial_y_train, True,20,512)) score = estimator_model.evaluate(input_function(x_val, labels=y_val)) print(score) ```", "response":"You should reshape your labels as 2d-tensor (the first dimension will be the batch dimension and the second the scalar label): ``` # Our vectorized labels y_train = np.asarray(train_labels).astype('float32').reshape((-1,1)) y_test = np.asarray(test_labels).astype('float32').reshape((-1,1)) ```", "best_answers_score":0.8, "library_name":"tensorflow", "question_url":"https:\/\/stackoverflow.com\/questions\/48851558\/tensorflow-estimator-valueerror-logits-and-labels-must-have-the-same-shape", "best_answers_votes":42, "question_length":2238, "response_length":294 }, { "question":"How to Properly Combine TensorFlow's Dataset API and Keras? 
Keras' fit_generator() model method expects a generator which produces tuples of the shape (input, targets), where both elements are NumPy arrays. The documentation seems to imply that if I simply wrap a Dataset iterator in a generator, and make sure to convert the Tensors to NumPy arrays, I should be good to go. This code, however, gives me an error: ``` import numpy as np import os import keras.backend as K from keras.layers import Dense, Input from keras.models import Model import tensorflow as tf from tensorflow.contrib.data import Dataset os.environ['TF_CPP_MIN_LOG_LEVEL'] = '3' with tf.Session() as sess: def create_data_generator(): dat1 = np.arange(4).reshape(-1, 1) ds1 = Dataset.from_tensor_slices(dat1).repeat() dat2 = np.arange(5, 9).reshape(-1, 1) ds2 = Dataset.from_tensor_slices(dat2).repeat() ds = Dataset.zip((ds1, ds2)).batch(4) iterator = ds.make_one_shot_iterator() while True: next_val = iterator.get_next() yield sess.run(next_val) datagen = create_data_generator() input_vals = Input(shape=(1,)) output = Dense(1, activation='relu')(input_vals) model = Model(inputs=input_vals, outputs=output) model.compile('rmsprop', 'mean_squared_error') model.fit_generator(datagen, steps_per_epoch=1, epochs=5, verbose=2, max_queue_size=2) ``` Here's the error I get: ``` Using TensorFlow backend. 
Epoch 1\/5 Exception in thread Thread-1: Traceback (most recent call last): File \"\/home\/jsaporta\/anaconda3\/lib\/python3.6\/site-packages\/tensorflow\/python\/client\/session.py\", line 270, in __init__ fetch, allow_tensor=True, allow_operation=True)) File \"\/home\/jsaporta\/anaconda3\/lib\/python3.6\/site-packages\/tensorflow\/python\/framework\/ops.py\", line 2708, in as_graph_element return self._as_graph_element_locked(obj, allow_tensor, allow_operation) File \"\/home\/jsaporta\/anaconda3\/lib\/python3.6\/site-packages\/tensorflow\/python\/framework\/ops.py\", line 2787, in _as_graph_element_locked raise ValueError(\"Tensor %s is not an element of this graph.\" % obj) ValueError: Tensor Tensor(\"IteratorGetNext:0\", shape=(?, 1), dtype=int64) is not an element of this graph. During handling of the above exception, another exception occurred: Traceback (most recent call last): File \"\/home\/jsaporta\/anaconda3\/lib\/python3.6\/threading.py\", line 916, in _bootstrap_inner self.run() File \"\/home\/jsaporta\/anaconda3\/lib\/python3.6\/threading.py\", line 864, in run self._target(*self._args, **self._kwargs) File \"\/home\/jsaporta\/anaconda3\/lib\/python3.6\/site-packages\/keras\/utils\/data_utils.py\", line 568, in data_generator_task generator_output = next(self._generator) File \".\/datagen_test.py\", line 25, in create_data_generator yield sess.run(next_val) File \"\/home\/jsaporta\/anaconda3\/lib\/python3.6\/site-packages\/tensorflow\/python\/client\/session.py\", line 895, in run run_metadata_ptr) File \"\/home\/jsaporta\/anaconda3\/lib\/python3.6\/site-packages\/tensorflow\/python\/client\/session.py\", line 1109, in _run self._graph, fetches, feed_dict_tensor, feed_handles=feed_handles) File \"\/home\/jsaporta\/anaconda3\/lib\/python3.6\/site-packages\/tensorflow\/python\/client\/session.py\", line 413, in __init__ self._fetch_mapper = _FetchMapper.for_fetch(fetches) File 
\"\/home\/jsaporta\/anaconda3\/lib\/python3.6\/site-packages\/tensorflow\/python\/client\/session.py\", line 233, in for_fetch return _ListFetchMapper(fetch) File \"\/home\/jsaporta\/anaconda3\/lib\/python3.6\/site-packages\/tensorflow\/python\/client\/session.py\", line 340, in __init__ self._mappers = [_FetchMapper.for_fetch(fetch) for fetch in fetches] File \"\/home\/jsaporta\/anaconda3\/lib\/python3.6\/site-packages\/tensorflow\/python\/client\/session.py\", line 340, in self._mappers = [_FetchMapper.for_fetch(fetch) for fetch in fetches] File \"\/home\/jsaporta\/anaconda3\/lib\/python3.6\/site-packages\/tensorflow\/python\/client\/session.py\", line 241, in for_fetch return _ElementFetchMapper(fetches, contraction_fn) File \"\/home\/jsaporta\/anaconda3\/lib\/python3.6\/site-packages\/tensorflow\/python\/client\/session.py\", line 277, in __init__ 'Tensor. (%s)' % (fetch, str(e))) ValueError: Fetch argument cannot be interpreted as a Tensor. (Tensor Tensor(\"IteratorGetNext:0\", shape=(?, 1), dtype=int64) is not an element of this graph.) Traceback (most recent call last): File \".\/datagen_test.py\", line 34, in verbose=2, max_queue_size=2) File \"\/home\/jsaporta\/anaconda3\/lib\/python3.6\/site-packages\/keras\/legacy\/interfaces.py\", line 87, in wrapper return func(*args, **kwargs) File \"\/home\/jsaporta\/anaconda3\/lib\/python3.6\/site-packages\/keras\/engine\/training.py\", line 2011, in fit_generator generator_output = next(output_generator) StopIteration ``` Strangely enough, adding a line containing next(datagen) directly after where I initialize datagen causes the code to run just fine, with no errors. Why does my original code not work? Why does it begin to work when I add that line to my code? 
Is there a more efficient way to use TensorFlow's Dataset API with Keras that doesn't involve converting Tensors to NumPy arrays and back again?", "response":"Update June 09, 2018 Starting from Tensorflow 1.9, one can pass tf.data.Dataset object directly into keras.Model.fit() and it would act similar to fit_generator. A complete example can be found on this gist. ```py # Load mnist training data (x_train, y_train), _ = tf.keras.datasets.mnist.load_data() training_set = tfdata_generator(x_train, y_train,is_training=True) model = # your keras model here model.fit( training_set.make_one_shot_iterator(), steps_per_epoch=len(x_train) \/\/ 128, epochs=5, verbose = 1) ``` tfdata_generator is a function that returns an iterable tf.data.Dataset. ```py def tfdata_generator(images, labels, is_training, batch_size=128): '''Construct a data generator using `tf.Dataset`. ''' def map_fn(image, label): '''Preprocess raw data to trainable input. ''' x = tf.reshape(tf.cast(image, tf.float32), (28, 28, 1)) y = tf.one_hot(tf.cast(label, tf.uint8), _NUM_CLASSES) return x, y dataset = tf.data.Dataset.from_tensor_slices((images, labels)) if is_training: dataset = dataset.shuffle(1000) # depends on sample size dataset = dataset.map(map_fn) dataset = dataset.batch(batch_size) dataset = dataset.repeat() dataset = dataset.prefetch(tf.contrib.data.AUTOTUNE) return dataset ``` Old Solution: In addition to @Yu-Yang's answer, you can also modify tf.data.Dataset to become a generator for fit_generator as following ```py from tensorflow.contrib.learn.python.learn.datasets import mnist data = mnist.load_mnist() model = # your Keras model model.fit_generator(generator = tfdata_generator(data.train.images, data.train.labels), steps_per_epoch=200, workers = 0 , # This is important verbose = 1) def tfdata_generator(images, labels, batch_size=128, shuffle=True,): def map_func(image, label): '''A transformation function''' x_train = tf.reshape(tf.cast(image, tf.float32), image_shape) y_train = 
tf.one_hot(tf.cast(label, tf.uint8), num_classes) return [x_train, y_train] dataset = tf.data.Dataset.from_tensor_slices((images, labels)) dataset = dataset.map(map_func) dataset = dataset.shuffle().batch(batch_size).repeat() iterator = dataset.make_one_shot_iterator() next_batch = iterator.get_next() while True: yield K.get_session().run(next_batch) ```", "best_answers_score":0.8, "library_name":"tensorflow", "question_url":"https:\/\/stackoverflow.com\/questions\/46135499\/how-to-properly-combine-tensorflows-dataset-api-and-keras", "best_answers_votes":62, "question_length":4992, "response_length":2185 }, { "question":"How can I solve 'ran out of gpu memory' in TensorFlow I ran the MNIST demo in TensorFlow with 2 conv layers and a full-conect layer, I got an message that 'ran out of memeory trying to allocate 2.59GiB' , but it shows that total memory is 4.69GiB, and free memory is 3.22GiB, how can it stop with 2.59GiB? And with larger network, how can I manage gpu memory? I concern only how to make best use of the gpu memory and wanna know how it happened, not how to pre-allocating memory", "response":"I was encountering out of memory errors when training a small CNN on a GTX 970. Through somewhat of a fluke, I discovered that telling TensorFlow to allocate memory on the GPU as needed (instead of up front) resolved all my issues. This can be accomplished using the following Python code: ``` config = tf.ConfigProto() config.gpu_options.allow_growth = True sess = tf.Session(config=config) ``` Previously, TensorFlow would pre-allocate ~90% of GPU memory. For some unknown reason, this would later result in out-of-memory errors even though the model could fit entirely in GPU memory. By using the above code, I no longer have OOM errors. 
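For completeness: if you would rather cap TensorFlow's share of the GPU outright instead of growing it on demand, the TF 1.x config also exposes per_process_gpu_memory_fraction. A minimal sketch (the 0.5 cap is an arbitrary illustrative choice, and the snippet is guarded so it degrades to a no-op where the TF 1.x-style API is unavailable):

```python
# Sketch: cap the fraction of GPU memory TensorFlow 1.x may claim,
# instead of letting it pre-allocate most of the card up front.
try:
    import tensorflow as tf
    config = tf.ConfigProto()
    config.gpu_options.per_process_gpu_memory_fraction = 0.5  # arbitrary cap
    # sess = tf.Session(config=config)  # pass the config when creating the session
    capped = True
except (ImportError, AttributeError):
    capped = False  # TF 1.x-style API not available in this environment
```

Either way (allow_growth or a fixed fraction), the option has to be set on the config that is passed in when the session is created.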
Note: If the model is too big to fit in GPU memory, this probably won't help!", "best_answers_score":0.8, "library_name":"tensorflow", "question_url":"https:\/\/stackoverflow.com\/questions\/36927607\/how-can-i-solve-ran-out-of-gpu-memory-in-tensorflow", "best_answers_votes":43, "question_length":478, "response_length":718 }, { "question":"CBOW v.s. skip-gram: why invert context and target words? In this page, it is said that: [...] skip-gram inverts contexts and targets, and tries to predict each context word from its target word [...] However, looking at the training dataset it produces, the content of the X and Y pair seems to be interchangeable, as those two pairs of (X, Y): (quick, brown), (brown, quick) So, why distinguish that much between context and targets if it is the same thing in the end? Also, doing Udacity's Deep Learning course exercise on word2vec, I wonder why they seem to distinguish so much between those two approaches in this problem: An alternative to skip-gram is another Word2Vec model called CBOW (Continuous Bag of Words). In the CBOW model, instead of predicting a context word from a word vector, you predict a word from the sum of all the word vectors in its context. Implement and evaluate a CBOW model trained on the text8 dataset. Would this not yield the same results?", "response":"Here is my oversimplified and rather naive understanding of the difference: As we know, CBOW is learning to predict the word by the context. Or, equivalently, to maximize the probability of the target word by looking at the context. And this happens to be a problem for rare words. For example, given the context yesterday was a really [...] day, the CBOW model will tell you that most probably the word is beautiful or nice. Words like delightful will get much less attention from the model, because it is designed to predict the most probable word. This word will be smoothed over a lot of examples with more frequent words.
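Before contrasting this with skip-gram, it helps to see concretely which (input, target) pairs each model trains on. A plain-Python sketch (illustration only; real word2vec operates on word indices and vectors, and the window handling here is simplified):

```python
def training_pairs(tokens, window=1, mode='skipgram'):
    '''Generate toy (input, target) training pairs for one sentence.

    skip-gram: one (center word, context word) pair per context word
    CBOW: one (context word list, center word) example per position
    '''
    pairs = []
    for i, center in enumerate(tokens):
        lo, hi = max(0, i - window), min(len(tokens), i + window + 1)
        context = [tokens[j] for j in range(lo, hi) if j != i]
        if mode == 'skipgram':
            # each context word becomes its own training example
            pairs.extend((center, c) for c in context)
        else:  # CBOW: the whole context predicts the center word
            pairs.append((context, center))
    return pairs

sg = training_pairs(['the', 'quick', 'brown', 'fox'], mode='skipgram')
cbow = training_pairs(['the', 'quick', 'brown', 'fox'], mode='cbow')
```

The raw pairs are indeed symmetric, exactly as the question observes; the asymmetry only appears in how they are consumed, with CBOW averaging the context and skip-gram treating each pair as an independent observation.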
On the other hand, the skip-gram model is designed to predict the context. Given the word delightful, it must understand it and tell us that there is a huge probability that the context is yesterday was really [...] day, or some other relevant context. With skip-gram, the word delightful will not try to compete with the word beautiful; instead, delightful+context pairs will be treated as new observations. UPDATE: Thanks to @0xF for sharing this article. According to Mikolov: Skip-gram: works well with a small amount of training data, represents well even rare words or phrases. CBOW: several times faster to train than the skip-gram, slightly better accuracy for the frequent words. One more addition to the subject is found here: In the \"skip-gram\" mode alternative to \"CBOW\", rather than averaging the context words, each is used as a pairwise training example. That is, in place of one CBOW example such as [predict 'ate' from average('The', 'cat', 'the', 'mouse')], the network is presented with four skip-gram examples [predict 'ate' from 'The'], [predict 'ate' from 'cat'], [predict 'ate' from 'the'], [predict 'ate' from 'mouse']. (The same random window-reduction occurs, so half the time that would just be two examples, of the nearest words.)", "best_answers_score":0.8, "library_name":"tensorflow", "question_url":"https:\/\/stackoverflow.com\/questions\/38287772\/cbow-v-s-skip-gram-why-invert-context-and-target-words", "best_answers_votes":108, "question_length":984, "response_length":1859 }, { "question":"NotImplementedError: Layers with arguments in `__init__` must override `get_config` I'm trying to save my TensorFlow model using model.save(), however I am getting this error.
The model summary is provided here: Model Summary The code for the transformer model: ```py def transformer(vocab_size, num_layers, units, d_model, num_heads, dropout, name=\"transformer\"): inputs = tf.keras.Input(shape=(None,), name=\"inputs\") dec_inputs = tf.keras.Input(shape=(None,), name=\"dec_inputs\") enc_padding_mask = tf.keras.layers.Lambda( create_padding_mask, output_shape=(1, 1, None), name='enc_padding_mask')(inputs) # mask the future tokens for decoder inputs at the 1st attention block look_ahead_mask = tf.keras.layers.Lambda( create_look_ahead_mask, output_shape=(1, None, None), name='look_ahead_mask')(dec_inputs) # mask the encoder outputs for the 2nd attention block dec_padding_mask = tf.keras.layers.Lambda( create_padding_mask, output_shape=(1, 1, None), name='dec_padding_mask')(inputs) enc_outputs = encoder( vocab_size=vocab_size, num_layers=num_layers, units=units, d_model=d_model, num_heads=num_heads, dropout=dropout, )(inputs=[inputs, enc_padding_mask]) dec_outputs = decoder( vocab_size=vocab_size, num_layers=num_layers, units=units, d_model=d_model, num_heads=num_heads, dropout=dropout, )(inputs=[dec_inputs, enc_outputs, look_ahead_mask, dec_padding_mask]) outputs = tf.keras.layers.Dense(units=vocab_size, name=\"outputs\")(dec_outputs) return tf.keras.Model(inputs=[inputs, dec_inputs], outputs=outputs, name=name) ``` I don't understand why it's giving this error since the model trains perfectly fine. Any help would be appreciated. My saving code for reference: ```py print(\"Saving the model.\") saveloc = \"C:\/tmp\/solar.h5\" model.save(saveloc) print(\"Model saved to: \" + saveloc + \" succesfully.\") ```", "response":"It's not a bug, it's a feature. This error lets you know that TF can't save your model, because it won't be able to load it. Specifically, it won't be able to reinstantiate your custom Layer classes: encoder and decoder. To solve this, just override their get_config method according to the new arguments you've added. 
A layer config is a Python dictionary (serializable) containing the configuration of a layer. The same layer can be reinstantiated later (without its trained weights) from this configuration. For example, if your encoder class looks something like this: ``` class encoder(tf.keras.layers.Layer): def __init__( self, vocab_size, num_layers, units, d_model, num_heads, dropout, **kwargs, ): super().__init__(**kwargs) self.vocab_size = vocab_size self.num_layers = num_layers self.units = units self.d_model = d_model self.num_heads = num_heads self.dropout = dropout # Other methods etc. ``` then you only need to override this method: ``` def get_config(self): config = super().get_config().copy() config.update({ 'vocab_size': self.vocab_size, 'num_layers': self.num_layers, 'units': self.units, 'd_model': self.d_model, 'num_heads': self.num_heads, 'dropout': self.dropout, }) return config ``` When TF sees this (for both classes), you will be able to save the model. Because now when the model is loaded, TF will be able to reinstantiate the same layer from config. Layer.from_config's source code may give a better sense of how it works: ``` @classmethod def from_config(cls, config): return cls(**config) ```", "best_answers_score":0.8, "library_name":"tensorflow", "question_url":"https:\/\/stackoverflow.com\/questions\/58678836\/notimplementederror-layers-with-arguments-in-init-must-override-get-conf", "best_answers_votes":88, "question_length":1817, "response_length":1533 }, { "question":"Tensorflow read images with labels I am building a standard image classification model with Tensorflow. For this I have input images, each assigned with a label (number in {0,1}). The Data can hence be stored in a list using the following format: ``` \/path\/to\/image_0 label_0 \/path\/to\/image_1 label_1 \/path\/to\/image_2 label_2 ... ``` I want to use TensorFlow's queuing system to read my data and feed it to my model. 
Ignoring the labels, one can easily achieve this by using string_input_producer and WholeFileReader. Here is the code: ``` def read_my_file_format(filename_queue): reader = tf.WholeFileReader() key, value = reader.read(filename_queue) example = tf.image.decode_png(value) return example # removing label, obtaining list containing \/path\/to\/image_x image_list = [line[:-2] for line in image_label_list] input_queue = tf.train.string_input_producer(image_list) input_images = read_my_file_format(input_queue) ``` However, the labels are lost in that process, as the image data is purposely shuffled as part of the input pipeline. What is the easiest way of pushing the labels together with the image data through the input queues?", "response":"Using slice_input_producer provides a solution which is much cleaner. Slice Input Producer allows us to create an Input Queue containing arbitrarily many separable values. This snippet of the question would look like this: ``` def read_labeled_image_list(image_list_file): \"\"\"Reads a .txt file containing paths and labels Args: image_list_file: a .txt file with one \/path\/to\/image per line label: optionally, if set label will be pasted after each line Returns: List with all filenames in file image_list_file \"\"\" f = open(image_list_file, 'r') filenames = [] labels = [] for line in f: filename, label = line[:-1].split(' ') filenames.append(filename) labels.append(int(label)) return filenames, labels def read_images_from_disk(input_queue): \"\"\"Consumes a single filename and label as a ' '-delimited string. Args: filename_and_label_tensor: A scalar string tensor. Returns: Two tensors: the decoded image, and the string label.
\"\"\" label = input_queue[1] file_contents = tf.read_file(input_queue[0]) example = tf.image.decode_png(file_contents, channels=3) return example, label # Reads pfathes of images together with their labels image_list, label_list = read_labeled_image_list(filename) images = ops.convert_to_tensor(image_list, dtype=dtypes.string) labels = ops.convert_to_tensor(label_list, dtype=dtypes.int32) # Makes an input queue input_queue = tf.train.slice_input_producer([images, labels], num_epochs=num_epochs, shuffle=True) image, label = read_images_from_disk(input_queue) # Optional Preprocessing or Data Augmentation # tf.image implements most of the standard image augmentation image = preprocess_image(image) label = preprocess_label(label) # Optional Image and Label Batching image_batch, label_batch = tf.train.batch([image, label], batch_size=batch_size) ``` See also the generic_input_producer from the TensorVision examples for full input-pipeline.", "best_answers_score":0.8, "library_name":"tensorflow", "question_url":"https:\/\/stackoverflow.com\/questions\/34340489\/tensorflow-read-images-with-labels", "best_answers_votes":51, "question_length":1140, "response_length":1879 }, { "question":"Difference between Keras model.save() and model.save_weights()? To save a model in Keras, what are the differences between the output files of: model.save() model.save_weights() ModelCheckpoint() in the callback The saved file from model.save() is larger than the model from model.save_weights(), but significantly larger than a JSON or Yaml model architecture file. Why is this? Restating this: Why is size(model.save()) + size(something) = size(model.save_weights()) + size(model.to_json()), what is that \"something\"? Would it be more efficient to just model.save_weights() and model.to_json(), and load from these than to just do model.save() and load_model()? What are the differences?", "response":"save() saves the weights and the model structure to a single HDF5 file. 
I believe it also includes things like the optimizer state. Then you can use that HDF5 file with load() to reconstruct the whole model, including weights. save_weights() only saves the weights to HDF5 and nothing else. You need extra code to reconstruct the model from a JSON file.", "best_answers_score":0.8, "library_name":"tensorflow", "question_url":"https:\/\/stackoverflow.com\/questions\/42621864\/difference-between-keras-model-save-and-model-save-weights", "best_answers_votes":48, "question_length":689, "response_length":353 }, { "question":"ImportError: Failed to import any qt binding, Python - Tensorflow I'm starting my adventure with Tensorflow. I think I installed everything correctly, but when running this code, PyCharm returns an error: ``` Traceback (most recent call last): File \"C:\/Users\/tymot\/Desktop\/myenv3\/env\/Tensorflow\/all_good.py\", line 15, in import matplotlib.pyplot as plt File \"C:\\Users\\tymot\\Anaconda1\\lib\\site-packages\\matplotlib\\pyplot.py\", line 115, in _backend_mod, new_figure_manager, draw_if_interactive, _show = pylab_setup() File \"C:\\Users\\tymot\\Anaconda1\\lib\\site-packages\\matplotlib\\backends\\__init__.py\", line 62, in pylab_setup [backend_name], 0) File \"C:\\Users\\tymot\\Anaconda1\\lib\\site-packages\\matplotlib\\backends\\backend_qt5agg.py\", line 15, in from .backend_qt5 import ( File \"C:\\Users\\tymot\\Anaconda1\\lib\\site-packages\\matplotlib\\backends\\backend_qt5.py\", line 19, in import matplotlib.backends.qt_editor.figureoptions as figureoptions File \"C:\\Users\\tymot\\Anaconda1\\lib\\site-packages\\matplotlib\\backends\\qt_editor\\figureoptions.py\", line 20, in import matplotlib.backends.qt_editor.formlayout as formlayout File \"C:\\Users\\tymot\\Anaconda1\\lib\\site-packages\\matplotlib\\backends\\qt_editor\\formlayout.py\", line 54, in from matplotlib.backends.qt_compat import QtGui, QtWidgets, QtCore File \"C:\\Users\\tymot\\Anaconda1\\lib\\site-packages\\matplotlib\\backends\\qt_compat.py\", 
line 158, in raise ImportError(\"Failed to import any qt binding\") ImportError: Failed to import any qt binding ``` My code which I am trying to run: ``` import numpy as np import tensorflow as tf import matplotlib.pyplot as plt num_features = 2 num_iter = 10000 display_step = int(num_iter \/ 10) learning_rate = 0.01 num_input = 2 # units in the input layer 28x28 images num_hidden1 = 2 # units in the first hidden layer num_output = 1 # units in the output, only one output 0 or 1 #%% mlp function def multi_layer_perceptron_xor(x, weights, biases): hidden_layer1 = tf.add(tf.matmul(x, weights['w_h1']), biases['b_h1']) hidden_layer1 = tf.nn.sigmoid(hidden_layer1) out_layer = tf.add(tf.matmul(hidden_layer1, weights['w_out']), biases['b_out']) return out_layer #%% x = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], np.float32) # 4x2, input y = np.array([0, 1, 1, 0], np.float32) # 4, correct output, AND operation y = np.reshape(y, [4,1]) # convert to 4x1 # trainum_inputg data and labels X = tf.placeholder('float', [None, num_input]) # training data Y = tf.placeholder('float', [None, num_output]) # labels # weights and biases weights = { 'w_h1' : tf.Variable(tf.random_normal([num_input, num_hidden1])), # w1, from input layer to hidden layer 1 'w_out': tf.Variable(tf.random_normal([num_hidden1, num_output])) # w2, from hidden layer 1 to output layer } biases = { 'b_h1' : tf.Variable(tf.zeros([num_hidden1])), 'b_out': tf.Variable(tf.zeros([num_output])) } model = multi_layer_perceptron_xor(X, weights, biases) ''' - cost function and optimization - sigmoid cross entropy -- single output - softmax cross entropy -- multiple output, normalized ''' loss_func = tf.reduce_sum(tf.nn.sigmoid_cross_entropy_with_logits(logits=model, labels=Y)) optimizer = tf.train.GradientDescentOptimizer(learning_rate=learning_rate).minimize(loss_func) sess = tf.Session() init = tf.global_variables_initializer() sess.run(init) for k in range(num_iter): tmp_cost, _ = sess.run([loss_func, optimizer], 
feed_dict={X: x, Y: y}) if k % display_step == 0: #print('output: ', sess.run(model, feed_dict={X:x})) print('loss= ' + \"{:.5f}\".format(tmp_cost)) # separates the input space W = np.squeeze(sess.run(weights['w_h1'])) # 2x2 b = np.squeeze(sess.run(biases['b_h1'])) # 2, sess.close() #%% # Now plot the fitted line. We need only two points to plot the line plot_x = np.array([np.min(x[:, 0] - 0.2), np.max(x[:, 1]+0.2)]) plot_y = -1 \/ W[1, 0] * (W[0, 0] * plot_x + b[0]) plot_y = np.reshape(plot_y, [2, -1]) plot_y = np.squeeze(plot_y) plot_y2 = -1 \/ W[1, 1] * (W[0, 1] * plot_x + b[1]) plot_y2 = np.reshape(plot_y2, [2, -1]) plot_y2 = np.squeeze(plot_y2) plt.scatter(x[:, 0], x[:, 1], c=y, s=100, cmap='viridis') plt.plot(plot_x, plot_y, color='k', linewidth=2) # line 1 plt.plot(plot_x, plot_y2, color='k', linewidth=2) # line 2 plt.xlim([-0.2, 1.2]); plt.ylim([-0.2, 1.25]); #plt.text(0.425, 1.05, 'XOR', fontsize=14) plt.xticks([0.0, 0.5, 1.0]); plt.yticks([0.0, 0.5, 1.0]) plt.show() #%% ``` I think it follows another version of python. How can I run the code without error. I installed qt-binding and added tensorflow to my PyCharm. Any help will be appreciated.", "response":"make sure you have PyQt5 installed. you may open a python shell and try: ``` import PyQt5 ``` if it fails then you can install it via: ``` pip install PyQt5 ``` If you are on macOS or Linux be careful that you might need to run ``` pip3 install PyQt5 ```", "best_answers_score":0.8, "library_name":"tensorflow", "question_url":"https:\/\/stackoverflow.com\/questions\/52346254\/importerror-failed-to-import-any-qt-binding-python-tensorflow", "best_answers_votes":107, "question_length":4529, "response_length":254 }, { "question":"Tensorflow crashes with CUBLAS_STATUS_ALLOC_FAILED I'm running tensorflow-gpu on Windows 10 using a simple MINST neural network program. When it tries to run, it encounters a CUBLAS_STATUS_ALLOC_FAILED error. A google search doesn't turn up anything. 
``` I c:\\tf_jenkins\\home\\workspace\\release-win\\device\\gpu\\os\\windows\\tensorflow\\core\\common_runtime\\gpu\\gpu_device.cc:885] Found device 0 with properties: name: GeForce GTX 970 major: 5 minor: 2 memoryClockRate (GHz) 1.253 pciBusID 0000:0f:00.0 Total memory: 4.00GiB Free memory: 3.31GiB I c:\\tf_jenkins\\home\\workspace\\release-win\\device\\gpu\\os\\windows\\tensorflow\\core\\common_runtime\\gpu\\gpu_device.cc:906] DMA: 0 I c:\\tf_jenkins\\home\\workspace\\release-win\\device\\gpu\\os\\windows\\tensorflow\\core\\common_runtime\\gpu\\gpu_device.cc:916] 0: Y I c:\\tf_jenkins\\home\\workspace\\release-win\\device\\gpu\\os\\windows\\tensorflow\\core\\common_runtime\\gpu\\gpu_device.cc:975] Creating TensorFlow device (\/gpu:0) -> (device: 0, name: GeForce GTX 970, pci bus id: 0000:0f:00.0) E c:\\tf_jenkins\\home\\workspace\\release-win\\device\\gpu\\os\\windows\\tensorflow\\stream_executor\\cuda\\cuda_blas.cc:372] failed to create cublas handle: CUBLAS_STATUS_ALLOC_FAILED W c:\\tf_jenkins\\home\\workspace\\release-win\\device\\gpu\\os\\windows\\tensorflow\\stream_executor\\stream.cc:1390] attempting to perform BLAS operation using StreamExecutor without BLAS support Traceback (most recent call last): File \"C:\\Users\\Anonymous\\AppData\\Local\\Programs\\Python\\Python35\\lib\\site-packages\\tensorflow\\python\\client\\session.py\", line 1021, in _do_call return fn(*args) File \"C:\\Users\\Anonymous\\AppData\\Local\\Programs\\Python\\Python35\\lib\\site-packages\\tensorflow\\python\\client\\session.py\", line 1003, in _run_fn status, run_metadata) File \"C:\\Users\\Anonymous\\AppData\\Local\\Programs\\Python\\Python35\\lib\\contextlib.py\", line 66, in __exit__ next(self.gen) File \"C:\\Users\\Anonymous\\AppData\\Local\\Programs\\Python\\Python35\\lib\\site-packages\\tensorflow\\python\\framework\\errors_impl.py\", line 469, in raise_exception_on_not_ok_status pywrap_tensorflow.TF_GetCode(status)) tensorflow.python.framework.errors_impl.InternalError: Blas 
SGEMM launch failed : a.shape=(100, 784), b.shape=(784, 256), m=100, n=256, k=784 [[Node: MatMul = MatMul[T=DT_FLOAT, transpose_a=false, transpose_b=false, _device=\"\/job:localhost\/replica:0\/task:0\/gpu:0\"](_recv_Placeholder_0\/_7, Variable\/read)]] [[Node: Mean\/_15 = _Recv[client_terminated=false, recv_device=\"\/job:localhost\/replica:0\/task:0\/cpu:0\", send_device=\"\/job:localhost\/replica:0\/task:0\/gpu:0\", send_device_incarnation=1, tensor_name=\"edge_35_Mean\", tensor_type=DT_FLOAT, _device=\"\/job:localhost\/replica:0\/task:0\/cpu:0\"]()]] ```", "response":"For TensorFlow 2.2 none of the other answers worked when the CUBLAS_STATUS_ALLOC_FAILED problem was encountered. Found a solution on https:\/\/www.tensorflow.org\/guide\/gpu: ``` import tensorflow as tf gpus = tf.config.experimental.list_physical_devices('GPU') if gpus: try: # Currently, memory growth needs to be the same across GPUs for gpu in gpus: tf.config.experimental.set_memory_growth(gpu, True) logical_gpus = tf.config.experimental.list_logical_devices('GPU') print(len(gpus), \"Physical GPUs,\", len(logical_gpus), \"Logical GPUs\") except RuntimeError as e: # Memory growth must be set before GPUs have been initialized print(e) ``` I ran this code before any further calculations are made and found that the same code that produced CUBLAS error before now worked in same session. The sample code above is a specific example that sets the memory growth across a number of physical GPUs but it also solves the memory expansion problem.", "best_answers_score":0.8, "library_name":"tensorflow", "question_url":"https:\/\/stackoverflow.com\/questions\/41117740\/tensorflow-crashes-with-cublas-status-alloc-failed", "best_answers_votes":49, "question_length":2654, "response_length":939 }, { "question":"Tensorflow: How to replace or modify gradient? I would like to replace or modify the gradient of an op or portion of the graph in tensorflow. 
It would be ideal if I can use the existing gradient in the calculation. In some ways this is the opposite to what tf.stop_gradient() does: instead of adding a calculation which is ignored when calculating gradients, I want a calculation which is only used when calculating gradients. A simple example would be something which simply scales gradients by multiplying them with a constant (but does not multiply the forward calculation by a constant). Another example would be something which clips the gradients to a given range.", "response":"For TensorFlow 1.7 and TensorFlow 2.0, look at the edit below. First define your custom gradient: ``` @tf.RegisterGradient(\"CustomGrad\") def _const_mul_grad(unused_op, grad): return 5.0 * grad ``` Since you want nothing to happen in the forward pass, override the gradient of an identity operation with your new gradient: ``` g = tf.get_default_graph() with g.gradient_override_map({\"Identity\": \"CustomGrad\"}): output = tf.identity(input, name=\"Identity\") ``` Here is a working example with a layer that clips gradients in the backward pass and does nothing in the forward pass, using the same method: ``` import tensorflow as tf @tf.RegisterGradient(\"CustomClipGrad\") def _clip_grad(unused_op, grad): return tf.clip_by_value(grad, -0.1, 0.1) input = tf.Variable([3.0], dtype=tf.float32) g = tf.get_default_graph() with g.gradient_override_map({\"Identity\": \"CustomClipGrad\"}): output_clip = tf.identity(input, name=\"Identity\") grad_clip = tf.gradients(output_clip, input) # output without gradient clipping in the backwards pass for comparison: output = tf.identity(input) grad = tf.gradients(output, input) with tf.Session() as sess: sess.run(tf.global_variables_initializer()) print(\"with clipping:\", sess.run(grad_clip)[0]) print(\"without clipping:\", sess.run(grad)[0]) ``` Edit for TensorFlow 1.7 and TensorFlow 2.0 Since 1.7 there is a new way to redefine the gradient with shorter syntax, which also works with TensorFlow 2.0.
It also allows you to redefine the gradient of multiple operations at the same time. Here are the examples from above, rewritten for TensorFlow 1.7 and TensorFlow 2.0: Layer that scales gradients in the backward pass: ``` @tf.custom_gradient def scale_grad_layer(x): def grad(dy): return 5.0 * dy return tf.identity(x), grad ``` Example with a layer that clips gradients in the backward pass: ``` @tf.custom_gradient def clip_grad_layer(x): def grad(dy): return tf.clip_by_value(dy, -0.1, 0.1) return tf.identity(x), grad ```", "best_answers_score":0.8, "library_name":"tensorflow", "question_url":"https:\/\/stackoverflow.com\/questions\/43839431\/tensorflow-how-to-replace-or-modify-gradient", "best_answers_votes":60, "question_length":670, "response_length":1950 }, { "question":"TypeError: 'Tensor' object does not support item assignment in TensorFlow I try to run this code: ``` outputs, states = rnn.rnn(lstm_cell, x, initial_state=initial_state, sequence_length=real_length) tensor_shape = outputs.get_shape() for step_index in range(tensor_shape[0]): word_index = self.x[:, step_index] word_index = tf.reshape(word_index, [-1,1]) index_weight = tf.gather(word_weight, word_index) outputs[step_index, :, :]=tf.mul(outputs[step_index, :, :] , index_weight) ``` But I get an error on the last line: TypeError: 'Tensor' object does not support item assignment It seems I cannot assign to a tensor; how can I fix it?", "response":"In general, a TensorFlow tensor object is not assignable, so you cannot use it on the left-hand side of an assignment.
The easiest way to do what you're trying to do is to build a Python list of tensors, and tf.stack() them together at the end of the loop: ``` outputs, states = rnn.rnn(lstm_cell, x, initial_state=initial_state, sequence_length=real_length) output_list = [] tensor_shape = outputs.get_shape() for step_index in range(tensor_shape[0]): word_index = self.x[:, step_index] word_index = tf.reshape(word_index, [-1,1]) index_weight = tf.gather(word_weight, word_index) output_list.append(tf.mul(outputs[step_index, :, :] , index_weight)) outputs = tf.stack(output_list) ``` * With the exception of tf.Variable objects, using the Variable.assign() etc. methods. However, rnn.rnn() likely returns a tf.Tensor object that does not support this method.", "best_answers_score":0.8, "library_name":"tensorflow", "question_url":"https:\/\/stackoverflow.com\/questions\/37697747\/typeerror-tensor-object-does-not-support-item-assignment-in-tensorflow", "best_answers_votes":53, "question_length":629, "response_length":861 }, { "question":"What is difference between tf.truncated_normal and tf.random_normal? tf.random_normal(shape, mean=0.0, stddev=1.0, dtype=tf.float32, seed=None, name=None) outputs random values from a normal distribution. tf.truncated_normal(shape, mean=0.0, stddev=1.0, dtype=tf.float32, seed=None, name=None) outputs random values from a truncated normal distribution. I tried googling 'truncated normal distribution'. But didn't understand much.", "response":"The documentation says it all: For truncated normal distribution: The values are drawn from a normal distribution with specified mean and standard deviation, discarding and re-drawing any samples that are more than two standard deviations from the mean. 
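That discard-and-redraw rule is plain rejection sampling, which can be sketched without TensorFlow (an illustration of the idea, not TensorFlow's actual implementation):

```python
import random

def truncated_normal(n, mean=0.0, stddev=1.0):
    '''Draw n normal samples, redrawing any that land more than
    two standard deviations away from the mean.'''
    samples = []
    while len(samples) < n:
        x = random.gauss(mean, stddev)
        if abs(x - mean) <= 2.0 * stddev:
            samples.append(x)  # keep it
        # otherwise discard and draw again
    return samples

vals = truncated_normal(1000)
```

Every kept value is therefore guaranteed to lie within two standard deviations of the mean, whereas tf.random_normal can return arbitrarily extreme values.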
Most probably it is easy to understand the difference by plotting the graph for yourself (the %magic is because I use a Jupyter notebook): ``` import tensorflow as tf import matplotlib.pyplot as plt %matplotlib inline n = 500000 A = tf.truncated_normal((n,)) B = tf.random_normal((n,)) with tf.Session() as sess: a, b = sess.run([A, B]) ``` And now ``` plt.hist(a, 100, (-4.2, 4.2)); plt.hist(b, 100, (-4.2, 4.2)); ``` The point of using truncated normal is to overcome saturation of some functions like sigmoid (where if the value is too big\/small, the neuron stops learning).", "best_answers_score":0.8, "library_name":"tensorflow", "question_url":"https:\/\/stackoverflow.com\/questions\/41704484\/what-is-difference-between-tf-truncated-normal-and-tf-random-normal", "best_answers_votes":80, "question_length":431, "response_length":826 }, { "question":"What is the meaning of the \"None\" in model.summary of KERAS? What is the meaning of the (None, 100) in Output Shape? Is this (\"None\") the sample number or the hidden dimension?", "response":"None means this dimension is variable. The first dimension in a Keras model is always the batch size. You don't need fixed batch sizes, unless in very specific cases (for instance, when working with stateful=True LSTM layers). That's why this dimension is often ignored when you define your model. For instance, when you define input_shape=(100,200), actually you're ignoring the batch size and defining the shape of \"each sample\". Internally the shape will be (None, 100, 200), allowing a variable batch size, each sample in the batch having the shape (100,200). The batch size will then be defined automatically in the fit or predict methods. Other None dimensions: Not only the batch dimension can be None, but many others as well. For instance, in a 2D convolutional network, where the expected input is (batchSize, height, width, channels), you can have shapes like (None, None, None, 3), allowing variable image sizes.
In recurrent networks and in 1D convolutions, you can also make the length\/timesteps dimension variable, with shapes like (None, None, featuresOrChannels)", "best_answers_score":0.8, "library_name":"tensorflow", "question_url":"https:\/\/stackoverflow.com\/questions\/47240348\/what-is-the-meaning-of-the-none-in-model-summary-of-keras", "best_answers_votes":74, "question_length":175, "response_length":1079 }, { "question":"TensorFlow: training on my own image I am new to TensorFlow. I am looking for help with image recognition, where I can train my own image dataset. Is there any example for training on a new dataset?", "response":"If you are interested in how to input your own data in TensorFlow, you can look at this tutorial. I've also written a guide with best practices for CS230 at Stanford here. New answer (with tf.data) and with labels With the introduction of tf.data in r1.4, we can create a batch of images without placeholders and without queues. The steps are the following: Create a list containing the filenames of the images and a corresponding list of labels Create a tf.data.Dataset reading these filenames and labels Preprocess the data Create an iterator from the tf.data.Dataset which will yield the next batch The code is: ``` # step 1 filenames = tf.constant(['im_01.jpg', 'im_02.jpg', 'im_03.jpg', 'im_04.jpg']) labels = tf.constant([0, 1, 0, 1]) # step 2: create a dataset returning slices of `filenames` dataset = tf.data.Dataset.from_tensor_slices((filenames, labels)) # step 3: parse every image in the dataset using `map` def _parse_function(filename, label): image_string = tf.read_file(filename) image_decoded = tf.image.decode_jpeg(image_string, channels=3) image = tf.cast(image_decoded, tf.float32) return image, label dataset = dataset.map(_parse_function) dataset = dataset.batch(2) # step 4: create iterator and final input tensor iterator = dataset.make_one_shot_iterator() images, labels = iterator.get_next() ``` Now we can run directly
sess.run([images, labels]) without feeding any data through placeholders. Old answer (with TensorFlow queues) To sum it up, you have multiple steps: Create a list of filenames (ex: the paths to your images) Create a TensorFlow filename queue Read and decode each image, resize them to a fixed size (necessary for batching) Output a batch of these images The simplest code would be: ``` # step 1 filenames = ['im_01.jpg', 'im_02.jpg', 'im_03.jpg', 'im_04.jpg'] # step 2 filename_queue = tf.train.string_input_producer(filenames) # step 3: read, decode and resize images reader = tf.WholeFileReader() filename, content = reader.read(filename_queue) image = tf.image.decode_jpeg(content, channels=3) image = tf.cast(image, tf.float32) resized_image = tf.image.resize_images(image, [224, 224]) # step 4: Batching image_batch = tf.train.batch([resized_image], batch_size=8) ```", "best_answers_score":0.8, "library_name":"tensorflow", "question_url":"https:\/\/stackoverflow.com\/questions\/37340129\/tensorflow-training-on-my-own-image", "best_answers_votes":98, "question_length":202, "response_length":2218 }, { "question":"In tensorflow what is the difference between tf.add and operator (+)? In tensorflow tutorials, I see both forms, like tf.add(tf.matmul(X, W), b) and tf.matmul(X, W) + b; what is the difference between using the math functions tf.add(), tf.assign(), etc. and the operators + and =, etc., in precision or other aspects?", "response":"There's no difference in precision between a+b and tf.add(a, b). The former translates to a.__add__(b), which gets mapped to tf.add by means of the following line in math_ops.py _OverrideBinaryOperatorHelper(gen_math_ops.add, \"add\") The only difference is that the node name in the underlying Graph is add instead of Add.
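As a plain-Python illustration of that mapping (a toy class, not TensorFlow code): a + b is just sugar for a.__add__(b), so overriding __add__ is all a library needs to route + to its own op: ``` class Node: \"\"\"Toy stand-in for a tensor that records which op created it.\"\"\" def __init__(self, value, op_name=\"const\"): self.value = value self.op_name = op_name def __add__(self, other): # The + operator dispatches here, analogous to tf.Tensor.__add__ # being mapped onto the add op. return Node(self.value + other.value, op_name=\"add\") a = Node(22) b = Node(28) c = a + b # identical to a.__add__(b) print(c.value, c.op_name) # -> 50 add ``` The only visible trace of which spelling you used is metadata like the op name, exactly as with add versus Add above.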
You can generally compare things by looking at the underlying Graph representation like this ``` tf.reset_default_graph() dtype = tf.int32 a = tf.placeholder(dtype) b = tf.placeholder(dtype) c = a+b print(tf.get_default_graph().as_graph_def()) ``` You could also see this directly by inspecting the __add__ method. There's an extra level of indirection because it's a closure, but you can get the underlying function as follows ``` real_function = tf.Tensor.__add__.im_func.func_closure[0].cell_contents print(real_function.__module__ + \".\" + real_function.__name__) print(tf.add.__module__ + \".\" + tf.add.__name__) ``` And you'll see the output below, which means that they call the same underlying function ``` tensorflow.python.ops.gen_math_ops.add tensorflow.python.ops.gen_math_ops.add ``` You can see from tf.Tensor.OVERLOADABLE_OPERATORS that the following Python special methods are potentially overloaded by appropriate TensorFlow versions ``` {'__abs__', '__add__', '__and__', '__div__', '__floordiv__', '__ge__', '__getitem__', '__gt__', '__invert__', '__le__', '__lt__', '__mod__', '__mul__', '__neg__', '__or__', '__pow__', '__radd__', '__rand__', '__rdiv__', '__rfloordiv__', '__rmod__', '__rmul__', '__ror__', '__rpow__', '__rsub__', '__rtruediv__', '__rxor__', '__sub__', '__truediv__', '__xor__'} ``` Those methods are described in Python reference 3.3.7: emulating numeric types. Note that the Python data model does not provide a way to overload the assignment operator =, so assignment always uses the native Python implementation.", "best_answers_score":0.8, "library_name":"tensorflow", "question_url":"https:\/\/stackoverflow.com\/questions\/37900780\/in-tensorflow-what-is-the-difference-between-tf-add-and-operator", "best_answers_votes":65, "question_length":313, "response_length":1838 }, { "question":"Tensorboard not found as magic function in jupyter I want to run tensorboard in jupyter using the latest tensorflow 2.0.0a0, with tensorboard version 1.13.1 and Python 3.6. Using ...
%tensorboard --logdir {logs_base_dir} I get the error: UsageError: Line magic function %tensorboard not found Do you have an idea what the problem could be? It seems that all versions are up to date and the command seems correct too. Thanks", "response":"UPDATE For newer TF versions (tensorflow>=1.14.0 & tensorflow != 2.0.0a0 - newer than TF2.0-alpha) load the extension like this ``` %load_ext tensorboard ``` OLD ANSWER The extension needs to be loaded first: ``` %load_ext tensorboard.notebook %tensorboard --logdir {logs_base_dir} ```", "best_answers_score":0.8, "library_name":"tensorflow", "question_url":"https:\/\/stackoverflow.com\/questions\/55970686\/tensorboard-not-found-as-magic-function-in-jupyter", "best_answers_votes":88, "question_length":429, "response_length":285 }, { "question":"How to get allocated GPU spec in Google Colab I'm using Google Colab for deep learning and I'm aware that they randomly allocate GPUs to users. I'd like to be able to see which GPU I've been allocated in any given session. Is there a way to do this in Google Colab notebooks? Note that I am using Tensorflow if that helps.", "response":"Since you can run bash commands in Colab, just run !nvidia-smi:", "best_answers_score":0.8, "library_name":"tensorflow", "question_url":"https:\/\/stackoverflow.com\/questions\/60299967\/how-to-get-allocated-gpu-spec-in-google-colab", "best_answers_votes":62, "question_length":323, "response_length":62 }, { "question":"Tensorflow not running on GPU I have already spent a considerable amount of time digging around on Stack Overflow and elsewhere looking for the answer, but couldn't find anything Hi all, I am running Tensorflow with Keras on top. I am 90% sure I installed Tensorflow GPU, is there any way to check which install I did? I was trying to run some CNN models from Jupyter notebook and I noticed that Keras was running the model on the CPU (checked task manager, CPU was at 100%).
I tried running this code from the tensorflow website: ``` # Creates a graph. a = tf.constant([1.0, 2.0, 3.0, 4.0, 5.0, 6.0], shape=[2, 3], name='a') b = tf.constant([1.0, 2.0, 3.0, 4.0, 5.0, 6.0], shape=[3, 2], name='b') c = tf.matmul(a, b) # Creates a session with log_device_placement set to True. sess = tf.Session(config=tf.ConfigProto(log_device_placement=True)) # Runs the op. print(sess.run(c)) ``` And this is what I got: ``` MatMul: (MatMul): \/job:localhost\/replica:0\/task:0\/cpu:0 2017-06-29 17:09:38.783183: I c:\\tf_jenkins\\home\\workspace\\release-win\\m\\windows\\py\\35\\tensorflow\\core\\common_runtime\\simple_placer.cc:847] MatMul: (MatMul)\/job:localhost\/replica:0\/task:0\/cpu:0 b: (Const): \/job:localhost\/replica:0\/task:0\/cpu:0 2017-06-29 17:09:38.784779: I c:\\tf_jenkins\\home\\workspace\\release-win\\m\\windows\\py\\35\\tensorflow\\core\\common_runtime\\simple_placer.cc:847] b: (Const)\/job:localhost\/replica:0\/task:0\/cpu:0 a: (Const): \/job:localhost\/replica:0\/task:0\/cpu:0 2017-06-29 17:09:38.786128: I c:\\tf_jenkins\\home\\workspace\\release-win\\m\\windows\\py\\35\\tensorflow\\core\\common_runtime\\simple_placer.cc:847] a: (Const)\/job:localhost\/replica:0\/task:0\/cpu:0 [[ 22. 28.] [ 49. 64.]] ``` Which to me shows I am running on my CPU, for some reason. I have a GTX1050 (driver version 382.53), I installed CUDA, and Cudnn, and tensorflow installed without any problems. I installed Visual Studio 2015 as well since it was listed as a compatible version. I remember CUDA mentioning something about an incompatible driver being installed, but if I recall correctly CUDA should have installed its own driver. 
Edit: I ran these commands to list the available devices ``` from tensorflow.python.client import device_lib print(device_lib.list_local_devices()) ``` and this is what I get ``` [name: \"\/cpu:0\" device_type: \"CPU\" memory_limit: 268435456 locality { } incarnation: 14922788031522107450 ] ``` and a whole lot of warnings like this ``` 2017-06-29 17:32:45.401429: W c:\\tf_jenkins\\home\\workspace\\release-win\\m\\windows\\py\\35\\tensorflow\\core\\platform\\cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use SSE instructions, but these are available on your machine and could speed up CPU computations. ``` Edit 2 Tried running ``` pip3 install --upgrade tensorflow-gpu ``` and I get ``` Requirement already up-to-date: tensorflow-gpu in c:\\users\\xxx\\appdata\\local\\programs\\python\\python35\\lib\\site-packages Requirement already up-to-date: markdown==2.2.0 in c:\\users\\xxx\\appdata\\local\\programs\\python\\python35\\lib\\site-packages (from tensorflow-gpu) Requirement already up-to-date: html5lib==0.9999999 in c:\\users\\xxx\\appdata\\local\\programs\\python\\python35\\lib\\site-packages (from tensorflow-gpu) Requirement already up-to-date: werkzeug>=0.11.10 in c:\\users\\xxx\\appdata\\local\\programs\\python\\python35\\lib\\site-packages (from tensorflow-gpu) Requirement already up-to-date: wheel>=0.26 in c:\\users\\xxx\\appdata\\local\\programs\\python\\python35\\lib\\site-packages (from tensorflow-gpu) Requirement already up-to-date: bleach==1.5.0 in c:\\users\\xxx\\appdata\\local\\programs\\python\\python35\\lib\\site-packages (from tensorflow-gpu) Requirement already up-to-date: six>=1.10.0 in c:\\users\\xxx\\appdata\\local\\programs\\python\\python35\\lib\\site-packages (from tensorflow-gpu) Requirement already up-to-date: protobuf>=3.2.0 in c:\\users\\xxx\\appdata\\local\\programs\\python\\python35\\lib\\site-packages (from tensorflow-gpu) Requirement already up-to-date: backports.weakref==1.0rc1 in
c:\\users\\xxx\\appdata\\local\\programs\\python\\python35\\lib\\site-packages (from tensorflow-gpu) Requirement already up-to-date: numpy>=1.11.0 in c:\\users\\xxx\\appdata\\local\\programs\\python\\python35\\lib\\site-packages (from tensorflow-gpu) Requirement already up-to-date: setuptools in c:\\users\\xxx\\appdata\\local\\programs\\python\\python35\\lib\\site-packages (from protobuf>=3.2.0->tensorflow-gpu) ``` Solved: Check comments for solution. Thanks to all who helped! I am new to this, so any help is greatly appreciated! Thank you.", "response":"To check which devices are available to TensorFlow you can use this and see if the GPU cards are available: ```py from tensorflow.python.client import device_lib print(device_lib.list_local_devices()) ``` More info There are also C++ logs available controlled by the TF_CPP_MIN_VLOG_LEVEL env variable, e.g.: ```py import os os.environ[\"TF_CPP_MIN_VLOG_LEVEL\"] = \"2\" ``` should allow them to be printed when running import tensorflow as tf. You should see logs like these if you use GPU-enabled tensorflow with proper access to the GPU machine: ``` successfully opened CUDA library libcublas.so.*.* locally successfully opened CUDA library libcudnn.so.*.* locally successfully opened CUDA library libcufft.so.*.* locally ``` On the other hand, if there are no CUDA libraries in the system \/ container, you will see: ``` Could not find cuda drivers on your machine, GPU will not be used.
``` and where CUDA is installed, but there is no GPU physically available, TF will import cleanly and error only later, when you run device_lib.list_local_devices() with this: ``` failed call to cuInit: CUDA_ERROR_NO_DEVICE: no CUDA-capable device is detected ```", "best_answers_score":0.8, "library_name":"tensorflow", "question_url":"https:\/\/stackoverflow.com\/questions\/44829085\/tensorflow-not-running-on-gpu", "best_answers_votes":45, "question_length":4464, "response_length":1153 }, { "question":"How to install TensorFlow on Windows? I am starting to work with the TensorFlow library for deep learning, https:\/\/www.tensorflow.org\/. I found an explicit guide to working with it on Linux and Mac, but I did not find how to work with it under Windows. I searched over the net, but the information is lacking. I use Visual Studio 2015 for my projects, and I am trying to compile the library with the Visual Studio compiler VC14. How to install it and to use it under Windows? Can I use Bazel for Windows for production use?", "response":"How to install TensorFlow and to use it under Windows? Updated on 8\/4\/16 Windows 10 now has an Ubuntu Bash environment, AKA Bash on Ubuntu on Windows, available as a standard option (as opposed to Insider Preview updates for developers). (StackOverflow tag wsl) This option came with the Windows 10 anniversary update (Version 1607) released on 8\/2\/2016. This allows the use of apt-get to install software packages such as Python and TensorFlow. Note: Bash on Ubuntu on Windows does not have access to the GPU, so all of the GPU options for installing TensorFlow will not work.
The dated installation instructions for Bash on Ubuntu on Windows are basically correct, but only these steps are necessary: Prerequisites Enable the Windows Subsystem for Linux feature (GUI) Reboot when prompted Run Bash on Windows Steps no longer needed: Turn on Developer Mode Enable the Windows Subsystem for Linux feature (command-line) Then install TensorFlow using apt-get ``` sudo apt-get install python3-pip python3-dev sudo pip3 install --upgrade https:\/\/storage.googleapis.com\/tensorflow\/linux\/cpu\/tensorflow-0.8.0-cp34-cp34m-linux_x86_64.whl ``` and now test TensorFlow ``` $ python3 ... >>> import tensorflow as tf >>> hello = tf.constant('Hello, TensorFlow!') >>> sess = tf.Session() >>> print(sess.run(hello)) Hello, TensorFlow! >>> a = tf.constant(10) >>> b = tf.constant(32) >>> print(sess.run(a + b)) 42 >>> exit() ``` and run an actual neural network ``` python3 -m tensorflow.models.image.mnist.convolutional ``` Earlier Answer Written after learning about the developer preview of Bash on Windows. See Playing with TensorFlow on Windows by Scott Hanselman, which uses Bash on Windows 10. Original Answer Bazel is the problem TensorFlow is not made with build automation tools such as make, but with Google's in-house build tool Bazel. Bazel only works on systems based on Unix such as Linux and OS X. Since the current published\/known means to build TensorFlow uses Bazel and Bazel does not work on Windows, one cannot install or run TensorFlow natively on Windows. From Bazel FAQ What about Windows? Due to its UNIX heritage, porting Bazel to Windows is significant work. For example, Bazel uses symlinks extensively, which has varying levels of support across Windows versions. We are currently actively working on improving Windows support, but it's still ways from being usable. Status See: TensorFlow issue #17 See: Bazel issue #276 Solutions The solutions are listed in the order of complexity and work needed; from about an hour to possibly not working at all.
Docker ~ 1 hour Docker installation Docker is a system to build self-contained versions of a Linux operating system running on your machine. When you install and run TensorFlow via Docker it completely isolates the installation from pre-existing packages on your machine. Also look at TensorFlow - which Docker image to use? OS X ~ 1 hour If you have a current Mac running OS X then see: Installation for Mac OS X Linux The recommended Linux system tends to be Ubuntu 14.04 LTS (Download page). a. Virtual Machine - Hardware Virtualization - Full Virtualization ~ 3 hours Download and install a virtual machine such as the commercial VMware or the free Virtual Box, after which you can install Linux and then install TensorFlow. When you go to install TensorFlow you will be using Pip - Python's package management system. Visual Studio users should think NuGet. The packages are known as wheels. See: Pip Installation If you need to build from the source then see: Installing From Sources ~ 4 hours Note: If you plan on using a Virtual Machine and have never done so before, consider using the Docker option instead, since Docker is the Virtual Machine, OS and TensorFlow all packaged together. b. Dual boot ~ 3 hours If you want to run TensorFlow on the same machine on which you have Windows and make use of the GPU version, then you will most likely have to use this option, as running on a hosted virtual machine (Type 2 hypervisor) will not allow you access to the GPU. Remote machine ~ 4 hours If you have remote access to another machine that you can install the Linux OS and TensorFlow software on and allow remote connections to, then you can use your Windows machine to present the remote machine as an application running on Windows. Cloud Service I have no experience with this. Please edit the answer if you know. Cloud services such as AWS are being used. From TensorFlow Features Want to run the model as a service in the cloud? Containerize with Docker and TensorFlow just works.
From Docker Running Docker on AWS provides a highly reliable, low-cost way to quickly build, ship, and run distributed applications at scale. Deploy Docker using AMIs from the AWS Marketplace. Wait for Bazel to work on Windows. Currently it appears the only hold-up is Bazel; however, Bazel's roadmap lists Windows support as coming this year. There are two features listed for Windows: ``` 2016-02 Bazel can bootstrap itself on Windows without requiring admin privileges. 2016-12 Full Windows support for Android: Android feature set is identical for Windows and Linux\/OS X. ``` Build TensorFlow by hand. A few days or more depending on your skill level. I gave up on this one; too many subprojects to build and files to locate. Remember that Bazel is only used to build TensorFlow. If you get the commands Bazel runs and the correct source code and libraries, you should be able to build TensorFlow on Windows. See: How do I get the commands executed by Bazel. While I have not researched this more, you can look at the continuous integration info for needed files and info on how they build it for testing. (Readme) (site) Build Bazel on Windows A few days or more depending on your skill level. I gave up on this one also; could not find the necessary source files needed for Windows. There is a public experimental source code version of Bazel that bootstraps on Windows. You may be able to leverage this into getting Bazel to work on Windows, etc. Also, these solutions require the use of Cygwin or MinGW, which adds another layer of complexity. Use an alternative build system such as Make If you get this one to work I would like to see it on GitHub. This currently does not exist for TensorFlow. It is a feature request. See: TensorFlow issue 380 Cross Build If you get this one to work I would like to see it on GitHub. You build TensorFlow on Linux using Bazel but change the build process to output a wheel that can be installed on Windows.
This will require detailed knowledge of Bazel to change the configuration, and locating the source code and libraries that work with Windows. An option I would only suggest as a last resort. It may not even be possible. Run on the new Windows Subsystem for Linux. See: Windows Subsystem for Linux Overview You will know as much as I do by reading the referenced article. Can I use Bazel for Windows for production use? Since it is experimental software I would not use it on a production machine. Remember that you only need Bazel to build TensorFlow. So use the experimental code on a non-production machine to build the wheel, then install the wheel on a production machine. See: Pip Installation TLDR; Currently I have several versions for learning. Most use VMware 7.1 Workstation to host Ubuntu 14.04 LTS or Ubuntu 15 or Debian. I also have one dual boot of Ubuntu 14.04 LTS on my Windows machine to access the GPU, as the machine with VMware does not have the proper GPU. I would recommend that you give these machines at least 8G of memory either as RAM or RAM and swap space, as I have run out of memory a few times.", "best_answers_score":0.8, "library_name":"tensorflow", "question_url":"https:\/\/stackoverflow.com\/questions\/34785414\/how-to-install-tensorflow-on-windows", "best_answers_votes":71, "question_length":504, "response_length":7607 }, { "question":"What is the mathematics behind the \"smoothing\" parameter in TensorBoard's scalar graphs? I presume it is some kind of moving average, but the valid range is between 0 and 1.", "response":"ORIGINAL ANSWER It is called an exponential moving average; below is a code explanation of how it is created.
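At its core it is the recurrence smoothed[i] = weight * smoothed[i-1] + (1 - weight) * point[i]; a tiny self-contained check of that recurrence (plain Python, function name is mine): ``` def ema(points, weight): # Exponential moving average: each output mixes the previous # smoothed value with the current raw point. last = points[0] out = [] for p in points: last = last * weight + (1 - weight) * p out.append(last) return out data = [0.0, 10.0, 0.0, 10.0] print(ema(data, 0.0)) # weight 0 leaves the data untouched print(ema(data, 0.9)) # weight near 1 flattens the zig-zag ``` Running this shows why a weight of 0 reproduces the raw curve while a weight close to 1 produces the heavily smoothed line you see in TensorBoard.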
Assuming all the real scalar values are in a list called scalars, the smoothing is applied as follows: ```py from typing import List def smooth(scalars: List[float], weight: float) -> List[float]: # Weight between 0 and 1 last = scalars[0] # First value in the plot (first timestep) smoothed = list() for point in scalars: smoothed_val = last * weight + (1 - weight) * point # Calculate smoothed value smoothed.append(smoothed_val) # Save it last = smoothed_val # Anchor the last smoothed value return smoothed ``` UPDATED ANSWER As @SaPropper correctly pointed out, TensorBoard now includes the debiasing factor.", "best_answers_score":0.8, "library_name":"tensorflow", "question_url":"https:\/\/stackoverflow.com\/questions\/42281844\/what-is-the-mathematics-behind-the-smoothing-parameter-in-tensorboards-scalar", "best_answers_votes":96, "question_length":173, "response_length":826 }, { "question":"Convert between NHWC and NCHW in TensorFlow What is the best way to convert a tensor from NHWC format to NCHW format, and vice versa? Is there an op specifically that does this, or will I need to use some combination of the split\/concat type operations?", "response":"All you need to do is a permutation of the dimensions from NHWC to NCHW (or the reverse). The meaning of each letter might help understand it: N: number of images in the batch H: height of the image W: width of the image C: number of channels of the image (ex: 3 for RGB, 1 for grayscale...) From NHWC to NCHW The image shape is (N, H, W, C) and we want the output to have shape (N, C, H, W). Therefore we need to apply tf.transpose with a well-chosen permutation perm.
The returned tensor's dimension i will correspond to the input dimension perm[i] ```py perm[0] = 0 # output dimension 0 will be 'N', which was dimension 0 in the input perm[1] = 3 # output dimension 1 will be 'C', which was dimension 3 in the input perm[2] = 1 # output dimension 2 will be 'H', which was dimension 1 in the input perm[3] = 2 # output dimension 3 will be 'W', which was dimension 2 in the input ``` In practice: ```py images_nhwc = tf.placeholder(tf.float32, [None, 200, 300, 3]) # input batch out = tf.transpose(images_nhwc, [0, 3, 1, 2]) print(out.get_shape()) # the shape of out is [None, 3, 200, 300] ``` From NCHW to NHWC The image shape is (N, C, H, W) and we want the output to have shape (N, H, W, C). Therefore we need to apply tf.transpose with a well-chosen permutation perm. The returned tensor's dimension i will correspond to the input dimension perm[i] ```py perm[0] = 0 # output dimension 0 will be 'N', which was dimension 0 in the input perm[1] = 2 # output dimension 1 will be 'H', which was dimension 2 in the input perm[2] = 3 # output dimension 2 will be 'W', which was dimension 3 in the input perm[3] = 1 # output dimension 3 will be 'C', which was dimension 1 in the input ``` In practice: ```py images_nchw = tf.placeholder(tf.float32, [None, 3, 200, 300]) # input batch out = tf.transpose(images_nchw, [0, 2, 3, 1]) print(out.get_shape()) # the shape of out is [None, 200, 300, 3] ```", "best_answers_score":0.8, "library_name":"tensorflow", "question_url":"https:\/\/stackoverflow.com\/questions\/37689423\/convert-between-nhwc-and-nchw-in-tensorflow", "best_answers_votes":88, "question_length":253, "response_length":1895 }, { "question":"How to replace (or insert) intermediate layer in Keras model? I have a trained Keras model and I would like: 1) to replace a Conv2D layer with the same layer but without bias. 2) to add a BatchNormalization layer before the first Activation How can I do this?
```python def keras_simple_model(): from keras.models import Model from keras.layers import Input, Dense, GlobalAveragePooling2D from keras.layers import Conv2D, MaxPooling2D, Activation inputs1 = Input((28, 28, 1)) x = Conv2D(4, (3, 3), activation=None, padding='same', name='conv1')(inputs1) x = Activation('relu')(x) x = Conv2D(4, (3, 3), activation=None, padding='same', name='conv2')(x) x = Activation('relu')(x) x = MaxPooling2D((2, 2), strides=(2, 2), name='pool1')(x) x = Conv2D(8, (3, 3), activation=None, padding='same', name='conv3')(x) x = Activation('relu')(x) x = Conv2D(8, (3, 3), activation=None, padding='same', name='conv4')(x) x = Activation('relu')(x) x = MaxPooling2D((2, 2), strides=(2, 2), name='pool2')(x) x = GlobalAveragePooling2D()(x) x = Dense(10, activation=None)(x) x = Activation('softmax')(x) model = Model(inputs=inputs1, outputs=x) return model if __name__ == '__main__': model = keras_simple_model() print(model.summary()) ```", "response":"The following function allows you to insert a new layer before or after each layer in the original model whose name matches a regular expression, or to replace it; it also works for non-sequential models such as DenseNet or ResNet.
```python import re from keras.models import Model def insert_layer_nonseq(model, layer_regex, insert_layer_factory, insert_layer_name=None, position='after'): # Auxiliary dictionary to describe the network graph network_dict = {'input_layers_of': {}, 'new_output_tensor_of': {}} # Set the input layers of each layer for layer in model.layers: for node in layer._outbound_nodes: layer_name = node.outbound_layer.name if layer_name not in network_dict['input_layers_of']: network_dict['input_layers_of'].update( {layer_name: [layer.name]}) else: network_dict['input_layers_of'][layer_name].append(layer.name) # Set the output tensor of the input layer network_dict['new_output_tensor_of'].update( {model.layers[0].name: model.input}) # Iterate over all layers after the input model_outputs = [] for layer in model.layers[1:]: # Determine input tensors layer_input = [network_dict['new_output_tensor_of'][layer_aux] for layer_aux in network_dict['input_layers_of'][layer.name]] if len(layer_input) == 1: layer_input = layer_input[0] # Insert layer if name matches the regular expression if re.match(layer_regex, layer.name): if position == 'replace': x = layer_input elif position == 'after': x = layer(layer_input) elif position == 'before': pass else: raise ValueError('position must be: before, after or replace') new_layer = insert_layer_factory() if insert_layer_name: new_layer.name = insert_layer_name else: new_layer.name = '{}_{}'.format(layer.name, new_layer.name) x = new_layer(x) print('New layer: {} Old layer: {} Type: {}'.format(new_layer.name, layer.name, position)) if position == 'before': x = layer(x) else: x = layer(layer_input) # Set new output tensor (the original one, or the one of the inserted # layer) network_dict['new_output_tensor_of'].update({layer.name: x}) # Save tensor in output list if it is output in initial model if layer.name in model.output_names: model_outputs.append(x) return Model(inputs=model.inputs, outputs=model_outputs) ``` The difference with respect
to the simpler case of a purely sequential model is that before iterating over the layers to find the key layer, you first parse the graph and store the input layers of each layer in an auxiliary dictionary. Then, as you iterate over the layers, you also store the new output tensor of each layer, which is used to determine the input layers of each layer when building the new model. A use case would be the following, where a Dropout layer is inserted after each activation layer of ResNet50: ```python from keras.applications.resnet50 import ResNet50 from keras.models import load_model from keras.layers import Dropout model = ResNet50() def dropout_layer_factory(): return Dropout(rate=0.2, name='dropout') model = insert_layer_nonseq(model, '.*activation.*', dropout_layer_factory) # Fix possible problems with new model model.save('temp.h5') model = load_model('temp.h5') model.summary() ```", "best_answers_score":0.8, "library_name":"tensorflow", "question_url":"https:\/\/stackoverflow.com\/questions\/49492255\/how-to-replace-or-insert-intermediate-layer-in-keras-model", "best_answers_votes":39, "question_length":1206, "response_length":3084 }, { "question":"How to get the dimensions of a tensor (in TensorFlow) at graph construction time? I am trying an Op that is not behaving as expected. ``` graph = tf.Graph() with graph.as_default(): train_dataset = tf.placeholder(tf.int32, shape=[128, 2]) embeddings = tf.Variable( tf.random_uniform([50000, 64], -1.0, 1.0)) embed = tf.nn.embedding_lookup(embeddings, train_dataset) embed = tf.reduce_sum(embed, reduction_indices=0) ``` So I need to know the dimensions of the Tensor embed. I know that it can be done at run time but it's too much work for such a simple operation. What's the easier way to do it?", "response":"I see most people confused about tf.shape(tensor) and tensor.get_shape() Let's make it clear: tf.shape tf.shape is used for dynamic shape. If your tensor's shape is changeable, use it.
An example: an input is an image with changeable width and height, and we want to resize it to half of its size; then we can write something like: new_height = tf.shape(image)[0] \/ 2 tensor.get_shape tensor.get_shape is used for fixed shapes, which means the tensor's shape can be deduced in the graph. Conclusion: tf.shape can be used almost anywhere, but tensor.get_shape only for shapes that can be deduced from the graph.", "best_answers_score":0.8, "library_name":"tensorflow", "question_url":"https:\/\/stackoverflow.com\/questions\/36966316\/how-to-get-the-dimensions-of-a-tensor-in-tensorflow-at-graph-construction-time", "best_answers_votes":65, "question_length":600, "response_length":586 }, { "question":"FutureWarning: Conversion of the second argument of issubdtype from `float` to `np.floating` is deprecated After updating my Numpy and Tensorflow I am getting these kinds of warnings. I had already tried these, but nothing works; every suggestion will be appreciated. ``` FutureWarning: Conversion of the second argument of issubdtype from `float` to `np.floating` is deprecated. In future, it will be treated as `np.float64 == np.dtype(float).type`. from ._conv import register_converters as _register_converters 2018-01-19 17:11:38.695932: I C:\\tf_jenkins\\home\\workspace\\rel-win\\M\\windows\\PY\\36\\tensorflow\\core\\platform\\cpu_feature_guard.cc:137] Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX AVX2 ```", "response":"This might or might not be your case, but the same warning is also spit out from the h5py package: \/home\/user\/bin\/conda3\/lib\/python3.6\/site-packages\/h5py\/__init__.py:34: FutureWarning: Conversion of the second argument of issubdtype from float to np.floating is deprecated. In future, it will be treated as np.float64 == np.dtype(float).type. from ._conv import register_converters as _register_converters For anyone coming here with this problem, it is a known h5py issue, introduced with numpy 1.14.
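In the meantime, if you just want to hide the message, the standard-library warnings filter works as a stopgap (this only silences the warning; upgrading h5py is the real fix): ``` import warnings # Stopgap: suppress FutureWarning (e.g. the issubdtype deprecation # message printed when h5py is imported). This hides the noise only. warnings.filterwarnings(\"ignore\", category=FutureWarning) with warnings.catch_warnings(record=True) as caught: warnings.warn(\"issubdtype conversion is deprecated\", FutureWarning) print(len(caught)) # -> 0: the warning was filtered out ``` Place the filterwarnings call before the import that triggers the warning.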
As stated by the devs: You can ignore the warning, it's not going to cause any issues at the moment, but you should upgrade to the next release of h5py when it becomes available. ... so it's harmless. The fix has just been merged to master. But until the update is released, the workaround is to downgrade numpy to a previous version: ``` pip install numpy==1.13.0 ``` Update: h5py has released the RC build with the fix. The following command should do it: ``` pip install h5py==2.8.0rc1 ``` Update (FINAL): there's a full-fledged release now. So you can simply run: ``` pip install --upgrade h5py ```", "best_answers_score":0.8, "library_name":"tensorflow", "question_url":"https:\/\/stackoverflow.com\/questions\/48340392\/futurewarning-conversion-of-the-second-argument-of-issubdtype-from-float-to", "best_answers_votes":66, "question_length":743, "response_length":1100 }, { "question":"Make a custom loss function in keras Hi I have been trying to make a custom loss function in keras for dice_error_coefficient. It has its implementations in tensorboard and I tried using the same function in keras with tensorflow but it keeps returning a NoneType when I used model.train_on_batch or model.fit where as it gives proper values when used in metrics in the model. Can please someone help me out with what should i do? I have tried following libraries like Keras-FCN by ahundt where he has used custom loss functions but none of it seems to work. The target and output in the code are y_true and y_pred respectively as used in the losses.py file in keras. ``` def dice_hard_coe(target, output, threshold=0.5, axis=[1,2], smooth=1e-5): \"\"\"References ----------- - `Wiki-Dice `_ \"\"\" output = tf.cast(output > threshold, dtype=tf.float32) target = tf.cast(target > threshold, dtype=tf.float32) inse = tf.reduce_sum(tf.multiply(output, target), axis=axis) l = tf.reduce_sum(output, axis=axis) r = tf.reduce_sum(target, axis=axis) hard_dice = (2. 
* inse + smooth) \/ (l + r + smooth) hard_dice = tf.reduce_mean(hard_dice) return hard_dice ```", "response":"There are two steps in implementing a parameterized custom loss function in Keras. First, write a method for the coefficient\/metric. Second, write a wrapper function to format things the way Keras needs them to be. It's actually quite a bit cleaner to use the Keras backend instead of tensorflow directly for simple custom loss functions like DICE. Here's an example of the coefficient implemented that way: ``` import keras.backend as K def dice_coef(y_true, y_pred, smooth, thresh): y_pred = y_pred > thresh y_true_f = K.flatten(y_true) y_pred_f = K.flatten(y_pred) intersection = K.sum(y_true_f * y_pred_f) return (2. * intersection + smooth) \/ (K.sum(y_true_f) + K.sum(y_pred_f) + smooth) ``` Now for the tricky part. Keras loss functions must only take (y_true, y_pred) as parameters. So we need a separate function that returns another function. ``` def dice_loss(smooth, thresh): def dice(y_true, y_pred): return -dice_coef(y_true, y_pred, smooth, thresh) return dice ``` Finally, you can use it as follows when compiling in Keras. ``` # build model model = my_model() # get the loss function model_dice = dice_loss(smooth=1e-5, thresh=0.5) # compile model model.compile(loss=model_dice) ```", "best_answers_score":0.8, "library_name":"tensorflow", "question_url":"https:\/\/stackoverflow.com\/questions\/45961428\/make-a-custom-loss-function-in-keras", "best_answers_votes":108, "question_length":1148, "response_length":1194 }, { "question":"Count number of \"True\" values in boolean Tensor I understand that tf.where will return the locations of True values, so that I could use the result's shape[0] to get the number of Trues. However, when I try to use this, the dimension is unknown (which makes sense as it needs to be computed at runtime). So my question is, how can I access a dimension and use it in an operation like a sum?
For example: ``` myOtherTensor = tf.constant([[True, True], [False, True]]) myTensor = tf.where(myOtherTensor) myTensor.get_shape() #=> [None, 2] sum = 0 sum += myTensor.get_shape().as_list()[0] # Well defined at runtime but considered None until then. ```", "response":"You can cast the values to floats and compute the sum on them: tf.reduce_sum(tf.cast(myOtherTensor, tf.float32)) Depending on your actual use case you can also compute sums per row\/column if you specify the reduce dimensions of the call.", "best_answers_score":0.8, "library_name":"tensorflow", "question_url":"https:\/\/stackoverflow.com\/questions\/34598371\/count-number-of-true-values-in-boolean-tensor", "best_answers_votes":59, "question_length":648, "response_length":237 }, { "question":"How to write a custom loss function in Tensorflow? I am new to tensorflow. I want to write my own custom loss function. Is there any tutorial about this? For example, the hinge loss or a sum_of_square_loss (though this is already in tf)? Can I do it directly in Python, or do I have to write C++ code?", "response":"We need to write down the loss function. For example, we can use basic mean squared error as our loss function for predicted y and target y_: ``` loss_mse = 1\/n(Sum((y-y_)^2)) ``` There are basic functions for tensors like tf.add(x,y), tf.sub(x,y), tf.square(x), tf.reduce_sum(x), etc. Then we can define our loss function in Tensorflow like: ``` cost = tf.reduce_mean(tf.square(tf.sub(y,y_))) ``` Note: y and y_ are tensors. Moreover, we can define any other loss function if we can write down its equation. For some training operators (minimizers), the loss function should satisfy some conditions (smooth, differentiable ...). In short, TensorFlow turns arrays, constants, and variables into tensors, defines calculations using tf functions, and uses a session to run through the graph.
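As a concrete illustration of the hinge loss the question asks about, here is a minimal sketch of the formula in plain NumPy so it runs standalone (illustrative only, not from the original answer; in TensorFlow the analogous expression would be tf.reduce_mean(tf.maximum(0., 1. - y_ * y))):

```python
import numpy as np

def hinge_loss(y_true, y_pred):
    # Mean hinge loss; y_true is expected to hold -1 or +1 labels.
    # Correct predictions beyond the margin contribute zero loss.
    return np.mean(np.maximum(0.0, 1.0 - y_true * y_pred))

y_true = np.array([1.0, -1.0, 1.0])
y_pred = np.array([0.8, -0.5, -0.2])
print(hinge_loss(y_true, y_pred))  # averages 0.2, 0.5 and 1.2 -> about 0.6333
```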
We can define whatever we like and run it in the end.", "best_answers_score":0.8, "library_name":"tensorflow", "question_url":"https:\/\/stackoverflow.com\/questions\/34875944\/how-to-write-a-custom-loss-function-in-tensorflow", "best_answers_votes":60, "question_length":300, "response_length":835 }, { "question":"Use shared GPU memory with TensorFlow? So I installed the GPU version of TensorFlow on a Windows 10 machine with a GeForce GTX 980 graphics card on it. Admittedly, I know very little about graphics cards, but according to dxdiag it does have: 4060MB of dedicated memory (VRAM) and; 8163MB of shared memory for a total of about 12224MB. What I noticed, though, is that this \"shared\" memory seems to be pretty much useless. When I start training a model, the VRAM will fill up and if the memory requirement exceeds these 4GB, TensorFlow will crash with a \"resource exhausted\" error message. I CAN, of course, prevent reaching that point by choosing the batch size suitably low, but I do wonder if there's a way to make use of these \"extra\" 8GB of RAM, or if that's it and TensorFlow requires the memory to be dedicated.", "response":"Shared memory is an area of the main system RAM reserved for graphics. References: https:\/\/en.wikipedia.org\/wiki\/Shared_graphics_memory https:\/\/www.makeuseof.com\/tag\/can-shared-graphics-finally-compete-with-a-dedicated-graphics-card\/ https:\/\/youtube.com\/watch?v=E5WyJY1zwcQ This type of memory is what integrated graphics eg Intel HD series typically use. This is not on your NVIDIA GPU, and CUDA can't use it. Tensorflow can't use it when running on GPU because CUDA can't use it, and also when running on CPU because it's reserved for graphics. Even if CUDA could use it somehow. It won't be useful because system RAM bandwidth is around 10x less than GPU memory bandwidth, and you have to somehow get the data to and from the GPU over the slow (and high latency) PCIE bus. 
Bandwidth numbers for reference: GeForce GTX 980: 224 GB\/s DDR4 on desktop motherboard: approx 25GB\/s PCIe 16x: 16GB\/s This doesn't take into account latency. In practice, running a GPU compute task on data which is too big to fit in GPU memory and has to be transferred over PCIe every time it is accessed is so slow for most types of compute that doing the same calculation on CPU would be much faster. Why do you see that kind of memory being allocated when you have an NVIDIA card in your machine? Good question. I can think of a couple of possibilities: (a) You have both NVIDIA and Intel graphics drivers active (eg as happens when running different displays on both). Uninstall the Intel drivers and\/or disable Intel HD graphics in the BIOS and shared memory will disappear. (b) NVIDIA is using it. This may be eg extra texture memory, etc. It could also not be real memory but just a memory-mapped area that corresponds to GPU memory. Look in the advanced settings of the NVIDIA driver for a setting that controls this. In any case, no, there isn't anything that Tensorflow can use.", "best_answers_score":0.8, "library_name":"tensorflow", "question_url":"https:\/\/stackoverflow.com\/questions\/47859924\/use-shared-gpu-memory-with-tensorflow", "best_answers_votes":59, "question_length":817, "response_length":1868 }, { "question":"Can't save custom subclassed model Inspired by tf.keras.Model subclassing I created a custom model. I can train it and get successful results, but I can't save it.
I use python3.6 with tensorflow v1.10 (or v1.9) Minimal complete code example here: ``` import tensorflow as tf from tensorflow.keras.datasets import mnist class Classifier(tf.keras.Model): def __init__(self): super().__init__(name=\"custom_model\") self.batch_norm1 = tf.layers.BatchNormalization() self.conv1 = tf.layers.Conv2D(32, (7, 7)) self.pool1 = tf.layers.MaxPooling2D((2, 2), (2, 2)) self.batch_norm2 = tf.layers.BatchNormalization() self.conv2 = tf.layers.Conv2D(64, (5, 5)) self.pool2 = tf.layers.MaxPooling2D((2, 2), (2, 2)) def call(self, inputs, training=None, mask=None): x = self.batch_norm1(inputs) x = self.conv1(x) x = tf.nn.relu(x) x = self.pool1(x) x = self.batch_norm2(x) x = self.conv2(x) x = tf.nn.relu(x) x = self.pool2(x) return x if __name__ == '__main__': (x_train, y_train), (x_test, y_test) = mnist.load_data() x_train = x_train.reshape(*x_train.shape, 1)[:1000] y_train = y_train.reshape(*y_train.shape, 1)[:1000] x_test = x_test.reshape(*x_test.shape, 1) y_test = y_test.reshape(*y_test.shape, 1) y_train = tf.keras.utils.to_categorical(y_train) y_test = tf.keras.utils.to_categorical(y_test) model = Classifier() inputs = tf.keras.Input((28, 28, 1)) x = model(inputs) x = tf.keras.layers.Flatten()(x) x = tf.keras.layers.Dense(10, activation=\"sigmoid\")(x) model = tf.keras.Model(inputs=inputs, outputs=x) model.compile(optimizer=\"adam\", loss=\"binary_crossentropy\", metrics=[\"accuracy\"]) model.fit(x_train, y_train, epochs=1, shuffle=True) model.save(\".\/my_model\") ``` Error message: ``` 1000\/1000 [==============================] - 1s 1ms\/step - loss: 4.6037 - acc: 0.7025 Traceback (most recent call last): File \"\/home\/user\/Data\/test\/python\/mnist\/mnist_run.py\", line 62, in model.save(\".\/my_model\") File \"\/home\/user\/miniconda3\/envs\/ml3.6\/lib\/python3.6\/site-packages\/tensorflow\/python\/keras\/engine\/network.py\", line 1278, in save save_model(self, filepath, overwrite, include_optimizer) File 
\"\/home\/user\/miniconda3\/envs\/ml3.6\/lib\/python3.6\/site-packages\/tensorflow\/python\/keras\/engine\/saving.py\", line 101, in save_model 'config': model.get_config() File \"\/home\/user\/miniconda3\/envs\/ml3.6\/lib\/python3.6\/site-packages\/tensorflow\/python\/keras\/engine\/network.py\", line 1049, in get_config layer_config = layer.get_config() File \"\/home\/user\/miniconda3\/envs\/ml3.6\/lib\/python3.6\/site-packages\/tensorflow\/python\/keras\/engine\/network.py\", line 1028, in get_config raise NotImplementedError NotImplementedError Process finished with exit code 1 ``` I looked into the error line and found out that the get_config method checks self._is_graph_network Has anybody dealt with this problem? Thanks! Update 1: On keras 2.2.2 (not tf.keras) I found this comment (for model saving) in file keras\/engine\/network.py, function get_config: # Subclassed networks are not serializable # (unless serialization is implemented by # the author of the subclassed network). So, obviously it won't work... I wonder why they don't point it out in the documentation (like: \"Use subclassing without the ability to save!\") Update 2: Found in the keras documentation: In subclassed models, the model's topology is defined as Python code (rather than as a static graph of layers). That means the model's topology cannot be inspected or serialized. As a result, the following methods and attributes are not available for subclassed models: model.inputs and model.outputs. model.to_yaml() and model.to_json() model.get_config() and model.save(). So, there is no way to save a model by using subclassing. It's only possible to use Model.save_weights()", "response":"TensorFlow 2.2 Thanks to @cal for pointing out that the new TensorFlow now supports saving custom models! Use model.save to save the whole model and load_model to restore a previously stored subclassed model. The following code snippets describe how to implement this.
``` class ThreeLayerMLP(keras.Model): def __init__(self, name=None): super(ThreeLayerMLP, self).__init__(name=name) self.dense_1 = layers.Dense(64, activation='relu', name='dense_1') self.dense_2 = layers.Dense(64, activation='relu', name='dense_2') self.pred_layer = layers.Dense(10, name='predictions') def call(self, inputs): x = self.dense_1(inputs) x = self.dense_2(x) return self.pred_layer(x) def get_model(): return ThreeLayerMLP(name='3_layer_mlp') model = get_model() # Save the model model.save('path_to_my_model', save_format='tf') # Recreate the exact same model purely from the file new_model = keras.models.load_model('path_to_my_model') ``` See: Save and serialize models with Keras - Part II: Saving and Loading of Subclassed Models TensorFlow 2.0 TL;DR: do not use model.save() for a custom subclassed Keras model; use save_weights() and load_weights() instead. With the help of the Tensorflow Team, it turns out the best practice for saving a Custom Sub-Class Keras Model is to save its weights and load them back when needed. The reason that we can not simply save a Keras custom subclass model is that it contains custom code, which cannot be serialized safely. However, the weights can be saved\/loaded without any problem when we have the same model structure and custom code. There is a great tutorial written by Francois Chollet, the author of Keras, on how to save\/load Sequential\/Functional\/Keras\/Custom Sub-Class Models in Tensorflow 2.0 in Colab here. In the Saving Subclassed Models section, it says: Sequential models and Functional models are datastructures that represent a DAG of layers. As such, they can be safely serialized and deserialized. A subclassed model differs in that it's not a datastructure, it's a piece of code. The architecture of the model is defined via the body of the call method. This means that the architecture of the model cannot be safely serialized.
To load a model, you'll need to have access to the code that created it (the code of the model subclass). Alternatively, you could be serializing this code as bytecode (e.g. via pickling), but that's unsafe and generally not portable.", "best_answers_score":0.8, "library_name":"tensorflow", "question_url":"https:\/\/stackoverflow.com\/questions\/51806852\/cant-save-custom-subclassed-model", "best_answers_votes":67, "question_length":3687, "response_length":2437 }, { "question":"module 'tensorflow' has no attribute 'logging' I'm trying to run a tensorflow code in v2.0 and I'm getting the following error ``` AttributeError: module 'tensorflow' has no attribute 'logging' ``` I don't want to simply remove it from the code. Why has this code been removed? What should I do instead?", "response":"tf.logging was for Logging and Summary Operations, and in TF 2.0 it has been removed in favor of the open-source absl-py, so that the main tf.* namespace contains the functions that will be used more often. In TF 2, lesser-used functions are gone or moved into sub-packages like tf.math. So instead of tf.logging you could: use tf_upgrade_v2, which will upgrade your script and change tf.logging to tf.compat.v1.logging; use the Python logging module instead; or import the absl-py library.", "best_answers_score":0.8, "library_name":"tensorflow", "question_url":"https:\/\/stackoverflow.com\/questions\/55318626\/module-tensorflow-has-no-attribute-logging", "best_answers_votes":61, "question_length":303, "response_length":461 }, { "question":"Keras - stateful vs stateless LSTMs I'm having a hard time conceptualizing the difference between stateful and stateless LSTMs in Keras. My understanding is that at the end of each batch, the \"state of the network is reset\" in the stateless case, whereas for the stateful case, the state of the network is preserved for each batch, and must then be manually reset at the end of each epoch. My questions are as follows: 1.
In the stateless case, how is the network learning if the state isn't preserved in-between batches? 2. When would one use the stateless vs stateful modes of an LSTM?", "response":"I recommend you first learn the concepts of BPTT (Back Propagation Through Time) and mini-batch SGD (Stochastic Gradient Descent); then you'll have a better understanding of LSTM's training procedure. For your questions, Q1. In the stateless case, LSTM updates parameters on batch1 and then initiates hidden states and cell states (usually all zeros) for batch2, while in the stateful case, it uses batch1's last output hidden states and cell states as initial states for batch2. Q2. As you can see above, when two sequences in two batches have connections (e.g. prices of one stock), you'd better use stateful mode; otherwise (e.g. one sequence represents a complete sentence) you should use stateless mode. BTW, @vu.pham said if we use stateful RNN, then in production, the network is forced to deal with infinitely long sequences. This doesn't seem correct; actually, as you can see in Q1, LSTM WON'T learn on the whole sequence: it first learns the sequence in batch1, updates parameters, and then learns the sequence in batch2.", "best_answers_score":0.8, "library_name":"tensorflow", "question_url":"https:\/\/stackoverflow.com\/questions\/39681046\/keras-stateful-vs-stateless-lstms", "best_answers_votes":48, "question_length":587, "response_length":1010 }, { "question":"How to interpret increase in both loss and accuracy I have run deep learning models (CNNs) using tensorflow. Many times during the epoch, I have observed that both loss and accuracy have increased, or both have decreased. My understanding was that both are always inversely related.
What could be a scenario where both increase or decrease simultaneously?", "response":"The loss decreases as the training process goes on, except for some fluctuation introduced by mini-batch gradient descent and\/or regularization techniques like dropout (which introduces random noise). If the loss decreases, the training process is going well. The (validation, I suppose) accuracy, instead, is a measure of how good the predictions of your model are. If the model is learning, the accuracy increases. If the model is overfitting, instead, the accuracy stops increasing and can even start to decrease. If the loss decreases and the accuracy decreases, your model is overfitting. If the loss increases and the accuracy increases too, it is because your regularization techniques are working well and you're fighting the overfitting problem. This is true only if the loss then starts to decrease whilst the accuracy continues to increase. Otherwise, if the loss keeps growing, your model is diverging and you should look for the cause (usually you're using too high a learning rate value).", "best_answers_score":0.8, "library_name":"tensorflow", "question_url":"https:\/\/stackoverflow.com\/questions\/40910857\/how-to-interpret-increase-in-both-loss-and-accuracy", "best_answers_votes":61, "question_length":353, "response_length":1002 }, { "question":"Keras + Tensorflow and Multiprocessing in Python I'm using Keras with Tensorflow as the backend. I am trying to save a model in my main process and then load\/run (i.e. call model.predict) within another process. I'm currently just trying the naive approach from the docs to save\/load the model: https:\/\/keras.io\/getting-started\/faq\/#how-can-i-save-a-keras-model. So basically: model.save() in main process model = load_model() in child process model.predict() in child process However, it simply hangs on the load_model call.
Searching around I've discovered this potentially related answer suggesting that Keras can only be utilized in one process: using multiprocessing with theano but am unsure if this is true (can't seem to find much on this). Is there a way to accomplish my goal? A high-level description or short example is greatly appreciated. Note: I've attempted approaches along the lines of passing a graph to the process but failed since it seems tensorflow graphs aren't picklable (related SO post for that here: Tensorflow: Passing a session to a python multiprocess). If there is indeed a way to pass the tensorflow graph\/model to the child process then I am open to that as well. Thanks!", "response":"From my experience, the problem lies in loading Keras in one process and then spawning a new process after Keras has been loaded into your main environment. But for some applications (like e.g. training a mixture of Keras models) it's simply better to have all of these things in one process. So what I advise is the following (a little bit cumbersome, but working for me) approach: DO NOT LOAD KERAS TO YOUR MAIN ENVIRONMENT. If you want to load Keras \/ Theano \/ TensorFlow do it only in the function environment. E.g. don't do this: ``` import keras def training_function(...): ... ``` but do the following: ``` def training_function(...): import keras ... ``` Run work connected with each model in a separate process: I'm usually creating workers that do the job (e.g. training, tuning, scoring) and running them in separate processes. What is nice about this is that the whole memory used by such a process is completely freed when the process is done. This helps with the memory problems you usually come across when using multiprocessing or even running multiple models in one process. So this looks e.g. like this: ``` def _training_worker(train_params): import keras model = obtain_model(train_params) model.fit(train_params) send_message_to_main_process(...)
def train_new_model(train_params): training_process = multiprocessing.Process(target=_training_worker, args = train_params) training_process.start() get_message_from_training_process(...) training_process.join() ``` Different approach is simply preparing different scripts for different model actions. But this may cause memory errors especially when your models are memory consuming. NOTE that due to this reason it's better to make your execution strictly sequential.", "best_answers_score":0.8, "library_name":"tensorflow", "question_url":"https:\/\/stackoverflow.com\/questions\/42504669\/keras-tensorflow-and-multiprocessing-in-python", "best_answers_votes":62, "question_length":1200, "response_length":1769 }, { "question":"400% higher error with PyTorch compared with identical Keras model (with Adam optimizer) TLDR: A simple (single hidden-layer) feed-forward Pytorch model trained to predict the function y = sin(X1) + sin(X2) + ... sin(X10) substantially underperforms an identical model built\/trained with Keras. Why is this so and what can be done to mitigate the difference in performance? In training a regression model, I noticed that PyTorch drastically underperforms an identical model built with Keras. This phenomenon has been observed and reported previously: The same model produces worse results on pytorch than on tensorflow CNN model in pytorch giving 30% less accuracy to Tensoflowflow model: PyTorch Adam vs Tensorflow Adam Suboptimal convergence when compared with TensorFlow model RNN and Adam: slower convergence than Keras PyTorch comparable but worse than keras on a simple feed forward network Why is the PyTorch model doing worse than the same model in Keras even with the same weight initialization? Why Keras behave better than Pytorch under the same network configuration? 
The following explanations and suggestions have been made previously as well: Using the same decimal precision (32 vs 64): 1, 2, Using a CPU instead of a GPU: 1,2 Change retain_graph=True to create_graph=True in computing the 2nd derivative with autograd.grad: 1 Check if keras is using a regularizer, constraint, bias, or loss function in a different way from pytorch: 1,2 Ensure you are computing the validation loss in the same way: 1 Use the same initialization routine: 1,2 Training the pytorch model for longer epochs: 1 Trying several random seeds: 1 Ensure that model.eval() is called in validation step when training pytorch model: 1 The main issue is with the Adam optimizer, not the initialization: 1 To understand this issue, I trained a simple two-layer neural network (much simpler than my original model) in Keras and PyTorch, using the same hyperparameters and initialization routines, and following all the recommendations listed above. However, the PyTorch model results in a mean squared error (MSE) that is 400% higher than the MSE of the Keras model. Here is my code: 0. Imports ```py import numpy as np from scipy.stats import pearsonr from sklearn.preprocessing import MinMaxScaler from sklearn import metrics from torch.utils.data import Dataset, DataLoader import tensorflow as tf from tensorflow.keras import layers from tensorflow.keras.regularizers import L2 from tensorflow.keras.models import Model from tensorflow.keras.optimizers import Adam ``` 1. 
Generate a reproducible dataset ```py def get_data(): np.random.seed(0) Xtrain = np.random.normal(0, 1, size=(7000,10)) Xval = np.random.normal(0, 1, size=(700,10)) ytrain = np.sum(np.sin(Xtrain), axis=-1) yval = np.sum(np.sin(Xval), axis=-1) scaler = MinMaxScaler() ytrain = scaler.fit_transform(ytrain.reshape(-1,1)).reshape(-1) yval = scaler.transform(yval.reshape(-1,1)).reshape(-1) return Xtrain, Xval, ytrain, yval class XYData(Dataset): def __init__(self, X, y): super(XYData, self).__init__() self.X = torch.tensor(X, dtype=torch.float32) self.y = torch.tensor(y, dtype=torch.float32) self.len = len(y) def __getitem__(self, index): return (self.X[index], self.y[index]) def __len__(self): return self.len # Data, dataset, and dataloader Xtrain, Xval, ytrain, yval = get_data() traindata = XYData(Xtrain, ytrain) valdata = XYData(Xval, yval) trainloader = DataLoader(dataset=traindata, shuffle=True, batch_size=32, drop_last=False) valloader = DataLoader(dataset=valdata, shuffle=True, batch_size=32, drop_last=False) ``` 2. 
Build Keras and PyTorch models with identical hyperparameters and initialization methods ```py class TorchLinearModel(nn.Module): def __init__(self, input_dim=10, random_seed=0): super(TorchLinearModel, self).__init__() _ = torch.manual_seed(random_seed) self.hidden_layer = nn.Linear(input_dim,100) self.initialize_layer(self.hidden_layer) self.output_layer = nn.Linear(100, 1) self.initialize_layer(self.output_layer) def initialize_layer(self, layer): _ = torch.nn.init.xavier_normal_(layer.weight) #_ = torch.nn.init.xavier_uniform_(layer.weight) _ = torch.nn.init.constant(layer.bias,0) def forward(self, x): x = self.hidden_layer(x) x = self.output_layer(x) return x def mean_squared_error(ytrue, ypred): return torch.mean(((ytrue - ypred) ** 2)) def build_torch_model(): torch_model = TorchLinearModel() optimizer = optim.Adam(torch_model.parameters(), betas=(0.9,0.9999), eps=1e-7, lr=1e-3, weight_decay=0) return torch_model, optimizer def build_keras_model(): x = layers.Input(shape=10) z = layers.Dense(units=100, activation=None, use_bias=True, kernel_regularizer=None, bias_regularizer=None)(x) y = layers.Dense(units=1, activation=None, use_bias=True, kernel_regularizer=None, bias_regularizer=None)(z) keras_model = Model(x, y, name='linear') optimizer = Adam(learning_rate=1e-3, beta_1=0.9, beta_2=0.9999, epsilon=1e-7, amsgrad=False) keras_model.compile(optimizer=optimizer, loss='mean_squared_error') return keras_model # Instantiate models torch_model, optimizer = build_torch_model() keras_model = build_keras_model() ``` 3. 
Train PyTorch model for 100 epochs: ```py torch_trainlosses, torch_vallosses = [], [] for epoch in range(100): # Training losses = [] _ = torch_model.train() for i, (x,y) in enumerate(trainloader): optimizer.zero_grad() ypred = torch_model(x) loss = mean_squared_error(y, ypred) _ = loss.backward() _ = optimizer.step() losses.append(loss.item()) torch_trainlosses.append(np.mean(losses)) # Validation losses = [] _ = torch_model.eval() with torch.no_grad(): for i, (x, y) in enumerate(valloader): ypred = torch_model(x) loss = mean_squared_error(y, ypred) losses.append(loss.item()) torch_vallosses.append(np.mean(losses)) print(f\"epoch={epoch+1}, train_loss={torch_trainlosses[-1]:.4f}, val_loss={torch_vallosses[-1]:.4f}\") ``` 4. Train Keras model for 100 epochs: ```py history = keras_model.fit(Xtrain, ytrain, sample_weight=None, batch_size=32, epochs=100, validation_data=(Xval, yval)) ``` 5. Loss in training history ```py plt.plot(torch_trainlosses, color='blue', label='PyTorch Train') plt.plot(torch_vallosses, color='blue', linestyle='--', label='PyTorch Val') plt.plot(history.history['loss'], color='brown', label='Keras Train') plt.plot(history.history['val_loss'], color='brown', linestyle='--', label='Keras Val') plt.legend() ``` Keras records a much lower error in the training. Since this may be due to a difference in how Keras computes the loss, I calculated the prediction error on the validation set with sklearn.metrics.mean_squared_error 6. 
Validation error after training ```py ypred_keras = keras_model.predict(Xval).reshape(-1) ypred_torch = torch_model(torch.tensor(Xval, dtype=torch.float32)) ypred_torch = ypred_torch.detach().numpy().reshape(-1) mse_keras = metrics.mean_squared_error(yval, ypred_keras) mse_torch = metrics.mean_squared_error(yval, ypred_torch) print('Percent error difference:', (mse_torch \/ mse_keras - 1) * 100) r_keras = pearsonr(yval, ypred_keras)[0] r_pytorch = pearsonr(yval, ypred_torch)[0] print(\"r_keras:\", r_keras) print(\"r_pytorch:\", r_pytorch) plt.scatter(ypred_keras, yval); plt.title('Keras'); plt.show(); plt.close() plt.scatter(ypred_torch, yval); plt.title('Pytorch'); plt.show(); plt.close() ``` ```py Percent error difference: 479.1312469426776 r_keras: 0.9115184443702814 r_pytorch: 0.21728812737220082 ``` The correlation of predicted values with ground truth is 0.912 for Keras but 0.217 for Pytorch, and the error for Pytorch is 479% higher! 7. Other trials I also tried: Lowering the learning rate for Pytorch (lr=1e-4), R increases from 0.217 to 0.576, but it's still much worse than Keras (r=0.912). Increasing the learning rate for Pytorch (lr=1e-2), R is worse at 0.095 Training numerous times with different random seeds. The performance is roughly the same, regardless. Trained for longer than 100 epochs. No improvement was observed! Used torch.nn.init.xavier_uniform_ instead of torch.nn.init.xavier_normal_ in the initialization of the weights. R improves from 0.217 to 0.639, but it's still worse than Keras (0.912). What can be done to ensure that the PyTorch model converges to a reasonable error comparable with the Keras model?", "response":"The problem here is unintentional broadcasting in the PyTorch training loop. The result of a nn.Linear operation always has shape [B,D], where B is the batch size and D is the output dimension. Therefore, in your mean_squared_error function ypred has shape [32,1] and ytrue has shape [32]. 
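The mismatch is easy to reproduce with a tiny NumPy sketch (illustrative only, not part of the original question; PyTorch follows the same broadcasting rules):

```python
import numpy as np

ypred = np.zeros((32, 1))  # shape of the nn.Linear output: [batch, 1]
ytrue = np.zeros(32)       # shape of the targets: [batch]

diff = ytrue - ypred       # broadcasting pairs every target with every prediction
print(diff.shape)          # -> (32, 32), not the intended (32,)

print((ytrue - ypred.flatten()).shape)  # -> (32,), the intended elementwise difference
```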
By the broadcasting rules used by NumPy and PyTorch this means that ytrue - ypred has shape [32,32]. What you almost certainly meant is for ypred to have shape [32]. This can be accomplished in many ways; probably the most readable is to use Tensor.flatten ```py class TorchLinearModel(nn.Module): ... def forward(self, x): x = self.hidden_layer(x) x = self.output_layer(x) return x.flatten() ``` which produces the following train\/val curves", "best_answers_score":0.8, "library_name":"tensorflow", "question_url":"https:\/\/stackoverflow.com\/questions\/73600481\/400-higher-error-with-pytorch-compared-with-identical-keras-model-with-adam-op", "best_answers_votes":70, "question_length":8254, "response_length":732 }, { "question":"Is there an example on how to generate protobuf files holding trained TensorFlow graphs I am looking at Google's example on how to deploy and use a pre-trained Tensorflow graph (model) on Android. This example uses a .pb file at: https:\/\/storage.googleapis.com\/download.tensorflow.org\/models\/inception5h.zip which is a link to a file that downloads automatically. The example shows how to load the .pb file to a Tensorflow session and use it to perform classification, but it doesn't seem to mention how to generate such a .pb file, after a graph is trained (e.g., in Python). Are there any examples on how to do that?", "response":"EDIT: The freeze_graph.py script, which is part of the TensorFlow repository, now serves as a tool that generates a protocol buffer representing a \"frozen\" trained model, from an existing TensorFlow GraphDef and a saved checkpoint. It uses the same steps as described below, but it is much easier to use. Currently the process isn't very well documented (and subject to refinement), but the approximate steps are as follows: Build and train your model as a tf.Graph called g_1. Fetch the final values of each of the variables and store them as numpy arrays (using Session.run()).
In a new tf.Graph called g_2, create tf.constant() tensors for each of the variables, using the value of the corresponding numpy array fetched in step 2. Use tf.import_graph_def() to copy nodes from g_1 into g_2, and use the input_map argument to replace each variable in g_1 with the corresponding tf.constant() tensors created in step 3. You may also want to use input_map to specify a new input tensor (e.g. replacing an input pipeline with a tf.placeholder()). Use the return_elements argument to specify the name of the predicted output tensor. Call g_2.as_graph_def() to get a protocol buffer representation of the graph. (NOTE: The generated graph will have extra nodes in the graph for training. Although it is not part of the public API, you may wish to use the internal graph_util.extract_sub_graph() function to strip these nodes from the graph.)", "best_answers_score":0.8, "library_name":"tensorflow", "question_url":"https:\/\/stackoverflow.com\/questions\/34343259\/is-there-an-example-on-how-to-generate-protobuf-files-holding-trained-tensorflow", "best_answers_votes":39, "question_length":618, "response_length":1434 }, { "question":"\"zsh: illegal hardware instruction python\" when installing Tensorflow on macbook pro M1 [duplicate] This question already has answers here: Why does loading tensorflow on Mac lead to \"Process finished with exit code 132 (interrupted by signal 4: SIGILL)\"? (7 answers) Closed 3 years ago. This post was edited and submitted for review 2 years ago and failed to reopen the post: Original close reason(s) were not resolved I'm trying to get tensorflow working on my MacBook pro M1. However, I keep getting the following error when trying to import: zsh: illegal hardware instruction python I have downloaded and installed tensorflow via this link. These were my installation steps: install a venv: python3 -m venv venv. drag the install_venv.sh (which is located within the downloaded folder) file to the terminal, add -p at the end. 
select the directory of the venv as the location where tensorflow should be installed. activate the venv. type \"python\". try to import tensorflow: import tensorflow as tf. I'm using Python 3.8.2.", "response":"This worked for me after trying a bunch of solutions to no avail. Step 1 Using pyenv, install Python version 3.8.5 and set it as your default python version. This tutorial (https:\/\/realpython.com\/intro-to-pyenv\/) is helpful for getting pyenv configured properly. Step 1.1 Use this post (https:\/\/github.com\/pyenv\/pyenv\/issues\/1446) if you have trouble running pyenv in zsh. Step 1.2 Once you have Python version 3.8.5 running, which you can check by running python -V, you should see: ``` Python 3.8.5 ``` Step 2 Install virtualenv via pip install virtualenv Step 2.1 Create a virtual environment by running virtualenv ENV Step 2.2 Activate that virtual environment by running source ENV\/bin\/activate Step 3 Install the tensorflow wheel called tensorflow-2.4.1-py3-none-any.whl located at this public google drive link https:\/\/drive.google.com\/drive\/folders\/1oSipZLnoeQB0Awz8U68KYeCPsULy_dQ7 Step 3.1 Assuming you simply installed the wheel to Downloads, run pip install ~\/Downloads\/tensorflow-2.4.1-py3-none-any.whl in your activated virtual environment Step 4 Type python, which will bring up >>> in your terminal, and type ``` >>> import tensorflow >>> ``` If there is no \"zsh: illegal hardware instruction\" error, you should be good to go. Note: If you are using anaconda, the above will also work.
You can skip the virtual env steps (assuming you have a virtual env activated through Conda) and just go straight to the pip install as mentioned above (steps 3 and later).", "best_answers_score":0.8, "library_name":"tensorflow", "question_url":"https:\/\/stackoverflow.com\/questions\/65383338\/zsh-illegal-hardware-instruction-python-when-installing-tensorflow-on-macbook", "best_answers_votes":41, "question_length":1026, "response_length":1469 }, { "question":"what is XLA_GPU and XLA_CPU for tensorflow I can list gpu devices sing the following tensorflow code: ``` import tensorflow as tf from tensorflow.python.client import device_lib print(device_lib.list_local_devices()) ``` The result is: ``` [name: \"\/device:CPU:0\" device_type: \"CPU\" memory_limit: 268435456 locality { } incarnation: 17897160860519880862, name: \"\/device:XLA_GPU:0\" device_type: \"XLA_GPU\" memory_limit: 17179869184 locality { } incarnation: 9751861134541508701 physical_device_desc: \"device: XLA_GPU device\", name: \"\/device:XLA_CPU:0\" device_type: \"XLA_CPU\" memory_limit: 17179869184 locality { } incarnation: 5368380567397471193 physical_device_desc: \"device: XLA_CPU device\", name: \"\/device:GPU:0\" device_type: \"GPU\" memory_limit: 21366299034 locality { bus_id: 1 links { link { device_id: 1 type: \"StreamExecutor\" strength: 1 } } } incarnation: 7110958745101815531 physical_device_desc: \"device: 0, name: Tesla P40, pci bus id: 0000:02:00.0, compute capability: 6.1\", name: \"\/device:GPU:1\" device_type: \"GPU\" memory_limit: 17336821351 locality { bus_id: 1 links { link { type: \"StreamExecutor\" strength: 1 } } } incarnation: 3366465227705362600 physical_device_desc: \"device: 1, name: Tesla P40, pci bus id: 0000:03:00.0, compute capability: 6.1\", name: \"\/device:GPU:2\" device_type: \"GPU\" memory_limit: 22590563943 locality { bus_id: 2 numa_node: 1 links { link { device_id: 3 type: \"StreamExecutor\" strength: 1 } } } incarnation: 8774017944003495680 physical_device_desc: 
\"device: 2, name: Tesla P40, pci bus id: 0000:83:00.0, compute capability: 6.1\", name: \"\/device:GPU:3\" device_type: \"GPU\" memory_limit: 22590563943 locality { bus_id: 2 numa_node: 1 links { link { device_id: 2 type: \"StreamExecutor\" strength: 1 } } } incarnation: 2007348906807258050 physical_device_desc: \"device: 3, name: Tesla P40, pci bus id: 0000:84:00.0, compute capability: 6.1\"] ``` I want to know what is XLA_GPU and XLA_CPU?", "response":"As mentioned in the docs, XLA stands for \"accelerated linear algebra\". It's Tensorflow's relatively new optimizing compiler that can further speed up your ML models' GPU operations by combining what used to be multiple CUDA kernels into one (simplifying because this isn't that important for your question). To your question, my understanding is that XLA is separate enough from the default Tensorflow compiler that they separately register GPU devices and have slightly different constraints on which GPUs they treat as visible (see here for more on this). Looking at the output of the command you ran, it looks like XLA is registering 1 GPU and normal TF is registering 3. I'm not sure if you're having issues or are just curious, but if it's the former, I recommend taking a look at the issue I linked above and this one. Tensorflow's finicky about which CUDA\/cuDNN versions with which it works flawlessly and it's possible you're using incompatible versions. (If you're not having issues, then hopefully the first part of my answer is sufficient.)", "best_answers_score":0.8, "library_name":"tensorflow", "question_url":"https:\/\/stackoverflow.com\/questions\/52943489\/what-is-xla-gpu-and-xla-cpu-for-tensorflow", "best_answers_votes":28, "question_length":1924, "response_length":1051 }, { "question":"ValueError: Can not squeeze dim[1], expected a dimension of 1, got 3 for 'sparse_softmax_cross_entropy_loss I tried to replace the training and validation data with local images. 
But when running the training code, it came up with the error : ValueError: Can not squeeze dim[1], expected a dimension of 1, got 3 for 'sparse_softmax_cross_entropy_loss\/remove_squeezable_dimensions\/Squeeze' (op: 'Squeeze') with input shapes: [100,3]. I don't know how to fix it up. There is no visible variable in the model definition code. The code was modified from TensorFlow tutorial. The images are jpgs. Here is the detail Error message: ``` INFO:tensorflow:Using default config. INFO:tensorflow:Using config: {'_log_step_count_steps': 100, '_is_chief': True, '_model_dir': '\/tmp\/mnist_convnet_model', '_tf_random_seed': None, '_session_config': None, '_save_checkpoints_secs': 600, '_num_worker_replicas': 1, '_save_checkpoints_steps': None, '_service': None, '_keep_checkpoint_max': 5, '_cluster_spec': , '_keep_checkpoint_every_n_hours': 10000, '_task_type': 'worker', '_master': '', '_save_summary_steps': 100, '_num_ps_replicas': 0, '_task_id': 0} Traceback (most recent call last): File \"C:\\Users\\ASUS\\AppData\\Local\\Programs\\Python\\Python35\\lib\\site-packages\\tensorflow\\python\\framework\\common_shapes.py\", line 686, in _call_cpp_shape_fn_impl input_tensors_as_shapes, status) File \"C:\\Users\\ASUS\\AppData\\Local\\Programs\\Python\\Python35\\lib\\site-packages\\tensorflow\\python\\framework\\errors_impl.py\", line 473, in __exit__ c_api.TF_GetCode(self.status.status)) tensorflow.python.framework.errors_impl.InvalidArgumentError: Can not squeeze dim[1], expected a dimension of 1, got 3 for 'sparse_softmax_cross_entropy_loss\/remove_squeezable_dimensions\/Squeeze' (op: 'Squeeze') with input shapes: [100,3]. 
During handling of the above exception, another exception occurred: Traceback (most recent call last): File \"D:\\tf_exe_5_make_image_lables\\cnn_mnist.py\", line 214, in tf.app.run() File \"C:\\Users\\ASUS\\AppData\\Local\\Programs\\Python\\Python35\\lib\\site-packages\\tensorflow\\python\\platform\\app.py\", line 124, in run _sys.exit(main(argv)) File \"D:\\tf_exe_5_make_image_lables\\cnn_mnist.py\", line 203, in main hooks=[logging_hook]) File \"C:\\Users\\ASUS\\AppData\\Local\\Programs\\Python\\Python35\\lib\\site-packages\\tensorflow\\python\\estimator\\estimator.py\", line 314, in train loss = self._train_model(input_fn, hooks, saving_listeners) File \"C:\\Users\\ASUS\\AppData\\Local\\Programs\\Python\\Python35\\lib\\site-packages\\tensorflow\\python\\estimator\\estimator.py\", line 743, in _train_model features, labels, model_fn_lib.ModeKeys.TRAIN, self.config) File \"C:\\Users\\ASUS\\AppData\\Local\\Programs\\Python\\Python35\\lib\\site-packages\\tensorflow\\python\\estimator\\estimator.py\", line 725, in _call_model_fn model_fn_results = self._model_fn(features=features, **kwargs) File \"D:\\tf_exe_5_make_image_lables\\cnn_mnist.py\", line 67, in cnn_model_fn loss = tf.losses.sparse_softmax_cross_entropy(labels=labels, logits=logits) File \"C:\\Users\\ASUS\\AppData\\Local\\Programs\\Python\\Python35\\lib\\site-packages\\tensorflow\\python\\ops\\losses\\losses_impl.py\", line 790, in sparse_softmax_cross_entropy labels, logits, weights, expected_rank_diff=1) File \"C:\\Users\\ASUS\\AppData\\Local\\Programs\\Python\\Python35\\lib\\site-packages\\tensorflow\\python\\ops\\losses\\losses_impl.py\", line 720, in _remove_squeezable_dimensions labels, predictions, expected_rank_diff=expected_rank_diff) File \"C:\\Users\\ASUS\\AppData\\Local\\Programs\\Python\\Python35\\lib\\site-packages\\tensorflow\\python\\ops\\confusion_matrix.py\", line 76, in remove_squeezable_dimensions labels = array_ops.squeeze(labels, [-1]) File 
\"C:\\Users\\ASUS\\AppData\\Local\\Programs\\Python\\Python35\\lib\\site-packages\\tensorflow\\python\\ops\\array_ops.py\", line 2490, in squeeze return gen_array_ops._squeeze(input, axis, name) File \"C:\\Users\\ASUS\\AppData\\Local\\Programs\\Python\\Python35\\lib\\site-packages\\tensorflow\\python\\ops\\gen_array_ops.py\", line 7049, in _squeeze \"Squeeze\", input=input, squeeze_dims=axis, name=name) File \"C:\\Users\\ASUS\\AppData\\Local\\Programs\\Python\\Python35\\lib\\site-packages\\tensorflow\\python\\framework\\op_def_library.py\", line 787, in _apply_op_helper op_def=op_def) File \"C:\\Users\\ASUS\\AppData\\Local\\Programs\\Python\\Python35\\lib\\site-packages\\tensorflow\\python\\framework\\ops.py\", line 3162, in create_op compute_device=compute_device) File \"C:\\Users\\ASUS\\AppData\\Local\\Programs\\Python\\Python35\\lib\\site-packages\\tensorflow\\python\\framework\\ops.py\", line 3208, in _create_op_helper set_shapes_for_outputs(op) File \"C:\\Users\\ASUS\\AppData\\Local\\Programs\\Python\\Python35\\lib\\site-packages\\tensorflow\\python\\framework\\ops.py\", line 2427, in set_shapes_for_outputs return _set_shapes_for_outputs(op) File \"C:\\Users\\ASUS\\AppData\\Local\\Programs\\Python\\Python35\\lib\\site-packages\\tensorflow\\python\\framework\\ops.py\", line 2400, in _set_shapes_for_outputs shapes = shape_func(op) File \"C:\\Users\\ASUS\\AppData\\Local\\Programs\\Python\\Python35\\lib\\site-packages\\tensorflow\\python\\framework\\ops.py\", line 2330, in call_with_requiring return call_cpp_shape_fn(op, require_shape_fn=True) File \"C:\\Users\\ASUS\\AppData\\Local\\Programs\\Python\\Python35\\lib\\site-packages\\tensorflow\\python\\framework\\common_shapes.py\", line 627, in call_cpp_shape_fn require_shape_fn) File \"C:\\Users\\ASUS\\AppData\\Local\\Programs\\Python\\Python35\\lib\\site-packages\\tensorflow\\python\\framework\\common_shapes.py\", line 691, in _call_cpp_shape_fn_impl raise ValueError(err.message) ValueError: Can not squeeze 
dim[1], expected a dimension of 1, got 3 for 'sparse_softmax_cross_entropy_loss\/remove_squeezable_dimensions\/Squeeze' (op: 'Squeeze') with input shapes: [100,3]. >>> ``` Here is my code: ``` from __future__ import absolute_import from __future__ import division from __future__ import print_function #imports import numpy as np import tensorflow as tf import glob import cv2 import random import matplotlib.pylab as plt import pandas as pd import sys as system from mlxtend.preprocessing import one_hot from sklearn import preprocessing from sklearn.preprocessing import LabelEncoder from sklearn.preprocessing import OneHotEncoder tf.logging.set_verbosity(tf.logging.INFO) def cnn_model_fn(features, labels, mode): \"\"\"Model function for CNN\"\"\" #Input Layer input_layer = tf.reshape(features[\"x\"], [-1,320,320,3]) #Convolutional Layer #1 conv1 = tf.layers.conv2d( inputs = input_layer, filters = 32, kernel_size=[5,5], padding = \"same\", activation=tf.nn.relu) #Pooling Layer #1 pool1 = tf.layers.max_pooling2d(inputs=conv1, pool_size=[2,2], strides=2) #Convolutional Layer #2 and Pooling Layer #2 conv2 = tf.layers.conv2d( inputs=pool1, filters=64, kernel_size=[5,5], padding=\"same\", activation=tf.nn.relu) pool2 = tf.layers.max_pooling2d(inputs=conv2, pool_size=[2,2], strides=2) #Dense Layer pool2_flat = tf.reshape(pool2, [-1,80*80*64]) dense = tf.layers.dense(inputs=pool2_flat, units=1024, activation=tf.nn.relu) dropout = tf.layers.dropout( inputs=dense, rate=0.4, training=mode == tf.estimator.ModeKeys.TRAIN) #Logits Layer logits = tf.layers.dense(inputs=dropout, units=3) predictions = { #Generate predictions (for PREDICT and EVAL mode) \"classes\": tf.argmax(input=logits, axis=1), #Add 'softmax_tensor' to the graph. 
It is used for PREDICT and by the #'logging_hook' \"probabilities\": tf.nn.softmax(logits, name=\"softmax_tensor\") } if mode == tf.estimator.ModeKeys.PREDICT: return tf.estimator.EstimatorSpec(mode=mode, predictions=predictions) # Calculate Loss (for both TRAIN and EVAL modes loss = tf.losses.sparse_softmax_cross_entropy(labels=labels, logits=logits) # Configure the Training Op (for TRAIN mode) if mode == tf.estimator.ModeKeys.TRAIN: optimizer = tf.train.GradientDescentOptimizer(learning_rate=0.001) train_op = optimizer.minimize( loss=loss, global_step=tf.train.get_global_step()) return tf.estimator.EstimatorSpec(mode=mode, loss=loss, train_op=train_op) # Add evaluation metrics (for EVAL mode) eval_metric_ops = { \"accuracy\": tf.metrics.accuracy( labels=labels, predictions=predictions[\"classes\"])} return tf.estimator.EstimatorSpec( mode=mode, loss=loss,eval_metric_ops=eval_metric_ops) def main(unused_argv): ''' #Load training and eval data mnist = tf.contrib.learn.datasets.load_dataset(\"mnist\") train_data = mnist.train.images train_labels = np.asarray(mnist.train.labels, dtype=np.int32) eval_data = mnist.test.images eval_labels = np.asarray(mnist.test.labels, dtype=np.int32) ''' #Load cats, dogs and cars image in local folder X_data = [] files = glob.glob(\"data\/cats\/*.jpg\") for myFile in files: image = cv2.imread (myFile) imgR = cv2.resize(image, (320, 320)) imgNR = imgR\/255 X_data.append(imgNR) files = glob.glob(\"data\/dogs\/*.jpg\") for myFile in files: image = cv2.imread (myFile) imgR = cv2.resize(image, (320, 320)) imgNR = imgR\/255 X_data.append(imgNR) files = glob.glob (\"data\/cars\/*.jpg\") for myFile in files: image = cv2.imread (myFile) imgR = cv2.resize(image, (320, 320)) imgNR = imgR\/255 X_data.append (imgNR) #print('X_data count:', len(X_data)) X_data_Val = [] files = glob.glob (\"data\/Validation\/cats\/*.jpg\") for myFile in files: image = cv2.imread (myFile) imgR = cv2.resize(image, (320, 320)) imgNR = imgR\/255 X_data_Val.append 
(imgNR) files = glob.glob (\"data\/Validation\/dogs\/*.jpg\") for myFile in files: image = cv2.imread (myFile) imgR = cv2.resize(image, (320, 320)) imgNR = imgR\/255 X_data_Val.append (imgNR) files = glob.glob (\"data\/Validation\/cars\/*.jpg\") for myFile in files: image = cv2.imread (myFile) imgR = cv2.resize(image, (320, 320)) imgNR = imgR\/255 X_data_Val.append (imgNR) #Feed One hot lables Y_Label = np.zeros(shape=(300,1)) for el in range(0,100): Y_Label[el]=[0] for el in range(101,200): Y_Label[el]=[1] for el in range(201,300): Y_Label[el]=[2] onehot_encoder = OneHotEncoder(sparse=False) #Y_Label_RS = Y_Label.reshape(len(Y_Label), 1) Y_Label_Encode = onehot_encoder.fit_transform(Y_Label) #print('Y_Label_Encode shape:', Y_Label_Encode.shape) Y_Label_Val = np.zeros(shape=(30,1)) for el in range(0, 10): Y_Label_Val[el]=[0] for el in range(11, 20): Y_Label_Val[el]=[1] for el in range(21, 30): Y_Label_Val[el]=[2] #Y_Label_Val_RS = Y_Label_Val.reshape(len(Y_Label_Val), 1) Y_Label_Val_Encode = onehot_encoder.fit_transform(Y_Label_Val) #print('Y_Label_Val_Encode shape:', Y_Label_Val_Encode.shape) train_data = np.array(X_data) train_data = train_data.astype(np.float32) train_labels = np.asarray(Y_Label_Encode, dtype=np.int32) eval_data = np.array(X_data_Val) eval_data = eval_data.astype(np.float32) eval_labels = np.asarray(Y_Label_Val_Encode, dtype=np.int32) print(train_data.shape) print(train_labels.shape) #Create the Estimator mnist_classifier = tf.estimator.Estimator( model_fn=cnn_model_fn, model_dir=\"\/tmp\/mnist_convnet_model\") # Set up logging for predictions tensor_to_log = {\"probabilities\": \"softmax_tensor\"} logging_hook = tf.train.LoggingTensorHook( tensors=tensor_to_log, every_n_iter=50) # Train the model train_input_fn = tf.estimator.inputs.numpy_input_fn( x={\"x\": train_data}, y=train_labels, batch_size=100, num_epochs=None, shuffle=True) mnist_classifier.train( input_fn=train_input_fn, #original steps are 20000 steps=1, hooks=[logging_hook]) # 
Evaluate the model and print results eval_input_fn = tf.estimator.inputs.numpy_input_fn( x={\"x\": eval_data}, y=eval_labels, num_epochs=1, shuffle=False) eval_results = mnist_classifier.evaluate(input_fn=eval_input_fn) print(eval_results) if __name__ == \"__main__\": tf.app.run() ```", "response":"The error here is from tf.losses.sparse_softmax_cross_entropy(labels=labels, logits=logits). The TensorFlow documentation clearly states that \"labels vector must provide a single specific index for the true class for each row of logits\". So your labels vector must include only class-indices like 0,1,2 and not their respective one-hot-encodings like [1,0,0], [0,1,0], [0,0,1]. Reproducing the error to explain further: ``` import numpy as np import tensorflow as tf # Create random-array and assign as logits tensor np.random.seed(12345) logits = tf.convert_to_tensor(np.random.sample((4,4))) print logits.get_shape() #[4,4] # Create random-labels (Assuming only 4 classes) labels = tf.convert_to_tensor(np.array([2, 2, 0, 1])) loss_1 = tf.losses.sparse_softmax_cross_entropy(labels, logits) sess = tf.Session() sess.run(tf.global_variables_initializer()) print 'Loss: {}'.format(sess.run(loss_1)) # 1.44836854 # Now giving one-hot-encodings in place of class-indices for labels wrong_labels = tf.convert_to_tensor(np.array([[0,0,1,0], [0,0,1,0], [1,0,0,0],[0,1,0,0]])) loss_2 = tf.losses.sparse_softmax_cross_entropy(wrong_labels, logits) # This should give you a similar error as soon as you define it ``` So try giving class-indices instead of one-hot encodings in your Y_Labels vector. Hope this clears your doubt.", "best_answers_score":0.8, "library_name":"tensorflow", "question_url":"https:\/\/stackoverflow.com\/questions\/49083984\/valueerror-can-not-squeeze-dim1-expected-a-dimension-of-1-got-3-for-sparse", "best_answers_votes":60, "question_length":11413, "response_length":1319 }, { "question":"Error while importing Tensorflow in Python 2.7 in Ubuntu 12.04. 
'GLIBC_2.17 not found' I have installed the Tensorflow bindings with Python successfully. But when I try to import Tensorflow, I get the following error. ``` ImportError: \/lib\/x86_64-linux-gnu\/libc.so.6: version `GLIBC_2.17' not found (required by \/usr\/local\/lib\/python2.7\/dist-packages\/tensorflow\/python\/_pywrap_tensorflow.so) ``` I have tried to update GLIBC_2.15 to 2.17, but no luck.", "response":"I've just managed to install tensorflow 0.12rc0 on CentOS 6.5 with glibc 2.12, without having root privileges. Simply installing the tensorflow binary via pip was giving me an error related to the GLIBC version as well. Basically, you have 4 options for how to deal with this (each with some advantages and disadvantages): Option 1 - Upgrade your system GLIBC globally. This is probably the best option, if your system supports this, you have root privileges, and you are confident that this upgrade won't break anything for some weird reason. Ultimately, this goes up to upgrading the whole Linux distribution. Here's a nice short list of default GLIBC versions on popular distributions. Option 2 - Add a second GLIBC to your system Compile or download a binary. The simplest and most straightforward option. Especially if you only need to run a few simple scripts. It is possible to have multiple versions of glibc on the same system, but one should do this with great care. You won't destroy your system if all your changes are limited to a virtual environment. Many programs installed\/compiled before, which might be relying on the old GLIBC, would just crash in your new environment (e.g. your python IDE), including most basic bash commands, like \"ls\", \"cd\", etc. Other side effects like significant memory leaks are also possible. Thus, it's a very bad idea to add a new GLIBC to your normal environment, e.g. via .bashrc. On the other hand, if you need some specific tool in your new virtual environment, you may recompile it, linking against the new GLIBC.
That way, it would work OK in your new environment. However, personally, I quickly gave up recompiling everything I need in a new environment (without root and a package manager). A slightly different approach is officially offered by the GLIBC developers, for testing new GLIBC builds. Option 3 - Patch tensorflow This may work for TF 0.6.0, but you would probably have to start again from scratch when each new tensorflow version is released. E.g. here's a fix for 0.9.0. Option 4 - Compile tensorflow from source If you re-compile it from source and link against your existing GLIBC, a newer GLIBC would no longer be needed. Somehow, this option was not mentioned in any answer here yet. Imho, this is the best option, both \"in general\" and \"specifically for tensorflow\". This works OK with r0.11 and would probably work for years, but theoretically it might break in some newer tensorflow version, if they decide to actually use some new GLIBC functionality not present in older versions. To be honest, building tensorflow from source is not straightforward, especially on outdated systems. A quick summary of \"building tensorflow on an outdated system\": Although the official guide provides an \"installing from sources\" section, there are a few tricks you need to do to build it on an outdated system. Here I assume that you do not have root privileges (if you do, you probably would be able to install the same prerequisites with a package manager, rather than manually building them from source). I found two well-documented success stories: #1, #2 and a number of useful posts on the official github (mostly about a set of libraries to link inside the binary): #1, #2, #3, #4. I had to combine the tricks described there to successfully compile TF in my case. First of all, check your gcc --version, and verify that it supports c++11. Mine was 4.4.7, so it wouldn't work. I've downloaded the gcc-4.9.4 source code, and compiled it.
This step is pretty straightforward, but the compilation itself may take a few hours. As a workaround for an issue in bazel, I've compiled gcc with hardcoded paths to as, ld and nm. However, you may try other workarounds: (1, 2). ``` #!\/bin\/sh unset LIBRARY_PATH CPATH C_INCLUDE_PATH unset PKG_CONFIG_PATH CPLUS_INCLUDE_PATH INCLUDE LD_LIBRARY_PATH cd gcc-4.9.4 .\/contrib\/download_prerequisites mkdir objdir cd objdir # I've added --disable-multilib to fix the following error: # \/usr\/bin\/ld: crt1.o: No such file: No such file or directory # collect2: ld returned 1 exit status # configure: error: I suspect your system does not have 32-bit # developement libraries (libc and headers). If you have them, # rerun configure with --enable-multilib. If you do not have them, # and want to build a 64-bit-only compiler, rerun configure # with --disable-multilib. ..\/configure --prefix=$HOME\/opt\/gcc-4.9.4 \\ --disable-multilib \\ --disable-nls \\ --enable-languages=c,c++ \\ --with-ld=\/usr\/bin\/ld \\ --with-nm=\/usr\/bin\/nm \\ --with-as=\/usr\/bin\/as make make install ``` Check your java --version. Bazel requires JDK 8; install it if necessary. (They still provide some JDK 7-related downloads for bazel-0.4.1, but it looks like they consider it deprecated.) I've created a separate use_gcc_4.9.4.sh file with the necessary environment variables. I use source .\/use_gcc_4.9.4.sh when I need to do something related to this newer compiler. ``` #!\/bin\/sh this=$HOME\/opt\/gcc-4.9.4 export PATH=$this\/bin:$PATH export CPATH=$this\/include:$CPATH export LIBRARY_PATH=$this\/lib:$LIBRARY_PATH export LIBRARY_PATH=$this\/lib64:$LIBRARY_PATH export LD_LIBRARY_PATH=$this\/lib:$LD_LIBRARY_PATH export LD_LIBRARY_PATH=$this\/lib64:$LD_LIBRARY_PATH ``` The current bazel binary (0.4.1) requires GLIBC 2.14, so we have to compile bazel from source as well (with our new gcc). Works OK, unless you are only allowed to run a very limited number of threads on the target machine.
(This post describes some additional workarounds, but in my case they were not needed, maybe due to recent updates in bazel code.) Obtain tensorflow source code git clone https:\/\/github.com\/tensorflow\/tensorflow, and install prerequisites you need (CUDA,cuDNN,python, etc). See official guide. If you're not using default system gcc (e.g. if you had to compile newer gcc, like discussed above), add the following linker flags to tensorflow\/third_party\/gpus\/crosstool\/CROSSTOOL.tpl, line 59: ``` linker_flag: \"-L\/home\/username\/localinst\/opt\/gcc-4.9.4\/lib64\" linker_flag: \"-Wl,-rpath,\/home\/username\/localinst\/opt\/gcc-4.9.4\/lib64\" ``` Without this step, you would likely run into error messages like this: ``` # ERROR: \/home\/username\/localdistr\/src\/tensorflow\/tensorflow\/tensorflow\/core\/debug\/BUILD:33:1: null failed: protoc failed: error executing command bazel-out\/host\/bin\/external\/protobuf\/protoc '--cpp_out=bazel-out\/local_linux-py3-opt\/genfiles\/' '--plugin=protoc-gen-grpc=bazel-out\/host\/bin\/external\/grpc\/grpc_cpp_plugin' ... (remaining 8 argument(s) skipped): com.google.devtools.build.lib.shell.BadExitStatusException: Process exited with status 1. # bazel-out\/host\/bin\/external\/protobuf\/protoc: \/usr\/lib64\/libstdc++.so.6: version `GLIBCXX_3.4.20' not found (required by bazel-out\/host\/bin\/external\/protobuf\/protoc) # bazel-out\/host\/bin\/external\/protobuf\/protoc: \/usr\/lib64\/libstdc++.so.6: version `CXXABI_1.3.8' not found (required by bazel-out\/host\/bin\/external\/protobuf\/protoc) # bazel-out\/host\/bin\/external\/protobuf\/protoc: \/usr\/lib64\/libstdc++.so.6: version `GLIBCXX_3.4.18' not found (required by bazel-out\/host\/bin\/external\/protobuf\/protoc) ``` Finally, to avoid GLIBC dependencies, we have to statically link some libraries, by adding the -lrt linker flag (maybe -lm as well). 
I found multiple posts, suggesting to add this in a different manner: via bazel command line (may sound reasonable, but not working for me on current tensorflow version, somehow), via \"bazel-tensorflow\/external\/protobuf\/BUILD\"(not sure if it's working, but this does not look convenient - this file is only created during the build attempt itself) via \"third_party\/gpus\/crosstool\/CROSSTOOL.tpl\" (the same file we've just edited in the previous step, just below the lines we've already added). ``` linker_flag: \"-lrt\" linker_flag: \"-lm\" ``` via \"tensorflow\/tensorflow.bzl\" (works for me, but less convenient just because you have to edit one more file. I'm not sure it's 100% equivalent to the previous point) Without -lrt I ran into GLIBC-version-specific error again, trying to import tensorflow: ``` # ImportError: \/lib64\/libc.so.6: version `GLIBC_2.14' not found (required by \/home\/username\/anaconda3\/envs\/myenvname\/lib\/python3.5\/site-packages\/tensorflow\/python\/_pywrap_tensorflow.so) ``` Without -lm you may run into this (for me, it turned out to be not necessary). Run the build process. 
```bash source .\/use_gcc_4.9.4.sh .\/configure bazel build -c opt --config=cuda \/\/tensorflow\/tools\/pip_package:build_pip_package bazel-bin\/tensorflow\/tools\/pip_package\/build_pip_package \/tmp\/tensorflow_pkg pip install --upgrade \/tmp\/tensorflow_pkg\/tensorflow-0.12.0rc0-cp35-cp35m-linux_x86_64.whl ``` Try to run the following simple python script to test if the most basic stuff is functioning: ``` import tensorflow as tf hello = tf.constant('Hello, TensorFlow!') sess = tf.Session() print(sess.run(hello)) a = tf.constant(10) b = tf.constant(32) print(sess.run(a + b)) ```", "best_answers_score":0.8, "library_name":"tensorflow", "question_url":"https:\/\/stackoverflow.com\/questions\/33655731\/error-while-importing-tensorflow-in-python-2-7-in-ubuntu-12-04-glibc-2-17-not", "best_answers_votes":24, "question_length":451, "response_length":8867 }, { "question":"How do you read Tensorboard files programmatically? How can you write a python script to read Tensorboard log files, extracting the loss and accuracy and other numerical data, without launching the GUI tensorboard --logdir=...?", "response":"You can use TensorBoard's Python classes or script to extract the data: How can I export data from TensorBoard? If you'd like to export data to visualize elsewhere (e.g. iPython Notebook), that's possible too. You can directly depend on the underlying classes that TensorBoard uses for loading data: python\/summary\/event_accumulator.py (for loading data from a single run) or python\/summary\/event_multiplexer.py (for loading data from multiple runs, and keeping it organized). These classes load groups of event files, discard data that was \"orphaned\" by TensorFlow crashes, and organize the data by tag. As another option, there is a script (tensorboard\/scripts\/serialize_tensorboard.py) which will load a logdir just like TensorBoard does, but write all of the data out to disk as json instead of starting a server. 
This script is setup to make \"fake TensorBoard backends\" for testing, so it is a bit rough around the edges. Using EventAccumulator: ``` # In [1]: from tensorflow.python.summary import event_accumulator # deprecated In [1]: from tensorboard.backend.event_processing import event_accumulator In [2]: ea = event_accumulator.EventAccumulator('events.out.tfevents.x.ip-x-x-x-x', ...: size_guidance={ # see below regarding this argument ...: event_accumulator.COMPRESSED_HISTOGRAMS: 500, ...: event_accumulator.IMAGES: 4, ...: event_accumulator.AUDIO: 4, ...: event_accumulator.SCALARS: 0, ...: event_accumulator.HISTOGRAMS: 1, ...: }) In [3]: ea.Reload() # loads events from file Out[3]: In [4]: ea.Tags() Out[4]: {'audio': [], 'compressedHistograms': [], 'graph': True, 'histograms': [], 'images': [], 'run_metadata': [], 'scalars': ['Loss', 'Epsilon', 'Learning_rate']} In [5]: ea.Scalars('Loss') Out[5]: [ScalarEvent(wall_time=1481232633.080754, step=1, value=1.6365480422973633), ScalarEvent(wall_time=1481232633.2001867, step=2, value=1.2162202596664429), ScalarEvent(wall_time=1481232633.3877788, step=3, value=1.4660096168518066), ScalarEvent(wall_time=1481232633.5749283, step=4, value=1.2405034303665161), ScalarEvent(wall_time=1481232633.7419815, step=5, value=0.897326648235321), ...] ``` size_guidance: ``` size_guidance: Information on how much data the EventAccumulator should store in memory. The DEFAULT_SIZE_GUIDANCE tries not to store too much so as to avoid OOMing the client. The size_guidance should be a map from a `tagType` string to an integer representing the number of items to keep per tag for items of that `tagType`. If the size is 0, all events are stored. 
```", "best_answers_score":0.8, "library_name":"tensorflow", "question_url":"https:\/\/stackoverflow.com\/questions\/41074688\/how-do-you-read-tensorboard-files-programmatically", "best_answers_votes":59, "question_length":227, "response_length":2505 }, { "question":"TensorFlow ValueError: Cannot feed value of shape (64, 64, 3) for Tensor u'Placeholder:0', which has shape '(?, 64, 64, 3)' I am new to TensorFlow and machine learning. I am trying to classify two objects a cup and a pendrive (jpeg images). I have trained and exported a model.ckpt successfully. Now I am trying to restore the saved model.ckpt for prediction. Here is the script: ``` import tensorflow as tf import math import numpy as np from PIL import Image from numpy import array # image parameters IMAGE_SIZE = 64 IMAGE_CHANNELS = 3 NUM_CLASSES = 2 def main(): image = np.zeros((64, 64, 3)) img = Image.open('.\/IMG_0849.JPG') img = img.resize((64, 64)) image = array(img).reshape(64,64,3) k = int(math.ceil(IMAGE_SIZE \/ 2.0 \/ 2.0 \/ 2.0 \/ 2.0)) # Store weights for our convolution and fully-connected layers with tf.name_scope('weights'): weights = { # 5x5 conv, 3 input channel, 32 outputs each 'wc1': tf.Variable(tf.random_normal([5, 5, 1 * IMAGE_CHANNELS, 32])), # 5x5 conv, 32 inputs, 64 outputs 'wc2': tf.Variable(tf.random_normal([5, 5, 32, 64])), # 5x5 conv, 64 inputs, 128 outputs 'wc3': tf.Variable(tf.random_normal([5, 5, 64, 128])), # 5x5 conv, 128 inputs, 256 outputs 'wc4': tf.Variable(tf.random_normal([5, 5, 128, 256])), # fully connected, k * k * 256 inputs, 1024 outputs 'wd1': tf.Variable(tf.random_normal([k * k * 256, 1024])), # 1024 inputs, 2 class labels (prediction) 'out': tf.Variable(tf.random_normal([1024, NUM_CLASSES])) } # Store biases for our convolution and fully-connected layers with tf.name_scope('biases'): biases = { 'bc1': tf.Variable(tf.random_normal([32])), 'bc2': tf.Variable(tf.random_normal([64])), 'bc3': tf.Variable(tf.random_normal([128])), 'bc4': 
tf.Variable(tf.random_normal([256])), 'bd1': tf.Variable(tf.random_normal([1024])), 'out': tf.Variable(tf.random_normal([NUM_CLASSES])) } saver = tf.train.Saver() with tf.Session() as sess: saver.restore(sess, \".\/model.ckpt\") print \"...Model Loaded...\" x_ = tf.placeholder(tf.float32, shape=[None, IMAGE_SIZE , IMAGE_SIZE , IMAGE_CHANNELS]) y_ = tf.placeholder(tf.float32, shape=[None, NUM_CLASSES]) keep_prob = tf.placeholder(tf.float32) init = tf.initialize_all_variables() sess.run(init) my_classification = sess.run(tf.argmax(y_, 1), feed_dict={x_:image}) print 'Neural Network predicted', my_classification[0], \"for your image\" if __name__ == '__main__': main() ``` When I run the above script for prediction I get the following error: ``` ValueError: Cannot feed value of shape (64, 64, 3) for Tensor u'Placeholder:0', which has shape '(?, 64, 64, 3)' ``` What am I doing wrong? And how do I fix the shape of the numpy array?", "response":"image has a shape of (64,64,3). Your input placeholder x_ has a shape of (?,64,64,3). The problem is that you're feeding the placeholder with a value of a different shape. You have to feed it with a value of (1,64,64,3) = a batch of 1 image. Just reshape your image value to a batch with size one. ``` image = array(img).reshape(1,64,64,3) ``` P.S.: The fact that the input placeholder accepts a batch of images means that you can run predictions for a batch of images in parallel.
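To make the fix concrete, here is a minimal NumPy sketch (dummy zero-filled images standing in for the real ones in the question):

```python
import numpy as np

# three hypothetical images, each with the question's shape (64, 64, 3)
images = [np.zeros((64, 64, 3)) for _ in range(3)]

# a single image must become a batch of one before it matches (?, 64, 64, 3)
single = images[0].reshape(1, 64, 64, 3)
print(single.shape)  # (1, 64, 64, 3)

# several images can be stacked into one batch and predicted in parallel
batch = np.stack(images)
print(batch.shape)  # (3, 64, 64, 3)
```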
You can try to read more than 1 image (N images) and then build a batch of N images, using a tensor with shape (N,64,64,3)", "best_answers_score":0.8, "library_name":"tensorflow", "question_url":"https:\/\/stackoverflow.com\/questions\/40430186\/tensorflow-valueerror-cannot-feed-value-of-shape-64-64-3-for-tensor-uplace", "best_answers_votes":48, "question_length":2625, "response_length":604 }, { "question":"Tensorflow : logits and labels must have the same first dimension I am new in tensoflow and I want to adapt the MNIST tutorial https:\/\/www.tensorflow.org\/tutorials\/layers with my own data (images of 40x40). This is my model function : ``` def cnn_model_fn(features, labels, mode): # Input Layer input_layer = tf.reshape(features, [-1, 40, 40, 1]) # Convolutional Layer #1 conv1 = tf.layers.conv2d( inputs=input_layer, filters=32, kernel_size=[5, 5], # To specify that the output tensor should have the same width and height values as the input tensor # value can be \"same\" ou \"valid\" padding=\"same\", activation=tf.nn.relu) # Pooling Layer #1 pool1 = tf.layers.max_pooling2d(inputs=conv1, pool_size=[2, 2], strides=2) # Convolutional Layer #2 and Pooling Layer #2 conv2 = tf.layers.conv2d( inputs=pool1, filters=64, kernel_size=[5, 5], padding=\"same\", activation=tf.nn.relu) pool2 = tf.layers.max_pooling2d(inputs=conv2, pool_size=[2, 2], strides=2) # Dense Layer pool2_flat = tf.reshape(pool2, [-1, 10 * 10 * 64]) dense = tf.layers.dense(inputs=pool2_flat, units=1024, activation=tf.nn.relu) dropout = tf.layers.dropout( inputs=dense, rate=0.4, training=mode == tf.estimator.ModeKeys.TRAIN) # Logits Layer logits = tf.layers.dense(inputs=dropout, units=2) predictions = { # Generate predictions (for PREDICT and EVAL mode) \"classes\": tf.argmax(input=logits, axis=1), # Add `softmax_tensor` to the graph. It is used for PREDICT and by the # `logging_hook`. 
\"probabilities\": tf.nn.softmax(logits, name=\"softmax_tensor\") } if mode == tf.estimator.ModeKeys.PREDICT: return tf.estimator.EstimatorSpec(mode=mode, predictions=predictions) # Calculate Loss (for both TRAIN and EVAL modes) loss = tf.losses.sparse_softmax_cross_entropy(labels=labels, logits=logits) # Configure the Training Op (for TRAIN mode) if mode == tf.estimator.ModeKeys.TRAIN: optimizer = tf.train.GradientDescentOptimizer(learning_rate=0.001) train_op = optimizer.minimize( loss=loss, global_step=tf.train.get_global_step()) return tf.estimator.EstimatorSpec(mode=mode, loss=loss, train_op=train_op) # Add evaluation metrics (for EVAL mode) eval_metric_ops = { \"accuracy\": tf.metrics.accuracy( labels=labels, predictions=predictions[\"classes\"])} return tf.estimator.EstimatorSpec( mode=mode, loss=loss, eval_metric_ops=eval_metric_ops) ``` I have a shape size error between labels and logits : InvalidArgumentError (see above for traceback): logits and labels must have the same first dimension, got logits shape [3,2] and labels shape [1] filenames_array is an array of 16 string ``` [\"file1.png\", \"file2.png\", \"file3.png\", ...] ``` and labels_array is an array of 16 integer ``` [0,0,1,1,0,1,0,0,0,...] ``` The main function is : ``` # Create the Estimator mnist_classifier = tf.estimator.Estimator(model_fn=cnn_model_fn, model_dir=\"\/tmp\/test_convnet_model\") # Train the model cust_train_input_fn = lambda: train_input_fn_custom( filenames_array=filenames, labels_array=labels, batch_size=1) mnist_classifier.train( input_fn=cust_train_input_fn, steps=20000, hooks=[logging_hook]) ``` I tried to reshape logits without success : logits = tf.reshape(logits, [1, 2]) I need your help, thanks EDIT After more time to search, in the first line of my model function ``` input_layer = tf.reshape(features, [-1, 40, 40, 1]) ``` the \"-1\" that signifies that the batch_size dimension will be dynamically calculated have here the value \"3\". 
The same \"3\" as in my error : logits and labels must have the same first dimension, got logits shape [3,2] and labels shape [1] If I force the value to \"1\" I have this new error : Input to reshape is a tensor with 4800 values, but the requested shape has 1600 Maybe a problem with my features ? EDIT2 : the complete code is here : https:\/\/gist.github.com\/geoffreyp\/cc8e97aab1bff4d39e10001118c6322e EDIT3 I updated the gist with ``` logits = tf.layers.dense(inputs=dropout, units=1) ``` https:\/\/gist.github.com\/geoffreyp\/cc8e97aab1bff4d39e10001118c6322e But I don't completely understand your answer about the batch size, how the batch size can be 3 here whereas I choose a batch size of 1 ? If I choose a batch_size = 3 I have this error : logits and labels must have the same first dimension, got logits shape [9,1] and labels shape [3] I tried to reshape labels : ``` labels = tf.reshape(labels, [3, 1]) ``` and I updated features and labels structure : ``` filenames_train = [['blackcorner-data\/1.png', 'blackcorner-data\/2.png', 'blackcorner-data\/3.png', 'blackcorner-data\/4.png', 'blackcorner-data\/n1.png'], ['blackcorner-data\/n2.png', 'blackcorner-data\/n3.png', 'blackcorner-data\/n4.png', 'blackcorner-data\/11.png', 'blackcorner-data\/21.png'], ['blackcorner-data\/31.png', 'blackcorner-data\/41.png', 'blackcorner-data\/n11.png', 'blackcorner-data\/n21.png', 'blackcorner-data\/n31.png'] ] labels = [[0, 0, 0, 0, 1], [1, 1, 1, 0, 0], [0, 0, 1, 1, 1]] ``` but without success...", "response":"The problem is in your target shape and is related to the correct choice of an appropriate loss function. you have 2 possibilities: 1. 
possibility: if you have a 1D integer-encoded target, you can use sparse_categorical_crossentropy as the loss function ``` n_class = 3 n_features = 100 n_sample = 1000 X = np.random.randint(0,10, (n_sample,n_features)) y = np.random.randint(0,n_class, n_sample) inp = Input((n_features,)) x = Dense(128, activation='relu')(inp) out = Dense(n_class, activation='softmax')(x) model = Model(inp, out) model.compile(loss='sparse_categorical_crossentropy',optimizer='adam',metrics=['accuracy']) history = model.fit(X, y, epochs=3) ``` 2. possibility: if you have one-hot encoded your target so that it has a 2D shape (n_samples, n_class), you can use categorical_crossentropy ``` n_class = 3 n_features = 100 n_sample = 1000 X = np.random.randint(0,10, (n_sample,n_features)) y = pd.get_dummies(np.random.randint(0,n_class, n_sample)).values inp = Input((n_features,)) x = Dense(128, activation='relu')(inp) out = Dense(n_class, activation='softmax')(x) model = Model(inp, out) model.compile(loss='categorical_crossentropy',optimizer='adam',metrics=['accuracy']) history = model.fit(X, y, epochs=3) ```", "best_answers_score":0.8, "library_name":"tensorflow", "question_url":"https:\/\/stackoverflow.com\/questions\/49161174\/tensorflow-logits-and-labels-must-have-the-same-first-dimension", "best_answers_votes":85, "question_length":4876, "response_length":1226 }, { "question":"Error importing tensorflow \"AlreadyExistsError: Another metric with the same name already exists.\" I am running this simple code on Spyder 3.3 with Python 3.7 and TensorFlow 2.0: ``` import tensorflow as tf print(tf.__version__) ``` When I try to run it again in the same IPython console, I get the following error: File \"\/home\/rodrigo\/.local\/lib\/python3.7\/site-packages\/tensorflow_core\/python\/eager\/monitoring.py\", line 121, in init self._metric = self._metric_methods[self._label_length].create(*args) AlreadyExistsError: Another metric with the same name already exists. If I close the IPython console and then open it again, it works fine. I am getting this error in every script that imports TensorFlow. Does anyone know how to solve this? System configuration: Ubuntu 19.04 Spyder 3.3.2 Python 3.7.3 IPython 5.8.0 TensorFlow 2.0.0-rc2
``` I'm using python 3.8.9 64-bit & tensorflow with distutils already installed which is required by tensorboard. Why is this happening?", "response":"This is a known bug which has been patched: https:\/\/github.com\/pytorch\/pytorch\/pull\/69904 You can either use the nightly release of PyTorch, or otherwise downgrade setuptools to version 59.5.0: pip install setuptools==59.5.0", "best_answers_score":0.8, "library_name":"tensorflow", "question_url":"https:\/\/stackoverflow.com\/questions\/70520120\/attributeerror-module-setuptools-distutils-has-no-attribute-version", "best_answers_votes":79, "question_length":679, "response_length":236 }, { "question":"What is the relationship between steps and epochs in TensorFlow? I am going through the TensorFlow Get Started tutorial. In the tf.contrib.learn example, these are two lines of code: ``` input_fn = tf.contrib.learn.io.numpy_input_fn({\"x\":x}, y, batch_size=4, num_epochs=1000) estimator.fit(input_fn=input_fn, steps=1000) ``` I am wondering what the difference is between the steps argument in the call to the fit function and num_epochs in the numpy_input_fn call. Shouldn't there be just one argument? How are they connected? I have found that the code is somehow taking the min of these two as the number of steps in the toy example of the tutorial. At least one of the two parameters, either num_epochs or steps, has to be redundant. We can calculate one from the other. Is there a way I can know how many steps (number of times parameters get updated) my algorithm actually took? I am curious about which one takes precedence. And does it depend on some other parameters?", "response":"TL;DR: An epoch is when your model goes through your whole training data once. A step is when your model trains on a single batch (or a single sample if you send samples one by one). Training for 5 epochs on 1000 samples with 10 samples per batch will take 500 steps.
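Spelled out as a toy calculation (plain Python, no TensorFlow):

```python
num_samples = 1000
batch_size = 10
num_epochs = 5

# batches needed to see all the data once = one epoch
steps_per_epoch = num_samples // batch_size
# one parameter update (step) happens per batch
total_steps = steps_per_epoch * num_epochs

print(steps_per_epoch, total_steps)  # 100 500
```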
The contrib.learn.io module is not documented very well, but it seems that the numpy_input_fn() function takes some numpy arrays and batches them together as input for a classifier. So, the number of epochs probably means \"how many times to go through the input data I have before stopping\". In this case, they feed two arrays of length 4 in 4-element batches, so it will just mean that the input function will do this at most 1000 times before raising an \"out of data\" exception. The steps argument in the estimator fit() function is how many times the estimator should run the training loop. This particular example is somewhat perverse, so let me make up another one to make things a bit clearer (hopefully). Let's say you have two numpy arrays (samples and labels) that you want to train on. They are 100 elements each. You want your training to take batches with 10 samples per batch. So after 10 batches you will go through all of your training data. That is one epoch. If you set your input generator to 10 epochs, it will go through your training set 10 times before stopping, that is, it will generate at most 100 batches. Again, the io module is not documented, but considering how other input-related APIs in tensorflow work, it should be possible to make it generate data for an unlimited number of epochs, so the only thing controlling the length of training is going to be the steps. This gives you some extra flexibility on how you want your training to progress.
You can go a number of epochs at a time or a number of steps at a time or both or whatever.", "best_answers_score":0.8, "library_name":"tensorflow", "question_url":"https:\/\/stackoverflow.com\/questions\/42816124\/what-is-the-relationship-between-steps-and-epochs-in-tensorflow", "best_answers_votes":45, "question_length":958, "response_length":1831 }, { "question":"Error importing tensorflow \"AlreadyExistsError: Another metric with the same name already exists.\" I am running this simple code on Spyder 3.3 with Python 3.7 and Tensorlow 2.0: ``` import tensorflow as tf print(tf.__version__) ``` When I try to run it again in the same IPython console, I get the following error: File \"\/home\/rodrigo\/.local\/lib\/python3.7\/site-packages\/tensorflow_core\/python\/eager\/monitoring.py\", line 121, in init self._metric = self._metric_methods[self._label_length].create(*args) AlreadyExistsError: Another metric with the same name already exists. If I close the IPython console, and then open it again, it works fine. I am getting this error in every code that imports Tensorflow. Does anyone know how to solve this? System configuration: Ubuntu 19.04 Spyder 3.3.2 Python 3.7.3 IPython 5.8.0 TensorFlow 2.0.0-rc2", "response":"TL;DR: Ensure the Keras version matches the Tensorflow version I am experiencing the same thing with: Windows Python3.8 Tensorflow-2.6.1 The core issue appears to be that there are two Keras packages installed: ``` \/keras tfjs > brainjs. 
Python can be directly compiled to machine code and directly use the CPU and GPU, whereas tfjs is a script-language which is being compiled on the client and has to use the in the browser to access the GPU the same as brain.js (I am not sure if brain.js is GPU-accelerated) Another thing is that tensorflow is a whole ecosystem, which is kept in sync with each different version for the different platforms, so it is really easy to port your python(keras) model to tfjs and if you know how to code a tensorflow-model you can do it in any language. And if you're using nodejs the only reason to stay with tfjs and not switch to python is that you like the JavaScript language better or you are forced to use because you are working in a JS backend. PS: A new library was just released (ML5), which is a wrapper for tfjs and adds a lot of stuff, which helps you to build and use models without having a deep machine learning background.", "best_answers_score":0.8, "library_name":"tensorflow", "question_url":"https:\/\/stackoverflow.com\/questions\/51797280\/machine-learning-tensorflow-v-s-tensorflow-js-v-s-brain-js", "best_answers_votes":36, "question_length":1658, "response_length":990 }, { "question":"Update TensorFlow I'm working with Ubuntu 14.04 , I had a TensorFlow V0.10 but I want to update this version. 
if i do: ``` $ pip install --upgrade $TF_BINARY_URL ``` but it prints: ``` Exception: Traceback (most recent call last): File \"\/usr\/lib\/python2.7\/dist-packages\/pip\/basecommand.py\", line 122, in main status = self.run(options, args) File \"\/usr\/lib\/python2.7\/dist-packages\/pip\/commands\/install.py\", line 278, in run requirement_set.prepare_files(finder, force_root_egg_info=self.bundle, bundle=self.bundle) File \"\/usr\/lib\/python2.7\/dist-packages\/pip\/req.py\", line 1198, in prepare_files do_download, File \"\/usr\/lib\/python2.7\/dist-packages\/pip\/req.py\", line 1376, in unpack_url self.session, File \"\/usr\/lib\/python2.7\/dist-packages\/pip\/download.py\", line 572, in unpack_http_url download_hash = _download_url(resp, link, temp_location) File \"\/usr\/lib\/python2.7\/dist-packages\/pip\/download.py\", line 433, in _download_url for chunk in resp_read(4096): File \"\/usr\/lib\/python2.7\/dist-packages\/pip\/download.py\", line 421, in resp_read chunk_size, decode_content=False): File \"\/usr\/share\/python-wheels\/urllib3-1.7.1-py2.py3-none-any.whl\/urllib3\/response.py\", line 225, in stream data = self.read(amt=amt, decode_content=decode_content) File \"\/usr\/share\/python-wheels\/urllib3-1.7.1-py2.py3-none-any.whl\/urllib3\/response.py\", line 174, in read data = self._fp.read(amt) File \"\/usr\/lib\/python2.7\/httplib.py\", line 573, in read s = self.fp.read(amt) File \"\/usr\/lib\/python2.7\/socket.py\", line 380, in read data = self._sock.recv(left) File \"\/usr\/lib\/python2.7\/ssl.py\", line 341, in recv return self.read(buflen) File \"\/usr\/lib\/python2.7\/ssl.py\", line 260, in read return self._sslobj.read(len) SSLError: The read operation timed out Storing debug log for failure in \/home\/brm17\/.pip\/pip.log ```", "response":"``` (tensorflow)$ pip install --upgrade pip # for Python 2.7 (tensorflow)$ pip3 install --upgrade pip # for Python 3.n (tensorflow)$ pip install --upgrade tensorflow # for Python 2.7 (tensorflow)$ pip3 
install --upgrade tensorflow # for Python 3.n (tensorflow)$ pip install --upgrade tensorflow-gpu # for Python 2.7 and GPU (tensorflow)$ pip3 install --upgrade tensorflow-gpu # for Python 3.n and GPU (tensorflow)$ pip install --upgrade tensorflow-gpu==1.4.1 # for a specific version ``` Details on installing TensorFlow.", "best_answers_score":0.8, "library_name":"tensorflow", "question_url":"https:\/\/stackoverflow.com\/questions\/42574476\/update-tensorflow", "best_answers_votes":98, "question_length":1789, "response_length":518 }, { "question":"How to manually create a tf.Summary() I often want to log Python variables -- as opposed to TF tensors.
In the docs it says that \"you can pass a tf.Summary protocol buffer that you populate with your own data\", but there are no docs for tf.Summary and I could not figure out how to use it. Does anyone know how to create a scalar summary this way?", "response":"You can create a tf.Summary object in your Python program and write it to the same tf.summary.FileWriter object that takes your TensorFlow-produced summaries, using the SummaryWriter.add_summary() method. The tf.Summary class is a Python protocol buffer wrapper for the Summary protocol buffer. Each Summary contains a list of tf.Summary.Value protocol buffers, which each have a tag and either a \"simple\" (floating-point scalar) value, an image, a histogram, or an audio snippet. For example, you can generate a scalar summary from a Python object as follows: ``` writer = tf.train.SummaryWriter(...) value = 37.0 summary = tf.Summary(value=[ tf.Summary.Value(tag=\"summary_tag\", simple_value=value), ]) writer.add_summary(summary) ```", "best_answers_score":0.8, "library_name":"tensorflow", "question_url":"https:\/\/stackoverflow.com\/questions\/37902705\/how-to-manually-create-a-tf-summary", "best_answers_votes":63, "question_length":340, "response_length":736 }, { "question":"LSTM Autoencoder I'm trying to build an LSTM autoencoder with the goal of getting a fixed-size vector from a sequence, which represents the sequence as well as possible. This autoencoder consists of two parts: LSTM Encoder: Takes a sequence and returns an output vector (return_sequences = False) LSTM Decoder: Takes an output vector and returns a sequence (return_sequences = True) So, in the end, the encoder is a many-to-one LSTM and the decoder is a one-to-many LSTM. Image source: Andrej Karpathy On a high level the coding looks like this (similar to what is described here): ``` encoder = Model(...) decoder = Model(...)
autoencoder = Model(encoder.inputs, decoder(encoder(encoder.inputs))) autoencoder.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy']) autoencoder.fit(data, data, batch_size=100, epochs=1500) ``` The shape (number of training examples, sequence length, input dimension) of the data array is (1200, 10, 5) and looks like this: ``` array([[[1, 0, 0, 0, 0], [0, 1, 0, 0, 0], [0, 0, 1, 0, 0], ..., [0, 0, 0, 0, 0], [0, 0, 0, 0, 0], [0, 0, 0, 0, 0]], ... ] ``` Problem: I am not sure how to proceed, especially how to integrate LSTM to Model and how to get the decoder to generate a sequence from a vector. I am using keras with tensorflow backend. EDIT: If someone wants to try out, here is my procedure to generate random sequences with moving ones (including padding): ``` import random import math def getNotSoRandomList(x): rlen = 8 rlist = [0 for x in range(rlen)] if x <= 7: rlist[x] = 1 return rlist sequence = [[getNotSoRandomList(x) for x in range(round(random.uniform(0, 10)))] for y in range(5000)] ### Padding afterwards from keras.preprocessing import sequence as seq data = seq.pad_sequences( sequences = sequence, padding='post', maxlen=None, truncating='post', value=0. ) ```", "response":"Models can be any way you want. If I understood it right, you just want to know how to create models with LSTM? Using LSTMs Well, first, you have to define what your encoded vector looks like. Suppose you want it to be an array of 20 elements, a 1-dimension vector. So, shape (None,20). The size of it is up to you, and there is no clear rule to know the ideal one. And your input must be three-dimensional, such as your (1200,10,5). In keras summaries and error messages, it will be shown as (None,10,5), as \"None\" represents the batch size, which can vary each time you train\/predict. 
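As a plain-NumPy sanity check of the shapes under discussion (the sizes come from the question; the 20-element code vector is just the example choice):

```python
import numpy as np

batch = np.zeros((1200, 10, 5))  # (samples, timesteps, features) as in the question
encoded = np.zeros((1200, 20))   # what the encoder would emit: one 20-vector per sample

# before a decoder LSTM can consume the code, it must become 3-dimensional again,
# e.g. 10 steps of 2 elements (10 * 2 == 20):
reshaped = encoded.reshape(-1, 10, 2)
print(reshaped.shape)  # (1200, 10, 2)
```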
There are many ways to do this, but suppose you want only one LSTM layer: ``` from keras.layers import * from keras.models import Model inpE = Input((10,5)) #here, you don't define the batch size outE = LSTM(units = 20, return_sequences=False, ...optional parameters...)(inpE) ``` This is enough for a very, very simple encoder resulting in an array with 20 elements (but you can stack more layers if you want). Let's create the model: ``` encoder = Model(inpE,outE) ``` Now, for the decoder, it gets obscure. You don't have an actual sequence anymore, but a static meaningful vector. You may still want to use LSTMs; they will assume the vector is a sequence. But here, since the input has shape (None,20), you must first reshape it to some 3-dimensional array in order to attach an LSTM layer next. The way you will reshape it is entirely up to you. 20 steps of 1 element? 1 step of 20 elements? 10 steps of 2 elements? Who knows? ``` inpD = Input((20,)) outD = Reshape((10,2))(inpD) #supposing 10 steps of 2 elements ``` It's important to notice that if you don't have 10 steps anymore, you won't be able to just enable \"return_sequences\" and have the output you want. You'll have to work a little. Actually, it's not necessary to use \"return_sequences\" or even to use LSTMs, but you may do that. Since in my reshape I have 10 timesteps (intentionally), it will be ok to use \"return_sequences\", because the result will have 10 timesteps (as the initial input) ```
``` You could work in many other ways, such as simply creating a 50 cell LSTM without returning sequences and then reshaping the result: ``` alternativeOut = LSTM(50,return_sequences=False,...)(outD) alternativeOut = Reshape((10,5))(alternativeOut) ``` And our model goes: ``` decoder = Model(inpD,outD1) alternativeDecoder = Model(inpD,alternativeOut) ``` After that, you unite the models with your code and train the autoencoder. All three models will have the same weights, so you can make the encoder bring results just by using its predict method. ``` encoderPredictions = encoder.predict(data) ``` What I often see about LSTMs for generating sequences is something like predicting the next element. You take just a few elements of the sequence and try to find the next element. And you take another segment one step forward and so on. This may be helpful in generating sequences.", "best_answers_score":0.8, "library_name":"tensorflow", "question_url":"https:\/\/stackoverflow.com\/questions\/44647258\/lstm-autoencoder", "best_answers_votes":32, "question_length":1830, "response_length":3056 }, { "question":"Choosing number of Steps per Epoch If I want to train a model with train_generator, is there a significant difference between choosing 10 Epochs with 500 Steps each and 100 Epochs with 50 Steps each Currently I am training for 10 epochs, because each epoch takes a long time, but any graph showing improvement looks very \"jumpy\" because I only have 10 datapoints. I figure I can get a smoother graph if I use 100 Epochs, but I want to know first if there is any downside to this", "response":"Based on what you said it sounds like you need a larger batch_size, and of course there are implications with that which could impact the steps_per_epoch and number of epochs. 
To solve for jumping-around A larger batch size will give you a better gradient and will help to prevent jumping around You may also want to consider a smaller learning rate, or a learning rate scheduler (or decay) to allow the network to \"settle in\" as it trains Implications of a larger batch-size Too large of a batch_size can produce memory problems, especially if you are using a GPU. Once you exceed the limit, dial it back until it works. This will help you find the max batch-size that your system can work with. Too large of a batch size can get you stuck in a local minimum, so if your training gets stuck, I would reduce it somewhat. Imagine here you are over-correcting the jumping-around and it's not jumping around enough to further minimize the loss function. When to reduce epochs If your train error is very low, yet your test\/validation is very high, then you have over-fit the model with too many epochs. The best way to find the right balance is to use early-stopping with a validation test set. Here you can specify when to stop training, and save the weights for the network that gives you the best validation loss. (I highly recommend using this always) When to adjust steps-per-epoch Traditionally, the steps per epoch is calculated as train_length \/\/ batch_size, since this will use all of the data points, one batch size worth at a time. If you are augmenting the data, then you can stretch this a tad (sometimes I multiply that function above by 2 or 3, etc.)
But, if it's already training for too long, then I would just stick with the traditional approach.", "best_answers_score":0.8, "library_name":"tensorflow", "question_url":"https:\/\/stackoverflow.com\/questions\/49922252\/choosing-number-of-steps-per-epoch", "best_answers_votes":75, "question_length":478, "response_length":1753 }, { "question":"Tensorflow variable scope: reuse if variable exists I want a piece of code that creates a variable within a scope if it doesn't exist, and access the variable if it already exists. I need it to be the same code since it will be called multiple times. However, Tensorflow needs me to specify whether I want to create or reuse the variable, like this: ``` with tf.variable_scope(\"foo\"): #create the first time v = tf.get_variable(\"v\", [1]) with tf.variable_scope(\"foo\", reuse=True): #reuse the second time v = tf.get_variable(\"v\", [1]) ``` How can I get it to figure out whether to create or reuse it automatically? I.e., I want the above two blocks of code to be the same and have the program run.", "response":"A ValueError is raised in get_variable() when creating a new variable and shape is not declared, or when violating reuse during variable creation. Therefore, you can try this: ``` def get_scope_variable(scope_name, var, shape=None): with tf.variable_scope(scope_name) as scope: try: v = tf.get_variable(var, shape) except ValueError: scope.reuse_variables() v = tf.get_variable(var) return v v1 = get_scope_variable('foo', 'v', [1]) v2 = get_scope_variable('foo', 'v') assert v1 == v2 ``` Note that the following also works: ``` v1 = get_scope_variable('foo', 'v', [1]) v2 = get_scope_variable('foo', 'v', [1]) assert v1 == v2 ``` UPDATE. 
The new API supports auto-reusing now: ``` def get_scope_variable(scope, var, shape=None): with tf.variable_scope(scope, reuse=tf.AUTO_REUSE): v = tf.get_variable(var, shape) return v ```", "best_answers_score":0.8, "library_name":"tensorflow", "question_url":"https:\/\/stackoverflow.com\/questions\/38545362\/tensorflow-variable-scope-reuse-if-variable-exists", "best_answers_votes":36, "question_length":696, "response_length":826 }, { "question":"How does TensorFlow name tensors? I wonder if this is the correct understanding: All tensors are derived from some operation, and operations are either given a name in the constructor, or given the default name for a particular kind of operation. If the name is not unique, TensorFlow automatically handles this by appending \"_1\", \"_2\", etc. An operation with n tensor outputs name these tensors \"op_name:0\", \"op_name:1\", ..., \"op_name:n-1\". One problem seems to arise: if x is a tf.Variable, then x.name gives \"variable_name:0\". This is confusing: to what does \"variable_name\" refer?", "response":"Your observations on Tensor naming are absolutely correct: the name of a Tensor is the concatenation of the name of the operation that produced it, a colon (:), and the index of that tensor in the outputs of the operation that produced it. Therefore the tensor named \"foo:2\" is the output of the op named \"foo\" at position 2 (with indices starting from zero). The naming of tf.Variable objects is slightly strange. Every tf.Variable contains a mutable tensor object that holds the state of the variable (and a few other tensors). A \"Variable\" op (which has the name \"variable_name\" in your example) \"produces\" this mutable tensor each time it is run as its 0th output, so the name of the mutable tensor is \"variable_name:0\". 
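The naming convention itself is easy to mimic in plain Python (a sketch of the rule, not TensorFlow's implementation):

```python
def tensor_name(op_name, output_index):
    # a tensor is named '<op name>:<index of the tensor in the op outputs>'
    return '%s:%d' % (op_name, output_index)

print(tensor_name('foo', 2))            # foo:2
print(tensor_name('variable_name', 0))  # variable_name:0

# splitting a tensor name back into (op name, output index):
op, idx = tensor_name('foo', 2).rsplit(':', 1)
print(op, int(idx))  # foo 2
```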
Since a tf.Variable is mostly indistinguishable from a tf.Tensor\u2014in that it can be used in the same places\u2014we took the decision to make variable names resemble tensor names, so the Variable.name property returns the name of the mutable tensor. (This contrasts with tf.QueueBase and tf.ReaderBase objects, which are not usable directly as tensors (instead you have to call methods on them to create ops that operate on their state), so these do not have a tensor-like name.)", "best_answers_score":0.8, "library_name":"tensorflow", "question_url":"https:\/\/stackoverflow.com\/questions\/36150834\/how-does-tensorflow-name-tensors", "best_answers_votes":54, "question_length":584, "response_length":1198 }, { "question":"TensorFlow: How and why to use SavedModel I have a few questions regarding the SavedModel API, whose documentation I find leaves a lot of details unexplained. The first three questions are about what to pass to the arguments of the add_meta_graph_and_variables() method of tf.saved_model.builder.SavedModelBuilder, while the fourth question is about why to use the SavedModel API over tf.train.Saver. What is the format of the signature_def_map argument? Do I normally need to set this argument when saving a model? Similarly, What is the format of the assets_collection argument? Why do you save a list of tags with a metagraph as opposed to just giving it a name (i.e. attaching just one unique tag to it)? Why would I add multiple tags to a given metagraph? What if I try to load a metagrpah from a pb by a certain tag, but multiple metagraphs in that pb match that tag? The documentation argues that it is recommended to use SavedModel to save entire models (as opposed to variables only) in self-contained files. But tf.train.Saver also saves the graph in addition to the variables in a .meta file. So what are the advantages of using SavedModel? 
The documentation says When you want to save and load variables, the graph, and the graph's metadata--basically, when you want to save or restore your model--we recommend using SavedModel. SavedModel is a language-neutral, recoverable, hermetic serialization format. SavedModel enables higher-level systems and tools to produce, consume, and transform TensorFlow models. but this explanation is quite abstract and doesn't really help me understand what the advantages of SavedModel are. What would be concrete examples where SavedModel (as opposed to tf.train.Saver) would be better to use? Please note that my question is not a duplicate of this question. I'm not asking how to save a model, I am asking very specific questions about the properties of SavedModel, which is only one of multiple mechanisms TensorFlow provides to save and load models. None of the answers in the linked question touch on the SavedModel API (which, once again, is not the same as tf.train.Saver).", "response":"EDIT: I wrote this back at TensorFlow 1.4. As of today (TensorFlow 1.12 is stable, there's a 1.13rc and 2.0 is around the corner) the docs linked in the question are much improved. I'm trying to use tf.saved_model and also found the Docs quite (too) abstract. Here's my stab at a full answer to your questions: 1. signature_def_map: a. Format See Tom's answer to Tensorflow: how to save\/restore a model. (Ctrl-F for \"tf.saved_model\" - currently, the only uses of the phrase on that question are in his answer). b. need It's my understanding that you do normally need it. If you intend to use the model, you need to know the inputs and outputs of the graph. I think it is akin to a C++ function signature: If you intend to define a function after it's called or in another C++ file, you need the signature in your main file (i.e. prototyped or in a header file). 2. assets_collection: format: Couldn't find clear documentation, so I went to the builder source code. 
It appears that the argument is an iterable of Tensors of dtype=tf.string, where each Tensor is a path for the asset directory. So, a TensorFlow Graph collection should work. I guess that is the parameter's namesake, but from the source code I would expect a Python list to work too. (You didn't ask if you need to set it, but judging from Zoe's answer to What are assets in tensorflow? and iga's answer to the tangentially related Tensorflow serving: \u201cNo assets to save\/writes\u201d when exporting models, it doesn't usually need set.) 3. Tags: a. Why list I don't know why you must pass a list, but you may pass a list with one element. For instance, in my current project I only use the [tf...tag_constants.SERVING] tag. b. When to use multiple Say you're using explicit device placement for operations. Maybe you want to save a CPU version and a GPU version of your graph. Obviously you want to save a serving version of each, and say you want to save training checkpoints. You could use a CPU\/GPU tag and a training\/serving tag to manage all cases. The docs hint at it: Each MetaGraphDef added to the SavedModel must be annotated with user-specified tags. The tags provide a means to identify the specific MetaGraphDef to load and restore, along with the shared set of variables and assets. These tags typically annotate a MetaGraphDef with its functionality (for example, serving or training), and optionally with hardware-specific aspects (for example, GPU). c. Collision Too lazy to force a collision myself - I see two cases that would need addressed - I went to the loader source code. 
Inside def load, you'll see: ``` saved_model = _parse_saved_model(export_dir) found_match = False for meta_graph_def in saved_model.meta_graphs: if set(meta_graph_def.meta_info_def.tags) == set(tags): meta_graph_def_to_load = meta_graph_def found_match = True break if not found_match: raise RuntimeError( \"MetaGraphDef associated with tags \" + str(tags).strip(\"[]\") + \" could not be found in SavedModel. To inspect available tag-sets in\" \" the SavedModel, please use the SavedModel CLI: `saved_model_cli`\" ) ``` It appears to me that it's looking for an exact match. E.g. say you have a metagraph with tags \"GPU\" and \"Serving\" and a metagraph with tag \"Serving\". If you load \"Serving\", you'll get the latter metagraph. On the other hand, say you have a metagraph \"GPU\" and \"Serving\" and a metagraph \"CPU\" and \"Serving\". If you try to load \"Serving\", you'll get the error. If you try to save two metagraphs with the exact same tags in the same folder, I expect you'll overwrite the first one. It doesn't look like the build code handles such a collision in any special way. 4. SavedModel or tf.train.Saver: This confused me too. wicke's answer to Should TensorFlow users prefer SavedModel over Checkpoint or GraphDef? cleared it up for me. I'll throw in my two cents: In the scope of local Python+TensorFlow, you can make tf.train.Saver do everything. But, it will cost you. Let me outline the save-a-trained-model-and-deploy use case. You'll need your saver object. It's easiest to set it up to save the complete graph (every variable). You probably don't want to save the .meta all the time since you're working with a static graph. You'll need to specify that in your training hook. You can read about that on cv-tricks. When your training finishes, you'll need convert your checkpoint file to a pb file. 
That usually means clearing the current graph, restoring the checkpoint, freezing your variables to constants with tf.python.framework.graph_util, and writing it with tf.gfile.GFile. You can read about that on medium. After that, you want to deploy it in Python. You'll need the input and output Tensor names - the string names in the graph def. You can read about that on metaflow (actually a very good blog post for the tf.train.Saver method). Some op nodes will let you feed data into them easily. Some not so much. I usually gave up on finding an appropriate node and added a tf.reshape that didn't actually reshape anything to the graph def. That was my ad-hoc input node. Same for the output. And then finally, you can deploy your model, at least locally in Python. Or, you could use the answer I linked in point 1 to accomplish all this with the SavedModel API. Less headaches thanks to Tom's answer . You'll get more support and features in the future if it ever gets documented appropriately . Looks like it's easier to use command line serving (the medium link covers doing that with Saver - looks tough, good luck!). It's practically baked in to the new Estimators. And according to the Docs, SavedModel is a language-neutral, recoverable, hermetic serialization format. Emphasis mine: Looks like you can get your trained models into the growing C++ API much easier. The way I see it, it's like the Datasets API. It's just easier than the old way! As far as concrete examples of SavedModel of tf.train.Saver: If \"basically, when you want to save or restore your model\" isn't clear enough for you: The correct time to use it is any time it makes your life easier. To me, that looks like always. Especially if you're using Estimators, deploying in C++, or using command line serving. So that's my research on your question. Or four enumerated questions. Err, eight question marks. 
Hope this helps.", "best_answers_score":0.8, "library_name":"tensorflow", "question_url":"https:\/\/stackoverflow.com\/questions\/46513923\/tensorflow-how-and-why-to-use-savedmodel", "best_answers_votes":52, "question_length":2129, "response_length":6270 }, { "question":"How to Display Custom Images in Tensorboard (e.g. Matplotlib Plots)? The Image Dashboard section of the Tensorboard ReadMe says: Since the image dashboard supports arbitrary pngs, you can use this to embed custom visualizations (e.g. matplotlib scatterplots) into TensorBoard. I see how a pyplot image could be written to file, read back in as a tensor, and then used with tf.image_summary() to write it to TensorBoard, but this statement from the readme suggests there is a more direct way. Is there? If so, is there any further documentation and\/or examples of how to do this efficiently?", "response":"It is quite easy to do if you have the image in a memory buffer. Below, I show an example, where a pyplot is saved to a buffer and then converted to a TF image representation which is then sent to an image summary. 
``` import io import matplotlib.pyplot as plt import tensorflow as tf def gen_plot(): \"\"\"Create a pyplot plot and save to buffer.\"\"\" plt.figure() plt.plot([1, 2]) plt.title(\"test\") buf = io.BytesIO() plt.savefig(buf, format='png') buf.seek(0) return buf # Prepare the plot plot_buf = gen_plot() # Convert PNG buffer to TF image image = tf.image.decode_png(plot_buf.getvalue(), channels=4) # Add the batch dimension image = tf.expand_dims(image, 0) # Add image summary summary_op = tf.summary.image(\"plot\", image) # Session with tf.Session() as sess: # Run summary = sess.run(summary_op) # Write summary writer = tf.train.SummaryWriter('.\/logs') writer.add_summary(summary) writer.close() ``` This gives the following TensorBoard visualization:", "best_answers_score":0.8, "library_name":"tensorflow", "question_url":"https:\/\/stackoverflow.com\/questions\/38543850\/how-to-display-custom-images-in-tensorboard-e-g-matplotlib-plots", "best_answers_votes":48, "question_length":590, "response_length":958 }, { "question":"How do I get the current value of a Variable? Suppose we have a variable: ``` x = tf.Variable(...) ``` This variable can be updated during the training process using the assign() method. What is the best way to get the current value of a variable? I know we could use this: ``` session.run(x) ``` But I'm afraid this would trigger a whole chain of operations. In Theano, you could just do ``` y = theano.shared(...) y_vals = y.get_value() ``` I'm looking for the equivalent thing in TensorFlow.", "response":"The only way to get the value of the variable is by running it in a session. In the FAQ it is written that: A Tensor object is a symbolic handle to the result of an operation, but does not actually hold the values of the operation's output. 
So TF equivalent would be: ``` import tensorflow as tf x = tf.Variable([1.0, 2.0]) init = tf.global_variables_initializer() with tf.Session() as sess: sess.run(init) v = sess.run(x) print(v) # will show you your variable. ``` The part with init = global_variables_initializer() is important and should be done in order to initialize variables. Also, take a look at InteractiveSession if you work in IPython.", "best_answers_score":0.8, "library_name":"tensorflow", "question_url":"https:\/\/stackoverflow.com\/questions\/33679382\/how-do-i-get-the-current-value-of-a-variable", "best_answers_votes":40, "question_length":494, "response_length":648 }, { "question":"Why do we name variables in Tensorflow? In some of the places, I saw the syntax, where variables are initialized with names, sometimes without names. For example: ``` # With name var = tf.Variable(0, name=\"counter\") # Without one = tf.constant(1) ``` What is the point of naming the variable var \"counter\"?", "response":"The name parameter is optional (you can create variables and constants with or without it), and the variable you use in your program does not depend on it. Names can be helpful in a couple of places: When you want to save or restore your variables (you can save them to a binary file after the computation). From docs: By default, it uses the value of the Variable.name property for each variable ``` matrix_1 = tf.Variable([[1, 2], [2, 3]], name=\"v1\") matrix_2 = tf.Variable([[3, 4], [5, 6]], name=\"v2\") init = tf.initialize_all_variables() saver = tf.train.Saver() sess = tf.Session() sess.run(init) save_path = saver.save(sess, \"\/model.ckpt\") sess.close() ``` Nonetheless you have variables matrix_1, matrix_2 they are saves as v1, v2 in the file. Also names are used in TensorBoard to nicely show names of edges. 
You can even group them by using the same scope: ``` import tensorflow as tf with tf.name_scope('hidden') as scope: a = tf.constant(5, name='alpha') W = tf.Variable(tf.random_uniform([1, 2], -1.0, 1.0), name='weights') b = tf.Variable(tf.zeros([1]), name='biases') ```", "best_answers_score":0.8, "library_name":"tensorflow", "question_url":"https:\/\/stackoverflow.com\/questions\/33648167\/why-do-we-name-variables-in-tensorflow", "best_answers_votes":47, "question_length":306, "response_length":1085 }, { "question":"How to apply Drop Out in Tensorflow to improve the accuracy of neural network? Drop-Out is regularization techniques. And I want to apply it to notMNIST data to reduce over-fitting to finish my Udacity Deep Learning Course Assignment.I have read the docs of tensorflow on how to call the tf.nn.dropout. And here is my code ```py # before proceeding further. from __future__ import print_function import numpy as np import tensorflow as tf from six.moves import cPickle as pickle pickle_file = 'notMNIST.pickle' with open(pickle_file, 'rb') as f: save = pickle.load(f) train_dataset = save['train_dataset'] train_labels = save['train_labels'] valid_dataset = save['valid_dataset'] valid_labels = save['valid_labels'] test_dataset = save['test_dataset'] test_labels = save['test_labels'] del save # hint to help gc free up memory print('Training set', train_dataset.shape, train_labels.shape) print('Validation set', valid_dataset.shape, valid_labels.shape) print('Test set', test_dataset.shape, test_labels.shape) image_size = 28 num_labels = 10 def reformat(dataset, labels): dataset = dataset.reshape((-1, image_size * image_size)).astype(np.float32) # Map 1 to [0.0, 1.0, 0.0 ...], 2 to [0.0, 0.0, 1.0 ...] 
labels = (np.arange(num_labels) == labels[:,None]).astype(np.float32) return dataset, labels train_dataset, train_labels = reformat(train_dataset, train_labels) valid_dataset, valid_labels = reformat(valid_dataset, valid_labels) test_dataset, test_labels = reformat(test_dataset, test_labels) print('Training set', train_dataset.shape, train_labels.shape) print('Validation set', valid_dataset.shape, valid_labels.shape) print('Test set', test_dataset.shape, test_labels.shape) def accuracy(predictions, labels): return (100.0 * np.sum(np.argmax(predictions, 1) == np.argmax(labels, 1)) \/ predictions.shape[0]) # ReLU neuron # param training_epochs = 30 batch_size = 521 display_step = 1 n_input = 784 # img shape: 28*28 n_classes = 10 # MNIST total classes (0-9 digits) # hyper-parameter n_hidden_1 = 256 learning_rate = 0.05 lambda_term = 0.01 graph = tf.Graph() with graph.as_default(): # init weights weights_hiden = tf.Variable(tf.random_normal([n_input, n_hidden_1], stddev=np.sqrt(n_input))) weights_out = tf.Variable(tf.random_normal([n_hidden_1, n_classes], stddev=np.sqrt(n_hidden_1))) biases_hidden = tf.Variable(tf.random_normal([n_hidden_1])) biases_out = tf.Variable(tf.random_normal([n_classes])) x = tf.placeholder(\"float\", [None, n_input]) y = tf.placeholder(\"float\", [None, n_classes]) def model(x, weights_hiden, weights_out, biases_hidden, biases_out): # hidden layer with RELU activation layer_1 = tf.nn.relu(tf.add(tf.matmul(x, weights_hiden), biases_hidden)) # apply DropOut to hidden layer keep_prob = tf.placeholder(tf.float32) # DROP-OUT here drop_out = tf.nn.dropout(layer_1, keep_prob) # DROP-OUT here # output layer with linear activation out_layer = tf.matmul(layer_1, weights_out) + biases_out return out_layer # Construct model pred = model(x, weights_hiden, weights_out, biases_hidden, biases_out) # Define loss and optimizer cost = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(pred, y) + lambda_term * tf.nn.l2_loss(weights_hiden) + lambda_term * 
tf.nn.l2_loss(weights_out) + lambda_term * tf.nn.l2_loss(biases_hidden) + lambda_term * tf.nn.l2_loss(biases_out)) optimizer = tf.train.AdamOptimizer(learning_rate=learning_rate).minimize(cost) # run the graph with tf.Session(graph=graph) as sess: tf.initialize_all_variables().run() print('Initialized') # Training cycle for epoch in range(training_epochs): avg_cost = 0. total_batch = int(train_dataset.shape[0]\/batch_size) # Loop over all batches for i in range(total_batch): batch_x = train_dataset[(i*batch_size):((i*batch_size) + batch_size), :] batch_y = train_labels[(i*batch_size):((i*batch_size) + batch_size), :] # Run optimization op (backprop) and cost op (to get loss value) _, c = sess.run([optimizer, cost], feed_dict={x: batch_x, y: batch_y}) # Compute average loss avg_cost += c \/ total_batch # Display logs per epoch step if epoch % display_step == 0: print(\"Epoch:\", '%04d' % (epoch+1), \"cost=\", \"{:.9f}\".format(avg_cost)) print(\"Optimization Finished!\") # Test model correct_prediction = tf.equal(tf.argmax(pred, 1), tf.argmax(y, 1)) # Calculate accuracy accuracy = tf.reduce_mean(tf.cast(correct_prediction, \"float\")) print(\"Test data accuracy:\", accuracy.eval({x: test_dataset, y: test_labels})) print(\"Valid data accuracy:\", accuracy.eval({x: valid_dataset, y: valid_labels})) ``` The tf.nn.dropout is called in function model(), but after I applied the DropOut technique to the neural network, the accuracy did seem any change, here is the result: ``` Epoch: 0001 cost= 579980.086977807 Epoch: 0002 cost= 238859.802382506 Epoch: 0003 cost= 90672.733752856 Epoch: 0004 cost= 32649.040985028 Epoch: 0005 cost= 11325.878361874 Epoch: 0006 cost= 3866.805511076 Epoch: 0007 cost= 1357.785540469 Epoch: 0008 cost= 519.381747333 Epoch: 0009 cost= 225.359804119 Epoch: 0010 cost= 110.099476707 Epoch: 0011 cost= 55.212384386 Epoch: 0012 cost= 28.469241683 Epoch: 0013 cost= 14.511494627 Epoch: 0014 cost= 6.567228943 Epoch: 0015 cost= 3.186372240 Epoch: 0016 cost= 
1.701917576 Epoch: 0017 cost= 1.041632473 Epoch: 0018 cost= 0.843376874 Epoch: 0019 cost= 0.786183911 Epoch: 0020 cost= 0.775412846 Epoch: 0021 cost= 0.782965020 Epoch: 0022 cost= 0.796788171 Epoch: 0023 cost= 0.814522117 Epoch: 0024 cost= 0.832090579 Epoch: 0025 cost= 0.849197715 Epoch: 0026 cost= 0.867473578 Epoch: 0027 cost= 0.889561496 Epoch: 0028 cost= 0.921837020 Epoch: 0029 cost= 16.655304543 Epoch: 0030 cost= 1.421570476 Optimization Finished! Test data accuracy: 0.8775 Valid data accuracy: 0.8069 ``` How can I apply DropOut by Tensorflow to improve the accuracy of the network? Thank you!", "response":"In the graph, I'd suggest to move keep_prob = tf.placeholder(tf.float32) outside of the model function to make it global. ```py with graph.as_default(): ... x = tf.placeholder(\"float\", [None, n_input]) y = tf.placeholder(\"float\", [None, n_classes]) keep_prob = tf.placeholder(tf.float32) def model(x, weights_hiden, weights_out, biases_hidden, biases_out, keep_prob): # hidden layer with RELU activation layer_1 = tf.nn.relu(tf.add(tf.matmul(x, weights_hiden), biases_hidden)) # apply DropOut to hidden layer drop_out = tf.nn.dropout(layer_1, keep_prob) # DROP-OUT here # output layer with linear activation out_layer = tf.matmul(drop_out, weights_out) + biases_out return out_layer ... ``` When running session, feed a desired keep_prob value during training time, and feed 1.0 to keep_prob during reference (validation and\/or testing) time. ```py # run the graph with tf.Session(graph=graph) as sess: tf.initialize_all_variables().run() ... for epoch in range(training_epochs): ... for i in range(total_batch): batch_x = ... batch_y = ... # Run optimization op (backprop) and cost op (to get loss value) # Feed a value < 1.0 for keep prob during training _, c = sess.run([optimizer, cost], feed_dict={x: batch_x, y: batch_y, keep_prob : 0.5}) ... 
# Feed 1.0 for keep prob during testing print(\"Test data accuracy:\", accuracy.eval({x: test_dataset, y: test_labels, keep_prob : 1.0})) print(\"Valid data accuracy:\", accuracy.eval({x: valid_dataset, y: valid_labels, keep_prob : 1.0})) ```", "best_answers_score":0.8, "library_name":"tensorflow", "question_url":"https:\/\/stackoverflow.com\/questions\/40879504\/how-to-apply-drop-out-in-tensorflow-to-improve-the-accuracy-of-neural-network", "best_answers_votes":57, "question_length":5785, "response_length":1487 }, { "question":"What are the differences between all these cross-entropy losses in Keras and TensorFlow? What are the differences between all these cross-entropy losses? Keras is talking about Binary cross-entropy Categorical cross-entropy Sparse categorical cross-entropy While TensorFlow has Softmax cross-entropy with logits Sparse softmax cross-entropy with logits Sigmoid cross-entropy with logits What are the differences and relationships between them? What are the typical applications for them? What's the mathematical background? Are there other cross-entropy types that one should know? Are there any cross-entropy types without logits?", "response":"There is just one cross (Shannon) entropy defined as: ``` H(P||Q) = - SUM_i P(X=i) log Q(X=i) ``` In machine learning usage, P is the actual (ground truth) distribution, and Q is the predicted distribution. All the functions you listed are just helper functions which accepts different ways to represent P and Q. There are basically 3 main things to consider: there are either 2 possibles outcomes (binary classification) or more. If there are just two outcomes, then Q(X=1) = 1 - Q(X=0) so a single float in (0,1) identifies the whole distribution, this is why neural network in binary classification has a single output (and so does logistic regresssion). 
If there are K>2 possible outcomes one has to define K outputs (one per each Q(X=...)) one either produces proper probabilities (meaning that Q(X=i)>=0 and SUM_i Q(X=i) =1 or one just produces a \"score\" and has some fixed method of transforming score to probability. For example a single real number can be \"transformed to probability\" by taking sigmoid, and a set of real numbers can be transformed by taking their softmax and so on. there is j such that P(X=j)=1 (there is one \"true class\", targets are \"hard\", like \"this image represent a cat\") or there are \"soft targets\" (like \"we are 60% sure this is a cat, but for 40% it is actually a dog\"). Depending on these three aspects, different helper function should be used: ``` outcomes what is in Q targets in P ------------------------------------------------------------------------------- binary CE 2 probability any categorical CE >2 probability soft sparse categorical CE >2 probability hard sigmoid CE with logits 2 score any softmax CE with logits >2 score soft sparse softmax CE with logits >2 score hard ``` In the end one could just use \"categorical cross entropy\", as this is how it is mathematically defined, however since things like hard targets or binary classification are very popular - modern ML libraries do provide these additional helper functions to make things simpler. In particular \"stacking\" sigmoid and cross entropy might be numerically unstable, but if one knows these two operations are applied together - there is a numerically stable version of them combined (which is implemented in TF). It is important to notice that if you apply wrong helper function the code will usually still execute, but results will be wrong. For example if you apply softmax_* helper for binary classification with one output your network will be considered to always produce \"True\" at the output. 
As a final note - this answer considers classification, it is slightly different when you consider multi label case (when a single point can have multiple labels), as then Ps do not sum to 1, and one should use sigmoid_cross_entropy_with_logits despite having multiple output units.", "best_answers_score":0.8, "library_name":"tensorflow", "question_url":"https:\/\/stackoverflow.com\/questions\/44674847\/what-are-the-differences-between-all-these-cross-entropy-losses-in-keras-and-ten", "best_answers_votes":35, "question_length":631, "response_length":2800 }, { "question":"How to understand the term `tensor` in TensorFlow? I am new to TensorFlow. While I am reading the existing documentation, I found the term tensor really confusing. Because of it, I need to clarify the following questions: What is the relationship between tensor and Variable, tensor vs. tf.constant, 'tensor' vs. tf.placeholder? Are they all types of tensors?", "response":"TensorFlow doesn't have first-class Tensor objects, meaning that there are no notion of Tensor in the underlying graph that's executed by the runtime. Instead the graph consists of op nodes connected to each other, representing operations. An operation allocates memory for its outputs, which are available on endpoints :0, :1, etc, and you can think of each of these endpoints as a Tensor. If you have tensor corresponding to nodename:0 you can fetch its value as sess.run(tensor) or sess.run('nodename:0'). Execution granularity happens at operation level, so the run method will execute op which will compute all of the endpoints, not just the :0 endpoint. It's possible to have an Op node with no outputs (like tf.group) in which case there are no tensors associated with it. It is not possible to have tensors without an underlying Op node. 
You can examine what happens in underlying graph by doing something like this ``` tf.reset_default_graph() value = tf.constant(1) print(tf.get_default_graph().as_graph_def()) ``` So with tf.constant you get a single operation node, and you can fetch it using sess.run(\"Const:0\") or sess.run(value) Similarly, value=tf.placeholder(tf.int32) creates a regular node with name Placeholder, and you could feed it as feed_dict={\"Placeholder:0\":2} or feed_dict={value:2}. You can not feed and fetch a placeholder in the same session.run call, but you can see the result by attaching a tf.identity node on top and fetching that. For variable ``` tf.reset_default_graph() value = tf.Variable(tf.ones_initializer()(())) value2 = value+3 print(tf.get_default_graph().as_graph_def()) ``` You'll see that it creates two nodes Variable and Variable\/read, the :0 endpoint is a valid value to fetch on both of these nodes. However Variable:0 has a special ref type meaning it can be used as an input to mutating operations. The result of Python call tf.Variable is a Python Variable object and there's some Python magic to substitute Variable\/read:0 or Variable:0 depending on whether mutation is necessary. Since most ops have only 1 endpoint, :0 is dropped. Another example is Queue -- close() method will create a new Close op node which connects to Queue op. To summarize -- operations on python objects like Variable and Queue map to different underlying TensorFlow op nodes depending on usage. 
For ops like tf.split or tf.nn.top_k which create nodes with multiple endpoints, Python's session.run call automatically wraps output in tuple or collections.namedtuple of Tensor objects which can be fetched individually.", "best_answers_score":0.8, "library_name":"tensorflow", "question_url":"https:\/\/stackoverflow.com\/questions\/37849322\/how-to-understand-the-term-tensor-in-tensorflow", "best_answers_votes":69, "question_length":359, "response_length":2551 }, { "question":"Install Tensorflow 2.0 in conda enviroment I would like to know if anyone knows how can I install tensorflow==2.0.0-alpha0 in a conda enviroment using python 3.7. Is it possible to use python 3.7 or do I have to downgrade to 3.6. Either way what is the command I need to use because the following don't find any package ``` conda install tensorflow==2.0.0-alpha0 conda install tensorflow conda install tensorflow=2.0.0-alpha0 ``` I am using fedora 29 and conda 4.6.8 Thanks!", "response":"TENSORFLOW 2.0 release version is out! Since 01\/10\/2019 I'm not talking beta but the release version. Using Anaconda Since 01\/11\/2019 Anaconda is supporting the Tensorflow 2.0.0. Option 1: For what the easiest way is just: conda install tensorflow or conda install tensorflow-gpu For the gpu mode, anaconda will take care of all the CUDA everything you need to install for the tensorflow gpu mode to work so I strongly recommend using this method. The only issue with this method is that anaconda might not have the last last version of TensorFlow. For example, at Feb 21 2021, conda has the version 2.3 whereas the PIP version is 2.4. You can check the current version of gpu or cpu. 
Option 2 (virtual env): It is strongly recommended to use an environment on where to install tensorflow, for which you need the following command that will create an environment first and then install tensorflow within: CPU: conda create -n tensorflow GPU: conda create -n tensorflow-gpu Change by a meaningful name like tf-2 To use tensorflow run first conda activate Using pip Using pip the tensorflow official instructions are quite complete. Just install tensorflow using pip like: ``` # Current stable release for CPU-only pip install tensorflow ``` I yet recommend before doing everything to install tensorflow in a new environment so the 3 steps would be (with anaconda): ``` conda create --n pip conda activate pip install tensorflow ``` Now for the GPU version it's harder with pip, I recommend you this link that explains the extra things you need to install (CUDA and others).", "best_answers_score":0.8, "library_name":"tensorflow", "question_url":"https:\/\/stackoverflow.com\/questions\/55392100\/install-tensorflow-2-0-in-conda-enviroment", "best_answers_votes":36, "question_length":474, "response_length":1578 }, { "question":"Adam optimizer goes haywire after 200k batches, training loss grows I've been seeing a very strange behavior when training a network, where after a couple of 100k iterations (8 to 10 hours) of learning fine, everything breaks and the training loss grows: The training data itself is randomized and spread across many .tfrecord files containing 1000 examples each, then shuffled again in the input stage and batched to 200 examples. The background I am designing a network that performs four different regression tasks at the same time, e.g. determining the likelihood of an object appearing in the image and simultanously determining its orientation. The network starts with a couple of convolutional layers, some with residual connections, and then branches into the four fully-connected segments. 
Since the first regression results in a probability, I'm using cross entropy for the loss, whereas the others use classical L2 distance. However, due to their nature, the probability loss is around the order of 0..1, while the orientation losses can be much larger, say 0..10. I already normalized both input and output values and use clipping ``` normalized = tf.clip_by_average_norm(inferred.sin_cos, clip_norm=2.) ``` in cases where things can get really bad. I've been (successfully) using the Adam optimizer to optimize on the tensor containing all distinct losses (rather than reduce_suming them), like so: ``` reg_loss = tf.reduce_sum(tf.get_collection(tf.GraphKeys.REGULARIZATION_LOSSES)) loss = tf.pack([loss_probability, sin_cos_mse, magnitude_mse, pos_mse, reg_loss]) optimizer = tf.train.AdamOptimizer(learning_rate=learning_rate, epsilon=self.params.adam_epsilon) op_minimize = optimizer.minimize(loss, global_step=global_step) ``` In order to display the results in TensorBoard, I then actually do ``` loss_sum = tf.reduce_sum(loss) ``` for a scalar summary. Adam is set to learning rate 1e-4 and epsilon 1e-4 (I see the same behavior with the default value for epsilon, and it breaks even faster when I keep the learning rate at 1e-3). Regularization also has no influence on this one; it happens sort-of consistently at some point. I should also add that stopping the training and restarting from the last checkpoint - implying that the training input files are shuffled again as well - results in the same behavior. The training always seems to behave similarly at that point.
The equations for Adam are ``` t <- t + 1 lr_t <- learning_rate * sqrt(1 - beta2^t) \/ (1 - beta1^t) m_t <- beta1 * m_{t-1} + (1 - beta1) * g v_t <- beta2 * v_{t-1} + (1 - beta2) * g * g variable <- variable - lr_t * m_t \/ (sqrt(v_t) + epsilon) ``` where m is an exponential moving average of the mean gradient and v is an exponential moving average of the squares of the gradients. The problem is that when you have been training for a long time and are close to the optimum, v can become very small. If the gradients then all of a sudden start increasing again, they will be divided by a very small number and explode. By default beta1=0.9 and beta2=0.999, so m changes much more quickly than v. Thus m can start being big again while v is still small and cannot catch up. To remedy this problem you can increase epsilon, which is 1e-8 by default, thus avoiding the division by almost 0. Depending on your network, a value of epsilon of 0.1, 0.01, or 0.001 might be good.
Things I tried so far, in order to answer this: Checking the Keras metrics documentation (no explanation there about what these tensors are). Checking the source code for the Keras metrics and trying to understand these tensors by looking at the Keras implementation for other metrics (This seems to suggest that y_true and y_pred have the labels for an entire batch, but I'm not sure). Reading these stackoverflow questions: 1, 2, 3, and others (none answer my question, most are centered on the OP not clearly understanding the difference between a tensor and the values computed using that tensor during the session). Printing the values of y_true and y_pred during the optimization, by defining a metric like this: ```py def test_metric(y_true, y_pred): y_true = K.print_tensor(y_true) y_pred = K.print_tensor(y_pred) return y_true - y_pred ``` (unfortunately these don't print anything during the optimization).", "response":"y_true and y_pred The tensor y_true is the true data (or target, ground truth) you pass to the fit method. It's a conversion of the numpy array y_train into a tensor. The tensor y_pred is the data predicted (calculated, output) by your model. Usually, both y_true and y_pred have exactly the same shape. A few of the losses, such as the sparse ones, may accept them with different shapes. The shape of y_true It contains an entire batch. Its first dimension is always the batch size, and it must exist, even if the batch has only one element. Two very easy ways to find the shape of y_true are: check your true\/target data: print(Y_train.shape) check your model.summary() and see the last output But its first dimension will be the batch size. So, if your last layer outputs (None, 1), the shape of y_true is (batch, 1). If the last layer outputs (None, 200,200, 3), then y_true will be (batch, 200,200,3). 
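The batch contract can be illustrated with a framework-free sketch (plain Python lists stand in for the tensors here; this is an illustration added for clarity, not Keras internals): y_true and y_pred each hold a whole batch, the metric computes a per-sample value, and a mean reduces it to the single scalar that gets logged.

```python
# Sketch of the custom-metric shape contract: both arguments are
# batch-sized sequences whose first dimension is the batch size.
def mean_absolute_error(y_true, y_pred):
    """Compute per-sample absolute errors, then reduce to one scalar."""
    per_sample = [abs(t - p) for t, p in zip(y_true, y_pred)]
    return sum(per_sample) / len(per_sample)  # scalar, like K.mean()

batch_true = [1.0, 0.0, 1.0, 1.0]       # shape (batch,) -- whole batch at once
batch_pred = [0.75, 0.25, 0.875, 0.5]
print(mean_absolute_error(batch_true, batch_pred))  # 0.28125
```

The real Keras metric does the same thing with tensors instead of lists, which is why a single call sees the entire batch rather than one data point.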
Custom metrics and loss functions Unfortunately, printing custom metrics will not reveal their content (unless you are running in eager mode and you have calculated every step of the model with data). You can see their shapes with print(K.int_shape(y_pred)), for instance. Remember that these libraries first \"compile a graph\", then later \"run it with data\". When you define your loss, you're in the compile phase, and asking for data needs the model to run. But even if the result of your metric is multidimensional, keras will automatically find a way to output a single scalar for that metric. (Not sure what the operation is, but very probably a K.mean() hidden under the table - it's interesting to return the entire batch, so Keras can apply other operations such as sample weights, for instance). Sources. After you get used to keras, this understanding becomes natural from simply reading this part: y_true: True labels. Theano\/TensorFlow tensor. y_pred: Predictions. Theano\/TensorFlow tensor of the same shape as y_true. True labels mean true\/target data. Labels is a badly chosen word here; it is only really \"labels\" in classification models. Predictions mean the results of your model.", "best_answers_score":0.8, "library_name":"tensorflow", "question_url":"https:\/\/stackoverflow.com\/questions\/46663013\/what-is-y-true-and-y-pred-when-creating-a-custom-metric-in-keras", "best_answers_votes":44, "question_length":1614, "response_length":2097 }, { "question":"How do I find the variable names and values that are saved in a checkpoint? I want to see the variables that are saved in a TensorFlow checkpoint along with their values. How can I find the variable names that are saved in a TensorFlow checkpoint? I used tf.train.NewCheckpointReader, which is explained here. But it is not given in the documentation of TensorFlow.
Is there any other way?", "response":"Example usage: ```py from tensorflow.python.tools.inspect_checkpoint import print_tensors_in_checkpoint_file import os checkpoint_path = os.path.join(model_dir, \"model.ckpt\") # List ALL tensors example output: v0\/Adam (DT_FLOAT) [3,3,1,80] print_tensors_in_checkpoint_file(file_name=checkpoint_path, tensor_name='') # List contents of v0 tensor. # Example output: tensor_name: v0 [[[[ 9.27958265e-02 7.40226209e-02 4.52989563e-02 3.15700471e-02 print_tensors_in_checkpoint_file(file_name=checkpoint_path, tensor_name='v0') # List contents of v1 tensor. print_tensors_in_checkpoint_file(file_name=checkpoint_path, tensor_name='v1') ``` Update: all_tensors argument was added to print_tensors_in_checkpoint_file since Tensorflow 0.12.0-rc0 so you may need to add all_tensors=False or all_tensors=True if required. Alternative method: ```py from tensorflow.python import pywrap_tensorflow import os checkpoint_path = os.path.join(model_dir, \"model.ckpt\") reader = pywrap_tensorflow.NewCheckpointReader(checkpoint_path) var_to_shape_map = reader.get_variable_to_shape_map() for key in var_to_shape_map: print(\"tensor_name: \", key) print(reader.get_tensor(key)) # Remove this is you want to print only variable names ``` Hope it helps.", "best_answers_score":0.8, "library_name":"tensorflow", "question_url":"https:\/\/stackoverflow.com\/questions\/38218174\/how-do-i-find-the-variable-names-and-values-that-are-saved-in-a-checkpoint", "best_answers_votes":54, "question_length":389, "response_length":1230 }, { "question":"Error in python after 'import tensorflow': TypeError: __init__() got an unexpected keyword argument 'syntax' I installed TensorFlow on my Ubuntu 15.10 machine as instructed for CPU only: ```sh $ pip install https:\/\/storage.googleapis.com\/tensorflow\/linux\/cpu\/tensorflow-0.5.0-cp27-none-linux_x86_64.whl ``` Then when I run the Python REPL and import tensorflow, I get: ```sh $ python Python 2.7.10 (default, Oct 14 2015, 
16:09:02) [GCC 5.2.1 20151010] on linux2 Type \"help\", \"copyright\", \"credits\" or \"license\" for more information. >>> import tensorflow as tf Traceback (most recent call last): File \"\", line 1, in File \"\/home\/phil\/.local\/lib\/python2.7\/site-packages\/tensorflow\/__init__.py\", line 4, in from tensorflow.python import * File \"\/home\/phil\/.local\/lib\/python2.7\/site-packages\/tensorflow\/python\/__init__.py\", line 13, in from tensorflow.core.framework.graph_pb2 import * File \"\/home\/phil\/.local\/lib\/python2.7\/site-packages\/tensorflow\/core\/framework\/graph_pb2.py\", line 16, in from tensorflow.core.framework import attr_value_pb2 as tensorflow_dot_core_dot_framework_dot_attr__value__pb2 File \"\/home\/phil\/.local\/lib\/python2.7\/site-packages\/tensorflow\/core\/framework\/attr_value_pb2.py\", line 16, in from tensorflow.core.framework import tensor_pb2 as tensorflow_dot_core_dot_framework_dot_tensor__pb2 File \"\/home\/phil\/.local\/lib\/python2.7\/site-packages\/tensorflow\/core\/framework\/tensor_pb2.py\", line 16, in from tensorflow.core.framework import tensor_shape_pb2 as tensorflow_dot_core_dot_framework_dot_tensor__shape__pb2 File \"\/home\/phil\/.local\/lib\/python2.7\/site-packages\/tensorflow\/core\/framework\/tensor_shape_pb2.py\", line 22, in serialized_pb=_b('\\n,tensorflow\/core\/framework\/tensor_shape.proto \\x12\\ntensorflow\\\"d\\n\\x10TensorShapeProto\\x12-\\n\\x03\\x64im\\x18\\x02 \\x03(\\x0b\\x32 .tensorflow.TensorShapeProto.Dim\\x1a!\\n\\x03\\x44im\\x12\\x0c\\n\\x04size\\x18\\x01 \\x01(\\x03\\x12\\x0c\\n\\x04name\\x18\\x02 \\x01(\\tb\\x06proto3') TypeError: __init__() got an unexpected keyword argument 'syntax' ``` I have the Ubuntu protobuf-compiler package installed and it's version 2.6.1-1.2", "response":"Several users have reported issues that arise when an older version of protobuf is installed. TensorFlow requires (and uses a copy of) protobuf-3.0.0a4. 
However, it seems to be conflicting with your installed version (perhaps due to how the PYTHONPATH environment variable is configured on your system?). Can you try uninstalling the python-protobuf package if it is installed, and then seeing if TensorFlow starts correctly? Some users have had success when using a virtualenv. For instructions on installing TensorFlow into a virtualenv, see the getting started guide.", "best_answers_score":0.8, "library_name":"tensorflow", "question_url":"https:\/\/stackoverflow.com\/questions\/33622842\/error-in-python-after-import-tensorflow-typeerror-init-got-an-unexpect", "best_answers_votes":46, "question_length":2080, "response_length":569 }, { "question":"TensorFlow: Max of a tensor along an axis My question is in two connected parts: How do I calculate the max along a certain axis of a tensor? For example, if I have ``` x = tf.constant([[1,220,55],[4,3,-1]]) ``` I want something like ``` x_max = tf.max(x, axis=1) print sess.run(x_max) output: [220,4] ``` I know there is a tf.argmax and a tf.maximum, but neither gives the maximum value along an axis of a single tensor. For now I have a workaround: ``` x_max = tf.slice(x, begin=[0,0], size=[-1,1]) for a in range(1,2): x_max = tf.maximum(x_max , tf.slice(x, begin=[0,a], size=[-1,1])) ``` But it looks less than optimal. Is there a better way to do this? Given the indices of an argmax of a tensor, how do I index into another tensor using those indices? Using the example of x above, how do I do something like the following: ``` ind_max = tf.argmax(x, dimension=1) #output is [1,0] y = tf.constant([[1,2,3], [6,5,4]]) y_ = y[:, ind_max] #y_ should be [2,6] ``` I know slicing, like the last line, does not exist in TensorFlow yet (#206). My question is: what is the best workaround for my specific case (maybe using other methods like gather, select, etc.)?
Additional information: I know x and y are going to be two dimensional tensors only!", "response":"The tf.reduce_max() operator provides exactly this functionality. By default it computes the global maximum of the given tensor, but you can specify a list of reduction_indices, which has the same meaning as axis in NumPy. To complete your example: ``` x = tf.constant([[1, 220, 55], [4, 3, -1]]) x_max = tf.reduce_max(x, reduction_indices=[1]) print sess.run(x_max) # ==> \"array([220, 4], dtype=int32)\" ``` If you compute the argmax using tf.argmax(), you could obtain the values from a different tensor y by flattening y using tf.reshape(), converting the argmax indices into vector indices as follows, and using tf.gather() to extract the appropriate values: ``` ind_max = tf.argmax(x, dimension=1) y = tf.constant([[1, 2, 3], [6, 5, 4]]) flat_y = tf.reshape(y, [-1]) # Reshape to a vector. # N.B. Handles 2-D case only. flat_ind_max = ind_max + tf.cast(tf.range(tf.shape(y)[0]) * tf.shape(y)[1], tf.int64) y_ = tf.gather(flat_y, flat_ind_max) print sess.run(y_) # ==> \"array([2, 6], dtype=int32)\" ```
How can I start the tutorial: ``` import input_data mnist = input_data.read_data_sets(\"MNIST_data\/\", one_hot=True) --------------------------------------------------------------------------- ImportError Traceback (most recent call last) in () ----> 1 import input_data 2 mnist = tf.input_data.read_data_sets(\"MNIST_data\/\", one_hot=True) ImportError: No module named input_data ``` I'm using iPython (Jupyter) so do I need to change my working directory to this folder I downloaded? or can I add this to my tensorflow directory? If so, where do I add the files? I installed tensorflow with pip (on my OSX) and the current location is ~\/anaconda\/lib\/python2.7\/site-packages\/tensorflow\/__init__.py Are these files meant to be accessed directly through tensorflow like sklearn datasets? or am I just supposed to cd into the directory and work from there? The example is not clear. EDIT: This post is very out-dated", "response":"So let's assume that you are in the directory: \/somePath\/tensorflow\/tutorial (and this is your working directory). All you need to do is to download the input_data.py file and place it like this. Let's assume that the file name you invoke: ``` import input_data mnist = input_data.read_data_sets(\"MNIST_data\/\", one_hot=True) ... ``` is main.py and it is also in the same directory. Once this is done, you can just start running main.py which will start downloading the files and will put them in the MNIST_data folder (once they are there the script will not be downloading them next time).", "best_answers_score":0.8, "library_name":"tensorflow", "question_url":"https:\/\/stackoverflow.com\/questions\/33664651\/import-input-data-mnist-tensorflow-not-working", "best_answers_votes":34, "question_length":1134, "response_length":590 }, { "question":"What is the difference between Luong attention and Bahdanau attention? These two attentions are used in seq2seq modules. 
The two different attentions are introduced as multiplicative and additive attentions in this TensorFlow documentation. What is the difference?", "response":"I went through this Effective Approaches to Attention-based Neural Machine Translation. In section 3.1 they mention the difference between the two attentions as follows: Luong attention uses top hidden layer states in both the encoder and decoder. But Bahdanau attention takes the concatenation of the forward and backward source hidden states (Top Hidden Layer). In Luong attention they take the decoder hidden state at time t, then calculate attention scores and from those the context vector, which is concatenated with the hidden state of the decoder before predicting. But in Bahdanau attention, at time t we consider the t-1 hidden state of the decoder. Then we calculate alignment and context vectors as above. But then we concatenate this context with the hidden state of the decoder at t-1. So before the softmax this concatenated vector goes inside a GRU. Luong has different types of alignments. Bahdanau has only the concat score alignment model.", "best_answers_score":0.8, "library_name":"tensorflow", "question_url":"https:\/\/stackoverflow.com\/questions\/44238154\/what-is-the-difference-between-luong-attention-and-bahdanau-attention", "best_answers_votes":39, "question_length":264, "response_length":941 }, { "question":"TensorFlow operator overloading What is the difference between ``` tf.add(x, y) ``` and ``` x + y ``` in TensorFlow? What would be different in your computation graph when you construct your graph with + instead of tf.add()? More generally, are + or other operations overloaded for tensors?", "response":"If at least one of x or y is a tf.Tensor object, the expressions tf.add(x, y) and x + y are equivalent. The main reason you might use tf.add() is to specify an explicit name keyword argument for the created op, which is not possible with the overloaded operator version.
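The mechanism behind this is ordinary Python operator overloading; a toy, framework-free sketch (a hypothetical Tensor class added for illustration, not TensorFlow code) shows how defining __add__ makes x + y build the same operation as an explicit add call, just without a way to pass a custom op name:

```python
# Toy sketch of operator overloading: a Tensor-like class whose __add__
# builds a named "add" result, mirroring how TensorFlow makes `x + y`
# call the same code path as tf.add(x, y).
class Tensor:
    def __init__(self, value, name="Const"):
        self.value, self.name = value, name

    def __add__(self, other):                 # enables `x + y`
        other = other if isinstance(other, Tensor) else Tensor(other)
        return Tensor(self.value + other.value, name="add")

    def __radd__(self, other):                # enables `3 + x` as well
        return Tensor(other) + self

x, y = Tensor(2), Tensor(5)
print((x + y).value, (x + y).name)   # 7 add
print((3 + x).value)                 # 5
```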
Note that if neither x nor y is a tf.Tensor\u2014for example if they are NumPy arrays\u2014then x + y will not create a TensorFlow op. tf.add() always creates a TensorFlow op and converts its arguments to tf.Tensor objects. Therefore, if you are writing a library function that might accept both tensors and NumPy arrays, you might prefer to use tf.add(). The following operators are overloaded in the TensorFlow Python API: __neg__ (unary -) __abs__ (abs()) __invert__ (unary ~) __add__ (binary +) __sub__ (binary -) __mul__ (binary elementwise *) __div__ (binary \/ in Python 2) __floordiv__ (binary \/\/ in Python 3) __truediv__ (binary \/ in Python 3) __mod__ (binary %) __pow__ (binary **) __and__ (binary &) __or__ (binary |) __xor__ (binary ^) __lt__ (binary ) __ge__ (binary >=) Please note, __eq__ ( binary == ) is not overloaded. x == y will simply return a Python boolean whether x and y refer to the same tensor. You need to use tf.equal() explicitly to check for element-wise equality. Same goes for not equal, __ne__ ( binary != ).", "best_answers_score":0.8, "library_name":"tensorflow", "question_url":"https:\/\/stackoverflow.com\/questions\/35094899\/tensorflow-operator-overloading", "best_answers_votes":76, "question_length":290, "response_length":1302 }, { "question":"Is there an easy way to get something like Keras model.summary in Tensorflow? I have been working with Keras and really liked the model.summary() It gives a good overview of the size of the different layers and especially an overview of the number of parameters the model has. Is there a similar function in Tensorflow? 
I could find nothing on Stackoverflow or the Tensorflow API documentation.", "response":"Looks like you can use Slim Example: ``` import numpy as np from tensorflow.python.layers import base import tensorflow as tf import tensorflow.contrib.slim as slim x = np.zeros((1,4,4,3)) x_tf = tf.convert_to_tensor(x, np.float32) z_tf = tf.layers.conv2d(x_tf, filters=32, kernel_size=(3,3)) def model_summary(): model_vars = tf.trainable_variables() slim.model_analyzer.analyze_vars(model_vars, print_info=True) model_summary() ``` Output: ``` --------- Variables: name (type shape) [size] --------- conv2d\/kernel:0 (float32_ref 3x3x3x32) [864, bytes: 3456] conv2d\/bias:0 (float32_ref 32) [32, bytes: 128] Total size of variables: 896 Total bytes of variables: 3584 ``` Also here is an example of custom function to print model summary: https:\/\/github.com\/NVlabs\/stylegan\/blob\/f3a044621e2ab802d40940c16cc86042ae87e100\/dnnlib\/tflib\/network.py#L507 If you already have .pb tensorflow model you can use: inspect_pb.py to print model info or use tensorflow summarize_graph tool with --print_structure flag, also it's nice that it can detect input and output names.", "best_answers_score":0.8, "library_name":"tensorflow", "question_url":"https:\/\/stackoverflow.com\/questions\/46560313\/is-there-an-easy-way-to-get-something-like-keras-model-summary-in-tensorflow", "best_answers_votes":62, "question_length":394, "response_length":1062 }, { "question":"Change images slider step in TensorBoard TensorBoard 1.1.0's images history. I would like to set the slider's position (on top of the black image with 7) more precisely, to be able to select any step. Now I can only select e.g. between steps 2050 or 2810. Is that possible? Maybe a place in sources where the 10 constant is hardcoded?", "response":"I answered this question over there \"TensorBoard doesn't show all data points\", but this seems to be more popular so I will quote it here. 
You don't have to change the source code for this, there is a flag called --samples_per_plugin. Quoting from the help command --samples_per_plugin: An optional comma separated list of plugin_name=num_samples pairs to explicitly specify how many samples to keep per tag for that plugin. For unspecified plugins, TensorBoard randomly downsamples logged summaries to reasonable values to prevent out-of-memory errors for long running jobs. This flag allows fine control over that downsampling. Note that 0 means keep all samples of that type. For instance, \"scalars=500,images=0\" keeps 500 scalars and all images. Most users should not need to set this flag. (default: '') So if you want to have a slider of 100 images, use: tensorboard --samples_per_plugin images=100", "best_answers_score":0.8, "library_name":"tensorflow", "question_url":"https:\/\/stackoverflow.com\/questions\/43763858\/change-images-slider-step-in-tensorboard", "best_answers_votes":73, "question_length":334, "response_length":904 }, { "question":"Can I export a tensorflow summary to CSV? Is there a way to extract scalar summaries to CSV (preferably from within tensorboard) from tfevents files? Example code The following code generates tfevent files in a summary_dir within the same directory. Suppose you let it run and you find something interesting. You want to get the raw data for further investigation. How would you do that? ```python #!\/usr\/bin\/env python \"\"\"A very simple MNIST classifier.\"\"\" import argparse import sys from tensorflow.examples.tutorials.mnist import input_data import tensorflow as tf ce_with_logits = tf.nn.softmax_cross_entropy_with_logits FLAGS = None def inference(x): \"\"\" Build the inference graph. Parameters ---------- x : placeholder Returns ------- Output tensor with the computed logits. 
\"\"\" W = tf.Variable(tf.zeros([784, 10])) b = tf.Variable(tf.zeros([10])) y = tf.matmul(x, W) + b return y def loss(logits, labels): \"\"\" Calculate the loss from the logits and the labels. Parameters ---------- logits : Logits tensor, float - [batch_size, NUM_CLASSES]. labels : Labels tensor, int32 - [batch_size] \"\"\" cross_entropy = tf.reduce_mean(ce_with_logits(labels=labels, logits=logits)) return cross_entropy def training(loss, learning_rate=0.5): \"\"\" Set up the training Ops. Parameters ---------- loss : Loss tensor, from loss(). learning_rate : The learning rate to use for gradient descent. Returns ------- train_op: The Op for training. \"\"\" optimizer = tf.train.GradientDescentOptimizer(learning_rate) train_step = optimizer.minimize(loss) return train_step def main(_): # Import data mnist = input_data.read_data_sets(FLAGS.data_dir, one_hot=True) # Create the model x = tf.placeholder(tf.float32, [None, 784]) y = inference(x) # Define loss and optimizer y_ = tf.placeholder(tf.float32, [None, 10]) loss_ = loss(logits=y, labels=y_) train_step = training(loss_) # Test trained model correct_prediction = tf.equal(tf.argmax(y, 1), tf.argmax(y_, 1)) accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32)) with tf.name_scope('accuracy'): tf.summary.scalar('accuracy', accuracy) merged = tf.summary.merge_all() sess = tf.InteractiveSession() train_writer = tf.summary.FileWriter('summary_dir\/train', sess.graph) test_writer = tf.summary.FileWriter('summary_dir\/test', sess.graph) tf.global_variables_initializer().run() for train_step_i in range(100000): if train_step_i % 100 == 0: summary, acc = sess.run([merged, accuracy], feed_dict={x: mnist.test.images, y_: mnist.test.labels}) test_writer.add_summary(summary, train_step_i) summary, acc = sess.run([merged, accuracy], feed_dict={x: mnist.train.images, y_: mnist.train.labels}) train_writer.add_summary(summary, train_step_i) batch_xs, batch_ys = mnist.train.next_batch(100) sess.run(train_step, 
feed_dict={x: batch_xs, y_: batch_ys}) print(sess.run(accuracy, feed_dict={x: mnist.test.images, y_: mnist.test.labels})) if __name__ == '__main__': parser = argparse.ArgumentParser() parser.add_argument('--data_dir', type=str, default='\/tmp\/tensorflow\/mnist\/input_data', help='Directory for storing input data') FLAGS, unparsed = parser.parse_known_args() tf.app.run(main=main, argv=[sys.argv[0]] + unparsed) ```", "response":"While the answer here is as requested within tensorboard it only allows to download a csv for a single run of a single tag. If you have for example 10 tags and 20 runs (what is not at all much) you would need to do the above step 200 times (that alone will probably take you more than a hour). If now you for some reason would like to actually do something with the data for all runs for a single tag you would need to write some weird CSV accumulation script or copy everything by hand (what will probably cost you more than a day). Therefore I would like to add a solution that extracts a CSV file for every tag with all runs contained. Column headers are the run path names and row indices are the run step numbers. 
```py import os import numpy as np import pandas as pd from collections import defaultdict from tensorboard.backend.event_processing.event_accumulator import EventAccumulator def tabulate_events(dpath): summary_iterators = [EventAccumulator(os.path.join(dpath, dname)).Reload() for dname in os.listdir(dpath)] tags = summary_iterators[0].Tags()['scalars'] for it in summary_iterators: assert it.Tags()['scalars'] == tags out = defaultdict(list) steps = [] for tag in tags: steps = [e.step for e in summary_iterators[0].Scalars(tag)] for events in zip(*[acc.Scalars(tag) for acc in summary_iterators]): assert len(set(e.step for e in events)) == 1 out[tag].append([e.value for e in events]) return out, steps def to_csv(dpath): dirs = os.listdir(dpath) d, steps = tabulate_events(dpath) tags, values = zip(*d.items()) np_values = np.array(values) for index, tag in enumerate(tags): df = pd.DataFrame(np_values[index], index=steps, columns=dirs) df.to_csv(get_file_path(dpath, tag)) def get_file_path(dpath, tag): file_name = tag.replace(\"\/\", \"_\") + '.csv' folder_path = os.path.join(dpath, 'csv') if not os.path.exists(folder_path): os.makedirs(folder_path) return os.path.join(folder_path, file_name) if __name__ == '__main__': path = \"path_to_your_summaries\" to_csv(path) ``` My solution builds upon: https:\/\/stackoverflow.com\/a\/48774926\/2230045 EDIT: I created a more sophisticated version and released it on GitHub: https:\/\/github.com\/Spenhouet\/tensorboard-aggregator This version aggregates multiple tensorboard runs and is able to save the aggregates to a new tensorboard summary or as a .csv file.", "best_answers_score":0.8, "library_name":"tensorflow", "question_url":"https:\/\/stackoverflow.com\/questions\/42355122\/can-i-export-a-tensorflow-summary-to-csv", "best_answers_votes":43, "question_length":3167, "response_length":2322 }, { "question":"Tensorflow Keras Copy Weights From One Model to Another Using Keras from Tensorflow 1.4.1, how does one copy weights from 
one model to another? As some background, I'm trying to implement a deep-q network (DQN) for Atari games following the DQN publication by DeepMind. My understanding is that the implementation uses two networks, Q and Q'. The weights of Q are trained using gradient descent, and then the weights are copied periodically to Q'. Here's how I build Q and Q': ``` ACT_SIZE = 4 LEARN_RATE = 0.0025 OBS_SIZE = 128 def buildModel(): model = tf.keras.models.Sequential() model.add(tf.keras.layers.Lambda(lambda x: x \/ 255.0, input_shape=OBS_SIZE)) model.add(tf.keras.layers.Dense(128, activation=\"relu\")) model.add(tf.keras.layers.Dense(128, activation=\"relu\")) model.add(tf.keras.layers.Dense(ACT_SIZE, activation=\"linear\")) opt = tf.keras.optimizers.RMSprop(lr=LEARN_RATE) model.compile(loss=\"mean_squared_error\", optimizer=opt) return model ``` I call that twice to get Q and Q'. I have an updateTargetModel method below that is my attempt at copying weights. The code runs fine, but my overall DQN implementation is failing. I'm really just trying to verify if this is a valid way of copying weights from one network to another. ``` def updateTargetModel(model, targetModel): modelWeights = model.trainable_weights targetModelWeights = targetModel.trainable_weights for i in range(len(targetModelWeights)): targetModelWeights[i].assign(modelWeights[i]) ``` There's another question here that discusses saving and loading weights to and from disk (Tensorflow Copy Weights Issue), but there's no accepted answer. There is also a question about loading weights from individual layers (Copying weights from one Conv2D layer to another), but I'm wanting to copy the entire model's weights.", "response":"Actually what you've done is much more than simply copying weights. You made these two models identical all the time. Every time you update one model - the second one is also updated - as both models have the same weights variables. 
If you want to just copy weights - the simplest way is by this command: ``` target_model.set_weights(model.get_weights()) ```", "best_answers_score":0.8, "library_name":"tensorflow", "question_url":"https:\/\/stackoverflow.com\/questions\/48547688\/tensorflow-keras-copy-weights-from-one-model-to-another", "best_answers_votes":82, "question_length":1801, "response_length":358 }, { "question":"keras tensorboard: plot train and validation scalars in a same figure So I am using tensorboard within keras. In tensorflow one could use two different summarywriters for train and validation scalars so that tensorboard could plot them in a same figure. Something like the figure in TensorBoard - Plot training and validation losses on the same graph? Is there a way to do this in keras? Thanks.", "response":"To handle the validation logs with a separate writer, you can write a custom callback that wraps around the original TensorBoard methods. ```py import os import tensorflow as tf from keras.callbacks import TensorBoard class TrainValTensorBoard(TensorBoard): def __init__(self, log_dir='.\/logs', **kwargs): # Make the original `TensorBoard` log to a subdirectory 'training' training_log_dir = os.path.join(log_dir, 'training') super(TrainValTensorBoard, self).__init__(training_log_dir, **kwargs) # Log the validation metrics to a separate subdirectory self.val_log_dir = os.path.join(log_dir, 'validation') def set_model(self, model): # Setup writer for validation metrics self.val_writer = tf.summary.FileWriter(self.val_log_dir) super(TrainValTensorBoard, self).set_model(model) def on_epoch_end(self, epoch, logs=None): # Pop the validation logs and handle them separately with # `self.val_writer`. 
Also rename the keys so that they can # be plotted on the same figure with the training metrics logs = logs or {} val_logs = {k.replace('val_', ''): v for k, v in logs.items() if k.startswith('val_')} for name, value in val_logs.items(): summary = tf.Summary() summary_value = summary.value.add() summary_value.simple_value = value.item() summary_value.tag = name self.val_writer.add_summary(summary, epoch) self.val_writer.flush() # Pass the remaining logs to `TensorBoard.on_epoch_end` logs = {k: v for k, v in logs.items() if not k.startswith('val_')} super(TrainValTensorBoard, self).on_epoch_end(epoch, logs) def on_train_end(self, logs=None): super(TrainValTensorBoard, self).on_train_end(logs) self.val_writer.close() ``` In __init__, two subdirectories are set up for training and validation logs In set_model, a writer self.val_writer is created for the validation logs In on_epoch_end, the validation logs are separated from the training logs and written to file with self.val_writer Using the MNIST dataset as an example: ```py from keras.models import Sequential from keras.layers import Dense from keras.datasets import mnist (x_train, y_train), (x_test, y_test) = mnist.load_data() x_train = x_train.reshape(60000, 784) x_test = x_test.reshape(10000, 784) x_train = x_train.astype('float32') x_test = x_test.astype('float32') x_train \/= 255 x_test \/= 255 model = Sequential() model.add(Dense(64, activation='relu', input_shape=(784,))) model.add(Dense(10, activation='softmax')) model.compile(loss='sparse_categorical_crossentropy', optimizer='adam', metrics=['accuracy']) model.fit(x_train, y_train, epochs=10, validation_data=(x_test, y_test), callbacks=[TrainValTensorBoard(write_graph=False)]) ``` You can then visualize the two curves on a same figure in TensorBoard. EDIT: I've modified the class a bit so that it can be used with eager execution. The biggest change is that I use tf.keras in the following code. 
It seems that the TensorBoard callback in standalone Keras does not support eager mode yet. ```py import os import tensorflow as tf from tensorflow.keras.callbacks import TensorBoard from tensorflow.python.eager import context class TrainValTensorBoard(TensorBoard): def __init__(self, log_dir='.\/logs', **kwargs): self.val_log_dir = os.path.join(log_dir, 'validation') training_log_dir = os.path.join(log_dir, 'training') super(TrainValTensorBoard, self).__init__(training_log_dir, **kwargs) def set_model(self, model): if context.executing_eagerly(): self.val_writer = tf.contrib.summary.create_file_writer(self.val_log_dir) else: self.val_writer = tf.summary.FileWriter(self.val_log_dir) super(TrainValTensorBoard, self).set_model(model) def _write_custom_summaries(self, step, logs=None): logs = logs or {} val_logs = {k.replace('val_', ''): v for k, v in logs.items() if 'val_' in k} if context.executing_eagerly(): with self.val_writer.as_default(), tf.contrib.summary.always_record_summaries(): for name, value in val_logs.items(): tf.contrib.summary.scalar(name, value.item(), step=step) else: for name, value in val_logs.items(): summary = tf.Summary() summary_value = summary.value.add() summary_value.simple_value = value.item() summary_value.tag = name self.val_writer.add_summary(summary, step) self.val_writer.flush() logs = {k: v for k, v in logs.items() if not 'val_' in k} super(TrainValTensorBoard, self)._write_custom_summaries(step, logs) def on_train_end(self, logs=None): super(TrainValTensorBoard, self).on_train_end(logs) self.val_writer.close() ``` The idea is the same -- Check the source code of TensorBoard callback See what it does to set up the writer Do the same thing in this custom callback Again, you can use the MNIST data to test it, ```py from tensorflow.keras.datasets import mnist from tensorflow.keras.models import Sequential from tensorflow.keras.layers import Dense from tensorflow.train import AdamOptimizer tf.enable_eager_execution() (x_train, 
y_train), (x_test, y_test) = mnist.load_data() x_train = x_train.reshape(60000, 784) x_test = x_test.reshape(10000, 784) x_train = x_train.astype('float32') x_test = x_test.astype('float32') x_train \/= 255 x_test \/= 255 y_train = y_train.astype(int) y_test = y_test.astype(int) model = Sequential() model.add(Dense(64, activation='relu', input_shape=(784,))) model.add(Dense(10, activation='softmax')) model.compile(loss='sparse_categorical_crossentropy', optimizer=AdamOptimizer(), metrics=['accuracy']) model.fit(x_train, y_train, epochs=10, validation_data=(x_test, y_test), callbacks=[TrainValTensorBoard(write_graph=False)]) ```", "best_answers_score":0.8, "library_name":"tensorflow", "question_url":"https:\/\/stackoverflow.com\/questions\/47877475\/keras-tensorboard-plot-train-and-validation-scalars-in-a-same-figure", "best_answers_votes":72, "question_length":395, "response_length":5458 }, { "question":"Illegal instruction (core dumped) after running import tensorflow I created a fresh virtual environment: virtualenv -p python2 test_venv\/ And installed tensorflow: pip install --upgrade --no-cache-dir tensorflow import tensorflow gives me Illegal instruction (core dumped) Please help me understand what's going on and how I can fix it. Thank you. 
CPU information: ``` -cpu description: CPU product: Intel(R) Core(TM) i3 CPU M 330 @ 2.13GHz bus info: cpu@0 version: CPU Version capabilities: x86-64 fpu fpu_exception wp vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx rdtscp constant_tsc arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc aperfmperf pni dtes64 monitor ds_cpl vmx est tm2 ssse3 cx16 xtpr pdcm sse4_1 sse4_2 popcnt lahf_lm tpr_shadow vnmi flexpriority ept vpid dtherm arat cpufreq ``` Stacktrace obtained with gdb: ``` #0 0x00007fffe5793880 in std::pair >, false, true>, bool> std::_Hashtable >, std::allocator > >, std::__detail::_Select1st, std::equal_to, tensorflow::StringPieceHasher, std::__detail::_Mod_range_hashing, std::__detail::_Default_ranged_hash, std::__detail::_Prime_rehash_policy, std::__detail::_Hashtable_traits >::_M_emplace > >(std::integral_constant, std::pair >&&) () from \/media\/gerry\/hdd_1\/ws_hdd\/test_venv\/local\/lib\/python2.7\/site-packages\/tensorflow\/python\/..\/libtensorflow_framework.so #1 0x00007fffe5795735 in tensorflow::UnaryVariantOpRegistry::RegisterDecodeFn(std::string const&, std::function const&) () from \/media\/gerry\/hdd_1\/ws_hdd\/test_venv\/local\/lib\/python2.7\/site-packages\/tensorflow\/python\/..\/libtensorflow_framework.so #2 0x00007fffe5770a7c in tensorflow::variant_op_registry_fn_registration::UnaryVariantDecodeRegistration::UnaryVariantDecodeRegistration(std::string const&) () from \/media\/gerry\/hdd_1\/ws_hdd\/test_venv\/local\/lib\/python2.7\/site-packages\/tensorflow\/python\/..\/libtensorflow_framework.so #3 0x00007fffe56ea165 in _GLOBAL__sub_I_tensor.cc () from \/media\/gerry\/hdd_1\/ws_hdd\/test_venv\/local\/lib\/python2.7\/site-packages\/tensorflow\/python\/..\/libtensorflow_framework.so #4 0x00007ffff7de76ba in call_init (l=, argc=argc@entry=2, argv=argv@entry=0x7fffffffd5c8, env=env@entry=0xa7b4d0) at dl-init.c:72 #5 0x00007ffff7de77cb in call_init 
(env=0xa7b4d0, argv=0x7fffffffd5c8, argc=2, l=) at dl-init.c:30 #6 _dl_init (main_map=main_map@entry=0xa11920, argc=2, argv=0x7fffffffd5c8, env=0xa7b4d0) at dl-init.c:120 #7 0x00007ffff7dec8e2 in dl_open_worker (a=a@entry=0x7fffffffb5c0) at dl-open.c:575 #8 0x00007ffff7de7564 in _dl_catch_error (objname=objname@entry=0x7fffffffb5b0, errstring=errstring@entry=0x7fffffffb5b8, mallocedp=mallocedp@entry=0x7fffffffb5af, operate=operate@entry=0x7ffff7dec4d0 , args=args@entry=0x7fffffffb5c0) at dl-error.c:187 #9 0x00007ffff7debda9 in _dl_open ( file=0x7fffea7cbc34 \"\/media\/gerry\/hdd_1\/ws_hdd\/test_venv\/local\/lib\/python2.7\/site-packages\/tensorflow\/python\/_pywrap_tensorflow_internal.so\", mode=-2147483646, caller_dlopen=0x51ad19 , nsid=-2, argc=, argv=, env=0xa7b4d0) at dl-open.c:660 #10 0x00007ffff75ecf09 in dlopen_doit (a=a@entry=0x7fffffffb7f0) at dlopen.c:66 #11 0x00007ffff7de7564 in _dl_catch_error (objname=0x9b1870, errstring=0x9b1878, mallocedp=0x9b1868, operate=0x7ffff75eceb0 , args=0x7fffffffb7f0) at dl-error.c:187 #12 0x00007ffff75ed571 in _dlerror_run (operate=operate@entry=0x7ffff75eceb0 , args=args@entry=0x7fffffffb7f0) at dlerror.c:163 #13 0x00007ffff75ecfa1 in __dlopen (file=, mode=) at dlopen.c:87 #14 0x000000000051ad19 in _PyImport_GetDynLoadFunc () #15 0x000000000051a8e4 in _PyImport_LoadDynamicModule () #16 0x00000000005b7b1b in ?? () #17 0x00000000004bc3fa in PyEval_EvalFrameEx () #18 0x00000000004c136f in PyEval_EvalFrameEx () #19 0x00000000004b9ab6 in PyEval_EvalCodeEx () #20 0x00000000004b97a6 in PyEval_EvalCode () #21 0x00000000004b96df in PyImport_ExecCodeModuleEx () #22 0x00000000004b2b06 in ?? () #23 0x00000000004a4ae1 in ?? () ```", "response":"I would use older version. Looks like your CPU does not support AVX instructions. Quoting from their Release Page ``` Breaking Changes Prebuilt binaries are now built against CUDA 9.0 and cuDNN 7. Prebuilt binaries will use AVX instructions. This may break TF on older CPUs. 
``` You have at least two options: Use tensorflow 1.5 or older Build from source Regarding your concern for differences, you will miss out on new features, but most basic features and the documentation are not that different.", "best_answers_score":0.8, "library_name":"tensorflow", "question_url":"https:\/\/stackoverflow.com\/questions\/49094597\/illegal-instruction-core-dumped-after-running-import-tensorflow", "best_answers_votes":58, "question_length":3960, "response_length":496 }, { "question":"What is difference frozen_inference_graph.pb and saved_model.pb? I have a trained model (Faster R-CNN) which I exported using export_inference_graph.py to use for inference. I'm trying to understand the difference between the created frozen_inference_graph.pb and saved_model.pb and also model.ckpt* files. I've also seen .pbtxt representations. I tried reading through this but couldn't really find the answers: https:\/\/www.tensorflow.org\/extend\/tool_developers\/ What does each of these files contain? Which ones can be converted to which other ones? What is the ideal purpose of each?", "response":"frozen_inference_graph.pb is a frozen graph that cannot be trained anymore; it defines the graphdef and is actually a serialized graph, and can be loaded with this code: ``` def load_graph(frozen_graph_filename): with tf.gfile.GFile(frozen_graph_filename, \"rb\") as f: graph_def = tf.GraphDef() graph_def.ParseFromString(f.read()) return graph_def tf.import_graph_def(load_graph(\"frozen_inference_graph.pb\")) ``` the saved model is a model generated by tf.saved_model.builder and has to be imported into a session; this file contains the full graph with all training weights (just like the frozen graph), but here it can be trained upon, and this one is not serialized and needs to be loaded by this snippet. The [] are tag constants which can be read by the saved_model_cli. 
This model is also often served to predict on, like Google ML Engine for example: ``` with tf.Session() as sess: tf.saved_model.loader.load(sess, [], \"foldername to saved_model.pb, only folder\") ``` model.ckpt files are checkpoints, generated during training; these are used to resume training or to have a backup when something goes wrong after a long training run. If you have a saved model and a frozen graph, then you can ignore this. .pbtxt files are basically the same as the previously discussed models, but human-readable, not binary. These can be ignored as well. To answer your conversion question: saved models can be transformed into a frozen graph and vice versa, although a saved_model extracted from a frozen graph is also not trainable, but the way it is stored is in saved model format. Checkpoints can be read in and loaded into a session, and there you can build a saved model from them. Hope I helped, any questions, ask away! ADDITION: How to freeze a graph, starting from a saved model folder structure. This post is old, so the method I used before might not work anymore, but it will most likely still work with Tensorflow 1.+. 
Start off by downloading this file from the tensorflow library, and then this code snippet should do the trick: ``` import freeze_graph # the file you just downloaded from tensorflow.python.saved_model import tag_constants # might be unnecessary freeze_graph.freeze_graph( input_graph=None, input_saver=None, input_binary=None, input_checkpoint=None, output_node_names=\"dense_output\/BiasAdd\", restore_op_name=None, filename_tensor_name=None, output_graph=os.path.join(path, \"frozen_graph.pb\"), clear_devices=None, initializer_nodes=None, input_saved_model_dir=path, saved_model_tags=tag_constants.SERVING ) ``` output_node_names = Node name of the final operation, if you end on a dense layer, it will be dense layer_name\/BiasAdd output_graph = output graph name input_saved_model_dir = root folder of the saved model saved_model_tags = saved model tags, in your case this can be None, I did however use a tag. ANOTHER ADDITION: The code to load models is already provided above. To actually predict you need a session; for a saved model this session is already created, for a frozen model, it's not. 
saved model: ``` with tf.Session() as sess: tf.saved_model.loader.load(sess, [], \"foldername to saved_model.pb, only folder\") prediction = sess.run(output_tensor, feed_dict={input_tensor: test_images}) ``` Frozen model: ``` tf.import_graph_def(load_graph(\"frozen_inference_graph.pb\")) with tf.Session() as sess: prediction = sess.run(output_tensor, feed_dict={input_tensor: test_images}) ``` To further understand what your input and output layers are, you need to check them out with tensorboard; simply add the following line of code into your session: ``` tf.summary.FileWriter(\"path\/to\/folder\/to\/save\/logs\", sess.graph) ``` This line will create a log file that you can open with the cli\/powershell. To see how to run tensorboard, check out this previously posted question", "best_answers_score":0.8, "library_name":"tensorflow", "question_url":"https:\/\/stackoverflow.com\/questions\/52934795\/what-is-difference-frozen-inference-graph-pb-and-saved-model-pb", "best_answers_votes":58, "question_length":584, "response_length":3789 }, { "question":"Tensorflow: How to convert .meta, .data and .index model files into one graph.pb file In tensorflow the training from scratch produced the following 6 files: events.out.tfevents.1503494436.06L7-BRM738 model.ckpt-22480.meta checkpoint model.ckpt-22480.data-00000-of-00001 model.ckpt-22480.index graph.pbtxt I would like to convert them (or only the needed ones) into one file graph.pb to be able to transfer it to my Android application. I tried the script freeze_graph.py but it requires as an input already the input.pb file which I do not have. (I have only these 6 files mentioned before). How to proceed to get this one freezed_graph.pb file? I saw several threads but none was working for me.", "response":"You can use this simple script to do that. But you must specify the names of the output nodes. 
``` import tensorflow as tf meta_path = 'model.ckpt-22480.meta' # Your .meta file output_node_names = ['output:0'] # Output nodes with tf.Session() as sess: # Restore the graph saver = tf.train.import_meta_graph(meta_path) # Load weights saver.restore(sess,tf.train.latest_checkpoint('path\/of\/your\/.meta\/file')) # Freeze the graph frozen_graph_def = tf.graph_util.convert_variables_to_constants( sess, sess.graph_def, output_node_names) # Save the frozen graph with open('output_graph.pb', 'wb') as f: f.write(frozen_graph_def.SerializeToString()) ``` If you don't know the name of the output node or nodes, there are two ways. You can explore the graph and find the name with Netron or with the console summarize_graph utility. You can use all the nodes as output ones as shown below. ```py output_node_names = [n.name for n in tf.get_default_graph().as_graph_def().node] ``` (Note that you have to put this line just before the convert_variables_to_constants call.) But I think that's an unusual situation, because if you don't know the output nodes, you cannot actually use the graph.", "best_answers_score":0.8, "library_name":"tensorflow", "question_url":"https:\/\/stackoverflow.com\/questions\/45864363\/tensorflow-how-to-convert-meta-data-and-index-model-files-into-one-graph-pb", "best_answers_votes":46, "question_length":697, "response_length":1167 }, { "question":"Multivariate LSTM with missing values I am working on a Time Series Forecasting problem using LSTM. The input contains several features, so I am using a Multivariate LSTM. The problem is that there are some missing values, for example: ``` Feature 1 Feature 2 ... 
Feature n 1 2 4 nan 2 5 8 10 3 8 8 5 4 nan 7 7 5 6 nan 12 ``` Instead of interpolating the missing values, that can introduce bias in the results, because sometimes there are a lot of consecutive timestamps with missing values on the same feature, I would like to know if there is a way to let the LSTM learn with the missing values, for example, using a masking layer or something like that? Can someone explain to me what will be the best approach to deal with this problem? I am using Tensorflow and Keras.", "response":"As suggested by Fran\u00e7ois Chollet (creator of Keras) in his book, one way to handle missing values is to replace them with zero: In general, with neural networks, it\u2019s safe to input missing values as 0, with the condition that 0 isn\u2019t already a meaningful value. The network will learn from exposure to the data that the value 0 means missing data and will start ignoring the value. Note that if you\u2019re expecting missing values in the test data, but the network was trained on data without any missing values, the network won\u2019t have learned to ignore missing values! In this situation, you should artificially generate training samples with missing entries: copy some training samples several times, and drop some of the features that you expect are likely to be missing in the test data. So you can assign zero to NaN elements, considering that zero is not used in your data (you can normalize the data to a range, say [1,2], and then assign zero to NaN elements; or alternatively, you can normalize all the values to be in range [0,1] and then use -1 instead of zero to replace NaN elements.) Another alternative way is to use a Masking layer in Keras. You give it a mask value, say 0, and it would drop any timestep (i.e. row) where all its features are equal to the mask value. 
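That dropping rule can be sketched in plain Python (a hypothetical helper for illustration, not the actual Keras implementation):

```python
# Sketch of the Masking rule: drop any timestep whose features are ALL
# equal to the mask value. Hypothetical helper, not the Keras code.
MASK_VALUE = 0.0

def masked_timesteps(sequence, mask_value=MASK_VALUE):
    # sequence: list of timesteps, each timestep a list of feature values
    return [step for step in sequence if any(v != mask_value for v in step)]

seq = [
    [1.0, 2.0],   # kept
    [0.0, 0.0],   # all features equal the mask value -> dropped
    [3.0, 0.0],   # only partially masked -> kept
]
assert masked_timesteps(seq) == [[1.0, 2.0], [3.0, 0.0]]
```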
However, all the following layers should support masking, and you also need to pre-process your data and assign the mask value to all the features of a timestep which includes one or more NaN features. Example from the Keras doc: Consider a Numpy data array x of shape (samples, timesteps, features), to be fed to an LSTM layer. You want to mask timesteps #3 and #5 because you lack data for these timesteps. You can: set x[:, 3, :] = 0. and x[:, 5, :] = 0. insert a Masking layer with mask_value=0. before the LSTM layer: ``` model = Sequential() model.add(Masking(mask_value=0., input_shape=(timesteps, features))) model.add(LSTM(32)) ``` Update (May 2021): According to an updated suggestion from François Chollet, it might be better to use a more meaningful or informative value (instead of using zero) for masking missing values. This value could be computed (e.g. mean, median, etc.) or predicted from the data itself.", "best_answers_score":0.8, "library_name":"tensorflow", "question_url":"https:\/\/stackoverflow.com\/questions\/52570199\/multivariate-lstm-with-missing-values", "best_answers_votes":58, "question_length":773, "response_length":2197 }, { "question":"Is it possible to make a trainable variable not trainable? I created a trainable variable in a scope. Later, I entered the same scope, set the scope to reuse_variables, and used get_variable to retrieve the same variable. However, I cannot set the variable's trainable property to False. My get_variable line is like: ``` weight_var = tf.get_variable('weights', trainable = False) ``` But the variable 'weights' is still in the output of tf.trainable_variables. Can I set a shared variable's trainable flag to False by using get_variable? 
The reason I want to do this is that I'm trying to reuse the low-level filters pre-trained from VGG net in my model, and I want to build the graph like before, retrieve the weights variable, and assign VGG filter values to the weight variable, and then keep them fixed during the following training step.", "response":"After looking at the documentation and the code, I was not able to find a way to remove a Variable from the TRAINABLE_VARIABLES. Here is what happens: The first time tf.get_variable('weights', trainable=True) is called, the variable is added to the list of TRAINABLE_VARIABLES. The second time you call tf.get_variable('weights', trainable=False), you get the same variable, but the argument trainable=False has no effect, as the variable is already present in the list of TRAINABLE_VARIABLES (and there is no way to remove it from there). First solution When calling the minimize method of the optimizer (see doc.), you can pass a var_list=[...] as an argument with the variables you want to optimize. For instance, if you want to freeze all the layers of VGG except the last two, you can pass the weights of the last two layers in var_list. Second solution You can use a tf.train.Saver() to save variables and restore them later (see this tutorial). First you train your entire VGG model with all trainable variables. You save them in a checkpoint file by calling saver.save(sess, \"\/path\/to\/dir\/model.ckpt\"). Then (in another file) you train the second version with non-trainable variables. You load the variables previously stored with saver.restore(sess, \"\/path\/to\/dir\/model.ckpt\"). Optionally, you can decide to save only some of the variables in your checkpoint file. 
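As a rough plain-Python sketch of that last idea (hypothetical names, not the tf.train.Saver API): checkpoint a subset of the parameters, then restore only that subset.

```python
# Plain-Python sketch of saving/restoring only SOME variables.
# Hypothetical names; not the tf.train.Saver API.
params = {'conv1': 1.0, 'conv2': 2.0, 'fc1': 3.0, 'fc2': 4.0}

# 'Checkpoint' only the first two layers.
to_save = ['conv1', 'conv2']
checkpoint = {name: params[name] for name in to_save}

# Training elsewhere changes every parameter...
params = {name: value + 10.0 for name, value in params.items()}

# ...but restoring brings back only the checkpointed subset.
params.update(checkpoint)
assert params == {'conv1': 1.0, 'conv2': 2.0, 'fc1': 13.0, 'fc2': 14.0}
```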
See the doc for more info.", "best_answers_score":0.8, "library_name":"tensorflow", "question_url":"https:\/\/stackoverflow.com\/questions\/37326002\/is-it-possible-to-make-a-trainable-variable-not-trainable", "best_answers_votes":31, "question_length":843, "response_length":1395 }, { "question":"Tensorflow: How to replace a node in a calculation graph? If you have two disjoint graphs, and want to link them, turning this: ``` x = tf.placeholder('float') y = f(x) y = tf.placeholder('float') z = f(y) ``` into this: ``` x = tf.placeholder('float') y = f(x) z = g(y) ``` Is there a way to do that? It seems like it could make construction easier in some cases. For example if you have a graph that has the input image as a tf.placeholder, and want to optimize the input image, deep-dream style, is there a way to just replace the placeholder with a tf.variable node? Or do you have to think of that before building the graph?", "response":"TL;DR: If you can define the two computations as Python functions, you should do that. If you can't, there's more advanced functionality in TensorFlow to serialize and import graphs, which allows you to compose graphs from different sources. One way to do this in TensorFlow is to build the disjoint computations as separate tf.Graph objects, then convert them to serialized protocol buffers using Graph.as_graph_def(): ``` with tf.Graph().as_default() as g_1: input = tf.placeholder(tf.float32, name=\"input\") y = f(input) # NOTE: using identity to get a known name for the output tensor. 
output = tf.identity(y, name=\"output\") gdef_1 = g_1.as_graph_def() with tf.Graph().as_default() as g_2: # NOTE: g_2 not g_1 input = tf.placeholder(tf.float32, name=\"input\") z = g(input) output = tf.identity(z, name=\"output\") gdef_2 = g_2.as_graph_def() ``` Then you could compose gdef_1 and gdef_2 into a third graph, using tf.import_graph_def(): ``` with tf.Graph().as_default() as g_combined: x = tf.placeholder(tf.float32, name=\"\") # Import gdef_1, which performs f(x). # \"input:0\" and \"output:0\" are the names of tensors in gdef_1. y, = tf.import_graph_def(gdef_1, input_map={\"input:0\": x}, return_elements=[\"output:0\"]) # Import gdef_2, which performs g(y) z, = tf.import_graph_def(gdef_2, input_map={\"input:0\": y}, return_elements=[\"output:0\"]) ```
Or is there another helper function I could use for including this output in the summary?", "response":"I don't know of a helper function, but if you want to see all the filters you can pack them into one image with some fancy uses of tf.transpose. So if you have a tensor that's images x ix x iy x channels ```py >>> V = tf.Variable() >>> print V.get_shape() TensorShape([Dimension(-1), Dimension(256), Dimension(256), Dimension(32)]) ``` So in this example ix = 256, iy=256, channels=32. First slice off 1 image, and remove the image dimension ```py V = tf.slice(V,(0,0,0,0),(1,-1,-1,-1)) #V[0,...] V = tf.reshape(V,(iy,ix,channels)) ``` Next add a couple of pixels of zero padding around the image ```py ix += 4 iy += 4 V = tf.image.resize_image_with_crop_or_pad(V, iy, ix) ``` Then reshape so that instead of 32 channels you have 4x8 channels, let's call them cy=4 and cx=8. ```py V = tf.reshape(V,(iy,ix,cy,cx)) ``` Now the tricky part. tf seems to return results in C-order, numpy's default. The current order, if flattened, would list all the channels for the first pixel (iterating over cx and cy), before listing the channels of the second pixel (incrementing ix), going across the rows of pixels (ix) before incrementing to the next row (iy). We want the order that would lay out the images in a grid. So you go across a row of an image (ix), before stepping along the row of channels (cx); when you hit the end of the row of channels you step to the next row in the image (iy), and when you run out of rows in the image you increment to the next row of channels (cy). So: ```py V = tf.transpose(V,(2,0,3,1)) #cy,iy,cx,ix ``` Personally I prefer np.einsum for fancy transposes, for readability, but it's not in tf yet. 
```py newtensor = np.einsum('yxYX->YyXx',oldtensor) ``` anyway, now that the pixels are in the right order, we can safely flatten it into a 2d tensor: ```py # image_summary needs 4d input V = tf.reshape(V,(1,cy*iy,cx*ix,1)) ``` try tf.image_summary on that, you should get a grid of little images. Below is an image of what one gets after following all the steps here.", "best_answers_score":0.8, "library_name":"tensorflow", "question_url":"https:\/\/stackoverflow.com\/questions\/33802336\/visualizing-output-of-convolutional-layer-in-tensorflow", "best_answers_votes":37, "question_length":973, "response_length":1993 }, { "question":"Clarification on tf.Tensor.set_shape() I have an image that is 478 x 717 x 3 = 1028178 pixels, with a rank of 1. I verified it by calling tf.shape and tf.rank. When I call image.set_shape([478, 717, 3]), it throws the following error. ``` \"Shapes %s and %s must have the same rank\" % (self, other)) ValueError: Shapes (?,) and (478, 717, 3) must have the same rank ``` I tested again by first casting to 1028178, but the error still exists. ``` ValueError: Shapes (1028178,) and (478, 717, 3) must have the same rank ``` Well, that does make sense because one is of rank 1 and the other is of rank 3. However, why is it necessary to throw an error, as the total number of pixels still match. I could of course use tf.reshape and it works, but I think that's not optimal. As stated on the TensorFlow FAQ What is the difference between x.set_shape() and x = tf.reshape(x)? The tf.Tensor.set_shape() method updates the static shape of a Tensor object, and it is typically used to provide additional shape information when this cannot be inferred directly. It does not change the dynamic shape of the tensor. The tf.reshape() operation creates a new tensor with a different dynamic shape. Creating a new tensor involves memory allocation and that could potentially be more costly when more training examples are involved. 
Is this by design, or am I missing something here?", "response":"As far as I know (and I wrote that code), there isn't a bug in Tensor.set_shape(). I think the misunderstanding stems from the confusing name of that method. To elaborate on the FAQ entry you quoted, Tensor.set_shape() is a pure-Python function that improves the shape information for a given tf.Tensor object. By \"improves\", I mean \"makes more specific\". Therefore, when you have a Tensor object referred to by t with shape (?,), that is a one-dimensional tensor of unknown length. You can call t.set_shape((1028178,)), and then the Tensor referred to by t will have shape (1028178,) when you call t.get_shape(). This doesn't affect the underlying storage, or indeed anything on the backend: it merely means that subsequent shape inference using t can rely on the assertion that it is a vector of length 1028178. If the Tensor object referred to by t has shape (?,), a call to t.set_shape((478, 717, 3)) will fail, because TensorFlow already knows that t refers to a vector, so it cannot have shape (478, 717, 3). If you want to make a new Tensor with that shape from the contents of the Tensor referred to by t, you can use reshaped_t = tf.reshape(t, (478, 717, 3)). This creates a new tf.Tensor object in Python; the actual implementation of tf.reshape() does this using a shallow copy of the tensor buffer, so it is inexpensive in practice. One analogy is that Tensor.set_shape() is like a run-time cast in an object-oriented language like Java. For example, if you have a pointer to an Object but know that, in fact, it is a String, you might do the cast (String) obj in order to pass obj to a method that expects a String argument. 
However, if you have a String s and try to cast it to a java.util.Vector, the compiler will give you an error, because these two types are unrelated.", "best_answers_score":0.8, "library_name":"tensorflow", "question_url":"https:\/\/stackoverflow.com\/questions\/35451948\/clarification-on-tf-tensor-set-shape", "best_answers_votes":79, "question_length":1368, "response_length":1787 }, { "question":"How to convert tf.int64 to tf.float32? I tried: ``` test_image = tf.convert_to_tensor(img, dtype=tf.float32) ``` Then following error appears: ``` ValueError: Tensor conversion requested dtype float32 for Tensor with dtype int64: 'Tensor(\"test\/ArgMax:0\", shape=TensorShape([Dimension(None)]), dtype=int64)' ```", "response":"You can cast generally using: ``` tf.cast(my_tensor, tf.float32) ``` Replace tf.float32 with your desired type. Edit: It seems at the moment at least, that tf.cast won't cast to an unsigned dtype (e.g. tf.uint8). To work around this, you can cast to the signed equivalent and used tf.bitcast to get all the way. e.g. ``` tf.bitcast(tf.cast(my_tensor, tf.int8), tf.uint8) ```", "best_answers_score":0.8, "library_name":"tensorflow", "question_url":"https:\/\/stackoverflow.com\/questions\/35596629\/how-to-convert-tf-int64-to-tf-float32", "best_answers_votes":59, "question_length":310, "response_length":374 }, { "question":"'Tensor' object has no attribute 'lower' I am fine-tuning a MobileNet with 14 new classes. 
When I add new layers by: ``` x=mobile.layers[-6].output x=Flatten(x) predictions = Dense(14, activation='softmax')(x) model = Model(inputs=mobile.input, outputs=predictions) ``` I get the error: ``` 'Tensor' object has no attribute 'lower' ``` Also, using: ``` model.compile(Adam(lr=.0001), loss='categorical_crossentropy', metrics=['accuracy']) model.fit_generator(train_batches, steps_per_epoch=18, validation_data=valid_batches, validation_steps=3, epochs=60, verbose=2) ``` I get the error: ``` Error when checking target: expected dense_1 to have 4 dimensions, but got array with shape (10, 14) ``` What does lower mean? I saw other fine-tuning scripts and there were no arguments other than the name of the model, which is x in this case.", "response":"The tensor must be passed to the layer when you are calling it, not given as a constructor argument. Therefore it must be like this: ``` x = Flatten()(x) # first the layer is constructed and then it is called on x ``` To make it more clear, it is equivalent to this: ``` flatten_layer = Flatten() # instantiate the layer x = flatten_layer(x) # call it on the given tensor ```", "best_answers_score":0.8, "library_name":"tensorflow", "question_url":"https:\/\/stackoverflow.com\/questions\/53153790\/tensor-object-has-no-attribute-lower", "best_answers_votes":75, "question_length":840, "response_length":362 }, { "question":"TensorFlow wasn't compiled to use SSE (etc.) instructions, but these are available I am running TensorFlow for the first time using some example code. I got the following warnings when running my code. Does anybody know why this happened, and how to fix it? ``` 2017-03-31 02:12:59.346109: W c:\\tf_jenkins\\home\\workspace\\release-win\\device\\cpu\\os\\windows\\tensorflow\\core\\platform\\cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use SSE instructions, but these are available on your machine and could speed up CPU computations.
2017-03-31 02:12:59.346968: W c:\\tf_jenkins\\home\\workspace\\release-win\\device\\cpu\\os\\windows\\tensorflow\\core\\platform\\cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use SSE2 instructions, but these are available on your machine and could speed up CPU computations. 2017-03-31 02:12:59.346975: W c:\\tf_jenkins\\home\\workspace\\release-win\\device\\cpu\\os\\windows\\tensorflow\\core\\platform\\cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use SSE3 instructions, but these are available on your machine and could speed up CPU computations. 2017-03-31 02:12:59.346979: W c:\\tf_jenkins\\home\\workspace\\release-win\\device\\cpu\\os\\windows\\tensorflow\\core\\platform\\cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use SSE4.1 instructions, but these are available on your machine and could speed up CPU computations. 2017-03-31 02:12:59.346983: W c:\\tf_jenkins\\home\\workspace\\release-win\\device\\cpu\\os\\windows\\tensorflow\\core\\platform\\cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use SSE4.2 instructions, but these are available on your machine and could speed up CPU computations. 2017-03-31 02:12:59.346987: W c:\\tf_jenkins\\home\\workspace\\release-win\\device\\cpu\\os\\windows\\tensorflow\\core\\platform\\cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use AVX instructions, but these are available on your machine and could speed up CPU computations. 2017-03-31 02:12:59.346991: W c:\\tf_jenkins\\home\\workspace\\release-win\\device\\cpu\\os\\windows\\tensorflow\\core\\platform\\cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use AVX2 instructions, but these are available on your machine and could speed up CPU computations.
2017-03-31 02:12:59.346995: W c:\\tf_jenkins\\home\\workspace\\release-win\\device\\cpu\\os\\windows\\tensorflow\\core\\platform\\cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use FMA instructions, but these are available on your machine and could speed up CPU computations. ```", "response":"Those are warnings (as indicated by the W after the colon. Errors have an E there). The warnings refer to the fact that your CPU supports SSE instructions, which allow some fast in-hardware-parallel operations. Enabling these instructions is a compile-time option (i.e. to use SSE you need to build the library from source, enabling the specific SSE version you're targeting), in which case you might take a look at this question. Note, however, that SSE support influences only the computation speed. Tensorflow will work with or without SSE, but it might take longer for your code to run. Note, also, that this influences only the CPU. If you're using the GPU build of Tensorflow, all the operations run on the GPU will not benefit from SSE instructions.", "best_answers_score":0.8, "library_name":"tensorflow", "question_url":"https:\/\/stackoverflow.com\/questions\/43134753\/tensorflow-wasnt-compiled-to-use-sse-etc-instructions-but-these-are-availab", "best_answers_votes":54, "question_length":2547, "response_length":758 }, { "question":"How can I clear a model created with Keras and Tensorflow(as backend)? I have a problem when training a neural net with Keras in Jupyter Notebook. I created a sequential model with several hidden layers. After training the model and saving the results, I want to delete this model and create a new model in the same session, as I have a for loop that checks the results for different parameters. But judging from the errors I get when changing the parameters, each time I loop I am just adding layers to the model (even though I initialise it again with network = Sequential() inside the loop).
So my question is, how can I completely clear the previous model, or how can I initialise a completely new model in the same session?", "response":"keras.backend.clear_session() should clear the previous model. From https:\/\/keras.io\/backend\/: Destroys the current TF graph and creates a new one. Useful to avoid clutter from old models \/ layers.", "best_answers_score":0.8, "library_name":"tensorflow", "question_url":"https:\/\/stackoverflow.com\/questions\/52133347\/how-can-i-clear-a-model-created-with-keras-and-tensorflowas-backend", "best_answers_votes":66, "question_length":732, "response_length":197 }, { "question":"How to use tf.while_loop() in tensorflow This is a generic question. I found that in TensorFlow, after we build the graph and feed data into it, the output from the graph is a tensor. But in many cases, we need to do some computation based on this output (which is a tensor), which is not allowed in TensorFlow. For example, I'm trying to implement an RNN which loops a number of times based on a property of the data itself. That is, I need to use a tensor to judge whether I should stop (I am not using dynamic_rnn since in my design the RNN is highly customized). I find tf.while_loop(cond, body, ...) might be a candidate for my implementation. But the official tutorial is too simple. I don't know how to add more functionality into the 'body'. Can anyone give me a few more complex examples? Also, what about the common case where future computation is based on the tensor output (e.g. the RNN stops based on an output criterion)? Is there an elegant way, or a better way, short of a dynamic graph?", "response":"What is stopping you from adding more functionality to the body? You can build whatever complex computational graph you like in the body and take whatever inputs you like from the enclosing graph. Also, outside of the loop, you can then do whatever you want with whatever outputs you return.
As you can see from the amount of 'whatevers', TensorFlow's control flow primitives were built with much generality in mind. Below is another 'simple' example, in case it helps. ``` import tensorflow as tf import numpy as np def body(x): a = tf.random_uniform(shape=[2, 2], dtype=tf.int32, maxval=100) b = tf.constant(np.array([[1, 2], [3, 4]]), dtype=tf.int32) c = a + b return tf.nn.relu(x + c) def condition(x): return tf.reduce_sum(x) < 100 x = tf.Variable(tf.constant(0, shape=[2, 2])) with tf.Session(): tf.global_variables_initializer().run() result = tf.while_loop(condition, body, [x]) print(result.eval()) ```", "best_answers_score":0.8, "library_name":"tensorflow", "question_url":"https:\/\/stackoverflow.com\/questions\/37441140\/how-to-use-tf-while-loop-in-tensorflow", "best_answers_votes":58, "question_length":996, "response_length":911 }, { "question":"Tensorflow : What is the relationship between .ckpt file and .ckpt.meta and .ckpt.index , and .pb file I used saver=tf.train.Saver() to save the model that I trained, and I get three kinds of files named: .ckpt.meta .ckpt.index .ckpt.data And a file called: checkpoint What is the connection with the .ckpt file? I saw someone save a model with only a .ckpt file, and I don't know how to make one. How can I save a model as a .pb file?", "response":"The .ckpt file is the old-version output of saver.save(sess), which is the equivalent of your .ckpt-data (see below). The \"checkpoint\" file is only there to tell some TF functions which is the latest checkpoint file.
To restore a model in python, you'll usually use the meta and data files (but you can also use the .pb file): ``` saver = tf.train.import_meta_graph(path_to_ckpt_meta) saver.restore(sess, path_to_ckpt_data) ``` I don't know exactly what .ckpt-index is for; I guess it's some kind of index needed internally to map the two previous files correctly. Anyway, it's usually not really necessary; you can restore a model with only .ckpt-meta and .ckpt-data. The .pb file can save your whole graph (meta + data). To load and use (but not train) a graph in c++ you'll usually use it, created with freeze_graph, which creates the .pb file from the meta and data. Be careful: (at least in previous TF versions and for some people) the py function provided by freeze_graph did not work properly, so you'd have to use the script version. Tensorflow also provides a tf.train.Saver.to_proto() method, but I don't know what it does exactly. There are a lot of questions here about how to save and restore a graph. See the answer here for instance, but be careful that the two cited tutorials, though really helpful, are far from perfect, and a lot of people still seem to struggle to import a model in c++. EDIT: it looks like you can also use the .ckpt files in c++ now, so I guess you don't necessarily need the .pb file any more.", "best_answers_score":0.8, "library_name":"tensorflow", "question_url":"https:\/\/stackoverflow.com\/questions\/44516609\/tensorflow-what-is-the-relationship-between-ckpt-file-and-ckpt-meta-and-ckp", "best_answers_votes":46, "question_length":425, "response_length":1773 }, { "question":"What is the difference between Keras and tf.keras in TensorFlow 1.1+? Now that TensorFlow 1.1 supports the Keras API under tf.contrib.keras, which one should I use if I intend to use Keras with a TF backend? Is the tf.contrib.keras version different in any way from a regular Keras distribution? (TF specific optimizations of internal data structures come to mind).
Is there any benefit in terms of using Keras and TensorFlow Core together if I use one or the other? Or is tf.contrib.keras simply a copy of the same codebase as Keras but under a different namespace?", "response":"tf.keras (formerly tf.contrib.keras) is an implementation of Keras 2 built exclusively with\/for tensorflow. It is hosted on the tensorflow repo and has a distinct code base from the official repo (the last commit there in the tf-keras branch dates back to May 2017). As a rule of thumb, if your code uses any tensorflow-specific code, say anything in tf.data.* for providing inputs or tf.summary.* for visualization in tensorboard, it is simpler to just use tf.keras. (Some may even recommend not using the reference Keras implementation with TF because of occasional problems it has with this toolkit). On the other hand, if you plan to actively maintain framework-agnostic code, using keras' own package is your only choice. If you don't care much about being framework-agnostic but don't use tensorflow-specific code, I would probably advise going with tf.keras and starting to use tensorflow-specific code, esp. tf.data, which is a game-changer in my opinion. EDIT I attended a talk by Chollet on TF2 (couldn't find a recording online) in which he basically said that support for frameworks other than TF would eventually drop and future developments of Keras would happen exclusively in tf.keras. From what I can see, this is already happening, as Keras' commit stream is getting thin these days. It makes a lot of sense since, as of now, the only other popular DL framework is pytorch, which is not supported by Keras. Keeping Keras code \"agnostic\" to tensorflow -- the only major framework it is supporting -- makes less and less sense.
So today, my answer would be to use tf.keras by default, and keep Keras for legacy projects that would be hard to migrate -- that is the future-proof choice for Keras.", "best_answers_score":0.8, "library_name":"tensorflow", "question_url":"https:\/\/stackoverflow.com\/questions\/44068899\/what-is-the-difference-between-keras-and-tf-keras-in-tensorflow-1-1", "best_answers_votes":29, "question_length":566, "response_length":1715 }, { "question":"What's the difference between Tensor and Variable in Tensorflow What's the difference between Tensor and Variable in Tensorflow? I noticed in this stackoverflow answer, we can use Variable wherever Tensor can be used. However, I failed to do session.run() on a Variable: ```py A = tf.zeros([10]) # A is a Tensor B = tf.Variable([111, 11, 11]) # B is a Variable sess.run(A) # OK. Will return the values in A sess.run(B) # Error. ```", "response":"Variable is basically a wrapper on Tensor that maintains state across multiple calls to run, and I think makes some things easier with saving and restoring graphs. A Variable needs to be initialized before you can run it. You provide an initial value when you define the Variable, but you have to call its initializer function in order to actually assign this value in your session and then use the Variable. A common way to do this is with tf.global_variables_initializer(). For example: ```py import tensorflow as tf test_var = tf.Variable([111, 11, 1]) sess = tf.Session() sess.run(test_var) # Error! sess.run(tf.global_variables_initializer()) # initialize variables sess.run(test_var) # array([111, 11, 1], dtype=int32) ``` As for why you use Variables instead of Tensors, basically a Variable is a Tensor with additional capability and utility.
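As a small illustration of the statefulness part, here is a sketch in the same TF 1.x session style (written via tf.compat.v1 so it also runs under TF 2.x; the counter graph is made up for illustration):

```python
import tensorflow.compat.v1 as tf  # TF 1.x-style API, available in TF 2.x as well

tf.disable_eager_execution()

counter = tf.Variable(0, name="counter")  # stateful: keeps its value between run() calls
increment = tf.assign_add(counter, 1)     # an op that mutates the Variable in place
doubled = counter * 2                     # a plain (stateless) Tensor derived from it

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())  # required before using the Variable
    for _ in range(3):
        sess.run(increment)
    final_value = sess.run(counter)
    doubled_value = sess.run(doubled)

print(final_value)    # 3 -- the state survived across run() calls
print(doubled_value)  # 6
```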
You can specify a Variable as trainable (the default, actually), meaning that your optimizer will adjust it in an effort to minimize your cost function; you can specify where the Variable resides on a distributed system; and you can easily save and restore Variables and graphs. Some more information on how to use Variables can be found here.", "best_answers_score":0.8, "library_name":"tensorflow", "question_url":"https:\/\/stackoverflow.com\/questions\/44167134\/whats-the-difference-between-tensor-and-variable-in-tensorflow", "best_answers_votes":37, "question_length":431, "response_length":1189 }, { "question":"Trouble with TensorFlow in Jupyter Notebook I installed Jupyter notebooks in Ubuntu 14.04 via Anaconda earlier, and just now I installed TensorFlow. I would like TensorFlow to work regardless of whether I am working in a notebook or simply scripting. In my attempt to achieve this, I ended up installing TensorFlow twice, once using Anaconda, and once using pip. The Anaconda install works, but I need to preface any call to python with \"source activate tensorflow\". And the pip install works nicely; if I start python the standard way (in the terminal) then tensorflow loads just fine. My question is: how can I also have it work in the Jupyter notebooks? This leads me to a more general question: it seems that my python kernel in Jupyter\/Anaconda is separate from the python kernel (or environment? not sure about the terminology here) used system wide. It would be nice if these coincided, so that if I install a new python library, it becomes accessible to all the varied ways I have of running python.", "response":"Update: The TensorFlow website supports five installation methods. To my understanding, using the pip installation directly would be fine for importing TensorFlow in Jupyter Notebook (as long as Jupyter Notebook was installed and there were no other issues) because it didn't create any virtual environments.
Using the virtualenv install and conda install methods, you would need to install jupyter into the newly created TensorFlow environment to allow TensorFlow to work in Jupyter Notebook (see the following original post section for more details). I believe the docker install may require some port setup in VirtualBox to make TensorFlow work in Jupyter Notebook (see this post). For installing from sources, it also depends on which environment the source code is built and installed into. If it's installed into a freshly created virtual environment or a virtual environment which didn't have Jupyter Notebook installed, you would also need to install Jupyter Notebook into the virtual environment to use Tensorflow in Jupyter Notebook. Original Post To use tensorflow in IPython and\/or Jupyter (IPython) Notebook, you'll need to install IPython and Jupyter (after installing tensorflow) under the tensorflow-activated environment. Before installing IPython and Jupyter under the tensorflow environment, if you run the following commands in a terminal: ``` username$ source activate tensorflow (tensorflow)username$ which ipython (tensorflow)username$ \/Users\/username\/anaconda\/bin\/ipython (tensorflow)username$ which jupyter (tensorflow)username$ \/Users\/username\/anaconda\/bin\/jupyter (tensorflow)username$ which python (tensorflow)username$ \/User\/username\/\/anaconda\/envs\/tensorflow\/bin\/python ``` This is telling you that when you open python from the terminal, it is using the one installed in the \"environments\" where tensorflow is installed. Therefore you can actually import tensorflow successfully. However, if you are trying to run ipython and\/or jupyter notebook, these are not installed under the \"environments\" equipped with tensorflow; hence it has to fall back to the regular environment, which has no tensorflow module, and you get an import error.
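A quick, environment-agnostic way to confirm which interpreter is actually being used is to print sys.executable from both places (just a generic diagnostic sketch; the example path in the comment is hypothetical):

```python
import sys

# Run this once in a terminal python and once in a notebook cell.
# If the two printed paths differ, the notebook kernel is not running
# inside the tensorflow environment.
print(sys.executable)  # e.g. /Users/username/anaconda/envs/tensorflow/bin/python
print(sys.prefix)      # the root of the currently active environment
```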
You can verify this by listing out the items under the envs\/tensorflow\/bin directory: ``` (tensorflow) username$ ls \/User\/username\/anaconda\/envs\/tensorflow\/bin\/ ``` You will see that there is no \"ipython\" and\/or \"jupyter\" listed. To use tensorflow with IPython and\/or Jupyter notebook, simply install them into the tensorflow environment: ``` (tensorflow) username$ conda install ipython (tensorflow) username$ pip install jupyter #(use pip3 for python3) ``` After installing them, there should be a \"jupyter\" and an \"ipython\" showing up in the envs\/tensorflow\/bin\/ directory. Notes: Before trying to import the tensorflow module in a jupyter notebook, try closing the notebook. Run \"source deactivate tensorflow\" first, and then reactivate it (\"source activate tensorflow\") to make sure things are \"on the same page\". Then reopen the notebook and try importing tensorflow. It should import successfully (worked on mine at least).", "best_answers_score":0.8, "library_name":"tensorflow", "question_url":"https:\/\/stackoverflow.com\/questions\/37061089\/trouble-with-tensorflow-in-jupyter-notebook", "best_answers_votes":65, "question_length":1005, "response_length":3033 }, { "question":"Tensorflow image reading & display I've got a bunch of images in a format similar to Cifar10 (binary file, size = 96*96*3 bytes per image), one image after another (STL-10 dataset). The file I'm opening has 138MB. I tried to read & check the contents of the Tensors containing the images to be sure that the reading is done right, however I have two questions - Does the FixedLengthRecordReader load the whole file, however just provide inputs one at a time? Since reading the first size bytes should be relatively fast. However, the code takes about two minutes to run. How to get the actual image contents in a displayable format, or display them internally to validate that the images are read well? I did sess.run(uint8image), however the result is empty.
The code is below: ``` import tensorflow as tf def read_stl10(filename_queue): class STL10Record(object): pass result = STL10Record() result.height = 96 result.width = 96 result.depth = 3 image_bytes = result.height * result.width * result.depth record_bytes = image_bytes reader = tf.FixedLengthRecordReader(record_bytes=record_bytes) result.key, value = reader.read(filename_queue) print value record_bytes = tf.decode_raw(value, tf.uint8) depth_major = tf.reshape(tf.slice(record_bytes, [0], [image_bytes]), [result.depth, result.height, result.width]) result.uint8image = tf.transpose(depth_major, [1, 2, 0]) return result # probably a hack since I should've provided a string tensor filename_queue = tf.train.string_input_producer(['.\/data\/train_X']) image = read_stl10(filename_queue) print image.uint8image with tf.Session() as sess: result = sess.run(image.uint8image) print result, type(result) ``` Output: ``` Tensor(\"ReaderRead:1\", shape=TensorShape([]), dtype=string) Tensor(\"transpose:0\", shape=TensorShape([Dimension(96), Dimension(96), Dimension(3)]), dtype=uint8) I tensorflow\/core\/common_runtime\/local_device.cc:25] Local device intra op parallelism threads: 4 I tensorflow\/core\/common_runtime\/local_session.cc:45] Local session inter op parallelism threads: 4 [empty line for last print] Process finished with exit code 137 ``` I'm running this on my CPU, if that adds anything. EDIT: I found the pure TensorFlow solution thanks to Rosa. Apparently, when using the string_input_producer, in order to see the results, you need to initialize the queue runners. The only required thing to add to the code above is the second line from below: ``` ... with tf.Session() as sess: tf.train.start_queue_runners(sess=sess) ... ``` Afterwards, the image in the result can be displayed with matplotlib.pyplot.imshow(result). I hope this helps someone. 
If you have any further questions, feel free to ask me or check the link in Rosa's answer.", "response":"Just to give a complete answer: ``` import numpy as np import tensorflow as tf from PIL import Image filename_queue = tf.train.string_input_producer(['\/Users\/HANEL\/Desktop\/tf.png']) # list of files to read reader = tf.WholeFileReader() key, value = reader.read(filename_queue) my_img = tf.image.decode_png(value) # use png or jpg decoder based on your files. init_op = tf.global_variables_initializer() with tf.Session() as sess: sess.run(init_op) # Start populating the filename queue. coord = tf.train.Coordinator() threads = tf.train.start_queue_runners(coord=coord) for i in range(1): #length of your filename list image = my_img.eval() #here is your image Tensor :) print(image.shape) Image.fromarray(np.asarray(image)).show() coord.request_stop() coord.join(threads) ``` Or, if you have a directory of images, you can add them all via this Github source file. @mttk and @salvador-dali: I hope it is what you need", "best_answers_score":0.8, "library_name":"tensorflow", "question_url":"https:\/\/stackoverflow.com\/questions\/33648322\/tensorflow-image-reading-display", "best_answers_votes":46, "question_length":2709, "response_length":850 }, { "question":"How does the Flatten layer work in Keras? I am using the TensorFlow backend. I am applying a convolution, max-pooling, flatten and a dense layer sequentially. The convolution requires a 3D input (height, width, color_channels_depth). After the convolution, this becomes (height, width, Number_of_filters). After applying max-pooling, the height and width change. But, after applying the flatten layer, what happens exactly? For example, if the input before flatten is (24, 24, 32), then how does it flatten it out? Is it sequential like (24 * 24) for height, width for each filter number sequentially, or in some other way?
An example would be appreciated with actual values.", "response":"The Flatten() operator unrolls the values beginning at the last dimension (at least for Theano, which is \"channels first\", not \"channels last\" like TF. I can't run TensorFlow in my environment). This is equivalent to numpy.reshape with 'C' ordering: \u2018C\u2019 means to read \/ write the elements using C-like index order, with the last axis index changing fastest, back to the first axis index changing slowest. Here is a standalone example illustrating the Flatten operator with the Keras Functional API. You should be able to easily adapt it for your environment. ``` import numpy as np from keras.layers import Input, Flatten from keras.models import Model inputs = Input(shape=(3,2,4)) # Define a model consisting only of the Flatten operation prediction = Flatten()(inputs) model = Model(inputs=inputs, outputs=prediction) X = np.arange(0,24).reshape(1,3,2,4) print(X) #[[[[ 0 1 2 3] # [ 4 5 6 7]] # # [[ 8 9 10 11] # [12 13 14 15]] # # [[16 17 18 19] # [20 21 22 23]]]] model.predict(X) #array([[ 0., 1., 2., 3., 4., 5., 6., 7., 8., 9., 10., # 11., 12., 13., 14., 15., 16., 17., 18., 19., 20., 21., # 22., 23.]], dtype=float32) ```", "best_answers_score":0.8, "library_name":"tensorflow", "question_url":"https:\/\/stackoverflow.com\/questions\/44176982\/how-does-the-flatten-layer-work-in-keras", "best_answers_votes":53, "question_length":668, "response_length":1123 }, { "question":"Should TensorFlow users prefer SavedModel over Checkpoint or GraphDef? From SavedModel Docs, SavedModel, the universal serialization format for TensorFlow models. and SavedModel wraps a TensorFlow Saver. The Saver is primarily used to generate the variable checkpoints. From my understanding, SavedModel is a must if someone wants to use TensorFlow Serving.
However, I can deploy a TensorFlow model to a serving server without SavedModel: freeze the graph and export it as a GraphDef, then load the graph into a Session using ReadBinaryProto and Create in C++ or Import in Go. What is the purpose of SavedModel? Should users prefer SavedModel over Checkpoint or GraphDef to aggregate more data related to the model?", "response":"A checkpoint contains the value of (some of the) variables in a TensorFlow model. It is created by a Saver, which is either given specific Variables to save, or by default saves all (non-local) Variables. To use a checkpoint, you need to have a compatible TensorFlow Graph, whose Variables have the same names as the Variables in the checkpoint. (If you don't have a compatible Graph, you can still load the values stored in a checkpoint into selected Variables using the init_from_checkpoint utilities in contrib.) SavedModel is much more comprehensive: It contains a set of Graphs (MetaGraphs, in fact, saving collections and such), as well as a checkpoint which is supposed to be compatible with these Graphs, and any asset files that are needed to run the model (e.g. Vocabulary files). For each MetaGraph it contains, it also stores a set of signatures. Signatures define (named) input and output tensors. This means that given only a SavedModel, you can write tools (such as tensorflow\/serving, or the new saved_model command line utility that will appear in tools\/ shortly) that interpret or execute the graphs inside. All you have to provide is the data. If in doubt, I would always err on the side of writing a SavedModel, not just a checkpoint. Not only does this allow you to use tensorflow\/serving (and other neat utilities that will grow in number), it makes sure that you have all the information necessary to run the model.
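For reference, a minimal TF 1.x-style export sketch (using tf.compat.v1.saved_model.simple_save so it also runs under TF 2.x; the tiny two-weight graph and the temporary directory are made up for illustration):

```python
import os
import tempfile

import tensorflow.compat.v1 as tf  # TF 1.x-style API, available in TF 2.x as well

tf.disable_eager_execution()

# A toy graph: y = x @ w
x = tf.placeholder(tf.float32, shape=(None, 2), name="x")
w = tf.Variable([[1.0], [2.0]], name="w")
y = tf.matmul(x, w, name="y")

export_dir = os.path.join(tempfile.mkdtemp(), "saved_model")
with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    # Writes the MetaGraph, a serving signature (named inputs/outputs) and the
    # variable checkpoint together, so the directory is self-contained.
    tf.saved_model.simple_save(sess, export_dir, inputs={"x": x}, outputs={"y": y})

print(sorted(os.listdir(export_dir)))  # includes 'saved_model.pb' and 'variables'
```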
Nothing is more frustrating than a checkpoint you cannot use any more because you modified your model and now it is incompatible with the checkpoint files, when all you want to do is run some predictions through it for comparison.", "best_answers_score":0.8, "library_name":"tensorflow", "question_url":"https:\/\/stackoverflow.com\/questions\/42216208\/should-tensorflow-users-prefer-savedmodel-over-checkpoint-or-graphdef", "best_answers_votes":56, "question_length":693, "response_length":1663 }, { "question":"keras vs. tensorflow.python.keras - which one to use? Which one is the recommended (or more future-proof) way to use Keras? What are the advantages\/disadvantages of each? I guess there are more differences than simply saving one pip install step and writing tensorflow.python.keras instead of keras.", "response":"tensorflow.python.keras is just a bundle of keras with a single backend inside the tensorflow package. This allows you to start using keras by installing just pip install tensorflow. The keras package contains the full keras library with three supported backends: tensorflow, theano and CNTK. If you ever wish to switch between backends, you should choose the keras package. This approach is also more flexible because it allows you to install keras updates independently from tensorflow (which may not be easy to update, for example, because the next version may require a different version of the CUDA driver) or vice versa. For this reason, I prefer to install keras as a separate package. In terms of API, there is no difference right now, but keras will probably be integrated more tightly into tensorflow in the future. So there is a chance there will be tensorflow-only features in keras, but even in this case it's not a blocker to using the keras package. UPDATE As of the Keras 2.3.0 release, Francois Chollet announced that users should switch towards tf.keras instead of plain Keras.
Therefore, the change to tf.keras instead of keras should be made by all users.", "best_answers_score":0.8, "library_name":"tensorflow", "question_url":"https:\/\/stackoverflow.com\/questions\/48893528\/keras-vs-tensorflow-python-keras-which-one-to-use", "best_answers_votes":45, "question_length":299, "response_length":1138 }, { "question":"How do you get the name of the tensorflow output nodes in a Keras Model? I'm trying to create a pb file from my Keras (tensorflow backend) model so I can build it on iOS. I'm using freeze.py and I need to pass the output nodes. How do I get the names of the output nodes of my Keras model? https:\/\/github.com\/tensorflow\/tensorflow\/blob\/master\/tensorflow\/python\/tools\/freeze_graph.py", "response":"You can use Keras model.summary() to get the name of the last layer. If model.outputs is not empty you can get the node names via: ``` [node.op.name for node in model.outputs] ``` You get the session via ``` session = keras.backend.get_session() ``` and you convert all training variables to consts via ``` from tensorflow.python.framework.graph_util import convert_variables_to_constants min_graph = convert_variables_to_constants(session, session.graph_def, [node.op.name for node in model.outputs]) ``` After that you can write a protobuf file via ``` tensorflow.train.write_graph(min_graph, \"\/logdir\/\", \"file.pb\", as_text=True) ```", "best_answers_score":0.8, "library_name":"tensorflow", "question_url":"https:\/\/stackoverflow.com\/questions\/40028175\/how-do-you-get-the-name-of-the-tensorflow-output-nodes-in-a-keras-model", "best_answers_votes":30, "question_length":382, "response_length":553 }, { "question":"ImportError: cannot import name 'set_random_seed' from 'tensorflow' (C:\\Users\\polon\\Anaconda3\\lib\\site-packages\\tensorflow\\__init__.py) Good day, Here is the error. Can somebody help me solve it?
``` ImportError Traceback (most recent call last) in 7 import numpy as np 8 import numpy.random as nr ----> 9 from tensorflow import set_random_seed 10 import matplotlib.pyplot as plt 11 get_ipython().run_line_magic('matplotlib', 'inline') ImportError: cannot import name 'set_random_seed' from 'tensorflow' (C:\\Users\\polon\\Anaconda3\\lib\\site-packages\\tensorflow\\__init__.py) ``` I looked for similar problems on Stack, but nothing worked for me.", "response":"In TensorFlow 2 there is no need to perform ``` from tensorflow import set_random_seed ``` in order to run ``` set_random_seed(x) ``` (as it was in older versions). You only have to run ``` import tensorflow tensorflow.random.set_seed(x) ``` Thanks to @David Buck", "best_answers_score":0.8, "library_name":"tensorflow", "question_url":"https:\/\/stackoverflow.com\/questions\/58638701\/importerror-cannot-import-name-set-random-seed-from-tensorflow-c-users-po", "best_answers_votes":76, "question_length":649, "response_length":255 }, { "question":"Understanding tf.extract_image_patches for extracting patches from an image I found the method tf.extract_image_patches in the tensorflow API, but I am not clear about its functionality. Say batch_size = 1, and an image is of size 225x225x3, and we want to extract patches of size 32x32. How exactly does this function behave? Specifically, the documentation mentions the dimension of the output tensor to be [batch, out_rows, out_cols, ksize_rows * ksize_cols * depth], but what out_rows and out_cols are is not mentioned. Ideally, given an input image tensor of size 1x225x225x3 (where 1 is the batch size), I want to be able to get Kx32x32x3 as output, where K is the total number of patches and 32x32x3 is the dimension of each patch. Is there something in tensorflow that already achieves this?", "response":"Here is how the method works: ksizes is used to decide the dimensions of each patch, or in other words, how many pixels each patch should contain.
strides denotes the length of the gap between the start of one patch and the start of the next consecutive patch within the original image. rates is a number that essentially means our patch should jump by rates pixels in the original image for each consecutive pixel that ends up in our patch. (The example below helps illustrate this.) padding is either \"VALID\", which means every patch must be fully contained in the image, or \"SAME\", which means patches are allowed to be incomplete (the remaining pixels will be filled in with zeroes). Here is some sample code with output to help demonstrate how it works: ``` import tensorflow as tf n = 10 # images is a 1 x 10 x 10 x 1 array that contains the numbers 1 through 100 in order images = [[[[x * n + y + 1] for y in range(n)] for x in range(n)]] # We generate four outputs as follows: # 1. 3x3 patches with stride length 5 # 2. Same as above, but the rate is increased to 2 # 3. 4x4 patches with stride length 7; only one patch should be generated # 4. 
Same as above, but with padding set to 'SAME' with tf.Session() as sess: print tf.extract_image_patches(images=images, ksizes=[1, 3, 3, 1], strides=[1, 5, 5, 1], rates=[1, 1, 1, 1], padding='VALID').eval(), '\\n\\n' print tf.extract_image_patches(images=images, ksizes=[1, 3, 3, 1], strides=[1, 5, 5, 1], rates=[1, 2, 2, 1], padding='VALID').eval(), '\\n\\n' print tf.extract_image_patches(images=images, ksizes=[1, 4, 4, 1], strides=[1, 7, 7, 1], rates=[1, 1, 1, 1], padding='VALID').eval(), '\\n\\n' print tf.extract_image_patches(images=images, ksizes=[1, 4, 4, 1], strides=[1, 7, 7, 1], rates=[1, 1, 1, 1], padding='SAME').eval() ``` Output: ``` [[[[ 1 2 3 11 12 13 21 22 23] [ 6 7 8 16 17 18 26 27 28]] [[51 52 53 61 62 63 71 72 73] [56 57 58 66 67 68 76 77 78]]]] [[[[ 1 3 5 21 23 25 41 43 45] [ 6 8 10 26 28 30 46 48 50]] [[ 51 53 55 71 73 75 91 93 95] [ 56 58 60 76 78 80 96 98 100]]]] [[[[ 1 2 3 4 11 12 13 14 21 22 23 24 31 32 33 34]]]] [[[[ 1 2 3 4 11 12 13 14 21 22 23 24 31 32 33 34] [ 8 9 10 0 18 19 20 0 28 29 30 0 38 39 40 0]] [[ 71 72 73 74 81 82 83 84 91 92 93 94 0 0 0 0] [ 78 79 80 0 88 89 90 0 98 99 100 0 0 0 0 0]]]] ``` So, for example, our first result looks like the following: ``` * * * 4 5 * * * 9 10 * * * 14 15 * * * 19 20 * * * 24 25 * * * 29 30 31 32 33 34 35 36 37 38 39 40 41 42 43 44 45 46 47 48 49 50 * * * 54 55 * * * 59 60 * * * 64 65 * * * 69 70 * * * 74 75 * * * 79 80 81 82 83 84 85 86 87 88 89 90 91 92 93 94 95 96 97 98 99 100 ``` As you can see, we have 2 rows and 2 columns worth of patches, which are what out_rows and out_cols are.", "best_answers_score":0.8, "library_name":"tensorflow", "question_url":"https:\/\/stackoverflow.com\/questions\/40731433\/understanding-tf-extract-image-patches-for-extracting-patches-from-an-image", "best_answers_votes":58, "question_length":810, "response_length":2724 }, { "question":"In TensorFlow, how can I get nonzero values and their indices from a tensor with python? I want to do something like this. 
Let's say we have a tensor A. ``` A = [[1,0],[0,4]] ``` And I want to get nonzero values and their indices from it. ``` Nonzero values: [1,4] Nonzero indices: [[0,0],[1,1]] ``` There are similar operations in Numpy. np.flatnonzero(A) return indices that are non-zero in the flattened A. x.ravel()[np.flatnonzero(x)] extract elements according to non-zero indices. Here's a link for these operations. How can I do somthing like above Numpy operations in Tensorflow with python? (Whether a matrix is flattened or not doesn't really matter.)", "response":"You can achieve same result in Tensorflow using not_equal and where methods. ``` zero = tf.constant(0, dtype=tf.float32) where = tf.not_equal(A, zero) ``` where is a tensor of the same shape as A holding True or False, in the following case ``` [[True, False], [False, True]] ``` This would be sufficient to select zero or non-zero elements from A. If you want to obtain indices you can use wheremethod as follows: ``` indices = tf.where(where) ``` where tensor has two True values so indices tensor will have two entries. where tensor has rank of two, so entries will have two indices: ``` [[0, 0], [1, 1]] ```", "best_answers_score":0.8, "library_name":"tensorflow", "question_url":"https:\/\/stackoverflow.com\/questions\/39219414\/in-tensorflow-how-can-i-get-nonzero-values-and-their-indices-from-a-tensor-with", "best_answers_votes":50, "question_length":661, "response_length":611 }, { "question":"how to get string value out of tf.tensor which dtype is string I want to use tf.data.Dataset.list_files function to feed my datasets. But because the file is not image, I need to load it manually. The problem is tf.data.Dataset.list_files pass variable as tf.tensor and my python code can not handle tensor. How can I get string value from tf.tensor. The dtype is string. 
``` train_dataset = tf.data.Dataset.list_files(PATH+'clean_4s_val\/*.wav') train_dataset = train_dataset.map(lambda x: load_audio_file(x)) def load_audio_file(file_path): print(\"file_path: \", file_path) # i want do something like string_path = convert_tensor_to_string(file_path) ``` file_path is Tensor(\"arg0:0\", shape=(), dtype=string) I use tensorflow 1.13.1 and eager mode. thanks in advance", "response":"You can use tf.py_func to wrap load_audio_file(). ``` import tensorflow as tf tf.enable_eager_execution() def load_audio_file(file_path): # you should decode bytes type to string type print(\"file_path: \",bytes.decode(file_path),type(bytes.decode(file_path))) return file_path train_dataset = tf.data.Dataset.list_files('clean_4s_val\/*.wav') train_dataset = train_dataset.map(lambda x: tf.py_func(load_audio_file, [x], [tf.string])) for one_element in train_dataset: print(one_element) file_path: clean_4s_val\/1.wav (,) file_path: clean_4s_val\/3.wav (,) file_path: clean_4s_val\/2.wav (,) ``` UPDATE for TF 2 The above solution will not work with TF 2 (tested with 2.2.0), even when replacing tf.py_func with tf.py_function, giving ``` InvalidArgumentError: TypeError: descriptor 'decode' requires a 'bytes' object but received a 'tensorflow.python.framework.ops.EagerTensor' ``` To make it work in TF 2, make the following changes: Remove tf.enable_eager_execution() (eager is enabled by default in TF 2, which you can verify with tf.executing_eagerly() returning True) Replace tf.py_func with tf.py_function Replace all in-function references of file_path with file_path.numpy()", "best_answers_score":0.8, "library_name":"tensorflow", "question_url":"https:\/\/stackoverflow.com\/questions\/56122670\/how-to-get-string-value-out-of-tf-tensor-which-dtype-is-string", "best_answers_votes":39, "question_length":766, "response_length":1181 }, { "question":"Extract target from Tensorflow PrefetchDataset I am still learning tensorflow and keras, and I suspect this question has a 
very easy answer I'm just missing due to lack of familiarity. I have a PrefetchDataset object: ``` > print(tf_test) $ ``` ...made up of features and a target. I can iterate over it using a for loop: ``` > for example in tf_test: > print(example[0].numpy()) > print(example[1].numpy()) > exit() $ [[-0.31 -0.94 -1.12 ... 0.18 -0.27] [-0.22 -0.54 -0.14 ... 0.33 -0.55] [-0.60 -0.02 -1.41 ... 0.21 -0.63] ... [-0.03 -0.91 -0.12 ... 0.77 -0.23] [-0.76 -1.48 -0.15 ... 0.38 -0.35] [-0.55 -0.08 -0.69 ... 0.44 -0.36]] [0 0 1 0 1 0 0 0 1 0 1 1 0 1 0 0 0 ... 0 1 1 0] ``` However, this is very slow. What I'd like to do is access the tensor corresponding to the class labels and turn that into a numpy array, or a list, or any sort of iterable that can be fed into scikit-learn's classification report and\/or confusion matrix: ``` > y_pred = model.predict(tf_test) > print(y_pred) $ [[0.01] [0.14] [0.00] ... [0.32] [0.03] [0.00]] > y_pred_list = [int(x[0]) for x in y_pred] # assumes value >= 0.5 is positive prediction > y_true = [] # what I need help with > print(sklearn.metrics.confusion_matrix(y_true, y_pred_list) ``` ...OR access the data such that it could be used in tensorflow's confusion matrix: ``` > labels = [] # what I need help with > predictions = y_pred_list # could we just use a tensor? > print(tf.math.confusion_matrix(labels, predictions) ``` In both cases, the general ability to grab the target data from the original object in a manner that isn't computationally expensive would be very helpful (and might help with my underlying intuitions re: tensorflow and keras). Any advice would be greatly appreciated.", "response":"You can convert it to a list with list(ds) and then recompile it as a normal Dataset with tf.data.Dataset.from_tensor_slices(list(ds)). From there your nightmare begins again but at least it's a nightmare that other people have had before. Note that for more complex datasets (e.g. 
nested dictionaries) you will need more preprocessing after calling list(ds), but this should work for the example you asked about. This is far from a satisfying answer but unfortunately the class is entirely undocumented and none of the standard Dataset tricks work.", "best_answers_score":0.8, "library_name":"tensorflow", "question_url":"https:\/\/stackoverflow.com\/questions\/62436302\/extract-target-from-tensorflow-prefetchdataset", "best_answers_votes":15, "question_length":1750, "response_length":549 }, { "question":"How do Monitored Training Sessions work? I'm trying to understand the difference between using tf.Session and tf.train.MonitoredTrainingSession, and where I might prefer one over the other. It seems that when I use the latter, I can avoid many \"chores\" such as initializing variables, starting queue runners, or setting up file writers for summary operations. On the other hand, with a monitored training session, I cannot specify the computation graph I want to use explicitly. All of this seems rather mysterious to me. Is there some underlying philosophy behind how these classes were created that I'm not understanding?", "response":"I can't give some insights on how these classes were created, but here are a few things which I think are relevants on how you could use them. The tf.Session is a low level object in the python TensorFlow API while, as you said, the tf.train.MonitoredTrainingSession comes with a lot of handy features, especially useful in most of the common cases. Before describing some of the benefits of tf.train.MonitoredTrainingSession, let me answer the question about the graph used by the session. You can specify the tf.Graph used by the MonitoredTrainingSession by using a context manager with your_graph.as_default(): ``` from __future__ import print_function import tensorflow as tf def example(): g1 = tf.Graph() with g1.as_default(): # Define operations and tensors in `g`. 
c1 = tf.constant(42) assert c1.graph is g1 g2 = tf.Graph() with g2.as_default(): # Define operations and tensors in `g`. c2 = tf.constant(3.14) assert c2.graph is g2 # MonitoredTrainingSession example with g1.as_default(): with tf.train.MonitoredTrainingSession() as sess: print(c1.eval(session=sess)) # Next line raises # ValueError: Cannot use the given session to evaluate tensor: # the tensor's graph is different from the session's graph. try: print(c2.eval(session=sess)) except ValueError as e: print(e) # Session example with tf.Session(graph=g2) as sess: print(c2.eval(session=sess)) # Next line raises # ValueError: Cannot use the given session to evaluate tensor: # the tensor's graph is different from the session's graph. try: print(c1.eval(session=sess)) except ValueError as e: print(e) if __name__ == '__main__': example() ``` So, as you said, the benefits of using MonitoredTrainingSession are that, this object takes care of initialising variables, starting queue runner as well as setting up the file writers, but it has also the benefit of making your code easy to distribute as it also works differently depending if you specified the running process as a master or not. For example you could run something like: ``` def run_my_model(train_op, session_args): with tf.train.MonitoredTrainingSession(**session_args) as sess: sess.run(train_op) ``` that you would call in a non-distributed way: ``` run_my_model(train_op, {})` ``` or in a distributed way (see the distributed doc for more information on the inputs): ``` run_my_model(train_op, {\"master\": server.target, \"is_chief\": (FLAGS.task_index == 0)}) ``` On the other hand, the benefit of using the raw tf.Session object is that, you don't have the extra benefits of tf.train.MonitoredTrainingSession, which can be useful if you don't plan to use them or if you want to get more control (for example on how the queues are started). 
EDIT (as per comment): For the op initialisation, you would have to do something like this (cf. the official doc): ``` # Define your graph and your ops init_op = tf.global_variables_initializer() with tf.Session() as sess: sess.run(init_op) sess.run(your_graph_ops,...) ``` For the QueueRunner, I would refer you to the official doc where you will find more complete examples. EDIT2: The main concept to understand to get a sense of how tf.train.MonitoredTrainingSession works is the _WrappedSession class: This wrapper is used as a base class for various session wrappers that provide additional functionality such as monitoring, coordination, and recovery. The tf.train.MonitoredTrainingSession works (as of version 1.1) this way: It first checks if it is a chief or a worker (cf. the distributed doc for lexical questions). It begins the hooks which have been provided (for example, StopAtStepHook would just retrieve the global_step tensor at this stage). It creates a session which is a Chief (or Worker) session wrapped into a _HookedSession wrapped into a _CoordinatedSession wrapped into a _RecoverableSession. The Chief\/Worker sessions are in charge of running the initialising ops provided by the Scaffold. ``` scaffold: A `Scaffold` used for gathering or building supportive ops. If not specified a default one is created. It's used to finalize the graph. ``` The chief session also takes care of all the checkpoint parts: e.g. restoring from checkpoints using the Saver from the Scaffold. The _HookedSession is basically there to decorate the run method: it calls the _call_hook_before_run and after_run methods when relevant. At creation, the _CoordinatedSession builds a Coordinator which starts the queue runners and will be responsible for closing them. The _RecoverableSession ensures that there is a retry in case of tf.errors.AbortedError.
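The layering just described (a run method decorated by hooks, wrapped again for coordination and recovery) is essentially the decorator pattern applied to sessions. Here is a framework-free toy sketch of that pattern; the class and hook names are illustrative and are not TensorFlow's actual internals:

```python
class BaseSession:
    """Stands in for a raw session: it just executes whatever it is given."""
    def run(self, op):
        return op()

class HookedSession:
    """Toy analogue of _HookedSession: calls every hook before and after run()."""
    def __init__(self, sess, hooks):
        self._sess = sess
        self._hooks = hooks

    def run(self, op):
        for h in self._hooks:
            h.before_run()
        result = self._sess.run(op)  # delegate to the wrapped session
        for h in self._hooks:
            h.after_run(result)
        return result

class CountingHook:
    """Toy hook that counts how many times run() was invoked."""
    def __init__(self):
        self.calls = 0
    def before_run(self):
        self.calls += 1
    def after_run(self, result):
        pass

hook = CountingHook()
sess = HookedSession(BaseSession(), [hook])
sess.run(lambda: 1 + 1)
sess.run(lambda: 2 * 2)
print(hook.calls)  # -> 2
```

Each additional wrapper (coordination, recovery) would follow the same shape: hold an inner session and decorate its run method.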
In conclusion, the tf.train.MonitoredTrainingSession avoids a lot of boilerplate code while being easily extendable with the hooks mechanism.", "best_answers_score":0.8, "library_name":"tensorflow", "question_url":"https:\/\/stackoverflow.com\/questions\/43245231\/how-do-monitored-training-sessions-work", "best_answers_votes":36, "question_length":623, "response_length":4665 }, { "question":"Multilabel Text Classification using TensorFlow The text data is organized as a vector with 20,000 elements, like [2, 1, 0, 0, 5, ...., 0]. The i-th element indicates the frequency of the i-th word in a text. The ground truth label data is also represented as a vector with 4,000 elements, like [0, 0, 1, 0, 1, ...., 0]. The i-th element indicates whether the i-th label is a positive label for a text. The number of labels for a text varies from text to text. I have code for single-label text classification. How can I edit the following code for multilabel text classification? Especially, I would like to know the following points. How to compute accuracy using TensorFlow. How to set a threshold which judges whether a label is positive or negative. For instance, if the output is [0.80, 0.43, 0.21, 0.01, 0.32] and the ground truth is [1, 1, 0, 0, 1], the labels with scores over 0.25 should be judged as positive. Thank you.
``` import tensorflow as tf # hidden Layer class HiddenLayer(object): def __init__(self, input, n_in, n_out): self.input = input w_h = tf.Variable(tf.random_normal([n_in, n_out],mean = 0.0,stddev = 0.05)) b_h = tf.Variable(tf.zeros([n_out])) self.w = w_h self.b = b_h self.params = [self.w, self.b] def output(self): linarg = tf.matmul(self.input, self.w) + self.b self.output = tf.nn.relu(linarg) return self.output # output Layer class OutputLayer(object): def __init__(self, input, n_in, n_out): self.input = input w_o = tf.Variable(tf.random_normal([n_in, n_out], mean = 0.0, stddev = 0.05)) b_o = tf.Variable(tf.zeros([n_out])) self.w = w_o self.b = b_o self.params = [self.w, self.b] def output(self): linarg = tf.matmul(self.input, self.w) + self.b self.output = tf.nn.relu(linarg) return self.output # model def model(): h_layer = HiddenLayer(input = x, n_in = 20000, n_out = 1000) o_layer = OutputLayer(input = h_layer.output(), n_in = 1000, n_out = 4000) # loss function out = o_layer.output() cross_entropy = -tf.reduce_sum(y_*tf.log(out + 1e-9), name='xentropy') # regularization l2 = (tf.nn.l2_loss(h_layer.w) + tf.nn.l2_loss(o_layer.w)) lambda_2 = 0.01 # compute loss loss = cross_entropy + lambda_2 * l2 # compute accuracy for single label classification task correct_pred = tf.equal(tf.argmax(out, 1), tf.argmax(y, 1)) accuracy = tf.reduce_mean(tf.cast(correct_pred, \"float\")) return loss, accuracy ```", "response":"Change relu to sigmoid of output layer. 
Modify cross entropy loss to explicit mathematical formula of sigmoid cross entropy loss (explicit loss was working in my case\/version of tensorflow ) ``` import tensorflow as tf # hidden Layer class HiddenLayer(object): def __init__(self, input, n_in, n_out): self.input = input w_h = tf.Variable(tf.random_normal([n_in, n_out],mean = 0.0,stddev = 0.05)) b_h = tf.Variable(tf.zeros([n_out])) self.w = w_h self.b = b_h self.params = [self.w, self.b] def output(self): linarg = tf.matmul(self.input, self.w) + self.b self.output = tf.nn.relu(linarg) return self.output # output Layer class OutputLayer(object): def __init__(self, input, n_in, n_out): self.input = input w_o = tf.Variable(tf.random_normal([n_in, n_out], mean = 0.0, stddev = 0.05)) b_o = tf.Variable(tf.zeros([n_out])) self.w = w_o self.b = b_o self.params = [self.w, self.b] def output(self): linarg = tf.matmul(self.input, self.w) + self.b #changed relu to sigmoid self.output = tf.nn.sigmoid(linarg) return self.output # model def model(): h_layer = HiddenLayer(input = x, n_in = 20000, n_out = 1000) o_layer = OutputLayer(input = h_layer.output(), n_in = 1000, n_out = 4000) # loss function out = o_layer.output() # modified cross entropy to explicit mathematical formula of sigmoid cross entropy loss cross_entropy = -tf.reduce_sum( ( (y_*tf.log(out + 1e-9)) + ((1-y_) * tf.log(1 - out + 1e-9)) ) , name='xentropy' ) # regularization l2 = (tf.nn.l2_loss(h_layer.w) + tf.nn.l2_loss(o_layer.w)) lambda_2 = 0.01 # compute loss loss = cross_entropy + lambda_2 * l2 # compute accuracy for single label classification task correct_pred = tf.equal(tf.argmax(out, 1), tf.argmax(y, 1)) accuracy = tf.reduce_mean(tf.cast(correct_pred, \"float\")) return loss, accuracy ```", "best_answers_score":0.8, "library_name":"tensorflow", "question_url":"https:\/\/stackoverflow.com\/questions\/35400065\/multilabel-text-classification-using-tensorflow", "best_answers_votes":20, "question_length":2338, "response_length":1770 }, { "question":"How 
to fix MatMul Op has type float64 that does not match type float32 TypeError? I am trying to save neural network weights into a file and then restore those weights by initializing the network instead of random initialization. My code works fine with random initialization. But, when I initialize the weights from file it shows me an error TypeError: Input 'b' of 'MatMul' Op has type float64 that does not match type float32 of argument 'a'. I don't know how to solve this issue. Here is my code: Model Initialization ``` # Parameters training_epochs = 5 batch_size = 64 display_step = 5 batch = tf.Variable(0, trainable=False) regualarization = 0.008 # Network Parameters n_hidden_1 = 300 # 1st layer num features n_hidden_2 = 250 # 2nd layer num features n_input = model.layer1_size # Vector input (sentence shape: 30*10) n_classes = 12 # Sentence Category detection total classes (0-11 categories) #History storing variables for plots loss_history = [] train_acc_history = [] val_acc_history = [] # tf Graph input x = tf.placeholder(\"float\", [None, n_input]) y = tf.placeholder(\"float\", [None, n_classes]) ``` Model parameters ``` #loading Weights def weight_variable(fan_in, fan_out, filename): stddev = np.sqrt(2.0\/fan_in) if (filename == \"\"): initial = tf.random_normal([fan_in,fan_out], stddev=stddev) else: initial = np.loadtxt(filename) print initial.shape return tf.Variable(initial) #loading Biases def bias_variable(shape, filename): if (filename == \"\"): initial = tf.constant(0.1, shape=shape) else: initial = np.loadtxt(filename) print initial.shape return tf.Variable(initial) # Create model def multilayer_perceptron(_X, _weights, _biases): layer_1 = tf.nn.relu(tf.add(tf.matmul(_X, _weights['h1']), _biases['b1'])) layer_2 = tf.nn.relu(tf.add(tf.matmul(layer_1, _weights['h2']), _biases['b2'])) return tf.matmul(layer_2, weights['out']) + biases['out'] # Store layers weight & bias weights = { 'h1': w2v_utils.weight_variable(n_input, n_hidden_1,
filename=\"weights_h1.txt\"), 'h2': w2v_utils.weight_variable(n_hidden_1, n_hidden_2, filename=\"weights_h2.txt\"), 'out': w2v_utils.weight_variable(n_hidden_2, n_classes, filename=\"weights_out.txt\") } biases = { 'b1': w2v_utils.bias_variable([n_hidden_1], filename=\"biases_b1.txt\"), 'b2': w2v_utils.bias_variable([n_hidden_2], filename=\"biases_b2.txt\"), 'out': w2v_utils.bias_variable([n_classes], filename=\"biases_out.txt\") } # Define loss and optimizer #learning rate # Optimizer: set up a variable that's incremented once per batch and # controls the learning rate decay. learning_rate = tf.train.exponential_decay( 0.02*0.01, # Base learning rate. #0.002 batch * batch_size, # Current index into the dataset. X_train.shape[0], # Decay step. 0.96, # Decay rate. staircase=True) # Construct model pred = tf.nn.relu(multilayer_perceptron(x, weights, biases)) #L2 regularization l2_loss = tf.add_n([tf.nn.l2_loss(v) for v in tf.trainable_variables()]) #Softmax loss cost = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(pred, y)) #Total_cost cost = cost+ (regualarization*0.5*l2_loss) # Adam Optimizer optimizer = tf.train.AdamOptimizer(learning_rate=learning_rate).minimize(cost,global_step=batch) # Add ops to save and restore all the variables. saver = tf.train.Saver() # Initializing the variables init = tf.initialize_all_variables() print \"Network Initialized!\" ``` ERROR DETAILS", "response":"The tf.matmul() op does not perform automatic type conversions, so both of its inputs must have the same element type. The error message you are seeing indicates that you have a call to tf.matmul() where the first argument has type tf.float32, and the second argument has type tf.float64. You must convert one of the inputs to match the other, for example using tf.cast(x, tf.float32). Looking at your code, I don't see anywhere that a tf.float64 tensor is explicitly created (the default dtype for floating-point values in the TensorFlow Python API\u2014e.g. 
for tf.constant(37.0)\u2014is tf.float32). I would guess that the errors are caused by the np.loadtxt(filename) calls, which might be loading an np.float64 array. You can explicitly change them to load np.float32 arrays (which are converted to tf.float32 tensors) as follows: ``` initial = np.loadtxt(filename).astype(np.float32) ```", "best_answers_score":0.8, "library_name":"tensorflow", "question_url":"https:\/\/stackoverflow.com\/questions\/36210887\/how-to-fix-matmul-op-has-type-float64-that-does-not-match-type-float32-typeerror", "best_answers_votes":58, "question_length":3362, "response_length":883 }, { "question":"Tensor with unspecified dimension in tensorflow I'm playing around with tensorflow and ran into a problem with the following code: ``` def _init_parameters(self, input_data, labels): # the input shape is (batch_size, input_size) input_size = tf.shape(input_data)[1] # labels in one-hot format have shape (batch_size, num_classes) num_classes = tf.shape(labels)[1] stddev = 1.0 \/ tf.cast(input_size, tf.float32) w_shape = tf.pack([input_size, num_classes], 'w-shape') normal_dist = tf.truncated_normal(w_shape, stddev=stddev, name='normaldist') self.w = tf.Variable(normal_dist, name='weights') ``` (I'm using tf.pack as suggested in this question, since I was getting the same error) When I run it (from a larger script that invokes this one), I get this error: ``` ValueError: initial_value must have a shape specified: Tensor(\"normaldist:0\", shape=TensorShape([Dimension(None), Dimension(None)]), dtype=float32) ``` I tried to replicate the process in the interactive shell. 
Indeed, the dimensions of normal_dist are unspecified, although the supplied values do exist: ``` In [70]: input_size.eval() Out[70]: 4 In [71]: num_classes.eval() Out[71]: 3 In [72]: w_shape.eval() Out[72]: array([4, 3], dtype=int32) In [73]: normal_dist.eval() Out[73]: array([[-0.27035281, -0.223277 , 0.14694688], [-0.16527176, 0.02180306, 0.00807841], [ 0.22624688, 0.36425814, -0.03099642], [ 0.25575709, -0.02765726, -0.26169327]], dtype=float32) In [78]: normal_dist.get_shape() Out[78]: TensorShape([Dimension(None), Dimension(None)]) ``` This is weird. Tensorflow generates the vector but can't say its shape. Am I doing something wrong?", "response":"As Ishamael says, all tensors have a static shape, which is known at graph construction time and accessible using Tensor.get_shape(); and a dynamic shape, which is only known at runtime and is accessible by fetching the value of the tensor, or passing it to an operator like tf.shape. In many cases, the static and dynamic shapes are the same, but they can be different - the static shape can be partially defined - in order to allow the dynamic shape to vary from one step to the next. In your code normal_dist has a partially-defined static shape, because w_shape is a computed value. (TensorFlow sometimes attempts to evaluate these computed values at graph construction time, but it gets stuck at tf.pack.) It infers the shape TensorShape([Dimension(None), Dimension(None)]), which means \"a matrix with an unknown number of rows and columns,\" because it knows that w_shape is a vector of length 2, so the resulting normal_dist must be 2-dimensional. You have two options to deal with this. You can set the static shape as Ishamael suggests, but this requires you to know the shape at graph construction time. For example, the following may work: ``` normal_dist.set_shape([input_data.get_shape()[1], labels.get_shape()[1]]) ``` Alternatively, you can pass validate_shape=False to the tf.Variable constructor.
This allows you to create a variable with a partially-defined shape, but it limits the amount of static shape information that can be inferred later on in the graph.", "best_answers_score":0.8, "library_name":"tensorflow", "question_url":"https:\/\/stackoverflow.com\/questions\/34079787\/tensor-with-unspecified-dimension-in-tensorflow", "best_answers_votes":47, "question_length":1624, "response_length":1476 }, { "question":"What do the options in ConfigProto like allow_soft_placement and log_device_placement mean? We see this quite often in many of the TensorFlow tutorials: ```py sess = tf.Session(config=tf.ConfigProto(allow_soft_placement=True, log_device_placement=True)) ``` What does allow_soft_placement and log_device_placement mean?", "response":"If you look at the API of ConfigProto, on line 278, you will see this: ```cpp \/\/ Whether soft placement is allowed. If allow_soft_placement is true, \/\/ an op will be placed on CPU if \/\/ 1. there's no GPU implementation for the OP \/\/ or \/\/ 2. no GPU devices are known or registered \/\/ or \/\/ 3. need to co-locate with reftype input(s) which are from CPU. bool allow_soft_placement = 7; ``` What this really means is that if you do something like this without allow_soft_placement=True, TensorFlow will throw an error. ```py with tf.device('\/gpu:0'): # some op that doesn't have a GPU implementation ``` Right below it, you will see on line 281: ```cpp \/\/ Whether device placements should be logged. 
bool log_device_placement = 8; ``` When log_device_placement=True, you will get a verbose output of something like this: ``` 2017-07-03 01:13:59.466748: I tensorflow\/core\/common_runtime\/simple_placer.cc:841] Placeholder_1: (Placeholder)\/job:localhost\/replica:0\/task:0\/cpu:0 Placeholder: (Placeholder): \/job:localhost\/replica:0\/task:0\/cpu:0 2017-07-03 01:13:59.466765: I tensorflow\/core\/common_runtime\/simple_placer.cc:841] Placeholder: (Placeholder)\/job:localhost\/replica:0\/task:0\/cpu:0 Variable\/initial_value: (Const): \/job:localhost\/replica:0\/task:0\/cpu:0 2017-07-03 01:13:59.466783: I tensorflow\/core\/common_runtime\/simple_placer.cc:841] Variable\/initial_value: (Const)\/job:localhost\/replica:0\/task:0\/cpu:0 ``` You can see where each operation is mapped to. For this case, they are all mapped to \/cpu:0, but if you're in a distributed setting, there would be many more devices.", "best_answers_score":0.8, "library_name":"tensorflow", "question_url":"https:\/\/stackoverflow.com\/questions\/44873273\/what-do-the-options-in-configproto-like-allow-soft-placement-and-log-device-plac", "best_answers_votes":37, "question_length":319, "response_length":1577 }, { "question":"Output from TensorFlow `py_func` has unknown rank\/shape I am trying to create a simple neural net in TensorFlow. The only tricky part is I have a custom operation that I have implemented with py_func. When I pass the output from py_func to a Dense layer, TensorFlow complains that the rank should be known. The specific error is: ```py ValueError: Inputs to `Dense` should have known rank. ``` I don't know how to preserve the shape of my data when I pass it through py_func. My question is how do I get the correct shape? I have a simple example below to illustrate the problem. 
```py def my_func(x): return np.sinh(x).astype('float32') inp = tf.convert_to_tensor(np.arange(5)) y = tf.py_func(my_func, [inp], tf.float32, False) with tf.Session() as sess: with sess.as_default(): print(inp.shape) print(inp.eval()) print(y.shape) print(y.eval()) ``` The output from this snippet is: ```py (5,) [0 1 2 3 4] [ 0. 1.17520118 3.62686038 10.01787472 27.28991699] ``` Why is y.shape ? I want the shape to be (5,) the same as inp. Thanks!", "response":"Since py_func can execute arbitrary Python code and output anything, TensorFlow can't figure out the shape (it would require analyzing Python code of function body) You can instead give the shape manually ``` y.set_shape(inp.get_shape()) ```", "best_answers_score":0.8, "library_name":"tensorflow", "question_url":"https:\/\/stackoverflow.com\/questions\/42590431\/output-from-tensorflow-py-func-has-unknown-rank-shape", "best_answers_votes":54, "question_length":1032, "response_length":241 }, { "question":"Printing the loss during TensorFlow training I am looking at the TensorFlow \"MNIST For ML Beginners\" tutorial, and I want to print out the training loss after every training step. 
My training loop currently looks like this: ``` for i in range(100): batch_xs, batch_ys = mnist.train.next_batch(100) sess.run(train_step, feed_dict={x: batch_xs, y_: batch_ys}) ``` Now, train_step is defined as: ``` train_step = tf.train.GradientDescentOptimizer(0.01).minimize(cross_entropy) ``` Where cross_entropy is the loss which I want to print out: ``` cross_entropy = -tf.reduce_sum(y_ * tf.log(y)) ``` One way to print this would be to explicitly compute cross_entropy in the training loop: ``` for i in range(100): batch_xs, batch_ys = mnist.train.next_batch(100) cross_entropy = -tf.reduce_sum(y_ * tf.log(y)) print 'loss = ' + str(cross_entropy) sess.run(train_step, feed_dict={x: batch_xs, y_: batch_ys}) ``` I now have two questions regarding this: Given that cross_entropy is already computed during sess.run(train_step, ...), it seems inefficient to compute it twice, requiring twice the number of forward passes of all the training data. Is there a way to access the value of cross_entropy when it was computed during sess.run(train_step, ...)? How do I even print a tf.Variable? Using str(cross_entropy) gives me an error... Thank you!", "response":"You can fetch the value of cross_entropy by adding it to the list of arguments to sess.run(...). For example, your for-loop could be rewritten as follows: ``` for i in range(100): batch_xs, batch_ys = mnist.train.next_batch(100) cross_entropy = -tf.reduce_sum(y_ * tf.log(y)) _, loss_val = sess.run([train_step, cross_entropy], feed_dict={x: batch_xs, y_: batch_ys}) print 'loss = ' + loss_val ``` The same approach can be used to print the current value of a variable. 
Let's say, in addition to the value of cross_entropy, you wanted to print the value of a tf.Variable called W, you could do the following: ``` for i in range(100): batch_xs, batch_ys = mnist.train.next_batch(100) cross_entropy = -tf.reduce_sum(y_ * tf.log(y)) _, loss_val, W_val = sess.run([train_step, cross_entropy, W], feed_dict={x: batch_xs, y_: batch_ys}) print 'loss = %s' % loss_val print 'W = %s' % W_val ```", "best_answers_score":0.8, "library_name":"tensorflow", "question_url":"https:\/\/stackoverflow.com\/questions\/33833818\/printing-the-loss-during-tensorflow-training", "best_answers_votes":47, "question_length":1334, "response_length":886 }, { "question":"What is the difference of static Computational Graphs in tensorflow and dynamic Computational Graphs in Pytorch? When I was learning tensorflow, one basic concept of tensorflow was computational graphs, and the graphs were said to be static. And I found that in Pytorch the graphs are said to be dynamic. What's the difference of static Computational Graphs in tensorflow and dynamic Computational Graphs in Pytorch?", "response":"Both frameworks operate on tensors and view any model as a directed acyclic graph (DAG), but they differ drastically on how you can define them. TensorFlow follows the \u2018data as code and code is data\u2019 idiom. In TensorFlow you define the graph statically before a model can run. All communication with the outer world is performed via the tf.Session object and tf.Placeholder, which are tensors that will be substituted by external data at runtime. In PyTorch things are way more imperative and dynamic: you can define, change and execute nodes as you go, with no special session interfaces or placeholders. Overall, the framework is more tightly integrated with the Python language and feels more native most of the time. When you write in TensorFlow sometimes you feel that your model is behind a brick wall with several tiny holes to communicate over. 
Anyways, this still sounds like a matter of taste more or less. However, those approaches differ not only from a software engineering perspective: there are several dynamic neural network architectures that can benefit from the dynamic approach. Recall RNNs: with static graphs, the input sequence length will stay constant. This means that if you develop a sentiment analysis model for English sentences you must fix the sentence length to some maximum value and pad all smaller sequences with zeros. Not too convenient, huh. And you will get more problems in the domain of recursive RNNs and tree-RNNs. Currently Tensorflow has limited support for dynamic inputs via Tensorflow Fold. PyTorch has it by default. Reference: https:\/\/medium.com\/towards-data-science\/pytorch-vs-tensorflow-spotting-the-difference-25c75777377b https:\/\/www.reddit.com\/r\/MachineLearning\/comments\/5w3q74\/d_so_pytorch_vs_tensorflow_whats_the_verdict_on\/", "best_answers_score":0.8, "library_name":"tensorflow", "question_url":"https:\/\/stackoverflow.com\/questions\/46154189\/what-is-the-difference-of-static-computational-graphs-in-tensorflow-and-dynamic", "best_answers_votes":27, "question_length":411, "response_length":1754 }, { "question":"Are tf.layers.dense() and tf.contrib.layers.fully_connected() interchangeable? I am used to using tf.contrib.layers.fully_connected to build a fully connected layer. Recently I ran into tf.layers.dense apparently used where the first function could be used. Are they interchangeable, producing the same output?", "response":"They are essentially the same, the latter calling the former. However tf.contrib.layers.fully_connected adds a few functionalities on top of dense, in particular the possibility to pass a normalization and an activation in the parameters, \u00e0 la Keras. As noted by @wordforthewise, mind that the latter defaults to tf.nn.relu. 
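To make that one behavioral difference concrete, here is a library-agnostic numpy sketch of what both layers compute; the weights are invented for illustration and nothing below calls TensorFlow:

```python
import numpy as np

# dense(x) = x @ W + b with no activation by default,
# fully_connected(x) = relu(x @ W + b) by default
rng = np.random.default_rng(0)
x = rng.normal(size=(4, 8))   # batch of 4 samples, 8 features
W = rng.normal(size=(8, 3))   # kernel that either layer would create
b = np.zeros(3)               # bias

dense_out = x @ W + b                    # tf.layers.dense default: linear
fc_out = np.maximum(x @ W + b, 0.0)      # fully_connected default: relu

print(dense_out.shape, fc_out.shape)     # (4, 3) (4, 3)
```

Apart from that default activation, the two layers build the same affine map.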
More generally, the TF API proposes (and mixes somewhat confusingly) low- and hi-level APIs; more on that here.", "best_answers_score":0.8, "library_name":"tensorflow", "question_url":"https:\/\/stackoverflow.com\/questions\/44912297\/are-tf-layers-dense-and-tf-contrib-layers-fully-connected-interchangeable", "best_answers_votes":40, "question_length":311, "response_length":427 }, { "question":"ValueError: Shapes (None, 1) and (None, 3) are incompatible I have a 3 dimensional dataset of audio files where X.shape is (329,20,85). I want to have a simpl bare-bones model running, so please don't nitpick and address only the issue at hand. Here is the code: ``` model = tf.keras.models.Sequential() model.add(tf.keras.layers.LSTM(32, return_sequences=True, stateful=False, input_shape = (20,85,1))) model.add(tf.keras.layers.LSTM(20)) model.add(tf.keras.layers.Dense(nb_classes, activation='softmax')) model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=[\"accuracy\"]) model.summary() print(\"Train...\") model.fit(X_train, y_train, batch_size=batch_size, nb_epoch=50, validation_data=(X_test, y_test)) ``` But then I had the error mentioned in the title: ValueError: Shapes (None, 1) and (None, 3) are incompatible Here is the model.summary() ``` Model: \"sequential_13\" _________________________________________________________________ Layer (type) Output Shape Param # ================================================================= lstm_21 (LSTM) (None, 20, 32) 15104 _________________________________________________________________ lstm_22 (LSTM) (None, 20) 4240 _________________________________________________________________ dense_8 (Dense) (None, 3) 63 ================================================================= Total params: 19,407 Trainable params: 19,407 Non-trainable params: 0 _________________________________________________________________ Train... 
``` For this, I followed this post and updated Tensorflow to the latest version, but the issue persists. This post is completely unrelated and highly unreliable. This post, although a bit related, has been unanswered for a while now. Update 1.0: I strongly think the problem has something to do with the final Dense layer where I pass nb_classes as 3, since I am classifying for 3 categories in y. So I changed the Dense layer's nb_classes to 1, which ran the model and gives me this output, which I am positive is wrong. ``` Train... 9\/9 [==============================] - 2s 177ms\/step - loss: 0.0000e+00 - accuracy: 0.1520 - val_loss: 0.0000e+00 - val_accuracy: 0.3418 ``` Update 2.0: I one-hot encoded the ys and resolved the shape issue. But now the above output persists. Any help with this? Or should I post a new question for this? Thanks for all the help. How should I proceed, or what should I be changing?", "response":"The first problem is with the LSTM input_shape. input_shape = (20,85,1). From the doc: https:\/\/keras.io\/layers\/recurrent\/ LSTM layer expects a 3D tensor with shape (batch_size, timesteps, input_dim). model.add(tf.keras.layers.Dense(nb_classes, activation='softmax')) - this suggests you're doing a multi-class classification. So, your y_train and y_test have to be one-hot-encoded. That means they must have dimension (number_of_samples, 3), where 3 denotes the number of classes. You need to apply tensorflow.keras.utils.to_categorical to them. ``` y_train = to_categorical(y_train, 3) y_test = to_categorical(y_test, 3) ``` ref: https:\/\/www.tensorflow.org\/api_docs\/python\/tf\/keras\/utils\/to_categorical tf.keras.callbacks.History() - this callback is automatically applied to every Keras model. The History object gets returned by the fit method of models. 
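For intuition, what to_categorical produces can be sketched with plain numpy (the labels here are hypothetical):

```python
import numpy as np

def one_hot(labels, num_classes):
    # pick rows of the identity matrix, one row per label
    return np.eye(num_classes)[labels]

y = np.array([0, 2, 1, 2])     # hypothetical integer class labels
encoded = one_hot(y, 3)
print(encoded.shape)           # (4, 3)
```

Each row has a single 1 at the label's index, matching the (None, 3) shape the softmax Dense layer outputs.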
ref: https:\/\/www.tensorflow.org\/api_docs\/python\/tf\/keras\/callbacks\/History", "best_answers_score":0.8, "library_name":"tensorflow", "question_url":"https:\/\/stackoverflow.com\/questions\/61550026\/valueerror-shapes-none-1-and-none-3-are-incompatible", "best_answers_votes":44, "question_length":2415, "response_length":934 }, { "question":"The print of string constant is always attached with 'b' in TensorFlow [duplicate] This question already has answers here: What does the 'b' character do in front of a string literal? (12 answers) Closed 8 years ago. During the test of TensorFlow r0.12 (CPU) installed on Windows 10, I found that the printed string constant is always printed with a 'b' in front. Printing in plain Python is normal. I cannot figure out the reason, so I came here for help. The code is as follows: ``` >>>import tensorflow as tf >>>hello = tf.constant('Hello, TensorFlow!') >>>sess = tf.Session() >>>print(sess.run(hello)) b'Hello, TensorFlow!' ```", "response":"Use sess.run(hello).decode() because it is a bytestring. The decode method will return the string. Your print statement must look like ``` print(sess.run(hello).decode()) ```", "best_answers_score":0.8, "library_name":"tensorflow", "question_url":"https:\/\/stackoverflow.com\/questions\/40904979\/the-print-of-string-constant-is-always-attached-with-b-intensorflow", "best_answers_votes":45, "question_length":614, "response_length":170 }, { "question":"Confused by the behavior of `tf.cond` I need a conditional control flow in my graph. If pred is True, the graph should call an op that updates a variable and then returns it; otherwise it returns the variable unchanged. 
A simplified version is: ```py pred = tf.constant(True) x = tf.Variable([1]) assign_x_2 = tf.assign(x, [2]) def update_x_2(): with tf.control_dependencies([assign_x_2]): return tf.identity(x) y = tf.cond(pred, update_x_2, lambda: tf.identity(x)) with tf.Session() as session: session.run(tf.initialize_all_variables()) print(y.eval()) ``` However, I find that both pred=True and pred=False lead to the same result y=[2], which means the assign op is also called when update_x_2 is not selected by tf.cond. How to explain this? And how to solve this problem?", "response":"TL;DR: If you want tf.cond() to perform a side effect (like an assignment) in one of the branches, you must create the op that performs the side effect inside the function that you pass to tf.cond(). The behavior of tf.cond() is a little unintuitive. Because execution in a TensorFlow graph flows forward through the graph, all operations that you refer to in either branch must execute before the conditional is evaluated. This means that both the true and the false branches receive a control dependency on the tf.assign() op, and so y always gets set to 2, even if pred is False. The solution is to create the tf.assign() op inside the function that defines the true branch. 
For example, you could structure your code as follows: ```py pred = tf.placeholder(tf.bool, shape=[]) x = tf.Variable([1]) def update_x_2(): with tf.control_dependencies([tf.assign(x, [2])]): return tf.identity(x) y = tf.cond(pred, update_x_2, lambda: tf.identity(x)) with tf.Session() as session: session.run(tf.initialize_all_variables()) print(y.eval(feed_dict={pred: False})) # ==> [1] print(y.eval(feed_dict={pred: True})) # ==> [2] ```", "best_answers_score":0.8, "library_name":"tensorflow", "question_url":"https:\/\/stackoverflow.com\/questions\/37063952\/confused-by-the-behavior-of-tf-cond", "best_answers_votes":42, "question_length":777, "response_length":1119 }, { "question":"Keras confusion about number of layers I'm a bit confused about the number of layers that are used in Keras models. The documentation is rather opaque on the matter. According to Jason Brownlee the first layer technically consists of two layers, the input layer, specified by input_dim and a hidden layer. See the first questions on his blog. In all of the Keras documentation the first layer is generally specified as model.add(Dense(number_of_neurons, input_dim=number_of_cols_in_input, activtion=some_activation_function)). The most basic model we could make would therefore be: ``` model = Sequential() model.add(Dense(1, input_dim = 100, activation = None)) ``` Does this model consist of a single layer, where 100 dimensional input is passed through a single input neuron, or does it consist of two layers, first a 100 dimensional input layer and second a 1 dimensional hidden layer? Further, if I were to specify a model like this, how many layers does it have? 
``` model = Sequential() model.add(Dense(32, input_dim = 100, activation = 'sigmoid')) model.add(Dense(1)) ``` Is this a model with 1 input layer, 1 hidden layer, and 1 output layer or is this a model with 1 input layer and 1 output layer?", "response":"Your first one consists of a 100-neuron input layer connected to one single output neuron. Your second one consists of a 100-neuron input layer, one hidden layer of 32 neurons and one output layer of one single neuron. You have to think of your first layer as your input layer (with the same number of neurons as the dimension, so 100 for you) connected to another layer with as many neurons as you specify (1 in your first case, 32 in the second one). In Keras what is useful is the command ``` model.summary() ```", "best_answers_score":0.8, "library_name":"tensorflow", "question_url":"https:\/\/stackoverflow.com\/questions\/45328843\/keras-confusion-about-number-of-layers", "best_answers_votes":25, "question_length":1209, "response_length":513 }, { "question":"How to install Tensorflow on Python 2.7 on Windows? I try to install TensorFlow via pip (pip install tensorflow) but get this error: could not find a version that satisfies the requirement tensorflow (from versions: ) Is there a solution to this problem? I still wish to install it via pip", "response":"If you only need TensorFlow because of Keras and you are on Python 2.7.x, you can avoid installing Tensorflow (Google) and replace it by CNTK (Microsoft). According to Jeong-Yoon Lee CNTK is a lot (about 2 to 4 times) faster than TensorFlow for LSTM (Bidirectional LSTM on IMDb Data and Text Generation via LSTM), while speeds for other types of neural networks are close to each other. Your Keras code does not need to be modified (I checked it with 2 examples of Keras using TensorFlow and successfully replaced TensorFlow with CNTK, without changing anything in the Keras code). So how do you install it? 
-CPU-only version of CNTK: pip install https:\/\/cntk.ai\/PythonWheel\/CPU-Only\/cntk-2.4-cp27-cp27m-win_amd64.whl -GPU version of CNTK: pip install https:\/\/cntk.ai\/PythonWheel\/GPU\/cntk-2.4-cp27-cp27m-win_amd64.whl -Test CNTK install: python -c \"import cntk; print(cntk.__version__)\" -Install Keras: The Python Deep Learning library pip install keras -Enable CNTK as Keras back end instead of TensorFlow: modify the \"keras.json\" file under %USERPROFILE%\/.keras ``` { \"epsilon\": 1e-07, \"image_data_format\": \"channels_last\", \"backend\": \"cntk\", \"floatx\": \"float32\" } ```", "best_answers_score":0.8, "library_name":"tensorflow", "question_url":"https:\/\/stackoverflow.com\/questions\/45316569\/how-to-install-tensorflow-on-python-2-7-on-windows", "best_answers_votes":15, "question_length":288, "response_length":1151 }, { "question":"Keras Maxpooling2d layer gives ValueError I am trying to replicate the VGG16 model in keras, the following is my code: ``` model = Sequential() model.add(ZeroPadding2D((1,1),input_shape=(3,224,224))) model.add(Convolution2D(64, 3, 3, activation='relu')) model.add(ZeroPadding2D((1,1))) model.add(Convolution2D(64, 3, 3, activation='relu')) model.add(MaxPooling2D((2,2), strides=(2,2))) model.add(ZeroPadding2D((1,1))) model.add(Convolution2D(128, 3, 3, activation='relu')) model.add(ZeroPadding2D((1,1))) model.add(Convolution2D(128, 3, 3, activation='relu')) model.add(MaxPooling2D((2,2), strides=(2,2))) ###This line gives error model.add(ZeroPadding2D((1,1))) model.add(Convolution2D(256, 3, 3, activation='relu')) model.add(ZeroPadding2D((1,1))) model.add(Convolution2D(256, 3, 3, activation='relu')) model.add(ZeroPadding2D((1,1))) model.add(Convolution2D(256, 3, 3, activation='relu')) model.add(MaxPooling2D((2,2), strides=(2,2))) model.add(ZeroPadding2D((1,1))) model.add(Convolution2D(512, 3, 3, activation='relu')) model.add(ZeroPadding2D((1,1))) model.add(Convolution2D(512, 3, 3, activation='relu')) model.add(ZeroPadding2D((1,1))) 
model.add(Convolution2D(512, 3, 3, activation='relu')) model.add(MaxPooling2D((2,2), strides=(2,2))) model.add(ZeroPadding2D((1,1))) model.add(Convolution2D(512, 3, 3, activation='relu')) model.add(ZeroPadding2D((1,1))) model.add(Convolution2D(512, 3, 3, activation='relu')) model.add(ZeroPadding2D((1,1))) model.add(Convolution2D(512, 3, 3, activation='relu')) model.add(MaxPooling2D((2,2), strides=(2,2))) model.add(Flatten()) model.add(Dense(4096, activation='relu')) model.add(Dropout(0.5)) model.add(Dense(4096, activation='relu')) model.add(Dropout(0.5)) model.add(Dense(1000, activation='softmax')) ``` The maxpooling2d layer gives an error at the line which is commented The error says: ``` ValueError: Negative dimension size caused by subtracting 2 from 1 for 'MaxPool_7' (op: 'MaxPool') with input shapes: [?,1,112,128]. ``` What might be the reason behind this? How to solve this? Edit: A more detailed error log: ValueError Traceback (most recent call last) in () 12 model.add(Convolution2D(128, 3, 3, activation='relu')) 13 ---> 14 model.add(MaxPooling2D((2,2), strides=(2,2))) 15 16 model.add(ZeroPadding2D((1,1))) \/usr\/local\/lib\/python2.7\/dist-packages\/keras\/models.pyc in add(self, layer) 306 output_shapes=[self.outputs[0]._keras_shape]) 307 else: --> 308 output_tensor = layer(self.outputs[0]) 309 if type(output_tensor) is list: 310 raise Exception('All layers in a Sequential model ' \/usr\/local\/lib\/python2.7\/dist-packages\/keras\/engine\/topology.pyc in call(self, x, mask) 512 if inbound_layers: 513 # this will call layer.build() if necessary --> 514 self.add_inbound_node(inbound_layers, node_indices, tensor_indices) 515 input_added = True 516 \/usr\/local\/lib\/python2.7\/dist-packages\/keras\/engine\/topology.pyc in add_inbound_node(self, inbound_layers, node_indices, tensor_indices) 570 # creating the node automatically updates self.inbound_nodes 571 # as well as outbound_nodes on inbound layers. 
--> 572 Node.create_node(self, inbound_layers, node_indices, tensor_indices) 573 574 def get_output_shape_for(self, input_shape): \/usr\/local\/lib\/python2.7\/dist-packages\/keras\/engine\/topology.pyc in create_node(cls, outbound_layer, inbound_layers, node_indices, tensor_indices) 147 148 if len(input_tensors) == 1: --> 149 output_tensors = to_list(outbound_layer.call(input_tensors[0], mask=input_masks[0])) 150 output_masks = to_list(outbound_layer.compute_mask(input_tensors[0], input_masks[0])) 151 # TODO: try to auto-infer shape if exception is raised by get_output_shape_for \/usr\/local\/lib\/python2.7\/dist-packages\/keras\/layers\/pooling.pyc in call(self, x, mask) 160 strides=self.strides, 161 border_mode=self.border_mode, --> 162 dim_ordering=self.dim_ordering) 163 return output 164 \/usr\/local\/lib\/python2.7\/dist-packages\/keras\/layers\/pooling.pyc in _pooling_function(self, inputs, pool_size, strides, border_mode, dim_ordering) 210 border_mode, dim_ordering): 211 output = K.pool2d(inputs, pool_size, strides, --> 212 border_mode, dim_ordering, pool_mode='max') 213 return output 214 \/usr\/local\/lib\/python2.7\/dist-packages\/keras\/backend\/tensorflow_backend.pyc in pool2d(x, pool_size, strides, border_mode, dim_ordering, pool_mode) 1699 1700 if pool_mode == 'max': -> 1701 x = tf.nn.max_pool(x, pool_size, strides, padding=padding) 1702 elif pool_mode == 'avg': 1703 x = tf.nn.avg_pool(x, pool_size, strides, padding=padding) \/usr\/local\/lib\/python2.7\/dist-packages\/tensorflow\/python\/ops\/nn_ops.pyc in max_pool(value, ksize, strides, padding, data_format, name) 1391 padding=padding, 1392 data_format=data_format, -> 1393 name=name) 1394 1395 \/usr\/local\/lib\/python2.7\/dist-packages\/tensorflow\/python\/ops\/gen_nn_ops.pyc in _max_pool(input, ksize, strides, padding, data_format, name) 1593 result = _op_def_lib.apply_op(\"MaxPool\", input=input, ksize=ksize, 1594 strides=strides, padding=padding, -> 1595 data_format=data_format, name=name) 1596 
return result 1597 \/usr\/local\/lib\/python2.7\/dist-packages\/tensorflow\/python\/framework\/op_def_library.pyc in apply_op(self, op_type_name, name, **keywords) 747 op = g.create_op(op_type_name, inputs, output_types, name=scope, 748 input_types=input_types, attrs=attr_protos, --> 749 op_def=op_def) 750 outputs = op.outputs 751 return _Restructure(ops.convert_n_to_tensor(outputs), \/usr\/local\/lib\/python2.7\/dist-packages\/tensorflow\/python\/framework\/ops.pyc in create_op(self, op_type, inputs, dtypes, input_types, name, attrs, op_def, compute_shapes, compute_device) 2388 original_op=self._default_original_op, op_def=op_def) 2389 if compute_shapes: -> 2390 set_shapes_for_outputs(ret) 2391 self._add_op(ret) 2392 self._record_op_seen_by_control_dependencies(ret) \/usr\/local\/lib\/python2.7\/dist-packages\/tensorflow\/python\/framework\/ops.pyc in set_shapes_for_outputs(op) 1783 raise RuntimeError(\"No shape function registered for standard op: %s\" 1784 % op.type) -> 1785 shapes = shape_func(op) 1786 if shapes is None: 1787 raise RuntimeError( \/usr\/local\/lib\/python2.7\/dist-packages\/tensorflow\/python\/framework\/common_shapes.pyc in call_cpp_shape_fn(op, input_tensors_needed, debug_python_shape_fn) 594 status) 595 except errors.InvalidArgumentError as err: --> 596 raise ValueError(err.message) 597 598 # Convert TensorShapeProto values in output_shapes. ValueError: Negative dimension size caused by subtracting 2 from 1 for 'MaxPool_7' (op: 'MaxPool') with input shapes: [?,1,112,128].", "response":"Quoting an answer mentioned in github, you need to specify the dimension ordering: Keras is a wrapper over Theano or Tensorflow libraries. Keras uses the setting variable image_dim_ordering to decide if the input layer is Theano or Tensorflow format. This setting can be specified in 2 ways - specify 'tf' or 'th' in ~\/.keras\/keras.json like so - image_dim_ordering: 'th'. Note: this is a json file. 
or specify the image_dim_ordering in your model like so: model.add(MaxPooling2D(pool_size=(2, 2), dim_ordering=\"th\")) Update: Apr 2020 Keras 2.2.5 link seems to have an updated API where dim_ordering is changed to data_format so: keras.layers.MaxPooling2D(pool_size=(2, 2), strides=None, padding='valid', data_format='channels_first') to get NCHW, or use channels_last to get NHWC Appendix: image_dim_ordering in 'th' mode the channels dimension (the depth) is at index 1 (e.g. 3, 256, 256). In 'tf' mode it is at index 3 (e.g. 256, 256, 3). Quoting @naoko from comments.", "best_answers_score":0.8, "library_name":"tensorflow", "question_url":"https:\/\/stackoverflow.com\/questions\/39815518\/keras-maxpooling2d-layer-gives-valueerror", "best_answers_votes":28, "question_length":6488, "response_length":970 }, { "question":"You must feed a value for placeholder tensor 'Placeholder' with dtype float I'm new to tensorflow, and I really don't know how to solve the problem. 
The code is like: Feed the train with values: ``` sess.run(train_op, feed_dict={images: e, labels: l, keep_prob_fc2: 0.5}) ``` Use the value in CNN: ``` x = tf.placeholder(tf.float32, [None, 10 * 1024]) ``` Then have the error ``` InvalidArgumentError (see above for traceback): You must feed a value for placeholder tensor 'Placeholder' with dtype float [[Node: Placeholder = Placeholder[dtype=DT_FLOAT, shape=[], _device=\"\/job:localhost\/replica:0\/task:0\/gpu:0\"]()]] ``` I print the input valuetypes using print(e.dtype) and the result is float32 and e.shape:(10, 32, 32, 1). I really don't know why this error is happening. The code format First: ``` define the CNN model \"image = tf.placeholder(tf.float32, [FLAGS.batch_size, 32,32,1])\" is here ``` Second: ``` loss funtion and train_op is here \"label = tf.placeholder(tf.float32, [None, FLAGS.batch_size])\" is here ``` Third is the session: ``` images, labels = getShuffleimage()#here will get shuffle data num_examples = 0 init = tf.initialize_local_variables() with tf.Session() as sess: # Start populating the filename queue. sess.run(init) coord = tf.train.Coordinator() threads = tf.train.start_queue_runners(coord=coord, sess=sess) try: step = 0 while not coord.should_stop(): start_time = time.time() image, label = sess.run([images, labels])#get shuffle images print(image.shape) print(image.dtype) sess.run(train_op, feed_dict={image: image, label: label , keep_prob_fc2: 0.5}) duration = time.time() - start_time except tf.errors.OutOfRangeError: print('Done training after reading all data') finally: # When done, ask the threads to stop. coord.request_stop() # Wait for threads to finish. coord.join(threads) sess.close() ```", "response":"Some questions first why you use sess = tf.InteractiveSession() and with tf.Session() as sess: at same time, just curious second what is your placeholder name x or images? if name is x, {images: x_data...} won't feed x_data to x, it override(?) 
images I think feed_dict should be {x: x_data...} if name is images,do you have two images in your program, placeholder and shuffle data, try to modify name of variable", "best_answers_score":0.8, "library_name":"tensorflow", "question_url":"https:\/\/stackoverflow.com\/questions\/41607155\/you-must-feed-a-value-for-placeholder-tensor-placeholder-with-dtype-float", "best_answers_votes":14, "question_length":1840, "response_length":413 }, { "question":"Tensorflow Slim: TypeError: Expected int32, got list containing Tensors of type '_Message' instead I am following this tutorial for learning TensorFlow Slim but upon running the following code for Inception: ``` import numpy as np import os import tensorflow as tf import urllib2 from datasets import imagenet from nets import inception from preprocessing import inception_preprocessing slim = tf.contrib.slim batch_size = 3 image_size = inception.inception_v1.default_image_size checkpoints_dir = '\/tmp\/checkpoints\/' with tf.Graph().as_default(): url = 'https:\/\/upload.wikimedia.org\/wikipedia\/commons\/7\/70\/EnglishCockerSpaniel_simon.jpg' image_string = urllib2.urlopen(url).read() image = tf.image.decode_jpeg(image_string, channels=3) processed_image = inception_preprocessing.preprocess_image(image, image_size, image_size, is_training=False) processed_images = tf.expand_dims(processed_image, 0) # Create the model, use the default arg scope to configure the batch norm parameters. 
with slim.arg_scope(inception.inception_v1_arg_scope()): logits, _ = inception.inception_v1(processed_images, num_classes=1001, is_training=False) probabilities = tf.nn.softmax(logits) init_fn = slim.assign_from_checkpoint_fn( os.path.join(checkpoints_dir, 'inception_v1.ckpt'), slim.get_model_variables('InceptionV1')) with tf.Session() as sess: init_fn(sess) np_image, probabilities = sess.run([image, probabilities]) probabilities = probabilities[0, 0:] sorted_inds = [i[0] for i in sorted(enumerate(-probabilities), key=lambda x:x[1])] plt.figure() plt.imshow(np_image.astype(np.uint8)) plt.axis('off') plt.show() names = imagenet.create_readable_names_for_imagenet_labels() for i in range(5): index = sorted_inds[i] print('Probability %0.2f%% => [%s]' % (probabilities[index], names[index])) ``` I seem to be getting this set of errors: ``` Traceback (most recent call last): File \"DA_test_pred.py\", line 24, in logits, _ = inception.inception_v1(processed_images, num_classes=1001, is_training=False) File \"\/home\/deepankar1994\/Desktop\/MTP\/TensorFlowEx\/TFSlim\/models\/slim\/nets\/inception_v1.py\", line 290, in inception_v1 net, end_points = inception_v1_base(inputs, scope=scope) File \"\/home\/deepankar1994\/Desktop\/MTP\/TensorFlowEx\/TFSlim\/models\/slim\/nets\/inception_v1.py\", line 96, in inception_v1_base net = tf.concat(3, [branch_0, branch_1, branch_2, branch_3]) File \"\/usr\/local\/lib\/python2.7\/dist-packages\/tensorflow\/python\/ops\/array_ops.py\", line 1053, in concat dtype=dtypes.int32).get_shape( File \"\/usr\/local\/lib\/python2.7\/dist-packages\/tensorflow\/python\/framework\/ops.py\", line 651, in convert_to_tensor as_ref=False) File \"\/usr\/local\/lib\/python2.7\/dist-packages\/tensorflow\/python\/framework\/ops.py\", line 716, in internal_convert_to_tensor ret = conversion_func(value, dtype=dtype, name=name, as_ref=as_ref) File \"\/usr\/local\/lib\/python2.7\/dist-packages\/tensorflow\/python\/framework\/constant_op.py\", line 176, in 
_constant_tensor_conversion_function return constant(v, dtype=dtype, name=name) File \"\/usr\/local\/lib\/python2.7\/dist-packages\/tensorflow\/python\/framework\/constant_op.py\", line 165, in constant tensor_util.make_tensor_proto(value, dtype=dtype, shape=shape, verify_shape=verify_shape)) File \"\/usr\/local\/lib\/python2.7\/dist-packages\/tensorflow\/python\/framework\/tensor_util.py\", line 367, in make_tensor_proto _AssertCompatible(values, dtype) File \"\/usr\/local\/lib\/python2.7\/dist-packages\/tensorflow\/python\/framework\/tensor_util.py\", line 302, in _AssertCompatible (dtype.name, repr(mismatch), type(mismatch).__name__)) TypeError: Expected int32, got list containing Tensors of type '_Message' instead. ``` This is strange because all of this code is from their official guide. I am new to TF and any help would be appreciated.", "response":"I got the same problem when using the 1.0 released and I could make it work without having to roll back on a previous version. The problem is caused by change in the api. That discussion helped me to find the solution: Google group > Recent API Changes in TensorFlow You just have to update all the line with tf.concat for example ``` net = tf.concat(3, [branch_0, branch_1, branch_2, branch_3]) ``` should be changed to ``` net = tf.concat([branch_0, branch_1, branch_2, branch_3], 3) ``` Note: I was able to use the models without problem. But I still got error afterward when wanting to load the pretrained weight. Seems that the slim module got several changed since they made the checkpoint file. The graph created by the code and the one present in the checkpoint file were different. 
Note2: I was able to use the pretrain weights for inception_resnet_v2 by adding to all conv2d layer biases_initializer=None", "best_answers_score":0.8, "library_name":"tensorflow", "question_url":"https:\/\/stackoverflow.com\/questions\/41813665\/tensorflow-slim-typeerror-expected-int32-got-list-containing-tensors-of-type", "best_answers_votes":70, "question_length":3728, "response_length":914 }, { "question":"Why does keras model predict slower after compile? In theory, the prediction should be constant as the weights have a fixed size. How do I get my speed back after compile (without the need to remove optimizer)? See associated experiment: https:\/\/nbviewer.jupyter.org\/github\/offchan42\/TensorFlowExperiments\/blob\/master\/test-prediction-speed-after-compile.ipynb?flush_cache=true", "response":"UPDATE - 1\/15\/2020: the current best practice for small batch sizes should be to feed inputs to the model directly - i.e. preds = model(x), and if layers behave differently at train \/ inference, model(x, training=False). Per latest commit, this is now documented. I haven't benchmarked these, but per the Git discussion, it's also worth trying predict_on_batch() - especially with improvements in TF 2.1. ULTIMATE CULPRIT: self._experimental_run_tf_function = True. It's experimental. But it's not actually bad. To any TensorFlow devs reading: clean up your code. It's a mess. And it violates important coding practices, such as one function does one thing; _process_inputs does a lot more than \"process inputs\", same for _standardize_user_data. \"I'm not paid enough\" - but you do pay, in extra time spent understanding your own stuff, and in users filling your Issues page with bugs easier resolved with a clearer code. SUMMARY: it's only a little slower with compile(). compile() sets an internal flag which assigns a different prediction function to predict. This function constructs a new graph upon each call, slowing it down relative to uncompiled. 
However, the difference is only pronounced when train time is much shorter than data processing time. If we increase the model size to at least mid-sized, the two become equal. See code at the bottom. This slight increase in data processing time is more than compensated by amplified graph capability. Since it's more efficient to keep only one model graph around, the one pre-compile is discarded. Nonetheless: if your model is small relative to data, you are better off without compile() for model inference. See my other answer for a workaround. WHAT SHOULD I DO? Compare model performance compiled vs uncompiled as I have in code at the bottom. Compiled is faster: run predict on a compiled model. Compiled is slower: run predict on an uncompiled model. Yes, both are possible, and it will depend on (1) data size; (2) model size; (3) hardware. Code at the bottom actually shows compiled model being faster, but 10 iterations is a small sample. See \"workarounds\" in my other answer for the \"how-to\". DETAILS: This took a while to debug, but was fun. Below I describe the key culprits I discovered, cite some relevant documentation, and show profiler results that led to the ultimate bottleneck. (FLAG == self.experimental_run_tf_function, for brevity) Model by default instantiates with FLAG=False. compile() sets it to True. predict() involves acquiring the prediction function, func = self._select_training_loop(x) Without any special kwargs passed to predict and compile, all other flags are such that: (A) FLAG==True --> func = training_v2.Loop() (B) FLAG==False --> func = training_arrays.ArrayLikeTrainingLoop() From source code docstring, (A) is heavily graph-reliant, uses more distribution strategy, and ops are prone to creating & destroying graph elements, which \"may\" (do) impact performance. True culprit: _process_inputs(), accounting for 81% of runtime. Its major component? _create_graph_function(), 72% of runtime. This method does not even exist for (B). 
Using a mid-sized model, however, _process_inputs comprises less than 1% of runtime. Code at bottom, and profiling results follow. DATA PROCESSORS: (A): , used in _process_inputs() . Relevant source code (B): numpy.ndarray, returned by convert_eager_tensors_to_numpy. Relevant source code, and here MODEL EXECUTION FUNCTION (e.g. predict) (A): distribution function, and here (B): distribution function (different), and here PROFILER: results for code in my other answer, \"tiny model\", and in this answer, \"medium model\": Tiny model: 1000 iterations, compile() Tiny model: 1000 iterations, no compile() Medium model: 10 iterations DOCUMENTATION (indirectly) on effects of compile(): source Unlike other TensorFlow operations, we don't convert python numerical inputs to tensors. Moreover, a new graph is generated for each distinct python numerical value, for example calling g(2) and g(3) will generate two new graphs function instantiates a separate graph for every unique set of input shapes and datatypes. For example, the following code snippet will result in three distinct graphs being traced, as each input has a different shape A single tf.function object might need to map to multiple computation graphs under the hood. 
This should be visible only as performance (tracing graphs has a nonzero computational and memory cost) but should not affect the correctness of the program COUNTEREXAMPLE: ```py from tensorflow.keras.layers import Input, Dense, LSTM, Bidirectional, Conv1D from tensorflow.keras.layers import Flatten, Dropout from tensorflow.keras.models import Model import numpy as np from time import time def timeit(func, arg, iterations): t0 = time() for _ in range(iterations): func(arg) print(\"%.4f sec\" % (time() - t0)) batch_size = 32 batch_shape = (batch_size, 400, 16) ipt = Input(batch_shape=batch_shape) x = Bidirectional(LSTM(512, activation='relu', return_sequences=True))(ipt) x = LSTM(512, activation='relu', return_sequences=True)(ipt) x = Conv1D(128, 400, 1, padding='same')(x) x = Flatten()(x) x = Dense(256, activation='relu')(x) x = Dropout(0.5)(x) x = Dense(128, activation='relu')(x) x = Dense(64, activation='relu')(x) out = Dense(1, activation='sigmoid')(x) model = Model(ipt, out) X = np.random.randn(*batch_shape) timeit(model.predict, X, 10) model.compile('adam', loss='binary_crossentropy') timeit(model.predict, X, 10) ``` Outputs: ```py 34.8542 sec 34.7435 sec ```", "best_answers_score":0.8, "library_name":"tensorflow", "question_url":"https:\/\/stackoverflow.com\/questions\/58378374\/why-does-keras-model-predict-slower-after-compile", "best_answers_votes":47, "question_length":376, "response_length":5542 }, { "question":"How to import keras.engine.topology in Tensorflow? I want to import keras.engine.topology in Tensorflow. I used to add the word tensorflow at the beginning of every Keras import if I want to use the Tensorflow version of Keras. 
For example: instead of writing: ``` from keras.layers import Dense, Dropout, Input ``` I just write the following code and it works fine: ``` from tensorflow.keras.layers import Dense, Dropout, Input ``` But that's not the case for this specific import: ``` from tensorflow.keras.engine.topology import Layer, InputSpec ``` And I'm getting the following error message: ``` No module named 'tensorflow.keras.engine' ```", "response":"You can import Layer and InputSpec from TensorFlow as follows: ``` from tensorflow.python.keras.layers import Layer, InputSpec ``` UPDATE: 30\/10\/2019 ``` from tensorflow.keras.layers import Layer, InputSpec ```", "best_answers_score":0.8, "library_name":"tensorflow", "question_url":"https:\/\/stackoverflow.com\/questions\/51337558\/how-to-import-keras-engine-topology-in-tensorflow", "best_answers_votes":51, "question_length":648, "response_length":210 }, { "question":"Tensorflow VarLenFeature vs FixedLenFeature I was trying to save images of different sizes into tf-records. I found that even though the images have different sizes, I can still load them with FixedLenFeature. By checking the docs on FixedLenFeature and VarLenFeature, I found the difference seems to be that VarLenFeature returns a sparse tensor. Could anyone illustrate some situations in which one should use FixedLenFeature or VarLenFeature?", "response":"You can probably load images because you saved them using feature type tf.train.BytesList() and the whole image data is one big byte value inside a list. If I'm right, you're using tf.decode_raw to get the data out of the image you load from the TFRecord. Regarding example use cases: I use VarLenFeature for saving datasets for an object detection task: There's a variable number of bounding boxes per image (equal to the number of objects in the image), therefore I need another feature objects_number to track the number of objects (and bboxes).
Each bounding box itself is a list of 4 float coordinates I'm using following code to load it: ``` features = tf.parse_single_example( serialized_example, features={ # We know the length of both fields. If not the # tf.VarLenFeature could be used 'height': tf.FixedLenFeature([], tf.int64), 'width': tf.FixedLenFeature([], tf.int64), 'depth': tf.FixedLenFeature([], tf.int64), # Label part 'objects_number': tf.FixedLenFeature([], tf.int64), 'bboxes': tf.VarLenFeature(tf.float32), 'labels': tf.VarLenFeature(tf.int64), # Dense data 'image_raw': tf.FixedLenFeature([],tf.string) }) # Get metadata objects_number = tf.cast(features['objects_number'], tf.int32) height = tf.cast(features['height'], tf.int32) width = tf.cast(features['width'], tf.int32) depth = tf.cast(features['depth'], tf.int32) # Actual data image_shape = tf.parallel_stack([height, width, depth]) bboxes_shape = tf.parallel_stack([objects_number, 4]) # BBOX data is actually dense convert it to dense tensor bboxes = tf.sparse_tensor_to_dense(features['bboxes'], default_value=0) # Since information about shape is lost reshape it bboxes = tf.reshape(bboxes, bboxes_shape) image = tf.decode_raw(features['image_raw'], tf.uint8) image = tf.reshape(image, image_shape) ``` Notice that \"image_raw\" is fixed length Feature (has one element) and holds values of type \"bytes\", however a value of \"bytes\" type can itself have variable size (its a string of bytes, and can have many symbols within it). So \"image_raw\" is a list with ONE element of type \"bytes\", which can be super big. To further elaborate on how it works: Features are lists of values, those values have specific \"type\". Datatypes for features are subset of data types for tensors, you have: int64 (64 bit space in memory) bytes (occupies as many bytes in memory as you want) float (occupies 32-64 bits in memory idk how much) You can check here tensors data types. 
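The pack-everything-into-bytes route can be illustrated without TensorFlow at all. A minimal sketch using Python's standard struct module (the helper names are made up for illustration, and doubles are used so the round trip is exact; a real pipeline would pack float32 values, wrap the resulting blob in tf.train.BytesList, and decode it with tf.decode_raw):

```python
import struct

def floats_to_bytes(values):
    # Prefix with the element count, then pack each value as a little-endian
    # double; the variable-length list becomes one opaque bytes value.
    return struct.pack("<I%dd" % len(values), len(values), *values)

def bytes_to_floats(blob):
    # Read the count back, then unpack exactly that many doubles.
    (n,) = struct.unpack_from("<I", blob, 0)
    return list(struct.unpack_from("<%dd" % n, blob, 4))

bbox = [0.1, 0.2, 0.5, 0.9]  # one bounding box: 4 float coordinates
blob = floats_to_bytes(bbox)
assert bytes_to_floats(blob) == bbox
```

The same idea extends to any variable-length payload: once it is a single bytes value, it fits in a fixed-length (one-element) bytes feature.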
So you can store variable length data without VarLenFeatures at all (actually well you do it), but first you would need to convert it into bytes\/string feature, and then decode it. And this is most common method.", "best_answers_score":0.8, "library_name":"tensorflow", "question_url":"https:\/\/stackoverflow.com\/questions\/41921746\/tensorflow-varlenfeature-vs-fixedlenfeature", "best_answers_votes":60, "question_length":437, "response_length":2620 }, { "question":"Could not load dynamic library 'libcublas.so.10'; dlerror: libcublas.so.10: cannot open shared object file: No such file or directory; When I try to run a python script , which uses tensorflow, it shows following error ... ``` 2020-10-04 16:01:44.994797: I tensorflow\/stream_executor\/platform\/default\/dso_loader.cc:48] Successfully opened dynamic library libcudart.so.10.1 2020-10-04 16:01:46.780656: I tensorflow\/stream_executor\/platform\/default\/dso_loader.cc:48] Successfully opened dynamic library libcuda.so.1 2020-10-04 16:01:46.795642: I tensorflow\/core\/common_runtime\/gpu\/gpu_device.cc:1716] Found device 0 with properties: pciBusID: 0000:03:00.0 name: TITAN X (Pascal) computeCapability: 6.1 coreClock: 1.531GHz coreCount: 28 deviceMemorySize: 11.91GiB deviceMemoryBandwidth: 447.48GiB\/s 2020-10-04 16:01:46.795699: I tensorflow\/stream_executor\/platform\/default\/dso_loader.cc:48] Successfully opened dynamic library libcudart.so.10.1 2020-10-04 16:01:46.795808: W tensorflow\/stream_executor\/platform\/default\/dso_loader.cc:59] Could not load dynamic library 'libcublas.so.10'; dlerror: libcublas.so.10: cannot open shared object file: No such file or directory; LD_LIBRARY_PATH: \/usr\/local\/cuda\/extras\/CUPTI\/lib64\/:\/usr\/local\/cuda-10.0\/lib64 2020-10-04 16:01:46.797391: I tensorflow\/stream_executor\/platform\/default\/dso_loader.cc:48] Successfully opened dynamic library libcufft.so.10 2020-10-04 16:01:46.797707: I tensorflow\/stream_executor\/platform\/default\/dso_loader.cc:48] 
Successfully opened dynamic library libcurand.so.10 2020-10-04 16:01:46.799529: I tensorflow\/stream_executor\/platform\/default\/dso_loader.cc:48] Successfully opened dynamic library libcusolver.so.10 2020-10-04 16:01:46.800524: I tensorflow\/stream_executor\/platform\/default\/dso_loader.cc:48] Successfully opened dynamic library libcusparse.so.10 2020-10-04 16:01:46.804150: I tensorflow\/stream_executor\/platform\/default\/dso_loader.cc:48] Successfully opened dynamic library libcudnn.so.7 2020-10-04 16:01:46.804169: W tensorflow\/core\/common_runtime\/gpu\/gpu_device.cc:1753] Cannot dlopen some GPU libraries. Please make sure the missing libraries mentioned above are installed properly if you would like to use GPU. Follow the guide at https:\/\/www.tensorflow.org\/install\/gpu for how to download and setup the required libraries for your platform. Skipping registering GPU devices... ``` Output of nvidia-smi ``` +-----------------------------------------------------------------------------+ | NVIDIA-SMI 455.23.05 Driver Version: 455.23.05 CUDA Version: 11.1 | |-------------------------------+----------------------+----------------------+ | GPU Name Persistence-M| Bus-Id Disp.A | Volatile Uncorr. ECC | | Fan Temp Perf Pwr:Usage\/Cap| Memory-Usage | GPU-Util Compute M. | | | | MIG M. 
| |===============================+======================+======================| | 0 TITAN X (Pascal) On | 00000000:03:00.0 Off | N\/A | | 23% 28C P8 9W \/ 250W | 18MiB \/ 12194MiB | 0% Default | | | | N\/A | +-------------------------------+----------------------+----------------------+ +-----------------------------------------------------------------------------+ | Processes: | | GPU GI CI PID Type Process name GPU Memory | | ID ID Usage | |=============================================================================| | 0 N\/A N\/A 1825 G \/usr\/lib\/xorg\/Xorg 9MiB | | 0 N\/A N\/A 1957 G \/usr\/bin\/gnome-shell 6MiB | +-----------------------------------------------------------------------------+ ``` Tensorflow version 2.3.1, Ubuntu - 18.04 I tried to completely remove cuda toolkit and install from scratch but the error remains. Anybody could help me to identify the source of problem??", "response":"On Ubuntu 20.04, you can simply install NVIDIAs cuda toolkit cuda: ``` sudo apt-get update sudo apt install nvidia-cuda-toolkit ``` There are also install advices for Windows. The packge is around 1GB and it took a while to install... 
Some minutes later you need to export PATH variables so that it can be found: Find Shared Object ``` sudo find \/ -name 'libcudart.so*' \/usr\/lib\/x86_64-linux-gnu\/libcudart.so.10.1 \/usr\/lib\/x86_64-linux-gnu\/libcudart.so ``` Add the folder to path, so that python finds it ``` export PATH=\/usr\/lib\/x86_64-linux-gnu${PATH:+:${PATH}} export LD_LIBRARY_PATH=\/usr\/lib\/x86_64-linux-gnu:${LD_LIBRARY_PATH:+:${LD_LIBRARY_PATH}} ``` Permissions ``` sudo chmod a+r \/usr\/lib\/x86_64-linux-gnu\/libcuda* ``` Helped me", "best_answers_score":0.8, "library_name":"tensorflow", "question_url":"https:\/\/stackoverflow.com\/questions\/64193633\/could-not-load-dynamic-library-libcublas-so-10-dlerror-libcublas-so-10-cann", "best_answers_votes":29, "question_length":3657, "response_length":736 }, { "question":"How can I test a .tflite model to prove that it behaves as the original model using the same Test Data? I have generated a .tflite model based on a trained model, I would like to test that the tfilte model gives the same results as the original model. Giving both the same test data and obtaining the same result.", "response":"You may use TensorFlow Lite Python interpreter to test your tflite model. It allows you to feed input data in python shell and read the output directly like you are just using a normal tensorflow model. I have answered this question here. And you can read this TensorFlow lite official guide for detailed information. You can also use Netron to visualize your model. 
It allows you to load your .tflite file directly and inspect your model architecture and model weights.", "best_answers_score":0.8, "library_name":"tensorflow", "question_url":"https:\/\/stackoverflow.com\/questions\/50764572\/how-can-i-test-a-tflite-model-to-prove-that-it-behaves-as-the-original-model-us", "best_answers_votes":34, "question_length":313, "response_length":470 }, { "question":"shuffle in the model.fit of keras In the model.fit of keras, there is a shuffle parameter, ``` shuffle: Boolean (whether to shuffle the training data before each epoch) or str (for 'batch'). 'batch' is a special option for dealing with the limitations of HDF5 data; it shuffles in batch-sized chunks. Has no effect when steps_per_epoch is not None. ``` Assume the training set is a list with 50000 elements, so the whole list will be randomly permuted before each epoch? Of if the batch size is 250, only the elements belonging to each batch get permuted? What should be the correct understanding?", "response":"It will shuffle your entire dataset (x, y and sample_weight together) first and then make batches according to the batch_size argument you passed to fit. Edit As @yuk pointed out in the comment, the code has been changed significantly since 2018. The documentation for the shuffle parameter now seems more clear on its own. You can choose to shuffle the entire training data or just shuffle the batch: ``` shuffle: Boolean (whether to shuffle the training data before each epoch) or str (for 'batch'). This argument is ignored when `x` is a generator. 'batch' is a special option for dealing with the limitations of HDF5 data; it shuffles in batch-sized chunks. Has no effect when `steps_per_epoch` is not `None`. 
```", "best_answers_score":0.8, "library_name":"tensorflow", "question_url":"https:\/\/stackoverflow.com\/questions\/50184144\/shuffle-in-the-model-fit-of-keras", "best_answers_votes":32, "question_length":597, "response_length":717 }, { "question":"Tensorflow object detection config files documentation I am using the object detection api in tensorflow. I noticed that practically all parameters pass through the config file. I could not find any documentation or tutorial on the options for these config files though. I know that in the official git they provide a list of config files for their pretrained models which could be very helpful but it does not cover every case and of course does not provide any explanation if needed. For example in train_config section there are some data augmentation options which are quite self explanatory but the potential existence of other options is unclear: ``` data_augmentation_options { random_horizontal_flip { } } data_augmentation_options { ssd_random_crop { } } ``` Is there a source I could refer to? For example in this tutorial two extra options (batch_queue_capacity and prefetch_queue_capacity) I did not know about appear. Where could I find a decent list of options I have? I know that it's model specific but some of them are universal and really helpful.", "response":"As mentioned in the configuration documentation, configuration files are just Protocol Buffers objects described in the .proto files under research\/object_detection\/protos. The top level object is a TrainEvalPipelineConfig defined in pipeline.proto, and different files describe each of the elements. For example, data_augmentation_options are PreprocessingStep objects, defined in preprocessor.proto (which in turn can include a range of other possible objects for different preprocessing tasks). 
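As a concrete illustration, each data_augmentation_options entry is a PreprocessingStep message in protobuf text format, so several steps can be stacked inside train_config. A hedged sketch (empty braces use each step's defaults; check the field and message names against the preprocessor.proto shipped with your release):

```
train_config {
  data_augmentation_options {
    random_horizontal_flip {
    }
  }
  data_augmentation_options {
    ssd_random_crop {
    }
  }
}
```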
The meaning of each object and field may or may not be obvious or well-documented, but you can always refer to the source code to see exactly how each value is being used (for example, check preprocessor.py to understand how data augmentation is done).", "best_answers_score":0.8, "library_name":"tensorflow", "question_url":"https:\/\/stackoverflow.com\/questions\/49148962\/tensorflow-object-detection-config-files-documentation", "best_answers_votes":28, "question_length":1065, "response_length":750 }, { "question":"tensorflow deep neural network for regression always predict same results in one batch I use tensorflow to implement a simple multi-layer perceptron for regression. The code is modified from the standard MNIST classifier; I only changed the output cost to MSE (using tf.reduce_mean(tf.square(pred-y))) and some input and output size settings. However, if I train the network for regression, after several epochs the outputs within a batch are all identical. For example: ``` target: 48.129, estimated: 42.634 target: 46.590, estimated: 42.634 target: 34.209, estimated: 42.634 target: 69.677, estimated: 42.634 ...... ``` I have tried different batch sizes, different initializations, and input normalization using sklearn.preprocessing.scale (my input ranges are quite different). However, none of them worked. I have also tried one of the sklearn examples from TensorFlow (Deep Neural Network Regression with Boston Data). But I got another error in line 40: 'module' object has no attribute 'infer_real_valued_columns_from_input' Does anyone have a clue where the problem is?
Thank you My code is listed below, may be a little bit long, but very straghtforward: ``` from __future__ import absolute_import from __future__ import division from __future__ import print_function import tensorflow as tf from tensorflow.contrib import learn import matplotlib.pyplot as plt from sklearn.pipeline import Pipeline from sklearn import datasets, linear_model from sklearn import cross_validation import numpy as np boston = learn.datasets.load_dataset('boston') x, y = boston.data, boston.target X_train, X_test, Y_train, Y_test = cross_validation.train_test_split( x, y, test_size=0.2, random_state=42) total_len = X_train.shape[0] # Parameters learning_rate = 0.001 training_epochs = 500 batch_size = 10 display_step = 1 dropout_rate = 0.9 # Network Parameters n_hidden_1 = 32 # 1st layer number of features n_hidden_2 = 200 # 2nd layer number of features n_hidden_3 = 200 n_hidden_4 = 256 n_input = X_train.shape[1] n_classes = 1 # tf Graph input x = tf.placeholder(\"float\", [None, 13]) y = tf.placeholder(\"float\", [None]) # Create model def multilayer_perceptron(x, weights, biases): # Hidden layer with RELU activation layer_1 = tf.add(tf.matmul(x, weights['h1']), biases['b1']) layer_1 = tf.nn.relu(layer_1) # Hidden layer with RELU activation layer_2 = tf.add(tf.matmul(layer_1, weights['h2']), biases['b2']) layer_2 = tf.nn.relu(layer_2) # Hidden layer with RELU activation layer_3 = tf.add(tf.matmul(layer_2, weights['h3']), biases['b3']) layer_3 = tf.nn.relu(layer_3) # Hidden layer with RELU activation layer_4 = tf.add(tf.matmul(layer_3, weights['h4']), biases['b4']) layer_4 = tf.nn.relu(layer_4) # Output layer with linear activation out_layer = tf.matmul(layer_4, weights['out']) + biases['out'] return out_layer # Store layers weight & bias weights = { 'h1': tf.Variable(tf.random_normal([n_input, n_hidden_1], 0, 0.1)), 'h2': tf.Variable(tf.random_normal([n_hidden_1, n_hidden_2], 0, 0.1)), 'h3': tf.Variable(tf.random_normal([n_hidden_2, n_hidden_3], 0, 0.1)), 
'h4': tf.Variable(tf.random_normal([n_hidden_3, n_hidden_4], 0, 0.1)), 'out': tf.Variable(tf.random_normal([n_hidden_4, n_classes], 0, 0.1)) } biases = { 'b1': tf.Variable(tf.random_normal([n_hidden_1], 0, 0.1)), 'b2': tf.Variable(tf.random_normal([n_hidden_2], 0, 0.1)), 'b3': tf.Variable(tf.random_normal([n_hidden_3], 0, 0.1)), 'b4': tf.Variable(tf.random_normal([n_hidden_4], 0, 0.1)), 'out': tf.Variable(tf.random_normal([n_classes], 0, 0.1)) } # Construct model pred = multilayer_perceptron(x, weights, biases) # Define loss and optimizer cost = tf.reduce_mean(tf.square(pred-y)) optimizer = tf.train.AdamOptimizer(learning_rate=learning_rate).minimize(cost) # Launch the graph with tf.Session() as sess: sess.run(tf.initialize_all_variables()) # Training cycle for epoch in range(training_epochs): avg_cost = 0. total_batch = int(total_len\/batch_size) # Loop over all batches for i in range(total_batch-1): batch_x = X_train[i*batch_size:(i+1)*batch_size] batch_y = Y_train[i*batch_size:(i+1)*batch_size] # Run optimization op (backprop) and cost op (to get loss value) _, c, p = sess.run([optimizer, cost, pred], feed_dict={x: batch_x, y: batch_y}) # Compute average loss avg_cost += c \/ total_batch # sample prediction label_value = batch_y estimate = p err = label_value-estimate print (\"num batch:\", total_batch) # Display logs per epoch step if epoch % display_step == 0: print (\"Epoch:\", '%04d' % (epoch+1), \"cost=\", \\ \"{:.9f}\".format(avg_cost)) print (\"[*]----------------------------\") for i in xrange(3): print (\"label value:\", label_value[i], \\ \"estimated value:\", estimate[i]) print (\"[*]============================\") print (\"Optimization Finished!\") # Test model correct_prediction = tf.equal(tf.argmax(pred, 1), tf.argmax(y, 1)) # Calculate accuracy accuracy = tf.reduce_mean(tf.cast(correct_prediction, \"float\")) print (\"Accuracy:\", accuracy.eval({x: X_test, y: Y_test})) ```", "response":"Short answer: Transpose your pred vector using 
tf.transpose(pred). Longer answer: The problem is that pred (the predictions) and y (the labels) are not of the same shape: one is a row vector and the other a column vector. Apparently when you apply an element-wise operation on them, you'll get a matrix, which is not what you want. The solution is to transpose the prediction vector using tf.transpose() to get a proper vector and thus a proper loss function. Actually, if you set the batch size to 1 in your example you'll see that it works even without the fix, because transposing a 1x1 vector is a no-op. I applied this fix to your example code and observed the following behaviour. Before the fix: ``` Epoch: 0245 cost= 84.743440580 [*]---------------------------- label value: 23 estimated value: [ 27.47437096] label value: 50 estimated value: [ 24.71126747] label value: 22 estimated value: [ 23.87785912] ``` And after the fix at the same point in time: ``` Epoch: 0245 cost= 4.181439120 [*]---------------------------- label value: 23 estimated value: [ 21.64333534] label value: 50 estimated value: [ 48.76105118] label value: 22 estimated value: [ 24.27996063] ``` You'll see that the cost is much lower and that it actually learned the value 50 properly. You'll have to do some fine-tuning on the learning rate and such to improve your results of course.", "best_answers_score":0.8, "library_name":"tensorflow", "question_url":"https:\/\/stackoverflow.com\/questions\/38399609\/tensorflow-deep-neural-network-for-regression-always-predict-same-results-in-one", "best_answers_votes":27, "question_length":4949, "response_length":1367 }, { "question":"What does opt.apply_gradients() do in TensorFlow? The documentation is not quite clear about this. I suppose the gradients one can obtain by opt.compute_gradients(E, [v]) contain the \u2202E\/\u2202x = g(x) for each element x of the tensor that v stores. Does opt.apply_gradients(grads_and_vars) essentially execute x \u2190 -\u03b7\u00b7g(x), where \u03b7 is the learning rate? 
That would imply that if I want to add a positive additive change p to the variable, I would need to change g(x) ← g(x) - (1\/η)p, e.g. like this: ```py opt = tf.train.GradientDescentOptimizer(learning_rate=l) grads_and_vars = opt.compute_gradients(loss, var_list) for l, gv in enumerate(grads_and_vars): grads_and_vars[l] = (gv[0] - (1\/l) * p, gv[1]) train_op = opt.apply_gradients(grads_and_vars) ``` Is there a better way to do this?", "response":"The update rule that the apply_gradients method actually applies depends on the specific optimizer. Take a look at the implementation of apply_gradients in the tf.train.Optimizer class here. It relies on the derived classes implementing the update rule in the methods _apply_dense and _apply_sparse. The update rule you are referring to is implemented by the GradientDescentOptimizer. Regarding your desired positive additive update: If what you are calling opt is an instantiation of GradientDescentOptimizer, then you could indeed achieve what you want to do by ``` grads_and_vars = opt.compute_gradients(E, [v]) eta = opt._learning_rate my_grads_and_vars = [(g-(1\/eta)*p, v) for g, v in grads_and_vars] opt.apply_gradients(my_grads_and_vars) ``` The more elegant way to do this is probably to write a new optimizer (inheriting from tf.train.Optimizer) that implements your desired update rule directly.", "best_answers_score":0.8, "library_name":"tensorflow", "question_url":"https:\/\/stackoverflow.com\/questions\/37921781\/what-does-opt-apply-gradients-do-in-tensorflow", "best_answers_votes":22, "question_length":791, "response_length":905 }, { "question":"Unexpected keyword argument 'ragged' in Keras Trying to run a trained keras model with the following python code: ```py from keras.preprocessing.image import img_to_array from keras.models import load_model from imutils.video import VideoStream from threading import Thread import numpy as np import imutils import time import cv2 import os MODEL_PATH =
\"\/home\/pi\/Documents\/converted_keras\/keras_model.h5\" print(\"[info] loading model..\") model = load_model(MODEL_PATH) print(\"[info] starting vid stream..\") vs = VideoStream(usePiCamera=True).start() time.sleep(2.0) while True: frame = vs.Read() frame = imutils.resize(frame, width=400) image = cv2.resize(frame, (28, 28)) image = image.astype(\"float\") \/ 255.0 image = img_to_array(image) image = np.expand_dims(image, axis=0) (fuel, redBall, whiteBall, none) = model.predict(image)[0] label = \"none\" proba = none if fuel > none and fuel > redBall and fuel > whiteBall: label = \"Fuel\" proba = fuel elif redBall > none and redBall > fuel and redBall > whiteBall: label = \"Red Ball\" proba = redBall elif whiteBall > none and whiteBall > redBall and whiteBall > fuel: label = \"white ball\" proba = whiteBall else: label = \"none\" proba = none label = \"{}:{:.2f%}\".format(label, proba * 100) frame = cv2.putText(frame, label, (10, 25), cv2.FONT_HERSHEY_SIMPLEX, 0.7, (0, 255, 0), 2) cv2.imshow(\"Frame\", frame) key = cv2.waitKey(1) & 0xFF if key == ord(\"q\"): break print(\"[info] cleaning up..\") cv2.destroyAllWindows() vs.stop() ``` When I run it with python3, I get the following error: TypeError: __init__() got an unexpected keyword argument 'ragged' What's causing the error, and how do I get around it? 
Versions: Keras v2.3.1 tensorflow v1.13.1 Edit to add: ``` Traceback (most recent call last): File \"\/home\/pi\/Documents\/converted_keras\/keras-script.py\", line 18, in model = load_model(MODEL_PATH) File \"\/usr\/local\/lib\/python3.7\/dist-packages\/keras\/engine\/saving.py\", line 492, in load_wrapper return load_function(*args, **kwargs) File \"\/usr\/local\/lib\/python3.7\/dist-packages\/keras\/engine\/saving.py\", line 584, in load_model model = _deserialize_model(h5dict, custom_objects, compile) File \"\/usr\/local\/lib\/python3.7\/dist-packages\/keras\/engine\/saving.py\", line 274, in _deserialize_model model = model_from_config(model_config, custom_objects=custom_objects) File \"\/usr\/local\/lib\/python3.7\/dist-packages\/keras\/engine\/saving.py\", line 627, in model_from_config return deserialize(config, custom_objects=custom_objects) File \"\/usr\/local\/lib\/python3.7\/dist-packages\/keras\/layers\/__init__.py\", line 168, in deserialize printable_module_name='layer') File \"\/usr\/local\/lib\/python3.7\/dist-packages\/keras\/utils\/generic_utils.py\", line 147, in deserialize_keras_object list(custom_objects.items()))) File \"\/usr\/local\/lib\/python3.7\/dist-packages\/keras\/engine\/sequential.py\", line 301, in from_config custom_objects=custom_objects) File \"\/usr\/local\/lib\/python3.7\/dist-packages\/keras\/layers\/__init__.py\", line 168, in deserialize printable_module_name='layer') File \"\/usr\/local\/lib\/python3.7\/dist-packages\/keras\/utils\/generic_utils.py\", line 147, in deserialize_keras_object list(custom_objects.items()))) File \"\/usr\/local\/lib\/python3.7\/dist-packages\/keras\/engine\/sequential.py\", line 301, in from_config custom_objects=custom_objects) File \"\/usr\/local\/lib\/python3.7\/dist-packages\/keras\/layers\/__init__.py\", line 168, in deserialize printable_module_name='layer') File \"\/usr\/local\/lib\/python3.7\/dist-packages\/keras\/utils\/generic_utils.py\", line 147, in deserialize_keras_object 
list(custom_objects.items()))) File \"\/usr\/local\/lib\/python3.7\/dist-packages\/keras\/engine\/network.py\", line 1056, in from_config process_layer(layer_data) File \"\/usr\/local\/lib\/python3.7\/dist-packages\/keras\/engine\/network.py\", line 1042, in process_layer custom_objects=custom_objects) File \"\/usr\/local\/lib\/python3.7\/dist-packages\/keras\/layers\/__init__.py\", line 168, in deserialize printable_module_name='layer') File \"\/usr\/local\/lib\/python3.7\/dist-packages\/keras\/utils\/generic_utils.py\", line 149, in deserialize_keras_object return cls.from_config(config['config']) File \"\/usr\/local\/lib\/python3.7\/dist-packages\/keras\/engine\/base_layer.py\", line 1179, in from_config return cls(**config) File \"\/usr\/local\/lib\/python3.7\/dist-packages\/keras\/legacy\/interfaces.py\", line 91, in wrapper return func(*args, **kwargs) TypeError: __init__() got an unexpected keyword argument 'ragged' ``` h5 file link (google drive)", "response":"So I tried link above which you have mentioned teachable machine As it turns out model you have exported is from tensorflow.keras and not directly from keras API. These two are different. So while loading it might be using tf.ragged tensors that might not be compatible with keras API. Soulution to your issue: Don't import keras directly as your model is saved with Tensorflow's keras high level api. Change all your imports to tensorflow.keras Change: ``` from keras.preprocessing.image import img_to_array from keras.models import load_model ``` to this: ``` from tensorflow.keras.preprocessing.image import img_to_array from tensorflow.keras.models import load_model ``` It will solve your issue. EDIT : All of your imports, either should be from Keras or tensorflow.keras. Although being same API few things are different which creates these kind of issues. Also for tensorflow backend tf.keras is preferred, because Keras 2.3.0 is last major release which will support backends other than tensorflow. 
This release brings the API in sync with the tf.keras API as of TensorFlow 2.0. However note that it does not support most TensorFlow 2.0 features, in particular eager execution. If you need these features, use tf.keras. This is also the last major release of multi-backend Keras. Going forward, we recommend that users consider switching their Keras code to tf.keras in TensorFlow 2.0.", "best_answers_score":0.8, "library_name":"tensorflow", "question_url":"https:\/\/stackoverflow.com\/questions\/58878421\/unexpected-keyword-argument-ragged-in-keras", "best_answers_votes":68, "question_length":4424, "response_length":1393 }, { "question":"Tensorboard Error: No dashboards are active for current data set I am trying to use Tensorboard but every time I run any program with Tensorflow, I get an error when I go to localhost:6006 to view the Visualization Here is my code ``` a = tf.add(1, 2,) b = tf.multiply(a, 3) with tf.Session() as sess: writer = tf.summary.FileWriter(\"output\", sess.graph) print(sess.run(b)) writer.close() ``` When I go to the command prompt and enter ``` tensorboard --logdir=C:\\path\\to\\output\\folder ``` It returns with ``` TensorBoard 0.1.8 at http:\/\/MYCOMP:6006 (Press CTRL+C to quit) ``` When I go to localhost:6006 it states No dashboards are active for the current data set. Probable causes: - You haven\u2019t written any data to your event files. - TensorBoard can\u2019t find your event files. I have looked at this link (Tensorboard: No dashboards are active for the current data set) but it doesn't seem to fix this issue And I am running this on Windows 10 What do I do to fix this issue? Am I giving the right path for Tensorboard in the command prompt? Thank you in advance", "response":"Your issue may be related to the drive you are attempting to start tensorboard from and the drive your logdir is on. 
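Concretely, the two invocations below sketch the failure and the workaround; the path is the one from the question, and the run name mylogs is arbitrary:

```shell
# Started from another drive, the drive letter C is parsed as a run name
# and the rest as a relative path, so no event files are found.
tensorboard --logdir=C:\path\to\output\folder

# Giving an explicit run name makes the colon unambiguous.
tensorboard --logdir=mylogs:C:\path\to\output\folder
```
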
Tensorboard uses a colon to separate the optional run name and the path in the logdir flag, so your path is being interpreted as \\path\\to\\output\\folder with name C. This can be worked around by either starting tensorboard from the same drive as your log directory or by providing an explicit run name, e.g. logdir=mylogs:C:\\path\\to\\output\\folder See here for reference to the issue.", "best_answers_score":0.8, "library_name":"tensorflow", "question_url":"https:\/\/stackoverflow.com\/questions\/47113472\/tensorboard-error-no-dashboards-are-active-for-current-data-set", "best_answers_votes":29, "question_length":1061, "response_length":499 }, { "question":"Is there any way to get variable importance with Keras? I am looking for a proper or best way to get variable importance in a Neural Network created with Keras. The way I currently do it is I just take the weights (not the biases) of the variables in the first layer with the assumption that more important variables will have higher weights in the first layer. Is there another\/better way of doing it?", "response":"*Edited to include relevant code to implement permutation importance. I answered a similar question at Feature Importance Chart in neural network using Keras in Python. It does implement what Teque5 mentioned above, namely shuffling the variable among your sample or permutation importance using the ELI5 package. ```py from keras.wrappers.scikit_learn import KerasClassifier, KerasRegressor import eli5 from eli5.sklearn import PermutationImportance def base_model(): model = Sequential() ... return model X = ... y = ... 
my_model = KerasRegressor(build_fn=base_model, **sk_params) my_model.fit(X,y) perm = PermutationImportance(my_model, random_state=1).fit(X,y) eli5.show_weights(perm, feature_names = X.columns.tolist()) ```", "best_answers_score":0.8, "library_name":"tensorflow", "question_url":"https:\/\/stackoverflow.com\/questions\/44119207\/is-there-any-way-to-get-variable-importance-with-keras", "best_answers_votes":19, "question_length":402, "response_length":727 }, { "question":"Run Tensorflow unit tests Is there any way to run Tensorflow unit tests manually? I want to perform sanity checks while modifying TF source code. I see there are many _test.py files with classes that perform many test operations and I can't figure out how to run them. There should be an easy way?", "response":"The easiest way to run the TensorFlow unit tests is using Bazel, assuming you have downloaded the source from Git: ``` # All tests (for C++ changes). $ bazel test \/\/tensorflow\/... # All Python tests (for Python front-end changes). $ bazel test \/\/tensorflow\/python\/... # All tests (with GPU support). $ bazel test -c opt --config=cuda \/\/tensorflow\/... $ bazel test -c opt --config=cuda \/\/tensorflow\/python\/... ```", "best_answers_score":0.8, "library_name":"tensorflow", "question_url":"https:\/\/stackoverflow.com\/questions\/34204551\/run-tensorflow-unit-tests", "best_answers_votes":41, "question_length":297, "response_length":412 }, { "question":"ValueError: Tensor must be from the same graph as Tensor with Bidirectional RNN in Tensorflow I'm building a text tagger using a bidirectional dynamic RNN in tensorflow. After matching the input's dimensions, I tried to run a Session.
this is the BLSTM setup part: ``` fw_lstm_cell = BasicLSTMCell(LSTM_DIMS) bw_lstm_cell = BasicLSTMCell(LSTM_DIMS) (fw_outputs, bw_outputs), _ = bidirectional_dynamic_rnn(fw_lstm_cell, bw_lstm_cell, x_place, sequence_length=SEQLEN, dtype='float32') ``` and this is the running part: ``` with tf.Graph().as_default(): # Placeholder Settings x_place, y_place = set_placeholder(BATCH_SIZE, EM_DIMS, MAXLEN) # BLSTM Model Building hlogits = tf_kcpt.build_blstm(x_place) # Compute loss loss = tf_kcpt.get_loss(log_likelihood) # Training train_op = tf_kcpt.training(loss) # load Eval method eval_correct = tf_kcpt.evaluation(logits, y_place) # Session Setting & Init init = tf.global_variables_initializer() sess = tf.Session() sess.run(init) # tensor summary setting summary = tf.summary.merge_all() summary_writer = tf.summary.FileWriter(LOG_DIR, sess.graph) # Save saver = tf.train.Saver() # Run epoch for step in range(EPOCH): start_time = time.time() feed_dict = fill_feed_dict(KCPT_SET['train'], x_place, y_place) _, loss_value = sess.run([train_op, loss], feed_dict=feed_dict) ``` But it gives me the error: ValueError: Tensor(\"Shape:0\", shape=(1,), dtype=int32) must be from the same graph as Tensor(\"bidirectional_rnn\/fw\/fw\/stack_2:0\", shape=(1,), dtype=int32). Help me, please", "response":"TensorFlow stores all operations on an operational graph. This graph defines what functions output to where, and it links it all together so that it can follow the steps you have set up in the graph to produce your final output. If you try to input a Tensor or operation on one graph into a Tensor or operation on another graph it will fail. Everything must be on the same execution graph. Try removing with tf.Graph().as_default(): TensorFlow provides you a default graph which is referred to if you do not specify a graph. You are probably using the default graph in one spot and a different graph in your training block.
There does not seem to be a reason you are specifying a graph as default here, and most likely you are using separate graphs by accident. If you really want to specify a graph then you probably want to pass it as a variable, not set it like this.", "best_answers_score":0.8, "library_name":"tensorflow", "question_url":"https:\/\/stackoverflow.com\/questions\/42616625\/valueerror-tensor-must-be-from-the-same-graph-as-tensor-with-bidirectinal-rnn-i", "best_answers_votes":46, "question_length":1493, "response_length":869 }, { "question":"Cannot convert a partially converted tensor in TensorFlow There are many methods in TensorFlow that require specifying a shape, for example truncated_normal: ``` tf.truncated_normal(shape, mean=0.0, stddev=1.0, dtype=tf.float32, seed=None, name=None) ``` I have a placeholder for the input of shape [None, 784], where the first dimension is None because the batch size can vary. I could use a fixed batch size but it still would be different from the test\/validation set size. I cannot feed this placeholder to tf.truncated_normal because it requires a fully specified tensor shape. What is a simple way to have tf.truncated_normal accept different tensor shapes?", "response":"You just need to feed it in as a single example but in the batched shape. So that means adding an extra dimension to the shape, e.g. ``` batch_size = 32 # set this to the actual size of your batch tf.truncated_normal((batch_size, 784), mean=0.0, stddev=1.0, dtype=tf.float32, seed=None, name=None) ``` This way it will \"fit\" into the placeholder.
If you expect batch_size to change you can also use: ``` tf.truncated_normal(tf.shape(input_tensor), mean=0.0, stddev=1.0, dtype=tf.float32, seed=None, name=None) ``` Where input_tensor could be a placeholder or just whatever tensor is going to have this noise added to it.", "best_answers_score":0.8, "library_name":"tensorflow", "question_url":"https:\/\/stackoverflow.com\/questions\/35289773\/cannot-convert-a-partially-converted-tensor-in-tensorflow", "best_answers_votes":34, "question_length":666, "response_length":619 }, { "question":"Why is Tensorflow not recognizing my GPU after conda install? I am new to deep learning and I have been trying to install tensorflow-gpu version in my pc in vain for the last 2 days. I avoided installing CUDA and cuDNN drivers since several forums online don't recommend it due to numerous compatibility issues. Since I was already using the conda distribution of python before, I went for the conda install -c anaconda tensorflow-gpu as written in their official website here: https:\/\/anaconda.org\/anaconda\/tensorflow-gpu . However even after installing the gpu version in a fresh virtual environment (to avoid potential conflicts with pip installed libraries in the base env), tensorflow doesn't seem to even recognize my GPU for some mysterious reason. Some of the code snippets I ran(in anaconda prompt) to understand that it wasn't recognizing my GPU:- 1. ``` >>>from tensorflow.python.client import device_lib >>>print(device_lib.list_local_devices()) [name: \"\/device:CPU:0\" device_type: \"CPU\" memory_limit: 268435456 locality { } incarnation: 7692219132769779763 ] ``` As you can see it completely ignores the GPU. 2. 
``` >>>tf.debugging.set_log_device_placement(True) >>>a = tf.constant([[1.0, 2.0, 3.0], [4.0, 5.0, 6.0]]) 2020-12-13 10:11:30.902956: I tensorflow\/core\/platform\/cpu_feature_guard.cc:142] This TensorFlow binary is optimized with oneAPI Deep Neural Network Library (oneDNN)to use the following CPU instructions in performance-critical operations: AVX AVX2 To enable them in other operations, rebuild TensorFlow with the appropriate compiler flags. >>>b = tf.constant([[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]]) >>>c = tf.matmul(a, b) >>>print(c) tf.Tensor( [[22. 28.] [49. 64.]], shape=(2, 2), dtype=float32) ``` Here, it was supposed to indicate that it ran with a GPU by showing Executing op MatMul in device \/job:localhost\/replica:0\/task:0\/device:GPU:0 (as written here: https:\/\/www.tensorflow.org\/guide\/gpu) but nothing like that is present. Also I am not sure what the message after the 2nd line means. I have also searched for several solutions online including here but almost all of the issues are related to the first manual installation method which I haven't tried yet since everyone recommended this approach. I don't use cmd anymore since the environment variables somehow got messed up after uninstalling tensorflow-cpu from the base env and on re-installing, it worked perfectly with anaconda prompt but not cmd. This is a separate issue (and widespread also) but I mentioned it in case that has a role to play here. I installed the gpu version in a fresh virtual environment to ensure a clean installation and as far as I understand path variables need to be set up only for manual installation of CUDA and cuDNN libraries. 
The (CUDA-enabled) card which I use: ``` C:\\WINDOWS\\system32>wmic path win32_VideoController get name Name NVIDIA GeForce 940MX Intel(R) HD Graphics 620 ``` Tensorflow and python version I am using currently: ``` >>> import tensorflow as tf >>> tf.__version__ '2.3.0' Python 3.8.5 (default, Sep 3 2020, 21:29:08) [MSC v.1916 64 bit (AMD64)] :: Anaconda, Inc. on win32 Type \"help\", \"copyright\", \"credits\" or \"license\" for more information. ``` System information: Windows 10 Home, 64-bit operating system, x64-based processor. Any help would be really appreciated. Thanks in advance.", "response":"August 2021: Conda install may be working now; according to @ComputerScientist in the comments below, conda install tensorflow-gpu==2.4.1 will give cudatoolkit-10.1.243 and cudnn-7.6.5. The following was written in Jan 2021 and is out of date. Currently conda install tensorflow-gpu installs tensorflow v2.3.0 and does NOT install the conda cudnn or cudatoolkit packages. Installing them manually (e.g. with conda install cudatoolkit=10.1) does not seem to fix the problem either.
A solution is to install an earlier version of tensorflow, which does install cudnn and cudatoolkit, then upgrade with pip ``` conda install tensorflow-gpu=2.1 pip install tensorflow-gpu==2.3.1 ``` (2.4.0 uses cuda 11.0 and cudnn 8.0, however cudnn 8.0 is not in anaconda as of 16\/12\/2020) Edit: please also see @GZ0's answer, which links to a github discussion with a one-line solution", "best_answers_score":0.8, "library_name":"tensorflow", "question_url":"https:\/\/stackoverflow.com\/questions\/65273118\/why-is-tensorflow-not-recognizing-my-gpu-after-conda-install", "best_answers_votes":50, "question_length":3349, "response_length":867 }, { "question":"TensorFlow : failed call to cuInit: CUDA_ERROR_NO_DEVICE My test : ```py import tensorflow as tf hello = tf.constant('Hello, TensorFlow!') sess = tf.Session()` ``` Error : c:\\l\\work\\tensorflow-1.1.0\\tensorflow\\stream_executor\\cuda\\cuda_driver.cc:405] failed call to cuInit: CUDA_ERROR_NO_DEVICE -> but \"\/cpu:0\" works fine Config : nvidia-smi : CUDA Version 9.1 tensorflow-1.1.0 Windows 10 cudnn64_7.dll (installed in C:\\Program Files\\NVIDIA GPU Computing Toolkit\\CUDA\\v9.1\\bin) Only tensorflow-gpu is installed in my Conda environment Why can't Tensorflow detect my GPU ?", "response":"The issue was solved on GitHub. This error message will be shown if you set an invalid value for the CUDA_VISIBLE_DEVICES environment variable, e.g. when you only have a single GPU (which has ID 0) and set CUDA_VISIBLE_DEVICES=1 or CUDA_VISIBLE_DEVICES=2.", "best_answers_score":0.8, "library_name":"tensorflow", "question_url":"https:\/\/stackoverflow.com\/questions\/48658204\/tensorflow-failed-call-to-cuinit-cuda-error-no-device", "best_answers_votes":41, "question_length":571, "response_length":255 }, { "question":"How to properly use tf.metrics.accuracy? I have some trouble using the accuracy function from tf.metrics for a multiple classification problem with logits as input. 
My model output looks like: ``` logits = [[0.1, 0.5, 0.4], [0.8, 0.1, 0.1], [0.6, 0.3, 0.2]] ``` And my labels are one-hot encoded vectors: ``` labels = [[0, 1, 0], [1, 0, 0], [0, 0, 1]] ``` When I try to do something like tf.metrics.accuracy(labels, logits) it never gives the correct result. I am obviously doing something wrong but I can't figure out what it is.", "response":"TL;DR The accuracy function tf.metrics.accuracy calculates how often predictions match labels based on two local variables it creates: total and count, that are used to compute the frequency with which logits match labels. ``` acc, acc_op = tf.metrics.accuracy(labels=tf.argmax(labels, 1), predictions=tf.argmax(logits,1)) print(sess.run([acc, acc_op])) print(sess.run([acc])) # Output #[0.0, 0.66666669] #[0.66666669] ``` acc (accuracy): simply returns the metrics using total and count, doesn't update the metrics. acc_op (update op): updates the metrics. To understand why acc returns 0.0, go through the details below. Details using a simple example: ``` logits = tf.placeholder(tf.int64, [2,3]) labels = tf.Variable([[0, 1, 0], [1, 0, 1]]) acc, acc_op = tf.metrics.accuracy(labels=tf.argmax(labels, 1), predictions=tf.argmax(logits,1)) ``` Initialize the variables: Since metrics.accuracy creates two local variables total and count, we need to call local_variables_initializer() to initialize them. ``` sess = tf.Session() sess.run(tf.local_variables_initializer()) sess.run(tf.global_variables_initializer()) stream_vars = [i for i in tf.local_variables()] print(stream_vars) #[, # ] ``` Understanding update ops and accuracy calculation: ``` print('acc:',sess.run(acc, {logits:[[0,1,0],[1,0,1]]})) #acc: 0.0 print('[total, count]:',sess.run(stream_vars)) #[total, count]: [0.0, 0.0] ``` The above returns 0.0 for accuracy as total and count are zeros, in spite of giving matching inputs.
``` print('ops:', sess.run(acc_op, {logits:[[0,1,0],[1,0,1]]})) #ops: 1.0 print('[total, count]:',sess.run(stream_vars)) #[total, count]: [2.0, 2.0] ``` With the new inputs, the accuracy is calculated when the update op is called. Note: since all the logits and labels match, we get an accuracy of 1.0, and the local variables total and count actually give the total correctly predicted and the total comparisons made. Now we call accuracy with new inputs (not the update ops): ``` print('acc:', sess.run(acc,{logits:[[1,0,0],[0,1,0]]})) #acc: 1.0 ``` The accuracy call doesn't update the metrics with the new inputs; it just returns the value using the two local variables. Note: the logits and labels don't match in this case. Now calling the update ops again: ``` print('op:',sess.run(acc_op,{logits:[[0,1,0],[0,1,0]]})) #op: 0.75 print('[total, count]:',sess.run(stream_vars)) #[total, count]: [3.0, 4.0] ``` The metrics are updated with the new inputs. More information on how to use the metrics during training and how to reset them during validation can be found here.
For simplicity, let's suppose that a float32 NPY file containing an array with shape (N, K) is given, and you know the number of features K beforehand, as well as the fact that it is a float32 array. An NPY file is just a binary file with a small header and followed by the raw array data (object arrays are different, but we're considering numbers now). In short, you can find the size of this header with a function like this: ``` def npy_header_offset(npy_path): with open(str(npy_path), 'rb') as f: if f.read(6) != b'\\x93NUMPY': raise ValueError('Invalid NPY file.') version_major, version_minor = f.read(2) if version_major == 1: header_len_size = 2 elif version_major == 2: header_len_size = 4 else: raise ValueError('Unknown NPY file version {}.{}.'.format(version_major, version_minor)) header_len = sum(b << (8 * i) for i, b in enumerate(f.read(header_len_size))) header = f.read(header_len) if not header.endswith(b'\\n'): raise ValueError('Invalid NPY file.') return f.tell() ``` With this you can create a dataset like this: ``` import tensorflow as tf npy_file = 'my_file.npy' num_features = ... dtype = tf.float32 header_offset = npy_header_offset(npy_file) dataset = tf.data.FixedLengthRecordDataset([npy_file], num_features * dtype.size, header_bytes=header_offset) ``` Each element of this dataset contains a long string of bytes representing a single example. You can now decode it to obtain an actual array: ``` dataset = dataset.map(lambda s: tf.io.decode_raw(s, dtype)) ``` The elements will have indeterminate shape, though, because TensorFlow does not keep track of the length of the strings. You can just enforce the shape since you know the number of features: ``` dataset = dataset.map(lambda s: tf.reshape(tf.io.decode_raw(s, dtype), (num_features,))) ``` Similarly, you can choose to perform this step after batching, or combine it in whatever way you feel like. The limitation is that you had to know the number of features in advance. 
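As a quick sanity check of the header-offset idea, plain NumPy can verify where the records start before wiring a file into the pipeline (the file name toy.npy and the sizes below are made up for illustration):

```python
import numpy as np

# Toy file: 4 examples with 3 float32 features each.
arr = np.arange(12, dtype=np.float32).reshape(4, 3)
np.save('toy.npy', arr)

# Locate where the raw records begin, mirroring the header parsing above.
with open('toy.npy', 'rb') as f:
    version = np.lib.format.read_magic(f)       # e.g. (1, 0)
    np.lib.format.read_array_header_1_0(f)      # shape, fortran_order, dtype
    header_offset = f.tell()

# Each record after the header is num_features * itemsize bytes long.
record_bytes = 3 * np.dtype(np.float32).itemsize
with open('toy.npy', 'rb') as f:
    f.seek(header_offset)
    first = np.frombuffer(f.read(record_bytes), dtype=np.float32)
print(first.tolist())  # first row of the saved array
```
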
It is possible to extract it from the NumPy header, though, just a bit of a pain, and in any case hardly from within TensorFlow, so the file names would need to be known in advance. Another limitation is that, as it is, the solution requires you to either use only one file per dataset or files that have the same header size, although if you know that all the arrays have the same size that should actually be the case. Admittedly, if one considers this kind of approach it may just be better to have a pure binary file without headers, and either hard code the number of features or read them from a different source...", "best_answers_score":0.8, "library_name":"tensorflow", "question_url":"https:\/\/stackoverflow.com\/questions\/48889482\/feeding-npy-numpy-files-into-tensorflow-data-pipeline", "best_answers_votes":25, "question_length":343, "response_length":2811 }, { "question":"Darknet YOLO image size I am trying to train a custom object classifier in Darknet YOLO v2 https:\/\/pjreddie.com\/darknet\/yolo\/ I gathered a dataset of images, most of them 6000 x 4000 px and some lower resolutions as well. Do I need to resize the images before training to be square? I found that the config uses: ``` [net] batch=64 subdivisions=8 height=416 width=416 channels=3 momentum=0.9 decay=0.0005 angle=0 saturation = 1.5 exposure = 1.5 hue=.1 ``` that's why I was wondering how to use it for different sizes of data sets.", "response":"You don't have to resize it, because Darknet will do it for you! It means you really don't need to do that and you can use different image sizes during your training. What you posted above is just the network configuration. There should be a full network definition as well. And the height and the width tell you what's the network resolution.
And it also keeps the aspect ratio, check e.g. this.", "best_answers_score":0.8, "library_name":"tensorflow", "question_url":"https:\/\/stackoverflow.com\/questions\/49450829\/darknet-yolo-image-size", "best_answers_votes":37, "question_length":533, "response_length":412 }, { "question":"difference between Tensorflow's Graph and GraphDef I'm quite new to tensorflow. I would like to understand the conceptual difference between Graph and GraphDef. Furthermore, which one should I have to run a graph loaded from a protobuf file (.pb)? Thanks!", "response":"Graph or Computational Graph is the core concept of tensorflow for representing computation. When you use tensorflow, you first create your own Computation Graph and pass the Graph to tensorflow. How to do that? As you may know, tensorflow supports many front-end programming languages, like Python, C++, Java and Go, and the core language is C++; how do the other languages transform the Graph to C++? They use a tool called protobuf which can generate specific language stubs, and that's where the GraphDef comes from. It's a serialized version of Graph.
which one should I have to run a graph loaded from protobuf file (.pb) You should read your *pb file using GraphDef and bind the GraphDef to a (default) Graph, then use a session to run the Graph for computation, like the following code: ```py import tensorflow as tf from tensorflow.python.platform import gfile with tf.Session() as sess: model_filename ='PATH_TO_PB.pb' with gfile.FastGFile(model_filename, 'rb') as f: graph_def = tf.GraphDef() graph_def.ParseFromString(f.read()) g_in = tf.import_graph_def(graph_def) LOGDIR='\/logs\/tests\/1\/' train_writer = tf.summary.FileWriter(LOGDIR) train_writer.add_graph(sess.graph) ```", "best_answers_score":0.8, "library_name":"tensorflow", "question_url":"https:\/\/stackoverflow.com\/questions\/47059848\/difference-between-tensorflows-graph-and-graphdef", "best_answers_votes":54, "question_length":256, "response_length":1171 }, { "question":"What is the difference between MaxPool and MaxPooling layers in Keras? I just started working with keras and noticed that there are two layers with very similar names for max-pooling: MaxPool and MaxPooling. I was surprised that I couldn't find the difference between these two on Google; so I am wondering what the difference is between the two if any.", "response":"They are basically the same thing (i.e. aliases of each other). For future readers who might want to know how this could be determined: go to the documentation page of the layer (you can use the list here) and click on \"View aliases\". This is then accompanied by a blue plus sign (+). 
For example, if you go to the MaxPool2D documentation and do this, you will find MaxPooling2D in the list of aliases of this layer as follows:", "best_answers_score":0.8, "library_name":"tensorflow", "question_url":"https:\/\/stackoverflow.com\/questions\/63006575\/what-is-the-difference-between-maxpool-and-maxpooling-layers-in-keras", "best_answers_votes":30, "question_length":353, "response_length":422 }, { "question":"Tensorflow: How does tf.get_variable work? I have read about tf.get_variable from this question and also a bit from the documentation available at the tensorflow website. However, I am still not clear and was unable to find an answer online. How does tf.get_variable work? For example: ``` var1 = tf.Variable(3.,dtype=float64) var2 = tf.get_variable(\"var1\",[],dtype=tf.float64) ``` Does it mean that var2 is another variable with initialization similar to var1? Or is var2 an alias for var1 (I tried and it doesn't seem to)? How are var1 and var2 related? How is a variable constructed when the variable we are getting doesn't really exist?", "response":"tf.get_variable(name) creates a new variable called name (or adds _ if name already exists in the current scope) in the tensorflow graph. In your example, you're creating a python variable called var1. The name of that variable in the tensorflow graph is not var1, but Variable:0. Every node you define has its own name that you can specify or let tensorflow give a default (and always different) one. You can see the name value by accessing the name property of the python variable (i.e. print(var1.name)). On your second line, you're defining a Python variable var2 whose name in the tensorflow graph is var1.
The script ``` import tensorflow as tf var1 = tf.Variable(3.,dtype=tf.float64) print(var1.name) var2 = tf.get_variable(\"var1\",[],dtype=tf.float64) print(var2.name) ``` in fact prints: ``` Variable:0 var1:0 ``` If you, instead, want to define a variable (node) called var1 in the tensorflow graph and then get a reference to that node, you cannot simply use tf.get_variable(\"var1\"), because it will create a new, different variable called var1_1. This script ``` var1 = tf.Variable(3.,dtype=tf.float64, name=\"var1\") print(var1.name) var2 = tf.get_variable(\"var1\",[],dtype=tf.float64) print(var2.name) ``` prints: ``` var1:0 var1_1:0 ``` If you want to create a reference to the node var1, you first: Have to replace tf.Variable with tf.get_variable. The variables created with tf.Variable can't be shared, while the latter can. Know what the scope of var1 is and allow the reuse of that scope when declaring the reference. Looking at the code is the best way to understand: ``` import tensorflow as tf #var1 = tf.Variable(3.,dtype=tf.float64, name=\"var1\") var1 = tf.get_variable(initializer=tf.constant_initializer(3.), dtype=tf.float64, name=\"var1\", shape=()) current_scope = tf.contrib.framework.get_name_scope() print(var1.name) with tf.variable_scope(current_scope, reuse=True): var2 = tf.get_variable(\"var1\",[],dtype=tf.float64) print(var2.name) ``` outputs: ``` var1:0 var1:0 ```", "best_answers_score":0.8, "library_name":"tensorflow", "question_url":"https:\/\/stackoverflow.com\/questions\/45074049\/tensorflow-how-does-tf-get-variable-work", "best_answers_votes":53, "question_length":640, "response_length":2009 }, { "question":"Replacing placeholder for tensorflow v2 For my project, I need to convert a directed graph into a tensorflow implementation of the graph as if it was a neural network.
In tensorflow version 1, I could just define all of my inputs as placeholders and then generate the dataflow graph for the outputs using a breadth-first search of the graph. Then I would just feed in my inputs using a feed_dict. However, in TensorFlow v2.0 they have decided to do away with placeholders entirely. How would I make a tf.function for each graph that takes in a variable number of inputs and returns a variable number of outputs without using a placeholder? I want to generate a tf.function like this that works for an arbitrary acyclic directed graph so that I can take advantage of tensorflow GPU support to run the graph feed-forward a few thousand times in a row after I have generated it. Edit for code example: My graph is defined as a dictionary. Each key represents a node and has a corresponding value of another dictionary specifying incoming and outgoing links with weights. ```py { \"A\": { \"incoming\": [(\"B\", 2), (\"C\", -1)], \"outgoing\": [(\"D\", 3)] } } ``` I have omitted the entries for B, C, and D for brevity.
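To pin down the intended semantics, here is a plain-Python sketch (no TensorFlow; the helper name evaluate is my own) of the feed-forward pass I want, with weights mirroring the example entry above; it assumes input nodes are queued before any of their dependents:

```python
from collections import deque

# Weighted DAG in the dictionary form described above.
graph = {
    'B': {'incoming': [], 'outgoing': [('A', 2)]},
    'C': {'incoming': [], 'outgoing': [('A', -1)]},
    'A': {'incoming': [('B', 2), ('C', -1)], 'outgoing': [('D', 3)]},
    'D': {'incoming': [('A', 3)], 'outgoing': []},
}

def evaluate(graph, inputs, outputs):
    # Breadth-first feed-forward pass with plain floats.
    values = dict(inputs)
    queue = deque(inputs)
    while queue:
        node = queue.popleft()
        for succ, weight in graph[node]['outgoing']:
            contribution = weight * values[node]
            if succ in values:
                values[succ] += contribution
            else:
                values[succ] = contribution
                queue.append(succ)
    return [values[o] for o in outputs]

print(evaluate(graph, {'B': 1.0, 'C': 1.0}, ['A', 'D']))  # [1.0, 3.0]
```
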
Here is how I would construct the code I want in tensorflow v1.0 where inputs is just a list of key values that are strictly inputs to the graph ```py def construct_graph(graph_dict, inputs, outputs): queue = inputs[:] make_dict = {} for key, val in graph_dict.items(): if key in inputs: make_dict[key] = tf.placeholder(tf.float32, name=key) else: make_dict[key] = None # Breadth-First search of graph starting from inputs while len(queue) != 0: cur = graph_dict[queue[0]] for outg in cur[\"outgoing\"]: if make_dict[outg[0]]: # If discovered node, do add\/multiply operation make_dict[outg[0]] = tf.add(make_dict[outg[0]], tf.multiply(outg[1], make_dict[queue[0]])) else: # If undiscovered node, input is just coming in multiplied and add outgoing to queue make_dict[outg[0]] = tf.multiply(make_dict[queue[0]], outg[1]) for outgo in graph_dict[outg[0]][\"outgoing\"]: queue.append(outgo[0]) queue.pop(0) # Returns one data graph for each output return [make_dict[x] for x in outputs] ``` I would then be able to run the outputs many times as they are simply graphs with placeholders that I would provide a feed_dict for. Obviously, this is not the intended way in TensorFlow v2.0 as they seem to strongly discourage the use of placeholders in this new version. The point is that I only have to do this preprocessing for a graph once, as it returns a datagraph which is independent of the graph_dict definition.", "response":"Make your code work with TF 2.0 Below is a sample code which you can use with TF 2.0. It relies on the compatibility API that is accessible as tensorflow.compat.v1, and requires to disable v2 behaviors. I don't know if it behaves as you expected. If not, then provide us more explanation of what you try to achieve. 
```py import tensorflow.compat.v1 as tf tf.disable_v2_behavior() @tf.function def construct_graph(graph_dict, inputs, outputs): queue = inputs[:] make_dict = {} for key, val in graph_dict.items(): if key in inputs: make_dict[key] = tf.placeholder(tf.float32, name=key) else: make_dict[key] = None # Breadth-First search of graph starting from inputs while len(queue) != 0: cur = graph_dict[queue[0]] for outg in cur[\"outgoing\"]: if make_dict[outg[0]]: # If discovered node, do add\/multiply operation make_dict[outg[0]] = tf.add(make_dict[outg[0]], tf.multiply(outg[1], make_dict[queue[0]])) else: # If undiscovered node, input is just coming in multiplied and add outgoing to queue make_dict[outg[0]] = tf.multiply(make_dict[queue[0]], outg[1]) for outgo in graph_dict[outg[0]][\"outgoing\"]: queue.append(outgo[0]) queue.pop(0) # Returns one data graph for each output return [make_dict[x] for x in outputs] def main(): graph_def = { \"B\": { \"incoming\": [], \"outgoing\": [(\"A\", 1.0)] }, \"C\": { \"incoming\": [], \"outgoing\": [(\"A\", 1.0)] }, \"A\": { \"incoming\": [(\"B\", 2.0), (\"C\", -1.0)], \"outgoing\": [(\"D\", 3.0)] }, \"D\": { \"incoming\": [(\"A\", 2.0)], \"outgoing\": [] } } outputs = construct_graph(graph_def, [\"B\", \"C\"], [\"A\"]) print(outputs) if __name__ == \"__main__\": main() ``` ```none [ dtype=float32>] ``` Migrate your code to TF 2.0 While the above snippet is valid, it is still tied to TF 1.0. To migrate it to TF 2.0 you have to refactor a little bit your code. Instead of returning a list of tensors, which were callables with TF 1.0, I advise you to return a list of keras.layers.Model. 
Below is a working example: ```py import tensorflow as tf def construct_graph(graph_dict, inputs, outputs): queue = inputs[:] make_dict = {} for key, val in graph_dict.items(): if key in inputs: # Use keras.Input instead of placeholders make_dict[key] = tf.keras.Input(name=key, shape=(), dtype=tf.dtypes.float32) else: make_dict[key] = None # Breadth-First search of graph starting from inputs while len(queue) != 0: cur = graph_dict[queue[0]] for outg in cur[\"outgoing\"]: if make_dict[outg[0]] is not None: # If discovered node, do add\/multiply operation make_dict[outg[0]] = tf.keras.layers.add([ make_dict[outg[0]], tf.keras.layers.multiply( [[outg[1]], make_dict[queue[0]]], )], ) else: # If undiscovered node, input is just coming in multiplied and add outgoing to queue make_dict[outg[0]] = tf.keras.layers.multiply( [make_dict[queue[0]], [outg[1]]] ) for outgo in graph_dict[outg[0]][\"outgoing\"]: queue.append(outgo[0]) queue.pop(0) # Returns one data graph for each output model_inputs = [make_dict[key] for key in inputs] model_outputs = [make_dict[key] for key in outputs] return [tf.keras.Model(inputs=model_inputs, outputs=o) for o in model_outputs] def main(): graph_def = { \"B\": { \"incoming\": [], \"outgoing\": [(\"A\", 1.0)] }, \"C\": { \"incoming\": [], \"outgoing\": [(\"A\", 1.0)] }, \"A\": { \"incoming\": [(\"B\", 2.0), (\"C\", -1.0)], \"outgoing\": [(\"D\", 3.0)] }, \"D\": { \"incoming\": [(\"A\", 2.0)], \"outgoing\": [] } } outputs = construct_graph(graph_def, [\"B\", \"C\"], [\"A\"]) print(\"Builded models:\", outputs) for o in outputs: o.summary(120) print(\"Output:\", o((1.0, 1.0))) if __name__ == \"__main__\": main() ``` What to notice here? Change from placeholder to keras.Input, requiring to set the shape of the input. Use keras.layers.[add|multiply] for computation. This is probably not required, but stick to one interface. 
However, it requires to wrap factors inside a list (to handle batching) Build keras.Model to return Call your model with a tuple of values (not a dictionary anymore) Here is the output of the code. ```none Builded models: [] Model: \"model\" ________________________________________________________________________________________________________________________ Layer (type) Output Shape Param # Connected to ======================================================================================================================== B (InputLayer) [(None,)] 0 ________________________________________________________________________________________________________________________ C (InputLayer) [(None,)] 0 ________________________________________________________________________________________________________________________ tf_op_layer_mul (TensorFlowOpLayer) [(None,)] 0 B[0][0] ________________________________________________________________________________________________________________________ tf_op_layer_mul_1 (TensorFlowOpLayer) [(None,)] 0 C[0][0] ________________________________________________________________________________________________________________________ add (Add) (None,) 0 tf_op_layer_mul[0][0] tf_op_layer_mul_1[0][0] ======================================================================================================================== Total params: 0 Trainable params: 0 Non-trainable params: 0 ________________________________________________________________________________________________________________________ Output: tf.Tensor([2.], shape=(1,), dtype=float32) ```", "best_answers_score":0.8, "library_name":"tensorflow", "question_url":"https:\/\/stackoverflow.com\/questions\/58986126\/replacing-placeholder-for-tensorflow-v2", "best_answers_votes":53, "question_length":2614, "response_length":5325 }, { "question":"Force Anaconda to install tensorflow 1.14 Now, the official TensorFlow on Anaconda is 2.0. 
My question is how to force Anaconda to install an earlier version of TensorFlow instead. So, for example, I would like Anaconda to install TensorFlow 1.14, as many of my projects depend on this version.", "response":"You can force installing a certain version of any package found on Anaconda simply by using an = operator with the package version attached to it. So, if you want to install tensorflow 1.14, you can run the following command: ``` conda install -c conda-forge tensorflow=1.14 ``` You can replace 1.14 with any other version. To see the available versions of tensorflow on Anaconda, you can run: ``` conda search tensorflow ```", "best_answers_score":0.8, "library_name":"tensorflow", "question_url":"https:\/\/stackoverflow.com\/questions\/58688481\/force-anaconda-to-install-tensorflow-1-14", "best_answers_votes":47, "question_length":302, "response_length":423 }, { "question":"Show training and validation accuracy in TensorFlow using same graph I have a TensorFlow model, and one part of this model evaluates the accuracy. The accuracy is just another node in the tensorflow graph, that takes in logits and labels. When I want to plot the training accuracy, this is simple: I have something like: ``` tf.scalar_summary(\"Training Accuracy\", accuracy) tf.scalar_summary(\"SomethingElse\", foo) summary_op = tf.merge_all_summaries() writer = tf.train.SummaryWriter('\/me\/mydir\/', graph=sess.graph) ``` Then, during my training loop, I have something like: ``` for n in xrange(1000): ... summary, ..., ... = sess.run([summary_op, ..., ...], feed_dict) writer.add_summary(summary, n) ... ``` Also inside that for loop, every, say, 100 iterations, I want to evaluate the validation accuracy. I have a separate feed_dict for this, and I am able to evaluate the validation accuracy very nicely in python. However, here is my problem: I want to make another summary for the validation accuracy, by using the accuracy node. I am not clear on how to do this though.
Since I have the accuracy node it makes sense that I should be able to re-use it, but I am unsure how to do this exactly, such that I can also get the validation accuracy written out as a separate scalar_summary... How might this be possible?", "response":"You can reuse the accuracy node but you need to use two different SummaryWriters, one for the training runs and one for the test data. Also you have to assign the scalar summary for accuracy to a variable. ``` accuracy_summary = tf.scalar_summary(\"Training Accuracy\", accuracy) tf.scalar_summary(\"SomethingElse\", foo) summary_op = tf.merge_all_summaries() summaries_dir = '\/me\/mydir\/' train_writer = tf.train.SummaryWriter(summaries_dir + '\/train', sess.graph) test_writer = tf.train.SummaryWriter(summaries_dir + '\/test') ``` Then in your training loop you have the normal training and record your summaries with the train_writer. In addition you run the graph on the test set every 100th iteration and record only the accuracy summary with the test_writer. ``` # Record train set summaries, and train summary, _ = sess.run([summary_op, train_step], feed_dict=...) train_writer.add_summary(summary, n) if n % 100 == 0: # Record summaries and test-set accuracy summary, acc = sess.run([accuracy_summary, accuracy], feed_dict=...) test_writer.add_summary(summary, n) print('Accuracy at step %s: %s' % (n, acc)) ``` You can then point TensorBoard to the parent directory (summaries_dir) and it will load both data sets.
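For readers on TF 2.x, where tf.scalar_summary and tf.train.SummaryWriter no longer exist, the same two-writer pattern can be sketched with the tf.summary API (this sketch is not from the original answer; the accuracy values are placeholder stand-ins for real metrics):

```python
import os
import tempfile

import tensorflow as tf

# Two writers pointing at sibling subdirectories, so TensorBoard
# shows the train and test curves as two runs of the same scalar.
logdir = tempfile.mkdtemp()
train_writer = tf.summary.create_file_writer(os.path.join(logdir, 'train'))
test_writer = tf.summary.create_file_writer(os.path.join(logdir, 'test'))

for step in range(300):
    train_accuracy = 0.5 + step / 1000.0  # stand-in for the real metric
    with train_writer.as_default():
        tf.summary.scalar('accuracy', train_accuracy, step=step)
    if step % 100 == 0:  # evaluate on the test set every 100th step
        test_accuracy = 0.4 + step / 1000.0  # stand-in for the real metric
        with test_writer.as_default():
            tf.summary.scalar('accuracy', test_accuracy, step=step)

train_writer.flush()
test_writer.flush()
```

Pointing TensorBoard at the parent logdir then loads both runs, just as with the TF 1.x writers.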
This can also be found in the TensorFlow HowTos: https:\/\/www.tensorflow.org\/versions\/r0.11\/how_tos\/summaries_and_tensorboard\/index.html", "best_answers_score":0.8, "library_name":"tensorflow", "question_url":"https:\/\/stackoverflow.com\/questions\/40146428\/show-training-and-validation-accuracy-in-tensorflow-using-same-graph", "best_answers_votes":41, "question_length":1317, "response_length":1356 }, { "question":"ValueError: Output tensors to a Model must be the output of a TensorFlow `Layer` I'm building a model in Keras using some tensorflow functions (reduce_sum and l2_normalize) in the last layer and encountered this problem. I have searched for a solution but all of them related to \"Keras tensor\". Here is my code: ``` import tensorflow as tf; from tensorflow.python.keras import backend as K vgg16_model = VGG16(weights = 'imagenet', include_top = False, input_shape = input_shape); fire8 = extract_layer_from_model(vgg16_model, layer_name = 'block4_pool'); pool8 = MaxPooling2D((3,3), strides = (2,2), name = 'pool8')(fire8.output); fc1 = Conv2D(64, (6,6), strides= (1, 1), padding = 'same', name = 'fc1')(pool8); fc1 = Dropout(rate = 0.5)(fc1); fc2 = Conv2D(3, (1, 1), strides = (1, 1), padding = 'same', name = 'fc2')(fc1); fc2 = Activation('relu')(fc2); fc2 = Conv2D(3, (15, 15), padding = 'valid', name = 'fc_pooling')(fc2); fc2_norm = K.l2_normalize(fc2, axis = 3); est = tf.reduce_sum(fc2_norm, axis = (1, 2)); est = K.l2_normalize(est); FC_model = Model(inputs = vgg16_model.input, outputs = est); ``` and then the error: ValueError: Output tensors to a Model must be the output of a TensorFlow Layer (thus holding past layer metadata).
Found: Tensor(\"l2_normalize_3:0\", shape=(?, 3), dtype=float32) I noticed that without passing the fc2 layer to these functions, the model works fine: ``` FC_model = Model(inputs = vgg16_model.input, outputs = fc2); ``` Can someone please explain this problem to me and suggest how to fix it?", "response":"I have found a workaround for the problem. For anyone who encounters the same issue, you can use the Lambda layer to wrap your tensorflow operations; this is what I did: ``` from tensorflow.python.keras.layers import Lambda; def norm(fc2): fc2_norm = K.l2_normalize(fc2, axis = 3); illum_est = tf.reduce_sum(fc2_norm, axis = (1, 2)); illum_est = K.l2_normalize(illum_est); return illum_est; illum_est = Lambda(norm)(fc2); ```", "best_answers_score":0.8, "library_name":"tensorflow", "question_url":"https:\/\/stackoverflow.com\/questions\/50715928\/valueerror-output-tensors-to-a-model-must-be-the-output-of-a-tensorflow-layer", "best_answers_votes":40, "question_length":1540, "response_length":438 }, { "question":"Cannot import keras after installation I'm trying to set up the keras deep learning library for Python3.5 on Ubuntu 16.04 LTS and use Tensorflow as a backend. I have Python2.7 and Python3.5 installed. I have installed Anaconda and, with its help, Tensorflow, numpy, scipy, and pyyaml. Afterwards I installed keras with the command sudo python setup.py install Although I can see the \/usr\/local\/lib\/python3.5\/dist-packages\/Keras-1.1.0-py3.5.egg directory, I cannot use the keras library. When I try to import it in python it says ImportError: No module named 'keras' I have tried to install keras using pip3, but got the same result. What am I doing wrong?
Any ideas?", "response":"Diagnose If you have pip installed (you should have it since you use Python 3.5), list the installed Python packages, like this: ``` $ pip list | grep -i keras Keras (1.1.0) ``` If you don\u2019t see Keras, it means that the previous installation failed or is incomplete (this lib has these dependencies: numpy (1.11.2), PyYAML (3.12), scipy (0.18.1), six (1.10.0), and Theano (0.8.2)). Consult the pip.log to see what\u2019s wrong. You can also display your Python path like this: ``` $ python3 -c 'import sys, pprint; pprint.pprint(sys.path)' ['', '\/Library\/Frameworks\/Python.framework\/Versions\/3.5\/lib\/python35.zip', '\/Library\/Frameworks\/Python.framework\/Versions\/3.5\/lib\/python3.5', '\/Library\/Frameworks\/Python.framework\/Versions\/3.5\/lib\/python3.5\/plat-darwin', '\/Library\/Frameworks\/Python.framework\/Versions\/3.5\/lib\/python3.5\/lib-dynload', '\/Library\/Frameworks\/Python.framework\/Versions\/3.5\/lib\/python3.5\/site-packages'] ``` Make sure the Keras library appears in the \/Library\/Frameworks\/Python.framework\/Versions\/3.5\/lib\/python3.5\/site-packages path (the path is different on Ubuntu). If not, try to uninstall it, and retry the installation: ``` $ pip uninstall Keras ``` Use a virtualenv It\u2019s a bad idea to use and pollute your system-wide Python. I recommend using a virtualenv (see this guide). The best usage is to create a virtualenv directory (in your home, for instance), and store your virtualenvs in it: ``` cd virtualenv\/ virtualenv -p python3.5 py-keras source py-keras\/bin\/activate pip install -q -U pip setuptools wheel ``` Then install Keras: ``` pip install keras ``` You get: ``` $ pip list Keras (1.1.0) numpy (1.11.2) pip (8.1.2) PyYAML (3.12) scipy (0.18.1) setuptools (28.3.0) six (1.10.0) Theano (0.8.2) wheel (0.30.0a0) ``` But you also need to install extra libraries, like Tensorflow: ``` $ python -c \"import keras\" Using TensorFlow backend. Traceback (most recent call last): ...
ImportError: No module named 'tensorflow' ``` The installation guide for TensorFlow is here: https:\/\/www.tensorflow.org\/versions\/r0.11\/get_started\/os_setup.html#pip-installation", "best_answers_score":0.8, "library_name":"tensorflow", "question_url":"https:\/\/stackoverflow.com\/questions\/39930952\/cannot-import-keras-after-installation", "best_answers_votes":27, "question_length":649, "response_length":2071 }, { "question":"Tensorflow and Multiprocessing: Passing Sessions I have recently been working on a project that uses a neural network for virtual robot control. I used tensorflow to code it up and it runs smoothly. So far, I have used sequential simulations to evaluate how good the neural network is; however, I want to run several simulations in parallel to reduce the amount of time it takes to get data. To do this I am importing python's multiprocessing package. Initially I was passing the sess variable (sess=tf.Session()) to a function that would run the simulation. However, once I get to any statement that uses this sess variable, the process quits without a warning. After searching around for a bit I found these two posts: Tensorflow: Passing a session to a python multiprocess and Running multiple tensorflow sessions concurrently While they are highly related I haven't been able to figure out how to make it work. I tried creating a session for each individual process and assigning the weights of the neural net to its trainable parameters without success. I've also tried saving the session into a file and then loading it within a process, but no luck there either. Has someone been able to pass a session (or clones of sessions) to several processes? Thanks.", "response":"You can't use Python multiprocessing to pass a TensorFlow Session into a multiprocessing.Pool in the straightforward way because the Session object can't be pickled (it's fundamentally not serializable because it may manage GPU memory and state like that).
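To see why pickling fails, here is a minimal pure-Python sketch (FakeSession is a hypothetical stand-in, not the real tf.Session): like a real session, it owns a runtime handle (here a thread lock) that the pickle module refuses to serialize:

```python
import pickle
import threading


class FakeSession:
    """Hypothetical stand-in for tf.Session: owns an unpicklable runtime handle."""

    def __init__(self):
        # Real sessions hold C++/GPU runtime state; a lock triggers the same failure.
        self._lock = threading.Lock()


try:
    pickle.dumps(FakeSession())
    picklable = True
except TypeError:
    # pickle raises TypeError: cannot pickle '_thread.lock' object
    picklable = False

print(picklable)  # False
```

The same TypeError is what multiprocessing hits internally when it tries to send such an object to a worker process.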
I'd suggest parallelizing the code using actors, which are essentially the parallel computing analog of \"objects\" and are used to manage state in the distributed setting. Ray is a good framework for doing this. You can define a Python class which manages the TensorFlow Session and exposes a method for running your simulation. ``` import ray import tensorflow as tf ray.init() @ray.remote class Simulator(object): def __init__(self): self.sess = tf.Session() self.simple_model = tf.constant([1.0]) def simulate(self): return self.sess.run(self.simple_model) # Create two actors. simulators = [Simulator.remote() for _ in range(2)] # Run two simulations in parallel. results = ray.get([s.simulate.remote() for s in simulators]) ``` Here are a few more examples of parallelizing TensorFlow with Ray. See the Ray documentation. Note that I'm one of the Ray developers.", "best_answers_score":0.8, "library_name":"tensorflow", "question_url":"https:\/\/stackoverflow.com\/questions\/36610290\/tensorflow-and-multiprocessing-passing-sessions", "best_answers_votes":21, "question_length":1258, "response_length":1122 }, { "question":"How does one train multiple models in a single script in TensorFlow when there are GPUs present? Say I have access to a number of GPUs in a single machine (for the sake of argument assume 8 GPUs, each with a max memory of 8GB, in one single machine with some amount of RAM and disk). I wanted to run in one single script and in one single machine a program that evaluates multiple models (say 50 or 200) in TensorFlow, each with a different hyper parameter setting (say, step-size, decay rate, batch size, epochs\/iterations, etc). At the end of training assume we just record its accuracy and get rid of the model (if you want, assume the model is being checkpointed every so often, so it's fine to just throw away the model and start training from scratch.
You may also assume some other data may be recorded, like the specific hyper params and the train and validation errors recorded as we train, etc). Currently I have a (pseudo-)script that looks as follows: ``` def train_multiple_models_in_one_script_with_gpu(arg): ''' trains multiple NN models in one session using GPUs correctly. arg = some obj\/struct with the params for training each of the models. ''' #### try multiple models for mdl_id in range(100): #### define\/create graph graph = tf.Graph() with graph.as_default(): ### get mdl x = tf.placeholder(float_type, get_x_shape(arg), name='x-input') y_ = tf.placeholder(float_type, get_y_shape(arg)) y = get_mdl(arg,x) ### get loss and accuracy loss, accuracy = get_accuracy_loss(arg,x,y,y_) ### get optimizer variables opt = get_optimizer(arg) train_step = opt.minimize(loss, global_step=global_step) #### run session with tf.Session(graph=graph) as sess: # train for i in range(nb_iterations): batch_xs, batch_ys = get_batch_feed(X_train, Y_train, batch_size) sess.run(fetches=train_step, feed_dict={x: batch_xs, y_: batch_ys}) # check_point mdl if i % report_error_freq == 0: sess.run(step.assign(i)) # train_error = sess.run(fetches=loss, feed_dict={x: X_train, y_: Y_train}) test_error = sess.run(fetches=loss, feed_dict={x: X_test, y_: Y_test}) print( 'step %d, train error: %s test_error %s'%(i,train_error,test_error) ) ``` essentially it tries lots of models in one single run but it builds each model in a separate graph and runs each one in a separate session. I guess my main worry is that it's unclear to me how tensorflow under the hood allocates resources for the GPUs to be used. For example, does it load the (part of the) data set only when a session is run? When I create a graph and a model, is it brought into the GPU immediately, or when is it inserted into the GPU? Do I need to clear\/free the GPU each time it tries a new model?
I don't actually care too much if the models are ran in parallel in multiple GPU (which can be a nice addition), but I want it to first run everything serially without crashing. Is there anything special I need to do for this to work? Currently I am getting an error that starts as follow: ``` I tensorflow\/core\/common_runtime\/bfc_allocator.cc:702] Stats: Limit: 340000768 InUse: 336114944 MaxInUse: 339954944 NumAllocs: 78 MaxAllocSize: 335665152 W tensorflow\/core\/common_runtime\/bfc_allocator.cc:274] ***************************************************xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx W tensorflow\/core\/common_runtime\/bfc_allocator.cc:275] Ran out of memory trying to allocate 160.22MiB. See logs for memory state. W tensorflow\/core\/framework\/op_kernel.cc:975] Resource exhausted: OOM when allocating tensor with shape[60000,700] ``` and further down the line it says: ``` ResourceExhaustedError (see above for traceback): OOM when allocating tensor with shape[60000,700] [[Node: standardNN\/NNLayer1\/Z1\/add = Add[T=DT_FLOAT, _device=\"\/job:localhost\/replica:0\/task:0\/gpu:0\"](standardNN\/NNLayer1\/Z1\/MatMul, b1\/read)]] I tensorflow\/core\/common_runtime\/gpu\/gpu_device.cc:975] Creating TensorFlow device (\/gpu:0) -> (device: 0, name: Tesla P100-SXM2-16GB, pci bus id: 0000:06:00.0) ``` however further down the output file (where it prints) it seems to print fine the errors\/messages that should show as training proceeds. Does this mean that it didn't run out of resources? Or was it actually able to use the GPU? If it was able to use the CPU instead of the CPU, when why is this an error only happening when GPU are about to be used? The weird thing is that the data set is really not that big (all 60K points are 24.5M) and when I run a single model locally in my own computer it seems that the process uses less than 5GB. The GPUs have at least 8GB and the computer with them has plenty of RAM and disk (at least 16GB). 
Thus, the errors that tensorflow is throwing at me are quite puzzling. What is it trying to do and why are they occurring? Any ideas? After reading the answer that suggests using the multiprocessing library, I came up with the following script: ``` def train_mdl(args): train(mdl,args) if __name__ == '__main__': for mdl_id in range(100): # train one model with some specific hyperparams (assume they are chosen randomly inside the function below or read from a config file or they could just be passed or something) p = Process(target=train_mdl, args=(args,)) p.start() p.join() print('Done training all models!') ``` honestly I am not sure why that answer suggests using Pool, or why there are weird tuple brackets, but this is what would make sense for me. Would the resources for tensorflow be re-allocated every time a new process is created in the above loop?", "response":"I think that running all models in one single script can be bad practice in the long term (see my suggestion below for a better alternative). However, if you would like to do it, here is a solution: You can encapsulate your TF session into a process with the multiprocessing module; this will make sure TF releases the session memory once the process is done. Here is a code snippet: ``` from multiprocessing import Pool import contextlib def my_model((param1, param2, param3)): # Note the extra (), required by the pool syntax num_pool_workers=1 # can be bigger than 1, to enable parallel execution with contextlib.closing(Pool(num_pool_workers)) as po: # This ensures that the processes get closed once they are done pool_results = po.map_async(my_model, ((param1, param2, param3) for param1, param2, param3 in params_list)) results_list = pool_results.get() ``` Note from OP: The random number generator seed does not reset automatically with the multi-processing library if you choose to use it.
Details here: Using python multiprocessing with different random seed for each process About TF resource allocation: Usually TF allocates much more resources than it needs. Many times you can restrict each process to use a fraction of the total GPU memory, and discover through trial and error the fraction your script requires. You can do it with the following snippet ``` gpu_memory_fraction = 0.3 # Choose this number through trial and error gpu_options = tf.GPUOptions(per_process_gpu_memory_fraction=gpu_memory_fraction,) session_config = tf.ConfigProto(gpu_options=gpu_options) sess = tf.Session(config=session_config, graph=graph) ``` Note that sometimes TF increases the memory usage in order to accelerate the execution. Therefore, reducing the memory usage might make your model run slower. Answers to the new questions in your edit\/comments: Yes, Tensorflow will be re-allocated every time a new process is created, and cleared once a process ends. The for-loop in your edit should also do the job. I suggest to use Pool instead, because it will enable you to run several models concurrently on a single GPU. See my notes about setting gpu_memory_fraction and \"choosing the maximal number of processes\". Also note that: (1) The Pool map runs the loop for you, so you don't need an outer for-loop once you use it. (2) In your example, you should have something like mdl=get_model(args) before calling train() Weird tuple parenthesis: Pool only accepts a single argument, therefore we use a tuple to pass multiple arguments. See multiprocessing.pool.map and function with two arguments for more details. As suggested in one answer, you can make it more readable with ``` def train_mdl(params): (x,y)=params ``` As @Seven suggested, you can use CUDA_VISIBLE_DEVICES environment variable to choose which GPU to use for your process. You can do it from within your python script using the following on the beginning of the process function (train_mdl). 
``` import os # the import can be at the top of the python script os.environ[\"CUDA_VISIBLE_DEVICES\"] = \"{}\".format(gpu_id) ``` A better practice for executing your experiments would be to isolate your training\/evaluation code from the hyper parameter\/model search code. E.g. have a script named train.py, which accepts a specific combination of hyper parameters and references to your data as arguments, and executes training for a single model. Then, to iterate through all the possible combinations of parameters, you can use a simple task (jobs) queue, and submit all the possible combinations of hyper-parameters as separate jobs. The task queue will feed your jobs one at a time to your machine. Usually, you can also set the queue to execute a number of processes concurrently (see details below). Specifically, I use task spooler, which is super easy to install and handy (it doesn't require admin privileges, details below). Basic usage is (see notes below about task spooler usage): ``` ts ``` In practice, I have a separate python script that manages my experiments, sets all the arguments per specific experiment and sends the jobs to the ts queue. Here are some relevant snippets of python code from my experiments manager: run_bash executes a bash command ``` def run_bash(cmd): p = subprocess.Popen(cmd, shell=True, stdout=subprocess.PIPE, executable='\/bin\/bash') out = p.stdout.read().strip() return out # This is the stdout from the shell command ``` The next snippet sets the number of concurrent processes to be run (see note below about choosing the maximal number of processes): ``` max_job_num_per_gpu = 2 run_bash('ts -S %d'%max_job_num_per_gpu) ``` The next snippet iterates through a list of all combinations of hyper params \/ model params.
Each element of the list is a dictionary, where the keys are the command line arguments for the train.py script ``` for combination_dict in combinations_list: job_cmd = 'python train.py ' + ' '.join( ['--{}={}'.format(flag, value) for flag, value in combination_dict.iteritems()]) submit_cmd = \"ts bash -c '%s'\" % job_cmd run_bash(submit_cmd) ``` A note about choosing the maximal number of processes: If you are short on GPUs, you can use the gpu_memory_fraction you found, to set the number of processes as max_job_num_per_gpu=int(1\/gpu_memory_fraction) Notes about task spooler (ts): You could set the number of concurrent processes to run (\"slots\") with: ts -S Installing ts doesn't require admin privileges. You can download and compile it from source with a simple make, add it to your path, and you're done. You can set up multiple queues (I use it for multiple GPUs), with TS_SOCKET= ts e.g. TS_SOCKET=\/tmp\/socket-ts.gpu_queue_1 ts TS_SOCKET=\/tmp\/socket-ts.gpu_queue_2 ts See here for further usage examples A note about automatically setting the path names and file names: Once you separate your main code from the experiment manager, you will need an efficient way to generate file names and directory names, given the hyper-params. I usually keep my important hyper params in a dictionary and use the following function to generate a single chained string from the dictionary key-value pairs. Here are the functions I use for doing it: ``` def build_string_from_dict(d, sep='%'): \"\"\" Builds a string from a dictionary. Mainly used for formatting hyper-params to file names. Key-value pairs are sorted by the key name. Args: d: dictionary Returns: string :param d: input dictionary :param sep: key-value separator \"\"\" return sep.join(['{}={}'.format(k, _value2str(d[k])) for k in sorted(d.keys())]) def _value2str(val): if isinstance(val, float): # %g means: \"Floating point format.
# Uses lowercase exponential format if exponent is less than -4 or not less than precision, # decimal format otherwise.\" val = '%g' % val else: val = '{}'.format(val) val = re.sub('\\.', '_', val) return val ```", "best_answers_score":0.8, "library_name":"tensorflow", "question_url":"https:\/\/stackoverflow.com\/questions\/42426960\/how-does-one-train-multiple-models-in-a-single-script-in-tensorflow-when-there-a", "best_answers_votes":28, "question_length":5459, "response_length":6834 }, { "question":"Is there a way of determining how much GPU memory is in use by TensorFlow? Tensorflow tends to preallocate the entire available memory on it's GPUs. For debugging, is there a way of telling how much of that memory is actually in use?", "response":"(1) There is some limited support with Timeline for logging memory allocations. Here is an example for its usage: ``` run_options = tf.RunOptions(trace_level=tf.RunOptions.FULL_TRACE) run_metadata = tf.RunMetadata() summary, _ = sess.run([merged, train_step], feed_dict=feed_dict(True), options=run_options, run_metadata=run_metadata) train_writer.add_run_metadata(run_metadata, 'step%03d' % i) train_writer.add_summary(summary, i) print('Adding run metadata for', i) tl = timeline.Timeline(run_metadata.step_stats) print(tl.generate_chrome_trace_format(show_memory=True)) trace_file = tf.gfile.Open(name='timeline', mode='w') trace_file.write(tl.generate_chrome_trace_format(show_memory=True)) ``` You can give this code a try with the MNIST example (mnist with summaries) This will generate a tracing file named timeline, which you can open with chrome:\/\/tracing. Note that this only gives an approximated GPU memory usage statistics. It basically simulated a GPU execution, but doesn't have access to the full graph metadata. It also can't know how many variables have been assigned to the GPU. (2) For a very coarse measure of GPU memory usage, nvidia-smi will show the total device memory usage at the time you run the command. 
nvprof can show the on-chip shared memory usage and register usage at the CUDA kernel level, but doesn't show the global\/device memory usage. Here is an example command: nvprof --print-gpu-trace matrixMul And more details here: http:\/\/docs.nvidia.com\/cuda\/profiler-users-guide\/#abstract", "best_answers_score":0.8, "library_name":"tensorflow", "question_url":"https:\/\/stackoverflow.com\/questions\/36123740\/is-there-a-way-of-determining-how-much-gpu-memory-is-in-use-by-tensorflow", "best_answers_votes":12, "question_length":233, "response_length":1519 }, { "question":"How to add report_tensor_allocations_upon_oom to RunOptions in Keras I'm trying to train a neural net on a GPU using Keras and am getting a \"Resource exhausted: OOM when allocating tensor\" error. The specific tensor it's trying to allocate isn't very big, so I assume some previous tensor consumed almost all the VRAM. The error message comes with a hint that suggests this: Hint: If you want to see a list of allocated tensors when OOM happens, add report_tensor_allocations_upon_oom to RunOptions for current allocation info. That sounds good, but how do I do it? RunOptions appears to be a Tensorflow thing, and what little documentation I can find for it associates it with a \"session\". I'm using Keras, so Tensorflow is hidden under a layer of abstraction and its sessions under another layer below that. How do I dig underneath everything to set this option in such a way that it will take effect?", "response":"TF1 solution: It's not as hard as it seems; what you need to know is that, according to the documentation, the **kwargs parameter passed to model.compile will be passed to session.run. So you can do something like: ``` import tensorflow as tf run_opts = tf.RunOptions(report_tensor_allocations_upon_oom = True) model.compile(loss = \"...\", optimizer = \"...\", metrics = \"..\", options = run_opts) ``` And it should be passed directly each time session.run is called.
TF2: The solution above works only for tf1. For tf2, unfortunately, it appears there is no easy solution yet.", "best_answers_score":0.8, "library_name":"tensorflow", "question_url":"https:\/\/stackoverflow.com\/questions\/49665757\/how-to-add-report-tensor-allocations-upon-oom-to-runoptions-in-keras", "best_answers_votes":19, "question_length":903, "response_length":561 }, { "question":"Hyperparameter optimization for Deep Learning Structures using Bayesian Optimization I have constructed a CLDNN (Convolutional, LSTM, Deep Neural Network) structure for a raw signal classification task. Each training epoch runs for about 90 seconds and the hyperparameters seem to be very difficult to optimize. I have been researching various ways to optimize the hyperparameters (e.g. random or grid search) and found out about Bayesian Optimization. Although I am still not fully understanding the optimization algorithm, I feel like it will help me greatly. I would like to ask a few questions regarding the optimization task. How do I set up the Bayesian Optimization with regards to a deep network? (What is the cost function we are trying to optimize?) What is the function I am trying to optimize? Is it the cost of the validation set after N epochs? Is spearmint a good starting point for this task? Any other suggestions for this task? I would greatly appreciate any insights into this problem.", "response":"Although I am still not fully understanding the optimization algorithm, I feel like it will help me greatly. First up, let me briefly explain this part. Bayesian Optimization methods aim to deal with the exploration-exploitation trade-off in the multi-armed bandit problem. In this problem, there is an unknown function, which we can evaluate at any point, but each evaluation costs (a direct penalty or an opportunity cost), and the goal is to find its maximum using as few trials as possible.
Basically, the trade off is this: you know the function in a finite set of points (of which some are good and some are bad), so you can try an area around the current local maximum, hoping to improve it (exploitation), or you can try a completely new area of space, that can potentially be much better or much worse (exploration), or somewhere in between. Bayesian Optimization methods (e.g. PI, EI, UCB), build a model of the target function using a Gaussian Process (GP) and at each step choose the most \"promising\" point based on their GP model (note that \"promising\" can be defined differently by different particular methods). Here's an example: The true function is f(x) = x * sin(x) (black curve) on [-10, 10] interval. Red dots represent each trial, red curve is the GP mean, blue curve is the mean plus or minus one standard deviation. As you can see, the GP model doesn't match the true function everywhere, but the optimizer fairly quickly identified the \"hot\" area around -8 and started to exploit it. How do I set up the Bayesian Optimization with regards to a deep network? In this case, the space is defined by (possibly transformed) hyperparameters, usually a multidimensional unit hypercube. For example, suppose you have three hyperparameters: a learning rate \u03b1 in [0.001, 0.01], the regularizer \u03bb in [0.1, 1] (both continuous) and the hidden layer size N in [50..100] (integer). The space for optimization is a 3-dimensional cube [0, 1]*[0, 1]*[0, 1]. Each point (p0, p1, p2) in this cube corresponds to a trinity (\u03b1, \u03bb, N) by the following transformation: ``` p0 -> \u03b1 = 10**(p0-3) p1 -> \u03bb = 10**(p1-1) p2 -> N = int(p2*50 + 50) ``` What is the function I am trying to optimize? Is it the cost of the validation set after N epochs? Correct, the target function is neural network validation accuracy. Clearly, each evaluation is expensive, because it requires at least several epochs for training. 
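The unit-cube transformation described above can be sketched in plain Python. This is only an illustration of the mapping from this answer's example bounds; cube_to_params is a hypothetical name, not part of any library: ```python
def cube_to_params(p0, p1, p2):
    # Map a point of the [0, 1]^3 optimization cube to the actual
    # hyperparameters (alpha, lambda, N) described above.
    alpha = 10 ** (p0 - 3)   # learning rate in [0.001, 0.01], log scale
    lam = 10 ** (p1 - 1)     # regularizer in [0.1, 1], log scale
    n = int(p2 * 50 + 50)    # hidden layer size in [50, 100]
    return alpha, lam, n

cube_to_params(0.0, 0.0, 0.0)  # corner of the cube: smallest values
cube_to_params(1.0, 1.0, 1.0)  # opposite corner: largest values
``` The optimizer only ever sees the unit cube; the log-scale mapping for alpha and lambda makes the search treat 0.001 vs 0.01 the same way it treats 0.1 vs 1.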
Also note that the target function is stochastic, i.e. two evaluations on the same point may slightly differ, but it's not a blocker for Bayesian Optimization, though it obviously increases the uncertainty. Is spearmint a good starting point for this task? Any other suggestions for this task? spearmint is a good library, you can definitely work with that. I can also recommend hyperopt. In my own research, I ended up writing my own tiny library, basically for two reasons: I wanted to code exact Bayesian method to use (in particular, I found a portfolio strategy of UCB and PI converged faster than anything else, in my case); plus there is another technique that can save up to 50% of training time called learning curve prediction (the idea is to skip full learning cycle when the optimizer is confident the model doesn't learn as fast as in other areas). I'm not aware of any library that implements this, so I coded it myself, and in the end it paid off. If you're interested, the code is on GitHub.", "best_answers_score":0.8, "library_name":"tensorflow", "question_url":"https:\/\/stackoverflow.com\/questions\/41860817\/hyperparameter-optimization-for-deep-learning-structures-using-bayesian-optimiza", "best_answers_votes":24, "question_length":997, "response_length":3409 }, { "question":"ValueError: Variable rnn\/basic_rnn_cell\/kernel already exists, disallowed. Did you mean to set reuse=True or reuse=tf.AUTO_REUSE in VarScope? Any ideas how can I solve problem shown below? With the information that I found on the web it is associated with problem of reusing tensorflow scope however nothing works. ```python ValueError: Variable rnn\/basic_rnn_cell\/kernel already exists, disallowed. Did you mean to set reuse=True or reuse=tf.AUTO_REUSE in VarScope? 
Originally defined at: File \"\/code\/backend\/management\/commands\/RNN.py\", line 370, in predict states_series, current_state = tf.nn.dynamic_rnn(cell=cell, inputs=batchX_placeholder, dtype=tf.float32) File \"\/code\/backend\/management\/commands\/RNN.py\", line 499, in Command predict(\"string\") File \"\/code\/backend\/management\/commands\/RNN.py\", line 12, in class Command(BaseCommand): ``` I tried for instance something like this ```python with tf.variable_scope('scope'): states_series, current_state = tf.nn.dynamic_rnn(cell=cell, inputs=batchX_placeholder, dtype=tf.float32) ``` and this ```python with tf.variable_scope('scope', reuse = True ): states_series, current_state = tf.nn.dynamic_rnn(cell=cell, inputs=batchX_placeholder, dtype=tf.float32) ``` and this ```python with tf.variable_scope('scope', reuse = tf.AUTO_REUSE ): states_series, current_state = tf.nn.dynamic_rnn(cell=cell, inputs=batchX_placeholder, dtype=tf.float32) ``` Any ideas?", "response":"Does this happen when you run the model for the first time (upon opening a new python console)? If not, you need to clear you computational graph. You can do that by putting this line at the beginning of your script. ``` tf.reset_default_graph() ```", "best_answers_score":0.8, "library_name":"tensorflow", "question_url":"https:\/\/stackoverflow.com\/questions\/47296969\/valueerror-variable-rnn-basic-rnn-cell-kernel-already-exists-disallowed-did-y", "best_answers_votes":83, "question_length":1411, "response_length":249 }, { "question":"Install Cuda without root I know that I can install Cuda with the following: ``` wget http:\/\/developer.download.nvidia.com\/compute\/cuda\/7_0\/Prod\/local_installers\/cuda_7.0.28_linux.run chmod +x cuda_7.0.28_linux.run .\/cuda_7.0.28_linux.run -extract=`pwd`\/nvidia_installers cd nvidia_installers sudo .\/NVIDIA-Linux-x86_64-346.46.run sudo modprobe nvidia sudo .\/cuda-linux64-rel-7.0.28-19326674.run ``` Just wondering if I can install Cuda without root? 
Thanks,", "response":"Update The installation UI for 10.1 changed. The following works: Deselect driver installation (pressing ENTER on it) Change options -> root install path to a non-sudo directory. Press A on the line marked with a + to access advanced options. Deselect create symbolic link, and change the toolkit install path. Now installation should work without root permissions. Thank you very much for the hints in the question! I just want to complete it with an approach that worked for me, also inspired by this gist, which hopefully helps in situations where a valid driver is installed and installing a more recent CUDA on Linux without root permissions is still needed. TL;DR: Here are the steps to install CUDA9+CUDNN7 on Debian and install a pre-compiled version of TensorFlow1.4 on Python2.7 to test that everything works. Everything without root privileges and via terminal. Should also work for other CUDA, CUDNN, TensorFlow and Python versions on other Linux systems too. INSTALLATION Go to NVIDIA's official release website for CUDA (as of Nov. 2017, CUDA9 is out): https:\/\/developer.nvidia.com\/cuda-downloads. Under your Linux distro, select the runfile (local) option. Note that the sudo indication present in the installation instructions is deceiving, since it is possible to run this installer without root permissions. On a server, one easy way is to copy the link of the Download button and, in any location of your home directory, run wget . It will download the file. Run chmod +x to make it executable, and execute it .\/. Accept the EULA, say no to driver installation, and enter a location under your home directory to install the toolkit and a location for the samples. Not asked here but recommended: Download a compatible CUDNN file from the official website (you need to sign in). In my case, I downloaded the cudnn-9.0-linux-x64-v7.tgz, compatible with CUDA9, into the folder. Uncompress it: tar -xzvf .... Optional: compile the samples. cd && make.
There are some very nice examples there and a very good starting point to write some CUDA scripts yourself. (If you did 5.): Copy the required files from into , and grant read permission to the user (not sure if needed): ``` cp -P \/cuda\/include\/cudnn.h \/include\/ cp -P \/cuda\/lib64\/libcudnn* \/lib64 chmod a+r \/include\/cudnn.h \/lib64\/libcudnn* ``` Add the library to your environment. This is typically done by adding the following two lines to your ~\/.bashrc file (in this example, the directory was ~\/cuda9\/): ``` export PATH=\/bin:$PATH export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:\/lib64\/ ``` FOR QUICK TESTING OR TENSORFLOW USERS The quickest way to get a TensorFlow compatible with CUDA9 and CUDNN7 (and a very quick way to test this) is to download a precompiled wheel file and install it with pip install . Most of the versions you need can be found in mind's repo (thanks a lot guys). A minimal test that confirms that CUDNN is also working involves the use of tf.nn.conv2d: ``` import tensorflow as tf x = tf.nn.conv2d(tf.ones([1,1,10,1]), tf.ones([1,5,1,1]), strides=[1, 1, 1, 1], padding='SAME') with tf.Session() as sess: sess.run(x) # this should output a tensor of shape (1,1,10,1) with [3,4,5,5,5,5,5,5,4,3] ``` In my case, the wheel I installed required Intel's MKL library, as explained here. Again, from the terminal and without root privileges, these are the steps I followed to install the library and make TensorFlow find it (reference): git clone https:\/\/github.com\/01org\/mkl-dnn.git cd mkl-dnn\/scripts && .\/prepare_mkl.sh && cd .. mkdir -p build && cd build cmake -D CMAKE_INSTALL_PREFIX:PATH= .. make # this takes a while make doc # do this optionally if you have doxygen make test # also takes a while make install # installs into Add the following to your ~\/.bashrc: export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:\/lib Hope this helps!
Andres", "best_answers_score":0.8, "library_name":"tensorflow", "question_url":"https:\/\/stackoverflow.com\/questions\/39379792\/install-cuda-without-root", "best_answers_votes":69, "question_length":458, "response_length":3803 }, { "question":"Installing TensorFlow on Windows (Python 3.6.x) I'm trying to install TensorFlow on Windows. I tried to install it with pip, but I always get the same error message: ``` ... is not a supported wheel on this platform. ``` I first tried it with Python 3.5.1, now I upgraded to 3.6.0b4, but it makes no difference. Python: ``` Python 3.6.0b4 (default, Nov 22 2016, 05:30:12) [MSC v.1900 64 bit (AMD64)] on win32 ``` pip: ``` pip 9.0.1 from ...\\python\\lib\\site-packages (python 3.6) ``` To be exact, I tried the following two commands: ``` pip install --upgrade https:\/\/storage.googleapis.com\/tensorflow\/windows\/cpu\/tensorflow-0.12.0rc0-cp35-cp35m-win_amd64.whl pip install --upgrade https:\/\/storage.googleapis.com\/tensorflow\/windows\/gpu\/tensorflow_gpu-0.12.0rc0-cp35-cp35m-win_amd64.whl ``` they output the following: ``` > tensorflow-0.12.0rc0-cp35-cp35m-win_amd64.whl is not a supported wheel on this platform. > tensorflow_gpu-0.12.0rc0-cp35-cp35m-win_amd64.whl is not a supported wheel on this platform. ``` Does anyone know how to solve this problem? I'm not sure where I'm making a mistake. Thanks! Edit 1 Btw, I also tried pip install tensorflow and pip install tensorflow-gpu like suggested here. I got the following output: ``` > Could not find a version that satisfies the requirement tensorflow (from versions: ) No matching distribution found for tensorflow > Could not find a version that satisfies the requirement tensorflow-gpu (from versions: ) No matching distribution found for tensorflow-gpu ```", "response":"Update 15.11.2017 It seems that by now it is working like one would expect. Running the following commands using the following pip and python version should work. 
Installing with Python 3.6.x Version Python: 3.6.3 pip: 9.0.1 Installation Commands The following commands are based on the following installation guide here. using cmd ``` C:> pip3 install --upgrade tensorflow \/\/ cpu C:> pip3 install --upgrade tensorflow-gpu \/\/ gpu ``` using Anaconda ``` C:> conda create -n tensorflow python=3.5 C:> activate tensorflow (tensorflow)C:> pip install --ignore-installed --upgrade tensorflow (tensorflow)C:> pip install --ignore-installed --upgrade tensorflow-gpu ``` Additional Information A list of common installation problems can be found here. You can find an example console output of a successful tensorflow cpu installation here. Old response: Okay, to conclude: use version 3.5.2! Neither 3.5.1 nor 3.6.x seems to work at the moment. Versions: Python 3.5.2 pip 8.1.1 .. (python 3.5) Commands: ``` \/\/ cpu C:> pip install --upgrade https:\/\/storage.googleapis.com\/tensorflow\/windows\/cpu\/tensorflow-0.12.0rc0-cp35-cp35m-win_amd64.whl \/\/ gpu C:> pip install --upgrade https:\/\/storage.googleapis.com\/tensorflow\/windows\/gpu\/tensorflow_gpu-0.12.0rc0-cp35-cp35m-win_amd64.whl ```", "best_answers_score":0.8, "library_name":"tensorflow", "question_url":"https:\/\/stackoverflow.com\/questions\/40884668\/installing-tensorflow-on-windows-python-3-6-x", "best_answers_votes":33, "question_length":1511, "response_length":1273 }, { "question":"numpy random choice in Tensorflow Is there an equivalent function to numpy random choice in Tensorflow? In numpy we can get an item randomly from the given list with its weights.
``` np.random.choice([1,2,3,5], 1, p=[0.1, 0, 0.3, 0.6]) ``` This code will select an item from the given list with p weights.", "response":"No, but you can achieve the same result using tf.multinomial: ``` elems = tf.convert_to_tensor([1,2,3,5]) samples = tf.multinomial(tf.log([[1, 0, 0.3, 0.6]]), 1) # note log-prob elems[tf.cast(samples[0][0], tf.int32)].eval() Out: 1 elems[tf.cast(samples[0][0], tf.int32)].eval() Out: 5 ``` The [0][0] part is needed here, as multinomial expects a row of unnormalized log-probabilities for each element of the batch and also has another dimension for the number of samples.", "best_answers_score":0.8, "library_name":"tensorflow", "question_url":"https:\/\/stackoverflow.com\/questions\/41123879\/numpy-random-choice-in-tensorflow", "best_answers_votes":25, "question_length":233, "response_length":465 }, { "question":"Get the bounding box coordinates in the TensorFlow object detection API tutorial I am new to both Python and Tensorflow. I am trying to run the object detection tutorial file from the Tensorflow Object Detection API, but I cannot find where I can get the coordinates of the bounding boxes when objects are detected. Relevant code: ``` # The following processing is only for single image detection_boxes = tf.squeeze(tensor_dict['detection_boxes'], [0]) detection_masks = tf.squeeze(tensor_dict['detection_masks'], [0]) ``` The place where I assume bounding boxes are drawn is like this: ``` # Visualization of the results of detection. vis_util.visualize_boxes_and_labels_on_image_array( image_np, output_dict['detection_boxes'], output_dict['detection_classes'], output_dict['detection_scores'], category_index, instance_masks=output_dict.get('detection_masks'), use_normalized_coordinates=True, line_thickness=8) plt.figure(figsize=IMAGE_SIZE) plt.imshow(image_np) ``` I tried printing output_dict['detection_boxes'] but I am not sure what the numbers mean. There are a lot.
``` array([[ 0.56213236, 0.2780568 , 0.91445708, 0.69120586], [ 0.56261235, 0.86368728, 0.59286624, 0.8893863 ], [ 0.57073039, 0.87096912, 0.61292225, 0.90354401], [ 0.51422435, 0.78449738, 0.53994244, 0.79437423], ...... [ 0.32784131, 0.5461576 , 0.36972913, 0.56903434], [ 0.03005961, 0.02714229, 0.47211722, 0.44683522], [ 0.43143299, 0.09211366, 0.58121657, 0.3509962 ]], dtype=float32) ``` I found answers for similar questions, but I don't have a variable called boxes as they do. How can I get the coordinates?", "response":"I tried printing output_dict['detection_boxes'] but I am not sure what the numbers mean You can check out the code for yourself. visualize_boxes_and_labels_on_image_array is defined here. Note that you are passing use_normalized_coordinates=True. If you trace the function calls, you will see your numbers [ 0.56213236, 0.2780568 , 0.91445708, 0.69120586] etc. are the values [ymin, xmin, ymax, xmax] where the image coordinates: ``` (left, right, top, bottom) = (xmin * im_width, xmax * im_width, ymin * im_height, ymax * im_height) ``` are computed by the function: ``` def draw_bounding_box_on_image(image, ymin, xmin, ymax, xmax, color='red', thickness=4, display_str_list=(), use_normalized_coordinates=True): \"\"\"Adds a bounding box to an image. Bounding box coordinates can be specified in either absolute (pixel) or normalized coordinates by setting the use_normalized_coordinates argument. Each string in display_str_list is displayed on a separate line above the bounding box in black text on a rectangle filled with the input 'color'. If the top of the bounding box extends to the edge of the image, the strings are displayed below the bounding box. Args: image: a PIL.Image object. ymin: ymin of bounding box. xmin: xmin of bounding box. ymax: ymax of bounding box. xmax: xmax of bounding box. color: color to draw bounding box. Default is red. thickness: line thickness. Default value is 4. 
display_str_list: list of strings to display in box (each to be shown on its own line). use_normalized_coordinates: If True (default), treat coordinates ymin, xmin, ymax, xmax as relative to the image. Otherwise treat coordinates as absolute. \"\"\" draw = ImageDraw.Draw(image) im_width, im_height = image.size if use_normalized_coordinates: (left, right, top, bottom) = (xmin * im_width, xmax * im_width, ymin * im_height, ymax * im_height) ```", "best_answers_score":0.8, "library_name":"tensorflow", "question_url":"https:\/\/stackoverflow.com\/questions\/48915003\/get-the-bounding-box-coordinates-in-the-tensorflow-object-detection-api-tutorial", "best_answers_votes":27, "question_length":1594, "response_length":1846 }, { "question":"How does TensorFlow's MultiRnnCell work? Could someone help to explain the inner mechanism of TensorFlow's tf.contrib.rnn.MultiRnnCell? For example, if I wanted to stack up two basic RNN cells into a MultiRnnCell, what would be the input and output of each basic RNN cell? I would like to know the details of how it works.", "response":"Study this blog post as well as the provided implementation. It describes in detail how use MultiRNNCell to stack multiple RNN cells.", "best_answers_score":0.8, "library_name":"tensorflow", "question_url":"https:\/\/stackoverflow.com\/questions\/43408463\/how-does-tensorflows-multirnncell-work", "best_answers_votes":36, "question_length":322, "response_length":133 }, { "question":"Very low GPU usage during training in Tensorflow I am trying to train a simple multi-layer perceptron for a 10-class image classification task, which is a part of the assignment for the Udacity Deep-Learning course. To be more precise, the task is to classify letters rendered from various fonts (the dataset is called notMNIST). The code I ended up with looks fairly simple, but no matter what I always get very low GPU usage during training. I measure load with GPU-Z and it shows just 25-30%. 
Here is my current code: ``` graph = tf.Graph() with graph.as_default(): tf.set_random_seed(52) # dataset definition dataset = Dataset.from_tensor_slices({'x': train_data, 'y': train_labels}) dataset = dataset.shuffle(buffer_size=20000) dataset = dataset.batch(128) iterator = dataset.make_initializable_iterator() sample = iterator.get_next() x = sample['x'] y = sample['y'] # actual computation graph keep_prob = tf.placeholder(tf.float32) is_training = tf.placeholder(tf.bool, name='is_training') fc1 = dense_batch_relu_dropout(x, 1024, is_training, keep_prob, 'fc1') fc2 = dense_batch_relu_dropout(fc1, 300, is_training, keep_prob, 'fc2') fc3 = dense_batch_relu_dropout(fc2, 50, is_training, keep_prob, 'fc3') logits = dense(fc3, NUM_CLASSES, 'logits') with tf.name_scope('accuracy'): accuracy = tf.reduce_mean( tf.cast(tf.equal(tf.argmax(y, 1), tf.argmax(logits, 1)), tf.float32), ) accuracy_percent = 100 * accuracy with tf.name_scope('loss'): loss = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(logits=logits, labels=y)) update_ops = tf.get_collection(tf.GraphKeys.UPDATE_OPS) with tf.control_dependencies(update_ops): # ensures that we execute the update_ops before performing the train_op # needed for batch normalization (apparently) train_op = tf.train.AdamOptimizer(learning_rate=1e-3, epsilon=1e-3).minimize(loss) with tf.Session(graph=graph) as sess: tf.global_variables_initializer().run() step = 0 epoch = 0 while True: sess.run(iterator.initializer, feed_dict={}) while True: step += 1 try: sess.run(train_op, feed_dict={keep_prob: 0.5, is_training: True}) except tf.errors.OutOfRangeError: logger.info('End of epoch #%d', epoch) break # end of epoch train_l, train_ac = sess.run( [loss, accuracy_percent], feed_dict={x: train_data, y: train_labels, keep_prob: 1, is_training: False}, ) test_l, test_ac = sess.run( [loss, accuracy_percent], feed_dict={x: test_data, y: test_labels, keep_prob: 1, is_training: False}, ) logger.info('Train loss: %f, train accuracy: %.2f%%', 
train_l, train_ac) logger.info('Test loss: %f, test accuracy: %.2f%%', test_l, test_ac) epoch += 1 ``` Here's what I tried so far: I changed the input pipeline from simple feed_dict to tensorflow.contrib.data.Dataset. As far as I understood, it is supposed to take care of the efficiency of the input, e.g. load data in a separate thread. So there should not be any bottleneck associated with the input. I collected traces as suggested here: https:\/\/github.com\/tensorflow\/tensorflow\/issues\/1824#issuecomment-225754659 However, these traces didn't really show anything interesting. >90% of the train step is matmul operations. Changed batch size. When I change it from 128 to 512 the load increases from ~30% to ~38%, when I increase it further to 2048, the load goes to ~45%. I have 6Gb GPU memory and dataset is single channel 28x28 images. Am I really supposed to use such a big batch size? Should I increase it further? Generally, should I worry about the low load, is it really a sign that I am training inefficiently? Here's the GPU-Z screenshots with 128 images in the batch. You can see low load with occasional spikes to 100% when I measure accuracy on the entire dataset after each epoch.", "response":"MNIST size networks are tiny and it's hard to achieve high GPU (or CPU) efficiency for them, I think 30% is not unusual for your application. You will get higher computational efficiency with larger batch size, meaning you can process more examples per second, but you will also get lower statistical efficiency, meaning you need to process more examples total to get to target accuracy. So it's a trade-off. For tiny character models like yours, the statistical efficiency drops off very quickly after a 100, so it's probably not worth trying to grow the batch size for training. 
For inference, you should use the largest batch size you can.", "best_answers_score":0.8, "library_name":"tensorflow", "question_url":"https:\/\/stackoverflow.com\/questions\/46146757\/very-low-gpu-usage-during-training-in-tensorflow", "best_answers_votes":31, "question_length":3693, "response_length":642 }, { "question":"What is the difference between [], [None], None and () for the shape of a placeholder? I have seen pieces of code using either [], [None], None or () as the shape for a placeholder, that is ``` x = tf.placeholder(..., shape=[], ...) y = tf.placeholder(..., shape=[None], ...) z = tf.placeholder(..., shape=None, ...) w = tf.placeholder(..., shape=(), ...) ``` What's the difference between these?", "response":"TensorFlow uses arrays rather than tuples. It converts tuples to arrays. Therefore [] and () are equivalent. Now, consider this code example: ``` x = tf.placeholder(dtype=tf.int32, shape=[], name=\"foo1\") y = tf.placeholder(dtype=tf.int32, shape=[None], name=\"foo2\") z = tf.placeholder(dtype=tf.int32, shape=None, name=\"foo3\") val1 = np.array((1, 2, 3)) val2 = 45 with tf.Session() as sess: sess.run(tf.global_variables_initializer()) #print(sess.run(x, feed_dict = {x: val1})) # Fails print(sess.run(y, feed_dict = {y: val1})) print(sess.run(z, feed_dict = {z: val1})) print(sess.run(x, feed_dict = {x: val2})) #print(sess.run(y, feed_dict = {y: val2})) # Fails print(sess.run(z, feed_dict = {z: val2})) ``` As can be seen, placeholder with [] shape takes a single scalar value directly. 
Placeholder with [None] shape takes a 1-dimensional array and placeholder with None shape can take in any value while computation takes place.", "best_answers_score":0.8, "library_name":"tensorflow", "question_url":"https:\/\/stackoverflow.com\/questions\/46940857\/what-is-the-difference-between-none-none-and-for-the-shape-of-a-placeh", "best_answers_votes":29, "question_length":396, "response_length":930 }, { "question":"Tensorflow: Can't understand ctc_beam_search_decoder() output sequence I am using Tensorflow's tf.nn.ctc_beam_search_decoder() to decode the output of a RNN doing some many-to-many mapping (i.e., multiple softmax outputs for each network cell). A simplified version of the network's output and the Beam search decoder is: ``` import numpy as np import tensorflow as tf batch_size = 4 sequence_max_len = 5 num_classes = 3 y_pred = tf.placeholder(tf.float32, shape=(batch_size, sequence_max_len, num_classes)) y_pred_transposed = tf.transpose(y_pred, perm=[1, 0, 2]) # TF expects dimensions [max_time, batch_size, num_classes] logits = tf.log(y_pred_transposed) sequence_lengths = tf.to_int32(tf.fill([batch_size], sequence_max_len)) decoded, log_probabilities = tf.nn.ctc_beam_search_decoder(logits, sequence_length=sequence_lengths, beam_width=3, merge_repeated=False, top_paths=1) decoded = decoded[0] decoded_paths = tf.sparse_tensor_to_dense(decoded) # Shape: [batch_size, max_sequence_len] with tf.Session() as session: tf.global_variables_initializer().run() softmax_outputs = np.array([[[0.1, 0.1, 0.8], [0.8, 0.1, 0.1], [0.8, 0.1, 0.1], [0.8, 0.1, 0.1], [0.8, 0.1, 0.1]], [[0.1, 0.2, 0.7], [0.1, 0.2, 0.7], [0.1, 0.2, 0.7], [0.1, 0.2, 0.7], [0.1, 0.2, 0.7]], [[0.1, 0.7, 0.2], [0.1, 0.2, 0.7], [0.1, 0.2, 0.7], [0.1, 0.2, 0.7], [0.1, 0.2, 0.7]], [[0.1, 0.2, 0.7], [0.1, 0.2, 0.7], [0.1, 0.2, 0.7], [0.1, 0.2, 0.7], [0.1, 0.2, 0.7]]]) decoded_paths = session.run(decoded_paths, feed_dict = {y_pred: softmax_outputs}) print(decoded_paths) ``` The output in this case 
is: ``` [[0] [1] [1] [1]] ``` My understanding is that the output tensor should be of dimensions [batch_size, max_sequence_len], with each row containing the indices of the relevant classes in the found path. In this case I would expect the output to be similar to: ``` [[2, 0, 0, 0, 0], [2, 2, 2, 2, 2], [1, 2, 2, 2, 2], [2, 2, 2, 2, 2]] ``` What am I not understanding about how ctc_beam_search_decoder works?", "response":"As indicated in tf.nn.ctc_beam_search_decoder documentation, the shape of the output is not [batch_size, max_sequence_len]. Instead, it is ``` [batch_size, max_decoded_length[j]] ``` (with j=0 in your case). Based on the beginning of section 2 of this paper (which is cited in the github repository), max_decoded_length[0] is bounded from above by max_sequence_len, but they are not necessarily equal. The relevant citation is: Let S be a set of training examples drawn from a fixed distribution D_{XxZ}. The input space X = (R^m) is the set of all sequences of m dimensional real valued vectors. The target space Z = L* is the set of all sequences over the (finite) alphabet L of labels. In general, we refer to elements of L* as label sequences or labellings. Each example in S consists of a pair of sequences (x, z). The target sequence z = (z1, z2, ..., zU) is at most as long as the input sequence x = (x1, x2, ..., xT ), i.e. U<=T. Since the input and target sequences are not generally the same length, there is no a priori way of aligning them. In fact, max_decoded_length[0] depends on the specific matrix softmax_outputs. In particular, two such matrices with exactly the same dimensions can result in different max_decoded_length[0]. 
For example, if you replace the row ``` softmax_outputs = np.array([[[0.1, 0.1, 0.8], [0.8, 0.1, 0.1], [0.8, 0.1, 0.1], [0.8, 0.1, 0.1], [0.8, 0.1, 0.1]], [[0.1, 0.2, 0.7], [0.1, 0.2, 0.7], [0.1, 0.2, 0.7], [0.1, 0.2, 0.7], [0.1, 0.2, 0.7]], [[0.1, 0.7, 0.2], [0.1, 0.2, 0.7], [0.1, 0.2, 0.7], [0.1, 0.2, 0.7], [0.1, 0.2, 0.7]], [[0.1, 0.2, 0.7], [0.1, 0.2, 0.7], [0.1, 0.2, 0.7], [0.1, 0.2, 0.7], [0.1, 0.2, 0.7]]]) ``` with the rows ``` np.random.seed(7) r=np.random.randint(0,100,size=(4,5,3)) softmax_outputs=r\/np.sum(r,2).reshape(4,5,1) ``` you'll get the output ``` [[1 0 1] [1 0 1] [1 0 0] [1 0 0]] ``` (in the above examples, softmax_outputs consists of logits and it is exactly of the same dimensions as the matrix you provided). On the other hand, changing the seed to np.random.seed(50) gives the output ``` [[1 0] [1 0] [1 0] [0 1]] ``` P.S. Regarding the last part of your question: In this case I would expect the output to be similar to: ``` [[2, 0, 0, 0, 0], [2, 2, 2, 2, 2], [1, 2, 2, 2, 2], [2, 2, 2, 2, 2]] ``` Note that, based on the documentation, num_classes actually represents num_labels + 1. Specifically: The inputs Tensor's innermost dimension size, num_classes, represents num_labels + 1 classes, where num_labels is the number of true labels, and the largest value (num_classes - 1) is reserved for the blank label. For example, for a vocabulary containing 3 labels [a, b, c], num_classes = 4 and the labels indexing is {a: 0, b: 1, c: 2, blank: 3}. So the true labels in your case are 0 and 1, and 2 is reserved for the blank label. The blank label represents the situation of observing no label (section 3.1 here): A CTC network has a softmax output layer (Bridle, 1990) with one more unit than there are labels in L. The activations of the first |L| units are interpreted as the probabilities of observing the corresponding labels at particular times. The activation of the extra unit is the probability of observing a \u2018blank\u2019, or no label. 
Together, these outputs define the probabilities of all possible ways of aligning all possible label sequences with the input sequence.", "best_answers_score":0.8, "library_name":"tensorflow", "question_url":"https:\/\/stackoverflow.com\/questions\/45482813\/tensorflow-cant-understand-ctc-beam-search-decoder-output-sequence", "best_answers_votes":27, "question_length":1983, "response_length":3353 }, { "question":"Building a mutlivariate, multi-task LSTM with Keras Preamble I am currently working on a Machine Learning problem where we are tasked with using past data on product sales in order to predict sales volumes going forward (so that shops can better plan their stocks). We essentially have time series data, where for each and every product we know how many units were sold on which days. We also have information like what the weather was like, whether there was a public holiday, if any of the products were on sales etc. We've been able to model this with some success using an MLP with dense layers, and just using a sliding window approach to include sales volumes from the surrounding days. However, we believe we'll be able to get much better results with a time-series approach such as an LSTM. Data The data we have essentially is as follows: (EDIT: for clarity the \"Time\" column in the picture above is not correct. We have inputs once per day, not once per month. But otherwise the structure is the same!) So the X data is of shape: ``` (numProducts, numTimesteps, numFeatures) = (50 products, 1096 days, 90 features) ``` And the Y data is of shape: ``` (numProducts, numTimesteps, numTargets) = (50 products, 1096 days, 3 binary targets) ``` So we have data for three years (2014, 2015, 2016) and want to train on this in order to make predictions for 2017. (That's of course not 100% true, since we actually have data up to Oct 2017, but let's just ignore that for now) Problem I would like to build an LSTM in Keras that allows me to make these predictions. 
There are a few places where I am getting stuck though. So I have six concrete questions (I know one is supposed to try to limit a Stackoverflow post to one question, but these are all intertwined). Firstly, how would I slice up my data for the batches? Since I have three full years, does it make sense to simply push through three batches, each time of size one year? Or does it make more sense to make smaller batches (say 30 days) and also to use sliding windows? I.e. instead of 36 batches of 30 days each, I use 36 * 6 batches of 30 days each, each time sliding with 5 days? Or is this not really the way LSTMs should be used? (Note that there is quite a bit of seasonality in the data, so I need to catch that kind of long-term trend as well). Secondly, does it make sense to use return_sequences=True here? In other words, I keep my Y data as is (50, 1096, 3) so that (as far as I've understood it) there is a prediction at every time step for which a loss can be calculated against the target data? Or would I be better off with return_sequences=False, so that only the final value of each batch is used to evaluate the loss (i.e. if using yearly batches, then in 2016 for product 1, we evaluate against the Dec 2016 value of (1,1,1)). Thirdly, how should I deal with the 50 different products? They are different, but still strongly correlated and we've seen with other approaches (for example an MLP with simple time-windows) that the results are better when all products are considered in the same model. Some ideas that are currently on the table are: change the target variable to be not just 3 variables, but 3 * 50 = 150; i.e. for each product there are three targets, all of which are trained simultaneously. split up the results after the LSTM layer into 50 dense networks, which take as input the outputs from the LSTM, plus some features that are specific to each product - i.e. we get a multi-task network with 50 loss functions, which we then optimise together.
Would that be crazy? consider a product as a single observation, and include product specific features already at the LSTM layer. Use just this one layer followed by an ouput layer of size 3 (for the three targets). Push through each product in a separate batch. Fourthly, how do I deal with validation data? Normally I would just keep out a randomly selected sample to validate against, but here we need to keep the time ordering in place. So I guess the best is to just keep a few months aside? Fifthly, and this is the part that is probably the most unclear to me - how can I use the actual results to perform predictions? Let's say I used return_sequences=False and I trained on all three years in three batches (each time up to Nov) with the goal of training the model to predict the next value (Dec 2014, Dec 2015, Dec 2016). If I want to use these results in 2017, how does this actually work? If I understood it correctly, the only thing I can do in this instance is to then feed the model all the data points for Jan to Nov 2017 and it will give me back a prediction for Dec 2017. Is that correct? However, if I were to use return_sequences=True, then trained on all data up to Dec 2016, would I then be able to get a prediction for Jan 2017 just by giving the model the features observed at Jan 2017? Or do I need to also give it the 12 months before Jan 2017? What about Feb 2017, do I in addition need to give the value for 2017, plus a further 11 months before that? (If it sounds like I'm confused, it's because I am!) Lastly, depending on what structure I should use, how do I do this in Keras? What I have in mind at the moment is something along the following lines: (though this would be for only one product, so doesn't solve having all products in the same model): Keras code ``` trainX = trainingDataReshaped #Data for Product 1, Jan 2014 to Dec 2016 trainY = trainingTargetReshaped validX = validDataReshaped #Data for Product 1, for ??? Maybe for a few months? 
validY = validTargetReshaped numSequences = trainX.shape[0] numTimeSteps = trainX.shape[1] numFeatures = trainX.shape[2] numTargets = trainY.shape[2] model = Sequential() model.add(LSTM(100, input_shape=(None, numFeatures), return_sequences=True)) model.add(Dense(numTargets, activation=\"softmax\")) model.compile(loss=stackEntry.params[\"loss\"], optimizer=\"adam\", metrics=['accuracy']) history = model.fit(trainX, trainY, batch_size=30, epochs=20, verbose=1, validation_data=(validX, validY)) predictX = predictionDataReshaped #Data for Product 1, Jan 2017 to Dec 2017 prediction=model.predict(predictX) ```", "response":"So: Firstly, how would I slice up my data for the batches? Since I have three full years, does it make sense to simply push through three batches, each time of size one year? Or does it make more sense to make smaller batches (say 30 days) and also to using sliding windows? I.e. instead of 36 batches of 30 days each, I use 36 * 6 batches of 30 days each, each time sliding with 5 days? Or is this not really the way LSTMs should be used? (Note that there is quite a bit of seasonality in the data, to I need to catch that kind of long-term trend as well). Honestly - modeling such data is really hard. First of all - I wouldn't advise you to use LSTMs, as they are designed to capture a slightly different kind of data (e.g. NLP or speech, where it's really important to model long-term dependencies - not seasonality) and they need a lot of data in order to learn. I would rather advise you to use either GRU or SimpleRNN, which are way easier to learn and should be better for your task. When it comes to batching - I would definitely advise you to use a fixed-window technique, as it will end up producing way more data points than feeding a whole year or a whole month. Try setting the number of days as a meta-parameter, optimized by trying different values during training and choosing the most suitable one.
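The fixed-window slicing of one product's series can be sketched in plain numpy; the window length and stride below are hypothetical meta-parameter values to tune, not values from the question:

```python
import numpy as np

def make_windows(series, window=30, stride=5):
    """Slice a (timesteps, features) array into overlapping windows.

    Returns an array of shape (num_windows, window, features).
    """
    starts = range(0, series.shape[0] - window + 1, stride)
    return np.stack([series[s:s + window] for s in starts])

# 1096 days x 90 features for a single product, as in the question
series = np.random.rand(1096, 90)
windows = make_windows(series, window=30, stride=5)
print(windows.shape)  # (214, 30, 90)
```

Each window then becomes one training sample, which is where the extra data points come from.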
When it comes to seasonality - of course, this is the case, but: You might have way too few data points and years collected to provide a good estimate of seasonal trends, Using any kind of recurrent neural network to capture such seasonalities is a really bad idea. What I advise you to do instead is: try adding seasonal features (e.g. the month variable, the day variable, a variable which is set to true if there is a certain holiday that day, or how many days there are until the next important holiday - this is an area where you could be really creative) Use aggregated last-year data as features - you could, for example, feed last year's results or aggregations of them, like a running average of the last year's results, maximum, minimum, etc. Secondly, does it make sense to use return_sequences=True here? In other words, I keep my Y data as is (50, 1096, 3) so that (as far as I've understood it) there is a prediction at every time step for which a loss can be calculated against the target data? Or would I be better off with return_sequences=False, so that only the final value of each batch is used to evaluate the loss (i.e. if using yearly batches, then in 2016 for product 1, we evaluate against the Dec 2016 value of (1,1,1)). Using return_sequences=True might be useful, but only in the following cases: When a given LSTM (or another recurrent layer) will be followed by yet another recurrent layer. In a scenario where you feed a shifted original series as the output, whereby you are simultaneously learning a model over different time windows, etc. The approach described in the second point might be interesting, but keep in mind that it might be somewhat hard to implement, as you will need to rewrite your model in order to obtain a production result. What might also be harder is that you'll need to test your model against many types of time instabilities - and such an approach might make this totally unfeasible. Thirdly how should I deal with the 50 different products?
They are different, but still strongly correlated and we've seen with other approaches (for example an MLP with simple time-windows) that the results are better when all products are considered in the same model. Some ideas that are currently on the table are: change the target variable to be not just 3 variables, but 3 * 50 = 150; i.e. for each product there are three targets, all of which are trained simultaneously. split up the results after the LSTM layer into 50 dense networks, which take as input the ouputs from the LSTM, plus some features that are specific to each product - i.e. we get a multi-task network with 50 loss functions, which we then optimise together. Would that be crazy? consider a product as a single observation, and include product-specific features already at the LSTM layer. Use just this one layer followed by an ouput layer of size 3 (for the three targets). Push through each product in a separate batch. I would definitely go for the first choice, but before providing a detailed explanation I will discuss the disadvantages of the 2nd and 3rd ones: In the second approach: it wouldn't be mad, but you will lose a lot of correlations between product targets, In the third approach: you'll lose a lot of interesting patterns occurring in dependencies between different time series. Before getting to my choice - let's discuss yet another issue - redundancies in your dataset. I guess that you have two kinds of features: product-specific ones (let's say there are m of them) and general features (let's say there are n of them). Now you have a table of size (timesteps, m + n, products). I would transform it into a table of shape (timesteps, products * m + n), as the general features are the same for all products. This will save you a lot of memory and also make it feasible to feed to a recurrent network (keep in mind that recurrent layers in keras have only one feature dimension - whereas you had two - product and feature ones).
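The suggested flattening of the feature table can be sketched in numpy; the feature counts m and n below are made-up values for illustration:

```python
import numpy as np

timesteps, products, m, n = 1096, 50, 4, 6           # m, n are hypothetical
prod_feats = np.random.rand(timesteps, products, m)  # product-specific features
gen_feats = np.random.rand(timesteps, n)             # shared by all products

# Flatten the product dimension and append the shared features once,
# giving a single feature axis that a Keras recurrent layer can consume.
flat = np.concatenate(
    [prod_feats.reshape(timesteps, products * m), gen_feats], axis=1)
print(flat.shape)  # (1096, 206), i.e. (timesteps, products * m + n)
```

The shared features are stored once per timestep instead of once per product per timestep, which is where the memory saving comes from.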
So why is the first approach the best in my opinion? Because it takes advantage of many interesting dependencies in the data. Of course - this might harm the training process - but there is an easy trick to overcome this: dimensionality reduction. You could e.g. train PCA on your 150-dimensional vector and reduce its size to a much smaller one - so that your dependencies are modeled by PCA and your output has a much more feasible size. Fourthly, how do I deal with validation data? Normally I would just keep out a randomly selected sample to validate against, but here we need to keep the time ordering in place. So I guess the best is to just keep a few months aside? This is a really important question. From my experience - you need to test your solution against many types of instabilities in order to be sure that it works fine. So a few rules which you should keep in mind: There should be no overlap between your training sequences and test sequences. If there were, you would have valid values from the test set fed to the model while training, You need to test the model's time stability against many kinds of time dependencies. The last point might be a little bit vague - so to provide you some examples: year stability - validate your model by training it using each possible combination of two years and test it on a held-out one (e.g. 2015, 2016 against 2017, 2015, 2017 against 2016, etc.) - this will show you how year changes affect your model, future prediction stability - train your model on a subset of weeks\/months\/years and test it using the following week\/month\/year result (e.g. train it on January 2015, January 2016 and January 2017 and test it using February 2015, February 2016, February 2017 data, etc.) month stability - train the model while keeping a certain month in the test set. Of course - you could try other hold-outs as well.
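The year-stability hold-out amounts to splitting by a time label rather than at random; a minimal numpy sketch (the day-to-year mapping is a rough approximation for illustration):

```python
import numpy as np

days = np.arange(1096)                      # day index over 2014-2016
years = 2014 + np.minimum(days // 365, 2)   # rough year label per day

for test_year in (2014, 2015, 2016):
    train_idx = days[years != test_year]
    test_idx = days[years == test_year]
    # the first rule above: no overlap between training and test sequences
    assert np.intersect1d(train_idx, test_idx).size == 0
    print(test_year, train_idx.size, test_idx.size)
```

The same pattern works for the month-stability and future-prediction splits: only the label you split on changes.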
Fifthly, and this is the part that is probably the most unclear to me - how can I use the actual results to perform predictions? Let's say I used return_sequences=False and I trained on all three years in three batches (each time up to Nov) with the goal of training the model to predict the next value (Dec 2014, Dec 2015, Dec 2016). If I want to use these results in 2017, how does this actually work? If I understood it correctly, the only thing I can do in this instance is to then feed the model all the data points for Jan to Nov 2017 and it will give me back a prediction for Dec 2017. Is that correct? However, if I were to use return_sequences=True, then trained on all data up to Dec 2016, would I then be able to get a prediction for Jan 2017 just by giving the model the features observed at Jan 2017? Or do I need to also give it the 12 months before Jan 2017? What about Feb 2017, do I in addition need to give the value for 2017, plus a further 11 months before that? (If it sounds like I'm confused, it's because I am!) This depends on how you've built your model: if you used return_sequences=True, you need to rewrite it to have return_sequences=False, or just take the output and consider only the last step of the result, if you used a fixed window - then you need to just feed a window before the prediction to the model, if you used a varying length - you could feed any timesteps preceding your prediction period that you want (but I advise you to feed at least the 7 preceding days). Lastly, depending on what structure I should use, how do I do this in Keras?
What I have in mind at the moment is something along the following lines: (though this would be for only one product, so doesn't solve having all products in the same model) Here, more info on what kind of model you've chosen is needed.", "best_answers_score":0.8, "library_name":"tensorflow", "question_url":"https:\/\/stackoverflow.com\/questions\/46947842\/building-a-mutlivariate-multi-task-lstm-with-keras", "best_answers_votes":19, "question_length":6142, "response_length":8974 }, { "question":"TensorFlow\/Keras multi-threaded model fitting I'm attempting to train multiple keras models with different parameter values using multiple threads (and the tensorflow backend). I've seen a few examples of using the same model within multiple threads, but in this particular case, I run into various errors regarding conflicting graphs, etc. Here's a simple example of what I'd like to be able to do: ```py from concurrent.futures import ThreadPoolExecutor import numpy as np import tensorflow as tf from keras import backend as K from keras.layers import Dense from keras.models import Sequential sess = tf.Session() def example_model(size): model = Sequential() model.add(Dense(size, input_shape=(5,))) model.add(Dense(1)) model.compile(optimizer='sgd', loss='mse') return model if __name__ == '__main__': K.set_session(sess) X = np.random.random((10, 5)) y = np.random.random((10, 1)) models = [example_model(i) for i in range(5, 10)] e = ThreadPoolExecutor(4) res_list = [e.submit(model.fit, X, y) for model in models] for res in res_list: print(res.result()) ``` The resulting error is ValueError: Tensor(\"Variable:0\", shape=(5, 5), dtype=float32_ref) must be from the same graph as Tensor(\"Variable_2\/read:0\", shape=(), dtype=float32). I've also tried initializing the models within the threads which gives a similar failure. Any thoughts on the best way to go about this?
I'm not at all attached to this exact structure, but I'd prefer to be able to use multiple threads rather than processes so all the models are trained within the same GPU memory allocation.", "response":"Tensorflow Graphs are not threadsafe (see https:\/\/www.tensorflow.org\/api_docs\/python\/tf\/Graph) and when you create a new Tensorflow Session, it by default uses the default graph. You can get around this by creating a new session with a new graph in your parallelized function and constructing your keras model there. Here is some code that creates and fits a model on each available gpu in parallel: ```py import concurrent.futures import numpy as np import keras.backend as K from keras.layers import Dense from keras.models import Sequential import tensorflow as tf from tensorflow.python.client import device_lib def get_available_gpus(): local_device_protos = device_lib.list_local_devices() return [x.name for x in local_device_protos if x.device_type == 'GPU'] xdata = np.random.randn(100, 8) ytrue = np.random.randint(0, 2, 100) def fit(gpu): with tf.Session(graph=tf.Graph()) as sess: K.set_session(sess) with tf.device(gpu): model = Sequential() model.add(Dense(12, input_dim=8, activation='relu')) model.add(Dense(8, activation='relu')) model.add(Dense(1, activation='sigmoid')) model.compile(loss='binary_crossentropy', optimizer='adam') model.fit(xdata, ytrue, verbose=0) return model.evaluate(xdata, ytrue, verbose=0) gpus = get_available_gpus() with concurrent.futures.ThreadPoolExecutor(len(gpus)) as executor: results = [x for x in executor.map(fit, gpus)] print('results: ', results) ```", "best_answers_score":0.8, "library_name":"tensorflow", "question_url":"https:\/\/stackoverflow.com\/questions\/42322698\/tensorflow-keras-multi-threaded-model-fitting", "best_answers_votes":23, "question_length":1568, "response_length":1404 }, { "question":"NotImplementedError: Cannot convert a symbolic Tensor (lstm_2\/strided_slice:0) to a numpy array. 
Tensorflow version 2.3.1, numpy version 1.20. Below is the code: ``` # define model model = Sequential() model.add(LSTM(50, activation='relu', input_shape=(n_steps, n_features))) model.add(Dense(1)) model.compile(optimizer='adam', loss='mse') ``` We got NotImplementedError: Cannot convert a symbolic Tensor (lstm_2\/strided_slice:0) to a numpy array. This error may indicate that you're trying to pass a Tensor to a NumPy call, which is not supported. It seems to me a crazy error!", "response":"I solved it by downgrading numpy to 1.18.5: ``` pip install -U numpy==1.18.5 ```", "best_answers_score":0.8, "library_name":"tensorflow", "question_url":"https:\/\/stackoverflow.com\/questions\/66207609\/notimplementederror-cannot-convert-a-symbolic-tensor-lstm-2-strided-slice0-t", "best_answers_votes":47, "question_length":573, "response_length":76 }, { "question":"Is sparse tensor multiplication implemented in TensorFlow? Multiplication of sparse tensors with themselves or with dense tensors does not seem to work in TensorFlow. The following example ```py from __future__ import print_function import tensorflow as tf x = tf.constant([[1.0,2.0], [3.0,4.0]]) y = tf.SparseTensor(indices=[[0,0],[1,1]], values=[1.0,1.0], shape=[2,2]) z = tf.matmul(x,y) sess = tf.Session() sess.run(tf.initialize_all_variables()) print(sess.run([x, y, z])) ``` fails with the error message ```py TypeError: Input 'b' of 'MatMul' Op has type string that does not match type float32 of argument 'a' ``` Both tensors have values of type float32 as seen by evaluating them without the multiplication op. Multiplication of y with itself returns a similar error message. Multiplication of x with itself works fine.", "response":"General-purpose multiplication for tf.SparseTensor is not currently implemented in TensorFlow.
However, there are four partial solutions, and the right one to choose will depend on the characteristics of your data: If you have a tf.SparseTensor and a tf.Tensor, you can use tf.sparse_tensor_dense_matmul() to multiply them. This is more efficient than the next approach if one of the tensors is too large to fit in memory when densified: the documentation has more guidance about how to decide between these two methods. Note that it accepts a tf.SparseTensor as the first argument, so to solve your exact problem you will need to use the adjoint_a and adjoint_b arguments, and transpose the result. If you have two sparse tensors and need to multiply them, the simplest (if not the most performant) way is to convert them to dense and use tf.matmul: ``` a = tf.SparseTensor(...) b = tf.SparseTensor(...) c = tf.matmul(tf.sparse_tensor_to_dense(a, 0.0), tf.sparse_tensor_to_dense(b, 0.0), a_is_sparse=True, b_is_sparse=True) ``` Note that the optional a_is_sparse and b_is_sparse arguments mean that \"a (or b) has a dense representation but a large number of its entries are zero\", which triggers the use of a different multiplication algorithm. For the special case of sparse vector by (potentially large and sharded) dense matrix multiplication, where the values in the vector are 0 or 1, the tf.nn.embedding_lookup operator may be more appropriate. This tutorial discusses when you might use embeddings and how to invoke the operator in more detail. For the special case of sparse matrix by (potentially large and sharded) dense matrix, tf.nn.embedding_lookup_sparse() may be appropriate.
This function accepts one or two tf.SparseTensor objects, with sp_ids representing the non-zero values, and the optional sp_weights representing their values (which otherwise default to one).", "best_answers_score":0.8, "library_name":"tensorflow", "question_url":"https:\/\/stackoverflow.com\/questions\/34030140\/is-sparse-tensor-multiplication-implemented-in-tensorflow", "best_answers_votes":39, "question_length":827, "response_length":1882 }, { "question":"What does the function control_dependencies do? I would like to have an example illustrating the use of the function tf.control_dependencies. For example, I want to create two tensors X and Y and if they are equal do or print something. ``` import tensorflow as tf session = tf.Session() X = tf.constant(5) Y = tf.constant(50) with tf.control_dependencies([tf.assert_equal(X, Y)]): print('X and Y are equal!') ``` In the code above, X is clearly not equal to Y. What is tf.control_dependencies doing in this case?", "response":"control_dependencies is not a conditional. It is a mechanism to add dependencies to whatever ops you create in the with block. More specifically, what you specify in the argument to control_dependencies is ensured to be evaluated before anything you define in the with block. In your example, you don't add any (TensorFlow) operations in the with block, so the block does nothing. This answer has an example of how to use control_dependencies, where it is used to make sure the assignments happen before the batchnorm operations are evaluated.", "best_answers_score":0.8, "library_name":"tensorflow", "question_url":"https:\/\/stackoverflow.com\/questions\/42095625\/what-does-the-function-control-dependencies-do", "best_answers_votes":51, "question_length":513, "response_length":543 }, { "question":"Machine Learning (tensorflow \/ sklearn) in Django? I have a django form, which is collecting user response. I also have a tensorflow sentences classification model. 
What is the best\/standard way to put these two together? Details: the tensorflow model was trained on the Movie Review data from Rotten Tomatoes. Every time a new row is made in my response model, I want the tensorflow code to classify it (+ or -). Basically I have a django project directory and two .py files for classification. Before going ahead myself, I wanted to know what is the standard way to implement machine learning algorithms in a web app. It'd be awesome if you could suggest a tutorial or a repo. Thank you !", "response":"Asynchronous processing If you don't need the classification result from the ML code to be passed immediately to the user (e.g. as a response to the same POST request that submitted it), then you can always queue the classification job to be run in the background, or even on a different server with more CPU\/memory resources (e.g. with django-background-tasks or Celery). A queued task would be, for example, to populate the field UserResponse.class_name (positive, negative) on the database rows that have that field blank (not yet classified). Real time notification If the ML code is slow and you want to return that result to the user as soon as it is available, you can use the asynchronous approach described above and pair it with a real-time notification to the browser (e.g. with socket.io); this can be triggered from the queued task. This becomes necessary if the ML execution time is so long that it might time-out the HTTP request in the synchronous approach described below.
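The asynchronous pattern can be sketched with nothing but the standard library; the worker below is a hypothetical stand-in for the real classifier (in production, django-background-tasks or Celery would manage the queue and persistence robustly):

```python
import queue
import threading

jobs = queue.Queue()
results = {}

def classifier_worker():
    # hypothetical stand-in for the ML model; real code would call
    # something like model.predict on the stored response text
    while True:
        response_id, text = jobs.get()
        results[response_id] = "positive" if "good" in text else "negative"
        jobs.task_done()

threading.Thread(target=classifier_worker, daemon=True).start()

# the request handler only enqueues and returns immediately
jobs.put((1, "good movie"))
jobs.put((2, "terrible plot"))
jobs.join()       # wait until the worker has classified everything
print(results)    # {1: 'positive', 2: 'negative'}
```

The key design point is that the HTTP handler only enqueues; the slow classification happens off the request path.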
Synchronous processing, if ML code is not CPU intensive (fast enough) If you need that classification result returned immediately, and the ML classification is fast enough *, you can do so within the HTTP request-response cycle (the POST request returns after the ML code is done, synchronously). *Fast enough here means it wouldn't time-out the HTTP request\/response, and the user wouldn't lose patience.", "best_answers_score":0.8, "library_name":"tensorflow", "question_url":"https:\/\/stackoverflow.com\/questions\/37374454\/machine-learning-tensorflow-sklearn-in-django", "best_answers_votes":32, "question_length":688, "response_length":1363 }, { "question":"Choosing from different cost function and activation function of a neural network Recently I started toying with neural networks. I was trying to implement an AND gate with Tensorflow. I am having trouble understanding when to use different cost and activation functions. This is a basic neural network with only input and output layers, no hidden layers. First I tried to implement it in this way. As you can see this is a poor implementation, but I think it gets the job done, at least in some way. So, I tried only the real outputs, no one-hot true outputs. For activation functions, I used a sigmoid function and for the cost function I used the squared error cost function (I think it's called that, correct me if I'm wrong). I've tried using ReLU and Softmax as activation functions (with the same cost function) and it doesn't work. I figured out why they don't work. I also tried the sigmoid function with the Cross Entropy cost function; it also doesn't work.
``` import tensorflow as tf import numpy train_X = numpy.asarray([[0,0],[0,1],[1,0],[1,1]]) train_Y = numpy.asarray([[0],[0],[0],[1]]) x = tf.placeholder(\"float\",[None, 2]) y = tf.placeholder(\"float\",[None, 1]) W = tf.Variable(tf.zeros([2, 1])) b = tf.Variable(tf.zeros([1, 1])) activation = tf.nn.sigmoid(tf.matmul(x, W)+b) cost = tf.reduce_sum(tf.square(activation - y))\/4 optimizer = tf.train.GradientDescentOptimizer(.1).minimize(cost) init = tf.initialize_all_variables() with tf.Session() as sess: sess.run(init) for i in range(5000): train_data = sess.run(optimizer, feed_dict={x: train_X, y: train_Y}) result = sess.run(activation, feed_dict={x:train_X}) print(result) ``` after 5000 iterations: ``` [[ 0.0031316 ] [ 0.12012422] [ 0.12012422] [ 0.85576665]] ``` Question 1 - Is there any other activation function and cost function, that can work(learn) for the above network, without changing the parameters(meaning without changing W, x, b). Question 2 - I read from a StackOverflow post here: [Activation Function] selection depends on the problem. So there are no cost functions that can be used anywhere? I mean there is no standard cost function that can be used on any neural network. Right? Please correct me on this. I also implemented the AND gate with a different approach, with the output as one-hot true. As you can see the train_Y [1,0] means that the 0th index is 1, so the answer is 0. I hope you get it. Here I have used a softmax activation function, with cross entropy as cost function. Sigmoid function as activation function fails miserably. 
``` import tensorflow as tf import numpy train_X = numpy.asarray([[0,0],[0,1],[1,0],[1,1]]) train_Y = numpy.asarray([[1,0],[1,0],[1,0],[0,1]]) x = tf.placeholder(\"float\",[None, 2]) y = tf.placeholder(\"float\",[None, 2]) W = tf.Variable(tf.zeros([2, 2])) b = tf.Variable(tf.zeros([2])) activation = tf.nn.softmax(tf.matmul(x, W)+b) cost = -tf.reduce_sum(y*tf.log(activation)) optimizer = tf.train.GradientDescentOptimizer(0.5).minimize(cost) init = tf.initialize_all_variables() with tf.Session() as sess: sess.run(init) for i in range(5000): train_data = sess.run(optimizer, feed_dict={x: train_X, y: train_Y}) result = sess.run(activation, feed_dict={x:train_X}) print(result) ``` after 5000 iterations ``` [[ 1.00000000e+00 1.41971401e-09] [ 9.98996437e-01 1.00352429e-03] [ 9.98996437e-01 1.00352429e-03] [ 1.40495342e-03 9.98595059e-01]] ``` Question 3 So in this case what cost function and activation function can I use? How do I understand what type of cost and activation functions I should use? Is there a standard way or rule, or just experience only? Do I have to try every cost and activation function in a brute force manner? I found an answer here. But I am hoping for a more elaborate explanation. Question 4 I have noticed that it takes many iterations to converge to a near accurate prediction. I think the convergence rate depends on the learning rate (using too large a value will miss the solution) and the cost function (correct me if I'm wrong). So, is there an optimal (meaning the fastest) way or cost function for converging to a correct solution?", "response":"I will answer your questions a little bit out of order, starting with more general answers, and finishing with those specific to your particular experiment. Activation functions Different activation functions, in fact, do have different properties. Let's first consider an activation function between two layers of a neural network.
The only purpose of an activation function there is to serve as a nonlinearity. If you do not put an activation function between two layers, then two layers together will serve no better than one, because their effect will still be just a linear transformation. For a long while people were using the sigmoid function and tanh, choosing pretty much arbitrarily, with sigmoid being more popular, until recently, when ReLU became the dominant nonlinearity. The reason why people use ReLU between layers is because it is non-saturating (and is also faster to compute). Think about the graph of a sigmoid function. If the absolute value of x is large, then the derivative of the sigmoid function is small, which means that as we propagate the error backwards, the gradient of the error will vanish very quickly as we go back through the layers. With ReLU the derivative is 1 for all positive inputs, so the gradient for those neurons that fired will not be changed by the activation unit at all and will not slow down the gradient descent. For the last layer of the network the activation unit also depends on the task. For regression you will want to use the sigmoid or tanh activation, because you want the result to be between 0 and 1. For classification you will want only one of your outputs to be one and all others zeros, but there's no differentiable way to achieve precisely that, so you will want to use a softmax to approximate it. Your example. Now let's look at your example. Your first example tries to compute the output of AND in the following form: ``` sigmoid(W1 * x1 + W2 * x2 + B) ``` Note that W1 and W2 will always converge to the same value, because the output for (x1, x2) should be equal to the output of (x2, x1). Therefore, the model that you are fitting is: ``` sigmoid(W * (x1 + x2) + B) ``` x1 + x2 can only take one of three values (0, 1 or 2) and you want to return 0 for the case when x1 + x2 < 2 and 1 for the case when x1 + x2 = 2.
Since the sigmoid function is rather smooth, it will take very large values of W and B to make the output close to the desired one, but because of the small learning rate they can't get to those large values fast. Increasing the learning rate in your first example will increase the speed of convergence. Your second example converges better because the softmax function is good at making precisely one output be equal to 1 and all others to 0. Since this is precisely your case, it does converge quickly. Note that sigmoid would also eventually converge to good values, but it will take significantly more iterations (or a higher learning rate). What to use. Now to the last question: how does one choose which activation and cost functions to use? This advice will work for the majority of cases: If you do classification, use softmax for the last layer's nonlinearity and cross entropy as a cost function. If you do regression, use sigmoid or tanh for the last layer's nonlinearity and squared error as a cost function. Use ReLU as a nonlinearity between layers. Use better optimizers (AdamOptimizer, AdagradOptimizer) instead of GradientDescentOptimizer, or use momentum for faster convergence.", "best_answers_score":0.8, "library_name":"tensorflow", "question_url":"https:\/\/stackoverflow.com\/questions\/34229140\/choosing-from-different-cost-function-and-activation-function-of-a-neural-networ", "best_answers_votes":44, "question_length":4097, "response_length":3480 }, { "question":"How to handle non-determinism when training on a GPU? While tuning the hyperparameters to get my model to perform better, I noticed that the score I get (and hence the model that is created) is different every time I run the code despite fixing all the seeds for random operations. This problem does not happen if I run on CPU. I googled and found out that this is a common issue when using a GPU to train. Here is a very good\/detailed example with short code snippets to verify the existence of that problem.
They pinpointed the non-determinism to the \"tf.reduce_sum\" function. However, that is not the case for me. It could be because I'm using different hardware (1080 TI) or a different version of the CUDA libraries or Tensorflow. It seems like there are many different parts of the CUDA libraries that are non-deterministic, and it doesn't seem easy to figure out exactly which part and how to get rid of it. Also, this must have been by design, so it's likely that there is a sufficient efficiency increase in exchange for non-determinism. So, my question is: Since GPUs are popular for training NNs, people in this field must have a way to deal with non-determinism, because I can't see how else you'd be able to reliably tune the hyperparameters. What is the standard way to handle non-determinism when using a GPU?", "response":"TL;DR Non-determinism for a priori deterministic operations comes from concurrent (multi-threaded) implementations. Despite constant progress on that front, TensorFlow does not currently guarantee determinism for all of its operations. After a quick search on the internet, it seems that the situation is similar to the other major toolkits. During training, unless you are debugging an issue, it is OK to have fluctuations between runs. Uncertainty is in the nature of training, and it is wise to measure it and take it into account when comparing results \u2013 even when toolkits eventually reach perfect determinism in training. That, but much longer When you see neural network operations as mathematical operations, you would expect everything to be deterministic. Convolutions, activations, cross-entropy \u2013 these are all mathematical equations and should be deterministic. Even pseudo-random operations such as shuffling, drop-out, noise and the like are entirely determined by a seed.
When you see those operations from their computational implementation, on the other hand, you see them as massively parallelized computations, which can be a source of randomness unless you are very careful. The heart of the problem is that, when you run operations on several parallel threads, you typically do not know which thread will end first. It is not important when threads operate on their own data, so for example, applying an activation function to a tensor should be deterministic. But when those threads need to synchronize, such as when you compute a sum, then the result may depend on the order of the summation, and in turn, on the order in which the threads finished. From there, you have, broadly speaking, two options: Keep non-determinism associated with simpler implementations. Take extra care in the design of your parallel algorithm to reduce or remove non-determinism in your computation. The added constraint usually results in slower algorithms. Which route does CuDNN take? Well, mostly the deterministic one. In recent releases, deterministic operations are the norm rather than the exception. But it used to offer many non-deterministic operations, and more importantly, it used to not offer some operations such as reduction, which people needed to implement themselves in CUDA with a variable degree of consideration to determinism. Some libraries such as theano were further ahead on this topic, by exposing early on a deterministic flag that the user could turn on or off \u2013 but as you can see from its description, it is far from offering any guarantee. If more, sometimes we will select some implementations that are more deterministic, but slower. In particular, on the GPU, we will avoid using AtomicAdd. Sometimes we will still use non-deterministic implementation, e.g. when we do not have a GPU implementation that is deterministic. Also, see the dnn.conv.algo* flags to cover more cases.
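(The order-dependence of a parallel sum comes down to floating-point addition not being associative; a plain-Python sketch of the effect, no GPU required:)

```python
# The same three numbers summed in two different orders -- as two different
# thread-completion orders might combine them -- give different results,
# because floating-point addition is not associative.
vals = [1e16, 1.0, -1e16]

left_to_right = (vals[0] + vals[1]) + vals[2]  # the 1.0 is rounded away first
another_order = (vals[0] + vals[2]) + vals[1]  # the big terms cancel first

print(left_to_right, another_order)  # 0.0 1.0
```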
In TensorFlow, the realization of the need for determinism came rather late, but it's slowly getting there \u2013 helped by the advance of CuDNN on that front also. For a long time, reductions have been non-deterministic, but now they seem to be deterministic. The fact that CuDNN introduced deterministic reductions in version 6.0 may have helped of course. It seems that currently, the main obstacle for TensorFlow towards determinism is the backward pass of the convolution. It is indeed one of the few operations for which CuDNN proposes a non-deterministic algorithm, labeled CUDNN_CONVOLUTION_BWD_FILTER_ALGO_0. This algorithm is still in the list of possible choices for the backward filter in TensorFlow. And since the choice of the filter seems to be based on performance, it could indeed be picked if it is more efficient. (I am not so familiar with TensorFlow's C++ code so take this with a grain of salt.) Is this important? If you are debugging an issue, determinism is not merely important: it is mandatory. You need to reproduce the steps that led to a problem. This is currently a real issue with toolkits like TensorFlow. To mitigate this problem, your only option is to debug live, adding checks and breakpoints at the correct locations \u2013 not great. Deployment is another aspect of things, where it is often desirable to have a deterministic behavior, in part for human acceptance. While nobody would reasonably expect a medical diagnosis algorithm to never fail, it would be awkward if a computer could give the same patient a different diagnosis depending on the run. (Although doctors themselves are not immune to this kind of variability.) Those reasons are rightful motivations to fix non-determinism in neural networks. For all other aspects, I would say that we need to accept, if not embrace, the non-deterministic nature of neural net training. For all purposes, training is stochastic.
We use stochastic gradient descent, shuffle data, use random initialization and dropout \u2013 and more importantly, training data is itself but a random sample of data. From that standpoint, the fact that computers can only generate pseudo-random numbers with a seed is an artifact. When you train, your loss is a value that also comes with a confidence interval due to this stochastic nature. Comparing those values to optimize hyper-parameters while ignoring those confidence intervals does not make much sense \u2013 therefore it is vain, in my opinion, to spend too much effort fixing non-determinism in that, and many other, cases.", "best_answers_score":0.8, "library_name":"tensorflow", "question_url":"https:\/\/stackoverflow.com\/questions\/50744565\/how-to-handle-non-determinism-when-training-on-a-gpu", "best_answers_votes":29, "question_length":1315, "response_length":5452 }, { "question":"How can I convert a trained Tensorflow model to Keras? I have a trained Tensorflow model and weights vector which have been exported to protobuf and weights files respectively. How can I convert these to JSON or YAML and HDF5 files which can be used by Keras? I have the code for the Tensorflow model, so it would also be acceptable to convert the tf.Session to a keras model and save that in code.", "response":"Francois Chollet, the creator of keras, stated in 04\/2017 \"you cannot turn an arbitrary TensorFlow checkpoint into a Keras model. What you can do, however, is build an equivalent Keras model then load into this Keras model the weights\" , see https:\/\/github.com\/keras-team\/keras\/issues\/5273 . To my knowledge this hasn't changed. 
A small example: First, you can extract the weights of a tensorflow checkpoint like this ``` PATH_REL_META = r'checkpoint1.meta' # start tensorflow session with tf.Session() as sess: # import graph saver = tf.train.import_meta_graph(PATH_REL_META) # load weights for graph saver.restore(sess, PATH_REL_META[:-5]) # get all global variables (including model variables) vars_global = tf.global_variables() # get their name and value and put them into dictionary sess.as_default() model_vars = {} for var in vars_global: try: model_vars[var.name] = var.eval() except: print(\"For var={}, an exception occurred\".format(var.name)) ``` It might also be of use to export the tensorflow model for use in tensorboard, see https:\/\/stackoverflow.com\/a\/43569991\/2135504 Second, you build your keras model as usual and finalize it with \"model.compile\". Note that you need to define each layer by name and add it to the model after that, e.g. ``` layer_1 = keras.layers.Conv2D(6, (7,7), activation='relu', input_shape=(48,48,1)) net.add(layer_1) ... net.compile(...) ``` Third, you can set the weights with the tensorflow values, e.g. ``` layer_1.set_weights([model_vars['conv7x7x1_1\/kernel:0'], model_vars['conv7x7x1_1\/bias:0']]) ```", "best_answers_score":0.8, "library_name":"tensorflow", "question_url":"https:\/\/stackoverflow.com\/questions\/44466066\/how-can-i-convert-a-trained-tensorflow-model-to-keras", "best_answers_votes":12, "question_length":398, "response_length":1565 }, { "question":"ValueError: Tensor conversion requested dtype float32 for Tensor with dtype int32 I get the following error ``` ValueError: Tensor conversion requested dtype float32 for Tensor with dtype int32: 'Tensor(\"Placeholder_1:0\", shape=TensorShape([Dimension(128), Dimension(2)]), dtype=int32)' ``` when I try to calculate cross entropy loss ``` losses = tf.nn.softmax_cross_entropy_with_logits(scores, input_y) ``` I use Python 3.4.3.
Any ideas why?", "response":"It sounds like you have defined input_y\u2014which I am assuming is a tf.placeholder()\u2014as having type tf.int32. Either change this to tf.float32 or add a cast: tf.cast(input_y, tf.float32) or tf.to_float(input_y).", "best_answers_score":0.8, "library_name":"tensorflow", "question_url":"https:\/\/stackoverflow.com\/questions\/35382409\/valueerror-tensor-conversion-requested-dtype-float32-for-tensor-with-dtype-int3", "best_answers_votes":33, "question_length":438, "response_length":208 }, { "question":"How to train TensorFlow network using a generator to produce inputs? The TensorFlow docs describe a bunch of ways to read data using TFRecordReader, TextLineReader, QueueRunner, etc., and queues. What I would like to do is much, much simpler: I have a python generator function that produces an infinite sequence of training data as (X, y) tuples (both are numpy arrays, and the first dimension is the batch size). I just want to train a network using that data as inputs. Is there a simple self-contained example of training a TensorFlow network using a generator which produces the data? (along the lines of the MNIST or CIFAR examples)", "response":"Suppose you have a function that generates data: ``` def generator(data): ... yield (X, y) ``` Now you need another function that describes your model architecture. It could be any function that processes X and has to predict y as output (say, a neural network). Suppose your function accepts X and y as inputs, computes a prediction for y from X in some way and returns the loss function (e.g. cross-entropy or MSE in the case of regression) between y and predicted y: ``` def neural_network(X, y): # computation of prediction for y using X ...
return loss(y, y_pred) ``` To make your model work, you need to define placeholders for both X and y and then run a session: ``` X = tf.placeholder(tf.float32, shape=(batch_size, x_dim)) y = tf.placeholder(tf.float32, shape=(batch_size, y_dim)) ``` Placeholders are something like \"free variables\" which you need to specify when running the session by feed_dict: ``` with tf.Session() as sess: # variables need to be initialized before any sess.run() calls tf.global_variables_initializer().run() for X_batch, y_batch in generator(data): feed_dict = {X: X_batch, y: y_batch} _, loss_value, ... = sess.run([train_op, loss, ...], feed_dict) # train_op here stands for optimization operation you have defined # and loss for loss function (return value of neural_network function) ``` Hope you find it useful. However, bear in mind this is not a fully working implementation but rather pseudocode, since you specified almost no details.", "best_answers_score":0.8, "library_name":"tensorflow", "question_url":"https:\/\/stackoverflow.com\/questions\/39325275\/how-to-train-tensorflow-network-using-a-generator-to-produce-inputs", "best_answers_votes":31, "question_length":635, "response_length":1477 }, { "question":"Neither PyTorch nor TensorFlow >= 2.0 have been found.Models won't be available and only tokenizers, configuration and file\/data utilities can be used I am trying to install transformers using pip ``` pip install transformers ``` After import transformers, this error shows: ``` Neither PyTorch nor TensorFlow >= 2.0 have been found.Models won't be available and only tokenizers, configuration, and file\/data utilities can be used. ``` Although I installed TensorFlow-GPU= 2.3.1 and am using conda. System info: ``` Windows 10 python 3.6 cuda 10.1 tensorflow-gpu= 2.3.1 ```", "response":"You need one of them: PyTorch or TensorFlow.
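(For context: the warning appears because transformers probes for the backend packages when it is imported. A minimal sketch of such a probe; framework_available is a hypothetical helper, not the library's actual function:)

```python
import importlib.util

def framework_available(package_name):
    # hypothetical helper: True if the package can be located on sys.path
    return importlib.util.find_spec(package_name) is not None

# neither backend importable -> models unavailable, tokenizers still usable
if not (framework_available("torch") or framework_available("tensorflow")):
    print("no backend found")
```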
You can check if tensorflow is installed, or you can reinstall it: pip uninstall tensorflow, then pip install tensorflow==2.2.0 (you can install plain tensorflow; it works the same as tensorflow-gpu). Then pip uninstall transformers and pip install transformers==3.3.1. If this doesn't solve it, try to upgrade your python to 3.7.8", "best_answers_score":0.8, "library_name":"tensorflow", "question_url":"https:\/\/stackoverflow.com\/questions\/64337550\/neither-pytorch-nor-tensorflow-2-0-have-been-found-models-wont-be-available", "best_answers_votes":13, "question_length":563, "response_length":350 }, { "question":"Should the custom loss function in Keras return a single loss value for the batch or an array of losses for every sample in the training batch? I'm learning the keras API in tensorflow(2.3). In this guide on the tensorflow website, I found an example of a custom loss function: ``` def custom_mean_squared_error(y_true, y_pred): return tf.math.reduce_mean(tf.square(y_true - y_pred)) ``` The reduce_mean function in this custom loss function will return a scalar. Is it right to define a loss function like this? As far as I know, the first dimension of the shapes of y_true and y_pred is the batch size. I think the loss function should return loss values for every sample in the batch. So the loss function should give an array of shape (batch_size,). But the above function gives a single value for the whole batch. Maybe the above example is wrong? Could anyone give me some help on this problem? p.s. Why do I think the loss function should return an array rather than a single value? I read the source code of the Model class. When you provide a loss function (please note it's a function, not a loss class) to the Model.compile() method, this loss function is used to construct a LossesContainer object, which is stored in Model.compiled_loss.
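(For concreteness, the per-sample-then-mean behaviour being described can be sketched in plain Python; this is an illustration of the reduction, not the actual Keras internals:)

```python
def mse_per_sample(y_true, y_pred):
    # one loss value per sample, shape (batch_size,)
    return [(t - p) ** 2 for t, p in zip(y_true, y_pred)]

per_sample = mse_per_sample([1.0, 2.0, 3.0], [1.0, 2.5, 2.0])  # [0.0, 0.25, 1.0]
# the reduction step: the mean over the batch, a single scalar
batch_loss = sum(per_sample) / len(per_sample)
```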
This loss function passed to the constructor of the LossesContainer class is used once again to construct a LossFunctionWrapper object, which is stored in LossesContainer._losses. According to the source code of the LossFunctionWrapper class, the overall loss value for a training batch is calculated by the LossFunctionWrapper.__call__() method (inherited from the Loss class), i.e. it returns a single loss value for the whole batch. But LossFunctionWrapper.__call__() first calls the LossFunctionWrapper.call() method to obtain an array of losses for every sample in the training batch. Then these losses are finally averaged to get the single loss value for the whole batch. It's in the LossFunctionWrapper.call() method that the loss function provided to the Model.compile() method is called. That's why I think the custom loss function should return an array of losses, instead of a single scalar value. Besides, if we write a custom Loss class for the Model.compile() method, the call() method of our custom Loss class should also return an array, rather than a single value. I opened an issue on github. It was confirmed that a custom loss function is required to return one loss value per sample. The example will need to be updated to reflect this.", "response":"Actually, as far as I know, the shape of the return value of the loss function is not important, i.e. it could be a scalar tensor or a tensor of one or multiple values per sample. The important thing is how it should be reduced to a scalar value so that it could be used in the optimization process or shown to the user. For that, you can check the reduction types in the Reduction documentation. Further, here is what the compile method documentation says about the loss argument, partially addressing this point: loss: String (name of objective function), objective function or tf.keras.losses.Loss instance. See tf.keras.losses.
An objective function is any callable with the signature loss = fn(y_true,y_pred), where y_true = ground truth values with shape = [batch_size, d0, .. dN], except sparse loss functions such as sparse categorical crossentropy where shape = [batch_size, d0, .. dN-1]. y_pred = predicted values with shape = [batch_size, d0, .. dN]. It returns a weighted loss float tensor. If a custom Loss instance is used and reduction is set to NONE, return value has the shape [batch_size, d0, .. dN-1] ie. per-sample or per-timestep loss values; otherwise, it is a scalar. If the model has multiple outputs, you can use a different loss on each output by passing a dictionary or a list of losses. The loss value that will be minimized by the model will then be the sum of all individual losses. In addition, it's worth noting that most of the built-in loss functions in TF\/Keras are usually reduced over the last dimension (i.e. axis=-1). For those who doubt that a custom loss function which returns a scalar value would work: you can run the following snippet and you will see that the model would train and converge properly. ``` import tensorflow as tf import numpy as np def custom_loss(y_true, y_pred): return tf.reduce_sum(tf.square(y_true - y_pred)) inp = tf.keras.layers.Input(shape=(3,)) out = tf.keras.layers.Dense(3)(inp) model = tf.keras.Model(inp, out) model.compile(loss=custom_loss, optimizer=tf.keras.optimizers.Adam(lr=0.1)) x = np.random.rand(1000, 3) y = x * 10 + 2.5 model.fit(x, y, epochs=20) ```", "best_answers_score":0.8, "library_name":"tensorflow", "question_url":"https:\/\/stackoverflow.com\/questions\/63390725\/should-the-custom-loss-function-in-keras-return-a-single-loss-value-for-the-batc", "best_answers_votes":10, "question_length":2476, "response_length":2124 }, { "question":"What is regularization loss in tensorflow? 
When training an Object Detection DNN with TensorFlow's Object Detection API, its visualization platform TensorBoard plots a scalar named regularization_loss_1. What is this? I know what regularization is (to make the network good at generalizing through various methods like dropout), but it is not clear to me what this displayed loss could be. Thanks!", "response":"TL;DR: it's just the additional loss generated by the regularization function. Add that to the network's loss and optimize over the sum of the two. As you correctly state, regularization methods are used to help an optimization method to generalize better. A way to obtain this is to add a regularization term to the loss function. This term is a generic function, which modifies the \"global\" loss (as in, the sum of the network loss and the regularization loss) in order to drive the optimization algorithm in desired directions. Let's say, for example, that for whatever reason I want to encourage solutions to the optimization that have weights as close to zero as possible. One approach, then, is to add to the loss produced by the network a function of the network weights (for example, a scaled-down sum of all the absolute values of the weights). Since the optimization algorithm minimizes the global loss, my regularization term (which is high when the weights are far from zero) will push the optimization towards solutions that have weights close to zero.", "best_answers_score":0.8, "library_name":"tensorflow", "question_url":"https:\/\/stackoverflow.com\/questions\/48443886\/what-is-regularization-loss-in-tensorflow", "best_answers_votes":24, "question_length":395, "response_length":1065 }, { "question":"Make predictions using a tensorflow graph from a keras model I have a model trained using Keras with Tensorflow as my backend, but now I need to turn my model into a tensorflow graph for a certain application.
I attempted to do this and make predictions to ensure that it is working correctly, but when comparing to the results gathered from model.predict() I get very different values. For instance: ``` from keras.models import load_model import numpy as np import tensorflow as tf model = load_model('model_file.h5') x_placeholder = tf.placeholder(tf.float32, shape=(None,7214,1)) y = model(x_placeholder) x = np.ones((1,7214,1)) with tf.Session() as sess: sess.run(tf.global_variables_initializer()) print(\"Predictions from:\\ntf graph: \"+str(sess.run(y, feed_dict={x_placeholder:x}))) print(\"keras predict: \"+str(model.predict(x))) ``` returns: ``` Predictions from: tf graph: [[-0.1015993 0.07432419 0.0592984 ]] keras predict: [[ 0.39339241 0.57949686 -3.67846966]] ``` The values from keras predict are correct, but the tf graph results are not. If it helps to know the final intended application, I am creating a jacobian matrix with the tf.gradients() function, but currently it does not return the correct results when comparing to theano's jacobian function, which gives the correct jacobian.
Here is my tensorflow jacobian code: ``` x = tf.placeholder(tf.float32, shape=(None,7214,1)) y = tf.reshape(model(x)[0],[-1]) y_list = tf.unstack(y) jacobian_list = [tf.gradients(y_, x)[0] for y_ in y_list] jacobian = tf.stack(jacobian_list) ``` EDIT: Model code ``` import numpy as np from keras.models import Sequential from keras.layers import Dense, InputLayer, Flatten from keras.layers.convolutional import Conv1D from keras.layers.convolutional import MaxPooling1D from keras.optimizers import Adam from keras.callbacks import EarlyStopping, ReduceLROnPlateau # activation function used following every layer except for the output layers activation = 'relu' # model weight initializer initializer = 'he_normal' # shape of input data that is fed into the input layer input_shape = (None,7214,1) # number of filters used in the convolutional layers num_filters = [4,16] # length of the filters in the convolutional layers filter_length = 8 # length of the maxpooling window pool_length = 4 # number of nodes in each of the hidden fully connected layers num_hidden_nodes = [256,128] # number of samples fed into model at once during training batch_size = 64 # maximum number of interations for model training max_epochs = 30 # initial learning rate for optimization algorithm lr = 0.0007 # exponential decay rate for the 1st moment estimates for optimization algorithm beta_1 = 0.9 # exponential decay rate for the 2nd moment estimates for optimization algorithm beta_2 = 0.999 # a small constant for numerical stability for optimization algorithm optimizer_epsilon = 1e-08 model = Sequential([ InputLayer(batch_input_shape=input_shape), Conv1D(kernel_initializer=initializer, activation=activation, padding=\"same\", filters=num_filters[0], kernel_size=filter_length), Conv1D(kernel_initializer=initializer, activation=activation, padding=\"same\", filters=num_filters[1], kernel_size=filter_length), MaxPooling1D(pool_size=pool_length), Flatten(), Dense(units=num_hidden_nodes[0], 
kernel_initializer=initializer, activation=activation), Dense(units=num_hidden_nodes[1], kernel_initializer=initializer, activation=activation), Dense(units=3, activation=\"linear\", input_dim=num_hidden_nodes[1]), ]) # compile model loss_function = 'mean_squared_error' early_stopping_min_delta = 0.0001 early_stopping_patience = 4 reduce_lr_factor = 0.5 reduce_lr_epsilon = 0.0009 reduce_lr_patience = 2 reduce_lr_min = 0.00008 optimizer = Adam(lr=lr, beta_1=beta_1, beta_2=beta_2, epsilon=optimizer_epsilon, decay=0.0) early_stopping = EarlyStopping(monitor='val_loss', min_delta=early_stopping_min_delta, patience=early_stopping_patience, verbose=2, mode='min') reduce_lr = ReduceLROnPlateau(monitor='loss', factor=reduce_lr_factor, epsilon=reduce_lr_epsilon, patience=reduce_lr_patience, min_lr=reduce_lr_min, mode='min', verbose=2) model.compile(optimizer=optimizer, loss=loss_function) model.fit(train_x, train_y, validation_data=(cv_x, cv_y), epochs=max_epochs, batch_size=batch_size, verbose=2, callbacks=[reduce_lr,early_stopping]) model.save('model_file.h5') ```
weight_file[:-2]+'pb' weight_file_path = osp.join(input_fld, weight_file) net_model = load_model(weight_file_path) num_output = len(output_node_names_of_input_network) pred = [None]*num_output pred_node_names = [None]*num_output for i in range(num_output): pred_node_names[i] = output_node_names_of_final_network+str(i) pred[i] = tf.identity(net_model.output[i], name=pred_node_names[i]) sess = K.get_session() constant_graph = graph_util.convert_variables_to_constants(sess, sess.graph.as_graph_def(), pred_node_names) graph_io.write_graph(constant_graph, output_fld, output_graph_name, as_text=False) print('saved the constant graph (ready for inference) at: ', osp.join(output_fld, output_graph_name)) return output_fld+output_graph_name ``` Call: ``` tf_model_path = convert_to_pb('model_file.h5','\/model_dir\/','\/model_dir\/') ``` Create function to load the tf model as a graph: ``` def load_graph(frozen_graph_filename): # We load the protobuf file from the disk and parse it to retrieve the # unserialized graph_def with tf.gfile.GFile(frozen_graph_filename, \"rb\") as f: graph_def = tf.GraphDef() graph_def.ParseFromString(f.read()) # Then, we can use again a convenient built-in function to import a graph_def into the # current default Graph with tf.Graph().as_default() as graph: tf.import_graph_def( graph_def, input_map=None, return_elements=None, name=\"prefix\", op_dict=None, producer_op_list=None ) input_name = graph.get_operations()[0].name+':0' output_name = graph.get_operations()[-1].name+':0' return graph, input_name, output_name ``` Create a function to make model predictions using the tf graph ``` def predict(model_path, input_data): # load tf graph tf_model,tf_input,tf_output = load_graph(model_path) # Create tensors for model input and output x = tf_model.get_tensor_by_name(tf_input) y = tf_model.get_tensor_by_name(tf_output) # Number of model outputs num_outputs = y.shape.as_list()[0] predictions = np.zeros((input_data.shape[0],num_outputs)) for i in 
range(input_data.shape[0]): with tf.Session(graph=tf_model) as sess: y_out = sess.run(y, feed_dict={x: input_data[i:i+1]}) predictions[i] = y_out return predictions ``` Make predictions: ``` tf_predictions = predict(tf_model_path,test_data) ``` Jacobian function: ``` def compute_jacobian(model_path,input_data): tf_model,tf_input,tf_output = load_graph(model_path) x = tf_model.get_tensor_by_name(tf_input) y = tf_model.get_tensor_by_name(tf_output) y_list = tf.unstack(y) num_outputs = y.shape.as_list()[0] jacobian = np.zeros((num_outputs,input_data.shape[0],input_data.shape[1])) for i in range(input_data.shape[0]): with tf.Session(graph=tf_model) as sess: y_out = sess.run([tf.gradients(y_, x)[0] for y_ in y_list], feed_dict={x: input_data[i:i+1]}) jac_temp = np.asarray(y_out) jacobian[:,i:i+1,:]=jac_temp[:,:,:,0] return jacobian ``` Compute Jacobian Matrix: ``` jacobians = compute_jacobian(tf_model_path,test_data) ```", "best_answers_score":0.8, "library_name":"tensorflow", "question_url":"https:\/\/stackoverflow.com\/questions\/44274701\/make-predictions-using-a-tensorflow-graph-from-a-keras-model", "best_answers_votes":18, "question_length":4322, "response_length":3818 }, { "question":"get the CUDA and CUDNN version on windows with Anaconda installed There is a tensorflow-gpu version installed on Windows using Anaconda; how do I check the CUDA and CUDNN version of it?
Thanks.", "response":"Use the following command to check the CUDA installation by Conda: ``` conda list cudatoolkit ``` And the following command to check the CUDNN version installed by conda: ``` conda list cudnn ``` If you want to install\/update CUDA and CUDNN through CONDA, please use the following commands: ``` conda install -c anaconda cudatoolkit conda install -c anaconda cudnn ``` Alternatively, you can use the following commands to check the CUDA installation: ``` nvidia-smi ``` OR ``` nvcc --version ```", "best_answers_score":0.8, "library_name":"tensorflow", "question_url":"https:\/\/stackoverflow.com\/questions\/52410484\/get-the-cuda-and-cudnn-version-on-windows-with-anaconda-installe", "best_answers_votes":48, "question_length":190, "response_length":478 }, { "question":"Save and load model optimizer state I have a set of fairly complicated models that I am training and I am looking for a way to save and load the model optimizer states. The \"trainer models\" consist of different combinations of several other \"weight models\", of which some have shared weights, some have frozen weights depending on the trainer, etc. It is a bit too complicated of an example to share, but in short, I am not able to use model.save('model_file.h5') and keras.models.load_model('model_file.h5') when stopping and starting my training. Using model.load_weights('weight_file.h5') works fine for testing my model if the training has finished, but if I attempt to continue training the model using this method, the loss does not come even close to returning to its last location. I have read that this is because the optimizer state is not saved using this method, which makes sense. However, I need a method for saving and loading the states of the optimizers of my trainer models. It seems as though keras once had model.optimizer.get_state() and model.optimizer.set_state() methods that would accomplish what I am after, but that does not seem to be the case anymore (at least for the Adam optimizer).
Are there any other solutions with the current Keras?", "response":"You can extract the important lines from the load_model and save_model functions. For saving optimizer states, in save_model: ``` # Save optimizer weights. symbolic_weights = getattr(model.optimizer, 'weights') if symbolic_weights: optimizer_weights_group = f.create_group('optimizer_weights') weight_values = K.batch_get_value(symbolic_weights) ``` For loading optimizer states, in load_model: ``` # Set optimizer weights. if 'optimizer_weights' in f: # Build train function (to get weight updates). if isinstance(model, Sequential): model.model._make_train_function() else: model._make_train_function() # ... try: model.optimizer.set_weights(optimizer_weight_values) ``` Combining the lines above, here's an example: First fit the model for 5 epochs. ``` X, y = np.random.rand(100, 50), np.random.randint(2, size=100) x = Input((50,)) out = Dense(1, activation='sigmoid')(x) model = Model(x, out) model.compile(optimizer='adam', loss='binary_crossentropy') model.fit(X, y, epochs=5) Epoch 1\/5 100\/100 [==============================] - 0s 4ms\/step - loss: 0.7716 Epoch 2\/5 100\/100 [==============================] - 0s 64us\/step - loss: 0.7678 Epoch 3\/5 100\/100 [==============================] - 0s 82us\/step - loss: 0.7665 Epoch 4\/5 100\/100 [==============================] - 0s 56us\/step - loss: 0.7647 Epoch 5\/5 100\/100 [==============================] - 0s 76us\/step - loss: 0.7638 ``` Now save the weights and optimizer states. ``` model.save_weights('weights.h5') symbolic_weights = getattr(model.optimizer, 'weights') weight_values = K.batch_get_value(symbolic_weights) with open('optimizer.pkl', 'wb') as f: pickle.dump(weight_values, f) ``` Rebuild the model in another python session, and load weights. 
``` x = Input((50,)) out = Dense(1, activation='sigmoid')(x) model = Model(x, out) model.compile(optimizer='adam', loss='binary_crossentropy') model.load_weights('weights.h5') model._make_train_function() with open('optimizer.pkl', 'rb') as f: weight_values = pickle.load(f) model.optimizer.set_weights(weight_values) ``` Continue model training. ``` model.fit(X, y, epochs=5) Epoch 1\/5 100\/100 [==============================] - 0s 674us\/step - loss: 0.7629 Epoch 2\/5 100\/100 [==============================] - 0s 49us\/step - loss: 0.7617 Epoch 3\/5 100\/100 [==============================] - 0s 49us\/step - loss: 0.7611 Epoch 4\/5 100\/100 [==============================] - 0s 55us\/step - loss: 0.7601 Epoch 5\/5 100\/100 [==============================] - 0s 49us\/step - loss: 0.7594 ```", "best_answers_score":0.8, "library_name":"tensorflow", "question_url":"https:\/\/stackoverflow.com\/questions\/49503748\/save-and-load-model-optimizer-state", "best_answers_votes":38, "question_length":1259, "response_length":2502 }, { "question":"Train Tensorflow Object Detection on own dataset After spending a couple of days trying to achieve this task, I would like to share my experience of how I went about answering the question: How do I use TF Object Detection to train using my own dataset?
Tools used LabelImg: A tool for creating PASCAL VOC format annotations. 1. Create your own PASCAL VOC dataset PS: For simplicity, the folder naming convention of my answer follows that of Pascal VOC 2012 Peeking into the May 2012 dataset, you'll notice the folder has the following structure +VOCdevkit +VOC2012 +Annotations +ImageSets +Action +Layout +Main +Segmentation +JPEGImages +SegmentationClass +SegmentationObject For the time being, amendments were made to the following folders: Annotations: This is where all the images' corresponding XML files will be placed. Use the suggested tool above to create the annotations. Do not worry about certain tags, as they will be ignored by the training and eval binaries. JPEGImages: Location of your actual images. Make sure they are of type JPEG because that's what is currently supported in order to create TFRecords using their provided script. ImageSets->Main: This simply consists of text files. For each class, there exists a corresponding train.txt, trainval.txt and val.txt. Below is a sample of the contents of the aeroplane_train.txt in the VOC 2012 folder ``` 2008_000008 -1 2008_000015 -1 2008_000019 -1 2008_000023 -1 2008_000028 -1 2008_000033 1 ``` The structure is basically the image name followed by a boolean saying whether the corresponding object exists in that image or not. For example, image 2008_000008 does not contain an aeroplane and is hence marked with -1, while image 2008_000033 does. I wrote a small Python script to generate these text files. Simply iterate through the image names and assign a 1 or -1 next to them for object existence. I added some randomness among my text files by shuffling the image names. The {classname}_val.txt files consist of the testing validation datasets. Think of this as the test data during training. You want to divide your dataset into training and validation. More info can be found here. The format of these files is similar to that of training.
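The small generator script mentioned above can be sketched as follows. This is an illustrative rewrite, not the author's original script; the write_split helper, its arguments, and the sample IDs are my own: ```python import random

def write_split(path, image_ids, positive_ids):
    # Shuffle for some randomness, then mark each image with 1 or -1
    ids = list(image_ids)
    random.shuffle(ids)
    with open(path, 'w') as f:
        for img_id in ids:
            print(img_id, 1 if img_id in positive_ids else -1, file=f)

# Hypothetical usage: only 2008_000033 contains an aeroplane
write_split('aeroplane_train.txt', ['2008_000008', '2008_000033'], {'2008_000033'}) ``` The same helper can be pointed at a held-out subset of image IDs to produce the corresponding {classname}_val.txt file.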
At this point, your folder structure should be +VOCdevkit +VOC2012 +Annotations --(for each image, generated annotation) +ImageSets +Main --(for each class, generated *classname*_train.txt and *classname*_val.txt) +JPEGImages --(a bunch of JPEG images) 1.1 Generating label map With the dataset prepared, we need to create the corresponding label maps. Navigate to models\/object_detection\/data and open pascal_label_map.pbtxt. This file consists of a text-format protocol buffer that assigns an ID and name to each item. Make amendments to this file to reflect your desired objects. 2. Generate TFRecords If you look into their code, especially this line, they explicitly grab the aeroplane_train.txt only. For curious minds, here's why. Change this file name to any of your class train text files. Make sure VOCdevkit is inside models\/object_detection, then you can go ahead and generate the TFRecords. Please go through their code first should you run into any problems. It is self-explanatory and well documented. 3. Pipeline Configuration The instructions should be self-explanatory to cover this segment. Sample configs can be found in object_detection\/samples\/configs. For those looking to train from scratch as I did, just make sure to remove the fine_tune_checkpoint and from_detection_checkpoint nodes. Here's what my config file looked like for reference. From here on you can continue with the tutorial and run the training process. 4. Visualize Be sure to run the eval in parallel to the training in order to be able to visualize the learning process. To quote Jonathan Huang: the best way is to just run the eval.py binary. We typically run this binary in parallel to training, pointing it at the directory holding the checkpoint that is being trained. The eval.py binary will write logs to an eval_dir that you specify which you can then point to with Tensorboard. You want to see that the mAP has \"lifted off\" in the first few hours, and then you want to see when it converges. 
It's hard to tell without looking at these plots how many steps you need. EDIT I (28 July '17): I never expected my response to get this much attention, so I decided to come back and review it. Tools For my fellow Apple users, you could actually use RectLabel for annotations. Pascal VOC After digging around, I finally realized that trainval.txt is actually the union of training and validation datasets. Please look at their official development kit to understand the format even better. Label Map Generation At the time of my writing, ID 0 represents none_of_the_above. It is recommended that your IDs start from 1. Visualize After running your evaluation and pointing tensorboard to your eval directory, it'll show you the mAP of each category along with each category's performance. This is good, but I like seeing my training data as well in parallel with eval. To do this, run tensorboard on a different port and point it to your train directory ``` tensorboard --logdir=${PATH_TO_TRAIN} --port=${DESIRED_NUMBER} ```", "best_answers_score":0.8, "library_name":"tensorflow", "question_url":"https:\/\/stackoverflow.com\/questions\/44973184\/train-tensorflow-object-detection-on-own-dataset", "best_answers_votes":54, "question_length":250, "response_length":5610 }, { "question":"TensorFlow: Is there a way to measure FLOPS for a model? 
The closest example I can get is found in this issue: https:\/\/github.com\/tensorflow\/tensorflow\/issues\/899 With this minimum reproducible code: ``` import tensorflow as tf import tensorflow.python.framework.ops as ops g = tf.Graph() with g.as_default(): A = tf.Variable(tf.random_normal( [25,16] )) B = tf.Variable(tf.random_normal( [16,9] )) C = tf.matmul(A,B) # shape=[25,9] for op in g.get_operations(): flops = ops.get_stats_for_node_def(g, op.node_def, 'flops').value if flops is not None: print 'Flops should be ~',2*25*16*9 print '25 x 25 x 9 would be',2*25*25*9 # ignores internal dim, repeats first print 'TF stats gives',flops ``` However, the FLOPS returned is always None. Is there a way to concretely measure FLOPS, especially with a PB file?", "response":"I would like to build on Tobias Schnek's answer as well as answering the original question: how to get FLOP from a pb file. Running the first snippet of code from Tobias's answer with TensorFlow 1.6.0 ``` g = tf.Graph() run_meta = tf.RunMetadata() with g.as_default(): A = tf.Variable(tf.random_normal([25,16])) B = tf.Variable(tf.random_normal([16,9])) C = tf.matmul(A,B) opts = tf.profiler.ProfileOptionBuilder.float_operation() flops = tf.profiler.profile(g, run_meta=run_meta, cmd='op', options=opts) if flops is not None: print('Flops should be ~',2*25*16*9) print('TF stats gives',flops.total_float_ops) ``` We get the following output: ``` Flops should be ~ 7200 TF stats gives 8288 ``` So, why do we get 8288 instead of the expected result 7200=2*25*16*9[a]? The answer is in the way the tensors A and B are initialised. Initialising with a Gaussian distribution costs some FLOP. Changing the definition of A and B by ``` A = tf.Variable(initial_value=tf.zeros([25, 16])) B = tf.Variable(initial_value=tf.zeros([16, 9])) ``` gives the expected output 7200. Usually, a network's variables are initialised with Gaussian distributions among other schemes. 
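As a side note on the expected 7200 above: it comes from the convention that counts a multiply and an add as two separate operations. A quick arithmetic sketch comparing it with the textbook count (p multiplies and p - 1 adds per output element): ```python # FLOP count for the matmul A[m, p] @ B[p, q] above, under two conventions
m, p, q = 25, 16, 9
two_ops_per_mac = 2 * m * p * q       # multiply and add counted separately
textbook = m * q * (2 * p - 1)        # p multiplies, p - 1 adds per output
print(two_ops_per_mac, textbook)      # 7200 6975 ``` The two counts differ only by the m*q accumulator initialisations, which is why they agree to leading order for large p.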
Most of the time, we are not interested in the initialisation FLOP, as they are done once during initialisation and happen neither during training nor during inference. So, how could one get the exact number of FLOP disregarding the initialisation FLOP? Freeze the graph and save it as a pb file. Calculating the FLOP from a pb file was, actually, the OP's use case. The following snippet illustrates this: ``` import tensorflow as tf from tensorflow.python.framework import graph_util def load_pb(pb): with tf.gfile.GFile(pb, \"rb\") as f: graph_def = tf.GraphDef() graph_def.ParseFromString(f.read()) with tf.Graph().as_default() as graph: tf.import_graph_def(graph_def, name='') return graph # ***** (1) Create Graph ***** g = tf.Graph() sess = tf.Session(graph=g) with g.as_default(): A = tf.Variable(initial_value=tf.random_normal([25, 16])) B = tf.Variable(initial_value=tf.random_normal([16, 9])) C = tf.matmul(A, B, name='output') sess.run(tf.global_variables_initializer()) flops = tf.profiler.profile(g, options = tf.profiler.ProfileOptionBuilder.float_operation()) print('FLOP before freezing', flops.total_float_ops) # ***************************** # ***** (2) freeze graph ***** output_graph_def = graph_util.convert_variables_to_constants(sess, g.as_graph_def(), ['output']) with tf.gfile.GFile('graph.pb', \"wb\") as f: f.write(output_graph_def.SerializeToString()) # ***************************** # ***** (3) Load frozen graph ***** g2 = load_pb('.\/graph.pb') with g2.as_default(): flops = tf.profiler.profile(g2, options = tf.profiler.ProfileOptionBuilder.float_operation()) print('FLOP after freezing', flops.total_float_ops) ``` outputs ``` FLOP before freezing 8288 FLOP after freezing 7200 ``` [a] Usually the FLOP count of a matrix multiplication is mq(2p - 1) for the product AB where A[m, p] and B[p, q], but TensorFlow returns 2mpq for some reason. 
An issue has been opened to understand why.", "best_answers_score":0.8, "library_name":"tensorflow", "question_url":"https:\/\/stackoverflow.com\/questions\/45085938\/tensorflow-is-there-a-way-to-measure-flops-for-a-model", "best_answers_votes":29, "question_length":811, "response_length":3048 }, { "question":"Add L2 regularization when using high level tf.layers Is it possible to add an L2 regularization when using the layers defined in tf.layers? It seems to me that since tf.layers is an high level wrapper, there is no easy way to get access to the filter weights. With tf.nn.conv2d ``` regularizer = tf.contrib.layers.l2_regularizer(scale=0.1) weights = tf.get_variable( name=\"weights\", regularizer=regularizer ) #Previous layers ... #Second layer layer 2 = tf.nn.conv2d( input, weights, [1,1,1,1], [1,1,1,1]) #More layers ... #Loss loss = #some loss reg_variables = tf.get_collection(tf.GraphKeys.REGULARIZATION_LOSSES) reg_term = tf.contrib.layers.apply_regularization(regularizer, reg_variables) loss += reg_term ``` Now what would that look like with tf.layers.conv2d? Thanks!", "response":"You can pass them into tf.layers.conv2d as arguments: ``` regularizer = tf.contrib.layers.l2_regularizer(scale=0.1) layer2 = tf.layers.conv2d( inputs, filters, kernel_size, kernel_regularizer=regularizer) ``` Then you should add the regularization loss to your loss like this: ``` l2_loss = tf.losses.get_regularization_loss() loss += l2_loss ``` Edit: Thanks Zeke Arneodo, Tom and srcolinas I added, the last bit on your feedback so that the accepted answer provides the complete solution.", "best_answers_score":0.8, "library_name":"tensorflow", "question_url":"https:\/\/stackoverflow.com\/questions\/44232566\/add-l2-regularization-when-using-high-level-tf-layers", "best_answers_votes":38, "question_length":777, "response_length":490 }, { "question":"For what reason Convolution 1x1 is used in deep neural networks? 
I'm looking at the InceptionV3 (GoogLeNet) architecture and cannot understand why we need conv1x1 layers. I know how convolution works, but I only see the benefit when the patch size is > 1.
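The claimed equivalence between a (1x1xC)xD convolution and a fully connected layer applied at every spatial position can be checked numerically. Here is a small NumPy sketch; the sizes are illustrative and not taken from the answer: ```python import numpy as np

h, w, c, d = 4, 5, 8, 3          # illustrative spatial and channel sizes
x = np.random.randn(h, w, c)     # input volume h x w x c
k = np.random.randn(c, d)        # d filters, each of shape 1 x 1 x c

# 1x1 convolution: a dot product over channels at every spatial position
conv = np.einsum('hwc,cd->hwd', x, k)

# the same computation as a fully connected layer applied per position
fc = (x.reshape(-1, c) @ k).reshape(h, w, d)

print(np.allclose(conv, fc))  # True ``` Because the matmul form places no constraint on h and w, the same filter bank works for any input spatial extent, which is exactly the flexibility the answer describes.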
This is not possible with a FC layer that constrains the network to accept fixed size input and produce fixed size output.", "best_answers_score":0.8, "library_name":"tensorflow", "question_url":"https:\/\/stackoverflow.com\/questions\/39366271\/for-what-reason-convolution-1x1-is-used-in-deep-neural-networks", "best_answers_votes":48, "question_length":239, "response_length":1892 }, { "question":"Keras AttributeError: 'list' object has no attribute 'ndim' I'm running a Keras neural network model in Jupyter Notebook (Python 3.6) I get the following error AttributeError: 'list' object has no attribute 'ndim' after calling the .fit() method from Keras.model ``` model = Sequential() model.add(Dense(5, input_dim=len(X_data[0]), activation='sigmoid' )) model.add(Dense(1, activation = 'sigmoid')) model.compile(loss='mean_squared_error', optimizer='adam', metrics=['acc']) model.fit(X_data, y_data, epochs=20, batch_size=10) ``` I checked the requirements.txt file for Keras (in Anaconda3) and the numpy, scipy, and six module versions are all up to date. What can explain this AttributeError? 
The full error message is the following (seems to be somewhat related to Numpy): --------------------------------------------------------------------------- AttributeError Traceback (most recent call last) in () 3 model.add(Dense(1, activation = 'sigmoid')) 4 model.compile(loss='mean_squared_error', optimizer='adam', metrics=['acc']) ----> 5 model.fit(X_data, y_data, epochs=20, batch_size=10) ~\\Anaconda3\\lib\\site-packages\\keras\\models.py in fit(self, x, y, batch_size, epochs, verbose, callbacks, validation_split, validation_data, shuffle, class_weight, sample_weight, initial_epoch, steps_per_epoch, validation_steps, **kwargs) 963 initial_epoch=initial_epoch, 964 steps_per_epoch=steps_per_epoch, --> 965 validation_steps=validation_steps) 966 967 def evaluate(self, x=None, y=None, ~\\Anaconda3\\lib\\site-packages\\keras\\engine\\training.py in fit(self, x, y, batch_size, epochs, verbose, callbacks, validation_split, validation_data, shuffle, class_weight, sample_weight, initial_epoch, steps_per_epoch, validation_steps, **kwargs) 1591 class_weight=class_weight, 1592 check_batch_axis=False, -> 1593 batch_size=batch_size) 1594 # Prepare validation data. 
1595 do_validation = False ~\\Anaconda3\\lib\\site-packages\\keras\\engine\\training.py in _standardize_user_data(self, x, y, sample_weight, class_weight, check_batch_axis, batch_size) 1424 self._feed_input_shapes, 1425 check_batch_axis=False, -> 1426 exception_prefix='input') 1427 y = _standardize_input_data(y, self._feed_output_names, 1428 output_shapes, ~\\Anaconda3\\lib\\site-packages\\keras\\engine\\training.py in _standardize_input_data(data, names, shapes, check_batch_axis, exception_prefix) 68 elif isinstance(data, list): 69 data = [x.values if x.__class__.__name__ == 'DataFrame' else x for x in data] ---> 70 data = [np.expand_dims(x, 1) if x is not None and x.ndim == 1 else x for x in data] 71 else: 72 data = data.values if data.__class__.__name__ == 'DataFrame' else data ~\\Anaconda3\\lib\\site-packages\\keras\\engine\\training.py in <listcomp>(.0) 68 elif isinstance(data, list): 69 data = [x.values if x.__class__.__name__ == 'DataFrame' else x for x in data] ---> 70 data = [np.expand_dims(x, 1) if x is not None and x.ndim == 1 else x for x in data] 71 else: 72 data = data.values if data.__class__.__name__ == 'DataFrame' else data AttributeError: 'list' object has no attribute 'ndim'", "response":"model.fit expects x and y to be numpy array. Seems like you pass a list, it tried to get shape of input by reading ndim attribute of numpy array and failed. You can simply transform it using np.array: ``` import numpy as np ... 
model.fit(np.array(train_X),np.array(train_Y), epochs=20, batch_size=10) ```", "best_answers_score":0.8, "library_name":"tensorflow", "question_url":"https:\/\/stackoverflow.com\/questions\/48493755\/keras-attributeerror-list-object-has-no-attribute-ndim", "best_answers_votes":45, "question_length":3011, "response_length":304 }, { "question":"Tensorflow2 warning using @tffunction This example code from Tensorflow 2 ``` writer = tf.summary.create_file_writer(\"\/tmp\/mylogs\/tf_function\") @tf.function def my_func(step): with writer.as_default(): # other model code would go here tf.summary.scalar(\"my_metric\", 0.5, step=step) for step in range(100): my_func(step) writer.flush() ``` but it is throwing warnings. WARNING:tensorflow:5 out of the last 5 calls to triggered tf.function retracing. Tracing is expensive and the excessive number of tracings is likely due to passing python objects instead of tensors. Also, tf.function has experimental_relax_shapes=True option that relaxes argument shapes that can avoid unnecessary retracing. Please refer to https:\/\/www.tensorflow.org\/beta\/tutorials\/eager\/tf_function#python_or_tensor_args and https:\/\/www.tensorflow.org\/api_docs\/python\/tf\/function for more details. Is there a better way to do this?", "response":"tf.function has some \"peculiarities\". I highly recommend reading this article: https:\/\/www.tensorflow.org\/tutorials\/customization\/performance In this case, the problem is that the function is \"retraced\" (i.e. a new graph is built) every time you call with a different input signature. For tensors, input signature refers to shape and dtype, but for Python numbers, every new value is interpreted as \"different\". In this case, because you call the function with a step variable that changes every time, the function is retraced every single time as well. This will be extremely slow for \"real\" code (e.g. calling a model inside the function). 
You can fix it by simply converting step to a tensor, in which case the different values will not count as a new input signature: ``` for step in range(100): step = tf.convert_to_tensor(step, dtype=tf.int64) my_func(step) writer.flush() ``` or use tf.range to get tensors directly: ``` for step in tf.range(100): step = tf.cast(step, tf.int64) my_func(step) writer.flush() ``` This should not produce warnings (and be much faster).", "best_answers_score":0.8, "library_name":"tensorflow", "question_url":"https:\/\/stackoverflow.com\/questions\/58972225\/tensorflow2-warning-using-tffunction", "best_answers_votes":42, "question_length":902, "response_length":1073 }, { "question":"tensorflow: what's the difference between tf.nn.dropout and tf.layers.dropout I'm quite confused about whether to use tf.nn.dropout or tf.layers.dropout. many MNIST CNN examples seems to use tf.nn.droput, with keep_prop as one of params. but how is it different with tf.layers.dropout? is the \"rate\" params in tf.layers.dropout similar to tf.nn.dropout? Or generally speaking, is the difference between tf.nn.dropout and tf.layers.dropout applies to all other similar situations, like similar functions in tf.nn and tf.layers.", "response":"A quick glance through tensorflow\/python\/layers\/core.py and tensorflow\/python\/ops\/nn_ops.py reveals that tf.layers.dropout is a wrapper for tf.nn.dropout. 
The only differences in the two functions are: The tf.nn.dropout has parameter keep_prob: \"Probability that each element is kept\" tf.layers.dropout has parameter rate: \"The dropout rate\" Thus, keep_prob = 1 - rate as defined here The tf.layers.dropout has training parameter: \"Whether to return the output in training mode (apply dropout) or in inference mode (return the input untouched).\"", "best_answers_score":0.8, "library_name":"tensorflow", "question_url":"https:\/\/stackoverflow.com\/questions\/44395547\/tensorflow-whats-the-difference-between-tf-nn-dropout-and-tf-layers-dropout", "best_answers_votes":38, "question_length":526, "response_length":545 }, { "question":"Tensorflow cannot open libcuda.so.1 I have a laptop with a GeForce 940 MX. I want to get Tensorflow up and running on the gpu. I installed everything from their tutorial page, now when I import Tensorflow, I get ``` >>> import tensorflow as tf I tensorflow\/stream_executor\/dso_loader.cc:128] successfully opened CUDA library libcublas.so locally I tensorflow\/stream_executor\/dso_loader.cc:128] successfully opened CUDA library libcudnn.so locally I tensorflow\/stream_executor\/dso_loader.cc:128] successfully opened CUDA library libcufft.so locally I tensorflow\/stream_executor\/dso_loader.cc:119] Couldn't open CUDA library libcuda.so.1. 
LD_LIBRARY_PATH: I tensorflow\/stream_executor\/cuda\/cuda_diagnostics.cc:165] hostname: workLaptop I tensorflow\/stream_executor\/cuda\/cuda_diagnostics.cc:189] libcuda reported version is: Not found: was unable to find libcuda.so DSO loaded into this program I tensorflow\/stream_executor\/cuda\/cuda_diagnostics.cc:193] kernel reported version is: Permission denied: could not open driver version path for reading: \/proc\/driver\/nvidia\/version I tensorflow\/stream_executor\/cuda\/cuda_gpu_executor.cc:1092] LD_LIBRARY_PATH: I tensorflow\/stream_executor\/cuda\/cuda_gpu_executor.cc:1093] failed to find libcuda.so on this system: Failed precondition: could not dlopen DSO: libcuda.so.1; dlerror: libnvidia-fatbinaryloader.so.367.57: cannot open shared object file: No such file or directory I tensorflow\/stream_executor\/dso_loader.cc:128] successfully opened CUDA library libcurand.so locally >>> ``` after which I think it just switches to running on the cpu. EDIT: After I nuked everything , started from scratch. Now I get this: ``` >>> import tensorflow I tensorflow\/stream_executor\/dso_loader.cc:128] successfully opened CUDA library libcublas.so locally I tensorflow\/stream_executor\/dso_loader.cc:128] successfully opened CUDA library libcudnn.so locally I tensorflow\/stream_executor\/dso_loader.cc:128] successfully opened CUDA library libcufft.so locally I tensorflow\/stream_executor\/dso_loader.cc:119] Couldn't open CUDA library libcuda.so.1. 
LD_LIBRARY_PATH: :\/usr\/local\/cuda\/lib64:\/usr\/local\/cuda\/extras\/CUPTI\/lib64 I tensorflow\/stream_executor\/cuda\/cuda_diagnostics.cc:165] hostname: workLaptop I tensorflow\/stream_executor\/cuda\/cuda_diagnostics.cc:189] libcuda reported version is: Not found: was unable to find libcuda.so DSO loaded into this program I tensorflow\/stream_executor\/cuda\/cuda_diagnostics.cc:193] kernel reported version is: Permission denied: could not open driver version path for reading: \/proc\/driver\/nvidia\/version I tensorflow\/stream_executor\/cuda\/cuda_gpu_executor.cc:1092] LD_LIBRARY_PATH: :\/usr\/local\/cuda\/lib64:\/usr\/local\/cuda\/extras\/CUPTI\/lib64 I tensorflow\/stream_executor\/cuda\/cuda_gpu_executor.cc:1093] failed to find libcuda.so on this system: Failed precondition: could not dlopen DSO: libcuda.so.1; dlerror: libnvidia-fatbinaryloader.so.367.57: cannot open shared object file: No such file or directory I tensorflow\/stream_executor\/dso_loader.cc:128] successfully opened CUDA library libcurand.so locally ```", "response":"libcuda.so.1 is a symlink to a file that is specific to the version of your NVIDIA drivers. It may be pointing to the wrong version or it may not exist. ``` # See where the link is pointing. ls \/usr\/lib\/x86_64-linux-gnu\/libcuda.so.1 -la # My result: # lrwxrwxrwx 1 root root 19 Feb 22 20:40 \\ # \/usr\/lib\/x86_64-linux-gnu\/libcuda.so.1 -> .\/libcuda.so.375.39 # Make sure it is pointing to the right version. # Compare it with the installed NVIDIA driver. nvidia-smi # Replace libcuda.so.1 with a link to the correct version cd \/usr\/lib\/x86_64-linux-gnu sudo ln -f -s libcuda.so. libcuda.so.1 ``` Now in the same way, make another symlink from libcuda.so.1 to a link of the same name in your LD_LIBRARY_PATH directory. 
You may also find that you need to create a link to libcuda.so.1 in \/usr\/lib\/x86_64-linux-gnu named libcuda.so", "best_answers_score":0.8, "library_name":"tensorflow", "question_url":"https:\/\/stackoverflow.com\/questions\/41890549\/tensorflow-cannot-open-libcuda-so-1", "best_answers_votes":20, "question_length":3076, "response_length":826 }, { "question":"Why would I ever use tf.concat instead of tf.stack? Is there a good reason to use tf.concat instead of tf.stack? They seem very similar. Is it just to guarantee that the resulting tensor will have the same number of dimensions as the input list of tensors?", "response":"Actually, I've misunderstood how tf.stack works. If the axis parameter is within the range of the existing dimensions, a new axis will be inserted at that index. Example: ``` import tensorflow as tf t1 = tf.random_normal([1, 3]) t2 = tf.random_normal([1, 3]) tf.stack([t1, t2], axis=1).shape.as_list() == [1, 2, 3] tf.concat([t1, t2], axis=1).shape.as_list() == [1, 6] ```", "best_answers_score":0.8, "library_name":"tensorflow", "question_url":"https:\/\/stackoverflow.com\/questions\/41534593\/why-would-i-ever-use-tf-concat-instead-of-tf-stack", "best_answers_votes":45, "question_length":256, "response_length":372 }, { "question":"Is it possible to modify an existing TensorFlow computation graph? TensorFlow graph is usually built gradually from inputs to outputs, and then executed. Looking at the Python code, the inputs lists of operations are immutable which suggests that the inputs should not be modified. Does that mean that there is no way to update\/modify an existing graph?", "response":"The TensorFlow tf.Graph class is an append-only data structure, which means that you can add nodes to the graph after executing part of the graph, but you cannot remove or modify existing nodes. 
Since TensorFlow executes only the necessary subgraph when you call Session.run(), there is no execution-time cost to having redundant nodes in the graph (although they will continue to consume memory). To remove all nodes in the graph, you can create a session with a new graph: ``` with tf.Graph().as_default(): # Create a new graph, and make it the default. with tf.Session() as sess: # `sess` will use the new, currently empty, graph. # Build graph and execute nodes in here. ```", "best_answers_score":0.8, "library_name":"tensorflow", "question_url":"https:\/\/stackoverflow.com\/questions\/34235225\/is-it-possible-to-modify-an-existing-tensorflow-computation-graph", "best_answers_votes":25, "question_length":353, "response_length":678 }, { "question":"What is the TensorFlow checkpoint meta file? When saving a checkpoint, TensorFlow often saves a meta file: my_model.ckpt.meta. What is in that file, can we still restore a model even if we delete it and what kind of info did we lose if we restore a model without the meta file?", "response":"This file contains a serialized MetaGraphDef protocol buffer. The MetaGraphDef is designed as a serialization format that includes all of the information required to restore a training or inference process (including the GraphDef that describes the dataflow, and additional annotations that describe the variables, input pipelines, and other relevant information). For example, the MetaGraphDef is used by TensorFlow Serving to start an inference service based on your trained model. We are investigating other tools that could use the MetaGraphDef for training. Assuming that you still have the Python code for your model, you do not need the MetaGraphDef to restore the model, because you can reconstruct all of the information in the MetaGraphDef by re-executing the Python code that builds the model. 
To restore from a checkpoint, you only need the checkpoint files that contain the trained weights, which are written periodically to the same directory.", "best_answers_score":0.8, "library_name":"tensorflow", "question_url":"https:\/\/stackoverflow.com\/questions\/36195454\/what-is-the-tensorflow-checkpoint-meta-file", "best_answers_votes":39, "question_length":277, "response_length":957 }, { "question":"How does asynchronous training work in distributed Tensorflow? I've read Distributed Tensorflow Doc, and it mentions that in asynchronous training, each replica of the graph has an independent training loop that executes without coordination. From what I understand, if we use parameter-server with data parallelism architecture, it means each worker computes gradients and updates its own weights without caring about other workers updates for distributed training Neural Network. As all weights are shared on parameter server (ps), I think ps still has to coordinate (or aggregate) weight updates from all workers in some way. I wonder how does the aggregation work in asynchronous training. Or in more general words, how does asynchronous training work in distributed Tensorflow?", "response":"When you train asynchronously in Distributed TensorFlow, a particular worker does the following: The worker reads all of the shared model parameters in parallel from the PS task(s), and copies them to the worker task. These reads are uncoordinated with any concurrent writes, and no locks are acquired: in particular the worker may see partial updates from one or more other workers (e.g. a subset of the updates from another worker may have been applied, or a subset of the elements in a variable may have been updated). The worker computes gradients locally, based on a batch of input data and the parameter values that it read in step 1. 
The worker sends the gradients for each variable to the appropriate PS task, and applies the gradients to their respective variable, using an update rule that is determined by the optimization algorithm (e.g. SGD, SGD with Momentum, Adagrad, Adam, etc.). The update rules typically use (approximately) commutative operations, so they may be applied independently on the updates from each worker, and the state of each variable will be a running aggregate of the sequence of updates received. In asynchronous training, each update from the worker is applied concurrently, and the updates may be somewhat coordinated if the optional use_locking=True flag was set when the respective optimizer (e.g. tf.train.GradientDescentOptimizer) was initialized. Note however that the locking here only provides mutual exclusion for two concurrent updates, and (as noted above) reads do not acquire locks; the locking does not provide atomicity across the entire set of updates. (By contrast, in synchronous training, a utility like tf.train.SyncReplicasOptimizer will ensure that all of the workers read the same, up-to-date values for each model parameter; and that all of the updates for a synchronous step are aggregated before they are applied to the underlying variables. To do this, the workers are synchronized by a barrier, which they enter after sending their gradient update, and leave after the aggregated update has been applied to all variables.)", "best_answers_score":0.8, "library_name":"tensorflow", "question_url":"https:\/\/stackoverflow.com\/questions\/43147435\/how-does-asynchronous-training-work-in-distributed-tensorflow", "best_answers_votes":32, "question_length":782, "response_length":2087 }, { "question":"TensorFlow: using a tensor to index another tensor I have a basic question about how to do indexing in TensorFlow. 
In numpy: ``` x = np.asarray([1,2,3,3,2,5,6,7,1,3]) e = np.asarray([0,1,0,1,1,1,0,1]) #numpy print x * e[x] ``` I can get ``` [1 0 3 3 0 5 0 7 1 3] ``` How can I do this in TensorFlow? ``` x = np.asarray([1,2,3,3,2,5,6,7,1,3]) e = np.asarray([0,1,0,1,1,1,0,1]) x_t = tf.constant(x) e_t = tf.constant(e) with tf.Session(): ???? ``` Thanks!", "response":"Fortunately, the exact case you're asking about is supported in TensorFlow by tf.gather(): ``` result = x_t * tf.gather(e_t, x_t) with tf.Session() as sess: print sess.run(result) # ==> 'array([1, 0, 3, 3, 0, 5, 0, 7, 1, 3])' ``` The tf.gather() op is less powerful than NumPy's advanced indexing: it only supports extracting full slices of a tensor on its 0th dimension. Support for more general indexing has been requested, and is being tracked in this GitHub issue.", "best_answers_score":0.8, "library_name":"tensorflow", "question_url":"https:\/\/stackoverflow.com\/questions\/35842598\/tensorflow-using-a-tensor-to-index-another-tensor", "best_answers_votes":36, "question_length":453, "response_length":468 }, { "question":"TensorFlow Training Assuming I have a very simple neural network, like a multilayer perceptron. For each layer the activation function is sigmoid and the network is fully connected. 
In TensorFlow this might be defined like this: ```py sess = tf.InteractiveSession() # Training Tensor x = tf.placeholder(tf.float32, shape = [None, n_fft]) # Label Tensor y_ = tf.placeholder(tf.float32, shape = [None, n_fft]) # Declaring variable buffer for weights W and bias b # Layer structure [n_fft, n_fft, n_fft, n_fft] # Input -> Layer 1 struct_w = [n_fft, n_fft] struct_b = [n_fft] W1 = weight_variable(struct_w, 'W1') b1 = bias_variable(struct_b, 'b1') h1 = tf.nn.sigmoid(tf.matmul(x, W1) + b1) # Layer1 -> Layer 2 W2 = weight_variable(struct_w, 'W2') b2 = bias_variable(struct_b, 'b2') h2 = tf.nn.sigmoid(tf.matmul(h1, W2) + b2) # Layer2 -> output W3 = weight_variable(struct_w, 'W3') b3 = bias_variable(struct_b, 'b3') y = tf.nn.sigmoid(tf.matmul(h2, W3) + b3) # Calculating difference between label and output using mean square error mse = tf.reduce_mean(tf.square(y - y_)) # Train the Model # Gradient Descent train_step = tf.train.GradientDescentOptimizer(0.3).minimize(mse) ``` The design target for this model is to map a n_fft points fft spectrogram to another n_fft target spectrogram. Let's assume both the training data and target data are of size [3000, n_fft]. They are stored in variables spec_train and spec_target. Now here comes the question. For TensorFlow is there any difference between these two trainings? Training 1: ```py for i in xrange(200): train_step.run(feed_dict = {x: spec_train, y_: spec_target}) ``` Training 2: ```py for i in xrange(200): for j in xrange(3000): train = spec_train[j, :].reshape(1, n_fft) label = spec_target[j, :].reshape(1, n_fft) train_step.run(feed_dict = {x: train, y_: label}) ``` Thank you very much!", "response":"In the first training version, you are training the entire batch of training data at once, which means that the first and the 3000th element of spec_train will be processed using the same model parameters in a single step. This is known as (Batch) Gradient Descent. 
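The contrast between the two training loops from the question can be sketched with a toy one-parameter model in plain numpy (an illustration only, not the network defined above):

```python
# Toy contrast between the two loops from the question: one gradient
# step on all 3000 examples at once, versus 3000 per-example steps.
# Hypothetical linear model y = w*x with squared-error loss; this is
# an illustration, not the TensorFlow graph above.
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=3000)
y = 2.0 * x                          # ground truth: w = 2

def grad(w, xb, yb):
    # d/dw of mean((w*x - y)^2) over the given (mini-)batch
    return np.mean(2.0 * (w * xb - yb) * xb)

lr = 0.1

# Training 1: a single update computed from the whole batch
w_batch = 0.0
w_batch -= lr * grad(w_batch, x, y)

# Training 2: one update per example; each step sees parameters
# already modified by all earlier examples
w_sgd = 0.0
for xi, yi in zip(x, y):
    w_sgd -= lr * grad(w_sgd, np.array([xi]), np.array([yi]))

print(w_batch, w_sgd)
```

One full-batch step only moves w part of the way toward 2, while the per-example sweep effectively converges on this noise-free toy problem; in practice mini-batches trade off between these extremes.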
In the second training version, you are training a single example from the training data at once, which means that the 3000th element of spec_train will be processed using model parameters that have been updated 2999 times since the first element was most recently processed. This is known as Stochastic Gradient Descent (or it would be if the element was selected at random). In general, TensorFlow is used with datasets that are too large to process in one batch, so mini-batch SGD (where a subset of the examples are processed in one step) is favored. Processing a single element at a time is theoretically desirable, but is inherently sequential and has high fixed costs because the matrix multiplications and other operations are not as computationally dense. Therefore, processing a small batch (e.g. 32 or 128) of examples at once is the usual approach, with multiple replicas training on different batches in parallel. See this Stats StackExchange question for a more theoretical discussion of when you should use one approach versus the other.", "best_answers_score":0.8, "library_name":"tensorflow", "question_url":"https:\/\/stackoverflow.com\/questions\/34097457\/tensorflow-training", "best_answers_votes":36, "question_length":1847, "response_length":1318 }, { "question":"How to interpret TensorFlow output? How do I interpret the TensorFlow output for building and executing computational graphs on GPGPUs? Given the following command that executes an arbitrary tensorflow script using the python API. python3 tensorflow_test.py > out The first part stream_executor seems like its loading dependencies. 
``` I tensorflow\/stream_executor\/dso_loader.cc:105] successfully opened CUDA library libcublas.so locally I tensorflow\/stream_executor\/dso_loader.cc:105] successfully opened CUDA library libcudnn.so locally I tensorflow\/stream_executor\/dso_loader.cc:105] successfully opened CUDA library libcufft.so locally I tensorflow\/stream_executor\/dso_loader.cc:105] successfully opened CUDA library libcuda.so.1 locally I tensorflow\/stream_executor\/dso_loader.cc:105] successfully opened CUDA library libcurand.so locally ``` What is a NUMA node? ``` I tensorflow\/stream_executor\/cuda\/cuda_gpu_executor.cc:900] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero ``` I assume this is when it finds the available GPU ``` I tensorflow\/core\/common_runtime\/gpu\/gpu_init.cc:102] Found device 0 with properties: name: Tesla K40c major: 3 minor: 5 memoryClockRate (GHz) 0.745 pciBusID 0000:01:00.0 Total memory: 11.25GiB Free memory: 11.15GiB ``` Some gpu initialization? what is DMA? ``` I tensorflow\/core\/common_runtime\/gpu\/gpu_init.cc:126] DMA: 0 I tensorflow\/core\/common_runtime\/gpu\/gpu_init.cc:136] 0: Y I tensorflow\/core\/common_runtime\/gpu\/gpu_device.cc:755] Creating TensorFlow device (\/gpu:0) -> (device: 0, name: Tesla K40c, pci bus id: 0000:01:00.0) ``` Why does it throw an error E? 
``` E tensorflow\/stream_executor\/cuda\/cuda_driver.cc:932] failed to allocate 11.15G (11976531968 bytes) from device: CUDA_ERROR_OUT_OF_MEMORY ``` Great answer to what the pool_allocator does: https:\/\/stackoverflow.com\/a\/35166985\/4233809 ``` I tensorflow\/core\/common_runtime\/gpu\/pool_allocator.cc:244] PoolAllocator: After 3160 get requests, put_count=2958 evicted_count=1000 eviction_rate=0.338066 and unsatisfied allocation rate=0.412025 I tensorflow\/core\/common_runtime\/gpu\/pool_allocator.cc:256] Raising pool_size_limit_ from 100 to 110 I tensorflow\/core\/common_runtime\/gpu\/pool_allocator.cc:244] PoolAllocator: After 1743 get requests, put_count=1970 evicted_count=1000 eviction_rate=0.507614 and unsatisfied allocation rate=0.456684 I tensorflow\/core\/common_runtime\/gpu\/pool_allocator.cc:256] Raising pool_size_limit_ from 256 to 281 I tensorflow\/core\/common_runtime\/gpu\/pool_allocator.cc:244] PoolAllocator: After 1986 get requests, put_count=2519 evicted_count=1000 eviction_rate=0.396983 and unsatisfied allocation rate=0.264854 I tensorflow\/core\/common_runtime\/gpu\/pool_allocator.cc:256] Raising pool_size_limit_ from 655 to 720 I tensorflow\/core\/common_runtime\/gpu\/pool_allocator.cc:244] PoolAllocator: After 28728 get requests, put_count=28680 evicted_count=1000 eviction_rate=0.0348675 and unsatisfied allocation rate=0.0418407 I tensorflow\/core\/common_runtime\/gpu\/pool_allocator.cc:256] Raising pool_size_limit_ from 1694 to 1863 ```", "response":"About NUMA -- https:\/\/software.intel.com\/en-us\/articles\/optimizing-applications-for-numa Roughly speaking, if you have dual-socket CPU, they will each have their own memory and have to access the other processor's memory through a slower QPI link. So each CPU+memory is a NUMA node. 
Potentially you could treat two different NUMA nodes as two different devices and structure your network to optimize for different within-node\/between-node bandwidth. However, I don't think there's enough wiring in TF to do this right now. The detection doesn't work either -- I just tried on a machine with 2 NUMA nodes, and it still printed the same message and initialized to 1 NUMA node. DMA = Direct Memory Access. You could potentially copy things from one GPU to another GPU without utilizing the CPU (i.e., through NVLink). NVLink integration isn't there yet. As for the error, TensorFlow tries to allocate close to the GPU's maximum memory, so it sounds like some of your GPU memory has already been allocated to something else and the allocation failed. You can do something like below to avoid allocating so much memory ``` config = tf.ConfigProto(log_device_placement=True) config.gpu_options.per_process_gpu_memory_fraction=0.3 # don't hog all vRAM config.operation_timeout_in_ms=15000 # terminate on long hangs sess = tf.InteractiveSession(\"\", config=config) ```", "best_answers_score":0.8, "library_name":"tensorflow", "question_url":"https:\/\/stackoverflow.com\/questions\/36838770\/how-to-interpret-tensorflow-output", "best_answers_votes":28, "question_length":3135, "response_length":1353 }, { "question":"What are the constraints for tensorflow scope names? I'm running a tensorflow model and getting the following error: ```none ValueError: 'Cement (component 1)(kg in a m^3 mixture)' is not a valid scope name. ``` I get that tensorflow probably doesn't like special chars and spaces in its scope names, but I'm trying to find an actual doc on what chars are allowed. Does anyone know where I could find this?", "response":"From the TF source: NOTE: This constructor validates the given name. 
Valid scope names match one of the following regular expressions: ``` [A-Za-z0-9.][A-Za-z0-9_.\\\\-\/]* (for scopes at the root) [A-Za-z0-9_.\\\\-\/]* (for other scopes) ```", "best_answers_score":0.8, "library_name":"tensorflow", "question_url":"https:\/\/stackoverflow.com\/questions\/44664285\/what-are-the-constraints-for-tensorflow-scope-names", "best_answers_votes":29, "question_length":406, "response_length":236 }, { "question":"Why neural network predicts wrong on its own training data? I made a LSTM (RNN) neural network with supervised learning for data stock prediction. The problem is why it predicts wrong on its own training data? (note: reproducible example below) I created simple model to predict next 5 days stock price: ``` model = Sequential() model.add(LSTM(32, activation='sigmoid', input_shape=(x_train.shape[1], x_train.shape[2]))) model.add(Dense(y_train.shape[1])) model.compile(optimizer='adam', loss='mse') es = EarlyStopping(monitor='val_loss', patience=3, restore_best_weights=True) model.fit(x_train, y_train, batch_size=64, epochs=25, validation_data=(x_test, y_test), callbacks=[es]) ``` The correct results are in y_test (5 values), so model trains, looking back 90 previous days and then restore weights from best (val_loss=0.0030) result with patience=3: ``` Train on 396 samples, validate on 1 samples Epoch 1\/25 396\/396 [==============================] - 1s 2ms\/step - loss: 0.1322 - val_loss: 0.0299 Epoch 2\/25 396\/396 [==============================] - 0s 402us\/step - loss: 0.0478 - val_loss: 0.0129 Epoch 3\/25 396\/396 [==============================] - 0s 397us\/step - loss: 0.0385 - val_loss: 0.0178 Epoch 4\/25 396\/396 [==============================] - 0s 399us\/step - loss: 0.0398 - val_loss: 0.0078 Epoch 5\/25 396\/396 [==============================] - 0s 391us\/step - loss: 0.0343 - val_loss: 0.0030 Epoch 6\/25 396\/396 [==============================] - 0s 391us\/step - loss: 0.0318 - val_loss: 0.0047 Epoch 7\/25 396\/396 
[==============================] - 0s 389us\/step - loss: 0.0308 - val_loss: 0.0043 Epoch 8\/25 396\/396 [==============================] - 0s 393us\/step - loss: 0.0292 - val_loss: 0.0056 ``` Prediction result is pretty awesome, isn't it? That's because algorithm restored best weights from #5 epoch. Okey, let's now save this model to .h5 file, move back -10 days and predict last 5 days (at first example we made model and validate on 17-23 April including day off weekends, now let's test on 2-8 April). Result: It shows absolutely wrong direction. As we see that's because model was trained and took #5 epoch best for validation set on 17-23 April, but not on 2-8. If I try train more, playing with what epoch to choose, whatever I do, there are always a lot of time intervals in the past that have wrong prediction. Why does model show wrong results on its own trained data? I trained data, it must remember how to predict data on this piece of set, but predicts wrong. What I also tried: Use large data sets with 50k+ rows, 20 years stock prices, adding more or less features Create different types of model, like adding more hidden layers, different batch_sizes, different layers activations, dropouts, batchnormalization Create custom EarlyStopping callback, get average val_loss from many validation data sets and choose the best Maybe I miss something? What can I improve? Here is very simple and reproducible example. yfinance downloads S&P 500 stock data. 
``` \"\"\"python 3.7.7 tensorflow 2.1.0 keras 2.3.1\"\"\" import numpy as np import pandas as pd from keras.callbacks import EarlyStopping, Callback from keras.models import Model, Sequential, load_model from keras.layers import Dense, Dropout, LSTM, BatchNormalization from sklearn.preprocessing import MinMaxScaler import plotly.graph_objects as go import yfinance as yf np.random.seed(4) num_prediction = 5 look_back = 90 new_s_h5 = True # change it to False when you created model and want test on other past dates df = yf.download(tickers=\"^GSPC\", start='2018-05-06', end='2020-04-24', interval=\"1d\") data = df.filter(['Close', 'High', 'Low', 'Volume']) # drop last N days to validate saved model on past df.drop(df.tail(0).index, inplace=True) print(df) class EarlyStoppingCust(Callback): def __init__(self, patience=0, verbose=0, validation_sets=None, restore_best_weights=False): super(EarlyStoppingCust, self).__init__() self.patience = patience self.verbose = verbose self.wait = 0 self.stopped_epoch = 0 self.restore_best_weights = restore_best_weights self.best_weights = None self.validation_sets = validation_sets def on_train_begin(self, logs=None): self.wait = 0 self.stopped_epoch = 0 self.best_avg_loss = (np.Inf, 0) def on_epoch_end(self, epoch, logs=None): loss_ = 0 for i, validation_set in enumerate(self.validation_sets): predicted = self.model.predict(validation_set[0]) loss = self.model.evaluate(validation_set[0], validation_set[1], verbose = 0) loss_ += loss if self.verbose > 0: print('val' + str(i + 1) + '_loss: %.5f' % loss) avg_loss = loss_ \/ len(self.validation_sets) print('avg_loss: %.5f' % avg_loss) if self.best_avg_loss[0] > avg_loss: self.best_avg_loss = (avg_loss, epoch + 1) self.wait = 0 if self.restore_best_weights: print('new best epoch = %d' % (epoch + 1)) self.best_weights = self.model.get_weights() else: self.wait += 1 if self.wait >= self.patience or self.params['epochs'] == epoch + 1: self.stopped_epoch = epoch self.model.stop_training = 
True if self.restore_best_weights: if self.verbose > 0: print('Restoring model weights from the end of the best epoch') self.model.set_weights(self.best_weights) def on_train_end(self, logs=None): print('best_avg_loss: %.5f (#%d)' % (self.best_avg_loss[0], self.best_avg_loss[1])) def multivariate_data(dataset, target, start_index, end_index, history_size, target_size, step, single_step=False): data = [] labels = [] start_index = start_index + history_size if end_index is None: end_index = len(dataset) - target_size for i in range(start_index, end_index): indices = range(i-history_size, i, step) data.append(dataset[indices]) if single_step: labels.append(target[i+target_size]) else: labels.append(target[i:i+target_size]) return np.array(data), np.array(labels) def transform_predicted(pr): pr = pr.reshape(pr.shape[1], -1) z = np.zeros((pr.shape[0], x_train.shape[2] - 1), dtype=pr.dtype) pr = np.append(pr, z, axis=1) pr = scaler.inverse_transform(pr) pr = pr[:, 0] return pr step = 1 # creating datasets with look back scaler = MinMaxScaler() df_normalized = scaler.fit_transform(df.values) dataset = df_normalized[:-num_prediction] x_train, y_train = multivariate_data(dataset, dataset[:, 0], 0,len(dataset) - num_prediction + 1, look_back, num_prediction, step) indices = range(len(dataset)-look_back, len(dataset), step) x_test = np.array(dataset[indices]) x_test = np.expand_dims(x_test, axis=0) y_test = np.expand_dims(df_normalized[-num_prediction:, 0], axis=0) # creating past datasets to validate with EarlyStoppingCust number_validates = 50 step_past = 5 validation_sets = [(x_test, y_test)] for i in range(1, number_validates * step_past + 1, step_past): indices = range(len(dataset)-look_back-i, len(dataset)-i, step) x_t = np.array(dataset[indices]) x_t = np.expand_dims(x_t, axis=0) y_t = np.expand_dims(df_normalized[-num_prediction-i:len(df_normalized)-i, 0], axis=0) validation_sets.append((x_t, y_t)) if new_s_h5: model = Sequential() model.add(LSTM(32, 
return_sequences=False, activation = 'sigmoid', input_shape=(x_train.shape[1], x_train.shape[2]))) # model.add(Dropout(0.2)) # model.add(BatchNormalization()) # model.add(LSTM(units = 16)) model.add(Dense(y_train.shape[1])) model.compile(optimizer = 'adam', loss = 'mse') # EarlyStoppingCust is custom callback to validate each validation_sets and get average # it takes epoch with best \"best_avg\" value # es = EarlyStoppingCust(patience = 3, restore_best_weights = True, validation_sets = validation_sets, verbose = 1) # or there is keras extension with built-in EarlyStopping, but it validates only 1 set that you pass through fit() es = EarlyStopping(monitor = 'val_loss', patience = 3, restore_best_weights = True) model.fit(x_train, y_train, batch_size = 64, epochs = 25, shuffle = True, validation_data = (x_test, y_test), callbacks = [es]) model.save('s.h5') else: model = load_model('s.h5') predicted = model.predict(x_test) predicted = transform_predicted(predicted) print('predicted', predicted) print('real', df.iloc[-num_prediction:, 0].values) print('val_loss: %.5f' % (model.evaluate(x_test, y_test, verbose=0))) fig = go.Figure() fig.add_trace(go.Scatter( x = df.index[-60:], y = df.iloc[-60:,0], mode='lines+markers', name='real', line=dict(color='#ff9800', width=1) )) fig.add_trace(go.Scatter( x = df.index[-num_prediction:], y = predicted, mode='lines+markers', name='predict', line=dict(color='#2196f3', width=1) )) fig.update_layout(template='plotly_dark', hovermode='x', spikedistance=-1, hoverlabel=dict(font_size=16)) fig.update_xaxes(showspikes=True) fig.update_yaxes(showspikes=True) fig.show() ```", "response":"The OP postulates an interesting finding. Let me simplify the original question as follows. If the model is trained on a particular time series, why can't the model reconstruct previous time series data, which it was already trained on? Well, the answer is embedded in the training progress itself. 
Since EarlyStopping is used here to avoid overfitting, the best model is saved at epoch=5, where val_loss=0.0030 as mentioned by the OP. At this instance, the training loss is equal to 0.0343, that is, the RMSE of training is 0.185. Since the dataset is scaled using MinMaxScaler, we need to undo the scaling of the RMSE to understand what's going on. The minimum and maximum values of the time sequence are found to be 2290 and 3380. Therefore, having 0.185 as the RMSE of training means that, even for the training set, the predicted values may differ from the ground truth values by approximately 0.185*(3380-2290), that is ~200 units on average. This explains why there is a big difference when predicting the training data itself at a previous time step. What should I do to perfectly emulate training data? I asked myself this question. The simple answer is to make the training loss approach 0, that is, to overfit the model. After some training, I realized that a model with only 1 LSTM layer that has 32 cells is not complex enough to reconstruct the training data. Therefore, I have added another LSTM layer as follows. ``` model = Sequential() model.add(LSTM(32, return_sequences=True, activation = 'sigmoid', input_shape=(x_train.shape[1], x_train.shape[2]))) # model.add(Dropout(0.2)) # model.add(BatchNormalization()) model.add(LSTM(units = 64, return_sequences=False,)) model.add(Dense(y_train.shape[1])) model.compile(optimizer = 'adam', loss = 'mse') ``` And the model is trained for 1000 epochs without considering EarlyStopping. ``` model.fit(x_train, y_train, batch_size = 64, epochs = 1000, shuffle = True, validation_data = (x_test, y_test)) ``` At the end of the 1000th epoch we have a training loss of 0.00047, which is much lower than the training loss in your case. So we would expect the model to reconstruct the training data better. Following is the prediction plot for Apr 2-8. 
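The un-scaling arithmetic can be double-checked in isolation; the sketch below uses only the numbers quoted in this answer (training loss 0.0343, data range 2290 to 3380) and is not part of the model code:

```python
# Sketch of undoing MinMaxScaler scaling for the RMSE, using only the
# numbers quoted in this answer. Illustrative arithmetic only.
import math

data_min, data_max = 2290.0, 3380.0
mse_train = 0.0343                       # training loss at the best epoch

rmse_scaled = math.sqrt(mse_train)       # RMSE in [0, 1] scaled units
rmse_real = rmse_scaled * (data_max - data_min)   # back to price units

print(round(rmse_scaled, 3))   # 0.185
print(round(rmse_real, 1))     # 201.9, i.e. the "~200 units" above
```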
A Final Note: Training on a particular dataset does not necessarily mean that the model should be able to perfectly reconstruct the training data. Especially when methods such as early stopping, regularization and dropout are introduced to avoid overfitting, the model tends to be more generalizable rather than memorizing the training data.", "best_answers_score":0.8, "library_name":"tensorflow", "question_url":"https:\/\/stackoverflow.com\/questions\/61425296\/why-neural-network-predicts-wrong-on-its-own-training-data", "best_answers_votes":15, "question_length":8591, "response_length":2543 }, { "question":"What is the difference between tensors and sparse tensors? I am having trouble understanding the meaning and usages for Tensorflow Tensors and Sparse Tensors. According to the documentation Tensor Tensor is a typed multi-dimensional array. For example, you can represent a mini-batch of images as a 4-D array of floating point numbers with dimensions [batch, height, width, channels]. Sparse Tensor TensorFlow represents a sparse tensor as three separate dense tensors: indices, values, and shape. In Python, the three tensors are collected into a SparseTensor class for ease of use. If you have separate indices, values, and shape tensors, wrap them in a SparseTensor object before passing to the ops below. My understanding is that Tensors are used for operations, input and output. And a Sparse Tensor is just another representation of a Tensor (dense?). Hope someone can further explain the differences, and the use cases for them.", "response":"Matthew did a great job but I would love to give an example to shed more light on sparse tensors with an example. If a tensor has lots of values that are zero, it can be called sparse. 
Let's consider a sparse 1-D Tensor ``` [0, 7, 0, 0, 8, 0, 0, 0, 0] ``` A sparse representation of the same tensor will focus only on the non-zero values ``` values = [7,8] ``` We also have to remember where those values occur, by their indices ``` indices = [1,4] ``` The one-dimensional indices form will work with some methods, for this one-dimensional example, but in general indices have multiple dimensions, so it will be more consistent (and work everywhere) to represent indices like this: ``` indices = [[1], [4]] ``` With values and indices, we don't have quite enough information yet. How many zeros are there? We also record the dense shape of the tensor. ``` dense_shape = [9] ``` These three things together, values, indices, and dense_shape, are a sparse representation of the tensor. In TensorFlow 2.0 it can be implemented as ``` x = tf.SparseTensor(values=[7,8],indices=[[1],[4]],dense_shape=[9]) x #o\/p: print(x.values) print(x.dense_shape) print(x.indices) #o\/p: tf.Tensor([7 8], shape=(2,), dtype=int32) tf.Tensor([9], shape=(1,), dtype=int64) tf.Tensor( [[1] [4]], shape=(2, 1), dtype=int64) ``` EDITED to correct indices as pointed out in the comments.", "best_answers_score":0.8, "library_name":"tensorflow", "question_url":"https:\/\/stackoverflow.com\/questions\/47662143\/what-is-the-difference-between-tensors-and-sparse-tensors", "best_answers_votes":40, "question_length":930, "response_length":1350 }, { "question":"How to initialise only optimizer variables in Tensorflow? I want to use MomentumOptimizer in Tensorflow. However, since this optimizer uses some internal variable, attempting to use it without initializing this variable yields an error: FailedPreconditionError (see above for traceback): Attempting to use uninitialized value Variable_2\/Momentum This can be easily solved by initializing all variables, using for example ``` tf.global_variables_initializer().run() ``` However, I do not want to initialize all the variables - only those of the optimizer. 
Is there any way to do this?", "response":"Both current answers kinda work by filtering the variable name using the 'Momentum' string. But that is very brittle on two sides: It could silently (re-)initialize some other variables you don't actually want to reset! Either simply because of a name-clash, or because you have a more complex graph and optimize different parts separately, for example. It will only work for one specific optimizer, and how do you know the names to look out for for others? Bonus: an update to tensorflow might silently break your code. Fortunately, tensorflow's abstract Optimizer class has a mechanism for that, these extra optimizer variables are called \"slots\", and you can get all slot names of an optimizer using the get_slot_names() method: ``` opt = tf.train.MomentumOptimizer(...) print(opt.get_slot_names()) # prints ['momentum'] ``` And you can get the variable corresponding to the slot for a specific (trainable) variable v using the get_slot(var, slot_name) method: ``` opt.get_slot(some_var, 'momentum') ``` Putting all this together, you can create an op that initializes the optimizer's state as follows: ``` var_list = # list of vars to optimize, e.g. # tf.get_collection(tf.GraphKeys.TRAINABLE_VARIABLES) opt = tf.train.MomentumOptimizer(0.1, 0.95) step_op = opt.minimize(loss, var_list=var_list) reset_opt_op = tf.variables_initializer([opt.get_slot(var, name) for name in opt.get_slot_names() for var in var_list]) ``` This will really only reset the correct variables, and be robust across optimizers. Except for one unfortunate caveat: AdamOptimizer. That one also keeps a counter for how often it's been called. That means you should really think hard about what you're doing here anyways, but for completeness' sake, you can get its extra states as opt._get_beta_accumulators(). 
The returned list should be added to the list in the above reset_opt_op line.", "best_answers_score":0.8, "library_name":"tensorflow", "question_url":"https:\/\/stackoverflow.com\/questions\/41533489\/how-to-initialise-only-optimizer-variables-in-tensorflow", "best_answers_votes":21, "question_length":579, "response_length":1865 }, { "question":"Tensorflow multiple sessions with multiple GPUs I have a workstation with 2 GPUs and I am trying to run multiple tensorflow jobs at the same time, so I can train more than one model at once, etc. For example, I've tried to separate the sessions into different resources via the python API using in script1.py: ``` with tf.device(\"\/gpu:0\"): # do stuff ``` in script2.py: ``` with tf.device(\"\/gpu:1\"): # do stuff ``` in script3.py ``` with tf.device(\"\/cpu:0\"): # do stuff ``` If I run each script by itself I can see that it is using the specified device. (Also the models fit very well into a single GPU and doesn't use another one even if both are available.) 
However, if one script is running and I try to run another, I always get this error: ``` I tensorflow\/core\/common_runtime\/local_device.cc:40] Local device intra op parallelism threads: 8 I tensorflow\/stream_executor\/cuda\/cuda_gpu_executor.cc:909] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero I tensorflow\/core\/common_runtime\/gpu\/gpu_init.cc:103] Found device 0 with properties: name: GeForce GTX 980 major: 5 minor: 2 memoryClockRate (GHz) 1.2155 pciBusID 0000:01:00.0 Total memory: 4.00GiB Free memory: 187.65MiB I tensorflow\/stream_executor\/cuda\/cuda_gpu_executor.cc:909] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero I tensorflow\/core\/common_runtime\/gpu\/gpu_init.cc:103] Found device 1 with properties: name: GeForce GTX 980 major: 5 minor: 2 memoryClockRate (GHz) 1.2155 pciBusID 0000:04:00.0 Total memory: 4.00GiB Free memory: 221.64MiB I tensorflow\/core\/common_runtime\/gpu\/gpu_init.cc:127] DMA: 0 1 I tensorflow\/core\/common_runtime\/gpu\/gpu_init.cc:137] 0: Y Y I tensorflow\/core\/common_runtime\/gpu\/gpu_init.cc:137] 1: Y Y I tensorflow\/core\/common_runtime\/gpu\/gpu_device.cc:702] Creating TensorFlow device (\/gpu:0) -> (device: 0, name: GeForce GTX 980, pci bus id: 0000:01:00.0) I tensorflow\/core\/common_runtime\/gpu\/gpu_device.cc:702] Creating TensorFlow device (\/gpu:1) -> (device: 1, name: GeForce GTX 980, pci bus id: 0000:04:00.0) I tensorflow\/core\/common_runtime\/gpu\/gpu_bfc_allocator.cc:42] Allocating 187.40MiB bytes. E tensorflow\/stream_executor\/cuda\/cuda_driver.cc:932] failed to allocate 187.40M (196505600 bytes) from device: CUDA_ERROR_OUT_OF_MEMORY F tensorflow\/core\/common_runtime\/gpu\/gpu_bfc_allocator.cc:47] Check failed: gpu_mem != nullptr Could not allocate GPU device memory for device 0. 
Tried to allocate 187.40MiB Aborted (core dumped) ``` It seems each tensorflow process is trying to grab all of the GPUs on the machine when it loads even if not all devices are going to be used to run the model. I see there is an option to limit the amount of GPU each process uses ``` tf.GPUOptions(per_process_gpu_memory_fraction=0.5) ``` ...I haven't tried it, but this seems like it would make two processes try to share 50% of each GPU instead of running each process on a separate GPU... Does anyone know how to configure tensorflow to use only one GPU and leave the other available for another tensorflow process?", "response":"TensorFlow will attempt to use (an equal fraction of the memory of) all GPU devices that are visible to it. If you want to run different sessions on different GPUs, you should do the following. Run each session in a different Python process. Start each process with a different value for the CUDA_VISIBLE_DEVICES environment variable. For example, if your script is called my_script.py and you have 4 GPUs, you could run the following: ``` $ CUDA_VISIBLE_DEVICES=0 python my_script.py # Uses GPU 0. $ CUDA_VISIBLE_DEVICES=1 python my_script.py # Uses GPU 1. $ CUDA_VISIBLE_DEVICES=2,3 python my_script.py # Uses GPUs 2 and 3. ``` Note the GPU devices in TensorFlow will still be numbered from zero (i.e. \"\/gpu:0\" etc.), but they will correspond to the devices that you have made visible with CUDA_VISIBLE_DEVICES.", "best_answers_score":0.8, "library_name":"tensorflow", "question_url":"https:\/\/stackoverflow.com\/questions\/34775522\/tensorflow-multiple-sessions-with-multiple-gpus", "best_answers_votes":60, "question_length":3166, "response_length":813 }, { "question":"Issue feeding a list into feed_dict in TensorFlow I'm trying to pass a list into feed_dict, however I'm having trouble doing so. 
Say I have: ``` inputs = 10 * [tf.placeholder(tf.float32, shape=(batch_size, input_size))] ``` where inputs is fed into some function outputs that I want to compute. So to run this in tensorflow, I created a session and ran the following: ``` sess.run(outputs, feed_dict = {inputs: data}) #data is my list of inputs, which is also of length 10 ``` but I get an error, TypeError: unhashable type: 'list'. However, I'm able to pass the data element-wise like so: ``` sess.run(outputs, feed_dict = {inputs[0]: data[0], ..., inputs[9]: data[9]}) ``` So I'm wondering if there's a way I can solve this issue. I've also tried to construct a dictionary (using a for loop), however this results in a dictionary with a single element, where the key is: tensorflow.python.framework.ops.Tensor at 0x107594a10", "response":"There are two issues that are causing problems here: The first issue is that the Session.run() call only accepts a small number of types as the keys of the feed_dict. In particular, lists of tensors are not supported as keys, so you have to put each tensor as a separate key.* One convenient way to do this is using a dictionary comprehension: ``` inputs = [tf.placeholder(...), ...] data = [np.array(...), ...] sess.run(y, feed_dict={i: d for i, d in zip(inputs, data)}) ``` The second issue is that the 10 * [tf.placeholder(...)] syntax in Python creates a list with ten elements, where each element is the same tensor object (i.e. it has the same name property, the same id property, and is reference-identical if you compare two elements from the list using inputs[i] is inputs[j]). This explains why, when you tried to create a dictionary using the list elements as keys, you ended up with a dictionary with a single element - because all of the list elements were identical. 
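This second issue is ordinary Python list semantics and can be verified without TensorFlow; the Placeholder class below is a made-up stand-in for illustration:

```python
# Plain-Python analogue of the placeholder pitfall described above:
# multiplying a single-element list copies the *reference*, not the object.
class Placeholder:
    pass

same = 10 * [Placeholder()]                    # ten references to ONE object
distinct = [Placeholder() for _ in range(10)]  # ten separate objects

print(all(p is same[0] for p in same))         # True: all identical
print(len({id(p) for p in distinct}))          # 10: all different

# A dict keyed by the "same" list collapses to a single entry,
# which is why the feed_dict built in a loop had one element.
d = {p: i for i, p in enumerate(same)}
print(len(d))                                  # 1
```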
To create 10 different placeholder tensors, as you intended, you should instead do the following: ``` inputs = [tf.placeholder(tf.float32, shape=(batch_size, input_size)) for _ in xrange(10)] ``` If you print the elements of this list, you'll see that each element is a tensor with a different name. EDIT: * You can now pass tuples as the keys of a feed_dict, because these may be used as dictionary keys.", "best_answers_score":0.8, "library_name":"tensorflow", "question_url":"https:\/\/stackoverflow.com\/questions\/33684657\/issue-feeding-a-list-into-feed-dict-in-tensorflow", "best_answers_votes":47, "question_length":926, "response_length":1383 }, { "question":"Split tensor into training and test sets Let's say I've read in a textfile using a TextLineReader. Is there some way to split this into train and test sets in Tensorflow? Something like: ``` def read_my_file_format(filename_queue): reader = tf.TextLineReader() key, record_string = reader.read(filename_queue) raw_features, label = tf.decode_csv(record_string) features = some_processing(raw_features) features_train, labels_train, features_test, labels_test = tf.train_split(features, labels, frac=.1) return features_train, labels_train, features_test, labels_test ```", "response":"As elham mentioned, you can use scikit-learn to do this easily. scikit-learn is an open source library for machine learning. There are tons of tools for data preparation including the model_selection module, which handles comparing, validating and choosing parameters. The model_selection.train_test_split() method is specifically designed to split your data into train and test sets randomly and by percentage. ``` X_train, X_test, y_train, y_test = train_test_split(features, labels, test_size=0.33, random_state=42) ``` test_size is the percentage to reserve for testing and random_state is to seed the random sampling. I typically use this to provide train and validation data sets, and keep true test data separately. 
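For intuition, the two-call pattern can be sketched in pure Python; split below is a hypothetical stand-in for train_test_split, not the scikit-learn implementation: ```
import random

def split(seq, frac, seed=42):
    # shuffle a copy of the indices and cut at the given fraction,
    # mimicking what train_test_split does with test_size=frac
    rng = random.Random(seed)
    idx = list(range(len(seq)))
    rng.shuffle(idx)
    cut = int(len(seq) * frac)
    held_out = [seq[i] for i in idx[:cut]]
    rest = [seq[i] for i in idx[cut:]]
    return rest, held_out

data = list(range(100))
train_val, test = split(data, 0.2)      # hold out 20 samples for testing
train, val = split(train_val, 0.25)     # 0.25 of the remaining 80 = 20
print(len(train), len(val), len(test))  # 60 20 20
```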
You could just run train_test_split twice to do this as well. I.e. split the data into (Train + Validation) and Test, then split Train + Validation into two separate tensors.", "best_answers_score":0.8, "library_name":"tensorflow", "question_url":"https:\/\/stackoverflow.com\/questions\/41859605\/split-tensor-into-training-and-test-sets", "best_answers_votes":20, "question_length":570, "response_length":897 }, { "question":"Can not use both bias and batch normalization in convolution layers I use slim framework for tensorflow, because of its simplicity. But I want to have convolutional layer with both biases and batch normalization. In vanilla tensorflow, I have: ``` def conv2d(input_, output_dim, k_h=5, k_w=5, d_h=2, d_w=2, name=\"conv2d\"): with tf.variable_scope(name): w = tf.get_variable('w', [k_h, k_w, input_.get_shape()[-1], output_dim], initializer=tf.contrib.layers.xavier_initializer(uniform=False)) conv = tf.nn.conv2d(input_, w, strides=[1, d_h, d_w, 1], padding='SAME') biases = tf.get_variable('biases', [output_dim], initializer=tf.constant_initializer(0.0)) conv = tf.reshape(tf.nn.bias_add(conv, biases), conv.get_shape()) tf.summary.histogram(\"weights\", w) tf.summary.histogram(\"biases\", biases) return conv d_bn1 = BatchNorm(name='d_bn1') h1 = lrelu(d_bn1(conv2d(h0, df_dim + y_dim, name='d_h1_conv'))) ``` and I rewrote it to slim by this: ``` h1 = slim.conv2d(h0, num_outputs=self.df_dim + self.y_dim, scope='d_h1_conv', kernel_size=[5, 5], stride=[2, 2], activation_fn=lrelu, normalizer_fn=layers.batch_norm, normalizer_params=batch_norm_params, weights_initializer=layers.xavier_initializer(uniform=False), biases_initializer=tf.constant_initializer(0.0) ) ``` But this code does not add bias to conv layer. 
That is because of https:\/\/github.com\/tensorflow\/tensorflow\/blob\/master\/tensorflow\/contrib\/layers\/python\/layers\/layers.py#L1025 where is ``` layer = layer_class(filters=num_outputs, kernel_size=kernel_size, strides=stride, padding=padding, data_format=df, dilation_rate=rate, activation=None, use_bias=not normalizer_fn and biases_initializer, kernel_initializer=weights_initializer, bias_initializer=biases_initializer, kernel_regularizer=weights_regularizer, bias_regularizer=biases_regularizer, activity_regularizer=None, trainable=trainable, name=sc.name, dtype=inputs.dtype.base_dtype, _scope=sc, _reuse=reuse) outputs = layer.apply(inputs) ``` in the construction of layer, which results in not having bias when using batch normalization. Does that mean that I can not have both biases and batch normalization using slim and layers library? Or is there another way to achieve having both bias and batch normalization in layer when using slim?", "response":"Batchnormalization already includes the addition of the bias term. Recap that BatchNorm is already: ``` gamma * normalized(x) + bias ``` So there is no need (and it makes no sense) to add another bias term in the convolution layer. Simply speaking BatchNorm shifts the activation by their mean values. Hence, any constant will be canceled out. If you still want to do this, you need to remove the normalizer_fn argument and add BatchNorm as a single layer. Like I said, this makes no sense. But the solution would be something like ``` net = slim.conv2d(net, normalizer_fn=None, ...) net = tf.nn.batch_normalization(net) ``` Note, the BatchNorm relies on non-gradient updates. So you either need to use an optimizer which is compatible with the UPDATE_OPS collection. Or you need to manually add tf.control_dependencies. Long story short: Even if you implement the ConvWithBias+BatchNorm, it will behave like ConvWithoutBias+BatchNorm. 
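That cancellation is easy to check numerically. The following pure-Python sketch shifts the activations by a constant bias and shows the centred statistics are unchanged, which is why the extra bias term has no effect: ```
from statistics import mean, pstdev

acts = [1.0, 2.0, 3.0, 4.0]
bias = 10.0
shifted = [a + bias for a in acts]   # conv bias added before the norm

# the batch mean absorbs the constant shift, so the centred values
# (and hence gamma * normalized(x) + beta) come out identical
centred = [a - mean(acts) for a in acts]
centred_shifted = [a - mean(shifted) for a in shifted]

print(centred == centred_shifted)        # True
print(pstdev(acts) == pstdev(shifted))   # True
```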
It is the same as how multiple fully-connected layers without an activation function behave like a single one.", "best_answers_score":0.8, "library_name":"tensorflow", "question_url":"https:\/\/stackoverflow.com\/questions\/46256747\/can-not-use-both-bias-and-batch-normalization-in-convolution-layers", "best_answers_votes":47, "question_length":2260, "response_length":1044 }, { "question":"What is the purpose of the tf.contrib module in Tensorflow? I'm curious about what tf.contrib is, and why the code would be included in TensorFlow, but not in the main repository. Furthermore, looking at the example here (from the tensorflow master branch), I want to find the source for tf.contrib.layers.sparse_column_with_hash_bucket. It seems like some cool routines, but I wanted to make sure they were properly using queues, etc, for pre-fetching\/pre-processing examples to actually use them in a production setting. It appears to be documented here, but it is from the tflearn project, but tf.contrib.layers.sparse_column_with_hash_bucket doesn't seem to be in that repository either.", "response":"In general, tf.contrib contains contributed code. It is meant to contain features and contributions that eventually should get merged into core TensorFlow, but whose interfaces may still change, or which require some testing to see whether they can find broader acceptance. The code in tf.contrib isn't supported by the TensorFlow team. It is included in the hope that it is helpful, but it might change or be removed at any time; there are no guarantees.
The source of tf.contrib.layers.sparse_column_with_hash_bucket can be found at https:\/\/github.com\/tensorflow\/tensorflow\/blob\/master\/tensorflow\/contrib\/layers\/python\/layers\/feature_column.py#L365", "best_answers_score":0.8, "library_name":"tensorflow", "question_url":"https:\/\/stackoverflow.com\/questions\/38060825\/what-is-the-purpose-of-the-tf-contrib-module-in-tensorflow", "best_answers_votes":43, "question_length":695, "response_length":650 }, { "question":"How to restore Tensorflow model from .pb file in python? I have an tensorflow .pb file which I would like to load into python DNN, restore the graph and get the predictions. I am doing this to test out whether the .pb file created can make the predictions similar to the normal Saver.save() model. My basic problem is am getting a very different value of predictions when I make them on Android using the above mentioned .pb file My .pb file creation code: ``` frozen_graph = tf.graph_util.convert_variables_to_constants( session, session.graph_def, ['outputLayer\/Softmax'] ) with open('frozen_model.pb', 'wb') as f: f.write(frozen_graph.SerializeToString()) ``` So I have two major concerns: How can I load the above mentioned .pb file to python Tensorflow model ? Why am I getting completely different values of prediction in python and android ?", "response":"The following code will read the model and print out the names of the nodes in the graph. 
``` import tensorflow as tf from tensorflow.python.platform import gfile GRAPH_PB_PATH = '.\/frozen_model.pb' with tf.Session() as sess: print(\"load graph\") with gfile.FastGFile(GRAPH_PB_PATH,'rb') as f: graph_def = tf.GraphDef() graph_def.ParseFromString(f.read()) sess.graph.as_default() tf.import_graph_def(graph_def, name='') graph_nodes=[n for n in graph_def.node] names = [] for t in graph_nodes: names.append(t.name) print(names) ``` You are freezing the graph properly that is why you are getting different results basically weights are not getting stored in your model. You can use the freeze_graph.py (link) for getting a correctly stored graph.", "best_answers_score":0.8, "library_name":"tensorflow", "question_url":"https:\/\/stackoverflow.com\/questions\/50632258\/how-to-restore-tensorflow-model-from-pb-file-in-python", "best_answers_votes":33, "question_length":848, "response_length":744 }, { "question":"Keras - Validation Loss and Accuracy stuck at 0 I am trying to train a simple 2 layer Fully Connected neural net for Binary Classification in Tensorflow keras. I have split my data into Training and Validation sets with a 80-20 split using sklearn's train_test_split(). When I call model.fit(X_train, y_train, validation_data=[X_val, y_val]), it shows 0 validation loss and accuracy for all epochs, but it trains just fine. Also, when I try to evaluate it on the validation set, the output is non-zero. Can someone please explain why I am facing this 0 loss 0 accuracy error on validation. Thanks for your help. Here is the complete sample code (MCVE) for this error: https:\/\/colab.research.google.com\/drive\/1P8iCUlnD87vqtuS5YTdoePcDOVEKpBHr?usp=sharing", "response":"If you use keras instead of tf.keras everything works fine. With tf.keras, I even tried validation_data = [X_train, y_train], this also gives zero accuracy. 
Here is a demonstration: ``` model.fit(X_train, y_train, validation_data=[X_train.to_numpy(), y_train.to_numpy()], epochs=10, batch_size=64) Epoch 1\/10 8\/8 [==============================] - 0s 6ms\/step - loss: 0.7898 - accuracy: 0.6087 - val_loss: 0.0000e+00 - val_accuracy: 0.0000e+00 Epoch 2\/10 8\/8 [==============================] - 0s 6ms\/step - loss: 0.6710 - accuracy: 0.6500 - val_loss: 0.0000e+00 - val_accuracy: 0.0000e+00 Epoch 3\/10 8\/8 [==============================] - 0s 5ms\/step - loss: 0.6748 - accuracy: 0.6500 - val_loss: 0.0000e+00 - val_accuracy: 0.0000e+00 Epoch 4\/10 8\/8 [==============================] - 0s 6ms\/step - loss: 0.6716 - accuracy: 0.6370 - val_loss: 0.0000e+00 - val_accuracy: 0.0000e+00 Epoch 5\/10 8\/8 [==============================] - 0s 6ms\/step - loss: 0.6085 - accuracy: 0.6326 - val_loss: 0.0000e+00 - val_accuracy: 0.0000e+00 Epoch 6\/10 8\/8 [==============================] - 0s 6ms\/step - loss: 0.6744 - accuracy: 0.6326 - val_loss: 0.0000e+00 - val_accuracy: 0.0000e+00 Epoch 7\/10 8\/8 [==============================] - 0s 6ms\/step - loss: 0.6102 - accuracy: 0.6522 - val_loss: 0.0000e+00 - val_accuracy: 0.0000e+00 Epoch 8\/10 8\/8 [==============================] - 0s 6ms\/step - loss: 0.7032 - accuracy: 0.6109 - val_loss: 0.0000e+00 - val_accuracy: 0.0000e+00 Epoch 9\/10 8\/8 [==============================] - 0s 5ms\/step - loss: 0.6283 - accuracy: 0.6717 - val_loss: 0.0000e+00 - val_accuracy: 0.0000e+00 Epoch 10\/10 8\/8 [==============================] - 0s 5ms\/step - loss: 0.6120 - accuracy: 0.6652 - val_loss: 0.0000e+00 - val_accuracy: 0.0000e+00 ``` So, definitely there is some issue with tensorflow implementation of fit. I dug up the source, and it seems the part responsible for validation_data: ``` ... ... # Run validation. 
if validation_data and self._should_eval(epoch, validation_freq): val_x, val_y, val_sample_weight = ( data_adapter.unpack_x_y_sample_weight(validation_data)) val_logs = self.evaluate( x=val_x, y=val_y, sample_weight=val_sample_weight, batch_size=validation_batch_size or batch_size, steps=validation_steps, callbacks=callbacks, max_queue_size=max_queue_size, workers=workers, use_multiprocessing=use_multiprocessing, return_dict=True) val_logs = {'val_' + name: val for name, val in val_logs.items()} epoch_logs.update(val_logs) ``` internally calls model.evaluate, as we have already established evaluate works fine, I realized the only culprit could be unpack_x_y_sample_weight. So, I looked into the implementation: ``` def unpack_x_y_sample_weight(data): \"\"\"Unpacks user-provided data tuple.\"\"\" if not isinstance(data, tuple): return (data, None, None) elif len(data) == 1: return (data[0], None, None) elif len(data) == 2: return (data[0], data[1], None) elif len(data) == 3: return (data[0], data[1], data[2]) raise ValueError(\"Data not understood.\") ``` It's crazy, but if you just pass a tuple instead of a list, everything works fine due to the check inside unpack_x_y_sample_weight. 
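That check is plain Python, so it can be exercised outside TensorFlow with a lightly reformatted copy of the function quoted above: ```
def unpack_x_y_sample_weight(data):
    # copy of the check quoted above from the Keras source
    if not isinstance(data, tuple):
        return (data, None, None)
    elif len(data) == 1:
        return (data[0], None, None)
    elif len(data) == 2:
        return (data[0], data[1], None)
    elif len(data) == 3:
        return (data[0], data[1], data[2])
    raise ValueError('Data not understood.')

x, y = [[0.1], [0.2]], [0, 1]

# a list falls through the first branch: y ends up inside val_x, no labels
print(unpack_x_y_sample_weight([x, y]) == ([x, y], None, None))  # True

# a tuple is unpacked correctly into (val_x, val_y, None)
print(unpack_x_y_sample_weight((x, y)) == (x, y, None))          # True
```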
(Your labels are missing after this step and somehow the data is getting fixed inside evaluate, so you're training with no reasonable labels, this seems like a bug but the documentation clearly states to pass tuple) The following code gives correct validation accuracy and loss: ``` model.fit(X_train, y_train, validation_data=(X_train.to_numpy(), y_train.to_numpy()), epochs=10, batch_size=64) Epoch 1\/10 8\/8 [==============================] - 0s 7ms\/step - loss: 0.5832 - accuracy: 0.6696 - val_loss: 0.6892 - val_accuracy: 0.6674 Epoch 2\/10 8\/8 [==============================] - 0s 7ms\/step - loss: 0.6385 - accuracy: 0.6804 - val_loss: 0.8984 - val_accuracy: 0.5565 Epoch 3\/10 8\/8 [==============================] - 0s 7ms\/step - loss: 0.6822 - accuracy: 0.6391 - val_loss: 0.6556 - val_accuracy: 0.6739 Epoch 4\/10 8\/8 [==============================] - 0s 6ms\/step - loss: 0.6276 - accuracy: 0.6609 - val_loss: 1.0691 - val_accuracy: 0.5630 Epoch 5\/10 8\/8 [==============================] - 0s 7ms\/step - loss: 0.7048 - accuracy: 0.6239 - val_loss: 0.6474 - val_accuracy: 0.6326 Epoch 6\/10 8\/8 [==============================] - 0s 7ms\/step - loss: 0.6545 - accuracy: 0.6500 - val_loss: 0.6659 - val_accuracy: 0.6043 Epoch 7\/10 8\/8 [==============================] - 0s 7ms\/step - loss: 0.5796 - accuracy: 0.6913 - val_loss: 0.6891 - val_accuracy: 0.6435 Epoch 8\/10 8\/8 [==============================] - 0s 7ms\/step - loss: 0.5915 - accuracy: 0.6891 - val_loss: 0.5307 - val_accuracy: 0.7152 Epoch 9\/10 8\/8 [==============================] - 0s 7ms\/step - loss: 0.5571 - accuracy: 0.7000 - val_loss: 0.5465 - val_accuracy: 0.6957 Epoch 10\/10 8\/8 [==============================] - 0s 7ms\/step - loss: 0.7133 - accuracy: 0.6283 - val_loss: 0.7046 - val_accuracy: 0.6413 ``` So, as this seems to be a bug, I have just opened a relevant issue at Tensorflow Github repo: https:\/\/github.com\/tensorflow\/tensorflow\/issues\/39370", "best_answers_score":0.8, 
"library_name":"tensorflow", "question_url":"https:\/\/stackoverflow.com\/questions\/61706535\/keras-validation-loss-and-accuracy-stuck-at-0", "best_answers_votes":39, "question_length":753, "response_length":5063 }, { "question":"When global_variables_initializer() is actually required ``` import tensorflow as tf x = tf.constant(35, name='x') y = tf.Variable(x + 5, name='y') # model = tf.global_variables_initializer() with tf.Session() as session: print(\"x = \", session.run(x)) # session.run(model) print(\"y = \", session.run(y)) ``` I was not able to understand when global_variables_initializer() is actually required. In the above code, if we uncomment lines 4 & 7, I can execute the code and see the values. If I run as-is, I see a crash. My question is which variables it is initializing. x is a constant which does not need initialization and y is variable which is not being initialized but is used as an arithmetic operation.", "response":"tf.global_variables_initializer is a shortcut to initialize all global variables. It is not required, and you can use other ways to initialize your variables or in case of easy scripts sometimes you do not need to initialize them at all. Everything except of variables do not require initialization (constants and placeholders). But every used variable (even if it is a constant) should be initialized. This will give you an error, although z is just 0-d tensor with only one number. ``` import tensorflow as tf z = tf.Variable(4) with tf.Session() as session: print(session.run(z)) ``` I highlighted the word used, because if you just have variables which are not run (or non of the runs depends on them) you do not need to initialize them. For example this code will execute without any problems, nonetheless it has 2 variables and one operation which depends on them. But the run does not require them. 
``` import tensorflow as tf x = tf.constant(35, name='x') y = tf.Variable(x + 5, name='y') z = tf.Variable(4) a = y + z with tf.Session() as session: print(\"x = \", session.run(x)) ```", "best_answers_score":0.8, "library_name":"tensorflow", "question_url":"https:\/\/stackoverflow.com\/questions\/44299666\/when-global-variables-initializer-is-actually-required", "best_answers_votes":23, "question_length":706, "response_length":1089 }, { "question":"Difference between Tensorflow's tf.keras.layers.Dense and PyTorch's torch.nn.Linear? I have a quick (and possibly silly) question about how Tensorflow defines its Linear layer. Within PyTorch, a Linear (or Dense) layer is defined as, y = x A^T + b where A and b are the weight matrix and bias vector for a Linear layer (see here). However, I can't precisely find an equivalent equation for Tensorflow! Is it the same as PyTorch or is it just y = x A + b ? Thank you in advance!", "response":"If we set activation to None in the dense layer in keras API, then they are technically equivalent. Tensorflow's ``` tf.keras.layers.Dense(..., activation=None) ``` According to the doc, more study here. activation: Activation function to use. If you don't specify anything, no activation is applied (ie. \"linear\" activation: a(x) = x). And in PyTorch's src. ``` torch.nn.Linear ``` They are now equal at this point. A linear transformation to the incoming data: y = x*W^T + b. See the following more concrete equivalent implementation of these two. 
In PyTorch, we do ``` class Network(torch.nn.Module): def __init__(self): super(Network, self).__init__() self.fc1 = torch.nn.Linear(5, 30) def forward(self, state): return self.fc1(state) ``` or, ``` trd = torch.nn.Linear(in_features = 3, out_features = 30) y = trd(torch.ones(5, 3)) print(y.size()) # torch.Size([5, 30]) ``` Its equivalent tf implementation would be ``` model = tf.keras.models.Sequential() model.add(tf.keras.layers.Dense(30, input_shape=(5,), activation=None)) ``` or, ``` tfd = tf.keras.layers.Dense(30, input_shape=(3,), activation=None) x = tfd(tf.ones(shape=(5, 3))) print(x.shape) # (5, 30) ```", "best_answers_score":0.8, "library_name":"tensorflow", "question_url":"https:\/\/stackoverflow.com\/questions\/66626700\/difference-between-tensorflows-tf-keras-layers-dense-and-pytorchs-torch-nn-lin", "best_answers_votes":27, "question_length":477, "response_length":1170 }, { "question":"Python: rewrite a looping numpy math function to run on GPU Can someone help me rewrite this one function (the doTheMath function) to do the calculations on the GPU? I used a few good days now trying to get my head around it but to no result. I wonder maybe somebody can help me rewrite this function in whatever way you may seem fit as log as I gives the same result at the end. I tried to use @jit from numba but for some reason it is actually much slower than running the code as usual. With a huge sample size, the goal is to decrease the execution time considerably so naturally I believe the GPU is the fastest way to do it. I'll explain a little what is actually happening. The real data, which looks almost identical as the sample data created in the code below is divided into sample sizes of approx 5.000.000 rows each sample or around 150MB per file. In total there are around 600.000.000 rows or 20GB of data. 
I must loop through this data, sample by sample and then row by row in each sample, take the last 2000 (or another) rows as of each line and run the doTheMath function which returns a result. That result is then saved back to the hardrive where I can do some other things with it with another program. As you can see below, I do not need all of the results of all the rows, only those bigger than a specific amount. If I run my function as it is right now in python I get about 62seconds per 1.000.000 rows. This is a very long time considering all the data and how fast it should be done with. I must mention that I upload the real data file by file to the RAM with the help of data = joblib.load(file) so uploading the data is not the problem as it takes only about 0.29 seconds per file. Once uploaded I run the entire code below. What takes the longest time is the doTheMath function. I am willing to give all of my 500 reputation points I have on stackoverflow as a reward for somebody willing to help me rewrite this simple code to run on the GPU. My interest is specifically in the GPU, I really want to see how it is done on this problem at hand. EDIT\/UPDATE 1: Here is a link to a small sample of the real data: data_csv.zip About 102000 rows of real data1 and 2000 rows for real data2a and data2b. Use minimumLimit = 400 on the real sample data EDIT\/UPDATE 2: For those following this post here is a short summary of the answers below. Up until now we have 4 answers to the original solution. The one offered by @Divakar are just tweaks to the original code. Of the two tweaks only the first one is actually applicable to this problem, the second one is a good tweak but does not apply here. Out of the other three answers, two of them are CPU based solutions and one tensorflow-GPU try. The Tensorflow-GPU by Paul Panzer seems to be promising but when i actually run it on the GPU it is slower than the original, so the code still needs improvement. 
The other two CPU based solutions are submitted by @PaulPanzer (a pure numpy solution) and @MSeifert (a numba solution). Both solutions give very good results and both process data extremely fast compared to the original code. Of the two the one submitted by Paul Panzer is faster. It processes about 1.000.000 rows in about 3 seconds. The only problem is with smaller batchSizes, this can be overcome by either switching to the numba solution offered by MSeifert, or even the original code after all the tweaks that have been discussed below. I am very happy and thankful to @PaulPanzer and @MSeifert for the work they did on their answers. Still, since this is a question about a GPU based solution, i am waiting to see if anybody is willing to give it a try on a GPU version and see how much faster the data can be processed on the GPU when compared to the current CPU solutions. If there will be no other answers outperforming @PaulPanzer's pure numpy solution then i'll accept his answer as the right one and gets the bounty :) EDIT\/UPDATE 3: @Divakar has posted a new answer with a solution for the GPU. After my testings on real data, the speed is not even comparable to the CPU counterpart solutions. The GPU processes about 5.000.000 in about 1,5 seconds. This is incredible :) I am very excited about the GPU solution and i thank @Divakar for posting it. 
As well as i thank @PaulPanzer and @MSeifert for their CPU solutions :) Now my research continues with an incredible speed due to the GPU :) ``` import pandas as pd import numpy as np import time def doTheMath(tmpData1, data2a, data2b): A = tmpData1[:, 0] B = tmpData1[:,1] C = tmpData1[:,2] D = tmpData1[:,3] Bmax = B.max() Cmin = C.min() dif = (Bmax - Cmin) abcd = ((((A - Cmin) \/ dif) + ((B - Cmin) \/ dif) + ((C - Cmin) \/ dif) + ((D - Cmin) \/ dif)) \/ 4) return np.where(((abcd = data2b)), 1, 0).sum() #Declare variables batchSize = 2000 sampleSize = 5000000 resultArray = [] minimumLimit = 490 #use 400 on the real sample data #Create Random Sample Data data1 = np.matrix(np.random.uniform(1, 100, (sampleSize + batchSize, 4))) data2a = np.matrix(np.random.uniform(0, 1, (batchSize, 1))) #upper limit data2b = np.matrix(np.random.uniform(0, 1, (batchSize, 1))) #lower limit #approx. half of data2a will be smaller than data2b, but that is only in the sample data because it is randomly generated, NOT the real data. The real data2a is always higher than data2b. #Loop through the data t0 = time.time() for rowNr in range(data1.shape[0]): tmp_df = data1[rowNr:rowNr + batchSize] #rolling window if(tmp_df.shape[0] == batchSize): result = doTheMath(tmp_df, data2a, data2b) if (result >= minimumLimit): resultArray.append([rowNr , result]) print('Runtime:', time.time() - t0) #Save data results resultArray = np.array(resultArray) print(resultArray[:,1].sum()) resultArray = pd.DataFrame({'index':resultArray[:,0], 'result':resultArray[:,1]}) resultArray.to_csv(\"Result Array.csv\", sep=';') ``` The PC specs I am working on: ``` GTX970(4gb) video card; i7-4790K CPU 4.00Ghz; 16GB RAM; a SSD drive running Windows 7; ``` As a side question, would a second video card in SLI help on this problem?", "response":"Introduction and solution code Well, you asked for it! 
So, listed in this post is an implementation with PyCUDA that uses lightweight wrappers extending most of CUDA's capabilities within Python environment. We will its SourceModule functionality that lets us write and compile CUDA kernels staying in Python environment. Getting to the problem at hand, among the computations involved, we have sliding maximum and minimum, few differences and divisions and comparisons. For the maximum and minimum parts, that involves block max finding (for each sliding window), we will use reduction-technique as discussed in some detail here. This would be done at block level. For the upper level iterations across sliding windows, we would use the grid level indexing into CUDA resources. For more info on this block and grid format, please refer to page-18. PyCUDA also supports builtins for computing reductions like max and min, but we lose control, specifically we intend to use specialized memory like shared and constant memory for leveraging GPU at its near to optimum level. Listing out the PyCUDA-NumPy solution code - 1] PyCUDA part - ``` import pycuda.autoinit import pycuda.driver as drv import numpy as np from pycuda.compiler import SourceModule mod = SourceModule(\"\"\" #define TBP 1024 \/\/ THREADS_PER_BLOCK __device__ void get_Bmax_Cmin(float* out, float *d1, float *d2, int L, int offset) { int tid = threadIdx.x; int inv = TBP; __shared__ float dS[TBP][2]; dS[tid][0] = d1[tid+offset]; dS[tid][1] = d2[tid+offset]; __syncthreads(); if(tid= lowL[tid]) & (dS[tid] = lowL[tid+inv]) & (dS[tid+inv] = minimumLimit) return idx, dest[idx] ``` Benchmarking I have tested on GTX 960M. Please note that PyCUDA expects arrays to be of contiguous order. So, we need to slice the columns and make copies. I am expecting\/assuming that the data could be read from the files such that the data is spread along rows instead of being as columns. Thus, keeping those out of the benchmarking function for now. 
Original approach - ``` def org_app(data1, batchSize, minimumLimit): resultArray = [] for rowNr in range(data1.shape[0]-batchSize+1): tmp_df = data1[rowNr:rowNr + batchSize] #rolling window result = doTheMath(tmp_df, data2a, data2b) if (result >= minimumLimit): resultArray.append([rowNr , result]) return resultArray ``` Timings and verification - ``` In [2]: #Declare variables ...: batchSize = 2000 ...: sampleSize = 50000 ...: resultArray = [] ...: minimumLimit = 490 #use 400 on the real sample data ...: ...: #Create Random Sample Data ...: data1 = np.random.uniform(1, 100000, (sampleSize + batchSize, 4)).astype(np.float32) ...: data2b = np.random.uniform(0, 1, (batchSize)).astype(np.float32) ...: data2a = data2b + np.random.uniform(0, 1, (batchSize)).astype(np.float32) ...: ...: # Make column copies ...: A = data1[:,0].copy() ...: B = data1[:,1].copy() ...: C = data1[:,2].copy() ...: D = data1[:,3].copy() ...: ...: gpu_out1,gpu_out2 = gpu_app_v1(A, B, C, D, batchSize, minimumLimit) ...: cpu_out1,cpu_out2 = np.array(org_app(data1, batchSize, minimumLimit)).T ...: print(np.allclose(gpu_out1, cpu_out1)) ...: print(np.allclose(gpu_out2, cpu_out2)) ...: True False ``` So, there's some differences between CPU and GPU countings. Let's investigate them - ``` In [7]: idx = np.flatnonzero(~np.isclose(gpu_out2, cpu_out2)) In [8]: idx Out[8]: array([12776, 15208, 17620, 18326]) In [9]: gpu_out2[idx] - cpu_out2[idx] Out[9]: array([-1., -1., 1., 1.]) ``` There are four instances of non-matching counts. These are off at max by 1. Upon research, I came across some information on this. Basically, since we are using math intrinsics for max and min computations and those I think are causing the last binary bit in the floating pt representation to be diferent than the CPU counterpart. This is termed as ULP error and has been discused in detail here and here. 
Finally, puting the issue aside, let's get to the most important bit, the performance - ``` In [10]: %timeit org_app(data1, batchSize, minimumLimit) 1 loops, best of 3: 2.18 s per loop In [11]: %timeit gpu_app_v1(A, B, C, D, batchSize, minimumLimit) 10 loops, best of 3: 82.5 ms per loop In [12]: 2180.0\/82.5 Out[12]: 26.424242424242426 ``` Let's try with bigger datasets. With sampleSize = 500000, we get - ``` In [14]: %timeit org_app(data1, batchSize, minimumLimit) 1 loops, best of 3: 23.2 s per loop In [15]: %timeit gpu_app_v1(A, B, C, D, batchSize, minimumLimit) 1 loops, best of 3: 821 ms per loop In [16]: 23200.0\/821 Out[16]: 28.25822168087698 ``` So, the speedup stays constant at around 27. Limitations : 1) We are using float32 numbers, as GPUs work best with those. Double precision specially on non-server GPUs aren't popular when it comes to performance and since you are working with such a GPU, I tested with float32. Further improvement : 1) We could use faster constant memory to feed in data2a and data2b, rather than use global memory.", "best_answers_score":0.8, "library_name":"tensorflow", "question_url":"https:\/\/stackoverflow.com\/questions\/41957574\/python-rewrite-a-looping-numpy-math-function-to-run-on-gpu", "best_answers_votes":11, "question_length":6042, "response_length":4925 }, { "question":"TensorFlow: Unpooling Is there TensorFlow native function that does unpooling for Deconvolutional Networks ? I have written this in normal python, but it is getting complicated when want to translate it to TensorFlow as it's objects does not even support item assignment at the moment, and I think this is a great inconvenience with TF.", "response":"I don't think there is an official unpooling layer yet which is frustrating because you have to use image resize (bilinear interpolation or nearest neighbor) which is like an average unpooling operation and it's reaaaly slow. Look at the tf api in the section 'image' and you will find it. 
Tensorflow has a maxpooling_with_argmax thing where you get you maxpooled output as well as the activation map which is nice as you could use it in an unpooling layer to preserve the 'lost' spacial information but it seems as there isn't such an unpooling operation that does it. I guess that they are planning to add it ... soon. Edit: I found some guy on google discuss a week ago who seems to have implemented something like this but I personally haven't tried it yet. https:\/\/github.com\/ppwwyyxx\/tensorpack\/blob\/master\/tensorpack\/models\/pool.py#L66", "best_answers_score":0.8, "library_name":"tensorflow", "question_url":"https:\/\/stackoverflow.com\/questions\/36548736\/tensorflow-unpooling", "best_answers_votes":16, "question_length":336, "response_length":842 }, { "question":"In TensorFlow,what's the meaning of \":0\" in a Variable's name? ``` import tensorflow as tf with tf.device('\/gpu:0'): foo = tf.Variable(1, name='foo') assert foo.name == \"foo:0\" with tf.device('\/gpu:1'): bar = tf.Variable(1, name='bar') assert bar.name == \"bar:0\" ``` The above code returns true.I use with tf.device here to illustrate that the \":0\" doesn't mean the variable lie on the specific device.So what's the meaning of the \":0\" in the variable's name(foo and bar in this example)?", "response":"It has to do with representation of tensors in underlying API. A tensor is a value associated with output of some op. In case of variables, there's a Variable op with one output. An op can have more than one output, so those tensors get referenced to as :0, :1 etc. 
For instance, if you use tf.nn.top_k, there are two values created by this op, so you may see TopKV2:0 and TopKV2:1 ``` a,b=tf.nn.top_k([1], 1) print a.name # => 'TopKV2:0' print b.name # => 'TopKV2:1' ``` See also: How to understand the term `tensor` in TensorFlow?", "best_answers_score":0.8, "library_name":"tensorflow", "question_url":"https:\/\/stackoverflow.com\/questions\/40925652\/in-tensorflow-whats-the-meaning-of-0-in-a-variables-name", "best_answers_votes":29, "question_length":488, "response_length":521 }, { "question":"How can I use tensorflow serving for multiple models How can I use multiple tensorflow models? I use a docker container. ``` model_config_list: { config: { name: \"model1\", base_path: \"\/tmp\/model\", model_platform: \"tensorflow\" }, config: { name: \"model2\", base_path: \"\/tmp\/model2\", model_platform: \"tensorflow\" } } ```", "response":"Build a docker image from the official tensorflow serving docker file. Then inside the docker image, run: ``` \/usr\/local\/bin\/tensorflow_model_server --port=9000 --model_config_file=\/serving\/models.conf ``` Here \/serving\/models.conf is a file similar to yours.", "best_answers_score":0.8, "library_name":"tensorflow", "question_url":"https:\/\/stackoverflow.com\/questions\/45749024\/how-can-i-use-tensorflow-serving-for-multiple-models", "best_answers_votes":24, "question_length":315, "response_length":245 }, { "question":"How to improve data input pipeline performance? I am trying to optimize my data input pipeline. The dataset is a set of 450 TFRecord files of size ~70MB each, hosted on GCS. The job is executed with GCP ML Engine. There is no GPU.
Here is the pipeline: ```py def build_dataset(file_pattern): return tf.data.Dataset.list_files( file_pattern ).interleave( tf.data.TFRecordDataset, num_parallel_calls=tf.data.experimental.AUTOTUNE ).shuffle( buffer_size=2048 ).batch( batch_size=2048, drop_remainder=True, ).cache( ).repeat( ).map( map_func=_parse_example_batch, num_parallel_calls=tf.data.experimental.AUTOTUNE ).prefetch( buffer_size=1 ) ``` With the mapped function: ```py def _bit_to_float(string_batch: tf.Tensor): return tf.reshape(tf.math.floormod(tf.dtypes.cast(tf.bitwise.right_shift( tf.expand_dims(tf.io.decode_raw(string_batch, tf.uint8), 2), tf.reshape(tf.dtypes.cast(tf.range(7, -1, -1), tf.uint8), (1, 1, 8)) ), tf.float32), 2), (tf.shape(string_batch)[0], -1)) def _parse_example_batch(example_batch): preprocessed_sample_columns = { \"features\": tf.io.VarLenFeature(tf.float32), \"booleanFeatures\": tf.io.FixedLenFeature((), tf.string, \"\"), \"label\": tf.io.FixedLenFeature((), tf.float32, -1) } samples = tf.io.parse_example(example_batch, preprocessed_sample_columns) dense_float = tf.sparse.to_dense(samples[\"features\"]) bits_to_float = _bit_to_float(samples[\"booleanFeatures\"]) return ( tf.concat([dense_float, bits_to_float], 1), tf.reshape(samples[\"label\"], (-1, 1)) ) ``` I tried to follow the best practices of the data pipeline tutorial and vectorize my mapped function (as advised by mrry). With these settings, while data is downloaded at high speed (bandwidth is around 200MB\/s), the CPU is under-used (14%) and the training is very slow (more than 1 hour per epoch). I tried several parameter configurations, changing the interleave() arguments like num_parallel_calls or cycle_length, or the TFRecordDataset arguments like num_parallel_calls. The fastest configuration uses this set of parameters: interleave.num_parallel_calls: 1 interleave.cycle_length: 8 TFRecordDataset.num_parallel_calls: 8 With this one, one epoch only takes ~20 minutes to run.
However, CPU usage is only at 50% while bandwidth consumption is around 55MB\/s Questions: How to optimize the pipeline to reach 100% CPU usage (and something like 100MB\/s of bandwidth consumption)? Why does tf.data.experimental.AUTOTUNE not find best value to speed up the training? Kind, Alexis. Edit After some more experimentations, I came to the following solution. Remove the interleave step which is already handled by TFRecordDataset if num_parallel_calls is greater than 0. Update the mapped function to only do parse_example and decode_raw, returning a tuple `((, ), ()) cache after the map Move the _bit_to_float function as a component of the model Finally, here is the data pipeline code: ```py def build_dataset(file_pattern): return tf.data.TFRecordDataset( tf.data.Dataset.list_files(file_pattern), num_parallel_reads=multiprocessing.cpu_count(), buffer_size=70*1000*1000 ).shuffle( buffer_size=2048 ).map( map_func=split, num_parallel_calls=tf.data.experimental.AUTOTUNE ).batch( batch_size=2048, drop_remainder=True, ).cache( ).repeat( ).prefetch( buffer_size=32 ) def split(example): preprocessed_sample_columns = { \"features\": tf.io.VarLenFeature(tf.float32), \"booleanFeatures\": tf.io.FixedLenFeature((), tf.string, \"\"), \"label\": tf.io.FixedLenFeature((), tf.float32, -1) } samples = tf.io.parse_single_example(example, preprocessed_sample_columns) dense_float = tf.sparse.to_dense(samples[\"features\"]) bits_to_float = tf.io.decode_raw(samples[\"booleanFeatures\"], tf.uint8) return ( (dense_float, bits_to_float), tf.reshape(samples[\"label\"], (1,)) ) def build_model(input_shape): feature = keras.Input(shape=(N,)) bool_feature = keras.Input(shape=(M,), dtype=\"uint8\") one_hot = dataset._bit_to_float(bool_feature) dense_input = tf.reshape( keras.backend.concatenate([feature, one_hot], 1), input_shape) output = actual_model(dense_input) model = keras.Model([feature, bool_feature], output) return model def _bit_to_float(string_batch: tf.Tensor): return 
tf.dtypes.cast(tf.reshape( tf.bitwise.bitwise_and( tf.bitwise.right_shift( tf.expand_dims(string_batch, 2), tf.reshape( tf.dtypes.cast(tf.range(7, -1, -1), tf.uint8), (1, 1, 8) ), ), tf.constant(0x01, dtype=tf.uint8) ), (tf.shape(string_batch)[0], -1) ), tf.float32) ``` Thanks to all these optimizations: Bandwidth consumption is around 90MB\/s CPU usage is around 20% The first epoch takes 20 minutes Successive epochs take 5 minutes each So this seems to be a good first setup. But CPU and BW are still not overused, so any advice is still welcome! Edit Bis So, after some benchmarking I came across what I think is our best input pipeline: ``` def build_dataset(file_pattern): return tf.data.Dataset.list_files( file_pattern ).interleave( TFRecordDataset, cycle_length=tf.data.experimental.AUTOTUNE, num_parallel_calls=tf.data.experimental.AUTOTUNE ).shuffle( 2048 ).batch( batch_size=64, drop_remainder=True, ).map( map_func=parse_examples_batch, num_parallel_calls=tf.data.experimental.AUTOTUNE ).cache( ).prefetch( tf.data.experimental.AUTOTUNE ) def parse_examples_batch(examples): preprocessed_sample_columns = { \"features\": tf.io.FixedLenSequenceFeature((), tf.float32, allow_missing=True), \"booleanFeatures\": tf.io.FixedLenFeature((), tf.string, \"\"), \"label\": tf.io.FixedLenFeature((), tf.float32, -1) } samples = tf.io.parse_example(examples, preprocessed_sample_columns) bits_to_float = tf.io.decode_raw(samples[\"booleanFeatures\"], tf.uint8) return ( (samples['features'], bits_to_float), tf.expand_dims(samples[\"label\"], 1) ) ``` So, what's new: According to this GitHub issue, the TFRecordDataset interleaving is a legacy one, so the interleave function is better. batch before map is a good habit (vectorizing your function) and reduces the number of times the mapped function is called. No need for repeat anymore.
Since TF2.0, the Keras model API supports the dataset API and can use cache (see the SO post) Switch from a VarLenFeature to a FixedLenSequenceFeature, removing a useless call to tf.sparse.to_dense. Hope this can help. Advices are still welcomed.", "response":"Mentioning the Solution and the Important observations of @AlexisBRENON in the Answer Section, for the benefit of the Community. Below mentioned are the Important Observations: According to this GitHub issue, the TFRecordDataset interleaving is a legacy one, so interleave function is better. batch before map is a good habit (vectorizing your function) and reduce the number of times the mapped function is called. No need of repeat anymore. Since TF2.0, the Keras model API supports the dataset API and can use cache (see the SO post) Switch from a VarLenFeature to a FixedLenSequenceFeature, removing a useless call to tf.sparse.to_dense. Code for the Pipeline, with improved performance, in line with above observations is mentioned below: ``` def build_dataset(file_pattern): tf.data.Dataset.list_files( file_pattern ).interleave( TFRecordDataset, cycle_length=tf.data.experimental.AUTOTUNE, num_parallel_calls=tf.data.experimental.AUTOTUNE ).shuffle( 2048 ).batch( batch_size=64, drop_remainder=True, ).map( map_func=parse_examples_batch, num_parallel_calls=tf.data.experimental.AUTOTUNE ).cache( ).prefetch( tf.data.experimental.AUTOTUNE ) def parse_examples_batch(examples): preprocessed_sample_columns = { \"features\": tf.io.FixedLenSequenceFeature((), tf.float32, allow_missing=True), \"booleanFeatures\": tf.io.FixedLenFeature((), tf.string, \"\"), \"label\": tf.io.FixedLenFeature((), tf.float32, -1) } samples = tf.io.parse_example(examples, preprocessed_sample_columns) bits_to_float = tf.io.decode_raw(samples[\"booleanFeatures\"], tf.uint8) return ( (samples['features'], bits_to_float), tf.expand_dims(samples[\"label\"], 1) ) ```", "best_answers_score":0.8, "library_name":"tensorflow", 
"question_url":"https:\/\/stackoverflow.com\/questions\/58014123\/how-to-improve-data-input-pipeline-performance", "best_answers_votes":15, "question_length":6205, "response_length":1636 }, { "question":"How can I implement a custom RNN (specifically an ESN) in Tensorflow? I am trying to define my own RNNCell (Echo State Network) in Tensorflow, according to the definition below. x(t + 1) = tanh(Win*u(t) + W*x(t) + Wfb*y(t)) y(t) = Wout*z(t) z(t) = [x(t), u(t)] x is the state, u is the input, y is the output. Win, W, and Wfb are not trainable. All weights are randomly initialized, but W is modified like this: \"Set a certain percentage of elements of W to 0, scale W to keep its spectral radius below 1.0\" I have this code to generate the equation. ``` x = tf.Variable(tf.reshape(tf.zeros([N]), [-1, N]), trainable=False, name=\"state_vector\") W = tf.Variable(tf.random_normal([N, N], 0.0, 0.05), trainable=False) # TODO: setup W according to the ESN paper W_x = tf.matmul(x, W) u = tf.placeholder(\"float\", [None, K], name=\"input_vector\") W_in = tf.Variable(tf.random_normal([K, N], 0.0, 0.05), trainable=False) W_in_u = tf.matmul(u, W_in) z = tf.concat(1, [x, u]) W_out = tf.Variable(tf.random_normal([K + N, L], 0.0, 0.05)) y = tf.matmul(z, W_out) W_fb = tf.Variable(tf.random_normal([L, N], 0.0, 0.05), trainable=False) W_fb_y = tf.matmul(y, W_fb) x_next = tf.tanh(W_in_u + W_x + W_fb_y) y_ = tf.placeholder(\"float\", [None, L], name=\"train_output\") ``` My problem is two-fold. First, I don't know how to implement this as a subclass of RNNCell. Second, I don't know how to generate a W tensor according to the above specification. Any help with either of these questions is greatly appreciated. Maybe I can figure out a way to prepare W, but I sure as hell don't understand how to implement my own RNN as a subclass of RNNCell.", "response":"To give a quick summary: Look in the TensorFlow source code under python\/ops\/rnn_cell.py to see how to subclass RNNCell.
It's usually like this: ``` class MyRNNCell(RNNCell): def __init__(...): @property def output_size(self): ... @property def state_size(self): ... def __call__(self, input_, state, name=None): ... your per-step iteration here ... ```", "best_answers_score":0.8, "library_name":"tensorflow", "question_url":"https:\/\/stackoverflow.com\/questions\/33879281\/how-can-i-implement-a-custom-rnn-specifically-an-esn-in-tensorflow", "best_answers_votes":12, "question_length":1609, "response_length":354 }, { "question":"Changing the scale of a tensor in tensorflow Sorry if I messed up the title, I didn't know how to phrase this. Anyways, I have a tensor of a set of values, but I want to make sure that every element in the tensor has a range from 0 - 255 (or 0 - 1 works too). However, I don't want to make all the values add up to 1 or 255 like softmax, I just want to scale the values down. Is there any way to do this? Thanks!", "response":"You are trying to normalize the data. A classic normalization formula is this one: ``` normalize_value = (value \u2212 min_value) \/ (max_value \u2212 min_value) ``` The implementation in TensorFlow will look like this: ``` tensor = tf.div( tf.subtract( tensor, tf.reduce_min(tensor) ), tf.subtract( tf.reduce_max(tensor), tf.reduce_min(tensor) ) ) ``` All the values of the tensor will be between 0 and 1. IMPORTANT: make sure the tensor has float\/double values, or the output tensor will have just zeros and ones.
If you have an integer tensor, call this first: ``` tensor = tf.to_float(tensor) ``` Update: as of tensorflow 2, tf.to_float() is deprecated and instead, tf.cast() should be used: ``` tensor = tf.cast(tensor, dtype=tf.float32) # or any other tf.dtype, that is precise enough ```", "best_answers_score":0.8, "library_name":"tensorflow", "question_url":"https:\/\/stackoverflow.com\/questions\/38376478\/changing-the-scale-of-a-tensor-in-tensorflow", "best_answers_votes":47, "question_length":413, "response_length":782 }, { "question":"How to get the type of a Tensor? I'm looking for something similar to the effects of: ``` x.get_shape() ``` that will give the type of x. Is there any function for this?", "response":"You can use get_shape() to get the shape of a tensorflow variable. ``` >>> x = tf.Variable(tf.random_normal([256, 100])) >>> x.get_shape() (256, 100) ``` You can use the dtype property to get the type of a tensorflow variable. ``` >>> x = tf.Variable(tf.random_normal([256, 100])) >>> x.dtype ``` You can use the as_numpy_dtype property of dtype to convert from tf.dtype to numpy dtype. ``` >>> x = tf.Variable(tf.random_normal([256, 100])) >>> x.dtype.as_numpy_dtype ```", "best_answers_score":0.8, "library_name":"tensorflow", "question_url":"https:\/\/stackoverflow.com\/questions\/42758315\/how-to-get-the-type-of-a-tensor", "best_answers_votes":42, "question_length":172, "response_length":465 }, { "question":"replicate a row tensor using tf.tile? I have a tensor which is simply a vector, vector = [0.5 0.4], and tf.shape indicates that it has shape=(1,). I would like to replicate the vector m times and have the shape of [m, 2], so for m = 2, matrix = [[0.5 0.4], [0.5 0.4]]. How do I achieve that using tf.tile?", "response":"Take the following, where vec is a vector and multiply is your m, the number of times to repeat the vector. tf.tile is performed on the vector and then, using tf.reshape, it is reshaped into the desired structure.
``` import tensorflow as tf vec = tf.constant([1, 2, 3, 4]) multiply = tf.constant([3]) matrix = tf.reshape(tf.tile(vec, multiply), [ multiply[0], tf.shape(vec)[0]]) with tf.Session() as sess: print(sess.run([matrix])) ``` This results in: ``` [array([[1, 2, 3, 4], [1, 2, 3, 4], [1, 2, 3, 4]], dtype=int32)] ```", "best_answers_score":0.8, "library_name":"tensorflow", "question_url":"https:\/\/stackoverflow.com\/questions\/45315545\/replicate-a-row-tensor-using-tf-tile", "best_answers_votes":29, "question_length":304, "response_length":515 }, { "question":"Difference between tf.clip_by_value and tf.clip_by_global_norm for RNN's and how to decide max value to clip on? Want to understand the difference in roles of tf.clip_by_value and tf.clip_by_global_norm during the implementation of Gradient Clipping in TensorFlow. Which one is preferred and how to decide the max value to clip on?", "response":"TL;DR: use tf.clip_by_global_norm for gradient clipping, with \"some high value\" as max value. clip_by_value tf.clip_by_value clips each value inside one tensor, regardless of the other values in the tensor. For instance, ``` tf.clip_by_value([-1, 2, 10], 0, 3) -> [0, 2, 3] # Only the values below 0 or above 3 are changed ``` Consequently, it can change the direction of the tensor, so it should be used if the values in the tensor are decorrelated one from another (which is not the case for gradient clipping), or to avoid zero \/ infinite values in a tensor that could lead to Nan \/ infinite values elsewhere (by clipping with a minimum of epsilon=1e-8 and a very big max value for instance). clip_by_norm tf.clip_by_norm rescales one tensor if necessary, so that its L2 norm does not exceed a certain threshold. It's useful typically to avoid exploding gradient on one tensor, because you keep the gradient direction. 
For instance: ``` tf.clip_by_norm([-2, 3, 6], 5) -> [-2, 3, 6]*5\/7 # The original L2 norm is 7, which is >5, so the final one is 5 tf.clip_by_norm([-2, 3, 6], 9) -> [-2, 3, 6] # The original L2 norm is 7, which is <9, so the tensor is left unchanged ``` clip_by_global_norm This (tf.clip_by_global_norm) is the one that you should use for gradient clipping. See this for instance for more information. Choosing the value Choosing the max value is the hardest part. You should use the biggest value such that you don't have exploding gradients (whose effects can be NaNs or infinite values appearing in your tensors, or constant loss\/accuracy after a few training steps). The value should be bigger for tf.clip_by_global_norm than for the others, since the global L2 norm will be mechanically bigger than the other ones due to the number of tensors involved.", "best_answers_score":0.8, "library_name":"tensorflow", "question_url":"https:\/\/stackoverflow.com\/questions\/44796793\/difference-between-tf-clip-by-value-and-tf-clip-by-global-norm-for-rnns-and-how", "best_answers_votes":48, "question_length":331, "response_length":1720 }, { "question":"'Dense' object has no attribute 'op' [closed] Closed. This question is not reproducible or was caused by typos. It is not currently accepting answers. This question was caused by a typo or a problem that can no longer be reproduced. While similar questions may be on-topic here, this one was resolved in a way less likely to help future readers. Closed 4 years ago.
I am trying to make a fully connected model using tensorflow.keras, here is my code ```py from tensorflow.keras.models import Model from tensorflow.keras.layers import Input, Dense, Flatten def load_model(input_shape): input = Input(shape = input_shape) dense_shape = input_shape[0] x = Flatten()(input) x = Dense(dense_shape, activation='relu')(x) x = Dense(dense_shape, activation='relu')(x) x = Dense(dense_shape, activation='relu')(x) x = Dense(dense_shape, activation='relu')(x) x = Dense(dense_shape, activation='relu')(x) output = Dense(10, activation='softmax') model = Model(input , output) model.summary() return model ``` but when I call the model ```py model = load_model((120,)) ``` I get this error ``` 'Dense' object has no attribute 'op' ``` How can I fix this?", "response":"You are missing (x) after your output layer. Try ``` output = Dense(10, activation='softmax')(x) ```", "best_answers_score":0.8, "library_name":"tensorflow", "question_url":"https:\/\/stackoverflow.com\/questions\/61083004\/dense-object-has-no-attribute-op", "best_answers_votes":47, "question_length":1166, "response_length":103 }, { "question":"Proper way to feed time-series data to stateful LSTM? Let's suppose I have a sequence of integers: 0, 1, 2, ... and want to predict the next integer given the last 3 integers, e.g.: [0,1,2]->3, [3,4,5]->6, etc. Suppose I set up my model like so: ``` batch_size=1 time_steps=3 model = Sequential() model.add(LSTM(4, batch_input_shape=(batch_size, time_steps, 1), stateful=True)) model.add(Dense(1)) ``` It is my understanding that the model has the following structure (please excuse the crude drawing): First Question: is my understanding correct? Note I have drawn the previous states C_{t-1}, h_{t-1} entering the picture as this is exposed when specifying stateful=True.
In this simple \"next integer prediction\" problem, the performance should improve by providing this extra information (as long as the previous state results from the previous 3 integers). This brings me to my main question: it seems the standard practice (for example, see this blog post and the TimeseriesGenerator keras preprocessing utility) is to feed a staggered set of inputs to the model during training. For example: ``` batch0: [[0, 1, 2]] batch1: [[1, 2, 3]] batch2: [[2, 3, 4]] etc ``` This has me confused because it seems this requires the output of the 1st LSTM cell (corresponding to the 1st time step). See this figure: From the tensorflow docs: stateful: Boolean (default False). If True, the last state for each sample at index i in a batch will be used as initial state for the sample of index i in the following batch. It seems this \"internal\" state isn't available and all that is available is the final state. See this figure: So, if my understanding is correct (which it's clearly not), shouldn't we be feeding non-overlapped windows of samples to the model when using stateful=True? E.g.: ``` batch0: [[0, 1, 2]] batch1: [[3, 4, 5]] batch2: [[6, 7, 8]] etc ```", "response":"The answer is: it depends on the problem at hand. For your case of one-step prediction - yes, you can, but you don't have to. But whether you do or not will significantly impact learning. Batch vs. sample mechanism (\"see AI\" = see \"additional info\" section) All models treat samples as independent examples; a batch of 32 samples is like feeding 1 sample at a time, 32 times (with differences - see AI). From the model's perspective, data is split into the batch dimension, batch_shape[0], and the features dimensions, batch_shape[1:] - the two \"don't talk.\" The only relation between the two is via the gradient (see AI). Overlap vs no-overlap batch Perhaps the best approach to understand it is information-based.
I'll begin with timeseries binary classification, then tie it to prediction: suppose you have 10-minute EEG recordings, 240000 timesteps each. Task: seizure or non-seizure? As 240k is too much for an RNN to handle, we use CNN for dimensionality reduction We have the option to use \"sliding windows\" - i.e. feed a subsegment at a time; let's use 54k Take 10 samples, shape (240000, 1). How to feed? (10, 54000, 1), all samples included, slicing as sample[0:54000]; sample[54000:108000] ... (10, 54000, 1), all samples included, slicing as sample[0:54000]; sample[1:54001] ... Which of the two above do you take? If (2), your neural net will never confuse a seizure for a non-seizure for those 10 samples. But it'll also be clueless about any other sample. I.e., it will massively overfit, because the information it sees per iteration barely differs (1\/54000 = 0.0019%) - so you're basically feeding it the same batch several times in a row. Now suppose (3): (10, 54000, 1), all samples included, slicing as sample[0:54000]; sample[24000:81000] ... A lot more reasonable; now our windows have a 50% overlap, rather than 99.998%. Prediction: overlap bad? If you are doing a one-step prediction, the information landscape is now changed: Chances are, your sequence length is faaar from 240000, so overlaps of any kind don't suffer the \"same batch several times\" effect Prediction fundamentally differs from classification in that, the labels (next timestep) differ for every subsample you feed; classification uses one for the entire sequence This dramatically changes your loss function, and what is 'good practice' for minimizing it: A predictor must be robust to its initial sample, especially for LSTM - so we train for every such \"start\" by sliding the sequence as you have shown Since labels differ timestep-to-timestep, the loss function changes substantially timestep-to-timestep, so risks of overfitting are far less What should I do? 
First, make sure you understand this entire post, as nothing here's really \"optional.\" Then, here's the key about overlap vs no-overlap, per batch: One sample shifted: model learns to better predict one step ahead for each starting step - meaning: (1) LSTM's robust against initial cell state; (2) LSTM predicts well for any step ahead given X steps behind Many samples, shifted in later batch: model less likely to 'memorize' train set and overfit Your goal: balance the two; 1's main edge over 2 is: 2 can handicap the model by making it forget seen samples 1 allows model to extract better quality features by examining the sample over several starts and ends (labels), and averaging the gradient accordingly Should I ever use (2) in prediction? If your sequence lengths are very long and you can afford to \"slide window\" w\/ ~50% its length, maybe, but depends on the nature of data: signals (EEG)? Yes. Stocks, weather? Doubt it. Many-to-many prediction; more common to see (2), in large per longer sequences. LSTM stateful: may actually be entirely useless for your problem. Stateful is used when LSTM can't process the entire sequence at once, so it's \"split up\" - or when different gradients are desired from backpropagation. With former, the idea is - LSTM considers former sequence in its assessment of latter: t0=seq[0:50]; t1=seq[50:100] makes sense; t0 logically leads to t1 seq[0:50] --> seq[1:51] makes no sense; t1 doesn't causally derive from t0 In other words: do not overlap in stateful in separate batches. Same batch is OK, as again, independence - no \"state\" between the samples. When to use stateful: when LSTM benefits from considering previous batch in its assessment of the next. This can include one-step predictions, but only if you can't feed the entire seq at once: Desired: 100 timesteps. Can do: 50. So we set up t0, t1 as in above's first bullet. Problem: not straightforward to implement programmatically. 
You'll need to find a way to feed to LSTM while not applying gradients - e.g. freezing weights or setting lr = 0. When and how does LSTM \"pass states\" in stateful? When: only batch-to-batch; samples are entirely independent How: in Keras, only batch-sample to batch-sample: stateful=True requires you to specify batch_shape instead of input_shape - because, Keras builds batch_size separate states of the LSTM at compiling Per above, you cannot do this: ```py # sampleNM = sample N at timestep(s) M batch1 = [sample10, sample20, sample30, sample40] batch2 = [sample21, sample41, sample11, sample31] ``` This implies 21 causally follows 10 - and will wreck training. Instead do: ```py batch1 = [sample10, sample20, sample30, sample40] batch2 = [sample11, sample21, sample31, sample41] ``` Batch vs. sample: additional info A \"batch\" is a set of samples - 1 or greater (assume always latter for this answer) . Three approaches to iterate over data: Batch Gradient Descent (entire dataset at once), Stochastic GD (one sample at a time), and Minibatch GD (in-between). (In practice, however, we call the last SGD also and only distinguish vs BGD - assume it so for this answer.) Differences: SGD never actually optimizes the train set's loss function - only its 'approximations'; every batch is a subset of the entire dataset, and the gradients computed only pertain to minimizing loss of that batch. The greater the batch size, the better its loss function resembles that of the train set. Above can extend to fitting batch vs. sample: a sample is an approximation of the batch - or, a poorer approximation of the dataset First fitting 16 samples and then 16 more is not the same as fitting 32 at once - since weights are updated in-between, so model outputs for the latter half will change The main reason for picking SGD over BGD is not, in fact, computational limitations - but that it's superior, most of the time. 
Explained simply: a lot easier to overfit with BGD, and SGD converges to better solutions on test data by exploring a more diverse loss space. BONUS DIAGRAMS:", "best_answers_score":0.8, "library_name":"tensorflow", "question_url":"https:\/\/stackoverflow.com\/questions\/58276337\/proper-way-to-feed-time-series-data-to-stateful-lstm", "best_answers_votes":46, "question_length":1851, "response_length":6659 }, { "question":"Basic 1d convolution in tensorflow OK, I'd like to do a 1-dimensional convolution of time series data in Tensorflow. This is apparently supported using tf.nn.conv2d, according to these tickets, and the manual. the only requirement is to set strides=[1,1,1,1]. Sounds simple! However, I cannot work out how to do this in even a very minimal test case. What am I doing wrong? Let's set this up. ``` import tensorflow as tf import numpy as np print(tf.__version__) >>> 0.9.0 ``` OK, now generate a basic convolution test on two small arrays. I will make it easy by using a batch size of 1, and since time series are 1-dimensional, I will have an \"image height\" of 1. And since it's a univariate time series, clearly the number of \"channels\" is also 1, so this will be simple, right? ``` g = tf.Graph() with g.as_default(): # data shape is \"[batch, in_height, in_width, in_channels]\", x = tf.Variable(np.array([0.0, 0.0, 0.0, 0.0, 1.0]).reshape(1,1,-1,1), name=\"x\") # filter shape is \"[filter_height, filter_width, in_channels, out_channels]\" phi = tf.Variable(np.array([0.0, 0.5, 1.0]).reshape(1,-1,1,1), name=\"phi\") conv = tf.nn.conv2d( phi, x, strides=[1, 1, 1, 1], padding=\"SAME\", name=\"conv\") ``` BOOM. Error. ``` ValueError: Dimensions 1 and 5 are not compatible ``` OK, For a start, I don't understand how this should happen with any dimension, since I've specified that I'm padding the arguments in the convolution OP. but fine, maybe there are limits to that. 
I must have got the documentation confused and set up this convolution on the wrong axes of the tensor. I'll try all possible permutations: ``` for i in range(4): for j in range(4): shape1 = [1,1,1,1] shape1[i] = -1 shape2 = [1,1,1,1] shape2[j] = -1 x_array = np.array([0.0, 0.0, 0.0, 0.0, 1.0]).reshape(*shape1) phi_array = np.array([0.0, 0.5, 1.0]).reshape(*shape2) try: g = tf.Graph() with g.as_default(): x = tf.Variable(x_array, name=\"x\") phi = tf.Variable(phi_array, name=\"phi\") conv = tf.nn.conv2d( x, phi, strides=[1, 1, 1, 1], padding=\"SAME\", name=\"conv\") init_op = tf.initialize_all_variables() sess = tf.Session(graph=g) sess.run(init_op) print(\"SUCCEEDED!\", x_array.shape, phi_array.shape, conv.eval(session=sess)) sess.close() except Exception as e: print(\"FAILED!\", x_array.shape, phi_array.shape, type(e), e.args or e._message) ``` Result: ``` FAILED! (5, 1, 1, 1) (3, 1, 1, 1) ('Filter must not be larger than the input: Filter: (3, 1) Input: (1, 1)',) FAILED! (5, 1, 1, 1) (1, 3, 1, 1) ('Filter must not be larger than the input: Filter: (1, 3) Input: (1, 1)',) FAILED! (5, 1, 1, 1) (1, 1, 3, 1) ('Dimensions 1 and 3 are not compatible',) FAILED! (5, 1, 1, 1) (1, 1, 1, 3) No OpKernel was registered to support Op 'Conv2D' with these attrs [[Node: conv = Conv2D[T=DT_DOUBLE, data_format=\"NHWC\", padding=\"SAME\", strides=[1, 1, 1, 1], use_cudnn_on_gpu=true](x\/read, phi\/read)]] FAILED! (1, 5, 1, 1) (3, 1, 1, 1) No OpKernel was registered to support Op 'Conv2D' with these attrs [[Node: conv = Conv2D[T=DT_DOUBLE, data_format=\"NHWC\", padding=\"SAME\", strides=[1, 1, 1, 1], use_cudnn_on_gpu=true](x\/read, phi\/read)]] FAILED! (1, 5, 1, 1) (1, 3, 1, 1) ('Filter must not be larger than the input: Filter: (1, 3) Input: (5, 1)',) FAILED! (1, 5, 1, 1) (1, 1, 3, 1) ('Dimensions 1 and 3 are not compatible',) FAILED! 
(1, 5, 1, 1) (1, 1, 1, 3) No OpKernel was registered to support Op 'Conv2D' with these attrs [[Node: conv = Conv2D[T=DT_DOUBLE, data_format=\"NHWC\", padding=\"SAME\", strides=[1, 1, 1, 1], use_cudnn_on_gpu=true](x\/read, phi\/read)]] FAILED! (1, 1, 5, 1) (3, 1, 1, 1) ('Filter must not be larger than the input: Filter: (3, 1) Input: (1, 5)',) FAILED! (1, 1, 5, 1) (1, 3, 1, 1) No OpKernel was registered to support Op 'Conv2D' with these attrs [[Node: conv = Conv2D[T=DT_DOUBLE, data_format=\"NHWC\", padding=\"SAME\", strides=[1, 1, 1, 1], use_cudnn_on_gpu=true](x\/read, phi\/read)]] FAILED! (1, 1, 5, 1) (1, 1, 3, 1) ('Dimensions 1 and 3 are not compatible',) FAILED! (1, 1, 5, 1) (1, 1, 1, 3) No OpKernel was registered to support Op 'Conv2D' with these attrs [[Node: conv = Conv2D[T=DT_DOUBLE, data_format=\"NHWC\", padding=\"SAME\", strides=[1, 1, 1, 1], use_cudnn_on_gpu=true](x\/read, phi\/read)]] FAILED! (1, 1, 1, 5) (3, 1, 1, 1) ('Dimensions 5 and 1 are not compatible',) FAILED! (1, 1, 1, 5) (1, 3, 1, 1) ('Dimensions 5 and 1 are not compatible',) FAILED! (1, 1, 1, 5) (1, 1, 3, 1) ('Dimensions 5 and 3 are not compatible',) FAILED! (1, 1, 1, 5) (1, 1, 1, 3) ('Dimensions 5 and 1 are not compatible',) ``` Hmm. OK, it looks like there are two problems now. Firstly, the ValueError is about applying the filter along the wrong axis, I guess, although there are two forms. But then the axes along which I can apply the filter are confusing too - notice that it actually constructs the graph with input shape (5, 1, 1, 1) and filter shape (1, 1, 1, 3). AFAICT from the documentation, this should be a filter that looks at on example from the batch, one \"pixel\" and one \"channel\" and outputs 3 \"channels\". Why does that one work, then, when others do not? Anyway, sometimes it does not fail while constructing the graph. Sometime it constructs the graph; then we get the tensorflow.python.framework.errors.InvalidArgumentError. 
From some confusing github tickets I gather this is probably due to the fact that I'm running on CPU instead of GPU, or vice versa the fact that the convolution Op is only defined for 32 bit floats, not 64 bit floats. If anyone could throw some light on which axes I should be aligning what on, in order to convolve a time series with a kernel, I'd be very grateful.", "response":"I am sorry to say that, but your first code was almost right. You just inverted x and phi in tf.nn.conv2d: ```py g = tf.Graph() with g.as_default(): # data shape is \"[batch, in_height, in_width, in_channels]\", x = tf.Variable(np.array([0.0, 0.0, 0.0, 0.0, 1.0]).reshape(1, 1, 5, 1), name=\"x\") # filter shape is \"[filter_height, filter_width, in_channels, out_channels]\" phi = tf.Variable(np.array([0.0, 0.5, 1.0]).reshape(1, 3, 1, 1), name=\"phi\") conv = tf.nn.conv2d( x, phi, strides=[1, 1, 1, 1], padding=\"SAME\", name=\"conv\") ``` Update: TensorFlow now supports 1D convolution since version r0.11, using tf.nn.conv1d. I previously made a guide to use them in the stackoverflow documentation (now extinct) that I'm pasting here: Guide to 1D convolution Consider a basic example with an input of length 10, and dimension 16. The batch size is 32. We therefore have a placeholder with input shape [batch_size, 10, 16]. ```py batch_size = 32 x = tf.placeholder(tf.float32, [batch_size, 10, 16]) ``` We then create a filter with width 3, and we take 16 channels as input, and output also 16 channels. ```py filter = tf.zeros([3, 16, 16]) # these should be real values, not 0 ``` Finally we apply tf.nn.conv1d with a stride and a padding: - stride: integer s - padding: this works like in 2D, you can choose between SAME and VALID. SAME will output the same input length, while VALID will not add zero padding. For our example we take a stride of 2, and a valid padding. ``` output = tf.nn.conv1d(x, filter, stride=2, padding=\"VALID\") ``` The output shape should be [batch_size, 4, 16]. 
With padding=\"SAME\", we would have had an output shape of [batch_size, 5, 16].", "best_answers_score":0.8, "library_name":"tensorflow", "question_url":"https:\/\/stackoverflow.com\/questions\/38114534\/basic-1d-convolution-in-tensorflow", "best_answers_votes":36, "question_length":5592, "response_length":1660 }, { "question":"Does TensorFlow view all CPUs of one machine as ONE device? From the experiments I run, it seems like TensorFlow automatically uses all CPUs on one machine. Furthermore, it seems like TensorFlow refers to all CPUs as \/cpu:0. Am I right that only the different GPUs of one machine get indexed and viewed as separate devices, while all the CPUs on one machine get viewed as a single device? Is there any way for a machine to expose multiple CPU devices from TensorFlow's perspective?", "response":"By default, all CPUs available to the process are aggregated under the cpu:0 device. There's an answer by mrry here showing how to create logical devices like \/cpu:1 and \/cpu:2. There doesn't seem to be any working functionality to pin logical devices to specific physical cores or to use NUMA nodes in TensorFlow. A possible work-around is to use distributed TensorFlow with multiple processes on one machine and use taskset on Linux to pin specific processes to specific cores", "best_answers_score":0.8, "library_name":"tensorflow", "question_url":"https:\/\/stackoverflow.com\/questions\/38836269\/does-tensorflow-view-all-cpus-of-one-machine-as-one-device", "best_answers_votes":39, "question_length":483, "response_length":470 }, { "question":"Which Google Cloud Platform service is the easiest for running Tensorflow? While working on Udacity Deep Learning assignments, I encountered a memory problem. I need to switch to a cloud platform. I worked with AWS EC2 before, but now I would like to try Google Cloud Platform (GCP). I will need at least 8GB of memory. I know how to use docker locally but have never tried it on the cloud. 
Is there any ready-made solution for running Tensorflow on GCP? If not, which service (Compute Engine or Container Engine) would make it easier to get started? Any other tip is also appreciated!", "response":"Summing up the answers: AI Platform Notebooks - One click Jupyter Lab environment Deep Learning VMs images - Raw VMs with ML libraries pre-installed Deep Learning Container Images - Containerized versions of the DLVM images Cloud ML Manual installation on Compute Engine. See instructions below. Instructions to manually run TensorFlow on Compute Engine: Create a project Open the Cloud Shell (a button at the top) List machine types: gcloud compute machine-types list. You can change the machine type I used in the next command. Create an instance: ``` gcloud compute instances create tf \\ --image container-vm \\ --zone europe-west1-c \\ --machine-type n1-standard-2 ``` Run sudo docker run -d -p 8888:8888 --name tf b.gcr.io\/tensorflow-udacity\/assignments:0.5.0 (change the image name to the desired one) Find your instance in the dashboard and edit default network. Add a firewall rule to allow your IP as well as protocol and port tcp:8888. Find the External IP of the instance from the dashboard. Open IP:8888 on your browser. Done! When you are finished, delete the created cluster to avoid charges. This is how I did it and it worked. I am sure there is an easier way to do it. More Resources You might be interested to learn more about: Google Cloud Shell Container-Optimized Google Compute Engine Images Google Cloud SDK for a more responsive shell and more. 
Good to know \"The contents of your Cloud Shell home directory persist across projects between all Cloud Shell sessions, even after the virtual machine terminates and is restarted\" To list all available image versions: gcloud compute images list --project google-containers Thanks to @user728291, @MattW, @CJCullen, and @zain-rizvi", "best_answers_score":0.8, "library_name":"tensorflow", "question_url":"https:\/\/stackoverflow.com\/questions\/36916690\/which-google-cloud-platform-service-is-the-easiest-for-running-tensorflow", "best_answers_votes":22, "question_length":574, "response_length":1697 }, { "question":"Convolution2D + LSTM versus ConvLSTM2D Are 1 and 2 the same? Use Convolution2D layers and LSTM layers Use ConvLSTM2D If there is any difference, could you explain it for me?", "response":"They are not exactly the same, here is why: 1. Use Convolution2D layers and LSTM layers As it is known, Convolution2D serves well for capturing image or spatial features, whilst LSTM are used to detect correlations over time. However, by stacking these kind of layers, the correlation between space and time features may not be captured properly. 2. Use ConvLSTM2D To solve this, Xingjian Shi et al. proposed a network structure able to capture spatiotemporal correlations, namely ConvLSTM. In Keras, this is reflected in the ConvLSTM2D class, which computes convolutional operations in both the input and the recurrent transformations. 
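Before looking at the Keras source, here is a minimal numpy sketch of the difference (all shapes and kernel values are made up for illustration): the LSTM gate transform is a plain dot product, while the ConvLSTM gate transform is a local 2D convolution:

```python
import numpy as np

def lstm_gate(x, w):
    # LSTM input transformation: a plain matrix multiply (K.dot in Keras)
    return x @ w

def convlstm_gate(x, k):
    # ConvLSTM input transformation: a 2D 'same' convolution instead of a dot
    h, w = x.shape
    kh, kw = k.shape
    ph, pw = kh // 2, kw // 2
    xp = np.pad(x, ((ph, ph), (pw, pw)))  # zero padding for 'same' output size
    out = np.zeros_like(x)
    for i in range(h):
        for j in range(w):
            out[i, j] = np.sum(xp[i:i + kh, j:j + kw] * k)
    return out

x_vec = np.ones((1, 4))          # flattened input for the plain LSTM
w = np.full((4, 4), 0.25)        # dense kernel
x_img = np.ones((4, 4))          # spatial input for the ConvLSTM
k = np.full((3, 3), 1.0 / 9.0)   # convolutional kernel

print(lstm_gate(x_vec, w))       # dense transform: every input mixes into every output
print(convlstm_gate(x_img, k))   # conv transform: only a local 3x3 neighbourhood mixes
```

The dense transform mixes every input unit with every output unit, whereas the convolutional transform only mixes a local neighbourhood and shares its kernel across spatial positions, which is exactly the property that stacked Conv2D plus LSTM layers do not give you inside the recurrence.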
Keras code To illustrate this, you can look at the LSTM code here; if you go to the call method of LSTMCell, you'd only see: ```python x_i = K.dot(inputs_i, self.kernel_i) x_f = K.dot(inputs_f, self.kernel_f) x_c = K.dot(inputs_c, self.kernel_c) x_o = K.dot(inputs_o, self.kernel_o) ``` Instead, the ConvLSTM2DCell class calls: ```python x_i = self.input_conv(inputs_i, self.kernel_i, self.bias_i, padding=self.padding) x_f = self.input_conv(inputs_f, self.kernel_f, self.bias_f, padding=self.padding) x_c = self.input_conv(inputs_c, self.kernel_c, self.bias_c, padding=self.padding) x_o = self.input_conv(inputs_o, self.kernel_o, self.bias_o, padding=self.padding) h_i = self.recurrent_conv(h_tm1_i, self.recurrent_kernel_i) h_f = self.recurrent_conv(h_tm1_f, self.recurrent_kernel_f) h_c = self.recurrent_conv(h_tm1_c, self.recurrent_kernel_c) h_o = self.recurrent_conv(h_tm1_o, self.recurrent_kernel_o) ``` Where: ```python def input_conv(self, x, w, b=None, padding='valid'): conv_out = K.conv2d(x, w, strides=self.strides, padding=padding, data_format=self.data_format, dilation_rate=self.dilation_rate) if b is not None: conv_out = K.bias_add(conv_out, b, data_format=self.data_format) return conv_out def recurrent_conv(self, x, w): conv_out = K.conv2d(x, w, strides=(1, 1), padding='same', data_format=self.data_format) return conv_out ``` In LSTM, the equivalent for h_x (recurrent transformations) would be: ```python K.dot(h_tm1_x, self.recurrent_kernel_x) ``` Instead of ConvLSTM2D's: ```python self.recurrent_conv(h_tm1_x, self.recurrent_kernel_x) ``` These kinds of transformations cannot be computed with stacked Conv2D and LSTM layers.", "best_answers_score":0.8, "library_name":"tensorflow", "question_url":"https:\/\/stackoverflow.com\/questions\/49603498\/convolution2d-lstm-versus-convlstm2d", "best_answers_votes":26, "question_length":173, "response_length":2289 }, { "question":"TensorFlow - introducing both L2 regularization and dropout into the network. 
Does it make any sense? I am currently playing with an ANN which is part of the Udacity DeepLearning course. I successfully built and trained the network and introduced L2 regularization on all weights and biases. Right now I am trying out dropout for the hidden layer in order to improve generalization. I wonder, does it make sense to introduce both L2 regularization into the hidden layer and dropout on that same layer? If so, how do I do this properly? During dropout we literally switch off half of the activations of the hidden layer and double the amount output by the rest of the neurons. While using L2 we compute the L2 norm on all hidden weights. But I am not sure how to compute L2 in case we use dropout. We switch off some activations; shouldn't we remove the weights which are 'not used' now from the L2 calculation? Any references on that matter would be useful; I haven't found any info. Just in case you are interested, my code for the ANN with L2 regularization is below: ```py #for NeuralNetwork model code is below #We will use SGD for training to save our time. Code is from Assignment 2 #beta is the new parameter - controls level of regularization. Default is 0.01 #but feel free to play with it #notice, we introduce L2 for both biases and weights of all layers beta = 0.01 #building tensorflow graph graph = tf.Graph() with graph.as_default(): # Input data. For the training data, we use a placeholder that will be fed # at run time with a training minibatch. 
tf_train_dataset = tf.placeholder(tf.float32, shape=(batch_size, image_size * image_size)) tf_train_labels = tf.placeholder(tf.float32, shape=(batch_size, num_labels)) tf_valid_dataset = tf.constant(valid_dataset) tf_test_dataset = tf.constant(test_dataset) #now let's build our new hidden layer #that's how many hidden neurons we want num_hidden_neurons = 1024 #its weights hidden_weights = tf.Variable( tf.truncated_normal([image_size * image_size, num_hidden_neurons])) hidden_biases = tf.Variable(tf.zeros([num_hidden_neurons])) #now the layer itself. It multiplies data by weights, adds biases #and takes ReLU over result hidden_layer = tf.nn.relu(tf.matmul(tf_train_dataset, hidden_weights) + hidden_biases) #time to go for output linear layer #out weights connect hidden neurons to output labels #biases are added to output labels out_weights = tf.Variable( tf.truncated_normal([num_hidden_neurons, num_labels])) out_biases = tf.Variable(tf.zeros([num_labels])) #compute output out_layer = tf.matmul(hidden_layer,out_weights) + out_biases #our real output is a softmax of prior result #and we also compute its cross-entropy to get our loss #Notice - we introduce our L2 here loss = (tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits( out_layer, tf_train_labels) + beta*tf.nn.l2_loss(hidden_weights) + beta*tf.nn.l2_loss(hidden_biases) + beta*tf.nn.l2_loss(out_weights) + beta*tf.nn.l2_loss(out_biases))) #now we just minimize this loss to actually train the network optimizer = tf.train.GradientDescentOptimizer(0.5).minimize(loss) #nice, now let's calculate the predictions on each dataset for evaluating the #performance so far # Predictions for the training, validation, and test data. 
train_prediction = tf.nn.softmax(out_layer) valid_relu = tf.nn.relu( tf.matmul(tf_valid_dataset, hidden_weights) + hidden_biases) valid_prediction = tf.nn.softmax( tf.matmul(valid_relu, out_weights) + out_biases) test_relu = tf.nn.relu( tf.matmul( tf_test_dataset, hidden_weights) + hidden_biases) test_prediction = tf.nn.softmax(tf.matmul(test_relu, out_weights) + out_biases) #now is the actual training on the ANN we built #we will run it for some number of steps and evaluate the progress after #every 500 steps #number of steps we will train our ANN num_steps = 3001 #actual training with tf.Session(graph=graph) as session: tf.initialize_all_variables().run() print(\"Initialized\") for step in range(num_steps): # Pick an offset within the training data, which has been randomized. # Note: we could use better randomization across epochs. offset = (step * batch_size) % (train_labels.shape[0] - batch_size) # Generate a minibatch. batch_data = train_dataset[offset:(offset + batch_size), :] batch_labels = train_labels[offset:(offset + batch_size), :] # Prepare a dictionary telling the session where to feed the minibatch. # The key of the dictionary is the placeholder node of the graph to be fed, # and the value is the numpy array to feed to it. feed_dict = {tf_train_dataset : batch_data, tf_train_labels : batch_labels} _, l, predictions = session.run( [optimizer, loss, train_prediction], feed_dict=feed_dict) if (step % 500 == 0): print(\"Minibatch loss at step %d: %f\" % (step, l)) print(\"Minibatch accuracy: %.1f%%\" % accuracy(predictions, batch_labels)) print(\"Validation accuracy: %.1f%%\" % accuracy( valid_prediction.eval(), valid_labels)) print(\"Test accuracy: %.1f%%\" % accuracy(test_prediction.eval(), test_labels)) ```", "response":"Ok, after some additional efforts I managed to solve it and introduce both L2 and dropout into my network, code is below. I got slight improvement over the same network without the dropout (with L2 in place). 
I am still not sure if it really worth the effort to introduce both of them, L2 and dropout but at least it works and slightly improves the results. ```py #ANN with introduced dropout #This time we still use the L2 but restrict training dataset #to be extremely small #get just first 500 of examples, so that our ANN can memorize whole dataset train_dataset_2 = train_dataset[:500, :] train_labels_2 = train_labels[:500] #batch size for SGD and beta parameter for L2 loss batch_size = 128 beta = 0.001 #that's how many hidden neurons we want num_hidden_neurons = 1024 #building tensorflow graph graph = tf.Graph() with graph.as_default(): # Input data. For the training data, we use a placeholder that will be fed # at run time with a training minibatch. tf_train_dataset = tf.placeholder(tf.float32, shape=(batch_size, image_size * image_size)) tf_train_labels = tf.placeholder(tf.float32, shape=(batch_size, num_labels)) tf_valid_dataset = tf.constant(valid_dataset) tf_test_dataset = tf.constant(test_dataset) #now let's build our new hidden layer #its weights hidden_weights = tf.Variable( tf.truncated_normal([image_size * image_size, num_hidden_neurons])) hidden_biases = tf.Variable(tf.zeros([num_hidden_neurons])) #now the layer itself. It multiplies data by weights, adds biases #and takes ReLU over result hidden_layer = tf.nn.relu(tf.matmul(tf_train_dataset, hidden_weights) + hidden_biases) #add dropout on hidden layer #we pick up the probabylity of switching off the activation #and perform the switch off of the activations keep_prob = tf.placeholder(\"float\") hidden_layer_drop = tf.nn.dropout(hidden_layer, keep_prob) #time to go for output linear layer #out weights connect hidden neurons to output labels #biases are added to output labels out_weights = tf.Variable( tf.truncated_normal([num_hidden_neurons, num_labels])) out_biases = tf.Variable(tf.zeros([num_labels])) #compute output #notice that upon training we use the switched off activations #i.e. 
the variaction of hidden_layer with the dropout active out_layer = tf.matmul(hidden_layer_drop,out_weights) + out_biases #our real output is a softmax of prior result #and we also compute its cross-entropy to get our loss #Notice - we introduce our L2 here loss = (tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits( out_layer, tf_train_labels) + beta*tf.nn.l2_loss(hidden_weights) + beta*tf.nn.l2_loss(hidden_biases) + beta*tf.nn.l2_loss(out_weights) + beta*tf.nn.l2_loss(out_biases))) #now we just minimize this loss to actually train the network optimizer = tf.train.GradientDescentOptimizer(0.5).minimize(loss) #nice, now let's calculate the predictions on each dataset for evaluating the #performance so far # Predictions for the training, validation, and test data. train_prediction = tf.nn.softmax(out_layer) valid_relu = tf.nn.relu( tf.matmul(tf_valid_dataset, hidden_weights) + hidden_biases) valid_prediction = tf.nn.softmax( tf.matmul(valid_relu, out_weights) + out_biases) test_relu = tf.nn.relu( tf.matmul( tf_test_dataset, hidden_weights) + hidden_biases) test_prediction = tf.nn.softmax(tf.matmul(test_relu, out_weights) + out_biases) #now is the actual training on the ANN we built #we will run it for some number of steps and evaluate the progress after #every 500 steps #number of steps we will train our ANN num_steps = 3001 #actual training with tf.Session(graph=graph) as session: tf.initialize_all_variables().run() print(\"Initialized\") for step in range(num_steps): # Pick an offset within the training data, which has been randomized. # Note: we could use better randomization across epochs. offset = (step * batch_size) % (train_labels_2.shape[0] - batch_size) # Generate a minibatch. batch_data = train_dataset_2[offset:(offset + batch_size), :] batch_labels = train_labels_2[offset:(offset + batch_size), :] # Prepare a dictionary telling the session where to feed the minibatch. 
# The key of the dictionary is the placeholder node of the graph to be fed, # and the value is the numpy array to feed to it. feed_dict = {tf_train_dataset : batch_data, tf_train_labels : batch_labels, keep_prob : 0.5} _, l, predictions = session.run( [optimizer, loss, train_prediction], feed_dict=feed_dict) if (step % 500 == 0): print(\"Minibatch loss at step %d: %f\" % (step, l)) print(\"Minibatch accuracy: %.1f%%\" % accuracy(predictions, batch_labels)) print(\"Validation accuracy: %.1f%%\" % accuracy( valid_prediction.eval(), valid_labels)) print(\"Test accuracy: %.1f%%\" % accuracy(test_prediction.eval(), test_labels)) ```", "best_answers_score":0.8, "library_name":"tensorflow", "question_url":"https:\/\/stackoverflow.com\/questions\/38292760\/tensorflow-introducing-both-l2-regularization-and-dropout-into-the-network-do", "best_answers_votes":18, "question_length":4993, "response_length":4721 }, { "question":"Does bias in the convolutional layer really make a difference to the test accuracy? I understand that bias are required in small networks, to shift the activation function. But in the case of Deep network that has multiple layers of CNN, pooling, dropout and other non -linear activations, is Bias really making a difference? The convolutional filter is learning local features and for a given conv output channel same bias is used. This is not a dupe of this link. The above link only explains role of bias in small neural network and does not attempt to explain role of bias in deep-networks containing multiple CNN layers, drop-outs, pooling and non-linear activation functions. I ran a simple experiment and the results indicated that removing bias from conv layer made no difference in final test accuracy. There are two models trained and the test-accuracy is almost same (slightly better in one without bias.) model_with_bias, model_without_bias( bias not added in conv layer) Are they being used only for historical reasons? 
If using bias provides no gain in accuracy, shouldn't we omit them? Less parameters to learn. I would be thankful if someone who have deeper knowledge than me, could explain the significance(if- any) of these bias in deep networks. Here is the complete code and the experiment result bias-VS-no_bias experiment ``` batch_size = 16 patch_size = 5 depth = 16 num_hidden = 64 graph = tf.Graph() with graph.as_default(): # Input data. tf_train_dataset = tf.placeholder( tf.float32, shape=(batch_size, image_size, image_size, num_channels)) tf_train_labels = tf.placeholder(tf.float32, shape=(batch_size, num_labels)) tf_valid_dataset = tf.constant(valid_dataset) tf_test_dataset = tf.constant(test_dataset) # Variables. layer1_weights = tf.Variable(tf.truncated_normal( [patch_size, patch_size, num_channels, depth], stddev=0.1)) layer1_biases = tf.Variable(tf.zeros([depth])) layer2_weights = tf.Variable(tf.truncated_normal( [patch_size, patch_size, depth, depth], stddev=0.1)) layer2_biases = tf.Variable(tf.constant(1.0, shape=[depth])) layer3_weights = tf.Variable(tf.truncated_normal( [image_size \/\/ 4 * image_size \/\/ 4 * depth, num_hidden], stddev=0.1)) layer3_biases = tf.Variable(tf.constant(1.0, shape=[num_hidden])) layer4_weights = tf.Variable(tf.truncated_normal( [num_hidden, num_labels], stddev=0.1)) layer4_biases = tf.Variable(tf.constant(1.0, shape=[num_labels])) # define a Model with bias . def model_with_bias(data): conv = tf.nn.conv2d(data, layer1_weights, [1, 2, 2, 1], padding='SAME') hidden = tf.nn.relu(conv + layer1_biases) conv = tf.nn.conv2d(hidden, layer2_weights, [1, 2, 2, 1], padding='SAME') hidden = tf.nn.relu(conv + layer2_biases) shape = hidden.get_shape().as_list() reshape = tf.reshape(hidden, [shape[0], shape[1] * shape[2] * shape[3]]) hidden = tf.nn.relu(tf.matmul(reshape, layer3_weights) + layer3_biases) return tf.matmul(hidden, layer4_weights) + layer4_biases # define a Model without bias added in the convolutional layer. 
def model_without_bias(data): conv = tf.nn.conv2d(data, layer1_weights, [1, 2, 2, 1], padding='SAME') hidden = tf.nn.relu(conv ) # layer1_ bias is not added conv = tf.nn.conv2d(hidden, layer2_weights, [1, 2, 2, 1], padding='SAME') hidden = tf.nn.relu(conv) # + layer2_biases) shape = hidden.get_shape().as_list() reshape = tf.reshape(hidden, [shape[0], shape[1] * shape[2] * shape[3]]) # bias are added only in Fully connected layer(layer 3 and layer 4) hidden = tf.nn.relu(tf.matmul(reshape, layer3_weights) + layer3_biases) return tf.matmul(hidden, layer4_weights) + layer4_biases # Training computation. logits_with_bias = model_with_bias(tf_train_dataset) loss_with_bias = tf.reduce_mean( tf.nn.softmax_cross_entropy_with_logits(labels=tf_train_labels, logits=logits_with_bias)) logits_without_bias = model_without_bias(tf_train_dataset) loss_without_bias = tf.reduce_mean( tf.nn.softmax_cross_entropy_with_logits(labels=tf_train_labels, logits=logits_without_bias)) # Optimizer. optimizer_with_bias = tf.train.GradientDescentOptimizer(0.05).minimize(loss_with_bias) optimizer_without_bias = tf.train.GradientDescentOptimizer(0.05).minimize(loss_without_bias) # Predictions for the training, validation, and test data. 
train_prediction_with_bias = tf.nn.softmax(logits_with_bias) valid_prediction_with_bias = tf.nn.softmax(model_with_bias(tf_valid_dataset)) test_prediction_with_bias = tf.nn.softmax(model_with_bias(tf_test_dataset)) # Predictions for without train_prediction_without_bias = tf.nn.softmax(logits_without_bias) valid_prediction_without_bias = tf.nn.softmax(model_without_bias(tf_valid_dataset)) test_prediction_without_bias = tf.nn.softmax(model_without_bias(tf_test_dataset)) num_steps = 1001 with tf.Session(graph=graph) as session: tf.global_variables_initializer().run() print('Initialized') for step in range(num_steps): offset = (step * batch_size) % (train_labels.shape[0] - batch_size) batch_data = train_dataset[offset:(offset + batch_size), :, :, :] batch_labels = train_labels[offset:(offset + batch_size), :] feed_dict = {tf_train_dataset : batch_data, tf_train_labels : batch_labels} session.run(optimizer_with_bias, feed_dict=feed_dict) session.run(optimizer_without_bias, feed_dict = feed_dict) print('Test accuracy(with bias): %.1f%%' % accuracy(test_prediction_with_bias.eval(), test_labels)) print('Test accuracy(without bias): %.1f%%' % accuracy(test_prediction_without_bias.eval(), test_labels)) ``` Output: Initialized Test accuracy(with bias): 90.5% Test accuracy(without bias): 90.6%", "response":"Biases are tuned alongside weights by learning algorithms such as gradient descent. biases differ from weights is that they are independent of the output from previous layers. Conceptually bias is caused by input from a neuron with a fixed activation of 1, and so is updated by subtracting the just the product of the delta value and learning rate. In a large model, removing the bias inputs makes very little difference because each node can make a bias node out of the average activation of all of its inputs, which by the law of large numbers will be roughly normal. At the first layer, the ability for this to happens depends on your input distribution. 
On a small network, of course, you need a bias input, but on a large network, removing it makes almost no difference. Even so, whether it matters still depends on the network architecture. For instance, in LSTMs: most applications of LSTMs simply initialize the LSTMs with small random weights, which works well on many problems. But this initialization effectively sets the forget gate to 0.5. This introduces a vanishing gradient with a factor of 0.5 per timestep, which can cause problems whenever the long-term dependencies are particularly severe. This problem is addressed by simply initializing the forget gate's bias to a large value such as 1 or 2. By doing so, the forget gate will be initialized to a value that is close to 1, enabling gradient flow. See also: The role of bias in Neural network What is bias in Neural network An Empirical Exploration of Recurrent Network Architectures", "best_answers_score":0.8, "library_name":"tensorflow", "question_url":"https:\/\/stackoverflow.com\/questions\/51959507\/does-bias-in-the-convolutional-layer-really-make-a-difference-to-the-test-accura", "best_answers_votes":21, "question_length":5526, "response_length":1571 }, { "question":"Understanding `tf.nn.nce_loss()` in tensorflow I am trying to understand the NCE loss function in Tensorflow. NCE loss is employed for a word2vec task, for instance: ``` # Look up embeddings for inputs. embeddings = tf.Variable( tf.random_uniform([vocabulary_size, embedding_size], -1.0, 1.0)) embed = tf.nn.embedding_lookup(embeddings, train_inputs) # Construct the variables for the NCE loss nce_weights = tf.Variable( tf.truncated_normal([vocabulary_size, embedding_size], stddev=1.0 \/ math.sqrt(embedding_size))) nce_biases = tf.Variable(tf.zeros([vocabulary_size])) # Compute the average NCE loss for the batch. # tf.nce_loss automatically draws a new sample of the negative labels each # time we evaluate the loss. 
loss = tf.reduce_mean( tf.nn.nce_loss(weights=nce_weights, biases=nce_biases, labels=train_labels, inputs=embed, num_sampled=num_sampled, num_classes=vocabulary_size)) ``` more details, please reference Tensorflow word2vec_basic.py What are the input and output matrices in the NCE function? In a word2vec model, we are interested in building representations for words. In the training process, given a slid window, every word will have two embeddings: 1) when the word is a centre word; 2) when the word is a context word. These two embeddings are called input and output vectors, respectively. (more explanations of input and output matrices) In my opinion, the input matrix is embeddings and the output matrix is nce_weights. Is it right? What is the final embedding? According to a post by s0urcer also relating to nce, it says the final embedding matrix is just the input matrix. While, some others saying, the final_embedding=input_matrix+output_matrix. Which is right\/more common?", "response":"Let's look at the relative code in word2vec example (examples\/tutorials\/word2vec). ``` embeddings = tf.Variable( tf.random_uniform([vocabulary_size, embedding_size], -1.0, 1.0)) embed = tf.nn.embedding_lookup(embeddings, train_inputs) ``` These two lines create embedding representations. embeddings is a matrix where each row represents a word vector. embedding_lookup is a quick way to get vectors corresponding to train_inputs. In word2vec example, train_inputs consists of some int32 number, representing the id of target words. Basically, it can be placed by hidden layer feature. ``` # Construct the variables for the NCE loss nce_weights = tf.Variable( tf.truncated_normal([vocabulary_size, embedding_size], stddev=1.0 \/ math.sqrt(embedding_size))) nce_biases = tf.Variable(tf.zeros([vocabulary_size])) ``` These two lines create parameters. They will be updated by optimizer during training. 
We can use tf.matmul(embed, tf.transpose(nce_weights)) + nce_biases to get final output score. In other words, last inner-product layer in classification can be replaced by it. ``` loss = tf.reduce_mean( tf.nn.nce_loss(weights=nce_weights, # [vocab_size, embed_size] biases=nce_biases, # [vocab_size] labels=train_labels, # [bs, 1] inputs=embed, # [bs, embed_size] num_sampled=num_sampled, num_classes=vocabulary_size)) ``` These lines create nce loss, @garej has given a very good explanation. num_sampled refers to the number of negative sampling in nce algorithm. To illustrate the usage of nce, we can apply it in mnist example (examples\/tutorials\/mnist\/mnist_deep.py) with following 2 steps: 1. Replace embed with hidden layer output. The dimension of hidden layer is 1024 and num_output is 10. Minimum value of num_sampled is 1. Remember to remove the last inner-product layer in deepnn(). ``` y_conv, keep_prob = deepnn(x) num_sampled = 1 vocabulary_size = 10 embedding_size = 1024 with tf.device('\/cpu:0'): embed = y_conv # Construct the variables for the NCE loss nce_weights = tf.Variable( tf.truncated_normal([vocabulary_size, embedding_size], stddev=1.0 \/ math.sqrt(embedding_size))) nce_biases = tf.Variable(tf.zeros([vocabulary_size])) ``` 2. Create loss and compute output. After computing the output, we can use it to calculate accuracy. Note that the label here is not one-hot vector as used in softmax. Labels are the original label of training samples. ``` loss = tf.reduce_mean( tf.nn.nce_loss(weights=nce_weights, biases=nce_biases, labels=y_idx, inputs=embed, num_sampled=num_sampled, num_classes=vocabulary_size)) output = tf.matmul(y_conv, tf.transpose(nce_weights)) + nce_biases correct_prediction = tf.equal(tf.argmax(output, 1), tf.argmax(y_, 1)) ``` When we set num_sampled=1, the val accuracy will end at around 98.8%. And if we set num_sampled=9, we can get almost the same val accuracy as trained by softmax. But note that nce is different from softmax. 
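To make the shapes in the scoring step concrete, here is a small numpy sketch of the output computation (the sizes and random values are purely illustrative, not taken from the mnist example):

```python
import numpy as np

batch_size, embedding_size, vocabulary_size = 2, 4, 10

# stand-ins for the hidden-layer output and the nce parameters
embed = np.random.RandomState(0).randn(batch_size, embedding_size)
nce_weights = np.random.RandomState(1).randn(vocabulary_size, embedding_size)
nce_biases = np.zeros(vocabulary_size)

# numpy analogue of tf.matmul(embed, tf.transpose(nce_weights)) + nce_biases:
# one score per vocabulary entry, so this replaces the final inner-product layer
output = embed @ nce_weights.T + nce_biases
print(output.shape)  # (2, 10): one logit per class, ready for argmax accuracy
predictions = output.argmax(axis=1)
```

This is the same product that tf.nn.nce_loss approximates during training by only sampling a few negative rows of nce_weights instead of using all of them.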
Full code of training mnist by nce can be found here. Hope it is helpful.", "best_answers_score":0.8, "library_name":"tensorflow", "question_url":"https:\/\/stackoverflow.com\/questions\/41475180\/understanding-tf-nn-nce-loss-in-tensorflow", "best_answers_votes":18, "question_length":1708, "response_length":2958 }, { "question":"Keras: how to get tensor dimensions inside custom loss? I'm trying to write my custom loss function: I want to apply categorical_crossentropy to the parts of input vector and then sum. Assume y_true, y_pred are 1D vectors. Code: ``` def custom_loss(y_true, y_pred): loss_sum= 0.0 for i in range(0,y_true.shape[0],dictionary_dims): loss_sum+= keras.backend.categorical_crossentropy(y_true[i*dictionary_dims:(i+1)*dictionary_dims], y_pred[i*dictionary_dims:(i+1)*dictionary_dims]) return loss_sum ``` But I get an error: ``` for i in range(0,y_true.shape[0],dictionary_dims): TypeError: __index__ returned non-int (type NoneType) ``` So how to access shape of input tensors to get subset of tensor? 
Update: Also tried to write loss via tensorflow directly: ``` def custom_loss_tf(y_true, y_pred): print('tf.shape(y_true)',tf.shape(y_true)) # print('type(tf.shape(y_true))',type(tf.shape(y_true))) # sys.exit() loss_sum= 0.0 for i in range(0,y_true.shape[0],dictionary_dims): loss_sum+= keras.backend.categorical_crossentropy(y_true[i*dictionary_dims:(i+1)*dictionary_dims], y_pred[i*dictionary_dims:(i+1)*dictionary_dims]) return loss_sum ``` Output: ``` tf.shape(y_true) Tensor(\"Shape:0\", shape=(2,), dtype=int32) type(tf.shape(y_true)) ``` Not sure what is shape=(2,) mean, but this is not what I'm expecting, because model.summary() shows that last layer is (None, 26): ``` _________________________________________________________________ Layer (type) Output Shape Param # ================================================================= input_1 (InputLayer) (None, 80, 120, 3) 0 _________________________________________________________________ conv2d_1 (Conv2D) (None, 80, 120, 32) 896 _________________________________________________________________ max_pooling2d_1 (MaxPooling2 (None, 40, 60, 32) 0 _________________________________________________________________ activation_1 (Activation) (None, 40, 60, 32) 0 _________________________________________________________________ conv2d_2 (Conv2D) (None, 40, 60, 32) 9248 _________________________________________________________________ max_pooling2d_2 (MaxPooling2 (None, 20, 30, 32) 0 _________________________________________________________________ activation_2 (Activation) (None, 20, 30, 32) 0 _________________________________________________________________ conv2d_3 (Conv2D) (None, 20, 30, 64) 18496 _________________________________________________________________ max_pooling2d_3 (MaxPooling2 (None, 10, 15, 64) 0 _________________________________________________________________ activation_3 (Activation) (None, 10, 15, 64) 0 _________________________________________________________________ conv2d_4 (Conv2D) (None, 10, 15, 
64) 36928 _________________________________________________________________ max_pooling2d_4 (MaxPooling2 (None, 5, 7, 64) 0 _________________________________________________________________ activation_4 (Activation) (None, 5, 7, 64) 0 _________________________________________________________________ flatten_1 (Flatten) (None, 2240) 0 _________________________________________________________________ head (Dense) (None, 26) 58266 ================================================================= ```", "response":"Two things here: If you want to get a tensor shape you should use int_shape function from keras.backend. The first dimension is set to be a batch dimension so int_shape(y_true)[0] will return you a batch size. You should use int_shape(y_true)[1].", "best_answers_score":0.8, "library_name":"tensorflow", "question_url":"https:\/\/stackoverflow.com\/questions\/45480820\/keras-how-to-get-tensor-dimensions-inside-custom-loss", "best_answers_votes":22, "question_length":3196, "response_length":246 }, { "question":"tensorflow einsum vs. matmul vs. tensordot In tensorflow, the functions tf.einsum, tf.matmul, and tf.tensordot can all be used for the same tasks. (I realize that tf.einsum and tf.tensordot have more general definitions; I also realize that tf.matmul has batch functionality.) In a situation where any of the three could be used, does one function tend to be fastest? Are there other recommendation rules? For example, suppose that A is a rank-2 tensor, and b is rank-1 tensor, and you want to compute the product c_j = A_ij b_j. Of the three options: c = tf.einsum('ij,j->i', A, b) c = tf.matmul(A, tf.expand_dims(b,1)) c = tf.tensordot(A, b, 1) Is any generally preferable to the others?", "response":"Both tf.tensordot() and tf.einsum() are syntactic sugar that wrap one or more invocations of tf.matmul() (although in some special cases tf.einsum() can reduce to the simpler elementwise tf.multiply()). 
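As a quick sanity check of that equivalence (sketched here with NumPy's analogues of the three calls rather than TensorFlow itself, and with illustrative array values): ```python
import numpy as np

# c_j = A_ij b_j computed three ways, mirroring the three TF calls.
A = np.array([[1.0, 2.0], [3.0, 4.0]])
b = np.array([5.0, 6.0])

c_einsum = np.einsum('ij,j->i', A, b)      # analogue of tf.einsum
c_matmul = (A @ b[:, None])[:, 0]          # analogue of tf.matmul + expand_dims
c_tensordot = np.tensordot(A, b, 1)        # analogue of tf.tensordot

assert np.allclose(c_einsum, c_matmul) and np.allclose(c_einsum, c_tensordot)
``` All three produce the same c, so the choice is mostly about readability and the size of the resulting graph.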
In the limit, I'd expect all three functions to have equivalent performance for the same computation. However, for smaller matrices, it may be more efficient to use tf.matmul() directly, because it would yield a simpler TensorFlow graph with fewer operations, and hence the per-operation invocation costs will be lower.", "best_answers_score":0.8, "library_name":"tensorflow", "question_url":"https:\/\/stackoverflow.com\/questions\/43100679\/tensorflow-einsum-vs-matmul-vs-tensordot", "best_answers_votes":19, "question_length":689, "response_length":522 }, { "question":"Siamese Neural Network in TensorFlow I'm trying to implement a Siamese Neural Network in TensorFlow but I cannot really find any working example on the Internet (see Yann LeCun paper). The architecture I'm trying to build would consist of two LSTMs sharing weights and only connected at the end of the network. My question is: how to build two different neural networks sharing their weights (tied weights) in TensorFlow and how to connect them at the end? Thanks :) Edit: I implemented a simple and working example of a siamese network here on MNIST.", "response":"Update with tf.layers If you use the tf.layers module to build your network, you can simply use the argument reuse=True for the second part of the Siamese network: ```py x = tf.ones((1, 3)) y1 = tf.layers.dense(x, 4, name='h1') y2 = tf.layers.dense(x, 4, name='h1', reuse=True) # y1 and y2 will evaluate to the same values sess = tf.Session() sess.run(tf.global_variables_initializer()) print(sess.run(y1)) print(sess.run(y2)) # both prints will return the same values ``` Old answer with tf.get_variable You can try using the function tf.get_variable(). (See the tutorial) Implement the first network using a variable scope with reuse=False: ``` with tf.variable_scope('Inference', reuse=False): weights_1 = tf.get_variable('weights', shape=[1, 1], initializer=...)
output_1 = weights_1 * input_1 ``` Then implement the second with the same code except using reuse=True ``` with tf.variable_scope('Inference', reuse=True): weights_2 = tf.get_variable('weights') output_2 = weights_2 * input_2 ``` The first implementation will create and initialize every variable of the LSTM, whereas the second implementation will use tf.get_variable() to get the same variables used in the first network. That way, variables will be shared. Then you just have to use whatever loss you want (e.g. you can use the L2 distance between the two siamese networks), and the gradients will backpropagate through both networks, updating the shared variables with the sum of the gradients.", "best_answers_score":0.8, "library_name":"tensorflow", "question_url":"https:\/\/stackoverflow.com\/questions\/36844909\/siamese-neural-network-in-tensorflow", "best_answers_votes":17, "question_length":551, "response_length":1466 }, { "question":"How is tf.summary.tensor_summary meant to be used? TensorFlow provides a tf.summary.tensor_summary() function that appears to be a multidimensional variant of tf.summary.scalar(): ```py tf.summary.tensor_summary(name, tensor, summary_description=None, collections=None) ``` I thought it could be useful for summarizing inferred probabilities per class ... somewhat like ```py op_summary = tf.summary.tensor_summary('classes', some_tensor) # ... summary = sess.run(op_summary) writer.add_summary(summary) ``` However it appears that TensorBoard doesn't provide a way to display these summaries at all. How are they meant to be used?", "response":"I cannot get it to work either. It seems like that feature is still under development. See this video from the TensorFlow Dev Summit that states that the tensor_summary is still under development (starting at 9:17): https:\/\/youtu.be\/eBbEDRsCmv4?t=9m17s. 
It will probably be better defined and examples should be provided in the future.", "best_answers_score":0.8, "library_name":"tensorflow", "question_url":"https:\/\/stackoverflow.com\/questions\/42329059\/how-is-tf-summary-tensor-summary-meant-to-be-used", "best_answers_votes":3, "question_length":631, "response_length":335 }, { "question":"What is the difference between keras and tf.keras? I'm learning TensorFlow and Keras. I'd like to try https:\/\/www.amazon.com\/Deep-Learning-Python-Francois-Chollet\/dp\/1617294438\/, and it seems to be written in Keras. Would it be fairly straightforward to convert code to tf.keras? I'm not more interested in the portability of the code, rather than the true difference between the two.", "response":"The difference between tf.keras and keras is the Tensorflow specific enhancement to the framework. keras is an API specification that describes how a Deep Learning framework should implement certain part, related to the model definition and training. Is framework agnostic and supports different backends (Theano, Tensorflow, ...) tf.keras is the Tensorflow specific implementation of the Keras API specification. It adds the framework the support for many Tensorflow specific features like: perfect support for tf.data.Dataset as input objects, support for eager execution, ... In Tensorflow 2.0 tf.keras will be the default and I highly recommend to start working using tf.keras", "best_answers_score":0.8, "library_name":"tensorflow", "question_url":"https:\/\/stackoverflow.com\/questions\/55178230\/what-is-the-difference-between-keras-and-tf-keras", "best_answers_votes":27, "question_length":384, "response_length":680 }, { "question":"tf.multiply vs tf.matmul to calculate the dot product I have a matrix (of vectors) X with shape [3,4], and I want to calculate the dot product between each pair of vectors (X[1].X[1]) and (X[1].X[2])...etc. 
I saw a cosine similarity code where they use tf.reduce_sum(tf.multiply(X, X), axis=1) to calculate the dot product between the vectors in a matrix of vectors. However, this only calculates the dot product between (X[i], X[i]). I used tf.matmul(X, X, transpose_b=True), which calculates the dot product between every two vectors, but I am still confused about why tf.multiply didn't do this; I think the problem is with my code. The code is: ``` data=[[1.0,2.0,4.0,5.0],[0.0,6.0,7.0,8.0],[8.0,1.0,1.0,1.0]] X=tf.constant(data) matResult=tf.matmul(X, X, transpose_b=True) multiplyResult=tf.reduce_sum(tf.multiply(X,X),axis=1) with tf.Session() as sess: print('matResult') print(sess.run([matResult])) print() print('multiplyResult') print(sess.run([multiplyResult])) ``` The output is: ``` matResult [array([[ 46., 80., 19.], [ 80., 149., 21.], [ 19., 21., 67.]], dtype=float32)] multiplyResult [array([ 46., 149., 67.], dtype=float32)] ``` I would appreciate any advice", "response":"tf.multiply(X, Y) or the * operator does element-wise multiplication so that: ``` [[1 2] [[1 3] [[1 6] [3 4]] . [2 1]] = [6 4]] ``` whereas tf.matmul does matrix multiplication so that: ``` [[1 0] [[1 3] [[1 3] [0 1]] . [2 1]] = [2 1]] ``` Using tf.matmul(X, X, transpose_b=True) means that you are calculating X . X^T, where ^T indicates the transposing of the matrix and . is the matrix multiplication. tf.reduce_sum(_, axis=1) takes the sum along the 1st axis (starting counting with 0), which means you are summing the rows: ``` tf.reduce_sum([[a, b], [c, d]], axis=1) = [a+b, c+d] ``` This means that: ``` tf.reduce_sum(tf.multiply(X, X), axis=1) = [X[1].X[1], ..., X[n].X[n]] ``` so that is the one you want if you only want the squared norm of each row. On the other hand: ``` tf.matmul(X, X, transpose_b=True) = [ [ X[1].X[1], X[1].X[2], ..., X[1].X[n] ], [ X[2].X[1], ..., X[2].X[n] ], ...
[ X[n].X[1], ..., X[n].X[n] ] ] ``` so that is what you need if you want the similarity between all pairs of rows.", "best_answers_score":0.8, "library_name":"tensorflow", "question_url":"https:\/\/stackoverflow.com\/questions\/47583501\/tf-multiply-vs-tf-matmul-to-calculate-the-dot-product", "best_answers_votes":50, "question_length":1170, "response_length":999 }, { "question":"How to import a saved Tensorflow model trained using tf.estimator and predict on input data I have saved the model using the tf.estimator method export_savedmodel as follows: ``` export_dir=\"exportModel\/\" feature_spec = tf.feature_column.make_parse_example_spec(feature_columns) input_receiver_fn = tf.estimator.export.build_parsing_serving_input_receiver_fn(feature_spec) classifier.export_savedmodel(export_dir, input_receiver_fn, as_text=False, checkpoint_path=\"Model\/model.ckpt-400\") ``` How can I import this saved model and use it for predictions?", "response":"I tried to search for a good base example, but it appears the documentation and samples are a bit scattered for this topic. So let's start with a base example: the tf.estimator quickstart. That particular example doesn't actually export a model, so let's do that (not needed for use case 1): ``` def serving_input_receiver_fn(): \"\"\"Build the serving inputs.\"\"\" # The outer dimension (None) allows us to batch up inputs for # efficiency. However, it also means that if we want a prediction # for a single instance, we'll need to wrap it in an outer list. inputs = {\"x\": tf.placeholder(shape=[None, 4], dtype=tf.float32)} return tf.estimator.export.ServingInputReceiver(inputs, inputs) export_dir = classifier.export_savedmodel( export_dir_base=\"\/path\/to\/model\", serving_input_receiver_fn=serving_input_receiver_fn) ``` Huge asterisk on this code: there appears to be a bug in TensorFlow 1.3 that doesn't allow you to do the above export on a \"canned\" estimator (such as DNNClassifier).
For a workaround, see the \"Appendix: Workaround\" section. The code below references export_dir (return value from the export step) to emphasize that it is not \"\/path\/to\/model\", but rather, a subdirectory of that directory whose name is a timestamp. Use Case 1: Perform prediction in the same process as training This is a scikit-learn type of experience, and is already exemplified by the sample. For completeness' sake, you simply call predict on the trained model: ``` classifier.train(input_fn=train_input_fn, steps=2000) # [...snip...] predictions = list(classifier.predict(input_fn=predict_input_fn)) predicted_classes = [p[\"classes\"] for p in predictions] ``` Use Case 2: Load a SavedModel into Python\/Java\/C++ and perform predictions Python Client Perhaps the easiest thing to use if you want to do prediction in Python is SavedModelPredictor. In the Python program that will use the SavedModel, we need code like this: ``` from tensorflow.contrib import predictor predict_fn = predictor.from_saved_model(export_dir) predictions = predict_fn( {\"x\": [[6.4, 3.2, 4.5, 1.5], [5.8, 3.1, 5.0, 1.7]]}) print(predictions['scores']) ``` Java Client ``` package dummy; import java.nio.FloatBuffer; import java.util.Arrays; import java.util.List; import org.tensorflow.SavedModelBundle; import org.tensorflow.Session; import org.tensorflow.Tensor; public class Client { public static void main(String[] args) { Session session = SavedModelBundle.load(args[0], \"serve\").session(); Tensor x = Tensor.create( new long[] {2, 4}, FloatBuffer.wrap( new float[] { 6.4f, 3.2f, 4.5f, 1.5f, 5.8f, 3.1f, 5.0f, 1.7f })); \/\/ Doesn't look like Java has a good way to convert the \/\/ input\/output name (\"x\", \"scores\") to their underlying tensor, \/\/ so we hard code them (\"Placeholder:0\", ...).
\/\/ You can inspect them on the command-line with saved_model_cli: \/\/ \/\/ $ saved_model_cli show --dir $EXPORT_DIR --tag_set serve --signature_def serving_default final String xName = \"Placeholder:0\"; final String scoresName = \"dnn\/head\/predictions\/probabilities:0\"; List<Tensor> outputs = session.runner() .feed(xName, x) .fetch(scoresName) .run(); \/\/ Outer dimension is batch size; inner dimension is number of classes float[][] scores = new float[2][3]; outputs.get(0).copyTo(scores); System.out.println(Arrays.deepToString(scores)); } } ``` C++ Client You'll likely want to use tensorflow::LoadSavedModel with Session. ``` #include <iostream> #include <string> #include <vector> #include \"tensorflow\/cc\/saved_model\/loader.h\" #include \"tensorflow\/core\/framework\/tensor.h\" #include \"tensorflow\/core\/public\/session.h\" namespace tf = tensorflow; int main(int argc, char** argv) { const string export_dir = argv[1]; tf::SavedModelBundle bundle; tf::Status load_status = tf::LoadSavedModel( tf::SessionOptions(), tf::RunOptions(), export_dir, {\"serve\"}, &bundle); if (!load_status.ok()) { std::cout << \"Error loading model: \" << load_status << std::endl; return -1; } \/\/ Tensor names as reported by saved_model_cli (cf. the Java client above). const string x_name = \"Placeholder:0\"; const string scores_name = \"dnn\/head\/predictions\/probabilities:0\"; tf::Tensor x(tf::DT_FLOAT, tf::TensorShape({2, 4})); auto matrix = x.matrix<float>(); matrix(0, 0) = 6.4; matrix(0, 1) = 3.2; matrix(0, 2) = 4.5; matrix(0, 3) = 1.5; matrix(1, 0) = 5.8; matrix(1, 1) = 3.1; matrix(1, 2) = 5.0; matrix(1, 3) = 1.7; std::vector<std::pair<string, tf::Tensor>> inputs = {{x_name, x}}; std::vector<tf::Tensor> outputs; tf::Status run_status = bundle.session->Run(inputs, {scores_name}, {}, &outputs); if (!run_status.ok()) { std::cout << \"Error running session: \" << run_status << std::endl; return -1; } \/\/ Outer dimension is batch size; inner dimension is number of classes std::cout << outputs[0].matrix<float>() << std::endl; } ``` Use Case 3: Serve a model using TensorFlow Serving Exporting models in a manner amenable to serving a Classification model requires that the input be a tf.Example object. Here's how we might export a model for TensorFlow serving: ``` def serving_input_receiver_fn(): \"\"\"Build the serving inputs.\"\"\" # The outer dimension (None) allows us to batch up inputs for # efficiency. However, it also means that if we want a prediction # for a single instance, we'll need to wrap it in an outer list.
example_bytestring = tf.placeholder( shape=[None], dtype=tf.string, ) features = tf.parse_example( example_bytestring, tf.feature_column.make_parse_example_spec(feature_columns) ) return tf.estimator.export.ServingInputReceiver( features, {'examples': example_bytestring}) export_dir = classifier.export_savedmodel( export_dir_base=\"\/path\/to\/model\", serving_input_receiver_fn=serving_input_receiver_fn) ``` The reader is referred to TensorFlow Serving's documentation for more instructions on how to set up TensorFlow Serving, so I'll only provide the client code here: ``` # Omitting a bunch of connection\/initialization code... # But at some point we end up with a stub whose lifecycle # is generally longer than that of a single request. stub = create_stub(...) # The actual values for prediction. We have two examples in this # case, each consisting of a single, multi-dimensional feature `x`. # This data here is the equivalent of the map passed to the # `predict_fn` in use case #2. examples = [ tf.train.Example( features=tf.train.Features( feature={\"x\": tf.train.Feature( float_list=tf.train.FloatList(value=[6.4, 3.2, 4.5, 1.5]))})), tf.train.Example( features=tf.train.Features( feature={\"x\": tf.train.Feature( float_list=tf.train.FloatList(value=[5.8, 3.1, 5.0, 1.7]))})), ] # Build the RPC request. predict_request = predict_pb2.PredictRequest() predict_request.model_spec.name = \"default\" predict_request.inputs[\"examples\"].CopyFrom( tensor_util.make_tensor_proto(examples, tf.float32)) # Perform the actual prediction. stub.Predict(predict_request, PREDICT_DEADLINE_SECS)
Appendix: Working around Exports from Canned Models in TF 1.3 There appears to be a bug in TensorFlow 1.3 in which canned models do not export properly for use case 2 (the problem does not exist for \"custom\" estimators). Here is a workaround that wraps a DNNClassifier to make things work, specifically for the Iris example: ``` # Build 3 layer DNN with 10, 20, 10 units respectively. class Wrapper(tf.estimator.Estimator): def __init__(self, **kwargs): dnn = tf.estimator.DNNClassifier(**kwargs) def model_fn(mode, features, labels): spec = dnn._call_model_fn(features, labels, mode) export_outputs = None if spec.export_outputs: export_outputs = { \"serving_default\": tf.estimator.export.PredictOutput( {\"scores\": spec.export_outputs[\"serving_default\"].scores, \"classes\": spec.export_outputs[\"serving_default\"].classes})} # Replace the export_outputs element (index 4) of the spec copy = list(spec) copy[4] = export_outputs return tf.estimator.EstimatorSpec(mode, *copy) super(Wrapper, self).__init__(model_fn, kwargs[\"model_dir\"], dnn.config) classifier = Wrapper(feature_columns=feature_columns, hidden_units=[10, 20, 10], n_classes=3, model_dir=\"\/tmp\/iris_model\") ```", "best_answers_score":0.8, "library_name":"tensorflow", "question_url":"https:\/\/stackoverflow.com\/questions\/46098863\/how-to-import-an-saved-tensorflow-model-train-using-tf-estimator-and-predict-on", "best_answers_votes":56, "question_length":545, "response_length":7624 }, { "question":"how to normalize input data for models in tensorflow My training data are saved in 3 files; each file is too large and cannot fit into memory. For each training example, the data are two-dimensional (2805 rows and 222 columns; the 222nd column is the label) and are numerical values. I would like to normalize the data before feeding it into the model for training. Below is my code for the input_pipeline, and the data has not been normalized before creating the dataset.
Are there functions in TensorFlow that can do this normalization in my case? ``` dataset = tf.data.TextLineDataset([file1, file2, file3]) # combine 2805 lines into a single example dataset = dataset.batch(2805) def parse_example(line_batch): record_defaults = [[1.0] for col in range(0, 221)] record_defaults.append([1]) content = tf.decode_csv(line_batch, record_defaults = record_defaults, field_delim = '\\t') features = tf.stack(content[0:221]) features = tf.transpose(features) label = content[-1][-1] label = tf.one_hot(indices = tf.cast(label, tf.int32), depth = 2) return features, label dataset = dataset.map(parse_example) dataset = dataset.shuffle(1000) # batch multiple examples dataset = dataset.batch(batch_size) dataset = dataset.repeat(num_epochs) iterator = dataset.make_one_shot_iterator() data_batch, label_batch = iterator.get_next() ```", "response":"There are different ways of \"normalizing data\". Depending on which one you have in mind, it may or may not be easy to implement in your case. 1. Fixed normalization If you know the fixed range(s) of your values (e.g. feature #1 has values in [-5, 5], feature #2 has values in [0, 100], etc.), you could easily pre-process your feature tensor in parse_example(), e.g.: ```python def normalize_fixed(x, current_range, normed_range): current_min, current_max = tf.expand_dims(current_range[:, 0], 1), tf.expand_dims(current_range[:, 1], 1) normed_min, normed_max = tf.expand_dims(normed_range[:, 0], 1), tf.expand_dims(normed_range[:, 1], 1) x_normed = (x - current_min) \/ (current_max - current_min) x_normed = x_normed * (normed_max - normed_min) + normed_min return x_normed def parse_example(line_batch, fixed_range=[[-5, 5], [0, 100], ...], normed_range=[[0, 1]]): # ... features = tf.transpose(features) features = normalize_fixed(features, fixed_range, normed_range) # ... ``` 2.
Per-sample normalization If your features are supposed to have approximately the same range of values, per-sample normalization could also be considered, i.e. applying normalization considering the features moments (mean, variance) for each sample: ```python def normalize_with_moments(x, axes=[0, 1], epsilon=1e-8): mean, variance = tf.nn.moments(x, axes=axes) x_normed = (x - mean) \/ tf.sqrt(variance + epsilon) # epsilon to avoid dividing by zero return x_normed def parse_example(line_batch): # ... features = tf.transpose(features) features = normalize_with_moments(features) # ... ``` 3. Batch normalization You could apply the same procedure over a complete batch instead of per-sample, which may make the process more stable: ```python data_batch = normalize_with_moments(data_batch, axis=[1, 2]) ``` Similarly, you could use tf.nn.batch_normalization 4. Dataset normalization Normalizing using the mean\/variance computed over the whole dataset would be the trickiest, since as you mentioned it is a large, split one. tf.data.Dataset isn't really meant for such global computation. A solution would be to use whatever tools you have to pre-compute the dataset moments, then use this information for your TF pre-processing. As mentioned by @MiniQuark, Tensorflow has a Transform library you could use to preprocess your data. Have a look at the Get Started, or for instance at the tft.scale_to_z_score() method for sample normalization.", "best_answers_score":0.8, "library_name":"tensorflow", "question_url":"https:\/\/stackoverflow.com\/questions\/50346017\/how-to-normalize-input-data-for-models-in-tensorflow", "best_answers_votes":34, "question_length":1317, "response_length":2424 }, { "question":"Implementation difference between TensorFlow Variable and TensorFlow Tensor First of all, I am aware that a related question has been asked here. However, this question is about the implementation and internals. I was reading the paper \"A Tour of TensorFlow\". 
The following two points are quoted from there: 1. A tensor itself does not hold or store values in memory, but provides only an interface for retrieving the value referenced by the tensor. This suggests to me that a Tensor is an object that simply stores the pointer to a result of an operation and, on retrieving the result or value of the tensor, it simply dereferences that pointer. 2. Variables can be described as persistent, mutable handles to in-memory buffers storing tensors. As such, variables are characterized by a certain shape and a fixed type. At this I get confused because I thought, based on the previous point, that Tensors simply store a pointer. If they were simply pointers, they could be mutable as well. To be precise these are my questions: What is the meaning of \"in-memory buffers\"? What is the meaning of a \"handle\"? Is my initial assumption about the internals of a tensor correct? What is the essential internal implementation difference between a tensor and a variable? Why are they declared differently and why is that difference essential to TensorFlow?", "response":"Before explaining the distinction between tensors and variables, we should be precise about what the word \"tensor\" means in the context of TensorFlow: In the Python API, a tf.Tensor object represents the symbolic result of a TensorFlow operation. For example, in the expression t = tf.matmul(x, y), t is a tf.Tensor object representing the result of multiplying x and y (which may themselves be symbolic results of other operations, concrete values such as NumPy arrays, or variables). In this context, a \"symbolic result\" is more complicated than a pointer to the result of an operation. It is more analogous to a function object that, when called (i.e. passed to tf.Session.run()) will run the necessary computation to produce the result of that operation, and return it to you as a concrete value (e.g. a NumPy array). 
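To make the \"function object\" analogy concrete, here is a rough pure-Python sketch (this is not TensorFlow code; the SymbolicResult class is invented purely to illustrate deferred evaluation): ```python
# Illustration only: a graph-mode tf.Tensor is closer to a deferred
# computation than to an array of numbers.
class SymbolicResult:
    def __init__(self, fn):
        self._fn = fn            # stores how to compute the value

    def run(self):               # plays the role of tf.Session.run(t)
        return self._fn()        # only now is a concrete value produced

x = [[1.0, 2.0], [3.0, 4.0]]
y = [[5.0], [6.0]]

# Building 't' records the matmul; nothing is computed yet.
t = SymbolicResult(lambda: [[sum(a * b for a, b in zip(row, col))
                             for col in zip(*y)] for row in x])

print(t.run())  # [[17.0], [39.0]]
``` The real tf.Tensor also carries shape and dtype metadata, but the key point is the same: evaluation is deferred until you explicitly run it.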
In the C++ API, a tensorflow::Tensor object represents the concrete value of a multi-dimensional array. For example, the MatMul kernel takes two two-dimensional tensorflow::Tensor objects as inputs, and produces a single two-dimensional tensorflow::Tensor object as its output. This distinction is a little confusing, and we might choose different names if we started over (in other language APIs, we prefer the name Output for a symbolic result and Tensor for a concrete value). A similar distinction exists for variables. In the Python API, a tf.Variable is the symbolic representation of a variable, which has methods for creating operations that read the current value of the variable, and assign values to it. In the C++ implementation, a tensorflow::Var object is a wrapper around a shared, mutable tensorflow::Tensor object. With that context out the way, we can address your specific questions: What is the meaning of \"in-memory buffers\"? An in-memory buffer is simply a contiguous region of memory that has been allocated with a TensorFlow allocator. tensorflow::Tensor objects contain a pointer to an in-memory buffer, which holds the values of that tensor. The buffer could be in host memory (i.e. accessible from the CPU) or device memory (e.g. accessible only from a GPU), and TensorFlow has operations to move data between these memory spaces. What is the meaning of a \"handle\"? In the explanation in the paper, the word \"handle\" is used in a couple of different ways, which are slightly different from how TensorFlow uses the term. The paper uses \"symbolic handle\" to refer to a tf.Tensor object, and \"persistent, mutable handle\" to refer to a tf.Variable object. The TensorFlow codebase uses \"handle\" to refer to a name for a stateful object (like a tf.FIFOQueue or tf.TensorArray) that can be passed around without copying all of the values (i.e. call-by-reference). Is my initial assumption about the internal of a tensor correct? 
Your assumption most closely matches the definition of a (C++) tensorflow::Tensor object. The (Python) tf.Tensor object is more complicated because it refers to a function for computing a value, rather than the value itself. What is the essential internal implementation difference between a tensor and a variable? In C++, a tensorflow::Tensor and tensorflow::Var are very similar; the only difference is that tensorflow::Var also has a mutex that can be used to lock the variable when it is being updated. In Python, the essential difference is that a tf.Tensor is implemented as a dataflow graph, and it is read-only (i.e. by calling tf.Session.run()). A tf.Variable can be both read (i.e. by evaluating its read operation) and written (e.g. by running an assign operation). Why are they declared differently and why is that difference essential to TensorFlow? Tensors and variables serve different purposes. Tensors (tf.Tensor objects) can represent complex compositions of mathematical expressions, like loss functions in a neural network, or symbolic gradients. Variables represent state that is updated over time, like weight matrices and convolutional filters during training. While in principle you could represent the evolving state of a model without variables, you would end up with a very large (and repetitive) mathematical expression, so variables provide a convenient way to materialize the state of the model, and\u2014for example\u2014share it with other machines for parallel training.", "best_answers_score":0.8, "library_name":"tensorflow", "question_url":"https:\/\/stackoverflow.com\/questions\/40866675\/implementation-difference-between-tensorflow-variable-and-tensorflow-tensor", "best_answers_votes":52, "question_length":1347, "response_length":4263 }, { "question":"tensorflow:Can save best model only with val_acc available, skipping I have an issue with tf.keras.callbacks.ModelCheckpoint.
As you can see in my log file, the warning comes always before the last iteration where the val_acc is calculated. Therefore, Modelcheckpoint never finds the val_acc ``` Epoch 1\/30 1\/8 [==>...........................] - ETA: 19s - loss: 1.4174 - accuracy: 0.3000 2\/8 [======>.......................] - ETA: 8s - loss: 1.3363 - accuracy: 0.3500 3\/8 [==========>...................] - ETA: 4s - loss: 1.3994 - accuracy: 0.2667 4\/8 [==============>...............] - ETA: 3s - loss: 1.3527 - accuracy: 0.3250 6\/8 [=====================>........] - ETA: 1s - loss: 1.3042 - accuracy: 0.3333 WARNING:tensorflow:Can save best model only with val_acc available, skipping. 8\/8 [==============================] - 4s 482ms\/step - loss: 1.2846 - accuracy: 0.3375 - val_loss: 1.3512 - val_accuracy: 0.5000 Epoch 2\/30 1\/8 [==>...........................] - ETA: 0s - loss: 1.0098 - accuracy: 0.5000 3\/8 [==========>...................] - ETA: 0s - loss: 0.8916 - accuracy: 0.5333 5\/8 [=================>............] - ETA: 0s - loss: 0.9533 - accuracy: 0.5600 6\/8 [=====================>........] - ETA: 0s - loss: 0.9523 - accuracy: 0.5667 7\/8 [=========================>....] - ETA: 0s - loss: 0.9377 - accuracy: 0.5714 WARNING:tensorflow:Can save best model only with val_acc available, skipping. 8\/8 [==============================] - 1s 98ms\/step - loss: 0.9229 - accuracy: 0.5750 - val_loss: 1.2507 - val_accuracy: 0.5000 ``` This is my code for training the CNN. 
``` callbacks = [ TensorBoard(log_dir=r'C:\\Users\\reda\\Desktop\\logs\\{}'.format(Name), histogram_freq=1), ModelCheckpoint(filepath=r\"C:\\Users\\reda\\Desktop\\checkpoints\\{}\".format(Name), monitor='val_acc', verbose=2, save_best_only=True, mode='max')] history = model.fit_generator( train_data_gen, steps_per_epoch=total_train \/\/ batch_size, epochs=epochs, validation_data=val_data_gen, validation_steps=total_val \/\/ batch_size, callbacks=callbacks) ```", "response":"I know how frustrating these things can be sometimes..but tensorflow requires that you explicitly write out the name of metric you are wanting to calculate You will need to actually say 'val_accuracy' ``` metric = 'val_accuracy' ModelCheckpoint(filepath=r\"C:\\Users\\reda.elhail\\Desktop\\checkpoints\\{}\".format(Name), monitor=metric, verbose=2, save_best_only=True, mode='max')] ``` Hope this helps =) *** As later noted by BlueTurtle (Give their answer a thumbs up please, likely still beneath this) you also need to use the full metric name to match your model.compile, ModelCheckpoint, and EarlyStopping.", "best_answers_score":0.8, "library_name":"tensorflow", "question_url":"https:\/\/stackoverflow.com\/questions\/61505749\/tensorflowcan-save-best-model-only-with-val-acc-available-skipping", "best_answers_votes":23, "question_length":2025, "response_length":604 }, { "question":"Tensorflow TypeError: Fetch argument None has invalid type ? I'm building a RNN loosely based on the TensorFlow tutorial. 
The relevant parts of my model are as follows: ``` input_sequence = tf.placeholder(tf.float32, [BATCH_SIZE, TIME_STEPS, PIXEL_COUNT + AUX_INPUTS]) output_actual = tf.placeholder(tf.float32, [BATCH_SIZE, OUTPUT_SIZE]) lstm_cell = tf.nn.rnn_cell.BasicLSTMCell(CELL_SIZE, state_is_tuple=False) stacked_lstm = tf.nn.rnn_cell.MultiRNNCell([lstm_cell] * CELL_LAYERS, state_is_tuple=False) initial_state = state = stacked_lstm.zero_state(BATCH_SIZE, tf.float32) outputs = [] with tf.variable_scope(\"LSTM\"): for step in xrange(TIME_STEPS): if step > 0: tf.get_variable_scope().reuse_variables() cell_output, state = stacked_lstm(input_sequence[:, step, :], state) outputs.append(cell_output) final_state = state ``` And the feeding: ``` cross_entropy = tf.reduce_mean(-tf.reduce_sum(output_actual * tf.log(prediction), reduction_indices=[1])) train_step = tf.train.AdamOptimizer(learning_rate=LEARNING_RATE).minimize(cross_entropy) correct_prediction = tf.equal(tf.argmax(prediction, 1), tf.argmax(output_actual, 1)) accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32)) with tf.Session() as sess: sess.run(tf.initialize_all_variables()) numpy_state = initial_state.eval() for i in xrange(1, ITERATIONS): batch = DI.next_batch() print i, type(batch[0]), np.array(batch[1]).shape, numpy_state.shape if i % LOG_STEP == 0: train_accuracy = accuracy.eval(feed_dict={ initial_state: numpy_state, input_sequence: batch[0], output_actual: batch[1] }) print \"Iteration \" + str(i) + \" Training Accuracy \" + str(train_accuracy) numpy_state, train_step = sess.run([final_state, train_step], feed_dict={ initial_state: numpy_state, input_sequence: batch[0], output_actual: batch[1] }) ``` When I run this, I get the following error: ``` Traceback (most recent call last): File \"\/home\/agupta\/Documents\/Projects\/Image-Recognition-with-LSTM\/RNN\/feature_tracking\/model.py\", line 109, in output_actual: batch[1] File 
\"\/usr\/local\/lib\/python2.7\/dist-packages\/tensorflow\/python\/client\/session.py\", line 698, in run run_metadata_ptr) File \"\/usr\/local\/lib\/python2.7\/dist-packages\/tensorflow\/python\/client\/session.py\", line 838, in _run fetch_handler = _FetchHandler(self._graph, fetches) File \"\/usr\/local\/lib\/python2.7\/dist-packages\/tensorflow\/python\/client\/session.py\", line 355, in __init__ self._fetch_mapper = _FetchMapper.for_fetch(fetches) File \"\/usr\/local\/lib\/python2.7\/dist-packages\/tensorflow\/python\/client\/session.py\", line 181, in for_fetch return _ListFetchMapper(fetch) File \"\/usr\/local\/lib\/python2.7\/dist-packages\/tensorflow\/python\/client\/session.py\", line 288, in __init__ self._mappers = [_FetchMapper.for_fetch(fetch) for fetch in fetches] File \"\/usr\/local\/lib\/python2.7\/dist-packages\/tensorflow\/python\/client\/session.py\", line 178, in for_fetch (fetch, type(fetch))) TypeError: Fetch argument None has invalid type ``` Perhaps the weirdest part is that this error gets thrown the second iteration, and the first works completely fine. I'm ripping my hair trying to fix this, so any help would be greatly appreciated.", "response":"You are re-assigning the train_step variable to the second element of the result of sess.run() (which happens to be None). Hence, on the second iteration, train_step is None, which leads to the error. The fix is fortunately simple: ``` for i in xrange(1, ITERATIONS): # ... # Discard the second element of the result. 
numpy_state, _ = sess.run([final_state, train_step], feed_dict={ initial_state: numpy_state, input_sequence: batch[0], output_actual: batch[1] }) ```", "best_answers_score":0.8, "library_name":"tensorflow", "question_url":"https:\/\/stackoverflow.com\/questions\/39114832\/tensorflow-typeerror-fetch-argument-none-has-invalid-type-type-nonetype", "best_answers_votes":33, "question_length":3141, "response_length":467 }, { "question":"Dynamic size for tf.zeros() (for use with placeholders with None dimensions) Consider the following code: ```py x = tf.placeholder(\"float\", shape=[42, 4]) y = tf.zeros([42, 4], \"float\") xy_stacked = tf.concat(1, [x, y]) print(x.get_shape()) print(y.get_shape()) print(xy_stacked.get_shape()) ``` This will produce the following output, as expected: ```py TensorShape([Dimension(42), Dimension(4)]) TensorShape([Dimension(42), Dimension(4)]) TensorShape([Dimension(42), Dimension(8)]) ``` However, what if the placeholder has a dynamic dimension that is determined at run-time by the value passed to feed_dict=, as placeholders often do: ```py x = tf.placeholder(\"float\", shape=[None, 4]) y = tf.zeros([None, 4], \"float\") xy_stacked = tf.concat(1, [x, y]) ``` This will produce an error for tf.zeros([None, 4], \"float\"). Apparently Dimension(None) is not allowed for tf.zeros: ```py TypeError Traceback (most recent call last) in () 2 3 x = tf.placeholder(\"float\", shape=[None, 4]) ----> 4 y = tf.zeros([None, 4], \"float\") 5 xy_stacked = tf.concat(1, [x, y]) 6 [...] 
\/usr\/local\/lib\/python3.4\/dist-packages\/numpy\/core\/_methods.py in _prod(a, axis, dtype, out, keepdims) 33 34 def _prod(a, axis=None, dtype=None, out=None, keepdims=False): ---> 35 return umr_prod(a, axis, dtype, out, keepdims) 36 37 def _any(a, axis=None, dtype=None, out=None, keepdims=False): TypeError: unsupported operand type(s) for *: 'NoneType' and 'int' ``` I have figured out that it does not produce an error if I set the first dimension of my zeros tensor to non-None, such as 1: ```py x = tf.placeholder(\"float\", shape=[None, 4]) y = tf.zeros([1, 4], \"float\") xy_stacked = tf.concat(1, [x, y]) ``` but then the resulting xy_stacked tensor is truncated to this size: ```py TensorShape([Dimension(None), Dimension(4)]) TensorShape([Dimension(1), Dimension(4)]) TensorShape([Dimension(1), Dimension(8)]) ``` How can I pad the placeholder tensor with zeros so I get a tensor of shape TensorShape([Dimension(None), Dimension(8)]) in this example? The only \"solutions\" I found so far is either something like the following: ```py x = tf.placeholder(\"float\", shape=[None, 4]) y = 0 * x xy_stacked = tf.concat(1, [x, y]) ``` Or simply declaring y as a placeholder and always passing a zero array of the right size. But neither looks like a clean solution to the problem and hacks like that get out of hand quickly in an application more complex than this simple example.. 
I'm using tensorflow-0.6.0-py3.", "response":"The recommended way to make a zero tensor with the same shape as another tensor is to use the tf.zeros_like() op: ```py x = tf.placeholder(tf.float32, shape=[None, 4]) y = tf.zeros_like(x) ``` The resulting tensor y appears to have the shape [None, None] according to Tensor.get_shape(), but at runtime it will expand to the same shape as x: ```py print y.get_shape() # ==> TensorShape([Dimension(None), Dimension(None)]) sess = tf.Session() y_result = sess.run(y, feed_dict={x: np.random.rand(4, 4)}) print y_result.shape # ==> (4, 4) ``` The [None, None] static shape is returned because shape inference hasn't been specialized for tf.zeros_like(). I've filed a GitHub issue for that and it should be fixed soon. EDIT: In your comment, you asked how to deal with the case where the zero tensor had a shape based on, but different from the original tensor. This is also possible, using tf.shape() and tf.stack() to build the dimensions, and tf.fill() to produce the zero tensor: ``` x = tf.placeholder(tf.float32, shape=[None, 4]) # Use tf.shape() to get the runtime size of `x` in the 0th dimension. zeros_dims = tf.stack([tf.shape(x)[0], 7]) y = tf.fill(zeros_dims, 0.0) sess = tf.Session() y_result = sess.run(y, feed_dict={x: np.random.rand(4, 4)}) print y_result.shape # ==> (4, 7) ```", "best_answers_score":0.8, "library_name":"tensorflow", "question_url":"https:\/\/stackoverflow.com\/questions\/34718736\/dynamic-size-for-tf-zeros-for-use-with-placeholders-with-none-dimensions", "best_answers_votes":41, "question_length":2473, "response_length":1291 }, { "question":"Why tensorflow uses channel-last ordering instead of row-major? In most tensorflow tutorials authors use channel-last dimension ordering, e.g. ``` input_layer = tf.reshape(features, [-1, 28, 28, 1]) ``` where the last digit represents the number of channels (https:\/\/www.tensorflow.org\/tutorials\/layers). Being used to Theano and Numpy (both use C-ordering, i.e. 
row-major), I find this awkward. Moreover, having read the documentation on in-memory layout schemes in tensorflow, I reckon channel-last layout will cause more cache-misses, because convolutions are carried out on individual channels, while in channel-last ordering these channels are intermixed in linear memory, effectively shrinking the cache by N (where N is the number of channels), which is especially inefficient in 3D and 4D convolutions. Am I getting something wrong? P.S. I've found a closely-related thread (Tensorflow 3 channel order of color inputs). The author of the accepted answer states that TF uses row-major by default, but given that all of the tutorials I've found so far show channel-last ordering I find that claim misleading.", "response":"Here's the explanation: https:\/\/www.tensorflow.org\/performance\/performance_guide#use_nchw_image_data_format Image data format refers to the representation of batches of images. TensorFlow supports NHWC (TensorFlow default) and NCHW (cuDNN default). N refers to the number of images in a batch, H refers to the number of pixels in the vertical dimension, W refers to the number of pixels in the horizontal dimension, and C refers to the channels (e.g. 1 for black and white, 3 for RGB, etc.) Although cuDNN can operate on both formats, it is faster to operate in its default format. The best practice is to build models that work with both NCHW and NHWC as it is common to train using NCHW on GPU, and then do inference with NHWC on CPU. The very brief history of these two formats is that TensorFlow started by using NHWC because it was a little faster on CPUs. Then the TensorFlow team discovered that NCHW performs better when using the NVIDIA cuDNN library. The current recommendation is that users support both formats in their models. In the long term, we plan to rewrite graphs to make switching between the formats transparent. 
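To make the two layouts concrete, here is a small NumPy sketch (shapes are illustrative) converting a batch from NHWC to NCHW:

```python
import numpy as np

# A batch of 8 RGB images of size 32x32 in NHWC (channels-last) layout.
x_nhwc = np.zeros((8, 32, 32, 3), dtype=np.float32)

# NCHW (channels-first) is obtained by moving the channel axis forward.
x_nchw = np.transpose(x_nhwc, (0, 3, 1, 2))

print(x_nhwc.shape)  # (8, 32, 32, 3)
print(x_nchw.shape)  # (8, 3, 32, 32)
```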
Moreover, digging into the code we can see here that when the input is in the format NHWC, tensorflow converts it for you to NCHW. ``` if (data_format == FORMAT_NHWC) { \/\/ Convert the input tensor from NHWC to NCHW. TensorShape nchw_shape = ShapeFromFormat(FORMAT_NCHW, in_batch, in_rows, in_cols, in_depths); if (in_depths > 1) { Tensor transformed_input; OP_REQUIRES_OK(ctx, ctx->allocate_temp(DataTypeToEnum::value, nchw_shape, &transformed_input)); functor::NHWCToNCHW()( ctx->eigen_device(), const_cast(input).tensor(), transformed_input.tensor()); input = transformed_input; } else { \/\/ If depth <= 1, then just reshape. CHECK(input.CopyFrom(input, nchw_shape)); } } ``` You can specify the data format you want to use for every operation but tensorflow at default doesn't use NCHW but NHWC, that's why even the TF defelopers still use NHWC to avoid to specify in every operation the format", "best_answers_score":0.8, "library_name":"tensorflow", "question_url":"https:\/\/stackoverflow.com\/questions\/44774234\/why-tensorflow-uses-channel-last-ordering-instead-of-row-major", "best_answers_votes":30, "question_length":1114, "response_length":2031 }, { "question":"What does `training=True` mean when calling a TensorFlow Keras model? In TensorFlow's offcial documentations, they always pass training=True when calling a Keras model in a training loop, for example, logits = mnist_model(images, training=True). I tried help(tf.keras.Model.call) and it shows that ``` Help on function call in module tensorflow.python.keras.engine.network: call(self, inputs, training=None, mask=None) Calls the model on new inputs. In this case `call` just reapplies all ops in the graph to the new inputs (e.g. build a new computational graph from the provided inputs). Arguments: inputs: A tensor or list of tensors. training: Boolean or boolean scalar tensor, indicating whether to run the `Network` in training mode or inference mode. mask: A mask or list of masks. 
A mask can be either a tensor or None (no mask). Returns: A tensor if there is a single output, or a list of tensors if there are more than one outputs. ``` It says that training is a Boolean or boolean scalar tensor, indicating whether to run the `Network` in training mode or inference mode. But I didn't find any information about these two modes. In a nutshell, I don't know what the influence of this argument is. And what if I missed this argument when training?", "response":"Some neural network layers behave differently during training and inference, for example Dropout and BatchNormalization layers. For example, during training, dropout will randomly drop out units and correspondingly scale up activations of the remaining units. During inference, it does nothing (since you usually don't want the randomness of dropping out units here). The training argument lets the layer know which of the two \"paths\" it should take. If you set this incorrectly, your network might not behave as expected.", "best_answers_score":0.8, "library_name":"tensorflow", "question_url":"https:\/\/stackoverflow.com\/questions\/57320371\/what-does-training-true-mean-when-calling-a-tensorflow-keras-model", "best_answers_votes":34, "question_length":1253, "response_length":521 }, { "question":"Keras custom loss function: Accessing current input pattern In Keras (with Tensorflow backend), is the current input pattern available to my custom loss function? The current input pattern is defined as the input vector used to produce the prediction. For example, consider the following: X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.33, random_state=42, shuffle=False). Then the current input pattern is the current X_train vector associated with the y_train (which is termed y_true in the loss function). When designing a custom loss function, I intend to optimize\/minimize a value that requires access to the current input pattern, not just the current prediction.
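The training-versus-inference difference for dropout described in the answer above can be simulated in plain NumPy — a sketch of inverted dropout, not the Keras implementation:

```python
import numpy as np

rng = np.random.default_rng(0)
x = np.ones(10, dtype=np.float32)
rate = 0.5

# Training mode: drop units at random and scale the survivors by
# 1/(1-rate) so the expected activation stays the same.
mask = rng.random(x.shape) >= rate
train_out = x * mask / (1.0 - rate)

# Inference mode: dropout is the identity.
infer_out = x

print(train_out)  # entries are either 0.0 or 2.0
print(infer_out)  # all ones
```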
I've taken a look through https:\/\/github.com\/fchollet\/keras\/blob\/master\/keras\/losses.py I've also looked through \"Cost function that isn't just y_pred, y_true?\" I am also familiar with previous examples to produce a customized loss function: ``` import keras.backend as K def customLoss(y_true,y_pred): return K.sum(K.log(y_true) - K.log(y_pred)) ``` Presumably (y_true,y_pred) are defined elsewhere. I've taken a look through the source code without success and I'm wondering whether I need to define the current input pattern myself or whether this is already accessible to my loss function.", "response":"You can wrap the loss function as an inner function and pass your input tensor to it (as commonly done when passing additional arguments to the loss function). ``` def custom_loss_wrapper(input_tensor): def custom_loss(y_true, y_pred): return K.binary_crossentropy(y_true, y_pred) + K.mean(input_tensor) return custom_loss input_tensor = Input(shape=(10,)) hidden = Dense(100, activation='relu')(input_tensor) out = Dense(1, activation='sigmoid')(hidden) model = Model(input_tensor, out) model.compile(loss=custom_loss_wrapper(input_tensor), optimizer='adam') ``` You can verify that input_tensor and the loss value (mostly, the K.mean(input_tensor) part) will change as different X is passed to the model.
``` X = np.random.rand(1000, 10) y = np.random.randint(2, size=1000) model.test_on_batch(X, y) # => 1.1974642 X *= 1000 model.test_on_batch(X, y) # => 511.15466 ```", "best_answers_score":0.8, "library_name":"tensorflow", "question_url":"https:\/\/stackoverflow.com\/questions\/46464549\/keras-custom-loss-function-accessing-current-input-pattern", "best_answers_votes":35, "question_length":1286, "response_length":870 }, { "question":"Read only mode in keras I have cloned human pose estimation keras model from this link human pose estimation keras When I try to load the model on google colab, I get the following error code ``` from keras.models import load_model model = load_model('model.h5') ``` error ``` ValueError Traceback (most recent call last) in () 1 from keras.models import load_model ----> 2 model = load_model('model.h5') \/usr\/local\/lib\/python3.6\/dist-packages\/keras\/engine\/saving.py in load_model(filepath, custom_objects, compile) 417 f = h5dict(filepath, 'r') 418 try: --> 419 model = _deserialize_model(f, custom_objects, compile) 420 finally: 421 if opened_new_file: \/usr\/local\/lib\/python3.6\/dist-packages\/keras\/engine\/saving.py in _deserialize_model(f, custom_objects, compile) 219 return obj 220 --> 221 model_config = f['model_config'] 222 if model_config is None: 223 raise ValueError('No model found in config.') \/usr\/local\/lib\/python3.6\/dist-packages\/keras\/utils\/io_utils.py in __getitem__(self, attr) 300 else: 301 if self.read_only: --> 302 raise ValueError('Cannot create group in read only mode.') 303 val = H5Dict(self.data.create_group(attr)) 304 return val ValueError: Cannot create group in read only mode. ``` Can someone please help me understand this read-only mode? 
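The wrapper trick shown above is just a Python closure; a minimal sketch with plain numbers (the names are illustrative, not Keras API) shows the pattern:

```python
# The outer function captures extra data; the inner function keeps the
# (y_true, y_pred) signature that Keras expects for a loss.
def make_loss(extra):
    def loss(y_true, y_pred):
        return abs(y_true - y_pred) + extra
    return loss

loss_fn = make_loss(extra=0.5)
print(loss_fn(1.0, 0.25))  # 1.25
```

Because `extra` is captured at wrapper-call time, each compiled model can carry its own extra tensor or constant without changing the loss signature.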
How do I load this model?", "response":"I had a similar issue and solved it this way: store the graph\\architecture in JSON format and the weights in h5 format ``` import json # let's assume `model` is the main model model_json = model.to_json() with open(\"model_in_json.json\", \"w\") as json_file: json.dump(model_json, json_file) model.save_weights(\"model_weights.h5\") ``` then you need to load the model first to create the graph\\architecture and load the weights into the model ``` from keras.models import load_model from keras.models import model_from_json import json with open('model_in_json.json','r') as f: model_json = json.load(f) model = model_from_json(model_json) model.load_weights('model_weights.h5') ```", "best_answers_score":0.8, "library_name":"tensorflow", "question_url":"https:\/\/stackoverflow.com\/questions\/53212672\/read-only-mode-in-keras", "best_answers_votes":15, "question_length":1298, "response_length":643 }, { "question":"TensorFlow: AttributeError: 'Tensor' object has no attribute 'shape' I have the following code which uses TensorFlow. After I reshape a list, it says AttributeError: 'Tensor' object has no attribute 'shape' when I try to print its shape. ``` # Get the shape of the training data.
print \"train_data.shape: \" + str(train_data.shape) train_data = tf.reshape(train_data, [400, 1]) print \"train_data.shape: \" + str(train_data.shape) train_size,num_features = train_data.shape ``` Output: train_data.shape: (400,) Traceback (most recent call last): File \"\", line 1, in File \"\/home\/shehab\/Downloads\/tools\/python\/pycharm-edu-2.0.4\/helpers\/pydev\/pydev_import_hook.py\", line 21, in do_import module = self._system_import(name, *args, **kwargs) File \"\/home\/shehab\/Dropbox\/py-projects\/try-tf\/logistic_regression.py\", line 77, in print \"train_data.shape: \" + str(train_data.shape) AttributeError: 'Tensor' object has no attribute 'shape' Could anyone please tell me what I am missing?", "response":"UPDATE: Since TensorFlow 1.0, tf.Tensor now has a tf.Tensor.shape property, which returns the same value as tf.Tensor.get_shape(). Indeed, in versions prior to TensorFlow 1.0 tf.Tensor doesn't have a .shape property. You should use the Tensor.get_shape() method instead: ``` train_data = tf.reshape(train_data, [400, 1]) print \"train_data.shape: \" + str(train_data.get_shape()) ``` Note that in general you might not be able to get the actual shape of the result of a TensorFlow operation. In some cases, the shape will be a computed value that depends on running the computation to find its value; and it may even vary from one run to the next (e.g. the shape of tf.unique()). In that case, the result of get_shape() for some dimensions may be None (or \"?\").", "best_answers_score":0.8, "library_name":"tensorflow", "question_url":"https:\/\/stackoverflow.com\/questions\/38666040\/tensorflow-attributeerror-tensor-object-has-no-attribute-shape", "best_answers_votes":30, "question_length":971, "response_length":759 }, { "question":"Tensorflow causes logging messages to double So I was playing around with Google's Tensorflow library they published yesterday and encountered an annoying bug that keeps biting me. 
What I did was set up the python logging functions as I usually do, and the result was that, if I import the tensorflow library, all messages in the console started doubling. Interestingly, this does not happen if you just use the logging.warn\/info\/..() function. An example of code that does not double the messages: ``` import tensorflow as tf import logging logging.warn('test') ``` An example of code that does double all messages: ``` import tensorflow as tf import logging logger = logging.getLogger('TEST') ch = logging.StreamHandler() logger.addHandler(ch) logger.warn('test') ``` Now, I'm a simple man. I like the functionality of logging, so I use it. The setup with the logger object and the adding of a StreamHandler is something I picked up looking at how other people did this, but it looks like it fits with how the thing was meant to be used. However, I do not have in-depth knowledge of the logging library, as it always just kind of worked. So, any help explaining why the doubling of the messages occurs will be most helpful. I am using Ubuntu 14.04.3 LTS with Python 2.7.6, but the error happens in all Python 2.7 versions I tried.
Followup-Followup: Starting with TensorFlow 1.14 exposes the logger directly: ```py import tensorflow as tf logger = tf.get_logger() ```", "best_answers_score":0.8, "library_name":"tensorflow", "question_url":"https:\/\/stackoverflow.com\/questions\/33662648\/tensorflow-causes-logging-messages-to-double", "best_answers_votes":29, "question_length":1335, "response_length":912 }, { "question":"How to iterate through tensors in custom loss function? I'm using keras with tensorflow backend. My goal is to query the batchsize of the current batch in a custom loss function. This is needed to compute values of the custom loss functions which depend on the index of particular observations. I like to make this clearer given the minimum reproducible examples below. (BTW: Of course I could use the batch size defined for the training procedure and plugin it's value when defining the custom loss function, but there are some reasons why this can vary, especially if epochsize % batchsize (epochsize modulo batchsize) is unequal zero, then the last batch of an epoch has different size. I didn't found a suitable approach in stackoverflow, especially e. g. Tensor indexing in custom loss function and Tensorflow custom loss function in Keras - loop over tensor and Looping over a tensor because obviously the shape of any tensor can't be inferred when building the graph which is the case for a loss function - shape inference is only possible when evaluating given the data, which is only possible given the graph. Hence I need to tell the custom loss function to do something with particular elements along a certain dimension without knowing the length of the dimension. 
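The `logger.propagate` fix from the logging answer above can be written as a self-contained stdlib snippet:

```python
import logging

logger = logging.getLogger('TEST')
logger.addHandler(logging.StreamHandler())

# Without this, records also bubble up to handlers installed higher in
# the logger hierarchy (e.g. by a library), which doubles the output.
logger.propagate = False

logger.warning('test')  # emitted once, by our own StreamHandler
```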
(this is the same in all examples) ``` from keras.models import Sequential from keras.layers import Dense, Activation # Generate dummy data import numpy as np data = np.random.random((1000, 100)) labels = np.random.randint(2, size=(1000, 1)) model = Sequential() model.add(Dense(32, activation='relu', input_dim=100)) model.add(Dense(1, activation='sigmoid')) ``` example 1: nothing special without issue, no custom loss ``` model.compile(optimizer='rmsprop', loss='binary_crossentropy', metrics=['accuracy']) # Train the model, iterating on the data in batches of 32 samples model.fit(data, labels, epochs=10, batch_size=32) ``` (Output omitted, this runs perfectly fine) example 2: nothing special, with a fairly simple custom loss ``` def custom_loss(yTrue, yPred): loss = np.abs(yTrue-yPred) return loss model.compile(optimizer='rmsprop', loss=custom_loss, metrics=['accuracy']) # Train the model, iterating on the data in batches of 32 samples model.fit(data, labels, epochs=10, batch_size=32) ``` (Output omitted, this runs perfectly fine) example 3: the issue ``` def custom_loss(yTrue, yPred): print(yPred) # Output: Tensor(\"dense_2\/Sigmoid:0\", shape=(?, 1), dtype=float32) n = yPred.shape[0] for i in range(n): # TypeError: __index__ returned non-int (type NoneType) loss = np.abs(yTrue[i]-yPred[int(i\/2)]) return loss model.compile(optimizer='rmsprop', loss=custom_loss, metrics=['accuracy']) # Train the model, iterating on the data in batches of 32 samples model.fit(data, labels, epochs=10, batch_size=32) ``` Of course the tensor has no shape info yet, which can't be inferred when building the graph, only at training time. Hence for i in range(n) raises an error. Is there any way to perform this? The traceback of the output: ------- BTW here's my true custom loss function in case of any questions. I skipped it above for clarity and simplicity.
``` def neg_log_likelihood(yTrue,yPred): yStatus = yTrue[:,0] yTime = yTrue[:,1] n = yTrue.shape[0] for i in range(n): s1 = K.greater_equal(yTime, yTime[i]) s2 = K.exp(yPred[s1]) s3 = K.sum(s2) logsum = K.log(s3) loss = K.sum(yStatus[i] * yPred[i] - logsum) return loss ``` Here's an image of the partial negative log-likelihood of the Cox proportional hazards model. This is to clarify a question in the comments to avoid confusion. I don't think it is necessary to understand this in detail to answer the question.", "response":"As usual, don't loop. There are severe performance drawbacks and also bugs. Use only backend functions unless totally unavoidable (usually it's not unavoidable). Solution for example 3: So, there is a very weird thing there... Do you really want to simply ignore half of your model's predictions? (Example 3) Assuming this is true, just duplicate your tensor in the last dimension, flatten and discard half of it. You have the exact effect you want. ``` def custom_loss(true, pred): n = K.shape(pred)[0:1] pred = K.concatenate([pred]*2, axis=-1) #duplicate in the last axis pred = K.flatten(pred) #flatten pred = K.slice(pred, #take only half (= n samples) K.constant([0], dtype=\"int32\"), n) return K.abs(true - pred) ``` Solution for your loss function: If you have sorted times from greater to lower, just do a cumulative sum. Warning: If you have one time per sample, you cannot train with mini-batches!!! batch_size = len(labels) It makes sense to have time in an additional dimension (many times per sample), as is done in recurrent and 1D conv networks.
Anyway, considering your example as expressed, that is shape (samples_equal_times,) for yTime: ``` def neg_log_likelihood(yTrue,yPred): yStatus = yTrue[:,0] yTime = yTrue[:,1] n = K.shape(yTrue)[0] #sort the times and everything else from greater to lower: #obs, you can have the data sorted already and avoid doing it here for performance #important, yTime will be sorted in the last dimension, make sure its (None,) in this case # or that it's (None, time_length) in the case of many times per sample sortedTime, sortedIndices = tf.math.top_k(yTime, n, True) sortedStatus = K.gather(yStatus, sortedIndices) sortedPreds = K.gather(yPred, sortedIndices) #do the calculations exp = K.exp(sortedPreds) sums = K.cumsum(exp) #this will have the sum for j >= i in the loop logsums = K.log(sums) return K.sum(sortedStatus * sortedPreds - logsums) ```", "best_answers_score":0.8, "library_name":"tensorflow", "question_url":"https:\/\/stackoverflow.com\/questions\/51405517\/how-to-iterate-through-tensors-in-custom-loss-function", "best_answers_votes":5, "question_length":3659, "response_length":1904 }, { "question":"Uninstalling TensorFlow from Anaconda environment Does anyone know how to uninstall TensorFlow from Anaconda environment? I want to upgrade the TensorFlow to the latest version which is ver.1.4, but not working well. So I want to uninstall it and then install the newest version.", "response":"You can remove a package with the conda remove command. So for TensorFlow this would be conda remove tensorflow.", "best_answers_score":0.8, "library_name":"tensorflow", "question_url":"https:\/\/stackoverflow.com\/questions\/48208128\/uninstalling-tensorflow-from-anaconda-environment", "best_answers_votes":37, "question_length":279, "response_length":112 }, { "question":"What is right batch normalization function in Tensorflow? 
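The cumulative-sum step in the Cox-loss answer above can be checked numerically with NumPy (toy predictions, assuming times are already sorted from greater to lower):

```python
import numpy as np

preds = np.array([0.2, -0.1, 0.4])  # predictions ordered by descending time
exp = np.exp(preds)

# cumsum at position i = sum of exp(pred_j) over all j with t_j >= t_i,
# which is exactly the inner sum of the partial log-likelihood.
sums = np.cumsum(exp)

# Same result as the naive per-sample loop:
naive = np.array([exp[: i + 1].sum() for i in range(len(exp))])
print(np.allclose(sums, naive))  # True
```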
In tensorflow 1.4, I found two functions that do batch normalization and they look the same: tf.layers.batch_normalization (link) tf.contrib.layers.batch_norm (link) Which function should I use? Which one is more stable?", "response":"Just to add to the list, there're several more ways to do batch-norm in tensorflow: tf.nn.batch_normalization is a low-level op. The caller is responsible for handling the mean and variance tensors themselves. tf.nn.fused_batch_norm is another low-level op, similar to the previous one. The difference is that it's optimized for 4D input tensors, which is the usual case in convolutional neural networks. tf.nn.batch_normalization accepts tensors of any rank greater than 1. tf.layers.batch_normalization is a high-level wrapper over the previous ops. The biggest difference is that it takes care of creating and managing the running mean and variance tensors, and calls a fast fused op when possible. Usually, this should be the default choice for you. tf.contrib.layers.batch_norm is the early implementation of batch norm, before it graduated to the core API (i.e., tf.layers). Its use is not recommended because it may be dropped in future releases. tf.nn.batch_norm_with_global_normalization is another deprecated op. Currently it delegates the call to tf.nn.batch_normalization, but is likely to be dropped in the future. Finally, there's also the Keras layer keras.layers.BatchNormalization, which in the case of the tensorflow backend invokes tf.nn.batch_normalization.", "best_answers_score":0.8, "library_name":"tensorflow", "question_url":"https:\/\/stackoverflow.com\/questions\/48001759\/what-is-right-batch-normalization-function-in-tensorflow", "best_answers_votes":59, "question_length":274, "response_length":1265 }, { "question":"How to make Keras use Tensorflow backend in Anaconda? I have installed tensorflow-gpu in my Anaconda environment. They both work well. Now I am trying to install Keras with Tensorflow backend.
According to the instructions I just run: ``` pip install keras ``` But it doesn't install keras, so I tried: ``` conda install -c conda-forge keras=2.0.2 ``` Now I am able to import keras in python. But the problem is, it always uses the Theano backend. I am trying to change this, but don't know how to do it. I also tried editing the file ~\/.keras, but actually the default backend was tensorflow already. Please help.. Thank you so much!", "response":"This happens because the keras conda-forge package puts a file in ${CONDA_PREFIX}\/etc\/conda\/activate.d\/keras_activate.sh, which sets the environment variable KERAS_BACKEND ``` (root) [root@starlabs ~]# cat $CONDA_PREFIX\/etc\/conda\/activate.d\/keras_activate.sh #!\/bin\/bash if [ \"$(uname)\" == \"Darwin\" ] then # for Mac OSX export KERAS_BACKEND=tensorflow elif [ \"$(uname)\" == \"Linux\" ] then # for Linux export KERAS_BACKEND=theano fi ``` As you can see from the file, on Linux it sets the value to 'theano', and according to the official docs: the environment variable KERAS_BACKEND will override what is defined in your config file To work around this, you can either edit this file and change 'theano' to 'tensorflow' (which would probably get overwritten on reinstall or on changing environments) or do the following: ``` export KERAS_BACKEND=tensorflow python \/path\/to\/python\/program.py ```", "best_answers_score":0.8, "library_name":"tensorflow", "question_url":"https:\/\/stackoverflow.com\/questions\/43327464\/how-to-make-keras-use-tensorflow-backend-in-anaconda", "best_answers_votes":47, "question_length":628, "response_length":892 }, { "question":"How to create mask images from COCO dataset? So I have been using this code. I am trying to generate the raw mask of the images from the COCO dataset.
``` dataDir='G:' dataType='train2014' annFile='{}\/annotations\/instances_{}.json'.format(dataDir,dataType) coco=COCO(annFile) annFile = '{}\/annotations\/person_keypoints_{}.json'.format(dataDir,dataType) coco_kps=COCO(annFile) catIds = coco.getCatIds(catNms=['person']) imgIds = coco.getImgIds(catIds=catIds ); imgIds = coco.getImgIds(imgIds = imgIds[0]) img = coco.loadImgs(imgIds[np.random.randint(0,len(imgIds))])[0] I = io.imread('G:\/train2014\/'+img['file_name']) plt.imshow(I); plt.axis('off') annIds = coco.getAnnIds(imgIds=img['id'], catIds=catIds, iscrowd=None) anns = coco.loadAnns(annIds) coco.showAnns(anns) ``` But what I get is something like this But what I want is something like this How can I get the raw mask for each image?", "response":"The complete code wasn't in the answer so I post it below. Please install pycocotools first. ``` pip install pycocotools ``` Import the required modules. I'm assuming you're using a jupyter notebook. ```py from pycocotools.coco import COCO import os from PIL import Image import numpy as np from matplotlib import pyplot as plt %matplotlib inline ``` Load the annotations for the coco dataset. Here, specify image 74. ```py coco = COCO('..\/datasets\/coco\/annotations\/instances_train2017.json') img_dir = '..\/datasets\/coco\/train2017' image_id = 74 img = coco.imgs[image_id] # loading annotations into memory... # Done (t=12.70s) # creating index... # index created! ``` The information of the loaded img is as follows. ```py img # {'license': 2, # 'file_name': '000000000074.jpg', # 'coco_url': # 'http:\/\/images.cocodataset.org\/train2017\/000000000074.jpg', # 'height': 426, # 'width': 640, # 'date_captured': '2013-11-15 03:08:44', # 'flickr_url': # 'http:\/\/farm5.staticflickr.com\/4087\/5078192399_aaefdb5074_z.jpg# ', # 'id': 74} ``` Display the image as follows.
```py image = np.array(Image.open(os.path.join(img_dir, img['file_name']))) plt.imshow(image, interpolation='nearest') plt.show() ``` If you want to see the overlay result: ```py plt.imshow(image) cat_ids = coco.getCatIds() anns_ids = coco.getAnnIds(imgIds=img['id'], catIds=cat_ids, iscrowd=None) anns = coco.loadAnns(anns_ids) coco.showAnns(anns) ``` If you just want to see the mask, as Farshid Rayhan replied, do the following (note the loop starts at 1 so that anns[0] is not added twice): ```py mask = coco.annToMask(anns[0]) for i in range(1, len(anns)): mask += coco.annToMask(anns[i]) plt.imshow(mask) ```", "best_answers_score":0.8, "library_name":"tensorflow", "question_url":"https:\/\/stackoverflow.com\/questions\/50805634\/how-to-create-mask-images-from-coco-dataset", "best_answers_votes":19, "question_length":894, "response_length":1614 }, { "question":"what does arg_scope actually do? I am a beginner in neural nets and TensorFlow, and I am trying to understand the role of arg_scope. It seems to me that it is a way to put together a dictionary of \"things you want to do\" to a certain layer with certain variables. Please correct me if I am wrong. How would you explain exactly what it is for, to a beginner?", "response":"When defining convolution layers, you may always use the same padding type and the same initializer, and maybe even the same convolution size. For your pooling, maybe you are also always using the same 2x2 pooling size. And so on. arg_scope is a way to avoid providing the same arguments over and over again to the same layer types.
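Conceptually, it behaves like applying functools.partial to a set of layer functions for the duration of a with block. A plain-Python sketch of that idea (illustrative only, not TF's actual implementation; the conv2d here is a hypothetical stand-in, not the real tf.contrib.layers op):

```python
from functools import partial

# Hypothetical stand-in layer function (NOT the real conv2d):
# it just records the keyword arguments it ends up called with.
def conv2d(inputs, num_outputs, kernel_size, padding='VALID', scope=None):
    return {'padding': padding, 'scope': scope}

# "Entering an arg_scope" is roughly like rebinding the layer to a
# partial application that carries the shared defaults.
scoped_conv2d = partial(conv2d, padding='SAME')

# The scope's default is applied when the call site omits the argument...
assert scoped_conv2d('x', 64, [11, 11])['padding'] == 'SAME'
# ...but an explicit argument at the call site still wins, as in arg_scope.
assert scoped_conv2d('x', 64, [11, 11], padding='VALID')['padding'] == 'VALID'
```

The real arg_scope additionally restores the previous defaults when the with block exits, which partial alone does not model.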
Examples from the source documentation: Example of how to use tf.contrib.framework.arg_scope: ``` from third_party.tensorflow.contrib.layers.python import layers arg_scope = tf.contrib.framework.arg_scope with arg_scope([layers.conv2d], padding='SAME', initializer=layers.variance_scaling_initializer(), regularizer=layers.l2_regularizer(0.05)): net = layers.conv2d(inputs, 64, [11, 11], 4, padding='VALID', scope='conv1') net = layers.conv2d(net, 256, [5, 5], scope='conv2') ``` The first call to conv2d will behave as follows: ``` layers.conv2d(inputs, 64, [11, 11], 4, padding='VALID', initializer=layers.variance_scaling_initializer(), regularizer=layers.l2_regularizer(0.05), scope='conv1') ``` The second call to conv2d will also use the arg_scope's default for padding: ``` layers.conv2d(inputs, 256, [5, 5], padding='SAME', initializer=layers.variance_scaling_initializer(), regularizer=layers.l2_regularizer(0.05), scope='conv2') ``` Example of how to reuse an arg_scope: ``` with arg_scope([layers.conv2d], padding='SAME', initializer=layers.variance_scaling_initializer(), regularizer=layers.l2_regularizer(0.05)) as sc: net = layers.conv2d(net, 256, [5, 5], scope='conv1') .... with arg_scope(sc): net = layers.conv2d(net, 256, [5, 5], scope='conv2') ```", "best_answers_score":0.8, "library_name":"tensorflow", "question_url":"https:\/\/stackoverflow.com\/questions\/45226950\/what-does-arg-scope-actually-do", "best_answers_votes":42, "question_length":357, "response_length":1608 }, { "question":"Distributed tensorflow: the difference between In-graph replication and Between-graph replication I got confused about the two concepts: In-graph replication and Between-graph replication when reading the Replicated training in tensorflow's official How-to. It's said in above link that In-graph replication. In this approach, the client builds a single tf.Graph that contains one set of parameters (in tf.Variable nodes pinned to \/job:ps); ... 
Does this mean there are multiple tf.Graphs in the Between-graph replication approach? If yes, where is the corresponding code in the provided examples? While there is already a Between-graph replication example in the above link, could anyone provide an In-graph replication implementation (pseudocode is fine) and highlight its main differences from Between-graph replication? Thanks in advance! Edit_1: more questions Thanks a lot for your detailed explanations and gist code @mrry @YaroslavBulatov ! After reading your responses, I have the following two questions: There is the following statement in Replicated training: Between-graph replication. In this approach, there is a separate client for each \/job:worker task, typically in the same process as the worker task. Each client builds a similar graph containing the parameters (pinned to \/job:ps as before using tf.train.replica_device_setter() to map them deterministically to the same tasks); and a single copy of the compute-intensive part of the model, pinned to the local task in \/job:worker. I have two sub-questions related to the words in bold above. (A) Why do we say each client builds a similar graph, but not the same graph? I wonder whether the graph built in each client in the example of Replicated training should be the same, because the graph construction code below is shared by all workers: # Build model... loss = ... global_step = tf.Variable(0) (B) Shouldn't it be multiple copies of the compute-intensive part of the model, since we have multiple workers? Does the example in Replicated training support training on multiple machines, each of which has multiple GPUs? If not, can we use simultaneously both In-graph replication to support training on multiple GPUs on each machine and Between-graph replication for cross-machine training?
I ask this question because @mrry indicated that In-graph replication is essentially the same as the approach used in the CIFAR-10 example model for multiple GPUs.", "response":"First of all, for some historical context, \"in-graph replication\" is the first approach that we tried in TensorFlow, and it did not achieve the performance that many users required, so the more complicated \"between-graph\" approach is the current recommended way to perform distributed training. Higher-level libraries such as tf.learn use the \"between-graph\" approach for distributed training. To answer your specific questions: Does this mean there are multiple tf.Graphs in the between-graph replication approach? If yes, where is the corresponding code in the provided examples? Yes. The typical between-graph replication setup will use a separate TensorFlow process for each worker replica, and each of these will build a separate tf.Graph for the model. Usually each process uses the global default graph (accessible through tf.get_default_graph()) and it is not created explicitly. (In principle, you could use a single TensorFlow process with the same tf.Graph and multiple tf.Session objects that share the same underlying graph, as long as you configured the tf.ConfigProto.device_filters option for each session differently, but this is an uncommon setup.) While there is already a between-graph replication example in the above link, could anyone provide an in-graph replication implementation (pseudocode is fine) and highlight its main differences from between-graph replication? For historical reasons, there are not many examples of in-graph replication (Yaroslav's gist is one exception). A program using in-graph replication will typically include a loop that creates the same graph structure for each worker (e.g.
multiple GPUs). The CIFAR-10 example model for multiple GPUs is an example of this pattern (see the loop over GPU devices here). (In my opinion, the inconsistency between how multiple workers and multiple devices in a single worker are treated is unfortunate. In-graph replication is simpler to understand than between-graph replication, because it doesn't rely on implicit sharing between the replicas. Higher-level libraries, such as tf.learn and TF-Slim, hide some of these issues, and offer hope that we can offer a better replication scheme in the future.) Why do we say each client builds a similar graph, but not the same graph? Because they aren't required to be identical (and there is no integrity check that enforces this). In particular, each worker might create a graph with different explicit device assignments (\"\/job:worker\/task:0\", \"\/job:worker\/task:1\", etc.). The chief worker might create additional operations that are not created on (or used by) the non-chief workers. However, in most cases, the graphs are logically (i.e. modulo device assignments) the same. Shouldn't it be multiple copies of the compute-intensive part of the model, since we have multiple workers? Typically, each worker has a separate graph that contains a single copy of the compute-intensive part of the model. The graph for worker i does not contain the nodes for worker j (assuming i \u2260 j). (An exception would be the case where you're using between-graph replication for distributed training, and in-graph replication for using multiple GPUs in each worker. In that case, the graph for a worker would typically contain N copies of the compute-intensive part of the graph, where N is the number of GPUs in that worker.) Does the example in Replicated training support training on multiple machines, each of which has multiple GPUs? The example code only covers training on multiple machines, and says nothing about how to train on multiple GPUs in each machine. 
However, the techniques compose easily. In this part of the example: ``` # Build model... loss = ... ``` ...you could add a loop over the GPUs in the local machine, to achieve distributed training across multiple workers, each with multiple GPUs.", "best_answers_score":0.8, "library_name":"tensorflow", "question_url":"https:\/\/stackoverflow.com\/questions\/41600321\/distributed-tensorflow-the-difference-between-in-graph-replication-and-between", "best_answers_votes":40, "question_length":2400, "response_length":4011 }, { "question":"How to extract bias weights in Keras sequential model? [duplicate] This question already has answers here: How can I get biases from a trained model in Keras? (3 answers) Closed 8 years ago. I'm running a simple feed-forward network using Keras. Having just one hidden layer, I would like to make some inferences regarding the relevance of each input to each output, and I would like to extract the weights. This is the model: ``` def build_model(input_dim, output_dim): n_output_layer_1 = 150 n_output = output_dim model = Sequential() model.add(Dense(n_output_layer_1, input_dim=input_dim, activation='relu')) model.add(Dropout(0.25)) model.add(Dense(n_output)) ``` To extract the weights I wrote: ``` for layer in model.layers: weights = layer.get_weights() weights = np.array(weights[0]) #this is hidden to output first = model.layers[0].get_weights() #input to hidden first = np.array(first[0]) ``` Unfortunately I don't get the bias columns in the matrices, which I know Keras automatically puts in them. Do you know how to retrieve the bias weights? Thank you in advance for your help!", "response":"get_weights() for a Dense layer returns a list of two elements: the first element contains the weights, and the second element contains the biases.
So you can simply do: ``` weights = model.layers[0].get_weights()[0] biases = model.layers[0].get_weights()[1] ``` Note that weights and biases are already numpy arrays.", "best_answers_score":0.8, "library_name":"tensorflow", "question_url":"https:\/\/stackoverflow.com\/questions\/42411891\/how-to-extract-bias-weights-in-keras-sequential-model", "best_answers_votes":39, "question_length":1092, "response_length":317 }, { "question":"How to accumulate gradients in tensorflow? I have a question similar to this one. Because I have limited resources and I work with a deep model (VGG-16) - used to train a triplet network - I want to accumulate gradients over 128 batches of one training example each, and then propagate the error and update the weights. It's not clear to me how to do this. I work with tensorflow but any implementation\/pseudocode is welcome.", "response":"Let's walk through the code proposed in one of the answers you linked to: ``` ## Optimizer definition - nothing different from any classical example opt = tf.train.AdamOptimizer() ## Retrieve all trainable variables you defined in your graph tvs = tf.trainable_variables() ## Creation of a list of variables with the same shape as the trainable ones # initialized with 0s accum_vars = [tf.Variable(tf.zeros_like(tv.initialized_value()), trainable=False) for tv in tvs] zero_ops = [tv.assign(tf.zeros_like(tv)) for tv in accum_vars] ## Calls the compute_gradients function of the optimizer to obtain
the list of gradients gvs = opt.compute_gradients(rmse, tvs) ## Adds to each element from the list you initialized earlier with zeros its gradient (works because accum_vars and gvs are in the same order) accum_ops = [accum_vars[i].assign_add(gv[0]) for i, gv in enumerate(gvs)] ## Define the training step (part with variable value update) train_step = opt.apply_gradients([(accum_vars[i], gv[1]) for i, gv in enumerate(gvs)]) ``` This first part basically adds new variables and ops to your graph which will allow you to Accumulate the gradient with ops accum_ops in (the list of) variables accum_vars Update the model weights with ops train_step Then, to use it when training, you have to follow these steps (still from the answer you linked): ``` ## The while loop for training while ...: # Run the zero_ops to initialize it sess.run(zero_ops) # Accumulate the gradients 'n_minibatches' times in accum_vars using accum_ops for i in xrange(n_minibatches): sess.run(accum_ops, feed_dict={X: Xs[i], y: ys[i]}) # Run the train_step ops to update the weights based on your accumulated gradients sess.run(train_step) ```
For processing, a static length makes them easier to work with.", "response":"Yes. There is. Provided you do not need to change the rank of the tensor, it's very simple. tf.pad() accepts regular python lists with tensors. The format of the padding is a list of pairs of how much to pad on each side of that dimension. e.g. ``` t = tf.constant([[1, 2], [3, 4]]) paddings = [[0, 0], [0, 4-tf.shape(t)[1]]] # pad the second dimension out to width 4 out = tf.pad(t, paddings, 'CONSTANT', constant_values=-1) sess.run(out) # gives: # array([[ 1, 2, -1, -1], # [ 3, 4, -1, -1]], dtype=int32) ``` If you want to generalise this to a useful function, you could do something like: ```py def pad_up_to(t, max_in_dims, constant_values): diff = max_in_dims - tf.shape(t) paddings = tf.pad(diff[:, None], [[0, 0], [1, 0]]) return tf.pad(t, paddings, 'CONSTANT', constant_values=constant_values) # (note: see edits for the solution referred to by other answers on this question) ``` where max_in_dims is essentially the desired shape of the output. Note: this function will fail if you provide a shape that is strictly smaller than t in any dimension. You can use it like: ``` t = tf.constant([[1, 2], [3, 4]]) # shape = [2, 2] t_padded = pad_up_to(t, [2, 4], -1) # shape = [2, 4], padded with -1s ``` or ``` t = tf.placeholder(tf.float32, [None, None]) # shape = [?, ?]
t_padded = pad_up_to(t, [5,5], -1) # shape = [5, 5], padded with -1s t_np = np.random.uniform(0, 1, [3,4]) # shape = [3,4], no padding t_padded_out = sess.run(t_padded, {t: t_np}) t_np2 = np.random.uniform(0, 1, [2,1]) # shape = [2,1], no padding t_padded_out2 = sess.run(t_padded, {t: t_np2}) ``` Although the dimension sizes are calculated dynamically, the number of dimensions is not, so make sure that max_in_dims has the same number of elements as t.shape.", "best_answers_score":0.8, "library_name":"tensorflow", "question_url":"https:\/\/stackoverflow.com\/questions\/42334646\/tensorflow-pad-unknown-size-tensor-to-a-specific-size", "best_answers_votes":32, "question_length":679, "response_length":1698 }, { "question":"Difference between model(x) and model.predict(x) in Keras? What is the difference between using model(x) and model.predict(x) for predicting the outcome of a model in Keras?", "response":"Keras with tensorflow backend was using underlying tensorflow objects, but mostly was providing high level outputs which could be understood outside the tensorflow environment (as an example it could output numpy arrays or python lists). Today given a model in tensorflow 2.0 (built using the keras library), ``` out_np = model.predict(x) ``` provides a numpy array which can, as an example, be printed with print(out_np). 
On the other hand, ``` out_tf = model(x) ``` results in a tensorflow object, which can be converted to a numpy array with .numpy(). The two results are equivalent; for example, the following is True: ``` out_np.max() == out_tf.numpy().max() ``` The format may be different, but the meaning of model(x) and model.predict(x) is the same: given an input x, it is the value of the output nodes of a network characterized by its structure, weights and biases.", "best_answers_score":0.8, "library_name":"tensorflow", "question_url":"https:\/\/stackoverflow.com\/questions\/55308425\/difference-between-modelx-and-model-predictx-in-keras", "best_answers_votes":32, "question_length":173, "response_length":892 }, { "question":"Need To Compile Keras Model Before `model.evaluate()` I load a Keras model from .json and .hdf5 files. When I call model.evaluate(), it returns an error: You must compile a model before training\/testing. Use `model.compile(optimizer, loss) Why do I need to compile to run evaluate()? To add, the model can be passed predict() with no problem.", "response":"Because evaluate will calculate the loss function and the metrics. You don't have any of them until you compile the model. They're parameters to the compile method: ``` model.compile(optimizer=..., loss=..., metrics=...) ``` On the other hand, predict doesn't evaluate any metric or loss, it just passes the input data through the model and gets its output. You need the \"loss\" for training too, so you can't train without compiling. And you can compile a model as many times as you want, and even change the parameters. The outputs and the loss function: The model's outputs depend on it being defined with weights. That is automatic and you can predict from any model, even without any training. Every model in Keras is already born with weights (either initialized by you or randomly initialized). You input something, the model calculates the output.
At the end of everything, this is all that matters. A good model has proper weights and outputs things correctly. But before getting to that end, your model needs to be trained. Now, the loss function takes the current output and compares it with the expected\/true result. It's a function supposed to be minimized. The less the loss, the closer your results are to the expected. This is the function from which the derivatives will be taken so the backpropagation algorithm can update the weights. The loss function is not useful for the final purpose of the model, but it's necessary for training. That's probably why you can have models without loss functions (and consequently, there is no way to evaluate them).", "best_answers_score":0.8, "library_name":"tensorflow", "question_url":"https:\/\/stackoverflow.com\/questions\/46127625\/need-to-compile-keras-model-before-model-evaluate", "best_answers_votes":33, "question_length":342, "response_length":1569 }, { "question":"How can I change the shape of a variable in TensorFlow? TensorFlow tutorial says that at creation time we need to specify the shape of tensors. That shape automatically becomes the shape of the tensor. It also says that TensorFlow provides advanced mechanisms to reshape variables. How can I do that? Any code example?", "response":"The tf.Variable class is the recommended way to create variables, but it restricts your ability to change the shape of the variable once it has been created. If you need to change the shape of a variable, you can do the following (e.g. for a 32-bit floating point tensor): ``` var = tf.Variable(tf.placeholder(tf.float32)) # ... new_value = ... # Tensor or numpy array. change_shape_op = tf.assign(var, new_value, validate_shape=False) # ... sess.run(change_shape_op) # Changes the shape of `var` to new_value's shape. ``` Note that this feature is not in the documented public API, so it is subject to change. 
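To make the validate_shape semantics concrete, here is a toy plain-Python\/NumPy sketch (ToyVariable is a hypothetical class, not TensorFlow code): assignment normally rejects a value with a different shape, and validate_shape=False bypasses that check, mirroring the tf.assign call above.

```python
import numpy as np

# Toy model of the shape check (hypothetical class, NOT TensorFlow):
# assign() validates the static shape unless validate_shape=False.
class ToyVariable:
    def __init__(self, value):
        self.value = np.asarray(value)

    def assign(self, new_value, validate_shape=True):
        new_value = np.asarray(new_value)
        if validate_shape and new_value.shape != self.value.shape:
            raise ValueError("shapes do not match")
        self.value = new_value
        return self.value

v = ToyVariable(np.zeros((2, 2)))
try:
    v.assign(np.zeros((3, 5)))                        # rejected: shape is pinned
except ValueError:
    pass
v.assign(np.zeros((3, 5)), validate_shape=False)      # accepted: shape replaced
print(v.value.shape)                                  # (3, 5)
```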
If you do find yourself needing to use this feature, let us know, and we can investigate a way to support it moving forward.", "best_answers_score":0.8, "library_name":"tensorflow", "question_url":"https:\/\/stackoverflow.com\/questions\/33654754\/how-can-i-change-the-shape-of-a-variable-in-tensorflow", "best_answers_votes":20, "question_length":318, "response_length":735 }, { "question":"Tensorflow: When should I use or not use `feed_dict`? I am kind of confused about why we use feed_dict. According to my friend, you commonly use feed_dict when you use a placeholder, and this is probably something bad for production. I have seen code like this, in which feed_dict is not involved: ``` for j in range(n_batches): X_batch, Y_batch = mnist.train.next_batch(batch_size) _, loss_batch = sess.run([optimizer, loss], {X: X_batch, Y:Y_batch}) ``` I have also seen code like this, in which feed_dict is involved: ``` for i in range(100): for x, y in data: # Session execute optimizer and fetch values of loss _, l = sess.run([optimizer, loss], feed_dict={X: x, Y:y}) total_loss += l ``` I understand that with feed_dict you are feeding in data, with X as the key as in a dictionary. But here I don't see any difference. So, what exactly is the difference and why do we need feed_dict?", "response":"In a tensorflow model you can define a placeholder such as x = tf.placeholder(tf.float32), then you will use x in your model. For example, I define a simple set of operations as: ``` x = tf.placeholder(tf.float32) y = x * 42 ``` Now when I ask tensorflow to compute y, it's clear that y depends on x. ``` with tf.Session() as sess: sess.run(y) ``` This will produce an error because I did not give it a value for x. In this case, because x is a placeholder, if it gets used in a computation you must pass it in via feed_dict. If you don't, it's an error. Let's fix that: ``` with tf.Session() as sess: sess.run(y, feed_dict={x: 2}) ``` The result this time will be 84. Great.
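The placeholder-must-be-fed mechanics can be sketched in a few lines of plain Python (an illustrative analogy of deferred evaluation; Placeholder and run here are made-up names, not TensorFlow APIs):

```python
# A "placeholder" holds no value of its own, so evaluating anything
# built on top of it fails unless a value is supplied via a feed dict.
class Placeholder:
    pass

def run(node, feed_dict=None):
    feed_dict = feed_dict or {}
    if isinstance(node, Placeholder):
        if node not in feed_dict:
            raise ValueError("placeholder was not fed")
        return feed_dict[node]
    if callable(node):
        return node(feed_dict)   # defer evaluation until run time
    return node                  # plain values evaluate to themselves

x = Placeholder()
y = lambda feed: run(x, feed) * 42   # y depends on x, like y = x * 42
print(run(y, {x: 2}))                # 84, matching the session example
```

Calling run(y) without a feed dict raises the error, just as sess.run(y) does when the placeholder is unfed.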
Now let's look at a trivial case where feed_dict is not needed: ``` x = tf.constant(2) y = x * 42 ``` Now there are no placeholders (x is a constant) and so nothing needs to be fed to the model. This works now: ``` with tf.Session() as sess: sess.run(y) ```", "best_answers_score":0.8, "library_name":"tensorflow", "question_url":"https:\/\/stackoverflow.com\/questions\/50497724\/tensorflow-when-should-i-use-or-not-use-feed-dict", "best_answers_votes":32, "question_length":895, "response_length":932 }, { "question":"Why is TensorFlow's `tf.data` package slowing down my code? I'm just learning to use TensorFlow's tf.data API, and I've found that it is slowing my code down a lot, measured in time per epoch. This is the opposite of what it's supposed to do, I thought. I wrote a simple linear regression program to test it out. Tl;Dr: With 100,000 training data, tf.data slows time per epoch down by about a factor of ten, if you're using full batch training. Worse if you use smaller batches. The opposite is true with 500 training data. My question: What is going on? Is my implementation flawed? Other sources I've read have tf.data improving speeds by about 30%. 
``` import tensorflow as tf import numpy as np import timeit import os os.environ['TF_CPP_MIN_LOG_LEVEL'] = '2' tf.logging.set_verbosity(tf.logging.ERROR) n_epochs = 10 input_dimensions_list = [10] def function_to_approximate(x): return np.dot(x, random_covector).astype(np.float32) + np.float32(.01) * np.random.randn(1,1).astype(np.float32) def regress_without_tfData(n_epochs, input_dimension, training_inputs, training_labels): tf.reset_default_graph() weights = tf.get_variable(\"weights\", initializer=np.random.randn(input_dimension, 1).astype(np.float32)) X = tf.placeholder(tf.float32, shape=(None, input_dimension), name='X') Y = tf.placeholder(tf.float32, shape=(None, 1), name='Y') prediction = tf.matmul(X,weights) loss = tf.reduce_mean(tf.square(tf.subtract(prediction, Y))) loss_op = tf.train.AdamOptimizer(.01).minimize(loss) init = tf.global_variables_initializer() with tf.Session() as sess: sess.run(init) for _ in range(n_epochs): sess.run(loss_op, feed_dict={X: training_inputs, Y:training_labels}) def regress_with_tfData(n_epochs, input_dimension, training_inputs, training_labels, batch_size): tf.reset_default_graph() weights = tf.get_variable(\"weights\", initializer=np.random.randn(input_dimension, 1).astype(np.float32)) X,Y = data_set.make_one_shot_iterator().get_next() prediction = tf.matmul(X, weights) loss = tf.reduce_mean(tf.square(tf.subtract(prediction, Y))) loss_op = tf.train.AdamOptimizer(.01).minimize(loss) init = tf.global_variables_initializer() with tf.Session() as sess: sess.run(init) while True: try: sess.run(loss_op) except tf.errors.OutOfRangeError: break for input_dimension in input_dimensions_list: for data_size in [500, 100000]: training_inputs = np.random.randn(data_size, input_dimension).astype(np.float32) random_covector = np.random.randint(-5, 5, size=(input_dimension, 1)) training_labels = function_to_approximate(training_inputs) print(\"Not using tf.data, with data size \" \"{}, input dimension {} and training with \" \"a full 
batch, it took an average of \" \"{} seconds to run {} epochs.\\n\". format( data_size, input_dimension, timeit.timeit( lambda: regress_without_tfData( n_epochs, input_dimension, training_inputs, training_labels ), number=3 ), n_epochs)) for input_dimension in input_dimensions_list: for data_size, batch_size in [(500, 50), (500, 500), (100000, 50), (100000, 100000)]: training_inputs = np.random.randn(data_size, input_dimension).astype(np.float32) random_covector = np.random.randint(-5, 5, size=(input_dimension, 1)) training_labels = function_to_approximate(training_inputs) data_set = tf.data.Dataset.from_tensor_slices((training_inputs, training_labels)) data_set = data_set.repeat(n_epochs) data_set = data_set.batch(batch_size) print(\"Using tf.data, with data size \" \"{}, and input dimension {}, and training with \" \"batch size {}, it took an average of {} seconds \" \"to run {} epochs.\\n\". format( data_size, input_dimension, batch_size, timeit.timeit( lambda: regress_with_tfData( n_epochs, input_dimension, training_inputs, training_labels, batch_size ), number=3 )\/3, n_epochs )) ``` This outputs for me: Not using tf.data, with data size 500, input dimension 10 and training with a full batch, it took an average of 0.20243382899980134 seconds to run 10 epochs. Not using tf.data, with data size 100000, input dimension 10 and training with a full batch, it took an average of 0.2431719040000644 seconds to run 10 epochs. Using tf.data, with data size 500, and input dimension 10, and training with batch size 50, it took an average of 0.09512088866661846 seconds to run 10 epochs. Using tf.data, with data size 500, and input dimension 10, and training with batch size 500, it took an average of 0.07286913600000844 seconds to run 10 epochs. Using tf.data, with data size 100000, and input dimension 10, and training with batch size 50, it took an average of 4.421892363666605 seconds to run 10 epochs. 
Using tf.data, with data size 100000, and input dimension 10, and training with batch size 100000, it took an average of 2.2555197536667038 seconds to run 10 epochs. Edit: Fixed an important issue that Fred Guth pointed out. It didn't much affect the results, though.", "response":"I wanted to test the dataset API, which seems to be really convenient for processing data. I did a lot of timing tests of this API on CPU, GPU and multi-GPU setups, for small and large NNs with different types of data. First, it seems to me that your code is ok. But I need to point out that your NN is just one simple layer. Now, the dataset API is not suitable for your type of NN but for NNs with a lot more complexity. Why? For several reasons that I explain below (found in my quest to understand the dataset API). Firstly, on the one hand the dataset API processes data batch by batch, whereas on the other hand data can be preprocessed in advance. Therefore, if it fits in your RAM, you can save time by preprocessing the data. Here your data are just too \"simple\". If you want to test what I am saying, try to find a really, really big dataset to process. Nevertheless, the dataset API can be tuned with prefetching of data. You can take a look at this tutorial, which explains really well why it is good to process data with prefetch. Secondly, in my exploration of the dataset API for multi-GPU training, I discovered that, as far as I know, the old preprocessing way is faster than the dataset API for small neural networks. You can verify that by creating a simple stackable RNN which takes a sequence as input. You can try different stack sizes (I have tested 1, 2, 10 and 20). You will see that, using the dataset API, on 1 GPU or on 4 GPUs, the time did not differ for small RNN stacks (1, 2 and 5). To summarize, the dataset API is suitable for neural networks whose data can't be preprocessed. Depending on your task, it may be more convenient to preprocess the data, for example if you want to tweak your NN in order to improve it.
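As a concrete illustration of the prefetching overlap mentioned above, here is a plain-Python sketch with a background thread (illustrative only; the sleeps stand in for preprocessing and training costs, and this is not the tf.data API):

```python
import queue
import threading
import time

# A background thread prepares the next batch while the main thread
# "trains" on the current one, so the two costs overlap instead of adding.
def train_with_prefetch(batches, buffer_size=2):
    q = queue.Queue(maxsize=buffer_size)

    def producer():
        for b in batches:
            time.sleep(0.001)      # pretend per-batch preprocessing cost
            q.put(b)
        q.put(None)                # sentinel: no more batches

    threading.Thread(target=producer, daemon=True).start()
    seen = []
    while True:
        b = q.get()
        if b is None:
            break
        time.sleep(0.001)          # pretend per-batch training cost
        seen.append(b)
    return seen

print(train_with_prefetch([1, 2, 3, 4]))   # [1, 2, 3, 4]
```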
I agree that the dataset API is really cool for batching and padding, and is also convenient for shuffling large amounts of data, but it's also not suitable for multi-GPU training.", "best_answers_score":0.8, "library_name":"tensorflow", "question_url":"https:\/\/stackoverflow.com\/questions\/51541610\/why-is-tensorflows-tf-data-package-slowing-down-my-code", "best_answers_votes":8, "question_length":4819, "response_length":1876 }, { "question":"TensorFlow Object Detection API Weird Behavior I was playing with TensorFlow's brand new Object Detection API and decided to train it on some other publicly available datasets. I happened to stumble upon this grocery dataset which consists of images of various brands of cigarette boxes on the supermarket shelf along with a text file which lists out the bounding boxes of each cigarette box in each image. 10 major brands have been labeled in the dataset and all other brands fall into the 11th \"miscellaneous\" category. I followed their tutorial and managed to train the model on this dataset. Due to limitations on processing power, I used only a third of the dataset and performed a 70:30 split for training and testing data. I used the faster_rcnn_resnet101 model. All parameters in my config file are the same as the default parameters provided by TF. After 16491 global steps, I tested the model on some images but I am not too happy with the results - it failed to detect the Camels on the top shelf, whereas it detects the product in other images. Why does it fail to detect the Marlboros in the top row? Another issue I had is that the model never detected any other label except for label 1. It doesn't detect a cropped instance of the product from the training data. It detects cigarette boxes with 99% confidence even in negative images! Can somebody help me with what is going wrong? What can I do to improve the accuracy? And why does it detect all products as belonging to category 1 even though I have mentioned that there are 11 classes in total?
Edit: Added my label map: ``` item { id: 1 name: '1' } item { id: 2 name: '2' } item { id: 3 name: '3' } item { id: 4 name: '4' } item { id: 5 name: '5' } item { id: 6 name: '6' } item { id: 7 name: '7' } item { id: 8 name: '8' } item { id: 9 name: '9' } item { id: 10 name: '10' } item { id: 11 name: '11' } ```", "response":"So I think I figured out what is going on. I did some analysis on the dataset and found out that it is skewed towards objects of category 1. This is the frequency distribution of each category from 1 to 11 (in 0-based indexing): ``` 0 10440 1 304 2 998 3 67 4 412 5 114 6 190 7 311 8 195 9 78 10 75 ``` I guess the model is hitting a local minimum where just labelling everything as category 1 is good enough. About the problem of not detecting some boxes: I tried training again, but this time I didn't differentiate between brands. Instead, I tried to teach the model what a cigarette box is. It still wasn't detecting all the boxes. Then I decided to crop the input image and provide that as input, just to see if the results would improve, and they did! It turns out that the dimensions of the input image were much larger than the 600 x 1024 that is accepted by the model. So, it was scaling down these images to 600 x 1024, which meant that the cigarette boxes were losing their details :) So, I decided to test the original model, which was trained on all classes, on cropped images, and it works like a charm :) This was the output of the model on the original image. This is the output of the model when I crop out the top left quarter and provide it as input. Thanks everyone who helped!
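The cropping workaround above can be sketched with numpy; the tiling helper and the 2 x 2 grid are illustrative assumptions, not part of the Object Detection API:

```python
import numpy as np

def split_into_tiles(image, rows, cols):
    """Split an H x W x C image into a rows x cols grid of crops."""
    h, w = image.shape[0] // rows, image.shape[1] // cols
    return [image[r * h:(r + 1) * h, c * w:(c + 1) * w]
            for r in range(rows) for c in range(cols)]

img = np.zeros((1200, 2048, 3), dtype=np.uint8)  # stand-in for a shelf photo
tiles = split_into_tiles(img, 2, 2)
print(len(tiles), tiles[0].shape)  # 4 (600, 1024, 3)
```

Each tile now fits the detector's input resolution without being scaled down; when merging results, detected box coordinates from a tile must be offset by that tile's origin in the full image.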
And congrats to the TensorFlow team for an amazing job on the API :) Now everybody can train object detection models!", "best_answers_score":0.8, "library_name":"tensorflow", "question_url":"https:\/\/stackoverflow.com\/questions\/45029977\/tensorflow-object-detection-api-weird-behavior", "best_answers_votes":16, "question_length":1858, "response_length":1408 }, { "question":"TensorFlow Inference I've been digging around on this for a while. I have found a ton of articles; but none really show just tensorflow inference as a plain inference. It's always \"use the serving engine\" or using a graph that is pre-coded\/defined. Here is the problem: I have a device which occasionally checks for updated models. It then needs to load that model and run input predictions through the model. In keras this was simple: build a model; train the model and then call model.predict(). In scikit-learn, same thing. I am able to grab a new model and load it; I can print out all of the weights; but how in the world do I run inference against it? Code to load model and print weights: ``` with tf.Session() as sess: new_saver = tf.train.import_meta_graph(MODEL_PATH + '.meta', clear_devices=True) new_saver.restore(sess, MODEL_PATH) for var in tf.trainable_variables(): print(sess.run(var)) ``` I printed out all of my collections and I have: ['queue_runners', 'variables', 'losses', 'summaries', 'train_op', 'cond_context', 'trainable_variables'] I tried using sess.run(train_op); however that just started kicking up a full training session; which is not what I want to do. I just want to run inference against a different set of inputs that I provide which are not TF Records. Just a little more detail: The device can use C++ or Python; as long as I can produce a .exe. I can set up a feed dict if I want to feed the system. I trained with TFRecords; but in production I'm not going to use TFRecords; it's a real\/near-real-time system. Thanks for any input.
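On the feeding side of the question above: at inference time, the TFRecord pipeline is replaced by reproducing the training-time preprocessing in plain numpy and handing the result to a feed dict. A minimal sketch (scaling to [0, 1] is an assumed convention; match whatever the training pipeline actually did):

```python
import numpy as np

def to_feed_batch(image_uint8):
    # Convert one H x W x C uint8 image to a 1 x H x W x C float32 batch,
    # the shape a typical image placeholder expects.
    arr = image_uint8.astype(np.float32) / 255.0
    return arr[np.newaxis, ...]

img = np.full((32, 32, 3), 255, dtype=np.uint8)  # stand-in for a decoded image
batch = to_feed_batch(img)
print(batch.shape, batch.dtype, batch.max())  # (1, 32, 32, 3) float32 1.0
```

The resulting array can then be passed as the value for the input placeholder in a feed dict.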
I am posting sample code to this repo: https:\/\/github.com\/drcrook1\/CIFAR10\/TensorFlow which does all the training and sample inference. Any hints are greatly appreciated! ------------EDITS----------------- I rebuilt the model to be as below: ``` def inference(images): ''' Portion of the compute graph that takes an input and converts it into a Y output ''' with tf.variable_scope('Conv1') as scope: C_1_1 = ld.cnn_layer(images, (5, 5, 3, 32), (1, 1, 1, 1), scope, name_postfix='1') C_1_2 = ld.cnn_layer(C_1_1, (5, 5, 32, 32), (1, 1, 1, 1), scope, name_postfix='2') P_1 = ld.pool_layer(C_1_2, (1, 2, 2, 1), (1, 2, 2, 1), scope) with tf.variable_scope('Dense1') as scope: P_1 = tf.reshape(C_1_2, (CONSTANTS.BATCH_SIZE, -1)) dim = P_1.get_shape()[1].value D_1 = ld.mlp_layer(P_1, dim, NUM_DENSE_NEURONS, scope, act_func=tf.nn.relu) with tf.variable_scope('Dense2') as scope: D_2 = ld.mlp_layer(D_1, NUM_DENSE_NEURONS, CONSTANTS.NUM_CLASSES, scope) H = tf.nn.softmax(D_2, name='prediction') return H ``` notice I add the name 'prediction' to the TF operation so I can retrieve it later. When training I used the input pipeline for tfrecords and input queues. ``` GRAPH = tf.Graph() with GRAPH.as_default(): examples, labels = Inputs.read_inputs(CONSTANTS.RecordPaths, batch_size=CONSTANTS.BATCH_SIZE, img_shape=CONSTANTS.IMAGE_SHAPE, num_threads=CONSTANTS.INPUT_PIPELINE_THREADS) examples = tf.reshape(examples, [CONSTANTS.BATCH_SIZE, CONSTANTS.IMAGE_SHAPE[0], CONSTANTS.IMAGE_SHAPE[1], CONSTANTS.IMAGE_SHAPE[2]]) logits = Vgg3CIFAR10.inference(examples) loss = Vgg3CIFAR10.loss(logits, labels) OPTIMIZER = tf.train.AdamOptimizer(CONSTANTS.LEARNING_RATE) ``` I am attempting to use feed_dict on the loaded operation in the graph; however now it is just simply hanging.... 
``` MODEL_PATH = 'models\/' + CONSTANTS.MODEL_NAME + '.model' images = tf.placeholder(tf.float32, shape=(1, 32, 32, 3)) def run_inference(): '''Runs inference against a loaded model''' with tf.Session() as sess: #sess.run(tf.global_variables_initializer()) new_saver = tf.train.import_meta_graph(MODEL_PATH + '.meta', clear_devices=True) new_saver.restore(sess, MODEL_PATH) pred = tf.get_default_graph().get_operation_by_name('prediction') rand = np.random.rand(1, 32, 32, 3) print(rand) print(pred) print(sess.run(pred, feed_dict={images: rand})) print('done') run_inference() ``` I believe this is not working because the original network was trained using TFRecords. In the sample CIFAR data set the data is small; our real data set is huge, and it is my understanding that TFRecords are the default best practice for training a network. The feed_dict makes perfect sense from a productionizing perspective; we can spin up some threads and populate that thing from our input systems. So I guess I have a network that is trained, and I can get the predict operation; but how do I tell it to stop using the input queues and start using the feed_dict? Remember that from the production perspective I do not have access to whatever the scientists did to make it. They do their thing; and we stick it in production using whatever agreed-upon standard.
-------INPUT OPS-------- <tf.Operation 'input\/input_producer\/Const' type=Const>, <tf.Operation 'input\/input_producer\/Size' type=Const>, <tf.Operation 'input\/input_producer\/Greater\/y' type=Const>, <tf.Operation 'input\/input_producer\/Greater' type=Greater>, <tf.Operation 'input\/input_producer\/Assert\/Const' type=Const>, <tf.Operation 'input\/input_producer\/Assert\/Assert\/data_0' type=Const>, <tf.Operation 'input\/input_producer\/Assert\/Assert' type=Assert>, <tf.Operation 'input\/input_producer\/Identity' type=Identity>, <tf.Operation 'input\/input_producer\/RandomShuffle' type=RandomShuffle>, <tf.Operation 'input\/input_producer' type=FIFOQueueV2>, <tf.Operation 'input\/input_producer\/input_producer_EnqueueMany' type=QueueEnqueueManyV2>, <tf.Operation 'input\/input_producer\/input_producer_Close' type=QueueCloseV2>, <tf.Operation 'input\/input_producer\/input_producer_Close_1' type=QueueCloseV2>, <tf.Operation 'input\/input_producer\/input_producer_Size' type=QueueSizeV2>, <tf.Operation 'input\/input_producer\/Cast' type=Cast>, <tf.Operation 'input\/input_producer\/mul\/y' type=Const>, <tf.Operation 'input\/input_producer\/mul' type=Mul>, <tf.Operation 'input\/input_producer\/fraction_of_32_full\/tags' type=Const>, <tf.Operation 'input\/input_producer\/fraction_of_32_full' type=ScalarSummary>, <tf.Operation 'input\/TFRecordReaderV2' type=TFRecordReaderV2>, <tf.Operation 'input\/ReaderReadV2' type=ReaderReadV2> ------END INPUT OPS----- ----UPDATE 3---- I believe what I need to do is kill the input section of the graph trained with TFRecords and rewire the input of the first layer to a new input. It's kinda like performing surgery; but this is the only way I can find to do inference if I trained using TFRecords, as crazy as it sounds... Full Graph: Section to kill: So I think the question becomes: how does one kill the input section of the graph and replace it with a feed_dict? A follow-up to this would be: is this really the right way to do it? This seems bonkers.
----END UPDATE 3---- ---link to checkpoint files--- https:\/\/drcdata.blob.core.windows.net\/checkpoints\/CIFAR_10_VGG3_50neuron_1pool_1e-3lr_adam.model.zip?st=2017-05-01T21%3A56%3A00Z&se=2020-05-02T21%3A56%3A00Z&sp=rl&sv=2015-12-11&sr=b&sig=oBCGxlOusB4NOEKnSnD%2FTlRYa5NKNIwAX1IyuZXAr9o%3D --end link to checkpoint files--- -----UPDATE 4 ----- I gave in and just gave a shot at the 'normal' way of performing inference assuming I could have the scientists simply just pickle their models and we could grab the model pickle; unpack it and then run inference on it. So to test I tried the normal way assuming we already unpacked it...It doesn't work worth a beans either... ``` import tensorflow as tf import CONSTANTS import Vgg3CIFAR10 import numpy as np from scipy import misc import time MODEL_PATH = 'models\/' + CONSTANTS.MODEL_NAME + '.model' imgs_bsdir = 'C:\/data\/cifar_10\/train\/' images = tf.placeholder(tf.float32, shape=(1, 32, 32, 3)) logits = Vgg3CIFAR10.inference(images) def run_inference(): '''Runs inference against a loaded model''' with tf.Session() as sess: sess.run(tf.global_variables_initializer()) new_saver = tf.train.import_meta_graph(MODEL_PATH + '.meta')#, import_scope='1', input_map={'input:0': images}) new_saver.restore(sess, MODEL_PATH) pred = tf.get_default_graph().get_operation_by_name('prediction') enq = sess.graph.get_operation_by_name(enqueue_op) #tf.train.start_queue_runners(sess) print(rand) print(pred) print(enq) for i in range(1, 25): img = misc.imread(imgs_bsdir + str(i) + '.png').astype(np.float32) \/ 255.0 img = img.reshape(1, 32, 32, 3) print(sess.run(logits, feed_dict={images : img})) time.sleep(3) print('done') run_inference() ``` Tensorflow ends up building a new graph with the inference function from the loaded model; then it appends all the other stuff from the other graph to the end of it. 
So then when I populate a feed_dict expecting to get inferences back; I just get a bunch of random garbage as if it were the first pass through the network... Again; this seems nuts; do I really need to write my own framework for serializing and deserializing random networks? This has had to have been done before... -----UPDATE 4 ----- Again; thanks!", "response":"Alright, this took way too much time to figure out; so here is the answer for the rest of the world. Quick Reminder: I needed to persist a model that can be dynamically loaded and inferred against without knowledge as to the under pinnings or insides of how it works. Step 1: Create a model as a Class and ideally use an interface definition ``` class Vgg3Model: NUM_DENSE_NEURONS = 50 DENSE_RESHAPE = 32 * (CONSTANTS.IMAGE_SHAPE[0] \/\/ 2) * (CONSTANTS.IMAGE_SHAPE[1] \/\/ 2) def inference(self, images): ''' Portion of the compute graph that takes an input and converts it into a Y output ''' with tf.variable_scope('Conv1') as scope: C_1_1 = ld.cnn_layer(images, (5, 5, 3, 32), (1, 1, 1, 1), scope, name_postfix='1') C_1_2 = ld.cnn_layer(C_1_1, (5, 5, 32, 32), (1, 1, 1, 1), scope, name_postfix='2') P_1 = ld.pool_layer(C_1_2, (1, 2, 2, 1), (1, 2, 2, 1), scope) with tf.variable_scope('Dense1') as scope: P_1 = tf.reshape(P_1, (-1, self.DENSE_RESHAPE)) dim = P_1.get_shape()[1].value D_1 = ld.mlp_layer(P_1, dim, self.NUM_DENSE_NEURONS, scope, act_func=tf.nn.relu) with tf.variable_scope('Dense2') as scope: D_2 = ld.mlp_layer(D_1, self.NUM_DENSE_NEURONS, CONSTANTS.NUM_CLASSES, scope) H = tf.nn.softmax(D_2, name='prediction') return H def loss(self, logits, labels): ''' Adds Loss to all variables ''' cross_entr = tf.nn.sparse_softmax_cross_entropy_with_logits(logits=logits, labels=labels) cross_entr = tf.reduce_mean(cross_entr) tf.summary.scalar('cost', cross_entr) tf.add_to_collection('losses', cross_entr) return tf.add_n(tf.get_collection('losses'), name='total_loss') ``` Step 2: Train your network with whatever 
inputs you want; in my case I used Queue Runners and TF Records. Note that this step is done by a different team which iterates, builds, designs and optimizes models. This can also change over time. The output they produce must be able to be pulled from a remote location so we can dynamically load the updated models on devices (reflashing hardware is a pain especially if it is geographically distributed). In this instance; the team drops the 3 files associated with a graph saver; but also a pickle of the model used for that training session ``` model = vgg3.Vgg3Model() def create_sess_ops(): ''' Creates and returns operations needed for running a tensorflow training session ''' GRAPH = tf.Graph() with GRAPH.as_default(): examples, labels = Inputs.read_inputs(CONSTANTS.RecordPaths, batch_size=CONSTANTS.BATCH_SIZE, img_shape=CONSTANTS.IMAGE_SHAPE, num_threads=CONSTANTS.INPUT_PIPELINE_THREADS) examples = tf.reshape(examples, [-1, CONSTANTS.IMAGE_SHAPE[0], CONSTANTS.IMAGE_SHAPE[1], CONSTANTS.IMAGE_SHAPE[2]], name='infer\/input') logits = model.inference(examples) loss = model.loss(logits, labels) OPTIMIZER = tf.train.AdamOptimizer(CONSTANTS.LEARNING_RATE) gradients = OPTIMIZER.compute_gradients(loss) apply_gradient_op = OPTIMIZER.apply_gradients(gradients) gradients_summary(gradients) summaries_op = tf.summary.merge_all() return [apply_gradient_op, summaries_op, loss, logits], GRAPH def main(): ''' Run and Train CIFAR 10 ''' print('starting...') ops, GRAPH = create_sess_ops() total_duration = 0.0 with tf.Session(graph=GRAPH) as SESSION: COORDINATOR = tf.train.Coordinator() THREADS = tf.train.start_queue_runners(SESSION, COORDINATOR) SESSION.run(tf.global_variables_initializer()) SUMMARY_WRITER = tf.summary.FileWriter('Tensorboard\/' + CONSTANTS.MODEL_NAME, graph=GRAPH) GRAPH_SAVER = tf.train.Saver() for EPOCH in range(CONSTANTS.EPOCHS): duration = 0 error = 0.0 start_time = time.time() for batch in range(CONSTANTS.MINI_BATCHES): _, summaries, cost_val, prediction = 
SESSION.run(ops) error += cost_val duration += time.time() - start_time total_duration += duration SUMMARY_WRITER.add_summary(summaries, EPOCH) print('Epoch %d: loss = %.2f (%.3f sec)' % (EPOCH, error, duration)) if EPOCH == CONSTANTS.EPOCHS - 1 or error < 0.005: print( 'Done training for %d epochs. (%.3f sec)' % (EPOCH, total_duration) ) break GRAPH_SAVER.save(SESSION, 'models\/' + CONSTANTS.MODEL_NAME + '.model') with open('models\/' + CONSTANTS.MODEL_NAME + '.pkl', 'wb') as output: pickle.dump(model, output) COORDINATOR.request_stop() COORDINATOR.join(THREADS) ``` Step 3: Run some inference. Load your pickled model; create a new graph by piping the new placeholder into the logits; and then call session restore. DO NOT RESTORE THE WHOLE GRAPH; JUST THE VARIABLES. ``` MODEL_PATH = 'models\/' + CONSTANTS.MODEL_NAME + '.model' imgs_bsdir = 'C:\/data\/cifar_10\/train\/' images = tf.placeholder(tf.float32, shape=(1, 32, 32, 3)) with open('models\/vgg3.pkl', 'rb') as model_in: model = pickle.load(model_in) logits = model.inference(images) def run_inference(): '''Runs inference against a loaded model''' with tf.Session() as sess: sess.run(tf.global_variables_initializer()) new_saver = tf.train.Saver() new_saver.restore(sess, MODEL_PATH) print(\"Starting...\") for i in range(20, 30): print(str(i) + '.png') img = misc.imread(imgs_bsdir + str(i) + '.png').astype(np.float32) \/ 255.0 img = img.reshape(1, 32, 32, 3) pred = sess.run(logits, feed_dict={images : img}) max_node = np.argmax(pred) print('predicted label: ' + str(max_node)) print('done') run_inference() ``` There are definitely ways to improve on this using interfaces and maybe packaging everything up better; but this is working and sets the stage for how we will be moving forward. FINAL NOTE When we finally pushed this to production, we ended up having to ship the stupid `mymodel_model.py` file down with everything to build up the graph.
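The pickling pattern from Steps 2 and 3 reduces to a minimal sketch; ModelSpec here is a made-up stand-in for the model-building class, and an in-memory buffer stands in for the .pkl file:

```python
import io
import pickle

class ModelSpec:
    """Stand-in for a class whose method rebuilds the inference graph."""
    def __init__(self, num_neurons):
        self.num_neurons = num_neurons

    def inference(self, x):
        # Placeholder for graph construction; here just a toy computation.
        return [v * self.num_neurons for v in x]

# Training side: persist the spec next to the checkpoint files.
buf = io.BytesIO()
pickle.dump(ModelSpec(num_neurons=50), buf)

# Inference side: reload the spec and rebuild without knowing its internals.
buf.seek(0)
spec = pickle.load(buf)
print(spec.inference([1, 2]))  # [50, 100]
```

Note that pickle stores classes by reference, so the module defining the class must still be importable at load time; that is exactly why shipping the model .py file alongside the checkpoint was unavoidable.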
So we now enforce a naming convention for all models and there is also a coding standard for production model runs so we can do this properly. Good Luck!", "best_answers_score":0.8, "library_name":"tensorflow", "question_url":"https:\/\/stackoverflow.com\/questions\/43708616\/tensorflow-inference", "best_answers_votes":20, "question_length":8807, "response_length":5677 }, { "question":"The minimum required Cuda capability is 3.5 After installing TensorFlow and its dependencies on a g2.2xlarge EC2 instance I tried to run an MNIST example from the getting started page: ``` python tensorflow\/models\/image\/mnist\/convolutional.py ``` But I get the following warning: ``` I tensorflow\/core\/common_runtime\/gpu\/gpu_device.cc:611] Ignoring gpu device (device: 0, name: GRID K520, pci bus id: 0000:00:03.0) with Cuda compute capability 3.0. The minimum required Cuda capability is 3.5. ``` Is this a hard requirement? Any chance I could comment that check out in a fork of TensorFlow? It would be super nice to be able to train models in AWS.", "response":"There is a section in the official installation page that guides you to enable Cuda 3, but you need to build Tensorflow from source. ``` $ TF_UNOFFICIAL_SETTING=1 .\/configure # Same as the official settings above WARNING: You are configuring unofficial settings in TensorFlow. Because some external libraries are not backward compatible, these settings are largely untested and unsupported. Please specify a list of comma-separated Cuda compute capabilities you want to build with. You can find the compute capability of your device at: https:\/\/developer.nvidia.com\/cuda-gpus. Please note that each additional compute capability significantly increases your build time and binary size. 
[Default is: \"3.5,5.2\"]: 3.0 Setting up Cuda include Setting up Cuda lib64 Setting up Cuda bin Setting up Cuda nvvm Configuration finished ```", "best_answers_score":0.8, "library_name":"tensorflow", "question_url":"https:\/\/stackoverflow.com\/questions\/33651810\/the-minimum-required-cuda-capability-is-3-5", "best_answers_votes":11, "question_length":650, "response_length":828 }, { "question":"Running trained tensorflow model in C++ I have trained an image classification network in python using tensorflow. The trained model was saved as a .pb. Now, I want to test the model, I need this to be done in C++. I had used numpy in manipulating and handling data. During training phase the image is passed in as a numpy array. The image is stretched out as a 1D array and the class label is prepended to this array. I'm confused as to how to pass the image data while running the model in C++, where numpy isn't available to me. I use numpy operations to manipulate and handle the data. In what format should I pass in the data if I have to execute it in C++. 
Below is how I train and save my model ``` def trainModel(data): global_step = tf.Variable(0, name='global_step', trainable=False) X, y,keep_prob = modelInputs((741, 620, 1),4) logits = cnnModel(X,keep_prob) cost = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(logits=logits, labels=y), name=\"cost\") optimizer = tf.train.AdamOptimizer(.0001, name='Adam').minimize(cost) prediction = tf.argmax(logits, 1, name=\"prediction\") correct_pred = tf.equal(prediction, tf.argmax(y, 1), name=\"correct_pred\") accuracy = tf.reduce_mean(tf.cast(correct_pred, tf.float32), name='accuracy') with tf.Session() as sess: sess.run(tf.global_variables_initializer()) saver = tf.train.Saver() batch_size = 30 for e in range(11): batch_x, batch_y = data.next_batch(batch_size) batch_y = batch_y.astype('int32') x = np.reshape(batch_x, [batch_size, 741, 620, 1]) labels = np.zeros(shape=(batch_size,4)) labels[np.arange(len(labels)),batch_y]=1 sess.run(optimizer, feed_dict={X: x, y: labels,keep_prob:0.5}) global_step.assign(e).eval() saver.save(sess, '.\/my_test_model',global_step=global_step) ``` *741x620 is the size of the image", "response":"Instructions for using a graph in C++ can be found here. 
Here is some code to use your image as input: ``` tensorflow::Tensor keep_prob = tensorflow::Tensor(tensorflow::DT_FLOAT, tensorflow::TensorShape()); keep_prob.scalar<float>()() = 1.0; tensorflow::Tensor input_tensor(tensorflow::DT_FLOAT, tensorflow::TensorShape({1,height,width,depth})); auto input_tensor_mapped = input_tensor.tensor<float, 4>(); const float * source_data = (float*) img.data; \/\/ here img is an opencv image, but if it's just a float array this code is very easy to adapt \/\/ copying the image data into the corresponding tensor for (int y = 0; y < height; ++y) { const float* source_row = source_data + (y * width * depth); for (int x = 0; x < width; ++x) { const float* source_pixel = source_row + (x * depth); for (int c = 0; c < depth; ++c) { input_tensor_mapped(0, y, x, c) = source_pixel[c]; } } } std::vector<tensorflow::Tensor> finalOutput; tensorflow::Status run_status = this->tf_session->Run({{InputName,input_tensor}, {dropoutPlaceHolderName, keep_prob}}, {OutputName}, {}, &finalOutput); ```", "best_answers_score":0.8, "library_name":"tensorflow", "question_url":"https:\/\/stackoverflow.com\/questions\/45027394\/running-trained-tensorflow-model-in-c", "best_answers_votes":14, "question_length":1781, "response_length":774 }, { "question":"Keras - class_weight vs sample_weights in the fit_generator In Keras (using TensorFlow as a backend) I am building a model which works with a huge dataset that has highly imbalanced classes (labels). To be able to run the training process, I created a generator which feeds chunks of data to the fit_generator. According to the documentation for the fit_generator, the output of the generator can either be the tuple (inputs, targets) or the tuple (inputs, targets, sample_weights). Having that in mind, here are a few questions: My understanding is that the class_weight regards the weights of all classes for the entire dataset whereas the sample_weights regards the weights of all classes for each individual chunk created by the generator. Is that correct? If not, can someone elaborate on the matter? Is it necessary to give both the class_weight to the fit_generator and then the sample_weights as an output for each chunk? If yes, then why? If not then which one is better to give?
If I should give the sample_weights for each chunk, how do I map the weights if some of the classes are missing from a specific chunk? Let me give an example. In my overall dataset, I have 7 possible classes (labels). Because these classes are highly imbalanced, when I create smaller chunks of data as an output from the fit_generator, some of the classes are missing from the specific chunk. How should I create the sample_weights for these chunks?", "response":"My understanding is that the class_weight regards the weights of all classes for the entire dataset whereas the sample_weights regards the weights of all classes for each individual chunk created by the generator. Is that correct? If not, can someone elaborate on the matter? class_weight affects the relative weight of each class in the calculation of the objective function. sample_weights, as the name suggests, allows further control of the relative weight of samples that belong to the same class. Is it necessary to give both the class_weight to the fit_generator and then the sample_weights as an output for each chunk? If yes, then why? If not then which one is better to give? It depends on your application. Class weights are useful when training on highly skewed data sets; for example, a classifier to detect fraudulent transactions. Sample weights are useful when you don't have equal confidence in the samples in your batch. A common example is performing regression on measurements with variable uncertainty. If I should give the sample_weights for each chunk, how do I map the weights if some of the classes are missing from a specific chunk? Let me give an example. In my overall dataset, I have 7 possible classes (labels). Because these classes are highly imbalanced, when I create smaller chunks of data as an output from the fit_generator, some of the classes are missing from the specific chunk. How should I create the sample_weights for these chunks? This is not an issue. 
sample_weights is defined on a per-sample basis and is independent of the class. For this reason, the documentation states that (inputs, targets, sample_weights) should be the same length. The function _weighted_masked_objective in engine\/training.py has an example of sample_weights being applied.", "best_answers_score":0.8, "library_name":"tensorflow", "question_url":"https:\/\/stackoverflow.com\/questions\/43459317\/keras-class-weight-vs-sample-weights-in-the-fit-generator", "best_answers_votes":23, "question_length":1450, "response_length":1801 }, { "question":"TensorFlow: how to log GPU memory (VRAM) utilization? TensorFlow always (pre-)allocates all free memory (VRAM) on my graphics card, which is ok since I want my simulations to run as fast as possible on my workstation. However, I would like to log how much memory (in sum) TensorFlow really uses. Additionally, it would be really nice if I could also log how much memory single tensors use. This information is important to measure and compare the memory size that different ML\/AI architectures need. Any tips?", "response":"Update: you can use TensorFlow ops to query the allocator: ``` # maximum across all sessions and .run calls so far sess.run(tf.contrib.memory_stats.MaxBytesInUse()) # current usage sess.run(tf.contrib.memory_stats.BytesInUse()) ``` Also you can get detailed information about a session.run call, including all memory allocations made during the call, by looking at RunMetadata.
I.e., something like this: ``` run_metadata = tf.RunMetadata() sess.run(c, options=tf.RunOptions(trace_level=tf.RunOptions.FULL_TRACE, output_partition_graphs=True), run_metadata=run_metadata) ``` Here's an end-to-end example -- take a column vector and a row vector and add them to get a matrix of additions: ``` import tensorflow as tf no_opt = tf.OptimizerOptions(opt_level=tf.OptimizerOptions.L0, do_common_subexpression_elimination=False, do_function_inlining=False, do_constant_folding=False) config = tf.ConfigProto(graph_options=tf.GraphOptions(optimizer_options=no_opt), log_device_placement=True, allow_soft_placement=False, device_count={\"CPU\": 3}, inter_op_parallelism_threads=3, intra_op_parallelism_threads=1) sess = tf.Session(config=config) with tf.device(\"cpu:0\"): a = tf.ones((13, 1)) with tf.device(\"cpu:1\"): b = tf.ones((1, 13)) with tf.device(\"cpu:2\"): c = a+b sess = tf.Session(config=config) run_metadata = tf.RunMetadata() sess.run(c, options=tf.RunOptions(trace_level=tf.RunOptions.FULL_TRACE, output_partition_graphs=True), run_metadata=run_metadata) with open(\"\/tmp\/run2.txt\", \"w\") as out: out.write(str(run_metadata)) ``` If you open run2.txt you'll see messages like this: ``` node_name: \"ones\" allocation_description { requested_bytes: 52 allocator_name: \"cpu\" ptr: 4322108320 } .... node_name: \"ones_1\" allocation_description { requested_bytes: 52 allocator_name: \"cpu\" ptr: 4322092992 } ... node_name: \"add\" allocation_description { requested_bytes: 676 allocator_name: \"cpu\" ptr: 4492163840 ``` So here you can see that a and b allocated 52 bytes each (13*4), and the result allocated 676 bytes (13*13*4).", "best_answers_score":0.8, "library_name":"tensorflow", "question_url":"https:\/\/stackoverflow.com\/questions\/40190510\/tensorflow-how-to-log-gpu-memory-vram-utilization", "best_answers_votes":23, "question_length":509, "response_length":1983 }, { "question":"The name tf.Session is deprecated.
Please use tf.compat.v1.Session instead I got the following deprecation warning in my tensorflow code: The name tf.Session is deprecated. Please use tf.compat.v1.Session instead. Why did I get this warning? What will happen in TensorFlow 2.0 instead of tf.Session? Is it okay to use tf.compat.v1.Session?", "response":"To make TensorFlow more \"Pythonic\" in version 2.0, by design TF 2.0 does not have tf.Session. TensorFlow 1.X requires users to manually stitch together an abstract syntax tree (the graph) by making tf.* API calls. It then requires users to manually compile the abstract syntax tree by passing a set of output tensors and input tensors to a session.run() call. TensorFlow 2.0 executes eagerly (like Python normally does) and in 2.0, graphs and sessions should feel like implementation details. You could use: ``` import tensorflow.compat.v1 as tf tf.disable_v2_behavior() ``` However, this does not let you take advantage of many of the improvements made in TensorFlow 2.0. The better solution is: Replace tf.Session.run calls: Every tf.Session.run call should be replaced by a Python function. The feed_dict and tf.placeholders become function arguments. The fetches become the function's return value.", "best_answers_score":0.8, "library_name":"tensorflow", "question_url":"https:\/\/stackoverflow.com\/questions\/56820327\/the-name-tf-session-is-deprecated-please-use-tf-compat-v1-session-instead", "best_answers_votes":20, "question_length":333, "response_length":905 }, { "question":"Understanding COCO evaluation \"maximum detections\" I started using the cocoapi to evaluate a model trained using the Object Detection API. After reading various sources that explain mean average precision (mAP) and recall, I am confused with the \"maximum detections\" parameter used in the cocoapi. From what I understood (e.g. here, here or here), one calculates mAP by calculating precision and recall for various model score thresholds.
This gives the precision-recall curve and mAP is calculated as an approximation to the area under this curve. Or, expressed differently, as the average of the maximum precision in defined recall ranges (0:0.1:1). However, the cocoapi seems to calculate precision and recall for a given number of maximum detections (maxDet) with the highest scores. And from there get the precision-recall curve for maxDets = 1, 10, 100. Why is this a good metric since it is clearly not the same as the above method (it potentially excludes datapoints)? In my example, I have ~ 3000 objects per image. Evaluating the result using the cocoapi gives terrible recall because it limits the number of detected objects to 100. For testing purposes, I feed the evaluation dataset as the ground truth and the detected objects (with some artificial scores). I would expect precision and recall pretty good, which is actually happening. But as soon as I feed in more than 100 objects, precision and recall go down with increasing number of \"detected objects\". Even though they are all \"correct\"! How does that make sense?", "response":"You can change the maxDets parameter and define a new summarize() instance method. Let's create a COCOeval object: ```py cocoEval = COCOeval(cocoGt,cocoDt,annType) cocoEval.params.maxDets = [200] cocoEval.params.imgIds = imgIdsDt cocoEval.evaluate() cocoEval.accumulate() cocoEval.summarize_2() # instead of calling cocoEval.summarize() ``` Now, define summarize_2() method in cocoeval.py module in the following way: ```py def summarize_2(self): # Copy everything from `summarize` method here except # the function `_summarizeDets()`. 
def _summarizeDets(): stats = np.zeros((12,)) stats[0] = _summarize(1, maxDets=self.params.maxDets[0]) stats[1] = _summarize(1, iouThr=.5, maxDets=self.params.maxDets[0]) stats[2] = _summarize(1, iouThr=.75, maxDets=self.params.maxDets[0]) stats[3] = _summarize(1, areaRng='small', maxDets=self.params.maxDets[0]) stats[4] = _summarize(1, areaRng='medium', maxDets=self.params.maxDets[0]) stats[5] = _summarize(1, areaRng='large', maxDets=self.params.maxDets[0]) stats[6] = _summarize(0, maxDets=self.params.maxDets[0]) stats[9] = _summarize(0, areaRng='small', maxDets=self.params.maxDets[0]) stats[10] = _summarize(0, areaRng='medium', maxDets=self.params.maxDets[0]) stats[11] = _summarize(0, areaRng='large', maxDets=self.params.maxDets[0]) return stats # Copy other things which are left from `summarize()` here. ``` If you run the above method over your dataset, you will get an output similar to this: ```sh Average Precision (AP) @[ IoU=0.50:0.95 | area= all | maxDets=200 ] = 0.507 Average Precision (AP) @[ IoU=0.50 | area= all | maxDets=200 ] = 0.699 Average Precision (AP) @[ IoU=0.75 | area= all | maxDets=200 ] = 0.575 Average Precision (AP) @[ IoU=0.50:0.95 | area= small | maxDets=200 ] = 0.586 Average Precision (AP) @[ IoU=0.50:0.95 | area=medium | maxDets=200 ] = 0.519 Average Precision (AP) @[ IoU=0.50:0.95 | area= large | maxDets=200 ] = 0.501 Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets=200 ] = 0.598 Average Recall (AR) @[ IoU=0.50:0.95 | area= small | maxDets=200 ] = 0.640 Average Recall (AR) @[ IoU=0.50:0.95 | area=medium | maxDets=200 ] = 0.566 Average Recall (AR) @[ IoU=0.50:0.95 | area= large | maxDets=200 ] = 0.564 ```", "best_answers_score":0.8, "library_name":"tensorflow", "question_url":"https:\/\/stackoverflow.com\/questions\/52839368\/understanding-coco-evaluation-maximum-detections", "best_answers_votes":12, "question_length":1533, "response_length":2204 }, { "question":"Extract features using pre-trained (Tensorflow) CNN Deep Learning 
has been applied successfully on several large data sets for the classification of a handful of classes (cats, dogs, cars, planes, etc.), with performance beating simpler descriptors like Bags of Features over SIFT, color histograms, etc. Nevertheless, training such a network requires a lot of data per class and a lot of training time. However, very often one doesn't have enough data or just wants to get an idea of how well a convolutional neural network might do, before spending time on designing and training such a network and gathering the training data. In this particular case, it might be ideal to have a network configured and trained using some benchmark data set used by state-of-the-art publications, and to simply apply it to some data set that you might have as a feature extractor. This results in a set of features for each image, which one could feed to a classical classification method like SVMs, logistic regression, neural networks, etc. In particular when one does not have enough data to train the CNN, I would expect this to outperform a pipeline where the CNN was trained on few samples. I was looking at the tensorflow tutorials, but they always seem to have a clear training \/ testing phase. I couldn't find a pickle file (or similar) with a pre-configured CNN feature extractor. My questions are: do such pre-trained networks exist, and where can I find them? Alternatively: does this approach make sense? Where could I find a CNN+weights? EDIT W.r.t. 
@john's comment I tried using 'DecodeJpeg:0' and 'DecodeJpeg\/contents:0' and checked the outputs, which are different (:S) ```py import cv2, requests, numpy import tensorflow.python.platform import tensorflow as tf response = requests.get('https:\/\/i.sstatic.net\/LIW6C.jpg?s=328&g=1') data = numpy.asarray(bytearray(response.content), dtype=numpy.uint8) image = cv2.imdecode(data,-1) compression_worked, jpeg_data = cv2.imencode('.jpeg', image) if not compression_worked: raise Exception(\"Failure when compressing image to jpeg format in opencv library\") jpeg_data = jpeg_data.tostring() with open('.\/deep_learning_models\/inception-v3\/classify_image_graph_def.pb', 'rb') as graph_file: graph_def = tf.GraphDef() graph_def.ParseFromString(graph_file.read()) tf.import_graph_def(graph_def, name='') with tf.Session() as sess: softmax_tensor = sess.graph.get_tensor_by_name('pool_3:0') arr0 = numpy.squeeze(sess.run( softmax_tensor, {'DecodeJpeg:0': image} )) arr1 = numpy.squeeze(sess.run( softmax_tensor, {'DecodeJpeg\/contents:0': jpeg_data} )) print(numpy.abs(arr0 - arr1).max()) ``` So the max absolute difference is 1.27649, and in general all the elements differ (especially since the average value of arr0 and arr1 themselves lies between 0 - 0.5). I also would expect that 'DecodeJpeg:0' needs a jpeg-string, not a numpy array; why else would the name contain 'Jpeg'? @john: Could you state how sure you are about your comment? So I guess I'm not sure what is what, as I would expect a trained neural network to be deterministic (but chaotic at most).", "response":"The TensorFlow team recently released a deep CNN trained on the ImageNet dataset. You can download the script that fetches the data (including the model graph and the trained weights) from here. The associated Image Recognition tutorial has more details about the model. 
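Once you have a fixed feature vector per image (e.g. the 2048-d pool_3 activations the question's snippet extracts), the downstream step the question describes (SVMs, logistic regression, etc.) is ordinary supervised learning on those vectors. Here is a minimal sketch of that step using random stand-in features instead of real CNN outputs; the dimensions, labels, and training loop are made up purely for illustration: ```py import numpy as np rng = np.random.default_rng(0) # Stand-ins for extracted CNN features: 200 \"images\", 2048-d each. # In practice these rows would be the pool_3 activations per image. n, d = 200, 2048 X = rng.normal(size=(n, d)) true_w = rng.normal(size=d) y = (X @ true_w > 0).astype(float) # synthetic binary labels # Plain logistic regression by gradient descent on the features. w = np.zeros(d) for _ in range(300): p = 1.0 \/ (1.0 + np.exp(-(X @ w))) w -= 0.1 * (X.T @ (p - y)) \/ n train_acc = np.mean(((X @ w) > 0) == (y == 1)) print(train_acc) ``` The point is only that once the CNN has produced the features, any classical classifier can be trained on them without touching the network's weights. 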
While the current model isn't specifically packaged to be used in a subsequent training step, you could explore modifying the script to reuse parts of the model and the trained weights in your own network.", "best_answers_score":0.8, "library_name":"tensorflow", "question_url":"https:\/\/stackoverflow.com\/questions\/34175174\/extract-features-using-pre-trained-tensorflow-cnn", "best_answers_votes":18, "question_length":3108, "response_length":476 }, { "question":"How to add new embeddings for unknown words in Tensorflow (training & pre-set for testing) I am curious as to how I can add a normal-randomized 300 dimension vector (elements' type = tf.float32) whenever a word unknown to the pre-trained vocabulary is encountered. I am using pre-trained GloVe word embeddings, but in some cases, I realize I encounter unknown words, and I want to create a normal-randomized word vector for this new found unknown word. The problem is that with my current set up, I use tf.contrib.lookup.index_table_from_tensor to convert from words to integers based on the known vocabulary. This function can create new tokens and hash them for some predefined number of out of vocabulary words, but my embed will not contain an embedding for this new unknown hash value. I am uncertain if I can simply append a randomized embedding to the end of the embed list. I also would like to do this in an efficient way, so pre-built tensorflow function or method involving tensorflow functions would probably be the most efficient. I define pre-known special tokens such as an end of sentence token and a default unknown as the empty string (\"at index 0), but this is limited in its power to learn for various different unknown words. I currently use tf.nn.embedding_lookup() as the final embedding step. 
I would like to be able to add new random 300d vectors for each unknown word in the training data, and I would also like to add pre-made random word vectors for any unknown tokens not seen in training that are possibly encountered during testing. What is the most efficient way of doing this? ``` def embed_tensor(string_tensor, trainable=True): \"\"\" Convert List of strings into list of indices then into 300d vectors \"\"\" # ordered lists of vocab and corresponding (by index) 300d vector vocab, embed = load_pretrained_glove() # Set up tensorflow look up from string word to unique integer vocab_lookup = tf.contrib.lookup.index_table_from_tensor( mapping=tf.constant(vocab), default_value = 0) string_tensor = vocab_lookup.lookup(string_tensor) # define the word embedding embedding_init = tf.Variable(tf.constant(np.asarray(embed), dtype=tf.float32), trainable=trainable, name=\"embed_init\") # return the word embedded version of the sentence (300d vectors\/word) return tf.nn.embedding_lookup(embedding_init, string_tensor) ```", "response":"The code example below adapts your embed_tensor function such that words are embedded as follows: For words that have a pretrained embedding, the embedding is initialized with the pretrained embedding. The embedding can be kept fixed during training if trainable is False. For words in the training data that don't have a pretrained embedding, the embedding is initialized randomly. The embedding can be kept fixed during training if trainable is False. For words in the test data that don't occur in the training data and don't have a pretrained embedding, a single randomly initialized embedding vector is used. This vector can't be trained. 
```py import tensorflow as tf import numpy as np EMB_DIM = 300 def load_pretrained_glove(): return [\"a\", \"cat\", \"sat\", \"on\", \"the\", \"mat\"], np.random.rand(6, EMB_DIM) def get_train_vocab(): return [\"a\", \"dog\", \"sat\", \"on\", \"the\", \"mat\"] def embed_tensor(string_tensor, trainable=True): \"\"\" Convert List of strings into list of indices then into 300d vectors \"\"\" # ordered lists of vocab and corresponding (by index) 300d vector pretrained_vocab, pretrained_embs = load_pretrained_glove() train_vocab = get_train_vocab() only_in_train = list(set(train_vocab) - set(pretrained_vocab)) vocab = pretrained_vocab + only_in_train # Set up tensorflow look up from string word to unique integer vocab_lookup = tf.contrib.lookup.index_table_from_tensor( mapping=tf.constant(vocab), default_value=len(vocab)) string_tensor = vocab_lookup.lookup(string_tensor) # define the word embedding pretrained_embs = tf.get_variable( name=\"embs_pretrained\", initializer=tf.constant_initializer(np.asarray(pretrained_embs), dtype=tf.float32), shape=pretrained_embs.shape, trainable=trainable) train_embeddings = tf.get_variable( name=\"embs_only_in_train\", shape=[len(only_in_train), EMB_DIM], initializer=tf.random_uniform_initializer(-0.04, 0.04), trainable=trainable) unk_embedding = tf.get_variable( name=\"unk_embedding\", shape=[1, EMB_DIM], initializer=tf.random_uniform_initializer(-0.04, 0.04), trainable=False) embeddings = tf.concat([pretrained_embs, train_embeddings, unk_embedding], axis=0) return tf.nn.embedding_lookup(embeddings, string_tensor) ``` FYI, to have a sensible, non-random representation for words that don't occur in the training data and don't have a pretrained embedding, you could consider mapping words with a low frequency in your training data to an unk token (that is not in your vocabulary) and make the unk_embedding trainable. 
This way you learn a prototype for words that are unseen in the training data.", "best_answers_score":0.8, "library_name":"tensorflow", "question_url":"https:\/\/stackoverflow.com\/questions\/45113130\/how-to-add-new-embeddings-for-unknown-words-in-tensorflow-training-pre-set-fo", "best_answers_votes":11, "question_length":2345, "response_length":2563 }, { "question":"Unable to Install Tensorflow (MemoryError) I tried to install Tensorflow on Linux Ubuntu 16.04 from source and using pip and I keep getting the following error return base64.b64encode(b).decode(\"ascii\") MemoryError. I tried to google this problem but only found a website written in Chinese (probably) http:\/\/juncollin.hatenablog.com\/entry\/2017\/03\/05\/025318 that doesn't really help.", "response":"Try installing without caching: pip install --no-cache-dir tensorflow.", "best_answers_score":0.8, "library_name":"tensorflow", "question_url":"https:\/\/stackoverflow.com\/questions\/44335087\/unable-to-install-tensorflow-memoryerror", "best_answers_votes":104, "question_length":383, "response_length":70 }, { "question":"Deploy python app to Heroku \"Slug Size too large\" I'm trying to deploy a Streamlit app written in python to Heroku. My whole directory is 4.73 MB, where 4.68 MB is my ML model. 
My requirements.txt looks like this: ``` absl-py==0.9.0 altair==4.0.1 astor==0.8.1 attrs==19.3.0 backcall==0.1.0 base58==2.0.0 bleach==3.1.3 blinker==1.4 boto3==1.12.29 botocore==1.15.29 cachetools==4.0.0 certifi==2019.11.28 chardet==3.0.4 click==7.1.1 colorama==0.4.3 cycler==0.10.0 decorator==4.4.2 defusedxml==0.6.0 docutils==0.15.2 entrypoints==0.3 enum-compat==0.0.3 future==0.18.2 gast==0.2.2 google-auth==1.11.3 google-auth-oauthlib==0.4.1 google-pasta==0.2.0 grpcio==1.27.2 h5py==2.10.0 idna==2.9 importlib-metadata==1.5.2 ipykernel==5.2.0 ipython==7.13.0 ipython-genutils==0.2.0 ipywidgets==7.5.1 jedi==0.16.0 Jinja2==2.11.1 jmespath==0.9.5 joblib==0.14.1 jsonschema==3.2.0 jupyter-client==6.1.1 jupyter-core==4.6.3 Keras-Applications==1.0.8 Keras-Preprocessing==1.1.0 kiwisolver==1.1.0 Markdown==3.2.1 MarkupSafe==1.1.1 matplotlib==3.2.1 mistune==0.8.4 nbconvert==5.6.1 nbformat==5.0.4 notebook==6.0.3 numpy==1.18.2 oauthlib==3.1.0 opencv-python==4.2.0.32 opt-einsum==3.2.0 pandas==1.0.3 pandocfilters==1.4.2 parso==0.6.2 pathtools==0.1.2 pickleshare==0.7.5 Pillow==7.0.0 prometheus-client==0.7.1 prompt-toolkit==3.0.4 protobuf==3.11.3 pyasn1==0.4.8 pyasn1-modules==0.2.8 pydeck==0.3.0b2 Pygments==2.6.1 pyparsing==2.4.6 pyrsistent==0.16.0 python-dateutil==2.8.0 pytz==2019.3 pywinpty==0.5.7 pyzmq==19.0.0 requests==2.23.0 requests-oauthlib==1.3.0 rsa==4.0 s3transfer==0.3.3 scikit-learn==0.22.2.post1 scipy==1.4.1 Send2Trash==1.5.0 six==1.14.0 sklearn==0.0 streamlit==0.56.0 tensorboard==2.1.1 tensorflow==2.1.0 tensorflow-estimator==2.1.0 termcolor==1.1.0 terminado==0.8.3 testpath==0.4.4 toml==0.10.0 toolz==0.10.0 tornado==5.1.1 traitlets==4.3.3 tzlocal==2.0.0 urllib3==1.25.8 validators==0.14.2 watchdog==0.10.2 wcwidth==0.1.9 webencodings==0.5.1 Werkzeug==1.0.0 widgetsnbextension==3.5.1 wincertstore==0.2 wrapt==1.12.1 zipp==3.1.0 ``` When I push my app to Heroku, the message is: ``` remote: -----> Discovering process types remote: Procfile declares types -> web remote: remote: 
-----> Compressing... remote: ! Compiled slug size: 623.5M is too large (max is 500M). remote: ! See: http:\/\/devcenter.heroku.com\/articles\/slug-size remote: remote: ! Push failed ``` How can my slug size be too large? Is it the size of the requirements? Then how is it possible to deploy a python app using tensorflow to Heroku after all? Thanks for the help!", "response":"I have already answered this here. Turns out the Tensorflow 2.0 module is very large (more than 500MB, the limit for Heroku) because of its GPU support. Since Heroku doesn't support GPU, it doesn't make sense to install the module with GPU support. Solution: Simply replace tensorflow with tensorflow-cpu in your requirements. This worked for me, hope it works for you too!", "best_answers_score":0.8, "library_name":"tensorflow", "question_url":"https:\/\/stackoverflow.com\/questions\/61062303\/deploy-python-app-to-heroku-slug-size-too-large", "best_answers_votes":61, "question_length":2534, "response_length":373 }, { "question":"Display image of graph in TensorFlow? I wrote a simple script to calculate the golden ratio from 1,2,5. Is there a way to actually produce a visual through tensorflow (possibly with the aid of matplotlib or networkx) of the actual graph structure? The doc of tensorflow is pretty similar to a factor graph so I was wondering: How can an image of the graph structure be generated through tensorflow? In this example below, it would be C_1, C_2, C_3 as individual nodes, and then C_1 would have the tf.sqrt operation followed by the operation that brings them together. Maybe the graph structure (nodes,edges) can be imported into networkx? I see that the tensor objects have a graph attribute but I haven't found out how to actually use this for imaging purposes. 
``` #!\/usr\/bin\/python import tensorflow as tf C_1 = tf.constant(5.0) C_2 = tf.constant(1.0) C_3 = tf.constant(2.0) golden_ratio = (tf.sqrt(C_1) + C_2)\/C_3 sess = tf.Session() print sess.run(golden_ratio) #1.61803 sess.close() ```", "response":"This is exactly what tensorboard was created for. You need to slightly modify your code to store the information about your graph. ``` import tensorflow as tf C_1 = tf.constant(5.0) C_2 = tf.constant(1.0) C_3 = tf.constant(2.0) golden_ratio = (tf.sqrt(C_1) + C_2)\/C_3 with tf.Session() as sess: writer = tf.summary.FileWriter('logs', sess.graph) print sess.run(golden_ratio) writer.close() ``` This will create a logs folder with event files in your working directory. After this you should run tensorboard from your command line tensorboard --logdir=\"logs\" and navigate to the url it gives you (http:\/\/127.0.0.1:6006). In your browser go to GRAPHS tab and enjoy your graph. You will use TB a lot if you are going to do anything with TF. So it makes sense to learn about it more from official tutorials and from this video.", "best_answers_score":0.8, "library_name":"tensorflow", "question_url":"https:\/\/stackoverflow.com\/questions\/34230613\/display-image-of-graph-in-tensorflow", "best_answers_votes":55, "question_length":992, "response_length":823 }, { "question":"using cuDNN kernel for LSTM I want to train my RNN model using Cudnn: ``` max_length % # layer input layer_embedding( name = \"input\", input_dim = num_words, input_length = max_length, output_dim = embedding_dim, embeddings_initializer = initializer_random_uniform(minval = -0.05, maxval = 0.05, seed = 2) ) %>% # layer dropout layer_spatial_dropout_1d( name = \"embedding_dropout\", rate = 0.2 ) %>% # layer lstm 1 bidirectional(layer_lstm( name = \"lstm\", units = 64, unroll = FALSE, dropout = 0.2, use_bias = TRUE, recurrent_dropout = 0, return_sequences = TRUE )) %>% layer_batch_normalization() %>% # layer output layer_dense( name = \"output\", units = 3, 
activation = \"softmax\" ) ``` when I run this I get this warning: WARNING:tensorflow:Layer lstm will not use cuDNN kernel since it doesn't meet the cuDNN kernel criteria. It will use generic GPU kernel as fallback when running on GPU. I think I have followed all the requirements, not sure what I'm missing. SessionInfo: ``` R version 4.0.0 (2020-04-24) Platform: x86_64-w64-mingw32\/x64 (64-bit) Running under: Windows 10 x64 (build 18363) Matrix products: default locale: [1] LC_COLLATE=English_United States.1252 [2] LC_CTYPE=English_United States.1252 [3] LC_MONETARY=English_United States.1252 [4] LC_NUMERIC=C [5] LC_TIME=English_United States.1252 attached base packages: [1] stats graphics grDevices utils datasets methods base other attached packages: [1] keras_2.3.0.0 loaded via a namespace (and not attached): [1] Rcpp_1.0.4.6 lattice_0.20-41 zeallot_0.1.0 rappdirs_0.3.1 [5] grid_4.0.0 R6_2.4.1 jsonlite_1.6.1 magrittr_1.5 [9] tfruns_1.4 whisker_0.4 Matrix_1.2-18 reticulate_1.15 [13] generics_0.0.2 tools_4.0.0 xfun_0.14 compiler_4.0.0 [17] base64enc_0.1-3 tensorflow_2.2.0 knitr_1.28 ```", "response":"I ran into the same problem and fixed it by manually setting the options to use the cuDNN-compatible implementation as specified here. \"Based on available runtime hardware and constraints, this layer will choose different implementations (cuDNN-based or pure-TensorFlow) to maximize the performance. If a GPU is available and all the arguments to the layer meet the requirement of the CuDNN kernel (see below for details), the layer will use a fast cuDNN implementation.\" The requirements to use the cuDNN implementation are: activation == tanh recurrent_activation == sigmoid recurrent_dropout == 0 unroll is False use_bias is True Inputs, if use masking, are strictly right-padded. Eager execution is enabled in the outermost context. In particular, I had to specify recurrent_activation == sigmoid. 
The version of Keras\/TF I installed had a default of recurrent_activation == hard_sigmoid.", "best_answers_score":0.8, "library_name":"tensorflow", "question_url":"https:\/\/stackoverflow.com\/questions\/62044838\/using-cudnn-kernel-for-lstm", "best_answers_votes":53, "question_length":1755, "response_length":892 }, { "question":"TensorFlow: cast a float64 tensor to float32 I am trying to use: train = optimizer.minimize(loss) but the standard optimizers do not work with tf.float64. Therefore I want to truncate my loss from tf.float64 to only tf.float32. ``` Traceback (most recent call last): File \"q4.py\", line 85, in train = optimizer.minimize(loss) File \"\/Library\/Python\/2.7\/site-packages\/tensorflow\/python\/training\/optimizer.py\", line 190, in minimize colocate_gradients_with_ops=colocate_gradients_with_ops) File \"\/Library\/Python\/2.7\/site-packages\/tensorflow\/python\/training\/optimizer.py\", line 229, in compute_gradients self._assert_valid_dtypes([loss]) File \"\/Library\/Python\/2.7\/site-packages\/tensorflow\/python\/training\/optimizer.py\", line 354, in _assert_valid_dtypes dtype, t.name, [v for v in valid_dtypes])) ValueError: Invalid type tf.float64 for Add_1:0, expected: [tf.float32]. ```", "response":"The short answer is that you can convert a tensor from tf.float64 to tf.float32 using the tf.cast() op: ``` loss = tf.cast(loss, tf.float32) ``` The longer answer is that this will not solve all of your problems with the optimizers. (The lack of support for tf.float64 is a known issue.) The optimizers require that all of the tf.Variable objects that you are trying to optimize must also have type tf.float32.", "best_answers_score":0.8, "library_name":"tensorflow", "question_url":"https:\/\/stackoverflow.com\/questions\/35725513\/tensorflow-cast-a-float64-tensor-to-float32", "best_answers_votes":53, "question_length":870, "response_length":410 }, { "question":"How to optimize for inference a simple, saved TensorFlow 1.0.1 graph? 
I cannot successfully run the optimize_for_inference module on a simple, saved TensorFlow graph (Python 2.7; package installed by pip install tensorflow-gpu==1.0.1). Background Saving TensorFlow Graph Here's my Python script to generate and save a simple graph to add 5 to my input x placeholder operation. ``` import tensorflow as tf # make and save a simple graph G = tf.Graph() with G.as_default(): x = tf.placeholder(dtype=tf.float32, shape=(), name=\"x\") a = tf.Variable(5.0, name=\"a\") y = tf.add(a, x, name=\"y\") saver = tf.train.Saver() with tf.Session(graph=G) as sess: sess.run(tf.global_variables_initializer()) out = sess.run(fetches=[y], feed_dict={x: 1.0}) print(out) saver.save(sess=sess, save_path=\"test_model\") ``` Restoring TensorFlow Graph I have a simple restore script that recreates the saved graph and restores graph params. Both the save\/restore scripts produce the same output. ``` import tensorflow as tf # Restore simple graph and test model output G = tf.Graph() with tf.Session(graph=G) as sess: # recreate saved graph (structure) saver = tf.train.import_meta_graph('.\/test_model.meta') # restore net params saver.restore(sess, tf.train.latest_checkpoint('.\/')) x = G.get_operation_by_name(\"x\").outputs[0] y = G.get_operation_by_name(\"y\").outputs out = sess.run(fetches=[y], feed_dict={x: 1.0}) print(out[0]) ``` Optimization Attempt But, while I don't expect much in terms of optimization, when I try to optimize the graph for inference, I get the following error message. The expected output node does not appear to be in the saved graph. 
``` $ python -m tensorflow.python.tools.optimize_for_inference --input test_model.data-00000-of-00001 --output opt_model --input_names=x --output_names=y Traceback (most recent call last): File \"\/usr\/lib\/python2.7\/runpy.py\", line 174, in _run_module_as_main \"__main__\", fname, loader, pkg_name) File \"\/usr\/lib\/python2.7\/runpy.py\", line 72, in _run_code exec code in run_globals File \"\/{path}\/lib\/python2.7\/site-packages\/tensorflow\/python\/tools\/optimize_for_inference.py\", line 141, in app.run(main=main, argv=[sys.argv[0]] + unparsed) File \"\/{path}\/local\/lib\/python2.7\/site-packages\/tensorflow\/python\/platform\/app.py\", line 44, in run _sys.exit(main(_sys.argv[:1] + flags_passthrough)) File \"\/{path}\/lib\/python2.7\/site-packages\/tensorflow\/python\/tools\/optimize_for_inference.py\", line 90, in main FLAGS.output_names.split(\",\"), FLAGS.placeholder_type_enum) File \"\/{path}\/local\/lib\/python2.7\/site-packages\/tensorflow\/python\/tools\/optimize_for_inference_lib.py\", line 91, in optimize_for_inference placeholder_type_enum) File \"\/{path}\/local\/lib\/python2.7\/site-packages\/tensorflow\/python\/tools\/strip_unused_lib.py\", line 71, in strip_unused output_node_names) File \"\/{path}\/local\/lib\/python2.7\/site-packages\/tensorflow\/python\/framework\/graph_util_impl.py\", line 141, in extract_sub_graph assert d in name_to_node_map, \"%s is not in graph\" % d AssertionError: y is not in graph ``` Further investigation led me to inspect the checkpoint of the saved graph, which only shows 1 tensor (a, no x and no y). ``` (tf-1.0.1) $ python -m tensorflow.python.tools.inspect_checkpoint --file_name .\/test_model --all_tensors tensor_name: a 5.0 ``` Specific Questions Why do I not see x and y in the checkpoint? Is it because they are operations and not tensors? 
Since I need to provide input and output names to the optimize_for_inference module, how do I build the graph so I can reference the input and output nodes?", "response":"Here is the detailed guide on how to optimize for inference: The optimize_for_inference module takes a frozen binary GraphDef file as input and outputs the optimized Graph Def file which you can use for inference. And to get the frozen binary GraphDef file you need to use the module freeze_graph which takes a GraphDef proto, a SaverDef proto and a set of variables stored in a checkpoint file. The steps to achieve that is given below: 1. Saving tensorflow graph ``` # make and save a simple graph G = tf.Graph() with G.as_default(): x = tf.placeholder(dtype=tf.float32, shape=(), name=\"x\") a = tf.Variable(5.0, name=\"a\") y = tf.add(a, x, name=\"y\") saver = tf.train.Saver() with tf.Session(graph=G) as sess: sess.run(tf.global_variables_initializer()) out = sess.run(fetches=[y], feed_dict={x: 1.0}) # Save GraphDef tf.train.write_graph(sess.graph_def,'.','graph.pb') # Save checkpoint saver.save(sess=sess, save_path=\"test_model\") ``` 2. Freeze graph ``` python -m tensorflow.python.tools.freeze_graph --input_graph graph.pb --input_checkpoint test_model --output_graph graph_frozen.pb --output_node_names=y ``` 3. Optimize for inference ``` python -m tensorflow.python.tools.optimize_for_inference --input graph_frozen.pb --output graph_optimized.pb --input_names=x --output_names=y ``` 4. 
Using Optimized graph ``` with tf.gfile.GFile('graph_optimized.pb', 'rb') as f: graph_def_optimized = tf.GraphDef() graph_def_optimized.ParseFromString(f.read()) G = tf.Graph() with tf.Session(graph=G) as sess: y, = tf.import_graph_def(graph_def_optimized, return_elements=['y:0']) print('Operations in Optimized Graph:') print([op.name for op in G.get_operations()]) x = G.get_tensor_by_name('import\/x:0') out = sess.run(y, feed_dict={x: 1.0}) print(out) #Output #Operations in Optimized Graph: #['import\/x', 'import\/a', 'import\/y'] #6.0 ``` 5. For multiple output names If there are multiple output nodes, then specify: output_node_names = 'boxes, scores, classes' and import the graph by: ``` boxes,scores,classes, = tf.import_graph_def(graph_def_optimized, return_elements=['boxes:0', 'scores:0', 'classes:0']) ```", "best_answers_score":0.8, "library_name":"tensorflow", "question_url":"https:\/\/stackoverflow.com\/questions\/45382917\/how-to-optimize-for-inference-a-simple-saved-tensorflow-1-0-1-graph", "best_answers_votes":50, "question_length":3539, "response_length":2109 }, { "question":"Tensorflow CUDA - CUPTI error: CUPTI could not be loaded or symbol could not be found I use TensorFlow v1.14.0. I work on Windows 10. And here is how the relevant environment variables look in the PATH: ``` C:\\Program Files\\NVIDIA GPU Computing Toolkit\\CUDA\\v10.0\\bin C:\\Program Files\\NVIDIA GPU Computing Toolkit\\CUDA\\v10.0\\libnvvp C:\\Program Files (x86)\\NVIDIA Corporation\\PhysX\\Common C:\\Users\\sinthes\\AppData\\Local\\Programs\\Python\\Python37 C:\\Users\\sinthes\\AppData\\Local\\Programs\\Python\\Python37\\Scripts C:\\Program Files\\NVIDIA Corporation\\NVIDIA NvDLISR C:\\Program Files\\NVIDIA GPU Computing Toolkit\\CUDA\\v10.0\\cuda\\bin ``` Maybe also worth mentioning, just in case it is relevant: 
I find it a bit cumbersome to make updates on tensorflow in the conda environment so I just use Sublime Text right now. (I was using Anaconda (Spyder) previously but I uninstalled it from my computer.) Things seem to work fine except with some occasional strange warnings. But one consistent warning I get is the following whenever I run the fit function. ``` E tensorflow\/core\/platform\/default\/device_tracer.cc:68] CUPTI error: CUPTI could not be loaded or symbol could not be found. ``` And here is how I call the fit function: ``` history = model.fit(x=train_x, y=train_y, batch_size=BATCH_SIZE, epochs=110, verbose=2, callbacks=[tensorboard, checkpoint, reduce_lr_on_plateau], validation_data=(dev_x, dev_y), shuffle=True, class_weight=class_weight, steps_per_epoch=None, validation_steps=None) ``` I just wonder why I see the CUPTI error message at runtime? It is only printed out once. Is that something that I need to fix or is it something that can be ignored? This message does not tell me anything concrete that I could act on.", "response":"The NVIDIA\u00ae CUDA Profiling Tools Interface (CUPTI) is a dynamic library that enables the creation of profiling and tracing tools that target CUDA applications. CUPTI seems to have been added by the TensorFlow developers to allow profiling. You can simply ignore the error if you don't mind the exception, or adapt your environment path so the dynamically linked library (DLL) can be found during execution. Inside your CUDA installation directory, there is an extras\\CUPTI\\lib64 directory that contains the cupti64_101.dll that is trying to be loaded. Adding that directory to your path should resolve the issue, e.g., ``` SET PATH=C:\\Program Files\\NVIDIA GPU Computing Toolkit\\CUDA\\v10.1\\extras\\CUPTI\\lib64;%PATH% ``` N.B. 
in case you get an INSUFFICIENT_PRIVILEGES error next, try to run your program as administrator.", "best_answers_score":0.8, "library_name":"tensorflow", "question_url":"https:\/\/stackoverflow.com\/questions\/56860180\/tensorflow-cuda-cupti-error-cupti-could-not-be-loaded-or-symbol-could-not-be", "best_answers_votes":11, "question_length":1818, "response_length":822 }, { "question":"Neural network for square (x^2) approximation I made a simple module that should figure out the relationship between input and output numbers, in this case, x and x squared. The code in Python: ``` import numpy as np import tensorflow as tf # TensorFlow only log error messages. tf.logging.set_verbosity(tf.logging.ERROR) features = np.array([-10, -9, -8, -7, -6, -5, -4, -3, -2, -1, 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10], dtype = float) labels = np.array([100, 81, 64, 49, 36, 25, 16, 9, 4, 1, 0, 1, 4, 9, 16, 25, 36, 49, 64, 81, 100], dtype = float) model = tf.keras.Sequential([ tf.keras.layers.Dense(units = 1, input_shape = [1]) ]) model.compile(loss = \"mean_squared_error\", optimizer = tf.keras.optimizers.Adam(0.0001)) model.fit(features, labels, epochs = 50000, verbose = False) print(model.predict([4, 11, 20])) ``` I tried a different number of units, and adding more layers, and even using the relu activation function, but the results were always wrong. It works with other relationships like x and 2x. What is the problem here?", "response":"You are making two very basic mistakes: Your ultra-simple model (a single-layer network with a single unit) hardly qualifies as a neural network at all, let alone a \"deep learning\" one (as your question is tagged) Similarly, your dataset (just 20 samples) is also ultra-small It is certainly understood that neural networks need to be of some complexity if they are to solve problems even as \"simple\" as x*x; and where they really shine is when fed with large training datasets. 
The methodology when trying to solve such function approximations is not to just list the (few possible) inputs and then feed them to the model, along with the desired outputs; remember, NNs learn through examples, and not through symbolic reasoning. And the more examples the better. What we usually do in similar cases is to generate a large number of examples, which we subsequently feed to the model for training. Having said that, here is a rather simple demonstration of a 3-layer neural network in Keras for approximating the function x*x, using as input 10,000 random numbers generated in [-50, 50]: ``` import numpy as np import keras from keras.models import Sequential from keras.layers import Dense from keras.optimizers import Adam from keras import regularizers import matplotlib.pyplot as plt model = Sequential() model.add(Dense(8, activation='relu', kernel_regularizer=regularizers.l2(0.001), input_shape = (1,))) model.add(Dense(8, activation='relu', kernel_regularizer=regularizers.l2(0.001))) model.add(Dense(1)) model.compile(optimizer=Adam(),loss='mse') # generate 10,000 random numbers in [-50, 50], along with their squares x = np.random.random((10000,1))*100-50 y = x**2 # fit the model, keeping 2,000 samples as validation set hist = model.fit(x,y,validation_split=0.2, epochs=15000, batch_size=256) # check some predictions: print(model.predict([4, -4, 11, 20, 8, -5])) # result: [[ 16.633354] [ 15.031291] [121.26833 ] [397.78638 ] [ 65.70035 ] [ 27.040245]] ``` Well, not that bad! Remember that NNs are function approximators: we should expect them neither to exactly reproduce the functional relationship nor to \"know\" that the results for 4 and -4 should be identical. 
Let's generate some new random data in [-50,50] (remember, for all practical purposes, these are unseen data for the model) and plot them, along with the original ones, to get a more general picture: ``` plt.figure(figsize=(14,5)) plt.subplot(1,2,1) p = np.random.random((1000,1))*100-50 # new random data in [-50, 50] plt.plot(p,model.predict(p), '.') plt.xlabel('x') plt.ylabel('prediction') plt.title('Predictions on NEW data in [-50,50]') plt.subplot(1,2,2) plt.xlabel('x') plt.ylabel('y') plt.plot(x,y,'.') plt.title('Original data') ``` Result: Well, it arguably does look like a good approximation indeed... You could also take a look at this thread for a sine approximation. The last thing to keep in mind is that, although we did get a decent approximation even with our relatively simple model, what we should not expect is extrapolation, i.e. good performance outside [-50, 50]; for details, see my answer in Is deep learning bad at fitting simple non linear functions outside training scope?", "best_answers_score":0.8, "library_name":"tensorflow", "question_url":"https:\/\/stackoverflow.com\/questions\/55170460\/neural-network-for-square-x2-approximation", "best_answers_votes":30, "question_length":1036, "response_length":3178 }, { "question":"Deep neural network skip connection implemented as summation vs concatenation? [closed] Closed. This question needs to be more focused. It is not currently accepting answers. Want to improve this question? Guide the asker to update the question so it focuses on a single, specific problem. Narrowing the question will help others answer the question concisely. You may edit the question if you feel you can improve it yourself. If edited, the question will be reviewed and might be reopened. Closed 7 years ago. 
In deep neural networks, we can implement skip connections to help: solve the problem of vanishing gradients and train faster; let the network learn a combination of low-level and high-level features; recover information lost during downsampling, like max pooling. https:\/\/medium.com\/@mikeliao\/deep-layer-aggregation-combining-layers-in-nn-architectures-2744d29cab8 However, I read some source code; some implemented skip connections as concatenation, some as summation. So my question is: what are the benefits of each of these implementations?", "response":"Basically, the difference lies in the way the final layer is influenced by middle features. Standard architectures with skip-connections using element-wise summation (e.g. ResNet) can be viewed as an iterative estimation procedure to some extent (see for instance this work), where the features are refined through the various layers of the network. The main benefits of this choice are that it works and is a compact solution (it keeps the number of features fixed across a block). Architectures with concatenated skip-connections (e.g. DenseNet) allow the subsequent layers to re-use middle representations, maintaining more information, which can lead to better performance. Apart from the feature re-use, another consequence is the implicit deep supervision (as in this work), which allows better gradient propagation across the network, especially for deep networks (in fact it has been used for the Inception architecture).
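To make the two wiring choices concrete, here is a minimal, framework-free sketch (plain NumPy, not code from either paper) of how each kind of skip connection affects the channel dimension:

```python
import numpy as np

# x: the features entering a skip block; f: the output of the block's layers
x = np.ones((1, 8, 8, 64))
f = np.ones((1, 8, 8, 64))

summed = x + f                            # ResNet-style: channel count stays at 64
concat = np.concatenate([x, f], axis=-1)  # DenseNet-style: channels accumulate to 128

print(summed.shape)  # (1, 8, 8, 64)
print(concat.shape)  # (1, 8, 8, 128)
```

The growing channel count under concatenation is exactly why parameter counts can explode when many such blocks are stacked.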
Obviously, if not properly designed, concatenating features can lead to an exponential growth of the parameters (this explains, in part, the hierarchical aggregation used in the work you pointed out) and, depending on the problem, using a lot of information could lead to overfitting.", "best_answers_score":0.8, "library_name":"tensorflow", "question_url":"https:\/\/stackoverflow.com\/questions\/49164230\/deep-neural-network-skip-connection-implemented-as-summation-vs-concatenation", "best_answers_votes":41, "question_length":1063, "response_length":1229 }, { "question":"Given a tensor flow model graph, how to find the input node and output node names I use custom model for classification in Tensor flow Camera Demo. I generated a .pb file (serialized protobuf file) and I could display the huge graph it contains. To convert this graph to a optimized graph, as given in [https:\/\/www.oreilly.com\/learning\/tensorflow-on-android], the following procedure could be used: ``` $ bazel-bin\/tensorflow\/python\/tools\/optimize_for_inference \\ --input=tf_files\/retrained_graph.pb \\ --output=tensorflow\/examples\/android\/assets\/retrained_graph.pb --input_names=Mul \\ --output_names=final_result ``` Here how to find the input_names and output_names from the graph display. When I dont use proper names, I get device crash: ``` E\/TensorFlowInferenceInterface(16821): Failed to run TensorFlow inference with inputs:[AvgPool], outputs:[predictions] E\/AndroidRuntime(16821): FATAL EXCEPTION: inference E\/AndroidRuntime(16821): java.lang.IllegalArgumentException: Incompatible shapes: [1,224,224,3] vs. 
[32,1,1,2048] E\/AndroidRuntime(16821): [[Node: dropout\/dropout\/mul = Mul[T=DT_FLOAT, _device=\"\/job:localhost\/replica:0\/task:0\/cpu:0\"](dropout\/dropout\/div, dropout\/dropout\/Floor)]] ```", "response":"Try this: run python ``` >>> import tensorflow as tf >>> gf = tf.GraphDef() >>> gf.ParseFromString(open('\/your\/path\/to\/graphname.pb','rb').read()) ``` and then ``` >>> [n.name + '=>' + n.op for n in gf.node if n.op in ( 'Softmax','Placeholder')] ``` Then, you can get a result similar to this: ``` ['Mul=>Placeholder', 'final_result=>Softmax'] ``` But I'm not sure it's a problem of node names, judging from the error messages. I guess you provided wrong arguments when loading the graph file, or something is wrong with your generated graph file. Check this part: ``` E\/AndroidRuntime(16821): java.lang.IllegalArgumentException: Incompatible shapes: [1,224,224,3] vs. [32,1,1,2048] ``` UPDATE: Sorry, if you're using a (re)trained graph, then try this: ``` [n.name + '=>' + n.op for n in gf.node if n.op in ( 'Softmax','Mul')] ``` It seems that a (re)trained graph saves input\/output op names as \"Mul\" and \"Softmax\", while an optimized and\/or quantized graph saves them as \"Placeholder\" and \"Softmax\". BTW, using a retrained graph in a mobile environment is not recommended according to Pete Warden's post: https:\/\/petewarden.com\/2016\/09\/27\/tensorflow-for-mobile-poets\/ .
It's better to use quantized or memmapped graph due to performance and file size issue, I couldn't find out how to load memmapped graph in android though...:( (no problem loading optimized \/ quantized graph in android)", "best_answers_score":0.8, "library_name":"tensorflow", "question_url":"https:\/\/stackoverflow.com\/questions\/43517959\/given-a-tensor-flow-model-graph-how-to-find-the-input-node-and-output-node-name", "best_answers_votes":23, "question_length":1199, "response_length":1372 }, { "question":"No variable to save error in Tensorflow I am trying to save the model and then reuse it for classifying my images but unfortunately i am getting errors in restoring the model that i have saved. The code in which model has been created : ``` # Deep Learning # ============= # # Assignment 4 # ------------ # In[25]: # These are all the modules we'll be using later. Make sure you can import them # before proceeding further. from __future__ import print_function import numpy as np import tensorflow as tf from six.moves import cPickle as pickle from six.moves import range # In[37]: pickle_file = 'notMNIST.pickle' with open(pickle_file, 'rb') as f: save = pickle.load(f) train_dataset = save['train_dataset'] train_labels = save['train_labels'] valid_dataset = save['valid_dataset'] valid_labels = save['valid_labels'] test_dataset = save['test_dataset'] test_labels = save['test_labels'] del save # hint to help gc free up memory print('Training set', train_dataset.shape, train_labels.shape) print('Validation set', valid_dataset.shape, valid_labels.shape) print('Test set', test_dataset.shape, test_labels.shape) print(test_labels) # Reformat into a TensorFlow-friendly shape: # - convolutions need the image data formatted as a cube (width by height by #channels) # - labels as float 1-hot encodings. 
# In[38]: image_size = 28 num_labels = 10 num_channels = 1 # grayscale import numpy as np def reformat(dataset, labels): dataset = dataset.reshape( (-1, image_size, image_size, num_channels)).astype(np.float32) #print(np.arange(num_labels)) labels = (np.arange(num_labels) == labels[:,None]).astype(np.float32) #print(labels[0,:]) print(labels[0]) return dataset, labels train_dataset, train_labels = reformat(train_dataset, train_labels) valid_dataset, valid_labels = reformat(valid_dataset, valid_labels) test_dataset, test_labels = reformat(test_dataset, test_labels) print('Training set', train_dataset.shape, train_labels.shape) print('Validation set', valid_dataset.shape, valid_labels.shape) print('Test set', test_dataset.shape, test_labels.shape) #print(labels[0]) # In[39]: def accuracy(predictions, labels): return (100.0 * np.sum(np.argmax(predictions, 1) == np.argmax(labels, 1)) \/ predictions.shape[0]) # Let's build a small network with two convolutional layers, followed by one fully connected layer. Convolutional networks are more expensive computationally, so we'll limit its depth and number of fully connected nodes. # In[47]: batch_size = 16 patch_size = 5 depth = 16 num_hidden = 64 graph = tf.Graph() with graph.as_default(): # Input data. tf_train_dataset = tf.placeholder( tf.float32, shape=(batch_size, image_size, image_size, num_channels)) tf_train_labels = tf.placeholder(tf.float32, shape=(batch_size, num_labels)) tf_valid_dataset = tf.constant(valid_dataset) tf_test_dataset = tf.constant(test_dataset) # Variables. 
layer1_weights = tf.Variable(tf.truncated_normal( [patch_size, patch_size, num_channels, depth], stddev=0.1),name=\"layer1_weights\") layer1_biases = tf.Variable(tf.zeros([depth]),name = \"layer1_biases\") layer2_weights = tf.Variable(tf.truncated_normal( [patch_size, patch_size, depth, depth], stddev=0.1),name = \"layer2_weights\") layer2_biases = tf.Variable(tf.constant(1.0, shape=[depth]),name =\"layer2_biases\") layer3_weights = tf.Variable(tf.truncated_normal( [image_size \/\/ 4 * image_size \/\/ 4 * depth, num_hidden], stddev=0.1),name=\"layer3_biases\") layer3_biases = tf.Variable(tf.constant(1.0, shape=[num_hidden]),name = \"layer3_biases\") layer4_weights = tf.Variable(tf.truncated_normal( [num_hidden, num_labels], stddev=0.1),name = \"layer4_weights\") layer4_biases = tf.Variable(tf.constant(1.0, shape=[num_labels]),name = \"layer4_biases\") # Model. def model(data): conv = tf.nn.conv2d(data, layer1_weights, [1, 2, 2, 1], padding='SAME') hidden = tf.nn.relu(conv + layer1_biases) conv = tf.nn.conv2d(hidden, layer2_weights, [1, 2, 2, 1], padding='SAME') hidden = tf.nn.relu(conv + layer2_biases) shape = hidden.get_shape().as_list() reshape = tf.reshape(hidden, [shape[0], shape[1] * shape[2] * shape[3]]) hidden = tf.nn.relu(tf.matmul(reshape, layer3_weights) + layer3_biases) return tf.matmul(hidden, layer4_weights) + layer4_biases # Training computation. logits = model(tf_train_dataset) loss = tf.reduce_mean( tf.nn.softmax_cross_entropy_with_logits(logits, tf_train_labels)) # Optimizer. optimizer = tf.train.GradientDescentOptimizer(0.05).minimize(loss) # Predictions for the training, validation, and test data. 
train_prediction = tf.nn.softmax(logits) valid_prediction = tf.nn.softmax(model(tf_valid_dataset)) test_prediction = tf.nn.softmax(model(tf_test_dataset)) # In[48]: num_steps = 1001 #saver = tf.train.Saver() with tf.Session(graph=graph) as session: tf.initialize_all_variables().run() print('Initialized') for step in range(num_steps): offset = (step * batch_size) % (train_labels.shape[0] - batch_size) batch_data = train_dataset[offset:(offset + batch_size), :, :, :] batch_labels = train_labels[offset:(offset + batch_size), :] feed_dict = {tf_train_dataset : batch_data, tf_train_labels : batch_labels} _, l, predictions = session.run( [optimizer, loss, train_prediction], feed_dict=feed_dict) if (step % 50 == 0): print('Minibatch loss at step %d: %f' % (step, l)) print('Minibatch accuracy: %.1f%%' % accuracy(predictions, batch_labels)) print('Validation accuracy: %.1f%%' % accuracy( valid_prediction.eval(), valid_labels)) print('Test accuracy: %.1f%%' % accuracy(test_prediction.eval(), test_labels)) save_path = tf.train.Saver().save(session, \"\/tmp\/model.ckpt\") print(\"Model saved in file: %s\" % save_path) ``` Everything works fine and the model is stored in the respective folder . I have created one more python file where i have tried restoring the model but getting an error there ``` # In[1]: from __future__ import print_function import numpy as np import tensorflow as tf from six.moves import cPickle as pickle from six.moves import range # In[3]: image_size = 28 num_labels = 10 num_channels = 1 # grayscale import numpy as np # In[4]: def accuracy(predictions, labels): return (100.0 * np.sum(np.argmax(predictions, 1) == np.argmax(labels, 1)) \/ predictions.shape[0]) # In[8]: batch_size = 16 patch_size = 5 depth = 16 num_hidden = 64 graph = tf.Graph() with graph.as_default(): '''# Input data. 
tf_train_dataset = tf.placeholder( tf.float32, shape=(batch_size, image_size, image_size, num_channels)) tf_train_labels = tf.placeholder(tf.float32, shape=(batch_size, num_labels)) tf_valid_dataset = tf.constant(valid_dataset) tf_test_dataset = tf.constant(test_dataset)''' # Variables. layer1_weights = tf.Variable(tf.truncated_normal( [patch_size, patch_size, num_channels, depth], stddev=0.1),name=\"layer1_weights\") layer1_biases = tf.Variable(tf.zeros([depth]),name = \"layer1_biases\") layer2_weights = tf.Variable(tf.truncated_normal( [patch_size, patch_size, depth, depth], stddev=0.1),name = \"layer2_weights\") layer2_biases = tf.Variable(tf.constant(1.0, shape=[depth]),name =\"layer2_biases\") layer3_weights = tf.Variable(tf.truncated_normal( [image_size \/\/ 4 * image_size \/\/ 4 * depth, num_hidden], stddev=0.1),name=\"layer3_biases\") layer3_biases = tf.Variable(tf.constant(1.0, shape=[num_hidden]),name = \"layer3_biases\") layer4_weights = tf.Variable(tf.truncated_normal( [num_hidden, num_labels], stddev=0.1),name = \"layer4_weights\") layer4_biases = tf.Variable(tf.constant(1.0, shape=[num_labels]),name = \"layer4_biases\") # Model. def model(data): conv = tf.nn.conv2d(data, layer1_weights, [1, 2, 2, 1], padding='SAME') hidden = tf.nn.relu(conv + layer1_biases) conv = tf.nn.conv2d(hidden, layer2_weights, [1, 2, 2, 1], padding='SAME') hidden = tf.nn.relu(conv + layer2_biases) shape = hidden.get_shape().as_list() reshape = tf.reshape(hidden, [shape[0], shape[1] * shape[2] * shape[3]]) hidden = tf.nn.relu(tf.matmul(reshape, layer3_weights) + layer3_biases) return tf.matmul(hidden, layer4_weights) + layer4_biases '''# Training computation. logits = model(tf_train_dataset) loss = tf.reduce_mean( tf.nn.softmax_cross_entropy_with_logits(logits, tf_train_labels)) # Optimizer. optimizer = tf.train.GradientDescentOptimizer(0.05).minimize(loss)''' # Predictions for the training, validation, and test data. 
#train_prediction = tf.nn.softmax(logits) #valid_prediction = tf.nn.softmax(model(tf_valid_dataset)) #test_prediction = tf.nn.softmax(model(tf_test_dataset)) # In[17]: #saver = tf.train.Saver() with tf.Session() as sess: # Restore variables from disk. tf.train.Saver().restore(sess, \"\/tmp\/model.ckpt\") print(\"Model restored.\") # Do some work with the model ``` error that i am getting is : No variables to save Any help would be appreciated", "response":"The error here is quite subtle. In In[8] you create a tf.Graph called graph and set it as default for the with graph.as_default(): block. This means that all of the variables are created in graph, and if you print graph.all_variables() you should see a list of your variables. However, you exit the with block before creating (i) the tf.Session, and (ii) the tf.train.Saver. This means that the session and saver are created in a different graph (the global default tf.Graph that is used when you don't explicitly create one and set it as default), which doesn't contain any variables\u2014or any nodes at all. There are at least two solutions: As Yaroslav suggests, you can write your program without using the with graph.as_default(): block, which avoids the confusion with multiple graphs. However, this can lead to name collisions between different cells in your IPython notebook, which is awkward when using the tf.train.Saver, since it uses the name property of a tf.Variable as the key in the checkpoint file. You can create the saver inside the with graph.as_default(): block, and create the tf.Session with an explicit graph, as follows: ``` with graph.as_default(): # [Variable and model creation goes here.] saver = tf.train.Saver() # Gets all variables in `graph`. with tf.Session(graph=graph) as sess: saver.restore(sess) # Do some work with the model.... 
``` Alternatively, you can create the tf.Session inside the with graph.as_default(): block, in which case it will use graph for all of its operations.", "best_answers_score":0.8, "library_name":"tensorflow", "question_url":"https:\/\/stackoverflow.com\/questions\/36281129\/no-variable-to-save-error-in-tensorflow", "best_answers_votes":39, "question_length":8655, "response_length":1514 }, { "question":"why tensorflow just outputs killed When I run my tensorflow app, it just outputs \"killed\". How do I debug this? source code ``` root@8e4a3a65184e:~\/tensorflow# python sample_cnn.py INFO:tensorflow:Using default config. INFO:tensorflow:Using config: {'_save_checkpoints_secs': 600, '_session_config': None, '_keep_checkpoint_max': 5, '_tf_random_seed': 1, '_keep_checkpoint_every_n_hours': 10000, '_save_checkpoints_steps': None, '_model_dir': 'data\/convnet_model', '_save_summary_steps': 100} INFO:tensorflow:Create CheckpointSaverHook. 2017-08-17 12:56:53.160481: W tensorflow\/core\/platform\/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use SSE4.1 instructions, but these are available on your machine and could speed up CPU computations. 2017-08-17 12:56:53.160536: W tensorflow\/core\/platform\/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use SSE4.2 instructions, but these are available on your machine and could speed up CPU computations. 2017-08-17 12:56:53.160545: W tensorflow\/core\/platform\/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use AVX instructions, but these are available on your machine and could speed up CPU computations. 2017-08-17 12:56:53.160550: W tensorflow\/core\/platform\/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use AVX2 instructions, but these are available on your machine and could speed up CPU computations. 
2017-08-17 12:56:53.160555: W tensorflow\/core\/platform\/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use FMA instructions, but these are available on your machine and could speed up CPU computations. Killed ```", "response":"When I run your code I get the same behavior, after typing dmesg you'll see a trace like, which confirms what gdelab was hinting at: ``` [38607.234089] python3 invoked oom-killer: gfp_mask=0x24280ca(GFP_HIGHUSER_MOVABLE|__GFP_ZERO), nodemask=0, order=0, oom_score_adj=0 [38607.234090] python3 cpuset=\/ mems_allowed=0 [38607.234094] CPU: 3 PID: 1420 Comm: python3 Tainted: G O 4.9.0-3-amd64 #1 Debian 4.9.30-2+deb9u2 [38607.234094] Hardware name: Dell Inc. XPS 15 9560\/05FFDN, BIOS 1.2.4 03\/29\/2017 [38607.234096] 0000000000000000 ffffffffa9f28414 ffffa50090317cf8 ffff940effa5f040 [38607.234097] ffffffffa9dfe050 0000000000000000 0000000000000000 0101ffffa9d82dd0 [38607.234098] e09c7db7f06d0ac2 00000000ffffffff 0000000000000000 0000000000000000 [38607.234100] Call Trace: [38607.234104] [] ? dump_stack+0x5c\/0x78 [38607.234106] [] ? dump_header+0x78\/0x1fd [38607.234108] [] ? oom_kill_process+0x21a\/0x3e0 [38607.234109] [] ? oom_badness+0xed\/0x170 [38607.234110] [] ? out_of_memory+0x111\/0x470 [38607.234111] [] ? __alloc_pages_slowpath+0xb7f\/0xbc0 [38607.234112] [] ? __alloc_pages_nodemask+0x1fe\/0x260 [38607.234113] [] ? alloc_pages_vma+0xae\/0x260 [38607.234115] [] ? handle_mm_fault+0x111a\/0x1350 [38607.234117] [] ? __do_page_fault+0x2a4\/0x510 [38607.234118] [] ? page_fault+0x28\/0x30 ... [38607.234158] [ pid ] uid tgid total_vm rss nr_ptes nr_pmds swapents oom_score_adj name ... 
[38607.234332] [ 1396] 1000 1396 4810969 3464995 6959 21 0 0 python3 [38607.234332] Out of memory: Kill process 1396 (python3) score 568 or sacrifice child [38607.234357] Killed process 1396 (python3) total-vm:19243876kB, anon-rss:13859980kB, file-rss:0kB, shmem-rss:0kB [38607.720757] oom_reaper: reaped process 1396 (python3), now anon-rss:0kB, file-rss:0kB, shmem-rss:0kB ``` Which basically means python was starting too consume too much memory and the kernel decided to kill the process. If you add some prints in your code you'll see that mnist_classifier.train() is the function which is active. However some dumb tests (as removing the logging and lowering the steps, did not seem to help here).", "best_answers_score":0.8, "library_name":"tensorflow", "question_url":"https:\/\/stackoverflow.com\/questions\/45735858\/why-tensorflow-just-outputs-killed", "best_answers_votes":21, "question_length":1659, "response_length":2092 }, { "question":"Confused about conv2d_transpose I'm getting this error message when using conv2d_transpose: ``` W tensorflow\/core\/common_runtime\/executor.cc:1102] 0x7fc81f0d6250 Compute status: Invalid argument: Conv2DBackpropInput: Number of rows of out_backprop doesn't match computed: actual = 32, computed = 4 [[Node: generator\/g_h1\/conv2d_transpose = Conv2DBackpropInput[T=DT_FLOAT, padding=\"SAME\", strides=[1, 2, 2, 1], use_cudnn_on_gpu=true, _device=\"\/job:localhost\/replica:0\/task:0\/cpu:0\"](generator\/g_h1\/conv2d_transpose\/output_shape, generator\/g_h1\/w\/read, _recv_l_0)]] ``` However, it occurs after the graph is built while compiling the loss function (Adam). Any ideas on what would cause this? I suspect it's related to the input dimensions but I'm not sure exactly why. 
Full error: https:\/\/gist.github.com\/jimfleming\/75d88e888044615dd6e3 Relevant code: ``` # l shape: [batch_size, 32, 32, 4] output_shape = [self.batch_size, 8, 8, 128] filter_shape = [7, 7, 128, l.get_shape()[-1]] strides = [1, 2, 2, 1] with tf.variable_scope(\"g_h1\"): w = tf.get_variable('w', filter_shape, initializer=tf.random_normal_initializer(stddev=0.02)) h1 = tf.nn.conv2d_transpose(l, w, output_shape=output_shape, strides=strides, padding='SAME') h1 = tf.nn.relu(h1) output_shape = [self.batch_size, 16, 16, 128] filter_shape = [7, 7, 128, h1.get_shape()[-1]] strides = [1, 2, 2, 1] with tf.variable_scope(\"g_h2\"): w = tf.get_variable('w', filter_shape, initializer=tf.random_normal_initializer(stddev=0.02)) h2 = tf.nn.conv2d_transpose(h1, w,output_shape=output_shape, strides=strides, padding='SAME') h2 = tf.nn.relu(h2) output_shape = [self.batch_size, 32, 32, 3] filter_shape = [5, 5, 3, h2.get_shape()[-1]] strides = [1, 2, 2, 1] with tf.variable_scope(\"g_h3\"): w = tf.get_variable('w', filter_shape, initializer=tf.random_normal_initializer(stddev=0.02)) h3 = tf.nn.conv2d_transpose(h2, w,output_shape=output_shape, strides=strides, padding='SAME') h3 = tf.nn.tanh(h3) ```", "response":"Thanks for the question! You're exactly right---the problem is that the input and output dimensions being passed to tf.nn.conv2d_transpose don't agree. (The error may be detected when computing gradients, but the gradient computation isn't the problem.) Let's look at just the first part of your code, and simplify it a little bit: ``` sess = tf.Session() batch_size = 3 output_shape = [batch_size, 8, 8, 128] strides = [1, 2, 2, 1] l = tf.constant(0.1, shape=[batch_size, 32, 32, 4]) w = tf.constant(0.1, shape=[7, 7, 128, 4]) h1 = tf.nn.conv2d_transpose(l, w, output_shape=output_shape, strides=strides, padding='SAME') print sess.run(h1) ``` I replaced the variables with constants --- it's easier to see what's going on. 
If you try to run this code, you get a similar error: ``` InvalidArgumentError: Conv2DCustomBackpropInput: Size of out_backprop doesn't match computed: actual = 32, computed = 4 [[Node: conv2d_transpose_6 = Conv2DBackpropInput[T=DT_FLOAT, data_format=\"NHWC\", padding=\"SAME\", strides=[1, 2, 2, 1], use_cudnn_on_gpu=true, _device=\"\/job:localhost\/replica:0\/task:0\/cpu:0\"](conv2d_transpose_6\/output_shape, Const_25, Const_24)]] ``` Now, the error is a little misleading --- it talks about the 'out_backprop' argument to 'Conv2DCustomBackpropInput'. The key is that tf.nn.conv2d_transpose is actually just the gradient of tf.nn.conv2d, so Tensorflow uses the same code internally (Conv2DCustomBackpropInput) to compute the gradient of tf.nn.conv2d and to compute tf.nn.conv2d_transpose. The error means that the 'output_shape' you requested is not possible, given the shapes of 'l' and 'w'. Since tf.nn.conv2d_transpose is the backward (gradient) counterpart of tf.nn.conv2d, one way to see what the correct shapes should be is to use the corresponding forward operation: ``` output = tf.constant(0.1, shape=output_shape) expected_l = tf.nn.conv2d(output, w, strides=strides, padding='SAME') print expected_l.get_shape() # Prints (3, 4, 4, 4) ``` That is, in the forward direction, if you provided a tensor of shape 'output_shape', you would get out a tensor of shape (3, 4, 4, 4). So one way to fix the problem is to change the shape of 'l' to (3, 4, 4, 4); if you change the code above to: ``` l = tf.constant(0.1, shape=[batch_size, 4, 4, 4]) ``` everything works fine. In general, try using tf.nn.conv2d to get a feel for what the relationship between the tensor shapes is. Since tf.nn.conv2d_transpose is its backward counterpart, it has the same relationship between input, output and filter shapes (but with the roles of the input and output reversed.) 
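As a rough sanity check (an illustrative helper, not part of TensorFlow), the shape relationship above reduces to one line of arithmetic for SAME padding:

```python
import math

# With SAME padding, a stride-s convolution maps spatial size n to ceil(n / s).
# conv2d_transpose reverses that mapping, so the input it accepts for a
# requested output size is ceil(output_size / s).
def same_conv_output_size(n, stride):
    return math.ceil(n / stride)

# The code above requests an 8x8 output with stride 2, so l must be 4x4:
print(same_conv_output_size(8, 2))   # 4
# The original l was 32x32, which a stride-2 SAME conv maps to 16x16, not 8x8:
print(same_conv_output_size(32, 2))  # 16
```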
Hope that helps!", "best_answers_score":0.8, "library_name":"tensorflow", "question_url":"https:\/\/stackoverflow.com\/questions\/35488717\/confused-about-conv2d-transpose", "best_answers_votes":38, "question_length":1953, "response_length":2596 }, { "question":"expected ndim=3, found ndim=2 I'm new with Keras and I'm trying to implement a Sequence to Sequence LSTM. Particularly, I have a dataset with 9 features and I want to predict 5 continuous values. I split the training and the test set and their shape are respectively: ``` X TRAIN (59010, 9) X TEST (25291, 9) Y TRAIN (59010, 5) Y TEST (25291, 5) ``` The LSTM is extremely simple at the moment: ``` model = Sequential() model.add(LSTM(100, input_shape=(9,), return_sequences=True)) model.compile(loss=\"mean_absolute_error\", optimizer=\"adam\", metrics= ['accuracy']) history = model.fit(X_train,y_train,epochs=100, validation_data=(X_test,y_test)) ``` But I have the following error: ValueError: Input 0 is incompatible with layer lstm_1: expected ndim=3, found ndim=2 Can anyone help me?", "response":"The LSTM layer expects inputs of shape (batch_size, timesteps, input_dim). In Keras you pass (timesteps, input_dim) as the input_shape argument, but you are setting input_shape=(9,), which does not include the timesteps dimension. The problem can be solved by adding an extra dimension for time; e.g. an extra dimension of size 1 could be a simple solution. For this you have to reshape the input dataset (X train) and the targets (Y). But this might be problematic, because the time resolution is 1 and you are feeding length-one sequences. With length-one sequences as input, using an LSTM does not seem the right option.
``` x_train = x_train.reshape(-1, 1, 9) x_test = x_test.reshape(-1, 1, 9) y_train = y_train.reshape(-1, 1, 5) y_test = y_test.reshape(-1, 1, 5) model = Sequential() model.add(LSTM(100, input_shape=(1, 9), return_sequences=True)) model.add(LSTM(5, input_shape=(1, 9), return_sequences=True)) model.compile(loss=\"mean_absolute_error\", optimizer=\"adam\", metrics= ['accuracy']) history = model.fit(x_train,y_train,epochs=100, validation_data=(x_test,y_test)) ```", "best_answers_score":0.8, "library_name":"tensorflow", "question_url":"https:\/\/stackoverflow.com\/questions\/54416322\/expected-ndim-3-found-ndim-2", "best_answers_votes":32, "question_length":785, "response_length":1100 }, { "question":"How to get a tensorflow op by name? You can get a tensor by name with tf.get_default_graph().get_tensor_by_name(\"tensor_name:0\") But can you get an operation, such as Optimizer.minimize, or an enqueue operation on a queue? In my first model I returned all tensors and ops I would need from a build_model function. But the list of tensors got ugly. In later models I tossed all tensors and ops in a dictionary for easier access. This time around I thought I'd just look up tensors by name as I needed them, but I don't know how to do that with ops. Or is there a better way to do this? I find various tensors and ops are needed all over the place. Training, inference code, test cases, hence the desire for a nice standard way of accessing the various parts of the graph without passing variables all over the place.", "response":"You can use the tf.Graph.get_operation_by_name() method to get a tf.Operation by name.
For example, to get an operation called \"enqueue\" from the default graph: ``` op = tf.get_default_graph().get_operation_by_name(\"enqueue\") ```", "best_answers_score":0.8, "library_name":"tensorflow", "question_url":"https:\/\/stackoverflow.com\/questions\/42685994\/how-to-get-a-tensorflow-op-by-name", "best_answers_votes":31, "question_length":815, "response_length":229 }, { "question":"Use GPU with opencv-python I'm trying to use opencv-python with GPU on windows 10. I installed opencv-contrib-python using pip and it's v4.4.0.42, I also have Cuda on my computer and in path. Anyway, here is a (simple) code that I'm trying to compile: ``` import cvlib as cv from cvlib.object_detection import draw_bbox bbox, label, conf = cv.detect_common_objects(img,confidence=0.5,model='yolov3-worker',enable_gpu=True) output_image = draw_bbox(img, bbox, label, conf) ``` First, here is the line that tell me that tf is ok with cuda: ``` 2020-08-26 5:51:55.718555: I tensorflow\/stream_executor\/platform\/default\/dso_loader.cc:48] Successfully opened dynamic library cudart64_101.dll ``` but when I try to use my GPU to analyse the image, here is what happen: ``` [ WARN:0] global C:\\Users\\appveyor\\AppData\\Local\\Temp\\1\\pip-req-build-j8nxabm_\\opencv\\modules\\dnn\\src\\dnn.cpp (1429) cv::dnn::dnn4_v20200609::Net::Impl::setUpNet DNN module was not built with CUDA backend; switching to CPU ``` Is there a way to solve this without install opencv using cmake? It's a mess on windows...", "response":"The problem here is that version of opencv distributed with your system (Windows in this case) was not compiled with Cuda support. Therefore, you cannot use any cuda related function with this build. If you want to have an opencv with cuda support, you will have to either compile it yourself (which may be tedious on windows) or find a prebuilt one somewhere. 
In case you want to go for the 1st solution, here is a link that may help you with the process: https:\/\/programming.vip\/docs\/compile-opencv-with-cuda-support-on-windows-10.html. Keep in mind that this will require you to install a bunch of SDK in the process.", "best_answers_score":0.8, "library_name":"tensorflow", "question_url":"https:\/\/stackoverflow.com\/questions\/63601580\/use-gpu-with-opencv-python", "best_answers_votes":10, "question_length":1083, "response_length":620 }, { "question":"Tensorflow 2.0 dataset and dataloader I am a pytorch user, and I am used to the data.dataset and data.dataloader api in pytorch. I am trying to build a same model with tensorflow 2.0, and I wonder whether there is an api that works similarly with these api in pytorch. If there is no such api, can any of you tell me how people usually do to implement the data loading part in tensorflow ? I've used tensorflow 1, but never had an experience with dataset api. I've hard coded before. I hope there is something like overriding getitem with only index as an input. Thanks much in advance.", "response":"When using the tf.data API, you will usually also make use of the map function. In PyTorch, your __getItem__ call basically fetches an element from your data structure given in __init__ and transforms it if necessary. In TF2.0, you do the same by initializing a Dataset using one of the Dataset.from_... functions (see from_generator, from_tensor_slices, from_tensors); this is essentially the __init__ part of a PyTorch Dataset. Then, you can call map to do the element-wise manipulations you would have in __getItem__. Tensorflow datasets are pretty much fancy iterators, so by design you don't access their elements using indices, but rather by traversing them. 
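If it helps to see the analogy without installing TensorFlow, here is a plain-Python sketch (names are illustrative) of the pattern described above: build an iterable, then chain an element-wise transformation over it, much as Dataset.from_tensor_slices(...).map(...) does:

```python
# records stands in for the data you would pass to Dataset.from_tensor_slices;
# transform stands in for the function you would hand to Dataset.map.
records = [(1, 'a'), (2, 'b'), (3, 'c')]

def transform(item):
    value, label = item
    return (value * 10, label)   # the per-element work __getitem__ would do

pipeline = map(transform, records)  # applied lazily, like a tf.data pipeline
print(list(pipeline))  # [(10, 'a'), (20, 'b'), (30, 'c')]
```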
The guide on tf.data is very useful and provides a wide variety of examples.", "best_answers_score":0.8, "library_name":"tensorflow", "question_url":"https:\/\/stackoverflow.com\/questions\/58505880\/tensorflow-2-0-dataset-and-dataloader", "best_answers_votes":21, "question_length":586, "response_length":741 }, { "question":"Tensorflow: ImportError: libcudnn.so.7: cannot open shared object file: No such file or directory I have recently installed tensorflow-gpu using pip. But when I am importing it it is giving the following error: ``` ImportError: libcudnn.so.7: cannot open shared object file: No such file or directory ``` I have gone through all the answers of stackoverflow related to this issue but none of them worked for me. libcudnn.so.7 is present in both the following directories \/usr\/local\/cuda\/lib64 and \/usr\/local\/cuda-9.0\/lib64 . Also, I have added the following path in my .bashrc file: ``` export PATH=\/usr\/local\/cuda-9.0\/bin${PATH:+:${PATH}} export LD_LIBRARY_PATH=\/usr\/local\/cuda-9.0\/lib64\\${LD_LIBRARY_PATH:+:${LD_LIBRARY_PATH}} export PATH=\/usr\/local\/cuda\/bin${PATH:+:${PATH}} export LD_LIBRARY_PATH=\/usr\/local\/cuda\/lib64\\${LD_LIBRARY_PATH:+:${LD_LIBRARY_PATH}} ``` Please help me in resolving this", "response":"You are setting LD_LIBRARY_PATH in the wrong way, I would recommend to do it this way (which is kind of the standard): ``` export LD_LIBRARY_PATH=\/usr\/local\/cuda-9.0\/lib64:$LD_LIBRARY_PATH export LD_LIBRARY_PATH=\/usr\/local\/cuda\/lib64:$LD_LIBRARY_PATH ```", "best_answers_score":0.8, "library_name":"tensorflow", "question_url":"https:\/\/stackoverflow.com\/questions\/51019509\/tensorflow-importerror-libcudnn-so-7-cannot-open-shared-object-file-no-such", "best_answers_votes":16, "question_length":899, "response_length":254 }, { "question":"Is Intel based graphic card compatible with tensorflow\/GPU? Is this graphic card compatible with tensorflow\/GPU ? 
``` *-display description: VGA compatible controller product: Haswell-ULT Integrated Graphics Controller vendor: Intel Corporation physical id: 2 bus info: pci@0000:00:02.0 version: 09 width: 64 bits clock: 33MHz capabilities: msi pm vga_controller bus_master cap_list rom configuration: driver=i915 latency=0 resources: irq:44 memory:c2000000-c23fffff memory:b0000000-bfffffff ioport:7000(size=64) ```", "response":"At the moment no. Only Nvidia GPUs and (intel\/amd) CPU versions are available. They are working on an openCL compatible version of tensorflow that would be compatible and you can follow its progress here", "best_answers_score":0.8, "library_name":"tensorflow", "question_url":"https:\/\/stackoverflow.com\/questions\/40000518\/is-intel-based-graphic-card-compatible-with-tensorflow-gpu", "best_answers_votes":24, "question_length":516, "response_length":203 }, { "question":"How to run Keras on multiple cores? I'm using Keras with Tensorflow backend on a cluster (creating neural networks). How can I run it in a multi-threaded way on the cluster (on several cores) or is this done automatically by Keras? For example in Java one can create several threads, each thread running on a core. If possible, how many cores should be used?", "response":"Tensorflow automatically runs the computations on as many cores as are available on a single machine. If you have a distributed cluster, be sure you follow the instructions at https:\/\/www.tensorflow.org\/how_tos\/distributed\/ to configure the cluster. (e.g. create the tf.ClusterSpec correctly, etc.) To help debug, you can use the log_device_placement configuration options on the session to have Tensorflow print out where the computations are actually placed. (Note: this works for both GPUs as well as distributed Tensorflow.) ``` # Creates a session with log_device_placement set to True. 
sess = tf.Session(config=tf.ConfigProto(log_device_placement=True)) ``` Note that while Tensorflow's computation placement algorithm works fine for small computational graphs, you might be able to get better performance on large computational graphs by manually placing the computations in specific devices. (e.g. using with tf.device(...): blocks.)", "best_answers_score":0.8, "library_name":"tensorflow", "question_url":"https:\/\/stackoverflow.com\/questions\/41588383\/how-to-run-keras-on-multiple-cores", "best_answers_votes":18, "question_length":358, "response_length":941 }, { "question":"tensorflow: Please use `rate` instead of `keep_prob`. Rate should be set to `rate = 1 - keep_prob` I get this warning most of the time when I define a model using Keras. It seems to somehow come from tensorflow though: ``` WARNING:tensorflow:From C:\\Users\\lenik\\AppData\\Local\\Programs\\Python\\Python37\\lib\\site-packages\\keras\\backend\\tensorflow_backend.py:3445: calling dropout (from tensorflow.python.ops.nn_ops) with keep_prob is deprecated and will be removed in a future version. Instructions for updating: Please use `rate` instead of `keep_prob`. Rate should be set to `rate = 1 - keep_prob`. ``` Is this warning something to worry about? If yes, how do I solve this problem?", "response":"This deprecation warning is due to the Dropout layer in tf.keras.layers.Dropout. To avoid the warning, you need to specify rate= explicitly in Dropout, as in Dropout(rate=0.2). The argument used to be keep_prob; it has been replaced by rate, i.e. rate = 1 - keep_prob.
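A minimal sketch of the change (the 0.2 rate is just an example value):

```python
import tensorflow as tf

# Deprecated style: dropout(x, keep_prob=0.8)
# Current style: rate is the fraction of units *dropped*, so
# rate = 1 - keep_prob (keep_prob=0.8 becomes rate=0.2)
layer = tf.keras.layers.Dropout(rate=0.2)

x = tf.ones((1, 4))
y = layer(x, training=False)  # dropout is the identity outside training
print(y.numpy())
```
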
For more, you can check out this tensorflow documentation.", "best_answers_score":0.8, "library_name":"tensorflow", "question_url":"https:\/\/stackoverflow.com\/questions\/55235230\/tensorflow-please-use-rate-instead-of-keep-prob-rate-should-be-set-to-rat", "best_answers_votes":10, "question_length":680, "response_length":313 }, { "question":"TensorFlow REST Frontend but not TensorFlow Serving I want to deploy a simple TensorFlow model and run it in a REST service like Flask. So far I have not found a good example on github or here. I am not ready to use TF Serving as suggested in other posts; it is a perfect solution for Google, but overkill for my tasks with gRPC, bazel, C++ coding, protobuf...", "response":"There are different ways to do this. Using pure tensorflow is not very flexible, but it is relatively straightforward. The downside of this approach is that you have to rebuild the graph and initialize variables in the code where you restore the model. There is a way shown in tensorflow skflow\/contrib learn which is more elegant, however this doesn't seem to be functional at the moment and the documentation is out of date. I put a short example together on github here that shows how you would pass named GET or POST parameters to a flask REST-deployed tensorflow model.
The main code is then in a function that takes a dictionary based on the POST\/GET data: ``` @app.route('\/model', methods=['GET', 'POST']) @parse_postget def apply_model(d): tf.reset_default_graph() with tf.Session() as session: n = 1 x = tf.placeholder(tf.float32, [n], name='x') y = tf.placeholder(tf.float32, [n], name='y') m = tf.Variable([1.0], name='m') b = tf.Variable([1.0], name='b') y = tf.add(tf.mul(m, x), b) # fit y_i = m * x_i + b y_act = tf.placeholder(tf.float32, [n], name='y_') error = tf.sqrt((y - y_act) * (y - y_act)) train_step = tf.train.AdamOptimizer(0.05).minimize(error) feed_dict = {x: np.array([float(d['x_in'])]), y_act: np.array([float(d['y_star'])])} saver = tf.train.Saver() saver.restore(session, 'linear.chk') y_i, _, _ = session.run([y, m, b], feed_dict) return jsonify(output=float(y_i)) ```", "best_answers_score":0.8, "library_name":"tensorflow", "question_url":"https:\/\/stackoverflow.com\/questions\/38935428\/tensorflow-rest-frontend-but-not-tensorflow-serving", "best_answers_votes":7, "question_length":352, "response_length":1397 }, { "question":"How to fix \"ResourceExhaustedError: OOM when allocating tensor\" I wanna make a model with multiple inputs. So, I try to build a model like this. 
``` # define two sets of inputs inputA = Input(shape=(32,64,1)) inputB = Input(shape=(32,1024)) # CNN x = layers.Conv2D(32, kernel_size = (3, 3), activation = 'relu')(inputA) x = layers.Conv2D(32, (3,3), activation='relu')(x) x = layers.MaxPooling2D(pool_size=(2,2))(x) x = layers.Dropout(0.2)(x) x = layers.Flatten()(x) x = layers.Dense(500, activation = 'relu')(x) x = layers.Dropout(0.5)(x) x = layers.Dense(500, activation='relu')(x) x = Model(inputs=inputA, outputs=x) # DNN y = layers.Flatten()(inputB) y = Dense(64, activation=\"relu\")(y) y = Dense(250, activation=\"relu\")(y) y = Dense(500, activation=\"relu\")(y) y = Model(inputs=inputB, outputs=y) # Combine the output of the two models combined = concatenate([x.output, y.output]) # combined outputs z = Dense(300, activation=\"relu\")(combined) z = Dense(100, activation=\"relu\")(combined) z = Dense(1, activation=\"softmax\")(combined) model = Model(inputs=[x.input, y.input], outputs=z) model.summary() opt = Adam(lr=1e-3, decay=1e-3 \/ 200) model.compile(loss = 'sparse_categorical_crossentropy', optimizer = opt, metrics = ['accuracy']) ``` and the summary : _ But, when i try to train this model, ``` history = model.fit([trainimage, train_product_embd],train_label, validation_data=([validimage,valid_product_embd],valid_label), epochs=10, steps_per_epoch=100, validation_steps=10) ``` the problem happens.... 
: ``` ResourceExhaustedError Traceback (most recent call last) in () ----> 1 history = model.fit([trainimage, train_product_embd],train_label, validation_data=([validimage,valid_product_embd],valid_label), epochs=10, steps_per_epoch=100, validation_steps=10) 4 frames \/usr\/local\/lib\/python3.6\/dist-packages\/tensorflow_core\/python\/client\/session.py in __call__(self, *args, **kwargs) 1470 ret = tf_session.TF_SessionRunCallable(self._session._session, 1471 self._handle, args, -> 1472 run_metadata_ptr) 1473 if run_metadata: 1474 proto_data = tf_session.TF_GetBuffer(run_metadata_ptr) ResourceExhaustedError: 2 root error(s) found. (0) Resource exhausted: OOM when allocating tensor with shape[800000,32,30,62] and type float on \/job:localhost\/replica:0\/task:0\/device:GPU:0 by allocator GPU_0_bfc [[{{node conv2d_1\/convolution}}]] Hint: If you want to see a list of allocated tensors when OOM happens, add report_tensor_allocations_upon_oom to RunOptions for current allocation info. [[metrics\/acc\/Mean_1\/_185]] Hint: If you want to see a list of allocated tensors when OOM happens, add report_tensor_allocations_upon_oom to RunOptions for current allocation info. (1) Resource exhausted: OOM when allocating tensor with shape[800000,32,30,62] and type float on \/job:localhost\/replica:0\/task:0\/device:GPU:0 by allocator GPU_0_bfc [[{{node conv2d_1\/convolution}}]] Hint: If you want to see a list of allocated tensors when OOM happens, add report_tensor_allocations_upon_oom to RunOptions for current allocation info. 0 successful operations. 0 derived errors ignored. ``` Thanks for reading and hopefully helping me :)", "response":"OOM stands for \"out of memory\". Your GPU is running out of memory, so it can't allocate memory for this tensor. 
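Often the quickest mitigation is simply to train in smaller batches rather than pushing all 800000 examples through at once; a runnable sketch with a toy model (shapes, sizes, and the model itself are illustrative, not the asker's):

```python
import numpy as np
import tensorflow as tf

# Tiny stand-in model; the point is batch_size, which bounds how much
# data sits on the GPU per step instead of the whole dataset at once
model = tf.keras.Sequential([tf.keras.layers.Dense(1)])
model.compile(optimizer="adam", loss="mse")

x = np.random.rand(256, 4).astype("float32")
y = np.random.rand(256, 1).astype("float32")

# 256 samples / batch_size 32 -> 8 gradient steps per epoch
history = model.fit(x, y, batch_size=32, epochs=1, verbose=0)
print(history.history["loss"])
```
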
There are a few things you can do: Decrease the number of filters in your Dense, Conv2D layers Use a smaller batch_size (or increase steps_per_epoch and validation_steps) Use grayscale images (you can use tf.image.rgb_to_grayscale) Reduce the number of layers Use MaxPooling2D layers after convolutional layers Reduce the size of your images (you can use tf.image.resize for that) Use smaller float precision for your input, namely np.float32 If you're using a pre-trained model, freeze the first layers (like this) There is more useful information about this error: ``` OOM when allocating tensor with shape[800000,32,30,62] ``` This is a weird shape. If you're working with images, you should normally have 3 or 1 channel. On top of that, it seems like you are passing your entire dataset at once; you should instead pass it in batches.", "best_answers_score":0.8, "library_name":"tensorflow", "question_url":"https:\/\/stackoverflow.com\/questions\/59394947\/how-to-fix-resourceexhaustederror-oom-when-allocating-tensor", "best_answers_votes":60, "question_length":3132, "response_length":950 }, { "question":"Unable to (manually) load cifar10 dataset First, I tried to load using: ``` (X_train, y_train), (X_test, y_test) = datasets.cifar10.load_data() ``` But it gave an error: ``` Exception: URL fetch failure on https:\/\/www.cs.toronto.edu\/~kriz\/cifar-10-python.tar.gz: None -- [SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: certificate has expired (_ssl.c:1125) ``` So I manually downloaded the dataset and put it in C:\\Users\\SAHAN\\.keras\\datasets and renamed it to cifar-10-batches-py.tar.gz. But then it gives an error: ``` PermissionError: [Errno 13] Permission denied: 'C:\\\\Users\\\\SAHAN\\\\.keras\\\\datasets\\\\cifar-10-batches-py.tar.gz' ``` How can I load this dataset?", "response":"I was having a similar CERTIFICATE_VERIFY_FAILED error downloading CIFAR-10. 
Putting this in my python file worked: ```py import ssl ssl._create_default_https_context = ssl._create_unverified_context ``` Reference: https:\/\/programmerah.com\/python-error-certificate-verify-failed-certificate-has-expired-40374\/", "best_answers_score":0.8, "library_name":"tensorflow", "question_url":"https:\/\/stackoverflow.com\/questions\/69687794\/unable-to-manually-load-cifar10-dataset", "best_answers_votes":50, "question_length":677, "response_length":309 }, { "question":"What is tf.nn.max_pool's ksize parameter used for? In the definition of tf.nn.max_pool, what is ksize used for? ``` tf.nn.max_pool(value, ksize, strides, padding, data_format='NHWC', name=None) Performs the max pooling on the input. Args: value: A 4-D Tensor with shape [batch, height, width, channels] and type tf.float32. ksize: A list of ints that has length >= 4. The size of the window for each dimension of the input tensor. ``` For instance, if an input value is of tensor : [1, 64, 64, 3] and ksize=3.what does that mean?", "response":"The documentation states: ksize: A list of ints that has length >= 4. The size of the window for each dimension of the input tensor. In general for images, your input is of shape [batch_size, 64, 64, 3] for an RGB image of 64x64 pixels. The kernel size ksize will typically be [1, 2, 2, 1] if you have a 2x2 window over which you take the maximum. On the batch size dimension and the channels dimension, ksize is 1 because we don't want to take the maximum over multiple examples, or over multiples channels.", "best_answers_score":0.8, "library_name":"tensorflow", "question_url":"https:\/\/stackoverflow.com\/questions\/38601452\/what-is-tf-nn-max-pools-ksize-parameter-used-for", "best_answers_votes":50, "question_length":529, "response_length":508 }, { "question":"How to add attention layer to a Bi-LSTM I am developing a Bi-LSTM model and want to add a attention layer to it. But I am not getting how to add it. 
My current code for the model is ``` model = Sequential() model.add(Embedding(max_words, 1152, input_length=max_len, weights=[embeddings])) model.add(BatchNormalization()) model.add(Activation('tanh')) model.add(Dropout(0.5)) model.add(Bidirectional(LSTM(32))) model.add(BatchNormalization()) model.add(Activation('tanh')) model.add(Dropout(0.5)) model.add(Dense(1, activation='sigmoid')) model.summary() ``` And the model summary is ``` Model: \"sequential_1\" _________________________________________________________________ Layer (type) Output Shape Param # ================================================================= embedding_1 (Embedding) (None, 1152, 1152) 278396928 _________________________________________________________________ batch_normalization_1 (Batch (None, 1152, 1152) 4608 _________________________________________________________________ activation_1 (Activation) (None, 1152, 1152) 0 _________________________________________________________________ dropout_1 (Dropout) (None, 1152, 1152) 0 _________________________________________________________________ bidirectional_1 (Bidirection (None, 64) 303360 _________________________________________________________________ batch_normalization_2 (Batch (None, 64) 256 _________________________________________________________________ activation_2 (Activation) (None, 64) 0 _________________________________________________________________ dropout_2 (Dropout) (None, 64) 0 _________________________________________________________________ dense_1 (Dense) (None, 1) 65 ================================================================= Total params: 278,705,217 Trainable params: 278,702,785 Non-trainable params: 2,432 ```", "response":"This can be a possible custom solution with a custom layer that computes attention on the positional\/temporal dimension ``` from tensorflow.keras.layers import Layer from tensorflow.keras import backend as K class Attention(Layer): def __init__(self, return_sequences=True): 
self.return_sequences = return_sequences super(Attention,self).__init__() def build(self, input_shape): self.W=self.add_weight(name=\"att_weight\", shape=(input_shape[-1],1), initializer=\"normal\") self.b=self.add_weight(name=\"att_bias\", shape=(input_shape[1],1), initializer=\"zeros\") super(Attention,self).build(input_shape) def call(self, x): e = K.tanh(K.dot(x,self.W)+self.b) a = K.softmax(e, axis=1) output = x*a if self.return_sequences: return output return K.sum(output, axis=1) ``` it's build to receive 3D tensors and output 3D tensors (return_sequences=True) or 2D tensors (return_sequences=False). below a dummy example ``` # dummy data creation max_len = 100 max_words = 333 emb_dim = 126 n_sample = 5 X = np.random.randint(0,max_words, (n_sample,max_len)) Y = np.random.randint(0,2, n_sample) ``` with return_sequences=True ``` model = Sequential() model.add(Embedding(max_words, emb_dim, input_length=max_len)) model.add(Bidirectional(LSTM(32, return_sequences=True))) model.add(Attention(return_sequences=True)) # receive 3D and output 3D model.add(LSTM(32)) model.add(Dense(1, activation='sigmoid')) model.summary() model.compile('adam', 'binary_crossentropy') model.fit(X,Y, epochs=3) ``` with return_sequences=False ``` model = Sequential() model.add(Embedding(max_words, emb_dim, input_length=max_len)) model.add(Bidirectional(LSTM(32, return_sequences=True))) model.add(Attention(return_sequences=False)) # receive 3D and output 2D model.add(Dense(1, activation='sigmoid')) model.summary() model.compile('adam', 'binary_crossentropy') model.fit(X,Y, epochs=3) ``` You can integrate it into your networks easily here the running notebook", "best_answers_score":0.8, "library_name":"tensorflow", "question_url":"https:\/\/stackoverflow.com\/questions\/62948332\/how-to-add-attention-layer-to-a-bi-lstm", "best_answers_votes":43, "question_length":1842, "response_length":1929 }, { "question":"Cannot import name 'dtensor' from 'tensorflow.compat.v2.experimental' I am having 
problems trying to run TensorFlow on my Windows 10 machine. Code runs fine on my MacOS machine. ``` Traceback (most recent call last): File \"c:\\Users\\Fynn\\Documents\\GitHub\\AlpacaTradingBot\\ai.py\", line 15, in from keras.models import Sequential, load_model File \"C:\\Users\\Fynn\\AppData\\Local\\Programs\\Python\\Python39\\lib\\site-packages\\keras\\__init__.py\", line 24, in from keras import models File \"C:\\Users\\Fynn\\AppData\\Local\\Programs\\Python\\Python39\\lib\\site-packages\\keras\\models\\__init__.py\", line 18, in from keras.engine.functional import Functional File \"C:\\Users\\Fynn\\AppData\\Local\\Programs\\Python\\Python39\\lib\\site-packages\\keras\\engine\\functional.py\", line 24, in from keras.dtensor import layout_map as layout_map_lib File \"C:\\Users\\Fynn\\AppData\\Local\\Programs\\Python\\Python39\\lib\\site-packages\\keras\\dtensor\\__init__.py\", line 22, in from tensorflow.compat.v2.experimental import dtensor as dtensor_api # pylint: disable=g-import-not-at-top ImportError: cannot import name 'dtensor' from 'tensorflow.compat.v2.experimental' (C:\\Users\\Fynn\\AppData\\Local\\Programs\\Python\\Python39\\lib\\site-packages\\tensorflow\\_api\\v2\\compat\\v2\\experimental\\__init__.py) ```", "response":"This can be caused by an incompatibility between your tensorflow and your keras versions. In particular I see this with tensorflow==2.6.0 and keras==2.9.0, though I would not be surprised if other versions can cause this as well. 
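To see which pair you actually have before pinning anything, a quick check (using importlib.metadata, nothing TensorFlow-specific; 2.6/2.9 was just the combination I hit):

```python
from importlib.metadata import version

# Print the installed pair so you know which side to pin
tf_ver = version("tensorflow")
keras_ver = version("keras")
print("tensorflow", tf_ver)
print("keras", keras_ver)
```
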
Either update your tensorflow version by: ```bash pip install tensorflow==2.8 ``` or downgrade your keras version by: ```bash pip install keras==2.6 ```", "best_answers_score":0.8, "library_name":"tensorflow", "question_url":"https:\/\/stackoverflow.com\/questions\/72255562\/cannot-import-name-dtensor-from-tensorflow-compat-v2-experimental", "best_answers_votes":43, "question_length":1251, "response_length":382 }, { "question":"Colab: (0) UNIMPLEMENTED: DNN library is not found I have pretrained model for object detection (Google Colab + TensorFlow) inside Google Colab and I run it two-three times per week for new images I have and everything was fine for the last year till this week. Now when I try to run model I have this message: ``` Graph execution error: 2 root error(s) found. (0) UNIMPLEMENTED: DNN library is not found. [[{{node functional_1\/conv1_conv\/Conv2D}}]] [[StatefulPartitionedCall\/SecondStagePostprocessor\/BatchMultiClassNonMaxSuppression\/MultiClassNonMaxSuppression\/Reshape_5\/_126]] (1) UNIMPLEMENTED: DNN library is not found. [[{{node functional_1\/conv1_conv\/Conv2D}}]] 0 successful operations. 0 derived errors ignored. [Op:__inference_restored_function_body_27380] *** ``` Never happended before. Before I can run my model I have to install Tensor Flow object detection API with this command: ```py import os os.chdir('\/project\/models\/research') !protoc object_detection\/protos\/*.proto --python_out=. !cp object_detection\/packages\/tf2\/setup.py . !python -m pip install . ``` This is the output of command: ``` Processing \/content\/gdrive\/MyDrive\/models\/research DEPRECATION: A future pip version will change local packages to be built in-place without first copying to a temporary directory. We recommend you use --use-feature=in-tree-build to test your packages with this new behavior before it becomes the default. pip 21.3 will remove support for this functionality. 
You can find discussion regarding this at https:\/\/github.com\/pypa\/pip\/issues\/7555. Collecting avro-python3 Downloading avro-python3-1.10.2.tar.gz (38 kB) Collecting apache-beam Downloading apache_beam-2.35.0-cp37-cp37m-manylinux2010_x86_64.whl (9.9 MB) |\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588| 9.9 MB 1.6 MB\/s Requirement already satisfied: pillow in \/usr\/local\/lib\/python3.7\/dist-packages (from object-detection==0.1) (7.1.2) Requirement already satisfied: lxml in \/usr\/local\/lib\/python3.7\/dist-packages (from object-detection==0.1) (4.2.6) Requirement already satisfied: matplotlib in \/usr\/local\/lib\/python3.7\/dist-packages (from object-detection==0.1) (3.2.2) Requirement already satisfied: Cython in \/usr\/local\/lib\/python3.7\/dist-packages (from object-detection==0.1) (0.29.27) Requirement already satisfied: contextlib2 in \/usr\/local\/lib\/python3.7\/dist-packages (from object-detection==0.1) (0.5.5) Collecting tf-slim Downloading tf_slim-1.1.0-py2.py3-none-any.whl (352 kB) |\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588| 352 kB 50.5 MB\/s Requirement already satisfied: six in \/usr\/local\/lib\/python3.7\/dist-packages (from object-detection==0.1) (1.15.0) Requirement already satisfied: pycocotools in \/usr\/local\/lib\/python3.7\/dist-packages (from object-detection==0.1) (2.0.4) Collecting lvis Downloading lvis-0.5.3-py3-none-any.whl (14 kB) Requirement already satisfied: scipy in \/usr\/local\/lib\/python3.7\/dist-packages (from object-detection==0.1) (1.4.1) Requirement already satisfied: pandas in \/usr\/local\/lib\/python3.7\/dist-packages (from object-detection==0.1) (1.3.5) Collecting tf-models-official>=2.5.1 Downloading 
tf_models_official-2.8.0-py2.py3-none-any.whl (2.2 MB) |\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588| 2.2 MB 38.3 MB\/s Collecting tensorflow_io Downloading tensorflow_io-0.24.0-cp37-cp37m-manylinux_2_12_x86_64.manylinux2010_x86_64.whl (23.4 MB) |\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588| 23.4 MB 1.7 MB\/s Requirement already satisfied: keras in \/usr\/local\/lib\/python3.7\/dist-packages (from object-detection==0.1) (2.7.0) Collecting opencv-python-headless Downloading opencv_python_headless-4.5.5.62-cp36-abi3-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (47.7 MB) |\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588| 47.7 MB 74 kB\/s Collecting sacrebleu Downloading sacrebleu-2.0.0-py3-none-any.whl (90 kB) |\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588| 90 kB 10.4 MB\/s Requirement already satisfied: kaggle>=1.3.9 in \/usr\/local\/lib\/python3.7\/dist-packages (from tf-models-official>=2.5.1->object-detection==0.1) (1.5.12) Requirement already satisfied: psutil>=5.4.3 in \/usr\/local\/lib\/python3.7\/dist-packages (from tf-models-official>=2.5.1->object-detection==0.1) (5.4.8) Requirement already satisfied: oauth2client in \/usr\/local\/lib\/python3.7\/dist-packages (from tf-models-official>=2.5.1->object-detection==0.1) (4.1.3) Collecting tensorflow-addons Downloading tensorflow_addons-0.15.0-cp37-cp37m-manylinux_2_12_x86_64.manylinux2010_x86_64.whl (1.1 MB) 
|\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588| 1.1 MB 37.8 MB\/s Requirement already satisfied: gin-config in \/usr\/local\/lib\/python3.7\/dist-packages (from tf-models-official>=2.5.1->object-detection==0.1) (0.5.0) Requirement already satisfied: tensorflow-datasets in \/usr\/local\/lib\/python3.7\/dist-packages (from tf-models-official>=2.5.1->object-detection==0.1) (4.0.1) Collecting sentencepiece Downloading sentencepiece-0.1.96-cp37-cp37m-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (1.2 MB) |\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588| 1.2 MB 37.5 MB\/s Collecting tensorflow-model-optimization>=0.4.1 Downloading tensorflow_model_optimization-0.7.0-py2.py3-none-any.whl (213 kB) |\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588| 213 kB 42.7 MB\/s Collecting pyyaml=5.1 Downloading PyYAML-5.4.1-cp37-cp37m-manylinux1_x86_64.whl (636 kB) |\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588| 636 kB 53.3 MB\/s Collecting tensorflow-text~=2.8.0 Downloading tensorflow_text-2.8.1-cp37-cp37m-manylinux_2_12_x86_64.manylinux2010_x86_64.whl (4.9 MB) |\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588| 4.9 MB 46.1 MB\/s Requirement already satisfied: google-api-python-client>=1.6.7 in \/usr\/local\/lib\/python3.7\/dist-packages (from 
tf-models-official>=2.5.1->object-detection==0.1) (1.12.10) Requirement already satisfied: numpy>=1.15.4 in \/usr\/local\/lib\/python3.7\/dist-packages (from tf-models-official>=2.5.1->object-detection==0.1) (1.19.5) Requirement already satisfied: tensorflow-hub>=0.6.0 in \/usr\/local\/lib\/python3.7\/dist-packages (from tf-models-official>=2.5.1->object-detection==0.1) (0.12.0) Collecting seqeval Downloading seqeval-1.2.2.tar.gz (43 kB) |\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588| 43 kB 2.1 MB\/s Collecting tensorflow~=2.8.0 Downloading tensorflow-2.8.0-cp37-cp37m-manylinux2010_x86_64.whl (497.5 MB) |\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588| 497.5 MB 28 kB\/s Collecting py-cpuinfo>=3.3.0 Downloading py-cpuinfo-8.0.0.tar.gz (99 kB) |\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588| 99 kB 10.1 MB\/s Requirement already satisfied: google-auth=1.16.0 in \/usr\/local\/lib\/python3.7\/dist-packages (from google-api-python-client>=1.6.7->tf-models-official>=2.5.1->object-detection==0.1) (1.35.0) Requirement already satisfied: uritemplate=3.0.0 in \/usr\/local\/lib\/python3.7\/dist-packages (from google-api-python-client>=1.6.7->tf-models-official>=2.5.1->object-detection==0.1) (3.0.1) Requirement already satisfied: httplib2=0.15.0 in \/usr\/local\/lib\/python3.7\/dist-packages (from google-api-python-client>=1.6.7->tf-models-official>=2.5.1->object-detection==0.1) (0.17.4) Requirement already satisfied: google-auth-httplib2>=0.0.3 in \/usr\/local\/lib\/python3.7\/dist-packages (from 
google-api-python-client>=1.6.7->tf-models-official>=2.5.1->object-detection==0.1) (0.0.4) Requirement already satisfied: google-api-core=1.21.0 in \/usr\/local\/lib\/python3.7\/dist-packages (from google-api-python-client>=1.6.7->tf-models-official>=2.5.1->object-detection==0.1) (1.26.3) Requirement already satisfied: setuptools>=40.3.0 in \/usr\/local\/lib\/python3.7\/dist-packages (from google-api-core=1.21.0->google-api-python-client>=1.6.7->tf-models-official>=2.5.1->object-detection==0.1) (57.4.0) Requirement already satisfied: pytz in \/usr\/local\/lib\/python3.7\/dist-packages (from google-api-core=1.21.0->google-api-python-client>=1.6.7->tf-models-official>=2.5.1->object-detection==0.1) (2018.9) Requirement already satisfied: googleapis-common-protos=1.6.0 in \/usr\/local\/lib\/python3.7\/dist-packages (from google-api-core=1.21.0->google-api-python-client>=1.6.7->tf-models-official>=2.5.1->object-detection==0.1) (1.54.0) Requirement already satisfied: requests=2.18.0 in \/usr\/local\/lib\/python3.7\/dist-packages (from google-api-core=1.21.0->google-api-python-client>=1.6.7->tf-models-official>=2.5.1->object-detection==0.1) (2.23.0) Requirement already satisfied: packaging>=14.3 in \/usr\/local\/lib\/python3.7\/dist-packages (from google-api-core=1.21.0->google-api-python-client>=1.6.7->tf-models-official>=2.5.1->object-detection==0.1) (21.3) Requirement already satisfied: protobuf>=3.12.0 in \/usr\/local\/lib\/python3.7\/dist-packages (from google-api-core=1.21.0->google-api-python-client>=1.6.7->tf-models-official>=2.5.1->object-detection==0.1) (3.17.3) Requirement already satisfied: pyasn1-modules>=0.2.1 in \/usr\/local\/lib\/python3.7\/dist-packages (from google-auth=1.16.0->google-api-python-client>=1.6.7->tf-models-official>=2.5.1->object-detection==0.1) (0.2.8) Requirement already satisfied: rsa=3.1.4 in \/usr\/local\/lib\/python3.7\/dist-packages (from 
google-auth=1.16.0->google-api-python-client>=1.6.7->tf-models-official>=2.5.1->object-detection==0.1) (4.8) Requirement already satisfied: cachetools=2.0.0 in \/usr\/local\/lib\/python3.7\/dist-packages (from google-auth=1.16.0->google-api-python-client>=1.6.7->tf-models-official>=2.5.1->object-detection==0.1) (4.2.4) Requirement already satisfied: certifi in \/usr\/local\/lib\/python3.7\/dist-packages (from kaggle>=1.3.9->tf-models-official>=2.5.1->object-detection==0.1) (2021.10.8) Requirement already satisfied: urllib3 in \/usr\/local\/lib\/python3.7\/dist-packages (from kaggle>=1.3.9->tf-models-official>=2.5.1->object-detection==0.1) (1.24.3) Requirement already satisfied: python-dateutil in \/usr\/local\/lib\/python3.7\/dist-packages (from kaggle>=1.3.9->tf-models-official>=2.5.1->object-detection==0.1) (2.8.2) Requirement already satisfied: tqdm in \/usr\/local\/lib\/python3.7\/dist-packages (from kaggle>=1.3.9->tf-models-official>=2.5.1->object-detection==0.1) (4.62.3) Requirement already satisfied: python-slugify in \/usr\/local\/lib\/python3.7\/dist-packages (from kaggle>=1.3.9->tf-models-official>=2.5.1->object-detection==0.1) (5.0.2) Requirement already satisfied: pyparsing!=3.0.5,>=2.0.2 in \/usr\/local\/lib\/python3.7\/dist-packages (from packaging>=14.3->google-api-core=1.21.0->google-api-python-client>=1.6.7->tf-models-official>=2.5.1->object-detection==0.1) (3.0.7) Requirement already satisfied: pyasn1=0.4.6 in \/usr\/local\/lib\/python3.7\/dist-packages (from pyasn1-modules>=0.2.1->google-auth=1.16.0->google-api-python-client>=1.6.7->tf-models-official>=2.5.1->object-detection==0.1) (0.4.8) Requirement already satisfied: idna=2.5 in \/usr\/local\/lib\/python3.7\/dist-packages (from requests=2.18.0->google-api-core=1.21.0->google-api-python-client>=1.6.7->tf-models-official>=2.5.1->object-detection==0.1) (2.10) Requirement already satisfied: chardet=3.0.2 in \/usr\/local\/lib\/python3.7\/dist-packages (from 
requests=2.18.0->google-api-core=1.21.0->google-api-python-client>=1.6.7->tf-models-official>=2.5.1->object-detection==0.1) (3.0.4) Requirement already satisfied: termcolor>=1.1.0 in \/usr\/local\/lib\/python3.7\/dist-packages (from tensorflow~=2.8.0->tf-models-official>=2.5.1->object-detection==0.1) (1.1.0) Requirement already satisfied: libclang>=9.0.1 in \/usr\/local\/lib\/python3.7\/dist-packages (from tensorflow~=2.8.0->tf-models-official>=2.5.1->object-detection==0.1) (13.0.0) Requirement already satisfied: h5py>=2.9.0 in \/usr\/local\/lib\/python3.7\/dist-packages (from tensorflow~=2.8.0->tf-models-official>=2.5.1->object-detection==0.1) (3.1.0) Requirement already satisfied: astunparse>=1.6.0 in \/usr\/local\/lib\/python3.7\/dist-packages (from tensorflow~=2.8.0->tf-models-official>=2.5.1->object-detection==0.1) (1.6.3) Requirement already satisfied: gast>=0.2.1 in \/usr\/local\/lib\/python3.7\/dist-packages (from tensorflow~=2.8.0->tf-models-official>=2.5.1->object-detection==0.1) (0.4.0) Requirement already satisfied: google-pasta>=0.1.1 in \/usr\/local\/lib\/python3.7\/dist-packages (from tensorflow~=2.8.0->tf-models-official>=2.5.1->object-detection==0.1) (0.2.0) Requirement already satisfied: typing-extensions>=3.6.6 in \/usr\/local\/lib\/python3.7\/dist-packages (from tensorflow~=2.8.0->tf-models-official>=2.5.1->object-detection==0.1) (3.10.0.2) Requirement already satisfied: wrapt>=1.11.0 in \/usr\/local\/lib\/python3.7\/dist-packages (from tensorflow~=2.8.0->tf-models-official>=2.5.1->object-detection==0.1) (1.13.3) Requirement already satisfied: tensorflow-io-gcs-filesystem>=0.23.1 in \/usr\/local\/lib\/python3.7\/dist-packages (from tensorflow~=2.8.0->tf-models-official>=2.5.1->object-detection==0.1) (0.23.1) Collecting tf-estimator-nightly==2.8.0.dev2021122109 Downloading tf_estimator_nightly-2.8.0.dev2021122109-py2.py3-none-any.whl (462 kB) 
|\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588| 462 kB 49.5 MB\/s Requirement already satisfied: keras-preprocessing>=1.1.1 in \/usr\/local\/lib\/python3.7\/dist-packages (from tensorflow~=2.8.0->tf-models-official>=2.5.1->object-detection==0.1) (1.1.2) Collecting tensorboard=2.8 Downloading tensorboard-2.8.0-py3-none-any.whl (5.8 MB) |\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588| 5.8 MB 41.2 MB\/s Requirement already satisfied: flatbuffers>=1.12 in \/usr\/local\/lib\/python3.7\/dist-packages (from tensorflow~=2.8.0->tf-models-official>=2.5.1->object-detection==0.1) (2.0) Collecting keras Downloading keras-2.8.0-py2.py3-none-any.whl (1.4 MB) |\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588| 1.4 MB 41.2 MB\/s Requirement already satisfied: opt-einsum>=2.3.2 in \/usr\/local\/lib\/python3.7\/dist-packages (from tensorflow~=2.8.0->tf-models-official>=2.5.1->object-detection==0.1) (3.3.0) Collecting numpy>=1.15.4 Downloading numpy-1.21.5-cp37-cp37m-manylinux_2_12_x86_64.manylinux2010_x86_64.whl (15.7 MB) |\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588| 15.7 MB 41.4 MB\/s Requirement already satisfied: absl-py>=0.4.0 in \/usr\/local\/lib\/python3.7\/dist-packages (from tensorflow~=2.8.0->tf-models-official>=2.5.1->object-detection==0.1) (1.0.0) Requirement already satisfied: grpcio=1.24.3 in \/usr\/local\/lib\/python3.7\/dist-packages (from 
tensorflow~=2.8.0->tf-models-official>=2.5.1->object-detection==0.1) (1.43.0) Requirement already satisfied: wheel=0.23.0 in \/usr\/local\/lib\/python3.7\/dist-packages (from astunparse>=1.6.0->tensorflow~=2.8.0->tf-models-official>=2.5.1->object-detection==0.1) (0.37.1) Requirement already satisfied: cached-property in \/usr\/local\/lib\/python3.7\/dist-packages (from h5py>=2.9.0->tensorflow~=2.8.0->tf-models-official>=2.5.1->object-detection==0.1) (1.5.2) Requirement already satisfied: tensorboard-data-server=0.6.0 in \/usr\/local\/lib\/python3.7\/dist-packages (from tensorboard=2.8->tensorflow~=2.8.0->tf-models-official>=2.5.1->object-detection==0.1) (0.6.1) Requirement already satisfied: werkzeug>=0.11.15 in \/usr\/local\/lib\/python3.7\/dist-packages (from tensorboard=2.8->tensorflow~=2.8.0->tf-models-official>=2.5.1->object-detection==0.1) (1.0.1) Requirement already satisfied: google-auth-oauthlib=0.4.1 in \/usr\/local\/lib\/python3.7\/dist-packages (from tensorboard=2.8->tensorflow~=2.8.0->tf-models-official>=2.5.1->object-detection==0.1) (0.4.6) Requirement already satisfied: tensorboard-plugin-wit>=1.6.0 in \/usr\/local\/lib\/python3.7\/dist-packages (from tensorboard=2.8->tensorflow~=2.8.0->tf-models-official>=2.5.1->object-detection==0.1) (1.8.1) Requirement already satisfied: markdown>=2.6.8 in \/usr\/local\/lib\/python3.7\/dist-packages (from tensorboard=2.8->tensorflow~=2.8.0->tf-models-official>=2.5.1->object-detection==0.1) (3.3.6) Requirement already satisfied: requests-oauthlib>=0.7.0 in \/usr\/local\/lib\/python3.7\/dist-packages (from google-auth-oauthlib=0.4.1->tensorboard=2.8->tensorflow~=2.8.0->tf-models-official>=2.5.1->object-detection==0.1) (1.3.1) Requirement already satisfied: importlib-metadata>=4.4 in \/usr\/local\/lib\/python3.7\/dist-packages (from markdown>=2.6.8->tensorboard=2.8->tensorflow~=2.8.0->tf-models-official>=2.5.1->object-detection==0.1) (4.10.1) Requirement already satisfied: zipp>=0.5 in 
\/usr\/local\/lib\/python3.7\/dist-packages (from importlib-metadata>=4.4->markdown>=2.6.8->tensorboard=2.8->tensorflow~=2.8.0->tf-models-official>=2.5.1->object-detection==0.1) (3.7.0) Requirement already satisfied: oauthlib>=3.0.0 in \/usr\/local\/lib\/python3.7\/dist-packages (from requests-oauthlib>=0.7.0->google-auth-oauthlib=0.4.1->tensorboard=2.8->tensorflow~=2.8.0->tf-models-official>=2.5.1->object-detection==0.1) (3.2.0) Requirement already satisfied: dm-tree~=0.1.1 in \/usr\/local\/lib\/python3.7\/dist-packages (from tensorflow-model-optimization>=0.4.1->tf-models-official>=2.5.1->object-detection==0.1) (0.1.6) Requirement already satisfied: crcmod=1.7 in \/usr\/local\/lib\/python3.7\/dist-packages (from apache-beam->object-detection==0.1) (1.7) Collecting fastavro=0.21.4 Downloading fastavro-1.4.9-cp37-cp37m-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (2.3 MB) |\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588| 2.3 MB 38.1 MB\/s Requirement already satisfied: pyarrow=0.15.1 in \/usr\/local\/lib\/python3.7\/dist-packages (from apache-beam->object-detection==0.1) (6.0.1) Requirement already satisfied: pydot=1.2.0 in \/usr\/local\/lib\/python3.7\/dist-packages (from apache-beam->object-detection==0.1) (1.3.0) Collecting proto-plus=1.7.1 Downloading proto_plus-1.19.9-py3-none-any.whl (45 kB) |\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588| 45 kB 3.2 MB\/s Collecting requests=2.18.0 Downloading requests-2.27.1-py2.py3-none-any.whl (63 kB) |\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588| 63 kB 1.8 MB\/s Collecting dill=0.3.1.1 
Downloading dill-0.3.1.1.tar.gz (151 kB) |\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588| 151 kB 44.4 MB\/s Collecting numpy>=1.15.4 Downloading numpy-1.20.3-cp37-cp37m-manylinux_2_12_x86_64.manylinux2010_x86_64.whl (15.3 MB) |\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588| 15.3 MB 21.1 MB\/s Collecting orjson=2.1.0 Downloading hdfs-2.6.0-py3-none-any.whl (33 kB) Collecting pymongo=3.8.0 Downloading pymongo-3.12.3-cp37-cp37m-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (508 kB) |\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588| 508 kB 44.3 MB\/s Requirement already satisfied: docopt in \/usr\/local\/lib\/python3.7\/dist-packages (from hdfs=2.1.0->apache-beam->object-detection==0.1) (0.6.2) Collecting protobuf>=3.12.0 Downloading protobuf-3.19.4-cp37-cp37m-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (1.1 MB) |\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588| 1.1 MB 47.3 MB\/s Requirement already satisfied: charset-normalizer~=2.0.0 in \/usr\/local\/lib\/python3.7\/dist-packages (from requests=2.18.0->google-api-core=1.21.0->google-api-python-client>=1.6.7->tf-models-official>=2.5.1->object-detection==0.1) (2.0.11) Requirement already satisfied: opencv-python>=4.1.0.25 in \/usr\/local\/lib\/python3.7\/dist-packages (from lvis->object-detection==0.1) (4.1.2.30) Requirement already satisfied: cycler>=0.10.0 in \/usr\/local\/lib\/python3.7\/dist-packages (from lvis->object-detection==0.1) 
(0.11.0) Requirement already satisfied: kiwisolver>=1.1.0 in \/usr\/local\/lib\/python3.7\/dist-packages (from lvis->object-detection==0.1) (1.3.2) Requirement already satisfied: text-unidecode>=1.3 in \/usr\/local\/lib\/python3.7\/dist-packages (from python-slugify->kaggle>=1.3.9->tf-models-official>=2.5.1->object-detection==0.1) (1.3) Requirement already satisfied: regex in \/usr\/local\/lib\/python3.7\/dist-packages (from sacrebleu->tf-models-official>=2.5.1->object-detection==0.1) (2019.12.20) Requirement already satisfied: tabulate>=0.8.9 in \/usr\/local\/lib\/python3.7\/dist-packages (from sacrebleu->tf-models-official>=2.5.1->object-detection==0.1) (0.8.9) Collecting portalocker Downloading portalocker-2.3.2-py2.py3-none-any.whl (15 kB) Collecting colorama Downloading colorama-0.4.4-py2.py3-none-any.whl (16 kB) Requirement already satisfied: scikit-learn>=0.21.3 in \/usr\/local\/lib\/python3.7\/dist-packages (from seqeval->tf-models-official>=2.5.1->object-detection==0.1) (1.0.2) Requirement already satisfied: joblib>=0.11 in \/usr\/local\/lib\/python3.7\/dist-packages (from scikit-learn>=0.21.3->seqeval->tf-models-official>=2.5.1->object-detection==0.1) (1.1.0) Requirement already satisfied: threadpoolctl>=2.0.0 in \/usr\/local\/lib\/python3.7\/dist-packages (from scikit-learn>=0.21.3->seqeval->tf-models-official>=2.5.1->object-detection==0.1) (3.1.0) Requirement already satisfied: typeguard>=2.7 in \/usr\/local\/lib\/python3.7\/dist-packages (from tensorflow-addons->tf-models-official>=2.5.1->object-detection==0.1) (2.7.1) Requirement already satisfied: promise in \/usr\/local\/lib\/python3.7\/dist-packages (from tensorflow-datasets->tf-models-official>=2.5.1->object-detection==0.1) (2.3) Requirement already satisfied: future in \/usr\/local\/lib\/python3.7\/dist-packages (from tensorflow-datasets->tf-models-official>=2.5.1->object-detection==0.1) (0.16.0) Requirement already satisfied: attrs>=18.1.0 in \/usr\/local\/lib\/python3.7\/dist-packages (from 
tensorflow-datasets->tf-models-official>=2.5.1->object-detection==0.1) (21.4.0) Requirement already satisfied: importlib-resources in \/usr\/local\/lib\/python3.7\/dist-packages (from tensorflow-datasets->tf-models-official>=2.5.1->object-detection==0.1) (5.4.0) Requirement already satisfied: tensorflow-metadata in \/usr\/local\/lib\/python3.7\/dist-packages (from tensorflow-datasets->tf-models-official>=2.5.1->object-detection==0.1) (1.6.0) Collecting tensorflow-io-gcs-filesystem>=0.23.1 Downloading tensorflow_io_gcs_filesystem-0.24.0-cp37-cp37m-manylinux_2_12_x86_64.manylinux2010_x86_64.whl (2.1 MB) |\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588| 2.1 MB 40.9 MB\/s Building wheels for collected packages: object-detection, py-cpuinfo, dill, avro-python3, seqeval Building wheel for object-detection (setup.py) ... done Created wheel for object-detection: filename=object_detection-0.1-py3-none-any.whl size=1686316 sha256=775b8c34c800b3b3139d1067abd686af9ce9158011fccfb5450ccfd9bf424a5a Stored in directory: \/tmp\/pip-ephem-wheel-cache-rmw0fvil\/wheels\/d0\/e3\/e9\/b9ffe85019ec441e90d8ff9eddee9950c4c23b7598204390b9 Building wheel for py-cpuinfo (setup.py) ... done Created wheel for py-cpuinfo: filename=py_cpuinfo-8.0.0-py3-none-any.whl size=22257 sha256=ac956c4c039868fdba78645bea056754e667e8840bea783ad2ca75e4d3e682c6 Stored in directory: \/root\/.cache\/pip\/wheels\/d2\/f1\/1f\/041add21dc9c4220157f1bd2bd6afe1f1a49524c3396b94401 Building wheel for dill (setup.py) ... done Created wheel for dill: filename=dill-0.3.1.1-py3-none-any.whl size=78544 sha256=d9c6cdfd69aea2b4d78e6afbbe2bc530394e4081eb186eb4f4cd02373ca739fd Stored in directory: \/root\/.cache\/pip\/wheels\/a4\/61\/fd\/c57e374e580aa78a45ed78d5859b3a44436af17e22ca53284f Building wheel for avro-python3 (setup.py) ... 
done Created wheel for avro-python3: filename=avro_python3-1.10.2-py3-none-any.whl size=44010 sha256=4eca8b4f30e4850d5dabccee36c40c8dda8a6c7e7058cfb7f0258eea5ce7b2b3 Stored in directory: \/root\/.cache\/pip\/wheels\/d6\/e5\/b1\/6b151d9b535ee50aaa6ab27d145a0104b6df02e5636f0376da Building wheel for seqeval (setup.py) ... done Created wheel for seqeval: filename=seqeval-1.2.2-py3-none-any.whl size=16180 sha256=0ddfa46d0e36e9be346a90833ef11cc0d38cc7e744be34c5a0d321f997a30cba Stored in directory: \/root\/.cache\/pip\/wheels\/05\/96\/ee\/7cac4e74f3b19e3158dce26a20a1c86b3533c43ec72a549fd7 Successfully built object-detection py-cpuinfo dill avro-python3 seqeval Installing collected packages: requests, protobuf, numpy, tf-estimator-nightly, tensorflow-io-gcs-filesystem, tensorboard, keras, tensorflow, portalocker, dill, colorama, tf-slim, tensorflow-text, tensorflow-model-optimization, tensorflow-addons, seqeval, sentencepiece, sacrebleu, pyyaml, pymongo, py-cpuinfo, proto-plus, orjson, opencv-python-headless, hdfs, fastavro, tf-models-official, tensorflow-io, lvis, avro-python3, apache-beam, object-detection Attempting uninstall: requests Found existing installation: requests 2.23.0 Uninstalling requests-2.23.0: Successfully uninstalled requests-2.23.0 Attempting uninstall: protobuf Found existing installation: protobuf 3.17.3 Uninstalling protobuf-3.17.3: Successfully uninstalled protobuf-3.17.3 Attempting uninstall: numpy Found existing installation: numpy 1.19.5 Uninstalling numpy-1.19.5: Successfully uninstalled numpy-1.19.5 Attempting uninstall: tensorflow-io-gcs-filesystem Found existing installation: tensorflow-io-gcs-filesystem 0.23.1 Uninstalling tensorflow-io-gcs-filesystem-0.23.1: Successfully uninstalled tensorflow-io-gcs-filesystem-0.23.1 Attempting uninstall: tensorboard Found existing installation: tensorboard 2.7.0 Uninstalling tensorboard-2.7.0: Successfully uninstalled tensorboard-2.7.0 Attempting uninstall: keras Found existing installation: keras 2.7.0 
Uninstalling keras-2.7.0: Successfully uninstalled keras-2.7.0 Attempting uninstall: tensorflow Found existing installation: tensorflow 2.7.0 Uninstalling tensorflow-2.7.0: Successfully uninstalled tensorflow-2.7.0 Attempting uninstall: dill Found existing installation: dill 0.3.4 Uninstalling dill-0.3.4: Successfully uninstalled dill-0.3.4 Attempting uninstall: pyyaml Found existing installation: PyYAML 3.13 Uninstalling PyYAML-3.13: Successfully uninstalled PyYAML-3.13 Attempting uninstall: pymongo Found existing installation: pymongo 4.0.1 Uninstalling pymongo-4.0.1: Successfully uninstalled pymongo-4.0.1 ERROR: pip's dependency resolver does not currently take into account all the packages that are installed. This behaviour is the source of the following dependency conflicts. yellowbrick 1.3.post1 requires numpy=1.16.0, but you have numpy 1.20.3 which is incompatible. multiprocess 0.70.12.2 requires dill>=0.3.4, but you have dill 0.3.1.1 which is incompatible. google-colab 1.0.0 requires requests~=2.23.0, but you have requests 2.27.1 which is incompatible. datascience 0.10.6 requires folium==0.2.1, but you have folium 0.8.3 which is incompatible. albumentations 0.1.12 requires imgaug=0.2.5, but you have imgaug 0.2.9 which is incompatible. Successfully installed apache-beam-2.35.0 avro-python3-1.10.2 colorama-0.4.4 dill-0.3.1.1 fastavro-1.4.9 hdfs-2.6.0 keras-2.8.0 lvis-0.5.3 numpy-1.20.3 object-detection-0.1 opencv-python-headless-4.5.5.62 orjson-3.6.6 portalocker-2.3.2 proto-plus-1.19.9 protobuf-3.19.4 py-cpuinfo-8.0.0 pymongo-3.12.3 pyyaml-5.4.1 requests-2.27.1 sacrebleu-2.0.0 sentencepiece-0.1.96 seqeval-1.2.2 tensorboard-2.8.0 tensorflow-2.8.0 tensorflow-addons-0.15.0 tensorflow-io-0.24.0 tensorflow-io-gcs-filesystem-0.24.0 tensorflow-model-optimization-0.7.0 tensorflow-text-2.8.1 tf-estimator-nightly-2.8.0.dev2021122109 tf-models-official-2.8.0 tf-slim-1.1.0 ``` I am noticing that this command uninstalling tensorflow 2.7 and installing tensorflow 2.8. 
I am not sure it was happening before. Maybe it's the reason the DNN library link is missing or something? I can confirm these: Nothing was changed inside the pretrained model, the already installed model, or the object_detection source files I downloaded a year ago. I tried to run the command !pip install dnn - not working. I tried to restart the runtime (without disconnecting) - not working. Can somebody help? Thanks.", "response":"This worked for me (Colab): ``` # Check libcudnn8 version !apt-cache policy libcudnn8 # Install latest version !apt install --allow-change-held-packages libcudnn8=8.4.1.50-1+cuda11.6 # Export env variables !export PATH=\/usr\/local\/cuda-11.4\/bin${PATH:+:${PATH}} !export LD_LIBRARY_PATH=\/usr\/local\/cuda-11.4\/lib64:$LD_LIBRARY_PATH !export LD_LIBRARY_PATH=\/usr\/local\/cuda-11.4\/include:$LD_LIBRARY_PATH !export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:\/usr\/local\/cuda\/extras\/CUPTI\/lib64 # Install tensorflow !pip install tflite-model-maker==0.4.0 !pip uninstall -y tensorflow && pip install -q tensorflow==2.9.1 !pip install pycocotools==2.0.4 !pip install opencv-python-headless==4.6.0.66 ```", "best_answers_score":0.8, "library_name":"tensorflow", "question_url":"https:\/\/stackoverflow.com\/questions\/71000120\/colab-0-unimplemented-dnn-library-is-not-found", "best_answers_votes":15, "question_length":26660, "response_length":680 }, { "question":"Why can GPU do matrix multiplication faster than CPU? I've been using a GPU for a while without questioning it, but now I'm curious. Why can a GPU do matrix multiplication much faster than a CPU? Is it because of parallel processing? But I didn't write any parallel processing code. Does it do it automatically by itself? Any intuition \/ high-level explanation will be appreciated!", "response":"How do you parallelize the computations? GPUs are able to do a lot of parallel computations, a lot more than a CPU could do. Look at this example of vector addition of, let's say, 1M elements.
Using a CPU, let's say you have a maximum of 100 threads you can run (100 is more than typical, but let's assume it for a while). In a typical multi-threading example, let's say you parallelized the additions across all threads. Here is what I mean by it: ``` c[0] = a[0] + b[0] # let's do it on thread 0 c[1] = a[1] + b[1] # let's do it on thread 1 c[101] = a[101] + b[101] # let's do it on thread 1 again, in a later pass ``` We are able to do this because the value of c[0] doesn't depend upon any values other than a[0] and b[0]. So each addition is independent of the others. Hence, we were able to easily parallelize the task. As you see in the above example, the additions of 100 different elements take place simultaneously, saving you time. In this way it takes 1M\/100 = 10,000 steps to add all the elements. How efficiently does a GPU parallelize? Now consider today's GPUs with about 2048 threads: all threads can independently do 2048 different operations in constant time, hence giving a boost. The same applies to your case of matrix multiplication: you can parallelize the computations because a GPU has many more threads, organized into blocks, so a lot of the computations are parallelized, resulting in quick computations. But I didn't write any parallel processing for my GTX1080! Does it do it by itself? Almost all frameworks for machine learning use parallelized implementations of all the possible operations. This is achieved by CUDA programming, NVIDIA's API for doing parallel computations on NVIDIA GPUs. You don't write it explicitly; it's all done at a low level, and you do not even get to know. That doesn't mean a C++ program you wrote will automatically be parallelized, just because you have a GPU.
No, you would need to write it using CUDA; only then will it be parallelized. But most programming frameworks already have this, so it is not required on your end.", "best_answers_score":0.8, "library_name":"tensorflow", "question_url":"https:\/\/stackoverflow.com\/questions\/51344018\/why-can-gpu-do-matrix-multiplication-faster-than-cpu", "best_answers_votes":24, "question_length":374, "response_length":2018 }, { "question":"what does x = tf.placeholder(tf.float32, [None, 784]) means? I know the basic use of tf.placeholder: ``` x = tf.placeholder(tf.float32, shape=(1024, 1024)) y = tf.matmul(x, x) with tf.Session() as sess: print(sess.run(y)) # ERROR: will fail because x was not fed. rand_array = np.random.rand(1024, 1024) print(sess.run(y, feed_dict={x: rand_array})) # Will succeed. ``` I know the second parameter is about the shape. However, I don't know what it means when the first one is None in the shape, e.g. [None,784].", "response":"From the tutorial: Deep MNIST for Experts Here we assign it a shape of [None, 784], where 784 is the dimensionality of a single flattened 28 by 28 pixel MNIST image, and None indicates that the first dimension, corresponding to the batch size, can be of any size.", "best_answers_score":0.8, "library_name":"tensorflow", "question_url":"https:\/\/stackoverflow.com\/questions\/39305174\/what-does-x-tf-placeholdertf-float32-none-784-means", "best_answers_votes":38, "question_length":505, "response_length":263 }, { "question":"TensorFlow TypeError: Value passed to parameter input has DataType uint8 not in list of allowed values: float16, float32 I have been trying to get a simple CNN to train for the past 3 days. First, I have set up an input pipeline\/queue configuration that reads images from a directory tree and prepares batches. I got the code for this at this link. So, I now have train_image_batch and train_label_batch that I need to feed to my CNN.
``` train_image_batch, train_label_batch = tf.train.batch( [train_image, train_label], batch_size=BATCH_SIZE # ,num_threads=1 ) ``` And I am unable to figure out how. I am using the code for CNN given at this link. ``` # Input Layer input_layer = tf.reshape(train_image_batch, [-1, IMAGE_HEIGHT, IMAGE_WIDTH, NUM_CHANNELS]) # Convolutional Layer #1 conv1 = new_conv_layer(input_layer, NUM_CHANNELS, 5, 32, 2) # Pooling Layer #1 pool1 = new_pooling_layer(conv1, 2, 2) ``` The input_layer on printing shows this Tensor(\"Reshape:0\", shape=(5, 120, 120, 3), dtype=uint8) The next line crashes with TypeError; conv1 = new_conv_layer(...). The body of new_conv_layer function is given below ``` def new_conv_layer(input, # The previous layer. num_input_channels, # Num. channels in prev. layer. filter_size, # Width and height of each filter. num_filters, # Number of filters. stride): # Shape of the filter-weights for the convolution. # This format is determined by the TensorFlow API. shape = [filter_size, filter_size, num_input_channels, num_filters] # Create new weights aka. filters with the given shape. weights = tf.Variable(tf.truncated_normal(shape, stddev=0.05)) # Create new biases, one for each filter. biases = tf.Variable(tf.constant(0.05, shape=[num_filters])) # Create the TensorFlow operation for convolution. # Note the strides are set to 1 in all dimensions. # The first and last stride must always be 1, # because the first is for the image-number and # the last is for the input-channel. # But e.g. strides=[1, 2, 2, 1] would mean that the filter # is moved 2 pixels across the x- and y-axis of the image. # The padding is set to 'SAME' which means the input image # is padded with zeroes so the size of the output is the same. layer = tf.nn.conv2d(input=input, filter=weights, strides=[1, stride, stride, 1], padding='SAME') # Add the biases to the results of the convolution. # A bias-value is added to each filter-channel. layer += biases # Rectified Linear Unit (ReLU). 
# It calculates max(x, 0) for each input pixel x. # This adds some non-linearity to the formula and allows us # to learn more complicated functions. layer = tf.nn.relu(layer) # Note that ReLU is normally executed before the pooling, # but since relu(max_pool(x)) == max_pool(relu(x)) we can # save 75% of the relu-operations by max-pooling first. # We return both the resulting layer and the filter-weights # because we will plot the weights later. return layer, weights ``` Precisely, it crashes at tf.nn.conv2d with this error: TypeError: Value passed to parameter 'input' has DataType uint8 not in list of allowed values: float16, float32", "response":"The image from your input pipeline is of type 'uint8'; you need to cast it to 'float32'. You can do this right after the image JPEG decoder: ``` image = tf.image.decode_jpeg(... image = tf.cast(image, tf.float32) ```", "best_answers_score":0.8, "library_name":"tensorflow", "question_url":"https:\/\/stackoverflow.com\/questions\/44822999\/tensorflow-typeerror-value-passed-to-parameter-input-has-datatype-uint8-not-in", "best_answers_votes":34, "question_length":3057, "response_length":215 }, { "question":"How to feed a placeholder? I am trying to implement a simple feed-forward network. However, I can't figure out how to feed a Placeholder. This example: ``` import tensorflow as tf num_input = 2 num_hidden = 3 num_output = 2 x = tf.placeholder(\"float\", [num_input, 1]) W_hidden = tf.Variable(tf.zeros([num_hidden, num_input])) W_out = tf.Variable(tf.zeros([num_output, num_hidden])) b_hidden = tf.Variable(tf.zeros([num_hidden])) b_out = tf.Variable(tf.zeros([num_output])) h = tf.nn.softmax(tf.matmul(W_hidden,x) + b_hidden) sess = tf.Session() with sess.as_default(): print h.eval() ``` Gives me the following error: ``` ...
results = self._do_run(target_list, unique_fetch_targets, feed_dict_string) File \"\/usr\/local\/lib\/python2.7\/dist-packages\/tensorflow\/python\/client\/session.py\", line 419, in _do_run e.code) tensorflow.python.framework.errors.InvalidArgumentError: You must feed a value for placeholder tensor 'Placeholder' with dtype float and shape dim { size: 2 } dim { size: 1 } [[Node: Placeholder = Placeholder[dtype=DT_FLOAT, shape=[2,1], _device=\"\/job:localhost\/replica:0\/task:0\/cpu:0\"]()]] Caused by op u'Placeholder', defined at: File \"\/home\/sfalk\/workspace\/SemEval2016\/java\/semeval2016-python\/slot1_tf.py\", line 8, in x = tf.placeholder(\"float\", [num_input, 1]) ... ``` I have tried ``` tf.assign([tf.Variable(1.0), tf.Variable(1.0)], x) tf.assign([1.0, 1.0], x) ``` but that does not work, apparently.", "response":"To feed a placeholder, you use the feed_dict argument to Session.run() (or Tensor.eval()). Let's say you have the following graph, with a placeholder: ``` x = tf.placeholder(tf.float32, shape=[2, 2]) y = tf.constant([[1.0, 1.0], [0.0, 1.0]]) z = tf.matmul(x, y) ``` If you want to evaluate z, you must feed a value for x. You can do this as follows: ``` sess = tf.Session() print sess.run(z, feed_dict={x: [[3.0, 4.0], [5.0, 6.0]]}) ``` For more information, see the documentation on feeding.", "best_answers_score":0.8, "library_name":"tensorflow", "question_url":"https:\/\/stackoverflow.com\/questions\/33810990\/how-to-feed-a-placeholder", "best_answers_votes":34, "question_length":1419, "response_length":492 }, { "question":"What is tape-based autograd in Pytorch? I understand autograd is used to imply automatic differentiation. But what exactly is tape-based autograd in Pytorch, and why are there so many discussions that affirm or deny it? For example: this In pytorch, there is no traditional sense of tape and this We don\u2019t really build gradient tapes per se. But graphs.
but not this Autograd is now a core torch package for automatic differentiation. It uses a tape based system for automatic differentiation. And for further reference, please compare it with GradientTape in Tensorflow.", "response":"There are different types of automatic differentiation, e.g. forward-mode, reverse-mode, hybrids; (more explanation). The tape-based autograd in Pytorch simply refers to the use of reverse-mode automatic differentiation, source. Reverse-mode auto diff is simply a technique used to compute gradients efficiently, and it happens to be used by backpropagation, source. Now, in PyTorch, Autograd is the core torch package for automatic differentiation. It uses a tape-based system: in the forward phase, the autograd tape will remember all the operations it executed, and in the backward phase, it will replay the operations. It is the same in TensorFlow: to differentiate automatically, it also needs to remember what operations happen in what order during the forward pass. Then, during the backward pass, TensorFlow traverses this list of operations in reverse order to compute gradients. Now, TensorFlow provides the tf.GradientTape API for automatic differentiation; that is, computing the gradient of a computation with respect to some inputs, usually tf.Variables. TensorFlow records relevant operations executed inside the context of a tf.GradientTape onto a tape. TensorFlow then uses that tape to compute the gradients of a recorded computation using reverse-mode differentiation. So, as we can see from the high-level viewpoint, both are doing the same operation. However, during a custom training loop, the forward pass and the calculation of the loss are more explicit in TensorFlow, as it uses an explicit tf.GradientTape API scope, whereas in PyTorch these operations are implicit, but it requires gradient tracking to be disabled temporarily while updating the training parameters (weights and biases).
For that, it uses the torch.no_grad API explicitly. In other words, TensorFlow's tf.GradientTape() is similar to PyTorch's loss.backward(). Below is a simplified form of the above statements in code. ``` # TensorFlow [w, b] = tf_model.trainable_variables for epoch in range(epochs): with tf.GradientTape() as tape: # forward passing and loss calculations # within explicit tape scope predictions = tf_model(x) loss = squared_error(predictions, y) # compute gradients (grad) w_grad, b_grad = tape.gradient(loss, tf_model.trainable_variables) # update training variables w.assign(w - w_grad * learning_rate) b.assign(b - b_grad * learning_rate) # PyTorch [w, b] = torch_model.parameters() for epoch in range(epochs): # forward pass and loss calculation # implicit tape-based AD y_pred = torch_model(inputs) loss = squared_error(y_pred, labels) # compute gradients (grad) loss.backward() # update training variables \/ parameters with torch.no_grad(): w -= w.grad * learning_rate b -= b.grad * learning_rate w.grad.zero_() b.grad.zero_() ``` FYI, in the above, the trainable variables (w, b) are manually updated in both frameworks, but we generally use an optimizer (e.g. adam) to do the job. ``` # TensorFlow # .... # update training variables optimizer.apply_gradients(zip([w_grad, b_grad], tf_model.trainable_variables)) # PyTorch # ....
# update training variables \/ parameters optimizer.step() optimizer.zero_grad() ```", "best_answers_score":0.8, "library_name":"tensorflow", "question_url":"https:\/\/stackoverflow.com\/questions\/64856195\/what-is-tape-based-autograd-in-pytorch", "best_answers_votes":25, "question_length":570, "response_length":3145 }, { "question":"Looping over a tensor I am trying to process a tensor of variable size, in a python way that would be something like: ``` # X is of shape [m, n] for x in X: process(x) ``` I have tried to use tf.scan; the thing is that I want to process every sub-tensor, so I have tried to use a nested scan, but I was unable to do it, because tf.scan works with an accumulator; if none is found, it will take the first entry of the elems as the initializer, which I don't want to do. As an example, suppose I want to add one to every element of my tensor (this is just an example), and I want to process it element by element. If I run the code below, I will only have one added to a sub-tensor, because scan considers the first tensor as the initializer, along with the first element of every sub-tensor. ``` import numpy as np import tensorflow as tf batch_x = np.random.randint(0, 10, size=(5, 10)) x = tf.placeholder(tf.float32, shape=[None, 10]) def inner_loop(x_in): return tf.scan(lambda _, x_: x_ + 1, x_in) outer_loop = tf.scan(lambda _, input_: inner_loop(input_), x, back_prop=True) with tf.Session() as sess: sess.run(tf.global_variables_initializer()) rs = sess.run(outer_loop, feed_dict={x: batch_x}) ``` Any suggestions?", "response":"To loop over a tensor you could try tf.unstack Unpacks the given dimension of a rank-R tensor into rank-(R-1) tensors.
So adding 1 to each tensor would look something like: ``` import tensorflow as tf import numpy as np x = tf.placeholder(tf.float32, shape=(None, 10)) x_unpacked = tf.unstack(x) # defaults to axis 0, returns a list of tensors processed = [] # this will be the list of processed tensors for t in x_unpacked: # do whatever result_tensor = t + 1 processed.append(result_tensor) output = tf.concat(processed, 0) with tf.Session() as sess: print(sess.run([output], feed_dict={x: np.zeros((5, 10))})) ``` Obviously you can further unpack each tensor from the list to process it, down to single elements. To avoid lots of nested unpacking though, you could maybe try flattening x with tf.reshape(x, [-1]) first, and then loop over it like ``` flattened_unpacked = tf.unstack(tf.reshape(x, [-1])) for elem in flattened_unpacked: process(elem) ``` In this case elem is a scalar.", "best_answers_score":0.8, "library_name":"tensorflow", "question_url":"https:\/\/stackoverflow.com\/questions\/43327668\/looping-over-a-tensor", "best_answers_votes":17, "question_length":1208, "response_length":967 }, { "question":"Running TensorFlow on a Slurm Cluster? I could get access to a computing cluster, specifically one node with two 12-Core CPUs, which is running with Slurm Workload Manager. I would like to run TensorFlow on that system but unfortunately I was not able to find any information about how to do this or if this is even possible. I am new to this but as far as I understand it, I would have to run TensorFlow by creating a Slurm job and can not directly execute python\/tensorflow via ssh. Does anyone have an idea, a tutorial or any kind of source on this topic?", "response":"It's relatively simple. Under the simplifying assumptions that you request one process per host, slurm will provide you with all the information you need in environment variables, specifically SLURM_PROCID, SLURM_NPROCS and SLURM_NODELIST.
For example, you can initialize your task index, the number of tasks and the nodelist as follows: ``` from hostlist import expand_hostlist task_index = int( os.environ['SLURM_PROCID'] ) n_tasks = int( os.environ['SLURM_NPROCS'] ) tf_hostlist = [ (\"%s:22222\" % host) for host in expand_hostlist( os.environ['SLURM_NODELIST']) ] ``` Note that slurm gives you a host list in its compressed format (e.g., \"myhost[11-99]\"), that you need to expand. I do that with module hostlist by Kent Engstr\u00f6m, available here https:\/\/pypi.python.org\/pypi\/python-hostlist At that point, you can go right ahead and create your TensorFlow cluster specification and server with the information you have available, e.g.: ``` cluster = tf.train.ClusterSpec( {\"your_taskname\" : tf_hostlist } ) server = tf.train.Server( cluster.as_cluster_def(), job_name = \"your_taskname\", task_index = task_index ) ``` And you're set! You can now perform TensorFlow node placement on a specific host of your allocation with the usual syntax: ``` for idx in range(n_tasks): with tf.device(\"\/job:your_taskname\/task:%d\" % idx ): ... ``` A flaw with the code reported above is that all your jobs will instruct Tensorflow to install servers listening at fixed port 22222. If multiple such jobs happen to be scheduled to the same node, the second one will fail to listen to 22222. A better solution is to let slurm reserve ports for each job. You need to bring your slurm administrator on board and ask him to configure slurm so it allows you to ask for ports with the --resv-ports option. In practice, this requires asking them to add a line like the following in their slurm.conf: ``` MpiParams=ports=15000-19999 ``` Before you bug your slurm admin, check what options are already configured, e.g., with: ``` scontrol show config | grep MpiParams ``` If your site already uses an old version of OpenMPI, there's a chance an option like this is already in place. 
Then, amend my first snippet of code as follows: ``` from hostlist import expand_hostlist task_index = int( os.environ['SLURM_PROCID'] ) n_tasks = int( os.environ['SLURM_NPROCS'] ) port = int( os.environ['SLURM_STEP_RESV_PORTS'].split('-')[0] ) tf_hostlist = [ (\"%s:%s\" % (host,port)) for host in expand_hostlist( os.environ['SLURM_NODELIST']) ] ``` Good luck!", "best_answers_score":0.8, "library_name":"tensorflow", "question_url":"https:\/\/stackoverflow.com\/questions\/34826736\/running-tensorflow-on-a-slurm-cluster", "best_answers_votes":27, "question_length":551, "response_length":2518 }, { "question":"TensorFlow Lite C++ API example for inference I am trying to get a TensorFlow Lite example to run on a machine with an ARM Cortex-A72 processor. Unfortunately, I wasn't able to deploy a test model due to the lack of examples on how to use the C++ API. I will try to explain what I have achieved so far. Create the tflite model I have created a simple linear regression model and converted it, which should approximate the function f(x) = 2x - 1. I got this code snippet from some tutorial, but I am unable to find it anymore. ``` import tensorflow as tf import numpy as np from tensorflow import keras from tensorflow.contrib import lite model = keras.Sequential([keras.layers.Dense(units=1, input_shape=[1])]) model.compile(optimizer='sgd', loss='mean_squared_error') xs = np.array([ -1.0, 0.0, 1.0, 2.0, 3.0, 4.0], dtype=float) ys = np.array([ -3.0, -1.0, 1.0, 3.0, 5.0, 7.0], dtype=float) model.fit(xs, ys, epochs=500) print(model.predict([10.0])) keras_file = 'linear.h5' keras.models.save_model(model, keras_file) converter = lite.TocoConverter.from_keras_model_file(keras_file) tflite_model = converter.convert() open('linear.tflite', 'wb').write(tflite_model) ``` This creates a binary called linear.tflite, which I should be able to load. Compile TensorFlow Lite for my machine TensorFlow Lite comes with a script for the compilation on machines with the aarch64 architecture. 
I followed the guide here to do this, even though I had to modify the Makefile slightly. Note that I compiled this natively on my target system. This created a static library called libtensorflow-lite.a. Problem: Inference I tried to follow the tutorial on the site here, and simply pasted the code snippets from loading and running the model together, e.g. ``` class FlatBufferModel { \/\/ Build a model based on a file. Return a nullptr in case of failure. static std::unique_ptr BuildFromFile( const char* filename, ErrorReporter* error_reporter); \/\/ Build a model based on a pre-loaded flatbuffer. The caller retains \/\/ ownership of the buffer and should keep it alive until the returned object \/\/ is destroyed. Return a nullptr in case of failure. static std::unique_ptr BuildFromBuffer( const char* buffer, size_t buffer_size, ErrorReporter* error_reporter); }; tflite::FlatBufferModel model(\".\/linear.tflite\"); tflite::ops::builtin::BuiltinOpResolver resolver; std::unique_ptr interpreter; tflite::InterpreterBuilder(*model, resolver)(&interpreter); \/\/ Resize input tensors, if desired. interpreter->AllocateTensors(); float* input = interpreter->typed_input_tensor(0); \/\/ Fill `input`. interpreter->Invoke(); float* output = interpreter->typed_output_tensor(0); ``` When trying to compile this via ``` g++ demo.cpp libtensorflow-lite.a ``` I get a load of errors.
Log: ``` root@localhost:\/inference# g++ demo.cpp libtensorflow-lite.a demo.cpp:3:15: error: \u2018unique_ptr\u2019 in namespace \u2018std\u2019 does not name a template type static std::unique_ptr BuildFromFile( ^~~~~~~~~~ demo.cpp:10:15: error: \u2018unique_ptr\u2019 in namespace \u2018std\u2019 does not name a template type static std::unique_ptr BuildFromBuffer( ^~~~~~~~~~ demo.cpp:16:1: error: \u2018tflite\u2019 does not name a type tflite::FlatBufferModel model(\".\/linear.tflite\"); ^~~~~~ demo.cpp:18:1: error: \u2018tflite\u2019 does not name a type tflite::ops::builtin::BuiltinOpResolver resolver; ^~~~~~ demo.cpp:19:6: error: \u2018unique_ptr\u2019 in namespace \u2018std\u2019 does not name a template type std::unique_ptr interpreter; ^~~~~~~~~~ demo.cpp:20:1: error: \u2018tflite\u2019 does not name a type tflite::InterpreterBuilder(*model, resolver)(&interpreter); ^~~~~~ demo.cpp:23:1: error: \u2018interpreter\u2019 does not name a type interpreter->AllocateTensors(); ^~~~~~~~~~~ demo.cpp:25:16: error: \u2018interpreter\u2019 was not declared in this scope float* input = interpreter->typed_input_tensor(0); ^~~~~~~~~~~ demo.cpp:25:48: error: expected primary-expression before \u2018float\u2019 float* input = interpreter->typed_input_tensor(0); ^~~~~ demo.cpp:28:1: error: \u2018interpreter\u2019 does not name a type interpreter->Invoke(); ^~~~~~~~~~~ demo.cpp:30:17: error: \u2018interpreter\u2019 was not declared in this scope float* output = interpreter->typed_output_tensor(0); ^~~~~~~~~~~ demo.cpp:30:50: error: expected primary-expression before \u2018float\u2019 float* output = interpreter->typed_output_tensor(0); ``` I am relatively new to C++, so I may be missing something obvious here. It seems, however, that other people have trouble with the C++ API as well (look at this GitHub issue). Has anybody also stumbled across this and got it to run? The most important aspects for me to cover would be: 1.) 
Where and how do I define the signature, so that the model knows what to treat as inputs and outputs? 2.) Which headers do I have to include? Thanks! EDIT Thanks to @Alex Cohn, the linker was able to find the correct headers. I also realized that I probably do not need to redefine the flatbuffers class, so I ended up with this code (minor change is marked): ``` #include \"tensorflow\/lite\/interpreter.h\" #include \"tensorflow\/lite\/kernels\/register.h\" #include \"tensorflow\/lite\/model.h\" #include \"tensorflow\/lite\/tools\/gen_op_registration.h\" auto model = tflite::FlatBufferModel::BuildFromFile(\"linear.tflite\"); \/\/CHANGED tflite::ops::builtin::BuiltinOpResolver resolver; std::unique_ptr interpreter; tflite::InterpreterBuilder(*model, resolver)(&interpreter); \/\/ Resize input tensors, if desired. interpreter->AllocateTensors(); float* input = interpreter->typed_input_tensor(0); \/\/ Fill `input`. interpreter->Invoke(); float* output = interpreter->typed_output_tensor(0); ``` This reduces the number of errors greatly, but I am not sure how to resolve the rest: ``` root@localhost:\/inference# g++ demo.cpp -I\/tensorflow demo.cpp:10:34: error: expected \u2018)\u2019 before \u2018,\u2019 token tflite::InterpreterBuilder(*model, resolver)(&interpreter); ^ demo.cpp:10:44: error: expected initializer before \u2018)\u2019 token tflite::InterpreterBuilder(*model, resolver)(&interpreter); ^ demo.cpp:13:1: error: \u2018interpreter\u2019 does not name a type interpreter->AllocateTensors(); ^~~~~~~~~~~ demo.cpp:18:1: error: \u2018interpreter\u2019 does not name a type interpreter->Invoke(); ^~~~~~~~~~~ ``` How do I have to tackle these? It seems that I have to define my own resolver, but I have no clue on how to do that.", "response":"I finally got it to run. 
Considering my directory structure looks like this: ``` \/(root) \/tensorflow # whole tf repo \/demo demo.cpp linear.tflite libtensorflow-lite.a ``` I changed demo.cpp to ``` #include #include \"tensorflow\/lite\/interpreter.h\" #include \"tensorflow\/lite\/kernels\/register.h\" #include \"tensorflow\/lite\/model.h\" #include \"tensorflow\/lite\/tools\/gen_op_registration.h\" int main(){ std::unique_ptr model = tflite::FlatBufferModel::BuildFromFile(\"linear.tflite\"); if(!model){ printf(\"Failed to mmap model\\n\"); exit(0); } tflite::ops::builtin::BuiltinOpResolver resolver; std::unique_ptr interpreter; tflite::InterpreterBuilder(*model.get(), resolver)(&interpreter); \/\/ Resize input tensors, if desired. interpreter->AllocateTensors(); float* input = interpreter->typed_input_tensor(0); \/\/ Dummy input for testing *input = 2.0; interpreter->Invoke(); float* output = interpreter->typed_output_tensor(0); printf(\"Result is: %f\\n\", *output); return 0; } ``` Also, I had to adapt my compile command (I had to install flatbuffers manually to make it work). What worked for me was: ``` g++ demo.cpp -I\/tensorflow -L\/demo -ltensorflow-lite -lrt -ldl -pthread -lflatbuffers -o demo ``` Thanks to @AlexCohn for getting me on the right track!", "best_answers_score":0.8, "library_name":"tensorflow", "question_url":"https:\/\/stackoverflow.com\/questions\/56837288\/tensorflow-lite-c-api-example-for-inference", "best_answers_votes":17, "question_length":6224, "response_length":1245 }, { "question":"Configuring Tensorflow to use all CPU's Reading : https:\/\/www.tensorflow.org\/versions\/r0.10\/resources\/faq.html it states : Does TensorFlow make use of all the devices (GPUs and CPUs) available on my machine? TensorFlow supports multiple GPUs and CPUs. See the how-to documentation on using GPUs with TensorFlow for details of how TensorFlow assigns operations to devices, and the CIFAR-10 tutorial for an example model that uses multiple GPUs. 
Note that TensorFlow only uses GPU devices with a compute capability greater than 3.5. Does this mean Tensorflow can automatically make use of all CPU's on a given machine or does it need to be explicitly configured?", "response":"CPUs are used via a \"device\" which is just a threadpool. You can control the number of threads if you feel like you need more: ``` sess = tf.Session(config=tf.ConfigProto( intra_op_parallelism_threads=NUM_THREADS)) ```", "best_answers_score":0.8, "library_name":"tensorflow", "question_url":"https:\/\/stackoverflow.com\/questions\/39395198\/configuring-tensorflow-to-use-all-cpus", "best_answers_votes":25, "question_length":551, "response_length":218 }, { "question":"tf.data vs keras.utils.sequence performance I'm trying to decide whether to use the existing keras.utils.sequence module or to switch to tf.data. From what I understand, tf.data optimizes performance by overlapping training on GPU with pre-processing on the CPU. But how does that compare to keras.utils.sequence and the keras data generator? From what I read here it seems that it's doing the same thing. Is there anything to gain by switching to tf.data ?", "response":"Both approaches overlap input data preprocessing with model training. keras.utils.sequence does this by running multiple Python processes, while tf.data does this by running multiple C++ threads. If your preprocessing is being done by a non-TensorFlow Python library such as PIL, keras.utils.sequence may work better for you since multiple processes are needed to avoid contention on Python's global interpreter lock. If you can express your preprocessing using TensorFlow operations, I would expect tf.data to give better performance.
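To make this concrete, here is a minimal sketch (my own illustration, not part of the original answer; it assumes TensorFlow 2.x with eager execution) of expressing preprocessing as TF operations inside a tf.data pipeline. num_parallel_calls controls how many of tf.data's C++ threads run the map function, and prefetch overlaps preprocessing with the consumer; on TF versions before 2.4, tf.data.AUTOTUNE is spelled tf.data.experimental.AUTOTUNE.

```python
import tensorflow as tf

def preprocess(x):
    # toy stand-in for real preprocessing, expressed purely as TF ops
    return tf.cast(x, tf.float32) * 2.0

dataset = (
    tf.data.Dataset.from_tensor_slices([1, 2, 3, 4])
    .map(preprocess, num_parallel_calls=tf.data.AUTOTUNE)  # parallel C++ threads
    .prefetch(tf.data.AUTOTUNE)  # overlap preprocessing with the consumer
)

# map() is deterministic by default, so element order is preserved
values = [v.item() for v in dataset.as_numpy_iterator()]
print(values)
```

By contrast, a keras.utils.Sequence subclass would run the same logic in Python worker processes (e.g. fit(..., use_multiprocessing=True)), paying inter-process serialization overhead instead of using C++ threads.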
Some other things to consider: tf.data is the recommended approach for building scalable input pipelines for tf.keras; tf.data is used more widely than keras.utils.sequence, so it may be easier to search for help with getting good performance.", "best_answers_score":0.8, "library_name":"tensorflow", "question_url":"https:\/\/stackoverflow.com\/questions\/55852831\/tf-data-vs-keras-utils-sequence-performance", "best_answers_votes":24, "question_length":457, "response_length":218 }, { "question":"No Module Named '_pywrap_tensorflow_internal' While trying to validate the installation of tensorflow-gpu, I get an ImportError when trying to execute \"import tensorflow as tf\". I am using a Quadro K620 on Windows 7. Tensorflow was installed using pip. The following is the stack trace: ``` Microsoft Windows [Version 6.1.7601] Copyright (c) 2009 Microsoft Corporation. All rights reserved. C:\\Users\\aagarwal>python Python 3.5.2 (v3.5.2:4def2a2901a5, Jun 25 2016, 22:18:55) [MSC v.1900 64 bit (AM D64)] on win32 Type \"help\", \"copyright\", \"credits\" or \"license\" for more information. >>> import tensorflow as tf Traceback (most recent call last): File \"C:\\Users\\aagarwal\\AppData\\Local\\Programs\\Python\\Python35\\lib\\site-packag es\\tensorflow\\python\\pywrap_tensorflow_internal.py\", line 18, in swig_import_hel per return importlib.import_module(mname) File \"C:\\Users\\aagarwal\\AppData\\Local\\Programs\\Python\\Python35\\lib\\importlib\\_ _init__.py\", line 126, in import_module return _bootstrap._gcd_import(name[level:], package, level) File \"\", line 986, in _gcd_import File \"\", line 969, in _find_and_load File \"\", line 958, in _find_and_load_unlocked File \"\", line 666, in _load_unlocked File \"\", line 577, in module_from_spec File \"\", line 906, in create_module File \"\", line 222, in _call_with_frames_removed ImportError: DLL load failed: The specified module could not be found.
During handling of the above exception, another exception occurred: Traceback (most recent call last): File \"C:\\Users\\aagarwal\\AppData\\Local\\Programs\\Python\\Python35\\lib\\site-packag es\\tensorflow\\python\\pywrap_tensorflow.py\", line 41, in from tensorflow.python.pywrap_tensorflow_internal import * File \"C:\\Users\\aagarwal\\AppData\\Local\\Programs\\Python\\Python35\\lib\\site-packag es\\tensorflow\\python\\pywrap_tensorflow_internal.py\", line 21, in _pywrap_tensorflow_internal = swig_import_helper() File \"C:\\Users\\aagarwal\\AppData\\Local\\Programs\\Python\\Python35\\lib\\site-packag es\\tensorflow\\python\\pywrap_tensorflow_internal.py\", line 20, in swig_import_hel per return importlib.import_module('_pywrap_tensorflow_internal') File \"C:\\Users\\aagarwal\\AppData\\Local\\Programs\\Python\\Python35\\lib\\importlib\\_ _init__.py\", line 126, in import_module return _bootstrap._gcd_import(name[level:], package, level) ImportError: No module named '_pywrap_tensorflow_internal' During handling of the above exception, another exception occurred: Traceback (most recent call last): File \"\", line 1, in File \"C:\\Users\\aagarwal\\AppData\\Local\\Programs\\Python\\Python35\\lib\\site-packag es\\tensorflow\\__init__.py\", line 24, in from tensorflow.python import * File \"C:\\Users\\aagarwal\\AppData\\Local\\Programs\\Python\\Python35\\lib\\site-packag es\\tensorflow\\python\\__init__.py\", line 51, in from tensorflow.python import pywrap_tensorflow File \"C:\\Users\\aagarwal\\AppData\\Local\\Programs\\Python\\Python35\\lib\\site-packag es\\tensorflow\\python\\pywrap_tensorflow.py\", line 52, in raise ImportError(msg) ImportError: Traceback (most recent call last): File \"C:\\Users\\aagarwal\\AppData\\Local\\Programs\\Python\\Python35\\lib\\site-packag es\\tensorflow\\python\\pywrap_tensorflow_internal.py\", line 18, in swig_import_hel per return importlib.import_module(mname) File 
\"C:\\Users\\aagarwal\\AppData\\Local\\Programs\\Python\\Python35\\lib\\importlib\\_ _init__.py\", line 126, in import_module return _bootstrap._gcd_import(name[level:], package, level) File \"\", line 986, in _gcd_import File \"\", line 969, in _find_and_load File \"\", line 958, in _find_and_load_unlocked File \"\", line 666, in _load_unlocked File \"\", line 577, in module_from_spec File \"\", line 906, in create_module File \"\", line 222, in _call_with_frames_removed ImportError: DLL load failed: The specified module could not be found. During handling of the above exception, another exception occurred: Traceback (most recent call last): File \"C:\\Users\\aagarwal\\AppData\\Local\\Programs\\Python\\Python35\\lib\\site-packag es\\tensorflow\\python\\pywrap_tensorflow.py\", line 41, in from tensorflow.python.pywrap_tensorflow_internal import * File \"C:\\Users\\aagarwal\\AppData\\Local\\Programs\\Python\\Python35\\lib\\site-packag es\\tensorflow\\python\\pywrap_tensorflow_internal.py\", line 21, in _pywrap_tensorflow_internal = swig_import_helper() File \"C:\\Users\\aagarwal\\AppData\\Local\\Programs\\Python\\Python35\\lib\\site-packag es\\tensorflow\\python\\pywrap_tensorflow_internal.py\", line 20, in swig_import_hel per return importlib.import_module('_pywrap_tensorflow_internal') File \"C:\\Users\\aagarwal\\AppData\\Local\\Programs\\Python\\Python35\\lib\\importlib\\_ _init__.py\", line 126, in import_module return _bootstrap._gcd_import(name[level:], package, level) ImportError: No module named '_pywrap_tensorflow_internal' Failed to load the native TensorFlow runtime. See https:\/\/www.tensorflow.org\/install\/install_sources#common_installation_probl ems for some common reasons and solutions. Include the entire stack trace above this error message when asking for help. 
>>> ``` I have looked at multiple other stack overflow posts which suggest things like correcting the path, but I have not been able to solve this issue.", "response":"I came across the same issue today. Please switch to the cuDNN v5.1 Library for Windows instead, as @mickdelaney suggested, and then try to: 1. Check the environment settings of CUDA; normally all the settings of CUDA have been added to the Windows environment. 2. Copy the files in bin, lib and include of cuDNN to bin, lib and include of CUDA respectively. Normally the directory is C:\\Program Files\\NVIDIA GPU Computing Toolkit\\CUDA. And then you can import tensorflow and run your code. Good luck!", "best_answers_score":0.8, "library_name":"tensorflow", "question_url":"https:\/\/stackoverflow.com\/questions\/44080677\/no-module-named-pywrap-tensorflow-internal", "best_answers_votes":7, "question_length":5070, "response_length":473 }, { "question":"Why was Eigen chosen for TensorFlow? [closed] Closed. This question is seeking recommendations for software libraries, tutorials, tools, books, or other off-site resources. It does not meet Stack Overflow guidelines. It is not currently accepting answers. We don\u2019t allow questions seeking recommendations for software libraries, tutorials, tools, books, or other off-site resources. You can edit the question so it can be answered with facts and citations. Closed 8 years ago. Improve this question The TensorFlow white paper mentions that Eigen is used. Are there public explanations for how Eigen was chosen, and are they motivation for using Eigen in TensorFlow C++ op kernels?", "response":"I think that one of the key features that drove the use of Eigen in the first place is that Eigen features its own highly optimized matrix product kernels, whereas all other competitors have to be linked to some BLAS libraries. Moreover, the code of Eigen's product kernel is C++ with easy access to low-level internal kernels, so it was 'easy' for them to tweak and extend it to match their needs.
This way Google has been able to develop the Tensor module with high CPU performance in a pure header-only fashion. The support for CUDA and now OpenCL via SyCL came later; those are not intrinsic features of Eigen that drove the initial choice.", "best_answers_score":0.8, "library_name":"tensorflow", "question_url":"https:\/\/stackoverflow.com\/questions\/41518379\/why-was-eigen-chosen-for-tensorflow", "best_answers_votes":22, "question_length":680, "response_length":645 }, { "question":"TensorFlow strings: what they are and how to work with them When I read a file with tf.read_file I get something with type tf.string. Documentation says only that it is \"Variable length byte arrays. Each element of a Tensor is a byte array.\" (https:\/\/www.tensorflow.org\/versions\/r0.10\/resources\/dims_types.html). I have no idea how to interpret this. I can do nothing with this type. In usual Python you can get elements by index like my_string[:4], but when I run the following code I get an error. ``` import tensorflow as tf import numpy as np x = tf.constant(\"This is string\") y = x[:4] init = tf.initialize_all_variables() sess = tf.Session() sess.run(init) result = sess.run(y) print result ``` It says ``` File \"\/usr\/local\/lib\/python2.7\/dist-packages\/tensorflow\/python\/framework\/tensor_shape.py\", line 621, in assert_has_rank raise ValueError(\"Shape %s must have rank %d\" % (self, rank)) ValueError: Shape () must have rank 1 ``` Also I cannot convert my string to a tf.float32 tensor. It is a .flo file and it has the magic header \"PIEH\". This numpy code successfully converts such a header into a number (see example here https:\/\/stackoverflow.com\/a\/28016469\/4744283) but I can't do that with tensorflow. I tried tf.string_to_number(string, out_type=tf.float32) but it says ``` tensorflow.python.framework.errors.InvalidArgumentError: StringToNumberOp could not correctly convert string: PIEH ``` So, what is a string? What is its shape? How can I at least get part of the string?
I suppose that if I can get part of it I can just skip the \"PIEH\" part. UPD: I forgot to say that tf.slice(string, [0], [4]) also doesn't work, with the same error.", "response":"Unlike Python, where a string can be treated as a list of characters for the purposes of slicing and such, TensorFlow's tf.strings are indivisible values. For instance, x below is a Tensor with shape (2,), each element of which is a variable-length string. ``` x = tf.constant([\"This is a string\", \"This is another string\"]) ``` However, to achieve what you want, TensorFlow provides the tf.decode_raw operator. It takes a tf.string tensor as input, but can decode the string into any other primitive data type. For instance, to interpret the string as a tensor of characters, you can do the following: ``` x = tf.constant(\"This is string\") x = tf.decode_raw(x, tf.uint8) y = x[:4] sess = tf.InteractiveSession() print(y.eval()) # prints [ 84 104 105 115] ```", "best_answers_score":0.8, "library_name":"tensorflow", "question_url":"https:\/\/stackoverflow.com\/questions\/38902433\/tensorflow-strings-what-they-are-and-how-to-work-with-them", "best_answers_votes":20, "question_length":1624, "response_length":756 }, { "question":"What is a good explanation of how to read the histogram feature of TensorBoard? The question is simple: how do you read those graphs? I read their explanation and it doesn't make sense to me. I was reading TensorFlow's newly updated readme file for TensorBoard, and in it, it tries to explain what a \"histogram\" is. First it clarifies that it's not really a histogram: Right now, its name is a bit of a misnomer, as it doesn't show histograms; instead, it shows some high-level statistics on a distribution. I am trying to figure out what their description is actually trying to say.
Right now I am trying to parse the specific sentence: Each line on the chart represents a percentile in the distribution over the data: for example, the bottom line shows how the minimum value has changed over time, and the line in the middle shows how the median has changed. The first question I have is, what do they mean by \"each line\"? There are horizontal axes, and there are lines that make a square grid on the graph, or maybe the plotted lines themselves. Consider a screen shot from the TensorBoard example: What are they referring to with \"lines\"? In the above example what are the lines and percentiles that they are talking about? Then the readme file tries to provide more detail with an example: Reading from top to bottom, the lines have the following meaning: [maximum, 93%, 84%, 69%, 50%, 31%, 16%, 7%, minimum] However, it's unclear to me what they are talking about. What are the lines and what are the percentiles? It seems that they are trying to replace this in the future, but meanwhile, I am stuck with this. Can someone help me understand how to use this?", "response":"The lines that they are talking about are described below: as for the meaning of percentile, check out the Wikipedia article; basically, the 93rd percentile means that 93% of the values are situated below the 93rd percentile line", "best_answers_score":0.8, "library_name":"tensorflow", "question_url":"https:\/\/stackoverflow.com\/questions\/38149622\/what-is-a-good-explanation-of-how-to-read-the-histogram-feature-of-tensorboard", "best_answers_votes":18, "question_length":1643, "response_length":229 }, { "question":"How to set the input of a Keras layer with a Tensorflow tensor? In my previous question, I used Keras' Layer.set_input() to connect my Tensorflow pre-processing output tensor to my Keras model's input. However, this method has been removed after Keras version 1.1.1. How can I achieve this in newer Keras versions?
Example: ``` # Tensorflow pre-processing raw_input = tf.placeholder(tf.string) ### some TF operations on raw_input ### tf_embedding_input = ... # pre-processing output tensor # Keras model model = Sequential() e = Embedding(max_features, 128, input_length=maxlen) ### THIS DOESN'T WORK ANYMORE ### e.set_input(tf_embedding_input) ################################ model.add(e) model.add(LSTM(128, activation='sigmoid')) model.add(Dense(num_classes, activation='softmax')) ```", "response":"After you are done with pre-processing, you can add the tensor as an input layer via the tensor param of Input. So in your case: ``` tf_embedding_input = ... # pre-processing output tensor # Keras model model = Sequential() model.add(Input(tensor=tf_embedding_input)) model.add(Embedding(max_features, 128, input_length=maxlen)) ```
``` \/usr\/lib\/python2.7\/dist-packages \/usr\/local\/lib\/python2.7\/dist-packages ``` Could anyone tell me the differences between them? I noticed that every time I use sudo pip install, the package will be installed to \/usr\/local\/lib\/python2.7\/dist-packages; could I instead install packages into \/usr\/lib\/python2.7\/dist-packages using pip install? Thanks a lot for your help in advance!", "response":"It could be because you didn't install Tensorflow using pip, but using python setup.py develop instead as your link shows. pip uninstall is likely to fail if the package is installed using python setup.py install as they do not leave behind metadata to determine what files were installed. Therefore, you should be able to uninstall Tensorflow with the option -u or --uninstall of develop ``` cd \/home\/AIJ\/tensorflow\/_python_build python setup.py develop --uninstall ``` To answer the second (interesting) question about the two dist-packages directories created under \/usr\/lib\/python2.7 and \/usr\/local\/lib\/python2.7, there already exists a great Stack Overflow answer on the topic. PS: Tensorflow is a good library, you should consider not uninstalling it :)
Question: Is there a way to control the number of threads\/processes for the flat_map function? Or is there a way to use map in combination with flat_map and still specify the number of parallel calls? Note that it is of crucial importance to run multiple threads in parallel, as I intend to run heavy pre-processing on the CPU before data enters the queue. There are two (here and here) related posts on GitHub, but I don't think they answer this question. Here is a minimal code example of my use-case for illustration: ``` with tf.Graph().as_default(): data = tf.ones(shape=(10, 512), dtype=tf.float32, name=\"data\") input_tensors = (data,) def pre_processing_func(data_): # normally I would do data-augmentation here results = (tf.expand_dims(data_, axis=0),) return tf.data.Dataset.from_tensor_slices(results) dataset_source = tf.data.Dataset.from_tensor_slices(input_tensors) dataset = dataset_source.flat_map(pre_processing_func) # do something with 'dataset' ```", "response":"To the best of my knowledge, at the moment flat_map does not offer parallelism options. Given that the bulk of the computation is done in pre_processing_func, what you might use as a workaround is a parallel map call followed by some buffering, and then using a flat_map call with an identity lambda function that takes care of flattening the output. In code: ```python NUM_THREADS = 5 BUFFER_SIZE = 1000 def pre_processing_func(data_): # data-augmentation here # generate new samples starting from the sample `data_` artificial_samples = generate_from_sample(data_) return artificial_samples dataset_source = (tf.data.Dataset.from_tensor_slices(input_tensors). map(pre_processing_func, num_parallel_calls=NUM_THREADS). prefetch(BUFFER_SIZE). flat_map(lambda *x : tf.data.Dataset.from_tensor_slices(x)).
shuffle(BUFFER_SIZE)) # my addition, probably necessary though ``` Note (to myself and whoever will try to understand the pipeline): Since pre_processing_func generates an arbitrary number of new samples starting from the initial sample (organised in matrices of shape (?, 512)), the flat_map call is necessary to turn all the generated matrices into Datasets containing single samples (hence the tf.data.Dataset.from_tensor_slices(x) in the lambda) and then flatten all these datasets into one big Dataset containing individual samples. It's probably a good idea to .shuffle() that dataset, or generated samples will be packed together.", "best_answers_score":0.8, "library_name":"tensorflow", "question_url":"https:\/\/stackoverflow.com\/questions\/47411383\/parallel-threads-with-tensorflow-dataset-api-and-flat-map", "best_answers_votes":15, "question_length":1446, "response_length":1440 }, { "question":"How exactly does LSTMCell from TensorFlow operates? I try to reproduce results generated by the LSTMCell from TensorFlow to be sure that I know what it does. 
Here is my TensorFlow code: ``` num_units = 3 lstm = tf.nn.rnn_cell.LSTMCell(num_units = num_units) timesteps = 7 num_input = 4 X = tf.placeholder(\"float\", [None, timesteps, num_input]) x = tf.unstack(X, timesteps, 1) outputs, states = tf.contrib.rnn.static_rnn(lstm, x, dtype=tf.float32) sess = tf.Session() init = tf.global_variables_initializer() sess.run(init) x_val = np.random.normal(size = (1, 7, num_input)) res = sess.run(outputs, feed_dict = {X:x_val}) for e in res: print e ``` Here is its output: ``` [[-0.13285545 -0.13569424 -0.23993783]] [[-0.04818152 0.05927373 0.2558436 ]] [[-0.13818116 -0.13837864 -0.15348436]] [[-0.232219 0.08512601 0.05254192]] [[-0.20371495 -0.14795329 -0.2261929 ]] [[-0.10371902 -0.0263292 -0.0914975 ]] [[0.00286371 0.16377522 0.059478 ]] ``` And here is my own implementation: ``` n_steps, _ = X.shape h = np.zeros(shape = self.hid_dim) c = np.zeros(shape = self.hid_dim) for i in range(n_steps): x = X[i,:] vec = np.concatenate([x, h]) #vec = np.concatenate([h, x]) gs = np.dot(vec, self.kernel) + self.bias g1 = gs[0*self.hid_dim : 1*self.hid_dim] g2 = gs[1*self.hid_dim : 2*self.hid_dim] g3 = gs[2*self.hid_dim : 3*self.hid_dim] g4 = gs[3*self.hid_dim : 4*self.hid_dim] I = vsigmoid(g1) N = np.tanh(g2) F = vsigmoid(g3) O = vsigmoid(g4) c = c*F + I*N h = O * np.tanh(c) print h ``` And here is its output: ``` [-0.13285543 -0.13569425 -0.23993781] [-0.01461723 0.08060743 0.30876374] [-0.13142865 -0.14921292 -0.16898363] [-0.09892188 0.11739943 0.08772941] [-0.15569218 -0.15165766 -0.21918869] [-0.0480604 -0.00918626 -0.06084118] [0.0963612 0.1876516 0.11888081] ``` As you might notice I was able to reproduce the first hidden vector, but the second one and all the following ones are different. What am I missing?", "response":"Tensorflow uses glorot_uniform() function to initialize the lstm kernel, which samples weights from a random uniform distribution. 
We need to fix a value for the kernel to get reproducible results: ``` import tensorflow as tf import numpy as np np.random.seed(0) timesteps = 7 num_input = 4 x_val = np.random.normal(size = (1, timesteps, num_input)) num_units = 3 def glorot_uniform(shape): limit = np.sqrt(6.0 \/ (shape[0] + shape[1])) return np.random.uniform(low=-limit, high=limit, size=shape) kernel_init = glorot_uniform((num_input + num_units, 4 * num_units)) ``` My implementation of the LSTMCell (well, actually it's just slightly rewritten tensorflow's code): ``` def sigmoid(x): return 1. \/ (1 + np.exp(-x)) class LSTMCell(): \"\"\"Long short-term memory unit (LSTM) recurrent network cell. \"\"\" def __init__(self, num_units, initializer=glorot_uniform, forget_bias=1.0, activation=np.tanh): \"\"\"Initialize the parameters for an LSTM cell. Args: num_units: int, The number of units in the LSTM cell. initializer: The initializer to use for the kernel matrix. Default: glorot_uniform forget_bias: Biases of the forget gate are initialized by default to 1 in order to reduce the scale of forgetting at the beginning of the training. activation: Activation function of the inner states. Default: np.tanh. \"\"\" # Inputs must be 2-dimensional. self._num_units = num_units self._forget_bias = forget_bias self._activation = activation self._initializer = initializer def build(self, inputs_shape): input_depth = inputs_shape[-1] h_depth = self._num_units self._kernel = self._initializer(shape=(input_depth + h_depth, 4 * self._num_units)) self._bias = np.zeros(shape=(4 * self._num_units)) def call(self, inputs, state): \"\"\"Run one step of LSTM. Args: inputs: input numpy array, must be 2-D, `[batch, input_size]`. state: a tuple of numpy arrays, both `2-D`, with column sizes `c_state` and `m_state`. Returns: A tuple containing: - A `2-D, [batch, output_dim]`, numpy array representing the output of the LSTM after reading `inputs` when previous state was `state`. Here output_dim is equal to num_units. 
- Numpy array(s) representing the new state of LSTM after reading `inputs` when the previous state was `state`. Same type and shape(s) as `state`. \"\"\" num_proj = self._num_units (c_prev, m_prev) = state input_size = inputs.shape[-1] # i = input_gate, j = new_input, f = forget_gate, o = output_gate lstm_matrix = np.hstack([inputs, m_prev]).dot(self._kernel) lstm_matrix += self._bias i, j, f, o = np.split(lstm_matrix, indices_or_sections=4, axis=0) # Diagonal connections c = (sigmoid(f + self._forget_bias) * c_prev + sigmoid(i) * self._activation(j)) m = sigmoid(o) * self._activation(c) new_state = (c, m) return m, new_state X = x_val.reshape(x_val.shape[1:]) cell = LSTMCell(num_units, initializer=lambda shape: kernel_init) cell.build(X.shape) state = (np.zeros(num_units), np.zeros(num_units)) for i in range(timesteps): x = X[i,:] output, state = cell.call(x, state) print(output) ``` Produces output: ``` [-0.21386017 -0.08401277 -0.25431477] [-0.22243588 -0.25817422 -0.1612211 ] [-0.2282134 -0.14207162 -0.35017249] [-0.23286737 -0.17129192 -0.2706512 ] [-0.11768674 -0.20717363 -0.13339118] [-0.0599215 -0.17756104 -0.2028935 ] [ 0.11437953 -0.19484555 0.05371994] ``` While your Tensorflow code, if you replace the second line with ``` lstm = tf.nn.rnn_cell.LSTMCell(num_units = num_units, initializer = tf.constant_initializer(kernel_init)) ``` returns: ``` [[-0.2138602 -0.08401276 -0.25431478]] [[-0.22243595 -0.25817424 -0.16122109]] [[-0.22821338 -0.1420716 -0.35017252]] [[-0.23286738 -0.1712919 -0.27065122]] [[-0.1176867 -0.2071736 -0.13339119]] [[-0.05992149 -0.177561 -0.2028935 ]] [[ 0.11437953 -0.19484554 0.05371996]] ```", "best_answers_score":0.8, "library_name":"tensorflow", "question_url":"https:\/\/stackoverflow.com\/questions\/54767816\/how-exactly-does-lstmcell-from-tensorflow-operates", "best_answers_votes":4, "question_length":1923, "response_length":3754 }, { "question":"How to deal with UserWarning: Converting sparse IndexedSlices to a dense Tensor of 
unknown shape I am having the following warning in Tensorflow: UserWarning: Converting sparse IndexedSlices to a dense Tensor of unknown shape. This may consume a large amount of memory. The reason I am getting this is: ```py import tensorflow as tf # Flatten batch elements to rank-2 tensor where 1st max_length rows #belong to first batch element and so forth all_timesteps = tf.reshape(raw_output, [-1, n_dim]) # (batch_size*max_length, n_dim) # Indices to last element of each sequence. # Index to first element is the sequence order number times max #sequence length. # Index to last element is the index to first element plus sequence #length. row_inds = tf.range(0, batch_size) * max_length + (seq_len - 1) # Gather rows with indices to last elements of sequences # http:\/\/stackoverflow.com\/questions\/35892412\/tensorflow-dense-gradient-explanation # This is due to gather returning IndexedSlice which is later #converted into a Tensor for gradient # calculation. last_timesteps = tf.gather(all_timesteps, row_inds) # (batch_size,n_dim) ``` tf.gather is causing the issue. I have been ignoring it until now because my architectures were not really big. However, now, I have bigger architectures and a lot of data. I am facing Out of memory issues when training with batch sizes bigger than 10. I believe that dealing with this warning would allow me to fit my models inside the GPU. Please note that I am using Tensorflow 1.3.", "response":"I managed to solve the issue by using tf.dynamic_partition instead of tf.gather. I replaced the above code like this: ```py # Flatten batch elements to rank-2 tensor where 1st max_length rows belong to first batch element and so forth all_timesteps = tf.reshape(raw_output, [-1, n_dim]) # (batch_size*max_length, n_dim) # Indices to last element of each sequence. # Index to first element is the sequence order number times max sequence length. # Index to last element is the index to first element plus sequence length.
row_inds = tf.range(0, batch_size) * max_length + (seq_len - 1) # Creating a vector of 0s and 1s that will specify what timesteps to choose. partitions = tf.reduce_sum(tf.one_hot(row_inds, tf.shape(all_timesteps)[0], dtype='int32'), 0) # Selecting the elements we want to choose. last_timesteps = tf.dynamic_partition(all_timesteps, partitions, 2) # (batch_size, n_dim) last_timesteps = last_timesteps[1] ```", "best_answers_score":0.8, "library_name":"tensorflow", "question_url":"https:\/\/stackoverflow.com\/questions\/45882401\/how-to-deal-with-userwarning-converting-sparse-indexedslices-to-a-dense-tensor", "best_answers_votes":13, "question_length":1515, "response_length":931 }, { "question":"How to train a RNN with LSTM cells for time series prediction I'm currently trying to build a simple model for predicting time series. The goal would be to train the model with a sequence so that the model is able to predict future values. I'm using tensorflow and lstm cells to do so. The model is trained with truncated backpropagation through time. My question is how to structure the data for training. For example let's assume we want to learn the given sequence: ``` [1,2,3,4,5,6,7,8,9,10,11,...] ``` And we unroll the network for num_steps=4. Option 1 ``` input data label 1,2,3,4 2,3,4,5 5,6,7,8 6,7,8,9 9,10,11,12 10,11,12,13 ... ``` Option 2 ``` input data label 1,2,3,4 2,3,4,5 2,3,4,5 3,4,5,6 3,4,5,6 4,5,6,7 ... ``` Option 3 ``` input data label 1,2,3,4 5 2,3,4,5 6 3,4,5,6 7 ... ``` Option 4 ``` input data label 1,2,3,4 5 5,6,7,8 9 9,10,11,12 13 ... ``` Any help would be appreciated.", "response":"I'm just about to learn LSTMs in TensorFlow and try to implement an example which (luckily) tries to predict some time-series \/ number-series genereated by a simple math-fuction. 
But I'm using a different way to structure the data for training, motivated by Unsupervised Learning of Video Representations using LSTMs: LSTM Future Predictor Model Option 5: ``` input data label 1,2,3,4 5,6,7,8 2,3,4,5 6,7,8,9 3,4,5,6 7,8,9,10 ... ``` Beside this paper, I (tried) to take inspiration by the given TensorFlow RNN examples. My current complete solution looks like this: ``` import math import random import numpy as np import tensorflow as tf LSTM_SIZE = 64 LSTM_LAYERS = 2 BATCH_SIZE = 16 NUM_T_STEPS = 4 MAX_STEPS = 1000 LAMBDA_REG = 5e-4 def ground_truth_func(i, j, t): return i * math.pow(t, 2) + j def get_batch(batch_size): seq = np.zeros([batch_size, NUM_T_STEPS, 1], dtype=np.float32) tgt = np.zeros([batch_size, NUM_T_STEPS], dtype=np.float32) for b in xrange(batch_size): i = float(random.randint(-25, 25)) j = float(random.randint(-100, 100)) for t in xrange(NUM_T_STEPS): value = ground_truth_func(i, j, t) seq[b, t, 0] = value for t in xrange(NUM_T_STEPS): tgt[b, t] = ground_truth_func(i, j, t + NUM_T_STEPS) return seq, tgt # Placeholder for the inputs in a given iteration sequence = tf.placeholder(tf.float32, [BATCH_SIZE, NUM_T_STEPS, 1]) target = tf.placeholder(tf.float32, [BATCH_SIZE, NUM_T_STEPS]) fc1_weight = tf.get_variable('w1', [LSTM_SIZE, 1], initializer=tf.random_normal_initializer(mean=0.0, stddev=1.0)) fc1_bias = tf.get_variable('b1', [1], initializer=tf.constant_initializer(0.1)) # ENCODER with tf.variable_scope('ENC_LSTM'): lstm = tf.nn.rnn_cell.LSTMCell(LSTM_SIZE) multi_lstm = tf.nn.rnn_cell.MultiRNNCell([lstm] * LSTM_LAYERS) initial_state = multi_lstm.zero_state(BATCH_SIZE, tf.float32) state = initial_state for t_step in xrange(NUM_T_STEPS): if t_step > 0: tf.get_variable_scope().reuse_variables() # state value is updated after processing each batch of sequences output, state = multi_lstm(sequence[:, t_step, :], state) learned_representation = state # DECODER with tf.variable_scope('DEC_LSTM'): lstm = 
tf.nn.rnn_cell.LSTMCell(LSTM_SIZE) multi_lstm = tf.nn.rnn_cell.MultiRNNCell([lstm] * LSTM_LAYERS) state = learned_representation logits_stacked = None loss = 0.0 for t_step in xrange(NUM_T_STEPS): if t_step > 0: tf.get_variable_scope().reuse_variables() # state value is updated after processing each batch of sequences output, state = multi_lstm(sequence[:, t_step, :], state) # output can be used to make next number prediction logits = tf.matmul(output, fc1_weight) + fc1_bias if logits_stacked is None: logits_stacked = logits else: logits_stacked = tf.concat(1, [logits_stacked, logits]) loss += tf.reduce_sum(tf.square(logits - target[:, t_step])) \/ BATCH_SIZE reg_loss = loss + LAMBDA_REG * (tf.nn.l2_loss(fc1_weight) + tf.nn.l2_loss(fc1_bias)) train = tf.train.AdamOptimizer().minimize(reg_loss) with tf.Session() as sess: sess.run(tf.initialize_all_variables()) total_loss = 0.0 for step in xrange(MAX_STEPS): seq_batch, target_batch = get_batch(BATCH_SIZE) feed = {sequence: seq_batch, target: target_batch} _, current_loss = sess.run([train, reg_loss], feed) if step % 10 == 0: print(\"@{}: {}\".format(step, current_loss)) total_loss += current_loss print('Total loss:', total_loss) print('### SIMPLE EVAL: ###') seq_batch, target_batch = get_batch(BATCH_SIZE) feed = {sequence: seq_batch, target: target_batch} prediction = sess.run([logits_stacked], feed) for b in xrange(BATCH_SIZE): print(\"{} -> {})\".format(str(seq_batch[b, :, 0]), target_batch[b, :])) print(\" `-> Prediction: {}\".format(prediction[0][b])) ``` Sample output of this looks like this: ``` ### SIMPLE EVAL: ### # [input seq] -> [target prediction] # `-> Prediction: [model prediction] [ 33. 53. 113. 213.] -> [ 353. 533. 753. 1013.]) `-> Prediction: [ 19.74548721 28.3149128 33.11489105 35.06603241] [ -17. -32. -77. -152.] -> [-257. -392. -557. -752.]) `-> Prediction: [-16.38951683 -24.3657589 -29.49801064 -31.58583832] [ -7. -4. 5. 20.] -> [ 41. 68. 101. 
140.]) `-> Prediction: [ 14.14126873 22.74848557 31.29668617 36.73633194] ... ``` The model is an LSTM-autoencoder having 2 layers each. Unfortunately, as you can see in the results, this model does not learn the sequence properly. It might be the case that I'm just making a mistake somewhere, or that 1000-10000 training steps is just way too few for an LSTM. As I said, I'm also just starting to understand\/use LSTMs properly. But hopefully this can give you some inspiration regarding the implementation.", "best_answers_score":0.8, "library_name":"tensorflow", "question_url":"https:\/\/stackoverflow.com\/questions\/35961216\/how-to-train-a-rnn-with-lstm-cells-for-time-series-prediction", "best_answers_votes":5, "question_length":899, "response_length":4592 }, { "question":"How to implement Tensorflow batch normalization in LSTM My current LSTM network looks like this. ```py rnn_cell = tf.contrib.rnn.BasicRNNCell(num_units=CELL_SIZE) init_s = rnn_cell.zero_state(batch_size=1, dtype=tf.float32) # very first hidden state outputs, final_s = tf.nn.dynamic_rnn( rnn_cell, # cell you have chosen tf_x, # input initial_state=init_s, # the initial hidden state time_major=False, # False: (batch, time step, input); True: (time step, batch, input) ) # reshape 3D output to 2D for fully connected layer outs2D = tf.reshape(outputs, [-1, CELL_SIZE]) net_outs2D = tf.layers.dense(outs2D, INPUT_SIZE) # reshape back to 3D outs = tf.reshape(net_outs2D, [-1, TIME_STEP, INPUT_SIZE]) ``` Usually, I apply tf.layers.batch_normalization as batch normalization. But I am not sure if this works in an LSTM network.
```py b1 = tf.layers.batch_normalization(outputs, momentum=0.4, training=True) d1 = tf.layers.dropout(b1, rate=0.4, training=True) # reshape 3D output to 2D for fully connected layer outs2D = tf.reshape(d1, [-1, CELL_SIZE]) net_outs2D = tf.layers.dense(outs2D, INPUT_SIZE) # reshape back to 3D outs = tf.reshape(net_outs2D, [-1, TIME_STEP, INPUT_SIZE]) ```", "response":"If you want to use batch norm for RNN (LSTM or GRU), you can check out this implementation, or read the full description from the blog post. However, layer normalization has more advantages than batch norm for sequence data. Specifically, \"the effect of batch normalization is dependent on the mini-batch size and it is not obvious how to apply it to recurrent networks\" (from the paper Ba, et al. Layer normalization). For layer normalization, it normalizes the summed inputs within each layer. You can check out the implementation of layer-normalization for GRU cell:", "best_answers_score":0.8, "library_name":"tensorflow", "question_url":"https:\/\/stackoverflow.com\/questions\/46915354\/how-to-implement-tensorflow-batch-normalization-in-lstm", "best_answers_votes":4, "question_length":1181, "response_length":568 }, { "question":"How to connect LSTM layers in Keras, RepeatVector or return_sequence=True? I'm trying to develop an Encoder model in keras for timeseries. The shape of data is (5039, 28, 1), meaning that my seq_len is 28 and I have one feature. For the first layer of the encoder, I'm using 112 hunits, second layer will have 56 and to be able to get back to the input shape for decoder, I had to add 3rd layer with 28 hunits (this autoencoder is supposed to reconstruct its input). But I don't know what is the correct approach to connect the LSTM layers together. AFAIK, I can either add RepeatVector or return_seq=True. You can see both of my models in the following code. I wonder what will be the difference and which approach is the correct one?
First model using return_sequence=True: ``` inputEncoder = Input(shape=(28, 1)) firstEncLayer = LSTM(112, return_sequences=True)(inputEncoder) snd = LSTM(56, return_sequences=True)(firstEncLayer) outEncoder = LSTM(28)(snd) context = RepeatVector(1)(outEncoder) context_reshaped = Reshape((28,1))(context) encoder_model = Model(inputEncoder, outEncoder) firstDecoder = LSTM(112, return_sequences=True)(context_reshaped) outDecoder = LSTM(1, return_sequences=True)(firstDecoder) autoencoder = Model(inputEncoder, outDecoder) ``` Second model with RepeatVector: ``` inputEncoder = Input(shape=(28, 1)) firstEncLayer = LSTM(112)(inputEncoder) firstEncLayer = RepeatVector(1)(firstEncLayer) snd = LSTM(56)(firstEncLayer) snd = RepeatVector(1)(snd) outEncoder = LSTM(28)(snd) encoder_model = Model(inputEncoder, outEncoder) context = RepeatVector(1)(outEncoder) context_reshaped = Reshape((28, 1))(context) firstDecoder = LSTM(112)(context_reshaped) firstDecoder = RepeatVector(1)(firstDecoder) sndDecoder = LSTM(28)(firstDecoder) outDecoder = RepeatVector(1)(sndDecoder) outDecoder = Reshape((28, 1))(outDecoder) autoencoder = Model(inputEncoder, outDecoder) ```", "response":"You will probably have to see for yourself which one is better because it depends on the problem you're solving. However, I'm giving you the difference between the two approaches. Essentially, return_sequences=True returns all the outputs the encoder observed in the past, while RepeatVector repeats the very last output of the encoder.", "best_answers_score":0.8, "library_name":"tensorflow", "question_url":"https:\/\/stackoverflow.com\/questions\/51749404\/how-to-connect-lstm-layers-in-keras-repeatvector-or-return-sequence-true", "best_answers_votes":71, "question_length":1893, "response_length":336 }, { "question":"How to calculate the Cosine similarity between two tensors? I have two normalized tensors and I need to calculate the cosine similarity between these tensors. How do I do it with TensorFlow? 
``` cosine(normalize_a,normalize_b) a = tf.placeholder(tf.float32, shape=[None], name=\"input_placeholder_a\") b = tf.placeholder(tf.float32, shape=[None], name=\"input_placeholder_b\") normalize_a = tf.nn.l2_normalize(a,0) normalize_b = tf.nn.l2_normalize(b,0) ```", "response":"This will do the job: ``` a = tf.placeholder(tf.float32, shape=[None], name=\"input_placeholder_a\") b = tf.placeholder(tf.float32, shape=[None], name=\"input_placeholder_b\") normalize_a = tf.nn.l2_normalize(a,0) normalize_b = tf.nn.l2_normalize(b,0) cos_similarity=tf.reduce_sum(tf.multiply(normalize_a,normalize_b)) sess=tf.Session() cos_sim=sess.run(cos_similarity,feed_dict={a:[1,2,3],b:[2,4,6]}) ``` This prints 0.99999988", "best_answers_score":0.8, "library_name":"tensorflow", "question_url":"https:\/\/stackoverflow.com\/questions\/43357732\/how-to-calculate-the-cosine-similarity-between-two-tensors", "best_answers_votes":31, "question_length":452, "response_length":424 }, { "question":"ValueError: Trying to share variable rnn\/multi_rnn_cell\/cell_0\/basic_lstm_cell\/kernel This it the code: ``` X = tf.placeholder(tf.float32, [batch_size, seq_len_1, 1], name='X') labels = tf.placeholder(tf.float32, [None, alpha_size], name='labels') rnn_cell = tf.contrib.rnn.BasicLSTMCell(512) m_rnn_cell = tf.contrib.rnn.MultiRNNCell([rnn_cell] * 3, state_is_tuple=True) pre_prediction, state = tf.nn.dynamic_rnn(m_rnn_cell, X, dtype=tf.float32) ``` This is full error: ValueError: Trying to share variable rnn\/multi_rnn_cell\/cell_0\/basic_lstm_cell\/kernel, but specified shape (1024, 2048) and found shape (513, 2048). I'm using a GPU version of tensorflow.", "response":"I encountered a similar problem when I upgraded to v1.2 (tensorflow-gpu). Instead of using [rnn_cell]*3, I created 3 rnn_cells (stacked_rnn) by a loop (so that they don't share variables) and fed MultiRNNCell with stacked_rnn and the problem goes away. I'm not sure it is the right way to do it. 
``` stacked_rnn = [] for iiLyr in range(3): stacked_rnn.append(tf.nn.rnn_cell.LSTMCell(num_units=512, state_is_tuple=True)) MultiLyr_cell = tf.nn.rnn_cell.MultiRNNCell(cells=stacked_rnn, state_is_tuple=True) ```", "best_answers_score":0.8, "library_name":"tensorflow", "question_url":"https:\/\/stackoverflow.com\/questions\/44615147\/valueerror-trying-to-share-variable-rnn-multi-rnn-cell-cell-0-basic-lstm-cell-k", "best_answers_votes":30, "question_length":657, "response_length":507 }, { "question":"TensorFlow: \u201cAttempting to use uninitialized value\u201d in variable initialization Here's my code. ``` import tensorflow as tf a=tf.Variable(tf.constant([0,1,2],dtype=tf.int32)) b=tf.Variable(tf.constant([1,1,1],dtype=tf.int32)) recall=tf.metrics.recall(b,a) init=tf.global_variables_initializer() with tf.Session() as sess: sess.run(init) rec=sess.run(recall) print(rec) ``` I tried to test tf.metrics.precision and got the following error message. ``` FailedPreconditionError (see above for traceback): Attempting to use uninitialized value recall\/true_positives\/count [[Node: recall\/true_positives\/count\/read = Identity[T=DT_FLOAT, _class=[\"loc:@recall\/true_positives\/count\"], _device=\"\/job:localhost\/replica:0\/task:0\/gpu:0\"](recall\/true_positives\/count)]] [[Node: recall\/value\/_15 = _Recv[client_terminated=false, recv_device=\"\/job:localhost\/replica:0\/task:0\/cpu:0\", send_device=\"\/job:localhost\/replica:0\/task:0\/gpu:0\", send_device_incarnation=1, tensor_name=\"edge_73_recall\/value\", tensor_type=DT_FLOAT, _device=\"\/job:localhost\/replica:0\/task:0\/cpu:0\"]()]] ```", "response":"You also need to initialise the local variables hidden in the tf.metrics.recallmethod. 
For example, this piece of code would work: ``` init_g = tf.global_variables_initializer() init_l = tf.local_variables_initializer() with tf.Session() as sess: sess.run(init_g) sess.run(init_l) ```", "best_answers_score":0.8, "library_name":"tensorflow", "question_url":"https:\/\/stackoverflow.com\/questions\/44624648\/tensorflow-attempting-to-use-uninitialized-value-in-variable-initialization", "best_answers_votes":45, "question_length":1061, "response_length":284 }, { "question":"Keras LSTM input dimension setting I was trying to train a LSTM model using keras but I think I got something wrong here. I got an error of ValueError: Error when checking input: expected lstm_17_input to have 3 dimensions, but got array with shape (10000, 0, 20) while my code looks like ``` model = Sequential() model.add(LSTM(256, activation=\"relu\", dropout=0.25, recurrent_dropout=0.25, input_shape=(None, 20, 64))) model.add(Dense(1, activation=\"sigmoid\")) model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy']) model.fit(X_train, y_train, batch_size=batch_size, epochs=10) ``` where X_train has a shape of (10000, 20) and the first few data points are like ``` array([[ 0, 0, 0, ..., 40, 40, 9], [ 0, 0, 0, ..., 33, 20, 51], [ 0, 0, 0, ..., 54, 54, 50], ... ``` and y_train has a shape of (10000, ), which is a binary (0\/1) label array. Could someone point out where I was wrong here?", "response":"For the sake of completeness, here's what's happened. First up, LSTM, like all layers in Keras, accepts two arguments: input_shape and batch_input_shape. The difference is in convention that input_shape does not contain the batch size, while batch_input_shape is the full input shape including the batch size. Hence, the specification input_shape=(None, 20, 64) tells keras to expect a 4-dimensional input, which is not what you want. The correct would have been just (20,). But that's not all. 
LSTM layer is a recurrent layer, hence it expects a 3-dimensional input (batch_size, timesteps, input_dim). That's why the correct specification is input_shape=(20, 1) or batch_input_shape=(10000, 20, 1). Plus, your training array should also be reshaped to denote that it has 20 time steps and 1 input feature per each step. Hence, the solution: ```py X_train = np.expand_dims(X_train, 2) # makes it (10000,20,1) ... model = Sequential() model.add(LSTM(..., input_shape=(20, 1))) ```", "best_answers_score":0.8, "library_name":"tensorflow", "question_url":"https:\/\/stackoverflow.com\/questions\/48140989\/keras-lstm-input-dimension-setting", "best_answers_votes":44, "question_length":917, "response_length":979 }, { "question":"Keras callback ReduceLROnPlateau - cooldown parameter ReduceLROnPlateau callback in Keras seems to be an interesting tool to use in training models. But I could not really figure out exactly what the cooldown parameter means in the callback function ReduceLROnPlateau in Keras. Here is what the documentation says: First, the interface of the function: ``` keras.callbacks.ReduceLROnPlateau(monitor='val_loss', factor=0.1, patience=10, verbose=0, mode='auto', min_delta=0.0001, cooldown=0, min_lr=0) ``` ReduceLROnPlateau: Models often benefit from reducing the learning rate by a factor of 2-10 once learning stagnates. This callback monitors a quantity and if no improvement is seen for a 'patience' number of epochs, the learning rate is reduced. cooldown: number of epochs to wait before resuming normal operation after lr has been reduced. The explanation does not really make it clear to me. Is it meant here that: - Say that lr=A. And the learning rate is reduced if the relevant monitored metric does not improve during patience number of epochs. (And say that lr=B after reducing it.) - And the learning rate is set to its first value (lr=A again) after cooldown number of epochs. Is my understanding correct? 
If not, what is the real function of the cooldown parameter here? PS. When I google it, I see some examples where people set the cooldown parameter to zero, which makes me think that my perception of this parameter is wrong.", "response":"True, it does not state it clearly in the description. What it means is that if you set a cooldown you have to wait before resuming normal operation (i.e. beginning to monitor if there is any improvement in the monitored metric over patience epochs). For example, let's say cooldown=5. After the learning rate is reduced, the algorithm waits 5 epochs before starting to monitor the metrics again. So if there is no improvement in the metric and patience=10, the learning rate will be reduced again after 15 epochs. You can confirm this by looking at the corresponding code.", "best_answers_score":0.8, "library_name":"tensorflow", "question_url":"https:\/\/stackoverflow.com\/questions\/52338090\/keras-callback-reducelronplateau-cooldown-parameter", "best_answers_votes":42, "question_length":1439, "response_length":575 }, { "question":"TensorFlow image operations for batches There are a number of image operations in TensorFlow used for distorting input images during training, e.g. tf.image.random_flip_left_right(image, seed=None) and tf.image.random_brightness(image, max_delta, seed=None) and several others. These functions are made for single images (i.e. 3-D tensors with shape [height, width, color-channel]). How can I make them work on a batch of images (i.e. 4-D tensors with shape [batch, height, width, color-channel])? A working example would be greatly appreciated!", "response":"One possibility is to use the recently added tf.map_fn() to apply the single-image operator to each element of the batch.
``` result = tf.map_fn(lambda img: tf.image.random_flip_left_right(img), images) ``` This effectively builds the same graph as keveman suggests building, but it can be more efficient for larger batch sizes, by using TensorFlow's support for loops.", "best_answers_score":0.8, "library_name":"tensorflow", "question_url":"https:\/\/stackoverflow.com\/questions\/38920240\/tensorflow-image-operations-for-batches", "best_answers_votes":35, "question_length":545, "response_length":369 }, { "question":"Interleaving tf.data.Datasets I'm trying to use tf.data.Dataset to interleave two datasets but having problems doing so. Given this simple example: ``` ds0 = tf.data.Dataset() ds0 = ds0.range(0, 10, 2) ds1 = tf.data.Dataset() ds1 = ds1.range(1, 10, 2) dataset = ... iter = dataset.make_one_shot_iterator() val = iter.get_next() ``` What is ... to produce an output like 0, 1, 2, 3...9? It would seem like dataset.interleave() would be relevant but I haven't been able to formulate the statement in a way that doesn't generate an error.", "response":"MattScarpino is on the right track in his comment. You can use Dataset.zip() along with Dataset.flat_map() to flatten a multi-element dataset: ``` ds0 = tf.data.Dataset.range(0, 10, 2) ds1 = tf.data.Dataset.range(1, 10, 2) # Zip combines an element from each input into a single element, and flat_map # enables you to map the combined element into two elements, then flattens the # result. dataset = tf.data.Dataset.zip((ds0, ds1)).flat_map( lambda x0, x1: tf.data.Dataset.from_tensors(x0).concatenate( tf.data.Dataset.from_tensors(x1))) iter = dataset.make_one_shot_iterator() val = iter.get_next() ``` Having said this, your intuition about using Dataset.interleave() is pretty sensible. We're investigating ways that you can do this more easily. PS. 
As an alternative, you can use Dataset.interleave() to solve the problem if you change how ds0 and ds1 are defined: ``` dataset = tf.data.Dataset.range(2).interleave( lambda x: tf.data.Dataset.range(x, 10, 2), cycle_length=2, block_length=1) ```", "best_answers_score":0.8, "library_name":"tensorflow", "question_url":"https:\/\/stackoverflow.com\/questions\/47343228\/interleaving-tf-data-datasets", "best_answers_votes":31, "question_length":535, "response_length":998 }, { "question":"'Library not loaded: @rpath\/libcudart.7.5.dylib' TensorFlow Error on Mac I'm using OS X El Capitan (10.11.4). I just downloaded TensorFlow using the pip install instructions here. Everything went pretty smoothly, though I did get a few warning messages like: The directory '\/Users\/myusername\/Library\/Caches\/pip\/http' or its parent directory is not owned by the current user and the cache has been disabled. Please check the permissions and owner of that directory. If executing pip with sudo, you may want the -H flag. and You are using pip version 6.0.8, however version 8.1.2 is available. Even though I just installed pip. 
Then, when I tested TensorFlow in Python, I got the error: ``` >>> import tensorflow as tf Traceback (most recent call last): File \"\", line 1, in File \"\/Library\/Frameworks\/Python.framework\/Versions\/3.4\/lib\/python3.4\/site-packages\/tensorflow\/__init__.py\", line 23, in from tensorflow.python import * File \"\/Library\/Frameworks\/Python.framework\/Versions\/3.4\/lib\/python3.4\/site-packages\/tensorflow\/python\/__init__.py\", line 48, in from tensorflow.python import pywrap_tensorflow File \"\/Library\/Frameworks\/Python.framework\/Versions\/3.4\/lib\/python3.4\/site-packages\/tensorflow\/python\/pywrap_tensorflow.py\", line 28, in _pywrap_tensorflow = swig_import_helper() File \"\/Library\/Frameworks\/Python.framework\/Versions\/3.4\/lib\/python3.4\/site-packages\/tensorflow\/python\/pywrap_tensorflow.py\", line 24, in swig_import_helper _mod = imp.load_module('_pywrap_tensorflow', fp, pathname, description) File \"\/Library\/Frameworks\/Python.framework\/Versions\/3.4\/lib\/python3.4\/imp.py\", line 243, in load_module return load_dynamic(name, filename, file) ImportError: dlopen(\/Library\/Frameworks\/Python.framework\/Versions\/3.4\/lib\/python3.4\/site-packages\/tensorflow\/python\/_pywrap_tensorflow.so, 10): Library not loaded: @rpath\/libcudart.7.5.dylib Referenced from: \/Library\/Frameworks\/Python.framework\/Versions\/3.4\/lib\/python3.4\/site-packages\/tensorflow\/python\/_pywrap_tensorflow.so Reason: image not found ``` Now, when I try to do pip uninstall tensorflow-0.10.0rc0 it tells me that it's not installed. The closest thing I've found to resembling this problem is this issue in the TensorFlow GitHub docs (which I have not tried). How can I uninstall whatever it did install and get TensorFlow up and running correctly?", "response":"This error message is displayed if you install the GPU-enabled Mac OS version of TensorFlow (available from release 0.10 onwards) on a machine that does not have CUDA installed. 
To fix the error, install the CPU version for Python 2.7 or 3.x, as follows: ``` # Mac OS X, CPU only, Python 2.7: $ export TF_BINARY_URL=https:\/\/storage.googleapis.com\/tensorflow\/mac\/cpu\/tensorflow-0.12.0-py2-none-any.whl $ sudo pip install --upgrade $TF_BINARY_URL # Mac OS X, CPU only, Python 3.4 or 3.5: $ export TF_BINARY_URL=https:\/\/storage.googleapis.com\/tensorflow\/mac\/cpu\/tensorflow-0.12.0-py3-none-any.whl $ sudo pip3 install --upgrade $TF_BINARY_URL ``` See tensorflow versions: https:\/\/www.tensorflow.org\/versions\/", "best_answers_score":0.8, "library_name":"tensorflow", "question_url":"https:\/\/stackoverflow.com\/questions\/38710339\/library-not-loaded-rpath-libcudart-7-5-dylib-tensorflow-error-on-mac", "best_answers_votes":35, "question_length":2320, "response_length":704 }, { "question":"Why use tensorflow gfile? (for file I\/O) Tensorflow code uses methods for file I\/O that are different than python builtin methods. According to the source code, it is useful as \"File I\/O wrappers without thread locking\" I am not sure on what occasions it is useful and when it shouldn't be used. Any idea? Thank you", "response":"This comment: File I\/O wrappers without thread locking ...is a particularly unhelpful description for TensorFlow's tf.gfile module! The main roles of the tf.gfile module are: To provide an API that is close to Python's file objects, and To provide an implementation based on TensorFlow's C++ FileSystem API. The C++ FileSystem API supports multiple file system implementations, including local files, Google Cloud Storage (using a gs:\/\/ prefix), and HDFS (using an hdfs:\/\/ prefix). TensorFlow exports these as tf.gfile so that you can uses these implementations for saving and loading checkpoints, writing TensorBoard logs, and accessing training data (among other uses). 
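The file-object-like behaviour described above can be sketched as follows. This is a minimal illustration, not part of the original answer: the local path is made up, and it uses the tf.io.gfile spelling exposed by newer TensorFlow versions (older versions call the same API tf.gfile); with a gs:\/\/ or hdfs:\/\/ prefix the very same calls would be routed to the corresponding backend.

```python
import tensorflow as tf

# Hypothetical local path, purely for illustration; a bucket path such as
# "gs://my-bucket/demo.txt" would be handled by the GCS backend instead.
path = "/tmp/gfile_demo.txt"

# GFile mirrors Python file objects: context manager, write, read.
with tf.io.gfile.GFile(path, "w") as f:
    f.write("hello from tf.gfile\n")

with tf.io.gfile.GFile(path, "r") as f:
    print(f.read())
```

Because the handle behaves like a regular file object, code written this way works unchanged whether the path is local or points at a remote file system.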
However, if all of your files are local, you can use the regular Python file API without any problem.", "best_answers_score":0.8, "library_name":"tensorflow", "question_url":"https:\/\/stackoverflow.com\/questions\/42922948\/why-use-tensorflow-gfile-for-file-i-o", "best_answers_votes":39, "question_length":315, "response_length":773 }, { "question":"What is the difference between Loss, accuracy, validation loss, Validation accuracy? At the end of each epoch, I am getting for example the following output: ``` Epoch 1\/25 2018-08-06 14:54:12.555511: 2\/2 [==============================] - 86s 43s\/step - loss: 6.0767 - acc: 0.0469 - val_loss: 4.1037 - val_acc: 0.2000 Epoch 2\/25 2\/2 [==============================] - 26s 13s\/step - loss: 3.6901 - acc: 0.0938 - val_loss: 2.5610 - val_acc: 0.0000e+00 Epoch 3\/25 2\/2 [==============================] - 66s 33s\/step - loss: 3.1491 - acc: 0.1406 - val_loss: 2.4793 - val_acc: 0.0500 Epoch 4\/25 2\/2 [==============================] - 44s 22s\/step - loss: 3.0686 - acc: 0.0694 - val_loss: 2.3159 - val_acc: 0.0500 Epoch 5\/25 2\/2 [==============================] - 62s 31s\/step - loss: 2.5884 - acc: 0.1094 - val_loss: 2.4601 - val_acc: 0.1500 Epoch 6\/25 2\/2 [==============================] - 41s 20s\/step - loss: 2.7708 - acc: 0.1493 - val_loss: 2.2542 - val_acc: 0.4000 . . . . ``` Can anyone explain me what's the difference between loss, accuracy, validation loss and validation accuracy?", "response":"When we mention validation_split as fit parameter while fitting DL model, it splits data into two parts for every epoch i.e. training data and validation data. It trains the model on training data and validate the model on validation data by checking its loss and accuracy. Usually with every epoch increasing, loss goes lower and accuracy goes higher. 
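As a small illustration of how these four quantities arise (the data, model, and sizes below are invented for the example and are not from the original question), a Keras fit with validation_split records loss\/accuracy on the training part and val_loss\/val_accuracy on the held-out part:

```python
import numpy as np
from tensorflow import keras

# Toy inputs and labels, invented purely for illustration.
x = np.random.rand(100, 8).astype("float32")
y = (x.sum(axis=1) > 4.0).astype("float32")

model = keras.Sequential([
    keras.layers.Dense(4, activation="relu"),
    keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=["accuracy"])

# validation_split=0.2 holds out 20% of the samples each epoch; the
# History object then contains one value per epoch for each quantity.
history = model.fit(x, y, epochs=2, validation_split=0.2, verbose=0)
print(sorted(history.history.keys()))
```

Note that depending on the Keras version the accuracy keys are spelled acc\/val_acc or accuracy\/val_accuracy; loss and val_loss are always present.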
But with val_loss and val_acc, several cases are possible: val_loss starts increasing while val_acc starts decreasing (the model is memorizing values rather than learning); val_loss starts increasing while val_acc also increases (could be a case of overfitting, or of diverse probability values when softmax is used in the output layer); val_loss starts decreasing while val_acc starts increasing (correct: the model is learning and working fine). This is a link to refer to as well, with more description: How to interpret \"loss\" and \"accuracy\" for a machine learning model I have tried to explain this at https:\/\/www.javacodemonk.com\/difference-between-loss-accuracy-validation-loss-validation-accuracy-when-training-deep-learning-model-with-keras-ff358faa", "best_answers_score":0.8, "library_name":"tensorflow", "question_url":"https:\/\/stackoverflow.com\/questions\/51704808\/what-is-the-difference-between-loss-accuracy-validation-loss-validation-accur", "best_answers_votes":34, "question_length":1088, "response_length":1103 }, { "question":"Keras with TensorFlow backend not using GPU I built the gpu version of the docker image https:\/\/github.com\/floydhub\/dl-docker with keras version 2.0.0 and tensorflow version 0.12.1. I then ran the mnist tutorial https:\/\/github.com\/fchollet\/keras\/blob\/master\/examples\/mnist_cnn.py but realized that keras is not using GPU. Below is the output that I have ``` root@b79b8a57fb1f:~\/sharedfolder# python test.py Using TensorFlow backend. Downloading data from https:\/\/s3.amazonaws.com\/img-datasets\/mnist.npz x_train shape: (60000, 28, 28, 1) 60000 train samples 10000 test samples Train on 60000 samples, validate on 10000 samples Epoch 1\/12 2017-09-06 16:26:54.866833: W tensorflow\/core\/platform\/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use SSE4.1 instructions, but these are available on your machine and could speed up CPU computations.
2017-09-06 16:26:54.866855: W tensorflow\/core\/platform\/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use SSE4.2 instructions, but these are available on your machine and could speed up CPU computations. 2017-09-06 16:26:54.866863: W tensorflow\/core\/platform\/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use AVX instructions, but these are available on your machine and could speed up CPU computations. 2017-09-06 16:26:54.866870: W tensorflow\/core\/platform\/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use AVX2 instructions, but these are available on your machine and could speed up CPU computations. 2017-09-06 16:26:54.866876: W tensorflow\/core\/platform\/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use FMA instructions, but these are available on your machine and could speed up CPU computations. ``` Can anyone let me know if there are some settings that need to be made before keras uses GPU ? I am very new to all these so do let me know if I need to provide more information. I have installed the pre-requisites as mentioned on the page Install Docker following the installation guide for your platform: https:\/\/docs.docker.com\/engine\/installation\/ I am able to launch the docker image ``` docker run -it -p 8888:8888 -p 6006:6006 -v \/sharedfolder:\/root\/sharedfolder floydhub\/dl-docker:cpu bash ``` GPU Version Only: Install Nvidia drivers on your machine either from Nvidia directly or follow the instructions here. Note that you don't have to install CUDA or cuDNN. These are included in the Docker container. I am able to run the last step ``` cv@cv-P15SM:~$ cat \/proc\/driver\/nvidia\/version NVRM version: NVIDIA UNIX x86_64 Kernel Module 375.66 Mon May 1 15:29:16 PDT 2017 GCC version: gcc version 5.4.0 20160609 (Ubuntu 5.4.0-6ubuntu1~16.04.4) ``` GPU Version Only: Install nvidia-docker: https:\/\/github.com\/NVIDIA\/nvidia-docker, following the instructions here. 
This will install a replacement for the docker CLI. It takes care of setting up the Nvidia host driver environment inside the Docker containers and a few other things. I am able to run the step here ``` # Test nvidia-smi cv@cv-P15SM:~$ nvidia-docker run --rm nvidia\/cuda nvidia-smi Thu Sep 7 00:33:06 2017 +-----------------------------------------------------------------------------+ | NVIDIA-SMI 375.66 Driver Version: 375.66 | |-------------------------------+----------------------+----------------------+ | GPU Name Persistence-M| Bus-Id Disp.A | Volatile Uncorr. ECC | | Fan Temp Perf Pwr:Usage\/Cap| Memory-Usage | GPU-Util Compute M. | |===============================+======================+======================| | 0 GeForce GTX 780M Off | 0000:01:00.0 N\/A | N\/A | | N\/A 55C P0 N\/A \/ N\/A | 310MiB \/ 4036MiB | N\/A Default | +-------------------------------+----------------------+----------------------+ +-----------------------------------------------------------------------------+ | Processes: GPU Memory | | GPU PID Type Process name Usage | |=============================================================================| | 0 Not Supported | +-----------------------------------------------------------------------------+ ``` I am also able to run the nvidia-docker command to launch a gpu supported image. What I have tried I have tried the following suggestions below Check if you have completed step 9 of this tutorial ( https:\/\/github.com\/ignaciorlando\/skinner\/wiki\/Keras-and-TensorFlow-installation ). Note: Your file paths may be completely different inside that docker image, you'll have to locate them somehow. I appended the suggested lines to my bashrc and have verified that the bashrc file is updated. 
``` echo 'export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:\/usr\/local\/cuda-8.0\/lib64:\/usr\/local\/cuda-8.0\/extras\/CUPTI\/lib64' >> ~\/.bashrc echo 'export CUDA_HOME=\/usr\/local\/cuda-8.0' >> ~\/.bashrc ``` I also tried importing the following commands in my python file: ``` import os os.environ[\"CUDA_DEVICE_ORDER\"]=\"PCI_BUS_ID\" # see issue #152 os.environ[\"CUDA_VISIBLE_DEVICES\"]=\"0\" ``` Both steps, done separately or together, unfortunately did not solve the issue. Keras is still running with the CPU version of tensorflow as its backend. However, I might have found the possible issue. I checked the version of my tensorflow via the following commands and found two of them. This is the CPU version ``` root@08b5fff06800:~# pip show tensorflow Name: tensorflow Version: 1.3.0 Summary: TensorFlow helps the tensors flow Home-page: http:\/\/tensorflow.org\/ Author: Google Inc. Author-email: [email protected] License: Apache 2.0 Location: \/usr\/local\/lib\/python2.7\/dist-packages Requires: tensorflow-tensorboard, six, protobuf, mock, numpy, backports.weakref, wheel ``` And this is the GPU version ``` root@08b5fff06800:~# pip show tensorflow-gpu Name: tensorflow-gpu Version: 0.12.1 Summary: TensorFlow helps the tensors flow Home-page: http:\/\/tensorflow.org\/ Author: Google Inc. Author-email: [email protected] License: Apache 2.0 Location: \/usr\/local\/lib\/python2.7\/dist-packages Requires: mock, numpy, protobuf, wheel, six ``` Interestingly, the output shows that keras is using tensorflow version 1.3.0, which is the CPU version, and not 0.12.1, the GPU version ``` import keras from keras.datasets import mnist from keras.models import Sequential from keras.layers import Dense, Dropout, Flatten from keras.layers import Conv2D, MaxPooling2D from keras import backend as K import tensorflow as tf print('Tensorflow: ', tf.__version__) ``` Output ``` root@08b5fff06800:~\/sharedfolder# python test.py Using TensorFlow backend.
Tensorflow: 1.3.0 ``` I guess now I need to figure out how to have keras use the gpu version of tensorflow.", "response":"It is never a good idea to have both tensorflow and tensorflow-gpu packages installed side by side (the one single time it happened to me accidentally, Keras was using the CPU version). I guess now I need to figure out how to have keras use the gpu version of tensorflow. You should simply remove both packages from your system, and then re-install tensorflow-gpu [UPDATED after comment]: ``` pip uninstall tensorflow tensorflow-gpu pip install tensorflow-gpu ``` Moreover, it is puzzling why you seem to use the floydhub\/dl-docker:cpu container, while according to the instructions you should be using the floydhub\/dl-docker:gpu one...", "best_answers_score":0.8, "library_name":"tensorflow", "question_url":"https:\/\/stackoverflow.com\/questions\/46080634\/keras-with-tensorflow-backend-not-using-gpu", "best_answers_votes":31, "question_length":6548, "response_length":636 }, { "question":"Tensorflow TFRecord: Can't parse serialized example I am trying to follow this guide in order to serialize my input data into the TFRecord format but I keep hitting this error when trying to read it: InvalidArgumentError: Key: my_key. Can't parse serialized Example. I am not sure where I'm going wrong. Here is a minimal reproduction of the issue I cannot get past. 
Serialise some sample data: ``` with tf.python_io.TFRecordWriter('train.tfrecords') as writer: for idx in range(10): example = tf.train.Example( features=tf.train.Features( feature={ 'label': tf.train.Feature(int64_list=tf.train.Int64List(value=[1,2,3])), 'test': tf.train.Feature(float_list=tf.train.FloatList(value=[0.1,0.2,0.3])) } ) ) writer.write(example.SerializeToString()) writer.close() ``` Parsing function & deserialise: ``` def parse(tfrecord): features = { 'label': tf.FixedLenFeature([], tf.int64, default_value=0), 'test': tf.FixedLenFeature([], tf.float32, default_value=0.0), } return tf.parse_single_example(tfrecord, features) dataset = tf.data.TFRecordDataset('train.tfrecords').map(parse) getnext = dataset.make_one_shot_iterator().get_next() ``` When trying to run this: ``` with tf.Session() as sess: v = sess.run(getnext) print (v) ``` I trigger the above error message. Is it possible to get past this error and deserialize my data?", "response":"tf.FixedLenFeature() is used for reading the fixed size arrays of data. And the shape of the data should be defined beforehand. Updating the parse function to ``` def parse(tfrecord): return tf.parse_single_example(tfrecord, features={ 'label': tf.FixedLenFeature([3], tf.int64, default_value=[0,0,0]), 'test': tf.FixedLenFeature([3], tf.float32, default_value=[0.0, 0.0, 0.0]), }) ``` Should do the job.", "best_answers_score":0.8, "library_name":"tensorflow", "question_url":"https:\/\/stackoverflow.com\/questions\/53499409\/tensorflow-tfrecord-cant-parse-serialized-example", "best_answers_votes":25, "question_length":1324, "response_length":404 }, { "question":"Save Keras ModelCheckpoints in Google Cloud Bucket I'm working on training a LSTM network on Google Cloud Machine Learning Engine using Keras with TensorFlow backend. I managed it to deploy my model and perform a successful training task after some adjustments to the gcloud and my python script. 
I then tried to make my model save checkpoints after every epoch using the Keras ModelCheckpoint callback. Running a local training job with Google Cloud works perfectly as expected. The weights are getting stored in the specified path after each epoch. But when I try to run the same job online on Google Cloud Machine Learning Engine, the weights.hdf5 does not get written to my Google Cloud bucket. Instead I get the following error: ``` ... File \"h5f.pyx\", line 71, in h5py.h5f.open (h5py\/h5f.c:1797) IOError: Unable to open file (Unable to open file: name = 'gs:\/\/...\/weights.hdf5', errno = 2, error message = 'no such file or directory', flags = 0, o_flags = 0) ``` I investigated this issue and it turned out that there is no problem with the bucket itself, as the Keras TensorBoard callback works fine and writes the expected output to the same bucket. I also made sure that h5py gets included by providing it in the setup.py located at: ``` ├── setup.py └── trainer ├── __init__.py ├── ...
``` The actual include in setup.py is shown below: ``` # setup.py from setuptools import setup, find_packages setup(name='kerasLSTM', version='0.1', packages=find_packages(), author='Kevin Katzke', install_requires=['keras','h5py','simplejson'], zip_safe=False) ``` I guess the problem comes down to the fact that GCS cannot be accessed with Python's open for I\/O, since TensorFlow instead provides a custom implementation: ``` import tensorflow as tf from tensorflow.python.lib.io import file_io with file_io.FileIO(\"gs:\/\/...\", 'w') as f: f.write(\"Hi!\") ``` After checking how the Keras ModelCheckpoint callback implements the actual file writing, it turned out that it is using h5py.File() for I\/O: ``` with h5py.File(filepath, mode='w') as f: f.attrs['keras_version'] = str(keras_version).encode('utf8') f.attrs['backend'] = K.backend().encode('utf8') f.attrs['model_config'] = json.dumps({ 'class_name': model.__class__.__name__, 'config': model.get_config() }, default=get_json_type).encode('utf8') ``` And as the h5py package is a Pythonic interface to the HDF5 binary data format, h5py.File() calls into the underlying HDF5 library, which is written in C as far as I can tell: source, documentation. How can I fix this and make the ModelCheckpoint callback write to my GCS bucket? Is there a way of \"monkey patching\" to somehow override how an hdf5 file is opened, to make it use GCS's file_io.FileIO()?", "response":"I might be a bit late on this, but for the sake of future visitors I will describe the whole process of how to adapt code that was previously run locally to be GoogleML-aware from the IO point of view. Python's standard open(file_name, mode) does not work with buckets (gs:\/\/.....\/file_name). One needs to import file_io from tensorflow.python.lib.io and change all calls from open(file_name, mode) to file_io.FileIO(file_name, mode=mode) (note the named mode parameter). The interface of the opened handle is the same.
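A minimal sketch of this substitution (the path below is a placeholder I made up for illustration; on Cloud ML Engine it would be a gs:\/\/ bucket location instead of a local file):

```python
from tensorflow.python.lib.io import file_io

# Placeholder local path, purely for illustration; in a real Cloud ML
# job this would be a bucket location such as "gs://<bucket>/log.txt".
path = "/tmp/file_io_demo.txt"

# file_io.FileIO is a drop-in for the built-in open(); note the
# named mode parameter.
with file_io.FileIO(path, mode="w") as f:
    f.write("written via file_io\n")

with file_io.FileIO(path, mode="r") as f:
    print(f.read())
```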
Keras and\/or other libraries mostly use standard open(file_name, mode) internally. That said, third-party calls such as trained_model.save(file_path) will fail to store the result in the bucket. The only way to retrieve a model after the job has finished successfully would be to store it locally and then move it to the bucket. The code below is quite inefficient, because it loads the whole model at once and then dumps it to the bucket, but it worked for me for relatively small models: ``` model.save(file_path) with file_io.FileIO(file_path, mode='rb') as in_f: with file_io.FileIO(os.path.join(model_dir, file_path), mode='wb+') as of: of.write(in_f.read()) ``` The mode must be set to binary for both reading and writing. When the file is relatively big, it makes sense to read and write it in chunks to decrease memory consumption. Before running a real task, I would advise running a stub that simply saves a file to the remote bucket. This implementation, temporarily put in place of the real train_model call, should do: ``` if __name__ == '__main__': parser = argparse.ArgumentParser() parser.add_argument( '--job-dir', help='GCS location with read\/write access', required=True ) args = parser.parse_args() arguments = args.__dict__ job_dir = arguments.pop('job_dir') with file_io.FileIO(os.path.join(job_dir, \"test.txt\"), mode='wb+') as of: of.write(\"Test passed.\") ``` After a successful execution you should see the file test.txt with the content \"Test passed.\" in your bucket.", "best_answers_score":0.8, "library_name":"tensorflow", "question_url":"https:\/\/stackoverflow.com\/questions\/45585104\/save-keras-modelcheckpoints-in-google-cloud-bucket", "best_answers_votes":17, "question_length":2703, "response_length":2012 }, { "question":"Hyperparameter Tuning of Tensorflow Model I've used Scikit-learn's GridSearchCV before to optimize the hyperparameters of my models, but am wondering if a similar tool exists to optimize hyperparameters for Tensorflow (for instance
number of epochs, learning rate, sliding window size etc.) And if not, how can I implement a snippet that effectively runs all different combinations?", "response":"Even though it does not seem to be explicitly documented (in version 1.2), the package tf.contrib.learn (included in TensorFlow) defines classifiers that are supposed to be compatible with scikit-learn... However, looking at the source, it seems you need to explicitly set the environment variable TENSORFLOW_SKLEARN (e.g. to \"1\") to actually get this compatibility. If this works, you can already use GridSearchCV (see this test case). That said, there are a few alternatives. I don't know about any specific to TensorFlow, but hyperopt, Scikit-Optimize or SMAC3 should all be valid options. MOE and Spearmint look like used to be good choices but now don't seem too maintained. Alternatively, you can look into a service like SigOpt (a company by the original author of MOE). Edit About running all possible combinations of parameters, the core logic, if you want to implement it yourself, is not really complicated. You can just define lists with the possible values for each parameter and then run through all the combinations with itertools.product. Something like: ``` from itertools import product param1_values = [...] param2_values = [...] param3_values = [...] for param1, param2, param3 in product(param1_values, param2_values param3_values): run_experiment(param1, param2, param3) ``` Note however that grid search can be prohibitively expensive to run in many cases, and even doing just a random search in the parameters space will probably be more efficient (more about that in this publication).", "best_answers_score":0.8, "library_name":"tensorflow", "question_url":"https:\/\/stackoverflow.com\/questions\/44802939\/hyperparameter-tuning-of-tensorflow-model", "best_answers_votes":20, "question_length":384, "response_length":1510 }, { "question":"How to extract and save images from tensorboard event summary? 
Given a tensorflow event file, how can I extract images corresponding to a specific tag, and then save them to disk in a common format e.g. .png?", "response":"You could extract the images like so. The output format may depend on how the image is encoded in the summary, so the resulting write to disk may need to use another format besides .png ``` import os import scipy.misc import tensorflow as tf def save_images_from_event(fn, tag, output_dir='.\/'): assert(os.path.isdir(output_dir)) image_str = tf.placeholder(tf.string) im_tf = tf.image.decode_image(image_str) sess = tf.InteractiveSession() with sess.as_default(): count = 0 for e in tf.train.summary_iterator(fn): for v in e.summary.value: if v.tag == tag: im = im_tf.eval({image_str: v.image.encoded_image_string}) output_fn = os.path.realpath('{}\/image_{:05d}.png'.format(output_dir, count)) print(\"Saving '{}'\".format(output_fn)) scipy.misc.imsave(output_fn, im) count += 1 ``` And then an example invocation may look like: save_images_from_event('path\/to\/event\/file', 'tag0') Note that this assumes the event file is fully written -- in the case that it's not, some error handling is probably necessary.", "best_answers_score":0.8, "library_name":"tensorflow", "question_url":"https:\/\/stackoverflow.com\/questions\/47232779\/how-to-extract-and-save-images-from-tensorboard-event-summary", "best_answers_votes":17, "question_length":208, "response_length":1007 }, { "question":"TensorFlow Object Detection API - what do the losses mean in the object detection api? What do each for the following losses mean? 
(in the TensorFlow Object detection API, while training FasterRCNN based models) Loss\/BoxClassifierLoss\/classification_loss\/mul_1 Loss\/BoxClassifierLoss\/localization_loss\/mul_1 Loss\/RPNLoss\/localization_loss\/mul_1 Loss\/RPNLoss\/objectness_loss\/mul_1 clone_loss_1", "response":"The losses for the Region Proposal Network: Loss\/RPNLoss\/localization_loss\/mul_1: Localization Loss or the Loss of the Bounding Box regressor for the RPN Loss\/RPNLoss\/objectness_loss\/mul_1: Loss of the Classifier that classifies if a bounding box is an object of interest or background The losses for the Final Classifier: Loss\/BoxClassifierLoss\/classification_loss\/mul_1: Loss for the classification of detected objects into various classes: Cat, Dog, Airplane etc Loss\/BoxClassifierLoss\/localization_loss\/mul_1: Localization Loss or the Loss of the Bounding Box regressor", "best_answers_score":0.8, "library_name":"tensorflow", "question_url":"https:\/\/stackoverflow.com\/questions\/48111847\/tensorflow-object-detection-api-what-do-the-losses-mean-in-the-object-detectio", "best_answers_votes":25, "question_length":392, "response_length":573 }, { "question":"Tensorflow, multi label accuracy calculation I am working on a multi label problem and i am trying to determine the accuracy of my model. My model: ```py NUM_CLASSES = 361 x = tf.placeholder(tf.float32, [None, IMAGE_PIXELS]) y_ = tf.placeholder(tf.float32, [None, NUM_CLASSES]) # create the network pred = conv_net( x ) # loss cost = tf.reduce_mean( tf.nn.sigmoid_cross_entropy_with_logits( pred, y_) ) # train step train_step = tf.train.AdamOptimizer().minimize( cost ) ``` i want to calculate the accuracy in two different ways - % of all labels that are predicted correctly - % of images where ALL labels are predicted correctly unfortunately i am only able to calculate the % of all labels that are predicted correctly. 
I thought this code would calculate % of images where ALL labels are predicted correctly ```py correct_prediction = tf.equal( tf.round( pred ), tf.round( y_ ) ) accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32)) ``` and this code % of all labels that are predicted correctly ```py pred_reshape = tf.reshape( pred, [ BATCH_SIZE * NUM_CLASSES, 1 ] ) y_reshape = tf.reshape( y_, [ BATCH_SIZE * NUM_CLASSES, 1 ] ) correct_prediction_all = tf.equal( tf.round( pred_reshape ), tf.round( y_reshape ) ) accuracy_all = tf.reduce_mean( tf.cast(correct_prediction_all, tf.float32 ) ) ``` somehow the coherency of the labels belonging to one image is lost and I am not sure why.", "response":"I believe the bug in your code is in: correct_prediction = tf.equal( tf.round( pred ), tf.round( y_ ) ). pred should be unscaled logits (i.e. without a final sigmoid). Here you want to compare the output of sigmoid(pred) and y_ (both in the interval [0, 1]) so you have to write: ```py correct_prediction = tf.equal(tf.round(tf.nn.sigmoid(pred)), tf.round(y_)) ``` Then to compute: Mean accuracy over all labels: ```py accuracy1 = tf.reduce_mean(tf.cast(correct_prediction, tf.float32)) ``` Accuracy where all labels need to be correct: ```py all_labels_true = tf.reduce_min(tf.cast(correct_prediction, tf.float32), 1) accuracy2 = tf.reduce_mean(all_labels_true) ```", "best_answers_score":0.8, "library_name":"tensorflow", "question_url":"https:\/\/stackoverflow.com\/questions\/37746670\/tensorflow-multi-label-accuracy-calculation", "best_answers_votes":31, "question_length":1406, "response_length":667 }, { "question":"(Tensorflow-GPU) import tensorflow ImportError: Could not find 'cudnn64_7.dll' After creating a tensorflow environment under anaconda, I installed tensorflow-gpu. Then I tried to import tensorflow to verify it was correctly installed, but got this error: ``` ImportError: Could not find 'cudnn64_7.dll'.
TensorFlow requires that this DLL be installed in a directory that is named in your %PATH% environment variable. Note that installing cuDNN is a separate step from installing CUDA, and this DLL is often found in a different directory from the CUDA DLLs. You may install the necessary DLL by downloading cuDNN 7 from this URL: https:\/\/developer.nvidia.com\/cudnn ``` Setup is: ``` NVIDIA GTX 1080 CUDA 9.0 cuDNN 6.0 tensorflow-gpu 1.5 ``` Environment Variables are: ``` CUDA_PATH: C:\\Program Files\\NVIDIA GPU Computing Toolkit\\CUDA\\v9.0 CUDA_PATH_V9_0: C:\\Program Files\\NVIDIA GPU Computing Toolkit\\CUDA\\v9.0 ``` The %Path% variables are: ``` C:\\Program Files\\NVIDIA GPU Computing Toolkit\\CUDA\\v9.0\\bin C:\\Program Files\\NVIDIA GPU Computing Toolkit\\CUDA\\v9.0\\lib\\x64 C:\\Program Files\\NVIDIA GPU Computing Toolkit\\CUDA\\v9.0\\libnvvp C:\\Users\\yshen\\AppData\\Local\\cudnn-8.0-windows10-x64-v6.0\\cuda\\bin ``` It is obvious that I installed cuDNN 6.0, so I don't know why the error shows \"Could not find 'cudnn64_7.dll' \". Why does it automatically search for cudnn64_7.dll instead of cudnn64_6.dll?", "response":"I also got the error below when I installed TensorFlow 1.8, in an Anaconda environment. \"ImportError: Could not find 'cudnn64_7.dll'\" But after I installed Nvidia cuDNN v7.1.3 (April 17, 2018) for CUDA 9.0, everything started to work. Please note that one needs to sign up as an Nvidia developer to be able to download the installation package(s). Then, just follow the instructions on the page: cudnn-install For Windows: 3.3. Installing cuDNN on Windows The following steps describe how to build a cuDNN dependent program. In the following sections: -your CUDA directory path is referred to as C:\\Program Files\\NVIDIA GPU Computing Toolkit\\CUDA\\v9.0 -your cuDNN directory path is referred to as Navigate to your directory containing cuDNN. Unzip the cuDNN package.
-cudnn-9.0-windows7-x64-v7.zip or -cudnn-9.0-windows10-x64-v7.zip Copy the following files into the CUDA Toolkit directory. Copy \\cuda\\bin\\cudnn64_7.dll to C:\\Program Files\\NVIDIA GPU Computing Toolkit\\CUDA\\v9.0\\bin. Copy \\cuda\\ include\\cudnn.h to C:\\Program Files\\NVIDIA GPU Computing Toolkit\\CUDA\\v9.0\\include. Copy \\cuda\\lib\\x64\\cudnn.lib to C:\\Program Files\\NVIDIA GPU Computing Toolkit\\CUDA\\v9.0\\lib\\x64. Set the following environment variables to point to where cuDNN is located. To access the value of the $(CUDA_PATH) environment variable, perform the following steps: Open a command prompt from the Start menu. Type Run and hit Enter. Issue the control sysdm.cpl command. Select the Advanced tab at the top of the window. Click Environment Variables at the bottom of the window. Ensure the following values are set: Variable Name: CUDA_PATH Variable Value: C:\\Program Files\\NVIDIA GPU Computing Toolkit\\CUDA\\v9.0 Include cudnn.lib in your Visual Studio project. Open the Visual Studio project and right-click on the project name. Click Linker > Input > Additional Dependencies. Add cudnn.lib and click OK.", "best_answers_score":0.8, "library_name":"tensorflow", "question_url":"https:\/\/stackoverflow.com\/questions\/48698536\/tensorflow-gpu-import-tensorflow-importerror-could-not-find-cudnn64-7-dll", "best_answers_votes":29, "question_length":1377, "response_length":1886 }, { "question":"can anyone give a tiny example to explain the params of tf.random.categorical? tensorflow's site gives this example ``` tf.random.categorical(tf.log([[10., 10.]]), 5) ``` produces a tensor that \"has shape [1, 5], where each value is either 0 or 1 with equal probability\" I have already known, the basic demo, the meaning of tf.log([[10., 10.]]). 
what I want to know is what does [batch_size, num_classes] do, can anyone give a tiny example to explain the params?", "response":"As you note, tf.random.categorical takes two parameters: logits, a 2D float tensor with shape [batch_size, num_classes] num_samples, an integer scalar. The output is a 2D integer tensor with shape [batch_size, num_samples]. Each \"row\" of the logits tensor (logits[0, :], logits[1, :], ...) represents the event probabilities of a different categorical distribution. The function does not expect actual probability values, though, but unnormalized log-probabilities; so the actual probabilities would be softmax(logits[0, :]), softmax(logits[1, :]), etc. The benefit of this is that you can give basically any real values as input (e.g. the output of a neural network) and they will be valid. Also, it's trivial to use specific probability values, or proportions, using logarithms. For example, both [log(0.1), log(0.3), log(0.6)] and [log(1), log(3), log(6)] represent the same probability, where the second class is three times as likely as the first one but only half as likely as the third one. For each row of (unnormalized log-)probabilities, you get num_samples samples from the distribution. Each sample is an integer between 0 and num_classes - 1, drawn according to the given probabilities. So, the result is the 2D tensor with shape [batch_size, num_samples] with the sampled integers for each distribution. EDIT: A small example of the function. ``` import tensorflow as tf with tf.Graph().as_default(), tf.Session() as sess: tf.random.set_random_seed(123) logits = tf.log([[1., 1., 1., 1.], [0., 1., 2., 3.]]) num_samples = 30 cat = tf.random.categorical(logits, num_samples) print(sess.run(cat)) # [[3 3 1 1 0 3 3 0 2 3 1 3 3 3 1 1 0 2 2 0 3 1 3 0 1 1 0 1 3 3] # [2 2 3 3 2 3 3 3 2 2 3 3 2 2 2 1 3 3 3 2 3 2 2 1 3 3 3 3 3 2]] ``` In this case, the result is an array with two rows and 30 columns. 
The values in the first row are sampled from a categorical distribution where every class ([0, 1, 2, 3]) has the same probability. In the second row, class 3 is the most likely one, and class 0 has no probability of being sampled.", "best_answers_score":0.8, "library_name":"tensorflow", "question_url":"https:\/\/stackoverflow.com\/questions\/55063120\/can-anyone-give-a-tiny-example-to-explain-the-params-of-tf-random-categorical", "best_answers_votes":30, "question_length":462, "response_length":2049 }, { "question":"Best practice for upgrading CUDA and cuDNN for tensorflow I'm currently in charge of getting tensorflow-gpu 1.8 to work on my machine. I've been using tf-gpu 1.2 until now, but due to some required features, I have to upgrade my installation. Before doing so, I wanted to check if there is a best practice to do this. My current setup looks like this: Ubuntu 16.04 NVIDIA GeForce GTX 1080 Ti NVIDIA Driver Version: 384.111 Anaconda 4.4.0 for Python 3 CUDA v8.0 cuDNN v5.1 tensorflow-gpu v1.2 As written on the tf-homepage, I would have to use CUDA v9.0 as well as cuDNN v7.1. As all these instructions refer to a clean install and not an update, I'm not sure if it would be best to uninstall the old versions first. Please share your experiences if you have already had the same issue. Thank you!", "response":"Thanks @joão gabriel s.f. I was able to successfully uninstall CUDA 8.0\/cuDNN 5.1 and install the latest version of tensorflow. As the whole procedure was a little confusing to me, I decided to post a quick walkthrough and maybe help people in the same situation. UNINSTALL First, I uninstalled cuda and all its dependencies. As I installed it via the package manager, I used apt-get to remove it. For runfile installations, you can check this. ``` sudo apt-get --purge remove cuda sudo apt-get autoremove dpkg --list |grep \"^rc\" | cut -d \" \" -f 3 | xargs sudo dpkg --purge ``` Also, I checked for any cuda folders in \/usr\/local\/ and removed them. 
Regarding cuDNN, through the removal of all cuda folders, the corresponding cuda headers and libs have been deleted. INSTALL Check the driver of the graphics card first. CUDA 9.0 works with the v384.111 driver (so no 390.xxx needed), so I had nothing to do here. I downloaded CUDA Toolkit 9.0 here as deb (local). In the same folder, I executed ``` dpkg -i cuda-repo-ubuntu1604-9-0-local_9.0.176-1_amd64.deb sudo apt-key add \/var\/cuda-repo-9-0-local\/7fa2af80.pub sudo apt-get update sudo apt-get install cuda ``` Then set the environment variables: ``` export PATH=${PATH}:\/usr\/local\/cuda-9.0\/bin export CUDA_HOME=${CUDA_HOME}:\/usr\/local\/cuda:\/usr\/local\/cuda-9.0 export LD_LIBRARY_PATH=${LD_LIBRARY_PATH}:\/usr\/local\/cuda-9.0\/lib64 ``` Afterwards I verified my installation as described here. I downloaded cuDNN 7.1 from the archive as tarball and installed it via ``` tar -xzvf cudnn-9.0-linux-x64-v7.1.tgz sudo cp cuda\/include\/cudnn.h \/usr\/local\/cuda\/include sudo cp cuda\/lib64\/libcudnn* \/usr\/local\/cuda\/lib64 sudo chmod a+r \/usr\/local\/cuda\/include\/cudnn.h \\ \/usr\/local\/cuda\/lib64\/libcudnn* ``` After starting the Python bash, I was able to import tensorflow and run a simple graph. Thanks again and have a nice week!", "best_answers_score":0.8, "library_name":"tensorflow", "question_url":"https:\/\/stackoverflow.com\/questions\/50213021\/best-practice-for-upgrading-cuda-and-cudnn-for-tensorflow", "best_answers_votes":22, "question_length":796, "response_length":1862 }, { "question":"What is the difference between backpropagation and reverse-mode autodiff? Going through this book, I am familiar with the following: For each training instance the backpropagation algorithm first makes a prediction (forward pass), measures the error, then goes through each layer in reverse to measure the error contribution from each connection (reverse pass), and finally slightly tweaks the connection weights to reduce the error. 
However, I am not sure how this differs from the reverse-mode autodiff implementation in TensorFlow. As far as I know, reverse-mode autodiff first goes through the graph in the forward direction and then in a second pass computes all partial derivatives of the outputs with respect to the inputs. This is very similar to the backpropagation algorithm. How does backpropagation differ from reverse-mode autodiff?", "response":"The most important distinction between backpropagation and reverse-mode AD is that reverse-mode AD computes the vector-Jacobian product of a vector valued function from R^n -> R^m, while backpropagation computes the gradient of a scalar valued function from R^n -> R. Backpropagation is therefore a subset of reverse-mode AD. When we train neural networks, we always have a scalar-valued loss function, so we are always using backpropagation. Since backprop is a subset of reverse-mode AD, we are also using reverse-mode AD when we train a neural network. Whether backpropagation takes the more general definition of reverse-mode AD as applied to a scalar loss function, or the more specific definition of reverse-mode AD as applied to a scalar loss function for training neural networks, is a matter of personal taste. It's a word that has slightly different meanings in different contexts, but is most commonly used in the machine learning community to talk about computing gradients of neural network parameters using a scalar loss function. For completeness: Sometimes reverse-mode AD can compute the full Jacobian on a single reverse pass, not just the vector-Jacobian product. 
Also, the vector Jacobian product for a scalar function where the vector is the vector [1.0] is the same as the gradient.", "best_answers_score":0.8, "library_name":"tensorflow", "question_url":"https:\/\/stackoverflow.com\/questions\/49926192\/what-is-the-difference-between-backpropagation-and-reverse-mode-autodiff", "best_answers_votes":15, "question_length":843, "response_length":1314 }, { "question":"Why is TensorFlow Lite slower than TensorFlow on desktop? I'm currently working on Single Image Superresolution and I've managed to freeze an existing checkpoint file and convert it into tensorflow lite. However, when performing inference using the .tflite file, the time taken to upsample one image is at least 4 times that when restoring the model using the .ckpt file. Inference using the .ckpt file is done using session.run(), while inference using the .tflite file is done using interpreter.invoke(). Both operations were done on a Ubuntu 18 VM running on a typical PC. What I did to find out more about the issue is to run top in a seperate terminal to see the CPU utilization rate when either operations are performed. Utilization rate hits 270% with the .ckpt file, but stays at around 100% with the .tflite file. ``` interpreter.set_tensor(input_details[0]['index'], input_image_reshaped) interpreter.set_tensor(input_details[1]['index'], input_bicubic_image_reshaped) start = time.time() interpreter.invoke() end = time.time() ``` vs ``` y = self.sess.run(self.y_, feed_dict={self.x: image.reshape(1, image.shape[0], image.shape[1], ch), self.x2: bicubic_image.reshape(1, self.scale * image.shape[0], self.scale * image.shape[1], ch), self.dropout: 1.0, self.is_training: 0}) ``` One hypothesis is that tensorflow lite is not configured for multithreading, and another is that tensorflow lite is optimized for ARM processors (rather than an Intel one that my computer runs on) and thus it is slower. 
However, I cannot tell for sure and neither do I know how to trace the root of the issue - hopefully someone out there will be more knowledgeable about this?", "response":"Yes, the current TensorFlow Lite op kernels are optimized for ARM processors (using the NEON instruction set). If SSE is available, it will try to use NEON_2_SSE to adapt NEON calls to SSE, so it should still be running with some sort of SIMD. However, we didn't put much effort into optimizing this code path. Regarding the number of threads: there is a SetNumThreads function in the C++ API, but it's not exposed in the Python API (yet). When it's not set, the underlying implementation may try to probe the number of available cores. If you build the code by yourself, you can try to change the value and see if it affects the result. Hope this helps.", "best_answers_score":0.8, "library_name":"tensorflow", "question_url":"https:\/\/stackoverflow.com\/questions\/54093424\/why-is-tensorflow-lite-slower-than-tensorflow-on-desktop", "best_answers_votes":27, "question_length":1668, "response_length":629 }, { "question":"How do you create a boolean mask for a tensor in Keras? I am building a custom metric to measure the accuracy of one class in my multi-class dataset during training. I am having trouble selecting the class. The targets are one hot (e.g: the class 0 label is [1 0 0 0 0]): ``` from keras import backend as K def single_class_accuracy(y_true, y_pred): idx = bool(y_true[:, 0]) # boolean mask for class 0 class_preds = y_pred[idx] class_true = y_true[idx] class_acc = K.mean(K.equal(K.argmax(class_true, axis=-1), K.argmax(class_preds, axis=-1))) # multi-class accuracy return class_acc ``` The trouble is, we have to use Keras functions to index tensors. 
How do you create a boolean mask for a tensor?", "response":"Note that when talking about the accuracy of one class one may refer to either of the following (not equivalent) two amounts: The recall, which, for class C, is the ratio of examples labelled with class C that are predicted to have class C. The precision, which, for class C, is the ratio of examples predicted to be of class C that are in fact labelled with class C. Instead of doing complex indexing, you can just rely on masking for you computation. Assuming we are talking about precision here (changing to recall would be trivial). ``` from keras import backend as K INTERESTING_CLASS_ID = 0 # Choose the class of interest def single_class_accuracy(y_true, y_pred): class_id_true = K.argmax(y_true, axis=-1) class_id_preds = K.argmax(y_pred, axis=-1) # Replace class_id_preds with class_id_true for recall here accuracy_mask = K.cast(K.equal(class_id_preds, INTERESTING_CLASS_ID), 'int32') class_acc_tensor = K.cast(K.equal(class_id_true, class_id_preds), 'int32') * accuracy_mask class_acc = K.sum(class_acc_tensor) \/ K.maximum(K.sum(accuracy_mask), 1) return class_acc ``` If you want to be more flexible, you can also have the class of interest parametrised: ``` from keras import backend as K def single_class_accuracy(interesting_class_id): def fn(y_true, y_pred): class_id_true = K.argmax(y_true, axis=-1) class_id_preds = K.argmax(y_pred, axis=-1) # Replace class_id_preds with class_id_true for recall here accuracy_mask = K.cast(K.equal(class_id_preds, interesting_class_id), 'int32') class_acc_tensor = K.cast(K.equal(class_id_true, class_id_preds), 'int32') * accuracy_mask class_acc = K.sum(class_acc_tensor) \/ K.maximum(K.sum(accuracy_mask), 1) return class_acc return fn ``` And the use it as: ``` model.compile(..., metrics=[single_class_accuracy(INTERESTING_CLASS_ID)]) ```", "best_answers_score":0.8, "library_name":"tensorflow", 
"question_url":"https:\/\/stackoverflow.com\/questions\/41458859\/how-do-you-create-a-boolean-mask-for-a-tensor-in-keras", "best_answers_votes":23, "question_length":699, "response_length":1794 }, { "question":"module 'tensorflow.compat.v2.__internal__' has no attribute 'tf2' I'm trying to use TensorFlow as the backend. Yesterday it worked, but today it shows an error message when I try to import Keras. Here's my code: ``` # Install required libs # NOTE: Run this one code, then restart this runtime and run again for next all... (PENTING!!!) ### please update Albumentations to version>=0.3.0 for `Lambda` transform support !pip install -U segmentation-models !pip install q tensorflow==2.1 !pip install q keras==2.3.1 !pip install tensorflow-estimator==2.1. ## Imports libs import os os.environ['CUDA_VISIBLE_DEVICES'] = '0' import cv2 import keras import numpy as np import matplotlib.pyplot as plt ``` it shows this error: ``` AttributeError Traceback (most recent call last) in () 5 6 import cv2 ----> 7 import keras 8 import numpy as np 9 import matplotlib.pyplot as plt 8 frames \/usr\/local\/lib\/python3.7\/dist-packages\/keras\/initializers\/__init__.py in populate_deserializable_objects() 47 48 LOCAL.ALL_OBJECTS = {} ---> 49 LOCAL.GENERATED_WITH_V2 = tf.__internal__.tf2.enabled() 50 51 # Compatibility aliases (need to exist in both V1 and V2). AttributeError: module 'tensorflow.compat.v2.__internal__' has no attribute 'tf2' ``` I was using TensorFlow version 2.2 and Keras version 2.3.1; yesterday this ran fine, but today it doesn't. Did I import the wrong versions of Keras and TensorFlow today? Edit: when I use from tensorflow import keras, the using tensorflow backend output I expect doesn't show up, and then when I load import segmentation_models as sm it shows the same error as import keras above.", "response":"Here is the solution to your problem, I've tested it on colab. 
``` !pip install -U -q segmentation-models !pip install -q tensorflow==2.1 !pip install -q keras==2.3.1 !pip install -q tensorflow-estimator==2.1. ## Imports libs import os os.environ['CUDA_VISIBLE_DEVICES'] = '0' os.environ[\"SM_FRAMEWORK\"] = \"tf.keras\" from tensorflow import keras import segmentation_models as sm ``` ``` |\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588| 51kB 3.3MB\/s |\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588| 421.8MB 42kB\/s |\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588| 450kB 35.7MB\/s |\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588| 3.9MB 33.6MB\/s Building wheel for gast (setup.py) ... done ERROR: tensorflow-probability 0.12.1 has requirement gast>=0.3.2, but you'll have gast 0.2.2 which is incompatible. |\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588| 378kB 2.1MB\/s Segmentation Models: using `tf.keras` framework. ``` Update You don't need to install any specific version of tensorflow \/ keras. Any version above 2.x would be ok to run, i.e tf 2.4\/ 2.5\/ 2.6. However, in colab, you need to restart the kernel to see the effect. but if you run on the kaggle kernel, you don't need to restart the kernel. 
See below: In colab: ``` # Cell: 1 import os !pip install -U -q segmentation-models --user os.kill(os.getpid(), 9) ``` It will auto-restart the kernel. After restarting, run the following code in the new cell. ``` #Cell: 2 import os os.environ[\"SM_FRAMEWORK\"] = \"tf.keras\" import segmentation_models as sm ``` In Kaggle Kernel: ``` import os !pip install -U -q segmentation-models --user os.environ[\"SM_FRAMEWORK\"] = \"tf.keras\" import segmentation_models as sm ```", "best_answers_score":0.8, "library_name":"tensorflow", "question_url":"https:\/\/stackoverflow.com\/questions\/67696519\/module-tensorflow-compat-v2-internal-has-no-attribute-tf2", "best_answers_votes":15, "question_length":1671, "response_length":1596 }, { "question":"Is tf.layers.dense a single layer? If I just use a single layer like this: ``` layer = tf.layers.dense(tf_x, 1, tf.nn.relu) ``` Is this just a single layer with a single node? Or is it actually a set of layers (input, hidden, output) with 1 node? My network seemed to work properly with just 1 layer, so I was curious about the setup. Consequently, does this setup below have 2 hidden layers (are layer1 and layer2 here both hidden layers)? Or actually just 1 (just layer 1)? ``` layer1 = tf.layers.dense(tf_x, 10, tf.nn.relu) layer2 = tf.layers.dense(layer1, 1, tf.nn.relu) ``` tf_x is my input features tensor.", "response":"tf.layers.dense adds a single layer to your network. The second argument is the number of neurons\/nodes of the layer. For example: ``` # no hidden layers, dimension output layer = 1 output = tf.layers.dense(tf_x, 1, tf.nn.relu) # one hidden layer, dimension hidden layer = 10, dimension output layer = 1 hidden = tf.layers.dense(tf_x, 10, tf.nn.relu) output = tf.layers.dense(hidden, 1, tf.nn.relu) ``` My network seemed to work properly with just 1 layer, so I was curious about the setup. 
That is possible, for some tasks you will get decent results without hidden layers.", "best_answers_score":0.8, "library_name":"tensorflow", "question_url":"https:\/\/stackoverflow.com\/questions\/45693020\/is-tf-layers-dense-a-single-layer", "best_answers_votes":18, "question_length":612, "response_length":574 }, { "question":"Custom weighted loss function in Keras for weighing each element I'm trying to create a simple weighted loss function. Say, I have input dimensions 100 * 5, and output dimensions also 100 * 5. I also have a weight matrix of the same dimension. Something like the following: ``` import numpy as np train_X = np.random.randn(100, 5) train_Y = np.random.randn(100, 5)*0.01 + train_X weights = np.random.randn(*train_X.shape) ``` Defining the custom loss function ``` def custom_loss_1(y_true, y_pred): return K.mean(K.abs(y_true-y_pred)*weights) ``` Defining the model ``` from keras.layers import Dense, Input from keras import Model import keras.backend as K input_layer = Input(shape=(5,)) out = Dense(5)(input_layer) model = Model(input_layer, out) ``` Testing with existing metrics works fine ``` model.compile('adam','mean_absolute_error') model.fit(train_X, train_Y, epochs=1) ``` Testing with our custom loss function doesn't work ``` model.compile('adam',custom_loss_1) model.fit(train_X, train_Y, epochs=10) ``` It gives the following stack trace: ``` InvalidArgumentError (see above for traceback): Incompatible shapes: [32,5] vs. [100,5] [[Node: loss_9\/dense_8_loss\/mul = Mul[T=DT_FLOAT, _device=\"\/job:localhost\/replica:0\/task:0\/device:CPU:0\"](loss_9\/dense_8_loss\/Abs, loss_9\/dense_8_loss\/mul\/y)]] ``` Where is the number 32 coming from? Testing a loss function with weights as Keras tensors ``` def custom_loss_2(y_true, y_pred): return K.mean(K.abs(y_true-y_pred)*K.ones_like(y_true)) ``` This function seems to do the work. So, probably suggests that a Keras tensor as a weight matrix would work. 
So, I created another version of the loss function. Loss function try 3 ``` from functools import partial def custom_loss_3(y_true, y_pred, weights): return K.mean(K.abs(y_true-y_pred)*K.variable(weights, dtype=y_true.dtype)) cl3 = partial(custom_loss_3, weights=weights) ``` Fitting data using cl3 gives the same error as above. ``` InvalidArgumentError (see above for traceback): Incompatible shapes: [32,5] vs. [100,5] [[Node: loss_11\/dense_8_loss\/mul = Mul[T=DT_FLOAT, _device=\"\/job:localhost\/replica:0\/task:0\/device:CPU:0\"](loss_11\/dense_8_loss\/Abs, loss_11\/dense_8_loss\/Variable\/read)]] ``` I wonder what I'm missing! I could have used the notion of sample_weight in Keras; but then I'd have to reshape my inputs to a 3d vector. I thought that this custom loss function should really have been trivial.", "response":"In model.fit the batch size is 32 by default, that's where this number is coming from. Here's what's happening: In custom_loss_1 the tensor K.abs(y_true-y_pred) has shape (batch_size=32, 5), while the numpy array weights has shape (100, 5). This is an invalid multiplication, since the dimensions don't agree and broadcasting can't be applied. In custom_loss_2 this problem doesn't exist because you're multiplying 2 tensors with the same shape (batch_size=32, 5). In custom_loss_3 the problem is the same as in custom_loss_1, because converting weights into a Keras variable doesn't change their shape. UPDATE: It seems you want to give a different weight to each element in each training sample, so the weights array should have shape (100, 5) indeed. 
In this case, I would input your weights' array into your model and then use this tensor within the loss function: ``` import numpy as np from keras.layers import Dense, Input from keras import Model import keras.backend as K from functools import partial def custom_loss_4(y_true, y_pred, weights): return K.mean(K.abs(y_true - y_pred) * weights) train_X = np.random.randn(100, 5) train_Y = np.random.randn(100, 5) * 0.01 + train_X weights = np.random.randn(*train_X.shape) input_layer = Input(shape=(5,)) weights_tensor = Input(shape=(5,)) out = Dense(5)(input_layer) cl4 = partial(custom_loss_4, weights=weights_tensor) model = Model([input_layer, weights_tensor], out) model.compile('adam', cl4) model.fit(x=[train_X, weights], y=train_Y, epochs=10) ```", "best_answers_score":0.8, "library_name":"tensorflow", "question_url":"https:\/\/stackoverflow.com\/questions\/48082655\/custom-weighted-loss-function-in-keras-for-weighing-each-element", "best_answers_votes":20, "question_length":2417, "response_length":1511 }, { "question":"Difference between Dense(2) and Dense(1) as the final layer of a binary classification CNN? In a CNN for binary classification of images, should the shape of output be (number of images, 1) or (number of images, 2)? Specifically, here are 2 kinds of last layer in a CNN: ``` keras.layers.Dense(2, activation = 'softmax')(previousLayer) ``` or ``` keras.layers.Dense(1, activation = 'softmax')(previousLayer) ``` In the first case, for every image there are 2 output values (probability of belonging to group 1 and probability of belonging to group 2). In the second case, each image has only 1 output value, which is its label (0 or 1, label=1 means it belongs to group 1). Which one is correct? Is there intrinsic difference? I don't want to recognize any object in those images, just divide them into 2 groups. 
Thanks a lot!", "response":"The first one is the correct solution: ``` keras.layers.Dense(2, activation = 'softmax')(previousLayer) ``` Usually, we use the softmax activation function to do classification tasks, and the output width will be the number of categories. This means that if you want to classify one object into three categories with the labels A, B, or C, you would need to make the Dense layer generate an output with a shape of (None, 3). Then you can use the cross_entropy loss function to calculate the loss, automatically calculate the gradient, and do the back-propagation process. If you want to only generate one value with the Dense layer, that means you get a tensor with a shape of (None, 1) - so it produces a single numeric value, like a regression task. You are using the value of the output to represent the category. This approach can work, but it is not the standard formulation of a classification task.", "best_answers_score":0.8, "library_name":"tensorflow", "question_url":"https:\/\/stackoverflow.com\/questions\/50808593\/difference-between-dense2-and-dense1-as-the-final-layer-of-a-binary-classifi", "best_answers_votes":9, "question_length":826, "response_length":917 }, { "question":"When to use tensorflow datasets api versus pandas or numpy There are a number of guides I've seen on using LSTMs for time series in tensorflow, but I am still unsure about the current best practices in terms of reading and processing data - in particular, when one is supposed to use the tf.data.Dataset API. In my situation I have a file data.csv with my features, and would like to do the following two tasks: Compute targets - the target at time t is the percent change of some column at some horizon, i.e., ``` labels[i] = features[i + h, -1] \/ features[i, -1] - 1 ``` I would like h to be a parameter here, so I can experiment with different horizons. 
Get rolling windows - for training purposes, I need to roll my features into windows of length window: ``` train_features[i] = features[i: i + window] ``` I am perfectly comfortable constructing these objects using pandas or numpy, so I'm not asking how to achieve this in general - my question is specifically what such a pipeline ought to look like in tensorflow. Edit: I guess that I'd also like to know whether the 2 tasks I listed are suited for the dataset api, or if I'm better off using other libraries to deal with them?", "response":"First off, note that you can use the Dataset API with pandas or numpy arrays as described in the tutorial: If all of your input data fit in memory, the simplest way to create a Dataset from them is to convert them to tf.Tensor objects and use Dataset.from_tensor_slices() A more interesting question is whether you should organize the data pipeline with session feed_dict or via Dataset methods. As already stated in the comments, the Dataset API is more efficient, because the data flows directly to the device, bypassing the client. From the \"Performance Guide\": While feeding data using a feed_dict offers a high level of flexibility, in most instances using feed_dict does not scale optimally. However, in instances where only a single GPU is being used the difference can be negligible. Using the Dataset API is still strongly recommended. Try to avoid the following: ``` # feed_dict often results in suboptimal performance when using large inputs sess.run(train_step, feed_dict={x: batch_xs, y_: batch_ys}) ``` But, as they say themselves, the difference may be negligible and the GPU can still be fully utilized with ordinary feed_dict input. When the training speed is not critical, there's no difference, use any pipeline you feel comfortable with. When the speed is important and you have a large training set, the Dataset API seems a better choice, especially if you plan distributed computation. 
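As for the two preprocessing tasks themselves, here is a minimal numpy sketch (my own illustration, not part of the original answer; the names `window` and `h` follow the question, while the toy array and alignment are assumptions) that builds the horizon targets and the rolling windows before handing them to the Dataset API:

```python
import numpy as np

# Toy stand-ins for the question's parameters (assumed values).
window, h = 5, 2
features = np.arange(20, dtype=np.float32).reshape(10, 2)  # (time, feature)

# Task 1: percent-change targets at horizon h, based on the last column.
labels = features[h:, -1] / features[:-h, -1] - 1

# Task 2: rolling windows of length `window`, aligned with the targets:
# window i covers rows i .. i+window-1 and predicts labels[i + window - 1].
n = len(features) - window - h + 1
train_x = np.stack([features[i:i + window] for i in range(n)])  # (n, window, 2)
train_y = labels[window - 1:window - 1 + n]                     # (n,)
```

These arrays can then be wrapped with `tf.data.Dataset.from_tensor_slices((train_x, train_y))` and batched as usual.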
The Dataset API works nicely with text data, such as CSV files; check out this section of the dataset tutorial.", "best_answers_score":0.8, "library_name":"tensorflow", "question_url":"https:\/\/stackoverflow.com\/questions\/48246418\/when-to-use-tensorflow-datasets-api-versus-pandas-or-numpy", "best_answers_votes":14, "question_length":1186, "response_length":1499 }, { "question":"Compute gradient norm of each part of composite loss function Assume I have the following loss function: ``` loss_a = tf.reduce_mean(my_loss_fn(model_output, targets)) loss_b = tf.reduce_mean(my_other_loss_fn(model_output, targets)) loss_final = loss_a + tf.multiply(alpha, loss_b) ``` To visualize the norm of the gradients w.r.t. loss_final one could do this: ``` optimizer = tf.train.AdamOptimizer(learning_rate=0.001) grads_and_vars = optimizer.compute_gradients(loss_final) grads, _ = list(zip(*grads_and_vars)) norms = tf.global_norm(grads) gradnorm_s = tf.summary.scalar('gradient norm', norms) train_op = optimizer.apply_gradients(grads_and_vars, name='train_op') ``` However, I would like to plot the norm of the gradients w.r.t. loss_a and loss_b separately. How can I do this in the most efficient way? Do I have to call compute_gradients(..) on both loss_a and loss_b separately and then add those two gradients together before passing them to optimizer.apply_gradients(..)? I know that this would mathematically be correct due to the summation rule, but it just seems a bit cumbersome and I also don't know how you would implement the summation of the gradients correctly. Also, loss_final is rather simple, because it's just a summation. What if loss_final was more complicated, e.g. a division? I'm using Tensorflow 0.12.", "response":"You are right that combining gradients could get messy. Instead just compute the gradients of each of the losses as well as the final loss. 
Because tensorflow optimizes the directed acyclic graph (DAG) before compilation, this doesn't result in duplication of work. For example: ``` import tensorflow as tf with tf.name_scope('inputs'): W = tf.Variable(dtype=tf.float32, initial_value=tf.random_normal((4, 1), dtype=tf.float32), name='W') x = tf.random_uniform((6, 4), dtype=tf.float32, name='x') with tf.name_scope('outputs'): y = tf.matmul(x, W, name='y') def my_loss_fn(output, targets, name): return tf.reduce_mean(tf.abs(output - targets), name=name) def my_other_loss_fn(output, targets, name): return tf.sqrt(tf.reduce_mean((output - targets) ** 2), name=name) def get_tensors(loss_fn): loss = loss_fn(y, targets, 'loss') grads = tf.gradients(loss, W, name='gradients') norm = tf.norm(grads, name='norm') return loss, grads, norm targets = tf.random_uniform((6, 1)) with tf.name_scope('a'): loss_a, grads_a, norm_a = get_tensors(my_loss_fn) with tf.name_scope('b'): loss_b, grads_b, norm_b = get_tensors(my_other_loss_fn) with tf.name_scope('combined'): loss = tf.add(loss_a, loss_b, name='loss') grad = tf.gradients(loss, W, name='gradients') with tf.Session() as sess: tf.global_variables_initializer().run(session=sess) writer = tf.summary.FileWriter('.\/tensorboard_results', sess.graph) res = sess.run([norm_a, norm_b, grad]) print(*res, sep='\\n') ``` Edit: In response to your comment... You can check the DAG of a tensorflow model using tensorboard. I've updated the code to store the graph. Run tensorboard --logdir $PWD\/tensorboard_results in a terminal and navigate to the url printed on the commandline (typically http:\/\/localhost:6006\/). Then click on the GRAPH tab to view the DAG. 
You can recursively expand the tensors, ops, and namespaces to see subgraphs and inspect individual operations and their inputs.", "best_answers_score":0.8, "library_name":"tensorflow", "question_url":"https:\/\/stackoverflow.com\/questions\/43830022\/compute-gradient-norm-of-each-part-of-composite-loss-function", "best_answers_votes":13, "question_length":1340, "response_length":1910 }, { "question":"Replacing tf.placeholder and feed_dict with tf.data API I have an existing TensorFlow model which used a tf.placeholder for the model input and the feed_dict parameter of tf.Session().run to feed in data. Previously the entire dataset was read into memory and passed in this way. I want to use a much larger dataset and take advantage of the performance improvements of the tf.data API. I've defined a tf.data.TextLineDataset and a one-shot iterator from it, but I'm having a hard time figuring out how to get the data into the model to train it. At first I tried to just define the feed_dict as a dictionary from the placeholder to iterator.get_next(), but that gave me an error saying the value of a feed cannot be a tf.Tensor object. More digging led me to understand that this is because the object returned by iterator.get_next() is already part of the graph, unlike what you would feed into feed_dict -- and that I shouldn't be trying to use feed_dict at all anyway for performance reasons. So now I've gotten rid of the input tf.placeholder and replaced it with a parameter to the constructor of the class that defines my model; when constructing the model in my training code, I pass the output of iterator.get_next() to that parameter. This already seems a bit clunky because it breaks separation between the definition of the model and the datasets\/training procedure. And I'm now getting an error saying that the Tensor representing (I believe) my model's input must be from the same graph as the Tensor from iterator.get_next().
Am I on the right track with this approach and just doing something wrong with how I set up the graph and the session, or something like that? (The datasets and model are both initialized outside of a session, and the error occurs before I attempt to create one.) Or am I totally off base with this and need to do something different like use the Estimator API and define everything in an input function? Here is some code demonstrating a minimal example: ``` import tensorflow as tf import numpy as np class Network: def __init__(self, x_in, input_size): self.input_size = input_size # self.x_in = tf.placeholder(dtype=tf.float32, shape=(None, self.input_size)) # Original self.x_in = x_in self.output_size = 3 tf.reset_default_graph() # This turned out to be the problem self.layer = tf.layers.dense(self.x_in, self.output_size, activation=tf.nn.relu) self.loss = tf.reduce_sum(tf.square(self.layer - tf.constant(0, dtype=tf.float32, shape=[self.output_size]))) data_array = np.random.standard_normal([4, 10]).astype(np.float32) dataset = tf.data.Dataset.from_tensor_slices(data_array).batch(2) model = Network(x_in=dataset.make_one_shot_iterator().get_next(), input_size=dataset.output_shapes[-1]) ```", "response":"It took a bit for me to get my head around too. You're on the right track. The entire Dataset definition is just part of the graph. I generally create it as a different class from my Model class and pass the dataset into the Model class. I specify the Dataset class I want to load on the command line and then load that class dynamically, thereby decoupling the Dataset and the graph modularly. Notice that you can (and should) name all the tensors in the Dataset, it really helps make things easy to understand as you pass data through the various transformations you'll need. You can write simple test cases that pull samples from the iterator.get_next() and displays them, you'll have something like sess.run(next_element_tensor), no feed_dict as you've correctly noted. 
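The decoupling itself is framework-agnostic. As a rough pure-Python sketch of the pattern (all names here are invented; in TensorFlow the batch_source callable's role is played by the tensor returned from iterator.get_next()):

```python
# The pipeline class owns the data source; the model only holds a callable
# that hands it the next batch, so the two can be developed and tested
# independently.
class Pipeline:
    def __init__(self, batches):
        self._it = iter(batches)

    def next_batch(self):
        return next(self._it)

class Model:
    def __init__(self, batch_source):
        # the model never knows where its batches come from
        self.batch_source = batch_source

    def step(self):
        batch = self.batch_source()
        return sum(batch)  # stand-in for one training step

pipe = Pipeline([[1, 2], [3, 4]])
model = Model(pipe.next_batch)
results = [model.step(), model.step()]
print(results)
```

Swapping in a different Pipeline (say, one backed by a different file format) requires no change to the Model class, which is the modularity being described here.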
Once you get your head around it you'll probably start liking the Dataset input pipeline. It forces you to modularize your code well, and it forces it into a structure that's easy to unit test. Make sure you read the developers guide, there are tons of examples there: https:\/\/www.tensorflow.org\/programmers_guide\/datasets Another thing I'll note is how easy it is to work with a train and test dataset with this pipeline. That's important because you often perform data augmentation on the training dataset that you don't perform on the test dataset, from_string_handle allows you to do that and is clearly described in the guide above.", "best_answers_score":0.8, "library_name":"tensorflow", "question_url":"https:\/\/stackoverflow.com\/questions\/49762314\/replacing-tf-placeholder-and-feed-dict-with-tf-data-api", "best_answers_votes":7, "question_length":2743, "response_length":1411 }, { "question":"Quantize a Keras neural network model Recently, I've started creating neural networks with Tensorflow + Keras and I would like to try the quantization feature available in Tensorflow. 
So far, experimenting with examples from TF tutorials worked just fine and I have this basic working example (from https:\/\/www.tensorflow.org\/tutorials\/keras\/basic_classification): ``` import tensorflow as tf from tensorflow import keras fashion_mnist = keras.datasets.fashion_mnist (train_images, train_labels), (test_images, test_labels) = fashion_mnist.load_data() # fashion mnist data labels (indexes related to their respective labelling in the data set) class_names = ['T-shirt\/top', 'Trouser', 'Pullover', 'Dress', 'Coat', 'Sandal', 'Shirt', 'Sneaker', 'Bag', 'Ankle boot'] # preprocess the train and test images train_images = train_images \/ 255.0 test_images = test_images \/ 255.0 # settings variables input_shape = (train_images.shape[1], train_images.shape[2]) # create the model layers model = keras.Sequential([ keras.layers.Flatten(input_shape=input_shape), keras.layers.Dense(128, activation=tf.nn.relu), keras.layers.Dense(10, activation=tf.nn.softmax) ]) # compile the model with added settings model.compile(optimizer=tf.train.AdamOptimizer(), loss='sparse_categorical_crossentropy', metrics=['accuracy']) # train the model epochs = 3 model.fit(train_images, train_labels, epochs=epochs) # evaluate the accuracy of model on test data test_loss, test_acc = model.evaluate(test_images, test_labels) print('Test accuracy:', test_acc) ``` Now, I would like to employ quantization in the learning and classification process. 
The quantization documentation (https:\/\/www.tensorflow.org\/performance\/quantization) (the page is no longer available since cca September 15, 2018) suggests to use this piece of code: ``` loss = tf.losses.get_total_loss() tf.contrib.quantize.create_training_graph(quant_delay=2000000) optimizer = tf.train.GradientDescentOptimizer(0.00001) optimizer.minimize(loss) ``` However, it does not contain any information about where this code should be utilized or how it should be connected to a TF code (not even mentioning a high level model created with Keras). I have no idea how this quantization part relates to the previously created neural network model. Just inserting it following the neural network code runs into the following error: ``` Traceback (most recent call last): File \"so.py\", line 41, in loss = tf.losses.get_total_loss() File \"\/usr\/local\/lib\/python3.5\/dist-packages\/tensorflow\/python\/ops\/losses\/util.py\", line 112, in get_total_loss return math_ops.add_n(losses, name=name) File \"\/usr\/local\/lib\/python3.5\/dist-packages\/tensorflow\/python\/ops\/math_ops.py\", line 2119, in add_n raise ValueError(\"inputs must be a list of at least one Tensor with the \" ValueError: inputs must be a list of at least one Tensor with the same dtype and shape ``` Is it possible to quantize a Keras NN model in this way or am I missing something basic? A possible solution that crossed my mind could be using low level TF API instead of Keras (needing to do quite a bit of work to construct the model), or maybe trying to extract some of the lower level methods from the Keras models.", "response":"As mentioned in other answers, TensorFlow Lite can help you with network quantization. TensorFlow Lite provides several levels of support for quantization. Tensorflow Lite post-training quantization quantizes weights and activations post training easily. 
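As a rough mental model of what quantization does to weights, here is a toy pure-Python sketch of affine (scale/zero-point) quantization; this is illustrative only, not the actual TensorFlow Lite implementation:

```python
# Map floats to 8-bit integers via an affine transform, then map back.
# The example weights are made up.
def quantize(weights, num_bits=8):
    qmin, qmax = 0, 2 ** num_bits - 1
    lo, hi = min(weights), max(weights)
    scale = (hi - lo) / (qmax - qmin)
    zero_point = int(round(qmin - lo / scale))
    # round to the nearest representable level and clamp to [qmin, qmax]
    q = [min(qmax, max(qmin, int(round(w / scale)) + zero_point)) for w in weights]
    return q, scale, zero_point

def dequantize(q, scale, zero_point):
    return [(v - zero_point) * scale for v in q]

weights = [-1.0, -0.5, 0.0, 0.5, 1.0]
q, scale, zp = quantize(weights)
recovered = dequantize(q, scale, zp)
print(q, recovered)
```

The round trip loses at most about one quantization step per weight, which is why small accuracy drops are expected after post-training quantization.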
Quantization-aware training allows for training of networks that can be quantized with minimal accuracy drop; this is only available for a subset of convolutional neural network architectures. So first, you need to decide whether you need post-training quantization or quantization-aware training. For example, if you already saved the model as *.h5 files, you would probably want to follow @Mitiku's instruction and do the post-training quantization. If you prefer to achieve higher performance by simulating the effect of quantization in training (using the method you quoted in the question), and your model is in the subset of CNN architecture supported by quantization-aware training, this example may help you in terms of interaction between Keras and TensorFlow. Basically, you only need to add this code between model definition and its fitting: ``` sess = tf.keras.backend.get_session() tf.contrib.quantize.create_training_graph(sess.graph) sess.run(tf.global_variables_initializer()) ```", "best_answers_score":0.8, "library_name":"tensorflow", "question_url":"https:\/\/stackoverflow.com\/questions\/52259343\/quantize-a-keras-neural-network-model", "best_answers_votes":2, "question_length":3201, "response_length":1252 }, { "question":"Float16 slower than float32 in keras I'm testing out my new NVIDIA Titan V, which supports float16 operations. I noticed that during training, float16 is much slower (~800 ms\/step) than float32 (~500 ms\/step). To do float16 operations, I changed my keras.json file to: ``` { \"backend\": \"tensorflow\", \"floatx\": \"float16\", \"image_data_format\": \"channels_last\", \"epsilon\": 1e-07 } ``` Why are the float16 operations so much slower? Do I need to make modifications to my code and not just the keras.json file? I am using CUDA 9.0, cuDNN 7.0, tensorflow 1.7.0, and keras 2.1.5 on Windows 10. 
My python 3.5 code is below: ``` img_width, img_height = 336, 224 train_data_dir = 'C:\\\\my_dir\\\\train' test_data_dir = 'C:\\\\my_dir\\\\test' batch_size=128 datagen = ImageDataGenerator(rescale=1.\/255, horizontal_flip=True, # randomly flip the images vertical_flip=True) train_generator = datagen.flow_from_directory( train_data_dir, target_size=(img_height, img_width), batch_size=batch_size, class_mode='binary') test_generator = datagen.flow_from_directory( test_data_dir, target_size=(img_height, img_width), batch_size=batch_size, class_mode='binary') # Architecture of NN model = Sequential() model.add(Conv2D(32,(3, 3), input_shape=(img_height, img_width, 3),padding='same',kernel_initializer='lecun_normal')) model.add(Activation('relu')) model.add(MaxPooling2D(pool_size=(2, 2))) model.add(Conv2D(32,(3, 3),padding='same')) model.add(Activation('relu')) model.add(MaxPooling2D(pool_size=(2, 2))) model.add(Conv2D(64,(3, 3),padding='same')) model.add(Activation('relu')) model.add(MaxPooling2D(pool_size=(2, 2))) model.add(Conv2D(64,(3, 3),padding='same')) model.add(Activation('relu')) model.add(MaxPooling2D(pool_size=(2, 2))) model.add(AveragePooling2D(pool_size=(2,2))) model.add(Flatten()) model.add(Dense(1)) model.add(Activation('sigmoid')) my_rmsprop = keras.optimizers.RMSprop(lr=0.0001, rho=0.9, epsilon=1e-04, decay=0.0) model.compile(loss='binary_crossentropy', optimizer=my_rmsprop, metrics=['accuracy']) # Training nb_epoch = 32 nb_train_samples = 512 nb_test_samples = 512 model.fit_generator( train_generator, steps_per_epoch=nb_train_samples\/batch_size, epochs=nb_epoch, verbose=1, validation_data=test_generator, validation_steps=nb_test_samples\/batch_size) # Evaluating on the testing set model.evaluate_generator(test_generator, nb_test_samples) ```", "response":"From the documentation of cuDNN (section 2.7, subsection Type Conversion) you can see: Note: Accumulators are 32-bit integers which wrap on overflow. 
and that this holds for the standard INT8 data type of the following: the data input, the filter input and the output. Under those assumptions, @jiandercy is right that there's a float16 to float32 conversion and then back-conversion before returning the result, and float16 would be slower.", "best_answers_score":0.8, "library_name":"tensorflow", "question_url":"https:\/\/stackoverflow.com\/questions\/49782579\/float16-slower-than-float32-in-keras", "best_answers_votes":3, "question_length":2360, "response_length":441 }, { "question":"How to assign a value to a TensorFlow variable? I am trying to assign a new value to a tensorflow variable in python. ``` import tensorflow as tf import numpy as np x = tf.Variable(0) init = tf.initialize_all_variables() sess = tf.InteractiveSession() sess.run(init) print(x.eval()) x.assign(1) print(x.eval()) ``` But the output I get is ``` 0 0 ``` So the value has not changed. What am I missing?", "response":"In TF1, the statement x.assign(1) does not actually assign the value 1 to x, but rather creates a tf.Operation that you have to explicitly run to update the variable.* A call to Operation.run() or Session.run() can be used to run the operation: ``` assign_op = x.assign(1) sess.run(assign_op) # or `assign_op.op.run()` print(x.eval()) # ==> 1 ``` (* In fact, it returns a tf.Tensor, corresponding to the updated value of the variable, to make it easier to chain assignments.) However, in TF2 x.assign(1) will now assign the value eagerly: ``` x.assign(1) print(x.numpy()) # ==> 1 ```", "best_answers_score":0.799, "library_name":"tensorflow", "question_url":"https:\/\/stackoverflow.com\/questions\/34220532\/how-to-assign-a-value-to-a-tensorflow-variable", "best_answers_votes":135, "question_length":399, "response_length":583 }, { "question":"Keras ValueError: Input 0 is incompatible with layer conv2d_1: expected ndim=4, found ndim=5 I have checked all the solutions, but still, I am facing the same error. 
My training images' shape is (26721, 32, 32, 1), which I believe is 4-dimensional, but I don't know why the error says it is 5-dimensional. ``` model = Sequential() model.add(Convolution2D(16, 5, 5, border_mode='same', input_shape= input_shape )) ``` So this is how I am defining model.fit_generator: ``` model.fit_generator(train_dataset, train_labels, nb_epoch=epochs, verbose=1,validation_data=(valid_dataset, valid_labels), nb_val_samples=valid_dataset.shape[0],callbacks=model_callbacks) ```", "response":"The problem is input_shape. It should actually contain only 3 dimensions; internally, Keras will add the batch dimension, making it 4. Since you probably used input_shape with 4 dimensions (batch included), Keras is adding the 5th. You should use input_shape=(32,32,1).", "best_answers_score":0.7986, "library_name":"tensorflow", "question_url":"https:\/\/stackoverflow.com\/questions\/47665391\/keras-valueerror-input-0-is-incompatible-with-layer-conv2d-1-expected-ndim-4", "best_answers_votes":71, "question_length":656, "response_length":271 }, { "question":"How do I get the weights of a layer in Keras? I am using Windows 10, Python 3.5, and tensorflow 1.1.0. I have the following script: ``` import tensorflow as tf import tensorflow.contrib.keras.api.keras.backend as K from tensorflow.contrib.keras.api.keras.layers import Dense tf.reset_default_graph() init = tf.global_variables_initializer() sess = tf.Session() K.set_session(sess) # Keras will use this session to initialize all variables input_x = tf.placeholder(tf.float32, [None, 10], name='input_x') dense1 = Dense(10, activation='relu')(input_x) sess.run(init) dense1.get_weights() ``` I get the error: AttributeError: 'Tensor' object has no attribute 'weights' What am I doing wrong, and how do I get the weights of dense1?
I have looked at this and this SO post, but I still can't make it work.", "response":"If you want to get the weights and biases of all layers, you can simply use: ``` for layer in model.layers: print(layer.get_config(), layer.get_weights()) ``` This will print all the relevant information. If you want the weights directly returned as numpy arrays, you can use: ``` first_layer_weights = model.layers[0].get_weights()[0] first_layer_biases = model.layers[0].get_weights()[1] second_layer_weights = model.layers[1].get_weights()[0] second_layer_biases = model.layers[1].get_weights()[1] ``` etc.", "best_answers_score":0.7985, "library_name":"tensorflow", "question_url":"https:\/\/stackoverflow.com\/questions\/43715047\/how-do-i-get-the-weights-of-a-layer-in-keras", "best_answers_votes":103, "question_length":800, "response_length":508 }, { "question":"What is the purpose of the Tensorflow Gradient Tape? I watched the Tensorflow Developer's summit video on Eager Execution in Tensorflow, and the presenter gave an introduction to \"Gradient Tape.\" Now I understand that Gradient Tape tracks the automatic differentiation that occurs in a TF model. I was trying to understand why I would use Gradient Tape. Can anyone explain how Gradient Tape is used as a diagnostic tool? Why would someone use Gradient Tape versus just Tensorboard visualization of weights? So I get that the automatic differentiation that occurs with a model is to compute the gradients of each node--meaning the adjustment of the weights and biases at each node, given some batch of data. So that is the learning process. But I was under the impression that I can actually use a tf.keras.callback.TensorBoard() call to see the tensorboard visualization of training--so I can watch the weights on each node and determine if there are any dead or oversaturated nodes. Is the use of Gradient Tape only to see if some gradients go to zero or get really big, etc?
Or is there some other use of the Gradient Tape?", "response":"With eager execution enabled, Tensorflow will calculate the values of tensors as they occur in your code. This means that it won't precompute a static graph for which inputs are fed in through placeholders. This means to back propagate errors, you have to keep track of the gradients of your computation and then apply these gradients to an optimiser. This is very different from running without eager execution, where you would build a graph and then simply use sess.run to evaluate your loss and then pass this into an optimiser directly. Fundamentally, because tensors are evaluated immediately, you don't have a graph to calculate gradients and so you need a gradient tape. It is not so much that it is just used for visualisation, but more that you cannot implement a gradient descent in eager mode without it. Obviously, Tensorflow could just keep track of every gradient for every computation on every tf.Variable. However, that could be a huge performance bottleneck. They expose a gradient tape so that you can control what areas of your code need the gradient information. Note that in non-eager mode, this will be statically determined based on the computational branches that are descendants of your loss but in eager mode there is no static graph and so no way of knowing.", "best_answers_score":0.7965, "library_name":"tensorflow", "question_url":"https:\/\/stackoverflow.com\/questions\/53953099\/what-is-the-purpose-of-the-tensorflow-gradient-tape", "best_answers_votes":85, "question_length":1125, "response_length":1285 }, { "question":"What's the difference between sparse_softmax_cross_entropy_with_logits and softmax_cross_entropy_with_logits? I recently came across tf.nn.sparse_softmax_cross_entropy_with_logits and I can not figure out what the difference is compared to tf.nn.softmax_cross_entropy_with_logits. 
Is the only difference that training vectors y have to be one-hot encoded when using sparse_softmax_cross_entropy_with_logits? Reading the API, I was unable to find any other difference compared to softmax_cross_entropy_with_logits. But why do we need the extra function then? Shouldn't softmax_cross_entropy_with_logits produce the same results as sparse_softmax_cross_entropy_with_logits, if it is supplied with one-hot encoded training data\/vectors?", "response":"Having two different functions is a convenience, as they produce the same result. The difference is simple: For sparse_softmax_cross_entropy_with_logits, labels must have the shape [batch_size] and the dtype int32 or int64. Each label is an int in range [0, num_classes-1]. For softmax_cross_entropy_with_logits, labels must have the shape [batch_size, num_classes] and dtype float32 or float64. Labels used in softmax_cross_entropy_with_logits are the one hot version of labels used in sparse_softmax_cross_entropy_with_logits. Another tiny difference is that with sparse_softmax_cross_entropy_with_logits, you can give -1 as a label to have loss 0 on this label.", "best_answers_score":0.7953, "library_name":"tensorflow", "question_url":"https:\/\/stackoverflow.com\/questions\/37312421\/whats-the-difference-between-sparse-softmax-cross-entropy-with-logits-and-softm", "best_answers_votes":188, "question_length":733, "response_length":664 }, { "question":"List of tensor names in graph in Tensorflow The graph object in Tensorflow has a method called \"get_tensor_by_name(name)\". Is there anyway to get a list of valid tensor names? If not, does anyone know the valid names for the pretrained model inception-v3 from here? From their example, pool_3, is one valid tensor but a list of all of them would be nice. I looked at the paper referred to and some of the layers seems to correspond to the sizes in table 1 but not all of them.", "response":"The paper is not accurately reflecting the model. 
If you download the source from arxiv it has an accurate model description as model.txt, and the names in there correlate strongly with the names in the released model. To answer your first question, sess.graph.get_operations() gives you a list of operations. For an op, op.name gives you the name and op.values() gives you a list of tensors it produces (in the inception-v3 model, all tensor names are the op name with a \":0\" appended to it, so pool_3:0 is the tensor produced by the final pooling op.)", "best_answers_score":0.7951, "library_name":"tensorflow", "question_url":"https:\/\/stackoverflow.com\/questions\/35336648\/list-of-tensor-names-in-graph-in-tensorflow", "best_answers_votes":64, "question_length":476, "response_length":553 }, { "question":"How to prefetch data using a custom python function in tensorflow I am trying to prefetch training data to hide I\/O latency. I would like to write custom Python code that loads data from disk and preprocesses the data (e.g. by adding a context window). In other words, one thread does data preprocessing and the other does training. Is this possible in TensorFlow? Update: I have a working example based on @mrry's example. 
``` import numpy as np import tensorflow as tf import threading BATCH_SIZE = 5 TRAINING_ITERS = 4100 feature_input = tf.placeholder(tf.float32, shape=[128]) label_input = tf.placeholder(tf.float32, shape=[128]) q = tf.FIFOQueue(200, [tf.float32, tf.float32], shapes=[[128], [128]]) enqueue_op = q.enqueue([label_input, feature_input]) label_batch, feature_batch = q.dequeue_many(BATCH_SIZE) c = tf.reshape(feature_batch, [BATCH_SIZE, 128]) + tf.reshape(label_batch, [BATCH_SIZE, 128]) sess = tf.Session() def load_and_enqueue(sess, enqueue_op, coord): with open('dummy_data\/features.bin') as feature_file, open('dummy_data\/labels.bin') as label_file: while not coord.should_stop(): feature_array = np.fromfile(feature_file, np.float32, 128) if feature_array.shape[0] == 0: print('reach end of file, reset using seek(0,0)') feature_file.seek(0,0) label_file.seek(0,0) continue label_value = np.fromfile(label_file, np.float32, 128) sess.run(enqueue_op, feed_dict={feature_input: feature_array, label_input: label_value}) coord = tf.train.Coordinator() t = threading.Thread(target=load_and_enqueue, args=(sess,enqueue_op, coord)) t.start() for i in range(TRAINING_ITERS): sum = sess.run(c) print('train_iter='+str(i)) print(sum) coord.request_stop() coord.join([t]) ```", "response":"This is a common use case, and most implementations use TensorFlow's queues to decouple the preprocessing code from the training code. There is a tutorial on how to use queues, but the main steps are as follows: Define a queue, q, that will buffer the preprocessed data. TensorFlow supports the simple tf.FIFOQueue that produces elements in the order they were enqueued, and the more advanced tf.RandomShuffleQueue that produces elements in a random order. A queue element is a tuple of one or more tensors (which can have different types and shapes). 
All queues support single-element (enqueue, dequeue) and batch (enqueue_many, dequeue_many) operations, but to use the batch operations you must specify the shapes of each tensor in a queue element when constructing the queue. Build a subgraph that enqueues preprocessed elements into the queue. One way to do this would be to define some tf.placeholder() ops for tensors corresponding to a single input example, then pass them to q.enqueue(). (If your preprocessing produces a batch at once, you should use q.enqueue_many() instead.) You might also include TensorFlow ops in this subgraph. Build a subgraph that performs training. This will look like a regular TensorFlow graph, but will get its input by calling q.dequeue_many(BATCH_SIZE). Start your session. Create one or more threads that execute your preprocessing logic, then execute the enqueue op, feeding in the preprocessed data. You may find the tf.train.Coordinator and tf.train.QueueRunner utility classes useful for this. Run your training graph (optimizer, etc.) as normal. EDIT: Here's a simple load_and_enqueue() function and code fragment to get you started: ``` # Features are length-100 vectors of floats feature_input = tf.placeholder(tf.float32, shape=[100]) # Labels are scalar integers. label_input = tf.placeholder(tf.int32, shape=[]) # Alternatively, could do: # feature_batch_input = tf.placeholder(tf.float32, shape=[None, 100]) # label_batch_input = tf.placeholder(tf.int32, shape=[None]) q = tf.FIFOQueue(100, [tf.float32, tf.int32], shapes=[[100], []]) enqueue_op = q.enqueue([feature_input, label_input]) # For batch input, do: # enqueue_op = q.enqueue_many([feature_batch_input, label_batch_input]) feature_batch, label_batch = q.dequeue_many(BATCH_SIZE) # Build rest of model taking label_batch, feature_batch as input. # [...] train_op = ... sess = tf.Session() def load_and_enqueue(): with open(...) as feature_file, open(...) 
as label_file: while True: feature_array = numpy.fromfile(feature_file, numpy.float32, 100) if feature_array.size == 0: return label_value = numpy.fromfile(label_file, numpy.int32, 1)[0] sess.run(enqueue_op, feed_dict={feature_input: feature_array, label_input: label_value}) # Start a thread to enqueue data asynchronously, and hide I\/O latency. t = threading.Thread(target=load_and_enqueue) t.start() for _ in range(TRAINING_EPOCHS): sess.run(train_op) ```", "best_answers_score":0.7941, "library_name":"tensorflow", "question_url":"https:\/\/stackoverflow.com\/questions\/34594198\/how-to-prefetch-data-using-a-custom-python-function-in-tensorflow", "best_answers_votes":54, "question_length":1691, "response_length":2920 }, { "question":"Tensorflow: Using Adam optimizer I am experimenting with some simple models in tensorflow, including one that looks very similar to the first MNIST for ML Beginners example, but with a somewhat larger dimensionality. I am able to use the gradient descent optimizer with no problems, getting good enough convergence. When I try to use the ADAM optimizer, I get errors like this: ``` tensorflow.python.framework.errors.FailedPreconditionError: Attempting to use uninitialized value Variable_21\/Adam [[Node: Adam_2\/update_Variable_21\/ApplyAdam = ApplyAdam[T=DT_FLOAT, use_locking=false, _device=\"\/job:localhost\/replica:0\/task:0\/cpu:0\"](Variable_21, Variable_21\/Adam, Variable_21\/Adam_1, beta1_power_2, beta2_power_2, Adam_2\/learning_rate, Adam_2\/beta1, Adam_2\/beta2, Adam_2\/epsilon, gradients_11\/add_10_grad\/tuple\/control_dependency_1)]] ``` where the specific variable that complains about being uninitialized changes depending on the run. What does this error mean? And what does it suggest is wrong? It seems to occur regardless of the learning rate I use.", "response":"The AdamOptimizer class creates additional variables, called \"slots\", to hold values for the \"m\" and \"v\" accumulators.
See the source here if you're curious; it's actually quite readable: https:\/\/github.com\/tensorflow\/tensorflow\/blob\/master\/tensorflow\/python\/training\/adam.py#L39 . Other optimizers, such as Momentum and Adagrad, use slots too. These variables must be initialized before you can train a model. The normal way to initialize variables is to call tf.initialize_all_variables(), which adds ops to initialize the variables present in the graph when it is called. (Aside: despite what its name suggests, initialize_all_variables() does not initialize anything; it only adds ops that will initialize the variables when run.) What you must do is call initialize_all_variables() after you have added the optimizer: ``` ...build your model... # Add the optimizer train_op = tf.train.AdamOptimizer(1e-4).minimize(cross_entropy) # Add the ops to initialize variables. These will include # the optimizer slots added by AdamOptimizer(). init_op = tf.initialize_all_variables() # launch the graph in a session sess = tf.Session() # Actually initialize the variables sess.run(init_op) # now train your model for ...: sess.run(train_op) ```", "best_answers_score":0.7919, "library_name":"tensorflow", "question_url":"https:\/\/stackoverflow.com\/questions\/33788989\/tensorflow-using-adam-optimizer", "best_answers_votes":101, "question_length":1056, "response_length":1230 }, { "question":"In TensorFlow, what is the difference between Session.run() and Tensor.eval()? TensorFlow has two ways to evaluate part of the graph: Session.run on a list of variables and Tensor.eval. Is there a difference between these two?", "response":"If you have a Tensor t, calling t.eval() is equivalent to calling tf.get_default_session().run(t).
You can make a session the default as follows: ``` t = tf.constant(42.0) sess = tf.Session() with sess.as_default(): # or `with sess:` to close on exit assert sess is tf.get_default_session() assert t.eval() == sess.run(t) ``` The most important difference is that you can use sess.run() to fetch the values of many tensors in the same step: ``` t = tf.constant(42.0) u = tf.constant(37.0) tu = tf.mul(t, u) ut = tf.mul(u, t) with sess.as_default(): tu.eval() # runs one step ut.eval() # runs one step sess.run([tu, ut]) # evaluates both tensors in a single step ``` Note that each call to eval and run will execute the whole graph from scratch. To cache the result of a computation, assign it to a tf.Variable.", "best_answers_score":0.7902, "library_name":"tensorflow", "question_url":"https:\/\/stackoverflow.com\/questions\/33610685\/in-tensorflow-what-is-the-difference-between-session-run-and-tensor-eval", "best_answers_votes":250, "question_length":222, "response_length":810 }, { "question":"How to deal with batches with variable-length sequences in TensorFlow? I was trying to use an RNN (specifically, LSTM) for sequence prediction. However, I ran into an issue with variable sequence lengths. For example, ``` sent_1 = \"I am flying to Dubain\" sent_2 = \"I was traveling from US to Dubai\" ``` I am trying to predicting the next word after the current one with a simple RNN based on this Benchmark for building a PTB LSTM model. However, the num_steps parameter (used for unrolling to the previous hidden states), should remain the same in each Tensorflow's epoch. Basically, batching sentences is not possible as the sentences vary in length. ``` # inputs = [tf.squeeze(input_, [1]) # for input_ in tf.split(1, num_steps, inputs)] # outputs, states = rnn.rnn(cell, inputs, initial_state=self._initial_state) ``` Here, num_steps need to be changed in my case for every sentence. 
I have tried several hacks, but nothing seems to work.", "response":"You can use the ideas of bucketing and padding which are described in: Sequence-to-Sequence Models Also, the rnn function which creates the RNN network accepts a parameter sequence_length. As an example, you can create buckets of sentences of the same size, pad them with the necessary number of zeros (or placeholders which stand for the zero word), and afterwards feed them along with seq_length = len(zero_words). ``` seq_length = tf.placeholder(tf.int32) outputs, states = rnn.rnn(cell, inputs, initial_state=initial_state, sequence_length=seq_length) sess = tf.Session() feed = { seq_length: 20, #other feeds } sess.run(outputs, feed_dict=feed) ``` Take a look at this reddit thread as well: Tensorflow basic RNN example with 'variable length' sequences", "best_answers_score":0.7894, "library_name":"tensorflow", "question_url":"https:\/\/stackoverflow.com\/questions\/34670112\/how-to-deal-with-batches-with-variable-length-sequences-in-tensorflow", "best_answers_votes":24, "question_length":942, "response_length":746 }, { "question":"Keras: How to get layer shapes in a Sequential model I would like to access the layer size of all the layers in a Sequential Keras model. My code: ``` model = Sequential() model.add(Conv2D(filters=32, kernel_size=(3,3), input_shape=(64,64,3) )) model.add(MaxPooling2D(pool_size=(3,3), strides=(2,2))) ``` Then I would like some code like the following to work ``` for layer in model.layers: print(layer.get_shape()) ``` ... but it doesn't. I get the error: AttributeError: 'Conv2D' object has no attribute 'get_shape'", "response":"According to the official doc for Keras Layer, one can access a layer's output\/input shape via layer.output_shape or layer.input_shape.
```py from keras.models import Sequential from keras.layers import Conv2D, MaxPool2D model = Sequential(layers=[ Conv2D(32, (3, 3), input_shape=(64, 64, 3)), MaxPool2D(pool_size=(3, 3), strides=(2, 2)) ]) for layer in model.layers: print(layer.output_shape) # Output # (None, 62, 62, 32) # (None, 30, 30, 32) ```", "best_answers_score":0.7864, "library_name":"tensorflow", "question_url":"https:\/\/stackoverflow.com\/questions\/43743593\/keras-how-to-get-layer-shapes-in-a-sequential-model", "best_answers_votes":43, "question_length":516, "response_length":440 }, { "question":"Unknown initializer: GlorotUniform when loading Keras model I trained my CNN (VGG) through google colab and generated a .h5 file. Now the problem is, I can predict my output successfully through google colab, but when I download that trained .h5 model file and try to predict output on my laptop, I am getting an error when loading the model. Here is the code: ``` import tensorflow as tf from tensorflow import keras import h5py # Initialization loaded_model = keras.models.load_model('.\/train_personCount_model.h5') ``` And the error: ``` ValueError: Unknown initializer: GlorotUniform ```", "response":"I fixed the problem. Before: ```py from keras.models import load_model classifierLoad = load_model('model\/modeltest.h5') ``` This works for me: ```py import tensorflow as tf classifierLoad = tf.keras.models.load_model('model\/modeltest.h5') ```", "best_answers_score":0.7864, "library_name":"tensorflow", "question_url":"https:\/\/stackoverflow.com\/questions\/53183865\/unknown-initializer-glorotuniform-when-loading-keras-model", "best_answers_votes":43, "question_length":581, "response_length":237 }, { "question":"How do I get the gradient of the loss at a TensorFlow variable? The feature I'm after is to be able to tell what the gradient of a given variable is with respect to my error function given some data.
One way to do this would be to see how much the variable has changed after a call to train, but obviously that can vary massively based on the learning algorithm (for example, it would be almost impossible to tell with something like RProp) and just isn't very clean. Thanks in advance.", "response":"The tf.gradients() function allows you to compute the symbolic gradient of one tensor with respect to one or more other tensors\u2014including variables. Consider the following simple example: ``` data = tf.placeholder(tf.float32) var = tf.Variable(...) # Must be a tf.float32 or tf.float64 variable. loss = some_function_of(var, data) # some_function_of() returns a `Tensor`. var_grad = tf.gradients(loss, [var])[0] ``` You can then use this symbolic gradient to evaluate the gradient at some specific point (data): ``` sess = tf.Session() var_grad_val = sess.run(var_grad, feed_dict={data: ...}) ```", "best_answers_score":0.7859, "library_name":"tensorflow", "question_url":"https:\/\/stackoverflow.com\/questions\/35226428\/how-do-i-get-the-gradient-of-the-loss-at-a-tensorflow-variable", "best_answers_votes":59, "question_length":485, "response_length":596 }, { "question":"In Keras, what exactly am I configuring when I create a stateful `LSTM` layer with N `units`? The first argument of a normal Dense layer is also units, and it is the number of neurons\/nodes in that layer. A standard LSTM unit however looks like the following: (This is a reworked version of \"Understanding LSTM Networks\") In Keras, when I create an LSTM object like this LSTM(units=N, ...), am I actually creating N of these LSTM units? Or is it the size of the \"Neural Network\" layers inside the LSTM unit, i.e., the W's in the formulas? Or is it something else? For context, I'm working based on this example code. The following is the documentation: https:\/\/keras.io\/layers\/recurrent\/ It says: units: Positive integer, dimensionality of the output space.
It makes me think it is the number of outputs from the Keras LSTM \"layer\" object. Meaning the next layer will have N inputs. Does that mean there actually exist N of these LSTM units in the LSTM layer, or that exactly one LSTM unit is run for N iterations, outputting N of these h[t] values, from, say, h[t-N] up to h[t]? If it only defines the number of outputs, does that mean the input can still be, say, just one, or do we have to manually create lagging input variables x[t-N] to x[t], one for each LSTM unit defined by the units=N argument? As I'm writing this, it occurs to me what the argument return_sequences does. If set to True all the N outputs are passed forward to the next layer, while if it is set to False it only passes the last h[t] output to the next layer. Am I right?", "response":"You can check this question for further information, although it is based on the Keras-1.x API. Basically, the unit means the dimension of the inner cells in the LSTM. Because in an LSTM, the dimensions of the inner cell (C_t and C_{t-1} in the graph), the output mask (o_t in the graph) and the hidden\/output state (h_t in the graph) should all be the SAME, your output's dimension should be unit-length as well. And an LSTM in Keras defines exactly one LSTM block, whose cells are of unit length. If you set return_sequences=True, it will return something with shape: (batch_size, timespan, unit). If False, then it just returns the last output, with shape (batch_size, unit). As for the input, you should provide input for every timestep. Basically, the shape is like (batch_size, timespan, input_dim), where input_dim can be different from the unit.
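These shapes are easy to verify directly; a minimal sketch using the tf.keras API of TF 2.x (the batch, timestep and feature sizes below are made up for illustration): ```python import numpy as np import tensorflow as tf # Made-up sizes: a batch of 2 sequences, 5 timesteps, 8 features per step. x = np.zeros((2, 5, 8), dtype="float32") # units=16 is the dimension of the hidden/cell state; it is independent # of the input feature dimension (8 here). out_last = tf.keras.layers.LSTM(16)(x) out_all = tf.keras.layers.LSTM(16, return_sequences=True)(x) print(out_last.shape) # (2, 16) -- only the last h[t] print(out_all.shape) # (2, 5, 16) -- one 16-dim output per timestep ``` So a single LSTM block with units=16 is unrolled over the 5 timesteps, and return_sequences only controls whether all of the per-step outputs or just the last one are passed on.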
If you just want to provide input at the first step, you can simply pad your data with zeros at other time steps.", "best_answers_score":0.7852, "library_name":"tensorflow", "question_url":"https:\/\/stackoverflow.com\/questions\/44273249\/in-keras-what-exactly-am-i-configuring-when-i-create-a-stateful-lstm-layer-wi", "best_answers_votes":46, "question_length":1556, "response_length":956 }, { "question":"Could not load dynamic library 'cudnn64_8.dll'; dlerror: cudnn64_8.dll not found Using tensorflow 2.4.1 When I run my program, I'm getting this error and can't use my gpu. I'm using CUDA 11.0, cudnn 8.0 ``` 2021-02-07 03:36:18.132005: I tensorflow\/stream_executor\/platform\/default\/dso_loader.cc:49] Successfully opened dynamic library cudart64_110.dll WARNING:tensorflow:From D:\/PycharmProjects\/pythonProject\/models\/kp\u015f,i.py:5: is_gpu_available (from tensorflow.python.framework.test_util) is deprecated and will be removed in a future version. Instructions for updating: Use `tf.config.list_physical_devices('GPU')` instead. 2021-02-07 03:36:19.735127: I tensorflow\/core\/platform\/cpu_feature_guard.cc:142] This TensorFlow binary is optimized with oneAPI Deep Neural Network Library (oneDNN) to use the following CPU instructions in performance-critical operations: AVX2 To enable them in other operations, rebuild TensorFlow with the appropriate compiler flags. 
2021-02-07 03:36:19.739052: I tensorflow\/stream_executor\/platform\/default\/dso_loader.cc:49] Successfully opened dynamic library nvcuda.dll 2021-02-07 03:36:20.715634: I tensorflow\/core\/common_runtime\/gpu\/gpu_device.cc:1720] Found device 0 with properties: pciBusID: 0000:01:00.0 name: GeForce GTX 1650 computeCapability: 7.5 coreClock: 1.56GHz coreCount: 16 deviceMemorySize: 4.00GiB deviceMemoryBandwidth: 119.24GiB\/s 2021-02-07 03:36:20.716281: I tensorflow\/stream_executor\/platform\/default\/dso_loader.cc:49] Successfully opened dynamic library cudart64_110.dll 2021-02-07 03:36:20.723519: I tensorflow\/stream_executor\/platform\/default\/dso_loader.cc:49] Successfully opened dynamic library cublas64_11.dll 2021-02-07 03:36:20.724040: I tensorflow\/stream_executor\/platform\/default\/dso_loader.cc:49] Successfully opened dynamic library cublasLt64_11.dll 2021-02-07 03:36:20.729436: I tensorflow\/stream_executor\/platform\/default\/dso_loader.cc:49] Successfully opened dynamic library cufft64_10.dll 2021-02-07 03:36:20.731800: I tensorflow\/stream_executor\/platform\/default\/dso_loader.cc:49] Successfully opened dynamic library curand64_10.dll 2021-02-07 03:36:20.741580: I tensorflow\/stream_executor\/platform\/default\/dso_loader.cc:49] Successfully opened dynamic library cusolver64_10.dll 2021-02-07 03:36:20.745576: I tensorflow\/stream_executor\/platform\/default\/dso_loader.cc:49] Successfully opened dynamic library cusparse64_11.dll 2021-02-07 03:36:20.746657: W tensorflow\/stream_executor\/platform\/default\/dso_loader.cc:60] Could not load dynamic library 'cudnn64_8.dll'; dlerror: cudnn64_8.dll not found 2021-02-07 03:36:20.746971: W tensorflow\/core\/common_runtime\/gpu\/gpu_device.cc:1757] Cannot dlopen some GPU libraries. Please make sure the missing libraries mentioned above are installed properly if you would like to use GPU. 
Follow the guide at https:\/\/www.tensorflow.org\/install\/gpu for how to download and setup the required libraries for your platform. Skipping registering GPU devices... 2021-02-07 03:36:20.836861: I tensorflow\/core\/common_runtime\/gpu\/gpu_device.cc:1261] Device interconnect StreamExecutor with strength 1 edge matrix: 2021-02-07 03:36:20.837144: I tensorflow\/core\/common_runtime\/gpu\/gpu_device.cc:1267] 0 2021-02-07 03:36:20.837314: I tensorflow\/core\/common_runtime\/gpu\/gpu_device.cc:1280] 0: N 2021-02-07 03:36:20.837493: I tensorflow\/compiler\/jit\/xla_gpu_device.cc:99] Not creating XLA devices, tf_xla_enable_xla_devices not set ```", "response":"I think I can help you by providing a cudnn64_8.dll file (this is the download link: https:\/\/www.dll-files.com\/cudnn64_8.dll.html). When you get the file, you can just put it in your bin directory. For example, on Windows you can put it into C:\\Program Files\\NVIDIA GPU Computing Toolkit\\CUDA\\v11.3\\bin.", "best_answers_score":0.7845, "library_name":"tensorflow", "question_url":"https:\/\/stackoverflow.com\/questions\/66083545\/could-not-load-dynamic-library-cudnn64-8-dll-dlerror-cudnn64-8-dll-not-found", "best_answers_votes":33, "question_length":3428, "response_length":320 }, { "question":"Tensorflow: Cannot interpret feed_dict key as Tensor I am trying to build a neural network model with one hidden layer (1024 nodes). The hidden layer is nothing but a relu unit. I am also processing the input data in batches of 128. The inputs are images of size 28 * 28. In the following code I get the error in line ``` _, c = sess.run([optimizer, loss], feed_dict={x: batch_x, y: batch_y}) Error: TypeError: Cannot interpret feed_dict key as Tensor: Tensor Tensor(\"Placeholder_64:0\", shape=(128, 784), dtype=float32) is not an element of this graph.
``` Here is the code I have written ```py #Initialize batch_size = 128 layer1_input = 28 * 28 hidden_layer1 = 1024 num_labels = 10 num_steps = 3001 #Create neural network model def create_model(inp, w, b): layer1 = tf.add(tf.matmul(inp, w['w1']), b['b1']) layer1 = tf.nn.relu(layer1) layer2 = tf.matmul(layer1, w['w2']) + b['b2'] return layer2 #Initialize variables x = tf.placeholder(tf.float32, shape=(batch_size, layer1_input)) y = tf.placeholder(tf.float32, shape=(batch_size, num_labels)) w = { 'w1': tf.Variable(tf.random_normal([layer1_input, hidden_layer1])), 'w2': tf.Variable(tf.random_normal([hidden_layer1, num_labels])) } b = { 'b1': tf.Variable(tf.zeros([hidden_layer1])), 'b2': tf.Variable(tf.zeros([num_labels])) } init = tf.initialize_all_variables() train_prediction = tf.nn.softmax(model) tf_valid_dataset = tf.constant(valid_dataset) tf_test_dataset = tf.constant(test_dataset) model = create_model(x, w, b) loss = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(model, y)) optimizer = tf.train.GradientDescentOptimizer(0.5).minimize(loss) #Process with tf.Session(graph=graph1) as sess: tf.initialize_all_variables().run() total_batch = int(train_dataset.shape[0] \/ batch_size) for epoch in range(num_steps): loss = 0 for i in range(total_batch): batch_x, batch_y = train_dataset[epoch * batch_size:(epoch+1) * batch_size, :], train_labels[epoch * batch_size:(epoch+1) * batch_size,:] _, c = sess.run([optimizer, loss], feed_dict={x: batch_x, y: batch_y}) loss = loss + c loss = loss \/ total_batch if epoch % 500 == 0: print (\"Epoch :\", epoch, \". 
cost = {:.9f}\".format(avg_cost)) print(\"Minibatch accuracy: %.1f%%\" % accuracy(predictions, batch_labels)) valid_prediction = tf.run(tf_valid_dataset, {x: tf_valid_dataset}) print(\"Validation accuracy: %.1f%%\" % accuracy(valid_prediction.eval(), valid_labels)) test_prediction = tf.run(tf_test_dataset, {x: tf_test_dataset}) print(\"TEST accuracy: %.1f%%\" % accuracy(test_prediction.eval(), test_labels)) ```", "response":"This worked for me: ``` from keras import backend as K ``` After predicting my data I inserted this part of code and then loaded the model again. ``` K.clear_session() ``` I faced this problem on a production server, but on my PC it was running fine. ``` from keras import backend as K #Before prediction K.clear_session() #After prediction K.clear_session() ```", "best_answers_score":0.7838, "library_name":"tensorflow", "question_url":"https:\/\/stackoverflow.com\/questions\/40785224\/tensorflow-cannot-interpret-feed-dict-key-as-tensor", "best_answers_votes":70, "question_length":2535, "response_length":376 }, { "question":"Tf 2.0 : RuntimeError: GradientTape.gradient can only be called once on non-persistent tapes In the tf 2.0 DC GAN example in the TensorFlow 2.0 guide, there are two gradient tapes. See below.
```py @tf.function def train_step(images): noise = tf.random.normal([BATCH_SIZE, noise_dim]) with tf.GradientTape() as gen_tape, tf.GradientTape() as disc_tape: generated_images = generator(noise, training=True) real_output = discriminator(images, training=True) fake_output = discriminator(generated_images, training=True) gen_loss = generator_loss(fake_output) disc_loss = discriminator_loss(real_output, fake_output) gradients_of_generator = gen_tape.gradient(gen_loss, generator.trainable_variables) gradients_of_discriminator = disc_tape.gradient(disc_loss, discriminator.trainable_variables) generator_optimizer.apply_gradients(zip(gradients_of_generator, generator.trainable_variables)) discriminator_optimizer.apply_gradients(zip(gradients_of_discriminator, discriminator.trainable_variables)) ``` As you can clearly see, there are two gradient tapes. I was wondering what difference using a single tape would make, so I changed it to the following: ```py @tf.function def train_step(images): noise = tf.random.normal([BATCH_SIZE, noise_dim]) with tf.GradientTape() as tape: generated_images = generator(noise, training=True) real_output = discriminator(images, training=True) fake_output = discriminator(generated_images, training=True) gen_loss = generator_loss(fake_output) disc_loss = discriminator_loss(real_output, fake_output) gradients_of_generator = tape.gradient(gen_loss, generator.trainable_variables) gradients_of_discriminator = tape.gradient(disc_loss, discriminator.trainable_variables) generator_optimizer.apply_gradients(zip(gradients_of_generator, generator.trainable_variables)) discriminator_optimizer.apply_gradients(zip(gradients_of_discriminator, discriminator.trainable_variables)) ``` This gives me the following error: ``` RuntimeError: GradientTape.gradient can only be called once on non-persistent tapes. ``` I would like to know why two tapes are necessary. As of now the documentation on tf2.0 APIs is scanty.
Can anyone explain or point me to the right docs\/tutorials?", "response":"From the documentation of GradientTape: By default, the resources held by a GradientTape are released as soon as GradientTape.gradient() method is called. To compute multiple gradients over the same computation, create a persistent gradient tape. This allows multiple calls to the gradient() method as resources are released when the tape object is garbage collected. A persistent gradient can be created with with tf.GradientTape(persistent=True) as tape and can\/should be manually deleted with del tape (credits for this @zwep, @Crispy13).", "best_answers_score":0.7817, "library_name":"tensorflow", "question_url":"https:\/\/stackoverflow.com\/questions\/56072634\/tf-2-0-runtimeerror-gradienttape-gradient-can-only-be-called-once-on-non-pers", "best_answers_votes":27, "question_length":2195, "response_length":541 }, { "question":"Tensorflow: ImportError: libcusolver.so.8.0: cannot open shared object file: No such file or directory I'm having problems in importing tensorflow in python3: ``` >>> import tensorflow as tf Traceback (most recent call last): File \"\/usr\/local\/lib\/python3.5\/dist-packages\/tensorflow\/python\/pywrap_tensorflow.py\", line 41, in from tensorflow.python.pywrap_tensorflow_internal import * File \"\/usr\/local\/lib\/python3.5\/dist-packages\/tensorflow\/python\/pywrap_tensorflow_internal.py\", line 28, in _pywrap_tensorflow_internal = swig_import_helper() File \"\/usr\/local\/lib\/python3.5\/dist-packages\/tensorflow\/python\/pywrap_tensorflow_internal.py\", line 24, in swig_import_helper _mod = imp.load_module('_pywrap_tensorflow_internal', fp, pathname, description) File \"\/usr\/lib\/python3.5\/imp.py\", line 242, in load_module return load_dynamic(name, filename, file) File \"\/usr\/lib\/python3.5\/imp.py\", line 342, in load_dynamic return _load(spec) ImportError: libcusolver.so.8.0: cannot open shared object file: No such file or directory During handling of the 
above exception, another exception occurred: Traceback (most recent call last): File \"\", line 1, in File \"\/usr\/local\/lib\/python3.5\/dist-packages\/tensorflow\/__init__.py\", line 24, in from tensorflow.python import * File \"\/usr\/local\/lib\/python3.5\/dist-packages\/tensorflow\/python\/__init__.py\", line 51, in from tensorflow.python import pywrap_tensorflow File \"\/usr\/local\/lib\/python3.5\/dist-packages\/tensorflow\/python\/pywrap_tensorflow.py\", line 52, in raise ImportError(msg) ImportError: Traceback (most recent call last): File \"\/usr\/local\/lib\/python3.5\/dist-packages\/tensorflow\/python\/pywrap_tensorflow.py\", line 41, in from tensorflow.python.pywrap_tensorflow_internal import * File \"\/usr\/local\/lib\/python3.5\/dist-packages\/tensorflow\/python\/pywrap_tensorflow_internal.py\", line 28, in _pywrap_tensorflow_internal = swig_import_helper() File \"\/usr\/local\/lib\/python3.5\/dist-packages\/tensorflow\/python\/pywrap_tensorflow_internal.py\", line 24, in swig_import_helper _mod = imp.load_module('_pywrap_tensorflow_internal', fp, pathname, description) File \"\/usr\/lib\/python3.5\/imp.py\", line 242, in load_module return load_dynamic(name, filename, file) File \"\/usr\/lib\/python3.5\/imp.py\", line 342, in load_dynamic return _load(spec) ImportError: libcusolver.so.8.0: cannot open shared object file: No such file or directory Failed to load the native TensorFlow runtime. See https:\/\/www.tensorflow.org\/install\/install_sources#common_installation_problems for some common reasons and solutions. Include the entire stack trace above this error message when asking for help. 
``` I am using Nvidia drivers version 381.09 beta, as version 375 has this bug: https:\/\/askubuntu.com\/questions\/896221\/strange-artifacts-along-window-borders-after-waking-computer-from-sleep-mode?noredirect=1&lq=1 I have installed CUDA 8.0 and cuDNN-v6.0: ``` rharish@rharish-GL552VW:~$ cd \/usr\/local rharish@rharish-GL552VW:\/usr\/local$ ls bin cuda etc include man share computecpp cuda-8.0 games lib sbin src ``` Also, libcusolver.so.8.0 exists in \/usr\/local\/cuda\/lib64\/: libcusolver.so.8.0 in 'ls' output I have uninstalled and reinstalled CUDA, cuDNN, and built tensorflow from sources. This problem has been occurring since updating the Nvidia drivers to version 381.09 beta. Any help?", "response":"Found the solution: I reinstalled nvidia-381, CUDA-8.0 (using the runfile) and cuDNN 6.0. Then I added the following to my .bashrc: ``` export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:\/usr\/local\/cuda\/lib64\/ ```", "best_answers_score":0.7815, "library_name":"tensorflow", "question_url":"https:\/\/stackoverflow.com\/questions\/43558707\/tensorflow-importerror-libcusolver-so-8-0-cannot-open-shared-object-file-no", "best_answers_votes":42, "question_length":3296, "response_length":202 }, { "question":"TypeError: only integer scalar arrays can be converted to a scalar index I am trying a simple demo code of tensorflow from a GitHub link. I'm currently using Python version 3.5.2 ``` Z:\\downloads\\tensorflow_demo-master\\tensorflow_demo-master>py Python 3.5.2 (v3.5.2:4def2a2901a5, Jun 25 2016, 22:18:55) [MSC v.1900 64 bit (AMD64)] on win32 Type \"help\", \"copyright\", \"credits\" or \"license\" for more information. ``` I ran into this error when I tried board.py on the command line. I have installed all the dependencies that are required for this to run.
``` def _read32(bytestream): dt = numpy.dtype(numpy.uint32).newbyteorder('>') return numpy.frombuffer(bytestream.read(4), dtype=dt) def extract_images(filename): \"\"\"Extract the images into a 4D uint8 numpy array [index, y, x, depth].\"\"\" print('Extracting', filename) with gzip.open(filename) as bytestream: magic = _read32(bytestream) if magic != 2051: raise ValueError( 'Invalid magic number %d in MNIST image file: %s' % (magic, filename)) num_images = _read32(bytestream) rows = _read32(bytestream) cols = _read32(bytestream) buf = bytestream.read(rows * cols * num_images) data = numpy.frombuffer(buf, dtype=numpy.uint8) data = data.reshape(num_images, rows, cols, 1) return data Z:\\downloads\\tensorflow_demo-master\\tensorflow_demo-master>py board.py Extracting Z:\/downloads\/MNIST dataset\\train-images-idx3-ubyte.gz Traceback (most recent call last): File \"board.py\", line 3, in mnist = input_data.read_data_sets(r'Z:\/downloads\/MNIST dataset', one_hot=True) File \"Z:\\downloads\\tensorflow_demo-master\\tensorflow_demo-master\\input_data.py\", line 150, in read_data_sets train_images = extract_images(local_file) File \"Z:\\downloads\\tensorflow_demo-master\\tensorflow_demo-master\\input_data.py\", line 40, in extract_images buf = bytestream.read(rows * cols * num_images) File \"C:\\Users\\surak\\AppData\\Local\\Programs\\Python\\Python35\\lib\\gzip.py\", line 274, in read return self._buffer.read(size) TypeError: only integer scalar arrays can be converted to a scalar index ```", "response":"You can modify the function: ``` def _read32(bytestream): dt = numpy.dtype(numpy.uint32).newbyteorder('>') return numpy.frombuffer(bytestream.read(4), dtype=dt) ``` New version: ``` def _read32(bytestream): dt = numpy.dtype(numpy.uint32).newbyteorder('>') return numpy.frombuffer(bytestream.read(4), dtype=dt)[0] ``` Add [0] at the end. This appears to be an issue with the latest version of NumPy.
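The failure can be reproduced without the MNIST files at all; a minimal sketch (the 4-byte string below is made up to mimic what _read32 reads from the gzip stream): ```python import numpy as np # What the original _read32 returns: a 1-element array, not a scalar. num = np.frombuffer(b'\x00\x00\x00\x04', dtype=np.dtype(np.uint32).newbyteorder('>')) print(num) # [4] data = list(range(10)) try: data[num * 2] # a 1-element array used where a plain integer index is required except TypeError as e: print(e) # only integer scalar arrays can be converted to a scalar index print(data[num[0] * 2]) # indexing with the extracted scalar works: 8 ``` Indexing (and gzip's internal buffer arithmetic) needs a true scalar, which is exactly what the [0] in the fixed _read32 extracts.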
A recent change made it an error to treat a single-element array as a scalar for the purposes of indexing.", "best_answers_score":0.7796, "library_name":"tensorflow", "question_url":"https:\/\/stackoverflow.com\/questions\/42128830\/typeerror-only-integer-scalar-arrays-can-be-converted-to-a-scalar-index", "best_answers_votes":36, "question_length":2016, "response_length":505 }, { "question":"How to import keras from tf.keras in Tensorflow? ``` import tensorflow as tf import tensorflow from tensorflow import keras from keras.layers import Dense ``` I am getting the below error ``` from keras.layers import Input, Dense Traceback (most recent call last): File \"\", line 1, in from keras.layers import Input, Dense ModuleNotFoundError: No module named 'keras' ``` How do I solve this? Note: I am using Tensorflow version 1.4", "response":"Use the keras module from tensorflow like this: ``` import tensorflow as tf ``` Import classes: ``` from tensorflow.python.keras.layers import Input, Dense ``` or use them directly: ``` dense = tf.keras.layers.Dense(...) ``` EDIT For TensorFlow 2: ``` from tensorflow.keras.layers import Input, Dense ``` and the rest stays the same.", "best_answers_score":0.7791, "library_name":"tensorflow", "question_url":"https:\/\/stackoverflow.com\/questions\/47262955\/how-to-import-keras-from-tf-keras-in-tensorflow", "best_answers_votes":134, "question_length":433, "response_length":289 }, { "question":"How to permutate tranposition in tensorflow? From the docs: Transposes a. Permutes the dimensions according to perm. The returned tensor's dimension i will correspond to the input dimension perm[i]. If perm is not given, it is set to (n-1...0), where n is the rank of the input tensor. Hence by default, this operation performs a regular matrix transpose on 2-D input Tensors. But it's still a little unclear to me how I should be slicing the input tensor. E.g.
from the docs too: ``` tf.transpose(x, perm=[0, 2, 1]) ==> [[[1 4] [2 5] [3 6]] [[7 10] [8 11] [9 12]]] ``` Why is it that perm=[0,2,1] produces a 1x3x2 tensor? After some trial and error: ``` twothreefour = np.array([ [[1,2,3,4], [5,6,7,8], [9,10,11,12]] , [[13,14,15,16], [17,18,19,20], [21,22,23,24]] ]) twothreefour ``` [out]: ``` array([[[ 1, 2, 3, 4], [ 5, 6, 7, 8], [ 9, 10, 11, 12]], [[13, 14, 15, 16], [17, 18, 19, 20], [21, 22, 23, 24]]]) ``` And if I transpose it: ``` fourthreetwo = tf.transpose(twothreefour) with tf.Session() as sess: init = tf.initialize_all_variables() sess.run(init) print (fourthreetwo.eval()) ``` I get a 4x3x2 to a 2x3x4 and that sounds logical. [out]: ``` [[[ 1 13] [ 5 17] [ 9 21]] [[ 2 14] [ 6 18] [10 22]] [[ 3 15] [ 7 19] [11 23]] [[ 4 16] [ 8 20] [12 24]]] ``` But when I use the perm parameter the output, I'm not sure what I'm really getting: ``` twofourthree = tf.transpose(twothreefour, perm=[0,2,1]) with tf.Session() as sess: init = tf.initialize_all_variables() sess.run(init) print (threetwofour.eval()) ``` [out]: ``` [[[ 1 5 9] [ 2 6 10] [ 3 7 11] [ 4 8 12]] [[13 17 21] [14 18 22] [15 19 23] [16 20 24]]] ``` Why does perm=[0,2,1] returns a 2x4x3 matrix from a 2x3x4 ? Trying it again with perm=[1,0,2]: ``` threetwofour = tf.transpose(twothreefour, perm=[1,0,2]) with tf.Session() as sess: init = tf.initialize_all_variables() sess.run(init) print (threetwofour.eval()) ``` [out]: ``` [[[ 1 2 3 4] [13 14 15 16]] [[ 5 6 7 8] [17 18 19 20]] [[ 9 10 11 12] [21 22 23 24]]] ``` Why does perm=[1,0,2] return a 3x2x4 from a 2x3x4? Does it mean that the perm parameter is taking my np.shape and transposing the tensor based on the elements based on my array shape? I.e. 
: ``` _size = (2, 4, 3, 5) randarray = np.random.randint(5, size=_size) shape_idx = {i:_s for i, _s in enumerate(_size)} randarray_t_func = tf.transpose(randarray, perm=[3,0,2,1]) with tf.Session() as sess: init = tf.initialize_all_variables() sess.run(init) tranposed_array = randarray_t_func.eval() print (tranposed_array.shape) print (tuple(shape_idx[_s] for _s in [3,0,2,1])) ``` [out]: ``` (5, 2, 3, 4) (5, 2, 3, 4) ```", "response":"I think perm is permuting the dimensions. For example perm=[0,2,1] is short for dim_0 -> dim_0, dim_1 -> dim_2, dim_2 -> dim_1. So for a 2D tensor, perm=[1,0] is just matrix transpose. Does this answer your question?", "best_answers_score":0.7787, "library_name":"tensorflow", "question_url":"https:\/\/stackoverflow.com\/questions\/38517533\/how-to-permutate-tranposition-in-tensorflow", "best_answers_votes":38, "question_length":2605, "response_length":216 }, { "question":"How can I make tensorflow run on a GPU with capability 2.x? I've successfully installed tensorflow (GPU) on Linux Ubuntu 16.04 and made some small changes in order to make it work with the new Ubuntu LTS release. However, I thought (who knows why) that my GPU met the minimum requirement of a compute capability greater than 3.5. That was not the case since my GeForce 820M has just 2.1. Is there a way of making the tensorflow GPU version work with my GPU? I am asking this question since apparently there was no way of making the tensorflow GPU version work on Ubuntu 16.04, but by searching the internet I found out that was not the case and indeed I made it almost work were it not for this unsatisfied requirement. Now I am wondering if this issue with GPU compute capability could be fixed as well.", "response":"Recent GPU versions of tensorflow require compute capability 3.5 or higher (and use cuDNN to access the GPU). cuDNN also requires a GPU of cc3.0 or higher: cuDNN is supported on Windows, Linux and MacOS systems with Pascal, Kepler, Maxwell, Tegra K1 or Tegra X1 GPUs.
Kepler = cc3.x Maxwell = cc5.x Pascal = cc6.x TK1 = cc3.2 TX1 = cc5.3 Fermi GPUs (cc2.0, cc2.1) are not supported by cuDNN. Older GPUs (e.g. compute capability 1.x) are also not supported by cuDNN. Note that there has never been either a version of cuDNN or any version of TF that officially supported NVIDIA GPUs less than cc3.0. The initial version of cuDNN started out by requiring cc3.0 GPUs, and the initial version of TF started out by requiring cc3.0 GPUs.", "best_answers_score":0.777, "library_name":"tensorflow", "question_url":"https:\/\/stackoverflow.com\/questions\/38542763\/how-can-i-make-tensorflow-run-on-a-gpu-with-capability-2-x", "best_answers_votes":30, "question_length":801, "response_length":730 }, { "question":"How does data normalization work in keras during prediction? I see that the imageDataGenerator allows me to specify different styles of data normalization, e.g. featurewise_center, samplewise_center, etc. I see from the examples that if I specify one of these options, then I need to call the fit method on the generator in order to allow the generator to compute statistics like the mean image on the generator. ``` (X_train, y_train), (X_test, y_test) = cifar10.load_data() Y_train = np_utils.to_categorical(y_train, nb_classes) Y_test = np_utils.to_categorical(y_test, nb_classes) datagen = ImageDataGenerator( featurewise_center=True, featurewise_std_normalization=True, rotation_range=20, width_shift_range=0.2, height_shift_range=0.2, horizontal_flip=True) # compute quantities required for featurewise normalization # (std, mean, and principal components if ZCA whitening is applied) datagen.fit(X_train) # fits the model on batches with real-time data augmentation: model.fit_generator(datagen.flow(X_train, Y_train, batch_size=32), samples_per_epoch=len(X_train), nb_epoch=nb_epoch) ``` My question is, how does prediction work if I have specified data normalization during training? 
I can't see how in the framework I would even pass knowledge of the training set mean\/std deviation along to predict to allow me to normalize my test data myself, but I also don't see in the training code where this information is stored. Are the image statistics needed for normalization stored in the model so that they can be used during prediction?", "response":"Use the standardize method of the generator for each element. Here is a complete example for CIFAR 10: ``` #!\/usr\/bin\/env python import keras from keras.datasets import cifar10 from keras.preprocessing.image import ImageDataGenerator from keras.models import Sequential from keras.layers import Dense, Dropout, Flatten from keras.layers import Conv2D, MaxPooling2D # input image dimensions img_rows, img_cols, img_channels = 32, 32, 3 num_classes = 10 batch_size = 32 epochs = 1 # The data, shuffled and split between train and test sets: (x_train, y_train), (x_test, y_test) = cifar10.load_data() print(x_train.shape[0], 'train samples') print(x_test.shape[0], 'test samples') # Convert class vectors to binary class matrices. 
y_train = keras.utils.to_categorical(y_train, num_classes) y_test = keras.utils.to_categorical(y_test, num_classes) model = Sequential() model.add(Conv2D(32, (3, 3), padding='same', activation='relu', input_shape=x_train.shape[1:])) model.add(Conv2D(32, (3, 3), activation='relu')) model.add(MaxPooling2D(pool_size=(2, 2))) model.add(Dropout(0.25)) model.add(Conv2D(64, (3, 3), padding='same', activation='relu')) model.add(Conv2D(64, (3, 3), activation='relu')) model.add(MaxPooling2D(pool_size=(2, 2))) model.add(Dropout(0.25)) model.add(Flatten()) model.add(Dense(512, activation='relu')) model.add(Dropout(0.5)) model.add(Dense(num_classes, activation='softmax')) model.compile(loss='categorical_crossentropy', optimizer='rmsprop', metrics=['accuracy']) x_train = x_train.astype('float32') x_test = x_test.astype('float32') x_train \/= 255 x_test \/= 255 datagen = ImageDataGenerator(zca_whitening=True) # Compute principal components required for ZCA datagen.fit(x_train) # Apply normalization (ZCA and others) print(x_test.shape) for i in range(len(x_test)): # this is what you are looking for x_test[i] = datagen.standardize(x_test[i]) print(x_test.shape) # Fit the model on the batches generated by datagen.flow(). model.fit_generator(datagen.flow(x_train, y_train, batch_size=batch_size), steps_per_epoch=x_train.shape[0] \/\/ batch_size, epochs=epochs, validation_data=(x_test, y_test)) ```", "best_answers_score":0.776, "library_name":"tensorflow", "question_url":"https:\/\/stackoverflow.com\/questions\/41855512\/how-does-data-normalization-work-in-keras-during-prediction", "best_answers_votes":24, "question_length":1545, "response_length":2124 }, { "question":"InvalidArgumentError: cannot compute MatMul as input #0(zero-based) was expected to be a float tensor but is a double tensor [Op:MatMul] Can somebody explain, how does TensorFlow's eager mode work? 
I am trying to build a simple regression as follows: ```py import tensorflow as tf import numpy as np tfe = tf.contrib.eager tf.enable_eager_execution() def make_model(): net = tf.keras.Sequential() net.add(tf.keras.layers.Dense(4, activation='relu')) net.add(tf.keras.layers.Dense(1)) return net def compute_loss(pred, actual): return tf.reduce_mean(tf.square(tf.subtract(pred, actual))) def compute_gradient(model, pred, actual): \"\"\"compute gradients with given noise and input\"\"\" with tf.GradientTape() as tape: loss = compute_loss(pred, actual) grads = tape.gradient(loss, model.variables) return grads, loss def apply_gradients(optimizer, grads, model_vars): optimizer.apply_gradients(zip(grads, model_vars)) model = make_model() optimizer = tf.train.AdamOptimizer(1e-4) x = np.linspace(0,1,1000) y = x + np.random.normal(0,0.3,1000) y = y.astype('float32') train_dataset = tf.data.Dataset.from_tensor_slices((y.reshape(-1,1))) epochs = 2# 10 batch_size = 25 itr = y.shape[0] # batch_size for epoch in range(epochs): for data in tf.contrib.eager.Iterator(train_dataset.batch(25)): preds = model(data) grads, loss = compute_gradient(model, preds, data) apply_gradients(optimizer, grads, model.variables) # Gradient output: [None, None, None, None, None, None] ``` The error is following: ```none ---------------------------------------------------------------------- ValueError Traceback (most recent call last) in 35 grads, loss = compute_gradient(model, preds, data) 36 print(grads) ---> 37 apply_gradients(optimizer, grads, model.variables) 38 # with tf.GradientTape() as tape: 39 # loss = tf.sqrt(tf.reduce_mean(tf.square(tf.subtract(preds, data)))) in apply_gradients(optimizer, grads, model_vars) 17 18 def apply_gradients(optimizer, grads, model_vars): ---> 19 optimizer.apply_gradients(zip(grads, model_vars)) 20 21 model = make_model() ~\/anaconda3\/lib\/python3.6\/site-packages\/tensorflow\/python\/training\/optimizer.py in apply_gradients(self, grads_and_vars, global_step, 
name) 589 if not var_list: 590 raise ValueError(\"No gradients provided for any variable: %s.\" % --> 591 ([str(v) for _, v, _ in converted_grads_and_vars],)) 592 with ops.init_scope(): 593 self._create_slots(var_list) ValueError: No gradients provided for any variable: ``` Edit I updated my code. Now, the problem comes in gradients calculation, it is returning zero. I have checked the loss value that is non-zero.", "response":"Part 1: The problem is indeed the datatype of your input. By default your keras model expects float32 but you are passing a float64. You can either change the dtype of the model or change the input to float32. To change your model: ``` def make_model(): net = tf.keras.Sequential() net.add(tf.keras.layers.Dense(4, activation='relu', dtype='float32')) net.add(tf.keras.layers.Dense(4, activation='relu')) net.add(tf.keras.layers.Dense(1)) return net ``` To change your input: y = y.astype('float32') Part 2: You need to call the function that computes your model (i.e. model(data)) under tf.GradientTape(). 
For example, you can replace your compute_loss method with the following: ``` def compute_loss(model, x, y): pred = model(x) return tf.reduce_mean(tf.square(tf.subtract(pred, y))) ```", "best_answers_score":0.7759, "library_name":"tensorflow", "question_url":"https:\/\/stackoverflow.com\/questions\/54255431\/invalidargumenterror-cannot-compute-matmul-as-input-0zero-based-was-expected", "best_answers_votes":49, "question_length":2599, "response_length":790 }, { "question":"Tensorflow._api.v2.train has no attribute 'AdamOptimizer' When using ``` model.compile(optimizer = tf.train.AdamOptimizer(), loss = 'sparse_categorical_crossentropy', metrics=['accuracy']) ``` in my Jupyter Notebook the following Error pops up: module 'tensorflow._api.v2.train' has no attribute 'AdamOptimizer' Tensorflow Version: 2.0.0-alpha0 Do you think the only possibility is to downgrade the TF version?", "response":"```py tf.train.AdamOptimizer() => tf.optimizers.Adam() ``` From https:\/\/www.tensorflow.org\/versions\/r2.0\/api_docs\/python\/tf\/optimizers", "best_answers_score":0.7758, "library_name":"tensorflow", "question_url":"https:\/\/stackoverflow.com\/questions\/55318273\/tensorflow-api-v2-train-has-no-attribute-adamoptimizer", "best_answers_votes":94, "question_length":410, "response_length":134 }, { "question":"Why the 6 in relu6? I've hacked a deep feed forward NN from scratch in R, and it seems more stable with \"hard sigmoid\" activations - max(0,min(1,x)) - than ReLU. Trying to port it to TensorFlow, and noticed that they don't have this activation function built in, only relu6, which uses an upper cutoff at 6. Is there a reason for this? (I realize that you could do relu6(x*6)\/6, but if the TF guys put the 6 there for a good reason, I'd like to know.) Also, I'd like to know if others have explosion problems with ReLU in feed forward nets (I'm aware of RNN issues).", "response":"From this reddit thread: This is useful in making the networks ready for fixed-point inference. 
If you unbound the upper limit, you lose too many bits to the Q part of a Q.f number. Keeping the ReLUs bounded by 6 will let them take a max of 3 bits (upto 8) leaving 4\/5 bits for .f It seems, then, that 6 is just an arbitrary value chosen according to the number of bits you want to be able to compress your network's trained parameters into. As per the \"why\" only the version with value 6 is implemented, I assume it's because that's the value that fits best in 8 bits, which probably is the most common use-case.", "best_answers_score":0.7738, "library_name":"tensorflow", "question_url":"https:\/\/stackoverflow.com\/questions\/47220595\/why-the-6-in-relu6", "best_answers_votes":78, "question_length":566, "response_length":613 }, { "question":"How to inspect a Tensorflow .tfrecord file? I have a .tfrecord but I don't know how it is structured. How can I inspect the schema to understand what the .tfrecord file contains? All Stackoverflow answers or documentation seem to assume I know the structure of the file. ``` reader = tf.TFRecordReader() file = tf.train.string_input_producer(\"record.tfrecord\") _, serialized_record = reader.read(file) ...HOW TO INSPECT serialized_record... ```", "response":"Found it! ``` import tensorflow as tf for example in tf.python_io.tf_record_iterator(\"data\/foobar.tfrecord\"): print(tf.train.Example.FromString(example)) ``` You can also add: ``` from google.protobuf.json_format import MessageToJson ... jsonMessage = MessageToJson(tf.train.Example.FromString(example)) ```", "best_answers_score":0.7714, "library_name":"tensorflow", "question_url":"https:\/\/stackoverflow.com\/questions\/42394585\/how-to-inspect-a-tensorflow-tfrecord-file", "best_answers_votes":139, "question_length":444, "response_length":307 }, { "question":"What is the purpose of tf.global_variables_initializer? I would like to understand what tf.global_variables_initializer does in a bit more detail. 
A sparse description is given here: Returns an Op that initializes global variables. But that doesn't really help me. I know that the op is necessary to initialize the graph, but what does that actually mean? Is this the step where the graph is compiled?", "response":"A more complete description is given here. Only after running tf.global_variables_initializer() in a session will your variables hold the values you told them to hold when you declared them (tf.Variable(tf.zeros(...)), tf.Variable(tf.random_normal(...)), ...). From the TF doc: Calling tf.Variable() adds several ops to the graph: A variable op that holds the variable value. An initializer op that sets the variable to its initial value. This is actually a tf.assign op. The ops for the initial value, such as the zeros op for the biases variable in the example, are also added to the graph. And also: Variable initializers must be run explicitly before other ops in your model can be run. The easiest way to do that is to add an op that runs all the variable initializers, and run that op before using the model.", "best_answers_score":0.7712, "library_name":"tensorflow", "question_url":"https:\/\/stackoverflow.com\/questions\/44433438\/what-is-the-purpose-of-tf-global-variables-initializer", "best_answers_votes":59, "question_length":401, "response_length":812 }, { "question":"How to input a list of lists with different sizes in tf.data.Dataset I have a long list of lists of integers (representing sentences, each one of different sizes) that I want to feed using the tf.data library. Each list (of the list of lists) has a different length, and I get an error, which I can reproduce here: ``` t = [[4,2], [3,4,5]] dataset = tf.data.Dataset.from_tensor_slices(t) ``` The error I get is: ``` ValueError: Argument must be a dense tensor: [[4, 2], [3, 4, 5]] - got shape [2], but wanted [2, 2]. ``` Is there a way to do this?
EDIT 1: Just to be clear, I don't want to pad the input list of lists (it's a list of sentences containing over a million elements, with varying lengths). I want to use the tf.data library to feed, in a proper way, a list of lists with varying lengths.", "response":"You can use tf.data.Dataset.from_generator() to convert any iterable Python object (like a list of lists) into a Dataset: ``` t = [[4, 2], [3, 4, 5]] dataset = tf.data.Dataset.from_generator(lambda: t, tf.int32, output_shapes=[None]) iterator = dataset.make_one_shot_iterator() next_element = iterator.get_next() with tf.Session() as sess: print(sess.run(next_element)) # ==> '[4, 2]' print(sess.run(next_element)) # ==> '[3, 4, 5]' ```", "best_answers_score":0.7679, "library_name":"tensorflow", "question_url":"https:\/\/stackoverflow.com\/questions\/47580716\/how-to-input-a-list-of-lists-with-different-sizes-in-tf-data-dataset", "best_answers_votes":27, "question_length":796, "response_length":436 }, { "question":"Reset weights in Keras layer I'd like to reset (randomize) the weights of all layers in my Keras (deep learning) model. The reason is that I want to be able to train the model several times with different data splits without having to do the (slow) model recompilation every time. Inspired by this discussion, I'm trying the following code: ``` # Reset weights for layer in KModel.layers: if hasattr(layer,'init'): input_dim = layer.input_shape[1] new_weights = layer.init((input_dim, layer.output_dim),name='{}_W'.format(layer.name)) layer.trainable_weights[0].set_value(new_weights.get_value()) ``` However, it only partly works. Partly, because I've inspected some layer.get_weights() values, and they seem to change. But when I restart the training, the cost values are much lower than the initial cost values on the first run.
It's almost like I've succeeded resetting some of the weights, but not all of them.", "response":"Save the initial weights right after compiling the model but before training it: ``` model.save_weights('model.h5') ``` and then after training, \"reset\" the model by reloading the initial weights: ``` model.load_weights('model.h5') ``` This gives you an apples to apples model to compare different data sets and should be quicker than recompiling the entire model.", "best_answers_score":0.7678, "library_name":"tensorflow", "question_url":"https:\/\/stackoverflow.com\/questions\/40496069\/reset-weights-in-keras-layer", "best_answers_votes":72, "question_length":915, "response_length":364 }, { "question":"Installing Python3.6 alongside Python3.7 on Mac I'm trying to install tensorflow onto a Mac with Python3.7. However, I'm getting the error: ``` $ pip3 -v install tensorflow ... Skipping link https:\/\/files.pythonhosted.org\/packages\/56\/7a\/c6bca0fe52a94ca508731d8b139e7dbd5a36cddc64c19f422f97e5a853e8\/tensorflow-1.10.0rc1-cp36-cp36m-win_amd64.whl#sha256=3ab24374888d6a13d55ce2e3cf4ba0c9cd6f824723313db5322512087525cb78 (from https:\/\/pypi.org\/simple\/tensorflow\/); it is not compatible with this Python Could not find a version that satisfies the requirement tensorflow (from versions: ) Cleaning up... Removed build tracker '\/private\/var\/folders\/4n\/9342s4wd3jv0qzwjz8rxrygr0000gp\/T\/pip-req-tracker-3p60r2lo' No matching distribution found for tensorflow ``` From what I can gather this is happening because tensorflow doesn't yet support Python3.7. As a workaround I want to install Python3.6 alongside 3.7 and then install tensorflow to that version. However, I'm new to Mac and not sure of the correct way to do this without potentially messing with the preexisting Python version. I've tried using brew, but it looks like Python3 is as specific as it gets. 
What is the correct way to do what I'm after?", "response":"Try using brew for example if already using Python 3: ``` $ brew unlink python ``` Then install python 3.6.5: ``` $ brew install --ignore-dependencies https:\/\/raw.githubusercontent.com\/Homebrew\/homebrew-core\/f2a764ef944b1080be64bd88dca9a1d80130c558\/Formula\/python.rb ``` To get back to python 3.7.4_1 use: ``` $ brew switch python 3.7.4_1 ``` And if need 3.6 again switch with: ``` $ brew switch python 3.6.5_1 ```", "best_answers_score":0.7663, "library_name":"tensorflow", "question_url":"https:\/\/stackoverflow.com\/questions\/51726203\/installing-python3-6-alongside-python3-7-on-mac", "best_answers_votes":154, "question_length":1201, "response_length":414 }, { "question":"TensorFlow libdevice not found. Why is it not found in the searched path? Win 10 64-bit 21H1; TF2.5, CUDA 11 installed in environment (Python 3.9.5 Xeus) I am not the only one seeing this error; see also (unanswered) here and here. The issue is obscure and the proposed resolutions are unclear\/don't seem to work (see e.g. here) Issue Using the TF Linear_Mixed_Effects_Models.ipynb example (download from TensorFlow github here) execution reaches the point of performing the \"warm up stage\" then throws the error: ``` InternalError: libdevice not found at .\/libdevice.10.bc [Op:__inference_one_e_step_2806] ``` The console contains this output showing that it finds the GPU but XLA initialisation fails to find the - existing! - libdevice in the specified paths ``` 2021-08-01 22:04:36.691300: I tensorflow\/core\/common_runtime\/gpu\/gpu_device.cc:1418] Created TensorFlow device (\/job:localhost\/replica:0\/task:0\/device:GPU:0 with 9623 MB memory) -> physical GPU (device: 0, name: NVIDIA GeForce GTX 1080 Ti, pci bus id: 0000:01:00.0, compute capability: 6.1) 2021-08-01 22:04:37.080007: W tensorflow\/python\/util\/util.cc:348] Sets are not currently considered sequences, but this may change in the future, so consider avoiding using them. 
2021-08-01 22:04:54.122528: I tensorflow\/compiler\/xla\/service\/service.cc:169] XLA service 0x1d724940130 initialized for platform CUDA (this does not guarantee that XLA will be used). Devices: 2021-08-01 22:04:54.127766: I tensorflow\/compiler\/xla\/service\/service.cc:177] StreamExecutor device (0): NVIDIA GeForce GTX 1080 Ti, Compute Capability 6.1 2021-08-01 22:04:54.215072: W tensorflow\/compiler\/tf2xla\/kernels\/random_ops.cc:241] Warning: Using tf.random.uniform with XLA compilation will ignore seeds; consider using tf.random.stateless_uniform instead if reproducible behavior is desired. 2021-08-01 22:04:55.506464: W tensorflow\/compiler\/xla\/service\/gpu\/nvptx_compiler.cc:73] Can't find libdevice directory ${CUDA_DIR}\/nvvm\/libdevice. This may result in compilation or runtime failures, if the program we try to run uses routines from libdevice. 2021-08-01 22:04:55.512876: W tensorflow\/compiler\/xla\/service\/gpu\/nvptx_compiler.cc:74] Searched for CUDA in the following directories: 2021-08-01 22:04:55.517387: W tensorflow\/compiler\/xla\/service\/gpu\/nvptx_compiler.cc:77] C:\/Users\/Julian\/anaconda3\/envs\/TF250_PY395_xeus\/Library\/bin 2021-08-01 22:04:55.520773: W tensorflow\/compiler\/xla\/service\/gpu\/nvptx_compiler.cc:77] C:\/Program Files\/NVIDIA GPU Computing Toolkit\/CUDA\/v11.2 2021-08-01 22:04:55.524125: W tensorflow\/compiler\/xla\/service\/gpu\/nvptx_compiler.cc:77] . 2021-08-01 22:04:55.526349: W tensorflow\/compiler\/xla\/service\/gpu\/nvptx_compiler.cc:79] You can choose the search directory by setting xla_gpu_cuda_data_dir in HloModule's DebugOptions. For most apps, setting the environment variable XLA_FLAGS=--xla_gpu_cuda_data_dir=\/path\/to\/cuda will work. ``` Now the interesting thing is that the paths searched includes \"C:\/Users\/Julian\/anaconda3\/envs\/TF250_PY395_xeus\/Library\/bin\" the content of that folder includes all the (successfully loaded at TF startup) DLLs, including cudart64_110.dll, dudnn64_8.dll... 
and of course libdevice.10.bc Question Since TF says it is searching this location for this file and the file exists there, what is wrong and how do I fix it? (NB C:\/Program Files\/NVIDIA GPU Computing Toolkit\/CUDA\/v11.2 does not exist... CUDA is installed in the environment; this path must be a best guess for an OS installation) Info: I am setting the path by ``` aPath = '--xla_gpu_cuda_data_dir=C:\/Users\/Julian\/anaconda3\/envs\/TF250_PY395_xeus\/Library\/bin' print(aPath) os.environ['XLA_FLAGS'] = aPath ``` but I have also set an OS environment variable XLA_FLAGS to the same string value... I don't know which one is actually working yet, but the fact that the console output says it searched the intended path is good enough", "response":"The following worked for me. With error message: ``` error: Can't find libdevice directory ${CUDA_DIR}\/nvvm\/libdevice ``` First, I searched for the nvvm directory and then verified that the libdevice directory existed: ``` $ find \/ -type d -name nvvm 2>\/dev\/null \/usr\/lib\/cuda\/nvvm $ cd \/usr\/lib\/cuda\/nvvm \/usr\/lib\/cuda\/nvvm$ ls libdevice \/usr\/lib\/cuda\/nvvm$ cd libdevice \/usr\/lib\/cuda\/nvvm\/libdevice$ ls libdevice.10.bc ``` Then I exported the environment variable: ``` export XLA_FLAGS=--xla_gpu_cuda_data_dir=\/usr\/lib\/cuda ``` as shown by @Insectatorious above. This solved the error and I was able to run the code.", "best_answers_score":0.7657, "library_name":"tensorflow", "question_url":"https:\/\/stackoverflow.com\/questions\/68614547\/tensorflow-libdevice-not-found-why-is-it-not-found-in-the-searched-path", "best_answers_votes":36, "question_length":3885, "response_length":611 }, { "question":"Meaning of buffer_size in Dataset.map, Dataset.prefetch and Dataset.shuffle As per the TensorFlow documentation, the prefetch and map methods of the tf.contrib.data.Dataset class both have a parameter called buffer_size.
For the prefetch method, the parameter is known as buffer_size and according to the documentation: buffer_size: A tf.int64 scalar tf.Tensor, representing the maximum number of elements that will be buffered when prefetching. For the map method, the parameter is known as output_buffer_size and according to the documentation: output_buffer_size: (Optional.) A tf.int64 scalar tf.Tensor, representing the maximum number of processed elements that will be buffered. Similarly for the shuffle method, the same quantity appears and according to the documentation: buffer_size: A tf.int64 scalar tf.Tensor, representing the number of elements from this dataset from which the new dataset will sample. What is the relation between these parameters? Suppose I create a Dataset object as follows: ``` tr_data = TFRecordDataset(trainfilenames) tr_data = tr_data.map(providefortraining, output_buffer_size=10 * trainbatchsize, num_parallel_calls\\ =5) tr_data = tr_data.shuffle(buffer_size= 100 * trainbatchsize) tr_data = tr_data.prefetch(buffer_size = 10 * trainbatchsize) tr_data = tr_data.batch(trainbatchsize) ``` What role is being played by the buffer parameters in the above snippet?", "response":"TL;DR Despite their similar names, these arguments have quite different meanings. The buffer_size in Dataset.shuffle() can affect the randomness of your dataset, and hence the order in which elements are produced. The buffer_size in Dataset.prefetch() only affects the time it takes to produce the next element. The buffer_size argument in tf.data.Dataset.prefetch() and the output_buffer_size argument in tf.contrib.data.Dataset.map() provide a way to tune the performance of your input pipeline: both arguments tell TensorFlow to create a buffer of at most buffer_size elements, and a background thread to fill that buffer in the background. (Note that we removed the output_buffer_size argument from Dataset.map() when it moved from tf.contrib.data to tf.data.
New code should use Dataset.prefetch() after map() to get the same behavior.) Adding a prefetch buffer can improve performance by overlapping the preprocessing of data with downstream computation. Typically it is most useful to add a small prefetch buffer (with perhaps just a single element) at the very end of the pipeline, but more complex pipelines can benefit from additional prefetching, especially when the time to produce a single element can vary. By contrast, the buffer_size argument to tf.data.Dataset.shuffle() affects the randomness of the transformation. We designed the Dataset.shuffle() transformation (like the tf.train.shuffle_batch() function that it replaces) to handle datasets that are too large to fit in memory. Instead of shuffling the entire dataset, it maintains a buffer of buffer_size elements, and randomly selects the next element from that buffer (replacing it with the next input element, if one is available). Changing the value of buffer_size affects how uniform the shuffling is: if buffer_size is greater than the number of elements in the dataset, you get a uniform shuffle; if it is 1 then you get no shuffling at all. For very large datasets, a typical \"good enough\" approach is to randomly shard the data into multiple files once before training, then shuffle the filenames uniformly, and then use a smaller shuffle buffer. 
However, the appropriate choice will depend on the exact nature of your training job.", "best_answers_score":0.765, "library_name":"tensorflow", "question_url":"https:\/\/stackoverflow.com\/questions\/46444018\/meaning-of-buffer-size-in-dataset-map-dataset-prefetch-and-dataset-shuffle", "best_answers_votes":188, "question_length":1382, "response_length":2216 }, { "question":"Tensorflow mean squared error loss function I have seen a few different mean squared error loss functions in various posts for regression models in Tensorflow: ``` loss = tf.reduce_sum(tf.pow(prediction - Y,2))\/(n_instances) loss = tf.reduce_mean(tf.squared_difference(prediction, Y)) loss = tf.nn.l2_loss(prediction - Y) ``` What are the differences between these?", "response":"The first and the second loss functions calculate the same thing, but in a slightly different way. The third function calculates something completely different. You can see this by executing this code: ``` import tensorflow as tf shape_obj = (5, 5) shape_obj = (100, 6, 12) Y1 = tf.random_normal(shape=shape_obj) Y2 = tf.random_normal(shape=shape_obj) loss1 = tf.reduce_sum(tf.pow(Y1 - Y2, 2)) \/ (reduce(lambda x, y: x*y, shape_obj)) loss2 = tf.reduce_mean(tf.squared_difference(Y1, Y2)) loss3 = tf.nn.l2_loss(Y1 - Y2) with tf.Session() as sess: print sess.run([loss1, loss2, loss3]) # when I run it I got: [2.0291963, 2.0291963, 7305.1069] ``` Now you can verify that the 1st and 2nd calculate the same thing (in theory) by noticing that tf.pow(a - b, 2) is the same as tf.squared_difference(a, b). Also reduce_mean is the same as reduce_sum \/ number_of_elements. The thing is that computers can't calculate everything exactly.
To see what numerical instabilities can do to your calculations take a look at this: ``` import tensorflow as tf shape_obj = (5000, 5000, 10) Y1 = tf.zeros(shape=shape_obj) Y2 = tf.ones(shape=shape_obj) loss1 = tf.reduce_sum(tf.pow(Y1 - Y2, 2)) \/ (reduce(lambda x, y: x*y, shape_obj)) loss2 = tf.reduce_mean(tf.squared_difference(Y1, Y2)) with tf.Session() as sess: print sess.run([loss1, loss2]) ``` It is easy to see that the answer should be 1, but you will get something like this: [1.0, 0.26843545]. Regarding your last function, the documentation says that: Computes half the L2 norm of a tensor without the sqrt: output = sum(t ** 2) \/ 2 So if you want it to calculate the same thing (in theory) as the first one you need to scale it appropriately: ``` loss3 = tf.nn.l2_loss(Y1 - Y2) * 2 \/ (reduce(lambda x, y: x*y, shape_obj)) ```", "best_answers_score":0.7647, "library_name":"tensorflow", "question_url":"https:\/\/stackoverflow.com\/questions\/41338509\/tensorflow-mean-squared-error-loss-function", "best_answers_votes":16, "question_length":365, "response_length":1768 }, { "question":"What is num_units in tensorflow BasicLSTMCell? In MNIST LSTM examples, I don't understand what \"hidden layer\" means. Is it the imaginary-layer formed when you represent an unrolled RNN over time? Why is the num_units = 128 in most cases ?", "response":"The number of hidden units is a direct representation of the learning capacity of a neural network -- it reflects the number of learned parameters. The value 128 was likely selected arbitrarily or empirically. You can change that value experimentally and rerun the program to see how it affects the training accuracy (you can get better than 90% test accuracy with a lot fewer hidden units). Using more units makes it more likely to perfectly memorize the complete training set (although it will take longer, and you run the risk of over-fitting). 
The key thing to understand, which is somewhat subtle in the famous Colah's blog post (find \"each line carries an entire vector\"), is that X is an array of data (nowadays often called a tensor) -- it is not meant to be a scalar value. Where, for example, the tanh function is shown, it is meant to imply that the function is broadcast across the entire array (an implicit for loop) -- and not simply performed once per time-step. As such, the hidden units represent tangible storage within the network, which is manifest primarily in the size of the weights array. And because an LSTM actually does have a bit of its own internal storage separate from the learned model parameters, it has to know how many units there are -- which ultimately needs to agree with the size of the weights. In the simplest case, an RNN has no internal storage -- so it doesn't even need to know in advance how many \"hidden units\" it is being applied to. A good answer to a similar question here. You can look at the source for BasicLSTMCell in TensorFlow to see exactly how this is used. Side note: This notation is very common in statistics and machine-learning, and other fields that process large batches of data with a common formula (3D graphics is another example). It takes a bit of getting used to for people who expect to see their for loops written out explicitly.", "best_answers_score":0.7633, "library_name":"tensorflow", "question_url":"https:\/\/stackoverflow.com\/questions\/37901047\/what-is-num-units-in-tensorflow-basiclstmcell", "best_answers_votes":46, "question_length":238, "response_length":1903 }, { "question":"What is SYCL 1.2? I am trying to install tensorflow ``` Please specify the location where ComputeCpp for SYCL 1.2 is installed. [Default is \/usr\/local\/computecpp]: Invalid SYCL 1.2 library path. \/usr\/local\/computecpp\/lib\/libComputeCpp.so cannot be found ``` What should I do? What is SYCL 1.2?", "response":"SYCL is a C++ abstraction layer for OpenCL.
TensorFlow's experimental support for OpenCL uses SYCL, in conjunction with a SYCL-aware C++ compiler. As Yaroslav pointed out in his comment, SYCL is only required if you are building TensorFlow with OpenCL support. The following question during the execution of .\/configure asks about OpenCL support: ``` Do you wish to build TensorFlow with OpenCL support? [y\/N] ``` If you answer N, you will not have to supply a SYCL path.", "best_answers_score":0.763, "library_name":"tensorflow", "question_url":"https:\/\/stackoverflow.com\/questions\/41831214\/what-is-sycl-1-2", "best_answers_votes":30, "question_length":292, "response_length":471 }, { "question":"Making predictions with a TensorFlow model I followed the given mnist tutorials and was able to train a model and evaluate its accuracy. However, the tutorials don't show how to make predictions given a model. I'm not interested in accuracy, I just want to use the model to predict a new example and in the output see all the results (labels), each with its assigned score (sorted or not).", "response":"In the \"Deep MNIST for Experts\" example, see this line: We can now implement our regression model. It only takes one line! We multiply the vectorized input images x by the weight matrix W, add the bias b, and compute the softmax probabilities that are assigned to each class. ``` y = tf.nn.softmax(tf.matmul(x,W) + b) ``` Just pull on node y and you'll have what you want. 
``` feed_dict = {x: [your_image]} classification = sess.run(y, feed_dict) print classification ``` This applies to just about any model you create - you'll have computed the prediction probabilities as one of the last steps before computing the loss.", "best_answers_score":0.7624, "library_name":"tensorflow", "question_url":"https:\/\/stackoverflow.com\/questions\/33711556\/making-predictions-with-a-tensorflow-model", "best_answers_votes":75, "question_length":389, "response_length":621 }, { "question":"TensorFlow: Remember LSTM state for next batch (stateful LSTM) Given a trained LSTM model I want to perform inference for single timesteps, i.e. seq_length = 1 in the example below. After each timestep the internal LSTM (memory and hidden) states need to be remembered for the next 'batch'. For the very beginning of the inference the internal LSTM states init_c, init_h are computed given the input. These are then stored in an LSTMStateTuple object which is passed to the LSTM. During training this state is updated every timestep. However, for inference I want the state to be saved in between batches, i.e. the initial states only need to be computed at the very beginning and after that the LSTM states should be saved after each 'batch' (n=1). I found this related StackOverflow question: Tensorflow, best way to save state in RNNs?. However, this only works if state_is_tuple=False, but this behavior is soon to be deprecated by TensorFlow (see rnn_cell.py). Keras seems to have a nice wrapper to make stateful LSTMs possible but I don't know the best way to achieve this in TensorFlow. This issue on the TensorFlow GitHub is also related to my question: https:\/\/github.com\/tensorflow\/tensorflow\/issues\/2838 Any good suggestions for building a stateful LSTM model?
``` inputs = tf.placeholder(tf.float32, shape=[None, seq_length, 84, 84], name=\"inputs\") targets = tf.placeholder(tf.float32, shape=[None, seq_length], name=\"targets\") num_lstm_layers = 2 with tf.variable_scope(\"LSTM\") as scope: lstm_cell = tf.nn.rnn_cell.LSTMCell(512, initializer=initializer, state_is_tuple=True) self.lstm = tf.nn.rnn_cell.MultiRNNCell([lstm_cell] * num_lstm_layers, state_is_tuple=True) init_c = # compute initial LSTM memory state using contents in placeholder 'inputs' init_h = # compute initial LSTM hidden state using contents in placeholder 'inputs' self.state = [tf.nn.rnn_cell.LSTMStateTuple(init_c, init_h)] * num_lstm_layers outputs = [] for step in range(seq_length): if step != 0: scope.reuse_variables() # CNN features, as input for LSTM x_t = # ... # LSTM step through time output, self.state = self.lstm(x_t, self.state) outputs.append(output) ```", "response":"I found out it was easiest to save the whole state for all layers in a placeholder. ``` init_state = np.zeros((num_layers, 2, batch_size, state_size)) ... state_placeholder = tf.placeholder(tf.float32, [num_layers, 2, batch_size, state_size]) ``` Then unpack it and create a tuple of LSTMStateTuples before using the native tensorflow RNN Api. 
``` l = tf.unpack(state_placeholder, axis=0) rnn_tuple_state = tuple( [tf.nn.rnn_cell.LSTMStateTuple(l[idx][0], l[idx][1]) for idx in range(num_layers)] ) ``` Then pass it to the native RNN API: ``` cell = tf.nn.rnn_cell.LSTMCell(state_size, state_is_tuple=True) cell = tf.nn.rnn_cell.MultiRNNCell([cell]*num_layers, state_is_tuple=True) outputs, state = tf.nn.dynamic_rnn(cell, x_input_batch, initial_state=rnn_tuple_state) ``` The state variable will then be fed to the next batch through the placeholder.", "best_answers_score":0.7597, "library_name":"tensorflow", "question_url":"https:\/\/stackoverflow.com\/questions\/38241410\/tensorflow-remember-lstm-state-for-next-batch-stateful-lstm", "best_answers_votes":23, "question_length":2154, "response_length":836 }, { "question":"How do I start tensorflow docker jupyter notebook I've installed the tensorflow docker container on an ubuntu machine. The tensorflow docker setup instructions specify: ``` docker run -it b.gcr.io\/tensorflow\/tensorflow ``` This puts me into the docker container terminal, and I can run python and execute the Hello World example. I can also manually run .\\run_jupyter.sh to start the jupyter notebook. However, I can't reach the notebook from the host. How do I start the jupyter notebook such that I can use the notebook from the host machine? Ideally I would like to use docker to launch the container and start jupyter in a single command.", "response":"For a Linux host Robert Graves' answer will work, but for Mac OS X or Windows there is more to be done because docker runs in a virtual machine.
So to begin, launch the docker shell (or any shell if you are using Linux) and run the following command to launch a new TensorFlow container: ``` docker run -p 8888:8888 -p 6006:6006 b.gcr.io\/tensorflow\/tensorflow .\/run_jupyter.sh ``` Then for Mac OS X and Windows you need to do the following only once: Open VirtualBox Click on the docker vm (mine was automatically named \"default\") Open the settings by clicking settings In the network settings open the port forwarding dialog Click the + symbol to add another port and connect a port from your Mac to the VM by filling in the dialog as shown below. In this example I chose port 8810 because I run other notebooks using port 8888. Then open a browser and connect to http:\/\/localhost:8810 (or whichever port you set in the host port section). Make your fancy pants machine learning app!", "best_answers_score":0.7592, "library_name":"tensorflow", "question_url":"https:\/\/stackoverflow.com\/questions\/33636925\/how-do-i-start-tensorflow-docker-jupyter-notebook", "best_answers_votes":49, "question_length":638, "response_length":980 }, { "question":"tensorflow on GPU: no known devices, despite cuda's deviceQuery returning a \"PASS\" result Note: this question was initially asked on github, but I was asked to post it here instead. I'm having trouble running tensorflow on gpu, and it does not seem to be the usual cuda configuration problem, because everything seems to indicate cuda is properly set up. The main symptom: when running tensorflow, my gpu is not detected (the code being run, and its output). What differs from usual issues is that cuda seems properly installed and running .\/deviceQuery from the cuda samples succeeds (output).
I have two graphics cards: an old GTX 650 used for my monitors (I don't want to use that one with tensorflow) a GTX 1060 that I want to dedicate to tensorflow I use: tensorflow-1.0.0 cuda-8.0 (ls -l \/usr\/local\/cuda\/lib64\/libcud*) cudnn-5.1.10 python-2.7.12 nvidia-drivers-375.26 (this was installed by cuda and replaced my distro driver package) I've tried: adding \/usr\/local\/cuda\/bin\/ to $PATH forcing gpu placement in the tensorflow script using with tf.device('\/gpu:1'): (and with tf.device('\/gpu:0'): when it failed, for good measure) whitelisting the gpu I wanted to use with CUDA_VISIBLE_DEVICES, in case the presence of my old unsupported card did cause problems running the script with sudo (because why not) Here are the outputs of nvidia-smi and nvidia-debugdump -l, in case it's useful. At this point, I feel like I have followed all the breadcrumbs and have no idea what else to try. I'm not even sure whether I'm looking at a bug or a configuration problem. Any advice about how to debug this would be greatly appreciated. Thanks! Update: with the help of Yaroslav on github, I gathered more debugging info by raising the log level, but it doesn't seem to say much about the device selection: https:\/\/gist.github.com\/oelmekki\/760a37ca50bf58d4f03f46d104b798bb Update 2: Using theano detects the gpu correctly, but interestingly it complains about cuDNN being too recent, then falls back to cpu (code ran, output). Maybe that could be the problem with tensorflow as well?", "response":"From the log output, it looks like you are running the CPU version of TensorFlow (PyPI: tensorflow), and not the GPU version (PyPI: tensorflow-gpu). Running the GPU version would either log information about the CUDA libraries, or an error if it failed to load them or open the driver.
If you run the following commands, you should be able to use the GPU in subsequent runs: ``` $ pip uninstall tensorflow $ pip install tensorflow-gpu ```", "best_answers_score":0.7588, "library_name":"tensorflow", "question_url":"https:\/\/stackoverflow.com\/questions\/42326748\/tensorflow-on-gpu-no-known-devices-despite-cudas-devicequery-returning-a-pas", "best_answers_votes":79, "question_length":2067, "response_length":438 }, { "question":"How to average summaries over multiple batches? Assuming I have a bunch of summaries defined like: ```py loss = ... tf.scalar_summary(\"loss\", loss) # ... summaries = tf.merge_all_summaries() ``` I can evaluate the summaries tensor every few steps on the training data and pass the result to a SummaryWriter. The result will be noisy summaries, because they're only computed on one batch. However, I would like to compute the summaries on the entire validation dataset. Of course, I can't pass the validation dataset as a single batch, because it would be too big. So, I'll get summary outputs for each validation batch. Is there a way to average those summaries so that it appears as if the summaries have been computed on the entire validation set?", "response":"Do the averaging of your measure in Python and create a new Summary object for each mean. 
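(One caveat worth adding to the averaging step, sketched below with hypothetical numbers: if the last validation batch is smaller than the rest, a plain mean of per-batch means is slightly biased, while a size-weighted mean matches the true dataset-wide mean.)

```python
import numpy as np

# Hypothetical per-batch mean accuracies; the last batch is smaller.
batch_means = [0.80, 0.90, 1.00]
batch_sizes = [100, 100, 50]

# A plain mean of batch means gives the small last batch as much
# weight as the full-size batches...
plain = np.mean(batch_means)

# ...whereas weighting by batch size reproduces the dataset-wide mean.
weighted = np.average(batch_means, weights=batch_sizes)

print(plain, weighted)
```

With equal batch sizes the two coincide, so the simple mean in the code below is fine for evenly sized validation batches.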
Here is what I do: ```py accuracies = [] # Calculate your measure over as many batches as you need for batch in validation_set: accuracies.append(sess.run([training_op])) # Take the mean of your measure accuracy = np.mean(accuracies) # Create a new Summary object with your measure summary = tf.Summary() summary.value.add(tag=\"%sAccuracy\" % prefix, simple_value=accuracy) # Add it to the TensorBoard summary writer # Make sure to specify a step parameter to get nice graphs over time summary_writer.add_summary(summary, global_step) ```", "best_answers_score":0.758, "library_name":"tensorflow", "question_url":"https:\/\/stackoverflow.com\/questions\/40788785\/how-to-average-summaries-over-multiple-batches", "best_answers_votes":49, "question_length":749, "response_length":626 }, { "question":"TensorFlow Variables and Constants I am new to tensorflow and am not able to understand the difference between a variable and a constant. I get the idea that we use variables for equations and constants for direct values, but why does only code #1 work, and not code #2 and #3? Please also explain in which cases we have to run our graph first (a) and then our variable (b), i.e. ``` (a) session.run(model) (b) print(session.run(y)) ``` and in which case I can directly execute this command i.e. ``` print(session.run(y)) ``` Code #1 : ``` x = tf.constant(35, name='x') y = tf.Variable(x + 5, name='y') model = tf.global_variables_initializer() with tf.Session() as session: session.run(model) print(session.run(y)) ``` Code #2 : ``` x = tf.Variable(35, name='x') y = tf.Variable(x + 5, name='y') model = tf.global_variables_initializer() with tf.Session() as session: session.run(model) print(session.run(y)) ``` Code #3 : ``` x = tf.constant(35, name='x') y = tf.constant(x + 5, name='y') model = tf.global_variables_initializer() with tf.Session() as session: session.run(model) print(session.run(y)) ```", "response":"In TensorFlow the differences between constants and variables are that when you declare some
constant, its value can't be changed in the future (also the initialization should be with a value, not with an operation). Nevertheless, when you declare a Variable, you can change its value in the future with the tf.assign() method (and the initialization can be achieved with a value or an operation). The function tf.global_variables_initializer() initialises all variables in your code, but it works in async mode, so it doesn't work properly when dependencies exist between variables. Your first code (#1) works properly because there are no dependencies on variable initialization and the constant is constructed with a value. The second code (#2) doesn't work because of the async behavior of tf.global_variables_initializer(). You can fix it using tf.variables_initializer() as follows: ``` x = tf.Variable(35, name='x') model_x = tf.variables_initializer([x]) y = tf.Variable(x + 5, name='y') model_y = tf.variables_initializer([y]) with tf.Session() as session: session.run(model_x) session.run(model_y) print(session.run(y)) ``` The third code (#3) doesn't work properly because you are trying to initialize a constant with an operation, which isn't possible. To solve it, an appropriate strategy is (#1). Regarding your last question:
You need to run (a) session.run(model) before (b) print(session.run(y)) whenever there are variables in your computation graph.", "best_answers_score":0.7579, "library_name":"tensorflow", "question_url":"https:\/\/stackoverflow.com\/questions\/44745855\/tensorflow-variables-and-constants", "best_answers_votes":44, "question_length":1091, "response_length":1481 }, { "question":"Python: Neural Network - TypeError: 'History' object is not subscriptable I have been practicing building and comparing neural networks using Keras and Tensorflow in python, but when I try to plot the models for comparison I receive an error: ``` TypeError: 'History' object is not subscriptable ``` Here is my code for the three models: ``` ############################## Initiate model 1 ############################### # Model 1 has no hidden layers from keras.models import Sequential model1 = Sequential() # Get layers from keras.layers import Dense # Add first layer n_cols = len(X.columns) model1.add(Dense(units=n_cols, activation='relu', input_shape=(n_cols,))) # Add output layer model1.add(Dense(units=2, activation='softmax')) # Compile the model model1.compile(loss='categorical_crossentropy', optimizer='adam', metrics= ['accuracy']) # Define early_stopping_monitor from keras.callbacks import EarlyStopping early_stopping_monitor = EarlyStopping(patience=2) # Fit model model1.fit(X, y, validation_split=0.33, epochs=30, callbacks= [early_stopping_monitor], verbose=False) ############################## Initiate model 2 ############################### # Model 2 has 1 hidden layer that has the mean number of nodes of input and output layer model2 = Sequential() # Add first layer model2.add(Dense(units=n_cols, activation='relu', input_shape=(n_cols,))) # Add hidden layer import math model2.add(Dense(units=math.ceil((n_cols+2)\/2), activation='relu')) # Add output layer model2.add(Dense(units=2, activation='softmax')) # Compile the model model2.compile(loss='categorical_crossentropy',
optimizer='adam', metrics= ['accuracy']) # Fit model model2.fit(X, y, validation_split=0.33, epochs=30, callbacks= [early_stopping_monitor], verbose=False) ############################## Initiate model 3 ############################### # Model 3 has 1 hidden layer that is 2\/3 the size of the input layer plus the size of the output layer model3 = Sequential() # Add first layer model3.add(Dense(units=n_cols, activation='relu', input_shape=(n_cols,))) # Add hidden layer model3.add(Dense(units=math.ceil((n_cols*(2\/3))+2), activation='relu')) # Add output layer model3.add(Dense(units=2, activation='softmax')) # Compile the model model3.compile(loss='categorical_crossentropy', optimizer='adam', metrics= ['accuracy']) # Fit model model3.fit(X, y, validation_split=0.33, epochs=30, callbacks= [early_stopping_monitor], verbose=False) # Plot the models plt.plot(model1.history['val_loss'], 'r', model2.history['val_loss'], 'b', model3.history['val_loss'], 'g') plt.xlabel('Epochs') plt.ylabel('Validation score') plt.show() ``` I have no problems with running any of my models, getting predicted probabilities, plotting ROC curves, or plotting PR curves. However, when I attempt to plot the three curves together I am getting an error from this area of my code: ``` model1.history['val_loss'] TypeError: 'History' object is not subscriptable ``` Does anyone have experience with this type of error and can lead me to what I am doing wrong? Thank you in advance.", "response":"Call to model.fit() returns a History object that has a member history, which is of type dict. So you can replace : ``` model2.fit(X, y, validation_split=0.33, epochs=30, callbacks= [early_stopping_monitor], verbose=False) ``` with ``` history2 = model2.fit(X, y, validation_split=0.33, epochs=30, callbacks= [early_stopping_monitor], verbose=False) ``` Similarly for other models. 
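(To make the error itself concrete: a Keras History object is not subscriptable; the per-epoch metrics live in its .history dict attribute. A minimal, hypothetical stand-in class illustrates the distinction:)

```python
class History:
    # Minimal stand-in for keras.callbacks.History: the per-epoch
    # metrics are stored in the .history dict attribute, not on the
    # object itself.
    def __init__(self, history):
        self.history = history

h = History({'val_loss': [0.9, 0.7, 0.6]})

print(h.history['val_loss'])  # works: the dict is subscriptable

try:
    h['val_loss']  # fails: the History object itself is not
except TypeError as err:
    print(err)
```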
and then you can use : ``` plt.plot(history1.history['val_loss'], 'r', history2.history['val_loss'], 'b', history3.history['val_loss'], 'g') ```", "best_answers_score":0.7574, "library_name":"tensorflow", "question_url":"https:\/\/stackoverflow.com\/questions\/51731207\/python-neural-network-typeerror-history-object-is-not-subscriptable", "best_answers_votes":34, "question_length":3075, "response_length":526 }, { "question":"Clearing Tensorflow GPU memory after model execution I've trained 3 models and am now running code that loads each of the 3 checkpoints in sequence and runs predictions using them. I'm using the GPU. When the first model is loaded it pre-allocates the entire GPU memory (which I want for working through the first batch of data). But it doesn't unload memory when it's finished. When the second model is loaded, using both tf.reset_default_graph() and with tf.Graph().as_default() the GPU memory still is fully consumed from the first model, and the second model is then starved of memory. Is there a way to resolve this, other than using Python subprocesses or multiprocessing to work around the problem (the only solution I've found on via google searches)?", "response":"A git issue from June 2016 (https:\/\/github.com\/tensorflow\/tensorflow\/issues\/1727) indicates that there is the following problem: currently the Allocator in the GPUDevice belongs to the ProcessState, which is essentially a global singleton. The first session using GPU initializes it, and frees itself when the process shuts down. Thus the only workaround would be to use processes and shut them down after the computation. 
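(The process-isolation idea is independent of TensorFlow and can be sketched with the standard library's subprocess module; the hypothetical child program below stands in for the memory-hungry TensorFlow job, and all of its memory is returned to the OS when the child interpreter exits. The multiprocessing-based TensorFlow version follows in the Example Code.)

```python
import subprocess
import sys

# Run the memory-hungry job in a separate interpreter process; the
# operating system reclaims every byte the child allocated as soon as
# the child exits, which a library cannot guarantee in-process.
child_program = 'data = [0.0] * 10**6; print(sum(data))'
result = subprocess.run(
    [sys.executable, '-c', child_program],
    capture_output=True, text=True, check=True,
)
print(result.stdout.strip())  # the child's result survives; its memory does not
```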
Example Code: ``` import tensorflow as tf import multiprocessing import numpy as np def run_tensorflow(): n_input = 10000 n_classes = 1000 # Create model def multilayer_perceptron(x, weight): # Hidden layer with RELU activation layer_1 = tf.matmul(x, weight) return layer_1 # Store layers weight & bias weights = tf.Variable(tf.random_normal([n_input, n_classes])) x = tf.placeholder(\"float\", [None, n_input]) y = tf.placeholder(\"float\", [None, n_classes]) pred = multilayer_perceptron(x, weights) cost = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(logits=pred, labels=y)) optimizer = tf.train.AdamOptimizer(learning_rate=0.001).minimize(cost) init = tf.global_variables_initializer() with tf.Session() as sess: sess.run(init) for i in range(100): batch_x = np.random.rand(10, 10000) batch_y = np.random.rand(10, 1000) sess.run([optimizer, cost], feed_dict={x: batch_x, y: batch_y}) print \"finished doing stuff with tensorflow!\" if __name__ == \"__main__\": # option 1: execute code with extra process p = multiprocessing.Process(target=run_tensorflow) p.start() p.join() # wait until user presses enter key raw_input() # option 2: just execute the function run_tensorflow() # wait until user presses enter key raw_input() ``` So if you would call the function run_tensorflow() within a process you created and shut the process down (option 1), the memory is freed. If you just run run_tensorflow() (option 2) the memory is not freed after the function call.", "best_answers_score":0.7571, "library_name":"tensorflow", "question_url":"https:\/\/stackoverflow.com\/questions\/39758094\/clearing-tensorflow-gpu-memory-after-model-execution", "best_answers_votes":39, "question_length":759, "response_length":1890 }, { "question":"How to set weights in Keras with a numpy array? I am having trouble with the Keras backend functions for setting values. 
I am converting a model from PyTorch to Keras and trying to set the weights of the Keras model, but the weights do not appear to be getting set. Note: I am not actually setting the weights with np.ones; that is just an example. I have tried... Loading an existing model ``` import keras from keras.models import load_model, Model model = load_model(model_dir+file_name) keras_layer = [layer for layer in model.layers if layer.name=='conv2d_1'][0] ``` Creating a simple model ``` img_input = keras.layers.Input(shape=(3,3,3)) x = keras.layers.Conv2D(1, kernel_size=1, strides=1, padding=\"valid\", use_bias=False, name='conv1')(img_input) model = Model(img_input, x) keras_layer = [layer for layer in model.layers if layer.name=='conv1'][0] ``` Then using set_weights or set_value ``` keras_layer.set_weights([np.ones((1, 1, 3, 1))]) ``` or... ``` K.batch_set_value([(weight,np.ones((1, 1, 3, 1))) for weight in keras_layer.weights]) ``` Afterwards I call either one of the following: ``` K.batch_get_value([weight for weight in keras_layer.weights]) keras_layer.get_weights() ``` None of the weights appear to have been set. The same values as before are returned. ``` [array([[[[ 1.61547325e-06], [ 2.97779252e-06], [ 1.50160542e-06]]]], dtype=float32)] ``` How do I set the weights of a layer in Keras with a numpy array of values?", "response":"What is keras_layer in your code? You can set weights in these ways: ``` model.layers[i].set_weights(listOfNumpyArrays) model.get_layer(layerName).set_weights(...) model.set_weights(listOfNumpyArrays) ``` Where model is an instance of an existing model.
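(For the 1x1 Conv2D in the question, the single array handed to set_weights must match Keras' kernel layout of (kernel_h, kernel_w, in_channels, filters); a small NumPy sketch of building such an array:)

```python
import numpy as np

# Keras stores a Conv2D kernel as (kernel_h, kernel_w, in_channels,
# filters); for a 1x1 convolution over 3 channels with 1 filter that
# is shape (1, 1, 3, 1), matching the np.ones call in the question.
new_kernel = np.ones((1, 1, 3, 1), dtype=np.float32)
print(new_kernel.shape, new_kernel.size)
```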
You can see the expected length of the list and its array shapes using the method get_weights() from the same instances above.", "best_answers_score":0.7563, "library_name":"tensorflow", "question_url":"https:\/\/stackoverflow.com\/questions\/47183159\/how-to-set-weights-in-keras-with-a-numpy-array", "best_answers_votes":59, "question_length":1460, "response_length":377 }, { "question":"How to load only specific weights on Keras I have a trained model that I've exported the weights and want to partially load into another model. My model is built in Keras using TensorFlow as backend. Right now I'm doing as follows: ``` model = Sequential() model.add(Conv2D(32, (3, 3), input_shape=input_shape, trainable=False)) model.add(Activation('relu', trainable=False)) model.add(MaxPooling2D(pool_size=(2, 2))) model.add(Conv2D(32, (3, 3), trainable=False)) model.add(Activation('relu', trainable=False)) model.add(MaxPooling2D(pool_size=(2, 2))) model.add(Conv2D(64, (3, 3), trainable=True)) model.add(Activation('relu', trainable=True)) model.add(MaxPooling2D(pool_size=(2, 2))) model.add(Flatten()) model.add(Dense(64)) model.add(Activation('relu')) model.add(Dropout(0.5)) model.add(Dense(1)) model.add(Activation('sigmoid')) model.compile(loss='binary_crossentropy', optimizer='rmsprop', metrics=['accuracy']) model.load_weights(\"image_500.h5\") model.pop() model.pop() model.pop() model.pop() model.pop() model.pop() model.add(Conv2D(1, (6, 6),strides=(1, 1), trainable=True)) model.add(Activation('relu', trainable=True)) model.compile(loss='binary_crossentropy', optimizer='rmsprop', metrics=['accuracy']) ``` I'm sure it's a terrible way to do it, although it works. How do I load just the first 9 layers?", "response":"If your first 9 layers are consistently named between your original trained model and the new model, then you can use model.load_weights() with by_name=True. 
This will update weights only in the layers of your new model that have an identically named layer found in the original trained model. The name of the layer can be specified with the name keyword, for example: ``` model.add(Dense(8, activation='relu',name='dens_1')) ```", "best_answers_score":0.7561, "library_name":"tensorflow", "question_url":"https:\/\/stackoverflow.com\/questions\/43702323\/how-to-load-only-specific-weights-on-keras", "best_answers_votes":47, "question_length":1320, "response_length":429 }, { "question":"Stateful LSTM and stream predictions I've trained an LSTM model (built with Keras and TF) on multiple batches of 7 samples with 3 features each, with a shape the like below sample (numbers below are just placeholders for the purpose of explanation), each batch is labeled 0 or 1: Data: ``` [ [[1,2,3],[1,2,3],[1,2,3],[1,2,3],[1,2,3],[1,2,3],[1,2,3]] [[1,2,3],[1,2,3],[1,2,3],[1,2,3],[1,2,3],[1,2,3],[1,2,3]] [[1,2,3],[1,2,3],[1,2,3],[1,2,3],[1,2,3],[1,2,3],[1,2,3]] ... ] ``` i.e: batches of m sequences, each of length 7, whose elements are 3-dimensional vectors (so batch has shape (m73)) Target: ``` [ [1] [0] [1] ... ] ``` On my production environment data is a stream of samples with 3 features ([1,2,3],[1,2,3]...). I would like to stream each sample as it arrives to my model and get the intermediate probability without waiting for the entire batch (7) - see the animation below. One of my thoughts was padding the batch with 0 for the missing samples, [[0,0,0],[0,0,0],[0,0,0],[0,0,0],[0,0,0],[0,0,0],[1,2,3]] but that seems to be inefficient. Will appreciate any help that will point me in the right direction of both saving the LSTM intermediate state in a persistent way, while waiting for the next sample and predicting on a model trained on a specific batch size with partial data. 
Update, including model code: ```py opt = optimizers.Adam(lr=0.001, beta_1=0.9, beta_2=0.999, epsilon=10e-8, decay=0.001) model = Sequential() num_features = data.shape[2] num_samples = data.shape[1] first_lstm = LSTM(32, batch_input_shape=(None, num_samples, num_features), return_sequences=True, activation='tanh') model.add(first_lstm) model.add(LeakyReLU()) model.add(Dropout(0.2)) model.add(LSTM(16, return_sequences=True, activation='tanh')) model.add(Dropout(0.2)) model.add(LeakyReLU()) model.add(Flatten()) model.add(Dense(1, activation='sigmoid')) model.compile(loss='binary_crossentropy', optimizer=opt, metrics=['accuracy', keras_metrics.precision(), keras_metrics.recall(), f1]) ``` Model Summary: ``` _________________________________________________________________ Layer (type) Output Shape Param # ================================================================= lstm_1 (LSTM) (None, 100, 32) 6272 _________________________________________________________________ leaky_re_lu_1 (LeakyReLU) (None, 100, 32) 0 _________________________________________________________________ dropout_1 (Dropout) (None, 100, 32) 0 _________________________________________________________________ lstm_2 (LSTM) (None, 100, 16) 3136 _________________________________________________________________ dropout_2 (Dropout) (None, 100, 16) 0 _________________________________________________________________ leaky_re_lu_2 (LeakyReLU) (None, 100, 16) 0 _________________________________________________________________ flatten_1 (Flatten) (None, 1600) 0 _________________________________________________________________ dense_1 (Dense) (None, 1) 1601 ================================================================= Total params: 11,009 Trainable params: 11,009 Non-trainable params: 0 _________________________________________________________________ ```", "response":"I think there might be an easier solution. 
If your model does not have convolutional layers or any other layers that act upon the length\/steps dimension, you can simply mark it as stateful=True Warning: your model has layers that act on the length dimension !! The Flatten layer transforms the length dimension into a feature dimension. This will completely prevent you from achieving your goal. If the Flatten layer is expecting 7 steps, you will always need 7 steps. So, before applying my answer below, fix your model to not use the Flatten layer. Instead, it can just remove the return_sequences=True for the last LSTM layer. The following code fixed that and also prepares a few things to be used with the answer below: ``` def createModel(forTraining): #model for training, stateful=False, any batch size if forTraining == True: batchSize = None stateful = False #model for predicting, stateful=True, fixed batch size else: batchSize = 1 stateful = True model = Sequential() first_lstm = LSTM(32, batch_input_shape=(batchSize, num_samples, num_features), return_sequences=True, activation='tanh', stateful=stateful) model.add(first_lstm) model.add(LeakyReLU()) model.add(Dropout(0.2)) #this is the last LSTM layer, use return_sequences=False model.add(LSTM(16, return_sequences=False, stateful=stateful, activation='tanh')) model.add(Dropout(0.2)) model.add(LeakyReLU()) #don't add a Flatten!!! #model.add(Flatten()) model.add(Dense(1, activation='sigmoid')) if forTraining == True: compileThisModel(model) ``` With this, you will be able to train with 7 steps and predict with one step. Otherwise it will not be possible. The usage of a stateful model as a solution for your question First, train this new model again, because it has no Flatten layer: ``` trainingModel = createModel(forTraining=True) trainThisModel(trainingModel) ``` Now, with this trained model, you can simply create a new model exactly the same way you created the trained model, but marking stateful=True in all its LSTM layers. 
And we should copy the weights from the trained model. Since these new layers will need a fixed batch size (Keras' rules), I assumed it would be 1 (one single stream is coming, not m streams) and added it to the model creation above. ``` predictingModel = createModel(forTraining=False) predictingModel.set_weights(trainingModel.get_weights()) ``` And voil\u00e0. Just predict the outputs of the model with a single step: ``` pseudo for loop as samples arrive to your model: prob = predictingModel.predict_on_batch(sample) #where sample.shape == (1, 1, 3) ``` When you decide that you reached the end of what you consider a continuous sequence, call predictingModel.reset_states() so you can safely start a new sequence without the model thinking it should be mended at the end of the previous one. Saving and loading states Just get and set them, saving with h5py: ``` def saveStates(model, saveName): f = h5py.File(saveName,'w') for l, lay in enumerate(model.layers): #if you have nested models, #consider making this recurrent testing for layers in layers if isinstance(lay,RNN): for s, stat in enumerate(lay.states): f.create_dataset('states_' + str(l) + '_' + str(s), data=K.eval(stat), dtype=K.dtype(stat)) f.close() def loadStates(model, saveName): f = h5py.File(saveName, 'r') allStates = list(f.keys()) for stateKey in allStates: name, layer, state = stateKey.split('_') layer = int(layer) state = int(state) K.set_value(model.layers[layer].states[state], f.get(stateKey)) f.close() ``` Working test for saving\/loading states ``` import h5py, numpy as np from keras.layers import RNN, LSTM, Dense, Input from keras.models import Model import keras.backend as K def createModel(): inp = Input(batch_shape=(1,None,3)) out = LSTM(5,return_sequences=True, stateful=True)(inp) out = LSTM(2, stateful=True)(out) out = Dense(1)(out) model = Model(inp,out) return model def saveStates(model, saveName): f = h5py.File(saveName,'w') for l, lay in enumerate(model.layers): #if you have nested models, 
consider making this recurrent testing for layers in layers if isinstance(lay,RNN): for s, stat in enumerate(lay.states): f.create_dataset('states_' + str(l) + '_' + str(s), data=K.eval(stat), dtype=K.dtype(stat)) f.close() def loadStates(model, saveName): f = h5py.File(saveName, 'r') allStates = list(f.keys()) for stateKey in allStates: name, layer, state = stateKey.split('_') layer = int(layer) state = int(state) K.set_value(model.layers[layer].states[state], f.get(stateKey)) f.close() def printStates(model): for l in model.layers: #if you have nested models, consider making this recurrent testing for layers in layers if isinstance(l,RNN): for s in l.states: print(K.eval(s)) model1 = createModel() model2 = createModel() model1.predict_on_batch(np.ones((1,5,3))) #changes model 1 states print('model1') printStates(model1) print('model2') printStates(model2) saveStates(model1,'testStates5') loadStates(model2,'testStates5') print('model1') printStates(model1) print('model2') printStates(model2) ``` Considerations on the aspects of the data In your first model (if it is stateful=False), it considers that each sequence in m is individual and not connected to the others. It also considers that each batch contains unique sequences. If this is not the case, you might want to train the stateful model instead (considering that each sequence is actually connected to the previous sequence). And then you would need m batches of 1 sequence. -> m x (1, 7 or None, 3).", "best_answers_score":0.754, "library_name":"tensorflow", "question_url":"https:\/\/stackoverflow.com\/questions\/53190253\/stateful-lstm-and-stream-predictions", "best_answers_votes":11, "question_length":3144, "response_length":5480 }, { "question":"TensorFlow: InternalError: Blas SGEMM launch failed When I run sess.run(train_step, feed_dict={x: batch_xs, y_: batch_ys}) I get InternalError: Blas SGEMM launch failed. 
Here is the full error and stack trace: ``` InternalErrorTraceback (most recent call last) in () 1 batch_xs, batch_ys = mnist.train.next_batch(100) ----> 2 sess.run(train_step, feed_dict={x: batch_xs, y_: batch_ys}) \/usr\/local\/lib\/python2.7\/dist-packages\/tensorflow\/python\/client\/session.pyc in run(self, fetches, feed_dict, options, run_metadata) 338 try: 339 result = self._run(None, fetches, feed_dict, options_ptr, --> 340 run_metadata_ptr) 341 if run_metadata: 342 proto_data = tf_session.TF_GetBuffer(run_metadata_ptr) \/usr\/local\/lib\/python2.7\/dist-packages\/tensorflow\/python\/client\/session.pyc in _run(self, handle, fetches, feed_dict, options, run_metadata) 562 try: 563 results = self._do_run(handle, target_list, unique_fetches, --> 564 feed_dict_string, options, run_metadata) 565 finally: 566 # The movers are no longer used. Delete them. \/usr\/local\/lib\/python2.7\/dist-packages\/tensorflow\/python\/client\/session.pyc in _do_run(self, handle, target_list, fetch_list, feed_dict, options, run_metadata) 635 if handle is None: 636 return self._do_call(_run_fn, self._session, feed_dict, fetch_list, --> 637 target_list, options, run_metadata) 638 else: 639 return self._do_call(_prun_fn, self._session, handle, feed_dict, \/usr\/local\/lib\/python2.7\/dist-packages\/tensorflow\/python\/client\/session.pyc in _do_call(self, fn, *args) 657 # pylint: disable=protected-access 658 raise errors._make_specific_exception(node_def, op, error_message, --> 659 e.code) 660 # pylint: enable=protected-access 661 InternalError: Blas SGEMM launch failed : a.shape=(100, 784), b.shape=(784, 10), m=100, n=10, k=784 [[Node: MatMul = MatMul[T=DT_FLOAT, transpose_a=false, transpose_b=false, _device=\"\/job:localhost\/replica:0\/task:0\/gpu:0\"](_recv_Placeholder_0\/_4, Variable\/read)]] Caused by op u'MatMul', defined at: File \"\/usr\/lib\/python2.7\/runpy.py\", line 162, in _run_module_as_main \"__main__\", fname, loader, pkg_name) File 
\"\/usr\/lib\/python2.7\/runpy.py\", line 72, in _run_code exec code in run_globals File \"\/usr\/local\/lib\/python2.7\/dist-packages\/ipykernel\/__main__.py\", line 3, in app.launch_new_instance() File \"\/usr\/local\/lib\/python2.7\/dist-packages\/traitlets\/config\/application.py\", line 596, in launch_instance app.start() File \"\/usr\/local\/lib\/python2.7\/dist-packages\/ipykernel\/kernelapp.py\", line 442, in start ioloop.IOLoop.instance().start() File \"\/usr\/local\/lib\/python2.7\/dist-packages\/zmq\/eventloop\/ioloop.py\", line 162, in start super(ZMQIOLoop, self).start() File \"\/usr\/local\/lib\/python2.7\/dist-packages\/tornado\/ioloop.py\", line 883, in start handler_func(fd_obj, events) File \"\/usr\/local\/lib\/python2.7\/dist-packages\/tornado\/stack_context.py\", line 275, in null_wrapper return fn(*args, **kwargs) File \"\/usr\/local\/lib\/python2.7\/dist-packages\/zmq\/eventloop\/zmqstream.py\", line 440, in _handle_events self._handle_recv() File \"\/usr\/local\/lib\/python2.7\/dist-packages\/zmq\/eventloop\/zmqstream.py\", line 472, in _handle_recv self._run_callback(callback, msg) File \"\/usr\/local\/lib\/python2.7\/dist-packages\/zmq\/eventloop\/zmqstream.py\", line 414, in _run_callback callback(*args, **kwargs) File \"\/usr\/local\/lib\/python2.7\/dist-packages\/tornado\/stack_context.py\", line 275, in null_wrapper return fn(*args, **kwargs) File \"\/usr\/local\/lib\/python2.7\/dist-packages\/ipykernel\/kernelbase.py\", line 276, in dispatcher return self.dispatch_shell(stream, msg) File \"\/usr\/local\/lib\/python2.7\/dist-packages\/ipykernel\/kernelbase.py\", line 228, in dispatch_shell handler(stream, idents, msg) File \"\/usr\/local\/lib\/python2.7\/dist-packages\/ipykernel\/kernelbase.py\", line 391, in execute_request user_expressions, allow_stdin) File \"\/usr\/local\/lib\/python2.7\/dist-packages\/ipykernel\/ipkernel.py\", line 199, in do_execute shell.run_cell(code, store_history=store_history, silent=silent) File 
\"\/usr\/local\/lib\/python2.7\/dist-packages\/IPython\/core\/interactiveshell.py\", line 2723, in run_cell interactivity=interactivity, compiler=compiler, result=result) File \"\/usr\/local\/lib\/python2.7\/dist-packages\/IPython\/core\/interactiveshell.py\", line 2825, in run_ast_nodes if self.run_code(code, result): File \"\/usr\/local\/lib\/python2.7\/dist-packages\/IPython\/core\/interactiveshell.py\", line 2885, in run_code exec(code_obj, self.user_global_ns, self.user_ns) File \"\", line 4, in y = tf.nn.softmax(tf.matmul(x, W) + b) File \"\/usr\/local\/lib\/python2.7\/dist-packages\/tensorflow\/python\/ops\/math_ops.py\", line 1036, in matmul name=name) File \"\/usr\/local\/lib\/python2.7\/dist-packages\/tensorflow\/python\/ops\/gen_math_ops.py\", line 911, in _mat_mul transpose_b=transpose_b, name=name) File \"\/usr\/local\/lib\/python2.7\/dist-packages\/tensorflow\/python\/ops\/op_def_library.py\", line 655, in apply_op op_def=op_def) File \"\/usr\/local\/lib\/python2.7\/dist-packages\/tensorflow\/python\/framework\/ops.py\", line 2154, in create_op original_op=self._default_original_op, op_def=op_def) File \"\/usr\/local\/lib\/python2.7\/dist-packages\/tensorflow\/python\/framework\/ops.py\", line 1154, in __init__ self._traceback = _extract_stack() ``` Stack: EC2 g2.8xlarge machine, Ubuntu 14.04", "response":"Old question, but may help others. Try to close interactive sessions active in other processes (if IPython Notebook - just restart kernels). This helped me! 
Additionally, I use this code to close local sessions in this kernel during experiments: ``` if 'session' in locals() and session is not None: print('Close interactive session') session.close() ```", "best_answers_score":0.7539, "library_name":"tensorflow", "question_url":"https:\/\/stackoverflow.com\/questions\/37337728\/tensorflow-internalerror-blas-sgemm-launch-failed", "best_answers_votes":108, "question_length":5176, "response_length":354 }, { "question":"Cannot install TensorFlow 1.x I am trying to install Tensorflow 1.14 for a package that I am trying to use. I tried: pip3 uninstall tensorflow Then I tried to install Tensorflow 1.14 using: pip3 install tensorflow==1.14 and I get the following error ERROR: Could not find a version that satisfies the requirement tensorflow==1.14 (from versions: 2.2.0rc3, 2.2.0rc4, 2.2.0, 2.3.0rc0, 2.3.0rc1, 2.3.0rc2) ERROR: No matching distribution found for tensorflow==1.14 I also tried making a new virtual env and tried the following commands but it didn't work. Is there any way to install Tensorflow 1?", "response":"What I've found on discourse: You just need to make sure you\u2019re using Python 3.5, 3.6 or 3.7. TensorFlow 1.15 does not support Python 3.8", "best_answers_score":0.7536, "library_name":"tensorflow", "question_url":"https:\/\/stackoverflow.com\/questions\/63059979\/cannot-install-tensorflow-1-x", "best_answers_votes":26, "question_length":594, "response_length":137 }, { "question":"Saving Keras models with Custom Layers I am trying to save a Keras model in a H5 file. The Keras model has a custom layer. 
When I try to restore the model, I get the following error: ```py --------------------------------------------------------------------------- ValueError Traceback (most recent call last) in () 1 model.save('model.h5') 2 del model ----> 3 model = tf.keras.models.load_model('model.h5') 8 frames \/usr\/local\/lib\/python3.6\/dist-packages\/tensorflow\/python\/keras\/utils\/generic_utils.py in class_and_config_for_serialized_keras_object(config, module_objects, custom_objects, printable_module_name) 319 cls = get_registered_object(class_name, custom_objects, module_objects) 320 if cls is None: --> 321 raise ValueError('Unknown ' + printable_module_name + ': ' + class_name) 322 323 cls_config = config['config'] ValueError: Unknown layer: CustomLayer ``` Could you please tell me how I am supposed to save and load weights of all the custom Keras layers too? (Also, there was no warning when saving, will it be possible to load models from H5 files which I have already saved but can't load back now?) Here is the minimal working code sample (MCVE) for this error, as well as the full expanded message: Google Colab Notebook Just for completeness, this is the code I used to make my custom layer. get_config and from_config are both working fine. 
```py class CustomLayer(tf.keras.layers.Layer): def __init__(self, k, name=None): super(CustomLayer, self).__init__(name=name) self.k = k def get_config(self): return {'k': self.k} def call(self, input): return tf.multiply(input, 2) model = tf.keras.models.Sequential([ tf.keras.Input(name='input_layer', shape=(10,)), CustomLayer(10, name='custom_layer'), tf.keras.layers.Dense(1, activation='sigmoid', name='output_layer') ]) model.save('model.h5') model = tf.keras.models.load_model('model.h5') ```", "response":"You can provide manually the mapping custom_objects in the load_model method as mentioned in the answer https:\/\/stackoverflow.com\/a\/62326857\/8056572 but it can be tedious when you have a lot of custom layers (or any custom callables defined. e.g. metrics, losses, optimizers, ...). Tensorflow provides a utils function to do it automatically: tf.keras.utils.register_keras_serializable You have to update your CustomLayer as follows: ```py import tensorflow as tf @tf.keras.utils.register_keras_serializable() class CustomLayer(tf.keras.layers.Layer): def __init__(self, k, **kwargs): self.k = k super(CustomLayer, self).__init__(**kwargs) def get_config(self): config = super().get_config() config[\"k\"] = self.k return config def call(self, input): return tf.multiply(input, 2) ``` Here is the complete working code: ``` import tensorflow as tf @tf.keras.utils.register_keras_serializable() class CustomLayer(tf.keras.layers.Layer): def __init__(self, k, **kwargs): self.k = k super(CustomLayer, self).__init__(**kwargs) def get_config(self): config = super().get_config() config[\"k\"] = self.k return config def call(self, input): return tf.multiply(input, 2) def main(): model = tf.keras.models.Sequential( [ tf.keras.Input(name='input_layer', shape=(10,)), CustomLayer(10, name='custom_layer'), tf.keras.layers.Dense(1, activation='sigmoid', name='output_layer') ] ) print(\"SUMMARY OF THE MODEL CREATED\") print(\"-\" * 60) print(model.summary()) model.save('model.h5') 
del model print() print() model = tf.keras.models.load_model('model.h5') print(\"SUMMARY OF THE MODEL LOADED\") print(\"-\" * 60) print(model.summary()) if __name__ == \"__main__\": main() ``` And the corresponding output: ``` SUMMARY OF THE MODEL CREATED ------------------------------------------------------------ Model: \"sequential\" _________________________________________________________________ Layer (type) Output Shape Param # ================================================================= custom_layer (CustomLayer) (None, 10) 0 _________________________________________________________________ output_layer (Dense) (None, 1) 11 ================================================================= Total params: 11 Trainable params: 11 Non-trainable params: 0 _________________________________________________________________ None WARNING:tensorflow:No training configuration found in the save file, so the model was *not* compiled. Compile it manually. SUMMARY OF THE MODEL LOADED ------------------------------------------------------------ Model: \"sequential\" _________________________________________________________________ Layer (type) Output Shape Param # ================================================================= custom_layer (CustomLayer) (None, 10) 0 _________________________________________________________________ output_layer (Dense) (None, 1) 11 ================================================================= Total params: 11 Trainable params: 11 Non-trainable params: 0 _________________________________________________________________ None ```", "best_answers_score":0.752, "library_name":"tensorflow", "question_url":"https:\/\/stackoverflow.com\/questions\/62280161\/saving-keras-models-with-custom-layers", "best_answers_votes":23, "question_length":1866, "response_length":3046 }, { "question":"how to implement tensorflow's next_batch for own data In the tensorflow MNIST tutorial the mnist.train.next_batch(100) function comes very handy. 
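As a framework-free sketch of the `get_config` / `custom_objects` machinery the custom-layer answer above relies on (the class and lookup function here are illustrative stand-ins, not Keras internals):

```python
# Stand-in for a custom layer that can serialize its constructor arguments.
class CustomLayer:
    def __init__(self, k):
        self.k = k

    def get_config(self):
        return {"k": self.k}

    @classmethod
    def from_config(cls, config):
        return cls(**config)


def load_by_name(class_name, config, custom_objects):
    # Mirrors the lookup load_model performs: resolve the saved class name
    # through the custom_objects mapping, then rebuild from the saved config.
    cls = custom_objects.get(class_name)
    if cls is None:
        raise ValueError("Unknown layer: " + class_name)
    return cls.from_config(config)


saved_name, saved_config = "CustomLayer", CustomLayer(10).get_config()
restored = load_by_name(saved_name, saved_config, {"CustomLayer": CustomLayer})
```

If the mapping is missing an entry, the lookup raises the same kind of "Unknown layer" error the question shows; registering the class (or passing `custom_objects`) fills that gap.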
I am now trying to implement a simple classification myself. I have my training data in a numpy array. How could I implement a similar function for my own data to give me the next batch? ``` sess = tf.InteractiveSession() tf.global_variables_initializer().run() Xtr, Ytr = loadData() for it in range(1000): batch_x = Xtr.next_batch(100) batch_y = Ytr.next_batch(100) ```", "response":"The link you posted says: \"we get a \"batch\" of one hundred random data points from our training set\". In my example I use a global function (not a method like in your example) so there will be a difference in syntax. In my function you'll need to pass the number of samples wanted and the data array. Here is the correct code, which ensures samples have correct labels: ``` import numpy as np def next_batch(num, data, labels): ''' Return a total of `num` random samples and labels. ''' idx = np.arange(0 , len(data)) np.random.shuffle(idx) idx = idx[:num] data_shuffle = [data[ i] for i in idx] labels_shuffle = [labels[ i] for i in idx] return np.asarray(data_shuffle), np.asarray(labels_shuffle) Xtr, Ytr = np.arange(0, 10), np.arange(0, 100).reshape(10, 10) print(Xtr) print(Ytr) Xtr, Ytr = next_batch(5, Xtr, Ytr) print('\\n5 random samples') print(Xtr) print(Ytr) ``` And a demo run: ``` [0 1 2 3 4 5 6 7 8 9] [[ 0 1 2 3 4 5 6 7 8 9] [10 11 12 13 14 15 16 17 18 19] [20 21 22 23 24 25 26 27 28 29] [30 31 32 33 34 35 36 37 38 39] [40 41 42 43 44 45 46 47 48 49] [50 51 52 53 54 55 56 57 58 59] [60 61 62 63 64 65 66 67 68 69] [70 71 72 73 74 75 76 77 78 79] [80 81 82 83 84 85 86 87 88 89] [90 91 92 93 94 95 96 97 98 99]] 5 random samples [9 1 5 6 7] [[90 91 92 93 94 95 96 97 98 99] [10 11 12 13 14 15 16 17 18 19] [50 51 52 53 54 55 56 57 58 59] [60 61 62 63 64 65 66 67 68 69] [70 71 72 73 74 75 76 77 78 79]] ```", "best_answers_score":0.7515, "library_name":"tensorflow", "question_url":"https:\/\/stackoverflow.com\/questions\/40994583\/how-to-implement-tensorflows-next-batch-for-own-data", 
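The `next_batch` above resamples independently on every call, so within one pass some rows may repeat while others are skipped. A common variant, sketched here with numpy (the function name is mine), shuffles once per epoch and then walks through the data in order:

```python
import numpy as np

def epoch_batches(num, data, labels, seed=None):
    """Yield (data, labels) batches that cover each sample exactly once."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(data))
    for start in range(0, len(data), num):
        batch = idx[start:start + num]
        yield data[batch], labels[batch]

Xtr = np.arange(10)
Ytr = np.arange(100).reshape(10, 10)
batches = list(epoch_batches(4, Xtr, Ytr, seed=0))
```

Because both arrays are indexed with the same permutation, samples and labels stay aligned, just as in the answer's version.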
"best_answers_votes":34, "question_length":516, "response_length":1422 }, { "question":"Python \/ Tensorflow - Input to reshape is a tensor with 92416 values, but the requested shape requires a multiple of 2304 I have the following code portion for a convolutional neural network: ``` import numpy as np import matplotlib.pyplot as plt import cifar_tools import tensorflow as tf data, labels = cifar_tools.read_data('C:\\\\Users\\\\abc\\\\Desktop\\\\temp') x = tf.placeholder(tf.float32, [None, 150 * 150]) y = tf.placeholder(tf.float32, [None, 2]) w1 = tf.Variable(tf.random_normal([5, 5, 1, 64])) b1 = tf.Variable(tf.random_normal([64])) w2 = tf.Variable(tf.random_normal([5, 5, 64, 64])) b2 = tf.Variable(tf.random_normal([64])) w3 = tf.Variable(tf.random_normal([6*6*64, 1024])) b3 = tf.Variable(tf.random_normal([1024])) w_out = tf.Variable(tf.random_normal([1024, 2])) b_out = tf.Variable(tf.random_normal([2])) def conv_layer(x,w,b): conv = tf.nn.conv2d(x,w,strides=[1,1,1,1], padding = 'SAME') conv_with_b = tf.nn.bias_add(conv,b) conv_out = tf.nn.relu(conv_with_b) return conv_out def maxpool_layer(conv,k=2): return tf.nn.max_pool(conv, ksize=[1,k,k,1], strides=[1,k,k,1], padding='SAME') def model(): x_reshaped = tf.reshape(x, shape=[-1,150,150,1]) conv_out1 = conv_layer(x_reshaped, w1, b1) maxpool_out1 = maxpool_layer(conv_out1) norm1 = tf.nn.lrn(maxpool_out1, 4, bias=1.0, alpha=0.001\/9.0, beta=0.75) conv_out2 = conv_layer(norm1, w2, b2) maxpool_out2 = maxpool_layer(conv_out2) norm2 = tf.nn.lrn(maxpool_out2, 4, bias=1.0, alpha=0.001\/9.0, beta=0.75) maxpool_reshaped = tf.reshape(maxpool_out2, [-1,w3.get_shape().as_list()[0]]) local = tf.add(tf.matmul(maxpool_reshaped, w3), b3) local_out = tf.nn.relu(local) out = tf.add(tf.matmul(local_out, w_out), b_out) return out model_op = model() cost = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(model_op, y)) train_op = tf.train.AdamOptimizer(learning_rate=0.001).minimize(cost) correct_pred = tf.equal(tf.argmax(model_op, 
1), tf.argmax(y,1)) accuracy = tf.reduce_mean(tf.cast(correct_pred,tf.float32)) ``` I'm reading 150x150 grayscale images, but couldn't understand the following error I'm having: ``` EPOCH 0 Traceback (most recent call last): File \"C:\\Python35\\lib\\site-packages\\tensorflow\\python\\client\\session.py\", line 1021, in _do_call return fn(*args) File \"C:\\Python35\\lib\\site-packages\\tensorflow\\python\\client\\session.py\", line 1003, in _run_fn status, run_metadata) File \"C:\\Python35\\lib\\contextlib.py\", line 66, in __exit__ next(self.gen) File \"C:\\Python35\\lib\\site-packages\\tensorflow\\python\\framework\\errors_impl.py\", line 469, in raise_exception_on_not_ok_status pywrap_tensorflow.TF_GetCode(status)) tensorflow.python.framework.errors_impl.InvalidArgumentError: Input to reshape is a tensor with 92416 values, but the requested shape requires a multiple of 2304 [[Node: Reshape_1 = Reshape[T=DT_FLOAT, Tshape=DT_INT32, _device=\"\/job:localhost\/replica:0\/task:0\/cpu:0\"](MaxPool_1, Reshape_1\/shape)]] During handling of the above exception, another exception occurred: Traceback (most recent call last): File \"cnn.py\", line 70, in _, accuracy_val = sess.run([train_op, accuracy], feed_dict={x: batch_data, y: batch_onehot_vals}) File \"C:\\Python35\\lib\\site-packages\\tensorflow\\python\\client\\session.py\", line 766, in run run_metadata_ptr) File \"C:\\Python35\\lib\\site-packages\\tensorflow\\python\\client\\session.py\", line 964, in _run feed_dict_string, options, run_metadata) File \"C:\\Python35\\lib\\site-packages\\tensorflow\\python\\client\\session.py\", line 1014, in _do_run target_list, options, run_metadata) File \"C:\\Python35\\lib\\site-packages\\tensorflow\\python\\client\\session.py\", line 1034, in _do_call raise type(e)(node_def, op, message) tensorflow.python.framework.errors_impl.InvalidArgumentError: Input to reshape is a tensor with 92416 values, but the requested shape requires a multiple of 2304 [[Node: Reshape_1 = 
Reshape[T=DT_FLOAT, Tshape=DT_INT32, _device=\"\/job:localhost\/replica:0\/task:0\/cpu:0\"](MaxPool_1, Reshape_1\/shape)]] Caused by op 'Reshape_1', defined at: File \"cnn.py\", line 50, in model_op = model() File \"cnn.py\", line 43, in model maxpool_reshaped = tf.reshape(maxpool_out2, [-1,w3.get_shape().as_list()[0]]) File \"C:\\Python35\\lib\\site-packages\\tensorflow\\python\\ops\\gen_array_ops.py\", line 2448, in reshape name=name) File \"C:\\Python35\\lib\\site-packages\\tensorflow\\python\\framework\\op_def_library.py\", line 759, in apply_op op_def=op_def) File \"C:\\Python35\\lib\\site-packages\\tensorflow\\python\\framework\\ops.py\", line 2240, in create_op original_op=self._default_original_op, op_def=op_def) File \"C:\\Python35\\lib\\site-packages\\tensorflow\\python\\framework\\ops.py\", line 1128, in __init__ self._traceback = _extract_stack() InvalidArgumentError (see above for traceback): Input to reshape is a tensor with 92416 values, but the requested shape requires a multiple of 2304 [[Node: Reshape_1 = Reshape[T=DT_FLOAT, Tshape=DT_INT32, _device=\"\/job:localhost\/replica:0\/task:0\/cpu:0\"](MaxPool_1, Reshape_1\/shape)]] ``` EDIT-1 Got this new error after modifying based on those edits: ``` x_reshaped = tf.reshape(x, shape=[-1,150,150,1]) batch_size = x_reshaped.get_shape().as_list()[0] ... Same code as above ... 
maxpool_reshaped = tf.reshape(maxpool_out2, [batch_size, -1]) ``` Error: ``` Traceback (most recent call last): File \"cnn.py\", line 52, in model_op = model() File \"cnn.py\", line 45, in model maxpool_reshaped = tf.reshape(maxpool_out2, [batch_size, -1]) File \"C:\\Python35\\lib\\site-packages\\tensorflow\\python\\ops\\gen_array_ops.py\", line 2448, in reshape name=name) File \"C:\\Python35\\lib\\site-packages\\tensorflow\\python\\framework\\op_def_library.py\", line 493, in apply_op raise err File \"C:\\Python35\\lib\\site-packages\\tensorflow\\python\\framework\\op_def_library.py\", line 490, in apply_op preferred_dtype=default_dtype) File \"C:\\Python35\\lib\\site-packages\\tensorflow\\python\\framework\\ops.py\", line 669, in convert_to_tensor ret = conversion_func(value, dtype=dtype, name=name, as_ref=as_ref) File \"C:\\Python35\\lib\\site-packages\\tensorflow\\python\\framework\\constant_op.py\", line 176, in _constant_tensor_conversion_function return constant(v, dtype=dtype, name=name) File \"C:\\Python35\\lib\\site-packages\\tensorflow\\python\\framework\\constant_op.py\", line 165, in constant tensor_util.make_tensor_proto(value, dtype=dtype, shape=shape, verify_shape=verify_shape)) File \"C:\\Python35\\lib\\site-packages\\tensorflow\\python\\framework\\tensor_util.py\", line 441, in make_tensor_proto tensor_proto.string_val.extend([compat.as_bytes(x) for x in proto_values]) File \"C:\\Python35\\lib\\site-packages\\tensorflow\\python\\framework\\tensor_util.py\", line 441, in tensor_proto.string_val.extend([compat.as_bytes(x) for x in proto_values]) File \"C:\\Python35\\lib\\site-packages\\tensorflow\\python\\util\\compat.py\", line 65, in as_bytes (bytes_or_text,)) TypeError: Expected binary or unicode string, got None ``` EDIT-2 After doing the following edits (in addtion to removing batch_size: ``` w3 = tf.Variable(tf.random_normal([361, 256])) ... ... 
w_out = tf.Variable(tf.random_normal([256, 2])) ``` I'm having the following error: ``` EPOCH 0 W c:\\tf_jenkins\\home\\workspace\\release-win\\device\\cpu\\os\\windows\\tensorflow\\core\\framework\\op_kernel.cc:975] Invalid argument: logits and labels must be same size: logits_size=[256,2] labels_size=[1,2] [[Node: SoftmaxCrossEntropyWithLogits = SoftmaxCrossEntropyWithLogits[T=DT_FLOAT, _device=\"\/job:localhost\/replica:0\/task:0\/cpu:0\"](Reshape_2, Reshape_3)]] Traceback (most recent call last): File \"C:\\Python35\\lib\\site-packages\\tensorflow\\python\\client\\session.py\", line 1021, in _do_call return fn(*args) File \"C:\\Python35\\lib\\site-packages\\tensorflow\\python\\client\\session.py\", line 1003, in _run_fn status, run_metadata) File \"C:\\Python35\\lib\\contextlib.py\", line 66, in __exit__ next(self.gen) File \"C:\\Python35\\lib\\site-packages\\tensorflow\\python\\framework\\errors_impl.py\", line 469, in raise_exception_on_not_ok_status pywrap_tensorflow.TF_GetCode(status)) tensorflow.python.framework.errors_impl.InvalidArgumentError: logits and labels must be same size: logits_size=[256,2] labels_size=[1,2] [[Node: SoftmaxCrossEntropyWithLogits = SoftmaxCrossEntropyWithLogits[T=DT_FLOAT, _device=\"\/job:localhost\/replica:0\/task:0\/cpu:0\"](Reshape_2, Reshape_3)]] During handling of the above exception, another exception occurred: Traceback (most recent call last): File \"cnn.py\", line 73, in _, accuracy_val = sess.run([train_op, accuracy], feed_dict={x: batch_data, y: batch_onehot_vals}) File \"C:\\Python35\\lib\\site-packages\\tensorflow\\python\\client\\session.py\", line 766, in run run_metadata_ptr) File \"C:\\Python35\\lib\\site-packages\\tensorflow\\python\\client\\session.py\", line 964, in _run feed_dict_string, options, run_metadata) File \"C:\\Python35\\lib\\site-packages\\tensorflow\\python\\client\\session.py\", line 1014, in _do_run target_list, options, run_metadata) File 
\"C:\\Python35\\lib\\site-packages\\tensorflow\\python\\client\\session.py\", line 1034, in _do_call raise type(e)(node_def, op, message) tensorflow.python.framework.errors_impl.InvalidArgumentError: logits and labels must be same size: logits_size=[256,2] labels_size=[1,2] [[Node: SoftmaxCrossEntropyWithLogits = SoftmaxCrossEntropyWithLogits[T=DT_FLOAT, _device=\"\/job:localhost\/replica:0\/task:0\/cpu:0\"](Reshape_2, Reshape_3)]] Caused by op 'SoftmaxCrossEntropyWithLogits', defined at: File \"cnn.py\", line 55, in cost = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(model_op, y)) File \"C:\\Python35\\lib\\site-packages\\tensorflow\\python\\ops\\nn_ops.py\", line 1449, in softmax_cross_entropy_with_logits precise_logits, labels, name=name) File \"C:\\Python35\\lib\\site-packages\\tensorflow\\python\\ops\\gen_nn_ops.py\", line 2265, in _softmax_cross_entropy_with_logits features=features, labels=labels, name=name) File \"C:\\Python35\\lib\\site-packages\\tensorflow\\python\\framework\\op_def_library.py\", line 759, in apply_op op_def=op_def) File \"C:\\Python35\\lib\\site-packages\\tensorflow\\python\\framework\\ops.py\", line 2240, in create_op original_op=self._default_original_op, op_def=op_def) File \"C:\\Python35\\lib\\site-packages\\tensorflow\\python\\framework\\ops.py\", line 1128, in __init__ self._traceback = _extract_stack() InvalidArgumentError (see above for traceback): logits and labels must be same size: logits_size=[256,2] labels_size=[1,2] [[Node: SoftmaxCrossEntropyWithLogits = SoftmaxCrossEntropyWithLogits[T=DT_FLOAT, _device=\"\/job:localhost\/replica:0\/task:0\/cpu:0\"](Reshape_2, Reshape_3)]] ``` EDIT-3 This is how the binary (pickled) file looks like [label, filename, data]: ``` [array([1, 1, 1, 1, 1, 1, 1, 1, 1, 1]), array(['1.jpg', '10.jpg', '2.jpg', '3.jpg', '4.jpg', '5.jpg', '6.jpg', '7.jpg', '8.jpg', '9.jpg'], dtype=' 200) than you\u2019ve originally specified. tf.nn.dynamic_rnn solves this. 
It uses a tf.while_loop to dynamically construct the graph when it is executed. That means graph creation is faster and you can feed batches of variable size.", "best_answers_score":0.7487, "library_name":"tensorflow", "question_url":"https:\/\/stackoverflow.com\/questions\/39734146\/whats-the-difference-between-tensorflow-dynamic-rnn-and-rnn", "best_answers_votes":49, "question_length":319, "response_length":607 }, { "question":"How to pip install old version of library(tensorflow)? I'm trying to install tensorflow r0.11. I tried ``` pip install tensorflow==r0.11 pip install tensorflow\", line 1, in IOError: [Errno 2] No such file or directory: '\/private\/var\/folders\/1p\/7km73m0s2cvdfb1js3ct8_mh0000gn\/T\/pip-JMMIRP-build\/setup.py' ---------------------------------------- Command \"python setup.py egg_info\" failed with error code 1 in \/private\/var\/folders\/1p\/7km73m0s2cvdfb1js3ct8_mh0000gn\/T\/pip-JMMIRP-build\/ ```", "response":"You can install the pip wheel from a URL directly, for example: ``` # Ubuntu\/Linux 64-bit, CPU only, Python 2.7 export TF_BINARY_URL=https:\/\/storage.googleapis.com\/tensorflow\/linux\/cpu\/tensorflow-0.11.0-cp27-none-linux_x86_64.whl pip install --ignore-installed --upgrade $TF_BINARY_URL ``` In general, installation instructions for older versions of TensorFlow can be found at : For binaries for installation using wheels: go to the tensorflow PyPI release history, select the release of your choice, say tensorflow 1.8.0, go to Download files, and either download the wheel file and install it, or copy the download link and save it in TF_BINARY_URL for your Python version and OS (Mac, Linux, or Windows), then install as shown above", "best_answers_score":0.7478, "library_name":"tensorflow", "question_url":"https:\/\/stackoverflow.com\/questions\/41937915\/how-to-pip-install-old-version-of-librarytensorflow", "best_answers_votes":21, "question_length":487, "response_length":724 }, { "question":"cannot import name '_registerMatType' from
'cv2.cv2' I got the error message below when I ran model_main_tf2.py on the Object Detection API: ``` Traceback (most recent call last): File \"\/content\/models\/research\/object_detection\/model_main_tf2.py\", line 32, in from object_detection import model_lib_v2 File \"\/usr\/local\/lib\/python3.7\/dist-packages\/object_detection\/model_lib_v2.py\", line 29, in from object_detection import eval_util File \"\/usr\/local\/lib\/python3.7\/dist-packages\/object_detection\/eval_util.py\", line 36, in from object_detection.metrics import lvis_evaluation File \"\/usr\/local\/lib\/python3.7\/dist-packages\/object_detection\/metrics\/lvis_evaluation.py\", line 23, in from lvis import results as lvis_results File \"\/usr\/local\/lib\/python3.7\/dist-packages\/lvis\/__init__.py\", line 5, in from lvis.vis import LVISVis File \"\/usr\/local\/lib\/python3.7\/dist-packages\/lvis\/vis.py\", line 1, in import cv2 File \"\/usr\/local\/lib\/python3.7\/dist-packages\/cv2\/__init__.py\", line 9, in from .cv2 import _registerMatType ImportError: cannot import name '_registerMatType' from 'cv2.cv2' (\/usr\/local\/lib\/python3.7\/dist-packages\/cv2\/cv2.cpython-37m-x86_64-linux-gnu.so) ``` The weird thing is that the same code worked well before, but now it gives me an error.", "response":"The same thing occurred to me yesterday when I used Colab. A possible reason may be that the version of opencv-python (4.1.2.30) does not match opencv-python-headless (4.5.5.62). Or the latest version, 4.5.5, may have something wrong...
I uninstalled opencv-python-headless==4.5.5.62 and installed 4.1.2.30 and it fixed.", "best_answers_score":0.7469, "library_name":"tensorflow", "question_url":"https:\/\/stackoverflow.com\/questions\/70537488\/cannot-import-name-registermattype-from-cv2-cv2", "best_answers_votes":76, "question_length":1253, "response_length":316 }, { "question":"How to install libcusolver.so.11 I am trying to install Tensorflow but it is asking for libcusolver.so.11 and I only have libcusolver.so.10. Can someone tell me what I am doing wrong Here are my Ubuntu, nvidia and CUDA versions ``` $ uname -a $ Linux *****-dev-01 5.4.0-42-generic #46-Ubuntu SMP Fri Jul 10 00:24:02 UTC 2020 x86_64 x86_64 x86_64 GNU\/Linux $nvidia-smi --query-gpu=gpu_name --format=csv|tail -n 1 GeForce GTX 1650 $ nvcc --version nvcc: NVIDIA (R) Cuda compiler driver Copyright (c) 2005-2020 NVIDIA Corporation Built on Thu_Jun_11_22:26:38_PDT_2020 Cuda compilation tools, release 11.0, V11.0.194 Build cuda_11.0_bu.TC445_37.28540450_0 ``` Here is how I am building tensorflow ``` $git clone https:\/\/github.com\/tensorflow\/tensorflow.git $cd .\/tensorflow $git checkout tags\/v2.2.0 $.\/configure $bazel build --config=v2 --config=cuda --config=monolithic --copt=-mavx --copt=-mavx2 --copt=-mfma --copt=-msse4.1 --copt=-msse4.2 --copt=-Wno-sign-compare \/\/ tensorflow:libtensorflow_cc.so ``` Here is the error I am receiving ``` ERROR: An error occurred during the fetch of repository 'local_config_cuda': Traceback (most recent call last): File \"\/home\/********\/Documents\/foo\/.temp_install_dir\/tensorflow\/tensorflow\/third_party\/gpus\/cuda_configure.bzl\", line 1210 _create_local_cuda_repository() File \"\/home\/********\/Documents\/foo\/.temp_install_dir\/tensorflow\/tensorflow\/third_party\/gpus\/cuda_configure.bzl\", line 934, in _create_local_cuda_repository _find_libs(repository_ctx, ) File \"\/home\/********\/Documents\/foo\/.temp_install_dir\/tensorflow\/tensorflow\/third_party\/gpus\/cuda_configure.bzl\", 
line 577, in _find_libs _check_cuda_libs(repository_ctx, ) File \"\/home\/********\/Documents\/foo\/.temp_install_dir\/tensorflow\/tensorflow\/third_party\/gpus\/cuda_configure.bzl\", line 479, in _check_cuda_libs execute(repository_ctx, ) File \"\/home\/********\/Documents\/foo\/.temp_install_dir\/tensorflow\/tensorflow\/third_party\/remote_config\/common.bzl\", line 208, in execute fail() Repository command failed No library found under: \/usr\/local\/cuda\/lib64\/libcusolver.so.11 ERROR: Skipping '\/\/tensorflow:libtensorflow_cc.so': no such package '@local_config_cuda\/\/cuda': Traceback (most recent call last): File \"\/home\/********\/Documents\/foo\/.temp_install_dir\/tensorflow\/tensorflow\/third_party\/gpus\/cuda_configure.bzl\", line 1210 _create_local_cuda_repository() File \"\/home\/********\/Documents\/foo\/.temp_install_dir\/tensorflow\/tensorflow\/third_party\/gpus\/cuda_configure.bzl\", line 934, in _create_local_cuda_repository _find_libs(repository_ctx, ) File \"\/home\/********\/Documents\/foo\/.temp_install_dir\/tensorflow\/tensorflow\/third_party\/gpus\/cuda_configure.bzl\", line 577, in _find_libs _check_cuda_libs(repository_ctx, ) File \"\/home\/********\/Documents\/foo\/.temp_install_dir\/tensorflow\/tensorflow\/third_party\/gpus\/cuda_configure.bzl\", line 479, in _check_cuda_libs execute(repository_ctx, ) File \"\/home\/********\/Documents\/foo\/.temp_install_dir\/tensorflow\/tensorflow\/third_party\/remote_config\/common.bzl\", line 208, in execute fail() Repository command failed No library found under: \/usr\/local\/cuda\/lib64\/libcusolver.so.11 WARNING: Target pattern parsing failed. 
ERROR: no such package '@local_config_cuda\/\/cuda': Traceback (most recent call last): File \"\/home\/********\/Documents\/foo\/.temp_install_dir\/tensorflow\/tensorflow\/third_party\/gpus\/cuda_configure.bzl\", line 1210 _create_local_cuda_repository() File \"\/home\/********\/Documents\/foo\/.temp_install_dir\/tensorflow\/tensorflow\/third_party\/gpus\/cuda_configure.bzl\", line 934, in _create_local_cuda_repository _find_libs(repository_ctx, ) File \"\/home\/********\/Documents\/foo\/.temp_install_dir\/tensorflow\/tensorflow\/third_party\/gpus\/cuda_configure.bzl\", line 577, in _find_libs _check_cuda_libs(repository_ctx, ) File \"\/home\/********\/Documents\/foo\/.temp_install_dir\/tensorflow\/tensorflow\/third_party\/gpus\/cuda_configure.bzl\", line 479, in _check_cuda_libs execute(repository_ctx, ) File \"\/home\/********\/Documents\/foo\/.temp_install_dir\/tensorflow\/tensorflow\/third_party\/remote_config\/common.bzl\", line 208, in execute fail() Repository command failed No library found under: \/usr\/local\/cuda\/lib64\/libcusolver.so.11 INFO: Elapsed time: 1.998s INFO: 0 processes. 
FAILED: Build did NOT complete successfully (0 packages loaded) currently loading: tensorflow NORMAL test.log ```", "response":"If you want a concrete solution, just find libcusolver.so.10 on your machine and create a link to it named libcusolver.so.11. The following command solved the issue for me: ```sh sudo ln -s \/usr\/local\/cuda-11.0\/targets\/x86_64-linux\/lib\/libcusolver.so.10 \/usr\/local\/cuda-11.0\/targets\/x86_64-linux\/lib\/libcusolver.so.11 ``` Credit to: https:\/\/github.com\/tensorflow\/tensorflow\/issues\/43947", "best_answers_score":0.7469, "library_name":"tensorflow", "question_url":"https:\/\/stackoverflow.com\/questions\/63199164\/how-to-install-libcusolver-so-11", "best_answers_votes":40, "question_length":4307, "response_length":369 }, { "question":"Negative dimension size caused by subtracting 3 from 1 for 'conv2d_2\/convolution' I got this error message when declaring the input layer in Keras. ValueError: Negative dimension size caused by subtracting 3 from 1 for 'conv2d_2\/convolution' (op: 'Conv2D') with input shapes: [?,1,28,28], [3,3,28,32]. My code is like this ``` model.add(Convolution2D(32, 3, 3, activation='relu', input_shape=(1,28,28))) ``` Sample application: https:\/\/github.com\/IntellijSys\/tensorflow\/blob\/master\/Keras.ipynb", "response":"I had the same problem, however the solution provided in this thread did not help me.
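As an aside on the libcusolver answer above: the `ln -s` fix can be rehearsed safely in a scratch directory before touching the real CUDA tree (the paths below are throwaway stand-ins, not the actual library locations):

```shell
# Rehearse the symlink fix in a throwaway directory first; the real paths
# live under /usr/local/cuda-11.0/targets/x86_64-linux/lib as in the answer.
tmp=$(mktemp -d)
touch "$tmp/libcusolver.so.10"    # stand-in for the library you do have
ln -s "$tmp/libcusolver.so.10" "$tmp/libcusolver.so.11"

# test -e follows the link, so this reports "no" if the link is dangling
link_ok=$(test -e "$tmp/libcusolver.so.11" && echo yes || echo no)
rm -rf "$tmp"
```

Checking that the link resolves before rebuilding avoids a second long bazel run against a dangling symlink.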
In my case it was a different problem that caused this error: Code: ``` imageSize=32 classifier=Sequential() classifier.add(Conv2D(64, (3, 3), input_shape = (imageSize, imageSize, 3), activation = 'relu')) classifier.add(MaxPooling2D(pool_size = (2, 2))) classifier.add(Conv2D(64, (3, 3), activation = 'relu')) classifier.add(MaxPooling2D(pool_size = (2, 2))) classifier.add(Conv2D(64, (3, 3), activation = 'relu')) classifier.add(MaxPooling2D(pool_size = (2, 2))) classifier.add(Conv2D(64, (3, 3), activation = 'relu')) classifier.add(MaxPooling2D(pool_size = (2, 2))) classifier.add(Conv2D(64, (3, 3), activation = 'relu')) classifier.add(MaxPooling2D(pool_size = (2, 2))) classifier.add(Flatten()) ``` Error: The image size is 32 by 32. After the first convolutional layer, we reduced it to 30 by 30 (if I understood convolution correctly). Then the pooling layer halves it, so 15 by 15. Then another convolutional layer reduces it to 13 by 13... I hope you can see where this is going: in the end, my feature map is so small that my pooling layer (or convolution layer) is too big to go over it - and that causes the error. Solution: The easy solution to this error is to either make the image size bigger or use fewer convolutional or pooling layers.", "best_answers_score":0.7467, "library_name":"tensorflow", "question_url":"https:\/\/stackoverflow.com\/questions\/45645276\/negative-dimension-size-caused-by-subtracting-3-from-1-for-conv2d-2-convolution", "best_answers_votes":41, "question_length":493, "response_length":1337 }, { "question":"What is the difference between np.mean and tf.reduce_mean? In the MNIST beginner tutorial, there is the statement ``` accuracy = tf.reduce_mean(tf.cast(correct_prediction, \"float\")) ``` tf.cast basically changes the type of tensor the object is, but what is the difference between tf.reduce_mean and np.mean? 
Here is the doc on tf.reduce_mean: reduce_mean(input_tensor, reduction_indices=None, keep_dims=False, name=None) input_tensor: The tensor to reduce. Should have numeric type. reduction_indices: The dimensions to reduce. If None (the default), reduces all dimensions. ``` # 'x' is [[1., 1.], # [2., 2.]] tf.reduce_mean(x) ==> 1.5 tf.reduce_mean(x, 0) ==> [1.5, 1.5] tf.reduce_mean(x, 1) ==> [1., 2.] ``` For a 1D vector, it looks like np.mean == tf.reduce_mean, but I don't understand what's happening in tf.reduce_mean(x, 1) ==> [1., 2.]. tf.reduce_mean(x, 0) ==> [1.5, 1.5] kind of makes sense, since the mean of [1, 2] and [1, 2] is [1.5, 1.5], but what's going on with tf.reduce_mean(x, 1)?", "response":"The functionality of numpy.mean and tensorflow.reduce_mean is the same. They do the same thing. From the documentation, for numpy and tensorflow, you can see that. Let's look at an example: ``` c = np.array([[3.,4], [5.,6], [6.,7]]) print(np.mean(c,1)) Mean = tf.reduce_mean(c,1) with tf.Session() as sess: result = sess.run(Mean) print(result) ``` Output ``` [ 3.5 5.5 6.5] [ 3.5 5.5 6.5] ``` Here you can see that when axis (numpy) or reduction_indices (tensorflow) is 1, it computes the mean across (3,4) and (5,6) and (6,7), so 1 defines across which axis the mean is computed. When it is 0, the mean is computed across (3,5,6) and (4,6,7), and so on. I hope you get the idea. Now what are the differences between them? You can compute the numpy operation anywhere in Python. But in order to do a tensorflow operation, it must be done inside a tensorflow Session. You can read more about it here. So when you need to perform any computation for your tensorflow graph (or structure, if you will), it must be done inside a tensorflow Session. Let's look at another example. 
``` npMean = np.mean(c) print(npMean+1) tfMean = tf.reduce_mean(c) Add = tfMean + 1 with tf.Session() as sess: result = sess.run(Add) print(result) ``` We could increase the mean by 1 in numpy as you would naturally, but in order to do it in tensorflow, you need to perform it inside a Session; without a Session you can't do that. In other words, when you are computing tfMean = tf.reduce_mean(c), tensorflow doesn't compute it then. It only computes it in a Session. But numpy computes it instantly, when you write np.mean(). I hope it makes sense.", "best_answers_score":0.7465, "library_name":"tensorflow", "question_url":"https:\/\/stackoverflow.com\/questions\/34236252\/what-is-the-difference-between-np-mean-and-tf-reduce-mean", "best_answers_votes":133, "question_length":998, "response_length":1616 }, { "question":"Training on imbalanced data using TensorFlow The Situation: I am wondering how to use TensorFlow optimally when my training data is imbalanced in label distribution between 2 labels. For instance, suppose the MNIST tutorial is simplified to only distinguish between 1's and 0's, where all images available to us are either 1's or 0's. This is straightforward to train using the provided TensorFlow tutorials when we have roughly 50% of each type of image to train and test on. But what about the case where 90% of the images available in our data are 0's and only 10% are 1's? I observe that in this case, TensorFlow routinely predicts my entire test set to be 0's, achieving an accuracy of a meaningless 90%. One strategy I have used to some success is to pick random batches for training that do have an even distribution of 0's and 1's. This approach ensures that I can still use all of my training data and produced decent results, with less than 90% accuracy, but a much more useful classifier. 
Since accuracy is somewhat useless to me in this case, my metric of choice is typically area under the ROC curve (AUROC), and this produces a result respectably higher than .50. Questions: (1) Is the strategy I have described an accepted or optimal way of training on imbalanced data, or is there one that might work better? (2) Since the accuracy metric is not as useful in the case of imbalanced data, is there another metric that can be maximized by altering the cost function? I can certainly calculate AUROC post-training, but can I train in such a way as to maximize AUROC? (3) Is there some other alteration I can make to my cost function to improve my results for imbalanced data? Currently, I am using a default suggestion given in TensorFlow tutorials: ``` cost = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(pred, y)) optimizer = tf.train.AdamOptimizer(learning_rate=learning_rate).minimize(cost) ``` I have heard this may be possible by up-weighting the cost of miscategorizing the smaller label class, but I am unsure of how to do this.", "response":"(1) It's ok to use your strategy. I'm working with imbalanced data as well, for which I first try down-sampling and up-sampling methods to make the training set evenly distributed, or use an ensemble method to train each classifier on an evenly distributed subset. (2) I haven't seen any method to maximise the AUROC. My thought is that AUROC is based on the true positive and false positive rates, which don't tell how well it works on each instance. Thus, it may not necessarily maximise the capability to separate the classes. 
(3) Regarding weighting the cost by the ratio of class instances, it is similar to Loss function for class imbalanced binary classifier in Tensor flow and the answer.", "best_answers_score":0.7463, "library_name":"tensorflow", "question_url":"https:\/\/stackoverflow.com\/questions\/35049379\/training-on-imbalanced-data-using-tensorflow", "best_answers_votes":8, "question_length":2059, "response_length":687 }, { "question":"ImportError: cannot import name 'get_config' from 'tensorflow.python.eager.context' My notebook was working up till today. At the beginning of my colab notebook I install tf-nightly, but now it is giving me this error: ``` --------------------------------------------------------------------------- ImportError Traceback (most recent call last) in () 7 import tensorflow as tf 8 from tensorflow.keras import datasets, layers, models ----> 9 from keras.preprocessing import image 10 from keras_preprocessing.image import ImageDataGenerator #check underscore or not 11 from tensorflow.keras.preprocessing import image_dataset_from_directory 2 frames \/usr\/local\/lib\/python3.7\/dist-packages\/keras\/backend.py in () 35 from tensorflow.python.distribute import distribute_coordinator as dc 36 from tensorflow.python.distribute import distribute_coordinator_context as dc_context ---> 37 from tensorflow.python.eager.context import get_config 38 from tensorflow.python.framework import config 39 from keras import backend_config ImportError: cannot import name 'get_config' from 'tensorflow.python.eager.context' (\/usr\/local\/lib\/python3.7\/dist-packages\/tensorflow_core\/python\/eager\/context.py) ``` My code: ``` !pip install tf-nightly import tensorflow as tf from tensorflow.keras import datasets, layers, models from keras.preprocessing import image from keras_preprocessing.image import ImageDataGenerator from tensorflow.keras.preprocessing import image_dataset_from_directory from keras.callbacks import Callback, ModelCheckpoint, ReduceLROnPlateau, EarlyStopping 
``` Installing tensorflow==2.1.0 did not work either.", "response":"Instead of: ```py import keras ``` Try: ```py from tensorflow import keras ```", "best_answers_score":0.7461, "library_name":"tensorflow", "question_url":"https:\/\/stackoverflow.com\/questions\/66964492\/importerror-cannot-import-name-get-config-from-tensorflow-python-eager-conte", "best_answers_votes":62, "question_length":1614, "response_length":78 }, { "question":"Limit number of cores used in Keras I have a shared machine with 64 cores on which I have a big pipeline of Keras functions that I want to run. The thing is that it seems that Keras automatically uses all the cores available and I can't do that. I use Python and I want to run 67 neural networks in a for loop. I would like to use half of the available cores. I can't find any way of limiting the number of cores in Keras... Do you have any clue?", "response":"As @Yu-Yang suggested, I used these lines before each fit: ```py from keras import backend as K K.set_session(K.tf.Session(config=K.tf.ConfigProto(intra_op_parallelism_threads=32, inter_op_parallelism_threads=32))) ``` Check the CPU usage (htop) :", "best_answers_score":0.7453, "library_name":"tensorflow", "question_url":"https:\/\/stackoverflow.com\/questions\/46421258\/limit-number-of-cores-used-in-keras", "best_answers_votes":24, "question_length":446, "response_length":247 }, { "question":"WARNING:tensorflow:sample_weight modes were coerced from ... to ['...'] Training an image classifier using .fit_generator() or .fit() and passing a dictionary to class_weight= as an argument. I never got errors in TF1.x but in 2.1 I get the following output when starting training: ```none WARNING:tensorflow:sample_weight modes were coerced from ... to ['...'] ``` What does it mean to coerce something from ... to ['...']? The source for this warning on tensorflow's repo is here, comments placed are: Attempt to coerce sample_weight_modes to the target structure. 
This implicitly depends on the fact that Model flattens outputs for its internal representation.", "response":"This seems like a bogus message. I get the same warning message after upgrading to TensorFlow 2.1, but I do not use any class weights or sample weights at all. I do use a generator that returns a tuple like this: ``` return inputs, targets ``` And now I just changed it to the following to make the warning go away: ``` return inputs, targets, [None] ``` I don't know if this is relevant, but my model uses 3 inputs, so my inputs variable is actually a list of 3 numpy arrays. targets is just a single numpy array. In any case, it's just a warning. The training works fine either way. Edit for TensorFlow 2.2: This bug seems to have been fixed in TensorFlow 2.2, which is great. However the fix above will fail in TF 2.2, because it will try to get the shape of the sample weights, which will obviously fail with AttributeError: 'NoneType' object has no attribute 'shape'. So undo the above fix when upgrading to 2.2.", "best_answers_score":0.7451, "library_name":"tensorflow", "question_url":"https:\/\/stackoverflow.com\/questions\/59317919\/warningtensorflowsample-weight-modes-were-coerced-from-to", "best_answers_votes":23, "question_length":663, "response_length":917 }, { "question":"What does `tf.strided_slice()` do? I am wondering what tf.strided_slice() operator actually does. The doc says, To a first order, this operation extracts a slice of size end - begin from a tensor input starting at the location specified by begin. The slice continues by adding stride to the begin index until all dimensions are not less than end. Note that components of stride can be negative, which causes a reverse slice. 
And in the sample, ``` # 'input' is [[[1, 1, 1], [2, 2, 2]], # [[3, 3, 3], [4, 4, 4]], # [[5, 5, 5], [6, 6, 6]]] tf.slice(input, [1, 0, 0], [2, 1, 3], [1, 1, 1]) ==> [[[3, 3, 3]]] tf.slice(input, [1, 0, 0], [2, 2, 3], [1, 1, 1]) ==> [[[3, 3, 3], [4, 4, 4]]] tf.slice(input, [1, 1, 0], [2, -1, 3], [1, -1, 1]) ==> [[[4, 4, 4], [3, 3, 3]]] ``` So in my understanding of the doc, for the first sample (tf.slice(input, begin=[1, 0, 0], end=[2, 1, 3], strides=[1, 1, 1])), the resulting size is end - begin = [1, 1, 3]. The sample result shows [[[3, 3, 3]]], whose shape is [1, 1, 3], so it seems OK. The first element of the result is at begin = [1, 0, 0]. The first element of the sample result is 3, which is input[1,0,0], so it seems OK. The slice continues by adding stride to the begin index. So the second element of the result should be input[begin + strides] = input[2, 1, 1] = 6, but the sample shows the second element is 3. What does strided_slice() do? (Note: the method names in the samples and the last example are incorrect.)", "response":"I experimented a bit with this method, which gave me some insights that I think might be of some use. Let's say we have a tensor. ```py a = np.array([[[1, 1.2, 1.3], [2, 2.2, 2.3], [7, 7.2, 7.3]], [[3, 3.2, 3.3], [4, 4.2, 4.3], [8, 8.2, 8.3]], [[5, 5.2, 5.3], [6, 6.2, 6.3], [9, 9.2, 9.3]]]) # a.shape = (3, 3, 3) ``` strided_slice() requires 4 arguments, input_, begin, end, strides, to which we pass our a as the input_ argument. As is the case with the tf.slice() method, the begin argument is zero-based and the rest of the arguments shape-based. However, in the docs both begin and end are zero-based. The functionality of the method is quite simple: it works like iterating over a loop, where begin is the location of the element in the tensor at which the loop starts and end is where it stops. ```py tf.strided_slice(a, [0, 0, 0], [3, 3, 3], [1, 1, 1]) # output = the tensor itself tf.strided_slice(a, [0, 0, 0], [3, 3, 3], [2, 2, 2]) # output = [[[ 1. 1.3] # [ 7. 
7.3]] # [[ 5. 5.3] # [ 9. 9.3]]] ``` strides are like the steps over which the loop iterates; here [2,2,2] makes the method produce values starting at (0,0,0), (0,0,2), (0,2,0), (0,2,2), (2,0,0), (2,0,2) ... in the tensor a. ```py tf.strided_slice(input3, [1, 1, 0], [2, -1, 3], [1, 1, 1]) ``` will produce output similar to tf.strided_slice(input3, [1, 1, 0], [2, 2, 3], [1, 1, 1]) as the tensor a has shape = (3,3,3).", "best_answers_score":0.7446, "library_name":"tensorflow", "question_url":"https:\/\/stackoverflow.com\/questions\/41380126\/what-does-tf-strided-slice-do", "best_answers_votes":25, "question_length":1438, "response_length":1379 }, { "question":"Tensorflow: How to get a tensor by name? I'm having trouble recovering a tensor by name, I don't even know if it's possible. I have a function that creates my graph: ``` def create_structure(tf, x, input_size,dropout): with tf.variable_scope(\"scale_1\") as scope: W_S1_conv1 = deep_dive.weight_variable_scaling([7,7,3,64], name='W_S1_conv1') b_S1_conv1 = deep_dive.bias_variable([64]) S1_conv1 = tf.nn.relu(deep_dive.conv2d(x_image, W_S1_conv1,strides=[1, 2, 2, 1], padding='SAME') + b_S1_conv1, name=\"Scale1_first_relu\") . . . return S3_conv1,regularizer ``` I want to access the variable S1_conv1 outside this function. I tried: ``` with tf.variable_scope('scale_1') as scope_conv: tf.get_variable_scope().reuse_variables() ft=tf.get_variable('Scale1_first_relu') ``` But that is giving me an error: ValueError: Under-sharing: Variable scale_1\/Scale1_first_relu does not exist, disallowed. Did you mean to set reuse=None in VarScope? But this works: ``` with tf.variable_scope('scale_1') as scope_conv: tf.get_variable_scope().reuse_variables() ft=tf.get_variable('W_S1_conv1') ``` I can get around this with ``` return S3_conv1,regularizer, S1_conv1 ``` but I don't want to do that. I think my problem is that S1_conv1 is not really a variable, it's just a tensor. 
Is there a way to do what I want?", "response":"There is a function tf.Graph.get_tensor_by_name(). For instance: ``` import tensorflow as tf c = tf.constant([[1.0, 2.0], [3.0, 4.0]]) d = tf.constant([[1.0, 1.0], [0.0, 1.0]]) e = tf.matmul(c, d, name='example') with tf.Session() as sess: test = sess.run(e) print e.name #example:0 test = tf.get_default_graph().get_tensor_by_name(\"example:0\") print test #Tensor(\"example:0\", shape=(2, 2), dtype=float32) ```", "best_answers_score":0.7442, "library_name":"tensorflow", "question_url":"https:\/\/stackoverflow.com\/questions\/36612512\/tensorflow-how-to-get-a-tensor-by-name", "best_answers_votes":58, "question_length":1300, "response_length":409 }, { "question":"Efficient element-wise multiplication of a matrix and a vector in TensorFlow What would be the most efficient way to multiply (element-wise) a 2D tensor (matrix): ```none x11 x12 .. x1N ... xM1 xM2 .. xMN ``` by a vertical vector: ```none w1 ... wN ``` to obtain a new matrix: ```none x11*w1 x12*w2 ... x1N*wN ... xM1*w1 xM2*w2 ... xMN*wN ``` To give some context, we have M data samples in a batch that can be processed in parallel, and each N-element sample must be multiplied by weights w stored in a variable to eventually pick the largest Xij*wj for each row i.", "response":"The simplest code to do this relies on the broadcasting behavior of tf.multiply()*, which is based on numpy's broadcasting behavior: ``` x = tf.constant(5.0, shape=[5, 6]) w = tf.constant([0.0, 1.0, 2.0, 3.0, 4.0, 5.0]) xw = tf.multiply(x, w) max_in_rows = tf.reduce_max(xw, 1) sess = tf.Session() print sess.run(xw) # ==> [[0.0, 5.0, 10.0, 15.0, 20.0, 25.0], # [0.0, 5.0, 10.0, 15.0, 20.0, 25.0], # [0.0, 5.0, 10.0, 15.0, 20.0, 25.0], # [0.0, 5.0, 10.0, 15.0, 20.0, 25.0], # [0.0, 5.0, 10.0, 15.0, 20.0, 25.0]] print sess.run(max_in_rows) # ==> [25.0, 25.0, 25.0, 25.0, 25.0] ``` * In older versions of TensorFlow, tf.multiply() was called tf.mul(). You can also use the * operator (i.e. 
xw = x * w) to perform the same operation.", "best_answers_score":0.7415, "library_name":"tensorflow", "question_url":"https:\/\/stackoverflow.com\/questions\/34192229\/efficient-element-wise-multiplication-of-a-matrix-and-a-vector-in-tensorflow", "best_answers_votes":45, "question_length":566, "response_length":731 }, { "question":"Is Tensorflow compatible with a Windows workflow? I haven't seen anything about Windows compatibility -- is this on the way or currently available somewhere if I put forth some effort? (I have a Mac and an Ubuntu box but the Windows machine is the one with the discrete graphics card that I currently use with theano).", "response":"Updated 11\/28\/2016: Today we released the first release candidate of TensorFlow 0.12, which includes support for Windows. You can install the Python bindings using the following command in a Python shell: ``` C:\\> pip install tensorflow ``` ...or, if you want GPU support: ``` C:\\> pip install tensorflow-gpu ``` You can also build TensorFlow yourself using Microsoft Visual C++ and NVCC (for the CUDA parts). The easiest way to build on Windows is currently to use the CMake build, and we will soon provide support for Bazel on Windows. Previous answer: We haven't tried to build TensorFlow on Windows so far: the only supported platforms are Linux (Ubuntu) and Mac OS X, and we've only built binaries for those platforms. For now, on Windows, the easiest way to get started with TensorFlow would be to use Docker: http:\/\/tensorflow.org\/get_started\/os_setup.md#docker-based_installation It should become easier to add Windows support when Bazel (the build system we are using) adds support for building on Windows, which is on the roadmap for Bazel 0.3. You can see the full Bazel roadmap here. 
In the meantime, you can follow issue 17 on the TensorFlow GitHub page.", "best_answers_score":0.7411, "library_name":"tensorflow", "question_url":"https:\/\/stackoverflow.com\/questions\/33616094\/is-tensorflow-compatible-with-a-windows-workflow", "best_answers_votes":63, "question_length":318, "response_length":1167 }, { "question":"Could not load dynamic library 'libnvinfer.so.6' I am trying to normally import the TensorFlow python package, but I get the following error: Here is the text from the above terminal image: ``` 2020-02-23 19:01:06.163940: W tensorflow\/stream_executor\/platform\/default\/dso_loader.cc:55] Could not load dynamic library 'libnvinfer.so.6'; dlerror: libnvinfer.so.6: cannot open shared object file: No such file or directory 2020-02-23 19:01:06.164019: W tensorflow\/stream_executor\/platform\/default\/dso_loader.cc:55] Could not load dynamic library 'libnvinfer_plugin.so.6'; dlerror: libnvinfer_plugin.so.6: cannot open shared object file: No such file or directory 2020-02-23 19:01:06.164030: W tensorflow\/compiler\/tf2tensorrt\/utils\/py_utils.cc:30] Cannot dlopen some TensorRT libraries. If you would like to use Nvidia GPU with TensorRT, please make sure the missing libraries mentioned above are installed properly. 17 x: mnist.test.images, y_: mnist.test.labels, keep_prob: 1.0}) \/root\/anaconda\/lib\/python2.7\/site-packages\/tensorflow\/python\/framework\/ops.pyc in eval(self, feed_dict, session) 403 404 \"\"\" --> 405 return _eval_using_default_session(self, feed_dict, self.graph, session) 406 407 \/root\/anaconda\/lib\/python2.7\/site-packages\/tensorflow\/python\/framework\/ops.pyc in _eval_using_default_session(tensors, feed_dict, graph, session) 2712 session = get_default_session() 2713 if session is None: -> 2714 raise ValueError(\"Cannot evaluate tensor using eval(): No default \" 2715 \"session is registered. 
Use 'with \" 2716 \"DefaultSession(sess)' or pass an explicit session to \" ValueError: Cannot evaluate tensor using eval(): No default session is registered. Use 'with DefaultSession(sess)' or pass an explicit session to eval(session=sess) ``` I thought that I may need to install or reinstall TensorFlow via conda install https:\/\/storage.googleapis.com\/tensorflow\/linux\/cpu\/tensorflow-0.5.0-cp27-none-linux_x86_64.whl but conda doesn't even know how to install it. Does anyone have any idea of how to work around this error?", "response":"I figured it out. As you see in the value error, it says No default session is registered. Use 'with DefaultSession(sess)' or pass an explicit session to eval(session=sess) so the answer I came up with is to pass an explicit session to eval, just like it says. Here is where I made the changes. ``` if i%100 == 0: train_accuracy = accuracy.eval(session=sess, feed_dict={x:batch[0], y_: batch[1], keep_prob: 1.0}) ``` And ``` train_step.run(session=sess, feed_dict={x: batch[0], y_: batch[1], keep_prob: 0.5}) ``` Now the code is working fine.", "best_answers_score":0.7399, "library_name":"tensorflow", "question_url":"https:\/\/stackoverflow.com\/questions\/33785936\/tensorflow-error-found-in-tutorial", "best_answers_votes":31, "question_length":2709, "response_length":542 }, { "question":"How would I implement k-means with TensorFlow? The intro tutorial, which uses the built-in gradient descent optimizer, makes a lot of sense. However, k-means isn't just something I can plug into gradient descent. It seems like I'd have to write my own sort of optimizer, but I'm not quite sure how to do that given the TensorFlow primitives. What approach should I take?", "response":"(note: You can now get a more polished version of this code as a gist on github.) you can definitely do it, but you need to define your own optimization criteria (for k-means, it's usually a max iteration count and when the assignment stabilizes). 
Here's an example of how you might do it (there are probably more optimal ways to implement it, and definitely better ways to select the initial points). It's basically like you'd do it in numpy if you were trying really hard to stay away from doing things iteratively in python: ```py import tensorflow as tf import numpy as np import time N=10000 K=4 MAX_ITERS = 1000 start = time.time() points = tf.Variable(tf.random_uniform([N,2])) cluster_assignments = tf.Variable(tf.zeros([N], dtype=tf.int64)) # Silly initialization: Use the first two points as the starting # centroids. In the real world, do this better. centroids = tf.Variable(tf.slice(points.initialized_value(), [0,0], [K,2])) # Replicate to N copies of each centroid and K copies of each # point, then subtract and compute the sum of squared distances. rep_centroids = tf.reshape(tf.tile(centroids, [N, 1]), [N, K, 2]) rep_points = tf.reshape(tf.tile(points, [1, K]), [N, K, 2]) sum_squares = tf.reduce_sum(tf.square(rep_points - rep_centroids), reduction_indices=2) # Use argmin to select the lowest-distance point best_centroids = tf.argmin(sum_squares, 1) did_assignments_change = tf.reduce_any(tf.not_equal(best_centroids, cluster_assignments)) def bucket_mean(data, bucket_ids, num_buckets): total = tf.unsorted_segment_sum(data, bucket_ids, num_buckets) count = tf.unsorted_segment_sum(tf.ones_like(data), bucket_ids, num_buckets) return total \/ count means = bucket_mean(points, best_centroids, K) # Do not write to the assigned clusters variable until after # computing whether the assignments have changed - hence with_dependencies with tf.control_dependencies([did_assignments_change]): do_updates = tf.group( centroids.assign(means), cluster_assignments.assign(best_centroids)) sess = tf.Session() sess.run(tf.initialize_all_variables()) changed = True iters = 0 while changed and iters < MAX_ITERS: iters += 1 [changed, _] = sess.run([did_assignments_change, do_updates]) [centers, assignments] = sess.run([centroids, 
cluster_assignments]) end = time.time() print (\"Found in %.2f seconds\" % (end-start)), iters, \"iterations\" print \"Centroids:\" print centers print \"Cluster assignments:\", assignments ``` (Note that a real implementation would need to be more careful about initial cluster selection, avoiding problem cases with all points going to one cluster, etc. This is just a quick demo. I've updated my answer from earlier to make it a bit more clear and \"example-worthy\".)", "best_answers_score":0.7398, "library_name":"tensorflow", "question_url":"https:\/\/stackoverflow.com\/questions\/33621643\/how-would-i-implement-k-means-with-tensorflow", "best_answers_votes":27, "question_length":370, "response_length":2704 }, { "question":"ValueError when executing softmax_cross_entropy_with_logits I am following the tensorflow tutorial. There has been recent tensor flow update in which the cost function softmax_cross_entropy_with_logits() has been modified. Hence the code in the tutorial is giving the following error: ValueError: Only call softmax_cross_entropy_with_logits with named arguments (labels=..., logits=..., ...) What does it mean and how to rectify it? 
Here's the entire code till that point: ``` import tensorflow as tf from tensorflow.examples.tutorials.mnist import input_data mnist = input_data.read_data_sets(\"\/tmp\/data\/\", one_hot = True) n_nodes_hl1 = 500 n_nodes_hl2 = 500 n_nodes_hl3 = 500 n_classes = 10 batch_size = 100 x = tf.placeholder('float', [None, 784]) y = tf.placeholder('float') def neural_network_model(data): hidden_1_layer = {'weights':tf.Variable(tf.random_normal([784, n_nodes_hl1])), 'biases':tf.Variable(tf.random_normal([n_nodes_hl1]))} hidden_2_layer = {'weights':tf.Variable(tf.random_normal([n_nodes_hl1, n_nodes_hl2])), 'biases':tf.Variable(tf.random_normal([n_nodes_hl2]))} hidden_3_layer = {'weights':tf.Variable(tf.random_normal([n_nodes_hl2, n_nodes_hl3])), 'biases':tf.Variable(tf.random_normal([n_nodes_hl3]))} output_layer = {'weights':tf.Variable(tf.random_normal([n_nodes_hl3, n_classes])), 'biases':tf.Variable(tf.random_normal([n_classes])),} l1 = tf.add(tf.matmul(data,hidden_1_layer['weights']), hidden_1_layer['biases']) l1 = tf.nn.relu(l1) l2 = tf.add(tf.matmul(l1,hidden_2_layer['weights']), hidden_2_layer['biases']) l2 = tf.nn.relu(l2) l3 = tf.add(tf.matmul(l2,hidden_3_layer['weights']), hidden_3_layer['biases']) l3 = tf.nn.relu(l3) output = tf.matmul(l3,output_layer['weights']) + output_layer['biases'] return output def train_neural_network(x): prediction = neural_network_model(x) cost = tf.reduce_mean( tf.nn.softmax_cross_entropy_with_logits(prediction,y) ) optimizer = tf.train.AdamOptimizer().minimize(cost) ```", "response":"Change ``` tf.nn.softmax_cross_entropy_with_logits(prediction,y) ``` to ``` tf.nn.softmax_cross_entropy_with_logits(logits=prediction, labels=y) ```", "best_answers_score":0.7381, "library_name":"tensorflow", "question_url":"https:\/\/stackoverflow.com\/questions\/42296782\/valueerror-when-executing-softmax-cross-entropy-with-logits", "best_answers_votes":40, "question_length":1951, "response_length":148 }, { "question":"\"synonym of type is deprecated; in 
a future version of numpy, it will be understood as (type, (1,)) \/ '(1,)type'.\" problem in TensorFlow I installed TensorFlow 1.10.1 but when I tried to import TensorFlow it said that I need TensorFlow version 1.10.0. Thus, I installed it and now I get the following warnings: ``` >>> import tensorflow C:\\Users\\PC\\Anaconda3\\envs\\tut\\lib\\site-packages\\tensorflow\\python\\framework\\dtypes.py:516: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) \/ '(1,)type'. _np_qint8 = np.dtype([(\"qint8\", np.int8, 1)]) C:\\Users\\PC\\Anaconda3\\envs\\tut\\lib\\site-packages\\tensorflow\\python\\framework\\dtypes.py:517: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) \/ '(1,)type'. _np_quint8 = np.dtype([(\"quint8\", np.uint8, 1)]) C:\\Users\\PC\\Anaconda3\\envs\\tut\\lib\\site-packages\\tensorflow\\python\\framework\\dtypes.py:518: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) \/ '(1,)type'. _np_qint16 = np.dtype([(\"qint16\", np.int16, 1)]) C:\\Users\\PC\\Anaconda3\\envs\\tut\\lib\\site-packages\\tensorflow\\python\\framework\\dtypes.py:519: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) \/ '(1,)type'. _np_quint16 = np.dtype([(\"quint16\", np.uint16, 1)]) C:\\Users\\PC\\Anaconda3\\envs\\tut\\lib\\site-packages\\tensorflow\\python\\framework\\dtypes.py:520: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) \/ '(1,)type'. 
_np_qint32 = np.dtype([(\"qint32\", np.int32, 1)]) C:\\Users\\PC\\Anaconda3\\envs\\tut\\lib\\site-packages\\tensorflow\\python\\framework\\dtypes.py:525: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) \/ '(1,)type'. np_resource = np.dtype([(\"resource\", np.ubyte, 1)]) C:\\Users\\PC\\Anaconda3\\envs\\tut\\lib\\site-packages\\tensorboard\\compat\\tensorflow_stub\\dtypes.py:541: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) \/ '(1,)type'. _np_qint8 = np.dtype([(\"qint8\", np.int8, 1)]) C:\\Users\\PC\\Anaconda3\\envs\\tut\\lib\\site-packages\\tensorboard\\compat\\tensorflow_stub\\dtypes.py:542: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) \/ '(1,)type'. _np_quint8 = np.dtype([(\"quint8\", np.uint8, 1)]) C:\\Users\\PC\\Anaconda3\\envs\\tut\\lib\\site-packages\\tensorboard\\compat\\tensorflow_stub\\dtypes.py:543: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) \/ '(1,)type'. _np_qint16 = np.dtype([(\"qint16\", np.int16, 1)]) C:\\Users\\PC\\Anaconda3\\envs\\tut\\lib\\site-packages\\tensorboard\\compat\\tensorflow_stub\\dtypes.py:544: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) \/ '(1,)type'. _np_quint16 = np.dtype([(\"quint16\", np.uint16, 1)]) C:\\Users\\PC\\Anaconda3\\envs\\tut\\lib\\site-packages\\tensorboard\\compat\\tensorflow_stub\\dtypes.py:545: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) \/ '(1,)type'. 
_np_qint32 = np.dtype([(\"qint32\", np.int32, 1)]) C:\\Users\\PC\\Anaconda3\\envs\\tut\\lib\\site-packages\\tensorboard\\compat\\tensorflow_stub\\dtypes.py:550: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) \/ '(1,)type'. np_resource = np.dtype([(\"resource\", np.ubyte, 1)]) ```", "response":"If you're using TF 2.0 a quick solution would be to downgrade your numpy to 1.16.4. (I used 1.17 and received the same warning messages). ``` 1. pip uninstall numpy 2. pip install numpy==1.16.4 ``` See here (thanks to ymodak)", "best_answers_score":0.7363, "library_name":"tensorflow", "question_url":"https:\/\/stackoverflow.com\/questions\/57381430\/synonym-of-type-is-deprecated-in-a-future-version-of-numpy-it-will-be-underst", "best_answers_votes":68, "question_length":4024, "response_length":225 }, { "question":"Tensorflow OOM on GPU i'm training some Music Data on a LSTM-RNN in Tensorflow and encountered some Problem with GPU-Memory-Allocation which i don't understand: I encounter an OOM when there actually seems to be just about enough VRAM still available. Some background: I'm working on Ubuntu Gnome 16.04, using a GTX1060 6GB, Intel Xeon E3-1231V3 and 8GB RAM. 
So now, first, the part of the error message which I can understand (I will add the whole error message at the end again for anyone who needs it): I tensorflow\/core\/common_runtime\/bfc_allocator.cc:696] 8 Chunks of size 256 totalling 2.0KiB I tensorflow\/core\/common_runtime\/bfc_allocator.cc:696] 1 Chunks of size 1280 totalling 1.2KiB I tensorflow\/core\/common_runtime\/bfc_allocator.cc:696] 5 Chunks of size 44288 totalling 216.2KiB I tensorflow\/core\/common_runtime\/bfc_allocator.cc:696] 5 Chunks of size 56064 totalling 273.8KiB I tensorflow\/core\/common_runtime\/bfc_allocator.cc:696] 4 Chunks of size 154350080 totalling 588.80MiB I tensorflow\/core\/common_runtime\/bfc_allocator.cc:696] 3 Chunks of size 813400064 totalling 2.27GiB I tensorflow\/core\/common_runtime\/bfc_allocator.cc:696] 1 Chunks of size 1612612352 totalling 1.50GiB I tensorflow\/core\/common_runtime\/bfc_allocator.cc:700] Sum Total of in-use chunks: 4.35GiB I tensorflow\/core\/common_runtime\/bfc_allocator.cc:702] Stats: Limit: 5484118016 InUse: 4670717952 MaxInUse: 5484118016 NumAllocs: 29 MaxAllocSize: 1612612352 W tensorflow\/core\/common_runtime\/bfc_allocator.cc:274] *********************___________*__***************************************************xxxxxxxxxxxxxx W tensorflow\/core\/common_runtime\/bfc_allocator.cc:275] Ran out of memory trying to allocate 775.72MiB. See logs for memory state. W tensorflow\/core\/framework\/op_kernel.cc:993] Resource exhausted: OOM when allocating tensor with shape[14525,14000] So I can read that there is a maximum of 5484118016 bytes to be allocated, 4670717952 bytes are already in use, and another 775.72MiB = 775720000 bytes are to be allocated. 5484118016 bytes - 4670717952 bytes - 775720000 bytes = 37680064 bytes according to my calculator. So there should still be 37MB of free VRAM after allocating the space for the new tensor it wants to push in there. 
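As a quick sanity check, the subtraction above can be reproduced directly (the three numbers are taken from the allocator stats in the log; the 775720000-byte figure is the decimal-bytes reading used in the question): ```python
# Reproduce the free-VRAM arithmetic from the BFC allocator log above
limit = 5484118016       # 'Limit' reported by the allocator
in_use = 4670717952      # 'InUse' reported by the allocator
requested = 775720000    # the failed allocation, read as decimal bytes
print(limit - in_use - requested)  # 37680064
```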
This also seems quite legitimate to me, as TensorFlow would probably (I guess?) not try to allocate more VRAM than is still available, and would just put the rest of the data on hold in RAM or something. Now, I guess there is just some big error in my thinking, but I would be quite grateful if someone could explain to me what this error is. The obvious solving strategy for my problem is to just make my batches a bit smaller; having them each at around 1.5GB is probably just too big. Still, I would love to know what the actual problem is. edit: I found something telling me to try: ``` config = tf.ConfigProto() config.gpu_options.allocator_type = 'BFC' with tf.Session(config = config) as s: ``` which still does not work, but as the tensorflow documentation lacks any explanation of what ``` gpu_options.allocator_type = 'BFC' ``` would be, I would love to ask you guys. Adding the rest of the error message for anyone interested: Sorry for the long copy\/paste, but maybe someone will need\/want to see it. Thank you so much in advance, Leon ``` (gputensorflow) leon@ljksUbuntu:~\/Tensorflow$ python Netzwerk_v0.5.1_gamma.py I tensorflow\/stream_executor\/dso_loader.cc:135] successfully opened CUDA library libcublas.so.8.0 locally I tensorflow\/stream_executor\/dso_loader.cc:135] successfully opened CUDA library libcudnn.so.5 locally I tensorflow\/stream_executor\/dso_loader.cc:135] successfully opened CUDA library libcufft.so.8.0 locally I tensorflow\/stream_executor\/dso_loader.cc:135] successfully opened CUDA library libcuda.so.1 locally I tensorflow\/stream_executor\/dso_loader.cc:135] successfully opened CUDA library libcurand.so.8.0 locally W tensorflow\/core\/platform\/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use SSE3 instructions, but these are available on your machine and could speed up CPU computations. 
W tensorflow\/core\/platform\/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use SSE4.1 instructions, but these are available on your machine and could speed up CPU computations. W tensorflow\/core\/platform\/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use SSE4.2 instructions, but these are available on your machine and could speed up CPU computations. W tensorflow\/core\/platform\/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use AVX instructions, but these are available on your machine and could speed up CPU computations. W tensorflow\/core\/platform\/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use AVX2 instructions, but these are available on your machine and could speed up CPU computations. W tensorflow\/core\/platform\/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use FMA instructions, but these are available on your machine and could speed up CPU computations. I tensorflow\/stream_executor\/cuda\/cuda_gpu_executor.cc:910] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero I tensorflow\/core\/common_runtime\/gpu\/gpu_device.cc:885] Found device 0 with properties: name: GeForce GTX 1060 6GB major: 6 minor: 1 memoryClockRate (GHz) 1.7335 pciBusID 0000:01:00.0 Total memory: 5.93GiB Free memory: 5.40GiB I tensorflow\/core\/common_runtime\/gpu\/gpu_device.cc:906] DMA: 0 I tensorflow\/core\/common_runtime\/gpu\/gpu_device.cc:916] 0: Y I tensorflow\/core\/common_runtime\/gpu\/gpu_device.cc:975] Creating TensorFlow device (\/gpu:0) -> (device: 0, name: GeForce GTX 1060 6GB, pci bus id: 0000:01:00.0) I tensorflow\/core\/common_runtime\/bfc_allocator.cc:643] Bin (256): Total Chunks: 0, Chunks in use: 0 0B allocated for chunks. 0B client-requested for chunks. 0B in use in bin. 0B client-requested in use in bin. 
I tensorflow\/core\/common_runtime\/bfc_allocator.cc:643] Bin (512): Total Chunks: 0, Chunks in use: 0 0B allocated for chunks. 0B client-requested for chunks. 0B in use in bin. 0B client-requested in use in bin. I tensorflow\/core\/common_runtime\/bfc_allocator.cc:643] Bin (1024): Total Chunks: 0, Chunks in use: 0 0B allocated for chunks. 0B client-requested for chunks. 0B in use in bin. 0B client-requested in use in bin. I tensorflow\/core\/common_runtime\/bfc_allocator.cc:643] Bin (2048): Total Chunks: 0, Chunks in use: 0 0B allocated for chunks. 0B client-requested for chunks. 0B in use in bin. 0B client-requested in use in bin. I tensorflow\/core\/common_runtime\/bfc_allocator.cc:643] Bin (4096): Total Chunks: 0, Chunks in use: 0 0B allocated for chunks. 0B client-requested for chunks. 0B in use in bin. 0B client-requested in use in bin. I tensorflow\/core\/common_runtime\/bfc_allocator.cc:643] Bin (8192): Total Chunks: 0, Chunks in use: 0 0B allocated for chunks. 0B client-requested for chunks. 0B in use in bin. 0B client-requested in use in bin. I tensorflow\/core\/common_runtime\/bfc_allocator.cc:643] Bin (16384): Total Chunks: 0, Chunks in use: 0 0B allocated for chunks. 0B client-requested for chunks. 0B in use in bin. 0B client-requested in use in bin. I tensorflow\/core\/common_runtime\/bfc_allocator.cc:643] Bin (32768): Total Chunks: 0, Chunks in use: 0 0B allocated for chunks. 0B client-requested for chunks. 0B in use in bin. 0B client-requested in use in bin. I tensorflow\/core\/common_runtime\/bfc_allocator.cc:643] Bin (65536): Total Chunks: 0, Chunks in use: 0 0B allocated for chunks. 0B client-requested for chunks. 0B in use in bin. 0B client-requested in use in bin. I tensorflow\/core\/common_runtime\/bfc_allocator.cc:643] Bin (131072): Total Chunks: 0, Chunks in use: 0 0B allocated for chunks. 0B client-requested for chunks. 0B in use in bin. 0B client-requested in use in bin. 
I tensorflow\/core\/common_runtime\/bfc_allocator.cc:643] Bin (262144): Total Chunks: 0, Chunks in use: 0 0B allocated for chunks. 0B client-requested for chunks. 0B in use in bin. 0B client-requested in use in bin. I tensorflow\/core\/common_runtime\/bfc_allocator.cc:643] Bin (524288): Total Chunks: 0, Chunks in use: 0 0B allocated for chunks. 0B client-requested for chunks. 0B in use in bin. 0B client-requested in use in bin. I tensorflow\/core\/common_runtime\/bfc_allocator.cc:643] Bin (1048576): Total Chunks: 0, Chunks in use: 0 0B allocated for chunks. 0B client-requested for chunks. 0B in use in bin. 0B client-requested in use in bin. I tensorflow\/core\/common_runtime\/bfc_allocator.cc:643] Bin (2097152): Total Chunks: 0, Chunks in use: 0 0B allocated for chunks. 0B client-requested for chunks. 0B in use in bin. 0B client-requested in use in bin. I tensorflow\/core\/common_runtime\/bfc_allocator.cc:643] Bin (4194304): Total Chunks: 0, Chunks in use: 0 0B allocated for chunks. 0B client-requested for chunks. 0B in use in bin. 0B client-requested in use in bin. I tensorflow\/core\/common_runtime\/bfc_allocator.cc:643] Bin (8388608): Total Chunks: 0, Chunks in use: 0 0B allocated for chunks. 0B client-requested for chunks. 0B in use in bin. 0B client-requested in use in bin. I tensorflow\/core\/common_runtime\/bfc_allocator.cc:643] Bin (16777216): Total Chunks: 0, Chunks in use: 0 0B allocated for chunks. 0B client-requested for chunks. 0B in use in bin. 0B client-requested in use in bin. I tensorflow\/core\/common_runtime\/bfc_allocator.cc:643] Bin (33554432): Total Chunks: 0, Chunks in use: 0 0B allocated for chunks. 0B client-requested for chunks. 0B in use in bin. 0B client-requested in use in bin. I tensorflow\/core\/common_runtime\/bfc_allocator.cc:643] Bin (67108864): Total Chunks: 0, Chunks in use: 0 0B allocated for chunks. 0B client-requested for chunks. 0B in use in bin. 0B client-requested in use in bin. 
I tensorflow\/core\/common_runtime\/bfc_allocator.cc:643] Bin (134217728): Total Chunks: 1, Chunks in use: 0 147.20MiB allocated for chunks. 147.20MiB client-requested for chunks. 0B in use in bin. 0B client-requested in use in bin. I tensorflow\/core\/common_runtime\/bfc_allocator.cc:643] Bin (268435456): Total Chunks: 1, Chunks in use: 0 628.52MiB allocated for chunks. 0B client-requested for chunks. 0B in use in bin. 0B client-requested in use in bin. I tensorflow\/core\/common_runtime\/bfc_allocator.cc:660] Bin for 775.72MiB was 256.00MiB, Chunk State: I tensorflow\/core\/common_runtime\/bfc_allocator.cc:666] Size: 628.52MiB | Requested Size: 0B | in_use: 0, prev: Size: 147.20MiB | Requested Size: 147.20MiB | in_use: 1, next: Size: 54.8KiB | Requested Size: 54.7KiB | in_use: 1 I tensorflow\/core\/common_runtime\/bfc_allocator.cc:678] Chunk at 0x10208000000 of size 1280 I tensorflow\/core\/common_runtime\/bfc_allocator.cc:678] Chunk at 0x10208000500 of size 256 I tensorflow\/core\/common_runtime\/bfc_allocator.cc:678] Chunk at 0x10208000600 of size 56064 I tensorflow\/core\/common_runtime\/bfc_allocator.cc:678] Chunk at 0x1020800e100 of size 256 I tensorflow\/core\/common_runtime\/bfc_allocator.cc:678] Chunk at 0x1020800e200 of size 44288 I tensorflow\/core\/common_runtime\/bfc_allocator.cc:678] Chunk at 0x10208018f00 of size 256 I tensorflow\/core\/common_runtime\/bfc_allocator.cc:678] Chunk at 0x10208019000 of size 256 I tensorflow\/core\/common_runtime\/bfc_allocator.cc:678] Chunk at 0x10208019100 of size 813400064 I tensorflow\/core\/common_runtime\/bfc_allocator.cc:678] Chunk at 0x102387d1100 of size 56064 I tensorflow\/core\/common_runtime\/bfc_allocator.cc:678] Chunk at 0x102387dec00 of size 154350080 I tensorflow\/core\/common_runtime\/bfc_allocator.cc:678] Chunk at 0x10241b11e00 of size 44288 I tensorflow\/core\/common_runtime\/bfc_allocator.cc:678] Chunk at 0x10241b1cb00 of size 256 I tensorflow\/core\/common_runtime\/bfc_allocator.cc:678] Chunk at 
0x10241b1cc00 of size 256 I tensorflow\/core\/common_runtime\/bfc_allocator.cc:678] Chunk at 0x10241b1cd00 of size 154350080 I tensorflow\/core\/common_runtime\/bfc_allocator.cc:678] Chunk at 0x102722d4d00 of size 56064 I tensorflow\/core\/common_runtime\/bfc_allocator.cc:678] Chunk at 0x1027b615a00 of size 44288 I tensorflow\/core\/common_runtime\/bfc_allocator.cc:678] Chunk at 0x1027b620700 of size 256 I tensorflow\/core\/common_runtime\/bfc_allocator.cc:678] Chunk at 0x1027b620800 of size 256 I tensorflow\/core\/common_runtime\/bfc_allocator.cc:678] Chunk at 0x1027b620900 of size 813400064 I tensorflow\/core\/common_runtime\/bfc_allocator.cc:678] Chunk at 0x102abdd8900 of size 813400064 I tensorflow\/core\/common_runtime\/bfc_allocator.cc:678] Chunk at 0x102dc590900 of size 56064 I tensorflow\/core\/common_runtime\/bfc_allocator.cc:678] Chunk at 0x102dc59e400 of size 56064 I tensorflow\/core\/common_runtime\/bfc_allocator.cc:678] Chunk at 0x102dc5abf00 of size 154350080 I tensorflow\/core\/common_runtime\/bfc_allocator.cc:678] Chunk at 0x102e58df100 of size 154350080 I tensorflow\/core\/common_runtime\/bfc_allocator.cc:678] Chunk at 0x102eec12300 of size 44288 I tensorflow\/core\/common_runtime\/bfc_allocator.cc:678] Chunk at 0x102eec1d000 of size 44288 I tensorflow\/core\/common_runtime\/bfc_allocator.cc:678] Chunk at 0x102eec27d00 of size 1612612352 I tensorflow\/core\/common_runtime\/bfc_allocator.cc:687] Free at 0x1024ae4ff00 of size 659049984 I tensorflow\/core\/common_runtime\/bfc_allocator.cc:687] Free at 0x102722e2800 of size 154350080 I tensorflow\/core\/common_runtime\/bfc_allocator.cc:693] Summary of in-use Chunks by size: I tensorflow\/core\/common_runtime\/bfc_allocator.cc:696] 8 Chunks of size 256 totalling 2.0KiB I tensorflow\/core\/common_runtime\/bfc_allocator.cc:696] 1 Chunks of size 1280 totalling 1.2KiB I tensorflow\/core\/common_runtime\/bfc_allocator.cc:696] 5 Chunks of size 44288 totalling 216.2KiB I 
tensorflow\/core\/common_runtime\/bfc_allocator.cc:696] 5 Chunks of size 56064 totalling 273.8KiB I tensorflow\/core\/common_runtime\/bfc_allocator.cc:696] 4 Chunks of size 154350080 totalling 588.80MiB I tensorflow\/core\/common_runtime\/bfc_allocator.cc:696] 3 Chunks of size 813400064 totalling 2.27GiB I tensorflow\/core\/common_runtime\/bfc_allocator.cc:696] 1 Chunks of size 1612612352 totalling 1.50GiB I tensorflow\/core\/common_runtime\/bfc_allocator.cc:700] Sum Total of in-use chunks: 4.35GiB I tensorflow\/core\/common_runtime\/bfc_allocator.cc:702] Stats: Limit: 5484118016 InUse: 4670717952 MaxInUse: 5484118016 NumAllocs: 29 MaxAllocSize: 1612612352 W tensorflow\/core\/common_runtime\/bfc_allocator.cc:274] *********************___________*__***************************************************xxxxxxxxxxxxxx W tensorflow\/core\/common_runtime\/bfc_allocator.cc:275] Ran out of memory trying to allocate 775.72MiB. See logs for memory state. W tensorflow\/core\/framework\/op_kernel.cc:993] Resource exhausted: OOM when allocating tensor with shape[14525,14000] Traceback (most recent call last): File \"\/home\/leon\/anaconda3\/envs\/gputensorflow\/lib\/python3.5\/site-packages\/tensorflow\/python\/client\/session.py\", line 1022, in _do_call return fn(*args) File \"\/home\/leon\/anaconda3\/envs\/gputensorflow\/lib\/python3.5\/site-packages\/tensorflow\/python\/client\/session.py\", line 1004, in _run_fn status, run_metadata) File \"\/home\/leon\/anaconda3\/envs\/gputensorflow\/lib\/python3.5\/contextlib.py\", line 66, in __exit__ next(self.gen) File \"\/home\/leon\/anaconda3\/envs\/gputensorflow\/lib\/python3.5\/site-packages\/tensorflow\/python\/framework\/errors_impl.py\", line 469, in raise_exception_on_not_ok_status pywrap_tensorflow.TF_GetCode(status)) tensorflow.python.framework.errors_impl.ResourceExhaustedError: OOM when allocating tensor with shape[14525,14000] [[Node: rnn\/basic_lstm_cell\/weights\/Initializer\/random_uniform = Add[T=DT_FLOAT, 
_class=[\"loc:@rnn\/basic_lstm_cell\/weights\"], _device=\"\/job:localhost\/replica:0\/task:0\/gpu:0\"](rnn\/basic_lstm_cell\/weights\/Initializer\/random_uniform\/mul, rnn\/basic_lstm_cell\/weights\/Initializer\/random_uniform\/min)]] During handling of the above exception, another exception occurred: Traceback (most recent call last): File \"Netzwerk_v0.5.1_gamma.py\", line 171, in session.run(tf.global_variables_initializer()) File \"\/home\/leon\/anaconda3\/envs\/gputensorflow\/lib\/python3.5\/site-packages\/tensorflow\/python\/client\/session.py\", line 767, in run run_metadata_ptr) File \"\/home\/leon\/anaconda3\/envs\/gputensorflow\/lib\/python3.5\/site-packages\/tensorflow\/python\/client\/session.py\", line 965, in _run feed_dict_string, options, run_metadata) File \"\/home\/leon\/anaconda3\/envs\/gputensorflow\/lib\/python3.5\/site-packages\/tensorflow\/python\/client\/session.py\", line 1015, in _do_run target_list, options, run_metadata) File \"\/home\/leon\/anaconda3\/envs\/gputensorflow\/lib\/python3.5\/site-packages\/tensorflow\/python\/client\/session.py\", line 1035, in _do_call raise type(e)(node_def, op, message) tensorflow.python.framework.errors_impl.ResourceExhaustedError: OOM when allocating tensor with shape[14525,14000] [[Node: rnn\/basic_lstm_cell\/weights\/Initializer\/random_uniform = Add[T=DT_FLOAT, _class=[\"loc:@rnn\/basic_lstm_cell\/weights\"], _device=\"\/job:localhost\/replica:0\/task:0\/gpu:0\"](rnn\/basic_lstm_cell\/weights\/Initializer\/random_uniform\/mul, rnn\/basic_lstm_cell\/weights\/Initializer\/random_uniform\/min)]] Caused by op 'rnn\/basic_lstm_cell\/weights\/Initializer\/random_uniform', defined at: File \"Netzwerk_v0.5.1_gamma.py\", line 94, in initial_state=initial_state, time_major=False) # time_major = FALSE currently File \"\/home\/leon\/anaconda3\/envs\/gputensorflow\/lib\/python3.5\/site-packages\/tensorflow\/python\/ops\/rnn.py\", line 545, in dynamic_rnn dtype=dtype) File 
\"\/home\/leon\/anaconda3\/envs\/gputensorflow\/lib\/python3.5\/site-packages\/tensorflow\/python\/ops\/rnn.py\", line 712, in _dynamic_rnn_loop swap_memory=swap_memory) File \"\/home\/leon\/anaconda3\/envs\/gputensorflow\/lib\/python3.5\/site-packages\/tensorflow\/python\/ops\/control_flow_ops.py\", line 2626, in while_loop result = context.BuildLoop(cond, body, loop_vars, shape_invariants) File \"\/home\/leon\/anaconda3\/envs\/gputensorflow\/lib\/python3.5\/site-packages\/tensorflow\/python\/ops\/control_flow_ops.py\", line 2459, in BuildLoop pred, body, original_loop_vars, loop_vars, shape_invariants) File \"\/home\/leon\/anaconda3\/envs\/gputensorflow\/lib\/python3.5\/site-packages\/tensorflow\/python\/ops\/control_flow_ops.py\", line 2409, in _BuildLoop body_result = body(*packed_vars_for_body) File \"\/home\/leon\/anaconda3\/envs\/gputensorflow\/lib\/python3.5\/site-packages\/tensorflow\/python\/ops\/rnn.py\", line 697, in _time_step (output, new_state) = call_cell() File \"\/home\/leon\/anaconda3\/envs\/gputensorflow\/lib\/python3.5\/site-packages\/tensorflow\/python\/ops\/rnn.py\", line 683, in call_cell = lambda: cell(input_t, state) File \"\/home\/leon\/anaconda3\/envs\/gputensorflow\/lib\/python3.5\/site-packages\/tensorflow\/contrib\/rnn\/python\/ops\/core_rnn_cell_impl.py\", line 179, in __call__ concat = _linear([inputs, h], 4 * self._num_units, True, scope=scope) File \"\/home\/leon\/anaconda3\/envs\/gputensorflow\/lib\/python3.5\/site-packages\/tensorflow\/contrib\/rnn\/python\/ops\/core_rnn_cell_impl.py\", line 747, in _linear \"weights\", [total_arg_size, output_size], dtype=dtype) File \"\/home\/leon\/anaconda3\/envs\/gputensorflow\/lib\/python3.5\/site-packages\/tensorflow\/python\/ops\/variable_scope.py\", line 988, in get_variable custom_getter=custom_getter) File \"\/home\/leon\/anaconda3\/envs\/gputensorflow\/lib\/python3.5\/site-packages\/tensorflow\/python\/ops\/variable_scope.py\", line 890, in get_variable custom_getter=custom_getter) 
File \"\/home\/leon\/anaconda3\/envs\/gputensorflow\/lib\/python3.5\/site-packages\/tensorflow\/python\/ops\/variable_scope.py\", line 348, in get_variable validate_shape=validate_shape) File \"\/home\/leon\/anaconda3\/envs\/gputensorflow\/lib\/python3.5\/site-packages\/tensorflow\/python\/ops\/variable_scope.py\", line 333, in _true_getter caching_device=caching_device, validate_shape=validate_shape) File \"\/home\/leon\/anaconda3\/envs\/gputensorflow\/lib\/python3.5\/site-packages\/tensorflow\/python\/ops\/variable_scope.py\", line 684, in _get_single_variable validate_shape=validate_shape) File \"\/home\/leon\/anaconda3\/envs\/gputensorflow\/lib\/python3.5\/site-packages\/tensorflow\/python\/ops\/variables.py\", line 226, in __init__ expected_shape=expected_shape) File \"\/home\/leon\/anaconda3\/envs\/gputensorflow\/lib\/python3.5\/site-packages\/tensorflow\/python\/ops\/variables.py\", line 303, in _init_from_args initial_value(), name=\"initial_value\", dtype=dtype) File \"\/home\/leon\/anaconda3\/envs\/gputensorflow\/lib\/python3.5\/site-packages\/tensorflow\/python\/ops\/variable_scope.py\", line 673, in shape.as_list(), dtype=dtype, partition_info=partition_info) File \"\/home\/leon\/anaconda3\/envs\/gputensorflow\/lib\/python3.5\/site-packages\/tensorflow\/python\/ops\/init_ops.py\", line 360, in __call__ dtype, seed=self.seed) File \"\/home\/leon\/anaconda3\/envs\/gputensorflow\/lib\/python3.5\/site-packages\/tensorflow\/python\/ops\/random_ops.py\", line 246, in random_uniform return math_ops.add(rnd * (maxval - minval), minval, name=name) File \"\/home\/leon\/anaconda3\/envs\/gputensorflow\/lib\/python3.5\/site-packages\/tensorflow\/python\/ops\/gen_math_ops.py\", line 73, in add result = _op_def_lib.apply_op(\"Add\", x=x, y=y, name=name) File \"\/home\/leon\/anaconda3\/envs\/gputensorflow\/lib\/python3.5\/site-packages\/tensorflow\/python\/framework\/op_def_library.py\", line 763, in apply_op op_def=op_def) File 
\"\/home\/leon\/anaconda3\/envs\/gputensorflow\/lib\/python3.5\/site-packages\/tensorflow\/python\/framework\/ops.py\", line 2395, in create_op original_op=self._default_original_op, op_def=op_def) File \"\/home\/leon\/anaconda3\/envs\/gputensorflow\/lib\/python3.5\/site-packages\/tensorflow\/python\/framework\/ops.py\", line 1264, in __init__ self._traceback = _extract_stack() ResourceExhaustedError (see above for traceback): OOM when allocating tensor with shape[14525,14000] [[Node: rnn\/basic_lstm_cell\/weights\/Initializer\/random_uniform = Add[T=DT_FLOAT, _class=[\"loc:@rnn\/basic_lstm_cell\/weights\"], _device=\"\/job:localhost\/replica:0\/task:0\/gpu:0\"](rnn\/basic_lstm_cell\/weights\/Initializer\/random_uniform\/mul, rnn\/basic_lstm_cell\/weights\/Initializer\/random_uniform\/min)]] ```", "response":"I resolve this issue by reducing batch_size=52 Only to reduce memory use is to reduce batch_size. Batch_size depends on your gpu graphics card, size of VRAM, Cache memory etc. Please prefer this Another Stack Overflow Link", "best_answers_score":0.7355, "library_name":"tensorflow", "question_url":"https:\/\/stackoverflow.com\/questions\/42495930\/tensorflow-oom-on-gpu", "best_answers_votes":9, "question_length":21833, "response_length":222 }, { "question":"In TensorFlow, what is tf.identity used for? I've seen tf.identity used in a few places, such as the official CIFAR-10 tutorial and the batch-normalization implementation on stackoverflow, but I don't see why it's necessary. What's it used for? Can anyone give a use case or two? One proposed answer is that it can be used for transfer between the CPU and GPU. This is not clear to me. Extension to the question, based on this: loss = tower_loss(scope) is under the GPU block, which suggests to me that all operators defined in tower_loss are mapped to the GPU. Then, at the end of tower_loss, we see total_loss = tf.identity(total_loss) before it's returned. Why? 
What would be the flaw with not using tf.identity here?", "response":"After some stumbling I think I've noticed a single use case that fits all the examples I've seen. If there are other use cases, please elaborate with an example. Use case: Suppose you'd like to run an operator every time a particular Variable is evaluated. For example, say you'd like to add one to x every time the variable y is evaluated. It might seem like this will work: ``` x = tf.Variable(0.0) x_plus_1 = tf.assign_add(x, 1) with tf.control_dependencies([x_plus_1]): y = x init = tf.initialize_all_variables() with tf.Session() as session: init.run() for i in xrange(5): print(y.eval()) ``` It doesn't: it'll print 0, 0, 0, 0, 0. Instead, it seems that we need to add a new node to the graph within the control_dependencies block. So we use this trick: ``` x = tf.Variable(0.0) x_plus_1 = tf.assign_add(x, 1) with tf.control_dependencies([x_plus_1]): y = tf.identity(x) init = tf.initialize_all_variables() with tf.Session() as session: init.run() for i in xrange(5): print(y.eval()) ``` This works: it prints 1, 2, 3, 4, 5. If in the CIFAR-10 tutorial we dropped tf.identity, then loss_averages_op would never run.", "best_answers_score":0.7344, "library_name":"tensorflow", "question_url":"https:\/\/stackoverflow.com\/questions\/34877523\/in-tensorflow-what-is-tf-identity-used-for", "best_answers_votes":69, "question_length":720, "response_length":1122 }, { "question":"Tensorflow Data Adapter Error: ValueError: Failed to find data adapter that can handle input While running a sentdex tutorial script of a cryptocurrency RNN, link here YouTube Tutorial: Cryptocurrency-predicting RNN Model, but have encountered an error when attempting to train the model. My tensorflow version is 2.0.0 and I'm running python 3.6. 
When attempting to train the model I receive the following error: ```py File \"C:\\python36-64\\lib\\site-packages\\tensorflow_core\\python\\keras\\engine\\training.py\", line 734, in fit use_multiprocessing=use_multiprocessing) File \"C:\\python36-64\\lib\\site-packages\\tensorflow_core\\python\\keras\\engine\\training_v2.py\", line 224, in fit distribution_strategy=strategy) File \"C:\\python36-64\\lib\\site-packages\\tensorflow_core\\python\\keras\\engine\\training_v2.py\", line 497, in _process_training_inputs adapter_cls = data_adapter.select_data_adapter(x, y) File \"C:\\python36-64\\lib\\site-packages\\tensorflow_core\\python\\keras\\engine\\data_adapter.py\", line 628, in select_data_adapter _type_name(x), _type_name(y))) ValueError: Failed to find data adapter that can handle input: , ( containing values of types {\"\"}) ``` Any advice would be greatly appreciated!", "response":"Have you checked whether your training\/testing data and training\/testing labels are all numpy arrays? It might be that you're mixing numpy arrays with lists.", "best_answers_score":0.7333, "library_name":"tensorflow", "question_url":"https:\/\/stackoverflow.com\/questions\/57874436\/tensorflow-data-adapter-error-valueerror-failed-to-find-data-adapter-that-can", "best_answers_votes":107, "question_length":1192, "response_length":157 }, { "question":"tf.shape() get wrong shape in tensorflow I define a tensor like this: x = tf.get_variable(\"x\", [100]) But when I try to print shape of tensor : print( tf.shape(x) ) I get Tensor(\"Shape:0\", shape=(1,), dtype=int32), why the result of output should not be shape=(100)", "response":"tf.shape(input, name=None) returns a 1-D integer tensor representing the shape of input. You're looking for: x.get_shape() that returns the TensorShape of the x variable. 
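A minimal sketch of the difference (using tf.Variable rather than the question's tf.get_variable so it runs under both graph mode and TF 2.x eager mode): ```python
import tensorflow as tf

x = tf.Variable(tf.zeros([100]))
print(x.get_shape())  # static shape, known without evaluating anything: (100,)
print(tf.shape(x))    # an op whose *value* is the shape; printed as a Tensor, not (100,)
``` The first call inspects graph-time metadata; the second builds an operation that computes the shape at run time, which is why it prints as a rank-1 int32 Tensor.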
Update: I wrote an article to clarify the dynamic\/static shapes in Tensorflow because of this answer: https:\/\/pgaleone.eu\/tensorflow\/2018\/07\/28\/understanding-tensorflow-tensors-shape-static-dynamic\/", "best_answers_score":0.7325, "library_name":"tensorflow", "question_url":"https:\/\/stackoverflow.com\/questions\/37085430\/tf-shape-get-wrong-shape-in-tensorflow", "best_answers_votes":133, "question_length":265, "response_length":369 }, { "question":"How can I use a pre-trained neural network with grayscale images? I have a dataset containing grayscale images and I want to train a state-of-the-art CNN on them. I'd very much like to fine-tune a pre-trained model (like the ones here). The problem is that almost all models I can find the weights for have been trained on the ImageNet dataset, which contains RGB images. I can't use one of those models because their input layer expects a batch of shape (batch_size, height, width, 3) or (64, 224, 224, 3) in my case, but my images batches are (64, 224, 224). Is there any way that I can use one of those models? I've thought of dropping the input layer after I've loaded the weights and adding my own (like we do for the top layers). Is this approach correct?", "response":"The model's architecture cannot be changed because the weights have been trained for a specific input configuration. Replacing the first layer with your own would pretty much render the rest of the weights useless. -- Edit: elaboration suggested by Prune-- CNNs are built so that as they go deeper, they can extract high-level features derived from the lower-level features that the previous layers extracted. By removing the initial layers of a CNN, you are destroying that hierarchy of features because the subsequent layers won't receive the features that they are supposed to as their input. In your case the second layer has been trained to expect the features of the first layer. 
By replacing your first layer with random weights, you are essentially throwing away any training that has been done on the subsequent layers, as they would need to be retrained. I doubt that they could retain any of the knowledge learned during the initial training. --- end edit --- There is an easy way, though, which you can make your model work with grayscale images. You just need to make the image to appear to be RGB. The easiest way to do so is to repeat the image array 3 times on a new dimension. Because you will have the same image over all 3 channels, the performance of the model should be the same as it was on RGB images. In numpy this can be easily done like this: ``` print(grayscale_batch.shape) # (64, 224, 224) rgb_batch = np.repeat(grayscale_batch[..., np.newaxis], 3, -1) print(rgb_batch.shape) # (64, 224, 224, 3) ``` The way this works is that it first creates a new dimension (to place the channels) and then it repeats the existing array 3 times on this new dimension. I'm also pretty sure that keras' ImageDataGenerator can load grayscale images as RGB.", "best_answers_score":0.7317, "library_name":"tensorflow", "question_url":"https:\/\/stackoverflow.com\/questions\/51995977\/how-can-i-use-a-pre-trained-neural-network-with-grayscale-images", "best_answers_votes":90, "question_length":761, "response_length":1768 }, { "question":"Ordering of batch normalization and dropout? The original question was in regard to TensorFlow implementations specifically. However, the answers are for implementations in general. This general answer is also the correct answer for TensorFlow. When using batch normalization and dropout in TensorFlow (specifically using the contrib.layers) do I need to be worried about the ordering? It seems possible that if I use dropout followed immediately by batch normalization there might be trouble. 
For example, if the shift in the batch normalization trains to the larger scale numbers of the training outputs, but then that same shift is applied to the smaller (due to the compensation for having more outputs) scale numbers without dropout during testing, then that shift may be off. Does the TensorFlow batch normalization layer automatically compensate for this? Or does this not happen for some reason I'm missing? Also, are there other pitfalls to look out for in when using these two together? For example, assuming I'm using them in the correct order in regards to the above (assuming there is a correct order), could there be trouble with using both batch normalization and dropout on multiple successive layers? I don't immediately see a problem with that, but I might be missing something. Thank you much! UPDATE: An experimental test seems to suggest that ordering does matter. I ran the same network twice with only the batch norm and dropout reverse. When the dropout is before the batch norm, validation loss seems to be going up as training loss is going down. They're both going down in the other case. But in my case the movements are slow, so things may change after more training and it's just a single test. A more definitive and informed answer would still be appreciated.", "response":"In the Ioffe and Szegedy 2015, the authors state that \"we would like to ensure that for any parameter values, the network always produces activations with the desired distribution\". So the Batch Normalization Layer is actually inserted right after a Conv Layer\/Fully Connected Layer, but before feeding into ReLu (or any other kinds of) activation. See this video at around time 53 min for more details. As far as dropout goes, I believe dropout is applied after activation layer. In the dropout paper figure 3b, the dropout factor\/probability matrix r(l) for hidden layer l is applied to it on y(l), where y(l) is the result after applying activation function f. 
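Translated into a minimal Keras sketch (layer sizes here are hypothetical, not taken from either paper), that placement looks like:

```python
from tensorflow import keras
from tensorflow.keras import layers

# CONV -> BatchNorm -> activation -> Dropout, per the ordering above
model = keras.Sequential([
    layers.Conv2D(32, 5, padding='same', input_shape=(28, 28, 1)),
    layers.BatchNormalization(),   # normalize before the nonlinearity
    layers.Activation('relu'),     # activation applied after BN
    layers.Dropout(0.4),           # dropout applied after the activation
    layers.Flatten(),
    layers.Dense(10, activation='softmax'),
])
print(model.output_shape)  # (None, 10)
```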
So in summary, the order of using batch normalization and dropout is: -> CONV\/FC -> BatchNorm -> ReLu(or other activation) -> Dropout -> CONV\/FC ->", "best_answers_score":0.7314, "library_name":"tensorflow", "question_url":"https:\/\/stackoverflow.com\/questions\/39691902\/ordering-of-batch-normalization-and-dropout", "best_answers_votes":272, "question_length":1790, "response_length":811 }, { "question":"Tensorflow import error: No module named 'tensorflow' I installed TensorFlow on my Windows Python 3.5 Anaconda environment The validation was successful (with a warning) ``` (tensorflow) C:\\>python ``` Python 3.5.3 |Intel Corporation| (default, Apr 27 2017, 17:03:30) [MSC v.1900 64 bit (AMD64)] on win32 Type \"help\", \"copyright\", \"credits\" or \"license\" for more information. Intel(R) Distribution for Python is brought to you by Intel Corporation. Please check out: https:\/\/software.intel.com\/en-us\/python-distribution ``` >>> import tensorflow as tf >>> hello = tf.constant('Hello, TensorFlow!') >>> sess = tf.Session() ``` 2017-10-04 11:06:13.569696: W C:\\tf_jenkins\\home\\workspace\\rel-win\\M\\windows\\PY\\35\\tensorflow\\core\\platform\\cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use AVX instructions, but these are available on your machine and could speed up CPU computations. ``` >>> print(sess.run(hello)) ``` b'Hello, TensorFlow!' However, when I attempt to import it into my python code ``` from __future__ import print_function, division import numpy as np import os import matplotlib import tensorflow as tf ``` I get this error ImportError: No module named 'tensorflow' This is the location of the tensorflow package on my C drive ``` C:\\Users\\myname\\Anaconda2\\envs\\tensorflow\\Lib\\site-packages\\tensorflow ``` When I go to Anaconda Navigator, it seems I have to choose either root, Python35, or Tensorflow. It looks like the Tensorflow environment includes Python35. 
Anaconda Navigator launcher had to be reinstalled recently, possibly due to the Tensorflow installation. Maybe if there were another way to set the environment to Tensorflow within Anaconda \/Spyder IDE other than the Navigator it might help Method of installing tensorflow ``` conda create --name tensorflow python=3.5; pip install --ignore-installed --upgrade tensorflow ``` I did try: uninstalling and reinstalling protobuf, as suggested by some blogs I see another SO user asked the same question in March, received no reply", "response":"The reason the Python 3.5 environment is unable to import Tensorflow is that Anaconda does not store the tensorflow package in the same environment. One solution is to create a new separate environment in Anaconda dedicated to TensorFlow with its own Spyder ``` conda create -n newenvt anaconda python=3.5 activate newenvt ``` and then install tensorflow into newenvt I found this primer helpful", "best_answers_score":0.7304, "library_name":"tensorflow", "question_url":"https:\/\/stackoverflow.com\/questions\/46568913\/tensorflow-import-error-no-module-named-tensorflow", "best_answers_votes":30, "question_length":2023, "response_length":391 }, { "question":"Keras (Tensorflow backend) Error - Tensor input_1:0, specified in either feed_devices or fetch_devices was not found in the Graph When trying to predict using a simple model I've previously trained I get the following error: Tensor input_1:0, specified in either feed_devices or fetch_devices was not found in the Graph at line: ``` seatbelt_model.predict(image_arr, verbose=1) ``` in code: ``` from tensorflow import keras import tensorflow as tf import numpy as np graph = tf.get_default_graph() seatbelt_model = keras.models.load_model(filepath='.\/graphs\/seatbelt_A_3_81.h5') class SeatbeltPredictor: INPUT_SHAPE = (-1, 120, 160, 1) @staticmethod def predict_seatbelt(image_arr): with graph.as_default(): image_arr = np.array(image_arr).reshape(SeatbeltPredictor.INPUT_SHAPE) predicted_labels
= seatbelt_model.predict(image_arr, verbose=1) return predicted_labels ``` The model has the following shape: ``` input_layer = keras.layers.Input(shape=(IMAGE_HEIGHT, IMAGE_WIDTH, 1)) conv_0 = keras.layers.Conv2D(filters=32, kernel_size=[5, 5], activation=tf.nn.relu, padding=\"SAME\")(input_layer) pool_0 = keras.layers.MaxPool2D(pool_size=[2, 2], strides=2, padding=\"VALID\")(conv_0) conv_1 = keras.layers.Conv2D(filters=32, kernel_size=[5, 5], activation=tf.nn.relu, padding=\"SAME\")(pool_0) pool_1 = keras.layers.MaxPool2D(pool_size=[2, 2], strides=2, padding=\"VALID\")(conv_1) flat_0 = keras.layers.Flatten()(pool_1) dense_0 = keras.layers.Dense(units=1024, activation=tf.nn.relu)(flat_0) drop_0 = keras.layers.Dropout(rate=0.4, trainable=True)(dense_0) dense_1 = keras.layers.Dense(units=2, activation=tf.nn.softmax)(drop_0) ``` If I run the following, I get a tensor result: ``` graph.get_tensor_by_name('input_1:0') ``` The name of the first layer is input_1 image_arr is of shape (1, 120, 160, 1) Tensorflow 1.12 Any ideas?", "response":"OK, after a lot of pain and suffering and diving into the bowels of tensorflow I found the following: Although the model has a Session and Graph, in some tensorflow methods, the default Session and Graph are used. 
To fix this I had to explicitly say that I wanted to use both my Session and my Graph as the default: ``` with session.as_default(): with session.graph.as_default(): ``` Full Code: ``` from tensorflow import keras import tensorflow as tf import numpy as np import log config = tf.ConfigProto( device_count={'GPU': 1}, intra_op_parallelism_threads=1, allow_soft_placement=True ) config.gpu_options.allow_growth = True config.gpu_options.per_process_gpu_memory_fraction = 0.6 session = tf.Session(config=config) keras.backend.set_session(session) seatbelt_model = keras.models.load_model(filepath='.\/seatbelt.h5') SEATBEL_INPUT_SHAPE = (-1, 120, 160, 1) def predict_seatbelt(image_arr): try: with session.as_default(): with session.graph.as_default(): image_arr = np.array(image_arr).reshape(SEATBEL_INPUT_SHAPE) predicted_labels = seatbelt_model.predict(image_arr, verbose=1) return predicted_labels except Exception as ex: log.log('Seatbelt Prediction Error', ex, ex.__traceback__.tb_lineno) ```
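A tiny sketch (the graph names g1 and g2 are made up) of how explicit graphs make op placement visible:

```python
import tensorflow as tf

g1 = tf.Graph()
g2 = tf.Graph()

# Each op lands in whichever graph is the default when it is created
with g1.as_default():
    a = tf.constant(1, name='a')
with g2.as_default():
    b = tf.constant(2, name='b')

print(a.graph is g1)       # True
print(b.graph is g2)       # True
print(a.graph is b.graph)  # False
```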
Since this statement costs you nothing, it's better to write it anyway. Just to be sure that if you refactor the code in the future, the operations defined belong to the graph you choose initially.", "best_answers_score":0.73, "library_name":"tensorflow", "question_url":"https:\/\/stackoverflow.com\/questions\/39614938\/why-do-we-need-tensorflow-tf-graph", "best_answers_votes":61, "question_length":207, "response_length":643 }, { "question":"What's difference between tf.sub and just minus operation in tensorflow? I am trying to use Tensorflow. Here is a very simple piece of code. ``` train = tf.placeholder(tf.float32, [1], name=\"train\") W1 = tf.Variable(tf.truncated_normal([1], stddev=0.1), name=\"W1\") loss = tf.pow(tf.sub(train, W1), 2) step = tf.train.GradientDescentOptimizer(0.1).minimize(loss) ``` Just ignore the optimization part (4th line). It will take a floating number and train W1 so as to decrease the squared difference. My question is simple. If I use just the minus sign instead of \"tf.sub\" as below, what is different? Will it cause a wrong result? ``` loss = tf.pow(train-W1, 2) ``` When I replace it, the result looks the same. If they are the same, why do we need to use the \"tf.add\/tf.sub\" things? Built-in back propagation calculation can be done only by the \"tf.*\" things?", "response":"Yes, - and + resolve to tf.sub and tf.add. If you look at the tensorflow code you will see that these operators on tf.Variable are overloaded with the tf.* methods. As to why both exist, I assume the tf.* ones exist for consistency. So sub and, say, matmul operations can be used in the same way. While the operator overloading is for convenience.", "best_answers_score":0.7293, "library_name":"tensorflow", "question_url":"https:\/\/stackoverflow.com\/questions\/36110834\/whats-difference-between-tf-sub-and-just-minus-operation-in-tensorflow", "best_answers_votes":20, "question_length":841, "response_length":343 }, { "question":"How to turn off dropout for testing in Tensorflow?
I am fairly new to Tensorflow and ML in general, so I hereby apologize for a (likely) trivial question. I use the dropout technique to improve learning rates of my network, and it seems to work just fine. Then, I would like to test the network on some data to see if it works like this: ``` def Ask(self, image): return self.session.run(self.model, feed_dict = {self.inputPh: image}) ``` Obviously, it yields different results each time as the dropout is still in place. One solution I can think of is to create two separate models - one for a training and the other one for an actual later use of the network, however, such a solution seems impractical to me. What's the common approach to solving this problem?", "response":"The easiest way is to change the keep_prob parameter using a placeholder_with_default: ``` prob = tf.placeholder_with_default(1.0, shape=()) layer = tf.nn.dropout(layer, prob) ``` in this way when you train you can set the parameter like this: ``` sess.run(train_step, feed_dict={prob: 0.5}) ``` and when you evaluate the default value of 1.0 is used.", "best_answers_score":0.728, "library_name":"tensorflow", "question_url":"https:\/\/stackoverflow.com\/questions\/44971349\/how-to-turn-off-dropout-for-testing-in-tensorflow", "best_answers_votes":57, "question_length":763, "response_length":351 }, { "question":"Restoring TensorFlow model I'm trying to restore TensorFlow model. I followed this example: http:\/\/nasdag.github.io\/blog\/2016\/01\/19\/classifying-bees-with-google-tensorflow\/ At the end of the code in the example I added these lines: ``` saver = tf.train.Saver() save_path = saver.save(sess, \"model.ckpt\") print(\"Model saved in file: %s\" % save_path) ``` Two files were created: checkpoint and model.ckpt. In a new python file (tomas_bees_predict.py), I have this code: ``` import tensorflow as tf saver = tf.train.Saver() with tf.Session() as sess: # Restore variables from disk. 
saver.restore(sess, \"model.ckpt\") print(\"Model restored.\") ``` However when I execute the code, I get this error: ``` Traceback (most recent call last): File \"tomas_bees_predict.py\", line 3, in saver = tf.train.Saver() File \"\/usr\/local\/lib\/python2.7\/site-packages\/tensorflow\/python\/training\/saver.py\", line 705, in __init__ raise ValueError(\"No variables to save\") ``` ValueError: No variables to save Is there a way to read mode.ckpt file and see what variables are saved? Or maybe someone can help with saving the model and restoring it based on the example described above? EDIT 1: I think I tried running the same code in order to recreate model structure and I was getting the error. I think it could be related to the fact that code described here isn't using named variables: http:\/\/nasdag.github.io\/blog\/2016\/01\/19\/classifying-bees-with-google-tensorflow\/ ``` def weight_variable(shape): initial = tf.truncated_normal(shape, stddev=0.1) return tf.Variable(initial) def bias_variable(shape): initial = tf.constant(0.1, shape=shape) return tf.Variable(initial) ``` So I did this experiment. I wrote two versions of the code (with and without named variables) to save the model and the code to restore the model. tensor_save_named_vars.py: ``` import tensorflow as tf # Create some variables. v1 = tf.Variable(1, name=\"v1\") v2 = tf.Variable(2, name=\"v2\") # Add an op to initialize the variables. init_op = tf.initialize_all_variables() # Add ops to save and restore all the variables. saver = tf.train.Saver() # Later, launch the model, initialize the variables, do some work, save the # variables to disk. with tf.Session() as sess: sess.run(init_op) print \"v1 = \", v1.eval() print \"v2 = \", v2.eval() # Save the variables to disk. save_path = saver.save(sess, \"\/tmp\/model.ckpt\") print \"Model saved in file: \", save_path ``` tensor_save_not_named_vars.py: ``` import tensorflow as tf # Create some variables. 
v1 = tf.Variable(1) v2 = tf.Variable(2) # Add an op to initialize the variables. init_op = tf.initialize_all_variables() # Add ops to save and restore all the variables. saver = tf.train.Saver() # Later, launch the model, initialize the variables, do some work, save the # variables to disk. with tf.Session() as sess: sess.run(init_op) print \"v1 = \", v1.eval() print \"v2 = \", v2.eval() # Save the variables to disk. save_path = saver.save(sess, \"\/tmp\/model.ckpt\") print \"Model saved in file: \", save_path ``` tensor_restore.py: ``` import tensorflow as tf # Create some variables. v1 = tf.Variable(0, name=\"v1\") v2 = tf.Variable(0, name=\"v2\") # Add ops to save and restore all the variables. saver = tf.train.Saver() # Later, launch the model, use the saver to restore variables from disk, and # do some work with the model. with tf.Session() as sess: # Restore variables from disk. saver.restore(sess, \"\/tmp\/model.ckpt\") print \"Model restored.\" print \"v1 = \", v1.eval() print \"v2 = \", v2.eval() ``` Here is what I get when I execute this code: ``` $ python tensor_save_named_vars.py I tensorflow\/core\/common_runtime\/local_device.cc:40] Local device intra op parallelism threads: 4 I tensorflow\/core\/common_runtime\/direct_session.cc:58] Direct session inter op parallelism threads: 4 v1 = 1 v2 = 2 Model saved in file: \/tmp\/model.ckpt $ python tensor_restore.py I tensorflow\/core\/common_runtime\/local_device.cc:40] Local device intra op parallelism threads: 4 I tensorflow\/core\/common_runtime\/direct_session.cc:58] Direct session inter op parallelism threads: 4 Model restored. 
v1 = 1 v2 = 2 $ python tensor_save_not_named_vars.py I tensorflow\/core\/common_runtime\/local_device.cc:40] Local device intra op parallelism threads: 4 I tensorflow\/core\/common_runtime\/direct_session.cc:58] Direct session inter op parallelism threads: 4 v1 = 1 v2 = 2 Model saved in file: \/tmp\/model.ckpt $ python tensor_restore.py I tensorflow\/core\/common_runtime\/local_device.cc:40] Local device intra op parallelism threads: 4 I tensorflow\/core\/common_runtime\/direct_session.cc:58] Direct session inter op parallelism threads: 4 W tensorflow\/core\/common_runtime\/executor.cc:1076] 0x7ff953881e40 Compute status: Not found: Tensor name \"v2\" not found in checkpoint files \/tmp\/model.ckpt [[Node: save\/restore_slice_1 = RestoreSlice[dt=DT_INT32, preferred_shard=-1, _device=\"\/job:localhost\/replica:0\/task:0\/cpu:0\"](_recv_save\/Const_0, save\/restore_slice_1\/tensor_name, save\/restore_slice_1\/shape_and_slice)]] W tensorflow\/core\/common_runtime\/executor.cc:1076] 0x7ff953881e40 Compute status: Not found: Tensor name \"v1\" not found in checkpoint files \/tmp\/model.ckpt [[Node: save\/restore_slice = RestoreSlice[dt=DT_INT32, preferred_shard=-1, _device=\"\/job:localhost\/replica:0\/task:0\/cpu:0\"](_recv_save\/Const_0, save\/restore_slice\/tensor_name, save\/restore_slice\/shape_and_slice)]] Traceback (most recent call last): File \"tensor_restore.py\", line 14, in saver.restore(sess, \"\/tmp\/model.ckpt\") File \"\/usr\/local\/lib\/python2.7\/site-packages\/tensorflow\/python\/training\/saver.py\", line 891, in restore sess.run([self._restore_op_name], {self._filename_tensor_name: save_path}) File \"\/usr\/local\/lib\/python2.7\/site-packages\/tensorflow\/python\/client\/session.py\", line 368, in run results = self._do_run(target_list, unique_fetch_targets, feed_dict_string) File \"\/usr\/local\/lib\/python2.7\/site-packages\/tensorflow\/python\/client\/session.py\", line 444, in _do_run e.code) tensorflow.python.framework.errors.NotFoundError: 
Tensor name \"v2\" not found in checkpoint files \/tmp\/model.ckpt [[Node: save\/restore_slice_1 = RestoreSlice[dt=DT_INT32, preferred_shard=-1, _device=\"\/job:localhost\/replica:0\/task:0\/cpu:0\"](_recv_save\/Const_0, save\/restore_slice_1\/tensor_name, save\/restore_slice_1\/shape_and_slice)]] Caused by op u'save\/restore_slice_1', defined at: File \"tensor_restore.py\", line 8, in saver = tf.train.Saver() File \"\/usr\/local\/lib\/python2.7\/site-packages\/tensorflow\/python\/training\/saver.py\", line 713, in __init__ restore_sequentially=restore_sequentially) File \"\/usr\/local\/lib\/python2.7\/site-packages\/tensorflow\/python\/training\/saver.py\", line 432, in build filename_tensor, vars_to_save, restore_sequentially, reshape) File \"\/usr\/local\/lib\/python2.7\/site-packages\/tensorflow\/python\/training\/saver.py\", line 191, in _AddRestoreOps values = self.restore_op(filename_tensor, vs, preferred_shard) File \"\/usr\/local\/lib\/python2.7\/site-packages\/tensorflow\/python\/training\/saver.py\", line 106, in restore_op preferred_shard=preferred_shard) File \"\/usr\/local\/lib\/python2.7\/site-packages\/tensorflow\/python\/ops\/io_ops.py\", line 189, in _restore_slice preferred_shard, name=name) File \"\/usr\/local\/lib\/python2.7\/site-packages\/tensorflow\/python\/ops\/gen_io_ops.py\", line 271, in _restore_slice preferred_shard=preferred_shard, name=name) File \"\/usr\/local\/lib\/python2.7\/site-packages\/tensorflow\/python\/ops\/op_def_library.py\", line 664, in apply_op op_def=op_def) File \"\/usr\/local\/lib\/python2.7\/site-packages\/tensorflow\/python\/framework\/ops.py\", line 1834, in create_op original_op=self._default_original_op, op_def=op_def) File \"\/usr\/local\/lib\/python2.7\/site-packages\/tensorflow\/python\/framework\/ops.py\", line 1043, in __init__ self._traceback = _extract_stack() ``` So perhaps the original code (see the external link above) could be modified to something like this: ``` def weight_variable(shape): initial = 
tf.truncated_normal(shape, stddev=0.1) weight_var = tf.Variable(initial, name=\"weight_var\") return weight_var def bias_variable(shape): initial = tf.constant(0.1, shape=shape) bias_var = tf.Variable(initial, name=\"bias_var\") return bias_var ``` But then the question I have: is restoring weight_var and bias_var variables sufficient to implement the prediction? I did the training on the powerful machine with GPU and I would like to copy the model to the less powerful computer without GPU to run predictions.", "response":"There's a similar question here: Tensorflow: how to save\/restore a model? TLDR; you need to recreate model structure using same sequence of TensorFlow API commands before using Saver object to restore the weights This is suboptimal, follow Github issue #696 for progress on making this easier", "best_answers_score":0.7275, "library_name":"tensorflow", "question_url":"https:\/\/stackoverflow.com\/questions\/34982492\/restoring-tensorflow-model", "best_answers_votes":12, "question_length":8373, "response_length":292 }, { "question":"AttributeError: module 'tensorflow' has no attribute 'reset_default_graph' I have installed tensorflow version r0.11. In my file name cartpole.py I have imported tensorflow: ``` import tensorflow as tf ``` and use it: ``` tf.reset_default_graph() ``` Trying to run my project in PyCharm I get this error: ``` in tf.reset_default_graph() AttributeError: module 'tensorflow' has no attribute 'reset_default_graph' ``` How can I fix this error?", "response":"This function is deprecated. Use tf.compat.v1.reset_default_graph() instead. Update This is not the only function to be out of date. 
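For instance, a sketch of the replacement call under TF 2.x (assuming only that the compat.v1 module is present):

```python
import tensorflow as tf

# The old top-level function now lives under the v1 compatibility module
tf.compat.v1.reset_default_graph()

# After the reset, the default graph contains no operations
g = tf.compat.v1.get_default_graph()
print(len(g.get_operations()))
```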
Check out this answer for release notes and a conversion script.", "best_answers_score":0.7272, "library_name":"tensorflow", "question_url":"https:\/\/stackoverflow.com\/questions\/40782271\/attributeerror-module-tensorflow-has-no-attribute-reset-default-graph", "best_answers_votes":94, "question_length":442, "response_length":197 }, { "question":"Parallelization strategies for deep learning What strategies and forms of parallelization are feasible and available for training and serving a neural network?: inside a machine across cores (e.g. GPU \/ TPU \/ CPU) across machines on a network or a rack I'm also looking for evidence for how they may also be used in e.g. TensorFlow, PyTorch or MXNet. Training To my knowledge, when training large neural networks on large datasets, one could at least have: Different cores or machines operate on different parts of the graph (\"graph splitting\"). E.g. backpropagation through the graph itself can be parallelized e.g. by having different layers hosted on different machines since (I think?) the autodiff graph is always a DAG. Different cores or machines operate on different samples of data (\"data splitting\"). In SGD, the computation of gradients across batches or samples can also be parallelized (e.g. the gradients can be combined after computing them independently on different batches). I believe this is also called gradient accumulation (?). When is each strategy better for what type of problem or neural network? Which modes are supported by modern libraries? and can one combine all four (2x2) strategies? On top of that, I have read about: Asynchronous training Synchronous training but I don't know what exactly that refers to, e.g. is it the computation of gradients on different data batches or the computation of gradients on different subgraphs? Or perhaps it refers to something else altogether? 
Serving If the network is huge, prediction \/ inference may also be slow, and the model may not fit on a single machine in memory at serving time. Are there any known multi-core and multi-node prediction solutions that can handle such models?", "response":"Training In general, there are two strategies of parallelizing model training: data parallelism and model parallelism. 1. Data parallelism This strategy splits training data into N partitions, each of which will be trained on different \u201cdevices\u201d (different CPU cores, GPUs, or even machines). In contrast to training without data parallelism which produces one gradient per minibatch, we now have N gradients for each minibatch step. The next question is how we should combine these N gradients. One way to do it is by averaging all the N gradients and then updating the model parameters once based on the average. This technique is called synchronous distributed SGD. By doing the average, we have a more accurate gradient, but at the cost of waiting for all the devices to finish computing their own local gradients. Another way is by not combining the gradients \u2014 each gradient will instead be used to update the model parameters independently. So, there will be N parameter updates for each minibatch step, in contrast to only one for the previous technique. This technique is called asynchronous distributed SGD. Because it doesn't have to wait for other devices to finish, the async approach will take less time to complete a minibatch step than the sync approach does. However, the async approach will produce a noisier gradient, so it might need to complete more minibatch steps to catch up with the performance (in terms of loss) of the sync approach. There are many papers proposing some improvements and optimizations on either approach, but the main idea is generally the same as described above. In the literature there's been some disagreement on which technique is better in practice.
At the end most people now settle on the synchronous approach. Data Parallelism in PyTorch To do synchronous SGD, we can wrap our model with torch.nn.parallel.DistributedDataParallel: ``` from torch.nn.parallel import DistributedDataParallel as DDP # `model` is the model we previously initialized model = ... # `rank` is a device number starting from 0 model = model.to(rank) ddp_model = DDP(model, device_ids=[rank]) ``` Then we can train it similarly. For more details, you can refer to the official tutorial. For doing asynchronous SGD in PyTorch, we need to implement it more manually since there is no wrapper similar to DistributedDataParallel for it. Data Parallelism in TensorFlow\/Keras For synchronous SGD, we can use tf.distribute.MirroredStrategy to wrap the model initalization: ``` import tensorflow as tf strategy = tf.distribute.MirroredStrategy() with strategy.scope(): model = Model(...) model.compile(...) ``` Then we can train it as usual. For more details, you can refer to the official guides on Keras website and TensorFlow website. For asynchronous SGD, we can use tf.distribute.experimental.ParameterServerStrategy similarly. 2. Model Parallelism This strategy splits the model into N parts, each of which will be computed on different devices. A common way to split the model is based on layers: different sets of layers are placed on different devices. But we can also split it more intricately depending on the model architecture. Model Parallelism in TensorFlow and PyTorch To implement model parallelism in either TensorFlow or PyTorch, the idea is the same: to move some model parameters into a different device. In PyTorch we can use torch.nn.Module.to method to move a module into a different device. 
For example, suppose we want to create two linear layers each of which is placed on a different GPU: ``` import torch.nn as nn linear1 = nn.Linear(16, 8).to('cuda:0') linear2 = nn.Linear(8, 4).to('cuda:1') ``` In TensorFlow we can use tf.device to place an operation into a specific device. To implement the PyTorch example above in TensorFlow: ``` import tensorflow as tf from tensorflow.keras import layers with tf.device('\/GPU:0'): linear1 = layers.Dense(8, input_dim=16) with tf.device('\/GPU:1'): linear2 = layers.Dense(4, input_dim=8) ``` For more details you can refer to the official PyTorch tutorial; or if you use TensorFlow you can even use a more high-level library like mesh. 3. Hybrid: Data and Model Parallelism Recall that data parallelism only splits the training data, whereas model parallelism only splits the model structures. If we have a model so large that even after using either parallelism strategy it still doesn't fit in the memory, we can always do both. In practice most people prefer data parallelism to model parallelism since the former is more decoupled (in fact, independent) from the model architecture than the latter. That is, by using data parallelism they can change the model architecture as they like, without worrying which part of the model should be parallelized. Model Inference \/ Serving Parallelizing model serving is easier than parallelizing model training since the model parameters are already fixed and each request can be processed independently. Similar to scaling a regular Python web service, we can scale model serving by spawning more processes (to workaround Python's GIL) in a single machine, or even spawning more machine instances. When we use a GPU to serve the model, though, we need to do more work to scale it. Because of how concurrency is handled differently by a GPU compared to a CPU, in order to maximize the performance, we need to do inference request batching. 
The idea is that when a request comes, instead of immediately processing it, we wait some timeout duration for other requests to come. When the timeout is up, even if the number of requests is only one, we batch them all to be processed on the GPU. In order to minimize the average request latency, we need to find the optimal timeout duration. To find it we need to observe that there is a trade-off between minimizing the timeout duration and maximizing the batch size. If the timeout is too low, the batch size will be small, so the GPU will be underutilized. But if the timeout is too high, the requests that come early will wait too long before they get processed. So, the optimal timeout duration depends on the model complexity (hence, the inference duration) and the average number of requests received per second. Implementing a scheduler to do request batching is not a trivial task, so instead of doing it manually, we'd better use TensorFlow Serving or PyTorch Serve, which already support it. To learn more about parallel and distributed learning, you can read this review paper.", "best_answers_score":0.7269, "library_name":"tensorflow", "question_url":"https:\/\/stackoverflow.com\/questions\/62759940\/parallelization-strategies-for-deep-learning", "best_answers_votes":13, "question_length":1765, "response_length":6453 }, { "question":"Convert Keras model to C++ [closed] I am using Keras (with Theano) to train my CNN model. Does anyone have an idea how I can use it in my C++ application? Has anyone tried something similar?
I have an idea to write some Python code that will generate C++ code with the network functions - any suggestions on it? I found a similar question here how to use Tensorflow Keras model in C++ but without an answer.", "response":"To answer my own question and have a solution - I wrote a plain c++ solution called keras2cpp (its code is available on github). In this solution you store the network architecture (in json) and weights (in hdf5). Then you can dump a network to a plain text file with the provided script. You can use the obtained text file with the network in pure c++ code. There are no dependencies on python libraries or hdf5. It should work for both the theano and tensorflow backends.", "best_answers_score":0.7265, "library_name":"tensorflow", "question_url":"https:\/\/stackoverflow.com\/questions\/36720498\/convert-keras-model-to-c", "best_answers_votes":69, "question_length":848, "response_length":444 }, { "question":"Printing extra training metrics with Tensorflow Estimator Is there a way to let Tensorflow print extra training metrics (e.g. batch accuracy) when using the Estimator API? One can add summaries and view the result in Tensorboard (see another post), but I was wondering if there is an elegant way to get the scalar summary values printed while training. This already happens for training loss, e.g.: ``` loss = 0.672677, step = 2901 (52.995 sec) ``` but it would be nice to have e.g. ``` loss = 0.672677, accuracy = 0.54678, step = 2901 (52.995 sec) ``` without too much trouble. I am aware that most of the time it is more useful to plot test set accuracy (I am already doing this with a validation monitor), but in this case I am also interested in training batch accuracy.", "response":"From what I've read it is not possible to change it by passing a parameter. You can try to do it by creating a logging hook and passing it into the estimator run.
In the body of the model_fn function for your estimator: ``` logging_hook = tf.train.LoggingTensorHook({\"loss\" : loss, \"accuracy\" : accuracy}, every_n_iter=10) # Rest of the function return tf.estimator.EstimatorSpec( ...params... training_hooks = [logging_hook]) ``` EDIT: To see the output you must also set the logging verbosity high enough (unless it's your default): tf.logging.set_verbosity(tf.logging.INFO)", "best_answers_score":0.7234, "library_name":"tensorflow", "question_url":"https:\/\/stackoverflow.com\/questions\/45353389\/printing-extra-training-metrics-with-tensorflow-estimator", "best_answers_votes":26, "question_length":773, "response_length":561 }, { "question":"No module named 'tqdm' I am running the following pixel recurrent neural network (RNN) code using Python 3.6 ``` import os import logging import numpy as np from tqdm import trange import tensorflow as tf from utils import * from network import Network from statistic import Statistic ``` However, there was an error: ``` ModuleNotFoundError: No module named 'tqdm' ``` Does anyone know how to solve it?", "response":"You need to install the tqdm module; you can do it using pip. ``` pip install tqdm ``` For more info: tqdm", "best_answers_score":0.723, "library_name":"tensorflow", "question_url":"https:\/\/stackoverflow.com\/questions\/47529792\/no-module-named-tqdm", "best_answers_votes":87, "question_length":403, "response_length":111 }, { "question":"TensorBoard Embedding Example? [closed] Closed. This question is seeking recommendations for software libraries, tutorials, tools, books, or other off-site resources. It does not meet Stack Overflow guidelines. It is not currently accepting answers. We don\u2019t allow questions seeking recommendations for software libraries, tutorials, tools, books, or other off-site resources. You can edit the question so it can be answered with facts and citations. Closed 2 years ago.
Improve this question I'm looking for a tensorboard embedding example, with iris data for example like the embedding projector http:\/\/projector.tensorflow.org\/ But unfortunately i couldn't find one. Just a little bit information about how to do it in https:\/\/www.tensorflow.org\/how_tos\/embedding_viz\/ Does someone knows a basic tutorial for this functionality? Basics: 1) Setup a 2D tensor variable(s) that holds your embedding(s). ``` embedding_var = tf.Variable(....) ``` 2) Periodically save your embeddings in a LOG_DIR. 3) Associate metadata with your embedding.", "response":"I've used FastText's pre-trained word vectors with TensorBoard. ```py import os import tensorflow as tf import numpy as np import fasttext from tensorflow.contrib.tensorboard.plugins import projector # load model word2vec = fasttext.load_model('wiki.en.bin') # create a list of vectors embedding = np.empty((len(word2vec.words), word2vec.dim), dtype=np.float32) for i, word in enumerate(word2vec.words): embedding[i] = word2vec[word] # setup a TensorFlow session tf.reset_default_graph() sess = tf.InteractiveSession() X = tf.Variable([0.0], name='embedding') place = tf.placeholder(tf.float32, shape=embedding.shape) set_x = tf.assign(X, place, validate_shape=False) sess.run(tf.global_variables_initializer()) sess.run(set_x, feed_dict={place: embedding}) # write labels with open('log\/metadata.tsv', 'w') as f: for word in word2vec.words: f.write(word + '\\n') # create a TensorFlow summary writer summary_writer = tf.summary.FileWriter('log', sess.graph) config = projector.ProjectorConfig() embedding_conf = config.embeddings.add() embedding_conf.tensor_name = 'embedding:0' embedding_conf.metadata_path = os.path.join('log', 'metadata.tsv') projector.visualize_embeddings(summary_writer, config) # save the model saver = tf.train.Saver() saver.save(sess, os.path.join('log', \"model.ckpt\")) ``` Then run this command in your terminal: ``` tensorboard --logdir=log ```", "best_answers_score":0.7223, 
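Step 3 of the basics listed in the question above ("associate metadata with your embedding") comes down to writing one label per line, in the same row order as the embedding tensor, because the projector matches line i of metadata.tsv to row i of the tensor. A small plain-Python helper that writes the file and guards against a row/label mismatch (the helper name is mine, not part of TensorFlow's API):

```python
def write_projector_metadata(path, labels, n_rows):
    """Write a metadata.tsv for the TensorBoard projector.

    Line i of the file labels row i of the embedding tensor, so the
    label count must match the number of embedding rows exactly.
    """
    if len(labels) != n_rows:
        raise ValueError(
            f"{len(labels)} labels for {n_rows} embedding rows - "
            "the projector would misalign labels and points")
    with open(path, "w", encoding="utf-8") as f:
        for label in labels:
            f.write(label + "\n")

# Usage: three labels for a hypothetical (3, dim) embedding matrix.
write_projector_metadata("metadata.tsv", ["cat", "dog", "bird"], n_rows=3)
with open("metadata.tsv", encoding="utf-8") as f:
    print(f.read().splitlines())  # -> ['cat', 'dog', 'bird']
```

The same alignment rule is why the answers above write the labels in exactly the iteration order used to fill the embedding matrix.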
"library_name":"tensorflow", "question_url":"https:\/\/stackoverflow.com\/questions\/41258391\/tensorboard-embedding-example", "best_answers_votes":21, "question_length":1038, "response_length":1371 }, { "question":"How to apply gradient clipping in TensorFlow? Considering the example code. I would like to know How to apply gradient clipping on this network on the RNN where there is a possibility of exploding gradients. ``` tf.clip_by_value(t, clip_value_min, clip_value_max, name=None) ``` This is an example that could be used but where do I introduce this ? In the def of RNN ``` lstm_cell = rnn_cell.BasicLSTMCell(n_hidden, forget_bias=1.0) # Split data because rnn cell needs a list of inputs for the RNN inner loop _X = tf.split(0, n_steps, _X) # n_steps tf.clip_by_value(_X, -1, 1, name=None) ``` But this doesn't make sense as the tensor _X is the input and not the grad what is to be clipped? Do I have to define my own Optimizer for this or is there a simpler option?", "response":"Gradient clipping needs to happen after computing the gradients, but before applying them to update the model's parameters. In your example, both of those things are handled by the AdamOptimizer.minimize() method. In order to clip your gradients you'll need to explicitly compute, clip, and apply them as described in this section in TensorFlow's API documentation. 
Specifically you'll need to substitute the call to the minimize() method with something like the following: ``` optimizer = tf.train.AdamOptimizer(learning_rate=learning_rate) gvs = optimizer.compute_gradients(cost) capped_gvs = [(tf.clip_by_value(grad, -1., 1.), var) for grad, var in gvs] train_op = optimizer.apply_gradients(capped_gvs) ```", "best_answers_score":0.7221, "library_name":"tensorflow", "question_url":"https:\/\/stackoverflow.com\/questions\/36498127\/how-to-apply-gradient-clipping-in-tensorflow", "best_answers_votes":159, "question_length":765, "response_length":709 }, { "question":"ValueError: Input 0 of layer sequential is incompatible with the layer: : expected min_ndim=4, found ndim=3. Full shape received: [8, 28, 28] I keep on getting this error related to input shape. Any help would be highly appreciated. Thanks! ``` import tensorflow as tf (xtrain, ytrain), (xtest, ytest) = tf.keras.datasets.mnist.load_data() model = tf.keras.Sequential([ tf.keras.layers.Conv2D(16, kernel_size=3, activation='relu'), tf.keras.layers.MaxPooling2D(pool_size=2), tf.keras.layers.Conv2D(32, kernel_size=3, activation='relu'), tf.keras.layers.MaxPooling2D(pool_size=2), tf.keras.layers.Flatten(), tf.keras.layers.Dense(64, activation='relu'), tf.keras.layers.Dense(10, activation='softmax') ]) model.compile(loss='categorical_crossentropy', optimizer='adam', metrics='accuracy') history = model.fit(xtrain, ytrain, validation_data=(xtest, ytest), epochs=10, batch_size=8) ``` ValueError: Input 0 of layer sequential is incompatible with the layer: : expected min_ndim=4, found ndim=3. Full shape received: [8, 28, 28]", "response":"The input layer of the model you created needs a 4-dimensional tensor to work with, but the x_train tensor you are passing to it has only 3 dimensions. This means that you have to reshape your training set with .reshape(n_images, 28, 28, 1). Now you have added an extra dimension without changing the data, and your model is ready to run.
You need to reshape your x_train tensor to 4 dimensions before training your model. For example: ``` x_train = x_train.reshape(-1, 28, 28, 1) ``` For more info on Keras inputs, check this answer", "best_answers_score":0.7221, "library_name":"tensorflow", "question_url":"https:\/\/stackoverflow.com\/questions\/63279168\/valueerror-input-0-of-layer-sequential-is-incompatible-with-the-layer-expect", "best_answers_votes":20, "question_length":1027, "response_length":531 }, { "question":"Keras : How should I prepare input data for RNN? I'm having trouble with preparing input data for an RNN in Keras. Currently, my training data dimension is: (6752, 600, 13) 6752: number of training samples 600: number of time steps 13: size of feature vectors (the vector is in float) X_train and Y_train are both in this dimension. I want to prepare this data to be fed into SimpleRNN on Keras. Suppose that we're going through time steps, from step #0 to step #599. Let's say I want to use input_length = 5, which means that I want to use the 5 most recent inputs. (e.g. step #10, #11, #12, #13, #14 @ step #14). How should I reshape X_train? Should it be (6752, 5, 600, 13) or should it be (6752, 600, 5, 13)? And what shape should Y_train be in? Should it be (6752, 600, 13) or (6752, 1, 600, 13) or (6752, 600, 1, 13)?", "response":"If you only want to predict the output using the most recent 5 inputs, there is no need to ever provide the full 600 time steps of any training sample. My suggestion would be to pass the training data in the following manner: ``` t=0 t=1 t=2 t=3 t=4 t=5 ... t=598 t=599 sample0 |---------------------| sample0 |---------------------| sample0 |----------------- ... sample0 ----| sample0 ----------| sample1 |---------------------| sample1 |---------------------| sample1 |----------------- ....
sample6751 ----| sample6751 ----------| ``` The total number of training sequences will sum up to ``` (600 - 4) * 6752 = 4024192 # (nb_timesteps - discarded_tailing_timesteps) * nb_samples ``` Each training sequence consists of 5 time steps. At each time step of every sequence you pass all 13 elements of the feature vector. Subsequently, the shape of the training data will be (4024192, 5, 13). This loop can reshape your data: ```py input = np.random.rand(6752,600,13) nb_timesteps = 5 flag = 0 for sample in range(input.shape[0]): tmp = np.array([input[sample,i:i+nb_timesteps,:] for i in range(input.shape[1] - nb_timesteps + 1)]) if flag==0: new_input = tmp flag = 1 else: new_input = np.concatenate((new_input,tmp)) ```", "best_answers_score":0.722, "library_name":"tensorflow", "question_url":"https:\/\/stackoverflow.com\/questions\/36992855\/keras-how-should-i-prepare-input-data-for-rnn", "best_answers_votes":23, "question_length":805, "response_length":1226 }, { "question":"Cannot import name 'tf_utils' when using importing keras I'm using Oracle Linux 7.7, and I installed python3.6 using yum (epel repos). Then I install tensorflow 1.5(since if it goes newer ver I got core dumped) and keras. If I'm importing tensorflow, I got nothing. But when I import keras, I got ``` ImportError: cannot import name 'tf_utils' ``` Here's the full output: ``` $ python Python 3.6.8 (default, Aug 7 2019, 08:02:28) [GCC 4.8.5 20150623 (Red Hat 4.8.5-39.0.1)] on linux Type \"help\", \"copyright\", \"credits\" or \"license\" for more information. >>> import tensorflow >>> import keras Using TensorFlow backend. Traceback (most recent call last): File \"\", line 1, in File \"\/home\/reyhan\/project\/.virtualenvs\/keras_tf\/lib\/python3.6\/site-packages\/keras\/__init__.py\", line 3, in from . import utils File \"\/home\/reyhan\/project\/.virtualenvs\/keras_tf\/lib\/python3.6\/site-packages\/keras\/utils\/__init__.py\", line 6, in from . 
import conv_utils File \"\/home\/reyhan\/project\/.virtualenvs\/keras_tf\/lib\/python3.6\/site-packages\/keras\/utils\/conv_utils.py\", line 9, in from .. import backend as K File \"\/home\/reyhan\/project\/.virtualenvs\/keras_tf\/lib\/python3.6\/site-packages\/keras\/backend\/__init__.py\", line 1, in from .load_backend import epsilon File \"\/home\/reyhan\/project\/.virtualenvs\/keras_tf\/lib\/python3.6\/site-packages\/keras\/backend\/load_backend.py\", line 90, in from .tensorflow_backend import * File \"\/home\/reyhan\/project\/.virtualenvs\/keras_tf\/lib\/python3.6\/site-packages\/keras\/backend\/tensorflow_backend.py\", line 13, in from tensorflow.python.keras.utils import tf_utils ImportError: cannot import name 'tf_utils' ``` I was using python 3.6 by building it from source before and keras worked fine but since I can't install tkinter for pyplot I uninstall it and using the one from yum instead.", "response":"Seems like it was a problem with keras 2.3.0, I installed keras 2.1.5 using pip and it works fine.", "best_answers_score":0.7217, "library_name":"tensorflow", "question_url":"https:\/\/stackoverflow.com\/questions\/57985406\/cannot-import-name-tf-utils-when-using-importing-keras", "best_answers_votes":68, "question_length":1796, "response_length":98 }, { "question":"TensorFlow Object Detection API print objects found on image to console I'm trying to return list of objects that have been found at image with TF Object Detection API. To do that I'm using print([category_index.get(i) for i in classes[0]]) to print list of objects that have been found or print(num_detections) to display number of found objects, but in both cases it gives me list with 300 values or simply value [300.] correspondingly. How it`s possible to return only that objects that are on image? Or if there is some mistake please help to figure out what is wrong. I was using Faster RCNN models config file and checkpoints while training. 
Be sure it really detects few objects at image, here it is: My code: ``` import numpy as np import os import six.moves.urllib as urllib import sys import tarfile import tensorflow as tf import zipfile from collections import defaultdict from io import StringIO from matplotlib import pyplot as plt from PIL import Image from object_detection.utils import label_map_util from object_detection.utils import visualization_utils as vis_util PATH_TO_CKPT = 'frozen_graph\/frozen_inference_graph.pb' PATH_TO_LABELS = 'object_detection\/pascal_label_map.pbtxt' NUM_CLASSES = 7 detection_graph = tf.Graph() with detection_graph.as_default(): od_graph_def = tf.GraphDef() with tf.gfile.GFile(PATH_TO_CKPT, 'rb') as fid: serialized_graph = fid.read() od_graph_def.ParseFromString(serialized_graph) tf.import_graph_def(od_graph_def, name='') label_map = label_map_util.load_labelmap(PATH_TO_LABELS) categories = label_map_util.convert_label_map_to_categories(label_map, max_num_classes=NUM_CLASSES, use_display_name=True) category_index = label_map_util.create_category_index(categories) def load_image_into_numpy_array(image): (im_width, im_height) = image.size return np.array(image.getdata()).reshape( (im_height, im_width, 3)).astype(np.uint8) PATH_TO_TEST_IMAGES_DIR = 'object_detection\/test_images\/' TEST_IMAGE_PATHS = [ os.path.join(PATH_TO_TEST_IMAGES_DIR, 'image{}.jpg'.format(i)) for i in range(1, 2) ] IMAGE_SIZE = (12, 8) with detection_graph.as_default(): with tf.Session(graph=detection_graph) as sess: sess.run(tf.global_variables_initializer()) img = 1 for image_path in TEST_IMAGE_PATHS: image = Image.open(image_path) image_np = load_image_into_numpy_array(image) # Expand dimensions since the model expects images to have shape: [1, None, None, 3] image_np_expanded = np.expand_dims(image_np, axis=0) image_tensor = detection_graph.get_tensor_by_name('image_tensor:0') # Each box represents a part of the image where a particular object was detected. 
boxes = detection_graph.get_tensor_by_name('detection_boxes:0') scores = detection_graph.get_tensor_by_name('detection_scores:0') classes = detection_graph.get_tensor_by_name('detection_classes:0') num_detections = detection_graph.get_tensor_by_name('num_detections:0') (boxes, scores, classes, num_detections) = sess.run( [boxes, scores, classes, num_detections], feed_dict={image_tensor: image_np_expanded}) vis_util.visualize_boxes_and_labels_on_image_array( image_np, np.squeeze(boxes), np.squeeze(classes).astype(np.int32), np.squeeze(scores), category_index, use_normalized_coordinates=True, line_thickness=8) plt.figure(figsize=IMAGE_SIZE) plt.imsave('RESULTS\/' + str(img) + '.jpg', image_np) img += 1 # Return found objects print([category_index.get(i) for i in classes[0]]) print(boxes.shape) print(num_detections) ``` Which gives following result: ``` [{'name': 'marlboro_red', 'id': 7}, {'name': 'marlboro_red', 'id': 7}, {'name': 'marlboro_red', 'id': 7}, {'name': 'chesterfield_blue', 'id': 1}, {'name': 'chesterfield_blue', 'id': 1}, {'name': 'marlboro_gold', 'id': 5}, {'name': 'marlboro_red', 'id': 7}, {'name': 'lucky_strike_red', 'id': 4}, {'name': 'marlboro_mentol', 'id': 6}, {'name': 'lucky_strike_blue', 'id': 3}, {'name': 'marlboro_mentol', 'id': 6}, {'name': 'marlboro_gold', 'id': 5}, {'name': 'lucky_strike_red', 'id': 4}, {'name': 'chesterfield_red', 'id': 2}, {'name': 'marlboro_mentol', 'id': 6}, {'name': 'marlboro_gold', 'id': 5}, {'name': 'marlboro_mentol', 'id': 6}, {'name': 'chesterfield_blue', 'id': 1}, {'name': 'chesterfield_blue', 'id': 1}, {'name': 'marlboro_gold', 'id': 5}, {'name': 'marlboro_gold', 'id': 5}, {'name': 'chesterfield_red', 'id': 2}, {'name': 'chesterfield_red', 'id': 2}, {'name': 'lucky_strike_red', 'id': 4}, {'name': 'marlboro_gold', 'id': 5}, {'name': 'marlboro_gold', 'id': 5}, {'name': 'marlboro_mentol', 'id': 6}, {'name': 'chesterfield_blue', 'id': 1}, {'name': 'marlboro_red', 'id': 7}, {'name': 'chesterfield_red', 'id': 2}, 
{'name': 'lucky_strike_red', 'id': 4}, ...(remaining entries truncated - one similar entry is printed for each of the 300 detections)...] (1, 300, 4) [ 300.] ``` Thanks in advance for any information! UPD: Thousand thanks to everyone who helped with this question. The following line of code is exactly what I needed; it gives me a list with the objects that were found, so I can do other operations on them. ``` print [category_index.get(value) for index,value in enumerate(classes[0]) if scores[0,index] > 0.5] ``` ", "response":"As far as I can see you have 300 detections. visualize_boxes_and_labels_on_image_array shows very few of them because min_score_thresh=.5 (this is the default value) is too high for most of them. If you want to add such filtering to the output you can write: ``` min_score_thresh = 0.5 print([category_index.get(i) for i in classes[0] if scores[0, i] > min_score_thresh]) ``` You can change min_score_thresh to choose the threshold value you need. It may be useful to print the score values together with the category names.", "best_answers_score":0.7215, "library_name":"tensorflow", "question_url":"https:\/\/stackoverflow.com\/questions\/45674696\/tensorflow-object-detection-api-print-objects-found-on-image-to-console", "best_answers_votes":10, "question_length":15245, "response_length":514 }, { "question":"ModuleNotFoundError: No module named 'tensorflow.examples' When I import tensorflow ``` import tensorflow as tf ``` I don't get an error. However, I do get the error below. I'm using spyder if that helps. As per other questions, I ensured up to date (v1.8) tensorflow using both conda and then pip installs. This didn't resolve the issue. Please assist.
``` import tensorflow.examples.tutorials.mnist.input_data as input_data ModuleNotFoundError: No module named 'tensorflow.examples' ```", "response":"I think you should use it like below on TensorFlow 2: ``` import tensorflow_datasets mnist = tensorflow_datasets.load('mnist') ```", "best_answers_score":0.7213, "library_name":"tensorflow", "question_url":"https:\/\/stackoverflow.com\/questions\/50313441\/modulenotfounderror-no-module-named-tensorflow-examples", "best_answers_votes":29, "question_length":488, "response_length":127 }, { "question":"pip3: command not found I want to install TensorFlow following these instructions: https:\/\/web.archive.org\/web\/20170627102751\/https:\/\/www.tensorflow.org\/versions\/r0.12\/get_started\/os_setup#pip_installation But when I try this code on the terminal, it returns an error. ``` $ sudo pip3 install --upgrade $TF_BINARY_URL sudo: pip3: command not found ``` So I installed Homebrew and tried to uninstall and reinstall python3-pip, but it didn't work. ``` MakotonoMacBook-ea:~ makotomiyazaki$ brew uninstall python3-pip Error: No such keg: \/usr\/local\/Cellar\/python3-pip MakotonoMacBook-ea:~ makotomiyazaki$ brew install python3-pip Error: No available formula with the name \"python3-pip\" ==> Searching for a previously deleted formula... Warning: homebrew\/core is shallow clone. To get complete history run: git -C \"$(brew --repo homebrew\/core)\" fetch --unshallow ``` What should I do to get pip3? My OS is macOS High Sierra, and I have Python 3.6.2 already installed. EDIT: I tried ``` python3 -m pip ``` and what was returned was this: ``` The directory '\/Users\/makotomiyazaki\/Library\/Caches\/pip\/http' or its parent directory is not owned by the current user and the cache has been disabled. Please check the permissions and owner of that directory. If executing pip with sudo, you may want sudo's -H flag.
The directory '\/Users\/makotomiyazaki\/Library\/Caches\/pip' or its parent directory is not owned by the current user and caching wheels has been disabled. check the permissions and owner of that directory. If executing pip with sudo, you may want sudo's -H flag. You must give at least one requirement to install (see \"pip help install\") ``` I also tried which pip3, but just I don't know if it worked... ``` MakotonoMacBook-ea:~ makotomiyazaki$ sudo which pip3 install --upgrade $TF_BINARY_URL \/usr\/bin\/install ```", "response":"You would need to install pip3. On Linux, run first sudo apt update. Then the command would be: sudo apt install python3-pip On Mac, using brew, first brew install python3 Then brew postinstall python3 Try calling pip3 -V to see if it worked.", "best_answers_score":0.7205, "library_name":"tensorflow", "question_url":"https:\/\/stackoverflow.com\/questions\/48014769\/pip3-command-not-found", "best_answers_votes":181, "question_length":1811, "response_length":242 }, { "question":"TensorFlow 2.1.0: has no attribute 'random_normal' I'm trying to get Uber's Ludwig to run. I get an error about there being no attribute 'random_normal'. I can reproduce the error in Python with these commands. 
``` >>> import tensorflow as tf >>> tf.reduce_sum(tf.random_normal([1000,1000])) Traceback (most recent call last): File \"\", line 1, in AttributeError: module 'tensorflow' has no attribute 'random_normal' >>> print(tf.__version__) 2.1.0 >>> print(sys.version) 3.7.5 (default, Oct 25 2019, 15:51:11) [GCC 7.3.0] ``` How can I get past this error?", "response":"It was moved to tf.random.normal (along with all the other tf.random_* functions).", "best_answers_score":0.7203, "library_name":"tensorflow", "question_url":"https:\/\/stackoverflow.com\/questions\/59953127\/tensorflow-2-1-0-has-no-attribute-random-normal", "best_answers_votes":48, "question_length":556, "response_length":81 }, { "question":"Tensorflow doesn't seem to see my gpu I've tried tensorflow on both cuda 7.5 and 8.0, w\/o cudnn (my GPU is old, cudnn doesn't support it). When I execute device_lib.list_local_devices(), there is no gpu in the output. Theano sees my gpu and works fine with it, and the examples in \/usr\/share\/cuda\/samples work fine as well. I installed tensorflow through pip install. Is my gpu too old for tf to support it? gtx 460", "response":"Note: if you use Windows, install TensorFlow version 2.10 at the latest; otherwise use Linux or WSL.
(TensorFlow versions after 2.10 do not support GPU on Windows.) Summary: check if tensorflow sees your GPU (optional) check if your video card can work with tensorflow (optional) find versions of CUDA Toolkit and cuDNN SDK, compatible with your tf version install CUDA Toolkit check active CUDA version and switch it (if necessary) install cuDNN SDK pip uninstall tensorflow; pip install tensorflow-gpu check if tensorflow sees your GPU * source - https:\/\/www.tensorflow.org\/install\/gpu Detailed instructions: check if tensorflow sees your GPU (optional) ``` from tensorflow.python.client import device_lib def get_available_devices(): local_device_protos = device_lib.list_local_devices() return [x.name for x in local_device_protos] print(get_available_devices()) # my output was => ['\/device:CPU:0'] # good output must be => ['\/device:CPU:0', '\/device:GPU:0'] ``` check if your card can work with tensorflow (optional) my PC: GeForce GTX 1060 notebook (driver version - 419.35), windows 10, jupyter notebook tensorflow needs Compute Capability 3.5 or higher. (https:\/\/www.tensorflow.org\/install\/gpu#hardware_requirements) https:\/\/developer.nvidia.com\/cuda-gpus select \"CUDA-Enabled GeForce Products\" result - \"GeForce GTX 1060 Compute Capability = 6.1\" my card can work with tf!
find the versions of CUDA Toolkit and cuDNN SDK that you need a) find your tf version ``` import sys print (sys.version) # 3.6.4 |Anaconda custom (64-bit)| (default, Jan 16 2018, 10:22:32) [MSC v.1900 64 bit (AMD64)] import tensorflow as tf print(tf.__version__) # my output was => 1.13.1 ``` b) find the right versions of CUDA Toolkit and cuDNN SDK for your tf version ``` https:\/\/www.tensorflow.org\/install\/source#linux * it is written for Linux, but worked in my case see that tensorflow_gpu-1.13.1 needs: CUDA Toolkit v10.0, cuDNN SDK v7.4 ``` install CUDA Toolkit a) install CUDA Toolkit 10.0 ``` https:\/\/developer.nvidia.com\/cuda-toolkit-archive select: CUDA Toolkit 10.0 and download base installer (2 GB) installation settings: select only CUDA (my installation path was: D:\\Programs\\x64\\Nvidia\\Cuda_v_10_0\\Development) ``` b) add environment variables: ``` system variables \/ path must have: D:\\Programs\\x64\\Nvidia\\Cuda_v_10_0\\Development\\bin D:\\Programs\\x64\\Nvidia\\Cuda_v_10_0\\Development\\libnvvp D:\\Programs\\x64\\Nvidia\\Cuda_v_10_0\\Development\\extras\\CUPTI\\libx64 D:\\Programs\\x64\\Nvidia\\Cuda_v_10_0\\Development\\include ``` check active CUDA version and switch it (if necessary) a) run in cmd: ``` nvcc --version This shows the currently active CUDA version in the system. Restart cmd after each variable change.
``` b) if you have multiple CUDA versions installed and want to switch to 11.5, do this: ``` - system variables \/ CUDA_PATH must have: C:\\Program Files\\NVIDIA GPU Computing Toolkit\\CUDA\\v11.5 - system variables \/ path must have: all lines with v11.5 at the top (use \"move up\" button) ``` install cuDNN SDK a) download cuDNN SDK v7.4 ``` https:\/\/developer.nvidia.com\/rdp\/cudnn-archive (needs registration, but it is simple) select \"Download cuDNN v7.4.2 (Dec 14, 2018), for CUDA 10.0\" ``` b) add path to 'bin' folder into \"environment variables \/ system variables \/ path\": ``` D:\\Programs\\x64\\Nvidia\\cudnn_for_cuda_10_0\\bin ``` pip uninstall tensorflow pip install tensorflow-gpu check if tensorflow sees your GPU ``` - restart your PC - print(get_available_devices()) - # now this code should return => ['\/device:CPU:0', '\/device:GPU:0'] ```", "best_answers_score":0.7182, "library_name":"tensorflow", "question_url":"https:\/\/stackoverflow.com\/questions\/41402409\/tensorflow-doesnt-seem-to-see-my-gpu", "best_answers_votes":38, "question_length":412, "response_length":3501 }, { "question":"Getting the current learning rate from a tf.train.AdamOptimizer I'd like to print out the learning rate for each training step of my nn. I know that Adam has an adaptive learning rate, but is there a way I can see this (for visualization in TensorBoard)?", "response":"All the optimizers have a private variable that holds the value of the learning rate. In adagrad and gradient descent it is called self._learning_rate. In adam it is self._lr. So you will just need to print sess.run(optimizer._lr) to get this value.
Sess.run is needed because they are tensors.", "best_answers_score":0.716, "library_name":"tensorflow", "question_url":"https:\/\/stackoverflow.com\/questions\/36990476\/getting-the-current-learning-rate-from-a-tf-train-adamoptimizer", "best_answers_votes":25, "question_length":250, "response_length":292 }, { "question":"Initializing tensorflow Variable with an array larger than 2GB I am trying to initialize a tensorflow Variable with pre-trained word2vec embeddings. I have the following code: ``` import tensorflow as tf from gensim import models model = models.Word2Vec.load_word2vec_format('GoogleNews-vectors-negative300.bin', binary=True) X = model.syn0 embeddings = tf.Variable(tf.random_uniform(X.shape, minval=-0.1, maxval=0.1), trainable=False) sess.run(tf.initialize_all_variables()) sess.run(embeddings.assign(X)) ``` And I am receiving the following error: ``` ValueError: Cannot create an Operation with a NodeDef larger than 2GB. ``` The array (X) I am trying to assign is of shape (3000000, 300) and its size is 3.6GB. I am getting the same error if I try tf.convert_to_tensor(X) as well. I know that it fails due to the fact that the array is larger than 2GB. However, I do not know how to assign an array larger than 2GB to a tensorflow Variable", "response":"It seems like the only option is to use a placeholder. The cleanest way I can find is to initialize to a placeholder directly: ```py X_init = tf.placeholder(tf.float32, shape=(3000000, 300)) X = tf.Variable(X_init) # The rest of the setup... sess.run(tf.initialize_all_variables(), feed_dict={X_init: model.syn0}) ```", "best_answers_score":0.7156, "library_name":"tensorflow", "question_url":"https:\/\/stackoverflow.com\/questions\/35394103\/initializing-tensorflow-variable-with-an-array-larger-than-2gb", "best_answers_votes":19, "question_length":944, "response_length":317 }, { "question":"How does Keras define \"accuracy\" and \"loss\"? I can't find how Keras defines \"accuracy\" and \"loss\". 
I know I can specify different metrics (e.g. mse, cross entropy) but Keras prints out a standard \"accuracy\". How is that defined? Likewise for loss; I know I can specify different types of regularization; are those in the loss? Ideally, I'd like to print out the equation used to define it; if not, I'll settle for an answer here.", "response":"Have a look at metrics.py, there you can find definitions of all available metrics including different types of accuracy. Accuracy is not printed unless you add it to the list of desired metrics when you compile your model. Regularizers are by definition added to the loss. For example, see the add_loss method of the Layer class. Update The type of accuracy is determined based on the objective function, see training.py. The default choice is categorical_accuracy. Other types like binary_accuracy and sparse_categorical_accuracy are selected when the objective function is either binary or sparse.", "best_answers_score":0.7149, "library_name":"tensorflow", "question_url":"https:\/\/stackoverflow.com\/questions\/41531695\/how-does-keras-define-accuracy-and-loss", "best_answers_votes":36, "question_length":429, "response_length":594 }, { "question":"How to graph tf.keras model in Tensorflow-2.0? I upgraded to Tensorflow 2.0 and there is no tf.summary.FileWriter(\"tf_graphs\", sess.graph). I was looking through some other StackOverflow questions on this and they said to use tf.compat.v1.summary etc. Surely there must be a way to graph and visualize a tf.keras model in Tensorflow version 2. What is it? I'm looking for a tensorboard output like the one below. Thank you!", "response":"You can visualize the graph of any tf.function decorated function, but first, you have to trace its execution. Visualizing the graph of a Keras model means to visualize its call method. By default, this method is not tf.function decorated and therefore you have to wrap the model call in a function correctly decorated and execute it.
```py import tensorflow as tf model = tf.keras.Sequential( [ tf.keras.layers.Flatten(input_shape=(28, 28)), tf.keras.layers.Dense(32, activation=\"relu\"), tf.keras.layers.Dropout(0.2), tf.keras.layers.Dense(10, activation=\"softmax\"), ] ) @tf.function def traceme(x): return model(x) logdir = \"log\" writer = tf.summary.create_file_writer(logdir) tf.summary.trace_on(graph=True, profiler=True) # Forward pass traceme(tf.zeros((1, 28, 28, 1))) with writer.as_default(): tf.summary.trace_export(name=\"model_trace\", step=0, profiler_outdir=logdir) ```", "best_answers_score":0.7148, "library_name":"tensorflow", "question_url":"https:\/\/stackoverflow.com\/questions\/56690089\/how-to-graph-tf-keras-model-in-tensorflow-2-0", "best_answers_votes":29, "question_length":423, "response_length":881 }, { "question":"Illegal instruction(core dumped) tensorflow I am importing tensorflow in my ubuntu python using following commands- ``` $ python3 Python 3.5.2 (default, Nov 23 2017, 16:37:01) [GCC 5.4.0 20160609] on linux Type \"help\", \"copyright\", \"credits\" or \"license\" for more information. >>> import tensorflow as tf Illegal instruction (core dumped) ``` And the program exits. Please specify the solution.", "response":"I had the same problem and had to downgrade tensorflow to 1.5.0: ``` pip uninstall tensorflow pip install tensorflow==1.5.0 ``` Edit: As @Tobsta points out in the comments, the other option is to compile the binaries from source. The precompiled binaries of versions >1.5 use AVX instructions that are not supported by older CPUs", "best_answers_score":0.7146, "library_name":"tensorflow", "question_url":"https:\/\/stackoverflow.com\/questions\/49092527\/illegal-instructioncore-dumped-tensorflow", "best_answers_votes":26, "question_length":394, "response_length":329 }, { "question":"In TensorFlow is there any way to just initialize uninitialised variables? 
The standard way of initializing variables in TensorFlow is ``` init = tf.initialize_all_variables() sess = tf.Session() sess.run(init) ``` After running some learning for a while I create a new set of variables, but once I initialize them it resets all my existing variables. At the moment my way around this is to save all the variables I need and then reapply them after the tf.initialize_all_variables call. This works but is a bit ugly and clunky. I cannot find anything like this in the docs... Does anyone know of any good way to just initialize the uninitialized variables?", "response":"UPDATE: TensorFlow 0.9 has a new method that \"fixes\" all this but only if you're using a VariableScope with reuse set to True. tf.report_uninitialized_variables which can be used in one line with sess.run( tf.initialize_variables( list( tf.get_variable(name) for name in sess.run( tf.report_uninitialized_variables( tf.all_variables( ) ) ) ) ) ) or more intelligently through the ability to specify the variables you expect to be initialized: ``` def guarantee_initialized_variables(session, list_of_variables = None): if list_of_variables is None: list_of_variables = tf.all_variables() uninitialized_variables = list(tf.get_variable(name) for name in session.run(tf.report_uninitialized_variables(list_of_variables))) session.run(tf.initialize_variables(uninitialized_variables)) return uninitialized_variables ``` This is still less ideal than actually knowing which variables are and are not initialized and taking care of that properly, but in the case of misdirection like the optim classes (see below) it may be hard to avoid. Also note, tf.initialize_variables cannot evaluate tf.report_uninitialized_variables, so both of them have to be run within the context of the session to work. There is an inelegant but concise way to do it. Before introducing your new variables run temp = set(tf.all_variables()) and afterwards run sess.run(tf.initialize_variables(set(tf.all_variables()) - temp)).
These together will only initialize any variables created after the temp value is assigned. I've been playing with transfer learning, so I wanted a quick way to do it too, but this is the best way I could find. Especially when using things like AdamOptimizer, which doesn't give you easy (or any, I'm not sure) access to the variables it uses. So the following actually shows up in my code. (I initialize the new layer's variables explicitly, and run it once to show the initial error before transfer learning. Just for a sanity check.) ``` temp = set(tf.all_variables()) train_step = tf.train.AdamOptimizer(1e-4).minimize(cross_entropy) # I honestly don't know how else to initialize ADAM in TensorFlow. sess.run(tf.initialize_variables(set(tf.all_variables()) - temp)) ``` And it solves all my problems. EDIT: @Lifu_Huang's answer states the proper way to fix my problem. Theoretically, you should use tf.train.Optimizer.get_slot_names and tf.train.Optimizer.get_slot: ``` optim = tf.train.AdadeltaOptimizer(1e-4) loss = cross_entropy(y,yhat) train_step = optim.minimize(loss) sess.run(tf.initialize_variables([optim.get_slot(loss, name) for name in optim.get_slot_names()])) ``` This however gives me AttributeError: 'NoneType' object has no attribute 'initializer'.
I'll make edits when I figure out what I did wrong, so you don't make my mistakes.", "best_answers_score":0.7143, "library_name":"tensorflow", "question_url":"https:\/\/stackoverflow.com\/questions\/35164529\/in-tensorflow-is-there-any-way-to-just-initialize-uninitialised-variables", "best_answers_votes":30, "question_length":653, "response_length":2750 }, { "question":"Warning: Please use alternatives such as official\/mnist\/dataset.py from tensorflow\/models I'm doing a simple tutorial using Tensorflow, I have just installed so it should be updated, first I load the mnist data using the following code: ``` import numpy as np import os from tensorflow.examples.tutorials.mnist import input_data os.environ['TF_CPP_MIN_LOG_LEVEL'] = '3' mnist = input_data.read_data_sets(\"MNIST_data\/\", one_hot=True) train_data = mnist.train.images # Returns np.array train_labels = np.asarray(mnist.train.labels, dtype=np.int32) eval_data = mnist.test.images # Returns np.array eval_labels = np.asarray(mnist.test.labels, dtype=np.int32) ``` But when I run it I get the following warning: ``` WARNING:tensorflow:From C:\\Users\\user\\PycharmProjects\\TensorFlowRNN\\venv\\lib\\site-packages\\tensorflow\\contrib\\learn\\python\\learn\\datasets\\base.py:198: retry (from tensorflow.contrib.learn.python.learn.datasets.base) is deprecated and will be removed in a future version. Instructions for updating: Use the retry module or similar alternatives. WARNING:tensorflow:From C:\/Users\/user\/PycharmProjects\/TensorFlowRNN\/sample.py:5: read_data_sets (from tensorflow.contrib.learn.python.learn.datasets.mnist) is deprecated and will be removed in a future version. Instructions for updating: Please use alternatives such as official\/mnist\/dataset.py from tensorflow\/models. 
WARNING:tensorflow:From C:\\Users\\user\\PycharmProjects\\TensorFlowRNN\\venv\\lib\\site-packages\\tensorflow\\contrib\\learn\\python\\learn\\datasets\\mnist.py:260: maybe_download (from tensorflow.contrib.learn.python.learn.datasets.base) is deprecated and will be removed in a future version. Instructions for updating: Please write your own downloading logic. WARNING:tensorflow:From C:\\Users\\user\\PycharmProjects\\TensorFlowRNN\\venv\\lib\\site-packages\\tensorflow\\contrib\\learn\\python\\learn\\datasets\\mnist.py:262: extract_images (from tensorflow.contrib.learn.python.learn.datasets.mnist) is deprecated and will be removed in a future version. Instructions for updating: Please use tf.data to implement this functionality. Extracting MNIST_data\/train-images-idx3-ubyte.gz WARNING:tensorflow:From C:\\Users\\user\\PycharmProjects\\TensorFlowRNN\\venv\\lib\\site-packages\\tensorflow\\contrib\\learn\\python\\learn\\datasets\\mnist.py:267: extract_labels (from tensorflow.contrib.learn.python.learn.datasets.mnist) is deprecated and will be removed in a future version. Instructions for updating: Please use tf.data to implement this functionality. Extracting MNIST_data\/train-labels-idx1-ubyte.gz WARNING:tensorflow:From C:\\Users\\user\\PycharmProjects\\TensorFlowRNN\\venv\\lib\\site-packages\\tensorflow\\contrib\\learn\\python\\learn\\datasets\\mnist.py:110: dense_to_one_hot (from tensorflow.contrib.learn.python.learn.datasets.mnist) is deprecated and will be removed in a future version. Instructions for updating: Please use tf.one_hot on tensors. Extracting MNIST_data\/t10k-images-idx3-ubyte.gz Extracting MNIST_data\/t10k-labels-idx1-ubyte.gz WARNING:tensorflow:From C:\\Users\\user\\PycharmProjects\\TensorFlowRNN\\venv\\lib\\site-packages\\tensorflow\\contrib\\learn\\python\\learn\\datasets\\mnist.py:290: DataSet.__init__ (from tensorflow.contrib.learn.python.learn.datasets.mnist) is deprecated and will be removed in a future version. 
Instructions for updating: Please use alternatives such as official\/mnist\/dataset.py from tensorflow\/models. ``` I have used the line os.environ['TF_CPP_MIN_LOG_LEVEL'] = '3' which should avoid getting warnings and tried other alternatives to obtain mnist, however always appear the same warnings, can someone help me figure out is this happening? PD: I am using Python 3.6 in windows 10, in case it helps.", "response":"tensorflow.examples.tutorials is now deprecated and it is recommended to use tensorflow.keras.datasets as follows: ``` import tensorflow as tf mnist = tf.keras.datasets.mnist (X_train, y_train), (X_test, y_test) = mnist.load_data() ``` https:\/\/www.tensorflow.org\/api_docs\/python\/tf\/keras\/datasets\/mnist\/load_data", "best_answers_score":0.7139, "library_name":"tensorflow", "question_url":"https:\/\/stackoverflow.com\/questions\/49901806\/warning-please-use-alternatives-such-as-official-mnist-dataset-py-from-tensorfl", "best_answers_votes":22, "question_length":3674, "response_length":312 }, { "question":"Simple way to visualize a TensorFlow graph in Jupyter? The official way to visualize a TensorFlow graph is with TensorBoard, but sometimes I just want a quick look at the graph when I'm working in Jupyter. 
Is there a quick solution, ideally based on TensorFlow tools, or standard SciPy packages (like matplotlib), but if necessary based on 3rd party libraries?", "response":"Here's a recipe I copied from one of Alex Mordvintsev deep dream notebook at some point ```python from IPython.display import clear_output, Image, display, HTML import numpy as np def strip_consts(graph_def, max_const_size=32): \"\"\"Strip large constant values from graph_def.\"\"\" strip_def = tf.GraphDef() for n0 in graph_def.node: n = strip_def.node.add() n.MergeFrom(n0) if n.op == 'Const': tensor = n.attr['value'].tensor size = len(tensor.tensor_content) if size > max_const_size: tensor.tensor_content = \"\"%size return strip_def def show_graph(graph_def, max_const_size=32): \"\"\"Visualize TensorFlow graph.\"\"\" if hasattr(graph_def, 'as_graph_def'): graph_def = graph_def.as_graph_def() strip_def = strip_consts(graph_def, max_const_size=max_const_size) code = \"\"\" function load() {{ document.getElementById(\"{id}\").pbtxt = {data}; }} \"\"\".format(data=repr(str(strip_def)), id='graph'+str(np.random.rand())) iframe = \"\"\" \"\"\".format(code.replace('\"', '"')) display(HTML(iframe)) ``` Then to visualize current graph ```python show_graph(tf.get_default_graph().as_graph_def()) ``` If your graph is saved as pbtxt, you could do ```python gdef = tf.GraphDef() from google.protobuf import text_format text_format.Merge(open(\"tf_persistent.pbtxt\").read(), gdef) show_graph(gdef) ``` You'll see something like this", "best_answers_score":0.7116, "library_name":"tensorflow", "question_url":"https:\/\/stackoverflow.com\/questions\/38189119\/simple-way-to-visualize-a-tensorflow-graph-in-jupyter", "best_answers_votes":98, "question_length":360, "response_length":1318 }, { "question":"How to stack multiple lstm in keras? I am using deep learning library keras and trying to stack multiple LSTM with no luck. 
Below is my code ``` model = Sequential() model.add(LSTM(100,input_shape =(time_steps,vector_size))) model.add(LSTM(100)) ``` The above code returns error in the third line Exception: Input 0 is incompatible with layer lstm_28: expected ndim=3, found ndim=2 The input X is a tensor of shape (100,250,50). I am running keras on tensorflow backend", "response":"You need to add return_sequences=True to the first layer so that its output tensor has ndim=3 (i.e. batch size, timesteps, hidden state). Please see the following example: ``` # expected input data shape: (batch_size, timesteps, data_dim) model = Sequential() model.add(LSTM(32, return_sequences=True, input_shape=(timesteps, data_dim))) # returns a sequence of vectors of dimension 32 model.add(LSTM(32, return_sequences=True)) # returns a sequence of vectors of dimension 32 model.add(LSTM(32)) # return a single vector of dimension 32 model.add(Dense(10, activation='softmax')) ``` From: https:\/\/keras.io\/getting-started\/sequential-model-guide\/ (search for \"stacked lstm\")", "best_answers_score":0.7113, "library_name":"tensorflow", "question_url":"https:\/\/stackoverflow.com\/questions\/40331510\/how-to-stack-multiple-lstm-in-keras", "best_answers_votes":164, "question_length":469, "response_length":675 }, { "question":"NotImplementedError: Cannot convert a symbolic Tensor (2nd_target:0) to a numpy array I try to pass 2 loss functions to a model as Keras allows that. loss: String (name of objective function) or objective function or Loss instance. See losses. If the model has multiple outputs, you can use a different loss on each output by passing a dictionary or a list of losses. The loss value that will be minimized by the model will then be the sum of all individual losses. The two loss functions: ``` def l_2nd(beta): def loss_2nd(y_true, y_pred): ... return K.mean(t) return loss_2nd ``` and ``` def l_1st(alpha): def loss_1st(y_true, y_pred): ... 
return alpha * 2 * tf.linalg.trace(tf.matmul(tf.matmul(Y, L, transpose_a=True), Y)) \/ batch_size return loss_1st ``` Then I build the model: ``` l2 = K.eval(l_2nd(self.beta)) l1 = K.eval(l_1st(self.alpha)) self.model.compile(opt, [l2, l1]) ``` When I train, it produces the error: ``` 1.15.0-rc3 WARNING:tensorflow:From \/usr\/local\/lib\/python3.6\/dist-packages\/tensorflow_core\/python\/ops\/resource_variable_ops.py:1630: calling BaseResourceVariable.__init__ (from tensorflow.python.ops.resource_variable_ops) with constraint is deprecated and will be removed in a future version. Instructions for updating: If using Keras pass *_constraint arguments to layers. --------------------------------------------------------------------------- NotImplementedError Traceback (most recent call last) in () 47 create_using=nx.DiGraph(), nodetype=None, data=[('weight', int)]) 48 ---> 49 model = SDNE(G, hidden_size=[256, 128],) 50 model.train(batch_size=100, epochs=40, verbose=2) 51 embeddings = model.get_embeddings() 10 frames in __init__(self, graph, hidden_size, alpha, beta, nu1, nu2) 72 self.A, self.L = self._create_A_L( 73 self.graph, self.node2idx) # Adj Matrix,L Matrix ---> 74 self.reset_model() 75 self.inputs = [self.A, self.L] 76 self._embeddings = {} in reset_model(self, opt) ---> 84 self.model.compile(opt, loss=[l2, l1]) 85 self.get_embeddings() 86 \/usr\/local\/lib\/python3.6\/dist-packages\/tensorflow_core\/python\/training\/tracking\/base.py in _method_wrapper(self, *args, **kwargs) 455 self._self_setattr_tracking = False # pylint: disable=protected-access 456 try: --> 457 result = method(self, *args, **kwargs) 458 finally: 459 self._self_setattr_tracking = previous_value # pylint: disable=protected-access NotImplementedError: Cannot convert a symbolic Tensor (2nd_target:0) to a numpy array. ``` Please help, thanks!", "response":"For me, the issue occurred when upgrading from numpy 1.19 to 1.20 and using ray's RLlib, which uses tensorflow 2.2 internally. 
Simply downgrading with ``` pip install numpy==1.19.5 ``` solved the problem; the error did not occur anymore. Update (comment by @codeananda): You can also update to a newer TensorFlow (2.6+) version now that resolves the problem (pip install -U tensorflow).", "best_answers_score":0.7105, "library_name":"tensorflow", "question_url":"https:\/\/stackoverflow.com\/questions\/58479556\/notimplementederror-cannot-convert-a-symbolic-tensor-2nd-target0-to-a-numpy", "best_answers_votes":189, "question_length":2469, "response_length":386 }, { "question":"Save Tensorflow graph for viewing in Tensorboard without summary operations I have a rather complicated Tensorflow graph that I'd like to visualize for optimization purposes. Is there a function that I can call that will simply save the graph for viewing in Tensorboard without needing to annotate variables? I Tried this: ``` merged = tf.merge_all_summaries() writer = tf.train.SummaryWriter(\"\/Users\/Name\/Desktop\/tf_logs\", session.graph_def) ``` But no output was produced. This is using the 0.6 wheel. This appears to be related: Graph visualisaton is not showing in tensorboard for seq2seq model", "response":"You can also dump the graph as a GraphDef protobuf and load that directly in TensorBoard. You can do this without starting a session or running the model. ``` ## ... create graph ... >>> graph_def = tf.get_default_graph().as_graph_def() >>> graphpb_txt = str(graph_def) >>> with open('graphpb.txt', 'w') as f: f.write(graphpb_txt) ``` This will output a file that looks something like this, depending on the specifics of your model. ``` node { name: \"W\" op: \"Const\" attr { key: \"dtype\" value { type: DT_FLOAT } } ... 
version 1 ``` In TensorBoard you can then use the \"Upload\" button to load it from disk.", "best_answers_score":0.71, "library_name":"tensorflow", "question_url":"https:\/\/stackoverflow.com\/questions\/34427209\/save-tensorflow-graph-for-viewing-in-tensorboard-without-summary-operations", "best_answers_votes":17, "question_length":598, "response_length":604 }, { "question":"Get the value of some weights in a model trained by TensorFlow I have trained a ConvNet model with TensorFlow, and I want to get a particular weight in layer. For example in torch7 I would simply access model.modules[2].weights. to get the weights of layer 2. How would I do the same thing in TensorFlow?", "response":"In TensorFlow, trained weights are represented by tf.Variable objects. If you created a tf.Variable\u2014e.g. called v\u2014yourself, you can get its value as a NumPy array by calling sess.run(v) (where sess is a tf.Session). If you do not currently have a pointer to the tf.Variable, you can get a list of the trainable variables in the current graph by calling tf.trainable_variables(). This function returns a list of all trainable tf.Variable objects in the current graph, and you can select the one that you want by matching the v.name property. For example: ``` # Desired variable is called \"tower_2\/filter:0\". var = [v for v in tf.trainable_variables() if v.name == \"tower_2\/filter:0\"][0] ```", "best_answers_score":0.7096, "library_name":"tensorflow", "question_url":"https:\/\/stackoverflow.com\/questions\/36193553\/get-the-value-of-some-weights-in-a-model-trained-by-tensorflow", "best_answers_votes":109, "question_length":304, "response_length":689 }, { "question":"What is the definition of a non-trainable parameter? What is the definition of non-trainable parameter in a model? For example, while you are building your own model, its value is 0 as a default, but when you want to use an inception model, it is becoming something else rather than 0. 
What would be the reason behind it?", "response":"In keras, non-trainable parameters (as shown in model.summary()) mean the number of weights that are not updated during training with backpropagation. There are mainly two types of non-trainable weights: The ones that you have chosen to keep constant when training. This means that keras won't update these weights during training at all. The ones that work like statistics in BatchNormalization layers. They're updated with mean and variance, but they're not \"trained with backpropagation\". Weights are the values inside the network that perform the operations and can be adjusted to result in what we want. The backpropagation algorithm changes the weights towards a lower error at the end. By default, all weights in a keras model are trainable. When you create layers, internally it creates its own weights and they're trainable. (The backpropagation algorithm will update these weights) When you make them untrainable, the algorithm will not update these weights anymore. This is useful, for instance, when you want a convolutional layer with a specific filter, like a Sobel filter. You don't want the training to change this operation, so these weights\/filters should be kept constant. There are a lot of other reasons why you might want to make weights untrainable. Changing parameters: For deciding whether weights are trainable or not, you take layers from the model and set trainable: ``` model.get_layer(layerName).trainable = False # or True ``` This must be done before compilation.", "best_answers_score":0.7085, "library_name":"tensorflow", "question_url":"https:\/\/stackoverflow.com\/questions\/47312219\/what-is-the-definition-of-a-non-trainable-parameter", "best_answers_votes":47, "question_length":321, "response_length":1508 }, { "question":"How training and test data is split? [closed]
I am currently training my data using a neural network and the fit function. ```py history = model.fit(X, encoded_Y, batch_size=50, nb_epoch=500, validation_split=0.2, verbose=1) ``` Now I have used validation_split as 20%. What I understood is that my training data will be 80% and testing data will be 20%. I am confused about how this data is dealt with on the back end. Is it that the top 80% of samples will be taken for training and the bottom 20% for testing, or are the samples picked randomly from in between? If I want to give separate training and testing data, how will I do that using model.fit()? Moreover, my second concern is how to check if the data is fitting well on the model? I can see from the results that training accuracy is around 90% while the validation accuracy is around 55%. Does this mean it is a case of over-fitting or under-fitting? My last question is: what does evaluate return? The documentation says it returns the loss, but I am already getting loss and accuracy during each epoch (as a return of fit() (in history)). What do the accuracy and score returned by evaluate show? If the accuracy returned by evaluate is 90%, can I say my data is fitting well, regardless of what the individual accuracy and loss was for each epoch?
Below is my Code: ```py import numpy import pandas import matplotlib.pyplot as plt from keras.models import Sequential from keras.layers import Dense, Dropout from keras.wrappers.scikit_learn import KerasClassifier from sklearn.model_selection import cross_val_score from sklearn.preprocessing import LabelEncoder from sklearn.model_selection import StratifiedKFold from sklearn.preprocessing import StandardScaler from sklearn.pipeline import Pipeline from keras.utils import np_utils from sklearn.model_selection import KFold from sklearn.metrics import confusion_matrix import itertools seed = 7 numpy.random.seed(seed) dataframe = pandas.read_csv(\"INPUTFILE.csv\", skiprows=range(0, 0)) dataset = dataframe.values X = dataset[:,0:50].astype(float) # number of cols-1 Y = dataset[:,50] encoder = LabelEncoder() encoder.fit(Y) encoded_Y = encoder.transform(Y) encoded_Y = np_utils.to_categorical(encoded_Y) print(\"encoded_Y=\", encoded_Y) # baseline model def create_baseline(): # create model model = Sequential() model.add(Dense(5, input_dim=5, kernel_initializer='normal', activation='relu')) model.add(Dense(5, kernel_initializer='normal', activation='relu')) #model.add(Dense(2, kernel_initializer='normal', activation='sigmoid')) model.add(Dense(2, kernel_initializer='normal', activation='softmax')) # Compile model model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy']) # for binayr classification #model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy']) # for multi class return model model=create_baseline(); history=model.fit(X, encoded_Y, batch_size=50, nb_epoch=500, validation_split = 0.2, verbose=1) print(history.history.keys()) # list all data in history print(history.history.keys()) # summarize history for accuracy plt.plot(history.history['acc']) plt.plot(history.history['val_acc']) plt.title('model accuracy') plt.ylabel('accuracy') plt.xlabel('epoch') plt.legend(['train', 'test'], loc='upper left') plt.show() # 
summarize history for loss plt.plot(history.history['loss']) plt.plot(history.history['val_loss']) plt.title('model loss') plt.ylabel('loss') plt.xlabel('epoch') plt.legend(['train', 'test'], loc='upper left') plt.show() pre_cls=model.predict_classes(X) cm1 = confusion_matrix(encoder.transform(Y),pre_cls) print('Confusion Matrix : \\n') print(cm1) score, acc = model.evaluate(X,encoded_Y) print('Test score:', score) print('Test accuracy:', acc) ```", "response":"The keras documentation says: \"The validation data is selected from the last samples in the x and y data provided, before shuffling.\" This means that the shuffle occurs after the split. There is also a boolean parameter called \"shuffle\", which is set to true by default, so if you don't want your data to be shuffled you can just set it to false. Getting good results on your training data and then bad or mediocre results on your evaluation data usually means that your model is overfitting. Overfitting is when your model learns a very specific scenario and can't achieve good results on new data. Evaluation means testing your model on new data that it has \"never seen before\"; usually you divide your data into training and test sets, but sometimes you might also want to create a third group of data, because if you keep adjusting your model to obtain better and better results on your test data, that is in some way cheating: you are telling your model what the evaluation data looks like, and this could also cause overfitting. Also, if you want to split your data without using keras, I recommend the sklearn train_test_split() function. 
it's easy to use and it looks like this: ``` X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.33) ```", "best_answers_score":0.7083, "library_name":"tensorflow", "question_url":"https:\/\/stackoverflow.com\/questions\/51006505\/how-training-and-test-data-is-split", "best_answers_votes":60, "question_length":4285, "response_length":1304 }, { "question":"What is the purpose of graph collections in TensorFlow? The API discusses Graph Collections which judging from the code are a general purpose key\/data storage. What is the purpose of those collections?", "response":"Remember that under the hood, Tensorflow is a system for specifying and then executing computational data flow graphs. The graph collections are used as part of keeping track of the constructed graphs and how they must be executed. For example, when you create certain kinds of ops, such as tf.train.batch_join, the code that adds the op will also add some queue runners to the QUEUE_RUNNERS graph collection. Later, when you call start_queue_runners(), by default, it will look at the QUEUE_RUNNERS collection to know which runners to start.", "best_answers_score":0.7077, "library_name":"tensorflow", "question_url":"https:\/\/stackoverflow.com\/questions\/34235557\/what-is-the-purpose-of-graph-collections-in-tensorflow", "best_answers_votes":16, "question_length":201, "response_length":542 }, { "question":"YOLO object detection: how does the algorithm predict bounding boxes larger than a grid cell? I am trying to better understand how the YOLO2 & 3 algorithms works. The algorithm processes a series of convolutions until it gets down to a 13x13 grid. Then it is able to classify objects within each grid cell as well as the bounding boxes for those objects. If you look at this picture, you see that the bounding box in red is larger than any individual grid cell. Also the bounding box is centered at the center of the object. 
My questions have to do with how the predicted bounding boxes can exceed the size of the grid cell, when the network activations are based upon the individual grid cell. I mean, everything outside of the grid cell should be unknown to the neurons predicting the bounding boxes for an object detected in that cell, right? More precisely, here are my questions: 1. How does the algorithm predict bounding boxes that are larger than the grid cell? 2. How does the algorithm know in which cell the center of the object is located?", "response":"everything outside of the grid cell should be unknown to the neurons predicting the bounding boxes for an object detected in that cell right. It's not quite right. The cells correspond to a partition of the image where the neurons have learned to respond if the center of an object is located within. However, the receptive field of those output neurons is much larger than the cell and actually covers the entire image. It is therefore able to recognize and draw a bounding box around an object much larger than its assigned \"center cell\". So a cell is centered on the center of the receptive field of the output neuron but is a much smaller part of it. It is also somewhat arbitrary, and one could imagine, for example, having overlapping cells -- in which case you would expect neighboring neurons to fire simultaneously when an object is centered in the overlapping zone of their cells.", "best_answers_score":0.707, "library_name":"tensorflow", "question_url":"https:\/\/stackoverflow.com\/questions\/50575301\/yolo-object-detection-how-does-the-algorithm-predict-bounding-boxes-larger-than", "best_answers_votes":11, "question_length":1045, "response_length":879 }, { "question":"How do I swap tensor's axes in TensorFlow? 
I have a tensor of shape (30, 116, 10), and I want to swap the first two dimensions, so that I have a tensor of shape (116, 30, 10). I saw that numpy has such a function implemented (np.swapaxes) and I searched for something similar in tensorflow but I found nothing. Do you have any idea?", "response":"tf.transpose provides the same functionality as np.swapaxes, although in a more generalized form. In your case, you can do tf.transpose(orig_tensor, [1, 0, 2]) which would be equivalent to np.swapaxes(orig_np_array, 0, 1).", "best_answers_score":0.7064, "library_name":"tensorflow", "question_url":"https:\/\/stackoverflow.com\/questions\/38212205\/how-do-i-swap-tensors-axes-in-tensorflow", "best_answers_votes":63, "question_length":330, "response_length":222 }, { "question":"How does TensorFlow SparseCategoricalCrossentropy work? I'm trying to understand this loss function in TensorFlow but I don't get it. It's SparseCategoricalCrossentropy. All other loss functions need outputs and labels of the same shape; this specific loss function doesn't. Source code: ``` import tensorflow as tf; scce = tf.keras.losses.SparseCategoricalCrossentropy(); Loss = scce( tf.constant([ 1, 1, 1, 2 ], tf.float32), tf.constant([[1,2],[3,4],[5,6],[7,8]], tf.float32) ); print(\"Loss:\", Loss.numpy()); ``` The error is: ``` InvalidArgumentError: Received a label value of 2 which is outside the valid range of [0, 2). Label values: 1 1 1 2 [Op:SparseSoftmaxCrossEntropyWithLogits] ``` How to provide proper params to the loss function SparseCategoricalCrossentropy?", "response":"SparseCategoricalCrossentropy and CategoricalCrossentropy both compute categorical cross-entropy. The only difference is in how the targets\/labels should be encoded. When using SparseCategoricalCrossentropy the targets are represented by the index of the category (starting from 0). Your outputs have shape 4x2, which means you have two categories. 
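To make that encoding concrete, here is a small pure-Python sketch (an editor's illustration added here, not part of the quoted answer or of the TensorFlow library) of what this loss computes for a single sample, given raw logits and an integer label:

```python
import math

# Sketch: sparse categorical cross-entropy for ONE sample.
# logits: raw (unnormalized) scores, one per category.
# label: integer index of the true category, starting at 0.
def sparse_ce(logits, label):
    m = max(logits)  # subtract the max for numerical stability
    exps = [math.exp(v - m) for v in logits]
    probs = [e / sum(exps) for e in exps]  # softmax
    return -math.log(probs[label])  # negative log-likelihood of true class

# With two logits per sample, only labels 0 and 1 are valid -- a label of 2
# is what triggers the "outside the valid range of [0, 2)" error above.
print(round(sparse_ce([1.0, 2.0], 1), 4))  # -> 0.3133
```

So each target must be a single class index within [0, number_of_categories).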
Therefore, the targets should be a vector of length 4 (one label per sample) with entries that are either 0 or 1. For example: ``` scce = tf.keras.losses.SparseCategoricalCrossentropy(); Loss = scce( tf.constant([ 0, 0, 0, 1 ], tf.float32), tf.constant([[1,2],[3,4],[5,6],[7,8]], tf.float32)) ``` This is in contrast to CategoricalCrossentropy, where the labels should be one-hot encoded: ``` cce = tf.keras.losses.CategoricalCrossentropy(); Loss = cce( tf.constant([ [1,0], [1,0], [1,0], [0,1] ], tf.float32), tf.constant([[1,2],[3,4],[5,6],[7,8]], tf.float32)) ``` SparseCategoricalCrossentropy is more efficient when you have a lot of categories.", "best_answers_score":0.7061, "library_name":"tensorflow", "question_url":"https:\/\/stackoverflow.com\/questions\/59787897\/how-does-tensorflow-sparsecategoricalcrossentropy-work", "best_answers_votes":34, "question_length":774, "response_length":974 }, { "question":"Tensorflow: When to use tf.expand_dims? Tensorflow tutorials include the use of tf.expand_dims to add a \"batch dimension\" to a tensor. I have read the docs for this function but it still is rather mysterious to me. Does anyone know exactly under what circumstances this must be used? My code is below. My intent is to calculate a loss based on the distance between the predicted and actual bins. (E.g. if predictedBin = 10 and truthBin = 7 then binDistanceLoss = 3). ```py batch_size = tf.size(truthValues_placeholder) labels = tf.expand_dims(truthValues_placeholder, 1) predictedBin = tf.argmax(logits) binDistanceLoss = tf.abs(tf.sub(labels, logits)) ``` In this case, do I need to apply tf.expand_dims to predictedBin and binDistanceLoss? Thanks in advance.", "response":"expand_dims will not add or reduce elements in a tensor, it just changes the shape by adding 1 to dimensions. For example, a vector with 10 elements could be treated as a 10x1 matrix. One situation where I needed expand_dims was when I tried to build a ConvNet to classify grayscale images. 
The grayscale images will be loaded as a matrix of size [320, 320]. However, tf.nn.conv2d requires the input to be [batch, in_height, in_width, in_channels], where the in_channels dimension is missing in my data and in this case should be 1. So I used expand_dims to add one more dimension. In your case, I do not think you need expand_dims.", "best_answers_score":0.7056, "library_name":"tensorflow", "question_url":"https:\/\/stackoverflow.com\/questions\/39008821\/tensorflow-when-to-use-tf-expand-dims", "best_answers_votes":69, "question_length":760, "response_length":629 }, { "question":"How to add regularizations in TensorFlow? I found that in many available neural network implementations using TensorFlow, regularization terms are often implemented by manually adding an additional term to the loss value. My questions are: Is there a more elegant or recommended way of regularization than doing it manually? I also find that get_variable has an argument regularizer. How should it be used? According to my observation, if we pass a regularizer to it (such as tf.contrib.layers.l2_regularizer), a tensor representing the regularized term will be computed and added to a graph collection named tf.GraphKeys.REGULARIZATION_LOSSES. Will that collection be automatically used by TensorFlow (e.g. used by optimizers when training)? Or is it expected that I should use that collection by myself?", "response":"As you say in the second point, using the regularizer argument is the recommended way. You can use it in get_variable, or set it once in your variable_scope and have all your variables regularized. The losses are collected in the graph, and you need to manually add them to your cost function like this. ``` reg_losses = tf.get_collection(tf.GraphKeys.REGULARIZATION_LOSSES) reg_constant = 0.01 # Choose an appropriate one. 
loss = my_normal_loss + reg_constant * sum(reg_losses) ```", "best_answers_score":0.7055, "library_name":"tensorflow", "question_url":"https:\/\/stackoverflow.com\/questions\/37107223\/how-to-add-regularizations-in-tensorflow", "best_answers_votes":71, "question_length":796, "response_length":482 }, { "question":"Error: Failed to load the native TensorFlow runtime Today I installed TensorFlow using: ``` C:\\>pip3 install --upgrade tensorflow Collecting tensorflow Using cached tensorflow-1.2.0-cp35-cp35m-win_amd64.whl Requirement already up-to-date: bleach==1.5.0 in c:\\python35\\lib\\site-packages ( from tensorflow) Requirement already up-to-date: werkzeug>=0.11.10 in c:\\python35\\lib\\site-packag es (from tensorflow) Requirement already up-to-date: html5lib==0.9999999 in c:\\python35\\lib\\site-pack ages (from tensorflow) Requirement already up-to-date: protobuf>=3.2.0 in c:\\python35\\lib\\site-packages (from tensorflow) Requirement already up-to-date: backports.weakref==1.0rc1 in c:\\python35\\lib\\sit e-packages (from tensorflow) Requirement already up-to-date: markdown==2.2.0 in c:\\python35\\lib\\site-packages (from tensorflow) Requirement already up-to-date: numpy>=1.11.0 in c:\\python35\\lib\\site-packages ( from tensorflow) Requirement already up-to-date: six>=1.10.0 in c:\\python35\\lib\\site-packages (fr om tensorflow) Requirement already up-to-date: wheel>=0.26 in c:\\python35\\lib\\site-packages (fr om tensorflow) Requirement already up-to-date: setuptools in c:\\python35\\lib\\site-packages (fro m protobuf>=3.2.0->tensorflow) Installing collected packages: tensorflow Successfully installed tensorflow-1.2.0 ``` When I tried to import TensorFlow, it throws: ``` C:\\>python Python 3.5.2 (v3.5.2:4def2a2901a5, Jun 25 2016, 22:18:55) [MSC v.1900 64 bit (AM D64)] on win32 Type \"help\", \"copyright\", \"credits\" or \"license\" for more information. 
>>> import tensorflow as tf Traceback (most recent call last): File \"C:\\Python35\\lib\\site-packages\\tensorflow\\python\\pywrap_tensorflow_intern al.py\", line 18, in swig_import_helper return importlib.import_module(mname) File \"C:\\Python35\\lib\\importlib\\__init__.py\", line 126, in import_module return _bootstrap._gcd_import(name[level:], package, level) File \"\", line 986, in _gcd_import File \"\", line 969, in _find_and_load File \"\", line 958, in _find_and_load_unlocked File \"\", line 666, in _load_unlocked File \"\", line 577, in module_from_spec File \"\", line 906, in create_module File \"\", line 222, in _call_with_frames_removed ImportError: DLL load failed: The specified module could not be found. During handling of the above exception, another exception occurred: Traceback (most recent call last): File \"C:\\Python35\\lib\\site-packages\\tensorflow\\python\\pywrap_tensorflow.py\", l ine 41, in from tensorflow.python.pywrap_tensorflow_internal import * File \"C:\\Python35\\lib\\site-packages\\tensorflow\\python\\pywrap_tensorflow_intern al.py\", line 21, in _pywrap_tensorflow_internal = swig_import_helper() File \"C:\\Python35\\lib\\site-packages\\tensorflow\\python\\pywrap_tensorflow_intern al.py\", line 20, in swig_import_helper return importlib.import_module('_pywrap_tensorflow_internal') File \"C:\\Python35\\lib\\importlib\\__init__.py\", line 126, in import_module return _bootstrap._gcd_import(name[level:], package, level) ImportError: No module named '_pywrap_tensorflow_internal' During handling of the above exception, another exception occurred: Traceback (most recent call last): File \"\", line 1, in File \"C:\\Python35\\lib\\site-packages\\tensorflow\\__init__.py\", line 24, in from tensorflow.python import * File \"C:\\Python35\\lib\\site-packages\\tensorflow\\python\\__init__.py\", line 49, i n from tensorflow.python import pywrap_tensorflow File \"C:\\Python35\\lib\\site-packages\\tensorflow\\python\\pywrap_tensorflow.py\", l ine 
52, in raise ImportError(msg) ImportError: Traceback (most recent call last): File \"C:\\Python35\\lib\\site-packages\\tensorflow\\python\\pywrap_tensorflow_intern al.py\", line 18, in swig_import_helper return importlib.import_module(mname) File \"C:\\Python35\\lib\\importlib\\__init__.py\", line 126, in import_module return _bootstrap._gcd_import(name[level:], package, level) File \"\", line 986, in _gcd_import File \"\", line 969, in _find_and_load File \"\", line 958, in _find_and_load_unlocked File \"\", line 666, in _load_unlocked File \"\", line 577, in module_from_spec File \"\", line 906, in create_module File \"\", line 222, in _call_with_frames_removed ImportError: DLL load failed: The specified module could not be found. During handling of the above exception, another exception occurred: Traceback (most recent call last): File \"C:\\Python35\\lib\\site-packages\\tensorflow\\python\\pywrap_tensorflow.py\", l ine 41, in from tensorflow.python.pywrap_tensorflow_internal import * File \"C:\\Python35\\lib\\site-packages\\tensorflow\\python\\pywrap_tensorflow_intern al.py\", line 21, in _pywrap_tensorflow_internal = swig_import_helper() File \"C:\\Python35\\lib\\site-packages\\tensorflow\\python\\pywrap_tensorflow_intern al.py\", line 20, in swig_import_helper return importlib.import_module('_pywrap_tensorflow_internal') File \"C:\\Python35\\lib\\importlib\\__init__.py\", line 126, in import_module return _bootstrap._gcd_import(name[level:], package, level) ImportError: No module named '_pywrap_tensorflow_internal' Failed to load the native TensorFlow runtime. See https:\/\/www.tensorflow.org\/install\/install_sources#common_installation_probl ems for some common reasons and solutions. Include the entire stack trace above this error message when asking for help. >>> ``` I'm using Python 3.5.2 64bit. 
I don't really know why the import process throws errors.", "response":"My code worked perfectly after executing this line: ``` pip install tensorflow --upgrade --force-reinstall ```", "best_answers_score":0.7036, "library_name":"tensorflow", "question_url":"https:\/\/stackoverflow.com\/questions\/44623184\/error-failed-to-load-the-native-tensorflow-runtime", "best_answers_votes":15, "question_length":5289, "response_length":110 }, { "question":"List of Differentiable Ops in Tensorflow Is there a master list of Tensorflow ops that are differentiable (i.e., will auto-differentiate)? Two other ways to phrase this: List of ops that do not have ops.NoGradient set. List of ops that will not trigger LookupError. For example, I'd assume that all the Control Flow ops are not differentiable (e.g., tf.where). How would I find this other than by manually running them all through tf.gradients to see if they throw the LookupError. \"Commonsense\" is not a valid answer. Thanks. EDIT: tf.where is differentiable so my intuitions are wrong. Perhaps the correct question here is which ops in Tensorflow are not differentiable. Thanks.", "response":"I have devised the entire list of Differentiable and Non-Differentiable Ops using python code. You will find the compact list here. Also the code which generated it. https:\/\/github.com\/Mainak431\/List-of-Differentiable--OPs-and-Non-differentiable-OPs--in-Tensorflow", "best_answers_score":0.7035, "library_name":"tensorflow", "question_url":"https:\/\/stackoverflow.com\/questions\/44572161\/list-of-differentiable-ops-in-tensorflow", "best_answers_votes":16, "question_length":680, "response_length":264 }, { "question":"How to install Keras with gpu support? 
I installed Tensorflow for GPU using: pip install tensorflow-gpu But when I tried the same for Keras with pip install keras-gpu, it threw an error: could not find the version that satisfies the requirements.", "response":"Adding to the answer below, which is the correct answer in terms of recommending to use the Anaconda package manager, but out of date in that there is now a keras-gpu package on Anaconda Cloud. So once you have Anaconda installed, you simply need to create a new environment where you want to install keras-gpu and execute the command: conda install -c anaconda keras-gpu This will install Keras along with both the tensorflow and tensorflow-gpu libraries as the backend. (There is also no need to install separately the CUDA runtime and cudnn libraries as they are also included in the package - tested on Windows 10 and working).", "best_answers_score":0.7027, "library_name":"tensorflow", "question_url":"https:\/\/stackoverflow.com\/questions\/54689096\/how-to-install-keras-with-gpu-support", "best_answers_votes":31, "question_length":245, "response_length":622 }, { "question":"Tensorflow equivalent to numpy.diff Is there a tensorflow equivalent to numpy.diff? Calculate the n-th discrete difference along the given axis. 
For my project I only need n=1", "response":"Try this: ``` def tf_diff_axis_0(a): return a[1:]-a[:-1] def tf_diff_axis_1(a): return a[:,1:]-a[:,:-1] ``` To check: ``` import numpy as np import tensorflow as tf x0=np.arange(5)+np.zeros((5,5)) sess = tf.Session() np.diff(x0, axis=0) == sess.run(tf_diff_axis_0(tf.constant(x0))) np.diff(x0, axis=1) == sess.run(tf_diff_axis_1(tf.constant(x0))) ```", "best_answers_score":0.7019, "library_name":"tensorflow", "question_url":"https:\/\/stackoverflow.com\/questions\/42609618\/tensorflow-equivalent-to-numpy-diff", "best_answers_votes":16, "question_length":171, "response_length":350 }, { "question":"keras - cannot import name Conv2D I recently got the deep learning docker from https:\/\/github.com\/floydhub\/dl-docker running and while trying out the tutorials, received an error when importing the keras layers module. ``` from __future__ import print_function import keras from keras.datasets import cifar10 from keras.preprocessing.image import ImageDataGenerator from keras.models import Sequential from keras.layers import Dense, Dropout, Activation, Flatten from keras.layers import Conv2D, MaxPooling2D --------------------------------------------------------------------------- ImportError Traceback (most recent call last) in () 5 from keras.models import Sequential 6 from keras.layers import Dense, Dropout, Activation, Flatten ----> 7 from keras.layers import Conv2D, MaxPooling2D ImportError: cannot import name Conv2D ``` I am running with ubuntu 14.04, python version 2.7.6 on the ipython notebook and the following versions of the deep learning libraries on docker. 
``` ARG THEANO_VERSION=rel-0.8.2 ARG TENSORFLOW_VERSION=0.12.1 ARG TENSORFLOW_ARCH=cpu ARG KERAS_VERSION=1.2.0 ARG LASAGNE_VERSION=v0.1 ARG TORCH_VERSION=latest ARG CAFFE_VERSION=master ``` I'm not sure if the problem lies with the version because it seems that there are no related issues on the github thread.", "response":"Try this: from keras.layers.convolutional import Conv2D Importing changed with the new keras. Are you sure you are not using keras >= 2? NOTE: With tensorflow 2.0 keras is included. You can now import the layer with: ``` from tensorflow.keras.layers import Conv2D ```", "best_answers_score":0.7011, "library_name":"tensorflow", "question_url":"https:\/\/stackoverflow.com\/questions\/44131295\/keras-cannot-import-name-conv2d", "best_answers_votes":28, "question_length":1288, "response_length":267 }, { "question":"Why is TensorFlow 2 much slower than TensorFlow 1? It's been cited by many users as the reason for switching to Pytorch, but I've yet to find a justification\/explanation for sacrificing the most important practical quality, speed, for eager execution. Below is code benchmarking performance, TF1 vs. TF2 - with TF1 running anywhere from 47% to 276% faster. My question is: what is it, at the graph or hardware level, that yields such a significant slowdown? Looking for a detailed answer - am already familiar with broad concepts. Relevant Git Specs: CUDA 10.0.130, cuDNN 7.4.2, Python 3.7.4, Windows 10, GTX 1070 Benchmark results: UPDATE: Disabling Eager Execution per below code does not help. The behavior, however, is inconsistent: sometimes running in graph mode helps considerably, other times it runs slower relative to Eager. Benchmark code: ```py # use tensorflow.keras... 
to benchmark tf.keras; used GPU for all above benchmarks from keras.layers import Input, Dense, LSTM, Bidirectional, Conv1D from keras.layers import Flatten, Dropout from keras.models import Model from keras.optimizers import Adam import keras.backend as K import numpy as np from time import time batch_shape = (32, 400, 16) X, y = make_data(batch_shape) model_small = make_small_model(batch_shape) model_small.train_on_batch(X, y) # skip first iteration which builds graph timeit(model_small.train_on_batch, 200, X, y) K.clear_session() # in my testing, kernel was restarted instead model_medium = make_medium_model(batch_shape) model_medium.train_on_batch(X, y) # skip first iteration which builds graph timeit(model_medium.train_on_batch, 10, X, y) ``` Functions used: ```py def timeit(func, iterations, *args): t0 = time() for _ in range(iterations): func(*args) print(\"Time\/iter: %.4f sec\" % ((time() - t0) \/ iterations)) def make_small_model(batch_shape): ipt = Input(batch_shape=batch_shape) x = Conv1D(128, 400, strides=4, padding='same')(ipt) x = Flatten()(x) x = Dropout(0.5)(x) x = Dense(64, activation='relu')(x) out = Dense(1, activation='sigmoid')(x) model = Model(ipt, out) model.compile(Adam(lr=1e-4), 'binary_crossentropy') return model def make_medium_model(batch_shape): ipt = Input(batch_shape=batch_shape) x = Bidirectional(LSTM(512, activation='relu', return_sequences=True))(ipt) x = LSTM(512, activation='relu', return_sequences=True)(x) x = Conv1D(128, 400, strides=4, padding='same')(x) x = Flatten()(x) x = Dense(256, activation='relu')(x) x = Dropout(0.5)(x) x = Dense(128, activation='relu')(x) x = Dense(64, activation='relu')(x) out = Dense(1, activation='sigmoid')(x) model = Model(ipt, out) model.compile(Adam(lr=1e-4), 'binary_crossentropy') return model def make_data(batch_shape): return np.random.randn(*batch_shape), np.random.randint(0, 2, (batch_shape[0], 1)) ```", "response":"UPDATE 8\/1730\/2020: TF 2.3 has finally done it: all cases run as fast, or 
notably faster, than any previous version. Further, my previous update was unfair to TF; my GPU was to blame, has been overheating lately. If you see a rising stem plot of iteration times, it's a reliable symptom. Lastly, see a dev's note on Eager vs Graph. This might be my last update on this answer. The true stats on your model's speed can only be found by you, on your device. UPDATE 5\/19\/2020: TF 2.2, using same tests: only a minor improvement in Eager speed. Plots for Large-Large Numpy train_on_batch case below, x-axis is successive fit iterations; my GPU isn't near its full capacity, so doubt it's throttling, but iterations do get slower over time. Per above, Graph and Eager are 1.56x and 1.97x slower than their TF1 counterparts, respectively. Unsure I'll debug this further, as I'm considering switching to Pytorch per TensorFlow's poor support for custom \/ low-level functionality. I did, however, open an Issue to get devs' feedback. UPDATE 2\/18\/2020: I've benched 2.1 and 2.1-nightly; the results are mixed. All but one configs (model & data size) are as fast as or much faster than the best of TF2 & TF1. The one that's slower, and slower dramatically, is Large-Large - esp. in Graph execution (1.6x to 2.5x slower). Furthermore, there are extreme reproducibility differences between Graph and Eager for a large model I tested - one not explainable via randomness\/compute-parallelism. I can't currently present reproducible code for these claims per time constraints, so instead I strongly recommend testing this for your own models. Haven't opened a Git issue on these yet, but I did comment on the original - no response yet. I'll update the answer(s) once progress is made. VERDICT: it isn't, IF you know what you're doing. But if you don't, it could cost you, lots - by a few GPU upgrades on average, and by multiple GPUs worst-case. 
THIS ANSWER: aims to provide a high-level description of the issue, as well as guidelines for how to decide on the training configuration specific to your needs. For a detailed, low-level description, which includes all benchmarking results + code used, see my other answer. I'll be updating my answer(s) w\/ more info if I learn any - can bookmark \/ \"star\" this question for reference. ISSUE SUMMARY: as confirmed by a TensorFlow developer, Q. Scott Zhu, TF2 focused development on Eager execution & tight integration w\/ Keras, which involved sweeping changes in TF source - including at graph-level. Benefits: greatly expanded processing, distribution, debug, and deployment capabilities. The cost of some of these, however, is speed. The matter, however, is fairly more complex. It isn't just TF1 vs. TF2 - factors yielding significant differences in train speed include: TF2 vs. TF1 Eager vs. Graph mode keras vs. tf.keras numpy vs. tf.data.Dataset vs. ... train_on_batch() vs. fit() GPU vs. CPU model(x) vs. model.predict(x) vs. ... Unfortunately, almost none of the above are independent of the other, and each can at least double execution time relative to another. Fortunately, you can determine what'll work best systematically, and with a few shortcuts - as I'll be showing. WHAT SHOULD I DO? Currently, the only way is - experiment for your specific model, data, and hardware. 
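Such experiments can be driven by a tiny, framework-agnostic timing helper. The sketch below is an editor's addition (the `timeit` helper in the question works equally well); the name `bench` and its return convention are assumptions, not part of the original answer. It reports best and median per-iteration times, which are more robust than the mean when throttling causes occasional spikes:

```python
from time import perf_counter
from statistics import median

# Time any callable (e.g. model.train_on_batch) over several iterations and
# return (best, median) per-iteration wall-clock seconds.
def bench(func, iterations, *args):
    times = []
    for _ in range(iterations):
        t0 = perf_counter()
        func(*args)
        times.append(perf_counter() - t0)
    return min(times), median(times)

# Usage sketch: bench(model.train_on_batch, 200, X, y) per configuration;
# here a stand-in workload keeps the example self-contained.
best, med = bench(sum, 100, range(1000))
print('best %.6fs, median %.6fs' % (best, med))
```

Run it once per configuration (TF1 vs. TF2, Eager vs. Graph, fit vs. train_on_batch) and compare the medians.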
No single configuration will always work best - but there are do's and don't's to simplify your search: >> DO: train_on_batch() + numpy + tf.keras + TF1 + Eager\/Graph train_on_batch() + numpy + tf.keras + TF2 + Graph fit() + numpy + tf.keras + TF1\/TF2 + Graph + large model & data >> DON'T: fit() + numpy + keras for small & medium models and data fit() + numpy + tf.keras + TF1\/TF2 + Eager train_on_batch() + numpy + keras + TF1 + Eager [Major] tf.python.keras; it can run 10-100x slower, and w\/ plenty of bugs; more info This includes layers, models, optimizers, & related \"out-of-box\" usage imports; ops, utils, & related 'private' imports are fine - but to be sure, check for alts, & whether they're used in tf.keras Refer to code at bottom of my other answer for an example benchmarking setup. The list above is based mainly on the \"BENCHMARKS\" tables in the other answer. LIMITATIONS of the above DO's & DON'T's: This question's titled \"Why is TF2 much slower than TF1?\", and while its body concerns training explicitly, the matter isn't limited to it; inference, too, is subject to major speed differences, even within the same TF version, import, data format, etc. - see this answer. RNNs are likely to notably change the data grid in the other answer, as they've been improved in TF2 Models primarily used Conv1D and Dense - no RNNs, sparse data\/targets, 4\/5D inputs, & other configs Input data limited to numpy and tf.data.Dataset, while many other formats exist; see other answer GPU was used; results will differ on a CPU. In fact, when I asked the question, my CUDA wasn't properly configured, and some of the results were CPU-based. Why did TF2 sacrifice the most practical quality, speed, for eager execution? It hasn't, clearly - graph is still available. 
But if the question is \"why eager at all\": Superior debugging: you've likely come across multitudes of questions asking \"how do I get intermediate layer outputs\" or \"how do I inspect weights\"; with eager, it's (almost) as simple as .__dict__. Graph, in contrast, requires familiarity with special backend functions - greatly complicating the entire process of debugging & introspection. Faster prototyping: per ideas similar to above; faster understanding = more time left for actual DL. HOW TO ENABLE\/DISABLE EAGER? ```py tf.enable_eager_execution() # TF1; must be done before any model\/tensor creation tf.compat.v1.disable_eager_execution() # TF2; above holds ``` Misleading in TF2; see here. ADDITIONAL INFO: Careful with _on_batch() methods in TF2; according to the TF dev, they still use a slower implementation, but not intentionally - i.e. it's to be fixed. See other answer for details. REQUESTS TO TENSORFLOW DEVS: Please fix train_on_batch(), and the performance aspect of calling fit() iteratively; custom train loops are important to many, especially to me. Add documentation \/ docstring mention of these performance differences for users' knowledge. Improve general execution speed to keep peeps from hopping to Pytorch. ACKNOWLEDGEMENTS: Thanks to Q. Scott Zhu, TensorFlow developer, for his detailed clarification on the matter. P. Andrey for sharing useful testing, and discussion. UPDATES: 11\/14\/19 - found a model (in my real application) that that runs slower on TF2 for all* configurations w\/ Numpy input data. Differences ranged 13-19%, averaging 17%. Differences between keras and tf.keras, however, were more dramatic: 18-40%, avg. 32% (both TF1 & 2). (* - except Eager, for which TF2 OOM'd) 11\/17\/19 - devs updated on_batch() methods in a recent commit, stating to have improved speed - to be released in TF 2.1, or available now as tf-nightly. As I'm unable to get latter running, will delay benching until 2.1. 
2\/20\/20 - prediction performance is also worth benchmarking; in TF2, for example, CPU prediction times can involve periodic spikes", "best_answers_score":0.7009, "library_name":"tensorflow", "question_url":"https:\/\/stackoverflow.com\/questions\/58441514\/why-is-tensorflow-2-much-slower-than-tensorflow-1", "best_answers_votes":130, "question_length":2787, "response_length":7154 }, { "question":"Tensorflow GradientTape \"Gradients does not exist for variables\" intermittently When training my network I am occasionally met with the warning: W0722 11:47:35.101842 140641577297728 optimizer_v2.py:928] Gradients does not exist for variables ['model\/conv1d_x\/Variable:0'] when minimizing the loss. This happens sporadically at infrequent intervals (maybe once in every 20 successful steps). My model basically has two paths which join together with concatenations at various positions in the network. To illustrate this, here is a simplified example of what I mean. ``` class myModel(tf.keras.Model): def __init__(self): super().__init__() self.conv1 = Conv2D(32) self.conv2 = Conv2D(32) self.conv3 = Conv2D(16) def call(self, inputs): net1 = self.conv1(inputs) net2 = self.conv2(inputs) net = tf.concat([net1, net2], axis=2) net = self.conv3(net) return tf.nn.softmax(net) model = myModel() with tf.GradientTape() as tape: prediction = model(image) loss = myloss(labels, prediction) gradients = tape.gradient(loss, model.trainable_variables) optimizer.apply_gradients(zip(gradients, model.trainable_variables)) ``` In reality my network is much larger, but the variables that generally don't have gradients tend to be the ones at the top of the network. Before each Conv2D layer I also have a custom gradient. Sometimes when the error appears I notice that the gradient function for that layer has not been called. My question is: how can the gradient tape sometimes take what appear to be different paths when propagating backwards through my network.
My secondary question: is this caused by having two separate routes through my network (i.e. conv1 AND conv2)? Is there a fundamental flaw in this network architecture? Ideally, could I tell the GradientTape() that it must find the gradients for each of the top layers?", "response":"The solution given by Nguy\u1ec5n and gkennos will suppress the error because it replaces all None gradients by zeros. However, it is a big issue if your gradient is null at any point in time. The problem described above is certainly caused by unconnected variables (by default, PyTorch will throw a runtime error). The most common case of unconnected layers can be exemplified as follows: ``` def some_func(x): x1 = x * w1 x2 = x1 + w2 # x2 is never used after this point, so it is unconnected x3 = x1 \/ w3 return x3 ``` Now observe that x2 is unconnected, so gradients will not be propagated through it. Carefully debug your code for unconnected variables.", "best_answers_score":0.7009, "library_name":"tensorflow", "question_url":"https:\/\/stackoverflow.com\/questions\/57144586\/tensorflow-gradienttape-gradients-does-not-exist-for-variables-intermittently", "best_answers_votes":30, "question_length":1823, "response_length":647 }, { "question":"How to change python version in Anaconda? I am trying to get into deep learning. I installed Anaconda to use jupyter and generally not to care about installing all of those packages like matplotlib etc. myself. But I cannot install tensorflow as it works only with Python 3.4, 3.5, or 3.6 but I have 3.7. After I read about it I installed python 3.6.8. I uninstalled Anaconda and installed it again, nothing changed. After that, I used this command conda install python=3.6.8 to presumably install python 3.6.8 for it (I found this solution somewhere on the web). The command worked but didn't change anything.
Please help", "response":"A better (recommended) alternative is to create a virtual environment of the desired Python version and then use that environment to run Tensorflow and other scripts. To do that, you can follow the instructions given here. BUT, if you don't want to create a separate environment, then conda install python= should do. OR (not recommended) you can download the \"latest\" Anaconda installer with your required Python version bundled. Source", "best_answers_score":0.7008, "library_name":"tensorflow", "question_url":"https:\/\/stackoverflow.com\/questions\/54559566\/how-to-change-python-version-in-anaconda", "best_answers_votes":37, "question_length":621, "response_length":437 }, { "question":"Difference between Variable and get_variable in TensorFlow As far as I know, Variable is the default operation for making a variable, and get_variable is mainly used for weight sharing. On the one hand, there are some people suggesting using get_variable instead of the primitive Variable operation whenever you need a variable. On the other hand, I merely see any use of get_variable in TensorFlow's official documents and demos. Thus I want to know some rules of thumb on how to correctly use these two mechanisms. Are there any \"standard\" principles?", "response":"I'd recommend to always use tf.get_variable(...) -- it will make it way easier to refactor your code if you need to share variables at any time, e.g. in a multi-gpu setting (see the multi-gpu CIFAR example). There is no downside to it. Pure tf.Variable is lower-level; at some point tf.get_variable() did not exist so some code still uses the low-level way.", "best_answers_score":0.7004, "library_name":"tensorflow", "question_url":"https:\/\/stackoverflow.com\/questions\/37098546\/difference-between-variable-and-get-variable-in-tensorflow", "best_answers_votes":92, "question_length":553, "response_length":357 }, { "question":"How do you make TensorFlow + Keras fast with a TFRecord dataset? 
What is an example of how to use a TensorFlow TFRecord with a Keras Model and tf.session.run() while keeping the dataset in tensors w\/ queue runners? Below is a snippet that works but it needs the following improvements: Use the Model API specify an Input() Load a dataset from a TFRecord Run through a dataset in parallel (such as with a queuerunner) Here is the snippet, there are several TODO lines indicating what is needed: ```python from keras.models import Model import tensorflow as tf from keras import backend as K from keras.layers import Dense, Input from keras.objectives import categorical_crossentropy from tensorflow.examples.tutorials.mnist import input_data sess = tf.Session() K.set_session(sess) # Can this be done more efficiently than placeholders w\/ TFRecords? img = tf.placeholder(tf.float32, shape=(None, 784)) labels = tf.placeholder(tf.float32, shape=(None, 10)) # TODO: Use Input() x = Dense(128, activation='relu')(img) x = Dense(128, activation='relu')(x) preds = Dense(10, activation='softmax')(x) # TODO: Construct model = Model(input=inputs, output=preds) loss = tf.reduce_mean(categorical_crossentropy(labels, preds)) # TODO: handle TFRecord data, is it the same? mnist_data = input_data.read_data_sets('MNIST_data', one_hot=True) train_step = tf.train.GradientDescentOptimizer(0.5).minimize(loss) sess.run(tf.global_variables_initializer()) # TODO remove default, add queuerunner with sess.as_default(): for i in range(1000): batch = mnist_data.train.next_batch(50) train_step.run(feed_dict={img: batch[0], labels: batch[1]}) print(loss.eval(feed_dict={img: mnist_data.test.images, labels: mnist_data.test.labels})) ``` Why is this question relevant? For high performance training without going back to python no TFRecord to numpy to tensor conversions Keras will soon be part of tensorflow Demonstrate how Keras Model() classes can accept tensors for input data correctly. 
Here is some starter information for a semantic segmentation problem example: example unet Keras model unet.py, which happens to be for semantic segmentation. Keras + Tensorflow Blog Post An attempt at running the unet model in a tf session with TFRecords and a Keras model (not working) Code to create the TFRecords: tf_records.py An attempt at running the unet model in a tf session with TFRecords and a Keras model is in densenet_fcn.py (not working)", "response":"I don't use the tfrecord dataset format so won't argue on the pros and cons, but I got interested in extending Keras to support the same. github.com\/indraforyou\/keras_tfrecord is the repository. I will briefly explain the main changes. Dataset creation and loading data_to_tfrecord and read_and_decode here take care of creating the tfrecord dataset and loading the same. Special care must be taken to implement read_and_decode, otherwise you will face cryptic errors during training. Initialization and Keras model Now both tf.train.shuffle_batch and the Keras Input layer return tensors. But the one returned by tf.train.shuffle_batch doesn't have the metadata needed by Keras internally. As it turns out, any tensor can be easily turned into a tensor with keras metadata by calling the Input layer with the tensor param. So this takes care of initialization: ``` x_train_, y_train_ = ktfr.read_and_decode('train.mnist.tfrecord', one_hot=True, n_class=nb_classes, is_train=True) x_train_batch, y_train_batch = K.tf.train.shuffle_batch([x_train_, y_train_], batch_size=batch_size, capacity=2000, min_after_dequeue=1000, num_threads=32) # set the number of threads here x_train_inp = Input(tensor=x_train_batch) ``` Now with x_train_inp any keras model can be developed. Training (simple) Let's say train_out is the output tensor of your keras model.
You can easily write a custom training loop on the lines of: ``` loss = tf.reduce_mean(categorical_crossentropy(y_train_batch, train_out)) train_op = tf.train.GradientDescentOptimizer(0.01).minimize(loss) # sess.run(tf.global_variables_initializer()) sess.run(tf.initialize_all_variables()) with sess.as_default(): coord = tf.train.Coordinator() threads = tf.train.start_queue_runners(sess=sess, coord=coord) try: step = 0 while not coord.should_stop(): start_time = time.time() _, loss_value = sess.run([train_op, loss], feed_dict={K.learning_phase(): 0}) duration = time.time() - start_time if step % 100 == 0: print('Step %d: loss = %.2f (%.3f sec)' % (step, loss_value, duration)) step += 1 except tf.errors.OutOfRangeError: print('Done training for %d epochs, %d steps.' % (FLAGS.num_epochs, step)) finally: coord.request_stop() coord.join(threads) sess.close() ``` Training (keras style) One of the features of keras that makes it so lucrative is its generalized training mechanism with the callback functions. But to support tfrecords-type training there are several changes that are needed in the fit function: running the queue threads, not feeding in batch data through feed_dict, and supporting validation - which becomes tricky, as the validation data will also be coming in through another tensor; a different model needs to be internally created with shared upper layers and the validation tensor fed in by another tfrecord reader. But all this can be easily supported by another flag parameter. What makes things messy are the keras features sample_weight and class_weight: they are used to weigh each sample and weigh each class. For this, in compile() keras creates placeholders (here) and placeholders are also implicitly created for the targets (here), which is not needed in our case - the labels are already fed in by tfrecord readers. These placeholders need to be fed in during the session run, which is unnecessary in our case.
So taking into account these changes, compile_tfrecord(here) and fit_tfrecord(here) are the extension of compile and fit and shares say 95% of the code. They can be used in the following way: ``` import keras_tfrecord as ktfr train_model = Model(input=x_train_inp, output=train_out) ktfr.compile_tfrecord(train_model, optimizer='rmsprop', loss='categorical_crossentropy', out_tensor_lst=[y_train_batch], metrics=['accuracy']) train_model.summary() ktfr.fit_tfrecord(train_model, X_train.shape[0], batch_size, nb_epoch=3) train_model.save_weights('saved_wt.h5') ``` You are welcome to improve on the code and pull requests.", "best_answers_score":0.6991, "library_name":"tensorflow", "question_url":"https:\/\/stackoverflow.com\/questions\/42184863\/how-do-you-make-tensorflow-keras-fast-with-a-tfrecord-dataset", "best_answers_votes":25, "question_length":2415, "response_length":3861 }, { "question":"How to do Xavier initialization on TensorFlow I'm porting my Caffe network over to TensorFlow but it doesn't seem to have xavier initialization. I'm using truncated_normal but this seems to be making it a lot harder to train.", "response":"Since version 0.8 there is a Xavier initializer, see here for the docs. You can use something like this: ``` W = tf.get_variable(\"W\", shape=[784, 256], initializer=tf.contrib.layers.xavier_initializer()) ```", "best_answers_score":0.6982, "library_name":"tensorflow", "question_url":"https:\/\/stackoverflow.com\/questions\/33640581\/how-to-do-xavier-initialization-on-tensorflow", "best_answers_votes":120, "question_length":225, "response_length":207 }, { "question":"Tensorflow Precision \/ Recall \/ F1 score and Confusion matrix I would like to know if there is a way to implement the different score function from the scikit learn package like this one : ``` from sklearn.metrics import confusion_matrix confusion_matrix(y_true, y_pred) ``` into a tensorflow model to get the different score. 
``` with tf.Session(config=tf.ConfigProto(log_device_placement=True)) as sess: init = tf.initialize_all_variables() sess.run(init) for epoch in xrange(1): avg_cost = 0. total_batch = len(train_arrays) \/ batch_size for batch in range(total_batch): train_step.run(feed_dict = {x: train_arrays, y: train_labels}) avg_cost += sess.run(cost, feed_dict={x: train_arrays, y: train_labels})\/total_batch if epoch % display_step == 0: print \"Epoch:\", '%04d' % (epoch+1), \"cost=\", \"{:.9f}\".format(avg_cost) print \"Optimization Finished!\" correct_prediction = tf.equal(tf.argmax(pred, 1), tf.argmax(y, 1)) # Calculate accuracy accuracy = tf.reduce_mean(tf.cast(correct_prediction, \"float\")) print \"Accuracy:\", batch, accuracy.eval({x: test_arrays, y: test_labels}) ``` Will I have to run the session again to get the prediction?", "response":"You do not really need sklearn to calculate precision\/recall\/f1 score. You can easily express them in a TF-ish way by looking at the formulas: Now if you have your actual and predicted values as vectors of 0\/1, you can calculate TP, TN, FP, FN using tf.count_nonzero: ``` TP = tf.count_nonzero(predicted * actual) TN = tf.count_nonzero((predicted - 1) * (actual - 1)) FP = tf.count_nonzero(predicted * (actual - 1)) FN = tf.count_nonzero((predicted - 1) * actual) ``` Now your metrics are easy to calculate: ``` precision = TP \/ (TP + FP) recall = TP \/ (TP + FN) f1 = 2 * precision * recall \/ (precision + recall) ```", "best_answers_score":0.6966, "library_name":"tensorflow", "question_url":"https:\/\/stackoverflow.com\/questions\/35365007\/tensorflow-precision-recall-f1-score-and-confusion-matrix", "best_answers_votes":61, "question_length":1144, "response_length":615 }, { "question":"What are the differences between tf.initialize_all_variables() and tf.global_variables_initializer() On Tensorflow official website, it gives explanations of the tf.initialize_all_variables() and tf.global_variables_initializer() functions as follows
tf.initialize_all_variables(): Returns an op that initializes all variables. tf.global_variables_initializer(): Adds an op to initialize all variables in the model. It seems like both can be used to initialize all variables in graphs. Can we use these two functions interchangeably? If not, what would be the differences?", "response":"Unfortunately, you forgot to read an important line in the documentation of tf.initialize_all_variables. THIS FUNCTION IS DEPRECATED. It will be removed after 2017-03-02. Instructions for updating: Use tf.global_variables_initializer instead.", "best_answers_score":0.6963, "library_name":"tensorflow", "question_url":"https:\/\/stackoverflow.com\/questions\/41439254\/what-are-the-differences-between-tf-initialize-all-variables-and-tf-global-var", "best_answers_votes":34, "question_length":564, "response_length":242 }, { "question":"What's the difference between Variable and ResourceVariable in Tensorflow In Tensorflow, Variable is a resource, inherited from ResourceBase and managed by ResourceMgr. But why is there another one named ResourceVariable? Both of them can be used for optimizers like gradient_descent (see this example). What's the difference? I know the former is well documented and most often used. What's the purpose of the latter?", "response":"ResourceVariable is the replacement for Variable, which aims to clean up some of the messier aspects of the semantics of Variable. ResourceVariable is the default in TF 2.0 and you very likely don't care about the differences between the two unless you are working on details deep inside the Tensorflow implementation. When eager execution is enabled, tf.Variable also creates resource variables.
So just use tf.Variable for now, it's almost certainly what you want; if you experience issues which look like race conditions or bugs from inconsistent values of variables in code you can try enabling resource variables (by either passing use_resource=True to your variable-creating code or calling tf.enable_resource_variables() in TF 1.x).", "best_answers_score":0.6946, "library_name":"tensorflow", "question_url":"https:\/\/stackoverflow.com\/questions\/40817665\/whats-the-difference-between-variable-and-resourcevariable-in-tensorflow", "best_answers_votes":27, "question_length":414, "response_length":737 }, { "question":"How to fix \"AttributeError: module 'tensorflow' has no attribute 'get_default_graph'\"? I am trying to run some code to create an LSTM model but i get an error: AttributeError: module 'tensorflow' has no attribute 'get_default_graph' My code is as follows: ``` from keras.models import Sequential model = Sequential() model.add(Dense(32, input_dim=784)) model.add(Activation('relu')) model.add(LSTM(17)) model.add(Dense(1, activation='sigmoid')) model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy']) ``` I have found someone else with a similar problem and they updated tensorflow and it works; but mine is up to date and still does not work. 
I am new to using keras and machine learning so I apologise if this is something silly!", "response":"Please try: from tensorflow.keras.models import Sequential instead of from keras.models import Sequential", "best_answers_score":0.6943, "library_name":"tensorflow", "question_url":"https:\/\/stackoverflow.com\/questions\/55496289\/how-to-fix-attributeerror-module-tensorflow-has-no-attribute-get-default-gr", "best_answers_votes":48, "question_length":757, "response_length":105 }, { "question":"Xcode version must be specified to use an Apple CROSSTOOL I try to build tensorflow-serving using bazel but I've encountered some errors during the building ``` ERROR:\/private\/var\/tmp\/_bazel_Kakadu\/3f0c35881c95d2c43f04614911c03a57\/external\/local_config_cc\/BUILD:49:5: in apple_cc_toolchain rule @local_config_cc\/\/:cc-compiler-darwin_x86_64: Xcode version must be specified to use an Apple CROSSTOOL. ERROR: Analysis of target '\/\/tensorflow_serving\/sources\/storage_path:file_system_storage_path_source_proto' failed; build aborted. ``` I've already tried to use bazel clean and bazel clean --expunge but it didn't help and still Bazel doesn't see my xcode (I suppose) but it's completely installed. I even reinstalled it to make sure that all works fine but the error didn't disappeared My Bazel version is ``` Build label: 0.5.2-homebrew Build target: bazel-out\/darwin_x86_64-opt\/bin\/src\/main\/java\/com\/google\/devtools\/build\/lib\/bazel\/BazelServer_deploy.jar Build time: Thu Jul 13 12:29:40 2017 (1499948980) Build timestamp: 1499948980 Build timestamp as int: 1499948980 KakaduDevs-Mac-mini:serving Kakadu$ ``` OS is MacOS Sierra version 10.12.5 What should I do to specify Xcode version in bazel to avoid this error? It seems that the error is common but I haven't found how I can make the bazel build. P.S I'm trying to install tensorflow-serving the way it's explained here. 
https:\/\/tensorflow.github.io\/serving\/setup", "response":"``` bazel clean --expunge sudo xcode-select -s \/Applications\/Xcode.app\/Contents\/Developer sudo xcodebuild -license bazel clean --expunge bazel build --config=opt \/\/tensorflow\/tools\/pip_package:build_pip_package ```", "best_answers_score":0.6942, "library_name":"tensorflow", "question_url":"https:\/\/stackoverflow.com\/questions\/45276830\/xcode-version-must-be-specified-to-use-an-apple-crosstool", "best_answers_votes":113, "question_length":1419, "response_length":214 }, { "question":"What is the proper way to weight decay for Adam Optimizer Since the Adam Optimizer keeps a pair of running averages like mean\/variance for the gradients, I wonder how it should properly handle weight decay. I have seen two ways of implementing it. Only update mean\/variance from the gradients based on the objective loss, and decay the weights explicitly at each mini-batch. (the following code is taken from https:\/\/github.com\/dmlc\/mxnet\/blob\/v0.7.0\/python\/mxnet\/optimizer.py) ``` weight[:] -= lr*mean\/(sqrt(variance) + self.epsilon) wd = self._get_wd(index) if wd > 0.: weight[:] -= (lr * wd) * weight ``` Update mean\/variance from the gradients based on the objective loss + regularization loss, and update weights like usual. (the following code is taken from https:\/\/github.com\/dmlc\/mxnet\/blob\/master\/src\/operator\/optimizer_op-inl.h#L210) ``` grad = scalar(param.rescale_grad) * grad + scalar(param.wd) * weight; \/\/ stuff Assign(out, req[0], weight - scalar(param.lr) * mean \/ (F(var) + scalar(param.epsilon))); ``` These two approaches sometimes show significant differences in training results. And I actually think the first one makes more sense (and find it gives better results from time to time). Caffe and older versions of mxnet follow the first approach, while torch, tensorflow and newer versions of mxnet follow the second one.
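To make the comparison concrete, here is a scalar sketch in plain Python (all numbers are made up) of why the two approaches agree for plain SGD but diverge once the gradient is adaptively rescaled, as in Adam:

```python
# Made-up scalar numbers; lr = learning rate, wd = weight decay coefficient
lr, wd = 0.1, 0.01
w, grad = 2.0, 0.5

# Approach 1: plain SGD step, then explicit weight decay on the weight
w1 = w - lr * grad - lr * wd * w

# Approach 2: fold the decay term into the gradient (L2 regularization)
w2 = w - lr * (grad + wd * w)
print(w1, w2)  # equal up to floating-point rounding

# With an adaptive method the gradient term is rescaled (here by variance ** -0.5),
# so folding the decay into the gradient rescales the decay too - the updates differ:
v = 0.04  # made-up running variance of the gradient
w1a = w - lr * grad * v ** -0.5 - lr * wd * w   # decoupled weight decay
w2a = w - lr * (grad + wd * w) * v ** -0.5      # L2 folded into the gradient
print(w1a, w2a)
```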
Really appreciate your help\uff01", "response":"Edit: see also this PR which just got merged into TF. When using pure SGD (without momentum) as an optimizer, weight decay is the same thing as adding a L2-regularization term to the loss. When using any other optimizer, this is not true. Weight decay (don't know how to TeX here, so excuse my pseudo-notation): ``` w[t+1] = w[t] - learning_rate * dw - weight_decay * w ``` L2-regularization: ``` loss = actual_loss + lambda * 1\/2 sum(||w||_2 for w in network_params) ``` Computing the gradient of the extra term in L2-regularization gives lambda * w and thus inserting it into the SGD update equation ``` dloss_dw = dactual_loss_dw + lambda * w w[t+1] = w[t] - learning_rate * dw ``` gives the same as weight decay, but mixes lambda with the learning_rate. Any other optimizer, even SGD with momentum, gives a different update rule for weight decay as for L2-regularization! See the paper Fixing weight decay in Adam for more details. (Edit: AFAIK, this 1987 Hinton paper introduced \"weight decay\", literally as \"each time the weights are updated, their magnitude is also decremented by 0.4%\" at page 10) That being said, there doesn't seem to be support for \"proper\" weight decay in TensorFlow yet. There are a few issues discussing it, specifically because of above paper. One possible way to implement it is by writing an op that does the decay step manually after every optimizer step. A different way, which is what I'm currently doing, is using an additional SGD optimizer just for the weight decay, and \"attaching\" it to your train_op. Both of these are just crude work-arounds, though. My current code: ``` # In the network definition: with arg_scope([layers.conv2d, layers.dense], weights_regularizer=layers.l2_regularizer(weight_decay)): # define the network. loss = # compute the actual loss of your problem. 
train_op = optimizer.minimize(loss, global_step=global_step) if args.weight_decay not in (None, 0): with tf.control_dependencies([train_op]): sgd = tf.train.GradientDescentOptimizer(learning_rate=1.0) train_op = sgd.minimize(tf.add_n(tf.get_collection(tf.GraphKeys.REGULARIZATION_LOSSES))) ``` This somewhat makes use of TensorFlow's provided bookkeeping. Note that the arg_scope takes care of appending an L2-regularization term for every layer to the REGULARIZATION_LOSSES graph-key, which I then all sum up and optimize using SGD which, as shown above, corresponds to actual weight-decay. Hope that helps, and if anyone gets a nicer code snippet for this, or TensorFlow implements it better (i.e. in the optimizers), please share.", "best_answers_score":0.693, "library_name":"tensorflow", "question_url":"https:\/\/stackoverflow.com\/questions\/44452571\/what-is-the-proper-way-to-weight-decay-for-adam-optimizer", "best_answers_votes":29, "question_length":1348, "response_length":2554 }, { "question":"Keras - Difference between categorical_accuracy and sparse_categorical_accuracy What is the difference between categorical_accuracy and sparse_categorical_accuracy in Keras? There is no hint in the documentation for these metrics, and by asking Dr. Google, I did not find answers for that either. The source code can be found here: ``` def categorical_accuracy(y_true, y_pred): return K.cast(K.equal(K.argmax(y_true, axis=-1), K.argmax(y_pred, axis=-1)), K.floatx()) def sparse_categorical_accuracy(y_true, y_pred): return K.cast(K.equal(K.max(y_true, axis=-1), K.cast(K.argmax(y_pred, axis=-1), K.floatx())), K.floatx()) ```", "response":"So in categorical_accuracy you need to specify your target (y) as one-hot encoded vector (e.g. in case of 3 classes, when a true class is second class, y should be (0, 1, 0). 
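To make the one-hot encoding and the argmax comparison concrete, here is a plain-Python sketch (the helper names and the example prediction are made up, not Keras API):

```python
def one_hot(index, num_classes):
    # integer class index -> one-hot vector, e.g. 1 -> [0, 1, 0]
    return [1 if i == index else 0 for i in range(num_classes)]

def argmax(vec):
    # index of the largest entry, mirroring K.argmax
    return max(range(len(vec)), key=vec.__getitem__)

y_true = one_hot(1, 3)      # true class is the second class
y_pred = [0.2, 0.7, 0.1]    # made-up softmax output
correct = argmax(y_true) == argmax(y_pred)
print(y_true, correct)
```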
In sparse_categorical_accuracy you should only provide an integer of the true class (in the case from the previous example it would be 1, as class indexing is 0-based).", "best_answers_score":0.6926, "library_name":"tensorflow", "question_url":"https:\/\/stackoverflow.com\/questions\/44477489\/keras-difference-between-categorical-accuracy-and-sparse-categorical-accuracy", "best_answers_votes":90, "question_length":625, "response_length":347 }, { "question":"Keras replacing input layer The code that I have (that I can't change) uses the Resnet with my_input_tensor as the input_tensor. ``` model1 = keras.applications.resnet50.ResNet50(input_tensor=my_input_tensor, weights='imagenet') ``` Investigating the source code, the ResNet50 function creates a new keras Input Layer with my_input_tensor and then creates the rest of the model. This is the behavior that I want to copy with my own model. I load my model from an h5 file. ``` model2 = keras.models.load_model('my_model.h5') ``` Since this model already has an Input Layer, I want to replace it with a new Input Layer defined with my_input_tensor. How can I replace an input layer?", "response":"When you saved your model using: ``` old_model.save('my_model.h5') ``` it will save the following: The architecture of the model, allowing the model to be re-created. The weights of the model. The training configuration of the model (loss, optimizer). The state of the optimizer, allowing training to resume from where you left off.
So then, when you load the model: ``` res50_model = load_model('my_model.h5') ``` you should get the same model back, you can verify the same using: ``` res50_model.summary() res50_model.get_weights() ``` Now you can, pop the input layer and add your own using: ``` res50_model.layers.pop(0) res50_model.summary() ``` add new input layer: ``` newInput = Input(batch_shape=(0,299,299,3)) # let us say this new InputLayer newOutputs = res50_model(newInput) newModel = Model(newInput, newOutputs) newModel.summary() res50_model.summary() ```", "best_answers_score":0.6925, "library_name":"tensorflow", "question_url":"https:\/\/stackoverflow.com\/questions\/49546922\/keras-replacing-input-layer", "best_answers_votes":27, "question_length":672, "response_length":863 }, { "question":"Best way to flatten a 2D tensor containing a vector in TensorFlow? What is the most efficient way to flatten a 2D tensor which is actually a horizontal or vertical vector into a 1D tensor? Is there a difference in terms of performance between: ``` tf.reshape(w, [-1]) ``` and ``` tf.squeeze(w) ``` ?", "response":"Both tf.reshape(w, [-1]) and tf.squeeze(w) are \"cheap\" in that they operate only on the metadata (i.e. the shape) of the given tensor, and don't modify the data itself. Of the two tf.reshape() has slightly simpler logic internally, but the performance of the two should be indistinguishable.", "best_answers_score":0.6911, "library_name":"tensorflow", "question_url":"https:\/\/stackoverflow.com\/questions\/34194151\/best-way-to-flatten-a-2d-tensor-containing-a-vector-in-tensorflow", "best_answers_votes":80, "question_length":299, "response_length":291 }, { "question":"Tensorflow serving No versions of servable found under base path I was following this tutorial to use tensorflow serving using my object detection model. I am using tensorflow object detection for generating the model. I have created a frozen model using this exporter (the generated frozen model works using python script). 
The frozen graph directory has the following contents (nothing in the variables directory): variables\/ saved_model.pb Now when I try to serve the model using the following command, ``` tensorflow_model_server --port=9000 --model_name=ssd --model_base_path=\/serving\/ssd_frozen\/ ``` It always shows me ... tensorflow_serving\/model_servers\/server_core.cc:421] (Re-)adding model: ssd 2017-08-07 10:22:43.892834: W tensorflow_serving\/sources\/storage_path\/file_system_storage_path_source.cc:262] No versions of servable ssd found under base path \/serving\/ssd_frozen\/ 2017-08-07 10:22:44.892901: W tensorflow_serving\/sources\/storage_path\/file_system_storage_path_source.cc:262] No versions of servable ssd found under base path \/serving\/ssd_frozen\/ ...", "response":"I had the same problem; the reason is that the object detection API does not assign a version to your model when exporting your detection model. However, tensorflow serving requires you to assign a version number to your detection model, so that you can choose different versions of your models to serve. In your case, you should put your detection model (.pb file and variables folder) under the folder \/serving\/ssd_frozen\/1\/. In this way, you will assign your model to version 1, and tensorflow serving will automatically load this version since you only have one version. By default tensorflow serving will automatically serve the latest version (i.e., the largest version number). Note: after you create the 1\/ folder, the model_base_path still needs to be set to --model_base_path=\/serving\/ssd_frozen\/.", "best_answers_score":0.6911, "library_name":"tensorflow", "question_url":"https:\/\/stackoverflow.com\/questions\/45544928\/tensorflow-serving-no-versions-of-servable-model-found-under-base-path", "best_answers_votes":94, "question_length":1063, "response_length":798 }, { "question":"what does class_mode parameter in Keras image_gen.flow_from_directory() signify?
``` train_image_gen = image_gen.flow_from_directory('\/Users\/harshpanwar\/Desktop\/Folder\/train', target_size=image_shape[:2], batch_size=batch_size, class_mode='binary') ``` In the above code snippet what does class_mode='binary' signify. I think it is for the number of categories of images. I am using this code for training a image recognition classifier in Keras to classify between 2 different categories like dog and cat. So if class_mode='binary' is for signifying two categories how do we make it for three or more?", "response":"Say you have N classes in your dataset. If you have 4 labels, dog (index 0), cat (1), donkey (2) and human (3), N would be 4. Class modes: \"categorical\": 2D output (aka. list of numbers of length N), [0, 0, 1, 0], which is a one-hot encoding (only one number is 1\/ \"hot\") representing the donkey. This is for mutually exclusive labels. A dog cannot be a cat, a human is not a dog. \"binary\": 1D output (aka. 1 number), which is either 0, 1, 2, 3 ... N. It is called this because it is binary if there are only two classes (IMHO this is a bad reason), source. I suggest using \"binary\" just for single label classification, because it documents-in-code, your intention. \"sparse\": After digging in the code, this is the same as \"binary\". The logic is done with elif self.class_mode in {'binary', 'sparse'}:, and the class_mode is not used after that. I suggest using \"sparse\" for multilabel classification though, again because it documents-in-code, your intention. \"input\": The label is literally the image again. So the label for an image of the dog, is the same dog picture array. If I knew more about autoencoders I might have been able to explain further. None: No labels, therefore not useful for training, but for inference\/ prediction. The TensorFlow documentation is here but I think it should go into more depth for class_mode: One of \"categorical\", \"binary\", \"sparse\", \"input\", or None. Default: \"categorical\". 
Determines the type of label arrays that are returned: - \"categorical\" will be 2D one-hot encoded labels, - \"binary\" will be 1D binary labels, - \"sparse\" will be 1D integer labels, - \"input\" will be images identical to input images (mainly used to work with autoencoders). - If None, no labels are returned (the generator will only yield batches of image data, which is useful to use with model.predict()). Please note that in case of class_mode None, the data still needs to reside in a subdirectory of directory for it to work correctly. Sparse is the same as binary?: As you can see in my search results, sparse is only checked twice (lines 2 and 4 in the search results). I believe the intention of \"sparse\" is for multi-label classification, and \"binary\" is designed for single-label classification (Hot-dog vs. No hotdog), but currently there is no difference, since the behaviour is the same:", "best_answers_score":0.6909, "library_name":"tensorflow", "question_url":"https:\/\/stackoverflow.com\/questions\/59439128\/what-does-class-mode-parameter-in-keras-image-gen-flow-from-directory-signify", "best_answers_votes":18, "question_length":602, "response_length":2310 }, { "question":"AttributeError: 'google.protobuf.pyext._message.RepeatedCompositeCo' object has no attribute 'append' I am building a transfer learning model on the MobileNetV2 pretrained model on Google Colab. Until yesterday, everything was fine. 
But, today, on executing ``` #Create the base model(feature_extractor) from the pre-trained model MobileNet V2 _URL = \"https:\/\/tfhub.dev\/google\/tf2-preview\/mobilenet_v2\/feature_vector\/2\" feature_extractor = hub.KerasLayer(_URL, input_shape=(_TARGET_SIZE, _TARGET_SIZE,3)) ``` I get the error: ``` --------------------------------------------------------------------------- AttributeError Traceback (most recent call last) in () 2 _TARGET_SIZE = 224 3 _URL = \"https:\/\/tfhub.dev\/google\/tf2-preview\/mobilenet_v2\/feature_vector\/2\" ----> 4 feature_extractor = hub.KerasLayer(_URL, input_shape=(_TARGET_SIZE, _TARGET_SIZE,3)) 5 #print(feature_extractor._layers) \/usr\/local\/lib\/python3.6\/dist-packages\/tensorflow_core\/python\/ops\/resource_variable_ops.py in _variable_handle_from_shape_and_dtype(shape, dtype, shared_name, name, graph_mode, initial_value) 165 handle_data = cpp_shape_inference_pb2.CppShapeInferenceResult.HandleData() 166 handle_data.is_set = True --> 167 handle_data.shape_and_type.append( 168 cpp_shape_inference_pb2.CppShapeInferenceResult.HandleShapeAndType( 169 shape=shape.as_proto(), dtype=dtype.as_datatype_enum)) AttributeError: 'google.protobuf.pyext._message.RepeatedCompositeCo' object has no attribute 'append' ``` Any idea why this happens and do I need to get into the \/usr\/local\/lib\/python3.6\/dist-packages\/tensorflow_core\/python\/ops\/resource_variable_ops.py file and make some changes ? I think its related to some update issue. 
Any help on how to tackle it?", "response":"I had the same error with tensorflow (version 2.2.0-dev20200128) and fixed it by upgrading protobuf (as explained in this issue): ``` pip install -U protobuf==3.8.0 ``` Or if you're using a notebook (like a Google Colab notebook), try this: ``` !pip install -U protobuf==3.8.0 ```", "best_answers_score":0.69, "library_name":"tensorflow", "question_url":"https:\/\/stackoverflow.com\/questions\/58343293\/attributeerror-google-protobuf-pyext-message-repeatedcompositeco-object-has", "best_answers_votes":12, "question_length":1718, "response_length":269 }, { "question":"TensorFlow: How to handle void labeled data in image segmentation? I was wondering how to handle unlabeled parts of an image in image segmentation using TensorFlow. For example, my input is an image of height * width * channels. The labels, too, are of size height * width, with one label for every pixel. Some parts of the image are annotated, other parts are not. I would like those parts to have no influence on the gradient computation whatsoever. Furthermore, I am not interested in the network predicting this \u201cvoid\u201d label. Is there a label or a function for this? At the moment I am using tf.nn.sparse_softmax_cross_entropy_with_logits.", "response":"I'm not 100% familiar with TF. However, have you considered using the weights parameter of the loss? Looking at tf.losses.sparse_softmax_cross_entropy, it has a parameter weights: Coefficients for the loss. This must be scalar or of same rank as labels. You can set the weight of \"void\" pixels to zero, thus making the loss ignore them. 
You can also remove the reduction from tf.nn.sparse_softmax_cross_entropy_with_logits and use tf.losses.compute_weighted_loss to perform the weighting.", "best_answers_score":0.6897, "library_name":"tensorflow", "question_url":"https:\/\/stackoverflow.com\/questions\/46097968\/tensorflow-how-to-handle-void-labeled-data-in-image-segmentation", "best_answers_votes":13, "question_length":649, "response_length":488 }, { "question":"How do I install TensorFlow's tensorboard? How do I install TensorFlow's tensorboard?", "response":"The steps to install TensorFlow are here: https:\/\/www.tensorflow.org\/install\/ For example, on Linux for CPU-only (no GPU), you would type this command: ``` pip install -U pip pip install tensorflow ``` Since TensorFlow depends on TensorBoard, running the following command should not be necessary: ``` pip install tensorboard ```", "best_answers_score":0.6862, "library_name":"tensorflow", "question_url":"https:\/\/stackoverflow.com\/questions\/33634008\/how-do-i-install-tensorflows-tensorboard", "best_answers_votes":64, "question_length":85, "response_length":329 }, { "question":"Keras Sequential model with multiple inputs I am making an MLP model which takes two inputs and produces a single output. I have two input arrays (one for each input) and 1 output array. The neural network has 1 hidden layer with 2 neurons. Each array has 336 elements. ``` model0 = keras.Sequential([ keras.layers.Dense(2, input_dim=2, activation=keras.activations.sigmoid, use_bias=True), keras.layers.Dense(1, activation=keras.activations.relu, use_bias=True), ]) # Compile the neural network # model0.compile( optimizer = keras.optimizers.RMSprop(lr=0.02,rho=0.9,epsilon=None,decay=0), loss = 'mean_squared_error', metrics=['accuracy'] ) ``` I tried two ways; both of them give errors. 
The first way: ``` model0.fit(numpy.array([array_1, array_2]),output, batch_size=16, epochs=100) ``` ValueError: Error when checking input: expected dense_input to have shape (2,) but got array with shape (336,) The second way: ``` model0.fit([array_1, array_2],output, batch_size=16, epochs=100) ``` ValueError: Error when checking model input: the list of Numpy arrays that you are passing to your model is not the size the model expected. Expected to see 1 array(s), but instead got the following list of 2 arrays: There is a similar question, but it does not use a Sequential model.", "response":"To solve this problem you have two options. 1. Using a sequential model You can concatenate both arrays into one before feeding them to the network. Let's assume the two arrays have a shape of (Number_data_points, ); the arrays can then be merged using the numpy.stack method. ```py merged_array = np.stack([array_1, array_2], axis=1) ``` ```py model0 = keras.Sequential([ keras.layers.Dense(2, input_dim=2, activation=keras.activations.sigmoid, use_bias=True), keras.layers.Dense(1, activation=keras.activations.relu, use_bias=True), ]) model0.fit(merged_array,output, batch_size=16, epochs=100) ``` 2. Using the Functional API. This is the recommended way when there are multiple inputs to the model. 
```py input1 = keras.layers.Input(shape=(1, )) input2 = keras.layers.Input(shape=(1,)) merged = keras.layers.Concatenate(axis=1)([input1, input2]) dense1 = keras.layers.Dense(2, input_dim=2, activation=keras.activations.sigmoid, use_bias=True)(merged) output = keras.layers.Dense(1, activation=keras.activations.relu, use_bias=True)(dense1) model0 = keras.models.Model(inputs=[input1, input2], outputs=output) ``` Now you can fit the model using the second method you were trying: ```py model0.fit([array_1, array_2],output, batch_size=16, epochs=100) ```", "best_answers_score":0.6856, "library_name":"tensorflow", "question_url":"https:\/\/stackoverflow.com\/questions\/55233377\/keras-sequential-model-with-multiple-inputs", "best_answers_votes":49, "question_length":1247, "response_length":1258 }, { "question":"How to know which version of docker image is behind latest tag? I am working with two docker images of tensorflow (latest and latest-gpu tags): FROM tensorflow\/tensorflow:latest-gpu and: FROM tensorflow\/tensorflow:latest In order not to have surprises in the future, I would like to pin the version of these two images. On Docker Hub, I can't find this information on the tags pages: for example, that latest corresponds to the 1.8.0-gpu tag. Do you know if and where I can find this information? Thank you, Alexandre", "response":"Go to the image's web page (nginx in my case) https:\/\/hub.docker.com\/_\/nginx then open the Tags tab, go to any latest entry, and copy its sha256 sum; then sort by newest and scroll down until the first numbered version and check if the exact same sha256 is displayed now ... 
Still, after all that fishing through library\/nginx, one thing is sure: you can verify whether you did it right. For example, I just managed to find that nginx:latest is actually 1.17.8, so I run: ``` docker pull nginx:1.17.8 1.17.8: Pulling from library\/nginx bc51dd8edc1b: Pull complete 66ba67045f57: Pull complete bf317aa10aa5: Pull complete Digest: sha256:ad5552c786f128e389a0263104ae39f3d3c7895579d45ae716f528185b36bc6f Status: Downloaded newer image for nginx:1.17.8 ``` and then I verify by attempting to pull latest: ``` docker pull nginx:latest latest: Pulling from library\/nginx Digest: sha256:ad5552c786f128e389a0263104ae39f3d3c7895579d45ae716f528185b36bc6f Status: Downloaded newer image for nginx:latest ``` As you can see, it didn't actually pull anything, and the sha256 is exactly the same ;)", "best_answers_score":0.6847, "library_name":"tensorflow", "question_url":"https:\/\/stackoverflow.com\/questions\/50702395\/how-to-know-which-version-of-docker-image-is-behind-latest-tag", "best_answers_votes":13, "question_length":517, "response_length":1038 }, { "question":"ImportError: libcublas.so.9.0: cannot open shared object file Currently I have CUDA 8.0 and CUDA 9.0 installed on a GPU-enabled system. I ran into this error while importing from the keras module; it says it failed to load the native TensorFlow runtime. 
The error log which i received was: ``` Traceback (most recent call last): File \"\/usr\/local\/lib\/python3.5\/dist-packages\/tensorflow\/python\/pywrap_tensorflow.py\", line 58, in from tensorflow.python.pywrap_tensorflow_internal import * File \"\/usr\/local\/lib\/python3.5\/dist- packages\/tensorflow\/python\/pywrap_tensorflow_internal.py\", line 28, in _pywrap_tensorflow_internal = swig_import_helper() File \"\/usr\/local\/lib\/python3.5\/dist-packages\/tensorflow\/python\/pywrap_tensorflow_internal.py\", line 24, in swig_import_helper _mod = imp.load_module('_pywrap_tensorflow_internal', fp, pathname, description) File \"\/usr\/lib\/python3.5\/imp.py\", line 242, in load_module return load_dynamic(name, filename, file) File \"\/usr\/lib\/python3.5\/imp.py\", line 342, in load_dynamic return _load(spec) ImportError: libcublas.so.9.0: cannot open shared object file: No such file or directory During handling of the above exception, another exception occurred: Traceback (most recent call last): File \"Try1.py\", line 11, in from keras.models import Sequential File \"\/usr\/local\/lib\/python3.5\/dist-packages\/Keras-2.0.9-py3.5.egg\/keras\/__init__.py\", line 3, in File \"\/usr\/local\/lib\/python3.5\/dist-packages\/Keras-2.0.9-py3.5.egg\/keras\/utils\/__init__.py\", line 6, in File \"\/usr\/local\/lib\/python3.5\/dist-packages\/Keras-2.0.9-py3.5.egg\/keras\/utils\/conv_utils.py\", line 3, in File \"\/usr\/local\/lib\/python3.5\/dist-packages\/Keras-2.0.9-py3.5.egg\/keras\/backend\/__init__.py\", line 83, in File \"\/usr\/local\/lib\/python3.5\/dist-packages\/Keras-2.0.9-py3.5.egg\/keras\/backend\/tensorflow_backend.py\", line 1, in File \"\/usr\/local\/lib\/python3.5\/dist-packages\/tensorflow\/__init__.py\", line 24, in from tensorflow.python import * File \"\/usr\/local\/lib\/python3.5\/dist-packages\/tensorflow\/python\/__init__.py\", line 49, in from tensorflow.python import pywrap_tensorflow File 
\"\/usr\/local\/lib\/python3.5\/dist-packages\/tensorflow\/python\/pywrap_tensorflow.py\", line 73, in raise ImportError(msg) ImportError: Traceback (most recent call last): File \"\/usr\/local\/lib\/python3.5\/dist-packages\/tensorflow\/python\/pywrap_tensorflow.py\", line 58, in from tensorflow.python.pywrap_tensorflow_internal import * File \"\/usr\/local\/lib\/python3.5\/dist-packages\/tensorflow\/python\/pywrap_tensorflow_internal.py\", line 28, in _pywrap_tensorflow_internal = swig_import_helper() File \"\/usr\/local\/lib\/python3.5\/dist-packages\/tensorflow\/python\/pywrap_tensorflow_internal.py\", line 24, in swig_import_helper _mod = imp.load_module('_pywrap_tensorflow_internal', fp, pathname, description) File \"\/usr\/lib\/python3.5\/imp.py\", line 242, in load_module return load_dynamic(name, filename, file) File \"\/usr\/lib\/python3.5\/imp.py\", line 342, in load_dynamic return _load(spec) ImportError: libcublas.so.9.0: cannot open shared object file: No such file or directory Failed to load the native TensorFlow runtime. ``` When I run nvcc --version, the cuda version returned is, ``` Cuda compilation tools, release 8.0, V8.0.61 ``` I read about some similar post but couldn't solve my issue. Mostly I think this is a clash between two cuda versions. Can anyone tell me how to solve this?", "response":"You will need to update your LD_LIBRARY_PATH, so that it points to the \/usr\/local\/cuda-9.0\/lib64. 
Add the following line to your .bashrc file (or the startup file of whichever shell you use): ``` export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:\/usr\/local\/cuda-9.0\/lib64\/ ```", "best_answers_score":0.6843, "library_name":"tensorflow", "question_url":"https:\/\/stackoverflow.com\/questions\/48428415\/importerror-libcublas-so-9-0-cannot-open-shared-object-file", "best_answers_votes":49, "question_length":3350, "response_length":248 }, { "question":"How to do slice assignment in Tensorflow I found that Tensorflow provides scatter_update() to assign values to a slice of a tensor in the 0 dimension. For example, if the tensor T is three dimensional, I can assign value v[1, :, :] to T[i, :, :]. ``` a = tf.Variable(tf.zeros([10,36,36])) value = np.ones([1,36,36]) d = tf.scatter_update(a,[0],value) with tf.Session() as sess: sess.run(tf.initialize_all_variables()) print a.eval() sess.run(d) print a.eval() ``` But how do I assign values v[1,1,:] to T[i,j,:]? ``` a = tf.Variable(tf.zeros([10,36,36])) value1 = np.random.randn(1,1,36) e = tf.scatter_update(a,[0],value1) #Error with tf.Session() as sess: sess.run(tf.initialize_all_variables()) print a.eval() sess.run(e) print a.eval() ``` Is there any other function that TF provides, or a simpler way to do this?", "response":"Currently, you can do slice assignment for variables in TensorFlow. There is no specific named function for it, but you can select a slice and call assign on it: ``` my_var = my_var[4:8].assign(tf.zeros(4)) ``` First, note that (after having looked at the documentation) it seems that the return value of assign, even when applied to a slice, is always a reference to the whole variable after applying the update. EDIT: The information below is either deprecated, imprecise or was always wrong. 
The fact is that the returned value of assign is a tensor that can be readily used and already incorporates the dependency to the assignment, so simply evaluating that or using it in further operations will ensure it gets executed without need for an explicit tf.control_dependencies block. Note, also, that this will only add the assignment op to the graph, but will not run it unless it is explicitly executed or set as a dependency of some other operation. A good practice is to use it in a tf.control_dependencies context: ``` with tf.control_dependencies([my_var[4:8].assign(tf.zeros(4))]): my_var = tf.identity(my_var) ``` You can read more about it in TensorFlow issue #4638.", "best_answers_score":0.6839, "library_name":"tensorflow", "question_url":"https:\/\/stackoverflow.com\/questions\/39157723\/how-to-do-slice-assignment-in-tensorflow", "best_answers_votes":53, "question_length":815, "response_length":1177 }, { "question":"What are logits? What is the difference between softmax and softmax_cross_entropy_with_logits? In the tensorflow API docs they use a keyword called logits. What is it? A lot of methods are written like: ``` tf.nn.softmax(logits, name=None) ``` If logits is just a generic Tensor input, why is it named logits? Secondly, what is the difference between the following two methods? ``` tf.nn.softmax(logits, name=None) tf.nn.softmax_cross_entropy_with_logits(logits, labels, name=None) ``` I know what tf.nn.softmax does, but not the other. An example would be really helpful.", "response":"The softmax+logits simply means that the function operates on the unscaled output of earlier layers and that the relative scale to understand the units is linear. It means, in particular, the sum of the inputs may not equal 1, that the values are not probabilities (you might have an input of 5). Internally, it first applies softmax to the unscaled output, and then computes the cross entropy of those values vs. what they \"should\" be as defined by the labels. 
tf.nn.softmax produces the result of applying the softmax function to an input tensor. The softmax \"squishes\" the inputs so that sum(input) = 1, and it does the mapping by interpreting the inputs as log-probabilities (logits) and then converting them back into raw probabilities between 0 and 1. The shape of output of a softmax is the same as the input: ``` a = tf.constant(np.array([[.1, .3, .5, .9]])) print s.run(tf.nn.softmax(a)) [[ 0.16838508 0.205666 0.25120102 0.37474789]] ``` See this answer for more about why softmax is used extensively in DNNs. tf.nn.softmax_cross_entropy_with_logits combines the softmax step with the calculation of the cross-entropy loss after applying the softmax function, but it does it all together in a more mathematically careful way. It's similar to the result of: ``` sm = tf.nn.softmax(x) ce = cross_entropy(sm) ``` The cross entropy is a summary metric: it sums across the elements. The output of tf.nn.softmax_cross_entropy_with_logits on a shape [2,5] tensor is of shape [2,1] (the first dimension is treated as the batch). If you want to do optimization to minimize the cross entropy AND you're softmaxing after your last layer, you should use tf.nn.softmax_cross_entropy_with_logits instead of doing it yourself, because it covers numerically unstable corner cases in the mathematically right way. Otherwise, you'll end up hacking it by adding little epsilons here and there. Edited 2016-02-07: If you have single-class labels, where an object can only belong to one class, you might now consider using tf.nn.sparse_softmax_cross_entropy_with_logits so that you don't have to convert your labels to a dense one-hot array. 
This function was added after release 0.6.0.", "best_answers_score":0.6831, "library_name":"tensorflow", "question_url":"https:\/\/stackoverflow.com\/questions\/34240703\/what-are-logits-what-is-the-difference-between-softmax-and-softmax-cross-entrop", "best_answers_votes":530, "question_length":572, "response_length":2175 }, { "question":"Obtaining total number of records from .tfrecords file in Tensorflow Is it possible to obtain the total number of records from a .tfrecords file? Related to this, how does one generally keep track of the number of epochs that have elapsed while training models? While it is possible for us to specify the batch_size and num_of_epochs, I am not sure if it is straightforward to obtain values such as the current epoch, the number of batches per epoch, etc. - just so that I could have more control over how the training is progressing. Currently, I'm just using a dirty hack to compute this, as I know beforehand how many records there are in my .tfrecords file and the size of my minibatches. I appreciate any help.", "response":"To count the number of records, you should be able to use tf.python_io.tf_record_iterator. ```py c = 0 for fn in tf_records_filenames: for record in tf.python_io.tf_record_iterator(fn): c += 1 ``` To keep track of model training, TensorBoard comes in handy.
Also, I am working with a tfrecord file.", "response":"You may use Dataset.take() and Dataset.skip(): ``` train_size = int(0.7 * DATASET_SIZE) val_size = int(0.15 * DATASET_SIZE) test_size = int(0.15 * DATASET_SIZE) full_dataset = tf.data.TFRecordDataset(FLAGS.input_file) full_dataset = full_dataset.shuffle(buffer_size=DATASET_SIZE) train_dataset = full_dataset.take(train_size) test_dataset = full_dataset.skip(train_size) val_dataset = test_dataset.skip(test_size) test_dataset = test_dataset.take(test_size) ``` For more generality, I gave an example using a 70\/15\/15 train\/val\/test split, but if you don't need a test or a val set, just ignore the last 2 lines. Take: Creates a Dataset with at most count elements from this dataset. Skip: Creates a Dataset that skips count elements from this dataset. You may also want to look into Dataset.shard(): Creates a Dataset that includes only 1\/num_shards of this dataset.", "best_answers_score":0.6799, "library_name":"tensorflow", "question_url":"https:\/\/stackoverflow.com\/questions\/51125266\/how-do-i-split-tensorflow-datasets", "best_answers_votes":57, "question_length":336, "response_length":842 }, { "question":"Keras model.fit() with tf.dataset API + validation_data So I have got my Keras model to work with a tf.Dataset through the following code: ``` # Initialize batch generators (returns tf.Dataset) batch_train = build_features.get_train_batches(batch_size=batch_size) # Create TensorFlow Iterator object iterator = batch_train.make_one_shot_iterator() dataset_inputs, dataset_labels = iterator.get_next() # Create Model logits = .....(some layers) keras.models.Model(inputs=dataset_inputs, outputs=logits) # Train network model.compile(optimizer=train_opt, loss=model_loss, target_tensors=[dataset_labels]) model.fit(epochs=epochs, steps_per_epoch=num_batches, callbacks=callbacks, verbose=1) ``` However, when I try to pass the validation_data parameter to model.fit, it tells me that I cannot use it with a generator. 
Is there a way to use validation while using tf.Dataset? For example, in TensorFlow I could do the following: ``` # initialize batch generators batch_train = build_features.get_train_batches(batch_size=batch_size) batch_valid = build_features.get_valid_batches(batch_size=batch_size) # create TensorFlow Iterator object iterator = tf.data.Iterator.from_structure(batch_train.output_types, batch_train.output_shapes) # create two initialization ops to switch between the datasets init_op_train = iterator.make_initializer(batch_train) init_op_valid = iterator.make_initializer(batch_valid) ``` and then just use sess.run(init_op_train) and sess.run(init_op_valid) to switch between the datasets. I tried implementing a callback that does just that (switch to the validation set, predict, and switch back), but it tells me I can't use model.predict in a callback. Can someone help me get validation working with Keras + tf.Dataset? Edit (incorporating the answer into the code) - so finally, what worked for me, thanks to the selected answer, is: ``` # Initialize batch generators (returns tf.Dataset) batch_train = # returns tf.Dataset batch_valid = # returns tf.Dataset # Create TensorFlow Iterator object and wrap it in a generator itr_train = make_iterator(batch_train) itr_valid = make_iterator(batch_valid) # Create Model logits = # the keras model keras.models.Model(inputs=dataset_inputs, outputs=logits) # Train network model.compile(optimizer=train_opt, loss=model_loss, target_tensors=[dataset_labels]) model.fit_generator( generator=itr_train, validation_data=itr_valid, validation_steps=batch_size, epochs=epochs, steps_per_epoch=num_batches, callbacks=cbs, verbose=1, workers=0) def make_iterator(dataset): iterator = dataset.make_one_shot_iterator() next_val = iterator.get_next() with K.get_session().as_default() as sess: while True: *inputs, labels = sess.run(next_val) yield inputs, labels ``` This doesn't introduce any overhead.", "response":"I solved the problem by using fit_generator. I found the solution here. 
I applied @Dat-Nguyen's solution. You simply need to create two iterators, one for training and one for validation, and then create your own generator in which you extract batches from the dataset and provide the data in the form (batch_data, batch_labels). Finally, in model.fit_generator you pass the train_generator and validation_generator.", "best_answers_score":0.6792, "library_name":"tensorflow", "question_url":"https:\/\/stackoverflow.com\/questions\/50955798\/keras-model-fit-with-tf-dataset-api-validation-data", "best_answers_votes":4, "question_length":2726, "response_length":420 }, { "question":"Tensorflow: How to convert scalar tensor to scalar variable in python? In Tensorflow, I'd like to convert a scalar tensor to an integer. Is it possible to do? I need to create a loop, and the index of the loop is a scalar tensor; inside the loop body, I want to use the index to access an entry in a tensor array. For example: ``` idx = tf.constant(0) c = lambda i : tf.less(i, 10) def body(idx) : i = # convert idx to int b = weights[i] # access an entry in a tensor array, tensor cannot be used directly .... return idx+1 tf.while_loop(c, body, [idx]) ```", "response":"In Tensorflow 2.0+, it's as simple as: ```py my_tensor.numpy() ```", "best_answers_score":0.6785, "library_name":"tensorflow", "question_url":"https:\/\/stackoverflow.com\/questions\/37049411\/tensorflow-how-to-convert-scalar-tensor-to-scalar-variable-in-python", "best_answers_votes":20, "question_length":560, "response_length":66 }, { "question":"How to disable dropout while prediction in keras? I am using dropout in a neural network model in Keras. A little bit of the code looks like ``` model.add(Dropout(0.5)) model.add(Dense(classes)) ``` For testing, I am using preds = model_1.predict_proba(image). But while testing, Dropout also participates in predicting the score, which should not happen. I have searched a lot for how to disable dropout but haven't found any hints yet. 
Does anyone have a solution to disable Dropout while testing in Keras?", "response":"Keras does this by default. In Keras, dropout is disabled in test mode. You can look at the code here and see that they use the dropped input in training and the actual input while testing. As far as I know, you have to build your own training function from the layers and specify the training flag to predict with dropout (e.g. it's not possible to specify a training flag for the predict functions). This is a problem in case you want to do GANs, which use the intermediate output for training and also train the network as a whole, due to a divergence between generated training images and generated test images.", "best_answers_score":0.6767, "library_name":"tensorflow", "question_url":"https:\/\/stackoverflow.com\/questions\/47787011\/how-to-disable-dropout-while-prediction-in-keras", "best_answers_votes":49, "question_length":482, "response_length":612 }, { "question":"How to install latest cuDNN to conda? In conda the latest version of cudnn is: ``` cudnn 7.3.1 cuda10.0_0 anaconda ``` But I need 7.4.2 for tensorflow-gpu 1.13. How do I install cuDNN==7.4.2 in conda?", "response":"You can install it with conda-forge: conda install -c conda-forge cudnn https:\/\/anaconda.org\/conda-forge\/cudnn It is more up to date than the anaconda channel - for example, as of today the latest version of cudnn on anaconda is still 7.6.5, but on conda-forge it is v8.2.0.53. The same applies to the cudatoolkit package.", "best_answers_score":0.6737, "library_name":"tensorflow", "question_url":"https:\/\/stackoverflow.com\/questions\/55256671\/how-to-install-latest-cudnn-to-conda", "best_answers_votes":15, "question_length":194, "response_length":296 }, { "question":"Understanding Tensorflow LSTM Input shape I have a dataset X which consists of N = 4000 samples; each sample consists of d = 2 features (continuous values) spanning back t = 10 time steps. 
I also have the corresponding 'labels' of each sample which are also continuous values, at time step 11. At the moment my dataset is in the shape X: [4000,20], Y: [4000]. I want to train an LSTM using TensorFlow to predict the value of Y (regression), given the 10 previous inputs of d features, but I am having a tough time implementing this in TensorFlow. The main problem I have at the moment is understanding how TensorFlow is expecting the input to be formatted. I have seen various examples such as this, but these examples deal with one big string of continuous time series data. My data is different samples, each an independent time series.", "response":"The documentation of tf.nn.dynamic_rnn states: inputs: The RNN inputs. If time_major == False (default), this must be a Tensor of shape: [batch_size, max_time, ...], or a nested tuple of such elements. In your case, this means that the input should have a shape of [batch_size, 10, 2]. Instead of training on all 4000 sequences at once, you'd use only batch_size many of them in each training iteration. Something like the following should work (added reshape for clarity): ``` batch_size = 32 # batch_size sequences of length 10 with 2 values for each timestep input = get_batch(X, batch_size).reshape([batch_size, 10, 2]) # Create LSTM cell with state size 256. Could also use GRUCell, ... # Note: state_is_tuple=False is deprecated; # the option might be completely removed in the future cell = tf.nn.rnn_cell.LSTMCell(256, state_is_tuple=True) outputs, state = tf.nn.dynamic_rnn(cell, input, sequence_length=[10]*batch_size, dtype=tf.float32) ``` From the documentation, outputs will be of shape [batch_size, 10, 256], i.e. one 256-output for each timestep. state will be a tuple of shapes [batch_size, 256]. 
You could predict your final value, one for each sequence, from that: ``` predictions = tf.contrib.layers.fully_connected(state.h, num_outputs=1, activation_fn=None) loss = get_loss(get_batch(Y).reshape([batch_size, 1]), predictions) ``` The number 256 in the shapes of outputs and state is determined by cell.output_size resp. cell.state_size. When creating the LSTMCell like above, these are the same. Also see the LSTMCell documentation.", "best_answers_score":0.6736, "library_name":"tensorflow", "question_url":"https:\/\/stackoverflow.com\/questions\/39324520\/understanding-tensorflow-lstm-input-shape", "best_answers_votes":20, "question_length":835, "response_length":1553 }, { "question":"How do I find out the version of TensorFlow on my computer? What's the command to find out the version of TensorFlow on my computer? I installed TensorFlow on my computer some time ago and want to make sure that I have the latest version.", "response":"``` import tensorflow as tf tf.__version__ ```", "best_answers_score":0.673, "library_name":"tensorflow", "question_url":"https:\/\/stackoverflow.com\/questions\/39024493\/how-do-i-find-out-the-version-of-tensorflow-on-my-computer", "best_answers_votes":61, "question_length":238, "response_length":46 }, { "question":"Remove nodes from graph or reset entire default graph When working with the default global graph, is it possible to remove nodes after they've been added, or alternatively to reset the default graph to empty? When working with TF interactively in IPython, I find myself having to restart the kernel repeatedly. I would like to be able to experiment with graphs more easily if possible.", "response":"Update 11\/2\/2016 tf.reset_default_graph() Old stuff There's reset_default_graph, but it's not part of the public API (I think it should be; does someone want to file an issue on GitHub?) 
My work-around to reset things is this: ``` from tensorflow.python.framework import ops ops.reset_default_graph() sess = tf.InteractiveSession() ```", "best_answers_score":0.6719, "library_name":"tensorflow", "question_url":"https:\/\/stackoverflow.com\/questions\/33765336\/remove-nodes-from-graph-or-reset-entire-default-graph", "best_answers_votes":105, "question_length":385, "response_length":327 }, { "question":"Tensorflow - matmul of input matrix with batch data I have some data represented by input_x. It is a tensor of unknown size (should be inputted by batch) and each item there is of size n. input_x undergoes tf.nn.embedding_lookup, so that embed now has dimensions [?, n, m] where m is the embedding size and ? refers to the unknown batch size. This is described here: ``` input_x = tf.placeholder(tf.int32, [None, n], name=\"input_x\") embed = tf.nn.embedding_lookup(W, input_x) ``` I'm now trying to multiply each sample in my input data (which is now expanded by embedding dimension) by a matrix variable, U, and I can't seem to get how to do that. I first tried using tf.matmul but it gives an error due to mismatch in shapes. I then tried the following, by expanding the dimension of U and applying batch_matmul (I also tried the function from tf.nn.math_ops., the result was the same): ``` U = tf.Variable( ... ) U1 = tf.expand_dims(U,0) h=tf.batch_matmul(embed, U1) ``` This passes the initial compilation, but then when actual data is applied, I get the following error: In[0].dim(0) and In[1].dim(0) must be the same: [64,58,128] vs [1,128,128] I also know why this is happening - I replicated the dimension of U and it is now 1, but the minibatch size, 64, doesn't fit. How can I do that matrix multiplication on my tensor-matrix input correctly (for unknown batch size)?", "response":"Previous answers are obsolete. 
Currently tf.matmul() supports tensors with rank > 2: The inputs must be matrices (or tensors of rank > 2, representing batches of matrices), with matching inner dimensions, possibly after transposition. Also tf.batch_matmul() was removed and tf.matmul() is the right way to do batch multiplication. The main idea can be understood from the following code: ``` import tensorflow as tf batch_size, n, m, k = 10, 3, 5, 2 A = tf.Variable(tf.random_normal(shape=(batch_size, n, m))) B = tf.Variable(tf.random_normal(shape=(batch_size, m, k))) tf.matmul(A, B) ``` Now you will receive a tensor of the shape (batch_size, n, k). Here is what is going on. Assume you have batch_size matrices of shape nxm and batch_size matrices of shape mxk. Now for each pair of them you calculate nxm X mxk, which gives you an nxk matrix. You will have batch_size of them. Notice that something like this is also valid: ``` A = tf.Variable(tf.random_normal(shape=(a, b, n, m))) B = tf.Variable(tf.random_normal(shape=(a, b, m, k))) tf.matmul(A, B) ``` and will give you a shape (a, b, n, k)", "best_answers_score":0.6716, "library_name":"tensorflow", "question_url":"https:\/\/stackoverflow.com\/questions\/38235555\/tensorflow-matmul-of-input-matrix-with-batch-data", "best_answers_votes":90, "question_length":1377, "response_length":1090 }, { "question":"Keras early stopping callback error, val_loss metric not available I am training a Keras model (Tensorflow backend, Python, on MacBook) and am getting an error in the early stopping callback in the fit_generator function. The error is as follows: ``` RuntimeWarning: Early stopping conditioned on metric `val_loss` which is not available. Available metrics are: (self.monitor, ','.join(list(logs.keys()))), RuntimeWarning: Can save best model only with val_acc available, skipping. 'skipping.'
% (self.monitor), RuntimeWarning [local-dir]\/lib\/python3.6\/site-packages\/keras\/callbacks.py:497: RuntimeWarning: Early stopping conditioned on metric `val_loss` which is not available. Available metrics are: (self.monitor, ','.join(list(logs.keys()))), RuntimeWarning [local-dir]\/lib\/python3.6\/site-packages\/keras\/callbacks.py:406: RuntimeWarning: Can save best model only with val_acc available, skipping. 'skipping.' % (self.monitor), RuntimeWarning) Traceback (most recent call last): : [my-code] : File \"[local-dir]\/lib\/python3.6\/site-packages\/keras\/legacy\/interfaces.py\", line 91, in wrapper return func(*args, **kwargs) File \"[local-dir]\/lib\/python3.6\/site-packages\/keras\/engine\/training.py\", line 2213, in fit_generator callbacks.on_epoch_end(epoch, epoch_logs) File \"[local-dir]\/lib\/python3.6\/site-packages\/keras\/callbacks.py\", line 76, in on_epoch_end callback.on_epoch_end(epoch, logs) File \"[local-dir]\/lib\/python3.6\/site-packages\/keras\/callbacks.py\", line 310, in on_epoch_end self.progbar.update(self.seen, self.log_values, force=True) AttributeError: 'ProgbarLogger' object has no attribute 'log_values' ``` My code is as follows (which looks OK): ``` : ES = EarlyStopping(monitor=\"val_loss\", min_delta=0.001, patience=3, mode=\"min\", verbose=1) : self.model.fit_generator( generator = train_batch, validation_data = valid_batch, validation_steps = validation_steps, steps_per_epoch = steps_per_epoch, epochs = epochs, callbacks = [ES], verbose = 1, workers = 3, max_queue_size = 8) ``` The error message appears to relate to the early stopping callback but the callback looks OK. Also the error states that the val_loss is not appropriate, but I am not sure why... one more unusual thing about this is that the error only occurs when I use smaller data sets. 
Any help is appreciated.", "response":"If the error only occurs when you use smaller datasets, you're very likely using datasets small enough to not have a single sample in the validation set. Thus it cannot calculate a validation loss.", "best_answers_score":0.6713, "library_name":"tensorflow", "question_url":"https:\/\/stackoverflow.com\/questions\/49035200\/keras-early-stopping-callback-error-val-loss-metric-not-available", "best_answers_votes":36, "question_length":2281, "response_length":197 }, { "question":"'tf' is not defined on load_model() - using lambda I have a Keras model that I am trying to export and use in a different python code. Here is my code: ``` from keras.models import Sequential from keras.layers import Dense, Embedding, LSTM, GRU, Flatten, Dropout, Lambda from keras.layers.embeddings import Embedding import tensorflow as tf EMBEDDING_DIM = 100 model = Sequential() model.add(Embedding(vocab_size, 300, weights=[embedding_matrix], input_length=max_length, trainable=False)) model.add(Lambda(lambda x: tf.reduce_mean(x, axis=1))) model.add(Dense(8, input_dim=4, activation='relu')) model.add(Dense(3, activation='softmax')) model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy']) model.fit(X_train_pad, y_train, batch_size=128, epochs=25, validation_data=(X_val_pad, y_val), verbose=2) model.save('my_model.h5') ``` In another file, when I import my_model.h5 : ``` from keras.models import load_model from keras.layers import Lambda import tensorflow as tf def learning(test_samples): model = load_model('my_model.h5') #ERROR HERE #rest of the code ``` The error is the following: ``` in model.add(Lambda(lambda x: tf.reduce_mean(x, axis=1))) NameError: name 'tf' is not defined ``` After research, I got that the fact that I used lambda in my model is the reason for this problem, but I added these references and it didn't help: ``` from keras.models import load_model from keras.layers import Lambda import tensorflow as tf ``` 
What could be the problem? Thank you", "response":"When loading the model, you need to explicitly handle custom objects or custom layers (CTRL+f the docs for Handling custom layers): ``` import tensorflow as tf import keras model = keras.models.load_model('my_model.h5', custom_objects={'tf': tf}) ```", "best_answers_score":0.6712, "library_name":"tensorflow", "question_url":"https:\/\/stackoverflow.com\/questions\/54347963\/tf-is-not-defined-on-load-model-using-lambda", "best_answers_votes":44, "question_length":1514, "response_length":250 }, { "question":"what is the difference between Flatten() and GlobalAveragePooling2D() in keras I want to pass the output of ConvLSTM and Conv2D to a Dense Layer in Keras, what is the difference between using global average pooling and flatten Both is working in my case. ```python model.add(ConvLSTM2D(filters=256,kernel_size=(3,3))) model.add(Flatten()) # or model.add(GlobalAveragePooling2D()) model.add(Dense(256,activation='relu')) ```", "response":"That both seem to work doesn't mean they do the same. Flatten will take a tensor of any shape and transform it into a one dimensional tensor (plus the samples dimension) but keeping all values in the tensor. For example a tensor (samples, 10, 20, 1) will be flattened to (samples, 10 * 20 * 1). GlobalAveragePooling2D does something different. It applies average pooling on the spatial dimensions until each spatial dimension is one, and leaves other dimensions unchanged. In this case values are not kept as they are averaged. 
For example a tensor (samples, 10, 20, 1) would be output as (samples, 1), assuming the 2nd and 3rd dimensions were spatial (channels last).", "best_answers_score":0.671, "library_name":"tensorflow", "question_url":"https:\/\/stackoverflow.com\/questions\/49295311\/what-is-the-difference-between-flatten-and-globalaveragepooling2d-in-keras", "best_answers_votes":64, "question_length":423, "response_length":674 }, { "question":"How to set specific gpu in tensorflow? I want to specify the gpu to run my process. And I set it as follows: ``` import tensorflow as tf with tf.device('\/gpu:0'): a = tf.constant(3.0) with tf.Session() as sess: while True: print sess.run(a) ``` However it still allocates memory on both of my two gpus. ``` | 0 7479 C python 5437MiB | 1 7479 C python 5437MiB ```", "response":"TF would allocate all available memory on each visible GPU if not told otherwise. Here are 5 ways to stick to just one (or a few) GPUs. Bash solution. Set CUDA_VISIBLE_DEVICES=0,1 in your terminal\/console before starting python or jupyter notebook: ``` CUDA_VISIBLE_DEVICES=0,1 python script.py ``` Python solution. Run the next 2 lines of code before constructing a session: ``` import os os.environ[\"CUDA_VISIBLE_DEVICES\"]=\"0,1\" ``` Automated solution. The method below will automatically detect GPU devices that are not used by other scripts and set CUDA_VISIBLE_DEVICES for you. You have to call mask_unused_gpus before constructing a session. It will filter out GPUs by current memory usage. This way you can run multiple instances of your script at once without changing your code or setting console parameters.
The function: ``` import subprocess as sp import os def mask_unused_gpus(leave_unmasked=1): ACCEPTABLE_AVAILABLE_MEMORY = 1024 COMMAND = \"nvidia-smi --query-gpu=memory.free --format=csv\" try: _output_to_list = lambda x: x.decode('ascii').split('\\n')[:-1] memory_free_info = _output_to_list(sp.check_output(COMMAND.split()))[1:] memory_free_values = [int(x.split()[0]) for i, x in enumerate(memory_free_info)] available_gpus = [i for i, x in enumerate(memory_free_values) if x > ACCEPTABLE_AVAILABLE_MEMORY] if len(available_gpus) < leave_unmasked: raise ValueError('Found only %d usable GPUs in the system' % len(available_gpus)) os.environ[\"CUDA_VISIBLE_DEVICES\"] = ','.join(map(str, available_gpus[:leave_unmasked])) except Exception as e: print('\"nvidia-smi\" is probably not installed. GPUs are not masked', e) mask_unused_gpus(2) ``` Limitations: if you start multiple scripts at once it might cause a collision, because memory is not allocated immediately when you construct a session. In case it is a problem for you, you can use a randomized version as in the original source code: mask_busy_gpus() Tensorflow 2.0 suggests yet another method: ``` gpus = tf.config.experimental.list_physical_devices('GPU') if gpus: # Restrict TensorFlow to only use the first GPU try: tf.config.experimental.set_visible_devices(gpus[0], 'GPU') except RuntimeError as e: # Visible devices must be set at program startup print(e) ``` Tensorflow\/Keras also allows specifying the gpu to be used via the session config. I can recommend it only if setting an environment variable is not an option (i.e. an MPI run), because it tends to be the least reliable of all methods, especially with keras.
``` config = tf.ConfigProto() config.gpu_options.visible_device_list = \"0,1\" with tf.Session(config) as sess: #or K.set_session(tf.Session(config)) ```", "best_answers_score":0.6703, "library_name":"tensorflow", "question_url":"https:\/\/stackoverflow.com\/questions\/40069883\/how-to-set-specific-gpu-in-tensorflow", "best_answers_votes":58, "question_length":358, "response_length":2626 }, { "question":"What is the use of a *.pb file in TensorFlow and how does it work? I am using some implementation for creating a face recognition which uses this file: \"facenet.load_model(\"20170512-110547\/20170512-110547.pb\")\" What is the use of this file? I am not sure how it works. Console log: ``` Model filename: 20170512-110547\/20170512-110547.pb distance = 0.72212267 ``` GitHub link of the actual owner of the code: https:\/\/github.com\/arunmandal53\/facematch", "response":"pb stands for protobuf. In TensorFlow, the protbuf file contains the graph definition as well as the weights of the model. Thus, a pb file is all you need to be able to run a given trained model. Given a pb file, you can load it as follows: ``` def load_pb(path_to_pb): with tf.gfile.GFile(path_to_pb, \"rb\") as f: graph_def = tf.GraphDef() graph_def.ParseFromString(f.read()) with tf.Graph().as_default() as graph: tf.import_graph_def(graph_def, name='') return graph ``` Once you have loaded the graph, you can basically do anything. 
For instance, you can retrieve tensors of interest with ``` input = graph.get_tensor_by_name('input:0') output = graph.get_tensor_by_name('output:0') ``` and use regular TensorFlow routine like: ``` sess.run(output, feed_dict={input: some_data}) ```", "best_answers_score":0.6703, "library_name":"tensorflow", "question_url":"https:\/\/stackoverflow.com\/questions\/51278213\/what-is-the-use-of-a-pb-file-in-tensorflow-and-how-does-it-work", "best_answers_votes":100, "question_length":449, "response_length":784 }, { "question":"Difference between `apply_gradients` and `minimize` of optimizer in tensorflow I am confused about the difference between apply_gradients and minimize of optimizer in tensorflow. For example, ```py optimizer = tf.train.AdamOptimizer(1e-3) grads_and_vars = optimizer.compute_gradients(cnn.loss) train_op = optimizer.apply_gradients(grads_and_vars, global_step=global_step) ``` and ```py optimizer = tf.train.AdamOptimizer(1e-3) train_op = optimizer.minimize(cnn.loss, global_step=global_step) ``` Are they the same indeed? If I want to decay the learning rate, can I use the following codes? ```py global_step = tf.Variable(0, name=\"global_step\", trainable=False) starter_learning_rate = 1e-3 learning_rate = tf.train.exponential_decay(starter_learning_rate, global_step, 100, FLAGS.decay_rate, staircase=True) # Passing global_step to minimize() will increment it at each step. learning_step = ( optimizer = tf.train.AdamOptimizer(learning_rate) grads_and_vars = optimizer.compute_gradients(cnn.loss) train_op = optimizer.apply_gradients(grads_and_vars, global_step=global_step) ) ``` Thanks for your help!", "response":"You can easily know from the link : https:\/\/www.tensorflow.org\/get_started\/get_started (tf.train API part) that they actually do the same job. 
The difference is that if you use the separate functions (tf.gradients, tf.apply_gradients), you can apply other mechanisms between them, such as gradient clipping.", "best_answers_score":0.6703, "library_name":"tensorflow", "question_url":"https:\/\/stackoverflow.com\/questions\/45473682\/difference-between-apply-gradients-and-minimize-of-optimizer-in-tensorflow", "best_answers_votes":27, "question_length":1106, "response_length":308 }, { "question":"tf.data.Dataset: how to get the dataset size (number of elements in an epoch)? Let's say I have defined a dataset in this way: ``` filename_dataset = tf.data.Dataset.list_files(\"{}\/*.png\".format(dataset)) ``` how can I get the number of elements that are inside the dataset (hence, the number of single elements that compose an epoch)? I know that tf.data.Dataset already knows the dimension of the dataset, because the repeat() method allows repeating the input pipeline for a specified number of epochs. So there must be a way to get this information.", "response":"len(list(dataset)) works in eager mode, although that's obviously not a good general solution.", "best_answers_score":0.6701, "library_name":"tensorflow", "question_url":"https:\/\/stackoverflow.com\/questions\/50737192\/tf-data-dataset-how-to-get-the-dataset-size-number-of-elements-in-an-epoch", "best_answers_votes":50, "question_length":550, "response_length":94 }, { "question":"Higher validation accuracy than training accuracy using Tensorflow and Keras [closed]
I'm trying to use deep learning to predict income from 15 self-reported attributes from a dating site. We're getting rather odd results: our validation data is getting better accuracy and lower loss than our training data, and this is consistent across different sizes of hidden layers. This is our model: ```py for hl1 in [250, 200, 150, 100, 75, 50, 25, 15, 10, 7]: def baseline_model(): model = Sequential() model.add(Dense(hl1, input_dim=299, kernel_initializer='normal', activation='relu', kernel_regularizer=regularizers.l1_l2(0.001))) model.add(Dropout(0.5, seed=seed)) model.add(Dense(3, kernel_initializer='normal', activation='sigmoid')) model.compile(loss='categorical_crossentropy', optimizer='adamax', metrics=['accuracy']) return model history_logs = LossHistory() model = baseline_model() history = model.fit(X, Y, validation_split=0.3, shuffle=False, epochs=50, batch_size=10, verbose=2, callbacks=[history_logs]) ``` And this is an example of the accuracy and losses [accuracy and loss plots omitted]. We've tried to remove regularization and dropout, which, as expected, ended in overfitting (training acc: ~85%). We've even tried to decrease the learning rate drastically, with similar results. Has anyone seen similar results?", "response":"This happens when you use Dropout, since the behaviour during training and testing is different. When training, a percentage of the features are set to zero (50% in your case since you are using Dropout(0.5)). When testing, all features are used (and are scaled appropriately).
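This train\/test asymmetry can be sketched in a few lines of NumPy. This is a toy illustration assuming \"inverted\" dropout (scaling applied at training time, which is what Keras does), not the actual Keras implementation: ```python
import numpy as np

rng = np.random.default_rng(0)
rate = 0.5
features = np.ones((4, 8))  # toy activations, all 1.0

# Training: zero out roughly `rate` of the features and scale the
# survivors by 1 / (1 - rate), keeping the expected activation unchanged.
mask = rng.random(features.shape) >= rate
train_out = features * mask / (1.0 - rate)

# Testing: dropout is a no-op; every feature passes through untouched.
test_out = features
``` During training every surviving activation is doubled and the rest are zeroed, while at test time the full, un-noised feature vector is used.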
So the model at test time is more robust - and can lead to higher testing accuracies.", "best_answers_score":0.6697, "library_name":"tensorflow", "question_url":"https:\/\/stackoverflow.com\/questions\/43979449\/higher-validation-accuracy-than-training-accurracy-using-tensorflow-and-keras", "best_answers_votes":169, "question_length":1898, "response_length":362 }, { "question":"Why is this TensorFlow implementation vastly less successful than Matlab's NN? As a toy example I'm trying to fit a function f(x) = 1\/x from 100 no-noise data points. The matlab default implementation is phenomenally successful with mean square difference ~10^-10, and interpolates perfectly. I implement a neural network with one hidden layer of 10 sigmoid neurons. I'm a beginner at neural networks so be on your guard against dumb code. ``` import tensorflow as tf import numpy as np def weight_variable(shape): initial = tf.truncated_normal(shape, stddev=0.1) return tf.Variable(initial) def bias_variable(shape): initial = tf.constant(0.1, shape=shape) return tf.Variable(initial) #Can't make tensorflow consume ordinary lists unless they're parsed to ndarray def toNd(lst): lgt = len(lst) x = np.zeros((1, lgt), dtype='float32') for i in range(0, lgt): x[0,i] = lst[i] return x xBasic = np.linspace(0.2, 0.8, 101) xTrain = toNd(xBasic) yTrain = toNd(map(lambda x: 1\/x, xBasic)) x = tf.placeholder(\"float\", [1,None]) hiddenDim = 10 b = bias_variable([hiddenDim,1]) W = weight_variable([hiddenDim, 1]) b2 = bias_variable([1]) W2 = weight_variable([1, hiddenDim]) hidden = tf.nn.sigmoid(tf.matmul(W, x) + b) y = tf.matmul(W2, hidden) + b2 # Minimize the squared errors. loss = tf.reduce_mean(tf.square(y - yTrain)) optimizer = tf.train.GradientDescentOptimizer(0.5) train = optimizer.minimize(loss) # For initializing the variables. 
init = tf.initialize_all_variables() # Launch the graph sess = tf.Session() sess.run(init) for step in xrange(0, 4001): train.run({x: xTrain}, sess) if step % 500 == 0: print loss.eval({x: xTrain}, sess) ``` Mean square difference ends at ~2*10^-3, so about 7 orders of magnitude worse than matlab. Visualising with ``` xTest = np.linspace(0.2, 0.8, 1001) yTest = y.eval({x:toNd(xTest)}, sess) import matplotlib.pyplot as plt plt.plot(xTest,yTest.transpose().tolist()) plt.plot(xTest,map(lambda x: 1\/x, xTest)) plt.show() ``` we can see the fit is systematically imperfect, while the matlab one looks perfect to the naked eye, with the differences uniformly < 10^-5. I have tried to replicate with TensorFlow the diagram of the Matlab network. Incidentally, the diagram seems to imply a tanh rather than sigmoid activation function. I cannot find it anywhere in the documentation to be sure. However, when I try to use a tanh neuron in TensorFlow the fitting quickly fails with nan for variables. I do not know why. Matlab uses the Levenberg\u2013Marquardt training algorithm. Bayesian regularization is even more successful, with mean squares at 10^-12 (we are probably in the area of vapours of float arithmetic). Why is the TensorFlow implementation so much worse, and what can I do to make it better?", "response":"I tried training for 50000 iterations and it got to 0.00012 error. It takes about 180 seconds on a Tesla K40. It seems that for this kind of problem, first order gradient descent is not a good fit (pun intended), and you need Levenberg\u2013Marquardt or l-BFGS. I don't think anyone has implemented them in TensorFlow yet. Edit Use tf.train.AdamOptimizer(0.1) for this problem. It gets to 3.13729e-05 after 4000 iterations. Also, GPU with the default strategy seems like a bad idea for this problem.
There are many small operations and the overhead causes GPU version to run 3x slower than CPU on my machine.", "best_answers_score":0.6697, "library_name":"tensorflow", "question_url":"https:\/\/stackoverflow.com\/questions\/33720645\/why-is-this-tensorflow-implementation-vastly-less-successful-than-matlabs-nn", "best_answers_votes":25, "question_length":2721, "response_length":594 }, { "question":"UsageError: Line magic function `%tensorflow_version` not found I've got TensorFlow installed on my machine however I'm keep getting the error: UsageError: Line magic function `%tensorflow_version` not found. Any ideas as to why this is? The code I ran is below (Jupyter Notebook) ``` %tensorflow_version 1.x import tensorflow as tf print(tf.__version__) ```", "response":"Jupyter notebook comes with a set of magic functions, but %tensorflow_version is not one of them. The magic command ``` %tensorflow_version X.X ``` is only available in Google Colab notebooks, not Jupyter notebooks.", "best_answers_score":0.6696, "library_name":"tensorflow", "question_url":"https:\/\/stackoverflow.com\/questions\/59514268\/usageerror-line-magic-function-tensorflow-version-not-found", "best_answers_votes":35, "question_length":358, "response_length":215 }, { "question":"How to display custom images in TensorBoard using Keras? I'm working on a segmentation problem in Keras and I want to display segmentation results at the end of every training epoch. I want something similar to Tensorflow: How to Display Custom Images in Tensorboard (e.g. Matplotlib Plots), but using Keras. I know that Keras has the TensorBoard callback but it seems limited for this purpose. I know this would break the Keras backend abstraction, but I'm interested in using TensorFlow backend anyway. 
Is it possible to achieve that with Keras + TensorFlow?", "response":"So, the following solution works well for me: ```py import tensorflow as tf def make_image(tensor): \"\"\" Convert an numpy representation image to Image protobuf. Copied from https:\/\/github.com\/lanpa\/tensorboard-pytorch\/ \"\"\" from PIL import Image height, width, channel = tensor.shape image = Image.fromarray(tensor) import io output = io.BytesIO() image.save(output, format='PNG') image_string = output.getvalue() output.close() return tf.Summary.Image(height=height, width=width, colorspace=channel, encoded_image_string=image_string) class TensorBoardImage(keras.callbacks.Callback): def __init__(self, tag): super().__init__() self.tag = tag def on_epoch_end(self, epoch, logs={}): # Load image img = data.astronaut() # Do something to the image img = (255 * skimage.util.random_noise(img)).astype('uint8') image = make_image(img) summary = tf.Summary(value=[tf.Summary.Value(tag=self.tag, image=image)]) writer = tf.summary.FileWriter('.\/logs') writer.add_summary(summary, epoch) writer.close() return tbi_callback = TensorBoardImage('Image Example') ``` Just pass the callback to fit or fit_generator. Note that you can also run some operations using the model inside the callback. For example, you may run the model on some images to check its performance.", "best_answers_score":0.6691, "library_name":"tensorflow", "question_url":"https:\/\/stackoverflow.com\/questions\/43784921\/how-to-display-custom-images-in-tensorboard-using-keras", "best_answers_votes":41, "question_length":560, "response_length":1261 }, { "question":"How to convert one-hot encodings into integers? I have a numpy array data set with shape (100,10). Each row is a one-hot encoding. I want to transfer it into a nd-array with shape (100,) such that I transferred each vector row into a integer that denote the index of the nonzero index. 
Is there a quick way of doing this using numpy or tensorflow?", "response":"You can use numpy.argmax or tf.argmax. Example: ``` import numpy as np a = np.array([[0,1,0,0],[1,0,0,0],[0,0,0,1]]) print('np.argmax(a, axis=1): {0}'.format(np.argmax(a, axis=1))) ``` output: ``` np.argmax(a, axis=1): [1 0 3] ``` You may also want to look at sklearn.preprocessing.LabelBinarizer.inverse_transform.", "best_answers_score":0.669, "library_name":"tensorflow", "question_url":"https:\/\/stackoverflow.com\/questions\/42497340\/how-to-convert-one-hot-encodings-into-integers", "best_answers_votes":52, "question_length":347, "response_length":315 }, { "question":"Tensorflow Allocation Memory: Allocation of 38535168 exceeds 10% of system memory Using ResNet50 pre-trained Weights I am trying to build a classifier. The code base is fully implemented in Keras high-level Tensorflow API. The complete code is posted in the below GitHub Link. Source Code: Classification Using RestNet50 Architecture The file size of the pre-trained model is 94.7mb. I loaded the pre-trained file ``` new_model = Sequential() new_model.add(ResNet50(include_top=False, pooling='avg', weights=resnet_weight_paths)) ``` and fit the model ``` train_generator = data_generator.flow_from_directory( 'path_to_the_training_set', target_size = (IMG_SIZE,IMG_SIZE), batch_size = 12, class_mode = 'categorical' ) validation_generator = data_generator.flow_from_directory( 'path_to_the_validation_set', target_size = (IMG_SIZE,IMG_SIZE), class_mode = 'categorical' ) #compile the model new_model.fit_generator( train_generator, steps_per_epoch = 3, validation_data = validation_generator, validation_steps = 1 ) ``` and in the Training dataset, I have two folders dog and cat, each holder almost 10,000 images. When I compiled the script, I get the following error Epoch 1\/1 2018-05-12 13:04:45.847298: W tensorflow\/core\/framework\/allocator.cc:101] Allocation of 38535168 exceeds 10% of system memory. 
2018-05-12 13:04:46.845021: W tensorflow\/core\/framework\/allocator.cc:101] Allocation of 37171200 exceeds 10% of system memory. 2018-05-12 13:04:47.552176: W tensorflow\/core\/framework\/allocator.cc:101] Allocation of 37171200 exceeds 10% of system memory. 2018-05-12 13:04:48.199240: W tensorflow\/core\/framework\/allocator.cc:101] Allocation of 37171200 exceeds 10% of system memory. 2018-05-12 13:04:48.918930: W tensorflow\/core\/framework\/allocator.cc:101] Allocation of 37171200 exceeds 10% of system memory. 2018-05-12 13:04:49.274137: W tensorflow\/core\/framework\/allocator.cc:101] Allocation of 19267584 exceeds 10% of system memory. 2018-05-12 13:04:49.647061: W tensorflow\/core\/framework\/allocator.cc:101] Allocation of 19267584 exceeds 10% of system memory. 2018-05-12 13:04:50.028839: W tensorflow\/core\/framework\/allocator.cc:101] Allocation of 19267584 exceeds 10% of system memory. 2018-05-12 13:04:50.413735: W tensorflow\/core\/framework\/allocator.cc:101] Allocation of 19267584 exceeds 10% of system memory. Any ideas to optimize the way to load the pre-trained model (or) get rid of this warning message? Thanks!", "response":"Try reducing batch_size attribute to a small number(like 1,2 or 3). 
Example: ``` train_generator = data_generator.flow_from_directory( 'path_to_the_training_set', target_size = (IMG_SIZE,IMG_SIZE), batch_size = 2, class_mode = 'categorical' ) ```", "best_answers_score":0.6684, "library_name":"tensorflow", "question_url":"https:\/\/stackoverflow.com\/questions\/50304156\/tensorflow-allocation-memory-allocation-of-38535168-exceeds-10-of-system-memor", "best_answers_votes":40, "question_length":2428, "response_length":246 }, { "question":"'module' object has no attribute 'SummaryWriter' I'm using Tensorflow version 0.12.head with Python 2.7 on a linux CentOS 7 and when I run this: ``` import tensorflow as tf a = tf.constant(5, name=\"input_a\") b = tf.constant(3, name=\"input_b\") c = tf.mul(a, b, name=\"mul_c\") d = tf.add(a, b, name=\"add_d\") e = tf.add(c, d, name=\"add_e\") sess = tf.Session() output = sess.run(e) writer = tf.train.SummaryWriter('.\/my_graph', sess.graph) ``` I get this error: ``` AttributeError Traceback (most recent call last) in () ----> 1 writer = tf.train.SummaryWriter('.\/my_graph', sess.graph) AttributeError: 'module' object has no attribute 'SummaryWriter' ``` I have run these two commands because there is bug issue on Github for the same problem: ``` >>> import six >>> print(six.__version__) 1.10.0 >>> print(dir(six.moves.queue)) ['Empty', 'Full', 'LifoQueue', 'PriorityQueue', 'Queue', '__all__', '__builtins__', '__doc__', '__file__', '__name__', '__package__', '_threading', '_time', 'deque', 'heapq'] >>> print(six.moves.queue.__file__) \/usr\/lib64\/python2.7\/Queue.pyc ``` I'm new in Python and in Tensorflow. Do you know how can I fix this error? 
I have changed SummaryWriter with FileWriter: ``` writer = tf.train.FileWriter('.\/my_graph', sess.graph) ``` And I get the same error but with FileWriter function: ``` AttributeError Traceback (most recent call last) in () ----> 1 writer = tf.train.FileWriter('.\/my_graph', sess.graph) AttributeError: 'module' object has no attribute 'FileWriter' ``` I have also run it in a terminal and I get the same result: ``` [VansFannel@localhost ~]$ python Python 2.7.5 (default, Nov 6 2016, 00:28:07) [GCC 4.8.5 20150623 (Red Hat 4.8.5-11)] on linux2 Type \"help\", \"copyright\", \"credits\" or \"license\" for more information. >>> import tensorflow as tf W tensorflow\/core\/platform\/cpu_feature_guard.cc:95] The TensorFlow library wasn't compiled to use SSE4.2 instructions, but these are available on your machine and could speed up CPU computations. W tensorflow\/core\/platform\/cpu_feature_guard.cc:95] The TensorFlow library wasn't compiled to use AVX instructions, but these are available on your machine and could speed up CPU computations. >>> a = tf.constant(5, name=\"input_a\") >>> b = tf.constant(3, name=\"input_b\") >>> c = tf.mul(a, b, name=\"mul_c\") >>> d = tf.add(a, b, name=\"add_d\") >>> e = tf.add(c, d, name=\"add_e\") >>> sess = tf.Session() >>> output = sess.run(e) >>> writer = tf.train.FileWriter('.\/my_graph', sess.graph) Traceback (most recent call last): File \"\", line 1, in AttributeError: 'module' object has no attribute 'FileWriter' >>> ```", "response":"tf.train.SummaryWriter is deprecated, instead use tf.summary.FileWriter. \u21b3 Adding Summaries to Event Files It will be removed after 2016-11-30. Instructions for updating: Please switch to tf.summary.FileWriter. The interface and behavior is the same; this is just a rename. 
\u2733\ufe0e includes all current deprecated\/renamed functions \u2733\ufe0e", "best_answers_score":0.6683, "library_name":"tensorflow", "question_url":"https:\/\/stackoverflow.com\/questions\/41482913\/module-object-has-no-attribute-summarywriter", "best_answers_votes":68, "question_length":2596, "response_length":330 }, { "question":"R keras package Error: Python module tensorflow.contrib.keras.python.keras was not found I have keras installed with devtools from GitHub in R and TensorFlow installed in Python. However when I run an example Keras command like: ``` model <- keras_model_sequential() ``` I get the following: Error: Python module tensorflow.contrib.keras.python.keras was not found. ``` Detected Python configuration: python: C:\\Python35\\python.exe libpython: C:\/Python35\/python35.dll pythonhome: C:\\Python35 version: 3.5.0 (v3.5.0:374f501f4567, Sep 13 2015, 02:27:37) [MSC v.1900 64 bit (AMD64)] Architecture: 64bit numpy: C:\\Python35\\lib\\site-packages\\numpy numpy_version: 1.13.0 tensorflow: C:\\Python35\\lib\\site-packages\\tensorflow python versions found: C:\\Python35\\python.exe C:\\Python27\\\\python.exe C:\\Python35\\\\python.exe C:\\Python36\\\\python.exe ```", "response":"I had a similar problem. Restart rstudio, load keras and tensorflow libraries, and type use_condaenv(\"r-tensorflow\"). That fixed it for me.", "best_answers_score":0.6683, "library_name":"tensorflow", "question_url":"https:\/\/stackoverflow.com\/questions\/44611325\/r-keras-package-error-python-module-tensorflow-contrib-keras-python-keras-was-n", "best_answers_votes":12, "question_length":839, "response_length":139 }, { "question":"What is the difference between Dataset.from_tensors and Dataset.from_tensor_slices? I have a dataset represented as a NumPy matrix of shape (num_features, num_examples) and I wish to convert it to TensorFlow type tf.Dataset. 
I am struggling to understand the difference between these two methods: Dataset.from_tensors and Dataset.from_tensor_slices. Which is the right one, and why? The TensorFlow documentation (link) says that both methods accept a nested structure of tensors, although when using from_tensor_slices the tensors should have the same size in the 0-th dimension.", "response":"from_tensors combines the input and returns a dataset with a single element: ``` >>> t = tf.constant([[1, 2], [3, 4]]) >>> ds = tf.data.Dataset.from_tensors(t) >>> [x for x in ds] [<tf.Tensor: shape=(2, 2), dtype=int32, numpy=array([[1, 2], [3, 4]], dtype=int32)>] ``` from_tensor_slices creates a dataset with a separate element for each row of the input tensor: ``` >>> t = tf.constant([[1, 2], [3, 4]]) >>> ds = tf.data.Dataset.from_tensor_slices(t) >>> [x for x in ds] [<tf.Tensor: shape=(2,), dtype=int32, numpy=array([1, 2], dtype=int32)>, <tf.Tensor: shape=(2,), dtype=int32, numpy=array([3, 4], dtype=int32)>] ```", "best_answers_score":0.6676, "library_name":"tensorflow", "question_url":"https:\/\/stackoverflow.com\/questions\/49579684\/what-is-the-difference-between-dataset-from-tensors-and-dataset-from-tensor-slic", "best_answers_votes":113, "question_length":572, "response_length":399 }, { "question":"Where is gen_math_ops script in tensorflow? I read the source code of tensorflow on GitHub and found that gen_math_ops is imported: ``` from tensorflow.python.ops import gen_math_ops ``` However, I cannot find this script anywhere in the project, and it is not under the ops folder either (a search returns no gen_math_ops under ops).", "response":"It's automatically generated by tf_gen_op_wrapper_* rules here. You can also use ?? in your IPython notebook to find its location.", "best_answers_score":0.6667, "library_name":"tensorflow", "question_url":"https:\/\/stackoverflow.com\/questions\/36783977\/where-is-gen-math-ops-script-in-tensorflow", "best_answers_votes":17, "question_length":316, "response_length":125 }, { "question":"how to programmatically determine available GPU memory with tensorflow? For a vector quantization (k-means) program I would like to know the amount of available memory on the present GPU (if there is one).
This is needed to choose an optimal batch size in order to have as few batches as possible to run over the complete data set. I have written the following test program: ``` import tensorflow as tf import numpy as np from kmeanstf import KMeansTF print(\"GPU Available: \", tf.test.is_gpu_available()) nn=1000 dd=250000 print(\"{:,d} bytes\".format(nn*dd*4)) dic = {} for x in \"ABCD\": dic[x]=tf.random.normal((nn,dd)) print(x,dic[x][:1,:2]) print(\"done...\") ``` This is a typical output on my system with (ubuntu 18.04 LTS, GTX-1060 6GB). Please note the core dump. ``` python misc\/maxmem.py GPU Available: True 1,000,000,000 bytes A tf.Tensor([[-0.23787294 -2.0841186 ]], shape=(1, 2), dtype=float32) B tf.Tensor([[ 0.23762687 -1.1229591 ]], shape=(1, 2), dtype=float32) C tf.Tensor([[-1.2672468 0.92139906]], shape=(1, 2), dtype=float32) 2020-01-02 17:35:05.988473: W tensorflow\/core\/common_runtime\/bfc_allocator.cc:419] Allocator (GPU_0_bfc) ran out of memory trying to allocate 953.67MiB (rounded to 1000000000). Current allocation summary follows. 2020-01-02 17:35:05.988752: W tensorflow\/core\/common_runtime\/bfc_allocator.cc:424] **************************************************************************************************xx 2020-01-02 17:35:05.988835: W tensorflow\/core\/framework\/op_kernel.cc:1622] OP_REQUIRES failed at cwise_ops_common.cc:82 : Resource exhausted: OOM when allocating tensor with shape[1000,250000] and type float on \/job:localhost\/replica:0\/task:0\/device:GPU:0 by allocator GPU_0_bfc Segmentation fault (core dumped) ``` Occasionally I do get an error from python instead of a core dump (see below). This would actually be better since I could catch it and thus determine by trial and error the maximum available memory. 
But it alternates with core dumps: ``` python misc\/maxmem.py GPU Available: True 1,000,000,000 bytes A tf.Tensor([[-0.73510283 -0.94611156]], shape=(1, 2), dtype=float32) B tf.Tensor([[-0.8458411 0.552555 ]], shape=(1, 2), dtype=float32) C tf.Tensor([[0.30532074 0.266423 ]], shape=(1, 2), dtype=float32) 2020-01-02 17:35:26.401156: W tensorflow\/core\/common_runtime\/bfc_allocator.cc:419] Allocator (GPU_0_bfc) ran out of memory trying to allocate 953.67MiB (rounded to 1000000000). Current allocation summary follows. 2020-01-02 17:35:26.401486: W tensorflow\/core\/common_runtime\/bfc_allocator.cc:424] **************************************************************************************************xx 2020-01-02 17:35:26.401571: W tensorflow\/core\/framework\/op_kernel.cc:1622] OP_REQUIRES failed at cwise_ops_common.cc:82 : Resource exhausted: OOM when allocating tensor with shape[1000,250000] and type float on \/job:localhost\/replica:0\/task:0\/device:GPU:0 by allocator GPU_0_bfc Traceback (most recent call last): File \"misc\/maxmem.py\", line 11, in dic[x]=tf.random.normal((nn,dd)) File \"\/home\/fritzke\/miniconda2\/envs\/tf20b\/lib\/python3.7\/site-packages\/tensorflow_core\/python\/ops\/random_ops.py\", line 76, in random_normal value = math_ops.add(mul, mean_tensor, name=name) File \"\/home\/fritzke\/miniconda2\/envs\/tf20b\/lib\/python3.7\/site-packages\/tensorflow_core\/python\/ops\/gen_math_ops.py\", line 391, in add _six.raise_from(_core._status_to_exception(e.code, message), None) File \"\", line 3, in raise_from tensorflow.python.framework.errors_impl.ResourceExhaustedError: OOM when allocating tensor with shape[1000,250000] and type float on \/job:localhost\/replica:0\/task:0\/device:GPU:0 by allocator GPU_0_bfc [Op:Add] name: random_normal\/ ``` How could I reliably get this information for whatever system the software is running on?", "response":"I actually found an answer in this old question of mine . 
To bring some additional benefit to readers I tested the mentioned program ``` import nvidia_smi nvidia_smi.nvmlInit() handle = nvidia_smi.nvmlDeviceGetHandleByIndex(0) # card id 0 hardcoded here, there is also a call to get all available card ids, so we could iterate info = nvidia_smi.nvmlDeviceGetMemoryInfo(handle) print(\"Total memory:\", info.total) print(\"Free memory:\", info.free) print(\"Used memory:\", info.used) nvidia_smi.nvmlShutdown() ``` on colab with the following result: ``` Total memory: 17071734784 Free memory: 17071734784 Used memory: 0 ``` The actual GPU I had there was a Tesla P100 as can be seen from executing ``` !nvidia-smi ``` and observing the output ``` +-----------------------------------------------------------------------------+ | NVIDIA-SMI 440.44 Driver Version: 418.67 CUDA Version: 10.1 | |-------------------------------+----------------------+----------------------+ | GPU Name Persistence-M| Bus-Id Disp.A | Volatile Uncorr. ECC | | Fan Temp Perf Pwr:Usage\/Cap| Memory-Usage | GPU-Util Compute M. | |===============================+======================+======================| | 0 Tesla P100-PCIE... 
Off | 00000000:00:04.0 Off | 0 | | N\/A 32C P0 26W \/ 250W | 0MiB \/ 16280MiB | 0% Default | +-------------------------------+----------------------+----------------------+ +-----------------------------------------------------------------------------+ | Processes: GPU Memory | | GPU PID Type Process name Usage | |=============================================================================| | No running processes found | +-----------------------------------------------------------------------------+ ```", "best_answers_score":0.6653, "library_name":"tensorflow", "question_url":"https:\/\/stackoverflow.com\/questions\/59567226\/how-to-programmatically-determine-available-gpu-memory-with-tensorflow", "best_answers_votes":38, "question_length":3776, "response_length":1707 }, { "question":"TensorFlow, \"'module' object has no attribute 'placeholder'\" I've been trying to use tensorflow for two days now installing and reinstalling it over and over again in python2.7 and 3.4. No matter what I do, I get this error message when trying to use tensorflow.placeholder() It's very boilerplate code: ``` tf_in = tf.placeholder(\"float\", [None, A]) # Features ``` No matter what I do I always get the trace back: ``` Traceback (most recent call last): File \"\/home\/willim\/PycharmProjects\/tensorflow\/tensorflow.py\", line 2, in import tensorflow as tf File \"\/home\/willim\/PycharmProjects\/tensorflow\/tensorflow.py\", line 53, in tf_in = tf.placeholder(\"float\", [None, A]) # Features AttributeError: 'module' object has no attribute 'placeholder' ``` Anyone know how I can fix this?", "response":"If you have this error after an upgrade to TensorFlow 2.0, you can still use 1.X API by replacing: ``` import tensorflow as tf ``` by ``` import tensorflow.compat.v1 as tf tf.disable_v2_behavior() ```", "best_answers_score":0.665, "library_name":"tensorflow", "question_url":"https:\/\/stackoverflow.com\/questions\/37383812\/tensorflow-module-object-has-no-attribute-placeholder", 
"best_answers_votes":200, "question_length":779, "response_length":200 }, { "question":"tensorboard: command not found I installed TensorFlow on my MacBook Pro 10.12.5 from source code by steps described here. https:\/\/www.tensorflow.org\/install\/install_sources TensorFlow itself works well but I cannot run TensorBoard. It seems tensorboard is not installed properly. When I try running tensorboard --logdir=... it says -bash: tensorboard: command not found. And locate tensorboard returns empty. Do I need any additional step to install tensorboard?", "response":"You could call tensorboard as a python module like this: ``` python3 -m tensorboard.main --logdir=~\/my\/training\/dir ``` or add this to your .profile alias tensorboard='python3 -m tensorboard.main'", "best_answers_score":0.6648, "library_name":"tensorflow", "question_url":"https:\/\/stackoverflow.com\/questions\/45095820\/tensorboard-command-not-found", "best_answers_votes":59, "question_length":462, "response_length":196 }, { "question":"how to load and use a saved model on tensorflow? I have found 2 ways to save a model in Tensorflow: tf.train.Saver() and SavedModelBuilder. However, I can't find documentation on using the model after it being loaded the second way. Note: I want to use SavedModelBuilder way because I train the model in Python and will use it at serving time in another language (Go), and it seems that SavedModelBuilder is the only way in that case. This works great with tf.train.Saver() (first way): ``` model = tf.add(W * x, b, name=\"finalnode\") # save saver = tf.train.Saver() saver.save(sess, \"\/tmp\/model\") # load saver.restore(sess, \"\/tmp\/model\") # IMPORTANT PART: REALLY USING THE MODEL AFTER LOADING IT # I CAN'T FIND AN EQUIVALENT OF THIS PART IN THE OTHER WAY. 
model = graph.get_tensor_by_name(\"finalnode:0\") sess.run(model, {x: [5, 6, 7]}) ``` tf.saved_model.builder.SavedModelBuilder() is defined in the Readme, but after loading the model with tf.saved_model.loader.load(sess, [], export_dir), I can't find documentation on getting the nodes back (see \"finalnode\" in the code above)", "response":"What was missing was the signature: ``` # Saving builder = tf.saved_model.builder.SavedModelBuilder(export_dir) builder.add_meta_graph_and_variables(sess, [\"tag\"], signature_def_map= { \"model\": tf.saved_model.signature_def_utils.predict_signature_def( inputs= {\"x\": x}, outputs= {\"finalnode\": model}) }) builder.save() # loading with tf.Session(graph=tf.Graph()) as sess: tf.saved_model.loader.load(sess, [\"tag\"], export_dir) graph = tf.get_default_graph() x = graph.get_tensor_by_name(\"x:0\") model = graph.get_tensor_by_name(\"finalnode:0\") print(sess.run(model, {x: [5, 6, 7, 8]})) ```", "best_answers_score":0.6643, "library_name":"tensorflow", "question_url":"https:\/\/stackoverflow.com\/questions\/45705070\/how-to-load-and-use-a-saved-model-on-tensorflow", "best_answers_votes":25, "question_length":1083, "response_length":585 }, { "question":"Tensorflow get all variables in scope I have some variables created within a certain scope like this: ``` with tf.variable_scope(\"my_scope\"): createSomeVariables() ... ``` I then want to get the list of all the variables in \"my_scope\" so I can pass it to an optimizer. What is the right way to do this?", "response":"I think you want tf.get_collection(tf.GraphKeys.GLOBAL_VARIABLES, scope='my_scope'). This will get all variables in a scope. To pass to an optimizer you do not want all variables; you just want the trainable variables.
Those are also kept in a default collection, which is tf.GraphKeys.TRAINABLE_VARIABLES.", "best_answers_score":0.664, "library_name":"tensorflow", "question_url":"https:\/\/stackoverflow.com\/questions\/36533723\/tensorflow-get-all-variables-in-scope", "best_answers_votes":79, "question_length":302, "response_length":311 }, { "question":"Create keras callback to save model predictions and targets for each batch during training I am building a simple Sequential model in Keras (tensorflow backend). During training I want to inspect the individual training batches and model predictions. Therefore, I am trying to create a custom Callback that saves the model predictions and targets for each training batch. However, the model is not using the current batch for prediction, but the entire training data. How can I hand over only the current training batch to the Callback? And how can I access the batches and targets that the Callback saves in self.predhis and self.targets? My current version looks as follows: ```py callback_list = [prediction_history((self.x_train, self.y_train))] self.model.fit(self.x_train, self.y_train, batch_size=self.batch_size, epochs=self.n_epochs, validation_data=(self.x_val, self.y_val), callbacks=callback_list) class prediction_history(keras.callbacks.Callback): def __init__(self, train_data): self.train_data = train_data self.predhis = [] self.targets = [] def on_batch_end(self, epoch, logs={}): x_train, y_train = self.train_data self.targets.append(y_train) prediction = self.model.predict(x_train) self.predhis.append(prediction) tf.logging.info(\"Prediction shape: {}\".format(prediction.shape)) tf.logging.info(\"Targets shape: {}\".format(y_train.shape)) ```", "response":"NOTE: this answer is outdated and only works with TF1. Check @bers's answer for a solution tested on TF2. After model compilation, the placeholder tensor for y_true is in model.targets and y_pred is in model.outputs. 
To save the values of these placeholders at each batch, you can: First copy the values of these tensors into variables. Evaluate these variables in on_batch_end, and store the resulting arrays. Now step 1 is a bit involved because you'll have to add an tf.assign op to the training function model.train_function. Using current Keras API, this can be done by providing a fetches argument to K.function() when the training function is constructed. In model._make_train_function(), there's a line: ```py self.train_function = K.function(inputs, [self.total_loss] + self.metrics_tensors, updates=updates, name='train_function', **self._function_kwargs) ``` The fetches argument containing the tf.assign ops can be provided via model._function_kwargs (only works after Keras 2.1.0). As an example: ```py from keras.layers import Dense from keras.models import Sequential from keras.callbacks import Callback from keras import backend as K import tensorflow as tf import numpy as np class CollectOutputAndTarget(Callback): def __init__(self): super(CollectOutputAndTarget, self).__init__() self.targets = [] # collect y_true batches self.outputs = [] # collect y_pred batches # the shape of these 2 variables will change according to batch shape # to handle the \"last batch\", specify `validate_shape=False` self.var_y_true = tf.Variable(0., validate_shape=False) self.var_y_pred = tf.Variable(0., validate_shape=False) def on_batch_end(self, batch, logs=None): # evaluate the variables and save them into lists self.targets.append(K.eval(self.var_y_true)) self.outputs.append(K.eval(self.var_y_pred)) # build a simple model # have to compile first for model.targets and model.outputs to be prepared model = Sequential([Dense(5, input_shape=(10,))]) model.compile(loss='mse', optimizer='adam') # initialize the variables and the `tf.assign` ops cbk = CollectOutputAndTarget() fetches = [tf.assign(cbk.var_y_true, model.targets[0], validate_shape=False), tf.assign(cbk.var_y_pred, model.outputs[0], 
validate_shape=False)] model._function_kwargs = {'fetches': fetches} # use `model._function_kwargs` if using `Model` instead of `Sequential` # fit the model and check results X = np.random.rand(10, 10) Y = np.random.rand(10, 5) model.fit(X, Y, batch_size=8, callbacks=[cbk]) ``` Unless the number of samples can be divided by the batch size, the final batch will have a different size than other batches. So K.variable() and K.update() can't be used in this case. You'll have to use tf.Variable(..., validate_shape=False) and tf.assign(..., validate_shape=False) instead. To verify the correctness of the saved arrays, you can add one line in training.py to print out the shuffled index array: ```py if shuffle == 'batch': index_array = _batch_shuffle(index_array, batch_size) elif shuffle: np.random.shuffle(index_array) print('Index array:', repr(index_array)) # Add this line batches = _make_batches(num_train_samples, batch_size) ``` The shuffled index array should be printed out during fitting: ``` Epoch 1\/1 Index array: array([8, 9, 3, 5, 4, 7, 1, 0, 6, 2]) 10\/10 [==============================] - 0s 23ms\/step - loss: 0.5670 ``` And you can check if cbk.targets is the same as Y[index_array]: ```py index_array = np.array([8, 9, 3, 5, 4, 7, 1, 0, 6, 2]) print(Y[index_array]) [[ 0.75325592 0.64857277 0.1926653 0.7642865 0.38901153] [ 0.77567689 0.13573623 0.4902501 0.42897559 0.55825652] [ 0.33760938 0.68195038 0.12303088 0.83509441 0.20991668] [ 0.98367778 0.61325065 0.28973401 0.28734073 0.93399794] [ 0.26097574 0.88219054 0.87951941 0.64887846 0.41996446] [ 0.97794604 0.91307569 0.93816428 0.2125808 0.94381495] [ 0.74813435 0.08036688 0.38094272 0.83178364 0.16713736] [ 0.52609421 0.39218962 0.21022047 0.58569125 0.08012982] [ 0.61276627 0.20679494 0.24124858 0.01262245 0.0994412 ] [ 0.6026137 0.25620512 0.7398164 0.52558182 0.09955769]] print(cbk.targets) [array([[ 0.7532559 , 0.64857274, 0.19266529, 0.76428652, 0.38901153], [ 0.77567691, 0.13573623, 0.49025011, 
0.42897558, 0.55825651], [ 0.33760938, 0.68195039, 0.12303089, 0.83509439, 0.20991668], [ 0.9836778 , 0.61325067, 0.28973401, 0.28734073, 0.93399793], [ 0.26097575, 0.88219053, 0.8795194 , 0.64887846, 0.41996446], [ 0.97794604, 0.91307569, 0.93816429, 0.2125808 , 0.94381493], [ 0.74813437, 0.08036689, 0.38094273, 0.83178365, 0.16713737], [ 0.5260942 , 0.39218962, 0.21022047, 0.58569127, 0.08012982]], dtype=float32), array([[ 0.61276627, 0.20679495, 0.24124858, 0.01262245, 0.0994412 ], [ 0.60261369, 0.25620511, 0.73981643, 0.52558184, 0.09955769]], dtype=float32)] ``` As you can see, there are two batches in cbk.targets (one \"full batch\" of size 8 and the final batch of size 2), and the row order is the same as Y[index_array].", "best_answers_score":0.6639, "library_name":"tensorflow", "question_url":"https:\/\/stackoverflow.com\/questions\/47079111\/create-keras-callback-to-save-model-predictions-and-targets-for-each-batch-durin", "best_answers_votes":22, "question_length":1363, "response_length":4933 }, { "question":"The added layer must be an instance of class Layer. Found: I am new to machine learning. I was following this tutorial on fine-tuning VGG16 models. The model loaded fine with this code: ``` vgg_model = tensorflow.keras.applications.vgg16.VGG16() ``` but gets this ERROR: ``` TypeError: The added layer must be an instance of class Layer. Found: ``` When running this code: ``` model = Sequential() for layer in vgg_model.layers[:-1]: model.add(layer) ``` Dependencies: Keras 2.2.3 Tensorflow 1.12.0 tensorflow-gpu1.12.0 Python 3.6.0 I am following this blog but instead, I want to use VGG16. Any help to fix this would be appreciated. Thank you so much.", "response":"This won't work because a tensorflow.keras layer is getting added to a keras Model. ``` vgg_model = tensorflow.keras.applications.vgg16.VGG16() model = keras.Sequential() model.add(vgg_model.layers[0]) ``` Instantiate tensorflow.keras.Sequential(). This will work. 
``` model = tensorflow.keras.Sequential() model.add(vgg_model.layers[0]) ```", "best_answers_score":0.6637, "library_name":"tensorflow", "question_url":"https:\/\/stackoverflow.com\/questions\/55324762\/the-added-layer-must-be-an-instance-of-class-layer-found-tensorflow-python-ke", "best_answers_votes":39, "question_length":655, "response_length":341 }, { "question":"CuDNNLSTM: Failed to call ThenRnnForward I am facing an issue when trying to use CuDNNLSTM instead of keras.layers.LSTM. This is the error I am getting: Failed to call ThenRnnForward with model config: [rnn_mode, rnn_input_mode, rnn_direction_mode]: 2, 0, 0 , [num_layers, input_size, num_units, dir_count, seq_length, batch_size]: [1, 300, 512, 1, 5521, 128] [[{{node bidirectional_1\/CudnnRNN_1}} = CudnnRNN[T=DT_FLOAT, _class=[\"loc:@train...NNBackprop\"], direction=\"unidirectional\", dropout=0, input_mode=\"linear_input\", is_training=true, rnn_mode=\"lstm\", seed=87654321, seed2=0, _device=\"\/job:localhost\/replica:0\/task:0\/device:GPU:0\"](bidirectional_1\/transpose_1, bidirectional_1\/ExpandDims_1, bidirectional_1\/ExpandDims_1, bidirectional_1\/concat_1)]] [[{{node loss\/mul\/_75}} = _Recvclient_terminated=false, recv_device=\"\/job:localhost\/replica:0\/task:0\/device:CPU:0\", send_device=\"\/job:localhost\/replica:0\/task:0\/device:GPU:0\", send_device_incarnation=1, tensor_name=\"edge_1209_loss\/mul\", tensor_type=DT_FLOAT, _device=\"\/job:localhost\/replica:0\/task:0\/device:CPU:0\"]] Also, I got this error in one of the runs: InternalError: GPU sync failed And the kernel kept dying after each run. I only started getting this error when I tried to run it on a VM instance on google cloud with CuDNNLSTM. 
my code is: ``` MAX_LEN = max(len(article) for article in X_train_tokens) EMBEDDING_DIM=300 vocab_size = len(word_to_id) classes = 2 # Text input text_input = Input(shape=(MAX_LEN,)) embedding = Embedding(vocab_size, EMBEDDING_DIM, input_length=MAX_LEN)(text_input) x = Bidirectional(LSTM(512, return_sequences=False))(embedding) pred = Dense(2, activation='softmax')(x) model = Model(inputs=[text_input],outputs=pred) model.compile(loss='categorical_crossentropy', optimizer='RMSprop', metrics=['accuracy']) batch_size = 128 generator = text_training_generator(batch_size) steps = len(X_train)\/ batch_size model.fit_generator(generator, steps_per_epoch=steps, verbose=True, epochs=10) ``` The model summary: ``` _________________________________________________________________ Layer (type) Output Shape Param # ================================================================= input_1 (InputLayer) (None, 5521) 0 _________________________________________________________________ embedding_1 (Embedding) (None, 5521, 300) 8099100 _________________________________________________________________ bidirectional_1 (Bidirection (None, 1024) 3330048 _________________________________________________________________ dense_1 (Dense) (None, 2) 2050 ================================================================= Total params: 11,431,198 Trainable params: 11,431,198 Non-trainable params: 0 _________________________________________________________________ ```", "response":"Probably your are running out of memory on the gpu. Your network is very large with 11 million trainable parameters. Do you really need a 512*2 Output of your recurrent layer? Furthermore your embedding_dim is also quite large, while your vocabulary is quite small with 5k words. I guess your network is too complex for your problem. I would suggest to try an embedding size of 32 and a LSTM size of 32 as a start. If your accuracy is still bad, you can increase complexity. 
``` EMBEDDING_DIM = 32 Bidirectional(LSTM(32, return_sequences=False))(embedding) ```", "best_answers_score":0.662, "library_name":"tensorflow", "question_url":"https:\/\/stackoverflow.com\/questions\/53972814\/cudnnlstm-failed-to-call-thenrnnforward", "best_answers_votes":21, "question_length":2750, "response_length":560 }, { "question":"TensorFlow for binary classification I am trying to adapt this MNIST example to binary classification. But when changing my NLABELS from NLABELS=2 to NLABELS=1, the loss function always returns 0 (and accuracy 1). ``` from __future__ import absolute_import from __future__ import division from __future__ import print_function from tensorflow.examples.tutorials.mnist import input_data import tensorflow as tf # Import data mnist = input_data.read_data_sets('data', one_hot=True) NLABELS = 2 sess = tf.InteractiveSession() # Create the model x = tf.placeholder(tf.float32, [None, 784], name='x-input') W = tf.Variable(tf.zeros([784, NLABELS]), name='weights') b = tf.Variable(tf.zeros([NLABELS], name='bias')) y = tf.nn.softmax(tf.matmul(x, W) + b) # Add summary ops to collect data _ = tf.histogram_summary('weights', W) _ = tf.histogram_summary('biases', b) _ = tf.histogram_summary('y', y) # Define loss and optimizer y_ = tf.placeholder(tf.float32, [None, NLABELS], name='y-input') # More name scopes will clean up the graph representation with tf.name_scope('cross_entropy'): cross_entropy = -tf.reduce_mean(y_ * tf.log(y)) _ = tf.scalar_summary('cross entropy', cross_entropy) with tf.name_scope('train'): train_step = tf.train.GradientDescentOptimizer(10.).minimize(cross_entropy) with tf.name_scope('test'): correct_prediction = tf.equal(tf.argmax(y, 1), tf.argmax(y_, 1)) accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32)) _ = tf.scalar_summary('accuracy', accuracy) # Merge all the summaries and write them out to \/tmp\/mnist_logs merged = tf.merge_all_summaries() writer = tf.train.SummaryWriter('logs', sess.graph_def) 
tf.initialize_all_variables().run() # Train the model, and feed in test data and record summaries every 10 steps for i in range(1000): if i % 10 == 0: # Record summary data and the accuracy labels = mnist.test.labels[:, 0:NLABELS] feed = {x: mnist.test.images, y_: labels} result = sess.run([merged, accuracy, cross_entropy], feed_dict=feed) summary_str = result[0] acc = result[1] loss = result[2] writer.add_summary(summary_str, i) print('Accuracy at step %s: %s - loss: %f' % (i, acc, loss)) else: batch_xs, batch_ys = mnist.train.next_batch(100) batch_ys = batch_ys[:, 0:NLABELS] feed = {x: batch_xs, y_: batch_ys} sess.run(train_step, feed_dict=feed) ``` I have checked the dimensions of both batch_ys (fed into y) and _y and they are both 1xN matrices when NLABELS=1 so the problem seems to be prior to that. Maybe something to do with the matrix multiplication? I actually have got this same problem in a real project, so any help would be appreciated... Thanks!", "response":"The original MNIST example uses a one-hot encoding to represent the labels in the data: this means that if there are NLABELS = 10 classes (as in MNIST), the target output is [1 0 0 0 0 0 0 0 0 0] for class 0, [0 1 0 0 0 0 0 0 0 0] for class 1, etc. The tf.nn.softmax() operator converts the logits computed by tf.matmul(x, W) + b into a probability distribution across the different output classes, which is then compared to the fed-in value for y_. If NLABELS = 1, this acts as if there were only a single class, and the tf.nn.softmax() op would compute a probability of 1.0 for that class, leading to a cross-entropy of 0.0, since tf.log(1.0) is 0.0 for all of the examples. There are (at least) two approaches you could try for binary classification: The simplest would be to set NLABELS = 2 for the two possible classes, and encode your training data as [1 0] for label 0 and [0 1] for label 1. This answer has a suggestion for how to do that. 
You could keep the labels as integers 0 and 1 and use tf.nn.sparse_softmax_cross_entropy_with_logits(), as suggested in this answer.", "best_answers_score":0.6613, "library_name":"tensorflow", "question_url":"https:\/\/stackoverflow.com\/questions\/35277898\/tensorflow-for-binary-classification", "best_answers_votes":42, "question_length":2615, "response_length":1080 }, { "question":"Is there a way to stack two tensorflow datasets? I want to stack two dataset objects in Tensorflow (like the rbind function in R). I have created one dataset A from tfRecord files and one dataset B from numpy arrays. Both have the same variables. Do you know if there is a way to stack these two datasets to create a bigger one? Or to create an iterator that will randomly read data from these two sources? Thanks", "response":"The tf.data.Dataset.concatenate() method is the closest analog of tf.stack() when working with datasets. If you have two datasets with the same structure (i.e. same types for each component, but possibly different shapes): ```py dataset_1 = tf.data.Dataset.range(10, 20) dataset_2 = tf.data.Dataset.range(60, 70) ``` then you can concatenate them as follows: ```py combined_dataset = dataset_1.concatenate(dataset_2) ```", "best_answers_score":0.66, "library_name":"tensorflow", "question_url":"https:\/\/stackoverflow.com\/questions\/48771502\/is-there-a-way-to-stack-two-tensorflow-datasets", "best_answers_votes":38, "question_length":403, "response_length":420 }, { "question":"Custom loss function in Keras I'm working on an image class-incremental classifier approach using a CNN as a feature extractor and a fully-connected block for classifying. First, I did a fine-tuning of a VGG pre-trained network to do a new task. Once the net is trained for the new task, I store some examples for every class in order to avoid forgetting when new classes are available. When some classes are available, I have to compute every output of the exemplars, including the exemplars for the new classes.
Now, by adding zeros to the outputs for old classes and adding the label corresponding to each new class to the outputs for new classes, I have my new labels, i.e., if 3 new classes enter: Old class type output: [0.1, 0.05, 0.79, ..., 0 0 0] New class type output: [0.1, 0.09, 0.3, 0.4, ..., 1 0 0] **the last outputs correspond to the class. My question is, how can I change the loss function to a custom one to train for the new classes? The loss function that I want to implement is defined as: where the distillation loss corresponds to the outputs for old classes to avoid forgetting, and the classification loss corresponds to the new classes. It would be nice if you could provide me a sample of code for changing the loss function in keras. Thanks!!!!!", "response":"All you have to do is define a function for that, using keras backend functions for the calculations. The function must take the true values and the model's predicted values. Now, since I'm not sure what g, q, x and y are in your function, I'll just create a basic example here without caring about what it means or whether it's an actually useful function: ``` import keras.backend as K def customLoss(yTrue, yPred): return K.sum(K.log(yTrue) - K.log(yPred)) ``` All backend functions can be seen here. After that, compile your model using that function instead of a regular one: ``` model.compile(loss=customLoss, optimizer = .....) ```", "best_answers_score":0.6596, "library_name":"tensorflow", "question_url":"https:\/\/stackoverflow.com\/questions\/43818584\/custom-loss-function-in-keras", "best_answers_votes":84, "question_length":1249, "response_length":634 }, { "question":"TensorFlow: \"Attempting to use uninitialized value\" in variable initialization I am trying to implement multivariate linear regression in Python using TensorFlow, but have run into some logical and implementation issues.
My code throws the following error: ``` Attempting to use uninitialized value Variable Caused by op u'Variable\/read' ``` Ideally the weights output should be [2, 3] ``` def hypothesis_function(input_2d_matrix_trainingexamples, output_matrix_of_trainingexamples, initial_parameters_of_hypothesis_function, learning_rate, num_steps): # calculate num attributes and num examples number_of_attributes = len(input_2d_matrix_trainingexamples[0]) number_of_trainingexamples = len(input_2d_matrix_trainingexamples) #Graph inputs x = [] for i in range(0, number_of_attributes, 1): x.append(tf.placeholder(\"float\")) y_input = tf.placeholder(\"float\") # Create Model and Set Model weights parameters = [] for i in range(0, number_of_attributes, 1): parameters.append( tf.Variable(initial_parameters_of_hypothesis_function[i])) #Contruct linear model y = tf.Variable(parameters[0], \"float\") for i in range(1, number_of_attributes, 1): y = tf.add(y, tf.multiply(x[i], parameters[i])) # Minimize the mean squared errors loss = tf.reduce_mean(tf.square(y - y_input)) optimizer = tf.train.GradientDescentOptimizer(learning_rate) train = optimizer.minimize(loss) #Initialize the variables init = tf.initialize_all_variables() # launch the graph session = tf.Session() session.run(init) for step in range(1, num_steps + 1, 1): for i in range(0, number_of_trainingexamples, 1): feed = {} for j in range(0, number_of_attributes, 1): array = [input_2d_matrix_trainingexamples[i][j]] feed[j] = array array1 = [output_matrix_of_trainingexamples[i]] feed[number_of_attributes] = array1 session.run(train, feed_dict=feed) for i in range(0, number_of_attributes - 1, 1): print (session.run(parameters[i])) array = [[0.0, 1.0, 2.0], [0.0, 2.0, 3.0], [0.0, 4.0, 5.0]] hypothesis_function(array, [8.0, 13.0, 23.0], [1.0, 1.0, 1.0], 0.01, 200) ```", "response":"Run this: ``` init = tf.global_variables_initializer() sess.run(init) ``` Or (depending on the version of TF that you have): ``` init = 
tf.initialize_all_variables() sess.run(init) ```", "best_answers_score":0.6591, "library_name":"tensorflow", "question_url":"https:\/\/stackoverflow.com\/questions\/36007883\/tensorflow-attempting-to-use-uninitialized-value-in-variable-initialization", "best_answers_votes":71, "question_length":2037, "response_length":184 }, { "question":"Prevent TensorFlow from accessing the GPU? [duplicate] This question already has answers here: Can Keras with Tensorflow backend be forced to use CPU or GPU at will? (8 answers) Closed 8 years ago. Is there a way to run TensorFlow purely on the CPU. All of the memory on my machine is hogged by a separate process running TensorFlow. I have tried setting the per_process_memory_fraction to 0, unsuccessfully.", "response":"Have a look to this question or this answer. To summarise you can add this piece of code: ``` import os os.environ[\"CUDA_VISIBLE_DEVICES\"] = \"-1\" import tensorflow as tf ``` Playing with the CUDA_VISIBLE_DEVICES environment variable is one of if not the way to go whenever you have GPU-tensorflow installed and you don't want to use any GPUs. You to want either export CUDA_VISIBLE_DEVICES= or alternatively use a virtualenv with a non-GPU installation of TensorFlow.", "best_answers_score":0.659, "library_name":"tensorflow", "question_url":"https:\/\/stackoverflow.com\/questions\/44552585\/prevent-tensorflow-from-accessing-the-gpu", "best_answers_votes":117, "question_length":408, "response_length":467 }, { "question":"TensorBoard - Plot training and validation losses on the same graph? Is there a way to plot both the training losses and validation losses on the same graph? It's easy to have two separate scalar summaries for each of them individually, but this puts them on separate graphs. If both are displayed in the same graph it's much easier to see the gap between them and whether or not they have begin to diverge due to overfitting. Is there a built in way to do this? If not, a work around way? 
Thank you very much!", "response":"The work-around I have been doing is to use two SummaryWriters with different log dirs for the training set and cross-validation set respectively. And you will see something like this:", "best_answers_score":0.6584, "library_name":"tensorflow", "question_url":"https:\/\/stackoverflow.com\/questions\/37146614\/tensorboard-plot-training-and-validation-losses-on-the-same-graph", "best_answers_votes":35, "question_length":505, "response_length":178 }, { "question":"Multiple outputs in Keras I have a problem which deals with predicting two outputs when given a vector of predictors. Assume that a predictor vector looks like x1, y1, att1, att2, ..., attn, which says x1, y1 are coordinates and att's are the other attributes attached to the occurrence of x1, y1 coordinates. Based on this predictor set I want to predict x2, y2. This is a time series problem, which I am trying to solve using multiple regression. My question is how do I set up Keras, which can give me 2 outputs in the final layer.", "response":"``` from keras.models import Model from keras.layers import * #inp is a \"tensor\", that can be passed when calling other layers to produce an output inp = Input((10,)) #supposing you have ten numeric values as input #here, SomeLayer() is defining a layer, #and calling it with (inp) produces the output tensor x x = SomeLayer(blablabla)(inp) x = SomeOtherLayer(blablabla)(x) #here, I just replace x, because this intermediate output is not interesting to keep #here, I want to keep the two different outputs for defining the model #notice that both left and right are called with the same input x, creating a fork out1 = LeftSideLastLayer(balbalba)(x) out2 = RightSideLastLayer(banblabala)(x) #here, you define which path you will follow in the graph you've drawn with layers #notice the two outputs passed in a list, telling the model I want it to have two outputs. model = Model(inp, [out1,out2]) model.compile(optimizer = ...., loss = ....)
#loss can be one for both sides or a list with different loss functions for out1 and out2 model.fit(inputData,[outputYLeft, outputYRight], epochs=..., batch_size=...) ```", "best_answers_score":0.6581, "library_name":"tensorflow", "question_url":"https:\/\/stackoverflow.com\/questions\/44036971\/multiple-outputs-in-keras", "best_answers_votes":64, "question_length":534, "response_length":1113 }, { "question":"How to install TensorFlow-gpu with cuda8.0? I tried to install it according to the instructions on the official website, which results in an ImportError when I import tensorflow: ``` ImportError: libcublas.so.9.0: cannot open shared object file: No such file or directory ``` I run the code cat \/usr\/local\/cuda\/version.txt, which shows that my cuda version is 8.0.61. It seems that tensorflow is looking for cuda 9.0. I cannot upgrade the cuda as I am working on a shared gpu-server and I do not have root authority. Is there any way to make tensorflow work with cuda 8.0? Or any other way available? Thanks!!", "response":"You'll need to install version 1.4.1 for CUDA-8 as ``` pip install tensorflow-gpu==1.4.1 ``` The latest (version 1.5) is for CUDA-9", "best_answers_score":0.6578, "library_name":"tensorflow", "question_url":"https:\/\/stackoverflow.com\/questions\/48575900\/how-to-install-tensorflow-gpu-with-cuda8-0", "best_answers_votes":41, "question_length":609, "response_length":135 }, { "question":"Gradient Descent vs Adagrad vs Momentum in TensorFlow I'm studying TensorFlow and how to use it, even if I'm not an expert in neural networks and deep learning (just the basics). Following tutorials, I don't understand the real and practical differences between the three optimizers for loss. I look at the API and I understand the principles, but my questions are: 1. When is it preferable to use one instead of the others? 2.
Are there important differences to know?", "response":"Here is a brief explanation based on my understanding: momentum helps SGD to navigate along the relevant directions and softens the oscillations in the irrelevant ones. It simply adds a fraction of the direction of the previous step to the current step. This achieves amplification of speed in the correct direction and softens oscillation in wrong directions. This fraction is usually in the (0, 1) range. It also makes sense to use adaptive momentum. At the beginning of learning a big momentum will only hinder your progress, so it makes sense to use something like 0.01, and once all the high gradients have disappeared you can use a bigger momentum. There is one problem with momentum: when we are very close to the goal, our momentum in most of the cases is very high and it does not know that it should slow down. This can cause it to miss or oscillate around the minima. Nesterov accelerated gradient overcomes this problem by starting to slow down early. In momentum we first compute the gradient and then make a jump in that direction amplified by whatever momentum we had previously. NAG does the same thing but in another order: at first we make a big jump based on our stored information, and then we calculate the gradient and make a small correction. This seemingly irrelevant change gives significant practical speedups. AdaGrad or adaptive gradient allows the learning rate to adapt based on parameters. It performs larger updates for infrequent parameters and smaller updates for frequent ones. Because of this it is well suited for sparse data (NLP or image recognition). Another advantage is that it basically eliminates the need to tune the learning rate. Each parameter has its own learning rate and due to the peculiarities of the algorithm the learning rate is monotonically decreasing. This causes the biggest problem: at some point the learning rate is so small that the system stops learning.
AdaDelta resolves the problem of the monotonically decreasing learning rate in AdaGrad. In AdaGrad the learning rate was calculated approximately as one divided by the sum of square roots. At each stage you add another square root to the sum, which causes the denominator to constantly increase. In AdaDelta, instead of summing all past square roots, it uses a sliding window which allows the sum to decrease. RMSprop is very similar to AdaDelta. Adam or adaptive momentum is an algorithm similar to AdaDelta. But in addition to storing learning rates for each of the parameters it also stores momentum changes for each of them separately. A few visualizations: I would say that SGD, Momentum and Nesterov are inferior to the last three.", "best_answers_score":0.6567, "library_name":"tensorflow", "question_url":"https:\/\/stackoverflow.com\/questions\/36162180\/gradient-descent-vs-adagrad-vs-momentum-in-tensorflow", "best_answers_votes":206, "question_length":470, "response_length":2631 }, { "question":"What is a loss function in simple words? Can anyone please explain in simple words, and possibly with some examples, what a loss function is in the field of machine learning\/neural networks? This came out while I was following a Tensorflow tutorial: https:\/\/www.tensorflow.org\/get_started\/get_started", "response":"It describes how far off the result your network produced is from the expected result - it indicates the magnitude of error your model made on its prediction.
You can then take that error and 'backpropagate' it through your model, adjusting its weights and making it get closer to the truth the next time around.", "best_answers_score":0.656, "library_name":"tensorflow", "question_url":"https:\/\/stackoverflow.com\/questions\/42877989\/what-is-a-loss-function-in-simple-words", "best_answers_votes":30, "question_length":298, "response_length":312 }, { "question":"Module 'tensorflow' has no attribute 'contrib' I am trying to train my own custom object detector using the Tensorflow Object-Detection-API. I installed tensorflow using \"pip install tensorflow\" in my google compute engine. Then I followed all the instructions on this site: https:\/\/tensorflow-object-detection-api-tutorial.readthedocs.io\/en\/latest\/training.html When I try to use train.py I am getting this error message: Traceback (most recent call last): File \"train.py\", line 49, in from object_detection.builders import dataset_builder File \"\/usr\/local\/lib\/python3.6\/dist-packages\/object_detection-0.1->py3.6.egg\/object_detection\/builders\/dataset_builder.py\", line 27, in from object_detection.data_decoders import tf_example_decoder File \"\/usr\/local\/lib\/python3.6\/dist-packages\/object_detection-0.1-py3.6.egg\/object_detection\/data_decoders\/tf_example_decoder.py\", line 27, in slim_example_decoder = tf.contrib.slim.tfexample_decoder AttributeError: module 'tensorflow' has no attribute 'contrib' Also I am getting different results when I try to check the version of tensorflow. python3 -c 'import tensorflow as tf; print(tf.version)' : 2.0.0-dev20190422 and when I use pip3 show tensorflow: Name: tensorflow Version: 1.13.1 Summary: TensorFlow is an open source machine learning framework for everyone. Home-page: https:\/\/www.tensorflow.org\/ Author: Google Inc.
Author-email: [email protected] License: Apache 2.0 Location: \/usr\/local\/lib\/python3.6\/dist-packages Requires: gast, astor, absl-py, tensorflow-estimator, keras-preprocessing, grpcio, six, keras-applications, wheel, numpy, tensorboard, protobuf, termcolor Required-by: ``` sudo python3 train.py --logtostderr --train_dir=training\/ -- pipeline_config_path=training\/ssd_inception_v2_coco.config ``` What should I do to solve this problem? I couldn't find anything about this error message except this: tensorflow 'module' object has no attribute 'contrib'", "response":"tf.contrib has moved out of TF starting TF 2.0 alpha. Take a look at these tf 2.0 release notes https:\/\/github.com\/tensorflow\/tensorflow\/releases\/tag\/v2.0.0-alpha0 You can upgrade your TF 1.x code to TF 2.x using the tf_upgrade_v2 script https:\/\/www.tensorflow.org\/alpha\/guide\/upgrade", "best_answers_score":0.6559, "library_name":"tensorflow", "question_url":"https:\/\/stackoverflow.com\/questions\/55870127\/module-tensorflow-has-no-attribute-contrib", "best_answers_votes":43, "question_length":1917, "response_length":284 }, { "question":"Why do we use tf.name_scope() I've been reading the tutorials on TensorFlow where they have written ``` with tf.name_scope('read_inputs') as scope: # something ``` The example ``` a = tf.constant(5) ``` and ``` with tf.name_scope('s1') as scope: a = tf.constant(5) ``` seem to have the same effect. So, why do we use name_scope?", "response":"They are not the same thing. ``` import tensorflow as tf c1 = tf.constant(42) with tf.name_scope('s1'): c2 = tf.constant(42) print(c1.name) print(c2.name) ``` prints ``` Const:0 s1\/Const:0 ``` So as the name suggests, the scope functions create a scope for the names of the ops you create inside. 
This has an effect on how you refer to tensors, on reuse, on how the graph shows in TensorBoard and so on.", "best_answers_score":0.6556, "library_name":"tensorflow", "question_url":"https:\/\/stackoverflow.com\/questions\/42708989\/why-do-we-use-tf-name-scope", "best_answers_votes":27, "question_length":328, "response_length":403 }, { "question":"How to use Model.fit which supports generators (after fit_generator deprecation) I have got this deprecation warning while using Model.fit_generator in tensorflow: ``` WARNING:tensorflow: Model.fit_generator (from tensorflow.python.keras.engine.training) is deprecated and will be removed in a future version. Instructions for updating: Please use Model.fit, which supports generators. ``` How can I use Model.fit instead of Model.fit_generator?", "response":"Model.fit_generator is deprecated starting from tensorflow 2.1.0, which is currently in rc1. You can find the documentation for tf-2.1.0-rc1 here: https:\/\/www.tensorflow.org\/versions\/r2.1\/api_docs\/python\/tf\/keras\/Model#fit As you can see, the first argument of Model.fit can take a generator, so just pass it your generator.", "best_answers_score":0.6551, "library_name":"tensorflow", "question_url":"https:\/\/stackoverflow.com\/questions\/59380430\/how-to-use-model-fit-which-supports-generators-after-fit-generator-deprecation", "best_answers_votes":39, "question_length":445, "response_length":328 }, { "question":"ModuleNotFoundError: No module named 'tensorboard' ``` WARNING:root:Limited tf.compat.v2.summary API due to missing TensorBoard installation. WARNING:root:Limited tf.compat.v2.summary API due to missing TensorBoard installation. WARNING:root:Limited tf.compat.v2.summary API due to missing TensorBoard installation. WARNING:root:Limited tf.compat.v2.summary API due to missing TensorBoard installation. WARNING:root:Limited tf.compat.v2.summary API due to missing TensorBoard installation.
WARNING:root:Limited tf.compat.v2.summary API due to missing TensorBoard installation. WARNING:root:Limited tf.summary API due to missing TensorBoard installation. ``` Not sure what the issue is here; here are my packages, and the others. The Cuda toolkit is at 10.1.243. When I use ``` %load_ext tensorboard ``` It returns ModuleNotFoundError: No module named 'tensorboard'", "response":"Do pip install tensorboard; this will solve your problem.", "best_answers_score":0.6529, "library_name":"tensorflow", "question_url":"https:\/\/stackoverflow.com\/questions\/61320572\/modulenotfounderror-no-module-named-tensorboard", "best_answers_votes":39, "question_length":857, "response_length":55 }, { "question":"Tensorflow NaN bug? I'm using TensorFlow and I modified the tutorial example to take my RGB images. The algorithm works flawlessly out of the box on the new image set, until suddenly (still converging, it's around 92% accuracy usually), it crashes with the error that ReluGrad received non-finite values. Debugging shows that nothing unusual happens with the numbers until very suddenly, for an unknown reason, the error is thrown.
Adding ``` print \"max W vales: %g %g %g %g\"%(tf.reduce_max(tf.abs(W_conv1)).eval(),tf.reduce_max(tf.abs(W_conv2)).eval(),tf.reduce_max(tf.abs(W_fc1)).eval(),tf.reduce_max(tf.abs(W_fc2)).eval()) print \"max b vales: %g %g %g %g\"%(tf.reduce_max(tf.abs(b_conv1)).eval(),tf.reduce_max(tf.abs(b_conv2)).eval(),tf.reduce_max(tf.abs(b_fc1)).eval(),tf.reduce_max(tf.abs(b_fc2)).eval()) ``` as debug code to each loop, yields the following output: ``` Step 8600 max W vales: 0.759422 0.295087 0.344725 0.583884 max b vales: 0.110509 0.111748 0.115327 0.124324 Step 8601 max W vales: 0.75947 0.295084 0.344723 0.583893 max b vales: 0.110516 0.111753 0.115322 0.124332 Step 8602 max W vales: 0.759521 0.295101 0.34472 0.5839 max b vales: 0.110521 0.111747 0.115312 0.124365 Step 8603 max W vales: -3.40282e+38 -3.40282e+38 -3.40282e+38 -3.40282e+38 max b vales: -3.40282e+38 -3.40282e+38 -3.40282e+38 -3.40282e+38 ``` Since none of my values is very high, the only way a NaN can happen is by a badly handled 0\/0, but since this tutorial code doesn't do any divisions or similar operations, I see no other explanation than that this comes from the internal TF code. I'm clueless on what to do with this. Any suggestions? The algorithm is converging nicely, its accuracy on my validation set was steadily climbing and just reached 92.5% at iteration 8600.", "response":"Actually, it turned out to be something stupid. I'm posting this in case anyone else would run into a similar error. ``` cross_entropy = -tf.reduce_sum(y_*tf.log(y_conv)) ``` is actually a horrible way of computing the cross-entropy. In some samples, certain classes could be excluded with certainty after a while, resulting in y_conv=0 for that sample. That's normally not a problem since you're not interested in those, but in the way cross_entropy is written there, it yields 0*log(0) for that particular sample\/class. Hence the NaN. 
Replacing it with ``` cross_entropy = -tf.reduce_sum(y_*tf.log(tf.clip_by_value(y_conv,1e-10,1.0))) ``` solved all my problems.", "best_answers_score":0.6526, "library_name":"tensorflow", "question_url":"https:\/\/stackoverflow.com\/questions\/33712178\/tensorflow-nan-bug", "best_answers_votes":149, "question_length":1770, "response_length":664 }, { "question":"tensorflow:AttributeError: 'module' object has no attribute 'mul' I have used tensorflow for ONE day, but some troubles have come up: when I import tensorflow, I get AttributeError: 'module' object has no attribute 'XXXXXX' Environment I use ubuntu14.04, python2.7, CUDA toolkit 8.0 and CuDNN v5. And the versions of my six and protobuf are: Name: six Version: 1.10.0 Location: \/usr\/local\/lib\/python2.7\/dist-packages Requires: Name: protobuf Version: 3.2.0 Location: \/usr\/local\/lib\/python2.7\/dist-packages Requires: six, setuptools Here is my test code: ``` import tensorflow as tf a = tf.placeholder(tf.int16) b = tf.placeholder(tf.int16) add = tf.add(a, b) mul = tf.mul(a, b) with tf.Session() as sess: # Run every operation with variable input print \"Addition with variables: %i\" % sess.run(add, feed_dict={a: 2, b: 3}) print \"Multiplication with variables: %i\" % sess.run(mul, feed_dict={a: 2, b: 3}) ``` I get this output: Is there any problem with the tensorflow installation? Or any other problems?", "response":"According to the tensorflow 1.0.0 release notes, tf.mul, tf.sub and tf.neg are deprecated in favor of tf.multiply, tf.subtract and tf.negative. You'll need to replace tf.mul with tf.multiply.", "best_answers_score":0.6521, "library_name":"tensorflow", "question_url":"https:\/\/stackoverflow.com\/questions\/42217059\/tensorflowattributeerror-module-object-has-no-attribute-mul", "best_answers_votes":194, "question_length":1011, "response_length":191 }, { "question":"What are possible values for data_augmentation_options in the TensorFlow Object Detection pipeline configuration?
I have successfully trained an object detection model with TensorFlow with the sample configurations given here: https:\/\/github.com\/tensorflow\/models\/tree\/master\/object_detection\/samples\/configs Now I want to fine tune my configuration to get better results. One of the promising options I see in there is \"data_augmentation_options\" under \"train_config\". Currently, it looks like this: ``` train_config: { batch_size: 1 ... data_augmentation_options { random_horizontal_flip { } } } ``` Are there other options to do random scaling, cropping or tweaking of brightness?", "response":"The list of options is provided in preprocessor.proto: ``` NormalizeImage normalize_image = 1; RandomHorizontalFlip random_horizontal_flip = 2; RandomPixelValueScale random_pixel_value_scale = 3; RandomImageScale random_image_scale = 4; RandomRGBtoGray random_rgb_to_gray = 5; RandomAdjustBrightness random_adjust_brightness = 6; RandomAdjustContrast random_adjust_contrast = 7; RandomAdjustHue random_adjust_hue = 8; RandomAdjustSaturation random_adjust_saturation = 9; RandomDistortColor random_distort_color = 10; RandomJitterBoxes random_jitter_boxes = 11; RandomCropImage random_crop_image = 12; RandomPadImage random_pad_image = 13; RandomCropPadImage random_crop_pad_image = 14; RandomCropToAspectRatio random_crop_to_aspect_ratio = 15; RandomBlackPatches random_black_patches = 16; RandomResizeMethod random_resize_method = 17; ScaleBoxesToPixelCoordinates scale_boxes_to_pixel_coordinates = 18; ResizeImage resize_image = 19; SubtractChannelMean subtract_channel_mean = 20; SSDRandomCrop ssd_random_crop = 21; SSDRandomCropPad ssd_random_crop_pad = 22; SSDRandomCropFixedAspectRatio ssd_random_crop_fixed_aspect_ratio = 23; ``` You can see the details about each option in preprocessor.py. Arguments can be provided as key-value pairs. 
``` data_augmentation_options { ssd_random_crop { } } data_augmentation_options { random_pixel_value_scale { minval: 0.6 } } ```", "best_answers_score":0.6515, "library_name":"tensorflow", "question_url":"https:\/\/stackoverflow.com\/questions\/44906317\/what-are-possible-values-for-data-augmentation-options-in-the-tensorflow-object", "best_answers_votes":96, "question_length":683, "response_length":1373 }, { "question":"TensorFlow create dataset from numpy array TensorFlow has built in a nice way to store data. This is for example used to store the MNIST data in the example: ``` >>> mnist .DataSets object at 0x10f930630> ``` Suppose you have input and output numpy arrays: ``` >>> x = np.random.normal(0,1, (100, 10)) >>> y = np.random.randint(0, 2, 100) ``` How can I transform them into a tf dataset? I want to use functions like next_batch", "response":"The Dataset object is only part of the MNIST tutorial, not the main TensorFlow library. You can see where it is defined here: GitHub Link The constructor accepts an images and labels argument so presumably you can pass your own values there.", "best_answers_score":0.6514, "library_name":"tensorflow", "question_url":"https:\/\/stackoverflow.com\/questions\/34361030\/tensorflow-create-dataset-from-numpy-array", "best_answers_votes":9, "question_length":424, "response_length":241 }, { "question":"Getting around tf.argmax which is not differentiable I've written a custom loss function for my neural network but it can't compute any gradients. I think it is because I need the index of the highest value and am therefore using argmax to get this index. As argmax is not differentiable, I need to get around this, but I don't know how it is possible. Can anyone help?", "response":"As aidan suggested, it's just a softargmax stretched to the limits by beta.
We can use tf.nn.softmax to get around the numerical issues: ``` def softargmax(x, beta=1e10): x = tf.convert_to_tensor(x) x_range = tf.range(x.shape.as_list()[-1], dtype=x.dtype) return tf.reduce_sum(tf.nn.softmax(x*beta) * x_range, axis=-1) ```", "best_answers_score":0.651, "library_name":"tensorflow", "question_url":"https:\/\/stackoverflow.com\/questions\/46926809\/getting-around-tf-argmax-which-is-not-differentiable", "best_answers_votes":20, "question_length":364, "response_length":322 }, { "question":"Tensor is not an element of this graph I'm getting this error 'ValueError: Tensor Tensor(\"Placeholder:0\", shape=(1, 1), dtype=int32) is not an element of this graph.' The code is running perfectly fine without with tf.Graph().as_default():. However I need to call M.sample(...) multiple times, and each time the memory won't be freed after session.close(). Probably there is a memory leak, but I am not sure where it is. I want to restore a pre-trained neural network, set it as the default graph, and test it multiple times (like 10000) over the default graph without making it larger each time.
The code is: ``` def SessionOpener(save): grph = tf.get_default_graph() sess = tf.Session(graph=grph) ckpt = tf.train.get_checkpoint_state(save) saver = tf.train.import_meta_graph('.\/predictor\/save\/model.ckpt.meta') if ckpt and ckpt.model_checkpoint_path: saver.restore(sess, ckpt.model_checkpoint_path) tf.global_variables_initializer().run(session=sess) return sess def LoadPredictor(save): with open(os.path.join(save, 'config.pkl'), 'rb') as f: saved_args = cPickle.load(f) with open(os.path.join(save, 'words_vocab.pkl'), 'rb') as f: words, vocab = cPickle.load(f) model = Model(saved_args, True) return model, words, vocab if __name__ == '__main__': Save = '.\/save' M, W, V = LoadPredictor(Save) Sess = SessionOpener(Save) word = M.sample(Sess, W, V, 1, str(123), 2, 1, 4) Sess.close() ``` And the model is: ``` class Model(): def __init__(self, args, infer=False): with tf.Graph().as_default(): self.args = args if infer: args.batch_size = 1 args.seq_length = 1 if args.model == 'rnn': cell_fn = rnn.BasicRNNCell elif args.model == 'gru': cell_fn = rnn.GRUCell elif args.model == 'lstm': cell_fn = rnn.BasicLSTMCell else: raise Exception(\"model type not supported: {}\".format(args.model)) cells = [] for _ in range(args.num_layers): cell = cell_fn(args.rnn_size) cells.append(cell) self.cell = cell = rnn.MultiRNNCell(cells) self.input_data = tf.placeholder(tf.int32, [args.batch_size, args.seq_length]) self.targets = tf.placeholder(tf.int32, [args.batch_size, args.seq_length]) self.initial_state = cell.zero_state(args.batch_size, tf.float32) self.batch_pointer = tf.Variable(0, name=\"batch_pointer\", trainable=False, dtype=tf.int32) self.inc_batch_pointer_op = tf.assign(self.batch_pointer, self.batch_pointer + 1) self.epoch_pointer = tf.Variable(0, name=\"epoch_pointer\", trainable=False) self.batch_time = tf.Variable(0.0, name=\"batch_time\", trainable=False) tf.summary.scalar(\"time_batch\", self.batch_time) def variable_summaries(var): \"\"\"Attach a lot of summaries to 
a Tensor (for TensorBoard visualization).\"\"\" with tf.name_scope('summaries'): mean = tf.reduce_mean(var) tf.summary.scalar('mean', mean) tf.summary.scalar('max', tf.reduce_max(var)) tf.summary.scalar('min', tf.reduce_min(var)) with tf.variable_scope('rnnlm'): softmax_w = tf.get_variable(\"softmax_w\", [args.rnn_size, args.vocab_size]) variable_summaries(softmax_w) softmax_b = tf.get_variable(\"softmax_b\", [args.vocab_size]) variable_summaries(softmax_b) with tf.device(\"\/cpu:0\"): embedding = tf.get_variable(\"embedding\", [args.vocab_size, args.rnn_size]) inputs = tf.split(tf.nn.embedding_lookup(embedding, self.input_data), args.seq_length, 1) inputs = [tf.squeeze(input_, [1]) for input_ in inputs] def loop(prev, _): prev = tf.matmul(prev, softmax_w) + softmax_b prev_symbol = tf.stop_gradient(tf.argmax(prev, 1)) return tf.nn.embedding_lookup(embedding, prev_symbol) outputs, last_state = legacy_seq2seq.rnn_decoder(inputs, self.initial_state, cell, loop_function=loop if infer else None, scope='rnnlm') output = tf.reshape(tf.concat(outputs, 1), [-1, args.rnn_size]) self.logits = tf.matmul(output, softmax_w) + softmax_b self.probs = tf.nn.softmax(self.logits) loss = legacy_seq2seq.sequence_loss_by_example([self.logits], [tf.reshape(self.targets, [-1])], [tf.ones([args.batch_size * args.seq_length])], args.vocab_size) self.cost = tf.reduce_sum(loss) \/ args.batch_size \/ args.seq_length tf.summary.scalar(\"cost\", self.cost) self.final_state = last_state self.lr = tf.Variable(0.0, trainable=False) tvars = tf.trainable_variables() grads, _ = tf.clip_by_global_norm(tf.gradients(self.cost, tvars), args.grad_clip) optimizer = tf.train.AdamOptimizer(self.lr) self.train_op = optimizer.apply_gradients(zip(grads, tvars)) def sample(self, sess, words, vocab, num=200, prime='first all', sampling_type=1, pick=0, width=4): def weighted_pick(weights): t = np.cumsum(weights) s = np.sum(weights) return(int(np.searchsorted(t, np.random.rand(1)*s))) ret = '' if pick == 1: state = 
sess.run(self.cell.zero_state(1, tf.float32)) if not len(prime) or prime == ' ': prime = random.choice(list(vocab.keys())) for word in prime.split()[:-1]: x = np.zeros((1, 1)) x[0, 0] = vocab.get(word,0) feed = {self.input_data: x, self.initial_state:state} [state] = sess.run([self.final_state], feed) ret = prime word = prime.split()[-1] for n in range(num): x = np.zeros((1, 1)) x[0, 0] = vocab.get(word, 0) feed = {self.input_data: x, self.initial_state:state} [probs, state] = sess.run([self.probs, self.final_state], feed) p = probs[0] if sampling_type == 0: sample = np.argmax(p) elif sampling_type == 2: if word == '\\n': sample = weighted_pick(p) else: sample = np.argmax(p) else: # sampling_type == 1 default: sample = weighted_pick(p) ret = words[sample] return ret ``` and the output is: ``` Traceback (most recent call last): File \"\/rcg\/software\/Linux\/Ubuntu\/16.04\/amd64\/TOOLS\/TENSORFLOW\/1.2.1-GPU-PY352\/lib\/python3.5\/site-packages\/tensorflow\/python\/client\/session.py\", line 942, in _run allow_operation=False) File \"\/rcg\/software\/Linux\/Ubuntu\/16.04\/amd64\/TOOLS\/TENSORFLOW\/1.2.1-GPU-PY352\/lib\/python3.5\/site-packages\/tensorflow\/python\/framework\/ops.py\", line 2584, in as_graph_element return self._as_graph_element_locked(obj, allow_tensor, allow_operation) File \"\/rcg\/software\/Linux\/Ubuntu\/16.04\/amd64\/TOOLS\/TENSORFLOW\/1.2.1-GPU-PY352\/lib\/python3.5\/site-packages\/tensorflow\/python\/framework\/ops.py\", line 2663, in _as_graph_element_locked raise ValueError(\"Tensor %s is not an element of this graph.\" % obj) ValueError: Tensor Tensor(\"Placeholder:0\", shape=(1, 1), dtype=int32) is not an element of this graph. ```", "response":"When you create a Model, the session hasn't been restored yet. All placeholders, variables and ops that are defined in Model.__init__ are placed in a new graph, which makes itself a default graph inside with block. This is the key line: ```py with tf.Graph().as_default(): ... 
``` This means that this instance of tf.Graph() is the same as the tf.get_default_graph() instance inside the with block, but not before or after it. From this moment on, there exist two different graphs. When you later create a session and restore a graph into it, you can't access the previous instance of tf.Graph() in that session. Here's a short example: ```py with tf.Graph().as_default() as graph: var = tf.get_variable(\"var\", shape=[3], initializer=tf.zeros_initializer) # This works with tf.Session(graph=graph) as sess: sess.run(tf.global_variables_initializer()) print(sess.run(var)) # ok because `sess.graph == graph` # This fails saver = tf.train.import_meta_graph('\/tmp\/model.ckpt.meta') with tf.Session() as sess: saver.restore(sess, \"\/tmp\/model.ckpt\") print(sess.run(var)) # var is from `graph`, not `sess.graph`! ``` The best way to deal with this is to give names to all nodes, e.g. 'input', 'target', etc., save the model and then look up the nodes in the restored graph by name, something like this: ```py saver = tf.train.import_meta_graph('\/tmp\/model.ckpt.meta') with tf.Session() as sess: saver.restore(sess, \"\/tmp\/model.ckpt\") input_data = sess.graph.get_tensor_by_name('input') target = sess.graph.get_tensor_by_name('target') ``` This method guarantees that all nodes will come from the graph in the session.\", \"best_answers_score\":0.65, \"library_name\":\"tensorflow\", \"question_url\":\"https:\/\/stackoverflow.com\/questions\/47115946\/tensor-is-not-an-element-of-this-graph\", \"best_answers_votes\":18, \"question_length\":6181, \"response_length\":1588 }, { \"question\":\"How to tell if tensorflow is using gpu acceleration from inside python shell? I have installed tensorflow in my ubuntu 16.04 using the second answer here with ubuntu's builtin apt cuda installation. Now my question is how can I test if tensorflow is really using gpu? I have a gtx 960m gpu.
When I import tensorflow this is the output ``` I tensorflow\/stream_executor\/dso_loader.cc:105] successfully opened CUDA library libcublas.so locally I tensorflow\/stream_executor\/dso_loader.cc:105] successfully opened CUDA library libcudnn.so locally I tensorflow\/stream_executor\/dso_loader.cc:105] successfully opened CUDA library libcufft.so locally I tensorflow\/stream_executor\/dso_loader.cc:105] successfully opened CUDA library libcuda.so.1 locally I tensorflow\/stream_executor\/dso_loader.cc:105] successfully opened CUDA library libcurand.so locally ``` Is this output enough to check if tensorflow is using gpu ?", "response":"No, I don't think \"open CUDA library\" is enough to tell, because different nodes of the graph may be on different devices. When using tensorflow2: ``` print(\"Num GPUs Available: \", len(tf.config.list_physical_devices('GPU'))) ``` For tensorflow1, to find out which device is used, you can enable log device placement like this: ``` sess = tf.Session(config=tf.ConfigProto(log_device_placement=True)) ``` Check your console for this type of output.", "best_answers_score":0.6486, "library_name":"tensorflow", "question_url":"https:\/\/stackoverflow.com\/questions\/38009682\/how-to-tell-if-tensorflow-is-using-gpu-acceleration-from-inside-python-shell", "best_answers_votes":458, "question_length":910, "response_length":447 }, { "question":"TensorFlow 'module' object has no attribute 'global_variables_initializer' I'm new to Tensorflow I'm running a Deep learning Assignment from Udacity on iPython notebook. link And it has an error. ``` AttributeError Traceback (most recent call last) `` in ``() 2 3 with tf.Session(graph=graph) as session: ----> 4 tf.global_variables_initializer().run() AttributeError: 'module' object has no attribute 'global_variables_initializer' ``` Please help! How can I fix this? 
Thank you.", "response":"In older versions, it was called tf.initialize_all_variables.", "best_answers_score":0.6483, "library_name":"tensorflow", "question_url":"https:\/\/stackoverflow.com\/questions\/40511562\/tensorflow-module-object-has-no-attribute-global-variables-initializer", "best_answers_votes":42, "question_length":480, "response_length":61 }, { "question":"Installation of TensorFlow on windows 7 - 'pip3' is not recognized as an internal or external command, When following the Installing TensorFlow for Windows guide https:\/\/www.tensorflow.org\/install\/install_windows, after executing ``` C:\\> pip3 install --upgrade tensorflow ``` I get the following error: ``` 'pip3' is not recognized as an internal or external command, ``` It looks like pip3 isn't recognized at all (although PATH to python is set)", "response":"Run the following ``` python -m pip install --upgrade tensorflow ``` Assuming python is working, TensorFlow should get installed (at least the \"Validate the installation\" step is green).", "best_answers_score":0.6476, "library_name":"tensorflow", "question_url":"https:\/\/stackoverflow.com\/questions\/42559222\/installation-of-tensorflow-on-windows-7-pip3-is-not-recognized-as-an-interna", "best_answers_votes":30, "question_length":448, "response_length":186 }, { "question":"Tensorflow Tensorboard default port Is there a way to change the default port (6006) on TensorBoard so we could open multiple TensorBoards? Maybe an option like --port=\"8008\"?", "response":"In fact there is an option to change the default port ... 
``` tensorboard --logdir=\/tmp --port=8008 ```", "best_answers_score":0.6461, "library_name":"tensorflow", "question_url":"https:\/\/stackoverflow.com\/questions\/35551326\/tensorflow-tensorboard-default-port", "best_answers_votes":127, "question_length":175, "response_length":103 }, { "question":"SavedModel file does not exist when using Tensorflow hub When trying to use the hub.load function from tensorflow_hub, I get an OSError: SavedModel file does not exist at: error. The weird thing is that it worked a few days ago, so I don't quite understand why I'm getting this error now. Code to reproduce: ``` import tensorflow as tf import tensorflow_hub as hub URL = 'https:\/\/tfhub.dev\/google\/universal-sentence-encoder\/4' embed = hub.load(URL) ``` Specific error received: ``` OSError Traceback (most recent call last) in 1 URL = 'https:\/\/tfhub.dev\/google\/universal-sentence-encoder\/4' ----> 2 embed = hub.load(URL) ~\/opt\/anaconda3\/lib\/python3.7\/site-packages\/tensorflow_hub\/module_v2.py in load(handle, tags) 100 if tags is None and is_hub_module_v1: 101 tags = [] --> 102 obj = tf_v1.saved_model.load_v2(module_path, tags=tags) 103 obj._is_hub_module_v1 = is_hub_module_v1 # pylint: disable=protected-access 104 return obj ~\/opt\/anaconda3\/lib\/python3.7\/site-packages\/tensorflow\/python\/saved_model\/load.py in load(export_dir, tags) 576 ValueError: If `tags` don't match a MetaGraph in the SavedModel. 577 \"\"\" --> 578 return load_internal(export_dir, tags) 579 580 ~\/opt\/anaconda3\/lib\/python3.7\/site-packages\/tensorflow\/python\/saved_model\/load.py in load_internal(export_dir, tags, loader_cls) 586 tags = nest.flatten(tags) 587 saved_model_proto, debug_info = ( --> 588 loader_impl.parse_saved_model_with_debug_info(export_dir)) 589 590 if (len(saved_model_proto.meta_graphs) == 1 and ~\/opt\/anaconda3\/lib\/python3.7\/site-packages\/tensorflow\/python\/saved_model\/loader_impl.py in parse_saved_model_with_debug_info(export_dir) 54 parsed. 
Missing graph debug info file is fine. 55 \"\"\" ---> 56 saved_model = _parse_saved_model(export_dir) 57 58 debug_info_path = os.path.join( ~\/opt\/anaconda3\/lib\/python3.7\/site-packages\/tensorflow\/python\/saved_model\/loader_impl.py in parse_saved_model(export_dir) 111 (export_dir, 112 constants.SAVED_MODEL_FILENAME_PBTXT, --> 113 constants.SAVED_MODEL_FILENAME_PB)) 114 115 OSError: SavedModel file does not exist at: \/var\/folders\/77\/rvfl368x44s51r8dc3b6l2rh0000gn\/T\/tfhub_modules\/063d866c06683311b44b4992fd46003be952409c\/{saved_model.pbtxt|saved_model.pb} ```", "response":"So, just deleting that folder and running the hub.load() function again solves the issue", "best_answers_score":0.6446, "library_name":"tensorflow", "question_url":"https:\/\/stackoverflow.com\/questions\/63078695\/savedmodel-file-does-not-exist-when-using-tensorflow-hub", "best_answers_votes":54, "question_length":2197, "response_length":88 }, { "question":"Early stopping with tf.estimator, how? I'm using tf.estimator in TensorFlow 1.4 and tf.estimator.train_and_evaluate is great but I need early stopping. What's the prefered way of adding that? I assume there is some tf.train.SessionRunHook somewhere for this. I saw that there was an old contrib package with a ValidationMonitor that seemed to have early stopping, but it doesn't seem to be around anymore in 1.4. Or will the preferred way in the future be to rely on tf.keras (with which early stopping is really easy) instead of tf.estimator\/tf.layers\/tf.data, perhaps?", "response":"Good news! tf.estimator now has early stopping support on master and it looks like it will be in 1.10. ``` estimator = tf.estimator.Estimator(model_fn, model_dir) os.makedirs(estimator.eval_dir()) # TODO This should not be expected IMO. 
early_stopping = tf.contrib.estimator.stop_if_no_decrease_hook( estimator, metric_name='loss', max_steps_without_decrease=1000, min_steps=100) tf.estimator.train_and_evaluate( estimator, train_spec=tf.estimator.TrainSpec(train_input_fn, hooks=[early_stopping]), eval_spec=tf.estimator.EvalSpec(eval_input_fn)) ```", "best_answers_score":0.6443, "library_name":"tensorflow", "question_url":"https:\/\/stackoverflow.com\/questions\/47137061\/early-stopping-with-tf-estimator-how", "best_answers_votes":33, "question_length":570, "response_length":550 }, { "question":"Difference between tf.data.Dataset.map() and tf.data.Dataset.apply() With the recent upgrade to version 1.4, Tensorflow included tf.data in the library core. One \"major new feature\" described in the version 1.4 release notes is tf.data.Dataset.apply(), which is a \"method for applying custom transformation functions\". How is this different from the already existing tf.data.Dataset.map()?", "response":"The difference is that map will execute one function on every element of the Dataset separately, whereas apply will execute one function on the whole Dataset at once (such as group_by_window given as example in the documentation). The argument of apply is a function that takes a Dataset and returns a Dataset when the argument of map is a function that takes one element and returns one transformed element.", "best_answers_score":0.643, "library_name":"tensorflow", "question_url":"https:\/\/stackoverflow.com\/questions\/47091726\/difference-between-tf-data-dataset-map-and-tf-data-dataset-apply", "best_answers_votes":46, "question_length":389, "response_length":408 }, { "question":"How to get accuracy of model using keras? 
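The map/apply distinction in the answer above can be mimicked with plain Python lists — a sketch only; these stand-in functions are not the real tf.data API:

```python
def dataset_map(dataset, fn):
    """Like Dataset.map: fn sees one element at a time."""
    return [fn(x) for x in dataset]

def dataset_apply(dataset, fn):
    """Like Dataset.apply: fn sees the whole dataset and returns a new one."""
    return fn(dataset)

data = [1, 2, 3, 4, 5, 6]
doubled = dataset_map(data, lambda x: x * 2)
# A whole-dataset transformation, e.g. grouping into fixed-size windows,
# cannot be expressed element-by-element — it needs the apply form.
windows = dataset_apply(data, lambda d: [d[i:i + 2] for i in range(0, len(d), 2)])
print(doubled)   # [2, 4, 6, 8, 10, 12]
print(windows)   # [[1, 2], [3, 4], [5, 6]]
```

This is why transformations such as group_by_window are passed to apply rather than map: they need to see more than one element at once.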
After fitting the model (which was running for a couple of hours), I wanted to get the accuracy with the following code: ``` train_loss=hist.history['loss'] val_loss=hist.history['val_loss'] train_acc=hist.history['acc'] val_acc=hist.history['val_acc'] xc=range(nb_epoch) ``` of the trained model, but was getting an error, which is caused by the deprecated methods I was using. ``` --------------------------------------------------------------------------- KeyError Traceback (most recent call last) in () 3 train_loss=hist.history['loss'] 4 val_loss=hist.history['val_loss'] ----> 5 train_acc=hist.history['acc'] 6 val_acc=hist.history['val_acc'] 7 xc=range(nb_epoch) KeyError: 'acc' ``` The code I used to fit the model before trying to read the accuracy, is the following: ``` hist = model.fit(X_train, Y_train, batch_size=batch_size, nb_epoch=nb_epoch, verbose=1, validation_data=(X_test, Y_test)) hist = model.fit(X_train, Y_train, batch_size=batch_size, nb_epoch=nb_epoch, verbose=1, validation_split=0.2) ``` Which produces this output when running it: ``` Epoch 1\/20 237\/237 [==============================] - 104s 440ms\/step - loss: 6.2802 - val_loss: 2.4209 ..... ..... ..... Epoch 19\/20 189\/189 [==============================] - 91s 480ms\/step - loss: 0.0590 - val_loss: 0.2193 Epoch 20\/20 189\/189 [==============================] - 85s 451ms\/step - loss: 0.0201 - val_loss: 0.2312 ``` I've noticed that I was running deprecated methods & arguments. So how can I read the accuracy and val_accuracy without having to fit again, and waiting for a couple of hours again? I tried to replace train_acc=hist.history['acc'] with train_acc=hist.history['accuracy'] but it didn't help.", "response":"You probably didn't add \"acc\" as a metric when compiling the model. 
``` model.compile(optimizer=..., loss=..., metrics=['accuracy',...]) ``` You can get the metrics and loss from any data without training again with: ``` model.evaluate(X, Y) ```", "best_answers_score":0.6427, "library_name":"tensorflow", "question_url":"https:\/\/stackoverflow.com\/questions\/51047676\/how-to-get-accuracy-of-model-using-keras", "best_answers_votes":33, "question_length":1733, "response_length":245 }, { "question":"In Tensorflow, get the names of all the Tensors in a graph I am creating neural nets with Tensorflow and skflow; for some reason I want to get the values of some inner tensors for a given input, so I am using myClassifier.get_layer_value(input, \"tensorName\"), myClassifier being a skflow.estimators.TensorFlowEstimator. However, I find it difficult to find the correct syntax of the tensor name, even knowing its name (and I'm getting confused between operation and tensors), so I'm using tensorboard to plot the graph and look for the name. Is there a way to enumerate all the tensors in a graph without using tensorboard?", "response":"You can do ``` [n.name for n in tf.get_default_graph().as_graph_def().node] ``` Also, if you are prototyping in an IPython notebook, you can show the graph directly in notebook, see show_graph function in Alexander's Deep Dream notebook", "best_answers_score":0.6403, "library_name":"tensorflow", "question_url":"https:\/\/stackoverflow.com\/questions\/36883949\/in-tensorflow-get-the-names-of-all-the-tensors-in-a-graph", "best_answers_votes":204, "question_length":623, "response_length":236 }, { "question":"Tensorflow - ValueError: Parent directory of trained_variables.ckpt doesn't exist, can't save I want to save my tensorflow session sess but i have the following error ValueError: Parent directory of trained_variables.ckpt doesn't exist, can't save. 
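The accuracy answer above depends on the version-specific key Keras uses in `hist.history` ('acc' in older releases, 'accuracy' in newer ones). A version-agnostic lookup can be sketched in plain Python; the helper name `get_metric` is made up here:

```python
def get_metric(history, *names):
    """Return the first matching metric series from a Keras-style
    history dict, trying each candidate key in order."""
    for name in names:
        if name in history:
            return history[name]
    raise KeyError("none of %r found in %r" % (names, sorted(history)))

# Simulated history dicts from two Keras versions
old_style = {"loss": [0.5, 0.3], "acc": [0.8, 0.9]}
new_style = {"loss": [0.5, 0.3], "accuracy": [0.8, 0.9]}

print(get_metric(old_style, "acc", "accuracy"))  # [0.8, 0.9]
print(get_metric(new_style, "acc", "accuracy"))  # [0.8, 0.9]
```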
This is my line of code : saver.save(sess, \"trained_variables.ckpt\") I've also tried to change the file name and put model instead of trained_variables.ckpt but i get the same problem. Following this tutorial A TensorFlow Tutorial: Email Classification", "response":"``` saver.save(sess, \".\/trained_variables.ckpt\") ```", "best_answers_score":0.6388, "library_name":"tensorflow", "question_url":"https:\/\/stackoverflow.com\/questions\/42134360\/tensorflow-valueerror-parent-directory-of-trained-variables-ckpt-doesnt-exis", "best_answers_votes":57, "question_length":501, "response_length":52 }, { "question":"from_logits=True and from_logits=False get different training result for tf.losses.CategoricalCrossentropy for UNet I am doing the image semantic segmentation job with unet, if I set the Softmax Activation for last layer like this: ``` ... conv9 = Conv2D(n_classes, (3,3), padding = 'same')(conv9) conv10 = (Activation('softmax'))(conv9) model = Model(inputs, conv10) return model ... ``` and then using loss = tf.keras.losses.CategoricalCrossentropy(from_logits=False) The training will not converge even for only one training image. But if I do not set the Softmax Activation for last layer like this: ``` ... conv9 = Conv2D(n_classes, (3,3), padding = 'same')(conv9) model = Model(inputs, conv9) return model ... ``` and then using loss = tf.keras.losses.CategoricalCrossentropy(from_logits=True) The training will converge for one training image. My groundtruth dataset is generated like this: ``` X = [] Y = [] im = cv2.imread(impath) X.append(im) seg_labels = np.zeros((height, width, n_classes)) for spath in segpaths: mask = cv2.imread(spath, 0) seg_labels[:, :, c] += mask Y.append(seg_labels.reshape(width*height, n_classes)) ``` Why? Is there something wrong for my usage? This is my experiment code of git: https:\/\/github.com\/honeytidy\/unet You can checkout and run (can run on cpu). 
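The `./trained_variables.ckpt` fix above works because the saver needs an explicit, resolvable checkpoint path whose parent directory already exists. That precondition can be enforced in plain Python before any save call — a sketch, with the helper name `ensure_parent` invented here:

```python
import os
import tempfile

def ensure_parent(path):
    """Create the parent directory of `path` if needed and return an
    absolute path, so a saver never sees a missing parent directory."""
    path = os.path.abspath(path)
    os.makedirs(os.path.dirname(path), exist_ok=True)
    return path

base = tempfile.mkdtemp()
ckpt = ensure_parent(os.path.join(base, "checkpoints", "trained_variables.ckpt"))
print(os.path.isdir(os.path.dirname(ckpt)))  # True
```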
You can change the Activation layer and from_logits of CategoricalCrossentropy and see what i said.", "response":"Pushing the \"softmax\" activation into the cross-entropy loss layer significantly simplifies the loss computation and makes it more numerically stable. It might be the case that in your example the numerical issues are significant enough to render the training process ineffective for the from_logits=False option. You can find a derivation of the cross entropy loss (a special case of \"info gain\" loss) in this post. This derivation illustrates the numerical issues that are averted when combining softmax with cross entropy loss.", "best_answers_score":0.6359, "library_name":"tensorflow", "question_url":"https:\/\/stackoverflow.com\/questions\/57253841\/from-logits-true-and-from-logits-false-get-different-training-result-for-tf-loss", "best_answers_votes":16, "question_length":1395, "response_length":530 }, { "question":"How to approach a number guessing game (with a twist) algorithm? Update(July 2020): Question is 9 years old but still one that I'm deeply interested in. In the time since, machine learning(RNN's, CNN's, GANS,etc), new approaches and cheap GPU's have risen that enable new approaches. I thought it would be fun to revisit this question to see if there are new approaches. I am learning programming (Python and algorithms) and was trying to work on a project that I find interesting. I have created a few basic Python scripts, but I\u2019m not sure how to approach a solution to a game I am trying to build. Here\u2019s how the game will work: Users will be given items with a value. For example, ``` Apple = 1 Pears = 2 Oranges = 3 ``` They will then get a chance to choose any combo of them they like (i.e. 100 apples, 20 pears, and one orange). The only output the computer gets is the total value (in this example, it's currently $143). The computer will try to guess what they have. 
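The from_logits answer above argues that folding softmax into the loss is more numerically stable. The underlying overflow issue can be shown in plain Python — a sketch, not Keras's actual kernels, which apply the same max-shift trick in log space:

```python
import math

def softmax_naive(logits):
    exps = [math.exp(x) for x in logits]  # overflows for large logits
    total = sum(exps)
    return [e / total for e in exps]

def softmax_stable(logits):
    m = max(logits)                       # shift by the max first
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

logits = [1000.0, 1000.0]
try:
    softmax_naive(logits)
except OverflowError:
    print("naive softmax overflowed")
print(softmax_stable(logits))  # [0.5, 0.5]
```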
Which obviously it won\u2019t be able to get correctly the first turn. ``` Value quantity(day1) value(day1) Apple 1 100 100 Pears 2 20 40 Orange 3 1 3 Total 121 143 ``` The next turn the user can modify their numbers but no more than 5% of the total quantity (or some other percent we may chose. I\u2019ll use 5% for example.). The prices of fruit can change(at random) so the total value may change based on that also (for simplicity I am not changing fruit prices in this example). Using the above example, on day 2 of the game, the user returns a value of $152 and $164 on day 3. Here's an example: ``` Quantity (day2) %change (day2) Value (day2) Quantity (day3) %change (day3) Value(day3) 104 104 106 106 21 42 23 46 2 6 4 12 127 4.96% 152 133 4.72% 164 ``` *(I hope the tables show up right, I had to manually space them so hopefully it's not just doing it on my screen, if it doesn't work let me know and I'll try to upload a screenshot.) I am trying to see if I can figure out what the quantities are over time (assuming the user will have the patience to keep entering numbers). I know right now my only restriction is the total value cannot be more than 5% so I cannot be within 5% accuracy right now so the user will be entering it forever. What I have done so far Here\u2019s my solution so far (not much). Basically, I take all the values and figure out all the possible combinations of them (I am done this part). Then I take all the possible combos and put them in a database as a dictionary (so for example for $143, there could be a dictionary entry {apple:143, Pears:0, Oranges :0}..all the way to {apple:0, Pears:1, Oranges :47}. I do this each time I get a new number so I have a list of all possibilities. Here\u2019s where I\u2019m stuck. In using the rules above, how can I figure out the best possible solution? 
I think I\u2019ll need a fitness function that automatically compares the two days data and removes any possibilities that have more than 5% variance of the previous days data. Questions: So my question with user changing the total and me having a list of all the probabilities, how should I approach this? What do I need to learn? Is there any algorithms out there or theories that I can use that are applicable? Or, to help me understand my mistake, can you suggest what rules I can add to make this goal feasible (if it's not in its current state. I was thinking adding more fruits and saying they must pick at least 3, etc..)? Also, I only have a vague understanding of genetic algorithms, but I thought I could use them here, if is there something I can use? I'm very very eager to learn so any advice or tips would be greatly appreciated (just please don't tell me this game is impossible). UPDATE: Getting feedback that this is hard to solve. So I thought I'd add another condition to the game that won't interfere with what the player is doing (game stays the same for them) but everyday the value of the fruits change price (randomly). Would that make it easier to solve? Because within a 5% movement and certain fruit value changes, only a few combinations are probable over time. Day 1, anything is possible and getting a close enough range is almost impossible, but as the prices of fruits change and the user can only choose a 5% change, then shouldn't (over time) the range be narrow and narrow. In the above example, if prices are volatile enough I think I could brute force a solution that gave me a range to guess in, but I'm trying to figure out if there's a more elegant solution or other solutions to keep narrowing this range over time. UPDATE2: After reading and asking around, I believe this is a hidden Markov\/Viterbi problem that tracks the changes in fruit prices as well as total sum (weighting the last data point the heaviest). 
I'm not sure how to apply the relationship though. I think this is the case and could be wrong but at the least I'm starting to suspect this is a some type of machine learning problem. Update 3: I am created a test case (with smaller numbers) and a generator to help automate the user generated data and I am trying to create a graph from it to see what's more likely. Here's the code, along with the total values and comments on what the users actually fruit quantities are. ``` #!\/usr\/bin\/env python import itertools # Fruit price data fruitPriceDay1 = {'Apple':1, 'Pears':2, 'Oranges':3} fruitPriceDay2 = {'Apple':2, 'Pears':3, 'Oranges':4} fruitPriceDay3 = {'Apple':2, 'Pears':4, 'Oranges':5} # Generate possibilities for testing (warning...will not scale with large numbers) def possibilityGenerator(target_sum, apple, pears, oranges): allDayPossible = {} counter = 1 apple_range = range(0, target_sum + 1, apple) pears_range = range(0, target_sum + 1, pears) oranges_range = range(0, target_sum + 1, oranges) for i, j, k in itertools.product(apple_range, pears_range, oranges_range): if i + j + k == target_sum: currentPossible = {} #print counter #print 'Apple', ':', i\/apple, ',', 'Pears', ':', j\/pears, ',', 'Oranges', ':', k\/oranges currentPossible['apple'] = i\/apple currentPossible['pears'] = j\/pears currentPossible['oranges'] = k\/oranges #print currentPossible allDayPossible[counter] = currentPossible counter = counter +1 return allDayPossible # Total sum being returned by user for value of fruits totalSumDay1=26 # Computer does not know this but users quantities are apple: 20, pears 3, oranges 0 at the current prices of the day totalSumDay2=51 # Computer does not know this but users quantities are apple: 21, pears 3, oranges 0 at the current prices of the day totalSumDay3=61 # Computer does not know this but users quantities are apple: 20, pears 4, oranges 1 at the current prices of the day graph = {} graph['day1'] = possibilityGenerator(totalSumDay1, 
fruitPriceDay1['Apple'], fruitPriceDay1['Pears'], fruitPriceDay1['Oranges'] ) graph['day2'] = possibilityGenerator(totalSumDay2, fruitPriceDay2['Apple'], fruitPriceDay2['Pears'], fruitPriceDay2['Oranges'] ) graph['day3'] = possibilityGenerator(totalSumDay3, fruitPriceDay3['Apple'], fruitPriceDay3['Pears'], fruitPriceDay3['Oranges'] ) # Sample of dict = 1 : {'oranges': 0, 'apple': 0, 'pears': 0}..70 : {'oranges': 8, 'apple': 26, 'pears': 13} print graph ```", "response":"We'll combine graph-theory and probability: On the 1st day, build a set of all feasible solutions. Lets denote the solutions set as A1={a1(1), a1(2),...,a1(n)}. On the second day you can again build the solutions set A2. Now, for each element in A2, you'll need to check if it can be reached from each element of A1 (given x% tolerance). If so - connect A2(n) to A1(m). If it can't be reached from any node in A1(m) - you can delete this node. Basically we are building a connected directed acyclic graph. All paths in the graph are equally likely. You can find an exact solution only when there is a single edge from Am to Am+1 (from a node in Am to a node in Am+1). Sure, some nodes appear in more paths than other nodes. The probability for each node can be directly deduced based on the number of paths that contains this node. By assigning a weight to each node, which equals to the number of paths that leads to this node, there is no need to keep all history, but only the previous day. Also, have a look at non-negative-values linear diphantine equations - A question I asked a while ago. The accepted answer is a great way to enumarte all combos in each step.", "best_answers_score":0.6351, "library_name":"tensorflow", "question_url":"https:\/\/stackoverflow.com\/questions\/7694978\/how-to-approach-a-number-guessing-game-with-a-twist-algorithm", "best_answers_votes":22, "question_length":7177, "response_length":1168 }, { "question":"How to keep lookup tables initialized for prediction (and not just training)? 
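The graph-theory answer above — build each day's feasible set, then keep only states reachable from the previous day — can be sketched in plain Python with the question's smaller day-1/day-2 numbers. The 5% rule is read here as a bound on the change in total quantity; that is one possible reading, not the only one:

```python
from itertools import product

def feasible(total, prices):
    """All non-negative quantity vectors whose value equals `total`."""
    max_qty = [total // p for p in prices]
    return [q for q in product(*(range(m + 1) for m in max_qty))
            if sum(p * n for p, n in zip(prices, q)) == total]

def reachable(prev, nxt, tol=0.05):
    """True if total quantity changed by at most `tol` of the previous total."""
    a, b = sum(prev), sum(nxt)
    return abs(b - a) <= tol * a

# Day 1: prices (1, 2, 3), total 26; Day 2: prices (2, 3, 4), total 51
day1 = feasible(26, (1, 2, 3))
day2 = feasible(51, (2, 3, 4))
# Keep only day-2 states that some day-1 state can reach: the DAG edges
day2_pruned = [s for s in day2 if any(reachable(p, s) for p in day1)]
print(len(day2), len(day2_pruned))
```

The user's true quantities (20, 3, 0) on day 1 and (21, 3, 0) on day 2 survive the pruning, while states with no 5%-compatible predecessor are dropped.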
I create a lookup table from tf.contrib.lookup, using the training data (as input). Then, I pass every input through that lookup table, before passing it through my model. This works for training, but when it comes to online prediction from this same model, it raises the error: Table not initialized I'm using SavedModel to save the model. I run the prediction from this saved model. How can I initialize this table so that it stays initialized? Or is there a better way to save the model so that the table is always initialized?", "response":"I think you would be better off using tf.tables_initializer() as the legacy_init_op. tf.saved_model.main_op.main_op() also adds local and global initialization ops in addition to table initialization. when you load the saved model and it runs the legacy_init_op, it would reset your variables, which is not what you want.", "best_answers_score":0.6349, "library_name":"tensorflow", "question_url":"https:\/\/stackoverflow.com\/questions\/44236090\/how-to-keep-lookup-tables-initialized-for-prediction-and-not-just-training", "best_answers_votes":14, "question_length":608, "response_length":321 }, { "question":"Return number of epochs for EarlyStopping callback in Keras Is there any way to return the number of epochs after which the training was stopped in Keras when using the EarlyStopping callback? 
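The lookup-table question above is about a vocabulary table built once from training data and then reused unchanged at prediction time. What such a table computes — token-to-id lookup with a default for out-of-vocabulary tokens — can be mimicked with a plain dict; this is a stand-in to show the behavior, not the tf.contrib.lookup API:

```python
class VocabTable:
    """Stand-in for an index lookup table: built (initialized) once,
    then reused unchanged at prediction time."""
    def __init__(self, vocab, default=-1):
        self._ids = {tok: i for i, tok in enumerate(vocab)}
        self._default = default

    def lookup(self, tokens):
        return [self._ids.get(t, self._default) for t in tokens]

table = VocabTable(["apple", "pear", "orange"])
print(table.lookup(["pear", "kiwi", "apple"]))  # [1, -1, 0]
```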
I can get the log of the training and validation loss and compute the number of epochs myself using the patience parameter, but is there a more direct way?", "response":"Use EarlyStopping.stopped_epoch attribute: remember the callback in a separate variable, say callback, and check callback.stopped_epoch after the training stopped.", "best_answers_score":0.6341, "library_name":"tensorflow", "question_url":"https:\/\/stackoverflow.com\/questions\/49852241\/return-number-of-epochs-for-earlystopping-callback-in-keras", "best_answers_votes":14, "question_length":348, "response_length":163 }, { "question":"ValueError: Unknown layer: Functional I made a CNN in colab and saved the models at every epoch. I exported the h5 file and now am trying to run the model on some test images. Here's the main error: ``` ValueError: Unknown layer: Functional ``` Here's the code I used to run the model and save at each epoch: ``` epochs = 50 callbacks = [ tf.keras.callbacks.TensorBoard(log_dir='.\/logs'), keras.callbacks.ModelCheckpoint(\"save_at_{epoch}.h5\"), ] model.compile( optimizer=keras.optimizers.Adam(1e-3), loss=\"binary_crossentropy\", metrics=[\"accuracy\"], ) model.fit( train_ds, epochs=epochs, callbacks=callbacks, validation_data=val_ds, ) ``` After the model ran I just downloaded the h5 file from the colab sidebar locally. 
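The stopped_epoch answer above reads the epoch back from the callback; the patience bookkeeping that produces that number can be sketched in plain Python (a simplified model, not Keras's actual implementation — it ignores min_delta and restore_best_weights):

```python
def early_stop_epoch(val_losses, patience):
    """Return the 0-based epoch at which training would stop: the first
    epoch completing `patience` consecutive epochs without improvement,
    or the last epoch if the monitored loss keeps improving."""
    best = float("inf")
    wait = 0
    for epoch, loss in enumerate(val_losses):
        if loss < best:
            best, wait = loss, 0
        else:
            wait += 1
            if wait >= patience:
                return epoch
    return len(val_losses) - 1

losses = [1.0, 0.8, 0.7, 0.75, 0.74, 0.73, 0.72]
print(early_stop_epoch(losses, patience=3))  # 5
```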
I re-uploaded the file from the local disk, and here's how I'm trying to load the model: ``` # load and evaluate a saved model from tensorflow.keras.models import load_model # load model# loaded_model = load_model('save_at_47.h5') loaded_model.layers[0].input_shape ``` Here's the full traceback: ``` --------------------------------------------------------------------------- ValueError Traceback (most recent call last) in () 3 4 # load model# ----> 5 loaded_model = load_model('save_at_47.h5') 6 loaded_model.layers[0].input_shape 5 frames \/usr\/local\/lib\/python3.6\/dist-packages\/tensorflow\/python\/keras\/saving\/save.py in load_model(filepath, custom_objects, compile) 182 if (h5py is not None and ( 183 isinstance(filepath, h5py.File) or h5py.is_hdf5(filepath))): --> 184 return hdf5_format.load_model_from_hdf5(filepath, custom_objects, compile) 185 186 if sys.version_info >= (3, 4) and isinstance(filepath, pathlib.Path): \/usr\/local\/lib\/python3.6\/dist-packages\/tensorflow\/python\/keras\/saving\/hdf5_format.py in load_model_from_hdf5(filepath, custom_objects, compile) 176 model_config = json.loads(model_config.decode('utf-8')) 177 model = model_config_lib.model_from_config(model_config, --> 178 custom_objects=custom_objects) 179 180 # set weights \/usr\/local\/lib\/python3.6\/dist-packages\/tensorflow\/python\/keras\/saving\/model_config.py in model_from_config(config, custom_objects) 53 '`Sequential.from_config(config)`?') 54 from tensorflow.python.keras.layers import deserialize # pylint: disable=g-import-not-at-top ---> 55 return deserialize(config, custom_objects=custom_objects) 56 57 \/usr\/local\/lib\/python3.6\/dist-packages\/tensorflow\/python\/keras\/layers\/serialization.py in deserialize(config, custom_objects) 107 module_objects=globs, 108 custom_objects=custom_objects, --> 109 printable_module_name='layer') \/usr\/local\/lib\/python3.6\/dist-packages\/tensorflow\/python\/keras\/utils\/generic_utils.py in deserialize_keras_object(identifier, 
module_objects, custom_objects, printable_module_name) 360 config = identifier 361 (cls, cls_config) = class_and_config_for_serialized_keras_object( --> 362 config, module_objects, custom_objects, printable_module_name) 363 364 if hasattr(cls, 'from_config'): \/usr\/local\/lib\/python3.6\/dist-packages\/tensorflow\/python\/keras\/utils\/generic_utils.py in class_and_config_for_serialized_keras_object(config, module_objects, custom_objects, printable_module_name) 319 cls = get_registered_object(class_name, custom_objects, module_objects) 320 if cls is None: --> 321 raise ValueError('Unknown ' + printable_module_name + ': ' + class_name) 322 323 cls_config = config['config'] ValueError: Unknown layer: Functional ``` It seems there have been several similar questions here,and here. Changing the import method hasn't helped yet, and trying to make some kind of custom object has not worked either.", "response":"The solution to this error is very simple, ex. the reason is that you have trained the model on version '2.3.0' of Tensorflow & '2.4.3' of Keras (On Colab or local). and now you are accessing the saved model(.h5) via another version of Keras & TensorFlow. It will give you the error. 
The solution is to re-train the model with the upgraded versions, or to downgrade your TF & Keras to the same versions the model was trained on.\", \"best_answers_score\":0.6326, \"library_name\":\"tensorflow\", \"question_url\":\"https:\/\/stackoverflow.com\/questions\/63068639\/valueerror-unknown-layer-functional\", \"best_answers_votes\":22, \"question_length\":3560, \"response_length\":422 }, { \"question\":\"Could not load dynamic library 'libnvinfer.so.7' I know that this question has been asked a lot, but none of the suggestions seem to work, probably since my setup is somewhat different: ``` Ubuntu 22.04 python 3.10.8 tensorflow 2.11.0 cudatoolkit 11.2.2 cudnn 8.1.0.77 nvidia-tensorrt 8.4.3.1 nvidia-pyindex 1.0.9 ``` Having created a conda environment 'tf', in the directory home\/dan\/anaconda3\/envs\/tf\/lib\/python3.10\/site-packages\/tensorrt I have ``` libnvinfer_builder_resource.so.8.4.3 libnvinfer_plugin.so.8 libnvinfer.so.8 libnvonnxparser.so.8 libnvparsers.so.8 tensorrt.so ``` When running python3 -c \"import tensorflow as tf; print(tf.config.list_physical_devices('GPU'))\" I get ``` tensorflow\/compiler\/xla\/stream_executor\/platform\/default\/dso_loader.cc:64] Could not load dynamic library 'libnvinfer.so.7'; dlerror: libnvinfer.so.7: cannot open shared object file: No such file or directory; LD_LIBRARY_PATH: :\/home\/dan\/anaconda3\/envs\/tf\/lib tensorflow\/compiler\/xla\/stream_executor\/platform\/default\/dso_loader.cc:64] Could not load dynamic library 'libnvinfer_plugin.so.7'; dlerror: libnvinfer_plugin.so.7: cannot open shared object file: No such file or directory; LD_LIBRARY_PATH: :\/home\/dan\/anaconda3\/envs\/tf\/lib tensorflow\/compiler\/tf2tensorrt\/utils\/py_utils.cc:38] TF-TRT Warning: Cannot dlopen some TensorRT libraries. If you would like to use Nvidia GPU with TensorRT, please make sure the missing libraries mentioned above are installed properly.
[PhysicalDevice(name='\/physical_device:GPU:0', device_type='GPU')] ``` I'm guessing I should downgrade nvidia-tensorrt, but nothing I've tried seems to work; any advice would be much appreciated.", "response":"For me, setting a symbolic link from libnvinfer version 7 to version 8 worked: ```bash # the following path will be different for you - depending on your install method $ cd env\/lib\/python3.10\/site-packages\/tensorrt # create symbolic links $ ln -s libnvinfer_plugin.so.8 libnvinfer_plugin.so.7 $ ln -s libnvinfer.so.8 libnvinfer.so.7 # add tensorrt to library path $ export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:~\/env\/lib\/python3.10\/site-packages\/tensorrt\/ ```", "best_answers_score":0.6294, "library_name":"tensorflow", "question_url":"https:\/\/stackoverflow.com\/questions\/74956134\/could-not-load-dynamic-library-libnvinfer-so-7", "best_answers_votes":25, "question_length":1658, "response_length":450 }, { "question":"Loading a trained Keras model and continue training I was wondering if it was possible to save a partly trained Keras model and continue the training after loading the model again. The reason for this is that I will have more training data in the future and I do not want to retrain the whole model again. The functions which I am using are: ``` #Partly train model model.fit(first_training, first_classes, batch_size=32, nb_epoch=20) #Save partly trained model model.save('partly_trained.h5') #Load partly trained model from keras.models import load_model model = load_model('partly_trained.h5') #Continue training model.fit(second_training, second_classes, batch_size=32, nb_epoch=20) ``` Edit 1: added fully working example. With the first dataset, after 10 epochs the loss of the last epoch will be 0.0748 and the accuracy 0.9863. After saving, deleting and reloading the model, the loss and accuracy of the model trained on the second dataset will be 0.1711 and 0.9504 respectively. Is this caused by the new training data or by a completely re-trained model?
``` \"\"\" Model by: http:\/\/machinelearningmastery.com\/ \"\"\" # load (downloaded if needed) the MNIST dataset import numpy from keras.datasets import mnist from keras.models import Sequential from keras.layers import Dense from keras.utils import np_utils from keras.models import load_model numpy.random.seed(7) def baseline_model(): model = Sequential() model.add(Dense(num_pixels, input_dim=num_pixels, init='normal', activation='relu')) model.add(Dense(num_classes, init='normal', activation='softmax')) model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy']) return model if __name__ == '__main__': # load data (X_train, y_train), (X_test, y_test) = mnist.load_data() # flatten 28*28 images to a 784 vector for each image num_pixels = X_train.shape[1] * X_train.shape[2] X_train = X_train.reshape(X_train.shape[0], num_pixels).astype('float32') X_test = X_test.reshape(X_test.shape[0], num_pixels).astype('float32') # normalize inputs from 0-255 to 0-1 X_train = X_train \/ 255 X_test = X_test \/ 255 # one hot encode outputs y_train = np_utils.to_categorical(y_train) y_test = np_utils.to_categorical(y_test) num_classes = y_test.shape[1] # build the model model = baseline_model() #Partly train model dataset1_x = X_train[:3000] dataset1_y = y_train[:3000] model.fit(dataset1_x, dataset1_y, nb_epoch=10, batch_size=200, verbose=2) # Final evaluation of the model scores = model.evaluate(X_test, y_test, verbose=0) print(\"Baseline Error: %.2f%%\" % (100-scores[1]*100)) #Save partly trained model model.save('partly_trained.h5') del model #Reload model model = load_model('partly_trained.h5') #Continue training dataset2_x = X_train[3000:] dataset2_y = y_train[3000:] model.fit(dataset2_x, dataset2_y, nb_epoch=10, batch_size=200, verbose=2) scores = model.evaluate(X_test, y_test, verbose=0) print(\"Baseline Error: %.2f%%\" % (100-scores[1]*100)) ``` Edit 2: tensorflow.keras remarks For tensorflow.keras, change the parameter nb_epoch to epochs in the
model fit. The imports and basemodel function are: ``` import numpy from tensorflow.keras.datasets import mnist from tensorflow.keras.models import Sequential from tensorflow.keras.layers import Dense from tensorflow.keras.utils import to_categorical from tensorflow.keras.models import load_model numpy.random.seed(7) def baseline_model(): model = Sequential() model.add(Dense(num_pixels, input_dim=num_pixels, activation='relu')) model.add(Dense(num_classes, activation='softmax')) model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy']) return model ```", "response":"Actually, model.save saves all the information needed for restarting training in your case. The only thing that could be spoiled by reloading the model is your optimizer state. To check that, try to save and reload the model and train it on the training data.", "best_answers_score":0.6266, "library_name":"tensorflow", "question_url":"https:\/\/stackoverflow.com\/questions\/42666046\/loading-a-trained-keras-model-and-continue-training", "best_answers_votes":52, "question_length":3631, "response_length":244 }, { "question":"ERROR: tensorboard 2.0.2 has requirement setuptools>=41.0.0, but you'll have setuptools 40.6.2 which is incompatible This error came up during installation. Does it cause a problem? ERROR: tensorboard 2.0.2 has requirement setuptools>=41.0.0, but you'll have setuptools 40.6.2 which is incompatible.", "response":"I just did pip install setuptools --upgrade, then pip install tensorflow", "best_answers_score":0.6238, "library_name":"tensorflow", "question_url":"https:\/\/stackoverflow.com\/questions\/59104396\/error-tensorboard-2-0-2-has-requirement-setuptools-41-0-0-but-youll-have-set", "best_answers_votes":37, "question_length":294, "response_length":73 }, { "question":"TensorFlow on 32-bit Linux? Is there a version of TensorFlow for 32-bit Linux?
I only see the 64-bit wheel available, and didn't find anything about it on the site.", "response":"We have only tested the TensorFlow distribution on 64-bit Linux and Mac OS X, and distribute binary packages for those platforms only. Try following the source installation instructions to build a version for your platform. EDIT: One user has published instructions for running TensorFlow on a 32-bit ARM processor, which is promising for other 32-bit architectures. These instructions may have useful pointers for getting TensorFlow and Bazel to work in a 32-bit environment.", "best_answers_score":0.6235, "library_name":"tensorflow", "question_url":"https:\/\/stackoverflow.com\/questions\/33634525\/tensorflow-on-32-bit-linux", "best_answers_votes":26, "question_length":164, "response_length":476 }, { "question":"What does TensorFlow's `conv2d_transpose()` operation do? The documentation for the conv2d_transpose() operation does not clearly explain what it does: The transpose of conv2d. This operation is sometimes called \"deconvolution\" after Deconvolutional Networks, but is actually the transpose (gradient) of conv2d rather than an actual deconvolution. I went through the paper that the doc points to, but it did not help. What does this operation do and what are examples of why you would want to use it?", "response":"The best explanation of how convolution transpose works that I've seen online is here. I'll give my own short description: it applies convolution with a fractional stride. In other words, it spaces out the input values (with zeroes) so the filter is applied over a region that's potentially smaller than the filter size. As for why one would want to use it:
It can be used as a sort of upsampling with learned weights, as opposed to bilinear interpolation or some other fixed form of upsampling.", "best_answers_score":0.6218, "library_name":"tensorflow", "question_url":"https:\/\/stackoverflow.com\/questions\/39373230\/what-does-tensorflows-conv2d-transpose-operation-do", "best_answers_votes":52, "question_length":500, "response_length":490 }, { "question":"Tensorflow GPU Could not load dynamic library 'cusolver64_10.dll'; dlerror: cusolver64_10.dll not found When I run ``` import tensorflow as tf tf.test.is_gpu_available( cuda_only=False, min_cuda_compute_capability=None ) ``` I get the following error", "response":"Step 1 ``` Move to C:\\Program Files\\NVIDIA GPU Computing Toolkit\\CUDA\\v11.2\\bin ``` Step 2 ``` Rename file cusolver64_11.dll to cusolver64_10.dll ```", "best_answers_score":0.6205, "library_name":"tensorflow", "question_url":"https:\/\/stackoverflow.com\/questions\/65608713\/tensorflow-gpu-could-not-load-dynamic-library-cusolver64-10-dll-dlerror-cuso", "best_answers_votes":102, "question_length":250, "response_length":175 }, { "question":"This model has not yet been built error on model.summary() I have a Keras model defined as follows: ```py class ConvLayer(Layer) : def __init__(self, nf, ks=3, s=2, **kwargs): self.nf = nf self.grelu = GeneralReLU(leak=0.01) self.conv = (Conv2D(filters = nf, kernel_size = ks, strides = s, padding = \"same\", use_bias = False, activation = \"linear\")) super(ConvLayer, self).__init__(**kwargs) def rsub(self): return -self.grelu.sub def set_sub(self, v): self.grelu.sub = -v def conv_weights(self): return self.conv.weight[0] def build(self, input_shape): # No weight to train.
super(ConvLayer, self).build(input_shape) # Be sure to call this at the end def compute_output_shape(self, input_shape): output_shape = (input_shape[0], input_shape[1]\/2, input_shape[2]\/2, self.nf) return output_shape def call(self, x): return self.grelu(self.conv(x)) def __repr__(self): return f'ConvLayer(nf={self.nf}, activation={self.grelu})' ``` ```py class ConvModel(tf.keras.Model): def __init__(self, nfs, input_shape, output_shape, use_bn=False, use_dp=False): super(ConvModel, self).__init__(name='mlp') self.use_bn = use_bn self.use_dp = use_dp self.num_classes = num_classes # backbone layers self.convs = [ConvLayer(nfs[0], s=1, input_shape=input_shape)] self.convs += [ConvLayer(nf) for nf in nfs[1:]] # classification layers self.convs.append(AveragePooling2D()) self.convs.append(Dense(output_shape, activation='softmax')) def call(self, inputs): for layer in self.convs: inputs = layer(inputs) return inputs ``` I'm able to compile this model without any issues ```py >>> model.compile(optimizer=tf.keras.optimizers.Adam(lr=lr), loss='categorical_crossentropy', metrics=['accuracy']) ``` But when I query the summary for this model, I see this error ```py >>> model = ConvModel(nfs, input_shape=(32, 32, 3), output_shape=num_classes) >>> model.summary() --------------------------------------------------------------------------- ValueError Traceback (most recent call last) in () ----> 1 model.summary() \/usr\/local\/lib\/python3.6\/dist-packages\/tensorflow\/python\/keras\/engine\/network.py in summary(self, line_length, positions, print_fn) 1575 \"\"\" 1576 if not self.built: -> 1577 raise ValueError('This model has not yet been built. ' 1578 'Build the model first by calling `build()` or calling ' 1579 '`fit()` with some data, or specify ' ValueError: This model has not yet been built. Build the model first by calling `build()` or calling `fit()` with some data, or specify an `input_shape` argument in the first layer(s) for automatic build. 
``` I'm providing input_shape for the first layer of my model, why is it throwing this error?", "response":"The error says what to do: This model has not yet been built. Build the model first by calling build() ```py model.build(input_shape) # `input_shape` is the shape of the input data # e.g. input_shape = (None, 32, 32, 3) model.summary() ```", "best_answers_score":0.6197, "library_name":"tensorflow", "question_url":"https:\/\/stackoverflow.com\/questions\/55908188\/this-model-has-not-yet-been-built-error-on-model-summary", "best_answers_votes":89, "question_length":2622, "response_length":239 }, { "question":"How to download previous version of tensorflow? For some reason, I want to use some previous version of tensorflow('tensorflow-**-.whl', not source code on github). Where can I download the previous version, and how can I find the corresponding cuda version that is compatible?", "response":"It works for me, since I have 1.6 ``` pip install tensorflow==1.5 ```", "best_answers_score":0.6184, "library_name":"tensorflow", "question_url":"https:\/\/stackoverflow.com\/questions\/40416056\/how-to-download-previous-version-of-tensorflow", "best_answers_votes":57, "question_length":279, "response_length":69 }, { "question":"How to fix ipykernel_launcher.py: error: unrecognized arguments in jupyter? I am following this tensorflow tutorial. After two days of setting up the environment I could finally run premade_estimator.py using cmd, but when I try to run the same code in a jupyter notebook I am getting this error: ``` usage: ipykernel_launcher.py [-h] [--batch_size BATCH_SIZE] [--train_steps TRAIN_STEPS] ipykernel_launcher.py: error: unrecognized arguments: -f C:\\Users\\david\\AppData\\Roaming\\jupyter\\runtime\\kernel-4faecb24-6e87-40b4-bf15-5d24520d7130.json ``` An exception has occurred, use %tb to see the full traceback.
``` SystemExit: 2 C:\\Anaconda3\\envs\\python3x\\lib\\site-packages\\IPython\\core\\interactiveshell.py:2918: ``` ``` UserWarning: To exit: use 'exit', 'quit', or Ctrl-D. warn(\"To exit: use 'exit', 'quit', or Ctrl-D.\", stacklevel=1) ``` I have tried to fix it without success using: ``` pip install --ignore-installed --upgrade jupyter pip install ipykernel python -m ipykernel install conda install notebook ipykernel ipython kernelspec install-self ```", "response":"A more elegant solution would be: ``` args, unknown = parser.parse_known_args() ``` instead of ``` args = parser.parse_args() ```", "best_answers_score":0.6153, "library_name":"tensorflow", "question_url":"https:\/\/stackoverflow.com\/questions\/48796169\/how-to-fix-ipykernel-launcher-py-error-unrecognized-arguments-in-jupyter", "best_answers_votes":82, "question_length":1049, "response_length":129 }, { "question":"What is the equivalent of np.std() in TensorFlow? Just looking for the equivalent of np.std() in TensorFlow to calculate the standard deviation of a tensor.", "response":"To get the mean and variance just use tf.nn.moments. 
``` mean, var = tf.nn.moments(x, axes=[1]) ``` For more on the tf.nn.moments params, see the docs", "best_answers_score":0.615, "library_name":"tensorflow", "question_url":"https:\/\/stackoverflow.com\/questions\/39354566\/what-is-the-equivalent-of-np-std-in-tensorflow", "best_answers_votes":41, "question_length":156, "response_length":141 }, { "question":"ImportError: cannot import name 'BatchNormalization' from 'keras.layers.normalization' I have an import problem when executing my code: ``` from keras.models import Sequential from keras.layers.normalization import BatchNormalization ``` ``` 2021-10-06 22:27:14.064885: W tensorflow\/stream_executor\/platform\/default\/dso_loader.cc:64] Could not load dynamic library 'cudart64_110.dll'; dlerror: cudart64_110.dll not found 2021-10-06 22:27:14.064974: I tensorflow\/stream_executor\/cuda\/cudart_stub.cc:29] Ignore above cudart dlerror if you do not have a GPU set up on your machine. Traceback (most recent call last): File \"C:\\Data\\breast-cancer-classification\\train_model.py\", line 10, in from cancernet.cancernet import CancerNet File \"C:\\Data\\breast-cancer-classification\\cancernet\\cancernet.py\", line 2, in from keras.layers.normalization import BatchNormalization ImportError: cannot import name 'BatchNormalization' from 'keras.layers.normalization' (C:\\Users\\Catalin\\AppData\\Local\\Programs\\Python\\Python39\\lib\\site-packages\\keras\\layers\\normalization\\__init__.py) ``` Keras version: 2.6.0 Tensorflow: 2.6.0 Python version: 3.9.7 The library is also installed with ``` pip install numpy opencv-python pillow tensorflow keras imutils scikit-learn matplotlib ``` Do you have any ideas?
library path", "response":"You should import BatchNormalization in the following way: ``` from tensorflow.keras.layers import BatchNormalization ```", "best_answers_score":0.6113, "library_name":"tensorflow", "question_url":"https:\/\/stackoverflow.com\/questions\/69471749\/importerror-cannot-import-name-batchnormalization-from-keras-layers-normaliz", "best_answers_votes":40, "question_length":1303, "response_length":117 }, { "question":"How to disable printing reports after each epoch in Keras? After each epoch I get a printout like the one below: ``` Train on 102 samples, validate on 26 samples Epoch 1\/1 Epoch 00000: val_acc did not improve 102\/102 [==============================] - 3s - loss: 0.4934 - acc: 0.8997 - val_loss: 0.4984 - val_acc: 0.9231 ``` I am not using built-in epochs, so I would like to disable these printouts and print something myself. How to do that? I am using the tensorflow backend, if it matters.", "response":"Pass verbose=0 to the fit method of your model.", "best_answers_score":0.6099, "library_name":"tensorflow", "question_url":"https:\/\/stackoverflow.com\/questions\/44931689\/how-to-disable-printing-reports-after-each-epoch-in-keras", "best_answers_votes":62, "question_length":479, "response_length":46 } ]